\begin{document}
\begin{abstract}
Estimating numerically the spectral radius of a random walk on a
nonamenable graph is complicated, since the cardinality of balls
grows exponentially fast with the radius. We propose an algorithm to
get a bound from below for this spectral radius in Cayley graphs with
finitely many cone types (including for instance hyperbolic groups).
In the genus $2$ surface group, it improves by an order of magnitude
the previous best bound, due to Bartholdi.
\end{abstract}
\keywords{spectral radius, cogrowth, numerical algorithm, surface
groups}
\maketitle
\section{Main algorithm}
Let $\Gamma$ be a countable group, generated by a finite symmetric
set $S$ of cardinality $\abs{S}$. The simple random walk
$X_0,X_1,\dotsc$ on $\Gamma$ is defined by $X_0=e$ the identity of
$\Gamma$, and $X_{n+1}=X_n s$ with probability $1/\abs{S}$ for any
$s\in S$. A crucial numerical parameter of this random walk is its
\emph{spectral radius} $\rho=\lim \mathbb{P}(X_{2n} = e)^{1/2n}$.
Equivalently, denote by $W_n$ the number of words of length $n$ in
the generators that represent $e$ in $\Gamma$, then $\mathbb{P}(X_n = e) =
W_n/\abs{S}^n$, so that $\rho = \lim W_{2n}^{1/2n} / \abs{S}$. It is
equivalent to study the spectral radius or the \emph{cogrowth} $\lim
W_{2n}^{1/2n}$.
The spectral radius is at most $1$, and $\rho = 1$ if and only if
$\Gamma$ is amenable. In the free group with $d$ generators, the
generating function $\sum W_n z^n$ can be computed explicitly (it is
algebraic), and the exact value of the spectral radius follows: $\rho
= \sqrt{2d-1}/d$. Since words that reduce to the identity in the free
group also reduce to the identity in any group with the same number
of generators, one infers that in any group $\Gamma$, $\rho \geqslant
2\sqrt{\abs{S}-1}/\abs{S}$. Moreover equality holds if and only if
the Cayley graph of $\Gamma$ is a tree~\cite{kesten}.
In general, there are no explicit formulas for $\rho$, and even
giving precise numerical estimates is a delicate question. In this
short note, we will describe an algorithm giving bounds from below on
$\rho$ in some classes of groups, particularly for the fundamental
group $\Gamma_g$ of a compact surface of genus $g\geqslant 2$, given by
its usual presentation
\begin{equation}
\label{eq:presentation}
\Gamma_g = \langle a_1,\dotsc, a_g,b_1,\dotsc, b_g \mid [a_1,b_1]\dotsm [a_g, b_g]=e\rangle.
\end{equation}
Since there are $4$ generators in $\Gamma_2$ (so that $\abs{S}=8$), the above trivial bound
obtained by comparison to the free group gives $\rho \geqslant 0.661437$.
Our main estimate is the following result.
\begin{thm}
\label{main_thm}
In the surface group $\Gamma_2$, one has $\rho \geqslant 0.662772$.
\end{thm}
This improves on the previously best known result, due to
Bartholdi~\cite{bartholdi_rho}, giving $\rho \geqslant \rho_{\mathrm{Bar}}
= 0.662421$.\footnote{Bartholdi claims that $\rho\geqslant 0.662418$, but
implementing his algorithm in multiprecision one gets in fact the
better bound $\rho\geqslant 0.662421$.} Bartholdi's method is to study a
specific class of paths from the identity to itself (called cactus
trees), for which he can compute the generating function. The radius
of convergence of this generating function is a lower bound for
$\rho$.
The best known upper bound for $\rho$ in $\Gamma_2$ is $\rho \leqslant
\rho_{\mathrm{Nag}} = 0.662816$, due to
Nagnibeda~\cite{nagnibeda_rho}. Non-rigorous numerical
estimates\footnote{
I obtained this estimate as follows: one can count exactly the number
$W_n$ of words of length $n$ representing the identity in the group,
for reasonable $n$, say up to $n=24$ -- for the record,
$W_{24}=4214946994935248$ -- giving the first values of the sequence
$p_n=\mathbb{P}(X_{2n}=e)$. We know rigorously from~\cite{gouezel_lalley}
that $p_n \sim C \rho^{2n}/n^{3/2}$ when $n\to \infty$. Define
$q_n=\log(n^{3/2} p_n)/(2n)$, it follows that $q_n\to \log \rho$. In
the free group, where $p_n$ is known very explicitly, the sequence
$q_n$ has a further expansion in powers of $1/n$. Assuming that the
same holds in the surface group, we get $q_n = \log \rho +
\sum_{k=1}^K a_k/n^k + o(1/n^K)$ where the $a_k$ are unknown. Using
the known value of $q_{24}$, this gives an estimate for $\log \rho$,
with an error of the order of $1/24$, which is very bad. However, it
is possible to accelerate the convergence of sequences having an
asymptotic expansion in powers of $1/n$: there are explicit recipes
(for instance Richardson extrapolation or Wynn's rho algorithm)
taking such a sequence, and giving a new sequence converging to the
same limit, with an expansion in powers of $1/n$, but starting at
$1/n^2$. Iterating this process, one can eliminate the first few
terms, and get a speed of convergence $O(1/n^L)$ for any $L$ (but one
needs to know enough terms of the initial sequence). Applying this
process to our sequence $q_n$, one gets the claimed estimate for
$\rho$. To make this rigorous, one would need to know that an
asymptotic expansion of $q_n$ exists, with explicit bounds on the
$a_k$ and on the $o(1/n^K)$ term. This seems completely out of reach.
} suggest that $\rho = 0.662812\dotsc$, so the upper bound is still
sharper than our lower bound, although our lower bound is an order of
magnitude better than the bound of~\cite{bartholdi_rho}: indeed,
$\rho_{\mathrm{Nag}} - \rho_{\mathrm{Bar}} \sim 4\cdot 10^{-4}$, while the lower
bound of Theorem~\ref{main_thm} satisfies
$\rho_{\mathrm{Nag}} - 0.662772 \sim 4\cdot 10^{-5}$.
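To make the extrapolation described in the footnote concrete, here is a minimal
sketch in Python (assuming, as in the footnote, that $q_n$ has an asymptotic
expansion in powers of $1/n$; it is an illustration only and plays no role in the
rigorous arguments below): fitting a polynomial in the variable $1/n$ through the
last few values and evaluating it at $1/n=0$ eliminates the first terms of the
expansion, exactly as iterated Richardson extrapolation does.
\begin{verbatim}
import numpy as np

def extrapolate(q, first_n=1, degree=3):
    # q[0] is q_{first_n}, q[1] is q_{first_n+1}, ...
    # Fit a polynomial of the given degree in x = 1/n through the last
    # degree+1 points and return its value at x = 0.
    n = np.arange(first_n, first_n + len(q))
    x = 1.0 / n[-(degree + 1):]
    y = np.asarray(q, dtype=float)[-(degree + 1):]
    return np.polyfit(x, y, degree)[-1]   # constant coefficient = value at x = 0
\end{verbatim}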
Nagnibeda's upper bound does not rely on a counting argument for
closed paths, but on another spectral interpretation of $\rho$.
Indeed, $\rho$ is also the spectral radius of the Markov operator $Q$
on $\ell^2(\Gamma)$ corresponding to the random walk, i.e., the
convolution with the probability measure $\mu$ which is uniformly
distributed on $S$ (see for instance~\cite[Corollary 10.2]{woess}).
It is also the norm of this operator, since it is symmetric.
Nagnibeda gets the above upper bound by using a lemma of Gabber about
norms of convolution operators on graphs and the precise geometry of
$\Gamma_2$.
Our approach to get Theorem~\ref{main_thm} is very similar to
Nagnibeda's. To bound from below the norm of the convolution operator
$Q$, it is sufficient to exhibit one function $u$ (which ought to be
close to a hypothetical eigenfunction for the element $\rho$ of the
spectrum of $Q$) for which $\norm{Qu}/\norm{u}$ is large. This is
exactly what we will do.
For any $\alpha<1/\rho$, the function $u_\alpha=\sum_{n=0}^\infty
\alpha^n Q^n \delta_e$ is in $\ell^2$, and
$\norm{Qu_\alpha}/\norm{u_\alpha}$ converges to $\rho$ when $\alpha$
tends to $1/\rho$. Unfortunately, $u_\alpha$ is not explicit enough. To
find estimates, one should rather find an ansatz for the function
$u$, depending on finitely many parameters, and then optimize over
these parameters.
A first strategy would be the following: take a very large ball $B_n$
in the Cayley graph, and compute the function $u$ supported in this
ball such that $\norm{Qu}/\norm{u}$ is largest. This gives a lower
bound $\rho_n$ on $\rho$, and $\rho_n$ converges to $\rho$ when $n$
tends to infinity. However, this strategy is computationally not
efficient at all: one would need to take a very large $n$ to obtain
good estimates (since most mass of $u_\alpha$ is supported close to
infinity if $\alpha$ is close to $\rho$), and the cardinality of
$B_n$ grows exponentially with $n$. On the other hand, it can be
implemented in any finitely presented group for which the word
problem is solvable (see for instance~\cite{cogrowth_thompson} for
examples in Baumslag-Solitar and Thompson groups). We will use a more
efficient method, but which requires more assumptions on the group:
it should have finitely many cone types.
To illustrate our method of construction of $u$, let us describe it
quickly in the case of the free group $\mathbb{F}_d$ with $d$ generators.
The sphere $\mathbb{S}^n$ of radius $n\geqslant 1$ has cardinality
$2d(2d-1)^{n-1}$. Fix some $\alpha<1/\sqrt{2d-1}$, and define a
function $u_\alpha$ by $u_\alpha(x) = \alpha^n$ for $x\in \mathbb{S}^n$,
$n\geqslant 1$, and $u_\alpha(e)=0$. This function belongs to $\ell^2(\mathbb{F}_d)$. We write $x\sim
y$ if $x$ and $y$ are neighbors in the Cayley graph of $\Gamma$, and
$x\to y$ if $x\sim y$ and $d(e,y)=d(e,x)+1$. Then
\begin{equation*}
\langle Qu_\alpha, u_\alpha \rangle = \frac{1}{2d} \sum_{x\sim y} u_\alpha(x)u_\alpha(y)
= \frac{1}{d} \sum_{x\to y} u_\alpha(x) u_\alpha(y)
= \frac{1}{d} \sum_{n=1}^\infty (2d-1)\alpha^{2n+1} \abs{\mathbb{S}^n},
\end{equation*}
since a point in $\mathbb{S}^n$ has $2d-1$ successors in $\mathbb{S}^{n+1}$.
Since $\langle u_\alpha, u_\alpha\rangle = \sum_{n=1}^\infty
\alpha^{2n} \abs{\mathbb{S}^n}$, we get
\begin{equation*}
\langle Qu_\alpha, u_\alpha\rangle = \alpha \frac{2d-1}{d} \langle u_\alpha, u_\alpha \rangle.
\end{equation*}
Hence, $\rho = \norm{Q} \geqslant \alpha(2d-1)/d$. Letting $\alpha$ tend
to $1/\sqrt{2d-1}$, we finally obtain $\rho\geqslant \sqrt{2d-1}/d$, which
is the true value of the spectral radius.
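This computation can be checked numerically; the following small Python snippet
(an illustration only, not part of the argument) compares the Rayleigh quotient
of the truncated sums with the value $\alpha(2d-1)/d$ obtained above.
\begin{verbatim}
# Free group F_d: u_alpha(x) = alpha^n on the sphere S^n (n >= 1), u_alpha(e) = 0.
d = 2
alpha = 0.9 / (2 * d - 1) ** 0.5      # any alpha < 1/sqrt(2d-1)
N = 200                               # truncation level for the sums
sphere = [2 * d * (2 * d - 1) ** (n - 1) for n in range(1, N + 1)]
num = sum((2 * d - 1) * alpha ** (2 * n + 1) * s
          for n, s in enumerate(sphere, start=1)) / d
den = sum(alpha ** (2 * n) * s for n, s in enumerate(sphere, start=1))
print(num / den, alpha * (2 * d - 1) / d)   # the two values coincide
\end{verbatim}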
In the free group, it is natural to take a function $u$ that is
constant on the sphere $\mathbb{S}^n$ of radius $n$, since all the points
in such a sphere are equivalent: the automorphisms of the Cayley
graph of $\mathbb{F}_d$ fixing the identity act transitively on $\mathbb{S}^n$.
In more general groups, for instance surface groups, this is not the
case. Intuitively, we would like to take a function that decays
exponentially as above, but with different values on different
equivalence classes under the automorphism group. However, this
automorphism group is finite in the case of surface groups, so
instead of true equivalence classes (which are finite), we will
consider larger classes, of points that ``locally behave in the same
way'', and we will construct functions that are constants on such
classes of points (leaving only finitely many parameters which one
can optimize using a computer).
This intuition is made precise with the notion of \emph{type} of the
elements of the group (as in~\cite{nagnibeda_rho}). Let $\Gamma$ be a
countable group generated by a finite symmetric set $S$. Assume that
there are no cycles of odd length, so that any edge can be oriented
from its endpoint closer to $e$ to the farther one. Let $\mathcal{S}(x)$ be
the set of successors of $x$, i.e., the points $y$ which are
neighbors of $x$ with $d(e,y) = d(e,x) + 1$.
\begin{defn}
\label{type_system}
Let $T$ be a finite set, let $t$ be a function from $\Gamma$ to $T$
and let $M$ be a square matrix indexed by $T$. We say that $(T,t,M)$
is a \emph{type system} for $(\Gamma,S)$ if, for all $i$ and $j$ in
$T$, for all but finitely many $x\in \Gamma$ with $t(x)=j$, one has
\begin{equation*}
\operatorname{Card}\{y\in \mathcal{S}(x) \::\: t(y) = i\} = M_{ij}.
\end{equation*}
We will often simply say that $t$ is a type system, since it
determines $T$ and $M$.
\end{defn}
In other words, if one knows the type of a point $x$, then one knows
the number of successors of each type, thanks to the matrix $M$. For
instance, in $\mathbb{F}_d$, one can use one single type, with $M_{11} =
2d-1$: every point but the identity has $2d-1$ successors.
Using a type system, we will be able to find a lower bound for the
spectral radius of the simple random walk. While the argument works
in general, it is more convenient to formulate using an additional
assumption, which is satisfied for surface groups.
\begin{defn}
A type system $(T,t,M)$ is Perron-Frobenius if the matrix $M$ is
Perron-Frobenius, i.e., some power $M^n$ has only positive entries.
\end{defn}
The algorithm to estimate the spectral radius follows.
\begin{thm}
\label{main_alg}
Let $(\Gamma,S)$ be a countable group with a finite symmetric
generating set, whose Cayley graph has no cycle of odd length. Let
$(T,t,M)$ be a Perron-Frobenius type system for $(\Gamma,S)$.
Define a new matrix $\tilde M$ by $\tilde M_{ij} = M_{ij}/p_i$, where
$p_i$ is the number of predecessors of a point of type $i$ (it is
given by $p_i = \abs{S}-\sum_j M_{ji}$). Since $\tilde M$ is
Perron-Frobenius, its dominating eigenvalue $e^v$ is simple. Let
$(A_1,\dotsc, A_k)$ be a corresponding eigenvector, with positive
entries, let $D$ be the diagonal matrix with entries $A_i$, and let
$M'=D^{-1/2}MD^{1/2}$. Define
\begin{equation}
\label{def:lambda}
\lambda=\max_{\abs{q}=1} \langle M'q,q\rangle.
\end{equation}
Then
\begin{equation}
\label{eq_main_rho}
\rho \geqslant \frac{2 e^{-v/2} \lambda}{\abs{S}}.
\end{equation}
\end{thm}
\begin{proof}
Let $s_n(i)=\operatorname{Card}\{x\in \mathbb{S}^n \::\: t(x) = i\}$. By definition of a
type system, if $n$ is large enough (say $n\geqslant n_0$),
\begin{equation*}
p_i s_{n+1}(i) = \sum_{y\in \mathbb{S}^n} \operatorname{Card}\{x\in \mathcal{S}(y) \::\: t(x) = i\}
= \sum_j M_{ij} s_n(j).
\end{equation*}
This shows that $s_{n+1}=\tilde M s_n$. Therefore, the cardinality of
$\mathbb{S}^n$ grows like $c e^{nv}$ for some $c>0$. Moreover, $s_n(i) = c'
A_i e^{nv} + O(e^{n(v-\varepsilon)})$ for some $\varepsilon>0$.
Take some parameters $b_1,\dotsc, b_k>0$ to be chosen later, and let
$\alpha< e^{-v/2}$. We define a function $u_\alpha$ by $u_\alpha(x) =
\alpha^n b_i$ if $x\in \mathbb{S}^n$ and $t(x)=i$ with $n\geqslant n_0$. For
$n<n_0$, let $u_\alpha(x) = 0$. We have when $\alpha$ tends to
$e^{-v/2}$
\begin{align*}
\langle Qu_\alpha, u_\alpha\rangle
&
=\frac{1}{\abs{S}} \sum_{x\sim y} u_\alpha(x) u_\alpha(y)
=\frac{2}{\abs{S}} \adjustlimits \sum_x \sum_{y\in \mathcal{S}(x)} u_\alpha(x) u_\alpha(y)
\\&
=\frac{2}{\abs{S}} \adjustlimits \sum_{n\geqslant n_0} \sum_{i,j} s_n(j) b_j\alpha^n M_{ij}b_i \alpha^{n+1}
= \frac{2\alpha}{\abs{S}} \adjustlimits \sum_{n\geqslant n_0} \sum_{i,j} c'A_j e^{nv} b_j\alpha^{2n} M_{ij}b_i
+O(1)
\\&
= \frac{2\alpha}{\abs{S}} \sum_{i,j} c' A_j b_j M_{ij}b_i / (1-\alpha^2 e^v)
+O(1).
\end{align*}
On the other hand,
\begin{align*}
\langle u_\alpha, u_\alpha \rangle
& = \sum_{n\geqslant n_0} \sum_i s_n(i) b_i^2 \alpha^{2n}
= \sum_{n\geqslant n_0} \sum_i c'A_i e^{nv} b_i^2 \alpha^{2n} + O(1)
\\&
= \sum_i c' A_i b_i^2 /(1-\alpha^2 e^v) + O(1).
\end{align*}
We have $\rho \geqslant \langle Qu_\alpha, u_\alpha\rangle/\langle
u_\alpha, u_\alpha \rangle$. Comparing the above two equations and
letting $\alpha$ tend to $e^{-v/2}$, we get
\begin{equation*}
\rho \geqslant \frac{2e^{-v/2}}{\abs{S}} \frac{ \sum_{i,j} A_j b_j M_{ij} b_i}{\sum_i A_i b_i^2}.
\end{equation*}
To conclude, we need to optimize in $b_i$. Writing $b_i$ as
$A_i^{-1/2}c_i$, this lower bound becomes
\begin{equation*}
\frac{2e^{-v/2}}{\abs{S}} \frac{ \sum A_j^{1/2} c_j M_{ij} A_i^{-1/2} c_i}{\sum c_i^2}.
\end{equation*}
The maximum of the last factor is the maximum on the unit sphere of
the quadratic form with matrix $M' = D^{-1/2} M D^{1/2}$. This
proves~\eqref{eq_main_rho}.
\end{proof}
\begin{rmk}
\label{rmk:sym}
It follows from the formula~\eqref{def:lambda} that $\lambda$ is the
maximum on the unit sphere of $\langle M'' q, q\rangle$, where $M''$
is the symmetric matrix $(M'+{M'}^*)/2$. Since any symmetric matrix
is diagonal in some orthogonal basis, it also follows that $\lambda$
is the maximal eigenvalue of $M''$, i.e., its spectral radius. Hence,
it is easy to compute using standard algorithms.
\end{rmk}
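For concreteness, here is a minimal sketch in Python of the resulting algorithm
(the computations of this paper were actually carried out in \verb+sage+, see the
appendix; this dense-matrix version is only meant for small type matrices, and the
names are ours):
\begin{verbatim}
import numpy as np

def lower_bound(M, S):
    # M[i][j] = number of successors of type i of a point of type j; S = |S|.
    M = np.asarray(M, dtype=float)
    p = S - M.sum(axis=0)                # p_j = |S| - sum_i M_{ij}
    Mt = M / p[:, None]                  # tilde M_{ij} = M_{ij} / p_i
    eigvals, eigvecs = np.linalg.eig(Mt)
    k = np.argmax(eigvals.real)          # Perron eigenvalue e^v of tilde M
    ev = eigvals[k].real
    A = np.abs(eigvecs[:, k].real)       # positive Perron eigenvector
    D = np.sqrt(A)
    Mp = M / D[:, None] * D[None, :]     # M' = D^{-1/2} M D^{1/2}
    lam = np.max(np.linalg.eigvalsh((Mp + Mp.T) / 2))   # Remark above
    return 2 * lam / (S * np.sqrt(ev))   # 2 e^{-v/2} lambda / |S|
\end{verbatim}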
The formula given by Theorem~\ref{main_alg} depends not only on the
geometry of the group, but also on the choice of a type system: in a
given group (with a given system of generators), there may be several
type systems, giving different estimates. We will take advantage of
this fact for surface groups in Section~\ref{sec:surface}: applying
Theorem~\ref{main_alg} with the canonical type system for surface
groups, constructed by Cannon, we obtain in~\eqref{eq:lowerrho1} an
estimate for the spectral radius which is weaker than the estimate of
Theorem~\ref{main_thm}. This stronger estimate is proved by applying
Theorem~\ref{main_alg} to a different type system, constructed as a
refinement of the canonical type system.
This dependence on the choice of a type system should be contrasted
with the upper bound of Nagnibeda in~\cite{nagnibeda_rho}. Indeed, it
is shown in~\cite{nagnibeda_geometric} that this upper bound,
computed using a type system, has a purely geometric interpretation
(it is the spectral radius of a random walk on the tree of geodesics
of the group), which does not depend on the choice of the type
system. In particular, the refined type system we use to prove
Theorem~\ref{main_thm} can not improve the upper bound of Nagnibeda.
\textbf{Acknowledgments:} We thank the anonymous referee for pointing
out the reference~\cite{nagnibeda_geometric}, and suggesting that
there might be a possible geometric interpretation to the lower bound
in Theorem~\ref{main_alg}, as in~\cite{nagnibeda_geometric}. This led
to Section~\ref{sec:geometric} below.
\section{Geometric interpretation}
\label{sec:geometric}
In this section, we describe a geometric interpretation of
Theorem~\ref{main_alg}, similar to Nagnibeda's interpretation
in~\cite{nagnibeda_geometric} of the bound she obtained
in~\cite{nagnibeda_rho}.
We first recall Nagnibeda's construction. Consider a group $\Gamma$
with a finite system of generators $S$, whose Cayley graph has no
cycle of odd length. Let $X$ be its tree of geodesics, i.e., the
graph whose vertices are the finite geodesics in $\Gamma$ originating
from the identity $e$, and where one puts an edge from a geodesic
with length $n$ to its extensions with length $n+1$. There is a
canonical projection $\pi_X$ from $X$ to $\Gamma$, taking a geodesic
to its endpoint. One can think of $X$ as obtained from $\Gamma$ by
unfolding the loops based at $e$.
Consider the random walk in $X$ whose transitions are as follows:
from $x$, one goes to any of its successors with probability
$1/\abs{S}$, and to its unique predecessor with probability
$p_x/\abs{S}$ where $p_x$ is $\abs{S}$ minus the number of successors
of $x$ (it is the number of predecessors in $\Gamma$ of the
projection $\pi_X(x)$). This random walk on $X$ does \emph{not}
project to the simple random walk on $\Gamma$, since it does not
follow loops in $\Gamma$ (the projected random walk is not Markov in
general). The transition probabilities coincide when going towards
infinity, but not when going back towards the identity. One expects
that the probability to come back to the identity is higher in $X$
than in $\Gamma$, thanks to the following heuristic: since the
process in $X$ is less random when coming back toward the identity,
once the walk is in a subset where it comes back often to the
identity, it can not escape easily from this subset, and therefore
returns even more.
To illustrate this heuristic, suppose that two points $x$ and $x'$ in
$X$ (with $p_x=p_{x'}=2$) have successors $y$ and $y'$ in $X$, and
consider a new random walk in which $y$ and $y'$ are identified (this
is what the projection $\pi_X$ does, all over the place), so that
from this new point one can either jump back to $x$ or to $x'$ with
probability $1/\abs{S}$. Let $u_n$ and $u'_n$ be the probabilities in
$X$ to be at time $n$ at $x$ and $x'$. For the sake of the argument,
we will assume some form of symmetry, i.e., $u_n$ and $u'_n$ are also
the probabilities to reach $e$ at time $n$ starting respectively from
$x$ or $x'$. In $X$, one can form paths from $e$ to itself of length
$2n+2$ by jumping to $x$ in time $n$, then to $y$, then back to $x$,
and then from $x$ to $e$. This happens with probability
\begin{equation*}
u_n\cdot \frac{1}{\abs{S}} \cdot \frac{2}{\abs{S}} \cdot u_n.
\end{equation*}
One can do the same with $x'$, giving an overall probability
$a=\frac{2}{\abs{S}^2} (u_n^2+{u'_n}^2)$. On the other hand, if $y$
and $y'$ are identified, then from this new point one can either jump
back to $x$ or to $x'$. The corresponding probability to come back to
$e$ at time $2n+2$ following such paths is therefore
$b=\frac{1}{\abs{S}^2}(u_n+u'_n)^2$. As $2(v^2+w^2) \geqslant (v+w)^2$, we
have $a\geqslant b$, i.e., the probability of returning to $e$ using
corresponding paths is bigger in $X$ than in the random walk where
$y$ and $y'$ are identified. This explains our heuristic that more
randomness in the choice of predecessors in the graph creates a
mixing effect that decreases the spectral radius.
In~\cite{nagnibeda_rho} and~\cite{nagnibeda_geometric}, Nagnibeda
justifies this heuristic rigorously as follows. Consider a group
$\Gamma$ and a generating system $S$ such that the Cayley graph of
$\Gamma$ with respect to $S$ has no cycle of odd length, and finitely
many cone types. By applying a spectral lemma of Gabber, she gets an
upper bound $\rho$ (given by a minimax formula, complicated to
estimate in general) for the spectral radius $\rho_\Gamma$ of the
simple random walk on $(\Gamma,S)$. Since the tree of geodesics $X$
also has finitely many cone types, she is able to compute exactly the
spectral radius $\rho_X$ of the random walk in $X$. It turns out that
this is exactly $\rho$. Hence, $\rho_\Gamma \leqslant \rho_X$, the
interest of this formula being that $\rho_X$ can be easily computed
(it is algebraic as the tree $X$ has finitely many cone types,
see~\cite{nagnibeda_tree_cone_types}). Note however that this bound
does not come from a direct argument using the projection $\pi_X:X\to
\Gamma$, but rather from two separate computations in $X$ and in
$\Gamma$.
We now turn to a similar geometric interpretation of the lower bound
given in Theorem~\ref{main_alg}. We are looking for a natural random
walk, related to the original random walk on $\Gamma$, whose spectral
radius can be computed exactly and coincides with the lower bound
given in~\eqref{eq_main_rho}. Following the above heuristic, this
random walk should have more randomness than the original random walk
regarding the choice of predecessors, to decrease the probability to
return to the origin.
We use the setting and notations of Theorem~\ref{main_alg}. In
particular, $\Gamma$ is a group with a type system $(T,t,M)$, and
$(A_1,\dotsc, A_k)$ and $\tilde M$ are defined in the statement of
this theorem. We define a random walk as follows: It is a walk on the
space $Y = \mathbb{Z} \times T$ (where $T$ is the space of types), whose
transition probabilities are given by:
\begin{equation}
\label{eq:trans_Y}
p( (n,j)\to (n+1,i)) = M_{ij}/\abs{S}, \quad p((n,j)\to (n-1,i))=e^{-v} \frac{A_i M_{ji}}{A_j \abs{S}}.
\end{equation}
The spectral radius of this random walk is by definition $\rho_Y=\lim
\mathbb{P}(X_{2n} = X_0)^{1/2n}$, for the walk $(X_n)$ on $Y$ (this is \emph{not} a spectral definition).
Note that this random walk admits a quasi-transitive $\mathbb{Z}$-action
(i.e., $Y$ is endowed with a free action of $\mathbb{Z}$, with finite
quotient, and the transition probabilities are invariant under $\mathbb{Z}$).
Such random walks are well studied, see for instance~\cite[Section
8.B]{woess}.
\begin{thm}
\label{thm:geometric}
With the notations of Theorem~\ref{main_alg}, the random walk on $Y$
has spectral radius $\rho_Y=2e^{-v/2} \lambda / \abs{S}$.
\end{thm}
Hence, the result of Theorem~\ref{main_alg} reads $\rho_\Gamma \geqslant
\rho_Y$, and $\rho_Y$ is easy to compute.
Let us first explain why this random walk is natural, and related to
the random walk on $\Gamma$. Starting from a point $x\in \Gamma$, of
type $j$ and length $n$, the original random walk goes to any of its
successors with probability $1/\abs{S}$. In particular, it reaches
points of type $i$ and length $n+1$ with probability
$M_{ij}/\abs{S}$, just like the probability given
in~\eqref{eq:trans_Y}. On the other hand, it goes to any of its
predecessors with probability $1/\abs{S}$, but the types of these
predecessors depend on $x$, not only on $j$. A random walk which is
simpler to estimate may be constructed by randomizing the
predecessors: from $x$, one chooses to go to any point of length
$n-1$ and type $i$, provided that there is an edge from type $i$ to
type $j$, i.e., $M_{ji}>0$. The probability to go from $x$ to such
points should be given by $1/\abs{S}$ times the average number of
predecessors of type $i$ of a point of type $j$. Writing
$s_n(i)=\operatorname{Card}\{x\in \mathbb{S}^n \::\: t(x) = i\}$, this average number is
\begin{equation}
\label{eq:rapport_limite}
\frac{\sum_{\abs{x}=n, t(x)=j} \sum_{\abs{y}=n-1, t(y)=i} 1(x\in \mathcal{S}(y))}{\sum_{\abs{x}=n, t(x)=j} 1}
=\frac{ s_{n-1}(i)M_{ji}}{s_n(j)}.
\end{equation}
As $s_n(i)\sim c' A_i e^{nv}$ for some $c'>0$ (see the proof of
Theorem~\ref{main_alg}), the quantity in~\eqref{eq:rapport_limite}
converges when $n\to\infty$ to $e^{-v} A_i M_{ji}/A_j$; dividing by
$\abs{S}$ gives the transition probability~\eqref{eq:trans_Y} in the
limit. Note that, in
this randomized random walk, all the points of the same length and
the same type are equivalent. Hence, we may identify them, to get a
smaller space and a simpler random walk. This is precisely our random
walk on $Y$.
Thus, the random walk on $Y$ is obtained by starting from the random
walk on $\Gamma$, randomizing the choice of predecessors, going in
the asymptotic regime $n\to \infty$, and identifying the points on
the sphere that are equivalent. One can define a projection map
$\pi_Y: \Gamma \to Y$ by $\pi_Y(x) = (\abs{x}, t(x))$, under which
the two random walks correspond in a loose sense (going towards
infinity, the transition probabilities are the same, but coming back
towards the identity they differ, just like for the projection
$\pi_X$ in Nagnibeda's construction, with more randomness in $Y$ than
in $\Gamma$).
\begin{proof}[Proof of Theorem~\ref{thm:geometric}]
We define two matrices $P^+$ and $P^-$ giving the transition
probabilities of the walk on $Y$ respectively to the right and to the
left, i.e.,
\begin{equation*}
P^+_{ij} = M_{ij}/\abs{S},\quad P^-_{ij}=e^{-v} \frac{A_i M_{ji}}{A_j \abs{S}}.
\end{equation*}
In other words, $P^+=M/\abs{S}$ and $P^-=e^{-v} D M^*
D^{-1}/\abs{S}$, where $D$ is the diagonal matrix with entries $A_i$
and $M^*$ is the transpose of $M$.
Although this is clear from the geometric construction, let us first
check algebraically that the transition probabilities
in~\eqref{eq:trans_Y} indeed define probabilities, i.e., $\sum_i
(P^+_{ij} + P^-_{ij})=1$ for all $j$. Let $p_j$ be the number of
predecessors of a point of type $j$ in $\Gamma$. By definition of
$A$, the matrix $\tilde M_{ji}=M_{ji}/p_j$ satisfies $\tilde M A =
e^v A$. Hence,
\begin{equation*}
\sum_i M_{ji} A_i = p_j \sum_i \tilde M_{ji} A_i = p_j e^v A_j.
\end{equation*}
Therefore,
\begin{equation*}
\sum_i M_{ij} + \sum_i e^{-v} \frac{A_i M_{ji}}{A_j}
= \abs{S}-p_j + e^{-v} \frac{p_j e^v A_j}{A_j} = \abs{S},
\end{equation*}
proving that~\eqref{eq:trans_Y} defines transition probabilities.
While one can give a pedestrian proof of the equality
$\rho_Y=2e^{-v/2} \lambda / \abs{S}$, it is more efficient to use
available results from the literature. Define a function $\varphi$ on $\mathbb{R}$
by $\varphi(c)=\rho(e^c P^+ + e^{-c} P^-)$ (where this quantity is the
spectral radius of a bona fide finite dimensional matrix). It is
proved in~\cite[Proposition 8.20 and Theorem 8.23]{woess} that $\varphi$
is convex, that it tends to infinity at $\pm\infty$, and that its
minimum is precisely $\rho_Y$. Since the spectral radius is invariant
under transposition and conjugation, we have
\begin{align*}
\abs{S}\varphi(c) & = \rho( e^c M + e^{-c} e^{-v}DM^* D^{-1})
=\rho(e^c M^* + e^{-c-v} D^{-1}M D)
\\&
=\rho(e^c DM^* D^{-1}+ e^{-c-v} M)
=\abs{S} \varphi(-c-v).
\end{align*}
Hence, the function $c\mapsto \varphi(c)$ is symmetric around $c=-v/2$.
As it is convex, it attains its minimum at $c=-v/2$. Therefore,
\begin{equation*}
\rho_Y = \varphi(-v/2)
=\frac{e^{-v/2}}{\abs{S}}\rho(M +D M^* D^{-1})
=\frac{e^{-v/2}}{\abs{S}}\rho(D^{-1/2}MD^{1/2} +D^{1/2} M^*D^{-1/2}).
\end{equation*}
By Remark~\ref{rmk:sym}, the last term is equal to $2\lambda$.
\end{proof}
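The following short Python sketch (an illustration only, with our own naming
conventions) builds $P^+$ and $P^-$ from a type matrix, checks that their column
sums are $1$, and verifies numerically that $\varphi$ is minimized at $c=-v/2$:
\begin{verbatim}
import numpy as np

def rho_Y_check(M, S):
    M = np.asarray(M, dtype=float)
    p = S - M.sum(axis=0)
    Mt = M / p[:, None]
    w, V = np.linalg.eig(Mt)
    k = np.argmax(w.real)
    ev, A = w[k].real, np.abs(V[:, k].real)          # e^v and Perron vector
    Pp = M / S                                       # P+_{ij} = M_{ij}/|S|
    Pm = (A[:, None] * M.T / A[None, :]) / (S * ev)  # e^{-v} A_i M_{ji}/(A_j |S|)
    assert np.allclose((Pp + Pm).sum(axis=0), 1.0)   # column-stochastic
    phi = lambda c: max(abs(np.linalg.eigvals(np.exp(c) * Pp + np.exp(-c) * Pm)))
    cs = np.linspace(-np.log(ev), 0.0, 201)          # grid containing c = -v/2
    return min(phi(c) for c in cs), phi(-np.log(ev) / 2)
\end{verbatim}
The two returned values agree (up to the grid resolution) and coincide with
$2e^{-v/2}\lambda/\abs{S}$, the quantity appearing in Theorem~\ref{main_alg} and
Theorem~\ref{thm:geometric}.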
\section{Application to surface groups}
\label{sec:surface}
\subsection{Cannon's types}
\begin{figure}
\caption{Two examples of geodesics from $e$ to $x$}
\label{fig:geod}
\end{figure}
Consider a countable group with the word distance coming from a
finite generating set $S$. The \emph{cone} of a point $x$ is the set
of points $y$ for which there is a geodesic from $e$ to $y$ going
through $x$. The \emph{cone type} of $x$ is the set $\{x^{-1}y\}$,
for $y$ in the cone of $x$. Note that knowing the cone type of a
point determines the number of its successors, and the number of its
successors having any given cone type. Cannon proved that, in any
hyperbolic group, there are finitely many cone types. Therefore, such
a group admits a type system in the sense of
Definition~\ref{type_system}. This is in particular the case of the
surface groups $\Gamma_g$. However, the number of cone types is too
large, and it is more convenient for practical purposes to reduce
them using symmetries. We obtain Cannon's canonical types for the
surface groups, described in~\cite{cannon_unpublished}
or~\cite{floyd_plotnick} as follows.
The hyperbolic plane can be tessellated by regular $4g$-gons, with
$4g$ of them around each vertex. The Cayley graph of $\Gamma_g$ (with
its usual presentation~\eqref{eq:presentation}) is dual to this
(self-dual) tessellation, and is therefore isomorphic to it. Define
the type of a point $x\in \Gamma_g$ as the maximal length along the
last $4g$-gon of a geodesic starting from $e$ and ending at $x$.
Beware that one really has to take the maximum: for instance, in
Figure~\ref{fig:geod}, the thick geodesic from $e$ to $x$ shares only
one edge with the last octagon, while the wiggly one shares two
edges. Hence, the type of $x$ is $2$.
The type can also be described combinatorially as follows: write
$x=c_1\dotsm c_{\lgth{x}}$ as a product of minimal length in the
generators $a_1,\dotsc, b_g$, look at the length $n$ of its longest
common suffix with a fundamental relator (i.e., a cyclic permutation
of the basic relation $[a_1, b_1]\dotsm [a_g, b_g]$
in~\eqref{eq:presentation} or its inverse: $c_{\lgth{x}-n+1}\dotsm
c_{\lgth{x}}$ should be a subword of the basic relation or its
inverse, up to cyclic permutation), and take the maximum of all such
$n$ over all ways to write $x=c_1\dotsm c_{\lgth{x}}$. It is obvious
that the geometric and combinatorial descriptions are equivalent; we
will mostly rely on the geometric one.
The type of a group element $x$ can be at most $2g$ (otherwise,
taking the same path but going the other way around the last
$4g$-gon, one would get a strictly shorter path, contradicting the
fact that the initial path is geodesic), and it is $0$ only for the
identity. Points $x$ of type $i < 2g$ have only one predecessor, and
$4g-1$ successors. Among them, $2$ are followers of $x$ on the
$4g$-gon to its left and to its right, while the other ones
correspond to newly created $4g$-gons (whose closest point to $e$ is
$x$). It follows that those points have $4g-3$ successors of type
$1$, one of type $2$ and one of type $i+1$. Points of type $2g$ are
special, since they have two predecessors (one can reach them with a
geodesic either from the left or from the right of a single
$4g$-gon). They have $2$ successors of type $2$, corresponding to the
extremal outgoing edges of $x$ (they extend the two $4g$-gons
adjacent to both incoming edges to $x$), and the $4g-4$ remaining
successors are on newly created $4g$-gons, and are of type $1$. See
Figure~\ref{fig:types} for an illustration in genus $2$ (of course,
additional octagons should be drawn around all outgoing edges, but
since this is notoriously difficult to do in a Euclidean drawing, we
have to rely on the reader's imagination).
\begin{figure}
\caption{Types of the points in (part of) the Cayley graph of $\Gamma_2$}
\label{fig:types}
\end{figure}
Keeping only the types from $1$ to $2g$ (since type $0$ only happens
for the identity, while Definition~\ref{type_system} allows us to
discard finitely many points), we obtain a type system for $\Gamma_g$
with $T=\{1,\dotsc, 2g\}$, where the matrix $M$ has been described in
the previous paragraph. For instance, in genus $2$,
\begin{equation*}
M = \left(\begin{matrix}
5 & 5 & 5 & 4 \\
2 & 1 & 1 & 2 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{matrix}\right).
\end{equation*}
One can now apply the algorithm of Theorem~\ref{main_alg} to this
matrix to bound the spectral radius of the simple random walk from
below. All points but points of type $4$ have $1$ predecessor, so the
matrix $\tilde M$ is
\begin{equation*}
\tilde M = \left(\begin{matrix}
5 & 5 & 5 & 4 \\
2 & 1 & 1 & 2 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1/2 & 0
\end{matrix}\right).
\end{equation*}
The dominating eigenvalue of this matrix is $e^v = 6.979835\dotsc$
(this is also the growth of the group), while the corresponding
eigenvector is
\begin{equation*}
A = (0.715987\dotsc, 0.246211\dotsc, 0.035274\dotsc, 0.002526\dotsc).
\end{equation*}
One gets that the matrix $M''$ of Remark~\ref{rmk:sym} is
\begin{equation*}
M''
= \left(\begin{matrix}
5 & 3.171316\dotsc & 0.554905\dotsc & 0.118814\dotsc \\
3.171316\dotsc & 1 & 1.510223\dotsc & 0.101307\dotsc \\
0.554905\dotsc & 1.510223\dotsc & 0 & 1.868132\dotsc \\
0.118814\dotsc & 0.101307\dotsc & 1.868132\dotsc & 0
\end{matrix}\right),
\end{equation*}
with dominating eigenvalue $\lambda = 7.000902\dotsc$. Finally,
\begin{equation}
\label{eq:lowerrho1}
\rho \geqslant \frac{2 e^{-v/2} \lambda}{8} = 0.662477\dotsc.
\end{equation}
This is already slightly better than Bartholdi's estimate $\rho \geqslant
0.662421$, but much weaker than the estimate $\rho\geqslant 0.662772$ that
we claimed in Theorem~\ref{main_thm} (to be compared with the
``true'' value $\rho \sim 0.662812$).
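As a sanity check, the sketch \verb+lower_bound+ given after
Remark~\ref{rmk:sym} reproduces the bound~\eqref{eq:lowerrho1} from the Cannon
matrix above:
\begin{verbatim}
M2 = [[5, 5, 5, 4],
      [2, 1, 1, 2],
      [0, 1, 0, 0],
      [0, 0, 1, 0]]
print(lower_bound(M2, 8))   # approximately 0.662477
\end{verbatim}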
In the next sections, we will explain how to get better estimates by
using different type systems, that distinguish between more points
(but, of course, give rise to larger matrices $M$ and therefore to
more computer-intensive computations).
\begin{rmk}
Using cone types instead of Cannon's canonical types does not give
rise to better estimates for $\rho$ (although the number of types is
much larger). Indeed, if some type system $t'$ is obtained from some
type system $t$ by quotienting by some symmetries of $t$, then the
dominating eigenvector of $\tilde M(t)$, being unique, is invariant
by those symmetries and reduces to the dominating eigenvector of
$\tilde M(t')$. It follows that the dominating eigenvector of
$M''(t)$ is also invariant by those symmetries, and that the
dominating eigenvalue of $M''(t)$ is the same as that of $M''(t')$.
Hence, the estimates on $\rho$ given by Theorem~\ref{main_thm} for
$t$ and $t'$ are the same.
\end{rmk}
\subsection{Suffix types}
\label{subsec_suffix}
There are many ways to define new type systems in surface groups,
that separate more points. If a type system is finer than another
one, then the estimate on the spectral radius coming from
Theorem~\ref{main_alg} is better, but the matrix involved in the
computation is larger. To get manageable estimates, we should find
the right balance.
In this paragraph, we describe a very simple extension of Cannon's
canonical type systems in surface groups, that we call suffix types.
Given a point $x\in \Gamma_g$, there can be several geodesics from
$e$ to $x$. Consider the longest ending that is common to all these
geodesics, say $x_{n-k+1},\dotsc, x_n$ (with $x_n=x$), and define the
suffix type of $x$ to be
\begin{equation*}
t_{\mathrm{suff}}(x)=(t(x_n), t(x_{n-1}),\dotsc, t(x_{n-k+1})),
\end{equation*}
where $t$ is the canonical type of Cannon.
For any $x$, $t_{\mathrm{suff}}(x)$ is easy to compute inductively:
\begin{itemize}
\item If $t(x)=0$, i.e., $x=e$, then $t_{\mathrm{suff}}(x)=(0)$.
\item If $x$ is of type $2g$, it has two predecessors, so the
common ending to all geodesics ending at $x$ is simply $x$,
and $t_{\mathrm{suff}}(x)=(2g)$.
\item If $t(x) \in \{1,\dotsc, 2g-1\}$, then $x$ has a unique
predecessor $z$. The common ending to all geodesics ending at
$x$ is the common ending to all geodesics ending at $z$,
followed with $x$. Hence, $t_{\mathrm{suff}}(x) = (t(x),
t_{\mathrm{suff}}(z))$.
\end{itemize}
It also follows from this description that, if one knows
$t_{\mathrm{suff}}(x)$, it is easy to determine $t_{\mathrm{suff}}(y)$ for any
successor $y$ of $x$: if $t(y)=2g$, then $t_{\mathrm{suff}}(y)=(2g)$,
otherwise $x$ is the only predecessor of $y$ and $t_{\mathrm{suff}}(y)=(t(y),
t_{\mathrm{suff}}(x))$.
We have shown that $t_{\mathrm{suff}}$ shares most properties of type systems
as described in Definition~\ref{type_system}, except that it does not
take its values in a finite set. To ensure this additional property,
one should truncate the suffix type. For instance, one can fix some
maximal length $k$, and define the $k$-truncated suffix type
$t_{\mathrm{suff}}^{(k)}(x)$ by keeping only the first $k$ elements of
$t_{\mathrm{suff}}(x)$ if its length is $>k$.
The following proposition is obvious from the previous discussion.
\begin{prop}
For any $k\geqslant 1$, the $k$-truncated suffix type system
$t_{\mathrm{suff}}^{(k)}$ is a (Perron-Frobenius) type system in the sense of
Definition~\ref{type_system}.
\end{prop}
The matrix size increases with $k$, but the estimates on the spectral
radius following from Theorem~\ref{main_alg} get better. For
instance, in $\Gamma_2$, for $k=5$, the matrix size is $148$, and we
get $\rho \geqslant 0.662694$.
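The enumeration of the truncated suffix types and of their transition counts is
elementary; here is a short Python sketch (our own illustration, not the
\verb+sage+ program of the appendix). As a simplification it starts the
breadth-first search from the single type $(2g)$: truncated types containing the
letter $0$ only occur within distance $k$ of the identity, and
Definition~\ref{type_system} allows us to discard finitely many points.
\begin{verbatim}
from collections import deque

def suffix_type_matrix(M0, g, k):
    # M0[c-1][j-1] = number of successors of Cannon type c of a point of type j.
    twog = 2 * g
    types = {(twog,): 0}           # truncated suffix type -> index
    trans = {}                     # (successor type index, type index) -> count
    queue = deque([(twog,)])
    while queue:
        t = queue.popleft()
        j = t[0]                   # Cannon type of the point itself
        for c in range(1, twog + 1):
            count = M0[c - 1][j - 1]
            if count == 0:
                continue
            s = (twog,) if c == twog else ((c,) + t)[:k]   # successor rule
            if s not in types:
                types[s] = len(types)
                queue.append(s)
            trans[(types[s], types[t])] = count
    return types, trans
\end{verbatim}
With the genus $2$ Cannon matrix and $k=5$, this enumeration recovers the $148$
types mentioned above; the weighted truncations described next are obtained by
replacing the length criterion \verb+[:k]+ with the corresponding weight
criterion.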
A drawback of this truncation process is that it truncates uniformly,
independently of how likely the type is, while it should be more
efficient to extend mostly those types that are more likely to
occur. This intuition leads to another truncation process: fix a
system of weights $w=(w_0,\dotsc, w_{2g}) \in [0,+\infty)^{2g+1}$, a
threshold $k$, and truncate a suffix type $(t_0,t_1,\dotsc)$ at the
smallest $n$ such that $w_{t_0}+\dotsb+w_{t_n} > k$. This gives another type
system denoted by $t_{\mathrm{suff}}^{(k,w)}$ ($t_{\mathrm{suff}}^{(k)}$ corresponds to
the weights $w=(1,\dotsc, 1)$ and the threshold $k-1$). Define for
instance a weight system $\bar w$ by $\bar w_0=1$ and $\bar w_i=i$
for $i\geqslant 1$: the corresponding type system $t_{\mathrm{suff}}^{(k, \bar w)}$
truncates more quickly the suffix types involving a lot of large
types, that happen less often in the group. Hence, it should give a
smaller matrix than the naive truncation only according to length,
while retaining a comparatively good estimate for the spectral
radius.
This intuition is correct: for instance, in $\Gamma_2$, using
$t_{\mathrm{suff}}^{(k, \bar w)}$ with $k=6$, one gets a matrix with size
$109$ and an estimate $\rho\geqslant 0.662697$: the matrix is smaller than
for the naive truncation $t_{\mathrm{suff}}^{(5)}$, while the estimate on the
spectral radius is better.
We can now push the computations to a larger matrix size: using in
$\Gamma_2$ the weight $\bar w$ and the truncation threshold $k=25$,
one obtains a type system where the matrix is of size $2,774,629$,
and the following estimate on the spectral radius.
\begin{prop}
In $\Gamma_2$, one has $\rho \geqslant 0.662757$.
\end{prop}
This is definitely better than~\eqref{eq:lowerrho1}, but not yet as
good as Theorem~\ref{main_thm}.
A few comments on the practical implementation are in order. There are three main
steps in the algorithm of Theorem~\ref{main_alg}:
\begin{enumerate}
\item Compute the matrix $M$ corresponding to the type system.
\item Find the eigenvector $A$, to define the matrix $M'$.
\item Find the maximal expansion rate of $M'$.
\end{enumerate}
Computing the matrix of the type system is a matter of simple
combinatorics: we explained above all the transitions from one suffix
type to the next ones. The resulting matrix $M$ is very sparse: each
type has at most $2g$ successors. However, it is extremely large, so
that finding the eigenvector $A$ and then the maximal expansion rate
of $M'$ might seem computationally expensive. This is not the case,
as we explain now.
Let $A^{(0)}$ be the eigenvector for the original Cannon type, so
that $\operatorname{Card}\{x\in \mathbb{S}^n \::\: t(x)=i\} \sim A^{(0)}_i e^{nv}$. Let
also $M^{(0)}$ be the matrix for the original Cannon type. Given a
new type $\bar i = (i_0,\dotsc, i_m)$, the entry $A_{\bar i}$ of the
eigenvector $A$ for the new type $t_{\mathrm{suff}}^{(k,w)}$ is such that
$\operatorname{Card}\{x\in \mathbb{S}^n \::\: t_{\mathrm{suff}}^{(k,w)}(x)=\bar i\} \sim A_{\bar i}
e^{nv}$. Such a point $x$ can be obtained uniquely by starting from a
point $y\in \mathbb{S}^{n-m}$ with type $i_m$, and then taking successors
respectively of type $i_{m-1},\dotsc, i_0$. Hence,
\begin{multline}
\label{eq:bijection}
\operatorname{Card}\{x\in \mathbb{S}^n \::\: t_{\mathrm{suff}}^{(k,w)}(x)=(i_0,\dotsc, i_m)\}
\\ = M^{(0)}_{i_0 i_1}\dotsm M^{(0)}_{i_{m-1}i_m}\operatorname{Card}\{y\in \mathbb{S}^{n-m} \::\: t(y)=i_m\}.
\end{multline}
It follows that the new eigenvector is given by
\begin{equation*}
A_{\bar i} = M^{(0)}_{i_0 i_1}\dotsm M^{(0)}_{i_{m-1}i_m} A^{(0)}_{i_m} e^{-mv}.
\end{equation*}
This shows that $A$ is very easy to compute.
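In code this is a one-line product; the following Python sketch (again an
illustration with our own names, the types being $1$-based but stored $0$-based)
computes $A_{\bar i}$ from the Cannon data $(M^{(0)}, A^{(0)}, v)$:
\begin{verbatim}
import math

def eigenvector_entry(bar_i, M0, A0, v):
    # bar_i = (i_0, ..., i_m): a truncated suffix type.
    m = len(bar_i) - 1
    prod = 1
    for a, b in zip(bar_i, bar_i[1:]):     # pairs (i_l, i_{l+1})
        prod *= M0[a - 1][b - 1]
    return prod * A0[bar_i[-1] - 1] * math.exp(-m * v)
\end{verbatim}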
By Remark~\ref{rmk:sym}, to determine the maximal expansion rate
$\lambda$ of $M'$, it suffices to find the maximal eigenvalue of
$M''=(M'+{M'}^*)/2$. This matrix is real, symmetric, with nonnegative
coefficients, and it is Perron-Frobenius (i.e., it has one single
maximal eigenvalue). It follows that, for any vector $v$ with
positive coefficients, $\lambda = \lim \norm{{M''}^n
v}/\norm{{M''}^{n-1}v}$ (and moreover this sequence is
non-decreasing, see for instance~\cite[Corollary 10.2]{woess}).
Hence, one can readily estimate $\lambda$ from below, by starting
from a fixed vector $v$ and iterating $M''$. Again, there is no issue
of instability or complexity.
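A sparse power iteration along these lines could look as follows (a sketch, not
the author's program; it assumes \verb+scipy+ is available for the sparse matrix
format):
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def power_lambda(Mpp, iters=200):
    # Mpp is the symmetric nonnegative matrix M''; each quotient below is
    # ||M''^n v|| / ||M''^{n-1} v||, a nondecreasing sequence of lower bounds.
    Mpp = csr_matrix(Mpp)
    v = np.ones(Mpp.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = Mpp @ v
        lam = np.linalg.norm(w) / np.linalg.norm(v)
        v = w / np.linalg.norm(w)      # renormalize to avoid overflow
    return lam
\end{verbatim}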
\subsection{Essential types}
To improve the suffix types, to separate even more points, one can
for instance replace the canonical Cannon types with the true cone
types. However, the matrix size increases so quickly that this is not
usable in practice. Moreover, this does not solve the main problem of
suffix types: they do not separate at all points with Cannon type
$2g$, although such points are clearly not always equivalent. In this
paragraph, we introduce a new type system that can separate such
points, that we call the \emph{essential type}.
The basic idea (that will not work directly) is to memorize not only
the common ending of all geodesics ending at a point $x$, but all the
parts that are common to such geodesics: i.e., the sequence
$F_{\mathrm{ess}}(x)=(x_0=e, x_1,\dotsc, x_n=x)$ (with $n=\lgth{x}$)
where $x_i=*$ if there are two geodesics from $e$ to $x$ that differ
at position $i$, and $x_i$ is the point that is common to all those
geodesics at position $i$ otherwise. We then associate to $x$ the
sequence $t_{\mathrm{ess}}(x)=(t(x_n), t(x_{n-1}),\dotsc, t(x_0))$ where
$t(x_i)$ is the Cannon type of $x_i$ if $x_i\not=*$, and $t(*)=*$.
\begin{figure}
\caption{The points $x$ and $x'$ have the same essential type, contrary to $y$ and $y'$.}
\label{fig:tessentialnotunique}
\end{figure}
The problem with this notion is that $t_{\mathrm{ess}}(x)$ does not
determine $t_{\mathrm{ess}}(y)$ for $y$ a successor of $x$: in
Figure~\ref{fig:tessentialnotunique}, the points $x$ and $x'$ have
the same essential type $(3,2,1,3,2,1,0)$, while their successors $y$
and $y'$ have respective essential types $(4,*,*,*,3,2,1,0)$ and
$(4,*,*,*,*,*,*,0)$ (this follows from the fact that the thick paths
and wiggly paths are geodesics). This shows that, as we have defined
it, $t_{\mathrm{ess}}$ can not be used to define a type system.
This problem can be solved if we do not use directly the Cannon types
in the definition of $t_{\mathrm{ess}}$, but a slightly refined notion,
the Cannon modified type, taking values in $\{1,\dotsc, 2g, 1',
2'\}$. The modified type of a point is the same as its Cannon type,
except for some points of Cannon types $1$ and $2$, that have
modified types $1'$ and $2'$ respectively: considering any point $y$
of type $2g-1$, it has a unique successor $z$ of type $2g$, and a
unique successor $x$ of type $1$ that is on the same $4g$-gon as $z$.
We say that $x$ is of modified type $1'$. Moreover, $x$ has a unique
successor of type $2$ that is also on the same $4g$-gon as $z$, we
say that it is of modified type $2'$. See
Figure~\ref{fig:types_prime} for an example in genus $2$. By
definition, the modified Cannon type $t'$ is also a type system. The
transition matrix is the same as for the usual Cannon type, except
that
\begin{itemize}
\item A point of type $2g-1$ has one successor of type $1'$, one
successor of type $2g$, one successor of type $2$, and $4g-4$
successors of type $1$.
\item A point of type $1'$ has one successor of type $2$, one
successor of type $2'$, and $4g-3$ successors of type $1$.
\item A point of type $2'$ has one successor of type $2$, one
successor of type $3$, and $4g-3$ successors of type $1$.
\end{itemize}
\begin{figure}
\caption{Modified Cannon types of the points in (part of) the Cayley graph of $\Gamma_2$}
\label{fig:types_prime}
\end{figure}
We define $t_{\mathrm{ess}}(x)=(t'(x_n),\dotsc, t'(x_0))$ where
$(x_0,\dotsc, x_n)=F_{\mathrm{ess}}(x)$ and $t'(*)=*$.
\begin{prop}
\label{prop:tessential_type}
The essential type $t_{\mathrm{ess}}(x)$ of a point $x$ determines the
essential type of its successors.
\end{prop}
\begin{proof}
We argue by induction on $\lgth{x}=d(x,e)$.
Consider a point $x$, and one of its successors $y$. If the type of
$y$ is not $2g$, then $x$ is the unique predecessor of $y$, and
$t_{\mathrm{ess}}(y)=(t'(y), t_{\mathrm{ess}}(x))$. Assume now that $t'(y)=2g$
(so that $t'(x)=2g-1$). Let $z_{2g-1}=x$, and define inductively
$z_i$ as the unique predecessor of $z_{i+1}$ for $i\geqslant 1$, so that
$F_{\mathrm{ess}}(x)=(e,\dotsc, z_1,z_2,\dotsc, z_{2g-1}=x)$. Those points are on a
common $4g$-gon $R$, and $t(z_i)=i$ for $i\geqslant 2$, while $t(z_1)$ can
be anything. In the same way, let $\tilde z_{2g-1},\dotsc, \tilde
z_1$ be the successive preimages of $y$ going around $R$ in the other
direction. They also satisfy $t(\tilde z_i)=i$ for $i\geqslant 2$.
\begin{figure}
\caption{Determining the essential type of $y$ from that of $x$}
\label{fig:tessentialunique}
\end{figure}
If $t(z_1)\not =2g$, then $z_1$ has a unique preimage $z_0$, which
also belongs to $R$. Moreover, $z_0$ is the unique closest point to
$e$ on $R$. The path $P=(z_0,z_1,\dotsc, z_ {2g-1}, y)$ is a geodesic
path going around $R$ in one direction, and $\tilde P=(z_0, \tilde
z_1,\dotsc, \tilde z_{2g-1}, y)$ is also geodesic and goes around $R$
in the other direction. If, along $\tilde P$, all points different
from $z_0,y$ have type $<2g$ (so that they have a unique preimage),
then any geodesic from $e$ to $y$ has to follow either $P$ or $\tilde
P$, so that $t_{\mathrm{ess}}(y) = (2g, *,\dotsc,*, t_{\mathrm{ess}}(z_0))$,
with $2g-1$ ambiguous points. See the first part of
Figure~\ref{fig:tessentialunique} for an illustration.
The only way to have a point of type $2g$ along $\tilde P$ is if
$t(z_0)=2g-1$ and $t(\tilde z_1)=2g$, since $t(\tilde z_i)=i$ for
$i\geqslant 2$ (see the second part of Figure~\ref{fig:tessentialunique}).
By definition of the modified type, this happens exactly when
$t'(z_1)=1'$ and $t'(z_2)=2'$. In this case, a geodesic from $e$ to
$y$ can either go through $z_0$ and then follow $P$, or go through
$\tilde z_1$ and then follow $(\tilde z_2,\dotsc, \tilde z_{2g-1},
y)$. In the first case, if one truncates the geodesic when it reaches
$z_0$ and then adds the edge $[z_0\tilde z_1]$, one gets a geodesic
from $e$ to $\tilde z_1$. It follows that the essential type of $y$
is $(2g, *,\dotsc, *, \widehat{t}_{\mathrm{ess}}(\tilde z_1))$, where
$\widehat{t}_{\mathrm{ess}}(\tilde z_1)$ is the essential type of $\tilde
z_1$ minus its first entry (i.e., the type $2g$ of $\tilde z_1$), and
where there are $2g-1$ stars. Since $\tilde z_1$ is a successor of
$z_0$, the induction hypothesis ensures that its essential type can
be determined from that of $z_0$, which is given by that of $x$. When
$t(z_1)\not=2g$, we have shown that in all situations
$t_{\mathrm{ess}}(x)$ determines $t_{\mathrm{ess}}(y)$ in an algorithmic way.
Assume now that $t(z_1)=2g$ (see the third part of
Figure~\ref{fig:tessentialunique}). This case is very similar to the
previous one. To reach $y$, one has to reach $z_1$ or its preimage on
$R$, and then reach $y$ by going around $R$ in one direction or the
other. It follows that $t_{\mathrm{ess}}(y) = (2g, *,\dotsc, *,
\widehat{t}_{\mathrm{ess}}(z_1))$ as above. Since $t_{\mathrm{ess}}(z_1)$ is
obtained by removing the last $2g-2$ entries of $t_{\mathrm{ess}}(x)$, it
follows again that $t_{\mathrm{ess}}(x)$ determines $t_{\mathrm{ess}}(y)$.
\end{proof}
For $k>0$, let $t_{\mathrm{ess}}^{(k)}(x)$ be the truncated essential
type, obtained by keeping the first $k$ entries of the essential type
of $x$. The above proof also shows that $t_{\mathrm{ess}}^{(k)}(x)$
determines $t_{\mathrm{ess}}^{(k+1)}(y)$ (and therefore
$t_{\mathrm{ess}}^{(k)}(y)$) for any successor $y$ of $x$. In the same
way, if one considers a truncation according to a weight $w=(w_0,
w_1,\dotsc, w_{2g}, w_{1'}, w_{2'},w_*)$ and a threshold $k$, then
$t_{\mathrm{ess}}^{(k,w)}(x)$ determines $t_{\mathrm{ess}}^{(k,w)}(y)$ for any
successor $y$ of $x$, if the weight $w_*$ is maximal among all
weights. This last requirement is necessary for the following reason:
the essential type of $y$ can be obtained from that of $x$ by
replacing some entries with stars; if this could decrease the weight
of the resulting sequence, one might need to look further to
determine $t_{\mathrm{ess}}^{(k,w)}(y)$, and $t_{\mathrm{ess}}^{(k,w)}(x)$
might not be sufficient.
Under these conditions, it follows that $t_{\mathrm{ess}}^{(k)}$ and
$t_{\mathrm{ess}}^{(k,w)}$ are (Perron-Frobenius) type systems, in the
sense of Definition~\ref{type_system}. Hence, we can use
Theorem~\ref{main_alg} to estimate the spectral radius of the
corresponding random walk. Again, it turns out that it is more
efficient to truncate using weights than length.
In genus $2$, taking the weights $w=(1,2,3,4,1,2,4)$ and the
threshold $k=25$, we obtain a matrix of size $8,999,902$. The
corresponding bound on the spectral radius is $\rho\geqslant 0.662772$,
proving Theorem~\ref{main_thm}. Those bounds were obtained on a
personal computer with a memory of $12 \text{GB}$ (memory is indeed
the main limiting factor, since one should store all truncated
essential types to create the matrix $M$). With more memory, one
would get better estimates, but it is unlikely that those estimates
converge to the true spectral radius when $k$ tends to infinity: to
recover it, it is probably necessary to distinguish even more points,
for instance by using Cannon's cone types instead of the canonical
types (but this would become totally impractical).
In higher genus, here are the bounds we obtain.
\begin{thm}
In genus $3$, using $t_{\mathrm{ess}}^{(k,w)}$ with
$w=(1,2,3,4,5,6,1,2,6)$ and $k=25$, we get a matrix of size
$7,307,293$ and the estimate $\rho\geqslant 0.5527735593$.
In genus $4$, using $t_{\mathrm{suff}}^{(k,w)}$ with $w=(1,2,3,4,5,6,7,8)$ and
$k=24$, we get a matrix of size $4,120,495$ and the estimate $\rho
\geqslant 0.48412292068$.
\end{thm}
When the genus increases, the groups look more and more like free
groups. This means that the spectral radius is very close to that of
the random walk on a tree with the corresponding number of generators
(i.e., $\sqrt{4g-1}/(2g)$ in general, specializing to $0.55277079$ in
genus $3$ and $0.48412291827$ in genus $4$), and to get a significant
improvement one needs to take very large matrices. The path-counting
arguments of Bartholdi~\cite{bartholdi_rho}, on the other hand, are
more and more precise when the group looks more and more like a free
group: in genus $3$, he gets $\rho\geqslant 0.5527735401$, which is just
slightly worse than our estimate, while requiring considerably less
computer power. In genus $4$, he gets $\rho\geqslant 0.48412292074$, which
is already better than our estimate, and the situation is certainly
the same in higher genus.
In genus $3$, the upper bound of Nagnibeda~\cite{nagnibeda_rho} is
$\rho\leqslant 0.552792$, while our lower bound (or Bartholdi's) is much
closer to the naive lower bound coming from the free group. It is
unclear which one is closer to the real value of the spectral radius.
For the practical implementation, as at the end of
Paragraph~\ref{subsec_suffix}, it is important to know the
asymptotics of the number of points on $\mathbb{S}^n$ having a given
truncated essential type. We illustrate how to compute such
asymptotics in three significant examples, that can be combined to
handle the general case.
\begin{itemize}
\item Assume first that the type $(i_0,\dotsc, i_m)$ does not
contain any ambiguous letter, i.e., $i_\ell\not=*$ for all
$\ell$. In this case, the formula~\eqref{eq:bijection} still
holds.
\item Assume now that the type is of the form $(i,*,\dotsc, *,
j)$ for some types $i,j\not=*$, and some number $N$ of stars.
Let $x$ be a point with the above truncated essential type,
on a sphere $\mathbb{S}^n$, and let $y$ be the point of modified
type $j$, on the sphere $\mathbb{S}^{n-N-1}$, such that any
geodesic from $e$ to $x$ goes through $y$. Since the type
$2g$ is the only one to have several predecessors,
necessarily $i=2g$. On the other hand, $j$ can be any type in
$\{1,1',2,2',3,\dotsc, 2g\}$.
The discussion in the proof of
Proposition~\ref{prop:tessential_type} (see in particular the
last $2$ cases of Figure~\ref{fig:tessentialunique}) implies
that $N = (2g-1)m$, for some integer $m$: a geodesic from $e$
to $x$ goes through $y$, and then it follows $m$ $4g$-gons.
Let us first study the case $m=1$. Consider a point $y$ of
type $j$, and a $4g$-gon $R$ based at $y$ (i.e., $y$ is the
closest point to $e$ on this $4g$-gon). There are $4g-2$ such
$R$ if $j\not= 2g$ and $4g-3$ if $j=2g$ (since a point of
type $2g$ has two incoming edges). On $R$, consider the point
$x$ that is the farthest from $e$: it has type $2g$, and one
can reach it from $y$ by going around $R$ in one direction or
the other. It follows that
\begin{multline*}
\operatorname{Card}\{x\in \mathbb{S}^n \::\: t_{\mathrm{ess}}^{(k,w)}(x)=(2g,*,\dotsc,*,j)\text{ with $N=2g-1$ stars}\}
\\ = a_j\operatorname{Card}\{y\in \mathbb{S}^{n-N-1} \::\: t'(y)=j\},
\end{multline*}
where $a_j = 4g-3$ if $j\in \{2g-1, 2g\}$, and $a_j=4g-2$
otherwise. The case of a point $y$ of type $j=2g-1$ is
special since, on the $4g$-gon $R$ containing the successors
of $y$ of types $1'$ and $2'$, one of the geodesic paths from
$y$ to the opposite point $x$ goes through a vertex of type
$2g$, giving rise to further ambiguities. Hence, this
$4g$-gon should be discarded from the above counting, leaving
only $4g-3$ suitable $4g$-gons.
In the case of a general $m\geqslant 1$, the number of points $x$
corresponding to a given point $y$ of type $j$ may be
obtained first by choosing a suitable $4g$-gon $R_1$ based at
$y$ (giving $a_j$ choices). Further ambiguities can only be
obtained by choosing one of the two predecessors (of type
$2g-1$) of the point that is opposite to $y$ on $R_1$, and
then following the $4g$-gon $R_2$ based at this point and
containing its successors of type $1'$ and $2'$. This gives
$2$ choices for $R_2$, then $2$ more choices for the next
$4g$-gon $R_3$, and so on. In the end, we obtain
\begin{multline*}
\operatorname{Card}\{x\in \mathbb{S}^n \::\: t_{\mathrm{ess}}^{(k,w)}(x)=(2g,*,\dotsc,*,j)\text{ with $N=(2g-1)m$ stars}\}
\\ = a_j 2^{m-1}\operatorname{Card}\{y\in \mathbb{S}^{n-N-1} \::\: t'(y)=j\}.
\end{multline*}
\item Finally, assume that the type is $(i, *,\dotsc, *)$ with
some number $N$ of stars (the situation is very similar to
the previous one). Necessarily, as above, $i=2g$. Write
$N=(2g-1)m+r$ for some $r\in \{1,\dotsc, 2g-1\}$.
For $m=0$, a point of type $2g$ has two predecessors, so
there are always ambiguities regarding its first $2g-1$
ancestors. Hence, the set of points we are considering is
simply the set of points of type $2g$ on the sphere $\mathbb{S}^n$,
and there is nothing to do.
For $m=1$, there can be further ambiguities only if $x$ has
an ancestor $x'$ of generation $2g-1$ that has type $2g$, and
$x$ is at the tip of the $4g$-gon $R$ based at one of the two
predecessors of $x'$ (of type $2g-1$), and containing $x'$.
There are two choices for $R$.
Proceeding inductively in the case of a general $m$, we get
\begin{multline*}
\operatorname{Card}\{x\in \mathbb{S}^n \::\: t_{\mathrm{ess}}^{(k,w)}(x)=(2g,*,\dotsc,*)\text{ with $N=(2g-1)m+r$ stars}\}
\\ = 2^{m}\operatorname{Card}\{y\in \mathbb{S}^{n-(2g-1)m} \::\: t(y)=2g\}.
\end{multline*}
\end{itemize}
A general truncated essential type is the concatenation of successive
types of the form just described in the examples. We can thus count
the number of points with a given type just by combining the above
formulas.
\appendix
\section{Implementation}
In this appendix (not intended for publication), we include the code
of the program used to get the numerical estimates of this paper,
written using the \verb+sage+ mathematical software and converted to
\LaTeX{} form thanks to \verb+sws2tex+.
\makeatletter
\def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax
\let\PY@ul=\relax \let\PY@tc=\relax
\let\PY@bc=\relax \let\PY@ff=\relax}
\def\PY@tok#1{\csname PY@tok@#1\endcsname}
\def\PY@toks#1+{\ifx\relax#1\empty\else
\PY@tok{#1}\expandafter\PY@toks\fi}
\def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{
\PY@it{\PY@bf{\PY@ff{#1}}}}}}}
\def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}}
\def\PYZbs{\char`\\}
\def\PYZus{\char`\_}
\def\PYZob{\char`\{}
\def\PYZcb{\char`\}}
\def\PYZca{\char`\^}
\def\PYZat{@}
\def\PYZlb{[}
\def\PYZrb{]}
\makeatother
\attachfile[description=You can get the Sage worksheet by clicking this icon.]{estimateRho.sws}
Consider a group whose Cayley graph has no odd cycle, and orient every edge away from the identity (towards infinity). Assume that every point outside a finite neighborhood of the identity has a type, taking finitely many values, such that the type of a point determines the types of its successors. The transition rules (i.e., the numbers of successors) are given by a matrix $M$; we also assume that this matrix is transitive and aperiodic (i.e., it is a Perron-Frobenius matrix). More precisely, $M_{ij}$ is the number of successors of type $i$ of any point of type $j$.
The number of points of type $i$ in the sphere of radius $n$ can be determined as follows: let $\tilde M_{ij} = M_{ij}/p_i$, where $p_i$ is the number of predecessors of any point of type $i$. Then $\tilde M^n$ (plus some information related to the finite ball around $e$ where types are not well defined) dictates the growth of the number of points of type $i$: denoting by $(A_0,\dots, A_{t-1})$ the eigenvector of $\tilde M$ corresponding to its maximal eigenvalue $e^v$, one gets $\mathbb{C}ard\{x\in \mathbb{S}^n \::\: t(x)=i\} \sim c A_i e^{nv}$, for some constant $c$ not depending on the type.
One can then bound the spectral radius of the simple random walk from below, as follows: choose some $\alpha < e^{-v/2}$. We define a function $u$ equal to $b_i \alpha^n$ on points of type $i$ at distance $n$ from the identity, for some coefficients $b_i$ yet to be chosen. Then $u$ belongs to $\ell^2$, since $\mathbb{C}ard(\mathbb{S}^n)\,\alpha^{2n}$ is summable when $e^v\alpha^2 < 1$. Moreover, if $Q$ is the Markov operator of the random walk and $d$ is the number of neighbors of any point,
\begin{align*}
\langle Qu, u\rangle &= \frac{1}{d} \sum_{x\sim y} u(x) u(y) = \frac{2}{d} \sum_{x\to y} u(x)u(y) \sim \frac{2}{d} \sum_{i,j,n} cA_j e^{nv} b_j M_{i,j} b_i \alpha^{2n+1}
\\&
= \frac{2}{d} \sum_{i,j} cA_j b_j M_{i,j} b_i \,\alpha/(1-e^{v}\alpha^2).
\end{align*}
On the other hand, $\langle u, u \rangle \sim \sum_i cA_i b_i^2 /(1-\alpha^2 e^v)$. Letting $\alpha$ tend to $e^{-v/2}$, one obtains
\[ \rho \geqslant \frac{e^{-v/2}}{d/2} \frac{ \sum_{i,j} A_j b_j M_{i,j} b_i}{\sum_i A_i b_i^2}.\]
Note that the unknown constant $c$ has disappeared. To obtain good results, one is reduced to an optimization problem for a quadratic form. Writing $c_i = A_i^{1/2}b_i$, one should maximize
\[\frac{\sum_{i,j} c_j c_i M_{i,j} A_j^{1/2}/A_i^{1/2}}{\sum_i c_i^2}.\]
Writing $M'_{i,j} = M_{i,j} A_j^{1/2}/A_i^{1/2}$, this is the maximum on the unit sphere of the quadratic form defined by $M'$. This is also the maximal eigenvalue of $M''=(M'+(M')^t)/2$. Hence, all this can readily be implemented algorithmically. Let $D$ be the diagonal matrix with the $A_i$ on the diagonal; then $M' = D^{-1/2} M D^{1/2}$, and $M''= \bigl(D^{-1/2}M D^{1/2} + D^{1/2}M^t D^{-1/2}\bigr)/2$. Conjugating by $D^{1/2}$, we see that this matrix is similar to $U = (M + DM^tD^{-1})/2$, hence the value we are seeking is also the maximal eigenvalue of $U$. The interest of this remark is that no square root appears any more: everything can be computed in the algebraic field generated by $e^v$ if we want to carry out the computation exactly.
In the implementation, we split everything into small tasks, since we will improve several of them later on. As we will deal with large sparse matrices, we try to iterate only over the nonzero entries of the matrices, not over all entries.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{def} \PY{n+nf}{apply\char`\_{}map\char`\_{}and\char`\_{}ring}\PY{p}{(}\PY{n}{M}\PY{p}{,} \PY{n}{R}\PY{p}{,} \PY{n}{f}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""f is a function of the form f((i,j), a)=b, with b=0 if a=0.}
\PY{l+s+sd}{ R is a ring.}
\PY{l+s+sd}{ Returns a matrix with coefficients in R with }
\PY{l+s+sd}{ coefficient f((i,j), M[i,j]) at position (i,j).}
\PY{l+s+sd}{ """}
\PY{n}{N} \PY{o}{=} \PY{n}{Matrix}\PY{p}{(}\PY{n}{R}\PY{p}{,} \PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}\PY{p}{,} \PY{n}{M}\PY{o}{.}\PY{n}{ncols}\PY{p}{(}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{n}{sparse} \PY{o}{=} \PY{n}{M}\PY{o}{.}\PY{n}{is\char`\_{}sparse}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{)} \PY{o+ow}{in} \PY{n}{M}\PY{o}{.}\PY{n}{nonzero\char`\_{}positions}\PY{p}{(}\PY{n}{copy}\PY{o}{=}\PY{n}{false}\PY{p}{)}\PY{p}{:}
\PY{n}{N}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{n}{f}\PY{p}{(}\PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{)}\PY{p}{,} \PY{n}{M}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{]}\PY{p}{)}
\PY{k}{return} \PY{n}{N}
\PY{k}{class} \PY{n+nc}{RhoEstimatorBasic}\PY{p}{:}
\PY{l+s+sd}{"""Given a Perron-Frobenius matrix M such that M[i,j] is the number }
\PY{l+s+sd}{ of transitions from type j to type i, and an integer d corresponding to the}
\PY{l+s+sd}{ number of neighbors of every point in the graph,}
\PY{l+s+sd}{ estimates from below the spectral radius of the simple random}
\PY{l+s+sd}{ walk using the algorithm described in the previous paragraph.}
\PY{l+s+sd}{ estimate() returns this estimate.}
\PY{l+s+sd}{ """}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{M}\PY{p}{,} \PY{n}{d}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M} \PY{o}{=} \PY{n}{M}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{d} \PY{o}{=} \PY{n}{d}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types} \PY{o}{=} \PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{Mtilde}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{n}{predecessors} \PY{o}{=} \PY{p}{[}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{d} \PY{o}{-} \PY{n+nb}{sum}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{column}\PY{p}{(}\PY{n}{i}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{k}{return} \PY{n}{apply\char`\_{}map\char`\_{}and\char`\_{}ring}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{,} \PY{n}{RDF}\PY{p}{,} \PY{k}{lambda} \PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{)}\PY{p}{,}\PY{n}{a}\PY{p}{:} \PY{n}{a}\PY{o}{/}\PY{n}{predecessors}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the growth e\char`\^{}v and the eigenvector A of the graph"""}
\PY{n}{Mtilde\char`\_{}eigenvectors} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{Mtilde}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{dense\char`\_{}matrix}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{right\char`\_{}eigenvectors}\PY{p}{(}\PY{p}{)}
\PY{n}{Mtilde\char`\_{}eigenvalues} \PY{o}{=} \PY{p}{[}\PY{n}{Mtilde\char`\_{}eigenvectors}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{.}\PY{n}{real}\PY{p}{(}\PY{p}{)}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{n}{growth} \PY{o}{=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{Mtilde\char`\_{}eigenvalues}\PY{p}{)}
\PY{n}{A} \PY{o}{=} \PY{n}{Mtilde\char`\_{}eigenvectors}\PY{p}{[}\PY{n}{Mtilde\char`\_{}eigenvalues}\PY{o}{.}\PY{n}{index}\PY{p}{(}\PY{n}{growth}\PY{p}{)}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\PY{k}{return} \PY{n}{growth}\PY{p}{,} \PY{n}{A}
\PY{k}{def} \PY{n+nf}{eig\char`\_{}Msym}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the maximal expansion of M' on the unit sphere"""}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{growth}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{A} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{p}{)}
\PY{n}{Mprime} \PY{o}{=} \PY{n}{apply\char`\_{}map\char`\_{}and\char`\_{}ring}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{,} \PY{n}{RDF}\PY{p}{,}
\PY{k}{lambda} \PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{n}{j}\PY{p}{)}\PY{p}{,}\PY{n}{a}\PY{p}{:} \PY{n}{a}\PY{o}{*}\PY{n+nb}{abs}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{A}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{o}{/}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{A}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}\PY{o}{\char`\^{}}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{)}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}expansion}\PY{p}{(}\PY{n}{Mprime}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{max\char`\_{}expansion}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{Mprime}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the maximum of (x, Mprime x) for x in the unit sphere"""}
\PY{n}{Msym} \PY{o}{=} \PY{p}{(}\PY{n}{Mprime} \PY{o}{+} \PY{n}{Mprime}\PY{o}{.}\PY{n}{transpose}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}
\PY{k}{return} \PY{n+nb}{max}\PY{p}{(}\PY{n}{Msym}\PY{o}{.}\PY{n}{dense\char`\_{}matrix}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{eigenvalues}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{estimate}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{n}{eig} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{eig\char`\_{}Msym}\PY{p}{(}\PY{p}{)}
\PY{k}{return} \PY{l+m+mi}{2}\PY{o}{*}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{growth}\PY{o}{\char`\^{}}\PY{p}{(}\PY{o}{-}\PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{n}{eig}\PY{o}{/}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{d}
\end{Verbatim}
In surface groups, types have been described by Cannon. The next function constructs the transition matrix for types, using his results.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{def} \PY{n+nf}{Msurface}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Returns a (2g times 2g) matrix M.}
\PY{l+s+sd}{ M[i,j] is the number of successors of type i of a point of type j, }
\PY{l+s+sd}{ in the surface group Sigma\char`\_{}g.}
\PY{l+s+sd}{ We only consider types from 1 to 2g, since type 0, being not recurrent,}
\PY{l+s+sd}{ is not relevant for our purposes.}
\PY{l+s+sd}{ """}
\PY{n}{M} \PY{o}{=} \PY{n}{Matrix}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0} \PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{3}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1} \PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{4}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{2}
\PY{k}{return} \PY{n}{M}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{Msurface}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\left(\begin{array}{rrrr}
5 & 5 & 5 & 4 \\
2 & 1 & 1 & 2 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{array}\right)$
}
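Note that the column sums of this matrix are $7$, $7$, $7$ and $6$: in the $8$-regular Cayley graph of $\Gamma_2$, a point of type $1$, $2$ or $3$ has $7$ successors and hence a single predecessor, while a point of type $4$ (the type $2g$) has $6$ successors and $2$ predecessors. This is exactly how the predecessors are recovered in \verb+Mtilde+ above, as \verb+d - sum(M.column(i))+.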
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{RhoEstimatorBasic}\PY{p}{(}\PY{n}{Msurface}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{8}\PY{p}{)}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.662477976598$
}
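As a sanity check of the square-root-free reformulation, one can recompute the same bound from the top eigenvalue of $U = (M + DM^tD^{-1})/2$, which is similar to the symmetrized matrix used in \verb+RhoEstimatorBasic+ and therefore has the same dominating eigenvalue. The following cell is only an illustration of this remark, reusing the classes defined above.
\begin{Verbatim}
# Sanity check: recompute the genus 2 bound from U = (M + D M^t D^-1)/2,
# which is similar to Msym and hence has the same top eigenvalue.
r = RhoEstimatorBasic(Msurface(2), 8)
growth, A = r.growth_and_A()
D = diagonal_matrix(RDF, [abs(a) for a in A])
M = Msurface(2).change_ring(RDF)
U = (M + D*M.transpose()*D.inverse())/2
2*growth^(-1/2)*max(x.real() for x in U.eigenvalues())/r.d
\end{Verbatim}
Up to numerical error, this returns the same value as \verb+RhoEstimatorBasic(Msurface(2), 8).estimate()+ above.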
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{RhoEstimatorBasic}\PY{p}{(}\PY{n}{Msurface}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{)}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.552772892866$
}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{RhoEstimatorBasic}\PY{p}{(}\PY{n}{Msurface}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.484122920106$
}
These results are better than Bartholdi's in genus $2$, worse in genus $3$.
To go further, one can use extended types: some points have a unique predecessor, i.e., the last step of the geodesics ending at such a point is uniquely determined. In this case, the point also determines the Cannon type of its predecessor (note however that the Cannon type of the point does not determine the Cannon type of its predecessor). This may be used to separate the points into further categories, refining Cannon's types (while retaining the property that an extended type determines the extended types of the successors). What we have just described are extended types of length $2$. More generally, one can define sets of extended types inductively, as follows, starting from the set of all one-letter words (which gives back Cannon's types). Consider a set of words $W$ of the form $w=(w[0],\dots, w[\ell-1])$ with $M_{w[i]w[i+1]} > 0$ and such that each letter $w[i]$, $i < \ell-1$, has a unique predecessor. Choose a word $w\in W$ whose last letter has a unique predecessor, and let $E(w)$ be the set of words extending it by one letter. If $W$ defines an extended type, then $W'= (W\backslash \{w\}) \cup E(w)$ also does.
Equivalently, the sets $W$ defining extended types are the sets of finite words as above such that, for any $w\in W$ and any strict prefix $p$ of $w$, for any letter $a$ that can follow the last letter of $p$, the word $pa$ is a prefix of a word in $W$. One also requires that any letter is a prefix of a word in $W$.
Given such a set, consider a point $x\in \Gamma$, with some type $w[0]$. If the one-letter word $(w[0])$ is in $W$, then $t(x) = (w[0])$. Otherwise, $x$ has a unique predecessor, of some type $w[1]$, and $w[0]w[1]$ is again a prefix of a word in $W$. One can go on until one reaches a word that is in $W$: this word is the $W$-extended type of $x$.
Here is a last equivalent point of view on sets of extended types: consider the set of all admissible words, and let $\tau$ be a bounded stopping time, i.e., a function such that if $\tau(w) = k$ then for all words $w'$ coinciding with $w$ up to position $k$, $\tau(w') = k$, and moreover if $w$ is a prefix of $w'$ then $\tau(w) \geqslant \tau(w')$. Then the set $\{w : \tau(w) = |w| \}$ defines an extended type.
For instance, the set of admissible words of length at most $k$, ending with a letter with several predecessors when their length is less than $k$, defines an extended type.
For another example, associate to every letter a weight, the weight of a word being the sum of the weights of its letters. We define $W$ by saying that a word $w$ belongs to $W$ if $\mathrm{weight}(w) > K$, for some threshold $K$, while this inequality fails for all strict prefixes of $w$ (or, as usual, if the last letter of $w$ has several predecessors). Then $W$ again defines an extended type.
One can also mix those two examples, by stopping when $\mathrm{length}(w)= C$ or $\mathrm{weight}(w) > K$ (or, of course, when a letter has several predecessors).
Experimentally, it turns out that using the weights $[1,2,3,4]$ gives extended types yielding good bounds on the spectral radius (compared to the size of the matrix they involve).
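For concreteness, take the genus $2$ matrix above, give weight $i$ to the Cannon type $i$ and take the threshold $K=2$ (this corresponds to \verb+weights = [1,2,3,4]+ and \verb+max_weight = 2+ in the code below). Recall that a word lists the type of a point followed by the types of its successive ancestors, and that only type $4$ has several predecessors. The stopping rule then produces the $13$ words
\[
(3),\ (4),\ (1,2),\ (1,3),\ (1,4),\ (2,1),\ (2,2),\ (2,3),\ (2,4),\ (1,1,1),\ (1,1,2),\ (1,1,3),\ (1,1,4),
\]
which is indeed the matrix size reported for \verb+max_weight = 2+ in the experiments below.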
~
A useful remark is that it is easy to compute the growth and the type asymptotics for an extended type. Indeed, let $M_0$ and $A_0$ be the matrix and the growth rates for the original types (very cheap to compute): for some constant $c$, the number of points on the sphere $S^n$ with type $i$ is asymptotically $c A_0(i)\, growth^n$. Let $w=(w_0\dots w_{k-1})$ be an admissible word. Points on the sphere $S^n$ with this extended type are parameterized by a point on $S^{n-k+1}$ with type $w_{k-1}$, together with a chain of successors on the spheres $S^{n-i}$ with types $w_i$, for $i = k-2,\dotsc, 0$. The number of such points is therefore asymptotically
\[ c A_0(w_{k-1}) growth^{n-k+1} M_0(w_0, w_1) M_0(w_1, w_2)\dots M_0(w_{k-2}, w_{k-1}).\]
It follows that the vector $A$ of growth rates is given by
\[ A(w) = A_0(w_{k-1}) growth^{-k+1} M_0(w_0, w_1) M_0(w_1, w_2)\dots M_0(w_{k-2}, w_{k-1}). \]
This quantity is very easy to compute if one knows not only the matrix $M$, but also the words it comes from. Hence, this computation can be done by the extended type builder.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{class} \PY{n+nc}{ExtendedTypesBuilderWeightLength}\PY{p}{:}
\PY{l+s+sd}{"""this class constructs the extended type corresponding}
\PY{l+s+sd}{ to its initialization parameters weights, max\char`\_{}weight, length:}
\PY{l+s+sd}{ it constructs all admissible words, stopping when the length }
\PY{l+s+sd}{ of the word becomes equal to length, or when the weight }
\PY{l+s+sd}{ becomes larger than max\char`\_{}weight.}
\PY{l+s+sd}{ }
\PY{l+s+sd}{ matrix() returns the transition matrix for the extended type}
\PY{l+s+sd}{ extended\char`\_{}types() returns all the words in the extended type}
\PY{l+s+sd}{ growth\char`\_{}and\char`\_{}A() returns the growth of spheres in the group,}
\PY{l+s+sd}{ and asymptotics for each type}
\PY{l+s+sd}{ """}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{M}\PY{p}{,} \PY{n}{d}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,} \PY{n}{length}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M} \PY{o}{=} \PY{n}{M}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{d} \PY{o}{=} \PY{n}{d}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{predecessors} \PY{o}{=} \PY{p}{[}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{d} \PY{o}{-} \PY{n+nb}{sum}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{column}\PY{p}{(}\PY{n}{i}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{words} \PY{o}{=} \PY{n+nb+bp}{None}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights} \PY{o}{=} \PY{n}{weights}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{length} \PY{o}{=} \PY{n}{length}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}weight} \PY{o}{=} \PY{n}{max\char`\_{}weight}
\PY{c}{# compute an upper bound for the length of admissible words}
\PY{k}{if} \PY{n+nb}{min}\PY{p}{(}\PY{n}{weights}\PY{p}{)} \PY{o}{>} \PY{l+m+mi}{0}\PY{p}{:}
\PY{n}{a} \PY{o}{=} \PY{n}{ceil}\PY{p}{(}\PY{n}{max\char`\_{}weight}\PY{o}{/}\PY{n+nb}{min}\PY{p}{(}\PY{n}{weights}\PY{p}{)}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}
\PY{k}{else}\PY{p}{:}
\PY{n}{a} \PY{o}{=} \PY{n}{length}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}length} \PY{o}{=} \PY{n+nb}{min}\PY{p}{(}\PY{n}{length}\PY{p}{,} \PY{n}{a}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{5}
\PY{k}{def} \PY{n+nf}{truncate}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""truncates a word t to its admissible part.}
\PY{l+s+sd}{ Only works if all the transitions in t are admissible, i.e.,}
\PY{l+s+sd}{ M[t[i], t[i+1]] > 0 for all i.}
\PY{l+s+sd}{ """}
\PY{n}{total} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{i} \PY{o}{=} \PY{l+m+mi}{0}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb}{min}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{length}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{k}{if} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{predecessors}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]} \PY{o}{>} \PY{l+m+mi}{1}\PY{p}{:}
\PY{k}{break}
\PY{n}{total} \PY{o}{+}\PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]}
\PY{k}{if} \PY{n}{total} \PY{o}{>} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}weight}\PY{p}{:}
\PY{k}{break}
\PY{k}{return} \PY{n}{t}\PY{p}{[}\PY{p}{:}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}
\PY{k}{def} \PY{n+nf}{extended\char`\_{}types}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns a list of all admissible types"""}
\PY{k}{if} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{words} \PY{o+ow}{is} \PY{n+nb+bp}{None}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{compute\char`\_{}extended\char`\_{}types}\PY{p}{(}\PY{p}{)}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{words}
\PY{k}{def} \PY{n+nf}{compute\char`\_{}extended\char`\_{}types}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Puts in self.words a list of all admissible types.}
\PY{l+s+sd}{ The construction starts from a collection of types, }
\PY{l+s+sd}{ and tries to extend them.}
\PY{l+s+sd}{ If they are not extendable, put them in result.}
\PY{l+s+sd}{ Otherwise, go on with the new types.}
\PY{l+s+sd}{ """}
\PY{n}{result} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{n}{current\char`\_{}pool} \PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{n}{length} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{while} \PY{n}{length} \PY{o}{<} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{length}\PY{p}{:}
\PY{n}{length} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{temp\char`\_{}pool} \PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{n}{t}\PY{o}{+}\PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n}{w} \PY{o}{+} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)}
\PY{k}{for} \PY{p}{(}\PY{n}{t}\PY{p}{,}\PY{n}{w}\PY{p}{)} \PY{o+ow}{in} \PY{n}{current\char`\_{}pool}
\PY{k}{if} \PY{n}{w} \PY{o}{<}\PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}weight} \PY{o+ow}{and} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{predecessors}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]} \PY{o}{<}\PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}
\PY{k}{if} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{i}\PY{p}{]} \PY{o}{>} \PY{l+m+mi}{0}\PY{p}{]}
\PY{n}{result} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{n}{t} \PY{k}{for} \PY{p}{(}\PY{n}{t}\PY{p}{,}\PY{n}{w}\PY{p}{)} \PY{o+ow}{in} \PY{n}{current\char`\_{}pool}
\PY{k}{if} \PY{n}{w} \PY{o}{>} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}weight} \PY{o+ow}{or} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{predecessors}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]} \PY{o}{>} \PY{l+m+mi}{1}\PY{p}{]}
\PY{n}{current\char`\_{}pool} \PY{o}{=} \PY{n}{temp\char`\_{}pool}
\PY{n}{result} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{n}{t} \PY{k}{for} \PY{p}{(}\PY{n}{t}\PY{p}{,}\PY{n}{w}\PY{p}{)} \PY{o+ow}{in} \PY{n}{current\char`\_{}pool}\PY{p}{]}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{words} \PY{o}{=} \PY{n}{result}
\PY{k}{def} \PY{n+nf}{growth\char`\_{}and\char`\_{}Azero}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the growth and the eigenvalues corresponding to the }
\PY{l+s+sd}{ initial matrix M"""}
\PY{n}{r} \PY{o}{=} \PY{n}{RhoEstimatorBasic}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{d}\PY{p}{)}
\PY{k}{return} \PY{n}{r}\PY{o}{.}\PY{n}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the growth and the asymptotics of extended types"""}
\PY{n}{growth}\PY{p}{,} \PY{n}{Azero} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{growth\char`\_{}and\char`\_{}Azero}\PY{p}{(}\PY{p}{)}
\PY{n}{types} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{c}{# precompute all the possible values of A0(w\char`\_{}\PYZob{}k-1\PYZcb{})growth\char`\^{}\PYZob{}-k+1\PYZcb{}}
\PY{n}{words} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{extended\char`\_{}types}\PY{p}{(}\PY{p}{)}
\PY{n}{precompute} \PY{o}{=} \PY{p}{[}\PY{p}{[}\PY{l+m+mi}{0} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}length}\PY{p}{)}\PY{p}{]} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{types}\PY{p}{)}\PY{p}{:}
\PY{n}{precompute}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{n}{Azero}\PY{p}{[}\PY{n}{j}\PY{p}{]}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}length}\PY{p}{)}\PY{p}{:}
\PY{n}{precompute}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{precompute}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{n}{growth}
\PY{c}{# compute all the values of A}
\PY{n}{A} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{0} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{words}\PY{p}{)}\PY{p}{)}\PY{p}{]}
\PY{k}{for} \PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{n}{t}\PY{p}{)} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{words}\PY{p}{)}\PY{p}{:}
\PY{n}{m} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}
\PY{n}{m} \PY{o}{*}\PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{,} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]}
\PY{n}{A}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{m}\PY{o}{*}\PY{n}{precompute}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]}\PY{p}{[}\PY{n+nb}{len}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}
\PY{k}{return} \PY{n}{growth}\PY{p}{,} \PY{n}{A}
\PY{k}{def} \PY{n+nf}{successor}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{orig}\PY{p}{,} \PY{n}{letter}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the admissible type }
\PY{l+s+sd}{ obtained by adding letter at the beginning of orig"""}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{truncate}\PY{p}{(}\PY{p}{(}\PY{n}{letter}\PY{p}{,}\PY{p}{)} \PY{o}{+} \PY{n}{orig}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{matrix}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns the transition matrix for the extended type"""}
\PY{n}{new\char`\_{}types} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{extended\char`\_{}types}\PY{p}{(}\PY{p}{)}
\PY{n}{types\char`\_{}dict} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{n+nb}{zip}\PY{p}{(}\PY{n}{new\char`\_{}types}\PY{p}{,} \PY{n+nb}{range}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{new\char`\_{}types}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)}
\PY{n}{N} \PY{o}{=} \PY{n}{Matrix}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{new\char`\_{}types}\PY{p}{)}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{new\char`\_{}types}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{,} \PY{n}{sparse} \PY{o}{=} \PY{n+nb+bp}{True}\PY{p}{)}
\PY{k}{for} \PY{n}{orig} \PY{o+ow}{in} \PY{n}{new\char`\_{}types}\PY{p}{:}
\PY{k}{for} \PY{n}{letter} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}\PY{p}{:}
\PY{k}{if} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{[}\PY{n}{letter}\PY{p}{,} \PY{n}{orig}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{]} \PY{o}{>} \PY{l+m+mi}{0}\PY{p}{:}
\PY{n}{s} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{successor}\PY{p}{(}\PY{n}{orig}\PY{p}{,} \PY{n}{letter}\PY{p}{)}
\PY{n}{N}\PY{p}{[}\PY{n}{types\char`\_{}dict}\PY{p}{[}\PY{n}{s}\PY{p}{]}\PY{p}{,} \PY{n}{types\char`\_{}dict}\PY{p}{[}\PY{n}{orig}\PY{p}{]}\PY{p}{]} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{[}\PY{n}{letter}\PY{p}{,} \PY{n}{orig}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{]}
\PY{k}{return} \PY{n}{N}
\PY{k}{class} \PY{n+nc}{ExtendedTypesBuilderLength}\PY{p}{(}\PY{n}{ExtendedTypesBuilderWeightLength}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Extended type builder that only selects admissible}
\PY{l+s+sd}{ words through their length"""}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{M}\PY{p}{,} \PY{n}{d}\PY{p}{,} \PY{n}{length}\PY{p}{)}\PY{p}{:}
\PY{n}{ExtendedTypesBuilderWeightLength}\PY{o}{.}\PY{n}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}
\PY{n}{M}\PY{p}{,} \PY{n}{d}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{0} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{length}\PY{p}{)}
\PY{k}{class} \PY{n+nc}{ExtendedTypesBuilderWeight}\PY{p}{(}\PY{n}{ExtendedTypesBuilderWeightLength}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Extended type builder that only selects admissible}
\PY{l+s+sd}{ words through their weight"""}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{M}\PY{p}{,} \PY{n}{d}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}\PY{p}{:}
\PY{n}{ExtendedTypesBuilderWeightLength}\PY{o}{.}\PY{n}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}
\PY{n}{M}\PY{p}{,} \PY{n}{d}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,} \PY{l+m+mi}{10000}\PY{p}{)}
\end{Verbatim}
In surface groups, one can estimate from below the spectral radius by using $k$-truncated extended types instead of types: the above algorithm still applies. Moreover, if $k$ is larger, one distinguishes more categories of points, and one may therefore expect better bounds on $\rho$. However, the size of the transition matrix increases, which means that computations are more and more intensive...
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{genus} \PY{o}{=} \PY{l+m+mi}{2}
\PY{k}{for} \PY{n}{max\char`\_{}length} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{8}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{:}
\PY{n}{M} \PY{o}{=} \PY{n}{ExtendedTypesBuilderLength}\PY{p}{(}\PY{n}{Msurface}\PY{p}{(}\PY{n}{genus}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{genus}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{)}\PY{o}{.}\PY{n}{matrix}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{genus = }\PY{l+s}{"}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{l+s}{"}\PY{l+s}{, max\char`\_{}length = }\PY{l+s}{"}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{,}\char`\\{}
\PY{l+s}{"}\PY{l+s}{, matrix size = }\PY{l+s}{"}\PY{p}{,} \PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{n}{time} \PY{n}{rho} \PY{o}{=} \PY{n}{RhoEstimatorBasic}\PY{p}{(}\PY{n}{M}\PY{p}{,} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{genus}\PY{p}{)}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{estimate = }\PY{l+s}{"}\PY{p}{,} \PY{n}{rho}\PY{p}{,} \PY{l+s}{"}\PY{l+s+se}{\char`\\{}n}\PY{l+s}{"}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_length = 1 , matrix size = 4
Time: CPU 0.00 s, Wall: 0.02 s
estimate = 0.662477976598
genus = 2 , max_length = 3 , matrix size = 25
Time: CPU 0.01 s, Wall: 0.11 s
estimate = 0.6626394462
genus = 2 , max_length = 5 , matrix size = 148
Time: CPU 0.16 s, Wall: 0.18 s
estimate = 0.662694226446
genus = 2 , max_length = 7 , matrix size = 865
Time: CPU 12.27 s, Wall: 7.83 s
estimate = 0.662720574395
\end{Verbatim}
There are two bottlenecks in the previous computations, when the matrix size increases: the computation of the growth rates $A$, which is done by the RhoEstimatorBasic class, and the computation of the maximal expansion of the matrix $M'$. We will now explain how to optimize those. The first issue is easy to deal with, since we have already observed that $A$ can be cheaply computed by the extended types builder itself.
Let us now deal with the maximal expansion of $M'$, i.e., the dominating eigenvalue of $Msym = (M' + (M')^t)/2$. Luckily, we do not need to compute all the eigenvalues, contrary to what we did before. Since this matrix has nonnegative entries, if one starts with a positive vector and iterates $Msym$, one converges exponentially fast to the eigenvector for the dominating eigenvalue. In particular, $\|Msym^{n+1} v\|/\|Msym^n v\|$ converges to the dominating eigenvalue (and one checks easily that this sequence is non-decreasing, see Woess, Corollary 10.2). This gives a simple algorithm to estimate the dominating eigenvalue from below.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{class} \PY{n+nc}{RhoEstimator}\PY{p}{(}\PY{n}{RhoEstimatorBasic}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""In this class, based on RhoEstimator, the computation of the maximal}
\PY{l+s+sd}{ eigenvalue of Msym is made using a simple iterative procedure."""}
\PY{k}{def} \PY{n+nf}{max\char`\_{}expansion}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{M}\PY{p}{,} \PY{n}{precision} \PY{o}{=} \PY{l+m+mi}{10}\PY{o}{\char`\^{}}\PY{p}{(}\PY{o}{-}\PY{l+m+mi}{50}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""M is a matrix with nonnegative entries.}
\PY{l+s+sd}{ returns a number that is less than or equal to the}
\PY{l+s+sd}{ maximum of (q, M q) for q in the unit sphere. }
\PY{l+s+sd}{ Computations are done within the given precision.}
\PY{l+s+sd}{ """}
\PY{c}{# Sparse matrices with RDF coefficients have no specific}
\PY{c}{# implementation in Sage 5.10, and are very slow.}
\PY{c}{# We provide a custom cython implementation, at the end of}
\PY{c}{# this worksheet for clarity.}
\PY{k}{if} \PY{n}{M}\PY{o}{.}\PY{n}{is\char`\_{}sparse}\PY{p}{(}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n}{max\char`\_{}expansion\char`\_{}sparse\char`\_{}double\char`\_{}cython}\PY{p}{(}\PY{n}{M}\PY{p}{,} \PY{l+m+mi}{10}\PY{o}{\char`\^{}}\PY{p}{(}\PY{o}{-}\PY{l+m+mi}{50}\PY{p}{)}\PY{p}{)}
\PY{n}{u} \PY{o}{=} \PY{n}{vector}\PY{p}{(}\PY{n}{M}\PY{o}{.}\PY{n}{base\char`\_{}ring}\PY{p}{(}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{1} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{]}\PY{p}{)}
\PY{n}{u} \PY{o}{=} \PY{n}{u}\PY{o}{/}\PY{n}{u}\PY{o}{.}\PY{n}{norm}\PY{p}{(}\PY{p}{)}
\PY{n}{expansion} \PY{o}{=} \PY{l+m+mi}{0}
\PY{n}{Msym} \PY{o}{=} \PY{p}{(}\PY{n}{M} \PY{o}{+} \PY{n}{M}\PY{o}{.}\PY{n}{transpose}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{o}{/}\PY{l+m+mi}{2}
\PY{k}{while} \PY{n+nb+bp}{True}\PY{p}{:}
\PY{n}{v} \PY{o}{=} \PY{n}{Msym} \PY{o}{*} \PY{n}{u}
\PY{n}{vnorm} \PY{o}{=} \PY{n}{v}\PY{o}{.}\PY{n}{norm}\PY{p}{(}\PY{p}{)}
\PY{n}{diff} \PY{o}{=} \PY{n}{vnorm} \PY{o}{-} \PY{n}{expansion}
\PY{n}{expansion} \PY{o}{=} \PY{n}{vnorm}
\PY{n}{u} \PY{o}{=} \PY{n}{v}\PY{o}{/}\PY{n}{vnorm}
\PY{k}{if} \PY{n}{diff} \PY{o}{<} \PY{n}{precision}\PY{p}{:}
\PY{k}{return} \PY{n}{expansion}
\PY{k}{class} \PY{n+nc}{RhoEstimatorExt}\PY{p}{(}\PY{n}{RhoEstimator}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""this estimator takes as a parameter the extended types builder.}
\PY{l+s+sd}{ Hence, type asymptotics are readily computed"""}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{builder}\PY{p}{)}\PY{p}{:}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{builder} \PY{o}{=} \PY{n}{builder}
\PY{n}{RhoEstimator}\PY{o}{.}\PY{n}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{builder}\PY{o}{.}\PY{n}{matrix}\PY{p}{(}\PY{p}{)}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{builder}\PY{o}{.}\PY{n}{d}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{k}{return} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{builder}\PY{o}{.}\PY{n}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{test\char`\_{}length}\PY{p}{(}\PY{n}{genus}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{)}\PY{p}{:}
\PY{n}{e} \PY{o}{=} \PY{n}{ExtendedTypesBuilderLength}\PY{p}{(}\PY{n}{Msurface}\PY{p}{(}\PY{n}{genus}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{genus}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{)}
\PY{n}{r} \PY{o}{=} \PY{n}{RhoEstimatorExt}\PY{p}{(}\PY{n}{e}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{genus = }\PY{l+s}{"}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{l+s}{"}\PY{l+s}{, max\char`\_{}length = }\PY{l+s}{"}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{,}\char`\\{}
\PY{l+s}{"}\PY{l+s}{, matrix size = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{estimate = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{test\char`\_{}weight}\PY{p}{(}\PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}\PY{p}{:}
\PY{n}{e} \PY{o}{=} \PY{n}{ExtendedTypesBuilderWeight}\PY{p}{(}\PY{n}{Msurface}\PY{p}{(}\PY{n}{genus}\PY{p}{)}\PY{p}{,} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}
\PY{n}{r} \PY{o}{=} \PY{n}{RhoEstimatorExt}\PY{p}{(}\PY{n}{e}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{genus = }\PY{l+s}{"}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{l+s}{"}\PY{l+s}{, max\char`\_{}weight = }\PY{l+s}{"}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,}\char`\\{}
\PY{l+s}{"}\PY{l+s}{, matrix size = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{estimate = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{for} \PY{n}{max\char`\_{}length} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{8}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{:}
\PY{n}{time} \PY{n}{test\char`\_{}length}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{"}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_length = 1 , matrix size = 4
estimate = 0.662477976598
Time: CPU 0.00 s, Wall: 0.01 s
genus = 2 , max_length = 3 , matrix size = 25
estimate = 0.6626394462
Time: CPU 0.00 s, Wall: 0.00 s
genus = 2 , max_length = 5 , matrix size = 148
estimate = 0.662694226446
Time: CPU 0.01 s, Wall: 0.01 s
genus = 2 , max_length = 7 , matrix size = 865
estimate = 0.662720574395
Time: CPU 0.06 s, Wall: 0.06 s
\end{Verbatim}
Now that we have a fast enough algorithm, let us compare what we get using lengths or using weights.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{for} \PY{n}{max\char`\_{}weight} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{9}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{:}
\PY{n}{time} \PY{n}{test\char`\_{}weight}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{"}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_weight = 2 , matrix size = 13
estimate = 0.662607354086
Time: CPU 0.01 s, Wall: 0.01 s
genus = 2 , max_weight = 4 , matrix size = 37
estimate = 0.662663626794
Time: CPU 0.01 s, Wall: 0.01 s
genus = 2 , max_weight = 6 , matrix size = 109
estimate = 0.66269793275
Time: CPU 0.02 s, Wall: 0.01 s
genus = 2 , max_weight = 8 , matrix size = 319
estimate = 0.662717774996
Time: CPU 0.03 s, Wall: 0.03 s
\end{Verbatim}
At comparable matrix sizes, the estimates obtained with weights are better than the corresponding estimates obtained with lengths. For instance, a matrix of size 109 generated using weights gives a better estimate than a matrix of size 148 generated using lengths. This is not surprising, since using weights separates further the points that are the most typical. To get the best possible estimates, we will therefore use weights (and very large matrices!).
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight}\PY{p}{(}\PY{n}{genus} \PY{o}{=} \PY{l+m+mi}{2}\PY{p}{,} \PY{n}{weights} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,} \PY{n}{max\char`\_{}weight} \PY{o}{=} \PY{l+m+mi}{25}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_weight = 25 , matrix size = 2774629
estimate = 0.66275789907
Time: CPU 2153.67 s, Wall: 2159.49 s
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight}\PY{p}{(}\PY{n}{genus} \PY{o}{=} \PY{l+m+mi}{3}\PY{p}{,} \PY{n}{weights} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{,}\PY{l+m+mi}{6}\PY{p}{]}\PY{p}{,} \PY{n}{max\char`\_{}weight} \PY{o}{=} \PY{l+m+mi}{24}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 3 , max_weight = 24 , matrix size = 2943021
estimate = 0.552773556459
Time: CPU 510.54 s, Wall: 511.90 s
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight}\PY{p}{(}\PY{n}{genus} \PY{o}{=} \PY{l+m+mi}{4}\PY{p}{,} \PY{n}{weights} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{,}\PY{l+m+mi}{3}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{,}\PY{l+m+mi}{6}\PY{p}{,}\PY{l+m+mi}{7}\PY{p}{,}\PY{l+m+mi}{8}\PY{p}{]}\PY{p}{,} \PY{n}{max\char`\_{}weight} \PY{o}{=} \PY{l+m+mi}{24}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 4 , max_weight = 24 , matrix size = 4120495
estimate = 0.484122920682
Time: CPU 745.34 s, Wall: 747.25 s
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{def} \PY{n+nf}{MsurfaceExt}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Returns a (2g+2 times 2g+2) matrix M.}
\PY{l+s+sd}{ M[i,j] is the number of successors of modified type i of }
\PY{l+s+sd}{ a point of modified type j, in the surface group Gamma\char`\_{}g.}
\PY{l+s+sd}{ Compared to Cannon, we add two types 1prime and 2prime, corresponding to}
\PY{l+s+sd}{ the successors of a point of type 2g-1 that can be in a loop with}
\PY{l+s+sd}{ further non-uniqueness.}
\PY{l+s+sd}{ }
\PY{l+s+sd}{ The correspondence between the types and the matrix indices is as follows:}
\PY{l+s+sd}{ type i <-> index i-1}
\PY{l+s+sd}{ type 1prime <-> index 2*g}
\PY{l+s+sd}{ type 2prime <-> index 2*g+1}
\PY{l+s+sd}{ We will also add later an "ambiguous" type, with index 2*g+2}
\PY{l+s+sd}{ """}
\PY{n}{M} \PY{o}{=} \PY{n}{Matrix}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{:}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0} \PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{3}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1} \PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,}\PY{n}{j}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{4}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{4}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{2}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{3}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{3}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{M}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{return} \PY{n}{M}
\PY{k}{class} \PY{n+nc}{ExtendedTypesBuilderSurfaceWeightLength}\PY{p}{(}\PY{n}{ExtendedTypesBuilderWeightLength}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""This class constructs the extended type for the surface}
\PY{l+s+sd}{ group corresponding to its initialization parameters genus, }
\PY{l+s+sd}{ length, weights:}
\PY{l+s+sd}{ among all words ending at some point, it selects the}
\PY{l+s+sd}{ part that is common to all geodesics ending at that point (with }
\PY{l+s+sd}{ length and weight at most the initialization parameters), }
\PY{l+s+sd}{ and constructs the types from them.}
\PY{l+s+sd}{ }
\PY{l+s+sd}{ There are 2*g+2 true types, as explained in MsurfaceExt(genus), and one }
\PY{l+s+sd}{ "undetermined" type, corresponding to parts of geodesics that are non-unique. }
\PY{l+s+sd}{ Therefore, weights should be of length 2*g+3. Moreover, the weight of the }
\PY{l+s+sd}{ undetermined type should be maximal among weights, so that extending a geodesic }
\PY{l+s+sd}{ and then possibly replacing some parts by undetermined parts one can only }
\PY{l+s+sd}{ increase the weight, and therefore shorten the part that is selected.}
\PY{l+s+sd}{ }
\PY{l+s+sd}{ matrix() returns the transition matrix for the extended type.}
\PY{l+s+sd}{ extended\char`\_{}types() returns all the words in the extended type.}
\PY{l+s+sd}{ """}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,} \PY{n}{length}\PY{p}{)}\PY{p}{:}
\PY{k}{if} \PY{n+nb}{len}\PY{p}{(}\PY{n}{weights}\PY{p}{)} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{genus} \PY{o}{+} \PY{l+m+mi}{3} \PY{o+ow}{or} \PY{n}{weights}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{!=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{weights}\PY{p}{)}\PY{p}{:}
\PY{k}{raise} \PY{n+ne}{ValueError}\PY{p}{,} \PY{l+s}{"}\PY{l+s}{incorrect parameters}\PY{l+s}{"}
\PY{n}{ExtendedTypesBuilderWeightLength}\PY{o}{.}\PY{n}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{MsurfaceExt}\PY{p}{(}\PY{n}{genus}\PY{p}{)}\PY{p}{,}
\PY{l+m+mi}{4}\PY{o}{*}\PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,} \PY{n}{length}\PY{p}{)}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{genus} \PY{o}{=} \PY{n}{genus}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{words} \PY{o}{=} \PY{n+nb+bp}{None}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{predecessors} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{1} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{genus}\PY{o}{+}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{]}
\PY{k}{def} \PY{n+nf}{mark\char`\_{}ambiguous}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""In the word t, locates the parts that are not common to all }
\PY{l+s+sd}{ geodesics ending by such a word, and replaces them by 2*g+2.}
\PY{l+s+sd}{ Accepts as input a word already with ambiguities.}
\PY{l+s+sd}{ """}
\PY{n}{u} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{t}\PY{p}{)}
\PY{n}{g} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{genus}
\PY{k}{try}\PY{p}{:}
\PY{n}{i} \PY{o}{=} \PY{n}{t}\PY{o}{.}\PY{n}{index}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}
\PY{k}{while} \PY{n}{i} \PY{o}{<} \PY{n+nb}{len}\PY{p}{(}\PY{n}{u}\PY{p}{)}\PY{p}{:}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{i}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{i}\PY{o}{+}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{)}\PY{p}{:}
\PY{n}{u}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}
\PY{n}{i} \PY{o}{=} \PY{n}{i} \PY{o}{+} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}
\PY{k}{if} \PY{o+ow}{not} \PY{p}{(}\PY{n}{t}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1} \PY{o+ow}{or} \PY{p}{(}\PY{n}{t}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g} \PY{o+ow}{and} \PY{n}{t}\PY{p}{[}\PY{n}{i}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n}{i} \PY{o}{=} \PY{n}{t}\PY{o}{.}\PY{n}{index}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{i}\PY{p}{)}
\PY{k}{except} \PY{p}{(}\PY{n+ne}{ValueError}\PY{p}{,} \PY{n+ne}{IndexError}\PY{p}{)}\PY{p}{:}
\PY{k}{pass}
\PY{k}{return} \PY{n+nb}{tuple}\PY{p}{(}\PY{n}{u}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{truncate}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{t}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Truncates a word t to its admissible part.}
\PY{l+s+sd}{ Marks the ambiguous part with type 2g+2 if necessary.}
\PY{l+s+sd}{ """}
\PY{k}{return} \PY{n}{ExtendedTypesBuilderWeightLength}\PY{o}{.}\PY{n}{truncate}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{mark\char`\_{}ambiguous}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{compute\char`\_{}extended\char`\_{}types}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Puts in self.words a list of all admissible types.}
\PY{l+s+sd}{ The construction starts from a collection of types, }
\PY{l+s+sd}{ and tries to extend them.}
\PY{l+s+sd}{ If they are not extendable, put them in result.}
\PY{l+s+sd}{ Otherwise, go on with the new types.}
\PY{l+s+sd}{ """}
\PY{n}{result} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{c}{# in current\char`\_{}pool, store (word, weight of the word, }
\PY{c}{# number of ambiguous letters at the end of word modulo 2g-1) }
\PY{n}{current\char`\_{}pool} \PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{n}{length} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{g} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{genus}
\PY{n}{types} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}
\PY{k}{while} \PY{n}{length} \PY{o}{<} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{length}\PY{p}{:}
\PY{n}{length} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{temp\char`\_{}pool} \PY{o}{=} \PY{p}{[}\PY{p}{]}
\PY{k}{for} \PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{w}\PY{p}{,} \PY{n}{undet}\PY{p}{)} \PY{o+ow}{in} \PY{n}{current\char`\_{}pool}\PY{p}{:}
\PY{k}{if} \PY{n}{w} \PY{o}{>} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}weight}\PY{p}{:}
\PY{n}{result} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{n}{t}\PY{p}{]}
\PY{k}{elif} \PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{:}
\PY{n}{temp\char`\_{}pool} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{n}{t}\PY{o}{+}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n}{w}\PY{o}{+}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)}\PY{p}{]}
\PY{k}{elif} \PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{:}
\PY{n}{temp\char`\_{}pool} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{n}{t}\PY{o}{+}\PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n}{w} \PY{o}{+} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{types}\PY{p}{)}
\PY{k}{if} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{i}\PY{p}{]} \PY{o}{>} \PY{l+m+mi}{0}\PY{p}{]}
\PY{k}{else}\PY{p}{:}
\PY{c}{# last letter is ambiguous}
\PY{n}{undet} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{k}{if} \PY{n}{undet} \PY{o}{==} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{p}{:}
\PY{c}{# a half-turn around an octagon is finished.}
\PY{c}{# the word can be extended with any definite type,}
\PY{c}{# or again with an ambiguity.}
\PY{n}{temp\char`\_{}pool} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{n}{t}\PY{o}{+}\PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n}{w}\PY{o}{+}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{n}{undet} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{temp\char`\_{}pool} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{p}{(}\PY{n}{t}\PY{o}{+}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{,}\PY{p}{)}\PY{p}{,} \PY{n}{w}\PY{o}{+}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{weights}\PY{p}{[}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{undet}\PY{p}{)}\PY{p}{]}
\PY{n}{current\char`\_{}pool} \PY{o}{=} \PY{n}{temp\char`\_{}pool}
\PY{n}{result} \PY{o}{+}\PY{o}{=} \PY{p}{[}\PY{n}{t} \PY{k}{for} \PY{p}{(}\PY{n}{t}\PY{p}{,}\PY{n}{w}\PY{p}{,} \PY{n}{undet}\PY{p}{)} \PY{o+ow}{in} \PY{n}{current\char`\_{}pool}\PY{p}{]}
\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{words} \PY{o}{=} \PY{n}{result}
\PY{k}{def} \PY{n+nf}{growth\char`\_{}and\char`\_{}A}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{)}\PY{p}{:}
\PY{n}{growth}\PY{p}{,} \PY{n}{Azero} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{growth\char`\_{}and\char`\_{}Azero}\PY{p}{(}\PY{p}{)}
\PY{n}{types} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{types}
\PY{n}{g} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{genus}
\PY{n}{words} \PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{extended\char`\_{}types}\PY{p}{(}\PY{p}{)}
\PY{c}{# precompute all the possible values of A0(w\char`\_{}\PYZob{}k-1\PYZcb{})growth\char`\^{}\PYZob{}-k+1\PYZcb{}}
\PY{n}{precompute} \PY{o}{=} \PY{p}{[}\PY{p}{[}\PY{l+m+mi}{0} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}length}\PY{p}{)}\PY{p}{]} \PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{types}\PY{p}{)}\PY{p}{]}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{types}\PY{p}{)}\PY{p}{:}
\PY{n}{precompute}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{n}{Azero}\PY{p}{[}\PY{n}{j}\PY{p}{]}
\PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{max\char`\_{}length}\PY{p}{)}\PY{p}{:}
\PY{n}{precompute}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{precompute}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{[}\PY{n}{i}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{/}\PY{n}{growth}
\PY{c}{# compute all the values of A}
\PY{n}{A} \PY{o}{=} \PY{p}{[}\PY{l+m+mi}{0} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{words}\PY{p}{)}\PY{p}{)}\PY{p}{]}
\PY{k}{for} \PY{p}{(}\PY{n}{i}\PY{p}{,}\PY{n}{t}\PY{p}{)} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{words}\PY{p}{)}\PY{p}{:}
\PY{n}{m} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{ambiguous\char`\_{}count} \PY{o}{=} \PY{l+m+mi}{0}
\PY{c}{# find the last non-ambiguous position}
\PY{k}{for} \PY{n}{true\char`\_{}length} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{t}\PY{p}{)}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{-}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:}
\PY{k}{if} \PY{n}{t}\PY{p}{[}\PY{n}{true\char`\_{}length}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{:}
\PY{k}{break}
\PY{c}{# count the multiplicities between 0 and true\char`\_{}length}
\PY{k}{for} \PY{n}{j} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{true\char`\_{}length}\PY{p}{)}\PY{p}{:}
\PY{k}{if} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2} \PY{o+ow}{and} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{:}
\PY{n}{m} \PY{o}{*}\PY{o}{=} \PY{n+nb+bp}{self}\PY{o}{.}\PY{n}{M}\PY{p}{[}\PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{p}{]}\PY{p}{,} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]}
\PY{k}{elif} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2} \PY{o+ow}{and} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{+}\PY{l+m+mi}{2}\PY{p}{:}
\PY{n}{mult} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2} \PY{k}{if} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1} \PY{o+ow}{and} \PY{n}{t}\PY{p}{[}\PY{n}{j}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{!=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{2} \PY{k}{else} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{3}
\PY{n}{m} \PY{o}{*}\PY{o}{=} \PY{n}{mult}
\PY{n}{ambiguous\char`\_{}count} \PY{o}{=} \PY{n}{ambiguous\char`\_{}count} \PY{o}{-} \PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}
\PY{k}{else}\PY{p}{:}
\PY{n}{ambiguous\char`\_{}count} \PY{o}{+}\PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{m} \PY{o}{=} \PY{n}{m} \PY{o}{*} \PY{l+m+mi}{2}\PY{o}{\char`\^{}}\PY{p}{(}\PY{n}{ambiguous\char`\_{}count}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{c}{# count the multiplicities due to the ambiguities after true\char`\_{}length}
\PY{n}{last\char`\_{}letter} \PY{o}{=} \PY{n}{t}\PY{p}{[}\PY{n}{true\char`\_{}length}\PY{p}{]}
\PY{n}{remainder} \PY{o}{=} \PY{n}{floor}\PY{p}{(}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{t}\PY{p}{)} \PY{o}{-} \PY{n}{true\char`\_{}length} \PY{o}{-} \PY{l+m+mi}{2}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{k}{if} \PY{n}{remainder} \PY{o}{>} \PY{l+m+mi}{0}\PY{p}{:}
\PY{n}{true\char`\_{}length} \PY{o}{+}\PY{o}{=} \PY{n}{remainder}\PY{o}{*}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{g}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{m} \PY{o}{=} \PY{n}{m}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{\char`\^{}}\PY{n}{remainder}
\PY{n}{A}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{o}{=} \PY{n}{m} \PY{o}{*} \PY{n}{precompute}\PY{p}{[}\PY{n}{last\char`\_{}letter}\PY{p}{]}\PY{p}{[}\PY{n}{true\char`\_{}length}\PY{p}{]}
\PY{k}{return} \PY{n}{growth}\PY{p}{,} \PY{n}{A}
\PY{k}{class} \PY{n+nc}{ExtendedTypesBuilderSurfaceLength}\PY{p}{(}\PY{n}{ExtendedTypesBuilderSurfaceWeightLength}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Extended type builder that only selects admissible}
\PY{l+s+sd}{ words through their length"""}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{n}{length}\PY{p}{)}\PY{p}{:}
\PY{n}{ExtendedTypesBuilderSurfaceWeightLength}\PY{o}{.}\PY{n}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}
\PY{n}{genus}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{0} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{genus}\PY{o}{+}\PY{l+m+mi}{3}\PY{p}{)}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{n}{length}\PY{p}{)}
\PY{k}{class} \PY{n+nc}{ExtendedTypesBuilderSurfaceWeight}\PY{p}{(}\PY{n}{ExtendedTypesBuilderSurfaceWeightLength}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""Extended type builder that only selects admissible}
\PY{l+s+sd}{ words through their weight"""}
\PY{k}{def} \PY{n+nf}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}\PY{p}{:}
\PY{n}{ExtendedTypesBuilderSurfaceWeightLength}\PY{o}{.}\PY{n}{\char`\_{}\char`\_{}init\char`\_{}\char`\_{}}\PY{p}{(}\PY{n+nb+bp}{self}\PY{p}{,}
\PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,} \PY{l+m+mi}{10000}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{test\char`\_{}length\char`\_{}surface}\PY{p}{(}\PY{n}{genus}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{)}\PY{p}{:}
\PY{n}{e} \PY{o}{=} \PY{n}{ExtendedTypesBuilderSurfaceLength}\PY{p}{(}\PY{n}{genus}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{)}
\PY{n}{r} \PY{o}{=} \PY{n}{RhoEstimatorExt}\PY{p}{(}\PY{n}{e}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{genus = }\PY{l+s}{"}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{l+s}{"}\PY{l+s}{, max\char`\_{}length = }\PY{l+s}{"}\PY{p}{,} \PY{n}{max\char`\_{}length}\PY{p}{,}\char`\\{}
\PY{l+s}{"}\PY{l+s}{, matrix size = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{estimate = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\PY{k}{def} \PY{n+nf}{test\char`\_{}weight\char`\_{}surface}\PY{p}{(}\PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}\PY{p}{:}
\PY{n}{e} \PY{o}{=} \PY{n}{ExtendedTypesBuilderSurfaceWeight}\PY{p}{(}\PY{n}{genus}\PY{p}{,} \PY{n}{weights}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{)}
\PY{n}{r} \PY{o}{=} \PY{n}{RhoEstimatorExt}\PY{p}{(}\PY{n}{e}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{genus = }\PY{l+s}{"}\PY{p}{,} \PY{n}{genus}\PY{p}{,} \PY{l+s}{"}\PY{l+s}{, max\char`\_{}weight = }\PY{l+s}{"}\PY{p}{,} \PY{n}{max\char`\_{}weight}\PY{p}{,}\char`\\{}
\PY{l+s}{"}\PY{l+s}{, matrix size = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{estimate = }\PY{l+s}{"}\PY{p}{,} \PY{n}{r}\PY{o}{.}\PY{n}{estimate}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}length\char`\_{}surface}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{11}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_length = 11 , matrix size = 111331
estimate = 0.662752835287
Time: CPU 13.36 s, Wall: 13.40 s
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight\char`\_{}surface}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{17}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_weight = 17 , matrix size = 98406
estimate = 0.662754827875
Time: CPU 11.58 s, Wall: 11.60 s
\end{Verbatim}
Again, weights give better estimates than length.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight\char`\_{}surface}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{24}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_weight = 24 , matrix size = 5117838
estimate = 0.662770548031
Time: CPU 1170.24 s, Wall: 1173.11 s
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight\char`\_{}surface}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{25}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 2 , max_weight = 25 , matrix size = 8999902
estimate = 0.662772114698
Time: CPU 2105.08 s, Wall: 2212.72 s
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{time} \PY{n}{test\char`\_{}weight\char`\_{}surface}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{4}\PY{p}{,} \PY{l+m+mi}{5}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2}\PY{p}{,} \PY{l+m+mi}{6}\PY{p}{]}\PY{p}{,} \PY{l+m+mi}{25}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
genus = 3 , max_weight = 25 , matrix size = 7307293
estimate = 0.55277355933
Time: CPU 1662.42 s, Wall: 1666.67 s
\end{Verbatim}
We will now compare these estimates with the previously best known bounds, due to Bartholdi.
We redo Bartholdi's computations in high precision. The method is first to find the largest real root $\zeta$ of the polynomial $((dx)^{m-1}-1)(x-1)+2(dx-1)$, for $m=d=4g$. (Beware, there is a typo in the article on Page 11, Line 10: he writes $d^m \zeta^m$, but he really means $d^{m-1}\zeta^{m-1}$; this is what comes out of his equation (3), and this is what he actually uses to compute $\zeta$.) Then a lower bound for the spectral radius is obtained from a root of an explicit polynomial equation involving $\zeta$.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k}{def} \PY{n+nf}{bartholdiLower}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{print\char`\_{}zeta} \PY{o}{=} \PY{n}{false}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""returns Bartholdi's lower bound on the spectral radius }
\PY{l+s+sd}{ in the surface group of genus g.}
\PY{l+s+sd}{ Computations are exact, made only with algebraic numbers"""}
\PY{n}{m} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}
\PY{n}{d} \PY{o}{=} \PY{l+m+mi}{4}\PY{o}{*}\PY{n}{g}
\PY{n}{K}\PY{o}{.}\PY{o}{<}\PY{n}{x}\PY{o}{>} \PY{o}{=} \PY{n}{AA}\PY{p}{[}\PY{l+s}{'}\PY{l+s}{x}\PY{l+s}{'}\PY{p}{]}
\PY{n}{P} \PY{o}{=} \PY{p}{(}\PY{p}{(}\PY{n}{d}\PY{o}{*}\PY{n}{x}\PY{p}{)}\PY{o}{\char`\^{}}\PY{p}{(}\PY{n}{m}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{x}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{+}\PY{l+m+mi}{2}\PY{o}{*}\PY{p}{(}\PY{n}{d}\PY{o}{*}\PY{n}{x}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{zeta}\PY{o}{=}\PY{n+nb}{max}\PY{p}{(}\PY{n}{P}\PY{o}{.}\PY{n}{roots}\PY{p}{(}\PY{n}{multiplicities}\PY{o}{=}\PY{n+nb+bp}{False}\PY{p}{)}\PY{p}{)}
\PY{k}{if} \PY{n}{print\char`\_{}zeta}\PY{p}{:}
\PY{k}{print} \PY{l+s}{"}\PY{l+s}{zeta = }\PY{l+s}{"}\PY{p}{,} \PY{n}{zeta}
\PY{c}{# We make intermediate computations in the symbolic ring,}
\PY{c}{# since simplifications are handled much more efficiently.}
\PY{c}{# The only drawback is that we have to convert back to polynomials in the end}
\PY{n}{t}\PY{p}{,}\PY{n}{u}\PY{p}{,}\PY{n}{z} \PY{o}{=} \PY{n}{var}\PY{p}{(}\PY{l+s}{'}\PY{l+s}{t,u,z}\PY{l+s}{'}\PY{p}{)}
\PY{n}{f} \PY{o}{=} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{d}\PY{o}{*}\PY{n}{t}\PY{o}{\char`\^{}}\PY{n}{m}
\PY{n}{Hsing} \PY{o}{=} \PY{n}{t}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{+}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{-}\PY{n}{u}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{l+m+mi}{1}\PY{o}{+}\PY{n}{u}\PY{p}{)}\PY{o}{*}\PY{n}{t}\PY{o}{\char`\^{}}\PY{l+m+mi}{2}\PY{p}{)}
\PY{n}{g1sing} \PY{o}{=} \PY{n}{Hsing}\PY{p}{(}\PY{n}{t} \PY{o}{=} \PY{n}{t}\PY{o}{*}\PY{n}{z}\PY{p}{)}
\PY{n}{g2sing} \PY{o}{=} \PY{n}{g1sing}\PY{p}{(}\PY{n}{t}\PY{o}{=} \PY{n}{t}\PY{o}{*}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{n}{f}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{*}\PY{n}{f}\PY{p}{)}\PY{p}{,} \PY{n}{u} \PY{o}{=} \PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{l+m+mi}{2}\PY{p}{)}\PY{o}{*}\PY{n}{f}\PY{o}{/}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{n}{f}\PY{p}{)}\PY{p}{)}
\PY{n}{A}\PY{o}{=}\PY{p}{(}\PY{n}{g2sing}\PY{o}{\char`\^{}}\PY{l+m+mi}{2}\PY{o}{-}\PY{l+m+mi}{1}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{4}\PY{o}{*}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{o}{.}\PY{n}{simplify\char`\_{}full}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{numerator}\PY{p}{(}\PY{p}{)}
\PY{n}{A}\PY{o}{=}\PY{n}{A}\PY{p}{(}\PY{n}{z}\PY{o}{=}\PY{n}{SR}\PY{p}{(}\PY{n}{zeta}\PY{p}{)}\PY{p}{)}
\PY{c}{#Convert everything back to polynomials}
\PY{n}{R}\PY{o}{.}\PY{o}{<}\PY{n}{t}\PY{o}{>}\PY{o}{=}\PY{n}{AA}\PY{p}{[}\PY{l+s}{'}\PY{l+s}{t}\PY{l+s}{'}\PY{p}{]}
\PY{n}{A}\PY{o}{=}\PY{n}{R}\PY{p}{(}\PY{n}{A}\PY{p}{)}
\PY{n}{alpha}\PY{o}{=}\PY{n+nb}{min}\PY{p}{(}\PY{p}{[}\PY{n}{r} \PY{k}{for} \PY{n}{r} \PY{o+ow}{in} \PY{n}{A}\PY{o}{.}\PY{n}{roots}\PY{p}{(}\PY{n}{multiplicities} \PY{o}{=} \PY{n+nb+bp}{False}\PY{p}{)} \PY{k}{if} \PY{n}{r} \PY{o}{>} \PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}
\PY{n}{rho} \PY{o}{=} \PY{n}{alpha}\PY{o}{/}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{+}\PY{p}{(}\PY{n}{d}\PY{o}{-}\PY{l+m+mi}{1}\PY{p}{)}\PY{o}{*}\PY{n}{alpha}\PY{o}{\char`\^{}}\PY{l+m+mi}{2}\PY{p}{)}
\PY{k}{return} \PY{l+m+mi}{1}\PY{o}{/}\PY{p}{(}\PY{n}{d}\PY{o}{*}\PY{n}{rho}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{bartholdiLower}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{true}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
zeta = 0.999993324015561?
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.6624219223029230?$
}
Let us check that our value of $\zeta$ coincides with the value given by Bartholdi:
\begin{Verbatim}[commandchars=\\\{\}]
\PY{l+m+mi}{1}\PY{o}{-}\PY{l+m+mf}{0.63}\PY{o}{*}\PY{l+m+mi}{10}\PY{o}{\char`\^{}}\PY{p}{(}\PY{o}{-}\PY{l+m+mi}{5}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.999993700000000$
}
The value given by Bartholdi for $\zeta$ differs slightly from ours, probably because of rounding errors. Since our computation is exact, relying only on algebraic numbers, our value should be the correct one. The resulting lower bound is slightly better than the one Bartholdi claims in his article!
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{bartholdiLower}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{n}{true}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[formatcom=\color{blue}]
zeta = 0.9999999999703906?
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.5527735401122323?$
}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{l+m+mi}{1}\PY{o}{-}\PY{l+m+mf}{0.29}\PY{o}{*}\PY{l+m+mi}{10}\PY{o}{\char`\^{}}\PY{p}{(}\PY{o}{-}\PY{l+m+mi}{10}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.999999999971000$
}
Again, Bartholdi's value for $\zeta$ is very close to ours, but not exactly equal to it. The lower bound we get on $\rho$ agrees with the one claimed in his paper.
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{bartholdiLower}\PY{p}{(}\PY{l+m+mi}{4}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.484122920740487?$
}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{bartholdiLower}\PY{p}{(}\PY{l+m+mi}{5}\PY{p}{)}
\end{Verbatim}
{\color{blue}
$\newcommand{\Bold}[1]{\mathbf{#1}}0.4358898943553?$
}
Our bounds are significantly better in genus $2$ (even without using extended types), marginally better in genus $3$ (but this requires very long extended types, corresponding to very large matrices), and worse in genus $4$ (and certainly also in higher genus).
\begin{Verbatim}[commandchars=\\\{\}]
\PY{o}{\char`\%{}}\PY{n}{cython}
\PY{c}{# Since multiplication of double float sparse matrices is not}
\PY{c}{# specifically implemented in sage, it is very slow.}
\PY{c}{# We provide a Cython version.}
\PY{c}{# All the number crunching part (the while True loop) is plain C, }
\PY{c}{# and therefore very fast}
\PY{k}{from} \PY{n+nn}{libc.stdlib} \PY{k}{cimport} \PY{n}{malloc}\PY{p}{,} \PY{n}{free}
\PY{k}{def} \PY{n+nf}{max\char`\_{}expansion\char`\_{}sparse\char`\_{}double\char`\_{}cython}\PY{p}{(}\PY{n}{M}\PY{p}{,} \PY{n}{precision} \PY{o}{=} \PY{l+m+mf}{10}\PY{o}{\char`\^{}}\PY{p}{(}\PY{o}{-}\PY{l+m+mf}{50}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{l+s+sd}{"""M is a matrix with nonnegative entries.}
\PY{l+s+sd}{ returns a number that is less than or equal to the}
\PY{l+s+sd}{ maximum of (q, M q) for q in the unit sphere. }
\PY{l+s+sd}{ Computations are done within the given precision.}
\PY{l+s+sd}{ """}
\PY{k}{cdef} \PY{p}{:}
\PY{n}{Py\char`\_{}ssize\char`\_{}t} \PY{n}{k}\PY{p}{,} \PY{n}{vect\char`\_{}size}\PY{p}{,} \PY{n}{mat\char`\_{}size}\PY{p}{,} \PY{n}{i}
\PY{n}{double} \PY{o}{*}\PY{n}{w}\PY{p}{,} \PY{n}{diff}\PY{p}{,} \PY{n}{norm}\PY{p}{,} \PY{n}{expansion}\PY{p}{,} \PY{n}{norm\char`\_{}inv}
\PY{n}{double} \PY{n}{prec} \PY{o}{=} \PY{n}{precision}
\PY{n}{nz} \PY{o}{=} \PY{n}{M}\PY{o}{.}\PY{n}{nonzero\char`\_{}positions}\PY{p}{(}\PY{n}{copy}\PY{o}{=}\PY{n+nb+bp}{False}\PY{p}{)}
\PY{n}{mat\char`\_{}size} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n}{nz}\PY{p}{)}
\PY{n}{vect\char`\_{}size} \PY{o}{=} \PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}
\PY{c}{# Allocate the memory}
\PY{n}{cdef}\PY{p}{:}
\PY{n+nb}{int} \PY{o}{*}\PY{n}{matrix\char`\_{}line} \PY{o}{=} \PY{o}{<}\PY{n+nb}{int} \PY{o}{*}\PY{o}{>}\PY{n}{malloc}\PY{p}{(}\PY{n}{mat\char`\_{}size} \PY{o}{*} \PY{n}{sizeof}\PY{p}{(}\PY{n+nb}{int}\PY{p}{)}\PY{p}{)}
\PY{n+nb}{int} \PY{o}{*}\PY{n}{matrix\char`\_{}column} \PY{o}{=} \PY{o}{<}\PY{n+nb}{int} \PY{o}{*}\PY{o}{>}\PY{n}{malloc}\PY{p}{(}\PY{n}{mat\char`\_{}size} \PY{o}{*} \PY{n}{sizeof}\PY{p}{(}\PY{n+nb}{int}\PY{p}{)}\PY{p}{)}
\PY{n}{double} \PY{o}{*}\PY{n}{matrix\char`\_{}coeff} \PY{o}{=} \PY{o}{<}\PY{n}{double} \PY{o}{*}\PY{o}{>}\PY{n}{malloc}\PY{p}{(}\PY{n}{mat\char`\_{}size} \PY{o}{*}\PY{n}{sizeof}\PY{p}{(}\PY{n}{double}\PY{p}{)}\PY{p}{)}
\PY{n}{double} \PY{o}{*}\PY{n}{u} \PY{o}{=} \PY{o}{<}\PY{n}{double} \PY{o}{*}\PY{o}{>} \PY{n}{malloc}\PY{p}{(}\PY{n}{vect\char`\_{}size} \PY{o}{*} \PY{n}{sizeof}\PY{p}{(}\PY{n}{double}\PY{p}{)}\PY{p}{)}
\PY{n}{double} \PY{o}{*}\PY{n}{v} \PY{o}{=} \PY{o}{<}\PY{n}{double} \PY{o}{*}\PY{o}{>} \PY{n}{malloc}\PY{p}{(}\PY{n}{vect\char`\_{}size} \PY{o}{*} \PY{n}{sizeof}\PY{p}{(}\PY{n}{double}\PY{p}{)}\PY{p}{)}
\PY{k}{if} \PY{p}{(}\PY{p}{(}\PY{o+ow}{not} \PY{n}{matrix\char`\_{}line}\PY{p}{)} \PY{o+ow}{or} \PY{p}{(}\PY{o+ow}{not} \PY{n}{matrix\char`\_{}column}\PY{p}{)} \PY{o+ow}{or} \PY{p}{(}\PY{o+ow}{not} \PY{n}{matrix\char`\_{}coeff}\PY{p}{)}
\PY{o+ow}{or} \PY{p}{(}\PY{o+ow}{not} \PY{n}{u}\PY{p}{)} \PY{o+ow}{or} \PY{p}{(}\PY{o+ow}{not} \PY{n}{v}\PY{p}{)}\PY{p}{)}\PY{p}{:}
\PY{n}{free}\PY{p}{(}\PY{n}{matrix\char`\_{}line}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{matrix\char`\_{}column}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{matrix\char`\_{}coeff}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{u}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{v}\PY{p}{)}
\PY{k}{raise} \PY{n+ne}{MemoryError}\PY{p}{(}\PY{p}{)}
\PY{c}{# Store in the arrays the nonzero coefficients of M/2 }
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{mat\char`\_{}size}\PY{p}{)}\PY{p}{:}
\PY{n}{matrix\char`\_{}line}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{=} \PY{n}{nz}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{[}\PY{l+m+mf}{0}\PY{p}{]}
\PY{n}{matrix\char`\_{}column}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{=} \PY{n}{nz}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{[}\PY{l+m+mf}{1}\PY{p}{]}
\PY{n}{matrix\char`\_{}coeff}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{=} \PY{p}{<}\PY{k+kt}{double}\PY{p}{>}\PY{n}{M}\PY{p}{[}\PY{n}{matrix\char`\_{}line}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{,} \PY{n}{matrix\char`\_{}column}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{]}\PY{o}{/}\PY{l+m+mf}{2}
\PY{c}{# Fill u with a uniform vector of norm 1 }
\PY{n}{d} \PY{o}{=} \PY{l+m+mf}{1}\PY{o}{/}\PY{n}{sqrt}\PY{p}{(}\PY{p}{<}\PY{k+kt}{double}\PY{p}{>}\PY{n}{M}\PY{o}{.}\PY{n}{nrows}\PY{p}{(}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{vect\char`\_{}size}\PY{p}{)}\PY{p}{:}
\PY{n}{u}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{=} \PY{n}{d}
\PY{c}{# Apply iteratively the matrix (M+M.transpose())/2 to u,}
\PY{c}{# to converge towards the maximal eigenvector}
\PY{n}{expansion} \PY{o}{=} \PY{l+m+mf}{0}
\PY{k}{while} \PY{n+nb+bp}{True}\PY{p}{:}
\PY{c}{# v = M * u}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{vect\char`\_{}size}\PY{p}{)}\PY{p}{:}
\PY{n}{v}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{=} \PY{l+m+mf}{0}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{mat\char`\_{}size}\PY{p}{)}\PY{p}{:}
\PY{n}{v}\PY{p}{[}\PY{n}{matrix\char`\_{}line}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{n}{matrix\char`\_{}coeff}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{*} \PY{n}{u}\PY{p}{[}\PY{n}{matrix\char`\_{}column}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{]}
\PY{n}{v}\PY{p}{[}\PY{n}{matrix\char`\_{}column}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{]} \PY{o}{+}\PY{o}{=} \PY{n}{matrix\char`\_{}coeff}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{*} \PY{n}{u}\PY{p}{[}\PY{n}{matrix\char`\_{}line}\PY{p}{[}\PY{n}{k}\PY{p}{]}\PY{p}{]}
\PY{c}{# norm = v.norm()}
\PY{n}{norm} \PY{o}{=} \PY{l+m+mf}{0}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{vect\char`\_{}size}\PY{p}{)}\PY{p}{:}
\PY{n}{norm} \PY{o}{+}\PY{o}{=} \PY{n}{v}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{*} \PY{n}{v}\PY{p}{[}\PY{n}{k}\PY{p}{]}
\PY{n}{norm} \PY{o}{=} \PY{n}{sqrt}\PY{p}{(}\PY{n}{norm}\PY{p}{)}
\PY{n}{diff} \PY{o}{=} \PY{n}{norm} \PY{o}{-} \PY{n}{expansion}
\PY{n}{expansion} \PY{o}{=} \PY{n}{norm}
\PY{c}{# v = v/v.norm()}
\PY{n}{norm\char`\_{}inv} \PY{o}{=} \PY{l+m+mf}{1}\PY{o}{/}\PY{n}{norm}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{xrange}\PY{p}{(}\PY{n}{vect\char`\_{}size}\PY{p}{)}\PY{p}{:}
\PY{n}{v}\PY{p}{[}\PY{n}{k}\PY{p}{]} \PY{o}{*}\PY{o}{=} \PY{n}{norm\char`\_{}inv}
\PY{c}{# swap u and v}
\PY{n}{w} \PY{o}{=} \PY{n}{u}
\PY{n}{u} \PY{o}{=} \PY{n}{v}
\PY{n}{v} \PY{o}{=} \PY{n}{w}
\PY{k}{if} \PY{p}{(}\PY{n}{diff} \PY{o}{<} \PY{n}{prec}\PY{p}{)}\PY{p}{:}
\PY{k}{break}
\PY{n}{free}\PY{p}{(}\PY{n}{matrix\char`\_{}line}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{matrix\char`\_{}column}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{matrix\char`\_{}coeff}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{u}\PY{p}{)}
\PY{n}{free}\PY{p}{(}\PY{n}{v}\PY{p}{)}
\PY{k}{return} \PY{n}{expansion}
\end{Verbatim}
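As a hypothetical usage sketch (not part of the original worksheet; the matrix and the tolerance below are arbitrary), the routine can be called on any sparse matrix with nonnegative entries over \texttt{RDF}:
\begin{Verbatim}
# Hypothetical usage example (not from the original worksheet).
# Bound from below the top eigenvalue of (M + M.transpose())/2
# for a small sparse nonnegative matrix M over RDF.
M = matrix(RDF, 3, 3, {(0,1): 1.0, (1,2): 2.0, (2,0): 3.0}, sparse=True)
bound = max_expansion_sparse_double_cython(M, precision=1e-12)
print "lower bound on the top eigenvalue of (M+M^T)/2:", bound
\end{Verbatim}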
\end{document}
\begin{document}
\title{A Simple Randomized Algorithm to Compute Harmonic Numbers and Logarithms}
\begin{abstract}
Given a list of N numbers, the maximum can be computed in N
iterations. During these N iterations, the maximum gets updated on
average as many times as the Nth harmonic number. We first use this
fact to approximate the Nth harmonic number as a side effect. Further,
using the fact that the Nth harmonic number is equal to the natural
logarithm of N plus the Euler-Mascheroni constant plus an error term
that goes to zero with N, we approximate the natural logarithm from
the harmonic number. To improve
accuracy, we repeat the computation over many lists of uniformly
generated random numbers. The algorithm is easily extended to
approximate logarithms with integer bases or rational arguments.
\end{abstract}
\section{Introduction}\label{sec:intro}
We approximately compute the harmonic number and the natural logarithm
of an integer as a side effect of computing the maximum of a list of
$x$ numbers randomly drawn from a uniform distribution. The key point
of this computation is that it basically uses only counting. To
improve accuracy, we repeat the computation multiple times and take
the average. Using the basic properties of the natural logarithm
function, it is simple to extend the algorithm to approximate
logarithms with integer bases or rational arguments. The details
follow.
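Regarding the extensions to other bases and to rational arguments mentioned above, note that they can be based on nothing more than the identities
\[
\log_b(x) = \frac{\ln(x)}{\ln(b)}
\qquad\mbox{and}\qquad
\ln\!\left(\frac{p}{q}\right) = \ln(p) - \ln(q),
\]
so that two runs of the natural-logarithm approximation suffice in either case.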
\section{Computing the Maximum}\label{sec:comput-max}
\begin{figure}
\caption{A Python function to compute the maximum over a list of $x$
numbers in the interval $[0.0, 1.0)$, where $x>0$.}
\label{fig:compute_max}
\end{figure}
Given a list of $x$ numbers in the interval $[0.0, 1.0)$, where $x>0$,
the algorithm (written in the Python programming language) in
Figure~\ref{fig:compute_max} computes the maximum in $x$
iterations. It is well known that during these iterations, the
maximum gets updated $H_x$ times on average, where $H_x$ is the
$x$th harmonic number~\cite{CrStRi01}. The reason is that in a list of
$x$ numbers randomly drawn from a uniform distribution, the $i$th
number is larger than all of the numbers before it with probability
$1/i$, so the expected number of updates is
$\sum_{i=1}^{x} 1/i = H_x$.
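A minimal sketch of such a function is shown below (the exact code of Figure~\ref{fig:compute_max} is not reproduced here, so the function name and details are illustrative); besides the maximum, it returns the number of updates, which is the quantity we exploit:
\begin{verbatim}
import random

def compute_max(x):
    """Return the maximum of x uniform random numbers in [0.0, 1.0)
    together with the number of times the running maximum was updated."""
    maximum = 0.0   # the first draw (almost surely) triggers an update
    updates = 0
    for _ in range(x):
        r = random.random()
        if r > maximum:      # a new left-to-right maximum
            maximum = r
            updates += 1
    return maximum, updates
\end{verbatim}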
\begin{figure}
\caption{A Python function to approximate the natural logarithm as
an average over $n$ iterations of the maximum computation. Each
maximum computation goes over $x$ uniformly random numbers and
produces a new approximation to the $x$th harmonic number $H_x$.}
\label{fig:compute_ln}
\end{figure}
\section{Computing the Harmonic Number and the Natural Logarithm}\label{sec:comput-log}
The $x$th harmonic number $H_x$ is defined as the series
\begin{equation}\label{eq:harmonic}
H_x = \sum_{i=1}^{x} \frac{1}{i} = \ln(x) + \gamma + \epsilon_x
\end{equation}
where $\gamma$ is the Euler-Mascheroni constant (roughly equal to
$0.57721$) and $\epsilon_x$, which is in the interval
$(\frac{1}{2(x+1)}, \frac{1}{2x})$, approaches $0$ as $x$ goes
to infinity~\cite{SoWe07}.
We can rewrite this equation to compute $\ln(x)$ as
\begin{equation}\label{eq:ln}
\ln(x) = H_x - \gamma - \epsilon_x .
\end{equation}
This means an approximation to $H_x$ can be converted to an
approximation to $\ln(x)$.
The algorithm (written in the Python programming language) is given in
Figure~\ref{fig:compute_ln}. The inner loop computes the maximum over
$x$ uniformly random numbers. The outer loop with $n$ iterations is
for accuracy improvement; it computes an approximation to $H_x$ every
iteration as a side effect of the maximum computation rather than the
result of the summation in Equation~\ref{eq:harmonic}. This $H_x$
computation is an approximation for two reasons: 1) $H_x$ is never
an integer except for $x=1$~\cite{SoWe07}, and 2) it is a
probabilistic estimate. After these loops exit, the final $H_x$ is set
to the average over all these approximations. The natural logarithm is
then approximated at the end of this algorithm using
Equation~\ref{eq:ln}, where we set $\epsilon_x$ to its upper bound of
$1/(2x)$.
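A minimal sketch of the whole procedure, reusing the \texttt{compute\_max} sketch above (again, the exact code of Figure~\ref{fig:compute_ln} is not reproduced here, so the names and constants below are illustrative), could look as follows:
\begin{verbatim}
GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def approximate_ln(x, n):
    """Approximate ln(x) by averaging n estimates of H_x, each obtained
    as the update count of a maximum computation over x random numbers."""
    total_updates = 0
    for _ in range(n):
        _, updates = compute_max(x)
        total_updates += updates
    h_x = total_updates / float(n)      # approximate harmonic number H_x
    return h_x - GAMMA - 1.0 / (2 * x)  # ln(x) = H_x - gamma - eps_x, eps_x ~ 1/(2x)
\end{verbatim}
For example, \texttt{approximate\_ln(4**8, 1000)} can then be compared with \texttt{math.log(4**8)}, in the spirit of the experiments of Figure~\ref{fig:results}.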
\section{Results}\label{sec:results}
Results from limited experiments, shown in
Figure~\ref{fig:results}, indicate that the approximation quality is
relatively good, especially for larger arguments. In this figure, the
approximate $\ln(x)$ is the value computed by the algorithm in the
previous section and the library $\ln(x)$ is the log function from the
Python Math library.
Though we used 1000 repetitions for the results shown, separate
limited experiments show that the approximation quality already
improves noticeably with as few as 10 repetitions.
\begin{figure}
\caption{The approximation results on powers of 4 from 1 to 8 with
1000 iterations. The library $\ln(x)$ is \texttt{math.log()}.}
\label{fig:results}
\end{figure}
\section{Pros and Cons}\label{sec:proscons}
This algorithm to approximate the harmonic number and the natural
logarithm takes time proportional to the product of $x$ and $n$. Even
for $n=1$, the time is linear in $x$. As such, this is probably not an
efficient way of computing the harmonic number or the natural
logarithm. What is the use of this algorithm then?
One reason, possibly the main reason, why this algorithm may be
interesting is that it approximately computes a function that occurs
in its own time complexity analysis. Here the functions are the
harmonic number as well as the natural logarithm. Another reason is
that this algorithm uses integer arithmetic only except for the final
averaging and error computation. Finally, this algorithm is easily
parallelizable since the maximum of a list is equal to the maximum
over the maximums of parts of the list.
In the technical literature, there are of course many formulas and
algorithms for computing both functions~\cite{SoWe07}. This is
expected as the harmonic number and the natural logarithm are so
fundamental. This paper is not meant to provide any comparisons with
those algorithms or to claim that it is better; it is mainly a fun
application of the side effect of a well-known and simple
algorithm, namely, the maximum computation.
\section{Conclusions}\label{sec:conclusions}
We provide a simple algorithm that exploits the time complexity
expression of the maximum computation over a list of numbers to
approximate the harmonic number, which it then uses to approximate
the natural logarithm. Limited experiments show that the approximations
are good with small relative and absolute errors. We hope others may
find this algorithm interesting enough to study and potentially
improve. At a minimum, this paper might inspire some
exercises for students of a basic algorithms book like
\cite{CrStRi01}.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We give an elementary self-contained proof of the fact that the walk dimension of the
Brownian motion on an \emph{arbitrary} generalized Sierpi\'{n}ski carpet is greater than
two, no proof of which in this generality had been available in the literature.
Our proof is based solely on the self-similarity and hypercubic symmetry of the associated
Dirichlet form and on several very basic pieces of functional analysis and
the theory of regular symmetric Dirichlet forms. We also present an application of this fact
to the singularity of the energy measures with respect to the canonical self-similar
measure (uniform distribution) in this case, proved first by M.\ Hino in
[\emph{Probab.\ Theory Related Fields} \textbf{132} (2005), no.\ 2, 265--290].
\end{abstract}
\section{Introduction} \label{sec:intro}
It is an established result in the field of analysis on fractals that,
on the \emph{Sierpi\'{n}ski carpet} and certain generalizations of it called
\emph{generalized Sierpi\'{n}ski carpets} (see Figure \ref{fig:GSCs} below),
there exists a canonical diffusion process $\{X_{t}\}_{t\in[0,\infty)}$
which is symmetric with respect to the canonical self-similar measure
(uniform distribution) $\mu$ and satisfies the following estimates for
its transition density (heat kernel) $p_{t}(x,y)$:
\begin{equation}\label{eq:HKEdw}
\begin{split}
\frac{c_{1}}{\mu(B(x,t^{1/d_{\mathrm{w}}}))} \exp\biggl( - \Bigl( \frac{\rho(x,y)^{d_{\mathrm{w}}}}{c_{2}t} \Bigr)^{\frac{1}{d_{\mathrm{w}}-1}}\biggr)
&\leq p_{t}(x,y)\\
&\leq\frac{c_{3}}{\mu(B(x,t^{1/d_{\mathrm{w}}}))} \exp\biggl( - \Bigl( \frac{\rho(x,y)^{d_{\mathrm{w}}}}{c_{4}t} \Bigr)^{\frac{1}{d_{\mathrm{w}}-1}}\biggr)
\end{split}
\end{equation}
for any points $x,y$ and any $t\in(0,\infty)$, where $c_{1},c_{2},c_{3},c_{4}\in(0,\infty)$
are some constants, $\rho$ is the Euclidean metric, $B(x,s)$ denotes
the open ball of radius $s$ centered at $x$, and $d_{\mathrm{w}}\in[2,\infty)$
is a characteristic of the diffusion called its \emph{walk dimension}.
This result was obtained by M.\ T.\ Barlow and R.\ F.\ Bass in their series of papers
\cite{BB89,BB92,BB99} (see also \cite{BB90,KZ,BBK,BBKT}), its direct analog was proved
also for the Sierpi\'{n}ski gasket in \cite{BP}, for nested fractals in \cite{Kum} and
for affine nested fractals in \cite{FHK}, and it is believed for essentially all the
known examples, and has been verified for many of them, that the walk dimension $d_{\mathrm{w}}$
is \emph{strictly greater than two}. Therefore \eqref{eq:HKEdw} implies in particular
that a typical distance the diffusion travels by time $t$ is of order $t^{1/d_{\mathrm{w}}}$
and is much smaller than the order $t^{1/2}$ of such a distance for the Brownian motion on
the Euclidean spaces. The estimates \eqref{eq:HKEdw} with $d_{\mathrm{w}}>2$ are called
\emph{sub-Gaussian estimates} for this reason, and are also known to imply a number of
other anomalous features of the diffusion, one of the most important among which is the
\emph{singularity} of the associated \emph{energy measures} with respect to the
reference measure $\mu$, proved recently in \cite[Theorem 2.13-(a)]{KM};
see also \cite{Kus89,Kus93,BST,Hin05,HN} for earlier results on singularity of
energy measures for diffusions on fractals.
The main concern of this paper is the proof of the strict inequality $d_{\mathrm{w}}>2$
for an \emph{arbitrary} generalized Sierpi\'{n}ski carpet (see Framework \ref{frmwrk:GSC} and
Definition \ref{dfn:GSC} below for its definition). In fact, the existing proof of $d_{\mathrm{w}}>2$
for this case due to Barlow and Bass in \cite[Proof of Proposition 5.1-(a)]{BB99}
requires a certain extra geometric assumption on the generalized Sierpi\'{n}ski carpet
(see Remark \ref{rmk:dwSC} below), and there is no proof of it in the literature that
is applicable to \emph{any} generalized Sierpi\'{n}ski carpet although they claimed
to have one in \cite[Remarks 5.4-1.]{BB99}. The purpose of the present paper is
to give such a proof, one that is moreover elementary, self-contained and based solely
on the self-similarity and hypercubic symmetry of the associated Dirichlet form
(see Theorem \ref{thm:GSCDF} below) and on several very basic pieces of functional analysis
and the theory of regular symmetric Dirichlet forms in \cite[Section 1.4]{FOT}.
This minimality of the requirements in our method is crucial for potential future
applications; in fact, our proof has been adapted by R.\ Shimizu in his recent preprint
\cite{Shi} to show the counterpart of $d_{\mathrm{w}}>2$ for a canonical self-similar
$p$-energy form on the Sierpi\'{n}ski carpet, whose detailed properties are mostly unknown
(except in the case of $p=2$, where his energy form coincides with the Dirichlet form
of the canonical diffusion). As an important consequence of $d_{\mathrm{w}}>2$, we
also see that \cite[Theorem 2.13-(a)]{KM} applies and recovers M.\ Hino's result
in \cite[Subsection 5.2]{Hin05} that for any generalized Sierpi\'{n}ski carpet
the energy measures are singular with respect to the reference measure $\mu$.
This paper is organized as follows. In Section \ref{sec:framework-main-theorem}, we
first introduce the framework of a generalized Sierpi\'{n}ski carpet and the canonical
Dirichlet form on it, then give the precise statement of our main theorem on the strict
inequality $d_{\mathrm{w}}>2$ (Theorem \ref{thm:dwSC}) and deduce the singularity of
the associated energy measures (Corollary \ref{cor:dwSC}). Finally, we give our
elementary self-contained proof of Theorem \ref{thm:dwSC} in Section \ref{sec:proof}.
\begin{notation}
Throughout this paper, we use the following notation and conventions.
\begin{enumerate}[label=\textup{(\arabic*)},align=left,leftmargin=*,topsep=2pt,parsep=0pt,itemsep=2pt]
\item The symbols $\subset$ and $\supset$ for set inclusion
\emph{allow} the case of the equality.
\item $\mathbb{N}:=\{n\in\mathbb{Z}\mid n>0\}$, i.e., $0\not\in\mathbb{N}$.
\item The cardinality (the number of elements) of a set $A$ is denoted by $\#A$.
\item We set $a\vee b:=\max\{a,b\}$, $a\wedge b:=\min\{a,b\}$ and $a^{+}:=a\vee 0$
for $a,b\in[-\infty,\infty]$, and we use the same notation also for
$[-\infty,\infty]$-valued functions and equivalence classes of them. All numerical
functions in this paper are assumed to be $[-\infty,\infty]$-valued.
\item Let $K$ be a non-empty set. We define $\mathbf{1}_{A}=\mathbf{1}_{A}^{K}\in\mathbb{R}^{K}$ for $A\subset K$ by
$\mathbf{1}_{A}(x):=\mathbf{1}_{A}^{K}(x):=\bigl\{\begin{smallmatrix}1 & \textrm{if $x\in A$,}\\ 0 & \textrm{if $x\not\in A$,}\end{smallmatrix}$
and set $\|u\|_{\sup}:=\|u\|_{\sup,K}:=\sup_{x\in K}|u(x)|$ for $u\colon K\to[-\infty,\infty]$.
\item Let $K$ be a topological space. The interior and closure of $A\subset K$ in $K$
are denoted by $\operatorname{int}_{K}A$ and $\overline{A}^{K}$, respectively. We set
$\mathcal{C}(K):=\{u\mid\textrm{$u\colon K\to\mathbb{R}$, $u$ is continuous}\}$
and $\operatorname{supp}_{K}[u]:=\overline{K\setminus u^{-1}(0)}^{K}$ for $u\in\mathcal{C}(K)$.
\item For $d\in\mathbb{N}$, we equip $\mathbb{R}^{d}$ with the Euclidean norm denoted by $|\cdot|$
and set $\mathbf{0}_{d}:=(0)_{k=1}^{d}\in\mathbb{R}^{d}$.
\end{enumerate}
\end{notation}
\begin{figure}
\caption{Generalized Sierpi\'{n}ski carpets: the Sierpi\'{n}ski carpet and the Menger sponge.}
\label{fig:GSCs}
\end{figure}
\section{Framework, the main theorem and an application}\label{sec:framework-main-theorem}
The following presentation, up to Theorem \ref{thm:GSCDF} below, is a brief summary of
the corresponding part in \cite[Section 4]{K:SPFSC}; see \cite[Section 5]{K:oscNRVNP}
and the references therein for further details.
We fix the following setting throughout this and the next sections.
\begin{framework}\label{frmwrk:GSC}
Let $d,l\in\mathbb{N}$, $d\geq 2$, $l\geq 3$ and set $Q_{0}:=[0,1]^{d}$.
Let $S\subsetneq\{0,1,\ldots,l-1\}^{d}$ be non-empty, define
$f_{i}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}$ by $f_{i}(x):=l^{-1}i+l^{-1}x$ for each $i\in S$
and set $Q_{1}:=\bigcup_{i\in S}f_{i}(Q_{0})$, so that $Q_{1}\subsetneq Q_{0}$.
Let $K$ be the \emph{self-similar set} associated with $\{f_{i}\}_{i\in S}$,
i.e., the unique non-empty compact subset of $\mathbb{R}^{d}$ such that
$K=\bigcup_{i\in S}f_{i}(K)$, which exists and satisfies $K\subsetneq Q_{0}$
thanks to $Q_{1}\subsetneq Q_{0}$ by \cite[Theorem 1.1.4]{Kig01}, and
set $F_{i}:=f_{i}|_{K}$ for each $i\in S$ and $\mathrm{GSC}(d,l,S):=(K,S,\{F_{i}\}_{i\in S})$.
Let $\rho\colon K\times K\to[0,\infty)$ be the Euclidean metric on $K$ given by $\rho(x,y):=|x-y|$,
set $d_{\mathrm{f}}:=\log_{l}\#S$, and let $\mu$ be the \emph{self-similar measure} on
$\mathrm{GSC}(d,l,S)$ with weight $(1/\#S)_{i\in S}$, i.e., the unique Borel probability measure
on $K$ such that $\mu=(\#S)\mu\circ F_{i}$ (as Borel measures on $K$) for any $i\in S$,
which exists by \cite[Propositions 1.5.8, 1.4.3, 1.4.4 and Corollary 1.4.8]{Kig01}.
Let $\langle\cdot,\cdot\rangle$ and $\|\!\cdot\!\|_{2}$ denote the inner product
and norm on $L^{2}(K,\mu)$, respectively.
\end{framework}
Recall that $d_{\mathrm{f}}$ is the Hausdorff dimension of $(K,\rho)$ and that
$\mu$ is a constant multiple of the $d_{\mathrm{f}}$-dimensional Hausdorff measure
on $(K,\rho)$; see, e.g., \cite[Proposition 1.5.8 and Theorem 1.5.7]{Kig01}.
Note that $d_{\mathrm{f}}<d$ by $S\subsetneq\{0,1,\ldots,l-1\}^{d}$.
The following definition is due to Barlow and Bass \cite[Section 2]{BB99}, except that
the non-diagonality condition in \cite[Hypotheses 2.1]{BB99} has been strengthened
later in \cite{BBKT} to fill a gap in \cite[Proof of Theorem 3.19]{BB99};
see \cite[Remark 2.10-1.]{BBKT} for some more details of this correction.
\begin{definition}[(Generalized Sierpi\'{n}ski carpet, {\cite[Subsection \textup{2.2}]{BBKT}})]\label{dfn:GSC}
$\mathrm{GSC}(d,l,S)$ is called a \emph{generalized Sierpi\'{n}ski carpet}
if and only if the following four conditions are satisfied:
\begin{enumerate}[label=\textup{(GSC\arabic*)},align=left,leftmargin=*,topsep=2pt,parsep=0pt,itemsep=2pt]
\item\label{GSC1}(Symmetry) $f(Q_{1})=Q_{1}$ for any isometry $f$ of $\mathbb{R}^{d}$ with $f(Q_{0})=Q_{0}$.
\item\label{GSC2}(Connectedness) $Q_{1}$ is connected.
\item\label{GSC3}(Non-diagonality)
$\operatorname{int}_{\mathbb{R}^{d}}\bigl(Q_{1}\cap \prod_{k=1}^{d}[(i_{k}-\varepsilon_{k})l^{-1},(i_{k}+1)l^{-1}]\bigr)$
is either empty or connected for any $(i_{k})_{k=1}^{d}\in\mathbb{Z}^{d}$ and
any $(\varepsilon_{k})_{k=1}^{d}\in\{0,1\}^{d}$.
\item\label{GSC4}(Borders included) $[0,1]\times\{0\}^{d-1}\subset Q_{1}$.
\end{enumerate}
\end{definition}
As special cases of Definition \ref{dfn:GSC}, $\mathrm{GSC}(2,3,S_{\mathrm{SC}})$ and
$\mathrm{GSC}(3,3,S_{\mathrm{MS}})$ are called the \emph{Sierpi\'{n}ski carpet}
and the \emph{Menger sponge}, respectively, where
$S_{\mathrm{SC}}:=\{0,1,2\}^{2}\setminus\{(1,1)\}$ and
$S_{\mathrm{MS}}:=\bigl\{(i_{1},i_{2},i_{3})\in\{0,1,2\}^{3}\bigm|\sum_{k=1}^{3}\mathbf{1}_{\{1\}}(i_{k})\leq 1\bigr\}$
(see Figure \ref{fig:GSCs} above).
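For concreteness, in these two examples the Hausdorff dimension introduced in Framework \ref{frmwrk:GSC} evaluates to
\begin{equation*}
d_{\mathrm{f}}=\log_{3}\#S_{\mathrm{SC}}=\log_{3}8\approx 1.893
\qquad\textrm{and}\qquad
d_{\mathrm{f}}=\log_{3}\#S_{\mathrm{MS}}=\log_{3}20\approx 2.727,
\end{equation*}
respectively, in accordance with the inequality $d_{\mathrm{f}}<d$ noted above.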
See \cite[Remark 2.2]{BB99} for a description of the meaning of each of the four
conditions \ref{GSC1}, \ref{GSC2}, \ref{GSC3} and \ref{GSC4} in Definition \ref{dfn:GSC}.
To be precise, \ref{GSC3} is slightly different from the formulation of the non-diagonality
condition in \cite[Subsection 2.2]{BBKT}, but they have been proved to be equivalent to
each other in \cite[Theorem 2.4]{K:NDSC}; see \cite[\S 2]{K:NDSC} for some other equivalent
formulations of the non-diagonality condition.
Throughout the rest of this paper, we assume that $\mathrm{GSC}(d,l,S)=(K,S,\{F_{i}\}_{i\in S})$
as introduced in Framework \ref{frmwrk:GSC} is a generalized Sierpi\'{n}ski carpet
as defined in Definition \ref{dfn:GSC}.
We next recall the result on the existence and uniqueness of a canonical diffusion
(Brownian motion) on $\mathrm{GSC}(d,l,S)$, which can be presented most efficiently in the language
of its associated (regular symmetric) Dirichlet form on $L^{2}(K,\mu)$ as follows;
see \cite{FOT,CF} for the basics of regular symmetric Dirichlet forms and associated
symmetric Markov processes. Below we state only the final consequence of the unique
characterizations of a canonical Dirichlet form on $\mathrm{GSC}(d,l,S)$ established in \cite{BBKT}
(combined with some complementary discussions in \cite{Hin13,K:oscNRVNP}), and we refer
the reader to \cite[Section 1]{BBKT} for a description of the earlier results on its
existence in \cite{BB89,KZ,BB99}.
\begin{definition}\label{dfn:GSC-isometry}
We define
\begin{equation}\label{eq:GSC-isometry}
\mathcal{G}_{0}:=\{f|_{K}\mid\textrm{$f$ is an isometry of $\mathbb{R}^{d}$, $f(Q_{0})=Q_{0}$}\},
\end{equation}
which forms a finite subgroup of the group of homeomorphisms of $K$ by virtue of \ref{GSC1}.
\end{definition}
\begin{theorem}[({\cite[Theorems \textup{1.2} and \textup{4.32}]{BBKT}}, {\cite[Proposition \textup{5.1}]{Hin13}}, {\cite[Proposition \textup{5.9}]{K:oscNRVNP}})]\label{thm:GSCDF}
There exists a unique (up to constant multiples of $\mathcal{E}$) regular symmetric Dirichlet
form $(\mathcal{E},\mathcal{F})$ on $L^{2}(K,\mu)$ satisfying $\mathcal{E}(u,u)>0$ for some
$u\in\mathcal{F}$, $\mathbf{1}_{K}\in\mathcal{F}$, $\mathcal{E}(\mathbf{1}_{K},\mathbf{1}_{K})=0$, and the following:
\begin{enumerate}[label=\textup{(GSCDF\arabic*)},align=left,leftmargin=*,topsep=2pt,parsep=0pt,itemsep=2pt]
\item\label{GSCDF1}If $u\in \mathcal{F}\cap\mathcal{C}(K)$ and $g\in\mathcal{G}_{0}$
then $u\circ g\in\mathcal{F}$ and $\mathcal{E}(u\circ g,u\circ g)=\mathcal{E}(u,u)$.
\item\label{GSCDF2}$\mathcal{F}\cap\mathcal{C}(K)=\{u\in\mathcal{C}(K)\mid\textrm{$u\circ F_{i}\in\mathcal{F}$ for any $i\in S$}\}$.
\item\label{GSCDF3}There exists $r\in(0,\infty)$ such that for any $u\in\mathcal{F}\cap\mathcal{C}(K)$,
\begin{equation}\label{eq:GSCDF3}
\mathcal{E}(u,u)=\sum_{i\in S}\frac{1}{r}\mathcal{E}(u\circ F_{i},u\circ F_{i}).
\end{equation}
\end{enumerate}
\end{theorem}
Throughout the rest of this paper, we fix $(\mathcal{E},\mathcal{F})$ and $r$ as given in
Theorem \ref{thm:GSCDF}; note that $r$ is uniquely determined by $(\mathcal{E},\mathcal{F})$,
since $\mathcal{E}(u,u)>0$ for some $u\in\mathcal{F}\cap\mathcal{C}(K)$ by the existence of such
$u\in\mathcal{F}$ and the denseness of $\mathcal{F}\cap\mathcal{C}(K)$ in the Hilbert space
$(\mathcal{F},\mathcal{E}_{1}:=\mathcal{E}+\langle\cdot,\cdot\rangle)$.
\begin{definition}\label{dfn:GSCDF}
The regular symmetric Dirichlet form $(\mathcal{E},\mathcal{F})$ on $L^{2}(K,\mu)$
is called the \emph{canonical Dirichlet form} on $\mathrm{GSC}(d,l,S)$,
and the \emph{walk dimension} $d_{\mathrm{w}}$ of $(\mathcal{E},\mathcal{F})$
(or of $\mathrm{GSC}(d,l,S)$) is defined by $d_{\mathrm{w}}:=\log_{l}(\#S/r)$.
\end{definition}
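Note that, since $d_{\mathrm{w}}=\log_{l}(\#S/r)$ with $r$ the scaling factor from \ref{GSCDF3}, the strict inequality $d_{\mathrm{w}}>2$ asserted in Theorem \ref{thm:dwSC} below can be restated equivalently as
\begin{equation*}
r<(\#S)l^{-2},
\end{equation*}
i.e., as a strict upper bound on the energy scaling factor of $(\mathcal{E},\mathcal{F})$.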
\begin{remark}\label{rmk:GSCDF}
The walk dimension $d_{\mathrm{w}}$ defined in Definition \ref{dfn:GSCDF} coincides
with the exponent $d_{\mathrm{w}}$ in \eqref{eq:HKEdw} for the regular symmetric
Dirichlet space $(K,\mu,\mathcal{E},\mathcal{F})$ equipped with the Euclidean metric $\rho$;
see the proof of Corollary \ref{cor:dwSC} below and the references therein for details.
\end{remark}
The main result of this paper is an elementary self-contained proof of the following
theorem based solely on the setting and properties stated in Framework \ref{frmwrk:GSC},
Definitions \ref{dfn:GSC}, \ref{dfn:GSC-isometry} and Theorem \ref{thm:GSCDF}
(except the uniqueness of $(\mathcal{E},\mathcal{F})$) and on several very basic pieces of
functional analysis and the theory of regular symmetric Dirichlet forms in \cite[Section 1.4]{FOT}.
To keep the whole treatment as elementary and self-contained as possible,
in our proof of Theorem \ref{thm:dwSC} we refrain from using any known properties of
$(\mathcal{E},\mathcal{F})$ other than those in Theorem \ref{thm:GSCDF}.
\begin{theorem}[(Cf.\ {\cite[Remarks 5.4-1.]{BB99}})]\label{thm:dwSC}
$d_{\mathrm{w}}>2$.
\end{theorem}
\begin{remark}\label{rmk:dwSC}
\emph{No proof of Theorem \textup{\ref{thm:dwSC}} in the present generality had been available
in the literature}, although Barlow and Bass claimed to have one in \cite[Remarks 5.4-1.]{BB99}. Its
existing proof in \cite[Proof of Proposition 5.1-(a)]{BB99} requires the extra assumption on $\mathrm{GSC}(d,l,S)$ that
\begin{equation}\label{eq:dwSC-BB99}
\#\{(i_{k})_{k=1}^{d}\in S\mid i_{1}=j\}\not=\#\{(i_{k})_{k=1}^{d}\in S\mid i_{1}=0\}
\quad\textrm{for some $j\in\{1,\ldots,l-1\}$},
\end{equation}
which holds for any generalized Sierpi\'{n}ski carpet with $d=2$ but does fail
for infinitely many examples of generalized Sierpi\'{n}ski carpets with fixed $d$
for each $d\geq 3$; indeed, for each $d,l\in\mathbb{N}$ with $d\geq 3$ and $l\geq 2$,
it is not difficult to see that $\mathrm{GSC}(d,2ld,S_{d,l})$ with
\begin{equation}\label{eq:dwSC-BB99-counterexamples}
S_{d,l}:=\biggl\{i\biggm|
\begin{minipage}{290pt}
$i=(i_{k})_{k=1}^{d}\in\{0,1,\ldots,2ld-1\}^{d}$, and for any $j\in\{1,3,\ldots,2l-1\}$,
$\{|2i_{k}-2ld+1|\mid k\in\{1,2,\ldots,d\}\}\not=\{j,j+2l,\ldots,j+2l(d-1)\}$
\end{minipage}
\biggr\}
\end{equation}
satisfies \ref{GSC1}, \ref{GSC2}, \ref{GSC3} and \ref{GSC4} in Definition \ref{dfn:GSC}
but not \eqref{eq:dwSC-BB99}.
\end{remark}
The proof of Theorem \ref{thm:dwSC} is given in the next section.
We conclude this section by presenting an application of Theorem \ref{thm:dwSC}
to the singularity with respect to $\mu$ of the energy measures associated with
$(K,\mu,\mathcal{E},\mathcal{F})$, which was proved first by Hino in
\cite[Subsection 5.2]{Hin05} via $d_{\mathrm{w}}>2$ and is obtained here
by combining \cite[Theorem 2.13-(a)]{KM} with $d_{\mathrm{w}}>2$.
\begin{definition}[(Cf.\ {\cite[(3.2.13), (3.2.14) and (3.2.15)]{FOT}})]\label{d:EnergyMeas}
The \emph{$\mathcal{E}$-energy measure} $\mu_{\langle u\rangle}$ of
$u\in\mathcal{F}$ is defined, first for $u\in\mathcal{F}\cap L^{\infty}(K,\mu)$
as the unique ($[0,\infty]$-valued) Borel measure on $K$ such that
\begin{equation}\label{e:EnergyMeas}
\int_{K}v\,d\mu_{\langle u\rangle}=\mathcal{E}(uv,u)-\frac{1}{2}\mathcal{E}(v,u^{2})
\qquad\textrm{for any $v\in\mathcal{F}\cap\mathcal{C}(K)$,}
\end{equation}
and then by
$\mu_{\langle u\rangle}(A):=\lim_{n\to\infty}\mu_{\langle(-n)\vee(u\wedge n)\rangle}(A)$
for each Borel subset $A$ of $K$ for general $u\in\mathcal{F}$;
note that $uv\in\mathcal{F}$ for any $u,v\in\mathcal{F}\cap L^{\infty}(K,\mu)$ by
\cite[Theorem 1.4.2-(ii)]{FOT} and that $\{(-n)\vee(u\wedge n)\}_{n=1}^{\infty}\subset\mathcal{F}$
and $\lim_{n\to\infty}\mathcal{E}\bigl(u-(-n)\vee(u\wedge n),u-(-n)\vee(u\wedge n)\bigr)=0$
by \cite[Theorem 1.4.2-(iii)]{FOT}.
\end{definition}
\begin{corollary}[({\cite[Subsection 5.2]{Hin05}})]\label{cor:dwSC}
$\mu_{\langle u\rangle}$ is singular with respect to $\mu$ for any $u\in\mathcal{F}$.
\end{corollary}
\begin{proof}
(Unlike the proof of Theorem \ref{thm:dwSC}, this proof is \emph{not} meant to be self-contained.)
$(\mathcal{E},\mathcal{F})$ is local by \cite[Lemma 3.4]{K:cdsa}, whose proof is based
only on \ref{GSCDF2}, \ref{GSCDF3} and \cite[Exercise 1.4.1 and Theorem 3.1.2]{FOT}, and
is therefore strongly local since $\mathcal{E}(\mathbf{1}_{K},v)=0$ for any $v\in\mathcal{F}$
by $\mathcal{E}(\mathbf{1}_{K},\mathbf{1}_{K})=0$ (see Lemma \ref{lem:conservative} below).
We easily see that $c_{5}s^{d_{\mathrm{f}}}\leq\mu(B(x,s))\leq c_{6}s^{d_{\mathrm{f}}}$
for any $(x,s)\in K\times(0,d]$ for some $c_{5},c_{6}\in(0,\infty)$, where
$B(x,s):=\{y\in K\mid\rho(x,y)<s\}$. It is also immediate that $(K,\rho)$ satisfies the
chain condition as defined in \cite[Definition 2.10-(a)]{KM}, in view of the fact that
by \ref{GSC4}, \ref{GSC1} and \ref{GSC2} there exists $c_{7}\in(0,\infty)$ such that
for any $x,y\in K$ there exists a continuous map $\gamma\colon[0,1]\to K$ with $\gamma(0)=x$
and $\gamma(1)=y$ whose Euclidean length is at most $c_{7}\rho(x,y)$. Finally,
by \cite[Theorem 4.30 and Remark 4.33]{BBKT} (see also \cite[Theorem 1.3]{BB99})
the heat kernel $p_{t}(x,y)$ of $(K,\mu,\mathcal{E},\mathcal{F})$ exists and
there exist $\beta_{0}\in(1,\infty)$ and $c_{1},c_{2},c_{3},c_{4}\in(0,\infty)$
such that \eqref{eq:HKEdw} with $\beta_{0}$ in place of $d_{\mathrm{w}}$ holds
for $\mu$-a.e.\ $x,y\in K$ for each $t\in(0,\infty)$, but then necessarily
$\beta_{0}=\log_{l}(\#S/r)=d_{\mathrm{w}}$ by \ref{GSCDF2}, \ref{GSCDF3} and \cite[Theorem 4.31]{BBKT}
as shown in \cite[Proof of Proposition 5.9, Second paragraph]{K:oscNRVNP},
whence $\beta_{0}=d_{\mathrm{w}}>2$ by Theorem \ref{thm:dwSC}. Thus
$(K,\rho,\mu,\mathcal{E},\mathcal{F})$ satisfies all the assumptions of
\cite[Theorem 2.13-(a)]{KM}, which implies the desired claim.
\end{proof}
\section{The elementary proof of the main theorem}\label{sec:proof}
This section is devoted to giving our elementary self-contained proof of the main
theorem (Theorem \ref{thm:dwSC}), which is an adaptation of, and has been inspired by,
an elementary proof of the counterpart of Theorem \ref{thm:dwSC} for Sierpi\'{n}ski
gaskets presented in \cite[Proof of Proposition 5.3, Second paragraph]{KM}.
We start with basic definitions and some simple lemmas.
\begin{definition}\label{dfn:words}
We set $W_{m}:=S^{m}=\{w_{1}\ldots w_{m}\mid\textrm{$w_{i}\in S$ for $i\in\{1,\ldots,m\}$}\}$
for $m\in\mathbb{N}$ and $W_{*}:=\bigcup_{m=1}^{\infty}W_{m}$. For each
$w=w_{1}\ldots w_{m}\in W_{*}$, the unique $m\in\mathbb{N}$ with $w\in W_{m}$
is denoted by $|w|$, and we set $F_{w}:=F_{w_{1}}\circ\cdots\circ F_{w_{m}}$,
$K_{w}:=F_{w}(K)$ and $q^{w}=(q^{w}_{k})_{k=1}^{d}:=F_{w}(\mathbf{0}_{d})$.
\end{definition}
\begin{lemma}\label{lem:cell-intersection-null}
If $w,v\in W_{*}$, $|w|=|v|$ and $w\not=v$, then
$\mu(F_{w}(K\setminus(0,1)^{d}))=0=\mu(K_{w}\cap K_{v})$.
\end{lemma}
\begin{proof}
This follows easily from \ref{GSC1} and the fact that $\mu$ is a Borel probability
measure on $K$ satisfying $\mu(K_{w})=(\#S)^{-|w|}$ for any $w\in W_{*}$.
\end{proof}
\begin{lemma}\label{lem:Fw-star}
Let $w\in W_{*}$. Then for any Borel measurable function $u\colon K\to[-\infty,\infty]$,
$\int_{K}|u\circ F_{w}|\,d\mu=(\#S)^{|w|}\int_{K_{w}}|u|\,d\mu$ and
$\int_{K_{w}}|u\circ F_{w}^{-1}|\,d\mu=(\#S)^{-|w|}\int_{K}|u|\,d\mu$. In particular, bounded
linear operators $F_{w}^{*},(F_{w})_{*}\colon L^{2}(K,\mu)\to L^{2}(K,\mu)$ can be defined by setting
\begin{equation}\label{eq:Fw-star}
F_{w}^{*}u:=u\circ F_{w}\qquad\textrm{and}\qquad
(F_{w})_{*}u:=
\begin{cases}
u\circ F_{w}^{-1}&\textrm{on $K_{w}$,}\\
0&\textrm{on $K\setminus K_{w}$}
\end{cases}
\end{equation}
for each $u\in L^{2}(K,\mu)$. Moreover, $u\circ F_{w}\in\mathcal{F}$
and \eqref{eq:GSCDF3} holds for any $u\in\mathcal{F}$.
\end{lemma}
\begin{proof}
The former assertions are immediate from $\mu=(\#S)^{|w|}\mu\circ F_{w}$.
For the latter ones, let $u\in\mathcal{F}$.
Since $\mathcal{F}\cap\mathcal{C}(K)$ is dense in the Hilbert space
$(\mathcal{F},\mathcal{E}_{1}=\mathcal{E}+\langle\cdot,\cdot\rangle)$
by the regularity of $(\mathcal{E},\mathcal{F})$,
we can choose $\{u_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$
so that $\lim_{n\to\infty}\mathcal{E}_{1}(u-u_{n},u-u_{n})=0$, and then
$\{u_{n}\circ F_{w}\}_{n=1}^{\infty}$ is a Cauchy sequence in $(\mathcal{F},\mathcal{E}_{1})$
with $\lim_{n\to\infty}\|u\circ F_{w}-u_{n}\circ F_{w}\|_{2}=0$ by \ref{GSCDF2} and
\ref{GSCDF3} and therefore has to converge to $u\circ F_{w}$ in norm in $(\mathcal{F},\mathcal{E}_{1})$.
Thus $u\circ F_{w}\in\mathcal{F}$, and \eqref{eq:GSCDF3} for $u$ follows by letting
$n\to\infty$ in \eqref{eq:GSCDF3} for $u_{n}\in\mathcal{F}\cap\mathcal{C}(K)$.
\end{proof}
\begin{lemma}\label{lem:conservative}
$\mathcal{E}(\mathbf{1}_{K},v)=0$ for any $v\in\mathcal{F}$.
\end{lemma}
\begin{proof}
This is immediate from the Cauchy--Schwarz inequality for $\mathcal{E}$ and
$\mathcal{E}(\mathbf{1}_{K},\mathbf{1}_{K})=0$.
\end{proof}
\begin{definition}\label{dfn:part-harmonic}
Let $U$ be a non-empty open subset of $K$.
\begin{enumerate}[label=\textup{(\arabic*)},align=left,leftmargin=*,topsep=2pt,parsep=0pt,itemsep=2pt]
\item Equipping $\mathcal{F}$ with the inner product
$\mathcal{E}_{1}=\mathcal{E}+\langle\cdot,\cdot\rangle$, we define
\begin{equation}\label{eq:part}
\mathcal{C}_{U}:=\{u\in\mathcal{F}\cap\mathcal{C}(K)\mid\operatorname{supp}_{K}[u]\subset U\}
\qquad\textrm{and}\qquad
\mathcal{F}_{U}:=\overline{\mathcal{C}_{U}}^{\mathcal{F}},
\end{equation}
which are linear subspaces of $\mathcal{F}$, and for each $u\in\mathcal{F}$ we also set
$u+\mathcal{C}_{U}:=\{u+v\mid v\in\mathcal{C}_{U}\}$ and $u+\mathcal{F}_{U}:=\{u+v\mid v\in\mathcal{F}_{U}\}$,
so that $\overline{u+\mathcal{C}_{U}}^{\mathcal{F}}=u+\mathcal{F}_{U}$.
\item A function $h\in\mathcal{F}$ is said to be \emph{$\mathcal{E}$-harmonic} on $U$
if and only if either of the following two conditions, which are easily seen
to be equivalent to each other, holds:
\begin{gather}\label{eq:harmonic-var}
\mathcal{E}(h,h)=\inf\{\mathcal{E}(u,u)\mid u\in h+\mathcal{F}_{U}\},\\
\mathcal{E}(h,v)=0\quad\textrm{for any $v\in\mathcal{C}_{U}$, or equivalently, for any $v\in\mathcal{F}_{U}$,}
\label{eq:harmonic-weak}
\end{gather}
where the equivalence stated in \eqref{eq:harmonic-weak} is immediate from \eqref{eq:part}.
\end{enumerate}
\end{definition}
\begin{definition}\label{dfn:GSC-V00V01-isometry-subgroup}
\begin{enumerate}[label=\textup{(\arabic*)},align=left,leftmargin=*,topsep=2pt,parsep=0pt,itemsep=2pt]
\item We set $V_{0}^{\varepsilon}:=K\cap(\{\varepsilon\}\times\mathbb{R}^{d-1})$ for each
$\varepsilon\in\{0,1\}$ and $U_{0}:=K\setminus(V_{0}^{0}\cup V_{0}^{1})$.
\item We fix an arbitrary $\varphi_{0}\in\mathcal{C}_{K\setminus V_{0}^{0}}$ with
$\operatorname{supp}_{K}[\mathbf{1}_{K}-\varphi_{0}]\subset K\setminus V_{0}^{1}$, which exists by \cite[Exercise 1.4.1]{FOT};
note that $\varphi_{0}+\mathcal{C}_{U_{0}},\varphi_{0}+\mathcal{F}_{U_{0}}$
are independent of a particular choice of \nolinebreak$\varphi_{0}$.
\item We define $g_{\varepsilon}\in\mathcal{G}_{0}$ by $g_{\varepsilon}:=\tau_{\varepsilon}|_{K}$
for each $\varepsilon=(\varepsilon_{k})_{k=1}^{d}\in\{0,1\}^{d}$,
where $\tau_{\varepsilon}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}$ is given by
$\tau_{\varepsilon}((x_{k})_{k=1}^{d}):=(\varepsilon_{k}+(1-2\varepsilon_{k})x_{k})_{k=1}^{d}$,
and define a subgroup $\mathcal{G}_{1}$ of $\mathcal{G}_{0}$ by
\begin{equation}\label{eq:GSC-isometry-subgroup}
\mathcal{G}_{1}:=\{g_{\varepsilon}\mid\varepsilon\in\{0\}\times\{0,1\}^{d-1}\}.
\end{equation}
\end{enumerate}
\end{definition}
Now we proceed to the core part of the proof of Theorem \ref{thm:dwSC}.
It is divided into three propositions, proving respectively the existence of a good sequence
$\{u_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$ converging in norm in
$(\mathcal{F},\mathcal{E}_{1})$ to $h_{0}\in\varphi_{0}+\mathcal{F}_{U_{0}}$
which is $\mathcal{E}$-harmonic on $U_{0}$ (Proposition \ref{prop:h0-approx}),
$\mathcal{E}(h_{0},h_{0})>0$ (Proposition \ref{prop:h0-nonzero})
and the \emph{non}-$\mathcal{E}$-harmonicity on $U_{0}$ of
$h_{2}:=\sum_{w\in W_{2}}(F_{w})_{*}(l^{-2}h_{0}+q^{w}_{1}\mathbf{1}_{K})\in h_{0}+\mathcal{F}_{U_{0}}$
(Proposition \ref{prop:h2-non-harmonic}); see Figures \ref{fig:h0U0-dfn} and \ref{fig:hm-dfn}
below for an illustration of $h_{0}$ and $h_{2}$. Then Theorem \ref{thm:dwSC} will follow
from $\mathcal{E}(h_{0},h_{0})<\mathcal{E}(h_{2},h_{2})$ and \eqref{eq:GSCDF3}
for $u\in\mathcal{F}$. While the existence of such $h_{0}$ is implied by
\cite[Theorems 7.2.1, 4.6.5, 1.5.2-(iii), A.2.6-(i), 4.1.3, 4.2.1-(ii) and Corollary 2.3.1]{FOT},
that of $\{u_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$ as in the following proposition
cannot be obtained directly from the theory of regular symmetric Dirichlet forms in \cite{FOT,CF}.
\begin{proposition}\label{prop:h0-approx}
There exist $h_{0}\in\mathcal{F}$ and $\{u_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$
satisfying the following:
\begin{enumerate}[label=\textup{(\arabic*)},align=left,leftmargin=*,topsep=2pt,parsep=0pt,itemsep=2pt]
\item\label{it:h0}$h_{0}$ is $\mathcal{E}$-harmonic on $U_{0}$ and $h_{0}\in\varphi_{0}+\mathcal{F}_{U_{0}}$.
In particular, $h_{0}+\mathcal{F}_{U_{0}}=\varphi_{0}+\mathcal{F}_{U_{0}}$.
\item\label{it:un}For each $n\in\mathbb{N}$, $u_{n}\circ g=u_{n}$ for any $g\in\mathcal{G}_{1}$
and $u_{n}\in\varphi_{0}+\mathcal{C}_{U_{0}}$.
\item\label{it:h0-approx}$\lim_{n\to\infty}\mathcal{E}_{1}(h_{0}-u_{n},h_{0}-u_{n})=0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Recalling \eqref{eq:harmonic-var}, for each $\alpha\in[0,\infty)$ we set
\begin{equation}\label{eq:CapAlphaV00V01}
a_{\alpha}:=\inf\bigl\{\mathcal{E}(u,u)+\alpha\|u\|_{2}^{2}\bigm|u\in\varphi_{0}+\mathcal{C}_{U_{0}}\bigr\}
=\inf\bigl\{\mathcal{E}(u,u)+\alpha\|u\|_{2}^{2}\bigm|u\in\varphi_{0}+\mathcal{F}_{U_{0}}\bigr\},
\end{equation}
where the latter equality in \eqref{eq:CapAlphaV00V01} is immediate from
$\overline{\varphi_{0}+\mathcal{C}_{U_{0}}}^{\mathcal{F}}=\varphi_{0}+\mathcal{F}_{U_{0}}$.
Then for any $\alpha\in[0,\infty)$ and any $u\in\varphi_{0}+\mathcal{C}_{U_{0}}$,
by the unit contraction operating on $(\mathcal{E},\mathcal{F})$ (see \cite[Section 1.1 and Theorem 1.4.1]{FOT})
we have $u^{+}\wedge 1\in\varphi_{0}+\mathcal{C}_{U_{0}}$ and
\begin{equation*}
\mathcal{E}(u,u)\geq\mathcal{E}(u^{+}\wedge 1,u^{+}\wedge 1)
\geq\mathcal{E}(u^{+}\wedge 1,u^{+}\wedge 1)+\alpha\|u^{+}\wedge 1\|_{2}^{2}-\alpha
\geq a_{\alpha}-\alpha,
\end{equation*}
and hence $a_{0}\geq a_{\alpha}-\alpha$, so that for each $n\in\mathbb{N}$
we can take $v_{n}\in\varphi_{0}+\mathcal{C}_{U_{0}}$ such that
\begin{equation}\label{eq:a0-approx-core}
\mathcal{E}(v_{n}^{+}\wedge 1,v_{n}^{+}\wedge 1)
\leq\mathcal{E}(v_{n},v_{n})+n^{-1}\|v_{n}\|_{2}^{2}
<a_{n^{-1}}+n^{-1}\leq a_{0}+2n^{-1}.
\end{equation}
Recalling \ref{GSCDF1}, now for each $n\in\mathbb{N}$ we can define
$u_{n}\in\mathcal{F}\cap\mathcal{C}(K)$ with the properties in \ref{it:un} by
$u_{n}:=(\#\mathcal{G}_{1})^{-1}\sum_{g\in\mathcal{G}_{1}}(v_{n}^{+}\wedge 1)\circ g$
and see from the triangle inequality for $\mathcal{F}\ni u\mapsto\mathcal{E}(u,u)^{1/2}$,
$\mathcal{E}((v_{n}^{+}\wedge 1)\circ g,(v_{n}^{+}\wedge 1)\circ g)=\mathcal{E}(v_{n}^{+}\wedge 1,v_{n}^{+}\wedge 1)$
for $g\in\mathcal{G}_{1}$ and \eqref{eq:a0-approx-core} that
\begin{equation}\label{eq:a0-approx-core-sym}
\mathcal{E}(u_{n},u_{n})\leq\mathcal{E}(v_{n}^{+}\wedge 1,v_{n}^{+}\wedge 1)<a_{0}+2n^{-1}.
\end{equation}
Further, since $\|u_{n}\|_{2}\leq 1$ by $0\leq u_{n}\leq 1$ for any $n\in\mathbb{N}$, the
Banach--Saks theorem \cite[Theorem A.4.1-(i)]{CF} yields $h_{0}\in L^{2}(K,\mu)$ and a strictly
increasing sequence $\{j_{k}\}_{k=1}^{\infty}\subset\mathbb{N}$ such that the Ces\`{a}ro
mean sequence $\{\overline{u}_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$
of $\{u_{j_{k}}\}_{k=1}^{\infty}$ given by $\overline{u}_{n}:=n^{-1}\sum_{k=1}^{n}u_{j_{k}}$
satisfies $\lim_{n\to\infty}\|h_{0}-\overline{u}_{n}\|_{2}=0$.
Then \ref{it:un} obviously holds for $\{\overline{u}_{n}\}_{n=1}^{\infty}$,
and it follows from \eqref{eq:CapAlphaV00V01} and \eqref{eq:a0-approx-core-sym}
that $\lim_{n\to\infty}\mathcal{E}(\overline{u}_{n},\overline{u}_{n})=a_{0}$
and that for any $n,k\in\mathbb{N}$,
\begin{align*}
\mathcal{E}(\overline{u}_{n}-\overline{u}_{k},\overline{u}_{n}-\overline{u}_{k})
&=2\mathcal{E}(\overline{u}_{n},\overline{u}_{n})+2\mathcal{E}(\overline{u}_{k},\overline{u}_{k})
-4\mathcal{E}((\overline{u}_{n}+\overline{u}_{k})/2,(\overline{u}_{n}+\overline{u}_{k})/2)\\
&\leq 2\mathcal{E}(\overline{u}_{n},\overline{u}_{n})+2\mathcal{E}(\overline{u}_{k},\overline{u}_{k})-4a_{0}
\xrightarrow{n\wedge k\to\infty}0,
\end{align*}
which together with $\lim_{n\to\infty}\|h_{0}-\overline{u}_{n}\|_{2}=0$ and
the completeness of $(\mathcal{F},\mathcal{E}_{1})$ implies that $h_{0}\in\mathcal{F}$ and
$\lim_{n\to\infty}\mathcal{E}_{1}(h_{0}-\overline{u}_{n},h_{0}-\overline{u}_{n})=0$. Thus
$h_{0}\in\overline{\varphi_{0}+\mathcal{C}_{U_{0}}}^{\mathcal{F}}
=\varphi_{0}+\mathcal{F}_{U_{0}}=h_{0}+\mathcal{F}_{U_{0}}$
by $\{\overline{u}_{n}\}_{n=1}^{\infty}\subset\varphi_{0}+\mathcal{C}_{U_{0}}$,
$\mathcal{E}(h_{0},h_{0})=\lim_{n\to\infty}\mathcal{E}(\overline{u}_{n},\overline{u}_{n})=a_{0}$,
and therefore $h_{0}$ is $\mathcal{E}$-harmonic on $U_{0}$
in view of \eqref{eq:CapAlphaV00V01} and \eqref{eq:harmonic-var}, completing the proof.
\end{proof}
\begin{figure}
\caption{The choice of $h_{0}$}
\label{fig:h0U0-dfn}
\caption{The construction of $h_{m}$}
\label{fig:hm-dfn}
\end{figure}
We need the following two lemmas for the remaining two propositions and their proofs.
\begin{lemma}\label{lem:hm-domain-boundary-value}
Let $h_{0}\in\mathcal{F}$ be as in Proposition \textup{\ref{prop:h0-approx}},
let $m\in\mathbb{N}$ and define $h_{m}\in L^{2}(K,\mu)$ by
\begin{equation}\label{eq:hm-dfn}
h_{m}:=\sum_{w\in W_{m}}(F_{w})_{*}(l^{-m}h_{0}+q^{w}_{1}\mathbf{1}_{K})
\end{equation}
\textup{(see Figure \ref{fig:hm-dfn} below for an illustration of \eqref{eq:hm-dfn})}.
Then $h_{m}\in h_{0}+\mathcal{F}_{U_{0}}$.
\end{lemma}
\begin{proof}
Let $\{u_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$ be as in Proposition \ref{prop:h0-approx}.
For each $n\in\mathbb{N}$, since $u_{n}\circ g=u_{n}$ for any $g\in\mathcal{G}_{1}$,
$u_{n}\in\varphi_{0}+\mathcal{C}_{U_{0}}$ and hence
$u_{n}|_{V_{0}^{0}\cup V_{0}^{1}}=\varphi_{0}|_{V_{0}^{0}\cup V_{0}^{1}}=\mathbf{1}_{V_{0}^{1}}$
by Proposition \ref{prop:h0-approx}-\ref{it:un}, we can define $u_{m,n}\in\mathcal{C}(K)$
by setting $u_{m,n}|_{K_{w}}:=(l^{-m}u_{n}+q^{w}_{1}\mathbf{1}_{K})\circ F_{w}^{-1}$ for each $w\in W_{m}$,
so that $u_{m,n}\circ F_{w}=l^{-m}u_{n}+q^{w}_{1}\mathbf{1}_{K}\in\mathcal{F}$ by $\mathbf{1}_{K}\in\mathcal{F}$
and thus $u_{m,n}\in\varphi_{0}+\mathcal{C}_{U_{0}}$ by \ref{GSCDF2} and $u_{n}\in\varphi_{0}+\mathcal{C}_{U_{0}}$.
Then we see from \ref{GSCDF3}, Lemmas \ref{lem:cell-intersection-null}, \ref{lem:Fw-star}
and Proposition \ref{prop:h0-approx}-\ref{it:h0-approx} that $\{u_{m,n}\}_{n=1}^{\infty}$
is a Cauchy sequence in the Hilbert space $(\mathcal{F},\mathcal{E}_{1})$ with
$\lim_{n\to\infty}\|h_{m}-u_{m,n}\|_{2}=0$ and therefore has to converge to
$h_{m}$ in norm in $(\mathcal{F},\mathcal{E}_{1})$, whence
$h_{m}\in\overline{\varphi_{0}+\mathcal{C}_{U_{0}}}^{\mathcal{F}}
=\varphi_{0}+\mathcal{F}_{U_{0}}=h_{0}+\mathcal{F}_{U_{0}}$
by $\{u_{m,n}\}_{n=1}^{\infty}\subset\varphi_{0}+\mathcal{C}_{U_{0}}$
and Proposition \ref{prop:h0-approx}-\ref{it:h0}.
\end{proof}
\begin{lemma}\label{lem:coordinate-func}
Let $k\in\{1,2,\ldots,d\}$ and define $f_{k}\in\mathcal{C}(\mathbb{R}^{d})$ by
$f_{k}((x_{j})_{j=1}^{d}):=x_{k}$. Then either $f_{k}|_{K}\in\mathcal{F}$ and
$\mathcal{E}(f_{k}|_{K},f_{k}|_{K})>0$ or $f_{k}|_{K}\not\in\mathcal{F}$.
\end{lemma}
\begin{proof}
Suppose to the contrary that $f_{k}|_{K}\in\mathcal{F}$ and $\mathcal{E}(f_{k}|_{K},f_{k}|_{K})=0$.
Then $f|_{K}\in\mathcal{F}$ and $\mathcal{E}(f|_{K},f|_{K})=0$ for any
$f\in\{\mathbf{1}_{\mathbb{R}^{d}},f_{1},f_{2},\ldots,f_{d}\}$ by $\mathbf{1}_{K}\in\mathcal{F}$,
$\mathcal{E}(\mathbf{1}_{K},\mathbf{1}_{K})=0$ and \ref{GSCDF1}, and hence also for any polynomial $f\in\mathcal{C}(\mathbb{R}^{d})$
since for any $u,v\in\mathcal{F}\cap\mathcal{C}(K)$ we have $uv\in\mathcal{F}\cap\mathcal{C}(K)$ and
$\mathcal{E}(uv,uv)^{1/2}\leq\|u\|_{\sup}\mathcal{E}(v,v)^{1/2}+\|v\|_{\sup}\mathcal{E}(u,u)^{1/2}$
by \cite[Theorem 1.4.2-(ii)]{FOT}. On the other hand, $\mathcal{E}(u,u)>0$ for some
$u\in\mathcal{F}\cap\mathcal{C}(K)$ by the existence of such $u\in\mathcal{F}$
and the denseness of $\mathcal{F}\cap\mathcal{C}(K)$ in $(\mathcal{F},\mathcal{E}_{1})$.
The Stone--Weierstrass theorem \cite[Theorem 2.4.11]{Dud} implies that
$\lim_{n\to\infty}\|u-f_{n}|_{K}\|_{\sup}=0$ for some sequence
$\{f_{n}\}_{n=1}^{\infty}\subset\mathcal{C}(\mathbb{R}^{d})$ of polynomials, and then
$\lim_{n\to\infty}\|u-f_{n}|_{K}\|_{2}=0$ and
$\lim_{n\wedge k\to\infty}\mathcal{E}_{1}(f_{n}|_{K}-f_{k}|_{K},f_{n}|_{K}-f_{k}|_{K})=0$.
Thus $\lim_{n\to\infty}\mathcal{E}_{1}(u-f_{n}|_{K},u-f_{n}|_{K})=0$ by the completeness
of $(\mathcal{F},\mathcal{E}_{1})$ and therefore
$0<\mathcal{E}(u,u)=\lim_{n\to\infty}\mathcal{E}(f_{n}|_{K},f_{n}|_{K})=0$,
which is a contradiction and completes the proof.
\end{proof}
\begin{proposition}\label{prop:h0-nonzero}
Let $h_{0}\in\mathcal{F}$ be as in Proposition \textup{\ref{prop:h0-approx}}.
Then $\mathcal{E}(h_{0},h_{0})>0$.
\end{proposition}
\begin{proof}
Let $f_{1}\in\mathcal{C}(\mathbb{R}^{d})$ be as in Lemma \ref{lem:coordinate-func} with $k=1$ and
for each $m\in\mathbb{N}$ let $h_{m}\in\mathcal{F}$ be as in Lemma \ref{lem:hm-domain-boundary-value},
so that by \eqref{eq:hm-dfn}, Lemmas \ref{lem:cell-intersection-null} and \ref{lem:Fw-star} we have
$\|f_{1}|_{K}-h_{m}\|_{2}=l^{-m}\|f_{1}|_{K}-h_{0}\|_{2}$
and $h_{m}\circ F_{w}=l^{-m}h_{0}+q^{w}_{1}\mathbf{1}_{K}$ $\mu$-a.e.\ for any $w\in W_{m}$
and hence \eqref{eq:GSCDF3} for $u\in\mathcal{F}$ from Lemma \ref{lem:Fw-star} and
Lemma \ref{lem:conservative} together yield
\begin{equation}\label{eq:hm-GSCDF3}
\mathcal{E}(h_{m},h_{m})
=\sum_{w\in W_{m}}\frac{1}{r^{m}}\mathcal{E}(l^{-m}h_{0}+q^{w}_{1}\mathbf{1}_{K},l^{-m}h_{0}+q^{w}_{1}\mathbf{1}_{K})
=\Bigl(\frac{\#S}{r}l^{-2}\Bigr)^{m}\mathcal{E}(h_{0},h_{0}).
\end{equation}
Now if $\mathcal{E}(h_{0},h_{0})=0$, then $\mathcal{E}(h_{m},h_{m})=0$ by \eqref{eq:hm-GSCDF3}
for any $m\in\mathbb{N}$, thus $\{h_{m}\}_{m=1}^{\infty}$ would be
a Cauchy sequence in the Hilbert space $(\mathcal{F},\mathcal{E}_{1})$ with
$\lim_{m\to\infty}\|f_{1}|_{K}-h_{m}\|_{2}=0$ and therefore convergent to
$f_{1}|_{K}$ in norm in $(\mathcal{F},\mathcal{E}_{1})$, hence $f_{1}|_{K}\in\mathcal{F}$
and $\mathcal{E}(f_{1}|_{K},f_{1}|_{K})=\lim_{m\to\infty}\mathcal{E}(h_{m},h_{m})=0$,
which contradicts Lemma \ref{lem:coordinate-func} and completes the proof.
\end{proof}
It is the proof of the following proposition that requires our standing assumption that
$S\not=\{0,1,\ldots,l-1\}^{d}$, which excludes the case of $K=[0,1]^{d}$ from the present framework.
\begin{proposition}\label{prop:h2-non-harmonic}
Let $h_{2}\in\mathcal{F}$ be as in Lemma \textup{\ref{lem:hm-domain-boundary-value}}
with $m=2$. Then $h_{2}$ is not $\mathcal{E}$-harmonic on $U_{0}$.
\end{proposition}
\begin{proof}
We claim that, if $h_{2}$ were $\mathcal{E}$-harmonic on $U_{0}$,
then $h_{0}\in\mathcal{F}$ as in Proposition \ref{prop:h0-approx} would
turn out to be $\mathcal{E}$-harmonic on $K\setminus V_{0}^{0}$, which would imply that
$\mathcal{E}(h_{0},h_{0})=\lim_{n\to\infty}\mathcal{E}(h_{0},u_{n})=0$ for
$\{u_{n}\}_{n=1}^{\infty}\subset\varphi_{0}+\mathcal{C}_{U_{0}}\subset\mathcal{C}_{K\setminus V_{0}^{0}}$
as in Proposition \ref{prop:h0-approx} by \eqref{eq:harmonic-weak},
in contradiction to Proposition \ref{prop:h0-nonzero}; establishing this claim will thereby
prove that $h_{2}$ is not $\mathcal{E}$-harmonic on $U_{0}$.
For each $\varepsilon=(\varepsilon_{k})_{k=1}^{d}\in\{1\}\times\{0,1\}^{d-1}$,
set $U^{\varepsilon}:=K\cap\prod_{k=1}^{d}(\varepsilon_{k}-1,\varepsilon_{k}+1)$,
$K^{\varepsilon}:=K\cap\prod_{k=1}^{d}[\varepsilon_{k}-1/2,\varepsilon_{k}+1/2]$
and choose $\varphi_{\varepsilon}\in\mathcal{C}_{U^{\varepsilon}}$ so that
$\varphi_{\varepsilon}|_{K^{\varepsilon}}=\mathbf{1}_{K^{\varepsilon}}$;
such $\varphi_{\varepsilon}$ exists by \cite[Exercise 1.4.1]{FOT}. Let
$v\in\mathcal{C}_{K\setminus V_{0}^{0}}$ and, taking an enumeration $\{\varepsilon^{(k)}\}_{k=1}^{2^{d-1}}$
of $\{1\}\times\{0,1\}^{d-1}$ and recalling that $v_{1}v_{2}\in\mathcal{F}\cap\mathcal{C}(K)$
for any $v_{1},v_{2}\in\mathcal{F}\cap\mathcal{C}(K)$ by \cite[Theorem 1.4.2-(ii)]{FOT}, define
$v_{\varepsilon}\in\mathcal{C}_{U^{\varepsilon}}$ for $\varepsilon\in\{1\}\times\{0,1\}^{d-1}$
by $v_{\varepsilon^{(1)}}:=v\varphi_{\varepsilon^{(1)}}$ and
$v_{\varepsilon^{(k)}}:=v\varphi_{\varepsilon^{(k)}}\prod_{j=1}^{k-1}(\mathbf{1}_{K}-\varphi_{\varepsilon^{(j)}})$
for $k\in\{2,\ldots,2^{d-1}\}$. Then
$v-\sum_{\varepsilon\in\{1\}\times\{0,1\}^{d-1}}v_{\varepsilon}
=v\prod_{\varepsilon\in\{1\}\times\{0,1\}^{d-1}}(\mathbf{1}_{K}-\varphi_{\varepsilon})
\in\mathcal{C}_{U_{0}}$,
hence $\mathcal{E}(h_{0},v)=\sum_{\varepsilon\in\{1\}\times\{0,1\}^{d-1}}\mathcal{E}(h_{0},v_{\varepsilon})$
by Proposition \ref{prop:h0-approx}-\ref{it:h0} and \eqref{eq:harmonic-weak}, and therefore
the desired $\mathcal{E}$-harmonicity of $h_{0}$ on $K\setminus V_{0}^{0}$, i.e.,
\eqref{eq:harmonic-weak} with $h=h_{0}$ and $U=K\setminus V_{0}^{0}$, would be
obtained by deducing that $\mathcal{E}(h_{0},v_{\varepsilon})=0$ for any
$\varepsilon\in\{1\}\times\{0,1\}^{d-1}$.
To this end, set $\varepsilon^{(0)}:=(\mathbf{1}_{\{1\}}(k))_{k=1}^{d}$, take
$i=(i_{k})_{k=1}^{d}\in S$ with $i_{1}<l-1$ and $i+\varepsilon^{(0)}\not\in S$,
which exists by $\emptyset\not=S\subsetneq\{0,1,\ldots,l-1\}^{d}$ and \ref{GSC1},
and let $\varepsilon=(\varepsilon_{k})_{k=1}^{d}\in\{1\}\times\{0,1\}^{d-1}$. We will choose
$i^{\varepsilon}\in S$ with $F_{ii^{\varepsilon}}(\varepsilon)\in F_{i}(K\cap(\{1\}\times(0,1)^{d-1}))$
and assemble $v_{\varepsilon}\circ g_{w}\circ F_{w}^{-1}$ with a suitable
$g_{w}\in\mathcal{G}_{1}$ for $w\in W_{2}$ with $F_{ii^{\varepsilon}}(\varepsilon)\in K_{w}$
into a function $v_{\varepsilon,2}\in\mathcal{C}_{U_{0}}$. Specifically, set
$i^{\varepsilon,\eta}:=\bigl((l-1)(\mathbf{1}_{\{1\}}(k)+1-\varepsilon_{k})+(2\varepsilon_{k}-1)\eta_{k}\bigr)_{k=1}^{d}$
for each $\eta=(\eta_{k})_{k=1}^{d}\in\{0\}\times\{0,1\}^{d-1}$ and
$I^{\varepsilon}:=\{\eta\in\{0\}\times\{0,1\}^{d-1}\mid i^{\varepsilon,\eta}\in S\}$,
so that $i^{\varepsilon}:=i^{\varepsilon,\mathbf{0}_{d}}\in S$ by \ref{GSC4} and \ref{GSC1} and hence
$\mathbf{0}_{d}\in I^{\varepsilon}$. Thanks to $v_{\varepsilon}\in\mathcal{C}_{U^{\varepsilon}}$ and
$i+\varepsilon^{(0)}\not\in S$ we can define $v_{\varepsilon,2}\in\mathcal{C}(K)$ by setting
\begin{equation}\label{eq:vepsilon2}
v_{\varepsilon,2}|_{K_{w}}:=
\begin{cases}
v_{\varepsilon}\circ g_{\eta}\circ F_{w}^{-1}
& \textrm{if $\eta\in I^{\varepsilon}$ and $w=ii^{\varepsilon,\eta}$}\\
0 & \textrm{if $w\not\in\{ii^{\varepsilon,\eta}\mid\eta\in I^{\varepsilon}\}$}
\end{cases}
\quad\textrm{for each $w\in W_{2}$,}
\end{equation}
then $\operatorname{supp}_{K}[v_{\varepsilon,2}]\subset K_{i}\setminus V_{0}^{0}\subset U_{0}$
by \eqref{eq:vepsilon2} and $i_{1}<l-1$, $v_{\varepsilon,2}\circ F_{w}\in\mathcal{F}$ for any
$w\in W_{2}$ by \eqref{eq:vepsilon2}, $v_{\varepsilon}\in\mathcal{F}\cap\mathcal{C}(K)$
and \ref{GSCDF1}, thus $v_{\varepsilon,2}\in\mathcal{F}$ by \ref{GSCDF2} and therefore
$v_{\varepsilon,2}\in\mathcal{C}_{U_{0}}$. Moreover, recalling that
$h_{2}\circ F_{w}=l^{-2}h_{0}+q^{w}_{1}\mathbf{1}_{K}$ $\mu$-a.e.\ for any $w\in W_{2}$ by
\eqref{eq:hm-dfn}, Lemmas \ref{lem:cell-intersection-null} and \ref{lem:Fw-star} and
letting $\{u_{n}\}_{n=1}^{\infty}\subset\mathcal{F}\cap\mathcal{C}(K)$ be as in
Proposition \ref{prop:h0-approx}, we see from \eqref{eq:GSCDF3} for $u\in\mathcal{F}$
in Lemma \ref{lem:Fw-star}, \eqref{eq:vepsilon2}, Lemma \ref{lem:conservative},
Proposition \ref{prop:h0-approx}-\ref{it:h0-approx}, \ref{GSCDF1} and
Proposition \ref{prop:h0-approx}-\ref{it:un} that
\begin{equation}\label{eq:Eh2vepsilon2}
\begin{split}
\mathcal{E}(h_{2},v_{\varepsilon,2})
&=\sum_{\eta\in I^{\varepsilon}}\frac{1}{r^{2}l^{2}}\mathcal{E}(h_{0},v_{\varepsilon}\circ g_{\eta})
=\lim_{n\to\infty}\sum_{\eta\in I^{\varepsilon}}\frac{1}{r^{2}l^{2}}\mathcal{E}(u_{n},v_{\varepsilon}\circ g_{\eta})\\
&=\lim_{n\to\infty}\sum_{\eta\in I^{\varepsilon}}\frac{1}{r^{2}l^{2}}\mathcal{E}(u_{n}\circ g_{\eta},v_{\varepsilon})
=\lim_{n\to\infty}\frac{\#I^{\varepsilon}}{r^{2}l^{2}}\mathcal{E}(u_{n},v_{\varepsilon})
=\frac{\#I^{\varepsilon}}{r^{2}l^{2}}\mathcal{E}(h_{0},v_{\varepsilon}).
\end{split}
\end{equation}
Now, supposing that $h_{2}$ were $\mathcal{E}$-harmonic on $U_{0}$, from \eqref{eq:Eh2vepsilon2},
$\#I^{\varepsilon}>0$, $v_{\varepsilon,2}\in\mathcal{C}_{U_{0}}$ and \eqref{eq:harmonic-weak} we would obtain
$\mathcal{E}(h_{0},v_{\varepsilon})=r^{2}l^{2}(\#I^{\varepsilon})^{-1}\mathcal{E}(h_{2},v_{\varepsilon,2})=0$,
which would imply a contradiction as explained in the first two paragraphs of this proof, thus completing the proof.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:dwSC}}]
Let $h_{0}\in\mathcal{F}$ be as in Proposition \ref{prop:h0-approx} and let
$h_{2}\in h_{0}+\mathcal{F}_{U_{0}}$ be as in Lemma \ref{lem:hm-domain-boundary-value} with $m=2$,
so that $h_{0}$ is $\mathcal{E}$-harmonic on $U_{0}$, $h_{2}+\mathcal{F}_{U_{0}}=h_{0}+\mathcal{F}_{U_{0}}$,
$h_{2}$ is not $\mathcal{E}$-harmonic on $U_{0}$ by Proposition \ref{prop:h2-non-harmonic}
and hence $\mathcal{E}(h_{0},h_{0})<\mathcal{E}(h_{2},h_{2})$ in view of
\eqref{eq:harmonic-var}. This strict inequality combined with \eqref{eq:hm-GSCDF3} shows that
\begin{equation*}
\mathcal{E}(h_{0},h_{0})<\mathcal{E}(h_{2},h_{2})=\Bigl(\frac{\#S}{r}l^{-2}\Bigr)^{2}\mathcal{E}(h_{0},h_{0}),
\end{equation*}
whence $l^{2}<\#S/r$, namely $d_{\mathrm{w}}=\log_{l}(\#S/r)>2$.
\end{proof}
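For orientation, the inequality $d_{\mathrm{w}}=\log_{l}(\#S/r)>2$ just proved can be evaluated numerically. The Python sketch below is not part of the proof: it uses the standard planar Sierpi\'{n}ski carpet $l=3$, $\#S=8$, and the value of $r$ it uses is only an assumed numerical estimate of the resistance scaling factor taken from the literature, not something computed in this paper.
\begin{verbatim}
import math

# Illustration only: d_w = log_l(#S / r) > 2, equivalently r < #S / l^2,
# evaluated for the standard planar Sierpinski carpet.  The value of r is an
# assumed literature-based numerical estimate (it is NOT derived in this paper).
l, num_cells = 3, 8
r = 1 / 1.2514                       # assumed estimate of the resistance scaling

d_f = math.log(num_cells, l)         # Hausdorff dimension, log_3 8 ~ 1.8928
d_w = math.log(num_cells / r, l)     # walk dimension, ~ 2.10 with the assumed r

print(f"d_f = {d_f:.4f}, d_w = {d_w:.4f}, d_w > 2: {d_w > 2}")
print("equivalently r < #S / l^2:", r < num_cells / l ** 2)
\end{verbatim}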
\begin{remark}\label{rmk:h2-non-harmonic-maximum-principle}
As an alternative to Proposition \ref{prop:h2-non-harmonic} and its proof above,
we could have used the \emph{strong maximum principle} for $\mathcal{E}$-harmonic functions
to show that $h_{1}\in\mathcal{F}$ as in Lemma \ref{lem:hm-domain-boundary-value} with $m=1$
is not $\mathcal{E}$-harmonic on $U_{0}$, from which Theorem \ref{thm:dwSC} follows in exactly
the same way. Indeed, let $h_{0}\in\mathcal{F}$ be as in Proposition \ref{prop:h0-approx},
let $i=(i_{k})_{k=1}^{d}\in S$ be as in the above proof of Proposition \ref{prop:h2-non-harmonic}
and set $U_{i}:=F_{i}(K\cap((0,1]\times(0,1)^{d-1}))$, which is an open subset of $U_{0}$ by $i_{1}<l-1$
and $i+(\mathbf{1}_{\{1\}}(k))_{k=1}^{d}\not\in S$ and connected by \ref{GSC4}, \ref{GSC1} and \ref{GSC2}.
Then noting that $h_{1}|_{U_{i}}=l^{-1}h_{0}\circ F_{i}^{-1}|_{U_{i}}+l^{-1}i_{1}\mathbf{1}_{U_{i}}$ $\mu$-a.e.\ by
\eqref{eq:hm-dfn} and that $\mu(K_{i}\setminus U_{i})=0$ by Lemma \ref{lem:cell-intersection-null},
we easily see from Proposition \ref{prop:h0-approx} (and its proof), Lemma \ref{lem:Fw-star},
Proposition \ref{prop:h0-nonzero} and $\mathcal{E}(\mathbf{1}_{K},\mathbf{1}_{K})=0$ that
$h_{1}\leq l^{-1}(i_{1}+1)$ $\mu$-a.e.\ on $U_{i}$, ``$h_{1}=l^{-1}(i_{1}+1)$ on $U_{i}\cap F_{i}(V_{0}^{1})$'' and
$h_{1}|_{U_{i}}\not=l^{-1}(i_{1}+1)\mathbf{1}_{U_{i}}$, so that $h_{1}$ cannot be $\mathcal{E}$-harmonic on $U_{i}$
or on $U_{0}\supset U_{i}$ by the strong maximum principle \cite[Theorem 2.11]{CK}
for $\mathcal{E}$-(sub)harmonic functions on $U_{i}$.
While this short ``proof'' nicely illustrates the heuristics behind the proof of
Proposition \ref{prop:h2-non-harmonic} above, it is in fact highly non-trivial to justify
our last application of the strong maximum principle \cite[Theorem 2.11]{CK} because it
requires the following set of conditions; see \cite[Sections A.2, 1.4, 4.1, 4.2 and 1.6]{FOT}
for the definitions and further details of the notions involved.
Note that \hyperlink{con}{\textup{(con)}} below also gives a precise formulation of the statement
``$h_{1}=l^{-1}(i_{1}+1)$ on $U_{i}\cap F_{i}(V_{0}^{1})$'' in the previous paragraph,
which needs to be done since $\mu(U_{i}\cap F_{i}(V_{0}^{1}))=0$ by Lemma \ref{lem:cell-intersection-null}.
\begin{enumerate}[label=\textup{(con)},align=left,leftmargin=*,topsep=4pt,parsep=0pt,itemsep=2pt]
\item[\hypertarget{AC}{\textup{(AC)}}]There exists a $\mu$-symmetric Hunt process
$X=\bigl(\Omega,\mathscr{M},\{X_{t}\}_{t\in[0,\infty]},\{\mathbb{P}_{x}\}_{x\in K_{\Delta}}\bigr)$
on $K$ such that its Dirichlet form on $L^{2}(K,\mu)$ is $(\mathcal{E},\mathcal{F})$
and the Borel measure $\mathbb{P}_{x}[X_{t}\in\cdot]$ on $K$ is
\emph{absolutely continuous with respect to $\mu$ for any $(t,x)\in(0,\infty)\times K$},
where $K_{\Delta}:=K\cup\{\Delta\}$ denotes the one-point compactification of $K$.
\item[\hypertarget{irr}{\textup{(irr)}}]
$(\mathcal{E}|_{\mathcal{F}_{U_{i}}\times\mathcal{F}_{U_{i}}},\mathcal{F}_{U_{i}})$
is \emph{$\mu$-irreducible}, i.e., $\mu(A)\mu(U_{i}\setminus A)=0$ for any Borel subset
$A$ of $U_{i}$ with the property that $u\mathbf{1}_{A}\in\mathcal{F}_{U_{i}}$ and
$\mathcal{E}(u\mathbf{1}_{A},u-u\mathbf{1}_{A})=0$ for any $u\in\mathcal{F}_{U_{i}}$.
\item[\hypertarget{con}{\textup{(con)}}]If $h_{1}$ were $\mathcal{E}$-harmonic on $U_{i}$,
then there would exist a $\mu$-version of $h_{1}|_{U_{i}}$ which would be nearly Borel
measurable and finely continuous with respect to $X$ as in \hyperlink{AC}{\textup{(AC)}}
and satisfy $h_{1}(x)=l^{-1}(i_{1}+1)$ for some $x\in U_{i}\cap F_{i}(V_{0}^{1})$.
\end{enumerate}
(Some other versions of the strong maximum principle for (sub)harmonic functions in the setting
of a strongly local regular symmetric Dirichlet form have been proved in \cite{Kuw00,Kuw08,Kuw12},
but conditions very similar to \hyperlink{AC}{\textup{(AC)}}, \hyperlink{irr}{\textup{(irr)}} and
\hyperlink{con}{\textup{(con)}} are assumed also by them and therefore have to be verified anyway.)
It is possible to deduce \hyperlink{AC}{\textup{(AC)}}, \hyperlink{irr}{\textup{(irr)}} and
\hyperlink{con}{\textup{(con)}} from known properties of $(K,\rho,\mu,\mathcal{E},\mathcal{F})$.
Indeed, \hyperlink{AC}{\textup{(AC)}} follows from \cite[Chapter I, Theorem 9.4]{BG} and
the fact that $(K,\rho,\mu,\mathcal{E},\mathcal{F})$ has a version $p=p_{t}(x,y)$ of the heat
kernel which is jointly continuous in $(t,x,y)\in(0,\infty)\times K\times K$ and satisfies
\eqref{eq:HKEdw} by \cite[Theorem 4.30 and Remark 4.33]{BBKT} and \cite[Theorem 3.1]{BGK},
and \hyperlink{irr}{\textup{(irr)}} is implied by \textup{w-LLE($(\cdot)^{d_{\mathrm{w}}}$)}
from \cite[Theorem 3.1]{BGK}, the connectedness of $U_{i}$ and \cite[Theorem 1.6.1]{FOT}.
For \hyperlink{con}{\textup{(con)}}, let $\operatorname{Cap}^{\mathcal{E}}_{1}(A)$ denote the $1$-capacity
of $A\subset K$ with respect to $(K,\mu,\mathcal{E},\mathcal{F})$ as defined in
\cite[(2.1.1), (2.1.2) and (2.1.3)]{FOT}, and suppose that $h_{1}$ were $\mathcal{E}$-harmonic on $U_{i}$.
Then by \textup{w-LLE($(\cdot)^{d_{\mathrm{w}}}$)} and \cite[Corollary 4.2]{BGK}
there would exist a continuous $\mu$-version of $h_{1}|_{U_{i}}$ and hence so would
a continuous one of $h_{0}=lh_{1}\circ F_{i}-i_{1}\mathbf{1}_{K}$ on $F_{i}^{-1}(U_{i})$
by Lemma \ref{lem:Fw-star}. Moreover, $\operatorname{Cap}^{\mathcal{E}}_{1}(V_{0}^{1})>0$
by Propositions \ref{prop:h0-approx}-\ref{it:h0}, \ref{prop:h0-nonzero}, \eqref{eq:harmonic-var}
and \cite[Corollary 2.3.1]{FOT}, $\operatorname{Cap}^{\mathcal{E}}_{1}(F_{i}^{-1}(U_{i})\cap V_{0}^{1})>0$
by \ref{GSC4}, \ref{GSC1} and \cite[Lemma 7.14]{K:cdsa}, the continuous
$\mu$-version of $h_{0}$ on $F_{i}^{-1}(U_{i})$ would satisfy $h_{0}=1$ on
$F_{i}^{-1}(U_{i})\cap V_{0}^{1}\setminus N$ for some $N\subset K$ with $\operatorname{Cap}^{\mathcal{E}}_{1}(N)=0$
by $h_{0}\in\varphi_{0}+\mathcal{F}_{U_{0}}$ and \cite[Theorem 2.1.4-(i)]{FOT}, and
$F_{i}^{-1}(U_{i})\cap V_{0}^{1}\setminus N\not=\emptyset$ by
$\operatorname{Cap}^{\mathcal{E}}_{1}(F_{i}^{-1}(U_{i})\cap V_{0}^{1})>\operatorname{Cap}^{\mathcal{E}}_{1}(N)$.
Thus $h_{0}(y)=1$ for some $y\in F_{i}^{-1}(U_{i})\cap V_{0}^{1}$ and $h_{1}(x)=l^{-1}(i_{1}+1)$
for $x:=F_{i}(y)\in U_{i}\cap F_{i}(V_{0}^{1})$, proving \hyperlink{con}{\textup{(con)}}.
Note that the arguments in the last paragraph heavily rely on the demanding results in \cite{BBKT,BGK},
and also that the proofs of the strong maximum principle in \cite{Kuw00,Kuw08,Kuw12,CK} make
full use of the potential theory for regular symmetric Dirichlet forms in \cite{FOT,CF}.
In this sense, the above ``short'' proof of the non-$\mathcal{E}$-harmonicity of $h_{1}$
on $U_{0}$ based on the strong maximum principle is far from self-contained and much less
elementary than the proof of Proposition \ref{prop:h2-non-harmonic}.
\end{remark}
\affiliationone{
Naotaka Kajino\\
Department of Mathematics, Graduate School of Science, Kobe University\\
Rokkodai-cho 1-1, Nada-ku\\
Kobe 657-8501\\
Japan
\email{[email protected]}}
\affiliationtwo{ }
\affiliationthree{
Current address:\\
Research Institute for Mathematical Sciences, Kyoto University\\
Kitashirakawa-Oiwake-cho, Sakyo-ku\\
Kyoto 606-8502\\
Japan
\email{[email protected]}}
\end{document} |
\begin{document}
\maketitle
\noindent{\bf Abstract}
Let $u = \{u(t, x); (t,x)\in \mathbb R_+\times \mathbb R\}$ be the solution to a linear stochastic heat equation driven by a Gaussian noise, which is a Brownian motion in time and a fractional Brownian motion in space with Hurst parameter $H\in(0, 1)$.
For any given $x\in \mathbb R$ (resp. $t\in \mathbb R_+$), we show a decomposition of the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) as the sum of a fractional Brownian motion with Hurst parameter $H/2$ (resp. $H$) and a stochastic process with $C^{\infty}$-continuous trajectories. Some applications of those decompositions are discussed.
\vskip0.3cm
\noindent {\bf Keywords} {Stochastic heat equation; fractional Brownian motion;
path regularity; law of the iterated logarithm.}
\vskip0.3cm
\noindent {\bf Mathematics Subject Classification (2000)}{ 60G15, 60H15, 60G17.}
\section{Introduction}
Consider the following one-dimensional stochastic heat equation
\begin{equation}\label{eq SPDE}
\frac{\partial u}{\partial t}=\frac{\kappa}{2}\frac{\partial ^2 u}{\partial x^2}+\dot W, \ \ \ t\ge0, x\in \mathbb R,
\end{equation}
with initial condition $u(0,x) \equiv 0$, where $\dot W=\frac{\partial^2 W}{\partial t\partial x}$ and $W$ is a centered Gaussian process with covariance given by
\begin{equation}\label{eq cov}
\mathbb E[W(s,x)W(t,y)]=\frac12\left(|x|^{2H}+|y|^{2H}-|x-y|^{2H} \right)(s\wedge t), \ \
\end{equation}
for any $s,t\ge0, x, y\in\mathbb R$, with $H \in (0,1)$. That is, $W$ is a standard Brownian motion in time and a fractional Brownian motion (fBm for short) with Hurst parameter $H$ in space.
When $H=1/2$, $\dot W$ is a space-time white noise and $u$ is the classical stochastic convolution, which has been understood very well (see e.g., \cite{Walsh}). Theorem 3.3 in \cite{K} tells us that the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) can be represented as the sum of a fBm with Hurst parameter $1/4$ (resp. $1/2$) and a stochastic process with $C^{\infty}$-continuous trajectories. Hence, locally $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) behaves as a fBm with Hurst parameter $1/4$ (resp. $1/2$), and it has the same regularity properties (such as the H\"older continuity, the global and local moduli of continuity, and the Chung-type law of the iterated logarithm) as a fBm with Hurst parameter $1/4$ (resp. $1/2$). See
Lei and Nualart \cite{LN} for earlier related work.
When $H\in(1/2,1)$, Mueller and Wu \cite{MW} used such a decomposition to study the critical dimension for hitting points for the fBm; Tudor and Xiao \cite{TX} studied the sample regularities of the solution for the fractional-colored stochastic heat equation by using this decomposition.
When $H\in (0,1/2)$, the spatial covariance $\Lambda$, given in
$$
\mathbb E\left[\dot W(s,x)\dot W(t,y)\right]=\delta_0(t-s)\Lambda(x-y),
$$
is a distribution, which fails to be positive. The study of stochastic partial differential equations with this kind of noise lies outside the scope of application of the classical references (see, e.g., \cite{ Dal, PZ, DPZ}).
It seems that the decomposition results in \cite{MW} and \cite{TX} are very hard to extend to the case of $H\in (0,1/2)$.
Recently, stochastic partial differential equations driven by a Gaussian noise that is fractional in space with $H\in(0,1/2)$ have attracted much attention.
For example,
Balan et al. \cite{BJQ} studied the existence and uniqueness of a mild solution for stochastic heat equation with affine multiplicative fractional noise in space, that is, the diffusion coefficient is given by an affine function $\sigma(x)=ax+b$. They
established the H\"older continuity of the solution in \cite{BJQ2016}. The case of the general nonlinear coefficient $\sigma$, which has a Lipschitz derivative and satisfies $\sigma(0)=0$, has been studied in Hu et al. \cite{HHLNT}.
In this paper, we give unified forms of the decompositions of the stochastic convolution in both the temporal and the spatial variable for all $H\in (0,1)$. That is, for any given $x\in \mathbb R$ (resp. $t\in \mathbb R_+$), we show a decomposition of the stochastic process $t\mapsto u(t,x)$ (resp. $x\mapsto u(t,x)$) as the sum of a fractional Brownian motion with Hurst parameter $H/2$ (resp. $H$) and a stochastic process with $C^{\infty}$-continuous trajectories.
Those decompositions not only lead to a better understanding of the H\"older regularity of the stochastic convolution \eqref{eq SPDE}, but also give the uniform and local moduli of continuity and a Chung-type law of the iterated logarithm.
Notice that our decompositions are natural extensions of \cite[Theorem 3.3]{K}, and they are given in a form different from those obtained in \cite{MW} and \cite{TX}.
The rest of this paper is organized as follows. In Section 2, we recall some results about the
Gaussian noise and the stochastic convolution. The main results are given in Section 3, and their proofs are given in Section 4.
\section{The Gaussian noise and the stochastic convolution}
In this section, we introduce the Gaussian noise and corresponding stochastic integration, borrowed from \cite{BJQ} and \cite{PT}.
Let $\mathcal S$ be the space of Schwartz functions, and $\mathcal S'$ be its dual, the space of tempered distributions. The Fourier transform of a function $u\in \mathcal S$ is defined as
$$
\mathcal Fu(\xi)=\int_{\mathbb R}e^{-i\xi x}u(x)dx,
$$
and the inverse Fourier transform is given by $\mathcal F^{-1}u(\xi)=(2\pi)^{-1}\mathcal Fu(-\xi)$.
Given a domain $G\subset \mathbb R^n$ for some $n\ge1$, let $\mathcal D(G)$ be the space of all real-valued infinitely differentiable functions with compact support on $G$. According to \cite[Theorem 3.1]{PT}, the noise $W$ can be represented by a zero-mean Gaussian family $\{W(\phi), \phi\in \mathcal D((0,\infty)\times \mathbb R)\}$ defined on a complete probability space $(\Omega, \mathcal F, \mathbb P)$, whose covariance is given by
\begin{equation}\label{eq cov 2}
\mathbb E[W(\phi)W(\psi)]=c_{1, H}\int_{\mathbb R_+\times \mathbb R} \mathcal F\phi(s,\xi)\overline{\mathcal F\psi(s,\xi)}|\xi|^{1-2H}dsd\xi,\ \ \ \ H \in (0,1),
\end{equation}
for any $\phi ,\psi \in \mathcal D((0,\infty)\times \mathbb R)$, where the Fourier transforms $\mathcal F \phi, \mathcal F\psi$ are understood as Fourier transforms in space only, $\bar z$ is the complex conjugate of a complex number $z$ and
\begin{equation}\label{const c1H}
c_{1, H}=\frac{1}{2\pi} \Gamma(2H+1)\sin(H \pi).
\end{equation}
Let $\mathcal H$ be the Hilbert space obtained by completing $\mathcal D (\mathbb R)$ under the inner product
\begin{equation}
\langle \phi, \psi\rangle_{\mathcal H}=
\left\{
\begin{aligned}\label{eq inn}
&c_{2, H}^2\int_{\mathbb R^2} (\phi( x+y)-\phi(x))(\psi( x+y)-\psi(x))|y|^{2H-2} dxdy,& H&\in(0,1/2);\\
& \int_{\mathbb R} \phi(x)\psi(x)dx, & H&=1/2;\\
&c_{3, H}^2\int_{\mathbb R^2} \phi( x+y)\psi(x)|y|^{2H-2} dxdy, &H&\in(1/2,1),
\end{aligned}
\ \right.
\end{equation}
for any $\phi ,\psi \in \mathcal H $,
where
$c_{2, H}^2=H\left(\frac12-H\right)$,
$c_{3, H}^2=H(2H-1).$
Denote $\|\phi\|_{\mathcal H}:=\sqrt{\langle \phi, \phi\rangle_{\mathcal H}}$ for any $\phi\in \mathcal H$.
Then
\begin{align}\label{eq cov 3}
\mathbb E[W(\phi)W(\psi)]
=\mathbb E\left[\int_{\mathbb R_+}\langle \phi(s), \psi(s)\rangle_{\mathcal H} ds\right].
\end{align}
See e.g., \cite{HHLNT, PT, TTV}.
Let
\begin{equation}\label{eq p}
p_t(x)=\frac{1}{\sqrt{2\pi \kappa t}}e^{-\frac{x^2}{2\kappa t}}
\end{equation}
be the heat kernel on the real line related to $\frac{\kappa}{2}\Delta$.
\begin{definition}\label{def solut}
We say that a random field $u=\{
u(t, x); t\in [0, T], x\in \mathbb R\}$ is a mild solution of \eqref{eq SPDE}, if $u$ is
predictable and for any $(t, x)\in [0, T]\times \mathbb R$,
\begin{equation}\label{eq solut}
u(t,x)= \int_0^t\int_{\mathbb R}p_{t-s}(x-y)W(ds,dy),\ \ \ a.s.
\end{equation}
It is usually called the {\it stochastic convolution}. Denote $u(t,x)$ by $u_t(x)$ for any $(t,x)\in \mathbb R_+\times\mathbb R$.
\end{definition}
\section{Main results}
Recall that a mean-zero Gaussian process $\{X_t\}_{t\in\mathbb R}$ is called a (two-sided) fractional Brownian motion with Hurst parameter $H\in (0,1)$, if it satisfies
\begin{align}\label{eq fBm}
X_0=0, \ \ \ \mathbb E\left(|X_t-X_s|^2\right)=|t-s|^{2H}, \ \ \ t, s\in \mathbb R.
\end{align}
\begin{theorem}\label{thm main} For $H\in (0,1)$, the following results hold for the stochastic convolution $u=\{u_t(x); (t,x)\in \mathbb R_+\times \mathbb R\}$ given by \eqref{eq solut}:
\begin{itemize}
\item[(a)] For every $x\in \mathbb R$, there exists a fBm $\{X_t\}_{t\ge0}$ with Hurst parameter $H/{2}$, such that
$$
u_t(x)- C_{1, H, \kappa} X_t, \ \ \ \ t\ge0,
$$
defines a mean-zero Gaussian process with a version that is continuous on $\mathbb R_+$ and infinitely differentiable on $(0, \infty)$, where
\begin{equation}\label{eq c} C_{1, H, \kappa}:=\left( \frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)} \right)^{\frac12}.
\end{equation}
\item[(b)] For every $t>0$, there exists a fBm $\{B(x)\}_{x\in\mathbb R}$ with Hurst parameter $H$,
such that
$$
u_t(x)- \kappa^{-\frac12} B(x),\ \ \ \ x\in\mathbb R,
$$
defines a Gaussian random field with a version that is continuous on $\mathbb R$ and infinitely differentiable on $\mathbb R$.
\end{itemize}
\end{theorem}
Let us observe that Theorem \ref{thm main} says that, locally $t\mapsto u_t(x)$ behaves as a fBm with Hurst parameter $H/2$ and $x\mapsto u_t(x)$ behaves as a fBm with Hurst parameter $H$. Thus, for instance, combining this observation with known facts about fBm (see e.g., \cite{MRm}, \cite[Chapter 1]{Mis}, or \cite{Xiao2008}), we can obtain the following sample path regularities of the stochastic convolution.
By applying Theorem \ref{thm main} and the H\"older continuity result for fBms \cite[Chapter 1]{Mis}, we have the following well-known results (see e.g., \cite[Chapter 3]{Walsh}, \cite[Theorem 1.1]{BJQ2016}).
\begin{corollary}
\begin{itemize}
\item[(a)]
For every $x\in \mathbb R$, the stochastic process
$t\mapsto u_t(x)$ is a.s. H\"older continuous of parameter $H/2-\varepsilon$ for every $\varepsilon >0$.
\item[(b)]
For every $t>0$, the stochastic process
$x\mapsto u_t(x)$ is a.s. H\"older continuous of parameter $H-\varepsilon$ for every $\varepsilon >0$.
\end{itemize}
\end{corollary}
By applying Theorem \ref{thm main} and the variations of fBms (see e.g.,\cite{MRm}), we have the following results.
\begin{corollary}\label{prop sample regularity11} Let $\mathcal N$ be a standard normal random variable. Then, for every $x\in \mathbb R$ and $[a,b]\subset \mathbb R_+$,
\begin{align}\label{eq variation 1}
\lim_{n\rightarrow \infty}\sum_{a2^n\le i\le b 2^n}\left[u_{(i+1)/2^n}(x)-u_{i/2^n}(x) \right]^{\frac 2 H}= (b-a)C_{1, H, \kappa}^{2/H}\mathbb E\left[|\mathcal N|^{2/H}\right], \ \ a.s.;
\end{align}
and for every $t>0$, $[c,d]\subset \mathbb R$,
\begin{align}\label{eq variation 2}
\lim_{n\rightarrow \infty}\sum_{c2^n\le i\le d 2^n}\left[u_{t}((i+1)/2^n)-u_{t}(i/2^n) \right]^{\frac 1 H}= (d-c) C_{2, H, \kappa}^{1/H} \mathbb E\left[|\mathcal N|^{1/H}\right], \ \ a.s.
\end{align}
\end{corollary}
By applying Theorem \ref{thm main} and the global and local moduli of continuity results for fBms (see e.g., \cite[Chapter 7]{MRm}), we have
\begin{corollary}\label{prop sample regularity2 }
\begin{itemize}
\item[(a)](Global moduli of continuity for intervals).
For every $x\in \mathbb R$ and $[a,b]\subset \mathbb R_+$, we have
\begin{align}\label{eq LIL 1}
\lim_{\varepsilon \rightarrow 0+}\sup_{s,t\in[a,b], 0<|t-s|\le \varepsilon}\frac{|u_{t}(x)-u_{s}(x)|}{ |t-s|^{H/2}\sqrt{2\ln (1/|t-s|)} }= C_{3, H, \kappa},\ \ \ a.s.;
\end{align}
and for every $t>0$, $[c,d]\subset \mathbb R$, we have
\begin{align}\label{eq LIL 2}
\lim_{\varepsilon \rightarrow 0+}\sup_{x,y\in[c,d], 0<|x-y|\le \varepsilon}\frac{|u_{t}(x)-u_{t}(y)|}{ |x-y|^{H}\sqrt{2\ln (1/|x-y|)} }= C_{4, H, \kappa},\ \ \ a.s.
\end{align}
\item[(b)](Local moduli of continuity for intervals).
For every $t>0$ and $x\in \mathbb R$ , we have
\begin{align}\label{eq local moduli 1}
\varlimsup_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|t-s|\le \varepsilon} |u_{t}(x)-u_{s}(x)|}{\varepsilon ^{H/2}\sqrt{2\ln\ln (1/\varepsilon )} }= C_{5, H, \kappa},\ \ \ a.s.;
\end{align}
and
\begin{align}\label{eq local moduli 2}
\varlimsup_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|x-y|\le \varepsilon} |u_{t}(x)-u_{t}(y)|}{\varepsilon ^{H}\sqrt{2\ln\ln (1/\varepsilon)} }= C_{6, H, \kappa},\ \ \ a.s.
\end{align}
\end{itemize}
\end{corollary}
By applying Theorem \ref{thm main} and the Chung-type law of the iterated logarithm in \cite[Theorem 3.3]{MR}, we have
\begin{corollary}\label{prop sample regularity3}
For every $t>0$ and $x\in \mathbb R$, we have
\begin{align}\label{eq CLIL 1}
\varliminf_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|t-s|\le \varepsilon} |u_{t}(x)-u_{s}(x)|}{\left(\varepsilon/ \ln\ln (1/\varepsilon)\right)^{H/2} }= C_{7, H, \kappa},\ \ \ a.s.;
\end{align}
and
\begin{align}\label{eq CLIL 2}
\varliminf_{\varepsilon \rightarrow 0+}\frac{\sup_{0<|x-y|\le \varepsilon} |u_{t}(x)-u_{t}(y)|}{\left(\varepsilon/ \ln\ln (1/\varepsilon)\right)^{H} }= C_{8, H, \kappa},\ \ \ a.s.
\end{align}
\end{corollary}
\section{The proof of Theorem \ref{thm main}}
\subsection{The proof of (a) in Theorem \ref{thm main}}
The method of proof is similar to those of \cite[Theorem 3.3]{K} and \cite[Proposition 3.1]{FKM}, but the computations are more involved in the fractional noise case.
Choose and fix some $x\in \mathbb R$. For every $t,\varepsilon >0$, by \eqref{eq solut}, we have
\begin{align*}
&u_{t+\varepsilon}(x)-u_t(x)\notag\\
=& \int_0^t\int_{\mathbb R}[p_{t+\varepsilon-s}(x-y)-p_{t-s}(x-y)] W(ds, dy)+ \int_t^{t+\varepsilon}\int_{\mathbb R} p_{t+\varepsilon-s}(x-y) W(ds, dy).
\end{align*}
Let
\begin{align*}
J_1:=&\int_0^t\int_{\mathbb R}[p_{t+\varepsilon-s}(x-y)-p_{t-s}(x-y)] W(ds, dy),\\
J_2:=&\int_t^{t+\varepsilon}\int_{\mathbb R} p_{t+\varepsilon-s}(x-y) W(ds, dy).
\end{align*}
The construction of the Gaussian noise $W$, which is white in time, ensures that $J_1$ and $J_2$ are independent mean-zero Gaussian random variables. Thus,
\begin{align*}
\mathbb E\left[|u_{t+\varepsilon}(x)-u_t(x)|^2 \right]
= \mathbb E(J_1^2)+\mathbb E(J_2^2).
\end{align*}
Let us compute their variances respectively. First, we compute the variance of $J_2$ by noting that
\begin{align*}
\mathbb E(J_2^2)=&c_{1, H}\int_t^{t+\varepsilon} \int_{\mathbb R}\left|\mathcal F p_{t+\varepsilon-s}(\xi) \right|^{2}|\xi|^{1-2H}ds d\xi\\
=& c_{1, H} \int_t^{t+\varepsilon} \int_{\mathbb R} e^{-\kappa ( t+\varepsilon-s) |\xi|^2}|\xi|^{1-2H}ds d\xi\\
=& c_{1, H} \int_0^{\varepsilon} \int_{\mathbb R} e^{-\kappa s |\xi|^2}|\xi|^{1-2H}ds d\xi.
\end{align*}
The change of variables $\tau:=\sqrt{\kappa s}\xi$ yields
\begin{align}\label{eq J2}
\mathbb E(J_2^2)=& c_{1, H} \int_0^{\varepsilon} \int_{\mathbb R} e^{-|\tau|^2}|\tau|^{1-2H} (\kappa s)^{-1+H} ds d\tau\notag\\
=& c_{1, H}\Gamma(1-H) H^{-1}\kappa^{H-1}\varepsilon^{H}.
\end{align}
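As a numerical sanity check (not part of the proof), the closed form \eqref{eq J2} can be verified by quadrature; the Python sketch below, with arbitrarily chosen sample values of $H$, $\kappa$ and $\varepsilon$, integrates out $s$ explicitly and compares the remaining one-dimensional integral with the right-hand side of \eqref{eq J2}.
\begin{verbatim}
# Numerical sanity check (illustration only) of
#   E(J_2^2) = c_{1,H} * Gamma(1-H) * H^{-1} * kappa^{H-1} * eps^H ,
# for assumed sample values of H, kappa, eps.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

H, kappa, eps = 0.3, 1.7, 0.25
c1H = gamma(2 * H + 1) * np.sin(np.pi * H) / (2 * np.pi)

# int_0^eps e^{-kappa s xi^2} ds = (1 - e^{-kappa eps xi^2}) / (kappa xi^2)
def integrand(xi):
    return (1 - np.exp(-kappa * eps * xi**2)) / (kappa * xi**2) * xi**(1 - 2*H)

numeric = 2 * c1H * quad(integrand, 0, np.inf)[0]   # factor 2: integrand is even in xi
closed_form = c1H * gamma(1 - H) / H * kappa**(H - 1) * eps**H
print(numeric, closed_form)   # the two numbers should agree to several digits
\end{verbatim}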
For the term $J_1$, we have
\begin{align}\label{eq J1}
\mathbb E(J_1^2)=& c_{1, H}\int_0^{t} \int_{\mathbb R}\left|\mathcal F p_{t+\varepsilon-s}(\xi)- \mathcal F p_{t-s}(\xi)\right|^{2}|\xi|^{1-2H}ds d\xi\notag\\
=& c_{1, H}\int_0^{t} \int_{\mathbb R}\left|\mathcal F p_{s+\varepsilon}(\xi)- \mathcal F p_{s}(\xi)\right|^{2}|\xi|^{1-2H}ds d\xi\notag\\
=&c_{1, H}\int_0^{t} \int_{\mathbb R}e^{-\kappa s|\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi.
\end{align}
This integral is difficult to evaluate directly. By a change of variables and Lemma \ref{lem integ} in the Appendix, we have
\begin{align}\label{eq J12}
\int_0^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi =& \kappa^{-1}\int_{\mathbb R} \left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H} d\xi\notag\\
=& \varepsilon^H \kappa ^{H-1}\int_{\mathbb R}\left(1-e^{-\frac{ |\tau|^2}{2}}\right)^2 |\tau|^{-1-2H} d\tau\notag\\
=&\Gamma(1-H) H^{-1}(2^{1-H}-1)\kappa^{H-1}\varepsilon^H.
\end{align}
Therefore,
\begin{align}\label{eq J13}
\mathbb E(J_1^2)=& c_{1, H} \Gamma(1-H) H^{-1}(2^{1-H}-1)\kappa^{H-1}\varepsilon^H\notag\\
&- c_{1, H}\int_t^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}-1\right)^2 |\xi|^{1-2H}ds d\xi,
\end{align}
and hence
\begin{align}\label{eq J3}
\mathbb E(J_1^2)+\mathbb E(J_2^2)= &c_{1, H}\Gamma(1-H) H^{-1} 2^{1-H} \kappa^{H-1}\varepsilon^{H}\notag\\
&- c_{1, H}\int_t^{\infty} \int_{\mathbb R}e^{-\kappa s |\xi|^2}\left(1-e^{-\frac{\kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H}ds d\xi.
\end{align}
Let $\eta$ denote a white noise on $\mathbb R$ independent of $W$, and consider the Gaussian process $\{T_t\}_{t\ge0}$ defined by
$$
T_t:=\left(\frac{c_{1,H}}{\kappa}\right)^{\frac12}\int_{-\infty}^{\infty} \left( 1- e^{-\frac{ \kappa t |\xi|^2}{2}}\right) |\xi|^{-\frac12-H}\eta(d\xi),\ \ t\ge0.
$$
Then $\{T_t\}_{t\ge0}$ is a well-defined mean-zero Wiener integral process, $T_0=0$, and
$$
{\rm Var} (T_t)=\frac{c_{1,H}}{\kappa}\int_{-\infty}^{\infty} \left( 1- e^{-\frac{ \kappa t |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi<\infty, \ \ \text{for all } t>0.
$$
Furthermore, we note that for any $t,\varepsilon>0$,
\begin{align}\label{eq T}
\mathbb E(|T_{t+\varepsilon}-T_t|^2)=&\frac{c_{1,H}}{\kappa} \int_{-\infty}^{\infty} \left( e^{-\frac{ \kappa t |\xi|^2}{2}}- e^{-\frac{ \kappa (t+\varepsilon) |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi\notag\\
=&\frac{c_{1,H}}{\kappa}\int_{-\infty}^{\infty} e^{- \kappa t |\xi|^2} \left( 1 - e^{-\frac{ \kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{-1-2H}d\xi\notag\\
=& c_{1, H} \int_t^{\infty} \int_{-\infty}^{\infty} e^{- \kappa s |\xi|^2} \left( 1 - e^{-\frac{ \kappa \varepsilon |\xi|^2}{2}}\right)^2 |\xi|^{1-2H} dsd\xi.
\end{align}
This is precisely the missing integral in \eqref{eq J3}. Therefore, by the independence of $T$ and $u$, we can rewrite \eqref{eq J3} as follows:
\begin{align}
&\mathbb E\left(\left|(u_{t+\varepsilon}(x)+T_{t+\varepsilon})-(u_t(x)+T_t)\right|^2\right)\notag\\
=&c_{1, H}\Gamma(1-H) H^{-1}\kappa^{H-1} 2^{1-H}\varepsilon^{H}\notag\\
=&\frac{ \sin( H \pi )\Gamma(2H+1)\Gamma(1-H) }{2^H H \kappa^{1-H} \pi} \varepsilon^{H}\notag\\
=& \frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)}\varepsilon^{H},
\end{align}
where in the last equality we used $\Gamma(2H+1)=2H\Gamma(2H)$ and Euler's reflection formula $\Gamma(H)\Gamma(1-H)=\pi/\sin(H\pi)$.
This implies that
$$
X_t:=\left(\frac{2^{1-H}\Gamma(2H)}{\kappa^{1-H}\Gamma(H)}\right)^{-\frac12}(u_t(x)+T_t), \ \ \ t\ge0,
$$
is a fBm with Hurst parameter $H/2$. Using the same argument as in the proof of \cite[Lemma 3.6]{K}, we know that the random process $\{T_t\}_{t\ge0}$ has a version that is infinitely differentiable on $(0, \infty)$.
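The simplification of the constant in the last display can also be checked numerically. The following Python sketch (illustration only, with assumed sample values of $H$ and $\kappa$) verifies that $c_{1,H}\Gamma(1-H)H^{-1}2^{1-H}\kappa^{H-1}$ coincides with $C_{1,H,\kappa}^{2}=2^{1-H}\Gamma(2H)/(\kappa^{1-H}\Gamma(H))$ from \eqref{eq c}.
\begin{verbatim}
# Numerical check (illustration only) of the constant identity
#   c_{1,H} Gamma(1-H) H^{-1} 2^{1-H} kappa^{H-1} = 2^{1-H} Gamma(2H) / (kappa^{1-H} Gamma(H)),
# for a few assumed sample values of H and kappa.
import numpy as np
from scipy.special import gamma

for H in (0.1, 0.3, 0.5, 0.7, 0.9):
    for kappa in (0.5, 1.0, 2.0):
        c1H = gamma(2*H + 1) * np.sin(np.pi * H) / (2 * np.pi)
        lhs = c1H * gamma(1 - H) / H * 2**(1 - H) * kappa**(H - 1)
        rhs = 2**(1 - H) * gamma(2*H) / (kappa**(1 - H) * gamma(H))
        assert np.isclose(lhs, rhs), (H, kappa, lhs, rhs)
print("constant identity verified numerically")
\end{verbatim}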
\subsection{The proof of (b) in Theorem \ref{thm main}} This result has been proved in \cite[Proposition 3.1]{FKM} when $H=1/2$. We will prove it for the case of $H\neq 1/2$.
For any $t>0, x\in \mathbb R$, let
\begin{equation}\label{eq S}
S_t(x):=\int_t^{\infty} \int_{\mathbb R} \left[p_s(x-w)-p_s(w)\right]\zeta(ds,dw),
\end{equation}
where $p_t(x)$ is given by \eqref{eq p} and $\zeta$ is a Gaussian noise independent of $W$, which is white in time and fractional in the space variable with Hurst parameter $H$.
By the argument in the proof of \cite [Lemma 3.6]{K}, we know that $\{S_t(x)\}_{x\in \mathbb R}$ admits a $C^{\infty}$-version for any $t>0$.
Next, we will prove that
\begin{equation}\label{eq claim}
\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]= \kappa^{-1}\varepsilon^{2H}.
\end{equation}
Then
\begin{equation}\label{aim}
B(x):= \kappa^{1/2}(u_t(x)+S_t(x)),\ \ \ \ x\in\mathbb R,
\end{equation}
is a two-sided fBm with Hurst parameter $H$, and (b) in Theorem \ref{thm main} holds.
In the remainder of this section, we will prove \eqref{eq claim} for $H\in (0,1/2)$ and $H\in (1/2,1)$, respectively.
\subsubsection{The case $0<H<1/2$}
For any fixed $t>0$ and $\varepsilon>0$, by Plancherel's identity with respect to $y$ and the explicit formula for $\mathcal F p_t$, we have
\begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&c_{2, H}^2\int_0^t\int_{\mathbb R^2}\Big[(p_{t-s}(x+\varepsilon-y+z)-p_{t-s}(x-y+z))-(p_{t-s}(x+\varepsilon-y)-p_{t-s}(x-y))\Big]^2 \notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times |z|^{2H-2}dsdydz\notag\\
=& \frac{1}{2\pi}c_{2,H}^2 \int_0^t \int_{\mathbb R^2} e^{-\kappa (t-s) |\xi|^2}\left|e^{i\xi z}-1\right|^2\left|e^{i \xi \varepsilon}-1\right|^2 |z|^{2H-2}ds d\xi dz.
\end{align}
Since $|e^{i\xi z}-1|^2=2(1-\cos (\xi z))$ and for any $\alpha\in(0,1)$,
\begin{equation}\label{eq iden1}
\int_0^{\infty} \frac{1-\cos(\xi z)}{z^{1+\alpha}}dz= \alpha^{-1}\Gamma(1-\alpha)\cos(\pi\alpha/2) \xi^{\alpha},
\end{equation}
(see \cite [Lemma D.1]{BJQ}), we have
\begin{align}\label{uoff}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&\frac{2 \Gamma(2H)\sin( H \pi)}{(1-2H) \pi}c_{2, H}^2 \int_0^t \int_{\mathbb R} e^{-\kappa (t-s)|\xi|^2}
\left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}dsd\xi \notag \\
=&\frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \int_{\mathbb R}\left(1- e^{- \kappa t |\xi|^2}\right)\left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \notag \\
=& \frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \notag \\
&\ \ \ \times\left(\frac{\Gamma(1-2H)\cos( H \pi)\varepsilon^{2H}}{H} - \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \right).
\end{align}
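The elementary identity \eqref{eq iden1} used in the last display is easy to confirm numerically; the Python sketch below (illustration only, with assumed sample values of $\alpha$ and $\xi$) compares a direct quadrature of the left-hand side with the closed form on the right-hand side.
\begin{verbatim}
# Numerical check (illustration only) of
#   int_0^infty (1 - cos(xi z)) / z^(1+alpha) dz
#     = Gamma(1-alpha) cos(pi alpha / 2) xi^alpha / alpha ,  alpha in (0,1),
# for assumed sample values of alpha and xi.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, xi = 0.6, 2.0
lhs = quad(lambda z: (1 - np.cos(xi * z)) / z**(1 + alpha), 0, np.inf, limit=500)[0]
rhs = gamma(1 - alpha) * np.cos(np.pi * alpha / 2) * xi**alpha / alpha
print(lhs, rhs)   # should agree to a few digits (slowly decaying oscillatory integrand)
\end{verbatim}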
Recall $S_t(x)$ defined by \eqref{eq S}. Using the same techniques as above, we have
\begin{align}
& \mathbb E\left(|S_t(x+\varepsilon)-S_t(x)|^2 \right) \notag \\
=& \frac{2\Gamma(2H)\sin( H \pi)}{ (1-2H)\pi}c_{2, H}^2 \int_t^{\infty} \int_{\mathbb R} e^{-\kappa s |\xi|^2}
\left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}ds d\xi\notag \\
=&\frac{4\Gamma(2H)\sin( H \pi)}{(1-2H) \kappa\pi }c_{2, H}^2 \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi,
\end{align}
which is exactly the missing integral in \eqref{uoff}. By the independence of $W$ and $\zeta$, we know that $u_t(x)$ and $S_t(x)$ are independent. Therefore, we have
\begin{equation}
\begin{aligned}
&\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]\\
=& \frac{\sin(2 H \pi)\Gamma(2H)\Gamma(1-2H) }{\kappa \pi }
\varepsilon^{2H}\\
=& \kappa^{-1} \varepsilon^{2H}.
\end{aligned}
\end{equation}
where in the last equality we used Euler's reflection formula $\Gamma(2H)\Gamma(1-2H)=\pi/\sin(2\pi H)$.
\subsubsection{The case $1/2<H<1$}
For any fixed $t>0$ and $\varepsilon>0$,
\begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&c_{3, H}^2\int_0^t\int_{\mathbb R^2} (p_{t-s}(x+\varepsilon-y+z)-p_{t-s}(x-y+z))(p_{t-s}(x+\varepsilon-y)-p_{t-s}(x-y)) \notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times |z|^{2H-2}dsdydz.
\end{align}
Since $p_{t-s}(x+y)p_{t-s}(x)=\frac12\left[p^2_{t-s}(x+y)+p^2_{t-s}(x)-(p_{t-s}(x+y)-p_{t-s}(x))^2\right]$, by Plancherel's identity with respect to $x$ and the explicit formula for $\mathcal F p_t$, we have
$$ \int_{\mathbb R}\int_{\mathbb R} p_{t-s}(x+y)p_{t-s}(x)|y|^{2H-2} dxdy = \frac{1}{2\pi}\int_{\mathbb R}\int_{\mathbb R} e^{-\kappa (t-s) |\xi|^2} \cos(\xi y) |y|^{2H-2} d\xi dy.$$
Therefore,
\begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&\frac{1}{2\pi}c_{3, H}^2\int_0^t\int_{\mathbb R^2}e^{-\kappa (t-s) |\xi|^2}\left[ 2\cos(\xi z)-\cos(\xi(\varepsilon+z))-\cos(\xi(\varepsilon-z))\right]|z|^{2H-2}ds d\xi dz \notag \\
=&\frac1\pi c_{3, H}^2\int_0^t\int_{\mathbb R^2}e^{-\kappa (t-s) |\xi|^2}\cos(\xi z)(1-\cos(\xi \varepsilon))|z|^{2H-2}ds d\xi dz.
\end{align}
By formula $\left( 3.761-9 \right)$ of \cite {GR}, we know that
\begin{equation}\label{eq formula}
\int_0^{\infty} \frac{\cos(ax)}{x^{1-\mu}}dx=\frac{\Gamma(\mu)}{a^\mu}\cos(\pi\mu/2),\ \ \ \text{for any} \ \mu \in (0,1), a>0.
\end{equation}
Since $H \in (1/2,1)$, using \varepsilonqref{eq formula} with $\mu=2H-1,$ we have
\begin{align}
&\mathbb E\left(|u_t(x+\varepsilon)-u_t(x)|^2 \right) \notag \\
=&\frac{2\Gamma(2H-1)\cos(\pi(2H-1)/2)}{\pi}c_{3, H}^2\int_0^t\int_{\mathbb R}e^{-\kappa (t-s) |\xi|^2} (1-\cos(\xi \varepsilon)) |\xi|^{1-2H}ds d\xi \notag \\
=&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \int_{\mathbb R}\left(1- e^{- \kappa t |\xi|^2}\right)\left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \notag \\
=&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \notag \\
&\ \ \ \times \left( -\frac{\Gamma(2-2H)\cos( H \pi)}{H(2H-1)}\varepsilon^{2H} - \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi \right),
\end{align}
where in the last equality we used the identity
\begin{equation}
\int_0^{\infty} \frac{1-\cos(\xi z)}{z^{1+\alpha}}dz= -\alpha^{-1}(\alpha-1)^{-1}\Gamma(2-\alpha)\cos(\pi\alpha/2) \xi^{\alpha},\ \ \ \ \alpha\in(1,2),
\end{equation}
(see \cite [Lemma D.1]{BJQ}).
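As with \eqref{eq iden1}, this identity can be confirmed by a direct quadrature; a minimal Python sketch (illustration only, with assumed sample values of $\alpha\in(1,2)$ and $\xi$) is given below.
\begin{verbatim}
# Numerical check (illustration only) of the alpha in (1,2) identity used above,
# for assumed sample values of alpha and xi.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, xi = 1.4, 2.0
lhs = quad(lambda z: (1 - np.cos(xi * z)) / z**(1 + alpha), 0, np.inf, limit=500)[0]
rhs = -gamma(2 - alpha) * np.cos(np.pi * alpha / 2) * xi**alpha / (alpha * (alpha - 1))
print(lhs, rhs)   # should agree to a few digits
\end{verbatim}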
Recall $S_t(x)$ given by \eqref{eq S}. Using the same techniques as above, we have
\begin{align}
& \mathbb E\left(|S_t(x+\varepsilon)-S_t(x)|^2 \right) \notag \\
=& \frac{2\Gamma(2H-1)\sin( H \pi)}{\pi}c_{3, H}^2 \int_t^{\infty} \int_{\mathbb R} e^{-\kappa s |\xi|^2}
\left|e^{i \xi \varepsilon}-1\right|^2 |\xi|^{1-2H}ds d\xi\notag \\
=&\frac{2\Gamma(2H-1)\sin( H \pi)}{\kappa \pi}c_{3, H}^2 \int_{\mathbb R} e^{- \kappa t |\xi|^2} \left(\frac{1-\cos (\xi \varepsilon)}{|\xi|^{1+2H}}\right)d\xi.
\end{align}
By the independence of $W$ and $\zeta$, we know that $u_t(x)$ and $S_t(x)$ are independent. Therefore, we have
\begin{equation}
\begin{aligned}
&\mathbb E\left[\left|\left(u_t(x+\varepsilon)+S_t(x+\varepsilon) \right)-\left(u_t(x)+S_t(x) \right) \right|^2\right]\\
=& \frac{- \sin(2H \pi) \Gamma(2H-1) \Gamma(2-2H) }{\kappa \pi} \varepsilon^{2H}\\
=&\kappa^{-1} \varepsilon^{2H}.
\end{aligned}
\end{equation}
where in the last equality we used Euler's reflection formula $\Gamma(2H-1)\Gamma(2-2H)=\pi/\sin((2H-1)\pi)=-\pi/\sin(2\pi H)$.
\section{Appendix}
\begin{lemma}\label{lem integ} The following identity holds:
$$\int_0^{\infty}\left(e^{-\frac{x^2}{2}}-1\right)^2 x^{-1-2H} dx= \Gamma(1-H) H^{-1}(2^{-H}-2^{-1}).$$
\end{lemma}
\begin{proof}
The proof is inspired by Lemma A.1 in \cite{K}. By the change of variables $w=x^2/2$, we have
\begin{align*}
\int_0^{\infty}\left(e^{-\frac{x^2}{2}}-1\right)^2 x^{-1-2H} dx
=& 2^{-1-H} \int_0^{\infty}\left(e^{-w}-1\right)^2 w^{-1-H} dw\notag\\
=& 2^{-1-H} (I_{0,1}-I_{1,2}),
\end{align*}
where $I_{a, b}:=\int_0^{\infty}\left( e^{-aw}- e^{-b w}\right) w^{-1-H} dw$ for all $a,b\ge0$. Since $e^{-aw}-e^{-b w}=w\int_a^b e^{-rw}dr$, we have
\begin{align*}
I_{a, b}= \int_0^{\infty}\int_a^b e^{-rw} w^{-H}drdw
= \Gamma(1-H) \int_a^b r ^{-1+H}dr
= \Gamma(1-H) H^{-1} (b^H-a^H).
\end{align*}
Thus, $I_{0,1}-I_{1,2}=\Gamma(1-H) H^{-1} (2-2^H)$, and the lemma follows.
\end{proof}
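The identity is also easy to confirm numerically; the following minimal check (not part of the proof; the value $H=3/4$ is arbitrary) compares both sides.
\begin{verbatim}
# Numerical check of  int_0^infty (exp(-x^2/2) - 1)^2 x^(-1-2H) dx
#                   = Gamma(1-H) * (2^(-H) - 2^(-1)) / H,   for H in (1/2, 1).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

H = 0.75
lhs, _ = quad(lambda x: (np.exp(-x**2 / 2) - 1)**2 * x**(-1 - 2 * H), 0, np.inf)
rhs = gamma(1 - H) / H * (2**(-H) - 0.5)
print(lhs, rhs)  # the two values agree to quadrature accuracy
\end{verbatim}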
\vskip0.5cm
\vskip0.5cm
\noindent{\bf Acknowledgments}: R. Wang is supported by NSFC (11871382), Chinese State Scholarship Fund Award by the China Scholarship Council and Youth Talent Training Program by Wuhan University.
\vskip0.5cm
\begin{thebibliography}{abc}
\bibitem{BJQ} Balan R, Jolis M, Quer-Sardanyons L. SPDEs with affine multiplicative fractional noise in space with index $\frac14<H<\frac12$. Electron J Probab, 2015, {\bf 20}(54): 1-36
\bibitem{BJQ2016} Balan R, Jolis M, Quer-Sardanyons L. SPDEs with rough noise in space: H\"older continuity of the solution. Statist Probab Lett, 2016, {\bf 119}: 310-316
\bibitem{Dal} Dalang R C. Extending martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e's. Electron J Probab, 1999, {\bf 4}(6): 1-29
\bibitem{FKM} Foondun M, Khoshnevisan D, Mahboubi P. Analysis of the gradient of the solution to a stochastic heat equation via fractional Brownian motion. Stoch PDE: Anal Comp, 2015, {\bf 3}: 133-158
\bibitem{GR} Gradshteyn I, Ryzhik I. Table of integrals, series and products. Academic Press, 2007
\bibitem{HHLNT}Hu Y, Huang J, L\^e K, Nualart D, Tindel S. Stochastic heat equation with rough dependence in space. Ann Probab, 2017, {\bf 45}(6): 4561-4616
\bibitem{LN}Lei P, Nualart D. A decomposition of the bifractional Brownian motion and some applications. Statist Probab Lett, 2009, {\bf 79}(5): 619-624
\bibitem{MRm} Marcus M, Rosen J. Markov processes, Gaussian processes and local times. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2006
\bibitem{Mis} Mishura Y. Stochastic calculus for fractional Brownian motion and related processes. Lecture Notes in Math, 1929. Springer-Verlag, Berlin, 2008
\bibitem{MR} Monrad D, Rootz\'en H. Small values of Gaussian processes and functional laws of the iterated logarithm. Probab Theory Related Fields, 1995, {\bf 101}: 173-192
\bibitem{MW} Mueller C, Wu Z. Erratum: A connection between the stochastic heat equation and fractional Brownian motion and a simple proof of a result of Talagrand. Electron Commun Probab, 2012, {\bf 17}(8): 1-10
\bibitem{K} Khoshnevisan D. Analysis of stochastic partial differential equations. CBMS Regional Conference Series in Mathematics, 119. American Mathematical Society, 2014
\bibitem{PZ} Peszat S, Zabczyk J. Stochastic evolution equations with a spatially homogeneous Wiener process. Stochastic Process Appl, 1997, {\bf 72}(2): 187-204
\bibitem{PT} Pipiras V, Taqqu M S. Integration questions related to fractional Brownian motion. Probab Theory Related Fields, 2000, {\bf 118}: 251-291
\bibitem{DPZ} Da Prato G, Zabczyk J. Stochastic equations in infinite dimensions. Second edition. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2014
\bibitem{TTV} Tindel S, Tudor C, Viens F. Stochastic evolution equations with fractional Brownian motion. Probab Theory Related Fields, 2003, {\bf 127}: 186-204
\bibitem{TX} Tudor C, Xiao Y M. Sample paths of the solution to the fractional-colored stochastic heat equation. Stoch Dyn, 2017, {\bf 17}(1): 20pp
\bibitem{Walsh} Walsh J. An introduction to stochastic partial differential equations. Lecture Notes in Mathematics, 1180. Springer-Verlag, Berlin, 1986
\bibitem{Xiao2008} Xiao Y M. Strong local nondeterminism and sample path properties of Gaussian random fields. Asymptotic theory in probability and statistics with applications,
Adv Lect Math, 2008, 136-176
\end{thebibliography}
\end{document}
\begin{document}
\maketitle
\abstract{In these notes we summarise some recent developments on the existence and uniqueness theory for Vlasov-type equations, both on the torus and on the whole space. }
\section{An introduction to Vlasov-type equations in plasma physics}
In this note, we discuss some recent results concerning a class of PDEs used in the modelling of plasma.
Plasma is a state of matter abundant in the universe. It can be found in stars, the solar wind and the interstellar medium, and is therefore widely studied in astrophysics, as well as in many other contexts. For example, a major terrestrial application is in nuclear fusion research.
For this reason, mathematical modelling of plasma is of interest,
with different types of plasma models being suitable for different contexts.
A plasma consists of an ionised gas. It forms when an electrically neutral gas is subjected to high temperatures or a strong electromagnetic field, which causes the gas particles to dissociate into charged particles. These charged particles then interact through electromagnetic forces. The relatively long range nature of these interactions results in a collective behaviour distinct from that expected from a neutral gas.
In this article, we will discuss the well-posedness of a certain class of PDE models for plasma. We will consider equations of Vlasov type, which describe particle systems with mean field interactions.
\mc{S}ubsection{The Vlasov-Poisson system: the electrons' view-point}
The ionisation process in the formation of a plasma produces two types of charged particle: positively charged ions and negatively charged electrons. It also generally contains neutral species, since not all of the particles of the original neutral gas will dissociate. However, typically the interactions with the neutral species are weak in comparison to the interactions of the charged species. For the purposes of these notes, we will neglect interactions with the neutral particles and concentrate on the modelling of the charged particles.
In fact, it is usual to make an assumption which decouples the dynamics of the two species. It is possible to do this because the mass of an electron is much smaller than the mass of an ion.
The result is a separation between the timescales on which each species evolves: in short, the ions typically move much more slowly than the electrons.
When modelling the electrons, it is thus common to assume that the ions are stationary over the time interval of observation.
The Vlasov-Poisson system is a well-known kinetic equation describing this situation. This equation was proposed by Jeans \cite{Jeans} as a model for galaxies. Its use in the plasma context dates back to the work of Vlasov \cite{Vlasov1}.
The most commonly known version of the system models the electrons in the plasma.
The electrons are described by a density function $f = f(t,x,v)$, which is the unknown in the following system of equations:
\begin{equation} \label{eq:VP-physical-bg}
(VP) : = \begin{cases}
\partial_t f + v \cdot \nabla_x f + \frac{q_e}{m_e} E \cdot \nabla_v f = 0, \\
\nabla_x \times E = 0, \\
\epsilon_0 \nabla_x \cdot E = R_i + q_e \rho_f , \\ \displaystyle
\rho_f(t,x) : = \int_{\mathbb{R}^d} f(t,x,v) \di v, \\
f \vert_{t=0} = f_0 \geq 0 .
\end{cases}
\end{equation}
Here $q_e$ is the charge on each electron, $m_e$ is the mass of an electron and $\epsilon_0$ is the electric permittivity. $R_i : \mathbb{R}^d \to \mathbb{R}_+$ is the charge density contributed by the ions, which is independent of time since they are assumed to be stationary. The electrons experience a force $q_e E$, where $E$ is the electric field induced by the plasma itself. This is found from the Gauss law
\begin{equation}
\nabla_x \times E = 0, \quad \epsilon_0 \, \nabla_x \cdot E = R_i + q_e \rho_f,
\end{equation}
which arises as an electrostatic approximation of the full Maxwell equations.
The system \eqref{eq:VP-physical-bg} expresses the fact that each electron in the plasma feels the influence of the other particles in the plasma in an averaged sense, through the electric field $E$ induced collectively by the whole plasma. This is a long-range interaction between particles. In particular, this equation does not account for collisions between particles of any species.
The Vlasov-Poisson system as written in equation \eqref{eq:VP-physical-bg} does not yet include a boundary condition.
In this note we focus on two cases: either the periodic case
where the spatial variable $x$ lies in the $d$-dimensional flat torus $\mathbb{T}^d$
and the velocity variable $v$ lies in the whole Euclidean space $\mathbb{R}^d$, or the whole space case where both $x$ and $v$ range over $\mathbb{R}^d$.
We will use the notation $\mathcal{X}$ to denote the spatial domain, either $\mathbb{T}^d$ or $\mathbb{R}^d$ as appropriate, so that throughout this note we have $(x,v) \in \mathcal{X} \times \mathbb{R}^d$.
It is common to restrict in particular to the case where the background ion density $R_i$ is spatially uniform.
In the case of the torus, $\mathcal{X} = \mathbb{T}^d$, this results in the system
\begin{equation} \label{eq:VP-physical-torus}
(VP) : = \begin{cases}
\partial_t f + v \cdot \nabla_x f + \frac{q_e}{m_e} E \cdot \nabla_v f = 0, \\
\nabla_x \times E = 0, \\
\epsilon_0 \nabla_x \cdot E = q_e \left (\rho_f - \displaystyle \int_{\mathbb{T}^d \times \mathbb{R}^d} f \di x \di v \right ) , \\ \displaystyle
f \vert_{t=0} = f_0 \geq 0 .
\end{cases}
\end{equation}
The ion charge density is chosen to be
\begin{equation}
R_i \equiv - q_e \int_{\mathbb{T}^d \times \mathbb{R}^d} f \di x \di v
\end{equation}
so that the system is globally neutral.
This is required from the point of view of the physics under consideration due to the conservation of charge, since the plasma forms from an electrically neutral gas.
Note that any solution $f$ of \eqref{eq:VP-physical-torus} satisfies a transport equation with a divergence free vector field, which implies that the mass of $f$ is conserved by the evolution. Thus in fact
\begin{equation}
R_i \equiv - q_e \int_{\mathbb{T}^d \times \mathbb{R}^d} f_0 \di x \di v .
\end{equation}
In mathematical treatments, it is common to see \eqref{eq:VP-physical-torus} written in the rescaled form
\begin{equation} \label{eq:VP}
(VP) : = \begin{cases}
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0, \\
\nabla_x \times E = 0, \\
\nabla_x \cdot E = \rho_f - 1, \\ \displaystyle
f \vert_{t=0} = f_0 \geq 0 , \, \int_{\mathbb{T}^d \times \mathbb{R}^d} f_0 \di x \di v = 1.
\end{cases}
\end{equation}
In the whole space case $\mathcal{X} = \mathbb{R}^d$, one often considers a vanishing background, in order to have a system with finite mass. This results in the system
\begin{equation} \label{eq:VP-whole}
\begin{cases}
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0, \\
\nabla_x \times E = 0, \\
\nabla_x \cdot E = \rho_f , \\ \displaystyle
f \vert_{t=0} = f_0 \geq 0 , \, \int_{\mathbb{R}^d \times \mathbb{R}^d} f_0 \di x \di v = 1.
\end{cases}
\end{equation}
\subsection{The Vlasov-Poisson system with massless electrons: the ions' view-point}
The previous section presented the Vlasov-Poisson system as a model for the electrons in a dilute, unmagnetised, collisionless plasma.
A variant of the Vlasov-Poisson system may be used to model the ions in the plasma instead.
To derive an appropriate model, once again we make use of the large disparity between the masses of the two species. The resulting separation of timescales allows an approximation in which the two species are modelled separately.
From the point of view of the ions, the electrons have a very small mass and so are very fast moving.
Since the electrons are not stationary, a model of the form \eqref{eq:VP} is not appropriate.
Instead observe that, since the electrons move quickly relative to the ions, the frequency of electron-electron collisions is high in comparison to ion-ion or ion-electron collisions. Electron-electron collisions are expected to be relevant on the typical timescale of evolution of the ions, even while the frequencies of other kinds of collisions remain negligible.
The expected effect of the electron-electron collisions is to drive the electron distribution towards its equilibrium configuration.
In ion models it is therefore common in physics literature to assume that the electrons are close to thermal equilibrium.
In the limit of \textbf{massless electrons}, the ratio between the masses of the electrons and ions, $m_e/m_i$, tends to zero. Here $m_e$ is the mass of an electron and $m_i$ is the mass of an ion.
In the limiting regime, it is assumed that the electrons instantaneously assume the equilibrium distribution.
This approximation is often made in the physics literature, motivated by the fact that $m_e/m_i$ is close to zero in applications.
\subsubsection{The Maxwell-Boltzmann Law for Electrons}
The equilibrium distribution can be identified by studying the equation for the evolution of electrons.
Let the ion density $\rho[f_i]$ be fixed, and assume that all ions carry the same charge $q_i$. We have discussed that a possible model for the evolution of the electron density is the Vlasov-Poisson system \eqref{eq:VP-physical-bg}.
However, the Vlasov-Poisson system is a collisionless model.
As discussed above, in the long time regime we consider we expect the effect of electron-electron collisions to be significant.
Collisions in a plasma are described by the Landau-Coulomb operator $Q_L$ \cite[Chapter 4]{Lifshitz-Pitaevskii}, which is an integral operator defined as follows: for a given function $g = g(v) : \mathbb{R}^d \to \mathbb{R}$,
\begin{equation}
Q_L(g) : = \nabla_v \cdot \int_{\mathbb{R}^d} a(v - v_*) \left [ g(v_*) \nabla_v g(v) - g(v) \nabla_{v_*} g(v_*) \right ] \di v_* .
\end{equation}
The tensor $a$ is defined by
\begin{equation}
a(z) = \frac{|z|^2 \, \mathrm{Id} - z \otimes z}{|z|^3} .
\end{equation}
We add this term to the Vlasov-Poisson system to model a plasma with collisions. This results in the following model for the electron density $f_e$:
\begin{equation} \label{eq:VP-electrons}
\begin{cases} \displaystyle
\partial_t f_e + v \cdot \nabla_x f_e + \frac{q_e}{m_e} E \cdot \nabla_v f_e = \frac{C_e}{m_e^2} Q_L(f_e), \\
\nabla_x \times E = 0, \quad
\epsilon_0 \nabla_x \cdot E = q_i \rho[f_i] + q_e \rho[f_e] .
\end{cases}
\end{equation}
Here $C_e$ is a constant depending on physical quantities such as the electron charge $q_e$ and number density $n_e$, but \textit{not} on the electron mass $m_e$. For the derivation of the scaling $C_e/m_e^2$ in front of the Landau-Coulomb operator, see Bellan \cite[Chapter 13, Equation (13.46)]{Bellan}.
Consider the rescaling
\begin{equation}
F_e(t,x,v) = m_e^{-\frac{d}{2}} f_e \left ( t, x, \frac{v}{\sqrt{m_e}}\right) .
\end{equation}
Notice that this scaling preserves the macroscopic density: $\rho[F_e] = \rho[f_e]$.
Then $F_e$ satisfies
\begin{equation} \label{eq:VP-electrons-mass}
\left \{
\begin{array}{c} \displaystyle
\sqrt{m_e} \partial_t F_e + v \cdot \nabla_x F_e + q_e E \cdot \nabla_v F_e = C_e Q_L(F_e), \\ \displaystyle
\nabla \times E = 0, \quad
\epsilon_0 \nabla \cdot E = q_i \rho[f_i] + q_e \rho[F_e] .
\end{array} \right.
\end{equation}
We assume that $F_e$ converges to a stationary distribution $\bar f_e = \bar f_e(x,v)$ as $m_e$ tends to zero, and focus on formally identifying $\bar f_e$.
To identify the possible forms of $ \bar f_e$, we consider the entropy functional
\begin{equation}
H[f] : = \int_{\mathcal{X} \times \mathbb{R}^d} f \log{f} \di x \di v .
\end{equation}
For a solution $F_e$ of Equation~\eqref{eq:VP-electrons-mass},
\begin{equation}
\frac{\di}{\di t} H[F_e] = m_e^{-1/2} \int_{\mathcal{X} \times \mathbb{R}^d} (1 + \log{F_e}) \left [ - \operatorname{div}_{x,v} \left ( (v, E) F_e \right) + Q_L(F_e) \right ] \di x \di v.
\end{equation}
Integrating by parts formally, the transport term vanishes:
\begin{align}
- \int_{\mathcal{X} \times \mathbb{R}^d} (1 + \log{F_e}) \operatorname{div}_{x,v} \left ( (v, E) F_e \right) \di x \di v& = \int_{\mathcal{X} \times \mathbb{R}^d} (v, E) \cdot \nabla_{x,v} F_e \di x \di v \\
& = \int_{\mathcal{X} \times \mathbb{R}^d} \operatorname{div}_{x,v} \left ( (v, E) F_e \right) \di x \di v = 0 .
\end{align}
Thus
\begin{equation}
\frac{\di}{\di t} H[F_e] = m_e^{-1/2} \int_{\mathcal{X} \times \mathbb{R}^d} (1 + \log{F_e}) Q_L(F_e) \di x \di v.
\end{equation}
By substituting the definition of $Q_L$, one can calculate formally (see \cite{Desvillettes-Villani2000}) that
\begin{multline} \label{def:dissipation}
\frac{\di}{\di t} H[F_e] =\\ - \frac{C}{\sqrt{m_e}} \int_{\mathcal{X} \times \mathbb{R}^d \times \mathbb{R}^d} \frac{ 1 }{|v-v_*|} \left \lvert P_{(v-v_*)^{\perp}} \left [ \nabla_v \sqrt{F_e(x,v) F_e(x,v_*)} - \nabla_{v_*} \sqrt{F_e(x,v) F_e(x,v_*)} \right ] \right \rvert^2 \di v_* \di v \di x ,
\end{multline}
where $P_{(v-v_*)^{\perp}}$ denotes the operator giving the orthogonal projection onto the hyperplane perpendicular to $v- v_*$.
For a stationary solution $\bar f_e$, we must have $\frac{\di}{\di t} H[\bar f_e] = 0$, that is, the functional on the right hand side of \eqref{def:dissipation} must vanish. If $\bar f_e \in L^1$, it follows (see for example \cite[Lemma 3]{Villani1996}) that $\bar f_e$ is a local Maxwellian of the form
\begin{equation} \label{def:bar-fe}
\bar f_e(x,v) = \rho_e(x) \left (\pi \beta_e(x)\right )^{d/2} \exp \left [ - \beta_e(x) |v - u_e(x)|^2 \right ] .
\end{equation}
The electron density $\rho_e$, mean velocity $u_e$ and inverse temperature $\beta_e$ can then be studied using an argument similar to the one given in the proof of \cite[Theorem 1.1]{BGNS18}.
Substituting the form \eqref{def:bar-fe} into equation \eqref{eq:VP-electrons}, we obtain the following identity for all $x$ such that $\rho_e(x) \neq 0$ and all $v \in \mathbb{R}^d$:
\begin{multline}
- \nabla_x \beta_e \cdot (v - u_e) |v - u_e|^2 - u_e \cdot \nabla_x \beta_e |v - u_e|^2 + \beta_e (v - u_e)^\top \nabla_x u_e (v - u_e)\\
+ (v - u_e) \cdot \left [ \nabla_x \log{(\rho_e \beta_e^{d/2})} - q_e \beta_e E + u_e \cdot \nabla_x u_e\right] + u_e \cdot \nabla_x \log{(\rho_e \beta_e^{d/2})} = 0 .
\end{multline}
For each fixed $x$, the left hand side is a polynomial in $v - u_e(x)$, whose coefficients must all be equal to zero.
For example, by looking at the cubic term we see that $\nabla_x \beta_e = 0$ and thus $\beta_e$ must be a constant independent of $x$.
The quadratic term then gives
\begin{equation}
v^\top \nabla_x u_e v = 0 \qquad \text{for all } v \in \mathbb{R}^d,
\end{equation}
which implies that $\nabla_x u_e$ is skew-symmetric.
On a spatial domain for which a Korn inequality holds, this restricts the class of $u_e$ that can occur. For example, in the case of the torus $\mathcal{X} = \mathbb{T}^d$, the fact that the symmetric part of $\nabla_x u_e$ vanishes implies that $u_e$ is constant \cite[Proposition 13]{Desvillettes-Villani2005}.
Finally, from the linear term we obtain that
\begin{equation}
\nabla_x \log{(\rho_e \beta_e^{d/2})} - q_e \beta_e E = 0.
\end{equation}
Since $\nabla_x \times E = 0$, $E$ is a gradient, that is, it can be written as $E = - \nabla U$ for some function $U$.
Then
\begin{equation}
\nabla_x \log{(\rho_e \beta_e^{d/2})} = - q_e \beta_e \nabla_x U .
\end{equation}
From this we deduce that $\rho_e$ should be of the form
\begin{equation}
\rho_e(x) = A \exp \left ( - q_e \beta_e U \right ),
\end{equation}
for some constant $A>0$.
This is known as a \textbf{Maxwell-Boltzmann} law.
In the whole space case $\mathcal{X} = \mathbb{R}^d$, we include an additional spatial confinement of the electrons, by adding an additional potential $\Psi$ to the electron dynamics.
The equivalent of equation \eqref{eq:VP-electrons-mass} is then
\begin{equation}
\left \{
\begin{array}{c} \displaystyle
\sqrt{m_e} \partial_t F_e + v \cdot \nabla_x F_e + ( q_e E - \nabla \Psi ) \cdot \nabla_v F_e = C_e Q_L(F_e), \\ \displaystyle
\nabla \times E = 0, \quad
\epsilon_0 \nabla \cdot E = q_i \rho[f_i] + q_e \rho[F_e] .
\end{array} \right.
\end{equation}
Repeating the previous argument, we can derive the following limiting distribution in the regime $m_e \to 0$:
\begin{equation}
\rho_e = A e^{ - \beta_e (q_e U + \Psi) } = A g e^{- \beta_e q_e U},
\end{equation}
where we let $g = e^{- \beta_e \Psi}$. We assume that the confining potential $\Psi$ grows sufficiently quickly at infinity so that $g \in L^1 \cap L^\infty(\mathbb{R}^d)$.
Bardos, Golse, Nguyen and Sentis \cite{BGNS18} studied the problem of rigorously identifying the Maxwell-Boltzmann law as the distribution of electrons in the massless limit. They consider coupled systems of the form
\begin{equation} \label{eq:VP-coupled}
\begin{cases}
\partial_t f_i + v \cdot \nabla_x f_i + \frac{q_i}{m_i} E \cdot \nabla_v f_i = 0, \\
\partial_t f_e + v \cdot \nabla_x f_e + \frac{q_e}{m_e} E \cdot \nabla_v f_e = C(m_e) Q(f_e), \\
\nabla_x \times E = 0, \quad
\epsilon_0 \nabla_x \cdot E = q_i \rho[f_i] + q_e \rho[f_e] .
\end{cases}
\end{equation}
In the above, $Q$ denotes a collision operator such as a BGK or Boltzmann operator. Under suitable hypotheses on the spatial domain and the collision rate $C(m_e)$, and assuming the existence of sufficiently regular solutions of the coupled system \eqref{eq:VP-coupled}, they derive that, in the limit as $m_e/m_i$ tends to zero, the electrons indeed take on a Maxwell-Boltzmann distribution. Moreover, solutions of the system \eqref{eq:VP-coupled} converge to a solution of a system of a similar form to \eqref{eq:vpme}, but where the electron temperature depends on time and is chosen to respect the conservation of energy. Other works on this topic include, for example, the work of Bouchut and Dolbeault \cite{Bouchut-Dolbeault95} on the long time limit for the Vlasov-Poisson-Fokker-Planck system for one species -- the massless electrons limit can be related to a long time limit since \eqref{eq:VP-electrons-mass} can also be seen as a time rescaling. Herda \cite{Herda16} also considered the massless electron limit in the case with an external magnetic field. In this case the limiting system is a fluid model for the electrons, coupled with a kinetic model for the ions.
\subsubsection{The Vlasov-Poisson System in the Limit of Massless Electrons}
From equation \eqref{eq:VP-electrons}, we see that the electrostatic potential $U$ induced by a distribution $\rho[f_i]$ of ions with a background of thermalised electrons should satisfy the following semilinear elliptic PDE:
\begin{equation} \label{eq:semilinear}
- \epsilon_0 \Delta U = q_i \rho[f_i] + A q_e \, g \exp \left ( - \frac{q_e U}{k_B T_e }\right ),
\end{equation}
where in the torus case $\mathcal{X} = \mathbb{T}^d$ we let $g \equiv 1$.
The normalising constant $A$ should be chosen so that the system is globally neutral, that is, the total charge is zero:
\begin{equation} \label{eq:Poisson-SL}
\int_{\mathcal{X}} q_i \rho[f_i] + A q_e \exp \left ( - \frac{q_e U}{k_B T_e }\right ) \di x = 0 .
\end{equation}
Indeed, on the torus $\mathcal{X} = \mathbb{T}^d$, the Poisson equation
\begin{equation}
\Delta U = h
\end{equation}
can only be solved if $h$ has total integral zero.
Thus if \eqref{eq:semilinear} has a solution, global neutrality must hold automatically.
Adjusting the choice of $A$ corresponds to adding a constant to $U$. Thus without loss of generality we choose $A=1$.
Then, the nonlinear equation \eqref{eq:semilinear} replaces the standard Poisson equation for the electrostatic potential in the Vlasov-Poisson system \eqref{eq:VP-physical-bg}. After a suitable normalisation of physical constants, this leads to the following system for the ions:
\begin{equation} \label{eq:vpme}
(VPME) : = \begin{cases}
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0, \\
E = - \nabla_x U, \\
\Delta U = e^U - \rho_f , \\ \displaystyle
f \vert_{t=0} = f_0 \geq 0 , \; \int_{\mathbb{T}^d \times \mathbb{R}^d} f_0 \di x \di v = 1.
\end{cases}
\end{equation}
This is known as the Vlasov-Poisson system \textbf{with massless electrons}, or \textbf{VPME} system.
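To illustrate the nonlinear elliptic coupling in \eqref{eq:vpme}, the following minimal sketch (not taken from \cite{IGP-WP}; the density $\rho$ and grid size are arbitrary choices) solves $\Delta U = e^{U} - \rho_f$ on the one-dimensional torus by Newton's method.
\begin{verbatim}
# Solve U'' = exp(U) - rho on the torus [0,1) with a finite-difference Laplacian
# and Newton iteration.  The Jacobian  Lap - diag(exp(U))  is invertible because
# exp(U) > 0, even though the periodic Laplacian itself is singular.
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
h = 1.0 / n
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)       # a smooth density on the torus

lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0                 # periodic boundary conditions
lap /= h**2

U = np.zeros(n)
for _ in range(50):                            # Newton iteration
    F = lap @ U - np.exp(U) + rho
    dU = np.linalg.solve(lap - np.diag(np.exp(U)), -F)
    U += dU
    if np.max(np.abs(dU)) < 1e-12:
        break

E = -np.gradient(U, h)                         # electric field E = -dU/dx
\end{verbatim}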
In the whole space case $\mathcal{X} = \mathbb{R}^d$, we consider two versions of the VPME system, depending on the choice of the constant $A$.
In one case, we let $A=1$. With a suitable choice of dimensionless variables, this results in the following system:
\begin{equation} \label{eq:vpme-whole-variable}
\begin{cases}
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0, \\
E = - \nabla_x U, \\
\Delta U = g e^U - \rho_f , \\ \displaystyle
f \vert_{t=0} = f_0 \geq 0 , \; \int_{\mathbb{R}^d \times \mathbb{R}^d} f_0 \di x \di v = 1.
\end{cases}
\end{equation}
This system is structurally similar to the torus case \eqref{eq:vpme} considered above. Note however that in this model the system is not necessarily globally neutral.
In order to enforce global neutrality, we can instead choose $A$ to be a normalising constant
\begin{equation}
A = \frac{1}{\int_{\mathbb{R}^d} g e^U \di x }.
\end{equation}
Thus we obtain the following system:
\begin{equation} \label{eq:vpme-whole-fixed}
\begin{cases}
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0, \\
E = - \nabla_x U, \\
\Delta U = \frac{g e^U}{\int_{\mathbb{R}^d} g e^U \di x } - \rho_f , \\ \displaystyle
f \vert_{t=0} = f_0 \geq 0 , \; \int_{\mathbb{R}^d \times \mathbb{R}^d} f_0 \di x \di v = 1.
\end{cases}
\end{equation}
The VPME system has been used in the physics literature in, for instance, numerical studies of the formation of ion-acoustic shocks \cite{Mason71, SCM} and the development of phase-space vortices behind such shocks \cite{BPLT1991}, as well as in studies of the expansion of plasma into vacuum \cite{Medvedev2011}. A physically oriented introduction to the model \eqref{eq:vpme} may be found in \cite{Gurevich-Pitaevsky75}.
In \cite{IGP-WP}, we consider the problem of proving well-posedness for the VPME system \eqref{eq:vpme} under reasonable conditions on the initial datum $f_0$. The well-posedness of the systems \eqref{eq:vpme-whole-variable} and \eqref{eq:vpme-whole-fixed} is considered in a forthcoming paper.
\section{Well-posedness for Vlasov equations with smooth interactions}
The Vlasov-Poisson system is an example of a more general class of nonlinear scalar transport equations known as \textbf{Vlasov equations}. A Vlasov equation takes the following form:
\begin{equation} \label{eq:Vlasov}
\begin{cases}
\partial_t f + v \cdot \nabla_x f + F[f] \cdot \nabla_v f = 0 , \\
F[f](t,x) = - \nabla_x W \ast \rho_f, \\ \displaystyle
\rho_f(t,x) = \int_{\mathbb{R}^d} f(t,x,v) \di v , \\
f(0,x,v) = f_0(x,v) \geq 0.
\end{cases}
\end{equation}
The system \eqref{eq:Vlasov} is a mean field model for a system of interacting particles with binary interactions described by a pair potential $W: \mathcal{X} \to \mathbb{R}$.
The electron Vlasov-Poisson systems \eqref{eq:VP}, \eqref{eq:VP-whole} can be seen to be of the form \eqref{eq:Vlasov} by choosing $W$ to be the Green's function of the Laplacian on $\mathcal{X}$. By this we mean that $G$ is a function satisfying the relation
\begin{equation} \label{eq:fundamental-solution}
- \Delta G = \delta_0 - 1 \quad \text{for } \mathcal{X} = \mathbb{T}^d , \quad \text{ or } \quad - \Delta G = \delta_0 \quad \text{for } \mathcal{X} = \mathbb{R}^d .
\end{equation}
The function $U = G \ast ( \rho_f - 1 )$ is a solution of the Poisson equation, respectively
\begin{equation}
- \Delta U = \rho_f - 1 \quad \text{on} \; \mathbb{T}^d \quad \text{ or } \quad - \Delta U = \rho_f \quad \text{on} \; \mathbb{R}^d .
\end{equation}
Thus the Vlasov-Poisson systems \eqref{eq:VP}, \eqref{eq:VP-whole} are of the form \eqref{eq:Vlasov}.
The available well-posedness theory for the system \eqref{eq:Vlasov} depends on the choice of the interaction potential $W$, and in particular on the regularity of the force $- \nabla W$. For example, if $\nabla W$ is a Lipschitz function, then the system \eqref{eq:Vlasov} is well-posed in the class $C \left ( [0,\infty); \mathcal{M}_+(\mathcal{X} \times \mathbb{R}^d) \right )$ - the space of continuous paths taking values in the space $\mathcal{M}_+(\mathcal{X} \times \mathbb{R}^d)$ of finite measures on $\mathcal{X} \times \mathbb{R}^d$ equipped with the topology of weak convergence of measures.
This case was considered for example by Braun and Hepp \cite{Braun-Hepp} and by Dobrushin \cite{Dobrushin}.
A path $f \in C \left ( [0,\infty); \mathcal{M}_+ (\mathcal{X} \times \mathbb{R}^d) \right )$ is a weak solution of the Vlasov equation \eqref{eq:Vlasov} if, for all test functions $\phi \in C^1_c \left ( [0,\infty) \times \mathcal{X} \times \mathbb{R}^d \right )$,
\begin{equation} \label{eq:Vlasov-weak}
\int_0^\infty \int_{\mathcal{X} \times \mathbb{R}^{d}} \Big[ \partial_t \phi + v \cdot \nabla_x \phi - \left ( \nabla_x W \ast_x \rho_f \right ) \cdot \nabla_v \phi \Big ] f(t, \di x, \di v) \di t + \int_{\mathcal{X} \times \mathbb{R}^{d}} \phi(0, x,v) f_0(\di x, \di v) = 0 .
\end{equation}
Under the assumption that $\nabla W$ is a Lipschitz function, it is known that weak solutions of the Vlasov equation \eqref{eq:Vlasov} exist \cite{Braun-Hepp, Dobrushin} and are unique \cite{Dobrushin}.
\begin{thm} \label{thm:Vlasov-existence}
Assume that $\nabla W : \mathcal{X} \to \mathbb{R}^d$ is a Lipschitz function.
Let $f_0$ be a finite non-negative measure with finite first moment:
\begin{equation}
\int_{\mathcal{X} \times \mathbb{R}^d} (1 + |x| + |v|) f_0 (\di x \di v) < + \infty .
\end{equation}
Then there exists a unique weak solution $f \in C \left ( [0, + \infty) ; \mathcal{M}_+(\mathcal{X} \times \mathbb{R}^d) \right )$ of the Vlasov equation \eqref{eq:Vlasov}.
\end{thm}
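As an illustration of the mean field structure behind Theorem~\ref{thm:Vlasov-existence} (a sketch added for these notes only, with an arbitrary smooth potential $W(x) = \cos(2\pi x)$ on the one-dimensional torus), the empirical measure of the following $N$-particle system approximates a solution of \eqref{eq:Vlasov} as $N$ grows.
\begin{verbatim}
# Symplectic Euler simulation of N particles with mean-field force
#   F_i = -(1/N) * sum_j W'(x_i - x_j),   with W(x) = cos(2*pi*x),
# so that F_i = (2*pi/N) * sum_j sin(2*pi*(x_i - x_j)).
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 500, 1e-2, 200
x = rng.uniform(0.0, 1.0, N)     # positions on the torus [0, 1)
v = rng.normal(0.0, 1.0, N)      # velocities

def force(x):
    diff = x[:, None] - x[None, :]
    return (2 * np.pi / N) * np.sin(2 * np.pi * diff).sum(axis=1)

for _ in range(steps):
    v += dt * force(x)
    x = (x + dt * v) % 1.0       # periodic in space
\end{verbatim}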
\section{Well-posedness for the Vlasov-Poisson System}
In the case of the Vlasov-Poisson system for electrons \eqref{eq:VP}, the interaction potential $W$ is chosen to be the function $G$ defined by the relation \eqref{eq:fundamental-solution}. The resulting force $K = - \nabla G$ is known as the Coulomb kernel.
However, $K$ is not a Lipschitz function and so the Vlasov-Poisson system does not satisfy the assumptions of Theorem~\ref{thm:Vlasov-existence}. For example, in the whole space case, $\mathcal{X} = \mathbb{R}^d$, $G$ takes the form
\begin{equation} \label{def:G}
G_{\mathbb{R}^d}(x) = \begin{cases}
- \frac{1}{2 \pi} \log{|x|}, & d=2, \\
\frac{1}{4 \pi |x|}, & d=3 .
\end{cases}
\end{equation}
The Coulomb kernel $K_{\mathbb{R}^d} = - \nabla G_{\mathbb{R}^d}$ takes the form
\begin{equation} \label{def:K}
K(x) = \begin{cases}
\frac{x}{2 \pi |x|^2}, & d=2, \\
\frac{x}{4 \pi |x|^3}, & d=3 ,
\end{cases}
\end{equation}
and thus has a singularity at $x=0$.
On the torus $\mathcal{X} = \mathbb{T}^d$, it can be shown that $G_{\mathbb{T}^d}$ is smooth away from the origin: $G_{\mathbb{T}^d} \in C^\infty(\mathbb{T}^d \setminus \{0\})$. Near the singularity it is of the form
\begin{equation}
G_{\mathbb{T}^d} = G_{\mathbb{R}^d} + G_1,
\end{equation}
where $G_1$ is a $C^\infty$ function. Thus $K_{\mathbb{T}^d}$ possesses a singularity similar to that of $K_{\mathbb{R}^d}$.
Consequently, Theorem~\ref{thm:Vlasov-existence} does not apply to the Vlasov-Poisson system. It is not known whether the Vlasov-Poisson system is well-posed in the class of measure solutions.
However, global well-posedness has been shown for solution classes with greater regularity.
Arsen'ev \cite{Arsenev} introduced a notion of weak solution for the Vlasov-Poisson system \eqref{eq:VP} in dimension $d=3$ and proved the existence of such solutions, globally in time, for initial data $f_0$ belonging to the space $L^1 \cap L^\infty(\mathbb{R}^6)$.
The boundedness condition $f_0 \in L^\infty(\mathbb{R}^{6})$ was later relaxed to $f_0 \in L^p(\mathbb{R}^{6})$, for $p$ sufficiently large, by Horst and Hunze \cite{Horst-Hunze}.
In the case of classical $C^1$ solutions, in the two-dimensional case $d=2$
Ukai and Okabe \cite{Ukai-Okabe} proved global existence for initial data $f_0 \in C^1(\mathbb{R}^4)$ decaying sufficiently fast at infinity.
In dimension $d=3$, global-in-time solutions were constructed by Pfaffelmoser \cite{Pfaffelmoser} for initial data $f_0 \in C^1_c(\mathbb{R}^6)$.
Schaeffer gave a streamlined proof of the same result in \cite{Schaeffer}.
Horst \cite{Horst1993} extended these results to include non-compactly supported initial data with sufficiently fast decay at infinity.
The methods of proof for these results are based on an analysis of the characteristic trajectories associated to system \eqref{eq:VP}.
This approach was adapted to the torus by Batt and Rein \cite{Batt-Rein}, who proved the existence of global-in-time classical solutions for \eqref{eq:VP} posed on $\mathbb{T}^3 \times \mathbb{R}^3$, for initial data $f_0 \in C^1(\mathbb{T}^3 \times \mathbb{R}^3)$ with sufficiently fast decay at infinity.
An alternative approach to the construction of global-in-time solutions in dimension $d=3$ was provided by Lions and Perthame \cite{Lions-Perthame}. Their method is based on proving the propagation of moments. They showed global existence of solutions, provided that the initial datum $f_0 \in L^1 \cap L^\infty(\mathbb{R}^d \times \mathbb{R}^d)$ has moments in velocity of sufficiently high order.
However, their strategy is for the whole space case $x \in \mathbb{R}^d$, and differs from the strategies currently available for the torus.
Pallard \cite{Pallard} then extended the range of moments that could be propagated in the whole space case and showed propagation of moments on the torus $\mathbb{T}^3$, using a method based on an analysis of trajectories (more similar to \cite{Batt-Rein, Pfaffelmoser, Schaeffer}). Chen and Chen \cite{Chen-Chen} adapted these techniques to further extend the range of moments that could be propagated for the torus case.
Lions and Perthame \cite{Lions-Perthame} proved a uniqueness criterion for their solutions under the additional technical condition that, for all $R, T >0$,
\begin{equation}
\sup \left\{ |\nabla f_0(y + vt, w)| : |y-x| \leq R, \, |w-v| \leq R \right\} \in L^\infty \left ( (0,T) \times \mathbb{R}^3_x ; L^1 \cap L^2 (\mathbb{R}^3_v) \right ) .
\end{equation}
Robert \cite{Robert} then proved uniqueness for solutions that are compactly supported in phase space for all time. Subsequently, Loeper \cite{Loeper} proved a uniqueness result which requires only boundedness of the mass density $\rho_f$, and therefore includes the compactly supported case. Loeper's result is based on proving a stability estimate on solutions of the Vlasov-Poisson system \eqref{eq:VP} with bounded density, with respect to their initial data $f_0$ -- in particular, a quantitative estimate in terms of the second order Wasserstein distance $W_2$.
In a similar vein, in the one dimensional case Hauray \cite{Hauray14} proved a weak-strong uniqueness principle, showing that if a bounded density solution exists, then this solution is unique among measure-valued solutions. This result is also based on a Wasserstein stability result.
\section{Well-posedness theory for the Vlasov-Poisson system with massless electrons}
The VPME system for ions is in general less well understood than the Vlasov-Poisson system for electrons, due to the additional nonlinearity in the elliptic equation for the electrostatic potential.
In the case of the well-posedness theory, weak solutions for the VPME system were constructed in dimension $d=3$ in the whole space by Bouchut \cite{Bouchut}, globally in time.
In one dimension, global-in-time weak solutions were constructed by Han-Kwan and Iacobelli \cite{IHK1} for measure data with a first moment. A weak-strong uniqueness principle was also proved for solutions satisfying $\rho_f \in L^\infty_{\text{loc}} \left ([0, + \infty) ; L^\infty(\mathbb{T}) \right )$: namely, if a solution with this regularity exists, then it is unique among measure solutions. However, a well-posedness theory for strong solutions in higher dimensions remained open.
In the article \cite{IGP-WP}, global well-posedness is proved for the VPME system on the torus in dimension $d=2$ and $d=3$. The main result is stated in the following theorem.
\begin{thm}[Global well-posedness: $\mathbb{T}^d$] \label{thm:main}
Let $d = 2, 3$. Let the initial datum $f_0 \in L^1 \cap L^\infty(\mathbb{T}^d \times \mathbb{R}^d)$ be a probability density satisfying
\begin{equation}
f_0(x,v) \leq \frac{C_0}{1 + |v|^{k_0}}\;\;\mbox{for some}\,\, k_0 > d, \quad \int_{\mathbb{T}^d \times \mathbb{R}^d} |v|^{m_0} f_0(x,v) \di x \di v < + \infty\;\;\mbox{for some}\,\, m_0 > d(d-1) .
\end{equation}
Then there exists a global-in-time weak solution $f \in C([0,\infty); \mathcal{P}(\mathbb{T}^d \times \mathbb{R}^d))$ of the VPME system \eqref{eq:vpme} with initial data $f_0$. This is the unique solution of \eqref{eq:vpme} with initial datum $f_0$ such that
$$
\rho_f \in L^\infty_{\text{loc}}([0,+\infty) ; L^\infty(\mathbb{T}^d) ).
$$
In addition, if $f_0$ has compact support, then at each time $t$, $f(t)$ has compact support.
\end{thm}
This theorem asks for no regularity on $f_0$, only that $f_0 \in L^1 \cap L^\infty(\mathbb{T}^d \times \mathbb{R}^d)$. The resulting solutions are therefore not $C^1$ classical solutions in general. It is thus useful to introduce a concept of {\it strong} solutions: the class of bounded distributional solutions $f$ of \eqref{eq:vpme} whose density $\rho_f$ is uniformly bounded: $\rho_f \in L^\infty_{\text{loc}}([0,+\infty) ; L^\infty(\mathcal{X}) )$. Strong solutions have several convenient properties: in particular, their characteristic ODE system is well-posed and the resulting flow can be used to represent the solutions. A consequence of this is that if the initial datum $f_0$ is additionally assumed to be $C^1$, then the resulting strong solution is in fact a $C^1$ classical solution. Therefore we may also deduce global well-posedness for classical solutions of the VPME system.
In a forthcoming paper, we also consider the problem posed on the whole space; we are able to prove the following global well-posedness result for the whole space systems \eqref{eq:vpme-whole-variable} and \eqref{eq:vpme-whole-fixed}.
\begin{thm}[Global well-posedness: $\mathbb{R}^3$]
\label{thm:wp}
Let ${f_0 \in L^1\cap L^\infty(\mathbb{R}^3 \times \mathbb{R}^3)}$ be a probability density
satisfying
$$
f_0(x,v)\leq \frac{C}{(1+|v|)^r} \,\,\,\text{ for some $r>3$}, \qquad \int_{\mathbb{R}^3\times \mathbb{R}^3}|v|^{m_0}f_0(x,v)\,dx\,dv <+\infty \,\,\,\text{ for some $m_0>6$} .
$$
Assume that $g\in L^1\cap L^\infty(\mathbb{R}^3)$, with $g\geq 0$ satisfying $\int_{\mathbb{R}^3}g=1$. Then there exists a unique solution ${f\in L^\infty([0,T] ; L^1\cap L^\infty(\mathbb{R}^3 \times \mathbb{R}^3))}$ of \eqref{eq:vpme-whole-variable} (resp. \eqref{eq:vpme-whole-fixed}) with initial datum $f_0$ such that $\rho_f \in L^{\infty}([0,T] ; L^\infty(\mathbb{R}^3))$.
\end{thm}
\begin{remark}
In particular, these results provide well-posedness for the VPME system under the same conditions as were previously known for the Vlasov-Poisson system.
\end{remark}
\subsection{Strategy for $\mathbb{T}^d$}
\subsubsection{Analysis of the Electric Field}
The first step of the proof is to obtain estimates on the regularity of the electric field $E$. We begin with a decomposition of the electric field, as was used in \cite{IHK1} for the one dimensional setting. The electric field $E$ can be seen as a sum of the electric field appearing in the electron model \eqref{eq:VP}, plus a more regular nonlinear term. For this, we use the notation $E = \bar E +\widehat E $, where
\begin{equation}
\bar E =-\nabla \bar U ,\qquad \widehat E =-\nabla \widehat U ,
\end{equation}
and $\bar U $ and $\widehat U $ solve respectively
\begin{equation} \label{electric-field-strategy}
\Delta \bar U =1-\rho_f ,\qquad \Delta \widehat U =e^{\bar U +\widehat U } - 1.
\end{equation}
We expect $\widehat E$ to be more regular than $\bar E$. The key point is to prove this rigorously, taking into account the nonlinearity in the equation satisfied by $\widehat U$. In particular we need to quantify the gain of regularity carefully.
To analyse $\widehat E$, we use techniques from the calculus of variations which allow us to deal with the nonlinearity in the equation for $\widehat U$. We then wish to quantify the gain of regularity in terms of its dependence on $\rhoho_f$. The key lemma is the following regularity estimate.
\begin{lem} \label{lem:hatU-reg}
Let $d=2,3$.
Assume that $\rho_f \in L^{\frac{d+2}{d}}$. There exist unique $\bar U, \widehat U \in W^{1,2}(\mathbb{T}^d)$ such that
\begin{equation}
\Delta \bar U =1-\rho_f ,\qquad \Delta \widehat U =e^{\bar U +\widehat U } - 1.
\end{equation}
Moreover, there exists $\alpha > 0$ such that $\widehat U \in C^{2,\alpha}(\mathbb{T}^d)$, with the quantitative estimate
\begin{equation}
\|\widehat U \|_{C^{2,\alpha}(\mathbb{T}^d)} \leq C_{\alpha,d}\,\exp\,\exp {\Bigl(C_{\alpha,d}\, \Bigl(1 + \lVert \rho_f \rVert_{L^{\frac{d+2}{d}}(\mathbb{T}^d)} \Bigr) \Bigr)}, \qquad \alpha \in \begin{cases} (0,1) \text{ if } d=2 \\ (0, \frac{1}{5}] \text{ if } d=3. \end{cases}
\end{equation}
\end{lem}
The choice of $(d+2)/d$ as the integrability exponent is relevant because
this is a quantity that we expect to be bounded uniformly in time, as a consequence of the conservation of the following energy functional associated to the VPME system:
\begin{equation} \label{def:Ee}
\mathcal{E} [f ] := \frac{1}{2}\int_{\mathbb{T}^d \times \mathbb{R}^d} |v|^2 f \di x \di v + \frac{1 }{2} \int_{\mathbb{T}^d} |\nabla U |^2 \di x + \int_{\mathbb{T}^d} U e^{U } \di x .
\end{equation}
\begin{lem} \label{lem:rho-Lp}
Let $f \geq 0$ satisfy, for some constant $C_0 > 0$,
\begin{equation}
\lVert f \rVert_{L^{\infty}(\mathbb{T}^d \times \mathbb{R}^d)} \leq C_0,\qquad
\mathcal{E} [f] \leq C_0 ,
\end{equation}
where $\mathcal{E}$ is the energy functional defined in \eqref{def:Ee}. Then the mass density
\begin{equation} \label{def:rho}
\rho_f(x) : = \int_{\mathbb{R}^d} f(x,v) \di v
\end{equation}
lies in $L^{(d+2)/d}(\mathbb{T}^d)$ with
\begin{equation} \label{rho-Lp}
\lVert \rho_f \rVert_{L^{\frac{d+2}{d}}(\mathbb{T}^d)} \leq C_1
\end{equation}
for some constant $C_1 > 0$ depending on $C_0$ and $d$ only.
\end{lem}
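For the reader's convenience, we recall the standard interpolation argument behind Lemma~\ref{lem:rho-Lp} (a sketch only, with constants left implicit): splitting the velocity integral at a radius $R>0$,
\begin{equation*}
\rho_f(x) \leq c_d \lVert f \rVert_{L^\infty} R^d + R^{-2} \int_{\mathbb{R}^d} |v|^2 f(x,v) \di v ,
\end{equation*}
and optimising in $R$ gives $\rho_f(x) \leq C \left( \int_{\mathbb{R}^d} |v|^2 f(x,v) \di v \right)^{d/(d+2)}$, with $C$ depending on $\lVert f \rVert_{L^\infty}$ and $d$. Raising this bound to the power $(d+2)/d$ and integrating in $x$ controls $\lVert \rho_f \rVert_{L^{(d+2)/d}}$ by the kinetic energy, which is in turn controlled by $\mathcal{E}[f]$ since the potential terms in \eqref{def:Ee} are bounded below.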
Using these estimates on the electric field, the proof of well-posedness is carried out in two main steps. First we prove the uniqueness of solutions for VPME under the condition that the mass density $\rho_f$ is bounded in $L^\infty(\mathbb{T}^d)$. Then, we show the global existence of solutions with bounded density, given the assumptions of Theorem~\ref{thm:main}.
\subsubsection{Uniqueness}
The first part of the proof of well-posedness is to prove the uniqueness of strong solutions, i.e. uniqueness under the condition that
\begin{equation}
\rho_f \in L^\infty_{\text{loc}} \left ( [0, + \infty) ; L^\infty(\mathbb{T}^d) \right ).
\end{equation}
For the electron Vlasov-Poisson system \eqref{eq:VP}, Loeper \cite{Loeper} proved uniqueness of solutions under this condition. In the VPME setting, we make use of Loeper's strategy to handle the electric field $\bar E$. However, to deal with $\widehat E$ further nontrivial estimates are necessary. We prove the following estimate, which quantifies the stability of $\widehat E$ with respect to the charge density $\rho_f$.
\begin{lem} \label{prop:Ustab}
For each $i=1,2$, let $\bar U _i$ and $\widehat U_i$ be respectively solutions of
\begin{equation}
\Delta \bar U _i= h_i - 1, \qquad \Delta \widehat U _i= e^{\bar U _i + \widehat U _i} - 1 ,
\end{equation}
where $h_i \in L^{\infty} \cap L^{(d+2)/d}(\mathbb{T}^d)$. Then there exists a constant $C_d > 0$ such that
\begin{align}
\label{stab-Uhat}
\lVert \nabla \widehat U _1 - \nabla \widehat U _2 \rVert^2_{L^2(\mathbb{T}^d)} &\leq \exp\,\exp {\left [ C_d \left ( 1 + \max_i\, \lVert h_i \rVert_{L^{(d+2)/d}(\mathbb{T}^d)} \right ) \right ]}
\max_i\, \lVert h_i \rVert_{L^{\infty}(\mathbb{T}^d)} \, W^2_2(h_1, h_2).
\end{align}
\end{lem}
Using these estimates, we are able to prove the following stability estimate for solutions of the VPME system \eqref{eq:vpme} relative to the initial datum, quantified in the second order Wasserstein distance $W_2$. Uniqueness of strong solutions then follows immediately.
\begin{prop}[Stability for solutions with bounded density] \label{prop:Wstab}
For $i = 1,2$, let $f _i$ be solutions of \eqref{eq:vpme} satisfying, for some constant $M$ and all $t \in [0,T]$,
\begin{equation} \label{str-str_rho-hyp}
\rho[f _i(t)] \leq M .
\end{equation}
Then there exists a constant $C $, depending on $M$, such that, for all $t \in [0,T]$,
\begin{equation}
W_2\left(f _1(t), f _2(t)\right)^2 \leq \begin{cases}
16 d e \exp{\biggl [ \log{\frac{W_2\left(f _1(0), \, f _2(0)\right)^2}{16 d e}} e^{-C t} \biggr ]} & \text{ if } t \leq t_0 , \\
\max \Big \{W_2\left(f _1(0), f _2(0)\right)^2 , d \Big \} \, e^{C (1 + \log{16}) (t - t_0)} & \text{ if } t > t_0 ,
\end{cases}
\end{equation}
where the time $t_0$ is defined by
\begin{equation}
t_0 = t_0\big(W_2\left(f _1(0), f _2(0)\right) \big) = \inf \left\{ t \geq 0 : 16 d e \exp{\biggl [ \log{\frac{W_2\left(f _1(0), \, f _2(0)\right)^2}{16 d e}} e^{-C t} \biggr ]}> d\right\} .
\end{equation}
\end{prop}
\subsubsection{Existence of Solutions}
The proof of existence is based on controlling the moments of solutions. We first show an a priori estimate, proving that the VPME propagates velocity moments of sufficiently high order. This approach was previously used to prove global existence for the electron Vlasov-Poisson system, going back to the work of Lions and Perthame \cite{Lions-Perthame} for the problem posed on $\mathbb{R}^3$. Pallard \cite{Pallard} proved propagation of moments on the torus and extended the range of moments that could be propagated in the whole space, while Chen and Chen \cite{Chen-Chen} further extended the range of moments available for the torus case.
By extending these methods to the VPME case, we show global-in-time existence of solutions for the VPME system, for any initial datum $f_0 \in L^1 \cap L^\infty(\mathbb{T}^d \times \mathbb{R}^d)$ that has a finite velocity moment of order $m_0 > d$. Note that Theorem~\ref{thm:main} requires moments of higher order than this, for the reason that stronger assumptions are required to show uniqueness.
The proposition below shows the propagation of moments for classical solutions of the VPME system. In the proof, the estimates from Lemma~\ref{lem:hatU-reg} on the nonlinear part of the potential $\widehat U$ are crucial.
\begin{prop} \label{prop:moment-propagation}
Let the dimension $d=2$ or $d=3$. Let $0 \leq f_0 \in L^1 \cap L^\infty(\mathbb{T}^d \times \mathbb{R}^d)$ have finite energy and a finite velocity moment of order $m_0 > d$:
\begin{equation}
\mathcal{E} [f_0] \leq C_0<+\infty,\qquad \int_{\mathbb{T}^d \times \mathbb{R}^d} |v|^{m_0} f_0(x,v) \di x \di v =M_0< +\infty.
\end{equation}
Let $f$ be a $C^1$ compactly supported solution of the VPME system \eqref{eq:vpme}.
Then, for all $T > 0$,
\begin{equation}
\sup_{[0,T]} \int_{\mathbb{T}^d \times \mathbb{R}^d} |v|^{m_0} f(t,x,v) \di x \di v \le C(T, C_0,M_0,m_0,\|f_0\|_\infty).
\end{equation}
\end{prop}
Using this estimate, we then prove the global existence of solutions for the VPME system under these assumptions. We first consider a regularized version of the VPME system:
\begin{equation} \label{eq:vpme-reg}
\begin{cases}
\partial_t f + v \cdot \nabla_x f - \chi_r \ast_x \nabla_x U \cdot \nabla_v f = 0, \\
\Delta U = e^U - \chi_r \ast_x \rho_f , \\ \displaystyle
f \vert_{t=0} = f_0 , \; \int_{\mathbb{T}^d \times \mathbb{R}^d} f_0 \di x \di v = 1.
\end{cases}
\end{equation}
Here $\chi_r$ is a mollifier defined for $r > 0$ by
\begin{equation}
\chi_r(x) : = r^{-d} \chi \left ( \frac{x}{r} \right ), \qquad \chi \in C^\infty_c(\mathbb{T}^d ; [0, +\infty)),
\end{equation}
where $\chi$ is a fixed smooth, radially symmetric function with compact support.
The regularized system \eqref{eq:vpme-reg} is globally well-posed. This can be proved using standard methods, for example by adapting the approach of Dobrushin \cite{Dobrushin}. The proof of Proposition~\ref{prop:moment-propagation} then provides moment estimates for the solutions of \eqref{eq:vpme-reg} that are uniform in the regularization parameter. We can then extract a limit point and show that it is a global solution of the VPME system. With this method of construction, no regularity is required on the initial datum $f_0$. Moreover, the conservation of the energy $\mathcal{E}[f]$ defined in \eqref{def:Ee} also follows -- in comparison, the energy of the weak solutions constructed by Bouchut \cite{Bouchut} is non-increasing but not necessarily conserved. We obtain the following existence result.
\begin{thm} \label{thm:existence}
Let $d = 2, 3$. Consider an initial datum $f_0 \in L^1 \cap L^\infty(\mathbb{T}^d \times \mathbb{R}^d)$ satisfying
\begin{equation}
\int_{\mathbb{T}^d \times \mathbb{R}^d} |v|^{m_0} f_0(x,v) \di x \di v < + \infty \;\;\mbox{for some}\,\, m_0 > d .
\end{equation}
Then there exists a global-in-time weak solution $f \in C([0,\infty); \mathcal{P}(\mathbb{T}^d \times \mathbb{R}^d))$ of the VPME system \eqref{eq:vpme} with initial data $f_0$, such that for all $T>0$,
\begin{equation}
\sup_{t \in [0,T]} \int_{\mathbb{T}^d \times \mathbb{R}^d} |v|^{m_0} f(t,x,v) \di x \di v < + \infty .
\end{equation}
\end{thm}
The proof of Theorem~\ref{thm:main} is then completed by showing that, under the specified decay and moment assumptions on $f_0$, the solution provided by Theorem~\ref{thm:existence} has bounded density.
Proposition~\ref{prop:Wstab} then applies, proving the uniqueness of this solution.
\subsection{Strategy for $\mathbb{R}^3$}
In the whole space case, the overall strategy is similar to the torus case: we first analyse the electrostatic potential using the decomposition $U = \bar U + \widehat U$, where
\begin{equation} \label{eq:barU}
- \Delta \bar U = \rho_f, \qquad \lim_{|x| \to \infty} \bar U(x) = 0 ,
\end{equation}
and the remainder $\widehat U$ satisfies either
\begin{equation} \label{eq:hatU-both}
\Delta \widehat U = g e^{\bar U + \widehat U} \quad \text{or} \quad \Delta \widehat U = \frac{ g e^{\bar U + \widehat U}}{\int_{\mathbb{R}^3} g e^{\bar U + \widehat U} \di x}.
\end{equation}
Once again, by using techniques from the calculus of variations we can show that the nonlinear remainder $\widehat U$ is more regular than $\bar U$. However, one first difference with the torus case is that we have to account for the behaviour of the potential at infinity.
A more significant difference occurs for the fixed charge model. Due to the normalisation of the electron charge, the nonlinearity takes a different form compared to the torus case. To deal with this, we use a different functional in the calculus of variations approach to the analysis of $\widehat U$.
For the uniqueness of strong solutions, once again we prove a stability estimate in $W_2$ using stability estimates for the electric field with respect to the charge density $\rho_f$. For $\bar E$ we use estimates devised by Loeper \cite{Loeper}. For $\widehat E$ we again need a version of Lemma~\ref{prop:Ustab}, modified in the fixed charge case to handle the different nonlinearity.
To prove existence, we again use the propagation of moments. However, the proof of the propagation of moments in the whole space differs substantially from the argument on the torus, and we rely on the approach of Lions and Perthame \cite{Lions-Perthame}, making use of the regularity estimates on $\widehat U$.
\end{document}
\begin{document}
\title[Counting elliptic curves with a 7-isogeny]{Counting elliptic curves over the rationals \\ with a 7-isogeny}
\author{Grant Molnar}
\address{Department of Mathematics\\
Dartmouth College\\
Hanover, NH 03755-3551}
\email{[email protected]}
\urladdr{http://www.grantmolnar.com}
\author{John Voight}
\address{Department of Mathematics\\
Dartmouth College\\
Hanover, NH 03755-3551}
\email{[email protected]}
\urladdr{http://www.math.dartmouth.edu/~jvoight}
\begin{abstract}
We count by height the number of elliptic curves over the rationals, both up to isomorphism over the rationals and over an algebraic closure thereof, that admit a cyclic isogeny of degree $7$.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}\label{Section: Introduction}
\subsection{Motivation and setup}
Number theorists have an enduring, and recently renewed, interest in the arithmetic statistics of elliptic curves: broadly speaking, we study asymptotically the number of elliptic curves of bounded size with a given property. More precisely, every elliptic curve $E$ over $\mathbb{Q}$ is defined uniquely up to isomorphism by a Weierstrass equation of the form
\begin{equation}
E \colon y^2 = x^3 + Ax + B\label{Equation: Weierstrass equation}
\end{equation}
with $A,B \in \mathbb{Z}$ satisfying $4A^3+27B^2 \neq 0$ and such that no prime $\ell$ has $\ell^4 \mid A$ and $\ell^6 \mid B$. Let $\mathscr{E}$ be the set of elliptic curves of this form: we define the \defi{height} of $E \in \mathscr{E}$ by
\begin{equation}
\hht(E) \colonequals \max(\abs{4A^3},\abs{27B^2}).
\end{equation}
For $X \geq 1$, let $\mathscr{E}_{\leq X} \colonequals \{E \in \mathscr{E} : \hht(E) \leq X\}$. Mathematicians have studied the count of those $E \in \mathscr{E}_{\leq X}$ which admit (or are equipped with) additional level structure as $X \to \infty$, and they have done so more generally over global fields.
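For concreteness, the finite set $\mathscr{E}_{\leq X}$ can be enumerated directly for modest $X$; the following sketch (illustrative only, not used in the proofs) lists the admissible pairs $(A,B)$.
\begin{verbatim}
# Enumerate models y^2 = x^3 + A*x + B with ht(E) = max(|4A^3|, |27B^2|) <= X.
from math import isqrt

def is_minimal(A, B):
    # no prime l may have l^4 | A and l^6 | B (0 counts as divisible by everything)
    return not any(A % l**4 == 0 and B % l**6 == 0 for l in (2, 3, 5, 7, 11, 13))

def curves_up_to(X):
    Amax = int((X / 4) ** (1 / 3)) + 1
    Bmax = isqrt(X // 27) + 1
    return [(A, B)
            for A in range(-Amax, Amax + 1)
            for B in range(-Bmax, Bmax + 1)
            if 4 * A**3 + 27 * B**2 != 0
            and max(abs(4 * A**3), abs(27 * B**2)) <= X
            and is_minimal(A, B)]

print(len(curves_up_to(10**4)))   # number of curves of height at most 10^4
\end{verbatim}
Here the small list of primes is more than sufficient for the heights considered in this illustration.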
In recent work, many instances of this problem have been resolved. For example, Harron--Snowden \cite{Harron-Snowden} and Cullinan--Kenney--Voight \cite{Cullinan-Kenney-Voight} (see also previous work of Duke \cite{Duke} and Grant \cite{Grant}) produced asymptotics for counting those elliptic curves $E$ for which the torsion subgroup $E(\mathbb{Q})_{\textup{tors}}$ of the Mordell--Weil group is isomorphic to a given finite abelian group $T$, i.e., they estimated $\#\set{E \in \mathscr{E}_{\leq X} : E(\mathbb{Q})_{\tors} \simeq T}$ as $X \to \infty$ for each of the fifteen groups $T$ indicated in Mazur's theorem on torsion. These cases correspond to genus zero modular curves with infinitely many rational points. For such $T$, they established an asymptotic with an effectively computable constant and a power-saving error term. Moreover, satisfactory interpretations of the exponent of $X$ and the constants appearing in these asymptotics are provided. The main ingredients in the proof are the Principle of Lipschitz (also called Davenport's Lemma \cite{Davenport}) and an elementary sieve.
Moving on, we consider asymptotics for
\begin{equation}
\# \set{E \text{\rm i}n \mathscr{E}_{\leq X} : \textup{$E$ admits a cyclic $N$-isogeny}}
\label{Equation: counting N-isogenies}
\end{equation}
(where we mean that the $N$-isogeny is defined over $\mathbb{Q}$). Our attention is again first drawn to the cases where the modular curve $Y_0(N)$, parametrizing elliptic curves with a cyclic $N$-isogeny, has genus zero: namely, $N=1, \dots, 10, 12, 13, 16, 18, 25$. For $N \leq 4$, we again have an explicit power-saving asymptotic, with the case $N=3$ due to Pizzo--Pomerance--Voight \cite{Pizzo-Pomerance-Voight} and the case $N=4$ due to Pomerance--Schaefer \cite{Pomerance-Schaefer}. For all but four of the remaining values, namely $N=7,10,13,25$, Boggess--Sankar \cite{Boggess-Sankar} provide
at least the correct growth rate.
For both torsion and isogenies, work of Bruin--Najman \cite{Bruin-Najman} and Phillips \cite{Phillips1} extends these counts to a general number field $K$.
However, the remaining four cases have quite stubbornly resisted these methods. The obstacle can be seen in quite elementary terms: the `universal' elliptic curve with an $N$-isogeny is of the form $dy^2 = x^3 + f(t)x + g(t)$ with $f(t),g(t) \text{\rm i}n \mathbb{Q}[t]$ (for $t \text{\rm i}n \mathbb{Q}$ away from a finite set and $d \text{\rm i}n \mathbb{Z}$ a squarefree twisting parameter), and for these four values of $N$ we have $\gcd(f(t),g(t)) \neq 1$. Phrased geometrically, the corresponding `universal' elliptic surface over $\mathbb{P}^1$ has places of additive reduction (more precisely, type II). Either way, this breaks the sieve---and new techniques are required.
\subsection{Results}
For $X \geq 1$, we define
\begin{equation}
N_{7}(X) \colonequals \# \set{E \text{\rm i}n \mathscr{E}_{\leq X} : \textup{$E$ admits a cyclic} \ 7\text{-isogeny}}.
\end{equation}
Our main result is as follows (\mathbb{C}ref{Theorem: asymptotic for N(X)}).
\begin{theorem}\label{Intro Theorem: asymptotic for N(X)}
There exist effectively computable $c_1,c_2 \text{\rm i}n \mathbb{R}_{>0}$ such that for every $\epsilon > 0$, we have
\[
N_{7}(X) = c_1 X^{1/6} \log X + c_2 X^{1/6} + O(X^{7/45 + \epsilon})
\]
as $X \to \text{\rm i}nfty$, where the implied constant depends on $\epsilon$.
\end{theorem}
The constants $c_1,c_2$ in \mathbb{C}ref{Intro Theorem: asymptotic for N(X)} are explicitly given, and estimated numerically in \cref{Section: Computations} as $c_1 = 0.09285536\ldots$ and $c_2 \approx -0.16405$. It turns out that no elliptic curve over $\mathbb{Q}$ admits two $7$-isogenies with distinct kernels (\mathbb{C}ref{prop:noe7isogn}), so $N_{7}(X)$ also counts elliptic curves \emph{equipped with} a 7-isogeny.
The first step in our strategy to prove \mathbb{C}ref{Intro Theorem: asymptotic for N(X)} diverges from the methods of Boggess--Sankar \cite{Boggess-Sankar} and Phillips \cite{Phillips1}, where the twists are resolved by use of a certain modular curve (denoted by $X_{1/2}(N)$). Instead, we first count twist classes directly, as follows. Let $\mathbb{Q}alg$ be an algebraic closure of $\mathbb{Q}$. Up to isomorphism \emph{over $\mathbb{Q}alg$}, every elliptic curve $E$ over $\mathbb{Q}$ with $j(E) \neq 0,1728$ has a unique Weierstrass model \eqref{Equation: Weierstrass equation} with the additional property that $B>0$ and no prime $\ell$ has $\ell^2 \mid A$ and $\ell^3 \mid B$; such a model is called \defi{twist minimal}. (See \cref{Subsection: Twist minimal Weierstrass equations} for $j(E)=0,1728$.) Let $\scrE^{\textup{tw}} \subset \mathscr{E}$ be the set of twist minimal elliptic curves, and let $\scrE^{{\rm tw}}_{\leq X} \colonequals \scrE^{\textup{tw}} \cap \mathscr{E}_{\leq X}$ be those with height at most $X$.
Accordingly, we obtain asymptotics for
\begin{equation}
N_{7}^{\textup{tw}}(X) \colonequals \# \{E \text{\rm i}n \scrE^{{\rm tw}}_{\leq X} : \textup{$E$ admits a cyclic $7$-isogeny}\}
\end{equation}
as follows (\mathbb{C}ref{Theorem: asymptotic for twN(X)}).
\begin{theorem}\label{Intro Theorem: asymptotic for twN(X)}
We have
\[
N_{7}^{\textup{tw}}(X) = 3\zeta(2)c_1 X^{1/6} + O(X^{2/15} \log^{17/5} X)
\]
as $X \to \text{\rm i}nfty$, with $c_1$ as in \textup{\mathbb{C}ref{Intro Theorem: asymptotic for N(X)}}.
\end{theorem}
For an outline of the proof, see \cref{sec:decomp}. The use of the Principle of Lipschitz remains fundamental, but the sieving is more involved: we decompose the counting function into progressively simpler pieces that can be estimated. (See \mathbb{C}ref{rmk:moduli} for a stacky interpretation.) We then deduce \mathbb{C}ref{Intro Theorem: asymptotic for N(X)} from \mathbb{C}ref{Intro Theorem: asymptotic for twN(X)} by counting twists using a Tauberian theorem (attributed to Landau). The techniques of this paper can be adapted to handle the cases $N = 10, 13, 25$, which have places of type III additive reduction; these will be treated in upcoming work.
\subsection{Contents}
In \cref{Section: Elliptic curves and lattices}, we set up basic notation and investigate minimal twists. In \cref{Section: Some analytic trivia}, we tersely review some needed facts from analytic number theory. In \cref{Section: Estimating twN(X)}, we pull together material from the earlier sections to prove \mathbb{C}ref{Intro Theorem: asymptotic for twN(X)}. In \cref{Section: Working over the rationals}, we use Landau's Tauberian theorem and \mathbb{C}ref{Intro Theorem: asymptotic for twN(X)} to obtain \mathbb{C}ref{Intro Theorem: asymptotic for N(X)}. In \cref{Section: Computations}, we describe algorithms to compute the various quantities we study in this paper, and report on their outputs.
\section{Elliptic curves and isogenies}\label{Section: Elliptic curves and lattices}
In this section, we set up what we need from the theory of elliptic curves.
\subsection{Height, minimality, and defect} \label{Subsection: Twist minimal Weierstrass equations}
We begin with some notation and terminology (repeating and elaborating upon the introduction); we refer to Silverman \cite[Chapter III]{Silverman} for background.
Let $E$ be an elliptic curve over $\mathbb{Q}$. Recall that a \defi{(simplified) integral Weierstrass equation} for $E$ is an affine model of the form
\begin{equation} \label{eqn:yaxb}
y^2 = x^3 + Ax + B
\end{equation}
with $A,B \text{\rm i}n \mathbb{Z}$. Let
\begin{equation} \label{eqn:HAB}
H(A,B) \colonequals \max(\abs{4A^3},\abs{27B^2}).
\end{equation}
The largest $d \text{\rm i}n \mathbb{Z}_{>0}$ such that $d^4 \mid A$ and $d^6 \mid B$ is called the \defi{minimality defect} $\mindefect(A,B)$ of the model. We then define the \defi{height} of $E$ to be
\begin{equation}
\hht(E)=\hht(A,B) \colonequals \frac{H(A,B)}{\mindefect(A,B)^{12}}, \label{eqn:justheight}
\end{equation}
well-defined up to isomorphism. In fact, $E$ (up to isomorphism over $\mathbb{Q}$) has a unique \defi{minimal} model
\[ y^2=x^3+(A/d^4)x+(B/d^6)\] with minimality defect $d=1$. Let $\mathscr{E}$ be the set of elliptic curves over $\mathbb{Q}$ in their minimal model, and let
\begin{equation}
\mathscr{E}_{\leq X} \colonequals \{E \text{\rm i}n \mathscr{E} : \hht(E) \leq X\}.
\end{equation}
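For illustration, the following short Python sketch (ours, not part of the formal argument, and distinct from the computations reported in \cref{Section: Computations}) computes the minimality defect and the height of an integral model directly from \eqref{eqn:HAB} and \eqref{eqn:justheight}.
\begin{verbatim}
# Sketch: minimality defect and height of y^2 = x^3 + Ax + B, following
# the definitions above (assumes (A, B) != (0, 0)).
def minimality_defect(A, B):
    limit = int(round(abs(A) ** 0.25 if A != 0 else abs(B) ** (1 / 6))) + 1
    best = 1
    for d in range(2, limit + 1):
        if A % d ** 4 == 0 and B % d ** 6 == 0:
            best = d
    return best

def height(A, B):
    H = max(abs(4 * A ** 3), abs(27 * B ** 2))
    return H // minimality_defect(A, B) ** 12

# Example: (A, B) = (16, 64) has minimality defect 2; its minimal model
# is (1, 1), of height max(4, 27) = 27.
print(minimality_defect(16, 64), height(16, 64))   # 2 27
\end{verbatim}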
Let $\mathbb{Q}alg$ be an algebraic closure of $\mathbb{Q}$. We may similarly consider all integral Weierstrass equations for $E$ which define a curve isomorphic to $E$ \emph{over $\mathbb{Q}alg$}---these are the \defi{twists} of $E$ (defined over $\mathbb{Q}$). Let $E$ have $j(E) \neq 0,1728$. We call the largest $e\text{\rm i}n \mathbb{Z}_{>0}$ such that $e^2 \mid A$ and $e^3 \mid B$ the \defi{twist minimality defect} of a model \eqref{eqn:yaxb}, denoted $\twistdefect(A,B)$. Explicitly, we have
\begin{equation}
\twistdefect(E) = \twistdefect(A,B) \colonequals \prod_{\ell} \ell^{v_\ell}, \quad \text{where} \ v_\ell \colonequals \lfloor \min(\ord_\ell(A)/2,\ord_\ell(B)/3) \rfloor,\label{Equation: defect powers}
\end{equation}
with the product over all primes $\ell$. As above, we then define the \defi{twist height} of $E$ to be
\begin{equation} \label{eqn:htdef}
\twht(E)=\twht(A,B) \colonequals \frac{H(A,B)}{\twistdefect(A,B)^{6}},
\end{equation}
well-defined on the $\mathbb{Q}alg$-isomorphism class of $E$; and $E$ has a unique model over $\mathbb{Q}$ up to isomorphism over $\mathbb{Q}alg$ with twist minimality defect $\twistdefect(E) = e = 1$ and $B > 0$, which we call \defi{twist minimal}, namely,
\begin{equation}
y^2 = x^3 + (A/e^2)x + |B|/e^3. \label{Equation: twist-reduction of an arbitrary elliptic curve}
\end{equation}
For $j=0,1728$, we choose twist minimal models as follows:
\begin{itemize}
\text{\rm i}tem If $j(E)=0$ (equivalently, $A=0$), then we take $y^2=x^3 + 1$ of twist height $27$.
\text{\rm i}tem If $j(E)=1728$ (equivalently, $B=0$), then we take $y^2=x^3+x$ of twist height $4$.
\end{itemize}
Let $\scrE^{\textup{tw}} \subset \mathscr{E}$ be the set of twist minimal elliptic curves, and let $\scrE^{{\rm tw}}_{\leq X} \colonequals \scrE^{\textup{tw}} \cap \mathscr{E}_{\leq X}$ be those with twist height at most $X$. If $E \text{\rm i}n \scrE^{\textup{tw}}$ with $j(E) \neq 0,1728$, then the set of twists of $E$ in $\mathscr{E}$ are precisely those of the form $E^{(c)} \colon y^2 = x^3 + c^2 A x + c^3 B$ for $c \text{\rm i}n \mathbb{Z}$ squarefree, and
\begin{equation}
\hht(E^{(c)})=c^6 \hht(E) = c^6 \twht(E).\label{Equation: quadratic twists multiply height by c^6}
\end{equation}
(For $j(E)=0,1728$, we instead have sextic and quartic twists, but these will not figure here: see \mathbb{C}ref{prop:noe7isogn}.)
\begin{remark} \label{rmk:moduli0}
This setup records in a direct manner the more intrinsic notions of height coming from moduli stacks. The moduli stack $Y(1)_\mathbb{Q}$ of elliptic curves admits an open immersion into a weighted projective line $Y(1) \hookrightarrow \mathbb{P}(4,6)_\mathbb{Q}$ by $E \mapsto (A:B)$ for any choice of model \eqref{eqn:yaxb}, and the height of $E$ is the height of the point $(A:B) \text{\rm i}n \mathbb{P}(4,6)(\mathbb{Q})$ associated to $\mathscr{O}_{\mathbb{P}(4,6)}(12)$ (with coordinates harmlessly scaled by $4,27$): see Bruin--Najman \cite[\S 2, \S 7]{Bruin-Najman} and Phillips \cite[\S 2.2]{Phillips1}. Similarly, the height of the twist minimal model is given by the height of the point $(A:B) \text{\rm i}n \mathbb{P}(2,3)(\mathbb{Q})$ associated to $\mathscr{O}_{\mathbb{P}(2,3)}(6)$, which is almost but not quite the height of the $j$-invariant (in the usual sense).
\end{remark}
\subsection{Isogenies of degree $7$} \label{sec:7isog}
Next, we gather the necessary input from modular curves. Recall that the modular curve $Y_0(7)$, defined over $\mathbb{Q}$, parametrizes pairs $(E,\phi)$ of elliptic curves $E$ equipped with a $7$-isogeny $\phi$ up to isomorphism, or equivalently, a cyclic subgroup of order $7$ stable under the absolute Galois group $\Gal_\mathbb{Q} \colonequals \Gal(\mathbb{Q}alg\,|\,\mathbb{Q})$. We compute that the coarse space of $Y_0(7)$ is an affine open in $\mathbb{P}^1$, so the objects of interest are parametrized by its coordinate $t \neq -7,\text{\rm i}nfty$ (see \mathbb{C}ref{lem:7isog-param}).
More precisely, define
\begin{equation} \label{Equation: Definition of f for 7-isogenies}
\begin{aligned}
f_0(t) &\colonequals -3 (t^2 - 231 t + 735) \\
&= -3 (t^2 - (3 \cdot 7 \cdot 11)t + (3 \cdot 5 \cdot 7^2)), \\
g_0(t) &\colonequals 2 (t^4 + 518 t^3 - 11025 t^2 + 6174 t - 64827) \\
&= 2 (t^4 + (2 \cdot 7 \cdot 31)t^3 - (3^2 \cdot 5^2 \cdot 7^2)t^2 + (2 \cdot 3^2 \cdot 7^3)t - (3^3 \cdot 7^4)), \\
h(t) &\colonequals t^2 + t + 7, \\
f(t) &\colonequals f_0(t) h(t), \\
g(t) &\colonequals g_0(t) h(t).
\end{aligned}
\end{equation}
Then $h(t) = \gcd(f(t), g(t))$.
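As a quick symbolic sanity check (a sketch assuming the computer algebra package \textsf{sympy}; it is not needed for the proofs), one can verify this gcd directly:
\begin{verbatim}
from sympy import symbols, gcd, expand, Poly

t  = symbols('t')
h  = t**2 + t + 7
f0 = -3 * (t**2 - 231*t + 735)
g0 = 2 * (t**4 + 518*t**3 - 11025*t**2 + 6174*t - 64827)
f, g = expand(f0 * h), expand(g0 * h)

# the gcd is only defined up to a unit, so compare monic normalizations
assert Poly(gcd(f, g), t).monic() == Poly(h, t).monic()
\end{verbatim}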
\begin{lem} \label{lem:7isog-param}
The elliptic curves $E$ over $\mathbb{Q}$ that admit a $7$-isogeny (defined over $\mathbb{Q}$) are precisely those of the form $E \colon y^2 = x^3 + c^2f(t)x + c^3g(t)$ for some $c \text{\rm i}n \mathbb{Q}^\times$ and $t \text{\rm i}n \mathbb{Q}$ with $t \neq -7$.
\end{lem}
\begin{proof}
This follows from routine calculations with $q$-expansions for modular forms on the group $\Gamma_0(7)$, with the cusps at $t=-7,\text{\rm i}nfty$.
\end{proof}
Of course, for elliptic curves up to isomorphism over $\mathbb{Q}alg$, we can ignore the factor $c$ in \mathbb{C}ref{lem:7isog-param}.
\begin{remark}
Let
\begin{equation} \label{Equation: Definition of f for 7-isogenies, isogenous}
\begin{aligned}
f^\prime_0(t) &\colonequals -3 (t^2 + 9 t + 15) \\
&= -3 (t^2 + (3^2) t + (3 \cdot 5)), \\
g^\prime_0(t) &\colonequals 2 (t^4 + 14 t^3 + 63 t^2 + 126 t + 189) \\
&= 2 (t^4 + (2 \cdot 7) t^3 + (3^2 \cdot 7) t^2 + (2 \cdot 3^2 \cdot 7) t + (3^3 \cdot 7)), \\
f^\prime(t) &\colonequals f^\prime_0(t) h(t), \\
g^\prime(t) &\colonequals g^\prime_0(t) h(t),
\end{aligned}
\end{equation}
with $h(t)$ as above. The elliptic curve $E$ in \mathbb{C}ref{lem:7isog-param} is $7$-isogenous to
\[
E^\prime : y^2 = x^3 + c^2 f^\prime(t) x + c^3 g^\prime(t)
\]
via the marked $7$-isogeny.
\end{remark}
\begin{prop} \label{prop:noe7isogn}
No elliptic curve $E$ over $\mathbb{Q}$ admits two $7$-isogenies with distinct kernels, and no
$E$ over $\mathbb{Q}$ with $j(E)=0,1728$ admits a $7$-isogeny.
\end{prop}
\begin{proof}
For the first statement: if $E$ admits two distinct $7$-isogenies, then generators for each kernel give a basis for the $7$-torsion of $E$ in which $\Gal_\mathbb{Q}$ acts diagonally. The corresponding compactified modular curve, $X_{\textup{sp}}(7)$, has genus $1$ and $2$ rational cusps; it is isomorphic to $X_0(49)$ over $\mathbb{Q}$, and has Weierstrass equation $y^2+xy=x^3-x^2-2x-1$ and LMFDB label \href{https://lmfdb.org/EllipticCurve/Q/49/a/4}{\textsf{49.a4}}. Its Mordell--Weil group is $\mathbb{Z}/2\mathbb{Z}$, so all rational points are cusps.
For the second statement, we simply observe that $f(t)$ and $g(t)$ have no roots $t \text{\rm i}n \mathbb{Q}$.
\end{proof}
To work with integral models, we take $t=a/b$ (in lowest terms) and homogenize, giving the following polynomials in $\mathbb{Z}[a,b]$:
\begin{equation} \label{eqn:ABCab}
\begin{aligned}
C(a, b) &\colonequals b^2 h(a/b)=a^2 + ab + 7 b^2, \\
A_0(a, b) &\colonequals b^2 f_0(a/b) = -3 (a^2 - 231 a b + 735 b^2), \\
B_0(a, b) &\colonequals b^4 g_0(a/b) = 2 (a^4 + 518 a^3 b - 11025 a^2 b^2 + 6174 a b^3 - 64827 b^4), \\
A(a,b) &\colonequals b^4 f(a/b) = C(a,b)A_0(a,b), \\
B(a,b) &\colonequals b^6 g(a/b) = C(a,b)B_0(a,b). \\
\end{aligned}
\end{equation}
We have $C(a,b) = \gcd(A(a,b),B(a,b)) \text{\rm i}n \mathbb{Z}[a,b]$.
We say that a pair $(a, b) \text{\rm i}n \mathbb{Z}^2$ is \defi{groomed} if $\gcd(a, b) = 1$, $b > 0$, and $(a, b) \neq (-7, 1)$. Thus \mathbb{C}ref{lem:7isog-param} and \mathbb{C}ref{prop:noe7isogn} provide that the elliptic curves $E \text{\rm i}n \mathscr{E}$ that admit a $7$-isogeny are precisely those with a model
\begin{equation}
y^2 = x^3 + \frac{c^2 A(a, b)}{d^4} x + \frac{c^3 B(a, b)}{d^6} \label{Equation: Weierstrass equation for elliptic surface, homogenized}
\end{equation}
where $(a, b)$ is groomed, $c \text{\rm i}n \mathbb{Z}$ is squarefree, and $d=\mindefect(c^2A(a,b),c^3B(a,b))$.
Thus the count
\begin{equation} \label{eqn:NQx}
N_{7}(X) \colonequals \#\{E \text{\rm i}n \mathscr{E}_{\leq X} : \textup{$E$ admits a cyclic $7$-isogeny}\}
\end{equation}
can be computed as
\begin{equation} \label{eqn:NQX}
N_{7}(X) = \#\left\{(a, b, c) \text{\rm i}n \mathbb{Z}^3 :
\begin{minipage}{33ex} $(a, b)$ groomed, $c$ squarefree, and \\
$\hht(c^2 A(a,b),c^3 B(a,b)) \leq X$
\end{minipage}
\right\}.
\end{equation}
with the height defined as in \eqref{eqn:justheight}.
Similarly, but more simply, the subset of $E \text{\rm i}n \scrE^{\textup{tw}}$ that admit a $7$-isogeny are
\begin{equation}
E_{a,b} \colon y^2 = x^3 + \frac{A(a, b)}{e^2} x + \frac{|B(a, b)|}{e^3} \label{Equation: Weierstrass equation for elliptic surface, homogenized, twist}
\end{equation}
with $(a, b)$ groomed and $e=\twistdefect(A(a,b),B(a,b))$ the twist minimality defect \eqref{Equation: defect powers}. Accordingly, if we define
\begin{equation} \label{eqn:NQxwist}
N_{7}^{\textup{tw}}(X) \colonequals \# \{E \text{\rm i}n \scrE^{{\rm tw}}_{\leq X} : \textup{$E$ admits a cyclic $7$-isogeny}\}
\end{equation}
then
\begin{equation} \label{eqn:twistheightcalc}
N_{7}^{\textup{tw}}(X) = \# \set{(a, b) \text{\rm i}n \mathbb{Z}^2 : \textup{$(a, b)$ groomed and $\twht(A(a,b),B(a,b)) \leq X$}}.
\end{equation}
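The count \eqref{eqn:twistheightcalc} can be evaluated directly for small $X$; the following naive Python sketch (ours, for illustration only, and not the algorithm of \cref{Section: Computations}) enumerates groomed pairs in a box. The box size here is chosen heuristically; a rigorous choice would follow from \mathbb{C}ref{Corollary: bound twist defect in terms of twist height} below.
\begin{verbatim}
from math import gcd

def C(a, b):  return a*a + a*b + 7*b*b
def A(a, b):  return C(a, b) * (-3) * (a*a - 231*a*b + 735*b*b)
def B(a, b):  return C(a, b) * 2 * (a**4 + 518*a**3*b - 11025*a**2*b**2
                                    + 6174*a*b**3 - 64827*b**4)

def twist_defect(Aab, Bab):          # largest e with e^2 | A and e^3 | B
    best, e = 1, 2
    while e * e <= abs(Aab):
        if Aab % (e * e) == 0 and Bab % e**3 == 0:
            best = e
        e += 1
    return best

def twist_height(a, b):
    Aab, Bab = A(a, b), B(a, b)
    return max(abs(4 * Aab**3), abs(27 * Bab**2)) // twist_defect(Aab, Bab)**6

def N7tw_naive(X, box):              # box must be taken large enough for X
    return sum(1 for b in range(1, box + 1) for a in range(-box, box + 1)
               if gcd(a, b) == 1 and (a, b) != (-7, 1)
               and twist_height(a, b) <= X)
\end{verbatim}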
\begin{remark} \label{rmk:moduli}
Returning to \mathbb{C}ref{rmk:moduli0}, we conclude that counting elliptic curves equipped with a $7$-isogeny is the same as counting points on $\mathbb{P}(4,6)_\mathbb{Q}$ in the image of the natural map $Y_0(7) \to Y(1) \subseteq \mathbb{P}(4,6)_\mathbb{Q}$. Counting them up to twist replaces this with the further natural quotient by $\mu_2$, giving $\mathbb{P}(2,3)_\mathbb{Q}$.
\end{remark}
\subsection{Twist minimality defect}\label{Subsection: groomed and ungroomed primes}
The twist minimality defect is the main subtlety in our study of $N_{7}^{\textup{tw}}(X)$, so we analyze it right away.
\begin{lemma}\label{Lemma: 3 and 7 are the ungroomed primes}
Let $(a,b) \text{\rm i}n \mathbb{Z}^2$ be groomed, let $\ell$ be prime, and let $v \text{\rm i}n \mathbb{Z}_{\geq 0}$. Then the following statements hold.
\begin{enumalph}
\text{\rm i}tem If $\ell \neq 3, 7$, then $\ell^v \mid \twistdefect(A(a,b),B(a,b))$ if and only if $\ell^{3v} \mid C(a, b)$.
\text{\rm i}tem $\ell^{3v} \mid C(a,b)$ if and only if $\ell \nmid b$ and $h(a/b) \equiv 0 \pmod{\ell^{3v}}$.
\text{\rm i}tem If $\ell \neq 3$, then $\ell \mid C(a,b)$ implies $\ell \nmid (2a+b)=(\partial C/\partial a)(a,b)$.
\end{enumalph}
\end{lemma}
\begin{proof}
We use the notation \eqref{eqn:ABCab} and argue as in Cullinan--Kenney--Voight \cite[Proof of Theorem 3.3.1, Step 3]{Cullinan-Kenney-Voight}.
For part (a), we compute the resultants
\[
\mathbb{R}es(A_0(t,1),B_0(t,1))=\mathbb{R}es(f_0(t),g_0(t))=-2^8\cdot 3^7 \cdot 7^{14} = \mathbb{R}es(A_0(1,u),B_0(1,u)).
\]
So if $\ell \neq 2,3,7$, then $\ell \nmid \gcd(A_0(a,b),B_0(a,b))$; so by \eqref{Equation: defect powers}, if $\ell^v \mid \twistdefect(A(a,b),B(a,b))$ then $\ell^{2v} \mid C(a,b)$. But also
\[
\mathbb{R}es(B_0(t,1),C(t,1))=\mathbb{R}es(g_0(t),h(t)) = 2^8 \cdot 3^3 \cdot 7^7 = \mathbb{R}es(B_0(1,u),C(1,u)),
\]
so $\ell \nmid \gcd(B_0(a,b), C(a, b))$ and thus $\ell^v \mid \twistdefect(A(a,b),B(a,b))$ if and only if $\ell^{3v} \mid C(a,b)$. If $\ell = 2$, a short computation confirms that $B(a, b) \equiv 2 \psmod{4}$ whenever $(a, b)$ is groomed, so $8 \nmid B(a, b)$ and our claim also holds in this case.
For (b), by homogeneity it suffices to show that $\ell \nmid b$, and indeed this holds since if $\ell \mid b$ then $A(a,b) \equiv -3a^4 \equiv 0 \pmod{\ell}$ and $B(a,b) \equiv 2a^6 \equiv 0 \pmod{\ell}$, so $\ell \mid a$, a contradiction.
Part (c) follows from (b) and the fact that $h(t)$ has discriminant $\disc(h(t))=-3^3$.
\end{proof}
For $e \geq 1$, let $\widetilde{\mathcal{T}}(e) \subseteq (\mathbb{Z}/e^3\mathbb{Z})^2$ denote the image of
\[
\set{(a, b) \text{\rm i}n \mathbb{Z}^2 : (a, b) \ \text{groomed}, \ e \mid \twistdefect(A (a, b), B (a, b))}
\]
under the projection
\[
\mathbb{Z}^2 \to (\mathbb{Z} / e^3 \mathbb{Z})^2,
\]
and let $\widetilde{T}(e) \colonequals \# \widetilde{\mathcal{T}}(e)$. Similarly, let $\mathcal{T}(e) \subseteq \mathbb{Z}/e^3\mathbb{Z}$ denote the image of
\[
\set{t \text{\rm i}n \mathbb{Z} : e^2 \mid f(t) \ \text{and} \ e^3 \mid g(t)}
\]
under the projection
\[
\mathbb{Z} \to \mathbb{Z} / e^3 \mathbb{Z},
\]
and let $T(e) \colonequals \#\mathcal{T}(e)$.
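Both quantities can be computed by brute force for small $e$; the Python sketch below (ours, an illustration rather than the code of \cref{Section: Computations}) evaluates $T(e)$ straight from the definition and can be used to confirm the values appearing in the next lemma.
\begin{verbatim}
def f(t): return -3 * (t**2 - 231*t + 735) * (t**2 + t + 7)
def g(t): return 2 * (t**4 + 518*t**3 - 11025*t**2 + 6174*t
                      - 64827) * (t**2 + t + 7)

def T(e):
    return sum(1 for t in range(e**3)
               if f(t) % e**2 == 0 and g(t) % e**3 == 0)

# Values predicted by the lemma below: 0, 18, 0, 50, 27, 2
print([T(e) for e in (2, 3, 5, 7, 9, 13)])
\end{verbatim}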
\begin{lemma}\label{Lemma: bound on T(e)}
The following statements hold.
\begin{enumalph}
\text{\rm i}tem The functions $\widetilde{T}(e)$ and $T(e)$ are multiplicative, and $\widetilde{T}(e) = \varphi(e^3) T(e)$.
\text{\rm i}tem For all $\ell \neq 3,7$ and $v \geq 1$,
\begin{align*}
T(\ell^v) &= T(\ell) = 1 + \left(\frac{\ell}{3}\right).
\end{align*}
\text{\rm i}tem We have
\[
T(3) = 18, \ T(3^2) = 27, \text{ and }\ T(3^v) = 0 \ \text{for} \ v \geq 3,
\]
and
\[
T(7) = 50, \ T(7^2) = 7^4+1=2402, \text{ and }\ T(7^v) = 7^7+1=823544 \ \text{for} \ v \geq 3.
\]
\text{\rm i}tem
We have $T(e) =O(2^{\omega(e)})$, where $\omega(e)$ is the number of distinct prime divisors of $e$.
\end{enumalph}
\end{lemma}
\begin{proof}
For part (a), multiplicativity follows from the CRT (Sun Zi theorem). For the second statement, let $\ell$ be a prime, and let $e = \ell^v$ for some $v \geq 1$. Consider the injective map
\begin{equation}
\begin{aligned}
\mathcal{T}(\ell^v) \times (\mathbb{Z}/ \ell^{3 v})^\times &\to \widetilde{\calT}(\ell^v) \\
(t,u) &\mapsto (tu,u)
\end{aligned}
\end{equation}
We observe $A(1, 0) = -3$ and $B(1, 0) = 2$ are coprime, so no pair $(a, b)$ with $b \equiv 0 \pmod \ell$ can be a member of $\widetilde{\mathcal{T}}(\ell^v)$. Surjectivity of the given map follows, and counting both sides gives the result.
Now part (b). For $\ell \neq 3, 7$, \mathbb{C}ref{Lemma: 3 and 7 are the ungroomed primes}(a)--(b) yield
\[
\mathcal{T}(\ell^v) = \set{t \text{\rm i}n \mathbb{Z}/\ell^{3v} \mathbb{Z} : h(t) \equiv 0 \psmod{\ell^{3v}}}.
\]
By \mathbb{C}ref{Lemma: 3 and 7 are the ungroomed primes}(c), $h(t) \equiv 0 \pmod \ell$ implies $h^\prime(t) \not\equiv 0 \pmod \ell$, so Hensel's lemma applies and we need only count roots of $h(t)$ modulo $\ell$, which by quadratic reciprocity is
\[
1 + \parent{\frac{-3}{\ell}} = 1 + \parent{\frac{\ell}{3}} = \begin{cases}
2, & \ \text{if} \ \ell \equiv 1 \psmod 3; \\
0, & \ \text{else.}
\end{cases}
\]
Next, part (c). For $\ell = 3$, we just compute
$T(3) = 18$, $T(3^2) = 27$, and $T(3^3) = 0$; then $T(3^3) = 0$ implies $T(3^v) = 0$ for all $v \geq 3$.
For $\ell = 7$, we compute
\[
T(7) = 50, \ T(7^2) = 2402, \ T(7^3) = \dots = T(7^6) = 823544.
\]
Hensel's lemma still applies to $h(t)$: let $t_0,t_1$ be the roots of $h(t)$ in $\mathbb{Z}_7$ with $t_0 \colonequals 248044 \pmod{7^7}$ (so that $t_1=-1-t_0$). We claim that
\begin{equation}
\mathcal{T}(7^{v}) = \set{t_0} \sqcup \set{t_1 + 7^{3v - 7} u \text{\rm i}n \mathbb{Z} / 7^{3v} \mathbb{Z} :u \text{\rm i}n \mathbb{Z} / 7^7 \mathbb{Z}},\label{Equation: decomposition of T(7^v)}
\end{equation}
for $3v \geq 7$. Indeed, $g_0(t_1) \equiv 0 \pmod {7^7}$, so we can afford to approximate $t_1$ modulo $7^{3v - 7}$. As $g_0(t_0) \not\equiv 0 \pmod {7}$ and $g_0(t_1) \not\equiv 0 \pmod{7^8}$, no other values of $t$ suffice. Thus $T(7^{v}) = 1 + 7^7 = 823544$ for $v \geq 3$.
Finally, part (d). From (a)--(c) we conclude
\begin{equation}
T(e) \leq \frac{27 \cdot 823544}{4} \cdot \prod_{\substack{\ell \mid e \\ \ell \neq 3,7}} \parent{1 + \parent{\frac{\ell}{3}}} \leq 5558922 \cdot 2^{\omega(e)}
\end{equation}
so $T(e) = O(2^{\omega(e)})$ as claimed.
\end{proof}
\subsection{The common factor \texorpdfstring{$C(a, b)$}{Cab}}
In view of \mathbb{C}ref{Lemma: 3 and 7 are the ungroomed primes}, the twist minimality defect away from the primes $2,3,7$ is given by the quadratic form $C(a,b)=a^2+ab+7b^2=b^2 h(a/b)$. Fortunately, this is the norm form of a quadratic order of class number $1$, so although this is ultimately more than what we need, we record some consequences of this observation which take us beyond \mathbb{C}ref{Lemma: bound on T(e)}.
For $m \text{\rm i}n \mathbb{Z}_{>0}$, let
\begin{equation}\label{Equation: c(m)}
c(m) \colonequals \#\{(a, b) \text{\rm i}n \mathbb{Z}^2 : b > 0, \ \gcd(a, b) = 1, \ C(a, b) = m\}.
\end{equation}
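Since $C$ is positive definite, $c(m)$ is a finite count and can be evaluated directly; the following naive Python sketch (ours, for illustration) does so and can be checked against the lemma below.
\begin{verbatim}
from math import gcd, isqrt

def c(m):
    total = 0
    for b in range(1, isqrt(4 * m // 27) + 2):      # 27 b^2 / 4 <= C(a, b)
        for a in range(-isqrt(m) - b, isqrt(m) + 1):
            if a*a + a*b + 7*b*b == m and gcd(a, b) == 1:
                total += 1
    return total

# By the lemma below: c(7) = 2, c(9) = 2, c(13) = 2, c(27) = 3, c(63) = 4
print([c(m) for m in (7, 9, 13, 27, 63)])
\end{verbatim}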
\begin{lemma}\label{Lemma: c(m) is multiplicative etc}
The following statements hold.
\begin{enumalph}
\text{\rm i}tem We have $c(m n) = c(m) c(n)$ for $m,n \text{\rm i}n \mathbb{Z}_{>0}$ coprime.
\text{\rm i}tem We have
\[
c(3) = 0, \ c(3^2) = 2, \ c(3^3) = 3, \ \text{and} \ c(3^v) = 0 \ \text{for} \ v \geq 4;
\]
for $p \neq 3$ prime and $k \geq 1$ an integer, we have
\begin{equation} \label{eqn:cp}
c(p) = c(p^k) = 1 + \parent{\frac{p}{3}}.
\end{equation}
\text{\rm i}tem For $m$ and $n$ positive integers, we have
\[
c(n^3 m) \leq 3 \cdot 2^{\omega(n)-1} c(m).
\]
\end{enumalph}
\end{lemma}
\begin{proof}
Let $\zeta \colonequals (1 + \sqrt{-3})/2$,
so $\overline{\zeta} = 1 - \zeta = (1 - \sqrt{-3})/2$.
The quadratic form
\[
C(a, b) = a^2 + a b + 7 b^2 = (a+b\parent{-1 + 3 \zeta})(a+b\overline{\parent{-1 + 3 \zeta}}) =\Nm(a+b\parent{-1 + 3 \zeta})
\]
is the norm on the order $\mathbb{Z}[3 \zeta]$ in basis $\set{1, -1 + 3 \zeta}$. Recall that $\alpha \text{\rm i}n \mathbb{Z}[3 \zeta]$ is \defi{primitive} if no $n \text{\rm i}n \mathbb{Z}_{>1}$ divides $\alpha$. Thus, accounting for sign,
\begin{equation}
2c(m) = \#\{\alpha \text{\rm i}n \mathbb{Z}[3 \zeta] \ \text{primitive} : \Nm(\alpha) = m\}.\label{Equation: c(m) as an algebraic number theory thing}
\end{equation}
The order $\mathbb{Z}[3 \zeta]$ is a suborder of the Euclidean domain $\mathbb{Z}[\zeta]$ of conductor 3. It inherits from $\mathbb{Z}[\zeta]$ the following variation on unique factorization: up to sign, every nonzero $\alpha \text{\rm i}n \mathbb{Z}[3\zeta]$ can be written uniquely
as
\[
\alpha = \beta \pi_1^{e_1} \cdots \pi_r^{e_r},
\]
where $\Nm(\beta)$ is a power of $3$, $\pi_1, \dots, \pi_r$ are distinct irreducibles coprime to $3$, and $e_1, \dots, e_r$ are positive integers. Note that $\alpha$ is primitive if and only if $\beta$ is primitive and for $1 \leq i, j \leq r$ (not necessarily distinct) we have $\pi_i \neq \overline{\pi_j}$. Thus if $m$ and $n$ are coprime integers, $\alpha \text{\rm i}n \mathbb{Z}[3 \zeta]$ is primitive, and $\Nm(\alpha) = mn$, then $\alpha$ may be factored uniquely (up to sign) as $\alpha = \alpha_1 \alpha_2$, where $\Nm(\alpha_1) = m$ and $\Nm(\alpha_2) = n$. This proves (a).
We now prove (b). If $p \neq 3$ is inert in $\mathbb{Z}[3\zeta]$ (equivalently, in $\mathbb{Z}[\zeta]$), then no primitive $\alpha$ satisfies $\Nm(\alpha) = p^v$, so $c(p^v) = 0$. If $p \neq 3$ splits in $\mathbb{Z}[3 \zeta]$ (equivalently, in $\mathbb{Z}[\zeta]$), then no primitive $\alpha$ is divisible by more than one of the two primes above $p$, so $c(p^v) = 2$. This proves \eqref{eqn:cp} (compare \mathbb{C}ref{Lemma: bound on T(e)}). Finally, if $p = 3$, we compute $c(3) = 0,$ $c(3^2) = 2,$ and $c(3^3) = 3$. Congruence conditions show $c(3^v) = 0$ for $v \geq 4$.
Part (c) follows immediately from (a) and (b).
\end{proof}
\begin{remark}
We prove \mathbb{C}ref{Lemma: c(m) is multiplicative etc}(a) and \mathbb{C}ref{Lemma: c(m) is multiplicative etc}(b) only as a means to proving \mathbb{C}ref{Lemma: c(m) is multiplicative etc}(c). Although the algebraic structure of the Eisenstein integers $\mathbb{Z}[\zeta]$ may not be available in the study of other families of elliptic curves that exhibit potential additive reduction, we expect analogues of \mathbb{C}ref{Lemma: c(m) is multiplicative etc}(c) to hold in a general context.
\end{remark}
The twist minimality defect measures the discrepancy between $H(A, B)$ and $\twht(A, B)$: this discrepancy cannot be too large compared to $C(a,b)$, as the following theorem shows.
\begin{theorem}\label{Theorem: Controlling size of twist minimality defect}
We have the following.
\begin{enumalph}
\text{\rm i}tem For all $(a, b) \text{\rm i}n \mathbb{R}^2$, we have
\begin{equation}
108 C(a, b)^6 \leq H(A(a, b), B(a, b)) \leq \kappa C(a, b)^6,\label{Equation: upper and lower bounds for H}
\end{equation}
where $\kappa = 311\,406\,871.990\,204\ldots$ is an explicit algebraic number.
\text{\rm i}tem If $C(a, b) = e_0^3 m$, with $m$ cubefree, then $\twistdefect(A(a, b), B(a, b)) = e_0 e^\prime$ for some $e^\prime \mid 3 \cdot 7^3$, and
\[
\frac{2^2}{3^3 \cdot 7^{18}} e_0^{12} m^6 \leq \twht(A(a, b), B(a, b)) \leq \kappa e_0^{12} m^6.
\]
\end{enumalph}
\end{theorem}
\begin{proof}
We wish to find the extrema of $H(A(a, b), B(a, b))/C(a, b)^6$. As this expression is homogeneous of degree 0, and $C(a, b)$ is positive definite, we may assume without loss of generality that $C(a, b) = 1$. Using Lagrange multipliers,
we verify that \eqref{Equation: upper and lower bounds for H} holds: the lower bound is attained at $(1, 0)$, and the upper bound is attained when $a = 0.450\,760\dots$ and $b=-0.371\,118\dots$ are roots of
\begin{equation}\label{Equation: Roots defining upperratio}
\begin{aligned}
1296 a^8 - 2016 a^6 + 2107 a^4 - 1596 a^2 + 252 &=0 \\
1067311728 b^8 - 275298660 b^6 + 43883077 b^4 - 3623648 b^2 + 1849 &= 0, \\
\end{aligned}
\end{equation}
respectively.
Now write $C(a, b) = e_0^3 m$ with $m$ cubefree, and write $\twistdefect(A(a, b), B(a, b)) = e_0 e^\prime$. By \mathbb{C}ref{Lemma: 3 and 7 are the ungroomed primes}, $e^\prime = 3^v 7^w$ for some $v, w \geq 0$; a short computation shows $v \text{\rm i}n \set{0, 1}$, and \eqref{Equation: decomposition of T(7^v)} shows $w \leq \ceil{7/3} = 3$. As
\[
H(A(a, b), B(a, b)) = e_0^6 \parent{e^\prime}^6 \twht(A(a, b), B(a, b)),
\]
we see
\[
\frac{108}{(e^\prime)^6} e_0^{12} m^6 \leq \twht(A(a, b), B(a, b)) < \frac{\kappa}{(e^\prime)^6} e_0^{12} m^6.
\]
Rounding $e^\prime$ up to $3 \cdot 7^3$ on the left and down to $1$ on the right gives the desired result.
\end{proof}
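The extreme values in \eqref{Equation: upper and lower bounds for H} are easy to corroborate numerically: by homogeneity one may sample the ellipse $C(a, b) = 1$, as in the following Python sketch (ours; it only illustrates the constants $108$ and $\kappa$ and does not replace the Lagrange multiplier computation).
\begin{verbatim}
from math import cos, sin, sqrt, pi

def C(a, b):  return a*a + a*b + 7*b*b
def A(a, b):  return C(a, b) * (-3) * (a*a - 231*a*b + 735*b*b)
def B(a, b):  return C(a, b) * 2 * (a**4 + 518*a**3*b - 11025*a**2*b**2
                                    + 6174*a*b**3 - 64827*b**4)
def H(a, b):  return max(abs(4 * A(a, b)**3), abs(27 * B(a, b)**2))

vals = []
for k in range(200000):
    theta = 2 * pi * k / 200000
    r = 1 / sqrt(C(cos(theta), sin(theta)))   # rescale onto C(a, b) = 1
    vals.append(H(r * cos(theta), r * sin(theta)))
print(min(vals), max(vals))   # about 108 and 3.114e8, respectively
\end{verbatim}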
\begin{corollary}\label{Corollary: bound twist defect in terms of twist height}
Let $(a, b)$ be a groomed pair. We have
\[
\twistdefect(A(a, b), B(a, b)) \leq \frac{3^{5/4} \cdot 7^{9/2}}{2^{1/6}} \twht(A(a, b), B(a, b))^{1/12} \]
where $3^{5/4} \cdot 7^{9/2} / 2^{1/6} = 22\,344.5\ldots$
\end{corollary}
\begin{proof}
In the notation of \mathbb{C}ref{Theorem: Controlling size of twist minimality defect}(b),
\[
e_0^{12} m^6 \leq \frac{3^3 \cdot 7^{18}}{2^2} \twht(A(a, b), B(a, b)).
\]
Multiplying through by $(e^\prime)^{12}$, rounding $m$ down to $1$ on the left, rounding $e^\prime$ up to $3 \cdot 7^3$ on the right, and taking $12$th roots of both sides, we obtain the desired result.
\end{proof}
\section{Analytic ingredients}\label{Section: Some analytic trivia}
In this section, we record some results from analytic number theory used later.
\subsection{Lattices and the principle of Lipschitz}\label{Subsection: Lattices and the principle of Lipschitz}
We recall (a special case of) the Principle of Lipschitz, also known as Davenport's Lemma.
\begin{theorem}[Principle of Lipschitz]\label{Theorem: Principle of Lipschitz}
Let $\mathcal{R} \subseteq \mathbb{R}^2$ be a closed and bounded region, with rectifiable boundary $\partial \mathcal{R}$. We have
\[
\#(\mathcal{R} \cap \mathbb{Z}^2) = \Area(\mathcal{R}) + O(\len(\partial \mathcal{R})),
\]
where the implicit constant depends on the similarity class of $\mathcal{R}$, but not on its size, orientation, or position in the plane $\mathbb{R}^2$.
\end{theorem}
\begin{proof}
See Davenport \cite{Davenport}.
\end{proof}
Specializing to the case of interest, for $X > 0$ let
\begin{equation} \label{eqn: R(X)}
\mathcal{R}(X) \colonequals \set{(a, b) \text{\rm i}n \mathbb{R}^2 :H(A(a, b), B(a, b)) \leq X, \ b \geq 0},
\end{equation}
and let $R \colonequals \Area(\mathcal{R}(1))$. The region $\mathcal{R}(1)$ is the common region in Figure \ref{Figure: calR(1)}.
\jvfigure{Figure: calR(1)}{\text{\rm i}ncludegraphics[scale=0.3]{desmos-graph7.png}}{The region $\mathcal{R}(1)$}
\begin{lemma}\label{Lemma: formula for R(X)}
For $X > 0$, we have $\Area (\mathcal{R}(X)) = R X^{1/6}$.
\end{lemma}
\begin{proof}
Since $f(t)=A(t,1)$ and $g(t)=B(t,1)$ have no common real root, the region $\mathcal{R}(X)$ is compact \cite[Proof of Theorem 3.3.1, Step 2]{Cullinan-Kenney-Voight}. The homogeneity
\[
H(A(u a, ub), B(ua, ub)) = u^{12} H(A(a, b), B(a, b))
\]
implies
\[
\Area(\mathcal{R}(X)) = \Area(\{(X^{1/12} a, X^{1/12} b) :(a, b) \text{\rm i}n \mathcal{R}(1)\}) = X^{1/6} \Area(\mathcal{R}(1)) = R X^{1/6}
\]
as desired.
\end{proof}
The following corollaries are immediate.
\begin{corollary}\label{Corollary: Estimates for L(X)}
For $a_0, b_0, d \text{\rm i}n \mathbb{Z}$ with $d \geq 1$, we have
\[
\#\{(a, b) \text{\rm i}n \mathcal{R}(X) \cap \mathbb{Z}^2 : (a, b) \equiv (a_0, b_0) \psmod d\} = \frac{R X^{1/6}}{d^2} + O\parent{\frac{X^{1/{12}}}{d}}.
\]
The implied constants are independent of $X$, $d$, $a_0,$ and $b_0$. In particular,
\begin{equation}
\#(\mathcal{R}(X) \cap \mathbb{Z}^2) = R X^{1/6} + O(X^{1/{12}}).
\end{equation}
\end{corollary}
\begin{proof}
Combine \mathbb{C}ref{Lemma: formula for R(X)} and \mathbb{C}ref{Theorem: Principle of Lipschitz}.
\end{proof}
\begin{corollary}\label{Corollary: Bound on partial sums of c(m)}
Let $\parent{c(m)}_{m \geq 1}$ be as in \eqref{Equation: c(m)}. We have
\[
\sum_{m \leq X} c(m) = O(X).
\]
\end{corollary}
\begin{proof}
Immediate from \mathbb{C}ref{Corollary: Estimates for L(X)}.
\end{proof}
\subsection{Dirichlet series}
The following theorem is attributed to Stieltjes.
\begin{theorem}\label{Theorem: product of Dirichlet series converges}
Let $\alpha, \beta : \mathbb{Z}_{> 0} \to \mathbb{R}$ be arithmetic functions. If $L_\alpha(s) \colonequals \sum_{n \geq 1} \alpha(n) n^{-s}$ and $L_\beta(s) \colonequals \sum_{n \geq 1} \beta(n) n^{-s}$ both converge when $\repart(s) > \sigma$, and one of these two series converges absolutely, then
\[
L_{\alpha \ast \beta}(s) \colonequals \sum_{n \geq 1} \parent{\sum_{d \mid n} \alpha(d) \beta\parent{\frac nd}} n^{-s}
\]
converges for $s$ with $\repart(s) > \sigma$. If $L_\alpha(s)$ and $L_\beta(s)$ both converge absolutely when $\repart(s) > \sigma$, then so does $L_{\alpha \ast \beta}(s)$.
\end{theorem}
\begin{proof}
Widder \cite[Theorems 11.5 and 11.6b]{Widder} proves a more general result, or see Tenenbaum \cite[proof of Theorem II.1.2, Notes on p.~204]{Tenenbaum}.
\end{proof}
Let $\gamma \colonequals \lim_{y \to \text{\rm i}nfty} \bigl(\sum_{n \leq y} 1/n\bigr) - \log y$ be the Euler--Mascheroni constant.
\begin{theorem}\label{Theorem: Laurent series expansion for zeta(s)}
The difference
\[
\zeta(s) - \parent{\frac{1}{s - 1} + \gamma}
\]
is entire on $\mathbb{C}$ and vanishes at $s = 1$.
\end{theorem}
\begin{proof}
Ivi\'c \cite[page 4]{Ivic} proves a more general result.
\end{proof}
\subsection{Regularly varying functions}
We require a fragment of Karamata's integral theorem for regularly varying functions.
\begin{definition}
Let $F \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ be measurable and eventually positive. We say that $F$ is \defi{regularly varying of index $\rho \text{\rm i}n \mathbb{R}$} if for each $\lambda > 0$ we have
\[
\lim_{y \to \text{\rm i}nfty} \frac{F(\lambda y)}{F(y)} = \lambda^\rho.
\]
\end{definition}
\begin{theorem}[Karamata's integral theorem]\label{Theorem: Karamata's integral theorem}
Let $F \colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ be locally bounded and regularly varying of index $\rho \text{\rm i}n \mathbb{R}$, and let $\sigma \text{\rm i}n \mathbb{R}$. Then the following statements hold.
\begin{enumalph}
\text{\rm i}tem For any $\sigma > \rho + 1$ (and for $\sigma = \rho + 1$ if $\text{\rm i}nt_0^\text{\rm i}nfty u^{- \rho - 1} F(u) \,\mathrm{d}u < \text{\rm i}nfty$), we have
\[
\text{\rm i}nt_y^\text{\rm i}nfty u^{-\sigma} F(u) \,\mathrm{d}u \sim \frac{y^{1 - \sigma} F(y)}{\abs{\sigma - \rho - 1}}
\]
as $y \to \text{\rm i}nfty$.
\text{\rm i}tem For any $\sigma < \rho + 1$, we have
\[
\text{\rm i}nt_0^y u^{-\sigma} F(u) \,\mathrm{d}u \sim \frac{y^{1 - \sigma} F(y)}{\abs{\sigma - \rho - 1}}
\]
as $y \to \text{\rm i}nfty$.
\end{enumalph}
\end{theorem}
\begin{proof}
See Bingham--Goldie--Teugels \cite[Theorem 1.5.11]{Bingham-Goldie-Teugels}. (Karamata's integral theorem also includes a converse.)
\end{proof}
\begin{corollary}\label{Corollary: tail of sum of f(n)/n^sigma}
Let $\alpha \colon \mathbb{Z}_{> 0} \to \mathbb{R}$ be an arithmetic function, and suppose that for some $\kappa, \rho, \tau \text{\rm i}n \mathbb{R}$ with $\kappa \neq 0$, we have
\begin{equation}
F(y) \colonequals \sum_{n \leq y} \alpha(n) \sim \kappa y^\rho \log^\tau y\label{Equation: asymptotic for F in corollary of Karamata's Theorem}
\end{equation}
as $y \to \text{\rm i}nfty$. Let $\sigma, \rho > 0$.
Then the following statements hold, as $y \to \text{\rm i}nfty$.
\begin{enumalph}
\text{\rm i}tem If $\sigma > \rho > 0$, then
\[
\sum_{n > y} n^{-\sigma} \alpha(n) \sim \frac{\rho y^{-\sigma} F(y)}{\abs{\sigma - \rho}} \sim \frac{\kappa \rho y^{\rho - \sigma} \log^\tau y}{\abs{\sigma - \rho}}.
\]
\text{\rm i}tem If $\rho > \sigma > 0$, then \[
\sum_{n \leq y} n^{-\sigma} \alpha(n) \sim \frac{\rho y^{-\sigma} F(y)}{\abs{\sigma - \rho}} \sim \frac{\kappa \rho y^{\rho - \sigma} \log^\tau y}{\abs{\sigma - \rho}}.
\]
\end{enumalph}
\end{corollary}
\begin{proof}
Replacing $\alpha$ and $F$ with $-\alpha$ and $-F$ if necessary, we may assume $\kappa > 0$. As a partial sum of an arithmetic function, $F(y)$ is measurable and locally bounded; by \eqref{Equation: asymptotic for F in corollary of Karamata's Theorem}, $F(y)$ is eventually positive. Now for any $\lambda > 0$, we compute
\[
\lim_{y \to \text{\rm i}nfty} \frac{F(\lambda y)}{F(y)} = \lim_{y \to \text{\rm i}nfty} \frac{\kappa (\lambda y)^\rho \log^\tau (\lambda y)}{\kappa y^\rho \log^\tau y} = \lambda^\rho,
\]
so $F$ is regularly varying of index $\rho$.
Suppose first $\sigma > \rho > 0$. Since
\[
y^{-\sigma} F(y) \sim \kappa y^{\rho - \sigma} \log^\tau y \to 0
\]
as $y \to \text{\rm i}nfty$, Abel summation yields
\[
\sum_{n > y} n^{-\sigma} \alpha(n) = - y^{-\sigma} F(y) + \sigma \text{\rm i}nt_y^\text{\rm i}nfty u^{-\sigma - 1} F(u) \,\mathrm{d}u.
\]
Clearly $\sigma + 1 > \rho + 1$, so \mathbb{C}ref{Theorem: Karamata's integral theorem}(a) tells us
\[
\text{\rm i}nt_y^\text{\rm i}nfty u^{-\sigma - 1} F(u) \,\mathrm{d}u \sim \frac{y^{-\sigma} F(y)}{\abs{\sigma - \rho}} \sim \frac{\kappa y^{\rho - \sigma} \log^\tau y}{\abs{\sigma - \rho}}
\]
and thus
\[
\sum_{n > y} n^{-\sigma} \alpha(n) \sim \frac{\rho y^{-\sigma} F(y)}{\abs{\sigma - \rho}}
\]
as $y \to \text{\rm i}nfty$.
The case $\rho > \sigma > 0$ is similar.
\end{proof}
\subsection{Bounding Dirichlet series on vertical lines}
Recall that a complex function $F(s)$ has \defi{finite order} on a domain $D$ if there exists $\xi \text{\rm i}n \mathbb{R}_{>0}$ such that
\[
F(s) = O(1 + \abs{t}^\xi)
\]
whenever $s = \sigma + i t \text{\rm i}n D$. If $F$ is of finite order on a right half-plane, we define
\[
\mu_F(\sigma) \colonequals \text{\rm i}nf\{\xi \text{\rm i}n \mathbb{R}_{\geq 0} :F(\sigma + i t) = O(1 + \abs{t}^\xi)\}
\]
where the implicit constant depends on $\sigma$ and $\xi$.
Let $L(s)$ be a Dirichlet series with abscissa of absolute convergence $\sigma_a$ and abscissa of convergence $\sigma_c$.
\begin{theorem}\label{Theorem: absolutely convergent Dirichlet series have mu = 0}
We have $\mu_L(\sigma)=0$ for all $\sigma > \sigma_a$, and $\mu_L(\sigma)$ is nonincreasing (as a function of $\sigma$) on any region where $L$ has finite order.
\end{theorem}
\begin{proof}
Tenenbaum \cite[Theorem II.1.21]{Tenenbaum}.
\end{proof}
\begin{theorem}\label{Theorem: Dirichlet series grow slowly on vertical lines}
Let $\sigma_c < \sigma_0 \leq \sigma_c+1$ and let $\epsilon > 0$. Then uniformly on
\[
\set{s = \sigma + i t \text{\rm i}n \mathbb{C} : \sigma_0 \leq \sigma \leq \sigma_c + 1, \ \abs{t} \geq 1},
\]
we have
\[
L(\sigma + i t) = O(t^{1 + \sigma_c - \sigma + \epsilon}).
\]
\end{theorem}
\begin{proof}
Tenenbaum \cite[Theorem II.1.19]{Tenenbaum}.
\end{proof}
\begin{corollary}\label{Corollary: bound on mu past abscissa of convergence}
For all $\sigma > \sigma_c$, we have
\[
\mu_{L}(\sigma) \leq \max(0,1 + \sigma_c - \sigma).
\]
\end{corollary}
\begin{proof}
It is well-known that $\sigma_a \leq \sigma_c + 1$, so the claim holds for $\sigma > \sigma_c + 1$ by \mathbb{C}ref{Theorem: absolutely convergent Dirichlet series have mu = 0}. Now for $\sigma_c < \sigma < \sigma_c + 1$, our claim follows by letting $\epsilon \to 0$ in \mathbb{C}ref{Theorem: Dirichlet series grow slowly on vertical lines}.
\end{proof}
\begin{theorem}\label{Theorem: muzeta(sigma)}
Let $\zeta(s)$ be the Riemann zeta function, and let $\sigma \text{\rm i}n \mathbb{R}$. We have
\[
\mu_\zeta(\sigma) \leq \begin{cases}
\frac 12 - \sigma, & \text{if} \ \sigma \leq 0; \\
\frac 12 - \frac{141}{205} \sigma, & \text{if} \ 0 \leq \sigma \leq \frac 12; \\
\frac{64}{205}(1 - \sigma), & \text{if} \ \frac 12 \leq \sigma \leq 1; \\
0 & \text{if} \ \sigma \geq 1.
\end{cases}
\]
Moreover, equality holds if $\sigma<0$ or $\sigma>1$.
\end{theorem}
\begin{proof}
Tenenbaum \cite[page 235]{Tenenbaum} proves the claim when $\sigma<0$ or $\sigma>1$. Now $\mu_\zeta(1/2) \leq 32/205$ by Huxley \cite[Theorem 1]{Huxley}, and our result follows from the convexity of $\mu_\zeta$ \cite[Theorem II.1.20]{Tenenbaum}.
\end{proof}
\subsection{A Tauberian theorem}\label{Subsection: Perron's formula}
We now present a Tauberian theorem, due in essence to Landau \cite{Landau1915}.
\begin{definition}\label{Definition: admissible sequences}
Let $\parent{\alpha(n)}_{n \geq 1}$ be a sequence with $\alpha(n) \text{\rm i}n \mathbb{R}_{\geq 0}$ for all $n$, and let $L_\alpha(s) \colonequals \sum_{n \geq 1} \alpha(n) n^{-s}$.
We say the sequence $\parent{\alpha(n)}_{n \geq 1}$ is \defi{admissible} with (real) parameters $\parent{\sigma_a, \delta, \xi}$ if the following hypotheses hold:
\begin{enumroman}
\text{\rm i}tem $L_\alpha(s)$ has abscissa of absolute convergence $\sigma_a$.\label{Condition: abscissa of absolute convergence sigma}
\text{\rm i}tem The function $L_\alpha(s)/s$ has meromorphic continuation to $\set{s = \sigma + i t \text{\rm i}n \mathbb{C} : \sigma > \sigma_a - \delta}$ and only finitely many poles in this region. \label{Condition: meromorphic extension}
\text{\rm i}tem For $\sigma > \sigma_a - \delta$, we have $\mu_{L_\alpha}(\sigma) \leq \xi$.
\label{Condition: mu is bounded}
\end{enumroman}
\end{definition}
If $\parent{\alpha(n)}_n$ is admissible, let $s_1, \dots, s_r$ denote the poles of $L_\alpha(s)/s$ with real part greater than $\sigma_a - \delta/(\xi + 2)$.
The following theorem is essentially an application of Perron's formula, which is itself an inverse Mellin transform.
\begin{theorem}[Landau's Tauberian Theorem]\label{Theorem: Landau's Tauberian theorem}
Let $\parent{\alpha(n)}_{n \geq 1}$ be an admissible sequence (\textup{\mathbb{C}ref{Definition: admissible sequences}}), and write $N_\alpha(X) \colonequals \sum_{n \leq X} \alpha(n)$. Then for all $\epsilon>0$,
\[
N_\alpha(X) = \sum_{j = 1}^r \res_{s=s_j}\parent{\frac{L_\alpha(s) X^{s}}{s}} + O\!\parent{X^{\sigma_a - \frac{\delta}{\floor{\xi} + 2} + \epsilon}},
\]
where the main term is a sum of residues and the implicit constant depends on $\epsilon$.
\end{theorem}
\begin{proof}
See Roux \cite[Theorem 13.3, Remark 13.4]{Roux}.
\end{proof}
\begin{remark}
Landau's original theorem \cite{Landau1915} was fitted to a more general context, and allowed sums of the form
\[
\sum_{n \geq 1} \alpha(n) \ell(n)^{-s}
\]
as long as $\parent{\ell(n)}_{n \geq 1}$ was increasing and tended to $\text{\rm i}nfty$. Landau also gave an explicit expansion of
\[
\res_{s=s_j}\parent{\frac{L_\alpha(s) X^{s}}{s}}
\]
in terms of the Laurent series expansion for $L_\alpha(s)$ around $s = s_j$. However, Landau also required that $L_\alpha(s)$ has a meromorphic continuation to all of $\mathbb{C}$, and Roux \cite[Theorem 13.3, Remark 13.4]{Roux} relaxes this assumption.
\end{remark}
Let $d(n)$ denote the number of divisors of $n$, and let $\omega(n)$ denote the number of distinct prime divisors of $n$.
Theorem \ref{Theorem: Landau's Tauberian theorem} has the following easy corollary.
\begin{corollary}\label{Corollary: sum of 2^omega(n) and d(n)^2}
We have
\[
\sum_{n \leq y} 2^{\omega(n)} = \frac{y \log y}{\zeta(2)} + O(y) \quad \text{and} \quad \sum_{n \leq y} d(n)^2 = \frac{y \log^3 y}{6 \zeta(2)} + O(y \log^2 y).
\]
as $y \to \text{\rm i}nfty$.
\end{corollary}
\begin{proof}
Recall that
\[
\frac{\zeta(s)^2}{\zeta(2s)} = \sum_{n \geq 1} \frac{2^{\omega(n)}}{n^s} \ \text{and} \ \frac{\zeta(s)^4}{\zeta(2s)} = \sum_{n \geq 1} \frac{d(n)^2}{n^s}.
\]
It is straightforward to verify that $\parent{2^{\omega(n)}}_{n \geq 1}$ and $\parent{d(n)^2}_{n \geq 1}$ are both admissible with parameters $(1, 1/2, 1/3)$. We apply Theorem \ref{Theorem: Landau's Tauberian theorem} and discard lower-order terms to obtain the result.
\end{proof}
\begin{remark}
\mathbb{C}ref{Theorem: Landau's Tauberian theorem} furnishes lower order terms for the sums $\sum_{n \leq y} 2^{\omega(n)}$ and $\sum_{n \leq y} d(n)^2$, and even better estimates are known (e.g.\ Tenenbaum \cite[Exercise I.3.54]{Tenenbaum} and Zhai \cite[Corollary 4]{Zhai}), but \mathbb{C}ref{Corollary: sum of 2^omega(n) and d(n)^2} suffices for our purposes and illustrates the use of \mathbb{C}ref{Theorem: Landau's Tauberian theorem}.
\end{remark}
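For a small numerical illustration of \mathbb{C}ref{Corollary: sum of 2^omega(n) and d(n)^2}, the following Python sketch (ours, assuming \textsf{sympy}) compares the partial sums against their leading terms at a modest cutoff; the second ratio converges only slowly because of the lower-order terms.
\begin{verbatim}
from math import pi, log
from sympy import primefactors, divisor_count

y = 10**5
zeta2 = pi**2 / 6
s1 = sum(2**len(primefactors(n)) for n in range(1, y + 1))
s2 = sum(divisor_count(n)**2 for n in range(1, y + 1))
print(s1 / (y * log(y) / zeta2))            # ratio tends to 1 as y grows
print(s2 / (y * log(y)**3 / (6 * zeta2)))   # ratio tends to 1, but slowly
\end{verbatim}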
\section{Estimates for twist classes} \label{Section: Estimating twN(X)}
In this section, we decompose $N_{7}^{\textup{tw}}(X)$, counting the number of twist minimal elliptic curves over $\mathbb{Q}$ admitting a $7$-isogeny \eqref{eqn:NQxwist} in terms of progressively simpler functions. We then estimate those simple functions, and piece these estimates together until we arrive at an estimate for $N_{7}^{\textup{tw}}(X)$; the main result is \mathbb{C}ref{Theorem: asymptotic for twN(X)}, which proves \mathbb{C}ref{Intro Theorem: asymptotic for twN(X)}.
\subsection{Decomposition and outline} \label{sec:decomp}
We establish some notation for brevity and ease of exposition. Suppose $\parent{\alpha(X; n)}_{n \geq 1}$ is a sequence of real-valued functions, and $\phi : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$. We write
\[
\sum_{n \geq 1} \alpha(X; n) = \sum_{n \ll \phi(X)} \alpha(X; n)
\]
if there is a positive constant $\kappa$ such that for all $X \text{\rm i}n \mathbb{R}_{> 0}$ and all $n > \kappa \phi(X)$, we have $\alpha(X; n) = 0$.
The function $N_{7}^{\textup{tw}}(X)$ is difficult to understand chiefly because of the twist minimality defect. Fortunately, the twist minimality defect cannot get too large relative to $X$ (see \mathbb{C}ref{Corollary: bound twist defect in terms of twist height}). So we partition our sum based on the value of $\twistdefect(A(a,b), B(a,b))$ in terms of the parametrization provided in \cref{sec:7isog}.
For $e \geq 1$, let $N_{7}^{\textup{tw}}(X; e)$ denote the number of pairs $(a, b) \text{\rm i}n \mathbb{Z}^2$ with
\begin{itemize}
\text{\rm i}tem $(a, b)$ groomed,
\text{\rm i}tem $\twht(A(a, b), B(a, b)) \leq X$, and
\text{\rm i}tem $\twistdefect(A(a, b), B(a, b)) = e$.
\end{itemize}
By \eqref{eqn:twistheightcalc} and \mathbb{C}ref{Corollary: bound twist defect in terms of twist height}, we have
\begin{equation}
N_{7}^{\textup{tw}}(X) = \sum_{e \ll X^{1/12}} N_{7}^{\textup{tw}}(X; e);\label{Equation: twN(X) in terms of twN(X; e)}
\end{equation}
more precisely, we can restrict our sum to
\[
e \leq \frac{3^{5/4} \cdot 7^{9/2}}{2^{1/6}} \cdot X^{1/12}.
\]
Determining when an integer $e$ \emph{divides} $\twistdefect(A, B)$ is easier than determining when $e$ \emph{equals} $\twistdefect(A, B)$, so we also let $M(X; e)$ denote the number of pairs $(a, b) \text{\rm i}n \mathbb{Z}^2$ with
\begin{itemize}
\text{\rm i}tem $\gcd(a, b) = 1$ and $b > 0$;
\text{\rm i}tem $H(A(a,b),B(a,b)) \leq X$;
\text{\rm i}tem $e \mid \twistdefect(A(a, b), B(a, b))$;
\text{\rm i}tem $(a, b) \neq (-7, 1)$.
\end{itemize}
Note that the points counted by $N_{7}^{\textup{tw}}(X; e)$ have \emph{twist} height bounded by $X$, but the points counted by $M(X; e)$ have only the function $H$ bounded by $X$.
\mathbb{C}ref{Theorem: Controlling size of twist minimality defect} and the M\"{o}bius sieve yield
\begin{equation}
N_{7}^{\textup{tw}}(X; e) = \sum_{f \ll \frac{X^{1/18}}{e^{2/3}}} \mu(f) M(e^6 X; ef);\label{Equation: twistN(X; e) in terms of M(X; e)}
\end{equation}
more precisely, we can restrict our sum to
\[
f \leq \frac{3^{1/2} 7^2}{2^{1/9}} \cdot \frac{X^{1/18}}{e^{2/3}}.
\]
In order to estimate $M(X; e)$, we further unpack the groomed condition on pairs $(a, b)$. We therefore let $M(X; d, e)$ denote the number of pairs $(a, b) \text{\rm i}n \mathbb{Z}^2$ with
\begin{itemize}
\text{\rm i}tem $\gcd(da, db, e) = 1$ and $b > 0$;
\text{\rm i}tem $H(A(d a, d b), B(d a, db)) \leq X$;
\text{\rm i}tem $e \mid \twistdefect(A(d a, d b), B(da, db))$;
\text{\rm i}tem $(a, b) \neq (-7, 1)$.
\end{itemize}
By \mathbb{C}ref{Theorem: Controlling size of twist minimality defect}, and because $H(A(a, b), B(a, b))$ is homogeneous of degree 12, another M\"{o}bius sieve yields
\begin{equation}
M(X; e) = \sum_{\substack{d \ll X^{1/12} \\ \gcd(d, e) = 1}} \mu(d) M(X; d, e);\label{Equation: cM(X;e) in terms of cM(X; d, e)}
\end{equation}
more precisely, we can restrict our sum to
\[
d \leq \frac{1}{2^{1/6} \cdot 3^{1/4}} \cdot X^{1/12}.
\]
Before proceeding, we now give an outline of the argument used in this section. In \mathbb{C}ref{Lemma: asymptotic for M(X; e)}, we use the Principle of Lipschitz to estimate $M(X; d, e)$, then piece these estimates together using \eqref{Equation: cM(X;e) in terms of cM(X; d, e)} to estimate $M(X; e)$. Heuristically,
\begin{equation}
M(X; d, e) \sim \frac{R T(e) X^{1/6}}{d^2 e^3} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell}}
\end{equation}
(where $R$ is the area of \eqref{eqn: R(X)} and $T$ is the arithmetic function investigated in \mathbb{C}ref{Lemma: bound on T(e)})
by summing over the congruence classes modulo $e^3$ that satisfy $e \mid \twistdefect(A(d a, d b), B(da, db))$. Then \eqref{Equation: cM(X;e) in terms of cM(X; d, e)} suggests
\begin{equation}
M(X; e) \sim \frac{R T(e) X^{1/6}}{\zeta(2) e^3 \prod_{\ell \mid e} \parent{1 + \frac{1}{\ell}}}.\label{Equation: Main term for cM(X; e)}
\end{equation}
To go further, we substitute \eqref{Equation: twistN(X; e) in terms of M(X; e)} into \eqref{Equation: twN(X) in terms of twN(X; e)}, and let $n = e f$ to obtain
\begin{equation}
N_{7}^{\textup{tw}}(X) = \sum_{n \ll X^{1/12}} \sum_{e \mid n} \mu\parent{n/e} M(e^6 X; n).\label{Equation: twN(X) in terms of M(X; e), reorganized}
\end{equation}
This is the core identity that, in concert with the Principle of Lipschitz, enables us to estimate $N_{7}^{\textup{tw}}(X)$.
Substituting \eqref{Equation: Main term for cM(X; e)} into \eqref{Equation: twN(X) in terms of M(X; e), reorganized}, and recalling $\varphi(n) = \sum_{e \mid n} \mu(n/e) e$, we obtain the heuristic estimate
\begin{equation}
N_{7}^{\textup{tw}}(X) \sim \frac{Q R X^{1/6}}{\zeta(2)},
\end{equation}
where
\begin{equation}
Q \colonequals \sum_{n \geq 1} \frac{T(n) \varphi(n) }{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}}.
\end{equation}
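The constant $Q$ can be approximated numerically from the Euler product established in the proof of \mathbb{C}ref{Lemma: asymptotic for twN<=y(X)} below; the following short Python sketch (ours, assuming \textsf{sympy} only for the prime iterator) truncates that product.
\begin{verbatim}
from sympy import primerange

Q = (13 / 6) * (63 / 8)            # the factors Q_3 and Q_7
for p in primerange(11, 10**6):    # primes p = 1 mod 3 with p != 7
    if p % 3 == 1:
        Q *= 1 + 2 / (p + 1)**2
print(Q)   # truncating at 10^6 loses only O(10^-7)
\end{verbatim}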
To make this estimate for $N_{7}^{\textup{tw}}(X)$ rigorous, and to get a better handle on the size of order of growth for its error term, we now decompose \eqref{Equation: twN(X) in terms of M(X; e), reorganized} based on the size of $n$ into two pieces:
\begin{equation}
\begin{aligned}
N_{7, \leq y}^{\textup{tw}}(X) &\colonequals \sum_{n \leq y} \sum_{e \mid n} \mu\parent{n/e} M(e^6 X; n), \\
N_{7, > y}^{\textup{tw}}(X) &\colonequals \sum_{n > y} \sum_{e \mid n} \mu\parent{n/e} M(e^6 X; n).
\end{aligned}
\end{equation}
By definition, we have
\[ N_{7}^{\textup{tw}}(X) = N_{7, \leq y}^{\textup{tw}}(X) + N_{7, > y}^{\textup{tw}}(X). \]
We then estimate $N_{7, \leq y}^{\textup{tw}}(X)$ in \mathbb{C}ref{Lemma: asymptotic for twN<=y(X)}, and treat $N_{7, > y}^{\textup{tw}}(X)$ as an error term which we bound in \mathbb{C}ref{Lemma: bound on twN>y(X)}. Setting the error from our estimate equal to the error arising from $N_{7, > y}^{\textup{tw}}(X)$, we obtain Theorem \ref{Theorem: asymptotic for twN(X)}.
In the remainder of this section, we follow the outline suggested here by successively estimating $M(X; d, e)$, $M(X; e)$, $N_{7, \leq y}^{\textup{tw}}(X)$, $N_{7, > y}^{\textup{tw}}(X)$, and finally $N_{7}^{\textup{tw}}(X)$.
\subsection{Asymptotic estimates}
We first estimate $M(X; d, e)$ and $M(X; e)$.
\begin{lemma}\label{Lemma: asymptotic for M(X; e)}
The following statements hold.
\begin{enumalph}
\text{\rm i}tem If $\gcd(d, e) > 1$, then $M(X; d, e) = 0$. Otherwise, we have
\[
M(X; d, e) = \frac{R T(e) X^{1/6}}{d^2 e^3} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell}} + O\parent{\frac{T(e) X^{1/12}}{d}}.
\]
where $R$ is the area of \eqref{eqn: R(X)}.
\text{\rm i}tem We have
\[
M(X; e) = \frac{R T(e) X^{1/6}}{\zeta(2) e^3 \prod_{\ell \mid e} \parent{1 + \frac{1}{\ell}}} + O(T(e) X^{1/12} \log X).
\]
\end{enumalph}
In both cases, the implied constants are independent of $d$, $e$, and $X$.
\end{lemma}
\begin{proof}
We begin with (a) and examine the summands $M(X; d, e)$. If $d$ and $e$ are not coprime, then $M(X; d, e) = 0$ because $\gcd(da, db, e) \geq \gcd(d, e) > 1$. On the other hand, if $\gcd(d, e) = 1$, we have a bijection from the pairs counted by $M(X; 1, e)$ to the pairs counted by $M(d^{12} X; d, e)$ given by $(a, b) \mapsto (d a, d b)$.
Combining \mathbb{C}ref{Lemma: bound on T(e)}(a) and \mathbb{C}ref{Corollary: Estimates for L(X)}, we have
\begin{equation}
\begin{aligned}
M(X; 1, e) &= \sum_{(a_0, b_0) \text{\rm i}n \widetilde{\mathcal{T}}(e)} \#\{(a, b) \text{\rm i}n \mathcal{R}(X) \cap \mathbb{Z}^2 : (a, b) \equiv (a_0, b_0) \psmod {e^3}, (a, b) \neq (-7, 1) \}\\
&= \varphi(e^3) T(e) \parent{\frac{R X^{1/6}}{e^6} + O\parent{\frac{X^{1/12}}{e^3}}} \\
&= \frac{R T(e) X^{1/6}}{e^3} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell}} + O(T(e) X^{1/12}),
\end{aligned}
\end{equation}
and thus
\[
M(X; d, e) = \frac{R T(e) X^{1/6}}{d^2 e^3} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell}} + O\parent{\frac{T(e) X^{1/12}}{d}}.
\]
For part (b), we compute
\begin{equation} \label{eqn:cmXe}
\begin{aligned}
M(X; e) &= \sum_{\substack{d \ll X^{1/12} \\ \gcd(d, e) = 1}} \mu(d) M(X; d, e) \\
&= \sum_{\substack{d \ll X^{1/12} \\ \gcd(d, e) = 1}} \mu(d) \parent{\frac{T(e) R X^{1/6}}{d^2 e^3} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell}} + O\parent{T(e) \frac{X^{1/12}}{d}}} \\
&= \frac{R T(e) X^{1/6}}{e^3} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell}} \sum_{\substack{d \ll X^{1/12} \\ \gcd(d, e) = 1}} \frac{ \mu(d)}{d^2} + O\parent{T(e) X^{1/12} \sum_{\substack{d \ll X^{1/12} \\ \gcd(d, e) = 1}} \frac 1d}.
\end{aligned}
\end{equation}
Plugging the straightforward estimates
\begin{equation}
\sum_{\substack{d \ll X^{1/12} \\ \gcd(d, e) = 1}} \frac{ \mu(d)}{d^2} = \frac{1}{\zeta(2)} \prod_{\ell \mid e} \parent{1 - \frac{1}{\ell^2}}^{-1} + O(X^{-1/12})
\end{equation}
and
\[
\sum_{\substack{d \leq X^{1/12}}} \frac 1d = \frac{1}{12}\log X+ O(1) \]
into \eqref{eqn:cmXe} then simplifies to give
\begin{equation}
\begin{aligned}
M(X; e)
&= \frac{R T(e) X^{1/6}}{\zeta(2) e^3 \prod_{\ell \mid e} \parent{1 + \frac{1}{\ell}}} + O(T(e) X^{1/12} \log X)
\end{aligned}
\end{equation}
proving (b).
\end{proof}
We are now in a position to estimate $N_{7, \leq y}^{\textup{tw}}(X)$.
\begin{prop}\label{Lemma: asymptotic for twN<=y(X)}
Suppose $y \ll X^{\frac{1}{12}}$. Then
\[
N_{7, \leq y}^{\textup{tw}}(X) = \frac{Q R X^{1/6}}{\zeta(2)} + O\parent{\max\parent{\frac{X^{1/6} \log y}{y}, X^{1/12} y^{3/2} \log X \log^3 y }}
\]
where
\[
Q \colonequals \sum_{n \geq 1} \frac{\varphi(n) T(n)}{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}} = Q_3 Q_7 \prod_{\substack{p \neq 7 \ \textup{prime} \\ p \equiv 1 \psmod {3}}} \parent{1 + \frac{2}{(p+1)^2}},
\]
and $Q_3 = 13/6$, $Q_7=63/8$.
\end{prop}
\begin{proof}
Substituting the asymptotic for $M(X; e)$ from \mathbb{C}ref{Lemma: asymptotic for M(X; e)} into the defining series for $N_{7, \leq y}^{\textup{tw}}(X)$, we have
\[
N_{7, \leq y}^{\textup{tw}}(X) = \sum_{n \leq y} \sum_{e \mid n} \mu\parent{n/e} \parent{\frac{R T(n) e X^{1/6}}{\zeta(2) n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}} + O\parent{T(n) e^{1/2} X^{1/12} \log (e^6 X)}}.
\]
We handle the main term and the error of this expression separately. For the main term, we have
\begin{equation}
\begin{aligned}
\sum_{n \leq y} \sum_{e \mid n} \mu\parent{n/e} \parent{\frac{R T(n) e X^{1/6}}{\zeta(2) n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}}} &= \frac{R X^{1/6}}{\zeta(2)} \sum_{n \leq y} \frac{T(n)}{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}} \sum_{e \mid n} \mu\parent{n/e} e \\
&= \frac{R X^{1/6}}{\zeta(2)} \sum_{n \leq y} \frac{\varphi(n) T(n)}{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}}.
\end{aligned}
\end{equation}
By \mathbb{C}ref{Lemma: bound on T(e)}(d), we see
\[
\frac{\varphi(n) T(n)}{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}} = O\parent{\frac{2^{\omega(n)}}{n^2}}.
\]
By \mathbb{C}ref{Corollary: tail of sum of f(n)/n^sigma} and \mathbb{C}ref{Corollary: sum of 2^omega(n) and d(n)^2}, we have
\[
\sum_{n > y} \frac{2^{\omega(n)}}{n^2} \sim \frac{\log y}{\zeta(2) y}
\]
as $y \to \text{\rm i}nfty$. \textit{A fortiori,}
\[
\sum_{n > y} \frac{\varphi(n) T(n)}{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}} = O\parent{\sum_{n > y} \frac{2^{\omega(n)}}{n^2}} = O\parent{\frac{\log y}{y}},
\]
so the series
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n) T(n)}{n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}} = Q \label{Equation: sum for Q}
\end{equation}
is absolutely convergent, and
\begin{equation}
\begin{aligned}
\sum_{n \leq y} \sum_{e \mid n} \mu\parent{n/e} \parent{\frac{R T(n) e X^{1/6}}{\zeta(2) n^3 \prod_{\ell \mid n} \parent{1 + \frac{1}{\ell}}}} &= \frac{R X^{1/6}}{\zeta(2)} \parent{Q - O\parent{\frac{\log y}{y}}} \\
&= \frac{Q R X^{1/6}}{\zeta(2)} + O\parent{\frac{X^{1/6} \log y}{y}}.
\end{aligned}
\end{equation}
As the summands of \eqref{Equation: sum for Q} constitute a nonnegative multiplicative arithmetic function, we can factor $Q$ as an Euler product. For $p$ prime, \mathbb{C}ref{Lemma: bound on T(e)} yields
\begin{equation}
Q_p \colonequals \sum_{a \geq 0} \frac{\varphi(p^a) T(p^a)}{p^{3a} \prod_{\ell \mid p^a} \parent{1 + \frac{1}{\ell}}} = \begin{cases}
1 + \displaystyle{\frac{2}{(p+1)^2}}, & \textup{if $p \equiv 1 \psmod{3}$ and $p \neq 7$;} \\
13/6, & \text{if $p=3$;} \\
63/8, & \text{if $p=7$;} \\
1 & \text{else}.
\end{cases}
\end{equation}
Thus
\begin{equation}
Q = \prod_{\textup{$p$ prime}} Q_p = Q_3 Q_7 \prod_{\substack{p \neq 7 \ \textup{prime} \\ p \equiv 1 \psmod {3}}} \parent{1 + \frac{2}{(p+1)^2}}. \label{Equation: product for Q}
\end{equation}
We now turn to the error term. Since $y \ll X^{1/12}$, for $e \leq y$ we have $\log (e^6 X) \ll \log X$. Applying \mathbb{C}ref{Lemma: bound on T(e)}(d), we obtain
\begin{align}
\sum_{n \leq y} \sum_{e \mid n} \mu\parent{n/e} O\parent{T(n) e^{1/2} X^{1/12} \log \parent{e^6 X}} &= O\parent{X^{1/12} \log X \sum_{n \leq y} T(n) \sum_{e \mid n} \abs{\mu\parent{\frac{n}{e}}} e^{1/2} } \nonumber \\
&= O\parent{X^{1/12} \log X \sum_{n \leq y} 2^{2 \omega(n)} n^{1/2} }. \label{Equation: partial simplification twN<=y(X)}
\end{align}
\mathbb{C}ref{Corollary: tail of sum of f(n)/n^sigma} and \mathbb{C}ref{Corollary: sum of 2^omega(n) and d(n)^2}, together with the trivial inequality $2^{2 \omega(n)} \leq d(n)^2$, yield
\begin{equation}
\sum_{n \leq y} 2^{2 \omega(n)} n^{1/2} = O(y^{3/2} \log^3 y). \label{Equation: approximation 2^2 omega(n) n^1/2}
\end{equation}
Substituting \eqref{Equation: approximation 2^2 omega(n) n^1/2} into \eqref{Equation: partial simplification twN<=y(X)} gives our desired result.
\end{proof}
We now bound $N_{7, > y}^{\textup{tw}}(X)$.
\begin{lemma}\label{Lemma: bound on twN>y(X)}
We have
\[
N_{7, > y}^{\textup{tw}}(X) = O\parent{\frac{X^{1/6} \log^3 y}{y}}.
\]
\end{lemma}
\begin{proof}
We have
\begin{align}
N_{7, > y}^{\textup{tw}}(X) &= \sum_{n > y} \sum_{e \mid n} \mu\parent{n/e} M(e^6 X; n)
\leq \sum_{n > y} 2^{\omega(n)} M(n^6 X; n).\label{Equation: 1st partial simplification of twNgy(X)}
\end{align}
Write $n = 3^v 7^w n^\prime$ where $\gcd(n^\prime, 3) = \gcd(n^\prime, 7) = 1$. We define
\[
n_0 \coloneqq 3^{\max(v - 1, 0)} 7^{\max(w - 3, 0)} n^\prime,
\]
so
\[
\frac{n}{3 \cdot 7^3} \leq n_0 \leq n.
\]
Let $(a, b) \in \mathbb{Z}^2$ be a groomed pair. By \Cref{Theorem: Controlling size of twist minimality defect}(a), $H(A(a, b), B(a, b)) \leq n^6 X$ implies $108\, C(a, b)^6 \leq n^6 X$, and by \Cref{Theorem: Controlling size of twist minimality defect}(b), $n \mid \twistdefect(A(a, b), B(a, b))$ implies $n_0^3 \mid C(a, b)$. Thus
\begin{equation}
M(n^6 X; n) \leq \#\set{(a, b) \in \mathbb{Z}^2 \ \text{groomed} : 108 C(a, b)^6 \leq n^6 X, \ n_0^3 \mid C(a, b)}.
\end{equation}
Recalling \eqref{Equation: c(m)} and \mathbb{C}ref{Lemma: c(m) is multiplicative etc}(c), we deduce
\[
M(n^6 X; n) \leq \sum_{m \ll X^{1/6}/n^2} c(n_0^3 m) \leq 3 \cdot 2^{\omega(n_0) - 1} \sum_{m \ll X^{1/6}/n^2} c(m).
\]
But $2^{\omega(n)} \leq 4 \cdot 2^{\omega(n_0)}$, so by \mathbb{C}ref{Corollary: Bound on partial sums of c(m)}, we have
\[
M(n^6 X; n) = O\parent{\frac{2^{\omega(n)} X^{1/6}}{n^2}},
\]
and substituting this expression into \eqref{Equation: 1st partial simplification of twNgy(X)} yields
\begin{equation}
N_{7, > y}^{\textup{tw}}(X) = O\parent{\sum_{n > y} \frac{\parent{2^{\omega(n)}}^2 X^{1/6}}{n^2}} = O\parent{X^{1/6} \sum_{n > y} \frac{2^{2 \omega(n)}}{n^2}}.\label{Equation: 2nd partial simplification of twNgy(X)}
\end{equation}
As in the proof of \mathbb{C}ref{Lemma: asymptotic for twN<=y(X)}, combining \mathbb{C}ref{Corollary: tail of sum of f(n)/n^sigma} and \mathbb{C}ref{Corollary: sum of 2^omega(n) and d(n)^2} together with the trivial inequality $2^{2 \omega(n)} \leq d(n)^2$ yields
\begin{equation}
\sum_{n > y} \frac{2^{2 \omega(n)}}{n^2} = O\parent{\frac{\log^3 y}{y}}. \label{Equation: approximation 2^2 omega(n) / n^2}
\end{equation}
Substituting \eqref{Equation: approximation 2^2 omega(n) / n^2} into \eqref{Equation: 2nd partial simplification of twNgy(X)} gives our desired result.
\end{proof}
We are now in a position to prove \mathbb{C}ref{Intro Theorem: asymptotic for twN(X)}, which we restate here with the notations we have established.
\begin{theorem}\label{Theorem: asymptotic for twN(X)}
We have
\[
N_{7}^{\textup{tw}}(X) = \frac{Q R X^{1/6}}{\zeta(2)} + O(X^{2/15} \log^{17/5} X),
\]
where
\[
Q = \sum_{n \geq 1} \frac{\varphi(n) T (n)}{n^3 \prod_{\ell \mid n} \parent{1 + 1/\ell}},
\]
and $R$ is the area of the region
\[
\mathcal{R}(1) = \set{(a, b) \in \mathbb{R}^2 : H(A (a, b), B (a, b)) \leq 1, b \geq 0}.
\]
\end{theorem}
\begin{proof}
Let $y$ be a positive quantity with $y \ll X^{1/12}$; in particular, $\log y \ll \log X$. \mathbb{C}ref{Lemma: asymptotic for twN<=y(X)} and \mathbb{C}ref{Lemma: bound on twN>y(X)} together tell us
\begin{equation}
N_{7}^{\textup{tw}}(X) = \frac{Q R X^{1/6}}{\zeta(2)} + O\parent{\max\parent{\frac{X^{1/6} \log^3 y}{y}, X^{1/12} y^{3/2} \log X \log^3 y}}.
\end{equation}
We let $y = X^{1/30}/\log^{2/5} X$, so
\begin{equation}
\frac{X^{1/6} \log^3 y}{y} \asymp X^{1/12} y^{3/2} \log X \log^3 y \asymp X^{2/15} \log^{17/5} X,
\end{equation}
and we conclude
\[
N_{7}^{\textup{tw}}(X) = \frac{Q R X^{1/6}}{\zeta(2)} + O(X^{2/15} \log^{17/5} X)
\]
as desired.
\end{proof}
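The choice of $y$ can be checked mechanically: writing $y = X^{a}(\log X)^{b}$, the two error terms have $X$-exponents $\frac16 - a$ and $\frac{1}{12} + \frac{3a}{2}$, which balance at $a = \frac{1}{30}$, and $b = -\frac25$ then equalizes the powers of $\log X$ at $\frac{17}{5}$. The following fragment (illustrative only) records the exponent arithmetic.
\begin{verbatim}
# Exponent bookkeeping for the choice y = X^(1/30) / (log X)^(2/5).
from fractions import Fraction as F

a = F(1, 30)
print(F(1, 6) - a, F(1, 12) + F(3, 2) * a)   # both 2/15
b = F(-2, 5)
print(3 - b, 1 + 3 + F(3, 2) * b)            # both 17/5
\end{verbatim}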
\subsection{$L$-series}
To conclude, we set up the next section by interpreting
\mathbb{C}ref{Theorem: asymptotic for twN(X)} in terms of Dirichlet series. Let
\begin{equation}
h^{\rm tw}_7(n) \colonequals \# \set{(a, b) \in \mathbb{Z}^2 \textup{ groomed} : \twht(A(a,b),B(a,b)) = n}
\end{equation}
and define
\begin{equation}
L^{\rm tw}_7(s) \colonequals \sum_{n \geq 1} \frac{h^{\rm tw}_7(n)}{n^s}
\end{equation}
wherever this series converges. Then $N_{7}^{\textup{tw}}(X) = \sum_{n \leq X} h^{\rm tw}_7(n)$, and conversely we have
$L^{\rm tw}_7(s) = \int_0^\infty u^{-s} \,\mathrm{d}N_{7}^{\textup{tw}}(u) $.
\begin{cor}\label{Corollary: twL(s) has a meromorphic continuation}
The Dirichlet series $L^{\rm tw}_7(s)$ has abscissa of (absolute) convergence $\sigma_a=\sigma_c = 1/6$ and has a meromorphic continuation to the region
\begin{equation}
\set{s = \sigma + i t \text{\rm i}n \mathbb{C} : \sigma > 2/15}. \label{Equation: domain of twistL(s)}
\end{equation}
Moreover, $L^{\rm tw}_7(s)$ has a simple pole at $s = 1/6$ with residue
\[ \res_{s=\frac{1}{6}} L^{\rm tw}_7(s) = \frac{QR}{6\zeta(2)} \]
and is holomorphic elsewhere on the region \eqref{Equation: domain of twistL(s)}.
\end{cor}
\begin{proof}
Let $s = \sigma + i t \in \mathbb{C}$ be given with $\sigma > 1/6$. Abel summation yields
\begin{equation}
\begin{aligned}
\sum_{n \leq X} h^{\rm tw}_7(n) n^{-s} &= N_{7}^{\textup{tw}}(X) X^{-s} + s \int_1^X N_{7}^{\textup{tw}}(u) u^{-s-1} \,\mathrm{d}u \\
&= O\parent{X^{1/6 - \sigma} + s \int_1^X u^{- 5/6 - \sigma} \,\mathrm{d}u};
\end{aligned}
\end{equation}
as $X \to \infty$ the first term vanishes and the integral converges. Thus, when $\sigma > 1/6$,
\[
\sum_{n \geq 1} h^{\rm tw}_7(n) n^{-s} = s \int_1^\infty N_{7}^{\textup{tw}}(u) u^{-1-s}\,\mathrm{d}u
\]
and this integral converges. A similar argument shows that the sum defining $L^{\rm tw}_7(s)$ diverges when $\sigma < 1/6$. We have shown $\sigma_c = 1/6$ is the abscissa of convergence for $L^{\rm tw}_7(s)$, but as $h^{\rm tw}_7(n) \geq 0$ for all $n$, it is also the abscissa of \emph{absolute} convergence $\sigma_a=\sigma_c$.
Now define $L^{\rm tw}_{7,R}(s)$ so that
\begin{equation}
L^{\rm tw}_7(s) = \frac{QR}{\zeta(2)} \zeta(6s) + L^{\rm tw}_{7,R}(s).\label{Equation: Defining twLR}
\end{equation}
Abel summation and the substitution $u \mapsto u^{1/6}$ yield, for $\sigma > 1/6$,
\[
\zeta(6s) = s \int_1^\infty \floor{u^{1/6}} u^{- 1 - s}\,\mathrm{d}u = s \int_1^\infty \parent{u^{1/6} + O(1)} u^{- 1 - s} \,\mathrm{d}u.
\]
Let
\[
\delta(n) \colonequals \begin{cases} 1, & \text{if} \ n = k^6 \ \text{for some} \ k \in \mathbb{Z}; \\
0, & \text{else.}
\end{cases}
\]
Then
\begin{equation}
\begin{aligned}
L^{\rm tw}_{7,R}(s) &= \sum_{n \geq 1} \parent{h^{\rm tw}_7(n) - \frac{QR}{\zeta(2)}\delta(n)} n^{-s}\\
&= s \int_1^\infty \parent{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}} u^{-1-s} \,\mathrm{d}u \label{Equation: twistLR(s) as an integral}
\end{aligned}
\end{equation}
when $\sigma > 1/6$. But then for any $\epsilon > 0$,
\begin{equation}
N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}} = O(u^{2/15 + \epsilon}) \label{Equation: twistN(t) - C floor(t^1/6)}
\end{equation}
by \mathbb{C}ref{Theorem: asymptotic for twN(X)}. Substituting \eqref{Equation: twistN(t) - C floor(t^1/6)} into \eqref{Equation: twistLR(s) as an integral}, we obtain
\begin{align}
L^{\rm tw}_{7,R}(s) = s \int_1^\infty \parent{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}} u^{-1-s}\,\mathrm{d}u &= O\left(s \int_1^\infty u^{-13/15 - \sigma + \epsilon} \,\mathrm{d}u\right) \label{Equation: bound on twLR}
\end{align}
where the integral converges whenever $\sigma > 2/15 + \epsilon$. Letting $\epsilon \to 0$, we obtain an analytic continuation of $L^{\rm tw}_{7,R}(s)$ to the region \eqref{Equation: domain of twistL(s)}.
At the same time, $\zeta(6s)$ has meromorphic continuation to $\mathbb{C}$ with a simple pole at $s=1/6$ with residue $1/6$. Thus looking back at \eqref{Equation: Defining twLR}, we find that
\[
L^{\rm tw}_7(s) = \frac{QR}{\zeta(2)} \zeta(6s) + s \int_1^\infty \parent{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}} u^{-1-s} \,\mathrm{d}u
\]
when $\sigma > 1/6$, but in fact the right-hand side of this equality defines a meromorphic function on the region \eqref{Equation: domain of twistL(s)} with a simple pole at $s = 1/6$ and no other poles. Our claim follows.
\end{proof}
\section{Estimates for rational isomorphism classes}\label{Section: Working over the rationals}
In \cref{Section: Estimating twN(X)}, we counted the number of elliptic curves over $\mathbb{Q}$ with a (cyclic) $7$-isogeny up to isomorphism over $\mathbb{Q}alg$ (\mathbb{C}ref{Theorem: asymptotic for twN(X)}). In this section, we count all isomorphism classes over $\mathbb{Q}$ by enumerating over twists using a Tauberian theorem (\mathbb{C}ref{Theorem: Landau's Tauberian theorem}).
\subsection{Setup}
Breaking up the sum \eqref{eqn:NQX}, let
\begin{equation}
h_7(n) \colonequals \#\{(a, b, c) \in \mathbb{Z}^3 : \textup{$(a, b)$ groomed, $c$ squarefree, $\hht(c^2 A(a,b),c^3 B(a,b)) = n$}\} \label{Definition: h(n), N(n), L(n)}.
\end{equation}
Then $h_7(n)$ counts the number of elliptic curves $E \text{\rm i}n \mathscr{E}$ of height $n$ that admit a $7$-isogeny \eqref{eqn:NQx}
and
\begin{equation}
N_{7}(X) = \sum_{n \leq X} h_7(n).\label{Equation: NQ(X) in terms of h7}
\end{equation}
We also let
\begin{equation}
L_7(s) \colonequals \sum_{n \geq 1} \frac{h_7(n)}{n^s}
\end{equation}
wherever this sum converges.
\begin{theorem}\label{Theorem: relationship between twistL(s) and L(s)}
The following statements hold.
\begin{enumalph}
\item We have
\[
h_7(n) = 2 \sum_{c^6 \mid n} \abs{\mu(c)} h^{\rm tw}_7\parent{n/c^6}
\]
\item For $s = \sigma + i t \in \mathbb{C}$ with $\sigma > 1/6$ we have
\begin{equation}
L_7(s) = \frac{2 \zeta(6s) L^{\rm tw}_7(s)}{\zeta(12s)}
\end{equation}
with absolute convergence on this region.
\item The Dirichlet series $L_7(s)$ has a meromorphic continuation to the region \eqref{Equation: domain of twistL(s)} with a double pole at $s = 1/6$ and no other singularities on this region.
\item The Laurent expansion for $L_7(s)$ at $s = 1/6$ begins
\begin{equation}
L_7(s) = \frac{1}{3 \zeta(2)^2} \parent{\frac{QR}{6} \parent{s - \frac 16}^{-2} + \parent{\zeta(2) \ell_0 + Q R \parent{\gamma - \frac{2 \zeta^\prime(2)}{\zeta(2)}}} \parent{s - \frac{1}{6}}^{-1} + O(1)},
\end{equation}
where
\begin{equation}
\ell_0 \colonequals \frac{Q R \gamma}{\zeta(2)} + \frac {1}{6} \int_1^\infty \parent{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}} u^{-7/6} \,\mathrm{d}u\label{Equation: Definition of ell0}
\end{equation}
is the constant term of the Laurent expansion for $L^{\rm tw}_7(s)$ around $s = 1/6$.
\end{enumalph}
\end{theorem}
\begin{proof}
For (a), we first collect the terms that contribute to $h_7(n)$ by the quadratic twist factor $c$:
\begin{equation}
h_7^{\parent{c}}(n) \colonequals \#\set{(a, b) \in \mathbb{Z}^2 \textup{ groomed} : \hht(c^2A(a,b),c^3B(a,b))= n}.
\end{equation}
By \eqref{Equation: quadratic twists multiply height by c^6} we have $\height(c^2 A(a,b),c^3B(a,b))=c^6 \height(A(a,b),B(a,b))$, so
\begin{equation}
h_7^{\parent{c}}(n) =
\begin{cases}
h^{\rm tw}_7(n/c^6), & \text{if $c^6 \mid n$;} \\
0, & \text{otherwise.}
\end{cases}
\end{equation}
Therefore
\[ h_7(n) = \sum_{c \ \text{squarefree}} h_7^{\parent{c}}(n)
= 2 \sum_{c \geq 1} \abs{\mu(c)} h_7^{\parent{c}}(n)
= 2 \sum_{c^6 \mid n} \abs{\mu(c)} h^{\rm tw}_7\parent{n/c^6}, \]
proving (a).
For (b), we see that $h_7(n)$ is the $n$th coefficient of the Dirichlet convolution of $L^{\rm tw}_7(s)$ and
\[
2\sum_{n \geq 1} \abs{\mu(n)} n^{-6s} = \frac{2\zeta(6s)}{\zeta(12s)}.
\]
Write $s = \sigma + i t$. As both $L^{\rm tw}_7(s)$ and $\zeta(6s)/\zeta(12s)$ are absolutely convergent when $\sigma > 1/6$, we see
\[
L_7(s) = \frac{2 \zeta(6s) L^{\rm tw}_7(s)}{\zeta(12s)}
\]
when $\sigma > 1/6$, and $L_7(s)$ converges absolutely in this half-plane.
For (c), since $\zeta(s)$ is nonvanishing when $\sigma > 1$, the ratio $\zeta(6s)/\zeta(12s)$ is a meromorphic function for $\sigma > 1/12$. But \Cref{Corollary: twL(s) has a meromorphic continuation} gives a meromorphic continuation of $L^{\rm tw}_7(s)$ to the region \eqref{Equation: domain of twistL(s)}. The function $L_7(s)$ is a product of these two meromorphic functions on \eqref{Equation: domain of twistL(s)}, and so it is a meromorphic function on this region. The holomorphy and singularities of $L_7(s)$ then follow from those of $L^{\rm tw}_7(s)$ and $\zeta(s)$.
We conclude (d) by computing Laurent expansions. We readily verify
\begin{equation}
\frac{\zeta(6s)}{\zeta(12s)} = \frac{1}{\zeta(2)}\parent{\frac{1}{6}\parent{s - \frac 16}^{-1} + \parent{\gamma - \frac{2 \zeta^\prime(2)}{\zeta(2)}} + O\parent{s - \frac 16}},
\end{equation}
whereas the Laurent expansion for $L^{\rm tw}_7(s)$ at $s = 1/6$ begins
\begin{equation}
L^{\rm tw}_7(s) = \frac{1}{\zeta(2)} \parent{\frac{QR}{6}\parent{s - \frac 16}^{-1} + \zeta(2) \ell_0 + \dots},
\end{equation}
with $\ell_0$ given by \eqref{Equation: Definition of ell0}. Multiplying these two Laurent expansions gives the desired result.
\end{proof}
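As a numerical illustration of the identity $2\sum_{n \geq 1} \abs{\mu(n)} n^{-6s} = 2\zeta(6s)/\zeta(12s)$ used in the proof of (b), the following fragment (not part of the proof) compares a truncated sum over squarefree integers with the zeta quotient at a sample real value of $s$.
\begin{verbatim}
# Compare 2 * sum_{n squarefree} n^(-6s) with 2*zeta(6s)/zeta(12s) at s = 1/2.
from mpmath import mpf, zeta

def squarefree(n):
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True

s = mpf("0.5")
lhs = 2 * sum(mpf(n) ** (-6 * s) for n in range(1, 20000) if squarefree(n))
rhs = 2 * zeta(6 * s) / zeta(12 * s)
print(lhs, rhs)    # both approximately 2.36313
\end{verbatim}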
\subsection{Proof of main result}
We are now poised to finish off the proof of our main result.
\begin{lemma}\label{Lemma: h(n) is admissible}
The sequence $\parent{h_7(n)}_{n \geq 1}$ is admissible \textup{(\Cref{Definition: admissible sequences})} with parameters $(1/6,1/30,1089/1025)$.
\end{lemma}
\begin{proof}
We check each condition in \mathbb{C}ref{Definition: admissible sequences}. Since $h_7(n)$ counts objects,
we indeed have $h_7(n) \in \mathbb{Z}_{\geq 0}$.
For (i), \mathbb{C}ref{Corollary: twL(s) has a meromorphic continuation} tells us that $L^{\rm tw}_7(s)$ has $1/6$ as its abscissa of absolute convergence. Likewise, $\displaystyle{\frac{\zeta(6s)}{\zeta(12s)}}$ has $1/6$ as its abscissa of absolute convergence. By \mathbb{C}ref{Theorem: relationship between twistL(s) and L(s)}(b),
\[
L_7(s) = \frac{2\zeta(6s) L^{\rm tw}_7(s)}{\zeta(12s)},
\]
and by \Cref{Theorem: product of Dirichlet series converges} this series converges absolutely for $\sigma > 1/6$, so the abscissa of absolute convergence for $L_7(s)$ is at most $1/6$. But for $\sigma < 1/6$, $L_7(\sigma) > L^{\rm tw}_7(\sigma)$ by termwise comparison of coefficients, so the Dirichlet series for $L_7(s)$ diverges when $\sigma < 1/6$, and (i) holds with $\sigma_a = 1/6$.
For (ii), \Cref{Corollary: twL(s) has a meromorphic continuation} tells us that $L^{\rm tw}_7(s)$ has a meromorphic continuation when $\sigma=\repart(s)>2/15$; on the other hand, as $\zeta(12s)$ is nonvanishing for $\sigma > 1/12$, we see that $\zeta(6s)/\zeta(12s)$ has a meromorphic continuation to $\sigma>1/12$, and so (ii) holds with
\[
\delta = 1/6 - 2/15=1/30.
\]
(The only pole of $L_7(s)/s$ with $\sigma > 2/15$ is the double pole at $s = 1/6$ indicated in
\Cref{Theorem: relationship between twistL(s) and L(s)}(c).)
For (iii), let $\sigma > 2/15$. By \mathbb{C}ref{Corollary: bound on mu past abscissa of convergence}, we have $\mu_{L^{\rm tw}_7}(\sigma) \leq 1$. Let $\zeta_a(s)=\zeta(as)$. Applying \mathbb{C}ref{Theorem: muzeta(sigma)}, we have
\begin{equation}
\mu_{\zeta_6}(\sigma) = \mu_{\zeta}(6\sigma) \leq \frac{64}{205}\parent{1 - \frac{6 \cdot 2}{15}} = \frac{64}{1025}
\end{equation}
if $\sigma \leq 1/6$, and by \Cref{Theorem: absolutely convergent Dirichlet series have mu = 0}, $\mu_{\zeta_6}(\sigma) = 0$ if $\sigma > 1/6$. Finally, as $\zeta(12s)^{-1}$ is absolutely convergent for $\sigma > 1/12$, \Cref{Theorem: absolutely convergent Dirichlet series have mu = 0} tells us $\mu_{{\zeta_{12}}^{-1}}(\sigma) = 0$. Taken together, we see
\[
\mu_{L_7}(\sigma) \leq 1 + \frac{64}{1025} + 0 = \frac{1089}{1025},
\]
so the sequence $\parent{h_7(n)}_{n \geq 1}$ is admissible with final parameter $\xi = 1089/1025$.
\end{proof}
We now prove \mathbb{C}ref{Intro Theorem: asymptotic for N(X)}, which we restate here for ease of reference in our established notation.
\begin{theorem}\label{Theorem: asymptotic for N(X)}
For all $\epsilon > 0$,
\[
N_{7}(X) = \frac{QR}{3 \zeta(2)^2} X^{1/6} \log X + \frac{2}{\zeta(2)^2}\parent{\zeta(2) \ell_0 + Q R \parent{\gamma - 1 - \displaystyle{\frac{2 \zeta^\prime(2)}{\zeta(2)}}}} X^{1/6} + O\parent{X^{7/45 + \epsilon}}
\]
as $X \to \text{\rm i}nfty$, where the implicit constant depends on $\epsilon$. The constants $Q,R$ are defined in \textup{\mathbb{C}ref{Theorem: asymptotic for twN(X)}}, and $\ell_0$ is defined in \eqref{Equation: Definition of ell0}.
\end{theorem}
\begin{proof}
By \mathbb{C}ref{Lemma: h(n) is admissible}, $\parent{h_7(n)}_{n \geq 1}$ is admissible with parameters $\parent{1/6, 1/30, 1089/1025}$. We now apply \mathbb{C}ref{Theorem: Landau's Tauberian theorem} to the Dirichlet series $L_7(s)$, and our claim follows.
\end{proof}
\begin{remark}\label{Remark: true bound for twistN(X)}
We suspect that the true error on both $N_{7}(X)$ and $N_{7}^{\textup{tw}}(X)$ is $O(X^{1/12 + \epsilon})$. Some improvements to our error term may be possible. We believe that (with appropriate hypotheses) the denominator $\floor{\xi} + 2$ in the exponent of the error for \mathbb{C}ref{Theorem: Landau's Tauberian theorem} can be replaced with $\xi + 1$. If so, the exponent $7/45 + \epsilon$ in the error term may be replaced with $1909/12684 + \epsilon$. Further improvements in the estimate of $\mu_\zeta(\sigma)$ will translate directly to improvements in the error term of $N_{7}(X)$. In the most optimistic scenario, if the Lindel\"{o}f hypothesis holds, the exponent of our error term would be $3/20 + \epsilon$.
\end{remark}
\section{Computations}\label{Section: Computations}
In this section, we conclude by describing some computations which make our main theorems completely explicit.
\subsection{Computing elliptic curves with $7$-isogeny}
We begin by outlining an algorithm for computing all elliptic curves that admit a 7-isogeny up to twist height $X$. In a nutshell, we iterate over possible factorizations $e^3 m$ with $m$ cubefree to find all groomed pairs $(a, b)$ for which $C(a, b) = e^3 m$, then check if $\twht(A(a, b), B(a, b)) \leq X$.
In detail, our algorithm proceeds as follows (a schematic sketch in code follows the list).
\begin{enumalg}
\item We list all primes $p \equiv 1 \pmod 3$ up to $(X/108)^{1/6}$
(this bound arises from \mathbb{C}ref{Theorem: Controlling size of twist minimality defect}(a)).
\item For each pair $(a, b) \in \mathbb{Z}^2$ with $b > 0$, $\gcd(a, b) = 1$, and $C(a, b)$ coprime to $3$ and less than $Y$, we compute $C(a, b)$. We organize the results into a lookup table, so that for each $c$ we can find all pairs $(a, b)$ with $b > 0$, $\gcd(a, b) = 1$, and $C(a, b) = c$. We append the pair $(1, 0)$ to our table under lookup value $1$. For each $c$ in our lookup table, we record whether $c$ is cubefree by sieving against the primes we previously computed.
\item For positive integer pairs $(e_0, m)$ with $e_0^{12} m^6 \leq X/108$ and $m$ cubefree, we find all groomed pairs $(a, b) \in \mathbb{Z}^2$ with $C(a, b) = e_0^3 m$. If $\gcd(e_0, 3) = \gcd(m, 3) = 1$, we can do this as follows. If $e_0^3 < Y$, we iterate over groomed pairs $(a_e, b_e)$ and $(a_m, b_m)$ yielding $C(a_e, b_e) = e_0^3$ and $C(a_m, b_m) = m$ respectively, and taking the product
\[
(a_e + b_e\parent{-1+3\zeta}) (a_m + b_m\parent{-1+3\zeta}) = a + b \parent{-1+3\zeta} \in \mathbb{Z}[3\zeta]
\]
as in the proof of \mathbb{C}ref{Lemma: c(m) is multiplicative etc}. If $e_0^3 > Y$, we iterate over groomed pairs $(a_e^\prime, b_e^\prime)$ with $C(a_e^\prime, b_e^\prime) = e_0$ instead of over groomed pairs $(a_e, b_e)$, and compute
\[
(a_e^\prime + b_e^\prime\parent{-1+3\zeta})^3 (a_m + b_m\parent{-1+3\zeta}) = a + b \parent{-1+3\zeta} \in \mathbb{Z}[3\zeta].
\]
If $\gcd(e_0, 3) > 1$ or $\gcd(m, 3) > 1$, we perform the steps above for the components of $e_0$ and $m$ coprime to $3$, and then postmultiply by those groomed pairs $(a_3, b_3) \in \mathbb{Z}^2$ with $C(a_3, b_3)$ an appropriate power of $3$ (which is necessarily $9$ or $27$ by \Cref{Lemma: c(m) is multiplicative etc}(b)).
\item For each pair $(a, b)$ with $C(a, b) = e_0^3 m$ obtained in the previous step, we compute $H(A(a, b), B(a, b))$. We compute the $3$-component $e_3$ and the $7$-component $e_7$ of the twist minimality defect, and thereby compute the twist minimality defect $e = \lcm(e_0, e_3, e_7)$. We compute the twist height using the reduced pair $(A(a, b) / e^2, \abs{B(a, b)} / e^3)$. If this result is less than or equal to $X$, we report $(a, b)$, together with its twist height and any auxiliary information we care to record.
\end{enumalg}
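The following Python fragment sketches the bookkeeping in steps (2) and (3). It is schematic only: the function \texttt{C} is a placeholder for the binary form $C(a, b)$ defined earlier in the paper, and we assume (as the notation $\mathbb{Z}[3\zeta]$ suggests) that $\zeta$ is a primitive cube root of unity, so that $w = -1 + 3\zeta$ satisfies $w^2 = -5w - 13$ and multiplication of pairs is explicit.
\begin{verbatim}
# Schematic sketch of steps (2)-(3); C(a, b) is a placeholder.
from collections import defaultdict
from math import gcd

def C(a, b):
    raise NotImplementedError("the form C(a, b) is defined in the paper")

def build_lookup(Y, box):
    """Step (2): table sending c < Y (coprime to 3) to pairs (a, b) with
    b > 0, gcd(a, b) = 1 and C(a, b) = c; the pair (1, 0) is filed under 1."""
    table = defaultdict(list)
    table[1].append((1, 0))
    for a in range(-box, box + 1):
        for b in range(1, box + 1):
            if gcd(a, b) != 1:
                continue
            c = C(a, b)
            if c % 3 != 0 and c < Y:
                table[c].append((a, b))
    return table

def multiply(pair1, pair2):
    """Step (3): multiply a1 + b1*w and a2 + b2*w in Z[3*zeta],
    where w = -1 + 3*zeta and w^2 = -5*w - 13."""
    a1, b1 = pair1
    a2, b2 = pair2
    return (a1 * a2 - 13 * b1 * b2, a1 * b2 + a2 * b1 - 5 * b1 * b2)
\end{verbatim}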
We list the first few twist minimal elliptic curves admitting a (cyclic) $7$-isogeny in Table \ref{table:firstfew7}.
\jvtable{table:firstfew7}{
\rowcolors{2}{white}{gray!10}
\begin{tabular}{c c|c c}
$(A, B)$ & $(a, b)$ & $\twht(E)$ & $\twistdefect(E)$ \\
\hline\hline
$(-3, 62)$ & $(14, 5)$ & $103788$ & $1029$ \\
$(13, 78)$ & $(21, 4)$ & $164268$ & $1029$ \\
$(37, 74)$ & $(42, 1)$ & $202612$ & $1029$ \\
$(-35, 98)$ & $(0, 1)$ & $259308$ & $21$ \\
$(45, 18)$ & $(35, 2)$ & $364500$ & $1029$ \\
$(-43, 166)$ & $(7, 13)$ & $744012$ & $3087$ \\
$(-75, 262)$ & $(-7, 8)$ & $1853388$ & $1029$ \\
$(-147, 658)$ & $(-56, 1)$ & $12706092$ & $1029$ \\
$(-147, 1582)$ & $(7, 6)$ & $67573548$ & $343$ \\
$(285, 2014)$ & $(28, 3)$ & $109517292$ & $343$ \\
$(-323, 2242)$ & $(-21, 10)$ & $135717228$ & $1029$ \\
$(-395, 3002)$ & $(-63, 2)$ & $246519500$ & $1029$ \\
$(-155, 3658)$ & $(21, 11)$ & $361286028$ & $1029$ \\
$(357, 5194)$ & $(7, 1)$ & $728396172$ & $21$ \\
$(-595, 5586)$ & $(-14, 1)$ & $842579500$ & $63$ \\
$(285, 5662)$ & $(91, 1)$ & $865572588$ & $1029$ \\
$(-603, 5706)$ & $(-28, 11)$ & $879077772$ & $1029$ \\
\end{tabular}
}{$E \in \scrE^{\textup{tw}}$ with $7$-isogeny and $\twht E \leq 10^{9}$}
Running this algorithm out to $X = 10^{42}$ took us approximately $34$ CPU hours on a single core, producing 4\,582\,079
elliptic curves admitting a (cyclic) 7-isogeny in $\scrE^{\textup{tw}}_{\leq 10^{42}}$. To check the accuracy of our code, we confirmed that the $j$-invariants of these curves are distinct. We also confirmed that the 7-division polynomial of each curve has a linear or cubic factor over $\mathbb{Q}$; this took 3.5 CPU hours. For $X = 10^{42}$, we have
\[
\frac{\zeta(2)}{Q R}\frac{ N_{7}^{\textup{tw}}(10^{42})}{(10^{42})^{1/6}} = 0.99996\ldots,
\]
which is close to 1.
Substituting \mathbb{C}ref{Theorem: relationship between twistL(s) and L(s)}(a) into \eqref{Equation: NQ(X) in terms of h7} and reorganizing the resulting sum, we find
\begin{equation}
N_{7} (X) = 2 \sum_{n \leq X} h^{\rm tw}_7\parent{n} \sum_{c \leq (X/n)^{1/6}} \abs{\mu(c)}.
\end{equation}
Letting $X = 10^{42}$ and using our list of 4\,582\,079 elliptic curves admitting a (cyclic) 7-isogeny, we compute that there are $88\,157\,174$ elliptic curves admitting a (cyclic) 7-isogeny in $\mathscr{E}_{\leq 10^{42}}$.
\subsection{Computing constants}
We also estimate the constants in our main theorems. First and easiest among these is $Q$, given by \eqref{Equation: product for Q}. Truncating the Euler product as a product over $p \leq Y$ gives us a lower bound
\[
Q_{\leq Y} \colonequals \frac{273}{16} \prod_{\substack{7 < p \leq Y \\ p \equiv 1 \psmod 3}} \parent{1 + \frac{2}{(p+1)^2}}
\]
for $Q$. To obtain an upper bound, we compute
\[
Q < Q_{\leq Y} \exp\parent{2 \sum_{\substack{p > Y \\ p \equiv 1 \psmod 3}} \frac{1}{p^2+1}}.
\]
Suppose $Y \geq 8 \cdot 10^9$. Using Abel summation and Bennett--Martin--O'Bryant--Rechnitzer \cite[Theorem 1.4]{Bennett-Martin-OBryant-Rechnitzer}, we obtain
\begin{align*}
\sum_{\substack{p > Y \\ p \equiv 1 \psmod 3}} \frac{1}{p^2+1} &= -\frac{\pi(Y;3,1)}{Y^2 + 1} + 2 \int_Y^\infty \frac{\pi(u; 3, 1) u} {(u^2 + 1)^2} \,\mathrm{d}u \\
&< -\frac{Y}{2\parent{Y^2 + 1} \log Y} + \parent{\frac 1{\log Y} + \frac{5}{2 \log^2 Y}} \int_Y^\infty \frac{u^2} {(u^2 + 1)^2} \,\mathrm{d}u \\
&< \frac 12 \parent{\frac{5Y}{2 (Y^2 + 1) \log Y} + \parent{\frac 1{\log Y} + \frac{5}{2 \log^2 Y}} \parent{\frac{\pi}{2} - \tan^{-1}(Y)}}
\end{align*}
so
\[
Q <Q_{\leq Y} \cdot \exp\parent{\frac{5Y}{2 (Y^2 + 1) \log Y} + \parent{\frac 1{\log Y} + \frac{5}{2 \log^2 Y}} \parent{\frac{\pi}{2} - \tan^{-1}(Y)}}.
\]
In particular, letting $Y = 10^{12}$, we compute
\[
17.46040523112662 < Q < 17.460405231134835
\]
This computation took approximately $9$ CPU days.
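The shape of this computation is sketched below. The cutoff is taken much smaller than the $Y = 10^{12}$ used above (so the printed values only illustrate the method), and the tail factor is the bound displayed above, which we established only for $Y \geq 8 \cdot 10^{9}$.
\begin{verbatim}
# Illustrative two-sided estimate for Q with a small cutoff Y.
from mpmath import mp, mpf, exp, log, atan, pi
from sympy import primerange

mp.dps = 30
Y = 10**6
Q_low = mpf(273) / 16                       # Q_3 * Q_7 = 273/16
for p in primerange(8, Y + 1):
    if p % 3 == 1:
        Q_low *= 1 + mpf(2) / (p + 1)**2
tail = (5 * Y / (2 * (Y**2 + 1) * log(Y))
        + (1 / log(Y) + 5 / (2 * log(Y)**2)) * (pi / 2 - atan(Y)))
Q_high = Q_low * exp(tail)
print(Q_low, Q_high)    # compare with 17.4604052311... above
\end{verbatim}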
We now turn our attention to $R$, given in \eqref{eqn: R(X)}. We observe
\[
\mathcal{R}(1) \subseteq [-0.677, 0.677] \times [0, 0.078],
\]
so we can estimate the area of $\mathcal{R}(1)$ by performing rejection sampling on the rectangle $[-0.677, 0.677] \times [0, 0.078]$, which has area $0.105612$. Of our first $s \colonequals 595\,055\,000\,000$ samples, $r \colonequals 243\,228\,665\,965$ lie in $\mathcal{R}(1)$, so
\[
R \approx 0.105612 \cdot\frac{r}{s} = 0.04316889\ldots
\]
with standard error
\[
0.105612 \cdot \sqrt{\frac{r(s - r)}{s^3}} < 6.8 \cdot 10^{-8}.
\]
This took 11 CPU weeks to compute.
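The sampling procedure has the following shape; the functions \texttt{A}, \texttt{B} and \texttt{H} below are placeholders for the quantities defined earlier in the paper, so only the rejection-sampling logic is shown.
\begin{verbatim}
# Structural sketch of the rejection-sampling estimate of R = area(R(1)).
import random

def A(a, b):          # placeholder: A(a, b) as defined in the paper
    raise NotImplementedError

def B(a, b):          # placeholder: B(a, b) as defined in the paper
    raise NotImplementedError

def H(Aval, Bval):    # placeholder: the height H
    raise NotImplementedError

def estimate_R(samples, a_max=0.677, b_max=0.078, seed=0):
    rng = random.Random(seed)
    box_area = 2 * a_max * b_max              # 0.105612
    hits = 0
    for _ in range(samples):
        a = rng.uniform(-a_max, a_max)
        b = rng.uniform(0.0, b_max)
        if H(A(a, b), B(a, b)) <= 1:          # (a, b) lies in R(1)
            hits += 1
    p = hits / samples
    return box_area * p, box_area * (p * (1 - p) / samples) ** 0.5
\end{verbatim}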
We can approximate $\ell_0$ by truncating the integral \eqref{Equation: Definition of ell0} and using our approximations for $Q$ and $R$. This yields $\ell_0 \approx -1.62334$. In \mathbb{C}ref{Theorem: asymptotic for twN(X)}, we have shown that for some $M > 0$ and for all $u > X$, we have
\[
\abs{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}} < M u^{2/15} \log^{17/5} u.
\]
Thus
\begin{equation*}
\begin{aligned}
&\abs{\int_X^\infty \parent{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}} u^{-7/6} \,\mathrm{d}u} \\
&\qquad\qquad < M \int_X^\infty u^{-31/30} \log^{17/5} u\,\mathrm{d}u \\
&\qquad\qquad < M \int_X^\infty u^{-31/30} \log^4 u\,\mathrm{d}u \\
&\qquad\qquad = 30 M X^{-1/30} \parent{\log^4 X + 120 \log^3 X + 10800 \log^2 X + 648000 \log X + 19440000};
\end{aligned}
\end{equation*}
this gives us a bound on our truncation error.
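The closed form just displayed can be verified symbolically: the following fragment (illustrative only) checks that $-30\, u^{-1/30}\parent{\log^4 u + 120\log^3 u + 10800 \log^2 u + 648000\log u + 19440000}$ is an antiderivative of $u^{-31/30}\log^4 u$; since it tends to $0$ as $u \to \infty$, the displayed evaluation follows.
\begin{verbatim}
# Symbolic check of the antiderivative used for the truncation bound.
import sympy as sp

u = sp.symbols('u', positive=True)
P = sp.log(u)**4 + 120*sp.log(u)**3 + 10800*sp.log(u)**2 \
    + 648000*sp.log(u) + 19440000
F = -30 * u**sp.Rational(-1, 30) * P
print(sp.simplify(sp.diff(F, u) - u**sp.Rational(-31, 30) * sp.log(u)**4))  # 0
\end{verbatim}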
We do not know the exact value for $M$, but empirically, we find that for $1 \leq u \leq 10^{42}$,
\[
-3.3119 \cdot 10^{-5} \leq \frac{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}}{u^{2/15} \log^{17/5} u} \leq 4.3226 \cdot 10^{-6}.
\]
If we assume $M \approx 3.3119 \cdot 10^{-5}$, we find the truncation error for $\ell_0$ is bounded by $253.23$, which catastrophically dwarfs our initial estimate.
We can do better with stronger assumptions. Suppose for the moment that $N_{7}^{\textup{tw}}(X) - \frac{QR}{\zeta(2)} X^{1/6} = O(X^{1/12 + \epsilon})$, as we guessed in \mathbb{C}ref{Remark: true bound for twistN(X)}. We let $\epsilon \colonequals 10^{-4}$, and find that for $1 \leq u \leq 10^{42}$,
\[
-1.2174 \leq \frac{N_{7}^{\textup{tw}}(u) - \frac{QR}{\zeta(2)} \floor{u^{1/6}}}{u^{1/12 + \epsilon}} \leq 0.52272.
\]
If
\[
\abs{N_{7}^{\textup{tw}}(X) - \frac{QR}{\zeta(2)} X^{1/6}} \leq M X^{1/12 + \epsilon}
\]
for $M \approx 1.2174$, we get an estimated truncation error of $2.43 \cdot 10^{-5}$, which is much more manageable.
Our estimate of $\ell_0$ is also skewed by our estimates of $QR$. An error of $\epsilon$ in our estimate for $QR$ induces an error of
\[
\frac{\epsilon}{6\zeta(2)} \int_1^X \floor{u^{1/6}} u^{-7/6} \,\mathrm{d}u < \frac{\epsilon}{6\zeta(2)} \int_1^X u^{-1}\,\mathrm{d}u = \frac{\epsilon \log X}{6\zeta(2)}
\]
in our estimate of $\ell_0$. When $X = 10^{42}$, this gives an additional error of $1.15 \cdot 10^{-5}$, for an aggregate error of $253.23$ or $2.43 \cdot 10^{-5}$, depending on our assumptions.
Given $Q$, $R$, and $\ell_0$, it is straightforward to compute $c_1$ and $c_2$ using the expressions given for them in \mathbb{C}ref{Theorem: asymptotic for N(X)}. We find $c_1 = 0.09285536\ldots$ with an error of $6.02 \cdot 10^{-8}$, and $c_2 \approx -0.16405$ with an error of $307.89$ or of $2.98 \cdot 10^{-5}$, depending on the assumptions made above. Note that both of these estimates for $c_2$ depended on empirical rather than theoretical estimates for the implicit constant in the error term of \mathbb{C}ref{Theorem: asymptotic for N(X)}. As a sanity check, we also verify that
\[
\frac{N_{7}(10^{42})}{10^7} - 42 c_1 \log 10 = -0.1641924\ldots \approx c_2,
\]
which agrees to three decimal places with the estimate for $c_2$ we gave above.
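The arithmetic of this last display is easily reproduced:
\begin{verbatim}
# Reproduce the sanity check N_7(10^42)/10^7 - 42*c_1*log(10) ~ c_2.
from math import log

c1 = 0.09285536
N7 = 88157174            # curves in E_{<= 10^42} admitting a 7-isogeny
print(N7 / 10**7 - 42 * c1 * log(10))   # approximately -0.16419
\end{verbatim}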
\providecommand{\MR}{\relax\text{\rm i}fhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document} |
\begin{document}
\title{Uniform labelled calculi for preferential conditional logics based on neighbourhood semantics\footnote{Submitted for publication to the Journal of Logic and Computation. The article will be revised after the referees' reports.
This work was partially supported by the Project TICAMORE ANR-16-CE91-0002-01 and by WWTF project MA 16-28.}}
\begin{abstract}
The preferential conditional logic $ \mathbb{PCL} $, introduced by Burgess, and
its extensions are studied. First, a natural semantics
based on neighbourhood models, which generalise Lewis' sphere models for
counterfactual logics, is proposed. Soundness and completeness of $ \mathbb{PCL} $ and its extensions
with respect to this class of models are proved directly.
Labelled sequent calculi for all logics of the family are then introduced.
The calculi are modular and have standard proof-theoretical properties,
the most important of which is admissibility of cut, which entails a syntactic proof of completeness of the calculi.
By adopting a general strategy, root-first proof search terminates, thereby providing a decision procedure for $ \mathbb{PCL} $ and its extensions.
Finally, the semantic completeness of the calculi is established:
from a finite branch in a failed proof attempt it is possible to extract a finite
countermodel of the root sequent. The latter result gives a
constructive proof of the finite model property of all the logics considered.
\end{abstract}
\section{Introduction}
Conditional logics have been studied from a philosophical viewpoint since the 60's, with seminal works by, among others, Lewis, Nute, Stalnaker, Chellas, Pollock and Burgess.\footnote{Cf. \cite{lewis1973}, \cite{stalnaker1968theory}, \cite{stalnaker1970semantic}, \cite{chellas1975basic}, \cite{pollock1981refined}, \cite{burgess1981quick}, \cite{veltman1985logic}.} In all cases, the aim is to represent a kind of hypothetical implication $A > B$ different from classical material implication, but also from other non-classical implications, such as the intuitionistic one.
There are mainly two kinds of interpretations of a conditional $A > B$. The first is hypothetical/counterfactual: ``If A were the case then B would be the case'', while the second is prototypical: ``Typically (normally) if A then B'', or ``B holds in most normal/typical cases in which A
holds''. Applications of conditional logics to computer science, more specifically to artificial intelligence and knowledge representation, have followed these two interpretations. The hypothetical/counterfactual interpretation has led to the study of the relation of conditional logics with the notion of \emph{belief change}, which in turn has led to the crucial issue of the Ramsey Test. The prototypical interpretation has found application in the formalisation of default and non-monotonic reasoning (the well-known KLM systems) and has some relation with probabilistic reasoning. The range of conditional logics is actually more extensive, comprising also deontic and causal interpretations.
All interpretations of the conditional operator agree on the rejection of some properties of material implication, along with properties of other non-classical implications, such as the intuitionistic one. These undesirable properties are strengthening, $A > B$ \emph{implies} $(A\land C) > B$; transitivity, $A> B$ \emph{and} $B > C$ \emph{imply} $A > C$; and contraposition, $A > B$ \emph{implies} $\neg B > \neg A$.
The semantics of conditional logics is defined in terms of various kinds of possible-world models, most of them comprising a notion of preference, comparative similarity or choice among worlds. Intuitively, a conditional $A > B$ is true at a world $x$ if $B$ is true
in all the worlds most normal/similar/close to $x$ in which $A$ is true.
In contrast with the situation in
standard modal logic, there is
no unique semantics for conditional logics.
\
In this paper we consider the conditional logic $\mathbb{PCL}$ (Preferential Conditional Logic), one of the fundamental systems of conditional logics. An axiomatization of $\mathbb{PCL}$ (and the respective completeness proof) has been originally presented in the seminal work by Burgess in \cite{burgess1981quick}, where the system is called S, and then by Veltman \cite{veltman1985logic}.
Logic $\mathbb{PCL}$ generalises Lewis' basic logic of counterfactuals, and its flat fragment corresponds to the preferential logic P of non-monotonic reasoning proposed by Kraus, Lehmann and Magidor \cite{kraus1990nonmonotonic}.
The logic takes its name, $\mathbb{PCL}$, from its original semantics, defined in terms of \emph{preferential models}. In these models, every world $x$ is associated with a set of accessible worlds $W_x$ and a \emph{preference} relation $\leq_x $ on this set; the intuition is that this relation assesses the relative normality/similarity of pairs of worlds with respect to $x$. Roughly speaking, a conditional $A > B$ is forced by $x$ if $B$ is true in all accessible worlds (that is, worlds in $W_x$) where $A$ holds and that are most ``normal" with respect to $x$, where their normality is assessed by the relation $\leq_x$\footnote{According to some interpretations, normality means minimality with respect to $\leq_x$.}.
In this paper we present an alternative semantics for $\mathbb{PCL}$ based on \mathsf{0}h{neighbourhood models}. Neighbourhood semantics has been successfully employed to analyse non-normal modal logics \cite{chellas1975basic}, as their semantics cannot be defined in terms of ordinary relational Kripke models. In neighbourhood models, every world $x$ is equipped with a set of neighbourhoods $N(x)$ and each $\alpha\in N(x)$ is a non-empty set of worlds. The general intuition is that each neighbourhood $\alpha \in N(x)$ represents a state of information/knowledge/affair to be taken into account in evaluating the truth of modal formulas at world $x$. In the conditional context, neighbourhood inclusion can be understood as follows:
if $\alpha, \beta \in N(x)$ and $\beta\subseteq \alpha$, then worlds in $\beta$ are at least as plausible/normal as worlds in $\alpha$.
It turns out that neighbourhood models provide a very natural semantics for $\mathbb{PCL}$. This semantics abstracts away from the details of the preference relations and, moreover, the definition of the conditional can be seen as a simple modification of the \emph{strict implication} operator, avoiding the unwanted properties of strengthening, transitivity and contraposition.
The strict implication demands that each $\alpha\in N(x)$ ``validates" the implication $A \to B$. The truth condition for the conditional only requires that, for all $\alpha\in N(x)$ containing an $A$ -world, there is a smaller neighbourhood $\beta\subseteq \alpha$ non-vacuously validating the implication $A \to B$, where non-vacuously means that $\beta$ must contain an $A$-world.
No further properties or structure of neighbourhood models are needed.
The use of neighbourhood models for analysing conditional logics is not a novelty: Lewis' sphere models for counterfactual logics belong to this approach. However, the crucial property of sphere models is that neighbourhoods (e.g. spheres) are \emph{nested}: given $\alpha, \beta \in N(x)$, either $\alpha\subseteq \beta$ or $\beta\subseteq \alpha$. This property entails that worlds belonging to $\bigcup N(x)$ can always be compared according to their level of normality\footnote{In models where minimal spheres always exist, the nesting property is equivalent to the existence of a ranking function $r_x$ defined for every world $ x $. The function $r_x(y)$ evaluates the level of normality of each world $y\in W_x$ with respect to $x$.}. This assumption is controversial in some contexts such as belief revision \cite{girard2007onions} and non-monotonic reasoning.
The logic $\mathbb{PCL}$ is more general: its neighbourhood models do not assume nesting of neighbourhoods, whence worlds in $\bigcup N(x)$ are not necessarily comparable with respect to their level of normality.
Although $\mathbb{PCL}$ is the basic system we consider in this paper, stronger systems can be obtained by assuming properties of neighbourhood models: normality, total reflexivity, weak centering, centering, uniformity and absoluteness. These conditions are analogous to the ones considered by Lewis for sphere models, and give rise to a total of 15 preferential systems.
\
The Hilbert axiomatization of $\mathbb{PCL}$ is given by adding to
the smallest conditional logic CK three axioms, namely, (ID), (CM) and (OR). The family of preferential logics is obtained by adding axioms in correspondence with the semantic properties mentioned above.
In sharp contrast with the simplicity of its Hilbert axiomatization, the proof theory of $\mathbb{PCL}$ and its extensions is largely unexplored. To the best of our knowledge, the only existing proof systems for $\mathbb{PCL}$ can be found in \cite{giordano2009tableau,schroder2010optimal} and, more recently, in \cite{nalon2018resolution,girlando2019uniform}. All of them are based on preferential semantics, and the last two cover only logic $\mathbb{PCL}$ and none of the extensions\footnote{For a more detailed discussion on the literature, refer to section 8.}.
Building on the neighbourhood semantics, we define labelled sequent calculi for $\mathbb{PCL}$ and its extensions\footnote{Some results of this work have been preliminarily presented in \cite{negri2015analytic}.}. The calculi make use of both world and neighbourhood labels to encode the relevant features of the semantics into the syntax. All calculi are \emph{standard}, meaning that each connective is handled by dual left and right rules, justified through a clear meaning explanation.
As a special feature, a new operator, $\mid$, is introduced for translating the meaning explanation of the conditional operator into sequent rules.
Moreover, the calculi are \emph{modular}, to the extent that logical rules are the same for all systems, while relational rules for neighbourhood and world labels are added to define calculi for extensions.
We do not consider explicitly the family of Lewis' logics, for which several internal and labelled calculi exist. Nonetheless the present framework can be adapted to cover these systems as well.
In addition to
simplicity and modularity,
the calculi
have
strong proof theoretical properties, such as height-preserving invertibility and admissibility of contraction and cut.
We show that the calculi are terminating under the adoption of a uniform proof search strategy, obtaining thereby a decision procedure for (almost) all logics of the $\mathbb{PCL}$ family. However, since the logics in this family belong to different complexity classes \cite{halpern1994complexity}, the uniform strategy will be unavoidably far from optimal.
We also prove semantic completeness of the calculi: from a failed proof of a formula it is possible to extract a \emph{finite} neighbourhood countermodel, built from a branch of the attempted proof.
This result provides a constructive proof of the finite model property for each logic of the $\mathbb{PCL}$ family with respect to the neighbourhood semantics.
\
The paper is organised as follows: In Section 2, the family of $\mathbb{PCL}$ logics and the neighbourhood semantics is introduced. Section 3 shows completeness of $\mathbb{PCL}$ and its extensions with respect to the neighbourhood semantics. In Section 4, we introduce labelled sequent calculi for the family of preferential logics. In Section 5 we prove the main syntactic properties of the calculi, including admissibility of cut, thereby obtaining a syntactic proof of their completeness.
In Section 6, a decision procedure for the logics is presented. In Section 7, we present a proof of semantic completeness for the calculi, by extracting a countermodel from failed proof search. Finally, Section 8 discusses some related work.
\section{Preferential logics and neighbourhood semantics}
In this section we introduce the family of preferential conditional logics.
\begin{definition}
The set of well formed formulas of $ \mathbb{PCL} $ and its extensions is defined by means of the following grammar, for $ p \in Atm $ a propositional variable and $ A, B \in \mathcal{L} $:
$$ \mathcal{L} ::= p \mid \bot \mid A \wedge B \mid A\lor B \mid A \rightarrow B \mid A > B .$$
\end{definition}
Preferential conditional logic $ \mathbb{PCL} $ is the basic system of the family; extensions of $ \mathbb{PCL} $ are obtained by adding to the basic system the axioms for \textit{normality}, \textit{total reflexivity}, \textit{weak centering}, \textit{centering}, \textit{uniformity} and \textit{absoluteness}. The resulting 15 logics are represented in the lattice of Figure \ref{fig:logics}.
\begin{figure}
\caption{The preferential family of conditional logics}
\label{fig:logics}
\end{figure}
The axiomatic presentation of $ \mathbb{PCL} $ and its extensions is given in Figure \ref{axiom_table}. Propositional axioms and rules are standard. Given a logic $ \mathbb{K} $ of the preferential family, we write $ \derivable{K} F $ for derivability of a formula $ F $ in its axiom system.
\begin{figure}
\caption{Axiomatization of $ \mathbb{PCL} $ and its extensions}
\label{axiom_table}
\end{figure}
The following proposition contains some theorems of $\mathbb{PCL}$ that will be (tacitly) used in what follows. The first four are well-known axioms, respectively called (RT), (MOD), (DT), and (CSO) in the literature. Axiom (DT) is equivalent to (OR), and from (DT) axiom (RT) is derivable. Axiom (CSO) is equivalent to (CM)+(RT). Derivations of the last three formulas are given in \cite{kraus1990nonmonotonic}.
\begin{proposition} \label{lemma:derivable_ax}
The following formulas are derivable in $\mathbb{PCL} $:
\begin{enumerate}
\item (RT) ~ $ (A > B) \land ((A\land B) > C) \rightarrow (A > C) $;
\item (MOD) ~ $(A > \bot) \to (B > \lnot A)$;
\item (DT) ~ $((A \land B) > C) \to (A > (B \to C))$;
\item (CSO) ~ $ ((A > B) \land (B > A)) \to ((A > C) \to (B> C))$;
\item $ ((A\lor B) > A) \wedge ((B\lor C) >B) \rightarrow ((A\lor C) >A) $;
\item $ ((A\lor B)> A) \wedge ((B\lor C)>B ) \rightarrow (A> (C\rightarrow B)) $;
\item $ ((A\lor B)> A) \wedge (B> C) \rightarrow ( A> (B\rightarrow C)) $.
\end{enumerate}
\end{proposition}
The semantics of $ \mathbb{PCL} $ is usually defined in terms of preferential models, as explained in the Introduction. Here we define an alternative semantics in terms of neighbourhood models.
\begin{definition}\label{def:neighbourhood_models}
A \textit{neighbourhood model} is a structure $ \mathfrak{M} = \langle W, N, \llbracket \ \rrbracket \rangle $
where:
\begin{itemize}
\item $ W $ is a non empty set of elements, the \textit{possible worlds};
\item $ N: W \rightarrow \mathcal{P}(\mathcal{P}(W) ) $ is the \textit{neighbourhood function}, which associates to each $ x \in W $ a set $ N(x)\subseteq \mathcal{P}(W) $, called a \textit{system of neighbourhood};
\item $\llbracket \ \rrbracket: \mbox{\it Atm} \rightarrow \mathcal{P}(W) $ is the propositional evaluation.
\end{itemize}
The elements of $ N(x) $ are called \textit{neighbourhoods}, and are denoted by lowercase Greek letters.
For all $ x \in W $, we assume the neighbourhood function to satisfy the property of \textit{non-emptiness}: For each $ \alpha \in N(x) $, $ \alpha $ is non-empty.
\end{definition}
\begin{notation}
The symbol $ \Vdash $ is used to denote the forcing (or truth) of a formula at a world of a model: $ x \Vdash B $ means that $B$ is true at $x$. Given a neighbourhood $\alpha$,
we use $ \alpha \Vdash^\exists B $ as a shorthand for \textit{there exists $ y \in \alpha $ such that $ y \mathbb{V}dash B $},
and $ \alpha \Vdash^\forall B $ as a shorthand for \textit{for all $ y \in \alpha $ it holds that $ y \mathbb{V}dash B $}.
\end{notation}
\noindent Before giving its formal definition, we give an intuitive motivation of the truth condition for the conditional operator in neighbourhood semantics. Suppose we want to define a conditional operator more fine-grained than the material implication, and suitable for an hypothetical, non-monotonic, or plausible interpretation. As a first attempt, we can define a kind of strict implication, in analogy to the corresponding notion in normal modal logic:
\begin{quote}
(1) $x \Vdash A > B$ \textit{iff for all $\alpha \in N(x)$ it holds $\alpha \Vdash^\forall A \to B$}.
\end{quote}
However, this definition is not suitable for the conditional operator, as it would satisfy the unwanted properties of strengthening (or monotonicity), transitivity, and contraposition.
An equivalent, slightly redundant, formulation of (1) consists in a restriction to neighbourhoods that contain $A$-worlds:
\begin{quote}
(2) $x \Vdash A > B$ \textit{iff for all $\alpha \in N(x)$, if $\alpha \Vdash^\exists A$ then $\alpha \Vdash^\forall A \to B$}.
\end{quote}
Thus, for every $\alpha \in N(x)$, if $\alpha$ contains an $A$-world, we require that $\alpha \Vdash^\forall A \to B$. The latter condition is too strong: in the intended interpretation, and in particular in the non-monotonic reading, the conditional should tolerate exceptions. Thus, instead of requiring $A \to B$ to be verified by the \emph{whole} $\alpha$, we only demand the formula to be verified by a sub-neighbourhood $\beta$ of $\alpha$.
\begin{quote}
(3) $x \Vdash A > B$ \textit{iff for all $\alpha \in N(x)$, if $\alpha \Vdash^\exists A$ then there exists a $\beta \in N(x)$, with $\beta\subseteq \alpha$ such that $\beta \Vdash^\forall A \to B$.}
\end{quote}
Here, however, there is still a problem: the condition on $\beta$ could be vacuously satisfied by choosing a $\beta$ that does not contain any $A$-world (at least whenever $A\not=\top$). To rule out this case, we modify (3) as follows:
\begin{quote}
(4) $x \Vdash A > B$ \textit{iff for all $\alpha \in N(x)$, if $\alpha \Vdash^\exists A$ then there exists $\beta \in N(x)$, with $\beta\subseteq \alpha$ such that $\beta \Vdash^\exists A$ and $\beta \Vdash^\forall A \to B$.}
\end{quote}
Definition (4) is the truth condition for the conditional that is adequate to formalise the logics of the preferential family.
\begin{definition}
For any formula $F\in \mathcal{L}$ and $x\in W$, truth of a formula in a model, in symbols $ x \Vdash F $, is defined as follows. For atoms $p$, $ x \Vdash p $ if $x\in \llbracket p\rrbracket$; truth conditions for Boolean combinations are the standard ones; the truth condition for the conditional operator is (4).
We say that a formula $F$ is valid in $ \mathfrak{M}$ if for all $ x\in W$, $ x \Vdash F $. We say that a formula is valid in the class of all neighbourhood models (resp. in a class of models $\cal K$)
if for all neighbourhood models $\mathfrak{M}$ (resp. all models in $\cal K$) it holds that $F$ is valid in $\mathfrak{M}$; this will be denoted by $\vDash_{\mathcal{N}} F$ (resp. $\vDash_{\mathcal{K}} F$).
\end{definition}
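To make the truth condition (4) concrete, the following small Python fragment (a toy illustration of ours, not part of the formal development) evaluates it on a finite neighbourhood model; the three-world model at the end forces $A > B$ at $x$ but not $(A \wedge C) > B$, exhibiting the failure of strengthening discussed above.
\begin{verbatim}
# Truth condition (4) on a finite neighbourhood model.
def forces(w, f, val):
    op = f[0]
    if op == 'atom':
        return w in val[f[1]]
    if op == 'and':
        return forces(w, f[1], val) and forces(w, f[2], val)
    if op == 'imp':
        return (not forces(w, f[1], val)) or forces(w, f[2], val)

def conditional(x, A, B, N, val):
    """x forces A > B according to condition (4)."""
    for alpha in N[x]:
        if any(forces(y, A, val) for y in alpha):                 # alpha ||-E A
            witness = any(
                beta <= alpha
                and any(forces(y, A, val) for y in beta)          # beta ||-E A
                and all(forces(y, ('imp', A, B), val) for y in beta)
                for beta in N[x])
            if not witness:
                return False
    return True

# y forces A, B;  z forces A, C but not B;  N(x) = { {y, z}, {y} }.
val = {'A': {'y', 'z'}, 'B': {'y'}, 'C': {'z'}}
N = {'x': [frozenset({'y', 'z'}), frozenset({'y'})], 'y': [], 'z': []}
A, B, C = ('atom', 'A'), ('atom', 'B'), ('atom', 'C')
print(conditional('x', A, B, N, val))               # True
print(conditional('x', ('and', A, C), B, N, val))   # False: strengthening fails
\end{verbatim}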
\begin{definition}\label{def:extensions_neigh}
Extensions of the class of neighbourhood models are defined as follows:
\begin{itemize}[noitemsep]
\item \textit{Normality}: For all $ x \in W$ it holds that $ N(x) \neq \emptyset $;
\item \textit{Total reflexivity}: For all $ x \in W$ there exists $\alpha \in N(x) $ such that $x \in \alpha$;
\item \textit{Weak centering}: For all $ x \in W$ and $\alpha \in N(x) $, $ x \in \alpha $;
\item \textit{Centering}: For all $ x \in W $ and $ \alpha \in N(x) $, $x \in \alpha$ and $\{x\} \in N(x) $;
\item \textit{Uniformity}\footnote{The property of uniformity as we have defined it is sometimes called \emph{local uniformity}, to distinguish it from the following property, called \emph{uniformity}: for all $ x, y \in W $, $ \bigcup \, N(x) = \bigcup \, N(y)$. However, the set of valid formulas in the class of models satisfying uniformity and local uniformity is the same. A similar remark applies to the property of absoluteness. }: For all $ x \in W $ it holds that if $ y \in \alpha $ and $ \alpha \in N(x) $, then $ \bigcup \, N(x) = \bigcup \, N(y)$.
\item \textit{Absoluteness}: For all $ x \in W$ it holds that if $ y \in \alpha $ and $ \alpha \in N(x) $, then $ N(x) = N(y)$.
\end{itemize}
The extensions are respectively denoted by $\mathfrak{M}_{\mathsf{N}} $, $ \mathfrak{M}_{\mathsf{T}} $, $ \mathfrak{M}_{\mathsf{W}} $, $ \mathfrak{M}_{\mathsf{C}} $, $ \mathfrak{M}_{\mathsf{U}} $ and $ \mathfrak{M}_{\mathsf{A}} $. As happens with axioms, semantic conditions can be combined, yielding 15 classes of models: so $ \mathfrak{M}_{\mathsf{NT}} $ is a neighbourhood model with normality and total reflexivity, $ \mathfrak{M}_{\mathsf{WA}} $ is a neighbourhood model with weak centering and absoluteness, and so on.
\end{definition}
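In the same illustrative spirit as the sketch above, the first four frame conditions of this definition can be checked mechanically on a finite model presented as a map from worlds to lists of neighbourhoods (uniformity and absoluteness can be handled analogously).
\begin{verbatim}
# Frame conditions on a finite model N : world -> list of frozensets.
def normality(N):
    return all(len(N[x]) > 0 for x in N)

def total_reflexivity(N):
    return all(any(x in alpha for alpha in N[x]) for x in N)

def weak_centering(N):
    return all(x in alpha for x in N for alpha in N[x])

def centering(N):
    return weak_centering(N) and all(frozenset({x}) in N[x] for x in N)

N = {'x': [frozenset({'x', 'y'}), frozenset({'x'})], 'y': [frozenset({'y'})]}
print(normality(N), total_reflexivity(N), weak_centering(N), centering(N))
\end{verbatim}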
Not all the extensions of the above table are proper conditional logics. We observe that
\begin{itemize}
\item[1.] $\mathbb{PCA} $ collapses to classical logic;
\item[2.] $\mathbb{PWA} $ collapses to {\bf S5}.
\end{itemize}
We provide a proof of the above through the semantics, obtaining a collapse of models. This implies the collapse of logical systems, once completeness has been proved.
For 1, we prove that $N(x)=\{\{x\}\}$. Let $y\in\alpha$ and $\alpha\in N(x)$. By absoluteness, $N(x)=N(y)$. By centering, $\{x\}\in N(x)$ and
$\{y\}\in N(y)$, so that $\{y\}\in N(x)$ and $x\in \{y\}$, whence $x=y$. It follows that there is only one possible world, and the forcing condition of the conditional collapses to the one of material implication.
For 2, we prove that $N(x)=\{S\}$, where $S$ is any set of worlds to which $x$ belongs. Let $\alpha, \beta\in N(x)$. We show that
$\alpha=\beta$. Let $y\in\alpha$; then, by absoluteness $N(x)=N(y)$, so $\beta\in N(y)$, and by centering $y\in \beta$. We conclude $\alpha\subseteq \beta$. The other inclusion is proved in the same way. Moreover, from the fact that for any $y\in S$, $N(y)=\{S\}$ it follows that all the possible worlds are equivalent: thus, the forcing condition of a conditional $ A>B $ reduces to the truth condition of the strict implication $ \Box (A \rightarrow B) $.
\
\noindent By adding to the axiom system of $ \mathbb{PCL} $ the axiom
\begin{quotation}
(CV) ~ $ ((A>C) \wedge \lnot (A>\lnot B)) \rightarrow ((A\wedge B)>C) $
\end{quotation}
we obtain logic $ \mathbb{V} $, which is the basic system of Lewis' counterfactual logic. By adding the axiom to the other preferential logics, we get the family of counterfactual logics, $ \mathbb{V} $ and extensions, introduced in \cite{lewis1973}. Lewis defined the semantics of counterfactual logics in terms of sphere models; and sphere models for $ \mathbb{V} $ can be obtained by adding to neighbourhood models the following condition:
\begin{quotation}
\textit{Nesting}: For all $ \alpha,\beta \in N(x) $, either $ \alpha \subseteq \beta $ or $ \beta \subseteq \alpha $.
\end{quotation}
Thus, the family of Lewis' logics is by all means an extension of the preferential systems, and the proof theoretic and model theoretic methods exposed in the following sections can be (more or less modularly) extended to cover Lewis' logics.
\section{Soundness and completeness of neighbourhood models}
We now prove soundness and completeness of the classes of models with respect to the axioms of $ \mathbb{PCL} $ and its extensions.
\subsection{Soundness}
\begin{theorem}[Soundness]
\label{ax_soundness}
For $ F \in \mathcal{L} $, a preferential logic $ \mathbb{P} $ with its axiom system, and $ \mathcal{P} $ the corresponding class of neighbourhood models, it holds that if $\derivable{P} F$, then $ \vDash_{\mathcal{P}} F $.
\end{theorem}
\begin{proof}
The proof consists in showing that the axioms are valid, and that the inference rules preserve validity. By means of example, we prove soundness of axioms (CM), (OR) and (U$_1$).
\noindent (CM) \, $ ((A>B) \wedge (A>C)) \rightarrow ((A\wedge B)>C) $. Consider an arbitrary neighbourhood model $ \mathfrak{M}$ and an arbitrary world $x$, and suppose that $x$ forces the antecedent of the implication. We show that $x$ forces the succedent. The assumption means that:
\begin{itemize}[noitemsep]
\item [1.] $ \mathfrak{M}, x \Vdash A>B $, i.e., if there exists $ \alpha \in N(x) $ such that $ \alpha \Vdash^\exists A $, then there exists
$ \beta \subseteq \alpha $ such that $ \beta \Vdash^\exists A $ and $ \beta \Vdash^\forall A\rightarrow B $;
\item [2.] $\mathfrak{M}, x \Vdash A>C $, i.e., if there exists $ \alpha \in N(x) $ such that $ \alpha \Vdash^\exists A $, then there exists
$ \gamma \subseteq \alpha $ such that $ \gamma \Vdash^\exists A $ and $ \gamma \Vdash^\forall A\rightarrow C $.
\end{itemize}
Suppose that there is $\alpha\in N(x) $ such that $ \alpha \Vdash^\exists A\wedge B $; in particular, $ \alpha \Vdash^\exists A$, so by 1 we have that there is $ \beta \subseteq \alpha $ such that $ \beta \Vdash^\exists A $ and $ \beta \Vdash^\forall A\rightarrow B $. By 2 applied to $\beta$ (using $ \beta \Vdash^\exists A $), there is
$\gamma \subseteq \beta $ such that $ \gamma \Vdash^\exists A $ and $ \gamma \Vdash^\forall A\rightarrow C $. Since $\gamma\subseteq \beta $ and $\gamma\Vdash^\exists A$, by $\beta \Vdash^\forall A\rightarrow B $ we get $\gamma\Vdash^\exists A\wedge B $. From $ \gamma \Vdash^\forall A\rightarrow C $, a fortiori we have $ \gamma \Vdash^\forall (A\wedge B)\rightarrow C $, so we have proved that $x\Vdash (A\wedge B)> C$.
\noindent (OR) \, $( (A>C) \mathsf{w}edge (B>C)) \rightarrow ((A\lor B) >C) $. Suppose there is a neighbourhood model which satisfies the antecedent, i.e.
\begin{itemize}[noitemsep]
\item [1.] $ \mathfrak{M}_{\mathcal{G}}Neigh, x \mathbb{V}dash A>C $, i.e., if there exists $ \alpha \in N(x) $ such that $ \alpha \Vdash^\exists A $, then there exists $ \beta \subseteq \alpha $ such that $ \beta \Vdash^\exists A $ and $ \beta \Vdash^\forall A\rightarrow C $;
\item [2.] $ \mathfrak{M}_{\mathcal{G}}Neigh, x \mathbb{V}dash B>C $, i.e., if there exists $ \alpha'\in N(x) $ such that $ \alpha' \Vdash^\exists B $, then there exists $ \gamma \subseteq \alpha' $ such that $ \gamma \Vdash^\exists B $ and $ \gamma \Vdash^\forall B\rightarrow C $.
\end{itemize}
Our claim is $x\mathbb{V}dash( A\lor B)> C$. Assume there is $\alpha'' \in N(x) $ such that $ \alpha'' \Vdash^\exists A\lor B $. Then either $ \alpha'' \Vdash^\exists A $ or $ \alpha'' \Vdash^\exists B $; we treat the first case, the second being symmetric. By 1 there is $ \beta \subseteq \alpha'' $ such that $ \beta \Vdash^\exists A $ and $ \beta \Vdash^\forall A\rightarrow C $. If $\beta \nVdash^\exists B$, then every world of $\beta$ forcing $A\lor B$ forces $A$, and hence $C$; thus $\beta \Vdash^\forall (A\lor B)\rightarrow C$ and $\beta$ itself is the required witness. Otherwise $\beta \Vdash^\exists B$, and from 2 (with $\beta$ in place of $\alpha'$) we obtain that there is $\gamma \subseteq \beta$ such that
$\gamma\Vdash^\exists B$ (and a fortiori $\gamma\Vdash^\exists A\lor B$) and $ \gamma \Vdash^\forall B\rightarrow C $. Since $\gamma\subseteq \beta$ and $\beta\Vdash^\forall A\rightarrow C$, also $\gamma\Vdash^\forall A\rightarrow C$, and therefore $\gamma\Vdash^\forall (A\lor B)\rightarrow C$. In both cases we have found a neighbourhood included in $\alpha''$ which forces $A\lor B$ existentially and $(A\lor B)\rightarrow C$ universally, so we conclude $x\mathbb{V}dash (A\lor B)> C$.
\noindent (U$ _1 $) \, $ (\lnot A>\bot) \rightarrow (\lnot (\lnot A >\bot) >\bot )$. Consider a neighbourhood model with local uniformity and a world $x$ forcing the antecedent, i.e.
\begin{itemize}[noitemsep]
\item [1.] $ \mathfrak{M}_{\mathcal{G}}ex{U}, x \mathbb{V}dash \lnot A> \bot $, i.e., if there exists $ \alpha \in N(x) $ such that $ \alpha \Vdash^\exists \lnot A $, then there exists $ \beta \subseteq \alpha $ such that $ \beta \Vdash^\exists \lnot A $ and $ \beta \Vdash^\forall \lnot A\rightarrow \bot $.
\end{itemize}
We claim that if there exists $ \alpha' \in N(x) $ such that $ \alpha' \Vdash^\exists \lnot (\lnot A>\bot )$, then there is $\gamma\subseteq \alpha' $ such that $ \gamma\Vdash^\exists \lnot (\lnot A>\bot )$ and $ \gamma\Vdash^\forall \lnot \lnot (\lnot A>\bot )$. Since these two conditions are jointly contradictory, it suffices to prove that the existence of such an $\alpha'$ already leads to a contradiction: then the antecedent of the truth condition is never satisfied, and $x$ forces the succedent vacuously.
Assume $ \alpha' \Vdash^\exists \lnot (\lnot A>\bot )$, i.e. there is $y\in \alpha'$ such that $y \nVdash\lnot A >\bot$. Then there is $\delta\in N(y)$ such that $\delta\Vdash^\exists\lnot A$, i.e. there is $z\in\delta$ such that $z\mathbb{V}dash \lnot A$. Since $y\in\bigcup N(x)$ and $ \bigcup N(x) = \bigcup N(y) $ by the condition of uniformity, there is $\alpha''\in N(x)$ such that $z\in \alpha''$, whence $\alpha''\Vdash^\exists \lnot A$. By 1, there is $ \beta \subseteq \alpha'' $ such that $ \beta \Vdash^\exists \lnot A $ and $ \beta \Vdash^\forall \lnot A\rightarrow \bot $, which is impossible: so we have the desired contradiction.
\end{proof}
\subsection{Completeness of $ \mathbb{PCL} $}
\noindent We here prove the completeness of $\mathbb{PCL}$ with respect to the neighbourhood semantics (extensions are treated in Subsection 3.3).
Generally speaking, proving completeness for the axiom systems of $ \mathbb{PCL} $ and its extensions seems to be quite an arduous task. Burgess \cite{burgess1981quick} was the first to provide a completeness proof for $ \mathbb{PCL} $, using preferential models. His proof in the mentioned paper, condensed in a few pages, is quite intricate and not so easy to grasp. In his thesis, Veltman \cite{veltman1985logic} gave a proof of strong completeness of $\mathbb{PCL}$ with respect to preferential semantics. This result is far from elementary. In \cite{halpern1994complexity} Halpern and Friedman sketched a completeness proof for $ \mathbb{PCL} $, claiming the proof to be similar to Burgess' proof. Moreover, they state that the proof can cover extensions of $ \mathbb{PCL} $, but the proof for extensions is postponed to a full paper which never appeared.
More recently, in \cite{giordano2009tableau}, the completeness of the axiomatization of $ \mathbb{PCL} $ and its extensions is proved with respect to classes of preferential models, assuming the Limit assumption.
For Lewis' sphere models, a direct completeness result was given by Lewis in \cite{lewis1973}: he proved that the axioms of $ \mathbb{V} $ and extensions are sound and complete with respect to sphere models. However, the proof heavily relies on the connective of comparative plausibility, which is definable in $ \mathbb{V} $ but not in $\mathbb{PCL}$.
To the best of our knowledge, no completeness result is known for the axioms of $ \mathbb{PCL} $ and its extensions with respect to neighbourhood models.
The proofs in the rest of this section cover $\mathbb{PCL}$ and all its extensions, except those containing weak centering (and not containing centering). The proofs make use of
some notions and lemmas from \cite{giordano2009tableau}.
\
We follow the standard strategy: in order to prove the completeness of an axiom system $\vdash_{\mathcal{H}}iom{K}$ with respect to a class of models $\mathfrak{M}_{\mathcal{G}}ex{K}$, we define a model $\mathfrak{M}ex{K}$ and we prove that:
\begin{enumerate}
\item $\mathfrak{M}ex{K}$ is \emph{canonical}, meaning that for any formula $F\in \mathcal{L}$, $\derivable{K} F$ if and only if $ F $ is valid in $ \mathfrak{M}ex{K} $;
\item $\mathfrak{M}ex{K} \in \mathfrak{M}_{\mathcal{G}}ex{K}$.
\end{enumerate}
From these two facts the completeness of $\vdash_{\mathcal{H}}iom{K}$ with respect the class $\mathfrak{M}_{\mathcal{G}}ex{K}$ immediately follows. For $\mathbb{PCL}$ the class $\mathfrak{M}_{\mathcal{G}}ex{K}$ will be the class of all neighbourhood models; for extensions, $\mathfrak{M}_{\mathcal{G}}ex{K}$ will be the class of models extended with the properties detailed in Definition \ref{def:extensions_neigh}.
As usual, the model is built by considering \emph{maximal consistent sets} of formulas. We start by recalling standard definitions and properties. The notion of (in-)consistency and the subsequent definitions and lemmas on maximal consistent sets are relative to some axiom system $\vdash_{\mathcal{H}}iom{K}$.
\begin{definition}\label{def:maxc}
Given a set of formulas $ S \subseteq \mathcal{L} $, we say that $ S $ is \emph{inconsistent} if it has a finite subset $ \{B_1, \dots , B_n \} \subseteq S $ such that $ \derivable{K} (B_1 \wedge \dots \wedge B_n)\rightarrow \bot $. We say that $ S $ is \emph{consistent} if it is not inconsistent. We say that $ S $ is \emph{maximal consistent} if $ S $ is consistent and for any formula $ A \notin S $, $ S \cup \{A\} $ is inconsistent. We denote by $X,Y,Z,\dots$ the maximal consistent sets and by $\mathsf{MAX}$ the set of all maximal consistent sets over a fixed language.
\end{definition}
We assume all standard properties of $\mathsf{MAX}$ sets, in particular the following:
\begin{lemma}\label{lindem} ~
\begin{itemize}
\item [$a)$] For $ S $ a set of formulas, $S$ is consistent if and only if there exists $Z\in \mathsf{MAX}$ such that $S\subseteq Z$.
\item [$b)$] For $ A $ a formula, $\derivable{K} A$ if and only if for all $ Z\in \mathsf{MAX}$, $A\in Z$.
\end{itemize}
\end{lemma}
\begin{proof}
The direction \emph{only if} of $a)$ is the standard \emph{Lindenbaum lemma}, proved by means of an inductive construction. Property $b)$ follows from $a)$ by taking $S = \{\lnot A\}$, using contraposition and the completeness of every $Z\in \mathsf{MAX}$ (either $A\in Z$ or $\neg A\in Z$).
\end{proof}
We will (shortly) define the worlds of the canonical model $ \mathfrak{M}ex{K} $ as the set $\{(X, A) \mid X\in \mathsf{MAX} \ \mbox{and} \ A\in \mathcal{L} \ \mbox{and} \ A\in X \}$.
Thanks to Lemma \ref{lindem}, in order to prove that $ \mathfrak{M}ex{K} $ is indeed \emph{canonical}, we will only have to show that for any formula $F\in\mathcal{L}$ and for any world $(X, A)$, it holds that:
\begin{quotation}
$(\textit{Truth Lemma})$ ~ $F\in X$ if and only if $(X, A) \mathbb{V}dash F$.
\end{quotation}
It is easy to see that canonicity of $ \mathfrak{M}ex{K} $ follows: $A$ is valid in $ \mathfrak{M}ex{K} $ if and only if for all $ Z\in \mathsf{MAX}$, $A\in Z$ (by the Truth Lemma and the definition of the worlds), if and only if $\derivable{K} A$ (by Lemma \ref{lindem}).
Before providing the canonical model construction, we introduce some additional definitions and lemmas.
\begin{definition}
Let $ X \in \mathsf{MAX} $. The set of \emph{conditional consequences} of a formula $B \in \mathcal{L}$ is defined as:
$ \mathsf{c}s{X}{B} = \{C \in \mathcal{L} \mid B>C \in X \} $.
\end{definition}
\begin{lemma}\label{lemma:maximal_consequences}
The following hold:
\begin{enumerate}
\item $ B \in \mathsf{c}s{X}{B} $;
\item If $\mathsf{c}s{X}{B} \subseteq Y $ and $B>C \in X $, then $ C \in Y$;
\item $B>C \in X$ iff for all $ Y$, $\mathsf{c}s{X}{B}\subseteq Y $ implies $C \in Y$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove only direction $\Leftarrow$ of statement 3. By hypothesis, there is no $Z \in \mathsf{MAX} $ such that $ \mathsf{c}s{X}{B} \cup \{\lnot C \}\subseteq Z$. By lemma \ref{lindem}, $\mathsf{c}s{X}{B} \cup \{\lnot C \}$ is inconsistent, and there must be some $ D_1, \dots, D_n \in \mathsf{c}s{X}{B}$ such that $\derivable{K} (D_1 \mathsf{w}edge \dots \mathsf{w}edge D_n) \rightarrow C$. Thus, by (RCK) and (R-And), $\derivable{K} ((B>D_1) \mathsf{w}edge \dots \mathsf{w}edge (B>D_n)) \rightarrow (B>C)$. Since $ (B>D_1), \dots , (B>D_n) \in X$, also $B>C \in X $.
\end{proof}
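\noindent As a quick illustration of the lemma (the formulas here are ours and only by way of example): suppose $ B>C \in X $ and $ B>(C\rightarrow D) \in X $. Then $ C, C\rightarrow D \in \mathsf{c}s{X}{B} $, so every $ Y \in \mathsf{MAX} $ with $ \mathsf{c}s{X}{B} \subseteq Y $ contains $ C $ and $ C\rightarrow D $, hence also $ D $ (maximal consistent sets are closed under modus ponens); by item 3 of the lemma, $ B>D \in X $.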
\begin{definition}\label{def:lesser_equal}
Let $X \in \mathsf{MAX}$, $A, B \in \mathcal{L} $. Define $A \preccurlyeqer{X} B $ if $ (A \lor B) > A \in X$.
\end{definition}
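\noindent For instance (a small derived fact, stated here for later reference), for any $ X \in \mathsf{MAX} $ and any formulas $ A,B $ we have $ A\lor B \preccurlyeqer{X} A $: by (ID), $ (A\lor B)>(A\lor B) \in X $ and $ A>A \in X $; by (RCK), $ A>(A\lor B) \in X $; hence by (OR), $ ((A\lor B)\lor A)>(A\lor B) \in X $, that is $ A\lor B \preccurlyeqer{X} A $. Facts of this kind are used implicitly in the proof of the Truth Lemma below.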
\begin{proposition} \label{prop:ref_tr}
The relation $ \preccurlyeqer{X} $ is reflexive and transitive.
\end{proposition}
\begin{proof}
Reflexivity follows from axiom (ID) and (OR).
Transitivity immediately follows from 1 of Lemma \ref{lemma:derivable_ax}.
\end{proof}
\begin{proposition} [From \cite{giordano2009tableau}] \label{prop:relations}
If $ A \preccurlyeqer{X} B $, $ \mathsf{c}s{X}{A} \subseteq Y$ and $B \in Y $, then $ \mathsf{c}s{X}{B} \subseteq Y $.
\end{proposition}
\begin{proof}
Let $B>C \in X $ (thus, $ C \in \mathsf{c}s{X}{B}$). Our goal is to show that $ C \in Y $.
By hypothesis, we know that $ (A\lor B)>A \in X $. From Axiom 6 of Proposition \ref{lemma:derivable_ax} it follows that $A>(B\rightarrow C) \in X$. Thus, $B\rightarrow C \in \mathsf{c}s{X}{A} $; since by hypothesis $\mathsf{c}s{X}{A} \subseteq Y$, we get $B\rightarrow C \in Y $, and since also $B \in Y $, we conclude $ C \in Y$.
\end{proof}
\begin{proposition} \label{prop:relations_important}
If $ A \preccurlyeqer{X} B \preccurlyeqer{X} C $, $ \mathsf{c}s{X}{A} \subseteq Y$ and $C \in Y $, then $\mathsf{c}s{X}{B} \subseteq Y $.
\end{proposition}
\begin{proof}
By hypothesis, $(A \lor B) > A \in X $ and $(B\lor C)> B \in X$. By Axiom 5 of Proposition \ref{lemma:derivable_ax}, $A>(C\rightarrow B) \in X $. Thus, $ C\rightarrow B \in \mathsf{c}s{X}{A}$, and since $\mathsf{c}s{X}{A}\subseteq Y$, $C\rightarrow B \in Y $. Since $C \in Y $, we have $B \in Y $. Thus, we have that $A \preccurlyeqer{X} B $, $ \mathsf{c}s{X}{A} \subseteq Y$ and $B \in Y $. Applying Proposition \ref{prop:relations} we obtain $\mathsf{c}s{X}{B} \subseteq Y $.
\end{proof}
\noindent We can now proceed with the construction of the canonical model.
\begin{definition} \label{def:canonical_model}
For $ p $ propositional atom, let
\begin{itemize}
\item $\mathsf{w}orldsC = \{ (X,A) \mid X \in \mathsf{MAX} \textit{ and } A \in \mathcal{L} \textit{ and } A \in X \} $;
\item $ \vDash_{\mathcal{N}}C(p) = \{ (X,A) \in \mathsf{w}orldsC \mid p \in X \} $.
\end{itemize}
For $ (X,A), (Y,B) \in \mathsf{w}orldsC $, we define a neighbourhood as:
$$
\spheres{(X,A)}{(Y,B)} = \{ (Z,C) \in \mathsf{w}orldsC \mid \mathsf{c}s{X}{C} \subseteq Z \textit{ and } C \preccurlyeqer{X} B \textit{ and } B \notin Z \} \cup \{(Y,B) \}
$$
Now for any $ (X,A) \in \mathsf{w}orldsC$, let the neighbourhood function be defined as :
$$
\neigh{(X,A)} = \{ \spheres{(X,A)}{(Y,B)} \mid \mathsf{c}s{X}{B} \subseteq Y \ \mbox{and} \ B\in\mathcal{L}\}
$$
Finally, let the \textit{canonical model} be defined as $ \mathfrak{M}ex{N} = \langle \mathsf{w}orldsC, \mathcal{N}, \vDash_{\mathcal{N}}C\rangle $.
\end{definition}
\begin{note}
Slightly abusing the notation, we write $ \neigh{X,A}$ instead of $\neigh{(X,A)}$. Moreover, since in $\spheres{(X,A)}{(Y,B)}$ the $A$ is not needed, we simplify the notation to $\spheres{X}{(Y,B)}$.
\end{note}
\begin{proposition} \label{prop:verification_canonical}
The canonical model $ \mathfrak{M}ex{N} $ is a neighbourhood model.
\end{proposition}
\begin{proof}
It suffices to verify that non-emptiness holds; since for all $ (Y, B ) \in \mathsf{w}orldsC $ it holds that $ (Y,B) \in \spheres{X}{(Y,B)} $, the property immediately follows.
\end{proof}
\begin{lemma}\label{lemma:neighbourhood_nesting}
If $ \spheres{X}{(Y,B)} \in \neigh{X,A} $ and $ (U,D) \in \spheres{X}{(Y,B)} $, then $ \spheres{X}{(U,D)} \subseteq \spheres{X}{(Y,B)} $.
\end{lemma}
\begin{proof}
We prove the non-trivial case in which $ (U,D) \neq (Y,B) $.
Let $ (V,E) \in \spheres{X}{(U,D)} $; we have to show that $ (V,E) \in \spheres{X}{(Y,B)} $. Thus, we have to show that $a)$ $ \mathsf{c}s{X}{E} \subseteq V $, $b)$ $ E \preccurlyeqer{X} B $ and $c)$ $ B \notin V $.
Again, we consider the non-trivial case in which $ (V,E) \neq (U,D) $.
Since $ (V,E) \in \spheres{X}{(U,D)} $ we have that $\mathsf{c}s{X}{E} \subseteq V $ (requirement $a$ is met), $ E \preccurlyeqer{X} D $ and $ D \notin V $. Since $ (U,D) \in \spheres{X}{(Y,B)} $ we have, among the others, that $ D\preccurlyeqer{X} B $. By transitivity of $ \preccurlyeqer{X} $ (Proposition \ref{prop:ref_tr}) it follows that $ E \preccurlyeqer{X} B $. Thus, $b)$ is satisfied. It remains to prove that $ B \notin V $. For the sake of contradiction, suppose that $ B \in V $. From this latter, $ E \preccurlyeqer{X} D \preccurlyeqer{X} B $ and $\mathsf{c}s{X}{E} \subseteq V $ it follows by Proposition \ref{prop:relations_important} that $ \mathsf{c}s{X}{D} \subseteq V$; thus, by Lemma \ref{lemma:maximal_consequences}, $ D \in V $, against previous assumption. Thus, requirement $c)$ is satisfied.
\end{proof}
\noindent We are now ready to prove the Truth Lemma.
\begin{lemma}[Truth Lemma] \label{lemma:truth_lemma}
Let $ F \in \mathcal{L}$ and $ X \in \mathsf{MAX} $. The following statements are equivalent:
\begin{itemize}
\item $ F \in X $;
\item $ \mathfrak{M}ex{N}, (X,A) \mathbb{V}dash F $.
\end{itemize}
\end{lemma}
\begin{proof}
As usual, the proof proceeds by mutual induction on the complexity of the formula $F$. We show only the case of $ F \equiv G>H $, tacitly assuming that the inductive hypothesis holds for the subformulas of $ F $, that is, for $G$ (and similarly for $H$) and any world $(U, B)$: $ G \in U $ iff $ \mathfrak{M}ex{N}, (U,B) \mathbb{V}dash G $.
Thus, we have to prove the equivalence of the following statements:
\begin{enumerate}
\item $ G>H \in X $;
\item For all $ \alpha \in \neigh{X,A} $, if $ \alpha \Vdash^\exists G $ then there exists $ \beta \in \neigh{X,A} $ with $ \beta \subseteq \alpha $, $ \beta \Vdash^\exists G $ and $ \beta \Vdash^\forall G\rightarrow H $.
\end{enumerate}
\noindent $ \mathbf{[1\Rightarrow 2]} $ Assume 1, and suppose that $ \alpha \in \neigh{X,A} $ and $ \alpha \Vdash^\exists G $, for $ \alpha = \spheres{X}{(Y,B)} $. We must show that there exists a $ \beta \in \neigh{X,A} $ such that $\beta \subseteq \alpha$, $\beta \Vdash^\exists G $ and $ \beta \Vdash^\forall G\rightarrow H $.
We distinguish two cases, depending on whether $ B\preccurlyeqer{X} G $ holds or not. Suppose it holds; then, we show that we can take $ \beta = \alpha = \spheres{X}{(Y,B)} $. Given the hypothesis we only have to prove that $ \alpha \Vdash^\forall G\rightarrow H $. To this aim let $ (U,D) \in \spheres{X}{(Y,B)} $ and $ G\in U $. From $ (U,D) \in \spheres{X}{(Y,B)} $ it follows that $ \mathsf{c}s{X}{D} \subseteq U $ and $ D \preccurlyeqer{X} B $. Since $ B\preccurlyeqer{X} G $, by transitivity of $\preccurlyeqer{X}$ we obtain $ D \preccurlyeqer{X} G $. Therefore we have: $ G \in U $, $ \mathsf{c}s{X}{D} \subseteq U $ and $ D\preccurlyeqer{X} G $, so that by Proposition \ref{prop:relations} we obtain $ \mathsf{c}s{X}{G} \subseteq U $. Since $ G>H \in X $ we have $ H \in \mathsf{c}s{X}{G} $, and finally $ H\in U $.
Now suppose that $ B\preccurlyeqer{X} G $ does not hold. Therefore $ \lnot ((B\lor G) > B) \in X$. Thus, $ \mathsf{c}s{X}{B\lor G} \cup \{ \lnot B\}$ is consistent, so that (by Lemma \ref{lindem}) there exists some $ Z \in \mathsf{MAX} $ such that $ \mathsf{c}s{X}{B\lor G} \cup \{ \lnot B\} \subseteq Z $ (whence $G\in Z$). Let us consider the world $(Z, B\lor G) $. Note that by construction $ \mathsf{c}s{X}{B\lor G} \subseteq Z $, and obviously $ (B\lor G) \preccurlyeqer{X} B $ and $ B \notin Z$. By Definition \ref{def:canonical_model}, $ (Z,B\lor G) \in \spheres{X}{(Y,B)}$. We show that we can take the required $ \beta = \spheres{X}{(Z,B\lor G)} $: since $\mathsf{c}s{X}{B\lor G} \subseteq Z $, we have $\spheres{X}{(Z,B\lor G)} \in \neigh{X,A} $; since $ (Z,B\lor G) \in \spheres{X}{(Y,B)}$, by Lemma \ref{lemma:neighbourhood_nesting} we have $\spheres{X}{(Z,B\lor G)} \subseteq \spheres{X}{(Y,B)}$; since $G\in Z$, we immediately have $\spheres{X}{(Z,B\lor G)} \Vdash^\exists G$. We still have to prove that $ \spheres{X}{(Z,B\lor G)} \Vdash^\forall G\rightarrow H $.
To this purpose suppose $ (U,D) \in \spheres{X}{(Z,B\lor G)} $ and $G\in U$: since $D \preccurlyeqer{X} B\lor G$ and $B\lor G \preccurlyeqer{X} G$, by transitivity $D \preccurlyeqer{X} G$; as before, by Proposition \ref{prop:relations} we obtain $ \mathsf{c}s{X}{G} \subseteq U$ and we can conclude $ H\in U $.
\
\noindent $ \mathbf{[2\Rightarrow 1]} $ Assume 2. We show that for all $ Z \in \mathsf{MAX} $, if $ \mathsf{c}s{X}{G} \subseteq Z $, then $ H \in Z $.
By Lemma \ref{lemma:maximal_consequences}, this is equivalent to $ G>H \in X $.
To this aim, suppose that $ \mathsf{c}s{X}{G} \subseteq Z $, for some $ Z $. Then, $ (Z,G) \in \mathsf{w}orldsC $. Let us consider the neighbourhood $ \spheres{X}{(Z,G)} = \alpha$: by construction it belongs to $ \neigh{X,A} $, and since $G\in Z$ and $(Z,G)\in\alpha$, by the inductive hypothesis $ \spheres{X}{(Z,G)} \Vdash^\exists G $. By hypothesis 2., there exists some neighbourhood $ \beta \in \neigh{X,A}$ such that $ \beta \subseteq \alpha $, $ \beta \Vdash^\exists G $ and $ \beta \Vdash^\forall G\rightarrow H $. It is easy to see that $ (Z,G) $ must belong to $ \beta $, since by Definition \ref{def:canonical_model} the only world that satisfies $ G $ in the neighbourhood $ \spheres{X}{(Z,G)} $ is $ (Z,G) $ itself ($\forall (U,D) \in \spheres{X}{(Z,G)}$, if $(U,D) \not= (Z,G)$ then $G \notin U$).
Thus, from $ \beta \Vdash^\forall G\rightarrow H $, $ (Z,G) \in \beta $ and $ G \in Z $ it immediately follows that $ H \in Z $.
\end{proof}
\noindent By the previous lemma we immediately obtain:
\begin{theorem}[Completeness]
\label{theorem:completeness}
For $ F \in \mathcal{L}$, if $ \vDash_{\mathcal{N}}id{N} F $ then $ \derivable{PCL} F $.
\end{theorem}
\subsection{Completeness for extensions of $\mathbb{PCL}$}
\noindent Our aim is to extend the completeness proof to the whole family of all preferential logics. We are able to extend the proof to all extensions of $ \mathbb{PCL} $, except for the systems containing \textit{weak centering} (and not containing centering).
To obtain a proof for a logic featuring more than one semantic condition, it suffices to combine the proof strategies for each case.
Unless otherwise specified, all notions refer to the canonical model for $ \mathbb{PCL} $ defined in the previous section. In some cases, the canonical model needs to be modified to account for specific conditions.
The following proposition (whose proof is straightforward) will be used for the cases of absoluteness and uniformity.
\begin{proposition}
For every $(X,A) \in \mathsf{w}orldsC$, it holds:
$$\bigcup \neigh{X,A} = \{(Z,C)\in \mathsf{w}orldsC \mid \mathsf{c}s{X}{C} \subseteq Z\}.$$
\end{proposition}
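\noindent A brief verification, spelled out for convenience: if $ (Z,C) $ belongs to some $ \spheres{X}{(Y,B)} \in \neigh{X,A} $, then either $ (Z,C) = (Y,B) $, and $ \mathsf{c}s{X}{B} \subseteq Y $ holds by definition of $ \neigh{X,A} $, or $ \mathsf{c}s{X}{C} \subseteq Z $ holds by definition of $ \spheres{X}{(Y,B)} $; in both cases $ \mathsf{c}s{X}{C} \subseteq Z $. Conversely, if $ \mathsf{c}s{X}{C} \subseteq Z $, then $ \spheres{X}{(Z,C)} \in \neigh{X,A} $ and $ (Z,C) \in \spheres{X}{(Z,C)} $.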
\subsubsection*{Normality}
We show that in the presence of Axiom (N), the canonical model $ \mathfrak{M}ex{N} $ satisfies the condition of normality:
\begin{quotation}
For all $(X, A)\in \mathsf{w}orldsC$, it holds that $\neigh{X,A}\neq \emptyset$.
\end{quotation}
By Axiom (N), we have that for all $(X, A)\in \mathsf{w}orldsC$, it holds that $\lnot (\top > \bot)\in X$. Thus, $X^{\top}$ is consistent (otherwise, reasoning as in the proof of Lemma \ref{lemma:maximal_consequences}, we would have $\top>\bot\in X$), and by Lemma \ref{lindem} there is $Z\in \mathsf{MAX}$ such that $X^{\top} \subseteq Z$. As a consequence, $(Z,\top)\in \mathsf{w}orldsC$, and $\spheres{X}{(Z,\top)} \in \neigh{X,A}$, whence $\neigh{X,A}\neq \emptyset$.
\subsubsection*{Absoluteness}
We show that in the presence of Axioms $(A_1)$ and $(A_2)$, the canonical model $ \mathfrak{M}ex{N} $ satisfies the condition of local absoluteness:
\begin{quotation}
If $(Z,C) \in \bigcup \neigh{X,A}$, then $ \neigh{X,A} = \neigh{Z,C}$.
\end{quotation}
We first prove that $a)$ for any formula $B\in \mathcal{L}$, $ \mathsf{c}s{X}{B} = \mathsf{c}s{Z}{B} $. To this aim, let $ G \in \mathsf{c}s{X}{B} $; then $ B>G \in X $. By Axiom (A$ _1 $), $ C>(B>G) \in X$, and $ B>G \in \mathsf{c}s{X}{C}$. Since $(Z,C) \in \bigcup \neigh{X,A}$, it holds that $ \mathsf{c}s{X}{C} \subseteq Z$; from this follows that $ B>G \in Z $, and thus $ G \in \mathsf{c}s{Z}{B} $. Conversely, suppose $ G \notin \mathsf{c}s{X}{B} $. Then $ \lnot (B>G) \in X $; by (A$ _2 $) $ C > \lnot (B>G) \in X $, and $ \lnot (B>G) \in \mathsf{c}s{X}{C} $. Again, since $ \mathsf{c}s{X}{C} \subseteq Z$ we have $ \lnot(B>G) \in Z $, and thus $ G \notin \mathsf{c}s{Z}{B} $.
Observe that $b)$ for any formulas $D,E\in \mathcal{L}$, it holds that $D \preccurlyeqer{X} E$ if and only if $D \preccurlyeqer{Z} E$. In fact, $D \preccurlyeqer{X} E$ means $(D \lor E)> D \in X$ and $D \preccurlyeqer{Z} E$ means $(D \lor E)> D \in Z$; proceeding as in $a)$ we obtain that $(D \lor E)> D\in X$ if and only if $(D \lor E)> D \in Z$.
From $a)$ it immediately follows that for any $(Y,B)$, $\mathsf{c}s{X}{B} \subseteq Y$ if and only if $\mathsf{c}s{Z}{B} \subseteq Y$. Then, by $b)$ we have $ \spheres{X}{(Y,B)} = \spheres{Z}{(Y,B)}$, whence by $a)$ we obtain $ \neigh{X,A} = \neigh{Z,C}$.
\subsubsection*{Total Reflexivity}
In this case we need to modify the construction of the canonical model.
\begin{definition}\label{def:universe}
The \emph{universe} of $ (X,A) $ is the set:
$$ \mathsf{u}v{X,A} = \{(Y,B) \in \mathsf{w}orldsC \mid \textit{ for all } G \in \mathcal{L}, ~ G>\bot \in X\textit{ implies } \lnot G \in Y \} .$$
The canonical model $ \mathfrak{M}ex{U} = \langle \mathsf{w}orldsCU, \mathcal{N}^{\mathsf{u}}, \vDash_{\mathcal{N}}CU \rangle $ is defined by stipulating $ \mathsf{w}orldsCU = \mathsf{w}orldsC $, $ \vDash_{\mathcal{N}}CU = \vDash_{\mathcal{N}}C $, and
$$
\neighU{X,A} = \neigh{X,A} \cup \{ \mathsf{u}v{X,A} \}.
$$
where $ \neigh{X,A}$ is the same as in Definition \ref{def:canonical_model}.
\end{definition}
\begin{lemma}\label{prop:univ}
For any $ (X,A), (Y, B) \in \mathsf{w}orldsC $, it holds that $\spheres{X}{(Y,B)} \subseteq \mathsf{u}v{X,A} $.
\end{lemma}
\begin{proof}
Assume that some $(Z,C) \in \spheres{X}{(Y,B)} $. We have to prove that for all $ G \in \mathcal{L} $, if $ G>\bot \in X $ then $ \lnot G \in Z $, and this immediately follows from \vdash_{\mathcal{H}}Mod \ and $ \mathsf{c}s{X}{C} \subseteq Z $.
\end{proof}
We show that in the presence of axiom $(T)$, the canonical model $ \mathfrak{M}ex{U} $ satisfies the condition of total reflexivity, that is:
\begin{quotation}
If $(X,A)\in \mathsf{w}orldsCU$, there exists $\alpha\in \neighU{X,A}$ such that $(X,A)\in\alpha$.
\end{quotation}
It is immediate to verify that the condition holds: because of Axiom (T), we have that $(X,A)\in \mathsf{u}v{X,A}$.
Since we have modified the definition of the canonical model, we have to verify that the Truth Lemma still holds.
To this aim, we need to add one case in the direction [\textbf{1} $\Rightarrow$ \textbf{2}] of the proof, that is, if $ G>H \in X $, then $ \mathfrak{M}ex{U}, (X,A) \mathbb{V}dash G>H $. Assume that $ G>H \in X $ and that for some $ \alpha \in \neighU{X,A} $ it holds $ \alpha \Vdash^\exists G $. If $ \alpha \in \neigh{X,A} $ the proof proceeds as in Lemma \ref{lemma:truth_lemma}. Let now $ \alpha = \mathsf{u}v{X,A} $ and suppose for some $ (Z,C) \in \mathsf{u}v{X,A} $ it holds that $ (Z,C) \mathbb{V}dash G $, whence $ G \in Z $. We show that there must exist a $ (U,D) \in \bigcup \neigh{X,A} $ such that $ (U,D) \mathbb{V}dash G $. If this were not the case, we would get that for all $ (U,D) \in \bigcup \neigh{X,A} $, $ G\notin U $. But this entails that $ \mathsf{c}s{X}{G} $ is inconsistent, and thus $ G> \bot \in X $, against the hypothesis that $ (Z,C) \mathbb{V}dash G $ and $ (Z,C) \in \mathsf{u}v{X,A} $. Thus there is a $ (U,D) \in \bigcup \neigh{X,A} $ such that $ G \in U $. We take $ \alpha' = \spheres{X}{(U,D)} $. Observe that $\alpha' \subseteq \alpha = \mathsf{u}v{X,A}$ by Lemma \ref{prop:univ}. We can then proceed as in the proof of Lemma \ref{lemma:truth_lemma}, finding a $\beta \in \neigh{X,A}$ with $\beta \subseteq \alpha'$ fulfilling the required conditions.
\subsubsection*{Uniformity}
We take the same model construction as for total reflexivity, that is the model $\mathfrak{M}ex{U}$.
Thus, we only need to show that in the presence of axioms $(U_1)$ and $ (U_2) $, $ \mathfrak{M}ex{U} $ satisfies the condition of local uniformity, that is, for any $ (X,A), (Y,B) \in \mathsf{w}orldsCU $:
\begin{quotation}
If $ (Y,B) \in \bigcup \neighU{X,A} $, then $ \bigcup \neighU{X,A} = \bigcup \neighU{Y,B} $.
\end{quotation}
To this aim, first observe that, by Lemma \ref{prop:univ},
$$ \bigcup \neighU{X,A} = \mathsf{u}v{X,A}. $$
Suppose now $ (Y,B) \in \bigcup \neighU{X,A}= \mathsf{u}v{X,A} $. We show that $ G>\bot \in X $ if and only if $ G>\bot \in Y $.
Let $ G>\bot \in X $. Then by axiom (U$ _1 $) it follows that $ \lnot (G>\bot )>\bot \in X $. Since $ (Y,B) \in\mathsf{u}v{X,A}$ we have $ \lnot \lnot (G>\bot ) \in Y $, that is $ G>\bot \in Y $. Conversely, suppose that $ G>\bot \notin X $, i.e., $ \lnot (G>\bot ) \in X $. By axiom (U$ _2 $) we have that $ (G>\bot) >\bot \in X $, and since $ (Y,B) \in\mathsf{u}v{X,A}$, we get $ \lnot (G>\bot) \in Y $, whence $ G>\bot \notin Y $. \\
From the fact that $ G>\bot \in X $ if and only if $ G>\bot \in Y $ we obtain that for all $ (Z,C) \in \mathsf{w}orldsCU $, $ (Z,C ) \in \mathsf{u}v{X,A} $ if and only if $ (Z,C) \in \mathsf{u}v{Y,B} $, which means
$ \bigcup \neighU{X,A} = \bigcup \neighU{Y,B} $.
\subsubsection*{Centering}
We modify the canonical model construction as follows.
\begin{definition}
For $ (X,A), (Y,B) \in \mathsf{w}orldsC $, let:
$$
\spheresCW{(X,A)}{(Y,B)} = \spheres{X}{(Y,B)} \cup \{ (X,A)\}.
$$
Observe that here the formula $A$ in $(X,A)$ is relevant.
Then, for any $ (X,A) \in \mathsf{w}orldsC$, $ \neighW{X,A} = \{ \spheresCW{(X,A)}{(Y,B)} \mid \mathsf{c}s{X}{B} \subseteq Y \} $. The set of worlds $ \mathsf{w}orldsC $ and the evaluation function $ \vDash_{\mathcal{N}}C $ do not change, and the canonical model is $ \mathfrak{M}ex{C} = \langle \mathsf{w}orldsC, \mathcal{N}^{\mathsf{cw}}, \vDash_{\mathcal{N}}C \rangle$.
\end{definition}
We now show that in the presence of axioms $(W)$ and $ (C) $, the canonical model $ \mathfrak{M}ex{C} $ satisfies the condition of centering:
\begin{itemize}
\item[$a$)] For every world $ (X,A)$ and every $\alpha \in \neighW{X,A}$, $ (X,A)\in \alpha$;
\item [$b) $] $\{(X,A)\}\in \neighW{X,A}$.
\end{itemize}
Condition $a)$ holds by definition. As for $b)$, first observe that for any $(X,A)$ it holds by (W) that $\mathsf{c}s{X}{A} \subseteq X$, so that $\spheresCW{(X,A)}{(X,A)}\in \neighW{X,A}$. We now show that $\spheresCW{(X,A)}{(X,A)} = \{(X,A)\} $. To this aim, we prove that there is no world $ (Y,B) \in \spheresCW{(X,A)}{(X,A)}$ such that $ (Y,B) \neq (X,A) $. For the sake of contradiction, suppose such a world exists. It follows that $A\not\in Y$ and $ B \preccurlyeqer{X} A $, which means that $ (B\lor A) > B \in X $. Thus, by axiom (W), $ (B \lor A) \rightarrow B \in X $. Since by definition $ A \in X $, we have $ B \in X $. By axiom (C) it follows that also $B>A \in X $. Thus, $ A \in \mathsf{c}s{X}{B} $; and since $ \mathsf{c}s{X}{B} \subseteq Y $ we have $ A \in Y $, which contradicts the assumption $ A \notin Y $.
\
Since we have modified the canonical model, we have to verify that the Truth Lemma continues to hold.
For the direction $\mathbf{[1\Rightarrow 2]}$, suppose that $ G>H \in X $ and that for $ \alpha \in \neighW{X,A} $ it holds that $ \alpha \Vdash^\exists G $. We can proceed as in the proof of Lemma \ref{lemma:truth_lemma}, finding a suitable $\beta\in \neighW{X,A} $. The fact that $(X,A)$ belongs to every neighbourhood in $\neighW{X,A}$, and also to $\beta$, does not compromise the assertion that $ \beta \Vdash^\forall G\rightarrow H $, since from the hypothesis $G>H\in X$ follows by (W) that $G\to H\in X$.
For the direction $\mathbf{[2\Rightarrow 1]}$, assume 2. We distinguish two cases:
\begin{itemize}[noitemsep]
\item[$ i. $] $G\not\in X$;
\item[$ ii. $] $G\in X$.
\end{itemize}
In case $i$, we proceed as in the proof of Lemma \ref{lemma:truth_lemma}, by proving that for all $ Z \in \mathsf{MAX} $, if $ \mathsf{c}s{X}{G} \subseteq Z $, then $ H \in Z $. To this aim, let us consider $\alpha = \spheresCW{(X,A)}{(Z,G)} = \spheres{X}{(Z,G)} \cup \{ (X,A)\}\in \neighW{X,A}$; note that $\alpha \Vdash^\exists G$, since $G\in Z$ and $(Z,G)\in\alpha$. By hypothesis, there exists a neighbourhood $ \beta \in \neighW{X,A}$ such that $ \beta \subseteq \alpha $, $ \beta \Vdash^\exists G $ and $ \beta \Vdash^\forall G\rightarrow H $. Since $G\not\in X$, the only world of $\alpha$ forcing $G$ is $(Z,G)$, so $(Z,G)\in \beta$; from $ \beta \Vdash^\forall G\rightarrow H $ and $G\in Z$ we conclude $H\in Z$.
In case $ii$, let us consider $\alpha = \spheresCW{(X,A)}{(X,A)} \in \neighW{X,A}$; since $G\in X$, $\alpha \Vdash^\exists G$. By hypothesis, there exists a neighbourhood $ \beta \in \neighW{X,A}$ such that $ \beta \subseteq \spheresCW{(X,A)}{(X,A)} $, $ \beta \Vdash^\exists G $ and $ \beta \Vdash^\forall G\rightarrow H $. However, since $\spheresCW{(X,A)}{(X,A)} = \{(X,A)\}$, it must be $\beta = \{(X,A)\}$. Thus, from $\beta \Vdash^\forall G\rightarrow H$ we get $G \to H\in X$; since $G\in X$, we obtain $H\in X$. By axiom (C), we finally obtain $G> H\in X$.
\begin{theorem}[Completeness for extensions]
\label{theorem:completeness_ext}
Let $ \mathsf{ext} $ denote one of the logics: $ \mathbb{P}\mathsf{N}N$, $\mathbb{P}\mathsf{T}T$, $\mathbb{P}\mathsf{C}C$, $\mathbb{P}\mathsf{U}U$ , $\mathbb{P}\mathsf{N}N\mathsf{U}U$, $\mathbb{P}\mathsf{T}T\mathsf{U}U$, $\mathbb{P}\mathsf{C}C\mathsf{U}U$, $\mathbb{P}\mathsf{N}N\mathbb{A}$, $\mathbb{P}\mathsf{T}T\mathbb{A}$, $\mathbb{P}\mathsf{C}C\mathbb{A} $.
For $ F \in \mathcal{L}$, if $ F $ is valid in the class of models for $ \mathsf{ext} $, then $ \derivable{ext} F $.
\end{theorem}
\section{A family of labelled sequent calculi}
In this section we introduce labelled calculi for $ \mathbb{PCL} $ and its extensions.
We call $ \lab{CL} $ the calculus for $ \mathbb{PCL} $. Calculi for extensions are denoted by $ \lab{CL} $ to which we add the name of the frame conditions of the corresponding logics: thus, $ \lab{CL^N} $ is a proof system for $ \mathbb{PCL}\mathsf{N}N $, $ \lab{CL^{TU}} $ is a proof system for $ \mathbb{PCL}\mathsf{T}T\mathsf{U}U $. Let $ \lab{CL^*} $ denote the whole family of calculi.
The definition of the sequent calculi $ \lab{CL^*} $ follows the well-established methodology of enriching the language of the calculus by means of labels, thus importing the semantic information of neighbourhood models into the syntactic proof system\footnote{Refer to \cite{negri2005proof} for the general methodology in Kripke models and to \cite{negri2016non-normal} for the general methodology in neighbourhood semantics.}. For this reason, it is useful to recall the truth condition for the conditional operator in neighbourhood models:
\begin{center}
$(*) \ x \mathbb{V}dash A > B $ \, \textit{iff} \, \textit{ for all $ \alpha \in N(x)$, if
$ \alpha \Vdash^\exists A $ then there exists $ \beta\in N(x)$ such that $\beta \subseteq \alpha $, $ \beta \Vdash^\exists A $, and $ \beta \Vdash^\forall A\rightarrow B$.}
\end{center}
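\noindent As a small illustrative example (ours, and not needed in what follows): take $ W = \{w_1, w_2\} $ with $ w_1 \mathbb{V}dash p $, $ w_2 \mathbb{V}dash p $, $ w_2 \mathbb{V}dash q $, $ N(w_1) = \{\{w_2\}, \{w_1,w_2\}\} $ and $ N(w_2) = \{\{w_2\}\} $. Then $ w_1 \mathbb{V}dash p>q $: for both neighbourhoods in $ N(w_1) $ the neighbourhood $ \beta = \{w_2\} $ forces $ p $ existentially and $ p\rightarrow q $ universally. Note that $ w_1 \mathbb{V}dash p>q $ holds even though the larger neighbourhood contains the world $ w_1 $, where $ p $ holds and $ q $ fails.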
\noindent We enrich the language $ \mathcal{L}$ as follows.
\begin{definition}
\label{labelled_formulas}
Let $ x, y,z, \dots $ be variables for worlds in a neighbourhood model, and $ a, b, c, \dots $ variables for neighbourhoods. \textit{Relational atoms} are the following expressions:
\begin{itemize}[noitemsep]
\item $ a \in N(x) $, ``neighbourhood $ a $ belongs to the family of neighbourhoods associated to $ x $'';
\item $ x \in a $, ``world $ x $ belongs to neighbourhood $ a$'';
\item $ a \subseteq b $, ``neighbourhood $ a $ is included into neighbourhood $ b $''.
\end{itemize}
\noindent \textit{Labelled formulas} are defined as follows. Relational atoms are labelled formulas and, for $ A \in \mathcal{L}$, the following are labelled formulas:
\begin{itemize}[noitemsep]
\item $ x:A $, ``formula $ A $ is true at world $ x $'';
\item $ a \Vdash^\exists A $, ``$ A $ is true at some world of neighbourhood $ a $'';
\item $ a \Vdash^\forall A $, ``$ A $ is true at all worlds of neighbourhood $ a $'';
\item $ x \barra{a} A|B $, ``there exists $ \beta\in N(x)$ such that $\beta \subseteq a $, $ \beta \Vdash^\exists A $, and $ \beta \Vdash^\forall A\rightarrow B$''.
\end{itemize}
We use $ \{x\} $ to denote a neighbourhood consisting of exactly one element.
\end{definition}
\noindent Relational atoms and labelled formulas are defined in correspondence with semantic notions. Relational atoms describe the structure of the neighbourhood model, whereas labelled formulas are defined in correspondence with the forcing relations at a world ($ x \mathbb{V}dash A$) and at a neighbourhood ($a \Vdash^\exists A$, $ a \Vdash^\forall A $).
Formula $ x \barra{a} A|B $ introduces a semantic condition corresponding to the consequent of the right-hand side of $ (*) $. The reason for the introduction of this formula is that $ (*) $ is too rich to be expressed by a single rule.
Thus we need to break $ (*) $ into two smaller conditions, one (the antecedent) covered by rules for formulas $ x: A>B $ and the other (the consequent) covered by $ x \barra{a} A|B $.
\begin{definition}
Sequents of $ \lab{CL^*} $ are expressions $ \Gamma \Rightarrow \Delta $
where $ \Gamma $ and $ \Delta $ are multisets of relational atoms and labelled formulas, and relational atoms may occur only in $ \Gamma $.
\end{definition}
\begin{figure}
\caption{Sequent calculus $ \lab{CL} $}
\label{rules_CL}
\end{figure}
\begin{figure}
\caption{Sequent calculi for extensions of $ \lab{CL} $}
\label{rules_extensions}
\end{figure}
\noindent Figure \ref{rules_CL} contains the rules for $ \mathbb{PCL} $, whereas Figure \ref{rules_extensions} shows the rules for extensions of $ \mathbb{PCL} $. We write $ \mathsf{(a!)} $ as a side condition expressing the requirement that label $ a $ should not occur in the conclusion of a rule. Propositional rules are standard. Rules for local forcing make explicit the meaning of the forcing relations $ \Vdash^\forall $ and $ \Vdash^\exists $. Rules for the conditional are defined on the basis of the truth condition for $ > $ in neighbourhood models.
Each rule of Figure \ref{rules_extensions} is defined in correspondence with the frame conditions on extensions of $ \mathbb{PCL} $. For total reflexivity and weak centering, the frame condition can be formalized by means of a single rule.
Rule $ \mathsf{0} $ stands for the requirement of non-emptiness in the model, and it is added to capture the condition of normality, along with rule $ \mathsf{N} $\footnote{The rule need not be added to the calculus $ \lab{CL} $: the rules of this calculus always introduce non-empty neighbourhoods, and the system can be shown to be complete with respect to the axioms of $ \mathbb{PCL} $ (Theorem \ref{theorme:completeness_synth}). However, the rule is needed to express the condition of normality: the new neighbourhood introduced by rule $ \mathsf{N} $ could be empty. }.
Centering requires four rules.
Rule $ \mathsf{C} $ ensures the Centering condition by introducing formulas with neighbourhood label $ \{x\} $ (the singleton). Rule $ \mathsf{Single} $ ensures that the singleton contains at least one element, and rules $ \Repl{1} $ and $ \Repl{2} $ that it contains at most one element: if there is another element $ y \in \{x\}$, then the properties holding for $ x $ hold also for $ y $ (i.e. $ x $ and $ y $ are the same element).
Similarly, extensions with uniformity and absoluteness are defined by adding multiple rules.
Rules $ \mathsf{U}nif{1} $ and $ \mathsf{U}nif{2} $ encode the semantic condition of uniformity. In order to avoid the symbol $ \bigcup $ in the sequent language, the rules translate the following two conditions which, taken together, are equivalent to uniformity.
\begin{quote}
$ \mathsf{U}nif{1} $: If there exist $ \alpha \in N(x) $ such that $ y \in \alpha $ and $ \beta \in N(y) $ such that $ z \in \beta $, then there exists $ \gamma \in N(x) $ such that $ z \in \gamma $;
$ \mathsf{U}nif{2} $: If there exist $ \alpha \in N(x) $ such that $ y \in \alpha $ and $ \beta \in N(x) $ such that $ z \in \beta $, then there exists $ \gamma \in N(y) $ such that $ z \in \gamma $.
\end{quote}
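\noindent To see that the two conditions together are indeed equivalent to local uniformity (a short check): $ \mathsf{U}nif{1} $ says that if $ y \in \bigcup N(x) $ and $ z \in \bigcup N(y) $ then $ z \in \bigcup N(x) $, i.e. $ \bigcup N(y) \subseteq \bigcup N(x) $ whenever $ y \in \bigcup N(x) $; $ \mathsf{U}nif{2} $ says that if $ y \in \bigcup N(x) $ and $ z \in \bigcup N(x) $ then $ z \in \bigcup N(y) $, i.e. $ \bigcup N(x) \subseteq \bigcup N(y) $ whenever $ y \in \bigcup N(x) $. Together, they give $ \bigcup N(x) = \bigcup N(y) $ for every $ y \in \bigcup N(x) $, which is exactly the condition of local uniformity.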
As for absoluteness, rules $ \mathsf{A}s{1} $ and $ \mathsf{A}s{2} $ encode the information that for any $ x \in W $, given $ a \in N(x) $ and $ y \in a $, if $ \beta \in N(x) $ then $ \beta \in N(y) $ (rule $ \mathsf{A}s{1} $), and if $ \beta \in N(y) $, then $ \beta \in N(x) $ (rule $ \mathsf{A}s{2} $). Thus, $ N(x)= N(y) $.
\
The sequent calculi $ \lab{CL^*} $ can be modularly extended to cover Lewis' logics (refer to the end of Section 2). To obtain a calculus for $ \mathbb{V} $, it suffices to add to $ \lab{CL} $ a structural rule corresponding to the semantic condition of nesting:
$$\infer[\mathsf{Nes}]{a \in N(x), b \in N(x), \Gamma \Rightarrow \Delta}{ a \subseteq b, a \in N(x), b \in N(x), \Gamma \Rightarrow \Delta & b \subseteq a, a \in N(x), b \in N(x), \Gamma \Rightarrow \Delta}
$$
The rule can be added to calculi for extensions of $ \mathbb{PCL} $ to obtain calculi for the corresponding logics extending $ \mathbb{V} $\footnote{Refer to \cite{girlando2018counterfactuals} for a simpler labelled calculus for $ \mathbb{V} $, which makes use of the connective of \textit{comparative plausibility} instead of the conditional operator.}.
\
It might happen that some instances of rules of $ \lab{CL^*} $ present a duplication of the atomic formula in the conclusion: for example, an instance of $ \mathsf{U}nif{1} $ with $ a=b $ displays two formulas $ a \in N(x) $ in the conclusion. Since we want contraction to be height-preserving admissible, we deal with these cases by adding to the sequent calculus a new rule, in which the duplicated formulas are contracted into one. Such an operation is called applying a \textit{closure condition} to the rules (cf. \cite{negri2005proof}). Thus, rule $ \mathsf{U}nif{1}^* $ is the rule obtained applying the closure condition to $ \mathsf{U}nif{1} $ in case $ a=b $ and $ x =y $; rules $ \mathsf{U}nif{1}^{**} $ and $ \mathsf{U}nif{2}^{**} $ are obtained from $ \mathsf{U}nif{1} $ and $ \mathsf{U}nif{2} $, in case $ a=b $ and $ y=z $; and finally, $ \mathsf{A}s{1}^* $ is obtained from $ \mathsf{A}s{1} $ in the case $ a=b $. There is no need to define additional rules which can be generated by the closure condition, since such rules either collapse or are subsumed by other rules of the calculus. For instance, the rule obtained applying the closure condition to $ \mathsf{U}nif{2} $, case $ a=b $ and $ x=y $, is the following:
$$
\infer[\mathsf{U}nif{2}^*]{a \in N(x), x \in a, \Gamma \Rightarrow \Delta}{z \in c, c \in N(x), a \in N(x), x \in a, \Gamma \Rightarrow \Delta}
$$
and this is the same instance we obtain applying the closure condition to $ \mathsf{U}nif{1}^* $. However, the rules added by closure condition are not needed to prove completeness of the calculi; for this reason, we have not included them in the following sections (e.g. in the termination proof).
\
To prove soundness of the rules with respect to the corresponding system of logics, we need to interpret relational atoms and labelled formulas in neighbourhood models.
The notion of realization interprets the labels in neighbourhood frames, thus connecting the syntactic elements of the calculus with the semantic elements of the model.
\begin{definition} \label{realization}
Let $ \mathcal{M}= \langle W, N, \llbracket \ \rrbracket \rangle $ be a neighbourhood model for $ \mathbb{PCL} $ or its extensions, $ \mathcal{S} $ a set of world labels and $ \mathcal{N} $ a set of neighbourhood labels. An $ \mathcal{SN} $\textit{-realization} over $ \mathcal{M} $ consists of a pair of functions $( \rho , \sigma) $ such that:
\begin{itemize}[noitemsep]
\item $ \rho: \mathcal{S} \rightarrow W $ is the function assigning to each $ x\in \mathcal{S} $ an element $ \rho (x) \in W $;
\item $ \sigma: \mathcal{N} \rightarrow \mathcal{P}(W) $ is the function assigning to each $ a \in \mathcal{N} $ a neighbourhood $ \sigma(a) \in N(w) $, for some $ w \in W $.
\end{itemize}
We introduce the notion of satisfiability of a formula $ \mathcal{F} $ under an $ \mathcal{SN} $-realization by cases on the form of $ \mathcal{F} $:
\begin{itemize}[noitemsep]
\item $ \mathcal{M} \vDash_{\rho, \sigma} a \in N(x) $ if $ \sigma(a) \in N(\rho(x) ) $;
\item $ \mathcal{M} \vDash_{\rho, \sigma} a\subseteq b $ if $ \sigma (a) \subseteq \sigma(b) $;
\item $ \mathcal{M} \vDash_{\rho, \sigma} x \in a $ if $ \rho(x) \in \sigma (a) $ (in particular, $ \mathcal{M} \vDash_{\rho, \sigma} y \in \{x\} $ if $ \rho(y) \in \sigma ( \{ x\}) $);
\item $ \mathcal{M} \vDash_{\rho, \sigma} x:P $ if $ \rho (x) \mathbb{V}dash P $ \footnote{This definition is extended in the standard way to formulas obtained by the classical propositional connectives.};
\item $ \mathcal{M} \vDash_{\rho, \sigma} a \Vdash^\forall A $ if $ \sigma(a) \Vdash^\forall A $;
\item $ \mathcal{M} \vDash_{\rho, \sigma} a\Vdash^\exists A $ if $ \sigma(a) \Vdash^\exists A $;
\item $ \mathcal{M} \vDash_{\rho, \sigma} x \mathbb{V}dash_a A|B$ if $ \sigma(a) \in N(\rho(x)) $ and for some $ \beta \subseteq \sigma(a)$ it holds that $ \beta \mathbb{V}dash^\exists A $ and $ \beta \mathbb{V}dash^\forall A\rightarrow B $;
\item $ \mathcal{M} \vDash_{\rho, \sigma} x : A>B$ if for all $ \sigma(a) \in N(\rho(x)) $, if $ \mathcal{M} \vDash_{\rho, \sigma} a\Vdash^\exists A $ then $ \mathcal{M} \vDash_{\rho, \sigma} x \mathbb{V}dash_a A|B$.
\end{itemize}
\noindent Given a sequent $ \Gamma \Rightarrow \Delta $, let $ \mathcal{S} $, $ \mathcal{N} $ be the sets of world and neighbourhood labels occurring in $ \Gamma \cup \Delta$, and let $ (\rho, \sigma) $ be an $ \mathcal{SN} $-realization. Define $ \mathcal{M} \vDash _{\rho, \sigma} \Gamma \Rightarrow \Delta $ if either $ \mathcal{M} \nvDash_{\rho, \sigma} F $ for some $ F \in \Gamma $ or $ \mathcal{M}\vDash_{\rho, \sigma} G $ for some $ G\in \Delta $. Define validity under all realizations by $ \mathcal{M} \vDash \Gamma \Rightarrow \Delta $ if $ \mathcal{M}\vDash_{\rho, \sigma} \Gamma \Rightarrow \Delta $ for all $ (\rho, \sigma) $ and say that a sequent is valid in all neighbourhood models if $ \mathcal{M}\vDash_{\rho, \sigma} \Gamma \Rightarrow \Delta $ for all models $ \mathcal{M} $.
\end{definition}
\begin{theorem}[Soundness]
If a sequent $ \Gamma \Rightarrow \Delta $ is derivable in $ \lab{CL^*}$, then it is valid in the corresponding class of neighbourhood models.
\end{theorem}
\begin{proof}
The proof is by straightforward induction on the height of the derivation, employing the notion of realization defined above. By means of example, we show soundness of the left and right rule for the conditional operator.
$ [\mathsf{L>}] $
We show that any neighbourhood model and realization validating the premisses also validate the conclusion. Let $ \mathcal{M} \vDash_{\rho, \sigma} a \in N(x), x:A>B, \Gamma \Rightarrow \Delta, a \Vdash^\exists A $ and $ \mathcal{M} \vDash_{\rho, \sigma} x \barra{a} A|B, a \in N(x), x:A>B, \Gamma \Rightarrow \Delta $. The only relevant case is the one in which $ \mathcal{M} \vDash_{\rho, \sigma} a \in N(x) $, $ \mathcal{M} \vDash_{\rho, \sigma} a \Vdash^\exists A $ and $ \mathcal{M} \nvDash_{\rho, \sigma} x \barra{a} A|B $. From the first two we have that $ \sigma(a) \in N(\rho(x)) $ and $ \sigma(a) \Vdash^\exists A $; from the third that for all $ \beta \subseteq \sigma(a) $ it holds that either $ \beta \nVdash^\exists A $ or $ \beta \nVdash^\forall A \rightarrow B $. By definition, this means that
$ \mathcal{M} \nvDash_{\rho, \sigma} x: A >B$; and thus, $ \mathcal{M} \vDash_{\rho, \sigma} a \in N(x), x:A>B, \Gamma \Rightarrow \Delta $.
$ [\mathsf{R>}] $ Suppose $ \mathcal{M} \vDash_{\rho, \sigma} a \in N(x), a \Vdash^\exists A, \Gamma \Rightarrow \Delta, x \barra{a} A|B $. We show that the conclusion is valid in the same model, under the same realization. There are two relevant cases: either the one in which $ \mathcal{M} \nvDash_{\rho, \sigma} a \Vdash^\exists A $ or the one in which $ \mathcal{M} \vDash_{\rho, \sigma} x \barra{a} A|B $. In the former case we have that $ \sigma(a) \nVdash^\exists A $, for $ \sigma(a) \in N(\rho (x)) $. In the latter case, we have that for $ \sigma(a) \in N(\rho (x)) $, there exists $ \beta \subseteq \sigma(a) $ such that $ \beta \Vdash^\exists A $ and $ \beta \Vdash^\forall A \rightarrow B $. In both cases it holds by definition that $ \mathcal{M} \vDash_{\rho, \sigma} x:A>B $; thus, $ \mathcal{M} \vDash_{\rho, \sigma} \Gamma \Rightarrow \Delta, x:A>B $.
\end{proof}
\section{Structural properties and syntactic completeness}
In this section we prove the main structural properties of calculi $ \lab{CL^*} $.
We start with some preliminary definitions and lemmas. By \textit{height} of a derivation we mean the number of nodes occurring in the longest derivation branch, minus one.
We write $ \vdash^n \Gamma \Rightarrow \Delta $ meaning that there is a derivation of $ \Gamma \Rightarrow \Delta $ in $ \lab{CL^*} $ with height bounded by $ n $.
\begin{definition}
The weight of relational atoms is 0. As for the other labelled formulas, the label of formulas of the form $ x:A $ and $ x\mathbb{V}dash_a A|B $ is $ x $; the label of formulas $ a\Vdash^\forall A $ and $ a\Vdash^\exists A $ is $ a $. We denote by $ l(\mathcal{F}) $ the label of a formula $ \mathcal{F} $, and by $ p(\mathcal{F}) $ the pure part of the formula, i.e., the part of the formula without the label and without the forcing relation. The \textit{weight} of a labelled formula is defined as a lexicographically ordered pair
$$ \langle w(p(\mathcal{F})), w(l(\mathcal{F}) ) \rangle $$
where
\begin{itemize}[noitemsep]
\item for all world labels $ x $, $ w(x)=0 $;
\item for all neighbourhood labels $ a $, $ w(a)=1 $;
\item $w(p) = w (\bot) = 1$;
\item $ w(A\circ B) = w(A) + w(B) + 1 $ for $ \circ $ conjunction, disjunction or implication;
\item $ w(A|B)= w(A) +w(B)+2 $;
\item $ w(A>B) = w(A)+w(B)+3 $.
\end{itemize}
\end{definition}
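\noindent For example (a worked instance of ours): for atoms $ p, q $, a world label $ x $ and a neighbourhood label $ a $, we have $ w(x:p>q) = \langle w(p>q), w(x) \rangle = \langle w(p)+w(q)+3, 0 \rangle = \langle 5, 0 \rangle $, $ w(a \Vdash^\forall p\rightarrow q) = \langle 3, 1 \rangle $ and $ w(x \mathbb{V}dash_a p|q) = \langle 4, 0 \rangle $. In particular, $ a \Vdash^\exists p $ and $ x \mathbb{V}dash_a p|q $ have strictly smaller weight than $ x: p>q $, which is what the proof of Theorem \ref{cut_3} exploits when reducing a cut on a conditional formula to cuts on formulas of smaller weight.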
\noindent The definition of substitution of labels given in \cite{negri2005proof} can be extended in an obvious way to the relational atoms and labelled formulas of $ \lab{CL^*} $. According to this definition we have, for example, $(a\Vdash^\exists A)[b/a] \equiv b\Vdash^\exists A$, and $(x \barra{a} B|A)[y/x]\equiv y \barra{a} B|A$.
The calculus is routinely shown to enjoy the property of height preserving substitution both of world and neighbourhood labels. The proof is a straightforward extension of the same proof in \cite{negri2005proof}.
\begin{proposition}\label{subst}
\
\begin{enumerate}[label=(\roman*), noitemsep]
\item If $\vdash^n \Gamma \Rightarrow \Delta$, then $\vdash^n \Gamma{[y/x]}\Rightarrow\Delta{[y/x]}$;
\item If $\vdash^n \Gamma \Rightarrow \Delta$, then $\vdash^n \Gamma{[b/a]}\Rightarrow\Delta{[b/a]}$.
\end{enumerate}
\end{proposition}
The following lemma, adapted from \cite{negri2005proof}, ensures derivability of generalized initial sequents. The proof proceeds by mutual induction on the weight of labelled formulas.
\begin{lemma}\label{generalized_initial_sequents}
The following sequents are derivable in $ \lab{CL^*} $.
\begin{enumerate}[noitemsep]
\item $ a \Vdash^\exists A, \Gamma \Rightarrow \Delta, a\Vdash^\exists A $
\item $ a \Vdash^\forall A, \Gamma \Rightarrow \Delta, a \Vdash^\forall A $
\item $ x\mathbb{V}dash_a A|B, \Gamma \Rightarrow \Delta, x\mathbb{V}dash_a A|B $
\item $ x:A, \Gamma \Rightarrow \Delta, x:A $
\end{enumerate}
\end{lemma}
To prove admissibility of the cut rule, we need admissibility of the structural rules and invertibility of all the rules. The reader can find a detailed proof of these properties in \cite{girlandothesis}. Both lemmas are proved by induction on the height of the derivation.
\begin{lemma}
Let $ \mathcal{F} $ be a relational atom or a labelled formula. The rules of weakening and contraction are height-preserving admissible in $ \lab{CL^*} $:
$$
\infer[\mathsf{W}kl]{\mathcal{F}, \Gamma \Rightarrow \Delta}{\Gamma \Rightarrow \Delta} \quad
\infer[\mathsf{W}kr]{\Gamma \Rightarrow \Delta,\mathcal{F}}{\Gamma \Rightarrow \Delta} \quad
\infer[\mathsf{C}trl]{\mathcal{F},\Gamma \Rightarrow \Delta}{\mathcal{F},\mathcal{F}, \Gamma \Rightarrow \Delta} \quad
\infer[\mathsf{C}trr]{\Gamma \Rightarrow \Delta,\mathcal{F}}{\Gamma \Rightarrow \Delta,\mathcal{F},\mathcal{F}}
$$
\end{lemma}
\begin{lemma}
All the rules of $ \lab{CL^*} $ are height-preserving invertible: if the conclusion of a rule is derivable with derivation height $ n $, its premiss(es) are derivable with at most the same derivation height.
\end{lemma}
\begin{theorem}[Cut-admissibility]
\label{cut_3}
The rule of cut is admissible in $ \lab{CL^*} $.
$$
\infer[\mathsf{C}ut]{\Gamma, \Gamma' \Rightarrow \Delta, \Delta'}{\Gamma \Rightarrow \Delta, \mathcal{F} \quad & \quad \mathcal{F}, \Gamma' \Rightarrow \Delta' }
$$
\end{theorem}
\begin{proof}
The proof is by primary induction on the weight of the cut formula and by secondary induction on the sum of the heights of the derivations of the premisses of $ \mathsf{C}ut $\footnote{Refer to \cite{structural} for the general methodology of proving cut-admissibility in labelled systems.}. We distinguish cases according to the rules applied to derive the premisses:
\begin{itemize}[noitemsep]
\item [$a)$] At least one of the premisses of $ \mathsf{C}ut $ is an initial sequent;
\item [$b)$] The cut formula is not the principal formula in the derivation of at least one premiss;
\item [$c)$] The cut formula is the principal formula of both derivations of the premisses.
\end{itemize}
We only show the case of $c)$ in which the cut formula has the form $ A>B $. For the proof of propositional cases, refer to \cite[Theorem 3.2.3]{structural}; for the proof of the other conditional cases refer to \cite{girlandothesis}.
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{C}ut]{ a \in N(x), \Gamma, \Gamma' \Rightarrow \Delta, \Delta' }{
\infer[\mathsf{R>}]{\Gamma \Rightarrow \Delta, x:A>B }{ \deduce{ b \in N(x), b \Vdash^\exists A, \Gamma \Rightarrow \Delta, x \barra{a} A|B }{(1)} }
&
\infer[\mathsf{L>}]{ a \in N(x), x:A>B, \Gamma' \Rightarrow \Delta' }{
\deduce{ a \in N(x), x:A>B, \Gamma' \Rightarrow \Delta', a \Vdash^\exists A }{(2)}
&
\deduce{x \barra{a} A|B, a \in N(x), x:A>B, \Gamma' \Rightarrow \Delta' }{(3)}
}
}
$$
\end{adjustbox}
\
\noindent We first apply $ \mathsf{C}ut $ on the premisses of $ \mathsf{L>} $. Both applications have a smaller sum of heights of the premisses with respect to the original application of $ \mathsf{C}ut $:
$$
\mathcal{D}_1 = \quad \infer[\mathsf{C}ut]{a \in N(x), \Gamma, \Gamma' \Rightarrow \Delta, \Delta'}{
\Gamma \Rightarrow \Delta, x:A>B
&
\deduce{ a \in N(x), x:A>B, \Gamma' \Rightarrow \Delta', a \Vdash^\exists A }{(2)}
}
$$
$$
\mathcal{D}_2 = \quad \infer[\mathsf{C}ut]{a \in N(x), \Gamma, \Gamma' \Rightarrow \Delta, \Delta'}{
\Gamma \Rightarrow \Delta, x:A>B
&
\deduce{x \barra{a} A|B, a \in N(x), x:A>B, \Gamma' \Rightarrow \Delta' }{(3)}
}
$$
We combine the above with two occurrences of $ \mathsf{C}ut $, on formulas of lesser weight than the original cut formula.
$$
\infer[\mathsf{C}tr]{a \in N(x), \Gamma, \Gamma' \Rightarrow \Delta, \Delta'}{
\infer[\mathsf{C}ut]{a \in N(x)^3, \Gamma^3, \Gamma'^2 \Rightarrow \Delta^3, \Delta'^2}{
\infer[\mathsf{C}ut]{a \in N(x)^2, \Gamma^2, \Gamma' \Rightarrow \Delta^2, \Delta', x \barra{a}A|B }{ \mathcal{D}_1\quad & \deduce{
b \in N(x), b \Vdash^\exists A, \Gamma \Rightarrow \Delta, x \barra{a} A|B }{(1)[b/a]}
}
&
\quad \mathcal{D}_2}
}
$$
\end{proof}
\noindent
The axioms of each system of logic can be derived in the respective calculus. By admissibility of cut, the inference rules can be shown to be admissible, thereby obtaining a syntactic proof of completeness of the calculi. Details are given in the Appendix.
\begin{theorem}[Completeness via cut admissibility]
\label{theorme:completeness_synth}
If a formula $ A $ is derivable in $\vdash_{\mathcal{H}}iom{PCL} $ or in one of its extensions, then there is a derivation of $ \Rightarrow x: A $ in the calculus $ \lab{CL^*} $ for the corresponding logic.
\end{theorem}
\noindent We conclude the section by proving admissibility of rules $ \Repl{1} $ and $ \Repl{2}$ in their generalized form. This lemma will be used in Section \ref{sec:semantic_compl}, to prove completeness of the calculi featuring centering with respect to neighbourhood models.
\begin{lemma}
Rules $ \Repl{1} $ and $ \Repl{2}$ generalized to all formulas of the language are admissible in $ \lab{CL^*} $.
\end{lemma}
\begin{proof}
Admissibility of the two rules is proven simultaneously, by induction on the weight of formulas. We only show the proof of admissibility for $\Repl{1} $ (the other rule is symmetric).
Since contraction and cut are admissible in $ \lab{CL^*} $, it is sufficient to show that sequent $ y \in \{x\}, A(x) \Rightarrow A(y) $ is derivable.
From this sequent and the premiss of $\Repl{1} $, the conclusion of $\Repl{1} $ can be derived applying cut and contraction.
We proceed by induction on the weight of formula $ A(x) $; there are several cases to consider.
\noindent \textbf{1.} $ A(x) \equiv x: \mathcal{F} $, $ A(y) \equiv y: \mathcal{F} $, where $ \mathcal{F} $ is a propositional formula. We consider the case $ A(x) \equiv x:B\rightarrow C $, $ A(y) \equiv y:B\rightarrow C $.
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\rightarrow}]{y \in \{x\}, x:B\rightarrow C \Rightarrow y:B\rightarrow C }{
\infer[\mathsf{L\rightarrow}]{ y \in \{x\}, x:B\rightarrow C, y:B \Rightarrow y: C }{
\infer[\Repl{2}]{y \in \{x\}, y:B \Rightarrow y: C, x:B}{
\deduce{y \in \{x\},x :B, y:B \Rightarrow y: C, x:B}{}
}
&
\infer[\Repl{1}]{y \in \{x\}, y:B, x:C \Rightarrow y: C}{
\deduce{y \in \{x\}, y:B ,x:C, y:C \Rightarrow y: C, x:B}{}
}
}
}
$$
\end{adjustbox}
\
\noindent In this case we need $ \Repl{2} $, applied to formulas of smaller weight, and the two premisses are derivable by \mbox{Lemma \ref{generalized_initial_sequents}}.
\noindent \textbf{2.} $ A(x) \equiv x \mathbb{V}dash_{a} B|C$, $ A(y) \equiv y \mathbb{V}dash_{a} B|C$.
$$
\infer[\mathsf{L|}]{y \in \{x\}, x \mathbb{V}dash_{a} B|C \Rightarrow y \mathbb{V}dash_{a} B|C}{
\infer[\Repl{1}]{c \in N(x), c\subseteq a, c \mathbb{V}dash^{\exists} B, c \mathbb{V}dash^{\forall}B\rightarrow C, y \in \{x\} \Rightarrow y \mathbb{V}dash_{a} B|C }{
\infer[\mathsf{R|}]{c \in N(y), c \in N(x), c\subseteq a, c \mathbb{V}dash^{\exists} B, c \mathbb{V}dash^{\forall}B\rightarrow C, y \in \{x\} \Rightarrow y \mathbb{V}dash_{a} B|C}{
(1) \qquad
&
\qquad (2)
} } }
$$
Where $ (1)$ is the sequent $ c \in N(y), c \in N(x), c\subseteq a, c \mathbb{V}dash^{\exists} B, c \mathbb{V}dash^{\forall}B\rightarrow C, y \in \{x\} \Rightarrow y \mathbb{V}dash_{a} B|C, c \mathbb{V}dash^{\exists} B $, and $ (2)$ is the sequent $ c \in N(y), c \in N(x), c\subseteq a, c \mathbb{V}dash^{\exists} B, c \mathbb{V}dash^{\forall} B\rightarrow C, y \in \{x\} \Rightarrow y \mathbb{V}dash_{a} B|C, c \mathbb{V}dash^{\forall} B\rightarrow C$.
Rule $\Repl{1} $ is applied to the atomic formula $ c \in N(x) $, which has smaller weight than $ A(x) $. The lower premiss is derivable by \mbox{Lemma \ref{generalized_initial_sequents}}, the upper one by steps of $\mathsf{L\fe} $, $\mathsf{L\fu}$, $\mathsf{L\subseteq}$, and \mbox{Lemma \ref{generalized_initial_sequents}}.
\noindent \textbf{3.} $ A(x) \equiv x: B>C $, $ A(y) \equiv y:B> C $.
$$
\infer[\mathsf{R>}]{y \in \{x\}, x: B>C \Rightarrow y:B> C}{
\infer[\Repl{2}]{ a \in N(y), a \Vdash^{\exists} B, y \in \{x\}, x: B>C \Rightarrow y \Vdash_{a} B|C}{
\infer[\mathsf{L>}]{ a \in N(x), a \in N(y), a \Vdash^{\exists} B, y \in \{x\}, x: B>C \Rightarrow y \Vdash_{a} B|C}{
&
x \Vdash_{a} B|C, a \in N(x), a \in N(y), a \Vdash^{\exists} B, y \in \{x\}, x: B>C \Rightarrow y \Vdash_{a} B|C } } }
$$
Rule $ \Repl{2} $ is applied to formula $ a \in N(y) $, of smaller weight. The leftmost premiss is the sequent $ a \in N(x), a \in N(y), a \Vdash^{\exists} B, y \in \{x\}, x: B>C \Rightarrow y \Vdash_{a} B|C, a \Vdash^{\exists} B $, derivable by Case 1.
\end{proof}
\section{Decision procedure}
As they are, the calculi $ \lab{CL^*} $ are not terminating. Simple cases of loops are due to the repetition of the principal formula in the premiss of a rule; more complex cases of loop are generated by the interplay of world and neighbourhood labels. Our aim in this section is to provide a termination strategy for the calculi, thus defining a decision procedure for the logic.
Here follow some examples of loops that might occur in root-first proof search.
\begin{example}\label{ex:loop_immediate}
Loop generated by repeated applications of rule $ \mathsf{L\fu} $ to $ a\Vdash^\forall C $.
$$
\infer[\mathsf{L\fe}]{a \Vdash^\exists A, a\Vdash^\forall C, \Gamma \Rightarrow \Delta}{
\infer[\mathsf{L\fu}]{x\in a, x: A, a \Vdash^\forall C, \Gamma \Rightarrow \Delta}{
\infer[\mathsf{L\fu}]{x\in a, x: A, x:C,a\Vdash^\forall C, \Gamma \Rightarrow \Delta}{
\deduce{x\in a, x: A, x:C, x:C,a\Vdash^\forall C,\Gamma \Rightarrow \Delta}{\vdots}
}
}
}
$$
\end{example}
\begin{example}\label{ex:simple_loop}
Loop generated by repeated applications of $\mathsf{L>} $ and $ \mathsf{L|} $, with one conditional formula in the antecedent (only the left premiss of $ \mathsf{L>} $ is shown).
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{L>}]{a \in N(x), x: A> B \Rightarrow \Delta}{
\infer[\mathsf{L|}]{x \barra{a} A|B, a\in N(x), x: A> B \Rightarrow \Delta}{
\infer[\mathsf{L>}]{b\in N(x), b\subseteq a, a\in N(x),b\Vdash^\exists A, b\Vdash^\forall A\rightarrow B, x: A> B \Rightarrow \Delta }{
\infer[\mathsf{L|}]{x \barra{b} A|B,b\in N(x), b\subseteq a, a\in N(x), b\Vdash^\exists A, b\Vdash^\forall A\rightarrow B, x: A> B \Rightarrow \Delta}{
\deduce{ c \in N(x), c \subseteq b, b\in N(x), b\subseteq a, a\in N(x), c \Vdash^\exists A, c \Vdash^\forall A\rightarrow B, \dots, x: A> B \Rightarrow \Delta }{\vdots }
}
}}}
$$
\end{adjustbox}
\end{example}
\begin{example}\label{ex:complex_loop}
Loop generated by repeated applications of rules $ \mathsf{L>} $ and $ \mathsf{L|} $, with two conditional formulas in the antecedent. Let $ \Omega = x: A> B, x: C> D $. We write only the leftmost premiss of $ \mathsf{L>} $; next to $ \mathsf{L>} $ is written the number of applications of the rule.
\begin{adjustbox}{max width= \textwidth}
$$
\infer[\mathsf{L>} ~ \mbox{(2)}]{a\in N(x), \Omega \Rightarrow \Delta }{
\infer[\mathsf{L|}]{x \barra{a} A|B, x\barra{a} C|D,\Omega \Rightarrow \Delta}{
\infer[\mathsf{L|}]{b \subseteq a, b\in N(x), b\Vdash^\exists A, b\Vdash^\forall A \rightarrow B,x\barra{a} C|D, \Omega \Rightarrow \Delta}{
\infer[\mathsf{L>}~ \mbox{(4)}]{c \in N(x), c \subseteq a, c \Vdash^\exists C, c \Vdash^\forall C\rightarrow D, \dots, \Omega \Rightarrow \Delta}{
\infer[\mathsf{L|}] { x\barra{b} A|B, x\barra{b} C|D, x\barra{c} A|B, x\barra{c} C|D,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{L|}]{ d \in N(x), d\subseteq b, d \Vdash^\exists A, d \Vdash^\forall A\rightarrow B, x\barra{b} C|D, x\barra{c} A|B, x\barra{c} C|D,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{L|}]{e \in N(x), e\subseteq b, e \Vdash^\exists C, e \Vdash^\forall C\rightarrow D, x\barra{c} A|B, x\barra{c} C|D,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{L|}]{f \in N(x), f\subseteq c, f \Vdash^\exists A, f \Vdash^\forall A\rightarrow B, x\barra{c} C|D,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{L>} ~ \mbox{(4)}]{g \in N(x), g\subseteq c, g \Vdash^\exists C, g \Vdash^\forall C\rightarrow D, \dots, \Omega\Rightarrow \Delta}{
\deduce{ x\barra{d} A|B, x\barra{d} C|D,x\barra{e} A|B, x\barra{e} C|D, x\barra{f} A|B, x\barra{f} C|D,x\barra{g} A|B, x\barra{g} C|D, \cdots, \Omega \Rightarrow \Delta }{\vdots}
}
}
}
}
}
}
}
}
}
$$
\end{adjustbox}
\end{example}
We start by proving termination for $ \lab{CL} $, and then extend the proof strategy to sequent calculi for the extensions of $\mathbb{PCL}$.
We recall that all logics of the $\mathbb{PCL}$ family are decidable and their complexity is known.
\begin{remark}
\label{remark:complexity}
The complexity of the family of preferential conditional logics is studied in \cite{halpern1994complexity}, where it is shown that for systems \emph{without} uniformity and absoluteness the decision problem is $ \mbox{PSPACE} $-complete; for logics with uniformity, the decision problem is $ \mbox{EXPTIME} $-complete; and for systems with absoluteness, the decision problem is $ \mbox{NP} $-complete.
\end{remark}
\subsection{Decidability for $ \lab{CL} $}
In this section we define a proof search strategy which blocks the rule applications leading to non-terminating branches.
We first want to prevent applications of a rule R to a sequent that already contains the formulas introduced by R. This is done by defining saturation conditions for each rule.
\begin{definition}\label{def:saturation_conditions_PCL}
Let $ \mathcal{D} $ be a derivation in $ \lab{CL} $, and $ \mathcal{B} = S_0, S_1, \dots$ a derivation branch, where $ S_i $ is the sequent $\Gamma_i \Rightarrow \Delta_i $, for $ i = 1, 2, \dots $, and $ S_0 $ is the sequent $ \, \Rightarrow x:A_0 $. Let $ \downarrow \Gamma_{k} / \downarrow \Delta_{k}$ denote the union of the antecedents/succedents occurring in the branch from $ S_0 $ up to $ S_k $.
We say that a sequent $\Gamma \Rightarrow \Delta$ \emph{satisfies the saturation condition w.r.t.\ a rule} R if, whenever $\Gamma \Rightarrow \Delta$ contains the principal formulas in the conclusion of R, then it also contains the formulas introduced by \emph{one} of the premisses of R. The saturation conditions are listed in Figure \ref{fig:saturation_conditions_PCL}.
We say that $\Gamma \Rightarrow \Delta$ is \emph{saturated} if there is no formula $ x:p $ occurring in $ \Gamma \cap \Delta $, there is no formula $ x: \bot $ occurring in $ \Gamma $, and $\Gamma \Rightarrow \Delta$ satisfies \emph{all} the saturation conditions listed in the upper part of Figure \ref{fig:saturation_conditions_PCL}.
\end{definition}
\begin{figure}
\caption{Saturation conditions associated to $ \lab{CL} $}
\label{fig:saturation_conditions_PCL}
\end{figure}
In Example \ref{ex:loop_immediate}, the second bottom-up application of $ \mathsf{L\fu} $ is blocked by the saturation condition associated to $ \mathsf{L\fu} $, since formula $ x:C $ already occurs in some antecedent of the derivation branch.
In order to block the other cases of loop, we need to modify the rules of $ \lab{CL} $, and define a proof search strategy which governs the application of rules in root-first proof search.
\begin{definition}
We modify rule $ \mathsf{L>} $ into rule $\mathsf{L>}T $, and add rule $ \mathsf{Mon\forall} $ to $ \lab{CL} $.
\
\begin{adjustbox}{max width = \textwidth}
\begin{tabular}{c}
$ \infer[\mathsf{L>}T]{a \in N(x),x:A>B, \Gamma \Rightarrow \Delta }{ a \in N(x),x:A>B, \Gamma \Rightarrow \Delta, a \Vdash^\exists A & a \Vdash^\exists A, x \Vdash_a A|B, a \in N(x),x:A>B, \Gamma \Rightarrow \Delta} $\\[1ex]
$ \infer[\mathsf{Mon\forall}]{b \subseteq a, a \Vdash^\forall A, \Gamma \Rightarrow \Delta}{b \subseteq a, b \Vdash^\forall A, a \Vdash^\forall A, \Gamma \Rightarrow \Delta}$
\end{tabular}
\end{adjustbox}
\end{definition}
\begin{lemma}
In $ \lab{CL} $ it holds that:
\begin{enumerate}
\item Rule $ \mathsf{Mon\forall} $ is admissible;
\item Rules $ \mathsf{L>} $ and $ \mathsf{L>}T $ are equivalent.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof of 1 is immediate, by induction on the height of the derivation.
To prove that $ \mathsf{L>} $ is admissible if we have $ \mathsf{L>}T $, apply weakening to the right premiss of $ \mathsf{L>} $ and then apply $ \mathsf{L>}T $ to obtain the conclusion of $ \mathsf{L>} $. To prove that $ \mathsf{L>}T $ is admissible if we have $ \mathsf{L>} $, we need admissibility of $ \mathsf{Cut} $. Let (1) and (2) denote the left and right premiss of $ \mathsf{L>}T $. The conclusion of $ \mathsf{L>}T $ is derived as follows:
$$
\infer[\mathsf{L>}]{a \in N(x), x:A>B, \Gamma \Rightarrow \Delta}{
(1) &
\infer[\mathsf{Cut}]{ x\Vdash_a A|B, a \in N(x), x:A>B, \Gamma \Rightarrow \Delta }{
\infer[\mathsf{Wk_L}]{ x\Vdash_a A|B, a \in N(x), x:A>B, \Gamma \Rightarrow \Delta, a \Vdash^\exists A }{ (1)}
&
(2)}
}
$$
\end{proof}
\begin{definition}
The saturation conditions for $ \mathsf{L>}T $ and $ \mathsf{Mon\forall} $ are defined in the lower part of Figure \ref{fig:saturation_conditions_PCL}. The list of saturation conditions needed for the termination proof is given by the conditions listed in the upper part of Figure \ref{fig:saturation_conditions_PCL}, in which the condition for $ \mathsf{L>} $ is replaced by the condition for $ \mathsf{L>}T $, and the saturation condition for $ \mathsf{Mon\forall} $ is added.
\end{definition}
We shall provide a decision procedure for sequent calculus $ \lab{CL} $ modified with rules $ \mathsf{Mon\forall} $ and $ \mathsf{L>}T $.
We now define the proof search strategy.
\begin{definition}
\label{def:Strategy}
When constructing root-first a derivation tree for a sequent $ \Rightarrow x_{0}:A $, apply the following \textit{proof search strategy}:
\begin{enumerate}
\item Apply rules which introduce a new label (\mathsf{0}h{dynamic} rules) only if rules which do not introduce a new label (\mathsf{0}h{static} rules) are not applicable; as an exception, apply $ \mathsf{R>} $ before $ \mathsf{L>}T $.
\item If a sequent satisfies the saturation condition associated to a rule R, do not apply R to that sequent.
\end{enumerate}
\end{definition}
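\noindent For illustration only, the strategy can be read as a simple scheduling policy over the rules. The following Python sketch makes this reading explicit; the predicates \texttt{applicable} and \texttt{saturated\_for}, the rule names and the toy sequent are hypothetical placeholders introduced for the sketch, not part of the calculus.
\begin{verbatim}
# A minimal sketch of the proof search strategy of Definition (def:Strategy).
# All names are illustrative placeholders: "applicable" and "saturated_for"
# stand for the (unspecified) checks on a sequent; rules are plain strings.

def next_rule(sequent, static_rules, dynamic_rules, applicable, saturated_for):
    # Clause 1: static rules first; as an exception, R> is tried before L>T.
    ordering = (static_rules + ["R>"]
                + [r for r in dynamic_rules if r != "R>"])
    for rule in ordering:
        # Clause 2: never apply a rule whose saturation condition holds.
        if applicable(rule, sequent) and not saturated_for(rule, sequent):
            return rule
    return None  # nothing applicable: the sequent is initial or saturated

# Toy usage: one static and two dynamic rules, everything applicable,
# no saturation condition met yet; the static rule is chosen first.
rule = next_rule("=> x0:A", ["Lconj"], ["R>", "L>T"],
                 applicable=lambda r, s: True,
                 saturated_for=lambda r, s: False)
assert rule == "Lconj"
\end{verbatim}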
Observe that if the strategy is applied, world labels in root-first proof search are processed one after the other, according to the order in which they are generated.
\begin{example}\label{ex:solution}
In Example \ref{ex:simple_loop}, the loop is stopped because of the proof search strategy and the saturation condition for $ \mathsf{L|} $, which blocks the uppermost application of the rule to the formula $ x \barra{b} A|B $.
The proof strategy requires static rules to be applied \emph{before} dynamic rules. Thus, the static rule $ \mathsf{Ref} $ is applied \emph{before} the uppermost occurrence of the dynamic rule $ \mathsf{L|} $, introducing in the derivation formula $ b\subseteq b $\footnote{By a similar argument, also $ a\subseteq a $ and a number of other formulas should occur in the derivation before the uppermost application of $ \mathsf{L|} $; but they are not relevant here.}.
The saturation condition for $ \mathsf{L|} $ applied to $ x \barra{b} A|B $ is met if there is some label $ d $ such that formulas $ d \subseteq b $, $ d \in N(x) $, $ d \Vdash^\exists A $ and $ d \Vdash^\forall A\rightarrow B $ already occur in the antecedent of a sequent occurring lower in the branch. Thus, if we take $ d $ to be $ b $ itself, the saturation condition is met and the uppermost occurrence of $ \mathsf{L|} $ cannot be applied.
To see how the loop in Example \ref{ex:complex_loop} is stopped, we re-write the derivation according to the proof search strategy, highlighting the formulas to which rule $ \mathsf{L|} $ cannot be applied.
Observe that here rule $ \mathsf{L>}T $ and $ \mathsf{Mon\forall} $ become relevant.
The same conventions as in Example \ref{ex:complex_loop} apply.
\begin{adjustbox}{max width= \textwidth}
$$
\infer[\mathsf{L>}T\mbox{(2)}]{a\in N(x), \Omega \Rightarrow \Delta }{
\infer[\mathsf{L|}]{a \Vdash^\exists A, a \Vdash^\exists C, x \barra{a} A|B, x\barra{a} C|D,\Omega \Rightarrow \Delta}{
\infer[\mathsf{L|}]{b \subseteq a, b\in N(x), b\Vdash^\exists A, b\Vdash^\forall A \rightarrow B,x\barra{a} C|D, \Omega \Rightarrow \Delta}{
\infer[\mathsf{Ref} ~\mbox{(2)}]{c \in N(x), c \subseteq a, c \Vdash^\exists C, c \Vdash^\forall C\rightarrow D, \dots, \Omega \Rightarrow \Delta}{
\infer[\mathsf{L>}T \mbox{(4)}]{b \subseteq b, c \subseteq c,c \in N(x), c \subseteq a, c \Vdash^\exists C, c \Vdash^\forall C\rightarrow D, \dots, \Omega \Rightarrow \Delta}{
\infer[\mathsf{L|}] {b \Vdash^\exists A, b \Vdash^\exists C, c \Vdash^\exists A, c \Vdash^\exists C, \mathbf{x\barra{b} A|B}, x\barra{b} C|D, x\barra{c} A|B, \mathbf{x\barra{c} C|D},\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{Mon\forall}]{ d \in N(x), d\subseteq b, d \Vdash^\exists C, d \Vdash^\forall C\rightarrow D, x\barra{c} A|B,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{L|}]{d \in N(x), d\subseteq b, d \Vdash^\exists C, d \Vdash^\forall C\rightarrow D, d \Vdash^\forall A\rightarrow B, x\barra{c} A|B,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{Mon\forall}]{e \in N(x), e\subseteq c, e \Vdash^\exists A, e \Vdash^\forall A\rightarrow B,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{Ref}~ \mbox{(2)}]{e \in N(x), e\subseteq c, e \Vdash^\exists A, e \Vdash^\forall A\rightarrow B, e \Vdash^\forall C\rightarrow D,\dots, \Omega\Rightarrow \Delta}{
\infer[\mathsf{L>}T \mbox{(4)}]{d \subseteq d, e \subseteq e, e \in N(x), e\subseteq c, e \Vdash^\exists A, e \Vdash^\forall A\rightarrow B, e \Vdash^\forall C\rightarrow D,\dots, \Omega\Rightarrow \Delta}{
\deduce{d\Vdash^\exists A, d \Vdash^\exists C, e \Vdash^\exists A, e \Vdash^\exists C, \mathbf{x\barra{d} A|B}, \mathbf{x\barra{d} C|D}, \mathbf{x\barra{e} A|B}, \mathbf{x\barra{e} C|D}, \cdots, \Omega \Rightarrow \Delta }{\vdots}
}
}
}
}
}
}
}
}
}
}
}
$$
\end{adjustbox}
\
\noindent Application of $ \mathsf{L|} $ to formula $ x \barra{b} A|B $ is blocked by the saturation condition, since $ b\subseteq b $, $ b\in N(x) $, $ b \Vdash^\exists A $ and $ b \Vdash^\forall A\rightarrow B $ all occur in the branch. Application of the rule to $ x\barra{c} C|D $ is blocked in a similar way.
Application of $ \mathsf{L|} $ to $ x \barra{d} A|B $ is blocked, since all the formulas relevant for the saturation condition occur in the branch: $ d \subseteq d $ (introduced by $ \mathsf{Ref} $), $ d \in N(x) $, $ d \Vdash^\exists A $ (introduced by $ \mathsf{L>}T $) and $ d \Vdash^\forall A \rightarrow B $ (introduced by $ \mathsf{Mon\forall} $). Application of $ \mathsf{L|} $ to the other formulas in the top sequent is blocked, and the loop is stopped.
\end{example}
Before tackling the termination proof, we define an ordering of the world labels according to their generation in the branch. The resulting tree of labels is needed to ensure that the number of formulas introduced in root-first proof search is \emph{finite}.
\begin{definition} \label{def:graph_labels}
Given a sequent $ \Gamma_k \Rightarrow \Delta_k $, let $ a $, $ b $ be neighbourhood labels and $ x $, $ y $ world labels occurring in $ \downarrow \Gamma_k \cup \downarrow\Delta_k $. We define:
\begin{itemize}
\item $ k(x)= min \{t \mid x \mathit{\ occurs \ in \ } \Gamma_t\} $;
\item $ k(a)= min \{t \mid a \mathit{\ occurs \ in \ } \Gamma_t \}$;
\item $ x \prec_g a $, \textquotedblleft $ x $ generates $ a $\textquotedblright ~ if for some
$ t\leqslant k $ and $ k(a)=t $, $ a \in N(x) $ occurs in $ \Gamma_t $;
\item $ b \prec_g y $, \textquotedblleft $ b $ generates $ y $\textquotedblright ~ if for some $ t\leqslant k $ and $ k(y)=t $, $ y\in b $ occurs in $ \Gamma_t $;
\item $ x \prec_w y$ if for some $ a $, $ x\prec_g a $ and $ a \prec_g y$ and $ x \neq y $.
\end{itemize}
\end{definition}
\noindent Intuitively, the relation $ x\prec_g a $ holds between $ x $ and $ a $ if $ a\in N(x) $ is introduced at some stage in the derivation (thus, with an application of $ \mathsf{R>} $ or $ \mathsf{L|} $); similarly, the relation $ b \prec_g y $ holds between $ b $ and $ y $ if $ y \in b $ is introduced in the derivation (thus, applying either $ \mathsf{R\fu} $ or $ \mathsf{L\fe} $).
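\noindent As a purely illustrative aside, the generation relations can be computed directly from the relational atoms occurring in a branch. In the Python sketch below the tuple encoding of atoms and all function names are assumptions made only for the sketch; world and neighbourhood labels are assumed to be distinct strings.
\begin{verbatim}
# Sketch of the relations of Definition (def:graph_labels). Relational atoms
# are encoded, by assumption, as ("inN", a, x) for "a in N(x)" and
# ("in", y, b) for "y in b"; antecedents[t] is the antecedent Gamma_t.

def label_tree(antecedents):
    k = {}                                # k(label): stage of first occurrence
    for t, atoms in enumerate(antecedents):
        for _, u, v in atoms:
            k.setdefault(u, t)
            k.setdefault(v, t)
    gen = set()                           # the generation relation
    for t, atoms in enumerate(antecedents):
        for kind, u, v in atoms:
            if k[u] == t:                 # the label u first appears here
                gen.add((v, u))           # v generates u
    # x generates the world y iff x generates some a and a generates y
    succ = {(x, y) for (x, a) in gen for (b, y) in gen if a == b and x != y}
    return gen, succ

# Toy branch: Gamma_1 introduces a in N(x0), Gamma_2 introduces y in a.
gens, succ = label_tree([set(),
                         {("inN", "a", "x0")},
                         {("inN", "a", "x0"), ("in", "y", "a")}])
assert ("x0", "y") in succ
\end{verbatim}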
\begin{lemma} \label{lemma:graph_labels}
Given a derivation branch, the following hold:
\begin{itemize}[noitemsep]
\item[(a)] The relation $ \prec_w $ is acyclic and forms a tree with the world label $ x_0 $ at the root;
\item[(b)] All labels occurring in a derivation branch also occur in the associated tree; that is, letting $ \prec_w^{+} $ be the transitive closure of $ \prec_w $, if $ u $ occurs in $ \downarrow \Gamma_k $, then $ x_0 \prec_w^{+} u $.
\end{itemize}
\end{lemma}
\begin{proof}
(a) follows from the definition of relation $ \mathsf{w}w $ and from the sequent calculus rules.
Observe that the relation of generation $ \prec_g $ between world and neighbourhood labels is \mathsf{0}h{unique}: it is defined by taking into account the value $ k(a) $ or $ k(y) $, which keeps track of the derivation step at which the new label is introduced. At each derivation step, dynamic rules introduce a new label which is generated by at most one world or neighbourhood label. Take a sequent $ \Gamma_k \Rightarrow \Delta_k $, and suppose $ x \prec_g a $: by definition, there is a $ t \leqslant k $ such that $ k(a) = t $ and $ a \in N(x) $ occurs in $ \Gamma_t $. Now suppose that $ a \in N(y) $ occurs in some $ \Gamma_{s} $, with $ t < s \leqslant k $. Since $ k(a) =t $, relation $ y \prec_g a $ does not hold in the tree of labels. A similar reasoning holds for $ y \prec_g a $. Thus, except for the label at the root, each label in a derivation branch has \mathsf{0}h{exactly} one parent according to the relation $ \prec_g $ and, by definition, also according to $ \mathsf{w}w $.
As for (b), it is easily proved by induction on $ k(u) \leqslant k $. If $ k(u)=0 $, then $ u=x_0 $ and (b) trivially holds. If $ k(u)=t>0 $, $ u $ does not occur in $ \Gamma_{t-1} $ and $ u $ occurs in $ \Gamma_t $. This means that there exist a $ v $ and a $ b $ such that $ b\in N(v)$ occurs in $ \Gamma_{t-1} $ and $ u\in b $ occurs in $ \Gamma_t $; thus, $ k(v) < k(u) $. By inductive hypothesis, $ x_0\prec_w^{+} v $; since $ v \prec_w u $, also $ x_0 \prec_w^{+} u $ holds.
\end{proof}
\begin{definition} \label{def:conditional_degree}
The \mathsf{0}h{size} of a formula $ A $, denoted by $ \size{A}$, is the number of symbols occurring in $ A $.
The \mathsf{0}h{conditional degree} of a formula $ A $ corresponds to the level of nesting of the conditional operator in $ A $ and is defined as follows:
\begin{itemize}[noitemsep]
\item $ d(p)= d(\bot) =0 $ for $ p $ atomic;
\item $ d(C \circ D)= max(d(C), d(D)) $ for $ \circ \in \{ \wedge, \lor, \rightarrow \}$;
\item $ d(C>D)= max(d(C) , d(D)) +1 $.
\end{itemize}
Given a sequent $ \Gamma \Rightarrow \Delta $ occurring in a derivation branch $ \mathcal{B} $, the conditional degree of a world label is the highest conditional degree among the formulas it labels:
$$
d(x) = \textit{max}\{ d(C) \mid x:C \in \downarrow \Gamma \cup \downarrow \Delta \}.
$$
\end{definition}
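\noindent For instance, for $ A_0 = (p>q)>(r \wedge s) $ we have $ d(p>q)= max(d(p),d(q))+1 = 1 $ and $ d(r \wedge s)=0 $, whence $ d(A_0)= max(1,0)+1 = 2 $; in particular $ d(A_0) \leqslant \size{A_0} $, which is the only property of $ d $ used in the termination argument below.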
\noindent We now prove that the proof search strategy ensures termination.
\begin{theorem}[Termination]\label{theorem:termination}
Root-first proof search for a $ \lab{CL} $ derivation for a sequent $ \Rightarrow x_0 :A_0$ built in accordance with the strategy terminates in a finite number of steps, with either an initial sequent or a saturated sequent.
\end{theorem}
\begin{proof}
To prove that root-first proof search terminates, we have to show that all the branches of a derivation starting with $ \Rightarrow x_0 :A_0 $ and built in accordance with the proof search strategy are finite. We take an arbitrary derivation branch $ \mathcal{B} $.
Since $ \lab{CL} $ rules do not increase the complexity of formulas when going from the conclusion to the premiss(es), the only source of non-termination in the branch is the presence of an infinite number of labels. We need to show that the tree of labels associated to $ \mathcal{B} $ is finite. Let us call $ \mathcal{G} $ the graph associated to $ \mathcal{B} $ according to Definition \ref{def:graph_labels}. This amounts to proving that:
\begin{enumerate}
\item Each branch of $ \mathcal{G} $ has a finite length;
\item Each node of $ \mathcal{G} $ has a finite number of immediate successors.
\end{enumerate}
Claim 1 is proved by induction on the conditional degree of a label $ y $ occurring in the branch. If $ d(y) = 0 $, $ y $ labels either an atomic formula or a propositional formula. In any case, no new world labels are generated from $ y $, and the branch is finite. If $ d(y)>0 $, it means that $ y $ labels some conditional formula. In this case, $ y $ generates at least one world label $ z $, meaning that
for some neighbourhood label $ a $, $ y \prec_g a $ and $ a \prec_g z $. By definition, $ y \prec_g a $ if rule $ \mathsf{R>} $ or $ \mathsf{L|} $ are applied in the derivation branch, introducing formula $ a \in N(y) $. Similarly, $ a \prec_g z $ if formula $ z \in a $ has been introduced in the branch by application of $ \mathsf{L\fe} $ or $ \mathsf{R\fu} $. Thus, a new world label $ z $ can be generated from a world label $ y $ by a combination of the above rules, possibly with the addition of static rules. In any case, it holds that the conditional degree of the formulas labelled with $ z $ is strictly smaller than the conditional degree of the formulas labelled with $ y $. To see this, suppose that $ y:A>B $ occurs in the consequent of some sequent in the branch. Application of $ \mathsf{R>} $ introduces a relational atom $ a \in N(y) $, and generates a formula $ y \barra{a} A|B $ in the consequent. Application of rule $ \mathsf{R|} $ introduces in the consequent either formula $ a \Vdash^\exists A $, to which no dynamic rules can be applied, or formula $ a \Vdash^\forall A\rightarrow B $. In this case, rule $ \mathsf{R\fu} $ can be applied, and a new world label $ z \in a $ is generated, along with formula $ z:A\rightarrow B $ in the consequent. It holds that $ d(z) < d(y) $, and similar considerations apply for the other rules combinations.
It holds that $ d(A_0) $ is bounded by the size of the formula $ A_0 $ at the root. Thus, for $ n = \size{A_0} $, the maximal length of each branch of $ \mathcal{G} $ is bounded by $ O(n) $.
\
Proving claim 2 requires some care. By definition, a world label $ z $ is generated by a world label $ y $ if there is some neighbourhood label $ a$ such that $ y \prec_g a $ and $ a \prec_g z $, for $ k(y) = s $, $ k(a) = t $ and $ k(z)= u $ with $ s<t<u $. To prove that the number of world labels generated by some $ y $ is finite, we need to prove that:
\begin{itemize}
\item[$a)$] A world label $ y $ generates a finite number of neighbourhood labels;
\item[$b)$] A neighbourhood label $ a $ generates a finite number of new world labels.
\end{itemize}
As for $ a) $, observe that a new neighbourhood label can be generated by application of $ \mathsf{R>} $ or $ \mathsf{L|} $. In the former case, the rule is applied to some formula $ y:A>B $ occurring in $ \Delta_{t-1} $. Since the formula disappears from $ \Delta_t $, rule $ \mathsf{R>} $ can be applied only once. Thus, the number of new neighbourhood labels depends linearly on the size of the formula $ A_0 $ at the root of the derivation. \\
The case in which the new neighbourhood is generated by $ \mathsf{L|} $ is more complex, since the rule may interact with rule $ \mathsf{L>}T $, as shown in Examples \ref{ex:simple_loop} and \ref{ex:complex_loop}.
To see how the loop is stopped in the general case, suppose that one neighbourhood label $ a \in N(y) $ occurs in the antecedent of some sequent in $ \mathcal{B} $, along with $ n $ conditional formulas $ y:A_1>B_1, \dots , y:A_n>B_n $.
After $ n $ applications of $ \mathsf{L>}T $, $ n $ formulas $ y \barra{a}A_1|B_1, \dots , y \barra{a} A_n|B_n $ occur in the antecedent. By $ n $ applications of $ \mathsf{L|} $, $ n $ new neighbourhood labels are generated, along with the following formulas in the antecedent:
\
\begin{tabular}{c }
$b_1 \subseteq a, b_1 \in N(y), b_1 \Vdash^\exists A_1, b_1 \Vdash^\forall A_1 \rightarrow B_1$ \\
\multicolumn{1}{c}{\vdots}\\
$b_n \subseteq a, b_n \in N(y), b_n \Vdash^\exists A_n, b_n \Vdash^\forall A_n \rightarrow B_n$\\
\end{tabular}
\
\noindent Now, rule $ \mathsf{L>}T $ can be applied to all the conditional formulas and all the neighbourhood labels just introduced. Thus, $ n\cdot n $ formulas are generated in the antecedent, along with formulas $ b_1 \subseteq b_1 , \dots, b_n \subseteq b_n $ introduced by $ \mathsf{Ref} $.
\
\begin{tabular}{c c c}
$b_1 \subseteq b_1, y\barra{b_1} A_1|B_1,$ & \dots &$ y\barra{b_1} A_n|B_n$\\
\vdots & & \vdots \\
$b_n \subseteq b_n, y\barra{b_n} A_1|B_1,$ & \dots &$ y\barra{b_n} A_n|B_n$\\
\end{tabular}
\
\noindent In principle, application of $\mathsf{L|} $ yields $ n\cdot n $ new neighbourhood labels; however, $ n $ applications of the rule are blocked by the saturation condition associated to the rule. More precisely, $ \mathsf{L|} $ cannot be applied to formula $ y\barra{b_1} A_1|B_1 $, because formulas $ b_1 \subseteq b_1$, $b_1 \in N(y)$, $b_1 \Vdash^\exists A_1 $ and $ b_1 \Vdash^\forall A_1\rightarrow B_1 $ occur lower in the branch. Similarly, the saturation condition for $ \mathsf{L|} $ blocks applications of the rule to formulas $ y\barra{b_2} A_2|B_2 $, $ y \barra{b_3} A_3|B_3 $, and so on. Thus, only $ n (n-1) $ new neighbourhood labels are generated. Let $ k = n-1 $.
\
\noindent
\begin{tabular}{c c c }
$c^1_2 \subseteq b_1, c^1_2 \Vdash^\exists A_2, c^1_2 \Vdash^\forall A_2 \rightarrow B_2$ & \dots & $c^1_n \subseteq b_1, c^1_n \Vdash^\exists A_n, c^1_n \Vdash^\forall A_n \rightarrow B_n$ \\
\vdots & & \vdots \\
$c^n_1 \subseteq b_n, c^n_1 \Vdash^\exists A_1, c^n_1 \Vdash^\forall A_1 \rightarrow B_1$ & \dots & $c^n_k \subseteq b_n, c^n_k \Vdash^\exists A_{k}, c^n_{k} \Vdash^\forall A_{k} \rightarrow B_{k}$ \\
\end{tabular}
\
\noindent
Before applying $ \mathsf{L>}T $, we exhaustively apply the static rules $ \mathsf{Ref} $ and $ \mathsf{Mon\forall} $, obtaining the following formulas (recall that $ k = n-1 $):
\
\begin{tabular}{c c c}
$ c^1_2 \subseteq c^1_2, c^1_2 \Vdash^\forall A_1 \rightarrow B_1 $ & \dots & $ c^1_n \subseteq c^1_n, c^1_n \Vdash^\forall A_1 \rightarrow B_1 $\\
\vdots & & \vdots \\
$ c^n_1 \subseteq c^n_1, c^n_1 \Vdash^\forall A_n \rightarrow B_n $ & \dots & $ c^n_k \subseteq c^n_k, c^n_k \Vdash^\forall A_n \rightarrow B_n $\\
\end{tabular}
\
We now apply $ \mathsf{L>}T $, and introduce $ n \cdot n (n-1) $ formulas to which $ \mathsf{L|} $ can be applied. Let us consider the $ n $ formulas generated from application of the rule to label $ c^1_2 $. Recall that $ \mathsf{L>}T $ also introduces local forcing formulas.
$$
c^1_2 \Vdash^\exists A_1, \dots, c^1_2 \Vdash^\exists A_n, y \barra{c^{1}_{2}} A_1 |B_1, y \barra{c^{1}_{2}} A_2 |B_2, y \barra{c^{1}_{2}} A_3 |B_3, \dots , y \barra{c^{1}_{2}} A_n |B_n
$$
The application of $ \mathsf{L|} $ to formula $ y \barra{c^{1}_{2}} A_2 |B_2 $ is blocked by the saturation condition: formulas $ c^1_2 \subseteq c^1_2 $, $ c^1_2 \Vdash^\exists A_2 $ and $ c^1_2 \Vdash^\forall A_2 \rightarrow B_2 $ occur in the branch. Application of the rule to $ y \barra{c^{1}_{2}} A_1 |B_1 $ is also blocked: formulas $ c^1_2 \Vdash^\exists A_1 $ and $ c^1_2 \Vdash^\forall A_1 \rightarrow B_1 $ have been introduced in the branch by $ \mathsf{L>}T $ and $ \mathsf{Mon\forall} $ respectively. Thus, rule $ \mathsf{L|} $ can be applied only $ n(n-1)(n-2) $ times, generating the same number of new neighbourhood labels.
The process continues: after the next applications of $ \mathsf{L>}T $ and $ \mathsf{L|} $, $ n(n-1)(n-2)(n-3) $ new labels are introduced, and so on. The number of $ \mathsf{L|} $ rule applications blocked by the saturation condition strictly increases, until all the generated neighbourhood labels are blocked.
To be more precise, count as one step in the generation process all applications of $ \mathsf{L>}T $, $ \mathsf{Mon\forall} $, $ \mathsf{Ref} $ and $ \mathsf{L|} $ to a sequent. During the $ i $-th step, rule $ \mathsf{L>}T $ generates a number $ n $ of formulas $ x \barra{e} G|H $ for each neighbourhood label occurring in the sequent. Then, rule $ \mathsf{L|} $ can be applied, introducing a new neighbourhood label for each application. However, out of every $ n $ formulas $ x \barra{e} G|H$, $ i-1 $ applications of $ \mathsf{L|} $ are blocked.
$$
\# \textit{ of new neighbourhood labels at the } i^{th} \textit{ step } = ~ \frac{n!}{(n-i)!}
$$
It follows that after $ n+1 $ steps, all the generated neighbourhood labels are blocked and, as a consequence, all applications of $ \mathsf{L>}T $ and $ \mathsf{L|} $ are blocked.
In general, for each neighbourhood label $ a \in N(y) $ and $ n $ conditional formulas labelled with $ y $, we generate at most
$$
\sum_{k=1}^{n-1} \frac{n!}{(n-k)!}
$$
new neighbourhood labels. The number of neighbourhood labels generated by $ \mathsf{L|} $ and $ \mathsf{L>} $ is bounded by $ O((n-1) \cdot n!) $, since at most $ n-1 $ terms appear in the sum, and the biggest term in the sum is $ n! $. This can be bounded by $ O(n^2 \cdot n!) $.
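\noindent As a quick sanity check of the count, take $ n = 4 $: the generation process yields at most $ \frac{4!}{3!}+\frac{4!}{2!}+\frac{4!}{1!} = 4 + 12 + 24 = 40 $ new neighbourhood labels, which is indeed within the stated bound $ O(n^{2}\cdot n!) $ (here $ n^{2}\cdot n! = 384 $).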
\
To prove $ b) $, recall that a new world label $ z $ is generated from a neighbourhood label $ a $ if rule $ \mathsf{L\fe} $ or $ \mathsf{R\fu} $ is applied in the derivation. Since in both rules the principal formula disappears from the premiss, each rule can be applied at most once to each suitable formula, generating one world label for each application. Thus, the number of world labels generated depends linearly on the size of the formula $ A_0 $ at the root of the derivation.
\
Since $ \mathcal{G} $ has a finite number of nodes, the world and neighbourhood labels in the derivation are finite. Since there are only finitely many pure formulas (all subformulas of $ A_0 $), in a finite number of steps proof search terminates, yielding either a saturated sequent or an initial sequent.
\end{proof}
Take $ n =\size{A_0} $. The number of labels generated from a node of $ \mathcal{G} $ is counted as follows. The number of neighbourhood labels generated by $ \mathsf{R>} $ is $ O(n) $. Since the number of conditional formulas in the derivation is bounded by $ \size{A_0} $, the number of neighbourhood labels generated by $ \mathsf{L|} $ and $ \mathsf{L>} $ is bounded by $ O(n^2\cdot n!) $. Each neighbourhood label generates one new world label; thus, the maximal number of world labels generated from a world label is bounded by $ O(n^2 \cdot n!) $.
To conclude, since the maximal length of each branch is bounded by $ O(n) $, the maximal number of world labels introduced in a derivation branch is bounded by $ O(n^3\cdot n!) $.
To obtain a complexity bound for the decision procedure associated to $ \lab{CL} $, the maximal number of labels has to be combined with the number of formulas generated at each step. The exponential bound on labels, however, already shows that the complexity of the decision procedure is $\mbox{NEXPTIME}$, far from the $\mbox{PSPACE} $ bound known for $ \mathbb{PCL} $ (see Remark \ref{remark:complexity}).
\subsection{Decidability for extensions}
Theorem \ref{theorem:termination} can be extended to the calculi for most extensions of $\mathbb{PCL}$.
We show how sequent calculi for logics with normality, total reflexivity, weak centering, centering and uniformity terminate. We do not treat extensions of $ \lab{CL} $ with the rules for absoluteness. In these logics all $N(x)$ are the same, and there is no need to keep track of the system of neighbourhoods $ N(x) $ to which a certain neighbourhood $ \alpha $ belongs. This simplification is not reflected by sequent calculus $ \lab{CLA} $, which is defined as a \emph{modular} extension of $ \lab{CL} $. Thus, proving termination of $ \lab{CLA} $ is not worthwhile, since the simplest extension of $ \mathbb{PCL} $ would have the most complex decision procedure\footnote{Refer to \cite{girlandothesis} for a terminating labelled sequent calculus more suitable to treat the condition of absoluteness. The resulting decision procedure, however, is still not optimal.}.
In order to treat the extensions of $\mathbb{PCL}$ we define saturation conditions for the additional rules and prove that the tree of labels corresponding to a derivation branch is finite.
The rules we are concerned with are $ \mathsf{N} $, $ \mathsf{0} $, $ \mathsf{T} $, $ \mathsf{W} $, $ \mathsf{C} $, $ \mathsf{Single} $, $ \Repl{1} $, $ \Repl{2} $, $ \mathsf{Unif}_1 $ and $ \mathsf{Unif}_2 $. Proof of termination for sequent calculi featuring a combination of these rules can be obtained by combining the proof strategies presented in this section.
We start by adding to the conditions in Figure \ref{fig:saturation_conditions_PCL} the saturation conditions for these new rules, listed in Figure \ref{fig:saturation_cond_ext}.
\begin{figure}
\caption{Saturation conditions for extensions}
\label{fig:saturation_cond_ext}
\end{figure}
\begin{definition}
The proof search strategy from Definition \ref{def:Strategy} is supplemented with the following clause:
\begin{itemize}
\item[3.] Rule $ \mathsf{0} $ can be applied to a sequent and a formula $a\in N(x)$ only if some formula $ a \Vdash^\exists A $ occurs in the consequent, or some formula $ a \Vdash^\forall A $ occurs in the antecedent.
\end{itemize}
\end{definition}
Let us see how the proof search strategy stops the two new cases of loop generated by the rules for extensions.
The interaction of $ \mathsf{0} $ and $ \mathsf{N} $ generates the following loop.
$$
\infer[\mathsf{N}]{ x: A, \Gamma \Rightarrow \Delta}{
\infer[\mathsf{0}]{ a \in N(x), x: A, \Gamma \Rightarrow \Delta }{
\infer[\mathsf{N}]{ y \in a, a \in N(x), x: A, \Gamma \Rightarrow \Delta}{
\infer[\mathsf{0}]{b \in N(y), y \in a, a \in N(x), x: A, \Gamma \Rightarrow \Delta}{
\deduce{z \in b, b \in N(y), y \in a, a \in N(x), x: A, \Gamma \Rightarrow \Delta}{
\vdots
}
}
}
}
}
$$
If no formulas $ a \Vdash^\exists A $ occur in $ \Delta $ and no formulas $ a \Vdash^\forall A $ occur in $ \Gamma $, the first application of rule $ \mathsf{0} $ is blocked. Suppose $ a \Vdash^\exists A $ occurs in $ \Delta $. Then $ \mathsf{0} $ is applied, but if restriction 3 is not met by neighbourhood $ b $, the second uppermost application of $ \mathsf{0} $ is stopped. The number of formulas $ a \Vdash^\exists A $ in the consequent and $ a \Vdash^\forall A $ in the antecedent is bounded by the conditional degree of formulas at the root; thus, the loop is stopped. Intuitively, rule $ \mathsf{0} $ needs to be applied only to the neighbourhood label introduced by $ \mathsf{N} $, to ensure that it is not empty\footnote{Refer to the derivation of axiom (N) in the Appendix.}. The neighbourhoods introduced by $ \mathsf{T} $, $ \mathsf{Unif}_1 $ or $ \mathsf{Unif}_2 $ already contain an element, so no other loops with $ \mathsf{0} $ arise.
Applications of $ \mathsf{Unif}_1 $ and $ \mathsf{Unif}_2 $ generate the following loop, where we take $ \Omega = a \in N(x), y \in a, b \in N(y), z \in b $.
$$
\infer[\mathsf{Unif}_1]{\Omega, \Gamma \Rightarrow \Delta }{
\infer[\mathsf{Unif}_2]{c \in N(x), z \in c, \Omega, \Gamma \Rightarrow \Delta}{
\infer[\mathsf{Unif}_1]{d \in N(y), z \in d, c \in N(x), z \in c, \Omega, \Gamma \Rightarrow \Delta}{
\infer[\mathsf{Unif}_2]{ e \in N(x), z \in e, d \in N(y), z \in d, c \in N(x), z \in c, \Omega, \Gamma \Rightarrow \Delta}{
\deduce{f \in N(y), z \in f, e \in N(x), z \in e, d \in N(y), z \in d, c \in N(x), z \in c, \Omega, \Gamma \Rightarrow \Delta}{\vdots}
}
}
}
}
$$
The saturation condition for $ \mathsf{Unif}_2 $ blocks the first bottom-up application of the rule: there is a neighbourhood label $ b $ such that $ b \in N(y) $ and $ z \in b $ are in $ \Gamma $. Similarly, a loop generated by $ \Repl{1} $ and $ \Repl{2} $ is blocked by their saturation conditions.
\
We now prove termination for the sequent calculi extending $ \lab{CL} $, adapting the proof of termination for $ \lab{CL} $ (Theorem \ref{theorem:termination}). Observe that Lemma \ref{lemma:graph_labels} holds for all the extensions considered: thus, the world labels occurring in a derivation branch form a tree according to the relation $ \prec_w $.
\begin{theorem}[Termination]\label{theorem:termination_ext}
Root-first proof search for a sequent $ \Rightarrow x_0 :A_0$ in the sequent calculi $ \lab{CL^N} $, $ \lab{CL^T} $, $ \lab{CL^W} $, $ \lab{CL^C} $, $ \lab{CL^U} $, $ \lab{CL^{NU}} $, $ \lab{CL^{TU}} $, $ \lab{CL^{WU}} $ and $ \lab{CL^{CU}} $, built in accordance with the strategy, terminates in a finite number of steps, with either an initial sequent or a saturated sequent.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{theorem:termination}, we need to check that the tree of labels $ \mathcal{G} $ associated to an arbitrary derivation branch is finite:
\begin{enumerate}
\item Each branch of $ \mathcal{G} $ has a finite length;
\item Each node of $ \mathcal{G} $ has a finite number of immediate successors.
\end{enumerate}
As for 1, the proof remains the same as in Theorem \ref{theorem:termination}.
Rule $ \mathsf{0} $ introduces a new world label but, as we have seen, applications of this rule are restricted: the rule can be applied only if afterwards some rule of local forcing can be applied to the new world label. Since the number of local forcing formulas occurring in a derivation branch is bounded by the size of formula $ A_0 $, the length of a branch in $ \mathcal{G} $ starting from a world label $ y $ is still bounded by $ O(n) $, for $ n= \size{A_0} $.
Rules $ \mathsf{T} $, $ \mathsf{W} $ and $ \mathsf{Single} $ introduce in the derivation branch a world label $ x $ generated by $ x $ itself. By definition we required that $ x \neq y $ in order to have $ x \prec_w y $; thus, these rules do not introduce a new node in the tree of labels. Rule $ \mathsf{C} $ does not introduce a new world label in the derivation.
Replacement rules are applied only to atomic formulas, and operate exclusively on world labels which are already present in the derivation. Similarly, rules $ \mathsf{Unif}_1$ and $ \mathsf{Unif}_2 $ do not introduce new world labels in the derivation; thus, they do not affect the length of a branch in $ \mathcal{G} $.
\
The proof of 2 remains basically the same as in Theorem \ref{theorem:termination}, meaning that the count of the number of new neighbourhood labels generated by one neighbourhood label and $ n $ formulas $ y:A_1>B_1\dots, y:A_n >B_n $ is still bounded by $ O(n^2 \cdot n!) $, for $ n = \size{A_0} $.
However, the number of neighbourhood labels generated from a world label increases, due to the presence of rules for extensions. Each rule $ \mathsf{N} $, $ \mathsf{T} $, $ \mathsf{C} $, if applicable, adds at most one neighbourhood label $ a \in N(y) $ or $ \{y\} \in N(y) $ to a world label $ y $. The number of neighbourhood labels generated by a world label is bounded by the size of the formula $ A_0 $ at the root; in logics with normality, total reflexivity and centering this number is at most $ n+3 $, i.e., still $ O(n) $.
As for the rules of replacement, given a world label $ y $ and a formula $ z \in \{y\} $, these rules may introduce in the derivation formulas $ a \in N(z) $ or $ a \in N(y) $. Thus, the number of neighbourhood labels introduced from a world label $ y $ is bounded by the size of formula $ A_0 $, and thus by $ O(n) $ (as before), to which we have to add the number of applications of replacement rules introducing relational atoms $ a \in N(y) $. Since replacement rules can be applied at most once to each $ z \in \{y\} $ and $ a \in N(z) $, the total number of neighbourhood labels is bounded by $ O(2n) $.
A similar reasoning holds for $ \mathsf{Unif}_1 $ and $ \mathsf{Unif}_2 $. The saturation conditions for uniformity prevent the application of both $ \mathsf{Unif}_1 $ and $ \mathsf{Unif}_2 $ to formulas $ c \in N(x) $ (or $ c \in N(y) $), and $ z \in c $ if the neighbourhood label has been generated by the rules of uniformity. Thus, only one rule of uniformity ($ \mathsf{Unif}_1 $ or $ \mathsf{Unif}_2 $) can be applied out of every 4 relational atoms $ a \in N(x)$, $y \in a$, $b\in N(y) $ (or $ b \in N(x) $) and $ z \in b $. Moreover, the rule can be applied at most once to these labels. We can thus estimate the maximal number of neighbourhood labels introduced by a world label to be bounded by $ O(2n) $.
Following the same reasoning as for $ \lab{CL} $, we have that the maximal number of world labels generated from a world label for calculi without centering or uniformity is given by $ O(n^2 \cdot n! )$, while for calculi with centering or uniformity the bound is $ O(2n \cdot n \cdot n! )$.
\end{proof}
To conclude, the maximal length of each branch in $ \mathcal{G} $ combined with the maximal number of nodes generated from a node yields the following maximal bounds for world labels introduced in a derivation branch:
$ O(n^3 \cdot n!) $ in case of calculi without centering or uniformity, and
$ O(2n \cdot n^2 \cdot n!) $ for calculi with centering or uniformity, always taking $ n = \size{A_0} $. In both cases, the decision procedure associated to the logic is $ \mbox{NEXPTIME} $.
\section{Semantic completeness }
\label{sec:semantic_compl}
Completeness of a sequent calculus can be proved either with respect to the axiom system or with respect to the class of models for the logic. Theorem \ref{theorme:completeness_synth}, along with Theorem \ref{cut_3} of cut admissibility, ensures the completeness of all calculi with respect to the axiomatization of the corresponding logics. In this section we prove the semantic completeness of the calculi: we show that if a formula is valid in the class of neighbourhood models for a given logic, then it is derivable in the corresponding sequent calculus. As usual, we prove the contrapositive statement: if a formula is not derivable in the sequent calculus, we can construct a countermodel (in the intended class). The model will be extracted from a saturated upper sequent.
Since the proof requires building a countermodel from a saturated sequent, termination of the calculi is needed. For this reason, we prove semantic completeness of all the systems, except for those with the condition of absoluteness.
\subsection{Completeness for $ \lab{CL} $}
\begin{theorem}\label{theorem:semantic_completeness}
Let $ \Gamma \Rightarrow \Delta $ be a saturated upper sequent in a derivation in $ \lab{CL} $. There exists a \textit{finite} countermodel $ \mathcal{M} $ that satisfies all formulas in $ \downarrow \Gamma $ and falsifies all formulas in $ \downarrow \Delta $.
\end{theorem}
\begin{proof}
Since $ \Gamma \Rightarrow \Delta $ is saturated, it is the upper sequent of a branch $ \mathcal{B} $.
We construct a model $ \mathcal{M}_\mathcal{B} $ that satisfies all formulas in $ \downarrow \Gamma $ and falsifies all formulas in $ \downarrow \Delta $. The countermodel contains the semantic information encoded in the sequents of the derivation branch. Let
$$
S_\mathcal{B} = \{ x \mid x \ \mathit{occurs \ in} \ \downarrow \Gamma \, \cup \downarrow \Delta \} \qquad N_\mathcal{B} = \{ a \mid a \ \mathit{occurs \ in} \ \downarrow \Gamma \, \cup \downarrow \Delta \}
$$
Then, we associate to each $ a \in N_\mathcal{B} $ a neighbourhood as follows:
$$
\alpha_a = \{ y \in S_\mathcal{B} \mid y \in a \ \mathit{ belongs \ to } \ \Gamma \}
$$
Thus, for each neighbourhood $ a $, $ \alpha_a \subseteq S_\mathcal{B} $. We construct the neighbourhood model $ \mathcal{M}_\mathcal{B} = \langle W_\mathcal{B}, N_\mathcal{B}, \llbracket \ \rrbracket_\mathcal{B} \rangle$ as follows.
\begin{itemize}[noitemsep]
\item $ W_\mathcal{B}= S_\mathcal{B} $
\item For any $ x \in W_\mathcal{B} $, $ N_\mathcal{B}(x)= \{ \alpha_a \ | \ a \in N(x) \ \mathit{belongs \ to} \ \downarrow \Gamma \} $
\item For $ p $ atomic, $ \llbracket p\, \rrbracket_\mathcal{B} = \{ x \in W_\mathcal{B} \ | \ x:p \ \mathit{belongs \ to} \ \downarrow \Gamma \}$
\end{itemize}
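\noindent For illustration, the construction of $ \mathcal{M}_\mathcal{B} $ can be read as a direct extraction procedure from the atoms of the saturated branch. The Python sketch below uses a hypothetical tuple encoding of labelled atoms, introduced only for the sketch.
\begin{verbatim}
# Sketch of the countermodel construction above. Encoding (an assumption):
#   ("inN", a, x) for a in N(x),  ("in", y, a) for y in a,
#   ("at", x, p)  for x:p with p atomic.
# "antecedent" collects the atoms of the downward-closed antecedent.

def extract_model(antecedent, world_labels, nbhd_labels):
    alpha = {a: {y for (k, y, b) in antecedent if k == "in" and b == a}
             for a in nbhd_labels}                       # alpha_a
    N = {x: [alpha[a] for (k, a, z) in antecedent
             if k == "inN" and z == x]
         for x in world_labels}                          # N_B(x)
    val = {p: {x for (k, x, q) in antecedent if k == "at" and q == p}
           for (k, _, p) in antecedent if k == "at"}     # [[p]]_B
    return set(world_labels), N, val

# Toy saturated antecedent: a in N(x), y in a, y:p.
W, N, val = extract_model({("inN", "a", "x"), ("in", "y", "a"),
                           ("at", "y", "p")},
                          {"x", "y"}, {"a"})
assert N["x"] == [{"y"}] and val["p"] == {"y"}
\end{verbatim}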
\noindent We now show that $\mathcal{M}_{\mathcal{B}} = \langle W_{\mathcal{B}}, N_{\mathcal{B}}, \llbracket \ \rrbracket_{\mathcal{B}} \rangle $ satisfies the property of non-emptiness for $ \mathbb{PCL} $ neighbourhood models: we have to verify that every $ \alpha_a \in N(x)$ contains at least one element.
If $ a \in N(x) $ occurs in the sequent, it must have been introduced either by $ \mathsf{R>} $ or $ \mathsf{L|} $. If the neighbourhood label is not blocked, by the saturation conditions associated to both rules it holds that $ a\Vdash^\exists C $ occurs in $ \downarrow \Gamma $. Thus, by the saturation condition $ (\mathsf{L\fe}) $, a formula $ y\in a $ occurs in $ \Gamma $.
\noindent Moreover, the model $ \mathcal{M}_\mathcal{B} $ satisfies the following property:
\begin{quote}
$(*)$ If $ a \subseteq b $ belongs to $ \Gamma $, then $ \alpha_{a} \subseteq \alpha_{b} $
\end{quote}
To verify $ (*) $, suppose $ y \in \alpha_{a} $. This means that $ y \in a $ belongs to $ \Gamma $; then, by the saturation condition $ \mathsf{L\subseteq}$, also $ y \in b $ belongs to $ \Gamma $. By definition of the model we have $ y\in \alpha_{b}$, and thus that $ \alpha_{a} \subseteq \alpha_{b} $.
\
\noindent Next, define a realization $ (\rho, \sigma) $ such that $ \rho(x) = x $ and $ \sigma(a)= \alpha_{a} $ and prove the following claims:
\begin{itemize}[noitemsep]
\item[] \textbf{[Claim 1]} If $ \mathcal{F} $ is in $ \downarrow \Gamma $, then $ \mathcal{M}_{\mathcal{B}} \vDash_{\rho, \sigma} \mathcal{F}$;
\item[] \textbf{[Claim 2]} If $ \mathcal{F} $ is in $ \downarrow \Delta $, then $ \mathcal{M}_{\mathcal{B}} \nvDash_{\rho, \sigma} \mathcal{F}$;
\end{itemize}
\noindent where $ \mathcal{F} $ denotes a labelled formula, i.e., $ \mathcal{F}$ is $a \in N(x)$, $x\in a$, $a \subseteq b$, $a \Vdash^{\forall} A$, $a \Vdash^{\exists} A$, $x \Vdash_a A|B$, $x:A$, or $x: A>B $. The two claims are proved by cases, by induction on the weight of the formula $ \mathcal{F} $.
\textbf{[a]} If $ \mathcal{F} $ is a formula of the form $ a \in N(x) $, $ x \in a $ or $ a \subseteq b $, Claim 1 holds by definition of $ \mathcal{M}_{\mathcal{B}} $, and Claim 2 is vacuous. For the case of $ a \subseteq b $, employ fact $(*)$ above.
\textbf{[b]} If $ \mathcal{F} $ is a labelled atomic formula $ x:p $, the claims hold by definition of the model; by the saturation condition associated to $ \mathsf{init} $ no inconsistencies arise. If $ \mathcal{F} \equiv x:\bot $, the formula is not forced in any model and Claim 2 holds, while Claim 1 holds by the saturation clause associated to $ \bot_{\mathsf{L}} $. If $ \mathcal{F} $ labels a conjunction, disjunction or implication, both claims hold by the corresponding saturation conditions and by the inductive hypothesis on formulas of smaller weight.
\textbf{[c]} If $ \mathcal{F} \equiv a \Vdash ^{\exists} A $ is in $ \downarrow \Gamma $, then by the saturation clause associated to $ \mathsf{L\fe} $, for some $ x $ both $ x \in a $ and $ x:A $ are in $ \downarrow \Gamma $. By definition of the model $ \mathcal{M}_{\mathcal{B}} $, for some $ x $, $ x \in \alpha_{a} $. Then, since $ w(x:A) < w(a \Vdash^{\exists} A) $, apply the inductive hypothesis and obtain $\mathcal{M}_{\mathcal{B}} \vDash x:A $. Therefore, by definition of satisfiability, $ \mathcal{M}_{\mathcal{B}} \vDash \alpha_{a} \Vdash ^{\exists} A $.
\noindent If $ a \Vdash ^{\exists} A $ is in $ \downarrow \Delta $, then it is also in $ \Delta $. Consider an arbitrary world $ x $ in $ \alpha_{a} $. By definition of $\mathcal{M}_{\mathcal{B}}$ we have that $ x \in a $ is in $ \Gamma $; apply the saturation condition associated to $ \mathsf{R\fe} $ and obtain that $ x:A $ is in $ \downarrow \Delta $. By inductive hypothesis, $ \mathcal{M}_{\mathcal{B}} \nvDash x:A $; thus, since this line of reasoning holds for arbitrary $ x $, we can conclude by definition of satisfiability that $ \mathcal{M}_{\mathcal{B}} \nvDash \alpha_{a} \Vdash ^{\exists} A $. The case in which $ \mathcal{F} \equiv a \Vdash ^{\forall} A $ is similar.
\textbf{[d]} If $ x \Vdash_{a} A|B $ is in $ \downarrow \Gamma $, then by the saturation condition associated to $ \mathsf{L|} $, for some $ c $ it holds that $ c \in N(x) $ and $ c \subseteq a $ are in $ \Gamma $, and $ c \Vdash ^{\exists} A $, $ c \Vdash^{\forall} A\rightarrow B$ are in $ \downarrow \Gamma $. By definition of the model, $ \alpha_c \subseteq \alpha_a $, and by inductive hypothesis $ \mathcal{M}_{\mathcal{B}} \vDash \alpha_{c} \Vdash ^{\exists} A $ and $ \mathcal{M}_{\mathcal{B}} \vDash \alpha_{c} \Vdash ^{\forall} A\rightarrow B $. By definition, this yields $ \mathcal{M}_{\mathcal{B}} \vDash x \Vdash_{a}A|B $.
\noindent If $ x \Vdash_{a} A|B $ is in $ \downarrow \Delta $, consider a neighbourhood $ \alpha_{c} \subseteq \alpha_a $ in $ N(x) $. Then by definition of $\mathcal{M}_{\mathcal{B}}$ we have that $ c \in N(x) $ and $ c \subseteq a $ are in $ \Gamma $; apply the saturation condition associated to $ \mathsf{R|} $ and obtain that either $ c \Vdash^{\exists} A $ or $ c \Vdash^{\forall} A \rightarrow B $ is in $ \downarrow \Delta $. By inductive hypothesis, either $ \mathcal{M}_{\mathcal{B}} \nvDash \alpha_{c} \Vdash^{\exists} A $ or $ \mathcal{M}_{\mathcal{B}} \nvDash \alpha_{c} \Vdash^{\forall} A \rightarrow B $. In both cases, by definition $ \mathcal{M}_{\mathcal{B}} \nvDash x \Vdash_{a} A|B $.
\textbf{[e]} If $ x: A>B $ is in $ \downarrow \Gamma $, then it is also in $ \Gamma $. Consider an arbitrary neighbourhood $ \alpha_{a} $ in $ N(x) $. By definition of $\mathcal{M}_{\mathcal{B}}$ we have that $ a \in N(x) $ is in $ \Gamma $; apply the saturation condition associated to $ \mathsf{L>}T $ and conclude that either $ a \Vdash ^{\exists} A $ is in $ \downarrow \Delta $, or $ x \Vdash_{a} A|B $ is in $ \downarrow \Gamma $. By inductive hypothesis, it holds that either $ \mathcal{M}_{\mathcal{B}} \nvDash \alpha_{a} \Vdash ^{\exists} A $ or $ \mathcal{M}_{\mathcal{B}} \vDash x \Vdash_{a} A|B $. In both cases, by definition $ \mathcal{M}_{\mathcal{B}} \vDash x :A>B $.
\noindent If $ x: A>B $ is in $ \downarrow \Delta $, by the saturation condition associated to $ \mathsf{R>} $, for some $ a $ it holds that $ a\in N(x) $ is in $ \Gamma $, $ a \Vdash^{\exists} A $ is in $ \downarrow \Gamma $ and $ x \Vdash_{a} A|B $ is in $ \downarrow \Delta $. By inductive hypothesis, $ \mathcal{M}_{\mathcal{B}} \vDash \alpha_{a} \Vdash^{\exists} A $ and $ \mathcal{M}_{\mathcal{B}} \nvDash x \Vdash_{a} A|B $; thus, by definition, we have $ \mathcal{M}_{\mathcal{B}} \nvDash x: A>B $.
\end{proof}
\noindent Theorem \ref{theorem:semantic_completeness} together with the soundness of $ \lab{CL} $ provides a constructive proof of the finite model property for the logics: if $A$ is satisfiable in a model (meaning that $ \lnot A $ is not valid), by soundness of the calculi $ \lnot A $ is not provable. Thus by Theorem \ref{theorem:semantic_completeness} we build a finite countermodel of $ \lnot A $, that is a finite model in which $ A $ is satisfiable.
The same holds for the calculi for extensions of $ \mathbb{PCL} $, once their semantic completeness has been proved.
\subsection{Semantic completeness for extensions}
Semantic completeness for sequent calculi with normality, total reflexivity, weak centering and uniformity can be proved similarly as for $ \lab{CL} $. Extensions of the calculi with rules for centering require a modification on the countermodel construction, to account for singleton neighbourhoods.
\begin{theorem}
If $ A $ is valid in $ \mathbb{PCL} $ combined with normality, total reflexivity, weak centering and uniformity, then sequent $ \Rightarrow x:A $ is derivable in $ \lab{CL} $ combined with the corresponding rules for normality, total reflexivity, weak centering and uniformity.
\end{theorem}
\begin{proof}
The proof proceeds as the one of Theorem \ref{theorem:semantic_completeness}. For the case of normality, a clause is added in the countermodel construction; however, Claims 1 and 2 continue to hold in the model. For the remaining cases, the countermodel construction does not change, and it only remains to verify that the countermodel $ \mathcal{M}_\mathcal{B} $ satisfies the properties of normality, total reflexivity, weak centering and uniformity, provided that the corresponding rules and saturation conditions are added to the calculus.
\textit{Normality}: To construct a countermodel for logics featuring \textit{only} normality, we need to add a clause to the definition of $ W_\mathcal{B} $ and $ \alpha_a $; in the following, $ Q $ ranges over $ \{\forall, \exists\} $:
\begin{itemize}[noitemsep]
\item If $ a \in N(x) $ occurs in $ \Gamma $, and there are some formulas $ a \Vdash^Q A $ in $ \downarrow \Gamma~ \cup \downarrow \Delta $, the countermodel $\mathcal{M}_{\mathcal{B}}$ is defined as in the case of $ \mathbb{PCL} $;
\item If $ a \in N(x) $ occurs in $ \Gamma $, but no formulas $ a \Vdash^Q A $ occur in $ \downarrow \Gamma~ \cup \downarrow \Delta$, we set: $ W_\mathcal{B}= S_\mathcal{B} \cup \{u\} $, for some variable $ u $ not occurring in $ \Gamma$; $ \alpha_a = \{u \}$ and $N_\mathcal{B}(u) = \{\{u\} \} $.
\end{itemize}
The model satisfies the condition of normality: according to the saturation condition $ \mathsf{N} $, for every $ x $ occurring in $ \downarrow \Gamma $, there is $ a $ such that $ a\in N(x) $ occurs in $ \Gamma $. By definition of $ \mathcal{M}_\mathcal{B} $, $ \alpha_a \in N_\mathcal{B}(x) $.
Moreover, we have to verify that non-emptiness of the model holds also for the neighbourhood $ \alpha_a $ introduced by the rule. If there are some formulas $ a \Vdash^Q A $ occurring in $ \downarrow \Gamma \cup \downarrow \Delta $, the saturation condition associated to either $ \mathsf{0} $, $ \mathsf{L\fe} $ or $ \mathsf{R\fu} $ ensures that there is at least one formula $ y \in a $ in $ \Gamma $. If there are no such formulas, the application of $ \mathsf{N} $ is not relevant to the derivation; following the definition, we introduce an arbitrary world $ u $ to be placed in the neighbourhood\footnote{There is no need to verify non-emptiness for the stronger conditions of total reflexivity and weak centering, since the rules added to the calculus add a world belonging to the neighbourhood introduced.}.
\textit{Total reflexivity}: According to the saturation condition $ \mathsf{T} $, for every $ x $ occurring in $ \downarrow \Gamma\, \cup \downarrow \Delta $ also $ a \in N(x) $, $ x \in a $ occur in $ \Gamma $. By definition of $\mathcal{M}_{\mathcal{B}} $, this means that $\alpha_{a} \in N(x) $ and $ x \in \alpha_{a} $, and total reflexivity holds.
\textit{Weak centering}: Suppose $ \alpha_a \in N(x) $. We want to show that $ x \in \alpha_a $. By definition, if $ \alpha_a \in N(x) $ then $ a \in N(x) $ occurs in $ \Gamma $. By the saturation condition associated to $ \mathsf{W} $, it holds that also $ x \in a $ occurs in $ \Gamma $; thus, by definition of the model $ x \in \alpha_a $.
\textit{Uniformity}: Suppose $ y \in \bigcup N(x) $, which means that $ y \in \alpha_a $ and $ \alpha_a \in N(x) $. By definition, $ a \in N(x) $ and $ y \in a $ occur in $ \Gamma $. We have to show that $ \bigcup N(x) = \bigcup N(y) $, that is:
\begin{quote}
$ z \in \bigcup N(x) $ iff $ z \in \bigcup N(y) $
\end{quote}
Assume $ z \in \bigcup N(x) $. This means that $ z \in \alpha_b $ and $ b \in N(x) $ and, by definition, $ z \in b $ and $ b \in N(x) $ occur in $ \Gamma $. By the saturation condition associated to $ \mathsf{Unif}_2 $, we have that for some $ c $, $ c \in N(y) $ and $ z \in c $ occur in $ \Gamma $. Thus, $ z \in \alpha_c $ and $ \alpha_c \in N(y) $, meaning that $ z \in \bigcup N(y) $. The saturation condition associated to $ \mathsf{Unif}_1 $ is needed to prove the other direction.
\end{proof}
\begin{theorem}
If $ A $ is valid in $ \mathbb{PCL} $ extended with centering, then sequent $ \Rightarrow x:A $ is derivable in $ \lab{CL^C} $.
\end{theorem}
\begin{proof}
In this case, worlds of the countermodel are not defined as the set $ S_\mathcal{B} $ of labels occurring in the branch, but as \textit{equivalence classes} $ [x] $ with respect to the relation $ y \in \{x\} $, which we will show to be an equivalence relation. Then, we require $ [x] $ to be contained in any neighbourhood of $N(x)$. For $ S_\mathcal{B} $, $ N_\mathcal{B} $ and $ \alpha_a $ as defined before, let
\begin{quote}
$ [x] = \{ y \in S_\mathcal{B} \mid y \in \{x\} \ \mathit{ occurs \ in } \ \Gamma\} $;\\
$ [x] \subseteq \alpha_a $, for $ a \in N(x)$ occurring in $ \Gamma$.
\end{quote}
We construct a model $ \mathcal{M}^c_\mathcal{B} = \langle W^c, N^c, \llbracket \ \rrbracket^c
\rangle $ as follows:
\begin{itemize}[noitemsep]
\item $ W^c = \{ [ x] \mid x \in S_\mathcal{B} \} $;
\item for each $[ x ] \in W^c $, $ N^c( [x ])= \{ \alpha_a \mid a \in N(x) \ \mathit{belongs \ to} \ \downarrow \Gamma \} $;
\item for $ p $ atomic, $ \llbracket p\, \rrbracket^c = \{ [x] \in W^c \mid x :p \ \mathit{belongs \ to} \ \downarrow \Gamma \}$.
\end{itemize}
We first prove that $ y \in \{x\} $ is an equivalence relation. The relation is \textit{reflexive}: for each $ x $ occurring in $ \Gamma $, $ x \in \{x\} $ occurs in $ \Gamma $. This holds from the saturation conditions associated to $ \mathsf{N} $, $ \mathsf{C} $ and $ \mathsf{Single} $. To prove that the relation is \textit{symmetric}, we have to show that if $ y \in \{x\} $ occurs in $ \Gamma $, then also $ x \in \{y\} $ occurs in $ \Gamma $. By reflexivity, we have that $ y \in \{y\} $. Thus, by the saturation condition associated to $ \Repl{2} $, we have that also $ x \in \{y\} $ belongs to $ \Gamma $. To prove the converse, use saturation condition associated to $ \Repl{1} $.
To show that the relation is \textit{transitive} we have to prove that if $ y \in \{x\} $ and $ x \in \{z\} $ occur in $ \Gamma $, also $ y \in \{z\} $ occurs in $ \Gamma $. By saturation conditions $ \mathsf{N} $ and $ \mathsf{C} $ and $ \mathsf{Single} $, we have that also $ \{ z\} \in N(z) $ occurs in $ \Gamma $. By the saturation condition associated to $ \Repl{1}$ applied to $ x \in \{z\} $, also $ \{z\} \in N(x) $ occurs in the sequent; thus, by the saturation condition associated to $ \mathsf{C} $ we have that both formulas $ \{ x\} \subseteq \{z\}$ and $ \{x\}\in N(z) $ occur in $ \Gamma $. Finally, by the saturation condition associated to $ \mathsf{L\subseteq} $, since $ y \in \{x\} $ and $ \{x\} \subseteq \{z\} $, we have that $ y \in \{z\} $ occurs in the sequent.
Next we need to show that the definitions of $ N^c( [x ])$ and $ \llbracket p\, \rrbracket^c$ do not depend on the chosen representative of the equivalence class in question.
\begin{quote}
i) if $y\in [ x ] $, then $a \in N(x)$ is in $\Gamma$ if and only if $a \in N(y)$ is in $\Gamma$;\\
ii) if $y\in [ x ] $, then $x:p $ is in $ \Gamma $ if and only if $ y:p $ is in $ \Gamma $.
\end{quote}
Fact $ i) $ follows from the saturation conditions associated to $\Repl{1}$ and $\Repl{2}$, applied to on the formulas $a\in N(x)$ and $ a \in N(y) $. Fact $ ii) $ follows from application of the same saturation conditions to $ x:p $ and $ y:p $.
The model $ \mathcal{M}^c_\mathcal{B} $ satisfies the property of centering. Observe that in our model $ \{ x\} $ corresponds to $ [x] $: both are defined as the set containing exactly one element, $ x $. Suppose $ \alpha_a \in N(x) $; we have to show that $ \{x\} \subseteq \alpha_a $ and $\{x\} \in N(x) $.
By definition of the model we have that $ [x] \subseteq \alpha_a $, and from this and $ \alpha_a \in N(x) $ it follows that $ [x] \in N([x]) $; thus, strong centering holds.
\noindent The following facts are needed in the proof Claims 1 and 2 below.
\begin{quote}
$1)$ If $ a \subseteq b $ belongs to $ \Gamma $, then $ \alpha_{a} \subseteq \alpha_{b} $;\\
$ 2) $ if $ [x] \in \llbracket A \rrbracket$ and $ y \in [x] $, then $ [y] \in \llbracket A \rrbracket$; \\
$ 3) $ If $ [x] \in \llbracket A \rrbracket $, then $ x:A $ belongs to $ \downarrow \Gamma $.
\end{quote}
Fact $ 1) $ is proved in the same way as $ (*) $; the proofs of $ 2) $ and $ 3) $ are immediate from admissibility of $ \Repl{1} $ and $\Repl{2} $ in their generalized form.
Finally, we define define a realization $ (\rho, \sigma) $ such that $ \rho(x) = [x] $ and $ \sigma(a)= \alpha_{a} $, and prove that:
\begin{itemize}[noitemsep]
\item[] \textbf{[Claim 1]} If $ \mathcal{F} $ is in $ \downarrow \Gamma $, then $ \mathcal{M}^c_{\mathcal{B}} \vDash_{\rho, \sigma} \mathcal{F}$;
\item[] \textbf{[Claim 2]} If $ \mathcal{F} $ is in $ \downarrow \Delta $, then $ \mathcal{M}^c_{\mathcal{B}} \nvDash_{\rho, \sigma}\mathcal{F}$.
\end{itemize}
\noindent Again, $ \mathcal{F} $ denotes the labelled formulas of the language, including $ y \in \{x\}$, $ \{x\} \in N(x) $, $\{ x \} \in a$. The two cases are proved by distinction of cases, and by induction on the height of the derivation.
If $ \mathcal{F} $ is a relational formula that does not contain any singleton, Claim 1 holds by definition of the model, and Claim 2 is empty as in case a) of proof of the previous models. Similarly, if $ \mathcal{F} $ is either $ y \in \{x\} $, $\{x\} \in N(x)$ or $ \{x\} \subseteq a $, Claim 1 is satisfied by definition.
The cases b)- e) of the previous proof remain unchanged; condition 2) ensures that all the elements of an equivalence class of world labels satisfy the same sets of formulas.
\end{proof}
\section{Related work and discussion}
In this paper we have studied Preferential Conditional Logic $ \mathbb{PCL} $ and its extensions. We have first provided a natural semantics for this class of logics in terms of neighbourhood models.
Neighbourhood models generalise Lewis' sphere models for counterfactual logics. We have given a \mathsf{0}h{direct} proof of soundness and completeness of $\mathbb{PCL} $ and its extensions with respect to this class of models, with the exception of $ \mathbb{P}\mathsf{W}W $ and its extensions with (U) and (A).
We have then presented labelled sequent calculi for all logics of the family. The calculi are modular and have standard proof-theoretical properties, the most important one being cut admissibility, by which completeness of the calculi easily follows.
We have then tackled termination of the calculi, with the aim of obtaining a decision procedure for each logic. For all systems, except for the ones containing absoluteness, we have shown that by adopting a suitable strategy, it holds that every derivation either succeeds or ends by producing a finite tree. With respect to the known complexity of the logics, the decisions procedures are not optimal, and further work is needed to obtain optimal procedures out of the labelled calculi. The last result we have shown is semantic completeness for the calculi, again with the exception of cases of absoluteness.
\
Concerning the semantics, a few works have considered neighbourhood models for $\mathbb{PCL}$ or closely related logics. The relation between neighbourhood models and preferential models has been considered in \cite{negri2015sequent,girlandothesis} and is based on a well-known duality between partial orders and so-called Alexandrov topologies. According to this result, neighbourhood models are build by associating to each world a topology in which the neighbourhoods are the \mathsf{0}h{open} sets. For conditional logics this duality is studied in detailed in \cite{marti2013topological}. However, the topological semantics of \cite{marti2013topological} imposes closure under arbitrary unions and non-empty intersections on the neighbourhoods. These conditions are not required by the logic, and we have not assumed them in the definition of neighbourhood models.
A kind of neighbourhood semantics, called Broccoli Semantics, has been considered in \cite{girard2007onions}. In this article, it is shown that the logic BL characterised by the Broccoli Semantics coincides with $\mathbb{PCL}$. Completeness of BL is obtained by Burgess' result.
Yet another kind of neighbourhood semantics bearing some similarity to ours is the Premise Semantics, considered in the seminal work by Veltman \cite{veltman1985logic}. Premise semantics is shown equivalent to preferential semantics (called ``ordered semantics''). Premise models are neighbourhood models which do not require any additional properties, as in our definition. However, the definition of the conditional is different from ours, as it considers arbitrary intersections of neighbourhoods. Then, the result of \mathsf{0}h{strong} completeness is proved indirectly by resorting to preferential semantics (whence generalising Burgess' result).
In this respect, the direct completeness result with respect to the neighbourhood semantics contained in this work is new. In future work we wish to complete it with the missing cases.
\
Concerning proof systems, very few calculi are known for $\mathbb{PCL}$ and its extensions. Labelled tableaux calculi for $\mathbb{PCL}$ and its extensions (including all the ones considered here, and Lewis' logics) are proposed in \cite{giordano2009tableau}. The calculi are based on preferential semantics with the Limit assumption, and are defined by extending the language by pseudo-modalities indexed on worlds.
The tableaux calculi cover all logics considered in this work, but they are inherently different from the ones we introduced, due to the presence of Limit assumption. As a difference with the present work, termination is obtained by relatively complex blocking conditions.
As a side note, the neighbourhood semantics could be reformulated by assuming the Limit assumption as follows. Given a formula $A$, consider the set of neighbourhoods $\alpha\in N(x)$ minimal with respect to set inclusion such that $\alpha\Vdash^\exists A$, and define a conditional $A > B$ to be forced at $x$ if each neighbourhood $\alpha$ in this set forces universally $A \to B$. Corresponding calculi could possibly be developed based on this semantics.
Labelled sequent calculi based on preferential semantics for $ \mathbb{PCL} $ and its extensions, including counterfactual logics, are presented in \cite{girlando2019uniform}. In this case, the semantics is defined \mathsf{0}h{without} the limit assumption. However, while there is a proof of termination for all systems of calculi, complexity issues are not analysed in detail.
An unlabelled sequent calculus for $ \mathbb{PCL} $ yielding an optimal $ \mbox{PSPACE}$ decision procedure is presented in \cite{schroder2010optimal}:
the calculus is obtained by closing one step rules by all possible cuts and by adding a specific rule for $\mathbb{PCL}$. The resulting system is undoubtedly significant, but the rules have a highly combinatorial nature and are overly complicated. In particular, a non-trivial calculation (although the algorithm is polynomial) is needed to obtain one backward instance of the (S)-rule for a given sequent.
Recently, a resolution calculus for $\mathbb{PCL}$ has been proposed in \cite{nalon2018resolution}. The calculus does not make use of labels, nor of any additional structure; it relies however on a non-trivial pre-processing of formulas (including renaming of subformulas and addition of propositional constants) in order to transform a formula into a suitable set of clauses to which the resolution rules can be applied.
As a difference with Lewis' logics\footnote{Refer to \cite{girlando2016standard, girlando2017hypersequent} for recently proposed non-labelled calculi for Lewis' logics.}, it is remarkable that today, 40 years since preferential logics has been introduced, no \mathsf{0}h{standard} unlabelled sequent calculi for $ \mathbb{PCL} $ or its extensions have been found, where by a standard calculus we mean a proof system with a fixed finite number of rules, each with a fixed finite number of premisses.
Regarding labelled sequent calculi for preferential logics, from a computational viewpoint the main issue, to be explored in future work, is whether the calculi can be refined in order to achieve optimal complexity. This may lead to a redefinition the semantics itself, in order to obtain sharper labelled rules, or to a modification of calculus structure in itself.
\appendix
\section*{Appendix}
\begin{proof}[Proof of Theorem \ref{theorme:completeness_synth}]
We have to show that the inference rules of $ \vdash_{\mathcal{H}}iom{PCL} $ are admissible, and that the axioms of $ \vdash_{\mathcal{H}}iom{PCL} $ and its extensions are derivable. By means of example, we show admissibility of (RCEA) and (RCK) in $ \lab{CL} $, and derivability of axioms (CM), (N), (T) and (U$_1 $) in $ \lab{CLT} $. More derivation examples can be found in \cite{girlandothesis}.
For (RCEA), suppose $ \vdash A\leftrightarrow B $. Thus, $ \vdash x:A\rightarrow B $, whence $ \Rightarrow x:A\rightarrow B $, and $ \vdash x:B\rightarrow A $, whence $ \Rightarrow x:B\rightarrow A $. We derive three sequents by application of $ \mathsf{C}ut $ and other rules, and show how to combine them into a derivation of $ \Rightarrow x:(A>C) \rightarrow (B>C) $, i.e., $ \vdash (A>C) \rightarrow (B>C) $. The other direction of the implication is similar.
From $ \Rightarrow x:B\rightarrow A $ obtain by substitution $\Rightarrow y:B\rightarrow A $. Sequent $ y:B, y:B\rightarrow A \Rightarrow y:A $ is derivable.
$$
\infer[\mathsf{L\fe}]{(1) \quad a \in N(x), a \Vdash^\exists B, x:A>C \Rightarrow x\barra{a} B|C, a \Vdash^\exists A }{
\infer[\mathsf{R\fe}]{a \in N(x), y \in a, y: B, x:A>C \Rightarrow x\barra{a} B|C, a \Vdash^\exists A }{
\infer[\mathsf{W}k]{ a \in N(x), y \in a, y: B, x:A>C \Rightarrow x\barra{a} B|C, a \Vdash^\exists A, y :A }{
\infer[\mathsf{C}ut]{y:A \Rightarrow y:B}{
\infer[\mathsf{W}k]{y:A \Rightarrow y:B\rightarrow A, y:B}{
\Rightarrow y:B\rightarrow A,
}
&
y:B, y:B\rightarrow A \Rightarrow y:A
}
}
}
}
$$
From $ \Rightarrow x:A\rightarrow B $ obtain by substitution $ \Rightarrow y:A\rightarrow B $. Sequent $ y:A, y:A \rightarrow B \Rightarrow y:B $ is derivable.
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{L\fe}]{(2) \quad c\subseteq a, a \in N(x), c \in N(x), a \Vdash^\exists B, c \Vdash^\exists A, c \Vdash^\forall A\rightarrow C, x:A>C \Rightarrow x\barra{a} B|C }{
\infer[\mathsf{R\fe}]{ c\subseteq a, a \in N(x), c \in N(x), y \in c, a \Vdash^\exists B, y:A, c \Vdash^\forall A\rightarrow C,, x:A>C \Rightarrow x\barra{a} B|C }{
\infer[\mathsf{W}k]{c\subseteq a, a \in N(x), c \in N(x), y \in c, a \Vdash^\exists B, y:A, c \Vdash^\forall A\rightarrow C, y : A\rightarrow C,, x:A>C \Rightarrow x\barra{a} B|C}{
\infer[\mathsf{C}ut]{y:A \Rightarrow y:B}{
\infer[\mathsf{W}k]{y:A \Rightarrow y:A\rightarrow B, y:B}{
\Rightarrow y:A\rightarrow B
}
&
y:A, y:A \rightarrow B \Rightarrow y:B
}
}
}
}
$$
\end{adjustbox}
\
\noindent From $ \Rightarrow x:B\rightarrow A $ obtain by substitution $ \Rightarrow z:B\rightarrow A $, for some variable $ z $ different from $ y $. Sequent $z:C \Rightarrow z:C$ is derivable, as well as sequent $ z:B, z:B\rightarrow A \Rightarrow z:A $.
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\fu}]{(3) \ c \subseteq a, a \in N(x), c \in N(x), c \Vdash^\exists A, c \Vdash^\forall A\rightarrow C, a \Vdash^\exists B, x:A>C \Rightarrow x \barra{a} B|C, c \Vdash^\forall B\rightarrow C }{
\infer[\mathsf{L\fu}]{z \in c, c \subseteq a, a \in N(x), c \in N(x), c \Vdash^\exists A, c \Vdash^\forall A\rightarrow C,a \Vdash^\exists B, x:A>C \Rightarrow x \barra{a} B|C, z:B\rightarrow C }{
\infer[\mathsf{W}k]{ z \in c, c \subseteq a, a \in N(x), c \in N(x), c \Vdash^\exists A, c \Vdash^\forall A\rightarrow C, z: A\rightarrow C, a \Vdash^\exists B, x:A>C \Rightarrow x \barra{a} B|C,z:B\rightarrow C }{
\infer[\mathsf{R\rightarrow}]{z:A\rightarrow C \Rightarrow z:B\rightarrow C}{
\infer[\mathsf{L\rightarrow}]{ z:A\rightarrow C, z:B \Rightarrow z:C }{
z:C \Rightarrow z:C
&
\infer[\mathsf{C}ut]{z:B \Rightarrow z:A}{
\infer[\mathsf{W}k]{z:B \Rightarrow z:B\rightarrow A, z:A}{\Rightarrow z:B\rightarrow A}
&
z:B, z:B\rightarrow A \Rightarrow z:A
}
}
}
}
}
}
$$
\end{adjustbox}
\
\noindent To conclude, we combine the above derivations into the following:
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\rightarrow}]{\Rightarrow x: (A>C)\rightarrow (B>C) }{
\infer[\mathsf{R>}]{x: A>C\Rightarrow x:B>C}{
\infer[\mathsf{L>}]{a \in N(x), a \Vdash^\exists B, x:A>C \Rightarrow x\barra{a} B|C}{
(1)
&
\infer[\mathsf{L|}]{x \barra{a} A|C, a \Vdash^\exists B, x:A>C \Rightarrow x\barra{a}B|C}{
\infer[\mathsf{R|}]{c \subseteq a, a \in N(x), c \in N(x), c \Vdash^\exists A, c \Vdash^\forall A\rightarrow C, a \Vdash^\exists B, x:A>C \Rightarrow x\barra{a}B|C }{
(2)\qquad
&
\qquad (3)
}
}
}
}
}
$$
\end{adjustbox}
\
\noindent For (RCK), suppose $ \vdash A\rightarrow B $. Thus, $ \Rightarrow x:A\rightarrow B $. We show how to derive sequent $ \Rightarrow x:(C>A) \rightarrow (C>B) $. From $ \Rightarrow x:A\rightarrow B $ obtain by substitution $ \Rightarrow y:A\rightarrow B $. Then, from this sequent and the derivable sequent $ y:C\Rightarrow y:C $ obtain sequent (1):
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\fu}]{(1) \quad b \subseteq a, a \in N(x), b \in N(x), b \Vdash^\exists C, b \Vdash^\exists C\rightarrow A,a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B, b \Vdash^\forall C\rightarrow B}{
\infer[\mathsf{L\fu}]{y \in b, b \subseteq a, a \in N(x), b \in N(x), b \Vdash^\exists C, b \Vdash^\exists C\rightarrow A,a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B, y: C\rightarrow B}{
\infer[\mathsf{W}k]{y \in b, b \subseteq a, a \in N(x), b \in N(x), b \Vdash^\exists C, b \Vdash^\exists C\rightarrow A, y : C\rightarrow A, a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B, y: C\rightarrow B}{
\infer[\mathsf{R\rightarrow}]{y:C\rightarrow A \Rightarrow y:C\rightarrow B}{
\infer[\mathsf{L\rightarrow}]{y:C\rightarrow A,y:C \Rightarrow y:B}{
y:C \Rightarrow y:C
&
\infer[\mathsf{W}k]{y:C, y:A \Rightarrow y:B}{
y:A \Rightarrow y:B
}
}
}
}
}
}
$$
\end{adjustbox}
\
\noindent The following two sequents are derivable, by Lemma \ref{generalized_initial_sequents}:
\begin{quote}
(2) \ $ a \in N(x), a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B, a \Vdash^\exists C $ \\
(3) $ b \subseteq a, a \in N(x), b \in N(x), b \Vdash^\exists C, b \Vdash^\exists C\rightarrow A,a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B, b \Vdash^\exists C $\
\end{quote}
To conclude the proof, apply the following rules to $ y:C\rightarrow A \Rightarrow y:C\rightarrow B $:
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\rightarrow}]{\Rightarrow x:(C>A) \rightarrow (C>B)}{
\infer[\mathsf{R>}]{x:C>A \Rightarrow x:C>B}{
\infer[\mathsf{L>}]{ a \in N(x), a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B }{
(2)
&
\infer[\mathsf{L|}]{x \barra{a} C|A, a \in N(x), a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B}{
\infer[\mathsf{R|}]{b \subseteq a, a \in N(x), b \in N(x), b \Vdash^\exists C, b \Vdash^\exists C\rightarrow A,a \Vdash^\exists C, x:C>A \Rightarrow x \barra{a} C|B }{
(3) \qquad
&
\qquad (1)
}
}
}
}
}
$$
\end{adjustbox}
\noindent Here follows the derivation of (CM), in which we have omitted three derivable left premisses: $ a \Vdash^\exists a \mathsf{w}edge B \dots \Rightarrow \dots a \Vdash^\exists A $, premiss of the lower occurrence of $ \mathsf{L>} $, $ b\Vdash^\exists A \dots \Rightarrow \dots b \Vdash^\exists A $ premiss of the upper occurrence of $ \mathsf{L>} $, and $ c\subseteq a, a \Vdash^\exists A\mathsf{w}edge B \dots \Rightarrow \dots c \Vdash^\exists A\mathsf{w}edge B $, premiss of $ \mathsf{R|} $.
\
\begin{adjustbox}{max width = \textwidth}
$
\infer[\mathsf{R\rightarrow}]{\Rightarrow x: (A > B) \land (A > C) \rightarrow ((A\land B) > C) }{
\infer[\mathsf{L \mathsf{w}edge}]{x: (A > B) \land (A > C) \Rightarrow x: (A\land B) > C}{
\infer[\mathsf{R>}]{x: A > B, x:A > C \Rightarrow x:(A\land B) > C}{
\infer[\mathsf{L>}]{a \in N(x), a \Vdash^\exists A\mathsf{w}edge B, x: A > B, x:A > C \Rightarrow x \barra{a} A\mathsf{w}edge B | C }{
\infer[\mathsf{L|}]{x \barra{a} A|B, a \in N(x), a \Vdash^\exists A\mathsf{w}edge B, x: A > B, x:A > C \Rightarrow x \barra{a} A\mathsf{w}edge B | C }{
\infer[\mathsf{L>}]{ b \in N(x), b\subseteq a, a \in N(x), b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B, a \Vdash^\exists A\mathsf{w}edge B, x: A > B, x:A > C \Rightarrow x \barra{a} A\mathsf{w}edge B | C }{
\infer[\mathsf{L|}]{ x \barra{b} A|C, b \in N(x), b\subseteq a, a \in N(x), b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B, x: A > B, x:A > C \Rightarrow x \barra{a} A\mathsf{w}edge B | C }{
\infer[\mathsf{T}rans]{c \in N(x), c \subseteq b, b \in N(x), b\subseteq a, a \in N(x), c\Vdash^\exists A, c\Vdash^\exists A\rightarrow C, b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B, x: A > B, x:A > C \Rightarrow x \barra{a} A\mathsf{w}edge B | C }{
\infer[\mathsf{R|}]{ c \subseteq a, c \subseteq b, c\Vdash^\exists A, c \Vdash^\exists A, c\Vdash^\forall A\rightarrow C, b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B, x: A > B, x:A > C \Rightarrow x \barra{a} A\mathsf{w}edge B | C }{
\infer[\mathsf{R\fu}]{ c \subseteq a\dots c \subseteq b, c \Vdash^\exists A, c\Vdash^\forall A\rightarrow C, b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B \dots \Rightarrow \dots c\Vdash^\forall A\mathsf{w}edge B \rightarrow C }{
\infer[\mathsf{R\rightarrow}]{ y \in c , c \subseteq a, c \subseteq b, c \Vdash^\exists A, c\Vdash^\forall A\rightarrow C, b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B \dots \Rightarrow \dots y: A\mathsf{w}edge B \rightarrow C }{
\infer[\mathsf{L\fu}]{y \in c , c \subseteq a, c \subseteq b, c \Vdash^\exists A, c\Vdash^\forall A\rightarrow C, b\Vdash^\exists A, b\Vdash^\exists A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B, y: A\mathsf{w}edge B \dots \Rightarrow \dots y:C}{
\infer[\mathsf{L\rightarrow}]{y \in c , c \subseteq a, c \subseteq b, c \Vdash^\exists A, y: A\rightarrow C, b\Vdash^\exists A, b\Vdash^\forall A\rightarrow B , a \Vdash^\exists A\mathsf{w}edge B, y: A\mathsf{w}edge B \dots \Rightarrow \dots y:C}{
\deduce{\dots y:C \Rightarrow y:C \dots}{\triangledown}
&
\deduce{\dots y:A\mathsf{w}edge B \Rightarrow y:A \dots}{\triangledown}
}
}
}
}
}
}
}
}
}
}
}
}
}
$
\end{adjustbox}
\
\noindent Here follows the derivation of (N).
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\lnot}]{\Rightarrow x:\lnot (\top > \bot )}{
\infer[\mathsf{N}]{x: \top>\bot\Rightarrow }{
\infer[\mathsf{L>}]{a \in N(x), x: \top>\bot\Rightarrow }{
\infer[\mathsf{0}] {a \in N(x), x: \top>\bot\Rightarrow a \Vdash^\exists \top}{
\infer[\mathsf{R\fe}]{y\in a, a \in N(x), x: \top>\bot\Rightarrow a \Vdash^\exists \top}{y\in a, a \in N(x), x: \top>\bot\Rightarrow a \Vdash^\exists \top, y :\top }
}
&
\infer[\mathsf{L|}]{ x\mathbb{V}dash_a \top | \bot , y\in a, a \in N(x), x: \top>\bot\Rightarrow x:\bot}{
\infer[\mathsf{L\fu}]{ c\in N(x), c \subseteq a, a \Vdash^\exists \top, a \Vdash^\forall \top \rightarrow \bot , y\in a, a \in N(x), x: \top>\bot\Rightarrow x:\bot }{
\infer[\mathsf{L\subseteq}]{c\in N(x), c \subseteq a, a \Vdash^\exists \top, a \Vdash^\forall \top \rightarrow \bot , y : \top \rightarrow \bot, y\in a, a \in N(x), x: \top>\bot\Rightarrow x:\bot }{ \dots \Rightarrow y:\top \qquad & \qquad y:\bot \Rightarrow \dots }
}
}
}
}
}
$$
\end{adjustbox}
\
\noindent Here follows the derivation of axiom (T). The left premiss of $ \mathsf{L>} $ is the derivable sequent $ x\in a, a\in N(x), x:A, x: A>\bot \Rightarrow x: \bot, a \mathbb{V}dash^{\exists} A $, not shown for reasons of space.
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\rightarrow}]{\Rightarrow x: A \rightarrow ((A > \bot ) \rightarrow \bot ) }{
\infer[\mathsf{R\rightarrow}]{x: A \Rightarrow x: (A > \bot ) \rightarrow \bot }{
\infer[\mathsf{T}]{x: A, x:A < \bot \Rightarrow x: \bot }{
\infer[\mathsf{L>}]{x\in a, a\in N(x), x:A, x: A>\bot \Rightarrow x: \bot}{
\infer[\mathsf{L\subseteq}]{x\mathbb{V}dash_{a} A| \bot, x \in a, a\in N(x), x:A, x:A>\bot, \Rightarrow x:\bot }{
\infer[\mathsf{L\fe}]{b \in N(x), b\subseteq a, b\mathbb{V}dash^{\exists} A, b\mathbb{V}dash^{\forall} A\rightarrow \bot, x \in a, a \in N(x), x:A>\bot \Rightarrow x:\bot }{ \infer[\mathsf{L\fu}]{ y \in b, y:A, b \in N(x), b\subseteq a, b\mathbb{V}dash^{\forall} A\rightarrow \bot, x \in a, a \in N(x), x:A>\bot \Rightarrow x:\bot }{
\infer[\mathsf{L\rightarrow}]{y:A\rightarrow \bot, y\in b, y:A, b \in N(x), b\subseteq a, x \in a, a \in N(x), x:A>\bot \Rightarrow x:\bot }{
y:A\rightarrow \bot, y\in b, y:A \dots \Rightarrow x:\bot, y:A
\quad
&
\quad
y: \bot \dots \Rightarrow
} } } }
} } } }
$$
\end{adjustbox}
\
\noindent Finally, here follows the derivation of (U$ _1 $). The derivable left premiss of $ \mathsf{L>} $, sequent $z \in c, z \in b \dots z: \lnot A \Rightarrow \dots c \Vdash^\exists \lnot A, z:\lnot A$, is not shown.
\
\begin{adjustbox}{max width = \textwidth}
$$
\infer[\mathsf{R\rightarrow}]{\Rightarrow x: (\lnot A > \bot ) \rightarrow (\lnot (\lnot A > \bot ) > \bot) }{
\infer[\mathsf{R>}]{x: \lnot A > \bot \Rightarrow x:\lnot (\lnot A > \bot ) > \bot }{
\infer[\mathsf{L\fe}]{a \in N(x), a \Vdash^\exists \lnot (\lnot A > \bot ), x: \lnot A > \bot \Rightarrow x \mathbb{V}dash _a \lnot (\lnot A > \bot ) > \bot }{
\infer[\mathsf{L\lnot}]{y \in a, a \in N(x), y: \lnot (\lnot A > \bot ), x: \lnot A > \bot \Rightarrow x \mathbb{V}dash _a \lnot (\lnot A > \bot ) > \bot}{
\infer[\mathsf{R>}]{ y \in a, a \in N(x), x: \lnot A > \bot \Rightarrow x \mathbb{V}dash _a \lnot (\lnot A > \bot ) > \bot, y: \lnot A > \bot }{
\infer[\mathsf{L\fe}]{b \in N(y), y \in a, a \in N(x), b\Vdash^\exists \lnot A, x: \lnot A > \bot \Rightarrow x \mathbb{V}dash _a \lnot (\lnot A > \bot ) > \bot, y\mathbb{V}dash_b \lnot A | \bot}{
\infer[\mathsf{U}nif{1}]{z \in b, b \in N(y), y \in a, a \in N(x), z: \lnot A, x: \lnot A > \bot \Rightarrow x \mathbb{V}dash _a \lnot (\lnot A > \bot ) > \bot, y\mathbb{V}dash_b \lnot A | \bot}{
\infer[\mathsf{L>}]{z \in c, z \in b, b \in N(y), y \in a, a \in N(x), z: \lnot A, x: \lnot A > \bot \Rightarrow x \mathbb{V}dash _a \lnot (\lnot A > \bot ) > \bot, y\mathbb{V}dash_b \lnot A | \bot}{
\infer[\mathsf{L|}]{x\mathbb{V}dash_c \lnot A | \bot \dots z: \lnot A, x: \lnot A > \bot \Rightarrow \dots}{
\infer[\mathsf{L\fe}]{d \subseteq c, d \in N(x) \dots d \Vdash^\exists \lnot A, d \Vdash^\forall \lnot A \rightarrow \bot, z: \lnot A, x: \lnot A > \bot \Rightarrow \dots }{
\infer[\mathsf{L\fu}]{k \in d, d \subseteq c \dots k: \lnot A, d \Vdash^\forall \lnot A \rightarrow \bot, z: \lnot A, x: \lnot A > \bot \Rightarrow \dots }{
\infer[\mathsf{L\rightarrow}]{k \in d, d \subseteq c \dots k: \lnot A, d \Vdash^\forall \lnot A \rightarrow \bot, k:\lnot A \rightarrow \bot, z: \lnot A, x: \lnot A > \bot \Rightarrow \dots }{\deduce{ \dots k:\lnot A \Rightarrow \dots k:\lnot A}{\triangledown} & \qquad \qquad\qquad \qquad
\deduce{\dots k:\bot\Rightarrow \dots }{\mathsf{init}}
}
}
}
}
}
}
}
}
}
}
}
}
$$
\end{adjustbox}
\end{proof}
\end{document} |
\betaegin{document}
\betaaselineskip=0.20in
\makebox[\textwidth]{
\hglue-15pt
\betaegin{minipage}{0.6cm}
\vskip9pt
\varepsilonnd{minipage}
\betaegin{minipage}[t]{6cm}
\footnotesize{ {\betaf Discrete Mathematics Letters} \\ \underline{www.dmlett.com}}
\varepsilonnd{minipage}
\betaegin{minipage}[t]{6.5cm}
\normalsize {\it Discrete Math. Lett.} {\betaf X} (202X) XX--XX
\varepsilonnd{minipage}}
\vskip36pt
\noindent
{\lambdaarge \betaf Mean Sombor index}\\
\noindent
J. A. M\'endez-Berm\'udez$^{1,}\footnote{Corresponding author ([email protected])}$, R. Aguilar-S\'anchez$^{2}$, Edil D. Molina$^{3}$, Jos\'e M. Rodr\'{\i}guez$^{4}$\\
\noindent
\footnotesize
$^1${\it Instituto de F\'{\i}sica, Benem\'erita Universidad Aut\'onoma de Puebla, Apartado Postal J-48, Puebla 72570, Mexico} \\
\noindent
$^2${\it Facultad de Ciencias Qu\'imicas, Benem\'erita Universidad Aut\'onoma de Puebla,
Puebla 72570, Mexico}\\
\noindent
$^3${\it Facultad de Matem\'aticas, Universidad Aut\'onoma de Guerrero, Carlos E. Adame No.54 Col. Garita, Acapulco Gro. 39650, Mexico} \\
\noindent
$^4${\it Departamento de Matem\'aticas, Universidad Carlos III de Madrid, Avenida de la Universidad 30,
28911 Legan\'es, Madrid, Spain} \\
\noindent
(\footnotesize Received: Day Month 202X. Received in revised form: Day Month 202X. Accepted: Day Month 202X. Published online: Day Month 202X.)\\
\sigmaetcounter{page}{1} \thetaispagestyle{empty}
\betaaselineskip=0.20in
\normalsize
\betaegin{abstract}
\noindent
We introduce a degree--based variable topological index inspired on the power (or generalized) mean. We name this new index as the mean Sombor index: $mSO_\alphalpha(G) = \sigmaum_{uv \in E(G)} \lambdaeft[\lambdaeft( d_u^\alphalpha+d_v^\alphalpha \right) /2 \right]^{1/\alphalpha}$. Here, $uv$ denotes the edge of the graph $G$ connecting the vertices $u$ and $v$, $d_u$ is the degree of the vertex $u$, and $\alphalpha \in \mathbb{R} \betaackslash \{0\}$. We also consider the limit cases $mSO_{\alphalpha\to 0}(G)$ and $mSO_{\alphalpha\to\pm\infty}(G)$. Indeed, for given values of $\alphalpha$, the mean Sombor index is related to well-known topological indices such as the inverse sum indeg index, the reciprocal Randic index, the first Zagreb index, the Stolarsky--Puebla index and several Sombor indices. Moreover, through a quantitative structure property relationship (QSPR) analysis we show that $mSO_\alphalpha(G)$ correlates well with several physicochemical properties of octane isomers. Some mathematical properties of mean Sombor indices as well as bounds and new relationships with known topological indices are also discussed. \\[2mm]
{\betaf Keywords:} degree--based topological index; power mean; Sombor indices; QSPR analysis.\\[2mm]
{\betaf 2020 Mathematics Subject Classification:} 26E60, 05C09, 05C92.
\varepsilonnd{abstract}
\betaaselineskip=0.20in
\sigmaection{Preliminaries}
For two positive real numbers $x,y$ the power mean or generalized mean
$PM_\alphalpha(x,y)$ with exponent $\alphalpha\in \mathbb{R} \betaackslash \{0\}$ is given as
\betaegin{equation}
PM_\alphalpha(x,y)=\lambdaeft( \frac{x^\alphalpha+y^\alphalpha}{2} \right)^{1/\alphalpha} \, ,
\lambdaabel{Mxy}
\varepsilonnd{equation}
see e.g.~\cite{B03,S09}. $PM_\alphalpha(x,y)$ is also known as H\"older mean.
For given values of $\alphalpha$, $PM_\alphalpha(x,y)$ reproduces well-known mean values. As examples, in Table~\ref{TableMxy}
we show some expressions for $PM_\alphalpha(x,y)$ for selected values of $\alphalpha$ with their corresponding
names, when available.
\betaegin{table}[b!]
\caption{Expressions for the generalized mean $PM_\alphalpha(x,y)$ for selected values of $\alphalpha$.}
\centering
\betaegin{tabular}{r l l}
\hline\hline
$\alphalpha$ & $PM_\alphalpha(x,y)$ & name (when available) \\ [0.5ex]
\hline
$-\infty$ & $PM_{\alphalpha\to-\infty}(x,y)=\min(x,y)$ & minimum value \\ [1ex]
$-1$ & $\deltaisplaystyle PM_{-1}(x,y)=\frac{2xy}{x+y}$ & harmonic mean \\ [1ex]
0 & $\deltaisplaystyle PM_{\alphalpha\to 0}(x,y)=\sigmaqrt{xy}$ & geometric mean \\ [1ex]
$1/2$ & $\deltaisplaystyle PM_{1/2}(x,y)=\lambdaeft( \frac{\sigmaqrt{x}+\sigmaqrt{y}}{2} \right)^2$ & \\ [1ex]
1 & $\deltaisplaystyle PM_1(x,y)=\frac{x+y}{2}$ & arithmetic mean \\ [1ex]
2 & $\deltaisplaystyle PM_2(x,y)=\lambdaeft( \frac{x^2+y^2}{2} \right)^{1/2}$ & root mean square \\ [1ex]
3 & $\deltaisplaystyle PM_3(x,y)=\lambdaeft( \frac{x^3+y^3}{2} \right)^{1/3}$ & cubic mean \\ [1ex]
$\infty$ & $PM_{\alphalpha\to\infty}(x,y)=\max(x,y)$ & maximum value \\ [1ex]
\hline
\varepsilonnd{tabular}
\lambdaabel{TableMxy}
\varepsilonnd{table}
There is a well known inequality for the power mean, namely~\cite{OT57,C66,L74}:
For any $\alphalpha_1<\alphalpha_2$,
\betaegin{equation}
PM_{\alphalpha_1}(x,y) \lambdae PM_{\alphalpha_2}(x,y) \, ,
\lambdaabel{ineq1}
\varepsilonnd{equation}
where the equality is attained for $x=y$.
\sigmaection{The mean Sombor index}
A large number of graph invariants of the form
\betaegin{equation}
TI(G) = \sigmaum_{uv \in E(G)} F(d_u,d_v)
\lambdaabel{TI}
\varepsilonnd{equation}
are currently been studied in mathematical chemistry; where $uv$ denotes the edge of the graph $G$ connecting the vertices $u$ and $v$,
$d_u$ is the degree of the vertex $u$, and $F(x,y)$ is an appropriate chosen function, see e.g.~\cite{G13,S15,S21}.
Inspired by the power mean and given a simple graph $G=(V(G),E(G))$, here we choose
the function $F(x,y)$ in Eq.~(\ref{TI}) as the power mean $PM_\alphalpha(x,y)$ and define the degree--based variable topological index
\betaegin{equation}
mSO_\alphalpha(G)= \sigmaum_{uv \in E(G)} \lambdaeft( \frac{d_u^\alphalpha+d_v^\alphalpha}{2} \right)^{1/\alphalpha} ,
\lambdaabel{MG}
\varepsilonnd{equation}
where $\alphalpha\in \mathbb{R} \betaackslash \{0\}$. We name $mSO_\alphalpha(G)$ as the \varepsilonmph{mean Sombor index}.
\betaegin{table}[b!]
\caption{Expressions for the mean Sombor index $mSO_\alphalpha(G)$ for selected values of $\alphalpha$.}
\centering
\betaegin{tabular}{r l l}
\hline\hline
$\alphalpha$ & $mSO_\alphalpha(G)$ & index equivalence \\ [0.5ex]
\hline
$-\infty$ & $\deltaisplaystyle mSO_{\alphalpha\to-\infty}(G)=\sigmaum_{uv \in E(G)} \min(d_u,d_v)$ & $\deltaisplaystyle SP_{\alphalpha\to-\infty}(G)$ \\ [1ex]
$-1$ & $\deltaisplaystyle mSO_{-1}(G)=\sigmaum_{uv \in E(G)} \frac{2d_ud_v}{d_u+d_v}$ & $2ISI(G)$ \\ [1ex]
0 & $\deltaisplaystyle mSO_{\alphalpha\to 0}(G)=\sigmaum_{uv \in E(G)} \sigmaqrt{d_ud_v}$ & $R^{-1}(G)$ \\ [1ex]
$1/2$ & $\deltaisplaystyle mSO_{1/2}(G)=\sigmaum_{uv \in E(G)} \lambdaeft( \frac{\sigmaqrt{d_u}+\sigmaqrt{d_v}}{2} \right)^2$ & $2^{-2} KA^1_{1/2,2}(G)$ \\ [1ex]
1 & $\deltaisplaystyle mSO_1(G)=\sigmaum_{uv \in E(G)} \frac{d_u+d_v}{2}$ & $2^{-1}M_1(G)$ \\ [1ex]
2 & $\deltaisplaystyle mSO_2(G)=\sigmaum_{uv \in E(G)} \lambdaeft( \frac{d_u^2+d_v^2}{2} \right)^{1/2}$ & $2^{-1/2}SO(G)$ \\ [1ex]
3 & $\deltaisplaystyle mSO_3(G)=\sigmaum_{uv \in E(G)} \lambdaeft( \frac{d_u^3+d_v^3}{2} \right)^{1/3}$ & $2^{-1/3} KA^1_{3,1/3}(G)$ \\ [1ex]
$\infty$ & $\deltaisplaystyle mSO_{\alphalpha\to\infty}(G)=\sigmaum_{uv \in E(G)} \max(d_u,d_v)$ & $\deltaisplaystyle SP_{\alphalpha\to\infty}(G)$ \\ [1ex]
\hline
\varepsilonnd{tabular}
\lambdaabel{TableMG}
\varepsilonnd{table}
Note, that for given values of $\alphalpha$, the mean Sombor index is related to known topological indices:
$mSO_{-1}(G) = 2ISI(G)$, where $ISI(G)$ is the inverse sum indeg index~\cite{VG10,V10},
$mSO_{\alphalpha\to 0}(G) = R^{-1}(G)$, where $R^{-1}(G)$ is the reciprocal Randic index~\cite{GFE14}, and
$mSO_1(G) = M_1(G)/2$, where $M_1(G)$ is the first Zagreb index~\cite{GT72}.
Also, it is relevant to stress that the mean Sombor index is related to several Sombor indices:
$mSO_2(G) = 2^{-1/2}SO(G)$, where $SO(G)$ is the Sombor index~\cite{G21a},
$mSO_\alphalpha(G) = 2^{-1/\alphalpha} SO_\alphalpha(G)$, where $SO_\alphalpha(G)$ is the $\alphalpha$-Sombor index~\cite{RDA21}, and
$mSO_\alphalpha(G) = 2^{-1/\alphalpha} KA^1_{\alphalpha,1/\alphalpha}(G)$, where
$KA^1_{\alphalpha,\betaeta}(G)= \sigmaum_{uv\in E(G)} \lambdaeft( d_u^\alphalpha + d_v^\alphalpha \right)^\betaeta$
is the first $(\alphalpha,\betaeta)-KA$ index~\cite{K19}.
In addition, the limit cases $mSO_{\alphalpha\to\pm\infty}(G)$ correspond with the limit cases $SP_{\alphalpha\to\pm\infty}(G)$
of the recently introduced Stolarsky--Puebla index~\cite{MAAS22}.
In Table~\ref{TableMG} we report some expressions for $mSO_\alphalpha(G)$
for selected values of $\alphalpha$ that we identify with known topological indices.
\sigmaection{QSPR study of $mSO_\alphalpha(G)$ on octane isomers}
\lambdaabel{QSPR}
As a first application of mean Sombor indices, here we perform a quantitative structure property
relationship (QSPR) study of $mSO_\alphalpha(G)$
to model some physicochemical properties of octane isomers.
Here we choose to study the following properties:
acentric factor (AcentFac), boiling point (BP), heat capacity at constant pressure (HCCP),
critical temperature (CT), relative density (DENS), standard enthalpy of formation (DHFORM),
standard enthalpy of vaporization (DHVAP), enthalpy of formation (HFORM), heat of vaporization (HV) at 25$^{\circ}$C,
enthalpy of vaporization (HVAP), and entropy (S).
The experimental values of the physicochemical
properties of the octane isomers were kindly provided by Dr. S. Mondal, see Table 2 in Ref.~\cite{MDDP21}.
\betaegin{figure}[ht!]
\betaegin{center}
\includegraphics[scale=0.6]{Fig01.eps}
\caption{\footnotesize{
Mean Sombor index $mSO_\alphalpha(G)$ vs.~the physicochemical properties of octane isomers, for the
values of $\alphalpha$ that maximize the correlations: (a) $\alphalpha=0$, (b) $\alphalpha=-8.19$, (c)
$\alphalpha=-0.87$, (d) $\alphalpha=-2.05$, (e) $\alphalpha=-0.53$, (f) $\alphalpha=-1.28$, (g) $\alphalpha\rightarrow\infty$,
(h) $\alphalpha=-4.23$, (i) $\alphalpha\rightarrow-\infty$, (j) $\alphalpha\rightarrow\infty$, and (k) $\alphalpha=0.58$.
Red dashed lines are the linear QSPR models of Eq.~(\ref{P}), with the regression and statistical parameters
resumed in Table~\ref{a_values}.
}}
\lambdaabel{Fig01}
\varepsilonnd{center}
\varepsilonnd{figure}
In Fig.~\ref{Fig01} we plot $mSO_\alphalpha(G)$ vs.~the physicochemical properties of octane isomers for the
values of $\alphalpha$ that maximize the absolute value of Pearson's correlation coefficient $r$; see Table~\ref{a_values}.
Moreover, in Fig.~\ref{Fig01} we tested the following linear
regression model
\betaegin{equation}
{\cal P} = c_1 [mSO_\alphalpha(G)] + c_2,
\lambdaabel{P}
\varepsilonnd{equation}
where ${\cal P}$ represents a given physicochemical property.
In Table~\ref{a_values} we resume the regression and statistical parameters of the linear QSPR models
(see the red dashed lines in Fig.~\ref{Fig01}) given by Eq.~(\ref{P}).
\betaegin{table}[h!]
\caption{For the physicochemical properties ${\cal P}$ reported in Fig.~\ref{Fig01}: values of $\alphalpha$ that maximize the absolute value of Pearson's correlation coefficient $r$. $c_2$, $c_1$, $SE$, $F$, and $SF$ are the intercept, slope, standard error, $F$-test, and statistical significance, respectively, of the linear QSPR models of Eq.~(\ref{P}).}
\lambdaabel{a_values}
\centering
\betaegin{tabular}{r r r r r r r r }
\hline\hline
property ${\cal P}$ & $\alpha$ & $r$ & $c_2$ & $c_1$ & $SE$ & $F$ & $SF$ \\ [0.5ex]
\hline
AcentFac & $0$ & $-0.990$ & $0.988$ & $-0.046$ & $0.005$ & $749.116$ & $7.25$E-15
\\ [1ex]
BP & $-8.19$ & $0.886$ & $14.115$ & $8.946$ & $2.929$ & $58.126$ & $1.00$E-06
\\ [1ex]
HCCP & $-0.87$ & $0.928$ & $-21.216$ & $3.604$ & $0.504$ & $98.128$ & $3.13$E-08
\\ [1ex]
CT & $-2.05$ & $0.717$ & $30.048$ & $20.859$ & $6.788$ & $16.934$ & $8.10$E-04
\\ [1ex]
DENS & $-0.53$ & $0.702$ & $0.134$ & $0.042$ & $0.022$ & $15.518$ & $1.17$E-03
\\ [1ex]
DHFORM & $-1.28$ & $0.781$ & $-26.253$ & $2.331$ & $0.546$ & $24.924$ & $1.33$E-04
\\ [1ex]
DHVAP & $\infty$ & $-0.962$ & $11.375$ & $-0.117$ & $0.108$ & $196.401$ & $2.11$E-10
\\ [1ex]
HFORM & $-4.23$ & $0.912$ & $-77.705$ & $2.220$ & $0.530$ & $78.903$ & $1.39$E-07
\\ [1ex]
HV & $-\infty$ & $0.895$ & $15.880$ & $2.036$ & $1.286$ & $4.622$ & $4.72$E-02
\\ [1ex]
HVAP & $\infty$ & $-0.921$ & $80.550$ & $-0.592$ & $0.813$ & $89.724$ & $5.81$E-08
\\ [1ex]
S & $0.58$ & $-0.956$ & $160.060$ & $-3.655$ & $1.372$ & $98.128$ & $3.13$E-08
\\ [1ex]
\hline
\varepsilonnd{tabular}
\varepsilonnd{table}
From Table~\ref{a_values} we can conclude that $mSO_\alphalpha(G)$ provides good predictions of
AcentFac, BP, HCCP, DHVAP, HFORM, HV, HVAP, and S for which the
correlation coefficients (absolute values) are closer or higher than 0.9.
Note that for all these physicochemical properties of octane isomers the statistical significance
of the linear model of Eq.~(\ref{P}) is far below 5\%.
Also notice that the mean Sombor index that better correlates (linearly) with the AcentFac is $mSO_{\alphalpha\to 0}(G)$,
which indeed coincides with the reciprocal Randic index.
Moreover, we found that $|r|$ is maximized when $\alphalpha \to \infty$, for DHVAP and HVAP, and when
$\alphalpha \to -\infty$ for HV. This means that the limiting cases $mSO_{\alphalpha\to\pm\infty}(G)$ are also
relevant from an application point of view.
\sigmaection{Inequalities involving $mSO_\alphalpha(G)$}
\lambdaabel{ineq}
Equation~(\ref{ineq1}) can be straightforwardly used to
state a monotonicity property for the $mSO_\alphalpha(G)$ index, as well as inequalities for related indices.
That is, if $ \alphalpha_1<\alphalpha_2 $ we have,
\betaegin{equation}
mSO_{\alphalpha_1}(G) \lambdae mSO_{\alphalpha_2}(G) \, ,
\lambdaabel{ineq11}
\varepsilonnd{equation}
which implies, for the the first $(a,b)-KA$ index, that
\betaegin{equation}
2^{-1/\alphalpha_1} KA^1_{\alphalpha_1,1/\alphalpha_1}(G) \lambdae 2^{-1/\alphalpha_2} KA^1_{\alphalpha_2,1/\alphalpha_2}(G) \, , \qquad \alphalpha_1<\alphalpha_2 \, ,
\lambdaabel{ineq12}
\varepsilonnd{equation}
and moreover
\betaegin{equation}
2ISI(G) \lambdae R^{-1}(G) \lambdae 2^{-2} KA^1_{1/2,2}(G) \lambdae 2^{-1}M_1(G) \lambdae 2^{-1/2}SO(G) \, .
\lambdaabel{ineq13}
\varepsilonnd{equation}
Note that this last inequality involves the inverse sum indeg index, the reciprocal Randic index, the $(a,b)-KA$ index,
the first Zagreb index, and the Sombor index. It is fair to acknowledge that
the very last inequality in (\ref{ineq13}) was already included in the Theorem 3.1 of~\cite{MMM21}.
In what follows we will state bounds for the mean Sombor index as well as new relationships with known topological indices.
We will use the following particular case of Jensen's inequality.
\betaegin{lemma} \lambdaabel{l:Jensen}
If $g$ is a convex function on $\mathbb{R}$ and $x_1,\deltaots,x_m \in \mathbb{R}$, then
$$
g\Big(\frac{ x_1+\cdots +x_m }{m} \Big) \lambdae \frac{1}{m} \, \betaig(g( x_1)+\cdots +g(x_m) \betaig) .
$$
If $g$ is strictly convex, then the equality is attained in the inequality if and only if $x_1=\deltaots=x_m$.
\varepsilonnd{lemma}
\betaegin{theorem}
Let $G$ be a graph with $m$ edges and $\alpha\in \mathbb{R} $; if $\alpha>1$ then
$$
mSO_\alpha(G) \lambdae \frac{m^{1-1/\alpha}}{2^{1/\alpha}}\lambdaeft( M_1^{\alpha+1}(G) \right)^{1/\alpha} \, ,
$$
if $\alpha<1$ and $\alpha\ne 0$ then
$$
mSO_\alpha(G) \gammae \frac{m^{1-1/\alpha}}{2^{1/\alpha}}\lambdaeft( M_1^{\alpha+1}(G) \right)^{1/\alpha} \, ,
$$
and the equality in each bound is attained for a connected graph $G$
if and only if $G$ is regular or biregular.
\varepsilonnd{theorem}
\betaegin{proof}
Assume first that $\alpha>1$ then, for $x\gammae 0$, $x^{1/\alpha}$ is a concave function and by Lemma~\ref{l:Jensen} we have
$$
\betaegin{aligned}
\frac{1}{m}\sigmaum_{uv\in E(G)} \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right)^{1/\alpha}
&\lambdae
\lambdaeft( \frac{1}{2m} \sigmaum_{uv\in E(G)} (d_u^\alpha + d_v^\alpha) \right)^{1/\alpha} \\
&=
\frac{1}{2^{1/\alpha} m^{1/\alpha}}\lambdaeft( \sigmaum_{u\in V(G)} d_u^{\alpha+1} \right)^{1/\alpha}
=
\frac{1}{2^{1/\alpha} m^{1/\alpha}}\lambdaeft( M_1^{\alpha+1}(G) \right)^{1/\alpha} \, .
\varepsilonnd{aligned}
$$
Assume now that $\alpha<1$ and $\alpha\ne 0$, then $x^{1/\alpha}$ is a convex function and by Jensen's inequality we obtain
$$
\betaegin{aligned}
\frac{1}{m}\sigmaum_{uv\in E(G)} \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right)^{1/\alpha}
&\gammae
\lambdaeft( \frac{1}{2m} \sigmaum_{uv\in E(G)} (d_u^\alpha + d_v^\alpha) \right)^{1/\alpha} \\
&=
\frac{1}{2^{1/\alpha} m^{1/\alpha}}\lambdaeft( \sigmaum_{u\in V(G)} d_u^{\alpha+1} \right)^{1/\alpha}
=
\frac{1}{2^{1/\alpha} m^{1/\alpha}}\lambdaeft( M_1^{\alpha+1}(G) \right)^{1/\alpha} .
\varepsilonnd{aligned}
$$
If $G$ is regular or biregular, with maximum and minimum degrees $\Delta$ and $\delta$, respectively,
$$
mSO_\alpha(G)=m\lambdaeft( \frac{\Delta^\alpha+\delta^\alpha}{2} \right)^{1/\alpha}
= \frac{m^{1-1/\alpha}}{2^{1/\alpha}} \lambdaeft(m( \Delta^\alpha+\delta^\alpha) \right)^{1/\alpha}
= \frac{m^{1-1/\alpha}}{2^{1/\alpha}} \lambdaeft( M_1^{\alpha+1}(G) \right)^{1/\alpha} .
$$
If any of these equalities hold, for every $uv,u^\prime v^\prime \in E(G)$, by Lemma~\ref{l:Jensen}, we have $d_u^\alpha+d_v^\alpha=d_{u^\prime}^\alpha + d_{v^\prime}^\alpha$. In particular if we take $u=u^\prime$ we have $d_v = d_{v^\prime}$, so all the neighbors of a vertex $u\in V(G)$ have the same degree. Thus, since $G$ is a connected graph, $G$ is regular or biregular.
\varepsilonnd{proof}
In order to prove the next result we need an additional technical result.
In~\cite[Theorem 3]{BMRS} appears a converse of H\"older inequality, which in the discrete case can be stated as follows~\cite[Corollary 2]{BMRS}.
\betaegin{lemma} \lambdaabel{c:holder}
If $1<p,q<\infty$ with $1/p+1/q=1$, $x_j,y_j\gammae 0$ and $a y_j^q \lambdae x_j^p \lambdae b y_j^q$ for $1\lambdae j \lambdae k$ and some positive constants $a,b,$ then:
$$
\Big(\sigmaum_{j=1}^k x_j^p \Big)^{1/p} \Big(\sigmaum_{j=1}^k y_j^q \Big)^{1/q}
\lambdae K_p(a ,b ) \sigmaum_{j=1}^k x_jy_j ,
$$
where
$$
K_p(a ,b )
= \betaegin{cases}
\,\deltaisplaystyle\frac{1}{p} \Big( \frac{a }{b } \Big)^{1/(2q)} + \frac{1}{q} \Big( \frac{b }{a } \Big)^{1/(2p)}\,, & \quad \text{if } 1<p<2 \, ,
\\
\, & \,
\\
\,\deltaisplaystyle\frac{1}{p} \Big( \frac{b }{a } \Big)^{1/(2q)} + \frac{1}{q} \Big( \frac{a }{b } \Big)^{1/(2p)}\,, & \quad \text{if } p \gammae 2 \, .
\varepsilonnd{cases}
$$
If $x_j>0$ for some $1\lambdae j \lambdae k$, then the equality in the bound is attained if and only if $a =b$ and $x_j^p=a y_j^q$ for every $1\lambdae j \lambdae k$.
\varepsilonnd{lemma}
\betaegin{theorem}
Let $G$ be a graph with $m$ edges, maximum degree $\Delta$ and minimum degree $\delta$, let $0 < \alpha < 1 $, then
$$
mSO_\alpha(G) \lambdae \frac{m^{1-1/\alpha}}{2^{1/\alpha}} \, K_{\alpha} \lambdaeft( M_1^{\alpha+1}(G) \right)^{1/\alpha}
$$
where
$$
K_{\alpha}^{\alpha} = \lambdaeft\lambdabrace
\betaegin{array}{cc}
\alpha\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{\alpha - \alpha^2}{2}} + (1-\alpha)\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{-\alpha^2}{2}}, & \text{if } 0<\alpha\lambdae\frac{1}{2} \;,\\
\alpha\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{\alpha^2-\alpha}{2}} + (1-\alpha)\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{\alpha^2}{2}}, & \text{if } \frac{1}{2} < \alpha < 1 \, ,
\varepsilonnd{array}
\right.
$$
the equality holds if and only if $G$ is a regular graph.
\varepsilonnd{theorem}
\betaegin{proof}
For each $uv\in E(G)$ we have
$$
\delta^\alpha \lambdae \frac{d_u^\alpha + d_v^\alpha}{2} \lambdae \Delta^\alpha\, .
$$
If we take $x_j=d_u^\alpha$, $y_j=d_v^\alpha$ and $p=1/\alpha$ by Lemma~\ref{c:holder} we have
$$
\betaegin{aligned}
m^{1-\alpha}\lambdaeft( mSO_\alpha(G) \right) ^\alpha
& =
\lambdaeft( \sigmaum_{uv\in E(G)} \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right)^{1/\alpha}\right)^{\alpha}
\lambdaeft( \sigmaum_{uv\in E(G)} 1^{\frac{1}{1-\alpha}} \right)^{1-\alpha} \\
& \lambdae K_{\alpha}^{\alpha} \sigmaum_{uv\in E(G)}\frac{d_u^{\alpha}+d_v^{\alpha}}{2}
= \frac{1}{2} \, K_{\alpha}^{\alpha} M_1^{\alpha+1}(G) \, ,
\varepsilonnd{aligned}
$$
where
$$
K_{\alpha}^{\alpha}= \lambdaeft\lambdabrace
\betaegin{array}{cc}
\alpha\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{\alpha - \alpha^2}{2}} + (1-\alpha)\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{-\alpha^2}{2}}, & \text{if } 0<\alpha\lambdae\frac{1}{2} \;, \\
\alpha\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{\alpha^2-\alpha}{2}} + (1-\alpha)\lambdaeft( \frac{\Delta}{\delta}\right)^{\frac{\alpha^2}{2}}, & \text{if } \frac{1}{2} < \alpha < 1 \, ,
\varepsilonnd{array}
\right.
$$
and the equality holds if and only if $\delta=\Delta$, i.e., $G$ is regular.
\varepsilonnd{proof}
The following inequalities are known for $x,y > 0$:
\betaegin{equation}\lambdaabel{eq:a}
\betaegin{aligned}
x^a + y^a
< (x + y)^a
\lambdae 2^{a-1}(x^a + y^a )
\qquad & \text{if } \, a > 1,
\\
2^{a-1}(x^a + y^a )
\lambdae (x + y)^a
< x^a + y^a
\qquad & \text{if } \, 0<a < 1,
\\
(x + y)^a
\lambdae 2^{a-1}(x^a + y^a )
\qquad & \text{if } \, a < 0,
\varepsilonnd{aligned}
\varepsilonnd{equation}
and the equality in the second, third or fifth bound is attained for each $a$ if and only if $x=y$.
\betaegin{proposition}
Let $G$ be a graph and $\alpha \in \mathbb{R} \betaackslash\{0\}$, then
$$
\betaegin{aligned}
2^{-1/\alpha}SO(G)
< mSO_\alpha(G)
\lambdae 2^{-1/2}SO(G)
\qquad & \text{if } \, 0< \alpha < 2,
\\
2^{-1/2}SO(G)
\lambdae mSO_\alpha(G)
<2^{-1/\alpha}SO(G)
\qquad & \text{if } \, \alpha > 2,
\\
mSO_\alpha(G)
\lambdae 2^{-1/2}SO(G)
\qquad & \text{if } \, \alpha<0 \, ,
\varepsilonnd{aligned}
$$
and the equality in the second, third or fifth bound is attained for each $\alpha$ if and only if each connected component of $G$ is a regular graph.
\varepsilonnd{proposition}
\betaegin{proof}
If we divide each one of the inequalities in (\ref{eq:a}) by $2^a$ we obtain
$$
\betaegin{aligned}
2^{-a}\lambdaeft( {x^a + y^a}\right)
< \lambdaeft( \frac{x + y}{2}\right) ^a
\lambdae \frac{x^a + y^a}{2}
\qquad & \text{if } \, a > 1,
\\
\frac{x^a + y^a}{2}
\lambdae \lambdaeft( \frac{x + y}{2} \right) ^a
<2^{-a} \lambdaeft( {x^a + y^a} \right)
\qquad & \text{if } \, 0<a < 1,
\\
\lambdaeft( \frac{x + y}{2} \right) ^a
\lambdae \frac{x^a + y^a}{2}
\qquad & \text{if } \, a < 0 .
\varepsilonnd{aligned}
$$
If we take $x=d_u^\alpha$, $y=d_v^\alpha$ and $a=2/\alpha$; then the previous inequalities give
$$
\betaegin{aligned}
2^{-2/\alpha}\lambdaeft( {d_u^2 + d_v^2}\right)
< \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{2/\alpha}
\lambdae \frac{d_u^{2} + d_v^{2}}{2}
\qquad & \text{if } \, 0< \alpha < 2,
\\
\frac{d_u^{2} + d_v^{2}}{2}
\lambdae \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right) ^{2/\alpha}
<2^{-{2/\alpha}} \lambdaeft( {d_u^{2} + d_v^{2}} \right)
\qquad & \text{if } \, \alpha > 2,
\\
\lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right) ^{2/\alpha}
\lambdae \frac{d_u^{2} + d_v^{2}}{2}
\qquad & \text{if } \, \alpha < 0 ,
\varepsilonnd{aligned}
$$
and the equality in the second, third or fifth bounds are attained for each $a$ if and only if $d_u=d_v$.
From this we obtain
$$
\betaegin{aligned}
2^{-1/\alpha}\lambdaeft( {d_u^2 + d_v^2}\right) ^{1/2}
< \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{1/\alpha}
\lambdae 2^{-1/2 } \lambdaeft( {d_u^{2} + d_v^{2}} \right) ^{1/2}
\qquad & \text{if } \, 0< \alpha < 2,
\\
2^{-1/2 } \lambdaeft( {d_u^{2} + d_v^{2}} \right) ^{1/2}
\lambdae \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right) ^{1/\alpha}
<2^{-1/\alpha}\lambdaeft( {d_u^2 + d_v^2}\right) ^{1/2}
\qquad & \text{if } \, \alpha > 2,
\\
\lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2} \right) ^{1/\alpha}
\lambdae 2^{-1/2 } \lambdaeft( {d_u^{2} + d_v^{2}} \right) ^{1/2}
\qquad & \text{if } \, \alpha < 0 ,
\varepsilonnd{aligned}
$$
and the equality in the second, third or fifth bounds are attained for each $a$ if and only if $d_u=d_v$.
The desired result is obtained by adding up for each $uv\in E(G)$.
\varepsilonnd{proof}
The following result appears in [4].
\betaegin{lemma}\lambdaabel{l:a_ir}
If $a_{i} > 0$ for $1 \lambdaeq i \lambdaeq k$ and $r \in \mathbb{R}$, then
$$
\betaegin{aligned}
\sigmaum_{i=1}^{k} a_{i}^{r} \gammaeq k^{1-r}\lambdaeft(\sigmaum_{i=1}^{k} a_{i}\right)^{r}, &\quad \text { if }\, r \lambdaeq 0 \text { or } r \gammaeq 1, \\
\sigmaum_{i=1}^{k} a_{i}^{r} \lambdaeq k^{1-r}\lambdaeft(\sigmaum_{i=1}^{k} a_{i}\right)^{r}, &\quad \text { if }\, 0 \lambdaeq r \lambdaeq 1 .
\varepsilonnd{aligned}
$$
\varepsilonnd{lemma}
\betaegin{proposition}
If $G$ is a graph with $m$ edges, then
$$
\betaegin{aligned}
KA_{\alpha,\beta}(G)\gammae m^{1-\beta}(M_1^{\alpha+1}(G))^{\beta}
&\quad \text { if }\, \beta \lambdaeq 0 \text { or } \beta \gammaeq 1, \\
KA_{\alpha,\beta}(G)\lambdae m^{1-\beta}(M_1^{\alpha+1}(G))^{\beta}
&\quad \text { if }\, 0\lambdae \beta \lambdae 1 \,.
\varepsilonnd{aligned}
$$
\varepsilonnd{proposition}
\betaegin{proof}
If we take $a_i=d_u^\alpha+d_v^{a}$ and $r=\beta$, by Lemma~\ref{l:a_ir} we have
$$
\betaegin{aligned}
\sigmaum_{uv\in E(G)} \lambdaeft( d_u^\alpha+d_v^a \right) ^{\beta}
\gammaeq m^{1-\beta}\lambdaeft(\sigmaum_{uv\in E(G)} \lambdaeft( d_u^\alpha+d_v^a \right)\right)^{\beta},
&\quad \text { if } \beta \lambdaeq 0 \text { or } \beta \gammaeq 1, \\
\sigmaum_{uv\in E(G)} \lambdaeft( d_u^\alpha+d_v^a \right) ^{\beta}
\lambdaeq m^{1-\beta}\lambdaeft(\sigmaum_{uv\in E(G)} \lambdaeft( d_u^\alpha+d_v^a \right)\right)^{\beta},
&\quad \text { if } 0 \lambdaeq \beta \lambdaeq 1 .
\varepsilonnd{aligned}
$$
\varepsilonnd{proof}
Given a graph $G$, let us define the mean Sombor matrix $m\mathcal{S\!M}_{\alpha}(G)$ with entries
\betaegin{equation}\lambdaabel{meanSmat}
{a}_{uv}:=\lambdaeft\lambdabrace \betaegin{array}{ll}
\lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{1/\alpha} , & \text{if } uv\in E(G)\, , \\
0, & \text{otherwise} \, .
\varepsilonnd{array}
\right.
\varepsilonnd{equation}
One can easily check the following result about the trace of the matrix $m\mathcal{S\!M}_{\alpha}(G)^2$:
\betaegin{equation}\lambdaabel{e:trPM}
\omegaperatorname{tr} \lambdaeft(m\mathcal{S\!M}_{\alpha}(G)^2 \right)
=
\sigmaum_{uv\in E(G)}\lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{2/\alpha} \, .
\varepsilonnd{equation}
Denote by $\sigmaigma^2$ the variance of the sequence of the terms
$\lambdaeft\lambdabrace \lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{1/\alpha}\right\rbrace $ appearing in the definition of $mSO_\alpha(G)$.
\betaegin{proposition}
Let $G$ be a graph, then
$$
mSO_a(G)=\sigmaqrt{ \frac{m}{2} \, \omegaperatorname{tr}\lambdaeft(m\mathcal{S\!M}_{\alpha}(G)^2\right) - m^2\sigmaigma^2}\, .
$$
\varepsilonnd{proposition}
\betaegin{proof}
By the definition of $\sigmaigma^2$, we have
$$
\sigmaigma^2
=
\frac{1}{m} \sigmaum_{uv\in E(G)}\lambdaeft(\lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{1/\alpha} \right) ^2
-
\lambdaeft( \frac{1}{m} \sigmaum_{uv\in E(G)}\lambdaeft( \frac{d_u^\alpha + d_v^\alpha}{2}\right) ^{1/\alpha} \right) ^2
\, ,
$$
then using the expression~\varepsilonqref{e:trPM} we have
$$
\sigmaigma^2= \frac{1}{2m} \,\omegaperatorname{tr}\lambdaeft(m\mathcal{S\!M}_{\alpha}(G)^2\right) - \frac{1}{m^2} \, mSO_\alpha(G)^2 ,
$$
and the result follows from this equality.
\varepsilonnd{proof}
\betaegin{theorem}
Let $G$ be any graph, then
$$
mSO_2(G) \lambdae M_1(G)-M_2^{1/2}(G) \, ,
$$
where $M_2^{1/2}$ is the variable second Zagreb index $M_2^{\alphalpha}$ at $\alphalpha=1/2$, and the equality is attained if and only if each connected component of $G$ is a regular graph.
\varepsilonnd{theorem}
\betaegin{proof}
Let be $\delta$, $\Delta$ the minimum and maximum degree of $G$, respectively. Let's analyze the behavior of the function
$$
f(x,y)= \lambdaeft( x+y-\sigmaqrt{xy} \, \right) ^2 - \frac{x^2+y^2}{2}\, ,
$$
for $\delta \lambdae x\lambdae y \lambdae \Delta$. We have
$$
\betaegin{aligned}
\frac{\partial f}{\partial x}(x,y)
&= 2(x+y-\sigmaqrt{xy} \,)\lambdaeft( 1-\frac{1}{2}\sigmaqrt{\frac{y}{x}} \, \right) -x \\
&= x - 3\sigmaqrt{xy} + 3y -\frac{y\sigmaqrt{y}}{\sigmaqrt{x}} \\
&= \frac{x\sigmaqrt{x}- 3x\sigmaqrt{y} + 3y\sigmaqrt{x} - y\sigmaqrt{y}}{\sigmaqrt{x}} \\
&= \frac{\lambdaeft( \sigmaqrt{x} - \sigmaqrt{y} \, \right) ^3}{\sigmaqrt{x}} \lambdae 0 \, ,
\varepsilonnd{aligned}
$$
so $f$ is a decreasing function for each $y$. Thus, we have $f(x,y)\gammae f(y,y)=0$, so
$$
x+y-\sigmaqrt{xy}\gammae \sigmaqrt{\frac{x^2+y^2}{2}}\, ,
$$
and the equality is attained if and only if $x=y$. Therefore for any $uv\in E(G)$,
$$
d_u+d_v - \sigmaqrt{d_ud_v} \gammae \sigmaqrt{\frac{d_u^2+d_v^2}{2}}\,
$$
and the equality is attained if and only if $d_u=d_v$. The desired result is obtained by adding up for each $uv\in E(G)$.
\varepsilonnd{proof}
\sigmaection{Discussion and conclusions}
We have introduced a degree--based variable topological index inspired on the power mean (also known
as generalized mean and H\"older mean).
We named this new index as the mean Sombor index $mSO_\alphalpha(G)$, see Eq.~(\ref{MG}).
For given values of $\alphalpha$, the mean Sombor index is related to well-known topological indices, in particular with
several Sombor indices.
In addition, through a QSPR study, we showed that mean Sombor indices are suitable to model
acentric factor, boiling point, heat capacity at constant pressure, standard enthalpy of vaporization,
enthalpy of formation, heat of vaporization at 25$^{\circ}$C,
enthalpy of vaporization, and entropy of octane isomers; see Section~\ref{QSPR}.
We have also discussed some mathematical properties of mean Sombor indices as well as stated bounds
and new relationships with known topological indices; see Section~\ref{ineq}, where the mean Sombor matrix
was also introduced in Eq.~(\ref{meanSmat}).
Finally, we would like to remark that, in addition to all the known indices that the mean Sombor index
reproduces, we discover the indices
$$
mSO_{-\infty}(G) \varepsilonquiv {mSO}_{\alphalpha\to-\infty}(G) = \sigmaum_{uv \in E(G)} \min(d_u,d_v)
$$
and
$$
mSO_\infty(G) \varepsilonquiv {mSO}_{\alphalpha\to\infty}(G) = \sigmaum_{uv \in E(G)} \max(d_u,d_v) \, ;
$$
which, from the QSPR study of Section~\ref{QSPR}, were shown to be good predictors of
the standard enthalpy of vaporization, the enthalpy of vaporization, and the heat of vaporization at
25$^{\circ}$C of octane isomers.
It is fair to mention that several known topological indices include the min/max functions; among them
we can mention the min-max (and max-min) rodeg index, the min-max (and max-min) sdi index, and the min-max
(and max-min) deg index, introduced in Ref.~\cite{VG10}. However, to the best of our knowledge, the indices
$mSO_{\pm \infty}(G)$ have not been theoretically studied before (for an exception where the equivalent
Stolarsky--Puebla indices have been computationally applied to random networks see~\cite{MAAS22}).
Thus, we believe that a theoretical study of these two new indices is highly pertinent.
\section*{Acknowledgment}
J.A.M.-B. acknowledges financial support from CONACyT (Grant No.~A1-S-22706) and
BUAP (Grant No.~100405811-VIEP2021).
E.D.M. and J.M.R. were supported by a grant from Agencia Estatal de Investigaci\'on (PID2019-106433GBI00/
AEI/10.13039/501100011033), Spain. J.M.R. was supported by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M23), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
\footnotesize
\begin{thebibliography}{00}
\bibitem{BMRS}
P. Bosch, E. D. Molina, J. M. Rodr{\'\i}guez, J. M. Sigarreta,
Inequalities on the Generalized ABC Index,
{\it Mathematics} {\bf 9} (2021) 1151.
\bibitem{B03}
P. S. Bullen,
Handbook of Means and Their Inequalities
(Kluwer, Dordrecht, 2003).
\bibitem{S09}
S. Sykora,
Mathematical means and averages: basic properties. 3. Stan's Library,
(Castano Primo, Italy, 2009)
doi:10.3247/SL3Math09.001.
\bibitem{OT57}
B. Ostle, H. L. Terwilliger,
A comparison of two means,
{\it Proc. Montana Acad. Sci.} {\bf 17} (1957) 69--70.
\bibitem{C66}
B. C. Carlson,
Some inequalities for hypergeometric functions,
{\it Proc. Amer. Math. Soc.} {\bf 17} (1966) 32--39.
\bibitem{L74}
T.-P. Lin,
The power mean and the logarithmic mean,
{\it The American Mathematical Monthly} {\bf 81} (1974) 879--883.
\bibitem{G13}
I. Gutman,
Degree-based topological indices,
{\it Croat. Chem. Acta} {\bf 86} (2013) 351--361.
\bibitem{S15}
J. M. Sigarreta,
Bounds for the geometric-arithmetic index of a graph,
{\it Miskolc Math. Notes} {\bf 16} (2015) 1199--1212.
\bibitem{S21}
J. M. Sigarreta,
Mathematical properties of variable topological indices,
{\it Symmetry} {\bf 13} (2021) 43.
\bibitem{VG10}
D. Vuki\v{c}evi\'c, M. Ga\v{s}perov,
Bond additive modeling 1. Adriatic indices,
{\it Croat. Chem. Acta} {\bf 83} (2010) 243--260.
\bibitem{V10}
D. Vuki\v{c}evi\'c,
Bond additive modeling 2. Mathematical properties of max-min rodeg index,
{\it Croat. Chem. Acta} {\bf 83} (2010) 261--273.
\bibitem{GFE14}
I. Gutman, B. Furtula, C. Elphick,
Three new/old vertex--degree--based topological indices,
{\it MATCH Commun. Math. Comput. Chem.} {\bf 72} (2014) 617--632.
\bibitem{GT72}
I. Gutman, N. Trinajsti\'c,
Graph theory and molecular orbitals. Total $\pi$-electron energy of alternant hydrocarbons,
{\it Chem. Phys. Lett.} {\bf 17} (1972) 535--538.
\bibitem{G21a}
I. Gutman,
Geometric approach to degree-based topological indices: Sombor indices,
{\it MATCH Commun. Math. Comput. Chem.} {\bf 86} (2021) 11--16.
\bibitem{RDA21}
T. Reti, T. Doslic, A. Ali,
On the Sombor index of graphs,
{\it Contrib. Math.} {\bf 3} (2021) 11--18.
\bibitem{K19}
V. R. Kulli,
The $(a,b)$-$KA$ indices of polycyclic aromatic hydrocarbons and benzenoid systems,
{\it Int. J. Math. Trends Tech.} {\bf 65} (2019) 115--120.
\bibitem{MAAS22}
J. A. Mendez-Bermudez, R. Aguilar-Sanchez, R. Abreu-Blaya, J. M. Sigarreta,
Stolarsky--Puebla index,
{\it Discrete Math. Lett.} {\bf 9} (2022) 10--17.
\bibitem{MDDP21}
S. Mondal, A. Dey, N. De, A. Pal,
QSPR analysis of some novel neighbourhood degree-based topological descriptors,
{\it Complex Intel. Syst.} {\bf 7} (2021) 977--996.
\bibitem{MMM21}
I. Milovanovi\'c, E. Milovanovi\'c, M. Mateji\'c,
On some mathematical properties of Sombor indices,
{\it Bull. Int. Math. Virtual Inst.} {\bf 11} (2021) 341--353.
\end{thebibliography}
\end{document}
\begin{document}
\title{Envy-Free and Pareto-Optimal Allocations for \ Agents with Asymmetric Random Valuations}
\begin{abstract}
We study the problem of allocating $m$ indivisible items to $n$ agents with additive utilities.
It is desirable for the allocation to be both fair and efficient, which we formalize through the notions of \emph{envy-freeness} and \emph{Pareto-optimality}.
While envy-free and Pareto-optimal allocations may not exist for arbitrary utility profiles, previous work has shown that such allocations exist with high probability assuming that all agents' values for all items are independently drawn from a common distribution.
In this paper, we consider a generalization of this model where each agent's utilities are drawn independently from a distribution \emph{specific to the agent}.
We show that envy-free and Pareto-optimal allocations are likely to exist in this asymmetric model when $m=\Omega\left(n\, \log n\right)$, which is tight up to a $\log \log$ gap that also remains open in the symmetric special case.
Furthermore, these guarantees can be achieved by a polynomial-time algorithm.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Imagine that the neighborhood children go trick-or-treating and return successfully, with a large heap of candy between them.
They then try to divide the candy amongst themselves, but quickly reach the verge of a fight:
Each has their own conception of which sweets are most desirable, and, whenever a child suggests a way of splitting the candy, another child feels unfairly disadvantaged.
As a (mathematically inclined) adult in the room, you may wonder:
Which allocation of candies should you suggest to keep the peace? And, is it even possible to find such a fair distribution?
In this paper, we study the classic problem of fairly dividing $m$ items among $n$ agents~\citep{BCM16}, as exemplified by the scenario above.
We assume that the items we seek to divide are goods (i.e., receiving an additional piece of candy never makes a child less happy), that items are indivisible (candy cannot be split or shared), and that the agents have \emph{additive} valuations (roughly: a child's value for a piece of candy does not depend on which other candies they receive).
We will understand an allocation to be fair if it satisfies two axioms: \emph{envy-freeness} (EF) and \emph{Pareto-optimality} (PO).
First, fair allocations should be envy-free, which means that no agent should strictly prefer another agent's bundle to their own.
After all, if an allocation violates envy-freeness, the former agent has good reason to contest it as unfair.
Second, fair allocations should be Pareto-optimal, i.e., there should be no reallocation of items making some agent strictly better off and no agent worse off.
Not only does this axiom rule out allocations whose wastefulness is unappealing; it is also arguably necessary to preserve envy-freeness:
Indeed, if a chosen allocation is envy-free but not Pareto-optimal, rational agents can be expected to trade items after the fact, which might lead to a final allocation that is not envy-free after all.
Unfortunately, even envy-freeness alone is not always attainable. For instance, if two agents like a single item, the agent who does not receive it will always envy the agent who does.
Motivated by the fact that worst-case allocation problems may not have fair allocations, a line of research in fair division studies asymptotic conditions for the existence of such allocations, under the assumption that the agents' utilities are random rather than adversarially chosen~\cite[e.g.][]{DGK+14,MS19}.
Specifically, these papers assume that all agents' utilities for all items are independently drawn from a common distribution $\mathcal{D}$, a model which we call the \emph{symmetric model}.
Among the algorithms shown to satisfy envy-freeness in this setting, only one is also Pareto-optimal: the (utilitarian) \emph{welfare-maximizing algorithm}, which assigns each item to the agent who values it the most.
This algorithm is Pareto-optimal, and it is also envy-free with high probability as the number of items $m$ grows in $\Omega\left(n \, \log n\right)$.\footnote{In fact, \citet{DGK+14} prove this result for a somewhat more general model than the one presented above (certain correlatedness between distributions is also allowed), but their model assumes the key symmetry between agents that we discuss below.}
Since envy-free allocations may exist with only vanishing probability for $m \in \Theta\left(n \, \log n / \log \log n\right)$ in the symmetric model~\citep{MS19}, the above result characterizes almost tightly when envy-free and Pareto-optimal allocations exist in this model.
Zooming out, however, this positive result is unsatisfying in that, outside of this specific random model, the welfare-maximizing algorithm can hardly be called ``fair'':
For example, if an agent~A tends to have higher utility for most items than agent~B, the welfare-maximizing algorithm will allocate most items to agent~A, which can cause large envy for agent~B.
In short, the welfare-maximizing algorithm leads to fair allocations only because the model assumes each agent to be equally likely to have the largest utility, an assumption that limits the lessons that can be drawn from this model.
Motivated by these limitations of prior work, this paper investigates the existence of fair allocations in a generalization of the symmetric model, which we refer to as the \emph{asymmetric model}.
In this model, each agent $i$ is associated with their own distribution $\mathcal{D}_i$, from which their utility for all items is independently drawn.
\begin{figure}
\caption{The top panel shows probability density functions of five agents' utility distributions. The bottom panel shows densities after scaling distributions by the given multipliers. When drawing an independent sample from each scaled distribution, each sample is the largest with probability $1/5$.}
\label{fig:scaling}
\end{figure}
Within this model, we aim to answer the question: \emph{When do envy-free and Pareto-optimal allocations exist for agents with asymmetric valuations?}
\subsection{Our Techniques and Results}
\label{sec:results}
\begin{figure}
\caption{Existing and new results on when EF and EF+PO allocations are guaranteed to exist in both models. Bold results are new.}
\label{fig:result}
\end{figure}
In \cref{sec:symm}, we study which results in the symmetric model generalize to the asymmetric model.
In particular, we apply an analysis by \citet{MS20} to the asymmetric model in a black-box manner to prove envy-free allocations exist when $m \in \Omega\left(n \, \log n / \log \log n\right)$, which is tight with existing impossibility results on envy-freeness. However, this approach does not preserve Pareto-optimality.
Using a new approach, we prove in \cref{sec:existence} that generalizing the random model from symmetric to asymmetric agents does not substantially decrease the frequency of envy-free and Pareto-optimal allocations.
The key idea is to find a multiplier $\beta_i > 0$ for each agent $i$ such that, when drawing an independent sample $u_i$ from each utility distribution $\mathcal{D}_i$, each agent $i$ has an equal probability of $\beta_i u_i$ being larger than the $\beta_{j} u_{j}$ of all other agents $j \neq i$, which we call the agent's \emph{resulting probability} from these multipliers.
\cref{fig:scaling} illustrates how five utility distributions can be rescaled in this way.
If all resulting probabilities of a set of multipliers equal $1/n$, we call these multipliers \emph{equalizing}.
A set of equalizing multipliers defines what we call its \emph{multiplier allocation}, which assigns each item to the agent $i$ whose utility weighted by $\beta_i$ is the largest.
Put differently, the multiplier allocation simulates the welfare-maximizing algorithm in an instance in which each agent $i$'s distribution is scaled by $\beta_i$.
Just like the welfare-maximizing allocations, the multiplier allocations are Pareto-optimal by construction, and the similarity between both allocation types allows us to apply proof techniques developed for the welfare-maximizing algorithm and the symmetric setting to show envy-freeness.
The core of our paper is a proof that equalizing multipliers always exist, which we show using Sperner's lemma.
Since an algorithm based on this direct proof would have exponential running time, we design a polynomial-time algorithm for computing approximately equalizing multipliers, i.e., multipliers whose resulting probabilities lie within $[1/n - \delta, 1/n + \delta]$ for a $\delta > 0$ given in the input.
Having established the existence of equalizing multipliers, we go on to show that the multiplier allocation is envy-free with high probability.
To obtain this result, we demonstrate a constant-size gap between each agent's expected utility for an item conditioned on them receiving the item and the agent's expected utility conditioned on another agent receiving the item, and then use a variant of the argument of \citet{DGK+14} to show that multiplier allocations are envy-free with high probability when $m \in \Omega\left(n \, \log n\right)$.
This guarantee extends to the case where we allocate based on multipliers that are sufficiently close to equalizing, which means that our polynomial-time \emph{approximate multiplier algorithm} is Pareto-optimal and envy-free with high probability.
In \cref{sec:experiments}, we empirically evaluate how many items are needed to guarantee envy-free and Pareto-optimal allocations for a fixed collection of agents.
We find that the approximate multiplier algorithm needs relatively large numbers of items to ensure envy-freeness; that the round robin algorithm violates Pareto-optimality in almost all instances; and that the Maximum Nash Welfare (MNW) algorithm achieves both axioms already for few items but that its running time limits its applicability.
For larger numbers of items, the approximate multiplier algorithm satisfies both axioms and excels by virtue of its running time.
\subsection{Related work}
The question of when fair allocations exist for random utilities was first raised by \citet{DGK+14}, whose main result we have already discussed.
Our paper also builds on work by \citet{MS19,MS20}, who prove the lower bound on the existence of envy-free allocations mentioned in the introduction and that the classic round robin algorithm produces envy-free allocations in the symmetric model for slightly lower $m$ than the welfare-maximizing algorithm.
A bit further afield, \citet{Suksompong16} and \citet{AMN+17} study the existence of proportional and maximin-share allocations (two relaxations of envy-freeness) in the symmetric model, and \citet{MS17a} study envy-freeness when items are allocated to groups rather than to individuals.
None of these papers consider Pareto-optimality, perhaps because fair division yields few tools for simultaneously guaranteeing envy-freeness and Pareto-optimality.
The asymmetric model we investigate has been previously used, for example, by \citet{KPW16} to study the existence of maximin-share allocations.
While part of their proof applies the results by \citeauthor{DGK+14} to construct envy-free allocations in the asymmetric model, as do we, their allocation algorithm is not Pareto-optimal (see \cref{sec:symefpo}).
\citet{FGH+19} also consider maximin-share allocations in the asymmetric model, for agents with weighted entitlements.
\citet{ZP20} study allocation problems in the asymmetric model, when items arrive online.
While they do consider and achieve Pareto-optimality, they only obtain approximate notions of envy-freeness.
\new{Finally, \citet{smoothed} study the existence of envy-free allocations and of both proportional and Pareto-optimal allocations in an expressive utility model based on smoothed analysis.}
\section{Preliminaries}
\label{sec:model}
\paragraph{General Definitions.}
We consider a set $M$ of $m$ indivisible items being allocated to a group $N=\{1, \dots, n\}$ of $n$ agents. Each agent $i\in N$ has a \emph{utility} $u_i(\alpha)\geq0$ for each item $\alpha\in M$, indicating their degree of preference for the item. The collection of agent--item utilities makes up a \emph{utility profile}. An \emph{allocation} $\mathcal{A}=\{A_i\}_{i\in N}$ is a partition of the items into $n$ \emph{bundles}: $M = A_1 \cup \cdots \cup A_n$, where agent $i$ gets the items in bundle $A_i$. Under our assumption that the agents' utilities are \emph{additive}, agent $i$'s \emph{utility} for a subset of items $A\subseteq M$ is $u_i\left(A\right)=\sum_{\alpha\in A}u_i\left(\alpha\right)$.
An allocation $\mathcal{A}=\{A_i\}_{i\in N}$ is said to be \emph{envy-free} (EF) if $u_i(A_i)\geq u_i(A_j)$ for all $i,j \in N$, i.e., if each agent weakly prefers their own bundle to any other agent's bundle.
We say that an allocation $\mathcal{A}=\{A_i\}_{i\in N}$ is \emph{Pareto dominated} by another allocation $\mathcal{A'}=\{A'_i\}_{i\in N}$ if $u_i(A_i)\leq u_i(A'_i)$ for all $i\in N$, with at least one inequality holding strictly.
An allocation is \emph{Pareto-optimal} (PO) if it is not Pareto dominated by any other allocation.
An allocation is called \emph{fractionally Pareto-optimal} (fPO) if it is not even Pareto dominated by any ``fractional'' allocation of items.
For our purposes, it suffices to note that an allocation is fPO iff there exist multipliers $\{\beta_i > 0\}_{i \in N}$ such that each item $\alpha$ is allocated to an agent $i$ with maximal $\beta_i u_i(\alpha)$~\citep{Negishi60}.
\paragraph{Asymmetric Model.}
In our asymmetric model, each agent $i$ is associated with a \emph{utility distribution} $\mathcal{D}_i$, a nonatomic probability distribution over $[0, 1]$.
The model assumes that the utilities $u_i(\alpha)$ for all $\alpha\in M$ are independently drawn from $\mathcal{D}_i$.
For simplicity, we just write $u_i$ as a random variable for $u_i(\alpha)$ if we are not talking about a specific item $\alpha$, where $u_i\sim \mathcal{D}_i$.
Let $f_i$ and $F_i$ denote the probability density function (PDF) and cumulative distribution function (CDF) of $\mathcal{D}_i$.
For our main result, we make the following assumptions on utility distributions:
\emph{(a) Interval support}: The support of each $\mathcal{D}_i$ is an interval $[a_i, b_i]$ for $0 \leq a_i < b_i \leq 1$.
\emph{(b) $\left(p, q\right)$-PDF-boundedness}: For constants $0<p<q$, the density of each $\mathcal{D}_i$ is bounded between $p$ and $q$ within its support. These two assumptions are weaker than those by \citet{MS20}, who additionally require all distributions to have support $[0,1]$.
A random event occurs \emph{with high probability} if the event's probability converges to 1 as $n \to \infty$.
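As a purely illustrative aside (the code and names below are ours, not part of the model), a distribution satisfying both assumptions can be built, for instance, from a piecewise-constant density; the sampler interface (a function of a sample size and a random generator) is reused in the sketches that follow.
\begin{verbatim}
import numpy as np

def two_level_sampler(c_low, a=0.0, b=1.0):
    """Toy utility distribution with interval support [a, b]: its density equals
    c_low on the lower half and c_high = 2/(b-a) - c_low on the upper half,
    so it is (min(c_low, c_high), max(c_low, c_high))-PDF-bounded."""
    mid = (a + b) / 2
    weight_low = c_low * (b - a) / 2          # probability mass of the lower half
    def sample(size, rng):
        lower = rng.random(size) < weight_low
        return np.where(lower, rng.uniform(a, mid, size), rng.uniform(mid, b, size))
    return sample

rng = np.random.default_rng(0)
sample = two_level_sampler(0.5)               # densities 0.5 and 1.5 on [0, 1]
print(sample(5, rng))
\end{verbatim}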
\section{Takeaways From the Symmetric Model}
\label{sec:symm}
We quickly review results obtained in the symmetric model, and to which degree they carry over to the asymmetric model.
\paragraph{Non-Existence of EF Allocations:}
Since the symmetric model is a special case of the asymmetric model\,---\,in which all $\mathcal{D}_i$ are equal\,---\,this negative result immediately applies:
\begin{proposition}[\citealt{MS19}]\footnote{Here, we present a special case; the original result holds for different choices of distribution and leaves some flexibility in $m$.}
There exists $c > 0$ such that, if $m = (\lfloor c \, \log n / \log \log n \rfloor + 1/2) \, n$ and all utility distributions are uniform on $[0,1]$, then, with high probability, no envy-free allocation exists.
\end{proposition}
\paragraph{Existence of EF Allocations:}
In the symmetric model, \citet{MS20} give an allocation algorithm, round robin, that satisfies EF with high probability.
An interesting property of this algorithm is that an agent's allocation given a utility profile depends not on the \emph{cardinal} information of the agents' utilities, but only on their \emph{ordinal} preference order over items.
Using this property, we prove in \full{\cref{app:rr}}{the full version} that their result generalizes to the asymmetric model since, in a nutshell, an agent $i$'s envy of the other agents is indistinguishable between the asymmetric model and a symmetric model with common distribution $\mathcal{D}_i$.
\begin{restatable}{proposition}{proprr}
\label{prop:rr}
When distributions have interval support and are $(p, q)$-PDF-bounded, if $m \in \Omega\left(n \, \log n / \log \log n\right)$, an envy-free allocation exists with high probability.
\end{restatable}
To our knowledge, we are the first to observe that the analysis by \citeauthor{MS20} generalizes in this way, which improves on the previously best known upper bound of $m \in \Omega\left(n \, \log n\right)$ in the asymmetric model due to \citet{KPW16}.
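Since the round robin algorithm is referred to repeatedly in what follows, here is a minimal Python sketch of it (our own, for illustration); note that it only consults the ordinal information discussed above.
\begin{verbatim}
import numpy as np

def round_robin(utilities):
    """Agents take turns 1, 2, ..., n, 1, 2, ... and each picks their favorite
    item among those not yet allocated; only the preference order matters."""
    n, m = utilities.shape
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    for turn in range(m):
        agent = turn % n
        favorite = max(remaining, key=lambda item: utilities[agent, item])
        bundles[agent].append(favorite)
        remaining.remove(favorite)
    return bundles

rng = np.random.default_rng(0)
print(round_robin(rng.random((3, 7))))        # 3 agents, 7 items with U[0,1] utilities
\end{verbatim}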
\paragraph{Existing Approaches Do Not Provide EF+PO:}
\label{sec:symefpo}
Generalizing the existence result for EF and PO allocations by \citet{DGK+14} to the asymmetric model is more challenging than the round robin result above, since cardinal information is crucial for the PO property.
In \full{\cref{app:maxpercentile}}{the full version of the paper}, we illustrate this point by considering how \citet{KPW16} apply the theorem of \citeauthor{DGK+14} to prove the existence of EF allocations in the asymmetric model; namely, they assign each item to the agent for whom the item is in the highest percentile of their utility distribution.
On an example, we show that this approach fundamentally violates PO, and that assigning items based on multipliers is the most natural way to guarantee PO.
\new{\full{In \cref{app:normalize},}{In the full version,} we also give an example showing that normalizing each agent's values to add up to one\,---\,perhaps the most obvious way to obtain multipliers\,---\,is not sufficient to provide EF.}
\section{Existence of EF+PO Allocations}
\label{sec:existence}
We now prove our main theorem:
\begin{theorem}
\label{thm:existence}
Suppose that all utility distributions have interval support and are $(p,q)$-PDF-bounded for some $p,q$.
If $m \in \Omega\left(n \, \log n\right)$ as $n \to \infty$,\footnote{Alternatively, we may assume $n \in O\left(m / \log m\right)$ as $m \to \infty$ to avoid the assumption that $n \to \infty$, as do \citet{DGK+14}.} then, with high probability, an envy-free and (fractionally) Pareto-optimal allocation exists and can be found in polynomial time.
\end{theorem}
In \cref{sec:existence_multipliers}, we prove that we can always find multipliers $\{\beta_i\}_{i \in N}$ that equalize each agent's probability of receiving a random-utility item in the multiplier allocation (which allocates item $\alpha$ to the agent with maximal $\beta_i u_i(\alpha)$ and is trivially fPO).
We also discuss how to efficiently find multipliers leading to approximately equalizing probabilities.
Next, in \cref{sec:existence_gap}, we show that an agent's expected utility for an item allocated to themselves is larger by a constant than their expected utility for an item allocated to another agent.
In \cref{sec:puttingtogetheref}, we combine these properties to prove envy-freeness.
\subsection{Existence of Equalizing Multipliers}
\label{sec:existence_multipliers}
For a set of multipliers $\vec{\beta} \in \mathbb{R}^n_{>0}$ and an agent $i$, we denote $i$'s resulting probability by
\begin{align}
p_i(\vec{\beta}) \coloneqq{}& \mathbb{P}[\beta_i u_i = \textstyle{\max_{j\in N}\beta_j u_j}]& \notag \\
={}& \int_0^1 f_i(u) \, \textstyle{\prod_{j \in N \setminus \{i\}}} F_j(\beta_i / \beta_j \, u) \, du. \label{eq:piint}
\end{align}
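As an illustration, the resulting probabilities can also be estimated by straightforward Monte Carlo sampling; the following Python sketch (ours, with hypothetical function names) does so for the sampler interface introduced above.
\begin{verbatim}
import numpy as np

def resulting_probabilities(betas, samplers, trials=200_000, rng=None):
    """Monte Carlo estimate of p_i(beta) = P[beta_i u_i = max_j beta_j u_j],
    where samplers[i](size, rng) draws i.i.d. utilities from D_i."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(betas)
    u = np.column_stack([samplers[i](trials, rng) for i in range(n)])   # trials x n
    winners = (u * np.asarray(betas)).argmax(axis=1)   # ties have probability zero
    return np.bincount(winners, minlength=n) / trials

# Example: D_1 uniform on [0, 1], D_2 uniform on [1/4, 3/4]; by symmetry,
# the multipliers (1, 1) are equalizing, so both estimates should be near 1/2.
samplers = [lambda size, rng: rng.uniform(0.0, 1.0, size),
            lambda size, rng: rng.uniform(0.25, 0.75, size)]
print(resulting_probabilities([1.0, 1.0], samplers))
\end{verbatim}
In practice, the sampling noise would have to be kept well below the tolerance $\delta$ used below, or the integral in \cref{eq:piint} evaluated by quadrature instead.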
\subsubsection{Existence Proof Using Sperner's Lemma}
The existence of equalizing multipliers can be established quite easily using Sperner's lemma:
\begin{theorem}
\label{thm:equalizing}
For any set of utility distributions, there exists a set of equalizing multipliers.
\end{theorem}
\begin{proof}[Proof sketch]
Since scaling all multipliers by the same factor does not change the resulting probabilities, we may restrict our focus to multipliers within the $(n-1)$-dimensional simplex $S = \{ \vec{\beta} \in \mathbb{R}_{\geq0}^n \mid \sum_{i \in N} \beta_i = 1\}$.
We define a coloring function $f: S \rightarrow N$, which maps each set of multipliers in the simplex to an agent with maximum resulting probability.
Clearly, points on a face $\beta_i = 0$ are not colored with the color belonging to agent $i$: some other agent has a positive multiplier and thus, almost surely, a strictly greater scaled utility than $i$, so $i$'s resulting probability is zero and cannot be maximal.
Now, consider a simplicization of $S$, i.e., a partition of $S$ into small simplices meeting face to face (generalizing the notion of a triangulation in the 2-D simplex).
Sperner's lemma shows the existence of a small simplex that is panchromatic, i.e., whose $n$ vertices are each colored with a different agent.
This small simplex constitutes a neighborhood of multipliers such that, for each agent $i$, there is a set of multipliers $\vec{\beta}$ in this neighborhood for which agent $i$'s resulting probability is at least as large as that of any other agent, and therefore at least $1/n$.
By successively refining the simplicization, we can make these neighborhoods arbitrarily small.
In \full{\cref{app:sperner}}{the full version}, we prove the existence of a set of exactly equalizing multipliers, which follows from the Bolzano-Weierstraß Theorem and continuity of the functions $p_i$ on $\mathbb{R}_{>0}^n$\full{\ (\cref{app:continuous})}{}.
\end{proof}
\subsubsection{An Approximation Algorithm for Equalizing Multipliers}
The proof above is succinct, but not particularly helpful in finding equalizing multipliers computationally.\footnote{When measuring running time, we assume that the algorithm has access to an oracle allowing it to compute the $p_i(\vec{\beta})$ for a given $\vec{\beta}$ in constant time. This choice abstracts away from the distribution-specific cost and accuracy of computing the integral in \cref{eq:piint}.}
Though the application of Sperner's lemma can be turned into an approximation algorithm, using it to find multipliers such that all resulting probabilities lie within $\delta > 0$ of $1/n$ requires $\textit{poly}(n) \, n! \, \left(\log(2 \, q) \, \left(1 + \frac{4 \, n \, q}{\delta}\right)\right)^n$ time (\full{\cref{app:sperner}}{see full version}).
This large runtime complexity points to a more philosophical shortcoming of our proof of \cref{thm:equalizing}, namely, that it does very little to elucidate the structure of how multipliers map to resulting probabilities.
Given that the proof barely made use of any properties of the $p_i$ other than continuity, it is natural that the resulting algorithm resembles a complete search over the space of multipliers.
By contrast, our polynomial-time algorithm for finding approximately-equalizing multipliers will be based on three structural properties of the $p_i$ (proved in \full{\cref{app:boundmult}}{the full version}):
\paragraph{Local monotonicity:} If we change the multipliers from $\vec{\beta}$ to $\vec{\beta}'$, and if agent $i$'s multiplier increases by the largest factor ($\beta_i'/\beta_i = \max_{j \in N} \beta_j' / \beta_j$), then $i$'s resulting probability weakly increases\full{\ (\cref{lem:localmonotonicity})}{}.
\paragraph{Bounded probability change:} If we change a set of multipliers by increasing $i$'s multiplier by a factor of $(1 + \epsilon)$ for some $\epsilon > 0$ and leaving all other multipliers equal, then $i$'s resulting probability increases by at most $2 \, q \, \epsilon$\full{\ (\cref{lem:lipschitzmultipliers})}{}.
\paragraph{Bound on multipliers:} If $i$'s multiplier is at least $2\,q$ times as large as $j$'s multiplier, then $i$ must have a strictly larger resulting probability than $j$\full{\ (\cref{lem:boundmult})}{}.
\noindent Crucially, we can combine the first two properties to control how the resulting probabilities evolve while changing the multipliers in a specific ``step'' operation, which is the key building block of our approximation algorithm:
\paragraph{Step guarantee:} If we change a set of multipliers by increasing the multipliers of a subset $S$ of agents by a factor of $(1 + \epsilon)$ while leaving the other multipliers unchanged, then (a) the resulting probabilities of all $i \in S$ weakly increase, but at most by $2 \, q \, \epsilon$, and (b) the resulting probabilities of all $i \notin S$ weakly decrease, also by at most $2 \, q \, \epsilon$ (\full{\cref{lem:smallstep}}{proof in full version}).
\begin{algorithm}[tb]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{An oracle to compute resulting probabilities, a constant $0 < \delta \leq 1$, and a PDF upper bound $q$}
\Output{A vector of multipliers $\vec{\beta} \in \mathbb{R}_{>0}^n$}
$\epsilon \leftarrow \delta / (2 \, q)$\;
$\vec{z} \leftarrow \vec{0}$\;
\While{$\exists i \in N.\; \left|p_i\big((1+\epsilon)^{\vec{z}}\big) - 1/n\right| > \delta$}{
$S \leftarrow \{i \in N \mid p_i\big((1+\epsilon)^{\vec{z}}\big) \leq 1/n\}$\;
$\vec{z} \leftarrow \vec{z} + \mathds{1}_S$\;
}
\Return $(1+\epsilon)^{\vec{z}}$
\caption{Equalizing Multipliers}
\label{alg:multipliers}
\end{algorithm}
\noindent Algorithm~\ref{alg:multipliers} keeps track of a set of multipliers $(1 + \epsilon)^{\vec{z}} = ((1 + \epsilon)^{z_1}, \dots, (1 + \epsilon)^{z_n})^\mathsf{T}$.
In each loop iteration, we use the step operation to increase all resulting probabilities originally below $1/n$ and decrease all resulting probabilities originally above $1/n$, both by a bounded amount so that they cannot overshoot $1/n$ by too much.
After polynomially many steps, all resulting probabilities lie within a band around $1/n$, which means that the multipliers are approximately equalizing.
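A direct transcription of Algorithm~\ref{alg:multipliers} into Python reads as follows (an illustrative sketch; \texttt{p} stands for the oracle returning all resulting probabilities, e.g.\ the Monte Carlo estimator sketched above):
\begin{verbatim}
def equalizing_multipliers(p, n, q, delta):
    """Algorithm 1: multiplicative updates until every resulting probability
    lies in [1/n - delta, 1/n + delta]; p(betas) returns (p_1, ..., p_n)."""
    eps = delta / (2 * q)
    z = [0] * n                                   # multipliers are (1 + eps) ** z_i
    while True:
        betas = [(1 + eps) ** zi for zi in z]
        probs = p(betas)
        if all(abs(pi - 1 / n) <= delta for pi in probs):
            return betas
        # raise the multiplier of every agent whose probability is at most 1/n
        z = [zi + (1 if pi <= 1 / n else 0) for zi, pi in zip(z, probs)]
\end{verbatim}
For instance, \texttt{equalizing\_multipliers(lambda b: resulting\_probabilities(b, samplers), 2, 2, 0.01)} approximately equalizes the two-agent example above, where $q=2$ upper-bounds both densities.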
\begin{theorem}
\label{thm:equalizingpoly}
In time $\mathcal{O}(n^2 \, q \, \log (q) \, \delta^{-1})$, Algorithm~\ref{alg:multipliers} computes a vector of multipliers $\vec{\beta}$ such that, for all $i \in N$,
$ 1/n - \delta \leq p_i(\vec{\beta}) \leq 1/n + \delta$.
\end{theorem}
\begin{proof}
At the beginning of an iteration of the loop, partition the agents into three sets $Z_{\ell}$, $Z_{m}$, and $Z_{h}$ depending on whether $p_i\big((1+\epsilon)^{\vec{z}}\big)$ is smaller than $1/n - \delta$, is in $[1/n - \delta, 1/n + \delta]$, or is larger than $1/n + \delta$, respectively.
We make two observations: \textbf{(a)} Once an agent is in $Z_m$, they will always stay there since, by the step guarantee, their probability moves by at most $\delta=2 \, q \, \epsilon$ per iteration and moves up whenever it was below $1/n$ and down whenever it was above $1/n$. \textbf{(b)} Agents cannot move between $Z_\ell$ and $Z_h$ within one iteration, since the probabilities belonging to $Z_\ell$ and $Z_h$ are separated by a gap of size $2\,\delta$, whereas the step guarantee shows that an agent's probability moves by at most $\delta$.
Next, we show that the algorithm terminates; specifically, that it exits the loop after at most $T \coloneqq \lceil\frac{\log(2 \, q)}{\log(1 + \epsilon)}\rceil \cdot (n - 1)$ iterations.
For the sake of contradiction, suppose that at the beginning of the $(T+1)$th iteration of the loop, some agent was not yet in $Z_m$.
For now, say that such an $i$ was in $Z_h$ and let $i$ have maximal $p_i\big((1+\epsilon)^{\vec{z}}\big)$.
Then, since $i$ has always been in $Z_h$, $z_i$ has never been increased and it still holds that $z_i = 0$.
At the same time, in each round, the multipliers of some $|S| \geq 1$ other agents get increased, from which it follows that some other agent $j$ must have $z_j \geq \lceil\frac{\log(2 \, q)}{\log(1 + \epsilon)}\rceil$.
Then, $\beta_j / \beta_i = (1 + \epsilon)^{z_j-z_i} \geq 2\,q$, which implies that $p_j > p_i$ by the bounds-on-multipliers property, which contradicts our choice of $i$.
The case where $i \in Z_\ell$ is symmetric:
This time, choose an $i$ with minimal $p_i \, \big((1+\epsilon)^{\vec{z}}\big)$.
Since $i \in Z_\ell$, $z_i$ must have been increased in every round and equal $T$.
Furthermore, in each previous round, $|S| \leq n-1$: since the algorithm re-entered the loop, not all probabilities equal $1/n$, and because the resulting probabilities sum to one, some agent's probability is larger than $1/n$ and that agent is not included in $S$.
Hence, there must be another agent $j$ with $z_j \leq T - \lceil\frac{\log(2 \, q)}{\log(1 + \epsilon)}\rceil$.
This implies that $p_j < p_i$, contradicting our choice of $i$.
It follows that the loop is executed at most $T$ times. Taking into account that each iteration requires $\mathcal{O}(n)$ oracle queries, the total time complexity is in
$T \cdot \mathcal{O}(n) \in \mathcal{O}(n^2 \, q \, \log(q)/\delta)$.\footnote{$T \cdot \mathcal{O}(n) = \lceil \log(2 \, q) / \log(1 + \delta /(2 \,q))\rceil \, (n-1) \, \mathcal{O}(n)
\leq \lceil \log(2 \, q) \, ((4 \, q)/\delta) \rceil \, (n-1) \, \mathcal{O}(n) \in \mathcal{O}(n^2 \, q \, \log(q)/\delta)$, where the inequality uses $\log(1+x)\geq x/2$ for $0<x\leq 1$, applied with $x=\delta/(2\,q)\leq 1$.}
The bound on the resulting probabilities follows from the fact that $Z_m = N$ when the algorithm exits.
\end{proof}
As another demonstration of the rich structure in the $p_i$, we show in \full{\cref{app:unique}}{the full version} that the equalizing multipliers are unique, using a strengthened local-monotonicity property.
\full{\new{The algorithmic proof above yields an alternative proof of the existence of perfectly equalizing multipliers, by applying a limit argument similar to the one at the end of \cref{thm:equalizing}.}}{}
\subsection{Gap between Expected Utilities}
\label{sec:existence_gap}
Having established the existence of (approximately) equalizing multipliers $\vec{\beta}$, we will now analyze the corresponding multiplier allocation, which assigns each item $\alpha$ to the agent $i$ with maximal $\beta_i \, u_i(\alpha)$.
By definition, this allocation satisfies fPO and thus PO, so it remains to show EF.
\new{In our exposition, we will focus on exactly equalizing multipliers, but all observations extend to multipliers that are ``sufficiently close'' to equalizing, which we make explicitly in \cref{lem:boundgap}.}
As sketched in \cref{sec:results}, we now prove that an agent $i$'s expected utility for an item they receive themselves is strictly larger than $i$'s expected utility for an item that another agent receives in the multiplier allocation.
In fact, we will prove that there is a \emph{constant} gap between these conditional expectations, i.e., a constant $C_{p,q} > 0$ such that, for all $i \neq j \in N$, $\mathbb{E}\left[u_i\, |\, \beta_iu_i=\max_{k\in N} \beta_k u_k \right] \geq C_{p, q} + \mathbb{E}\left[u_i\, |\, \beta_ju_j=\max_{k\in N} \beta_k u_k \right]$.
Bounding this gap is the main idea of the proof by \citeauthor{DGK+14}
Their proof approach is applicable since, by scaling the utilities by equalizing multipliers, we bring a key property exploited by \citeauthor{DGK+14}\ to the asymmetric model: as does the welfare-maximizing algorithm in the symmetric setting, the multiplier allocation gives a random item to each agent with equal probability.
Thus, by concentration, all agents receive similar numbers of items.
A positive gap $C_{p,q}$ furthermore ensures that agents prefer the average item in their own bundle to the average item in another bundle.
The last two statements imply that the allocation is likely to be envy-free.
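To make the construction concrete, the following Python sketch (our own, purely illustrative) forms the multiplier allocation for a sampled utility matrix and checks envy-freeness. In the symmetric special case shown, the all-ones vector is already equalizing; for asymmetric agents one would plug in the (approximately) equalizing multipliers computed above.
\begin{verbatim}
import numpy as np

def multiplier_allocation(utilities, betas):
    """Assign each item (column) to an agent i maximizing beta_i * u_i(item)."""
    scaled = utilities * np.asarray(betas)[:, None]       # n x m
    return scaled.argmax(axis=0)                          # owner[j] = winner of item j

def is_envy_free(utilities, owner):
    """Check u_i(A_i) >= u_i(A_j) for all pairs of agents i, j."""
    n, m = utilities.shape
    bundle_values = np.zeros((n, n))                      # entry (i, j) = u_i(A_j)
    for j in range(m):
        bundle_values[:, owner[j]] += utilities[:, j]
    own = np.diag(bundle_values)
    return bool(np.all(own[:, None] >= bundle_values))

rng = np.random.default_rng(1)
n, m = 5, 400
utilities = rng.random((n, m))                 # symmetric case: all utilities U[0, 1]
owner = multiplier_allocation(utilities, np.ones(n))
print("envy-free:", is_envy_free(utilities, owner))
\end{verbatim}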
\iffullversion
Before we go into the bounds, it is instructive to see why the interval support property is required for the multiplier approach.
Consider a case with two agents: Agent A's utility is uniformly distributed on $[1/4, 3/4]$, whereas agent B's distribution is uniform on $[0,1/4]\cup[3/4, 1]$, which means that $\mathcal{D}_B$'s support is not an interval.
It is easy to verify that setting both multipliers to 1 is equalizing.
But $\mathbb{E}[u_A\, |\, u_A\geq u_B] = \mathbb{E}[u_A]$, since the event $u_A\geq u_B$ only tells us that $u_B$ is taken from the left interval in its support ($[0,1/4]$) but $u_A$ is still distributed uniformly in $[1/4, 3/4]$.
Hence, without assuming interval support, the gap we aim to bound may be zero.
For a fixed set of distributions, interval support is enough to provide a positive gap (\full{\cref{app:positivegap}}{see full version}).
However, if we want to add new agents along the infinite sequence of instances as $m \to \infty$, we require a uniform lower bound on this gap.
In \full{\cref{app:pdfboundeddisc}}{the full version}, we \full{}{also\ }give examples showing that both very high probability densities and very low probability densities can make the gap arbitrarily small, which motivates our assumption of $(p, q)$-PDF-boundedness.
\else
\new{In the full version, we show that some utility distributions do not yield a constant gap, and we motivate our assumptions, interval support and $(p,q)$-PDF-boundedness, using such distributions.}
\fi
In \full{\cref{app:boundgap}}{the full version}, we derive the desired\full{\ constant}{} gap:
\begin{restatable}{proposition}{lemboundgap}
\label{lem:boundgap}
For any collection of agents whose utility distributions are $(p,q)$-PDF-bounded and have interval support, given a set of multipliers $\vec{\beta}$ such that $|p_i-1/n|<1/(2\,n)$ for all $i\in N$, it holds that for any $i\ne j\in N$,
\begin{align*}
&\mathbb{E}\left[u_i \bigg| \beta_i u_i = \max_{k\in N} \beta_k u_k \right]
- \mathbb{E}\left[u_i \bigg| \beta_j u_j = \max_{k\in N} \beta_k u_k \right] \\
&\geq C_{p, q}
\end{align*}
for a constant $C_{p, q} \in (0, 1]$ that only depends on $p$ and $q$.
\end{restatable}
\iffullversion
\fi
\subsection{The Multiplier Allocation Satisfies EF}
\label{sec:puttingtogetheref}
In \full{\cref{app:combine}}{the full version}, we combine the existence of equalizing multipliers and the positive gap to prove \cref{thm:existence}, i.e., that the multiplier allocation is EF with high probability, and that this even holds when assigning based on approximately equalizing multipliers.
Here, we sketch the argument: First, we run Algorithm~\ref{alg:multipliers} to find approximately equalizing multipliers $\vec{\beta}$ with an accuracy $\delta \coloneqq C_{p, q}/(4\,n)$, which requires $\mathcal{O}(n^2/\delta) \subseteq \mathcal{O}(n^3)$ time by \cref{thm:equalizingpoly}.
Second, we allocate all items based on $\vec{\beta}$, in $\mathcal{O}(m \, n)$ time.
This yields the \emph{approximate multiplier allocation}, which is fPO by construction.
It remains to show EF:
\cref{lem:boundgap} applies to $\vec{\beta}$ since it satisfies the proposition's precondition: $|p_i - 1/n| \leq \delta \leq \frac{1}{4 \, n} < \frac{1}{2 \, n}$.
By arithmetic, the positive gap guaranteed by the proposition implies that, for any two agents $i \neq j$,
$\mathbb{E}[u_i(A_i)] -\mathbb{E}[u_i(A_j)] \geq \frac{m}{2 \,n} \, C_{p,q}$.
Assuming that $m \in \Omega(n \, \log n)$, we prove that, with high probability, all $u_i(A_j)$ stay within a distance of $\frac{m}{4n} C_{p,q}$ from their expectations by concentration. Then,
\begin{align*}
u_i(A_i) &\geq \mathbb{E}[u_i(A_i)] - \tfrac{m}{4n} C_{p,q} \\
&\geq \big(\mathbb{E}[u_i(A_j)] + \tfrac{m}{2n} C_{p,q}\big) - \tfrac{m}{4n} C_{p,q} \\
&\geq \mathbb{E}[u_i(A_j)] + \tfrac{m}{4n} C_{p,q} \geq u_i(A_j),
\end{align*}
which implies that $i$ does not envy $j$ for any $i$ and $j$, i.e., the allocation is envy-free (and Pareto-optimal) as claimed.
\section{Empirical Results}
\label{sec:experiments}
\begin{figure}
\caption{Probability of different algorithms satisfying EF and PO for $n=10$ distributions.
The multiplier and rounded MNW algorithms are always PO and therefore not shown.
Each datapoint corresponds to 1\,000 random instances, and error bars indicate 95\% confidence intervals. PO and MNW data not available for large $m$ due to computational cost.}
\label{fig:experiments}
\end{figure}
After characterizing the existence of EF and PO allocations from an asymptotic angle, we now empirically investigate allocation problems for a concrete set of agents.
We use a set of ten agents with utility distributions from a simple parametric family of $(0.1, 1.9)$-PDF-bounded distributions.\footnote{\full{\Cref{app:experiment1}}{The full version} contains all details on the experiments.
Since EF allocations exist for smaller $m$ if $n$ divides $m$, we repeat the experiment with shifted values of $m$\full{\ in \cref{app:experiment2}}{}, which does not change the major trends.
We also repeat the experiments with the five distributions from \cref{fig:scaling}, showing that our observations generalize to extremely heterogeneous distributions that are not $(p,q)$-PDF-bounded.}
We compute multipliers for these ten distributions by implementing a variant of Algorithm~\ref{alg:multipliers}.
Specifically, we repeatedly run the algorithm with exponentially decreasing $\delta$, starting each iteration from the last set of multipliers, which allows the algorithm to change multipliers faster in the first rounds and empirically leads to a running time sublinear in $\delta^{-1}$.
For a requested accuracy of $\delta = 10^{-5}$, our algorithm runs in 30 seconds on consumer hardware, and we verify analytically that the resulting multipliers indeed lie within this tolerance, undisturbed by numerical inaccuracies in the computation.
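A sketch of this warm-started variant (ours; note that, unlike for Algorithm~\ref{alg:multipliers} itself, its termination is an empirical observation rather than a consequence of \cref{thm:equalizingpoly}) could look as follows:
\begin{verbatim}
def warm_started_multipliers(p, n, q, target_delta, start_delta=0.1):
    """Rerun the multiplicative-update loop with an exponentially shrinking
    tolerance, warm-starting each pass from the previous multipliers."""
    betas = [1.0] * n
    delta = start_delta
    while True:
        eps = delta / (2 * q)
        probs = p(betas)
        while any(abs(pi - 1 / n) > delta for pi in probs):
            betas = [b * (1 + eps) if pi <= 1 / n else b
                     for b, pi in zip(betas, probs)]
            probs = p(betas)
        if delta <= target_delta:
            return betas
        delta /= 2          # halve the tolerance and continue from the current betas
\end{verbatim}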
As shown by the solid line in \cref{fig:experiments}, the multiplier allocation requires large numbers of items to be reliably EF:
When allocating $m=500$ items to the ten agents, the allocation is EF in only 67\% of instances, and it requires $m=2\,000$ items for this probability to reach 99\%.
This slow speed of convergence seems to be inherent to the argument of \citet{DGK+14} since, in an instance with ten copies of one of our distributions and 500 random items, the welfare-maximizing algorithm is also only EF with 87\% probability.
In contrast to the approximate-multiplier algorithm, the round robin algorithm (dotted line) reliably obtains EF allocations already for $m \geq 100$, but its allocations are essentially never PO unless $m$ is very small (dash-dotted line). This lack of PO matches our theoretical predictions \full{(\cref{app:neg})}{in the full version}.
If one searches for an algorithm that satisfies both EF and PO for small numbers of items, variants of the Maximum Nash Welfare (MNW) algorithm appear promising in our experiments\,---\,unless their computational complexity is prohibitive.
In experiments in \full{\cref{app:betaexperiments}}{the full version} with only five agents, the optimization library BARON can reliably find the (discrete) MNW allocation for small $m \leq 200$.
The MNW allocation is automatically PO, and it satisfies EF as reliably as round robin in our experiments.
For our 10 agents, however, and as little as $m=20$ items, BARON often takes multiple minutes to compute a single allocation, making this algorithm intractable for our analysis.
In \full{\cref{app:experiment1}}{the full version}, we discuss approaches to this intractability, and propose to round the \emph{fractional} MNW allocation instead.
This approach is still guaranteed to be PO, and yields EF allocations already for small $m$ (dashed line).
In a sense, this rounded MNW algorithm complements the approximate multiplier algorithm:
For small $m$, rounded MNW already provides EF and its runtime is acceptable. For large $m$, the approximate multiplier algorithm guarantees EF while its runtime barely grows with $m$, since almost all work happens in the determination of the multipliers, independently of $m$.
\section{Discussion}
In this paper, we show that EF and PO allocations are likely to exist for random utilities even if different agents' utilities follow different distributions.
Given that the known asymptotic bounds for the existence of EF+PO allocations are equal in the asymmetric and in the symmetric model, we see no evidence that the asymmetry of agent utilities would make EF+PO allocations substantially rarer to exist, up to, possibly, a $\log \log n$ gap that remains open in both models.
The most interesting idea coming out of this paper is the technique of finding equalizing multipliers, which might be of use in wider settings.
Notably, the existence proof based on Sperner's lemma mainly uses the continuity of the function mapping multipliers to probabilities, and in particular does not use the independence between the agents' utilities.
Thus, the multiplier technique might apply to random models where the agents' utilities exhibit some correlation, as long as the gap in expected utilities can still be bounded.
In the limit of infinitely many items, we can think of the multiplier technique as a way to find an allocation of divisible goods that is Pareto-optimal and \emph{balanced}, i.e., where every agent receives an equal amount of items.
In future work, we hope to explore if this construction extends to arbitrary sets of divisible items.
\section*{Acknowledgments}
We thank Bailey Flanigan and Ariel Procaccia
for valuable comments and suggestions on the paper. We also
thank Dravyansh Sharma, Jamie Tucker-Foltz, Ruixiao Yang,
Xingcheng Yao, and Zizhao Zhang for helpful technical discussions.
\iffullversion
\onecolumn
\appendix
\section*{\LARGE{Appendix}}
\section{Proof of \cref{prop:rr}: Existence of Envy-Free Allocations}
\label{app:rr}
\proprr*
\begin{proof}
For any pair of agents $i, j\in N$, we will prove that the probability of the event that $i$ envies $j$ in our asymmetric model is in $O\left(1/m^3\right)$.
Consider a symmetric model with distribution $\mathcal{D}_i$ for all agents. Let $\mathbb{P}^S\left[T\right]$ be the probability that some event $T$ occurs in this symmetric model, and $\mathbb{P}^A\left[T\right]$ be the probability that some event $T$ occurs in the asymmetric model.
For every utility profile, define its ordinal profile as $\{\mathcal{O}_k\}_{k\in N}$ where $\mathcal{O}_k$ is a permutation of items $M$, in descending order according to $u_k(\alpha)$ for $\alpha\in M$.
Let $\mathbb{O}$ be the set of all possible ordinal profiles, which contains $(m!)^n$ elements. Since, in both models, all agents are independent and the utilities for all items are drawn independently, all ordinal profiles are equally likely, that is,
\begin{align*}
\forall\, \Tilde{O}\in \mathbb{O}, \,
\mathbb{P}^S\left[\{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right] = \mathbb{P}^A\left[\{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right] = \frac{1}{(m!)^n}.
\end{align*}
Since the allocation produced by the round robin algorithm is uniquely determined by the ordinal profile, let $\{A_k(\Tilde{O})\}_{k\in N}$ denote the resulting allocation given ordinal profile $\Tilde{O}$.
We can express $\mathbb{P}^S\left[i\text{ envies }j\right]$ as
\begin{align*}
\mathbb{P}^S\left[i\text{ envies }j\right] &=\sum_{\Tilde{O}\in\mathbb{O}}\mathbb{P}^S\left[\{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right]\cdot\mathbb{P}^S\left[i\text{ envies }j \, \bigg| \, \{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right] \\
&= \frac{1}{(m!)^n}\sum_{\Tilde{O}\in\mathbb{O}}\mathbb{P}^S\left[i\text{ envies }j\, \bigg|\, \{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right].
\end{align*}
Similarly, we can express $\mathbb{P}^A\left[i\text{ envies }j\right]$ as
\begin{align*}
\mathbb{P}^A\left[i\text{ envies }j\right]
= \frac{1}{(m!)^n}\sum_{\Tilde{O}\in\mathbb{O}}\mathbb{P}^A\left[i\text{ envies }j\, \bigg|\, \{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right].
\end{align*}
For all $\Tilde{O}\in \mathbb{O}$,
\begin{align*}
\mathbb{P}^S\left[i\text{ envies }j\, \bigg|\, \{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right]
&= \mathbb{P}^S\left[\{u_i(\alpha)\sim \mathcal{D}_i\}_{\alpha\in M}: \sum_{\alpha\in A_i(\Tilde{O})}u_i(\alpha)<\sum_{\alpha\in A_j(\Tilde{O})}u_i(\alpha)\, \bigg|\,\mathcal{O}_i=\Tilde{O}_i\right] \\
&= \mathbb{P}^A\left[i\text{ envies }j\, \bigg|\, \{\mathcal{O}_k\}_{k\in N}=\Tilde{O}\right].
\end{align*}
Hence we have $\mathbb{P}^A\left[i\text{ envies }j\right]=\mathbb{P}^S\left[i\text{ envies }j\right]$.
The following lemma is implied by the proof of Theorem 3.1 by \citet{MS20}.
\begin{lemma}[\citealt{MS20}]
In the symmetric model, if $m\in\Omega\left(n\,\log n/\log\log n\right)$ and the common distribution $\mathcal{D}$ is $(p, q)$-PDF-bounded on $[0, 1]$, then for any pair of agents $i, i'$, the probability that $i$ envies $i'$ in the round robin allocation is at most $O(1/m^3)$.
\label{lem:MSenvy}
\end{lemma}
\paragraph{PDF-bounded on $\mathbf{[0, 1]}$}
Here we follow the assumptions of \citet{MS20} on the distributions: PDF-bounded on $[0,1]$.
Since $\mathcal{D}_i$ is $(p, q)$-PDF-bounded, the lemma indicates that $\mathbb{P}^S\left[i\text{ envies }j\right]=O\left(1/m^3\right)$. Thus, by our earlier arguments, the probability that agent $i$ envies agent $j$ in our asymmetric model is also in $O\left(1/m^3\right)$.
Applying a union bound over all pairs $i, j$, we know that the allocation is envy-free in the asymmetric model with probability at least $1-O\left(1/m\right)$ when $m\in\Omega\left(n\,\log n/\log\log n\right)$.
\paragraph{Interval support and PDF-bounded}
Moreover, we make a slight modification (at the level of constants) to the proof by \citet{MS20} to generalize \cref{lem:MSenvy} and obtain a bounded envy probability when assuming only interval support and $(p,q)$-PDF-boundedness.
We first review the main idea of the proof of Theorem 3.1 in their paper.
For two agents $i, i'$, let $X^{i, i'}_t$ denote $i$'s value for the item that $i'$ gets in the $t$th round, and $X^{i}_t$ denote $i$'s value for her own item in the $t$th round. While the maximum possible envy (attained when $i'$ chooses before $i$ in each round and gets one more item than $i$) can be bounded as
\begin{equation}
u_i(M_{i'}) - u_i(M_{i}) = X_1^{i,i'}-\sum_{t}\left(X^i_t-X^{i,i'}_{t+1}\right)
\leq 1-\sum_{t=1}^{T}X^i_t\cdot\left(1-Y^{i,i'}_{t+1}\right).
\label{eq:ms21}
\end{equation}
where $Y^{i,i'}_{t+1}$ denotes the ratio $X^{i,i'}_{t+1}/X^i_t$ and the last inequality follows from $X^{i,i'}_1\leq 1$ and $Y^{i,i'}_{t+1}<1$ for all $t\geq1$, the gap accumulated in the first $T$ rounds already suffices for the envy to be negative with high probability.\footnote{Note that we allow negative envy, whereas some works define envy to be $\max\{0, u_i(A_j)-u_i(A_i)\}$. When envy is $-e$, we can also say the negative envy is $e$.} \citeauthor{MS20} choose $T$ such that, with high probability ($1-O\left(1/m^3\right)$), the events
(E1) $\sum_{t=1}^{T}Y_t^{i,i'}\geq T-2$ and
(E2) $X^i_t<1/2$ for some $t\leq T$, do not happen.
Then it can be seen that when neither E1 nor E2 happen, the envy in \cref{eq:ms21} is non-positive.
For a constant $c>1$, consider changing E1 to E1':
(E1') $\sum_{t=1}^{T}Y_t^{i,i'}\geq T-2c$; then, when neither E1' nor E2 happens, the negative envy is at least $c-1$.
We can still bound the probability that E1' or E2 occurs by $O\left(1/m^3\right)$: we multiply the value of $T$ set in Manurangsi and Suksompong's proof by a factor of $c$, while keeping all other quantities as in their proof.
Then, by Lemma 2.4 in their paper, the upper bound on the probability that E1' occurs is the upper bound for E1 raised to the power $c$, which is still in $O\left(1/m^3\right)$. Meanwhile, the upper bound on the probability that E2 occurs remains in $O\left(1/m^3\right)$ for sufficiently large $m$.
Hence, for any constant $c>1$, when the distribution is PDF-bounded on $[0,1]$, the probability that $i$'s envy toward $i'$ exceeds $1-c$ in the round robin allocation is at most $O\left(1/m^3\right)$.
We now use this result to prove that the envy probability is also bounded by $O\left(1/m^3\right)$ when the distribution $\mathcal{D}$ is PDF-bounded and has interval support rather than support $[0,1]$.
The method is to apply the affine transformation that maps $\mathcal{D}$'s support interval $[a, b]$ to $[0, 1]$, sending the original utility $u$ to $u'=(u-a)/(b-a)$ and the original distribution $\mathcal{D}$ to $\mathcal{D}'$. The PDF $f_\mathcal{D}$ now becomes $f_\mathcal{D'}(u)=(b-a)\cdot f_\mathcal{D}((b-a)u+a)$.
Since $\mathcal{D}$ is $(p, q)$-PDF-bounded, the length of its support, $b-a$, must be at least $1/q$. Thus for any $u\in[0, 1]$, $p/q\leq f_\mathcal{D'}(u)\leq q$, indicating that $\mathcal{D}'$ is PDF-bounded on $[0, 1]$.
Since the affine transformation does not change the ordinal profile, the round robin allocation under the transformed utility profile, where all original utilities are transformed via $u\mapsto (u-a)/(b-a)$, is the same as the one under the original utility profile.
For the same round robin allocation, suppose the envy that $i$ holds for $i'$ in the transformed utility profile is $e$, then the envy in the original utility profile becomes $e'$:
\begin{displaymath}
e' = \begin{cases}
(b-a)\, e & \text{if $i$ and $i'$ get same number of items}\\
(b-a)\, e+a & \text{if $i'$ gets one more item than $i$} \\
(b-a)\, e-a & \text{if $i$ gets one more item than $i'$}
\end{cases}
\end{displaymath}
Then, for $i$ to envy $i'$ in the original utility profile, i.e., $e'>0$, it must be true that $e>-a/(b-a)$. The utilities in the transformed utility profile can be considered as drawn from $\mathcal{D}'$, which is PDF-bounded on $[0, 1]$; moreover, since $a\leq 1$ and $b-a\geq 1/q$, we have $-a/(b-a)\geq -q = 1-c$ for the choice $c=q+1$. Hence, by our previous result, the probability that $e>-a/(b-a)$ is in $O\left(1/m^3\right)$.
Hence the probability that $i$ envies $i'$ in the round robin allocation for distribution $\mathcal{D}$ is still in $O\left(1/m^3\right)$, which generalizes \cref{lem:MSenvy} to distributions that are only assumed to have interval support and to be PDF-bounded.
Finally, similarly, we get $\mathbb{P}^S\left[i\text{ envies }j\right]=O\left(1/m^3\right)$ and that the allocation is envy-free in the asymmetric model with high probability, when distributions have interval support and are $(p,q)$-PDF-bounded.
\end{proof}
\section{Discussion of Maximum-Percentile Algorithm}
\label{app:maxpercentile}
\begin{figure}
\caption{Illustrations of the example in the text. Dots represent utilities of 5\,000 random items. Shaded regions delineate items allocated to agent~A, for the maximum-percentile algorithm (left) and for the multiplier allocation (right). Three marked items (two have near-identical utilities) show that the maximum-percentile allocation is not PO.}
\label{fig:percentile}
\end{figure}
As we stated in the body of the paper, \citet{KPW16} apply the proof by \citet{DGK+14} to show the existence of EF allocations in the asymmetric setting.
The core idea of their algorithm is to allocate each item to the agent for whom the item is in the highest percentile of their utility distribution, which we will call the \emph{maximum-percentile algorithm}.
It is easy to see that each agent has a probability $1/n$ of receiving each item, and it is not too hard to show that agents have higher expected utility for items they receive than for items allocated to other agents. This implies envy-freeness with high probability as $m \in \Omega\left(n\, \log n\right)$ following the proof by \citeauthor{DGK+14}
Unfortunately, this construction is unlikely to generate PO allocations:
Consider a setting with two asymmetric agents, in which agent A's utility is drawn, with 50\% probability, uniformly between 0 and $1/4$, and, with 50\% probability, uniformly between $1/4$ and 1; and in which agent B's utility is drawn either uniformly between $0$ and $3/4$ or uniformly between $3/4$ and 1, each with 50\% probability.
Conceptually, A's utility distribution skews towards lower values, whereas B's skews towards higher values.
The black dots in \cref{fig:percentile} show random samples of these utilities, and the shaded region in the left plot marks the range of utilities in which the maximum-percentile algorithm allocates items to A.
The left plot also highlights three specific items: two lie around the median utility for both agents and are given to A, and a third lies around the top percentile for both agents and is given to B.
The fact that the ratio ``$u_B(\alpha)/u_A(\alpha)$'' is strictly greater for the two ``median items'' (at roughly $(3/4)/(1/4) = 3$) than for the ``top item'' (roughly $1$) immediately implies that the allocation is not fPO: Agent B would profit from trading half of the top item against one of A's median items (roughly, since $3/4 > 1/2$), and A would also profit from this trade (since $1/2 > 1/4$).
In fact, a similar trade of whole items, which exchanges both median items against the top item, shows that the maximum-percentile allocation violates Pareto-optimality proper.
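The violation of fPO can also be seen numerically. The short Python sketch below (our own; the closed-form CDFs follow directly from the two mixture distributions above) reproduces the maximum-percentile allocation on random items and exhibits an item given to A whose utility ratio $u_B/u_A$ exceeds that of an item given to B, which is incompatible with the multiplier characterization of fPO.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def sample_A(size):   # 50%: U[0, 1/4], 50%: U[1/4, 1]
    low = rng.random(size) < 0.5
    return np.where(low, rng.uniform(0.0, 0.25, size), rng.uniform(0.25, 1.0, size))

def sample_B(size):   # 50%: U[0, 3/4], 50%: U[3/4, 1]
    low = rng.random(size) < 0.5
    return np.where(low, rng.uniform(0.0, 0.75, size), rng.uniform(0.75, 1.0, size))

def F_A(u):           # CDF of agent A's distribution
    return np.where(u <= 0.25, 2 * u, 0.5 + (2 / 3) * (u - 0.25))

def F_B(u):           # CDF of agent B's distribution
    return np.where(u <= 0.75, (2 / 3) * u, 0.5 + 2 * (u - 0.75))

m = 5000
uA, uB = sample_A(m), sample_B(m)
to_A = F_A(uA) > F_B(uB)            # maximum-percentile rule
ratios = uB / uA                     # fPO would require a single threshold on this ratio
print("largest ratio among A's items :", ratios[to_A].max())
print("smallest ratio among B's items:", ratios[~to_A].min())
# the first number exceeding the second certifies that the allocation is not fPO
\end{verbatim}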
The most promising way to avoid violations of PO is to construct fPO allocations, since the characterization of fPO using multipliers in \cref{sec:model} provides useful structure that is not available for PO.
As shown in the right panel in \cref{fig:percentile}, this corresponds to choosing a line through the origin, allocating items below the line to A, and allocating items above the line to B.
In fact, the plot shows the unique such line with the added property that a random-utility item is equally likely to be given to either agent.
In the next section, we generalize this kind of allocation to arbitrary numbers of agents.
\section{Discussion of Normalizing Multipliers}
\label{app:normalize}
As discussed in the previous section, the most promising way to construct PO allocations is to use multiplier-based maximum-welfare allocations.
One natural choice is the set of normalizing multipliers, which normalize each agent's values to add up to 1.
However, the following counter-example shows that normalizing multipliers may violate EF.
Consider a setting with $|N|=n$ agents in which agent 1's utility is drawn uniformly from $[0, 1]$ while the remaining agents' utilities are drawn uniformly from $[1/3, 2/3]$. When the number of items is sufficiently large, the normalizing multipliers of all agents will be close: by a Chernoff bound, with dominating probability all multipliers of the agents in $N/\{1\}$ are less than $5/4$ times agent 1's multiplier. A Chernoff bound also guarantees that, with high probability, agent 1 receives more than a $1/12=1/2\, (1-2/3\cdot5/4)$ fraction of all items, while some agent $i$ must receive less than a $1/n$ fraction of all items. Agent $i$ will then surely envy agent 1 once $1/n\cdot2/3<1/12\cdot1/3$.
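This counter-example can also be checked by simulation; below is a minimal sketch (the sizes $n$ and $m$ and all variable names are illustrative assumptions), using the normalizing multipliers $\beta_i = 1/\sum_{\alpha\in M} u_i(\alpha)$ and allocating every item to the agent with the largest scaled utility.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 100_000            # illustrative numbers of agents and items

# Agent 1 draws utilities uniformly from [0, 1]; agents 2..n from [1/3, 2/3].
U = np.empty((n, m))
U[0] = rng.uniform(0.0, 1.0, size=m)
U[1:] = rng.uniform(1/3, 2/3, size=(n - 1, m))

# Normalizing multipliers: scale each agent's utilities to sum to 1.
beta = 1.0 / U.sum(axis=1)

# Allocate each item to the agent with the largest scaled utility.
winner = np.argmax(beta[:, None] * U, axis=0)

# Check whether some agent envies agent 1.
bundle_1 = (winner == 0)
for i in range(1, n):
    own, other = U[i, winner == i].sum(), U[i, bundle_1].sum()
    if other > own:
        print(f"agent {i + 1} envies agent 1: {own:.1f} < {other:.1f}")
        break
\end{verbatim}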
\section{Proofs Used in Existence Result of Envy-free and Pareto-optimal Allocations}
\subsection{Continuity of Probability Function $p_i(\cdot)$}
\label{app:continuous}
\begin{lemma}
\label{thm:continuous}
For all $i\in N$, the probability function $p_i(\cdot)$ defined by
\begin{align*}
p_i\left(\beta_1, \dots, \beta_n\right) = \mathbb{P}\left[\beta_i u_i = \max_{j\in N}\beta_j u_j\right]
\end{align*}
is continuous on $(\beta_1, \dots, \beta_n)\in \mathbb{R}^n_{>0}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
p_i(\beta_1,\dots,\beta_n) &= \int_{0}^{1} f_i(u)\, \prod_{j\ne i}F_j\left(\frac{\beta_i}{\beta_j}u\right)\, du,
\end{align*}
then
\begin{displaymath}
\begin{aligned}
\left|p_i\left(\beta'_1,\cdots,\beta'_n\right) - p_i(\beta_1,\cdots,\beta_n)\right|
&\leq \int_{0}^{1} f_i(u)\cdot
\left|\, \prod_{j\ne i}F_j\left(\frac{\beta'_i}{\beta'_j}u\right)-\prod_{j\ne i}F_j\left(\frac{\beta_i}{\beta_j}u\right)\right|\, du \\
&\leq \sum_{j\ne i}\int_{0}^{1} f_i(u)\cdot
\left| F_j\left(\frac{\beta'_i}{\beta'_j}u\right)-F_j\left(\frac{\beta_i}{\beta_j}u\right)\right|\, du.
\end{aligned}
\end{displaymath}
Since the distribution $\mathcal{D}_j$ is nonatomic, its cumulative distribution function $F_j(\cdot)$ is continuous, and the function $g_u(x, y) = \frac{xu}{y}$ is continuous whenever $x, y\ne 0$. Thus
\begin{align*}
\lim_{(\beta'_i, \beta'_j)\rightarrow(\beta_i, \beta_j)}F_j\left(\frac{\beta'_i}{\beta'_j}u\right) = F_j\left(\frac{\beta_i}{\beta_j}u\right),
\end{align*}
and, by the bounded convergence theorem (the integrand is bounded by the integrable function $f_i$), we have
\begin{align*}
&\lim_{(\beta'_1, \cdots, \beta'_n)\rightarrow(\beta_1, \cdots, \beta_n)}\left|\, p_i(\beta'_1,\cdots,\beta'_n) - p_i(\beta_1,\cdots,\beta_n)\,\right| \\
&\leq \sum_{j\ne i}\int_{0}^{1} f_i(u) \lim_{(\beta'_i, \beta'_j)\rightarrow(\beta_i, \beta_j)} \left|F_j\left(\frac{\beta'_i}{\beta'_j}u\right) - F_j\left(\frac{\beta_i}{\beta_j}u\right)\right| du = 0.
\end{align*}
Therefore function $p_i(\cdot)$ is continuous on $\mathbb{R}^n_{>0}$.
\end{proof}
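The integral representation of $p_i$ used in this proof is also how these probabilities can be evaluated numerically. Below is a minimal sketch with \texttt{scipy.integrate.quad}; the uniform distributions are only an illustrative choice, and any PDF/CDF pairs could be substituted.
\begin{verbatim}
import numpy as np
from scipy import integrate, stats

# Illustrative distributions (assumption): uniform on subintervals of [0, 1].
dists = [stats.uniform(0.0, 1.0),
         stats.uniform(0.25, 0.5),    # uniform on [0.25, 0.75]
         stats.uniform(0.1, 0.8)]     # uniform on [0.1, 0.9]

def p(i, beta):
    """p_i(beta) = P[beta_i u_i = max_j beta_j u_j], via the integral formula."""
    def integrand(u):
        val = dists[i].pdf(u)
        for j, d in enumerate(dists):
            if j != i:
                val *= d.cdf(beta[i] / beta[j] * u)
        return val
    return integrate.quad(integrand, 0.0, 1.0)[0]

beta = np.array([1.0, 1.2, 0.9])
print([p(i, beta) for i in range(len(dists))])  # values should sum to about 1
\end{verbatim}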
\subsection{Proof for Existence of Multipliers with Sperner's Lemma}
\label{app:sperner}
Here we show a detailed proof for existence of equalizing multipliers with Sperner's Lemma.
Without loss of generality we assume all multipliers add up to 1, so that $(\beta_1, \beta_2, \dots, \beta_n)$ lies in an $(n-1)$-dimensional simplex $S$. Now we define the coloring function $f: S\rightarrow N$, which maps each set of multipliers to an agent with the highest probability of having the largest scaled utility under that set of multipliers:
\begin{displaymath}
f(\beta_1, \dots, \beta_n) \in \mathop{\arg\max}_{i\in N}\mathbb{P}\left[\beta_i u_i = \max_{j\in N}\beta_j u_j\right].
\end{displaymath}
It is clear that the $n$ vertices of the simplex are colored with $n$ different ``colors'', since agent $i$ will have probability 1 of having the largest scaled utility when $\beta_i=1$ and $\beta_j=0, \forall j\ne i$.
Now we divide the simplex $S$ into smaller simplices with (at most) half the diameter of the original simplex. Let $\mathcal{S}_1$ denote the set of smaller simplices.
Then we further divide each simplex in $\mathcal{S}_1$ into even smaller simplices with half the diameter, and we denote them by $\mathcal{S}_2$. We repeat the procedure and divide the original simplex into $\mathcal{S}_1, \mathcal{S}_2, \mathcal{S}_3, \dots$ containing smaller and smaller simplices.
By \emph{Sperner's Lemma}, in each $\mathcal{S}_i\, (i\geq1)$ there is always an odd number of simplices colored with $n$ colors, which implies the existence of a simplex $\bm{s_i}\in \mathcal{S}_i$ whose vertices are colored with all $n$ colors.
Now consider the sequence of such simplices $\bm{s_1}, \bm{s_2}, \bm{s_3}, \dots$, and let the $i$th vertex of each be the vertex that is mapped to $i$ by $f$. We can represent each simplex with an $n^2$-dimensional vector:
\begin{align*}
(\bm{\beta^1}, \bm{\beta^2}, \dots, \bm{\beta^n}) =
(\beta^1_1, \dots, \beta^1_n; \dots; \beta^n_1, \dots, \beta^n_n)\in \mathbb{R}^{n^2},
\end{align*}
where $\bm{\beta^i}$ is the vector of multipliers in the $i$th vertex and $\beta^i_j$ is the $j$th multiplier in the $i$th vertex. Let $\bm{s^i_j}$ denote the subvector $\bm{\beta^i}$ in $\bm{s_j}$.
Since all of them are bounded between $(0, 0, \dots, 0)$ and $(1, 1, \dots, 1)$, by the \emph{Bolzano-Weierstrass Theorem} there must be a convergent subsequence $\bm{s_{a_1}}, \bm{s_{a_2}}, \bm{s_{a_3}}, \dots$ that converges to a simplex $(\bm{\beta^{1^*}}, \bm{\beta^{2^*}}, \dots, \bm{\beta^{n^*}})$ in the space:
\begin{align*}
\lim_{t\rightarrow\infty}\bm{s_{a_t}} = (\bm{\beta^{1^*}}, \bm{\beta^{2^*}}, \dots, \bm{\beta^{n^*}}).
\end{align*}
Since the simplices in $\{\bm{s_i}\}_{i\geq 0}$ get smaller and smaller with a ratio of $1/2$, we have
\begin{align*}
\lim_{t\rightarrow\infty}\bm{s^1_{a_t}} = \lim_{t\rightarrow\infty}\bm{s^2_{a_t}} = \cdots = \lim_{t\rightarrow\infty}\bm{s^n_{a_t}}.
\end{align*}
Then it can be deduced that $\bm{\beta^{1^*}} = \cdots = \bm{\beta^{n^*}} = \bm{\beta^*}$.
We argue that $\bm{\beta^*}\in \mathbb{R}^n_{>0}$:
otherwise, if $\beta^*_i=0$ for some $i\in N$, then $\beta^i_i$, the $i$th multiplier in the $i$th vertex, becomes arbitrarily small along the sequence $\{\bm{s_{a_t}}\}$.
This would make the probability that agent $i$ has the largest scaled utility tend to 0, contradicting the fact that, at the $i$th vertex, agent $i$ has the highest such probability, which is at least $1/n$.
Now we claim that all agents have equal probability of having the largest scaled utility under $\bm{\beta^*}$. Define
\begin{align*}
p_i(\beta_1, \dots, \beta_n) = \mathbb{P}\left[\beta_i u_i = \max_{j\in N}\beta_j u_j\right].
\end{align*}
As shown in \cref{app:continuous}, $p_i(\cdot)$ is a continuous function in $\mathbb{R}^n_{>0}$. Then for all $i\in N$,
\begin{align*}
\frac{1}{n}\leq \lim_{t\rightarrow\infty}p_i(\bm{s^i_{a_t}}) = p_i(\lim_{t\rightarrow\infty}\bm{s^i_{a_t}}) = p_i(\bm{\beta^*}).
\end{align*}
Since $\sum_{i=1}^{n}p_i(\bm{\beta^*})=1$, we must have
\begin{align*}
p_1(\bm{\beta^*}) = \cdots = p_n(\bm{\beta^*}) = \frac{1}{n}.
\end{align*}
This shows that $\bm{\beta^*}$ is the set of multipliers we are looking for, hence the existence.
\subsubsection{Algorithm Based on Sperner's-Lemma Proof}
We discuss here the degree to which the Sperner's-lemma argument can be implemented as an approximation algorithm.
\begin{proposition}
\label{prop:sperneralgo}
For any $\delta > 0$, a variant of the previous Sperner's argument computes a set of multipliers $\{\beta_i\}_{i \in N}$ such that $|p_i - p_j| \leq \delta$ in time $\textit{poly}(n) \, n! \, \left(\log(2 \, q) \, \left(1 + \frac{4 \, n \, q}{\delta}\right)\right)^n$.
\end{proposition}
\begin{proof}
To make the existence proof algorithmic, a key question is how to discretize the space of multipliers into simplices.
Such a simplicization should be easy to traverse algorithmically, and its simplices should describe sufficiently compact sets of multipliers such that any panchromatic simplex allows us to uniformly bound how far the probabilities are from $1/n$.
As \citet{Papadimitriou94} describes, there is already no obvious simplicization when $n=4$; for example, the regular tetrahedron cannot be partitioned into multiple regular tetrahedra.
Following Papadimitriou's discussion, we apply Sperner's lemma to a hypercube rather than a simplex:
Let $\epsilon>0$ denote a small constant, to be determined later.
Let our set of points be the grid $G \coloneqq \{ \bm{p} \in \mathbb{Z}^{n-1} \mid ||\bm{p}||_{\infty} \leq \lceil \frac{\log(2 \, q)}{\log(1+\epsilon)} \rceil\}$.
We can first partition this grid into cubelets of the form $[0, 1]^{n-1} + \bm{p} \subseteq G$ for some $\bm{p} \in G$, and then subdivide each of these cubelets into $(n-1)!$ simplices.\footnote{For an excellent exposition, see \url{https://people.csail.mit.edu/costis/6896sp10/lec6.pdf}, accessed on January 11, 2022.}
We map each point $\bm{p} = (p_1, \dots, p_{n-1})$ in the grid $G$ to a color in $[n]$;
specifically, we choose the color $\argmax_{i \in N} p_i((1+\epsilon)^{p_1}, (1 + \epsilon)^{p_2}, \dots, (1 + \epsilon)^{p_{n-1}}, 1)$ with some canonical tie breaking.
If, for some $\bm{p} \in G$ and $1 \leq i \leq n-1$, we have $p_i = - \lceil \frac{\log(2 \, q)}{\log(1+\epsilon)} \rceil$, then $i$'s multiplier is at most $\frac{1}{2\,q}$ whereas agent $n$'s multiplier is $1$; by \cref{lem:boundmult}, $n$'s probability of being the largest is strictly larger than $i$'s, which means that $\bm{p}$'s color is not $i$.
Similarly, if $p_i = \lceil \frac{\log(2 \, q)}{\log(1+\epsilon)} \rceil$, then $i$'s probability is strictly larger than $n$'s by \cref{lem:boundmult}, and therefore $\bm{p}$'s color is not $n$.
Given these observations, there must exist a panchromatic simplex, which in turn must be contained in a cubelet $C = [0,1]^{n-1} + \bm{p}$ for a $\bm{p} \in G$.
Fix the multipliers $\{\beta_i\}_{i \in N}$ by setting $\beta_n \coloneqq 1$, and $\beta_i \coloneqq (1 + \epsilon)^{p_i}$ for all $1 \leq i \leq n-1$.
Recall that some point in $C$ is colored $n$, which means that $n$ has the largest probability for the corresponding multipliers, and in particular has a probability of at least $1/n$.
Since $n$'s multiplier $\beta_n$ is equal to the multiplier of this point, and since all other multipliers $\beta_i$ are at most the corresponding multiplier at this point, it follows that $p_n(\beta_1, \dots, \beta_n) \geq 1/n$.
Similarly, fix some agent $1 \leq i \leq n - 1$.
Since some point in $C$ has color $i$, agent $i$'s probability at that point is at least $1/n$; moving from that point to the corner $\bm{p} + \bm{e_i}$, where $\bm{e_i}$ is the unit vector in dimension $i$, weakly increases $i$'s multiplier relative to every other agent's, so \cref{lem:localmonotonicity} gives
$p_i(\beta_1, \dots, \beta_{i-1}, (1 + \epsilon) \, \beta_i, \beta_{i+1}, \dots, \beta_n) \geq 1/n$.
By \cref{lem:lipschitzmultipliers}, $p_i(\beta_1, \dots, \beta_n) \geq 1/n - 2 \, q \, \epsilon$.
These lower bounds also allow us to upper bound the probabilities; for each $i \in N$, $p_i(\beta_1, \dots, \beta_n) \leq 1 - (n-1) \, (1/n - 2 \, q \, \epsilon) = 1/n + 2 \, (n - 1) \, q \, \epsilon$.
If we set $\epsilon = \frac{\delta}{2 \, n \, q}$, then it holds for all $i,j \in N$ that
\[|p_i(\beta_1, \dots, \beta_n) - p_j(\beta_1, \dots, \beta_n)| \leq \delta.\]
It remains to bound the running time of this algorithm. To find the panchromatic simplex, we traverse the simplices, visiting each at most once. Within each simplex, we query the oracle to determine the color of the new vertex and perform polynomial-time work to move on to the next simplex. The bottleneck of the computation is that we might have to traverse nearly all vertices, which leads to an overall running time in
\[ \textit{poly}(n) \, (n-1)! \, \left(\frac{2 \, \log(2 \, q) + 1}{\log(1 + \frac{\delta}{2 \, n \, q})}\right)^{n-1} \leq \textit{poly}(n) \, n! \, \left((2 \, \log(2 \, q) + 1) \, \left(1/2 + \frac{2 \, n \, q}{\delta}\right)\right)^n. \qedhere \]
\end{proof}
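For intuition, note that in the special case $n = 2$ the search collapses to one dimension: by \cref{lem:localmonotonicity}, $p_1$ is monotone in the ratio $\beta_1/\beta_2$, so approximately equalizing multipliers can be found by plain bisection over that ratio. The following minimal sketch illustrates this special case only (it is a simplification, not the general grid walk above; the distributions are an illustrative assumption).
\begin{verbatim}
from scipy import integrate, stats

# Illustrative distributions (assumption); any interval-supported PDFs work.
d1, d2 = stats.uniform(0.0, 1.0), stats.uniform(0.25, 0.5)

def p1(ratio):
    """P[beta_1 u_1 >= beta_2 u_2] with ratio = beta_1 / beta_2."""
    return integrate.quad(lambda u: d1.pdf(u) * d2.cdf(ratio * u), 0.0, 1.0)[0]

def equalizing_ratio(q=2.0, delta=1e-6):
    lo, hi = 1.0 / (2.0 * q), 2.0 * q   # bracket from the 2q ratio bound
    while hi - lo > delta:
        mid = 0.5 * (lo + hi)
        if p1(mid) < 0.5:               # p1 is nondecreasing in the ratio
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(equalizing_ratio())
\end{verbatim}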
\subsection{Proof of Properties of the $p_i$}
\label{app:boundmult}
\begin{lemma}
\label{lem:localmonotonicity}
Fix multiplier sets $\vec{\beta}, \vec{\beta}' \in \mathbb{R}_{>0}^n$ and an agent $i$.
If $\frac{\beta_i'}{\beta_i} \geq \frac{\beta_j'}{\beta_j}$ for all $j \neq i$, then $p_i(\vec{\beta}') \geq p_i(\vec{\beta})$.
\end{lemma}
\begin{proof}
In \cref{eq:piint}, observe that the $F_j$ are monotone increasing and that $\beta_i' / \beta_j' = (\beta'_i / \beta_i)/(\beta'_j / \beta_j)\cdot\beta_i / \beta_j\geq \beta_i / \beta_j$, $\forall j$.
\end{proof}
\begin{restatable}{lemma}{lipschitzmultipliers}
\label{lem:lipschitzmultipliers}
Fix a set of multipliers $\vec{\beta}$, an agent $i$, and $\epsilon > 0$.
Let $\vec{\beta}'$ denote a set of multipliers such that $\beta'_i \coloneqq (1 + \epsilon) \, \beta_i$ and $\beta_j' \coloneqq \beta_j$ for all $j \neq i$. Then, $p_i(\vec{\beta}') \leq p_i(\vec{\beta}) + 2 \, q \, \epsilon$.
\end{restatable}
\begin{proof}
For convenience, set $p_i \coloneqq p_i(\beta_1, \dots, \beta_n)$ and $p_i' \coloneqq p_i(\beta_1, \dots, \beta_{i-1}, (1 + \epsilon) \, \beta_i, \beta_{i+1}, \dots, \beta_n)$.
Moreover, define a function $\pi : \mathbb{R} \to \mathbb{R}$ such that $\pi(t) \coloneqq \prod_{j \neq i} F_j(\frac{\beta_i}{\beta_j} \, t)$.
Observe that $\pi$ is monotone increasing and its range lies within $[0,1]$.
Using these definitions,
\begin{align}
p_i' - p_i &= \int_{0}^1 f_i(u) \, \big(\pi((1+\epsilon) \, u) - \pi(u)\big) \, du \notag \\
&= \sum_{t=0}^{\infty} \int_{(1+\epsilon)^{-t-1}}^{(1+\epsilon)^{-t}} f_i(u) \, \big(\pi((1+\epsilon) \, u) - \pi(u)\big) \, du \notag \\
&\leq \sum_{t=0}^{\infty} \left(\big(\pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1})\big) \, \int_{(1+\epsilon)^{-t-1}}^{(1+\epsilon)^{-t}} f_i(u) du \right) \notag \\
&\leq \sum_{t=0}^{\infty} \left(\big(\pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1})\big) \, \int_{(1+\epsilon)^{-t-1}}^{(1+\epsilon)^{-t}} q \, du \right) \notag \\
&= q \, \epsilon \, \sum_{t=0}^{\infty} \big(\pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1})\big) \, (1 + \epsilon)^{-t-1} \label{eq:telescope}
\end{align}
By the monotonicity of $\pi$, all coefficients $\pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1})$ are nonnegative.
Moreover, if we add up these coefficients only for the even $t$, this is a telescoping series
\begin{align*}
\sum_{t=0,2,4,\dots} \pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1}) = \lim_{t \to \infty} \pi(1+\epsilon) - \pi((1 + \epsilon)^{-2\,t-1}) \leq 1.
\end{align*}
Thus, the even summands of \cref{eq:telescope} are a ``subconvex'' combination of the $(1+\epsilon)^{-t-1}$, and are therefore upper bounded by the largest such term:
\[ \sum_{t=0,2,4,\dots} \big(\pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1})\big) \, (1 + \epsilon)^{-t-1} \leq (1 + \epsilon)^{-1}. \]
Applying the same reasoning to the odd summands, we obtain
\[ \sum_{t=1,3,5,\dots} \big(\pi((1+\epsilon)^{-t+1}) - \pi((1 + \epsilon)^{-t-1})\big) \, (1 + \epsilon)^{-t-1} \leq (1 + \epsilon)^{-2}. \]
By plugging these last two equations into \cref{eq:telescope}, we conclude that
\[ p_i' - p_i \leq q \, \epsilon \, \big((1 + \epsilon)^{-1} + (1 + \epsilon)^{-2} \big) \leq 2 \, q \, \epsilon. \qedhere \]
\end{proof}
\begin{restatable}{lemma}{lemboundmult}
\label{lem:boundmult}
For two agents $i$ and $j$ and $\vec{\beta} \in \mathbb{R}_{>0}^n$, if $\smash{\frac{\beta_i}{\beta_j}} \geq 2 \, q$, then $p_i(\vec{\beta}) > p_j(\vec{\beta})$.
\end{restatable}
\begin{proof}
Suppose $\beta_i/\beta_j\geq 2\,q$ for a pair $i, j\in N$. Then we have the following inequalities for the probabilities $p_i, p_j$ that $i$ and $j$ receive each item:
\begin{align*}
&p_i = \mathbb{P}\left[\beta_i u_i=\max_{k\in N}\beta_k u_k\right]
\geq \mathbb{P}\left[\beta_i u_i=\max_{k\in N}\beta_k u_k\,\land\, \beta_i u_i\geq\beta_j\right] \\
&= \mathbb{P}\left[\beta_iu_i\geq\beta_j\right]\cdot\mathbb{P}\left[\beta_i u_i=\max_{k\in N}\beta_k u_k\, \bigg|\, \beta_i u_i\geq\beta_j\right]
\geq \mathbb{P}\left[\beta_iu_i\geq\beta_j\right]\cdot\mathbb{P}\left[\beta_j\geq\max_{k\in N,\ k\ne i, j}\beta_k u_k\right],
\end{align*}
and
\begin{align*}
&p_j = \mathbb{P}\left[\beta_j u_j=\max_{k\in N}\beta_k u_k\right]
= \mathbb{P}\left[\beta_j u_j\geq \beta_i u_i\right]\cdot\mathbb{P}\left[\beta_j u_j=\max_{k\in N}\beta_k u_k\, \bigg|\, \beta_j u_j\geq \beta_i u_i\right] \\
&< \mathbb{P}\left[\beta_j\geq \beta_i u_i\right]\cdot\mathbb{P}\left[\beta_j\geq\max_{k\in N,\ k\ne i, j}\beta_k u_k\right],
\end{align*}
where the last inequality follows from $u_j\leq1$. Since
\begin{align*}
\mathbb{P}\left[\beta_i u_i\leq \beta_j\right] = \mathbb{P}\left[u_i\leq \beta_j/\beta_i\right]\leq\mathbb{P}\left[u_i\leq1/(2q)\right]\leq 1/2,
\end{align*}
we have
\begin{align*}
\mathbb{P}\left[\beta_i u_i\geq\beta_j\right] \geq \mathbb{P}\left[\beta_j\geq \beta_i u_i\right]\quad \Rightarrow \quad p_i > p_j.
\end{align*}
\end{proof}
Here we extend \cref{lem:boundmult} to bound the ratios under approximated equalizing multipliers.
\begin{corollary}
If the set of multipliers $\vec{\beta}$ satisfies that $|p_i-1/n|<1/(2\, n)$ for all $i\in N$, then the ratio $\beta_{i}/\beta_{j}$ between any pair of agents $i, j$ is at most $4\, q$.
\label{cor:boundratio}
\end{corollary}
\begin{proof}
If $\beta_{i}/\beta_{j}>4\, q$ for some $i, j\in N$, then
\begin{align*}
\mathbb{P}\left[\beta_i u_i\leq \beta_j\right] = \mathbb{P}\left[u_i\leq \beta_j/\beta_i\right]\leq q\cdot\beta_j/\beta_i< 1/4.
\end{align*}
Then, following the inequalities in the above proof of \cref{lem:boundmult}, we would have $p_i>3\, p_j$, which contradicts the fact that $1/(2\,n)<p_i, p_j<3/(2\,n)$.
\end{proof}
\begin{lemma}
\label{lem:smallstep}
Let $\vec{z} \in \mathbb{Z}^n$, $\epsilon > 0$, and $S \subseteq N$. For all $i \in S$,
\[ p_i\big((1+\epsilon)^{\vec{z}}\big) \leq p_i\big((1+\epsilon)^{\vec{z} + \mathds{1}_S}\big) \leq p_i\big((1+\epsilon)^{\vec{z}}\big) + 2 \, q \, \epsilon, \]
and, for all $i \notin S$,
\[ p_i\big((1+\epsilon)^{\vec{z}}\big) - 2 \, q \, \epsilon \leq p_i\big((1+\epsilon)^{\vec{z} + \mathds{1}_S}\big) \leq p_i\big((1+\epsilon)^{\vec{z}}\big). \]
\end{lemma}
\begin{proof}
This follows from repeated application of \cref{lem:localmonotonicity} and \cref{lem:lipschitzmultipliers}, as well as the observation that the effect on the resulting multipliers is equal whether we multiply all $\beta_i$ for $i \in S$ by $1 + \epsilon$ or whether we divide all $\beta_i$ for $i \notin S$ by $1 + \epsilon$ instead.
\end{proof}
\subsection{Uniqueness of Equalizing Multipliers}
\label{app:unique}
First, we formalize the notion of local strict monotonicity as the following lemma.
\begin{lemma}
Assume that the distributions of all agents have interval support. For any agent $j\in N$, let $p_j>0$ and $p'_j$ denote the probabilities that agent $j$ receives each item under $\{\beta_i\}_{i\in N}$ and $\{\beta'_i\}_{i\in N}$, respectively. If $\beta'_j/\beta'_i\geq \beta_j/\beta_i$ for all $i\in N$, and there exists some $k\in N$ such that $p_k>0$ and $\beta'_j/\beta'_k> \beta_j/\beta_k$, then $p'_j>p_j$.
\label{lem:strict}
\end{lemma}
\begin{proof}
First we can express $p'_j, p_j$ as follows:
\begin{align*}
&p'_j = \mathbb{P}\left[\beta'_j u_j=\max_{i\in N}\{\beta'_i u_i\}\right] = \int_{0}^{1}f_j(u) \prod_{i\in N/\{j\}}F_i\left(\frac{\beta'_j}{\beta'_i}u\right) du, \\
&p_j = \int_{0}^{1}f_j(u) \prod_{i\in N/\{j\}}F_i\left(\frac{\beta_j}{\beta_i}u\right) du.
\end{align*}
Suppose $\mathcal{D}_j$'s support interval is $[\underline{u_j}, \overline{u_j}]$. Now take
\begin{align*}
\underline{u} = \max_u\{u: \exists i\in N,\, F_i\left(\frac{\beta_j}{\beta_i}u\right)=0\}.
\end{align*}
Since $F_j\left(\frac{\beta_j}{\beta_j}\underline{u_j}\right)=0$, the definition of $\underline{u}$ gives $\underline{u_j}\leq\underline{u}$.
We also have $\underline{u}<\overline{u_j}$, since otherwise we could find an agent $i\in N$ such that $F_i\left(\frac{\beta_j}{\beta_i}u\right)=0$ for all possible values $u_j=u\in[\underline{u_j}, \overline{u_j}]$, making $p_j=0$. Then we take
\begin{align*}
\overline{u} = \min_u\{u: F_k\left(\frac{\beta_j}{\beta_k}u\right)=1\}.
\end{align*}
We argue that $\overline{u}>\underline{u}$. Suppose otherwise; then we can find $u_0\in [\overline{u}, \underline{u}]$ and $i\in N$, where
\begin{align*}
&F_i\left(\frac{\beta_j}{\beta_i}u_0\right)=0,\, F_k\left(\frac{\beta_j}{\beta_k}u_0\right)=1 \\
&\Rightarrow
u_i \geq \frac{\beta_j}{\beta_i}u_0,\,
u_k \leq \frac{\beta_j}{\beta_k}u_0
\Rightarrow \beta_i u_i\geq \beta_k u_k,
\end{align*}
which will make $p_k=0$ since all distributions are non-atomic, contradicting our assumption that $p_k>0$. Combining the earlier arguments, we know that $\max\{\underline{u_j}, \underline{u}\} < \min\{\overline{u_j}, \overline{u}\}$. Then for any $u\in (\max\{\underline{u_j}, \underline{u}\}, \min\{\overline{u_j}, \overline{u}\})=\mathcal{I}$,
\begin{align*}
&f_j(u)>0, \quad F_i\left(\frac{\beta_j}{\beta_i}u\right) > 0,\, \forall i\in N, \\
&0<F_k\left(\frac{\beta_j}{\beta_k}u\right) < 1
\Rightarrow F_k\left(\frac{\beta'_j}{\beta'_k}u\right) > F_k\left(\frac{\beta_j}{\beta_k}u\right).
\end{align*}
The last inequality follows from the fact that $F'_k(u)>0$ whenever $0<F_k(u)<1$ (this is guaranteed by the interval-support property, since $0<F_k(u)<1$ exactly means that $u$ lies in the support interval), together with $\beta'_j/\beta'_k>\beta_j/\beta_k$. Then
\begin{align*}
\int_{u\in\mathcal{I}}f_j(u) \prod_{i\in N/\{j\}}F_i\left(\frac{\beta'_j}{\beta'_i}u\right) du
> \int_{u\in\mathcal{I}}f_j(u) \prod_{i\in N/\{j\}}F_i\left(\frac{\beta_j}{\beta_i}u\right) du.
\end{align*}
For $u$ in the rest of the range, we have that
\begin{align*}
\frac{\beta'_j}{\beta'_i}\geq \frac{\beta_j}{\beta_i}
\Rightarrow
F_i\left(\frac{\beta'_j}{\beta'_i}u\right)\geq F_i\left(\frac{\beta_j}{\beta_i}u\right),\, \forall i\in N/\{j\}.
\end{align*}
Then the integral over this range for $p'_j$ is greater than or equal to that for $p_j$. Hence we have shown the strict inequality $p'_j>p_j$.
\end{proof}
Now we prove the uniqueness of equalizing multipliers. Suppose there are two different sets of equalizing multipliers $\vec{\beta}, \vec{\beta'}$ (both normalized by setting $\beta_1=1$).
Take $i=\arg\max_{j\in N}\beta'_j/\beta_j$; without loss of generality, $\beta'_{i}/\beta_{i}>1$ (otherwise we simply exchange $\vec{\beta}$ and $\vec{\beta'}$; the maximum ratio must exceed $1$ because the ratios cannot all equal 1).
Now, for any $j\in N$, $\beta'_i/\beta'_j = (\beta'_i/\beta_i)/(\beta'_j/\beta_j)\cdot\beta_i/\beta_j \geq \beta_i/\beta_j$, while $\beta'_i/\beta'_1>\beta_i/\beta_1$. Then, by local strict monotonicity (\cref{lem:strict}), $i$'s probability under $\vec{\beta'}$ is strictly larger than that under $\vec{\beta}$, contradicting that both equal $1/n$ under equalizing multipliers.
\subsection{Inequalities for Expectations}
\label{app:exp}
\begin{lemma}
For any pair of agents $i, j\in N$, the following inequalities between expectations hold (assuming that the conditions in the conditional expectations can be met):
\begin{equation}
\mathbb{E}\left[u_i\, \bigg|\, \beta_j u_j = \max_{k\in N}\beta_k u_k\right] \leq \mathbb{E}\left[u_i\right]
\label{eq:exp1}
\end{equation}
and
\begin{equation}
\mathbb{E}\left[u_i\, \bigg|\, \beta_i u_i=\max_{k\in N}\beta_k u_k\right] \geq \mathbb{E}\left[u_i\, \big|\, \beta_i u_i \geq \beta_j u_j\right].
\label{eq:exp2}
\end{equation}
\end{lemma}
\begin{proof}
Let $v_j=\mathbbm{1} \left[\beta_j u_j = \max_{k\in N}\beta_k u_k\right]$, which is a random variable taking value from $\{0, 1\}$.
Then we have
\begin{align*}
\mathbb{E}\left[u_i\, \bigg|\, \beta_j u_j = \max_{k\in N}\beta_k u_k\right]
= \mathbb{E}\left[u_i\, \big|\, v_j=1\right]
= \int_{0}^{1}f_{u_j|v_j=1}(u)\cdot \mathbb{E}\left[u_i\, \big|\, v_j=1, u_j=u\right] du.
\end{align*}
Given $v_j=1$, for all $u\in[0, 1]$, it holds that
\begin{align*}
\mathbb{E}\left[u_i\, \big|\, v_j=1, u_j=u\right]
= \mathbb{E}\left[u_i\, \big|\, \beta_i u_i\leq \beta_j u\right]
\leq \mathbb{E}\left[u_i\right].
\end{align*}
Thus
\begin{align*}
\int_{0}^{1}f_{u_j|v_j=1}(u)\cdot \mathbb{E}\left[u_i\, \big|\, v_j=1, u_j=u\right] du\leq
\int_{0}^{1}f_{u_j|v_j=1}(u)\cdot \mathbb{E}\left[u_i\right] du
= \mathbb{E}\left[u_i\right].
\end{align*}
Therefore \cref{eq:exp1} holds.
For \cref{eq:exp2}, let $v_{i,j}=\max_{k\in N/\{i, j\}}\beta_k u_k$ be a random variable taking value in $[0, 1]$, and $v_i=\mathbbm{1} \left[\beta_i u_i = \max_{k\in N}\beta_k u_k\right]$.
Then by substituting the condition we have
\begin{align*}
&\mathbb{E}\left[u_i\, \bigg|\, \beta_i u_i=\max_{k\in N}\beta_k u_k\right]
= \mathbb{E}\left[ u_i\, \bigg|\, v_i=1\right]
= \mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j, \beta_i u_i\geq v_{i,j}\right] \\
&= \int_{0}^{1}f_{v_{i,j}|v_i=1}(u)\cdot\mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j, \beta_i u_i\geq v_{i,j}, v_{i,j}=u\right]du.
\end{align*}
Since $v_{i,j}, u_i, u_j$ are independent, for all $u\in[0,1]$ it holds that
\begin{align*}
\mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j, \beta_i u_i\geq v_{i,j}, v_{i,j}=u\right]
= \mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j, \beta_i u_i\geq u\right]
\geq \mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j\right].
\end{align*}
Hence
\begin{align*}
&\int_{0}^{1}f_{v_{i,j}|v_i=1}(u)\cdot\mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j, \beta_i u_i\geq v_{i,j}, v_{i,j}=u\right]du\\
&\geq \int_{0}^{1}f_{v_{i,j}|v_i=1}(u)\cdot\mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j\right]du
= \mathbb{E}\left[ u_i\, \bigg|\, \beta_i u_i\geq \beta_j u_j\right].
\end{align*}
Therefore \cref{eq:exp2} holds.
\end{proof}
\subsection{Positive Gap Between Conditional Expectations}
\label{app:positivegap}
\begin{restatable}{proposition}{proppositivegap}
\label{prop:positivegap}
Fix a set of agents whose utility distributions have interval support, and let $\{\beta_i\}_{i \in N}$ denote a set of multipliers such that $\left|p_i-1/n\right|< 1/(2\,n)$ for all $i\in N$.
Then, for all $i \neq j$, \[\mathbb{E}\left[u_i \middle| \beta_i u_i = \max_{k \in N} \beta_k u_k \right] > \mathbb{E}\left[u_i \middle| \beta_j u_j = \max_{k \in N} \beta_k u_k \right].\]
\end{restatable}
\begin{proof}
Let $X \coloneqq \beta_i u_i$ and $Y \coloneqq \beta_j u_j$ denote the scaled random variables, then it still holds that: (a) $X$ and $Y$ are independent, (b) $X$ and $Y$ have interval support, and (c) both $\mathbb{P}\left[X > Y\right]$ and $\mathbb{P}\left[Y > X\right]$ are at least $1/(2\,n)>0$.
From \cref{app:exp}, we know that
\begin{align*}
&\mathbb{E}\left[u_i\, \bigg|\, \beta_i u_i=\max_{k\in N}\beta_k u_k\right] \geq \mathbb{E}\left[u_i\, |\, \beta_i u_i\geq\beta_j u_j\right]
= \frac{1}{\beta_i}\mathbb{E}\left[X\, \bigg|\, X> Y\right], \\
&\mathbb{E}\left[u_i\, \bigg|\, \beta_j u_j=\max_{k\in N}\beta_k u_k\right] \leq \mathbb{E}\left[u_i\right]
= \frac{1}{\beta_i}\mathbb{E}\left[X\right].
\end{align*}
Then it suffices to show that $\mathbb{E}\left[X \mid X > Y \right] > \mathbb{E}\left[X\right]$.
Let $\mathcal{I}$ denote the intersection of the support intervals of $X$ and $Y$, excluding both endpoints.
This intersection is a nonempty interval, and both $\mathbb{P}\left[X \in \mathcal{I}\right]$ and $\mathbb{P}\left[Y \in \mathcal{I}\right]$ must be positive, since otherwise one variable's support would lie entirely below or above the other variable's support, contradicting the above observation that $0 < \mathbb{P}\left[X>Y\right] < 1$.
Since for all $y\in \mathcal{I}$,
\begin{align*}
\mathbb{E}\left[X\right] =
\underbrace{\mathbb{P}\left[X>y\right]}_{>0} \cdot \underbrace{\mathbb{E}\left[X\, |\, X > y\right]}_{>y} + \underbrace{\mathbb{P}\left[X\leq y\right]}_{=1-\mathbb{P}\left[X>y\right] > 0} \cdot \underbrace{\mathbb{E}\left[X\, |\, X \leq y\right]}_{\leq y}
\, \Rightarrow\, \mathbb{E}\left[X\, |\, X > y\right] > \mathbb{E}\left[X\right].
\end{align*}
Hence
\begin{align*}
\mathbb{E}\left[X\, |\, X > Y, Y\in \mathcal{I}\right]
&= \int_{0}^{\beta_j}f_{Y|Y<X, Y\in\mathcal{I}}(y)\cdot\mathbb{E}\left[X\, |\, X > Y, Y\in \mathcal{I}, Y=y\right] dy \\
&= \int_{0}^{\beta_j}f_{Y|Y<X, Y\in\mathcal{I}}(y)\cdot\mathbb{E}\left[X\, |\, X > y\right] dy \\
&> \int_{0}^{\beta_j}f_{Y|Y<X, Y\in\mathcal{I}}(y)\cdot\mathbb{E}\left[X\right] dy
= \mathbb{E}\left[X\right].
\end{align*}
For all $y\notin \mathcal{I}$, $\mathbb{E}\left[X\, |\, X > y\right] \geq \mathbb{E}\left[X\right]$, from which we have
\begin{align*}
\mathbb{E}\left[X\, |\, X > Y, Y\notin \mathcal{I}\right]
&= \int_{0}^{\beta_j}f_{Y|Y<X, Y\notin\mathcal{I}}(y)\cdot\mathbb{E}\left[X\, |\, X > Y, Y\notin \mathcal{I}, Y=y\right] dy \\
&= \int_{0}^{\beta_j}f_{Y|Y<X, Y\notin\mathcal{I}}(y)\cdot\mathbb{E}\left[X\, |\, X > y\right] dy \\
&\geq \int_{0}^{\beta_j}f_{Y|Y<X, Y\notin\mathcal{I}}(y)\cdot\mathbb{E}\left[X\right] dy
= \mathbb{E}\left[X\right].
\end{align*}
Then, we can bound
\begin{align*}
\mathbb{E}\left[X\, |\, X > Y\right]
={} &\underbrace{\mathbb{P}\left[Y\in \mathcal{I} \mid X > Y \right]}_{>0} \cdot \underbrace{\mathbb{E}\left[X\, |\, X > Y, Y\in \mathcal{I}\right]}_{> \mathbb{E}[X]} \\
&+ \underbrace{\mathbb{P}\left[Y \notin \mathcal{I} \mid X > Y \right]}_{= 1 - \mathbb{P}\left[Y\in \mathcal{I} \mid X > Y \right]} \cdot \underbrace{\mathbb{E}\left[X\, |\, X > Y, Y\notin \mathcal{I}\right]}_{\geq \mathbb{E}[X]} \\
>{} &\mathbb{E}\left[X\right],
\end{align*}
which shows a positive gap.
\end{proof}
\subsection{Discussion of $(p,q)$-PDF-Boundedness}
\label{app:pdfboundeddisc}
The above proof in \cref{app:positivegap} shows a gap for any particular set of agents, but, to bound the probability of EF when the number and distributions of agents change, we need a uniform constant lower bound for all $n$ and all utility distributions involved.
Obtaining such a bound requires some additional restriction on which distributions are allowed, and $(p, q)$-PDF-boundedness is a natural choice for this:
On the one hand, excessively low probability densities can make the gap arbitrarily small (although it stays positive). To see this, consider a variant of the counter-example in which agent~A's distribution is uniform on $[1/4, 3/4]$ and agent~B's utility is uniformly distributed on $[0, 1/4]\cup[3/4, 1]$, and then raise the density of agent B's distribution on $[1/4, 3/4]$ from zero to a small positive constant $\epsilon$: as $\epsilon$ decreases, the gap becomes arbitrarily small.
On the other hand, the gap could vanish as a result of excessively high rather than low densities. Indeed, in a scenario where agent~A's utility is uniform on $[0,1]$ and agent B's distribution is uniform on $[1/2-\epsilon, 1/2 + \epsilon]$, as $\epsilon \to 0^+$ and agent B's density grows unboundedly, the positive gap for this agent goes to zero.
Assuming that all densities (in the support) lie between some constants $p>0$ and $q$ avoids these problematic cases.
\subsection{Proof of \cref{lem:boundgap}}
Before proving the constant gap, we first show a lower bound on the length of the intersection between support intervals.
\begin{lemma}
Suppose the set of multipliers $\vec{\beta}$ satisfies that $|p_i-1/n|<1/(2\, n)$ for all $i\in N$.
For any $i, j\in N$ and $h(u)=f_j\left(\frac{\beta_i}{\beta_j}u\right)$, the interval $I^* = \text{supp}(h)\cap\text{supp}(f_i)$ has length at least $L_{q}$, which only depends on $q$.
\label{lem:overlap}
\end{lemma}
\begin{proof}
We consider the random variables $u_i\sim D_i,\ u_j\sim D_j$ after scaling: $\widetilde{u_i}=\beta_i u_i,\ \widetilde{u_j}=\beta_j u_j$, with PDFs $\widetilde{f_i}(u)=(1/\beta_i)\,f_i\left(u/\beta_i\right)$ and $\widetilde{f_j}(u)=(1/\beta_j)\,f_j\left(u/\beta_j\right)$.
Without loss of generality we assume that $\min_{k\in N}\beta_k=1$; then by \cref{cor:boundratio} we know that $\max_{k\in N}\beta_k\leq 4\,q$.
We will first prove that $\text{supp}(\widetilde{f_i})\cap \text{supp}(\widetilde{f_j})$ has lower bounded length.
Note that $\widetilde{f_i}$ is $(p/\beta_i, q/\beta_i)$-PDF-bounded and $\widetilde{f_j}$ is $(p/\beta_j, q/\beta_j)$-PDF-bounded.
If one of the two support intervals contains the other, then the length of their intersection is at least $\min\{\beta_i/q, \beta_j/q\}\geq 1/q$.
Otherwise, if $\text{supp}(\widetilde{f_i})$ lies to the right of $\text{supp}(\widetilde{f_j})$, suppose their intersection is $[a, b]$ (the intersection cannot be empty, since then $\beta_ju_j$ would always be smaller than $\beta_iu_i$, forcing $p_j=0$); then it always holds that $\beta_iu_i\geq a$ and $\beta_ju_j\leq b$. We claim that the length of $[a, b]$ is at least $1/(4\,q)$. Otherwise, consider
\begin{align*}
p_j &= \mathbb{P}\left[\beta_ju_j=\max_{k\in N}\beta_ku_k\right] \\
&= \mathbb{P}\left[\beta_iu_i\leq b\right]\cdot\mathbb{P}\left[\beta_ju_j=\max_{k\in N}\beta_ku_k\, \bigg|\, \beta_iu_i\leq b\right] \\
&\leq \mathbb{P}\left[\beta_iu_i\leq b\right]\cdot\mathbb{P}\left[\max_{k\in N, k\ne i}\beta_ku_k\leq b\right],
\end{align*}
while
\begin{align*}
p_i &= \mathbb{P}\left[\beta_iu_i=\max_{k\in N}\beta_ku_k\right] \\
&\geq \mathbb{P}\left[\beta_iu_i\geq b\right]\cdot\mathbb{P}\left[\beta_iu_i=\max_{k\in N}\beta_ku_k\, \bigg|\, \beta_iu_i\geq b\right] \\
&\geq \mathbb{P}\left[\beta_iu_i\geq b\right]\cdot\mathbb{P}\left[\max_{k\in N, k\ne i}\beta_ku_k\leq b\right].
\end{align*}
If the length of $[a, b]$ is less than $1/(4\,q)$, we have
\begin{align*}
\mathbb{P}\left[\beta_iu_i\leq b\right] = \mathbb{P}\left[\beta_iu_i\in [a, b]\right] < \frac{q}{\beta_i}\cdot\frac{1}{4q} \leq \frac{1}{4},
\end{align*}
which indicates that $p_i>3\, p_j$ while it should be true that $1/(2\,n)<p_i, p_j<3/(2\,n)$, hence the contradiction.
The argument is symmetric for the case where $\text{supp}(\widetilde{f_i})$ lies on the left of $\text{supp}(\widetilde{f_j})$, which also gives the same lower bound $1/(4\,q)$ on the length.
Therefore we conclude that $I = [a, b] = \text{supp}(\widetilde{f_i})\cap \text{supp}(\widetilde{f_j})$ has length at least $1/(4\,q)$.
Then $f_j$ is positive on $[a/\beta_j, b/\beta_j]$ and $f_i$ is positive on $[a/\beta_i, b/\beta_i]$. Thus
\begin{align*}
\forall u\in[\frac{a}{\beta_i}, \frac{b}{\beta_i}],\, \frac{\beta_i}{\beta_j}u\in [\frac{a}{\beta_j}, \frac{b}{\beta_j}],\, h(u)=f_j(\frac{\beta_i}{\beta_j}u)>0 \\
\Rightarrow [a/\beta_i, b/\beta_i]\subseteq\text{supp}(h),
\end{align*}
and
\begin{align*}
\,\,\,\,\forall u\in[\frac{a}{\beta_i}, \frac{b}{\beta_i}],\ f_i(u)>0\, \Rightarrow\, [a/\beta_i, b/\beta_i]\subseteq\text{supp}(f_i).
\end{align*}
Therefore the interval $[a/\beta_i, b/\beta_i]\subseteq \text{supp}(h)\cap\text{supp}(f_i)$, and it has length at least $1/(4\,q)\cdot1/(4\,q)=1/(16\,q^2)$, since $b-a\geq 1/(4\,q)$ and $\beta_i\leq 4\,q$. This proves our lemma that the intersection $I^*$ has length at least $L_{q}=1/(16\,q^2)$.
\end{proof}
\label{app:boundgap}
\lemboundgap*
\begin{proof}
In \cref{app:exp} we show that
\begin{align*}
\mathbb{E}\left[u_i\, \bigg|\, \beta_j u_j = \max_{k\in N}\beta_k u_k\right] \leq \mathbb{E}\left[u_i\right]
\end{align*}
and
\begin{align*}
\mathbb{E}\left[u_i\, \bigg|\, \beta_i u_i=\max_{k\in N}\beta_k u_k\right] \geq \mathbb{E}\left[u_i\, \big|\, \beta_i u_i \geq \beta_j u_j\right].
\end{align*}
Then it suffices to show that there is a constant gap between $\mathbb{E}\left[u_i\, |\, \beta_i u_i > \beta_j u_j\right]$ and $\mathbb{E}[u_i]$. Let
\begin{align*}
\mathcal{P} = \mathbb{P}\left[\beta_i u_i \geq \beta_j u_j\right]=\int_{0}^{1}f_i(u)\, F_j\left(\frac{\beta_i}{\beta_j}u\right) du,
\end{align*}
and
\begin{align*}
\Delta\mathbb{E} = \mathbb{E}\left[u_i\, \big|\, \beta_i u_i \geq \beta_j u_j\right]-\mathbb{E}[u_i],
\end{align*}
then we have
\begin{align*}
\Delta\mathbb{E} &= \frac{1}{\mathcal{P}}\int_{0}^{1}u\, f_i(u) \left(F_j\left(\frac{\beta_i}{\beta_j}u\right)-\mathcal{P}\right) du.
\end{align*}
Let $g(u)=F_j\left(\frac{\beta_i}{\beta_j}u\right)-\mathcal{P}$, and $h(u)=f_j\left(\frac{\beta_i}{\beta_j}u\right)$.
It is clear that $g(u)$ is monotonically increasing; moreover, we can lower bound the derivative of $g(u)$ on the support of $h(u)$, which is an interval that we denote by $\text{supp}(h)$:
\begin{align*}
g'(u) = \frac{\beta_i}{\beta_j}\, f_j\left(\frac{\beta_i}{\beta_j}u\right) \geq \frac{p}{4q},\ u\in \text{supp}(h).
\end{align*}
The inequality holds since $\beta_j/\beta_i\leq4\,q$ from \cref{cor:boundratio}.
Let $\text{supp}(f_i)$ denote the support interval of $f_i(u)$.
In \cref{lem:overlap} we show a lower bound, $L_{q}$, which only depends on $q$, on the length of the interval $I^* = \text{supp}(h) \cap \text{supp}(f_i)$.
We write this interval as $I^*=[l^*, r^*]\subseteq [0, 1]$, with midpoint $m^*=(l^* + r^*)/2$, and we know that $r^*-l^*\geq L_{q}$.
From the previous analysis, we know that for any $u\in I^*$, it always holds that $f_i(u)\geq p$ and $g'(u)\geq D_{p, q}=p/(4\,q)$.
Since
\begin{equation}
\int_{0}^{1}f_i(u)\, g(u)\, du=0,
\label{eq:int}
\end{equation}
combined with the continuity and monotonicity of $g(u)$, there exists a point $u^*\in[0, 1]$ where $g(u^*)=0$. At least half of the length of $I^*$ lies on one side of $u^*$; without loss of generality we assume that $u^*\leq m^*$, so that the interval $[m^*, r^*]$ lies to the right of $u^*$.
Let
\begin{align*}
c_1 = \int_{u^*}^{\frac{m^*+r^*}{2}}f_i(u)\, g(u)\, du,\
c_2 = \int_{\frac{m^*+r^*}{2}}^{1}f_i(u)\, g(u)\, du.
\end{align*}
When $u\in[u^*, \frac{m^*+r^*}{2}]$, we have $f_i(u)\geq0$ and $g(u)\geq g(u^*)=0$, hence $c_1\geq0$. When $u\in[\frac{m^*+r^*}{2}, r^*]\subseteq I^*$, we have $f_i(u)\geq p$ and
\begin{align*}
g(u)&\geq g\left(\frac{m^*+r^*}{2}\right) \geq g(m^*)+\frac{r^*-m^*}{2}\cdot D_{p, q} \\
&\geq g(u^*)+\frac{L_{q}}{4}\cdot D_{p, q} = \frac{L_{q}\, D_{p, q}}{4}.
\end{align*}
Then we can lower bound $c_2$ by a positive constant $G_{p, q}$:
\begin{align*}
c_2 \geq \int_{\frac{m^*+r^*}{2}}^{r^*}f_i(u)\, g(u)\, du \geq \frac{r^*-m^*}{2}\cdot p\cdot \frac{L_{q}\, D_{p, q}}{4}
\geq \frac{p\, L^2_{q}\, D_{p, q}}{16} = G_{p,q}.
\end{align*}
\cref{eq:int} indicates that
\begin{align*}
\int_{0}^{u^*}f_i(u)\, g(u)\, du = -(c_1+c_2).
\end{align*}
Then we have
\begin{align*}
&\int_{0}^{u^*}u\, f_i(u)\, g(u)\, du \geq -u^*\, (c_1+c_2), \\
&\int_{u^*}^{\frac{m^*+r^*}{2}}u\, f_i(u)\, g(u)\, du \geq u^*\, c_1, \\
&\int_{\frac{m^*+r^*}{2}}^{1}u\, f_i(u)\, g(u)\, du \geq \frac{m^*+r^*}{2}\, c_2,
\end{align*}
and we can lower bound $\Delta\mathbb{E}$ by
\begin{align*}
\Delta\mathbb{E} &\geq -u^*(c_1+c_2)+u^*c_1+\frac{m^*+r^*}{2}c_2 \\
&= \left(\frac{m^*+r^*}{2}-u^*\right)c_2 \geq \frac{L_{q}\,G_{p, q}}{4}.
\end{align*}
Therefore we have finished the proof with
\begin{align*}
C_{p, q} = \frac{L_{q}\,G_{p, q}}{4} = \frac{p^2}{2^{20} \,q^7}\leq1.
\end{align*}
\end{proof}
\subsection{Envy-free: Combining Previous Results}
\label{app:combine}
For any two agents $i, j\in N$, and each item $\alpha\in M$, let $X_\alpha$ denote its contribution to $u_i(A_i)$ and $Y_\alpha$ denote its contribution to $u_i(A_j)$.
In particular, $X_\alpha=0$ if $\alpha\notin A_i$ and $X_\alpha=u_i(\alpha)$ if $\alpha\in A_i$; while $Y_\alpha=0$ if $\alpha\notin A_j$ and $Y_\alpha=u_i(\alpha)$ if $\alpha\in A_j$.
When the set of equalizing multipliers is approximated so that $|p_i-1/n|\leq\delta=\min(C_{p, q}/(4\,n), 1/(4\,n))$ for all $i\in N$, then, using the constant gap from \cref{lem:boundgap}, we have
\begin{align*}
\mathbb{E}\left[X_\alpha\right] &= \mathbb{P}\left[\alpha\in A_i\right] \cdot \mathbb{E}\left[u_i(\alpha)\, \big|\, \alpha\in A_i\right] \\
&= \mathbb{P}\left[\beta_i u_i = \max_{k\in N}\beta_k u_k\right]\cdot \mathbb{E}\left[u_i\, \bigg|\, \beta_i u_i = \max_{k\in N}\beta_k u_k\right] \\
&\geq (\frac{1}{n}-\delta)\cdot\mathbb{E}\left[u_i\, \bigg|\, \beta_i u_i = \max_{k\in N}\beta_k u_k\right], \\
\mathbb{E}\left[Y_\alpha\right] &= \mathbb{P}\left[\alpha\in A_j\right] \cdot \mathbb{E}\left[u_i(\alpha)\, \big|\, \alpha\in A_j\right] \\
&= \mathbb{P}\left[\beta_j u_j = \max_{k\in N}\beta_k u_k\right]\cdot \mathbb{E}\left[u_i\, \bigg|\, \beta_j u_j = \max_{k\in N}\beta_k u_k\right] \\
&\leq (\frac{1}{n}+\delta)\cdot\mathbb{E}\left[u_i\, \bigg|\, \beta_j u_j = \max_{k\in N}\beta_k u_k\right], \\
&\quad\Rightarrow \mathbb{E}\left[X_\alpha\right] - \mathbb{E}\left[Y_\alpha\right] \geq C_{p, q}/n-2\delta\geq C_{p, q}/(2n).
\end{align*}
Note that $X_\alpha, Y_\alpha$ are independently and identically distributed for all $\alpha\in M$, and
\begin{align*}
u_i(A_i) = \sum_{\alpha\in M}X_\alpha, \quad
u_i(A_j) = \sum_{\alpha\in M}Y_\alpha.
\end{align*}
Thus we can bound $u_i(A_i)$ by Chernoff's bound: for any $\alpha$,
\begin{align*}
&\mathbb{P}\left[u_i(A_i) < \left(1-\frac{C_{p, q}}{4\, n\, \mathbb{E}[X_\alpha]}\right) m\, \mathbb{E}\left[X_\alpha\right]\right] \\
&\leq \exp\left( -\frac{m\, C^2_{p, q}}{32\, n^2\, \mathbb{E}\left[X_\alpha\right]}\right)
\leq \exp\left(-\frac{m\, C^2_{p, q}}{32\, n}\right).
\end{align*}
where the last inequality follows from $\mathbb{E}\left[X_\alpha\right]\leq 1/n$. Similarly we can bound $u_i(A_j)$:
\begin{align*}
&\mathbb{P}\left[u_i(A_j) > \left(1+\frac{C_{p, q}}{4\, n\, \mathbb{E}\left[Y_\alpha\right]}\right) m\, \mathbb{E}\left[Y_\alpha\right]\right] \\
&\leq \exp\left( -\frac{m\, C^2_{p, q}}{48\, n^2\, \mathbb{E}\left[Y_\alpha\right]}\right)
\leq \exp\left(-\frac{m\, C^2_{p, q}}{48\, n}\right).
\end{align*}
Then a union bound lower bounds the probability $\mathcal{P}_{ij}$ that neither of the above two events happens: with probability
\begin{align*}
&\mathcal{P}_{ij} \geq 1 - \exp\left(-\frac{m\, C^2_{p, q}}{32\, n}\right) - \exp\left(-\frac{m\, C^2_{p, q}}{48\, n}\right) \\
&\geq 1-\frac{2}{n^2}\exp\left(2\, \log n-\frac{m\, C^2_{p, q}}{48\, n}\right),
\end{align*}
we have $u_i(A_i)\geq u_i(A_j)$.
Using a union bound again over all pairs, the probability $\mathcal{P}$ that $u_i(A_i)\geq u_i(A_j)$ holds for all $i, j\in N$ (that is, that the allocation is envy-free) satisfies
\begin{align*}
\mathcal{P} \geq 1-2\, \exp\left(2\, \log n-\frac{m\, C^2_{p, q}}{48\, n}\right),
\end{align*}
which suggests that the allocation is envy-free with probability at least $1-2\,\exp\left(2\, \log n-\mathit{const}(p, q)\, m/n\right)$.
\section{Negative Result for Round Robin Algorithm}
\label{app:neg}
Suppose agent 1's utility distribution is uniform on $[0.6, 1]$ and agent $n$'s is uniform on $[0, 1]$. Assume the round robin algorithm runs for a total of $t$ rounds; agent 1 gets the first item and agent $n$ gets the last item. With probability $p_1=1/3$, event A happens: agent $n$ values the item that agent 1 gets in the first round at least $2/3$.
Consider the last two items that agent $n$ gets, and let $X_{t-1}, X_{t}$ denote agent $n$'s utilities for them. From Lemma 3.2 of \citet{MS20}, we know their distribution is $X_{t-1}\sim\mathcal{D}^{\max(n+1)}_{\leq X_{t-2}}, X_t\sim \mathcal{D}^{\max(1)}_{\leq X_{t-1}}$, where $\mathcal{D}^{\max(k)}_{\leq T}$ denotes the distribution of the maximum of $k$ samples drawn from $\mathcal{D}$ truncated at $T$. $\mathcal{D}^{\max(n+1)}_{\leq X_{t-2}}$ is stochastically dominated by $\mathcal{D}^{\max(n+1)}$, the maximum of $(n+1)$ samples. Hence, with probability at least $p_2=(1/3)^{n+1}$, event B happens: $X_{t-1}\leq 1/3$, and since $X_t\leq X_{t-1}$ we also have $X_t\leq 1/3$.
The probability that both events happen is at least $p_1\cdot p_2=(1/3)^{n+2}$, since event A and event B are independent.
When both event A and event B happen, consider agent $1$ trading the first item she gets for the last two items agent $n$ gets. Agent $1$'s utility strictly increases, since $0.6+0.6>1$; at the same time, agent $n$'s utility does not decrease, since $X_{t-1}+X_{t}\leq 2/3$. The original allocation is therefore Pareto dominated by the allocation obtained after this trade. Hence, when $n=\Theta(1)$, the round robin allocation fails to be Pareto-optimal with constant probability, no matter how large $m$ is.
As the dash-dotted line in \cref{fig:experiments} suggests, such Pareto-improving trades become even more prevalent for larger $m$.
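This failure probability can also be estimated by simulation. Below is a minimal sketch for the two-agent case (the numbers of items and trials are illustrative assumptions); it runs round robin and tests the specific trade described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def round_robin(U):
    """Agents pick their favourite remaining item in turns (agent 0 first)."""
    n, m = U.shape
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    for t in range(m):
        agent = t % n
        best = max(remaining, key=lambda a: U[agent, a])
        bundles[agent].append(best)
        remaining.remove(best)
    return bundles

m, trials, hits = 40, 2000, 0
for _ in range(trials):
    U = np.vstack([rng.uniform(0.6, 1.0, m),    # agent 1 of the example
                   rng.uniform(0.0, 1.0, m)])   # agent n of the example
    b0, b1 = round_robin(U)
    first, last_two = b0[0], b1[-2:]
    # Trade: agent 1 gives her first pick for the other agent's last two picks.
    gain0 = U[0, last_two].sum() - U[0, first]
    gain1 = U[1, first] - U[1, last_two].sum()
    if gain0 > 0 and gain1 >= 0:
        hits += 1
print(f"Pareto-improving trade found in {hits / trials:.1%} of runs")
\end{verbatim}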
\section{Details on Empirical Results}
\label{app:experiment1}
\subsection{Setup}
Code for all our experiments can be found at
\url{https://github.com/pgoelz/asymmetric}.
We implemented the approximate multiplier algorithm and the main experiments in Python (3.7.10).
We rely on Scipy for evaluating integrals (\texttt{integrate.quad}) and for the PDF and CDF of the beta distributions.
We optimize fractional Maximum Nash Welfare using cvxpy (1.1.14), which in turn calls the MOSEK solver (9.2.9).
Integer MNW allocations are found using the Baron solver (2020.4.14), called through pyomo (6.1.2).
To check Pareto-optimality, we use the Gurobi solver (9.0.3).
Finally, we use Numpy in version 1.17.3.
All experiments were run on a MacBook Pro with a 3.1 GHz Dual-Core i5 processor and 16 GB RAM, running macOS 10.15.7.
To verify the allocation probabilities, and for \cref{fig:percentile}, we use Mathematica 13.0.0 on the same machine (Mathematica code is included in the above Git repository).
\subsection{Utility Distributions}
The ten utility distributions that we study in the body of the paper all come from a parametric family, which we will call peak distributions.
Specifically, each peak distribution $\mathcal{D}^\mathit{peak}_a$ is parameterized by its \emph{peak} $a \in (0, 1)$, and has the PDF $f_a$ where
\[ f_a(x) = \begin{cases}
1/10 + (18/10) \, x / a & \text{if $x \leq a$} \\
1/10 + (18/10) \, (1 - x) / (1 - a) & \text{else}.
\end{cases}\]
This PDF grows linearly from $f_a(0) = 1/10$ to $f_a(a) = 19/10$ and decreases in another linear segment from there to $f_a(1) = 1/10$.
Thus, the distributions are all $(1/10, 19/10)$-PDF-bounded as claimed, and have different means and skews depending on their peak.
We study the distributions $\{\mathcal{D}^\mathit{peak}_{i/11}\}_{i=1, \dots, 10}$.
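For concreteness, here is a minimal sketch of the peak PDF together with a simple rejection sampler (the sampler and its envelope constant are our own illustrative additions; they are not part of the experimental code described above).
\begin{verbatim}
import numpy as np

def peak_pdf(x, a):
    """PDF of the peak distribution with peak a in (0, 1)."""
    x = np.asarray(x, dtype=float)
    rising = 0.1 + 1.8 * x / a
    falling = 0.1 + 1.8 * (1.0 - x) / (1.0 - a)
    return np.where(x <= a, rising, falling)

def peak_sample(a, size, rng=np.random.default_rng(0)):
    """Rejection sampling with the uniform envelope 19/10 (the PDF maximum)."""
    out = np.empty(0)
    while out.size < size:
        x = rng.uniform(0.0, 1.0, size)
        u = rng.uniform(0.0, 1.9, size)
        out = np.concatenate([out, x[u <= peak_pdf(x, a)]])
    return out[:size]

samples = peak_sample(a=3/11, size=10_000)
print(samples.mean())   # empirical mean of D^peak_{3/11}
\end{verbatim}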
\subsection{Computing Maximum Nash Welfare}
As we mention in the body, finding discrete MNW allocations with BARON ran too slowly for our experiments.
For example, allocating $50$ random items took on average 5 minutes each time (over 50 random instances); since we estimate probabilities by sampling 1\,000 random instances, this datapoint (with still modest $m$) would take about three days to compute.
Given that finding an MNW allocation is NP-hard, this slowness is to be expected.
The main trick for computing MNW in the literature, due to \citet{CKM+19}, assumes that the utilities of agents can only take on a small set of integer values.
Unfortunately, this trick is not applicable in our setting, since we have many more items and since discretizing the utilities is likely to introduce violations of PO.
Instead, we propose to optimize a convex program to find the \emph{fractional} allocation with maximal Nash welfare, which is much more efficient.
Then, we round the fractional allocation into a proper allocation, by assigning each item to the agent who receives
the largest share of the item in the fractional allocation.
Performing any such rounding on an fPO fractional allocation preserves fractional Pareto optimality, which is easy to see from the characterization of fPO via multipliers in our Model section.
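A minimal sketch of this two-step procedure with cvxpy is given below (the problem sizes and utility matrix are illustrative, and the solver is left at cvxpy's default here; as described above, our actual experiments call MOSEK through cvxpy).
\begin{verbatim}
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 50
U = rng.uniform(0.1, 1.0, size=(n, m))   # utility matrix (assumed positive)

# Fractional MNW: maximize the sum of log-utilities, a concave program.
X = cp.Variable((n, m), nonneg=True)
utilities = cp.sum(cp.multiply(X, U), axis=1)
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(utilities))),
                     [cp.sum(X, axis=0) == 1])
problem.solve()

# Rounding: give each item to an agent holding its largest fractional share.
winner = np.argmax(X.value, axis=0)
allocation = [np.flatnonzero(winner == i) for i in range(n)]
\end{verbatim}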
\subsection{Divisibility by $n$}
\label{app:experiment2}
In the experiment displayed in \cref{fig:experiments} in the body of the paper, we evaluate the following sequence of items: $m \in \{10, 20, 100, 200, 500, 1\,000, 2\,000, 5\,000, 10\,000\}$.
This progression is a natural way to explore the space of $m$ on a logarithmic axis, but comes with one caveat: All $m$ are divisible by $n=10$, a special case in which envy-free allocations are known to appear at lower $m$ than in the general case \citep{MS19}.
To verify that our empirical findings are robust to $m$ that are not multiples of $n$, we repeat the experiment for the first values of $m$, but shifting each $m$ by $3$ as follows: $m \in \{13, 23, 53, 103, 203, 503, 1003\}$.
\begin{figure}
\caption{Version of the experiment in \cref{fig:experiments}, with the numbers of items $m$ shifted so that they are not divisible by $n$.}
\label{fig:experiments_offset3}
\end{figure}
We see that this shift in $m$ causes the round robin allocations and the rounded MNW allocations to converge towards envy-freeness at a slightly slower rate, and also makes the round robin algorithm be even less likely to be Pareto-optimal.
The large trends identified in the body of the paper all persist, and we see no notable difference due to the offset when $m \geq 200$.
\subsection{Experiments with Beta Distributions}
\label{app:betaexperiments}
We also study the five utility distributions in \cref{fig:scaling}:
\[ \mathcal{D}_A = \mathit{Beta}(1/2, 1/2),\quad \mathcal{D}_B = \mathit{Beta}(1, 3), \quad \mathcal{D}_C =
\mathit{Beta}(2, 5), \quad\mathcal{D}_D = \mathit{Beta}(2, 2),\quad \mathcal{D}_E = \mathit{Beta}(5, 1). \]
We chose them based on the illustration displayed on top of the Wikipedia page on Beta distributions\footnote{See \url{https://en.wikipedia.org/wiki/Beta_distribution}, accessed on January 11, 2022. The figure is \url{https://commons.wikimedia.org/wiki/File:Beta_distribution_pdf.svg}.} at the time of writing.
Note that the distributions are not $(p,q)$-PDF-bounded for any $p,q$; however, running our algorithm with the value $q=5$, a density bound that holds for most of the agents' distributions, produced multipliers with an accuracy of $10^{-5}$ in 29 seconds.
\begin{figure}
\caption{Version of the experiment in \cref{fig:experiments}, for the five beta distributions described in this section.}
\label{fig:experimentmnw}
\end{figure}
Since BARON ran sufficiently fast on the given five agents, this plot contains both the integral MNW allocation and the rounded fractional MNW allocation.
The figure also includes lines for when envy-freeness up to one good (EF1) holds.
EF1 always holds for the integral MNW and round robin allocations; the rounded MNW satisfies it almost always in our experiments, and the multiplier allocation satisfies it at a similar rate as EF.
\fi
\end{document} |
\begin{document}
\title{Evolution Strategies in Optimization Problems\footnote{Partially presented at the
5th Junior European Meeting on \emph{Control and Information
Technology} (JEM'06), Sept 20--22, 2006, Tallinn, Estonia. Research Report CM06/I-44.}}
\author{Pedro A. F. Cruz \and Delfim F. M. Torres}
\date{Department of Mathematics\\
University of Aveiro\\
3810-193 Aveiro, Portugal\\
\texttt{\{pedrocruz,delfim\}@ua.pt}}
\maketitle
\begin{abstract}
Evolution Strategies are inspired by biology and are part of a larger
research field known as Evolutionary Algorithms. These strategies
perform a random search in the space of admissible functions,
aiming to optimize some given objective function. We show that
simple evolution strategies are a useful tool in optimal control,
permitting one to obtain, in an efficient way, good approximations to
the solutions of some recent and challenging optimal control
problems.
\end{abstract}
\textbf{Mathematics Subject Classification 2000:} 49M99, 90C59, 68W20.
\textbf{Keywords.} Random search, Monte Carlo method, evolution
strategies, optimal control, discretization.
\section{Introduction}
Evolution Strategies (ES) are algorithms inspired by biology, with
publications dating back to $1965$ by separate authors
H.P.~Schwefel and I.~Rechenberg (\textrm{cf.} \cite{bib:ev}). ES
are part of a larger area called Evolutionary Algorithms that
perform a random search in the space of solutions aiming to
optimize some objective function. It is common to use biological
terms to describe these algorithms. Here we make use of a simple
ES algorithm known as the $(\mu,\lambda)-$ES method \cite{bib:ev},
where $\mu$ is the number of \textit{progenitors} and $\lambda$ is
the number of generated approximations, called
\textit{offsprings}. Progenitors are \textit{recombined} and
\textit{mutated} to produce, at each generation, $\lambda$
\textit{offsprings} with innovations sampled from a multivariate
normal distribution. The variance can also be subject to mutation,
meaning that it is part of the \textit{genetic code} of the
population. Every solution is evaluated by the objective function
and one or some of them selected to be the next
\textit{progenitors}, allowing the search to go on, stopping when
some criterion is met. In this paper we use a recent convergence
result proved by A.~Auger in 2005 \cite{bib:auger}. The log-linear
convergence is achieved for the optimization problems
we investigate here, and depends on the number $\lambda$ of search points.
Usually optimal control problems are approximately solved by means of numerical
techniques based on the gradient vector or the Hessian matrix
\cite{Viorel}. Compared with these techniques, ES provide easier
computer coding because they only use evaluations of a discretized
objective function. A first work combining these two research
fields (ES and optimal control) was done by D.S.~Szarkowicz in 1995
\cite{bib:szar}, where a Monte Carlo method (an algorithm with the
same principle as ES) is used to find an approximation to the
classical brachistochrone problem. In the late 1990s,
B.~Porter and his collaborators showed how ES are useful
to synthesize optimal control policies that minimize manufacturing
costs while meeting production schedules
\cite{ManufacturingSystems}. The use of ES in Control has grown
during the last ten years, and is today an active and promising
research area. Recent results, showing the power of ES in Control,
include Hamiltonian synthesis \cite{Kloster2001}, robust
stabilization \cite{Paris2000}, and optimization
\cite{OptimalElevator}. Very recently, it has also been shown that
the theory of optimal control provides insights that permit the
development of more effective ES algorithms \cite{OChelpsES}.
In this work we are interested in two classical problems of the
calculus of variations: the 1696 brachistochrone problem and the
Newton's 1687 aerodynamical problem of minimal resistance (see
\textrm{e.g.} \cite{Tikh}). These two problems, although
classical, are a source of strong current research on optimal
control and provide many interesting and challenging open
questions \cite{bib:plakhov,brachisSussmann}. We focus our study
on the brachistochrone problem with restrictions proposed by
A.G.~Ramm in 1999 \cite{bib:ramm}, for which some questions still
remain open (see some conjectures in \cite{bib:ramm}); and on a
generalized aerodynamical minimum resistance problem with
non-parallel flux of particles, recently studied by Plakhov and
Torres \cite{bib:plakhov,bib:delfim}. Our results show the
effectiveness of ES algorithms for this class of problems and
motivate further work in this direction in order to find the (yet)
unknown solutions to some related problems, such as the ones formulated
in \cite{Billiards}.
\section{Problems and Solutions}
All the problems we are interested in share the same formulation:
\[
\min T[y(\cdot)] = \int_{x_0}^{x_f} L(x,y(x),y'(x)) \mathrm{d} x
\]
on some specified class of functions, where $y(\cdot)$ must
satisfy some given boundary conditions $(x_0,y_0)$ and
$(x_f,y_f)$.
We consider a simplified $(\mu,\lambda)-$ES algorithm where we put
$\mu=1$, meaning that on each generation we keep only one
\textit{progenitor} to generate other candidate solutions, and set
$\lambda=10$, meaning that we generate $10$ candidate solutions called
\textit{offsprings} (this value appears as a reference value in the
literature). Also, the algorithm uses an individual and constant
variance $\sigma^2$ on each coordinate, which is fixed to a small
value related to the desired precision. The number of iterations
was $100\,000$ and $\sigma^2$ was tuned for each problem. We obtained
convergence in a reasonable amount of time. The simplified $(1,10)-$ES algorithm
goes as follows:
\begin{enumerate}
\item Set an equally spaced sequence of $n$ points
$\{x_0,\ldots,x_i,\ldots,x_f\}$ where $i=1,\ldots,n-2$; $x_0$ and
$x_f$ are kept fixed (given boundary conditions);
\item Generate a random piecewise linear function $y(\cdot)$
that approximates the solution, defined by a vector $y =
\{y_0,\ldots,y_i,\ldots,y_f\},\,i=1,\ldots,n-2$; transform $y$ in
order to satisfy the boundary conditions $y_0$ and $y_f$ and the
specific problem restrictions on $y$, $y'$ or $y''$;
\item Do the following steps a fixed number $N$ of times:
\begin{enumerate}
\item based on $y$ find $\lambda$ new candidate solutions
$Y^c$, $c=1,\ldots,\lambda$, where each new candidate is
produced by $Y^c = y + \mathrm{N}(0,\sigma^2)$ where $\mathrm{N}(0,\sigma^2)$
is a vector of random perturbations
from a normal distribution; transform each $Y^c$ to obey
boundary conditions $y_0$ and $y_f$ and
other problem restrictions on $y$, $y'$ or $y''$;
\item determine $T^c := T[Y^c]$, $c=1,\ldots,\lambda$,
and choose the new $y:=Y^c$ as the one with minimum $T^c$.
\end{enumerate}
\end{enumerate}
In each iteration the best solution found so far must be stored
separately, because $(\mu,\lambda)$-ES algorithms do not carry the
best solution over from one iteration to the next. A minimal sketch
of this procedure is given below.
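The sketch below is our own illustration in R (the language used for
all the experiments; see the appendix), not the code actually used for
the reported results. The functions \texttt{T.functional} (evaluating
the functional on the piecewise linear curve through $(x,y)$) and
\texttt{repair} (enforcing boundary conditions and problem
restrictions) are problem-specific placeholders.
\begin{verbatim}
# Minimal sketch of the simplified (1,10)-ES described above.
# T.functional(x, y) and repair(y) are problem-specific placeholders.
simple.es <- function(T.functional, repair, x0, xf, y0, yf,
                      n.seg = 20, lambda = 10, sigma = 0.01, N = 100000) {
  x <- seq(x0, xf, length.out = n.seg + 1)          # equally spaced abscissas
  y <- repair(seq(y0, yf, length.out = n.seg + 1))  # initial candidate
  best.y <- y
  best.T <- T.functional(x, y)
  for (iter in 1:N) {
    # generate lambda offspring by perturbing the interior ordinates
    offspring <- replicate(lambda, {
      cand <- y
      idx  <- 2:n.seg
      cand[idx] <- cand[idx] + rnorm(length(idx), mean = 0, sd = sigma)
      repair(cand)
    })
    values <- apply(offspring, 2, function(cand) T.functional(x, cand))
    # (1,lambda) selection: the progenitor is replaced by the best offspring
    y <- offspring[, which.min(values)]
    if (min(values) < best.T) {        # store the best solution seen so far
      best.T <- min(values)
      best.y <- y
    }
  }
  list(x = x, y = best.y, value = best.T)
}
\end{verbatim}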
The next subsections contain a description of the studied
problems, respective solutions and the approximations found by the
described algorithm.
\subsection{The classical brachistochrone problem, 1696}
\label{subsec:ClassBrach}
\textbf{Problem statement.}~The brachistochrone problem consists
in determining the curve of minimum descent time when a particle
starting at a point $A=(x_0,y_0)$ of a vertical plane goes to a
point $B=(x_1,y_1)$ in the same plane under the action of gravity
and with no initial velocity. From the energy
conservation law $\frac{1}{2} m v^2 + m g y = m g y_0$ one easily
deduces that the time a particle needs to reach $B$ starting from
point $A$ along a curve $y(\cdot)$ is given by
\begin{equation}\label{eq:bra}
T[y(\cdot)] = \frac{1}{\sqrt{2g}} \int_{x_0}^{x_1} \sqrt{\frac{1 + (y')^2}{y_0 - y}} \mathrm{d} x
\end{equation}
where $y(x_0)=y_0$, $y(x_1)=y_1$, and $y \in C^2(x_0,x_1)$. The
minimizer of \eqref{eq:bra} is the famous cycloid:
\begin{equation*}
\gamma\,:\,\left\{
\begin{array}{l}
x = x_0 + \frac{a}{2}(\theta - \sin \theta) \\
y = y_0 - \frac{a}{2}(1 - \cos \theta)
\end{array}
\right.
\end{equation*}
with $\theta_0 \le \theta \le \theta_1$, where $\theta_0$ and $\theta_1$
are the values of $\theta$ at the starting and ending points
$(x_0,y_0)$ and $(x_1,y_1)$. The minimum time is given by
$T=\sqrt{a/(2g)}\,\theta_1$, where the parameters $a$ and $\theta_1$
can be determined numerically from the boundary conditions.
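Before presenting the results, we sketch in R how the functional
\eqref{eq:bra} (with $g=9.8$) can be evaluated on a piecewise linear
candidate curve, which is the evaluation the ES algorithm performs at
every iteration; each linear segment can be integrated in closed form,
so the routine is exact for piecewise linear curves. The routine and
the example below are our own illustration.
\begin{verbatim}
# Travel time (eq. 1) of a particle sliding along a piecewise linear
# curve through the points (x[i], y[i]), released from rest at the
# first point; each linear segment is integrated in closed form.
brachistochrone.time <- function(x, y, g = 9.8) {
  y0 <- y[1]
  total <- 0
  for (i in seq_len(length(x) - 1)) {
    dx <- x[i + 1] - x[i]
    s  <- (y[i + 1] - y[i]) / dx             # slope of the segment
    ua <- y0 - y[i]                           # "depth" at the left end
    ub <- y0 - y[i + 1]                       # "depth" at the right end
    seg <- if (abs(s) < 1e-12) {
      dx / sqrt(ua)                           # horizontal segment
    } else {
      2 * sqrt(1 + s^2) * (sqrt(ub) - sqrt(ua)) / (-s)
    }
    total <- total + seg
  }
  total / sqrt(2 * g)
}

# Example: straight line from A = (0, 10) to B = (10, 0), 20 segments
xs <- seq(0, 10, length.out = 21)
brachistochrone.time(xs, 10 - xs)   # about 2.02, slower than T_b below
\end{verbatim}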
\textbf{Results and implementation details.}~Consider the
following three curves and the corresponding time a particle needs
to go from $A$ to $B$ along them:
\begin{description}
\item[$T_b$:]~The brachistochrone for the problem with
$(x_0,x_1)=(0,10)$, $(y_0,y_1)=(10,0)$ has parameters $a\simeq
5.72917$ and $\theta_1 =2.41201$; the time is $T_b\simeq1.84421$;
\item[$T_{es}$:]~A piecewise linear function with $20$ segments shown
in fig.~\ref{fig:bra-a} was found by ES; the time is $T_{es}=1.85013$;
\item[$T_o$:]~A piecewise linear function with $20$ segments
defined over the Brachisto\-chrone; the time is $T_o=1.85075$.
\end{description}
From fig.~\ref{fig:bra-a} one can see that the piecewise linear solution
found by ES does not interpolate the brachistochrone: the best
piecewise linear curve is not obtained by sampling the continuous
optimum.
\begin{figure}
\caption{The brachistochrone problem and approximate solution.}
\label{fig:bra-a}
\label{fig:bra-b}
\label{fig:br}
\end{figure}
We use $\sigma=0.01$ (see appendix for cpu-times).
Fig.~\ref{fig:bra-b} shows that a little more than $10\,000$
iterations are needed to reach a good solution for the $20$ line
segment problem.
\subsection{Brachistochrone problem with restrictions, 1999}
\label{subsec:RammBrach}
\textbf{Problem statement.}~Ramm (1999) \cite{bib:ramm} presents a
conjecture about a brachistochrone problem over the set $S$ of
convex functions $y$ (with $y''(x)\ge0$ a.e.) and $0 \le y(x) \le
y_0(x)$, where $y_0$ is a straight line between $A=(0,1)$ and
$B=(b,0)$, $b>0$. Up to a constant, the functional to be minimized
is formulated as in (\ref{eq:bra}):
\begin{equation*}
T[y(\cdot)] = \int_0^b \frac{ \sqrt{1+(y')^2} }{ \sqrt{1-y} }\mathrm{d}
x\, .
\end{equation*}
Let $P$ be the polygonal line connecting $A$ to $O$ and $O$ to $B$,
where $O=(0,0)$, and let $P_{br}$ be the polygonal line connecting
$A$ to $C$ and $C$ to $B$, with $C=(\pi/2,0)$. Then
$T_0:=T(y_0)=2\sqrt{1+b^2}$, $T_P:=T(P)=2+b$ and
$T(P_{br})=\sqrt{4+\pi^2}+b-\pi/2$.
Let the brachistochrone be $y_{br}$. The following inequalities,
for each $y \in S$, hold \cite{bib:ramm}:
\begin{enumerate}
\item if $0 < b < 4/3$ then $T(y_{br}) \le T(y) < T_P$;
\item if $4/3 \le b \le \pi/2$ then $T(y_{br}) \le T(y) \le T_0$;
\item if $b > \pi/2$ then $T(P_{br}) < T(y) \le T_0$.
\end{enumerate}
The classical brachistochrone solution holds for cases 1 and 2
only. For the third case, Ramm has conjectured that the minimum
time curve is composed of the brachistochrone (cycloid) between
$(0,1)$ and $(\pi/2,0)$ followed by the horizontal segment between
$(\pi/2,0)$ and $(b,0)$.
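For reference, the quantities above and the value of the conjectured
optimal curve can be computed directly in R. The identification of the
cycloid through $(0,1)$ and $(\pi/2,0)$ with the parameters $a=1$,
$\theta_1=\pi$ is our own small derivation, and the functional used
here omits the physical factor $1/\sqrt{2g}$, as in Ramm's
formulation.
\begin{verbatim}
# Reference values for Ramm's problem with b = 2 (functional T without
# the physical 1/sqrt(2g) factor).
b    <- 2
T0   <- 2 * sqrt(1 + b^2)           # straight line y_0:         ~4.472
TP   <- 2 + b                       # polygonal line through O:   4
TPbr <- sqrt(4 + pi^2) + b - pi/2   # polygonal line through C:  ~4.153
# Conjectured optimal curve: cycloid from A to C, then a horizontal
# segment.  The cycloid piece corresponds to a = 1, theta_1 = pi, so
# its functional value is pi; the horizontal piece adds b - pi/2.
Tbr  <- pi + b - pi/2               #                            ~3.571
Tbr / sqrt(2 * 9.8)                 # ~0.807, consistent with T_br below
\end{verbatim}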
\textbf{Results and implementation details.}~We study the problem
with $b=2$. Our results support Ramm's conjecture mentioned
above for case 3. We compare three descent times:
\begin{description}
\item[$T_{br}$:]~The conjectured solution in continuous time
takes $T_{br} = \sqrt{\alpha/9.8}\,\theta_f + (b-\pi/2) /
\sqrt{2\cdot 9.8} = 0.8066$; \item[$T_{es}$:]~The $20$ segment
piecewise linear solution found by ES needs $T_{es}=0.8107$;
\item[$T_o$:]~The $20$ segment piecewise linear solution with
points over the conjectured solution needs $T_o=0.8111$.
\end{description}
The previous values and fig.~\ref{fig:ramm} permit conclusions
similar to those obtained for the classical brachistochrone
problem (\S\ref{subsec:ClassBrach}).
\begin{figure}
\caption{Ramm's conjectured solution and approximate solution.}
\label{fig:ramm-a}
\label{fig:ramm-b}
\label{fig:ramm}
\end{figure}
We use $\sigma=0.001$ (see appendix for cpu-times).
Fig.~\ref{fig:ramm-b} shows that less than $10\,000$ iterations
are needed to reach a good solution.
\subsection{Newton's minimum resistance, 1687}
\label{subsec:NewtonClassical}
\textbf{Problem statement.}~Newton's aerodynamical problem
consists in determining the minimum resistance profile of a body
of revolution moving at constant speed in a rarefied medium of
evenly distributed particles that do not interact with each other.
Collisions with the body are assumed to be perfectly elastic. The
formulation of this problem is: minimize
\begin{equation*}
R[y(\cdot)] = \int_0^r
\frac{ x }{ 1 + y'(x)^2 } \, \mathrm{d} x
\end{equation*}
where $0\le x \le r$, $y(0)=0$, $y(r)=H$ and $y'(x)\ge0$.
The solution is given in parametric form:
\begin{equation*}
x(u) = 2\lambda u \, , \quad
y(u) = 0 \, , \quad \text{ for } u \in[0,1] \, ;
\end{equation*}
\begin{equation*}
x(u) = \frac{\lambda}{2}(\frac{1}{u} + 2u + u^3) \, , \quad
y(u) = \frac{\lambda}{2}(-\log u + u^2 + \frac{3}{4} u^4) - \frac{7
\lambda}{8}\, , \quad \text{ for } u \in [1, u_\textrm{max}] \, .
\end{equation*}
The parameters $\lambda$ and $u_{\max}$ are obtained by solving
$x(u_{\max})=r$ and $y(u_{\max})=H$.
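A small R sketch of this parameter fit is given below; since $\lambda$
cancels in the ratio $y(u)/x(u)$, a one-dimensional root search
suffices. The routine is our own illustration, and the value $r=1$
used in the example is an illustrative choice (the text below fixes
only $H=2$).
\begin{verbatim}
# Solve Newton's problem parameters: find u_max and lambda such that
# x(u_max) = r and y(u_max) = H, using the parametric solution above.
newton.params <- function(r, H) {
  xpar <- function(u) 1/u + 2*u + u^3                  # = 2*x(u)/lambda
  ypar <- function(u) -log(u) + u^2 + 0.75*u^4 - 1.75  # = 2*y(u)/lambda
  # lambda cancels in the ratio, so u_max solves ypar(u)/xpar(u) = H/r
  umax <- uniroot(function(u) ypar(u)/xpar(u) - H/r,
                  interval = c(1 + 1e-9, 50))$root  # enlarge 50 if needed
  lambda <- 2 * r / xpar(umax)
  list(lambda = lambda, umax = umax, flat.radius = 2 * lambda)
}
newton.params(r = 1, H = 2)   # illustrative: radius 1, height 2
\end{verbatim}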
\textbf{Results and implementation details.}~For $H=2$ we have:
\begin{description}
\item[$R_{newton}$:]~The exact solution has resistance $R_{newton} = 0.0802$;
\item[$R_{es}$:]~The $20$ segment piecewise linear solution found by ES has $R_{es}=0.0809$;
\item[$R_o$:]~The $20$ segment piecewise linear solution with points over the exact
solution leads to $R_o=0.0808$.
\end{description}
Newton's problem turns out to be more complex than the
brachistochrone problems studied above. Trial-and-error
was needed in order to find a useful $\sigma^2$ value.
For example, using $\sigma=0.001$ our algorithm
seems to get stuck in a local minimum. In
fig.~\ref{fig:newton} an approximate solution with $\sigma=0.01$ is shown.
We have also observed that changing the starting point causes minor
differences in the approximate solution. There is still room to
improve the ES solution, since $R_o$ is better than $R_{es}$. One
possible explanation for this fact is that we use $20$ fixed points
$x_i$ while the optimal solution has a break point at $x=2\lambda$.
\begin{figure}
\caption{Optimal solution to Newton's problem and
approximation.}
\label{fig:newton-a}
\label{fig:newton-b}
\label{fig:newton}
\end{figure}
We use $\sigma=0.01$ (see appendix for cpu-times).
Fig.~\ref{fig:newton-b} shows that less than $1\,000$
iterations are needed to reach a good solution.
\subsection{Newton's problem with temperature, 2005}
\label{subsec:NewtonTemp}
\textbf{Problem statement.}~The problem consists in determining
the body of minimum resistance moving with constant velocity in a
rarefied medium of chaotically moving particles whose velocity
distribution is assumed to be radially symmetric in the Euclidean
space $\mathbb{R}^d$. This problem was posed and solved in
2005--2006 by Plakhov and Torres \cite{bib:plakhov,bib:delfim}. It
turns out that the two-dimensional problem ($d=2$) is richer
than the three-dimensional one, with five possible types of
solutions when the velocity of the moving body is neither `too slow'
nor `too fast' compared with the velocity of the particles.
The pressure at the body surface is described by two functions: in
the front of the body the flux of particles causes resistance, in
the back the flux causes acceleration. We consider functions found
in \cite{bib:delfim}, where the two flux functions $p_+$ and $p_-$
are given by $p_+(u)=\frac{1}{1+u^2}+0.5$ and $p_-(u) =
\frac{0.5}{1+u^2}-0.5$. We also consider a body of fixed radius
$1$. The optimal solution depends on the body height $h$: the
front solution is denoted by $f_{h_+}$, which depends on some
appropriate front height $h_+$; and the solution for the rear is
denoted by $f_{h_-}$, depending on some appropriate height $h_-$.
The optimal solutions $f_{h_+}$ and $f_{h_-}$ are obtained as
\begin{equation*}
f_{h_+} = \arg\min_{f_h} R_+(f_h), \qquad R_+(f_h) = \int_0^1 p_+(f_h'(t)) \, \mathrm{d} t,
\end{equation*}
and
\begin{equation*}
f_{h_-} = \arg\min_{f_h} R_-(f_h), \qquad R_-(f_h) = \int_0^1 p_-(f_h'(t)) \, \mathrm{d} t.
\end{equation*}
Then, the body shape is determined by minimizing
\begin{equation*}
R(h) = \min_{h_+ + h_-=h} \left( R_+(f_{h_+}) + R_-(f_{h_-}) \right).
\end{equation*}
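For a piecewise linear profile on an equally spaced grid, these
resistance integrals reduce to finite sums, which is how the ES
candidates can be evaluated; the following R sketch (our own
illustration, with an arbitrary example profile) makes this explicit.
\begin{verbatim}
# Resistance functionals for a piecewise linear profile f on [0,1],
# given as values over a grid t; p.plus and p.minus are the pressure
# laws quoted above from Plakhov and Torres.
p.plus  <- function(u) 1 / (1 + u^2) + 0.5
p.minus <- function(u) 0.5 / (1 + u^2) - 0.5
resistance <- function(f, t = seq(0, 1, length.out = length(f)), p = p.plus) {
  slopes <- diff(f) / diff(t)          # derivative on each segment
  sum(p(slopes) * diff(t))             # exact integral for piecewise linear f
}
# Example: a straight front profile rising to height 1
resistance(c(0, 1), t = c(0, 1), p = p.plus)   # p.plus(1) = 1
\end{verbatim}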
The solution can be of five types (for $d = 2$). From the functions
$p_+$ and $p_-$ one can determine constants $u_+^0$, $u_*$, $u_-^0$
and $h_-$. Then, depending on the choice of the height $h$, the
theory developed in \cite{bib:plakhov,bib:delfim} asserts that the
minimum resistance body is:
\begin{enumerate}
\item a trapezium if $0 < h < u_+^0$;
\item an isosceles triangle if $u_+^0 \le h \le u_*$;
\item the union of a triangle and a trapezium if $u_* < h < u_* + u_-^0$;
\item if $h \ge u_* + u_-^0$ the solution depends on $h_-$ and can be a union of
two isosceles triangles with common base with heights $h_+$ and $h_-$ or
the union of two isosceles triangles and a trapezium;
\item a combination of a triangle, a trapezium and another triangle,
depending on some further particular conditions (\textrm{cf.}
\cite{bib:plakhov}).
\end{enumerate}
\textbf{Results and implementation details.}~We illustrate the
use of ES algorithms for $h=2$. Following section
4.1 of \cite{bib:plakhov} we have $u_*\simeq1.60847$ and $u_-^0=1$,
so this is case 3 above: $u_* < h < u_* + u_-^0$. The resistance values
are:
\begin{description}
\item[$R_{pd}$:]~The exact solution has resistance $R_{pd} = 0.681$;
\item[$R_{es}$:]~The $31$ segment piecewise linear
solution found by ES has $R_{es}=0.685$;
\end{description}
As in the classical Newton problem (\S\ref{subsec:NewtonClassical}),
some manual search for the parameter $\sigma^2$ was needed.
\begin{figure}
\caption{2D Newton-type problem with
temperature.}
\label{fig:plakhov-a}
\label{fig:plakhov-b}
\label{fig:plakhov}
\end{figure}
We use $\sigma=0.01$ and a piecewise approximation with $31$ equally
spaced segments in $x$ (see appendix for cpu-times).
Fig.~\ref{fig:plakhov-b} shows that only slightly more than $1\,000$
iterations are needed to reach a good solution.
\section{Conclusions and future directions}
Our main conclusion is that a simple ES algorithm can be
effectively used as a tool to find approximate solutions to some
optimization problems. In the present work we report
simulations that motivate the use of ES algorithms to find good
approximate solutions to brachistochrone-type and Newton-type
problems. We illustrate our approach with the classical problems
and with some recent and still challenging problems. More
precisely, we considered the 1696 brachistochrone problem (B); the
1687 Newton's aerodynamical problem of minimal resistance (N); a
recent brachistochrone problem with restrictions (R) studied by
Ramm in 1999, and where some open questions still remain
\cite{bib:ramm}; and finally a generalized aerodynamical minimum
resistance problem with non-parallel flux of particles (P),
recently studied by Plakhov and Torres
\cite{bib:plakhov,bib:delfim} and which gives rise to other
interesting questions \cite{Billiards}.
We argue that the approximate solutions found by the ES
algorithm are of good quality. We give two reasons. First, for the
brachistochrone and Ramm problems the functional value of the
ES approximation was better than that of the linear interpolation of
the exact solution, showing that the ES algorithm is capable of good
precision. The second reason is the low relative error
$r(T_Y,T_y)$ between the functional over the exact solution $T_y$
and the approximate solution $T_Y$, as shown in the following
table:
\begin{center}
\begin{tabular}{|l|c|c||l|c|c|}
\hline
Pr. & $\max|Y_k - y_k|$ & $r(T_Y,T_y)$ & Pr.& $\max|Y_k - y_k|$ & $r(T_Y,T_y)$ \\
\hline
(B) & 0.15 & 0.001 &
(N) & 0.08 & 0.01 \\
(R) & 0.09 & 0.003 &
(P) & 0.07 & 0.001 \\
\hline
\end{tabular}
\end{center}
where $y_k$ are points on the exact solution of the problem and
$Y_k$ are points from the piecewise approximation. We note that
$\max|Y_k - y_k|$ need not be zero because the best continuous
solution and the best piecewise linear solution do not coincide.
ES algorithms are computationally intensive. For
brachistochrone-type and Newton-type problems, with today's
computing power, a few minutes of simulation (or less) were enough
even in an interpreted language (see appendix).
More research is needed to tune this kind of algorithm and obtain
more accurate solutions. Special attention must be paid to
qualifying an obtained ES approximation: is it a minimum of the
objective functional? Is it local or global? Another question is
computational efficiency. Waiting a few minutes on recent computers
is acceptable, but can the running times be improved?
Concerning accuracy, several more advanced ES algorithms have been
proposed. These algorithms can adapt the $\sigma$ values and use
generated second order information, which can influence the
precision and the time needed. The use of random $x$ points
(in addition to the piecewise linear $y$ values) should also be
investigated.
We believe that the simplicity of the technique considered in the
present work can help in the search of solutions to some open
problems in optimal control. This is under investigation and will
be addressed elsewhere.
\section*{Appendix -- hardware and software}
The code developed for this work can be freely obtained from the
first author's web page, at
\verb+http://www.mat.ua.pt/jpedro/evolution/+.
In most of our investigations a few minutes were sufficient to
obtain a good approximation for all the considered problems, even
using a coding style aimed at human readers rather than machines
(the code favours clarity of concepts over execution
speed). Our simulations used a Pentium~4 CPU at 3~GHz, running Debian
Linux \verb+http://www.debian.org+. The language was R
\cite{bib:r}, chosen because it is a fast interpreted language,
numerically oriented to statistics and freely available.
The following CPU-times were obtained with command
$$
\verb+time R CMD BATCH problem.R+
$$
where \verb+time+ keeps track of the CPU time used and \verb+R+ calls the
interpreter. The times are rounded and the last column estimates
the time needed for a first good solution:
\begin{center}
\begin{tabular}{lccc}
Problem & Section & $100\,000$ iterations & `Good solution' at \\
\hline
Brachistochrone & \S\ref{subsec:ClassBrach} & 10~min & 1~min\\
Ramm conjecture & \S\ref{subsec:RammBrach} & 10~min & 1~min\\
Newton & \S\ref{subsec:NewtonClassical} & 9~min & 10~sec\\
Plakhov \& Torres & \S\ref{subsec:NewtonTemp} & 14~min & 10~sec\\
\hline
\end{tabular}
\end{center}
We note that the per iteration `step' was $\sigma=0.001$ in the
brachistochrone(-type) problems and $\sigma=0.01$ for the
Newton(-type) problems. Using a compiled language like \textsf{C}
one can certainly improve times by several orders of magnitude.
\end{document} |
\begin{document}
\title{Increasing the Replicability for Linear Models via Adaptive Significance Levels}
\author[1]{D. Vélez\footnote{[email protected], ORCID=0000-0001-7162-8848}}
\author[2]{ M.E. Pérez\footnote{[email protected], ORCID=0000-0001-8641-8405}}
\author[2]{L. R. Pericchi\footnote{[email protected], ORCID=0000-0002-7096-3596}}
\affil[1]{\small University of Puerto Rico, Río Piedras Campus, Statistical Institute and Computerized Information Systems, Faculty of Business Administration, 15 AVE Universidad STE 1501, San Juan, PR 00925-2535, USA}
\affil[2]{University of Puerto Rico, Río Piedras Campus, Department of Mathematics, Faculty of Natural Sciences, 17 AVE Universidad STE 1701, San Juan, PR 00925-2537, USA}
\date{}
\maketitle
\begin{abstract}
We put forward an adaptive alpha (Type I error) that decreases as the information grows, for hypothesis tests in which nested linear models are compared. A less elaborate adaptation was already presented in \citet{PP2014} for comparing general i.i.d. models. In this article we present refined versions to compare nested linear models. This calibration may be interpreted as a Bayes-non-Bayes compromise: a simple translation of a Bayes Factor into frequentist terms that leads to statistical consistency and, most importantly, is a step towards statistics that promotes replicable scientific findings.
\end{abstract}
\textbf{Keywords:} p-value calibration; Bayes factor; linear model; likelihood ratio; adaptive alpha; PBIC \\
\textit{MSC2020: 62C05, 62C10, 62J20}
\section{Motivation}
Obtaining a $p$-value lower than $0.05$ no longer opens the doors to publication, and statisticians must now provide alternatives to scientists. One of the most important problems in statistics, and in science as a whole, is to provide statistical measures of evidence that lead to replicable scientific findings. In this article, we propose an adaptive alpha level for linear models that depends on the design matrices, the difference in dimension between the models and the ``effective'' sample size. This adaptive alpha mimics the behaviour of a powerful Bayes Factor based on recent improvements of BIC \citep{Bayarri2019}, but uses the familiar concepts of \textit{Significance Hypothesis Testing}.
\section{Basic Derivation}
Consider the linear regression model $\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}$, where $\mathbf{y}$ represents the $n$-dimensional random vector of response variables, $\mathbf{X}$ is the $n\times k$ matrix of non-stochastic explanatory variables (for simplicity, here we assume that $\mathbf{X}$ is a full rank matrix), $\boldsymbol{\beta}$ is a $k$-dimensional vector of regression parameters, $\boldsymbol{\epsilon}$ is an $n$-dimensional vector of normal errors, i.e. $\boldsymbol{\epsilon}\sim N(0,\sigma^2\mathbf{I}_n)$, and $\sigma>0$ is the standard deviation of the error.
We denote with $M$ the full model whose matrix form is given by:
\begin{eqnarray*}
\mathbf{y}~~~~~&=&~~~~~~~~~~~~~~~\mathbf{X}~~~~~~~~~~~~~~~~~~~\boldsymbol{\beta}~~~~+~~~~~\boldsymbol{\epsilon}\\
\begin{bmatrix}
~y_1&\\
~y_2&\\
~\vdots\\
~y_n&\\
\end{bmatrix}&=&\begin{bmatrix}
1&x_{12}&x_{13}&\cdots & x_{1k}\\
1&x_{22}&x_{23}&\cdots & x_{2k}\\
\vdots &\vdots &\vdots & \ddots &\vdots\\
1&x_{n2}&x_{n3}&\cdots & x_{nk}\\
\end{bmatrix}\begin{bmatrix}
~\beta_1&\\
~\beta_2&\\
~\vdots\\
~\beta_k&\\
\end{bmatrix}+\begin{bmatrix}
~\epsilon_1&\\
~\epsilon_2&\\
~\vdots\\
~\epsilon_n&\\
\end{bmatrix}
\end{eqnarray*}
\noindent where $\epsilon_i\sim N(0,\sigma^2), \text{with}\hspace{.2cm} 1\leq i\leq n$.\\
Now suppose that we want to perform pairwise comparisons between generic nested sub-models $M_i$ and $M_j$ of $M$, where $M_j$ is a sub-model with $j\,(\leq k)$ regression coefficients and $M_i$ is a sub-model with $i\,(<j)$ regression coefficients, nested in $M_j$. Formally, we want to test the hypothesis
$$H_i:\text{Model}\hspace{.1cm} M_i\hspace{0.3cm} versus \hspace{.3cm}H_j:\text{Model}\hspace{.1cm} M_j,$$
\noindent in other words, we are comparing the following two nested linear models $$M_i:\mathbf{y}=\mathbf{X}_i\boldsymbol{\delta}_i+\boldsymbol{\epsilon}_i,\hspace{.5cm}\boldsymbol{\epsilon}_i\sim N(0,\sigma_i^2\mathbf{I}_n)$$ and $$M_j:\mathbf{y}=\mathbf{X}_j\boldsymbol{\beta}_j+\boldsymbol{\epsilon}_j,\hspace{.5cm}\boldsymbol{\epsilon}_j\sim N(0,\sigma_j^2\mathbf{I}_n).$$
So the Bayes Factor is:
$$B_{ij}(\mathbf{y})=\dfrac{\displaystyle\int f(\mathbf{y}|\mathbf{X}_i\boldsymbol{\delta}_i,\sigma_i^2\mathbf{I}_n)\pi^N(\boldsymbol{\delta}_i,\sigma_i)d\boldsymbol{\delta}_id\sigma_i}{\displaystyle\int f(\mathbf{y}|\mathbf{X}_j\boldsymbol{\beta}_j,\sigma_j^2\mathbf{I}_n)\pi^N(\boldsymbol{\beta}_j,\sigma_j)d\boldsymbol{\beta}_jd\sigma_j}.$$
The construction of the adaptive alpha is based on $B_{ij}(\mathbf{y})$, without explicit assessment of prior distributions by the user. Instead we will use well established statistical practices to directly construct summaries of evidence.
\begin{itemize}
\item[1.] \textit{Approximation to Bayes factors under regularity conditions:} Laplace's asymptotic method, under regularity conditions, gives the following approximation \citep[see for example][]{BerPer01}:
\begin{equation}\label{eq1}
B_{ij}=\dfrac{f(\mathbf{y}|\mathbf{X}_i\widehat{\boldsymbol{\delta}}_i,S_i^2\mathbf{I}_n)|\hat{I}_i|^{-1/2}}{f(\mathbf{y}|\mathbf{X}_j\widehat{\boldsymbol{\beta}}_j,S_j^2\mathbf{I}_n)|\hat{I}_j|^{-1/2}}\cdot\dfrac{(2\pi)^{i/2}\pi^N(\widehat{\boldsymbol{\delta}}_i,S_i)}{(2\pi)^{j/2}\pi^N(\widehat{\boldsymbol{\beta}}_j,S_j)},
\end{equation}
\noindent where $\widehat{\boldsymbol{\delta}}_i, S_i^2, \widehat{\boldsymbol{\beta}}_j, S_j^2$ are the MLEs of the parameters and $\hat{I}_i, \hat{I}_j$ are the observed information matrices of $M_i$ and $M_j$, respectively.
Since the first factor typically goes to $\infty$ or to $0$ as the sample size accumulates, but the second factor stays bounded, it is useful to rewrite (\ref{eq1}) as:
\begin{equation}\label{eq2}
-2\log(B_{ij})=-2\log\left(\dfrac{f(\mathbf{y}|\mathbf{X}_i\widehat{\boldsymbol{\delta}}_i,S_i^2\mathbf{I}_n)}{f(\mathbf{y}|\mathbf{X}_j\widehat{\boldsymbol{\beta}}_j,S_j^2\mathbf{I}_n)}\right)-2\log\left(\dfrac{|\hat{I}_j|^{1/2}}{|\hat{I}_i|^{1/2}}\right)+C.
\end{equation}
\item[2.]\textit{Likelihood ratio:} The likelihood ratio can be written as:
\begin{equation}\label{eq3}
\dfrac{f(\mathbf{y}|\mathbf{X}_i\widehat{\boldsymbol{\delta}}_i,S_i^2\mathbf{I}_n)}{f(\mathbf{y}|\mathbf{X}_j\widehat{\boldsymbol{\beta}}_j,S_j^2\mathbf{I}_n)}=\left(\frac{S_{j}^2}{S_{i}^2}\right)^{\frac{n}{2}}=\left(\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}\right)^{\frac{n}{2}}
\end{equation}
See Appendix 1 for derivations.
\item[3.]\textit{The Fisher information matrix:} The Observed Fisher Information Matrix (OFIM) with $i$ adjustable parameters is
\begin{equation}\label{eq4}
\hat{I}_i(\widehat{\boldsymbol{\delta}_i})=\dfrac{1}{S_i^2}\cdot \mathbf{X}_i^t\mathbf{X}_i.
\end{equation}
Returning to equation (\ref{eq2}) and using (\ref{eq3}) and (\ref{eq4}) we have
\begin{equation}\label{eq5}
-2\log(B_{ij})=-(n-1)\log\left(\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}\right)-\log\left(\dfrac{|\mathbf{X}_j^t\mathbf{X}_j|}{|\mathbf{X}_i^t\mathbf{X}_i|}\right)+C.
\end{equation}
The constant $C$ depends on the prior assumptions and does not go to zero, but it is of lesser importance as the sample size grows.
\end{itemize}
\subsection{Sampling distribution of the likelihood ratio}
Under $H_0$, the sampling distribution of $\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}$ is a beta distribution,
\begin{equation}
\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}\sim Beta\left(\dfrac{n-j}{2},\dfrac{q}{2}\right)
\end{equation}
where $q=j-i$ \citep[see][Corollary 1]{art1}.
\begin{theorem}
\begin{equation}\label{eq7}
-(n-1)\log\left(\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}\right)\sim Ga\left(\frac{q}{2},\frac{\frac{n-j}{n-1}}{2}\right)
\end{equation}
\end{theorem}
\begin{proof}
Let $Z=-(n-1)\log(Y)$ and $Y\sim Beta\left(\frac{n-j}{2},\frac{q}{2}\right)$, then
\begin{eqnarray*}
F_{Z}(z)=P(Z\leq z)&=&P(Y\geq e^{-\frac{z}{n-1}})\\
&=&1-F_Y(e^{-\frac{z}{n-1}})\\
& \Downarrow & \\
f_Z(z)&=&\left(\frac{1}{n-1}\right)e^{-\frac{z}{n-1}}f_Y(e^{-\frac{z}{n-1}}). \\
\end{eqnarray*}
\noindent Thus $$f_Z(z)=\frac{1}{n-1}\dfrac{\Gamma\left(\frac{n-j}{2}+\frac{q}{2}\right)}{\Gamma\left(\frac{n-j}{2}\right)\Gamma\left(\frac{q}{2}\right)}e^{-\left(\frac{n-j}{2(n-1)}\right)z}(1-e^{-\frac{z}{n-1}})^{\frac{q}{2}-1}$$
but $\Gamma(n+\alpha)\approx \Gamma(n)n^{\alpha}$ \citep[see][eq. 6.1.46]{Abra}, so,
\begin{eqnarray*}
f_Z(z)&=&\frac{1}{n-1}\dfrac{\left(\frac{n-j}{2}\right)^{\frac{q}{2}}}{\Gamma\left(\frac{q}{2}\right)}e^{-\left(\frac{n-j}{2(n-1)}\right)z}(1-e^{-\frac{z}{n-1}})^{\frac{q}{2}-1}\\
&=& \dfrac{\left(\frac{n-j}{2(n-1)}\right)}{\Gamma\left(\frac{q}{2}\right)}e^{-\left(\frac{n-j}{2(n-1)}\right)z}\left(\frac{n-j}{2}-\frac{n-j}{2}e^{-\frac{z}{n-1}}\right)^{\frac{q}{2}-1}\\
&=& \dfrac{\left(\frac{n-j}{2(n-1)}\right)}{\Gamma\left(\frac{q}{2}\right)}e^{-\left(\frac{n-j}{2(n-1)}\right)z}\left(\frac{n-j}{2(n-1)}z\right)^{\frac{q}{2}-1}+O(n^{-2})
\end{eqnarray*}
\noindent hence $Z\sim Ga\left(\frac{q}{2},\frac{\frac{n-j}{n-1}}{2}\right)$.
\end{proof}
\begin{remark}
Theorem~1 is consistent with Wilks' Theorem, that is,
\begin{center}
$Ga\left(\frac{q}{2},\frac{\frac{n-j}{n-1}}{2}\right)$ $\longrightarrow$ $\chi^2_{(q)}$ as $n \rightarrow \infty,$
\end{center}
see for example \citet{CasBer2001}, Theorem 10.3.3.
\end{remark}
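Theorem~1 can also be checked quickly by simulation; the following R sketch, with arbitrary illustrative values of $n$, $j$ and $q$, compares the empirical upper tail probability of $-(n-1)\log Y$ with the Gamma approximation.
\begin{verbatim}
# Monte Carlo check of Theorem 1: -(n-1)*log(Y), Y ~ Beta((n-j)/2, q/2),
# is approximately Gamma(shape = q/2, rate = (n-j)/(2*(n-1))).
set.seed(1)
n <- 100; j <- 3; q <- 2
z <- -(n - 1) * log(rbeta(1e5, (n - j) / 2, q / 2))
g <- qgamma(0.95, shape = q / 2, rate = (n - j) / (2 * (n - 1)))
mean(z > g)        # empirical upper-tail probability, close to 0.05
\end{verbatim}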
\subsection{Condition for the adaptive $\alpha$ to be approximately equivalent (yield the same decision) to a Bayes factor}
If we denote by $g_{n,{\alpha}}(q)$ the quantile of the test statistic of (\ref{eq7}) corresponding to a tail probability $\alpha$, using (\ref{eq5}) and (\ref{eq7}) we can make an important departure from classical hypothesis testing: instead of fixing the tail probability (and the quantile) as in significance testing, we let the quantile vary according to the following rule
\begin{equation}\label{eq8}
g_{\alpha_{(\mathbf{X}_i,\mathbf{X}_j,n)}}(q)=g_{n,{\alpha}}(q)+\log\left(\dfrac{|\mathbf{X}_j^t\mathbf{X}_j|}{|\mathbf{X}_i^t\mathbf{X}_i|}\right).
\end{equation}
\noindent Then the Bayes factor will converge to a constant (and $g_{\alpha_{(\mathbf{X}_i,\mathbf{X}_j,n)}}(q)$ will replace the fixed quantile). Note that (\ref{eq8}) establishes an approximate equivalence between Bayes Factor and adaptive significance levels.
\section{Adaptive alpha for linear models}
In order to establish the asymptotic correspondence between $\alpha$ levels and Bayes factors we need the following asymptotic expansion of the upper tail probability of the $Ga\left(\frac{q}{2},\frac{\frac{n-j}{n-1}}{2}\right)$ distribution, valid for large quantiles $g_n(q)$:
\begin{equation}\label{eq9}
1-F(g_n(q))\approx \dfrac{g_n(q)^{\frac{q}{2}-1}\exp\{-\frac{n-j}{2(n-1)}\cdot g_n(q)\}}{\left(\frac{2(n-1)}{n-j}\right)^{q/2-1}\Gamma\left(\frac{q}{2}\right)},
\end{equation}
\noindent see \citet{art2}.\\
Now we equate the significance level $\alpha$ to the approximate upper tail probability in (\ref{eq9}):
$$\alpha\approx \dfrac{g_{n,\alpha}(q)^{\frac{q}{2}-1}\exp\{-\frac{n-j}{2(n-1)}\cdot g_{n,\alpha}(q)\}}{\left(\frac{2(n-1)}{n-j}\right)^{q/2-1}\Gamma\left(\frac{q}{2}\right)}.$$
If we replace the fixed quantile $g_{n,\alpha}(q)$ by $g_{\alpha_{(\mathbf{X}_i,\mathbf{X}_j,n)}}(q)$ as in (\ref{eq8}), the following result is obtained:
\begin{equation}\label{eq10}
\alpha_{(b,n)}(q)=\dfrac{[g_{n,\alpha}(q)+\log(b)]^{\frac{q}{2}-1}}{b^{\frac{n-j}{2(n-1)}}\cdot\left(\frac{2(n-1)}{n-j}\right)^{q/2-1}\Gamma\left(\frac{q}{2}\right)}\times C_{\alpha},
\end{equation}
\noindent where $b=\frac{|\mathbf{X}_j^t\mathbf{X}_j|}{|\mathbf{X}_i^t\mathbf{X}_i|}$. This is the simple (approximate) calibration we have been looking for; it defines the linear adaptive $\alpha_{(b,n)}$ levels and also the corresponding adaptive quantiles, which are suitable for constructing adaptive testing intervals for any $q$. Note that we still need to assign a value to the constant $C_\alpha$ in (\ref{eq10}); this will be discussed in the next section.
\begin{remark}\label{obs2}
Note that the adaptive significance level $\alpha_{(b,n)}$ depends on the design matrices, the sample size $n$ and the difference of dimension between the models being compared. In particular, large sample sizes and co-linearity among explanatory variables will affect the significance level.
\end{remark}
\begin{remark}\label{obs3}
The derivation in this section will be further refined in the next section along the lines of the Prior Based Information Criterion and the Effective Sample Size of \citet{Bayarri2019}.
\end{remark}
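For reference, a direct R transcription of (\ref{eq10}) is sketched below (our own illustration). Here $g_{n,\alpha}(q)$ is the upper $\alpha$ quantile of the Gamma reference distribution in (\ref{eq7}); the calibration constant $C_\alpha$ is left as an input, and the default shown anticipates the ``simple approximation'' strategy of the next section.
\begin{verbatim}
# Adaptive significance level of eq. (10).
adaptive.alpha <- function(alpha, n, j, q, b, C.alpha = NULL) {
  # upper-alpha quantile of the Gamma(q/2, (n-j)/(2(n-1))) distribution
  g <- qgamma(1 - alpha, shape = q / 2, rate = (n - j) / (2 * (n - 1)))
  if (is.null(C.alpha))        # "simple approximation" calibration (Section 4)
    C.alpha <- exp(-(n - j) / (2 * (n - 1)) * g)
  (g + log(b))^(q / 2 - 1) /
    (b^((n - j) / (2 * (n - 1))) *
       (2 * (n - 1) / (n - j))^(q / 2 - 1) * gamma(q / 2)) * C.alpha
}
\end{verbatim}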
\section{Strategies to select the calibration constant $C_\alpha$}
We now introduce some strategies for choosing the constant $C_\alpha$, which are simple enough for fast implementation in practice. Different strategies could be developed, providing alternative calibrations.
\begin{itemize}
\item[1.]\textbf{The strategy of a simple approximation}\\
The simplest approximation in (\ref{eq1}), which is implicit in the BIC approximation, comes from assuming priors $\pi^N(\boldsymbol{\beta}_j,S_j)$, $\pi^N(\boldsymbol{\delta}_i,S_i)$ to be $N((\beta_j,\sigma_j)|(\boldsymbol{\beta}_j,S_j),\hat{I}_j^{*(-1)})$, $N((\delta_i,\sigma_i)|(\boldsymbol{\delta}_i,S_i),\hat{I}_i^{*(-1)})$ respectively, where $\hat{I}_k^*=\hat{I}_k/n^*$, with $\hat{I}_k$ being the observed Fisher Information matrix, but noting that $n^*$ is the Effective Sample Size, mentioned before. This leads to a $C=1$ in (\ref{eq5}) and then a $C_\alpha=\exp\left\{-\frac{n-j}{2(n-1)}\cdot g_{n,\alpha}(q)\right\}$ in (\ref{eq10}).
\item[2.] \textbf{The strategy of a minimal balanced experiment: The one-way Layout}\\
We suppose that $m$ groups of observations are available, with $n_k$ observations in the $k$th group, and that
$$y_{kh}\sim N(\mu_k,\sigma^2),~~~k=1,\dots,m~~,~~h=1,\dots,n_k,$$
independently, given $\mu_1,\dots,\mu_m,\sigma^2$. We shall denote by $M_i$ the model which sets $\mu_1=\cdots=\mu_m$, and by $M_j$ the model which allows the $\mu_k$ to differ. Note that $j=m$, $i=1$ and the matrices $\mathbf{X}_i$, $\mathbf{X}_j$ are easily identified. We have that $q=m-1$, thus (\ref{eq10}) reduces to
$$\alpha(n_k,q)=\dfrac{\left[g_{n,\alpha}(q)+\log((\prod_{k=1}^{q+1}n_k)/n)\right]^{q/2-1}}{\left((\prod_{k=1}^{q+1}n_k)/n\right)^{\frac{q+1}{2(2q+1)}}\Gamma\left(\frac{q}{2}\right)}C_\alpha,$$
where $n=\displaystyle\sum_{k=1}^mn_k$. For the minimal balanced experiment, $n_k=2$ for each group and $n=2m=2(q+1)$; then $$ C_\alpha=\alpha\cdot\dfrac{(2^q/(q+1))^{\frac{q+1}{2(2q+1)}}\Gamma\left(\frac{q}{2}\right)}{[g_{n,\alpha}(q)+\log(2^q/(q+1))]^{\frac{q}{2}-1}},$$ where $\alpha$ is the desired level for the minimal sample. The case $m=2$ is of particular interest since $q=1$; then the calibration constant $C_\alpha$ is:
$$C_\alpha=\alpha\cdot\sqrt{\pi\cdot g_{n,\alpha}(1)}.$$
\item[3.]\textbf{The strategy based in PBIC}\\
A major improvement over BIC-type approximations has been recently introduced in \citet{Bayarri2019} and termed Prior Based Information Criterion (PBIC). The improvement is due to: i) ``the sample size'' $n$ is replaced by a much more precise ``effective sample size'' $n^e$ (for i.i.d. observations $n^e=n$, but not for non-i.i.d. observations), and ii) the effect of the prior is retained in the final expression, in which a flat-tailed non-normal prior is employed.\\
This strategy consists in replacing in (\ref{eq5}) the constant $C$ that depends on the prior assumptions by
$$C=2\sum_{m_i=1}^{q_i}\log\frac{(1-e^{-v_{m_i}})}{\sqrt{2}v_{m_i}}-2\sum_{m_j=1}^{q_j}\log\frac{(1-e^{-v_{m_j}})}{\sqrt{2}v_{m_j}},$$
where $v_{m_l}=\frac{\hat{\xi}_{m_l}}{[d_{m_l}(1+n^e_{m_l})]}$, with $l=i,j$ corresponding to the models $M_i$ and $M_j$ respectively; see \citet{Bayarri2019}. Here $n^e_{m_l}$ refers to the effective sample size (called TESS; see \citet{Berger2014}). Hence
$$\alpha_{(b,n)}(q)=\dfrac{[g_{n,\alpha}(q)+\log(b)+C]^{\frac{q}{2}-1}}{b^{\frac{n-j}{2(n-1)}}\cdot\left(\frac{2(n-1)}{n-j}\right)^{q/2-1}\Gamma\left(\frac{q}{2}\right)}\times C_{\alpha},$$
and $$C_{\alpha}=\exp\left\{-\frac{n-j}{2(n-1)} \left( g_{n,\alpha}(q)+C \right)\right\}.$$
\end{itemize}
\section{Examples}
\subsection{Balanced One Way Anova}
Suppose we have $k$ groups with $r$ observations each, for a total sample size of $kr$ and let
$H_0: \mu_1= \cdots = \mu_k=\mu \;\; vs \;\; H_1: \mbox{At least one } \mu_i \mbox{ different}$. Then the design matrices for both models are:
{\footnotesize
\[ \mathbf{X}_1=\left(\begin{array}{c}
1\\
1\\
\vdots\\
1\\
\end{array}\right) \;, \mathbf{X}_k = \left( \begin{matrix}
1 & 0 & \ldots & 0 \\
1 & 0 & \ldots & 0 \\
\vdots & \vdots & \ldots & \vdots \\
1 & 0 & \ldots & 0 \\
0 & 1 & \ldots & 0 \\
0 & 1 & \ldots & 0 \\
\vdots & \vdots & \ldots & \vdots \\
0 & 1 & \ldots & 0 \\
\vdots & \vdots & \ldots & \vdots \\
0 & 0 & \ldots & 1 \\
0 & 0 & \ldots & 1 \\
\vdots & \vdots & \ldots & \vdots \\
0 & 0 & \ldots & 1
\end{matrix}\right) \; , b=\frac{|\mathbf{X}_k^t\mathbf{X}_k|}{|\mathbf{X}_1^t\mathbf{X}_1|}=k^{-1}r^{k-1},\]}
\noindent and the adaptive alpha for linear model in accordance with (\ref{eq10}) is
\begin{equation*}
\alpha(k,r)=\dfrac{[g_{r,\alpha}(k-1)-\log(k)+(k-1)\log(r)]^{\frac{k-3}{2}}}{\left(k^{-1}r^{k-1}\right)^{\frac{r-1}{2(r-1/k)}}\left(\frac{2(r-1/k)}{r-1}\right)^{\frac{k-3}{2}}\Gamma\left(\frac{k-1}{2}\right)}C_{\alpha}.
\end{equation*}
Here, the number of replicates $r$ is the Effective Sample Size (TESS); see Remark~\ref{obs3}. A numerical check of the expression for $b$ is sketched below.
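The following small R check (with the illustrative values $k=5$, $r=10$) verifies $b=k^{-1}r^{k-1}$ directly from the design matrices.
\begin{verbatim}
# Check of b = |X_k' X_k| / |X_1' X_1| = r^(k-1)/k for the balanced
# one-way layout (k groups, r replicates each).
k <- 5; r <- 10
X1 <- matrix(1, nrow = k * r, ncol = 1)
Xk <- kronecker(diag(k), matrix(1, nrow = r, ncol = 1))  # group indicators
det(t(Xk) %*% Xk) / det(t(X1) %*% X1)   # equals r^(k-1)/k = 2000
\end{verbatim}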
We will initially use the strategy of selecting $C_{\alpha}$ by fixing the sample size for a designed experiment, as suggested in \citet{PP2014}, allowing us to compare our adaptive $\alpha$ for linear models with the simpler version suggested there. The experiments were designed using an effect size of $f=0.25$ ($f=\frac{\mu_1-\mu_2}{\sigma}$), which according to \citet{Cohen1988} represents a medium effect size. We fixed $\alpha=0.05$ and the power at $0.8$. The sample sizes obtained were $r_0 = 64, 40 \mbox{ and } 26$ for $k=2, 5 \mbox{ and }10$, respectively. The results in Table \ref{tab1} show that both corrections for $\alpha$ yield very similar results, with the significance level decreasing steadily with the number of replicates.
\begin{table}[h]
\centering
{\small \begin{tabular}{|r|rrr|rrr|}
\hline
& \multicolumn{6}{c|}{$k$} \\
\hline
&\multicolumn{3}{c|}{Adaptive $\alpha$ for linear model} &\multicolumn{3}{c|}{Adaptive $\alpha$ (PP 2014)} \\
\hline
r & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{5 }& \multicolumn{1}{c|}{10} & \multicolumn{1}{c}{2 }& \multicolumn{1}{c}{5} & \multicolumn{1}{c|}{10} \\
\hline
$50$ &$0.057$ &$0.0327$ &$3.6\times 10^{-3} $ &$ 0.058$ & $0.0333$ & $3.8 \times 10^{-3}$ \\
$100$ &$0.038$& $0.0087$ & $2.2\times 10^{-4} $&$0.038$ & $0.0093$ & $2.4\times 10^{-4}$ \\
$500$ & $0.016$ & $0.0004$ &$3.1 \times 10^{-7}$ & $0.015$ & $0.0005$ & $3.4\times 10^{-7}$ \\
$1000$ & $0.011$& $0.0001$ & $1.8\times 10^{-8}$ & $0.010$ &$0.0001$ & $2.0 \times 10^{-8}$ \\
\hline
\end{tabular} }
\caption{Adaptive $\alpha$ for linear models vs. \citet{PP2014} original adaptive $\alpha$ for the One Way Anova problem. In both cases, a calibration strategy based on a designed experiment is used. }
\label{tab1}
\end{table}
\begin{table}[h]
\centering
{\small \begin{tabular}{|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{$k$} \\
\hline
&\multicolumn{1}{c|}{Minimal sample} &\multicolumn{1}{c|}{Simple Calibration}&\multicolumn{1}{c|}{PBIC Calibration} \\
\hline
r & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{2}& \multicolumn{1}{c|}{2} \\
\hline
$4$ & $0.0523$ & $0.0360$ & $0.0283$ \\
$10$ & $0.0342$ & $0.0235$ & $0.0159$ \\
$50$ &$0.0130$ & $ 0.0090$ & $0.0061$\\
$100$ &$0.0087$& $0.0060$ & $0.0041$ \\
$500$ & $0.0035$ &$0.0024$ & $0.0017$ \\
$1000$ & $0.0024$& $0.0017$& $0.0011$ \\
\hline
\end{tabular} }
\caption{Adaptive alpha for linear models in the One Way Anova setting for the three calibration strategies described in Section 4.}
\label{tab2}
\end{table}
Table~\ref{tab2} shows how the three different calibration strategies discussed in Section 4 decrease the adaptive alpha as the effective sample size grows. It is reassuring that the different strategies yield comparable results, with the strategy based on PBIC being somewhat more drastic (in this case, but not in general; see the comparisons in \S 5.3) in its penalization for larger samples.
We now present a simulation that shows how our methodology for decreasing alpha works precisely to improve scientific inference and interpretation. Inspired by an example in \citet{SBB2001}, we perform the following experiment: we simulate $r$ data points from each of two normal distributions $N(\mu_1, \sigma)$ and $N(\mu_2,\sigma)$. We replicate this $2K$ times. For $K$ of the simulations, $\mu_1-\mu_2=0$, while for the other $K$, $\mu_1-\mu_2=f>0$. For all $2K$ replications, we test the hypotheses $H_0: \mu_1=\mu_2$ vs $H_1: \mu_1 \neq \mu_2$, and then count how many of the p-values lie between $0.05 - \varepsilon$ and $0.05$. Note that all these p-values would be deemed sufficient for rejecting $H_0$ if $\alpha=0.05$ is selected. Finally, we determine the proportion of these ``significant'' p-values obtained from samples where $H_0$ is true.
Table~\ref{tab3} presents the median percentage of these significant p-values coming from samples where $H_0$ is true for 100 replicates of the simulation scheme with $K=4000$, $f=0.25$, $\sigma=1$ and $\varepsilon=0.04$, for $r=10, 50, 100, 500$ and $1000$. The usual (flawed) interpretation is that only about $5\%$ should be generated from $H_0$. However, the reality is that many more are false positives. The proportion is not monotonic with $r$, but is always far higher than $5\%$. In the limit, for $r=1000$, almost 100\% of these significant values near $0.05$ are generated from $H_0$. This result should not be surprising, as for large sample sizes and fixed significance levels, most null hypotheses are rejected. Table~\ref{tab3} also presents the proportion of significant p-values coming from $H_0$ when the $\alpha$ level is corrected according to the method suggested in this paper, using a calibration strategy based on PBIC. With this correction, the proportion of false positives decreases steadily, providing a more reliable Type I error control.
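A condensed R sketch of this simulation, for a single value of $r$, is given below. It reproduces only the uncorrected count (the corrected count simply replaces the fixed $0.05$ threshold by the adaptive level); the random seed and other implementation details are our own choices.
\begin{verbatim}
# Proportion of "significant" p-values in (0.01, 0.05) that come from
# data generated under H0 (uncorrected analysis).
set.seed(123)
K <- 4000; r <- 100; f <- 0.25; sigma <- 1
one.p <- function(delta) {
  x <- rnorm(r, 0, sigma); y <- rnorm(r, delta, sigma)
  t.test(x, y, var.equal = TRUE)$p.value
}
p.h0 <- replicate(K, one.p(0))   # mu1 - mu2 = 0
p.h1 <- replicate(K, one.p(f))   # mu1 - mu2 = f
sig.h0 <- sum(p.h0 > 0.01 & p.h0 < 0.05)
sig.h1 <- sum(p.h1 > 0.01 & p.h1 < 0.05)
100 * sig.h0 / (sig.h0 + sig.h1) # compare with the r = 100 row of Table 3
\end{verbatim}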
\begin{table}[h]
\centering
{\small \begin{tabular}{|c|c|c|}
\hline
& \multicolumn{2}{c|}{\% of samples with $0.01<p<0.05$} \\
\hline
&\multicolumn{1}{c|}{without adjustment} &\multicolumn{1}{c|}{with PBIC calibration } \\
\hline
r & \multicolumn{1}{c|}{2 groups} & \multicolumn{1}{c|}{2 groups}\\
\hline
$10$ & $39.06\%$ & $34.18\%$ \\
$50$ &$21.43\%$ & $8.57\%$ \\
$100$ &$15.73\%$& $3.07\%$ \\
$500$ & $ 39.04\%$ &$0.22\%$ \\
$1000$ & $97.15\%$& $0.11\%$\\
\hline
\end{tabular} }
\caption{Percentage of p-values between $0.01$ and $0.05$ considered significant coming from data generated under the null hypothesis when equal number of tests with $H_0$ true and $H_0$ false are generated for different sample sizes. Uncorrected and corrected alpha levels are considered.}
\label{tab3}
\end{table}
\subsection{Linear Regression Model}
Consider a linear regression model $M_j: y_v=\beta_1+\beta_2x_{v2}+\cdots+\beta_j x_{vj}+\epsilon_v$
with $1\leq v\leq n$ and $2\leq j\leq k$. Then $$|\mathbf{X}_j^t\mathbf{X}_j|=n(n-1)^{j-1}\prod_{l=2}^{j}s_{l}^2|R_j|,$$ where $s_l^2$ is the variance of the $l$th predictor and $R_j$ is the correlation matrix of the predictors in model $M_j$, so the constant $b$ in the adaptive alpha (\ref{eq10}) becomes
\begin{equation}\label{eq11}
b=(n-1)^{j-i}\left(\prod_{l=i+1}^{j}s^2_{l}\right)|R_{j-i}-R_{ij}^tR_i^{-1}R_{ij}|,
\end{equation}
\noindent where $R_{ij}$ is the matrix of correlations between the predictors of $M_i$ and the predictors of $M_j$ that are not in $M_i$, and $R_{j-i}$ is the correlation matrix of the predictors of $M_j$ that are not in $M_i$; see Appendix 2 for more detail.
As an example, we analyze a data set taken from \citet{acuna}, which can be accessed at \url{http://academic.uprm.edu/eacuna/datos.html}. We want to predict the average mileage per gallon (denoted by \texttt{mpg}) of a set of $n=82$ vehicles using four possible predictor variables: cabin capacity in cubic feet (\texttt{vol}), engine power (\texttt{hp}), maximum speed in miles per hour (\texttt{sp}) and vehicle weight in hundreds of pounds (\texttt{wt}).
To study the effect of the variances of the predictors and their correlations on the proposed adaptive alpha, we will compare the following models,
\begin{enumerate}
\item[1.] $H_0:M_2:$(mpg=$\beta_1$+$\beta_2\text{wt}_i$+$\epsilon_i$) vs $H_1:M_3:$(mpg=$\beta_1$+$\beta_2\text{wt}_i$+$\beta_3\text{sp}_i$+$\epsilon_i$)
\item[2.] $H_0:M_2:$(mpg=$\beta_1$+$\beta_2\text{wt}_i$+$\epsilon_i$) vs $H_1:M_3:$(mpg=$\beta_1$+$\beta_2\text{wt}_i$+$\beta_3\text{hp}_i$+$\epsilon_i$)
\item[3.] $H_0:M_2:$(mpg=$\beta_1$+$\beta_2\text{wt}_i$+$\epsilon_i$) vs $H_1:M_3:$(mpg=$\beta_1$+$\beta_2\text{wt}_i$+$\beta_3\text{vol}_i$+$\epsilon_i$)
\end{enumerate}
For all these tests, (\ref{eq11}) can be rewritten as
$$b=(n-1)s_{3}^2(1-\rho_{23}^2),$$
where $s_{3}^2$ is the variance of the entering predictor in model $M_3$ and $\rho_{23}$ is the correlation between $\text{wt}$ and the new predictor in $M_3$.
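This identity can be checked numerically; in the R sketch below the two predictors are simulated and merely stand in for the real \texttt{wt} and \texttt{sp} variables of the data set.
\begin{verbatim}
# Check of b = (n-1) * s3^2 * (1 - rho23^2) against the determinant ratio.
set.seed(7)
n  <- 82
wt <- rnorm(n, 30, 8)                    # hypothetical 'wt' values
sp <- 110 + 0.9 * wt + rnorm(n, 0, 10)   # hypothetical correlated 'sp' values
X2 <- cbind(1, wt)                       # design matrix of M2
X3 <- cbind(1, wt, sp)                   # design matrix of M3
det(t(X3) %*% X3) / det(t(X2) %*% X2)    # determinant ratio b ...
(n - 1) * var(sp) * (1 - cor(wt, sp)^2)  # ... equals (n-1) s3^2 (1 - rho23^2)
\end{verbatim}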
Table \ref{tab4} shows the correction of the significance level $\alpha$ (using $0.05$ as initial value) through the proposed adaptive alpha, using the simple and PBIC calibrations.
\begin{table}
\centering
\begin{tabular}{ccrrrrrr}
\hline
Test &\small{Predictor}&\multicolumn{1}{c}{\small Var($\cdot$)} & \small{Cor(wt,$\cdot$)} & \multicolumn{1}{c}{b} & p-value & \small{$\alpha_{Simple}$} & \small{$\alpha_{PBIC}$}\\
\hline
1 & \texttt{sp} &197.1 &0.68 &8612.9& 0.0325 & 0.0004 & 0.0134\\
2 & \texttt{hp} & 3230.9 & 0.83 & 80449.5& 0.1661 & 0.0001 & 0.0046\\
3 & \texttt{vol} & 491.3 &0.38 & 33901.1& 0.6482 & 0.0002 & 0.0087\\
\hline
\end{tabular}
\caption{ Effect of the variances and correlations of the predictors on the calibration of significance level $\alpha$ in the modelling of the efficiency (mpg) for a set of cars \citep{acuna} when the simple calibration and the PBIC calibration are used.}
\label{tab4}
\end{table}
In all cases, the significance level is substantially reduced, especially when the simple calibration is used. Note that the strongest correction (the largest reduction of the $\alpha$ level) corresponds to the engine power (\texttt{hp}). This variable has both the largest variance and the highest correlation with the weight. For test 1 (including speed \texttt{sp} in the model), the use of the adaptive alpha levels will change the inference, as $p=0.0325$ is smaller than $0.05$ but larger than the calibrated $\alpha$ values. In the other two cases, the inference is not altered.
\subsection{Comparing significance level calibrations based on BIC and PBIC}
We will now present two challenging examples, and compare the behavior of the Adaptive Alpha Significance Levels based on BIC \citep[as presented in][]{PP2014} and the Adaptive Alpha for Linear Models calibrated using PBIC proposed in this paper.
\subsubsection{Findley's counterexample: adaptive alpha via BIC vs. PBIC}
Consider the following simple linear model
$$Y_i=\frac{1}{\sqrt{i}}\cdot\theta+\epsilon_i, ~~\text{where}~\epsilon_i\sim N(0,1),\quad
i=1,2,3,\dots,n,$$
and we are comparing the models $H_0:\theta=0$ and $H_1:\theta\neq 0$. This is a classical and challenging counterexample against BIC and the Principle of Parsimony. In \citet{Bayarri2019} it is shown that BIC is inconsistent for this problem while PBIC is consistent.
Here we will compare two calibration strategies for the significance level $\alpha$:
\begin{enumerate}
\item Adaptive Alpha via BIC \citep{PP2014}:
$$\alpha_n(q)=\frac{[\chi_\alpha^2(q)+q\log(n)]^{\frac{q}{2}-1}}{2^{\frac{q}{2}-1}n^{\frac{q}{2}}\Gamma\left(\frac{q}{2}\right)}\times C_\alpha$$
$C_\alpha$ based on a simple calibration.
\item Adaptive Alpha for Linear Models calibrated using PBIC:
$$\alpha_{(b,n)}(q)=\dfrac{[g_{n,\alpha}(q)+\log(b)+C]^{\frac{q}{2}-1}}{b^{\frac{n-j}{2(n-1)}}\cdot\left(\frac{2(n-1)}{n-j}\right)^{q/2-1}\Gamma\left(\frac{q}{2}\right)}\times C_{\alpha},$$
$$C_{\alpha}=\exp\left\{-\frac{n-j}{2(n-1)} \left( g_{n,\alpha}(q)+C \right)\right\}.$$
$$C=-2\log\frac{(1-e^{-v})}{\sqrt{2}v}, v=\frac{\hat{\theta}^2}{d(1+n^e)}, d=\left(\sum_{i=1}^{n}\frac{1}{i}\right)^{-1}, n^e=\sum_{i=1}^{n}\frac{1}{i}$$
\end{enumerate}
Table~\ref{tab5} shows the behavior of both strategies as $n$ grows, with $\alpha=0.05$ and $q=1$. The adaptive alpha based on PBIC corrects far more slowly, as it should, due to the decreasing information content of successive observations.
\begin{table}[h]
\centering
{\small \begin{tabular}{|c|c|c|}
\hline
n &\multicolumn{1}{c|}{Adaptive Alpha via PBIC} &\multicolumn{1}{c|}{Adaptive Alpha via BIC} \\
\hline
$10$ & 0.0457 &0.0149\\
$20$ & 0.0367 &0.0100\\
$30$ & 0.0338 &0.0079 \\
$40$ & 0.0319 &0.0070 \\
$50$ & 0.0307 & 0.0059\\
$100$ & 0.0282 & 0.0040\\
$1000$ & 0.0253 & 0.0011\\
$10000$ & 0.0248 & 0.0003\\
\hline
\end{tabular} }
\caption{Behavior of the Adaptive Alpha based on BIC (simple approximation) vs the Adaptive Alpha for Linear Models calibrated using PBIC for Findley's counterexample, as the sample size $n$ increases.}
\label{tab5}
\end{table}
\subsubsection{Adaptive alpha via PBIC for testing equality of two means with unequal variances}
Consider comparing two normal means via the test $H_0:\mu_1=\mu_2$ versus $H_1:\mu_1\neq\mu_2$, where the associated known variances, $\sigma^2_1$ and $\sigma^2_2$ are not equal. $$\mathbf{Y}=\mathbf{X}\mathbb{\mu}+\mathbb{\epsilon}=\begin{pmatrix}
1&0\\
\vdots&\vdots\\
1& 0\\
0& 1\\
\vdots&\vdots\\
0&1
\end{pmatrix}\begin{pmatrix}
\mu_1\\
\mu_2
\end{pmatrix}+\begin{pmatrix}
\epsilon_{11}\\
\vdots\\
\epsilon_{2n_2}
\end{pmatrix},$$
$$\boldsymbol{\epsilon}\sim N\left(\mathbf{0},\ \mathrm{diag}\{\underbrace{\sigma_1^2,\dots,\sigma_1^2}_{n_1},\underbrace{\sigma_2^2,\dots,\sigma_2^2}_{n_2}\}\right).$$
Defining $\alpha=(\mu_1+\mu_2)/2$ and $\beta=(\mu_1-\mu_2)/2$ places this in the linear model comparison framework,
$$\mathbf{Y}=\mathbf{B}\binom{\alpha}{\beta}+\boldsymbol{\epsilon}$$
with $$\mathbf{B}=\begin{pmatrix}
1&\frac{1}{2}\\
\vdots&\vdots\\
1& \frac{1}{2}\\
1& -\frac{1}{2}\\
\vdots&\vdots\\
1&-\frac{1}{2}
\end{pmatrix}$$
where we are comparing $M_0:\beta=0$ versus $M_1:\beta\neq 0$.\\
So for adaptive alpha via PBIC,
$$C=-2\log\frac{(1-e^{-v})}{\sqrt{2}v}$$
with $v=\frac{\hat{\beta}^2}{d(1+n^e)}$, $d=\left(\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}\right)$, $n^e=\max\left\{\frac{n_1^2}{\sigma_1^2},\frac{n_2^2}{\sigma_2^2}\right\}\left(\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}\right)$, and the adaptive alpha behaves according to Table~\ref{tab6}, where $\alpha=0.05$.
Table~\ref{tab6} shows how the adaptive alpha based on PBIC adapts faster when the sample size of the group with the smaller variance increases. By contrast, the adaptive alpha based on the BIC calibration adapts at a similar pace for both groups.
\begin{table}[h]
\centering
{\small \begin{tabular}{|c|c|c|c|}
\hline
&& Adaptive alpha via PBIC& Adaptive alpha via BIC \\
\hline
$n_1$ & $n_2$ & \multicolumn{1}{c|}{$\sigma^2_1=14$, $\sigma^2_2=140$}& $n=n_1+n_2$\\
\hline
10 & $10$ &0.0159 & 0.0099 \\
10 & $100$ &0.0111 & 0.0038 \\
10 & $500$ &0.0104 & 0.0016 \\
100 & $10$ &0.0110 & 0.0038\\
100 & $100$ &0.0042 & 0.0027\\
100 & $500$ &0.0030 & 0.0015 \\
\hline
\end{tabular} }
\caption{Adaptive alpha via PBIC and BIC for testing equality of two means when variances are unequal. }
\label{tab6}
\end{table}
\section{Final comments, questions and some answers}
\begin{itemize}
\item[1.] The adaptive $\alpha$ provides guidance for adjusting significance to the sample size. The Linear Model version incorporates not only the sample size and the difference of dimensions, but also the information provided by the predictors or the design, and particularly their correlations, correcting for co-linearity.
\item[2.] The adaptive $\alpha$ is simple to use, and gives results equivalent to those of a sensible Bayes Factor, like Bayes Factors with Intrinsic Priors, but is easier for practitioners to employ, even for those who are not trained in sophisticated Bayesian statistics. We hope that this development will give useful tools to the practice of Statistics.
\item[3.] The results presented here make use of state-of-the-art large sample approximations of Bayes Factors, like the PBIC, and can be coupled with recent sensible base thresholds like $\alpha=0.005$ \citep{Benjamin2017}.
\end{itemize}
\section*{Appendix 1 The likelihood ratio}
Define $$r(\mathbf{y}|(\mathbf{X}_i,\mathbf{X}_j))=\dfrac{f(\mathbf{y}|\mathbf{X}_i\widehat{\boldsymbol{\delta}}_i,S_i^2\mathbf{I}_n)}{f(\mathbf{y}|\mathbf{X}_j\widehat{\boldsymbol{\beta}}_j,S_j^2\mathbf{I}_n)}$$
we will perform the calculations for the hypothesis test $$H_0:\text{Model}\hspace{.1cm} M_i\hspace{0.3cm} versus \hspace{.3cm}H_1:\text{Model}\hspace{.1cm} M_j.$$
Indeed, for model $M_i$ $$L(\mathbf{y}|\mathbf{X}_i,\sigma_i^2,\boldsymbol{\delta}_i)=\frac{1}{(2\pi)^{n/2}(\sigma_i^2)^{n/2}}\exp\left\{-\frac{1}{2\sigma_i^2}(\mathbf{y}-\mathbf{X}_i\boldsymbol{\delta}_i)^t(\mathbf{y}-\mathbf{X}_i\boldsymbol{\delta}_i)\right\}.$$
Since the MLE of $\boldsymbol{\delta}_i$ is $\widehat{\boldsymbol{\delta}}_i=(\mathbf{X}_i^t\mathbf{X}_i)^{-1}\mathbf{X}_i^t\mathbf{y}$ and the MLE of $\sigma_i^2$ is $S_{i}^2=\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}{n}$, where $\mathbf{H}_i=\mathbf{X}_i(\mathbf{X}_i^t\mathbf{X}_i)^{-1}\mathbf{X}_i^t$, we obtain
$$\sup_{\Omega_0}L(\mathbf{y}|\mathbf{X}_i,\sigma_i^2,\boldsymbol{\delta}_i)=\frac{1}{(2\pi)^{n/2}(S_{i}^2)^{n/2}}\exp\left\{-\frac{n}{2}\right\}.$$
For model $M_j$ $$L(\mathbf{y}|\mathbf{X}_j,\sigma_j^2,\boldsymbol{\beta}_j)=\frac{1}{(2\pi)^{n/2}(\sigma_j^2)^{n/2}}\exp\left\{-\frac{1}{2\sigma_j^2}(\mathbf{y}-\mathbf{X}_j\boldsymbol{\beta}_j)^t(\mathbf{y}-\mathbf{X}_j\boldsymbol{\beta}_j)\right\}.$$
Since the MLE of $\boldsymbol{\beta}_j$ is $\widehat{\boldsymbol{\beta}}_j=(\mathbf{X}_j^t\mathbf{X}_j)^{-1}\mathbf{X}_j^t\mathbf{y}$ and the MLE of $\sigma_j^2$ is $S_{j}^2=\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{n}$, we obtain
$$\sup_{\Omega}L(\mathbf{y}|\mathbf{X}_j,\sigma_j^2,\boldsymbol{\beta}_j)=\frac{1}{(2\pi)^{n/2}(S_{j}^2)^{n/2}}\exp\left\{-\frac{n}{2}\right\}.$$
Thus the likelihood ratio is $$r(\mathbf{y}|(\mathbf{X}_i,\mathbf{X}_j))=\dfrac{\sup_{\Omega_0}L(\mathbf{y}|\mathbf{X}_i,\sigma_i^2,\boldsymbol{\delta}_i)}{\sup_{\Omega}L(\mathbf{y}|\mathbf{X}_j,\sigma_j^2,\boldsymbol{\beta}_j)}=\left(\frac{S_{j}^2}{S_{i}^2}\right)^{\frac{n}{2}}=\left(\dfrac{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_j)\mathbf{y}}{\mathbf{y}^t(\mathbf{I}-\mathbf{H}_i)\mathbf{y}}\right)^{\frac{n}{2}}.$$
\section*{Appendix 2 An expression for b in (\ref{eq10})}
Consider the linear regression model $M_j: y_v=\beta_1+\beta_2x_{v2}+\cdots+\beta_j x_{vj}+\epsilon_v$
with $1\leq v\leq n$ and $2\leq j\leq k$; then
$$\mathbf{X}_j=\begin{bmatrix}
1& x_{12}-\bar{x}_2&\cdots&x_{1j}-\bar{x}_j\\
1& x_{22}-\bar{x}_2&\cdots&x_{2j}-\bar{x}_j\\
\vdots&\vdots&\vdots&\vdots\\
1&x_{n2}-\bar{x}_2&\cdots&x_{nj}-\bar{x}_j\\
\end{bmatrix}$$
and
$$\mathbf{X}_j^t\mathbf{X}_j=\begin{bmatrix}
n& 0&0&\cdots&0\\
0& (n-1)s_2^2&(n-1)s_2s_3\rho_{23}&\cdots&(n-1)s_2s_j\rho_{2j}\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
0&(n-1)s_2s_j\rho_{2j}&(n-1)s_3s_j\rho_{3j}&\cdots&(n-1)s_j^2\\
\end{bmatrix}$$
then
$$|\mathbf{X}_j^t\mathbf{X}_j|=n(n-1)^{j-1}\begin{vmatrix}
s_2^2& s_2s_3\rho_{23}&\cdots&s_2s_j\rho_{2j}\\
s_2s_3\rho_{23}& s_3^2&\cdots&s_3s_j\rho_{3j}\\
\vdots&\vdots&\vdots&\vdots\\
s_2s_j\rho_{2j}&s_3s_j\rho_{3j}&\cdots&s_j^2\\
\end{vmatrix},$$
note that row $l$ and column $l$ each carry a factor $s_l$; using properties of determinants,
$$|\mathbf{X}_j^t\mathbf{X}_j|=n(n-1)^{j-1}s_2^2s_3^2\cdots s_j^2\begin{vmatrix}
1& \rho_{23}&\cdots&\rho_{2j}\\
\rho_{23}& 1&\cdots&\rho_{3j}\\
\vdots&\vdots&\vdots&\vdots\\
\rho_{2j}&\rho_{3j}&\cdots&1\\
\end{vmatrix}=n(n-1)^{j-1}\prod_{l=2}^{j}s_{l}^2|R_j|$$
on the other hand,
$$R_j=\begin{bmatrix}
1& \rho_{23}&\cdots&\rho_{2j}\\
\rho_{23}& 1&\cdots&\rho_{3j}\\
\vdots&\vdots&\vdots&\vdots\\
\rho_{2j}&\rho_{3j}&\cdots&1\\
\end{bmatrix}=\begin{bmatrix}
R_i& R_{ij}\\
R_{ij}^t&R_{j-i}\\
\end{bmatrix}$$
where
$$R_{ij}=\begin{bmatrix}
\rho_{2,i+1}& \rho_{2,i+2}&\cdots&\rho_{2,j}\\
\rho_{3,i+1}& \rho_{3,i+2}&\cdots&\rho_{3,j}\\
\vdots&\vdots&\vdots&\vdots\\
\rho_{i,i+1}&\rho_{i,i+2}&\cdots&\rho_{i,j}\\
\end{bmatrix}~~\text{and}~~R_{j-i}=\begin{bmatrix}
1& \rho_{i+1,i+2}&\cdots&\rho_{i+1,j}\\
\rho_{i+1,i+2}& 1&\cdots&\rho_{i+2,j}\\
\vdots&\vdots&\vdots&\vdots\\
\rho_{i+1,j}&\rho_{i+2,j}&\cdots&1\\
\end{bmatrix}.$$
Now since $\mathbf{X}_j$ is a full rank matrix, it can be seen that $$|R_j|=|R_i||R_{j-i}-R_{ij}^tR_i^{-1}R_{ij}|$$
thus $$ b=\frac{|\mathbf{X}_j^t\mathbf{X}_j|}{|\mathbf{X}_i^t\mathbf{X}_i|}=(n-1)^{j-i}\left(\prod_{l=i+1}^{j}s^2_{l}\right)|R_{j-i}-R_{ij}^tR_i^{-1}R_{ij}|.$$
\end{document} |
\begin{document}
\title{Thermal effect on mixed state geometric phases for neutrino propagation
in a magnetic field}
\author{Da-Bao Yang}
\email{[email protected]}
\affiliation{Department of fundamental physics, School of Science, Tianjin Polytechnic
University, Tianjin 300387, People's Republic of China}
\author{Ji-Xuan Hou}
\affiliation{Department of Physics, Southeast University, Nanjing 211189, People's
Republic of China}
\author{Ku Meng}
\affiliation{Department of fundamental physics, School of Science, Tianjin Polytechnic
University, Tianjin 300387, People's Republic of China}
\date{\today}
\begin{abstract}
In astrophysical environments, neutrinos may propagate over a long
distance in a magnetic field. In the presence of a rotating magnetic
field, the neutrino spin can flip from a left-handed neutrino to a
right-handed neutrino. Smirnov demonstrated that the pure state
geometric phase due to the neutrino spin precession may cause resonant
spin conversion inside the Sun. However, in general, the neutrinos may
be in a thermal ensemble of states. In this article, the corresponding
mixed state geometric phases will be formulated, including the
off-diagonal case and the diagonal one. Their specific dependence on
temperature will be analyzed.
\end{abstract}
\pacs{03.65.Vf, 75.10.Pq, 31.15.ac}
\maketitle
\section{Introduction}
\label{sec:introduction}
The geometric phase was discovered by Berry \cite{berry1984quantal}
in the setting of adiabatic evolution. It was then generalized
by Wilczek and Zee \cite{Wilczek1984appearance}, by Aharonov and Anandan
\cite{aharonov1987phase,anandon1988nonadiabatic}, and by Samuel and
Bhandari \cite{samuel1988general} in the context of
pure states. Moreover, it was also extended to mixed state counterparts.
An operationally well-defined notion was proposed by Sj\"oqvist
\emph{et al.} \cite{sjoqvist2000mixed} based on interferometry.
Subsequently, it was generalized to the degenerate case by Singh \emph{et al.}
\cite{singh2003geometric} and to nonunitary evolution by Tong
\emph{et al.} \cite{tong2004kinematic} by use of a kinematic approach. In addition,
when the final state is orthogonal to the initial state, the above
geometric phase is meaningless, so a phase complementary to the usual
geometric phases was put forward by Manini and Pistolesi \cite{manini2000off}.
The new phase is called the off-diagonal geometric phase, which was
generalized to the non-Abelian case by Kult \emph{et al.} \cite{kult2007nonabelian}.
It was also extended to mixed states undergoing unitary evolution by
Filipp and Sj\"oqvist \cite{filipp2003PRL,filipp2003offdiagonalPRA}.
A further extension to the non-degenerate case was made by Tong \emph{et al.}
\cite{tong2005offdiagonal} by a kinematic approach. Finally, there
are excellent review articles \cite{xiao2010Berry} and monographs
\cite{shapere1989book,bohm2003book,chru2004book} discussing its
influence and applications in physics and the other natural sciences.
As is well known, the neutrino plays an important role in particle physics
and astronomy. Smirnov investigated the effect of resonant spin conversion
of solar neutrinos induced by the geometric phase \cite{smirnov1991solarneutrino}.
Joshi and Jain worked out the geometric phase of a neutrino
propagating in a rotating transverse magnetic field \cite{joshi2015neutrino}.
However, their discussions are confined to the pure state case. In
this article, we will discuss the mixed state geometric phases of the
neutrino, ranging from the off-diagonal phase to the diagonal one.
This paper is organised as follows. In the next section, the off-diagonal
geometric phase for mixed states is reviewed, as well as the usual
mixed state geometric phase. Furthermore, the equation governing
the propagation of the two helicity components of the neutrino is recalled.
In Sec. III, both the off-diagonal and diagonal mixed state geometric phases
for a neutrino in a thermal state are calculated. Finally,
a conclusion is drawn in the last section.
\section{Review of off-diagonal phase}
\label{sec:reviews}
Consider a non-degenerate density matrix of the form
\begin{equation}
\rho_{1}=\lambda_{1}|\psi_{1}\rangle\langle\psi_{1}|+\cdots+\lambda_{N}|\psi_{N}\rangle\langle\psi_{N}|.\label{eq:DenstiyMatrix1}
\end{equation}
Moreover, a density operator that cannot interfere with $\rho_{1}$
is introduced \cite{filipp2003offdiagonalPRA}, which is
\[
\rho_{n}=W^{n-1}\rho_{1}(W^{\dagger})^{n-1},\quad n=1,\ldots,N,
\]
where
\[
W=|\psi_{1}\rangle\langle\psi_{N}|+|\psi_{N}\rangle\langle\psi_{N-1}|+\cdots+|\psi_{2}\rangle\langle\psi_{1}|.
\]
In unitary evolution, besides the usual mixed state geometric phase,
there exists a so-called mixed state off-diagonal phase, which reads
\cite{filipp2003offdiagonalPRA}
\begin{equation}
\gamma_{\rho_{j_{1}}...\rho_{j_{l}}}^{(l)}=\Phi[Tr(\prod_{a=1}^{l}U^{\parallel}(\tau)\sqrt[l]{\rho_{j_{a}}})],\label{eq:OffdiagonlaGeometricPhase}
\end{equation}
where $\Phi[z]\equiv z/|z|$ for nonzero complex number $z$ and \cite{tong2005offdiagonal}
\begin{equation}
U^{\parallel}(t)=U(t)\sum_{k=1}^{N}e^{-i\delta_{k}}|\psi_{k}\rangle\langle\psi_{k}|,\label{eq:ParallelUnitaryEvolution}
\end{equation}
in which
\begin{equation}
\delta_{k}=-i\int_{0}^{t}\langle\psi_{k}|U^{\dagger}(t^{\prime})\dot{U}(t^{\prime})|\psi_{k}\rangle dt^{\prime}\label{eq:DynamicalPhase}
\end{equation}
and $U(t)$ is the time evolution operator of this system. Moreover
$U^{\parallel}$ satisfies the parallel transport condition, which
is
\[
\langle\psi_{k}|U^{\parallel\dagger}(t)\dot{U}^{\parallel}(t)|\psi_{k}\rangle=0,\ k=1,\cdots,N.
\]
In addition, the usual mixed state geometric phase factor \cite{tong2004kinematic}
takes the following form
\begin{equation}
\gamma=\Phi\left[\sum_{k=1}^{N}\lambda_{k}\langle\psi_{k}|U(\tau)|\psi_{k}\rangle e^{-i\delta_{k}}\right]\label{eq:DiagonalGeometricPhase}
\end{equation}
The two helicity components $\left(\begin{array}{cc}
\nu_{R} & \nu_{L}\end{array}\right)^{T}$ of a neutrino propagating in a magnetic field obey the following equation \cite{smirnov1991solarneutrino}
\begin{equation}
i\frac{d}{dt}\left(\begin{array}{c}
\nu_{R}\\
\nu_{L}
\end{array}\right)=\left(\begin{array}{cc}
\frac{V}{2} & \mu_{\nu}Be^{-i\omega t}\\
\mu_{\nu}Be^{i\omega t} & -\frac{V}{2}
\end{array}\right)\left(\begin{array}{c}
\nu_{R}\\
\nu_{L}
\end{array}\right),\label{eq:Schrodinger}
\end{equation}
where $T$ denotes matrix transposition, $B_{x}+iB_{y}=Be^{i\omega t}$ describes the rotating transverse magnetic field,
$\mu_{\nu}$ represents the magnetic moment of a massive Dirac neutrino
and $V$ is a term due to the neutrino mass as well as the interaction with
matter. The instantaneous eigenvalues and eigenvectors corresponding
to the Hamiltonian take the following form \cite{joshi2015neutrino}
\[
E_{1}=+\sqrt{\left(\frac{V}{2}\right)^{2}+(\mu_{\nu}B)^{2}}
\]
\begin{equation}
|\psi_{1}\rangle=\frac{1}{N}\left(\begin{array}{c}
\mu_{\nu}B\\
-e^{i\omega t}\left(\frac{V}{2}-E_{1}\right)
\end{array}\right)\label{eq:EigenVector1}
\end{equation}
and
\[
E_{2}=-\sqrt{\left(\frac{V}{2}\right)^{2}+(\mu_{\nu}B)^{2}}
\]
\begin{equation}
|\psi_{2}\rangle=\frac{1}{N}\left(\begin{array}{c}
e^{-i\omega t}\left(\frac{V}{2}-E_{1}\right)\\
\mu_{\nu}B
\end{array}\right),\label{eq:EigenVector2}
\end{equation}
where the normalized factor
\[
N=\sqrt{\left(\frac{V}{2}-E_{1}\right)^{2}+\left(\mu_{\nu}B\right)^{2}}.
\]
If this system is in a thermal state, the density operator can be written
as
\begin{equation}
\rho=\lambda_{1}|\psi_{1}\rangle\langle\psi_{1}|+\lambda_{2}|\psi_{2}\rangle\langle\psi_{2}|\label{eq:DensityMatrix}
\end{equation}
where
\[
\lambda_{1}=\frac{e^{-\beta E_{1}}}{e^{-\beta E_{1}}+e^{-\beta E_{2}}}
\]
and
\[
\lambda_{2}=\frac{e^{-\beta E_{2}}}{e^{-\beta E_{1}}+e^{-\beta E_{2}}}.
\]
In addition, $\beta=1/(kT),$ where $k$ is the Boltzmann constant
and $T$ represents the temperature. In the next section, the mixed
state geometric phase for both off-diagonal one and diagonal one will
be calculated.
\section{Mixed state geometric phase}
\label{sec:Nonadiabatic}
The differential equation Eq. \eqref{eq:Schrodinger} can be exactly
solved by the following transformation
\begin{equation}
\left(\begin{array}{c}
\nu_{R}\\
\nu_{L}
\end{array}\right)=e^{-i\sigma_{z}\frac{1}{2}\omega t}\left(\begin{array}{c}
a\\
b
\end{array}\right),\label{eq:TransformedState}
\end{equation}
where $\sigma_{z}$ is a Pauli matrix along $z$ direction whose explicit
form is
\[
\sigma_{z}=\left(\begin{array}{cc}
1 & 0\\
0 & -1
\end{array}\right).
\]
By substituting Eq. \eqref{eq:TransformedState} into Eq. \eqref{eq:Schrodinger},
one can obtain
\begin{equation}
i\frac{d}{dt}\left(\begin{array}{c}
a\\
b
\end{array}\right)=\tilde{H}\left(\begin{array}{c}
a\\
b
\end{array}\right),\label{eq:TransformedSchodinger}
\end{equation}
where
\[
\tilde{H}=\mu_{\nu}B\sigma_{x}+\frac{1}{2}(V-\omega)\sigma_{z}.
\]
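For completeness, one way to see this explicitly is to write the Hamiltonian of Eq. \eqref{eq:Schrodinger} as $H(t)=\frac{V}{2}\sigma_{z}+\mu_{\nu}B\left(\cos\omega t\,\sigma_{x}+\sin\omega t\,\sigma_{y}\right)$ and use the standard rotation identities $e^{i\sigma_{z}\theta/2}\sigma_{x}e^{-i\sigma_{z}\theta/2}=\cos\theta\,\sigma_{x}-\sin\theta\,\sigma_{y}$ and $e^{i\sigma_{z}\theta/2}\sigma_{y}e^{-i\sigma_{z}\theta/2}=\cos\theta\,\sigma_{y}+\sin\theta\,\sigma_{x}$ with $\theta=\omega t$. The transformation \eqref{eq:TransformedState} then yields
\[
\tilde{H}=e^{i\sigma_{z}\frac{1}{2}\omega t}H(t)e^{-i\sigma_{z}\frac{1}{2}\omega t}-\frac{\omega}{2}\sigma_{z}
=\mu_{\nu}B\sigma_{x}+\frac{V}{2}\sigma_{z}-\frac{\omega}{2}\sigma_{z},
\]
so that the rotation about the $z$ axis removes the time dependence of the transverse field.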
Furthermore, it can be written in this form
\begin{equation}
\tilde{H}=\frac{1}{2}\Omega\left(\begin{array}{ccc}
\frac{2\mu_{\nu}B}{\Omega} & 0 & \frac{V-\omega}{\Omega}\end{array}\right)\centerdot\left(\begin{array}{ccc}
\sigma_{x} & \sigma_{y} & \sigma_{z}\end{array}\right),\label{eq:TransformedHamiltonian}
\end{equation}
where $\Omega=\sqrt{\left(2\mu_{\nu}B\right)^{2}+\left(V-\omega\right)^{2}}$.
Because $\tilde{H}$ is independent of time, Eq. \eqref{eq:TransformedSchodinger}
can be solved exactly, and its time evolution operator takes the form
\[
\tilde{U}=e^{-i\tilde{H}t}.
\]
Combining this with Eq. \eqref{eq:TransformedState}, the time evolution
operator for Eq. \eqref{eq:Schrodinger} is
\begin{equation}
U=e^{-i\tilde{H}t}e^{i\sigma_{z}\frac{1}{2}\omega t}.\label{eq:UnitaryEvolution}
\end{equation}
By substituting Eq. \eqref{eq:TransformedHamiltonian} into Eq. \eqref{eq:UnitaryEvolution},
the above operator can be written in an explicit form, which is
\[
U=\left(\begin{array}{cc}
\cos\frac{\Omega}{2}t-i\frac{V-\omega}{\Omega}\sin\frac{\Omega}{2}t & -i\frac{2\mu_{\nu}B}{\Omega}\sin\frac{\Omega}{2}t\\
-i\frac{2\mu_{\nu}B}{\Omega}\sin\frac{\Omega}{2}t & \cos\frac{\Omega}{2}t+i\frac{V-\omega}{\Omega}\sin\frac{\Omega}{2}t
\end{array}\right)\left(\begin{array}{cc}
e^{i\frac{\omega t}{2}} & 0\\
0 & e^{-i\frac{\omega t}{2}}
\end{array}\right)
\]
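As a quick numerical cross-check that this explicit matrix coincides with the operator product of Eq. \eqref{eq:UnitaryEvolution}, one may evaluate both forms for arbitrary parameter values. The following sketch (the parameter values are purely illustrative and are not taken from the text) uses Python:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustrative parameters in natural units (hbar = 1); arbitrary choices.
muB, V, omega, t = 0.7, 1.3, 0.4, 2.1   # mu_nu*B, V, rotation frequency, time

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega = np.sqrt((2 * muB)**2 + (V - omega)**2)
H_tilde = muB * sx + 0.5 * (V - omega) * sz

# Operator product of Eq. (UnitaryEvolution)
U_num = expm(-1j * H_tilde * t) @ expm(1j * sz * omega * t / 2)

# Explicit matrix quoted above
c, s = np.cos(Omega * t / 2), np.sin(Omega * t / 2)
M = np.array([[c - 1j * (V - omega) / Omega * s, -1j * 2 * muB / Omega * s],
              [-1j * 2 * muB / Omega * s, c + 1j * (V - omega) / Omega * s]])
U_closed = M @ np.diag([np.exp(1j * omega * t / 2), np.exp(-1j * omega * t / 2)])

print(np.allclose(U_num, U_closed))      # expected: True
\end{verbatim}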
In order to calculate off-diagonal phase \eqref{eq:OffdiagonlaGeometricPhase},
by use of Eq. \eqref{eq:ParallelUnitaryEvolution}, we can work out
\[
\begin{array}{ccc}
U_{11}^{\parallel} & \equiv & \langle\psi_{1}|U(t)\left(e^{-i\delta_{1}}|\psi_{1}\rangle\langle\psi_{1}|+e^{-i\delta_{2}}|\psi_{2}\rangle\langle\psi_{2}|\right)|\psi_{1}\rangle\\
& = & U_{11}e^{-i\delta_{1}},
\end{array}
\]
where $U_{11}=\langle\psi_{1}|U(t)|\psi_{1}\rangle$. In order to
simplify the result, let us consider the simpler case $t=\tau=2\pi/\Omega$, for which
\begin{equation}
U_{11}=-\frac{1}{N^{2}}\left[\mu_{\nu}^{2}B^{2}e^{i\frac{\omega\tau}{2}}+\left(\frac{V}{2}-E_{1}\right)^{2}e^{-i\frac{\omega\tau}{2}}\right]=U_{22}^{*},\label{eq:UDiagonal}
\end{equation}
where $*$ denotes complex conjugation. By a similar calculation,
one obtains
\begin{equation}
U_{12}=\frac{2}{N^{2}}\mu_{\nu}B\left(\frac{V}{2}-E_{1}\right)\sin\left(\frac{1}{2}\omega\tau\right)e^{-i(\omega\tau+\frac{\pi}{2})}=-U_{21}^{*}\label{eq:UOffDiagonal}
\end{equation}
Furthermore, $\delta_{1}$ can be calculated explicitly by substituting
Eq. \eqref{eq:UnitaryEvolution} and Eq. \eqref{eq:EigenVector1} into
Eq. \eqref{eq:DynamicalPhase}, which gives
\begin{equation}
\delta_{1}=\frac{1}{N^{2}}\left[2\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-E_{1}\right)+\left(\frac{V}{2}-E_{1}\right)^{2}\left(\frac{V}{2}-\omega\right)-\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-\omega\right)\right]\tau.\label{eq:DynamicalPhase1}
\end{equation}
By similar calculation, one can get
\begin{equation}
\delta_{2}=-\delta_{1}.\label{eq:DynamicalPhase2}
\end{equation}
Hence Eq. \eqref{eq:ParallelUnitaryEvolution} can be written explicitly
as
\begin{equation}
\left(\begin{array}{cc}
U_{11}^{\parallel} & U_{12}^{\parallel}\\
U_{21}^{\parallel} & U_{22}^{\parallel}
\end{array}\right)=\left(\begin{array}{cc}
U_{11} & U_{12}\\
U_{21} & U_{22}
\end{array}\right)\left(\begin{array}{cc}
e^{-i\delta_{1}} & 0\\
0 & e^{-i\delta_{2}}
\end{array}\right).\label{eq:RelationsUnitaryEvolutions}
\end{equation}
Now, let us calculate the mixed state off-diagonal phase
\begin{equation}
\gamma_{\rho_{1}\rho_{2}}^{(2)}=\Phi\left[Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right)\right],\label{eq:OffDiagonalGeometricPhaseForNeutrino}
\end{equation}
where $\rho_{1}=\lambda_{1}|\psi_{1}\rangle\langle\psi_{1}|+\lambda_{2}|\psi_{2}\rangle\langle\psi_{2}|$
and $\rho_{2}=\lambda_{1}|\psi_{2}\rangle\langle\psi_{2}|+\lambda_{2}|\psi_{1}\rangle\langle\psi_{1}|$.
Under the basis of $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$,
\begin{equation}
\begin{array}{ccc}
Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right) & = & \sum_{b=1}^{2}\langle\psi_{b}|\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}|\psi_{b}\rangle\\
& = & \sqrt{\lambda_{1}\lambda_{2}}\left[\left(U_{11}^{\parallel}\right)^{2}+\left(U_{22}^{\parallel}\right)^{2}\right]+U_{12}^{\parallel}U_{21}^{\parallel}.
\end{array}\label{eq:TraceOffDiagonal}
\end{equation}
By substituting Eq. \eqref{eq:RelationsUnitaryEvolutions} into Eq.
\eqref{eq:TraceOffDiagonal}, we can obtain a simpler result
\[
Tr\left(\prod_{a=1}^{2}U^{\parallel}(\tau)\sqrt{\rho_{a}}\right)=\sqrt{\lambda_{1}\lambda_{2}}\left[\left(U_{11}e^{-i\delta_{1}}\right)^{2}+\left(U_{22}e^{-i\delta_{2}}\right)^{2}\right]+U_{12}U_{21}e^{-i\left(\delta_{1}+\delta_{2}\right)}
\]
By substituting Eq. \eqref{eq:UDiagonal} and Eq. \eqref{eq:UOffDiagonal}
into the above equation, off-diagonal geometric phase \eqref{eq:OffDiagonalGeometricPhaseForNeutrino}
can be explicitly calculated,
\begin{equation}
\begin{array}{ccc}
\gamma_{\rho_{1}\rho_{2}}^{(2)} & = & \Phi\{\left(\frac{V}{2}-E_{1}\right)^{2}\mu_{\nu}^{2}B^{2}\left(\cos\omega\tau-1\right)+\sqrt{\lambda_{1}\lambda_{2}}\times\\
& & [\left(\frac{V}{2}-E_{1}\right)^{4}\cos\left(\omega\tau+2\delta_{1}\right)+\mu_{\nu}^{4}B^{4}\cos\left(\omega\tau+2\delta_{1}\right)\\
& & +2\mu_{\nu}^{2}B^{2}\left(\frac{V}{2}-E_{1}\right)^{2}\cos2\delta_{1}]\}
\end{array}\label{eq:OffDiagonalPhaseFinalResult}
\end{equation}
Hence, the corresponding phase is either $\pi$ or $0$, depending
on the temperature and the magnetic field. In this sense, the off-diagonal
phase is insensitive to temperature.
By substituting Eq. \eqref{eq:UDiagonal}, Eq. \eqref{eq:DynamicalPhase1}
and Eq. \eqref{eq:DynamicalPhase2} into Eq. \eqref{eq:DiagonalGeometricPhase},
the diagonal geometric phase for the mixed state reads
\begin{equation}
\begin{array}{ccc}
\gamma & = & \Phi\{\left[\lambda_{1}e^{i(\frac{\omega\tau}{2}-\delta_{1})}+\lambda_{2}e^{-i(\frac{\omega\tau}{2}-\delta_{1})}\right]\mu_{\nu}^{2}B^{2}+\\
& & \left[\lambda_{1}e^{-i(\frac{\omega\tau}{2}+\delta_{1})}+\lambda_{2}e^{i(\frac{\omega\tau}{2}+\delta_{1})}\right]\left(\frac{V}{2}-E_{1}\right)^{2}\}.
\end{array}\label{eq:DiagonalPhaseFinalResult}
\end{equation}
From the above result, we can draw the conclusion that if $\lambda_{1}=\lambda_{2}$,
in other words $T\rightarrow\infty$, the corresponding phase can only be
$\pi$ or $0$. In other circumstances, it may vary continuously in
an interval. Contrary to the off-diagonal one, the diagonal phase is thus
more sensitive to temperature.
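To illustrate this temperature dependence concretely, the short numerical sketch below evaluates the bracket of Eq. \eqref{eq:DiagonalPhaseFinalResult}, with $\delta_{1}$ taken from Eq. \eqref{eq:DynamicalPhase1}, over a range of temperatures. The parameter values (natural units with $\hbar=k=1$) and the helper name \texttt{diagonal\_phase} are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np

# Illustrative parameters in natural units (hbar = k = 1); arbitrary choices.
muB, V, omega = 0.6, 1.0, 0.3                  # mu_nu*B, V, field rotation frequency

E1 = np.sqrt((V / 2)**2 + muB**2)              # upper eigenvalue, E2 = -E1
N2 = (V / 2 - E1)**2 + muB**2                  # normalization factor N^2
Omega = np.sqrt((2 * muB)**2 + (V - omega)**2)
tau = 2 * np.pi / Omega

# Dynamical phase delta_1 of Eq. (DynamicalPhase1); delta_2 = -delta_1.
delta1 = (2 * muB**2 * (V / 2 - E1)
          + (V / 2 - E1)**2 * (V / 2 - omega)
          - muB**2 * (V / 2 - omega)) * tau / N2

def diagonal_phase(T):
    """Argument of the bracket in Eq. (DiagonalPhaseFinalResult) at temperature T."""
    w1, w2 = np.exp(-E1 / T), np.exp(E1 / T)
    lam1, lam2 = w1 / (w1 + w2), w2 / (w1 + w2)
    z = ((lam1 * np.exp(1j * (omega * tau / 2 - delta1))
          + lam2 * np.exp(-1j * (omega * tau / 2 - delta1))) * muB**2
         + (lam1 * np.exp(-1j * (omega * tau / 2 + delta1))
            + lam2 * np.exp(1j * (omega * tau / 2 + delta1))) * (V / 2 - E1)**2)
    return np.angle(z)

for T in (0.1, 0.5, 1.0, 5.0, 50.0):
    print(f"T = {T:5.1f}   phase = {diagonal_phase(T):+.4f} rad")
\end{verbatim}
Consistently with the discussion above, for $\lambda_{1}=\lambda_{2}$ (large $T$) the phase locks to $0$ or $\pi$, while at finite temperature it varies continuously.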
\section{Conclusions and Acknowledgements }
\label{sec:discussion}
In this article, the time evolution operator of the neutrino spin in the
presence of a uniformly rotating magnetic field is obtained. Under this
time evolution operator, a thermal state of the neutrino evolves.
Then there exist off-diagonal geometric phases for the mixed state,
as well as diagonal ones. They have been calculated respectively,
and analytic forms are obtained. In addition, the conclusion is drawn
that the diagonal phase is more sensitive to temperature than the
off-diagonal one.
D.B.Y. is supported by the NSF (Natural Science Foundation) of China
under Grant No. 11447196. J.X.H. is supported by the NSF of China
under Grant 11304037, the NSF of Jiangsu Province, China under Grant
BK20130604, as well as the Ph.D. Programs Foundation of Ministry of
Education of China under Grant 20130092120041. And K. M. is supported
by NSF of China under grant No.11447153.
\end{document}
\begin{document}
\date{\today}
\flushbottom
\title{Cavity-based architecture to preserve quantum coherence and entanglement}
\thispagestyle{empty}
\section*{Introduction}
Entangled states are not only an existing natural form of compound systems in the quantum world, but also a basic
resource for quantum information technology \cite{nielsenchuang,benenti,amico2008RMP}. Due to the unavoidable coupling of a quantum system to the surrounding environment, quantum entanglement is subject to decay and can even vanish abruptly, a phenomenon known as early-stage disentanglement or entanglement sudden death \cite{yueberlyPRL2004,yueberlyPRL2006,doddPRA,santosPRA,yu2009Science,almeida2007Science,kimble2007PRL,eberlyScience2007,sallesPRA,aolitareview}. Harnessing entanglement dynamics and preventing entanglement from disappearing until the time a quantum task can be completed is thus a key challenge towards the feasibility of reliable quantum processing \cite{obrienreview,norireview}.
So far, a great deal of research has been devoted to entanglement manipulation and protection. A pure maximally entangled state can be obtained from decohered (partially entangled mixed) states \cite{bennett1,bennett2,panNature,kwiatNature,dongNatPhys} provided that a large number of identically decohered states is available, which however will not work if the amount of entanglement in these states is small. In situations where several particles are coupled to a common environment and the governing Hamiltonian is highly symmetric, there may appear a decoherence-free subspace that does not evolve in time \cite{zanardiPRL1997,lidarPRL,kwiatScience}: however, in this decoherence-free subspace only a certain kind of entangled state can be decoupled from the influence of the environment \cite{Zeno1,Zeno2}.
The quantum Zeno effect \cite{Zeno3} can also be employed to manipulate the decoherence process but, to prevent considerable degradation
of entanglement, special measurements should be performed very frequently at equal time intervals \cite{Zeno1,Zeno2}.
By encoding each physical qubit of a many-qubit system onto a logical one comprising several physical qubits \cite{shorPRA,steanePRL,steaneRoySoc,calderbank,sainzPRA}, an appropriate reversal procedure can be applied to correct the error induced by decoherence after a multi-qubit measurement that learns what error possibly occurred. Yet, as has been shown \cite{sainzPRA}, in some cases this method can indeed delay entanglement degradation but in other cases it leads to sudden disentanglement for states that otherwise disentangle only asymptotically. The possibility to preserve entanglement via dynamical decoupling pulse sequences has been also theoretically investigated recently for finite-dimensional or harmonic quantum environments \cite{muhktar2010PRA1,muhktar2010PRA2,wang2011PRA,pan2011JPB} and for solid state quantum systems suffering random telegraph or $1/f$ noise \cite{lofrancoPRB,lofrancospinecho}, but these procedures can be demanding from a practical point of view.
In general, environments with memory (so-called non-Markovian) suitably structured constitute a useful tool for protecting quantum superpositions and therefore the entanglement of composite systems \cite{yu2009Science,lofrancoreview,non-Mar2,non-Mar3}. It is nowadays well-known that independent qubits locally interacting with their non-Markovian environments can exhibit revivals of entanglement, both spontaneously during the dynamics \cite{lofrancoreview,bellomo2007PRL,bellomo2008PRA,lofranco2012PRA,LoFrancoNatCom} and on-demand by local operations \cite{darrigo2012AOP,orieux2015}. These revivals, albeit prolonging the utilization time of entanglement, however eventually decay.
In several situations, the energy dissipations of individual subsystems of a composite system are responsible for disentanglement. Therefore, methods that can trap system excited-state population would be effective for entanglement preservation.
A stationary entanglement of two independent atoms can be in principle achieved in photonic crystals or photonic-band-gap materials \cite{bellomo2008trapping,bellomo2010PhysScrManiscalco} if they are structured so as to inhibit spontaneous emission of individual atoms.
This spontaneous emission suppression induced by a photonic crystal has been so far verified experimentally for a single quantum dot \cite{expPBG} and its practical utilization for a multi-qubit assembly appears far from being reached.
Quantum interference can also be exploited to quench spontaneous emission in atomic systems \cite{interf1,interf2} and hence used to protect two-atom entanglement provided that three levels of the atoms can be used \cite{interf3}.
Since the energy dissipations originate from excited state component of an entangled state,
a reduction of the weight of excited-state by prior weak measurement on the system before interacting with the environment followed by a reversal measurement after the time-evolution proves to be an efficient strategy to enhance the entanglement \cite{Kim,Man1,Man2}. However, the success of this measurement-based strategy is always conditional (probability less than one) \cite{Kim,Man1,Man2}.
It was shown that steady-state entanglement can be generated if two qubits share a common environment \cite{pianiPRL,Zeno1}, interact with each other \cite{matteoEPJD} and are far from thermal equilibrium \cite{brunnerarxiv,plenio2002PRL,hartmannNJP,brunoNJP,brunoEPL}.
It has been also demonstrated that non-Markovianity may support the formation of stationary entanglement in a non-dissipative pure dephasing environment provided that the subsystems are mutually coupled \cite{non-Mar1}.
Separated, independent two-level quantum systems at thermal equilibrium, locally interacting with their own environments, are however the preferable elements of a quantum hardware in order to accomplish the individual control required for quantum information processing \cite{obrienreview,norireview}. Therefore, proposals of strategies to strongly shield quantum resources from decay are essential within such a configuration. Here we address this issue by looking for an environmental architecture as simple as possible which is able to achieve this aim and at the same time realizable by current experimental technologies.
In particular, we consider a qubit embedded in a cavity which is in turn coupled to a second cavity and show that this basic structure is able to enable transitions from Markovian to non-Markovian regimes for the dynamics of the qubit just by adjusting the coupling between the two cavities.
Remarkably, under suitable initial conditions, this engineered environment is able to efficiently preserve qubit coherence and, when extended to the case of two noninteracting separated qubits, quantum entanglement. We finally discuss the effectiveness of our cavity-based architecture by considering experimental parameters typical of circuit quantum electrodynamics \cite{norireview,blaisPRA}, where this scheme can find its natural implementation.
\section*{Results}
Our analysis is divided into two parts. The first one is dedicated to the single-qubit architecture which shall permit us to investigate the dynamics of quantum coherence and its sensitivity to decay. The second part treats the two-qubit architecture for exploring to which extent the time of existence of quantum entanglement can be prolonged with respect to its natural disappearance time without the proposed engineered environment.
\subsection*{Single-qubit coherence preservation}
\begin{figure}
\caption{\textbf{Scheme of the single-qubit architecture.}}
\label{fig:system}
\end{figure}
The global system is made of a two-level atom (qubit) inside a lossy cavity
$C_{1}$ which in turn interacts with another cavity $C_{2}$, as depicted in Fig.~\ref{fig:system}.
The Hamiltonian of the qubit and two cavities is given by ($\hbar = 1$)
\begin{eqnarray}\label{singlequbithamiltonian}
\hat{H}&=&(\omega _{0}/2)\hat{\sigma}_{z}+\omega_{1}\hat{a}_{1}^{\dag }\hat{a}_{1}
+\omega_{2}\hat{a}_{2}^{\dag }\hat{a}_{2}+\kappa(\hat{a}_{1}^{\dag}\hat{\sigma} _{-}+\hat{a}_{1}\hat{\sigma} _{+})\nonumber\\
&&+J(\hat{a}_{1}^{\dag}\hat{a}_{2}+\hat{a}_{1}\hat{a}_{2}^{\dag}),
\end{eqnarray}
where $\hat{\sigma}_{z}=\left|1\right\rangle\left\langle1\right|-\left|0\right\rangle\left\langle0\right|$
is a Pauli operator for the qubit with transition frequency $\omega _{0}$, $\hat{\sigma} _{\pm }$ are
the raising and lowering operators of the qubit, and $\hat{a}_{1}$\ $(\hat{a}_{1}^{\dag })$ and
$\hat{a}_{2}$\ $(\hat{a}_{2}^{\dag })$ are the annihilation (creation) operators of cavities $C_1$ and $C_2$, which sustain modes
with frequencies $\omega _{1}$ and $\omega _{2}$, respectively. The parameter
$\kappa$ denotes the coupling of the qubit with cavity $C_1$ and $J$ the coupling between the two cavities.
We take $\omega _{1}=\omega _{2}=\omega$ and, in order to consider both resonant and non-resonant qubit-$C_1$ interactions,
$\omega_{0}=\omega+\delta$ with $\delta$ being the qubit-cavity detuning.
Taking the dissipations of the two cavities into account, the density operator $\rho(t)$ of the
atom plus the cavities obeys the following master equation \cite{petru}
\begin{eqnarray} \label{ro}
\dot{\rho}(t) &=&-i[\hat{H},\rho(t)]\nonumber\\
&-&\sum_{n=1}^{2}\frac{\Gamma _{n}}{2}[a_{n}^{\dag }a_{n}\rho(t)-2a_{n}\rho(t)a_{n}^{\dag }+\rho(t)a_{n}^{\dag }a_{n}],
\end{eqnarray}
where $\dot{\rho}(t)\equiv d\rho(t)/dt$ and $\Gamma _{1}$ ($\Gamma _{2}$) denotes the photon decay rate of cavity $C_1$ ($C_2$). The rate $\Gamma _{n}/2$ physically represents the bandwidth of the Lorentzian frequency spectral density of the cavity $C_n$, which is not a perfect single-mode cavity \cite{petru}. A cavity with a high quality factor will have a narrow bandwidth and therefore a small photon decay rate. Weak and strong coupling regimes for the qubit-$C_1$ interaction can then be identified by the conditions $\kappa \leq \Gamma_1/4$ and $\kappa > \Gamma_1/4$ \cite{petru,bellomo2007PRL}.
Let us suppose the qubit is initially in the excited state $\left| 1\right\rangle$ and both cavities in the
vacuum states $\left| 00\right\rangle$, so that the overall initial state is $\rho(0)=\left| 100\right\rangle\left\langle 100\right|$,
where the first, second and third element correspond to the qubit, cavity $C_1$ and cavity $C_2$, respectively.
Since there exists at most one excitation in the total system at any time of the evolution, we can make the ansatz for $\rho(t)$ in the form
\begin{equation}\label{rot}
\rho (t)=\left( 1-\lambda (t)\right) \left| \psi (t)\right\rangle\left\langle \psi (t)\right|
+\lambda (t)\left| 000\right\rangle\left\langle 000\right|,
\end{equation}
where $0\leq \lambda (t)\leq 1$\ with $\lambda (0)=0$ and
$\left| \psi(t)\right\rangle=h(t)\left| 100\right\rangle +c_{1}(t)\left|010\right\rangle
+c_{2}(t)\left| 001\right\rangle$
with $h(0)=1$\ and $c_{1}(0)=c_{2}(0)=0.$ It is convenient to
introduce the unnormalized state vector \cite{pseu1,pseu2}
\begin{eqnarray} \label{unnor}
\left| \widetilde{\psi }(t)\right\rangle &\equiv &\sqrt{1-\lambda (t)}\left| \psi (t)\right\rangle \nonumber \\
&=&\widetilde{h}(t)\left| 100\right\rangle +\widetilde{c}
_{1}(t)\left| 010\right\rangle+\widetilde{c}_{2}(t)\left|001\right\rangle,
\end{eqnarray}
where $\widetilde{h}(t)\equiv \sqrt{1-\lambda (t)}h(t)$ represents the
probability amplitude of the qubit and $\widetilde{c}_{n}(t)\equiv \sqrt{1-\lambda (t)}c_{n}(t)$ ($n=1,2$)
that of the cavities being in their excited
states. In terms of this unnormalized state vector we then get
\begin{equation} \label{ronga}
\rho (t)=| \widetilde{\psi }(t)\rangle\langle
\widetilde{\psi }(t)| +\lambda (t)| 000\rangle\langle 000|.
\end{equation}
The time-dependent amplitudes $\widetilde{h}(t),$\ $\widetilde{c}_{1}(t),$\ $\widetilde{c}_{2}(t)$ of Eq. (\ref{unnor}) are determined by a set of differential equations as
\begin{eqnarray}
i\frac{d\widetilde{h}(t)}{dt} &=&(\omega+\delta)\widetilde{h}(t)+
\kappa\widetilde{c}_{1}(t), \nonumber \\
i\frac{d\widetilde{c}_{1}(t)}{dt} &=&\left( \omega -\frac{i}{2}\Gamma
_{1}\right) \widetilde{c}_{1}(t)+\kappa\widetilde{h}(t)+
J \widetilde{c}_{2}(t), \nonumber \\
i\frac{d\widetilde{c}_{2}(t)}{dt} &=&\left( \omega -\frac{i}{2}\Gamma
_{2}\right) \widetilde{c}_{2}(t)+J \widetilde{c}_{1}(t).
\label{eqs}
\end{eqnarray}
The above differential equations can be solved by means of standard Laplace
transformations combined with numerical simulations to obtain the reduced
density operators of the atom as well as of each of the cavities.
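A minimal numerical sketch of this step, obtained here by direct integration of Eq. (\ref{eqs}) rather than by Laplace inversion (the parameter values are illustrative only, in units of $\Gamma_1$, and the common frequency $\omega$ is set to zero since it only contributes a global phase to the amplitudes), is the following:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in units of Gamma_1 (arbitrary choices).
kappa, J = 0.24, 1.0          # qubit-C1 and C1-C2 coupling strengths
Gamma1, Gamma2 = 1.0, 0.0     # photon decay rates of C1 and C2
delta = 0.0                   # qubit-cavity detuning
omega = 0.0                   # common frequency: global phase only, set to zero

def rhs(t, y):
    h, c1, c2 = y
    dh  = -1j * ((omega + delta) * h + kappa * c1)
    dc1 = -1j * ((omega - 0.5j * Gamma1) * c1 + kappa * h + J * c2)
    dc2 = -1j * ((omega - 0.5j * Gamma2) * c2 + J * c1)
    return [dh, dc1, dc2]

times = np.linspace(0.0, 50.0, 2001)
sol = solve_ivp(rhs, (times[0], times[-1]), [1.0 + 0j, 0j, 0j],
                t_eval=times, rtol=1e-9, atol=1e-11)

alpha = beta = 1.0 / np.sqrt(2.0)                  # initial qubit superposition
coherence = 2.0 * np.abs(alpha * beta * sol.y[0])  # C(t) = 2|alpha beta h~(t)|
print(coherence[-1])   # with Gamma2 = 0 the coherence settles to a nonzero plateau
\end{verbatim}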
In particular, in the basis $\{\left|1\right\rangle, \left|0\right\rangle\}$ the density matrix evolution of the qubit can be cast as
\begin{equation}\label{sing-at-evo}
\rho(t)=\left(
\begin{array}{cc}
u_{t}\rho_{11}(0) & z_{t}\rho_{10}(0) \\
z_{t}^{*}\rho_{01}(0) & 1-u_{t}\rho_{11}(0) \\
\end{array}
\right),
\end{equation}
where $u_{t}$ and $z_{t}$ are functions of the time $t$ (see Methods).
An intuitive quantification of quantum coherence is based on the off-diagonal elements of the quantum state of interest, these being related to the basic property of quantum interference. Indeed, it has been recently shown \cite{coher} that the functional
\begin{equation}\label{coh}
\mathcal{C}(t)=\sum_{i,j (i\neq j)}|\varrho_{ij}(t)|,
\end{equation}
where $\varrho_{ij}(t)$\ $(i\neq j)$ are the off-diagonal elements of the system density matrix, satisfies the physical requirements which make it a proper coherence measure \cite{coher}. In the following, we adopt $\mathcal{C}(t)$ as quantifier of the qubit coherence
and explore how to preserve and even trap it under various conditions. To this aim, we first consider the
resonant atom-cavity interaction and then discuss the effects of detuning on the dynamics of coherence.
Suppose the qubit is initially prepared in the state $\left|\phi(0)\right\rangle=
\alpha\left|0\right\rangle+\beta\left|1\right\rangle$ (with $|\alpha|^2+|\beta|^2$=1),
namely, $\mathcal{C}(0)=2|\alpha\beta|$, then at time $t>0$ the coherence becomes
$\mathcal{C}(t)=2|\alpha\beta\widetilde{h}(t)|$.
Focusing on the weak coupling between the qubit and the cavity $C_1$ with $\kappa=0.24\Gamma_{1}$,
we plot the dynamics of coherence in Fig. \ref{fig:coher}(a). In this case, the qubit exhibits a Markovian
dynamics with an asymptotical decay of the coherence in the absence of the cavity $C_2$ (with $J=0$). However, by introducing the cavity $C_2$ with a sufficiently large coupling strength, quantum coherence undergoes non-Markovian dynamics with oscillations.
Moreover, it is readily observed that the decay of coherence can be greatly inhibited by increasing the $C_1$-$C_2$ coupling strength $J$. On the other hand, if the coupling between the atom and the cavity $C_1$ is in the
strong regime, with the occurrence of coherence collapses and revivals, increasing the $C_1$-$C_2$ coupling strength $J$ can drive the non-Markovian dynamics of the qubit to the Markovian one and then back to the non-Markovian one,
as shown in Fig.~\ref{fig:coher}(b). This behavior is identified by the suppression and the subsequent reactivation of oscillations during the dynamics. It is worth noting that, although the qubit can experience non-Markovian dynamics again
for large enough $J$, the non-Markovian dynamics curve is different from the original one for $J=0$
in the sense that the oscillations arise before the coherence decays to zero. In general, the coupling
of $C_1$-$C_2$ can enhance the quantum coherence also in the strong coupling regime between the qubit
and the cavity $C_1$.
\begin{figure*}
\caption{Coherence $\mathcal{C}(t)$ dynamics of the qubit for different cavity-cavity coupling strengths $J$.}
\label{fig:coher}
\end{figure*}
The oscillations of coherence, in clear contrast to the monotonic smooth decay in the Markovian regime, constitute a sufficient condition to signify
the presence of memory effects in the system dynamics, being due to information backflow from the environment to the quantum system \cite{breuer2009PRL}. The degree of a non-Markovian process, the so-called non-Markovianity, can be
quantified by different suitable measures
\cite{breuer2009PRL,lorenzoPRA,rivas2010PRL,bylicka2014}. We adopt here the non-Markovianity measure which exploits the dynamics of the trace distance
between two initially different states $\rho _{1}(0)$ and $\rho _{2}(0)$ of
an open system to assess their distinguishability \cite{breuer2009PRL}. A Markovian evolution can never increase the trace distance, hence nonmonotonicity of the latter would imply a non-Markovian character of the system dynamics. Based on this concept, the non-Markovianity can be
quantified by a measure $\mathcal{N}$ defined as \cite{breuer2009PRL}
\begin{equation}
\mathcal{N}=\max_{\rho _{1}(0),\rho _{2}(0)}\int_{\sigma >0}\sigma [t,\rho
_{1}(0),\rho _{2}(0)]dt, \label{N}
\end{equation}
where $\sigma [t,\rho _{1}(0),\rho _{2}(0)]=dD[\rho _{1}(t),\rho_{2}(t)]/dt$ is the rate of change of the trace distance, which is defined as
$D[\rho _{1}(t),\rho _{2}(t)]=(1/2)\mathrm{Tr}|\rho _{1}(t)-\rho_{2}(t)|$, with $|X|=\sqrt{X^{\dag }X}.$ By virtue of $\mathcal{N}$, we plot in Fig.~\ref{fig:nonMar} the non-Markovianity of the qubit dynamics for the conditions considered in Fig. \ref{fig:coher}(a)-(b). We see that if the qubit is initially weakly coupled to the cavity $C_{1}$ ($\kappa=0.24\Gamma_{1}$) its dynamics can undergo a transition from Markovian ($\mathcal{N}=0$) to non-Markovian ($\mathcal{N}>0$) regimes by increasing the coupling strengths $J$ between the two cavities. On the other hand, for strong qubit-cavity coupling ($\kappa=0.4\Gamma_{1}$), the non-Markovian dynamics occurring for $J=0$ turns into Markovian and then back to non-Markovian by increasing $J$. We mention that such a behavior has been also observed in a different structured system where a qubit simultaneously interacts with two coupled lossy cavities \cite{lofrancoManPRA}.
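As a rough illustration of how this quantifier can be evaluated in practice, the sketch below implements the trace distance and the sum of its positive increments for one fixed pair of antipodal initial states (the full measure of Eq. (\ref{N}) would additionally require the maximization over all pairs). The amplitude \texttt{z\_toy} is a toy damped-oscillating stand-in for the actual solution, and all names and numerical values are illustrative assumptions:
\begin{verbatim}
import numpy as np

def trace_distance(r1, r2):
    """D(rho1, rho2) = (1/2) Tr|rho1 - rho2|, computed via singular values."""
    return 0.5 * np.sum(np.linalg.svd(r1 - r2, compute_uv=False))

def evolve(rho0, z):
    """Single-qubit map of Eq. (sing-at-evo) for a given amplitude z (u = |z|^2)."""
    u = abs(z)**2
    return np.array([[u * rho0[0, 0], z * rho0[0, 1]],
                     [np.conj(z) * rho0[1, 0], 1 - u * rho0[0, 0]]])

def backflow(z_of_t, times, rho1_0, rho2_0):
    """Sum of the positive increments of the trace distance for one pair of states."""
    D = np.array([trace_distance(evolve(rho1_0, z_of_t(t)), evolve(rho2_0, z_of_t(t)))
                  for t in times])
    dD = np.diff(D)
    return np.sum(dD[dD > 0])

z_toy = lambda t: np.exp(-0.1 * t) * np.cos(0.8 * t)           # toy amplitude
rho_plus  = 0.5 * np.array([[1,  1], [ 1, 1]], dtype=complex)  # |+><+|
rho_minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)  # |-><-|
print(backflow(z_toy, np.linspace(0.0, 40.0, 4001), rho_plus, rho_minus))
\end{verbatim}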
\begin{figure}
\caption{Non-Markovianity quantifier $\mathcal{N}$ as a function of the cavity-cavity coupling strength $J$.}
\label{fig:nonMar}
\end{figure}
\begin{figure}
\caption{Density plots of coherence $\mathcal{C}(t)$ as a function of the detuning $\delta$ and the rescaled time $\Gamma t$.}
\label{fig:detuning}
\end{figure}
\begin{figure*}
\caption{Coherence $\mathcal{C}(t)$ dynamics under non-resonant qubit-cavity interaction.}
\label{fig:detuning2D}
\end{figure*}
Trapping qubit coherence in the long-time limit is a useful dynamical feature in itself that shall also play a role in the preservation of quantum entanglement treated in the next section. We indeed find that the use of coupled cavities can achieve this result if the cavity $C_2$ is perfect, that is $\Gamma_{2}=0$ (no photon leakage). The plots in Figure \ref{fig:coher}(c)-(d) demonstrate the coherence trapping
in the long-time limit for both weak and strong coupling regimes between the qubit and the cavity $C_{1}$
for different coupling strengths $J$ between the two cavities.
This behavior can be explained by noticing that there exists a bound (decoherence-free) state of the qubit and the cavity $C_2$
of the form $\left|\psi_{-}\right\rangle=J\left|10\right\rangle-\kappa\left|01\right\rangle$,
with $J$ and $\kappa$ being the $C_1$-$C_2$ and qubit-$C_1$ coupling strengths. Since this state is free from decay, once the reduced initial state
of the qubit and the cavity $C_2$ contains a nonzero component of this bound state
$\left|\psi_{-}\right\rangle$, long-living quantum coherence for the qubit can be obtained.
For the initial state $\left|\Phi(0)\right\rangle=
\alpha\left|000\right\rangle+\beta\left|100\right\rangle$ of the qubit and two cavities here considered and $\Gamma_2=0$,
the coherence defined in Eq. (\ref{coh}) gets the asymptotic value $\mathcal{C}(t\rightarrow\infty)=2|\alpha\beta J^{2}/(J^{2}+\kappa^{2})|$,
which increases with $J$ for a given $\kappa$. We further point out that the cavity $C_1$ acts as a catalyst of the entanglement for the hybrid qubit-$C_2$ system, in perfect analogy to the stationary entanglement exhibited by two qubits embedded in a common cavity \cite{Zeno1}. In the latter case, in fact, the cavity mediates the interaction between the two qubits and performs as an entanglement catalyst for them.
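A short consistency check of the asymptotic value quoted above, under the assumptions of resonance ($\delta=0$) and $\Gamma_{2}=0$ and writing the bound state in the three-mode notation of Eq. (\ref{rot}), goes as follows:
\[
\left|\psi_{-}\right\rangle=\frac{J\left|100\right\rangle-\kappa\left|001\right\rangle}{\sqrt{J^{2}+\kappa^{2}}},\qquad
\langle\psi_{-}|100\rangle=\frac{J}{\sqrt{J^{2}+\kappa^{2}}},
\]
so that only the bound-state component of the initial excited amplitude survives and, up to a phase,
\[
\widetilde{h}(t\rightarrow\infty)\sim\langle 100|\psi_{-}\rangle\langle\psi_{-}|100\rangle=\frac{J^{2}}{J^{2}+\kappa^{2}},\qquad
\mathcal{C}(t\rightarrow\infty)=2|\alpha\beta|\,\frac{J^{2}}{J^{2}+\kappa^{2}},
\]
in agreement with the expression given above.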
We now discuss the effect of non-resonant qubit-$C_1$ interaction ($\delta\neq0$) on the dynamics of coherence. In Figure \ref{fig:detuning}(a)-(d), we display the density plots of the coherence as functions of detuning $\delta=\omega_0-\omega$ and rescaled time $\Gamma t$ for both weak and strong couplings. One observes that when $\delta$ departs from zero, the decay of coherence speeds up, achieving the fastest decay around $\delta=J$. It is interesting to highlight the role of the cavity-cavity coupling parameter $J$ as a benchmark for having the fastest decay during the dynamics under the non-resonant condition. For larger detuning tending to the dispersive regime ($\delta \gg \kappa$), the decay of coherence is instead strongly slowed down \cite{bellomo2010PhysScrManiscalco}. However, as shown in Fig.~\ref{fig:detuning2D}, stationary coherence is forbidden out of resonance when the cavity $C_2$ is perfect. Since our main aim is the long-time preservation of quantum coherence and thus of entanglement, in the following we only focus on the condition of resonance between qubit and cavity frequencies.
\subsection*{Two-qubit entanglement preservation}
So far, we have studied the manipulation of coherence dynamics of a qubit via an
adjustment of coupling strength between two cavities. We now extend this architecture to
explore the possibility to harness and preserve the entanglement of two independent qubits, labeled as $A$ and $B$.
We thus consider that $A$ ($B$) interacts locally with cavity $C_{1A}$ ($C_{1B}$) which is in turn coupled to cavity $C_{2A}$ ($C_{2B}$) with coupling strength $J_A$ ($J_B$), as illustrated in Fig.~\ref{fig:system2}.
That is, we have two independent dynamics with each one consisting of a qubit $j$ ($j=A,B$) and two coupled cavities $C_{1j}$-$C_{2j}$.
The total Hamiltonian is then given by the sum of the two independent Hamiltonians, namely,
$H=\sum_{j}H_{j}$, where each $H_{j}$ is the single-qubit Hamiltonian of Eq. (\ref{singlequbithamiltonian}).
Denoting with $\Gamma _{1j}$ ($\Gamma _{2j}$) the decay rate of cavity $C_{1j}$ ($C_{2j}$), we shall assume $\Gamma_{1A}=\Gamma_{1B}=\Gamma$ as the unit of the other parameters.
\begin{figure}
\caption{\textbf{Scheme of the two-qubit architecture.}}
\label{fig:system2}
\end{figure}
As is known for the case of independent subsystems, the complete dynamics of the two-qubit system can be obtained by
knowing that of each qubit interacting with its own environment \cite{bellomo2007PRL,bellomo2008PRA}.
By means of the single-qubit evolution, we can construct the evolved density matrix
of the two atoms, whose elements in the standard computational basis $\{\left|1\right\rangle\equiv\left|11\right\rangle,
\left|2\right\rangle\equiv\left|10\right\rangle,\left|3\right\rangle\equiv\left|01\right\rangle,
\left|4\right\rangle\equiv\left|00\right\rangle\}$ are
\begin{eqnarray}\label{two-at-evo}
\rho_{11}(t)&=&u_{t}^{A}u_{t}^{B}\rho_{11}(0) \nonumber \\ \nonumber
\rho_{22}(t)&=&u_{t}^{A}(1-u_{t}^{B})\rho_{11}(0)+u_{t}^{A}\rho_{22}(0)\\ \nonumber
\rho_{33}(t)&=&(1-u_{t}^{A})u_{t}^{B}\rho_{11}(0)+u_{t}^{B}\rho_{33}(0)\\ \nonumber
\rho_{44}(t)&=&(1-u_{t}^{A})(1-u_{t}^{B})\rho_{11}(0)+(1-u_{t}^{A})\rho_{22}(0)\\ \nonumber
&&+(1-u_{t}^{B})\rho_{33}(0)+\rho_{44}(0)\\ \nonumber
\rho_{14}(t)&=&\rho_{41}^{*}(t)=z_{t}^{A}z_{t}^{B}\rho_{14}(0)\\
\rho_{23}(t)&=&\rho_{32}^{*}(t)=z_{t}^{A}z_{t}^{B*}\rho_{23}(0),
\end{eqnarray}
where $\rho_{lm}(0)$ are the density matrix elements of the two-qubit initial state and $u_t^j$,$z_t^j$ are the time-dependent functions of Eq. (\ref{sing-at-evo}).
We consider the qubits initially in an entangled state of the form $\left|\psi(0)\right\rangle=\alpha\left|00\right\rangle+
\beta\left|11\right\rangle$ ($|\alpha|^2+|\beta|^2=1$).
As is known, this type of entangled states with $|\beta|>|\alpha|$ suffers from
entanglement sudden death when each atom locally interacts with a dissipative environment \cite{santosPRA,yu2009Science,almeida2007Science}.
As far as non-Markovian environments are concerned, partial revivals of entanglement can occur \cite{lofrancoreview,bellomo2007PRL,bellomo2008PRA,lofranco2012PRA,LoFrancoNatCom,lofranco2012PhysScripta,darrigo2014IJQI,darrigo2013hidden,bellomo2008bell,ban1,ban2,liuPRA,yonac,manNJP,baiPRL,baiPRA} typically after asymptotically decaying to zero or after a finite dark period of complete disappearance. It would be useful in practical applications that the non-Markovian oscillations occur when the entanglement still retains a relatively large value.
With our cavity-based architecture, on the one hand we show that the Markovian dynamics
of entanglement in the weak coupling regime between the atoms and the corresponding cavities (i.e., $C_{1A}$ and $C_{1B}$)
can be turned into non-Markovian one by increasing the coupling strengths between the cavities $C_{1A}$-$C_{2A}$
and (or) $C_{1B}$-$C_{2B}$; on the other hand, we find that the appearance of entanglement revivals can be shifted to earlier times.
We employ the concurrence \cite{Wootters98} to quantify the entanglement (see Methods), which for the two-qubit evolved state of Eq. (\ref{two-at-evo})
reads $\mathcal{C}_{AB}(t)=2\max\{0,|\rho_{14}(t)|-\sqrt{\rho_{22}(t)\rho_{33}(t)}\}$. Notice that the concurrence of the Bell-like initial state $\ket{\psi(0)}$ is $\mathcal{C}_{AB}(0)=2|\alpha\beta|$.
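A minimal sketch of how the evolved concurrence is assembled from the single-qubit quantities of Eq. (\ref{sing-at-evo}) is given below; the helper name \texttt{bell\_concurrence} and the numerical values of $z_t^{A}$, $z_t^{B}$ are arbitrary illustrative assumptions, not values extracted from the figures:
\begin{verbatim}
import numpy as np

def bell_concurrence(alpha, beta, zA, zB):
    """Concurrence of Eq. (two-at-evo) for the initial state alpha|00> + beta|11>.

    zA, zB play the role of the single-qubit amplitudes z_t^j of Eq. (sing-at-evo),
    with u_t^j = |z_t^j|^2; here they are supplied as plain complex numbers.
    """
    uA, uB = abs(zA)**2, abs(zB)**2
    rho11_0 = abs(beta)**2                      # population of |11> at t = 0
    rho14_0 = beta * np.conj(alpha)             # coherence <11|rho(0)|00>
    rho14 = zA * zB * rho14_0
    rho22 = uA * (1.0 - uB) * rho11_0
    rho33 = (1.0 - uA) * uB * rho11_0
    return max(0.0, 2.0 * (abs(rho14) - np.sqrt(rho22 * rho33)))

# Illustrative values only.
print(bell_concurrence(np.sqrt(0.1), np.sqrt(0.9), 0.9 * np.exp(0.3j), 0.95))
\end{verbatim}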
In Fig. \ref{fig:con}(a) we plot the dynamics of concurrence $\mathcal{C}_{AB}(t)$ in the weak coupling
regime between the two qubits with their corresponding cavities with $\kappa_{A}=\kappa_{B}=0.2\Gamma$
($\Gamma_{1A}=\Gamma_{1B}=\Gamma$ has been assumed). For two-qubit initial states
with $\alpha=\sqrt{1/10}$, $\beta=\sqrt{9/10}$, the entanglement experiences sudden death
without coupled cavities ($J_{A}=J_{B}=0$). By incorporating the additional cavities with relatively small coupling strength, e.g., $J_{A}=0.5\Gamma$ and $J_{B}=\Gamma$, the concurrence still undergoes a Markovian decay
but the time of entanglement disappearance is prolonged. Increasing the coupling strengths $J_{A}$, $J_{B}$ of
the relevant cavities drives the entanglement dynamics from Markovian regime to non-Markovian one.
Moreover, the entanglement revivals after decay happen early in the evolution, when the entanglement still has a large value.
In general, the concurrence is markedly enhanced with increasing $J_{A}$ and $J_{B}$. A comprehensive picture of the dynamics of concurrence as a function of coupling strength $J$ is shown in Fig. \ref{fig:con}(c) where we have assumed $J_{A}=J_{B}=J$.
In Fig. \ref{fig:con}(b) we plot the dynamics of $\mathcal{C}_{AB}(t)$ in the strong coupling
regime between qubit $j$ and its cavity $C_{1j}$ with $\kappa_{A}=\kappa_{B}=2\Gamma$ for which
the two-qubit dynamics is already non-Markovian in absence of cavity coupling, namely the entanglement can revive after dark periods. Remarkably, the figure shows that when the coupling $J_j$ between $C_{1j}$ and $C_{2j}$ is activated and gradually increased in each location, multiple transitions from non-Markovian to Markovian dynamics surface. We point out that the entanglement dynamics within the non-Markovian regime
exhibit different qualitative behaviors with respect to the first time when entanglement oscillates.
For instance, for $J_{A}=J_{B}=3\Gamma$, the non-Markovian entanglement oscillations (revivals) happen after its disappearance, while when $J_{A}=4\Gamma$ and $J_{B}=5\Gamma$ the entanglement oscillates before its sudden death. These dynamical features are clearly displayed in Fig. \ref{fig:con}(d).
\begin{figure}
\caption{The dynamics of concurrence for different coupling strengths $J_{A}$, $J_{B}$.}
\label{fig:con}
\end{figure}
\begin{figure*}
\caption{The dynamics of concurrence for different coupling strengths $J_{A}$, $J_{B}$ in the ideal case $\Gamma_{2A}=\Gamma_{2B}=0$.}
\label{fig:con-trap}
\end{figure*}
As expected according to the results obtained before on the single-qubit coherence, a steady concurrence arises in the long-time limit if the secondary cavities $C_{2A}$, $C_{2B}$ do not lose photons, i.e., $\Gamma_{2A}=\Gamma_{2B}=0$.
Fig. \ref{fig:con-trap}(a) shows the dynamics of concurrence for qubits coupled to their cavities with strengths $\kappa_{A}=0.2\Gamma$, $\kappa_{B}=0.3\Gamma$. We can readily see that, in absence of coupling with the secondary cavities ($J_{A}=J_{B}=0$),
the entanglement disappears at a finite time without any revival.
In contrast, if the local couplings $C_{1j}$-$C_{2j}$ are switched on and increased, the entanglement no longer vanishes at a finite time and reaches a steady value after undergoing non-Markovian oscillations. Furthermore, the steady value of concurrence increases with the local cavity coupling strengths
$J_{A}$, $J_{B}$.
In Fig.~\ref{fig:con-trap}(b), the concurrence dynamics for $\kappa_{A}=\kappa_{B}=2\Gamma$ is plotted under which the two-qubit entanglement experiences non-Markovian features, that is revivals after dark periods, already in absence of coupled cavities, as shown by the black solid curve for $J_{A}=J_{B}=0$. Of course, in this case the entanglement eventually decays to zero. On the contrary, by adjusting suitable nonzero values of the local cavity couplings a considerable amount of entanglement can be trapped. As a peculiar qualitative dynamical feature, we highlight that the entanglement can revive and then be frozen after a finite dark period time of complete disappearance (e.g., see the inset of Fig.~\ref{fig:con-trap}(b), for the short-time dynamics with $J_{A}=2\Gamma$, $J_{B}=3\Gamma$ and also $J_{A}=J_{B}=3\Gamma$).
We finally point out that the amount of preserved entanglement depends on the choice of the initial state (i.e., on the initial amount of entanglement) of the two qubits. As displayed in Fig.~\ref{fig:initial}, the less initial entanglement, the less entanglement is in general maintained in the ideal case of $\Gamma_{2A}=\Gamma_{2B}=0$. However, since there is not a direct proportionality between the evolved concurrence $\mathcal{C}_{AB}(t)$ and its initial value $\mathcal{C}_{AB}(0)$, the maximal values of concurrence do not exactly appear at $\alpha = 1/\sqrt{2}$ (corresponding to maximal initial entanglement) at any time in the evolution, as instead one could expect. It can then be observed that nonzero entanglement trapping is achieved for $\alpha>0.2$.
\subsubsection*{Experimental parameters}
We conclude our study by discussing the experimental feasibility of the cavity-based architecture here proposed for the two-qubit assembly. Due to its cavity quantum electrodynamics characteristics, our engineered environment finds its natural realization in the well-established framework of circuit quantum electrodynamics (cQED) with transmon qubits and coplanar waveguide cavities \cite{blaisPRA,steffenarxiv,schoelkopfarxiv,leekPRL,finkNature}. The entangled qubits can be initialized by using the standard technique of a transmission-line resonator as a quantum bus \cite{blaisPRA,dicarloNature}. Initial Bell-like states as the one we have considered here can be currently prepared with very high fidelity \cite{dicarloNature}.
Considering up-to-date experimental parameters \cite{steffenarxiv,schoelkopfarxiv,leekPRL,finkNature} applied to our global system of Fig.~\ref{fig:system2}, the average photon decay rate for the cavity $C_{1j}$ ($j=A,B$) containing the qubit is $\Gamma_{1j} \in [1\ \mathrm{MHz},10\ \mathrm{MHz}]$, while the average photon lifetime for the high quality factor cavity $C_{2j}$ is $\tau_2\approx 55$ $\mu$s \cite{schoelkopfarxiv}, which implies
$\Gamma_{2j} \approx 10^{-2} \mathrm{MHz}\in [10^{-2} \Gamma_{1j},10^{-3} \Gamma_{1j}]$. The qubit-cavity interaction intensity $\kappa_j$ and the cavity-cavity coupling strength $J_j$ are usually of the same order of magnitude, with typical values $\kappa_j \sim J_j \in [1\ \mathrm{MHz},100\ \mathrm{MHz}]=[0.1 \Gamma_{1j},10 \Gamma_{1j}]$. The typical cavity frequency is $\omega \sim 2\pi \times 10$ GHz \cite{blaisPRA} while the qubit transition frequency can be arbitrarily adjusted in order to be resonant with the cavity frequency.
The above experimental parameters put our system under the condition $\kappa_j \ll \omega $ which guarantees the validity of the rotating wave approximation (RWA) for the qubit-cavity interaction here considered in the Hamiltonian of equation~(\ref{singlequbithamiltonian}).
In order to assess the extent of entanglement preservation expected under these experimental conditions, we can analyze the concurrence evolution under the same parameters of Fig.~\ref{fig:con-trap}(a) for $\kappa_j$, $J_j$, which are already within the experimental values, but with $\Gamma_{2A}=\Gamma_{2B}=\Gamma_2= 10^{-2} \Gamma, 10^{-3} \Gamma$ instead of being zero (ideal case), where $\Gamma=\Gamma_{1A} =\Gamma_{1B} \in [1\ \mathrm{MHz},10\ \mathrm{MHz}]$.
The natural estimated disappearance time of entanglement in absence of coupling between the cavities ($J_{A}=J_{B}=0$) is $\bar{t}=6.69/\Gamma \in [669 \ \mathrm{ns}, 6.69\ \mu\mathrm{s}]$, as seen from Fig.~\ref{fig:con-trap}(a).
When considering the experimental achievable decay rates for the cavities $C_{2j}$, we find that the entanglement is expected to be preserved until times $t^\ast$ orders of magnitude longer than $\bar{t}$, as shown in Table \ref{tab:times}.
In the case of higher quality factors for the cavities $C_{2j}$, such that the photon decay rate is of the order of $\Gamma_{2} = 10^{-4}\Gamma$, the entanglement can last even up to the order of seconds. These results provide clear evidence of the practical power of our simple two-qubit architecture in significantly extending the quantum entanglement lifetime for the implementation of given entanglement-based quantum tasks and algorithms \cite{obrienreview,dicarloNature,horodecki2009RMP,brunnerRMP}.
\begin{table*}[ht]
\centering
\begin{tabular}{|l || c | c |}
\hline
$\Gamma_2/\Gamma$ & $J_{A}/\Gamma=J_{B}/\Gamma=0.5$ & $J_{A}/\Gamma=0.5,\ J_{B}/\Gamma=1$
\\
\hline
$10^{-2}$ & $t^\ast=454/\Gamma \in [45.4 \ \mu\mathrm{s}, 454\ \mu\mathrm{s}]$ &
$t^\ast=974/\Gamma\in [97.4 \ \mu\mathrm{s}, 974\ \mu\mathrm{s}]$ \\
\hline
$10^{-3}$ & $t^\ast=4481/\Gamma\in [448 \ \mu\mathrm{s}, 4.48\ \mathrm{ms}]$ &
$t^\ast = 9686/\Gamma \in [0.967 \ \mathrm{ms}, 9.67\ \mathrm{ms}]$ \\
\hline
\end{tabular}
\caption{\label{tab:times} Estimates of the experimental entanglement lifetimes $t^\ast$ for different values of the second cavities decay rates $\Gamma_2$ and the local cavity couplings $J_A$, $J_B$. These values are to be compared with the natural entanglement lifetime without cavity coupling, $\bar{t}\in [669 \ \mathrm{ns}, 6.69\ \mu\mathrm{s}]$. The reference unit $\Gamma \in [1\ \mathrm{MHz},10\ \mathrm{MHz}]$.}
\end{table*}
\begin{figure}
\caption{The concurrence as a function of the two-qubit initial state parameter $\alpha$ and the scaled time $\Gamma t$ in the ideal case $\Gamma_{2A}=\Gamma_{2B}=0$.}
\label{fig:initial}
\end{figure}
It is worth mentioning that nowadays cQED technologies are also able to create a qubit-cavity coupling strength comparable to the cavity frequency, thus entering the so-called ultra-strong coupling regime \cite{niemckzyk}. In that case the RWA is to be relaxed and the counter-rotating terms in the qubit-cavity interaction have to be taken into account. According to known results for the single qubit evolution beyond the RWA \cite{werlangNoRWA}, it appears that the main effect of the counter-rotating terms in the Rabi Hamiltonian is the photon creation from vacuum under dephasing noise, which in turn induces a bit-flip error in the qubit evolution. This photon creation would be instead suppressed in the presence of dissipative (damping) mechanisms \cite{werlangNoRWA}. Since our cavity-based architecture is subject to amplitude damping noise, the qualitative long-time dynamics of quantum coherence and thus of entanglement are expected not to be significantly modified with respect to the case when the RWA is retained. These arguments motivate a detailed study of the performance of our proposed architecture in the ultra-strong coupling regime beyond the RWA, to be addressed elsewhere.
\section*{Discussion}
In this work, we have analyzed the possibility to manipulate and maintain quantum coherence and entanglement of quantum systems by means of a simple yet effective cavity-based engineered environment. In particular, we have seen how an environmental architecture made of two coupled lossy cavities enables a switch between Markovian and non-Markovian regimes for the dynamics of a qubit (artificial atom) embedded in one of the cavities. This feature possesses an intrinsic interest in the context of controlling memory effects of open quantum systems. Moreover, if the cavity without the qubit has a small photon leakage with respect to the other one, qubit coherence can be efficiently maintained.
We mention that our cavity-based architecture for the single qubit can be viewed as the physical realization of a photonic band gap for the qubit \cite{laurapseudo}, inhibiting its spontaneous emission. This property, then extended to the case of two independent qubits locally subject to such an engineered environment, has allowed us to show that quantum entanglement can be robustly shielded from decay, reaching a steady-state entanglement in the limit of perfect cavities.
The emergence of this steady-state entanglement within our proposed architecture confirms the mechanism of entanglement preservation when the qubit-environment interaction is dissipative: namely, the simultaneous existence of a bound state between the qubit and its local environment and of a non-Markovian dynamics for the qubit \cite{non-Mar3}. We remark that this condition is here shown to be efficiently approximated within current experimental parameters such as to maintain a substantial fraction of the entanglement initially shared between the qubits during the evolution. Moreover, we highlight that this goal is achieved even if the local reservoir (cavity) embedding the qubit is memoryless, thanks to the exploitation of an additional good-quality cavity suitably coupled to the first one. Specifically, we have found that, by suitably adjusting the control parameter constituted by this local cavity coupling, the entanglement between the separated qubits can be exploited for times orders of magnitude longer than the natural time of its disappearance in absence of the cavity coupling. These times are expected to be long enough to perform various quantum tasks \cite{obrienreview,dicarloNature}.
Our long-living quantum entanglement scheme, besides its simplicity, is straightforwardly extendable to many qubits, thus fulfilling the scalability requirement for complex quantum information and computation protocols. The fact that the qubits are independent and noninteracting also allows for the desirable individual operations on each constituent of a quantum hardware.
The results of this work provide new insights regarding the control of the fundamental non-Markovian character of open quantum system dynamics and pave the way to further experimental developments towards the realization of devices able to preserve quantum resources.
\section*{Methods}
\subsection*{Functions of the single qubit density matrix}
Let us denote with $\mathcal{L}^{-1}\{ L(s) \}(t)$ the inverse Laplace transform of $L(s)$.
Then, the functions $u_{t}$ and $z_{t}$ appearing in Eq. (\ref{sing-at-evo}) are expressed as
\begin{equation}
u_{t}=|z_{t}|^{2},\ z_{t} = \mathcal{L}^{-1}\{ F(s)/G(s)\}(t),\nonumber
\end{equation}
where
\begin{eqnarray}
F(s)&=& -4J^{2}-(2s+2i\omega+\Gamma_{1}) (2s+2i\omega+\Gamma_{2}),\\
\nonumber
G(s)&=&2 \kappa^2 (2 s + 2 i\omega + \Gamma_{2}) + [s + i (\delta + \omega)] \nonumber\\
&&\times\{4 [J^2 + (s + i\omega)^2] +
2 (s + i\omega) \Gamma_{2} \nonumber\\
&&+ \Gamma_{1} (2 s + 2 i\omega + \Gamma_{2})\}. \nonumber
\end{eqnarray}
\subsection*{Entanglement quantification by concurrence}
Entanglement for an arbitrary state $\rho_{AB}$ of two qubits is quantified by concurrence \cite{amico2008RMP,Wootters98}
\begin{equation}
\mathcal{C}_{AB}=\mathcal{C}(\rho_{AB})=\textrm{max}\{0,\sqrt{\chi_{1}}-\sqrt{\chi_{2}}-\sqrt{\chi_{3}}-\sqrt{\chi_{4}}\},
\end{equation}
where $\chi_{i}$ ($i=1,\ldots,4$) are the eigenvalues in decreasing order of the matrix $\rho_{AB}(\sigma_{y}\otimes\sigma_{y})\rho_{AB}^{\ast}(\sigma_{y}\otimes\sigma_{y})$, with $\sigma_{y}$ denoting the second Pauli matrix and $\rho_{AB}^{\ast}$ corresponding to the complex conjugate of the two-qubit density matrix $\rho_{AB}$ in the canonical computational basis $\{\left|11\right\rangle,\left|10\right\rangle,\left|01\right\rangle,\left|00\right\rangle\}$.
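A compact numerical implementation of this definition (a sketch only; the function name \texttt{concurrence} is an arbitrary choice) reads:
\begin{verbatim}
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho given in the
    basis {|11>, |10>, |01>, |00>}."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    chi = np.sort(np.real(np.linalg.eigvals(R)))[::-1]   # eigenvalues, decreasing
    chi = np.clip(chi, 0.0, None)                         # guard tiny negative values
    return max(0.0, np.sqrt(chi[0]) - np.sum(np.sqrt(chi[1:])))

# Check on the Bell-like state alpha|00> + beta|11>: expected C_AB = 2|alpha*beta|.
alpha, beta = np.sqrt(0.1), np.sqrt(0.9)
psi = np.array([beta, 0.0, 0.0, alpha])                   # {|11>,|10>,|01>,|00>}
print(concurrence(np.outer(psi, psi.conj())), 2 * abs(alpha * beta))
\end{verbatim}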
\begin{thebibliography}{10}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem{nielsenchuang}
\bibinfo{author}{Nielsen, M.~A.} \& \bibinfo{author}{Chuang, I.~L.}
\newblock \emph{\bibinfo{title}{Quantum Computation and Quantum Information}}
(\bibinfo{publisher}{Cambridge University Press},
\bibinfo{address}{Cambridge}, \bibinfo{year}{2000}).
\bibitem{benenti}
\bibinfo{author}{Benenti, G.}, \bibinfo{author}{Casati, G.} \&
\bibinfo{author}{Strini, G.}
\newblock \emph{\bibinfo{title}{Principles of quantum computation and
information}} (\bibinfo{publisher}{World Scientific},
\bibinfo{address}{Singapore}, \bibinfo{year}{2007}).
\bibitem{amico2008RMP}
\bibinfo{author}{Amico, L.}, \bibinfo{author}{Fazio, R.},
\bibinfo{author}{Osterloh, A.} \& \bibinfo{author}{Vedral, V.}
\newblock \bibinfo{title}{Entanglement in many-body systems}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{517--576}
(\bibinfo{year}{2008}).
\bibitem{yueberlyPRL2004}
\bibinfo{author}{Yu, T.} \& \bibinfo{author}{Eberly, J.~H.}
\newblock \bibinfo{title}{Finite-time disentanglement via spontaneous
emission}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{93}}, \bibinfo{pages}{140404}
(\bibinfo{year}{2004}).
\bibitem{yueberlyPRL2006}
\bibinfo{author}{Yu, T.} \& \bibinfo{author}{Eberly, J.~H.}
\newblock \bibinfo{title}{Quantum open system theory: Bipartite aspects}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{97}}, \bibinfo{pages}{140403}
(\bibinfo{year}{2006}).
\bibitem{doddPRA}
\bibinfo{author}{Dodd, P.~J.} \& \bibinfo{author}{Halliwell, J.~J.}
\newblock \bibinfo{title}{Disentanglement and decoherence by open system
dynamics}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{69}},
\bibinfo{pages}{052105} (\bibinfo{year}{2004}).
\bibitem{santosPRA}
\bibinfo{author}{Santos, M.~F.}, \bibinfo{author}{Milman, P.},
\bibinfo{author}{Davidovich, L.} \& \bibinfo{author}{Zagury, M.}
\newblock \bibinfo{title}{Direct measurement of finite-time disentanglement
induced by a reservoir}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{73}},
\bibinfo{pages}{040305} (\bibinfo{year}{2006}).
\bibitem{yu2009Science}
\bibinfo{author}{Yu, T.} \& \bibinfo{author}{Eberly, J.~H.}
\newblock \bibinfo{title}{Sudden death of entanglement}.
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{323}},
\bibinfo{pages}{598--601} (\bibinfo{year}{2009}).
\bibitem{almeida2007Science}
\bibinfo{author}{Almeida, M.~P.} \emph{et~al.}
\newblock \bibinfo{title}{Environment-induced sudden death of entanglement}.
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{316}},
\bibinfo{pages}{579} (\bibinfo{year}{2007}).
\bibitem{kimble2007PRL}
\bibinfo{author}{Laurat, J.}, \bibinfo{author}{Choi, K.~S.},
\bibinfo{author}{Deng, H.}, \bibinfo{author}{Chou, C.~W.} \&
\bibinfo{author}{Kimble, H.~J.}
\newblock \bibinfo{title}{Heralded entanglement between atomic ensembles:
preparation, decoherence, and scaling}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{99}}, \bibinfo{pages}{180504}
(\bibinfo{year}{2007}).
\bibitem{eberlyScience2007}
\bibinfo{author}{Eberly, J.~H.} \& \bibinfo{author}{Yu, T.}
\newblock \bibinfo{title}{The end of an entanglement}.
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{316}},
\bibinfo{pages}{555} (\bibinfo{year}{2007}).
\bibitem{sallesPRA}
\bibinfo{author}{Salles, A.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental investigation of the dynamics of
entanglement: Sudden death, complementarity, and continuous monitoring of the
environment}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{022322} (\bibinfo{year}{2008}).
\bibitem{aolitareview}
\bibinfo{author}{Aolita, L.}, \bibinfo{author}{de~Melo, F.} \&
\bibinfo{author}{Davidovich, L.}
\newblock \bibinfo{title}{Open-system dynamics of entanglement: a key issues
review}.
\newblock \emph{\bibinfo{journal}{Rep. Prog. Phys.}}
\textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{042001}
(\bibinfo{year}{2015}).
\bibitem{obrienreview}
\bibinfo{author}{Ladd, T.~D.} \emph{et~al.}
\newblock \bibinfo{title}{Quantum computers}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{464}},
\bibinfo{pages}{45} (\bibinfo{year}{2010}).
\bibitem{norireview}
\bibinfo{author}{Xiang, Z.-L.}, \bibinfo{author}{Ashhab, S.},
\bibinfo{author}{You, J.} \& \bibinfo{author}{Nori, F.}
\newblock \bibinfo{title}{Hybrid quantum circuits: Superconducting circuits
interacting with other quantum systems}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{85}}, \bibinfo{pages}{623} (\bibinfo{year}{2013}).
\bibitem{bennett1}
\bibinfo{author}{Bennett, C.~H.} \emph{et~al.}
\newblock \bibinfo{title}{Purification of noisy entanglement and faithful
teleportation via noisy channels}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{722} (\bibinfo{year}{1996}).
\bibitem{bennett2}
\bibinfo{author}{Bennett, C.~H.}, \bibinfo{author}{Bernstein, H.~J.},
\bibinfo{author}{Popescu, S.} \& \bibinfo{author}{Schumacher, B.}
\newblock \bibinfo{title}{Concentrating partial entanglement by local
operations}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{53}},
\bibinfo{pages}{2046} (\bibinfo{year}{1996}).
\bibitem{panNature}
\bibinfo{author}{Pan, J.~W.}, \bibinfo{author}{Gasparoni, S.},
\bibinfo{author}{Ursin, R.}, \bibinfo{author}{Weihs, G.} \&
\bibinfo{author}{Zeilinger, A.}
\newblock \bibinfo{title}{Experimental entanglement purification of arbitrary
unknown states}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{423}},
\bibinfo{pages}{417} (\bibinfo{year}{2003}).
\bibitem{kwiatNature}
\bibinfo{author}{Kwiat, P.~G.}, \bibinfo{author}{Barraza-Lopez, S.},
\bibinfo{author}{Stefanov, A.} \& \bibinfo{author}{Gisin, N.}
\newblock \bibinfo{title}{Experimental entanglement distillation and hidden
non-locality}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{409}},
\bibinfo{pages}{1014} (\bibinfo{year}{2001}).
\bibitem{dongNatPhys}
\bibinfo{author}{Dong, R.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental entanglement distillation of mesoscopic
quantum states}.
\newblock \emph{\bibinfo{journal}{Nature Phys.}} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{919} (\bibinfo{year}{2008}).
\bibitem{zanardiPRL1997}
\bibinfo{author}{Zanardi, P.} \& \bibinfo{author}{Rasetti, M.}
\newblock \bibinfo{title}{Noiseless quantum codes}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{3306} (\bibinfo{year}{1997}).
\bibitem{lidarPRL}
\bibinfo{author}{Lidar, D.~A.}, \bibinfo{author}{Chuang, I.} \&
\bibinfo{author}{Whaley, K.~B.}
\newblock \bibinfo{title}{Decoherence-free subspaces for quantum computation}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{2594} (\bibinfo{year}{1998}).
\bibitem{kwiatScience}
\bibinfo{author}{Kwiat, P.~G.}, \bibinfo{author}{Berglund, A.~J.},
\bibinfo{author}{Alterpeter, J.~B.} \& \bibinfo{author}{White, A.~G.}
\newblock \bibinfo{title}{Experimental entanglement distillation and hidden
non-locality}.
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{290}},
\bibinfo{pages}{498} (\bibinfo{year}{2000}).
\bibitem{Zeno1}
\bibinfo{author}{Maniscalco, S.}, \bibinfo{author}{Francica, F.},
\bibinfo{author}{Zaffino, R.~L.}, \bibinfo{author}{Gullo, N.~L.} \&
\bibinfo{author}{Plastina, F.}
\newblock \bibinfo{title}{{Protecting entanglement via the quantum Zeno
effect}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{100}}, \bibinfo{pages}{090503}
(\bibinfo{year}{2008}).
\bibitem{Zeno2}
\bibinfo{author}{An, N.~B.}, \bibinfo{author}{Kim, J.} \& \bibinfo{author}{Kim,
K.}
\newblock \bibinfo{title}{Nonperturbative analysis of entanglement dynamics and
control for three qubits in a common lossy cavity}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{82}},
\bibinfo{pages}{032316} (\bibinfo{year}{2010}).
\bibitem{Zeno3}
\bibinfo{author}{Facchi, P.}, \bibinfo{author}{Lidar, D.~A.} \&
\bibinfo{author}{Pascazio, S.}
\newblock \bibinfo{title}{Unification of dynamical decoupling and the quantum
{Zeno} effect}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{69}},
\bibinfo{pages}{032314} (\bibinfo{year}{2004}).
\bibitem{shorPRA}
\bibinfo{author}{Shor, P.~W.}
\newblock \bibinfo{title}{Scheme for reducing decoherence in quantum computer
memory}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{52}},
\bibinfo{pages}{2493(R)} (\bibinfo{year}{1995}).
\bibitem{steanePRL}
\bibinfo{author}{Steane, A.~M.}
\newblock \bibinfo{title}{Error correcting codes in quantum theory}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{793} (\bibinfo{year}{1996}).
\bibitem{steaneRoySoc}
\bibinfo{author}{Steane, A.~M.}
\newblock \bibinfo{title}{Multiple-particle interference and quantum error
correction}.
\newblock \emph{\bibinfo{journal}{Proc. R. Soc. London A}}
\textbf{\bibinfo{volume}{452}}, \bibinfo{pages}{2551} (\bibinfo{year}{1996}).
\bibitem{calderbank}
\bibinfo{author}{Calderbank, A.~R.} \& \bibinfo{author}{Shor, P.~W.}
\newblock \bibinfo{title}{Good quantum error-correcting codes exist}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{54}},
\bibinfo{pages}{1098} (\bibinfo{year}{1996}).
\bibitem{sainzPRA}
\bibinfo{author}{Sainz, I.} \& \bibinfo{author}{Bjork, G.}
\newblock \bibinfo{title}{Good quantum error-correcting codes exist}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{77}},
\bibinfo{pages}{052307} (\bibinfo{year}{2008}).
\bibitem{muhktar2010PRA1}
\bibinfo{author}{Mukhtar, M.}, \bibinfo{author}{Saw, T.~B.},
\bibinfo{author}{Soh, W.~T.} \& \bibinfo{author}{Gong, J.}
\newblock \bibinfo{title}{Universal dynamical decoupling: Two-qubit states and
beyond}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{012331} (\bibinfo{year}{2010}).
\bibitem{muhktar2010PRA2}
\bibinfo{author}{Mukhtar, M.}, \bibinfo{author}{Soh, W.~T.},
\bibinfo{author}{Saw, T.~B.} \& \bibinfo{author}{Gong, J.}
\newblock \bibinfo{title}{Protecting unknown two-qubit entangled states by
nesting {Uhrig'}s dynamical decoupling sequences}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{82}},
\bibinfo{pages}{052338} (\bibinfo{year}{2010}).
\bibitem{wang2011PRA}
\bibinfo{author}{Wang, Z.-Y.} \& \bibinfo{author}{Liu, R.-B.}
\newblock \bibinfo{title}{Protection of quantum systems by nested dynamical
decoupling}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{83}},
\bibinfo{pages}{022306} (\bibinfo{year}{2011}).
\bibitem{pan2011JPB}
\bibinfo{author}{Pan, Y.}, \bibinfo{author}{Z.-R-Xi} \& \bibinfo{author}{Gong,
J.}
\newblock \bibinfo{title}{Optimized dynamical decoupling sequences in
protecting two-qubit states}.
\newblock \emph{\bibinfo{journal}{J. Phys. B: At. Mol. Opt. Phys.}}
\textbf{\bibinfo{volume}{44}}, \bibinfo{pages}{175501}
(\bibinfo{year}{2011}).
\bibitem{lofrancoPRB}
\bibinfo{author}{{Lo Franco}, R.}, \bibinfo{author}{D'Arrigo, A.},
\bibinfo{author}{Falci, G.}, \bibinfo{author}{Compagno, G.} \&
\bibinfo{author}{Paladino, E.}
\newblock \bibinfo{title}{Preserving entanglement and nonlocality in
solid-state qubits by dynamical decoupling}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. B}} \textbf{\bibinfo{volume}{90}},
\bibinfo{pages}{054304} (\bibinfo{year}{2014}).
\bibitem{lofrancospinecho}
\bibinfo{author}{{Lo Franco}, R.}, \bibinfo{author}{D'Arrigo, A.},
\bibinfo{author}{Falci, G.}, \bibinfo{author}{Compagno, G.} \&
\bibinfo{author}{Paladino, E.}
\newblock \bibinfo{title}{Spin-echo entanglement protection from random
telegraph noise}.
\newblock \emph{\bibinfo{journal}{Phys. Scr.}} \textbf{\bibinfo{volume}{T153}},
\bibinfo{pages}{014043} (\bibinfo{year}{2013}).
\bibitem{lofrancoreview}
\bibinfo{author}{{Lo Franco}, R.}, \bibinfo{author}{Bellomo, B.},
\bibinfo{author}{Maniscalco, S.} \& \bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Dynamics of quantum correlations in two-qubit systems
within non-{Markovian} environments}.
\newblock \emph{\bibinfo{journal}{Int. J. Mod. Phys. B}}
\textbf{\bibinfo{volume}{27}}, \bibinfo{pages}{1345053}
(\bibinfo{year}{2013}).
\bibitem{non-Mar2}
\bibinfo{author}{Tan, J.}, \bibinfo{author}{Kyaw, T.~H.} \&
\bibinfo{author}{Yeo, Y.}
\newblock \bibinfo{title}{Non-{Markovian} environments and entanglement
preservation}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{062119} (\bibinfo{year}{2010}).
\bibitem{non-Mar3}
\bibinfo{author}{Tong, Q.~J.}, \bibinfo{author}{An, J.~H.},
\bibinfo{author}{Luo, H.~G.} \& \bibinfo{author}{Oh, C.~H.}
\newblock \bibinfo{title}{Mechanism of entanglement preservation}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{052330} (\bibinfo{year}{2010}).
\bibitem{bellomo2007PRL}
\bibinfo{author}{Bellomo, B.}, \bibinfo{author}{{Lo Franco}, R.} \&
\bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Non-{Markovian} effects on the dynamics of
entanglement}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{99}}, \bibinfo{pages}{160502}
(\bibinfo{year}{2007}).
\bibitem{bellomo2008PRA}
\bibinfo{author}{Bellomo, B.}, \bibinfo{author}{{Lo Franco}, R.} \&
\bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Entanglement dynamics of two independent qubits in
environments with and without memory}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{77}},
\bibinfo{pages}{032342} (\bibinfo{year}{2008}).
\bibitem{lofranco2012PRA}
\bibinfo{author}{{Lo Franco}, R.}, \bibinfo{author}{Bellomo, B.},
\bibinfo{author}{Andersson, E.} \& \bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Revival of quantum correlation without
system-environment back-action}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{85}},
\bibinfo{pages}{032318} (\bibinfo{year}{2012}).
\bibitem{LoFrancoNatCom}
\bibinfo{author}{Xu, J.-S.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental recovery of quantum correlations in
absence of system-environment back-action}.
\newblock \emph{\bibinfo{journal}{Nature Commun.}}
\textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{2851} (\bibinfo{year}{2013}).
\bibitem{darrigo2012AOP}
\bibinfo{author}{{D'Arrigo}, A.}, \bibinfo{author}{{Lo Franco}, R.},
\bibinfo{author}{Benenti, G.}, \bibinfo{author}{Paladino, E.} \&
\bibinfo{author}{Falci, G.}
\newblock \bibinfo{title}{Recovering entanglement by local operations}.
\newblock \emph{\bibinfo{journal}{Ann. Phys.}} \textbf{\bibinfo{volume}{350}},
\bibinfo{pages}{211} (\bibinfo{year}{2014}).
\bibitem{orieux2015}
\bibinfo{author}{Orieux, A.} \emph{et~al.}
\newblock \bibinfo{title}{Experimental on-demand recovery of quantum
entanglement by local operations within non-{Markovian} dynamics}.
\newblock \emph{\bibinfo{journal}{Sci. Rep.}} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{8575} (\bibinfo{year}{2015}).
\bibitem{bellomo2008trapping}
\bibinfo{author}{Bellomo, B.}, \bibinfo{author}{{Lo Franco}, R.},
\bibinfo{author}{Maniscalco, S.} \& \bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Entanglement trapping in structured environments}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{060302(R)} (\bibinfo{year}{2008}).
\bibitem{bellomo2010PhysScrManiscalco}
\bibinfo{author}{Bellomo, B.}, \bibinfo{author}{{Lo Franco}, R.},
\bibinfo{author}{Maniscalco, S.} \& \bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Two-qubit entanglement dynamics for two different
non-{Markovian} environments}.
\newblock \emph{\bibinfo{journal}{Phys. Scr.}} \textbf{\bibinfo{volume}{T140}},
\bibinfo{pages}{014014} (\bibinfo{year}{2010}).
\bibitem{expPBG}
\bibinfo{author}{Lodahl, P.} \emph{et~al.}
\newblock \bibinfo{title}{Controlling the dynamics of spontaneous emission from
quantum dots by photonic crystals}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{430}},
\bibinfo{pages}{654} (\bibinfo{year}{2004}).
\bibitem{interf1}
\bibinfo{author}{Zhu, S.~Y.} \& \bibinfo{author}{Scully, M.~O.}
\newblock \bibinfo{title}{Spectral line elimination and spontaneous emission
cancellation via quantum interference}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{388} (\bibinfo{year}{1996}).
\bibitem{interf2}
\bibinfo{author}{Scully, M.~O.} \& \bibinfo{author}{Zhu, S.~Y.}
\newblock \bibinfo{title}{Quantum control of the inevitable}.
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{281}},
\bibinfo{pages}{1973} (\bibinfo{year}{1998}).
\bibitem{interf3}
\bibinfo{author}{Das, S.} \& \bibinfo{author}{Agarwal, G.~S.}
\newblock \bibinfo{title}{Protecting bipartite entanglement by quantum
interferences}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{052341} (\bibinfo{year}{2010}).
\bibitem{Kim}
\bibinfo{author}{Kim, Y.~S.}, \bibinfo{author}{Lee, J.~C.},
\bibinfo{author}{Kwon, O.} \& \bibinfo{author}{Kim, Y.~H.}
\newblock \bibinfo{title}{Protecting entanglement from decoherence using weak
measurement and quantum measurement reversal}.
\newblock \emph{\bibinfo{journal}{Nature Phys.}} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{117} (\bibinfo{year}{2012}).
\bibitem{Man1}
\bibinfo{author}{Man, Z.~X.}, \bibinfo{author}{Xia, Y.~J.} \&
\bibinfo{author}{An, N.~B.}
\newblock \bibinfo{title}{Manipulating entanglement of two qubits in a common
environment by means of weak measurements and quantum measurement reversals}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{86}},
\bibinfo{pages}{012325} (\bibinfo{year}{2012}).
\bibitem{Man2}
\bibinfo{author}{Man, Z.~X.}, \bibinfo{author}{Xia, Y.~J.} \&
\bibinfo{author}{An, N.~B.}
\newblock \bibinfo{title}{Enhancing entanglement of two qubits undergoing
independent decoherences by local pre- and postmeasurements}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{86}},
\bibinfo{pages}{052322} (\bibinfo{year}{2012}).
\bibitem{pianiPRL}
\bibinfo{author}{Benatti, F.}, \bibinfo{author}{Floreanini, R.} \&
\bibinfo{author}{Piani, M.}
\newblock \bibinfo{title}{Environment induced entanglement in {Markovian}
dissipative dynamics}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{91}}, \bibinfo{pages}{070402}
(\bibinfo{year}{2003}).
\bibitem{matteoEPJD}
\bibinfo{author}{Scala, M.}, \bibinfo{author}{Migliore, R.},
\bibinfo{author}{Messina, A.} \& \bibinfo{author}{S{\'{a}}nchez-Soto, L.~L.}
\newblock \bibinfo{title}{Robust stationary entanglement of two coupled qubits
in independent environments}.
\newblock \emph{\bibinfo{journal}{Eur. Phys. J. D}}
\textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{199} (\bibinfo{year}{2011}).
\bibitem{brunnerarxiv}
\bibinfo{author}{Brask, J.~B.}, \bibinfo{author}{Brunner, N.},
\bibinfo{author}{Haack, G.} \& \bibinfo{author}{Huber, M.}
\newblock \bibinfo{title}{Autonomous quantum thermal machine for generating
steady-state entanglement}.
\newblock \emph{\bibinfo{journal}{Preprint at arXiv:1504.00187}}
(\bibinfo{year}{2015}).
\bibitem{plenio2002PRL}
\bibinfo{author}{Plenio, M.~B.} \& \bibinfo{author}{Huelga, S.~F.}
\newblock \bibinfo{title}{Entangled light from white noise}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{197901}
(\bibinfo{year}{2002}).
\bibitem{hartmannNJP}
\bibinfo{author}{Hartmann, L.}, \bibinfo{author}{D{\"{u}}, W.} \&
\bibinfo{author}{Briegel, H.~J.}
\newblock \bibinfo{title}{Entanglement and its dynamics in open dissipative
systems}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{9}},
\bibinfo{pages}{230} (\bibinfo{year}{2007}).
\bibitem{brunoNJP}
\bibinfo{author}{Bellomo, B.} \& \bibinfo{author}{Antezza, M.}
\newblock \bibinfo{title}{Creation and protection of entanglement in systems
out of thermal equilibrium}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{15}},
\bibinfo{pages}{113052} (\bibinfo{year}{2013}).
\bibitem{brunoEPL}
\bibinfo{author}{Bellomo, B.} \& \bibinfo{author}{Antezza, M.}
\newblock \bibinfo{title}{Steady entanglement out of thermal equilibrium}.
\newblock \emph{\bibinfo{journal}{EPL (Europhysics Letters)}}
\textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{10006}
(\bibinfo{year}{2013}).
\bibitem{non-Mar1}
\bibinfo{author}{Huelga, S.~F.}, \bibinfo{author}{Rivas, {\'{A}}.} \&
\bibinfo{author}{Plenio, M.~B.}
\newblock \bibinfo{title}{Non-{Markovianity}-assisted steady state
entanglement}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{108}}, \bibinfo{pages}{160402}
(\bibinfo{year}{2012}).
\bibitem{blaisPRA}
\bibinfo{author}{Blais, A.}, \bibinfo{author}{Huang, R.-S.},
\bibinfo{author}{Wallraff, A.}, \bibinfo{author}{Girvin, S.~M.} \&
\bibinfo{author}{Schoelkopf, R.~J.}
\newblock \bibinfo{title}{Cavity quantum electrodynamics for superconducting
electrical circuits: an architecture for quantum computation}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{69}},
\bibinfo{pages}{062320} (\bibinfo{year}{2004}).
\bibitem{petru}
\bibinfo{author}{Breuer, H.-P.} \& \bibinfo{author}{Petruccione, F.}
\newblock \emph{\bibinfo{title}{The Theory of Open Quantum Systems}}
(\bibinfo{publisher}{Oxford University Press}, \bibinfo{address}{Oxford, New
York}, \bibinfo{year}{2002}).
\bibitem{pseu1}
\bibinfo{author}{Garraway, B.~M.}
\newblock \bibinfo{title}{Nonperturbative decay of an atomic system in a
cavity}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{55}},
\bibinfo{pages}{2290} (\bibinfo{year}{1997}).
\bibitem{pseu2}
\bibinfo{author}{Garraway, B.~M.}
\newblock \bibinfo{title}{Decay of an atom coupled strongly to a reservoir}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{55}},
\bibinfo{pages}{4636} (\bibinfo{year}{1997}).
\bibitem{coher}
\bibinfo{author}{Baumgratz, T.}, \bibinfo{author}{Cramer, M.} \&
\bibinfo{author}{Plenio, M.~B.}
\newblock \bibinfo{title}{Quantifying coherence}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{140401}
(\bibinfo{year}{2014}).
\bibitem{breuer2009PRL}
\bibinfo{author}{Breuer, H.-P.}, \bibinfo{author}{Laine, E.-M.} \&
\bibinfo{author}{Piilo, J.}
\newblock \bibinfo{title}{{Measure for the degree of non-Markovian behavior of
quantum processes in open systems}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{103}}, \bibinfo{pages}{210401}
(\bibinfo{year}{2009}).
\bibitem{lorenzoPRA}
\bibinfo{author}{Lorenzo, S.}, \bibinfo{author}{Plastina, F.} \&
\bibinfo{author}{Paternostro, M.}
\newblock \bibinfo{title}{{Geometrical characterization of non-Markovianity}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{88}},
\bibinfo{pages}{020102(R)} (\bibinfo{year}{2013}).
\bibitem{rivas2010PRL}
\bibinfo{author}{Rivas, {\'{A}}.}, \bibinfo{author}{Huelga, S.~F.} \&
\bibinfo{author}{Plenio, M.~B.}
\newblock \bibinfo{title}{{Entanglement and non-Markovianity of quantum
evolutions}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{105}}, \bibinfo{pages}{050403}
(\bibinfo{year}{2010}).
\bibitem{bylicka2014}
\bibinfo{author}{Bylicka, B.}, \bibinfo{author}{Chru{\'{s}}ci{\'{n}}ski, D.} \&
\bibinfo{author}{Maniscalco, S.}
\newblock \bibinfo{title}{{Non-Markovianity and reservoir memory of quantum
channels: a quantum information theory perspective}}.
\newblock \emph{\bibinfo{journal}{Sci. Rep.}} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{5720} (\bibinfo{year}{2014}).
\bibitem{lofrancoManPRA}
\bibinfo{author}{Man, Z.-X.}, \bibinfo{author}{Xia, Y.-J.} \&
\bibinfo{author}{{Lo Franco}, R.}
\newblock \bibinfo{title}{{Harnessing non-Markovian quantum memory by
environmental coupling}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{012315} (\bibinfo{year}{2015}).
\bibitem{lofranco2012PhysScripta}
\bibinfo{author}{{Lo Franco}, R.}, \bibinfo{author}{{D'Arrigo}, A.},
\bibinfo{author}{Falci, G.}, \bibinfo{author}{Compagno, G.} \&
\bibinfo{author}{Paladino, E.}
\newblock \bibinfo{title}{Entanglement dynamics in superconducting qubits
affected by local bistable impurities}.
\newblock \emph{\bibinfo{journal}{Phys. Scr.}} \textbf{\bibinfo{volume}{T147}},
\bibinfo{pages}{014019} (\bibinfo{year}{2012}).
\bibitem{darrigo2014IJQI}
\bibinfo{author}{{D'Arrigo}, A.}, \bibinfo{author}{{Lo Franco}, R.},
\bibinfo{author}{Benenti, G.}, \bibinfo{author}{Paladino, E.} \&
\bibinfo{author}{Falci, G.}
\newblock \bibinfo{title}{Hidden entanglement, system-environment information
flow and non-{Markovianity}}.
\newblock \emph{\bibinfo{journal}{Int. J. Quantum Inf.}}
\textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{1461005}
(\bibinfo{year}{2014}).
\bibitem{darrigo2013hidden}
\bibinfo{author}{{D'Arrigo}, A.}, \bibinfo{author}{{Lo Franco}, R.},
\bibinfo{author}{Benenti, G.}, \bibinfo{author}{Paladino, E.} \&
\bibinfo{author}{Falci, G.}
\newblock \bibinfo{title}{Hidden entanglement in the presence of random
telegraph dephasing noise}.
\newblock \emph{\bibinfo{journal}{Phys. Scr.}} \textbf{\bibinfo{volume}{T153}},
\bibinfo{pages}{014014} (\bibinfo{year}{2013}).
\bibitem{bellomo2008bell}
\bibinfo{author}{Bellomo, B.}, \bibinfo{author}{{Lo Franco}, R.} \&
\bibinfo{author}{Compagno, G.}
\newblock \bibinfo{title}{Dynamics of non-classically-reproducible
entanglement}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{062309} (\bibinfo{year}{2008}).
\bibitem{ban1}
\bibinfo{author}{Ban, M.}, \bibinfo{author}{Kitajima, S.} \&
\bibinfo{author}{Shibatay, F.}
\newblock \bibinfo{title}{Decoherence of quantum information in the
non-{Markovian} qubit channel}.
\newblock \emph{\bibinfo{journal}{J. Phys. A: Math. Gen.}}
\textbf{\bibinfo{volume}{38}}, \bibinfo{pages}{7161} (\bibinfo{year}{2005}).
\bibitem{ban2}
\bibinfo{author}{Ban, M.}
\newblock \bibinfo{title}{Decoherence of continuous variable quantum
information in non-{Markovian} channels}.
\newblock \emph{\bibinfo{journal}{J. Phys. A: Math. Gen.}}
\textbf{\bibinfo{volume}{39}}, \bibinfo{pages}{1927} (\bibinfo{year}{2006}).
\bibitem{liuPRA}
\bibinfo{author}{Liu, K.-L.} \& \bibinfo{author}{Goan, H.-S.}
\newblock \bibinfo{title}{{Non-{Markovian} entanglement dynamics of quantum
continuous variable systems in thermal environments}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{76}},
\bibinfo{pages}{022312} (\bibinfo{year}{2007}).
\bibitem{yonac}
\bibinfo{author}{Yonac, M.}, \bibinfo{author}{Yu, T.} \&
\bibinfo{author}{Eberly, J.~H.}
\newblock \bibinfo{title}{Sudden death of entanglement of two
{Jaynes}-{Cummings} atoms}.
\newblock \emph{\bibinfo{journal}{J. Phys. B: At. Mol. Opt. Phys.}}
\textbf{\bibinfo{volume}{39}}, \bibinfo{pages}{S621} (\bibinfo{year}{2006}).
\bibitem{manNJP}
\bibinfo{author}{Man, Z.~X.}, \bibinfo{author}{Xia, Y.~J.} \&
\bibinfo{author}{An, N.~B.}
\newblock \bibinfo{title}{Entanglement measure and dynamics of multiqubit
systems: non-{Markovian} versus {Markovian} and generalized monogamy
relations}.
\newblock \emph{\bibinfo{journal}{New J. Phys.}} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{033020} (\bibinfo{year}{2010}).
\bibitem{baiPRL}
\bibinfo{author}{Bai, Y.~K.}, \bibinfo{author}{Xu, Y.~F.} \&
\bibinfo{author}{Wang, Z.~D.}
\newblock \bibinfo{title}{General monogamy relation for the entanglement of
formation in multiqubit systems}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{100503}
(\bibinfo{year}{2014}).
\bibitem{baiPRA}
\bibinfo{author}{Bai, Y.~K.}, \bibinfo{author}{Ye, M.~Y.} \&
\bibinfo{author}{Wang, Z.~D.}
\newblock \bibinfo{title}{Entanglement monogamy and entanglement evolution in
multipartite systems}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{80}},
\bibinfo{pages}{044301} (\bibinfo{year}{2009}).
\bibitem{Wootters98}
\bibinfo{author}{Wootters, W.~K.}
\newblock \bibinfo{title}{Entanglement of formation of an arbitrary state of
two qubits}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{2245--2248}
(\bibinfo{year}{1998}).
\bibitem{steffenarxiv}
\bibinfo{author}{Bronn, N.~T.} \emph{et~al.}
\newblock \bibinfo{title}{Reducing spontaneous emission in circuit quantum
electrodynamics by a combined readout/filter technique}.
\newblock \emph{\bibinfo{journal}{Preprint at arXiv:1504.04353}}
(\bibinfo{year}{2015}).
\bibitem{schoelkopfarxiv}
\bibinfo{author}{Vlastakis, B.} \emph{et~al.}
\newblock \bibinfo{title}{Violating {Bell's} inequality with an artificial atom
and a cat state in a cavity}.
\newblock \emph{\bibinfo{journal}{Preprint at arXiv:1504.02512}}
(\bibinfo{year}{2015}).
\bibitem{leekPRL}
\bibinfo{author}{Leek, P.~J.} \emph{et~al.}
\newblock \bibinfo{title}{Cavity quantum electrodynamics with separate photon
storage and qubit readout modes}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{100504}
(\bibinfo{year}{2010}).
\bibitem{finkNature}
\bibinfo{author}{Fink, J.~M.} \emph{et~al.}
\newblock \bibinfo{title}{Climbing the {Jaynes}-{Cummings} ladder and observing
its {$\sqrt{n}$} nonlinearity in a cavity {QED} system}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{454}},
\bibinfo{pages}{315} (\bibinfo{year}{2008}).
\bibitem{dicarloNature}
\bibinfo{author}{DiCarlo, L.} \emph{et~al.}
\newblock \bibinfo{title}{Demonstration of two-qubit algorithms with a
superconducting quantum processor}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{460}},
\bibinfo{pages}{240} (\bibinfo{year}{2009}).
\bibitem{horodecki2009RMP}
\bibinfo{author}{Horodecki, R.}, \bibinfo{author}{Horodecki, P.},
\bibinfo{author}{Horodecki, M.} \& \bibinfo{author}{Horodecki, K.}
\newblock \bibinfo{title}{Quantum entanglement}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{865--942}
(\bibinfo{year}{2009}).
\bibitem{brunnerRMP}
\bibinfo{author}{Brunner, N.}, \bibinfo{author}{Cavalcanti, D.},
\bibinfo{author}{Pironio, S.}, \bibinfo{author}{Scarani, V.} \&
\bibinfo{author}{Wehner, S.}
\newblock \bibinfo{title}{Bell nonlocality}.
\newblock \emph{\bibinfo{journal}{Rev. Mod. Phys.}}
\textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{419} (\bibinfo{year}{2014}).
\bibitem{niemckzyk}
\bibinfo{author}{Niemczyk, T.} \emph{et~al.}
\newblock \bibinfo{title}{{Circuit quantum electrodynamics in the
ultrastrong-coupling regime}}.
\newblock \emph{\bibinfo{journal}{Nature Phys.}} \textbf{\bibinfo{volume}{6}},
\bibinfo{pages}{772?776} (\bibinfo{year}{2010}).
\bibitem{werlangNoRWA}
\bibinfo{author}{Werlang, T.}, \bibinfo{author}{Dodonov, A.~V.},
\bibinfo{author}{Duzzioni, E.~I.} \& \bibinfo{author}{Villas-B{\^{o}}as,
C.~J.}
\newblock \bibinfo{title}{{Rabi model beyond the rotating-wave approximation:
Generation of photons from vacuum through decoherence}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{053805} (\bibinfo{year}{2008}).
\bibitem{laurapseudo}
\bibinfo{author}{Mazzola, L.}, \bibinfo{author}{Maniscalco, S.},
\bibinfo{author}{Piilo, J.}, \bibinfo{author}{Suominen, K.-A.} \&
\bibinfo{author}{Garraway, B.~M.}
\newblock \bibinfo{title}{Pseudomodes as an effective description of memory:
Non-{Markovian} dynamics of two-state systems in structured reservoirs}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{80}},
\bibinfo{pages}{012104} (\bibinfo{year}{2009}).
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
The concept of a C-class of differential equations goes back to E. Cartan with the upshot that generic equations in a C-class can be solved without integration.
While Cartan's definition was in terms of differential invariants being first integrals,
all results exhibiting C-classes that we are aware of are based on the fact that a canonical Cartan geometry associated to the equations in the class descends to the space
of solutions. For sufficiently low orders, these geometries belong
to the class of parabolic geometries and the results follow from
the general characterization of geometries descending to a twistor
space.
In this article we answer the question of whether a canonical Cartan
geometry descends to the space of solutions in the remaining cases of scalar ODE
of order at least four and of systems of ODE of order at least three.
As in the lower order cases, this is characterized by the vanishing of the
generalized Wilczynski invariants, which are defined via the linearization
at a solution. The canonical Cartan geometries (which are not parabolic
geometries) are a slight variation of those available in the literature
based on a recent general construction. All the verifications needed to
apply this construction for the classes of ODE we study are carried out
in the article, which thus also provides a complete alternative proof for
the existence of canonical Cartan connections associated to higher order
(systems of) ODE.
\end{abstract}
\title{On C-class equations}
\section{Introduction}
Consider a (system of) $(n+1)$-st order ODE $\cE$ given by
\begin{align}
\bu^{(n+1)} = {\bf f}(t,\bu,\bu',...,\bu^{(n)}), \label{E:ODE}
\end{align}
where $\bu^{(k)}$ is the $k$-th derivative of $\bu = (u^1,...,u^m)$
with respect to $t$. In a short paper \cite{Car1938} in 1938, \'Elie
Cartan defined the following notion: {\em ``A given class of ODE
\eqref{E:ODE} will be said to be a C-class if there exists an
infinite group (in the sense of Lie) $\fG$ transforming equations
of the class into equations of the class and such that the
differential invariants with respect to $\fG$ of an equation of the
class be first integrals of the equation.''} Here, $\fG$ is a
prescribed (local) Lie transformation pseudogroup, e.g.\ contact
transformations $\fC$ or point transformations $\fP$. (Recall that
by B\"acklund's theorem, $\fC$ is identified with $\fP$ when $m > 1$,
but these are distinct in the case of scalar equations.)
Cartan gave two examples of C-classes in the context of: (i) scalar
3rd order ODE up to $\fC$; (ii) scalar 2nd order ODE up to
$\fP$. These examples were based on an equivalent description as
``espaces g\'en\'eralis\'es''. In modern language, one represents the equation as a submanifold $\cE$ in an appropriate jet space and endows it with a canonical Cartan geometry $(\cG \to \cE, \omega)$ (see \S \ref{S:Cartan}). A canonical Cartan connection $\omega$ can be obtained
using only linear algebra or differentiation via, for example,
Cartan's method of equivalence. In particular, integration is not
needed. Since a Cartan connection provides a distinguished coframing
on a principal bundle $\cG$ over the ODE $\cE$, differential invariants of the original ODE
structure arise from the components of its curvature (and its
covariant derivatives). If one knows a priori that all differential
invariants are first integrals, and there are sufficiently many
functionally independent ones, then these can be used to solve the
ODE. Consequently, the utility of searching for C-classes becomes
readily apparent: {\em generic C-class ODE can be solved without
integration}.
More recently, R.~Bryant identified in \cite{Bry1991} a C-class
within 4th order scalar ODE (up to $\fC$), and the concept of
torsion-free path geometries (in the sense of Fels--torsion, see
\cite{Fels1995}) from D.~Grossman's article \cite{Gro2000} describes
a C-class for 2nd order systems (up to $\fC \cong \fP$).
The foundations of the geometric study of systems of ODEs of higher order via Cartan connections were developed by N. Tanaka and were published recently as technical reports \cite{Tanaka2017a, Tanaka2017b}. In \cite[Part I, Chapter VII]{Tanaka2017a}, Tanaka gives an interpretation of higher order systems of ODEs as $G_0$ structures on filtered manifolds (cf.\ Section 2 of this paper) and constructs a scalar product to define the normalization conditions for the associated Cartan connection (cf.\ Section 3). The detailed exposition of this approach can be found in \cite{DKM1999}. In \cite{Tanaka2017b}, Tanaka also establishes the foundations for the integration of what he calls ``foliated'' Cartan connections.
As shown in \cite{DKM1999}, both scalar ODE (of order at least 3) up
to $\fC$ and systems of ODE (of order at least 2) up to $\fC \cong
\fP$ admit an equivalent description via a canonical Cartan geometry\footnote{It is well known that all scalar 2nd order ODE are (locally) equivalent up to $\fC$. Regarding them up to $\fP$ also leads to a canonical Cartan geometry, but this is exceptional from the point of view of our formulations, so it will henceforth be excluded in this article.}
$(\cG \to \cE, \omega)$ of type $(G,P)$ for an appropriate Lie group
$G$ and closed subgroup $P\subset G$. (We caution that the existence
of canonical Cartan connections with respect to an arbitrary
pseudo--group $\fG$ is not known.) In the geometric description of
the ODE $\cE$, the solution space $\cS$ corresponds to the space of
integral curves in $\cE$ of a certain distinguished line field
$E\subset T\cE$, i.e.\ $\cS \cong \cE / E$. On the homogeneous model
$G/P$ of the geometry, the space $\cS$ is given as $G/Q$ for a
subgroup $Q\subset G$ containing $P$. Hence, a natural question
arises: for the given ODE, does the canonical Cartan geometry $(\cG
\to \cE, \omega)$ of type $(G,P)$ descend to a Cartan geometry $(\cG
\to \cS, \omega)$ of type $(G,Q)$? If so, then all differential
invariants of $\omega$ will be well-defined functions on $\cS$, i.e.\
they will be constant on solutions, hence they are necessarily first
integrals. Thus, such ODE $\cE$ will define a C-class. On the other
hand, this is a natural way to obtain geometric structures on the
solution space, which are an important topic in the geometric theory
of differential equations \cite{DT2006,Nur2009,GN2010,Kry2010,DG2012}.
For the cases treated by Cartan and Grossman, the equivalent Cartan
geometry actually falls into the class of parabolic geometries. In
this setting, the solution space is a special instance of a twistor
space of a parabolic geometry and the fundamental question of whether
a parabolic geometry descends to a twistor space was studied in
\cite{Cap2005}. It turns out that this depends only on the Cartan
curvature, and, as observed in \cite{CS2009}, this remains true for
arbitrary Cartan geometries. For parabolic geometries, there is a
simpler geometric object than the Cartan curvature, which still is a
fundamental invariant, namely the so--called {\em harmonic
curvature}. Using the machinery of Bernstein--Gelfand--Gelfand
sequences (BGG sequences) from \cite{CSS2001} and \cite{CD2001}, it
was shown in \cite{Cap2005} that descending of the geometry can be
characterized in terms of this harmonic curvature. In particular,
this provides an alternative proof for the results by Cartan and
Grossman.
Our goal in this article is to extend the characterization of the
possibility of descending the Cartan geometry to the solution space
to higher order cases (which also recovers Bryant's result on
C-class from \cite{Bry1991}). There are natural candidates for
relative invariants whose vanishing should characterize this descent,
namely the {\em generalized Wilczynski invariants}. These
were introduced in \cite{Dou2008}, where it was shown that their
vanishing (i.e.\ {\em Wilczynski--flatness}) implies existence of a
certain geometric structure on the solution space.
For concreteness, let us recall how the generalized Wilczynski
invariants are defined. Consider a linear ODE system:
\[
\bu^{(n+1)}=P_n(t) \bu^{(n)}+\dots+P_0(t)\bu
\]
up to transformations $(t,\bu)\mapsto (\lambda(t),\mu(t)\bu)$, where
$\mu(t)\in \text{GL}(m)$. Any such system can be brought to the canonical
Laguerre--Forsyth form defined by: $P_n=0$ and $\tr(P_{n-1})=0$.
As proven by Wilczynski~\cite{Wilc1905} for scalar ODE and
generalized by Se-ashi~\cite{Seashi1988} to systems of ODE, the
following expressions become fundamental invariants for the class of
linear equations (in Laguerre--Forsyth form) under the above class of transformations:
\[
\Theta_r = \sum_{j=1}^{r-1} (-1)^j
\frac{(2r-j-1)!(n-r+j)!}{(r-j)!(j-1)!} P_{n-r+j}^{(j-1)},
\]
for $r=2,\dots, n+1$. (Observe that $\Theta_2$ is trace--free
and thus vanishes for scalar ODE.)
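For instance, evaluating the sum for the two lowest invariants (a direct computation from the formula above, for $n\geq 2$) gives
\[
\Theta_2 = -2\,(n-1)!\,P_{n-1}, \qquad
\Theta_3 = 6\,(n-1)!\,P_{n-1}' - 12\,(n-2)!\,P_{n-2},
\]
so the trace-freeness of $\Theta_2$ is immediate from the normalization $\tr(P_{n-1})=0$.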
\begin{defn}\label{D:Wilc}
For \eqref{E:ODE}, the \emph{generalized Wilczynski invariants} $\cW_r$ for $r=2,\dots,n+1$ are defined as the invariants $\Theta_r$
evaluated at the linearization of the system. Formally, they are
obtained by substituting each $P_r(t)$ with the matrix
$\left(\frac{\partial {\bf f}}{\partial \bu^{(r)}}\right)$ and
replacing the usual derivative by the
total derivative
\[
\tfrac{d}{dt} = \tfrac{\partial}{\partial t} + \bu^{(1)} \tfrac{\partial}{\partial \bu} + ... + \bu^{(n)} \tfrac{\partial}{\partial \bu^{(n-1)}} + {\bf f} \tfrac{\partial}{\partial \bu^{(n)}}.
\]
\end{defn}
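As an illustration of this formal recipe, the invariants of a scalar equation can be computed symbolically. The following is only a sketch: the right-hand side below is a hypothetical example, and for systems one would use the matrices of partial derivatives instead of scalar ones.
\begin{verbatim}
import sympy as sp
from math import factorial

# Scalar 4th order ODE u'''' = f(t, u, u', u'', u'''), i.e. n = 3.
n = 3
t = sp.symbols('t')
u = sp.symbols('u0:4')              # u[k] stands for u^(k)
f = u[0]*u[3] + u[2]**2             # hypothetical right-hand side

def D(expr):
    # total derivative d/dt along the equation u'''' = f
    out = sp.diff(expr, t) + f*sp.diff(expr, u[n])
    for k in range(n):
        out += u[k+1]*sp.diff(expr, u[k])
    return out

def W(r):
    # generalized Wilczynski invariant: substitute P_k -> df/du^(k) and
    # replace ordinary derivatives by total derivatives in Theta_r
    P = [sp.diff(f, u[k]) for k in range(n+1)]
    res = sp.Integer(0)
    for j in range(1, r):
        c = (-1)**j * sp.Rational(factorial(2*r-j-1)*factorial(n-r+j),
                                  factorial(r-j)*factorial(j-1))
        term = P[n-r+j]
        for _ in range(j-1):
            term = D(term)
        res += c*term
    return sp.expand(res)

# Theta_2 vanishes identically for scalar ODE, so only r = 3, 4 are relevant here.
for r in range(3, n+2):
    print('W_%d =' % r, W(r))
\end{verbatim}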
Our main problem thus is to relate the Wilczynski invariants to the
curvature of the canonical Cartan geometry (which is not a parabolic
geometry for higher-order cases) and to prove that vanishing of these
invariants implies the necessary algebraic restrictions on this
curvature. Now it has been known that there is an analogue of harmonic
curvature for the Cartan geometries constructed in \cite{DKM1999}, and
the Wilczynski invariants were identified as certain components of
this harmonic curvature. However, without having the machinery of BGG
sequences at hand, it is very hard to systematically deduce restrictions on the curvature from restrictions on the harmonic curvature. In the special case of scalar 7th order ODE, this was sorted out in \cite{DG2012} using direct computations that were not reproduced in the article.
To be able to apply BGG--like arguments, we use a small variation
of the canonical Cartan connection from \cite{DKM1999}. This is based
on the recent general construction of canonical Cartan connections
associated to filtered geometric structures in \cite{Cap2016}. This
has the advantage of a simpler characterization of the canonical
Cartan connection and of stronger uniqueness results. All
verifications needed to apply this general theory to the case of
(systems of) ODE are carried out in our article, so we obtain a
complete alternative proof of existence of canonical Cartan
connections associated to (systems of) higher order ODE.
The proof of the main result of this paper (Theorem \ref{T:Wilc-C}) is
based on arguments similar to the ones used in the recent versions of
the BGG machinery, see \S 4.9 and \S 4.10 of \cite{CSo2017}. Together
with the results of Cartan and Grossman from \cite{Car1938} and
\cite{Gro2000} (or the ones from \cite{Cap2005}), we obtain:
\begin{theorem}
The following families of equations and pseudogroups form C-classes:
\begin{itemize}
\item scalar ODE of order $\ge 3$ (viewed up to contact transformations) with vanishing generalized Wilczynski invariants;
\item systems of ODE of order $\ge 2$ (viewed up to point transformations) with vanishing generalized Wilczynski invariants.
\end{itemize}
\end{theorem}
Let us briefly describe the structure of the paper. In \S 2, we show
that ODE can be described as filtered geometric structures and
analyze the trivial equation to obtain the Lie groups and Lie algebras
needed for a description as a Cartan geometry. We also discuss the
space of solutions and the concept of C-class in this setting
(Definition \ref{D:C-class}). The verifications needed to apply the
constructions of canonical Cartan connections from \cite{Cap2016} are
carried out in \S 3. These are purely algebraic, partly using
finite--dimensional representation theory. At the end of the section,
we give examples of homogeneous C-class ODE. In \S 4, we
relate the Wilczynski invariants to the curvature of the canonical
Cartan connection and prove our main result. It is worth mentioning
here that not all the filtered
geometric structures of the type we use are obtained from ODE (see
Remark \ref{R:strong-reg} and the example related to $G_2$ in \S
\ref{S:hom-C-class}). Our results continue to hold for these more
general structures, provided one uses the description of Wilczynski
invariants in Theorem \ref{T:Wilc} as a definition in this more
general setting.
\section{Invariants and C-class via Cartan connections}
Our results are based on an equivalent description of (systems of)
ODE as Cartan geometries, which is a variant of the one in
\cite{DKM1999}. This in turn is derived from an equivalent
description as a filtered analogue of a G--structure, which we discuss
first.
\subsection{ODE as filtered $G_0$--structures}
\label{S:geo-str}
Consider the jet spaces $J^\ell = J^\ell(\bbR,\bbR^m)$, with
projections $\pi^\ell_k : J^\ell \to J^k$ $(k < \ell)$, and standard
adapted coordinates $(t,\bu_0,\bu_1,...,\bu_\ell)$, where $\bu_j =
(u^1_j,...,u^m_j)$ refers to the $j$--th derivative of $\bu(t) =
(u^1(t),...,u^m(t))$. For $\ell \geq 1$, the (rank $m+1$) contact subbundle is $C \subset T J^\ell$, which is locally the annihilator of (the
components of)
\[
\theta_0 = d\bu_0 - \bu_1 dt, \quad
\theta_1 = d\bu_1 - \bu_2 dt, \quad ..., \quad
\theta_{\ell-1} = d\bu_{\ell-1} - \bu_\ell dt.
\]
Its weak derived flag yields a filtration by subbundles $C =:\, C^{-1}
\subset C^{-2} \subset ... \subset C^{-\ell-1} := TJ^\ell$, with
$C^i$ having corank $m$ in $C^{i-1}$, and
\[
C^i = \tspan\{ \partial_t + \bu_1 \partial_{\bu_0} + ... + \bu_{\ell+1+i} \partial_{\bu_{\ell+i}}, \,\, \partial_{\bu_\ell}, \,\, ...,\,\, \partial_{\bu_{\ell+1+i}} \}
\]
for $i=-1,...,-\ell$. The Lie bracket satisfies
$[\Gamma(C^i),\Gamma(C^j)] \subset \Gamma(C^{i+j})$, and so
$(J^\ell,\{ C^i \})$ becomes a filtered manifold. In fact,
$[\Gamma(C^i),\Gamma(C^j)] \subset \Gamma(C^{\min(i,j)-1})$, which is
a stronger condition if $i,j \leq -2$.
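For instance, specializing the formula for $C^i$ above to $\ell = 3$ and $m = 1$ gives
\begin{align*}
C^{-1} &= \tspan\{\partial_t + \bu_1 \partial_{\bu_0} + \bu_2 \partial_{\bu_1} + \bu_3 \partial_{\bu_2},\,\, \partial_{\bu_3}\},\\
C^{-2} &= C^{-1} \op \tspan\{\partial_{\bu_2}\}, \qquad
C^{-3} = C^{-2} \op \tspan\{\partial_{\bu_1}\}, \qquad
C^{-4} = TJ^3,
\end{align*}
with each step adding one further vertical direction.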
We will exclusively study ODE under {\em contact transformations}.
These are diffeomorphisms $\Phi : J^\ell \to J^\ell$ such that
$\Phi_*(C) = C$. By B\"acklund's theorem, $\Phi$ is the prolongation
of a contact transformation on $J^1$. Moreover, if $m > 1$, the
latter is the prolongation of a diffeomorphism on $J^0$, i.e.\ a {\em
point} transformation.
Suppose $n \geq 2$. The $(n+1)$-st order ODE \eqref{E:ODE}
corresponds to a submanifold $\cE \subset J^{n+1}$ transverse to
$\pi^{n+1}_n$, so $\cE$ is locally diffeomorphic to $J^n$. For $\ell
= 1,...,n$, the contact subbundle on $J^\ell$ is preserved by contact
transformations, and its preimage under $\pi^{n+1}_\ell|_\cE$ yields
a subbundle $T^{\ell - n-1} \cE \subset T\cE$. The weak derived flag
of $D := T^{-1} \cE$ also gives rise to these same filtration
components:
\begin{align} \label{E:ODE-filtration}
T^{-1}\cE \subset T^{-2}\cE \subset ... \subset T^{-n}\cE \subset T^{-n-1}\cE := T\cE.
\end{align}
As for jet-spaces, $(\cE,\{ T^i \cE \})$ is a filtered manifold with
\begin{align} \label{E:strong-bracket}
[\Gamma(T^i \cE), \Gamma(T^j \cE)] \subset \Gamma(T^{\min(i,j)-1}\cE) \subset \Gamma(T^{i+j}\cE).
\end{align}
Further, there are distinguished subbundles $E \subset D$ and $F^{i}
\subset T^{i} \cE$:
\begin{itemize}
\item $E = \tspan\{ \frac{d}{dt} := \partial_t + \bu_1
\partial_{\bu_0} + ... + \bu_n \partial_{\bu_{n-1}} + {\bf f}
\partial_{\bu_n} \}$ is the annihilator of the
pullbacks of $\theta_0,...,\theta_n$ on $J^{n+1}$ to $\cE$.
\item $F^{i} = \tspan\{ \partial_{\bu_n}, ..., \partial_{\bu_{n+1+i}}
\}$ is the (involutive) vertical bundle for
$\pi^{n+1}_{n+i}|_\cE$. By B\"acklund's theorem, $F:= F^{-1}
\subset ... \subset F^{-(n-1)}$ are distinguished; $F^{-n}$ is
distinguished for $m > 1$. These give corresponding splittings
$T^{i} \cE = E \op F^{i}$.
\end{itemize}
For $x \in \cE$, define $\fm_i(x) := T^i_x \cE / T^{i+1}_x \cE$,
and induce a tensorial (``Levi'') bracket on $\fm(x) = \bop_{i < 0}
\fm_i(x)$ from the Lie bracket of vector fields. This nilpotent
graded Lie algebra (NGLA) is the {\em symbol algebra} at $x$ of
$(\cE,D)$, and its NGLA isomorphism type is independent of $x$, so let $\fm$ denote a fixed NGLA with $\fm \cong \fm(x)$ for any $x$.
Moreover, it is the same for all ODE \eqref{E:ODE}, and we describe
it in \S \ref{S:triv-eq} below.
All NGLA isomorphisms from $\fm$ to some $\fm(x)$ comprise the total space
of a natural frame bundle $F_{\tgr}(\cE) \to \cE$. This has
structure group $\Aut_{\tgr}(\fm)$, which naturally injects into
$\text{GL}(\fm_{-1}) \cong \text{GL}_{m+1}$ since $\fm_{-1}$ generates all of
$\fm$, reflecting the fact that $D$ is ``bracket-generating''. The splitting $D = E
\op F$ is encoded via reduction to a subbundle $\cG_0 \to \cE$ with
structure group $G_0 = \bbR^\times \times \text{GL}_m$ embedded as
diagonal blocks in $\text{GL}_{m+1}$.
Fixing $\fm$ and $G_0 \subset \Aut_{\tgr}(\fm)$ as above, a {\em
filtered $G_0$-structure} consists of:
\begin{itemize}
\item[(i)] a filtered manifold $(M,\{ T^i M \}_{i < 0})$ whose symbol algebras form a locally trivial bundle with model algebra $\fm$;
\item[(ii)] a reduction of structure group of $F_{\tgr}(M) \to M$ to a
principal $G_0$-bundle $\cG_0 \to M$.
\end{itemize}
Note that (i) implies that $T^{-1} M$ is of constant rank and
bracket-generating in $TM$. As described above, any ODE $\cE$ yields a
filtered $G_0$-structure. These are not the most general instances of
such structures, however, since the splittings $T^{i} \cE = E \op
F^{i}$ for $i=-2,...,-(n-1)$ (and for $i=-n$ if $m > 1$) are an
additional input. The following discussion in fact applies to all
filtered $G_0$--structures, and not only to those defined by (systems of)
ODE.
\subsection{The trivial ODE} \label{S:triv-eq}
We exclude the cases of scalar 3rd order ODE and of (systems of) 2nd
order ODE as these lead to parabolic geometries, which are
structurally different. So suppose that $n \geq 2$ and $m \geq 1$, with $(n,m) \neq\,\, (2,1)$. Then the contact symmetry algebra $\fg$ of
the trivial ODE $\bu^{(n+1)} = 0$ consists entirely of the
(prolonged) point symmetries:
\begin{align} \label{E:triv-sym}
\partial_{u^a}, \,\, t\partial_{u^a}, \,\, ..., \,\, t^n
\partial_{u^a}, \,\, \partial_t, \,\, t\partial_t, \,\,
u^b\partial_{u^a}, \,\, t^2 \partial_t + ntu^a\partial_{u^a},
\end{align}
where $1 \leq a,b \leq m$. Abstractly, $\fg = \fq \ltimes \fa$,
where $\fq = \fsl_2 \times \fgl_m$ acts on the abelian ideal $\fa =
V_n \otimes W$, with $V_n = S^n(\bbR^2)$ as an $\fsl_2$-module and $W
= \bbR^m$. Take a basis $\{ \sfx,\sfy \}$ on $\bbR^2$ and the standard
$\fsl_2$-basis
\[
\sfX = \sfx \partial_{\sfy}, \quad
\sfH = \sfx \partial_{\sfx} - \sfy \partial_{\sfy}, \quad
\sfY = \sfy \partial_{\sfx}.
\]
On $V_n$, use the basis $\sfv^i = \frac{1}{i!} \sfx^{n-i} \sfy^i$,
where $0 \leq i \leq n$. Let $\{ \sfe_a \}$ and $\{ \sfe^a_b \}$ be
the standard bases on $\bbR^m$ and $\fgl_m$, which satisfy $\sfe^a_b
\sfe_c = \delta^a_c \sfe_b$.
The prolongation to $J^{n+1}$ of \eqref{E:triv-sym} shows that $\fg$
is infinitesimally transitive on $\cE \subset J^{n+1}$, with
isotropy subalgebra $\fp \subset \fg$ at $o = \{ t=0, \bu_0 = ... =
\bu_n = 0 \} \in \cE$ spanned by $2t\partial_t, \,\,
u^b\partial_{u^a}, \,\, t^2 \partial_t + nt u^a\partial_{u^a}$.
Abstractly, $\fp$ is spanned by $\sfH, \fgl_m, \sfY$. The filtration
\eqref{E:ODE-filtration} induces ($\fp$-invariant) filtrations on
$\fg / \fp \cong T_o \cE$ and $\fg$:
\[
\fg^{-n-1} = \fg\supset \fg^{-n} \supset \dots \supset \fg^{-1} \supset \fg^0
= \fp \supset \fg^1\supset \{ 0 \},
\]
and we put $\fg^i = \{ 0 \}$ for $i \geq 2$, and $\fg^i = \fg$ for $i
\leq -n-1$. In particular, $\fg^{-1} / \fp \cong D_o = E_o \op F_o$,
with $E_o \cong \bbR \sfX$ and $F_o \cong \bbR\sfy^n \otimes W$
(modulo $\fp$), while $\fg^1 = \bbR\sfY$ is distinguished as those
elements of $\fp$ whose bracket with $\fg^{-1}$ lies in $\fp$.
Viewed concretely, $E_o,F_o,\fg^1$ are respectively spanned by (the
prolongations of) $\partial_t$, $t^n \partial_{u^a}$, and
$t^2 \partial_t + nt u^a\partial_{u^a}$.
The associated graded $\tgr(\fg) = \bop_{i \in \bbZ} \tgr_i(\fg)$,
defined by $\tgr_i(\fg) := \fg^i / \fg^{i+1}$, is a graded Lie
algebra with $\fm := \tgr_-(\fg)$ a NGLA. The symbol algebra (\S
\ref{S:geo-str}) of $(\cE,D)$ associated to {\em any} ODE \eqref{E:ODE}
is isomorphic to $\fm$. On $\tgr(\fg)$, the induced $\fp$-action has
$\fg^1 \subset \fp$ acting trivially, so $\tgr_0(\fg) = \fg^0 /
\fg^1$ acts on $\tgr(\fg)$ by grading-preserving derivations.
It is convenient to introduce a grading directly on $\fg$, but since
this is not $\fp$-invariant, it should only be regarded as an
auxiliary structure. Consider $\sfZ = -\frac{\sfH}{2} - ( 1 +
\frac{n}{2} ) \id_m$. The eigenvalues of $\ad_\sfZ$ introduce a Lie
algebra grading $\fg = \fg_{-n-1} \op ... \op \fg_0 \op \fg_1$, so
each $\fg_i$ is a $\fg_0$-module. This satisfies $\fg^i = \bop_{j
\geq i} \fg_j$ so that $\tgr_i(\fg) \cong \fg_i$. As vector
spaces,
\begin{align}
\begin{split}
\fg_1 &\cong \bbR \sfY \label{E:grg}\\
\fg_0 &\cong \bbR \sfH \op \fgl_m\\
\fg_{-1} &\cong \bbR \sfX \op (\bbR \sfv^n \otimes W) \\
\fg_i &\cong \bbR \sfv^{n+1+i} \otimes W, \quad\quad i=-2, ..., -n-1.
\end{split}
\end{align}
(We caution that $\sfX \in \fg_{-1}$ has usual $\fsl_2$-weight $+2$.)
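For instance, a direct computation with $\sfv^i = \frac{1}{i!}\sfx^{n-i}\sfy^i$ (and the convention $\sfv^{-1} := 0$) gives
\[
[\sfX, \sfv^i \otimes w] = \sfv^{i-1}\otimes w, \qquad
[\sfY, \sfv^i \otimes w] = (n-i)(i+1)\,\sfv^{i+1}\otimes w,
\]
so bracketing with $\sfX \in \fg_{-1}$ lowers the grading by one step, and $\fg_{-1}$ indeed generates all of $\fg_-$, in accordance with \S \ref{S:geo-str}.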
To pass to the group level, consider the natural action of $\text{GL}_2
\times \text{GL}_m$ on $V_n \otimes W$ with kernel $T = \{ \lambda\,
\id_2 \times \lambda^{-n}\, \id_m : \lambda \in \bbR^\times \}$, and
$\LT_2 \subset \text{GL}_2$ (resp. $\LT^+_2$) the lower triangular
(resp. strictly lower triangular) matrices. Define
\begin{align}
\begin{split} \label{E:GP}
G &= (\text{GL}_2 \times \text{GL}_m) / T \ltimes (V_n \otimes W),\\
P &= (\LT_2 \times \text{GL}_m) / T,\\
P_+ &= \LT^+_2 / T.
\end{split}
\end{align}
Then $P_+ \subset P \subset G$ are closed subgroups in $G$
corresponding to $\fg^1 \subset \fg^0 \subset \fg$, with $P_+$ normal
in $P$. The adjoint action of $G$ restricts to a
filtration-preserving $P$-action on $\fg$, and $P_+$ consists exactly
of those elements for which the induced action on the associated graded
$\tgr(\fg)$ is trivial. Thus we obtain a natural induced
action of $G_0:=P/P_+$ on $\tgr(\fg)$. It is a familiar fact about
parabolic subgroups that the quotient projection $P\to G_0$
splits. Indeed, $G_0$ can be identified with the subgroup of those
elements of $P$ whose adjoint action preserves the grading on $\fg$,
and $(g,X)\mapsto g\exp(X)$ defines a diffeomorphism
$G_0\times\fg_1\to P$. In this picture, $G_0 \subset P$ is the direct
product of diagonal $2\times 2$ matrices and $\text{GL}_m$ (modulo
$T$). This Lie group $G_0$ is isomorphic to that used in \S
\ref{S:geo-str}, with $\bbR^\times$-factor there corresponding to
elements $\diag(\lambda,1) \times \id_m$ (modulo
The Lie algebra of $G_0$ is $\fg_0$. Collecting the results of this
section, we in particular easily get:
\begin{prop}\label{P:adm-pair}
For the Lie algebra $\fg$ and the group $P$ defined above,
$(\fg,P)$ is an admissible pair in the sense of Definition 2.5 of
\cite{Cap2016}. Moreover, the group $P$ is of split exponential
type in the sense of Definition 4.11 of that reference.
\end{prop}
\subsection{Canonical Cartan connections}\label{S:Cartan}
The equivalent description of (systems of) ODE as filtered
$G_0$--structures that we have derived so far in particular includes
a principal $G_0$--bundle $\cG_0\to\cE$. A particularly nice way to
obtain invariants in such a situation is to construct a canonical
Cartan geometry out of the filtered $G_0$--structure. In the language
of \cite{Cap2016}, we are looking for a Cartan geometry of type
$(\fg,P)$ (where $\fg$ and $P$ are as in \S \ref{S:triv-eq}
above), which makes sense on smooth manifolds $M$ of dimension
$\dim(\fg/\fp)$. Such a Cartan geometry then consists of a (right)
principal $P$--bundle $\cG \to M$ and a \textit{Cartan connection}
$\omega \in \Omega^1(\cG,\fg)$. This
means that $\omega$ satisfies
\begin{enumerate}
\item For any $u \in \cG$, $\omega_u : T_u \cG \to \fg$ is a linear
isomorphism;
\item $\omega$ is $P$-equivariant, i.e.\ $R_g^* \omega = \Ad_{g^{-1}}
\circ \omega$ for any $g \in P$;
\item $\omega$ reproduces the generators of the fundamental vector fields
$\zeta_A$, i.e.\ we have $\omega(\zeta_A) = A$ for any $A \in \fp$.
\end{enumerate}
The fundamental invariant available in this setting then is the
curvature $K \in \Omega^2(\cG,\fg)$ of $\omega$, which is defined by
$K(\xi,\eta) = d\omega(\xi,\eta) + [\omega(\xi),\omega(\eta)]$. The
two--form $K$ is $P$-equivariant and horizontal, and can be
equivalently encoded as the \textit{curvature function} $\kappa : \cG
\to \bigwedge^2 \fg^* \otimes \fg$, defined by $\kappa(A,B) =
K(\omega^{-1}(A), \omega^{-1}(B))$ for $A,B \in \fg$. The Cartan
connection $\omega$ is {\em regular} if $\kappa(\fg^i, \fg^j) \subset
\fg^{i+j+1}$ for all $i,j$.
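A standard example is the homogeneous model: taking $\cG = G$, $M = G/P$ and $\omega$ the Maurer--Cartan form of $G$, conditions (1)--(3) hold and the Maurer--Cartan equation yields
\[
K(\xi,\eta) = d\omega(\xi,\eta) + [\omega(\xi),\omega(\eta)] = 0,
\]
so the curvature measures the deviation of a Cartan geometry from its homogeneous model.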
As detailed in Theorem 2.9 of \cite{Cap2016}, any regular Cartan
geometry of type $(\fg,P)$ on a smooth manifold $M$ gives rise to an
underlying filtered $G_0$--structure. The filtration $\{ T^i M \}$
on $TM$ is obtained by projecting down the subbundles
$T^i\cG:=\omega^{-1}(\fg^i)\subset T\cG$. Regularity of $\omega$
implies that the symbol algebra $\tgr(TM)$ is everywhere
NGLA-isomorphic to $\tgr_-(\fg)$. The reduction of structure group is
then defined by the $G_0$-bundle $\cG_0 := \cG / P_+$.
Constructing a canonical Cartan connection means reversing this
process. Given a filtered $G_0$--structure on $M$, one tries to
extend the principal $G_0$--bundle $\cG_0\to M$ to a principal
$P$--bundle $\cG\to M$, and endow that bundle with a natural Cartan
connection. Such a construction was first obtained in \cite{DKM1999} for (systems of) ODE
based on the general theory developed in \cite{Mor1993}. Here we
follow the recent general construction in \cite{Cap2016}, which
provides a more explicit characterization of the canonical Cartan
connection via its curvature and stronger uniqueness results.
In view of Proposition \ref{P:adm-pair}, two more ingredients are
needed to apply the general results of \cite{Cap2016}. On the one
hand, we have to verify that the associated graded $\tgr(\fg)$ from
\S \ref{S:triv-eq} is the full prolongation of its non--positive
part (see Definition 2.10 of \cite{Cap2016}). On the other hand, we
have to construct an appropriate normalization condition to be
imposed on the curvature of the canonical Cartan connection. Both
these steps are purely algebraic and we will carry them out in
\S \ref{S:cd-norm} below. Using the results of Propositions
\ref{P:codiff} and \ref{P:Tanaka} from there, we can apply Theorem
4.12 of \cite{Cap2016} to obtain the following result.
\begin{theorem} \label{T:G0str-Cartan} Fix $\fg$ and $P$ as in
\S \ref{S:triv-eq}. Then there is an equivalence of
categories between filtered $G_0$--structures and regular, normal
Cartan geometries of type $(\fg,P)$.
\end{theorem}
\begin{remark}\label{R:strong-reg}
As described in \S \ref{S:geo-str}, ODE (considered up to
contact transformations) define filtered $G_0$--structures, but not
every filtered $G_0$--structure is of that form. This can be easily
seen from the curvature of the canonical Cartan connection. We claim
that for structures induced by ODE, we get a stronger version of
regularity. Indeed, in this case $\kappa(\fg^i,\fg^j) \subset
\fg^{\min(i,j)-1}$ for all $i,j<0$ and this is a proper subspace of
$\fg^{ i+j+1}$ if $i,j<-1$.
By definition of the curvature, we get
\begin{align} \label{E:kappa}
\kappa(\omega(\xi),\omega(\eta)) = \xi \cdot \omega(\eta) - \eta \cdot \omega(\xi) - \omega([\xi,\eta]) + [\omega(\xi),\omega(\eta)],
\end{align}
and if $\omega(\xi)$ has values in $\fg^i$ and $\omega(\eta)$ has values in $\fg^j$, then the first two summands on the right hand side have values in $\fg^{\min(i,j)-1}$. Next, because of the large abelian
ideal $\fa$, the Lie bracket on $\fg$ has the property that
$[\fg^i,\fg^j]\subset\fg^{\min(i,j)-1}$ for all $i,j<0$, which handles the last term on the right hand side. Hence, it remains to show that for structures coming from ODE, we also have $\omega([\xi,\eta])$ taking values in $\fg^{\min(i,j)-1}$.
For such structures, we have the
decomposition $T^i\cE=E\oplus F^i$ for all $i<0$ with $E\subset
T^{-1}\cE$ and $F^i$ involutive. Given a vector field $\xi$ on
$\cG$ such that $\omega(\xi)$ has values in $\fg^i$ for $i<0$, we
can correspondingly decompose $\xi=\xi_1+\xi_2$, where $\xi_1$ is a
lift of a section of $E\to\cE$ and $\xi_2$ lifts a section of
$F^i\to\cE$. Similarly decompose $\eta=\eta_1+\eta_2$ for
$\eta\in\fX(\cG)$ such that $\omega(\eta)$ has values in
$\fg^j$. Using that a Lie bracket of lifts is a lift of the Lie
bracket of the underlying fields, one easily verifies that all the
brackets $[\xi_i,\eta_j]$ are lifts of sections of
$T^{\min(i,j)-1}\cE$ (or of smaller filtration components). Thus
$\omega([\xi,\eta])$ has values in $\fg^{\min(i,j)-1}$,
which completes the argument.
All the further developments in this article make sense for
arbitrary filtered $G_0$--structures and not only for the ones
coming from ODE, provided that one uses the description of Wilczynski
invariants in Theorem \ref{T:Wilc} as a definition in the more
general setting.
\end{remark}
\subsection{The space of solutions and C-class}
\label{S:soln-space-geo-str}
In the description of \S \ref{S:geo-str}, it is clear how to
obtain the space of all solutions of \eqref{E:ODE}. The solutions are
the integral curves of the line bundle $E\subset T\cE$ spanned by
$\frac{d}{dt}$. Hence locally the space of solutions is the space of
leaves of the foliation defined by $E$. In the case of the trivial
equation $\bu^{(n+1)} = 0$, we obtain the solutions $\bu =
\sum_{i=0}^n \ba_i t^i$, where $\ba_i \in W = \bbR^m$ are
constant. Hence we obtain a global space $\cS$ of solutions in this
case and viewing $\cE$ as $G/P$, we see that $\cS=G/Q$, where $Q =
(\text{GL}_2 \times \text{GL}_m) / T \subset G$. This means that $\cS$ is the
homogeneous model for Cartan geometries of type $(G,Q)$. In
particular, the tangent bundle of $\cS$ is the homogeneous vector
bundle $G\times_Q(\fg/\fq)$, and as a $Q$--module, we get
$\fg/\fq\cong \fa = V_n\otimes W$.
This tensor decomposition of $\fg/\fq$ gives rise to a geometric structure on
$\cS$ that can be described by the corresponding decomposition of
the tangent bundle $T\cS$ into a tensor product. A simpler
description is provided by the distinguished variety in $\bbP(\fa)$
given as
\begin{align} \label{E:Segre}
\bbP^1\times \bbP^{m-1} \to \bbP(\fa), \quad
([b_0:b_1],[w]) \mapsto [(b_0\sfx+b_1\sfy)^n \otimes w].
\end{align}
Translating by $G$, one obtains a canonical isomorphic copy of this
variety in each tangent space of $\cS$. The resulting geometric
structure is called a {\em Segr\'e structure} (modelled on
\eqref{E:Segre}). When $m=1$, these structures are commonly called
{\em $\text{GL}_2$-structures}, but we will use the term Segr\'e
structure for all cases. Notice that this is a standard first order
structure corresponding to $Q\subset \text{GL}(\fa)$, without any additional
filtration on the tangent bundle.
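To illustrate \eqref{E:Segre} in the simplest case (this is only meant as an
illustration of the variety itself, independent of the range of orders relevant for
ODE), take $m=1$ and $n=2$. With respect to the basis $\sfx^2,\sfx\sfy,\sfy^2$ of
$V_2$, the map becomes
$$
[b_0:b_1]\ \mapsto\ [\,b_0^2: 2b_0b_1: b_1^2\,],
$$
whose image is the smooth conic $\{z_1^2=4z_0z_2\}\subset\bbP^2$; for $m=1$ and
general $n$ one obtains the rational normal curve in $\bbP^n$ in the same way.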
One may now ask whether similar phenomena occur for more general ODE, both on the level of Cartan geometries and on
the level of Segr\'e structures. On the latter level, this has been studied intensively in the literature in many special cases, see e.g. \cite{DT2006,Nur2009,GN2010,Kry2010,DG2012}. For our purposes, the results of \cite{Dou2008} are particularly relevant. In that article, it is shown in general that vanishing of the generalized Wilczynski invariants from Definition \ref{D:Wilc}
implies existence of a natural Segr\'e structure on the space of
solutions. The pullback of $T\cS$ to $\cE$ is naturally isomorphic to
$T\cE/E$. This is modelled on $\fa$, so on that level a
decomposition as a tensor product is available. The Wilczynski
invariants can be interpreted as obstructions to this decomposition
descending to a decomposition of $T\cS$, which is crucial for the
developments in \cite{Dou2008}, compare also to the proof of Theorem \ref{T:Wilc}.
On the level of Cartan geometries the question of descending is
closely related to the concept of C-class. The technical aspects of
this descending process are worked out in the case of parabolic
geometries in \cite{Cap2005}. As shown in \S 1.5.13 and 1.5.14
of \cite{CS2009}, the proofs in that article apply to general
groups. Consider an equation $\cE$ and a (local) space of solutions
$\cS$, i.e.~a local leaf space for $E\subset T\cE$. Descending of
the Cartan geometry $(\cG\to\cE,\omega)$ first requires that the
principal right action of $P$ on $\cG$ extends to a smooth action of
$Q\supset P$ which has the fields $\omega^{-1}(A)\in\fX(\cG)$ for $A \in \fq$ as
fundamental vector fields. If such an extension exists, then,
possibly shrinking $\cS$, one obtains a projection $\cG\to\cS$,
which is a $Q$--principal bundle. Next, one has to ask whether (the
restriction of) $\omega$ can be interpreted as a Cartan connection
on that principal $Q$--bundle, which boils down to the question of
$Q$--equivariance. Surprisingly, it turns out that the whole question
of descending of the Cartan geometry is equivalent to the fact that
all values of the curvature function $\kappa$ of $\omega$ vanish
upon insertion of any element of $\fq/\fp\subset\fg/\fp$, see
Theorem 1.5.14 of \cite{CS2009}.
But now the fact that the canonical Cartan geometry on $\cE$
descends to the space $\cS$ implies that the Cartan curvature and
hence all invariants derived from it in an equivariant fashion
descend to $\cS$ and thus are first integrals. This is the technical
definition of C-class that we use in this article:
\begin{defn} \label{D:C-class} An ODE \eqref{E:ODE} is of {\em
C-class} if its corresponding regular, normal Cartan geometry
$(\cG \to \cE, \omega)$ descends to sufficiently small spaces of
solutions or, equivalently, if its curvature function satisfies
$i_\sfX \kappa = 0$, where $\sfX \in \fg_{-1}$ was defined in \S \ref{S:triv-eq}.
\end{defn}
\section{Codifferentials and normalization conditions}
\label{S:cd-norm}
\subsection{Filtrations and gradings}\label{S:filtgr}
We will use the general results from \cite{Cap2016} to obtain
canonical Cartan connections. In addition to the properties of the
pair $(\fg,P)$ that we have already verified, the main ingredient needed
to apply this method is a choice of normalization condition. We do
this via a codifferential in the sense of Definition 3.9 of
\cite{Cap2016}.
Such a codifferential consists of $P$--equivariant maps acting between
spaces of the form $L(\bigwedge^k(\fg/\fp),\fg)$ of alternating multilinear
maps. An important role in \cite{Cap2016} is played by the natural
$P$--invariant filtration on these spaces and the associated graded
spaces. For our purposes, it will be useful to view these as subspaces
of the chain spaces $C^k(\fg,\fg)=L(\bigwedge^k\fg,\fg)$. Hence we
will first collect the necessary information on filtrations and
associated graded spaces in this setting. Observe that each of the
spaces $C^k(\fg,\fg)$ naturally is a representation of $\fg$ and of
$P$, and we can identify $L(\bigwedge^k(\fg/\fp),\fg)$ with the
subspace
$$
C^k_{\text{hor}}(\fg,\fg)=\{ \phi \in C^k(\fg,\fg) : i_z \phi =
0, \forall z \in \fp \}
$$
of horizontal $k$--chains, which is immediately seen to be
$P$--invariant.
As we have seen in \S \ref{S:triv-eq}, the Lie algebra $\fg$
carries a $P$--invariant filtration $\{\fg^i\}_{i=-n-1}^1$ such that
$\fp = \fg^0$. Moreover, we noticed that this filtration is actually
induced by a grading $\fg=\fg_{-n-1}\oplus\dots\oplus\fg_1$ of $\fg$ in
the sense that $\fg^i=\oplus_{j\geq i}\fg_j$. The grading is not
$P$--invariant, however, so it has to be viewed as an auxiliary
object. In particular, this implies that one can identify the filtered
Lie algebra $\fg$ with its associated graded Lie algebra
$\tgr(\fg)$. The filtration and the grading on $\fg$ induce a
filtration and a grading on each of the chain spaces $C^k(\fg,\fg)$,
which can be conveniently described in terms of homogeneity. Moreover,
it follows readily that each of the spaces $C^k(\fg,\fg)$ can be
naturally identified with its associated graded.
The notion of homogeneity is more familiar in the setting of
gradings: We say that $\varphi \in C^k(\fg,\fg)$ is {\em homogeneous of
degree $\ell$} if, for all $i_1,\dots,i_k\in\{-n-1,\dots,1\}$, it maps
$\fg_{i_1}\times\dots\times\fg_{i_k}$ to $\fg_{i_1+\dots+i_k+\ell}$. In our
simple situation, {\em homogeneity of degree $\geq\ell$ (in the filtration
sense)} then simply means that $\fg_{i_1}\times\dots\times \fg_{i_k}$ is always
mapped to $\fg^{i_1+\dots+i_k+\ell}$. For the passage to the
associated graded, it suffices to consider spaces of the form
$L(\bigwedge^k(\fg/\fp),\fg)$. As proved in Lemma 3.1 of
\cite{Cap2016}, identifying $\fg$ with $\tgr(\fg)$, the associated
graded to this filtered space can be identified with $C^k(\fg_-,\fg)$
(with its natural grading). For a map $\varphi\in
L(\bigwedge^k(\fg/\fp),\fg)$ which is homogeneous of degree
$\geq\ell$, the projection $\tgr_\ell(\varphi)\in C^k(\fg_-,\fg)_\ell$
is obtained by applying $\varphi$ to (the classes of) elements of
$\fg_-$ and taking the homogeneous component of degree $\ell$. Here we
denote by $C^k(\fg_-,\fg)_\ell$ the homogeneity $\ell$ component of
$C^k(\fg_-,\fg)$.
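To fix ideas: since $[\fg_i,\fg_j]\subset\fg_{i+j}$, the Lie bracket of $\fg$, viewed
as an element of $C^2(\fg,\fg)$, is homogeneous of degree $0$, while a map
$\varphi\in C^1(\fg,\fg)$ satisfying $\varphi(\fg_i)\subset\fg_{i+1}$ for all $i$ is
homogeneous of degree $1$.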
The spaces $C^k(\fg_-,\fg)$ are the chain spaces in the standard
complex computing the Lie algebra cohomology $H^*(\fg_-, \fg)$ of
the Lie algebra $\fg_-$ with coefficients in the module
$\fg$. Correspondingly, there is a standard differential in this
complex, which we denote by $\partial_{\fg_-}$. This differential
plays an important role in the definitions of normalization conditions
and of codifferentials.
\subsection{Scalar product and codifferential}\label{S:codiff}
As in \S \ref{S:filtgr}, we identify $L(\bigwedge^k(\fg/\fp),\fg)$ with $C^k_{\text{hor}}(\fg,\fg)$. Define an inner product $\langle\ , \ \rangle$ on $\fg$ by declaring
$\sfX$, $\sfH$, $\sfY$, $\sfv^i_b:=\sfv^i\otimes\sfe_b$, $\sfe^a_b$
to be an orthogonal basis with
\[
\langle \sfX, \sfX \rangle = \langle \sfY, \sfY \rangle = 1, \quad
\langle \sfH, \sfH \rangle = 2, \quad
\langle \sfe^a_b, \sfe^a_b \rangle = 1, \quad
\langle \sfv^i_b, \sfv^i_b \rangle = \frac{(n-i)!}{i!}.
\]
Then $\forall A,B \in \fq$ and $\forall u,v \in \fa$, this satisfies:
\begin{align} \label{E:innprod}
\langle A, B \rangle = \tr(A^\top B), \qquad
\langle Au, v \rangle = \langle u, A^\top v \rangle.
\end{align}
Extend $\langle \ , \ \rangle$ to an inner product on $C^*(\fg,\fg)$.
The spaces $C^k(\fg,\fg)$ are the chain spaces in the standard
complex computing the Lie algebra cohomology $H^*(\fg,\fg)$, and we
denote by $\partial_{\fg}$ the standard differentials in that
complex. From the explicit formula for these differentials (which
only uses the Lie bracket in $\fg$), it follows readily that these
maps are $\fg$--equivariant and $Q$--equivariant.
\begin{defn}\label{D:codiff}
For each $k$, we define the \textit{codifferential}
$\partial^*:C^k(\fg,\fg)\to C^{k-1}(\fg,\fg)$ as the adjoint (with
respect to the inner products we have just defined) of the Lie
algebra cohomology differential $\partial_{\fg}$. Explicitly, we
have the relation $\langle \partial_{\fg} \phi, \psi \rangle = \langle
\phi, \partial^* \psi \rangle$ for all $\phi\in C^{k-1}(\fg,\fg)$
and $\psi\in C^k(\fg,\fg)$.
\end{defn}
\begin{lemma}\label{L:codiff}
The codifferential restricts to a $P$--equivariant map
$\partial^*:L(\bigwedge^k(\fg/\fp),\fg)\to
L(\bigwedge^{k-1}(\fg/\fp),\fg)$. This map preserves homogeneity
and thus is compatible with the filtrations on both
spaces. Moreover, it is image--homogeneous in the sense of
Definition 3.7 of \cite{Cap2016}.
\end{lemma}
\begin{proof} We have already noted that $\partial_{\fg}$ is
$\fg$-equivariant. Now for any $A \in \fq$, we have
\begin{align*}
\langle \phi, A \partial^* \psi \rangle &= \langle A^\top \phi,
\partial^* \psi \rangle = \langle \partial_{\fg}(A^\top \phi), \psi
\rangle = \langle A^\top \partial_{\fg} \phi, \psi \rangle\\ &= \langle
\partial_{\fg} \phi, A \psi \rangle = \langle \phi, \partial^*(A \psi)
\rangle,
\end{align*}
so $\partial^*$ is $\fq$-equivariant on the full cochain spaces. Since
the grading element $\sfZ$ lies in $\fz(\fg_0) \subset \fq$, we see
that $\partial^*$ commutes with the action of $\sfZ$. This means that
it preserves homogeneity in the graded sense and thus also in the
sense of filtrations.
Let $\op^\perp$ denote orthogonal direct sum. Then $\fg = \fg_-
\op^\perp \fp$ induces $\fg^* = \fann(\fg_-) \op^\perp \fann(\fp)$.
Letting $\bigwedge^{i,j} := \bigwedge^i \fann(\fg_-) \otimes
\bigwedge^j \fann(\fp)$, we have $\bigwedge^k \fg^* \cong
\bop_{i+j=k}^\perp \bigwedge^{i,j}$. Now by definition, the subspace
$L(\bigwedge^k(\fg/\fp),\fg)$ of $C^k(\fg,\fg)$ coincides with
$\bigwedge^{0,k} \otimes \fg$. Thus, its orthocomplement is given by
$\bop_{i>0}^\perp \bigwedge^{i,k-i}\otimes\fg$, and this space can be written
as
$$
\{ \varphi \in C^k(\fg,\fg) : \varphi(v_1,...,v_k) = 0, \,
\forall v_i \in \fg_- \}.
$$
Since $\fg_-$ is a subalgebra of $\fg$, the definition of the
differential implies that $\partial_{\fg}$ maps
$L(\bigwedge^{k-1}(\fg/\fp),\fg)^\perp$ to
$L(\bigwedge^k(\fg/\fp),\fg)^\perp$. Now for $\psi \in
L(\bigwedge^k(\fg/\fp),\fg)$, we can verify that $\partial^*\psi\in
L(\bigwedge^{k-1}(\fg/\fp),\fg)$ by showing that for all $\phi\in
L(\bigwedge^{k-1}(\fg/\fp),\fg)^\perp$, we get
$0=\leftarrowngle\phi,\partial^*\psi\rangle$. But this follows directly from
the definition as an adjoint. Since $L(\bigwedge^k(\fg/\fp),\fg)$ is a
$P$--invariant subspace of $C^k(\fg,\fg)$ for each $k$,
$Q$--equivariance of $\partial^*$ on $C^k(\fg,\fg)$ readily implies
$P$--equivariance of the restriction.
Image-homogeneity as defined in \cite{Cap2016} requires the
following. If we have an element in the image of $\partial^*$, which
is homogeneous of degree $\geq\ell$ in the filtration sense, then it
should be possible to write it as the image under $\partial^*$ of an
element which itself is homogeneous of degree $\geq\ell$. But in our
case, the filtration is derived from a grading that is preserved by
$\partial^*$. Thus, if all non--zero homogeneous components of
$\partial^*\psi$ lie in degrees $\geq\ell$, it follows that all
homogeneous components of degree $<\ell$ of $\psi$ must be contained
in the kernel of $\partial^*$. (Otherwise, their images would be non-zero
homogeneous components of $\partial^*\psi$ of degree $<\ell$.) Hence the homogeneous components of degree
$<\ell$ can be left out without changing the image, and image-homogeneity follows.
\end{proof}
To prove that $\partial^*$ can be used to obtain a normalization
condition, we have to consider the induced maps between the associated
graded spaces. As in \S \ref{S:filtgr}, we
view the associated graded of $L(\bigwedge^k(\fg/\fp),\fg)$ as
$C^k(\fg_-,\fg)$. Observe further that $C^k(\fg_-,\fg)$ is
exactly the subspace $\bigwedge^{0,k}\otimes\fg\subset C^k(\fg,\fg)$
as introduced in the proof of Lemma \ref{L:codiff}. Having made these
observations we can now verify the remaining properties of the
codifferential needed in order to apply the general theory for
existence of canonical Cartan connections.
\begin{prop}\label{P:codiff}
The maps $\partial^*$ from Definition \ref{D:codiff}
define a codifferential in the sense of Definition 3.9 of
\cite{Cap2016}. Hence, in the terminology of that reference,
$\ker(\partial^*)\subset L(\bigwedge^2(\fg/\fp),\fg)$ is a
normalization condition and $\im(\partial^*)\subset\ker(\partial^*)$
is a maximally negligible submodule.
\end{prop}
\begin{proof}
In view of Lemma \ref{L:codiff}, it remains to verify the second
condition in Definition 3.9 of \cite{Cap2016}. This says that the
maps $\underline{\partial^*}:C^k(\fg_-,\fg)\to C^{k-1}(\fg_-,\fg)$
induced by $\partial^*$ are disjoint to $\partial_{\fg_-}$. As
above, we can identify $C^k(\fg_-,\fg)$ with the subspace
$\bigwedge^{0,k}\otimes\fg\subset C^k(\fg,\fg)$, which endows it
with an inner product. Since $\fg_-$ is a subalgebra in $\fg$, it
easily follows from the definition of the Lie algebra cohomology
differential that for $\psi\in C^k(\fg_-,\fg)\subset C^k(\fg,\fg)$
we get $\partial_{\fg}\psi\in \bigwedge^{1,k}\otimes\fg\oplus
\bigwedge^{0,k+1}\otimes\fg$. Moreover, the component of
$\partial_{\fg}\psi$ in $\bigwedge^{0,k+1}\otimes\fg$ coincides with
$\partial_{\fg_-}\psi$.
Now taking $\varphi\in C^k(\fg_-,\fg)$ that is homogeneous of some
fixed degree $\ell$, we get $\underline{\partial^*}\varphi$ by
interpreting $\partial^*\varphi$ as an element of
$C^{k-1}(\fg_-,\fg)$. For $\psi\in C^{k-1}(\fg,\fg)$, we thus can
have $\langle\partial^*\varphi,\psi\rangle\neq 0$ only if $\psi$ is
homogeneous of the same degree $\ell$ and contained in
$\bigwedge^{0,k-1}\otimes\fg$. By definition, we get
$\langle\partial^*\varphi,\psi\rangle=\langle\varphi,
\partial_{\fg}\psi\rangle$. Since
$\varphi\in\bigwedge^{0,k}\otimes\fg$, we may replace
$\partial_{\fg}\psi$ by its component in that subspace and hence by
$\partial_{\fg_-}\psi$. This shows that $\underline{\partial^*}$ is
adjoint to $\partial_{\fg_-}$, which implies the required
disjointness. All remaining claims now follow directly from
Proposition 3.10 of \cite{Cap2016}.
\end{proof}
As noted in \S \ref{S:Cartan}, the curvature
of a Cartan geometry is encoded in the curvature function $\kappa$,
which has values in $L(\bigwedge^2(\fg/\fp),\fg)$. Normality of the
Cartan geometry then exactly means that the values of $\kappa$
actually lie in the subspace $\ker(\partial^*)$.
\subsection{Lie algebra cohomology and Tanaka
prolongation}\label{S:Tanaka}
To obtain a more explicit description of the codifferential
$\partial^*$, we next study the Lie algebra cohomology differential
$\partial_{\fg_-}$. This will also allow us to verify that
$\tgr(\fg)\cong\fg$ is the full prolongation of its non--positive
part, which is the last ingredient needed to prove Theorem
\ref{T:G0str-Cartan}. This can be expressed in terms of the Lie
algebra cohomology $H^*(\fg_-,\fg)$.
Recall that $\fg=\fq\ltimes\fa$, with the abelian ideal
$\fa=V_n\otimes\bbR^m$ and the reductive subalgebra
$\fq=\mathfrak{sl}_2\times\mathfrak{gl}_m$. Moreover, $\fp\subset\fq$
and $\fg_-=\bbR\cdot\sfX\oplus\fa$. Proceeding similarly to the above,
we view $L(\bigwedge^k(\fg/\fq),\fg)$ as the subspace of
$C^k(\fg,\fg)$ consisting of those maps which vanish upon insertion of
one element of $\fq$. This can then be identified with the chain
space $C^k(\fa,\fg)$, where we view $\fg$ as an $\fa$--module via the
adjoint action. This identification is even $\fq$--equivariant, since
$\fg=\fq\oplus\fa$ as a $\fq$--module.
On the chain spaces $C^*(\fa,\fg)$, we again have a Lie algebra
cohomology differential, which we denote by
$\partial_{\fa}$. Explicitly, this differential is given by
\[
\partial_{\fa}\varphi(X_0,\dots,X_k)=\textstyle
\sum_i(-1)^i[X_i,\varphi(X_0,\dots,\widehat{X_i},\dots,X_k)].
\]
Now define $\omega^\sfX\in{\fg_-}^*$ to be the functional sending
$\sfX$ to $1$ and vanishing on $\fa\subset\fg_-$. Given $\phi \in
C^k(\fg_-,\fg)$, we have $\phi = \omega^\sfX \wedge \phi_1 + \phi_2$,
for elements $\phi_1 \in C^{k-1}(\fa,\fg)$ and $\phi_2 \in
C^k(\fa,\fg)$. Explicitly, we have $\phi_1=i_{\sfX}\phi$ and
$\phi_2=\phi-\omega^\sfX \wedge \phi_1$. We express this by writing
$\phi = \begin{psmallmatrix} \phi_1\\ \phi_2 \end{psmallmatrix}$.
\begin{lemma}\label{L:delg-}
In terms of the notation just introduced, the Lie algebra cohomology
differential $\partial_{\fg_-}$ is given by
\begin{equation} \label{E:delg-}
\partial_{\fg_-}\begin{pmatrix} \phi_1 \\ \phi_2\end{pmatrix}
= \begin{pmatrix} -\partial_\fa \phi_1 +
\sfX\cdot\phi_2\\ \partial_\fa \phi_2\end{pmatrix}.
\end{equation}
\end{lemma}
\begin{proof}
Take $\phi_1,\phi_2 \in C^*(\fa,\fg)$ of degrees $k-1$ and $k$,
respectively. Evaluating on $\bigwedge^{k+1} \fa$, we clearly have
$\partial_{\fg_-}(\omega^\sfX \wedge \phi_1) = 0$ and
$\partial_{\fg_-} \phi_2 = \partial_\fa \phi_2$. Next, simple direct
computations show that for elements $v_i \in \fa$, we obtain
\[
\partial_{\fg_-}(\omega^\sfX \wedge \phi_1)(\sfX,v_1,...,v_k) =
-\partial_\fa\phi_1(v_1,\dots,v_k),
\]
while $(\partial_{\fg_-} \phi_2)(\sfX,v_1,...,v_k)$ equals
\begin{align*}
\sfX \cdot (\phi_2&(v_1,...,v_k)) + \textstyle\sum_{j=1}^k (-1)^j \phi_2([\sfX,v_j],v_1,...,\widehat{v}_j,...,v_k)\\
=&(\sfX\cdot \phi_2)(v_1,...,v_k).
\end{align*}
\end{proof}
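As a quick illustration of \eqref{E:delg-} in the lowest degree (using only the
definition of the differentials and the standard relation $[\sfX,\sfY]=\sfH$): an
element of $C^0(\fg_-,\fg)$ is just an element $B\in\fg$, and
$$
\partial_{\fg_-}B=\begin{pmatrix} [\sfX,B]\\ \partial_\fa B\end{pmatrix},
\qquad (\partial_\fa B)(v)=[v,B]\ \text{ for } v\in\fa,
$$
since the action of $\sfX$ on $C^0(\fa,\fg)=\fg$ is just $\ad_\sfX$. For instance,
$\partial_{\fg_-}(-\lambda\sfY)=\begin{psmallmatrix} -\lambda\sfH\\ \lambda\,\ad_\sfY|_\fa\end{psmallmatrix}$,
which is the verification used at the end of the proof of Proposition \ref{P:Tanaka} below.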
The $\fq$--equivariant decomposition $\fg=\fq\oplus\fa$ also induces a
decomposition of $C^k(\fa,\fg)$ according to the values of multilinear
maps. While the first factor is not a space of cochains, we still
denote this decomposition by $C^k(\fa,\fg)=C^k(\fa,\fq)\oplus
C^k(\fa,\fa)$. Observe that from the definition of $\partial_{\fa}$ it
follows readily that $C^k(\fa,\fa)\subset\ker(\partial_{\fa})$ and
that $\im(\partial_{\fa})\subset C^{k+1}(\fa,\fa)$. Using this, we can
now formulate the result on the full prolongation.
\begin{prop}[Tanaka prolongation]\label{P:Tanaka} Let $n \geq 2$, $m \geq 1$, with $(n,m) \neq (2,1)$. Then the graded Lie algebra
$\tgr(\fg)\cong\fg$ is the full prolongation of its non--positive
part.
\end{prop}
\begin{proof}
It is well known that the statement is equivalent to the fact that
$H^1(\fg_-,\fg)$ is concentrated in non--positive homogeneities,
compare with Proposition 2.12 of \cite{Cap2016}. In the vector
notation introduced above, an element of $C^1(\fg_-,\fg)$ can be
written as $\binom{A}{\phi}$ for $A\in\fg=C^0(\fa,\fg)$ and
$\phi\in C^1(\fa,\fg)$. Now as indicated above, we can decompose
$\phi=\phi_{\fa}+\phi_{\fq}$ according to the values. By Lemma
\ref{L:delg-}, $0=\partial_{\fg_-}\binom{A}{\phi} = \begin{psmallmatrix} -\partial_\fa(A) + \sfX \cdot \phi\\ \partial_\fa \phi \end{psmallmatrix}$ implies
$0=\partial_{\fa}\phi=\partial_{\fa}\phi_{\fq}$. But now by
definition, the restriction $C^1(\fa,\fq)\to C^2(\fa,\fa)$ of
$\partial_{\fa}$ is exactly the Spencer differential associated to
$\fq\subset \fa^*\otimes\fa$.
We note that $\fq$ acts irreducibly on $\fa$, and is not in the list of infinite-type algebras in \cite{KN1965}. Given the assumptions on $m$ and $n$, $\fa^* \op \fq \op \fa$ is not a $|1|$-graded semisimple Lie algebra. (The list of these algebras is well-known -- see \S 3.2.3 in \cite{CS2009}.) Thus, by the main result of Kobayashi and Nagano \cite{KN1964}, $\fq\subset \fa^*\otimes\fa$ has trivial first
prolongation, so this Spencer differential is injective.
Hence, we conclude that $\phi_\fq = 0$.
We have already seen that
$\sfX\cdot\phi=\partial_\fa(A)$. Now we can decompose the
representation $\fa^*\otimes\fa$ of $\fq$ into irreducible
components. Writing this as $\fq \op \oplus_j U_j$, we can accordingly decompose $\phi=B + \sum_j\phi_j$ and
this decomposition is preserved by the action of $\sfX\in\fq$. But
on the other hand, $\partial_\fa:\fg\to\fa^*\otimes\fa$ vanishes on
$\fa\subset\fg$ and coincides with the inclusion on
$\fq\subset\fg$. Thus we conclude that $\sfX\cdot\phi_j=0$ for all
$j$, which means that these $\phi_j$ actually have to be
contained in highest weight spaces for the action of $\fsl_2$. These
are all represented by positive powers of $\sfX$ and thus contained in
negative homogeneity.
The upshot of this discussion is that if $\binom{A}{\phi}$ lies in
the kernel of $\partial_{\fg_-}$ and has positive homogeneity, then
$\phi=\phi_\fa \in C^1(\fa,\fa)$ must satisfy $\phi = \lambda\, \ad_\sfY|_\fa$, and hence
$\sfX \cdot \phi = \lambda\, \ad_\sfH|_\fa$. By the homogeneity assumption $A$
has to be homogeneous of non-negative degree, hence lies in $\fq$, so
$\partial_\fa(A) = \sfX \cdot \phi$ implies $A=-\lambda\sfH$. But then
one immediately verifies that $\binom{A}{\phi}=\partial_{\fg_-}
(-\lambda\sfY)$, which completes the proof.
\end{proof}
As we have observed in \S \ref{S:Cartan} already, this completes
the proof of Theorem \ref{T:G0str-Cartan}, so we have an equivalence
of categories between filtered $G_0$--structures and regular normal
Cartan geometries.
\subsection{A codifferential formula}\label{S:codiff-explicit}
To proceed towards a more explicit description of the codifferential
$\partial^*$, we continue identifying $C^k(\fa,\fg)$ with the subspace
of $C^k(\fg,\fg)$ of those cochains which vanish under insertion of an
element of $\fq$. Doing this, we can restrict the inner product from
\S \ref{S:codiff} to the subspace $C^k(\fa,\fg)$ and define a map
$\partial^*_{\fa}$ as the adjoint of the Lie algebra cohomology
differential $\partial_{\fa}$. We further observe that the
decomposition $C^k(\fa,\fg)=C^k(\fa,\fq)\oplus C^k(\fa,\fa)$ is
orthogonal with respect to our inner product. The basic properties of
$\partial^*_{\fa}$ are as follows.
\begin{lemma}\label{L:q-valued} \quad
\begin{enumerate}
\item The map $\partial^*_{\fa}$ is $\fq$--equivariant.
\item For each $k$, we have $\im(\partial^*_{\fa})\subset
C^k(\fa,\fq)\subset\ker(\partial^*_{\fa})$.
\item For $k=1$, we get $\im(\partial^*_{\fa})=C^1(\fa,\fq)$ and
$\ker(\partial^*_{\fa})$ is the direct sum of $C^1(\fa,\fq)$ and the
orthocomplement of $\fq\subset\fa^*\otimes\fa=C^1(\fa,\fa)$
included via the natural action of $\fq$ on $\fa$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) is proved in exactly the same way as equivariance of the
codifferential in Lemma \ref{L:codiff}.
(2) In \S \ref{S:Tanaka} we have observed that
$C^k(\fa,\fa)\subset\ker(\partial_{\fa})$ and
$\im(\partial_{\fa})\subset C^k(\fa,\fa)$ for each $k$. By the
definition as an adjoint, we see that
$\ker(\partial^*_{\fa})=\im(\partial_{\fa})^\perp$ and
$\im(\partial^*_{\fa})=\ker(\partial_{\fa})^\perp$. Thus (2) follows
from the fact that $C^k(\fa,\fq)=C^k(\fa,\fa)^\perp$ for each $k$.
(3) We have already observed in the proof of Proposition
\ref{P:Tanaka} that $\partial_{\fa}:\fg\to C^1(\fa,\fg)$ vanishes on
$\fa$ and restricts to the representation $\fq\to \fa^*\otimes\fa$ on
$\fq$. Thus $\im(\partial_{\fa})=\fq\subset C^1(\fa,\fa)\subset
C^1(\fa,\fg)$, which together with the arguments from (2) implies
the claimed description of $\ker(\partial^*_{\fa})$.
On the other hand, $\partial_{\fa}:C^1(\fa,\fg)\to C^2(\fa,\fg)$,
vanishes on $C^1(\fa,\fa)$ while in the proof of Proposition
\ref{P:Tanaka} we have seen that it restricts to an injection on
$C^1(\fa,\fq)$. Thus $\ker(\partial_{\fa})=C^1(\fa,\fa)$, and the
description of $\im(\partial^*_{\fa})$ in degree one follows.
\end{proof}
As above, we view $C^k(\fa,\fg)$ as the subspace of
$L(\bigwedge^k(\fg/\fp),\fg)$ consisting of those elements which
vanish under insertion of the element $\sfX$. Given the basis $\{
\sfY,\sfH, \sfe^a_b, \sfX, \sfv^i_b \}$ of $\fg$, let $\{ \eta^\sfY,
\eta^\sfH, \eta^b_a, \omega^\sfX, \omega_i^b \}$ be the dual basis.
\begin{prop} \label{P:hds} In terms of the notation from \S \ref{S:Tanaka},
the codifferential $\partial^*$ (on horizontal $k$-forms) is given by
\begin{equation} \label{E:hds}
\partial^*\begin{pmatrix} \phi_1 \\ \phi_2\end{pmatrix} = \begin{pmatrix} -\partial^*_\fa \phi_1\\ \partial^*_\fa \phi_2 + \sfY \cdot \phi_1\end{pmatrix},
\end{equation}
where $\phi_1 \in C^{k-1}(\fa,\fg)$ and $\phi_2 \in C^k(\fa,\fg)$.
\end{prop}
\begin{proof}
Take $\psi_1 \in C^{k-2}(\fa,\fg)$ and $\psi_2 \in C^{k-1}(\fa,\fg)$ and put
$\psi=\omega^\sfX\wedge\psi_1+\psi_2 \in C^{k-1}(\fg_-,\fg)$. In the proof of Proposition
\ref{P:codiff}, we have seen that $\partial_{\fg}\psi$ and
$\partial_{\fg_-}\psi$ differ only by elements of
$\bigwedge^{1,k-1}\otimes\fg$. Using Lemma \ref{L:delg-} we thus
conclude that, up to terms involving elements of
$\{\eta^\sfY,\eta^\sfH, \eta^b_a \}$, we get
\[
\partial_{\fg}(\omega^\sfX \wedge \psi_1 + \psi_2) \equiv\omega^\sfX
\wedge ( -\partial_\fa \psi_1 + \sfX \cdot \psi_2) + \partial_\fa
\psi_2.
\]
Since $\{ \eta^\sfY, \eta^\sfH, \eta^b_a \}$ is orthogonal to the
horizontal forms $\{ \omega^\sfX, \omega_i^b \}$, the formula for
$\partial^*$ (on horizontal forms) follows from:
\begin{align*}
\langle \partial^*(\omega^\sfX \wedge \phi_1), \omega^\sfX \wedge
\psi_1 \rangle &= \langle \omega^\sfX \wedge
\phi_1, \partial_{\fg}(\omega^\sfX \wedge \psi_1) \rangle
= \langle \omega^\sfX \wedge \phi_1, -\omega^\sfX \wedge \partial_\fa \psi_1 \rangle\\
& = -\langle \omega^\sfX, \omega^\sfX \rangle \langle
\phi_1, \partial_\fa \psi_1 \rangle
= -\langle \omega^\sfX, \omega^\sfX \rangle \langle \partial^*_\fa \phi_1, \psi_1 \rangle\\
&=\langle -\omega^\sfX \wedge \partial^*_\fa \phi_1, \omega^\sfX \wedge \psi_1 \rangle\\
\langle \partial^*(\omega^\sfX \wedge \phi_1), \psi_2 \rangle
&= \langle \omega^\sfX \wedge \phi_1, \partial_\fg\psi_2 \rangle
= \langle \omega^\sfX \wedge \phi_1, \omega^\sfX \wedge \sfX\cdot\psi_2 \rangle\\
&= \langle \phi_1, \sfX\cdot\psi_2 \rangle = \langle \sfY \cdot \phi_1, \psi_2 \rangle\\
\langle \partial^* \phi_2, \omega^\sfX \wedge \psi_1 \rangle &= \langle \phi_2, \partial_\fg(\omega^\sfX \wedge \psi_1) \rangle = 0\\
\langle \partial^* \phi_2, \psi_2 \rangle &= \langle
\phi_2, \partial_\fg\psi_2 \rangle = \langle
\phi_2, \partial_\fa\psi_2 \rangle = \langle \partial^*_\fa \phi_2,
\psi_2 \rangle.
\end{align*}
\end{proof}
\begin{cor}\label{C:red}
Consider $\im(\partial^*)\subset\ker(\partial^*)\subset
L(\bigwedge^k(\fg/\fp),\fg)$. Then the natural representation of
$P$ on $\ker(\partial^*) / \im(\partial^*)$ is completely
reducible, i.e.\ $\fg^1$ acts trivially.
\end{cor}
\begin{proof}
Let $\phi \in \ker(\partial^*)$. From \eqref{E:hds},
$\partial^*_\fa \phi_1 = 0$ and $\partial^*_\fa \phi_2 + \sfY \cdot
\phi_1 = 0$. Since $\sfY \cdot \omega^\sfX = -\omega^\sfX \circ
\ad_\sfY = 0$, we get $\sfY \cdot \phi = \omega^\sfX \wedge (\sfY
\cdot \phi_1) + \sfY\cdot \phi_2$, so
\[
\sfY \cdot \phi = \begin{pmatrix} \sfY \cdot \phi_1 \\ \sfY \cdot
\phi_2 \end{pmatrix} = \begin{pmatrix} -\partial_\fa^* \phi_2 \\
\sfY\cdot \phi_2\end{pmatrix} = \partial^*\begin{pmatrix} \phi_2 \\
0\end{pmatrix} \in \im(\partial^*).
\]
Hence, $\fg^1$ acts trivially on $\ker(\partial^*) /
\im(\partial^*)$.
\end{proof}
\subsection{Homogeneous examples of C-class ODE}
\label{S:hom-C-class}
It is well-known that the submaximal (contact) symmetry dimension for
scalar ODE of order $\geq 4$ is two less than that of the (maximally
symmetric) trivial equation, except for orders 5 and 7 where it is
only one less \cite{Olv1995}. For these cases, explicit submaximally symmetric models are well-known:
\begin{align}
9 (u'')^2 u^{(5)} - 45 u'' u''' u'''' + 40 (u''')^3 = 0; \label{E:5-ex}\\
10 (u''')^3 u^{(7)} - 70 (u''')^2 u^{(4)} u^{(6)} - 49 (u''')^2 (u^{(5)})^2 \label{E:7-ex}\\
+ 280 u''' (u^{(4)})^2 u^{(5)} - 175 (u^{(4)})^4 = 0. \nonumber
\end{align}
These have $A_2 \cong \fsl_3$ and $C_2 \cong \fsp_4$ symmetry respectively.
Doubrov \cite{Dou2008} showed that \eqref{E:5-ex} and \eqref{E:7-ex} are Wilczynski-flat. We will describe their Cartan curvatures, observe the vanishing under $\sfX$-insertions, and hence confirm that they are of C-class.
The symmetry algebra $\fs \cong \fsl_3$ of $\cE$ given by \eqref{E:5-ex} is spanned by:
\[
\partial_t, \quad \partial_u, \quad t\partial_t, \quad u\partial_t, \quad t\partial_u, \quad u\partial_u, \quad t^2\partial_t + tu\partial_u, \quad tu\partial_t + u^2\partial_u.
\]
This is a homogeneous structure and (the restriction of the prolongation of) $\fs$ is infinitesimally transitive on $\cE$. Fixing the point $o = \{ t = u = u_1 = u_3 = u_4 = 0,\, u_2 = 1 \} \in \cE$, let us define an alternative basis:
\begin{align*}
X &= \partial_t + t \partial_{u}, \quad
H = -2(t \partial_t + 2 u \partial_{u}), \quad
Y = 2(u-t^2) \partial_t - 2t u \partial_{u}, \\
T_4 &= \frac{1}{2} \partial_u, \quad T_2 = -\partial_t + t\partial_u, \quad T_0 = -3t\partial_t, \\
T_{-2} &= -2(t^2+u)\partial_t - 2tu\partial_u, \quad T_{-4} = -2tu\partial_t - 2u^2\partial_u.
\end{align*}
This basis is adapted to $o$:
\begin{itemize}
\item the isotropy is $\fs^0 = \tspan\{ H, Y \}$.
\item the line field $E = \tspan\{ \partial_t + u_1 \partial_{u} + ... + u_4 \partial_{u_3} + u_5 \partial_{u_4} \}$ on $\cE$ has $E|_o = \tspan\{ X|_o \}$. Moreover, $\{ X,H,Y \}$ is a standard $\fsl_2$-triple.
\item the line field $F = \tspan\{ \partial_{u_4} \}$ on $\cE$ has $F|_o = \tspan\{ T_{-4}|_o \}$.
\item The elements $X$ and $T_{-4}$ have filtration degree $-1$ and this induces a filtration on $T_o\cE$.
\end{itemize}
(Again, we are referring to the restrictions of prolongations of the vector fields above.)
The element $H$ decomposes $\fs$ into weight spaces: $T_{2i}$ has $H$-weight $2i$, and these span an $\fsl_2$-irrep isomorphic to $V_4$. Alternatively, we can view this in terms of $3 \times 3$ trace-free matrices. The map sending $a_2 X + a_0 H + a_{-2} Y + \sum_{i=-2}^2 b_{2i} T_{2i}$ to
\begin{align} \label{E:sl3-split}
\begin{psmallmatrix}
2 a_0 & \sqrt{2} a_2 & 0\\
\sqrt{2} a_{-2} & 0 & \sqrt{2} a_2\\
0 & \sqrt{2} a_{-2} & -2 a_0
\end{psmallmatrix}
+
\begin{psmallmatrix}
b_0 & -\sqrt{2}\, b_2 & b_4\\
\sqrt{2}\, b_{-2} & -2b_0 & \sqrt{2}\, b_2\\
b_{-4} & -\sqrt{2}\, b_{-2} & b_0
\end{psmallmatrix}
\end{align}
is a Lie algebra isomorphism $\fs \to \fsl_3$. In summary, we have $\fs \cong \fsl_2 \op V_4$ as $\fsl_2$-modules, and this is equipped with the filtration induced from above, e.g.\
$\begin{psmallmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{psmallmatrix}$ and
$\begin{psmallmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
1 & 0 & 0
\end{psmallmatrix} \mod \fs^0$ have filtration degree $-1$.
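As a quick consistency check of the $\fsl_2$-part of \eqref{E:sl3-split} (assuming
the standard relations $[H,X]=2X$, $[H,Y]=-2Y$, $[X,Y]=H$ for the triple above, and
writing $E_{ij}$ for the elementary matrices): the map sends
$$
X\mapsto \sqrt{2}\,(E_{12}+E_{23}),\qquad H\mapsto 2(E_{11}-E_{33}),\qquad
Y\mapsto \sqrt{2}\,(E_{21}+E_{32}),
$$
and indeed $[\sqrt{2}(E_{12}+E_{23}),\sqrt{2}(E_{21}+E_{32})]=2(E_{11}-E_{33})$.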
The decomposition $\fsl_3 \cong \fsl_2 \op V_4$ is in fact induced by a {\em principal} $\fsl_2$ subalgebra (all of which are conjugate in $\fs$). Similar decompositions exist for $C_2 \cong \fsp_4$ (arising from the symmetries of \eqref{E:7-ex}) and $G_2$, so it will be useful to formulate this in a uniform way. Let $\fs$ be a rank two complex simple Lie algebra. Fix a Cartan subalgebra $\fh$, root system $\Delta$, and a simple root system $\alpha_1,\alpha_2 \in \fh^*$. Let $\{ h_i, e_i, f_i \}_{i=1}^2$ be standard Chevalley generators, where $e_i$ and $f_i$ are root vectors for $\alpha_i$ and $-\alpha_i$ respectively. Let $Z_1,Z_2 \in \fh$ be the dual basis to $\alpha_1,\alpha_2$. We use the Bourbaki ordering, so that the Cartan matrices $c_{ij} = \langle \alpha_i, \alpha_j^\vee \rangle$ for $A_2, C_2, G_2$ are:
\[
\begin{psmallmatrix}
2 & -1\\
-1 & 2
\end{psmallmatrix}, \quad
\begin{psmallmatrix}
2 & -1\\
-2 & 2
\end{psmallmatrix}, \quad
\begin{psmallmatrix}
2 & -1\\
-3 & 2
\end{psmallmatrix}.
\]
Define a principal $\fsl_2$-subalgebra via the standard $\fsl_2$-triple:
\[
H = 2(Z_1+Z_2), \quad X = e_1 + e_2, \quad Y =
\begin{cases}
\frac{2}{3} f_1 + \frac{1}{3} f_2, & \fs = A_2;\\
f_1 + f_2, & \fs = C_2;\\
2 f_1 + 3 f_2, & \fs = G_2.
\end{cases}
\]
The element $H$ decomposes $\fs$ into weight spaces, e.g.\ the root space with root $k\alpha_1 + \ell\alpha_2$ has weight $2(k+\ell)$. We apply the raising operator $X$ to the lowest root space to get the irreducible summand $V_n$. Indeed, the $H$-weight of the lowest (or highest) roots and dimension counting yields the $\fsl_2$-decomposition $\fs = \fsl_2 \op V_n$, where $n=4$ for $A_2$, $n=6$ for $C_2$, and $n=10$ for $G_2$. Note that the sum of root spaces $\fs_{-\alpha_1} \op \fs_{-\alpha_2}$ has $H$-weight $-2$, and is decomposed into a line lying in the $\fsl_2$ and a line lying in $V_n$. Filtration degrees are indicated in Figure \ref{F:G2-filtration} for the $G_2$ case. (The $A_2$ and $C_2$ cases are similar.)
\begin{figure}
\centering
\caption{Filtration degrees associated with the $G_2$-model}
\label{F:G2-filtration}
\end{figure}
We now use all of this to describe the curvature of the
associated canonical Cartan geometry. Recall from \S 1.5.15 and 1.5.16 of \cite{CS2009} that a homogeneous Cartan geometry $(\cG \to \cE, \omega)$ of type $(G,P)$ over a homogeneous base manifold $\cE \cong S / S^0$ is completely determined by a linear map $\alpha: \fs \to \fg$ that: (i) restricts to the derivative of the natural inclusion $\iota : S^0 \to P$ on $\fs^0$, (ii) is $S^0$-equivariant, i.e. $\Ad_{\iota(s)} \circ \alpha = \alpha \circ \Ad_s$ for $s \in S^0$, and (iii) induces a vector space isomorphism $\fs / \fs^0 \cong \fg / \fp$. Letting $\tilde\kappa(x,y) = \alpha[x,y] - [\alpha(x),\alpha(y)]$, the curvature corresponds to $\kappa \in \bigwedge^2(\fg/\fp)^*\otimes \fg$ given by
\begin{align} \label{E:hom-kappa}
\kappa(u,v) = \tilde\kappa(\alpha^{-1}(u),\alpha^{-1}(v)).
\end{align}
Given the $\fsl_2$-decomposition $\fs = \fsl_2 \op V_n$, define $\alpha : \fs \hookrightarrow \fg = \fgl_2 \op V_n = \fq \oplus \fa$ via the natural inclusion. This satisfies the required conditions above, but is moreover $\fsl_2$-equivariant. This immediately implies that $\kappa$ given in \eqref{E:hom-kappa} vanishes upon insertion of $\sfX \,\mod \fp$, i.e. it is of the form $\begin{psmallmatrix} 0\\ \kappa_2 \end{psmallmatrix}$.
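Spelling this out: for $A$ in the principal $\fsl_2\subset\fs$ and any $y\in\fs$, the
$\fsl_2$-equivariance of $\alpha$ gives $\alpha([A,y])=[\alpha(A),\alpha(y)]$, so
$$
\tilde\kappa(A,y)=\alpha([A,y])-[\alpha(A),\alpha(y)]=0.
$$
Applying this to $A=X$, which $\alpha$ maps to $\sfX$, yields $i_\sfX\kappa=0$.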
We first check that $\kappa$ is normal. Since $\alpha$ is $\fsl_2$-equivariant, then $\kappa$ can be viewed as an $\fsl_2$-{\em invariant} element of the $\fsl_2$-module $\bigwedge^2 (\fg/\fq)^* \otimes \fg \cong C^2(\fa,\fg)$. From \eqref{E:hds}, it suffices to examine $\partial^*_\fa$ on this space. From Lemma \ref{L:q-valued}, $C^2(\fa,\fq) \subset \ker(\partial^*_\fa)$ and $\im(\partial^*_\fa) \subset C^1(\fa,\fq)$. As $\fsl_2$-modules, $C^1(\fa,\fq) \cong V_n \otimes (V_2 \op V_0) \cong V_{n+2} \oplus 2 V_n \oplus V_{n-2}$, which contains no trivial summands for $n \geq 3$. By $\fsl_2$-equivariance of $\partial^*_\fa$, we conclude that $\partial^*\kappa = 0$, i.e.\ $\kappa$ is normal.
A simple check using root diagrams shows that $\kappa$ is regular in all three cases and that, in the $A_2$ and $C_2$ cases, it moreover satisfies the stronger regularity condition from Remark \ref{R:strong-reg}. Thus, in these two cases we have constructed the curvature of the canonical Cartan connection of an ODE, so these ODE are indeed of C-class. Note that in the $A_2$ case, there is a unique trivial summand appearing in $\bigwedge^2 V_4 \otimes V_4$, so $\kappa$ necessarily lies there. In the $C_2$ case, there are two trivial summands: one occurs in $\bigwedge^2 V_6 \otimes V_6$ and the other inside $\bigwedge^2 V_6 \otimes \fsl_2$. A direct computation shows that $\kappa$ lies in their sum, but not entirely in either summand.
In the $G_2$ case, $\kappa$ is not strongly regular. The root spaces $\fs_{\alpha_1 + \alpha_2}$ and $\fs_{2\alpha_1 + \alpha_2}$ have filtration degrees $-8$ and $-9$ respectively, but these insert into $\tilde\kappa$ to produce a nontrivial element of $\fs_{3\alpha_1 + 2\alpha_2}$, which has degree $-11$. Consequently, no corresponding $G_2$-invariant 11th order ODE exists. We have constructed a non-ODE $G_2$-invariant filtered $G_0$-structure (with symbol algebra $\fm$). Passing to the leaf space of the foliation by $E$, we obtain a $G_2$-invariant $\text{GL}_2$-structure on an 11-manifold.
\section{Wilczynski--flatness and the main result} \label{S:main}
As we have observed at the end of \S \ref{S:Tanaka}, we can associate
a canonical normal Cartan geometry to any scalar ODE of order at least $4$
and to any system of ODEs of order at least $3$. Using the facts on the
normalization condition derived in \S \ref{S:cd-norm}, we can now
express the Wilczynski invariants in terms of the curvature $\kappa$ of this Cartan geometry. We can then prove our main result that in the case of vanishing Wilczynski invariants, the normal Cartan geometry descends to the space of solutions, thus exhibiting Wilczynski--flat equations as forming a C-class.
\subsection{Wilczynski invariants} \label{S:Wilc}
Normality implies that $\kappa$ takes values in the subspace $\ker(\partial^*) \subset L(\bigwedge^2(\fg/\fp),\fg)$. The element $\sfX\in\fg_-$ spans a one--dimensional $P$--invariant subspace in $\fg/\fp$. From Definition \ref{D:C-class}, the C-class property holds if and only if $\kappa$ takes values in the $P$-submodule
\begin{align} \label{E:bbE}
\bbE := \{ \phi \in \ker(\partial^*) \subset C^2_{\hor}(\fg,\fg) : i_\sfX \phi = 0 \}.
\end{align}
In terms of the vector notation introduced in \S \ref{S:Tanaka}, this corresponds to vectors with vanishing top component.
Composing the natural surjection $\ker(\partial^*)\to
\ker(\partial^*)/\im(\partial^*)$ with $\kappa$, we obtain the \textit{essential curvature} $\kappa_e$ of the geometry, which is shown to be a fundamental invariant in Proposition 4.6 of \cite{Cap2016}. From Corollary \ref{C:red}, we know that this quotient
representation is completely reducible, which shows that $\kappa_e$ is a much
simpler geometric object than $\kappa$.
To describe the relation to the generalized Wilczynski invariants, we
need some preparation. Inserting the vector $\sfX$ into the curvature function
$\kappa$, we obtain a function with values in $L(\fg/\fp,\fg)$. By skew symmetry of
$\kappa$, the values vanish upon insertion of $\sfX$ and thus descend to
$L(\fg/\fq,\fg)$ and one can further project to $L(\fg/\fq,\fg/\fq)$. Now
$\fg=\fq\oplus\fa$ and decomposing $L(\fg,\fg)$ accordingly, the result of that
projection can be identified with the component of $\kappa(\sfX,\_)$ in
$L(\fa,\fa)$.
Now $\fa$ is a representation of $\fq$, so we get $\fq\subset L(\fa,\fa)$. In
particular $\sfY$ and, more generally, $(\sfY,A)$ for any $A\in\fgl_m$,
can be viewed as an element of $L(\fa,\fa)$. Forming powers of the map induced by
$\sfY$, we also get $(\sfY^k,A)\in L(\fa,\fa)$ for $k=2,\dots,n$ and
$A\in\fgl_m$. The maps $\sfY^k$ for $k=1,\dots,n$ are linearly independent, and we
will show that the component of $\kappa(\sfX,\_)$ in $L(\fa,\fa)$ actually has
values in the subspace spanned by $\fq$ and the maps $(\sfY^k,A)$. Hence for each
$k=1,\dots, n$, we get a well defined component of the curvature function
determined by $\omega^{\sfX}\otimes\sfY^k$ with values in $\fgl_m$. We will also
show that the relevant components survive the projection to
$\ker(\partial^*)/\im(\partial^*)$ so they are also components of the essential
curvature function.
\begin{theorem}\label{T:Wilc}
For a scalar ODE of order at least $4$ and systems of ODEs of order at least $3$, the generalized Wilczynski invariants from Definition \ref{D:Wilc} are
equivalently encoded as components of the essential curvature function of the
associated canonical Cartan geometry:
\begin{itemize}
\item $\omega^\sfX\otimes \sfY^k$ for $k=2,\dots,n$ in case of a scalar ODE or systems of ODEs of order $n+1$;
\item additionally, the trace-free part of the component corresponding to $\omega^\sfX\otimes \sfY$ in case of a system of ODEs.
\end{itemize}
Vanishing of all generalized Wilczynski invariants is equivalent to
the fact that the curvature function $\kappa$ has values in the sum
$\bbE+\im(\partial^*)^1$, where the superscript means (filtration) homogeneity
$\geq 1$.
\end{theorem}
\begin{proof}
The key ingredient for this result is the description of Wilczynski
invariants in terms of a partial connection form from
\cite{Dou2008}. Given the manifold $\cE$ describing the equation and
the subbundle $E\subset T\cE$ from \S \ref{S:geo-str}, we can form
the quotient bundle $N:=T\cE/E$. Given sections $\xi$ of $E$ and $s$
of $N$, we can choose $\eta\in\fX(\cE)$ which projects onto $s$, and
project the Lie bracket $[\xi,\eta]$ to $N$. One immediately
verifies that this gives rise to a well defined bilinear operation
$D: \Gamma(E)\times\Gamma(N)\to\Gamma(N)$ which defines
a partial connection, i.e.~it is linear over smooth functions in the
first variable and satisfies a Leibniz rule in the second
variable. Hence, we write it as $(\xi,s)\mapsto D_\xi s$.
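The verification amounts to the standard bracket identities: if $\eta'=\eta+\zeta$
with $\zeta\in\Gamma(E)$, then $[\xi,\eta']=[\xi,\eta]+[\xi,\zeta]$ with
$[\xi,\zeta]\in\Gamma(E)$ (the line bundle $E$ is trivially involutive), so $D_\xi s$
is independent of the chosen lift $\eta$; moreover, for $f\in C^\infty(\cE)$,
$$
[f\xi,\eta]=f[\xi,\eta]-(\eta\cdot f)\,\xi\equiv f[\xi,\eta] \mod \Gamma(E),
\qquad
[\xi,f\eta]=f[\xi,\eta]+(\xi\cdot f)\,\eta,
$$
which give $D_{f\xi}s=f\,D_\xi s$ and the Leibniz rule $D_\xi(fs)=f\,D_\xi s+(\xi\cdot f)\,s$.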
Now the canonical Cartan geometry gives rise to a specific
description of this operation. Denoting by $p:\cG\to\cE$ the Cartan
bundle and by $\omega$ the Cartan connection, we get
$T\cE=\cG\times_P(\fg/\fp)$, with $E\subset T\cE$ corresponding to
the submodule in $\fg/\fp$ spanned by $\sfX+\fp$. This module is $\fq / \fp$, so $N = \cG\times_P(\fg/\fq)$. Otherwise put, the bundle $\cG\to\cE$ defines
a reduction to the structure group $P$ of the (frame bundle of the)
vector bundle $N\to\cE$, and we can
describe the partial connection in terms of this reduction.
First, given a section $s\in\Gamma(N)$ and a lift
$\eta\in\fX(\cE)$ as above, we can further lift $\eta$ to a
$P$--invariant vector field $\tilde\eta\in\fX(\cG)$. Then by
construction, the $P$--equivariant function $\cG\to\fg/\fq$
corresponding to $s$ is given by $\omega(\tilde\eta)+\fq$. To
describe the partial connection, take $\xi\in\Gamma(E)$ and choose
a $P$--invariant lift $\tilde\xi\in\fX(\cG)$. Then the Lie bracket
$[\tilde\xi,\tilde\eta]$ is a $P$--invariant lift of $[\xi,\eta]$,
so $\omega([\tilde\xi,\tilde\eta])+\fq$ is the $P$--equivariant
function corresponding to the projection of $[\xi,\eta]$ and thus
to $D_\xi s$.
Since $\omega(\tilde\xi)$ is $\fq$-valued, \eqref{E:kappa} gives
\begin{equation}\label{E:part-conn}
\omega([\tilde\xi,\tilde\eta]) + \fq = - \kappa(\omega(\tilde\xi),\omega(\tilde\eta)) + \tilde\xi \cdot \omega(\tilde\eta) + [\omega(\tilde\xi),\omega(\tilde\eta)] + \fq.
\end{equation}
Let us write $f:=\omega(\tilde\eta)+\fq$ for the function
corresponding to $s$. The expression
\begin{equation}\label{E:conn-form}
\tau(\tilde\xi)(A+\fq)=-\kappa(\omega(\tilde\xi),A)+[\omega(\tilde\xi),A]+\fq
\end{equation}
is well-defined since $\omega(\tilde\xi)$ has values
in $\fq$. Thus we get a partially-defined one--form on $\cG$ with values in
$L(\fg/\fq,\fg/\fq)$ (which is only defined on tangent vectors
projecting to $E\subset T\cE$). In terms of this form, the right hand
side of \eqref{E:part-conn} reads as $\tilde\xi\cdot
f+\tau(\tilde\xi)(f)$, so $\tau$ is exactly the (partial) connection
form for $D$ on $\cG$.
Now an interpretation of the Wilczynski invariants from \cite{Dou2008} is based on
a proof that the structure group of $N$ can be reduced to $P$ in such a way that one
obtains a connection form for $D$ that satisfies a normalization condition. This
condition is that its values lie in the linear subspace of
$L(\fg/\fq,\fg/\fq)\cong\fa^*\otimes\fa$ spanned by $\fq$ and the
maps of the form $(\sfY^2,A_2),\dots,(\sfY^n,A_n)$ with $A_i\in\fgl_m$. It is
then shown in \cite{Dou2008} that the components as listed in the
Theorem exactly encode the Wilczynski invariants.
To see that $\tau$ satisfies this normalization condition, observe first that
$\omega(\tilde\xi)$ has values in $\bbR\sfX\subset \fq$, so it suffices to deal with the case that
$\omega(\tilde\xi)=\sfX$. Also, the bracket--term in \eqref{E:conn-form} gives a
contribution in $\fq\subset\fa^*\otimes\fa$. Using the vector notation
$\binom{\kappa_1}{\kappa_2}$ from \S \ref{S:codiff-explicit} for (the values of)
$\kappa$, we get $\kappa(\sfX,\_)=\kappa_1$. The values of $\kappa_1$
lie in $C^1(\fa,\fg)$, which as we know admits a $\fq$--invariant decomposition as
$C^1(\fa,\fq)\oplus C^1(\fa,\fa)$. Since in \eqref{E:conn-form} we work modulo
$\fq$, the component of $\kappa$ showing up there is a multiple of the component
$\kappa_1^{\fa}$ of $\kappa_1$ in $C^1(\fa,\fa)$. But by Proposition \ref{P:hds},
normality of $\kappa$ implies that $\kappa_1$ has values in
$\ker(\partial^*_{\fa})$, while $\sfY\cdot\kappa_1$ has values in
$\im(\partial^*_{\fa})$. By Lemma \ref{L:q-valued}, this means that
$\kappa_1^{\fa}\in\fq^\perp\subset\fa^*\otimes\fa$ and that
$\sfY\cdot\kappa_1^{\fa}=0$. This exactly means that the values of $\kappa_1^{\fa}$
lie in the sum of all those lowest weight spaces of the $\fsl_2$--representation
$\fa^*\otimes\fa$, which are perpendicular to the submodule $\fq$.
These lowest weight spaces are spanned by the maps
$(\sfY,A_1),(\sfY^2,A_2),\dots,(\sfY^n,A_n)$ with $A_1\in\fsl_m$ and
$A_k\in\fgl_m$ for $k=2,\dots,n$.
Thus we conclude that $\tau$ satisfies the normalization condition and that the
Wilczynski invariants are equivalently encoded by the class of
$\kappa(\sfX,\_)$ modulo $C^1(\fa,\fq)$. In particular, this class and thus
$\kappa_1^{\fa}$ vanishes identically in the Wilczynski--flat case.
Having all that in hand, the claims in the theorem now follow from two simple
observations. On the one hand, Proposition \ref{P:hds} and Lemma \ref{L:q-valued}
show that the restriction of the $P$--equivariant map
$\phi=\binom{\phi_1}{\phi_2}\mapsto \phi_1^{\fa}$ to $\ker(\partial^*)\subset
C^2_{\hor}(\fg,\fg)$ vanishes on the subspace $\im(\partial^*)$. Thus it factorizes
to the quotient $\ker(\partial^*)/\im(\partial^*)$, which shows that
$\kappa_1^{\fa}$ and thus the components encoding the Wilczynski
invariants are components of the essential curvature function $\kappa_e$. This
also shows that if $\kappa$ has values in $\bbE+\im(\partial^*)^1$, then
$\kappa_e$ vanishes upon insertion of $\sfX$ and hence all Wilczynski invariants
vanish.
On the other hand, suppose that we start from a Wilczynski--flat equation, so
$\kappa = \binom{\kappa_1}{\kappa_2} \in \ker(\partial^*)$ has the property that
$\kappa_1^{\fa}=0$. Then by Lemma \ref{L:q-valued}, $\kappa_1=\kappa_1^\fq \in
\im(\partial^*_\fa)$, so we can take an element $\psi_1\in C^2(\fa, \fg)$ such that
$\partial^*_\fa\psi_1=-\kappa_1$. Compatibility with homogeneities shows that we
may assume that $\psi_1$ is homogeneous of degree $\geq 0$. But then by Proposition
\ref{P:hds}, $\binom{\kappa_1}{\kappa_2}-\partial^*\binom{\psi_1}{0}$ has vanishing
top--component and thus lies in $\bbE$. Hence,
$\binom{\kappa_1}{\kappa_2}\in\bbE+\im(\partial^*)^1$, which completes the proof.
\end{proof}
\subsection{The covariant exterior derivative}
\label{S:d-omega}
Let $(\cG \to M, \omega)$ be a regular Cartan geometry of type
$(\fg,P)$. Following \cite{Cap2016}, we consider the operator
$d^\omega:\Omega^k(\cG,\fg)\to\Omega^{k+1}(\cG,\fg)$ defined by
\[
(d^\omega \varphi)(\xi_0,...,\xi_k) = d\varphi(\xi_0,...,\xi_k) +
\textstyle\sum_{i=0}^k (-1)^i [\omega(\xi_i),
\varphi(\xi_0,...,\widehat{\xi_i}, ..., \xi_k)],
\]
where $\xi_j \in \fX(\cG)$ for all $j$. By Proposition 4.2 of
\cite{Cap2016}, if $\varphi$ is horizontal and $P$--equivariant, then
so is $d^\omega\varphi$. Moreover, $d^\omega$ is compatible with the
natural notion of homogeneity for $\fg$--valued differential forms,
and the curvature $K$ of $\omega$ satisfies the Bianchi identity
$d^\omega K = 0$.
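For instance, specializing the displayed formula to $k=0$, for a smooth function
$f\in\Omega^0(\cG,\fg)=C^\infty(\cG,\fg)$ one gets
$$
(d^\omega f)(\xi)=\xi\cdot f+[\omega(\xi),f].
$$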
\subsection{Wilczynski-flat ODE are of C-class}
Now we are ready to prove our main result.
\begin{theorem} \label{T:Wilc-C} Any Wilczynski-flat ODE
\eqref{E:ODE} with $m=1$, $n \geq 3$ or $m \geq 2$, $n \geq 2$ is
of C-class.
\end{theorem}
\begin{proof} Let $(\cG \to \cE, \omega)$ be the regular, normal
Cartan geometry of type $(\fg,P)$ associated to \eqref{E:ODE} as in
Theorem \ref{T:G0str-Cartan}. We have to show that for a
Wilczynski--flat ODE, the curvature function $\kappa$ has values in
the module $\bbE$ defined in \eqref{E:bbE}. Generalizing the
relation between the curvature $K$ and the curvature function
$\kappa$, horizontal $\fg$--valued $k$--forms on $\cG$ can be
naturally identified with smooth functions $\cG\to
L(\bigwedge^k(\fg/\fp),\fg)$. The natural notions of
$P$--equivariance in the two pictures correspond to each other, see
Theorem 4.4 of \cite{Cap2016}. For the current proof, it will be
helpful to switch between forms and equivariant functions freely,
so we will express the fact that $\kappa$ has values in $\bbE$ as
``$K$ lies in $\bbE$''. In these terms, composing functions with
$\partial^*$ defines a tensorial operator
$\Omega^k_{\hor}(\cG,\fg)\to\Omega^{k-1}_{\hor}(\cG,\fg)$ for each
$k$, and we also denote this operator by $\partial^*$. By
construction, $\partial^*$ maps $P$--equivariant forms to
$P$--equivariant forms. In this language, normality can be simply
expressed as $\partial^*K=0$.
By Theorem \ref{T:Wilc}, Wilczynski--flatness implies that $K$ has
values in $\bbE+\im(\partial^*)^1$. Passing to equivariant
functions, applying Lemma 4.7 of \cite{Cap2016}, and passing back
to differential forms, we conclude that $K=K_1+K_2$ for
$P$--equivariant forms $K_1,K_2\in\Omega^2_{\hor}(\cG,\fg)$ such
that $K_1$ has values in $\bbE$ and $K_2$ has values in
$\im(\partial^*)^1$. Now we prove the theorem in a recursive way by
showing that for any $\ell\geq 1$, from a decomposition $K=K_1+K_2$
such that $K_1$ has values in $\bbE$ and $K_2$ has values in
$\im(\partial^*)^\ell$, we can always obtain a decomposition
$K=\tilde K_1+\tilde K_2$, for which $\tilde K_1$ again has values
in $\bbE$ but $\tilde K_2$ has values in
$\im(\partial^*)^{\ell+1}$. Since $\im(\partial^*)^r=\{0\}$ for
sufficiently large $r$, this implies the result.
So let us assume that $K=K_1+K_2$ as above with $K_2$ having values
in $\im(\partial^*)^\ell$ for some $\ell\geq 1$. We first claim
that for a $P$--equivariant form $\varphi \in
\Omega^2_{\hor}(\cG,\fg)$, which has values in $\bbE$, also
$\partial^*d^\omega\varphi$ has values in $\bbE$. In terms of the
description of $\partial^*$ from Proposition \ref{P:hds}, lying in
$\bbE$ means that the top component of the right hand side of
\eqref{E:hds} has to vanish. Denoting by $\hat X\in\fX(\cG)$ the
vector field characterized by $\omega(\hat X)=\sfX$, we thus have
to show that (the equivariant function corresponding to) $i_{\hat
X}d^\omega\varphi$ has values in
$\ker(\partial^*_\fa)\subset C^2(\fa,\fg)$.
The assumption on $\varphi$ implies $i_{\hat X}\varphi=0$, so for
vector fields $\xi,\eta\in\fX(\cG)$, we get
\begin{equation}\label{E:ixd}
(i_{\hat X}
d^\omega\varphi)(\xi,\eta)=d\varphi(\hat X,\xi,\eta)+[\sfX,\varphi(\xi,\eta)].
\end{equation}
Using $i_{\hat X}\varphi=0$ once more, we get
\begin{equation}\label{E:ixd2}
\begin{aligned}
d\varphi&(\hat X,\xi,\eta)=\hat
X\cdot\varphi(\xi,\eta)-\varphi([\hat
X,\xi],\eta)-\varphi(\xi,[\hat X,\eta])=\\
& \hat X\cdot f(\omega(\xi),\omega(\eta))-f(\omega([\hat
X,\xi]),\omega(\eta))-f(\omega(\xi),\omega([\hat X,\eta])).
\end{aligned}
\end{equation}
Here $f$ denotes the equivariant function corresponding to $\varphi$,
which takes values in $\bbE\subset\ker(\partial^*)$.
By Proposition \ref{P:hds}, $f$ in fact has values
in $\ker(\partial^*_{\fa})\subset C^2(\fa,\fg)$.
Since $\omega(\hat X)$ is constant, \eqref{E:kappa} gives
\begin{equation}\label{E:ixd3}
\omega([\hat X,\xi]) =-K(\hat X,\xi)+[\sfX,\omega(\xi)]+\hat X\cdot\omega(\xi),
\end{equation}
and likewise for $\omega([\hat X,\eta])$.
Now by assumption $K=K_1+K_2$ and $i_{\hat X}K_1=0$,
so $K(\hat X,\xi)=\kappa_2(\sfX,\omega(\xi))$ and $\kappa_2$
has values in $\im(\partial^*)$. By Proposition \ref{P:hds}, this means that
$\kappa_2(\sfX, \_)$ has values in $\im(\partial^*_{\fa})$,
which by part (3) of Lemma \ref{L:q-valued} is contained in $\fq$. In
particular, terms of the form $K(\hat X,\xi)$ insert trivially into
$f$, so these do not contribute. On the other hand, the contribution
to \eqref{E:ixd2} resulting from the last term in \eqref{E:ixd3} is
$$
-f(\hat X\cdot\omega(\xi),\omega(\eta))-f(\omega(\xi),\hat X\cdot\omega(\eta)).
$$
This adds up with the first term in the right hand side of
\eqref{E:ixd2} to $(\hat X\cdot f)(\omega(\xi),\omega(\eta))$.
Since $f$ has values in $\bbE$, the derivative $\hat X\cdot f$
has the same property. Now inserting the last remaining term in the
right hand side of \eqref{E:ixd3} into the right hand side of
\eqref{E:ixd2}, we obtain
$$
-f([\sfX,\omega(\xi)],\omega(\eta))-f(\omega(\xi),[\sfX,\omega(\eta)]).
$$
Viewing $f$ as a function with values in
$\ker(\partial^*_{\fa})\subset C^2(\fa,\fg)$ as above, we can write
the sum of these terms with the last term in the right hand side of
\eqref{E:ixd} as $(\rho_{\sfX}\circ f)(\omega(\xi),\omega(\eta))$.
Here $\rho_{\sfX}$ denotes the natural
action of $\sfX\in\fq$ on $C^2(\fa,\fg)$. But then $\fq$--equivariance
of $\partial^*_{\fa}$ as proved in part (1) of Lemma \ref{L:q-valued}
shows that also this function has values in $\ker(\partial^*_{\fa})$,
which completes the proof of the claim.
Returning to our decomposition $K=K_1+K_2$, we now use the Bianchi
identity $d^\omega K=0$ to get $\partial^*d^\omega
K_2=-\partial^*d^\omega K_1$, so by the claim, this has values in
$\bbE$. Now consider the maps $\partial_{\fg_-}$ and
$\underline{\partial^*}$ defined on the spaces $C^k(\fg_-,\fg)$ as in
the proof of Proposition \ref{P:codiff}, and the algebraic Laplacian
$\Box:=\partial_{\fg_-}\circ
\underline{\partial^*}+\underline{\partial^*}\circ \partial_{\fg_-}$,
which preserves degrees and homogeneity. This clearly can be
restricted to an endomorphism of $\im(\underline{\partial^*})\subset
C^2(\fg_-,\fg)_\ell$ (on which it coincides with
$\underline{\partial^*}\circ \partial_{\fg_-}$), and in the proof of
Proposition 3.10 of \cite{Cap2016} it is shown that this restriction
is bijective. By the Cayley--Hamilton theorem, there is a polynomial
$p_\ell \in \bbR[x]$ such that $p_\ell(\Box)$ is inverse to $\Box$ on
$\im(\underline{\partial^*})_\ell$. Now $p_\ell(\partial^* d^\omega)$
is a well-defined operator on the space of $P$--equivariant forms in
$\Omega^2_{\hor}(\cG,\fg)$, which preserves homogeneities.
Applying our claim once more, we see that $p_\ell(\partial^*
d^\omega)\partial^* d^\omega K_2$ has values in $\bbE$ and by
construction is still homogeneous of degree $\geq\ell$. Thus, $\tilde
K_1=K_1+p_\ell(\partial^*d^\omega)\partial^* d^\omega K_2$ has values
in $\bbE$, while $\tilde K_2:=K_2-p_\ell(\partial^*
d^\omega)\partial^* d^\omega K_2$ has values in $\im(\partial^*)$ and
is homogeneous of degree $\geq\ell$. To verify that $K=\tilde
K_1+\tilde K_2$ is the desired decomposition, it suffices to show that
the homogeneous component of degree $\ell$ of (the equivariant
function corresponding to) $\tilde K_2$ vanishes identically.
By part (3) of Theorem 4.4 of \cite{Cap2016}, for a $P$--equivariant
form $\varphi\in\Omega^2(\cG,\fg)$ which is homogeneous of degree
$\geq\ell$ and corresponds to the equivariant function $f$, the
homogeneous component of degree $\ell$ of the equivariant function
corresponding to $\partial^*d^\omega\varphi$ is given by
$\underline{\partial^*}\circ \partial_{\fg_-} \circ\tgr_\ell\circ f$. Applying
this iteratively starting with the function $\kappa_2$ corresponding
to $K_2$, we remain in the realm of functions having values in
$\im(\partial^*)^\ell$. Thus we iteratively conclude that the
homogeneous component of degree $\ell$ of the function corresponding
to $p_\ell(\partial^* d^\omega)\partial^* d^\omega K_2$ coincides with
$p_\ell(\Box)\circ\Box\circ\tgr_\ell\circ\kappa_2=\tgr_\ell\circ\kappa_2$,
which shows that $\tilde K_2$ has vanishing homogeneous component of
degree $\ell$.
\end{proof}
\begin{example} \label{EX:circles}
Let $\bu = (u^1,...,u^m)$. The equations for circles in an $(m+1)$-dimensional Euclidean space are given by
\[
\bu''' = 3 \bu'' \frac{\langle \bu', \bu'' \rangle}{1 + \langle \bu', \bu' \rangle}.
\]
This ODE system is conformally invariant, and it has been verified that the Wilczynski invariants vanish \cite[Prop.2]{Med2011}. By our Theorem \ref{T:Wilc-C}, the system is of C-class. For $m=1$, the equation is (contact) trivializable, but for $m \geq 2$ the system is not (point) trivializable.
\end{example}
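As a quick sanity check (not part of the original argument), one can verify symbolically that, in the scalar case $m=1$, graphs of circles indeed satisfy the equation above. The short SymPy script below is only an illustration; the symbols and the parametrization of the circle are ours.
\begin{verbatim}
# Sanity check for m = 1: the system reduces to
#   u''' = 3 u'' (u' u'')/(1 + (u')^2),
# and the graph of a circle should satisfy it.
import sympy as sp

x, r, c1, c2 = sp.symbols('x r c1 c2', real=True)
u = c2 + sp.sqrt(r**2 - (x - c1)**2)     # upper arc of a circle, as a graph over x
u1, u2, u3 = [sp.diff(u, x, k) for k in (1, 2, 3)]
residual = u3 - 3*u2*(u1*u2)/(1 + u1**2)

print(sp.simplify(residual))                             # expected: 0
print(residual.subs({x: 0.3, r: 2, c1: -0.7, c2: 1.1}))  # numerical check, ~ 0
\end{verbatim}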
Since the Wilczynski invariants of linear equations are invariants, they vanish whenever the linear equation is trivializable. It follows that an ODE with trivializable linearizations is Wilczynski--flat. Hence we obtain
\begin{cor}\label{C:main} Let $n \geq 2$, $m \geq 1$, with $(n,m) \neq (2,1)$.
Any ODE \eqref{E:ODE} for which the linearization around any solution is
trivializable, is of C-class.
\end{cor}
\begin{example} \label{EX:n-submax} The ODE $u^{(n+1)} - \frac{n+1}{n} \frac{(u^{(n)})^2}{u^{(n-1)}} = 0$, for $n \geq 3$, is submaximally symmetric \cite[p.206]{Olv1995} except when $n=4$ or $n=6$.
At a fixed solution $u$, its linearization is the ODE for $v$ given by
\begin{align} \label{E:n-lin}
\ell_u[v] := v^{(n+1)} - \frac{2(n+1)}{n} a v^{(n)} +
\frac{n+1}{n} a^2 v^{(n-1)} = 0,
\end{align}
where $a := \frac{u^{(n)}}{u^{(n-1)}}$. We have $a' = \frac{a^2}{n}$ and
hence $(\frac{1}{a})'=-\frac{1}{n}$, $(\frac{1}{a^2})'=-\frac{2}{na}$ and $(\frac{1}{a^2})'' =
\frac{2}{n^2}$. Defining $\tilde{v} = \frac{v}{a^2}$, we have
$\tilde{v}^{(n+1)} = \frac{1}{a^2} \ell_u[v] = 0$, so \eqref{E:n-lin}
is trivializable, and the given ODE is of C-class by Corollary
\ref{C:main}. This example is included in a larger family of Wilczynski-flat (hence C-class) equations given in \cite[Example 2]{Dou2008}.
\end{example}
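The computation in this example can also be checked symbolically for a concrete order. The following SymPy snippet (ours, not part of the paper) fixes $n=5$, eliminates $a'$ using $a'=a^2/n$, and confirms that $\tilde{v}^{(n+1)} = \tfrac{1}{a^2}\,\ell_u[v]$.
\begin{verbatim}
# Check of the example above for a concrete order, here n = 5.
import sympy as sp

n = 5
t = sp.symbols('t')
v = sp.Function('v')(t)
a = sp.Function('a')(t)

expr = v / a**2
for _ in range(n + 1):
    # differentiate once, then eliminate a' using a' = a^2/n
    expr = sp.diff(expr, t).subs(sp.Derivative(a, t), a**2 / n)

ell = (sp.diff(v, t, n + 1)
       - sp.Rational(2*(n + 1), n) * a * sp.diff(v, t, n)
       + sp.Rational(n + 1, n) * a**2 * sp.diff(v, t, n - 1))
print(sp.simplify(expr - ell / a**2))   # expected: 0
\end{verbatim}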
\begin{example} \label{EX:n-sys}
Let $m \geq 2$ and $n \geq 2$. Given $\bu = (u^1,...,u^m)$, consider
\[
\bu^{(n+1)} = \mathbf{f}, \qbox{where}
f_i = \begin{cases}
0, & i\neq m;\\
((u^1)^{(n)})^2, & i=m.
\end{cases}
\]
Its linearization is easily seen to be trivializable, so it is of C-class. It is not trivializable since a fundamental invariant does not vanish on it, namely $I_2$ in \cite{DM2014}. A similar 2nd order example was given in \cite[(5.6a)]{KT2014}, which was known to be of C-class since it is torsion-free \cite{Gro2000}.
\end{example}
\subsection{Remark: A potential alternative line of argument}
To conclude the article, let us briefly outline how the theory we have
developed could be used to obtain an alternative proof of our main
result. This line of argument is based on correspondence spaces which
are familiar in the case of parabolic geometries. It depends crucially
on the existence of a natural Segr\'e structure on the space of
solutions of a Wilczynski-flat ODE from \cite{Dou2008}, compare with
\S \ref{S:soln-space-geo-str}. As mentioned there, Segr\'e structures
are classical first order structures corresponding to $Q\subset
\text{GL}(\fa)$. Hence such a structure on a space $S$ comes with a
$Q$--principal bundle $\cG\to S$. The classical way to study such
structures is via the Spencer differential. As we have noted in the
proof of Proposition \ref{P:Tanaka}, this coincides with the
restriction of $\partial_{\fa}$ to a map $C^1(\fa,\fq)\to
C^2(\fa,\fa)$ and is injective. Choosing a $Q$--invariant complement
$\cN\subset C^2(\fa,\fa)$ to the image of the Spencer differential,
there is a canonical principal connection form $\tau$ on $\cG$
characterized by the fact that its torsion lies in
$\cG\times_Q\cN\subset\bigwedge^2T^*S\otimes TS$.
Usually, not too much emphasis is put on the actual choice of $\cN$,
but in the case of Segr\'e structures, this is a surprisingly subtle
issue. Analyzing $\partial_\fa : \fa^* \otimes \fq \to \bigwedge^2 \fa^* \otimes \fa$ in terms of representations of $\fq$, one
easily deduces that there always exist $Q$--invariant complements, but
aside from the $(m,n)=(1,3)$ case, there is always a freedom of
choice. The larger $m$ and $n$ get, the bigger this freedom becomes,
and while there are always only finitely many free parameters
involved, their number gets arbitrarily high.
Now it turns out that the construction from \S \ref{S:codiff} can
also be used to construct uniform normalization conditions for Segr\'e
structures. Indeed, we can view $L(\bigwedge^2(\fg/\fq),\fg)$ as the
subspace in $C^2(\fg,\fg)$ consisting of all cochains vanishing upon insertion of one element of $\fq$. Similarly as in Lemma \ref{L:codiff} one shows that
this subspace is preserved by $\partial^*$ and that the restriction of
$\partial^*$ to it is $Q$--equivariant. Using a similar adjointness
result as in Proposition \ref{P:codiff}, one shows that
$\cN:=\ker(\partial^*_\fa)\subset C^2(\fa,\fa)$ is a $Q$--invariant
complement to the image of the Spencer differential.
Now the alternative approach for proving that Wilczynski--flat ODEs
form a C-class goes as follows. Starting with such an ODE $\cE$,
form a local space $\cS$ of solutions. As proved in \cite{Dou2008},
this space of solutions inherits a natural Segr\'e structure. This
gives rise to a principal $Q$--bundle $\cG\to\cS$, which we may endow
with the canonical principal connection $\tau\in\Omega^1(\cG,\fq)$
for the choice $\cN:=\ker(\partial^*_\fa)$ of normalization
condition. Taking the canonical soldering form $\theta$ on $\cG$,
which can be viewed as having values in $\fa$, we can form
$\theta\oplus\tau$, and this defines a Cartan connection $\omega$ of
type $(\fg,Q)$ on $\cG$.
The restriction of the principal action of $Q$ defines a free right
action of $P\subset Q$ on $\cG$, and we can form the correspondence
space, i.e.\ the space $\cC\cS:=\cG/P$ of orbits. This can be
identified with the total space of the associated bundle
$\cG\times_Q(Q/P)$. Of course, $\cG\to\cC\cS$ is a principal
$P$--bundle and it is easy to verify that $\omega$ also is a Cartan
connection of type $(\fg,P)$ on $\cG\to\cC\cS$.
Guided by what happens for parabolic geometries, we expect that it is possible
to show that $\cC\cS$ is locally isomorphic
to $\cE$, so $(\cG,\omega)$ can be locally viewed as a Cartan
geometry of type $(\fg,P)$ over $\cE$. This isomorphism is expected to have
the property that the underlying filtered $G_0$--structure of this
Cartan geometry is the given structure on $\cE$. From our choice of
normalization conditions it follows that $\omega$ is also normal (in
the sense used in this article) as a Cartan connection of type
$(\fg,P)$. Uniqueness of the normal Cartan geometry implies that
$(\cG,\omega)$ is locally isomorphic to the canonical Cartan geometry
on $\cE$ and by construction it descends to the local space $\cS$ of
solutions, which would complete the argument.
\end{document} |
\begin{document}
\title{On the optimal importance process for piecewise deterministic Markov process}
\author{H. Chraibi}\address{PERICLES department, EDF lab saclay, 7 Bd Gaspard Monge, 91120 Palaiseau, France}
\author{A. Dutfoy}\address{PERICLES department, EDF lab saclay, 7 Bd Gaspard Monge, 91120 Palaiseau, France}
\author{\underline{T. Galtier}}\address{LPSM (Laboratoire de probabilit\'es, statistique et mod\'elisation), Universit\'e Paris Diderot, 75205 Paris Cedex 13, France,[email protected]}
\author{J. Garnier}\address{CMAP (Centre de math\'ematiques appliqu\'ees), \'Ecole polytechnique, 91128 Palaiseau Cedex, France}
\begin{abstract}
In order to assess the reliability of a complex industrial system by simulation, and in reasonable time, variance reduction methods such as importance sampling can be used. We propose an adaptation of this method for a class of multi-component dynamical systems which are modeled by piecewise deterministic Markovian processes (PDMP). We show how to adapt the importance sampling method to PDMP, by introducing a reference measure on the trajectory space. This reference measure makes it possible to identify the admissible importance processes. Then we derive the characteristics of an optimal importance process, and present a convenient and explicit way to build an importance process based on these characteristics. A simulation study compares our importance sampling method to the crude Monte-Carlo method on a three-component system. The variance reduction obtained in the simulation study is quite spectacular.\\
\end{abstract}
\keywords{Monte-Carlo acceleration ; importance sampling ; hybrid dynamic system ; piecewise deterministic Markovian process ; cross-entropy ; reliability}
\subjclass{60K10;90B25;62N05}
\maketitle
\section{Introduction}
\label{sec1}
For safety and regulatory issues, nuclear or hydraulic industries must assess the reliability of their power generation systems. To do so, they can resort to probabilistic safety assessment. In recent years, dynamic reliability methods have been gaining interest, because they avoid conservative static approximations of the systems and they better capture the dynamics involved in the systems. When dealing with complex industrial systems, this kind of reliability analysis faces two main challenges: the first challenge is related to the modeling of such complex systems, and the second one concerns the quantification of the reliability. Indeed, as we refine the model, the estimation of the reliability requires more effort and is often challenging.
\subsection{A model based on a PDMP}
\label{subsec:PDMP}
Due to the complexity of the systems, the reliability analysis is often done through an event tree analysis \cite{cepin2011assessment}, which relies on static and conservative approximations of the system. With the development of computational capacities, it is now possible to consider more accurate tools for reliability assessment. Several approaches have been proposed to better model the dynamical processes involved in the systems. In this article, we focus on the option proposed in \cite{zhang2008piecewise} and \cite{dufour2015numerical}, which consists in modeling the system using a piecewise deterministic Markovian process (PDMP) with boundaries.
In many industrial systems, and in particular in power generation systems, failure corresponds to a physical variable of the system (such as temperature, pressure, water level) entering a critical region. The physical variables can enter this region only if a sufficient number of the basic components of the system are damaged. In order to estimate the reliability we need an accurate model of the trajectories of the physical variables. In industrial systems, the physics of the system is often determined by ordinary or partial differential equations which depend on the statuses of the components within the system (on, off or failed). Therefore the dynamics of the physical variables changes whenever the statuses of the components are altered. Such alteration can be caused by automatic control mechanisms within the systems or failures or repairs. It is also possible that the values of physical variables impact the statuses of the components, because the failure and repair rates of the components depend on the physical conditions. In order to deal with this interplay between the physical variables and the statuses of components, we need to model their joint evolution. The vector gathering these variables is called the state of the system. To address the challenge of modeling the trajectory of the state of the system, we model the evolution of the state of the system by a piecewise deterministic Markovian process (PDMP) with boundaries. PDMPs were introduced by M.H.A Davis in \cite{davis1984,davis1993markov}, they benefit from high modeling capacity, as they are meant to represent the largest class of Markovian processes that do not include diffusion. These processes can easily incorporate component aging, failure on demand, and delays before repairs.
For a given system, we denote its state at time $t$ by $Z_t =(X_t,M_t)$, where $X_t$ is the vector of the values of the physical variables, and $M_t$ the vector gathering the statuses of all the components in the system. Throughout the paper we call $X_t$ the position of the system, and $M_t$ the mode of the system. $\mathbf Z=(Z_t)_{t\in[0,t_f)}$ represents a trajectory of the state of the system up to a final observation time~$t_f$. We consider that the trajectories are all initiated in a state $z_o$.
Recall that the system fails when the physical variables enter a critical region. We denote by $D$ the corresponding region of the state space, and we denote by $\mathscr D$ the set of the trajectories of $\mathbf Z$ that pass through $D$. In order to estimate the reliability on the observation time $t_f$, we want to estimate the probability of system failure defined by $$p=\mathbb{P}\big(\mathbf{Z}\in \mathscr D \,\big|\, Z_0=z_o\big)=\mathbb{P}_{z_o}\big(\mathbf{Z}\in \mathscr D \big).$$
\subsection{Accelerate reliability assessment by using importance sampling}
\label{subsec:Pyc}
The second challenge is that the reliability of a complex industrial system can rarely be assessed analytically, so reliability analysis often relies on simulation techniques. The company \'Electricit\'e de France (EDF) has recently developed the PyCATSHOO toolbox \cite{Pycatshoo,Pycatshoo2}, which allows the simulation and the modeling of dynamic hybrid systems. PyCATSHOO bases its modeling on PDMPs. Thanks to Monte-Carlo simulation, it evaluates dependability criteria, among which is the reliability of the system. The method we present in this article is used to accelerate the reliability assessment within the PyCATSHOO toolbox.
In the context of reliable systems, crude Monte-Carlo techniques perform poorly because the system failure is a rare event. Indeed, with the Monte-Carlo method, when the probability of failure approaches zero, the number of simulations to get a reasonable precision on the relative error increases dramatically, and so does the computational time. To reduce this computational burden, one option is to reduce the number of simulations needed by using a variance reduction method. Among variance reduction techniques \cite{caron2014some,morio2014survey}, we may think of multilevel splitting techniques \cite{cerou2006genetic, del2005genealogical} and of importance sampling techniques \cite{dufour2015numerical,tutorialCE,bucklew2013introduction,zio2013monte}. A variance reduction method inspired from particle filtering can be used on a particular case of PDMP, namely a PDMP without boundary \cite{whiteley2011monte}. Unfortunately, industrial systems are often modeled by a PDMP with boundaries, and other variance reduction methods need to be designed for these cases. We choose to focus on the importance sampling technique, because: 1) the importance sampling strategy that we propose can easily be implemented (in particular in the PyCATSHOO toolbox) 2) the results derived in this paper (in particular the reference measure and the expressions of the densities and likelihood ratios) should be useful to study multilevel splitting.
In this paper we present how to adapt the importance sampling technique for PDMP. By doing so we generalize the use of importance sampling, not only for many power generation systems, but also for any phenomenon that can be modeled by a PDMP. As PDMP generalizes numerous kinds of processes (among which are discrete Markov chains, continuous time Markov chains, compound Poisson processes or queuing systems), the scope of our work goes way beyond the study of power generation systems.
\subsubsection{Prerequisite for importance sampling on PDMPs}
\label{subsec:IS}
Remember that we want to apply importance sampling to estimate the probability $p=\mathbb{P}_{z_o}\big(\mathbf{Z}\in \mathscr D \big)$ that the system fails. In our case, importance sampling would consist in simulating from a more fragile system, while weighting the simulation outputs by the appropriate likelihood ratio. The issue is that the random variable we are considering is a trajectory of a PDMP, so we need to clarify what the density (or the likelihood) of a PDMP trajectory is. Namely, we need to introduce a reference measure for PDMP trajectories, and to identify the related densities.
In simple cases of dynamical importance sampling, this issue of the reference measure is often left implicit, because the reference measure has an obvious form: it is often a product of Lebesgue measures, or a product of discrete measures. But PDMPs are very degenerate processes: their laws involve hybrid random variables which have continuous and discrete parts. In this context, it is important to ensure that we do have a $\sigma$-finite reference measure in order to properly define the densities and the likelihood ratios.
Suppose $\zeta$ is a reference measure for $ \mathbb{P}_{z_o}\big(\mathbf{Z} \in . \big)$, we denote by
$f$ the density of $\mathbf Z$ with respect to $\zeta$, and we denote by
$g$ the density of an importance process $\mathbf Z^\prime$ with respect to $\zeta$. If $\zeta$ exists, and $f$ and $g$ satisfy $\forall \mathbf z\in\mathscr D,\ f(\mathbf{z})\ne 0\Rightarrow g(\mathbf{z})\ne 0,\, $ then we can write:
\begin{align}
\mathbb{P}_{z_o}\big(\mathbf{Z}\in \mathscr D \big) = \mathbb{E}_f\big[\indic{\mathscr D}(\mathbf Z)\big]
&= \int_\mathscr{D} f(\mathbf{z})\,d\zeta(\mathbf{z}) = \int_\mathscr{D}\dfrac{f(\mathbf{z})}{g(\mathbf{z})}g(\mathbf{z})\,d\zeta(\mathbf{z}) = \mathbb{E}_g\bigg[\indic{\mathscr D}(\mathbf Z)\dfrac{f(\mathbf{Z})}{g(\mathbf{Z})}\bigg]\label{eq:ISPDMP}.
\end{align}
If $\big(\mathbf Z_1^\prime,\dots \mathbf Z_{N_{sim}}^\prime\big)$ is a sample of independent trajectories simulated according to an importance process with density $g$, then $\mathbb{P}_{z_o}\big(\mathbf{Z}\in \mathscr D \big) $ can be estimated without bias by:
\begin{align}
\hat{p}_{IS}&= \frac 1 {N_{sim}} \sum_{i=1}^{N_{sim} }\indic{ \mathscr D }(\mathbf Z_i^\prime) \frac{f(\mathbf Z_i^\prime)}{g(\mathbf Z_i^\prime)} & \mbox{with}\quad \mathbb V\mbox{ar}(\hat{p}_{IS}) =\frac {\mathbb E_f\left[\indic{\mathscr D}(\mathbf Z)\frac{f(\mathbf Z)}{g(\mathbf Z)}\right]-p^2 }{N_{sim}}
\label{pIS}
\end{align}
When $\mathbb E_{f}\big[\indic{ \mathscr D }(\mathbf Z) \frac{f(\mathbf Z)}{g(\mathbf Z)}\big] <\infty$ and the conditions above are verified, we have a central limit theorem on $\hat p_{IS}$:
\begin{equation}
\sqrt{N_{sim}} (\hat p_{IS}-p)\longrightarrow \mathcal N(0,\sigma_{IS}^2)\quad \mbox{where}\quad \sigma_{IS}^2=\mathbb E_{f}\big[\indic{ \mathscr D }(\mathbf Z) \frac{f(\mathbf Z)}{g(\mathbf Z)}\big] - p^2.
\end{equation}
Thus the use of importance sampling on PDMP trajectories requires the following three conditions:
\begin{itemize}
\item[(C1)] We have a measure $\zeta$ on the trajectory space, and the trajectory $\mathbf Z$ of the system state has density $f$ with respect to $\zeta$
\item[(C2)] We are able to simulate trajectories according to an importance process $\mathbf Z^{\prime} $ which has density $g$ with respect to $\zeta$ on $\mathscr D $ such that $\mathbb E_{f}\big[\indic{ \mathscr D }(\mathbf Z) \frac{f(\mathbf Z)}{g(\mathbf Z)}\big] <\infty$.
\item[(C3)] $\zeta$-almost everywhere in $ \mathscr D$ we have $\ f(\mathbf{z} )\ne 0 \Rightarrow g(\mathbf{z} )\ne 0$
\end{itemize} The existence of a reference measure is an important theoretical argument, but it can also be used to characterize the admissible importance processes. Knowing the reference measure $\zeta$ tells us what we can modify in the law of $\mathbf Z$ to obtain an importance process $\mathbf Z^\prime$ with a well-defined likelihood ratio. It is valuable to know to what extent we can modify the density $f$ to get the density $g$, because the variance $\sigma_{IS}^2$ depends on the density $g$.
It is theoretically possible to design an importance sampling strategy with zero variance: indeed, it suffices to use an importance process with the density \begin{equation}
g^*(\mathbf z)=\dfrac{\indic{\mathscr D}(\mathbf z)}{p}f(\mathbf z)\ .
\end{equation} In practice, however, we cannot reach this zero variance, as we do not know the value of $p$. The expression of $g^*$ rather serves as a guide to build an efficient and explicit density $g$. Indeed, we can try to choose a density $g$ as close as possible to $g^*$ in order to get a strong variance reduction. \\
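To make the estimator \eqref{pIS} concrete, here is a minimal numerical illustration on a toy problem which is not a PDMP: estimating $\mathbb P(X>10)$ for an exponential random variable $X$ with rate $1$, by sampling from a more ``fragile'' exponential density. All names and parameter values below are ours and serve only to illustrate the weighting mechanism.
\begin{verbatim}
# Toy illustration of the importance sampling estimator:
# estimate p = P(X > 10) for X ~ Exp(1), sampling from g = Exp(0.2).
import numpy as np

rng = np.random.default_rng(0)
N_sim, threshold, theta = 100_000, 10.0, 0.2

x = rng.exponential(scale=1.0/theta, size=N_sim)   # samples from g
w = np.exp(-x) / (theta*np.exp(-theta*x))          # likelihood ratio f/g
p_hat = np.mean((x > threshold) * w)

print(p_hat, np.exp(-threshold))   # IS estimate vs exact value ~ 4.54e-05
\end{verbatim}
The same mechanism is used in the sequel for PDMP trajectories, with the trajectory density introduced in Section \ref{sec:zeta} in place of the exponential density.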
\subsubsection{Our contributions to the literature}
Many authors have used importance sampling on particular cases of PDMP, sometimes without noting that they were PDMPs, see \cite{labeau1996a, labeau1996b, lewis1984, Marseguerra1996}. Sometimes, the authors using PDMPs avoid considering automatic control mechanisms which activate and deactivate components depending on the values of physical variables. Such automatic control mechanisms play an important part in power generation systems, and therefore they cannot be avoided in our case. Also, the modeling of control mechanisms requires working with a special kind of PDMPs, namely the PDMPs with boundaries. These PDMPs are typically the kind for which the reference measure is complex. In \cite{Marseguerra1996,ramakrishnan2016unavailability}, importance sampling is used on PDMP while taking into account automatic control mechanisms, but the reference measure is not clearly identified, and so far we have not found a proof that the likelihood ratios involved in importance sampling on PDMP are always defined. In Section \ref{sec:zeta} we provide a reference measure for PDMP trajectories. This allows us to define the likelihood ratios for PDMP trajectories and to use the importance sampling method, but also to identify the admissible importance processes. Our major contributions are presented in Section \ref{sec:Opti}, where the characteristics of the optimal importance process are identified and used to propose a convenient way to build the importance process in practice. Note that the characteristics of the optimal importance process are identified for the general case of PDMP; therefore our result can be generalized to any subclass of the PDMP process, like Markov chains, continuous time Markov chains, or queuing systems.
\subsubsection{Optimization of the variance reduction}
\label{subsec:CE}
Finding the optimal importance process is equivalent to solving the following minimization problem: $$g^{*}=\underset{g}{\mbox{argmin }} \mathbb E_{f}\big[\indic{ \mathscr D }(\mathbf Z) \frac{f(\mathbf Z)}{g(\mathbf Z)}\big].$$ As minimizing over a space of densities is difficult, we usually consider a parametric family
of importance densities $\{g_\alpha \}$ and look for a parameter $\alpha$ which yields an estimator with the smallest possible variance. Under favorable circumstances the form of the family can be determined by a large deviation analysis \cite{dupuis2004importance, heidelberger1995fast, siegmund1976importance}, but the large deviation method is difficult to adapt to PDMP with boundaries which are degenerate processes with state spaces with complicated topologies. Therefore we focus on other methods which rather try to minimize an approximation of the distance between the importance density $g$ and the optimal one $g^{*}$. For instance, if the approximated distance happens to be $ D(g,g^{*}) = \mathbb E_{f}\big[ \frac{g^{*}(\mathbf Z)}{g(\mathbf Z)}\big] $ it is equivalent to minimize the variance of the estimator, and if we consider the Kullback-Leibler divergence so that $D(g,g^{*}) = \mathbb E_{g^{*}}\Big[\log\big( \frac{g^{*}(\mathbf Z)}{g(\mathbf Z)}\big)\Big] $, we would be using the Cross-Entropy method \cite{tutorialCE, zio2013monte}. These two options have been compared on a set of standard cases in \cite{chan2011comparison}. They yielded similar results, though results obtained with the Cross-Entropy seemed slightly more stable than with the other option. In \cite{zuliani2012rare} the Cross-Entropy method was applied to a model equivalent to a PDMP without boundaries and showed good efficiency. Therefore we choose this method to select the parameters of the importance process in our paper. Of course, the efficiency of this procedure strongly depends on the choice of the parametric family of importance densities.\\
The rest of the paper is organized as follows: Section~\ref{sec:PDMP} introduces our model of multi-component system based on a Piecewise deterministic Markovian process. In Section~\ref{sec:zeta}, we introduce a reference measure on the space of the PDMP trajectories and study the admissible importance processes. In Section~\ref{sec:Opti} we present an optimal process and a clever way to build the importance process in practice. In Section~\ref{sec:Ex} we apply our adaptation of the importance sampling technique on a three-component system and compare its efficiency with the Monte-Carlo technique.\\
\section{A model for multi-component systems based on PDMP}
\label{sec:PDMP}
\subsection{State space of the system}
We consider a system with $N_c$ components and $d$ physical variables. Remember we call position the vector $X\in \mathbb R^d$ which represents the physical variables of the system, and we call mode the vector $M=(M^1,M^2, ..., M^{N_c})$ gathering the statuses of the $N_c$ components. The state of the system $Z$ includes the position and the mode: $Z=(X,M)$.\\
For ease of the presentation, we consider the status of a component can be $ON$, or $OFF$, or out-of-order (noted $F$), so that the set of modes is $\mathbb M=\{ON,OFF,F\}^{N_c}$, but as long as $\mathbb M$ stays countable, it is possible to consider more options for the statuses of the components. For instance, one could consider different regimes of activity instead of the simple status $ON$, or different types of failure instead of the status $F$. Note that we can also deal with continuous degradations, like the size of a breach in a pipe for instance: the presence of the degradation can be included in the mode and its size in the position.\\
Generally, there are some components in the system which are programmed to activate or deactivate when the position crosses some thresholds. For instance, this is typically what happens with a safety valve: when the pressure rises above a safety limit, the valve opens. To take into account these automatic control mechanisms, within a mode $m$ the physical variables are restricted to a set $\Omega_m\subset \mathbb R^d$, which is assumed open. We set $E_m=\{(x,m), x\in\Omega_m\}$, so that the state space is: \begin{equation}E=\underset{m\in\mathbb{M}}{\bigcup} E_m =\underset{m\in\mathbb{M}}{\bigcup} \Big\{(x,m), x\in\Omega_m\Big\} \end{equation}
\subsection{Flow functions}
In a given mode $m$, i.e. a given combination of statuses of components, the evolution of the position is determined by an ordinary differential equation. We denote by $\phi^m_x $ the solution of that equation initiated in $x$. If we consider the state $Z_t$ at time $t$, there exists a random time $T>0$ such that $\forall s\in[0,T), $ $X_{t+s}=\phi^{M_t}_{X_t}(s)$ and $M_{t+s}=M_t$. For an initial state $z\in E$, we can introduce the flow function $\Phi_z$ with values in $E$. Regarding the evolution of the trajectory after a state $Z_t=(X_t,M_t)$, the next states are locally given by the function $\Phi_{Z_t}$:
\begin{align}
&\exists T>0,\,\forall s\in[0,T) ,\quad \nonumber\\
& Z_{t+s} = \Phi_{Z_t}(s)= \big(\phi^{M_t}_{X_t} (s),M_t\big) = \big( X_{t+s},M_t\big) \label{flow}
\end{align}
In practice an approximation of the function $\phi^{m}_{x}$ can be obtained by using a numerical method solving the ordinary differential equations. For instance the PyCATSHOO toolbox can use, among others, the Runge-Kutta methods up to the fourth order.
\subsection{Jumps}
\label{subsec:jumps}
The trajectory of the state can also evolve by jumping. This typically happens because of control mechanisms, failures, repairs, or natural discontinuities in the physical variables. When such a jump is triggered, the current state moves to another one by changing its mode and/or its position.
We denote by $\xoverline{ E\,}$ the closure of $E$, and $\mathscr B(E)$ the Borelian $\sigma$-algebra on $E$. We define $T$ as the time until the next jump after $t$, such that the next jump occurs at time $t+T$. The destination of this jump is determined according to a transition kernel $\mathcal K_{Z_{t+T^-}}$ where $Z_{t+T^-}\in \xoverline{ E\,}$ is the departure state of the jump. This kernel is defined by: \begin{align}
&\forall B\in \mathscr B (E),\quad
\mathbb P \left(Z_{t+T} \in B|Z_{t+T^-}=z^-\right) =\mathcal K_{z^-}(B) \ ,\label{kernel0}
\end{align} where $\mathscr B(.)$ indicates the Borel $\sigma$-algebra of a set. For every $z^-\in\xoverline{ E\,}$, let $\nu_{z^-}$ be a $\sigma$-finite measure on $E$ such that $\mathcal K_{z^-}\ll\nu_{z^-}$. For every $z^-\in\xoverline{ E\,}$, we define $K_{z^-}$ as the density of $\mathcal K_{z^-}$ with respect to $\nu_{z^-}$, so:
\begin{align}
&\forall B\in \mathscr B (E),\quad
\mathbb P \left(Z_{t+T} \in B|Z_{t+T^-}=z^-\right) =\int_B K_{z^-}(z)\,d \nu_{z^-}(z) \ ,\label{kernel}
\end{align}
The kernel density must satisfy $K_{z^-}(z^-)=0$, so that even if $\nu_{z^-}$ has a Dirac point in $z^-$, jumping on the departure state is impossible. Note that with this setting, the law of the arrival state of a jump can depend on the departure point $z^-$. For instance, if the physical variables are all continuous, then the reference measure of the transition kernel $\nu_{z^-}$ could be defined by: \begin{align}\forall B\in\mathscr B (E), \qquad & \nu_{z^-} (B)=\sum_{
\begin{array}{c}
w\in\mathbb{M}\backslash\{m^-\},\\
(x^-,w)\in E
\end{array}
} \delta_{(x^-,w)}(B) . \end{align}
In this example, the jump kernel is discrete:
\begin{align}\forall B\in\mathscr B (E), \qquad & \mathcal K_{z^-} (B)=\sum_{
\begin{array}{c}
w\in\mathbb{M}\backslash\{m^-\},\\
(x^-,w)\in E
\end{array} }
\mathbb P \left(Z_{t+T}=(x^-,w)|Z_{t+T^-}=(x^-,m^-)\right)\delta_{(x^-,w)}(B) , \end{align}
and it is generally the case, but one can imagine some cases where the kernel includes a continuous part. For instance, consider that the physical variables have two dimensions, the first corresponding to the pressure on a concrete structure, and the second to the size of a crack in the structure. One can consider that the crack length increases in a jerky way, and that the amplitude of the increase is random and has a continuous law. For a jump triggered from a state $z^-=\big((x_1^-,x_2^-),m^-\big)\in E$ we could have:
\begin{align}\forall B\in\mathscr B (E), \qquad & \nu_{z^-} (B)= \int_{\big\{y_2>0\big|\big((x_1^-,y_2),m^-\big)\in B\big\}}d\mu_{Leb}(y_2) \end{align}
where $\mu_{Leb}(.)$ corresponds to the Lebesgue measure, and \begin{align}\forall B\in\mathscr B (E), \qquad & \mathcal K_{z^-} (B)=\int_B K_{z^-}(z)\,d \nu_{z^-}(z) =\int_{\big\{y_2>0\big|\big((x_1^-,y_2),m^-\big)\in B\big\}}K_{z^-}\big(\big((x_1^-,y_2),m^-\big)\big)d\mu_{Leb}(y_2) \end{align}
with $K_{z^-}\big(\big((x_1,x_2),m\big)\big)=0$. We think that cases of non-discrete jump kernels should be rather rare in the reliability analysis field, but PDMPs are also used in other fields, like finance, where non-discrete jump kernels could be more common and where the use of importance sampling can be of interest too \cite{rolski2009stochastic}. That is why we keep the most general form of PDMP, which can handle any type of jump kernel.
\subsection{Jump times}
Now, assuming that $Z_t=z$, we present the law of the time until the next jump after $t$, which is denoted by~$T$.
\subsubsection*{Jumps at boundaries}
For $m\in \mathbb M$, let $\partial\Omega_m$ be the boundary of $\Omega_m$. The boundary of the set $E_m$ is the set $\partial E_m =\{(x,m), x\in\partial\Omega_m\} $. For $z=(x,m)\in E$, we define $t^*_z=\inf\{s>0,\Phi_z(s)\in \partial E_m\}$ the time until the flow hits the boundary. We take the convention $t^*_z= + \infty\ $ if $\{s>0,\Phi_z(s)\notin E_m\}=\emptyset$.
Assume that the system starts in state $z=(x,m)\in E$. When the flow leads the position out of its restricted set $ \Omega_m$, i.e. the state touches $\partial E_m$, an automatic jump is triggered (see Figure~\ref{sautdet}), and $T=t^*_z$.
\begin{figure}
\caption{A jump at boundary.}
\label{sautdet}
\end{figure}
Boundaries can be used to model automatic control mechanisms, or any automatic change in the status of a component. For instance in a dam, if the water level $X$ reaches a given threshold $x_{max}$ the evacuation valve automatically opens to avoid overflow. If $M= C ,\ O ,\ F $ represent respectively the modes where the valve is closed, or opened, or failed, this control system could be modeled by setting $\Omega_{C}=(0,x_{max})$ and $K_{(x_{max},C)}(\{(x_{max},O)\})=1$.\\
\subsubsection*{Spontaneous jumps}
\begin{figure}
\caption{A spontaneous jump.}
\label{sautalea}
\end{figure}
The trajectory can also jump to another state when a random failure or a repair occurs (see Figure \ref{sautalea}). The distribution of the random time at which it happens is usually modeled through a state-related intensity function $\lambda:E\to\mathbb R_+$. For $z \in E$, $\lambda(z )$ represents the instantaneous probability (also called hazard rate) of having a failure or a repair at state $z $. If $Z_t=z$ and $T$ is the duration until the next jump, $\forall s< T$ we have $Z_{t+s}=\Phi_{z }(s)$. To simplify the notations in the following, we introduce the time-related intensity $\lambda_z$ such that $\lambda_{z}(s)=\lambda(\Phi_{z}(s))$ and $\Lambda_{z}(s)=\int_{0}^{s} \lambda \big(\Phi_{z}(u)\big) du$.
If $\mathbb P_{z}(.)$ is the probability of an event knowing $Z_t=z$, we have:
\begin{equation}
\mathbb P_{z}(T\leq s)=\left\{\begin{array}{cr}
1-\exp\left[-\Lambda_{z}(s)\right] & \mbox{ if }s<t^*_z\, ,\\
1& \mbox{ if }s\geq t^*_z\, .\\
\end{array}\right.
\label{interjump-time}
\end{equation}
The law of $T$ has a continuous and a discrete part (see Figure \ref{probaT}). \begin{figure}
\caption{An example of the cumulative distribution function of $T$.}
\label{probaT}
\end{figure}
As there is a discontinuity at $t_z^*$ in the cumulative distribution function of $T$, the reference measure of $T$ must include a Dirac point at $t_z^*$ and therefore depends on $z$. We denote by $\mu_z$ the reference measure of $T$, which is such that:
\begin{align}
\forall B\in \mathscr B\big([0,t_z^*]\big),\qquad \mu_{z}\big(B\big)=\mu_{Leb}\big(B\cap [0,t_z^*)\big)+\delta_{t_z^*}\big(B\big) . \label{measureMu}
\end{align}
This measure will be useful to define the dominant measure $\zeta$ in Section \ref{sec:zeta}. It also allows us to reformulate the law of $T$ in integral form:
\begin{align}
\mathbb P_{z} (T\leq s) &=\int_{(0,s]} \bigg(\lambda_z(u) \bigg)^{\mathbbm{1}_{ u < t^*_z }}\exp\Big[-\Lambda_{z }(u)\Big] d\mu_z(u) \ .\label{Tlaw}
\end{align}
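In practice, the law \eqref{Tlaw} can be sampled by inversion of the cumulative intensity: draw an exponential variable $E$ with mean $1$ and return the first time at which $\Lambda_z$ reaches $E$, truncated at $t^*_z$. The sketch below is ours, with assumed helper signatures and a crude quadrature step; in an actual implementation the integration of $\lambda_z$ would be coupled with the ODE solver computing the flow.
\begin{verbatim}
# Sketch: sample the inter-jump time T by inverting the cumulative intensity.
# lambda_z is a callable u -> lambda_z(u); t_star is t*_z (possibly np.inf).
import numpy as np

def sample_jump_time(lambda_z, t_star, rng, dt=1e-3):
    e = rng.exponential()            # threshold E ~ Exp(1)
    cum, t = 0.0, 0.0
    while t < t_star:
        cum += lambda_z(t) * dt      # left-endpoint approximation of Lambda_z(t)
        t += dt
        if cum >= e:
            return min(t, t_star)    # spontaneous jump before hitting the boundary
    return t_star                    # the flow reaches the boundary first: T = t*_z
\end{verbatim}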
\subsection{A link between jump rate and the hazard rates of the possible transitions}
Note that the equation \eqref{interjump-time}, or \eqref{Tlaw}, gives the time of the next jump, but it does not tell whether the transition is a failure, or a repair, or an automatic control mechanism. The type of the transition triggered is determined by the transition Kernel $\mathcal K_{Z_{t+T}^-}$. For each jump, we consider that there can be a countable number of possible transitions. Each type of transition is indexed by a number in the countable set $J$. When $Z_{t+T}^- \in \partial E$, $\mathcal K_{Z_{t+T}^-}$ can take an arbitrary form, but when $Z_{t+T}^- \in E$, the density of the kernel $ K_{Z_{t+T}^-}$ is linked to the hazard rates of the possible transitions, as shown in the following.
If a transition is indexed by $j\in J$, we denote by $T^j$ the time between $t$ and the next occurrence of this transition, taking by convention $T^j=+\infty$ if the transition does not occur. This way the time of the next jump satisfies: \begin{equation}
T=\min\left[\{T^{j}, \forall j\in J\}\cup \{t^*_z,\}\right]. \label{eq:TTj}
\end{equation} Let $\lambda^j:E\to\mathbb R_+$ be its associated state-related intensity function, such that:\begin{equation}
\forall s<t_z^*,\quad \mathbb P_{z}(T^j\leq s)=1-\exp\left[-\int_{0}^{s} \lambda^j \big(\Phi_z(u)\big)\, du\right] .\end{equation}
For instance, if the transition $j$ corresponds to a failure the function $\lambda^j$ is the associated failure rate, and respectively if the transition $j$ corresponds to a repair, the function $\lambda^j$ is the associated repair rate. Knowing $Z_t=z_t=(x,m)$, and therefore, knowing the path given by $\phi_x^m$ that the positions are following, we make the assumption that the times $T^j$ are independent. This assumption is true if the position gathers all the variables affecting the different types of transitions when the system is in mode $m$. According to the equation \eqref{eq:TTj}, this conditional independence implies that:
\begin{equation}
\forall z ^- \in E,\qquad \lambda(z^-) =\sum_{j\in J} \lambda^j(z^-)\ . \label{lambda}
\end{equation}
Note that equation \eqref{lambda} is only valid when the departure state $z^-$ is not on a boundary.
We denote by $B^j_{z^-}$ the possible arrival states of a jump when the transition $j$ is triggered and when the departure state is $z^-\in E$. We assume that the different types of transition are exclusive, meaning that for $i\neq j$ we have $B^i_{z^-}\cap B^j_{z^-}=\emptyset$. Then the probability of triggering the transition $i$ from the departure state $z^-$ is $\mathcal K_{z^-}(B^i_{z^-})$, and we have:
\begin{equation}
\forall z^-\in E,\qquad \mathcal K_{z^-}(B^i_{z^-}) = \frac{\lambda^i(z^-)}{\lambda(z^-)}\ . \label{eq:Klambda}
\end{equation}
Similarly the equation \eqref{eq:Klambda} is also valid only when the departure state $z^-$ is not on a boundary. When $z^-\in\partial E$ there is no link between $\lambda$ and $\mathcal K_{z^-}$.
\subsection{Generate a trajectory}
In order to generate a realization of the PDMP, one can follow these steps \cite{davis1984,davis1993markov,dufour2015numerical}: \begin{enumerate}
\item Start at $t=0$ with state $Z_t=z_t$
\item Generate $T$ the time until the next jump using \eqref{interjump-time} or \eqref{Tlaw}, and \eqref{lambda}
\item Follow the flow $\Phi$ until $T$ using \eqref{flow}
\item Generate $Z_{t+T}=z_{t+T}$ the arrival state of the jump knowing the departure state is $Z_{t+T^-} =\Phi_z(T)$ using \eqref{kernel}
\item Taking $t:=t+T$, repeat steps 2 to 5 until a trajectory of length $t_f$ is obtained (a schematic implementation of this loop is sketched after the list)
\end{enumerate}
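The sketch below (ours) illustrates these five steps in Python. It produces the skeleton $\big(Z_{S_k},T_k\big)_{k\leq N}$ used in Section \ref{sec:zeta}; the callables \texttt{flow}, \texttt{lam}, \texttt{t\_star} and \texttt{sample\_kernel} are placeholders (our naming) for the model ingredients $\Phi$, $\lambda$, $t^*_z$ and the transition kernel, and \texttt{sample\_jump\_time} is the sketch given earlier.
\begin{verbatim}
# Schematic implementation of steps 1-5, returning the skeleton ((Z_{S_k}, T_k)).
import numpy as np

def simulate_skeleton(z0, flow, lam, t_star, sample_kernel, t_f, rng):
    t, z, skeleton = 0.0, z0, []
    while True:
        T = sample_jump_time(lambda u: lam(flow(z, u)), t_star(z), rng)  # step 2
        if t + T >= t_f:                   # horizon reached before the next jump
            skeleton.append((z, t_f - t))  # last duration T_N = t_f - S_N
            return skeleton
        skeleton.append((z, T))
        z_minus = flow(z, T)               # step 3: follow the flow until the jump
        z = sample_kernel(z_minus, rng)    # step 4: draw the arrival state
        t = t + T                          # step 5: iterate from the new state
\end{verbatim}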
\subsection{Example}
\label{subsec:sys3comp}
As an example of a system, we consider a room heated by three identical heaters. $X_t$ represents the temperature of the room at time $t$. $x_e$ is the exterior temperature. $\beta_1$ is the rate of the heat exchange with the exterior. $\beta_2$ is the heating power of each heater. The differential equation giving the evolution of the position (i.e. the temperature of the room) has the following form: $$\frac{d\,X_t}{dt}=\beta_1 (x_e-X_t)+\beta_2\, \mathbbm 1_{\{M^1_t=ON \text{ or } M^2_t=ON \text{ or } M^3_t=ON\}}\ .$$ \begin{figure}
\caption{A possible trajectory of the heated-room system.}
\label{fig:schemetraj}
\end{figure}
The heaters are programmed to maintain the temperature within an interval\linebreak$(x_{min} , x_{max})$ where $x_e<0<x_{min}$. Heaters can be on, off, or out-of-order, so $\mathbb M=\{ON,OFF,F\}^{3}$. We consider that the three heaters are in passive redundancy in the sense that: when $X\leq x_{min}$ the second heater activates only if the first one is failed, and the third one activates only if the two other heaters are failed. When a repair of a heater occurs, if $X\leq x_{min}$ and all other heaters are failed the heater status is set to $ON$, else the heater status is set to $OFF$.
To handle the programming of the heaters, we set \scalebox{0.9}{$\Omega_{m}=(-\infty,x_{max})$} when all the heaters are failed $m=(F,F,F)$ or when at least one is activated, otherwise we set \scalebox{0.9}{$\Omega_{m}=(x_{min},x_{max})$}. \\
Due to the continuity of the temperature, the reference measure for the kernel is $\forall B\in\mathscr B (E),$ $\nu_{(x,m)}(B)=\sum_{m^+\in\mathbb M\backslash\{m\}}\delta_{(x,m^+)}(B)$.
On the top boundary in $x_{max}$, heaters turn off with probability 1. On the bottom boundary in $x_{min}$, when a heater is supposed to turn on, there is a probability $\gamma=0.01$ that the heater will fail on demand. So, for instance, if $z^-=\big(x_{min}, (OFF,F,OFF)\big)$, we have $K_{z^-}\big(x_{min}, (ON,F,OFF)\big)=1-\gamma$, \linebreak and $K_{z^-}\big(x_{min}, (F,F,ON)\big)=\gamma(1-\gamma)$, and $K_{z^-}\big(x_{min}, (F,F,F)\big)=\gamma^2$.\\
For the spontaneous jumps that happen outside boundaries, we consider that the position is not modified during the jumps and, if the transition $j$ corresponds to the failure of a heater, then, for $z^-=(x^-,m^-)\in E$, $\lambda^{j}(z^-)=0.0021+0.00015\times x^- $
and, if the transition $j$ corresponds to a repair, then, for $z^-=(x^-,m^-)\in E$, $\lambda^{j}(z^-)=0.2$. A possible trajectory of the state of this system is depicted in Figure \ref{fig:schemetraj}. Here the system failure occurs when the temperature of the room falls below zero, so $D=\{(x,m)\in E, x<0\}$.\\
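To make the example fully concrete, the continuous ingredients can be encoded as follows. The linear ODE integrates in closed form, which gives both the flow and the time to the active boundary. The numerical values of $\beta_1$, $\beta_2$, $x_e$, $x_{min}$ and $x_{max}$ below are placeholders chosen by us (they are not specified in the text); $\gamma$ and the failure and repair rates follow the example. The mode logic (passive redundancy, failures on demand) is omitted from this sketch.
\begin{verbatim}
# Continuous ingredients of the heated-room example (sketch; placeholder values).
import numpy as np

beta1, beta2 = 0.1, 4.0           # heat exchange rate, heating power (placeholders)
x_e, x_min, x_max = -5.0, 15.0, 25.0
gamma = 0.01                      # probability of a failure on demand

def flow(x, heating_on, s):
    # explicit solution of dX/dt = beta1*(x_e - X) + beta2*1{some heater is ON}
    x_inf = x_e + (beta2/beta1 if heating_on else 0.0)
    return x_inf + (x - x_inf)*np.exp(-beta1*s)

def t_star(x, heating_on, lo, hi):
    # time for the flow started at x to reach the active boundary lo or hi
    x_inf = x_e + (beta2/beta1 if heating_on else 0.0)
    target = hi if x_inf > x else lo
    if (x_inf - target)*(x_inf - x) > 0 and abs(x_inf - x) > abs(x_inf - target):
        return -np.log((target - x_inf)/(x - x_inf))/beta1
    return np.inf                 # the boundary is never reached

def lambda_fail(x):               # failure rate of a single heater (from the text)
    return 0.0021 + 0.00015*x

def lambda_repair(x):             # repair rate of a single failed heater
    return 0.2
\end{verbatim}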
\section{A reference measure for trajectories}
\label{sec:zeta}
We have seen in Section \ref{subsec:jumps} that when the position is restricted to a bounded set in some modes, the time to the next jump can be a hybrid random variable. We have to be cautious when considering the density of a trajectory of a PDMP for several reasons: first the reference measure for the times between the jumps is a mixture of Dirac and Lebesgue measures,
secondly these hybrid jumps may occur multiple times and in a nested way in the law of the trajectory of PDMP. Indeed, with these mixtures of Dirac and Lebesgue measures involved, the existence of a sigma-finite reference measure on the trajectory space is not obvious, yet it is mandatory to properly define the density of a trajectory. The existence of a reference measure is therefore crucial, because it preconditions the existence of the likelihood ratio needed to apply the importance sampling method.\\
We begin this Section by introducing a few notations: For a trajectory $\mathbf Z$ on the observation interval $[0,t_f)$, we denote by $N$ the number of jumps before $t_f$, and by $S_{k}$ the time of the $k$-th jump with the conventions $S_{0}=0$, and $S_{N+1}=t_f$. $\forall k<N ,\ T_k=S_{k+1}-S_{k}$ is the duration between two jumps and $T_N=S_{N+1}-S_N=t_f-S_N$ is the remaining duration between the last jump and $t_f$. One can easily verify that the sequence of the $(Z_{S_k},S_{k+1}-S_{k})$ is a Markov chain: it is called the embedded Markov chain of the PDMP \cite{davis1984}.\\
\begin{figure}
\caption{notations}
\label{fig:notations}
\end{figure}
\subsection{The law of the trajectories}
The main idea in building the law of the trajectory $\mathbf{Z}$ is to summarize the trajectory by the truncated embedded Markov chain of the process: the vector $\big(Z_{S_0},T_0, \dots, Z_{S_N},T_N\big)$. This vector is also called the skeleton of the trajectory. As the trajectory is piecewise deterministic, we only need to keep the states of the arrivals of the jumps and the durations between the jumps to describe the trajectory. If we have the vector $\big(Z_{S_k},T_k\big)_{k\leq N}$ then we have enough information to reconstruct the trajectory using \eqref{flow} because we know the flow function $\Phi$. Noting $\Theta$ the map that changes $\mathbf{Z} $ into $\big(Z_{S_k},T_k\big)_{k\leq N}$, the law of $\mathbf{Z} $ can be defined as the image law of $\big(Z_{S_k},T_k\big)_{k\leq N}$ through $\Theta$. We denote by $\mathbf E$ the set of the trajectories defined on $[0,t_f)$. For $n\in\mathbb N$, let $A_n=\Big\{\big(z_{s_k},t_k\big)_{k\leq n}\in(E\times\mathbb R^{*}_\text{+})^{n},\, \overset{n}{\underset{i=0}{\sum}} t_i=t_f\Big\}$, so that $\Theta^{-1} (A_n)$ is the set of the trajectories including $n$ jumps. The sets $(\Theta^{-1} (A_n))_{n\in\mathbb N}$ form a partition of $\mathbf E$. The sets $(A_n)_{n\in\mathbb N}$ form a partition of the set of the skeletons. \\
We can get the law of $\big(Z_{S_k},T_k\big)_{k\leq N}$, by using the dependencies between its coordinates. Thanks to \eqref{Tlaw} and \eqref{kernel} we can get the density of $T_k$ knowing $Z_{S_k}$ with respect to $\mu_{Z_{S_k}}$, and the density of $Z_{S_{k+1}}$ knowing $\big(Z_{S_k},T_k\big)$ with respect to $\nu_{Z_{S_{k+1}^-}} $, where $Z_{S_{k+1}^-}=\Phi_{Z_{S_k}}(T_k)$:
\begin{equation}
f_{T_k|Z_{S_k}=z}(u)=\Big(\lambda_{z}(u) \Big)^{\mathbbm{1}_{ u < t^*_{z} }}\exp\Big[\,\text{-}\,\Lambda_{z }(u)\Big]\ ,
\label{Tdensity}
\end{equation}
\begin{equation}
f_{Z_{S_{k+1}}|Z_{S_k},T_k}(z)=K_{Z_{S_{k+1}^-}}(z)\ .
\label{Z+density}
\end{equation}
Using the Markov structure of the sequence $\big(Z_{S_k},T_k\big)_{k\leq N}$, the law of $\big(Z_{S_k},T_k\big)_{k\leq N}$ can be expressed as an integral of the product of the conditional densities given by \eqref{Tdensity} and \eqref{Z+density}. \\
We define the $\sigma$-algebra $\mathscr S$ on the set of the possible values of $\big(Z_{S_k},T_k\big)_{k\leq N}$ as the $\sigma$-algebra generated by the sets in
$\underset{\ n\in \mathbb N^* }{\bigcup} \mathscr B\Big(\Big\{\big(z_{s_k},t_k\big)_{k\leq n}\in(E\times\mathbb R^{*}_\text{+})^{n},\, \overset{n}{\underset{i=0}{\sum}} t_i=t_f\Big\}\Big)$.
\begin{mydef}
The law of the trajectory is then defined as follows, for $B\in \mathscr S$
\begin{align}
\mathbb P_{z_o}\Big(\mathbf{Z} \in \Theta^{-1}( B) \Big) = &\int_{B} \ \prod_{k=0}^{n} \Big(\lambda_{z_{k} }(t_k) \Big)^{\mathbbm{1}_{ t_k< t^*_{z_{k}}}}\exp\Big[-\Lambda_{z_{k}}(t_k)\Big] \prod_{k=1}^{n }K_{z_{k}^-}(z_{k})\nonumber\\
&\quad\times d\delta_{t^*_{n}}(t_n)\ d\nu_{z_{n }^-}(z_n)\ d\mu_{t^*_{z_{n-1}}}(t_{n-1}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})\ ,
\label{eq:loitraj}
\end{align}
where $z_j^-=\Phi_{z_{j-1}}(t_{j-1})$, and $t^*_{n}=t_f-\sum_{i=0}^{n-1}t_i$.
\end{mydef}
Note that, depending on the set $B$, $n$ can take different values in equation \eqref{eq:loitraj}. Implicitly, equation \eqref{eq:loitraj} states that: \begin{align} \mathbb P_{z_o}\Big(\mathbf{Z} \in \Theta^{-1}( B) \Big)&=\mathbb P_{z_o}\bigg(\mathbf{Z} \in \Theta^{-1}\Big( \bigcup_{n\in \mathbb N}B\cap A_n\Big) \bigg)\nonumber\\
& =\sum_{n\in \mathbb N} \mathbb P_{z_o}\Big(\mathbf{Z} \in \Theta^{-1}( B\cap A_n) \Big)\nonumber\\
& =\sum_{n\in \mathbb N}\int_{B\cap A_n} \ \prod_{k=0}^{n} \Big(\lambda_{z_{k} }(t_k) \Big)^{\mathbbm{1}_{ t_k< t^*_{z_{k}}}}\exp\Big[-\Lambda_{z_{k}}(t_k)\Big] \prod_{k=1}^{n }K_{z_{k}^-}(z_{k})\nonumber\\
&\quad \qquad\times d\delta_{t^*_{n}}(t_n)\ d\nu_{z_{n }^-}(z_n)\ d\mu_{t^*_{z_{n-1}}}(t_{n-1}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})\ . \label{eq:loitraj2}
\end{align}
Also note that with our construction, this is a probability law on the space of the trajectories that satisfy \eqref{flow}, not on the set of all the trajectories with values in $E$.
\subsection{The dominant measure and the density}
\begin{mydef}
We define the measure $\zeta$ so that
\begin{align}
\zeta (\Theta^{-1}( B))= & \underset{ \mbox{\hspace{-12ex} } (z_{_k},t_{_k})_{k\leq n}
\in B}{\int\quad d\delta_{t^*_{n}}(t_n)\ d\nu_{z_{n }^-}(z_n)}\ d\mu_{t^*_{z_{n-1}}}(t_{n-1}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0}) \ .
\label{zeta}
\end{align}
Note that, as in equation \eqref{eq:loitraj}, $n$ can take different values in equation \eqref{zeta}, depending on the set $B$.
\end{mydef}
\begin{myth}
If there exists $C>0$ such that $\forall z\in\xoverline{ E\,},\ \nu_z(E)<C$, and if $t_f<\infty$, then $\zeta$ is a $\sigma$-finite measure. By the Radon--Nikodym theorem, the density of a trajectory $\mathbf z=\Theta\big((z_{_0},t_{_0}),\, ...\, ,(z_{_n},t_n)\big)$ with respect to the measure $\zeta$ is
\begin{align}
f(\mathbf z) =\prod_{k=0}^{n} \Big(\lambda_{z_{k} }(t_k) \Big)^{^{\mbox{\hspace{-1.1ex}}\mathbbm{1}_{ t_k< t^*_{z_{k}}}}}\mbox{\hspace{-2.1ex}}\exp\Big[-\Lambda_{z_{k}}(t_k)\Big] \prod_{k=1}^{n }K_{z_{k}^-}(z_{k})\ ,
\label{density}
\end{align}
where $n$ is the number of jumps in the trajectory $\mathbf z$.
\label{thm:sigmatefinite}
\end{myth}
The proof of Theorem \ref{thm:sigmatefinite} is given in appendix \ref{appendix-sec1}. \\ Note that it is always possible to choose the measures $\nu_{z^-}$ so they are all bounded by the same constant. Indeed the transition kernel is itself bounded by 1, as it is a probability measure. So, to get a measure $\zeta$ that is $\sigma$-finite, we can simply take the measures $\nu $
equal to the transition kernel, so the densities can be properly defined when the observation time $t_f$ is finite.
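For completeness, the density \eqref{density} is what the numerator and the denominator of the likelihood ratio in \eqref{pIS} are built from. The schematic function below (ours) evaluates $\log f$ on a skeleton $\big((z_k,t_k)\big)_{k\leq n}$, following \eqref{density} literally; the callables \texttt{lam}, \texttt{Lam}, \texttt{t\_star}, \texttt{log\_K} and \texttt{flow} are assumed interfaces for $\lambda_{z}(\cdot)$, $\Lambda_{z}(\cdot)$, $t^*_z$, $\log K_{z^-}(\cdot)$ and $\Phi_z$.
\begin{verbatim}
# Schematic evaluation of log f on a skeleton ((z_k, t_k))_k, following (density).
import numpy as np

def log_density(skeleton, lam, Lam, t_star, log_K, flow):
    logf = 0.0
    for k, (z_k, t_k) in enumerate(skeleton):
        if t_k < t_star(z_k):              # lambda factor only for spontaneous jumps
            logf += np.log(lam(z_k, t_k))
        logf -= Lam(z_k, t_k)              # survival factor exp(-Lambda_{z_k}(t_k))
        if k + 1 < len(skeleton):          # kernel factor K_{z_{k+1}^-}(z_{k+1})
            z_minus = flow(z_k, t_k)
            logf += log_K(z_minus, skeleton[k + 1][0])
    return logf
\end{verbatim}
The log-likelihood ratio $\log\big(f(\mathbf Z)/g(\mathbf Z)\big)$ needed in \eqref{pIS} is then the difference of two such evaluations, with the intensities and kernels of the importance process in place of the original ones.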
\subsection{Admissible importance processes}
Recall that an admissible importance process is any process whose law is absolutely continuous with respect to $\zeta$ (condition C2), and which has a density $g$ with respect to $\zeta$ satisfying $ \forall\, \mathbf{z} \in\mathscr D $, $\ f(\mathbf{z} )\ne 0 \Rightarrow g(\mathbf{z} )\ne 0$ (condition C3). In this Section, we clarify the previous statement, and we identify to what extent we can modify the original process to obtain an admissible importance process. Throughout the rest of the paper we denote the elements relative to this importance process with a $ ^\prime$, except for its density, which is denoted by $g$.\\
Our first remark is that condition C2 implies that the realizations of the importance process must satisfy equation \eqref{flow}. Indeed, the measure $\zeta$ involves the transformation $\Theta$ which uses the equation \eqref{flow} to rebuild a trajectory from a skeleton. Consequently, the importance process has to piecewisely follow the same flows as the original process. Similarly to the original process the importance process jumps to a new state for each change of flow. To ensure condition C2, the law of the $T^\prime_k$ has to be dominated by $\mu_{Z^\prime_{S^\prime_k}}$, and the law of $Z^\prime_{S^\prime_{k+1}}$ has to be dominated by $\nu_{Z^{\prime-}_{S_k} }$. This means that the boundaries of the $\Omega_m$'s and the set of the possible arrivals of a jump remain unchanged. So the modification of the original process focuses on the timing and nature of changes of modes, i.e. the laws of the jumps.\\
To generate an importance process, we keep generating trajectories by successively generating the arrival state of a jump ($Z^\prime_{S^\prime_k}$) and the time until the next jump ($T^\prime_k $). As there is no requirement for the importance process to be Markovian, we consider that the law of a point of the trajectory $Z^\prime_t$ depends on the past values of states. As the states follow the flows piecewisely, it is equivalent to say that the law of $Z^\prime_{S^\prime_k}$ can depend on $\big(Z^\prime_{S^\prime_i},T^\prime_i\big)_{i< k}$, and that the law of $T^\prime_k$ can depend on $\big(Z^\prime_{S^\prime_i},T^\prime_i\big)_{i< k}$ and $Z^\prime_{S^\prime_k}$.
For a jump time $S^\prime_k$, we denote $\underline Z^\prime_{S^\prime_k}=\big((Z^\prime_{S^\prime_i},T^\prime_i)_{i<k},Z^\prime_{S^\prime_k}\big)$, and we denote
by $\lambda^\prime_{\underline z_k}(.)$ the intensity function associated to $T^\prime_k$ when $\underline Z^\prime_{S^\prime_k}=\underline z_k$. We have:
\begin{align}
\forall t\in (0,t^*_{z_k}],\quad \mathbb P(T_k^\prime\leq t|\underline Z^\prime_{S^\prime_k}=\underline z_k)&=\int_{(0,t]} \bigg(\lambda^\prime_{ \underline z_k}(u) \bigg)^{\mathbbm{1}_{ u < t^*_{z_k} }}\exp\Big[-\Lambda^\prime_{ \underline z_k}(u)\Big] d\mu_{z_k}(u) \label{integtime}
\end{align}
Noting ${\underline Z^\prime_{{S^\prime_k}^-}}=\big((Z^\prime_{S^\prime_i},T^\prime_i)_{i< k\,\text{-}1} \big)$ and $K^\prime_{\underline{z}^-}$ the importance kernel when $ \underline Z^\prime_{ {S^\prime_k}^- }={\underline{z}_{k} }^-$, we have:
\begin{align}
\forall B\in\mathscr{B}(E),\quad\mathbb P(Z^\prime_{S^\prime_k}\in B|{\underline Z^\prime_{{S^\prime_k}^-}}={\underline{z}_{k} }^-)
&=\int_{B} K^\prime_{\underline{z}^-_k} (z) d\nu_{z^-_k}(z) \label{impkernel}
\end{align}
Notice that the intensity function $\lambda^\prime_{\mathbf z_{s},s}$ in equation \eqref{integtime} does not have to be of the form $\lambda^\prime\circ\phi_{z_s}$, where $\lambda^\prime$ is a positive function on $E$. This means that at the time $S^\prime_k+t$, the intensity does not depend only on the state~$Z^\prime_{S^\prime_k+t}$ as it would be the case if $\mathbf Z^\prime$ were a PDMP. So, in the importance process, we consider that the intensity can depend on the arrival state of the last jump and on previous pairs $(Z^\prime_{S^\prime_i},T^\prime_i)$. Therefore the importance process can be seen as a piecewise deterministic process (PDP) which is not necessarily Markovian. \\
For condition C3 to be satisfied almost everywhere we can impose that almost everywhere for any $z_k\in E$, and $z_k^-\in\xoverline{ E\,}$, and $t\in (0,t^*_{z_k}]$ : \begin{align*}
\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| \underline Z_{S_k}=\underline z_k \big]>0,\mbox{ and } K_{z_k^-}(z_k)>0\ &\Rightarrow K^\prime_{\underline z_k^-}(z_k)>0\\
\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| \underline Z_{S_{k+1}^-}=(\underline z_k, t)\big]>0,\mbox{ and }\lambda_{z_k}(t)>0\ &\Rightarrow \lambda^\prime_{ \underline z_k}(t)>0.
\end{align*}
Unfortunately, with complex systems, the set $\mathscr D $ can be very hard to manipulate, and we do not always know if $\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| \underline Z_{S_k}=\underline z_k \big]$ or $\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| \underline Z_{S_k}=\underline z_k, T_k=t\big]$ are positive. So in practice we often only use the following sufficient condition, which states that for almost any $z_k\in E$, $z_k^-\in\xoverline{ E\,}$, and $t\in (0,t^*_{z_k}]$:
\begin{align*}
K_{z_k^-}(z_k)>0\ &\Rightarrow K^\prime_{\underline z_k^-}(z_k)>0\\
\lambda_{z_k}(t)>0\ &\Rightarrow \lambda^\prime_{ \underline z_k}(t)>0.
\end{align*} \\
\section{Optimal and practical importance process}
\label{sec:Opti}
\subsection{Practical importance processes and notations}
We will see in Subsection \ref{subsec:Opti} that we can restrict the search for an efficient importance process to a special class of processes without any loss of efficiency, because an optimal importance process (giving an estimator with zero variance) belongs to this special class.
The processes of this class are defined through the expressions \eqref{integtime} and \eqref{impkernel}, but they do not use all the information contained in $\underline z_k$ and $\underline z_k^-$. The jump rates $\lambda^\prime_{ \underline z_k}(t)$ depend only on three variables: the current state $Z_{S_k+t}=\Phi_{z_k}(t)$, the time $t_f-(s_k+t)$ left before $t_f$, and the indicator $\indic{\tau_D\leq s_k+t}$ which tells whether the system failure has already happened. The kernels $K^\prime_{\underline z_{k+1}^-}$ depend only on three variables: the current departure state $z_{k+1}^-=\Phi_{z_k}(t_k)$, the time $t_f- s_{k+1} $ left before $t_f$, and the indicator $\indic{\tau_D\leq s_{k+1}}$.
So, to ease the presentation of such jump rates and transition kernels, we slightly modify the state space by adding an active boundary at the boundary of~$D$ and we add a coordinate on the mode which indicates if the trajectory has already visited $D$. The state now becomes $Z=\big(X,(M,M_D)\big)$ where $M_D= 0$ if $D$ has not been visited, and $1$ if it has. This way, for any time $t$ we have $Z_t=(X_t,(M_t,\indic{\tau_D\leq t}))$. For instance, with the heated-room system the set of modes becomes $\mathbb M=\{ON,OFF,F\}^3\times\{0,1\}$. The kernel $K_{Z^-}$ is unchanged when $M_D^-=M_D^+$, and is null when $M_D^-\ne M_D^+$, except at the boundary of $D$ where $K_{(0,( F,F,F ,0))}\big(0,( F,F,F,1)\big)=1$.
The three variables that determine the jump rates and kernels of the processes of the special class can now be identified by the current state and the current time. Therefore, we now consider importance processes with jump rate $\lambda^\prime_{ z_k,s_k}(t) $ and transition kernel $K^\prime_{ z^-_k, s_k} $. Such processes have the following laws of jump times and jump arrivals:
\begin{align}
\forall t\in (0,t^*_{z_k}],\quad &\mathbb P(T_k^\prime\leq t| Z^\prime_{S^\prime_k}= z_k, S_k^\prime=s_k)\nonumber\\
&=\int_{(0,t]} \bigg(\lambda^\prime_{ z_k,s_k}(u) \bigg)^{\mathbbm{1}_{ u < t^*_{z_k} }}\exp\Big[-\Lambda^\prime_{ z_k,s_k}(u)\Big] d\mu_{z_k}(u) \label{integtime2}\\
\forall B\in\mathscr{B}(E),\quad & \mathbb P(Z^\prime_{S^\prime_k}\in B|{ Z^\prime_{{S^\prime_k}^-}}={ {z}_{k} }^-, S_k^\prime=s_k)
=\int_{B} K^\prime_{ z^-_k, s_k} (z) d\nu_{z^-_k}(z). \label{impkernel2}
\end{align}
Note that the class of processes that can be defined by \eqref{integtime2} and \eqref{impkernel2} is included in the class of admissible importance processes.
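As an illustration, and not as part of the implementation described later, the following Python sketch shows how a jump time $T^\prime_k$ with the law \eqref{integtime2} can be simulated by inversion of the cumulative importance intensity $\Lambda^\prime_{z_k,s_k}$; the callable \texttt{Lambda\_prime} and the boundary time \texttt{t\_star} are hypothetical inputs supplied by the user.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def sample_jump_time(rng, Lambda_prime, t_star):
    # With probability 1 - exp(-Lambda'(t_star)) the jump occurs strictly
    # before the boundary, at the solution of Lambda'(t) = E with E ~ Exp(1);
    # otherwise the jump is forced at t_star (atom of mu_z at t*_z).
    e = rng.exponential(1.0)
    if Lambda_prime(t_star) <= e:
        return t_star
    return brentq(lambda t: Lambda_prime(t) - e, 0.0, t_star)
\end{verbatim}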
Thanks to the new definition of the states, the conditional expectations \linebreak$\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| \underline Z_{S_k}, T_k \geq t\big]$ are equal to the conditional expectations $\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| Z_{S_k+t}=\Phi_{Z_{S_k}}(t) \big]$. This makes it possible to introduce the following important definitions:
\begin{mydef}
Let $U^*$ be the function defined on $E\times \mathbb R^+$ by:
\begin{align}
U^*(z,s)
&=\mathbb E\big[\indic{\mathscr D}(\mathbf Z)| Z_{s}=z \big].
\end{align}
\end{mydef}
\begin{mydef}
Let $U^-$ be the function defined on $E\times \mathbb R^+$ by:
\begin{align}
U^{\text{-}}(z^{\text{-}},s)
&=\int_E U^{*}(z^+,s)K_{ z^{\text{-}}}(z^+)d\nu_{ z^{\text{-}}}(z^+).
\end{align}
\end{mydef}
The quantity $U^*(z,s)$ measures the chances of having a system failure before $t_f$ knowing that the system is in state $z$ at time $s$, and the quantity $U^-(z^-,s)$ measures the chances of having a system failure before $t_f$ knowing that the system is jumping from the state $z^-$ at time $s$. These quantities play an important role in what follows.
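In practice $U^*$ has no closed form but, assuming one has a simulator of the original process started from an arbitrary state and time (the hypothetical \texttt{simulate\_from} routine below), it can always be approximated pointwise by a crude Monte-Carlo average, as in the following sketch.
\begin{verbatim}
import numpy as np

def estimate_U_star(z, s, simulate_from, n=10_000, seed=0):
    # Crude Monte-Carlo approximation of U*(z, s) = E[ 1_D(Z) | Z_s = z ]:
    # simulate_from(z, s, rng) runs the original PDMP from state z at time s
    # and returns True when the trajectory reaches D before t_f.
    rng = np.random.default_rng(seed)
    hits = sum(simulate_from(z, s, rng) for _ in range(n))
    return hits / n
\end{verbatim}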
\subsection{A way to build an optimal importance process}
\label{subsec:Opti}
In the importance process, generating the trajectories jump by jump by using \eqref{integtime2} and \eqref{impkernel2} is not restrictive in terms of efficiency, as proved by the following theorem:
\begin{myth}
For all $z\in E$, $z^-\in\xoverline{E\,}$, and $s\in[0,t_f)$, the jump densities with respect to $\mu_z$ such that \begin{align}
g^{*}_{T^\prime_k|Z^\prime_{S^\prime_k},\, S^\prime_k= z,s}(u)
&=\frac{U^{\text{-}}\big(\Phi_{z}(u),s+u\big)}
{U^{\text{*}}\big( z ,s\big)} f_{T_k| Z_{S_k}=z}(u)\
\label{bestL}
\end{align} and the kernels $\mathcal K^{*}_{ { z}^{\text{-}},s} $ having a density with respect to $\nu_{z^-}$ which satisfies \begin{align}
K^{*}_{ { z}^{\text{-}},s}(z)&=\frac
{U^*\big(z,s\big)}
{U^{\text{-}}\big(z^{\text{-}},s\big)}K_{ z^{\text{-}}}(z)\ \label{bestK}
\end{align} correspond to the jump densities and the transition kernels of an optimal importance process. \\Note that these optimal densities indeed integrate to one, as $U^{\text{*}}\big( z ,s\big)=\int_0^{t_z^*} U^{\text{-}}\big(\Phi_{z}(u),s+u\big)
f_{T_k| Z_{S_k}=z}(u) du$, and \linebreak $U^{-}\big( z ,s\big)=\int_E U^{*}(z^+,s)K_{ z}(z^+)d\nu_{ z}(z^+)$.
\end{myth}
\begin{proof}
Assume the trajectory $\mathbf z=\Theta\big((z_{_0},t_{_0}),\, ...\, ,(z_{_n},t_n)\big)$ has been simulated with \eqref{bestL} and \eqref{bestK}. Then its density $g$ with respect to $\zeta$ is:\begin{align}
g(\mathbf{z} ) &=\prod_{k=0}^{n} g^{*}_{T^\prime_k|Z^\prime_{S^\prime_k},\, S^\prime_k= z_k,s_k}(t_k) \prod_{k=1}^{n }K^{*}_{ { z_k}^{\text{-}},s_k}(z_k) \nonumber\end{align}
So it verifies:
\begin{align}
g(\mathbf{z} ) &=\prod_{k=0}^{n}\frac {U^{\text{-}}\big(\Phi_{z_k}(t_k),s_k+t_k\big)}
{U^{\text{*}}\big( z_k ,s_k\big)}\prod_{k=1}^{n }\frac
{U^*\big(z_k,s_k\big)}
{U^{\text{-}}\big(z_k^{\text{-}},s_k\big)} \prod_{k=0}^{n} f_{T_k| Z_{S_k}=z_k}(t_k) \prod_{k=1}^{n } K_{ z_k^-}(z_k) \nonumber\\
&=\prod_{k=0}^{n}\frac {U^{\text{-}}\big(z_{k+1}^{\text{-}},s_{k+1}\big)}
{U^{\text{*}}\big( z_k ,s_k\big)}\prod_{k=0}^{n-1 }\frac{U^*\big(z_{k+1},s_{k+1}\big)}
{U^{\text{-}}\big(z_{k+1}^{\text{-}},s_{k+1}\big)} f(\mathbf z) \nonumber\\
&=\frac{U^{\text{-}}\big(z_{n+1}^{\text{-}},s_{n+1}\big)}
{U^*\big(z_{0},s_{0}\big)} f(\mathbf z) =\frac{\indic{\mathscr D}(\mathbf z)f(\mathbf z)}{\mathbb E_{z_0}\big[\indic{\mathscr D}(\mathbf Z)\big]}=g^{*}(\mathbf{z}),\nonumber\end{align}
where $g^{*}(\mathbf{z})$ is the density for an estimator with zero variance; the last equality holds because $s_{n+1}=t_f$, so that $U^{\text{-}}\big(z_{n+1}^{\text{-}},s_{n+1}\big)=\indic{\mathscr D}(\mathbf z)$, while $U^*(z_{0},s_{0})=\mathbb E_{z_0}\big[\indic{\mathscr D}(\mathbf Z)\big]$.
\end{proof}
Equations \eqref{bestL} and \eqref{bestK} serve as a guide to build an importance process: one should try to specify densities as close as possible to these optimal ones, so as to get an estimator variance as close as possible to the minimal value of zero.
\subsection{Observations on the optimal process}
As we do not know the explicit forms of $U^*$ and $U^-$, the construction of an importance process close to the optimal one is delicate. Nonetheless, the equations \eqref{bestL} and \eqref{bestK} can give us information on how to build an importance process in practice. In this Section, we investigate the properties of the optimal importance process and of the function $U^*$ with the aim of building a good and practical importance process.
For instance, we can get the expression of the jump rate of the optimal process. For the time of the $k$-th jump, by definition of the jump rate and knowing that $(Z^\prime_{S^\prime_k},\, S^\prime_k)= (z,s)$, we get:
\begin{align}
\lambda^{*}_{ z,s}(u)&=\frac { g^{*}_{T^\prime_k|Z^\prime_{S^\prime_k},\, S^\prime_k= z,s}(u)}{1-\int_0^u g^{*}_{T^\prime_k|Z^\prime_{S^\prime_k},\, S^\prime_k= z,s}(v)dv}, \nonumber \\
\Leftrightarrow \quad \lambda^{*}_{ z,s}(u)&=\frac {U^{\text{-}}\big(\Phi_{z}(u),s+u\big) \Big(\lambda_{ z}(u) \Big)^{\mathbbm{1}_{ u < t^*_{z} }}\exp\Big[-\Lambda_{ z}(u)\Big]}{\int_{(u,t^*_{z}]}U^{\text{-}}\big(\Phi_{z}(v),s+v\big)\Big(\lambda_{ z}(v) \Big)^{\mathbbm{1}_{v < t^*_{z} }}\exp\Big[-\Lambda_{ z}(v)\Big]d\mu_{z}(v)}\ .\label{LB1}
\end{align} Using some properties of $U^*$ and \eqref{LB1} we can prove the following theorem:
\begin{myth}
\label{th:optijumprate}
The jump rate of the optimal importance process defined by the densities \eqref{bestL} and \eqref{bestK} verifies:
\begin{equation}
\lambda^{*}_{ z,s}(u)
=\frac {U^{\text{-}}\big(\Phi_{z}(u),s+u\big) }{U^{*}\big(\Phi_{z}(u),s+u\big)} \lambda_{ z}(u) \ . \label{LB2}
\end{equation}\end{myth}
The proof is provided in Appendix \ref{appendix-sec2}.
Note that the expression \eqref{LB2} can be easily interpreted. $\lambda^{*}_{ z,s}(u)$ corresponds to the jump rate at the state $Z_{s+u}=\Phi_{z}(u)$. $U^{*}\big(\Phi_{z}(u),s+u\big)$ is the probability of generating a failing trajectory if $Z_{s+u}=\Phi_{z}(u)$ and if there is no jump at time $s+u$. $U^-\big(\Phi_{z}(u),s+u\big)$ is the probability of generating a failing trajectory if there is a jump at time $s+u$ and if the departure state is $Z_{s+u^-}=\Phi_{z}(u)$. So the ratio $\dfrac {U^{\text{-}}\big(\Phi_{z}(u),s+u\big) }{U^{*}\big(\Phi_{z}(u),s+u\big)}$ is the factor by which a jump at time $s+u$ multiplies the probability of generating a failing trajectory.
The expression indicates that, in order to reach zero variance, one should increase the original jump rate in the same proportion as a jump would increase the probability of getting a failing trajectory.
Theorem \ref{th:optijumprate} is noteworthy, because in practice the law of the jump time is specified through the jump rate. Thus it can be used to specify the laws of the jump times of an importance process, as we will do in Section \ref{subsec:ISparam}.
Also note that equations \eqref{LB2} and \eqref{bestK} indicate that, once the region $D$ has been reached, the optimal process does not differ from the original process. Indeed, if $\tau_D$ is the reaching time of the critical region $D$, then for $s\geq \tau_D$ we have, for all states $ z$ and $z^- $, $U^*(z,s)=U^-(z^-,s)=1$, so for $s\geq \tau_D$ we get $K^*_{z^-,s}=K_{z^-}$, and for $s+u\geq \tau_D$ we get $\lambda^*_{z,s}(u)=\lambda_{z}(u)$.
As it plays an important role in the expression of the optimal process, we look for more information about the function $U^*$. We first notice that if $\tau$ is a stopping time such that $t_f>\tau>s$, then \begin{align}U^*(z,s)&=\mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| Z_s=z\big]\nonumber\\
&=\mathbb E\Big[ \mathbb E\big[\indic{\mathscr D}(\mathbf Z)\big| Z_{\tau} \big]\Big| Z_s=z\Big]\nonumber\\
\mbox{and so}\quad U^*(z,s)&=\mathbb E\big[U^*(Z_{\tau},\tau)\big| Z_s=z\big].\label{eq:Ustop} \end{align}
Using equation \eqref{eq:Ustop} we can show the following two properties:
\begin{myth}
$U^*$ is kernel invariant on boundaries:
\begin{equation}
\forall z\in E,\qquad U^{\text{-}}\big(\Phi_{z}(t_z^{*}),s+t_z^{*}\big)= \lim_{ t\nearrow t_z^{*} } U^{*}\big(\Phi_{z}(t),s+t\big)\ .
\label{closureCondition}
\end{equation}
\label{thm:closureCondition}
\end{myth}
\begin{myth}
If $u\to U^{\text{-}}\big(\Phi_{z}(u),s+u\big) $ and $u\to \lambda_{ z}(u)$ are continuous almost everywhere on $[0,t^*_z)$, then almost everywhere $U^*$ is differentiable along the flow, with:
\begin{equation}
\frac{\partial U^{*}\big(\Phi_{z}(v),s+v\big) }{\partial v}
= U^{*}\big(\Phi_{z}(v),s+v\big) \lambda_{ z}(v) -U^{\text{-}}\big(\Phi_{z}(v),s+v\big) \lambda_{ z}(v)
\label{Uderiv}
\end{equation}
\label{thm:Uderiv}
\end{myth}
Theorems \ref{thm:closureCondition} and \ref{thm:Uderiv} can in fact be seen as forward Kolmogorov equations on $U^*$. A complete proof of these two properties is given in Appendix \ref{appendix-sec2}. \\
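To make the role of these relations concrete, the following sketch (an illustration under our own assumptions, not part of the original code) integrates equation \eqref{Uderiv} backwards along the flow, starting from the boundary value given by \eqref{closureCondition}, in order to recover $v\mapsto U^{*}\big(\Phi_{z}(v),s+v\big)$ from $v\mapsto U^{\text{-}}\big(\Phi_{z}(v),s+v\big)$ and $v\mapsto\lambda_z(v)$; the callables \texttt{U\_minus} and \texttt{lam} are hypothetical inputs.
\begin{verbatim}
def U_star_along_flow(U_minus, lam, t_star, n_steps=1000):
    # Explicit Euler scheme, run backwards in v on [0, t_star]:
    #   dU*/dv = lam(v) * (U*(v) - U_minus(v)),
    # with terminal value U*(t_star^-) = U_minus(t_star).
    h = t_star / n_steps
    u = U_minus(t_star)                 # boundary value
    values = [u]
    for k in range(n_steps, 0, -1):
        v = k * h
        u = u - h * lam(v) * (u - U_minus(v))
        values.append(u)
    values.reverse()                    # values[i] ~ U*(Phi_z(i*h), s+i*h)
    return values
\end{verbatim}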
\subsection{A parametric importance process}
\label{subsec:ISparam}
In order to find an importance process that gives a good variance reduction, we usually restrict the search within a parametric family of importance densities. Then we rely on optimization routines to find the parameters yielding the best variance reduction. Here, we propose to use a parametric approximation of $U^*(z,s)$, and then combine it with equations \eqref{LB2} and \eqref{bestK} to get the form of the importance kernels and of the importance intensities.
If we denote by $U_\alpha(z,s)$ our approximation of $U^*(z,s)$, where the parameter $\alpha$ belongs to the set $A_{param}$, and we set $U_\alpha^{\text{-}}\big(z^{\text{-}},s\big)=\int_E U_\alpha(w,s )K_{ z^{\text{-}}}(w)d\nu_{ z^{\text{-}}}(w)$, then the corresponding importance intensities and kernels are given by:
\begin{align}
\lambda^{\prime}_{ z,s}(u)
&=\frac {U^{\text{-}}_\alpha\big(\Phi_{z}(u),s+u\big) }{U_\alpha\big(\Phi_{z}(u),s+u\big)} \lambda_{ z}(u) \ ,
\label{impL2}
\\[10pt]
K^{\prime}_{ { z}^{\text{-}},s}(z^+)&=\frac
{U_\alpha\big(z^+,s\big)}
{U^{\text{-}}_\alpha\big(z^{\text{-}},s\big)}K_{ z^{\text{-}}}(z^+)\label{impK2}\ .
\end{align} \\
With these settings and notations, condition (C3) can be expressed as: \begin{align*}
U^*( z_k,s_k)>0 , \mbox{ and } K_{z_k^-}(z_k)>0\ &\Rightarrow U_\alpha( z_k,s_k)>0 \\
U^*( z_k,s_k+t)>0 ,\mbox{ and }\lambda_{z_k}(t)>0\ &\Rightarrow U_\alpha( z_k,s_k+t)>0 ,
\end{align*}
for any $z_k\in E$, $z_k^-\in\xoverline{ E\,}$, and $t\in (0,t^*_{z_k}]$. It is therefore satisfied if, for instance, we take $U_\alpha $ positive everywhere.
Here we replace the problem of specifying a density $g$ close to $g^*$ through $\lambda^\prime$ and $K^\prime$ by the problem of finding a surface $U_\alpha$ on $E\times \mathbb R^+$ close to the surface $U^*$.
Note that this way of building a parametric family of importance processes can be applied to any kind of system, though the shape of $U_\alpha$ may have to be adapted from case to case. Indeed, we expect the shape of $U^*$ to depend on the configuration of the system, and so should the shape of the $U_\alpha$'s.
We could also have plugged the approximations $U_\alpha$ and $U_\alpha^-$ into \eqref{bestL}, rather than into \eqref{LB2}, but the option we have chosen is in fact more convenient and computationally more efficient. With equation \eqref{LB2}, we pass through the intensity, so the density of the $T_k^\prime$'s automatically integrates to 1. Conversely, if we pass through equation \eqref{bestL}, we have to renormalize the density so that it integrates to 1 before simulating a realization of $T_k^\prime$. As this renormalization requires computing an integral, it is less advantageous.
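As an illustration of \eqref{impL2}, a parametric importance intensity can be computed on the fly from an approximation $U_\alpha$, as in the following Python sketch; the callables \texttt{flow}, \texttt{base\_rate}, \texttt{U\_alpha} and \texttt{U\_alpha\_minus} are hypothetical user-supplied functions standing for $\Phi_z(\cdot)$, $\lambda_z(\cdot)$, $U_\alpha$ and $U_\alpha^{\text{-}}$.
\begin{verbatim}
def importance_rate(u, z, s, flow, base_rate, U_alpha, U_alpha_minus):
    # lambda'_{z,s}(u) = U_alpha^-(Phi_z(u), s+u) / U_alpha(Phi_z(u), s+u)
    #                    * lambda_z(u)
    x = flow(z, u)                      # current state Phi_z(u)
    return U_alpha_minus(x, s + u) / U_alpha(x, s + u) * base_rate(z, u)
\end{verbatim}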
\subsection{Remarks on the parameter optimization}
As mentioned in the introduction, we propose to use the cross-entropy method presented in \cite{tutorialCE} to select the parameters of the importance density, as was done in \cite{zuliani2012rare}. In the case of PDMPs, it is hard to use the adaptive cross-entropy method presented in \cite{tutorialCE}: the adaptive cross-entropy requires a function $\mathcal O:\mathbf E\to \mathbb R$ that orders the trajectories, which is hard to specify judiciously. This function must order the trajectories in such a way that there exists a threshold $c$ for which \begin{equation}
\mathbb P(\mathbf Z \in \mathscr D)=\mathbb P(\mathcal O(\mathbf Z)>c),\end{equation} and there exists a sequence of thresholds $ c_0\leq c_1\leq \dots \leq c_k=c$ so that \begin{equation} \mathbb P(\mathbf Z \in \mathscr D_i)=\mathbb P(\mathcal O(\mathbf Z)>c_i),\end{equation} where $\mathscr D_0\subseteq\mathscr D_1\subseteq\dots\subseteq\mathscr D_i\subseteq \mathscr D_{i+1}\subseteq\dots\subseteq\mathscr D_{k}=\mathscr D$. In order to run the Cross-Entropy algorithm with relatively low sample sizes at each step (from 100 to 1000), it is good to set the function $\mathcal O$ so that $20\leq\frac{\mathbb P(\mathcal O(\mathbf Z)>c_i)}{\mathbb P(\mathcal O(\mathbf Z)>c_{i+1})}\leq 100$ \cite{tutorialCE}. The issue is that we find it hard to specify such a function for PDMPs. For this reason we used a simplified version of the CE method considering only one threshold $c$.
The CE algorithm also requires minimizing an approximation of the Kullback-Leibler divergence $D(g_\alpha,g^{*})$. We simulate a sample that the optimization routine then uses to compute approximations of $D(g_\alpha,g^{*})$ for different values of $\alpha$. When the sample contains too many trajectories in $\mathscr D$, these approximations can be computationally heavy. Conversely, when the sample does not contain enough trajectories in $\mathscr D$, the approximations are not accurate enough. So we choose to increase the size of the sample gradually until it contains $n_{CE}$ trajectories in $\mathscr D$, $n_{CE}$ being a number fixed by the user. This way the objective function to minimize and its gradient are both sums over $n_{CE}$ terms, and thus they are not too heavy to compute. The CE algorithm we used is presented in Table \ref{tab:CE0}.
\begin{table}[h]\fbox{
\begin{algorithm}[H]
\textbf{Initialization: } choose $\alpha_0\in A_{param}$, $n_{CE}\in\mathbb N^*$ and $\varepsilon>0$, and set $ t=0$\\
\While{ $t=0$ \mbox{ or } $||\alpha_{t}-\alpha_{t-1}||\geq\varepsilon $}{
Set $k=1$, and generate $ \mathbf Z^\prime_1 \sim g_{\alpha_t}$\\
\While{ $\sum_{i=1}^k \indic{\mathbf Z^\prime_i\in\mathscr D}<n_{CE}$}{Generate $\mathbf Z^\prime_{k+1} \sim g_{\alpha_t}$ \\ $k:=k+1$}
$N=k-1$\\
Compute $\alpha_{t+1}=\underset{\alpha\in A_{param}}{\mbox{argmax }}\frac 1 N \sum_{i=1}^{N} \indic{\mathbf Z^\prime_i\in\mathscr D}\frac{f(\mathbf Z^\prime_i)}{g_{\alpha_t}(\mathbf Z^\prime_i)} \log\big(g_{\alpha}(\mathbf Z^\prime_i)\big) $ \\
$t:=t+1$ }
\textbf{End: }Estimate $p$ using the importance density $g_{\alpha_{t-1}}$
\end{algorithm}}
\caption{CE algorithm}
\label{tab:CE0}
\end{table}
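As an indicative companion to Table \ref{tab:CE0}, the simplified CE loop can be sketched in Python as follows. It is only a sketch: the callables \texttt{simulate}, \texttt{in\_D}, \texttt{log\_f} and \texttt{log\_g} are hypothetical user-supplied functions (a trajectory sampler under $g_\alpha$, the failure test, and the log-densities with respect to $\zeta$), and the inner optimization is delegated to a generic routine.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def cross_entropy(alpha0, n_ce, simulate, in_D, log_f, log_g,
                  eps=1e-3, max_iter=20):
    rng = np.random.default_rng(0)
    alpha = np.asarray(alpha0, dtype=float)
    for _ in range(max_iter):
        failing, weights = [], []
        while len(failing) < n_ce:          # grow the sample until n_CE failures
            traj = simulate(alpha, rng)
            if in_D(traj):
                failing.append(traj)
                weights.append(np.exp(log_f(traj) - log_g(traj, alpha)))
        w = np.asarray(weights)
        # maximizing the weighted log-likelihood = minimizing its opposite
        obj = lambda a: -np.sum(w * np.array([log_g(t, a) for t in failing]))
        new_alpha = minimize(obj, alpha).x
        if np.linalg.norm(new_alpha - alpha) < eps:
            return new_alpha
        alpha = new_alpha
    return alpha
\end{verbatim}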
However, to our knowledge, there is no guarantee that the minimization routine used in the cross-entropy method converges to a global optimum. Therefore, to avoid getting trapped in a local optimum, one should run the cross-entropy method several times with different initial values of the parameter vector. Note that the parametrization must be chosen carefully: indeed, the family of importance densities must contain densities that are close to the zero-variance density $g^*(\mathbf z)=\frac{\indic{ \mathscr D }(\mathbf z)f(\mathbf z)}{p}$ to obtain a good variance reduction, otherwise we could even obtain a variance increase. In order to avoid a variance increase, the parametric family should contain the original density $f$. Indeed, if the parametric family is such that, say, $g_0=f$ for $\alpha=0$, then the parameter optimization should not select a parameter worse than $\alpha=0$, and in the worst case the variance remains unchanged. This is why we advise that the family of the $U_\alpha$ functions include a constant function, so that the original process with jump rate $\lambda_z$ and transition kernel $K_z$ is included in the admissible importance processes. \\
The initial vector of parameters $\alpha_0$ has a big influence on the convergence of the method. Ideally, it should be chosen so as to simulate $n_{CE}$ trajectories in $\mathscr D$ relatively fast but, in order to avoid an over-biasing situation with a wrong approximation of the Kullback-Leibler divergence at the first step, we recommend choosing $\alpha_0$ so that $g_{\alpha_0}$ is as close as possible to $f$. Testing several values of $\alpha_0$ is therefore necessary to get a sense of what a good $\alpha_0$ is.
\section{Simulation study on a test case}
\label{sec:Ex}
In this Section we present how we build an importance process for the heated room system presented in Section \ref{subsec:sys3comp}.\\
\subsection{A parametric family of importance processes}
In the heated-room system, the three heaters are identical and are in parallel redundancy, so we expect the probability $U^*(z,s)=\mathbb E\big[\indic{\mathscr D}(\mathbf Z)| Z_{s}=z\big]$ to increase with the number of failed heaters in the state $z$. Therefore, denoting by $b(z)$ the number of failed heaters in state $z$, we start by setting \begin{equation}
U_\alpha(z,s)= H_\alpha\big( b(z)\big)\,Q(x,s)
\end{equation} where $Q$ is a function of position and time, and $H_\alpha $ is a function on integers. We set $H_\alpha( 0)=1$. As we want $U_\alpha(z,s)$ to increase with $b(z)$, $H_\alpha $ has to be an increasing function.
If $T$ denotes the time until the next jump after a time $s$, using \eqref{eq:Ustop} with $\tau=s+T$ we get: \begin{align} \quad U^*(z,s)&=\mathbb E\big[U^*(Z_{s+T},s+T)\big| Z_s=z\big]. \end{align} As the repair rates are larger than the failure rates by one order of magnitude in practice, when there is at least one failed heater, the probability of arriving in a more degraded state $Z_{s+T}$ is much lower than the probability of having a repair. This last remark actually applies to any reliable industrial system (see for instance \cite{Pycatshoo2}). Ideally we would like $U_\alpha$ to mimic this property of $ U^*$, so we would like to have
\begin{equation}
U_\alpha (z,s)=\mathbb E\big[U_\alpha(Z_{s+T},s+T)\big| Z_s=z\big] \label{mimic}
\end{equation}
which can be reformulated as :
\begin{equation}
H_\alpha \big(b(z)\big)=\sum_{m^+\in\mathbb{M}}H_\alpha\big(b(x,m^+)\big)\int_{(0,t_z^*]}\mbox{\hspace{-1.4em}}K_{\Phi_z(u)}\big((\phi_x^m(u),m^+)\big)w_z(u)d\mu_z(u) \label{mimic2}
\end{equation}
where $w_z(u)=\dfrac{Q (\phi_x^m(u),s+u)}{Q (x,s )}\exp\big[\text{-}\Lambda_z(u)\big]$.
As a repair is much more likely than a failure, if the transition from the state $(\phi_x^m(u),m)$ to the state $(\phi_x^m(u),m^+)$ indexes a repair, then $K_{\Phi_z(u)}\big((\phi_x^m(u),m^+)\big)$ is larger than if it had indexed a failure. So \eqref{mimic2} implies that, when $b(z)>1$, the value of $H_\alpha( b(z))$ is closer to $H_\alpha( b(z)-1)$ than to $H_\alpha( b(z)+1)$. As $H_\alpha$ is assumed to be increasing, it must be convex. So
we propose to take $H_\alpha( b(z) )=\exp\big[\alpha_1{b(z)}^2\big]$, with $\alpha_1>0$. If, from a state $Z_s=\Phi_z(u)$, the transition $j$ corresponds to a failure, then we have:
\begin{equation}
{\lambda^\prime}^{j}_{z,s}(u)= \lambda^{j} _{ z }(u)\exp\big[\alpha_1\big(2b(z)+1\big) ]\ , \label{lfimp}
\end{equation}
and if it corresponds to a repair then we have:
\begin{equation}
{\lambda^\prime}^{j}_{z,s} (u)= \lambda^{j}_{ z }(u)\exp\big[-\alpha_1\big(2b(z)-1\big)]\ . \label{lrimp}
\end{equation}
The jump rate satisfies:
\begin{equation}
{\lambda }^{\prime}_{ z,s}(u)= \sum_{i\in J} {\lambda^\prime}^{i}_{z,s} (u) \qquad \mbox{ and } \qquad K^{\prime}_{ { \Phi_z(u)}^{\text{-}},s}(z^+)=\frac{ {\lambda^\prime}^{z^+}_{ z,s}(u)}{ \int_E {\lambda^\prime}^{z^+}_{z,s} (u)d\nu_z(z^+)}\ .
\end{equation}
We set the jump kernel such that its density satisfies for $u\in [0,t_z^*)$ :
\begin{equation}
K^\prime_{z^{\text{-}}}(z^+ )=\frac{K_{z^{\text{-}}}(z^+ )\exp \big[-\alpha_1\,b(z^+)^2\big]}{\int_E K_{z^{\text{-}}}(z)\exp\big[-\alpha_1\, b(z )^2\big]d\nu_{z^{\text{-}}}(z ) }.
\end{equation}
Note that plugging $U_\alpha$ into the equations \eqref{bestL} and \eqref{bestK} imposes some kind of symmetry in the biasing of failure and repair rates. It is especially visible in equations \eqref{lfimp} and \eqref{lrimp}: on the one hand, the failure rate associated to the transition from a state $z^-$ to $z^+$ is multiplied by a factor $\exp\big[\alpha_1\big(2b(z^-)+1\big) \big]$, and on the other hand, the repair rate corresponding to the reversed transition (from state $z^+$ to state $z^-$) is divided by a factor $\exp\big[\alpha_1\big(2b(z^-)-1\big) \big]$. Equations \eqref{bestL} and \eqref{bestK} not only imply that the failures should be enhanced and the repairs inhibited, but also that the magnitudes of these distortions should be reciprocal.
The square in $H_\alpha $'s formula was introduced to strengthen the failure rates when the number of broken heaters gets larger. The idea was to shorten the durations during which several heaters are simultaneously failed in the simulated trajectories. Indeed, as repair is faster than failure, the shorter the durations with a failed heater, the more likely the trajectory. Increasing the failure rates with the number of broken heaters is a means to simulate more trajectories in $\mathscr D$ while maintaining the natural proportions between the likelihoods of the trajectories, which should decrease the variance.
As the failure on demand was likely to play an important role in the system failure, we chose to separate it from spontaneous failures in our parametrisation, setting $U_\alpha((x_{min},m),s)=\exp[-\alpha_2 b(z)^2]H_\alpha(x_{min},s)$. This allows a better fit of $U_\alpha$ to $U^*$. Under this assumption, equation \eqref{impK2} implies that, for $z^-=(x_{min},m)$, the importance kernel takes the form:
\begin{equation}
K^\prime_{z^{\text{-}}}(z^+ )=\frac{K_{z^{\text{-}}}(z^+ )\exp \big[-\alpha_2\,b(z^+)^2\big]}{\int_E K_{z^{\text{-}}}(z)\exp\big[-\alpha_2\, b(z )^2\big]d\nu_{z^{\text{-}}}(z ) }.
\label{impK}
\end{equation}
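To fix ideas, the biased rates \eqref{lfimp}--\eqref{lrimp} and the reweighted kernels such as \eqref{impK} can be implemented in a few lines; the sketch below is only illustrative, assumes a finite set of candidate arrival states, and uses hypothetical arguments (\texttt{candidates}, \texttt{base\_probs}, \texttt{n\_failed}) that are not part of the original implementation.
\begin{verbatim}
import numpy as np

def biased_rate(base_rate, b, alpha1, is_failure):
    # Failures from a state with b failed heaters are multiplied by
    # exp[alpha1*(2b+1)]; repairs are divided by exp[alpha1*(2b-1)].
    if is_failure:
        return base_rate * np.exp(alpha1 * (2 * b + 1))
    return base_rate * np.exp(-alpha1 * (2 * b - 1))

def sample_arrival(rng, candidates, base_probs, n_failed, alpha):
    # Reweight the original kernel probabilities by exp[-alpha * b(z)^2],
    # renormalize, and draw one arrival state (discrete illustration).
    w = np.asarray(base_probs) * np.exp(-alpha * np.asarray(n_failed, float) ** 2)
    w /= w.sum()
    return candidates[rng.choice(len(candidates), p=w)]
\end{verbatim}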
\subsection{Results}
The Monte-Carlo simulations have been carried out using the Python library PyCATSHOO. (The flow functions $\phi^m_x$ were computed using a Runge-Kutta method of order 4 with a discretization step of 0.01. This discretization step is small enough so that reducing it further does not change the estimations.) As the Cross-Entropy method was not yet implemented in PyCATSHOO, we have used a specific Python code for the Cross-Entropy and the importance sampling methods. The system parameters used in the simulation were the following ones: $x_{min}=0.5$, $x_{max}=5.5$, $x_e=-1.5$, $\beta_1=0.1$, $\beta_2=5$, $t_f=100$. Trajectories were all initiated in the state $z_0=\big(7.5,\ (OFF,OFF,OFF)\big)$. The probability of having a system failure before $t_f$ was estimated to $p=1.29\times10^{-5}$ with an intensive Monte-Carlo estimation based on $10^8$ runs.
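For completeness, one RK4 step of the kind used to tabulate the flows $\phi^m_x$ can be written as follows; the function \texttt{vector\_field} is a hypothetical stand-in for the temperature dynamics of the heated-room system in a given mode, and this sketch is not the actual PyCATSHOO implementation.
\begin{verbatim}
def rk4_step(vector_field, x, t, h=0.01):
    # One classical 4th-order Runge-Kutta step with the step size 0.01
    # mentioned above.
    k1 = vector_field(x, t)
    k2 = vector_field(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = vector_field(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = vector_field(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
\end{verbatim}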
\renewcommand{\arraystretch}{1.5}
\begin{table}[ht]\centering{
\begin{tabular}{c|c|c|c|c|c|c|@{}}
\cline{2-7}
& $N_{sim}$ & $\hat p$ & $\hat\sigma^2/N_{sim}$ & $ \widehat{\mbox{IC}}\times 10^5$ & $t_{sim}$ & $\widehat{eff}$\\
\hline
\multicolumn{1}{|c|}{ \multirow{4}{*}{IS} }
& $10^3$ &$\, 1.28 \times10^{-5}$ & $\, 4.37\times10^{-13}$ & $\,[ 1.15 , 1.41 ] \,$& $0.073$ s & $3.1\times10^{10}$\\
\multicolumn{1}{|c|}{}& $10^4$ &$\, 1.273\times10^{-5}$ & $\, 5.07\times10^{-14}$ & $\,[ 1.228, 1.317 ] \,$& $0.073$ s & $2.7\times10^{10}$\\
\multicolumn{1}{|c|}{}& $10^5$ &$\, 1.289\times10^{-5}$ & $\, 5.01\times10^{-15}$ & $\,[ 1.275, 1.303 ] \,$& $0.077$ s & $2.6\times10^{10}$\\
\multicolumn{1}{|c|}{}& $10^6$ &$\, 1.288\times10^{-5}$ & $\, 5.05\times10^{-16}$ & $\,[ 1.283, 1.292 ] \,$& $0.079$ s & $2.5\times10^{10}$\\
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{MC}} & $10^6$ & $\, 0.4 \times 10^{-5}$ & $\, 4.00\times10^{-12}$ & $\,[ 0.01, 0.79 ] \,$ & $0.022$ s & no convergence \\
\multicolumn{1}{|c|}{} & $10^7$ & $\, 1.3 \times 10^{-5}$ & $\, 1.28\times 10^{-12}$ & $\,[ 1.07,1.51 ] \,$ & $0.022$ s & $3.5 \times 10^6$\\
\hline
\end{tabular}}
\caption{Comparison between Monte-Carlo and importance sampling estimations}
\label{res}
\end{table}
The values of the parameters selected by the cross-entropy method were $\alpha_1\simeq 0.915$ and $\alpha_2\simeq 1.197$; for the first step, the approximation of the Kullback-Leibler divergence between $g^*$ and $g_\alpha $ was obtained by simulating from a biased density with parameters $(0.5,0.5)$. The whole cross-entropy method lasted approximately 9 minutes. Most of the running time was allocated to the optimization within each step of the cross-entropy, because each evaluation of the objective function and of its gradient was costly. In order to optimize the running time of the cross-entropy method, the sizes of the samples used for the approximations of the Kullback-Leibler divergence were set by simulating until we obtained $n_{CE}=100$ trajectories with a system failure. The value $n_{CE}=100$ roughly guarantees that the first two digits of the Kullback-Leibler divergences are identified by their approximations. For each of the three steps needed to select the parameters, samples of respectively 1970, 126, and 127 trajectories were used.
A comparison between Monte-Carlo and the associated importance sampling estimates is presented in Table \ref{res}, where we display the number $N_{sim}$ of simulations used for each method, the estimates $\hat p$ of the probability, the associated empirical variances $\hat\sigma^2/N_{sim}$ and confidence intervals $\widehat{\mbox{IC}}$, and the mean time of a simulation $t_{sim}$ in seconds. For $10^6$ simulations the results show that the Monte-Carlo estimator has not converged yet, whereas the importance sampling estimate is very accurate. To compare the two methods we estimate the efficiency of their estimators when they have converged. The efficiency is defined by the ratio of the precision and the computational time: $$eff=\frac{1}{\sigma^2/N_{sim}}\times \frac{1}{N_{sim} t_{sim}}=\frac{1}{\sigma^2t_{sim}}.$$
The efficiency can be interpreted as the contribution of a second of computation to the precision of the estimator. We estimate it by $\widehat{eff}= \frac{1}{\hat\sigma^2t_{sim}}$. The results indicate that our importance sampling strategy is approximately $7\,000$ times more efficient than a Monte-Carlo method. \\
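The efficiencies reported in Table \ref{res} can be recomputed directly from the tabulated quantities, as in the small sketch below (the numerical values are those of the table, rounded).
\begin{verbatim}
def efficiency(var_over_n, n_sim, t_sim):
    # eff = 1 / (sigma^2 * t_sim), where sigma^2 is recovered from the
    # reported empirical variance of the estimator, sigma^2 / N_sim.
    sigma2 = var_over_n * n_sim
    return 1.0 / (sigma2 * t_sim)

# efficiency(5.05e-16, 1e6, 0.079)  ->  about 2.5e10   (IS, N_sim = 10^6)
# efficiency(1.28e-12, 1e7, 0.022)  ->  about 3.5e6    (MC, N_sim = 10^7)
\end{verbatim}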
We also verify that the importance sampling estimations are asymptotically normally distributed. The asymptotic normality was not observed for $N=10^3$, but it was observed for larger sample sizes. For instance, for $N=10^4$, Figure \ref{fig:normallydistr} shows a normalized histogram of 100 estimations $\hat p_{IS}$ that matches the normal density with mean $p$ and with standard deviation equal to that of the 100 estimations.\begin{figure}
\caption{Asymptotic normality of the IS estimator (for $N=10^4$)}
\label{fig:normallydistr}
\end{figure}
We also recorded the weights of the failing trajectories in the sample of one run of the IS method with $N=10^4$. Figure \ref{fig:weights1} shows that the weights are close to the value $p$, suggesting that the importance density is close to the optimal density. Figure \ref{fig:weights2} is a zoom-in on the largest weights: it shows that there is no degenerate preponderant weight such that $\frac{f(\mathbf Z^\prime_i)}{g_\alpha(\mathbf Z^\prime_i)}\gg p$, suggesting that there is no under-favored region of $\mathscr D$ under $g_\alpha$. Here we do not need to check for weight degeneracy in all parts of $\mathscr D$ because, as we know the value of $p$, we can simply check that the estimations are unbiased and normally distributed to ensure that convergence is reached. Finally, in Figures \ref{fig:traj1} and \ref{fig:traj2}, we present the graphs of two trajectories obtained respectively with the original process with density $f$ and with the importance process selected by the CE method with density $g_{(\alpha_1,\alpha_2)}$.
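A simple way to reproduce this weight check is sketched below; \texttt{log\_f}, \texttt{log\_g} and \texttt{failing\_trajs} are hypothetical objects coming from one IS run, and \texttt{p\_ref} is a reference value for $p$ (here the intensive Monte-Carlo estimate $1.29\times 10^{-5}$).
\begin{verbatim}
import numpy as np

def weight_diagnostics(log_f, log_g, failing_trajs, p_ref):
    # Likelihood-ratio weights f/g of the failing trajectories; if the
    # importance density were optimal they would all equal p, so a maximal
    # weight much larger than p_ref flags an under-favored part of D.
    w = np.array([np.exp(log_f(z) - log_g(z)) for z in failing_trajs])
    return {"max/p": w.max() / p_ref, "mean": w.mean(), "cv": w.std() / w.mean()}
\end{verbatim}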
\begin{figure}
\caption{Allocation of the weights of failing trajectories (for $N=10^4$)}
\label{fig:weights1}
\end{figure}
\begin{figure}
\caption{Allocation of the largest weights in the sample (for $N=10^4$)}
\label{fig:weights2}
\end{figure}
\begin{figure}
\caption{A trajectory of the coordinates of the state of the system. This trajectory was generated with the original process with density~$f$. }
\label{fig:traj1}
\caption{A trajectory of the coordinates of the state of the system. This trajectory was generated with the importance process with density $g_{(\alpha_1,\alpha_2)}$.}
\label{fig:traj2}
\end{figure}
\section{Discussion}
Our work shows that importance sampling is applicable to any PDMP, with or without boundaries. We have given the expressions of the intensities and the kernels of the optimal importance process, and we have seen that they depend on a critical function $U^*$. These expressions show that the optimal importance process has a specific structure. Although we do not have a closed-form expression of the function $U^*$, these expressions are important for two reasons: 1)~They prove the existence of an optimal bias, which ensures that the importance sampling technique can be very efficient on PDMPs. 2)~They can guide the practical design of an efficient and explicit importance process. Indeed, by replacing $U^*$ by an approximation in the optimal expressions of the transition rates and kernels, we preserve the structure of the optimal importance process. The presented method therefore helps in designing an importance process having the same behavior as the optimal one, and it showed good efficiency on our case study.
This biasing strategy can be applied to any system, but the parametric shape of the approximation of $U^*$ may have to be adapted from case to case. The parametric shape presented in this article is suited to any system whose components are similar in terms of failure and repair rates and which contains one minimal cut set (a minimal cut set being a group of components that all need to fail for the system to fail). For a system with a different configuration, we expect the shape of the function $U^*$ to differ, and the method may require a different parametric approximation of the function~$U^*$.
Our approach through the function $U^*$ can be applied to any sub-class of PDMPs, like, for instance, Markov chains \cite{kuruganti1996importance}, continuous-time Markov chains, or queueing models. In the particular case of a PDMP that is a continuous-time Markov chain, the definition of the function $U^*$ is close to the forward committor function used in transition path theory \cite{metzner2009transition}. In the case of a general PDMP, a committor function would be a function $(z,s)\to \mathbb E\big[\indic{\mathscr D_{A}}(\mathbf Z)| Z_{s}=z \big]$ where $\mathscr D_{A}$ is the set of trajectories that pass through $D$ without passing through a set $A\subset E$ first. $ U^*$ is therefore a committor function for which $A=\emptyset$. It is also interesting to note that, in the Adaptive Multilevel Splitting algorithm, the asymptotic variance is minimized when the committor function is used as the score function \cite{brehier2015analysis,cerou2019asymptotic}; similarly, in the interacting particle system method \cite{del2005genealogical}, the function $U^*$ also plays a role in the optimal potential functions \cite{chraibi2018optimal}. Approximating the function $U^*$ therefore allows efficient rare-event estimation not only with importance sampling, but also with the Adaptive Multilevel Splitting algorithm and the interacting particle system method. A method that allows this function to be approximated accurately would lead to significant improvements in the reliability assessment field.
We proposed to find a good approximation of $U^*$ by searching inside a family of parametric functions $(U_\alpha)_{\alpha\in A_{param}}$. In our application to the heated-room system, we used the Cross-Entropy method to select an efficient parameter $\alpha$.
We have noticed that the Cross-Entropy method tends to diverge quickly if it is not well initialized: the choices of $\alpha_0$ and $n_{CE}$ are critical for the convergence of the method. These two parameters must be well tuned because they impact the quality of the first approximations of the Kullback-Leibler divergence within the CE algorithm, and these approximations must be accurate enough to launch the optimization routine on a good track. Choosing a high value for $n_{CE}$ is a way to ensure that these first approximations are accurate enough, but it is not worth considering in practice, as it greatly slows down the CE algorithm. The only solution is to find right away an $\alpha_0$ which yields correct approximations of the Kullback-Leibler divergence. This is unfortunately difficult to do, and may be even harder with more complex systems. We believe that the CE method used in this article must be improved or replaced by another parameter optimization method so that the initialization becomes less critical.
With the IS method, depending on the importance process chosen, we can observe some weight degeneracy and therefore slow convergence. Weight degeneracy typically happens when two conditions are met: 1) there is a domain $\mathscr D_1 \subset\mathscr D$ such that the likelihood ratios within this domain are very large compared to the likelihood ratios in the other domains of $\mathscr D$, meaning that the domain $\mathscr D_1$ is under-favored by the importance density $g$ compared to $f$; and 2) although it is unlikely, a few realizations of the importance process $\mathbf{Z}_i^\prime$ are nonetheless drawn in $\mathscr D_1$, which creates unbalanced weights.
Some methods reduce the risk of weight degeneracy by using resampling schemes, as for instance in the interacting particle system (IPS) method \cite{del2005genealogical}; but, even though it is reduced, the risk of weight degeneracy still remains within the IPS method. The IPS method takes as input some potential functions $G_k$ (also called score functions). If these functions do not favor the domain $\mathscr D_1$, the convergence is slowed down \cite{chraibi2018optimal}, and we can end up in the same situation. In this method the weights of the simulation outputs are the inverse of the product of the resampling weights multiplied by the evaluations of the objective function (see equation 2.18 in \cite{del2005genealogical}). The degenerate weights would therefore appear each time some trajectories in $\mathscr D_1$ are selected by the resamplings, even though the resamplings make it unlikely. Weight degeneracy does not depend on the method used; it rather depends on the choice of the importance process or of the potential functions. Weight degeneracy is the symptom of a slow convergence, therefore the sample size should be increased until the weight degeneracy fades out: weight degeneracy is a tool to select the sample size for both methods. For a fixed sample size, if the sample contains some trajectories in $\mathscr D_1$, weight degeneracy can be a criterion to reject the importance density, or the potential functions, used. But this last criterion is valid only if the sample contains observations in the under-favored domains of $\mathscr D$, which is unlikely by definition. One important point to stress is that witnessing no weight degeneracy within the simulation outputs does not guarantee convergence: we can consider that convergence is reached if the sample size is reasonably large and we do not witness weight degeneracy in any part of $\mathscr D$.
When choosing the importance process, there is a risk of over-biasing. Over-biasing corresponds to the situation where a domain $\mathscr D_2\subset\mathscr D$ is over-favored by the importance process, resulting in the under-favoring of another domain $\mathscr D_1\subset\mathscr D$. In this situation a weight degeneracy exists in $\mathscr D_1$, but it is not witnessed because no trajectory within the sample is drawn in $\mathscr D_1$. This situation happens when one type of failing trajectory is over-represented in the importance distribution compared to the other types of failing trajectories. This phenomenon can result in underestimating the probability of the system failure and in underestimating the variance. To avoid it, we must satisfy two points: 1) We must design a parametric importance density that can increase the likelihoods of each type of failing trajectory separately. 2) We need to initiate the Cross-Entropy method with a sample of trajectories that contains all types of failing trajectories. It is therefore preferable to apply this method only to systems of reasonable complexity, for which it is possible to determine the different types of failing trajectories. The parametric functions $(U_\alpha)_{\alpha\in A_{param}}$ should be flexible enough to satisfy the two previous points, but one should pay attention to keep the dimension of the parameter vector $\alpha$ reasonably small, so as to avoid a prohibitive computational effort during the optimization routines of the CE.\\
\section{Conclusion}
We have presented a model for multi-component systems based on PDMPs. In order to speed up reliability assessment on such systems, we have adapted the importance sampling method to trajectories of PDMPs. We have given a dominating measure for PDMP trajectories, which allows the likelihood ratio needed to apply the importance sampling method to such processes to be properly defined. The possible kinds of importance processes were discussed, and the optimal biasing strategy when simulating jump by jump was exhibited. We developed and tested a biasing strategy for a three-component heated-room system. Our importance sampling method has shown good performance, increasing the efficiency of the estimator by a factor of $7\,000$.
\appendix
\section{The measure $\zeta$ is $\mathbf \sigma$-finite\\ when $\mathbf{t_f<\infty}$ and the measures $\mathbf{\nu_{z^-}}$ are bounded}
\label{appendix-sec1}
Remember that we defined the $\sigma$-algebra $\mathscr S$ on the set of the possible values of $\big(Z_{S_k},T_k\big)_{k\leq N}$ as the $\sigma$-algebra generated by the sets in
$\underset{\ n\in \mathbb N^* }{\bigcup} \mathscr B\Big(\Big\{\big(z_{s_k},t_k\big)_{k\leq n}\in(E\times\mathbb R^{*}_\text{+})^{n},\, \overset{n}{\underset{i=0}{\sum}} t_i=t_f\Big\}\Big)$. The measure $\zeta$ is defined by:
\begin{align}
B\in \mathscr S,\quad \zeta \big(\Theta^{-1}(B ) \big)= & \underset{ \mbox{\hspace{-4ex} } (z_{_k},t_{_k})_{k\leq n}
\in B}{\int\qquad d\delta_{t^*_{n}}(t_n)}\ d\nu_{z_{n }^-}(z_n)\ d\mu_{t^*_{z_{n-1}}}(t_{n-1}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})
\label{zetaApp}
\end{align}
\begin{proof}
Let $A_n=\Big\{\big(z_{s_k},t_k\big)_{k\leq n}\in(E\times\mathbb R^{*}_\text{+})^{n},\, \overset{n}{\underset{i=0}{\sum}} t_i=t_f\Big\}$. Then $\Theta^{-1}(A_n)$ is the set of possible trajectories with $n$ jumps, and the sets $A_n$ for $n\in\mathbb N^*$ form a partition of the set of all possible trajectories. Note that
$A_n\subseteq (E\times[0,t_f))^n$, so
\begin{align*}
\zeta \big(\Theta^{-1}(A_n) \big)&\leq \zeta \big(\Theta^{-1}\big( (E\times[0,t_f))^n \big)\big)\\
&\leq\underset{ \mbox{\hspace{-6ex}} (E\times[0,t_f))^n}{\int\qquad d\delta_{t^*_{n}}(t_n)}\ d\nu_{z_{n }^-}(z_n) \ d\mu_{t^*_{z_{n-1}}}(t_{n-1}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})
\end{align*}
We suppose that the measures $\nu_{z^-}$ are bounded uniformly in $z^-$: there exists $M>0$ such that, for all $z^-\in\xoverline{E\,}$, $\nu_{z^-}(E)<M$. Under this assumption, we have:
\begin{align*}
\zeta \big(\Theta^{-1}(A_n) \big)&\leq M \underset{ \mbox{\hspace{-8ex} }(E\times[0,t_f))^{n-1}}{\int\qquad d\mu_{t^*_{z_{n-1}}}(t_{n-1})} \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0}) \\
&\leq M \underset{ (E\times[0,t_f))^{n-2}}{\int\qquad } \int_E\int_{[0,t_f)} d\mu_{t^*_{z_{n-1}}}(t_{n-1})\ d\nu_{z_{n- 1}^-}(z_{n-1}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})\\
&\leq M(t_f+1) \underset{ (E\times[0,t_f))^{n-2}}{\int\qquad } \int_E d\nu_{z_{n- 1}^-}(z_{n-1})d\mu_{t^*_{z_{n-2}}}(t_{n-2}) \ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})\\
&\leq M^2(t_f+1) \underset{ (E\times[0,t_f))^{n-2}}{\int\qquad } d\mu_{t^*_{z_{n-2}}}(t_{n-2})d\nu_{z_{n- 2}^-}(z_{n-2})\ ...\ d\nu_{z_{ 1}^-}(z_1)\ d\mu_{t^*_{z_{o}} }(t_{0})
\end{align*}
By induction we get that $\zeta \big(\Theta^{-1}(A_n) \big)\leq M^n(t_f+1)^n$, which proves that $\zeta$ is $\sigma$-finite.\end{proof}
\section{Optimal intensity's expression, and some properties of $U^*$}
\label{appendix-sec2}
\subsection{Equality \eqref{closureCondition} }
\label{appendix-sec21}
Let $z\in E$ and $s\in[0,t_f)$. Remember that equality \eqref{closureCondition} states that
$$
U^{\text{-}}\big(\Phi_{z}(t_z^{*}),s+t_z^{*}\big)= \lim_{
t\nearrow t_z^{*} } U^{*}\big(\Phi_{z}(t),s+t\big).$$
\begin{proof}
We denote by $T$ the time until the next jump after the trajectory has reached the state $Z_{s+t}=\phi_{z}(t)$. Then we have:
\begin{align*}
U^{*}\big(\Phi_{z}(t),s+t\big)&=\mathbb E\big[\indic{\mathscr D}(\mathbf z)\big|Z_{s+t}=\phi_{z}(t)\big]\\
&=\mathbb E\Big[\mathbb E\big[\indic{\mathscr D}(\mathbf z)\big|Z_{T+s+t}\big]\Big|Z_{s+t}=\phi_{z}(t)\Big]\\
&=\mathbb E\Big[(\indic{T<t^*_{\Phi_z(t)}}+\indic{T=t^*_{\Phi_z(t)}})U^*(Z_{T+s+t},s+t+T) \Big|Z_{s+t}=\phi_{z}(t)\Big]\\
&= \int_0^{t^*_{\Phi_z(t)}} U^-( \Phi_{\Phi_z(t)}(u),s+t+u)\lambda_{\Phi_z(t)}( u)\exp\big[-\Lambda_{\Phi_z(t)}(u)\big]du \\
&\quad + \exp\big[-\Lambda_{\Phi_z(t)}( t^*_{\Phi_z(t)})\big]\int_E K_{z^-}(z^+)U^*(z^+,s+t+t^*_{\Phi_z(t)})d\nu_{z^-}(z^+) \\
&\mbox{where } z^-=\Phi_{\Phi_z(t)}(t^*_{\Phi_z(t)})\\
U^{*}\big(\Phi_{z}(t),s+t\big)
&= \int_t^{t^*_{z}} U^-( \Phi_{z}(u),s+ u)\lambda_{z}( u)\exp\big[-\Lambda_{\Phi_z(t)}(u-t)\big] du \\
&\quad + \exp\big[-\Lambda_{\Phi_z(t)}( t^*_{z}-t)\big]\int_E K_{z^-}(z^+)U^*(z^+,s+ t^*_{z})d\nu_{z^-}(z^+) \\
&\mbox{where } z^-=\Phi_{z}(t^*_z)
\end{align*}
so $ U^{*}\big(\Phi_{z}(t),s+t\big) =o(1) +(1+o(1))U^{\text{-}}\big(\Phi_{z}(t_z^{*}),s+t_z^{*}\big) $ as $t\to t^*_z,\ t<t^*_z$.
\end{proof}
\subsection{Proof of Theorem \ref{th:optijumprate}}
\label{appendix-sec22}
\begin{proof}
We have seen in the proof above that
\begin{align*}
U^{*}\big(\Phi_{z}(t),s+t\big)
&=\int_t^{t^*_{z}} U^-( \Phi_{z}(u),s+ u)\lambda_{z}(u) \exp\big[-\Lambda_{\Phi_z(t)}(u-t)\big] du \\&\quad + \exp\big[-\Lambda_{\Phi_z(t)}( t^*_{z}-t)\big] \int_E K_{z^-}(z^+)U^*(z^+,s+ t^*_{z})d\nu_{z^-}(z^+) \end{align*}
so
\begin{align*}
U^{*}\big(\Phi_{z}(t),s+t\big)&=\int_t^{t^*_{z}} U^-( \Phi_{z}(u),s+ u)\lambda_{z}(u) \exp\big[-\Lambda_{z}(u )\big] \exp\big[+\Lambda_{z}(t )\big]du \\
&\quad + \exp\big[-\Lambda_{z}( t^*_{z} )\big] \exp\big[+\Lambda_{z}(t )\big] \int_E K_{z^-}(z^+)U^*(z^+,s+ t^*_{z})d\nu_{z^-}(z^+) \\
&=\frac 1 {\exp\big[-\Lambda_{z}(t)\big]}\int_{[t,t^*_{z}]} U^-( \Phi_{z}(u),s+ u)\Big(\lambda_{z}(u)\Big)^{\indic{u< t^*_z}} \exp\big[-\Lambda_{z}(u )\big] d\mu_z(u)
\end{align*}
This last equality allows us to transform \eqref{LB1} into \eqref{LB2}.
\end{proof}
\subsection{Equality \eqref{Uderiv}}
\label{appendix-sec23}
Let $z\in E$ and $s\in[0,t_f)$. Remember that equality \eqref{Uderiv} states that if the functions \linebreak $u\to U^{\text{-}}\big(\Phi_{z}(u),s+u\big) $ and $u\to \lambda_{ z}(u)$ are continuous almost everywhere on $[0,t^*_z)$, then almost everywhere $$ \frac{\partial U^{*}\big(\Phi_{z}(v),s+v\big) }{\partial v}= U^{*}\big(\Phi_{z}(v),s+v\big) \lambda_{ z}(v) - U^{\text{-}}\big(\Phi_{z}(v),s+v\big) \lambda_{ z}(v)$$
\begin{proof}
We denote by $T$ the time until the next jump after the trajectory has reached $Z_s=z$. For $0\leq h<t^*_z$, we define $\tau=\min(h,T)$.
\begin{align*}
U^{*}(z,s)&=\mathbb E\big[\indic{\mathscr D}(\mathbf{Z})\big|Z_s=z\big]\\
&=\mathbb E\Big[\mathbb E\big[\indic{\mathscr D}(\mathbf{Z})\big|Z_{s+\tau}\big]\Big|Z_s=z\Big]\\
&=\mathbb E\Big[(\indic{\tau=h}+\indic{\tau<h})\mathbb E\big[\indic{\mathscr D}(\mathbf{Z})\big|Z_{s+\tau}\big]\Big|Z_s=z\Big]\\
&=\mathbb E\Big[ \indic{T\geq h} \,\mathbb E\big[\indic{\mathscr D}(\mathbf{Z})\big|Z_{s+h}=\Phi_z(h)\big]\Big|Z_s=z\Big]\ +\
\mathbb E\Big[ \indic{T<h}\, \mathbb E\big[\indic{\mathscr D}(\mathbf{Z})\big|Z_{s+T}\big]\Big|Z_s=z\Big]\\
&= U^{*}(\phi_z(h),s+h) \ \mathbb E \big[ \indic{T\geq h} \big|Z_s=z\ \big]\ +\
\mathbb E\Big[ \indic{T<h}\, U^{*}(Z_{s+T},s+T)\Big|Z_s=z\Big]\\[10pt]
&=U^{*}(\phi_z(h),s+h) \,\exp\big[-\Lambda_z(h)\big] +\ \int_0^h \int_E K_{\Phi_z(u)}(z^+) U^{*}(z^+,s+u)d\nu_{\Phi_z(u)}(z^+)\lambda_z(u)\exp\big[-\Lambda_z(u)\big] du \end{align*}
As $\lambda_z(\cdot)$ is continuous almost everywhere, we have that almost everywhere:
\begin{align*}
U^{*}(z,s)&=U^{*}(\phi_z(h),s+h) \,(1-\lambda_z(0)h+o( h )) +\ \int_0^h U^{\text{-}}(\Phi_z(u),s+u) \lambda_z(u)\exp\big[-\Lambda_z(u)\big] du
\end{align*} As $u\to U^{\text{-}}(\phi_z(u),s+u)\lambda_z(u)$ is continuous almost everywhere, we can do a Taylor approximation of the integral, which gives:
\begin{align*}
U^{*}(z,s)-U^{*}(\phi_z(h),s+h) &= -\lambda_z(0)\, h\, U^{*}(\phi_z(h),s+h)
+\ h\, U^{\text{-}}(z,s)\, \lambda_z(0) +o(h )
\end{align*} So $u\to U^{*}(\phi_z(u),s+u)$ is right-continuous almost everywhere. Therefore $U^{*}(\phi_z(h),s+h) =U^{*}(z,s) +o(1)$, and we get:
\begin{align*}
\frac{U^{*}(z,s)-U^{*}(\phi_z(h),s+h)}{h}&= -\lambda_z(0)\, U^{*}(z,s ) \,
+\ U^{\text{-}}(z,s) \lambda_z(0) +o(1 )
\end{align*} Letting $h$ tend to zero, we get that $ u\to U^{*}(\phi_z(u),s+u)$ has a right-derivative at zero. Applying the same kind of reasoning in the state $\Phi_z(-h)$ instead of $z$, we would find that the left-derivative exists and is equal to the right-derivative. So for almost every state $ z\in E$,
$$ \bigg(\frac{\partial U^{*}\big(\Phi_{z}(v),s+v\big) }{\partial v}\bigg)_{v=0}= U^{*}\big(\Phi_{z}(0),s+0\big) \lambda_{ z}(0) - U^{\text{-}}\big(\Phi_{z}(0),s+0\big) \lambda_{ z}(0) $$Applying the same reasoning in a state $\Phi_{z_o}(v)$ instead of $z$ and using the additivity of the flow, we get that almost everywhere:
$$ \forall z_o\in E, v>0,\quad \frac{\partial U^{*}\big(\Phi_{z_o}(v),s+v\big) }{\partial v} = U^{*}\big(\Phi_{z_o}(v),s+v\big) \lambda_{ z_o}(v) - U^{\text{-}}\big(\Phi_{z_o}(v),s+v\big) \lambda_{ z_o}(v) $$
\end{proof}
\end{document} |
\begin{document}
\tolerance=1000
\begin{abstract}
The purpose of this paper is to introduce a cohomology theory for
abelian matched pairs of Hopf algebras and to explore its
relationship to Sweedler cohomology, to Singer cohomology and to
extension theory. An exact sequence connecting these cohomology
theories is obtained for a general abelian matched pair of Hopf
algebras, generalizing those of Kac and Masuoka for matched pairs
of finite groups and finite dimensional Lie algebras. The
morphisms in the low degree part of this sequence are given
explicitly, enabling concrete computations.
\end{abstract}
\maketitle
\setcounter{section}{-1}
\section{Introduction}
In this paper we discuss various cohomology theories for Hopf
algebras and their relation to extension theory.
It is natural to think of building new algebraic objects from
simpler structures, or to get information about the structure of
complicated objects by decomposing them into simpler parts.
Algebraic extension theories serve exactly that purpose, and the
classification problem of such extensions is usually related to
cohomology theories.
In the case of Hopf algebras, extension theories are proving to be
invaluable tools for the construction of new examples of Hopf
algebras, as well as in the efforts to classify finite dimensional
Hopf algebras.
Hopf algebras, which occur for example as group algebras, as
universal envelopes of Lie algebras, as algebras of representative
functions on Lie groups, as coordinate algebras of algebraic
groups and as Quantum groups, have many \lq group like\rq\;
properties. In particular, cocommutative Hopf algebras are group
objects in the category of cocommutative coalgebras, and are very
much related to ordinary groups and Lie algebras. In fact, over an
algebraically closed field of characteristic zero, such a Hopf
algebra is a semi-direct product of a group algebra by a universal
envelope of a Lie algebra, hence just a group algebra if finite
dimensional (see [MM, Ca, Ko] for the connected case, [Gr1,2, Sw2]
for the general case).
In view of these facts it appears natural to try to relate the
cohomology of Hopf algebras to that of groups and Lie algebras.
The first work in this direction was done by M.E. Sweedler [Sw1]
and by G.I. Kac [Kac] in the late 1960's. Sweedler introduced a
cohomology theory of algebras that are modules over a Hopf algebra
(now called Sweedler cohomology). He compared it to group
cohomology, to Lie algebra cohomology and to Amitsur cohomology.
In that paper he also shows how the second cohomology group
classifies cleft comodule algebra extensions. Kac considered Hopf
algebra extensions of a group algebra
$kT$ by the dual of a group algebra $k^N$ obtained from a matched
pair of finite groups $(N,T)$, and found an exact sequence
connecting the cohomology of the groups involved and the group of
Hopf algebra extensions $\operatorname{Opext} (kT,k^N)$
$$\begin{array}{l}
0\to H^1(N\bowtie T,k^{\bullet})\to H^1(T,k^{\bullet})
\oplus H^1(N,k^{\bullet})\to \operatorname{Aut} (k^N\# kT) \\
\to H^2(N\bowtie T,k^{\bullet})\to H^2(T,k^{\bullet})\to\operatorname{Opext}
(kT,k^N)\to H^3(N\bowtie T,k^{\bullet})\to ...
\end{array}$$
which is now known as the Kac sequence. In the work of Kac all
Hopf algebras are over the field of complex numbers and also carry
the structure of a $C^*$-algebra. Such structures are now called
Kac algebras. The generalization to arbitrary fields appears in
recent work by A. Masuoka [Ma1,2], where it is also used to show
that certain groups of Hopf algebra extensions are trivial.
Masuoka also obtained a version of the Kac sequence for matched
pairs of Lie bialgebras [Ma3], as well as a new exact sequence
involving the group of quasi Hopf algebra extensions of a finite
dimensional abelian Singer pair [Ma4].
In this paper we introduce a cohomology theory for general abelian
matched pairs $(T,N,\mu,\nu)$, consisting of two cocommutative
Hopf algebras acting compatibly on each other with bismash product
$H=N\bowtie T$, and obtain a general Kac sequence
$$\begin{array}{l}
0\to H^1(H,A)\to H^1(T,A)\oplus H^1(N,A)\to \mathcal H^1(T,N,A) \to H^2(H,A) \\
\to H^2(T,A)\oplus H^2(N,A)\to \mathcal H^2(T,N,A)\to H^3(H,k)\to
...
\end{array}$$
relating the cohomology $\mathcal H^*(T,N,A)$ of the matched pair
with coefficients in a module algebra $A$ to the Sweedler cohomologies
of the Hopf algebras involved. For trivial coefficients the maps
in the low degree part of the sequence are described explicitly.
If $T$ is finite-dimensional then abelian matched pairs $(T,N,\mu
,\nu )$ are in bijective correspondence with abelian Singer pairs
$(N,T^*)$, and we get a natural isomorphism $\mathcal
H^*(T,N,k)\cong H^*(N,T^*)$ between the cohomology of the abelian
matched pair and that of the corresponding abelian Singer pair. In
particular, together with results from [Ho] one obtains
$\mathcal{H}^1(T,N,k)\cong H^1(N,T^*)\cong \operatorname{Aut} (T^*\# N)$ and $\mathcal
H^2(T,N,k)\cong H^2(N,T^*)\cong \operatorname{Opext} (N,T^*)$. The sequence
gives information about extensions of cocommutative Hopf algebras
by commutative ones. It can also be used in certain cases to
compute the (low degree) cohomology groups of Hopf algebras.
Such a sequence can of course not exist for non-abelian matched
pairs, at least if the sequence is to consist of groups and not
just pointed sets as in [Sch].
Together with the five term exact sequence for a smash product of
Hopf algebras $H=N\rtimes T$ [M2], generalizing that of K.
Tahara [Ta] for a semi-direct product of groups,
$$\begin{array}{l} 1\to H^1_{meas}(T,\operatorname{Hom} (N,A))\to \tilde H^2(H,A)\to H^2(N,A)^T \\
\to H^2_{meas}(T,\operatorname{Hom}(N,A))\to \tilde H^3(H,A)\end{array}$$ it is
possible in principle to give a procedure to compute the second
cohomology group of any abelian matched pair of pointed Hopf
algebras over a field of characteristic zero with a finite group of
points and a reductive Lie algebra of primitives.
In Section 1 abelian Singer pairs of Hopf algebras are reviewed. In
particular we talk about the cohomology of an abelian Singer pair,
about Sweedler cohomology and Hopf algebra extensions [Si, Sw1].
In the second section abelian matched pairs of Hopf algebras are discussed.
We introduce a cohomology
theory for an abelian matched pair of Hopf algebras with
coefficients in a commutative module algebra, and in Section 4 we see how it
compares to the cohomology of a Singer pair.
The generalized Kac sequence for an abelian matched pair of Hopf
algebras is presented in Section 5. The homomorphisms in the low degree
part of the sequence are given
part of the sequence are given
explicitly, so as to make it possible to use them in explicit
calculations of groups of Hopf algebra extensions and low degree
Sweedler cohomology groups.
Section 6 examines how the tools introduced combined with some additional
observations can be used to describe explicitly the second cohomology group
of some abelian matched pairs.
In the appendix some results from (co-)simplicial homological
algebra used in the main body of the paper are presented.
Throughout the paper ${_H\mathcal V}$, ${_H\mathcal A}$ and
${_H\mathcal C}$ denote the categories of left $H$-modules,
$H$-module algebras and $H$-module coalgebras, respectively, for
the Hopf algebra $H$ over the field $k$. Similarly, $\mathcal
V^H$, $\mathcal A^H$ and $\mathcal C^H$ stand for the categories
of right $H$-comodules, $H$-comodule algebras and $H$-comodule
coalgebras, respectively.
We use the Sweedler sigma notation for comultiplication: $\Delta(c)=c_1\otimes c_2$,
$(1\otimes\Delta)\Delta(c)=c_1\otimes c_2\otimes c_3$ etc. In the cocommutative setting
the indices are clear from the context and we will omit them whenever
convenient.
If $V$ is a vector space, then $V^n$ denotes its $n$-fold tensor power.
\section{Cohomology of an abelian Singer pair}
\subsection{Singer pairs}
Let $(B,A)$ be a pair of Hopf algebras together with an action
$\mu\colon B\otimes A\to A$ and a coaction $\rho\colon B\to B\otimes A$ so that $A$
is a $B$-module algebra and $B$ is an $A$-comodule coalgebra. Then
$A\otimes B$ can be equipped with the cross product algebra structure
as well as the cross product coalgebra structure. To ensure
compatibility of these structures, i.e.\ to get a Hopf algebra,
further conditions on $(B,A,\mu ,\rho)$ are necessary. These are
most easily expressed in terms of the action of $B$ on $A\otimes A,$
twisted by the coaction of $A$ on $B$,
$$\mu_2=(\mu\otimes m_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)\colon
B\otimes A\otimes A\to A\otimes A,$$
i.e.\ $b(a\otimes a')=b_{1B}(a)\otimes b_{1A}\cdot b_2(a')$, and the coaction of $A$
on $B\otimes B,$ twisted by the action of $B$ on $A$,
$$\rho_2=(1\otimes 1\otimes m_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\rho )\colon
B\otimes B\to B\otimes B\otimes A,$$
i.e.\ $\rho_2(b\otimes b')=b_{1B}\otimes b'_B\otimes b_{1A}\cdot b_2(b'_A)$.
Observe that for trivial coaction $\rho\colon B\to B\otimes A$ one gets the
ordinary diagonal action of $B$ on $A\otimes A$, and for trivial
action $\mu\colon B\otimes A\to A$ the diagonal coaction of $A$ on $B\otimes
B$.
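To make this last remark concrete (a routine unwinding in Sweedler
notation): if the coaction is trivial, $\rho(b)=b\otimes 1_A$, then
$b_{1B}=b_1$ and $b_{1A}=1_A$ in the formula for $\mu_2$, so that
$$b(a\otimes a')=b_1(a)\otimes b_2(a'),$$
the diagonal action of $B$ on $A\otimes A$; dually, if the action is
trivial, $\mu=\varepsilon_B\otimes 1$, then $b_2(b'_A)=\varepsilon(b_2)b'_A$ and
$$\rho_2(b\otimes b')=b_B\otimes b'_B\otimes b_A\cdot b'_A,$$
the diagonal coaction of $A$ on $B\otimes B$.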
\begin{Definition} The pair $(B,A,\mu ,\rho )$ is called an abelian Singer
pair if $A$ is commutative, $B$ is cocommutative and the following are
satisfied.
\begin{enumerate}
\item $(A,\mu)$ is a $B$-module algebra (i.e.\ an object of
$_B\mathcal A$), \item $(B,\rho )$ is an $A$-comodule coalgebra
(i.e.\ an object of $\mathcal C^A$), \item $\rho m_B=(m_B\otimes
1)\rho_2$, i.e.\ the diagram
$$\begin{CD}
B\otimes B @>m_B >> B \\
@V\rho_2 VV @V\rho VV \\
B\otimes B\otimes A @>m_B\otimes 1 >> B\otimes A
\end{CD}$$
commutes, \item $\Delta_A\mu =\mu_2(1\otimes\Delta_A)$, i.e.\ the
diagram
$$\begin{CD}
B\otimes A @>\mu >> A \\
@V1\otimes\Delta_A VV @V\Delta_A VV \\
B\otimes A\otimes A @>\mu_2 >> A\otimes A
\end{CD}$$
commutes.
\end{enumerate}
\end{Definition}
The twisted action of $B$ on $A^n$ and the twisted coaction of $A$
on $B^n$ can now be defined inductively:
$$\mu_{n+1}=(\mu_n\otimes m_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1^n\otimes 1)
\colon B\otimes A^{n}\otimes A\to A^{n}\otimes A$$
with $\mu_1=\mu$ and
$$\rho_{n+1}=(1\otimes 1^n\otimes m_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes\rho_n)\colon
B\otimes B^n\to B\otimes B^n\otimes A$$ with $\rho_1=\rho$.
\subsection{(Co-)modules over Singer pairs}
It is convenient to introduce the abelian category $_B\mathcal
V^A$ of triples $(V,\omega ,\lambda)$, where \begin{enumerate}
\item $\omega\colon B\otimes V\to V$ is a left $B$-module structure, \item
$\lambda\colon V\to V\otimes A$ is a right $A$-comodule structure and \item
the two equivalent diagrams
$$\begin{CD}
B\otimes V @>\omega >> V @. \quad \quad @. B\otimes V @>\omega >> V \\
@V1\otimes\lambda VV @V\lambda VV @. @V\lambda_{B\otimes V} VV @V\lambda VV \\
B\otimes V\otimes A @>\omega_{V\otimes A} >> V\otimes A @. \quad \quad @. B\otimes
V\otimes A @>\omega \otimes 1>> V\otimes A
\end{CD}$$
commute, where the twisted action $\omega_{V\otimes A}\colon B\otimes V\otimes A\to
V\otimes A$ of $B$ on $V\otimes A$ is given by $\omega_{V\otimes A}=(\omega\otimes
m_A(1\otimes\mu ))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)$ and the
twisted coaction $\lambda_{B\otimes V}\colon B\otimes V\to B\otimes V\otimes A$ of $A$
on $B\otimes V$ by $\lambda_{B\otimes V}=(1\otimes 1\otimes m_A(1\otimes\mu
))(14235)((\rho\otimes 1)\Delta_B\otimes\lambda )$.
\end{enumerate}
The morphisms are $B$-linear and $A$-colinear maps. Observe that
$(B,m_B,\rho )$, $(A,\mu ,\Delta_A)$ and $(k, \varepsilon_B\otimes 1,
1\otimes\iota_A)$ are objects of ${_B\mathcal V}^A$. Moreover, $(
{_B\mathcal V}^A, \otimes , k)$ is a symmetric monoidal category, so
that commutative algebras and cocommutative coalgebras are defined
in $( {_B\mathcal V}^A, \otimes , k)$.
The free functor $F\colon \mathcal V^A\to {{_B\mathcal V}^A}$, defined by
$F(X,\alpha )=(B\otimes X, \alpha_{B\otimes X})$ with twisted $A$-coaction
$\alpha_{B\otimes X}=(1\otimes 1\otimes m_A(1\otimes\mu))(14235)((\rho\otimes
1)\Delta_B\otimes\alpha )$ is left adjoint to the forgetful functor
$U\colon {_B\mathcal V^A}\to \mathcal V^A$, with natural isomorphism
$\theta\colon {_B\mathcal V^A}(FM,N)\to \mathcal V^A(M,UN)$ given by
$\theta(f)(m)=f(1\otimes m)$ and
$\theta^{-1}(g)(n\otimes m)=\mu_N(n\otimes g(m))$. The unit $\eta_M\colon M\to UF(M)$ and
the counit $\varepsilon_N\colon FU(N)\to N$ of the adjunction are given by
$\eta_M=\iota_B\otimes 1$ and $\varepsilon_N=\mu_N$, respectively, and give
rise to a comonad $\mathbf{G}=(FU,\varepsilon, \delta=F\eta U).$
Similarly, the cofree functor $L\colon {_B\mathcal V}\to {_B\mathcal V}^A$,
defined by $L(Y,\beta )=(Y\otimes A, \beta_{Y\otimes A})$ with twisted
$B$-action $\beta_{Y\otimes A}=(\beta\otimes m_A(1\otimes\mu
))(14235)((\rho\otimes 1)\Delta_B\otimes 1\otimes 1)$ is right adjoint to the
forgetful functor $U\colon {_B\mathcal V}^A\to {_B\mathcal V}$, with
natural isomorphism $\psi\colon {_B\mathcal V}(UM,N)\to {_B\mathcal
V}^A(M,LN)$ given by $\psi (g)=(1\otimes g)\delta_M$ and
$\psi^{-1}(f)=(1\otimes\varepsilon_A)f$. The unit $\eta_M\colon M\to LU(M)$ and
the counit $\varepsilon_N\colon UL(N)\to N$ of the adjunction are given by
$\eta_M=\delta_M$ and $\varepsilon_N=1\otimes\varepsilon_A$, respectively.
They give rise to a monad (or triple) $\mathbf{T}=(LU,\eta ,\mu
=L\varepsilon U)$ on $_B\mathcal{V}^A$. The (non-commutative) square of functors
$$\begin{CD}
\mathcal V @>L>> \mathcal V^A \\
@VFVV @VFVV \\
{_B\mathcal V} @>L>> {_B\mathcal V}^A
\end{CD}$$
together with the corresponding forgetful adjoint functors
describes the situation. Observe that $ {_B\mathcal
V}^A(G(M),T(N))\cong \mathcal V(UM,UN)$. These adjunctions, monads
and comonads restrict to coalgebras and algebras.
\subsection{Cohomology of an abelian Singer pair}
The comonad $\mathbf G=(FU,\varepsilon ,\delta =F\eta U)$ defined on
${_B\mathcal V}^A$ can be used to construct $B$-free simplicial
resolutions $\mathbf X_B(N)$ with $X_n(N)=G^{n+1}N=B^{n+1}\otimes N$,
faces and degeneracies
$$\partial_i=G^i\varepsilon_{G^{n-i}(N)}\colon X_{n+1}\to X_n,
\quad s_i=G^i\delta_{G^{n-i}(N)}\colon X_n\to X_{n+1}$$
given by $\partial_i =1^i\otimes m_B\otimes 1^{n+1-i}$ for $0\leq i\leq
n$, $\partial_{n+1}=1^{n+1}\otimes\mu_N$, and $s_i=1^i\otimes\iota_B\otimes 1^{n+2-i}$ for $0\leq i\leq n$.
The monad $\mathbf T=(LU,\eta ,\mu =L\varepsilon U)$ on
${_B\mathcal V}^A$ can be used to construct $A$-cofree cosimplicial
resolutions $\mathbf{Y}_A(M)$ with $Y_A^n(M)=T^{n+1}M=M\otimes A^{n+1}$,
cofaces and codegeneracies
$$\partial^i=T^{n+1-i}\eta_{T^i(M)}\colon Y^n\to Y^{n+1} \quad ,\quad s^i=T^{n-i}\mu_{T^i(M)}\colon Y^{n+1}\to Y^{n}$$
given by $\partial^0=\delta_M\otimes 1^{n+1}$,
$\partial^i=1^{i-1}\otimes\Delta_A\otimes 1^{n+2-i}$ for $1\leq i\leq
n+1$, and $s^i=1^{i+1}\otimes\varepsilon_A\otimes 1^{n+1-i}$ for $0\leq i\leq
n$.
The total right derived functor of
$$ {_B\operatorname{Reg}^A}=\mathcal{U} {_B\operatorname{Hom}^A}\colon ({_B\mathcal C^A})^{op}\times {_B\mathcal A^A}\to \operatorname{Ab}$$
is now defined by means of the simplicial $\mathbf G$-resolutions
$\mathbf X_B(M)=\mathbf G^{*+1}M$ and the cosimplicial $\mathbf
T$-resolutions $\mathbf Y_A(N)=\mathbf T^{*+1}N$ as
$$R^*({_B\operatorname{Reg}^A})(M,N)=H^*(\operatorname{Tot} {_B\operatorname{Reg}^A}(\mathbf X_B(M),\mathbf Y_A(N))).$$
\begin{Definition}\label{d12} The cohomology of a Singer pair $(B,A,\mu ,\rho)$ is given by
$$H^*(B,A)=H^{*+1}(\operatorname{Tot}\mathbf Z_0)$$
where $\mathbf Z_0$ is the double cochain complex obtained from
the double cochain complex $\mathbf Z={_B\operatorname{Reg}^A}(\mathbf
X(k),\mathbf Y(k))$ by deleting the $0^{th}$ row and the $0^{th}$
column.
\end{Definition}
\subsection{The normalized standard complex}
Use the natural isomorphism $${_B\mathcal V}^A(FU(M),LU(N))\cong
\mathcal V(UM,UN)$$ to get the standard double complex
$$Z^{m,n}=({_B\operatorname{Reg}}^A(G^{m+1}(k), T^{n+1}(k)),\partial',\partial)\cong
( \operatorname{Reg}(B^{m},A^{n}),\partial', \partial).$$
For computational
purposes it is useful to replace this complex by the normalized
standard complex $Z_+$, where $Z_+^{m,n}=\operatorname{Reg}_+(B^m,A^n)$ is the
intersection of the degeneracies, consisting of all convolution
invertible maps $f\colon B^m\to A^n$ satisfying
$f(1\otimes\ldots\otimes\eta\varepsilon\otimes\ldots\otimes1)=\eta\varepsilon$ and
$(1\otimes\ldots\otimes\eta\varepsilon\otimes\ldots\otimes1)f=\eta\varepsilon$. In more detail,
the normalized standard double complex is of the form
\begin{small}
$$\begin{CD}
\operatorname{Reg}_+(k,k) @>\partial^{0,0}>> \operatorname{Reg}_+(B,k) @>\partial^{1,0}>> \operatorname{Reg}_+(B^2,k) @>\partial^{2,0}>>\operatorname{Reg}_+(B^3,k)\dots \\
@V\partial_{0,0}VV @V\partial_{1,0}VV @V\partial_{2,0}VV @V\partial_{3,0}VV \\
\operatorname{Reg}_+(k,A) @>\partial^{0,1}>>\operatorname{Reg}_+(B,A) @>\partial^{1,1}>>\operatorname{Reg}_+(B^2,A) @>\partial^{2,1}>>\operatorname{Reg}_+(B^3,A)\dots \\
@V\partial_{0,1}VV @V\partial_{1,1}VV @V\partial_{2,1}VV @V\partial_{3,1}VV \\
\operatorname{Reg}_+(k,A^2) @>\partial^{0,2}>>\operatorname{Reg}_+(B,A^2) @>\partial^{1,2}>>\operatorname{Reg}_+(B^2,A^2) @>\partial^{2,2}>>\operatorname{Reg}_+(B^3,A^2)\dots \\
@V\partial_{0,2}VV @V\partial_{1,2}VV @V\partial_{2,2}VV @V\partial_{3,2}VV \\
\operatorname{Reg}_+(k,A^3) @>\partial^{0,3}>>\operatorname{Reg}_+(B,A^3) @>\partial^{1,3}>>\operatorname{Reg}_+(B^2,A^3) @>\partial^{2,3}>>\operatorname{Reg}_+(B^3,A^3)\dots \\
@V\partial_{0,3}VV @V\partial_{1,3}VV @V\partial_{2,3}VV @V\partial_{3,3}VV \\
\vdots @. \vdots @. \vdots @. \vdots
\end{CD}$$
\end{small}
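In the lowest bidegrees the normalization conditions are easy to
unwind explicitly (we record this only for orientation): a map
$f\in\operatorname{Reg}_+(B,A)$ is normalized precisely when $f(1_B)=1_A$ and
$\varepsilon_A f=\varepsilon_B$, and $\alpha\in\operatorname{Reg}_+(B^2,A)$ is normalized
precisely when $\alpha(b\otimes 1)=\varepsilon(b)1_A=\alpha(1\otimes b)$ for all
$b\in B$ and $\varepsilon_A\alpha=\varepsilon_{B^2}$.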
The coboundary maps
$$d_{n,m}^i\colon \operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n+1},A^m)$$
defined by
$$d_{n,m}^0\alpha =\mu_m(1_B\otimes \alpha),\
d_{n,m}^i\alpha =\alpha(1_{B^{i-1}}\otimes m_B\otimes 1_{B^{n-i}}),\
d_{n,m}^{n+1}\alpha =\alpha\otimes\varepsilon,$$ for $1\le i\le n,$ are used to construct the
horizontal differentials
$$\partial_{n,m}\colon\operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n+1},A^m),$$
given by the \lq alternating' convolution product
$$\partial_{n,m}\alpha=d_{n,m}^0\alpha*d_{n,m}^1\alpha^{-1}*d_{n,m}^2\alpha*\ldots *d_{n,m}^{n+1}\alpha^{(-1)^{n+1}}.$$
Dually the coboundaries
$${d'}^i_{n,m}\colon\operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^n,A^{m+1})$$
defined by
$${d'}^0_{n,m}\beta =(\beta\otimes 1_A)\rho_n,\ {d'}^i_{n,m}\beta =
( 1_{A^{i-1}}\otimes\Delta_A\otimes 1_{A^{m-i}})\beta,\
{d'}^{m+1}_{n,m}\beta =\eta\otimes\beta,$$
for $1\le i\le m$, determine the vertical
differentials
$$\partial^{n,m}\colon \operatorname{Reg}_+(B^n,A^m)\to \operatorname{Reg}_+(B^{n},A^{m+1}),$$
where
$$\partial^{n,m}\beta={d'}^0_{n,m}\beta *{d'}^1_{n,m}\beta^{-1}*{d'}^2_{n,m}\beta*\ldots
*{d'}^{m+1}_{n,m}\beta^{(-1)^{m+1}}.$$
The cohomology of the abelian Singer pair $(B,A,\mu,\rho)$ is by
definition the cohomology of the total complex.
$$\begin{array}{l}
0\to\operatorname{Reg}_+(B,A)\to \operatorname{Reg}_+(B^2,A)\oplus\operatorname{Reg}_+(B,A^2)\to\\
\ldots\to\bigoplus_{i=1}^n\operatorname{Reg}_+(B^{n+1-i},A^i)\to \ldots
\end{array}$$
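In the degree relevant for extensions this reads as follows (a
sketch; the placement of convolution inverses depends on the sign
convention used when forming the total complex): a $2$-cochain is a
pair $(\alpha ,\beta )\in\operatorname{Reg}_+(B^2,A)\oplus\operatorname{Reg}_+(B,A^2)$, and it is a
$2$-cocycle precisely when
$$\partial_{2,1}\alpha =\eta\varepsilon ,\qquad \partial^{1,2}\beta =\eta\varepsilon ,\qquad
\partial^{2,1}\alpha =\partial_{1,2}\beta$$
in $\operatorname{Reg}_+(B^3,A)$, $\operatorname{Reg}_+(B,A^3)$ and $\operatorname{Reg}_+(B^2,A^2)$
respectively, one of the maps in the last equation occurring with a
convolution inverse. These are the pairs that classify the
extensions appearing in $\operatorname{Opext}(B,A)$ below.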
There are canonical isomorphisms
$H^1(B,A)\simeq \operatorname{Aut}(A\# B)$ and $H^2(B,A)\simeq
\operatorname{Opext}(B,A)$ [Ho] (here $\operatorname{Opext}(B,A)=\operatorname{Opext}(B,A,\mu,\rho)$
denotes the abelian group of equivalence classes of those
Hopf algebra extensions that give rise to the Singer pair $(B,A,\mu,\rho)$).
\subsection{Special cases}
In particular, for $A=k=M$ and $N$ a commutative $B$-module
algebra we get Sweedler cohomology of $B$ with coefficients in $N$
[Sw1]
$$H^*(B,N)=H^*(\operatorname{Tot} {_B\operatorname{Reg}}(\mathbf X(k),N))=H^*(\operatorname{Tot} {_B\operatorname{Reg}}(\mathbf G^{*+1}(k),N)).$$
In [Sw1] it is also shown that if $G$ is a group and $\mathbf{g}$ is a Lie algebra, then there
are canonical isomorphisms $H^n(kG,A)\simeq H^n(G,\mathcal{U}(A))$ for $n\ge 1$ and
$H^m(U\mathbf{g},A)\simeq H^m(\mathbf{g},A^+)$ for $m\ge 2$, where $\mathcal{U}(A)$ denotes the
multiplicative group of units and $A^+$ denotes the underlying vector space.
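For instance, taking $A=k$ with trivial action, the first of these
isomorphisms reads $H^n(kG,k)\simeq H^n(G,k^{\times})$ for $n\ge 1$:
Sweedler cohomology of a group algebra with trivial coefficients is
ordinary group cohomology with values in the multiplicative group of
the base field.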
For $B=k=N$ and $M$ a cocommutative $A$-comodule coalgebra we get
the dual version [Sw1,Do]
$$H^*(M,A)=H^*(\operatorname{Tot} {\operatorname{Reg}^A}(M,\mathbf Y(k)))=H^*(\operatorname{Tot} {\operatorname{Reg}^A}(M,\mathbf T^{*+1}(k))).$$
\section{Cohomology of an abelian matched pair}
\subsection{Abelian matched pairs}
Here we consider pairs of cocommutative Hopf algebras
$(T,N)$ together with a
left action $\mu\colon T\otimes N\to N$, $\mu (t\otimes n)=t(n)$, and a right
action $\nu\colon T\otimes N\to T$, $\nu (t\otimes n)=t^n$. Then we have the
twisted switch
$$\tilde\sigma =(\mu\otimes\nu )\Delta_{T\otimes N}\colon T\otimes N\to N\otimes T$$
or, in shorthand $\tilde\sigma (t\otimes n)=t_1(n_1)\otimes t_2^{n_2}$,
which in case of trivial actions reduces to the ordinary switch
$\sigma\colon T\otimes N\to N\otimes T$.
\begin{Definition} Such a configuration $(T,N,\mu ,\nu )$ is called
an abelian matched pair if \begin{enumerate} \item $N$ is a left $T$-module
coalgebra, i.e.\ $\mu\colon T\otimes N\to N$ is a coalgebra map, \item $T$
is a right $N$-module coalgebra, i.e.\ $\nu\colon T\otimes N\to T$ is a
coalgebra map, \item $N$ is a left $T$-module algebra with respect
to the twisted left action $\tilde\mu =(1\otimes\mu )(\tilde\sigma\otimes
1)\colon T\otimes N\otimes N\to N\otimes N$, in the sense that the diagrams
$$\begin{CD}
T\otimes N\otimes N @>1\otimes m_N>> T\otimes N @. \quad \quad @. T\otimes k @>1\otimes\iota_N>> T\otimes N \\
@V\tilde\mu VV @V\mu VV @. @V\varepsilon_T\otimes 1VV @V\mu VV \\
N\otimes N @>m_N>> N @. \quad \quad @. k @>\iota_N>> N
\end{CD}$$
commute, i.e.\ $\mu (t\otimes nm)=\sum \mu (t_1\otimes n_1)\mu (\nu (t_2\otimes
n_2)\otimes m)$ and $\mu (t\otimes 1)=\varepsilon (t)1_N$, or in shorthand
$t(nm)=t_1(n_1)t_2^{n_2}(m)$ and $t(1_N)=\varepsilon (t)1_N$, \item
$T$ is a right $N$-module algebra with respect to the twisted
right action $\tilde\nu =(\nu\otimes 1 )(1\otimes\tilde\sigma )\colon T\otimes T\otimes
N\to T\otimes T$, in the sense that the diagrams
$$\begin{CD}
T\otimes T\otimes N @>m_T\otimes 1>> T\otimes N @. \quad \quad @. k\otimes N @>\iota_T\otimes 1>> T\otimes N \\
@V\tilde\nu VV @V\nu VV @. @V1\otimes\varepsilon_NVV @V\nu VV \\
T\otimes T @>m_T>> T @. \quad \quad @. k @>\iota_T>> T
\end{CD}$$
commute, i.e.\ $\nu (ts\otimes n)=\sum \nu (t\otimes\mu (s_1\otimes n_1))\nu
(s_2\otimes n_2)$ and $\nu (1_T\otimes n)=\varepsilon (n)1_T$, or in
shorthand $(ts)^n=t^{s_1(n_1)}s_2^{n_2}$ and $1_T^n=\varepsilon
(n)1_T$.
\end{enumerate}
\end{Definition}
The bismash product Hopf algebra $(N\bowtie T,m,\Delta, \iota
,\varepsilon ,S )$ is the tensor product coalgebra $N\otimes T$ with unit
$\iota_{N\otimes T}\colon k\to N\otimes T$, twisted multiplication
$$m=(m\otimes m)(1\otimes\tilde\sigma\otimes 1)\colon N\otimes T\otimes N\otimes T\to N\otimes T,$$
in short $\tilde\sigma (t\otimes n)=t_1(n_1)\otimes t_2^{n_2}$, $(n\otimes
t)(m\otimes s)=nt_1(m_1)\otimes t_2^{m_2}s$, and antipode
$$S=\tilde\sigma (S\otimes S)\sigma\colon N\otimes T\to N\otimes T,$$
i.e.\ $S(n\otimes t)=S(t_2)(S(n_2))\otimes S(t_1)^{S(n_1)}$. For a proof
that this is a Hopf algebra see [Kas]. To avoid ambiguity we will
often write $n\bowtie t$ for $n\otimes t$ in $N\bowtie T$. We also
identify $N$ and $T$ with the Hopf subalgebras $N\bowtie k$ and
$k\bowtie T$, respectively, i.e.\ $n\equiv n\bowtie 1$ and $t\equiv
1\bowtie t$. In this sense we write $n\bowtie t=nt$ and
$tn=t_1(n_1)t_2^{n_2}$.
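A standard example to keep in mind (recorded only for intuition): if
$T=kG$ and $N=kF$ are group algebras and the actions come from a
matched pair of groups, i.e.\ $G$ acts on $F$, $F$ acts on $G$ and the
axioms above hold on group-like elements, then $kF\bowtie kG\cong
k(F\bowtie G)$ is again a group algebra, where $F\bowtie G$ has
underlying set $F\times G$ and multiplication
$$(x,g)(y,h)=(x\, g(y),\, g^{y}h),\qquad x,y\in F,\ g,h\in G,$$
which is exactly the twisted multiplication
$(n\bowtie t)(m\bowtie s)=nt_1(m_1)\bowtie t_2^{m_2}s$ evaluated on
group-like elements.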
If the action $\nu\colon T\otimes N\to T$ is trivial, then the bismash
product $N\bowtie T$ becomes the smash product (or semi-direct
product) $N\rtimes T$. An action $\mu\colon T\otimes N\to N$ is compatible
with the trivial action $1\otimes\varepsilon\colon T\otimes N\to T$, i.e.\
$(T,N,\mu , 1\otimes\varepsilon )$ is a matched pair, if and only if $N$
is a $T$-module bialgebra and $\mu (t_1\otimes n)\otimes t_2=\mu (t_2\otimes
n)\otimes t_1$. Note that the last condition is trivially satisfied if
$T$ is cocommutative.
To make calculations more transparent we start to use the abbreviated
Sweedler sigma notation for the cocommutative setting whenever convenient.
\begin{Lemma}[{[Ma3]}, Proposition 2.3]\label{l22} Let $(T,N,\mu,\nu)$ be an
abelian matched pair. \begin{enumerate} \item A left $T$-module,
left $N$-module $(V,\alpha ,\beta )$ is a left $N\bowtie T$-module
if and only if $t(nv)=t(n)(t^{n}(v))$, i.e.\ if and only if
with the twisted action $\tilde\alpha =(1\otimes\alpha
)(\tilde\sigma\otimes 1)\colon T\otimes N\otimes V\to N\otimes V$ the square
$$\begin{CD}
T\otimes N\otimes V @>1\otimes\beta >> T\otimes V \\
@V\tilde\alpha VV @V\alpha VV \\
N\otimes V @> \beta >> V
\end{CD}$$
commutes. \item A right $T$-module, right $N$-module $(W,\alpha
,\beta )$ is a right $N\bowtie T$-module if and only if
$(v^t)^n=(v^{t(n)})^{t^{n}}$, i.e.\ if and only if with the twisted
action $\tilde\beta =(\beta\otimes 1)(1\otimes\tilde\sigma )\colon W\otimes T\otimes
N\to W\otimes T$ the square
$$\begin{CD}
W\otimes T\otimes N @>\alpha\otimes 1 >> W\otimes N \\
@V\tilde\beta VV @V\beta VV \\
W\otimes T @> \alpha >> W
\end{CD}$$
commutes. \item Let $(V, \alpha )$ be a left $T$-module and $(W,
\beta )$ a right $N$-module. Then \begin{enumerate} \item [(i)]
$N\otimes V$ is a left $N\bowtie T$-module with $N$-action on the
first factor and $T$-action given by
$$\tilde\alpha =(1\otimes\alpha )\tilde\sigma\colon T\otimes N\otimes V\to N\otimes V,$$
that is $t(n\otimes v)= t_1(n_1)\otimes t_2^{n_2}(v)$.
\item[(ii)] $W\otimes T$ is a right $N\bowtie T$-module with
$T$-action on the right factor and $N$-action given by
$$\tilde\beta =(\beta\otimes 1)(1\otimes\tilde\sigma )\colon W\otimes T\otimes N\to W\otimes T,$$
that is $(w\otimes t)^n= w^{t_2(n_2)}\otimes t_1^{n_1}$. Moreover, $W\otimes
T$ is a left $N\bowtie T$-module by twisting the action via the
antipode of $N\bowtie T$.
\item[(iii)] The map $\psi\colon (N\bowtie T)\otimes V\otimes W\to (W\otimes T)\otimes
(N\otimes V)$ defined by $\psi ((n\bowtie t)\otimes v\otimes
w)=w^{S(t)(S(n))}\otimes S(t)^{S(n)}\otimes n\otimes t(v)$, is an $N\bowtie
T$-homomorphism, when $N\bowtie T$ is acting on the first factor
of $(N\bowtie T)\otimes V\otimes W$ and diagonally on $(W\otimes T)\otimes (N\otimes
V)$\\ $(nt)(w\otimes s\otimes m\otimes v)=w^{(sS(t))(S(n))}\otimes
(sS(t))^{S(n)}\otimes nt(m)\otimes t^m(v).$\\ In particular, $(W\otimes T)\otimes
(N\otimes V)$ is a free left $N\bowtie T$-module in which any basis of
the vector space $(W\otimes k)\otimes (k\otimes V)$ is an $N\bowtie T$-free
basis.
\end{enumerate}
\end{enumerate}
\end{Lemma}
Observe that the inverse of $\psi\colon (N\bowtie T)\otimes V\otimes W\to (W\otimes
T)\otimes (N\otimes V)$ is given by
$$\psi^{-1}((w\otimes t)\otimes (n\otimes v))=(n\bowtie S(t^n))\otimes (w^{t(n)}\otimes t^n(v)).$$
The twisted actions can now be extended by induction to higher
tensor powers
$$\mu_{p+1}=(1\otimes\mu_p)(\tilde\sigma\otimes 1^p)\colon T\otimes N^{p+1}\to N^{p+1}$$
so that $\mu_{p+1}(t\otimes n\otimes\mathbf m)=\mu (t\otimes n)\otimes
\mu_p(\nu (t\otimes n)\otimes\mathbf m)$, $t(n\otimes\mathbf
m)=t(n)\otimes t^{n}(\mathbf m)$ and
$$\nu_{q+1}=(\nu_q\otimes 1)(1^q\otimes\tilde\sigma )\colon T^{q+1}\otimes N\to T^{q+1}$$
so that $\nu_{q+1}(\mathbf t\otimes s\otimes n)=\nu_q(\mathbf t\otimes\mu
(s\otimes n))\otimes \nu (s\otimes n)$, $(\mathbf t\otimes s)^n={\mathbf
t}^{s(n)}\otimes s^{n}$. Observe that the squares
$$\begin{CD}
T\otimes N^{p+1} @>\mu_{p+1}>> N^{p+1} @. \quad \quad @. T^{q+1}\otimes N @>\nu_{q+1}>> T^{q+1} \\
@V1\otimes fVV @VfVV @. @Vg\otimes 1VV @VgVV \\
T\otimes N^p @>\mu_p>> N^p @. \quad \quad @. T^q\otimes N @>\nu_q>> T^q
\end{CD}$$
commute when $f=1^{i-1}\otimes m_N\otimes 1^{p-i}$ for $1\leq i\leq p$ and
$g=1^{j-1}\otimes m_T\otimes 1^{q-j}$ for $1\leq j\leq q$, respectively.
By part 3 (iii) of the lemma above $T^{i+1}\otimes N^{j+1}$ can be
equipped with the $N\bowtie T$-module structure defined by
$(nt)(\mathbf
r\otimes s\otimes m\otimes\mathbf k)=\mathbf
r^{(sS(t))(S(n))}\otimes (sS(t))^{S(n)}\otimes nt(m)\otimes t^m(\mathbf k)$.
\begin{Corollary}\label{c31} The map $\psi\colon (N\bowtie T)\otimes T^i\otimes N^j\to
T^{i+1}\otimes N^{j+1}$, defined by $\psi ((nt)\otimes (\mathbf
r\otimes\mathbf k))= \mathbf r^{S(t)(S(n))}\otimes S(t)^{S(n)}\otimes n\otimes
t(\mathbf k)$, is an isomorphism of $N\bowtie T$-modules.
\end{Corollary}
The content of Lemma \ref{l22} can be summarized in the square of \lq
free\rq\ functors between monoidal categories
$$\begin{CD}
\mathcal V @> F_T>> {_T\mathcal V} \\
@VF_NVV @V\tilde F_NVV \\
_N\mathcal V @>\tilde F_T>> {_{N\bowtie T}\mathcal V}
\end{CD}$$
each with a corresponding tensor preserving right adjoint
forgetful functor.
\subsection{The distributive law of a matched pair}
The two comonads on $ {_{N\bowtie T}\mathcal V}$ given by
$$\tilde{\mathbf G_T}=(\tilde G_T,\delta_T, \varepsilon_T) \quad ,
\quad \tilde{\mathbf G_N}=(\tilde{G_N},\delta_N, \varepsilon_N)$$
with $\tilde{G_T}=\tilde{F_T}\tilde{U_T}$, $\delta_T(t\otimes x)=t\otimes
1\otimes x$, $\varepsilon_T(t\otimes x)=tx$, and with
$\tilde{G_N}=\tilde{F_N}\tilde{U_N}$, $\delta_N(n\otimes x)=n\otimes 1\otimes
x$, $\varepsilon_N(n\otimes x)=nx$, satisfy a distributive law [Ba]
$$\tilde\sigma\colon \tilde{G_T}\tilde{\mathbf G_N}\to \tilde{\mathbf G_N}\tilde{G_T}$$
given by $\tilde\sigma (t\otimes n\otimes -)=\tilde\sigma (t\otimes n)\otimes -
=t_1(n_1)\otimes t_2^{n_2}\otimes - $. The equations for a distributive
law
$$\tilde{G_N}\delta_T\cdot\tilde\sigma =\tilde\sigma\tilde{G_T}\cdot
\tilde{G_T}\tilde\sigma\cdot\delta_T\tilde{G_N} \quad ,
\quad \delta_N\tilde{G_T}\cdot\tilde\sigma
=\tilde{G_N}\tilde\sigma\cdot\tilde\sigma\tilde{G_N}\cdot\tilde{G_T}\delta_N$$
and
$$\varepsilon_N\tilde{G_T}\cdot\tilde\sigma =\tilde{G_T}\varepsilon_N \quad ,
\quad \tilde{G_N}\varepsilon_T\cdot\tilde\sigma
=\varepsilon_T\tilde{G_N}$$ are easily verified.
\begin{Proposition}[{[Ba]}, Th. 2.2] The composite
$$\mathbf G=\mathbf G_N\circ_{\tilde\sigma}\mathbf G_T$$
with $\mathbf G=(G_NG_T$, $\delta =G_N\tilde\sigma
G_T\cdot\delta_N\delta_T$ and $\varepsilon =\varepsilon_N\varepsilon_T)$ is
again a comonad on $ {_{N\bowtie T}\mathcal V}$. Moreover, $\mathbf
G=\mathbf G_{N\bowtie T}$.
\end{Proposition}
The antipode can be used to define a left action
$$\nu_S=S\nu (S\otimes S)\sigma\colon N\otimes T\to T$$
by $n(t)=\nu_S(n\otimes t)=S\nu (S\otimes S)\sigma (n\otimes t)=S(S(t)^{S(n)})$
and a right action
$$\mu_S=S\mu (S\otimes S)\sigma\colon N\otimes T\to N$$
by $n^t=\mu_S(n\otimes t)=S\mu (S\otimes S)\sigma (n\otimes t)=S(S(t)(S(n)))$.
The inverse of the twisted switch is then
$$\tilde\sigma^{-1}=(\nu_S\otimes\mu_S)\Delta_{N\otimes T}\colon N\otimes T\to T\otimes N$$
given by $\tilde\sigma^{-1}(n\otimes t)=n_1(t_1)\otimes n_2^{t_2}$, and
induces the inverse distributive law
$$\tilde\sigma^{-1}\colon G_NG_T\to G_TG_N.$$
\vskip .5cm
\subsection{Matched pair cohomology}\label{s23}
For every Hopf algebra $H$ the category of $H$-modules
${_H\mathcal V}$ is symmetric monoidal. The tensor product of two
$H$-modules $V$ and $W$ has underlying vector space the ordinary
vector space tensor product $V\otimes W$ and diagonal $H$-action.
Algebras and coalgebras in $_H\mathcal V$ are known as $H$-module
algebras and $H$-module coalgebras, respectively. The adjoint
functors and comonads of the last section therefore restrict to
the situations where $\mathcal V$ is replaced by $\mathcal C$ or
$\mathcal A$. In particular, if $(T,N,\mu ,\nu )$ is an abelian
matched pair, $H=N\bowtie T$ and $C$ is an $H$-module coalgebra
then $\mathbf X_H(C)$ is a canonical simplicial free $H$-module
coalgebra resolution of $C$, and by Corollary \ref{c31} the composite
$\mathbf X_N(\mathbf X_T(C))$ is a simplicial double complex of
free $H$-module coalgebras.
\begin{Definition}\label{d25} The cohomology of an abelian matched pair
$(T,N,\mu ,\nu )$ with coefficients in a commutative $N\bowtie
T$-module algebra $A$ is defined by
$$\mathcal{H}^*(T,N,A)=H^{*+1}(\operatorname{Tot} (\mathbf B_0)),$$
where $\mathbf B_0$ is the double cochain complex obtained from the double
cochain complex $\mathbf B=C({_{N\bowtie T}\operatorname{Reg}}(\mathbf
X_N(\mathbf X_T(k)),A))$ by deleting the $0^{th}$ row and the
$0^{th}$ column.
\end{Definition}
\subsection{The normalized standard complex}
Let $H=N\bowtie T$ be a bismash product of an abelian matched pair of
Hopf algebras and let the algebra $A$ be a left $N$ and a right
$T$-module such that it is a left $H$-module via
$nt(a)=n({a^{S(t)}})$, i.e. $(n(a))^{S(t)}=(t(n))(a^{S(t^n)}).$
Note that $\operatorname{Hom}(T^{p},A)$ becomes a left $N$-module via
$n(f)({\mathbf t})=n(f(\nu_p({\mathbf t},n)))$ and $\operatorname{Hom}(N^{q},A)$
becomes a right $T$-module via $f^t({\mathbf
n})=(f(\mu_q(t,{\mathbf n})))^t= S(t)(f(\mu_q(t,{\mathbf n})))$.
The simplicial double complex $G_T^pG_N^q(k)=(T^{p}\otimes
N^{q})_{p,q}$, $p,q\ge 1$ of free $H$-modules has horizontal face
operators $1\otimes d_N^*\colon T^{p}\otimes N^{q+1}\to T^{p}\otimes N^{q}$,
vertical face operators $d_T^*\otimes 1\colon T^{p+1}\otimes N^{q}\to
T^{p}\otimes N^{q}$, horizontal degeneracies $1\otimes s_N^*\colon T^{p}\otimes
N^{q}\to T^{p}\otimes N^{q+1}$ and vertical degeneracies
$s_T^*\otimes1\colon T^{p}\otimes N^{q}\to T^{p+1}\otimes N^{q}$, where
$$d_N^i= 1^{i}\otimes m\otimes 1^{q-i-1},\quad d_N^q= 1^q\otimes\varepsilon,\quad
s_N^i= 1^{i}\otimes\eta\otimes 1^{q-i}$$ for $0\le i\le q-1$, and
$$d_T^j= 1^{p-j-1}\otimes m\otimes 1^{j},\quad d_T^p=\varepsilon\otimes 1^p,\quad
s_T^j= 1^{p-j}\otimes\eta\otimes 1^{j}$$ for $0\le j\le p-1$.
These maps preserve the $H$-module structure on $T^{p}\otimes N^{q}$.
Apply the functor $\operatorname{_HReg}(\_,A)\colon {{_H\mathcal C}}^{op}\to \operatorname{Ab}$ to
get a cosimplicial double complex of abelian groups $\mathbf
B={\operatorname{_HReg}}(\mathbf X_N(\mathbf X_T(k)),A)$ with
$B^{p,q}=\operatorname{_HReg}(T^{p+1}\otimes N^{q+1},A)$, coface operators
$\operatorname{_HReg}(d_{N*},A)$, $\operatorname{_HReg}(d_{T*},A)$ and codegeneracies
$\operatorname{_HReg}(s_{N*},A)$, $\operatorname{_HReg}(s_{T*},A)$.
The isomorphism described in Corollary \ref{c31}
induces an isomorphism
of double complexes $\mathbf{B}(T,N,A)\cong\mathbf{C}(T,N,A)$ given by
$${\operatorname{_HReg}}(T^{p+1}\otimes N^{q+1},A)\stackrel{\operatorname{_HReg}(\psi,A)}{\longrightarrow}
{\operatorname{_HReg}}(H\otimes T^{p}\otimes N^{q},A) \stackrel{\theta}{\longrightarrow}
\operatorname{Reg}(T^{p}\otimes
N^{q},A)$$
for $p,q\ge 0$, where $C^{p,q}=\operatorname{Reg} (T^p\otimes
N^q,A)$ is the abelian group of convolution invertible linear maps
$f\colon T^{p}\otimes N^{q}\to A.$
The horizontal differentials ${\delta_N}\colon C^{p,q}\to C^{p,q+1}$ and the
vertical differentials ${\delta_T}\colon C^{p,q}\to C^{p+1,q}$ are
transported from ${\mathbf B}$ and turn out to be the twisted
Sweedler differentials on the $N$ and $T$ parts, respectively. The
coface operators are
$$
{\delta_N}_i f({\mathbf t}\otimes{\mathbf n})=\begin{cases} f({\mathbf t}\otimes
n_1\otimes\ldots \otimes
n_in_{i+1}\otimes\ldots\otimes n_{q+1}), \mbox{ for } i=1,\ldots ,q \\
n_1(f(\nu_q({\mathbf t}\otimes n_1)\otimes n_2\otimes\ldots\otimes n_{q+1})),
\mbox{ for } i=0 \\
f({\mathbf t}\otimes n_1\otimes\ldots\otimes n_q)\varepsilon(n_{q+1}), \mbox{ for }
i=q+1
\end{cases}$$ where ${\mathbf t}\in T^{p}$ and ${\mathbf
n}=n_1\otimes\ldots\otimes n_{q+1}\in N^{q+1}$, and similarly
$${\delta_T}_j f({\mathbf t}\otimes{\mathbf n})=\begin{cases} f(t_{p+1}\otimes\ldots\otimes t_{j+1}t_{j}\otimes
\ldots \otimes t_{1}\otimes {\mathbf n}), \mbox{ for } j=1,\ldots ,p \\
(f(t_{p+1}\otimes\ldots \otimes t_2\otimes \mu_p(t_{1}\otimes{\mathbf
n})))^{t_1},
\mbox{ for } j=0 \\
\varepsilon(t_{p+1})f(t_p\otimes\ldots\otimes t_{1}\otimes{\mathbf n}), \mbox{ for }
j=p+1
\end{cases}$$ where ${\mathbf t}=t_1\otimes\ldots\otimes t_{p+1}\in T^{p+1}$
and ${\mathbf n}\in N^{q}$. The differentials in the associated
double cochain complex are the alternating convolution products
$${\delta_N} f={\delta_N}_0f*{\delta_N}_1 f^{-1}*\ldots *{\delta_N}_{q+1}f^{\pm 1}$$
and
$${\delta_T} f={\delta_T}_0f*{\delta_T}_1 f^{-1}*\ldots *{\delta_T}_{p+1}f^{\pm 1}.$$
In the associated normalized double complex $\mathbf C_+$, the
$(p,q)$ term $C^{p,q}_+=\operatorname{Reg}_+(T^{p}\otimes N^{q},A)$ is the
intersection of the degeneracy operators, that is, the abelian
group of convolution invertible maps $f\colon T^{p}\otimes N^{q}\to A$ with
$f(t_p\otimes\ldots\otimes t_1\otimes n_1\otimes\ldots\otimes n_q)=\varepsilon(t_p)\ldots
\varepsilon(n_q)$, whenever one of the $t_i$ or one of the $n_j$ is in $k$. Then
$\mathcal H^*(N,T,A)\cong H^{*+1}(\operatorname{Tot}\mathbf C_0)$, where
$\mathbf C_0$ is the double complex obtained from $\mathbf C_+$ by
replacing the edges by zero.
The groups of cocycles ${\mathcal Z}^i(T,N,A)$ and the groups of
coboundaries ${\mathcal B}^i(T,N,A)$ consist of $i$-tuples of maps
$(f_j)_{1\le j\le i}$, $f_j\colon T^{j}\otimes N^{i+1-j}\to A$
that satisfy certain conditions, made explicit below for $i=2$.
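For $i=2$, for instance, this amounts, up to the convolution inverses
dictated by the sign convention of the total complex, to the
following: a $2$-cocycle is a pair of convolution invertible maps
$f_1\colon T\otimes N^{2}\to A$ and $f_2\colon T^{2}\otimes N\to A$ satisfying
$${\delta_N} f_1=\varepsilon ,\qquad {\delta_T} f_2=\varepsilon ,\qquad
{\delta_T} f_1=({\delta_N} f_2)^{\pm 1}\ \mbox{ in } \operatorname{Reg}_+(T^{2}\otimes N^{2},A),$$
and a $2$-coboundary is a pair of the form
$\bigl(({\delta_N}\gamma)^{\pm 1},({\delta_T}\gamma)^{\pm 1}\bigr)$ for a single
convolution invertible $\gamma\colon T\otimes N\to A$; compare the cocycle
pairs $(\alpha ,\beta )$ appearing in Theorem \ref{pi} below.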
We introduce the subgroups ${\mathcal Z}^i_p(T,N,A)\le {\mathcal
Z}^i(T,N,A)$, that are spanned by $i$-tuples in which the $f_j$'s
are trivial for $j\not= p$, and subgroups ${\mathcal
B}_p^i={\mathcal Z}_p^i\cap {\mathcal B}^i\subset {\mathcal B}^i$.
These give rise to subgroups of cohomology groups ${\mathcal
H}_p^i={\mathcal Z}_p^i/{\mathcal B}_p^i\simeq ({\mathcal
Z}_p^i+{\mathcal B}^i)/{\mathcal B}^i\subseteq {\mathcal H}^i$
which have a nice interpretation when $i=2$ and $p=1,2$; see
Section \ref{mcp}. \label{s24}
\section{The homomorphism $\pi\colon \mathcal{H}^2(T,N,A)\rightarrow H^{i,j}(T,N,A)$}
If $T$ is a finite group and $N$ is a finite $T$-group, then we
have the following exact sequence [M1]
$$
H^2(N,k^\bullet)\stackrel{{\delta_T}}{\to}\operatorname{Opext}(kT,k^N)\stackrel{\pi}{\to}
H^1(T,H^2(N,k^\bullet)).
$$
Here we define a version of the homomorphism $\pi$ for
arbitrary smash products of cocommutative Hopf algebras.
We start by introducing the Hopf algebra analogue of
$H^i(T,H^j(N,k^\bullet))$. For positive $i,j$ and an abelian matched
pair of Hopf algebras $(T,N)$, with the action of $N$ on $T$
trivial, we define
\begin{eqnarray*}
Z^{i,j}(T,N,A)&=&\{\alpha\in \operatorname{Reg}_+(T^{i}\otimes N^{j},A)|{\delta_N}\alpha=\varepsilon,\; \mbox{and}\\
&&\exists \beta\in
\operatorname{Reg}_+(T^{i+1}\otimes N^{j-1},A):\;{\delta_T}\alpha={\delta_N}\beta\},\\
B^{i,j}(T,N,A)&=&\{\alpha\in \operatorname{Reg}_+(T^{i}\otimes N^{j},A)|\exists
\gamma\in
\operatorname{Reg}_+(T^{i}\otimes N^{j-1},A),\\
&&\exists \gamma'\in \operatorname{Reg}_+(T^{i-1}\otimes N^{j},A):\; \alpha={\delta_N}
\gamma*{\delta_T}\gamma'\},\\
H^{i,j}(T,N,A)&=&Z^{i,j}(T,N,A)/B^{i,j}(T,N,A).
\end{eqnarray*}
\begin{Remark} If $j=1$, then $$H^{i,1}(T,N,A) \simeq
\mathcal{H}^i_i(T,N,A)\simeq H_{meas}^i(T,\operatorname{Hom}(N,A)),$$ where
$H^i_{meas}$ denotes the measuring cohomology [M2].
\end{Remark}
\begin{Proposition}
If $T=kG$ is a group algebra, then there is an isomorphism
$$H^i(G,H^j(N,A))\simeq H^{i,j}(kG,N,A).$$
\end{Proposition}
\begin{Remark} Here the \textbf{right} action of $G$ on $H^j(N,A)$ is
given by precomposition. We can obtain symmetric results in case we start with
a \textbf{right} action of $T=kG$ on $N$, hence
a \textbf{left} action of $G$ on $H^j(N,A)$.
\end{Remark}
\begin{proof}[Proof (of the proposition above)] By inspection we have
\begin{eqnarray*}
Z^i(G,H^j(N,A))&=&Z^{i,j}(kG,N,A)/\{\alpha\colon G\to B^j(N,A)\},\\
B^i(G,H^j(N,A))&=&B^{i,j}(kG,N,A)/\{\alpha\colon G\to B^j(N,A)\}.
\end{eqnarray*}
Here we identify regular maps from $(kG)^{i}\otimes N^{j}$ to $A$ with
set maps from $G^{\times i}$ to $\operatorname{Reg}(N^{j},A)$ in the obvious way.
\end{proof}
The following is a straightforward generalization of Theorem
7.1 in [M2].
\begin{Theorem}\label{pi}
The homomorphism $\pi\colon {\mathcal H}^2(T,N,A)\to
H^{1,2}(T,N,A)$, induced by $(\alpha,\beta)\mapsto \alpha$, makes the
following sequence
$$
H^2(N,A)\oplus {\mathcal
H}^2_2(T,N,A)\stackrel{{\delta_T}+\iota}{\longrightarrow}{\mathcal
H}^2(T,N,A) \stackrel{\pi}{\to} H^{1,2}(T,N,A)
$$
exact.
\end{Theorem}
\begin{proof} It is clear that $\pi{\delta_T}=0$ and obviously also $\pi({\mathcal
H}^2_2)=0$.
Suppose a cocycle pair $(\alpha,\beta)\in {\mathcal Z}^2(T,N,A)$ is
such that $\alpha\in B^{1,2}(T,N,A)$. Then for some $\gamma\colon
T\otimes N\to A$ and some $\gamma'\colon N\otimes N\to A$ we have
$\alpha={\delta_N}\gamma*{\delta_T}\gamma'$, and hence
$(\alpha,\beta)=({\delta_N}\gamma,\beta)*({\delta_T}\gamma',\varepsilon)\sim
({\delta_N}\gamma^{-1},{\delta_T}\gamma)*({\delta_N}\gamma,\beta)*({\delta_T}\gamma',\varepsilon)=
(\varepsilon,{\delta_T}\gamma*\beta)*({\delta_T}\gamma',\varepsilon)\in {\mathcal
Z}_2^2(T,N,A)*{\delta_T}(Z^2(N,A))$. \end{proof}
\section{Comparison of Singer pairs and matched pairs}
\subsection{Singer pairs vs. matched pairs}\label{s41}
In this section we sketch a correspondence from matched pairs to
Singer pairs. For more details we refer to [Ma3].
\begin{Definition}
We say that an action $\mu\colon A\otimes M\to M$ is locally finite,
if every orbit $A(m)=\{a(m)|a\in A\}$ is finite dimensional.
\end{Definition}
\begin{Lemma}[{[Mo1]}, Lemma 1.6.4]\label{Mo164}
Let $A$ be an algebra and $C$ a coalgebra.
\begin{enumerate}
\item If $M$ is a right $C$-comodule via $\rho\colon M\to M\otimes C$,
$\rho(m)= m_0\otimes m_1$, then $M$ is a left $C^*$-module via
$\mu\colon C^*\otimes M\to M$, $\mu(f\otimes m)=f(m_1)m_0$.
\item Let $M$ be a left $A$-module via $\mu\colon A\otimes M\to M.$
Then there is (a unique) comodule structure $\rho\colon M\to M\otimes
A^\circ$, such that $(1\otimes\operatorname{ev})\rho=\mu$ if and only if the
action $\mu$ is locally finite. The coaction is then given by
$\rho(m)=\sum m_i\otimes f_i$, where $\{m_i\}$ is a basis for $A(m)$
and $f_i\in A^{\circ}\subseteq A^*$ are coordinate functions of
$a(m)$, i.e. $a(m)=\sum f_i(a)m_i$.
\end{enumerate}
\end{Lemma}
Let $(T,N,\mu,\nu)$ be an abelian matched pair and suppose
$\mu\colon T\otimes N\to N$
is locally finite. Then the lemma above gives a coaction
$\rho\colon N\to N\otimes T^{\circ}$, $\rho(n)=n_N\otimes n_{T^\circ}$,
such that $t(n)=\sum n_N\cdot n_{T^\circ}(t)$.
There is a left action $\nu'\colon N\otimes T^*\to T^*$ given by
pre-composition, i.e. $\nu'(n\otimes f)(t)=f(t^n)$. If $\mu$ is
locally finite, it is easy to see that $\nu'$ restricts to
$T^\circ\subseteq T^*$.
\begin{Lemma}[{[Ma3]}, Lemma 4.1]
If $(T,N,\mu,\nu)$ is an abelian matched pair with $\mu$ locally finite, then
the quadruple $(N,T^\circ,\nu',\rho)$ forms an abelian Singer pair.
\end{Lemma}
\begin{Remark} There is also a correspondence in the opposite
direction [M3].\end{Remark}
\subsection{Comparison of Singer and matched pair cohomologies}
Let\break
$(T,N,\mu,\nu)$ be an abelian matched pair of Hopf algebras,
with $\mu$ locally finite and $(N,T^\circ,\nu',\rho)$ the Singer
pair associated to it as above.
The embedding $\operatorname{Hom}(N^{i},(T^\circ)^{j})\subseteq
\operatorname{Hom}(N^{i},(T^{j})^*)\simeq\operatorname{Hom}(T^{j}\otimes N^{i},k)$ induced by the inclusion
${T^\circ}^{j}=(T^{j})^\circ \subseteq (T^{j})^*$ restricts to the
embedding $\operatorname{Reg}_+(N^{i},(T^\circ)^{j})\subseteq \operatorname{Reg}_+(T^{j}\otimes
N^{i},k)$. A routine calculation shows that it preserves the
differentials, i.e. that it gives an embedding of double
complexes, which is an isomorphism in case $T$ is finite
dimensional.
There is no apparent reason for the embedding of complexes to
induce an isomorphism of cohomology groups in general. It is our
conjecture that this is not always the case.
In some cases we can compare the multiplication part of
$H^2(N,T^\circ)$ (see the following section) and ${\mathcal
H}^2_2(N,T,k)$. We use the following lemma for this purpose.
\begin{Lemma}\label{compare}
Let $(T,N,\mu,\nu)$ be an abelian matched pair with the action
$\mu$ locally finite. If $f\colon T\otimes N^{i}\to k$ is a
convolution invertible map, such that ${\delta_T} f=\varepsilon$, then for each
${\bf n}\in N^{i}$, the map $f_{\bf n}=f(\_,{\bf n})\colon T\to k$
lies in the finite dual $T^\circ\subseteq T^*$.
\end{Lemma}
\begin{proof} It suffices to show that the orbit of $f_{\bf n}$
under the action of $T$ (given by $s(f_{\bf n})(t)=f_{\bf
n}(ts)$) is finite dimensional (see [DNR], [Mo1] or [Sw2] for the
description of finite duals). Using the fact that ${\delta_T} f=\varepsilon$ we
get $s(f_{\bf n})(t)= f_{\bf n}(ts)= \sum f_{{\bf
n}_1}(s_1)f_{\mu_i(s_2\otimes {\bf n}_2)}(t)$.
Let $\Delta({\bf n})=\sum_j {\bf n'}_j\otimes {\bf n''}_j$. The action
$\mu_i\colon T\otimes N^{i}\to N^{i}$ is locally finite, since
$\mu\colon T\otimes N\to N$ is, and hence we can choose a finite basis
$\{{\bf m}_p\}$ for $\operatorname{Span}\{\mu_i(s\otimes {\bf
n''}_j)|s\in T\}$. Now note that $\{f_{{\bf m}_p}\}$ is a finite
set which spans $T(f_{\bf n})$. \end{proof}
\begin{Corollary}\label{cor1}
If $(T,N,\mu,\nu)$ is an abelian matched pair, with $\mu$ locally
finite and $(N,T^\circ,\omega,\rho)$ is the corresponding Singer
pair, then ${\mathcal H}^1(T,N,k)=H^1(N,T^\circ)$.
\end{Corollary}
\subsection{The multiplication and comultiplication parts of the second
cohomology group of a Singer pair}\label{mcp}
Here we discuss in more detail the Hopf algebra extensions that
have an \lq\lq unperturbed" multiplication and those that have an
\lq\lq unperturbed" comultiplication, more precisely we look at
two subgroups $Hm(B,A)$ and $Hc(B,A)$ of $H^2(B,A)\simeq \operatorname{Opext}(B,A)$, one generated
by the cocycles with a trivial multiplication part and
the other generated by the cocycles with a trivial
comultiplication part [M1]. Let
$$
Zc(B,A)=\{\beta\in \operatorname{Reg}_+(B,A\otimes A)|(\eta\varepsilon,\beta)\in Z^2(B,A)\}.
$$
We shall identify $Zc(B,A)$ with a subgroup of $Z^2(B,A)$ via
the injection $\beta\mapsto(\eta\varepsilon,\beta).$ Similarly let
$$
Zm(B,A)=\{\alpha\in \operatorname{Reg}_+(B\otimes B,A)|(\alpha,\eta\varepsilon)\in
Z^2(B,A)\}.
$$
If
$$
Bc(B,A)=B^2(B,A)\cap Zc(B,A)\;\mbox{and}\;
Bm(B,A)=B^2(B,A)\cap Zm(B,A)
$$
then we define
$$
Hc(B,A)=Zc(B,A)/Bc(B,A)\;
\mbox{and}\;
Hm(B,A)=Zm(B,A)/Bm(B,A).
$$
The identification of $Hc(B,A)$ with a subgroup of $H^2(B,A)$ is
given by
$$
Hc(B,A)\stackrel{\sim}{\to} (Zc(B,A)+B^2(B,A))/B^2(B,A) \le
H^2(B,A),$$ and similarly for $Hm\le H^2$.
Note that in case $T$ is finite dimensional $Hc(N,T^*)\simeq
{\mathcal H}^2_2(T,N,k)$ and \break $Hm(N,T^*)\simeq {\mathcal
H}^2_1(T,N,k)$ with $\mathcal{H}_p^i(T,N,k)$ as defined in Section
\ref{s24}.
\begin{Proposition}\label{cor2}
Let $(T,N,\mu,\nu)$ be an abelian matched pair, with $\mu$ locally
finite and let $(N,T^\circ,\omega,\rho)$ be the corresponding Singer
pair. Then
$$Hm(N,T^\circ)\simeq {\mathcal H}^2_1(T,N,k).$$
\end{Proposition}
\begin{proof} Observe that we have an inclusion
$Zm(N,T^\circ)= \{\alpha\colon N\otimes N\to T^\circ|\partial\alpha=\varepsilon,
\partial'\alpha=\varepsilon\}\subseteq \{\alpha\colon T\otimes N\otimes N\to k|{\delta_T}\alpha=\varepsilon,
{\delta_N}\alpha=\varepsilon\}={\mathcal Z}^2_1(T,N,k)$. The inclusion is in fact an
equality by Lemma \ref{compare}. Similarly the inclusion
$Bm(N,T^\circ)\subseteq {\mathcal B}^2_1(T,N,k)$ is an equality as
well. \end{proof}
\section{The generalized Kac sequence}
\subsection{The Kac sequence of an abelian matched pair}
We now start by sketching a conceptual way to obtain a generalized
version of the Kac sequence for an arbitrary abelian matched pair
of Hopf algebras relating the cohomology of the matched pair to
Sweedler cohomology. Since it is difficult to describe the homomorphisms involved
in this manner, we then proceed in the next
section to give an explicit description of the low degree part of
this sequence.
\begin{Theorem} Let $H=N\bowtie T$, where $(T,N,\mu ,\nu )$ is an
abelian matched pair of Hopf algebras, and let $A$ be a
commutative left $H$-module algebra. Then there is a long exact
sequence of abelian groups
$$\begin{array}{l}
0\to H^1(H,A)\to H^1(T,A)\oplus H^1(N,A)\to \mathcal H^1(T,N,A)\to H^2(H,A) \\
\to H^2(T,A)\oplus H^2(N,A)\to \mathcal H^2(T,N,A)\to H^3(H,A)\to
...
\end{array}$$
Moreover, if $T$ is finite dimensional then $(N,T^*)$ is an
abelian Singer pair, $H^*(T,k)\cong H^*(k,T^*)$ and $\mathcal
H^*(T,N,k)\cong H^*(N,T^*)$.
\end{Theorem}
\begin{proof} The short exact sequence of double cochain
complexes
$$0\to\mathbf B_0\to \mathbf B\to \mathbf B_1\to 0,$$
where $\mathbf B_1$ is the edge double cochain complex of $\mathbf
B= {_H\operatorname{Reg}}(\mathbf{X}_T\mathbf{X}_N(k),A)$ as in Section \ref{s23}, induces a long exact sequence in cohomology
$$\begin{array}{l}
0\to H^1(\operatorname{Tot} (\mathbf B))\to H^1(\operatorname{Tot} (\mathbf B_1))\to H^2(\operatorname{Tot}
(\mathbf B_0))\to H^2(\operatorname{Tot} (\mathbf B)) \\ \to H^2(\operatorname{Tot} (\mathbf
B_1))\to H^3(\operatorname{Tot} (\mathbf B_0))\to H^3(\operatorname{Tot} (\mathbf B))\to
H^3(\operatorname{Tot} (\mathbf B_1))\to ...
\end{array}$$
where $H^0(\operatorname{Tot} (\mathbf B_0))=0=H^1(\operatorname{Tot} (\mathbf B_0))$ and
$H^0(\operatorname{Tot} (\mathbf B))=H^0(\operatorname{Tot} (\mathbf B_1))$ have already been
taken into account. By Definition \ref{d25} $H^{*+1}(\operatorname{Tot} (\mathbf
B_0))=\mathcal H^*(T,N,A)$ is the cohomology of the matched pair
$(T,N,\mu ,\nu )$ with coefficients in $A$. Moreover, $H^*(\operatorname{Tot}
(\mathbf B_1))\cong H^*(T,A)\oplus H^*(N,A)$ is a direct sum of
Sweedler cohomologies.
From the cosimplicial version of the Eilenberg-Zilber theorem (see
Appendix) it follows that $H^*(\operatorname{Tot} (\mathbf B))\cong H^*(\operatorname{Diag}
(\mathbf B))$. On the other hand, Barr's theorem [Ba, Th. 3.4]
together with Corollary \ref{c31} says that $\operatorname{Diag} \mathbf X_T(\mathbf
X_N(k))\simeq \mathbf X_H(k)$, and gives an equivalence
$${_H\operatorname{Reg}}(\operatorname{Diag} \mathbf X_T(\mathbf X_N(k)),A)\simeq
\operatorname{Diag} ({_H\operatorname{Reg}}(\mathbf X_T(\mathbf X_N(k)),A))=\operatorname{Diag} (\mathbf
B).$$ Thus, we get
$$H^*(H,A)=H^*({_H\operatorname{Reg}}(\mathbf X_H(k),A))\cong H^*(\operatorname{Diag} (\mathbf B))\cong
H^*(\operatorname{Tot} (\mathbf B)),$$ and the proof is complete.
\end{proof}
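\begin{Remark} When $T=kG$ and $N=kF$ are group algebras of a matched
pair of finite groups and $A=k$ carries the trivial action, the
theorem specializes (using $H^n(kG,k)\cong H^n(G,k^{\times})$ from
[Sw1] and $\mathcal H^2(T,N,k)\cong H^2(N,T^*)\cong\operatorname{Opext}(kF,k^G)$
from the finite-dimensional case above) to an exact sequence of the
form
$$\begin{array}{l}
0\to H^1(F\bowtie G,k^{\times})\to H^1(G,k^{\times})\oplus H^1(F,k^{\times})\to \mathcal H^1(T,N,k)\to H^2(F\bowtie G,k^{\times}) \\
\to H^2(G,k^{\times})\oplus H^2(F,k^{\times})\to \operatorname{Opext}(kF,k^G)\to H^3(F\bowtie G,k^{\times})\to ...
\end{array}$$
where $F\bowtie G$ denotes the group built on $F\times G$ from the
matched pair of group actions, so that $k(F\bowtie G)\cong kF\bowtie
kG$. This is the shape of the classical Kac sequence which the
theorem generalizes; we record it only as an illustration and do not
use it below.
\end{Remark}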
\subsection{Explicit description of the low degree part}
The aim of this section is to define explicitly homomorphisms that
make the following sequence
\begin{eqnarray*}
0&\to& H^1(H,A)\stackrel{\operatorname{res}_2}{\longrightarrow}H^1(T,A)\oplus
H^1(N,A) \stackrel{{\delta_N}*{\delta_T}}{\longrightarrow}{\mathcal H}^1(T,N,A)
\stackrel{\phi}{\to}H^2(H,A)\\[0pt]
&\stackrel{\operatorname{res}_2}{\longrightarrow}& H^2(T,A)\oplus
H^2(N,A)\stackrel{{\delta_N}*{\delta_T}^{-1}}{\longrightarrow}{\mathcal
H}^2(T,N,A)\stackrel{\psi}{\to}H^3(H,A)
\end{eqnarray*}
exact. This is the low degree part of the generalized Kac
sequence. Here $H=N\bowtie T$ is the bismash product Hopf algebra
arising from a matched pair $\mu\colon T\otimes N\to N$, $\nu\colon
T\otimes N\to T$. Recall that we abbreviate $\mu(t,n)=t(n)$,
$\nu(t,n)=t^n$. We shall also assume that $A$ is a trivial
$H$-module.
We define $\operatorname{res}_2=\operatorname{res}_2^i\colon H^i(H,A)\to H^i(T,A)\oplus
H^i(N,A)$ to be the map $(\operatorname{res}_T,\operatorname{res}_N)\Delta$, more precisely if
$f\colon H^{i}\to A$ is a cocycle, then it gets sent to a pair
of cocycles $(f_{T},f_{N})$, where $f_T=f|_{T^{i}}$ and
$f_N=f|_{N^{i}}$.
By ${\delta_N}*{\delta_T}^{(-1)^{i+1}}$, we denote the composite
$$\begin{array}{l}
H^i(T,A)\oplus H^i(N,A)\stackrel{{\delta_N}\oplus{\delta_T}^{\pm
1}}{\longrightarrow} \mathcal{H}^i_i(T,N,A)\oplus \mathcal{H}^i_1(T,N,A)
\\
\stackrel{\iota\oplus\iota}{\longrightarrow} {\mathcal
H}^i(T,N,A)\oplus {\mathcal H}^i(T,N,A) \stackrel{*}{\to}
{\mathcal H}^i(T,N,A). \end{array}$$ When $i=1$, the map just
defined sends a pair of cocycles $a\in Z^1(T,A)$, $b\in
Z^1(N,A)$ to a map ${\delta_N} a*{\delta_T} b\colon T\otimes N\to A$, and if $i=2$ a
pair of cocycles $a\in Z^2(T,A)$, $b\in Z^2(N,A)$ becomes a
cocycle pair $({\delta_N} a,\varepsilon)*(\varepsilon,{\delta_T} b^{-1})=({\delta_N} a,{\delta_T} b^{-1})
\colon (T\otimes T\otimes N)\oplus (T\otimes N\otimes N)\to A$. Here ${\delta_N}$ and
${\delta_T}$ are the differentials for computing the cohomology of a matched
pair described in Section \ref{s24}.
The map $\phi\colon {\mathcal H}^1(T,N,A)\to H^2(H,A)$ assigns to
a cocycle $\gamma\colon T\otimes N\to A$, a map $\phi(\gamma)\colon
H\otimes H\to A$, which is characterized by
$\phi(\gamma)(nt,n't')=\gamma(t,n').$
The homomorphism $\psi\colon {\mathcal H}^2(T,N,A)\to H^3(H,A)$
is induced by a map that sends a cocycle pair $(\alpha,\beta)\in {\mathcal Z}^2(T,N,A)$ to
the cocycle $f=\psi(\alpha,\beta)\colon H\otimes H\otimes H\to A$ given by
$$
f(nt,n't',n''t'')=\varepsilon(n)\varepsilon(t'')\alpha(t^{n'},t',n'')\beta(t,n',t'(n'')).
$$
A direct, but lengthy computation shows that the maps
just defined induce homomorphisms that make the sequence above exact
[M3]. The most important tool in the computations is the following
lemma about the structure of the second cohomology group
$H^2(H,A)$ [M3].
\begin{Lemma}\label{mainlemma}
Let $f\colon H\otimes H\to A$ be a cocycle. Define maps $g_f\colon H\to A$,
$h\colon H\otimes H\to A$ and $f_c\colon T\otimes N\to A$ by $g_f(nt)=f(n\otimes t)$,
$h=f*\delta g_f$ and $f_c(t\otimes n)=f(t\otimes n)f^{-1}(t(n)\otimes t^n)$. Then
\begin{enumerate}
\item $ h(nt,n't')=f_T(t^{n'},t')f_N(n,t'(n'))f_c(t,n') $
\item $ h_T=f_T,\; h_N=f_N,\; h|_{N\otimes T}=\varepsilon,\; h|_{T\otimes
N}=h_c=f_c,\; g_h=\varepsilon $
\item the maps $f_T$ and $f_N$ are cocycles and ${\delta_N} f_T={\delta_T}
f_c^{-1}$, ${\delta_T} f_N= {\delta_N} f_c^{-1}$
\item If $a\colon T\otimes T\to A$, $b\colon N\otimes N\to A$ are cocycles
and $\gamma\colon T\otimes N\to A$ is a convolution invertible map,
such that ${\delta_N} a={\delta_T}\gamma$ and ${\delta_T} b={\delta_N}\gamma$, then the map
$f=f_{a,b,\gamma}\colon H\otimes H\to A$, defined by
$$
f(nt,n't')=a(t^{n'},t')b(n,t(n'))\gamma^{-1}(t,n')
$$
is a cocycle and $f_T=a$, $f_N=b$, $f_c=f|_{T\otimes N}=\gamma^{-1}$
and $f|_{N\otimes T}=\varepsilon$.
\end{enumerate}
\end{Lemma}
\subsection{The locally finite case}
Suppose that the action $\mu\colon T\otimes N\to N$ is locally finite
and let $(N,T^\circ,\omega,\rho)$ be the Singer pair corresponding to
the matched pair $(T,N,\mu,\nu)$ as in Section \ref{s41}.
By Corollary \ref{cor1} we have ${\mathcal
H}^1(T,N,k)=H^1(N,T^\circ)$.
From the explicit description of the generalized Kac sequence, we
see that $({\delta_N}*{\delta_T}^{-1})|_{H^2(T,A)}={\delta_N}\colon
H^2(T,A)\to {\mathcal H}^2_2(N,T,A)$ and similarly that
$({\delta_N}*{\delta_T}^{-1})|_{H^2(N,A)}={\delta_T}^{-1}\colon H^2(N,A)\to {\mathcal
H}^2_1(N,T,A)$. By Proposition \ref{cor2} we have the equality ${\mathcal
H}_1^2(T,N,k)=Hm(N,T^\circ)$. Recall that
$Hm(N,T^\circ)\subseteq H^2(N,T^\circ)\simeq \operatorname{Opext}(N,T^\circ)$.
If the action $\nu$ is locally finite as well, then there is also
a (right) Singer pair $(T,N^\circ,\omega',\rho')$. By \lq{right\rq}
we mean that we have a right action $\omega'\colon N^\circ\otimes T\to
N^\circ$ and a right coaction $\rho'\colon T\to T\otimes N^\circ$. In
this case we get that ${\mathcal H}^2_2(T,N,k)\simeq {{
Hm}}'(T,N^\circ)\subseteq \operatorname{Opext}'(T,N^\circ)$. The prime refers to
the fact that we have a right Singer pair.
Define ${H^2_{mc}}=Hm\cap Hc$ and ${H^2_{mc}}'= {H^2_{m}}'\cap
{H^2_{c}}'$ and note $H^2_{mc}(N,T^\circ)\simeq{\mathcal
H}^2_2(N,T,k)\cap {\mathcal H}^2_1(N,T,k)\simeq
{H^2_{mc}}'(T,N^\circ)$. Hence
\begin{eqnarray*}
\operatorname{im}({\delta_N}*{\delta_T}^{-1})&\subseteq& {\mathcal
H}^2_1(T,N,k)+{\mathcal H}^2_2(T,N,k) \simeq \frac{{\mathcal
H}^2_1(T,N,k)\oplus {\mathcal H}^2_2(T,N,k)}{{\mathcal
H}^2_1(T,N,k)\cap {\mathcal H}^2_2(T,N,k)}\\
&=&\frac{Hm(N,T^\circ)\oplus {Hm}'(T,N^\circ)}{\langle {
H^2_{mc}}(N,T^\circ)\equiv{H^2_{mc}}'(T,N^\circ)\rangle}.
\end{eqnarray*} In other words, $\operatorname{im}({\delta_N}*{\delta_T}^{-1})$ is
contained in a subgroup of ${\mathcal H}^2(T,N,k)$, that is
isomorphic to the pushout
$$\begin{CD}
{{H^2_{mc}}'(T,N^\circ)\simeq {H^2_{mc}}(N,T^\circ)} @> >> {{H^2_{m}}(N,T^\circ)} \\
@V VV @V VV \\
{{H^2_{m}}'(T,N^\circ)} @> >> X.
\end{CD}$$
Hence if both actions $\mu$ and $\nu$ of the abelian matched pair
$(T,N,\mu,\nu)$ are locally finite then we get the following
version of the low degree part of the Kac sequence:
\begin{eqnarray*}
0&\to& H^1(H,k)\stackrel{\operatorname{res}_2}{\longrightarrow}H^1(T,k)\oplus
H^1(N,k) \stackrel{{\delta_N}*{\delta_T}}{\longrightarrow}{H}^1(N,T^\circ)
\stackrel{\phi}{\to}H^2(H,k)\\
&\stackrel{\operatorname{res}_2}{\longrightarrow}& H^2(T,k)\oplus
H^2(N,k)\stackrel{{\delta_N}*{\delta_T}^{-1}}{\longrightarrow}X\stackrel{\psi|_X}{\longrightarrow}H^3(H,k).
\end{eqnarray*}
\subsection{The Kac sequence of an abelian Singer pair}
Here is a generalization of the Kac sequence relating Sweedler and
Doi cohomology to Singer cohomology.
\begin{Theorem} For any abelian Singer pair $(B,A,\mu,\rho)$
there is a long exact sequence
$$\begin{array}{l}
0\to H^1(\operatorname{Tot} Z)\to H^1(B,k)\oplus H^1(k,A)\to H^1(B,A)\to H^2(\operatorname{Tot} Z)\\
\to H^2(B,k)\oplus H^2(k,A)\to H^2(B,A)\to H^3(\operatorname{Tot} Z)\to\ldots,
\end{array}$$
where $Z$ is the double complex from Definition \ref{d12}.
Moreover, we always have $H^1(B,A)\cong \operatorname{Aut}(A\# B)$,
$H^2(B,A)\cong\operatorname{Opext} (B,A)$ and $H^*(\operatorname{Tot} Z)\cong H^*(\operatorname{Diag} Z)$.
If $A$ is finite dimensional then $H^*(\operatorname{Tot} Z)=H^*(A^*\bowtie
B,k)$.
\end{Theorem}
\begin{proof} The short exact sequence of double cochain complexes
$$0\to Z_0\to Z\to Z_1\to 0,$$
where $Z_1$ is the edge subcomplex of
$Z= {_B\operatorname{Reg}^A}(\operatorname{m}athbf{X}_B(k),\operatorname{m}athbf{Y}_A(k))$, induces a
long exact sequence
$$\begin{array}{l}
0\to H^1(Tot Z)\to H^1(\operatorname{Tot} Z_1)\to H^2(\operatorname{Tot} Z_0)\to H^2(\operatorname{Tot} Z)\\
\to H^2(\operatorname{Tot} Z_1)\to H^3(\operatorname{Tot} Z_0)\to H^3(\operatorname{Tot} Z)\to H^3(\operatorname{Tot}
Z_0)\to\ldots
\end{array}$$
where $H^0(\operatorname{Tot} Z_0)=0=H^1(\operatorname{Tot} Z_0)$ and $H^0(\operatorname{Tot} Z)=H^0(\operatorname{Tot}
Z_1)$ have already been taken into account. By definition
$H^*(\operatorname{Tot} Z_0)=H^*(B,A)$ is the cohomology of the abelian Singer
pair $(B,A,\operatorname{m}u ,\rho )$, and by [Ho] we have $H^1(B,A)\cong\operatorname{Aut}
(A\# B)$ and $H^2(B,A)\cong\operatorname{Opext} (B,A)$. Moreover, we clearly
have $H^*(\operatorname{Tot} Z_1)\cong H^*(B,k)\oplus H^*(k,A)$, where the
summands are Sweedler and Doi cohomologies. By the cosimplicial
Eilenberg-Zilber theorem (see appendix) there is a natural
isomorphism $H^*(\operatorname{Tot} (\mathbf Z))\cong H^*(\operatorname{Diag} (\mathbf Z))$.
Finally, if $A$ is finite dimensional then $\mathbf
Z={_B\operatorname{Reg}^A}(\mathbf X(k),\mathbf Y(k))\cong {_{A^*\bowtie B}\operatorname{Reg}}
(\mathbf{B}(k),k)$, where $\mathbf B(k)=\mathbf X_{A^*}(\mathbf X_B(k))$.
\end{proof}
\section{On the matched pair cohomology of pointed cocommutative Hopf algebras
over fields of zero characteristic}
In this section we describe a method which gives information about the second
cohomology group ${\mathcal H}^2(T,N,A)$ of an abelian matched pair.
\subsection{The method}
Let $(T,N)$ be an abelian matched pair of pointed Hopf algebras,
and $A$ a trivial $N\bowtie T$-module algebra.
\begin{enumerate}
\item Since $\operatorname{char} k=0$ and $T$ and $N$ are pointed
we have $T\simeq UP(T) \rtimes kG(T)$ and $N\simeq UP(N)\rtimes
kG(N)$ and $N\bowtie T\simeq U(P(T)\bowtie P(N)) \rtimes k(G(T)\bowtie G(N))$
[Gr1,2]. If $H$ is a Hopf algebra then $G(H)$ denotes the group of
points and $P(H)$ denotes the Lie algebra of primitives.
\item We can use the generalized Tahara sequence [M2] (see introduction) to compute
$H^2(T)$, $H^2(N)$, $H^2(N\bowtie T)$. In particular if $G(T)$ is
finite then the cohomology group
${H_{meas}^2}(kG(T),\operatorname{Hom}(UP(T),A))= H^{2,1}(kG(T),UP(T),A)=
{\mathcal H}^2_2(kG(T),UP(T),A)$ is trivial and there is a direct sum
decomposition
$H^2(T)=H^2(P(T))^{G(T)}\oplus H^2(G(T))$; we get a similar decomposition
for $H^2(N)$ if $G(N)$ is finite and for $H^2(N\bowtie T)$ in the case
$G(T)$ and $G(N)$ are both finite.
\item Since the Lie algebra cohomology groups $H^i({\bf g})$ admit a
vector space structure, the cohomology groups $H^{1,2}(G,{\bf
g},A)\simeq H^1(G,H^2({\bf g},A))$ are trivial if $G$ is finite
(any additive group of a vector space over a field of zero
characteristic is uniquely divisible).
\item The exactness of the sequence from Theorem \ref{pi}
implies that the maps ${\delta_T}\colon H^2(G(\_))\to {\mathcal
H}^2(kG(\_),UP(\_),A)$ are surjective if $G(\_)$ is finite, hence
by the generalized Kac sequence the kernels of the maps
$\operatorname{res}_2^3\colon H^3(\_)\to H^3(P(\_))\oplus H^3(G(\_))$ are
trivial. This then gives information about the kernel of the
map $\operatorname{res}_2^3\colon H^3(N\bowtie T)\to H^3(T)\oplus H^3(N)$.
\item Now use the exactness of the generalized Kac sequence
\begin{eqnarray*}
H^2(N\bowtie T)&\stackrel{\operatorname{res}_2^2}{\longrightarrow}&H^2(T)\oplus
H^2(N)\stackrel{{\delta_T}+{\delta_N}^{-1}}{\longrightarrow}
{\mathcal H}^2(T,N,A)\\
&\to& H^3(N\bowtie
T)\stackrel{\operatorname{res}_2^3}{\longrightarrow}H^3(T)\oplus H^3(N)
\end{eqnarray*}
to get information about ${\mathcal H}^2(T,N,A)$.
\end{enumerate}
\subsection{Examples}
Here we describe how the above procedure works on concrete
examples.
In the first three examples we restrict ourselves to a case
in which one of the Hopf algebras involved is a group algebra.
Let $T=UP(T)\rtimes kG(T)$ and $N=kG(N)$ and suppose that the
matched pair of $T$ and $N$ arises from actions $G(T)\times
G(N)\to G(N)$ and $(G(N)\rtimes G(T))\times P(T)\to P(T)$. If the
groups $G(T)$ and $G(N)$ are finite and their orders are relatively
prime, then the generalized Kac sequence shows that there is an
injective homomorphism
$$\Phi\colon
\frac{H^2(P(T))^{G(T)}}{H^2(P(T))^{G(N)\rtimes G(T)}}\oplus
\frac{H^2(G(N))}{H^2(G(N))^{G(T)}}\to {\mathcal H}^2(T,N,A).$$
Theorem \ref{pi} guarantees that the map $H^3(N\bowtie
T)=H^3(U(P(T))\rtimes k(G(N)\rtimes G(T)))\to H^3(P(T))\oplus
H^3(G(N)\rtimes G(T))$ is injective. Since the orders of $G(T)$
and $G(N)$ are assumed to be relatively prime the map
$H^3(G(N)\rtimes G(T))\to H^3(G(N))\oplus H^3(G(T))$ is also
injective. Hence the map
$$\operatorname{res}_2^3\colon H^3(N\bowtie T)\to H^3(N)\oplus H^3(T)$$
must be injective as well, since the
composite $H^3(N\bowtie T)\to H^3(N)\oplus H^3(T)\to
H^3(G(N))\oplus H^3(P(T))\oplus H^3(G(T))$ is injective. Hence
by the exactness of the generalized Kac sequence $\Phi$ is an
isomorphism.
\begin{Example}Let ${\bf g}=k\times k$ be the abelian Lie
algebra of dimension 2 and let $G=C_2=\langle a \rangle$ be the
cyclic group of order two. Furthermore assume that $G$ acts on
${\bf g}$ by switching the factors, i.e. $a(x,y)=(y,x)$. Recall
that $U{\bf g}=k[x,y]$ and that $H^i_{Sweedler}(U{\bf
g},A)=H^i_{Hochschild}(U{\bf g},A)$ for $i\ge 2$ and that
$H^i_{Hochschild}(k[x,y],k)=k^{\oplus {2\choose i}}$. A
computation shows that $G$ acts on $k\simeq H^2(k[x,y],k)$ by
$a(t)=-t$ and hence $H^2(k[x,y],k)^G=0$. Thus the homomorphism
$\pi$ (Theorem \ref{pi}) is the zero map and the homomorphism
$k\simeq H^2(k[x,y],k)\stackrel{{\delta_T}}{\to}{\mathcal
H}^2(kC_2,k[x,y],k)$ is an isomorphism.\end{Example}
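One way to see the sign, under the standard identification
$H^2({\bf g},k)\simeq\Lambda^2{\bf g}^*$ for an abelian Lie algebra
${\bf g}$ in characteristic zero: $H^2(k[x,y],k)$ is spanned by the
class of $x^*\wedge y^*$, and the switch $a$ acts by
$$a\cdot(x^*\wedge y^*)=y^*\wedge x^*=-(x^*\wedge y^*),$$
which is exactly the action $a(t)=-t$ used above.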
\begin{Example}[symmetries of a triangle] Here we describe an
example arising from the action of the dihedral group $D_3$ on the
abelian Lie algebra of dimension $3$ (basis consists of vertices
of a triangle). More precisely let ${\bf g}=k\times k\times k$,
$G=C_2=\langle a\rangle$, $H=C_3=\langle b \rangle$, the actions
$G\times {\bf g}\to {\bf g}$, $H\times {\bf g}\to {\bf g}$ and
$H\times G\to H$ are given by $a(x,y,z)=(z,y,x)$,
$b(x,y,z)=(z,x,y)$ and $b^a=b^{-1}$ respectively. A routine
computation reveals the following
\begin{itemize}
\item $C_2$ acts on $k\times k\times k\simeq H^2(k[x,y,z],k)$ by
$a(u,v,w)=(-w,-v,-u)$, hence the $G$ stable part is
$$H^2(k[x,y,z],k)^G=\{(u,0,-u)\}\simeq k.$$
\item $H=C_3$ acts on $k\times k\times k$ by $b(u,v,w)=(w,u,v)$
and the $H$ stable part is $H^2(k[x,y,z],k)^H=
\{(u,u,u)\}\simeq k$.
\item The $D_3=C_2\rtimes C_3$ stable part
$H^2(k[x,y,z],k)^{D_3}$ is trivial.
\end{itemize}
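For instance, the stated stable parts can be checked directly: the
condition $a(u,v,w)=(-w,-v,-u)=(u,v,w)$ forces $v=-v$ and $u=-w$,
hence (in characteristic zero) the line $\{(u,0,-u)\}$; the
condition $b(u,v,w)=(w,u,v)=(u,v,w)$ forces $u=v=w$; and imposing
both conditions simultaneously gives $u=v=w=0$, so the $D_3$-stable
part vanishes.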
Thus we have an isomorphism $k\times k^\bullet/(k^\bullet)^3\simeq {\mathcal
H}^2(k[x,y,z]\rtimes kC_2,kC_3,k)$.\end{Example}
\begin{Remark}The above also shows that there is an isomorphism $$k\times k\times k\simeq
\mathcal{H}^2(k[x,y,z],kD_3,k).$$\end{Remark}
\begin{Example} Let ${\bf g}=sl_n$, $G=C_2=\langle a\rangle$, $H=C_n=\langle
b \rangle$, where $a$ is a matrix that has $1$'s on the skew
diagonal and zeroes elsewhere and $b$ is the standard permutation
matrix of order $n$. Let $H$ and $G$ act on $sl_n$ by conjugation in ${\mathcal
M}_n$ and let $G$ act on $H$ by conjugation inside $GL_n$.
Furthermore assume that $A$ is a finite dimensional trivial $U{\bf
g}\rtimes k(H\rtimes G)$-module algebra. By Whitehead's second
lemma $H^2({\bf g},A)=0$ and hence we get an isomorphism $\mathcal{U}
A/(\mathcal{U} A)^n\simeq {\mathcal H}^2(Usl_n\rtimes kC_2,kC_n,A)$ if $n$
is odd. \end{Example}
\begin{Example} Let $H=U{\bf g}\rtimes kG$, where ${\bf g}$ is an
abelian Lie algebra and $G$ is a finite abelian group and assume
the action of $H$ on itself is given by conjugation, i.e.
$h(k)=h_1 kS(h_2)$. In this case it is easy to see that
$H^2(H,A)^H=H^2(H,A)$ for any trivial $H$-module algebra $A$ and
hence the homomorphism in the generalized Kac sequence
$\delta_{H,1}\oplus\delta_{H,2}\colon H^2(H,A)\oplus H^2(H,A)\to
{\mathcal H}^2(H,H,A)$ is trivial. Hence ${\mathcal
H}^2(H,H,A)\simeq \ker(H^3(H\rtimes H,A)\to H^3(H,A)\oplus
H^3(H,A)).$\end{Example}
\begin{appendix}
\section{Simplicial homological algebra}
This is a collection of notions and results from simplicial
homological algebra used in the main text. The emphasis is on the
cohomology of cosimplicial objects, but the considerations are
similar to those in the simplicial case [We].
\subsection{Simplicial and cosimplicial objects}
Let $\mathbf\Delta$ denote the simplicial category [Mc]. If
$\mathcal A$ is a category then the functor category $\mathcal
A^{\mathbf\Delta^{op}}$ is the category of simplicial objects
while $\mathcal A^{\mathbf\Delta}$ is the category of cosimplicial
objects in $\mathcal A$. Thus a simplicial object in $\mathcal A$
is given by a sequence of objects $\{ X_n\}$ together with, for
each $n\geq 0$, face maps $\partial_i\colon X_{n+1}\to X_n$ for $0\leq
i\leq n+1$ and degeneracies $\sigma_j\colon X_n\to X_{n+1}$ for $0\leq
j\leq n$ such that
$\partial_i\partial_j=\partial_{j-1}\partial_i$ for $i<j$,
$\sigma_i\sigma_j=\sigma_{j+1}\sigma_i$ for $i\leq j$,
$\partial_i\sigma_j=\begin{cases} \sigma_{j-1}\partial_i,& \mbox{ if } i<j;\\
1,& \mbox{ if } i=j, j+1;\\
\sigma_j\partial_{i-1},& \mbox{ if } i>j+1. \end{cases}$
A cosimplicial object in $\mathcal A$ is a sequence of objects $\{
X^n\}$ together with, for each $n\geq 0$, coface maps
$\partial^i\colon X^n\to X^{n+1}$ for $0\leq i\leq n+1$ and
codegeneracies $\sigma^j\colon X^{n+1}\to X^n$ such that
$\partial^j\partial^i=\partial^i\partial^{j-1}$ for $i<j$,
$\sigma^j\sigma^i=\sigma^i\sigma^{j+1}$ for $i\leq j$,
$\sigma^j\partial^i=\begin{cases} \partial^i\sigma^{j-1},& \mbox{ if } i<j;\\
1,& \mbox{ if } i=j,j+1;\\
\partial^{i-1}\sigma^j,& \mbox{ if } i>j+1. \end{cases}$
Two cosimplicial maps $f,g\colon X\to Y$ are homotopic if
for each $n\geq 0$ there is a family of maps $\{ h^i\colon X^{n+1}\to
Y^n\mid 0\leq i\leq n\}$ in $\mathcal A$ such that
$h^0\partial^0=f$, $h^n\partial^{n+1}=g$,
$h^j\partial^i= \begin{cases} \partial^ih^{j-1},& \mbox{ if } i<j;\\
h^{i-1}\partial^i, & \mbox{ if } i=j\ne 0;\\
\partial^{i-1}h^j, & \mbox{ if } i>j+1, \end{cases}$
$h^j\sigma^i= \begin{cases} \sigma^ih^{j+1}, & \mbox{ if } i\leq j;\\
\sigma^{i-1}h^j, & \mbox{ if } i>j. \end{cases}$ \\Clearly,
homotopy of cosimplicial maps is an equivalence relation.
If ${X}$ is a cosimplicial object in an abelian category $\mathcal{A}$, then
$C({X})$ denotes the associated cochain complex in $\mathcal{A}$, i.e.
an object of the category of cochain complexes $\operatorname{Coch}(\mathcal{A})$.
\begin{Lemma} For a cosimplicial object $X$ in the abelian
category $\mathcal{A}$ let $N^n(X)=\cap_{i=0}^{n-1}\ker\sigma^i$
and $D^n(X)=\sum_{j=0}^{n-1}\operatorname{im}\partial^j$. Then $C(X)\cong
N(X)\oplus D(X)$. Moreover, ${X}/{D(X)}\cong N(X)$ is a cochain complex with
differentials given by $\partial^n\colon {X^n}/{D^n}\to
{X^{n+1}}/{D^{n+1}}$, and $\pi^*(X)= H^*(N^*(X))$
is the sequence of cohomotopy objects of $X$.
\end{Lemma}
\begin{Theorem}[Cosimplicial Dold-Kan correspondence, {[We, 8.4.3]}] If
$\mathcal A$ is an abelian category then
\begin{enumerate}
\item $N\colon\mathcal{A}^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ is an
equivalence and $N(X)$ is a summand of $C(X)$;
\item $\pi^*(X)=H^*(N(X))\cong H^*(C(X))$.
\item If $\mathcal A$ has enough injectives, then
$\pi^*=H^*N\colon \mathcal A^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ and
$H^*C\colon \mathcal A^{\mathbf\Delta}\to \operatorname{Coch} (\mathcal A)$ are the sequences
of right derived functors of $\pi^0=H^0N\colon \mathcal
A^{\mathbf\Delta}\to \mathcal A$ and $H^0C\colon \mathcal
A^{\mathbf\Delta}\to \mathcal A$, respectively.
\end{enumerate}
\end{Theorem}
\begin{proof} (1) If $y\in N^n(X)\cap D^n(X)$ then
$y=\sum_{i=0}^{n-1}\partial^i(x_i)$, where each $x_i\in X^{n-1}$.
Suppose that $y=\partial^0(x)$ and $y\in N^n(X)$, then
$0=\sigma^0(y)=\sigma^0\partial^0(x)=x$ and hence
$y=\partial^0(x)=0$. Now proceed by induction on the largest $j$
such that $\partial^j(x_j)\neq 0$. So let
$y=\sum_{i=0}^j\partial^i(x_i)$ with $\partial^j(x_j)\neq 0$,
i.e.\ $y\notin \sum_{i<j}\operatorname{im}\partial^i$, and $y\in N^n(X)$. Then
$0=\sigma^j(y)=\sum_{i\leq j}\sigma^j\partial^i(x_i)
=x_j+\sum_{i<j}\sigma^j\partial^i(x_i)=x_j+\sum_{i<j}\partial^i\sigma^{j-1}(x_i)$.
This implies that $x_j=-\sum_{i<j}\partial^i\sigma^{j-1}(x_i)$ and
hence
$\partial^j(x_j)=-\sum_{i<j}\partial^j\partial^i\sigma^{j-1}(x_i)
=-\sum_{i<j}\partial^i\partial^{j-1}\sigma^{j-1}(x_i)\in
\sum_{i<j}\operatorname{im}\partial^i$, a contradiction. Thus, $N^n(X)\cap
D^n(X)=0$.
Now let us show that $D^n(X)+N^n(X)=C^n(X)$. Let $y\in C^n(X)$. If
$\sigma^i(y)=0$ for all $i$, then $y\in
N^n(X)=\cap_{i=0}^{n-1}\ker\sigma^i$ and we are done. Otherwise let
$i$ be the largest index such that $\sigma^i(y)\neq
0$. If $y'=y-\partial^i\sigma^i(y)$ then $y-y'\in D^n(X)$. For
$i<j$ we get
$\sigma^j(y')=\sigma^j(y)-\sigma^j\partial^i\sigma^i(y)
=\sigma^j(y)-\partial^i\sigma^{j-1}\sigma^i(y)
=\sigma^j(y)-\partial^i\sigma^i\sigma^j(y)=0$. Moreover,
$\sigma^i(y')=\sigma^i(y)-\sigma^i\partial^i\sigma^i(y)=\sigma^i(y)-\sigma^i(y)=0$,
so that any index $j$ with $\sigma^{j}(y')\neq 0$ satisfies $j\leq
i-1$. By induction, there is a $z\in D^n(X)$ such that $y'-z\in
N^n(X)$, and hence $y\in D^n(X)+N^n(X)$.
It now follows that $\cap_{i=0}^{n-1}\ker\sigma^i =N^n(X)\cong
X^n/{D^n(X)} =X^n/{\sum_{i=0}^{n-1}\operatorname{im}\partial^i}$. The
differential $\partial^n\colon N^n(X)\to N^{n+1}(X)$ is given by
$\partial^n(x+D^n(X))=\partial^n(x)+D^{n+1}(X)$.
(2) By definition, see [We, 8.4.3].
(3) The functors $N\colon \mathcal
A^{\mathbf\Delta}\to\operatorname{Coch}(\mathcal{A})$ and $C\colon\mathcal{A}^{\mathbf\Delta}\to
\operatorname{Coch} (\mathcal{A})$ are exact.
\end{proof}
The inverse equivalence $K\colon \operatorname{Coch} (\mathcal A)\to \mathcal
A^{\mathbf\Delta}$ has a description, similar to that for the
simplicial case [We, 8.4.4].
\subsection{Cosimplicial bicomplexes}
The category of cosimplicial bicomplexes in the abelian category
$\mathcal A$ is the functor category $\mathcal
A^{\mathbf\Delta\times\mathbf\Delta}=(\mathcal
A^{\mathbf\Delta})^{\mathbf\Delta}$. In particular, in a
cosimplicial bicomplex $X=\{ X^{p,q}\}$ in $\mathcal A$
\begin{enumerate} \item Horizontal and vertical cosimplicial
identities are satisfied; \item Horizontal and vertical
cosimplicial operators commute.
\end{enumerate}
The associated (unnormalized) cochain bicomplex $C(X)$ with
$C(X)^{p,q}=X^{p,q}$ has horizontal and vertical differentials
$$d_h=\sum_{i=0}^{p+1}(-1)^i\partial_h^i\colon X^{p,q}\to X^{p+1,q}\quad ,\quad
d_v=\sum_{j=0}^{q+1}(-1)^{p+j}\partial_v^j\colon X^{p,q}\to X^{p,q+1}$$
so that $d_hd_v=d_vd_h$. The normalized cochain bicomplex $N(X)$
is obtained from $X$ by taking the normalized cochain complex of
each row and each column. It is a summand of $CX$. The
cosimplicial Dold-Kan theorem then says that $H^{**}(CX)\cong H^{**}(NX)$
for every cosimplicial bicomplex.
The diagonal $\operatorname{diag}\colon \Delta\to \Delta\times\Delta$ induces the
diagonalization functor $\operatorname{Diag} =\mathcal A^{\operatorname{diag}}\colon \mathcal
A^{\Delta\times\Delta}\to\mathcal A^{\Delta}$, where
$\operatorname{Diag}^p(X)=X^{p,p}$ with coface maps
$\partial^i=\partial_h^i\partial_v^i\colon X^{p,p}\to X^{p+1,p+1}$ and
codegeneracies $\sigma^j=\sigma_h^j\sigma_v^j\colon X^{p+1,p+1}\to
X^{p,p}$ for $0\leq i\leq p+1$ and $0\leq j\leq p$, respectively.
\begin{Theorem}[The cosimplicial Eilenberg-Zilber Theorem.] Let
$\mathcal A$ be an abelian category with enough injectives. There
is a natural isomorphism
$$\pi^*(\operatorname{Diag} X)=H^*(C\operatorname{Diag} (X))\cong H^*(\operatorname{Tot} (X)),$$
where $\operatorname{Tot}(X)$ denotes the total complex associated to the double
cochain complex $CX$.
Moreover, there is a convergent first quadrant cohomological
spectral sequence
$$E_1^{p,q}=\pi_v^q(X^{p,*})\quad ,\quad E_2^{p,q}=\pi_h^p\pi_v^q(X)\Rightarrow
\pi^{p+q}(\operatorname{Diag} X).$$
\end{Theorem}
\begin{proof} It suffices to show that $\pi^0\operatorname{Diag} \cong H^0(\operatorname{Tot}
X)$, and that
$$\pi^*\operatorname{Diag},\; H^*\operatorname{Tot} \colon\mathcal A^{\Delta\times\Delta }\to \mathcal
A^{\mathbf N}$$ are
sequences of right derived functors.
First observe that $\pi^0(\operatorname{Diag} X)=\operatorname{eq} (\partial_h^0\partial_v^0,
\partial_h^1\partial_v^1\colon X^{0,0}\to X^{1,1})$, while $H^0(\operatorname{Tot} (X))=\ker
((\partial_h^0-\partial_h^1, \partial_v^0-\partial_v^1)\colon X^{0,0}\to
X^{1,0}\oplus X^{0,1})$. But
$\partial_h^0\partial_v^0x=\partial_h^1\partial_v^1x$ implies that
$\partial_v^0x=\sigma_h^0\partial_h^0\partial_v^0x
=\sigma_h^0\partial_h^1\partial_v^1x =\partial_v^1x$, since
$\sigma_h^0\partial_h^0 =1=\sigma_h^0\partial_h^1$, and similarly
$\partial_h^0x=\sigma_v^0\partial_h^0\partial_v^0x
=\sigma_v^0\partial_h^1\partial_v^1x =\partial_h^1x$, since
$\sigma_v^0\partial_v^0 =1=\sigma_v^0\partial_v^1$, so that
$\pi^0(\operatorname{Diag} X)\subseteq H^0(\operatorname{Tot} (X))$.
Conversely, if $\partial_h^0x=\partial_h^1x$ and
$\partial_v^0x=\partial_v^1x$ then
$\partial_h^0\partial_v^0x=\partial_h^0\partial_v^1x
=\partial_v^1\partial_h^0x=\partial_v^1\partial_h^1x
=\partial_h^1\partial_v^1x$, and hence $H^0(\operatorname{Tot} (X))\subseteq
\pi^0(\operatorname{Diag} X)$.
The additive functors $\operatorname{Diag}\colon \mathcal A^{\Delta\times\Delta}\to
\mathcal A^{\Delta}$ and $\operatorname{Tot}\colon\mathcal
A^{\Delta\times\Delta}\to \operatorname{Coch} (\mathcal A)$ are obviously exact,
while $\pi^*, H^*$ are cohomological $\delta$-functors, so that
both $\pi^*\operatorname{Diag} ,H^*\operatorname{Tot}\colon\mathcal A^{\Delta\times\Delta}\to
\operatorname{Coch} (\mathcal A)$ are cohomological $\delta$-functors.
The claim is that these cohomological $\delta$-functors are
universal, i.e.\ that they are the right derived functors of $\pi^0\operatorname{Diag}$ and
$H^0\operatorname{Tot}\colon \mathcal A^{\Delta\times\Delta}\to \mathcal A$, respectively.
Since $\mathcal A$ has enough injectives, so does $\operatorname{Coch} (\mathcal
A)$ by [We, Ex. 2.3.4], and hence by the Dold-Kan equivalence
$\mathcal A^{\Delta}$ and $\mathcal A^{\Delta\times\Delta}$ have
enough injectives. Moreover, by the next lemma, both $\operatorname{Diag}$ and
$\operatorname{Tot} $ preserve injectives. It therefore follows that
$$\begin{array}{l}
\pi^*\operatorname{Diag} =(R^*\pi^0)\operatorname{Diag} =R^*(\pi^0\operatorname{Diag} ),\\
H^*\operatorname{Tot} =(R^*H^0)\operatorname{Tot} =R^*(H^0\operatorname{Tot} ).
\end{array}$$
The canonical cohomological first quadrant spectral sequence
associated with the cochain bicomplex $C(X)$ has
$$E_1^{p,q}=H_v^q(C^{p,*}(X))=\pi_v^q(X^{p,*})\quad ,\quad
E_2^{p,q}=H_h^p(C(\pi_v^q(X)))=\pi_h^p\pi_v^q(X)$$ and converges
finitely to $H^{p+q}(\operatorname{Tot} (X))\cong \pi^{p+q}(\operatorname{Diag} X)$.
\end{proof}
\begin{Lemma} The functors $\operatorname{Diag}\colon \mathcal
A^{\Delta\times\Delta}\to\mathcal A^{\Delta}$ and $\operatorname{Tot}\colon\mathcal
A^{\Delta\times\Delta}\to\operatorname{Coch}(\mathcal A)$ preserve injectives.
\end{Lemma}
\begin{proof} A cosimplicial bicomplex $J$ is an injective object
in $\mathcal A^{\Delta\times\Delta}$ if and only if
\begin{enumerate} \item each $J^{p,q}$ is an injective object of
$\mathcal A$, \item each row and each column is cosimplicially
null-homotopic, i.e.\ the identity map is cosimplicially homotopic
to the zero map, \item the vertical homotopies $h_v^j\colon J^{*,q}\to
J^{*,q-1}$ for $0\leq j\leq q-1$ are cosimplicial maps.
\end{enumerate}
It then follows that $\operatorname{Diag} (J)$ is an injective object in
$\mathcal A^{\Delta}$, since $J^{p,p}$ is injective in $\mathcal
A$ for every $p\geq 0$ and the maps $h^i=h_h^ih_v^i\colon J^{p,p}\to
J^{p-1,p-1}$, $0\leq i\leq p-1$ and $p>0$, form a contracting
cosimplicial homotopy, i.e.\ the identity map of $\operatorname{Diag} J$ is
cosimplicially null-homotopic.
On the other hand $\operatorname{Tot} (J)$ is a non-negative cochain complex of
injective objects in $\mathcal A$, so it is injective in $\operatorname{Coch}
(\mathcal A)$ if and only if it is split-exact, that is if and
only if it is exact. But every column of the associated cochain
bicomplex $C(J)$ is acyclic, since
$H_v^*(J^{p,*})=\pi^*(J^{p,*})=0$. The exactness of $\operatorname{Tot} (J)$
now follows from the convergent spectral sequence with
$E_1^{p,q}=H^q(C^{p,*}(J))=0$ and $E_2^{p,q}=H_h^p(H_v^q(C(J)))
\Rightarrow H^{p+q}(\operatorname{Tot} (J))$.
\end{proof}
\vskip .5cm
\subsection{The cosimplicial Alexander-Whitney map}
The cosimplicial Alexander-Whitney map gives an explicit formula
for the isomorphism in the Eilenberg-Zilber theorem. For $p+q=n$
let
$$g_{p,q}=\partial^n_h\partial^{n-1}_h\ldots \partial^{p+1}_h\partial^0_v\ldots \partial^0_v\colon X^{p,q}\to X^{n,n}$$
and $g^n=(g_{p,q})\colon \operatorname{Tot}^n (X)\to X^{n,n}$. This defines a natural
cochain map $g\colon \operatorname{Tot} (X)\to C(\operatorname{Diag} X)$, which induces a morphism
of universal $\delta$-functors
$$g^*\colon H^*(\operatorname{Tot} (X))\to H^*(C(\operatorname{Diag} X))=\pi^*(\operatorname{Diag} X).$$ Moreover,
$g^0\colon \operatorname{Tot}^0(X)=X^{0,0}=C^0(\operatorname{Diag} X)$, and hence
$$g^0\colon H^0(\operatorname{Tot} (X))\to H^0(C(\operatorname{Diag} X))=\pi^0(\operatorname{Diag} X).$$
The cosimplicial Alexander Whitney map is therefore (up to
equivalence) the unique cochain map inducing the isomorphism in
the Eilenberg-Zilber theorem. The inverse map $f\colon C(\operatorname{Diag} X)\to
\operatorname{Tot} (X)$ is given by the shuffle coproduct formula
$$f^{p,q}=\sum_{(p,q)-\mbox{shuffles}}(-1)^{\mu}\sigma^{\mu (n)}_h\ldots
\sigma^{\mu (p+1)}_h\sigma^{\mu (p)}_v\ldots \sigma^{\mu (1)}_v\colon X^{n,n}\to X^{p,q},$$
and is a natural cochain map. It induces a natural isomorphism
$\pi^0(\operatorname{Diag} X)=H^0(C(\operatorname{Diag} X))\cong H^0(\operatorname{Tot} (X))$, and thus
$$f^*\colon \pi^*(\operatorname{Diag} X)=H^*(C(\operatorname{Diag} X))\cong H^*(\operatorname{Tot} (X))$$
is the unique isomorphism of universal $\delta$-functors given in
the cosimplicial Eilenberg-Zilber theorem. In particular, $f^*$ is
the inverse of $g^*$.
\end{appendix}
\end{document} |
\begin{document}
\title{
Relationship between covariance of Wigner functions \\ and transformation noncontextuality
}
\author{Lorenzo Catani}\email{[email protected]}\affiliation{Electrical Engineering and Computer Science Department, Technische Universit\"{a}t Berlin, 10587 Berlin, Germany}
\begin{abstract}
We investigate the relationship between two properties of quantum transformations often studied in popular subtheories of quantum theory: covariance of the Wigner representation of the theory and the existence of a transformation noncontextual ontological model of the theory.
We consider subtheories of quantum theory specified by a set of states, measurements and transformations, the latter defined by specifying a group of unitaries that map between states (and measurements) within the subtheory. We show that if there exists a Wigner representation of the subtheory which is covariant under the group of unitaries defining the set of transformations, then the subtheory admits of a transformation noncontextual ontological model.
We provide some concrete arguments to conjecture that the converse statement also holds provided that the underlying ontological model is the one given by the Wigner representation.
In addition, we investigate the relationships of covariance and transformation noncontextuality with the existence of a quasiprobability distribution for the theory that represents the transformations as positivity preserving maps. We conclude that covariance implies transformation noncontextuality, which implies positivity preservation.
\end{abstract}
\maketitle
\section{Introduction}
Understanding what are the features of quantum theory that resist any explanation within the classical worldview is crucial both from the foundational and the computational point of view. Several no-go theorems have provided concrete contributions in this sense \cite{Bell1966,KochenSpecker67,Leifer2017,PBR,FrauchigerRenner2018,CataniLeifer2020}, as well as results concerning attempts of reproducing quantum theory in a classical fashion \cite{Wigner1932,Spekkens2007,ToyFieldTheory} and axiomatizations of quantum theory in the framework of general probabilistic theories \cite{Hardy2001, Chiribella2011, Masanes2011}.
Among the notions of nonclassicality, in recent years contextuality \cite{KochenSpecker67, Spekkens2005} has also been employed to explain the origin of the quantum computational speed-up for specific tasks and models of quantum computation \cite{Howard2014, Raussendorf2017, Delfosse2015, Vega2017, CataniBrowne2018, Raussendorf2013, Spekkens2009,Mansfield2018,CataniHenaut2018, Raussendorf2019,Schmid2018,Saha2019,SahaAnubhav2019,LostaglioSenno2020,Yadavalli2020,ContextualityViaUR,Flatt2021,Roch2021,CataniFaleiro2022}.
Nevertheless, there is still a neat discrepancy between what is considered to be truly nonclassical from a foundational and a computational point of view. An example is provided by the $n$-qubit stabilizer theory -- the subtheory of quantum theory composed of common eigenstates of Pauli operators, Clifford unitaries and Pauli measurements -- which has been proven to be efficiently simulatable by a classical computer according to the Gottesman-Knill theorem \cite{Gottesman99}, despite the fact that it shows contextuality \cite{Mermin1990, GHZ}.
It would be desirable to obtain a consistent picture connecting the notions of nonclassicality in the two realms and distinguishing weaker and stronger notions of nonclassicality. In this respect, a fruitful approach would be to explore less studied and new notions of (non)classicality.
In the present work we move a step in this direction. We start by studying two properties of quantum transformations that are usually associated to classical behaviors and therefore, when broken, to genuinely nonclassical features.
These are the covariance of the Wigner function under the group of transformations allowed in the examined theory \cite{Zhu2016}, and transformation noncontextuality -- \textit{i.e.} the existence of a transformation noncontextual ontological model for the theory \cite{Spekkens2005}.
Let us spend a few words introducing them and discussing why they should be considered notions of classicality.
Covariance is defined in the framework of Wigner functions \cite{Wigner1932} -- particular quasiprobability distributions that provide a representation of quantum states, transformations and measurements in the phase space -- and it indicates that the transformations of the theory under examination can be represented as symplectic affine transformations in the phase space (a subset of the permutations in the discrete dimensional case) \cite{Gross2006,Zhu2016}.
It is often studied in reference to the group of Clifford unitaries, and it is known that a covariant representation of such group exists in the odd dimensional case \cite{Gross2006}, but it does not exist in the even dimensional case \cite{Raussendorf2022}. This is why, in the even dimensional case, smaller subtheories than the stabilizer theory have to be considered in order to have covariance \cite{Raussendorf2017,Delfosse2015}.
In the context of studying quantum theory in the phase space, if one wonders what it means for a set of quantum transformations to have a classical behavior, it is natural -- at least as a minimal requirement -- to assume this to be the case if the quantum transformations are represented in the phase space in the same way as the physical trajectories in classical Hamiltonian mechanics. This means, by symplectic transformations (necessary and sufficient condition, as proven by Hamilton \cite{Guillemin1990}). Hence, covariantly represented quantum transformations can be interpreted as classical trajectories in the phase space. In addition, covariance turns out to capture a wider notion of classicality than the mere correspondence to trajectories in classical Hamiltonian mechanics. The set of quantum transformations that are reproduced in the celebrated Spekkens' toy theory -- a noncontextual theory formulated in the phase space that reproduces many phenomena that are usually thought to be signatures of quantumness \cite{Spekkens2007, Spekkens2016, CataniBrowne2017,Catani2021} -- allows for a covariant Wigner representation. The reason is that covariance of the Wigner representation coincides with the requirement of preservation of the ``epistemic restriction'' (corresponding to the preservation of the Poisson brackets), which is the basic defining constraint of Spekkens toy theory.
Finally, a consequence of the present work is that covariance is indeed a notion of classicality also because it implies transformation noncontextuality, which is a well established notion of classicality, as we discuss below.
Transformation noncontextuality is defined in the framework of ontological models (also known as hidden variable models) \cite{Harrigan2010}. As we will report in the next section, quasiprobability representations and ontological models of a theory are strictly related notions \cite{Spekkens2008}.
Transformation noncontextuality means that operationally equivalent experimental procedures for performing a transformation must correspond to the same representation in the underlying ontological model \cite{Spekkens2005}. It is often justified as an instance of a methodological principle motivated by Leibniz's principle of the identity of indiscernibles \cite{SpekkensLeibniz2019}, and more recently as a requirement of no fine tuning \cite{CataniLeifer2020,Adlam2021}.
Transformation noncontextuality holds in classical mechanics. However, it was proven in \cite{Spekkens2008} that there exists no transformation noncontextual ontological model consistent with the statistics of quantum theory.
Historically, most research on the role of nonclassical features in quantum computation has focused on properties of preparations and measurements \cite{WallmanBartlett,Howard2014, Raussendorf2017}. Instead, we here study properties of quantum transformations. They have recently been shown to play a crucial role in information processing tasks \cite{Mansfield2018,CataniHenaut2018}, and in encoding nonclassical behaviors of subtheories of quantum theory that were previously considered to behave classically, like the single qubit stabilizer theory \cite{Lillystone2018}.
The latter provides one of the main motivations for considering covariance and transformation noncontextuality as connected: they are both broken by the presence of the Hadamard gate.
Another important result that ignores transformations is due to Spekkens in \cite{Spekkens2008}, where he showed that contextuality and negativity of quasiprobability representations are equivalent notions of nonclassicality.
In this article we focus on subtheories of quantum theory defined by a set $(\mathcal{S},\mathcal{T},\mathcal{M})$ of states, transformations and measurements, where the transformations map between states (and measurements) within the subtheory. Because covariance only applies to unitaries (it would not make sense for other transformations, \textit{e.g.}, those describing irreversible processes), we only consider theories whose set of transformations $\mathcal{T}$ is defined by a group of unitaries, and where the more general channels can be decomposed as convex mixtures of unitaries.
By using the result in \cite{Spekkens2008}, that trivially extends to transformations, we show that covariance implies transformation noncontextuality (proposition \ref{CovarianceTransfNC}). We conjecture that the converse holds with the assumption that the ontological model is the one corresponding to the Wigner representation. We provide some concrete arguments to support such claim in conjecture \ref{TransfNCCovariance}.
In this work we also introduce the notion of positivity preservation. We do so with the goal of potentially formulating a stronger notion of nonclassicality than transformation contextuality and non-covariance, which are quite weak insofar as they also manifest in simple classically simulatable subtheories like the single qubit stabilizer theory.
We say that a subtheory is positivity preserving if there exists a quasiprobability representation for which all the transformations of the theory map non-negative states to non-negative states.
Positivity preservation
holds in classical mechanics, where quasiprobabilities are (non-negative) probabilities, and it does not hold in full quantum theory, because of the unavoidable presence of negatively represented states and the fact that there always exists a transformation between any two states.
In propositions \ref{TheoremCovariance} and \ref{TheoremTransfNC} we prove that both covariance and transformation noncontextuality imply positivity preservation, but not vice versa.
These results motivate further research on the conceptual significance of positivity preservation as a compelling notion of classicality, since it would then be the case, via propositions \ref{TheoremCovariance} and \ref{TheoremTransfNC}, that breaking positivity preservation is a stronger notion of nonclassicality than transformation contextuality and non-covariance.
The remainder of the article is structured as follows. In section \ref{Definitions} we define quasiprobability distributions and Wigner functions (subsection \ref{SecWignerFunctions}), covariance of Wigner functions (subsection \ref{SecCovariance}), and transformation noncontextuality (subsection \ref{SecTransformationNoncontextuality}). In section \ref{SecSQM} we illustrate the motivating example of the single qubit stabilizer theory and then, in section \ref{Results}, we prove the results relating covariance, transformation noncontextuality and positivity preservation. We conclude, in section \ref{Discussion}, by discussing possible generalizations of the results and future avenues.
\section{Definitions}
\label{Definitions}
In this section we define Wigner functions and more general quasiprobability distributions (with a special focus on the property of positivity preservation), covariance of Wigner functions, and transformation noncontextuality.
\subsection{Wigner functions}
\label{SecWignerFunctions}
Wigner functions are a way of reformulating quantum theory in the phase space \cite{Wigner1932}, which is the framework where classical Hamiltonian mechanics is formulated, and therefore provide a tool to compare the two theories on the same ground. They are also the most popular example of quasiprobability distributions \cite{Optics, Gross2006, Raussendorf2017}. The latter are linear and invertible maps from operators on Hilbert space to real distributions on a measurable space $\Lambda.$ Quasiprobability representations of quantum theory associate \textit{i)} a real-valued function $\mu_{\rho}: \Lambda \rightarrow \mathbb{R}$ to any density operator $\rho$ such that $\int d\lambda \mu_{\rho}(\lambda)=1$, \textit{ii)} a real-valued function $\xi_{\Pi_k}: \Lambda \rightarrow \mathbb{R}$ to any element $\Pi_k$ of the POVM $\{\Pi_k\}$ such that $\sum_k \xi_{\Pi_k}(\lambda)=1$ $\;\forall \; \lambda\in\Lambda,$ and \textit{iii)} a real-valued matrix $\Gamma_{\varepsilon}: \Lambda\times\Lambda \rightarrow \mathbb{R}$ to any CPTP map $\varepsilon,$ such that $\int d\lambda' \Gamma_{\varepsilon}(\lambda',\lambda)=1.$ These distributions provide the same statistics of quantum theory -- given by the Born rule -- if \begin{equation}\begin{split} \label{QuantumStatistics} p(k|\rho,\varepsilon, \{\Pi_k\}) &= \textrm{Tr}[\varepsilon(\rho)\Pi_k] \\ &= \int d\lambda d\lambda' \xi_{\Pi_k}(\lambda') \Gamma_{\varepsilon}(\lambda',\lambda) \mu_{\rho}(\lambda).\end{split}\end{equation}
Quasiprobability distributions are named this way as they behave similarly to probability distributions, with the crucial difference that they can take negative values. The negativity is usually assumed to be a signature of quantumness because it is unavoidable in order to reproduce the statistics of quantum theory \cite{FerrieEmerson2009}.
A quasiprobability distribution, for example $\mu_{\rho}(\lambda),$ is \textit{non-negative} if $\mu_{\rho}(\lambda)\ge0 \;\; \forall \lambda\in\Lambda.$ Non-negative quasiprobability representations of quantum theory are an important tool to support the epistemic view of quantum theory \cite{Harrigan2010,Spekkens2007}, where quantum states are interpreted as states of knowledge rather than states of reality, and are useful to perform classical simulations of quantum computations \cite{Veitch2012}.
We will focus on the following property of quasiprobability distributions.
\newtheorem{Definition}{Definition}
\begin{Definition}[Positivity Preservation]\label{PositivityPreservation}
Given a set $\mathcal{S}_+$ of quantum states $\rho$ that are \emph{non-negatively} represented by a quasiprobability distribution $\mu_{\rho}(\lambda),$ a transformation $\varepsilon$ that maps $\rho$ to $\rho'=\varepsilon(\rho)$ is \emph{positivity preserving} if, for every $\rho\in\mathcal{S}_+,$ the quasiprobability distribution $\mu_{\rho'}(\lambda)$ is non-negative too.
A subtheory of quantum theory allows for a positivity preserving quasiprobability representation if there exists a quasiprobability distribution for which all the transformations of the subtheory are positivity preserving.
\end{Definition}
We are interested in Wigner functions defined over the discrete phase space $\Lambda=\mathbb{Z}^{2n}_{d},$ where the integers $d,n$ denote the dimensionality of the system and the number of systems, respectively. We follow the definition provided in \cite{Raussendorf2017} that includes the mostly used examples of Wigner functions \cite{Gross2006,Gibbons2004}.
\newtheorem{Wigner}[Definition]{Definition}
\begin{Wigner}[Wigner function]\label{WignerFunctions}
The Wigner function of a quantum state $\rho$ (and, analogously, of a POVM element $\Pi_k$) is defined as
\begin{equation}\label{wignernew}W^{\gamma}_{\rho}(\lambda)=\textrm{Tr}[A^{\gamma}(\lambda)\rho],\end{equation} where the phase-point operator is \begin{equation}\label{phasepoint} A^{\gamma}(\lambda)= \frac{1}{N_{\Lambda}}\sum_{\lambda' \in\Lambda}\chi([\lambda,\lambda'])\hat{W}^{\gamma}(\lambda').\end{equation}
\end{Wigner}
The normalization $N_{\Lambda}$ is such that $\textrm{Tr}(A^{\gamma}(\lambda))=1.$
The Weyl operators $\hat{W}^{\gamma}(\lambda)$ are defined as $\hat{W}^{\gamma}(\lambda)=w^{\gamma(\lambda)}Z(p)X(x),$ where the phase space point is $\lambda=(x,p)\in \Lambda.$
The operators $X(x),Z(p)$ represent the (generalized) Pauli operators, $X(x)=\sum_{x'\in\mathbb{Z}_d} \left | x'-x \right \rangle \left \langle x' \right |$ and $Z(p)= \sum_{x\in\mathbb{Z}_d} \chi(px)\left | x \right \rangle \left \langle x \right |.$
The functions $w$ and $\chi$ are $w^{\gamma(\lambda)}=i^{\gamma(\lambda)}, \chi(a)=(-1)^a$ in the case of qubits and $w^{\gamma(\lambda)}=\chi(-2^{-1}\gamma(\lambda)),\chi(a)=e^{\frac{2\pi i}{d}a}$ in the case of qudits of odd dimensions.
By choosing the function $\gamma: \Lambda\rightarrow \mathbb{Z}_q,$ where $q$ is an integer,
we can specify which Wigner function to consider.
The reason for choosing different Wigner functions is that they are non-negative -- thus providing a proof of operational equivalence with classical theory -- for different subtheories of quantum theory.
For example $\gamma= x\cdot p$ gives Gross' Wigner function \cite{Gross2006,Gross2019} for odd dimensional qudits, which non-negatively represents stabilizer quantum theory in odd dimensions \cite{Spekkens2016,CataniBrowne2017,SchmidGrossUnique2022}, while $\gamma=x \cdot p \;\mathrm{mod}4$ gives the Wigner function developed by Gibbons and Wootters \cite{Gibbons2004,Cormick2006} for qubits, which non-negatively represents the subtheory of quantum theory composed by all the separable eigenstates of Pauli operators, Pauli measurements and transformations between them \cite{Raussendorf2017,CataniBrowne2018}.
We will omit the superscript $\gamma$ in the future in order to soften the notation.
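As a concrete illustration of Definition \ref{WignerFunctions}, the following Python sketch builds the four single-qubit phase-point operators for the choice $\gamma(\lambda)=x\cdot p\;\mathrm{mod}\,4$ and evaluates the resulting Wigner function on two stabilizer states, obtaining non-negative values. It is only a sketch: the symplectic form is assumed to be $[\lambda,\lambda']=xp'-px'$, and a factor $1/d$ is inserted in the Wigner function so that its values sum to one, since normalization conventions vary across references.
\begin{verbatim}
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def weyl(x, p):
    # W(x,p) = i^{x p} Z^p X^x   (qubit case, gamma(lambda) = x*p mod 4)
    return (1j) ** ((x * p) % 4) * np.linalg.matrix_power(Z, p) \
           @ np.linalg.matrix_power(X, x)

def symp(lam, lamp):
    # assumed convention for the symplectic form: [lam, lam'] = x p' - p x'
    (x, p), (xp, pp) = lam, lamp
    return (x * pp - p * xp) % 2

points = list(itertools.product(range(2), repeat=2))

def A(lam):
    # phase-point operator, normalised so that Tr A(lambda) = 1
    return 0.5 * sum((-1) ** symp(lam, lp) * weyl(*lp) for lp in points)

def wigner(rho):
    # assumed factor 1/2 (= 1/d) so that the values sum to Tr rho = 1
    return {lam: (np.trace(A(lam) @ rho) / 2).real for lam in points}

for lam in points:
    assert np.isclose(np.trace(A(lam)), 1)   # Tr A(lambda) = 1

zero = np.diag([1, 0]).astype(complex)        # |0><0|
plus = 0.5 * np.ones((2, 2), dtype=complex)   # |+><+|
print(wigner(zero))   # non-negative, supported on x = 0
print(wigner(plus))   # non-negative, supported on p = 0
\end{verbatim}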
Wigner functions satisfy the following properties.
\begin{itemize}
\item The marginal of a Wigner function on the state $\rho$ behaves as a probability distribution: $\sum_{p\in \mathbb{Z}_d}W_{\rho}(x,p)=|\bra{x}\rho\ket{x}|^{2}.$
\item The Wigner function of many systems in a product state is the tensor product of the Wigner functions of each system, as a consequence of the factorability of $A(\lambda),$ $A(x_0,p_0\dots x_{n-1},p_{n-1})=A(x_0,p_0)\otimes \dots \otimes A(x_{n-1},p_{n-1}).$
\item The phase-point operators form a complete basis of the Hermitian operators in the Hilbert space with respect to the Hilbert-Schmidt inner product, thus obeying Hermitianity, $A(\lambda)=A^{\dagger}(\lambda)$ $\forall \; \lambda\in\Lambda,$ and orthonormality, $\textrm{Tr}[A(\lambda)A(\lambda')]=\frac{1}{N_{\Lambda}}\delta_{\lambda,\lambda'}.$
This implies that $\rho=\sum_{\lambda\in\Lambda}A(\lambda)W_{\rho}(\lambda)$ and $\sum_{\lambda\in\Lambda}W_{\rho}(\lambda)W_{\sigma}(\lambda)=\textrm{tr}(\rho\sigma),$ where $\rho,\sigma$ are any two Hermitian operators.
\item $\sum_{\lambda\in\Lambda}A(\lambda)=\mathbb{I},$ thus implying that $\textrm{Tr}[\rho]=1=\sum_{\lambda}W_{\rho}(\lambda).$
\end{itemize}
For consistency with equation \eqref{QuantumStatistics}, the Wigner function associated to a CPTP map $\varepsilon$ is $W_{\varepsilon}(\lambda',\lambda)=\textrm{Tr}[\varepsilon(A(\lambda))A(\lambda')]$ and it is such that $\sum_{\lambda'\in\Lambda}W_{\varepsilon}(\lambda',\lambda)=1.$
\subsection{Covariance}
\label{SecCovariance}
The Wigner function $W_{\rho}(\lambda)$ is \textit{covariant} under the group\footnote{Notice that the set of unitaries must form a group for the property of covariance to be possible.} of unitary transformations in $\mathcal{T}$ if, for every unitary gate $U\in\mathcal{T}$,
\begin{equation}\label{covariance}W_{U\rho U^{\dagger}}(\lambda)=W_{\rho}(S\lambda+a) \;\;\; \forall \;\rho\in\mathcal{S},\; \forall \;\lambda\in\Lambda,\end{equation} where $S$ is a symplectic transformation, \textit{i.e.} $S^TJS=J,$ where $J= \bigoplus_{j=1}^{n} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}_j$ is the standard invertible matrix used in symplectic geometry, and $a$ is a translation vector. There is a precise relationship between the group of unitaries and the group of symplectic transformations satisfying equation \eqref{covariance}; namely, the unitaries $U$ are indexed by the symplectic transformations $S$, and form a unitary representation of the group of such symplectic transformations.
We say that the set of transformations $\mathcal{T}$, as well as the subtheory $(\mathcal{S},\mathcal{T},\mathcal{M})$, is covariantly represented by the Wigner function $W_{\rho}(\lambda)$ if $W_{\rho}(\lambda)$ is covariant under the group of unitary transformations in $\mathcal{T}$ according to equation \eqref{covariance}.
Equation \eqref{covariance} can also be written as, \begin{equation}\label{covarianceA}UA(\lambda)U^{\dagger}=A(S\lambda+a)\;\;\; \forall \;\lambda\in\Lambda.\end{equation}
In the subtheories that we consider, the allowed quantum channels are the ones decomposed as mixtures of unitaries in $\mathcal{T}$. We recall that a quantum channel $\varepsilon$ -- represented by a CPTP map -- can in general be written as \begin{equation}\label{KrausDecomposition}\varepsilon(\rho)=\sum_kE_k\rho E_k^{\dagger},\end{equation} where the Kraus operators $E_k$ satisfy the completeness relation $\sum_kE_k^{\dagger}E_k=\mathbb{I}.$
In our case, the Kraus operators are of the form $E_k = \sqrt{p_k} U_k$, where $U_k$ are unitaries in $\mathcal{T}$ and the $p_k\in[0,1]$ are such that $\sum_k p_k =1$.
We say that a channel $\varepsilon$ is covariantly represented by the Wigner function $W_{\rho}(\lambda)$ if it belongs to the set of transformations $\mathcal{T}$ that is covariantly represented by the Wigner function $W_{\rho}(\lambda)$.
\subsection{Transformation Noncontextuality}
\label{SecTransformationNoncontextuality}
A natural way of justifying why quantum theory works is to provide an ontological model that reproduces its statistics \cite{Spekkens2005}. An ontological model associates the physical state of the system at a given time -- the ontic state -- to a point $\lambda$ in a measurable set $\Lambda,$ and the experimental procedures -- classified in preparations, transformations and measurements -- to probability distributions on the ontic space $\Lambda.$ A preparation procedure $P$ of a quantum state $\rho$ is represented by a probability distribution $\mu_P(\lambda)$ over the ontic space, $\mu_P:\Lambda\rightarrow \mathbb{R}$ such that $\int \mu_P(\lambda)d\lambda=1$ and $\mu_{P}(\lambda)\ge0 \;\; \forall \lambda\in\Lambda.$ A transformation procedure $T$ of a CPTP map $\varepsilon$ is represented by a transition matrix $\Gamma_T(\lambda',\lambda)$ over the ontic space, $\Gamma_T:\Lambda\times\Lambda\rightarrow \mathbb{R}$ such that $\int \Gamma_T(\lambda',\lambda)d\lambda'=1$ and $\Gamma_{T}(\lambda',\lambda)\ge0 \;\; \forall \lambda,\lambda'\in\Lambda.$ A measurement procedure $M$ with associated outcomes $k$ of a POVM $\{\Pi_k\}$ is represented by a set of indicator functions $\{\xi_{M,k}(\lambda)\}$ over the ontic space, $\xi_{M,k}:\Lambda\rightarrow \mathbb{R}$ such that $\sum_k \xi_{M,k}(\lambda)=1$ and $\xi_{M,k}(\lambda)\ge0 \;\; \forall \lambda\in\Lambda,\;\forall k.$ The ontological model reproduces the predictions of quantum theory according to the law of classical total probability, \begin{equation}\begin{split} \label{QuantumStatisticsOM} p(k|P,T,M) &= \textrm{Tr}[\varepsilon(\rho)\Pi_k] \\ &= \int d\lambda d\lambda' \xi_{M,k}(\lambda') \Gamma_T(\lambda',\lambda) \mu_P(\lambda).\end{split}\end{equation}
All the preparation procedures that prepare the state $\rho$ belong to the equivalence class that we denote with $e_{\rho}(P).$ Analogous reasoning for the equivalence classes $e_{\varepsilon}(T)$ associated to the CPTP map $\varepsilon$ and $e_{\{\Pi_k\}}(M)$ associated to the POVM $\{\Pi_k\}.$
The idea of the generalized notion of noncontextuality is that operational equivalences -- \textit{e.g.}, the different Kraus decompositions of a CPTP map -- are represented by identical probability distributions on the ontic space.
\newtheorem{TransformationNC}[Definition]{Definition}
\begin{TransformationNC}[Ontological model]\label{TransformationNC}
An ontological model of (a subtheory of) quantum theory is \emph{transformation noncontextual} if \begin{equation}\label{TransfNC}\Gamma_T(\lambda',\lambda)=\Gamma_{\varepsilon}(\lambda',\lambda) \;\;\; \forall \;T\in e_{\varepsilon}(T), \;\; \forall \;\varepsilon.\end{equation}
\end{TransformationNC}
The above definition can be analogously extended to preparation noncontextuality and measurement noncontextuality \cite{Spekkens2005}, \begin{equation}\begin{split}\label{PrepMeasNC}&\mu_P(\lambda)=\mu_{e_{\rho}(P)}(\lambda) \;\;\;\forall P\in e_{\rho}(P), \;\; \forall \;\rho, \\ & \xi_{M,k}(\lambda)=\xi_{e_{\Pi_k}(M)}(\lambda) \;\;\; \forall M \in e_{\{\Pi_k\}}(M), \;\; \forall \;\{\Pi_k\}.\end{split}\end{equation}
An ontological model is universally noncontextual if it is preparation, transformation and measurement noncontextual.
It can be shown that a universally noncontextual ontological model of quantum theory is impossible \cite{Spekkens2005} and, in particular, that a transformation noncontextual ontological model of quantum theory is impossible too.
From the definitions of ontological models above and the definitions of quasiprobability representations provided previously, by substituting equations \eqref{TransfNC} and \eqref{PrepMeasNC} into equation \eqref{QuantumStatisticsOM}, it is immediate to see that the existence of a \textit{non-negative} quasiprobability representation that provides the statistics of quantum theory as in equation \eqref{QuantumStatistics} coincides with the existence of a \textit{noncontextual} ontological model for it. This result was proven in \cite{Spekkens2008} and here stated also considering transformations.
\section{The single qubit stabilizer theory}
\label{SecSQM}
In this section we consider the stabilizer theory of one qubit, which provides a motivating example for studying the connection between covariance and transformation noncontextuality, as we will now show.
The stabilizer theory of one qubit is defined as the subtheory of one qubit quantum theory that includes the eigenstates of $X,Y,Z$ Pauli operators, the Clifford unitaries -- generated by the Hadamard gate $H$ and the phase gate $P$ -- and $X,Y,Z$ Pauli observables.
Its states and measurement elements are non-negatively represented by Wootters-Gibbons' Wigner functions \cite{Gibbons2004}, and the transformations are positivity preserving.
More precisely, as shown in \cite{Cormick2006}, there are two possible Wigner functions for one qubit stabilizer states that are non-negative and obey the desired properties stated in the previous section.
We denote the two Wigner functions with $W_{+}$ and $W_{-}.$ The corresponding phase-point operators are \[A_+(0,0)=\frac{1}{2}(\mathbb{I}+X+Y+Z),\] \[A_-(0,0)=\frac{1}{2}(\mathbb{I}+X+Y-Z),\] and $A_{j}(0,1)=XA_{j}(0,0)X^{\dagger},$ $A_{j}(1,0)=YA_{j}(0,0)Y^{\dagger},$ $A_{j}(1,1)=ZA_{j}(0,0)Z^{\dagger}$ for $j\in\{+,-\},$ where we have denoted with $X,Y,Z$ both the Pauli observables and the Pauli transformations.
The Wigner functions above are \textit{not} covariant under the group of Clifford unitaries. The reason is that the Hadamard gate (or, similarly, the phase gate) unavoidably maps a phase-point operator to a phase-point operator belonging to a different basis set, \textit{e.g.}, $HA_+(0,0)H=A_-(0,1),$ as a consequence of its action on the Pauli operators, $HXH=Z, HZH=X$ and $HYH=-Y.$
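These statements are easy to verify numerically. The following sketch is illustrative only: it uses the explicit operators $A_{\pm}(0,0)$ and the Pauli-conjugate labelling given above, and assumes a normalization factor $1/2$ in the Wigner function so that its values sum to one. It checks that the six single-qubit stabilizer states are non-negatively represented by both $W_+$ and $W_-$, and that the Hadamard gate indeed maps $A_+(0,0)$ to $A_-(0,1)$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def phase_points(sign):
    # A_j(0,0) as in the text; the other operators are its Pauli conjugates
    A00 = 0.5 * (I2 + X + Y + sign * Z)
    return {(0, 0): A00,
            (0, 1): X @ A00 @ X,
            (1, 0): Y @ A00 @ Y,
            (1, 1): Z @ A00 @ Z}

Aplus, Aminus = phase_points(+1), phase_points(-1)

def wigner(rho, A):
    # assumed factor 1/2 so that the four values sum to Tr rho = 1
    return np.array([np.trace(A[lam] @ rho).real / 2 for lam in sorted(A)])

# the six single-qubit stabilizer states (eigenstates of X, Y, Z)
stabilizer_states = [0.5 * (I2 + P) for P in (X, -X, Y, -Y, Z, -Z)]
for rho in stabilizer_states:
    assert (wigner(rho, Aplus) >= -1e-12).all()
    assert (wigner(rho, Aminus) >= -1e-12).all()

# the Hadamard gate maps the "+" phase-point operators into the "-" ones
assert np.allclose(H @ Aplus[(0, 0)] @ H, Aminus[(0, 1)])
\end{verbatim}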
A popular preparation and measurement noncontextual ontological model of the single qubit stabilizer theory is the 8-state model \cite{WallmanBartlett}. The quantum states (and measurement elements) are represented as uniform probability distributions over an ontic space of dimension $8$ (twice the dimension of the standard phase space) and the Clifford transformations are represented by permutations over the ontic space. Intuitively, it can be seen as a model that corresponds to taking into account both the Wigner functions $W_{+}$ and $W_{-},$ as shown in figure \ref{BlochWF}.\footnote{Notice that the 8-state model does \textit{not} correspond to a quasiprobability distribution as defined in the previous section.}
\begin{figure}
\caption{\textbf{Non-negative Wigner functions of one qubit stabilizer states.}}
\label{BlochWF}
\end{figure}
The 8-state model may be viewed as another confirmation of the classical nature of the theory. However, it was proven in \cite{Lillystone2018} that this model and, more generally, any preparation and measurement noncontextual model of the single qubit stabilizer theory, shows transformation contextuality. This is due again, as for the breaking of covariance, to the Hadamard gate (or, similarly, to the phase gate). More precisely, if we consider the operationally equivalent decompositions of the completely depolarizing channel, $\varepsilon_1(\rho)=\frac{1}{4}\sum_RR\rho R=\frac{\mathbb{I}}{2}$ and $\varepsilon_2(\rho)=\frac{1}{4}\sum_R(HR)\rho (HR)^{\dagger}=\frac{\mathbb{I}}{2},$ where $R\in\{\mathbb{I},X,Y,Z\}$ and $\rho$ is any qubit stabilizer state, they are ontologically distinct in any preparation and measurement noncontextual ontological model of the single qubit stabilizer theory \cite{Lillystone2018}.
The ontological distinctness can be pictured in the 8-state model, where $\varepsilon_1$ and $\varepsilon_2$ are represented as different permutations in the ontic space. The former only maps within a half of the ontic space, that can be seen as the phase space where $W_+$ (or $W_-$) is defined, while the latter maps between the two halves of the ontic space, so between the phase space of $W_+$ and the phase space of $W_-$, which is also a way of picturing the breaking of covariance. This last consideration suggests the presence of a tight connection between covariance and transformation noncontextuality.
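This picture can also be made concrete at the level of the phase-point operators. The following self-contained sketch (it repeats the construction of $A_{\pm}$ above for convenience, and is illustrative only) checks that every Kraus unitary of $\varepsilon_1$ permutes the set $\{A_+(\lambda)\}$ among itself, whereas every Kraus unitary of $\varepsilon_2$ maps each $A_+(\lambda)$ onto some $A_-(\lambda')$, i.e.\ from one half of the ontic space to the other.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def phase_points(sign):
    A00 = 0.5 * (I2 + X + Y + sign * Z)
    return [P @ A00 @ P for P in (I2, X, Y, Z)]

Aplus, Aminus = phase_points(+1), phase_points(-1)

def maps_onto(U, source, target):
    # does conjugation by U send every operator in `source` into `target`?
    return all(any(np.allclose(U @ A @ U.conj().T, B) for B in target)
               for A in source)

for R in (I2, X, Y, Z):
    assert maps_onto(R, Aplus, Aplus)       # epsilon_1: stays in the "+" half
    assert maps_onto(H @ R, Aplus, Aminus)  # epsilon_2: jumps to the "-" half
\end{verbatim}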
\section{Results}
\label{Results}
We now relate the properties of quantum transformations defined so far.
\newtheorem{Proposition}{Proposition}
\begin{Proposition} \label{CovarianceTransfNC} Let us consider a subtheory of quantum theory $(\mathcal{S},\mathcal{T},\mathcal{M})$, where the set $\mathcal{T}$ is made of transformations as defined in equation~\eqref{KrausDecomposition}.
If there exists a Wigner function by which the subtheory is covariantly represented, then the subtheory admits of a transformation noncontextual ontological model.
\end{Proposition}
\begin{proof}
Let us consider any transformation $\varepsilon\in\mathcal{T}$, with Kraus decomposition
$\varepsilon(\rho)=\sum_kE_k\rho E_k^{\dagger}$ for every $\rho\in\mathcal{S},$ where $E_k = \sqrt{p_k} U_k$, $U_k$ are unitaries in $\mathcal{T}$ and $p_k\in[0,1]$ are such that $\sum_k p_k =1$.
Given covariance (equation~\eqref{covarianceA}), the orthonormality of the phase-point operators, and the linearity of the trace, then $W_{\varepsilon}(\lambda',\lambda)=\textrm{Tr}[\varepsilon(A(\lambda))A(\lambda')]=\textrm{Tr}[\sum_kp_kA(S_k\lambda+a_k)A(\lambda')]=\frac{1}{N_{\Lambda}}\sum_kp_k\,\delta_{S_k\lambda+a_k,\lambda'}\ge0.$ A non-negative Wigner function for the transformations $\varepsilon\in\mathcal{T}$ implies a transformation noncontextual ontological model for the subtheory, as we have shown at the end of subsection \ref{SecTransformationNoncontextuality}.
\end{proof}
In short, proposition \ref{CovarianceTransfNC} states that, given our definition of subtheories, covariance implies transformation noncontextuality. However, the converse implication may not hold in general. In particular, it could be possible to have a transformation noncontextual ontological model, even if the ontological model corresponding to the Wigner representation is transformation contextual.\footnote{We leave the search for such a counterexample -- i.e. of a subtheory that admits of a transformation noncontextual ontological model but does not admit of a covariant Wigner function -- as future research.} For this reason, the more sensible question to ask is: restricting to the cases where the ontological model corresponds to a Wigner representation, is it true that transformation noncontextuality implies covariance?
We conjecture this to be the case. We support this conjecture by showing that the existence of a transformation noncontextual ontological model corresponding to a Wigner representation of the subtheory $(\mathcal{S},\mathcal{T},\mathcal{M})$ implies that phase point operators are mapped between themselves by the unitary transformations in $\mathcal{T}$. Notice that this does not constitute a proof of the conjecture because having phase point operators being mapped between themselves by the unitary transformations of the theory does not say anything about such maps being symplectic, and hence covariance is not implied.
\newtheorem{Conjecture}{Conjecture}
\begin{Conjecture} \label{TransfNCCovariance} Let us consider a subtheory of quantum theory $(\mathcal{S},\mathcal{T},\mathcal{M})$, where the set $\mathcal{T}$ is made of transformations as defined in equation~\eqref{KrausDecomposition}.
If there exists a Wigner representation that provides a transformation noncontextual ontological model for the subtheory then the subtheory is covariantly represented by such Wigner function.
\end{Conjecture}
\begin{proof}[Proof of the argument in support of the conjecture]
We here show that the existence of a transformation noncontextual ontological model corresponding to a Wigner representation of the subtheory $(\mathcal{S},\mathcal{T},\mathcal{M})$ implies that phase point operators are mapped between themselves by the unitary transformations in $\mathcal{T}$.
Let us consider a Wigner representation that provides a noncontextual ontological model of the subtheory. This means that the Wigner functions $ W_\varepsilon (\lambda|\lambda') = \textrm{Tr}[\varepsilon(A(\lambda))A(\lambda')]$ are non-negative for every $\varepsilon\in\mathcal{T}$, where $\varepsilon(\rho)=\sum_kE_k\rho E_k^{\dagger}$ for every $\rho\in\mathcal{S},$ $E_k = \sqrt{p_k} U_k$, $U_k$ are unitaries in $\mathcal{T}$ and $p_k\in[0,1]$ are such that $\sum_k p_k =1$. In particular, this means that also the special case where $\varepsilon$ corresponds to one of the unitaries $U\in\mathcal{T}$ is represented non-negatively, \textit{i.e.}, $W_{U} (\lambda|\lambda')=\textrm{Tr}[UA(\lambda)U^{\dagger}A(\lambda')]\ge0.$
Let us now define $B\equiv U A(\lambda)U^{\dagger}$ and, by the fact that the phase-point operators $A(\lambda)$ form a basis for the Hermitian operators in the Hilbert space, let us write it as $B=\sum_{\lambda'}W_{U}(\lambda|\lambda')A(\lambda').$ Next, notice from $\textrm{Tr}[A(\lambda)]=1$ that $\textrm{Tr}[B]=\sum_{\lambda'}W_{U}(\lambda | \lambda')=1.$ Moreover, from the orthonormality of the phase-point operators, $\textrm{Tr}[B^2]= \sum_{\lambda'}W^2_{U}(\lambda | \lambda')=1.$
Therefore, given that $W_{U}(\lambda|\lambda')\ge0,\sum_{\lambda'}W_{U}(\lambda | \lambda')=1,$ and $\sum_{\lambda'}W^{2}_{U}(\lambda | \lambda')=1,$ we conclude that $W_{U}(\lambda|\lambda')=0$ $\forall \lambda'$ apart from one $\tilde{\lambda'},$ for which $W_{U}(\lambda|\tilde{\lambda'})=1.$ This means that $B$ coincides with one of the phase-point operators, \textit{i.e.} $U$ maps a phase point operator into another phase point operator. This is true for every $U\in\mathcal{T}$.
\end{proof}
Let us now state the relationships of covariance and transformation noncontextuality with the existence of a positivity preserving quasiprobability distribution for the subtheory.
\newtheorem{TheoremCovariance}[Proposition]{Proposition}
\begin{TheoremCovariance}\label{TheoremCovariance} Let us consider a subtheory of quantum theory $(\mathcal{S},\mathcal{T},\mathcal{M})$, where the set $\mathcal{T}$ is made of transformations as defined in equation~\eqref{KrausDecomposition}. If there exists a Wigner function by which the subtheory is covariantly represented, then the subtheory allows for a positivity preserving quasiprobability representation. The converse implication does \emph{not} hold, \textit{i.e.}, if the subtheory allows for a positivity preserving quasiprobability representation, then it is not always the case that there exists a Wigner function by which the subtheory is covariantly represented.
\end{TheoremCovariance}
\begin{proof}
Let us consider the states $\rho\in\mathcal{S}$ with non-negative Wigner function $W_{\rho}(\lambda)$ and any covariantly represented transformation $\varepsilon$ which maps $\rho$ to $\rho'=\varepsilon(\rho)$ such that $W_{\rho'}(\lambda)=\sum_k W_{\rho}(S_k\lambda+a_k).$ Since all $W_{\rho}(S_k\lambda+a_k)$ are non-negative because $W_{\rho}(\lambda)\ge0$ $\forall \lambda\in\Lambda$, then also $W_{\rho'}(\lambda)$ is non-negative.
The converse implication does not hold, as proven by the counterexample of the single qubit stabilizer theory, where the Hadamard gate maps non-negative states to non-negative states (Wootters-Gibbons' Wigner function \cite{Gibbons2004,Cormick2006}), but it is not covariantly represented.
\end{proof}
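The counterexample invoked above can also be checked numerically. The sketch below is only illustrative and assumes the standard single-qubit phase-point operators $A(a,b)=\frac{1}{2}[\mathbb{I}+(-1)^{a}Z+(-1)^{b}X+(-1)^{a+b}Y]$ together with the normalization $W_{\rho}(a,b)=\frac{1}{2}\textrm{Tr}[\rho A(a,b)]$ (conventions chosen by us, not quoted from \cite{Gibbons2004,Cormick2006}); it verifies that the Hadamard gate maps each of the six non-negatively represented single-qubit stabilizer states to a non-negatively represented state.
\begin{verbatim}
import itertools
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Assumed phase-point operators A(a,b); Wigner function W_rho = Tr[rho A]/2
A = {(a, b): 0.5 * (I + (-1) ** a * Z + (-1) ** b * X + (-1) ** (a + b) * Y)
     for a, b in itertools.product((0, 1), repeat=2)}

def wigner(rho):
    return np.array([[np.real(np.trace(rho @ A[(a, b)])) / 2
                      for b in (0, 1)] for a in (0, 1)])

stabilizer_states = [(I + s * P) / 2 for P in (X, Y, Z) for s in (+1, -1)]
for rho in stabilizer_states:
    assert wigner(rho).min() >= -1e-12                     # non-negative input
    assert wigner(H @ rho @ H.conj().T).min() >= -1e-12    # non-negative output
\end{verbatim}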
\newtheorem{TheoremTransfNC}[Proposition]{Proposition}
\begin{TheoremTransfNC}\label{TheoremTransfNC} Let us consider a subtheory of quantum theory $(\mathcal{S},\mathcal{T},\mathcal{M}).$ If the subtheory allows for a transformation noncontextual ontological model, then it also allows for a positivity preserving quasiprobability representation. The converse implication does \emph{not} hold, \textit{i.e.}, if the subtheory allows for a positivity preserving quasiprobability representation, then it is not always the case that the subtheory admits of a transformation noncontextual ontological model.
\end{TheoremTransfNC}
\begin{proof}
From Spekkens' result \cite{Spekkens2008} extended to transformations, the existence of a transformation noncontextual ontological model for the subtheory implies the existence of a quasiprobability representation that non-negatively represents any $\varepsilon\in\mathcal{T},$ $\Gamma_{\varepsilon}(\lambda,\lambda')\ge0\;\; \forall \lambda,\lambda'\in\Lambda.$ Given a state $\rho\in\mathcal{S}$ which is represented by the non-negative quasiprobability distribution $\mu_{\rho}(\lambda),$ the state $\rho'=\varepsilon(\rho)$ is also non-negatively represented, as $\mu_{\rho'}(\lambda)=\sum_{\lambda'}\mu_{\rho}(\lambda')\Gamma_{\varepsilon}(\lambda,\lambda').$ This proves the first part of the proposition.
The converse implication does not hold. In the example of the single qubit stabilizer theory all the transformations map non-negative states to non-negative states (Wootters-Gibbons' Wigner function \cite{Gibbons2004,Cormick2006}), but the theory, with positivity preserving transformations and non-negative states and measurement elements, does not allow for a transformation noncontextual model.
The core reason why transformation contextuality (equivalently, the unavoidable presence of some negativity in $\Gamma_{\varepsilon}$) is a weaker notion of nonclassicality than the breaking of positivity preservation is that $\Gamma_{\varepsilon}$, despite taking some negative values, can still preserve positivity between the quasiprobability representations of the states.
\end{proof}
The relations found are depicted in figure \ref{Relations}.
\onecolumngrid
\begin{figure}
\caption{\textbf{Relationship between three notions of classicality associated to quantum transformations.}}
\label{Relations}
\end{figure}
\twocolumngrid
\section{Discussion}
\label{Discussion}
The results obtained concern the relationships between covariance (a property of Wigner functions), transformation noncontextuality (a property of ontological models), and positivity preservation (a property of quasiprobability representations). It would be interesting to extend the notion of covariance to more general quasiprobability representations. However, we argue that a relation between this extended covariance and transformation noncontextuality is not expected to hold. Such covariance would hold in the CSS rebit subtheory studied in \cite{Delfosse2015}, while transformation noncontextuality is violated\footnote{The two decompositions, $\rho \rightarrow 1/2(\rho + Y\rho Y)$ and $\rho \rightarrow 1/2(X\rho X + Z\rho Z),$ correspond to the same channel in the CSS subtheory of \cite{Delfosse2015}, even if any ontological model of the theory represents them as distinct (the proof mirrors the one contained in \cite{Lillystone2018} for the single qubit stabilizer theory). The author thanks Piers Lillystone for making him aware of this fact.}.
Still, the latter is not expected to imply covariance, considering that the orthonormality property of the phase point operators -- crucial for proving Proposition \ref{CovarianceTransfNC} -- does not hold for generic quasiprobability representations.
One extra way to generalize covariance would be to consider subtheories with arbitrary CPTP maps, where any CPTP map $\varepsilon$ is assumed to be given by a unitary $U$ acting on a Hilbert space that describes both the system and the environment, $\varepsilon(\rho)=\sum_k E_k\rho E_k^{\dagger}=\textrm{Tr}_E[U\rho_{SE}U^{\dagger}],$ where $\rho_{SE}$ is the state of the system and environment, while $\rho$ is the state of the system only. Covariance (see equation~\ref{covarianceA}) is then defined as $UA_{SE}(\lambda)U^{\dagger}=A_{SE}(S_{SE}\lambda+a_{SE}),$ where $S_{SE}$ and $a_{SE}$ are a symplectic matrix and a vector acting on the phase space associated to the system and environment, for all the possible unitaries that define $\varepsilon.$
We leave the study of this notion for future research.
The main open question about this work regards Conjecture \ref{TransfNCCovariance}. A possible strategy to prove it would be to first leverage the ``proof of the argument in support of the conjecture'', showing that the existence of a transformation noncontextual ontological model corresponding to a Wigner representation of the subtheory $(\mathcal{S},\mathcal{T},\mathcal{M})$ implies that phase point operators are mapped between themselves by the unitary transformations in $\mathcal{T}$. Then, one may argue this to imply that the unitaries in $\mathcal{T}$ must be Clifford unitaries, as a non-Clifford transformation could not map phase point operators (which are sums of Pauli operators) among themselves. Finally, the difficult part would be to prove, for these unitaries, a result akin to Lemma 9 (and then 11) in \cite{Delfosse2015}.
Notice that, if Conjecture \ref{TransfNCCovariance} holds true, then we can restate the results of Proposition \ref{CovarianceTransfNC} and Conjecture \ref{TransfNCCovariance} by saying that the set of transformations that are positively represented by a given Wigner function contains all and only those transformations that are represented covariantly by the Wigner function.
Another open question is whether any quasiprobability representation on an overcomplete frame \cite{FerrieEmerson2009} (like the one in \cite{Delfosse2015} and unlike the Wigner representation) always implies the corresponding model to be transformation contextual. In an overcomplete frame representation the phase-point operators are, by definition, more than the ones needed to form a basis. This is expected to imply multiple distinct representations of each transformation.\footnote{By the time this article first appeared, this result had been proven in \cite{SchmidPusey2020} under minor extra assumptions.}
Showing that only complete frame representations provide noncontextual models would be a first step to ultimately try to prove that any noncontextual ontological model has to correspond to a Wigner representation.
Our results show that breaking positivity preservation is a stronger notion of nonclassicality than transformation contextuality and non-covariance. With respect to positivity preservation, the single qubit stabilizer theory is classical, even if it shows transformation contextuality and breaks covariance. This fact motivates the study of the physical justifications for considering positivity preservation a legitimate classical feature. It would also be interesting to explore other subtheories in which positivity preservation, unlike other notions of classicality, coincides with classical computational simulability. With this work we aim to promote further research on the reasons why the contextuality present in qubit stabilizer theory has, computationally, a classical nature.\footnote{Meaningful results in this direction have been found in \cite{Karanjai2018}, where it is shown that contextuality lower bounds the number of classical bits of memory required to simulate subtheories of quantum theory. In the case of multi-qubit stabilizer theory, it demands a quadratic scaling in the number of qubits, which coincides with the scaling of the Gottesman-Knill algorithm \cite{Gottesman99}. If one considers the noncontextual stabilizer theory of odd dimensional qudits the scaling becomes linear.}
\end{document} |
\begin{document}
\title{\bf Frames of translates with prescribed fine structure \\ in shift invariant spaces}
\author{Mar\'\i a J. Benac $^{*}$, Pedro G. Massey $^{*}$, and Demetrio Stojanoff
\footnote{Partially supported by CONICET
(PIP 0435/10) and Universidad Nacional de La Plata (UNLP 11X681) } \
\footnote{ e-mail addresses: [email protected] , [email protected] , [email protected]}
\\
{\small Depto. de Matem\'atica, FCE-UNLP
and IAM-CONICET, Argentina }}
\date{}
\maketitle
\begin{abstract}
For a given finitely generated shift invariant (FSI) subspace $\mathcal{W}\subset L^2(\mathbb{R}^k)$ we obtain a simple criterion for the existence of shift generated (SG) Bessel sequences $E(\mathcal F)$ induced by finite sequences of vectors $\mathcal F\in \mathcal{W}^n$ that have a prescribed fine structure, i.e., such that the norms of the vectors in $\mathcal F$ and the spectra of $S_{E(\mathcal F)}$ are prescribed in each fiber of $\text{Spec}(\mathcal{W})\subset \mathbb{T}^k$. We complement this result by developing an analogue of the so-called sequences of eigensteps from finite frame theory in the context of SG Bessel sequences, which allows for a detailed description of all sequences with prescribed fine structure. Then, given $0<\alpha_1\leq \ldots\leq \alpha_n$ we characterize the
finite sequences $\mathcal F\in\mathcal{W}^n$ such that $\|f_i\|^2=\alpha_i$, for $1\leq i\leq n$, and such that the fine spectral structure of the shift generated Bessel sequences $E(\mathcal F)$ has minimal spread (i.e. we show the existence of optimal SG Bessel sequences with prescribed norms); in this context the spread of the spectra is measured in terms of the convex potential $P^{\mathcal{W}}_\varphi$ induced by $\mathcal{W}$ and an arbitrary convex function $\varphi:\mathbb{R}_+\rightarrow \mathbb{R}_+$.
\end{abstract}
\noindent AMS subject classification: 42C15.
\noindent Keywords: frames of translates, shift invariant subspaces, Schur-Horn theorem, frame design problems,
convex potentials.
\tableofcontents
\section{Introduction}
Let $\mathcal{W}$ be a closed subspace of a separable complex Hilbert space $\mathcal{H}$ and let $\mathbb{I}$ be a finite or countable infinite set. A sequence $\mathcal F=\{f_i\}_{i\in \mathbb{I}}$ in $\mathcal{W}$ is a frame for $\mathcal{W}$ if there exist positive constants $0<a\leq b$ such that
$$
a \, \|f\|^2\leq \sum_{i\in \mathbb{I}}|\langle f,f_i\rangle |^2\leq b\,\|f\|^2
\peso{for every} f\in \mathcal{W}\, .
$$
If we can choose $a=b$ then we say that $\mathcal F$ is a tight frame for $\mathcal{W}$.
A frame $\mathcal F$ for $\mathcal{W}$ allows for linear (typically redundant) and stable encoding-decoding schemes of vectors (signals) in $\mathcal{W}$. Indeed, if
$\mathcal{V}$ is a closed subspace of $\mathcal{H}$ such that $\mathcal{V}\oplus\mathcal{W}^\perp=\mathcal{H}$ (e.g. $\mathcal{V}=\mathcal{W}$) then it is possible to find frames $\mathcal G=\{g_i\}_{i\in \mathbb{I}}$ for $\mathcal{V}$ such that
\begin{equation}\label{eq: intro duals}
f=\sum_{i\in \mathbb{I}}\langle f,g_i\rangle \ f_i \ , \quad \text{ for } f\in\mathcal{W}\,.
\end{equation}
The representation above lies within the theory of oblique duality (see \cite{YEldar3,CE06,YEldar1,YEldar2}).
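\noindent Although the present paper deals with the infinite dimensional shift-invariant setting, the identity in Eq. \eqref{eq: intro duals} is already illustrative for finite frames; the following hedged sketch (a random frame for $\mathbb{C}^3$ and its canonical dual, an example of our own and not taken from the references) verifies the reconstruction numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5
F = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))  # frame for C^3
S = F @ F.conj().T                       # frame operator S_F = T_F T_F^*
G = np.linalg.inv(S) @ F                 # canonical dual frame, g_i = S_F^{-1} f_i
f = rng.standard_normal(d) + 1j * rng.standard_normal(d)
reconstruction = sum(np.vdot(G[:, i], f) * F[:, i] for i in range(n))
assert np.allclose(reconstruction, f)    # f = sum_i <f, g_i> f_i
\end{verbatim}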
In applied situations, it is usually desired to develop encoding-decoding schemes as above,
with some additional features related with stability of the scheme.
In some cases, we search for schemes such that the sequence of norms $\{\|f_i\|^2\}_{i\in \mathbb{I}}$ as well as the spectral properties of the family $\mathcal F$ are given in advance, leading to what is known in the literature as the frame design problem (see \cite{AMRS,BF,CFMPS,CasLeo,MR08,Pot} and the papers \cite{FMP,MRS13,MRS14,MRS13b} for the more general frame completion problems with prescribed norms). It is well known that both the spread of the sequences of norms as well as the spread of the spectra of the frame $\mathcal F$ are linked with numerical properties of $\mathcal F$.
Once we have constructed a frame $\mathcal F$ for $\mathcal{W}$ with the desired properties, we turn our attention to the construction of frames $\mathcal G$
for $\mathcal{V}$ satisfying Eq.~\eqref{eq: intro duals} and having some prescribed features related with their numerical stability
(see \cite{BMS14,BMS15,CE06,MRS13,MRS13b}).
\noi
It is well known that the frame design problem has an equivalent formulation in terms of the relation between the main diagonal of a positive semi-definite
operator and its spectra; in the finite dimensional setting this relation is characterized in the Schur-Horn theorem from matrix analysis. There have been important recent advances in both the frame design problems as well as the Schur-Horn theorems in infinite dimensions, mainly due to the interactions of these problems
(see \cite{AMRS,BoJa,BoJa2,BoJa3,Jas,KaWe}). There are also complete parametrizations of all finite frames with prescribed norms and eigenvalues (of their frame operators) in terms of the so-called eigensteps sequences \cite{CFMPS}.
On the other hand, the spectral structure of oblique duals (that include classical duals) of a fixed frame can be described in terms of the relations between the spectra of a positive semi-definite operator and the spectra of its compressions to subspaces. In the finite dimensional context (see \cite{BMS14,MRS13}) these relations are known as the Fan-Pall inequalities (that include the so-called interlacing inequalities as a particular case).
Yet, in general, the corresponding results in frame theory do not take into consideration any additional structure of the frame. For example, regarding the frame design problem, it seems natural to wonder whether we can construct a structured frame (e.g., wavelet, Gabor or a shift generated frame) with prescribed structure; similarly, in case we fix a structured frame $\mathcal F$ for ${\mathbf{a}l W}$ it seems natural to wonder whether we can construct structured oblique dual frames with further prescribed properties.
\noi
In \cite{BMS15}, as a first step towards a detailed study of the spectral properties of structured oblique duals of shift generated systems induced by finite families of vectors in $L^2(\mathbb{R}^k)$, we extended the Fan-Pall theory to the context of measurable fields of positive semi-definite matrices
and their compressions by measurable selections of subspaces; this allowed us to give an explicit description of what we called {\it fine spectral structure} of the shift generated duals of a fixed shift generated (SG) frame for a finitely generated shift invariant (FSI) subspace $\mathcal{W}$ of $L^2(\mathbb{R}^k)$. Given a convex function $\varphi:\mathbb{R}_+\rightarrow \mathbb{R}_+$ we also introduced
the convex potential associated to the pair $(\varphi,\mathcal{W})$, that is a functional on SG Bessel sequences that measures the spread of the fine spectral structure of the sequence; there we showed that these convex potentials detect tight frames as their minimizers (under some normalization conditions).
Yet, our analysis was based on the fine spectral structure of a given SG Bessel sequence in a FSI subspace
$\mathcal{W}\subset L^2(\mathbb{R}^k)$.
\noi
In this paper, building on an extension of the Schur-Horn theorem for measurable fields of positive semi-definite matrices,
we characterize the possible {\it fine structures} of SG Bessel sequences in FSI subspaces (see Section \ref{SI cosas} for preliminaries on SG Bessel sequences, Remark \ref{rem sobre estruc fina} and Theorem \ref{teo sobre disenio de marcos});
thus, we solve a frame design problem, where the prescribed features of the SG Bessel sequences are described in terms of some internal (or fine) structure, relative to a finitely generated shift invariant subspace $\mathcal{W}$. We also show that the Fan-Pall theory for fields of positive semi-definite matrices can be used to obtain a detailed description of SG Bessel sequences with prescribed fine structure, similar to that obtained in terms of the eigensteps in \cite{CFMPS}. In turn, we use these results to show that
given a FSI subspace $\mathcal{W}$, a convex function $\varphi:\mathbb{R}_+\rightarrow \mathbb{R}_+$ and a finite sequence of positive numbers $\alpha_1\geq \ldots\geq \alpha_n>0$, there exist vectors $f_i\in\mathcal{W}$ such that $\|f_i\|^2=\alpha_i$, for $1\leq i\leq n$, and such that the SG Bessel sequence induced by these vectors minimizes the convex potential associated to the pair $(\varphi,\mathcal{W})$ among all such SG Bessel sequences (for other optimal design problems in shift invariant spaces see \cite{AlCHM1,AlCHM2}). The existence of these $(\varphi,\mathcal{W})$-optimal shift generated frame designs with prescribed norms is not derived using a direct ``continuity + compactness'' argument. Rather, their existence follows from the discrete nature of their spectral structure; we make use of this fact to reduce the problem of describing the structure of optimal designs to an optimization problem in a finite dimensional setting. As a tool, we consider the waterfilling construction in terms of majorization in general probability spaces. It is worth pointing out that there has been interest in the structure of finite sequences of vectors that minimize convex potentials in the finite dimensional context (see \cite{CKFT,FMP,MR08,MR10}), originating from the seminal paper \cite{BF}; our present situation is more involved and, although we reduce the problem to a finite dimensional setting, this reduction is related neither to the techniques nor to the results of the previous works on finite families of vectors.
\noi
The paper is organized as follows. In Section \ref{sec prelim}, after fixing the general notations used in the paper, we present some preliminary material on frames, shift invariant subspaces and shift generated Bessel sequences; we end this section with the general notion of majorization in probability spaces.
In Section \ref{subsec exac charac} we obtain an exact characterization of the existence of shift generated Bessel sequences with prescribed fine structure in terms of majorization relations; this result is based on a version of the Schur-Horn theorem for measurable fields of positive semi-definite matrices (defined on measure spaces) that is developed in the appendix (see Section \ref{Appendixity}). In Section \ref{subsec eigensteps}, building on the Fan-Pall inequalities from \cite{BMS15}, we obtain a detailed description of all shift generated Bessel sequences with prescribed fine structure that generalizes the so-called eigensteps construction in the finite dimensional setting. In Section
\ref{sec opti frames with prescribed norms} we show that for a fixed sequence of positive numbers $\alpha_1\geq \ldots\geq \alpha_n>0$, a
convex function $\varphi:\mathbb{R}_+\rightarrow \mathbb{R}_+$ and a FSI subspace $\mathcal{W}\subset L^2(\mathbb{R}^k)$ there exist vectors $f_i\in\mathcal{W}$ such that $\|f_i\|^2=\alpha_i$, for $1\leq i\leq n$, and such that $\mathcal F$ minimizes the convex potential associated to the pair $(\varphi,\mathcal{W})$ among all such finite sequences; in order to do this, we first consider in Section \ref{subse uniform} the uniform case in which the dimensions of the fibers of $\mathcal{W}$ are constant on the spectrum of $\mathcal{W}$.
The general case of the optimal design problem with prescribed norms in a FSI is studied in Section \ref{subsec gral mi gral}; our approach is based on a reduction of the problem to an optimization procedure in the finite dimensional setting. The paper ends with an Appendix, in which we consider a measurable version of the Schur-Horn theorem needed in Section \ref{subsec exac charac} as well as some technical aspects of an optimization problem needed in Section \ref{subsec gral mi gral}.
\section{Preliminaries} \label{sec prelim}
In this section we recall some basic facts related with frames for subspaces and shift generated frames for shift invariant (SI) subspaces of $L^2(\mathbb{R}^k)$.
At the end of this section we describe majorization between functions in arbitrary probability spaces.
\noi
{\bf General Notations}
\noi
Throughout this work we shall use the following notation:
the space of complex $d\times d$ matrices is denoted by $\mathcal{M}_d(\mathbb{C})$, the real subspace of self-adjoint matrices is denoted $\mathcal{M}_d(\mathbb{C})^{sa}$ and $\mathcal{M}_d(\mathbb{C})^+$ denotes the set of positive semi-definite matrices; $\mathcal{G}l_d(\mathbb{C})$ is the group of invertible elements of $\mathcal{M}_d(\mathbb{C})$, $\mathcal{U}(d)$ is the subgroup of unitary matrices and $\mathcal{G}l_d(\mathbb{C})^+ = \mathcal{M}_d(\mathbb{C})^+ \cap \mathcal{G}l_d(\mathbb{C})$. If $T\in \mathcal{M}_d(\mathbb{C})$, we denote by
$\|T\|$ its spectral norm,
by $\text{\rm rk}\, T= \dim R(T) $ the rank of $T$,
and by $\mathrm{tr}\, T$ the trace of $T$.
\noi
Given $d \in \mathbb{N}$ we denote by $\mathbb{I}_d = \{1, \dots , d\} \subseteq \mathbb{N}$ and we set $\mathbb{I}_0=\emptyset$.
For a vector $x \in \mathbb{R}^d$ we denote
by $x^\downarrow\in \mathbb{R}^d$
the rearrangement
of $x$ in non-increasing order. We denote by
$(\mathbb{R}^d)^\downarrow = \{ x\in \mathbb{R}^d : x = x^\downarrow\}$ the set of downwards ordered vectors.
Given $S\in \mathcal{M}_d(\mathbb{C})^{sa}$, we write $\lambda(S) = \lambda^\downarrow(S)= (\lambda_1(S) \, , \, \dots \, , \, \lambda_d(S)\,) \in
(\mathbb{R}^d)^\downarrow$ for the
vector of eigenvalues of $S$ - counting multiplicities - arranged in decreasing order.
\noi
If $W\subseteq \mathbb{C}^d$ is a subspace we denote by $P_W \in \mathcal{M}_d(\mathbb{C})^+$ the orthogonal
projection onto $W$.
Given $x\, , \, y \in \mathbb{C}^d$ we denote by $x\otimes y \in \mathcal{M}_d(\mathbb{C})$ the rank one
matrix given by
\begin{equation} \label{tensores}
x\otimes y \, (z) = \langle z\, , \, y\rangle \, x \peso{for every} z\in \mathbb{C}^d \ .
\end{equation}
Note that, if $x\neq 0$, then
the projection $P_x \ \stackrel{\mbox{\tiny{def}}}{=}\ P_{{\rm span}\{x\}}= \|x\|^{-2} \, x\otimes x \,$.
\subsection{Frames for subspaces}\label{sec defi frames subespacios}
In what follows $\mathcal{H}$ denotes a separable complex Hilbert space and $\mathbb{I}$ denotes a finite or countable infinite set.
Let $\mathcal{W}$ be a closed subspace of $\mathcal{H}$: recall that a sequence $\mathcal F=\{f_i\}_{i\in \mathbb{I}}$ in $\mathcal{W}$ is a {\it frame} for $\mathcal{W}$
if there exist positive constants $0<a\leq b$ such that
\begin{equation}\label{defi frame}
a \, \|f\|^2\leq \sum_{i\in \mathbb{I}}|\langle f,f_i\rangle |^2\leq b\,\|f\|^2
\peso{for every} f\in \mathcal{W}\, .
\end{equation}
In general, if $\mathcal F$ satisfies the inequality to the right in Eq. \eqref{defi frame}
we say that $\mathcal F$ is a $b$-Bessel sequence for $\mathcal{W}$. Moreover, we shall say that a sequence
$\mathcal G=\{g_i\}_{i\in\mathbb{I}}$ in $\mathcal{H}$ is a Bessel sequence - without explicit reference to a closed subspace - whenever
$\mathcal G$ is a Bessel sequence for its closed linear span; notice that this is equivalent to the fact that $\mathcal G$ is a Bessel sequence for $\mathcal{H}$.
\noi
Given a Bessel sequence $\mathcal F=\{f_i\}_{i\in \mathbb{I}}$ we consider its {\it synthesis operator} $T_\mathcal F\in L(\ell^2(\mathbb{I}),\mathcal{H})$ given by $T_\mathcal F((a_i)_{i\in \mathbb{I}})=\sum_{i\in \mathbb{I}} a_i\ f_i$ which, by hypothesis on $\mathcal F$, is a bounded linear transformation. We also consider $T_\mathcal F^*\in L(\mathcal{H},\ell^2(\mathbb{I}))$ called the {\it analysis operator} of $\mathcal F$, given by $T_\mathcal F^*(f)=(\langle f,f_i\rangle )_{i\in \mathbb{I}}$ and the {\it frame operator} of $\mathcal F$ defined by $S_\mathcal F=T_\mathcal F\,T_\mathcal F^*$. It is straightforward to check that
$$
\langle S_\mathcal F f,f \rangle =\sum_{i\in \mathbb{I}}|\langle f,f_i\rangle |^2 \peso{for every}
f\in \mathcal{H}\ .
$$
Hence, $S_\mathcal F$ is a positive semi-definite bounded operator; moreover, a Bessel sequence $\mathcal F$ in $\mathcal{W}$
is a frame for $\mathcal{W}$ if and only if $S_\mathcal F$ is an invertible operator when restricted to $\mathcal{W}$ or, equivalently, if the range of $T_\mathcal F$ coincides with $\mathcal{W}$.
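\noindent In finite dimensions the objects just introduced are easy to compute explicitly; the following sketch (a hypothetical random frame for $\mathbb{R}^4$, our own illustration) builds the frame operator $S_{\mathcal F}=T_{\mathcal F}T_{\mathcal F}^*$ and checks that its extreme eigenvalues act as frame bounds.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 7
F = rng.standard_normal((d, n))          # columns f_1, ..., f_n form a frame of R^4
S = F @ F.T                              # frame operator S_F = T_F T_F^*
a, b = np.linalg.eigvalsh(S)[[0, -1]]    # smallest and largest eigenvalue
for _ in range(100):
    f = rng.standard_normal(d)
    total = np.sum((F.T @ f) ** 2)       # sum_i |<f, f_i>|^2 = <S_F f, f>
    assert a * (f @ f) - 1e-9 <= total <= b * (f @ f) + 1e-9
\end{verbatim}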
\noi
If $\mathcal{V}$ is a closed subspace of $\mathcal{H}$ such that $\mathcal{V}\oplus\mathcal{W}^\perp=\mathcal{H}$ (e.g. $\mathcal{V}=\mathcal{W}$) then it is possible to find frames $\mathcal G=\{g_i\}_{i\in \mathbb{I}}$ for $\mathcal{V}$ such that
$$f=\sum_{i\in \mathbb{I}}\langle f,g_i\rangle \ f_i \ , \quad \text{ for } f\in\mathcal{W}\,. $$
The representation above lies within the theory of oblique duality (see \cite{YEldar3,CE06,YEldar1,YEldar2}).
In this note we shall not be concerned with oblique duals; nevertheless, notice that the numerical stability of the encoding-decoding scheme above depends on the numerical stability corresponding to both $\mathcal F$ and $\mathcal G$ as above. One way to measure stability of the encoding or decoding algorithms is to measure the spread of the spectra of the frame operators corresponding to $\mathcal F$ and $\mathcal G$. Therefore both the task of constructing optimally stable $\mathcal F$ and that of obtaining
optimally stable duals $\mathcal G$ of $\mathcal F$ are of fundamental interest in frame theory.
\subsection{SI subspaces, frames of translates and their convex potentials}\label{SI cosas}
In what follows we consider $L^2(\mathbb{R}^k)$ (with respect to Lebesgue measure) as a separable and complex Hilbert space.
Recall that a closed subspace $\mathcal{V}\subseteq L^2(\mathbb{R}^k)$ is {\it shift-invariant} (SI) if $f\in \mathcal{V}$
implies $T_\ell f \in \mathcal{V}$ for any $\ell\in \mathbb{Z}^k$, where $T_yf(x)=f(x-y)$ is
the translation by $y \in \mathbb{R}^k$.
For example, take a subset $\mathcal{A} \subset
L^2(\mathbb{R}^k)$ and set
$$
\mathcal{S}(\mathcal{A})= \overline{\text{span}}\,\{T_\ell f:\ f\in\mathcal{A}\, , \ \ell\in\mathbb{Z}^k\} \,.
$$
Then, $\mathcal{S}(\mathcal{A})$ is a shift-invariant subspace called the {\it SI subspace generated by $\mathcal{A}$}; indeed, $\mathcal{S}(\mathcal{A})$ is the smallest SI subspace that contains $\mathcal{A}$. We say that a SI subspace $\mathcal{V}$ is {\it finitely generated} (FSI) if there exists a finite set $\mathcal{A}\subset L^2(\mathbb{R}^k)$ such that $\mathcal{V}=\mathcal{S}(\mathcal{A})$. We further say that $\mathcal{W}$ is a principal SI subspace if there exists $f\in L^2(\mathbb{R}^k)$ such that $\mathcal{W}=\mathcal{S}(f)$.
\noi In order to describe the fine structure of a SI subspace we consider the following representation of $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ (see \cite{BDR,Bo,RS95} and \cite{CabPat} for extensions of these notions to the more general context of actions of locally compact abelian groups). Let $\mathbb{T}=[-1/2,1/2)$ endowed with the Lebesgue measure and let $L^2(\mathbb{T}^k, \ell^2(\mathbb{Z}^k))$ be the Hilbert space of square integrable $\ell^2(\mathbb{Z}^k)$-valued functions that consists of all vector valued measurable functions $\phi: \mathbb{T}^k {\mathbf t}o \ell^2(\mathbb{Z}^k)$ with the norm
$$\| \phi\|^2= \int_{\mathbb{T}^k} \| \phi(x)\|_{\ell^2(\mathbb{Z}^k)}^{2} \ dx< \infty.$$
Then, $\mathcal{G}amma: L^2(\mathbb{R}}\def\C{\mathbb{C}^k){\mathbf t}o L^2(\mathbb{T}^k, \ell^2(\mathbb{Z}^k))$ defined for $f\in L^1(\mathbb{R}}\def\C{\mathbb{C}^k)\mathbf{a}p L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ by
\begin{equation}\mathbf{l}abel{def: iso}
\mathcal{G}amma f: \mathbb{T}^k {\mathbf t}o \ell^2(\mathbb{Z}^k)\ ,\quad\mathcal{G}amma f(x)= (\hat{f}(x+\ell))_{\ell\in \mathbb{Z}^k},
\end{equation}
extends uniquely to an isometric isomorphism between $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ and $L^2(\mathbb{T}^k, \ell^2(\mathbb{Z}^k))$; here
$$\hat f(x)= \int_{\mathbb{R}}\def\C{\mathbb{C}^k} f(y) \ e^{-2\pi\, i\,\mathbf{l}angle y,\,x\rangle} \ dy \quad {\mathbf t}ext{ for } \quad x\in\mathbb{R}}\def\C{\mathbb{C}^k\, , $$ denotes the Fourier transform of $f\in L^1(\mathbb{R}}\def\C{\mathbb{C}^k)\mathbf{a}p L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$.
\noi
Let ${\mathbf{a}l V}\mathfrak{s}ubset L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ be a SI subspace. Then, there exists a function $J_{\mathbf{a}l V}:\mathbb{T}^k\rightarrow\{$ closed subspaces of
$\ell^2(\mathbb{Z}^k)\}$ such that: if $ P_{J_{\mathbf{a}l V}(x)}$ denotes the orthogonal projection onto $J_{\mathbf{a}l V}(x)$ for $x\in\mathbb{T}^k$, then for every $\mathbf{x}i,\,\eta\in \ell^2(\mathbb{Z}^k)$ the function $x\mapsto \mathbf{l}angle P_{J_{\mathbf{a}l V}(x)} \,\mathbf{x}i\, , \, \eta\rangle$ is measurable and
\begin{equation}\mathbf{l}abel{pro: V y J}
{\mathbf{a}l V}=\{ f\in L^2(\mathbb{R}}\def\C{\mathbb{C}^k): \mathcal{G}amma f(x) \in J_{\mathbf{a}l V}(x) \,\ {\mathbf t}ext{for a.e.}\,\ x\in \mathbb{T}^k\}.
\end{equation} The function $J_{\mathbf{a}l V}$ is the so-called {\it measurable range function} associated with ${\mathbf{a}l V}$. By \cite[Prop.1.5]{Bo}, Eq. \eqref{pro: V y J} establishes a bijection between
SI subspaces of $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ and measurable range functions.
In case
${\mathbf{a}l V}=S(\mathcal{M}_d(\mathbb{C})hcal A) \mathfrak{s}ubseteq L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ is the SI subspace generated by $\mathcal{M}_d(\mathbb{C})hcal A=\{h_i:i\in \mathcal{M}_d(\mathbb{C})hbb I\}\mathfrak{s}ubset L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$, where $\mathcal{M}_d(\mathbb{C})hbb I$ is a finite or countable infinite set, then for a.e. $x\in\mathbb{T}^k$ we have that
\begin{equation}\mathbf{l}abel{eq Jv}
J_{\mathbf{a}l V}(x)=\overlineerline{{\mathbf t}ext{span}}\,\{\mathcal{G}amma h_i(x): \ i\in \mathcal{M}_d(\mathbb{C})hbb I\}\,.
\end{equation}
\noi
Recall that a bounded linear operator
$S\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))$ is {\it shift preserving} (SP) if
$T_\ell \, S=S\,T_\ell$ for every $\ell\in\mathbb{Z}^k$. In this case (see \cite[Thm 4.5]{Bo}) there exists
a (weakly) measurable field of operators $[S]_{(\cdot)}:\mathbb{T}^k\rightarrow \ell^2(\mathbb{Z}^k)$ (i.e. such that
for every $\mathbf{x}i,\,\eta\in \ell^2(\mathbb{Z}^k)$ the function
$\mathbb{T}^k\ni x\mapsto \mathbf{l}angle [S]_x\, \mathbf{x}i\, , \, \eta \rangle $
is measurable) and essentially bounded (i.e. the function $\mathbb{T}^k\ni x\mapsto \|\,[S]_{x}\,\|$ is essentially bounded)
such that
\begin{equation}\mathbf{l}abel{defi hatS}
[S]_x \big(\mathcal{G}amma f(x)\,\big)=\mathcal{G}amma (Sf) (x) \quad {\mathbf t}ext{ for a.e. }x\in\mathbb{T}^k\ , \ \ f\in L^2(\mathbb{R}}\def\C{\mathbb{C}^k)\,.
\end{equation} Moreover, $\|S\|={\mathcal{M}_d(\mathbb{C})hrm{ess}\mathfrak{s}up}_{x\in\mathbb{T}^k} \|\, [S]_x\, \|$.
Conversely, if $s:\mathbb{T}^k\rightarrow L(\ell^2(\mathbb{Z}^k))$ is a weakly measurable and essentially bounded field of operators then,
there exists a unique bounded operator $S\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))$ that is SP and such that $[S]=s$.
For example, let ${\mathbf{a}l V}$ be a SI subspace and consider $P_{\mathbf{a}l V}\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))$, the orthogonal projection
onto ${\mathbf{a}l V}$; then, $P_{\mathbf{a}l V}$ is SP so that
$[P_{\mathbf{a}l V}]{} :\mathbb{T}^k\rightarrow L(\ell^2(\mathbb{Z}^k))$ is given by $[P_{\mathbf{a}l V}]{}_x=P_{J_{\mathbf{a}l V}(x)}$ i.e., the orthogonal projection onto $J_{\mathbf{a}l V}(x)$, for a.e. $x\in\mathbb{T}^k$.
\noi The previous notions associated with SI subspaces and SP operators allow to develop a detailed study of frames of translates.
Indeed, let $\mathcal F=\{f_i\}_{i\in \mathcal{M}_d(\mathbb{C})hbb I}$ be a (possibly finite) sequence in $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$. In what follows we consider the sequence of integer translates
of $\mathcal F$, denoted $E(\mathcal F)$ and given by
$$E(\mathcal F)=\{T_\ell \, f_i\}_{(\ell,\, i)\,\in\, \mathbb{Z}^{k}{\mathbf t}imes \mathcal{M}_d(\mathbb{C})hbb I}\,.$$
For $x\in \mathbb{T}^k$, let $\mathcal{G}amma\mathcal F(x)=\{\mathcal{G}amma f_i(x)\}_{i\in \mathcal{M}_d(\mathbb{C})hbb I}$ which is a (possibly finite)
sequence in $\ell^2(\mathbb{Z}^k)$. Then $E(\mathcal F)$ is a $b$-Bessel sequence if and only if $\mathcal{G}amma\mathcal F(x)$ is a $b$-Bessel
sequence for a.e. $x\in \mathbb{T}^k$ (see \cite{Bo,RS95}). In this case,
we consider the synthesis operator $T_{\mathcal{G}amma\mathcal F(x)}:\ell^2(\mathcal{M}_d(\mathbb{C})hbb I)\rightarrow \ell^2(\mathbb{Z}^k)$ and frame operator $S_{\mathcal{G}amma\mathcal F(x)}:\ell^2(\mathbb{Z}^k)\rightarrow \ell^2(\mathbb{Z}^k)$ of $\mathcal{G}amma\mathcal F(x)$, for $x\in\mathbb{T}^k$. It is straightforward to check that $S_{E(\mathcal F)}$ is a SP operator.
\noi
If $\mathcal F=\{f_i\}_{i\in \mathcal{M}_d(\mathbb{C})hbb I}$ and $\mathcal G=\{g_i\}_{i\in \mathcal{M}_d(\mathbb{C})hbb I}$ are such that $E(\mathcal F)$ and $E(\mathcal G)$ are Bessel sequences then (see \cite{HG07,RS95}) the following fundamental relation holds:
\begin{equation}\mathbf{l}abel{eq:fourier}
[T_{E(\mathcal G)}\,T^*_{E(\mathcal F)}]_x
= T_{\mathcal{G}amma\mathcal G(x)}\,T^*_{\mathcal{G}amma\mathcal F(x)}\ , \quad {\mathbf t}ext{for a.e }\, x \in \mathbb{T}^k \,.
\end{equation}
These equalities
have several consequences. For example, if
${\mathbf{a}l W}$ is a SI subspace of $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ and we assume further that $\mathcal F,\,\mathcal G\in{\mathbf{a}l W}^n$ then,
for every $f,\,g\in L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$,
$$
\mathbf{l}angle S_{E(\mathcal F)}\, f,\,g\rangle =\int_{\mathbb{T}^k} \mathbf{l}angle S_{\mathcal{G}amma\mathcal F(x)}
\ \mathcal{G}amma f(x),\,\mathcal{G}amma g(x)\rangle_{\ell^2(\mathbb{Z}^k)}\ dx\ .
$$
This last fact implies that $[S_{E(\mathcal F)}]_x=S_{\mathcal{G}amma \mathcal F(x)}$ for a.e. $x\in\mathbb{T}^k$. Moreover,
$E(\mathcal F)$ is a frame for ${\mathbf{a}l W}$ with frame bounds
$0<a\mathbf{l}eq b$ if and only if $\mathcal{G}amma\mathcal F(x)$ is a
frame for $J_{\mathbf{a}l W}(x)$ with frame bounds $0<a\mathbf{l}eq b$ for a.e. $x\in \mathbb{T}^k$ (see \cite{Bo}).
\noi
We end this section with the notion of convex potentials in FSI introduced in \cite{BMS15}\,; in order to describe these potentials we
consider the sets
\begin{equation}\label{def convf}
\mathrm{Conv}(\mathbb{R}_+) = \{
\varphi:\mathbb{R}_+\rightarrow \mathbb{R}_+\ , \ \varphi \ \mbox{ is a convex function} \ \}
\end{equation}
and $\mathrm{Conv_s}(\mathbb{R}_+) = \{\varphi\in \mathrm{Conv}(\mathbb{R}_+) \ , \ \varphi$ is strictly convex $\}$.
\begin{fed} \label{defi pot conv}\rm Let $\mathcal{W}$ be a FSI subspace in $L^2(\mathbb{R}^k)$, let $\mathcal F=\{f_i\}_{i\in\mathbb{I}_n}\in\mathcal{W}^n$
be such that $E(\mathcal F)$ is a Bessel sequence and consider $\varphi\in \mathrm{Conv}(\mathbb{R}_+)$.
The convex potential associated to $(\varphi,\mathcal{W})$ on $E(\mathcal F)$, denoted $P_\varphi^{\mathcal{W}}(E(\mathcal F))$, is given by
\begin{equation}\label{eq defi pot}
P_\varphi^{\mathcal{W}}(E(\mathcal F))=\int_{\mathbb{T}^k}
\mathrm{tr}(\varphi(S_{\Gamma \mathcal F(x)})\, [P_{\mathcal{W}}]_x) \ dx
\end{equation}
where $\varphi(S_{\Gamma \mathcal F(x)})$ denotes the functional calculus of the positive and finite rank operator $S_{\Gamma \mathcal F(x)}\in L(\ell^2(\mathbb{Z}^k))^+$ and $\mathrm{tr}(\cdot)$ denotes the usual semi-finite trace in $L(\ell^2(\mathbb{Z}^k))$.
$\triangle$
\end{fed}
\begin{exa}
Let $\mathcal{W}$ be a FSI subspace of $L^2(\mathbb{R}^k)$ and let $\mathcal F=\{f_i\}_{i\in\mathbb{I}_n}\in\mathcal{W}^n$. If we set $\varphi(x)=x^2$ for $x\in \mathbb{R}_+$ then the corresponding potential on $E(\mathcal F)$, that we shall denote $\mathcal{F}P(E(\mathcal F))$, is given by
$$
\mathcal{F}P(E(\mathcal F))= \int_{\mathbb{T}^k}
\mathrm{tr}( S_{\Gamma \mathcal F(x)}^2) \ dx= \int_{\mathbb{T}^k} \ \sum_{i,\,j\in \mathbb{I}_n} |\langle \Gamma f_i(x),\Gamma f_j(x)\rangle|^2 \ dx\,, $$
where we have used the fact that $\varphi(0)=0$ in this case. Hence, $\mathcal{F}P(E(\mathcal F))$ is a natural extension of the Benedetto-Fickus frame potential (see \cite{BF}).
$\triangle$
\end{exa}
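\noindent The finite dimensional frame potential of \cite{BF} underlying this example can be evaluated directly from the Gramian; the following sketch (the standard ``Mercedes-Benz'' frame in $\mathbb{R}^2$, not an example from the paper) shows a tight unit-norm frame attaining the lower bound $n^2/d$ while a non-tight one exceeds it.
\begin{verbatim}
import numpy as np

def frame_potential(F):
    G = F.T @ F                      # Gramian (<f_i, f_j>)_{i,j}
    return np.sum(np.abs(G) ** 2)    # equals tr(S_F^2)

angles = 2 * np.pi * np.arange(3) / 3
mercedes = np.stack([np.cos(angles), np.sin(angles)])   # tight unit-norm frame
other = np.array([[1.0, 1.0, 0.0], [0.0, 0.1, 1.0]])
other /= np.linalg.norm(other, axis=0)                  # non-tight unit-norm frame
assert np.isclose(frame_potential(mercedes), 4.5)       # attains n^2/d = 9/2
assert frame_potential(other) > 4.5
\end{verbatim}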
\noi
With the notation of Definition \ref{defi pot conv}, it is shown in \cite{BMS15} that $P_\varphi^{\mathcal{W}}(E(\mathcal F))$ is a well defined functional on the class of Bessel sequences $E(\mathcal F)$ induced by a finite sequence $\mathcal F=\{f_i\}_{i\in\mathbb{I}_n}\in\mathcal{W}^n$ as above. The main motivation for considering convex potentials is
that, under some natural normalization hypothesis, they detect tight frames as their minimizers
(see \cite[Theorem 3.9.]{BMS15} or Corollary \ref{cororo1} below); that is,
convex potentials provide simple scalar measures of stability that can be used to compare shift generated frames.
Therefore, the convex potentials for FSI subspaces are natural extensions of the convex potentials in finite dimensions introduced in \cite{MR10}.
In what follows, we shall consider the existence of tight frames $E(\mathcal F)$ for the FSI subspace $\mathcal{W}$ with prescribed norms.
It turns out that there are natural restrictions for the existence of such frames (see Theorem \ref{teo sobre disenio de marcos} below). In case these restrictions are not fulfilled, the previous remarks show that minimizers of convex potentials associated to a pair $(\varphi,\,\mathcal{W})$ within the class of frames with prescribed norms are natural substitutes of tight frames.
\subsection{Majorization in probability spaces}\label{2.3}
Majorization between vectors (see \cite{Bhat,MaOl}) has played a key role in frame theory. On the one hand, majorization allows to characterize the existence of frames with prescribed properties (see \cite{AMRS,CFMPS,CasLeo}). On the other hand, majorization is a preorder relation that implies a family of tracial inequalities; this last fact can be used to explain the structure of minimizers of general convex potentials, that include the Benedetto-Fickus' frame potential (see \cite{BF,CKFT,MR08,MR10,MRS13,MRS14,MRS13b}). We will be dealing with convex potentials in the context of Bessel families of integer translates of finite sequences; accordingly, we will need the following general notion of majorization between functions in probability spaces.
\noi
Throughout this section the triple $(X,\mathcal{X},\mu)$ denotes
a probability space i.e.
$\mathcal{X}$ is a $\sigma$-algebra of sets in $X$ and $\mu$ is a probability measure defined on $\mathcal{X}$. We shall denote by $L^\infty(X,\mu)^+
= \{f\in L^\infty(X,\mu): f\ge 0\}$.
For $f\in L^\infty(X, \mu)^+$,
the {\it decreasing rearrangement} of $f$ (see \cite{MaOl}), denoted $f^*:[0,1)\rightarrow \mathbb{R}_+$, is given by
\begin{equation}\label{eq:reord}
f^*(s ) \ \stackrel{\mbox{\tiny{def}}}{=}\ \sup \,\{ t\in \mathbb{R}_+ : \ \mu \{x\in X:\ f(x)>t\} >s\}
\peso{for every} s\in [0,1)\, .
\end{equation}
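\noindent For a purely atomic probability space with $m$ equally weighted atoms (an assumption made only for this illustration), the decreasing rearrangement of Eq. \eqref{eq:reord} reduces to sorting the values of $f$; the following sketch implements this special case.
\begin{verbatim}
import numpy as np

def rearrangement(values):
    """f^* for f taking the given values on m atoms of equal measure 1/m."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]     # non-increasing order
    m = len(v)
    return lambda s: v[min(int(np.floor(s * m)), m - 1)]   # step function on [0,1)

f_star = rearrangement([0.2, 1.5, 0.7, 1.5])
assert [f_star(s) for s in (0.0, 0.3, 0.6, 0.9)] == [1.5, 1.5, 0.7, 0.2]
\end{verbatim}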
\begin{rem} \label{rem:prop rear elem}We mention some elementary facts related with the decreasing rearrangement of functions that we shall need in the sequel. Let $f\in L^\infty(X,\mu)^+$, then:
\begin{enumerate}
\item $f^*$ is a right-continuous and non-increasing function.
\item $f$ and $f^*$ are equimeasurable i.e. for every Borel set $A\subset \mathbb{R}$ we have $\mu(f^{-1}(A))=|(f^*)^{-1}(A)|$, where $|B|$ denotes the Lebesgue measure of the Borel set $B\subset \mathbb{R}$. In turn, this implies that for every continuous $\varphi:\mathbb{R}_+\rightarrow \mathbb{R}_+$ we have: $\varphi\circ f\in L^\infty(X,\mu)$ iff $\varphi\circ f^*\in L^\infty([0,1])$ and in this case
\begin{equation}\label{reor int}
\int_X \varphi\circ f\ d\mu =\int_{0}^1 \varphi\circ f^*\ dx\ .
\end{equation}
\item If $g\in L^\infty(X,\mu)$ is such that $f\leq g$ then $0\leq f^*\leq g^*$; moreover, in case $f^*=g^*$ then $f=g$.
$\triangle$
\end{enumerate}
\end{rem}
\begin{fed}\rm Let $f, g\in L^\infty (X, \mu)^+$ and
let $f^*,\,g^*$ denote their decreasing rearrangements. We say that $f$ \textit{submajorizes} $g$ (in $(X,\mathcal{X},\mu)$), denoted $g \prec_w f$, if
\begin{eqnarray*}\label{eq: mayo de func}
\int_{0}^{s} g^*(t) \,\ dt &\leq& \int_{0}^{s} f^*(t)\,\ dt \peso{for every} 0\leq s\leq 1 \,.
\end{eqnarray*}
If in addition $\int_{0}^{1} g^*(t)\,\ dt = \int_{0}^{1} f^*(t)\,\ dt$
we say that $f$ \textit{majorizes} $g$ and write $g \prec f$.
$\triangle$
\end{fed}
\noi
In order to check that majorization holds between functions in probability spaces, we can consider the so-called {\it doubly stochastic maps}. Recall that a linear operator $D$ acting on $L^\infty(X,\mu)$ is a doubly-stochastic map if $D$ is unital, positive and trace preserving i.e.
$$
D(1_X)=1_X \ , \ \
D\big(\, L^\infty (X, \mu)^+ \, \big)\mathfrak{s}ubseteq L^\infty (X, \mu)^+
\peso{and}
\int_X D(f)(x)\ d\mu(x) =\int_X f(x)\ d\mu(x)
$$
for every $f\in L^\infty(X,\mu)$.
It is worth pointing out that $D$ is necessarily a contractive map.
\noi
Our interest in majorization relies in its relation with integral inequalities in terms of convex functions. The following result summarizes this relation as well as the role of the doubly stochastic maps (see for example \cite{Chong,Ryff}). Recall that $\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ and $\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ (see Eq. \eqref{def convf}) denote the sets of convex and strictly convex functions $\varphi:\mathbb{R}}\def\C{\mathbb{C}_+\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$, respectively.
\begin{teo}\label{teo porque mayo} \rm
Let $f,\,g\in L^\infty (X, \mu)^+$. Then the following conditions are equivalent:
\begin{enumerate}
\item $g\prec f$;
\item There is a doubly stochastic map $D$ acting on $L^\infty(X,\mu)$ such that $D(f)=g$;
\item For every $\varphi \in \mathrm{Conv}(\mathbb{R}_+)$ we have that
\begin{equation}\label{eq teo:desi mayo}
\int_X \varphi(g(x)) \ d\mu(x)\leq \int_X \varphi(f(x))\ d\mu(x)\ .
\end{equation}
\end{equation}
\end{enumerate}
Similarly, $g\prec_w f \iff $ Eq. \eqref{eq teo:desi mayo} holds for every {non-decreasing} convex function $\varphi$. \qed
\end{teo}
\noi
The following result plays a key role in the study of the structure of minimizers of $\prec_w$ within (appropriate) sets of functions.
\begin{pro}[\cite{Chong}]\label{pro int y reo} \rm
Let $f,\,g\in L^\infty(X,\mu)^+$ such that $g\prec_w f$. If there
exists $\varphi\in\mathrm{Conv_s}(\mathbb{R}_+)$ such that
\begin{equation}
\int_X \varphi(f(x))\ d\mu(x) =\int_X \varphi(g(x)) \ d\mu(x)
\peso{then} g^*=f^* \ . \tag*{
$\square$}
\end{equation}
\end{pro}
\noi
\section{Existence of shift generated frames with prescribed fine structure}
In this section we characterize the fine structure of a Bessel sequence $E(\mathcal F)$, where $\mathcal F=\{f_i\}_{i\in\mathbb{I}_n}\in\mathcal{W}^n$. By the fine (or relative) structure of $E(\mathcal F)$ we mean the sequence of norms of the vectors $\Gamma \mathcal F(x)=(\Gamma f_i(x)) _{i\in\mathbb{I}_n}$ and the sequence of eigenvalues of $[S_{E(\mathcal F)}]_x$ for $x\in\mathbb{T}^k$ (see Remark \ref{rem sobre estruc fina} for a precise description). As we shall see, the possible fine structure of $E(\mathcal F)$ can be described in terms of majorization relations.
\subsection{A complete characterization in terms of majorization relations}\label{subsec exac charac}
\noi
We begin by showing the existence of measurable
spectral representations of self-adjoint SP operators with range lying in a FSI subspace (see Lemma \ref{lem spect represent ese}),
which follow from results from \cite{RS95} regarding the existence of measurable fields of eigenvectors and eigenvalues (counting multiplicities and arranged in non-increasing order) of measurable fields $M:\mathbb{T}^k \rightarrow \mathcal{M}_d(\mathbb{C})sad$ of selfadjoint matrices.
In order to do that, we first recall some notions and results from \cite{Bo}.
\noi
Given ${\mathbf{a}l W}\mathfrak{s}ubset L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ a FSI subspace, we say that $f\in {\mathbf{a}l W}$ is a quasi-orthogonal generator of ${\mathbf{a}l W}$ if
\begin{equation}\mathbf{l}abel{eq:}
\|g\|^2=\mathfrak{s}um_{\ell\in \mathbb{Z}^k} |\mathbf{l}angle T_\ell f, g \rangle|^2\ , \peso{for every} g\in {\mathbf{a}l W}\, .
\end{equation}
The next theorem, which is a consequence of results from \cite{Bo}, provides a decomposition of any FSI subspace
of $L^2(\mathbb{R}}\def\C{\mathbb{C}^n)$ into a finite orthogonal sum of principal SI subspaces with quasi-orthogonal generators.
\begin{teo}[\cite{Bo}]\mathbf{l}abel{teo:la descom de Bo} \rm
Let ${\mathbf{a}l W}$ be a FSI subspace of $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$, with $d=\mathfrak{s}upes_{x\in \mathbb{T}^k} d(x)$, where $d(x)=\dim J_{{\mathbf{a}l W}}(x)$ for $x\in\mathbb{T}^k$. Then there exist $h_1,\mathbf{l}dots,h_d\in{\mathbf{a}l W}$ such that ${\mathbf{a}l W}$ can be decomposed as an orthogonal sum
\begin{eqnarray}
{\mathbf{a}l W}=\bigoplus_{j\in\mathbb{I}_{d}} {\mathbf{a}l S}(h_j),
\end{eqnarray}
where $h_j$ is a quasi orthogonal generator of ${\mathbf{a}l S}(h_j)$ for $j\in\mathbb{I}_{d}\,$, and ${\mathbf t}ext{Spec}({\mathbf{a}l S}(h_{j+1}))\mathfrak{s}ubseteq
{\mathbf t}ext{Spec}({\mathbf{a}l S}(h_j))$ for $j\in \mathbb{I}_{d-1}\,$. Moreover,
in this case $\{\mathcal{G}amma h_j(x)\}_{j\in\mathbb{I}_{d(x)}}$ is a ONB of $J_{\mathbf{a}l W}(x)$ for a.e. $x\in\mathbb{T}^k$.
\qed
\end{teo}
\begin{lem}\mathbf{l}abel{lem spect represent ese}
Let ${\mathbf{a}l W}$ be a FSI subspace in $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ with $d=\mathfrak{s}upes_{x\in \mathbb{T}^k} d(x)$, where $d(x)=\dim J_{{\mathbf{a}l W}}(x)$ for $x\in\mathbb{T}^k$. Let $S\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))$ be a SP self-adjoint operator such that $R(S)\mathfrak{s}ubseteq{\mathbf{a}l W}$. Then, there exist:
\begin{enumerate}
\item measurable vector fields
$v_j:\mathbb{T}^k\rightarrow \ell^2(\mathbb{Z}^k)$ for $j\in\mathbb{I}_{d}$ such that $v_j(x)=0$ if $j>d(x)$ and $\{v_j(x)\}_{j\in\mathbb{I}_{d(x)}}$ is an ONB for $J_{\mathbf{a}l W}(x)$ for a.e. $x\in\mathbb{T}^k$;
\item bounded, measurable functions $\mathbf{l}a_j:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ for $j\in\mathbb{I}_{d}\,$, such that $\mathbf{l}a_1\geq \mathbf{l}dots\geq\mathbf{l}a_d$, $\mathbf{l}a_j(x)=0$ if $j>d(x)$ and
\begin{equation}\mathbf{l}abel{lem repre espec S}
[S]_x
=\mathfrak{s}um_{j\in\mathbb{I}_{d(x)}}\mathbf{l}ambda_j(x)\ v_j(x)\otimes v_j(x)
\ , \peso{for a.e.}\ x\in\mathbb{T}^k\,.
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
By considering a convenient finite partition of $\mathbb{T}^k$ into measurable sets we can assume, without loss of generality, that $d(x)=d$ for a.e. $x\in \mathbb{T}^k$.
In this case, by Theorem \ref{teo:la descom de Bo} we have that
$$
{\mathbf{a}l W}=\bigoplus_{j\in \mathbb{I}_{d}} {\mathbf{a}l S}(h_j) \ ,
$$
where $h_j\in {\mathbf{a}l W}$, for $j\in\mathbb{I}_{d}$, are such that $\{\mathcal{G}amma h_j(x)\}_{j\in\mathbb{I}_{d}}$ is a ONB of $J_{{\mathbf{a}l W}}(x)$ for a.e. $x\in \mathbb{T}^k$.
Consider the measurable field of self-adjoint matrices $M(\cdot):\mathbb{T}^k{\mathbf t}o \mathcal{M}_d(\mathbb{C})sad$ given by
$$
M(x)=\big(\mathbf{l}angle [S]_{x} \,\ \mathcal{G}amma h_j(x) ,\,\ \mathcal{G}amma h_i(x)\rangle\big)_{i,j\,\in \mathbb{I}_{d}}\ .
$$
By \cite{RS95}, we can consider measurable functions $\mathbf{l}a_j:\mathbb{T}^k{\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}_+$ for $j\in \mathbb{I}_d\,$,
such that $\mathbf{l}a_1\geq \mathbf{l}dots\geq \mathbf{l}a_d$ and measurable vector fields $w_j:\mathbb{T}^k{\mathbf t}o \C^d$ for $j\in \mathbb{I}_{d}\,$, such that
$\{w_j(x)\}_{j\in \mathbb{I}_{d}}$ is a ONB of $\C^d$ and
\begin{equation}\mathbf{l}abel{ecuac agreg1}
M(x)=\mathfrak{s}um_{j\in \mathbb{I}_{d}} \mathbf{l}a_j(x)\,\ w_j(x) \otimes w_j(x) \peso{for a.e.} \ x\in \mathbb{T}^k\, .
\end{equation}
If $w_j(x)=(w_{ij}(x))_{i\in\mathbb{I}_{d}}$ for $j\in\mathbb{I}_{d}\,$, consider the measurable vector
fields $v_j:\mathbb{T}^k {\mathbf t}o \ell^2(\mathbb{Z}^k)$ for $j\in \mathbb{I}_{d}\,$, given by
$$
v_j(x) =\mathfrak{s}um_{i\in \mathbb{I}_{d}} w_{ij}(x)\,\ \mathcal{G}amma h_i(x)\,\ {\mathbf t}ext{for}\,\ x\in \mathbb{T}^k\ .
$$
Then, it is easy to see that $\{v_j(x)\}_{j\in \mathbb{I}_{d}}$ is ONB of $J_{\mathbf{a}l W}(x)$ for a.e. $x\in\mathbb{T}^k$; moreover,
Eq. \eqref{ecuac agreg1} implies that Eq. \eqref{lem repre espec S} holds in this case.
\end{proof}
\begin{rem}\mathbf{l}abel{rem sobre estruc fina}
Let ${\mathbf{a}l W}$ be a FSI subspace with $d(x)=\dim J_{\mathbf{a}l W}(x)$ for $x\in\mathbb{T}^k$, and let $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ be a finite sequence in $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ such that $E(\mathcal F)$
is a Bessel sequence. In what follows we consider:
\begin{enumerate}
\item the {\it fine spectral structure of} $E(\mathcal F)$, that is the weakly measurable function
$$
\mathbb{T}^k\ni x\mapsto (\mathbf{l}ambda_j([S_{E(\mathcal F)}]_x )\,)_{j\in\mathbb{N}}\in \ell^1_+(\mathbb{Z}^k) \ ,
$$
with $\mathbf{l}ambda_j([S_{E(\mathcal F)}]_x )=\mathbf{l}a_j(x)$ as in Lemma \ref{lem spect represent ese} for $j\in\mathbb{I}_{d(x)}\,$, and $\mathbf{l}ambda_j([S_{E(\mathcal F)}]_x )=0$ for $j\geq d(x)+1$
and $x\in \mathbb{T}^k$. Thus, the fine spectral structure of $\mathcal F$
describes the eigenvalues of the positive finite rank operator $[S_{E(\mathcal F)}]_x =S_{\mathcal{G}amma \mathcal F(x)}\in L(\ell^2(\mathbb{Z}^k))$, counting multiplicities and arranged in non-increasing order.
\item The {\it fine structure of} $E(\mathcal F)$ given by the fine spectral structure together with the measurable vector valued function $\mathbb{T}^k\ni x\mapsto (\|\mathcal{G}amma f_i(x)\|^2)_{i\in\mathbb {I} _n}\in\mathbb{R}}\def\C{\mathbb{C}_+^n\,$.
$\triangle$
\end{enumerate}
\end{rem}
\noi
In order to state our main result of this section we shall need the notion of vector majorization from matrix analysis. Recall that
given $a=(a_i)_{i\in \mathbb {I} _n}\in \mathbb{R}^n$ and $b=(b_i)_{i\in \mathbb {I} _m}\in \mathbb{R}^m$, with decreasing rearrangements $a^\downarrow$ and $b^\downarrow$, we say that $a$ is majorized by $b$, denoted $a\prec b$, if
\begin{equation}\label{eq: mayo dif}
\sum_{i\in \mathbb{I}_k} a^\downarrow_i \leq \sum_{i\in \mathbb{I}_k} b^\downarrow_i\ , \ \ 1\leq k \leq \min\{n, m\} \ \ \text{and} \ \ \sum_{i\in \mathbb{I}_n} a_i = \sum_{i\in \mathbb{I}_m} b_i\,.
\end{equation}
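\noi
The majorization relation in Eq. \eqref{eq: mayo dif} can be checked numerically; a minimal Python sketch (illustrative only, not part of the formal development) is the following.
\begin{verbatim}
import numpy as np

def majorized(a, b, tol=1e-12):
    # a is majorized by b as in Eq. (eq: mayo dif): compare partial sums of the
    # decreasing rearrangements up to min(n, m), and require equal total sums
    a, b = np.sort(a)[::-1], np.sort(b)[::-1]
    k = min(len(a), len(b))
    partial = np.all(np.cumsum(a[:k]) <= np.cumsum(b[:k]) + tol)
    return bool(partial) and abs(a.sum() - b.sum()) <= tol

print(majorized([1, 1, 1], [2, 1, 0]))   # True: (1,1,1) is majorized by (2,1,0)
\end{verbatim}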
\begin{teo}[Existence of shift generated sequences with prescribed fine structure]
\mathbf{l}abel{teo sobre disenio de marcos}
Let ${\mathbf{a}l W}$ be a FSI subspace in $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ and let $d(x)=\dim J_{{\mathbf{a}l W}}(x)$ for $x\in\mathbb{T}^k$.
Given measurable functions $\alpha_j:\mathbb{T}^k {\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}_+$ for $j\in \mathbb {I} _n$ and $\mathbf{l}a_j:\mathbb{T}^k {\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}_+$ for $j\in \mathbb{N}$,
the following conditions are equivalent:
\begin{enumerate}
\item There exists $\mathcal F=\{f_j\}_{j\in \mathbb {I} _n} \in {\mathbf{a}l W}^n$ such that $E(\mathcal F)$ is a Bessel sequence and:
\begin{enumerate}
\item $\|\Gamma f_j(x)\|^2=\alpha_j(x)$ for $j\in \mathbb {I} _n$ and a.e. $x\in\mathbb{T}^k$;
\item $\lambda_j([S_{E(\mathcal F)}]_x ) =\lambda_j(x)$ for $j\in\mathbb{N}$ and a.e. $x\in\mathbb{T}^k$.
\end{enumerate}
\item The following admissibility conditions hold:
\begin{enumerate}
\item $\lambda_j(x)=0$ for a.e. $x\in\mathbb{T}^k$ such that $j\geq\min\{d(x),n\}+1$.
\item $(\alpha_j(x))_{j\in\mathbb {I} _n}\prec (\lambda_j(x))_{j\in\mathbb{I}_{d(x)}}$ for a.e. $x\in\mathbb{T}^k$.
\end{enumerate}
\end{enumerate}
\end{teo}
\noi
Our proof of Theorem \ref{teo sobre disenio de marcos} is based on the following extension of a basic result
in matrix analysis related with the Schur-Horn theorem (for its proof, see section \ref{Appendixity} - Appendix). In what follows we
let $D_b\in \mathcal{M}_d(\mathbb{C})$ be the diagonal matrix with main diagonal $b\in\C^d$.
\begin{teo}\mathbf{l}abel{teo:mayo equiv} \rm
Let $b:\mathbb{T}^k\rightarrow (\mathbb{R}}\def\C{\mathbb{C}_+)^d$ and $c:\mathbb{T}^k\rightarrow (\mathbb{R}}\def\C{\mathbb{C}_+)^n$ be measurable vector fields.
The following statements are equivalent:
\begin{enumerate}
\item For a.e. $x\in \mathbb{T}^k$ we have that $c(x)\prec b(x)$.
\item There exist measurable vector fields $u_j: \mathbb{T}^k{\mathbf t}o \C^d$ for $j\in\mathbb {I} _n$ such that $\|u_j(x)\|=1$ for a.e. $x\in \mathbb{T}^k$
and $j\in \mathbb{I}_n\,$, and such that
\begin{equation}
D_{b(x)}=\sum_{j\in \mathbb{I}_n} c_j(x)\ u_j(x) \otimes u_j(x) \ , \peso{for a.e.} \ x\in \mathbb{T}^k\ . \tag*{$\square$}
\end{equation}
\end{enumerate}
\end{teo}
\begin{proof}[Proof of Theorem \ref{teo sobre disenio de marcos}]
Assume that there exists $\mathcal F=\{f_j\}_{j\in \mathbb {I} _n} \in {\mathbf{a}l W}^n$ such that $\|\mathcal{G}amma f_j(x)\|^2=\alpha_j(x)$, for $j\in \mathbb {I} _n\,$, and
$\mathbf{l}a_j([S_{E(\mathcal F)}]_x ) =\mathbf{l}a_j(x)$ for $j\in\mathbb{N}$ and a.e. $x\in\mathbb{T}^k$.
Consider the measurable field of positive semi-definite matrices $G:\mathbb{T}^k{\mathbf t}o {\mathbf{a}l M}_n(\C)^+$ given by the Gramian
$$
G(x)=\Big(\, \big\mathbf{l}angle \mathcal{G}amma f_i(x)\, , \, \mathcal{G}amma f_j(x) \big\rangle\, \Big)_{i,j\in \mathbb {I} _n} \ , \peso{for} x\in \mathbb{T}^k \ .
$$
Notice that $G(x)$ is the matrix representation of $T^*_{\mathcal{G}amma \mathcal F(x)}T_{\mathcal{G}amma \mathcal F(x)}\in L(\C^n)$ with respect to the canonical basis of $\C^n$ for $x\in \mathbb{T}^k$; using the fact that the finite rank operators $T^*_{\mathcal{G}amma \mathcal F(x)}T_{\mathcal{G}amma \mathcal F(x)}$ and $T_{\mathcal{G}amma \mathcal F(x)}T^*_{\mathcal{G}amma \mathcal F(x)}=[S_{E(\mathcal F)}]_x $ have the same positive eigenvalues (counting multiplicities) we see that
$$
\lambda_j(G(x)) = \begin{cases} \lambda_j(x) & \peso{for} 1\leq j\leq \min\{d(x),n\} \\
\ \ \ 0 & \peso{for} \min\{d(x),n\}+1\leq j\leq n \end{cases} \peso{for a.e.} x\in\mathbb{T}^k \ .
$$
On the other hand, the main diagonal of $G(x)$ is $(\|\mathcal{G}amma f_j(x)\|^2)_{j\in\mathbb {I} _n}=(\alpha_j(x))_{j\in\mathbb {I} _n}\,$;
hence, by the classical Schur-Horn theorem (see \cite{HJ13}) we see that
$$
(\alpha_j(x))_{j\in\mathbb {I} _n}\prec \lambda(G(x))\in \mathbb{R}_+ ^n \implies
(\alpha_j(x))_{j\in\mathbb {I} _n}\prec (\lambda_j(x))_{j\in \mathbb{I}_{d(x)}} \peso{for a.e.} x\in\mathbb{T}^k \ .
$$
\noi
Conversely, assume that $(\alpha_j(x))_{j\in \mathbb {I} _n} \prec (\mathbf{l}a_i(x))_{i\in \mathbb{I}_{d(x)}}$ for a.e. $x\in \mathbb{T}^k$.
By considering a convenient finite partition of $\mathbb{T}^k$ into measurable subsets we can assume, without loss of generality, that $d(x)=d$ for $x\in\mathbb{T}^k$. Therefore, by Theorem \ref{teo:mayo equiv}, there exist
measurable vector fields $u_j: \mathbb{T}^k{\mathbf t}o \C^d$ for $j\in\mathbb {I} _n$ such that $\|u_j(x)\|=1$ for a.e. $x\in \mathbb{T}^k$ and $j\in \mathbb{I}_n$, and such that
\begin{equation} \mathbf{l}abel{diago}
D_{\lambda(x)}=\sum_{j\in \mathbb{I}_n} \alpha_j(x)\ u_j(x) \otimes u_j(x) \ , \peso{for a.e.} \ x\in \mathbb{T}^k\ ,
\end{equation}
where $\mathbf{l}ambda(x)=(\mathbf{l}ambda_j(x))_{j\in\mathbb{I}_d}$ for $x\in \mathbb{T}^k$. Now, by Theorem \ref{teo:la descom de Bo} there exist measurable vector fields
$v_j:\mathbb{T}^k\rightarrow \ell^2(\mathbb{Z}^k)$ for $j\in\mathbb{I}_d$ such that $\{v_j(x)\}_{j\in\mathbb{I}_d}$ is an ONB of $J_{\cal W}(x)$ for a.e. $x\in\mathbb{T}^k$.
Let $u_j(x)=(u_{ij}(x))_{i\in\mathbb{I}_d}$ for $j\in\mathbb {I} _n$ and $x\in\mathbb{T}^k$; then we consider
the finite sequence $\mathcal F=\{f_j\}_{j\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ determined by
$\Gamma f_j(x)=\alpha_j^{1/2}(x) \sum_{i\in\mathbb{I}_d} u_{ij}(x) \ v_i(x)$ for $j\in\mathbb {I} _n$ and $x\in\mathbb{T}^k$.
It is clear that
$$
\|\Gamma f_j(x)\|^2=\|\alpha_j^{1/2}(x)\ u_j(x)\|^2=\alpha_j(x) \peso{ for a.e. } x\in\mathbb{T}^k,\quad j\in\mathbb {I} _n\ .
$$
Moreover, using Eq. \eqref{diago} it is easy to see that
$$\left(\sum_{j\in\mathbb {I} _n} \Gamma f_j(x)\otimes \Gamma f_j(x) \right)\, v_i(x)= \lambda_i(x)\,v_i(x) \peso{for} i\in\mathbb{I}_d \ \ \text{ and \ a.e. }\ x\in\mathbb{T}^k\,. $$
Hence, $\lambda_j([S_{E(\mathcal F)}]_x ) =\lambda_j(x)$ for $j\in\mathbb{N}$ and a.e. $x\in\mathbb{T}^k$.
\end{proof}
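\noi
The necessity part of the previous proof can be visualized on a single fibre: for arbitrary vectors, the diagonal of the Gramian (the norms) is majorized by the eigenvalues of the frame operator, which is the classical Schur-Horn obstruction. The following Python sketch (illustrative data only) checks this.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
d, n = 3, 5
F = rng.standard_normal((d, n))               # columns play the role of Gamma f_j(x)
G = F.T @ F                                   # Gramian  T* T   (n x n)
S = F @ F.T                                   # frame operator T T* = [S_{E(F)}]_x  (d x d)
alpha = np.sort(np.diag(G))[::-1]             # ||Gamma f_j(x)||^2, non-increasing
lam = np.sort(np.linalg.eigvalsh(S))[::-1]    # same nonzero spectrum as G
assert np.allclose(np.sort(np.linalg.eigvalsh(G))[::-1][:d], lam)
# Schur-Horn: partial sums of alpha dominated by those of lam (padded with zeros),
# and the totals agree
assert all(alpha[:k+1].sum() <= lam[:min(k+1, d)].sum() + 1e-9 for k in range(n))
assert abs(alpha.sum() - lam.sum()) < 1e-9
\end{verbatim}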
\begin{rem}\mathbf{l}abel{se puede incluso con op SP}
Let ${\mathbf{a}l W}$ be a FSI subspace in $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ and let $d(x)=\dim J_{{\mathbf{a}l W}}(x)$ for $x\in\mathbb{T}^k$.
Let $\alpha_j:\mathbb{T}^k {\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}_+$, $j\in \mathbb {I} _n\,$, be measurable functions and let $S\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))^+$ be a positive SP operator
such that $R(S)\mathfrak{s}ubseteq {\mathbf{a}l W}$. Let $\mathbb{T}^k\ni x\mapsto (\mathbf{l}a_j([S]_x) )_{j\in\mathbb{N}}$ be the fine spectral structure of $S$ (which is well defined by Lemma
\ref{lem spect represent ese}).
Assume that for a.e. $x\in\mathbb{T}^k$ we have that
$$
(\alphapha_j(x))_{j\in\mathbb {I} _n}\prec (\mathbf{l}a_j([S]_x) )_{j\in\mathbb{I}_{d(x)}}\ .
$$
Then, there exists $\mathcal F=\{f_j\}_{j\in \mathbb {I} _n} \in {\mathbf{a}l W}^n$ such that $E(\mathcal F)$ is a Bessel sequence,
$$
S_{E(\mathcal F)}=S \peso{and} \|\mathcal{G}amma f_j(x)\|^2=\alphapha_j(x) \peso{for a.e.} x\in\mathbb{T}^k \, , \, j\in\mathbb {I} _n \ .
$$
Indeed, if in the proof of Theorem \ref{teo sobre disenio de marcos} above we take the measurable vector fields
$v_j:\mathbb{T}^k\rightarrow \ell^2(\mathbb{Z}^k)$ such that $\{v_j(x)\}_{j\in\mathbb{I}_d}$ is a ONB of $J_{\mathbf{a}l W}(x)$ and such
that $[S]_x \, v_j(x)=\lambda_j([S]_x)\ v_j(x)$ for a.e. $x\in\mathbb{T}^k$ (notice that this can always be done by Lemma \ref{lem spect represent ese})
then we conclude, as before, that
\begin{equation}
[S_{E(\mathcal F)}]_x \ v_j(x)= \lambda_j([S]_x)\,v_j(x) \peso{for} j\in\mathbb{I}_d \implies [S_{E(\mathcal F)}]_x = [S]_x \peso{for a.e.} x\in\mathbb{T}^k\ . \tag*{$\triangle$}
\end{equation}
\end{rem}
\noi
As a first application of Theorem \ref{teo sobre disenio de marcos} we show the existence of shift
generated uniform tight frames for an arbitrary FSI. In turn, this allows
us to strengthen some results from \cite{BMS15} (see also Corollary \ref{coro tight 2}).
\begin{cor}\mathbf{l}abel{cororo1}
Let $\{0\}\neq{\cal W}\subset L^2(\mathbb{R}^k)$ be a FSI subspace and let $d(x)=\dim J_{\cal W}(x)$ for $x\in\mathbb{T}^k$. Assume that $n\geq \operatorname{ess\,sup}_{x\in\mathbb{T}^k}d(x)$,
let $Z_i=d^{-1}(i)$ for $i\in\mathbb {I} _n\,$ and set $C_{\cal W}= \sum_{i\in\mathbb {I} _n} i\cdot |Z_i|>0$. Then:
\begin{enumerate}
\item There exists a sequence $\mathcal F=\{f_j\}_{j\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ such that $\|f_j\|^2=n^{-1}$ for $j\in\mathbb {I} _n$
and such that $E(\mathcal F)$ is a uniform tight frame for ${\mathbf{a}l W}$.
\item For any sequence $\mathcal G=\{g_j\}_{j\in\mathbb {I} _n}\in{\cal W}^n$ such that $E(\mathcal G)$ is a Bessel sequence and such that $\sum_{j\in\mathbb {I} _n} \|g_j\|^2=1$, and for every
convex function $\varphi$ we get that:
\begin{equation}\label{del item 2}
P^{\cal W}_\varphi(E(\mathcal G))\geq C_{\cal W} \ \varphi(C_{\cal W}^{-1})=P^{\cal W}_\varphi(E(\mathcal F))\, .
\end{equation}
Moreover, if we assume that $\varphi$ is strictly convex then $P^{\cal W}_\varphi(E(\mathcal G))=C_{\cal W} \ \varphi(C_{\cal W}^{-1})$ if and only
if $E(\mathcal G)$ is a tight frame for ${\cal W}$.
\end{enumerate}
\end{cor}
\begin{proof}
Let $p_i=|Z_i|$ (where $|A|$ denotes the Lebesgue measure of $A\subset \mathbb{T}^k$ and $Z_i=d^{-1}(i)$) for $1\leq i\leq n$;
then $C_{\cal W}= \sum_{i\in\mathbb {I} _n} i\cdot p_i$. Notice that by hypothesis $\text{Spec}({\cal W})=\cup_{i\in\mathbb {I} _n} Z_i\,$.
For $x\in\mathbb{T}^k$ set $\alpha(x)=0$ if $x\in \mathbb{T}^k\setminus\text{Spec}({\cal W})$ and:
\begin{equation} \mathbf{l}abel{defi alfas}
\alpha(x):=\left\{
\begin{array}{ccc}
\frac{j\cdot C_{\cal W}^{-1} }{n}
& {\rm if} & x\in Z_j \ {\mathbf t}ext{ and } \ p_j>0\,; \\
0 & {\rm if} & x\in Z_j \ {\mathbf t}ext{ and } \ p_j=0 \, .\\
\end{array}
\right.
\end{equation}
Then, it is easy to see that $(\alpha(x))_{i\in\mathbb {I} _n}\prec (C_{\cal W}^{-1})_{i\in\mathbb{I}_{d(x)}}$ for every $x\in \mathbb{T}^k$; hence, by Theorem \ref{teo sobre disenio de marcos} we see that there exists $\mathcal F=\{f_j\}_{j\in\mathbb {I} _n}\in{\cal W}^n$ such that $\|\Gamma f_j(x)\|^2=\alpha(x)$ for $j\in\mathbb {I} _n$ and such that $S_{\Gamma \mathcal F(x)}=C_{\cal W}^{-1}\ P_{J_{\cal W}(x)}$ for a.e. $x\in\mathbb{T}^k$. Therefore, $S_{E(\mathcal F)}=C_{\cal W}^{-1}\, P_{\cal W}$ and
$$
\|f_j\|^2=\int_{\mathbb{T}^k} \alpha(x)\ dx=
\frac{C_{\cal W}^{-1}}{n} \, \sum_{i\in\mathbb {I} _n} \int_{Z_i} i\ dx=\frac{C_{\cal W}^{-1}}{n}\ \sum_{i\in\mathbb {I} _n} i \cdot p_i=\frac{1}{n}\ .
$$
If $\mathcal G$ is as in item 2. then, by \cite{BMS15}, we get the inequality \eqref{del item 2}.
Notice that the lower bound is attained at $\mathcal F$ (since it is tight);
the last part of the statement was already shown in \cite{BMS15}.
\end{proof}
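\noi
The constant $C_{\cal W}$ and the weights in Eq. \eqref{defi alfas} are elementary to compute; a small numerical sketch (Python, with invented zone measures $p_i$) confirms that the resulting generators have norms $\|f_j\|^2=n^{-1}$.
\begin{verbatim}
import numpy as np
p = np.array([0.2, 0.0, 0.5, 0.3])      # illustrative |Z_i|, with dim J_W(x) = i on Z_i
n = len(p)                              # here n >= ess sup d(x)
C_W = sum((i + 1) * p[i] for i in range(n))
alpha = np.array([(i + 1) / (n * C_W) if p[i] > 0 else 0.0 for i in range(n)])
norm_fj = sum(alpha[i] * p[i] for i in range(n))   # integral of alpha over Spec(W)
print(C_W, norm_fj, abs(norm_fj - 1.0 / n) < 1e-12)
\end{verbatim}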
\mathfrak{s}ubsection{Generalized (measurable) eigensteps}\mathbf{l}abel{subsec eigensteps}
In this section we derive a natural extension of the notion of eigensteps introduced in \cite{CFMPS}, that allows us to describe a procedure to inductively construct finite sequences $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ such that the fine structure of $E(\mathcal F)$ (that is, the fine
spectral structure of $E(\mathcal F)$ and the finite sequence of measurable functions
$\|\mathcal{G}amma f_i(\cdot)\|^2:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$, $i\in\mathbb{I}_n$) are prescribed. Hence, we obtain an in-depth description of a step-by-step construction of Bessel sequences $E(\mathcal F)$ with prescribed fine structure. We point out that our techniques are not based on those from \cite{CFMPS}; indeed, our approach is based on an additive model developed in \cite{BMS15}.
\begin{rem}\mathbf{l}abel{rem hay eigensteps}Let ${\mathbf{a}l W}$ be a FSI subspace and let $d(x)=\dim J_{\mathbf{a}l W}(x)$ for $x\in\mathbb{T}^k$;
let $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ be such that $E(\mathcal F)$ is a Bessel sequence and set:
\begin{enumerate}
\item $\alphapha_i(x)=\|\mathcal{G}amma f_i(x)\|^2$, for $x\in\mathbb{T}^k$ and $i\in\mathbb {I} _n\,$.
\item $\mathbf{l}a_i(x)=\mathbf{l}ambda_i([S_{E(\mathcal F)}]_x )$ , for $x\in\mathbb{T}^k$ and $i\in\mathbb {I} _n\,$,
where $\mathbb{T}^k\ni x\mapsto (\mathbf{l}ambda_i([S_{E(\mathcal F)}]_x )\,)_{i\in\mathbb{N}}$ denotes the fine spectral structure of $E(\mathcal F)$.
\end{enumerate}
By Theorem \ref{teo sobre disenio de marcos} these functions satisfy the following admissibility conditions:
\begin{enumerate}
\item[Ad.1] $\lambda_i(x)=0$ for a.e. $x\in\mathbb{T}^k$ such that $i\geq\min\{n,\,d(x)\}+1$.
\item[Ad.2] $(\alpha_i(x))_{i\in\mathbb {I} _n}\prec (\lambda_i(x))_{i\in\mathbb{I}_{d(x)}}$ for a.e. $x\in\mathbb{T}^k$.
\end{enumerate}
For $j\in\mathbb {I} _n$ consider the sequence $\mathcal F_j=\{f_i\}_{i\in\mathbb{I}_{j}}\in{\mathbf{a}l W}^j$. In this case $E(\mathcal F_j)
=\{T_\ell f_i\}_{(\ell,i)\in\mathbb{Z}^k{\mathbf t}imes \mathbb{I}_{j}}$ is a Bessel sequence and $S_j=S_{E(\mathcal F_j)}$
is a SP operator such that
$$
[S_j]_x=S_{\mathcal{G}amma \mathcal F_j(x)}
=\mathfrak{s}um_{i\in\mathbb{I}_{j}} \mathcal{G}amma f_i(x)\otimes \mathcal{G}amma f_i(x)\in L(\ell^2(\mathbb{Z}^k))^+ \peso{for a.e. } x\in\mathbb{T}^k\ , \ \ j\in\mathbb {I} _n\ .
$$
For $j\in\mathbb {I} _n$ and $i\in\mathbb{I}_{j}\,$, consider the measurable function $\mathbf{l}ambda_{i,j}:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ given by
$$
\mathbf{l}ambda_{i,j}(x)=\mathbf{l}ambda_i([S_j]_x)\peso{for} x\in\mathbb{T}^k\ ,
$$
where $\mathbb{T}^k\ni x\mapsto (\mathbf{l}ambda_i([S_j]_x))_{i\in\mathbb{N}}$ denotes the fine spectral structure of $E(\mathcal F_j)$ (notice that by construction $\mathbf{l}ambda_i([S_j]_x)=0$ for $i\geq j+1$). Then, it is well known (see \cite{CFMPS}) that
$(\mathbf{l}ambda_{i,j}(x))_{i\in\mathbb{I}_{j}}$ interlaces $(\mathbf{l}ambda_{i,(j+1)}(x))_{i\in\mathbb{I}_{j+1}}$ i.e.
$$
\lambda_{i,(j+1)}(x)\geq \lambda_{i,j}(x)\geq \lambda_{(i+1),(j+1)}(x) \peso{for} i\in\mathbb{I}_{j}\ ,\ j\in\mathbb{I}_{n-1}\ , \ \text{ and a.e. } x\in\mathbb{T}^k\ .
$$
Notice that for a.e. $x\in\mathbb{T}^k$,
$$
\sum_{i\in\mathbb{I}_{j}} \lambda_{i,j}(x)= \operatorname{tr}([S_j]_x)
= \sum_{i\in\mathbb{I}_{j}}\|\Gamma f_i(x)\|^2 =\sum_{i\in\mathbb{I}_{j}}\alpha_i(x) \peso{for} j\in\mathbb {I} _n\ .
$$
Finally notice that by construction $S_n=S_{E(\mathcal F)}$ and hence, $\mathbf{l}ambda_{i,n}(x)=\mathbf{l}ambda_i(x)$ for $i\in\mathbb {I} _n$ and $x\in \mathbb{T}^k$.
These facts motivate the following extension of the notion of eigensteps introduced in \cite{CFMPS}.
$\triangle$
\end{rem}
\begin{fed}\mathbf{l}abel{defi measiegen} \rm
Let ${\mathbf{a}l W}$ be a FSI subspace and let $\mathbf{l}a_i\, , \, \alphapha_i:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ for $i\in\mathbb {I} _n$ be measurable functions satisfying the admissibility assumptions Ad.1 and Ad.2 in Remark \ref{rem hay eigensteps}.
A sequence of eigensteps for $(\mathbf{l}ambda,\alphapha)$ is a doubly-indexed sequence
of measurable functions $\mathbf{l}ambda_{i,j}:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ for $i\in\mathbb{I}_{j}$ and $j\in\mathbb {I} _n$ such that:
\begin{enumerate}
\item $\lambda_{i,(j+1)}(x)\geq \lambda_{i,j}(x)\geq \lambda_{(i+1),(j+1)}(x)$ for $i\in\mathbb{I}_{j} \, , \, j\in\mathbb{I}_{n-1}\, , \, $
and a.e. $x\in\mathbb{T}^k$;
\item $\sum_{i\in\mathbb{I}_{j}} \lambda_{i,j}(x)= \sum_{i\in\mathbb{I}_{j}}\alpha_i(x)$ for $j\in\mathbb {I} _n$ and a.e. $x\in \mathbb{T}^k$;
\item $\lambda_{i,n}(x)=\lambda_i(x)$ for $i\in\mathbb {I} _n$ and a.e. $x\in \mathbb{T}^k$.
$\triangle$
\end{enumerate}
\end{fed}
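\noi
On a fixed fibre $x$, the three conditions above amount to finitely many inequalities and equalities; the following Python sketch (illustrative only, with the obvious $0$-based indexing) checks whether a candidate family of values is a sequence of eigensteps at that fibre.
\begin{verbatim}
import numpy as np

def is_eigensteps(L, alpha, lam, tol=1e-9):
    # L[j-1] = (lambda_{1,j}(x), ..., lambda_{j,j}(x)) for j = 1, ..., n
    n = len(alpha)
    for j in range(1, n):                      # interlacing between steps j and j+1
        lo, hi = np.asarray(L[j - 1]), np.asarray(L[j])
        if not all(hi[i] + tol >= lo[i] >= hi[i + 1] - tol for i in range(j)):
            return False
    sums_ok = all(abs(sum(L[j]) - sum(alpha[:j + 1])) <= tol for j in range(n))
    final_ok = np.allclose(L[-1], lam[:n], atol=tol)
    return sums_ok and final_ok
\end{verbatim}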
\begin{rem}\mathbf{l}abel{rem hay eigensteps2}
Consider the notations and terminology from Remark \ref{rem hay eigensteps}. Then $((\mathbf{l}ambda_{i,j}(\cdot))_{i\in\mathbb{I}_{j}})_{j\in\mathbb {I} _n}$ is a sequence of eigensteps for $(\mathbf{l}ambda,\alphapha)$. We say that $((\mathbf{l}ambda_{i,j}(\cdot))_{i\in\mathbb{I}_{j}})_{j\in\mathbb {I} _n}$ is the sequence of eigensteps for $(\mathbf{l}a,\alphapha)$ associated to $\mathcal F$.
$\triangle$
\end{rem}
\noi
In what follows we show that every sequence of eigensteps is associated to some $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ such that $E(\mathcal F)$ is a Bessel sequence (see Theorem \ref{teo todo eigen echachi} below). In order to show this, we recall an additive (operator) model from \cite{BMS15}.
\begin{fed}\mathbf{l}abel{el conjunto U}\rm
Let ${\mathbf{a}l W}$ be a FSI subspace and let $d:\mathbb{T}^k\rightarrow \mathbb{N}_{\geq 0}$ be the measurable function given by $d(x)=\dim J_{{\mathbf{a}l W}}(x)$ for $x\in\mathbb{T}^k$. Let $S\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))^+$ be SP and such that $R(S)\mathfrak{s}ubset {\mathbf{a}l W}$. Given a measurable function $m:\mathbb{T}^k\rightarrow \mathbb{Z}$ such that $m(x)\mathbf{l}eq d(x)$ for a.e. $x\in\mathbb{T}^k$ we consider
$$U^{\cal W}_m(S) =
\Big\{S+ B:B\in L(L^2(\mathbb{R}^k))^+ \text{ is SP},\, R(B)\subset {\cal W},\,
\text{\rm rk}([B]_x)\leq d(x)-m(x) \ \text{for a.e. } x\in \mathbb{T}^k \Big\} \,.
$$
$\triangle$
\end{fed}
\begin{teo}[Appendix of \cite{BMS15}]\mathbf{l}abel{estructdelU} \rm
Consider the notations from Definition \ref{el conjunto U}. Given a measurable function $\mu:\mathbb{T}^k\rightarrow \ell^1(\mathbb{N})^+$ the following are equivalent:
\begin{enumerate}
\item There exists $C\in U^{\mathbf{a}l W}_m(S)$ such that $\mathbf{l}a(\hat C_x)=\mu(x)$, for a.e. $x\in \mathbb{T}^k$;
\item For a.e. $x\in\mathbb{T}^k\setminus \text{Spec}({\cal W})$ we have $\mu(x)=0$; for a.e. $x\in \text{Spec}({\cal W})$ we have that $\mu_i(x)=0$ for $i\geq d(x)+1$ and
\begin{enumerate}
\item in case $m(x)\leq 0$, $\mu_i(x)\geq \lambda_i([S]_x)$ for $i\in\mathbb{I}_{d(x)}\,$;
\item in case $m(x)\in\mathbb{I}_{d(x)}\,$, $\mu_i(x)\geq \lambda_i([S]_x)$ for $i\in\mathbb{I}_{d(x)}\,$ and
\begin{equation}
\lambda_i([S]_x)\geq \mu_{(d(x)-m(x))+i}(x) \peso{for} i\in\mathbb{I}_{m(x)}\ . \tag*{$\square$}
\end{equation}
\end{enumerate}
\end{enumerate}
\end{teo}
\begin{rem}
We point out that Theorem \ref{estructdelU} is obtained in terms of a natural extension of the Fan-Pall interlacing theory from matrix theory, to the context of measurable fields of positive matrices (see \cite[Appendix]{BMS15}); we also notice that the result is still valid for fields (of vectors and operators) defined in measurable subsets of $\mathbb{T}^k$. The original motivation for considering the additive model above was the fact that it describes the set of frame operators of oblique duals of a fixed frame. In the present setting, this additive model will also allow us to link the sequences of eigensteps with the construction of SG Bessel sequences with prescribed fine structure.
$\triangle$
\end{rem}
\begin{teo}\mathbf{l}abel{teo todo eigen echachi}
Let ${\mathbf{a}l W}$ be a FSI subspace and let $\mathbf{l}a_i,\, \alphapha_i:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ for $i\in\mathbb {I} _n$ be measurable functions satisfying the admissibility conditions Ad.1 and Ad.2 in Remark \ref{rem hay eigensteps}.
Consider a sequence of eigensteps
$((\mathbf{l}ambda_{i,j}(\cdot))_{i\in\mathbb{I}_{j}})_{j\in\mathbb {I} _n}$ for $(\mathbf{l}a,\alphapha)$.
Then, there exists $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$ such that $E(\mathcal F)$ is a Bessel sequence and $((\mathbf{l}ambda_{i,j}(\cdot))_{i\in\mathbb{I}_{j}})_{j\in\mathbb {I} _n}$ is the sequence of eigensteps associated to $\mathcal F$.
\end{teo}
\begin{proof} First notice that both the assumptions as well as the properties of the objects that we want to construct are checked point-wise; hence, by considering a convenient partition of $\mathbb{T}^k$ into measurable sets we can assume (without loss of generality) that $d(x)=d\geq 1$, for $x\in\mathbb{T}^k$. Now, we argue by induction on $j$. Notice that by hypothesis for $i=j=1$, we see that $\mathbf{l}ambda_{1,1}(x)=\alphapha_1(x)$ for a.e. $x\in\mathbb{T}^k$.
Let $f_1\in{\cal W}$ be such that $\|\Gamma f_1(x)\|^2=\alpha_1(x)$ for a.e. $x\in\mathbb{T}^k$; indeed, we can take $f_1\in{\cal W}$ determined by the condition $\Gamma f_1(x)=\alpha_1^{1/2}(x)\ \Gamma h_1(x)$, where $\{h_i\}_{i\in\mathbb{I}_{\ell}}$ are the quasi orthogonal generators for the orthogonal sum decomposition of ${\cal W}$ as in Theorem \ref{teo:la descom de Bo}. Then, by construction $\|\Gamma f_1(x)\|^2=\alpha_1(x)$ and $\lambda_{1,1}(x)=\|\Gamma f_1(x)\|^2=\lambda_1([S_{E(f_1)}]_x)$ for a.e. $x\in \mathbb{T}^k$.
\noi
Assume that for $j\in\mathbb{I}_{n-1}$ we have constructed $\mathcal F_j=\{f_i\}_{i\in\mathbb{I}_{j}}\in{\mathbf{a}l W}^j$ such that
\begin{equation}\mathbf{l}abel{eq induc}
\mathbf{l}a_{i,\ell}(x)=\mathbf{l}a_i([S_{E(\mathcal F_\ell)}]_x) \peso{for} i\in\mathbb{I}_{\ell} \ , \ \ \ell\in\mathbb{I}_{j}\ \ {\mathbf t}ext{and a.e. } x\in\mathbb{T}^k\,.
\end{equation}
We now construct $f_{j+1}$ as follows: set $\mu_i=\mathbf{l}a_{i,j+1}$ for $i\in\mathbb{I}_{j+1}$ and $\mu_i=0$ for $i>j+1$; set $\mu=(\mu_i)_{i\in\mathbb{N}}:\mathbb{T}^k\rightarrow \ell^1(\mathbb{N})^+$ which is a measurable function. Further, set $S=S_{E(\mathcal F_j)}\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))^+$ which is a SP operator with $R(S)\mathfrak{s}ubset {\mathbf{a}l W}$ and set $m(x)=m=d-1$.
Moreover, by taking $\ell=j$ in Eq. \eqref{eq induc} above we see that $\mathbf{l}a_i([S]_x)=\mathbf{l}a_{i,j}(x)$ for $i\in\mathbb{I}_{j}$ and $\mathbf{l}a_i([S]_x)=0$ for $i\geq j+1$, for a.e. $x\in\mathbb{T}^k$ .
\noi
By hypothesis, we see that $\mathbf{l}a_{i,j+1}\mathbf{l}eq \mathbf{l}a_{i,j+2}\mathbf{l}eq \mathbf{l}dots\mathbf{l}eq \mathbf{l}a_{i,n}=\mathbf{l}a_i$; since the admissibility conditions in Remark \ref{rem hay eigensteps} hold, we conclude that $\mu_i=\mathbf{l}a_{i,j+1}=0$ whenever $i\geq d+1$. On the other hand, since
$d-m=1$ we see that
the conditions in item 2. in Theorem \ref{estructdelU} can be put together as the interlacing relations
$$ \mu_i(x)\geq \lambda_i([S]_x)\geq\mu_{i+1}(x) \peso{for} i\in\mathbb{I}_{j} \ \text{ and a.e. }\ x\in\mathbb{T}^k\,, $$
which hold by hypothesis (see condition 1. in Definition \ref{defi measiegen}); therefore, by Definition \ref{el conjunto U} and Theorem \ref{estructdelU}, there exists a SP operator $B\in L(L^2(\mathbb{R}}\def\C{\mathbb{C}^k))^+$ such that $R(B)\mathfrak{s}ubset {\mathbf{a}l W}$, $\text{\rm rk}([B] _x )\mathbf{l}eq 1$ for a.e. $x\in\mathbb{T}^k$ and such that $\mathbf{l}a_i([S+B]_x)=\mu_i(x)=\mathbf{l}a_{i,j+1}(x)$ for $i\in\mathbb{I}_{j+1}\,$, for a.e. $x\in\mathbb{T}^k$.
The previous conditions on $B$ imply that there exists $f_{j+1}\in{\mathbf{a}l W}$ such that $B=S_{E(f_{j+1})}$; indeed, $f_{j+1}$ is such that it satisfies:
$\mathcal{G}amma f_{j+1}(x)\otimes \mathcal{G}amma f_{j+1}(x)=[B] _x$ for a.e. $x\in\mathbb{T}^k$.
Finally, if we set $\mathcal F_{j+1}=\{f_i\}_{i\in\mathbb{I}_{j+1}}$ then $S_{E(\mathcal F_{j+1})}=S_{E(\mathcal F_{j})}+S_{E(f_{j+1})}=S+B$ and hence $\mathbf{l}a_{i,j+1}(x)=
\mathbf{l}a_i([S_{E(\mathcal F_{j+1})}]_x)$ for $i\in\mathbb{I}_{j+1}$ and a.e. $x\in\mathbb{T}^k$. This completes the inductive step.
\end{proof}
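\noi
The interlacing pattern that drives the inductive step is easy to observe numerically: adding one generator at a time produces frame operators whose eigenvalues interlace, exactly as in condition 1. of Definition \ref{defi measiegen}. A fibre-wise Python sketch (illustrative data only) follows.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)
d, n = 4, 6
F = rng.standard_normal((d, n))                 # columns play the role of Gamma f_i(x)
S, prev = np.zeros((d, d)), None
for j in range(n):                              # S_{j+1} = S_j + f_{j+1} (x) f_{j+1}
    S = S + np.outer(F[:, j], F[:, j])
    ev = np.sort(np.linalg.eigvalsh(S))[::-1]
    lam = np.concatenate([ev, np.zeros(max(0, j + 1 - d))])[:j + 1]
    if prev is not None:                        # lam_{i,j+1} >= lam_{i,j} >= lam_{i+1,j+1}
        assert all(lam[i] + 1e-9 >= prev[i] >= lam[i + 1] - 1e-9
                   for i in range(len(prev)))
    prev = lam
\end{verbatim}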
\noi
We end this section with the following remark. With the notations and terminology in Theorem \ref{teo todo eigen echachi}, notice that the
constructed sequence $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}$ is such that
its fine structure is prescribed by $(\mathbf{l}ambda,\,\alphapha)$: indeed, $\mathbf{l}ambda_i([S_{E(\mathcal F)}]_x)=\mathbf{l}ambda_{i,\,n}(x)=\mathbf{l}ambda_i(x)$ and
$\|\mathcal{G}amma f_i(x)\|^2=\alphapha_i(x)$ for $i\in\mathbb {I} _n$ and a.e. $x\in\mathbb{T}^k$ (this last fact can be checked using induction and item 2. in Definition
\ref{defi measiegen}). That is, the measurable eigensteps provide a detailed description of Bessel sequences $E(\mathcal F)$ with prescribed fine structure.
\mathfrak{s}ection{An application: optimal frames with prescribed norms for FSI subspaces}\mathbf{l}abel{sec opti frames with prescribed norms}
In order to describe the main problem of this section we consider the following:
\begin{fed}\mathbf{l}abel{defi bal} \rm
Let ${\cal W}$ be a FSI subspace of $L^2(\mathbb{R}^k)$ and let $\alpha=(\alpha_i)_{i\in\mathbb {I} _n}\in (\mathbb{R}_{>0}^n)^\downarrow$. We let
\begin{equation}
\mathfrak {B}_\alpha({\cal W})=\{\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\cal W}^n:\ E(\mathcal F) \ \text{is a Bessel sequence }, \ \|f_i\|^2=\alpha_i\,,\ i\in\mathbb {I} _n\}\ ,
\end{equation}
the set of SG Bessel sequences in ${\cal W}$ with norms prescribed by $\alpha$.
$\triangle$
\end{fed}
\noi
Notice that the restrictions on the families $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$
(namely $\|f_i\|^2=\alphapha_i$ for $i\in\mathbb {I} _n$) are of a {\it global} nature. Our problem is to describe those $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ such that
the encoding schemes associated to their corresponding Bessel sequences $E(\mathcal F)$ are as stable as possible. Ideally, we would search for sequences $\mathcal F$ such that $E(\mathcal F)$ are tight
frames for ${\mathbf{a}l W}$; yet, Theorem \ref{teo sobre disenio de marcos} shows that there are obstructions for the existence
of such sequences
(see Corollary \ref{coro tight 2} below).
\noi
By a simple re-scaling argument, we can assume that $\sum_{i\in\mathbb {I} _n}\alpha_i=1$; then Corollary \ref{cororo1} (see also \cite[Theorem 3.9.]{BMS15}) shows that if there exists $\mathcal F_0\in \mathfrak {B}_\alpha({\cal W})$ such that $E(\mathcal F_0)$ is a tight
frame for ${\cal W}$ then $E(\mathcal F_0)$ is a minimizer in $\mathfrak {B}_\alpha({\cal W})$ of every frame potential $P_\varphi^{\cal W}$ associated to a convex function $\varphi$, and $P_\varphi^{\cal W} (E(\mathcal F_0))= C_{\cal W} \ \varphi(C_{\cal W}^{-1})$; moreover, in case $\varphi$ is a strictly convex function, then for every $\mathcal F\in \mathfrak {B}_\alpha({\cal W})$ for which $P_\varphi^{\cal W}(E(\mathcal F))=C_{\cal W} \ \varphi(C_{\cal W}^{-1})$ the sequence $E(\mathcal F)$ is a tight frame.
This suggests that in the general case, in order to search for $\mathcal F\in \mathfrak {B}_\alpha({\cal W})$
such that the encoding schemes associated to their corresponding Bessel sequences $E(\mathcal F)$ are as stable as possible, we could study the minimizers in $\mathfrak {B}_\alpha({\cal W})$ of the convex potential $P_\varphi^{\cal W}$ associated to a strictly convex function $\varphi$.
\noi
Therefore, given a convex function $\varphi$, in what follows we show the existence of finite sequences $\mathcal F^{\rm op}\in \mathfrak {B}_\alpha({\cal W})$ such that
$$
P_\varphi^{\cal W}(E(\mathcal F^{\rm op}))=\min\{P_\varphi^{\cal W}(E(\mathcal F)): \ \mathcal F\in \mathfrak {B}_\alpha({\cal W})\}\,.
$$ Moreover, in case $\varphi$ is strictly convex we describe the fine spectral structure of the frame operator of $E(\mathcal F^{\rm op})$.
In case $\varphi(x)=x^2$, our results extend some results from \cite{BF,CKFT,MR10} for the frame potential to the context of
SG Bessel sequences lying in a FSI subspace ${\mathbf{a}l W}$.
\noi
Let us fix some general notions and notation for future reference:
\begin{notas}\mathbf{l}abel{nota impor2}
In what follows we consider:
\begin{enumerate}
\item A FSI subspace ${\mathbf{a}l W}\mathfrak{s}ubset L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$;
\item $d(x)=\dim J_{\mathbf{a}l W}(x)\mathbf{l}eq \ell\in\mathbb{N}$, for a.e. $x \in \mathbb{T}^k$;
\item The Lebesgue measure on $\mathbb{R}}\def\C{\mathbb{C}^k$, denoted $|\cdot|$ ; $Z_i=d^{-1}(i)\mathfrak{s}ubseteq \mathbb{T}^k$ and $p_i=|Z_i|$, $i\in\mathbb{I}_{\ell}\,$.
\item The spectrum of ${\mathbf{a}l W}$ is the measurable set ${\mathbf t}ext{Spec}({\mathbf{a}l W}) =
\bigcup_{i\in\mathbb{I}_{\ell}} Z_i = \{x\in \mathbb{T}^k: d(x)\neq 0\}$.
\end{enumerate}
$\triangle$
\end{notas}
\mathfrak{s}ubsection{The uniform dimension case}\mathbf{l}abel{subse uniform}
Consider the Notations \ref{nota impor2}. In this section we obtain the fine spectral structure of minimizers of convex potentials in $\mathcal{M}_d(\mathbb{C})hfrak B_\alphapha({\mathbf{a}l W})$ under the assumption that $d(x)=d$ for a.e. $x\in{\mathbf t}ext{Spec}({\mathbf{a}l W})$. In order to deal with this particular case, we recall some notions and constructions from \cite{BMS15}.
\begin{rem}[Waterfilling in measure spaces]\mathbf{l}abel{recordando waterfilling}
Let $(X,\mathcal{X},\mu)$ denote a probability space and let $L^\infty(X,\mu)^+
= \{g\in L^\infty(X,\mu): g\ge 0\}$. Recall that for
$f\in L^\infty(X,\mu)^+ $ and $c \geq \infes f\geq 0$ we consider the {\it waterfilling} of $f$ at level $c$, denoted $f_c\in L^\infty(X,\mu)^+$, given by $f_c= \max\{f,\,c\}=f + (c-f)^+$, where $g^+$ denotes the positive part of a real function $g$.
Recall the decreasing rearrangement of non-negative functions defined in Eq. \eqref{eq:reord}. It is straightforward to check that if
\begin{equation}\mathbf{l}abel{eq lem reord}
s_0= \mu\{x\in X:\ f(x)>c\} \peso{then}
f_c^*(s)=\left\{
\begin{array}{ccc}
f^*(s) & {\rm if} & 0\leq s <s_0 \,; \\
c & {\rm if} & s_0\leq s \leq 1\, .
\end{array}
\right.
\end{equation}
We further consider $\phi_f: [\infes f, \infty){\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}_+$ given by
$$
\phi_f(c)=\int_X f_c\ d\mu= \int_X f(x) + (c-f(x))^+\ d\mu(x)\,. $$
Then, it is easy to see that:
\begin{enumerate}
\item $\phi_f(\infes f)=\int_X f\ d\mu$ and $\mathbf{l}im _{c{\mathbf t}o +\infty} \phi_f(c)= +\infty$;
\item $\phi_f$ is continuous and strictly increasing.
\end{enumerate} Hence, for every $v\geq \int_X f\ d\mu$ there exists a unique $c=c(v)\geq \infes f$ such that
$\phi_f(c)=v$.
With the previous notations then, by \cite[Theorem 5.5.]{BMS15} we get that if $h\in L^\infty(X,\mu)^+ $ is such that
\begin{equation}\label{eq teo 5.5}
f\leq h \peso{and} v\leq \int_X h\ d\mu \peso{then} f_{c(v)}\prec_w h\ .\end{equation}
$\triangle$
\end{rem}
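\noi
The level $c(v)$ of the previous remark is the unique solution of a scalar monotone equation, so it can be approximated by bisection. The following Python sketch (a discrete stand-in for $(X,\mu)$, illustrative only) computes it.
\begin{verbatim}
import numpy as np

def waterfill_level(f, w, v, iters=200):
    # find c >= min f with  integral of max(f, c) d mu = v,
    # where mu is the discrete measure with weights w
    f, w = np.asarray(f, float), np.asarray(w, float)
    assert v + 1e-12 >= (f * w).sum() and w.sum() > 0
    lo, hi = f.min(), f.max() + (v - (f * w).sum()) / w.sum() + 1.0
    for _ in range(iters):               # phi_f is continuous and strictly increasing
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if (np.maximum(f, c) * w).sum() < v else (lo, c)
    return c

f, w = np.array([0.9, 0.5, 0.1, 0.0]), np.full(4, 0.25)
c = waterfill_level(f, w, v=0.6)
print(c, (np.maximum(f, c) * w).sum())   # the waterfilling f_c integrates to v
\end{verbatim}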
\begin{lem}\mathbf{l}abel{lem: wat er filling}
Let $(X,\mathcal{X},\mu)$ denote a probability space and let $f,\,g\in L^\infty(X,\mu)^+$ be such that $f\prec_w g$. Let $c,\,d\geq 0$ be such that
$\int_X f_c\ d\mu=\int_X g_d\ d\mu$, where $f_c$ and $g_d$ denote the waterfillings of $f$ and $g$ at levels $c$ and $d$ respectively.
Then $f_c\prec g_d$ in $(X,\mu)$.
\end{lem}
\begin{proof} Set $s_0=\mu\{x\in X:\ f(x)>c\}\in[0,1]$; notice that by construction $g\mathbf{l}eq g_d$ in $X$ so
that, by Remark \ref{rem:prop rear elem}, $g^*\mathbf{l}eq (g_d)^*$ in $[0,1]$. Hence, for every $s\in[0,s_0]$ we have
\begin{equation}\mathbf{l}abel{eq desi11}
\int_0^s(f_c)^*\ dt=\int_0^s f^*\ dt\mathbf{l}eq \int_0^s g^*\ dt\mathbf{l}eq \int_0^s (g_d)^*\ dt\,.
\end{equation}
On the other hand,
$$ \int_0^{s_0} (g_d)^* \ dt\geq \int_0^{s_0} g^* \ dt\geq \int_0^{s_0} f^* \ dt \implies
\omega \ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \int_0^{s_0} (g_d)^* \ dt-\int_0^{s_0} f^* \ dt\geq 0\, .$$
Using Remark \ref{recordando waterfilling} and the hypothesis we get that
$$
\int_0^{s_0} f^* \ dt + (1-s_0) \, c = \int_0^1 f_c^* \ dt
\mathfrak{s}tackrel{\eqref{reor int}}{=}\int_X f_c\ d\mu=\int_X g_d\ d\mu
\mathfrak{s}tackrel{\eqref{reor int}}{=} \int_0^{s_0} (g_d)^* \ dt + \int_{s_0}^1 (g_d)^* \ dt
$$
$$
\implies \quad \quad (1-s_0) \, c=\int_{s_0}^1 \Big[ \ (g_d)^* + \frac{\omega}{1-s_0} \ \Big]\ dt\,.$$
Thus, by \cite[Lemma 5.3.]{BMS15} we get that for $s\in[s_0\, , \, 1]$:
$$
(s-s_0)\,c\mathbf{l}eq \int_{s_0}^s \Big[ \ (g_d)^* + \frac{\omega}{1-s_0} \ \Big]\ dt
\mathbf{l}eq \int_{s_0}^s (g_d)^* \ dt+ \omega\ .
$$
This last identity and Remark \ref{recordando waterfilling} show that for $s\in[s_0\, , \, 1]$,
\begin{equation} \mathbf{l}abel{eq desi22}
\int_0^s (f_c)^*\ dt=
\int_0^{s_0} (g_d)^*\ dt-\omega + (s-s_0)\, c\mathbf{l}eq \int_0^{s_0} (g_d)^*\ dt + \int_{s_0}^s (g_d)^*\ dt\,.
\end{equation}
The lemma is a consequence of Eqs. \eqref{eq desi11} and \eqref{eq desi22}.
\end{proof}
\begin{rem}\mathbf{l}abel{rem: sobre medidas}
Let $(Z\, , \, \mathcal{M}_d(\mathbb{C})hcal Z\, , \, |\, \cdot \,|)$ be a (non-zero) measure subspace of $(\mathbb{T}^k\, , \, \mathcal{M}_d(\mathbb{C})hcal B\, , \, |\, \cdot \,|)$
and consider $(\mathbb{I}_r, \mathcal{M}_d(\mathbb{C})hcal P(\mathbb{I}_r),\#(\cdot))$ i.e, $\mathbb{I}_r$ endowed with the counting measure. In what follows we consider the product
space $X\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ Z{\mathbf t}imes \mathbb{I}_r$ endowed with the product measure $\mu\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ |\cdot |{\mathbf t}imes \#(\cdot)$.
$\triangle$
\end{rem}
\begin{lem}\mathbf{l}abel{lem utilisima} Consider the notations in Remark \ref{rem: sobre medidas} and
let $\alphapha: Z\rightarrow \mathbb{R}}\def\C{\mathbb{C}^r$ be a measurable function.
Let $\breve\alphapha:X\rightarrow \mathbb{R}}\def\C{\mathbb{C}$ be given by
$$
\breve \alphapha(x,i)=\alphapha_i(x) \peso{for} x\in Z \peso{and} i\in\mathbb{I}_r\ .
$$
Then $\breve \alphapha$ is a measurable function and we have that:
\begin{enumerate}
\item If $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ then
$\int_X \varphi\circ \breve \alphapha \ d\mu = \mathfrak{s}um\mathbf{l}imits_{i\in\mathbb{I}_r} \int_{Z} \varphi(\alphapha_i(x))\ dx \ .$
\item Let $\beta: Z\rightarrow \mathbb{R}}\def\C{\mathbb{C}^r$ be a measurable function and let $\breve \beta:X\rightarrow \mathbb{R}}\def\C{\mathbb{C}$
be constructed analogously. If
$$
\alpha(x)\prec\beta(x) \peso{for a.e.} x\in Z \peso{then} \breve \alpha\prec \breve\beta
$$
in the probability space $(X,\mathcal X,{\tilde \mu})$, where ${\tilde \mu}=(r\cdot|Z|)^{-1}\,\mu$.
\item
Similarly, $\alphapha(x)\prec_w\beta(x)$ for a.e. $x\in Z$ implies that
$\breve \alphapha\prec_w \breve \beta$ in $(X,\mathcal{M}_d(\mathbb{C})hcal X,{\mathbf t}ilde \mu)$.
\end{enumerate}
\end{lem}
\begin{proof}
The proof of the first part of the statement is straightforward. In order to see item 2., notice that if $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ then $\alphapha(x)\prec\beta(x)$ implies that $\mathfrak{s}um_{i\in\mathbb{I}_r}\varphi(\alphapha_i(x))\mathbf{l}eq \mathfrak{s}um_{i\in\mathbb{I}_r}\varphi(\beta_i(x))$ for a.e. $x\in Z$. Then, using item 1. we get that
$$
\int_X \varphi\circ \breve \alphapha\ d{\mathbf t}ilde \mu=(r\cdot |Z|)^{-1} \int_{Z} \mathfrak{s}um_{i\in\mathbb{I}_r}\varphi(\alphapha_i(x)) \ dx
\mathbf{l}eq (r\cdot |Z|)^{-1} \int_{Z} \mathfrak{s}um_{i\in\mathbb{I}_r}\varphi(\beta_i(x))\ dx = \int_X \varphi\circ \breve \beta\ d{\mathbf t}ilde \mu\ .
$$
Since $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ is arbitrary, Theorem \ref{teo porque mayo} shows that
$\breve \alphapha\prec \breve \beta$. Item 3. follows using similar arguments, based on the characterization of submajorization in terms of
integral inequalities involving non-decreasing convex functions given in Theorem \ref{teo porque mayo} (see also \cite{Chong}).
\end{proof}
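\noi
Item 2. of the previous lemma can also be tested numerically after discretizing $Z$: pointwise (row-wise) majorization of the vector fields passes to the flattened functions on the product space. A small Python sketch with synthetic data:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)
N, r = 50, 3                                   # N uniform sample points of Z, r = |I_r|
beta = np.sort(rng.random((N, r)), axis=1)[:, ::-1]
alpha = np.repeat(beta.mean(axis=1, keepdims=True), r, axis=1)  # alpha(x) majorized by beta(x)
a = np.sort(alpha.ravel())[::-1]               # decreasing rearrangement of breve(alpha)
b = np.sort(beta.ravel())[::-1]                # decreasing rearrangement of breve(beta)
assert np.all(np.cumsum(a) <= np.cumsum(b) + 1e-9) and abs(a.sum() - b.sum()) < 1e-9
\end{verbatim}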
\noi
The following is the first main result of this section.
\begin{teo}[Existence of optimal sequences in $\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$]\mathbf{l}abel{teo dim unif}
Consider the Notations \ref{nota impor2}. Let $\alpha=(\alpha_i)_{i\in\mathbb {I} _n}\in (\mathbb{R}_{>0}^n)^\downarrow$ and assume that ${\cal W}$ is such that $d(x)=d$ for a.e. $x\in \text{Spec}({\cal W})$; set $r=\min\{n,d\}$. Let $p=p_d=|\text{Spec}({\cal W})|$. Then there exist $c=c(\alpha,\,d,\,p)\geq 0$ and $\mathcal F^{\rm op}\in \mathfrak {B}_\alpha({\cal W})$ such that:
\begin{enumerate}
\item For a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$ we have that
\begin{equation}\mathbf{l}abel{eq defi la op unif dim}
\lambda_j([S_{E(\mathcal F^{\rm op})}]_x)
=\left\{
\begin{array}{ccc}
\max\{\frac{\alpha_j}{p}\, , \, c\} & {\rm if} & j\in\mathbb{I}_{r} \ ; \\
0 & {\rm if} & r+1 \le j\le d\ .
\end{array}
\right.
\end{equation}
In particular, if $d\mathbf{l}eq n$ (i.e. $r=d$) then $E(\mathcal F^{\rm op})$ is a frame for ${\mathbf{a}l W}$.
\item For every convex function $\varphi$ and every $\mathcal F\in \mathfrak {B}_\alpha({\cal W})$ we have
\begin{equation}\label{eq prop c}
p\cdot \sum_{j\in\mathbb{I}_{r}} \varphi (\max\{\frac{\alpha_j}{p}\, , \, c\}) + p \, (d-r)\, \varphi(0) =P_\varphi^{\cal W}(E(\mathcal F^{\rm op}))\leq P_\varphi^{\cal W}(E(\mathcal F))
\,.
\end{equation}
\end{enumerate}
\end{teo}
\begin{proof}
Consider ${\mathbf t}ext{Spec}({\mathbf{a}l W})$ as a (non-zero, otherwise the result is trivial)
measure subspace of the $k$-torus endowed with Lebesgue measure. Then, we consider
$X={\mathbf t}ext{Spec}({\mathbf{a}l W}){\mathbf t}imes \mathbb{I}_r$ endowed with the product measure $\mu=|\cdot|{\mathbf t}imes \#(\cdot)$, where
$\#(\cdot)$ denotes the counting measure on $\mathbb{I}_r$ (as in Remark \ref{rem: sobre medidas}).
We also consider the normalized measure ${\mathbf t}ilde \mu=\frac{1}{p\cdot r}\ \mu$ on $X$.
Let $\mathcal F=\{f_j\}_{j\in\mathbb {I} _n}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ and set $\beta_j(x)=\|\mathcal{G}amma f_j(x)\|^2$
for $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$ and $j\in\mathbb {I} _n\, $. Notice that
\begin{equation}\mathbf{l}abel{ecua betaj}
\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W})} \beta_j(x)\ dx=\|f_j\|^2=\alphapha_j\ , \peso{for} j\in\mathbb {I} _n\ .
\end{equation}
Let $\breve\gamma\, , \, \breve\beta:X\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ be the measurable functions determined by
$$
\breve \gamma (x,j)=\frac{\alphapha_j}{p} \peso{and} \breve \beta(x,j)=\beta_j(x) \peso{for} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W}) \peso{and} j\in\mathbb{I}_{r} \ .
$$
Consider the map $D:L^\infty(X,{\mathbf t}ilde \mu) \rightarrow L^\infty(X,{\mathbf t}ilde \mu)$ given by
$$
D(h)(x,j)=r\cdot\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W}){\mathbf t}imes \{j\}} h \ d{\mathbf t}ilde \mu
= \frac 1p \
\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W})} h(x,j) \ dx \peso{for} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W}) \peso{and} j \in \mathbb{I}_{r}\ .
$$
Then, it is easy to see that $D$ is positive, unital and trace preserving i.e.
$D$ is a doubly stochastic map; moreover, by Eq. \eqref{ecua betaj},
$D(\breve \beta)=\breve \gamma$ and by Theorem \ref{teo porque mayo} we conclude that $\breve \gamma\prec \breve \beta\,$.
\noi
Now, consider the measurable vector-valued function $\beta^\downarrow(x)=(\beta^\downarrow_j(x))_{j\in\mathbb {I} _n}$
obtained by re-arrangement of the entries of the vector $\beta(x)=(\beta_j(x))_{j\in\mathbb {I} _n}$, for $x\in Z$ independently.
By construction we get the submajorization relations $(\beta_j(x))_{j\in\mathbb{I}_{r}}\prec_w (\beta^\downarrow_j(x))_{j\in\mathbb{I}_{r}}$
for every $x\in Z$ (notice that we are considering just the first $r$ entries of these $n$-tuples).
\noi
Thus, if we consider the measurable function $\breve {\beta^\downarrow} :X\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$
determined by $\breve{\beta^\downarrow}(x,j)=\beta^\downarrow_j(x)$ if $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$ and $j\in\mathbb{I}_{r}\,$,
then Lemma \ref{lem utilisima}
shows that
$\breve\beta \prec_w \breve{\beta ^\downarrow}$ in $(X,{\mathbf t}ilde \mu)$.
By transitivity, we conclude that $\breve \gamma\prec_w \breve{\beta^\downarrow}$.
By Remark \ref{recordando waterfilling} there exists a unique $b\geq {\mathbf t}ext{ess-}\inf\mathbf{l}imits_{x\in X} \breve{\beta^\downarrow} (x)$ such that the waterfilling of $\breve{\beta^\downarrow}$ at level $b$, denoted $\breve{\beta^\downarrow}_b$, satisfies
$$
\int_X \breve{\beta^\downarrow} _b \ d{\mathbf t}ilde \mu=(r\cdot p)^{-1}\, \mathfrak{s}um_{i\in\mathbb {I} _n} \alphapha_i
\geq \int_X \breve{\beta^\downarrow} \ d{\mathbf t}ilde \mu \ .
$$
Similarly, let $c\geq {\mathbf t}ext{ess-}\inf\mathbf{l}imits_{x\in X} \,\breve \gamma(x)$ be such that the waterfilling of $\breve \gamma$ at level $c$, denoted $\breve \gamma_c\,$, satisfies
$$
\int_X \breve\gamma_c \ d{\mathbf t}ilde \mu=(r\cdot p)^{-1}\, \mathfrak{s}um_{i\in\mathbb {I} _n} \alphapha_i\geq \int_X \breve\gamma \ d{\mathbf t}ilde \mu\ .
$$
Therefore, by Lemma \ref{lem: wat er filling}, we see that
\begin{equation}\mathbf{l}abel{eq relac fc fbetaparaabajo}
\breve\gamma_c\prec \breve{\beta^\downarrow} _b\peso{in} (X,{\mathbf t}ilde \mu)\ .
\end{equation}
By Lemma
\ref{lem spect represent ese} there exist measurable functions $\mathbf{l}ambda_j:\mathbb{T}^k\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ for $j\in\mathbb{I}_{d}$ such that
we have a representation of $[S_{E(\mathcal F)}]_x=S_{\mathcal{G}amma \mathcal F(x)}$ as in Eq. \eqref{lem repre espec S}, in terms of some measurable vector fields $v_j:\mathbb{T}^k\rightarrow \ell^2(\mathbb{Z}^k)$ for $j\in\mathbb{I}_d$, such that $\{v_j(x)\}_{j\in\mathbb{I}_d}$ is a ONB of $J_{\mathbf{a}l W}(x)$ for a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$; indeed, in this case $\mathbf{l}a_j(x)=0$ for $j\geq r+1$ and a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$.
\noi
If we let $e(x)\geq 0$ be determined by the condition
$$
\mathfrak{s}um_{i\in\mathbb{I}_r}\max\{\beta^\downarrow _i(x),e(x)\}=\mathfrak{s}um_{i\in\mathbb{I}_{r}}\mathbf{l}ambda_i(x)\
\Big(\, =\mathfrak{s}um_{i\in\mathbb{I}_{d}}\mathbf{l}ambda_i(x)\, \Big) \ , \peso{for a.e.} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})
$$
then by \cite{MR10} (also see \cite{MRS13,MRS14b,MRS14}) we have that
\begin{equation}\mathbf{l}abel{eq rel mayo MR}
(\delta_i(x))_{i\in\mathbb{I}_{r}}\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ (\max\{ \beta^\downarrow_i(x),\, e(x)\} )_{i\in\mathbb{I}_{r}}\prec (\mathbf{l}ambda_i(x))_{i\in\mathbb{I}_{r}} \ ,
\peso{for a.e.} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})\,.
\end{equation}
Notice that the vector $(\delta_i(x))_{i\in\mathbb{I}_{r}}$ can be considered as the (discrete) waterfilling of the vector $(\beta^\downarrow_j(x))_{j\in\mathbb{I}_{r}}$ at level $e(x)$, for $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$.
If $\breve\delta \, , \, \breve\mathbf{l}ambda:X\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+$ are the measurable functions given by
$$
\breve\delta(x,j)=\delta_j(x) \peso{and} \breve\mathbf{l}ambda(x,j)=\mathbf{l}ambda_j(x) \peso{for} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W}) \peso{and} j\in\mathbb{I}_{r}
$$
then, by Lemma \ref{lem utilisima}, we get that
$\breve\delta\prec \breve\mathbf{l}ambda$ in $(X,{\mathbf t}ilde\mu)$. Notice that by construction, $\breve\delta\geq \breve{\beta^\downarrow} $ and
$$\int_X \breve\delta\ d{\mathbf t}ilde \mu
=(r\cdot p)^{-1}\,\mathfrak{s}um_{i\in\mathbb {I} _n}\alphapha_i \,.$$
Hence, by Remark \ref{recordando waterfilling}, we get that $\breve{\beta^\downarrow}_b\prec \breve\delta\,$.
Putting all the pieces together, we now see that
\begin{equation}\mathbf{l}abel{eq relac major func}
\breve \gamma_c\prec \breve{\beta^\downarrow}_b\prec \breve\delta\prec \breve\mathbf{l}ambda\ , \peso{in} (X,{\mathbf t}ilde\mu)\ .
\end{equation}
Recall that by construction, we have that
\begin{equation}\mathbf{l}abel{eq la pinta de fc}
\breve \gamma_c(x)=\max\{ \frac{\alphapha_j}{p} \, , \, c\} \ , \peso{for} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W}){\mathbf t}imes \{j\}\mathfrak{s}ubset X \, , \ j\in\mathbb{I}_{r}\ .
\end{equation}
Then, it is straightforward to check that
\begin{equation}\mathbf{l}abel{eq. c es el correcto}
(r\cdot p)^{-1}\,\sum_{i\in\mathbb {I} _n}\alpha_i=\int_X \breve\gamma_c\ d\tilde\mu= r^{-1} \cdot
\sum_{j\in\mathbb{I}_{r}}\max\{ \frac{\alpha_j}{p} \, , \, c\} \implies
(\frac{\alpha_j}{p})_{j\in\mathbb {I} _n}\prec (\max\{\frac{\alpha_j}{p} \, , \, c\})_{j\in\mathbb{I}_{r}}\ .
\end{equation}
Thus, by Theorem \ref{teo sobre disenio de marcos},
there exists a Bessel sequence $\mathcal F^{\rm op}=\{f^{\rm op}_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n$
such that the fine spectral structure
$(\mathbf{l}ambda_j([S_{E(\mathcal F^{\rm op})}]_x)\,)_{j\in\mathbb{N}}$
satisfies Eq. \eqref{eq defi la op unif dim} and such that
$\|\mathcal{G}amma f^{\rm op}_i(x)\|^2=\frac{\alphapha_i}{p}\,$, for $i\in\mathbb {I} _n\,$, and $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$.
In particular, $\|f^{\rm op}_i\|^2=\alphapha_i$ for $i\in\mathbb {I} _n\,$, so $\mathcal F^{\rm op}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$. If $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ then, by the majorization relations in Eq. \eqref{eq relac major func}
and Lemma \ref{lem utilisima},
\begin{eqnarray*}\mathbf{l}abel{eq desi poten}
P_\varphi^{\mathbf{a}l W}(E(\mathcal F^{\rm op}))&=&\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W})} [ \mathfrak{s}um_{j\in\mathbb{I}_{r}}\varphi(\max\{\frac{\alphapha_j}{p} \, , \, c\})
+ (d-r)\, \varphi(0) ]\ dx= \int_X \varphi\circ \breve \gamma_c\ d\mu + p\,(d-r)\, \varphi(0) \\
&\mathbf{l}eq & \int_X \varphi\circ \breve\mathbf{l}ambda\ d\mu + p\,(d-r)\, \varphi(0)
=P_\varphi^{\mathbf{a}l W}(E(\mathcal F))\,.
\end{eqnarray*}
Hence, $\mathcal F^{\rm op}$ satisfies items 1. and 2. in the statement.
\end{proof}
\noi
The previous result shows that there are indeed structural optimal frames with prescribed norms in the sense that these frames minimize any frame potential within $\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$; along its proof we showed several majorization relations that allow us to prove that the spectral structure of any such structural optimal frame is described by Eq. \eqref{eq defi la op unif dim}.
\begin{teo}[Fine spectral structure of optimal sequences in $\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$]
\mathbf{l}abel{teo struct fina dim hom}
With the hypothesis and notations from Theorem \ref{teo dim unif},
assume that $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ is such that there exists $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ with $P_\varphi^{\mathbf{a}l W}(E(\mathcal F))=P_\varphi^{\mathbf{a}l W}(E(\mathcal F^{\rm op}))$.
Then, for a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$ we have that
\begin{equation}\mathbf{l}abel{eq defi la op unif dim2}
\mathbf{l}a_j([S_{E(\mathcal F)}]_x) =\mathbf{l}eft\{
\begin{array}{ccc}
\max\{\frac{\alphapha_j}{p} \, , \, c\} = \max\{ \beta^\downarrow_j(x)\, ,\, c\}& if & j\in\mathbb{I}_{r} \ ; \\
0 & if & r+1 \mathbf{l}e j\mathbf{l}e d\ ,
\end{array}
\right.
\end{equation}
where $\beta^\downarrow_1(x)\geq \ldots\geq\beta^\downarrow_n(x)\geq 0$ are obtained by re-arranging the sequence
$$
\beta(x) = \big( \, \beta_1(x) \, , \, \mathbf{l}dots \, , \, \beta_n(x)\,\big)
=\big(\, \|\mathcal{G}amma f_1(x)\|^2 \, , \, \mathbf{l}dots \, , \, \|\mathcal{G}amma f_n(x)\|^2 \,\big) \in \mathbb{R}}\def\C{\mathbb{C}^n
$$
in non-increasing order, independently for each $x\in{\mathbf t}ext{Spec}({\mathbf{a}l W})$.
\end{teo}
\begin{proof}
We continue to use the notations and terminology from the proof of Theorem \ref{teo dim unif}.
Assume further that $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ is such that there exists $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ with
$$
p\cdot \mathfrak{s}um_{j\in\mathbb{I}_{r}} \varphi (\max\{\frac{\alphapha_j}{p}\, , \, c\}) +
p\,(d-r)\,\varphi(0) = P_\varphi^{\mathbf{a}l W}(E(\mathcal F))\ .
$$
Then, using this last fact and Lemma \ref{lem utilisima} we see that
$$
(r\cdot p)\,\int_X \varphi\circ \breve \gamma_c\ d{\mathbf t}ilde\mu=
(r\cdot p)\,\int_X \varphi\circ \breve\mathbf{l}ambda\ d{\mathbf t}ilde\mu\ .
$$
Hence, by Eq. \eqref{eq relac major func} we have that
$$
\int_X \varphi\circ \breve \gamma_c\ d{\mathbf t}ilde\mu= \int_X \varphi\circ \breve{\beta^\downarrow }_b\ d{\mathbf t}ilde\mu
= \int_X \varphi\circ \breve\delta\ d{\mathbf t}ilde\mu= \int_X \varphi\circ \breve\mathbf{l}ambda\ d{\mathbf t}ilde\mu\ .
$$
Thus, by Proposition \ref{pro int y reo} the functions $\breve \gamma_c,\,\breve{\beta^\downarrow}_b,\,\breve\delta,\,\breve\mathbf{l}ambda$
are equimeasurable. On the one hand, Eq. \eqref{eq rel mayo MR}, together with the equality above imply that
$\max\{ \beta^\downarrow_j(x)\, , \, e(x)\} =\mathbf{l}ambda_j(x)$, for $j\in\mathbb{I}_{r}$ and a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$ and hence,
by construction, $\breve\delta=\breve\mathbf{l}ambda\,$.
On the other hand, by \cite[Corollary 5.6]{BMS15} we also get that $\breve{\beta^\downarrow} _b=\breve\delta\,$.
Therefore, $\breve{\beta^\downarrow} _b=\breve\delta=\breve\mathbf{l}ambda\,$; in particular, we get that
$\max\{ \beta^\downarrow_j(x)\, , \, b\} =\mathbf{l}ambda_j(x)$, for $j\in\mathbb{I}_{r}$ and a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})$.
\noi
Notice that, since $\breve \gamma_c$ and $\breve\lambda$ are equi-measurable, we have
$|\breve\lambda^{-1}(\max\{\frac{\alpha_j}{p}\, , \, c\})|=|{\breve \gamma_c} ^{-1}(\max\{\frac{\alpha_j}{p} \, , \, c\})|$ for $j\in\mathbb{I}_{r}\,$;
thus, $\breve\lambda$ takes the values $\max\{\frac{\alpha_j}{p},\,c\}$ for $j\in\mathbb{I}_{r}$ (off a zero-measure set).
As $\breve\mathbf{l}ambda$ and $\breve \gamma_c$ are both induced by the vector-valued functions
$$
{\mathbf t}ext{Spec}({\mathbf{a}l W})\ni x\mapsto (\max\{\frac{\alphapha_j}{p} \, , \, c\})_{j\in\mathbb{I}_{r}}\in(\mathbb{R}}\def\C{\mathbb{C}_+^r)^\downarrow
\peso{and} {\mathbf t}ext{Spec}({\mathbf{a}l W})\ni x\mapsto (\mathbf{l}ambda_j(x))_{j\in\mathbb{I}_{r}}\in(\mathbb{R}}\def\C{\mathbb{C}_+^r)^\downarrow
$$
respectively, we conclude that
$$
(\max\{\frac{\alphapha_j}{p}\, , \, c\})_{j\in\mathbb{I}_{r}}=(\mathbf{l}ambda_j(x))_{j\in\mathbb{I}_{r}}
=(\max\{ \beta^\downarrow_j(x)\, , \, b\} )_{j\in\mathbb{I}_{r}} \ ,
\peso{for} x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})\ .
$$
From this last fact, we see that we can set $b=c$ and the result follows.
\end{proof}
\begin{rem}\mathbf{l}abel{interpret de prop dim unif} Consider the notations and terminology from Theorem
\ref{teo dim unif}. We point out that there is a simple
formula for the constant $c$. Indeed, notice that if
$\mathcal F^{\rm op}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ is the structural solution of the optimization problem considered in Theorem
\ref{teo dim unif} then
$$
\sum_{j\in\mathbb{I}_r}\lambda_j([S_{E(\mathcal F^{\rm op})}]_ x)
=\operatorname{tr}([S_{E(\mathcal F^{\rm op})}]_ x)=\sum_{j\in\mathbb {I} _n}\|\Gamma f^{\rm op}_j(x)\|^2 \peso{for a.e.} x\in\mathbb{T}^k\,.
$$
Therefore,
\begin{equation}\mathbf{l}abel{formu c}
\sum_{i\in\mathbb{I}_r}\max\{\frac{\alpha_i}{p} \, , \, c\}=\frac 1p \ \sum_{j\in\mathbb {I} _n}\alpha_j\ ,
\end{equation}
which shows that $c$ is obtained by the previous discrete waterfilling condition.
$\triangle$
\end{rem}
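\noi
In particular, Eq. \eqref{formu c} can be solved for $c$ by a one-dimensional bisection (a discrete waterfilling). The following Python sketch, with invented data $\alpha$, $p$, $d$, illustrates the computation.
\begin{verbatim}
import numpy as np

def waterfill_constant(alpha, p, d, iters=200):
    # solve  sum_{i <= r} max(alpha_i / p, c) = (1/p) sum_j alpha_j,  with r = min(n, d)
    alpha = np.sort(np.asarray(alpha, float))[::-1]
    r = min(len(alpha), d)
    target = alpha.sum() / p
    a = alpha[:r] / p
    lo, hi = 0.0, target / r + a.max()
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if np.maximum(a, c).sum() < target else (lo, c)
    return c

alpha, p, d = [0.4, 0.3, 0.2, 0.1], 1.0, 3     # toy data: here c = 0.3
c = waterfill_constant(alpha, p, d)
print(c, [max(a / p, c) for a in sorted(alpha, reverse=True)[:d]])
\end{verbatim}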
\noi Tight frames play a central role in applications. On the one hand, they give rise to simple
reconstruction formulas; on the other hand, they have several robustness
properties related to the numerical stability of the encoding-decoding scheme
that they induce. It is therefore
important to have conditions that assure the existence of tight frames with prescribed norms: in the finite dimensional context
(i.e. finite frame theory) this problem is solved in \cite{CKFT} in terms of the so-called fundamental inequality. As a consequence of Remark
\ref{interpret de prop dim unif}, we obtain conditions for the existence of tight
SG frames with norms given by a finite sequence of positive numbers, in the uniform dimensional case.
\begin{cor} \mathbf{l}abel{coro tight 2}
Consider the notations and hypothesis of Theorem \ref {teo dim unif}.
In the uniform dimensional case (so in particular, $d(x)=d$ for a.e. $x\in {\mathbf t}ext{Spec}({\mathbf{a}l W})\,$), we have that
$$
\text{\rm there exist {\bf tight} frames in $\mathfrak {B}_\alpha({\cal W})$ } \ \iff \ \
d=r\le n \peso{ \rm and} d \cdot\alpha_1 \le \sum_{j\in\mathbb {I} _n}\alpha_j\ .
$$
\end{cor}
\proof It is a direct consequence of Eqs. \eqref{eq defi la op unif dim} and \eqref{formu c}.
\qed
\mathfrak{s}ubsection{Existence and structure of $P^{\mathbf{a}l W}_\varphi$-minimizers in $\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$: the general case}\mathbf{l}abel{subsec gral mi gral}
It turns out that Theorem \ref{teo dim unif} allows us to reduce the study of the
spectral structure of minimizers of convex potentials in FSI subspaces with norm restrictions to a finite dimensional model.
Indeed, consider the Notations \ref{nota impor2} and, for the sake of simplicity, assume that $p_i>0$ for every
$i\in\mathbb{I}_{\ell}\,$.
Consider $\alphapha\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$ and let $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$.
For each $i\in\mathbb{I}_\ell$ let ${\mathbf{a}l W}_i\mathfrak{s}ubset L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ be the closed FSI subspace whose
fibers coincide with those of ${\mathbf{a}l W}$ in $Z_i=d^{-1}(i)$ and are the zero subspace elsewhere,
and let $\mathcal F_i=\{f_{i,j}\}_{j\in\mathbb {I} _n}\in{\mathbf{a}l W}_i^n$ be determined by
$$
\mathcal{G}amma f_{i,j}(x)=\chi_{Z_i}(x)\ \mathcal{G}amma f_j(x) \peso{for a.e.} x\in\mathbb{T}^k \peso{and} j\in\mathbb {I} _n\ ,
$$
where $\chi_Z$ denotes the characteristic function of a measurable set $Z\mathfrak{s}ubset\mathbb{T}^k$.
Fix a convex function $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$. Since each ${\mathbf{a}l W}_i$ is also a uniform FSI, it satisfies the
hypothesis of Theorem \ref{teo dim unif}. Then we conclude that for each $i\in\mathbb{I}_\ell$ there exists $\mathcal F_i^{\rm dis}=\{f_{i,j}^{\rm dis}\}_{j\in\mathbb {I} _n}\in{\mathbf{a}l W}_i ^n$ such that
$$
\|f_{i,j}^{\rm dis}\|^2=\|f_{i,j}\|^2 \peso{for} j\in\mathbb{I}_n \peso{and} P^{{\mathbf{a}l W}_i}_\varphi (E(\mathcal F_i^{\rm dis}))\mathbf{l}eq P^{{\mathbf{a}l W}_i}_\varphi (E(\mathcal F_i)) \peso{for} i\in\mathbb{I}_\ell\ .
$$
We can recover the initial family $\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}$ by gluing together the families $\mathcal F_i$ for $i\in\mathbb{I}_\ell\,$.
Similarly, if we glue the families $\mathcal F_i^{\rm dis}$ we get a family
$\mathcal F^{\rm dis}$ (in such a way that $(\mathcal F^{\rm dis})_i=\mathcal F_i^{\rm dis}\in {\mathbf{a}l W}_i^n$ as before, for $i\in\mathbb{I}_\ell$).
Notice that $\mathcal F^{\rm dis}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ since
$$
\|f_j^{\rm dis}\|^2=\mathfrak{s}um_{i\in\mathbb{I}_\ell}\|f_{i,j}^{\rm dis}\|^2= \|f_j\|^2=\alphapha_j \peso{for} j\in\mathbb{I}_n \ ,
$$
using the fact that the subspaces $\{{\mathbf{a}l W}_i\}_{i\in\mathbb{I}_\ell}$ are mutually orthogonal. Also
$$
P^{{\mathbf{a}l W}}_\varphi (E(\mathcal F^{\rm dis}))= \mathfrak{s}um_{i\in\mathbb{I}_\ell} P^{{\mathbf{a}l W}_i}_\varphi (E(\mathcal F_i^{\rm dis}))\mathbf{l}eq
\mathfrak{s}um_{i\in\mathbb{I}_\ell} P^{{\mathbf{a}l W}_i}_\varphi (E(\mathcal F_i))= P^{{\mathbf{a}l W}}_\varphi (E(\mathcal F))\ .
$$
Now, the fine spectral structure of $\mathcal F_i^{\rm dis}$
is of a discrete nature (as described in Theorem \ref{teo dim unif}). Moreover, this fine structure is
explicitly determined in terms of the matrix
\begin{equation}\mathbf{l}abel{las B}
B=(p_i^{-1}\, \|f_{i,j}\|^2)_{i\in \mathbb{I}_\ell,\,j\in\mathbb {I} _n} \in \mathbb{R}}\def\C{\mathbb{C}_{+}^{\ell{\mathbf t}imes n} \peso{fulfilling the identity}
p^T\,B=\alphapha \ ,
\end{equation}
where $p=(p_i)_{i\in\mathbb{I}_\ell}$ and $\alphapha=(\alphapha_i)_{i\in\mathbb {I} _n}\,$.
Notice that the set of all such matrices forms a convex compact subset of $\mathbb{R}}\def\C{\mathbb{C}_{+}^{\ell{\mathbf t}imes n}$.
The advantage of this approach is that we can use simple tools such as convexity,
compactness and continuity in a finite dimensional context, to show existence of
optimal spectral structure within our reduced model. Nevertheless, the reduced
model has a rather combinatorial nature (see the definition of
$\Lambda_{\alphapha,\,p}^{\rm op}(\delta)$ below), so we build it in steps.
\begin{notas}\mathbf{l}abel{muchas nots} In order to simplify the exposition of the next result,
we introduce the following notations that are motivated by the remarks above. Let $ m \, , \, n\in\mathbb{N}$:
\begin{enumerate}
\item Inspired by Eq. \eqref{las B}, for finite sequences $\alphapha\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$
and $p=(p_i)_{i\in\mathbb{I}_{m}}\in \mathbb{R}}\def\C{\mathbb{C}_{>0}^m$ we consider the set of weighted partitions
$$
\begin{array}{rl}
W_{\alphapha,\,p}
&=\{ B\in \mathbb{R}}\def\C{\mathbb{C}_+^{m{\mathbf t}imes n } \ : \ p^T\, B = \alpha \, \}
\ . \end{array}
$$
It is straightforward to check that $W_{\alphapha,\,p}$ is a convex compact set.
\item \mathbf{l}abel{item2}
Given $d\in \mathbb{N}$ we define the map $L_d : \mathbb{R}}\def\C{\mathbb{C}_+^n {\mathbf t}o (\mathbb{R}}\def\C{\mathbb{C}_+^d)^\downarrow$ given by
\begin{equation}\mathbf{l}abel{eq defi gammacd}
L_d (\gammamma )
\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \mathbf{l}eft\{
\begin{array}{ccc}
(\max\{\gammamma^\downarrow_i\, , \, c_d(\gammamma) \})_{i\in\mathbb{I}_{d}} &
& {\mathbf t}ext{if } \ d\mathbf{l}eq n \\
(\gammamma^\downarrow,0_{d-n}) & & {\mathbf t}ext{if } \ d> n
\end{array} \peso{for every} \gammamma \in \mathbb{R}}\def\C{\mathbb{C}_+^n \ ,
\right.
\end{equation}
where the constant $c_d(\gammamma) \in \mathbb{R}}\def\C{\mathbb{C}_+$ is uniquely determined by ${\mathbf t}r \, L_d (\gammamma ) = {\mathbf t}r \, \gammamma$, in case $d\mathbf{l}e n$.
By \cite[Prop. 2.3]{MR10} we know that $\gammamma\prec L_d (\gammamma )\,$, and $L_d (\gammamma )\prec \beta$
for every $\beta\in\mathbb{R}}\def\C{\mathbb{C}^d$ such that $\gammamma\prec \beta$ (a small worked example of the map $L_d$ is given right after these notations).
\item\mathbf{l}abel{item3} Let $\delta=(d_i)_{i\in\mathbb{I}_{m}}\in\mathbb{N}^m$ be such that $1 \mathbf{l}e d_1< \mathbf{l}dots< d_m$. For each
$B \in W_{\alphapha,\,p}$ consider
\begin{equation}\mathbf{l}abel{Bdelta}
B_{\delta}= \big[ \, L_{d_i} (R_i(B)\,)\, \big]_{i\in\mathbb{I}_{m}}
\in \prod_{i\in\mathbb{I}_{m}} (\mathbb{R}}\def\C{\mathbb{C}_+^{d_i})^\downarrow
\ ,
\end{equation}
where $R_i(B)\in \mathbb{R}}\def\C{\mathbb{C}_+^n$ denotes the $i$-th row of $B$.
Moreover, using the previous notations we introduce the {\it reduced model (for optimal spectra)}
$$
\Lambda^{\rm op}_{\alphapha,\,p}(\delta)\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \{ B_\delta :\ B\in W_{\alphapha,\,p}\}
\mathfrak{s}ubset \prod_{i\in\mathbb{I}_{m}} (\mathbb{R}}\def\C{\mathbb{C}_+^{d_i})^\downarrow\, .
$$
In general, $\Lambda^{\rm op}_{\alphapha,\,p}(\delta)$ is not a convex set and indeed,
the structure of this set seems rather involved; notice that
item 2 above shows that the elements of $\Lambda^{\rm op}_{\alphapha,\,p}(\delta)$
are $\prec$-minimizers within appropriate sets.
$\triangle$
\end{enumerate}
\end{notas}
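\noi To illustrate the map $L_d$ from item \ref{item2} of Notations \ref{muchas nots} with purely illustrative numbers, take $n=3$, $\gamma=(5,1,2)\in\mathbb{R}_+^3$ and $d=2$. Then $\gamma^\downarrow=(5,2,1)$, ${\rm tr}\,\gamma=8$ and the constant $c_2(\gamma)=3$, so that
$$
L_2(\gamma)=(\max\{5,3\}\, ,\,\max\{2,3\})=(5,3)\ .
$$
One checks directly that $\gamma\prec(5,3)$, and that every $\beta=(\beta_1,\beta_2)\in(\mathbb{R}^2)^\downarrow$ with $\gamma\prec\beta$ satisfies $\beta_1\geq 5$ and $\beta_1+\beta_2=8$, hence $(5,3)\prec\beta$. On the other hand, for $d=4>n$ one simply gets $L_4(\gamma)=(5,2,1,0)$.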
\noi
The following result describes the existence and uniqueness of the solution to an optimization
problem in the reduced model for a fixed $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$, which corresponds to the minimization of the convex potential $P^{\mathbf{a}l W}_\varphi$ in $\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ for a
FSI subspace ${\mathbf{a}l W}$ and a sequence of weights $\alphapha\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$. The proof of this result is presented in section \ref{subsec reduced} (Appendix).
\begin{teo}\mathbf{l}abel{teo estruc prob reducido unificado}
Let $ m,\, n\in\mathbb{N}$, $\alphapha\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$, $p=(p_i)_{i\in\mathbb{I}_{m}}\in \mathbb{R}}\def\C{\mathbb{C}_{>0}^m$ and
$\delta=(d_i)_{i\in\mathbb{I}_{m}}\in\mathbb{N}^m$ be such that $1\mathbf{l}eq d_1< \mathbf{l}dots< d_m$.
If $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ then
there exists $\Psi^{\rm op}=[\psi_i^{\rm op}]_{i\in\mathbb{I}_{m}}\in \Lambda^{\rm op}_{\alphapha,\,p}(\delta)$ such that
$$ \mathfrak{s}um_{i\in\mathbb{I}_{m}} {p_i}\,{\mathbf t}r(\varphi(\psi_i^{\rm op}) )
\mathbf{l}eq \mathfrak{s}um_{i\in\mathbb{I}_{m}} {p_i}\,{\mathbf t}r(\varphi(\psi_i) )
\peso{for every} \Psi=[\psi_i]_{i\in\mathbb{I}_{m}}\in \Lambda^{\rm op}_{\alphapha,\,p}(\delta)\,.$$
Moreover:
\begin{enumerate}
\item If $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ then such $\Psi^{\rm op}$ is unique;
\item If $n\geq d_m$ and $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ is differentiable in
$\mathbb{R}}\def\C{\mathbb{C}_+\,$ then $\Psi^{\rm op}\in \prod_{i\in\mathbb{I}_m} (\mathbb{R}}\def\C{\mathbb{C}_{>0}^{d_i})^\downarrow$.
\qed
\end{enumerate}
\end{teo}
\noi
We now turn to the statement and proof of our main result in this section (Theorem \ref{teo min pot fsi generales} below). Hence, we let ${\mathbf{a}l W}$ be an arbitrary FSI subspace of $L^2(\mathbb{R}}\def\C{\mathbb{C}^k)$ and let $\alphapha=(\alphapha_i)_{i\in\mathbb {I} _n}\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$. Recall that
$$
\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})=\{\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in{\mathbf{a}l W}^n:\ E(\mathcal F) \ {\mathbf t}ext{is a Bessel sequence }, \ \|f_i\|^2=\alphapha_i\,,\ i\in\mathbb {I} _n\}\,.
$$
Given $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$, in what follows we show the existence of finite sequences $\mathcal F^{\rm op}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ such that
$$
P_\varphi^{\mathbf{a}l W}(E(\mathcal F^{\rm op}))=\min\{P_\varphi^{\mathbf{a}l W}(E(\mathcal F)): \ \mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})\}\,.
$$ Moreover, in case $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ then we describe the fine spectral structure of the frame operator of $E(\mathcal F^{\rm op})$ of any such $\mathcal F^{\rm op}$.
\begin{teo}\mathbf{l}abel{teo min pot fsi generales}
Let $\alphapha=(\alphapha_i)_{i\in\mathbb {I} _n}\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$,
consider the Notations \ref{nota impor2}
and fix $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$.
Then, there exists $\mathcal F^{\rm op}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ such that:
\begin{enumerate}
\item $\mathbf{l}a_j([S_{E(\mathcal F^{\rm op})}]_x)=:\psi^{\rm op}_{i,j}\in\mathbb{R}}\def\C{\mathbb{C}_+$ is a.e. constant for $x\in Z_i$, $j\in \mathbb{I}_{i}$ and $i\in\mathbb{I}_\ell$;
\item For every $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ we have that
$$
\mathfrak{s}um_{i\in\mathbb{I}_{\ell}} {p_i}\mathbf{l}eft( \mathfrak{s}um_{j\in\mathbb{I}_i}\varphi(\psi^{\rm op}_{i,j})\right)=P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F^{\rm op}))\mathbf{l}eq P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F))\ .
$$
\end{enumerate}
If we assume that $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ then:
\begin{enumerate}
\item[a)] If $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ is such that $P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F))=P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F^{\rm op}))$ then $S_{E(\mathcal F)}$ has the same fine spectral structure as $S_{E(\mathcal F^{\rm op})}$.
\item[b)] If we assume further that $\varphi$ is differentiable in $\mathbb{R}}\def\C{\mathbb{C}_+$ and that
$n\geq i$ for every $i\in\mathbb{I}_\ell$ such that $p_i=|Z_i|>0$, then $E(\mathcal F)$ is a frame for ${\mathbf{a}l W}$.
\end{enumerate}
\end{teo}
\begin{proof}
Without loss of generality, we can assume that there exists an $m\mathbf{l}eq \ell $ such that $p_i=|Z_i|>0$ for $i\in\mathbb{I}_{m}$ and $p_i=|Z_i|=0$ for $m+1\mathbf{l}eq i\mathbf{l}eq \ell$
(indeed, the general case follows by restricting the argument given below to the set of indexes $i\in\mathbb{I}_\ell$ for which $p_i=|Z_i|>0$).
We set $p=(p_i)_{i\in\mathbb{I}_m}\in\mathbb{R}}\def\C{\mathbb{C}^m_{>0}$ and consider
$\mathcal F=\{f_i\}_{i\in\mathbb {I} _n}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$. For $i\in\mathbb{I}_{m}$ and $j\in\mathbb {I} _n$ set
$$
B_{i,j}\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \frac{1}{ p_i}\,\int_{ Z_i} \|\mathcal{G}amma f_j(x)\|^2\ dx
\implies \mathfrak{s}um_{i\in\mathbb{I}_{m}} p_i\,B_{i,j}
=\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W})}\|\mathcal{G}amma f_j(x)\|^2\ dx=\|f_j\|^2=\alphapha_j \ ,
$$
for every $j\in\mathbb {I} _n\,$, since ${\mathbf t}ext{Spec}({\mathbf{a}l W})=\cup_{i\in\mathbb{I}_m}Z_i$.
Then $p^T\, B=\alphapha $ so using Notations \ref{muchas nots}, $B\in W_{\alphapha,\, p}\,$.
\noi Now, fix $i\in\mathbb{I}_{m}$ and consider the weights
$\beta^i= p_i\, R_{i}(B)^\downarrow\in\mathbb{R}}\def\C{\mathbb{C}_+^n\,$. For the sake of simplicity we assume,
without loss of generality, that $\beta^i=p_i\, R_{i}(B)$. For $i\in\mathbb{I}_m\,$,
let ${\mathbf{a}l W}_i$ be the FSI subspace whose fibers coincide with those of ${\mathbf{a}l W}$ inside $Z_i$ and that
are the zero subspace elsewhere; hence, Spec$({\mathbf{a}l W}_i)= Z_i$ and $\dim J_{{\mathbf{a}l W}_i}(x)=i$ for $x\in{\mathbf t}ext{Spec}({\mathbf{a}l W}_i)$.
For $i\in\mathbb{I}_m\,$,
set $\mathcal F_i=\{f_{i,j}\}_{j\in\mathbb {I} _n}$ where $\mathcal{G}amma f_{i,j}(x)=\mathcal{G}amma f_{j}(x)$ for $x\in Z_i$ and $\mathcal{G}amma f_{i,j}(x)=0$ elsewhere; then $\mathcal F_i\in \mathcal{M}_d(\mathbb{C})hfrak B_{\beta^i}({\mathbf{a}l W}_i)$ and
$$
[S_{E(\mathcal F_i)}]_x=S_{\mathcal{G}amma \mathcal F_i(x)}=[S_{E(\mathcal F)}]_x \peso{for} x\in Z_i={\mathbf t}ext{Spec}({\mathbf{a}l W}_i) \ , \quad i\in\mathbb{I}_m\, .$$
If we consider the minimization of $P_\varphi^{{\mathbf{a}l W}_i}$ in $\mathcal{M}_d(\mathbb{C})hfrak B_{\beta^i}({\mathbf{a}l W}_i)$ then, Theorem \ref{teo dim unif} and Remark \ref{interpret de prop dim unif} imply that there exists $c_i\geq 0$
such that
\begin{equation} \mathbf{l}abel{desi caso uniforme}
p_i\, \mathfrak{s}um_{j\in\mathbb{I}_{i}} \varphi (\max\{B_{i,j}\, , \, c_i\}) \mathbf{l}eq P_\varphi^{{\mathbf{a}l W}_i}(E(\mathcal F_i))
\peso{and} \mathfrak{s}um_{j\in \mathbb{I}_{i}} \max\{B_{i,j}\, , \, c_i\}=\mathfrak{s}um_{j\in\mathbb {I} _n} B_{i,j}
\,.
\end{equation}
Using Notations \ref{muchas nots} and Eq. \eqref{eq. c es el correcto}, we get that for $i\in\mathbb{I}_m$
$$
L_{i}(R_{i}(B))= (\max\{B_{i,j}\, , \, c_i\})_{j\in \mathbb{I}_{i}}
\implies B_{\delta}=[(\max\{B_{i,j}\, , \, c_i\})_{j\in \mathbb{I}_{i}}]_{i\in\mathbb{I}_{m}}\in \Lambda^{\rm op}_{\alphapha,\,p}(\delta)\,,$$
where $\delta=(i)_{i\in\mathbb{I}_m}$. Notice that ${\mathbf{a}l W}=L(\mathcal{M}_d(\mathbb{C})hcal{H})lus_{i\in\mathbb{I}_{m}}{\mathbf{a}l W}_i$ (orthogonal sum) and hence
$$
\mathfrak{s}um_{i\in\mathbb{I}_{m}} p_i\,
\mathfrak{s}um_{j\in\mathbb{I}_{i}} \varphi (\max\{B_{i,j}\, , \, c_i\})
\mathbf{l}e \mathfrak{s}um_{i\in\mathbb{I}_{m}} P_\varphi^{{\mathbf{a}l W}_i}(E(\mathcal F_i))=P_\varphi^{\mathbf{a}l W}(E(\mathcal F))\,. $$
\noi
Let $[\psi^{\rm op}_i]_{i\in\mathbb{I}_{m}}=\Psi^{\rm op}
\,\in \Lambda_{\alphapha,\,p}^{\rm op}(\delta)$ be as in
Theorem \ref{teo estruc prob reducido unificado}. Then
\begin{equation}\mathbf{l}abel{desi potop}
\mathfrak{s}um_{i\in\mathbb{I}_{m}} { p_i}\, {\mathbf t}r(\varphi(\psi^{\rm op}_i))
= \mathfrak{s}um_{i\in\mathbb{I}_{m}} {p_i}\mathbf{l}eft( \mathfrak{s}um_{j\in\mathbb{I}_i}\varphi(\psi^{\rm op}_{i,j})\right)
\mathbf{l}eq \mathfrak{s}um_{i\in\mathbb{I}_{m}} p_i\, \mathfrak{s}um_{j\in\mathbb{I}_{i}}
\varphi (\max\{B_{i,j}\, , \, c_i\})\mathbf{l}eq P_\varphi^{\mathbf{a}l W}(E(\mathcal F))\ .
\end{equation}
Recall that by construction, there exists
$B^{\rm op}=(\gammamma_{i,j})_{(i\, , \, j)\in\mathbb{I}_{m}{\mathbf t}imes\mathbb {I} _n}\in W_{\alphapha,\,p}$
such that $B^{\rm op}_{\delta}=\Psi^{\rm op}$ (see item \ref{item3} in Notations \ref{muchas nots}). In this case,
$$
\psi^{\rm op}_i=L_{i}(\,(\gammamma_{i,j})_{j\in\mathbb {I} _n}) \implies (\gammamma_{i,j})_{j\in\mathbb {I} _n}\prec \psi^{\rm op}_i \peso{for} i\in\mathbb{I}_{m}\ .
$$
Let $\gammamma:{\mathbf t}ext{Spec}({\mathbf{a}l W})\rightarrow \mathbb{R}}\def\C{\mathbb{C}^n$ be given by $\gammamma(x)
=R_i(B^{\rm op})= (\gammamma_{i,j})_{j\in\mathbb {I} _n}$ if $x\in Z_i$, for $i\in\mathbb{I}_{m}\,$; similarly, let
$\mathbf{l}ambda:{\mathbf t}ext{Spec}({\mathbf{a}l W})\rightarrow \coprod_{i\in\mathbb{I}_{m}}\mathbb{R}}\def\C{\mathbb{C}^{i}$, $\mathbf{l}ambda(x)=\psi^{\rm op}_i$
if $x\in Z_i$, for $i\in\mathbb{I}_{m}\,$. Then, by the previous remarks we get that $\gammamma(x)\prec \mathbf{l}ambda(x)$
for $x\in{\mathbf t}ext{Spec}({\mathbf{a}l W})$.
\noi
Hence, by Theorem \ref{teo sobre disenio de marcos} there exists
$\mathcal F^{\rm op}=\{f_j^{\rm op}\}_{j\in\mathbb {I} _n}$ such that
$$
\|\mathcal{G}amma f_j^{\rm op}(x)\|^2=\gammamma_{i,j} \peso{and}
\mathbf{l}ambda_j([S_{E(\mathcal F^{\rm op})}]_x)=\psi_{i\, , \, j}^{\rm op}
\peso{for} x\in Z_i\, , \ \ j\in\mathbb{I}_i \peso{and} i\in\mathbb{I}_{m}\ .
$$
Since $B^{\rm op}\in W_{\alphapha,\,p}$ then
$$\|f_j^{\rm op}\|^2=\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W})}\|\mathcal{G}amma f_j^{\rm op}(x)\|^2\ dx=\mathfrak{s}um_{i\in\mathbb{I}_{m}} p_i \ \gammamma_{i,j}=\alphapha_j \peso{for} j\in\mathbb {I} _n\implies \mathcal F^{\rm op}\in
\mathcal{M}_d(\mathbb{C})hfrak B_{\alphapha}({\mathbf{a}l W})$$ and
\begin{equation}\mathbf{l}abel{eq casi estamos}
P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F^{\rm op}))=\int_{{\mathbf t}ext{Spec}({\mathbf{a}l W})} {\mathbf t}r(\varphi(\mathbf{l}ambda(x)))\ dx=\mathfrak{s}um_{i\in\mathbb{I}_{m}} p_i \ {\mathbf t}r(\varphi(\psi_i^{\rm op})),
\end{equation}
then by Eq. \eqref{desi potop} we see that
$P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F^{\rm op}))\mathbf{l}eq P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F))\,.$
Since $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak B_{\alphapha}({\mathbf{a}l W})$ was arbitrary, the previous facts show that $\mathcal F^{\rm op}$ satisfies items 1. and 2. in the statement.
\noi
Assume further that $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ and $\mathcal F\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ is such that $P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F))=P_\varphi^{{\mathbf{a}l W}}(E(\mathcal F^{\rm op}))$.
Then, by Eqs. \eqref{desi caso uniforme}, \eqref{desi potop} and \eqref{eq casi estamos} we see that
$$
p_i\, \mathfrak{s}um_{j\in\mathbb{I}_{i}} \varphi (\max\{B_{i,j}\, , \, c_i\}) = P_\varphi^{{\mathbf{a}l W}_i}(E(\mathcal F_i)) \peso{for} i\in\mathbb{I}_{m}\ .
$$
Therefore, by the case of equality in Theorem \ref{teo struct fina dim hom} and the uniqueness of $\Psi^{\rm op}$ from Theorem \ref{teo estruc prob reducido unificado}
we conclude that
$$
\mathbf{l}ambda_j([S_{E(\mathcal F)}]_x) =\mathbf{l}ambda_j([S_{E(\mathcal F_i)}]_x)=\psi^{\rm op}_{i,\,j} \peso{for} x\in Z_i\, , \ j\in\mathbb{I}_{i}\, , \ i\in\mathbb{I}_{m}\ .
$$
Finally, in case $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ is differentiable in $\mathbb{R}}\def\C{\mathbb{C}_+$ and $n\geq m$ then, again by
Theorem \ref{teo estruc prob reducido unificado}, we see that $S_{E(\mathcal F)}$ is bounded from below in ${\mathbf{a}l W}$ (since the vectors in $\Psi^{\rm op}$ have no zero entries) and hence $E(\mathcal F)$ is a frame for ${\mathbf{a}l W}$.
\end{proof}
\noi
We end this section with the following remarks. With the notations of Theorem \ref{teo min pot fsi generales}, notice that the optimal Bessel sequence $\mathcal F^{\rm op}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ depends on the convex function $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$, which was fixed in advance. That is, unlike the uniform case, we are not able to show that there exists $\mathcal F^{\rm univ}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ such that $\mathcal F^{\rm univ}$ is a $P^{\mathbf{a}l W}_\varphi$-minimizer in
$\mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$ for every $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$. It is natural to wonder whether there exists such a universal solution $\mathcal F^{\rm univ}\in \mathcal{M}_d(\mathbb{C})hfrak {B}_\alphapha({\mathbf{a}l W})$; we conjecture that this is always the case.
\mathfrak{s}ection{Appendix}\mathbf{l}abel{Appendixity}
\mathfrak{s}ubsection{The Schur-Horn theorem for measurable fields of self-adjoint matrices and applications}
The simple notion of majorization between real vectors has played an important role in finite frame theory in finite dimensions.
In particular, it is well known that the existence of finite sequences with prescribed norms and frame operator can be characterized in terms of majorization, applying the Schur-Horn theorem.
\noi
Next we develop a Schur-Horn type theorem for measurable fields of self-adjoint matrices and use this result to prove Theorem \ref{teo:mayo equiv}.
Our proof is an adaptation of that given in \cite{HJ13} for the classical Schur-Horn theorem. We will use the existence of measurable eigenvalues and eigenvectors (i.e. diagonalization by measurable fields of unitary matrices) of measurable fields of self-adjoint matrices from \cite{RS95}.
In what follows we consider a measure subspace $(X,\, \mathcal{M}_d(\mathbb{C})hcal X,\, |\,\cdot|)$ of the measure space $(\mathbb{T}^k,\,\mathcal{M}_d(\mathbb{C})hcal B(\mathbb{T}^k),\,|\,\cdot|)$ of the $k$-torus with Lebesgue measure on Borel sets.
\begin{teo} \mathbf{l}abel{teo:mayo y el unitario} Let $A(\cdot): X {\mathbf t}o \mathcal{H}(n)$ be a measurable field of self-adjoint matrices with associated measurable eigenvalues $b_j:X{\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}$ for $j\in \mathbb{I}_n$ such that $b_1\geq \cdots \geq b_n\,$. Let $c_j:X{\mathbf t}o \mathbb{R}}\def\C{\mathbb{C}$ be measurable functions for $j\in \mathbb{I}_n\,$. The following statements are equivalent:
\begin{enumerate}
\item $c(x)=(c_1(x)\, , \, \cdots \, , \, c_n(x))\prec b(x)=(b_1(x)\, , \, \cdots\, , \, b_n(x))$, for a.e. $x\in X$.
\item There exists a measurable field of unitary matrices $U(\cdot):X{\mathbf t}o {\mathbf{a}l U}(n)$, such that
\begin{equation}\mathbf{l}abel{eq SH}
d(U(x)^*\,A(x)\ U(x))=c(x)\, , \peso{for a.e.} x\in X\,,
\end{equation}
where $d(B)\in \C^n$ denotes the main diagonal of the matrix $B\in {\mathbf{a}l M}_n(\C)$.
\end{enumerate}
\end{teo}
\begin{proof}
First notice that the implication $2.\implies 1.$ follows from the classical Schur theorem.
\noi
$1.\implies 2.\,$:
By considering a convenient measurable field of permutation matrices, we can (and will) assume that the entries of the vector $c(x)$ are also arranged in non-increasing order: $c_1(x)\geq c_2(x)\geq \mathbf{l}dots \geq c_n(x)$. By the results from \cite{RS95} showing the existence of a measurable field of unitary matrices diagonalizing the field $A$, we can assume without loss of generality that $A(x)=D_{b(x)}$ where $D_{b(x)}$ is the diagonal matrix with main diagonal $(b_1(x)\, , \, \mathbf{l}dots \, , \, b_n(x))$ for a.e. $x\in X$.
\noi We will argue by induction on $n$. For $n=1$ the result is trivial.
Hence, we may assume that $n\geq 2$.
Since $c(x)\prec b(x)$, we have $b_1(x)\geq c_1(x)\geq c_n(x)\geq b_n(x)$, so if $b_1(x)=b_n(x)$ it follows that
all the entries of $c(x)$ and $b(x)$ coincide, $A(x)=c_1(x) I_n\,$, and we can take $U(x)=I_n\,$ for every such $x \in X$.
By considering a convenient partition of $X$ we may therefore assume that $b_1(x)>b_n(x)$ in $X$.
Similarly, in case $c_1(x)=c_n(x)$ then the unitary matrix $U(x)=n^{-1/2}\, (w^{j\,k})_{j,k\in\mathbb {I} _n}\,$,
where $w=e^{-2\pi i/n}$, satisfies that
$d(U(x)^*\, D_{b(x)}\ U(x))=(c_1(x)\, , \, \mathbf{l}dots \, , \, c_n(x))$.
Therefore,
by considering a convenient partition of $X$ we may assume that $c_1(x)>c_n(x)$ on $X$.
\noi
For $n=2$, we have $b_1(x)>b_2(x)$ and $b_1(x)\geq c_1(x)\geq c_2(x)= (b_1(x)- c_1(x))+b_2(x)\geq b_2(x).$ Consider the matrix
$$U(x)=\frac{1}{\mathfrak{s}qrt{b_1(x)-b_2(x)}}
\begin{pmatrix} \mathfrak{s}qrt{b_1(x)-c_2(x)} &-\mathfrak{s}qrt{c_2(x)-b_2(x)} \\
\mathfrak{s}qrt{c_2(x)-b_2(x)} &\mathfrak{s}qrt{b_1(x)-c_2(x)}
\end{pmatrix} \peso{for a.e.} x\in X\,.
$$
Notice that $U(\cdot):X {\mathbf t}o {\mathbf{a}l M}_2(\C)$ is a measurable field of matrices and an easy computation reveals that $U(x)^*\, U(x)=I_2$, so $U(x)$ is unitary for a.e.
$x\in X$. A further computation shows that
$$U(x)^*\, A(x)\ U(x)=
\begin{pmatrix} &c_1(x)& &*& \\
&*& &c_2(x)&
\end{pmatrix} \peso{for a.e.} x\in X\,.
$$
That is, $d(U(x)^*\, A(x)\, U(x))=(c_1(x)\, , \, c_2(x))$ and $U(\cdot)$ has the desired properties.
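\noi For the reader's convenience, let us spell out the $(1,1)$ entry of this computation (the $(2,2)$ entry is analogous): since $c_1(x)+c_2(x)=b_1(x)+b_2(x)$, we get
$$
\big(U(x)^*\, A(x)\, U(x)\big)_{11}=\frac{b_1(x)\,(b_1(x)-c_2(x))+b_2(x)\,(c_2(x)-b_2(x))}{b_1(x)-b_2(x)}=\frac{(b_1(x)-b_2(x))\,(b_1(x)+b_2(x)-c_2(x))}{b_1(x)-b_2(x)}=c_1(x)\ .
$$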
\noi
Suppose that $n\geq 3$ and assume that the theorem is true if the vectors $c(x)$ and $b(x)$ have size at most $n-1$.
For each $x\in X$ let $k(x)$ be the largest integer $k\in\mathbb {I} _n$ such that $b_k(x)\geq c_1(x)$. Since $b_1(x)\geq c_1(x)>c_n(x)\geq b_n(x)$, we see that $1\mathbf{l}eq k\mathbf{l}eq n-1$. Then, by considering a convenient partition of $X$ into measurable sets we can assume that $k(x)=k$ for $x\in X$. Therefore, by definition of $k$ we get that $b_k(x)\geq c_1(x)>b_{k+1}(x)$ for $x\in X$.
Let $\eta(x)=b_k(x)+b_{k+1}(x)-c_1(x)$ and observe that $\eta(x)=(b_k(x)-c_1(x))+b_{k+1}(x)\geq b_{k+1}(x)$. Then, the measurable vector $(b_k(x), b_{k+1}(x))$ majorizes the measurable vector $(c_1(x), \eta(x))$ and $b_k(x)>b_{k+1}(x)$ for a.e. $x\in X$.
Let $$D_1(x)=\begin{pmatrix} &b_k(x)& &0& \\
&0& &b_{k+1}(x)&
\end{pmatrix} \peso{for a.e.} x\in X\,.$$
By the case $n=2$ we obtain a measurable field of unitary matrices $U_1(\cdot):X\rightarrow {\mathbf{a}l U}(2)$ such that
$$d(U_1(x)^*\, D_1(x)\, U_1(x))= (c_1(x), \eta(x)) \peso{for a.e.} x\in X\,.$$
Since $b_k(x)=\eta(x)+(c_1(x)-b_{k+1}(x))>\eta(x)$, we have:
\noi
If $k=1$ then $b_1(x)>\eta(x)\geq b_2(x)\geq \cdots \geq b_n(x)$; if we let $D_2(x)\in \mathbb {M}_{n-2}(\C)$ be the diagonal matrix with main diagonal $(b_3(x)\, , \, \mathbf{l}dots\, , \, b_n(x))$ then $D_{b(x)}=D_1(x)L(\mathcal{M}_d(\mathbb{C})hcal{H})lus D_2(x)$ and
$$\begin{pmatrix} U_1(x)& 0 \\
0 &I_{n-2}
\end{pmatrix}^*
\begin{pmatrix} D_1(x)& 0 \\
0 &D_2(x)
\end{pmatrix}
\begin{pmatrix} U_1(x) &0 \\
0 &I_{n-2}
\end{pmatrix}=\begin{pmatrix}c_1(x) &Z(x)^*\\
Z(x) &V_1(x)\end{pmatrix}$$
where $Z(x)^*=(\overlineerline{z(x)}\, , \, 0\, , \, \mathbf{l}dots\, , \, 0)\in M_{1,(n-1)}(\C)$, $z(\cdot):X\rightarrow \C$ is a measurable function and $V_1(x)\in\mathbb {M}_{n-1}(\C)$
is the diagonal matrix with main diagonal $(\eta(x)\, , \, b_3(x)\, , \, \mathbf{l}dots\, , \, b_n(x))$.
Moreover, in this case it turns out that $(\eta(x)\, , \, b_3(x)\, , \, \cdots \, , \, b_n(x))$
majorizes $(c_2(x)\, , \, \cdots\, , \, c_n(x))$ for a.e. $x\in X$ (see \cite{HJ13}). By the inductive hypothesis there exists a measurable field $U_2(\cdot):X\rightarrow {\mathbf{a}l U}(n-1)$ such that $d(U_2(x)^* V_1(x) U_2(x))=(c_2(x)\, , \, \cdots\, , \, c_n(x))$. Hence, if we set $U(x)=(U_1(x)L(\mathcal{M}_d(\mathbb{C})hcal{H})lus I_{n-2})\cdot (1L(\mathcal{M}_d(\mathbb{C})hcal{H})lus U_2(x))$ for $x\in X$ then $U(\cdot):X\rightarrow {\mathbf{a}l U}(n)$ has the desired properties.
\noi
If $k>1$ then $b_1(x)\geq\mathbf{l}dots \geq b_{k-1}(x)\geq b_{k}(x)>\eta(x)\geq b_{k+1}(x)\geq \mathbf{l}dots \geq b_n(x)$.
Let $D_2(x)\in \mathbb {M}_{n-2}(\C)$ be the diagonal matrix with main diagonal
$$\beta(x)\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ (b_1(x)\, , \, \mathbf{l}dots\, , \, b_{k-1}(x)\, , \, b_{k+2}(x)\, , \, \mathbf{l}dots\, , \, b_n(x))\in\mathbb{R}}\def\C{\mathbb{C}^{n-2}.$$
Notice that in this case
$$\begin{pmatrix} U_1(x)& 0 \\
0 &I_{n-2}
\end{pmatrix}^*
\begin{pmatrix} D_1(x)& 0 \\
0 &D_2(x)
\end{pmatrix}
\begin{pmatrix} U_1(x) &0 \\
0 &I_{n-2}
\end{pmatrix}=\begin{pmatrix}c_1(x) &W(x)^*\\
W(x) &V_2(x)\end{pmatrix}$$
where $W(x)^*=(\overlineerline{w(x)}\, , \, 0\, , \, \mathbf{l}dots\, , \, 0)\in M_{1,(n-1)}(\C)$,
$w(\cdot):X\rightarrow \C$ is a measurable function and $V_2(x)\in M_{n-1}(\C)$ is the diagonal matrix with main diagonal
$$
\gammamma(x)\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ (\eta(x)\, , \, b_1(x)\, , \, \mathbf{l}dots\, , \, b_{k-1}(x)\, , \, b_{k+2}(x)\, , \, \mathbf{l}dots\, , \, b_n(x)) \peso{for a.e.} x\in X \ .
$$
It turns out that $(c_2(x)\, , \, \mathbf{l}dots\, , \, c_n(x))\prec \gammamma(x)$ for a.e. $x\in X$; by the inductive hypothesis there exists a measurable field
$U_2(\cdot):X\rightarrow {\mathbf{a}l U}(n-1)$ such that $d(U_2(x)^* V_2(x) U_2(x))=(c_2(x)\, , \, \mathbf{l}dots\, , \, c_n(x))$ for a.e. $x\in X$.
Notice that there exists a permutation matrix
$P\in{\mathbf{a}l U}(n)$ such that $P^*\, D_{b(x)}\, P=D_1(x)L(\mathcal{M}_d(\mathbb{C})hcal{H})lus D_2(x)\,$. Hence, if we set $U(x)=P\cdot (U_1(x)L(\mathcal{M}_d(\mathbb{C})hcal{H})lus I_{n-2})\cdot (1L(\mathcal{M}_d(\mathbb{C})hcal{H})lus U_2(x))$ for a.e. $x\in X$ then,
$U(\cdot):X\rightarrow {\mathbf{a}l U}(n)$ has the desired properties.
\end{proof}
\noi
Next we prove Theorem \ref{teo:mayo equiv}, based on the Schur-Horn theorem for measurable fields, i.e. Theorem \ref{teo:mayo y el unitario} above. Our approach is an adaptation of some known results in finite frame theory (see \cite{AMRS}).
\noi
{\bf Theorem \ref{teo:mayo equiv}} \it Let $b:\mathbb{T}^k\rightarrow (\mathbb{R}}\def\C{\mathbb{C}_+)^d$ and $c:\mathbb{T}^k\rightarrow (\mathbb{R}}\def\C{\mathbb{C}_+)^n$ be measurable vector fields.
The following statements are equivalent:
\begin{enumerate}
\item For a.e. $x\in \mathbb{T}^k$ we have that $c(x)\prec b(x)$.
\item There exist measurable vector fields $u_j: \mathbb{T}^k{\mathbf t}o \C^d$ for $j\in\mathbb {I} _n$ such that $\|u_j(x)\|=1$ for a.e. $x\in \mathbb{T}^k$
and $j\in \mathbb{I}_n\,$, and such that
$$
D_{b(x)}=\mathfrak{s}um_{j\in \mathbb{I}_n} c_j(x)\,\ u_j(x) \otimes u_j(x) \ , \peso{for a.e.} \ x\in \mathbb{T}^k\ .
$$
\end{enumerate}
\rm
\begin{proof}
First notice that the implication $2.\implies 1.$ follows from well known results in finite frame theory (see \cite{AMRS}) in each point $x\in \mathbb{T}^k$. Hence, we show $1.\implies 2.$ We assume, without loss of generality, that the entries of the vectors $b(x)$ and $c(x)$ are arranged in non-increasing order. We now consider the following two cases:
\noi
{\bf Case 1:} assume that $n<d$. We let ${\mathbf t}ilde c:\mathbb{T}^k\rightarrow \C^d$ be given by ${\mathbf t}ilde c(x)=(c(x)\, , \, 0_{d-n})$
for $x\in \mathbb{T}^k$. Then, ${\mathbf t}ilde c(x)\prec b(x)$ for $x\in\mathbb{T}^k$ and therefore, by Theorem
\ref{teo:mayo y el unitario} there exists a measurable field $U(\cdot):\mathbb{T}^k\rightarrow {\mathbf{a}l U}(d)$
such that
\begin{equation} \mathbf{l}abel{eq aplic SH11}
d(U(x)^* D_{b(x)} \, U(x))=(c_1(x)\, , \, \mathbf{l}dots\, , \, c_n(x)\, , \, 0_{d-n})\peso{for a.e.} x\in\mathbb{T}^k\,.
\end{equation} Let $v_1(x)\, , \, \mathbf{l}dots\, , \, v_d(x)\in\C^d$ denote the columns of $C(x)=D_{b(x)}^{1/2}\,U(x)$, for $x\in\mathbb{T}^k$.
Then, Eq. \eqref{eq aplic SH11} implies that:
$$ \|v_j(x)\|^2= c_j(x) \peso{for} j\in\mathbb{I}_n \ , \quad v_j=0 \peso{for} n+1\mathbf{l}eq j\mathbf{l}eq d $$
$$ \peso{and}
D_{b(x)}= C(x)\, C(x)^*=\mathfrak{s}um_{j\in\mathbb{I}_n} v_j(x)\otimes v_j(x) \peso{for a.e.} x\in\mathbb{T}^k \,.$$
Thus, the vectors $u_j(x)$ are obtained from $v_j(x)$ by normalization, for a.e. $x\in\mathbb{T}^k$ and $j\in\mathbb{I}_n\,$.
\noi
{\bf Case 2:} assume that $n\geq d$. We let ${\mathbf t}ilde b:\mathbb{T}^k\rightarrow \C^n$ be given by
${\mathbf t}ilde b(x)=(b(x)\, , \, 0_{n-d})$ for $x\in \mathbb{T}^k$. Then, $c(x)\prec {\mathbf t}ilde b(x)$ for $x\in\mathbb{T}^k$ and therefore, by Theorem
\ref{teo:mayo y el unitario} there exists a measurable field $U(\cdot):\mathbb{T}^k\rightarrow {\mathbf{a}l U}(n)$
such that
\begin{equation} \mathbf{l}abel{eq aplic SH1}
d(U(x)^* D_{{\mathbf t}ilde b(x)} \, U(x))=(c_1(x)\, , \, \mathbf{l}dots\, , \, c_n(x))\peso{for a.e.} x\in\mathbb{T}^k\ .
\end{equation}
Let ${\mathbf t}ilde v_1(x)\, , \, \mathbf{l}dots\, , \, {\mathbf t}ilde v_n(x)\in\C^n$ denote the columns of $C(x)=D_{{\mathbf t}ilde b(x)}^{1/2}U(x)$, for $x\in\mathbb{T}^k$.
As before, Eq. \eqref{eq aplic SH1} implies that
$$
\|{\mathbf t}ilde v_j(x)\|^2= c_j(x) \peso{for} j\in\mathbb{I}_n \peso{and}
D_{{\mathbf t}ilde b(x)}= \mathfrak{s}um_{j\in\mathbb{I}_n} {\mathbf t}ilde v_j(x)\otimes {\mathbf t}ilde v_j(x) \peso{for a.e.} x\in\mathbb{T}^k \ .
$$
If we let ${\mathbf t}ilde v_j(x)=(v_{i,j}(x))_{i\in\mathbb {I} _n}$ then, the second identity above implies that $v_{i,j}(x)=0$ for a.e. $x\in\mathbb{T}^k$ and every $d+1\mathbf{l}eq i\mathbf{l}eq n$.
If we let $v_j(x)=(v_{i,j}(x))_{i\in\mathbb{I}_d}$ for a.e. $x\in\mathbb{T}^k$ and $j\in\mathbb {I} _n\,$, we get that
$$ \|v_j(x)\|^2= c_j(x) \peso{for} j\in\mathbb{I}_n \peso{and}
D_{b(x)}= \mathfrak{s}um_{j\in\mathbb{I}_n} v_j(x)\otimes v_j(x) \peso{for a.e.} x\in\mathbb{T}^k \,.$$
Thus, the vectors $u_j(x)$ are obtained from $v_j(x)$ by normalization, for a.e. $x\in\mathbb{T}^k$ and $j\in\mathbb{I}_n\,$.
\end{proof}
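\noi By way of illustration (with purely illustrative constant fields), take $d=2$, $n=3$, $b(x)\equiv(2,1)$ and $c(x)\equiv(1,1,1)$, so that $c(x)\prec \tilde b(x)=(2,1,0)$. Choosing the constant unit vectors $u_1=e_1$, $u_2=2^{-1/2}\,(1,1)$ and $u_3=2^{-1/2}\,(1,-1)$ we get
$$
\sum_{j\in\mathbb{I}_3} c_j(x)\ u_j(x)\otimes u_j(x)=\begin{pmatrix} 1&0\\ 0&0\end{pmatrix}+\frac12\begin{pmatrix} 1&1\\ 1&1\end{pmatrix}+\frac12\begin{pmatrix} 1&-1\\ -1&1\end{pmatrix}=\begin{pmatrix} 2&0\\ 0&1\end{pmatrix}=D_{b(x)}\ ,
$$
as predicted by the theorem.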
\mathfrak{s}ubsection{The reduced finite-dimensional model: proof of Theorem \ref{teo estruc prob reducido unificado}}\mathbf{l}abel{subsec reduced}
In this section we present the proof of Theorem \ref{teo estruc prob reducido unificado}, divided into two parts (namely, Propositions
\ref{teo estruc prob reducido} and \ref{era facilongo nomas} below).
\begin{pro}\mathbf{l}abel{teo estruc prob reducido}
Let $ m,\, n\in\mathbb{N}$, $\alphapha\in (\mathbb{R}}\def\C{\mathbb{C}_{>0}^n)^\downarrow$, $p=(p_i)_{i\in\mathbb{I}_{m}}\in \mathbb{R}}\def\C{\mathbb{C}_{>0}^m$ and
$\delta=(d_i)_{i\in\mathbb{I}_{m}}\in\mathbb{N}^m$ be such that $1\mathbf{l}eq d_1< \mathbf{l}dots< d_m$.
If $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}f$ then
there exists $\Psi^{\rm op}=[\psi_i^{\rm op}]_{i\in\mathbb{I}_{m}}\in \Lambda^{\rm op}_{\alphapha,\,p}(\delta)$ such that
$$ \mathfrak{s}um_{i\in\mathbb{I}_{m}} {p_i}\,{\mathbf t}r(\varphi(\psi_i^{\rm op}) )
\mathbf{l}eq \mathfrak{s}um_{i\in\mathbb{I}_{m}} {p_i}\,{\mathbf t}r(\varphi(\psi_i) )
\peso{for every} \Psi=[\psi_i]_{i\in\mathbb{I}_{m}}\in \Lambda^{\rm op}_{\alphapha,\,p}(\delta)\,.$$
Moreover, if $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ then such $\Psi^{\rm op}$ is unique.
\end{pro}
\begin{proof}
Let us consider the set
$$
\Lambda_{\alphapha,\,p}(\delta)\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \bigcup_{B\in W_{\alphapha,\,p}} M(B) \mathfrak{s}ubseteq \prod_{i\in\mathbb{I}_{m}} (\mathbb{R}}\def\C{\mathbb{C}_+^{d_i})^\downarrow \ ,
$$
where
$$
M(B)\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \{ [\mathbf{l}ambda_i]_{i\in\mathbb{I}_{m}}\in \prod_{i\in\mathbb{I}_{m}} (\mathbb{R}}\def\C{\mathbb{C}_+^{d_i})^\downarrow:\ R_i(B)\prec \mathbf{l}ambda_i\ , \ i\in\mathbb{I}_{m}\} \ .
$$
Notice that by construction $\Lambda_{\alphapha,\,p}^{\rm op}(\delta)\mathfrak{s}ubseteq \Lambda_{\alphapha,\,p}(\delta)$.
\noi
We claim that $\Lambda_{\alphapha,\,p}(\delta)$ is a convex set.
Indeed, let
$[\mathbf{l}ambda_i]_{i\in\mathbb{I}_{m}}\in M(B_1)$,
$[\mu_i]_{i\in\mathbb{I}_{m}}\in M(B_2)$ for
$B_1 $, $B_2\in W_{\alphapha,\,p}$ and
$t\in
[0,1]$.
Take the matrix
$B = t\, B_1 + (1-t)\,B_2\in W_{\alphapha,\,p}\,$ (since $ W_{\alphapha,\,p}$ is a convex set). Then
$$
[\,\gammamma_i\,]_{i\in\mathbb{I}_{m}} = [\,t\,\mathbf{l}ambda_i+(1-t)\, \mu_i\,]_{i\in\mathbb{I}_{m}}\in M(B) \mathfrak{s}ubseteq \Lambda_{\alphapha,\,p}(\delta)
\ :
$$
on the one hand, $\gamma_i\in(\mathbb{R}}\def\C{\mathbb{C}_+^{d_i})^\downarrow$, $i\in\mathbb{I}_m$; on the other hand, by Lidskii's additive inequality (see \cite{Bhat}) we have that, for each $i\in\mathbb{I}_{m}\,$
$$
R_i(B)= t\,R_i(B_1)+ (1-t)\,R_i(B_2)\prec t\, R_i(B_1)^\downarrow + (1-t)\, R_i(B_2)^\downarrow
\in (\mathbb{R}}\def\C{\mathbb{C}_+^n)^\downarrow
\ .
$$
Moreover, by the hypothesis (and the definition of majorization) one deduces that
$$
R_i(B_1)^\downarrow\prec \mathbf{l}a_i \peso{and} R_i(B_2)^\downarrow\prec \mu_i \implies
R_i(B) \prec t\, \mathbf{l}a_i + (1-t)\, \mu_i= \gammamma_i
$$
for every $i\in \mathbb{I}_{m}\,$.
This proves the claim, so $\Lambda_{\alphapha,\,p}(\delta)$ is a convex set. Moreover, by the compactness of $W_{\alphapha,\,p}$ and by the conditions defining $M(B)$ for $B\in W_{\alphapha,\,p}\,$, it follows that $\Lambda_{\alphapha,\,p}(\delta)$ is a compact set.
Let
$$
\varphi_p:\Lambda_{\alphapha,\,p}(\delta)\rightarrow \mathbb{R}}\def\C{\mathbb{C}_+ \peso{given by}
\varphi_p(\Psi)\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ \mathfrak{s}um_{i\in\mathbb{I}_{m}} {p_i}\,{\mathbf t}r \,\varphi(\psi_i) \ ,
$$
for $\Psi=[\psi_i]_{i\in\mathbb{I}_{m}}
\in \Lambda_{\alphapha,\,p}(\delta)\,$.
It is easy to see that $\varphi_p$ is a convex function, which is strictly convex whenever $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$. Using this last fact, together with the compactness of $\Lambda_{\alphapha,\,p}(\delta)$ established above, it follows that there exists
$ \Psi_0\in \Lambda_{\alphapha,\,p}(\delta)$ that satisfies
$$ \varphi_p(\Psi_0)\mathbf{l}eq \varphi_p(\Psi) \peso{for every} \Psi\in \Lambda_{\alphapha,\,p}(\delta)\ ,
$$
and such $ \Psi_0$ is unique whenever $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$. Notice that by construction
there exists some
$B\in W_{\alphapha,\,p}$ such that $\Psi_0=[\psi_i^0]_{i\in\mathbb{I}_{m}} \in M(B)$.
Then, by item \ref{item2} of Notation \ref{muchas nots},
$$
R_i(B)\prec \psi_i^0 \implies L_{d_i}(R_i(B))\prec \psi_i^0
\implies {\mathbf t}r \, \varphi(L_{d_i}(R_i(B)))\mathbf{l}eq
{\mathbf t}r \,\varphi(\psi_i^0) \peso{for} i\in\mathbb{I}_{m}\ .
$$
Hence, the sequence $B_\delta$ defined in Eq. \eqref{Bdelta} using this matrix $B$ satisfies that
$\varphi_p(B_\delta)\mathbf{l}e \varphi_p(\Psi_0)$. So
we define $\Psi^{\rm op}\ \mathfrak{s}tackrel{\mbox{\tiny{def}}}{=}\ B_\delta\in \Lambda_{\alphapha,\,p}^{\rm op}(\delta)\mathfrak{s}ubset \Lambda_{\alphapha,\,p}(\delta)$, that has the desired properties.
Finally, the previous remarks show that $\Psi_0= \Psi^{\rm op}\in \Lambda_{\alphapha,\,p}^{\rm op}(\delta)$ whenever $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$.
\end{proof}
\begin{pro}\mathbf{l}abel{era facilongo nomas}
With the notations and terminology of Proposition \ref{teo estruc prob reducido}, assume
further that $n\geq d_m$ and that $\varphi\in\mathbf{x}rightarrow[n\rightarrow\infty]{}fs$ is differentiable in
$\mathbb{R}}\def\C{\mathbb{C}_+\,$. Then $$ \Psi^{\rm op}\in \prod_{i\in\mathbb{I}_m} (\mathbb{R}}\def\C{\mathbb{C}_{>0}^{d_i})^\downarrow\,.$$
\end{pro}
\begin{proof}
Let $\Psi^{\rm op}=[\psi_i^{\rm op}]_{i\in\mathbb{I}_{m}}$ where each vector $\psi_i^{\rm op} \in (\mathbb{R}}\def\C{\mathbb{C}_{+}^{d_i})^\downarrow \,$,
and assume that there exists $i_0\in\mathbb{I}_m$ such that $\psi_{i_0}^{\rm op}=(\psi_{i_0,j}^{\rm op})_{j\in
\mathbb{I}_{d_{i_0}}}$ satisfies that $\psi^{\rm op}_{i_0,k}=0$ for some $1\mathbf{l}eq k\mathbf{l}eq d_{i_0}$; let $1\mathbf{l}eq k_0\mathbf{l}eq d_{i_0}$ be the smallest such index.
Let $B\in W_{\alphapha,\,p}$ be such that $B_\delta=\Psi^{\rm op}$.
Recall from Eq. \eqref{eq defi gammacd} that, if we denote $c_i = c_{d_{i}}(R_{i}(B))$ for every $i \in \mathbb{I}_{m}\,$, then
$$
\psi^{\rm op}_{i_0,j}= L_{d_{i_0}} (R_{i_0}(B))_j =
\max\{R_{i_0}(B)^\downarrow_j\, , \, c_{i_0}\}
\peso{for} j\in \mathbb{I}_{d_{i_0}} \ ,
$$
since $n\geq d_{i_0}$ by hypothesis. Hence, in this case $c_{i_0}=
0$ and $R_{i_0}(B)^\downarrow_{k_0}=0$. Let $j_0\in \mathbb {I} _n $
be such that $0=R_{i_0}(B)^\downarrow_{k_0} = B_{i_0\, , \, j_0}\,$.
By construction $\mathfrak{s}um_{i\in\mathbb{I}_m}p_i\ B_{i\, , \, j_0}=\alphapha_{j_0}>0$ so that there exists $i_1\in\mathbb{I}_m$ such that $B_{i_1,\,j_0}>0$. Let $\{e_j\}_{j\in\mathbb {I} _n}$ denote the canonical basis of $\mathbb{R}}\def\C{\mathbb{C}^n$.
For every $t\in I=[0,\frac{B_{i_1,\,j_0} \ p_{i_1}}{p_{i_0}}]$
consider the matrix $B(t)$ defined by its rows as follows:
\begin{itemize}
\item $R_{i_0}(B(t))=R_{i_0}(B)+t\, e_{j_0} $
\item $R_{i_1}(B(t))= R_{i_1}(B)- \frac{p_{i_0}\, t}{p_{i_1}}\ e_{j_0}\,$
\item $R_{i}(B(t))=R_{i}(B)$ for $i\in\mathbb{I}_m\mathfrak{s}etminus\{i_0,\,i_1\}$.
\end{itemize}
It is straightforward to check that $B(t)\in W_{\alphapha,\,p}$ for $t\in I$ and that $B(0)=B$.
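\noi Explicitly, $p^T\,B(t)=p^T\,B+\big(p_{i_0}\,t-p_{i_1}\,\tfrac{p_{i_0}\,t}{p_{i_1}}\big)\,e_{j_0}^T=p^T\,B=\alpha$, and the entries of $B(t)$ remain non-negative on $I$, since $B(t)_{i_1,\,j_0}=B_{i_1,\,j_0}-\tfrac{p_{i_0}\,t}{p_{i_1}}\geq 0$ precisely when $t\leq \tfrac{B_{i_1,\,j_0}\,p_{i_1}}{p_{i_0}}\,$.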
Set $\Psi(t)=[\psi_i(t)]_{i\in\mathbb{I}_m}=B(t)_\delta\in \Lambda_{\alphapha,\, p}^{\rm op}(\delta)$ for $t\in I$ and notice that $\Psi(0)=\Psi^{\rm op}$. We now consider two cases:
\noi {\bf Case 1:\ }\ $B_{i_1,\,j_0}> c_{i_1}$ (recall that
$\psi^{\rm op}_{i_1\, , \, j}= L_{d_{i_1}} (R_{i_1}(B))_j =
\max\{R_{i_1}(B)^\downarrow_j\, , \, c_{i_1}\} $). Therefore $B_{i_1,\,j_0}
=R_{i_1}(B)^\downarrow_{k}$ for some $1\mathbf{l}eq k\mathbf{l}eq d_{i_1}$ and we let $1\mathbf{l}eq k_1\mathbf{l}eq d_{i_1}$ be the largest such $k$. It is straightforward to check that in this case there exists $\varepsilon >0$ such that
$$\psi_{i_0}(t)=\psi^{\rm op}_{i_0}+t\, e_{k_0} \peso{and} \psi_{i_1}(t)=\psi^{\rm op}_{i_1}-\frac{p_{i_0}}{p_{i_1}}\, t\, e_{k_1} \peso{for} t\in [0,\varepsilonilon]\,.$$
Therefore, for $t\in [0,\varepsilonilon]$ we have that
$$
f(t)=\varphi_p(\Psi(t))- \varphi_p(\Psi^{\rm op})=p_{i_0}\ (\varphi(t)-\varphi(0))+p_{i_1}\ (\varphi(B_{i_1,\, j_0} -
\frac{p_{i_0}}{p_{i_1}} t ) - \varphi(B_{i_1,\, j_0}))\ .
$$
Hence $f(0)=0$ and by hypothesis $f(t)\geq 0$ for $t\in[0,\varepsilonilon]$. On the other hand,
$$
f'(0)= p_{i_0}\ (\varphi'(0) - \varphi'(B_{i_1,\, j_0}) )<0
$$
since by the hypothesis $\varphi'$ is strictly increasing and $B_{i_1,\, j_0}>0$. This contradicts the previous facts about $f$, so Case 1 cannot occur.
\noi
{\bf Case 2:\ }\ $B_{i_1,\,j_0}\mathbf{l}e c_{i_1}$. Hence, in this case $0<c_{i_1}$ and there exists $0\mathbf{l}eq r\mathbf{l}eq d_{i_1}-1$ such that
$$
\psi^{\rm op}_{i_1}=(R_{i_1}(B)^\downarrow_1\, , \, \mathbf{l}dots \, , \,
R_{i_1}(B)^\downarrow_r \, , \, c_{i_1}\, , \, \mathbf{l}dots\, , \, c_{i_1})
$$
so that there exists $\varepsilon>0$ such that for $t\in[0,\varepsilon]$ we have that
$$
\psi_{i_1}(t)=(R_{i_1}(B)^\downarrow_1\, , \, \mathbf{l}dots \, , \,
R_{i_1}(B)^\downarrow_r \, , \, c_{i_1}\, , \, \mathbf{l}dots\, , \, c_{i_1})-\frac{p_{i_0}\ t}{(d_{i_1}-r)\ p_{i_1}}\mathfrak{s}um_{j=r+1}^{d_{i_1}} e_j\,.
$$
Therefore, for $t\in [0,\varepsilonilon]$ we have that
$$f(t)=\varphi_p(\Psi(t))- \varphi_p(\Psi^{\rm op})=
p_{i_0}\ (\varphi(t)-\varphi(0))+p_{i_1} \ (d_{i_1}-r)\ (\varphi(c_{i_1}- \frac{p_{i_0} \, t}{(d_{i_1}-r)\,p_{i_1} } ) - \varphi(c_{i_1}))\, .$$
As before, $f(0)=0$ and $f(t)\geq 0$ for $t\in[0,\varepsilonilon]$; a simple computation shows that in this case we also have that $f'(0)<0$, which contradicts the previous facts; thus, the vectors in $\Psi^{\rm op}$ have no zero entries.
\end{proof}
\end{document} |
\begin{document}
\title[Stable cuspidal curves and $\overline{\cM}_{2,1}$]{Stable cuspidal curves \\ and the integral Chow ring of $\overline{\cM}_{2,1}$}
\author[A. Di Lorenzo]{Andrea Di Lorenzo}
\address[A. Di Lorenzo]{Humboldt Universit\"{a}t zu Berlin, Germany}
\email{[email protected]}
\author[M. Pernice]{Michele Pernice}
\address[M. Pernice]{Scuola Normale Superiore, Pisa, Italy}
\email{[email protected]}
\author[A. Vistoli]{Angelo Vistoli}
\address[A. Vistoli]{Scuola Normale Superiore, Pisa, Italy}
\email{[email protected]}
\maketitle
\begin{abstract}
In this paper we introduce the moduli stack $\widetilde{\mathscr M}_{g,n}$ of $n$-marked stable at most cuspidal curves of genus $g$ and we use it to determine the integral Chow ring of $\overline{\cM}_{2,1}$. Along the way, we also determine the integral Chow ring of $\overline{\cM}_{1,2}$.
\end{abstract}
\section*{Introduction}\label{sec:intro}
Rational Chow rings of moduli spaces of curves have been the subject of intensive research in the last forty years, since Mumford's first investigation of the topic (\cite{Mum}). Nevertheless, rational Chow rings of moduli of \emph{stable} curves of genus larger than $1$ have proved to be hard to compute: the complete calculations have been carried out only for $\overline{M}_2$ (\cite{Mum}), $\overline{M}_{2,1}$ and $\overline{M}_3$ (\cites{Fab,Fab1}).
Integral Chow rings of moduli \emph{stacks} of curves, introduced in \cite{EG}, are even harder to study, but they contain much more information: for instance, rational Chow rings of moduli of hyperelliptic curves are trivial, but their integral counterparts are not (see \cites{VisM2,EF,Dil,Per}); and similarly for $\mathscr{M}_3\smallsetminus\mathscr{H}_3$, the stack of smooth non-hyperelliptic curves of genus three (see \cite{DLFV}).
For what concerns the moduli stack of stable curves of genus $>1$, the only result available in the integral case is the computation of the Chow ring of $\overline{\cM}_2$ by Larson in \cite{Lars} (the first and the third authors subsequently reproved Larson's theorem using a different approach, see \cite{DLV}).
The goal of this paper is to compute the integral Chow ring of $\overline{\cM}_{2,1}$ using a new geometric approach, involving the stack of what we call \emph{stable $A_2$-curves} (these are curves with only nodes or cusps as singularities).
\begin{maintheorem}[\ref{thm:chow Mbar21}]
Suppose that the ground field has characteristic $\neq 2,3$. Then we have
\[ \ch(\overline{\cM}_{2,1})\simeq \ZZ[\lambda_1,\psi_1,\vartheta_1,\lambda_2,\vartheta_2]/(\alpha_{2,1},\alpha_{2,2}, \alpha_{2,3}, \beta_{3,1}, \beta_{3,2}, \beta_{3,3}, \beta_{3,4}), \]
where
\begin{enumeratea}
\item the $\lambda_i$ are the Chern classes of the Hodge bundle,
\item the cycle $\psi_{1}$ is the first Chern class of the conormal bundle of the tautological section $\overline{\cM}_{2, 1} \arr \overline{\cC}_{2,1}$, where $\overline{\cC}_{2,1} \arr \overline{\cM}_{2,1}$ is the universal curve,
\item the cycle $\vartheta_1$ is the fundamental class of the locus of marked curves with a separating node,
\item the cycle $\vartheta_2$ is the fundamental class of the locus of marked curves with a marked separating node,
\end{enumeratea}
and the relations are
\begin{align*}
\alpha_{2,1}&=\lambda_2-\vartheta_2-\psi_1(\lambda_1-\psi_1),\\
\alpha_{2,2}&=24\lambda_1^2-48\lambda_2,\\
\alpha_{2,3}&=\vartheta_1(\lambda_1+\vartheta_1),\\
\beta_{3,1}&=20\lambda_1\lambda_2-4\lambda_2\vartheta_1,\\
\beta_{3,2}&=2\psi_1\vartheta_2,\\
\beta_{3,3}&=\vartheta_2(\vartheta_1+\lambda_1-\psi_1),\\
\beta_{3,4}&=2\psi_1(\lambda_1 +\vartheta_1)(7\psi_1-\lambda_1) - 24\psi_1^3.\\
\end{align*}
\end{maintheorem}
In the above we identify, as usual, $\overline{\cM}_{2,1}$ with the universal family over $\overline{\cM}_{2}$ via the stabilization map $\overline{\cM}_{2,1} \arr \overline{\cM}_{2}$. Viewed in this way, $\psi_{1}$ is the first Chern class of the dualizing sheaf $\omega_{\overline{\cM}_{2,1}/\overline{\cM}_{2}}$.
In \Cref{cor:rational chow} we reprove Faber's result on the rational Chow ring of $\overline{M}_{2,1}$ and we further extend it over any base field of characteristic $\neq 2,3$. Moreover, to show how our strategy works in a simpler case, we also determine the integral Chow ring of $\overline{\cM}_{1,2}$.
\begin{maintheorem}[\ref{thm:chow Mbar12}]
Suppose that the ground field has characteristic $\neq 2,3$. Then we have
\[ \ch(\overline{\cM}_{1,2}) \simeq \ZZ[\lambda_1,\mu_1]/(\mu_1(\lambda_1+\mu_1),24\lambda_1^2) \]
where $\lambda_1$ is the first Chern class of the Hodge line bundle and $\mu_1:=\bfp_*[\overline{\cM}_{1,1}]$ is the fundamental class of the universal section $\bfp:\overline{\cM}_{1,1}\to\overline{\cM}_{1,2}$ of $\overline{\cM}_{1,2}\to\overline{\cM}_{1,1}$.
\end{maintheorem}
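Let us briefly comment on the shape of these relations. The relation $24\lambda_1^2$ is pulled back from the well-known presentation $\ch(\overline{\cM}_{1,1})\simeq\ZZ[\lambda_1]/(24\lambda_1^2)$, while the relation $\mu_1(\lambda_1+\mu_1)$ is consistent with the self-intersection formula for the universal section: the conormal bundle of $\bfp$ has first Chern class $\psi_1=\lambda_1$ on $\overline{\cM}_{1,1}$, so that $\mu_1^2=\bfp_*(-\lambda_1)=-\lambda_1\mu_1$ by the projection formula.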
\subsection*{Stable $A_2$-curves and strategy of proof}
The calculations of integral Chow rings of stacks of stable curves that have been carried out so far have been for stacks with a presentation of the form $[X/G]$, where $G$ is an affine algebraic group, and $X$ is an open subscheme of a representation of $G$; we will call this a \emph{good presentation}.
In other cases, like $\overline{\cM}_{2,1}$, the stack of interest, let us call it $\cX$, is not known to have such a presentation, but contains a closed substack $\cY \subseteq \cX$ such that both $\cY$ and $\cX \smallsetminus \cY$ have a good presentation, and we can compute both $\ch(\cY)$ and $\ch(\cX \smallsetminus \cY)$. One can try to use the localization sequence
\[
\operatorname{CH}^{*-c}(\cY) \arr \ch(\cX) \arr \ch(\cX \smallsetminus \cY) \arr 0
\]
where $c$ is the codimension of the regular immersion $\cY \hookrightarrow \cX$, but this gives no information on the kernel of the pushforward $\operatorname{CH}^{*-c}(\cY) \arr \ch(\cX)$.
Our approach, first introduced in \cite{fulghesu} in the context of intersection theory on stacks of nodal curves of genus~$0$, exploits a patching technique (see Lemma~\ref{lm:Atiyah}) that is at the heart of the Borel-Atiyah-Segal-Quillen localization theorem, and has been used by many authors working on equivariant cohomology, equivariant Chow rings and equivariant K-theory (see the discussion in the introduction of \cite{DLV}). This works when the top Chern class of the normal bundle of $\cY$ in $\cX$ is not a zero-divisor in $\ch(\cY)$. Unfortunately when $\cY$ is Deligne--Mumford then $\ch[i](\cY)$ is torsion for $i > \dim \cY$, which means that the condition can never be satisfied. Thus to apply this to stacks of stable curves one needs to enlarge them so that the condition has a chance to be satisfied, compute the Chow ring of the enlarged stack, then use the localization sequence to compute the additional relations coming from the difference between the enlarged stack and the one we are interested in.
In the first section of this paper we introduce the moduli stack $\widetilde{\mathscr M}_{g,n}$ of \emph{stable $A_2$-curves}, obtained by adding to $\overline{\cM}_{g,n}$ curves with cuspidal singularities; the stability condition is that the canonical class is still required to be ample. This stack is neither Deligne--Mumford nor separated (thus the stability condition above is only a weak one, and does not ensure GIT stability), but it is still a smooth quotient stack, hence it has a well-defined integral Chow ring. The same holds for the universal stable $A_2$-curve $\widetilde{\mathscr C}_{g,n}\arr\widetilde{\mathscr M}_{g,n}$. In particular, as $\overline{\cM}_{2,1}$ is contained in $\widetilde{\mathscr C}_2$ as an open substack, we can break down the computation of $\ch(\overline{\cM}_{2,1})$ into two main steps:
\begin{enumerate1}
\item the determination of $\ch(\widetilde{\mathscr C}_2)$, which is the content of \Cref{prop:chow Ctilde2};
\item the determination of the image of $\ch(\widetilde{\mathscr C}_2\smallsetminus\overline{\cM}_{2,1})\arr \ch(\widetilde{\mathscr C}_2)$, i.e. the cycles coming from the locus of stable $A_2$-curves with cuspidal singularities. This is first done abstractly in terms of fundamental classes in \Cref{thm:chow Mbar21 abs}; afterwards we proceed with the explicit computations.
\end{enumerate1}
For the first step, we consider the stratification of $\widetilde{\mathscr C}_2$ by closed substacks given by
\[ \widetilde{\Theta}_2 \subset \widetilde{\Theta}_1 \subset \widetilde{\mathscr C}_2 \]
where $\widetilde{\Theta}_1$ is the locus of marked curves with a separating node and $\widetilde{\Theta}_2$ is the stratum of marked curves with a marked separating node. We compute $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$, $\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)$ and $\ch(\widetilde{\Theta}_2)$ separately (each of these stacks has a good presentation); then we use the fact that the top Chern classes of the normal bundles of $\widetilde{\Theta}_2$ in $\widetilde{\mathscr C}_2$, and of $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ in $\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2$ are not zero-divisors, which allows us to use the patching technique to get $\ch(\widetilde{\mathscr C}_2)$.
\subsection*{Relations with previous work} A very natural approach to the problem of computing Chow rings of smooth stacks with a stratification for which we know the Chow rings of the strata is to use higher Chow groups to get a handle on the kernel. For the stack $\overline{\cM}_{2}$ this is carried out in \cite{Lars} (see also \cite{bae-schmitt-1, bae-schmitt-2}). Our approach, explained above, is different, and does not use higher Chow groups at all. It was first introduced in \cite{fulghesu} in intersection theory, and used to give an alternate proof of Larson's theorem on $\overline{\cM}_{2}$ in \cite{DLV}.
The idea of defining alternate compactifications of the moduli spaces of smooth curves by considering curves with more complicated singularities than nodes is already in the literature, starting from \cite{schubert} (see also \cite{hassett-hyeon-1, hassett-hyeon-2}), and from a more stack-theoretic standpoint in the work of Smyth (see \cite{smyth-survey}), continued by Alper, Fedorchuk and Smyth \cite{alper-fedorchuk-smyth-1, alper-fedorchuk-smyth-2, alper-fedorchuk-smyth-3, alper-fedorchuk-smyth-4}. Their perspective, however, is very different, as they look for stacks that are proper and Deligne--Mumford, in order to study the birational geometry of moduli spaces of curves.
In \cite{fulghesu, bae-schmitt-1, bae-schmitt-2} the authors study Chow rings of stacks of curves with positive-dimensional automorphism groups; unlike us, they impose no stability condition, and their curves are all nodal.
\subsection*{Future prospects} The approach of the present paper should be applicable, maybe after inverting a few primes, to other moduli stacks with a stratification such that the Chow rings of the strata are computable. For the relatively simple cases considered in this paper it is enough to consider at most cuspidal curves; in other cases one needs more complicated singularities. The second author's PhD thesis will include several results on stacks of stable $A_{n}$-curves, as well as a presentation for the Chow ring of $\overline{\cM}_{3}$ with coefficients in $\ZZ[1/6]$, obtained using $A_{3}$-curves, that is, curves with only nodes, cusps or tacnodes. Of course the presentation for the rational Chow ring of $\overline{\cM}_{3}$ is an old theorem of Faber \cite{Fab1}; our technique gives a completely different approach, and a more refined result.
The general principle seems to be that the more singularities you allow, the more likely it is that the patching condition is satisfied, making it possible to compute the Chow ring of the larger stack. Then one needs to compute the classes of various loci of unwanted curves, something that can be difficult; allowing very complicated singularities makes it harder. So, it is not clear what the limits of the methods are; still, we think it worth investigating what they can give, for example, for $\overline{\cM}_{3,1}$ and $\overline{\cM}_{4}$, whose rational Chow rings are not known.
\subsection*{Outline of the paper}
In Section \ref{sec:cusp} we introduce stable $A_2$-curves (\Cref{def:A2 curves}) and the associated stack $\widetilde{\mathscr M}_{g,n}$. We prove that $\widetilde{\mathscr M}_{g,n}$ is a quotient stack (\Cref{thm:quotient-stack}) and we study the normalization of $\widetilde{\mathscr M}_{g,n}\smallsetminus\overline{\cM}_{g,n}$: in particular, we prove in \Cref{thm:cuspidal-description} that it is isomorphic to $\widetilde{\mathscr M}_{g-1,n+1}\times[\AA^{1}/\gm]$ via a certain \emph{pinching} morphism, a result that will be essential for our computations.
In Section \ref{sec:chow Mbar12} we show how our strategy works in a simple case, namely for computing $\ch(\overline{\cM}_{1,2})$ (\Cref{thm:chow Mbar12}): we first determine $\ch(\widetilde{\mathscr C}_{1,1})$ (\Cref{prop:chow Ctilde11}) and then we excise the cuspidal locus.
Section \ref{sec:chow cusp} is devoted to the computation of $\ch(\widetilde{\mathscr C}_2)$ (\Cref{prop:chow Ctilde2}). As explained before, this result is obtained by patching together the explicit presentations of $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$ (\Cref{prop:chow C minus Theta1}), of $\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)$ (\Cref{prop:chow ThTilde1 minus ThTilde2}) and of $\ch(\widetilde{\Theta}_2)$ (\Cref{prop:chow ThTilde2}).
In Section \ref{sec:abstract} we give an abstract characterization of $\ch(\overline{\cM}_{2,1})$. More precisely, we prove in \Cref{thm:chow Mbar21 abs} that
\[\ch(\overline{\cM}_{2,1})\simeq\ch(\widetilde{\mathscr C}_2)/(J,[\widetilde{\mathscr C}_2^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c''_*\rho^*T) \]
where $J$ is the ideal generated by the relations coming from $\overline{\cM}_2$ and two of the other generators are the fundamental classes of certain loci in $\widetilde{\mathscr C}_2$.
Finally, in Section \ref{sec:concrete} we compute the integral Chow ring of $\overline{\cM}_{2,1}$ (\Cref{thm:chow Mbar21}) by determining explicit expressions for $[\widetilde{\mathscr C}_2^{\rm c}]$ and $[\widetilde{\mathscr C}_2^{\rm E}]$ (\Cref{prop:class Ctilde2 c} and \Cref{prop:class Ctilde2 E}). Again, a key ingredient for this computation is the patching Lemma. In the last part of the Section we compare our presentation of $\ch(\overline{M}_{2,1})_{\mathbb{Q}}$ with the one obtained by Faber.
\subsection*{Notation}
For the convenience of the reader, we summarize here the notation for most of the stacks appearing in the paper.
\begin{enumerate}
\item $\widetilde{\mathscr M}_{g,n}$ - the stack of $n$-marked stable $A_2$-curves of genus $g$.
\item $\overline{\cM}_{g,n}$ - the stack of $n$-marked stable curves of genus $g$.
\item $\widetilde{\mathscr C}_{g,n}$ - the universal $n$-marked stable $A_2$-curve of genus $g$.
\item $\widetilde{\Delta}_1$ - the stack of stable $A_2$-curves of genus two with a separating node.
\item $\Delta_1$ - the stack of stable curves of genus two with a separating node.
\item $\widetilde{\Theta}_1$ - the stack of $1$-pointed stable $A_2$-curves of genus two with a separating node.
\item $\Theta_1$ - the stack of $1$-pointed stable curves of genus two with a separating node.
\item $\widetilde{\Theta}_2$ - the stack of $1$-pointed stable $A_2$-curves of genus two with a marked separating node.
\item $\Theta_2$ - the stack of $1$-pointed stable curves of genus two with a marked separating node.
\end{enumerate}
\subsection*{Acknowledgments}
We warmly thank Carel Faber for sharing his results on the rational Chow ring of $\overline{M}_{2,1}$ with us. We also thank Martin Bishop for spotting a mistake in the proof of \Cref{thm:chow Mbar12} and for suggesting a correction.
Finally, we are very grateful to the referees, who did an absolutely outstanding job. Their comments and suggestions greatly improved the quality of the paper.
\section*{Notations and conventions}
We work over a base commutative ring $k$, such that $2$ and $3$ are invertible in $k$. We will almost exclusively use the case in which $k$ is a field; but in the proof of Theorem~\ref{thm:cuspidal-description} the added generality will be useful, as we will have to assume $k = \ZZ[1/6]$. From Section~\ref{sec:chow Mbar12}, $k$ will be a field of characteristic different from $2$ and $3$.
All schemes, algebraic spaces and morphisms are going to be defined over $k$. All stacks will be over the \'{e}tale site $\aff$ of affine schemes over $k$ (we could take all schemes, it does not really make a difference). For algebraic stacks and algebraic spaces we will follow the conventions of \cite{knutson} and \cite{laumon-moret-bailly}; in particular, they will have a separated diagonal of finite type.
A \emph{quotient stack} $\cX$ is a stack over $k$, such that there exists an algebraic space $X \arr \spec k$, with an action of an affine flat group scheme of finite type $G \arr \spec k$, such that $\cX$ is isomorphic to $[X/G]$.
\section{The stack of stable $A_2$-curves of fixed genus} \label{sec:cusp}
\begin{definition}\label{def:A2 curves}
An \emph{$A_2$-curve} over a scheme $S$ is a proper flat finitely presented morphism of schemes $C \arr S$, whose geometric fibers are reduced connected curves with only nodes or cusps as singularities.
If $S$ is a scheme, an $n$-marked stable $A_2$-curve of genus $g$ over $S$ is an $A_{2}$-curve $C \arr S$ with $n$ sections $s_{1}$, \dots, $s_{n}\colon S \arr C$, such that, if we denote by $\Sigma_{i}$ the image of $s_{i}$ in $C$,
\begin{enumerate1}
\item the $\Sigma_{i}$ are disjoint,
\item they are contained in the smooth locus of $C \arr S$, and
\item if $\omega_{C/S}$ is the relative dualizing sheaf, the invertible sheaf $\omega_{C/S}(\Sigma_{1} + \dots + \Sigma_{n})$ is relatively ample over $S$.
\end{enumerate1}
In the most general, and correct, definition of an $A_2$-curve $C \arr S$ one should not assume that $C$ is a scheme, but only an algebraic space; for the purposes of this paper this added generality is not needed, and it could be allowed without any essential changes.
\end{definition}
We have the following standard result.
\begin{proposition}\label{prop:openness}
Let $C \arr S$ be a proper, flat, finitely presented morphism with $n$ sections $s_{1}, \dots, s_{n}\colon S \arr C$. There exists an open subscheme $S' \subseteq S$ with the property that a morphism $T \arr S$ factors through $S'$ if and only if the projection $T\times_{S} C \arr T$, with the sections induced by the $s_{i}$, is a stable $n$-pointed $A_{2}$-curve.
\end{proposition}
\begin{proof}
It is well known that a small deformation of a curve with cuspidal or nodal singularities still has cuspidal or nodal singularities. Hence, after restricting to an open subscheme of $S$ we can assume that $C \arr S$ is an $A_{2}$-curve. By further restricting $S$ we can assume that the sections land in the smooth locus of $C \arr S$, and are disjoint. Then the result follows from openness of ampleness for invertible sheaves.
\end{proof}
If $S' \arr S$ is a morphism of schemes, and $C \arr S$ is an $n$-marked stable $A_2$-curve of genus~$g$, the projection $S'\times_{S}C \arr S'$ is an $n$-marked stable $A_2$-curve of genus~$g$. Thus, there is an obvious stack $\widetilde{\mathscr M}_{g,n, k}$ of $n$-marked stable $A_2$-curves of genus~$g$ over $\aff$, whose category of sections over an affine scheme $S$ is the groupoid of $n$-marked stable $A_2$-curves of genus~$g$ over $S$. We will mostly omit the base ring from the notation, and simply write $\widetilde{\mathscr M}_{g,n}$.
As customary, we will denote $\widetilde{\mathscr M}_{g,0}$ by $\widetilde{\mathscr M}_{g}$.
\begin{theorem}\label{thm:quotient-stack}
The stack $\widetilde{\mathscr M}_{g,n}$ is a smooth algebraic stack of finite type over $k$. Furthermore, it is a quotient stack; more precisely, there exists a smooth quasi-projective scheme $X$ with an action of $\mathrm{GL}_{N}$ for some positive integer $N$, such that $\widetilde{\mathscr M}_{g,n} \simeq [X/\mathrm{GL}_{N}]$.
If $k$ is a field, then $\widetilde{\mathscr M}_{g,n}$ is connected.
\end{theorem}
\begin{proof}
It follows from Lemma~\ref{lem:boundedness} that there exists a positive integer $m$ with the property that if $\Omega$ is an algebraically closed field with a homomorphism $k \arr \Omega$, and $(C, p_{1}, \dots, p_{n})$ is in $\widetilde{\mathscr M}_{g,n}(\Omega)$, then $\omega_{C/\Omega}(p_{1}+ \dots + p_{n})^{\otimes m}$ is very ample, and $\H^{1}\bigl(C, \omega_{C/\Omega}(p_{1}+ \dots + p_{n})^{\otimes m}\bigr) = 0$.
The invertible sheaf $\omega_{C/\Omega}(p_{1}+ \dots + p_{n})^{\otimes m}$ has Hilbert polynomial $P(t) = m(2g-2 + n)t + 1 - g$, and defines an embedding $C \subseteq \PP_{\Omega}^{N-1}$, where $N \mathrel{\smash{\overset{\mathrm{\scriptscriptstyle def}} =}} m(2g-2 + n) - g + 1$. If $\pi\colon C \arr S$ is a stable $n$-pointed $A_{2}$-curve, and $\Sigma_{1}$, \dots,~$\Sigma_{n} \subseteq C$ are the images of the sections, then $\pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes m}$ is a locally free sheaf of rank $N$, and its formation commutes with base change, because of Grothendieck's base change theorem.
Call $X$ the stack over $k$, whose sections over a scheme $S$ consist of a stable $n$-pointed $A_{2}$-curve as above, and an isomorphism $\cO_{S}^{N} \simeq \pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes m}$ of sheaves of $\cO_{S}$-modules. Since $\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes m}$ is relatively very ample, the automorphism group of an object of $X$ is trivial, and $X$ is equivalent to its functor of isomorphism classes.
Call $H$ the Hilbert scheme of subschemes of $\PP^{N-1}_{k}$ with Hilbert polynomial $P(t)$, and $D \arr H$ the universal family. Call $F$ the fiber product of $n$ copies of $D$ over $H$, and $C \arr F$ the pullback of $D \arr H$ to $F$; there are $n$ tautological sections $s_{1}$, \dots,~$s_{n}\colon F \arr C$. Consider the largest open subscheme $F'$ of $F$ such that the restriction $C'$ of $C$, with the restrictions of the $n$ tautological sections, is a stable $n$-pointed $A_{2}$-curve, as in Proposition~\ref{prop:openness}. Call $Y \subseteq F'$ the open subscheme whose points are those for which the corresponding curve is nondegenerate, $E \arr Y$ the restriction of the universal family, $\Sigma_{1}$, \dots,~$\Sigma_{n} \subseteq E$ the tautological sections. Call $\cO_{E}(1)$ the restriction of $\cO_{\PP^{N-1}_{Y}}(1)$ via the tautological embedding $E \subseteq \PP^{N-1}_{Y}$; there are two sections of the projection $\pic_{E/Y}^{m(2g-2 + n)}\arr Y$ from the Picard scheme parametrizing invertible sheaves of degree $m(2g-2 + n)$, one defined by $\cO_{E}(1)$, the other by $\omega_{E/Y}(\Sigma_{1} + \dots + \Sigma_{n})^{\otimes m}$; let $Z \subseteq Y$ be the equalizer of these two sections, which is a locally closed subscheme of $Y$.
Then $Z$ is a quasi-projective scheme over $k$ representing the functor sending a scheme $S$ to the set of isomorphism classes of tuples consisting of a stable $n$-pointed $A_{2}$-curve $\pi\colon C \arr S$, together with an isomorphism of $S$-schemes
\[
\PP^{N-1}_{S} \simeq \PP\bigl(\pi_{*}\omega_{C/S}(\Sigma_{1} + \dots + \Sigma_{n})^{\otimes m}\bigr)\,.
\]
There is an obvious functor $X \arr Z$, associating with an isomorphism $\cO_{S}^{N} \simeq \pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes m}$ its projectivization. It is immediate to check that $X \arr Z$ is a $\GG_{\rmm}$-torsor; hence it is representable and affine, and $X$ is a quasi-projective scheme over $\spec k$.
On the other hand there is an obvious morphism $X \arr \widetilde{\mathscr M}_{g,n}$ which forgets the isomorphism $\cO_{S}^{N} \simeq \pi_{*}\omega_{C/S}(\Sigma_{1}+ \dots + \Sigma_{n})^{\otimes m}$; this is immediately seen to be a $\mathrm{GL}_{N}$-torsor. We deduce that $\widetilde{\mathscr M}_{g,n}$ is isomorphic to $[X/\mathrm{GL}_{N}]$. This shows that it is a quotient stack, as in the last statement; this implies that $\widetilde{\mathscr M}_{g,n}$ is an algebraic stack of finite type over $k$.
The fact that $\widetilde{\mathscr M}_{g,n}$ is smooth follows from the fact that $A_{2}$-curves are unobstructed.
Finally, to check that $\widetilde{\mathscr M}_{g,n}$ is connected it is enough to check that the open embedding $\cM_{g,n} \subseteq \widetilde{\mathscr M}_{g,n}$ has a dense image, since $\cM_{g,n}$ is well known to be connected. This is equivalent to saying that every stable $n$-pointed $A_{2}$-curve over an algebraically closed extension $\Omega$ of $k$ has a small deformation that is stable and nodal. Let $(C, p_{1}, \dots, p_{n})$ be a stable $n$-pointed $A_{2}$-curve; the singularities of $C$ are unobstructed, so we can choose a lifting $\overline{C}\arr \spec \Omega\ds{t}$, with smooth generic fiber. The points $p_{i}$ lift to sections $\spec\Omega\ds{t} \arr \overline{C}$, and then the result follows from Proposition~\ref{prop:openness}.
\end{proof}
Notice that there are many examples of stable $A_2$-curves over an algebraically closed field whose group of automorphisms is a positive-dimensional affine group (for example, let $C$ be the curve obtained by attaching an irreducible rational curve, whose only singularity is a single cusp, to a smooth curve of genus $g-1$ at a smooth point; then the automorphism group of $C$ is an extension of a finite group by $\GG_{\rmm}$). This means that $\widetilde{\mathscr M}_{g}$ is neither Deligne--Mumford nor separated.
\begin{proposition}\label{prop:hodge-bundle}
Let $\pi\colon C \arr S$ be a stable $A_2$-curve of genus $g$. Then $\pi_{*}{\omega}_{C/S}$ is a locally free sheaf of rank $g$ on $S$, and its formation commutes with base change.
\end{proposition}
\begin{proof}
If $C$ is an $A_2$-curve of genus $g$ over a field $\ell$, the dimension of $\H^{0}(C, \omega_{C/\ell})$ is $g$; so the result follows from Grauert's theorem when $S$ is reduced. But the versal deformation space of an $A_{2}$-curve over a field is smooth, so every $A_{2}$-curve comes, \'{e}tale-locally on $S$, from an $A_{2}$-curve over a reduced scheme, and this proves the result.
\end{proof}
As a consequence we obtain a locally free sheaf $\widetilde{\HH}_{g}$ of rank~$g$ on $\widetilde{\mathscr M}_{g, n}$, the \emph{Hodge bundle}.
We will need a classification of stable $A_2$-curves of genus~$2$, and $1$-marked stable $A_2$-curves of genus $1$. The following is straightforward.
\begin{proposition}\call{prop:list} Assume that $k$ is an algebraically closed field. A $1$-marked stable $A_2$-curve of genus $1$ over $k$ is either a stable curve, or an irreducible cuspidal rational cubic.
A stable $A_2$-curve of genus~$2$ over $k$ is of one of the following types.
\begin{enumerate1}
\itemref{1} A stable curve of genus $2$.
\itemref{2} An irreducible curve of geometric genus~$1$ with a node or a cusp.
\itemref{3} An irreducible rational curve with a node and a cusp, or two cusps.
\itemref{5} The union of two $1$-marked stable $A_2$-curves of genus $1$ meeting transversally at the marked points.
\end{enumerate1}
\end{proposition}
The locus of singular curves in $\widetilde{\mathscr M}_{2}$ is a divisor with normal crossings, with two irreducible components $\widetilde{\Delta}_{0}$ and $\widetilde{\Delta}_{1}$. The component $\widetilde{\Delta}_{0}$ is the closure of the locus of irreducible singular curves; the other component $\widetilde{\Delta}_{1}$ is formed by the curves with a separating node, which are those of type \refpart{prop:list}{5}.
\section{The locus of cusps in the universal curve}
The stack $\widetilde{\mathscr M}_{g}$ contains a closed substack of codimension~$2$, the cuspidal locus, whose points correspond to curves with at least one cusp. Its normalization, which we denote by $\widetilde{\mathscr C}^{\rm c}_g$, is the stack of stable $A_2$-curves of genus $g$ with a distinguished cusp. This will play an important role in one of our calculations. Everything that we are going to say in this section generalizes to $\widetilde{\mathscr M}_{g,n}$, but this would complicate the notation, and the added generality would not be useful to us.
\subsection{The cuspidal locus in an $A_{2}$-curve}
Let $C \arr S$ be an $A_2$-curve; we are interested in giving a scheme structure to the locus of cusps $C^{\rm c} \subseteq C$, which is closed. This is done as follows.
Let $C^{\rm sing} \subseteq C$ be the singular locus of the map $C \arr S$, with the usual scheme structure given by the first Fitting ideal of the sheaf $\Omega_{C/S}$. Then $C^{\rm sing}$ is finite over $S$; it is unramified at the nodal points, and it ramifies at the cusps. We define $C^{\rm c}$ as the closed subscheme of $C$ defined by the $0\th$ Fitting ideal of $\Omega_{C^{\rm sing}/S}$.
\begin{proposition}\label{prop:cuspidal-projection}
The geometric points of the subscheme $C^{\rm c} \subseteq C$ correspond precisely to the cuspidal points of the geometric fibers of $C \arr S$. The restriction $C^{\rm c} \arr S$ is finite and unramified.
Furthermore, the formation of $C^{\rm c} \subseteq C$ commutes with base change on $S$. If $S$ is of finite type over $k$ and the family $C \arr S$ is versal at each point of $S$, then $C^{\rm c}$ is a smooth scheme over $k$.
\end{proposition}
\begin{proof}
Let us check the first statement. This can be done after restricting to the geometric fibers; so assume that $S = \spec\ell$, where $\ell$ is an algebraically closed field. Furthermore, the statement is \'{e}tale-local on $C$, so we may pass to an \'{e}tale cover of $C$, and we do not need to assume that $C$ is proper.
Let $p \in C(\ell)$ be a singular point of $C$. If $p$ is a node, then it is \'{e}tale-locally of the form $\spec\ell[x,y]/(xy)$; a straightforward calculation reveals that the first Fitting ideal of $\Omega_{C/\ell}$ is $(x,y) \subseteq \cO_{C}$. Hence $\Omega_{C^{\rm sing}/\ell}$ is zero at $p$, and $p$ is not in $C^{\rm c}$.
If $p$ is a cusp, then $C$ is \'{e}tale-locally of the form $\spec\ell[x,y]/(y^{2} - x^{3})$. Then the first Fitting ideal of $\Omega_{C/\ell}$ is $(2y,3x^{2}) = (y, x^{2})$ (recall that we are assuming that $2$ and $3$ are invertible in $k$), so \'{e}tale-locally $C^{\rm sing}$ is $\spec\ell[x,y]/(y, x^{2})$. Hence \'{e}tale-locally the $0\th$ Fitting ideal of $\Omega_{C^{\rm sing}/\ell}$ is $(x)\subseteq \cO_{C^{\rm sing}}$, and $C^{\rm c} = \spec \ell$. This proves the first statement.
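In terms of presentations, the calculation above uses
\[
\cO_{C} \xrightarrow{\;(-3x^{2},\;2y)\;} \cO_{C}\,dx\oplus\cO_{C}\,dy \arr \Omega_{C/\ell} \arr 0\,,
\]
whose ideal of $1\times 1$ minors is $(3x^{2}, 2y)=(y,x^{2})$; then on $C^{\rm sing}=\spec\ell[x,y]/(y,x^{2})$ the module $\Omega_{C^{\rm sing}/\ell}$ is generated by $dx$ with the single relation $2x\,dx$, whence the $0\th$ Fitting ideal $(2x)=(x)$.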
Formation of sheaves of differentials, and of Fitting ideals, commutes with base change on $S$; hence formation of $C^{\rm c}$ also commutes with base change. Therefore, to prove the remaining statements we can assume that $S = \spec R$ is affine and of finite type over $k$.
The projection $C^{\rm c} \arr S$ is proper and representable; hence to check that it is finite it is enough to show that it has finite fibers, which follows from the first statement.
For the rest of the statements, take a geometric point $p\colon \spec\ell \arr C^{\rm c}$. We can base change from $k$ to $\ell$, and assume that $p$ is a rational point, and $k$ is algebraically closed; call $q$ the image of $p$ in $S$. Notice that the definition of $C^{\rm c}$ does not require $C \arr S$ to be proper; furthermore, if $U \arr C$ is an \'{e}tale map, then $U^{\rm c}$ is the inverse image of $C^{\rm c}$ in $U$.
By \cite[Example~6.46]{mattia-vistoli-deformation}, after passing to an \'{e}tale neighborhood of the image of $q$ in $S$ there exist two elements $a$ and $b$ of $R$, vanishing at $q$, and an \'{e}tale neighborhood $p \arr U \arr C$ of $p$, such that, if we denote by $V \subseteq \AA^{2}_{R}$ the subscheme defined by the equation $y^{2} = x^{3} + ax + b$, there is an \'{e}tale map $U \arr V$ of $S$-schemes sending $p \in U$ to the point over $q$ defined by $x = y = 0$. Then $U^{\rm c}$ is the inverse image of $V^{\rm c}$; a straightforward calculation shows that $V^{\rm c}$ is the subscheme defined by $x = y = a = b = 0$, which proves that $C^{\rm c}$ is unramified over $S$. Furthermore, if $C \arr S$ is versal then $S$ is smooth, and $a$ and $b$ are part of a system of parameters around $q$, which shows that $C^{\rm c}$ is smooth, as claimed.
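For the convenience of the reader, the straightforward calculation just mentioned can be spelled out as follows: $\Omega_{V/S}$ has the presentation
\[
\cO_{V} \xrightarrow{\;\bigl(-(3x^{2}+a),\;2y\bigr)\;} \cO_{V}\,dx\oplus\cO_{V}\,dy \arr \Omega_{V/S} \arr 0\,,
\]
so $V^{\rm sing}$ is cut out by $y=0$, $a=-3x^{2}$, $b=2x^{3}$; on $V^{\rm sing}$ the module $\Omega_{V^{\rm sing}/S}$ is generated by $dx$ with the single relation $6x\,dx$, whose $0\th$ Fitting ideal is $(x)$, and adding $x=0$ to the equations of $V^{\rm sing}$ yields $x=y=a=b=0$.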
\end{proof}
\subsection{The pinching construction}\label{sub:pinching}
Suppose that we have an $A_{2}$-curve $D$ over an algebraically closed field, with a cuspidal point $q \in D(k)$. Call $C$ the normalization of $D$ at the point $q$; then the inverse image of $q$ consists of a single point $p$. It is standard that the pair $(D,q)$ can be recovered from $(C, p)$: as a topological space $D = C$, while the structure sheaf $\cO_{D}$ is the subsheaf of $\cO_{C}$ consisting of functions whose Taylor expansion at $p$ has first-order term equal to $0$. This construction, which we call \emph{pinching}, works in families, and plays a fundamental role in this paper; a depiction of it can be found in \Cref{fig:pinch}.
\begin{figure}
\caption{The pinching construction}
\label{fig:pinch}
\end{figure}
Let $\pi\colon C \arr S$ be an $A_2$-curve with a section $\sigma\colon S \arr C$ landing in the smooth locus of $C \arr S$. We will assume that $C$ is a scheme (the pinching construction would also work for algebraic spaces, using the small \'{e}tale site, but we will not need this). We define $\widehat{C} \arr S$ as follows. Denote by $\Sigma \subseteq C$ the image of $\sigma$; call $I_{\Sigma} \subseteq \cO_{C}$ the sheaf of ideals of $\Sigma \subseteq C$. Sending a section $f$ of $\cO_{C}$ to $f - \pi^{*}\sigma^{*}f$ gives an $\cO_{S}$-linear splitting $\cO_{C} \arr I_{\Sigma}$ of the embedding $I_{\Sigma} \subseteq \cO_{C}$.
As a topological space we set $\widehat{C} = C$; the structure sheaf $\cO_{\widehat{C}} \subseteq \cO_{C}$ is defined as the subring of sections $f$ of $\cO_{C}$ with the property that the class $[f - \pi^{*}\sigma^{*}f] \in I_{\Sigma}/I^{2}_{\Sigma}$ is zero. Then $\cO_{\widehat{C}}$ is a sheaf of rings with local stalks. Furthermore, $\pi^{\sharp}\colon \cO_{S} \arr \cO_{C}$ has image contained in $\cO_{\widehat{C}}$, so it defines a morphism of locally ringed spaces $\widehat{C} \arr S$.
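Although the construction is stated here for a family, the local model is the familiar one: pinching $\AA^{1}=\spec k[t]$ along the section $t=0$ produces the subring of polynomials with no linear term, namely $k[t^{2},t^{3}]\simeq k[x,y]/(y^{2}-x^{3})$, the standard cusp.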
\begin{proposition}\label{prop:pinching}
The morphism $\widehat{C} \arr S$ is an $A_2$-curve. The morphism $C \arr \widehat{C}$ induced by the embedding $\cO_{\widehat{C}} \subseteq \cO_{C}$ is an isomorphism outside of the image of the section $\sigma$. Furthermore, the composite $\Sigma \subseteq C \arr \widehat{C}$ gives a closed and open embedding of $\Sigma$ into the cuspidal locus $\widehat{C}^{\rm c}$.
\end{proposition}
\begin{proof}
There is an exact sequence
\[
0 \arr \cO_{\widehat{C}} \arr \cO_{C} \arr I_{\Sigma}/I^{2}_{\Sigma} \arr 0
\]
of sheaves of $\cO_{S}$-modules; since $I_{\Sigma}/I^{2}_{\Sigma}$ is flat over $S$, it is clear that formation of $\widehat{C}$ commutes with base change on $S$, and that $\widehat{C}$ is flat over $S$.
Let us check that $\widehat{C}$ is finitely presented over $S$. Since formation of $\widehat{C}$ commutes with base change on $S$, and $C$ is finitely presented, we can assume that $S$ is noetherian. But the extension $\cO_{\widehat{C}} \subseteq \cO_{C}$ is finite, while $\cO_{C}$ is of finite type over $\cO_{S}$; from this it follows that $\cO_{\widehat{C}}$ is also of finite type over $\cO_{S}$.
Since $C \arr \widehat{C}$ is surjective and $C\arr S$ is proper, it follows that $\widehat{C}\arr S$ is also proper.
The fact that $\widehat{C}$ is an $A_{2}$-curve can now be checked when $S$ is the spectrum of an algebraically closed field, and it is straightforward.
The second statement is evident from the construction.
The last statement is proved with a straightforward local calculation.
\end{proof}
The pinching construction is functorial, in the sense that an isomorphism $C \simeq C'$ of pointed $A_{2}$-curves over $S$ induces an isomorphism $\widehat{C} \simeq \widehat{C'}$ of $A_{2}$-curves over $S$.
Suppose that $C$ is a geometrically connected $A_{2}$-curve over $\spec k$, $p \in C(k)$ a smooth rational point. Let $U = \spec R$ be a miniversal deformation space for the pair $(C,p)$ as a $1$-pointed $A_{2}$-curve. Here $R$ is a complete local $k$-algebra; since $A_{2}$-curves are unobstructed, $R$ is a power series algebra $k\ds{x_{1}, \dots, x_{r}}$. Denote by $C_{U} \arr U$ the corresponding $1$-pointed $A_{2}$-curve; this $C_{U}$ is a priori a formal scheme; but the closed fiber $C$ is projective, and an ample line bundle on $C$ extends to $C_{U}$, because $C$ is $1$-dimensional, so it follows from Grothendieck's existence theorem that $C_{U} \arr U$ is a projective scheme.
Analogously, let $V = \spec S$ be a miniversal deformation space for the pinched $A_{2}$-curve $\widehat{C} \arr \spec k$; as in the previous case, $V$ is the spectrum of a power series algebra over $k$. Call $D_{V} \arr V$ the universal $A_{2}$-curve. Its closed fiber $\widehat{C}$ has a distinguished cusp $\widehat{p}$, the image of $p \in C(k)$; call $\Delta \subseteq D_{V}^{\rm c}$ the connected component of $D_{V}^{\rm c}$ containing $\widehat{p}$; by Proposition~\ref{prop:cuspidal-projection} the projection $\Delta \arr V$ is an embedding, and $\Delta$ is once again the spectrum of a power series algebra.
By pinching the tautological section $U \arr C_{U}$ we get a family of $A_{2}$-curves $\widehat{C}_{U} \arr U$ with a distinguished section $U \arr \widehat{C}_{U}^{\rm c}$; this induces a (non-unique) morphism $U \arr V$, which factors through $\Delta$.
\begin{lemma}\label{lem:cuspidal-isomorphism}
The morphism $U \arr \Delta$ described above is an isomorphism.
\end{lemma}
\begin{proof}
Call $\phi\colon U \arr \Delta$ the morphism above. To produce a morphism in the other direction $\psi\colon \Delta \arr U$, consider the pullback $D_{\Delta} \mathrel{\smash{\overset{\mathrm{\scriptscriptstyle def}} =}} \Delta\times_{V}D_{V}$, with its tautological section $\Delta \arr D_{\Delta}^{\rm c}$. A straightforward local calculation reveals that the normalization $\overline{D}_{\Delta}$ of $D_{\Delta}$ along $\Delta \subseteq D_{\Delta}$ is an $A_{2}$-curve, and that the reduced inverse image of $\Delta \subseteq D_{\Delta}$ in $\overline{D}_{\Delta}$ maps isomorphically onto $\Delta$, thus giving a section $\Delta\arr \overline{D}_{\Delta}$. Hence we get a pointed $A_{2}$-curve $\overline{D}_{\Delta}$, whose closed fiber is $C$; so there exists a morphism $\psi\colon \Delta\arr U$, with an isomorphism $\overline{D}_{\Delta} \simeq \Delta\times_{U}C_{U}$, such that the section $\Delta \arr \overline{D}_{\Delta}$ corresponds to the section of $\Delta\times_{U}C_{U}$ coming from the tautological section $U \arr C_{U}$.
Consider the composite $\psi\phi\colon U \arr U$. The pullback of $C_{U}$ along $\psi\phi$ is the normalization of $\widehat{C}_{U}$ along the canonical section $U \arr \widehat{C}_{U}^{\rm c}$; hence it is isomorphic to $C_{U}$. This, together with the fact that $C_{U} \arr U$ is miniversal, implies that $\psi\phi$ is an isomorphism. An analogous argument shows that the pullback of $D_{\Delta}$ along $\phi\psi$ is isomorphic to $D_{\Delta}$, so that $\phi\psi$ is also an isomorphism. It follows that $\phi$ is an isomorphism, as claimed.
\end{proof}
As an application of the pinching construction, let us prove the following lemma, which is needed in the proof of Theorem~\ref{thm:quotient-stack}.
\begin{lemma}\label{lem:boundedness}
There exists a scheme of finite type $U$ over $k$, and a morphism $U \arr\widetilde{\mathscr M}_{g,n}$, which is surjective on geometric points.
\end{lemma}
\begin{proof}
Let $\Omega$ be an algebraically closed extension of $k$. Let $(C, p_{1}, \dots, p_{n})$ be a stable $n$-pointed $A_{2}$-curve over $\Omega$, and call $q_{1}$, \dots,~$q_{s}$ the cuspidal points of $C$. Denote by $\overline{C}$ the normalization of $C$ at the $q_{i}$, by $\overline{p}_{i}$ the point of $\overline{C}$ lying over $p_{i}$, and by $\overline{q}_{i}$ the point of $\overline{C}$ lying over $q_{i}$. Then $(\overline{C}, \overline{p}_{1}, \dots, \overline{p}_{n}, \overline{q}_{1}, \dots, \overline{q}_{s})$ is an $(n+s)$-pointed nodal curve of genus~$g-s$. Hence $s \leq g$; it is enough to produce for each $s$ with $1 \leq s \leq g$ a scheme of finite type and a morphism $U_{s} \arr \widetilde{\mathscr M}_{g,n}$, whose image contains all $n$-pointed stable $A_{2}$-curves with exactly $s$ cuspidal points.
Notice that the $(n+s)$-pointed curve $(\overline{C}, \overline{p}_{1}, \dots, \overline{p}_{n}, \overline{q}_{1}, \dots, \overline{q}_{s})$ above is not necessarily stable, because the pullback of $\omega_{C/\Omega}(p_{1}+ \dots + p_{n})$ is
\[
\omega_{\overline{C}/\Omega}(\overline{p}_{1}+ \dots + \overline{p}_{n} + 2\overline{q}_{1} + \dots + 2\overline{q}_{s})\,;
\]
but this instability can only occur when $C$ has a rational component whose only singularity is a single cusp, and intersects the rest of the curve in only one point. For each $i = 1$, \dots,~$s$ let $r_{i}$ be a smooth point on the component of $\overline{C}$ containing $\overline{q}_{i}$, but different from $\overline{q}_{i}$; then the $(n+2s)$-pointed curve
\[
(\overline{C}, \overline{p}_{1}, \dots, \overline{p}_{n}, \overline{q}_{1}, \dots, \overline{q}_{s}, r_{1}, \dots, r_{s})
\]
is stable.
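The coefficient $2$ appearing in front of the $\overline{q}_{i}$ reflects the standard local computation at a cusp: if $C$ is locally $\spec\Omega[t^{2},t^{3}]$ and $\overline{C}$ is locally $\spec\Omega[t]$, then $\omega_{C}$ is generated by $dt/t^{2}$, so its pullback to $\overline{C}$ is $\omega_{\overline{C}}(2\overline{q})$ near $\overline{q}$.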
Now, starting from a stable $(n+2s)$-pointed curve $(D, t_{1}, \dots, t_{n+2s})$ of genus $g-s$ over a scheme $S$, let us construct an $n$-pointed $A_{2}$-curve $(\widetilde{D}, \widetilde{t}_{1}, \dots, \widetilde{t}_{n})$ of genus $g$, over the same $S$, as follows.
\begin{enumerate1}
\item The curve $\widetilde{D}$ is obtained from $D$ by pinching down the images of $t_{n+1}$, \dots,~$t_{n+s}$.
\item For each $i = 1$, \dots,~$n$, the section $\widetilde{t}_{i}\colon S \arr \widetilde{D}$ is the composite of $t_{i}\colon S \arr D$ with the projection $D \arr \widetilde{D}$.
\item The markings $t_{n+s+1}$, \dots,~$t_{n+2s}$ are forgotten.
\end{enumerate1}
The curve $(\widetilde{D}, \widetilde{t}_{1}, \dots, \widetilde{t}_{n})$ is not necessarily stable; however, by Proposition~\ref{prop:openness} there exists an open substack $\cU_{s} \subseteq \overline{\mathscr M}_{g-s, n+2s}$ whose objects are the curves $(D, t_{1}, \dots, t_{n+2s})$ whose associated curve $(\widetilde{D}, \widetilde{t}_{1}, \dots, \widetilde{t}_{n})$ is stable. By sending $(D, t_{1}, \dots, t_{n+2s})$ to $(\widetilde{D}, \widetilde{t}_{1}, \dots, \widetilde{t}_{n})$ we get a morphism $\cU_{s} \arr \widetilde{\mathscr M}_{g,n}$. For each $\Omega$, the stable $n$-pointed $A_{2}$-curves with exactly $s$ cuspidal points are precisely those coming from $\cU_{s}$; since $\cU_{s}$ is of finite type, we can find a smooth surjective morphism $U_{s} \arr \cU_{s}$, in which $U_{s}$ is a scheme of finite type over $k$. The composite $U_{s} \arr \cU_{s} \arr \widetilde{\mathscr M}_{g, n}$ is the desired morphism.
\end{proof}
\subsection{The description of the cuspidal locus}\label{sub:description cusp}
Consider an $A_{2}$-curve $C \arr S$. The fact that formation of $C^{\rm c}$ commutes with base change allows us to define $C^{\rm c}$ even when $S$ is an Artin stack. In particular, let $\widetilde{\mathscr C}_{g} \arr \widetilde{\mathscr M}_{g}$ be the universal stable $A_2$-curve of genus~$g$; define $\widetilde{\mathscr C}^{\rm c}_g$ to be the cuspidal locus inside $\widetilde{\mathscr C}_{g}$. Then $\widetilde{\mathscr C}^{\rm c}_{g}$ is the stack of stable $A_2$-curves of genus~$g$ with a marked cusp. More precisely, one can describe $\widetilde{\mathscr C}^{\rm c}_{g}$ as the stack of stable $A_2$-curves $C \arr S$ of genus $g$, with a section $S \arr C^{\rm c}$.
By Proposition~\ref{prop:cuspidal-projection}, the projection $\widetilde{\mathscr C}^{\rm c}_{g} \arr \widetilde{\mathscr M}_{g}$ is finite and unramified; its image is precisely the locus of curves with at least one cusp. Furthermore $\widetilde{\mathscr C}^{\rm c}_{g}$ is a smooth stack; hence it is in fact the normalization of the locus of curves in $\widetilde{\mathscr M}_{g}$ with at least one cusp. We aim to give a description of $\widetilde{\mathscr C}^{\rm c}_{g}$. This is a little subtle, as there are two distinct possibilities for a stable $A_{2}$-curve with a marked cusp, revealed by the following.
\begin{proposition}\label{prop:description-punctual}
Assume that $k$ is algebraically closed. Let $C$ be a stable $A_2$-curve of genus $g$ over $k$, and let $p \in C(k)$ be a cuspidal point. Call $\overline{C}$ the normalization of $C$ at $p$, and $\overline{p}$ the point of $\overline{C}$ lying over $p$. Furthermore, call $D$ the component of $C$ containing $p$ and $\overline{D}$ its inverse image in $\overline{C}$. Then there are two possibilities:
\begin{enumerate1}
\item $(\overline{C}, \overline{p})$ is a $1$-marked stable $A_2$-curve of genus $g-1$, or
\item $\overline{D}$ is smooth of genus~$0$, and meets the rest of $\overline{C}$ transversally in one smooth point.
\end{enumerate1}
In case (1), the morphism $\overline{C} \arr C$ induces an isomorphism of group schemes $\mathop{\underline{\mathrm{Aut}}}\nolimits_{k}(\overline{C}, \overline{p}) \simeq \mathop{\underline{\mathrm{Aut}}}\nolimits_{k}(C, p)$.
In case (2), call $C_{1}$ the union of the irreducible components of $C$ different from $D$, and $q$ the point of intersection of $D$ with $C_{1}$. Then $\mathop{\underline{\mathrm{Aut}}}\nolimits_{k}(D, p, q) = \GG_{\rmm}$; there is an isomorphism of group schemes $\mathop{\underline{\mathrm{Aut}}}\nolimits_{k}(C_{1}, q) \times \GG_{\rmm} \simeq \mathop{\underline{\mathrm{Aut}}}\nolimits_{k}(C,p)$, in which $\mathop{\underline{\mathrm{Aut}}}\nolimits_{k}(C_{1}, q)$ acts trivially on $D$, and $\GG_{\rmm}$ acts trivially on $C_{1}$.
\end{proposition}
\begin{proof}
The degree of the restriction $\omega_{\overline{C}}(\overline{p}) \mid \overline{D}$ equals the degree of the restriction $\omega_{C} \mid D$ minus one. This means that there are two possibilities: either $\omega_{\overline{C}}(\overline{p}) \mid \overline{D}$ has positive degree, which gives case (1), or it has degree $0$, which gives case (2).
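For instance, if $\overline{D}\simeq\PP^{1}$ meets the rest of $\overline{C}$ transversally in one point, then $\deg\bigl(\omega_{\overline{C}}(\overline{p}) \mid \overline{D}\bigr)=-2+1+1=0$, while $\deg(\omega_{C}\mid D)=2p_{a}(D)-2+1=1>0$ (here $p_{a}(D)=1$, since $D$ is a rational curve with one cusp); so we are in case (2), and the stability of $C$ is not contradicted.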
The statement about automorphism group schemes is straightforward.\end{proof}
This shows that there are two possibilities for the pair $(C, p)$: it can be obtained from a stable $1$-marked $A_2$-curve of genus $g-1$ either by pinching down the marked point, or by attaching a rational curve with a cusp at the marked point.
Let us give a stack-theoretic version of this, by showing that $\widetilde{\mathscr C}^{\rm c}_{g}$ is isomorphic to $\widetilde{\mathscr M}_{g-1,1}\times[\AA^{1}/\gm]$. The stack $[\AA^{1}/\gm]$ has a closed substack $\cB\GG_{\rmm} = [0/\GG_{\rmm}]$, whose complement is $[(\AA^{1} \smallsetminus \{0\})/\GG_{\rmm}] = \spec k$. Thus a geometric point of $\widetilde{\mathscr M}_{g-1,1}\times[\AA^{1}/\gm]$ is of two types.
\begin{enumerate1}
\item It can be in the open substack $\widetilde{\mathscr M}_{g-1,1} = \widetilde{\mathscr M}_{g-1,1}\times\spec k$; in this case it corresponds to a stable $1$-marked $A_{2}$-curve $(\overline{C}, \overline{p})$ of genus $g-1$, and the associated $A_{2}$-curve is of type~(1), obtained by pinching $\overline{C}$ at $\overline{p}$.
\begin{figure}
\caption{Cuspidal curve of type (1)}
\label{fig:subim1}
\end{figure}
\item Alternatively, it can be in the closed substack $\widetilde{\mathscr M}_{g-1,1}\times\cB\GG_{\rmm} \subseteq \widetilde{\mathscr M}_{g-1,1}\times[\AA^{1}/\gm]$; then it corresponds, up to isomorphism, to a stable $1$-marked $A_{2}$-curve $(\overline{C}, \overline{p})$ of genus $g-1$, but the automorphism group of the corresponding object of $\widetilde{\mathscr M}_{g-1,1}\times[\AA^{1}/\gm]$ is $\mathop{\underline{\mathrm{Aut}}}\nolimits(\overline{C}, \overline{p}) \times\GG_{\rmm}$. In this case we associate with this object the curve of type~(2) obtained by attaching to $\overline{p}$ a rational curve with a cusp.
\begin{figure}
\caption{Cuspidal curve of type (2)}
\label{fig:subim2}
\end{figure}
\end{enumerate1}
The way these two set-theoretic descriptions glue together to a morphism $\widetilde{\mathscr M}_{g-1, 1} \times [\AA^{1}/\gm] \arr \widetilde{\mathscr C}^{\rm c}_{g}$ is a little subtle. We construct the morphism as follows.
Consider the universal curve $\widetilde{\mathscr C}_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}$, with its tautological section $\sigma\colon \widetilde{\mathscr M}_{g-1,1} \arr \widetilde{\mathscr C}_{g-1, 1}$. Call $\Sigma$ the image of $\sigma$; then $\Sigma \times \cB\GG_{\rmm}$ is a closed smooth substack of $\widetilde{\mathscr C}_{g-1, 1} \times [\AA^{1}/\gm]$ of codimension~$2$. Denote by $\cD_{g-1,1}$ the blowup of $\widetilde{\mathscr C}_{g-1, 1} \times [\AA^{1}/\gm]$ along $\Sigma \times \cB\GG_{\rmm}$, and call $\Sigma'$ the proper transform of $\Sigma \times [\AA^{1}/\gm]$ in $\cD_{g-1, 1}$.
\begin{proposition}
The morphism $\cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ is an $A_{2}$-curve of genus $g-1$. The composite $\Sigma' \subseteq \cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ is an isomorphism. The corresponding section $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \arr \cD_{g-1,1}$ has image contained in the smooth locus of the morphism $\cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$.
\end{proposition}
\begin{proof}
Obviously, $\cD_{g-1,1}$ is proper and representable over $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$. Also, it is smooth over $k$, since both $\widetilde{\mathscr C}_{g-1,1}\times [\AA^{1}/\gm]$ and the center $\Sigma\times\cB\GG_{\rmm}$ are smooth; since the morphism $\cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ has equidimensional fibers, it is also flat.
The fact that $\Sigma' \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ is an isomorphism follows immediately from the fact that $\Sigma \times \cB\GG_{\rmm}$ is a Cartier divisor in $\Sigma \times [\AA^{1}/\gm]$.
Now we need to check that the singularities of the geometric fibers of $\cD_{g-1,1}$ over $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ are at most $A_{2}$, and that the intersection of $\Sigma'$ with the geometric fibers is contained in the smooth locus. This is a particular case of the following.
\end{proof}
\begin{lemma}\label{lem:fibers-D}
Let $s\colon \spec\Omega \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ be a geometric point. Call $(C, p) \in \widetilde{\mathscr M}_{g-1, 1}$ the marked $A_{2}$-curve corresponding to the composite
\[
\spec\Omega \xrightarrow{s} \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \xrightarrow{\pr_{1}} \widetilde{\mathscr M}_{g-1,1}\,.
\]
If the image of $s$ in $[\AA^{1}/\gm]$ is in $[\AA^{1}/\gm] \smallsetminus \cB\GG_{\rmm} \simeq \spec k$, then the fiber of $\cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ over $s$ is $C$, and the inverse image of $\Sigma'$ is $p \in C$.
If the image of $s$ in $[\AA^{1}/\gm]$ is in $\cB\GG_{\rmm}$, then the fiber is $C$, with a copy of $\PP^{1}$ glued to $p$ by the point $0 \in \PP^{1}$. The inverse image of $\Sigma'$ is the point at infinity.
\end{lemma}
Thus we can apply the pinching construction above and get an $A_2$-curve $\widehat{\cD}_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$.
\begin{proposition}
The morphism $\widehat{\cD}_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ is a stable $A_2$-curve.
\end{proposition}
\begin{proof}
We need to check that the geometric fibers of $\widehat{\cD}_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ are stable $A_2$-curves; this is clear from Lemma~\ref{lem:fibers-D}.
\end{proof}
By construction $\widehat{\cD}_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ comes from a $1$-pointed $A_{2}$-curve $\cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ by pinching down a section $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \arr \cD_{g-1,1}$ with image $\Sigma' \subseteq\cD_{g-1,1}$; from the last statement of Proposition~\ref{prop:pinching} we get a factorization $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \arr \widetilde{\mathscr C}_{g}^{\rm c} \arr \widetilde{\mathscr M}_{g}$.
\begin{theorem}\label{thm:cuspidal-description}
The morphism $\Pi\colon \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \arr \widetilde{\mathscr C}_{g}^{\rm c}$ described above is an isomorphism.
\end{theorem}
We will use the following.
\begin{lemma}\label{lem:isom}
Let $f\colon \cX \arr \cY$ be an \'{e}tale morphism of algebraic stacks over $k$. Assume that the following conditions are satisfied for every algebraically closed extension $k \subseteq\Omega$.
\begin{enumerate1}
\item The morphism $f$ induces a bijection between isomorphism classes in $\cX(\Omega)$ and in $\cY(\Omega)$.
\item If $x \in \cX(\Omega)$, the morphism $\mathop{\underline{\mathrm{Aut}}}\nolimits_{\Omega}(x) \arr \mathop{\underline{\mathrm{Aut}}}\nolimits_{\Omega}\bigl(f(x)\bigr)$ induced by $f$ is an isomorphism.
\end{enumerate1}
Then $f\colon \cX \arr \cY$ is an isomorphism.
\end{lemma}
\begin{proof}
Let $V \arr \cY$ be a smooth surjective morphism, where $V$ is a scheme. Set $U := \cX\times_{\cY} V$; the conditions above are easily seen to imply that the automorphism group schemes of the geometric points of $U$ are trivial, so that $U$ is an algebraic space. The morphism $U \arr V$ is \'{e}tale, and for each $\Omega$ the induced function $U(\Omega) \arr V(\Omega)$ is a bijection. Since $U \arr V$ is \'{e}tale and surjective, it is an epimorphism of \'{e}tale sheaves; hence to show that it is an isomorphism it is enough to prove that the diagonal $U \arr U\times_{V}U$ is an isomorphism. But $U \arr V$ is unramified, so $U$ is an open subspace of $U\times_{V}U$; since it has the same geometric points, the result follows. So $U \arr V$ is an isomorphism, hence $f$ is an isomorphism, as claimed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:cuspidal-description}]
By Lemma~\ref{lem:isom}, Proposition~\ref{prop:description-punctual} and Lemma~\ref{lem:fibers-D} it is enough to prove that $\Pi$ is \'{e}tale.
Let $\widetilde{\mathscr M}'_{g-1, 1}$ be the stack whose sections are $1$-pointed $A_{2}$-curves $C \arr S$ such that, if we denote by $\Sigma \subseteq C$ the image of the section, the invertible sheaf $\omega_{C/S}(2\Sigma)$ is relatively ample over $S$. The $A_{2}$-curve $\cD_{g-1,1} \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ satisfies this condition; hence we get a morphism $\Pi'\colon \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \arr \widetilde{\mathscr M}'_{g-1, 1}$.
Also, if $C \arr S$ is in $\widetilde{\mathscr M}'_{g-1, 1}$, the pinched curve $\widehat{C} \arr S$ is in $\widetilde{\mathscr M}_{g}$; hence the pinching construction gives a morphism $\Pi''\colon \widetilde{\mathscr M}'_{g-1, 1} \arr \widetilde{\mathscr C}_{g}^{\rm c}$. The composite
\[
\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm] \xrightarrow{\Pi'} \widetilde{\mathscr M}'_{g-1,1} \xrightarrow{\Pi''} \widetilde{\mathscr C}_{g}^{\rm c}
\]
is $\Pi$, by construction; so it is enough to show that $\Pi'$ and $\Pi''$ are \'{e}tale.
For $\Pi''$ this follows from Lemma~\ref{lem:cuspidal-isomorphism}. For $\Pi'$, the locus of $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ where $\Pi'$ is \'{e}tale is open, and the only open substack of $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ containing $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm}$ is $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ itself; hence it is enough to show that $\Pi'$ is \'{e}tale along $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm}$.
Denote by $\widetilde{\mathscr M}^{o}_{g-1,1}$ the complement $\widetilde{\mathscr M}'_{g-1,1} \smallsetminus \widetilde{\mathscr M}_{g-1,1}$, with its reduced scheme structure; this is the scheme-theoretic image of $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm}$ into $\widetilde{\mathscr M}'_{g-1,1}$.
The morphism $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm} \arr \widetilde{\mathscr M}^{o}_{g-1,1}$ can be described as follows. We can interpret $\cB\GG_{\rmm}$ as the stack $\cM_{0,2}$ of smooth curves $P \arr S$ of genus~$0$ with two disjoint sections $s_{1}$, $s_{2}\colon S \arr P$; the $\GG_{\rmm}$-torsor corresponding to $(P \arr S, s_{1}, s_{2})$ is that associated with the normal bundle to $s_{1}$. Given an object $(C \to S, s)$ of $\widetilde{\mathscr M}_{g-1,1}$ and an object $(P \to S, s_{1}, s_{2})$ of $\cM_{0,2}$, we obtain an object $C \sqcup P$ of $\widetilde{\mathscr M}'_{g-1,1}$ by gluing $C$ with $P$ by identifying $s$ and $s_{1}$, and using $s_{2}$ to give the marking. It is easy to check that $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm} \arr \widetilde{\mathscr M}^{o}_{g-1,1}$ is an isomorphism.
So, it is enough to show that the scheme-theoretic inverse image of $\widetilde{\mathscr M}^{o}_{g-1,1} \subseteq\widetilde{\mathscr M}'_{g-1,1}$ in $\widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ is $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm}$. We can assume that $k$ is algebraically closed; let $c\colon \spec k \arr \widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm}$ be a morphism, and let us show that there is a smooth morphism $U \arr \widetilde{\mathscr M}_{g-1,1}\times [\AA^{1}/\gm]$ with a lifting $\spec k \arr U$ of $c$, such that the pullback of $\widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm}$ to $U$ coincides with the pullback of $\widetilde{\mathscr M}^{o}_{g-1,1}$. Here is a picture of the curve corresponding to the composite $c'\colon \spec k \arr \widetilde{\mathscr M}_{g-1,1}\times \cB\GG_{\rmm} \arr \widetilde{\mathscr M}^{o}_{g-1,1}$; the blue point is the marked point, while the red point, which we call $q$, is the node where the component containing the marked point, which is isomorphic to $\PP^1$, meets the rest of the curve.
\centerline{\includegraphics[scale=.4]{Curva}}
Let $(V_{1},q_{1})$ be a versal deformation space of the node $q$ (here $q_{1} \in V_{1}(k)$ is the marked point). Since the universal family over $\widetilde{\mathscr M}'_{g-1,1}$ is obviously versal, there exists a smooth morphism $V \arr \widetilde{\mathscr M}'_{g-1,1}$ with a lifting of $c'$ and a smooth morphism $V \arr V_{1}$ such that the inverse image of $q_{1}\in V_{1}$ in $V$ coincides with the inverse image of $\widetilde{\mathscr M}^{o}_{g-1,1}$. Set $U \mathrel{\smash{\overset{\mathrm{\scriptscriptstyle def}} =}} (\widetilde{\mathscr M}_{g-1,1}\times\GG_{\rma})\times_{\widetilde{\mathscr M}'_{g-1,1}}V$; for the claim to hold it is enough to show that the inverse image of $q_{1}$ along the morphism $V \arr V_{1}$ is reduced. But this follows immediately from the fact that the curve $\cD_{g-1,1}$ is smooth over $\spec k$.
This completes the proof.
\end{proof}
\section{The Chow ring of $\overline{\cM}_{1,2}$} \label{sec:chow Mbar12}
From now on, the base ring $k$ will be a field of characteristic $\neq 2,3$.
In this Section we compute the integral Chow ring of $\overline{\cM}_{1,2}$, the moduli stack of stable genus $1$ curves with two markings (\Cref{thm:chow Mbar12}), and consequently also the rational Chow ring of the coarse moduli space $\overline{M}_{1,2}$ (\Cref{cor:chow Mbar12}).
These results are achieved in the following way: we first compute the integral Chow ring of $\widetilde{\mathscr C}_{1,1}$, the universal stable elliptic $A_2$-curve (\Cref{prop:chow Ctilde11}), using the patching lemma (\Cref{lm:Atiyah}). Then we conclude our computations by leveraging the localization exact sequence induced by the open embedding $\overline{\cM}_{1,2}\hookrightarrow\widetilde{\mathscr C}_{1,1}$.
In this Section the reader can already see all the main tools that will be used in the upcoming Sections to determine $\ch(\overline{\cM}_{2,1})$.
\subsection{The Chow ring of $\widetilde{\mathscr C}_{1,1}$}
First let us recall one of our key tools, the \emph{patching lemma}.
\begin{lemma}[\cite{DLV}*{Lemma 3.4}]\label{lm:Atiyah}
Let $X$ be a smooth variety endowed with the action of a group $G$, and $Y\xhookrightarrow{i} X$ a smooth, closed and $G$-invariant subvariety, with normal bundle $\cN$. Suppose that $c_{\rm top}^G(\cN)$ is not a zero-divisor in $\ch_G(Y)$. Then the following diagram of rings is cartesian:
\[\xymatrix{
\ch_G(X) \ar[r]^{i^*} \ar[d]^{j^*} & \ch_G(Y) \ar[d]^{q} \\
\ch_G(X\smallsetminus Y) \ar[r]^{p} & \ch_G(Y)/(c_{\rm top}^G(\cN))
}\]
where the bottom horizontal arrow $p$ sends the class of a variety $V$ to the equivalence class of $i^*\xi$, where $\xi$ is any element in the set $(j^*)^{-1}([V])$.
\end{lemma}
We will call $V_i$ the rank one representation of $\GG_{\rmm}$ on which the latter acts with weight $i$. The notation $V_{i_1,\dots,i_r}$ will stand for the rank $r$ representation $V_{i_1}\oplus\cdots\oplus V_{i_r}$.
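Concretely, $t\in\GG_{\rmm}$ acts on $V_{i_1,\dots,i_r}$ by $t\cdot(v_1,\dots,v_r)=(t^{i_1}v_1,\dots,t^{i_r}v_r)$; for instance, $t$ acts on $V_{-4,-6}$ by $t\cdot(a,b)=(t^{-4}a,t^{-6}b)$.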
Let $\bfp:\widetilde{\mathscr M}_{1,1}\to \widetilde{\mathscr C}_{1,1}$ be the universal section.
\begin{lemma}\label{lm:Ctilde11 minus sigma affine bundle}
The stack $\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp}$ is an affine bundle over $[V_{-2,-3}/\GG_{\rmm}]$ and
\[\ch(\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\simeq \ZZ[\lambda_1],\]
where $\lambda_1$ is the pullback of the first Chern class of the Hodge line bundle along $\widetilde{\mathscr C}_{1,1}\arr\widetilde{\mathscr M}_{1,1}$.
\end{lemma}
\begin{proof}
Recall that $\widetilde{\mathscr M}_{1,1}$ is isomorphic to the quotient stack $[V_{-4,-6}/\GG_{\rmm}]$: indeed, the $\GG_{\rmm}$-torsor associated with the Hodge line bundle is isomorphic to $V_{-4,-6}$ (see \cite{EG}*{5.4}). Consider the universal affine Weierstrass curve $W$ inside $V_{-4,-6}\times V_{-2,-3}$ defined by the equation
\[ y^2=x^3+ax+b, \]
where $(a,b)\in V_{-4,-6}$ and $(x,y)\in V_{-2,-3}$. Observe that this subscheme is $\GG_{\rmm}$-invariant.
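In fact, since $y^2-x^3-ax$ has weight $-6$, the assignment $(a,x,y)\mapsto(a,\,y^2-x^3-ax,\,x,\,y)$ gives a $\GG_{\rmm}$-equivariant isomorphism $V_{-4}\times V_{-2,-3}\simeq W$ over $V_{-2,-3}$; this is the affine bundle structure used in the next paragraph.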
The quotient stack $[W/\GG_{\rmm}]$ is isomorphic to $\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp}$, and it is immediate to check that $W\arr V_{-2,-3}$ is an equivariant affine bundle. Therefore
\[ \ch(\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp}) \simeq \ch([V_{-2,-3}/\GG_{\rmm}]) \simeq \ch(\cB\GG_{\rmm}) \simeq \ZZ[T] \]
where the isomorphisms are all given by pullback homomorphisms. To conclude, observe that we have a commutative diagram of $\GG_{\rmm}$-quotient stacks
\[ \xymatrix{
[W/\GG_{\rmm}] \ar[r] \ar[d] & [V_{-2,-3}/\GG_{\rmm}] \ar[d] \\
[V_{-4,-6}/\GG_{\rmm}] \ar[r] & [\spec{k}/\GG_{\rmm}].
} \]
If we take the induced pullbacks, we obtain a commutative diagram of Chow rings: in particular, the pullback of $T$ along the right vertical and the top horizontal maps is equal to the pullback of $T$ along the other two maps. As the pullback of $T$ along the bottom horizontal arrow is $-\lambda_1$, we get the desired conclusion.
\end{proof}
\begin{proposition}\label{prop:chow Ctilde11}
We have
\[\ch(\widetilde{\mathscr C}_{1,1})\simeq \ZZ[\lambda_1,\mu_1]/(\mu_1(\lambda_1+\mu_1)),\]
where $\lambda_1$ is the first Chern class of the Hodge line bundle and $\mu_1:=\bfp_*[\widetilde{\mathscr M}_{1,1}]$ is the fundamental class of the universal section.
\end{proposition}
\begin{proof}
By definition, the first Chern class of the normal bundle of the universal section is $-\psi_1=-\lambda_1$. We can apply \Cref{lm:Atiyah}, which tells us that we have a cartesian diagram of rings
\[
\begin{tikzcd}
\ch(\widetilde{\mathscr C}_{1,1}) \ar[r, "\bfp^*"] \ar[d, "j^*"] & \ch(\widetilde{\mathscr M}_{1,1})\simeq\ZZ[\lambda_1] \ar[d] \\
\ch(\widetilde{\mathscr C}_{1,1}\smallsetminus\im\bfp)\simeq\ZZ[\lambda_1] \ar[r] & \ZZ.
\end{tikzcd}
\]
As the restriction of $\lambda_1$, regarded as an element in $\ch(\widetilde{\mathscr C}_{1,1})$, to $\ch(\widetilde{\mathscr M}_{1,1})$ is equal to $\lambda_1$, we deduce that $\ch(\widetilde{\mathscr C}_{1,1})$ is generated by $\lambda_1$ and $\bfp_*[\widetilde{\mathscr M}_{1,1}]=:\mu_1$.
The cartesianity of the diagram above implies that the ideal of relations is formed by those polynomials $p(\lambda_1,\mu_1)$ which belong to the kernel of both $j^*$ and $\bfp^*$.
If $j^*p(\lambda_1,\mu_1)=0$ then it must be of the form $\mu_1q(\lambda_1,\mu_1)$. We have
\[ \bfp^*(\mu_1q(\lambda_1,\mu_1)) = -\lambda_1(q(\lambda_1,-\lambda_1)). \]
If this is zero then $q(\lambda_1,\mu_1)$ must be divisible by $\lambda_1+\mu_1$, thus the ideal of relations is generated by $\mu_1(\lambda_1+\mu_1)$.
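Conversely, $\mu_1(\lambda_1+\mu_1)$ is indeed a relation: it lies in the kernel of $j^*$ because $j^*\mu_1=0$, and in the kernel of $\bfp^*$ because $\bfp^*(\mu_1(\lambda_1+\mu_1))=-\lambda_1(\lambda_1-\lambda_1)=0$, where we used $\bfp^*\mu_1=-\lambda_1$ by the self-intersection formula.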
\end{proof}
\subsection{The Chow ring of $\overline{\cM}_{1,2}$}
At several points in what follows we will use the following notion (see \cite[Definition 4.1]{Per}).
\begin{definition}
If $f \colon \cX \arr \cY$ is a morphism of algebraic stacks, we say that $f$ is a \emph{Chow envelope} if it is proper and representable, and for every field extension $K/k$ the induced functor $f_{K}\colon \cX(K) \arr \cY(K)$ is essentially surjective.
\end{definition}
\begin{proposition}
\begin{enumerate1}
\item Being a Chow envelope is a property that is stable under composition and under base change.
\item If $f\colon \cX \arr \cY$ is a Chow envelope, and $\cY$ is a quotient stack of finite type over $k$, then $f_{*}\colon \operatorname{CH}^{*-c}(\cX) \arr \ch(\cY)$ is surjective, where $c \mathrel{:=} \dim \cY - \dim \cX$.
\end{enumerate1}
\end{proposition}
\begin{proof}
Part (1) is straightforward.
For Part (2), write $\cY = [Y/G]$, where $Y$ is an algebraic space of finite type, and $G \arr \spec k$ an affine algebraic group acting on $Y$. Recall the basic construction of \cite{EG}. Let $N$ be a positive integer, and let $G \arr \mathrm{GL}(V)$ be a finite dimensional representation with an open subscheme $U \subseteq V$ on which $G$ acts freely, with $\codim_{V}(V \smallsetminus U) > N$; then the fppf quotient $(Y\times U)/G$ is an algebraic space, and by definition we have $\ch[i](\cY) = \ch[i]\bigl((Y\times U)/G\bigr)$ for $i \leq N$.
Since the morphism $\cX \arr \cY$ is proper and representable, we have a proper $G$-equivariant morphism of algebraic spaces $X \arr Y$ with $[X/G] = \cX$ and $\ch[i](\cX) = \ch[i]\bigl((X\times U)/G\bigr)$ for $i \leq N$, and we have a cartesian diagram
\[
\begin{tikzcd}
(X\times U)/G \rar\dar & \cX \dar\\
(Y\times U)/G \rar & \cY\,.
\end{tikzcd}
\]
This means that we can assume that $\cX$ and $\cY$ are algebraic spaces.
We refer to \cite[\S6.1]{EG} for basic facts about intersection theory on algebraic spaces. The Chow group of $\cY$ is generated by classes of integral closed subspaces $V \subseteq \cY$; we need to show that, given such a subspace $V \subseteq \cY$, there exists an integral closed subspace $W \subseteq \cX$ mapping birationally onto $V$. The algebraic space $V$ has a dense open subscheme $V' \subseteq V$; set $k(V) \mathrel{\smash{\overset{\mathrm{\scriptscriptstyle def}} =}} k(V')$. By hypothesis, the composite $\spec k(V) \arr V' \subseteq V \subseteq \cY$ lifts to $\spec k(V) \arr \cX$. But $\cX$ is of finite type, so this means that there exists a nonempty open subscheme $V_{1} \subseteq V'$ that lifts to an embedding $V_{1} \subseteq \cX$. Then we take $W$ to be the closure of $V_{1}$ in $\cX$.
\end{proof}
Here is the main result of the Section. We identify $\overline{\cM}_{1,2}$ with the universal curve over $\overline{\cM}_{1,1}$.
\begin{theorem}\label{thm:chow Mbar12}
Suppose that the ground field has characteristic $\neq 2,3$. Then
\[ \ch(\overline{\cM}_{1,2}) \simeq \ZZ[\lambda_1,\mu_1]/(\mu_1(\lambda_1+\mu_1),24\lambda_1^2) \]
where $\lambda_1$ is the first Chern class of the Hodge line bundle and $\mu_1:=\bfp_*[\overline{\cM}_{1,1}]$ is the fundamental class of the universal section.
\end{theorem}
\begin{proof}
Let $\cM_{0,2}$ be the stack of genus zero smooth curves with two marked points. There is only one isomorphism class of $2$-pointed smooth genus zero curves, namely a projective line $\PP^1$ with $0$ and $\infty$ as marked points, and its automorphism group is $\GG_{\rmm}$, acting by scalar multiplication. This implies that $\cM_{0,2}\simeq\cB\GG_{\rmm}$ and that the universal marked curve is isomorphic to $[\PP^1/\GG_{\rmm}]$, where $\GG_{\rmm}$ acts on $\PP^1$ by scalar multiplication.
Consider the morphism of stacks
\[ \cM_{0,2} \simeq \cB\GG_{\rmm} \arr \widetilde{\mathscr M}_{1,1} \]
that sends a $2$-marked genus zero smooth curve $(C\arr S,p,\sigma)$ to the cuspidal elliptic curve $(\widehat{C} \arr S,p)$, where $\widehat{C}$ is obtained from $C$ by pinching $\sigma$.
In particular, let $\widehat{\PP^1}$ be the cuspidal curve obtained by pinching $\PP^1$ at $0$. The scalar action of $\GG_{\rmm}$ on $\PP^1$ descends to an action on $\widehat{\PP^1}$ and the normalization map $\PP^1\to\widehat{\PP^1}$ is equivariant with respect to this action, hence it induces a morphism of quotient stacks $c''':[\PP^1/\GG_{\rmm}]\to [\widehat{\PP^1}/\GG_{\rmm}]$.
Then we have a commutative diagram
\[
\begin{tikzcd}
\left[ \PP^1/\GG_{\rmm} \right] \ar[r, "c'''"] \ar[dr, "\rho"] \ar[rr, bend left, "c'"] & {\left[\widehat{\PP^1}/\GG_{\rmm}\right]} \ar[r, "c''"] \ar[d, "\pi'"] & \widetilde{\mathscr C}_{1,1} \ar[d, "\pi"] \\
& \cM_{0,2}\simeq\cB\GG_{\rmm} \ar[r, "c"] & \widetilde{\mathscr M}_{1,1}.
\end{tikzcd}
\]
where the square in the diagram is cartesian.
Observe that $c'$ is a Chow envelope for the locus of cuspidal curves in $\widetilde{\mathscr C}_{1,1}$, which is the closed complement of $\overline{\cM}_{1,2}$; hence, by the localization exact sequence, if we take the quotient by the image of
\begin{equation}\label{eq:c}
c'_*\colon\operatorname{CH}^{*-2}([\PP^1/\GG_{\rmm}]) \arr \ch(\widetilde{\mathscr C}_{1,1})\simeq \ZZ[\lambda_1,\mu_1]
\end{equation}
we obtain the Chow ring of $\overline{\cM}_{1,2}$.
From the projective bundle formula, we see that $\ch([\PP^1/\GG_{\rmm}])$ is generated as a $\ch(\cB\GG_{\rmm})$-module by $1$ and by the hyperplane section $h$; hence the image of $c'_*$ is generated, as a group, by the elements $c'_*(\rho^*t^i)$ and $c'_*(\rho^*t^i\cdot h)$, where $t$ is the generator of $\ch(\cB\GG_{\rmm})\simeq \ZZ[t]$, which coincides with $c^*\lambda_1$.
We have that $\rho^*(t^i)=\rho^*c^*(\lambda_1^i) = (c')^*(\pi^*\lambda_1^i)$. This implies that $c'_*(\rho^*(t^i)\cdot h)=c'_*(h\cdot(c')^*(\pi^*\lambda_1^i))=\pi^*\lambda_1^i\cdot c'_*(h)$, and similarly $c'_*(\rho^*(t^i))=\pi^*\lambda_1^i\cdot c'_*(1)$; hence the image of $c'_*$ is generated as an ideal by $c'_*(1)$ and $c'_*(h)$.
The computation of $c'_*(1)$ is straightforward, as we have
\[c'_*(1)=c''_*(\pi')^*(1)=\pi^*c_*(1)=24\lambda_1^2.\]
Above we are using the fact that $c'''$ is birational, hence $c'''_*(1)=1$, and that $c:\cB\GG_{\rmm}\to\widetilde{\mathscr M}_{1,1}\simeq [V_{-4,-6}/\GG_{\rmm}]$ is the zero section of the vector bundle $[V_{-4,-6}/\GG_{\rmm}]\to\cB\GG_{\rmm}$, therefore its class coincides with the top Chern class of the $\GG_{\rmm}$-representation $V_{-4,-6}$, which is equal to $(-4\lambda_1)(-6\lambda_1)=24\lambda_1^2$.
Let $\bfp':\cB\GG_{\rmm}\to [\PP^1/\GG_{\rmm}]$ be the universal section given by the first marking, i.e. the $\GG_{\rmm}$-quotient of the $\GG_{\rmm}$-equivariant map $\spec{k}\to\PP^1$ corresponding to the point at infinity. Then $c'\circ \bfp ' = \bfp \circ c$ and $\bfp'_*(1)=h$, which readily implies $c'_*(h)=\bfp_*(24\lambda_1^2)=24\lambda_1^2\mu_1$. This concludes the proof.
\end{proof}
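Spelling out the last computation in the proof, we have
\[
c'_*(h)=c'_*\bfp'_*(1)=\bfp_*c_*(1)=\bfp_*(24\lambda_1^2)=24\lambda_1^2\,\bfp_*(1)=24\lambda_1^2\mu_1\,,
\]
where the last two steps use the projection formula and the fact that $\bfp^*\lambda_1=\lambda_1$.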
The rational Chow ring of the coarse moduli space $\overline{M}_{1,2}$ is isomorphic to the rational Chow ring of $\overline{\cM}_{1,2}$ (see \cite{VisInt}*{Proposition 6.1}). Thus, from the Theorem above we also get the following.
\begin{corollary}\label{cor:chow Mbar12}
Let $\overline{M}_{1,2}$ be the coarse moduli space of $\overline{\cM}_{1,2}$. Then over fields of characteristic $\neq 2,3$ we have
\[ \ch(\overline{M}_{1,2})_{\mathbb{Q}} \simeq \mathbb{Q}[\lambda_1,\mu_1]/(\mu_1(\lambda_1+\mu_1),\lambda_1^2). \]
\end{corollary}
\section{The Chow ring of $\widetilde{\mathscr C}_2$}\label{sec:chow cusp}
In this Section we determine the integral Chow ring of $\widetilde{\mathscr C}_2$, the universal stable $A_2$-curve of genus two (\Cref{prop:chow Ctilde2}). Our strategy resembles the one previously used for computing $\ch(\widetilde{\mathscr C}_{1,1})$.
Notice that a stable $A_{2}$-curve of genus $2$ cannot have more than one separating node; furthermore, a small deformation of a cusp cannot be a separating node. Hence the locus of separating nodes is a closed subset $\widetilde{\Theta}_{2} \subseteq\widetilde{\mathscr C}_{2}$, which is a connected component of the singular locus $\widetilde{\mathscr C}_{2}^{\rm sing} \subseteq \widetilde{\mathscr C}_{2}$ of the map $\widetilde{\mathscr C}_2\to\widetilde{\mathscr M}_2$, with the usual scheme structure given by the first Fitting ideal of $\Omega_{\widetilde{\mathscr C}_{2}/\widetilde{\mathscr M}_{2}}$. With this structure $\widetilde{\Theta}_{2}$ is a closed smooth connected substack of $\widetilde{\mathscr C}_{2}$.
The composite $\widetilde{\Theta}_{2} \subseteq \widetilde{\mathscr C}_{2} \arr \widetilde{\mathscr M}_{2}$ is proper, representable, unramified, and injective on geometric points, hence it is a closed embedding. We will denote by $\widetilde{\Delta}_{1} \subseteq \widetilde{\mathscr M}_{2}$ its image; it is the closure in $\widetilde{\mathscr M}_{2}$ of the divisor $\Delta_{1} \subseteq\overline{\cM}_{2}$.
We will denote by $\widetilde{\Theta}_{1} \subseteq \widetilde{\mathscr C}_{2}$ the inverse image of $\widetilde{\Delta}_{1}$; this is an integral divisor, which is smooth outside of $\widetilde{\Theta}_{2} \subseteq \widetilde{\Theta}_{1}$.
We will use the following cycle classes.
\begin{enumerate1}
\item $\lambda_{1} \in \ch[1](\widetilde{\mathscr M}_{2})$ and $\lambda_{2} \in \ch[2](\widetilde{\mathscr M}_{2})$ are, as usual, the Chern classes of the Hodge bundle $\widetilde{\HH}_{2}$; we will use the same notation for their pullbacks to $\ch(\widetilde{\mathscr C}_{2})$.
\item $\psi_{1} \in \ch[1](\widetilde{\mathscr C}_{2})$ is the first Chern class of the dualizing sheaf $\omega_{\widetilde{\mathscr C}_{2}/\widetilde{\mathscr M}_{2}}$.
\item $\theta_{1} \in \ch[1](\widetilde{\mathscr C}_{2})$ and $\theta_{2} \in \ch[2](\widetilde{\mathscr C}_{2})$ are the classes of $\widetilde{\Theta}_{1}$ and $\widetilde{\Theta}_{2}$ respectively.
\end{enumerate1}
If $\cX \subseteq \widetilde{\mathscr C}_{2}$ is any smooth substack, we will use the same symbols for the pullbacks of these classes to $\ch(\cX)$.
Then we first compute $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$ and $\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)$: we put together these descriptions using the patching lemma (\Cref{lm:Atiyah}) to get the Chow ring of $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$.
We repeat this process once more: we compute $\ch(\widetilde{\Theta}_2)$ and we apply the patching lemma to finally obtain $\ch(\widetilde{\mathscr C}_2)$.
\subsection{The Chow ring of $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1$}\label{c2-d1}
In this Subsection we compute the integral Chow ring of $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1$, using its description as a quotient stack and basic equivariant intersection theory.
Let ${\rm B}_2$ be the Borel subgroup of lower triangular matrices inside $\mathrm{GL}_2$. We denote by $\AA(6)$ the ${\rm B}_2$-representation on binary forms of degree $6$, where the action is defined as
\[A\cdot h(x,z)=\det(A)^2 h(A^{-1}(x,z)). \]
Let $\widetilde{\AA}(6)$ be the affine scheme of pairs $(h(x,z),s)$ such that $h(0,1)=s^2$. Then $\widetilde{\mathbb{A}}(6)$ carries a natural $\rm B_2$-action, given by
\[A\cdot (h(x,z),s):=(\det(A)^2 h(A^{-1}(x,z)), \det(A)a_{22}^{-3}\cdot s). \]
Observe that the natural map $\pi_6\colon\widetilde{\AA}(6)\to\AA(6)$ is a $\rm B_2$-equivariant ramified double cover.
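As a sanity check (under the convention, implicit in the formulas above, that $A^{-1}(x,z)$ denotes substitution of the column vector $(x,z)$), one can verify directly that the action preserves the condition $h(0,1)=s^2$: writing $A=\bigl(\begin{smallmatrix} a_{11} & 0 \\ a_{21} & a_{22}\end{smallmatrix}\bigr)$, we have $A^{-1}(0,1)=(0,a_{11}/\det(A))$, hence
\[ \det(A)^2\,h(A^{-1}(0,1))=\det(A)^2\Bigl(\tfrac{a_{11}}{\det(A)}\Bigr)^{6}h(0,1)=a_{11}^2a_{22}^{-4}\,s^2=\bigl(\det(A)\,a_{22}^{-3}\,s\bigr)^2. \]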
Let $D_4$ be the closed subscheme of $\AA(6)$ parametrizing homogeneous binary forms of degree $6$ with a root of multiplicity greater than $3$ in some field extension, and call $\widetilde{D}_4$ the preimage of $D_4$ in $\widetilde{\AA}(6)$. Let us set $U:=\widetilde{\AA}(6) \smallsetminus \widetilde{D}_4$; this is a $\rB_{2}$-invariant open subscheme of $\widetilde{\AA}(6)$.
Consider the two characters $\rB_{2} \arr \GG_{\rmm}$ defined by $(a_{ij}) \mapsto a_{11}$ and $(a_{ij}) \mapsto a_{22}$; let us call $\xi$ and $\eta \in \ch[1](\cB\rB_{2})$ respectively the first Chern classes of these two characters, and by the same symbols their pullbacks to $[U/\rB_{2}]$.
Furthermore, if $V$ is the tautological rank two representation of $\rB_{2} \subseteq \mathrm{GL}_{2}$, and $c_{1} \in \ch[1](\cB\rB_{2})$ and $c_{2} \in \ch[2](\cB\rB_{2})$ its Chern classes, then we have $\xi+\eta = c_{1}$ and $\xi\eta = c_{2}$.
\begin{proposition}\label{prop:open-strata}
We have an equivalence of stacks $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1 \simeq [U/{\rm B}_2]$. Furthermore, under this equivalence the class $\eta \in \ch[1]([U/{\rm B}_2])$ corresponds to $\psi_{1} \in \ch[1](\widetilde{\mathscr C}_{2})$, while $c_{i} \in \ch[i]([U/{\rm B}_2])$ corresponds to $\lambda_{i} \in \ch(\widetilde{\mathscr C}_{2})$.
\end{proposition}
The proof of the Proposition above is essentially contained in \cite{Per} by the second author: there only smooth curves are considered, but the arguments are virtually identical.
As explained in \cite{Per}*{Remark 3.1}, the ${\rm B}_2$-equivariant Chow ring of a scheme $X$ is isomorphic to its $T_2$-equivariant one, where $T_2 \subset {\rm B}_2$ is the maximal torus. Therefore, what we need is an explicit presentation of $\ch_{T_2}(U)$.
\begin{remark}
It is easy to check that
$$ \widetilde{\mathscr M}_2\smallsetminus \widetilde{\Delta}_1 \simeq [\AA(6)\smallsetminus D_4/\mathrm{GL}_2]. $$
The morphism $U = \widetilde{\AA}(6) \smallsetminus \widetilde{D}_4 \rightarrow \AA(6)\smallsetminus D_4$ is $\rB_{2}$-equivariant and the induced map
$$ \pi\colon\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1 \simeq [\widetilde{\AA}(6) \smallsetminus \widetilde{D}_4/{\rm B}_2] \longrightarrow [\AA(6) \smallsetminus D_4/\mathrm{GL}_2] \simeq \widetilde{\mathscr M}_2\smallsetminus \widetilde{\Delta}_1$$
is in fact the restriction of the morphism $\widetilde{\mathscr C}_2 \rightarrow \widetilde{\mathscr M}_2$.
\end{remark}
The localization sequence applied to the closed subscheme $\widetilde{D}_4$ shows that all we need to do is to compute the ideal $\widetilde{I}$ given by the image of
\[{\rm{CH}}^{\ast -3}_{T_2}(\widetilde{D}_4) \longrightarrow \ch_{T_2}(\widetilde{\AA}(6)), \]
the pushforward along the equivariant closed embedding $\widetilde{D}_4\hookrightarrow \widetilde{\AA}(6)$.
If we denote by $I$ the ideal generated by the image of the pushforward along the closed embedding $D_4 \hookrightarrow \AA(6)$, clearly we have that $\pi_6^*(I)\subset \widetilde{I}$, where $\pi_6\colon\widetilde{\AA}(6)\to\AA(6)$ is the aforementioned ramified double cover.
We will prove that the ideal generated by $\pi_6^*(I)$ is in fact equal to $\widetilde{I}$. This works essentially in the same way as in \cite{Per}*{Theorem 5.7}, where, instead of $\widetilde{\AA}(6)\smallsetminus \widetilde{D}_4$, the author considers the complement of the preimage of the discriminant locus.
We can define a $\GG_{\rmm}$-torsor
$$ \widetilde{\AA}(6)\smallsetminus 0 \longrightarrow \PP(2^6,1) $$
where the $\GG_{\rmm}$-action on $\widetilde{\AA}(6)$ is described as follows:
$$ \lambda\cdot (h,s)=(\lambda^2 h,\lambda s)$$
for every point $(h,s)$ in $\widetilde{\AA}(6)$ and for every $\lambda \in \GG_{\rmm}$. (Here, as in what follows, we denote by $\PP(2^6,1)$ the weighted projective stack, that is, the stack quotient of $\AA^{7}\smallsetminus\{0\}$ by the action of $\GG_{\rmm}$ with weights $(2^{6},1)$). The $T_2$-action descends to an action on $\PP(2^6,1)$. We consider the following cartesian diagram
$$
\begin{tikzcd}
\widetilde{\Delta}_4 \arrow[rr, hook] \arrow[d, "\phi_6\vert_{\widetilde{\Delta}_4}"] & & {\PP(2^6,1)} \arrow[d, "\phi_6"] \\
\Delta_4 \arrow[rr, hook] & & \PP^6
\end{tikzcd}
$$
which is the projectivization of
$$
\begin{tikzcd}
\widetilde{D}_4 \arrow[rr, hook] \arrow[d, "\pi_6\vert_{\widetilde{D}_4}"] & & \widetilde{\AA}(6) \arrow[d, "\pi_6"] \\
D_4 \arrow[rr, hook] & & \AA(6)
\end{tikzcd}
$$
The action of $\GG_{\rmm}$ on the top row is the one described above, while the action on the bottom row is the usual one.
Recall that if $X \rightarrow Y$ is a $\GG_{\rmm}$-torsor with $Y$ a quotient stack and $X$ the complement of the zero section in a line bundle $\cL$ on $Y$, then
$$ \ch(X) \simeq \ch(Y)/(c_1(\cL)).$$
Let $I'$ be the image of the pushforward along the closed embedding $\Delta_4\hookrightarrow \PP^6$ and let $\widetilde{I}'$ be the ideal generated by the image of the pushforward along the closed embedding $\widetilde{\Delta}_4\hookrightarrow \PP(2^6,1)$.
Then from the formula for the Chow ring of $\GG_{\rmm}$-torsors we get that if $\widetilde{I}'$ is equal to the ideal generated by $\phi_6^*(I')$ then the same statement holds in the non-projective case, i.e. $\widetilde{I}$ is equal to the ideal generated by $\pi_6^*(I)$.
A Chow envelope of $\Delta_4$ is given by
$$\rho: \PP^1 \times \PP^2 \longrightarrow \PP^6$$
where $\rho(f,g)=f^4g$ (as usual $\PP^n$ must be regarded as the projective space of binary forms of degree $n$). The morphism $\rho$ is a Chow envelope because of the following fact: if $K$ is a field and $f \in \Delta_4(K)$, then $f$ has a root of multiplicity at least $4$, and this root must be defined over $K$ because $f$ has degree $6$.
To find a Chow envelope of $\widetilde{\Delta}_4\subset \PP(2^6,1)$, one could just consider the cartesian diagram
$$
\begin{tikzcd}
\cP \arrow[rr, "g"] \arrow[d, "q"] & & {\PP(2^6,1)} \arrow[d, "\phi_6"] \\
\PP^1 \times \PP^2 \arrow[rr, "\rho"] & & \PP^6.
\end{tikzcd}
$$
The morphism $g$ is in fact a Chow envelope. Nevertheless, we do not have a convenient explicit description of $\cP$. To construct a Chow envelope of $\widetilde{\Delta}_4 \subset \PP(2^6,1)$, we define the morphism
$$ a: \PP^1 \times \PP(2^2,1) \longrightarrow \PP(2^6,1)$$
as $a([f],[g,s])=[(f^4g,f(0,1)^2s)]$ for every $[f]\in \PP^1$ and $[g,s] \in \PP(2^2,1).$
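Under the identification of $\PP(2^2,1)$ with classes of pairs $(g,s)$, where $g$ is a binary quadratic form with $g(0,1)=s^2$ (the analogue for degree $2$ of the description of $\PP(2^6,1)$ via $\widetilde{\AA}(6)$), it is straightforward to check that $a$ is well defined: first, $(f^4g)(0,1)=f(0,1)^4g(0,1)=(f(0,1)^2s)^2$, so the image satisfies the defining condition; moreover
\[ \bigl((\lambda f)^4g,\,(\lambda f)(0,1)^2s\bigr)=\bigl(\lambda^4f^4g,\,\lambda^2f(0,1)^2s\bigr),\qquad \bigl(f^4(\mu^2 g),\,f(0,1)^2\mu s\bigr)=\bigl(\mu^2f^4g,\,\mu f(0,1)^2s\bigr), \]
so rescaling the arguments changes the image by the weighted $\GG_{\rmm}$-action on $\PP(2^6,1)$.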
A straightforward computation gives us the following diagram:
$$
\begin{tikzcd}
{\PP^1 \times \PP(2^2,1)} \arrow[rd, "\alpha"] \arrow[rdd, "\id \times \phi_2"'] \arrow[rrrd, "a"] & & & \\
& \cP \arrow[rr, "g"] \arrow[d, "q"] & & {\PP(2^6,1)} \arrow[d, "\phi_6"] \\
& \PP^1 \times \PP^2 \arrow[rr, "\rho"] & & \PP^6
\end{tikzcd}
$$
where the square is cartesian (here $\phi_n\colon \PP(2^n,1)\rightarrow \PP^n$ sends $(a_0:\cdots:a_{n-1}:a_n)$ to $(a_0:\cdots:a_{n-1}:a_n^2)$). One would hope that $\alpha$ is a Chow envelope, but this is not the case. In fact, we also have to define the morphism
$b: \PP(2^3) \longrightarrow \PP(2^6,1)$
with the formula $b([g])=[(x_0^4g,0)]$ for every $[g]\in \PP(2^3)$. This induces another commutative diagram
$$
\begin{tikzcd}
\PP(2^3) \arrow[rrrrd, "b"] \arrow[rdd, "\varphi_3"'] \arrow[rd, "\beta"] & & & & \\
& \cR \arrow[d, "p"] \arrow[r, hook] & \cP \arrow[rr, "g"] \arrow[d, "q"] & & {\PP(2^6,1)} \arrow[d, "\phi_6"] \\
& \PP^2 \arrow[r, hook] & \PP^1 \times \PP^2 \arrow[rr, "\rho"] & & \PP^6
\end{tikzcd}
$$
where the morphism $\varphi_3\colon\PP(2^3)\rightarrow \PP^2$ is the natural $\mu_2$-gerbe and the inclusion $\PP^2 \hookrightarrow \PP^1 \times \PP^2$ is induced by the rational point $[0:1] \in \PP^1$.
\begin{lemma}
In the setting above, we get that $a$ and $b$ are proper representable morphisms of algebraic stacks and the morphism $\alpha \coprod \beta:(\PP^1 \times \PP(2^2,1)) \coprod \PP(2^3) \longrightarrow \cP$ is a Chow envelope.
\end{lemma}
\begin{proof}
Properness is clear from the properness of the stacks involved. A straightforward computation shows that $a$ is not only representable (being faithful on every geometric point) but is in fact fully faithful when restricted to $U_0 \times \PP(2^2,1)$, where $U_0$ is the affine open subset of $\PP^1$ where $x_0\neq 0$. In the same way one can prove that $b$ is fully faithful.
Let us fix a field extension $K/k$: we want to prove that $(\alpha \coprod \beta) (K)$ is essentially surjective. Let $([f],[g],[h,s]) \in \cP(K)$, i.e. $h=f^4g$ for a suitable choice of representatives, and suppose first that $f(0,1)\neq 0$. Thus $f(0,1)^4g(0,1)=s^2$, and therefore $\alpha(K)([f],[g,s/f(0,1)^2])=([f],[g],[h,s])$. Suppose instead that $f(0,1)=0$, i.e. $h=x_0^4g$ and $s=0$: then clearly $\beta([g])=([f],[g],[h,0])$, as desired.
\end{proof}
\begin{remark}
The previous proof shows us a bit more: it gives us that $\alpha(K)$ is an equivalence on the open $\cP \smallsetminus \cR$ while $\beta(K)$ is actually an equivalence between $\PP(2^3)(K)$ and $\cR(K)$ for every $K/k$ field extension. This gives a description of $\cP_{\rm red}$ as the fibered product of the two stacks $\PP^1 \times \PP(2^2,1)$ and $\PP(2^3)$ over $\PP(2^2,1)$.
\end{remark}
The previous remark implies $\alpha_*(1)=1$ and $\beta_*(1)=1$ at the level of Chow groups.
\begin{proposition}\label{prop:pullback relations}
The ideal generated by the image of the group homomorphism $a_*\oplus b_*$ inside $\ch_{T_2}(\PP(2^6,1))$ is equal to the ideal generated by $\phi_6^*(\im {\rho_*})$.
\end{proposition}
\begin{proof}
First of all, using the projection formula and the explicit description of the Chow rings involved, one can prove that the ideal generated by $a_*\oplus b_*$ is in fact generated by the three cycles: $a_*(1),a_*(c_1^{T_2}(\cO_{\PP^1}(1)\boxtimes \cO_{\PP(2^2,1)})),b_*(1)$. For a more detailed discussion about this see \cite{Per}*{Section 5}.
Consider now the following diagram:
$$
\begin{tikzcd}
{\PP^1 \times \PP(2^2,1)} \arrow[rd, "\alpha"] \arrow[rdd, "\id \times \phi_2"'] \arrow[rrrd, "a"] & & & \\
& \cP \arrow[rr, "g"] \arrow[d, "q"] & & {\PP(2^6,1)} \arrow[d, "\phi_6"] \\
& \PP^1 \times \PP^2 \arrow[rr, "\rho"] & & \PP^6;
\end{tikzcd}
$$
therefore $a_*(1)=g_*\alpha_*(1)=g_*(1)=g_*q^*(1)=\phi_6^*\rho_*(1)$ because the square diagram is cartesian by construction and $\alpha_*(1)=1$. In the same way we get the result for $b_*(1)$. As far as the last generator is concerned, we get
$$ a_*(c_1^{T_2}(\cO_{\PP^1}(1)\boxtimes \cO_{\PP(2^2,1)}))= g_*\alpha_*(\id \times \phi_2)^*(c_1^{T_2}(\cO_{\PP^1}(1) \boxtimes \cO_{\PP^2}))= g_*\alpha_*c_1^{T_2}(\alpha^*\cL)$$
where $\cL$ is the line bundle $q^*(\cO_{\PP^1}(1) \boxtimes \cO_{\PP^2})$. Using the projection formula we get $\alpha_*c_1^{T_2}(\alpha^*\cL)=c_1^{T_2}(\cL)$ and therefore
$$ a_*(c_1^{T_2}(\cO_{\PP^1}(1)\boxtimes \cO_{\PP(2^2,1)}))= g_*q^*(c_1^{T_2}(\cO_{\PP^1}(1) \boxtimes \cO_{\PP^2}))=\phi_6^*\rho_*(c_1^{T_2}(\cO_{\PP^1}(1)\boxtimes \cO_{\PP^2}))$$
and this concludes the proof.
\end{proof}
As a Corollary, we finally get the description of $\ch(\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1)$.
\begin{proposition}\label{prop:chow C minus Theta1}
We have
\[ \ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)\simeq \ZZ[\psi_1,\lambda_1,\lambda_2]/(\lambda_2-\psi_1(\lambda_1-\psi_1),\lambda_1(24\lambda_1^2-48\lambda_2),20\lambda_1^2\lambda_2). \]
\end{proposition}
\begin{proof}
The relation $\lambda_2=\psi_1(\lambda_1-\psi_1)$ follows from \Cref{prop:open-strata} and the computations in the paragraph above it.
From \Cref{prop:pullback relations} we deduce that the image of
\[ {\rm{CH}}^{*-3}_{\rm B_2}(\widetilde{D}_4) \longrightarrow \ch_{\rm B_2}(\widetilde{\AA}(6)) \]
is equal to the ideal generated by the pullback of the image of
\[ {\rm{CH}}^{*-3}_{\mathrm{GL}_2}(D_4) \longrightarrow \ch_{\mathrm{GL}_2}(\AA(6)). \]
The latter can be determined with a standard argument of equivariant intersection theory. Consider the projectivization $\PP^6$ of $\AA(6)$. If we compute the generators of the image of
\begin{equation}\label{eq:image} {\rm{CH}}^{*-3}_{\mathrm{GL}_2}(\Delta_4) \longrightarrow \ch_{\mathrm{GL}_2}(\PP^6) \end{equation}
then we are done, because the ideal generated by the pullback to $\ch_{\mathrm{GL}_2}(\AA(6))$ of this image is easy to compute: we only have to substitute the hyperplane class $h$ with $2\lambda_1$, as in \cite{VisM2}*{pg. 638}.
The image of (\ref{eq:image}) is equal to the image of
\[ {\rm{CH}}_{\mathrm{GL}_2}^{*-3}(\PP^1\times\PP^2)\longrightarrow \ch_{\mathrm{GL}_2}(\PP^6),\]
the pushforward along the proper map $(f,g)\mapsto f^4g$. The latter can be easily computed, either as in \cite{VisM2}*{pg. 640-643} or via localization formulas. This concludes the proof.
\end{proof}
\subsection{The Chow ring of $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$}
In this Subsection our main goal is to compute the integral Chow ring of $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$ (\Cref{prop:chow C minus ThTilde2}) by applying the so-called patching lemma. For this, we are only missing one ingredient, namely an explicit description of the integral Chow ring of $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$.
\begin{proposition}\label{prop:chow ThTilde1 minus ThTilde2}
We have
\[ \ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)\simeq \ZZ[\psi_1,\lambda_1,\lambda_2]/(\lambda_2-\psi_1(\lambda_1-\psi_1)).\]
\end{proposition}
\begin{proof}
The objects of the stack $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ are of the form $(C\to S,\sigma)$, where:
\begin{itemize}
\item $C\to S$ is a family of stable $A_2$-curves of genus $2$ whose geometric fibers are \'{e}tale-locally obtained by gluing two elliptic stable $A_2$-curves at their marked points;
\item the image of the section $\sigma:S\to C$ does not contain the separating node in any of the geometric fibers of $C\to S$.
\end{itemize}
We can define a morphism
\[ (\widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp})\times\widetilde{\mathscr M}_{1,1}\arr \widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2 \]
by sending a pair $((C\to S,p,\sigma),(C'\to S,p'))$ to the family of curves determined by gluing $C$ and $C'$ along $p$ and $p'$, marked with $\sigma$. The morphism misses $\widetilde{\Theta}_2$ because $\widetilde{\mathscr C}_{1,1}\smallsetminus \im\bfp$ parametrizes curves in $\widetilde{\mathscr M}_{1,1}$ together with an extra section which does not coincide with the previous one. This is easily checked to be an isomorphism, using Lemma~\ref{lem:isom}.
\Cref{lm:Ctilde11 minus sigma affine bundle} implies that $(\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\times\widetilde{\mathscr M}_{1,1}$ is an affine bundle over $[V_{-2,-3}/\GG_{\rmm}]\times\widetilde{\mathscr M}_{1,1}$. As $\widetilde{\mathscr M}_{1,1}\simeq [V_{-4,-6}/\GG_{\rmm}]$, we deduce
\[ \ch((\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\times\widetilde{\mathscr M}_{1,1})\simeq \ch_{\GG_{\rmm}\times\GG_{\rmm}}(V_{-2,-3}\times V_{-4,-6}). \]
This shows that the Chow ring of $(\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\times\widetilde{\mathscr M}_{1,1}$ is a polynomial ring on two generators, the first coming from $\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp}$ and the second from $\widetilde{\mathscr M}_{1,1}$.
A first generator is then the pullback of the first Chern class of the invertible sheaf on $\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp}$ defined by
\[ (C\to S,p,\sigma)\longmapsto \sigma^*\omega_{C/S}. \]
The pullback of the line bundle defined above to $(\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\times\widetilde{\mathscr M}_{1,1}\simeq \widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ is isomorphic to the cotangent bundle of the universal section. This implies that a first generator for $\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)$ is $\psi_1$, the first Chern class of the cotangent bundle of the universal section.
A second generator is the pullback of the first Chern class of the Hodge line bundle of $\widetilde{\mathscr M}_{1,1}$, i.e.
\[ (\pi':C'\to S,p')\longmapsto \pi'_*\omega_{C'/S}. \]
After identifying $(\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\times\widetilde{\mathscr M}_{1,1}$ with $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$, the pullback of the line bundle above gets identified with the line bundle on $\widetilde{\Theta}_{1}\smallsetminus\widetilde{\Theta}_2$ defined by
\[ \cL:(\pi:C\to S,\sigma)\longmapsto \pi_*(\omega_{C/S}(-\im{\sigma})). \]
Indeed, we have the restriction homomorphism
\[ \omega_{C/S}(-\im{\sigma})\longrightarrow \omega_{C/S}(-\im{\sigma})|_{C'}\simeq i_{C'*}\omega_{C'/S}(p)\simeq i_{C'*}\omega_{C'/S} \]
that when pushed forward along $\pi$ gives us the morphism $\cL_S\to\pi'_*\omega_{C'/S}$: for this to be an isomorphism, it is enough to check that it induces an isomorphism of the fibers, i.e. that for every geometric point $s\in S$ we have $H^0(C_{s},\omega_{C_s}(-\sigma(s)))\simeq H^0(C'_{s},\omega_{C'_s})$. Any element in the first group vanishes on the marked component because it vanishes on $\sigma(s)$, from which it follows that the restriction map on the fibers is both injective and surjective, hence we get the claimed isomorphism $\cL_S\simeq \pi'_*\omega_{C'/S}$.
Call $s$ the first Chern class of the line bundle above. Observe that for every object $(C\to S,\sigma)$ in $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ we have a short exact sequence
\[ 0 \arr \omega_{C/S}(-\im{\sigma}) \arr \omega_{C/S} \arr \omega_{C/S}|_{\im{\sigma}} \arr 0. \]
A straightforward application of the Cohomology and Base Change Theorem shows that the following sequence, obtained by pushing forward the sequence above along $\pi$, is actually a short exact sequence of locally free sheaves:
\[ 0 \arr \pi_*(\omega_{C/S}(-\im{\sigma})) \arr \pi_*\omega_{C/S} \arr \sigma^*\omega_{C/S} \arr 0. \]
Observe that the vector bundle in the middle is the Hodge vector bundle, the one on the right is the cotangent bundle to the universal section and the one on the left is the line bundle $\cL$ that we just defined. This implies that
\[ s+\psi_1=\lambda_1,\quad s\psi_1 =\lambda_2. \]
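Indeed, both relations follow at once from the Whitney sum formula applied to the short exact sequence above: the total Chern class of the Hodge bundle is the product of the total Chern classes of the line subbundle and of the line quotient, that is
\[ 1+\lambda_1+\lambda_2=(1+s)(1+\psi_1)=1+(s+\psi_1)+s\psi_1. \]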
Putting all together, we deduce that
\[ \ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)\simeq \ZZ[\lambda_1,\psi_1] \]
and that $\lambda_2=\psi_1(\lambda_1-\psi_1)$.
\end{proof}
Recall that $\widetilde{\Delta}_1$ denotes the divisor inside $\widetilde{\mathscr M}_2$ parametrizing curves with a separating node.
\begin{lemma}\label{lem:delta1}
We have
\[ \ch(\widetilde{\Delta}_1)\simeq \ZZ[\xi_1,\lambda_1,\lambda_2]/(2\xi_1,\xi_1(\lambda_1-\xi_1)). \]
\end{lemma}
\begin{proof}
The objects of the stack $\widetilde{\Delta}_1$ are curves $C\arr S$ obtained by gluing two elliptic stable $A_2$-curves $(C'\arr S,p')$ and $(C''\arr S,p'')$ at their marked points. We have an isomorphism
\[ \widetilde{\Delta}_1 \simeq [(\widetilde{\mathscr M}_{1,1}\times \widetilde{\mathscr M}_{1,1})/\bC_2], \]
where $\bC_2$ acts by sending $((C'\arr S,p'),(C''\arr S,p''))$ to $((C''\arr S,p''),(C'\arr S,p'))$ (here the quotient stack is in the sense of \cite{Rom}).
If $V_{-4,-6}$ denotes the rank two representation of $\GG_{\rmm}$ of weights $-4$ and $-6$, then from \cite{EG}*{5.4} we have
\[ [(\widetilde{\mathscr M}_{1,1}\times \widetilde{\mathscr M}_{1,1})/\bC_2] \simeq [V_{-4,-6}\times V_{-4,-6}/\GG_{\rmm}^2\rtimes\bC_2]. \]
The Chow ring of the stack on the left is isomorphic to $\ch(\cB(\GG_{\rmm}^2\rtimes\bC_2))$, which can be extracted from the proof of \cite{DLV}*{Proposition 3.1}:
\[ \ch(\cB(\GG_{\rmm}^2\rtimes\bC_2)) \simeq \ZZ[\xi_1,c_1,c_2]/(2\xi_1,\xi_1c_1), \]
where $\xi_1$ is the first Chern class of the sign representation of $\bC_2$ and $c_1$ and $c_2$ are the Chern classes of the standard rank two representation of $\GG_{\rmm}^2\rtimes\bC_2$, i.e. $V_1\times V_1$, on which $\bC_2$ acts by switching the two factors.
In the proof of \cite{DLV}*{Proposition 3.1} it is also shown that, after identifying $\widetilde{\Delta}_1$ with $[V_{-4,-6}\times V_{-4,-6}/\GG_{\rmm}^2\rtimes\bC_2]$, we have
\[ \lambda_1=\xi_1-c_1, \quad\lambda_2=c_2. \]
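Explicitly, substituting $c_1=\xi_1-\lambda_1$ and $c_2=\lambda_2$ we may take $\xi_1$, $\lambda_1$ and $\lambda_2$ as polynomial generators, and modulo $2\xi_1$ the relation $\xi_1c_1$ becomes
\[ \xi_1c_1=\xi_1(\xi_1-\lambda_1)=-\xi_1(\lambda_1-\xi_1)\equiv \xi_1(\lambda_1-\xi_1) \pmod{(2\xi_1)}, \]
so the ideals $(2\xi_1,\xi_1c_1)$ and $(2\xi_1,\xi_1(\lambda_1-\xi_1))$ coincide.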
Putting all together, we obtain the claimed expression for the Chow ring of $\widetilde{\Delta}_1$.
\end{proof}
\begin{proposition}\label{prop:chow C minus ThTilde2}
We have
\[ \ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)\simeq \ZZ[\psi_1,\vartheta_1,\lambda_1,\lambda_2]/I\]
where $I$ is the ideal of relations generated by
\begin{align*}
\lambda_2-\psi_1(\lambda_1-\psi_1),\\
(\lambda_1+\vartheta_1)(24\lambda_1^2-48\lambda_2),\\
20(\lambda_1+\vartheta_1)\lambda_1\lambda_2,\\
\vartheta_1(\lambda_1+\vartheta_1).
\end{align*}
\end{proposition}
\begin{proof}
The proof relies on \Cref{lm:Atiyah}. Indeed, we have already computed $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$ in \Cref{prop:chow C minus Theta1} and $\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)$ in \Cref{prop:chow ThTilde1 minus ThTilde2}. Moreover, the normal bundle of $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ in $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$ is equal to the pullback of the normal bundle of $\widetilde{\Delta}_1$ in $\widetilde{\mathscr M}_2$. The first Chern class of the latter is equal to $\xi_1-\lambda_1$ (see for instance \cite{Lars}*{Lemma 6.1}), and the pullback morphism
\[ \ch(\widetilde{\Delta}_1)\arr\ch(\widetilde{\Theta}_1)\arr\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)\simeq \ZZ[\lambda_1,\psi_1] \]
sends $\lambda_1$ to $\lambda_1$ and $\xi_1$ to $0$. Therefore, the first Chern class of the normal bundle of $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ is $-\lambda_1$ and in particular it is not a zero divisor.
We can apply \Cref{lm:Atiyah}, which tells us that the following diagram of rings is cartesian:
\begin{equation}\label{eq:atiyah diagram} \xymatrix{
\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2) \ar[r]^{j^*} \ar[d]^{i^*} & \ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1) \ar[d] \\
\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2) \ar[r] & \ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)/(\lambda_1).
}\end{equation}
We have seen in \Cref{prop:chow C minus Theta1} that $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$ is generated as a ring by $\psi_1$, the first Chern class of the cotangent bundle to the universal section, and $\lambda_1$, the first Chern class of the Hodge line bundle. Moreover, the Chow ring of $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ is generated by $\psi_1$ and $\lambda_1$, intended as the first Chern classes of the restrictions of the aforementioned vector bundles. This implies that $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)$ is generated by $\lambda_1$, $\psi_1$ and $\vartheta_1$, the fundamental class of $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ in $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$.
Let $f(\lambda_1,\psi_1,\vartheta_1)$ be a relation in $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)$. As it is sent to zero by $j^*$ in (\ref{eq:atiyah diagram}), it must be of the form $f'(\lambda_1,\psi_1)+\vartheta_1g(\lambda_1,\psi_1,\vartheta_1)$, where $f'$ belongs to the ideal of relations of $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$. Therefore we have
\[ 0=i^*f(\lambda_1,\psi_1,\vartheta_1)=f'(\lambda_1,\psi_1) - \lambda_1g(\lambda_1,\psi_1,-\lambda_1). \]
As $\lambda_1$ is not a zero divisor, we can reconstruct $g(\lambda_1,\psi_1,-\lambda_1)$ from $f'$ by dividing the latter by $\lambda_1$. This implies that
\[ f(\lambda_1,\psi_1,\vartheta_1) = f'(\lambda_1,\psi_1) + \vartheta_1\frac{f'(\lambda_1,\psi_1)}{\lambda_1} + \vartheta_1 f''(\lambda_1,\psi_1,\vartheta_1),\]
where $f''(\lambda_1,\psi_1,\vartheta_1)$ belongs to the kernel of $i^*$.
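To illustrate the mechanism, take for instance $f'=\lambda_1(24\lambda_1^2-48\lambda_2)$, one of the generators of the ideal of relations of $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$ from \Cref{prop:chow C minus Theta1}: then $f'/\lambda_1=24\lambda_1^2-48\lambda_2$, and the corresponding relation in $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)$ is
\[ f'+\vartheta_1\frac{f'}{\lambda_1}=(\lambda_1+\vartheta_1)(24\lambda_1^2-48\lambda_2), \]
up to elements of $\vartheta_1\ker(i^*)$.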
Observe now that, after substituting $\lambda_2$ with $\psi_1(\lambda_1-\psi_1)$, the generators of the ideal of relations of $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_1)$ are
\[ \quad \lambda_1(24\lambda_1^2-48\psi_1(\lambda_1-\psi_1)),\quad 20\lambda_1^2\psi_1(\lambda_1-\psi_1), \]
hence the generators of the ideal of relations of $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)$ are
\[ \quad (\lambda_1+\vartheta_1)(24\lambda_1^2-48\psi_1(\lambda_1-\psi_1)),\quad 20(\lambda_1+\vartheta_1)\lambda_1\psi_1(\lambda_1-\psi_1) \]
together with the generators of $\ker(i^*)$ times $\vartheta_1$. An element $f''(\lambda_1,\psi_1,\vartheta_1)$ belongs to $\ker(i^*)$ if and only if $f''(\lambda_1,\psi_1,-\lambda_1)=0$. This means that $f''$ must be a multiple of $\lambda_1+\vartheta_1$.
Finally, as the exact sequence
\[ 0 \arr \pi_*(\omega_{C/S}(-\im{\sigma})) \arr \pi_*\omega_{C/S} \arr \sigma^*\omega_{C/S} \arr 0 \]
still holds on $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$, also the relation $\lambda_2=\psi_1(\lambda_1-\psi_1)$ holds true, thus concluding the computation.
\end{proof}
\subsection{The Chow ring of $\widetilde{\mathscr C}_2$}
To apply again the patching lemma and finally obtain a description of $\ch(\widetilde{\mathscr C}_2)$ we need to compute the integral Chow ring of $\widetilde{\Theta}_2$.
\begin{proposition}\label{prop:chow ThTilde2}
We have
\[ \ch(\widetilde{\Theta}_2)\simeq \ZZ[\xi_1,\lambda_1,\lambda_2]/(2\xi_1,\xi_1(\lambda_1-\xi_1)). \]
\end{proposition}
\begin{proof}
The map $\widetilde{\mathscr C}_2\arr\widetilde{\mathscr M}_2$ induces an isomorphism $\widetilde{\Theta}_2\arr\widetilde{\Delta}_1$.
Using \Cref{lem:delta1} we obtain the claimed expression for the Chow ring of $\widetilde{\Theta}_2$.
\end{proof}
\begin{proposition}\label{prop:chow Ctilde2}
We have
\[ \ch(\widetilde{\mathscr C}_2)\simeq \ZZ[\psi_1,\vartheta_1,\lambda_1,\lambda_2,\vartheta_2]/I\]
where $I$ is the ideal generated by
\begin{align*}
\lambda_2-\vartheta_2-\psi_1(\lambda_1-\psi_1),\\
(\lambda_1+\vartheta_1)(24\lambda_1^2-48\lambda_2),\\
20(\lambda_1+\vartheta_1)\lambda_1\lambda_2,\\
\vartheta_1(\lambda_1+\vartheta_1),\\
2\psi_1\vartheta_2,\\
\vartheta_2(\vartheta_1+\lambda_1-\psi_1),\\
\psi_1\vartheta_1\vartheta_2.
\end{align*}
\end{proposition}
\begin{proof}
From the localization exact sequence of Chow groups we see that $\ch(\widetilde{\mathscr C}_2)$ is generated by the preimages of the generators of $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)$ as a ring (which we computed in \Cref{prop:chow C minus ThTilde2}) together with the pushforward of the generators of $\ch(\widetilde{\Theta}_2)$ as a $\ZZ$-module (which we computed in \Cref{prop:chow ThTilde2}). The former set of generators is given by $\lambda_1$, $\lambda_2$, $\psi_1$ and $\vartheta_1$: the first two cycles are the Chern classes of the Hodge bundle, which is actually defined on the whole $\widetilde{\mathscr C}_2$, the fourth one is the fundamental class of $\widetilde{\Theta}_1$.
The extension of $\psi_1$ requires some further explanation.
Recall that on $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$ the cycle $\psi_1$ is the first Chern class of the cotangent bundle of the universal section, that is
\[ (C\to S,\sigma)\longmapsto \sigma^*\omega_{C/S}. \]
The line bundle above can be extended to a line bundle on the whole $\widetilde{\mathscr C}_2$ as follows: given a family $C\to S$ with section $\sigma$, let $C'\to S$ be the blow up of $C$ along the separating node, and let $\sigma'$ be the proper transform of $\sigma$, i.e. the proper transform of its image. Then $\sigma'$ is still a section of $C'\to S$, and $(\sigma')^*\omega_{C'/S}$ is a line bundle on $S$, which obviously coincides with the definition above when $C\to S$ belongs to $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$. The cycle $\psi_1$ is the first Chern class of this line bundle that we just defined.
The second part of the set of generators is formed by the fundamental class $\vartheta_2$ of $\widetilde{\Theta}_2$ together with the pushforward of the powers of $\lambda_1$, $\lambda_2$ and $\xi_1$. The projection formula tells us that
\[ i_*\lambda_1^j\lambda_2^k = \lambda_1^j\lambda_2^k\cdot i_*1 = \lambda_1^j\lambda_2^k\vartheta_2,\quad i_*(\lambda_1^j\lambda_2^k\xi_1^\ell)=\lambda_1^j\lambda_2^k\cdot i_*(\xi_1^\ell) \]
Moreover, thanks to the relation $\xi_1^2=\lambda_1\xi_1$ in $\ch(\widetilde{\Theta}_2)$, we have
\[ i_*(\xi_1^j)=i_*(\xi_1\lambda_1^{j-1})=\lambda_1^{j-1}\cdot i_*(\xi_1). \]
Putting all together we deduce that $\ch(\widetilde{\mathscr C}_2)$ is generated as a ring by
\[ \lambda_1,\lambda_2,\psi_1,\vartheta_1,\vartheta_2,\eta_3:=i_*\xi_1. \]
The next step is to find the relations among the generators. For this, we leverage \Cref{lm:Atiyah} in order to compute the Chow ring of $\widetilde{\mathscr C}_2$. We need to determine the top Chern class of the normal bundle of $\widetilde{\Theta}_2$ in $\widetilde{\mathscr C}_2$. By \cite{DLV}*{Proposition A.4}, we have
\[ \cN_{\widetilde{\Theta}_2}(C'\cup_{\sigma} C''\to S,\sigma)= TC'_{\sigma}\oplus TC''_{\sigma}. \]
Recall that
\[ \widetilde{\Theta}_2 \simeq [(\widetilde{\mathscr M}_{1,1}\times\widetilde{\mathscr M}_{1,1})/\bC_2]\simeq [V_{-4,-6}\times V_{-4,-6}/\GG_{\rmm}^2\rtimes\bC_2]. \]
If $\bfp\colon\widetilde{\mathscr M}_{1,1}\arr\widetilde{\mathscr C}_{1,1}$ denotes the universal section, the normal bundle of $\im{\bfp}$ is
\[ (C\to S,p)\longmapsto TC_p \]
and its total space is isomorphic to $[V_{-4,-6}\times V_1/\GG_{\rmm}]$. We deduce that the total space of $\cN_{\widetilde{\Theta}_2}$ is isomorphic to
\[ [ (V_{-4,-6}\times V_{-4,-6})\times (V_1\times V_1)/\GG_{\rmm}^2\rtimes\bC_2 ], \]
where $\bC_2$ acts on $V_1\times V_1$ by switching the two factors. In other terms, the normal bundle is isomorphic to the pullback from $\cB(\GG_{\rmm}^2\rtimes\bC_2)$ of the rank two vector bundle associated with the standard representation of $\GG_{\rmm}^2\rtimes\bC_2$, i.e. the standard representation of $\mathrm{GL}_2$, regarded as a representation of $\GG_{\rmm}^2\rtimes\bC_2$ via the natural map that sends $\GG_{\rmm}^2$ to the subtorus of diagonal matrices and the generator of $\bC_2$ to the transposition. As already observed in the proof of \Cref{lem:delta1}, the second Chern class of this vector bundle is equal to $\lambda_2$, which is not a zero divisor in $\ch(\widetilde{\Theta}_2)$.
To proceed, we need to know how the pullback morphism $i^*\colon\ch(\widetilde{\mathscr C}_2)\arr\ch(\widetilde{\Theta}_2)$ works. The first formula below follows from the fact that the forgetful morphism $\widetilde{\Theta}_2\to\widetilde{\Delta}_1$ is an isomorphism, the other ones are a straightforward consequence of the projection formula and of the self-intersection formula:
\begin{equation}\label{eq:pullback theta2} i^*\lambda_i=\lambda_i,\quad i^*\eta_3=\lambda_2\xi_1, \quad i^*\vartheta_2=\lambda_2. \end{equation}
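In fact, the last two equalities can also be obtained from the self-intersection formula for the regular closed embedding $i$: since the second Chern class of $\cN_{\widetilde{\Theta}_2}$ is $\lambda_2$, we have
\[ i^*\vartheta_2=i^*i_*(1)=\lambda_2,\qquad i^*\eta_3=i^*i_*(\xi_1)=\lambda_2\xi_1. \]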
For the pullback of $\psi_1$, let $(C\to S,\sigma)$ belong to $\widetilde{\Theta}_2$. Denote $E$ the exceptional divisor of the blow up of $C$ along the separating node, and consider the family of genus $0$ curves $E\to S$: this family is marked by the proper transform of $\sigma$ and by a divisor of degree $2$ given by the intersection points of $E$ with the other components. Therefore, there exists an \'{e}tale double cover $f:S'\to S$ such that $E'=E\times_S S'\simeq \PP^1_{S'}$. In particular $(\sigma')^* \omega_{E'/S'} \simeq \cO_{S'}$, hence $f^*(\psi_1|_S)=0$ and $2(\psi_1|_S)=f_*f^*(\psi_1|_S)=0$. Then we must necessarily have
\begin{equation}\label{eq:pullback psi1} i^*\psi_1=\xi_1.\end{equation}
To compute $i^*\vartheta_1$, observe that we have a commutative diagram
\[\begin{tikzcd}
\widetilde{\Theta}_2 \arrow[hook]{r}{i} \arrow{d}{\rho} & \widetilde{\mathscr C}_2 \arrow{d}{\pi} \\
\widetilde{\Delta}_1 \arrow[hook]{r}{f} & \widetilde{\mathscr M}_2.
\end{tikzcd}\]
This implies that
\begin{align*}
i^*\vartheta_1=i^*\pi^*\delta_1=\rho^*f^*\delta_1=\rho^*(\xi_1-\lambda_1),
\end{align*}
and identifying $\ch(\widetilde{\Theta}_2)$ with $\ch(\widetilde{\Delta}_1)$ via $\rho^*$ as in \Cref{prop:chow ThTilde2}, we deduce that
\[ i^*\vartheta_1=\xi_1-\lambda_1. \]
Observe that
\[\eta_3=i_*\xi_1=i_*i^*\psi_1=\psi_1\vartheta_2\]
hence we do not need $\eta_3$ in order to have generators of $\ch(\widetilde{\mathscr C}_2)$ as a ring.
We can apply \Cref{lm:Atiyah}, which tells us that the following commutative diagram of rings is cartesian:
\begin{equation}
\xymatrix{
\ch(\widetilde{\mathscr C}_2) \ar[r]^{j^*}\ar[d]^{i^*} & \ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2) \ar[d] \\
\ch(\widetilde{\Theta}_2) \ar[r] & \ch(\widetilde{\Theta}_2)/(\lambda_2)}
\end{equation}
Therefore, the ideal of relations is given by those polynomials in the generators that are sent to zero both in $\ch(\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2)$ and in $\ch(\widetilde{\Theta}_2)$. The generators of this ideal are therefore of the form $f=f'+\vartheta_2 g$, where by \Cref{prop:chow C minus ThTilde2} we know that $f'$ is $0$ or one of the following
\begin{align}\label{eq:four rels}
\lambda_2-\psi_1(\lambda_1-\psi_1),\nonumber\\
(\lambda_1+\vartheta_1)(24\lambda_1^2-48\lambda_2), \nonumber\\
20(\lambda_1+\vartheta_1)\lambda_1\lambda_2, \\
\vartheta_1(\lambda_1+\vartheta_1). \nonumber
\end{align}
Moreover we have
\[ 0= i^*f = i^*f' + \lambda_2 (i^*g) \Rightarrow i^*g = \frac{i^*(-f')}{\lambda_2}. \]
If $f'$ is one of the last three elements of \eqref{eq:four rels} or zero, then $i^*f'=0$, hence $g$ belongs to $\ker(i^*)$ (these pullbacks are checked explicitly after the list of relations below). If $f'=\lambda_2-\psi_1(\lambda_1-\psi_1)$, then $i^*g=-1$. Hence the ideal of relations is generated by $\ker(i^*)$ times $\vartheta_2$ and
\begin{align*}
\lambda_2-\psi_1(\lambda_1-\psi_1)-\vartheta_2,\\
(\lambda_1+\vartheta_1)(24\lambda_1^2-48\lambda_2),\\
20(\lambda_1+\vartheta_1)\lambda_1\lambda_2,\\
\vartheta_1(\lambda_1+\vartheta_1).
\end{align*}
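For the reader's convenience, here is a direct check of the pullbacks used above, obtained from (\ref{eq:pullback theta2}), (\ref{eq:pullback psi1}), $i^*\vartheta_1=\xi_1-\lambda_1$ and the relations $2\xi_1=0$ and $\xi_1^2=\xi_1\lambda_1$ in $\ch(\widetilde{\Theta}_2)$:
\begin{align*}
i^*\bigl(\lambda_2-\psi_1(\lambda_1-\psi_1)\bigr)&=\lambda_2-\xi_1(\lambda_1-\xi_1)=\lambda_2,\\
i^*\bigl((\lambda_1+\vartheta_1)(24\lambda_1^2-48\lambda_2)\bigr)&=\xi_1(24\lambda_1^2-48\lambda_2)=0,\\
i^*\bigl(20(\lambda_1+\vartheta_1)\lambda_1\lambda_2\bigr)&=20\,\xi_1\lambda_1\lambda_2=0,\\
i^*\bigl(\vartheta_1(\lambda_1+\vartheta_1)\bigr)&=(\xi_1-\lambda_1)\xi_1=0.
\end{align*}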
The determination of the generators of $\ker(i^*)$ is straightforward. Multiplying them by $\vartheta_2$, we get the following set of cycles:
\[ 2\psi_1\vartheta_2,\quad \vartheta_2(\vartheta_1+\lambda_1-\psi_1),\quad \psi_1\vartheta_1\vartheta_2. \]
Putting all together, we reach the desired conclusion.
\end{proof}
\section{The Chow ring of $\overline{\cM}_{2,1}$, abstract characterization}\label{sec:abstract}
The goal of this Section is to find a minimal set of generators for the kernel of the restriction map
\[\ch(\widetilde{\mathscr C}_2)\longrightarrow\ch(\overline{\cM}_{2,1}).\]
In \Cref{thm:chow Mbar21 abs} we show that, up to relations coming from the integral Chow ring of $\overline{\cM}_2$, this kernel is generated by the fundamental classes of two substacks, together with one additional cycle: the first substack, denoted by $\widetilde{\mathscr C}_2^{\rm c}$, is the cuspidal singularity locus of $\widetilde{\mathscr C}_2$; the second one, denoted by $\widetilde{\mathscr C}_2^{\rm E}$, is the substack of cuspidal curves of genus two with a separating node and a marked rational component, with its reduced scheme structure. The additional cycle also comes from this second locus.
\subsection{Relations from the locus of curves with two cusps}
Notice that an $A_{2}$-curve of genus $2$ cannot have more than two cusps. Let $\widetilde{\mathscr M}_2^{\rm c, \rm c}$ denote the closed substack of $\widetilde{\mathscr M}_2$ parametrizing stable $A_2$-curves of genus two with two cusps, with its reduced scheme structure. In other words, an object of $\widetilde{\mathscr M}_2^{\rm c, \rm c}(S)$ is a family of stable $A_{2}$-curves $C \arr S$ which is trivial \'{e}tale-locally around each cusp. This is easily seen to be a smooth closed substack of codimension $4$.
We set
\[ \cZ^2 := \pi^{-1} \left( \widetilde{\mathscr M}_2^{\rm{c,c}} \right). \]
We think of $\cZ^2$ as the stack of pairs $(C\arr S,\sigma)$ where the geometric fibers of $C\arr S$ have two cusps. The pushforward along the closed embedding of $\cZ^2$ in $\widetilde{\mathscr C}_2$ induces
\[ {\rm CH}_*(\cZ^2) \arr {\rm CH}_*(\widetilde{\mathscr C}_2). \]
Our goal is to prove the following.
\begin{proposition}\label{prop:relations two cusps}
Let $J$ denote the ideal generated by the pullback along $\pi\colon\widetilde{\mathscr C}_2\arr\widetilde{\mathscr M}_2$ of the cuspidal relations. Then the image of
${\rm CH}_*(\cZ^2) \arr {\rm CH}_*(\widetilde{\mathscr C}_2)$ is contained in $J$.
\end{proposition}
Let $\overline{\cM}_0$ be the stack of genus $0$ curves with at most one node. In \cite{EFRat}*{Proposition 6} the authors give the following presentation of $\overline{\cM}_0$: let $E$ be the standard representation of $\mathrm{GL}_3$, and set
\[ V:={\rm Sym}^2E^{\vee}\otimes\det(E). \]
We can regard $V$ as the vector space of quadratic ternary forms, endowed with a twisted $\mathrm{GL}_3$-action. If $U\subset V$ is the invariant open subscheme of quadratic forms of rank $>1$, then
\[ \overline{\cM}_0 \simeq [U/\mathrm{GL}_3]. \]
Let $X\subset V\times \PP(E)$ be the closed subscheme of equation
\[ \alpha_{00}X^2 + \alpha_{01} XY + \alpha_{02} XZ + \alpha_{11} Y^2 + \alpha_{12} YZ + \alpha_{22} Z^2 = 0 . \]
Set $C:= X\cap (U\times\PP(E))$, so that $C\arr U$ is a family of genus $0$ curves with at most one node.
If $\pi\colon\overline{\cC}_0\arr \overline{\cM}_0$ is the universal curve of genus $0$, then we have
\[ \overline{\cC}_0 \simeq [C/\mathrm{GL}_3]. \]
\begin{lemma}\label{lm:chow Cbar0}
Let $h:=c_1(\cO(1))$ be the hyperplane class of $\PP(E)$. Then the pullback homomorphism of $\ch(\cB\mathrm{GL}_3)$-modules
\[ \ch(\cB \mathrm{GL}_3)\langle 1, h, h^2\rangle \simeq \ch(\PP(E)) \arr \ch(\overline{\cC}_0) \]
is surjective.
\end{lemma}
\begin{proof}
Observe that $X \arr \PP(E)$ is an equivariant vector subbundle of $V\times\PP(E) \arr \PP(E)$, because the equation defining $X$ is linear in the coefficients $\alpha_{ij}$. Therefore, the induced pullback at the level of equivariant Chow rings (which are the Chow rings of the quotients) is an isomorphism.
The pullback along open immersions is always surjective, hence so is the pullback along the open immersion $\overline{\cC}_0=[C/\mathrm{GL}_3]\hookrightarrow [X/\mathrm{GL}_3]$. Composing these two homomorphisms, we obtain the claimed surjection: the characterization of $\ch(\PP(E))$ follows from the usual projective bundle formula.
\end{proof}
Recall from \cite{EFRat}*{Lemma 8} that the sheaf $F:=\pi_*(\omega_{\pi}^{\vee})$ is a vector bundle of rank three on $\overline{\cM}_0$. In particular, we can consider the stack $\PP(F)$: its objects are pairs $(C\arr S, D)$ where $C\arr S$ is a family of genus $0$ curves with at most one node, and $D\subset C$ is a divisor such that the induced map $D\arr S$ is finite of degree $2$. Moreover the restriction of $D$ to a singular curve must have degree $1$ on each component: in other words, the restriction of $D$ is supported either on two distinct points lying on two distinct components, or on the node.
Consider the cartesian diagram
\[ \xymatrix{
\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F) \ar[r] \ar[d] & \PP(F) \ar[d] \\
\overline{\cC}_0 \ar[r] & \overline{\cM}_0.
} \]
Then the left vertical map is a projective bundle and we have
\[ \ch(\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F)) \simeq \ch(\overline{\cC}_0)\langle 1,t,t^2 \rangle \]
where $t$ is the pullback of the hyperplane section of $\PP(F)$. Together with \Cref{lm:chow Cbar0} this tells us that $\ch(\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F))$ is generated as a $\ch(\cB\mathrm{GL}_3)$-module by the pullbacks of $1$, $h$, $t$, $h^2$, $ht$, $t^2$, $h^2t$, $ht^2$, $h^2t^2$.
We have proved the following.
\begin{lemma}\label{lm:chow Cbar0 times P(F)}
The Chow ring of $\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F)$ is generated as a $\ch(\PP(F))$-module by the pullbacks of $1$, $h$ and $h^2$.
\end{lemma}
Let $\PP(F)^{\rm \acute{e}t}$ be the open substack of $\PP(F)$ formed by the pairs $(C\arr S, D)$ where $D\arr S$ is \'{e}tale.
The family of genus zero curves $\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F)^{\rm \acute{e}t} \arr \PP(F)^{\rm \acute{e}t}$ has by construction a divisor $\cD$ whose projection onto $\PP(F)^{\rm \acute{e}t}$ is \'{e}tale of degree $2$.
We can pinch this family of curves along the divisor $\cD$: the resulting scheme is a family of stable $A_2$-curves of genus two over $\PP(F)^{\rm \acute{e}t}$, with each fiber having two cusps. This in turn induces a map $f\colon\PP(F)^{\rm \acute{e}t}\arr\widetilde{\mathscr M}_2$ which fits into the following commutative diagram:
\begin{equation}\label{eq:diag envelope Z2}
\begin{tikzcd}
\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F)^{\rm \acute{e}t} \ar[rd] \ar[r, "\cP"] & f^*\widetilde{\mathscr C}_2 \ar[r] \ar[d] & \widetilde{\mathscr C}_2 \ar[d, "\pi"] \\
& \PP(F)^{\rm \acute{e}t} \ar[r, "f"] & \widetilde{\mathscr M}_2
\end{tikzcd}
\end{equation}
Here $\cP$ denotes the pinching morphism, and $f^*\widetilde{\mathscr C}_2$ is the resulting family of curves. The morphism $f$ sends a pair $(C\arr S, D)$ to the family of curves $\widehat{C}\arr S$ obtained by pinching $C$ along the support of $D$.
Observe that by construction the morphism $f$ is an isomorphism onto $\widetilde{\mathscr M}_2^{\rm{c,c}}$, the locus of curves with two cusps, hence $f^*\widetilde{\mathscr C}_2$ is isomorphic to $\cZ^2:=\pi^{-1}(\widetilde{\mathscr M}_2^{\rm{c,c}})$. As the pinching morphism induces an isomorphism at the level of Chow rings, we deduce the following:
\begin{lemma}\label{lm:chow envelope Z2}
The induced morphism
\[ \overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F)^{\rm \acute{e}t} \arr \widetilde{\mathscr C}_2 \]
is a Chow envelope for $\cZ^2=\pi^{-1}(\widetilde{\mathscr M}_2^{\rm{c,c}})$.
\end{lemma}
By definition, the cycle $\psi_1$ in $\ch(\widetilde{\mathscr C}_2)$ is the first Chern class of the line bundle
\[ (C\arr S,\sigma) \longmapsto \sigma^*\omega_{C/S} \]
Hence its pullback to $\overline{\cC}_0 \times_{\overline{\cM}_0} \PP(F)^{\rm \acute{e}t}$ is the first Chern class of the invertible sheaf
\[ (C\arr S,\sigma,D)\longmapsto \sigma^*\omega_{C'/S}, \]
where $C'\arr S$ denotes the partial normalization at the cusps of the stable $A_2$-curve obtained by pinching $C$ along $D$, so that $C'\simeq C$.
It is clear that the line bundle above comes from a line bundle $\cL$ on $\overline{\cC}_0$, which can be characterized as follows: consider the cartesian diagram
\[
\begin{tikzcd}
\overline{\cC}_0 \times_{\overline{\cM}_0} \overline{\cC}_0 \ar[r, "pr_2"] \ar[d, "pr_1"] & \overline{\cC}_0 \ar[d, "\pi"] \\
\overline{\cC}_0 \ar[r, "\pi"] \ar[u, bend left, "\delta"] & \overline{\cM}_0
\end{tikzcd}
\]
where $\delta$ is the diagonal embedding. The pair $(\overline{\cC}_0\times_{\overline{\cM}_0}\overline{\cC}_0\arr \overline{\cC}_0,\delta)$ is the universal marked curve, and the line bundle $\cL$ is
\[ \delta^*pr_2^*\omega_{\pi} = \omega_{\pi}. \]
\Cref{lm:chow Cbar0} tells us that $\ch(\overline{\cC}_0)$ is generated as a $\ch(\overline{\cM}_0)$-module by $1$, $h$ and $h^2$. We claim that $h=-c_1(\cL)$.
For this, recall that there is a commutative diagram
\[
\begin{tikzcd}
\left[ C/\mathrm{GL}_3 \right] \ar[r,hook] & \left[ U\times\PP(E)/\mathrm{GL}_3 \right] \ar[r] \ar[d] & \left[\PP(E)/\mathrm{GL}_3\right] \\
& \left[U/\mathrm{GL}_3\right] \ar[r] & \left[\spec{k}/\mathrm{GL}_3\right]
\end{tikzcd}
\]
and that $h$ is obtained by pulling back the hyperplane section of $[\PP(E)/\mathrm{GL}_3]$, which coincides with the restriction to $[C/\mathrm{GL}_3]$ of the hyperplane section of $[U\times\PP(E)/\mathrm{GL}_3]$, regarded as a projective bundle on $[U/\mathrm{GL}_3]$. In more intrinsic terms, this projective bundle is $\PP(\pi_*\omega_{\pi}^{\vee})$, and the embedding of $\overline{\cC}_0$ in $\PP(\pi_*\omega_{\pi}^{\vee})$ is induced by the surjection
\[ \pi^*\pi_*(\omega_{\pi}^{\vee}) \arr \omega_{\pi}^{\vee}. \]
Therefore, by the universal property of projective bundles, the restriction of $\cO(1)$ to $\overline{\cC}_0$ is equal to $\omega_{\pi}^\vee$, from which we deduce that $\cL^\vee \simeq \cO(1)|_{\overline{\cC}_0}$. Putting all together, we have proved the following.
\begin{lemma}\label{lm:pullback h}
The pullback of $\psi_1$ along $\overline{\cC}_0\times_{\overline{\cM}_0}\PP(F)^{\rm \acute{e}t} \rightarrow \widetilde{\mathscr C}_2$ is equal to $-h$.
\end{lemma}
We are ready to prove the result stated at the beginning of the Subsection.
\begin{proof}[Proof of \Cref{prop:relations two cusps}]
It follows from \Cref{lm:chow envelope Z2} that the image of the pushforward
\[ \ch(\cZ^2) \arr \ch(\widetilde{\mathscr C}_2) \]
coincides with the image of
\[ \ch(\overline{\cC}_0\times_{\overline{\cM}_0}\PP(F)^{\rm \acute{e}t}) \arr \ch(\widetilde{\mathscr C}_2). \]
The diagram (\ref{eq:diag envelope Z2}) induces the following commutative diagram of Chow groups:
\[
\begin{tikzcd}
\ch(\overline{\cC}_0\times_{\overline{\cM}_0}\PP(F)^{\rm \acute{e}t}) \ar[r, "\simeq"] \ar[rr, bend left, "g_*"] & \ch(f^*\widetilde{\mathscr C}_2) \ar[r] & \ch(\widetilde{\mathscr C}_2) \\
& \ch(\PP(F)^{\rm \acute{e}t}) \ar[lu] \ar[u] \ar[r, "f_*"] & \ch(\widetilde{\mathscr M}_2) \ar[u, "\pi^*"]
\end{tikzcd}
\]
On one hand, we know from \Cref{lm:chow Cbar0 times P(F)} that the Chow ring of $\overline{\cC}_0\times_{\overline{\cM}_0}\PP(F)^{\rm \acute{e}t}$ is generated as a $\ch(\PP(F)^{\rm \acute{e}t})$-module by $1$, $h$, $h^2$. On the other hand \Cref{lm:pullback h} says that $g^*\psi_1=-h$, hence by the projection formula the image of $g_*$ is generated as an ideal by $\pi^*(\im{f_*})$.
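In formulas, if $\mathrm{pr}$ denotes the projection of $\overline{\cC}_0\times_{\overline{\cM}_0}\PP(F)^{\rm \acute{e}t}$ onto $\PP(F)^{\rm \acute{e}t}$ (notation introduced here only for this computation), every class can be written as a sum of terms of the form $\mathrm{pr}^*(y)\cdot h^k$ with $k\leqslant 2$, and by the projection formula and the commutativity of the diagram above
\[ g_*\bigl(\mathrm{pr}^*(y)\cdot h^k\bigr)=g_*\bigl(\mathrm{pr}^*(y)\cdot(-g^*\psi_1)^k\bigr)=(-\psi_1)^k\,g_*\mathrm{pr}^*(y)=(-\psi_1)^k\,\pi^*f_*(y), \]
which indeed lies in the ideal generated by $\pi^*(\im{f_*})$.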
By construction $\im{f_*}$ is contained in the ideal of cuspidal relations, hence its pullback along $\pi$ is contained in $J$. This concludes the proof.
\end{proof}
\subsection{Relations from the locus of curves with one cusp}
Consider the stack $\widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}]$, where the action of $\GG_{\rmm}$ on $\AA^1$ has weight $1$. In Subsection \ref{sub:description cusp} we defined a morphism of stacks
\[ c\colon\widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}] \arr \widetilde{\mathscr M}_2 \]
that does the following: on one hand, when restricted to the open substack $\widetilde{\mathscr M}_{1,1}\simeq\widetilde{\mathscr M}_{1,1}\times [(\AA^1\smallsetminus\{0\})/\GG_{\rmm}]$, it sends an elliptic stable $A_2$-curve $(C\arr S,p)$ to the stable $A_2$-curve of genus two $\widehat{C}\arr S$ obtained by pinching $C$ along the section $p$.
On the other hand, we can regard the closed substack $\widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm}$ as the stack of pairs $((C\arr S,p),(C'\arr S,p'))$ where $(C\arr S,p)$ is an elliptic stable $A_2$-curve and $(C'\arr S,p')$ is a family of elliptic stable $A_2$-curves with a cusp. Then the morphism $c$ sends a pair to the stable $A_2$-curve of genus two obtained by gluing $C$ and $C'$ at the marked points.
Let us recall how to construct this morphism: let $\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])$ be the blow-up of $\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}]$ along the closed substack
\[ \bfp\times 0\colon\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm} \lhook\joinrel\longrightarrow \widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}]. \]
The fibers of $\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])\arr \widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}]$ over the closed substack $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ have two irreducible components, meeting in one node: the first component is given by a stable $A_2$-curve of genus one, the second is a rational curve.
Moreover, the proper transform of $\im{\bfp\times\rm{id}}$, once restricted to the fibers over $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$, determines a section whose image lands in the rational curve.
We can apply the pinching construction (Subsection \ref{sub:pinching}) to the curve $\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])\arr \widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}]$ along the proper transform of $\im{\bfp\times\rm{id}}$, so that we get
\[ \cP\colon\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])\arr \widehat{\cD}_{1,1}. \]
By construction the curve $\widehat{\cD}_{1,1}\arr\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}]$ is a family of stable $A_2$-curves of genus two, hence we obtain the following fundamental diagram
\begin{equation}\label{eq:fund diag Z1}
\begin{tikzcd}
\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}]) \ar[r, "\cP"] \ar[d, "\rho"] & \widehat{\cD}_{1,1}\simeq c^*\widetilde{\mathscr C}_{2} \ar[r, "c'"] \ar[d, "\pi'"] & \widetilde{\mathscr C}_2 \ar[d, "\pi"] \\
\widetilde{\mathscr C}_{1,1}\times \left[\AA^1/\GG_{\rmm}\right] \ar[r, "q"] & \widetilde{\mathscr M}_{1,1}\times\left[\AA^1/\GG_{\rmm}\right] \ar[r, "c"] \ar[l, bend right, "\bfp\times\rm{id}"] & \widetilde{\mathscr M}_2
\end{tikzcd}
\end{equation}
\begin{remark}
The morphism $c\colon\widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}]\arr\widetilde{\mathscr M}_2$ is proper and surjects onto $\widetilde{\mathscr M}_2^{\rm c}$, the substack of stable $A_2$-curves with at least one cusp. Moreover, it is injective away from $\widetilde{\mathscr M}_2^{\rm{c,c}}$, the substack of curves with two cusps.
Consequently, the morphism $c':c^*\widetilde{\mathscr C}_2 \arr \widetilde{\mathscr C}_2$ is proper and surjects onto $\cZ^1=\pi^{-1}(\widetilde{\mathscr M}_2^{\rm c})$ and it is injective away from $\cZ^2=\pi^{-1}(\widetilde{\mathscr M}_2^{\rm{c,c}})$.
\end{remark}
Recall that $J\subset\ch(\widetilde{\mathscr C}_2)$ is the ideal generated by the pullback along $\pi\colon\widetilde{\mathscr C}_2\to\widetilde{\mathscr M}_2$ of the ideal of cuspidal relations in $\ch(\widetilde{\mathscr M}_2)$.
We also introduced the cycles $[\widetilde{\mathscr C}_2^{\rm c}]$ and $[\widetilde{\mathscr C}_2^{\rm E}]$: the first one is the fundamental class of the cuspidal locus of the universal curve $\widetilde{\mathscr C}_2\arr \widetilde{\mathscr M}_2$ (not to be confused with the locus of cuspidal curves!). The second one is the fundamental class of the locus of curves with a separating node such that the marked component is a cuspidal curve of geometric genus zero.
Let $E$ be the exceptional divisor of $\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])$, so that we have a morphism $\rho:E\to \widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ and a proper morphism $c'':E\to\widetilde{\mathscr C}_2$ (there is a slight abuse of notation here: for the sake of simplicity, we write $\rho$ for what should be denoted $\rho|_E$). Call $T$ the first Chern class of the dual of the Hodge line bundle on $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$. Then the main result of this Subsection is the following:
\begin{proposition}\label{prop:1-cusp whole}
The image of
\begin{equation}\label{eq:pushforward blowup} \ch(\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])) \arr \ch(\widetilde{\mathscr C}_2) \end{equation}
is contained in the ideal $(J,[\widetilde{\mathscr C}_2^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c''_*\rho^*T).$
\end{proposition}
The Chow ring of a blow-up is generated by the pullbacks of cycles from the base together with the pushforwards of cycles from the exceptional divisor.
Let $i:E\hookrightarrow \rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])$ be the inclusion of the exceptional divisor; then the image of (\ref{eq:pushforward blowup}) coincides with the image of
\begin{equation}\label{eq:image blow-up}
\begin{tikzcd}
\ch(\widetilde{\mathscr C}_{1,1}\times \left[\AA^1/\GG_{\rmm}\right])\oplus\ch(E) \ar[rrr, "c'_*\cP_*\rho^* + c'_*\cP_*i_*"] & & & \ch(\widetilde{\mathscr C}_2).
\end{tikzcd}
\end{equation}
Let us separately study the two factors of the homomorphism above.
\begin{proposition}\label{prop:1-cusp}
The image of $\ch(\widetilde{\mathscr C}_{1,1}\times \left[\AA^1/\GG_{\rmm}\right])\arr \ch(\widetilde{\mathscr C}_2)$ is contained in the ideal $(J,[\widetilde{\mathscr C}_2^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}])$.
\end{proposition}
The same argument as in \Cref{prop:chow Ctilde11} implies that
\[ (j\times \id)^*q^*\colon\ch(\widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}])\arr\ch((\widetilde{\mathscr C}_{1,1}\smallsetminus\im{\bfp})\times [\AA^1/\GG_{\rmm}]) \]
is surjective. We deduce that the Chow ring of $\widetilde{\mathscr C}_{1,1}\times[\AA^1/\GG_{\rmm}]$ is generated by the cycles pulled back from $\widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}]$ via $q^*$, together with the ones pushed forward from $\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}]$ via
\[ (\bfp\times\rm{id})_*\colon\ch(\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}])\arr\ch(\widetilde{\mathscr C}_{1,1}\times[\AA^1/\GG_{\rmm}]). \]
If a cycle is of the form $q^*\zeta$, the diagram (\ref{eq:fund diag Z1}) tells us that
\begin{align*} c'_*\cP_*\rho^*(q^*\zeta ) = c'_*(\pi')^*\zeta = \pi^*c_*\zeta,
\end{align*}
hence it is contained in $J$.
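In more detail, the displayed chain of equalities uses the commutativity $q\circ\rho=\pi'\circ\cP$ from diagram (\ref{eq:fund diag Z1}), the projection formula together with $\cP_*(1)=1$ (the pinching morphism $\cP$ is proper and birational), and the compatibility of proper pushforward with flat pullback applied to the cartesian square in (\ref{eq:fund diag Z1}):
\[ c'_*\cP_*\rho^*q^*\zeta=c'_*\cP_*\cP^*(\pi')^*\zeta=c'_*(\pi')^*\zeta=\pi^*c_*\zeta. \]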
To prove \Cref{prop:1-cusp}, it remains to study the image in $\ch(\widetilde{\mathscr C}_2)$ of the cycles of the form $(\bfp\times\rm{id})_*\zeta$. This corresponds to the image of the pushforward along the morphism $f$ appearing in the commutative diagram of stacks below:
\begin{equation*}
\begin{tikzcd}
& \widetilde{\mathscr C}_2 \ar[d, "\pi"] \\
\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}] \ar[ur,"f"] \ar[r, "c"] & \widetilde{\mathscr M}_2
\end{tikzcd}
\end{equation*}
When restricted to the open substack $\widetilde{\mathscr M}_{1,1}\times [(\AA^1\smallsetminus\{0\})/\GG_{\rmm}]\simeq \widetilde{\mathscr M}_{1,1}$, the map $f$ sends an elliptic stable $A_2$-curve $(C\to S,p)$ to the marked genus two curve obtained by creating a cusp along $\im{p}$ and with the marking given by the cusp itself.
Recall that
\[ \ch(\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}]) \simeq \ZZ[T,S] \]
where $T$ is the first Chern class of the dual of the Hodge line bundle on $\widetilde{\mathscr M}_{1,1}$ and $S$ is the first Chern class of the universal line bundle on $\cB\GG_{\rmm}$.
\begin{lemma}\label{prop:cusp rel M2tilde}
In the setting above, we have $c^*(\lambda_1)= -S$ and $c^*(\lambda_2)=ST-T^2$, hence also $f^*(\lambda_1)= -S$ and $f^*(\lambda_2)=ST-T^2$. Moreover $f^*\psi_1=T+nS$ for some integer $n$.
\end{lemma}
\begin{proof}
The pullback along the closed embedding
\[ [\{0\}/\GG_{\rmm}]\times [\{0\}/\GG_{\rmm}] \lhook\joinrel\longrightarrow \widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}]\]
is an isomorphism of Chow rings, hence to determine $c^*(\lambda_1)$ and $c^*(\lambda_2)$ we can equivalently compute their restrictions to $[\{0\}/\GG_{\rmm}]\times [\{0\}/\GG_{\rmm}]$.
Consider the composition
\[ \{0\}\times\{0\} \arr [\{0\}/\GG_{\rmm}]\times [\{0\}/\GG_{\rmm}] \lhook\joinrel\longrightarrow \widetilde{\mathscr M}_{1,1}\times [\AA^1/\GG_{\rmm}] \overset{c}{\arr} \widetilde{\mathscr M}_2 \]
where the first arrow is the universal $\GG_{\rmm}\times\GG_{\rmm}$-torsor. The pullback of the Hodge bundle on $\widetilde{\mathscr M}_2$ along this composition, i.e. the fiber of the Hodge bundle on the closed point $\{0\}\times\{0\}\arr\widetilde{\mathscr M}_2$, can be regarded as a $\GG_{\rmm}\times\GG_{\rmm}$-representation of rank two. The equivariant Chern classes of this representation coincide by construction with $c^*(\lambda_1)$ and $c^*(\lambda_2)$.
Let $\widehat{C}$ denote the stable $A_2$-curve of genus two represented by the closed point $\{0\}\times\{0\}$: this is the curve obtained by gluing together the planar curve $C=\{Y^2Z=X^3\}$ with a projective line pinched at $[1,0]$. The points that we glue together are respectively $[0,1,0]$ and $[0,1]$. A more detailed construction follows.
Recall that the family of stable $A_2$-curves of genus two over $\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}]$ inducing the map to $\widetilde{\mathscr M}_2$ is constructed as follows: let $\widetilde{C}_{1,1}\subset\PP(V_{-2,-3,0})\times V_{-4,-6}$ be the $\GG_{\rmm}$-invariant family of elliptic curves of equation
\[ Y^2Z=X^3+aXZ^2+bZ^3, \]
with section given by $p\colon(a,b)\mapsto ([0,1,0],(a,b))$. The quotient stack $[\widetilde{C}_{1,1}/\GG_{\rmm}]$ is isomorphic to $\widetilde{\mathscr C}_{1,1}$, the universal family over $\widetilde{\mathscr M}_{1,1}$.
Consider the blow up of $\widetilde{C}_{1,1}\times V_1$ along $p(\AA^2)\times \{0\}$, which naturally inherits a $\GG_{\rmm}\times\GG_{\rmm}$ action. If we pinch this blow up along the proper transform of $p(\AA^2)\times \AA^1$, we get a family of stable $A_2$-curves of genus two, whose $\GG_{\rmm}\times\GG_{\rmm}$-quotient is the family on $\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}]$ we were looking for.
In particular, the fiber over $[\{0\}/\GG_{\rmm}]\times [\{0\}/\GG_{\rmm}]$ of this family is given by the $\GG_{\rmm}\times\GG_{\rmm}$-quotient of $\{Y^2Z=X^3\}$ and the projectivization of the tangent space of $\{[0,1,0]\}\times\{0\}$ in $\{Y^2Z=X^3\}\times \AA^1$. If $u$ is a coordinate for $\AA^1$, a basis for this vector space is given by $e_0=u^{\vee}$ and $e_1=\frac{X}{Y}^{\vee}$, on which $\GG_{\rmm}\times\GG_{\rmm}$ acts as
\[ (t,s)\cdot (e_0,e_1) = (se_0,te_1), \]
from which it follows that $\GG_{\rmm}\times\GG_{\rmm}$ acts on the projective line as
$$ (t,s)\cdot[e_0,e_1] = [ se_0, te_1].$$
The fiber of the Hodge bundle on $\{0\}\times \{0\}$ is isomorphic to $\rm{H}^0(\widehat{C},\omega_{\widehat{C}})$. We need to compute the inherited action of $\GG_{\rmm}\times\GG_{\rmm}$ on this vector space. A straightforward computation shows that
\[ H^0(\widehat{C},\omega_{\widehat{C}})=H^0(C,\omega_C) \oplus H^0(\PP^1,\omega_{\PP^1}(2[1,0])),\]
hence
\[c_1^{\GG_{\rmm}^2}(H^0(\widehat{C},\omega_{\widehat{C}}))= -T + c_1^{\GG_{\rmm}^2}(H^0(\PP^1,\omega_{\PP^1}(2[1,0]))),\]
\[c_2^{\GG_{\rmm}^2}(H^0(\widehat{C},\omega_{\widehat{C}}))= -T\cdot c_1^{\GG_{\rmm}^2}(H^0(\PP^1,\omega_{\PP^1}(2[1,0]))).\]
It remains to compute the action of $\GG_{\rmm}^2$ on $\rm{H}^0(\PP^1,\omega_{\PP^1}(2 [1,0]))$.
The vector space $\rm{H}^0(\PP^1,\omega_{\PP^1}(2[1,0]))$ is generated by $(x_0/x_1)^2d(x_1/x_0)$, where $x_0=u=e_0^{\vee}$ and $x_1=\frac{X}{Y}=e_1^{\vee}$. Therefore, the action is
\[ (t,s)\cdot(x_0/x_1)^2d(x_1/x_0) = s^{-1}t (x_0/x_1)^2d(x_1/x_0) \]
and the first Chern class of the representation is $T-S$. Putting everything together, we get the desired conclusion. We are left with the computation of $f^*\psi_1$.
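Before doing so, we spell out the previous step: substituting the class $T-S$ just computed into the two displayed formulas for the equivariant Chern classes gives
\[ c^*(\lambda_1)= -T + (T-S) = -S, \qquad c^*(\lambda_2)= -T\cdot(T-S) = ST-T^2, \]
which is precisely the claim of the lemma for $c^*(\lambda_1)$ and $c^*(\lambda_2)$.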
Let $(C\to S,\sigma)$ be a stable $A_2$-curve of genus two, with $\im{\sigma}$ contained in the cuspidal locus of $C\to S$. Suppose moreover that the fibers of $C$ have no separating nodes. Let $\overline{C}\to S$ be the partial normalization of $C$ along $\im{\sigma}$, and let $p:S\to \overline{C}$ be the preimage of $\sigma$, so that we have
\begin{equation*}
\begin{tikzcd}
\overline{C} \ar[rr,"\nu"] \ar[dr] & & C \ar[dl] \\
& S \ar[ul, bend left, "p"] \ar[ur, bend right, "\sigma"']&
\end{tikzcd}
\end{equation*}
In particular $\sigma^*\omega_{C/S}\simeq p^*\nu^*\omega_{C/S}\simeq p^*\omega_{\overline{C}/S}(2p)$, where $\omega_{\bullet/S}$ denotes the dualizing sheaf.
As $\psi_1$ is the first Chern class of the line bundle on $\widetilde{\mathscr C}_{2}$ defined as
\[ (C\to S,\sigma)\longmapsto\sigma^*\omega_{C/S} \]
we deduce that $f^*\psi_1$, when restricted to the open substack $\widetilde{\mathscr M}_{1,1}\times[(\AA^1\smallsetminus\{0\})/\GG_{\rmm}]\simeq\widetilde{\mathscr M}_{1,1}$, is equal to the first Chern class of
\[ (\overline{C}\to S,p) \longmapsto p^*\omega_{\overline{C}/S}(2p). \]
This is equal to $T$, because $p^*\cO(-p)\simeq p^*\omega_{\overline{C}/S}$. We have shown in this way that $f^*\psi_1= T + nS$ for some integer $n$, as claimed.
\end{proof}
From \Cref{prop:cusp rel M2tilde} we deduce that the image of $f_*$ is generated as an ideal in $\ch(\widetilde{\mathscr C}_2)$ by $f_*1=[\widetilde{\mathscr C}_2^{\rm c}]$: this follows from a straightforward application of the projection formula. In this way we have proved \Cref{prop:1-cusp}.
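Indeed, since $f^*(\lambda_1)=-S$ and $f^*\psi_1=T+nS$, the image of $f^*$ contains both $S$ and $T$, so every class $\zeta\in\ch(\widetilde{\mathscr M}_{1,1}\times[\AA^1/\GG_{\rmm}])\simeq\ZZ[T,S]$ can be written as $f^*\eta$ for some $\eta\in\ch(\widetilde{\mathscr C}_2)$, and the projection formula gives
\[ f_*\zeta = f_*f^*\eta = \eta\cdot f_*1 = \eta\cdot[\widetilde{\mathscr C}_2^{\rm c}]. \]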
\begin{proposition}\label{prop:exceptional}
Let $E$ be the exceptional divisor of ${\rm{Bl}}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])$ and let $\rho\colon E\to\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ be the natural projection. Then the image of
\[ c_*'':\ch(E) \longrightarrow \ch(\widetilde{\mathscr C}_2) \]
is contained in the ideal $(J,[\widetilde{\mathscr C}_2^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c_*''(\rho^*T))$, where $T$ is the first Chern class of the dual of the Hodge line bundle on $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$.
\end{proposition}
\begin{proof}
By construction the exceptional divisor $E$ is the projectivization of the normal bundle of $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ in $\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}]$, hence its Chow ring is generated as a $\ch(\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm})$-module by the powers of the hyperplane class.
There is a commutative diagram
\begin{equation*}
\begin{tikzcd}
E \ar[r, "\cP"] \ar[dr, "\rho"] \ar[rr, bend left, "c''"] & c^*\widetilde{\mathscr C}_2 \ar[r, "c'"] \ar[d, "\pi'"] & \widetilde{\mathscr C}_2 \ar[d, "\pi"] \\
& \widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm} \ar[r, "c"] & \widetilde{\mathscr M}_2
\end{tikzcd}
\end{equation*}
which is basically the same as the diagram (\ref{eq:fund diag Z1}) but restricted to $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$. In particular, the square on the right is cartesian.
The pullback of $\psi_1$ along $c''=c'\circ\cP$ is equal to $nh + m\rho^*\ell$, where $h$ is the hyperplane class and $\ell$ is the class of a divisor in $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$. We claim that $n=1$. If so, we can conclude that $\im{c'_*\cP_*}$ is contained in the ideal $(J,[\widetilde{\mathscr C}_2^{\rm E}],c''_*(\rho^*T))$, where $[\widetilde{\mathscr C}_2^{\rm E}]=c''_*(1)$, as follows: first observe that
\begin{align*}
c''_*(h^i \rho^*\zeta) &= c''_*(({c''}^*\psi_1 - m\rho^*\ell)^i \rho^*\zeta) = \sum_{j=0}^{i} \psi_1^j \cdot c''_*\rho^*(\eta_j)
\end{align*}
where $\eta_j$ is some class in $\ch(\widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm})$ for $j=0,\dots,i$. This shows that the image of $c''_*$ is generated as an ideal by elements of the form $c''_*\rho^*\eta$, where $\eta$ is in $\ch(\widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm})$.
Second, observe that $\ch(\widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm})$ is generated as a $\ch(\widetilde{\mathscr M}_2)$-module by $1$ and $T$: this follows from \Cref{prop:cusp rel M2tilde}. Therefore, we can write every element $\rho^*\eta$ as
\[\rho^*\eta = (\rho^*T)\cdot (\rho^*c^*\zeta) + (\rho^*c^*\zeta') = (\rho^*T)\cdot ({c''}^*\pi^*\zeta) + ({c''}^*\pi^*\zeta'). \]
Applying the projection formula, we get
\[ c''_*(\rho^*\eta)=(c''_*(\rho^*T))\cdot \pi^*\zeta + c''_*1\cdot \pi^*\zeta', \]
hence $\im{c''_*}$ is generated as an ideal by $c''_*1$ and $c''_*\rho^*T$, as claimed.
To check that $n=1$, first observe that $\psi_1$ can also be regarded as the first Chern class of the relative dualizing sheaf $\omega_{\pi}$ of $\pi\colon\widetilde{\mathscr C}_2\to\widetilde{\mathscr M}_2$. Pulling back this line bundle to $E$ amounts to the following: first we restrict $\omega_{\pi}$ to $\widetilde{\mathscr C}_2^{\rm E}$ inside $\widetilde{\mathscr C}_2$, then we pull it back to the relative normalization of $\widetilde{\mathscr C}_2^{\rm E}\to \pi(\widetilde{\mathscr C}_2^{\rm E})$. Finally, we restrict to the irreducible component of the normalization that is equal to $E$.
The usual formulas for the pullback of the relative dualizing sheaf along a normalization morphism tell us that the pullback of $\omega_{\pi}$ is equal to $\omega_{E/\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}}(2\bfq + \bfr)$, where $\bfq$ is the preimage in $E$ of the cusp and $\bfr$ is the preimage of the separating node.
As $\rho:E\to\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ is a projective bundle with fibers of dimension one, the class of the relative dualizing sheaf is equal to $-2h+\rho^*\ell'$. The class of $\cO(2\bfq + \bfr)$ is equal to $3h + \rho^*\ell''$, hence the pullback of $\psi_1$ is equal to $h+\rho^*(\ell' + \ell'')$. This concludes the proof.
\end{proof}
Putting together \Cref{prop:1-cusp} and \Cref{prop:exceptional}, we deduce that the image of
\begin{equation}\label{eq:push blow-up}
\begin{tikzcd}
\ch(\widetilde{\mathscr C}_{1,1}\times \left[\AA^1/\GG_{\rmm}\right])\oplus\ch(E) \ar[rrr, "c'_*\cP_*\rho^* + c'_*\cP_*i_*"] & & & \ch(\widetilde{\mathscr C}_2)
\end{tikzcd}
\end{equation}
is contained in the ideal $(J,[\widetilde{\mathscr C}_2^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c''_*\rho^*T)$. This coincides with the image of the map from the Chow ring of the blow-up of $\widetilde{\mathscr C}_{1,1}\times [\AA^{1}/\GG_{\rmm}]$, hence we have proved \Cref{prop:1-cusp whole}.
\subsection{Abstract characterization of $\ch(\overline{\cM}_{2,1})$}
The map from the blow-up of $\widetilde{\mathscr C}_{1,1}\times[\AA^1/\GG_{\rmm}]$ to $\widetilde{\mathscr C}_2$ is one-to-one onto the locus of curves with exactly one cusp.
Therefore, the ideal in $\ch(\widetilde{\mathscr C}_2)$ formed by the cycles coming from the locus of cuspidal curves is equal to the sum of the ideal of cycles coming from the locus of curves with two cusps plus the image of (\ref{eq:push blow-up}). By \Cref{prop:relations two cusps} and \Cref{prop:1-cusp whole}, both these ideals are contained in $(J,[\widetilde{\mathscr C}_{2}^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c''_*\rho^*T)$, thus the latter ideal coincides with the whole ideal of cycles coming from the cuspidal locus.
In this way we have proved the following abstract characterization of the integral Chow ring of $\overline{\cM}_{2,1}$.
\begin{theorem}\label{thm:chow Mbar21 abs}
Suppose that the ground field has characteristic not $2$. Then
\[ \ch(\overline{\cM}_{2,1}) \simeq \ch(\widetilde{\mathscr C}_2)/(J,[\widetilde{\mathscr C}_2^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c''_*\rho^*T), \]
where $J$ is the pullback of the ideal of cuspidal relations in $\widetilde{\mathscr M}_2$.
\end{theorem}
\section{The Chow ring of $\overline{\cM}_{2,1}$, concrete computations}\label{sec:concrete}
In this final Section we conclude the computation of $\ch(\overline{\cM}_{2,1})$ (\Cref{thm:chow Mbar21}), by determining the last missing pieces: the fundamental class of $\widetilde{\mathscr C}_2^{\rm c}$ (\Cref{prop:class Ctilde2 c}), the fundamental class of $\widetilde{\mathscr C}_2^{\rm E}$ and $c''_*\rho^*T$ (\Cref{prop:class Ctilde2 E}).
\subsection{Fundamental class of $\widetilde{\mathscr C}_2^{\rm c}$ in $\ch(\widetilde{\mathscr C}_2)$}
Recall that $\widetilde{\mathscr C}_2^{\rm c}$ is the cuspidal locus of $\widetilde{\mathscr C}_2$: in other words, the closed substack $\widetilde{\mathscr C}_2^{\rm c}$ is the stack of cuspidal, stable $A_2$-curves of genus two together with a section that lands in the cuspidal locus.
\begin{remark}
The stack $\widetilde{\mathscr C}_2^{\rm c}$ is the normalization of $\widetilde{\mathscr M}_2^{\rm c}$, the stack of cuspidal stable $A_2$-curves of genus two.
\end{remark}
We want to compute the fundamental class of $\widetilde{\mathscr C}_2^{\rm c}$. First of all, we determine its restriction to the open stratum $\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1$.
\begin{lemma}\label{lm:cusp in Ctilde2 minus ThTilde1}
$$ [\widetilde{\mathscr C}_2^{\rm c}]\vert_{\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1} = 2\lambda_1\psi_1(7\psi_1-\lambda_1) - 24 \psi_1^3 \in \ch(\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1).$$
\end{lemma}
\begin{proof}
Recall that $\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1$ is an open substack of $[\widetilde{\AA}(6)/\rm B_2]$: the points in $\widetilde{\AA}(6)$ can be thought of as pairs $(f,s)$ where $f$ is a binary form of degree $6$ and $s$ is a scalar such that $f(0,1)=s^2$. The action of $\rm B_2$ is described in Section \ref{c2-d1}.
Consider the invariant closed subscheme $Z \subset \widetilde{\AA}(6)$ parametrizing those pairs $(f,s)$ where $(0,1)$ is a root of multiplicity at least three. The restriction of $[Z/\rm B_2]$ to the appropriate open substack of $[\widetilde{\AA}(6)/\rm B_2]$ coincides with $\widetilde{\mathscr C}_2^{\rm c}$, thus all we have to do is to compute the $\rm B_2$-equivariant fundamental class of $Z$. Actually, if $\GG_{\rmm}^2\subset \rm B_2$ is the maximal subtorus of diagonal matrices, we can equivalently compute the $\GG_{\rmm}^2$-equivariant class of $Z$, because the pullback along $[\widetilde{\AA}(6)/\GG_{\rmm}^2]\arr[\widetilde{\AA}(6)/\rm B_2]$ induces an isomorphism of Chow rings (\cite{Per}*{Remark 3.1}).
If we write a form $f$ as $f=a_0x_0^6+a_1x_0^5x_1+\dots+a_6x_1^6$, where $s^2=a_6$, we get that $(f,s)$ belongs to $Z$ if and only if $s=a_4=a_5=0$. In other words, the subscheme $Z$ is a complete intersection of the three divisors $\{s=0\}$, $\{a_4=0\}$ and $\{ a_5=0\}$, hence its fundamental class is the product of the fundamental classes of those three divisors.
Applying the formula for the equivariant fundamental classes of divisors (see \cite{EF}*{Lemma 2.4}) we obtain
$$ [Z]=-2\psi_1(\lambda_1-3\psi_1)(\lambda_1-4\psi_1),$$
and this concludes the proof.
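Note that expanding the product indeed recovers the expression in the statement:
\[ -2\psi_1(\lambda_1-3\psi_1)(\lambda_1-4\psi_1) = -2\lambda_1^2\psi_1 + 14\lambda_1\psi_1^2 - 24\psi_1^3 = 2\lambda_1\psi_1(7\psi_1-\lambda_1) - 24\psi_1^3. \]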
\end{proof}
Consider now the restriction of $\widetilde{\mathscr C}^{\rm c}_2$ to $\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2$. Clearly, we have that
$$ [\widetilde{\mathscr C}_2^{\rm c}]\vert_{\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2}=2\lambda_1\psi_1(7\psi_1-\lambda_1) - 24 \psi_1^3 + \vartheta_1p_2(\lambda_1,\psi_1) $$
where $\vartheta_1$ is the fundamental class of $\widetilde{\Theta}_1$ restricted to $\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2$ and $p_2(\lambda_1,\psi_1)$ is a homogeneous polynomial of degree $2$ (notice that $p_2$ does not depend on $\vartheta_1$ because of the relation $\vartheta_1(\vartheta_1+\lambda_1)$ in $\ch(\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2)$, see \Cref{prop:chow C minus ThTilde2}).
\begin{lemma}\label{lm:cusp in Ctilde2 minus ThTilde2}
The intersection $\widetilde{\mathscr C}_2^{\rm c} \cap (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2)$ is transversal and we have the following:
$$[\widetilde{\mathscr C}_2^{\rm c} \cap (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2)] = -24 \psi_1^3 \in \ch(\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2). $$
Furthermore, we get
$$[\widetilde{\mathscr C}_2^{\rm c}]\vert_{\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2}=2\psi_1(\lambda_1+\vartheta_1)(7\psi_1-\lambda_1) - 24 \psi_1^3$$
where the equality holds in $\ch(\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2)$.
\end{lemma}
\begin{proof}
Let $\widetilde{\mathscr C}_{1,1}$ be the universal elliptic stable $A_2$-curve, and let $\bfp\colon\widetilde{\mathscr M}_{1,1}\arr\widetilde{\mathscr C}_{1,1}$ be the universal section.
Recall from \Cref{prop:chow ThTilde1 minus ThTilde2} that $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ is isomorphic to the product $(\widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp}) \times \widetilde{\mathscr M}_{1,1}$, where the isomorphism is given by taking an elliptic stable $A_2$-curve with a second marking and gluing it to another elliptic stable $A_2$-curve along the first marking.
The fibered product of stacks $\widetilde{\mathscr C}_2^{\rm c} \times_{\widetilde{\mathscr C}_2} (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2)$ is the substack of curves with a separating node and a marked cusp, hence
$$ (\widetilde{\mathscr C}_2^{\rm c} \times_{\widetilde{\mathscr C}_2} (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2))_{\rm red} \simeq \cB\GG_{\rmm} \times \widetilde{\mathscr M}_{1,1}.$$
From this we deduce that the dimension of the intersection is the expected one and the intersection is proper. Given a geometric point in $\widetilde{\mathscr C}_2^{\rm c} \times_{\widetilde{\mathscr C}_2} (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2)$, it is enough to prove the transversality of the intersection in a versal neighborhood of the point. Because both $\widetilde{\mathscr C}_2^{\rm c}$ and $\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2$ are smooth stacks, we just need to prove that the natural morphism of vector spaces
\begin{equation}\label{eq:trans} T_p \widetilde{\mathscr C}_2^{\rm c} \oplus T_p (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2) \longrightarrow T_p \widetilde{\mathscr C}_2 \end{equation}
is surjective for every geometric point $p$ of $\widetilde{\mathscr C}_2^{\rm c} \times_{\widetilde{\mathscr C}_2}(\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2)$ (here $T_p(-)$ denotes the tangent space in the versal neighborhood).
Let us recall some information about the tangent space of $\widetilde{\mathscr C}_2$ at a geometric point $(C,\sigma) \in \widetilde{\mathscr C}_2(k)$. In a versal neighborhood of $(C,\sigma)$ we have:
\begin{equation}\label{eq:formula tangent}
T_{(C,\sigma)}\widetilde{\mathscr C}_2\simeq\im{d\pi}\oplus(\frkm_{\sigma}/\frkm_{\sigma}^2)^\vee,
\end{equation}
where $\pi\colon\widetilde{\mathscr C}_2 \rightarrow \widetilde{\mathscr M}_2$ is the forgetful morphism and $d\pi:T_{(C,\sigma)}\widetilde{\mathscr C}_2 \rightarrow T_C\widetilde{\mathscr M}_2$ is the induced morphism of tangent spaces (in a versal neighborhood).
Recall that if $\sigma$ is a smooth point of $C$, then $d\pi$ is surjective and $\dim (\frkm_{\sigma}/\frkm_{\sigma}^2)=1$. On the other hand, if $\sigma$ is a node or a cusp we have that $\dim (T_{C}\widetilde{\mathscr M}_2/\im{d\pi}) = 1$ and $\dim (\frkm_{\sigma}/\frkm_{\sigma}^2)=2$ (the formula (\ref{eq:formula tangent}) actually holds true for every double point singularity).
Let $p=(C,\sigma)$ be a geometric point in the intersection. The vector space $T_p\widetilde{\mathscr C}_2^{\rm c}$ parametrizes first order deformations of $(C,\sigma)$ that preserve both the cusp and the section: its image in $T_p \widetilde{\mathscr C}_2\simeq \im{d\pi}\oplus(\frkm_{\sigma}/\frkm_{\sigma}^2)^\vee $ can be regarded as the subspace of $\im{d\pi}$ classifying first order deformations of $C$ that preserve the cusp, i.e. the tangent space $T_C\widetilde{\mathscr M}_2^{\rm c}$.
We also have that the image of $T_{(C,\sigma)}(\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2)$ in $T_p\widetilde{\mathscr C}_2$ can be identified with $(\frkm_{\sigma}/\frkm_{\sigma}^2)^\vee \oplus (T_C\widetilde{\Delta}_1\cap \im{d\pi})$. From this we deduce that (\ref{eq:trans}) is surjective, because the sum of the vector subspaces $T_{C}\widetilde{\mathscr M}_2^{\rm c}$ (deformations preserving the cusp) and $T_C\widetilde{\Delta}_1\cap\im{d\pi}$ (deformations preserving the node) generates $\im{d\pi}$.
We have proved that
$$\widetilde{\mathscr C}_2^{\rm c} \times_{\widetilde{\mathscr C}_2} (\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2) \simeq \cB\GG_{\rmm} \times \widetilde{\mathscr M}_{1,1},$$
where the latter can be regarded as a closed substack of $ (\widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp}) \times \widetilde{\mathscr M}_{1,1}$.
To complete the statement, we only need to compute the class $[\cB\GG_{\rmm}]$ inside the Chow ring of $\widetilde{\mathscr C}_{1,1} \smallsetminus \im{\bfp}$.
Recall that $\widetilde{\mathscr C}_{1,1} \smallsetminus \im{\bfp} \simeq [ W/\GG_{\rmm}] $, where
$$W=\{((\alpha,\beta),(x,y)) \in \AA^2 \times \AA^2 | y^2=x^3+\alpha x+ \beta\}$$
is the universal affine Weierstrass curve and the $\GG_{\rmm}$-action is described by the formula $$t\cdot(\alpha,\beta,x,y)=(t^{-4}\alpha,t^{-6}\beta,t^{-2}x,t^{-3}y).$$ The inclusion $\cB\GG_{\rmm} \subset \widetilde{\mathscr C}_{1,1} \smallsetminus \im{\bfp} $ corresponds to the closed embedding
$$ [ (0,0,0,0)/\GG_{\rmm}] \lhook\joinrel\longrightarrow [ W/\GG_{\rmm}]$$
through the identification above.
The $\GG_{\rmm}$-equivariant class of the origin in $\ch_{\GG_{\rmm}}(W)$ can be computed as follows: if $\pr_1,\pr_2:W\arr\AA^2$ are the projections respectively on the first and on the second factor of $\AA^2\times\AA^2$, then we have
\[ (0,0,0,0) = \pr_1^{-1}\{\alpha=0\}\cap \pr_2^{-1}\{x=y=0\}. \]
The equivariant class of the subrepresentation $\{\alpha=0\}\subset \AA^2$ is $-4\psi_1$, because $\GG_{\rmm}$ acts on $\alpha$ by multiplication by $t^{-4}$. With the same argument we deduce that the class of $\{x=0\}$ is $-2\psi_1$ and the class of $\{y=0\}$ is $-3\psi_1$. Putting everything together, we conclude that $[\cB\GG_{\rmm}]=(-4\psi_1)\cdot(-2\psi_1)\cdot(-3\psi_1)=-24\psi_1^3$.
Finally, write $$[\widetilde{\mathscr C}_2^{\rm c}]\vert_{\widetilde{\mathscr C}_2\smallsetminus \widetilde{\Theta}_2}= p_3(\lambda_1,\psi_1)+\vartheta_1p_2(\lambda_1,\psi_1),$$ where $p_3$ and $p_2$ are homogeneous polynomials of degree $3$ and $2$ respectively. \Cref{lm:cusp in Ctilde2 minus ThTilde1} gives us a formula for the restriction of the cycle above to $\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_1$, which implies that $p_3=2\lambda_1\psi_1(7\psi_1-\lambda_1) - 24 \psi_1^3$.
On the other hand, we have just computed the restriction of $[\widetilde{\mathscr C}_2^{\rm c}]$ to $\ch(\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2)$, which is $-24\psi_1^3$. From \Cref{prop:chow C minus ThTilde2} we know that the restriction map sends
\[ \lambda_i\longmapsto \lambda_i,\quad \psi_1\longmapsto\psi_1,\quad \vartheta_1\longmapsto -\lambda_1. \]
Putting this information together, we get $p_2=2\psi_1(7\psi_1-\lambda_1)$, and this finishes the computation.
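Indeed, restricting the expression $p_3(\lambda_1,\psi_1)+\vartheta_1 p_2(\lambda_1,\psi_1)$ to $\widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2$ and using $\vartheta_1\mapsto-\lambda_1$ gives
\[ 2\lambda_1\psi_1(7\psi_1-\lambda_1) - 24\psi_1^3 - \lambda_1\, p_2(\lambda_1,\psi_1) = -24\psi_1^3, \]
that is, $\lambda_1\, p_2(\lambda_1,\psi_1)=2\lambda_1\psi_1(7\psi_1-\lambda_1)$, from which the stated value of $p_2$ follows.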
\end{proof}
Finally we get the description of the fundamental class of $\widetilde{\mathscr C}_2^{\rm c}$ inside the Chow ring of $\widetilde{\mathscr C}_2$.
\begin{proposition}\label{prop:class Ctilde2 c}
$$ [\widetilde{\mathscr C}_2^{\rm c}] = 2\psi_1(\lambda_1 +\vartheta_1)(7\psi_1-\lambda_1) - 24\psi_1^3 \in \ch(\widetilde{\mathscr C}_2). $$
\end{proposition}
\begin{proof}
First observe that the intersection of $\widetilde{\mathscr C}_2^{\rm c}$ with $\widetilde{\Theta}_2$ is empty, hence the restriction of the fundamental class of the first stack to the second is zero.
\Cref{lm:cusp in Ctilde2 minus ThTilde2} gives an explicit expression for the restriction of $[\widetilde{\mathscr C}_2^{\rm c}]$ to $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$, from which we deduce that
$$ [\widetilde{\mathscr C}_2^{\rm c}]= 2\psi_1(\lambda_1+\vartheta_1)(7\psi_1-\lambda_1)- 24\psi_1^3 + \vartheta_2 p_1(\lambda_1,\psi_1) \in \ch(\widetilde{\mathscr C}_2),$$
where $p_1$ is a homogeneous polynomial of degree $1$. Notice that $p_1$ does not depend on $\vartheta_1$ because of the relation $\vartheta_2(\vartheta_1 -\lambda_1 + \psi_1)$ inside $\ch(\widetilde{\mathscr C}_2)$ (see \Cref{prop:chow Ctilde2}). Pulling everything back to $\widetilde{\Theta}_2$ (see formulas \eqref{eq:pullback theta2} and \eqref{eq:pullback psi1}) and using the explicit presentation of $\ch(\widetilde{\Theta}_2)$ given in \Cref{prop:chow ThTilde2}, we get the equation
$$ 0=[\widetilde{\mathscr C}_2^{\rm c}]\vert_{\widetilde{\Theta}_2}=\lambda_2 p_1(\lambda_1,\xi_1) \in \ch(\widetilde{\Theta}_2),$$
which implies $p_1=0$ and concludes the proof.
\end{proof}
\subsection{Relations coming from $\widetilde{\mathscr C}_2^{\rm E}$}
Recall that $\widetilde{\mathscr C}_2^{\rm E}$ is the closed substack of $\widetilde{\mathscr C}_2$ whose points are families of $1$-pointed, almost stable, genus two curves $(C,\sigma)$ with a separating node satisfying the following property: at least one irreducible component of $C$ is cuspidal and the image of $\sigma$ belongs to a component with a cusp.
Let us recall the following diagram from the proof of \Cref{prop:exceptional}: let $\rm{Bl}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}])$ be the blow-up of $\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}]$ along $\im{\bfp\times 0}\simeq \widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$, and let $E$ be the exceptional divisor. We can form a diagram
\[
\begin{tikzcd}
E\simeq\PP(N_{\bfp\times 0}) \ar[r, "c''"] \ar[d, "\rho"] & \widetilde{\mathscr C}_2 \ar[d, "\pi"] \\
\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm} \ar[r, "c"] & \widetilde{\mathscr M}_2.
\end{tikzcd}
\]
We can identify the Chow ring of $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ with $\ZZ[T,S]$, where $T$ is the first Chern class of the dual of the Hodge line bundle.
The aim of this Subsection is to compute the fundamental class $[\widetilde{\mathscr C}_2^{\rm E}]=c''_*1$ and the cycle $c''_*(\rho^*T)$.
\begin{remark}
Let $\pi\colon\widetilde{\mathscr C}_2\arr\widetilde{\mathscr M}_2$ be the forgetful morphism: the preimage $\pi^{-1}(\widetilde{\mathscr M}_2^{\rm c})$ is the substack of $1$-pointed stable $A_2$-curves of genus two with a cusp. This stack is not irreducible, and one irreducible component is given by $\widetilde{\mathscr C}_2^{\rm E}$.
\end{remark}
It is clear that $\widetilde{\mathscr C}_2^{\rm E}$ has codimension 3 inside $\widetilde{\mathscr C}_2$, therefore we have the following description
\begin{align}\label{eq:class of C2tildeE} [\widetilde{\mathscr C}_2^{\rm E}] &= p_3(\lambda_1,\psi_1) + \vartheta_1p_2(\lambda_1,\psi_1) + \vartheta_2 p_1(\lambda_1,\psi_1) \in \ch (\widetilde{\mathscr C}_2)\\
c''_*(\rho^*T)&= q_4(\lambda_1,\psi_1) + \vartheta_1q_3(\lambda_1,\psi_1) + \vartheta_2 q_2(\lambda_1,\psi_1) \in \ch (\widetilde{\mathscr C}_2) \nonumber\end{align}
where the $p_i$ and $q_i$ are homogeneous polynomials of degree $i$.
As the intersection $(\widetilde{\mathscr C}_2\smallsetminus \widetilde{\Theta}_1) \cap \widetilde{\mathscr C}_2^{\rm E} $ is empty, we get $p_3=q_4=0$.
We will adopt the following notation: if $\cV\subset \cX \subset \cY$ are all closed embeddings of quotient stacks, we denote by $[\cV]_{\cX}$ (respectively $[\cV]_{\cY}$) the fundamental class of $\cV$ in $\ch(\cX)$ (respectively in $\ch(\cY)$).
\begin{lemma}\label{lm:C2tildeE in the open}
\begin{align*}
[\widetilde{\mathscr C}_2^{\rm E}]|_{\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2} &=24\vartheta_1\psi_1^2 \in \ch(\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2), \\
(c''_*\rho^*T)|_{\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2} &= 24\vartheta_1\psi_1^2(\lambda_1-\psi_1) \in \ch(\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2).
\end{align*}
\end{lemma}
\begin{proof}
From (\ref{eq:class of C2tildeE}) we know that
\[[\widetilde{\mathscr C}_2^{\rm E}]|_{\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2}=\vartheta_1p_2(\lambda_1,\psi_1), \quad (c''_*\rho^*T)|_{\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2} = \vartheta_1q_3(\lambda_1,\psi_1)\]
We need to determine $p_2$ and $q_3$.
Using the description of $\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2$ as the product $(\widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp}) \times \widetilde{\mathscr M}_{1,1}$, we get that $\widetilde{\mathscr C}_2^{\rm E}\smallsetminus\widetilde{\Theta}_2 \simeq \cV \times \widetilde{\mathscr M}_{1,1}$, where $\cV \subset \widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp}$ is the closed substack classifying cuspidal curves.
Recall that $\widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp}\simeq [W/\GG_{\rmm}]$, where
\[ W=\{(\alpha,\beta,x,y)\text{ }|\text{ }y^2=x^3+\alpha x + \beta \} \subset V_{-4,-6}\times V_{-2,-3}. \]
As before, we use the notation $V_{i,j}$ to indicate the rank two $\GG_{\rmm}$-representation with weights $i$ and $j$.
Hence we have that $\cV\simeq [V/\GG_{\rmm}]$ where $V\subset W$ is the $\GG_{\rmm}$-invariant closed subscheme defined by
$$V=\{ (\alpha,\beta,x,y) \in W\text{ }\vert\text{ }\alpha=0,\beta=0\}$$
which is a complete intersection inside $W$. As $\GG_{\rmm}$ acts on $\alpha$ with weight $-4$ and on $\beta$ with weight $-6$, we deduce that the equivariant class of $V$ is $(-4\psi_1)\cdot(-6\psi_1)=24\psi_1^2$.
The codimension $2$ closed embedding
\[ e:\widetilde{\mathscr C}_2^{\rm E} \smallsetminus \widetilde{\Theta}_2 \lhook\joinrel\longrightarrow \widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2\]
is regular, thus the excess intersection formula gives us the following description:
$$j^*[\widetilde{\mathscr C}_2^{\rm E}]_{\widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2} = c_1(N_j) \cdot [\widetilde{\mathscr C}_2^{\rm E}]_{\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2}= c_1(N_j)\cdot 24\psi_1^2$$
where $j$ is the regular closed embedding $\widetilde{\Theta}_1 \smallsetminus \widetilde{\Theta}_2 \hookrightarrow \widetilde{\mathscr C}_2 \smallsetminus \widetilde{\Theta}_2$ and $N_j$ is the associated normal bundle.
Combining this with (\ref{eq:class of C2tildeE}) and the fact that $c_1(N_j)$ is not a zero divisor (see \Cref{prop:chow C minus ThTilde2}), we get that $p_2(\lambda_1,\psi_1) = 24\psi_1^2$.
Observe that $c'':E\to\widetilde{\mathscr C}_2$, once restricted over $\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2$, becomes a closed embedding, and we have a commutative diagram
\[
\begin{tikzcd}
E|_{\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2} \ar[r, "e"] \ar[rr, bend left, "c''"] \ar[d, "\rho"] & \widetilde{\Theta}_1\smallsetminus\widetilde{\Theta}_2 \simeq \widetilde{\mathscr M}_{1,1}\times(\widetilde{\mathscr C}_{1,1}\smallsetminus \im{\bfp})\ar[r, "j"] \ar[d, "\pr_2"] & \widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2 \\
\widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm} \ar[r, "\pr_1"] & \widetilde{\mathscr M}_{1,1}. &
\end{tikzcd}
\]
In particular, we have that $T\in \ch(\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm})$ is equal to $\pr_1^*(T)$, hence we have
\[ c''_*\rho^*T=c''_*\rho^*\pr_1^*T=c''_*e^*\pr_2^*T=j_*(e_*1 \cdot \pr_2^*T) \]
and therefore $j^*c''_*(\rho^*T) = (e_*1\cdot\pr_2^*T)\cdot c_1(N_j)$. We have already computed that $e_*1=24\psi_1^2$. In the proof of \Cref{prop:chow ThTilde1 minus ThTilde2} we showed that $\pr_2^*T=s=\lambda_1-\psi_1$, from which we conclude that $j^*c''_*(\rho^*T)= c_1(N_j)24\psi_1^2(\lambda_1-\psi_1)$. This immediately implies our conclusion.
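More precisely, comparing with $(c''_*\rho^*T)\vert_{\widetilde{\mathscr C}_2\smallsetminus\widetilde{\Theta}_2}=\vartheta_1 q_3(\lambda_1,\psi_1)$ and arguing exactly as for $p_2$, we get
\[ c_1(N_j)\, q_3(\lambda_1,\psi_1) = c_1(N_j)\cdot 24\psi_1^2(\lambda_1-\psi_1), \]
hence $q_3=24\psi_1^2(\lambda_1-\psi_1)$, since $c_1(N_j)$ is not a zero divisor.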
\end{proof}
The restriction of $\widetilde{\mathscr C}_2^{\rm E}$ to $\widetilde{\Theta}_2$ needs to be handled with care.
Consider the cartesian diagram
$$\begin{tikzcd}
\widetilde{\mathscr C}_2^{\rm E} \cap \widetilde{\Theta}_2 \arrow[rr, "i'", hook] \arrow[dd, "\codim{2}", hook] & & \widetilde{\mathscr C}_2^{\rm E} \arrow[dd, "\codim{3}", hook] \\
& & \\
\widetilde{\Theta}_2 \arrow[rr, "\codim{2}", hook] & & \widetilde{\mathscr C}_2
\end{tikzcd}$$
where $i'$ is a $\codim{1}$ closed embedding. It can easily be shown that $i'$ is not a regular embedding. In fact, it is a regular embedding away from the locus parametrizing curves with two cusps.
The substack of cuspidal curves in $\widetilde{\mathscr C}_2$ is the image of the pinching morphism (see Subsection \ref{sub:pinching}) $$\cP:{\rm Bl}_{\im{\bfp}\times\cB\GG_{\rmm}}(\widetilde{\mathscr C}_{1,1}\times [\AA^1/\GG_{\rmm}]) \longrightarrow \widetilde{\mathscr C}_2$$
where $\bfp: \widetilde{\mathscr M}_{1,1} \rightarrow \widetilde{\mathscr C}_{1,1}$ is the universal section and $\cB\GG_{\rmm}=[\{0\}/\GG_{\rmm}]$.
With this picture in mind, $\widetilde{\mathscr C}_2^{\rm E}$ can be regarded as the image of the exceptional divisor of the blow-up, i.e. the image of $\PP(N_{\bfp \times 0})$ via the pinching morphism.
The proper transform of $\im{\bfp} \times [\AA^1/\GG_{\rmm}]$ in the blow-up induces a section
$$ i'': \widetilde{\mathscr M}_{1,1} \times \cB\GG_{\rmm} \lhook\joinrel\longrightarrow \PP(N_{\bfp \times 0})$$
of the morphism $\PP(N_{\bfp \times 0}) \rightarrow \widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm}$.
\begin{lemma}\label{lm:formula for C2tildeE in ThTilde2}
The following commutative diagram
\begin{equation}\label{eq:excess}\begin{tikzcd}
{\widetilde{\mathscr M}_{1,1}\times \cB\GG_{\rmm}} \arrow[d, "i''", hook] \arrow[r, "r"] & \widetilde{\Theta}_2 \arrow[d, "i", hook] \\
\PP(N_{\bfp \times 0}) \arrow[r, "c''"] & \widetilde{\mathscr C}_2
\end{tikzcd}\end{equation}
is cartesian and
\begin{align*}
i^*([\widetilde{\mathscr C}_2^{\rm E}]) &= r_*(r^*c_1(N_{i})-c_1(N_{i''})), \\
i^*(c''_*(\rho^*T)) &= r_*(T\cdot (r^*c_1(N_{i})-c_1(N_{i''}))).
\end{align*}
where $N_i$ (respectively $N_{i''}$) is the normal bundle associated with the closed regular embedding $i$ (respectively $i''$).
\end{lemma}
\begin{proof}
The cartesianity is clear from the construction of the pinching morphism (see Subsection \ref{sub:pinching}).
The compatibility of the Gysin homomorphism with the pushforward tells us that for any cycle $\zeta$ we have
\[ i^*c''_*\zeta = r_*(i^{!}\zeta). \]
We can apply the excess intersection formula, obtaining
\[ i^{!}\zeta = c_1(r^*N_{i}/N_{i''}) \cdot {i''}^*\zeta. \]
Applying these formulas to $\zeta=1$ and $\zeta=\rho^*T$, together with the fact that ${i''}^*\rho^*={\rm id}$, we obtain the desired conclusion.
\end{proof}
We are almost ready to compute explicitly the pullback of $[\widetilde{\mathscr C}^{\rm E}_2]$ and $c''_*\rho^*T$.
To proceed, we need the following technical result.
\begin{lemma}\label{lm:blowup formula}
Suppose $X,Y,Z$ are three smooth algebraic stacks with closed embeddings $i:X \hookrightarrow Y$ and $j: Y \hookrightarrow Z$ such that $\dim Z= \dim Y +1= \dim X +2$. Let $Z':= {\rm Bl}_{X}Z$ be the blow-up of $Z$ along $X$. Then we have a section $\sigma:X \hookrightarrow E$ of the natural morphism $E\rightarrow X$ where $E$ is the exceptional divisor, and the following formula holds:
$$ N_{X\vert E} = i^*N_{Y\vert Z}\otimes N_{X\vert Y}^\vee. $$
\end{lemma}
\begin{proof}
By \cite{Ful}*{B.6.10} there is a lifting $j':Y \hookrightarrow Z'$ of $j$, i.e. we have a diagram
$$\begin{tikzcd}
Y \arrow[rr, "j'", hook] \arrow[rd, "j", hook] & & Z' \arrow[ld, "\pi"] \\
& Z &
\end{tikzcd}$$
where both $j'$ and $j$ are regular closed embeddings of codimension $1$, and $ \pi^*(\cO_Z(Y))= \cO_{Z'}(Y) \otimes \cO_{Z'}(E) $.
If we consider the two cartesian diagrams
$$\begin{tikzcd}
X=X\times_Z Y \arrow[r, "\sigma", hook] \arrow[d, "i", hook] & E \arrow[r] \arrow[d, hook] & X \arrow[d, "j\circ i", hook] \\
Y \arrow[r, "j'", hook] & Z' \arrow[r,"\pi"] & Z
\end{tikzcd}$$
we get
$$ N_{X\vert E}= i^*N_{Y\vert Z'}= i^*j'^*\cO_{Z'}(Y)= i^*j'^*\pi^*\cO_Z(Y) \otimes i^*j'^*\cO_{Z'}(E)^\vee=i^*N_{Y\vert Z} \otimes N_{X\vert Y}^\vee$$
which concludes the proof.
\end{proof}
\begin{lemma}\label{lm:C2tilde in the closed}
Let $i\colon\widetilde{\Theta}_2\hookrightarrow \widetilde{\mathscr C}_2$ be the closed embedding. Then
\begin{align*}
i^*[\widetilde{\mathscr C}_2^{\rm E}] &= -24\lambda_1\lambda_2, \\
i^*(c''_*\rho^*T) &= 48\lambda_2^2.
\end{align*}
\end{lemma}
\begin{proof}
We want to apply the formula given by \Cref{lm:formula for C2tildeE in ThTilde2}. For this, we first need to compute $c_1(N_{i''})$ in the Chow ring of $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ (same notation as in \Cref{lm:formula for C2tildeE in ThTilde2}). We write
\[ \ch(\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}) \simeq \ZZ[T,S] \]
where $T$ is the first Chern class of the dual of the Hodge line bundle on $\widetilde{\mathscr M}_{1,1}$ and $S$ is the first Chern class of the universal line bundle of weight $1$.
To compute $c_1(N_{i''})$ we use \Cref{lm:blowup formula}, where the role of $X$ is played by $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$, the role of $Y$ by $\widetilde{\mathscr C}_{1,1}\times\cB\GG_{\rmm}$ and the role of $Z$ by $\widetilde{\mathscr C}_{1,1}\times[\AA^1/\GG_{\rmm}]$.
The normal bundle of $\widetilde{\mathscr C}_{1,1}\times\cB\GG_{\rmm}$ in $\widetilde{\mathscr C}_{1,1}\times[\AA^1/\GG_{\rmm}]$ is the pullback of the normal bundle of $\cB\GG_{\rmm}$ in $[\AA^1/\GG_{\rmm}]$, whose class is $S$. The conormal bundle of $\widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm}$ in $\widetilde{\mathscr C}_{1,1}\times\cB\GG_{\rmm}$ is the pullback of the conormal bundle of $\widetilde{\mathscr M}_{1,1}$ embedded in $\widetilde{\mathscr C}_{1,1}$ via the universal section: this is well known to be isomorphic to the Hodge line bundle, hence it is equal to $-T$.
This said, applying \Cref{lm:blowup formula}, we get
\[ c_1(N_{i''}) = S - T. \]
We computed the normal bundle of $\widetilde{\Theta}_2\hookrightarrow\widetilde{\mathscr C}_2$ in the proof of \Cref{prop:chow Ctilde2}: this coincides with the Hodge line bundle twisted by the $2$-torsion line bundle on $\widetilde{\Theta}_2$, hence its first Chern class is equal to $\xi_1-\lambda_1$.
Observe that the pullback of $\lambda_1$ along the two maps above is equal to $-S$ (see \Cref{prop:cusp rel M2tilde}): we deduce that $c_1(r^*N_{i})=g^*(\xi_1-\lambda_1)=S$, hence
\[c_1(r^*N_{i})-c_1(N_{i''})=T. \]
We can apply \Cref{lm:formula for C2tildeE in ThTilde2}, which tells us
\[ i^*[\widetilde{\mathscr C}^{\rm E}_2] = r_*(T),\quad i^*(c''_*\rho^*T) = r_*(T^2). \]
We need to compute the pushforward of these elements to $\widetilde{\Theta}_2$, which we identify with $\widetilde{\Delta}_1$ just as we have done in the proof of \Cref{prop:chow ThTilde2}. Observe that there is a natural factorization of $r$
\[ \widetilde{\mathscr M}_{1,1}\times\cB\GG_{\rmm} \overset{g}{\longrightarrow} \widetilde{\mathscr M}_{1,1}\times\widetilde{\mathscr M}_{1,1} \overset{f}{\longrightarrow} (\widetilde{\mathscr M}_{1,1}\times\widetilde{\mathscr M}_{1,1})/\bC_2. \]
Set
\[ \ch(\widetilde{\mathscr M}_{1,1}\times\widetilde{\mathscr M}_{1,1}) = \ZZ[T,U] \]
where $U$ is the pullback of the dual of the Hodge line bundle from the second copy of $\widetilde{\mathscr M}_{1,1}$.
To compute the pushforward of $T$, first observe that $g^*T=T$, hence $g_*(T)=T\cdot g_*(1)$. The element $g_*(1)$ is the pullback of the fundamental class of $\cB\GG_{\rmm}$ in the second copy of $\widetilde{\mathscr M}_{1,1}$, which is equal to $24U^2$. Similarly, we have $g_*(T^2)=24U^2T^2$.
Hence we are reduced to computing $f_*(24U^2T)$ and $f_*(24(UT)^2)$. This is equal to the pushforward of these cycles through the map
\[ \cB(\GG_{\rmm}\times\GG_{\rmm}) \longrightarrow \cB(\GG_{\rmm}^{\times 2}\rtimes \bC_2), \]
which has been explicitly determined in \cite{Lars}*{Lemma 7.3} and \cite{DLV}*{Corollary 3.2}. We get
\[ f_*(24U^2T) = -24\lambda_2\lambda_1, \quad f_*(24(UT)^2)=48\lambda_2^2, \]
which concludes the proof.
\end{proof}
\begin{proposition}\label{prop:class Ctilde2 E}
\begin{align*}
[\widetilde{\mathscr C}_2^{\rm E}]&= 24(\psi_1^2\vartheta_1-\lambda_1\vartheta_2) \in \ch (\widetilde{\mathscr C}_2),\\
c''_*\rho^*T &= 24(\psi_1^2(\lambda_1-\psi_1)\vartheta_1 + 2\lambda_2\vartheta_2) \in \ch (\widetilde{\mathscr C}_2).\\
\end{align*}
\end{proposition}
\begin{proof}
We know from \Cref{lm:C2tildeE in the open} that $[\widetilde{\mathscr C}_2^{\rm E}] = 24\vartheta_1\psi_1^2 + \vartheta_2 p_1$. This, combined with \Cref{lm:C2tilde in the closed}, gives us
\[ -24\lambda_1\lambda_2 = i^*[\widetilde{\mathscr C}_2^{\rm E}] = 24 (\xi_1-\lambda_1)\xi_1^2 + \lambda_2p_1 \]
from which we deduce that $p_1=-24\lambda_1$.
From \Cref{lm:C2tildeE in the open} we also know that $c''_*\rho^*T = 24\vartheta_1\psi_1^2(\lambda_1-\psi_1) + \vartheta_2 q_2$. This, combined with \Cref{lm:C2tilde in the closed}, gives us
\[ 48\lambda_2^2 = i^*(c''_*\rho^*T) = \lambda_2q_2 \]
from which we deduce $q_2=48\lambda_2$. This concludes the proof.
\end{proof}
\subsection{Final results}
From \Cref{thm:chow Mbar21 abs} we know that
\[\ch(\overline{\cM}_{2,1})\simeq\ch(\widetilde{\mathscr C}_2)/(J,[\widetilde{\mathscr C}_{2}^{\rm c}],[\widetilde{\mathscr C}_2^{\rm E}],c''_*\rho^*T) \]
where $J$ is the ideal generated by the pullback of the relations in $\ch(\overline{\cM}_2)$. From \Cref{prop:chow Ctilde2} we know an explicit presentation of $\ch(\widetilde{\mathscr C}_2)$ in terms of generators and relations. The fundamental classes of $\widetilde{\mathscr C}_2^{\rm c}$ and $\widetilde{\mathscr C}_2^{\rm E}$ together with the cycle $c''_*\rho^*T$ have been computed respectively in \Cref{prop:class Ctilde2 c} and \Cref{prop:class Ctilde2 E}. Putting everything together, we derive the following presentation of the integral Chow ring of $\overline{\cM}_{2,1}$ in terms of generators and relations.
\begin{theorem}\label{thm:chow Mbar21}
Suppose that the characteristic of the base field is not $2$ or $3$. Then we have
\[ \ch(\overline{\cM}_{2,1})\simeq \ZZ[\lambda_1,\psi_1,\vartheta_1,\lambda_2, \vartheta_2]/(\alpha_{2,1},\alpha_{2,2},\alpha_{2,3},\beta_{3,1},\beta_{3,2},\beta_{3,3},\beta_{3,4}) \]
where the $\alpha_{2,i}$ have degree $2$, the $\beta_{3,j}$ have degree $3$ and their explicit expressions are
\begin{align*}
\alpha_{2,1}&=\lambda_2-\vartheta_2-\psi_1(\lambda_1-\psi_1),\\
\alpha_{2,2}&=24\lambda_1^2-48\lambda_2,\\
\alpha_{2,3}&=\vartheta_1(\lambda_1+\vartheta_1),\\
\beta_{3,1}&=20\lambda_1\lambda_2-4\lambda_2\vartheta_1,\\
\beta_{3,2}&=2\psi_1\vartheta_2,\\
\beta_{3,3}&=\vartheta_2(\vartheta_1+\lambda_1-\psi_1),\\
\beta_{3,4}&=2\psi_1(\lambda_1 +\vartheta_1)(7\psi_1-\lambda_1) - 24\psi_1^3.\\
\end{align*}
\end{theorem}
\begin{proof}
The generating relations in the Theorem are obtained from the relations in the Chow ring of $\widetilde{\mathscr C}_2$ together with the relations in the Chow ring of $\overline{\cM}_2$, the fundamental classes of $\widetilde{\mathscr C}_2^{\rm c}$ and $\widetilde{\mathscr C}_2^{\rm E}$ and the cycle $c''_*\rho^*T$: a straightforward computation with Macaulay2 shows that all the relations of degree four as well as $[\widetilde{\mathscr C}_2^{\rm E}]$ are redundant, thus giving us the list above.
\end{proof}
We could have avoided including either $\lambda_2$ or $\vartheta_2$ among the generators of the ring, but keeping them both allowed us to simplify the explicit expressions of the relations.
As a Corollary of the Theorem above we get the following description of the rational Chow ring of the coarse moduli space $\overline{M}_{2,1}$, which was computed over $\mathbb{C}$ by Faber in his thesis \cite{Fab}.
\begin{corollary}\label{cor:rational chow}
Suppose that the characteristic of the base field is not $2$ or $3$. Then we have
\[ \ch(\overline{M}_{2,1})_{\mathbb{Q}}\simeq \mathbb{Q}[\lambda_1,\psi_1,\vartheta_1]/(\alpha_{2,3},\beta'_{3,1},\beta'_{3,2},\beta'_{3,3},\beta'_{3,4}) \]
where $\alpha_{2,3}$ has degree $2$, the $\beta'_{3,j}$ have degree $3$ and their explicit expressions are
\begin{align*}
\alpha_{2,3}&=\vartheta_1(\lambda_1+\vartheta_1),\\
\beta'_{3,1}&=10\lambda_1^3-2\lambda_1^2\vartheta_1,\\
\beta'_{3,2}&=\psi_1\lambda_1^2-2\psi_1^2(\lambda_1-\psi_1),\\
\beta'_{3,3}&=(\frac{1}{2}\lambda_1^2-\psi_1(\lambda_1-\psi_1))(\vartheta_1+\lambda_1-\psi_1),\\
\beta'_{3,4}&=2\psi_1(\lambda_1 +\vartheta_1)(7\psi_1-\lambda_1) - 24\psi_1^3.\\
\end{align*}
\end{corollary}
\begin{proof}
One can express $\vartheta_2$ in terms of the other generators using the relation $\alpha_{2,1}$, and we obtain $\vartheta_2=\lambda_2-\psi_1(\lambda_1-\psi_1)$. We also have the relation $\frac{1}{48}\alpha_{2,2}=\frac{1}{2}\lambda_1^2-\lambda_2$, hence $\lambda_1$, $\psi_1$ and $\vartheta_1$ are enough to generate the rational Chow ring. The relations $\beta'_{3,j}$ are obtained from $\beta_{3,j}$ by substituting $\lambda_2$ with $\frac{1}{2}\lambda_1^2$ and $\vartheta_2$ with $\frac{1}{2}\lambda_1^2-\psi_1(\lambda_1-\psi_1)$.
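For instance, $\beta_{3,2}=2\psi_1\vartheta_2$ becomes
\[ 2\psi_1\Bigl(\tfrac{1}{2}\lambda_1^2-\psi_1(\lambda_1-\psi_1)\Bigr)=\psi_1\lambda_1^2-2\psi_1^2(\lambda_1-\psi_1)=\beta'_{3,2}. \]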
\end{proof}
\subsection{Comparison with Faber's computation}
In his thesis \cite{Fab} Faber computed the rational Chow ring of $\overline{M}_{2,1}$ over $\mathbb{C}$. Using his notation, let $[(c)]_Q$ be the fundamental class of the locus of banana curves with a component of genus $1$ and a marked component of genus $0$, and let $[(d)]_Q$ be the fundamental class of the locus of curves with two elliptic tails connected by a marked $\PP^1$.
Denote by $\delta_0$ the fundamental class of the locus of marked curves with a non-separating node, and by $\delta_1$ the fundamental class of the locus of marked curves with a separating node (the latter is $\vartheta_1$ in our notation).
\begin{theorem}[Faber]
Suppose that the ground field has characteristic zero. Then the rational Chow ring of $\overline{M}_{2,1}$ is given by
\[\ch(\overline{M}_{2,1})_{\mathbb{Q}} \simeq \mathbb{Q}[\psi_1,\delta_0,\delta_1]/(\Gamma_{2,1},\Gamma_{3,1},\Gamma_{3,2},\Gamma_{3,3},\Gamma_{3,4}) \]
where the $\Gamma_{i,j}$ have degree $i$ and they are given by
\begin{align*}
\Gamma_{2,1}&=(\delta_0 + 12 \delta_1) \delta_1, \\
\Gamma_{3,1}&=3\delta_0^3 + 11\delta_0^2\delta_1, \\
\Gamma_{3,2}&=\delta_1 ( \delta_1^2 + 2 \delta_1 \psi_1 + 4 \psi_1^2 ), \\
\Gamma_{3,3}&=\psi_1[(c)]_Q, \\
\Gamma_{3,4}&=\psi_1[(d)]_Q.
\end{align*}
\end{theorem}
Two of the three generators picked by Faber coincide with the ones chosen by us, namely $\psi_1$ and $\delta_1=\vartheta_1$. The last generator $\delta_0$ is well known to be equal to $10\lambda_1-2\vartheta_1$. A straightforward computation with Macaulay2 shows that the ring computed by Faber and the one given in \Cref{cor:rational chow} are isomorphic.
Regarding the relations, we have $\Gamma_{2,1}=10\alpha_{2,3}$: this follows by simply substituting $\delta_0$ with $10\lambda_1-2\vartheta_1$ (and of course $\delta_1$ with $\vartheta_1$).
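Indeed,
\[ (\delta_0+12\delta_1)\delta_1=(10\lambda_1-2\vartheta_1+12\vartheta_1)\vartheta_1=10\,\vartheta_1(\lambda_1+\vartheta_1)=10\,\alpha_{2,3}. \]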
We also have that $\Gamma_{3,4}$ and $\frac{1}{2}\beta'_{3,2}$ coincide. Indeed, we can rewrite $\frac{1}{2}\beta'_{3,2}$ as $\frac{1}{2}\beta_{3,2}=\psi_1\vartheta_2$, and $\vartheta_2$ is by definition the fundamental class of the locus of curves with a marked separating node, if we regard $\overline{M}_{2,1}$ as the coarse space of the universal curve over $\overline{\cM}_2$. On the other hand, we can also regard $\overline{M}_{2,1}$ as the coarse moduli space of stable marked curves of genus two: then $\vartheta_2$ coincides precisely with $[(d)]_Q$, hence $\psi_1\vartheta_2=\psi_1[(d)]_Q$.
Moreover, we have $\Gamma_{3,3}=\frac{1}{2}\beta'_{3,4}$. Indeed $\Gamma_{3,3}=\psi_1[(c)]_Q$ and one can compute the class $[(c)]_Q$ explicitly as the class of the locus of curves with a marked non-separating node, obtaining
\[ [(c)]_Q=(\lambda_1 +\vartheta_1)(7\psi_1-\lambda_1) - 12\psi_1^2, \]
from which the equality above follows.
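Explicitly,
\[ \psi_1[(c)]_Q=\psi_1(\lambda_1 +\vartheta_1)(7\psi_1-\lambda_1) - 12\psi_1^3=\tfrac{1}{2}\beta'_{3,4}. \]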
As for the last two relations found by Faber, they can be expressed in terms of our relations as follows:
\begin{align*}
\Gamma_{3,1}&=20\bigl((\vartheta_1-5\lambda_1)\alpha_{2,3} + 15\beta'_{3,1}\bigr), \\
\Gamma_{3,2}&=(2\psi_1+\vartheta_1-\lambda_1)\alpha_{2,3} - \frac{1}{12}\beta'_{3,1} + \frac{17}{6}\beta'_{3,2} + \frac{5}{6}\beta'_{3,3} + \frac{1}{6}\beta'_{3,4}.
\end{align*}
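For instance, after substituting $\delta_0=10\lambda_1-2\vartheta_1$ and $\delta_1=\vartheta_1$, both sides of the first identity expand to the same polynomial
\[ 3000\lambda_1^3-700\lambda_1^2\vartheta_1-80\lambda_1\vartheta_1^2+20\vartheta_1^3. \]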
\begin{bibdiv}
\begin{biblist}
\bib{alper-fedorchuk-smyth-1}{article}{
AUTHOR = {Alper, Jarod},
AUTHOR = {Fedorchuk, Maksym},
AUTHOR = {Smyth, David Ishii},
TITLE = {Singularities with {$\Bbb{G}_m$}-action and the log minimal
model program for {$\overline{\scr{M}}_g$}},
JOURNAL = {J. Reine Angew. Math.},
FJOURNAL = {Journal f\"{u}r die Reine und Angewandte Mathematik. [Crelle's
Journal]},
VOLUME = {721},
YEAR = {2016},
PAGES = {1--41},
ISSN = {0075-4102},
MRCLASS = {14H10 (14E30)},
}
\bib{alper-fedorchuk-smyth-2}{article}{
AUTHOR = {Alper, Jarod},
AUTHOR = {Fedorchuk, Maksym},
AUTHOR = {Smyth, David Ishii},
TITLE = {Second flip in the {H}assett-{K}eel program: existence of good
moduli spaces},
JOURNAL = {Compos. Math.},
FJOURNAL = {Compositio Mathematica},
VOLUME = {153},
YEAR = {2017},
NUMBER = {8},
PAGES = {1584--1609},
ISSN = {0010-437X},
MRCLASS = {14D23 (14H10)},
}
\bib{alper-fedorchuk-smyth-3}{article}{
AUTHOR = {Alper, Jarod},
AUTHOR = {Fedorchuk, Maksym},
AUTHOR = {Smyth, David Ishii},
TITLE = {Second flip in the {H}assett-{K}eel program: a local
description},
JOURNAL = {Compos. Math.},
FJOURNAL = {Compositio Mathematica},
VOLUME = {153},
YEAR = {2017},
NUMBER = {8},
PAGES = {1547--1583},
ISSN = {0010-437X},
MRCLASS = {14H10 (14D23 14E30 14L30)},
MRNUMBER = {3705268}
}
\bib{alper-fedorchuk-smyth-4}{article}{
AUTHOR = {Alper, Jarod},
AUTHOR = {Fedorchuk, Maksym},
AUTHOR = {Smyth, David Ishii},
TITLE = {Second flip in the {H}assett-{K}eel program: projectivity},
JOURNAL = {Int. Math. Res. Not. IMRN},
FJOURNAL = {International Mathematics Research Notices. IMRN},
YEAR = {2017},
NUMBER = {24},
PAGES = {7375--7419},
ISSN = {1073-7928},
MRCLASS = {14H10 (14D23)},
MRNUMBER = {3802125}
}
\bib{bae-schmitt-1}{unpublished}{
author = {Bae, Younghan},
author = {Schmitt, Johannes},
title = {Chow rings of stacks of prestable curves I},
date = {2021},
eprint = {https://arxiv.org/abs/2107.09192}
}
\bib{bae-schmitt-2}{unpublished}{
author = {Bae, Younghan},
author = {Schmitt, Johannes},
title = {Chow rings of stacks of prestable curves II},
date = {2021},
eprint = {https://arxiv.org/abs/2107.09192}
}
\bib{Dil}{article}{
author={Di Lorenzo, Andrea},
title={The Chow ring of the stack of hyperelliptic curves of odd genus},
journal={Int. Math. Res. Not. IMRN},
date={2021},
number={4}
}
\bib{DLFV}{article}{
author={Di Lorenzo, Andrea},
author={Fulghesu, Damiano},
author={Vistoli, Angelo},
title={The integral Chow ring of the stack of smooth non-hyperelliptic curves of genus three},
journal={Trans. Amer. Math. Soc},
eprint={https://doi.org/10.1090/tran/8354},
date={2021}
}
\bib{DLV}{article}{
author={Di Lorenzo, Andrea},
author={Vistoli, Angelo},
title={Polarized twisted conics and moduli of stable curves of genus two},
date={2021},
eprint={https://arxiv.org/abs/2103.13204}
}
\bib{EFRat}{article}{
author={Edidin, Dan},
author={Fulghesu, Damiano},
title={The integral {C}how ring of the stack of at most 1-nodal rational
curves},
journal={Comm. Algebra},
volume={36},
date={2008},
number={2}
}
\bib{EF}{article}{
author={Edidin, Dan},
author={Fulghesu, Damiano},
title={The integral Chow ring of the stack of hyperelliptic curves of
even genus},
journal={Math. Res. Lett.},
volume={16},
date={2009},
number={1}
}
\bib{EG}{article}{
author={Edidin, Dan},
author={Graham, William},
title={Equivariant intersection theory},
journal={Invent. Math.},
volume={131},
date={1998},
number={3}
}
\bib{Fab}{thesis}{
author={Faber, Carel},
title={Chow rings of moduli spaces of curves},
organization={Universiteit van Amsterdam},
type={PhD thesis},
date={1988}
}
\bib{Fab1}{article}{
author={Faber, Carel},
title={Chow rings of moduli spaces of curves. I. The Chow ring of
$\overline{\scr M}_3$},
journal={Ann. of Math. (2)},
volume={132},
date={1990},
number={2}
}
\bib{fulghesu}{article}{
AUTHOR = {Fulghesu, Damiano},
TITLE = {The {C}how ring of the stack of rational curves with at most 3
nodes},
JOURNAL = {Comm. Algebra},
FJOURNAL = {Communications in Algebra},
VOLUME = {38},
YEAR = {2010},
NUMBER = {9}
}
\bib{Ful}{book}{
author={Fulton, William},
title={Intersection theory},
series={Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A
Series of Modern Surveys in Mathematics [Results in Mathematics and
Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]},
volume={2},
edition={2},
publisher={Springer-Verlag, Berlin},
date={1998}
}
\bib{hassett-hyeon-1}{article}{
AUTHOR = {Hassett, Brendan},
AUTHOR = {Hyeon, Donghoon},
TITLE = {Log canonical models for the moduli space of curves: the first
divisorial contraction},
JOURNAL = {Trans. Amer. Math. Soc.},
FJOURNAL = {Transactions of the American Mathematical Society},
VOLUME = {361},
YEAR = {2009},
NUMBER = {8},
PAGES = {4471--4489},
ISSN = {0002-9947},
MRCLASS = {14H10 (14E30)}
}
\bib{hassett-hyeon-2}{article}{
AUTHOR = {Hassett, Brendan},
AUTHOR = {Hyeon, Donghoon},
TITLE = {Log canonical models for the moduli space of curves: the first
divisorial contraction},
JOURNAL = {Trans. Amer. Math. Soc.},
FJOURNAL = {Transactions of the American Mathematical Society},
VOLUME = {361},
YEAR = {2009},
NUMBER = {8},
PAGES = {4471--4489},
ISSN = {0002-9947},
MRCLASS = {14H10 (14E30)}
}
\bib{knutson}{book}{
address = {Berlin},
author = {Knutson, Donald},
publisher = {Springer-Verlag},
title = {Algebraic spaces},
year = {1971}
}
\bib{Lars}{article}{
author={Larson, Eric},
title={The integral Chow ring of $\overline M_2$},
journal={Algebr. Geom.},
volume={8},
date={2021},
number={3}
}
\bib{laumon-moret-bailly}{book}{
address = {Berlin},
author = {Laumon, G{\'e}rard},
author = {Moret-Bailly, Laurent},
publisher = {Springer-Verlag},
series = {Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics},
title = {Champs alg\'ebriques},
volume = {39},
year = {2000}
}
\bib{Mum}{article}{
author={Mumford, David},
title={Towards an enumerative geometry of the moduli space of curves},
conference={
title={Arithmetic and geometry, Vol. II},
},
book={
series={Progr. Math.},
volume={36},
publisher={Birkh\"{a}user Boston, Boston, MA},
},
date={1983}
}
\bib{Per}{article}{
author={Pernice, Michele},
title={The integral Chow ring of the stack of 1-pointed hyperelliptic curves},
Journal={Int. Math. Res. Not.},
volume={rnab072},
date={2021},
eprint={https://doi.org/10.1093/imrn/rnab072}
}
\bib{Rom}{article}{
author = {Romagny, Matthieu},
journal = {Michigan Math. J.},
number = {1},
title = {Group actions on stacks and applications},
volume = {53},
year = {2005}}
\bib{schubert}{article}{
AUTHOR = {Schubert, David},
TITLE = {A new compactification of the moduli space of curves},
JOURNAL = {Compositio Math.},
FJOURNAL = {Compositio Mathematica},
VOLUME = {78},
YEAR = {1991},
NUMBER = {3},
PAGES = {297--313},
ISSN = {0010-437X},
MRCLASS = {14H10},
}
\bib{smyth-survey}{article}{
AUTHOR = {Smyth, David Ishii},
TITLE = {Towards a classification of modular compactifications of
{$\scr{M}_{g,n}$}},
JOURNAL = {Invent. Math.},
FJOURNAL = {Inventiones Mathematicae},
VOLUME = {192},
YEAR = {2013},
NUMBER = {2},
PAGES = {459--503},
ISSN = {0020-9910},
MRCLASS = {14H10 (14H20 14M27)},
}
\bib{mattia-vistoli-deformation}{incollection}{
author = {Talpo, Mattia},
author = {Vistoli, Angelo},
booktitle = {Handbook of moduli. {V}ol. {III}},
pages = {281--397},
publisher = {Int. Press, Somerville, MA},
series = {Adv. Lect. Math. (ALM)},
title = {Deformation theory from the point of view of fibered categories},
volume = {26},
year = {2013}
}
\bib{VisM2}{article}{
author={Vistoli, Angelo},
title={The {C}how ring of $\scr M_2$. Appendix to ``Equivariant
intersection theory'' by D. Edidin and W. Graham},
journal={Invent. Math.},
volume={131},
date={1998},
number={3}
}
\bib{VisInt}{article}{
author={Vistoli, Angelo},
title={Intersection theory on algebraic stacks and on their moduli
spaces},
journal={Invent. Math.},
volume={97},
date={1989},
number={3},
}
\end{biblist}
\end{bibdiv}
\end{document} |
\begin{document}
\begin{abstract}
In this paper, we compute the number of two-term tilting complexes for an arbitrary symmetric algebra with radical cube zero over an algebraically closed field.
Firstly, we give a complete list of symmetric algebras with radical cube zero having only finitely many isomorphism classes of two-term tilting complexes in terms of their associated graphs.
Secondly, we determine the number of two-term tilting complexes for each case in the list.
\end{abstract}
\maketitle
\section{Introduction}
Tilting theory plays an important role in the study of many areas of mathematics.
A central notion in tilting theory is that of a tilting complex, which is a generalization of a progenerator in Morita theory.
Indeed, its endomorphism algebra is derived equivalent to the original algebra \cite{Rickard89der}.
Hence it is a natural problem to give a classification of tilting complexes for a given algebra.
In this paper, we study a classification of two-term tilting complexes for an arbitrary symmetric algebra with radical cube zero over an algebraically closed field $\mathbf{k}$.
Symmetric algebras with radical cube zero have been studied by Okuyama \cite{Okuyama86}, Benson \cite{Benson08} and Erdmann--Solberg \cite{ES}, and also appear in several areas such as \cite{CL,HK,Seidel08}.
Recently, Green--Schroll \cite{GSa} showed that this class is precisely that of Brauer configuration algebras with radical cube zero.
The study of symmetric algebras $A$ with radical cube zero can be reduced to that of algebras with radical square zero.
For example, as an application of $\tau$-tilting theory (\cite{AIR}), we find in Proposition \ref{reduction} that the functor $-\otimes_A A/\operatorname{soc}\nolimits A$ gives a bijection
\begin{align}
\mathop{2\text{-}\mathsf{tilt}}\nolimits A \longrightarrow \mathop{2\text{-}\mathsf{silt}}\nolimits \, (A/\operatorname{soc}\nolimits A). \notag
\end{align}
Here, we denote by $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ (respectively, $\mathop{2\text{-}\mathsf{silt}}\nolimits A$) the set of isomorphism classes of basic two-term tilting (respectively two-term silting) complexes for $A$. Notice that tilting complexes coincide with silting complexes for a symmetric algebra $A$ (\cite[Example 2.8]{AI}).
Two-term silting theory (or equivalently, $\tau$-tilting theory) for algebras with radical square zero is studied in \cite{Adachi16b,Aoki18,Zhang13}.
The first author (\cite{Adachi16b}) gives a characterization of algebras with radical square zero which are $\tau$-tilting finite (i.e., having only finitely many isomorphism classes of basic two-term silting complexes) by using the notion of single quivers, see Proposition \ref{RSZ}(2).
Using this result, we give a complete list of $\tau$-tilting finite symmetric algebras with radical cube zero as follows.
Now, let $A$ be a basic connected finite dimensional symmetric $\mathbf{k}$-algebra with radical cube zero.
Let $Q$ be the Gabriel quiver of $A$ and $Q^{\circ}$ the quiver obtained from $Q$ by deleting all loops.
We show in Definition-Proposition \ref{graphA} that $Q^{\circ}$ is the double quiver $Q_G$ (see Definition \ref{def:double quiver}) of a finite connected (undirected) graph $G$ with no loops, i.e., $Q^{\circ}=Q_G$.
We call $G$ the graph of $A$.
\begin{theorem} \label{theorem1}
Let $A$ be a basic connected finite dimensional symmetric $\mathbf{k}$-algebra with radical cube zero.
Then the following conditions are equivalent.
\begin{enumerate}[\rm (1)]
\item $A$ is $\tau$-tilting finite (or equivalently, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite).
\item The graph of $A$ is one of graphs in the following list.
\end{enumerate}
$$
\begin{xy}
(6, -4)*{(\mathbb{A}_n)}, (-2,-12)*+{1}="A1", (6,-12)*+{2}="A2", (25, -12)*+{n}="An",
(15, -12)*{\cdots}="dot",
{ "A1" \ar@{-} "A2"},
{"A2" \ar@{-} (11,-12)},
{ (19, -12) \ar@{-} "An" },
(48, -4)*{(\mathbb{D}_n)\ 4 \leq n}, (33,-7)*+{1}="D1", (33, -17)*+{2}="D2", (41,-12)*+{3}="D3",
(60, -12)*+{n}="Dn",
(50, -12)*{\cdots}="dot",
{ "D1" \ar@{-} "D3"},
{ "D2" \ar@{-} "D3"},
{"D3" \ar@{-} (46, -12)},
{ (54, -12) \ar@{-} "Dn"},
(78, -4)*{(\mathbb{E}_6)}, (70, -14)*+{1}="E61", (78, -14)*+{2}="E62", (86, -14)*+{3}="E63", (94, -14)*+{5}="E64", (102, -14)*+{6}="E65", (86, -6)*+{4}="E66",
{"E61" \ar@{-} "E62"},
{"E62" \ar@{-} "E63"},
{"E63" \ar@{-} "E64"},
{"E64" \ar@{-} "E65"},
{"E63" \ar@{-} "E66"},
(6, -20)*{(\mathbb{E}_7)}, (-2, -31)*+{1}="E71", (6, -31)*+{2}="E72", (14, -31)*+{3}="E73", (22, -31)*+{5}="E74", (30, -31)*+{6}="E75", (38, -31)*+{7}="E76", (14, -23)*+{4}="E77",
{"E71" \ar@{-} "E72"},
{"E72" \ar@{-} "E73"},
{"E73" \ar@{-} "E74"},
{"E74" \ar@{-} "E75"},
{"E75" \ar@{-} "E76"},
{"E73" \ar@{-} "E77"},
(58, -20)*{(\mathbb{E}_8)},
(50, -31)*+{1}="E81", (58, -31)*+{2}="E82", (66, -31)*+{3}="E83", (74, -31)*+{5}="E84",
(82, -31)*+{6}="E85", (90, -31)*+{7}="E86", (98, -31)*+{8}="E87", (66, -23)*+{4}="E88",
{"E81" \ar@{-} "E82"},
{"E82" \ar@{-} "E83"},
{"E83" \ar@{-} "E84"},
{"E84" \ar@{-} "E85"},
{"E85" \ar@{-} "E86"},
{"E86" \ar@{-} "E87"},
{"E83" \ar@{-} "E88"},
(125, -4)*{(\widetilde{\mathbb{A}}_{n-1}) \ n\colon \mathrm{odd}},
(125, -12)*+{1}="b1", (115, -18)*+{2}="b2", (115, -28)*+{3}="b3",(125, -32)*+{4}="b4",
(135, -18)*+{n}="bn",
(135, -26)*{\rotatebox{90}{$\cdots$}}="ddot",
{"b1" \ar@{-} "b2"},
{"b2" \ar@{-} "b3"},
{"b3" \ar@{-} "b4"},
{"b1" \ar@{-} "bn"},
{"bn" \ar@{-} (135, -22)},
{ (133, -30) \ar@{-} "b4"},
\end{xy}
$$
$$
\begin{xy}
(26, 0)*{({\rm I}_n) \ 4 \leq n},
(28, -7)*+{1}="c1", (21,-15)*+{2}="c2", (36, -15)*+{3}="c3",
(36, -23)*+{4}="c4", (36, -28)="c5", (36, -35)="c6",
(36.5, -31)*{\rotatebox{90}{$\cdots$}}="ddot",
(36, -39)*+{n}="c7",
{"c1" \ar@{-} "c2"},
{"c1" \ar@{-} "c3"},
{"c2" \ar@{-} "c3"},
{"c3" \ar@{-} "c4"},
{"c4" \ar@{-} "c5"},
{"c6" \ar@{-} "c7"},
(53, 0)*{({\rm II}_n) \ 5 \leq n \leq8},
(52, -7)*+{1}="d1", (45,-15)*+{2}="d2", (59, -15)*+{3}="d3",
(45, -23)*+{4}="d4", (59, -23)*+{5}="d5",
(59, -28)="d6", (59, -35)="d7",
(59.5, -31)*{\rotatebox{90}{$\cdots$}}="edot",
(59, -39)*+{n}="d8",
{"d1" \ar@{-} "d2"},
{"d1" \ar@{-} "d3"},
{"d2" \ar@{-} "d3"},
{"d2" \ar@{-} "d4"},
{"d3" \ar@{-} "d5"},
{"d5" \ar@{-} "d6"},
{"d7" \ar@{-} "d8"},
(72, 0)*{({\rm III})},
(79,-7)*+{1}="e1", (79, -15)*+{2}="e2", (72,-22)*+{3}="e3",
(72, -30)*+{4}="e4", (86,-22)*+{5}="e5", (86,-30)*+{6}="e6",
{"e1" \ar@{-} "e2"},
{"e2" \ar@{-} "e3"},
{"e2" \ar@{-} "e5"},
{"e3" \ar@{-} "e4"},
{"e4" \ar@{-} "e6"},
{"e5" \ar@{-} "e6"},
(97, 0)*{({\rm IV})},
(104, -7)*+{1}="f1",
(104, -15)*+{2}="f2", (97,-23)*+{3}="f3", (111, -23)*+{4}="f4",
(97, -31)*+{5}="f5", (111, -31)*+{6}="f6",
{"f1" \ar@{-} "f2"},
{"f2" \ar@{-} "f3"},
{"f2" \ar@{-} "f4"},
{"f3" \ar@{-} "f4"},
{"f3" \ar@{-} "f5"},
{"f4" \ar@{-} "f6"},
(121, 0)*{({\rm V})},
(128, -7)*+{1}="g1", (121,-15)*+{2}="g2", (135, -15)*+{3}="g3",
(121, -23)*+{4}="g4", (121, -31)*+{5}="g5", (135, -23)*+{6}="g6",
(135, -31)*+{7}="g7",
{"g1" \ar@{-} "g2"},
{"g1" \ar@{-} "g3"},
{"g2" \ar@{-} "g3"},
{"g2" \ar@{-} "g4"},
{"g3" \ar@{-} "g6"},
{"g4" \ar@{-} "g5"},
{"g6" \ar@{-} "g7"},
\end{xy}
$$
\end{theorem}
The second author (\cite{Aoki18}) classifies two-term silting complexes for an arbitrary algebra with radical square zero by using tilting modules over a path algebra (see Proposition \ref{RSZ}(1)).
Since the cardinality of the set of isomorphism classes of tilting modules over a path algebra is well known,
this provides us with an explicit way to compute their number. We use this result to determine the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ for each graph $G$ in the list of Theorem \ref{theorem1}.
\begin{theorem} \label{theorem2}
In Theorem \ref{theorem1}, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ depends only on the graph $G$ of $A$ and is given as follows.
{\fontsize{9pt}{0.4cm}\selectfont
\begin{table}[h]
{\renewcommand\arraystretch{1.3}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
$G$ & $\mathbb{A}_n$& $\mathbb{D}_{n}$&$\mathbb{E}_6 $&$ \mathbb{E}_7$&$ \mathbb{E}_8$& $\widetilde{\mathbb{A}}_{n-1}$ & $ {\rm I}_n$ &
${\rm II}_5$& ${\rm II}_6$& ${\rm II}_7$ & ${\rm II}_8$ & ${\rm III}$ & ${\rm IV}$ & ${\rm V}$ \\ \hline
$\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A$ &$\binom{2n}{n}$ & $a_n$ & $1700$ & $8872$ & $54066$ & $2^{2n-1}$ & $b_n$ & $632$ & $2936$ & $11306$ & $75240$ & $3108$& $4056$& $17328$ \\ \hline
\end{tabular}
}
\end{table}}
\noindent Here, for any $n\ge 4$, let $a_n:= 6\cdot 4^{n-2}-2\binom{2(n-2)}{n-2}$ and $b_n:=6\cdot 4^{n-2} + 2\binom{2n}{n} -4\binom{2(n-1)}{n-1}-4\binom{2(n-2)}{n-2}$.
\end{theorem}
We remark that the numbers for Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ in the list are precisely the biCatalan numbers introduced by \cite{BR} in the context of Coxeter-Catalan combinatorics.
Our results for Dynkin graphs are independently obtained by \cite{DIRRT} in the study of biCambrian lattices for preprojective algebras.
We also remark that we can generalize our results for Brauer configuration algebras in terms of multiplicities.
A Brauer configuration algebra is defined by a configuration and a multiplicity function.
The configuration of a Brauer configuration algebra with radical cube zero corresponds to a graph \cite{GSa}.
By \cite{EJR}, one can show that the number of two-term tilting complexes over Brauer configuration algebras is independent of the multiplicity.
Therefore, we can also apply our results to any Brauer configuration algebra obtained by replacing the multiplicity of a Brauer configuration associated with a graph in the list of Theorem \ref{theorem1}.
This paper is organized as follows.
In Section \ref{sec:preliminaries}, we recall the definition of algebras with radical square zero and their two-term silting theory which are needed in this paper.
In Section \ref{sec:RCZ}, we study symmetric algebras with radical cube zero together with the corresponding algebras with radical square zero.
Our main results are Theorem \ref{reduced ver} and Corollary \ref{number by graph} which provide us an explicit way to compute the number of two-term tilting complexes for a given symmetric algebra with radical cube zero.
In Section \ref{main theorem}, we prove Theorems \ref{theorem1} and \ref{theorem2} by using results shown in the previous section.
\section{Preliminaries} \label{sec:preliminaries}
Throughout this paper, $\mathbf{k}$ is an algebraically closed field.
We recall that any basic connected finite dimensional $\mathbf{k}$-algebra $A$ is isomorphic to a bound quiver algebra $A\cong \mathbf{k}Q/I$, where $Q$ is a finite connected quiver and $I$ is an admissible ideal in the path algebra $\mathbf{k}Q$ of the quiver $Q$. We call $Q_A:=Q$ the \emph{Gabriel quiver} of $A$.
\subsection{Silting complexes}
Let $A$ be a basic (not necessary connected) finite dimensional $\mathbf{k}$-algebra. We denote by $\moduleCategory A$ the category of finitely generated right $A$-modules and by $\proj A$ the category of finitely generated projective right $A$-modules.
Let $\mathsf{K}^{\rm b}(\proj A)$ denote the homotopy category of bounded complexes of objects of $\proj A$.
For a complex $X\in \mathsf{K}^{\rm b}(\proj A)$, we say that $X$ is \emph{basic} if it is a direct sum of pairwise non-isomorphic indecomposable objects.
\begin{definition}
A complex $T$ in $\mathsf{K}^{\rm b}(\proj A)$ is said to be \emph{presilting} if it satisfies
\begin{align}
\operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\proj A)}(T,T[i])=0 \notag
\end{align}
for all positive integers $i$.
A presilting complex $T$ is called a \emph{silting complex} if it satisfies $\thick T=\mathsf{K}^{\rm b}(\proj A)$, where $\thick T$ is the smallest triangulated full subcategory which contains $T$ and is closed under taking direct summands.
In addition, a silting complex $T$ is called a \emph{tilting complex} if $\operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\proj A)}(T,T[i])=0$ for all non-zero integers $i$.
\end{definition}
We restrict our interest to the set of two-term silting complexes. Here, a complex $T=(T^{i},d^{i})$ in $\mathsf{K}^{\rm b}(\proj A)$ is said to be \emph{two-term} if it is isomorphic to a complex concentrated only in degree $0$ and $-1$, i.e.,
\begin{align}
(T^{-1}\overset{d^{-1}}{\rightarrow} T^0) = \cdots \to 0 \to T^{-1} \overset{d^{-1}}{\longrightarrow} T^0 \to 0 \to \cdots \notag
\end{align}
We denote by $\mathop{2\text{-}\mathsf{silt}}\nolimits A$ (respectively, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$) the set of isomorphism classes of basic two-term silting (respectively, two-term tilting) complexes for $A$.
Now, we call $M\in \moduleCategory A$ a \emph{tilting module} if all the following conditions are satisfied: (i) the projective dimension of $M$ is at most $1$, (ii) $\operatorname{Ext}\nolimits_A^1(M,M)=0$, and (iii) $|M|=|A|$, where $|M|$ denotes the number of pairwise non-isomorphic indecomposable direct summands of $M$. We denote by $\mathop{\mathsf{tilt}}\nolimits A$ the set of isomorphism classes of basic tilting $A$-modules.
By definition, we can naturally regard a tilting $A$-module $M$ as a tilting complex. More precisely, by taking a minimal projective presentation $P_1 \overset{f}{\to} P_0\to M \to 0$ of $M$ in $\moduleCategory A$, the two-term complex $(P_{1} \overset{f}{\to} P_0)$ provides a tilting complex in $\mathsf{K}^{\rm b}(\proj A)$.
The number of tilting modules over a path algebra of a Dynkin quiver is well known.
\begin{proposition}{\rm (see \cite{ONFR} for example)} \label{tilting number}
Let $Q$ be a quiver whose underlying graph $\Delta$ is one of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$. Then the number $\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ is given by the following table and does not depend on the orientation of $Q$.
\begin{table}[h]
\begin{center}
{\renewcommand\arraystretch{1.3}
\begin{tabular}{|c||c|c|c|c|c|} \hline
$\Delta$ & $\mathbb{A}_n \, (n\geq 1)$ &$\ \mathbb{D}_n \,(n\geq4)$& $\mathbb{E}_6$ &
$\mathbb{E}_7$ & $\mathbb{E}_8$ \\ \hline
$\# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ & $\frac{1}{n+1}\binom{2n}{n}$ & $\frac{3n-4}{2n}\binom{2(n-1)}{n-1}$ & $418$ & $2431$ & $17342$\\ \hline
\end{tabular}}
\end{center}
\end{table}
\end{proposition}
More generally, if $Q$ is a disjoint union of Dynkin quivers $Q_{\lambda}$ ($\lambda \in \Lambda$), then we have
\begin{equation} \label{disjoint Dynkin}
\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q = \prod_{\lambda\in \Lambda} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\lambda}
\end{equation}
and this number is completely determined by a collection of the underlying graphs $\Delta_{\lambda}$ of $Q_{\lambda}$ for all $\lambda\in \Lambda$ as in Proposition \ref{tilting number}.
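For the reader's convenience, we also record a small computational sketch in Python encoding the table above and the product formula (\ref{disjoint Dynkin}); the function names \texttt{dynkin\_count} and \texttt{disjoint\_count} are ours and the sketch is only meant as an illustration of these formulas.
\begin{verbatim}
from math import comb

# Number of basic tilting modules #tilt kQ for a connected Dynkin graph,
# following the table above; kind is 'A', 'D' or 'E'.
def dynkin_count(kind, n):
    if kind == 'A':
        return comb(2 * n, n) // (n + 1)                       # Catalan number
    if kind == 'D':
        return (3 * n - 4) * comb(2 * (n - 1), n - 1) // (2 * n)
    return {6: 418, 7: 2431, 8: 17342}[n]                      # kind == 'E'

# Product formula for a disjoint union of Dynkin graphs.
def disjoint_count(components):
    result = 1
    for kind, n in components:
        result *= dynkin_count(kind, n)
    return result

# Example: a disjoint union of A_2 and D_4 gives 2 * 20 = 40 tilting modules.
assert disjoint_count([('A', 2), ('D', 4)]) == 40
\end{verbatim}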
\subsection{Algebras with radical square zero}
Let $A$ be a basic connected finite dimensional $\mathbf{k}$-algebra.
We say that $A$ is an algebra with \emph{radical square zero} (respectively, \emph{radical cube zero}) if $J^2=0$ but $J\neq 0$ (respectively, $J^3=0$ but $J^2\neq 0$), where $J$ is the Jacobson radical of $A$.
For simplicity, we abbreviate an algebra with radical square zero (respectively, radical cube zero) by a RSZ (respectively, RCZ) algebra.
We first recall that any basic connected finite dimensional RSZ $\mathbf{k}$-algebra $A$ is isomorphic to a bound quiver algebra $\mathbf{k}Q/I$, where $Q:=Q_A$ is the Gabriel quiver of $A$ and $I$ is the two-sided ideal in $\mathbf{k}Q$ generated by all paths of length $2$.
Next, let $Q=(Q_{0},Q_{1})$ be a finite connected quiver, where $Q_{0}$ is the vertex set and $Q_{1}$ is the arrow set.
We denote by $Q^{\rm op}$ the opposite quiver of $Q$.
For a map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, we define a quiver $Q_{\epsilon}$, called a {\it single quiver} of $Q$, as follows:
\begin{itemize}
\item The set of vertices is $Q_0$.
\item We draw an arrow $a \colon i\to j$ in $Q_{\epsilon}$ whenever there exists an arrow $a\colon i\to j$ with $\epsilon(i)=+1$ and $\epsilon(j)=-1$.
\end{itemize}
Note that $Q_{\epsilon}$ is bipartite (i.e., each vertex is either a sink or a source), but not connected in general.
Since it has no loops by definition, we have $Q_{\epsilon}=(Q^{\circ})_{\epsilon}$, where $Q^{\circ}$ denotes the quiver obtained from $Q$ by deleting all loops.
We give a connection between two-term silting complexes for a RSZ algebra and tilting modules over path algebras.
\begin{proposition}\label{RSZ}
Let $A$ be a basic connected finite dimensional RSZ $\mathbf{k}$-algebra and $Q_A$ the Gabriel quiver of $A$.
Let $Q:=(Q_A)^{\circ}$ be the quiver obtained from $Q_A$ by deleting all loops.
Then the following statements hold.
\begin{enumerate}[\rm (1)]
\item \textnormal{(\cite[Theorem 1.1]{Aoki18})} There is a bijection
\begin{align}
\mathop{2\text{-}\mathsf{silt}}\nolimits A \longrightarrow \bigsqcup_{\epsilon \colon Q_{0} \rightarrow\{\pm 1\}}
\mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag
\end{align}
\item \textnormal{(\cite{Adachi16b,Aoki18})} The following conditions are equivalent.
\begin{enumerate}[\rm (a)]
\item $\mathop{2\text{-}\mathsf{silt}}\nolimits A$ is finite.
\item For every map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, the underlying graph of the single quiver $Q_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$.
\end{enumerate}
\item If one of equivalent conditions of \textnormal{(2)} holds, we have
\begin{align}
\#\mathop{2\text{-}\mathsf{silt}}\nolimits A = \sum_{\epsilon \colon Q_0 \to \{\pm1\}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}.\notag
\end{align}
\end{enumerate}
\end{proposition}
We remark that we can replace the quiver $Q$ with the Gabriel quiver $Q_A$ of $A$ in Proposition \ref{RSZ} since we have $(Q_A)_{\epsilon} = Q_{\epsilon}$ for any map $\epsilon\colon Q_0 \to \{\pm1\}$.
\section{Two-term tilting complexes over symmetric RCZ algebras} \label{sec:RCZ}
Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra. Then $\overline{A}:=A/\operatorname{soc}\nolimits A$ is a RSZ algebra by definition. Moreover, the Gabriel quiver of $\overline{A}$ coincides with the Gabriel quiver of $A$ since $\operatorname{soc}\nolimits A$ is contained in the square of the Jacobson radical of $A$.
The following is basic. Here, we remember that silting complexes coincide with tilting complexes for a symmetric algebra $A$ (\cite[Example 2.8]{AI}). In particular, $\mathop{2\text{-}\mathsf{tilt}}\nolimits A=\mathop{2\text{-}\mathsf{silt}}\nolimits A$.
\begin{proposition} \cite[Theorem 3.3]{Adachi16a} \label{reduction}
Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $\overline{A}:=A/\operatorname{soc}\nolimits A$. Then the functor $-\otimes_A \overline{A}$ gives a bijection
\begin{align}
\mathop{2\text{-}\mathsf{tilt}}\nolimits A \longrightarrow \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}. \notag
\end{align}
\end{proposition}
Next, the following observations provide a combinatorial framework for studying two-term tilting complexes over symmetric RCZ algebras.
\begin{definition} \label{def:double quiver}
For a finite connected graph $G$ with no loops, we define a quiver $Q_G$ as follows.
\begin{itemize}
\item The set of vertices of $Q_G$ is the set of vertices of $G$.
\item We draw two arrows $a^{\ast} \colon i\to j$ and $a^{\ast\ast} \colon j\to i$ whenever there exists an edge $a$ of $G$ connecting $i$ and $j$.
\end{itemize}
We call $Q_G$ the \emph{double quiver} of $G$. Notice that $Q_G$ has no loops since $G$ has none.
\end{definition}
\begin{defprop} \label{graphA}
Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra.
Let $Q_A$ be the Gabriel quiver of $A$ and $Q:=(Q_A)^{\circ}$ the quiver obtained from $Q_A$ by deleting all loops. Then $Q$ is the double quiver $Q_G$ of a finite connected (undirected) graph $G$ with no loops. We call $G$ the graph of $A$.
\end{defprop}
\begin{proof}
For the Gabriel quiver $Q_A$ of $A$, let $\pi \colon \mathbf{k}Q_A \to A$ be a canonical surjection.
For any vertex $i$ of $Q_A$, let $P_i$ be the indecomposable projective $A$-module corresponding to $i$.
By definition, $P_i$ has Loewy length $3$ and its simple socle is isomorphic to the simple top $S_{i}:=P_{i}/P_{i}J$.
We recall from \cite[Proposition 5.6]{GSa} that our algebra $A$ is special multiserial (we refer to \cite[Definition 2.2]{GSa} for the definition of special multiserial algebras). Then each arrow $a\colon i\to j$ of $Q_A$ determines the unique arrow $\sigma(a)$ such that $\pi(a\sigma(a))\neq 0$, and the correspondence $\sigma$ gives a permutation of the set of arrows of $Q_A$, see \cite[Definition 4.8]{GSa}. In addition, the element $\pi(a\sigma(a)\sigma^2(a)\cdots\sigma^{m-1}(a))$ lies in the socle of $P_i$, where $m$ is the length of the $\sigma$-orbit containing the arrow $a$.
Since $P_i$ has Loewy length $3$, $m=2$ must hold. In particular, $\sigma(a)$ is the unique arrow $\sigma(a)\colon j\to i$ such that $\pi(\sigma(a)a)\neq 0$.
Now, we can restrict the permutation $\sigma$ to the subset consisting of all arrows which are not loops. Then we define a finite undirected graph $G$ as follows: The set of vertices of $G$ bijectively corresponds to the set of vertices of $Q_A$, and the set of edges of $G$ is naturally given by the set of unordered pairs $\{a,\sigma(a)\}$ for all arrows $a$ of $Q_A$ which are not loops. Then $G$ is the desired one as $(Q_A)^{\circ}=Q_G$ from our construction.
\end{proof}
As we mentioned before, the algebras $A$ and $\overline{A}:=A/\operatorname{soc}\nolimits A$ have the same Gabriel quiver $Q_A = Q_{\overline{A}}$. Therefore, $(Q_A)^{\circ}= (Q_{\overline{A}})^{\circ}$ is the double quiver $Q_G$ of a common finite connected graph $G$ with no loops by Definition-Proposition \ref{graphA}.
\begin{theorem} \label{reduced ver}
Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $\overline{A}:=A/\operatorname{soc}\nolimits A$.
Let $Q_A$ be the Gabriel quiver of $A$ and $Q:=(Q_A)^{\circ}$ the quiver obtained from $Q_A$ by deleting all loops.
\begin{enumerate}[\rm (1)]
\item The following conditions are equivalent.
\begin{enumerate}[\rm (a)]
\item $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite.
\item $\mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}$ is finite.
\item For every map $\epsilon \colon Q_{0} \rightarrow \{\pm 1\}$, the underlying graph of the single quiver $Q_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$.
\end{enumerate}
\item Fix any vertex $v\in Q_{0}$. If one of the equivalent conditions in \textnormal{(1)} is satisfied, then the following equalities hold.
\begin{align}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A} = 2 \cdot \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}{Q}_{\epsilon}. \notag
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
(1) It follows from Propositions \ref{RSZ}(2) and \ref{reduction}.
(2) By Proposition \ref{reduction}, we have $\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = \# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}$.
We show the second equality.
Let $v$ be a vertex in $Q$.
By Proposition \ref{RSZ}(1), we have
\begin{align}
\# \mathop{2\text{-}\mathsf{silt}}\nolimits \overline{A}
= \sum_{\epsilon \colon Q_{0} \rightarrow\{\pm 1\}} \#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}
= \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}
+ \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=-1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag
\end{align}
For a map $\epsilon\colon Q_{0}\to \{ \pm 1\}$, we define a map $-\epsilon\colon Q_{0}\to \{ \pm 1\}$ by $(-\epsilon)(i):=-\epsilon(i)$ for all $i\in Q_{0}$.
Since $Q$ is the double quiver of the graph $G$ of $A$, we have $Q_{-\epsilon}=(Q_{\epsilon})^{\mathrm{op}}$.
This implies that $Q_{\epsilon}$ and $Q_{-\epsilon}$ have the same underlying graph $\Delta$.
By our assumption, $\Delta$ is a disjoint union of Dynkin graphs.
Thus we obtain $\# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\epsilon}= \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{-\epsilon}$ because the number of non-isomorphic tilting modules over a path algebra of Dynkin type does not depend on orientation, see Proposition \ref{tilting number}.
Hence we have
\begin{align}
\sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q_{\epsilon}
=\sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=+1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}
= \sum_{\substack{\epsilon\colon Q_{0} \rightarrow \{\pm 1\} \\ \epsilon(v)=-1}} \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}(Q_{\epsilon})^{\rm op}. \notag
\end{align}
This finishes the proof.
\end{proof}
For our convenience, we restate Theorem \ref{reduced ver} in terms of undirected graphs.
Let $G=(G_0,G_1)$ be a finite connected graph with no loops, where $G_0$ is the set of vertices and $G_1$ is the set of edges. For each map $\epsilon \colon G_0\to \{\pm1\}$, let $G_{\epsilon}$ be the graph obtained from $G$ by removing all edges between vertices $i,j$ with $\epsilon(i)=\epsilon(j)$. From our construction, $G_{\epsilon}$ is precisely the underlying graph of the quiver $Q_{\epsilon}$, where $Q:=Q_G$ is the double quiver of $G$ with vertex set $Q_0=G_0$. In particular, $Q_{\epsilon}$ is a disjoint union of Dynkin quivers if and only if $G_{\epsilon}$ is a disjoint union of Dynkin graphs.
Now, we recall that, for a quiver $Q$ whose underlying graph $\Delta$ is a disjoint union of Dynkin graphs, the number $\#\mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$ does not depend on the orientation of $Q$ and is given by (\ref{disjoint Dynkin}). Then, we set $|\Delta| := \# \mathop{\mathsf{tilt}}\nolimits \mathbf{k}Q$.
\begin{corollary} \label{number by graph}
Let $A$ be a basic connected finite dimensional symmetric RCZ $\mathbf{k}$-algebra and $G$ the graph of $A$.
\begin{enumerate}[\rm (1)]
\item The following conditions are equivalent.
\begin{enumerate}[\rm (a)]
\item $\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is finite.
\item For every map $\epsilon \colon G_0 \to \{\pm1\}$, the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$.
\end{enumerate}
\item Assume that, for any $\epsilon \colon G_0\to \{\pm1\}$, the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs $\Delta_{\epsilon,\lambda}$ ($\lambda \in \Lambda_{\epsilon}$). Then for a fixed vertex $v$ of $G$, the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is equal to
\end{enumerate}
\begin{align}\label{number G}
2\cdot \sum_{\substack{\epsilon\colon G_0 \to \{\pm1\} \\ \epsilon(v)=+1}} |G_{\epsilon}| \ = \
2\cdot \sum_{\substack{\epsilon\colon G_0 \to \{\pm1\} \\ \epsilon(v)=+1}} \prod_{\lambda\in \Lambda_{\epsilon}} |\Delta_{\epsilon,\lambda}|.
\end{align}
\end{corollary}
\begin{proof}
Let $Q:=(Q_A)^{\circ}$, where $Q_A$ is the Gabriel quiver of $A$. Then $Q=Q_G$ holds by Definition-Proposition \ref{graphA}.
Then the assertion follows from Theorem \ref{reduced ver} since $G_{\epsilon}$ is the underlying graph of $Q_{\epsilon}$ for any map $\epsilon \colon G_0\to \{\pm1\}$.
\end{proof}
\begin{definition} \label{number GG}
Keeping the notations in Corollary \ref{number by graph}(2), we write $||G||$ for the number given by the left hand side of (\ref{number G}).
\end{definition}
\begin{figure}
\caption{A half of the single quivers of the double quiver of $\mathbb{E}_6$.}
\label{Fig.E6}
\end{figure}
\begin{example} \label{example-E6}
\begin{enumerate}[\rm (1)]
\item Let $Q$ be a quiver whose underlying graph $\Delta$ is one of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$.
Let $A$ be the trivial extension of the path algebra $\mathbf{k}Q$ of $Q$ by a minimal co-generator.
It is easy to see that $A$ is a symmetric RCZ algebra if $Q$ is bipartite.
In this case, the Gabriel quiver of $A$ is precisely the double quiver $Q_{\Delta}$ of $\Delta$, in other words, the graph of $A$ is $\Delta$.
On the other hand, $Q^{\rm op}$ also determines the symmetric RCZ algebra, which is naturally isomorphic to $A$.
\item Let $\Delta=\mathbb{E}_{6}$ and let $A$ be the symmetric RCZ algebra obtained as in (1).
In Figure \ref{Fig.E6}, we describe single quivers of $Q:=Q_{\mathbb{E}_6}$ associated to maps $\epsilon$ with $\epsilon(6)=+1$.
Here, the notation $i^{\sigma}$ denotes the vertex $i$ with $\epsilon(i)=\sigma \in \{\pm1\}$.
Using Corollary \ref{number by graph}, we find that there are $1700$ isomorphism classes of basic two-term tilting complexes over $A$, as in the list of Theorem \ref{theorem2}.
\end{enumerate}
\end{example}
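The evaluation of the right hand side of (\ref{number G}) is completely mechanical. The following sketch in Python (building on the function \texttt{dynkin\_count} from the sketch in Section \ref{sec:preliminaries}; all names are ours) enumerates the maps $\epsilon$ with $\epsilon(v)=+1$ for a fixed vertex $v$, decomposes each $G_{\epsilon}$ into connected components and multiplies the corresponding counts; it returns \texttt{None} when some $G_{\epsilon}$ is not a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$, that is, when $A$ is not $\tau$-tilting finite. Applied to the graph $\mathbb{E}_6$ of Example \ref{example-E6}, it reproduces the value $1700$.
\begin{verbatim}
from itertools import product

# Recognize a connected loop-free graph as ('A', n), ('D', n) or ('E', n);
# return None if it is not a Dynkin graph of type A, D or E.
def dynkin_type(vertices, edges):
    n = len(vertices)
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w); adj[w].append(u)
    if len(edges) != n - 1:                      # a Dynkin graph is a tree
        return None
    branch = [v for v in vertices if len(adj[v]) >= 3]
    if not branch:
        return ('A', n)
    if len(branch) > 1 or len(adj[branch[0]]) != 3:
        return None
    def arm(start):                              # length of one arm at the branch vertex
        prev, cur, length = branch[0], start, 1
        while True:
            nxt = [w for w in adj[cur] if w != prev]
            if not nxt:
                return length
            prev, cur, length = cur, nxt[0], length + 1
    arms = sorted(arm(w) for w in adj[branch[0]])
    if arms[0] == arms[1] == 1:
        return ('D', n)
    if arms[0] == 1 and arms[1] == 2 and arms[2] in (2, 3, 4):
        return ('E', n)
    return None

# ||G|| = #2-tilt A for a symmetric RCZ algebra A with graph G, evaluated by
# the formula of the corollary above; returns None if A is tau-tilting infinite.
def norm_G(vertices, edges):
    vertices = list(vertices)
    total = 0
    for signs in product([1, -1], repeat=len(vertices) - 1):
        eps = dict(zip(vertices, (1,) + signs))            # fix eps = +1 at the first vertex
        kept = [e for e in edges if eps[e[0]] != eps[e[1]]]   # edge set of G_eps
        seen, count = set(), 1
        for v in vertices:                                 # split G_eps into components
            if v in seen:
                continue
            comp, stack = {v}, [v]
            while stack:
                x = stack.pop()
                for a, b in kept:
                    w = b if a == x else a if b == x else None
                    if w is not None and w not in comp:
                        comp.add(w); stack.append(w)
            seen |= comp
            t = dynkin_type(comp, [e for e in kept if e[0] in comp])
            if t is None:
                return None
            count *= dynkin_count(*t)
        total += count
    return 2 * total

# The graph E_6 of the example above: the path 1-2-3-5-6 with 4 attached to 3.
assert norm_G([1, 2, 3, 4, 5, 6],
              [(1, 2), (2, 3), (3, 5), (5, 6), (3, 4)]) == 1700
\end{verbatim}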
\section{Proof of Main Theorem} \label{main theorem}
In this section, we prove Theorems \ref{theorem1} and \ref{theorem2}.
Throughout this section, $G$ is a finite connected graph with no loops.
\subsection{Proof of Theorem \ref{theorem1}}
By Corollary \ref{number by graph}(1), the proof is completed with the following proposition.
\begin{proposition}\label{tothm1}
Let $G$ be a connected finite graph with no loops. Then the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ for every map $\epsilon \colon G_{0} \to\{\pm 1\}$ if and only if $G$ is one of the list in Theorem \ref{theorem1}.
\end{proposition}
In the following, we give a proof of Proposition \ref{tothm1} by excluding the graphs $G$ for which some $G_{\epsilon}$ contains an extended Dynkin graph as a subgraph.
We start with removing extended Dynkin graphs of type $\widetilde{\mathbb{A}}$.
A graph is called an \emph{$n$-cycle} if it is a cycle with exactly $n$ vertices.
In particular, it is called an \emph{odd-cycle} if $n$ is odd, and an \emph{even-cycle} if $n$ is even.
\begin{lemma}\label{remove-ext-A}
The following statements are equivalent:
\begin{enumerate}[\rm (1)]
\item There exists a map $\epsilon \colon G_{0} \to\{\pm 1\}$ such that $G_{\epsilon}$ contains an extended Dynkin graph of type $\widetilde{\mathbb{A}}$ as a subgraph.
\item $G$ contains an even-cycle as a subgraph.
\end{enumerate}
\end{lemma}
\begin{proof}
(2)$\Rightarrow$(1): Let $G'$ be a subgraph of $G$ which is an even-cycle.
Since an even-cycle is a bipartite graph, there exists a map $\epsilon \colon G_{0} \to \{\pm 1\}$ such that the underlying graph of $G_{\epsilon}$ contains $G'$ as a subgraph.
Hence the assertion follows.
(1)$\Rightarrow$(2): Assume that for some map $\epsilon\colon G_{0}\to \{ \pm 1\}$, the graph $G_{\epsilon}$ contains an extended Dynkin graph $G'$ of type $\widetilde{\mathbb{A}}$. Since $G_{\epsilon}$ is bipartite, so is $G'$.
Hence $G'$ is an even-cycle and a subgraph of $G$.
This finishes the proof.
\end{proof}
By Lemma \ref{remove-ext-A}, we may assume that $G$ contains no even-cycle as a subgraph.
In particular, $G$ has no multiple edges.
We give a connection between our graphs $G_{\epsilon}$ and subtrees of $G$.
Recall that a \emph{subtree} of $G$ is a connected subgraph of $G$ without cycles.
\begin{proposition}\label{subtree-bipartite}
Assume that $G$ contains no even-cycle as a subgraph.
Let $G'$ be a connected graph.
Then the following statements are equivalent.
\begin{enumerate}[\rm (1)]
\item There exists a map $\epsilon \colon G_{0} \to\{ \pm 1\}$ such that $G_{\epsilon}$ contains $G'$ as a subgraph.
\item $G'$ is a subtree of $G$.
\end{enumerate}
In particular, there exists a natural two-to-one correspondence between the set of connected graphs of the form $G_{\epsilon}$ and the set of subtrees of $G$.
\end{proposition}
\begin{proof}
(2)$\Rightarrow$(1) is clear.
We show (1)$\Rightarrow$(2).
Since $G_{\epsilon}$ is bipartite, every cycle of $G_{\epsilon}$ is an even-cycle, and hence $G_{\epsilon}$ contains no cycle by our assumption on $G$.
Therefore the connected subgraph $G'$ of $G_{\epsilon}$ is a tree, and so $G'$ is a subtree of $G$.
\end{proof}
For a tree, we have the following result.
\begin{corollary}\label{Dynkin-case}
Assume $G$ is a tree.
Then the graph $G_{\epsilon}$ is a disjoint union of Dynkin graphs of type $\mathbb{A}$, $\mathbb{D}$ and $\mathbb{E}$ for each map $\epsilon \colon G_{0} \to \{\pm 1\}$ if and only if $G$ is a Dynkin graph of type $\mathbb{A}$, $\mathbb{D}$ or $\mathbb{E}$.
\end{corollary}
\begin{proof}
It is well known that $G$ is a Dynkin graph if and only if all subtrees of $G$ are Dynkin graphs.
The assertion follows from Proposition \ref{subtree-bipartite}.
\end{proof}
We remove extended Dynkin graphs of type $\widetilde{\mathbb{D}}$.
Assume that $G$ contains at least two odd-cycles.
Then there exists a subtree $G'$ of $G$ such that $G'$ is an extended Dynkin graph of type $\widetilde{\mathbb{D}}$.
Moreover, by Proposition \ref{subtree-bipartite}, there exists a map $\epsilon \colon G_{0} \to \{\pm 1\}$ such that $G_\epsilon$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph.
Hence we may assume that $G$ contains at most one odd-cycle.
By Corollary \ref{Dynkin-case}, it is enough to consider the case where $G$ contains exactly one odd-cycle.
Namely, $G$ consists of an odd-cycle such that each vertex $v$ in the odd-cycle is attached to a tree $T_{v}$.
\begin{align}
\xymatrix@C=4mm@R=3mm{
\bullet\ar@{-}[r]&\bullet\ar@{-}[r]&v_{1}\ar@{-}[dr]\ar@{-}[dl]\ar@{-}[r]&\bullet&\bullet\ar@{-}[d]&\\
&v_{2}\ar@{-}[rr]&&v_{3}\ar@{-}[r]&\bullet\ar@{-}[r]&\bullet\\
}\notag
\end{align}
\begin{lemma}\label{remove-ext-D}
Fix an integer $k\ge 1$ and $n:=2k+1$.
Assume that $G$ consists of an $n$-cycle such that each vertex $v$ in the $n$-cycle is attached to a tree $T_{v}$.
Then the following statements are equivalent:
\begin{enumerate}[\rm (1)]
\item There exists a map $\epsilon \colon G_{0} \to\{ \pm 1\}$ such that $G_{\epsilon}$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph.
\item $G$ contains an extended Dynkin graph of type $\widetilde{\mathbb{D}}$ as a subgraph.
\item $G$ satisfies one of the following conditions.
\begin{enumerate}[\rm (a)]
\item There is a vertex $v$ in the $n$-cycle whose degree is at least four.
\item There is a vertex $v$ in the $n$-cycle whose degree is exactly three and $T_{v}$ is not a Dynkin graph of type $\mathbb{A}$.
\item $k\ge 2$ and there are at least two vertices in the $n$-cycle whose degrees are at least three.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
(1)$\Leftrightarrow$(2) follows from Proposition \ref{subtree-bipartite}.
Moreover, we can easily check (2)$\Leftrightarrow$(3) because $\widetilde{\mathbb{D}}_{4}$ has exactly one vertex whose degree is exactly four and $\widetilde{\mathbb{D}}_{l}$ ($l\ge 5$) has exactly two vertices whose degrees are exactly three.
\end{proof}
Fix an integer $k \ge 1$ and $n:=2k+1$.
By Lemma \ref{remove-ext-D}, we may assume that $G$ is one of the following graphs:
\begin{align}
\begin{picture}(400,200)(0,0)
\put(95,5){$k=1$}
\put(290,5){$k\ge 2$}
\put(70,200){\xymatrix@C=3mm@R=3mm{
&1_{l_{1}}\ar@{-}[d]&&\\
&\vdots\ar@{-}[d]&&\\
&1_{1}\ar@{-}[d]&&\\
&1\ar@{-}[rd]\ar@{-}[ld]&&\\
2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\
2_{1}\ar@{-}[d]&&3_{1}\ar@{-}[d]\\
\vdots\ar@{-}[d]&&\vdots\ar@{-}[d]\\
2_{l_{2}}&&3_{l_{3}}
}}
\put(270,170){\xymatrix@C=4mm@R=4mm{
&1_{l_{1}}\ar@{-}[d]&\\
&\vdots\ar@{-}[d]&\\
&1_{1}\ar@{-}[d]&\\
&1\ar@{-}[rd]\ar@{-}[ld]&\\
2\ar@{-}[d]&&n\ar@{-}[d]\\
3\ar@{.}[rr]&&{n-1}
}}
\end{picture}\notag
\end{align}
Finally, we remove extended Dynkin graphs of type $\widetilde{\mathbb{E}}$.
\begin{lemma}\label{remove-ext-E}
Fix an integer $k \ge 1$ and $n:=2k+1$.
\begin{enumerate}[\rm (1)]
\item Assume that $k=1$.
The following graphs $(\mathrm{i})$, $(\mathrm{ii})$ and $(\mathrm{iii})$ are the minimal graphs containing an extended Dynkin graph of type $\widetilde{\mathbb{E}}$.
\begin{align}
\begin{picture}(400,160)(0,0)
\put(25,150){\textnormal{(i)}}
\put(30,150){\xymatrix@C=3mm@R=3mm{
&1_{1}\ar@{-}[d]&\\
&1\ar@{-}[rd]\ar@{-}[ld]&\\
2\ar@{-}[d]\ar@{-}[rr]&&3\ar@{-}[d]\\
2_{1}&&3_{1}\ar@{-}[d]\\
&&3_{2}
}}
\put(145,150){\textnormal{(ii)}}
\put(265,150){\textnormal{(iii)}}
\put(150,150){\xymatrix@C=3mm@R=3mm{
&1\ar@{-}[rd]\ar@{-}[ld]&\\
2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\
2_{1}\ar@{-}[d]&&3_{1}\ar@{-}[d]\\
2_{2}&&3_{2}\ar@{-}[d]\\
&&3_{3}
}}
\put(270,150){\xymatrix@C=3mm@R=3mm{
&1\ar@{-}[rd]\ar@{-}[ld]&\\
2\ar@{-}[rr]\ar@{-}[d]&&3\ar@{-}[d]\\
2_{1}&&3_{1}\ar@{-}[d]\\
&&3_{2}\ar@{-}[d]\\
&&3_{3}\ar@{-}[d]\\
&&3_{4}\ar@{-}[d]\\
&&3_{5}
}}
\end{picture}\notag
\end{align}
\item Assume that $k\ge 2$.
The following graphs $(\mathrm{iv})$ and $(\mathrm{v})$ are
the minimal graphs containing an extended Dynkin graph of type $\widetilde{\mathbb{E}}$.
\begin{align}
\begin{picture}(300,145)(0,0)
\put(60,5){\textnormal{(iv)} $k= 2$}
\put(210,5){\textnormal{(v)} $k\ge 3$}
\put(45,130){\xymatrix@C=4mm@R=4mm{
&1_{2}\ar@{-}[d]&\\
&1_{1}\ar@{-}[d]&\\
&1\ar@{-}[rd]\ar@{-}[ld]&\\
2\ar@{-}[d]&&n\ar@{-}[d]\\
3\ar@{.}[rr]&&{n-1}
}}
\put(195,130){\xymatrix@C=4mm@R=4mm{
&1_{1}\ar@{-}[d]&\\
&1\ar@{-}[rd]\ar@{-}[ld]&\\
2\ar@{-}[d]&&n\ar@{-}[d]\\
3\ar@{-}[d]&&n-1\ar@{-}[d]\\
4\ar@{.}[rr]&&n-2
}}
\end{picture}\notag
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
We can easily find extended Dynkin graphs $\widetilde{\mathbb{E}}_{6}$, $\widetilde{\mathbb{E}}_{7}$ and $\widetilde{\mathbb{E}}_{8}$ in the graphs above.
\end{proof}
Now we are ready to prove Proposition \ref{tothm1}.
\begin{proof}[Proof of Proposition \ref{tothm1}]
If $G$ is a tree, then the assertion follows from Corollary \ref{Dynkin-case}.
We assume that $G$ is not a tree.
By Lemma \ref{remove-ext-A}, we may assume that $G$ does not contain even-cycles as subgraphs.
Then $G$ does not contain extended Dynkin graphs as subgraphs if and only if $G$ belongs to one of the following classes:
\begin{itemize}
\item $(\mathrm{I}_{n})_{n\ge 4}$ in Theorem \ref{theorem1}(2),
\item proper connected non-tree subgraphs of the graphs (i)--(v) in Lemma \ref{remove-ext-E}.
\end{itemize}
The second class coincides with the graphs $(\widetilde{\mathbb{A}}_{n-1})_{n:{\rm odd}}$, $(\mathrm{I}_{n})_{4\le n \le 8}$, $(\mathrm{II}_{n})_{5\le n\le 8}$, (III), (IV) and (V) in Theorem \ref{theorem1}(2).
Hence the assertion follows from Proposition \ref{subtree-bipartite}.
\end{proof}
We finish this subsection with the proof of Theorem \ref{theorem1}.
\begin{proof}[Proof of Theorem \ref{theorem1}]
The result follows from Corollary \ref{number by graph}(1) and Proposition \ref{tothm1}.
\end{proof}
\subsection{Proof of Theorem \ref{theorem2}}
We just compute the number of two-term tilting complexes for each graph in the list of Theorem \ref{theorem1}. Our calculation is based on Theorem \ref{reduced ver} and Corollary \ref{number by graph}. For our purpose, we assume that $G$ is a graph appearing in the list of Theorem \ref{theorem1} and let $A$ be a basic connected finite dimensional symmetric RCZ algebra whose graph is $G$.
Keeping above notations, we determine the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$, or equivalently, $||G||$ in Definition \ref{number GG}.
First, for types $\mathbb{A}$ and $\widetilde{\mathbb{A}}$, the number is already computed by \cite{Aoki18}:
\begin{proposition} \cite[Theorem 1.2]{Aoki18} \label{type A}
The following equality holds.
\begin{align}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A =
\begin{cases}
\binom{2n}{n} &\text{if $G=\mathbb{A}_{n}$,}\\
2^{2n-1} &\text{if $G=\widetilde{\mathbb{A}}_{n-1}$ for odd $n$.}
\end{cases}\notag
\end{align}
\end{proposition}
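For small $n$, these values can also be checked against the formula (\ref{number G}) directly, using the sketch \texttt{norm\_G} from Section \ref{sec:RCZ} (assumed to be loaded); for instance, for $n=5$:
\begin{verbatim}
from math import comb

# A_5: the path 1-2-3-4-5, and the odd cycle with 5 vertices (n = 5)
assert norm_G(range(1, 6), [(i, i + 1) for i in range(1, 5)]) == comb(10, 5)  # 252
assert norm_G(range(1, 6), [(i, i % 5 + 1) for i in range(1, 6)]) == 2 ** 9   # 512
\end{verbatim}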
Secondly, we consider the case where $G$ is a Dynkin graph of type $\mathbb{D}$.
For simplicity, let $c_{0}=1$, $c_{l}:=\binom{2l}{l}$ for each $l\ge 1$.
Then we have $||\mathbb{A}_l|| = c_l$ for all $l\geq 1$ by Proposition \ref{type A}. In addition, let $||\mathbb{A}_0||:=2$.
\begin{proposition}\label{type D}
Let $n\ge 4$ and $G=\mathbb{D}_n$. Then we have
\begin{align}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = 6\cdot 4^{n-2} - 2 c_{n-2}.\notag
\end{align}
\end{proposition}
\begin{proof}
Let $G$ be a graph as follows.
\begin{align}
\xymatrix@R=1mm{
1&&&&&&&\\
&3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4\ar@{-}[r]&\cdots\ar@{-}[r]&n.\\
2&&&&&&&
}\notag
\end{align}
By Corollary \ref{number by graph}, we have
\begin{equation} \label{for D}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A =2 \cdot \sum_{\substack{\epsilon\colon G_{0} \rightarrow \{ \pm 1\} \\ \epsilon(3)=+1}} |{G}_{\epsilon}|.
\end{equation}
We study the right hand side of (\ref{for D}). Let $M$ be the set of maps $\epsilon \colon G_{0} \rightarrow \{\pm1\}$ such that $\epsilon(3)=+1$.
Clearly, $M$ is a disjoint union of the following subsets:
\begin{itemize}
\item $M_{1}:=\{ \epsilon\in M \mid \epsilon(1)=\epsilon(2)=\epsilon(3) \}$.
\item $M_{2}:=\{ \epsilon\in M \mid \epsilon(1)=-\epsilon(2)=\epsilon(3) \}$.
\item $M_{3}:=\{ \epsilon\in M \mid -\epsilon(1)=\epsilon(2)=\epsilon(3) \}$.
\item $M_{4}:=\{ \epsilon\in M \mid -\epsilon(1)=-\epsilon(2)=\epsilon(3)=\epsilon(4) \}$.
\item $M_{5}:=\{ \epsilon\in M \mid -\epsilon(1)=-\epsilon(2)=\epsilon(3)=-\epsilon(4) \}=\bigsqcup_{t=4}^{n}M_{5}(t)$, where we use the convention $\epsilon(n+1):=\epsilon(n)$ and set
\begin{align}
M_{5}(t):=\Bigl\{ \epsilon\in M_{5}\ \Bigl.\Bigr|\ t=\min\{ 4 \le j \le n \mid \epsilon(j)=\epsilon(j+1)\} \Bigr\}. \notag
\end{align}
\end{itemize}
From now on, we compute $\mathsf{n}(i):=\sum_{\epsilon \in M_i} |G_{\epsilon}|$ for each $i\in\{ 1,\ldots, 5\}$.
In the following, a wavy segment $\xymatrix{i\ar@{~}[r]&j}$ stands for an edge connecting $i$ and $j$ if $\epsilon(i)\neq\epsilon(j)$, and for no edge otherwise.
(i) Let $\epsilon \in M_{1}$. Then $G_{\epsilon}$ is given by
\begin{align}
\xymatrix@R=1mm{
1&&&&&\\
&3\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\
2&&&&&
}\notag
\end{align}
Let $G'$ be the subgraph of $G$ obtained by removing the vertices $\{1,2\}$. Then we have $|G_{\epsilon}| = |G'_{\epsilon|_{\{3,\ldots,n\}}}|$.
Since $G'$ is a Dynkin graph of type $\mathbb{A}_{n-2}$, we obtain
\[
2\mathsf{n}(1)= 2 \cdot \sum_{\substack{\epsilon\colon G_0' \to \{\pm1\} \\ \epsilon(3)=+1}} |G'_{\epsilon}| =||\mathbb{A}_{n-2}|| = c_{n-2}
\]
where the last equality follows from Proposition \ref{type A}.
By arguments similar to (i), we can calculate the other cases.
(ii) For each $\epsilon\in M_{2}$, the graph $G_{\epsilon}$ is given by
\begin{align}
\xymatrix@R=1mm{
1&&&&&\\
&3\ar@{-}[ld]\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\
2&&&&&
}\notag
\end{align}
Then we can check $2\mathsf{n}(2)=||\mathbb{A}_{n-1}||-||\mathbb{A}_{n-2}||=c_{n-1}-c_{n-2}$.
(iii) By the symmetry of $G$, we have $\mathsf{n}(3)=\mathsf{n}(2)$.
(iv) Let $\epsilon\in M_{4}$. Then $G_{\epsilon}$ is described as
\begin{align}
\xymatrix@R=1mm{
1&&&&&\\
&3\ar@{-}[lu]\ar@{-}[ld]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\
2&&&&&
}\notag
\end{align}
Thus we find that $2\mathsf{n}(4)= |\mathbb{A}_3| \cdot ||\mathbb{A}_{n-3}|| = 5c_{n-3}$.
(v) For $\epsilon\in M_{5}(t)$, the graph $G_{\epsilon}$ is given by
\begin{align}
\xymatrix@R=1mm{
1&&&&&&&\\
&3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4& \ar@{-}[l]\cdots\ar@{-}[r]&t&t+1\ar@{~}[r]&\cdots\ar@{~}[r]&n.\\
2&&&&&&&
}\notag
\end{align}
Then we obtain
\begin{align}
2\mathsf{n}(5)
&=\sum_{t=4}^{n}|\mathbb{D}_t| \cdot ||\mathbb{A}_{n-t}||
=\frac{3n-4}{2n}c_{n-1}\cdot 2c_{0}+\sum_{t=4}^{n-1}\frac{3t-4}{2t}c_{t-1}c_{n-t}\notag\\
&=\frac{3n-4}{2n}c_{n-1}+\sum_{t=4}^{n}\frac{3t-4}{2t}c_{t-1}c_{n-t}.\notag
\end{align}
To finish the proof, we need the following lemma.
\begin{lemma} \label{binom numbers}
For any positive integer $n$, the following equalities hold:
\begin{enumerate}[(1)]
\item $\displaystyle{\sum_{t=1}^{n}c_{t-1}c_{n-t}=4^{n-1}}$.
\item $\displaystyle{\sum_{t=1}^{n}\frac{1}{t}c_{t-1}c_{n-t}=\frac{1}{2}c_{n}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equality (1) is well-known.
The equality (2) is obtained by
\begin{align}
\sum_{t=1}^{n}\frac{1}{t}c_{t-1}c_{n-t}
=\frac{n+1}{2}\sum_{t=1}^{n}C_{t-1}C_{n-t}
=\frac{n+1}{2}C_{n}=\frac{1}{2}c_{n},\notag
\end{align}
where $C_{n} := \frac{1}{n+1}c_{n}$ is the $n$-th Catalan number.
\end{proof}
By Lemma \ref{binom numbers}, we obtain the equality
\begin{align}
\sum_{t=4}^{n}\frac{3t-4}{2t}c_{t-1}c_{n-t}
&=\frac{3}{2}\sum_{t=4}^{n} c_{t-1}c_{n-t}-2\sum_{t=4}^{n}\frac{1}{t}c_{t-1}c_{n-t}\notag\\
&=\frac{3}{2}(4^{n-1} -c_{n-1} -2 c_{n-2} -6 c_{n-3}) -2(\frac{1}{2}c_{n}-c_{n-1}-c_{n-2}-2c_{n-3})\notag\\
&= 6\cdot 4^{n-2}-c_{n}+\frac{1}{2}c_{n-1}-c_{n-2}-5c_{n-3}.\notag
\end{align}
By (i)--(v), we have
\begin{align}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A
&=c_{n-2}+2(c_{n-1}-c_{n-2})+5c_{n-3}+6\cdot 4^{n-2}-c_{n}+\frac{2n-2}{n}c_{n-1}-c_{n-2}-5c_{n-3}\notag\\
&=6\cdot 4^{n-2}-c_{n}+\frac{4n-2}{n}c_{n-1}-2c_{n-2}\notag\\
&=6\cdot 4^{n-2}-2c_{n-2},\notag
\end{align}
where the last equality follows from $c_{n}=\frac{2(2n-1)}{n}c_{n-1}$.
\end{proof}
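For small $n$, the closed formula above can be compared with a direct evaluation of (\ref{number G}) by the sketch \texttt{norm\_G} from Section \ref{sec:RCZ} (assumed to be loaded):
\begin{verbatim}
from math import comb

# D_n: vertices 1 and 2 attached to 3, followed by the path 3-4-...-n
for n in range(4, 9):
    edges = [(1, 3), (2, 3)] + [(i, i + 1) for i in range(3, n)]
    assert norm_G(range(1, n + 1), edges) == 6 * 4 ** (n - 2) - 2 * comb(2 * (n - 2), n - 2)
\end{verbatim}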
Thirdly, we give an enumeration for type ($\mathrm{I}$). The number is obtained by using the result on type $\mathbb{D}$.
\begin{proposition} \label{type I}
If $G=\mathrm{I}_{n}$, then we have
\begin{align}
\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A = 6\cdot 4^{n-2} + 2c_{n} -4c_{n-1}-4c_{n-2}.\notag
\end{align}
\end{proposition}
\begin{proof}
Let $G$ be a graph as follows.
\begin{align}
\xymatrix@R=1mm{
1\ar@{-}[dd]&&&&&&&\\
&3\ar@{-}[lu]\ar@{-}[ld]\ar@{-}[r]&4\ar@{-}[r]&\cdots\ar@{-}[r]&n.\\
2&&&&&&&
}\notag
\end{align}
By using a method similar to the proof of Proposition \ref{type D}, we calculate the right-hand side of
\begin{align}
\# \mathop{2\text{-}\mathsf{tilt}}\nolimits A = 2 \sum_{\substack{\epsilon\colon G_{0} \rightarrow \{ \pm 1\} \\ \epsilon(3)=+1}} |G_{\epsilon}|.\notag
\end{align}
Let $M$ and $M_{i}$ $(1\leq i \leq 5)$ be sets of maps given in the proof of Proposition \ref{type D} and $\mathsf{m}(i):=\sum_{\epsilon\in M_{i}} |G_{\epsilon}|$.
For each map $\epsilon\in M_{1}\sqcup M_{4}\sqcup M_{5}$, we have $G_{\epsilon}=(\mathbb{D}_n)_{\epsilon}$.
Hence for each $i\in \{1,4,5\}$, we have
\begin{align}
\mathsf{m}(i)=\sum_{\epsilon\in M_{i}} |G_{\epsilon}|=\sum_{\epsilon\in M_{i}} |(\mathbb{D}_n)_{\epsilon}| = \mathsf{n}(i).\notag
\end{align}
Since $\mathsf{m}(2)=\mathsf{m}(3)$ holds by the symmetry of $G$, we have only to calculate $\mathsf{m}(2)$.
For each map $\epsilon\in M_{2}$, the graph $G_{\epsilon}$ is given by
\begin{align}
\xymatrix@R=1mm{
1\ar@{-}[dd]&&&&&\\
&3\ar@{-}[ld]\ar@{~}[r]&4\ar@{~}[r]&\cdots\ar@{~}[r]&n-1\ar@{~}[r]&n.\\
2&&&&&
}\notag
\end{align}
Then the calculation of $\mathsf{m}(2)$ is reduced to that of Dynkin graphs of type $\mathbb{A}$.
In fact, let $G'$ be the Dynkin graph $\mathbb{A}_{n}$ obtained from $G$ by deleting the edge connecting the vertices $1$ and $3$.
Then we have
\begin{align}
\mathsf{m}(2)
&= \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\}\\ \epsilon(3)=+1}} |G'_{\epsilon}|- \sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\}\\ \epsilon(2)=\epsilon(3)=+1}} |G'_{\epsilon}| -
\sum_{\substack{\epsilon\colon G'_{0}\to \{ \pm 1\} \\ -\epsilon(1)=-\epsilon(2)=\epsilon(3)=+1}} |G'_{\epsilon}|\notag \\
&=\frac{1}{2}||\mathbb{A}_{n}|| - \frac{1}{4} ||\mathbb{A}_{2}|| \cdot ||\mathbb{A}_{n-2}|| - \mathsf{n}(2) \notag\\
&=\frac{1}{2}c_{n}-\frac{3}{2}c_{n-2}-\mathsf{n}(2).\notag
\end{align}
Therefore we obtain
\begin{align}
\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A
&= 2(\mathsf{m}(1)+\mathsf{m}(2)+\mathsf{m}(3)+\mathsf{m}(4)+\mathsf{m}(5))\notag\\
&= 2(\mathsf{n}(1)+2\mathsf{n}(2)+\mathsf{n}(4)+\mathsf{n}(5))-4\mathsf{n}(2)+4\mathsf{m}(2)\notag\\
&= || \mathbb{D}_{n}|| - 4\mathsf{n}(2) + 4\mathsf{m}(2) \notag\\
&= 6\cdot 4^{n-2} - 2c_{n-2} -4\mathsf{n}(2) + 2c_{n}-6c_{n-2}-4\mathsf{n}(2) \notag\\
&= 6\cdot 4^{n-2} + 2c_{n} - 4c_{n-1} - 4c_{n-2}. \notag
\end{align}
This finishes the proof.
\end{proof}
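Again, the formula can be compared with a direct evaluation of (\ref{number G}) for small $n$ (using the sketch \texttt{norm\_G} from Section \ref{sec:RCZ}):
\begin{verbatim}
from math import comb

# I_n: the triangle 1-2-3 together with the path 3-4-...-n
for n in range(4, 9):
    edges = [(1, 2), (1, 3), (2, 3)] + [(i, i + 1) for i in range(3, n)]
    expected = (6 * 4 ** (n - 2) + 2 * comb(2 * n, n)
                - 4 * comb(2 * (n - 1), n - 1) - 4 * comb(2 * (n - 2), n - 2))
    assert norm_G(range(1, n + 1), edges) == expected
\end{verbatim}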
For the remaining cases $\mathbb{E}$, (II), (III), (IV) and (V),
we just compute the number by using the formula (\ref{number G}) in Corollary \ref{number by graph}(2).
\begin{proposition} \label{type sporadic}
For each case \textnormal{$\mathbb{E}$, (II), (III), (IV)} and \textnormal{(V)},
the number $\#\mathop{2\text{-}\mathsf{tilt}}\nolimits A$ is given by the table of Theorem \ref{theorem2}.
\end{proposition}
\begin{proof}
The number for $\mathbb{E}_6$ is shown in Example \ref{example-E6}(2) and the others are similar. The details are left to the reader.
\end{proof}
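For completeness, the entries of the table in Theorem \ref{theorem2} treated in Proposition \ref{type sporadic} can also be reproduced by the sketch \texttt{norm\_G} from Section \ref{sec:RCZ} (assumed to be loaded); below we only record the input graphs, with the vertex labels of the list in Theorem \ref{theorem1}, together with the expected values.
\begin{verbatim}
cases = {
    'E6':  ([(1, 2), (2, 3), (3, 4), (3, 5), (5, 6)], 1700),
    'E7':  ([(1, 2), (2, 3), (3, 4), (3, 5), (5, 6), (6, 7)], 8872),
    'E8':  ([(1, 2), (2, 3), (3, 4), (3, 5), (5, 6), (6, 7), (7, 8)], 54066),
    'II5': ([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5)], 632),
    'II6': ([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 6)], 2936),
    'II7': ([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 6), (6, 7)], 11306),
    'II8': ([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 6), (6, 7), (7, 8)], 75240),
    'III': ([(1, 2), (2, 3), (2, 5), (3, 4), (4, 6), (5, 6)], 3108),
    'IV':  ([(1, 2), (2, 3), (2, 4), (3, 4), (3, 5), (4, 6)], 4056),
    'V':   ([(1, 2), (1, 3), (2, 3), (2, 4), (3, 6), (4, 5), (6, 7)], 17328),
}
for name, (edges, expected) in cases.items():
    vertices = sorted({v for e in edges for v in e})
    assert norm_G(vertices, edges) == expected
\end{verbatim}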
\end{document}
\begin{document}
\title[Some cohomologically similar manifolds and special generic maps]{New families of manifolds with similar cohomology rings admitting special generic maps}
\author{Naoki Kitazawa}
\keywords{Special generic maps. (Co)homology rings. Closed and simply-connected manifolds. \\
\indent {\it \textup{2020} Mathematics Subject Classification}: Primary~57R45. Secondary~57R19.}
\address{Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku Fukuoka 819-0395, Japan\\
TEL (Office): +81-92-802-4402 \\
FAX (Office): +81-92-802-4405 \\
}
\email{[email protected]}
\urladdr{https://naokikitazawa.github.io/NaokiKitazawa.html}
\begin{abstract}
As Reeb's theorem shows, Morse functions with exactly two singular points on closed manifolds are very simple and important. They characterize spheres whose dimensions are not $4$ topologically and the $4$-dimensional unit sphere.
{\it Special generic} maps are generalized versions of these maps. Canonical projections of unit spheres are special generic. Studies of Saeki and Sakuma since the 1990s, followed by Nishioka and Wrazidlo, show that the differentiable structures of the spheres and the homology groups of the manifolds (in several classes) are restricted. We see special generic maps are attractive.
Our paper studies the cohomology rings of manifolds admitting such maps. As our new result, we find a new family of manifolds whose cohomology rings are similar, and we show that the (non-)existence of special generic maps is closely related to their topologies. More explicitly, we have previously found related families, and our new manifolds add to these discoveries.
\end{abstract}
\maketitle
\section{Introduction.}
\label{sec:1}
According to the so-called Reeb's theorem, Morse functions with exactly two singular points on closed manifolds characterize spheres whose dimensions are not $4$ topologically and the $4$-dimensional unit sphere.
{\it Special generic} maps are, in short, their higher dimensional versions.
We first define a {\it special} generic map between smooth manifolds with no boundaries.
Before that, we introduce fundamental and important terminologies, notions and notation.
Hereafter, for an integer $k>0$, ${\mathbb{R}}^k$ denotes the $k$-dimensional Euclidean space, which is a smooth manifold canonically and also a Riemannian manifold endowed with the standard Euclidean metric.
For ${\mathbb{R}}^1$, $\mathbb{R}$ is used in a natural way. This is also a commutative ring and $\mathbb{Z} \subset \mathbb{R}$ denotes the subring of all integers.
$||x|| \geq 0$ is for the distance between $x$ and the origin $0$ in ${\mathbb{R}}^k$.
$S^k\ {\rm (}D^{k+1}{\rm )}:=\{x \in {\mathbb{R}}^{k+1} \mid ||x||=1\ {\rm (}resp.\ ||x|| \leq 1{\rm )}\}$ is the $k$-dimensional unit sphere (resp. ($k+1$)-dimensional unit disk). A canonical projection of the unit sphere $S^k \subset {\mathbb{R}}^{k+1}$ into ${\mathbb{R}}^{k^{\prime}}$ with $k \geq k^{\prime}$ is defined as a map mapping $(x_1,x_2) \in {\mathbb{R}}^{k^{\prime}} \times {\mathbb{R}}^{k+1-k^{\prime}}$ to $x_1 \in {\mathbb{R}}^{k^{\prime}}$. This is also a simplest special generic map.
Hereafter, $\dim X$ is the dimension of a topological space regarded as a CW complex, which we can define uniquely. A manifold is always regarded as a CW complex. A smooth manifold always has the structure of a so-called {\it PL} manifold in a canonical way and this is a polyhedron.
For a smooth map $c:X \rightarrow Y$ between smooth manifolds, a point $p \in X$ is a {\it singular} point of $c$ if the rank of the differential ${dc}_p$ is smaller than $\min \{\dim X,\dim Y\}$. $S(c)$ denotes the set of all singular points of $c$ and we call this the {\it singular set} of $c$.
A {\it diffeomorphism} means a homeomorphism which is a smooth map with no singular points.
Two smooth manifolds are defined to be {\it diffeomorphic} if a diffeomorphism between the two manifolds exists.
A smooth manifold homeomorphic to a sphere is said to be a {\it homotopy sphere}. A homotopy sphere is a {\it standard} (an {\it exotic}) sphere if it is diffeomorphic to a unit sphere (resp. not diffeomorphic to any unit sphere).
\begin{Def}
A smooth map $c:X \rightarrow Y$ between manifolds with no boundaries satisfying $\dim X \geq \dim Y$ is said to be {\it special generic} if at each singular point $p$, there exist suitable local coordinates around $p$ and $c(p)$ for which $c$ is locally
represented by $(x_1,\cdots,x_{\dim X}) \mapsto (x_1,\cdots,x_{\dim Y-1},{\Sigma}_{j=1}^{\dim X-\dim Y+1} {x_{\dim Y+j-1}}^2)$.
\end{Def}
Morse functions in Reeb's theorem are special generic. As a kind of exercise on smooth manifolds and the theory of Morse functions, we can also see that canonical projections of unit spheres are special generic.
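For example, consider the canonical projection of $S^k$ into $\mathbb{R}$, mapping $x=(x_1,x_2) \in \mathbb{R} \times {\mathbb{R}}^{k}$ to $x_1 \in \mathbb{R}$. Its singular points are exactly the two points $(\pm 1,0)$. Around the singular point $(-1,0)$, we can take the local coordinate $x_2=(y_1,\cdots,y_k)$ and the map is locally represented by
$$(y_1,\cdots,y_k) \mapsto -\sqrt{1-{\Sigma}_{j=1}^{k} {y_j}^2}=-1+\frac{1}{2}{\Sigma}_{j=1}^{k} {y_j}^2+\cdots.$$
By a suitable change of the local coordinates of the source, which the so-called Morse lemma gives, and a translation of the target, this is the local form in the definition above with $\dim Y=1$. The case of a general canonical projection is checked similarly.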
\begin{Prop}
The singular set $S(c)$ of the special generic map $c:X \rightarrow Y$ is a {\rm (}$\dim Y-1${\rm )}-dimensional smooth closed submanifold of $X$ and has no boundary. Furthermore, the restriction $c {\mid}_{S(c)}$ is a smooth immersion.
\end{Prop}
Other properties are presented in the next section. Studies of Saeki and Sakuma (\cite{saeki1,saeki2,saekisakuma,saekisakuma2,sakuma}) since the 1990s,
followed by Nishioka (\cite{nishioka}) and Wrazidlo (\cite{wrazidlo,wrazidlo2,wrazidlo3}), show that the differentiable structures of the spheres and the homology groups of the manifolds are restricted in several cases. This explicitly shows that special generic maps are attractive. Some of these results are also presented later.
Our paper studies the cohomology rings of manifolds admitting such maps. In the next section, we also introduce terminologies and notation on homology groups, cohomology rings, characteristic classes of manifolds such as {\it Stiefel-Whitney classes} and {\it Pontrjagin classes}, and other notions from algebraic topology and differential topology. We also expect that readers have some knowledge of them. \cite{hatcher} and \cite{milnorstasheff} explain them systematically.
\begin{MainThm}
\label{mthm:1}
Let $m \geq 8$ be an arbitrary integer. Let $m^{\prime} \geq 6$ be another integer satisfying the condition $m-m^{\prime} \geq 2$. Assume also that there exists an integer $a$ satisfying the following conditions.
\begin{itemize}
\item $a=2$ or $a=m^{\prime}-3$.
\item $a+m^{\prime}-1 \leq m$
\item $2m^{\prime}-m-a=1$.
\end{itemize}
Let $G$ be an arbitrary finite commutative group which is not the trivial group.
Then we have a closed and simply-connected manifold $M$ enjoying the following properties.
\begin{enumerate}
\item \label{mthm:1.1} There exists an $m^{\prime}$-dimensional closed and simply-connected manifold $M^{\prime}$ and $M$ is diffeomorphic to $M^{\prime} \times S^{m-m^{\prime}}$.
\item \label{mthm:1.2} The $j$-th homology group of $M^{\prime}$ whose coefficient ring is $\mathbb{Z}$ is isomorphic to $G$ if $j=2,m^{\prime}-3$, $\mathbb{Z}$ if $j=0,m^{\prime}$, and the trivial group, otherwise.
\item \label{mthm:1.3} $M$ admits a special generic map into ${\mathbb{R}}^n$ for $m^{\prime}<n\leq m$ whereas it admits no special generic maps into ${\mathbb{R}}^n$ for $1 \leq n \leq 4$ and $n=m^{\prime}$. Furthermore, the special generic maps can be obtained as ones whose restrictions to the singular sets are embeddings and {\rm product organized} ones, defined in Definition \ref{def:2}, later.
\end{enumerate}
\end{MainThm}
\begin{MainThm}
\label{mthm:2}
Let $\{G_j\}_{j=1}^5$ be a sequence of free commutative groups such that $G_{j}$ and $G_{6-j}$ are isomorphic for $1 \leq j \leq 2$, that $G_{1}$ is not the trivial group and that the rank of $G_{3}$ is even. Let $G_{{\rm F},1}$ be an arbitrary finite commutative group and $G_{{\rm F},2}$ a finite commutative group which is not the trivial group.
Let $\{M_i\}_{i=1}^2$ be a pair of $8$-dimensional closed and simply-connected manifolds enjoying the following properties.
\begin{enumerate}
\item The $j$-th homology groups of these manifolds whose coefficient rings are $\mathbb{Z}$ are mutually isomorphic to $G_{1} \oplus G_{{\rm F},1}$ for $j=2$, $G_{2} \oplus G_{{\rm F},2}$ for $j=3$, $G_{3} \oplus G_{{\rm F},2}$ for $j=4$, $G_{4} \oplus G_{{\rm F},1}$ for $j=5$, and $G_{5}$ for $j=6$.
\item Consider the restrictions to the subgroups generated by all elements whose orders are infinite for these manifolds. They are regarded as subalgebras and isomorphic to the cohomology rings of manifolds represented as connected sums of finitely many manifolds diffeomorphic to the products of two homotopy spheres where the coefficient ring is $\mathbb{Z}$.
\item For an arbitrary positive integer $j>0$, the $j$-th Stiefel-Whitney classes and the $j$-th Pontrjagin classes of these manifolds are the zero elements of the $j$-th cohomology groups whose coefficient rings are $\mathbb{Z}/2\mathbb{Z}$, the ring of order $2$, and the $4j$-th cohomology groups whose coefficient rings are $\mathbb{Z}$, respectively.
\item $M_i$ does not admit special generic maps into ${\mathbb{R}}^n$ for $1 \leq n < i+5$ whereas it admits one into ${\mathbb{R}}^n$ for $i+5 \leq n \leq 8$.
Furthermore, the special generic maps can be obtained as ones whose restrictions to the singular sets are embeddings and {\rm product organized} ones, defined in Definition \ref{def:2}, later.
\end{enumerate}
\end{MainThm}
Main Theorem \ref{mthm:2} is regarded as a variant of Main Theorem 4 of \cite{kitazawa2}.
In the next section, we present additional fundamental properties of special generic maps. The third section is devoted to our proof of the Main Theorems. We also present existing studies on the homology groups and the cohomology rings of the manifolds. \\
\ \\
{\bf Conflict of Interest.} \\
The author was a member of the project supported by JSPS KAKENHI Grant Number JP17H06128 ``Innovative research of geometric topology and singularities of differentiable mappings''
(Principal investigator: Osamu Saeki) and
is also a member of the project JSPS KAKENHI Grant Number JP22K18267 ``Visualizing twists in data through monodromy'' (Principal Investigator: Osamu Saeki). The present study is also supported by these projects. \\
\ \\
{\bf Data availability.} \\
Data essentially supporting the present study are all in the present paper.
\section{Fundamental properties and existing studies on special generic maps and the manifolds.}
First, we review elementary algebraic topology and other notions, terminologies and notation we need.
Let $(X,X^{\prime})$ be a pair of topological spaces where these two spaces satisfy the relation $X^{\prime} \subset X$ and may be empty. The ({\it $k$-th}) {\it homology group} of this pair $(X,X^{\prime})$ whose coefficient ring is $A$ is denoted by $H_{k}(X,X^{\prime};A)$. In the case $X^{\prime}$ is empty, we may omit ",$X^{\prime}$" in the notation and we call the homology group (cohomology group) of $(X,X^{\prime})$ the {\it homology group} (resp. {\it cohomology group}) of $X$ in general.
For a topological space $X$ of suitable classes, the {\it $k$-th homotopy group} of $X$ is denoted by ${\pi}_k(X)$.
The class of arcwise-connected cell complexes, which contains the class of arcwise-connected CW complexes for example, is one such class.
Let $(X,X^{\prime})$ and $(Y,Y^{\prime})$ be pairs of topological spaces where these spaces satisfy the relations $X^{\prime} \subset X$ and $Y^{\prime} \subset Y$ and may be empty. Given a continuous map $c:X \rightarrow Y$ satisfying $c(X^{\prime}) \subset Y^{\prime}$, we let $c_{\ast}:H_{k}(X,X^{\prime};A) \rightarrow H_{k}(Y,Y^{\prime};A)$, $c^{\ast}:H^{k}(Y,Y^{\prime};A) \rightarrow H^{k}(X,X^{\prime};A)$ and $c_{\ast}:{\pi}_k(X) \rightarrow {\pi}_k(Y)$ denote the canonically induced homomorphisms, where in the case of homotopy groups the spaces are assumed to satisfy suitable conditions.
Let $X$ be a topological space and $A$ a commutative ring. Let
$H^{\ast}(X;A)$ denote the direct sum ${\oplus}_{j=0}^{\infty} H^j(X;A)$ over all integers $j \geq 0$.
For any sequence $\{a_j\}_{j=1}^l \subset H^{\ast}(X;A)$ of elements of length $l>0$, we can define the {\it cup product} ${\cup}_{j=1}^l a_j \in H^{\ast}(X;A)$ for general $l$; the notation $a_1 \cup a_2$ is also used in the case $l=2$. This gives $H^{\ast}(X;A)$ the structure of a graded commutative algebra, the {\it cohomology ring} of $X$ whose {\it coefficient ring} is $A$.
The {\it fundamental class} of a compact, connected and oriented manifold $Y$ is the canonically and uniquely determined generator of the ($\dim Y$)-th homology group $H_{\dim Y}(Y,\partial Y;\mathbb{Z})$, which is isomorphic to $\mathbb{Z}$.
Let $i_{Y,X}:Y \rightarrow X$ be a smooth immersion satisfying $i_{Y,X}(\partial Y) \subset \partial X$ and $i_{Y,X}({\rm Int}\ Y) \subset {\rm Int}\ X$. In other words, $Y$ is smoothly and {\it properly} immersed or embedded into $X$.
Note that in considering other categories such as the PL category and the topological category, we consider suitable maps playing the roles that the smooth immersions play here.
If for an element $h \in H_j(X,\partial X;A)$, the value of the homomorphism ${i_{Y,X}}_{\ast}$ induced by the map $i_{Y,X}:Y \rightarrow X$ at the fundamental class of $Y$ is $h$, then $h$ is {\it represented} by $Y$.
We also need several fundamental methods and theorems, for example homology exact sequences for pairs of topological spaces, Mayer-Vietoris sequences, Poincar\'e duality (theorem) for compact and connected (orientable) manifolds, the universal coefficient theorem and the K\"unneth theorem for products of topological spaces.
For such explanations, see \cite{hatcher} again for example.
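As an illustrative special case, added here only for the reader's convenience and used later for products of the form $M^{\prime} \times S^{m-m^{\prime}}$, the K\"unneth theorem gives, for a topological space $X$ and an integer $p>0$,
$$H_j(X \times S^p;\mathbb{Z}) \cong H_j(X;\mathbb{Z}) \oplus H_{j-p}(X;\mathbb{Z}),$$
since the homology of $S^p$ is free and finitely generated, so that the Tor terms vanish.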
The {\it diffeomorphism group} of a smooth manifold is the group of all diffeomorphisms from the manifold to itself endowed with the so-called {\it Whitney $C^{\infty}$ topology}.
A {\it smooth} bundle means a bundle whose fiber is a smooth manifold and whose structure group is the diffeomorphism group. A {\it linear} bundle means a bundle whose fiber is a Euclidean space, a unit sphere, or a unit disk and whose structure group consists of linear transformations. Note that a linear transformation here is defined in a natural and canonical way.
For the general theory of bundles, see \cite{steenrod}; for linear bundles, see \cite{milnorstasheff}.
\begin{Prop}
\label{prop:2}
For a special generic map on an $m$-dimensional closed and connected manifold $M$ into ${\mathbb{R}}^n$, we have the following properties.
\begin{enumerate}
\item \label{prop:2.1}
There exist an $n$-dimensional compact and connected smooth manifold $W_f$, a smooth surjection $q_f:M \rightarrow W_f$ and a smooth immersion $\bar{f}:W_f \rightarrow {\mathbb{R}}^n$ satisfying the relation $f=\bar{f} \circ q_f$.
\item \label{prop:2.2}
There exists a small collar neighborhood $N(\partial W_f)$ of the boundary $\partial W_f \subset W_f$ such that the composition of the restriction of $q_f$ to the preimage ${q_f}^{-1}(N(\partial W_f))$ with the canonical projection to $\partial W_f$ gives a linear bundle whose fiber is the {\rm (}$m-n+1${\rm )}-dimensional unit disk $D^{m-n+1}$.
\item \label{prop:2.3}
The restriction of $q_f$ to the preimage ${q_f}^{-1}(W_f-{\rm Int}\ N(\partial W_f))$ gives a smooth bundle whose fiber is an {\rm (}$m-n${\rm )}-dimensional standard sphere. It is regarded as a linear bundle in specific cases: for example, in the case $m-n=0,1,2,3$, the bundle is regarded as a linear bundle.
\end{enumerate}
\end{Prop}
\begin{Prop}
\label{prop:3}
In Proposition \ref{prop:2}, we can have an {\rm (}$m+1${\rm )}-dimensional compact and connected topological {\rm (}PL{\rm )} manifold $W$ and a continuous {\rm (}resp. piecewise smooth{\rm )} surjection $r:W \rightarrow W_f$ enjoying the following properties, where we abuse the notation.
\begin{enumerate}
\item $M$ is the boundary of $W$ and the restriction of $r$ to the boundary $M$ is $q_f$.
\item $r$ gives the structure of a bundle over $W_f$ whose fiber is homeomorphic {\rm (}resp. PL homeomorphic{\rm )} to the {\rm (}$m-n+1${\rm )}-dimensional unit disk $D^{m-n+1}$ and whose structure group consists of piecewise smooth homeomorphisms.
\item $W$ is decomposed into two {\rm (}$m+1${\rm )}-dimensional topological {\rm (}resp. PL{\rm )} manifolds $W_1$ and $W_2$.
\item We can consider the composition of the restriction of $r$ to $W_1$ with a canonical projection to $\partial W_f$. This gives a linear bundle whose fiber is the {\rm (}$m-n+2${\rm )}-dimensional unit disk $D^{m-n+2}$. Furthermore, this bundle is realized as a bundle whose subbundle obtained by considering the boundary of the unit disk and a suitable smoothly embedded copy of the unit disk $D^{m-n+1}$ there as its fiber is the linear bundle of Proposition \ref{prop:2} {\rm (}\ref{prop:2.2}{\rm )}.
\item The restriction of $r$ to $W_2$ gives a bundle whose fiber is diffeomorphic to the {\rm (}$m-n+1${\rm )}-dimensional unit disk $D^{m-n+1}$ {\rm (}resp. and whose structure group consists of piecewise smooth homeomorphisms{\rm )}. Furthermore, this bundle is realized as a bundle whose subbundle obtained by considering the boundary of the unit disk as its fiber is the smooth bundle of Proposition \ref{prop:2} {\rm (}\ref{prop:2.3}{\rm )}.
\item In the case $m-n=0,1,2,3$ for example, $W$ can be chosen as a smooth one and we can also do in such a way that $r$ gives a linear bundle over $W_f$.
\end{enumerate}
\end{Prop}
\begin{Ex}
\label{ex:1}
Let $l>0$ be an integer. Let $m \geq n \geq 2$ be integers.
Other than the canonical projections of unit spheres, we present the simplest special generic maps. We consider a connected sum of $l>0$ manifolds in the family $\{S^{n_j} \times S^{m-n_j}\}_{j=1}^l$ in the smooth category where $1 \leq n_j \leq n-1$ is an integer for $1 \leq j \leq l$. We have a special generic map $f:M \rightarrow {\mathbb{R}}^n$ on the resulting manifold $M$ such that in Proposition \ref{prop:2}, the following properties are enjoyed.
\begin{enumerate}
\item \label{ex:1.1} $\bar{f}$ is an embedding.
\item
\label{ex:1.2}
$W_f$ is represented as a boundary connected sum of $l>0$ manifolds, the $j$-th of which is diffeomorphic to $S^{n_j} \times D^{n-n_j}$ in the family $\{S^{n_j} \times D^{n-n_j}\}_{j=1}^l$. The boundary connected sum is considered in the smooth category.
\item \label{ex:1.3}
The bundles in Proposition \ref{prop:2} (\ref{prop:2.2}) and (\ref{prop:2.3}) are trivial.
\end{enumerate}
\end{Ex}
We introduce a result of \cite{saeki1} related to Example \ref{ex:1}.
\begin{Thm}[\cite{saeki1}]
\label{thm:1}
An $m$-dimensional closed and connected manifold $M$ admits a special generic map into ${\mathbb{R}}^2$ if and only if either of the following holds.
\begin{enumerate}
\item $M$ is a homotopy sphere which is not a $4$-dimensional exotic sphere.
\item $M$ is a manifold represented as a connected sum of smooth manifolds, considered in the smooth category, where each of the manifolds here is represented as either of the following manifolds.
\begin{enumerate}
\item The total space of a smooth bundle over $S^1$ whose fiber is a homotopy sphere for $m \neq 5$.
\item The total space of a smooth bundle over $S^1$ whose fiber is a standard sphere for $m=5$.
\end{enumerate}
\end{enumerate}
Furthermore, for each manifold here, we can construct a special generic map so that the properties {\rm (}\ref{ex:1.1}{\rm )} and {\rm (}\ref{ex:1.2}{\rm )} in Example \ref{ex:1} are enjoyed with $n_j=1$.
\end{Thm}
\begin{Prop}
\label{prop:4}
Let $A$ be a commutative ring.
In Proposition \ref{prop:2}, the homomorphisms ${q_f}_{\ast}:H_j(M;A) \rightarrow H_j(W_f;A)$, ${q_f}^{\ast}:H^j(W_f;A) \rightarrow H^j(M;A)$ and ${q_f}_{\ast}:{\pi}_j(M) \rightarrow {\pi}_j(W_f)$ are isomorphisms for $0 \leq j \leq m-n$.
\end{Prop}
We can show the following theorem by applying Proposition \ref{prop:4} with Proposition \ref{prop:2}, some fundamental theory on $3$-dimensional compact manifolds and some previously presented facts. For a proof, see the
original paper \cite{saeki1} for example.
\begin{Thm}[\cite{saeki1}]
\label{thm:2}
Let $m>3$ be an arbitrary integer.
\begin{enumerate}
\item If an $m$-dimensional closed and simply-connected manifold admits a special generic map into ${\mathbb{R}}^3$, then it is either of the following.
\begin{enumerate}
\item A homotopy sphere which is not a $4$-dimensional exotic sphere.
\item A manifold represented as a connected sum of smooth manifolds considered in the smooth category where each of the manifolds here is represented as the total space of a smooth bundle over $S^2$ whose fiber is a homotopy sphere. Moreover, the homotopy sphere of the fiber is also not a $4$-dimensional exotic sphere.
\end{enumerate}
\item
Furthermore, for each manifold here admitting a special generic map into ${\mathbb{R}}^3$, we can construct a special generic map so that the properties {\rm (}\ref{ex:1.1}{\rm )} and {\rm (}\ref{ex:1.2}{\rm )} in Example \ref{ex:1} are enjoyed with $n_j=2$.
\end{enumerate}
\end{Thm}
The following is shown in \cite{nishioka} in order to prove Theorem \ref{thm:3} for $(m,n)=(5,4)$ by applying a classification result of $5$-dimensional closed and simply-connected manifolds of \cite{barden}, Proposition \ref{prop:4} and some additional arguments on homology groups.
\begin{Prop}[\cite{nishioka}]
\label{prop:5}
Let $k \geq 3$ be an integer.
For a $k$-dimensional compact and connected manifold $X$ such that $H_1(X;\mathbb{Z})$ is the trivial group, $X$ is orientable and the homology groups $H_{j}(X;\mathbb{Z})$ are free for $j=k-2,k-1$.
\end{Prop}
\begin{Thm}[\cite{saeki1} for $(m,n)=(4,3),(5,3),(6,3)$,\cite{nishioka} for $(m,n)=(5,4)$, and \cite{kitazawa5} for $(m,n)=(6,4)$]
\label{thm:3}
For integers $(m,n)=(4,3),(5,3),(5,4),(6,4)$, an $m$-dimensional closed and simply-connected manifold $M$ admits a special generic map into ${\mathbb{R}}^n$ if and only if $M$ is as follows.
\begin{enumerate}
\item A standard sphere.
\item A manifold represented as a connected sum of smooth manifolds considered in the smooth category where each of the manifolds here is as follows.
\begin{enumerate}
\item The total space of a linear bundle over $S^2$ whose fiber is the unit sphere $S^{m-2}$.
\item Only in the case $(m,n)=(6,4)$, a manifold diffeomorphic to $S^3 \times S^3$.
\end{enumerate}
\end{enumerate}
Furthermore, for each manifold here, we can construct a special generic map so that the properties {\rm (}\ref{ex:1.1}{\rm )} and {\rm (}\ref{ex:1.2}{\rm )} in Example \ref{ex:1} are enjoyed with $n_j=2$ unless $(m,n)=(6,4)$. In the case $(m,n)=(6,4)$, we can do similarly with $n_j=2,3$.
\end{Thm}
\begin{Prop}
\label{prop:6}
Let $A$ be a commutative ring.
In Proposition \ref{prop:2}, the cup product for a finite sequence $\{u_j\}_{j=1}^{l} \subset H^{\ast}(M;A)$ of length $l>0$ consisting of elements whose degrees are at most $m-n$ is the zero element if the sum of all $l$ degrees is greater than or equal to $n$.
\end{Prop}
We introduce a proof of Propositions \ref{prop:4} and \ref{prop:6} referring to \cite{kitazawa1}.
For terminologies from the PL category and similar categories, see \cite{hudson} for example.
\begin{proof}[A proof of Propositions \ref{prop:4} and \ref{prop:6}]
This is based on an argument of \cite{saekisuzuoka} for example. We abuse the notation of some previous statements such as Propositions \ref{prop:2} and \ref{prop:3}.
$W$ is an ($m+1$)-dimensional compact and connected (PL) manifold whose boundary is $M$ and collapses to $W_f$, which is an $n$-dimensional compact smooth manifold immersed smoothly into ${\mathbb{R}}^n$. This has the (simple) homotopy type of an ($n-1$)-dimensional polyhedron and more precisely, collapses to an ($n-1$)-dimensional polyhedron. We can see that for $0 \leq j < m+1-(n-1)-1=m-n+1$, the three kinds of homomorphisms in Proposition \ref{prop:4} are isomorphisms.
In Proposition \ref{prop:6}, the cup product for the finite sequence is regarded as the value of ${q_f}^{\ast}:H^{\ast}(W_f;A) \rightarrow H^{\ast}(M;A)$ at the cup product of a sequence of length $l$ where ${q_f}^{\ast}:H^{\ast}(W_f;A) \rightarrow H^{\ast}(M;A)$ is defined in a canonical way from ${q_f}^{\ast}:H^{i}(W_f;A) \rightarrow H^{i}(M;A)$. This sequence of elements of $H^{\ast}(W_f;A)$ is defined in a canonical and unique way by Proposition \ref{prop:4}. The cup product is the zero element of $H^{\ast}(W_f;A)$. This follows from the fact that $W_f$ has the homotopy type of an ($n-1$)-dimensional polyhedron.
This completes the proof.
\end{proof}
Hereafter, in the present section, we introduce arguments in \cite{kitazawa7} and present some new arguments.
\begin{Prop}[Partially discussed in \cite{kitazawa7}]
\label{prop:7}
Let $m \geq n \geq 1$ be integers. Let ${\bar{f}}_N:\bar{N} \rightarrow {\mathbb{R}}^n$ be a smooth immersion of an $n$-dimensional compact, connected and orientable manifold $\bar{N}$.
Then there exist a suitable closed and connected smooth manifold $M$ and a special generic map $f:M \rightarrow {\mathbb{R}}^n$ enjoying the following properties
where we abuse the notation in Proposition \ref{prop:2}.
\begin{enumerate}
\item $W_f=\bar{N}$ and $\bar{f}={\bar{f}}_N$.
\item Two bundles in Proposition \ref{prop:2} {\rm (}\ref{prop:2.2}{\rm )} and {\rm (}\ref{prop:2.3}{\rm )} are trivial bundles.
\item Let $f_0:=f$. Then there exist a special generic map $f_j:M \rightarrow {\mathbb{R}}^{n+j}$ for $0 \leq j \leq m-n$ and a smooth immersion $f_j:M \rightarrow {\mathbb{R}}^{n+j}$ for $j \geq m-n$. If ${\bar{f}}_N$ is an embedding, then each immersion can be obtained as an embedding. Furthermore, they can be constructed enjoying the relation $f_{j_1}={\pi}_{n+j_2,n+j_1} \circ f_{j_2}$ for any distinct integers $j_1<j_2$ and the canonical projection ${\pi}_{n+j_2,n+j_1}:{\mathbb{R}}^{n+j_2} \rightarrow {\mathbb{R}}^{n+j_1}$.
\end{enumerate}
\end{Prop}
\begin{proof}
Most of our proof is based on a proof in the original paper.
By considering the restriction of the canonical projection of a unit sphere to $\mathbb{R}$ to a suitable hemisphere, we have a Morse function on a copy of the unit disk. This function can be obtained enjoying the following properties.
\begin{itemize}
\item It has exactly one singular point, at which the function attains its maximal (resp. minimal) value.
\item The preimage of the minimal (resp. maximal) value is the boundary of the disk.
\end{itemize}
We can prepare the product map of such a function and the identity map on the boundary $\partial \bar{N}$, and we have a surjection onto $N(\partial W_f)$, a suitable collar neighborhood of $\partial \bar{N}$, by restricting the target of the product map and composing with a suitable embedding.
We have a smooth trivial bundle over $\bar{N}-{\rm Int}\ N(\partial W_f)=W_f-{\rm Int}\ N(\partial W_f)$ whose fiber is an ($m-n$)-dimensional standard sphere. By gluing these maps in a suitable way, via the product of the following two maps, we have a special generic map $f=f_0$ enjoying the first two properties.
\begin{itemize}
\item The diffeomorphism agreeing with the diffeomorphism used for the identification between the corresponding connected component of the boundary $\partial N(\partial W_f)$ and the boundary $\partial (W_f -{\rm Int}\ N(\partial W_f))$.
\item The identity map on the fiber, which is the unit sphere $S^{m-n}$. Note that the fibers of the both trivial bundles can be regarded as the unit sphere $S^{m-n}$.
\end{itemize}
Let $0 \leq j \leq m-n$.
We replace the canonical projection of the unit sphere to $\mathbb{R}$ in the former product map by a suitable one onto ${\mathbb{R}}^{1+j}$. For the projection of the latter trivial smooth bundle, we replace the projection by the product map of a canonical projection of the unit sphere to ${\mathbb{R}}^j$ and the identity map on the base space. We glue in a natural and suitable way to have a desired map $f_j$.
In the case $j>m-n$, for construction of a desired map $f_j$, we consider the canonically defined embeddings of the unit spheres instead. This argument is a new one, presented first in the present paper.
This completes the proof of Proposition \ref{prop:7}, and by the construction, Proposition \ref{prop:8}, which is presented later.
\end{proof}
\begin{Def}[Partially defined in \cite{kitazawa2}]
\label{def:2}
We call the special generic map $f=f_0$ obtained in this way a {\it product-organized} special generic map.
\end{Def}
\begin{Prop}
\label{prop:8}
For $0 \leq j \leq m-n$, $f_j$ in Proposition \ref{prop:7} can be also constructed as a product-organized special generic map.
\end{Prop}
We can generalize ${\mathbb{R}}^n$ here to a general connected smooth manifold with no boundary. However, we concentrate on the cases of ${\mathbb{R}}^n$ essentially.
For a linear bundle over a base space $X$, we can define the {\it $j$-th Stiefel-Whitney class} as an element of $H^j(X;\mathbb{Z}/2\mathbb{Z})$ and the {\it $j$-th Pontrjagin class} as an element of $H^{4j}(X;\mathbb{Z})$.
The tangent bundles of smooth manifolds are important linear bundles whose fibers are Euclidean spaces.
The {\it $j$-th Stiefel-Whitney class} of a smooth manifold $X$ is defined as that of its tangent bundle and an element of $H^j(X;\mathbb{Z}/2\mathbb{Z})$.
The {\it $j$-th Pontrjagin class} of a smooth manifold $X$ is defined as that of its tangent bundle and an element of $H^{4j}(X;\mathbb{Z})$.
The following follows from elementary arguments related to this.
\begin{Prop}
\label{prop:9}
Let $j>0$ be an arbitrary integer.
For the manifold $M$ in Proposition \ref{prop:7}, the $j$-th Stiefel-Whitney class and the $j$-th Pontrjagin class of $M$ are the zero elements of $H^j(M;\mathbb{Z}/2\mathbb{Z})$ and $H^{4j}(M;\mathbb{Z})$, respectively.
\end{Prop}
See \cite{milnorstasheff} for Stiefel-Whitney classes and Pontrjagin classes.
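As a simple illustration of this vanishing, which is not needed for the proofs, the products of spheres appearing in Example \ref{ex:1} have the same property: the tangent bundle of $S^{n_j} \times S^{m-n_j}$ is stably trivial, since $TS^k \oplus {\varepsilon}^1$ is trivial for every $k$, and the Stiefel-Whitney classes and the Pontrjagin classes are invariants of the stable tangent bundle, so all of them of positive degree are the zero elements.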
\section{Main Theorems and related arguments.}
We prove Main Theorems. We need some arguments and results of the author. For example, ones in \cite{kitazawa2,kitazawa3,kitazawa4,kitazawa5,kitazawa6} are important.
\begin{Prop}[\cite{kitazawa2,kitazawa3}]
\label{prop:10}
Let $n \geq 5$ be an integer.
Let $G$ be a finite commutative group.
We have an $n$-dimensional compact and simply-connected manifold ${\bar{N}}_{G,n}$ enjoying the following properties.
\begin{enumerate}
\item ${\bar{N}}_{G,n}$ is smoothly embedded in ${\mathbb{R}}^n$.
\item The boundary $\partial {\bar{N}}_{G,n}$ is connected.
\item $H_{n-3}({\bar{N}}_{G,n};\mathbb{Z})$ is isomorphic to $G$.
\item $H_j({\bar{N}}_{G,n};\mathbb{Z})$ is the trivial group for $j \neq 0,n-3$.
\end{enumerate}
\end{Prop}
\begin{proof}
We prove this referring to \cite{kitazawa3} and in a different way. However, the method is essentially the same.
First, let $G$ be a cyclic group.
We can choose a 3-dimensional closed and connected manifold $Y_G$ smoothly embedded in ${\mathbb{R}}^n$ such that ${\pi}_1(Y_G)$ and $H_1(Y_G;\mathbb{Z})$ are isomorphic to $G$ and $H_2(Y_G;\mathbb{Z})$ is the trivial group. This is due to \cite{wall} and for example, we can choose a so-called {\it Lens space}.
We remove from $S^n \supset {\mathbb{R}}^n$ the interior of a small closed tubular neighborhood $N(Y_G)$ of $Y_G$, and ${S^n}_{N(Y_G)}$ denotes the resulting manifold. $N(Y_G)$ is regarded as the total space of a trivial linear bundle over $Y_G$ whose fiber is the ($n-3$)-dimensional unit disk $D^{n-3}$.
$\partial {S^{n}}_{N(Y_G)}$ is regarded as the total space of the subbundle of the previous bundle whose fiber is $\partial D^{n-3} \subset D^{n-3}$.
We have a Mayer-Vietoris sequence
$$\rightarrow H_j(\partial {S^n}_{N(Y_G)};\mathbb{Z})=H_j(\partial N(Y_G);\mathbb{Z}) \rightarrow H_j(N(Y_G);\mathbb{Z}) \oplus H_j({S^n}_{N(Y_G)};\mathbb{Z}) \rightarrow H_j(S^n;\mathbb{Z}) \rightarrow$$
\ \\
and the homomorphism from the first group to the second group is an isomorphism for $1 \leq j \leq n-1$.
This is due to the fundamental fact that $H_j(S^n;\mathbb{Z})$ is isomorphic to $\mathbb{Z}$ for $j=0,n$ and the trivial group for $1 \leq j \leq n-1$ and that the homomorphism from $H_n(S^n;\mathbb{Z})$ into $H_{n-1}(\partial {S^n}_{N(Y_G)};\mathbb{Z})$ is an isomorphism between groups isomorphic to $\mathbb{Z}$.
$N(Y_G) \supset \partial {S^n}_{N(Y_G)}$ are a trivial linear bundle over $Y_G$ whose fiber is the unit disk $D^{n-3}$ and its subbundle obtained by considering the fiber $S^{n-4}=\partial D^{n-3} \subset D^{n-3}$, respectively.
${S^n}_{N(Y_G)}$ is a connected manifold.
$H_j({S^n}_{N(Y_G)};\mathbb{Z})$ is isomorphic to $H_{n-1}(Y_G;\mathbb{Z})$ for $1 \leq j \leq n-2$ and the trivial group for $j=n-1,n$.
${\pi}_1(\partial {S^n}_{N(Y_G)})$ is isomorphic to the direct sum $\mathbb{Z} \oplus G$ in the case $n=5$ and to $G$ in the case $n>5$, and is commutative. $S^n$ is simply-connected. For $S^n$ and the same submanifolds $N(Y_G)$ and ${S^n}_{N(Y_G)}$, we can apply the Seifert--van Kampen theorem. ${\pi}_1({S^n}_{N(Y_G)})$ is shown to be isomorphic to $\mathbb{Z}$.
We define
${\bar{N}}_G$ as a manifold in the following way. First, attach to ${S^n}_{N(Y_G)}$ a copy of $D^3 \times D^{n-3}$ so that this is the total space of the restriction of the linear bundle $N(Y_G)$ to a copy of the $3$-dimensional unit disk $D^3$ smoothly embedded in the base space $Y_G$. After that, eliminate the corner.
We have a Mayer-Vietoris sequence for ${\bar{N}}_G$ and its decomposition into the two manifolds, one of which is ${S^n}_{N(Y_G)}$ and the other of which is the copy of $D^3 \times D^{n-3}$. More precisely, the resulting manifold is decomposed by an ($n-1$)-dimensional smoothly embedded manifold diffeomorphic to $D^3 \times \partial D^{n-3}$ into the two manifolds.
We can also apply the Seifert--van Kampen theorem for these manifolds.
Investigating our Mayer-Vietoris sequence and our argument using the Seifert--van Kampen theorem completes our proof in the case of a cyclic group $G$.
For a general finite commutative group $G$, we have a desired manifold by considering a boundary connected sum of finitely many such manifolds forming the family $\{{\bar{N}}_{G_j}\}_{j=1}^l$ such that each $G_j$ is a cyclic group and that the direct sum ${\oplus}_{j=1}^l G_j$ is isomorphic to $G$. We add that the boundary connected sum is taken in the smooth category.
This completes the proof.
\end{proof}
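As an explicit illustration of the construction, which is not needed in the sequel, for $G=\mathbb{Z}/p\mathbb{Z}$ with $p>1$ we may take $Y_G$ to be a lens space with fundamental group $\mathbb{Z}/p\mathbb{Z}$; its homology groups whose coefficient rings are $\mathbb{Z}$ are $H_0 \cong H_3 \cong \mathbb{Z}$, $H_1 \cong \mathbb{Z}/p\mathbb{Z}$ and $H_2 \cong \{0\}$, and the resulting manifold ${\bar{N}}_{G,n}$ then satisfies $H_{n-3}({\bar{N}}_{G,n};\mathbb{Z}) \cong \mathbb{Z}/p\mathbb{Z}$, as stated in Proposition \ref{prop:10}.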
We have the following by applying Proposition \ref{prop:10} and other arguments such as Propositions \ref{prop:7}, \ref{prop:8} and \ref{prop:9}.
\begin{Thm}[\cite{kitazawa5} ($(m,n)=(6,5)$)]
\label{thm:4}
Let $m \geq n+1$ be integers where $n \geq 5$. Let $G$ be a finite commutative group. We have an $m$-dimensional closed and simply-connected manifold $M_{G,n,m-n}$ and a product-organized special generic map $f_{G,n,m-n}:M_{G,n,m-n} \rightarrow {\mathbb{R}}^n$ enjoying the following properties.
\begin{enumerate}
\item \label{thm:4.1}
For any positive integer $j$, the $j$-th Stiefel-Whitney class and the $j$-th Pontrjagin class of $M_{G,n,m-n}$ are the zero elements of $H^j(M_{G,n,m-n};\mathbb{Z}/2\mathbb{Z})$ and $H^{4j}(M_{G,n,m-n};\mathbb{Z})$, respectively.
\item \label{thm:4.2}
The restriction of the map to the singular set is an embedding. The image is diffeomorphic to ${\bar{N}}_{G,n}$ in Proposition \ref{prop:10} and to $W_{f_{G,n,m-n}}$, where $W_{f_{G,n,m-n}}$ abuses the notation "$W_f$ in Proposition \ref{prop:2}".
\item \label{thm:4.3}
$H_j(M_{G,n,m-n};\mathbb{Z})$ is isomorphic to $G$ for $j=n-3,m-n+2$, $\mathbb{Z}$ for $j=0,m$, and the trivial group otherwise.
\end{enumerate}
Furthermore, according to \cite{jupp,wall2,zhubr,zhubr2} for example, in a suitable case, the topology and the differentiable structure of the manifold $M_{G,5,1}$ are uniquely determined.
\end{Thm}
\begin{proof}
We have a product-organized special generic map $f_{G,n,m-n}:M_{G,n,m-n} \rightarrow {\mathbb{R}}^n$ on a suitable closed and simply-connected manifold $M_{G,n,m-n}$ enjoying (\ref{thm:4.1}) and (\ref{thm:4.2}) here by Propositions \ref{prop:7}, \ref{prop:8} and \ref{prop:9}
with Proposition \ref{prop:4}.
It is sufficient to show (\ref{thm:4.3}) here.
$W_{f_{G,n,m-n}}$ is simply-connected and by Proposition \ref{prop:10} collapses to an ($n-2$)-dimensional polyhedron.
By revising the proof of Propositions \ref{prop:4} and \ref{prop:6}, we can see that ${q_{f_{G,n,m-n}}}_{\ast}:H_j(M_{G,n,m-n};\mathbb{Z}) \rightarrow H_j(W_{f_{G,n,m-n}};\mathbb{Z})$ is an isomorphism for $0 \leq j \leq m-n+1$.
In the present proof, "Proposition \ref{prop:4}" means this isomorphism. \\
\\
Hereafter, $W_{G,n,m-n}$ denotes "$W$ in Propositions \ref{prop:2} and \ref{prop:3}" where $M_{G,n,m-n}$ is used instead of "$M$ in these propositions". This is ($m+1$)-dimensional, compact and simply-connected.
We also have a homology exact sequence
$$\rightarrow H_{j+1}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z}) \rightarrow H_j(M_{G,n,m-n};\mathbb{Z}) \rightarrow H_j(W_{G,n,m-n};\mathbb{Z}) \rightarrow H_j(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z}) \rightarrow $$
for the pair $(W_{G,n,m-n},M_{G,n,m-n})$.\\
\ \\
Case 1 The case $m-n+1 \geq \frac{m}{2}$. \\
The resulting manifold $M_{G,n,m-n}$ is a desired manifold
by applying Proposition \ref{prop:4} and Poincar\'e duality theorem. \\
\ \\
Case 2 The case $m$ is even and $m-n+2=\frac{m}{2}$.\\
\\
We also have a homology exact sequence
$$\rightarrow H_{n-2}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z}) \rightarrow H_{n-3}(M_{G,n,m-n};\mathbb{Z}) \rightarrow H_{n-3}(W_{G,n,m-n};\mathbb{Z}) \rightarrow H_{n-3}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z}) \rightarrow $$
for $(W_{G,n,m-n},M_{G,n,m-n})$. $H_{n-2}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z})$ is isomorphic to $H^{m-n+3}(W_{G,n,m-n};\mathbb{Z})$, $H^{m-n+3}(W_{f_{G,n,m-n}};\mathbb{Z})$, and $H_{2n-m-3}(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}};\mathbb{Z})$ by Proposition \ref{prop:3} and by virtue of Poincar\'e duality theorem for $W_{f_{G,n,m-n}}$. We have the relation $2n-m-4=0$ and the previous groups are isomorphic to $H_{1}(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}};\mathbb{Z})$. This is the trivial group since $W_{f_{G,n,m-n}}$ is simply-connected and $\partial W_{f_{G,n,m-n}}$ is connected. This is shown by the homology exact sequence for $(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}})$.
Similarly, $H_{n-1}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z})$ is isomorphic to $H_{2}(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}};\mathbb{Z})$ and $H^{n-2}(W_{f_{G,n,m-n}};\mathbb{Z})$ by Proposition \ref{prop:3} and by virtue of Poincar\'e duality theorem for $W_{f_{G,n,m-n}}$. This group is shown to be finite by Proposition \ref{prop:10} and the universal coefficient theorem. Similarly, $H_{n-3}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z})$ is isomorphic to $H_{0}(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}};\mathbb{Z})$ and the trivial group. We have the relation $n-2=\frac{m}{2}$. By Proposition \ref{prop:4}, Proposition \ref{prop:10} and Poincar\'e duality theorem for $M_{G,n,m-n}$, this completes the proof for this case. \\
\ \\
Case 3 The case $m$ is odd and $m-n+\frac{3}{2}=\frac{m}{2}$.\\
\\ We also have a homology exact sequence
$$\rightarrow H_{n-2}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z}) \rightarrow H_{n-3}(M_{G,n,m-n};\mathbb{Z}) \rightarrow H_{n-3}(W_{G,n,m-n};\mathbb{Z}) \rightarrow H_{n-3}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z}) \rightarrow $$
for $(W_{G,n,m-n},M_{G,n,m-n})$. $H_{n-2}(W_{G,n,m-n},M_{G,n,m-n};\mathbb{Z})$ is isomorphic to $H^{m-n+3}(W_{G,n,m-n};\mathbb{Z})$, $H^{m-n+3}(W_{f_{G,n,m-n}};\mathbb{Z})$, and $H_{2n-m-3}(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}};\mathbb{Z})$ by Proposition \ref{prop:3} and by virtue of Poincar\'e duality theorem for $W_{f_{G,n,m-n}}$. We have the relation $2n-m-3=0$ and the previous groups are isomorphic to $H_{0}(W_{f_{G,n,m-n}},\partial W_{f_{G,n,m-n}};\mathbb{Z})$ and the trivial group. We have the relation $n-2=\frac{m-1}{2}$. By Proposition \ref{prop:4}, Proposition \ref{prop:10} and Poincar\'e duality theorem for $M_{G,n,m-n}$, this completes the proof for this case. \\
\ \\
This completes the proof of (\ref{thm:4.3}). \\
This completes the proof.
\end{proof}
\begin{Rem}
\cite{nishioka} has studied Case 3 of the proof of Theorem \ref{thm:4}. More precisely, for a positive integer $k>0$ and a ($2k+1$)-dimensional closed and connected manifold $M$ such that the group $H_1(M;\mathbb{Z})$ is the trivial group admitting a special generic map into ${\mathbb{R}}^{k+2}$, the group $H_k(M;\mathbb{Z})$ has been shown to be free. The case $(m,n)=(5,4)$ of Theorem \ref{thm:3} concerns the case $k=2$ here.
\end{Rem}
\begin{proof}
[A proof of Main Theorem \ref{mthm:1}]
First consider an $m^{\prime}$-dimensional closed and simply-connected manifold $M_{G,5,m^{\prime}-5}$ in Theorem \ref{thm:4} where $m^{\prime}$ denotes "$m$ of Theorem \ref{thm:4}". We can embed this smoothly into ${\mathbb{R}}^{m^{\prime}+1}$ by Proposition \ref{prop:7} for example.
We define $M$ as $M:=M_{G,5,m^{\prime}-5} \times S^{m-m^{\prime}}$.
The property (\ref{mthm:1.2}) follows from Theorem \ref{thm:4} (\ref{thm:4.3}) applied with $n=5$. This completes our exposition on the properties (\ref{mthm:1.1}) and (\ref{mthm:1.2}).
By Propositions \ref{prop:2}, \ref{prop:4} and \ref{prop:5} for example, this does not admit special generic maps into ${\mathbb{R}}^n$ for $n=1,2,3,4$. By the previous argument, we can consider the product map of a Morse function with exactly two singular points on $S^{m-m^{\prime}}$ and the identity map on $M_{G,5,m^{\prime}-5}$ and embed the image smoothly in a suitable way into ${\mathbb{R}}^{m^{\prime}+1}$. This can be constructed as product-organized one. This implies the existence of special generic maps into ${\mathbb{R}}^n$ for $m^{\prime}<n \leq m$.
We prove the non-existence into ${\mathbb{R}}^{m^{\prime}}$. Suppose that such a special generic map $f$ on $M$ into ${\mathbb{R}}^{m^{\prime}}$ exists. From Proposition \ref{prop:4}, the homomorphism ${q_f}_{\ast}$ maps the homology group $H_{m-m^{\prime}}(M;\mathbb{Z})$ onto $H_{m-m^{\prime}}(W_f;\mathbb{Z})$ as an isomorphism.
$m^{\prime}-1$ is the dimension of the boundary $\partial W_f \subset W_f$ and the relation $a+(m^{\prime}-1) \leq m$ is assumed as a condition. We can apply Poincar\'e duality or intersection theory for $M$. From this,
the torsion subgroup of $H_{a}(M;\mathbb{Z})$, which is not the trivial group from the assumption and K\"unneth theorem for the product $M_{G,5,m^{\prime}-5} \times S^{m-m^{\prime}}$, is mapped onto $H_{a}(W_f;\mathbb{Z})$ by the homomorphism ${q_f}_{\ast}$ and this is an isomorphism.
Here, we apply fundamental methods used first in \cite{kitazawa4}, generalizing some methods used in \cite{saeki1}, and used also in \cite{kitazawa5,kitazawa6} for example.
We choose an element $h_1 \in H_{m-m^{\prime}}(M;\mathbb{Z})$ and an element $h_2 \in H_{a}(M;\mathbb{Z})$ which are not the zero elements. Furthermore, we choose $h_2$ as an element of a summand of a direct sum decomposition of the torsion subgroup of $H_{a}(M;\mathbb{Z})$ into cyclic groups. This decomposition is due to the fundamental theorem on the structures of finitely generated commutative groups. Note that $a=2,m^{\prime}-3$ and that $M$ is the product of a manifold $M_{G,5,m^{\prime}-5}$ in Theorem \ref{thm:4} and $S^{m-m^{\prime}}$.
By the isomorphisms before and the Poincar\'e duality theorem for $W_f$, we have an element of $H_{m^{\prime}-(m-m^{\prime})}(W_f,\partial W_f;\mathbb{Z})$ which is not the zero element and which corresponds in a canonical and unique way to the element ${q_f}_{\ast}(h_1)$, the value of the isomorphism presented before at $h_1$. For a similar reason, there exists a suitable commutative group $G_{h_2}$ which is finite and cyclic and which is not the trivial group, and we have an element of $H_{m^{\prime}-a}(W_f,\partial W_f;G_{h_2})$ which is not the zero element and which corresponds in a canonical and unique way to the element of $H_{a}(W_f;G_{h_2})$ obtained from $h_2$ by applying the isomorphism defined canonically by ${q_f}_{\ast}$ before and then the homomorphism defined naturally from the canonical quotient map from $\mathbb{Z}$ onto $G_{h_2}$. For $h_1$, we consider the value of the canonically defined homomorphism defined naturally from the canonical quotient map from $\mathbb{Z}$ onto $G_{h_2}$ at the obtained element of $H_{m^{\prime}-(m-m^{\prime})}(W_f,\partial W_f;\mathbb{Z})$.
We denote these two elements by ${h_{1,G_{h_2}}}^{\prime} \in H_{m^{\prime}-(m-m^{\prime})}(W_f,\partial W_f;G_{h_2})$ and ${h_{2,G_{h_2}}}^{\prime} \in H_{m^{\prime}-a}(W_f,\partial W_f;G_{h_2})$. By considering a (so-called {\it generic}) {\it intersection}, we have an element of $H_1(W_f,\partial W_f;G_{h_2})$. The degree $1$ is due to the relation $(m^{\prime}-(m-m^{\prime}))+(m^{\prime}-a)-m^{\prime}=2m^{\prime}-m-a=1$ with the assumption on the integers.
This is the sum of elements represented by closed intervals embedded smoothly and properly in $W_f$.
Furthermore, the two boundary points of each interval are mapped into distinct connected components of $\partial W_f$ and the interior is embedded into the interior of $W_f$.
Related to this, circles in (the interior of) $W_f$ are null-homotopic since $W_f$ is simply-connected.
By respecting the bundle whose projection is $r:W \rightarrow W_f$ in Proposition \ref{prop:3}, where we abuse the notation, and considering a variant of so-called {\it Thom isomorphisms} or {\it prism-operators}, we have an element of $H_{m-m^{\prime}+2}(W,M;G_{h_2})$. By considering the boundary, we have an element of $H_{m-m^{\prime}+1}(M;G_{h_2})$.
For this see \cite{saeki1} and for algebraic topological notions see \cite{hatcher,milnorstasheff} for example.
We go back to our arguments for the proof. If this element is not the zero element, then by fundamental arguments on Poincar\'e duality and intersection theory, this element must be obtained as an element which is the value of the canonically defined homomorphism induced naturally from the canonical quotient map from $\mathbb{Z}$ to $G_{h_2}$. By the structure of $M=M_{G,5,m^{\prime}-5} \times S^{m-m^{\prime}}$ and conditions on homology groups together with Poincar\'e duality and K\"unneth theorem for example, this is not the zero element and this is not an element which is the value of the canonically defined homomorphism induced naturally from the canonical quotient map from $\mathbb{Z}$ to $G_{h_2}$.
We add, to be precise, that this homomorphism is one from $H_{m-m^{\prime}+1}(M;\mathbb{Z})$ into $H_{m-m^{\prime}+1}(M;G_{h_2})$.
We find a contradiction. We have (\ref{mthm:1.3}). This completes the proof.
\end{proof}
\begin{proof}[A proof of Main Theorem \ref{mthm:2}]
$M_1$ is obtained as a connected sum of the following three manifolds taken in the smooth category.
\begin{itemize}
\item An $8$-dimensional manifold
admitting a product-organized special generic map into ${\mathbb{R}}^5$ in Theorem \ref{thm:4} by considering "$(m,n,G)=(8,5,G_{{\rm F},1})$ in the theorem".
\item An $8$-dimensional manifold
admitting a product-organized special generic map into ${\mathbb{R}}^6$ in Theorem \ref{thm:4} by considering "$(m,n,G)=(8,6,G_{{\rm F},2})$ in the theorem".
\item A manifold represented as a suitably chosen connected sum of manifolds diffeomorphic to $S^2 \times S^6$, $S^3 \times S^5$ or $S^4 \times S^4$, which admit special generic maps into ${\mathbb{R}}^6$ in Example \ref{ex:1}. The special generic maps discussed in Example \ref{ex:1} are also constructed as product-organized maps by fundamental arguments on special generic maps.
\end{itemize}
We can see the non-existence of special generic maps on $M_1$ into ${\mathbb{R}}^n$ for $n=1,2,3,4,5$ from Propositions \ref{prop:2}, \ref{prop:4}, \ref{prop:5} and Theorem \ref{thm:1} for example.
By a fundamental argument for construction of special generic maps in \cite{saeki1}, we can construct a product-organized special generic map on $M_1$ into ${\mathbb{R}}^6$.
We give $M_2$. This is obtained as a connected sum of the following three manifolds taken in the smooth category.
\begin{itemize}
\item An $8$-dimensional manifold
admitting a product-organized special generic map into ${\mathbb{R}}^5$ in Theorem \ref{thm:4} by considering "$(m,n,G)=(8,5,G_{{\rm F},1})$ in the theorem".
\item
An $8$-dimensional closed and simply-connected manifold "$M^{\prime} \times S^2$ in Main Theorem \ref{mthm:1}", having the 3rd homology group $H_3(M^{\prime} \times S^2;\mathbb{Z})$ isomorphic to $G_{{\rm F},2}$. More precisely, we consider the case "$(m,m^{\prime},a,G)=(8,6,3,G_{{\rm F},2})$ in Main Theorem \ref{mthm:1}".
\item A manifold represented as a suitably chosen connected sum of manifolds diffeomorphic to $S^2 \times S^6$, $S^3 \times S^5$ or $S^4 \times S^4$, which admit special generic maps into ${\mathbb{R}}^7$ in Example \ref{ex:1}. These maps can be constructed as product-organized maps as before.
\end{itemize}
We can see the non-existence of special generic maps on $M_2$ into ${\mathbb{R}}^n$ for $n=1,2,3,4,5,6$. For $n=1,2,3,4,5$, for example, Propositions \ref{prop:2}, \ref{prop:4}, \ref{prop:5}, \ref{prop:6} and Theorem \ref{thm:1} complete the proof. For $n=6$ here, we apply Main Theorem \ref{mthm:1} directly or revise arguments in the proof of Main Theorem \ref{mthm:1} suitably if we need. In the case ($m=8$ and) $n=6$, we have $m-n=8-6=2$, and there is no finite sequence of elements of $H^j(M;A)$ with $0 \leq j \leq m-n=2$ whose cup product is not the zero element and has degree greater than or equal to $6$. This means that we cannot apply Proposition \ref{prop:6} to give a proof for the case $n=6$. We can show the existence of special generic maps into ${\mathbb{R}}^n$ for $n=7,8$ similarly.
This completes the proof.
\end{proof}
This adds to Main Theorem 4 of \cite{kitazawa2} where situations are a bit different.
Last, our new examples are discovered especially motivated by \cite{kitazawa6}; compare those results to our new results.
\end{document} |
\begin{document}
\title[]
{When is scalar multiplication decidable?}
\author[P. Hieronymi]{Philipp Hieronymi}
\address
{Department of Mathematics\\University of Illinois at Urbana-Champaign\\1409 West Green Street\\Urbana, IL 61801}
\email{[email protected]}
\urladdr{http://www.math.uiuc.edu/\textasciitilde phierony}
\subjclass[2010]{Primary 03B25, Secondary 03C64, 11A67}
\thanks{The author was partially supported by NSF grant DMS-1300402 and by UIUC Campus Research Board award 14194.}
\date{\today}
\begin{abstract} Let $K$ be a subfield of $\mathbb{R}$. The theory of $\mathbb{R}$ viewed as an ordered $K$-vector space and expanded by a predicate for $\mathbb{Z}$ is decidable if and only if $K$ is a real quadratic field.
\end{abstract}
\maketitle
\section{Introduction}
It has long been known that the first order theory of the structure $(\mathbb{R},<,+,\mathbb{Z})$ is decidable. Arguably due to Skolem \cite{skolem}\footnote{See Smory\'nski \cite[Exercise III.4.15]{smor}.}, the result can be deduced easily from B\"uchi's theorem on the decidability of the monadic second order theory of one successor \cite{Buchi}\footnote{See Boigelot, Rassart and Wolper \cite{BRW}.}, and was later rediscovered independently by Weispfenning \cite{weis} and Miller \cite{ivp}. However, a consequence of G\"odel's famous first incompleteness theorem \cite{Goedel} states that when expanding $(\mathbb{R},<,+,\mathbb{Z})$ by a symbol for multiplication on $\mathbb{R}$, the theory of the resulting structure $(\mathbb{R},<,+,\cdot,\mathbb{Z})$ becomes undecidable. This observation gives rise to the following natural and surprisingly still open question:
\begin{center}\emph{How many traces of multiplication can be added to $(\mathbb{R},<,+,\mathbb{Z})$ without making the first order theory undecidable?}
\end{center}
\noindent Here, building on earlier work of Hieronymi and Tychonievich \cite{HT} and Hieronymi \cite{H-Twosubgroups}, we will give a complete answer to this question when \emph{traces of multiplication} is taken to mean scalar multiplication by certain irrational numbers. To make this statement precise: for $a\in \mathbb{R}$, let $\lambda_a:\mathbb{R} \to \mathbb{R}$ be the function that takes $x$ to $ax$. Denote the structure $(\mathbb{R},<,+,\mathbb{Z},\lambda_a)$ by $\mathcal{S}_a$. A real number is quadratic if it is the solution to a quadratic equation with rational coefficients. Theorem B of \cite{HT} states that the theory of $\mathcal{S}_a$ is undecidable if $a$ is not quadratic. In this paper we prove that the theory of $\mathcal{S}_a$ is decidable if $a$ is quadratic. This establishes the following theorem.
\begin{thmA} The theory of $\mathcal{S}_a$ is decidable if and only if $a$ is quadratic.
\end{thmA}
\noindent By Theorem A of \cite{H-Twosubgroups}, the theory of the structure $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z} a)$ is decidable whenever $a$ is quadratic. Here, we will show how the decidability of the theory of $\mathcal{S}_a$ can be deduced from this result. Before explaining the precise strategy of the proof, we collect two corollaries of Theorem A.\newline
\noindent Theorem A induces a dichotomy for expansions of $(\mathbb{R},<,+,\mathbb{Z})$ by scalar multiplication by a single real number. This raises the question whether there is a similar characterization for expansions where scalar multiplication is added for every element of some subset of $\mathbb{R}$.
We say that real numbers $a_1,\dots,a_n$ are $\mathbb{Q}$-linearly dependent if there are $q_1,\dots, q_n\in \mathbb{Q}$, not all zero, such that
\[
q_1 a_1 + \dots + q_n a_n = 0.
\]
We say $a_1,\dots, a_n$ are $\mathbb{Q}$-linearly independent if they are not $\mathbb{Q}$-linearly dependent. Theorem C of \cite{HT} states that the structure $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z} a,\mathbb{Z} b)$ defines full multiplication on $\mathbb{R}$ whenever $1,a,b$ are $\mathbb{Q}$-linearly independent. Since $(\mathbb{R},<,+,\mathbb{Z},\lambda_a,\lambda_b)$ defines both $\mathbb{Z} a$ and $\mathbb{Z} b$, it also defines full multiplication on $\mathbb{R}$ whenever $1,a,b$ are $\mathbb{Q}$-linearly independent. On the other hand, if $a,b$ are irrational and $1,a,b$ are $\mathbb{Q}$-linearly dependent, then $\mathcal{S}_{a}$ defines the function $\lambda_b$; indeed, writing $b=q_0+q_1a$ with $q_0,q_1\in \mathbb{Q}$, we have $\lambda_b(x)=q_0x+q_1\lambda_a(x)$, and scalar multiplication by a fixed rational number is definable in $(\mathbb{R},<,+)$. With this observation we get the following result as a corollary of Theorem A and \cite[Theorem B]{HT}.
\begin{thmB} Let $S\subseteq \mathbb{R}$. Then the structure $(\mathbb{R},<,+,\mathbb{Z},(\lambda_b)_{b\in S})$ defines the same sets as exactly one of the following structures:
\begin{itemize}
\item[(i)] $(\mathbb{R},<,+,\mathbb{Z})$,
\item[(ii)] $(\mathbb{R},<,+,\mathbb{Z},\lambda_a)$, for some quadratic $a\in \mathbb{R}\setminus \mathbb{Q}$,
\item[(iii)] $(\mathbb{R},<,+,\cdot,\mathbb{Z})$.
\end{itemize}
\end{thmB}
\noindent The three cases are indeed exclusive. By Theorem A, a structure in (ii) does not define full multiplication on $\mathbb{R}$. Using the results in \cite{weis} or \cite{ivp}, one can show that $(\mathbb{R},<,+,\mathbb{Z})$ does not define any dense and codense subset of $\mathbb{R}$, while the structures in (ii) do\footnote{For a definable dense set in $\mathcal{S}_a$, see \cite[Proof of Theorem C]{HT}.}. As a corollary of Theorem B, we obtain the following generalization of Theorem A.
\begin{thmC} Let $K$ be a subfield of $\mathbb{R}$. The theory of the ordered $K$-vector space $\mathbb{R}$ expanded by a predicate for $\mathbb{Z}$ is decidable if and only if $K$ is a quadratic field.
\end{thmC}
\noindent The work in this paper is mainly motivated by purely foundational concerns. However, the structure $(\mathbb{R},<,+,\mathbb{Z})$ and the decidability of its first order theory have been used extensively in computer science, in particular in verification and model checking. Our Theorem A gives decidability in a larger language, and one might hope that the increase in expressive power leads to new applications; see Hieronymi, Nguyen, Pak \cite{HNP} for an application to decision problems in discrete geometry. One should mention however that for irrational $a$, the structure $\mathcal{S}_a$ defines a model of the monadic second order theory of one successor by \cite[Theorem D]{H-Twosubgroups}. Thus any implementation of the algorithm determining the truth of a sentence in $\mathcal{S}_a$ is limited by the high computational costs necessary to decide a statement in the monadic second order theory of one successor\footnote{Most of the computational complexity comes from the construction of the complement of a B\"uchi automaton. For details, see for example Vardi \cite{Vardi}. When considering just $(\mathbb{R},<,+,\mathbb{Z})$, some of the difficulties can be avoided, see Boigelot, Jodogne and Wolper \cite{BJW}.}. \\
\noindent We have already argued how Theorem A implies Theorem B and Theorem C. Theorem A itself is deduced from the following result.
\begin{thmD} Let $d\in \mathbb{Q}_{>0}$. Then $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z}\sqrt{d})$ defines multiplication by $\sqrt{d}$.
\end{thmD}
\noindent The proof of Theorem D is the goal of this paper and the only significant improvement over previous results. We now explain how Theorem A can be proved using Theorem D.
\begin{proof}[Proof of Theorem A from Theorem D]
By \cite[Theorem B]{HT}, the theory of $\mathcal{S}_a$ is undecidable whenever $a$ is not quadratic. To establish Theorem A, it is therefore enough to show that the theory of $\mathcal{S}_a$ is decidable for quadratic $a$.
Let $a\in \mathbb{R}$ be quadratic. Then there are $b,c,d\in \mathbb{Q}$ such that
$a= b + c\sqrt{d}$. By \cite[Theorem A]{H-Twosubgroups} the theory of $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z}\sqrt{d})$ is decidable. By Theorem D, the function $\lambda_{\sqrt{d}}$ is definable in $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z}\sqrt{d})$. Since $a=b+c\sqrt{d}$, $\lambda_a$ is definable in this structure as well. The decidability of the theory of $\mathcal{S}_a$ follows.
\end{proof}
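\noindent As a simple illustration of this reduction, for the golden ratio $\varphi=\frac{1+\sqrt{5}}{2}$ we may take $b=c=\frac{1}{2}$ and $d=5$, so that $\lambda_{\varphi}(x)=\frac{1}{2}x+\frac{1}{2}\lambda_{\sqrt{5}}(x)$ and the decidability of the theory of $\mathcal{S}_{\varphi}$ follows from that of $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z}\sqrt{5})$.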
For ease of notation, we denote $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z} a)$ by $\mathcal{R}_{a}$. Theorem D is not the first result of this form. Let $\varphi:=\frac{1+\sqrt{5}}{2}$ be the golden ratio. Then \cite[Theorem B]{H-Twosubgroups} states that $\mathcal{R}_{\varphi}$ defines $\lambda_\varphi$. The proof of this result depends heavily on the fact that the continued fraction expansion of $\varphi$ is $[1;1,\dots]$. To prove Theorem D, we build on this earlier work in \cite{H-Twosubgroups}, but have to add extra arguments coming both from the theory of continued fractions and from definability. In Section 4 of \cite{H-Twosubgroups} it is shown that the representations in the Ostrowski numeration system based on $a$ of both natural numbers and real numbers are definable in $\mathcal{R}_{\sqrt{d}}$. The Ostrowski numeration system is a non-standard way to represent numbers based on the continued fraction expansion of $a$. In Section 2 we recall the basic definitions and results about this numeration system. In Section 3, after reminding the reader of some of the previous results in \cite{H-Twosubgroups}, we prove that $\lambda_{\sqrt{d}}$ is definable in $\mathcal{R}_{\sqrt{d}}$. The main step in the proof is to realize that, using theorems about the continued fraction expansion of $\sqrt{d}$, multiplication by $\sqrt{d}$ can be expressed in terms of certain shifts in the Ostrowski representations and scalar multiplication by rational numbers. Most of the work in Section 3 will go into showing that these shifts are definable.
\subsection*{Notation} We denote $\{0,1,2,\dots\}$ by $\mathbb{N}$. Throughout this paper definable will mean definable without parameters.
\section{Continued fractions} In this section, we recall some basic and well-known definitions and results about continued fractions. Except for the definition of Ostrowski representations of real numbers, all these results can be found in every basic textbook on continued fractions. We refer the reader to Rockett and Szüsz \cite{RS} for proofs of the results, simply because to the author's knowledge it is the only book discussing Ostrowski representations of real numbers in detail. \newline
\noindent A \textbf{finite continued fraction expansion} $[a_0;a_1,\dots,a_k]$ is an expression of the form
\[
a_0 + \frac{1}{a_1 + \frac{1}{a_2+ \frac{1}{\ddots + \frac{1}{a_k}}}}
\]
For a real number $a$, we say $[a_0;a_1,\dots,a_k,\dots]$ is \textbf{the continued fraction expansion of $a$} if $a=\lim_{k\to \infty}[a_0;a_1,\dots,a_k]$ and $a_0\in \mathbb{Z}$, $a_i\in \mathbb{N}_{>0}$ for $i>0$. For the rest of this subsection, fix a positive irrational real number $a$ and assume that $[a_0;a_1,\dots,a_k,\dots]$ is the continued fraction expansion of $a$.
\begin{defn}\label{def:beta} Let $k\geq 1$. We define $p_k/q_k \in \mathbb{Q}$ to be the \textbf{$k$-th convergent of $a$}, that is the quotient $p_k/q_k$ where $p_k\in \mathbb{N}$, $q_k\in \mathbb{Z}$, $\gcd(p_k,q_k)=1$ and
\[
\frac{p_k}{q_k} = [a_0;a_1,\dots,a_k].
\]
The \textbf{$k$-th difference of $a$} is defined as $\beta_k := q_k a - p_k$. We define $\zeta_k \in \mathbb{R}$ to be the \textbf{$k$-th complete quotient of $a$}, that is
$\zeta_k = [a_k;a_{k+1},a_{k+2},\dots]$.
\end{defn}
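\noindent For instance, for $a=\sqrt{2}=[1;2,2,2,\dots]$ the first convergents are $\frac{1}{1},\frac{3}{2},\frac{7}{5},\frac{17}{12},\dots$, the corresponding differences are $\beta_0=\sqrt{2}-1$, $\beta_1=2\sqrt{2}-3$, $\beta_2=5\sqrt{2}-7$, and $\zeta_k=1+\sqrt{2}$ for every $k\geq 1$. This example is only meant as an illustration and is not used later.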
\noindent Maybe the most important fact about the convergents we will use is that both their numerators and denominators satisfy the following recurrence relation.
\begin{fact}{\cite[Chapter I.1 p. 2]{RS}}\label{fact:recursive} Let $q_{-1} := 0$ and $p_{-1}:=1$. Then $q_{0} = 1$, $p_{0}=a_0$ and for $k\geq 0$,
\begin{align*}
q_{k+1} &= a_{k+1} \cdot q_k + q_{k-1}, \\
p_{k+1} &= a_{k+1} \cdot p_k + p_{k-1}. \\
\end{align*}
\end{fact}
\noindent We directly get that for $k\geq 0$, $\beta_{k+1} = a_{k+1} \beta_k + \beta_{k-1}$. We need the following well-known facts about $\zeta_k$.
\begin{fact}{\cite[Chapter I.4 p. 9]{RS}} \label{fact:beta} Let $k\in \mathbb{N}_{>0}$. Then
$
\beta_{k+1} = - \frac{\beta_k}{\zeta_{k+2}}.
$
\end{fact}
\begin{fact}{\cite[Chapter I.2 p. 4]{RS}} \label{fact:zetaplus1} Let $k\in \mathbb{N}_{>0}$. Then
\[
\zeta_k = a_{k} + \frac{1}{\zeta_{k+1}}.
\]
\end{fact}
\begin{fact}{\cite[Chapter I.2 p. 4]{RS}}\label{fact:zetaexp} Let $k\in \mathbb{N}_{>0}$. Then
\[
a=
\frac{p_{k}\zeta_{k+1} + p_{k-1}}{q_{k}\zeta_{k+1} + q_{k-1}}.
\]
\end{fact}
\noindent We will now introduce a numeration system due to Ostrowski \cite{Ost}.
\begin{fact}{\cite[Chapter II.4 p. 24]{RS}}\label{ostrowski} Let $N\in \mathbb{N}$. Then $N$ can be written uniquely as
\[
N = \sum_{k=0}^{n} b_{k+1} q_{k},
\]
where $n\in \mathbb{N}$ and the $b_k$'s are in $\mathbb{N}$ such that $b_1<a_1$ and for all $k\in \mathbb{N}_{\leq n}$, $b_k \leq a_{k}$ and, if $b_k = a_{k}$, then $b_{k-1} = 0$.
\end{fact}
\noindent We call the representation of a natural number $N$ given by Fact \ref{ostrowski} the \textbf{Ostrowski representation} of $N$ based on $a$. Of course, we will drop the reference to $a$ whenever $a$ is clear from the context. If $\varphi$ is the golden ratio, the Ostrowski representation based on $\varphi$ is better known as the \textbf{Zeckendorf representation}, see Zeckendorf \cite{Zeckendorf}. We will also need a similar representation of a real number.
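\noindent For example (an illustration only), for $a=\sqrt{2}$ we have $q_0=1$, $q_1=2$, $q_2=5$, $q_3=12$, and the Ostrowski representation of $N=19$ based on $\sqrt{2}$ is
\[
19 = 1\cdot q_3 + 1\cdot q_2 + 1\cdot q_1 + 0\cdot q_0 = 12+5+2;
\]
here $b_4=b_3=b_2=1$ and $b_1=0$, so the conditions of Fact \ref{ostrowski} are satisfied.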
\begin{fact}\cite[Chapter II.6 Theorem 1]{RS}
\label{ostrowskireal} Let $c \in \mathbb{R}$ be such that $-\frac{1}{\zeta_1} \leq c < 1-\frac{1}{\zeta_1}$. Then $c$ can be written uniquely in the form
\[
c = \sum_{k=0}^{\infty} b_{k+1} \beta_{k},
\]
where $b_k \in \mathbb{N}$, $0 \leq b_1 < a_1$, $0 \leq b_k \leq a_{k}$, for $k> 1$, and $b_k = 0$ if $b_{k+1} = a_{k+1}$, and $b_k < a_{k}$ for infinitely many odd $k$.
\end{fact}
\subsection*{Square roots of rational numbers} So far, we have only introduced facts about continued fractions that were already used in \cite{H-Twosubgroups}. In order to extend the results from that paper, we will now recall some theorems about continued fractions for square roots of rational numbers. For the following, fix $d\in \mathbb{Q}_{>0}$ such that $d\neq c^2$ for all $c\in \mathbb{Q}$. When we refer to $p_k,q_k,\beta_k$ and $\zeta_k$, we mean the ones given by the continued fraction expansion of $\sqrt{d}$. In \cite{H-Twosubgroups} we used the fact that the continued fraction expansion of quadratic numbers is periodic. Here we need the following stronger statement for $\sqrt{d}$.
\begin{fact}\label{fact:cfsqrt}\cite[Theorem III.1.5]{RS} The continued fraction expansion of $\sqrt{d}$ is of the form
$[a_0;\overline{a_1,a_2,a_3\dots,a_2,a_1,2a_0}]$,
where the periodic part without the last term is a palindrome.
\end{fact}
\noindent Let $m$ be the length of the (minimal) period of the continued fraction expansion of $\sqrt{d}$.
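\noindent For instance (an illustration only), $\sqrt{7}=[2;\overline{1,1,1,4}]$: the periodic part without the last term is the palindrome $(1,1,1)$, the last term is $4=2a_0$, and $m=4$.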
\begin{fact}\label{eq:zeta} Let $\ell \in \mathbb{N}$. Then
\[
\zeta_{\ell m + 1} = \frac{1}{\sqrt{d}-a_0}.
\]
\end{fact}
\begin{proof}
By the periodicity and the definition of the $k$-th complete quotient, we obtain
\begin{equation*}
\zeta_{\ell m+1} = \zeta_1 = \frac{1}{\zeta_0 - a_0} = \frac{1}{\sqrt{d} - a_0}.
\end{equation*}
\end{proof}
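\noindent For example, for $d=7$ (so $a_0=2$ and $m=4$) this gives $\zeta_{4\ell+1}=\frac{1}{\sqrt{7}-2}=\frac{\sqrt{7}+2}{3}$ for every $\ell\in\mathbb{N}$; again, this is only an illustration.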
\noindent Our proof of Theorem D depends crucially on the following connection between the two sequences $(p_k)_{k\in \mathbb{N}}$ and $(q_k)_{k\in \mathbb{N}}$.
\begin{fact}\label{fact:pq} Let $k\in \mathbb{N}$. Then
\begin{align*}
p_{km} &= a_0 p_{km-1} + d q_{km-1},\\
q_{km} &= a_0 q_{km-1} + p_{km-1}.
\end{align*}
\end{fact}
\begin{proof} By Fact \ref{fact:zetaexp} and Fact \ref{eq:zeta},
\begin{align*}
\sqrt{d} &= \frac{p_{km}\zeta_{km+1} + p_{km-1}}{q_{km}\zeta_{km+1} + q_{km-1}}\\
&= \frac{p_{km}+ \sqrt{d} p_{km-1} - a_0 p_{km-1}}{q_{km} + \sqrt{d} q_{km-1}- a_0q_{km-1}}.
\end{align*}
Hence
\[
\sqrt{d} (q_{km} -a_0 q_{km-1} - p_{km-1}) + d q_{km-1} - p_{km}+ a_0 p_{km-1} =0.
\]
The statement follows from the irrationality of $\sqrt{d}$.
\end{proof}
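\noindent As a quick numerical check (an illustration only), take $d=7$, so $a_0=2$ and $m=4$. The convergents of $\sqrt{7}$ begin with $\frac{2}{1},\frac{3}{1},\frac{5}{2},\frac{8}{3},\frac{37}{14}$, and indeed $p_4=37=2\cdot 8+7\cdot 3=a_0p_3+dq_3$ and $q_4=14=2\cdot 3+8=a_0q_3+p_3$.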
\begin{fact}\label{fact:pkm}
There exist $s=(s_0,\dots,s_{m-1}), t=(t_0,\dots,t_{m-1})\in (\mathbb{Q}^2)^m$ such that for every $i\in \{0,\dots, m-1\}$ and for every $k\in \mathbb{N}$
\begin{align*}
p_{km} &= s_{i,1} \cdot p_{km+i} + s_{i,2}\cdot p_{km+i-1}\\
p_{km-1} &= t_{i,1} \cdot p_{km+i} + t_{i,2} \cdot p_{km+i-1}.
\end{align*}
\end{fact}
\begin{proof}
We prove the statement by induction on $i$. When $i=0$, then the statement holds with $s_0=(1,0)$ and $t_0=(0,1)$. Now assume there
exist $(s_0,\dots,s_i)$ and $(t_0,\dots,t_i)$ in $(\mathbb{Q}^2)^{i+1}$ such that for every $k\in \mathbb{N}$
\begin{align*}
p_{km} &= s_{i,1} \cdot p_{km+i} + s_{i,2}\cdot p_{km+i-1}\\
p_{km-1} &= t_{i,1} \cdot p_{km+i} + t_{i,2} \cdot p_{km+i-1}.
\end{align*}
By the periodicity of the continued fraction expansion of $\sqrt{d}$ and Fact \ref{fact:recursive}, there exists $u \in \mathbb{N}_{> 0}$ such that for every $k \in \mathbb{N}$
\[
p_{km+i+1} = u \cdot p_{km+i} + p_{km+i-1}.
\]
Thus
\begin{align*}
p_{km} &= s_{i,2} p_{km+i+1} + (s_{i,1}-s_{i,2}u) \cdot p_{km+i}\\
p_{km-1} &= t_{i,2} p_{km+i+1} + (t_{i,1}-t_{i,2}u) \cdot p_{km+i}.
\end{align*}
\end{proof}
\begin{cor}\label{cor:pq} There exist $v=(v_0,\dots,v_{m-1}), w=(w_0,\dots,w_{m-1})\in \mathbb{Q}^m$ such that for every $i\in \{0,\dots, m-1\}$ and for every $k\in \mathbb{N}$
\[
q_{km+i} = v_{i} \cdot p_{km+i+1} + w_i \cdot p_{km+i}.
\]
\end{cor}
\begin{proof}
By the periodicity of the continued fraction expansion of $\sqrt{d}$ and Fact \ref{fact:recursive}, we have that for $i\in \{0,\dots, m-1\}$ there exist $r_{i,1},r_{i,2} \in \mathbb{Z}$ such that for every $k \in \mathbb{N}$
\[
q_{km+i} = r_{i,1} \cdot q_{km} + r_{i,2} \cdot q_{km-1}.
\]
By Fact \ref{fact:pq},
\begin{align*}
q_{km-1} &= \frac{1}{d} p_{km} - \frac{a_0}{d} p_{km-1},\\
q_{km} &= \frac{a_0}{d} p_{km} + \frac{d-a_0^2}{d} p_{km-1}.
\end{align*}
Thus for $i\in \{0,\dots, m-1\}$ there exist $u_{i,1},u_{i,2} \in \mathbb{Q}$ such that for every $k \in \mathbb{N}$
\[
q_{km+i} = u_{i,1} \cdot p_{km} + u_{i,2} \cdot p_{km-1}.
\]
The statement now follows from Fact \ref{fact:pkm}.
\end{proof}
\noindent For purely periodic continued fraction expansions, like the one of the golden ratio $\varphi$, there is an even stronger connection between the $p_k$'s and the $q_k$'s. In that case, $q_{k+1}=p_k$. This fact was used in \cite{H-Twosubgroups} to show the definability of $\lambda_{\varphi}$ in $\mathcal{R}_{\varphi}$. In the next section, we will prove that the weaker statement of Corollary \ref{cor:pq} is enough to establish the definability of $\lambda_{\sqrt{d}}$.
\begin{fact}\label{fact:productzeta} The following equation holds:
\[
\zeta_1 \cdots \zeta_{m+1} = \frac{q_m}{\sqrt{d}-a_0} + q_{m-1}
\]
As a consequence, $\zeta_1 \cdots \zeta_{m+1} \in \mathbb{Q}(\sqrt{d})$.
\end{fact}
\begin{proof} Recall that $q_{-1}=0$ and $q_0=1$. Applying Fact \ref{fact:recursive} and Fact \ref{fact:zetaplus1} multiple times, we obtain
\begin{align*}
\zeta_1 \cdots \zeta_{m+1} &= (q_0 \zeta_1 + q_{-1}) \cdot \zeta_2 \cdots \zeta_{m+1}\\
&=\Big( (q_0 (a_1 + \frac{1}{\zeta_2}) + q_{-1}) \zeta_2\Big)
\cdot \zeta_3 \cdots \zeta_{m+1}\\
&=\big( q_1 \zeta_2 + q_0\big)
\cdot \zeta_3 \cdots \zeta_{m+1}\\
&= \cdots = \big(q_{m-1} \zeta_m + q_{m-2}\big) \cdot \zeta_{m+1}= q_m \zeta_{m+1} + q_{m-1}
\end{align*}
The statements of the fact follow directly from Fact \ref{eq:zeta}.
\end{proof}
\noindent The definability of $\lambda_{\zeta_1 \cdots \zeta_{m+1}}$ in $\mathcal{S}_{\sqrt{d}}$ is a direct consequence. Because of the periodicity of the continued fraction expansion of $\sqrt{d}$ and Fact \ref{fact:beta}, we also get the following fact.
\begin{fact}\label{eq:multbyc} Let $k\in \mathbb{N}$. Then
\[
\zeta_1 \cdots \zeta_{m+1} \cdot \beta_{k+m} = (-1)^m \cdot \beta_{k}.
\]
\end{fact}
\noindent Hence multiplying a real number $z\in[-\frac{1}{\zeta_1}, 1-\frac{1}{\zeta_1})$ by $\zeta_1 \cdots \zeta_{m+1}$ corresponds to an $m$-shift in the Ostrowski representation of $z$.
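\noindent Before turning to the definability results, here is a quick floating-point sanity check of Fact \ref{fact:productzeta} (an illustrative Python sketch of ours, again restricted to non-square integer $d$): the complete quotients are computed via $\zeta_0=\sqrt{d}$ and $\zeta_{k+1}=1/(\zeta_k-a_k)$, and their product over one period is compared with $q_m/(\sqrt{d}-a_0)+q_{m-1}$.
\begin{verbatim}
from math import isqrt, sqrt

def check_productzeta(d):
    """Float check of zeta_1 * ... * zeta_{m+1} = q_m/(sqrt(d)-a0) + q_{m-1}."""
    a0 = isqrt(d)
    # partial quotients a0, a1, ..., a_m of one period
    quotients, m_, den, a = [a0], 0, 1, a0
    while a != 2 * a0:
        m_ = den * a - m_
        den = (d - m_ * m_) // den
        a = (a0 + m_) // den
        quotients.append(a)
    m = len(quotients) - 1
    zeta, prod = sqrt(d), 1.0
    for k in range(m + 1):             # builds zeta_1, ..., zeta_{m+1}
        zeta = 1.0 / (zeta - quotients[k])
        prod *= zeta
    # convergent denominators q_{-1}=0, q_0=1, ..., q_m
    q = [0, 1]
    for ak in quotients[1:]:
        q.append(ak * q[-1] + q[-2])
    rhs = q[-1] / (sqrt(d) - a0) + q[-2]
    return prod, rhs

print(check_productzeta(7))
print(check_productzeta(13))
\end{verbatim}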
\section{Defining scalar multiplication} Let $d \in \mathbb{Q}_{>0}$. In this section we prove that $\mathcal{R}_{\sqrt{d}}$ defines $\lambda_{\sqrt{d}}$. We can easily reduce to the case that $\sqrt{d}\notin \mathbb{Q}$. Since $\mathcal{R}_{a}$ and $\mathcal{R}_{qa}$ are interdefinable for non-zero $q\in \mathbb{Q}$ and the set of squares of rational numbers is dense in $\mathbb{R}_{\geq 0}$, we can assume that $1.5 < \sqrt{d} < 2$. By Fact \ref{fact:cfsqrt}, the continued fraction expansion of $\sqrt{d}$ is of the form
\[
[a_0;\overline{a_1,a_2,a_3,\dots,a_2,a_1,2a_0}].
\]
Denote the length of the period by $m$ and set $s:= \max_i a_i$. From now on only the structure $\mathcal{R}_{\sqrt{d}}$ is considered. Whenever we say definable, we mean definable in this structure.
\subsection*{Preliminaries} We now recall all the necessary results from Section 4 of \cite{H-Twosubgroups}. The main observation from that section that we need is that the structure $(\mathbb{R},<,+,\mathbb{Z},\sqrt{d} \mathbb{Z})$ defines predicates allowing us to definably recover the digits of the Ostrowski representation of a given number. Everything stated here is either explicitly stated in \cite{H-Twosubgroups} or can be obtained by minor modifications.\newline
\noindent Since $1<\sqrt{d}<2$, we have
\[
[-\frac{1}{\zeta_1}, 1-\frac{1}{\zeta_1})=[1-\sqrt{d},2-\sqrt{d}).\]
We denote this interval by $I$. By the statement after \cite[Definition 4.1]{H-Twosubgroups}, the set $\{q_k \sqrt{d} : k > 0\}$ is definable. We write $V$ for this set and $s_V$ for the successor function on $V$. Since $V$ is definable, the reader can easily verify that $s_V$ is definable as well.
\begin{defn} Let $f: \mathbb{N}\sqrt{d} \to \mathbb{R}$ map $n\sqrt{d}$ to $\sum_k b_{k+1} \beta_k$ if $\sum_k b_{k+1} q_k$ is the Ostrowski representation of $n$.
\end{defn}
\noindent By \cite[Lemma 4.3]{H-Twosubgroups} the function $f$ is definable. This allows us to move definably between natural numbers and real numbers whose Ostrowski representations have the same digits.
\begin{defn}
For $i \in \{0,\dots,s\}$ we define $E_i \subseteq V\times I$ such that $(q_{\ell}\sqrt{d},c) \in E_i$ if and only if there is a sequence $(b_k)_{k\in \mathbb{N}_{>0}}$ such that $\sum_{k=0}^{\infty} b_{k+1} \beta_k$ is the Ostrowski representation of $c$ and $b_{\ell+1}= i$.
\end{defn}
\noindent Lemma 4.11 of \cite{H-Twosubgroups} only states that $E_i$ is definable for $i \in \{0,1\}$. However, the reader can check that its proof easily extends to show that $E_i$ is definable for every $i\in \{0,\dots,s\}$.
\subsection*{Defining shifts} We now start to extend the results from \cite{H-Twosubgroups}. In order to define multiplication by $\sqrt{d}$, we have to show that certain generalized shifts in the Ostrowski representation are definable.
\begin{lem} Let $n \in \mathbb{N}_{>1}$ and $j\in\{0,\dots,n-1\}$. Then the set
\[
V_{j,n} := \{ q_l\sqrt{d} \in V \ : \ l = j \mod n \}
\]
is definable.
\end{lem}
\begin{proof} Let $c\in I$ be the unique element of $I$ such that $(q_j\sqrt{d},c)\in E_1$, $(q_l\sqrt{d},c) \in E_0$ for $l<j$ and
\[
\forall q_l\sqrt{d} \in V \ \big((q_l\sqrt{d},c) \in E_1 \leftrightarrow \bigwedge_{i=1}^{n-1} (q_{l+i}\sqrt{d},c) \in E_0 \big).
\]
Since $n>1$, such a $c$ exists. Since every element of $V$ and the successor function $s_V$ are definable, so is $c$. It is easy to verify that
\[
V_{j,n} = \{q_l\sqrt{d} \in V \ : \ (q_l\sqrt{d},c) \in E_1\}.
\]
Hence $V_{j,n}$ is definable.
\end{proof}
\noindent Recall that $m$ is defined as the smallest period of the continued fraction of $\sqrt{d}$. Set $t := \max \{ m, 2\}$.
\begin{defn} For $i\in \{0,\dots, t-1\}$, define $B_i\subseteq \mathbb{N}\sqrt{d}$ to be the set of all $n\sqrt{d}$ such that
\[
\forall z \in V \ \big(z \notin V_{i,t} \rightarrow E_0(z,f(n\sqrt{d}))\big) \wedge \big(z \in V_{i,t} \rightarrow (E_0(z,f(n\sqrt{d}))\vee E_1(z,f(n\sqrt{d})))\big).
\]
\end{defn}
\noindent Note that $B_i$ is definable for each $i\in\{0,\dots,t-1\}$. The set $B_i$ contains precisely those elements $n\sqrt{d}$ of $\mathbb{N}\sqrt{d}$ for which the digit $b_{k+1}$ of the Ostrowski representation of $n$ is either $0$ or $1$ when $k=i \mod t$, and $0$ otherwise. Indeed, the following lemma follows directly from the definitions of $f$ and $V_{i,t}$.
\begin{lem}\label{lem:propbj} Let $i,n \in \mathbb{N}$ be such that $\sum_{k} b_{k+1} q_k$ is the Ostrowski representation of $n$ and $n\sqrt{d} \in B_i$. Then for every $k\in \mathbb{N}$
\[
b_{k+1} \in \left\{
\begin{array}{ll}
\{0,1\}, & \hbox{if $k=i \mod t$;} \\
\{0\}, & \hbox{otherwise.}
\end{array}
\right.
\]
\end{lem}
\noindent Since we chose $t$ to be at least $2$, we obtain the following corollary.
\begin{cor}\label{cor:finitesub} Let $i\in\{0,\dots,t-1\}$ and let $X \in \mathcal{P}(\mathbb{N})$ be finite. Then there exists $n\sqrt{d} \in B_i$ such that
\[
n = \sum_{k \in X} q_{kt+i}.
\]
\end{cor}
\noindent Hence there is a natural bijection between $B_i$ and the set of finite subsets of $\mathbb{N}$. This is the only place where we need that $t\geq 2$. We now use this observation to define a shift between $B_i$ and $B_j$ when $j=i+1 \mod t$.
\begin{defn} Let $i,j\in \{0,\dots,t-1\}$ be such that $j = i+1 \mod t$. Let $S_i : B_i \to B_j$ map $x \in B_i$ to the unique $y \in B_j$ such that
\[
E_0(1,y) \wedge \forall z\in V (E_{1}(z,x) \leftrightarrow E_{1}(s_V(z),y)).
\]
\end{defn}
\noindent It follows from Corollary \ref{cor:finitesub} that the unique $y \in B_j$ in the above Definition always exists. Since $B_i$ and $E_1$ are definable, so is $S_i$. Moreover, note that the function $S_i$ is simply a shift by one in the Ostrowski representation. The following lemma makes this statement precise.
\begin{lem}\label{lem:oshift} Let $i,j\in \{0,\dots,t-1\}$ be such that $j = i+1 \mod t$. Let $n\sqrt{d} \in B_i$ and $\ell\in \mathbb{N}$ be such that $S_i(n\sqrt{d})=\ell\sqrt{d}$ and $\sum_k b_{k+1} q_k$ is the Ostrowski representation of $n$. Then the Ostrowski representation of $\ell$ is $\sum_k b_{k+1} q_{k+1}$.
\end{lem}
\noindent It is worth pointing out that the sum $\sum_k b_{k+1} q_{k+1}$ is the Ostrowski representation of $\ell$ only because being in $B_i$ implies that all $b_{k}$ are in $\{0,1\}$ and that whenever $b_k=1$, then $b_{k-1}=0$. In general, when we take an Ostrowski representation and shift it as in Lemma \ref{lem:oshift}, there is no guarantee that the resulting sum is again an Ostrowski representation. However, in order to define multiplication by $\sqrt{d}$, we will have to make shifts that may result in sums that are not Ostrowski representations. Towards that goal, we will now introduce a new definable object $C$ which, in a way made precise later, contains all Ostrowski representations and is closed under shifts.
\begin{defn} For $\ell\in \{0,\dots,t-1\}$, define
\[
C_{\ell} := \{ (x_1,\dots, x_s) \in B_{\ell}^s \ : \ \bigwedge_{1\leq i<j\leq s} \forall z \in V \big (E_1(z,f(x_j)) \rightarrow E_1(z,f(x_i))\big)\}.
\]
Set $C := C_0\times \dots \times C_{t-1}$. Define $T : V \times C \to \{0,\dots,s\}$ by
\[
(z,(c_0,\dots,c_{t-1})) \mapsto \max \Big( \{ j \ : \bigvee_{i=0}^{t-1} E_1(z,f(c_{i,j})) \wedge z \in V_{i,t} \} \cup \{0\} \Big).
\]
\end{defn}
\noindent In the following, we will often work with an element $c=(c_0,\dots,c_{t-1}) \in C$, where $c_i$ is assumed to be in $C_i$. When we refer to $c_{i,j}$, as is done in the definition of $T$, we will always mean the $j$-th component of $c_i$. Note that for every $z\in V$ there exists a unique $i\in \{0,\dots,t-1\}$ such that $z \in V_{i,t}$. Hence for that $i$, we immediately get from the definition of $T$ that for every $c\in C$
\[
T(z,c)= \max \Big( \{ j \ : E_1(z,f(c_{i,j})) \wedge z \in V_{i,t} \}\cup \{0\} \Big).
\]
Thus the conjunction in the definition of $T$ can be dropped if $i$ is assumed to satisfy $z \in V_{i,t}$.
\begin{lem}\label{lem:isoC} Let $\alpha : \mathbb{N} \to \{0,\dots,s\}$ be a function that is eventually zero. Then there is a unique $c \in C$ such that $T(q_{\ell}\sqrt{d},c) = \alpha(\ell)$ for all $\ell \in \mathbb{N}$.
\end{lem}
\begin{proof} By Corollary \ref{cor:finitesub} we can find for each $j\in \{0,\dots,t-1\}$ and for each finite $X \in \mathcal{P}(\mathbb{N})$ an element $n\sqrt{d}\in B_j$ such that $k\in X$ if and only if $E_1(q_{kt+j+1}\sqrt{d},f(n\sqrt{d}))$. The statement of the Lemma follows easily.\end{proof}
\noindent As a corollary of Lemma \ref{lem:isoC} we get that the set of Ostrowski representations can be embedded into $C$.
\begin{cor}\label{cor:T} Let $n\in \mathbb{N}$ and $\sum_k b_{k+1} q_k$ be the Ostrowski representation of $n$. Then there is a unique $c \in C$ such that $b_{k+1} = T(q_k\sqrt{d},c)$ for all $k\in \mathbb{N}$.
\end{cor}
\begin{defn} Let $R: \mathbb{N}\sqrt{d} \to C$ map $n\sqrt{d}$ to the unique $c \in C$ such that
\[
\bigwedge_{i=1}^s \ \forall z \in V \ \big(E_i(z,f(n\sqrt{d})) \leftrightarrow T(z,c)=i\big).
\]
\end{defn}
\noindent By Corollary \ref{cor:T} the unique $c$ in the preceding definition indeed exists. Note that $R$ is definable. The motivation for the definition of $C$ was to be able to define shifts.
\begin{defn} Let $S : C \to C$ be given by
\[
(c_0,\dots,c_{t-1}) \operatorname{m}apsto (S_{t-1}(c_{t-1}),S_0(c_0),\dots,S_{t-2}(c_{t-2})).
\]
\end{defn}
\noindent For $\ell \geq 1$ we denote the $\ell$-th compositional iterate of $S$ by $S^{\ell}$. The following Lemma shows that the function $S$ is indeed a shift operation with respect to $T$.
\begin{lem}\label{lem:STconnection} Let $c \in C$ and $k \in \mathbb{N}$. Then $T(q_k\sqrt{d},c) = T(q_{k+1}\sqrt{d},S(c)).$
\end{lem}
\begin{proof} Let $c=(c_0,\dots,c_{t-1})\in C$. Let $i,\ell \in \{0,\dots,t-1\}$ be such that $k=i\mod t$ and $\ell=i+1\mod t$. By the definition of $S_i$, we have that for $j\in \{1,\dots,s\}$
\[
E_1(q_k\sqrt{d},c_{i,j}) \leftrightarrow E_1(q_{k+1}\sqrt{d},S_i(c_{i,j})).
\]
Since $q_k \sqrt{d} \in V_{i,t}$ and $q_{k+1} \sqrt{d} \in V_{\ell,t}$, it follows immediately from the definition of $T$ that
\begin{align*}
T(q_k\sqrt{d},c) &= \max \big( \{ j \ : \ E_1(q_k\sqrt{d},c_{i,j}) \} \cup \{0\} \big)\\
& = \max \big( \{ j \ : \ E_1(q_{k+1}\sqrt{d},S_i(c_{i,j})) \} \cup \{0\} \big)= T(q_{k+1}\sqrt{d},S(c)).
\end{align*}
\end{proof}
\noindent After showing that $\operatorname{m}athbb{N}\sqrt{d}$ can be embedded into $C$ and that there exists a definable shift operation, the next step is to recover natural numbers and real numbers from $C$. To achieve this, we define the following two functions.
\begin{defn} For $u=(u_0,\dots,u_{t-1})\in \mathbb{Q}^t$, let $\Sigma_u :C \to \mathbb{R}$ be defined by
\[
(c_0,\dots,c_{t-1}) \mapsto \sum_{i=0}^{t-1} u_i \sum_{j=1}^s c_{i,j},
\]
and $F_u: C \to \mathbb{R}$ by
\[
(c_0,\dots,c_{t-1}) \mapsto \sum_{i=0}^{t-1} u_i \sum_{j=1}^s f(c_{i,j}).
\]
\end{defn}
\noindent As is made precise in the following Proposition, one should think of the image of $C$ under $\Sigma_u$ and $F_u$ as the set of numbers that can be expressed (not necessarily uniquely) in some generalized Ostrowski representation.
\begin{prop}\label{prop:gshift} Let $u=(u_0,\dots,u_{t-1}) \in \mathbb{Q}^t$, let $\ell \geq 1$, and let $n\in \mathbb{N}$ be such that $\sum_k b_{k+1} q_k$ is the Ostrowski representation of $n$. Then
\begin{align*}
\Sigma_u(S^{\ell}(R(n\sqrt{d}))) &= \sum_{i=0}^{t-1} u_i \sum_{k=0}^{\infty} b_{kt+i+1} q_{kt+i+\ell}\sqrt{d}, \hbox{ and }\\
F_u(S^{\ell}(R(n\sqrt{d}))) &= \sum_{i=0}^{t-1} u_i \sum_{k=0}^{\infty} b_{kt+i+1} \beta_{kt+i+\ell}.
\end{align*}
\end{prop}
\begin{proof} By Corollary \ref{cor:T}, we have that $b_{k+1} = T(q_k\sqrt{d},R(n\sqrt{d}))$, for all $k \in \mathbb{N}$.
By Lemma \ref{lem:STconnection}, $T(q_k\sqrt{d},R(n\sqrt{d})) = T(q_{k+\ell}\sqrt{d},S^\ell(R(n\sqrt{d})))$ for all $k\in \mathbb{N}$. For ease of notation, denote $S^\ell(R(n\sqrt{d}))$ by $c=(c_0,\dots,c_{t-1})$.
Then we have for each $k\in \mathbb{N}$, $i\in \{0,\dots,t-1\}$ and $j\in \{1,\dots,s\}$ that
\[E_1(q_{kt+i+\ell}\sqrt{d},c_{i,j}) \hbox{ if and only if } j \leq b_{kt+i+1}.\]
Hence
\begin{align*}
\sum_{j=1}^{s} c_{i,j} =\sum_{k} |\{ j : E_1(q_{kt+i+\ell}\sqrt{d},c_{i,j})\}|\, q_{kt+i+\ell}\sqrt{d}= \sum_{k} b_{kt+i+1} q_{kt+i+\ell}\sqrt{d}.
\end{align*}
With the same argument, the reader can check that
\[
\sum_{j=1}^{s} f(c_{i,j})=\sum_{k} b_{kt+i+1} \beta_{kt+i+\ell}.
\]
We can easily deduce the statement of the Proposition from the definitions of $\Sigma_u$ and $F_u$.
\end{proof}
\subsection*{Proof of Theorem D} In this subsection, we will give a proof of Theorem D. When we say that for a real number $b\in \mathbb{R}$ and a subset $X$ the restriction of $\lambda_b$ to $X$ is definable, we just mean that the graph of the restriction $\lambda_b|_X$ is definable.\newline
\noindent Here is an outline of how we proceed to prove Theorem D: we first combine Fact \ref{eq:multbyc} and Corollary \ref{cor:pq} with the technology developed in the previous subsection to show that the restrictions of $\lambda_{\sqrt{d}}$ to $\mathbb{N}$ and to $f(\mathbb{N}\sqrt{d})$ are definable. Using arguments from \cite{H-Twosubgroups} we conclude that $\lambda_{\sqrt{d}}$ is definable.
\begin{lem}\label{lem:restoI} Let $n\in \mathbb{N}$. Then
\[
f(n\sqrt{d}) = (-1)^m(\frac{q_m}{\sqrt{d}-a_0} +q_{m-1}) \cdot F_{(1,\dots,1)}(S^m(R(n\sqrt{d}))).
\]
\end{lem}
\begin{proof} Let $\sum_k b_{k+1} q_k$ be the Ostrowski representation of $n$. By Fact \ref{eq:multbyc}, Fact \ref{fact:productzeta} and Proposition \ref{prop:gshift}
\begin{align*}
f(n\sqrt{d}) &= \sum_k b_{k+1} \beta_k = (-1)^m(\frac{q_m}{\sqrt{d}-a_0} +q_{m-1}) \sum_{k} b_{k+1} \beta_{k+m}\\
&=(-1)^m(\frac{q_m}{\sqrt{d}-a_0} +q_{m-1}) \cdot F_{(1,\dots,1)}(S^m(R(n\sqrt{d}))).
\end{align*}
\end{proof}
\begin{cor}\label{cor:restoI} The restriction of $\lambda_{\sqrt{d}}$ to $f(\mathbb{N}\sqrt{d})$ is definable.
\end{cor}
\begin{proof}
Let $a,b \in \mathbb{Q}$ be such that
\[
\frac{q_m}{\sqrt{d}-a_0} +q_{m-1} = a \sqrt{d} + b.
\]
By Lemma \ref{lem:restoI} and the injectivity of $f$, the restriction of $\lambda_{(a\sqrt{d}+b)^{-1}}$ to $f(\mathbb{N}\sqrt{d})$ is definable.
Since $(a\sqrt{d}+b)^{-1} = \frac{a\sqrt{d}-b}{a^2d-b^2}$, we have
\[
\lambda_{\sqrt{d}}(x) = \frac{a^2d-b^2}{a}\lambda_{(a\sqrt{d}+b)^{-1}}(x) + \frac{b}{a}x.
\]
Hence the restriction of $\lambda_{\sqrt{d}}$ to $f(\mathbb{N}\sqrt{d})$ is definable.
\end{proof}
\begin{lem}\label{lem:restoN} There are $v,w \in \mathbb{Q}^t$ such that for every $n \in \mathbb{N}$
\[
n = \Sigma_{v}(S(R(n\sqrt{d}))) - F_{v}(S(R(n\sqrt{d}))) + \Sigma_{w}(R(n\sqrt{d})) - F_{w}(R(n\sqrt{d})).
\]
\end{lem}
\begin{proof} Let $v,w \in \mathbb{Q}^t$ be given by Corollary \ref{cor:pq} (if $t>m$, extend periodically by setting $v_i = v_{i \mod m}$ and $w_i = w_{i \mod m}$). Note that $p_k = q_k\sqrt{d} - \beta_k$.
By Proposition \ref{prop:gshift}
\begin{align*}
n &= \sum_{k=0}^{\infty} b_{k+1} q_k = \sum_{i=0}^{t-1} \sum_{k=0}^{\infty} b_{kt+i+1} q_{kt+i}\\
&= \sum_{i=0}^{t-1} \sum_{k=0}^{\infty} b_{kt+i+1} (v_i \cdot p_{kt+i+1} + w_i \cdot p_{kt+i})\\
&= \sum_{i=0}^{t-1} \sum_{k=0}^{\infty} b_{kt+i+1} \Big(v_i (q_{kt+i+1}\sqrt{d} - \beta_{kt+i+1}) + w_i (q_{kt+i}\sqrt{d} - \beta_{kt+i})\Big)\\
&= \Sigma_{v}(S(R(n\sqrt{d}))) - F_{v}(S(R(n\sqrt{d}))) + \Sigma_{w}(R(n\sqrt{d})) - F_{w}(R(n\sqrt{d})).
\end{align*}
\end{proof}
\begin{cor}\label{cor:restoN} The restriction of $\lambda_{\sqrt{d}}$ to $\mathbb{N}$ is definable.
\end{cor}
\begin{proof} By Lemma \ref{lem:restoN} the restriction of $\lambda_{\sqrt{d}^{-1}}$ to $\mathbb{N}\sqrt{d}$ is definable. Since $\lambda_{\sqrt{d}}$ is the inverse function of $\lambda_{\sqrt{d}^{-1}}$ and $\lambda_{\sqrt{d}^{-1}}(\sqrt{d}\mathbb{N})=\mathbb{N}$, it follows that the restriction of $\lambda_{\sqrt{d}}$ to $\mathbb{N}$ is definable.
\end{proof}
\begin{proof}[Proof of Theorem D] Here we follow the argument in the proof of \cite[Theorem 5.5]{H-Twosubgroups}. First note that it is enough to define $\lambda_{\sqrt{d}}$ on $\mathbb{R}_{\geq 0}$. Let $Q : \mathbb{N} + f(\mathbb{N}\sqrt{d}) \to \mathbb{R}$ map $N + f(n\sqrt{d})$ to $\lambda_{\sqrt{d}}(N) + \lambda_{\sqrt{d}}(f(n\sqrt{d}))$, where $N,n\in\mathbb{N}$. It is immediate that $Q$ is well-defined and that $Q$ and $\lambda_{\sqrt{d}}$ agree on the domain of $Q$. By Corollary \ref{cor:restoI} and Corollary \ref{cor:restoN}, $Q$ is definable.
Since $\mathbb{N} + f(\mathbb{N}\sqrt{d})$ is dense in $[1-\sqrt{d},\infty)$ and multiplication by $\sqrt{d}$ is continuous, the graph of $\lambda_{\sqrt{d}}$ on $[1-\sqrt{d},\infty)$ is the topological closure of the graph of $Q$ in $\mathbb{R}^2$.
Thus the restriction of $\lambda_{\sqrt{d}}$ to $\mathbb{R}_{\geq 0}$ is definable.
\end{proof}
\section{Conclusion}
This paper solves the question left open in \cite{H-Twosubgroups} whether the theory of $(\mathbb{R},<,+,\mathbb{Z},\lambda_a)$ is decidable whenever $a$ is a quadratic irrational number. We achieve this by showing that $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z}\sqrt{d})$ defines $\lambda_{\sqrt{d}}$ whenever $d\in \mathbb{Q}_{>0}$. Since the theory of $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z}\sqrt{d})$ is known to be decidable by \cite[Theorem A]{H-Twosubgroups}, we can conclude that the theory of $(\mathbb{R},<,+,\mathbb{Z},\lambda_a)$ is indeed decidable when $a$ is quadratic.\newline
\noindent We finish with a few remarks about related results and open questions.
\subsection*{1} We do not know whether Theorem D holds when $\sqrt{d}$ is replaced by an arbitrary real number $a$, even in the case when $a$ is quadratic. By \cite[Theorem A]{H-Twosubgroups} we know for quadratic $a$ that the theory of $(\mathbb{R},<,+,\mathbb{Z},\mathbb{Z} a)$ is decidable. However, when $a$ is non-quadratic not much is known.
\subsection*{2} Let $a \in \mathbb{R}\setminus \mathbb{Q}$. Let $x^{a} : \mathbb{R} \to \mathbb{R}$ map $t$ to $t^a$ if $t>0$ and to $0$ otherwise. An isomorphic copy of $\mathcal{S}_a$ is definable in the structure $(\mathbb{R},<,+,\cdot,2^{\mathbb{Z}},x^{a})$. But by \cite[Theorem 1.3]{discrete} the latter structure defines $\mathbb{Z}$ and hence its theory is undecidable, even if $a$ is quadratic.
\subsection*{3} Questions considered in this paper can also be asked for $\mathbb{Q}$ instead of $\mathbb{Z}$. By Robinson \cite{julia} the structure $(\mathbb{R},<,+,\cdot,\mathbb{Q})$ defines $\mathbb{Z}$ and therefore its theory is undecidable. On the other hand, $(\mathbb{R},<,+,\mathbb{Q})$ is model-theoretically very well behaved, see van den Dries \cite{densepairs}, and its theory is decidable. So here we can also ask how many traces of multiplication can be added to the latter structure without destroying its tameness. By recent work of Block Gorman, Hieronymi and Kaplan in \cite{BHK}, $(\mathbb{R},<,+,\mathbb{Q},\lambda_a)$ is model-theoretically tame for every $a\in \mathbb{R}$. Furthermore, the theory of $(\mathbb{R},<,+,\mathbb{Q},\lambda_a)$ is decidable as long as $\mathbb{Q}(a)$ has a computable presentation as an ordered field, and the question whether a finite subset of $\mathbb{Q}(a)$ is $\mathbb{Q}$-linearly independent is decidable.
\end{document} |
\begin{document}
\title{\bf\huge Computational Characteristics of Random Field Ising Model with Long-Range Interaction}
\author{Fangxuan Liu}
\email{[email protected]}
\affiliation{Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, PR China}
\author{L.-M. Duan}
\email{[email protected]}
\affiliation{Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, PR China}
\date{\today}
\begin{abstract}
The Ising model is a widely studied class of models in quantum computation. In this paper we investigate the computational characteristics of the random field Ising model (RFIM) with long-range interactions that decay as an inverse polynomial of the distance, which can be realized in current ion-trap systems. We prove that for an RFIM with long-range interactions embedded in a 2-dimensional plane, finding its ground state is $\mathsf{NP}$-complete for every decay exponent, and we prove that the 1-dimensional RFIM with long-range interactions can be efficiently approximated when the interaction decays fast enough.
\end{abstract}
\maketitle
\section{Introduction}
Quantum computer science is in the noisy intermediate-scale quantum (NISQ) era~\cite{NISQ}, in which it is still not possible to build a large-scale quantum computer with full error correction, but it is already possible to build quantum circuits of intermediate scale (tens of qubits) and depth (tens of steps) with relatively high fidelity~\cite{google_supercomputer,utsc_superconductor,threshold_1,threshold_2}.
While the full power of quantum computation, such as solving computationally hard problems~\cite{quantum_algorithms} and applications to optimization~\cite{variational_quantum_algorithm}, quantum machine learning~\cite{quantum_machine_learning}, quantum chemistry~\cite{quantum_chemistry} and many other fields, can only be realized beyond the NISQ era, analogue quantum computing~\cite{analogue_quantum_computation_1,analogue_quantum_computation_2,analogue_quantum_computation_3} thrives in this circumstance. Instead of manipulating qubits with sequences of gates dictated by quantum algorithms, it solves computational problems by engineering the Hamiltonian of a well-controlled quantum system so that it simulates the target system. Although important progress has been achieved in this field, the Hamiltonians that can currently be simulated are often restricted by the specific physical systems and the available experimental techniques.
The Ising model~\cite{ising_history} is among the systems most frequently used in analogue quantum computing because of its simplicity and capability. The computational complexity of Ising models is a widely discussed topic, since it determines which families of Ising models are useful in analogue quantum computing. Finding the ground state of a general Ising model is well known to be $\mathsf{NP}$-complete~\cite{related_ising_complexity_2}. Indeed, Karp's 21 NP-complete problems~\cite{independent_set_npc} can be reduced to ground-state problems of Ising models~\cite{ising_formulation_npc}, and this is also connected to the complexity of general high-spin systems~\cite{spin_system}. For more specific families of Ising models, there are also many known results on their complexity. The Ising model without a magnetic field on a planar graph is proven to be in $\mathsf{P}$~\cite{related_ising_complexity_1}, while the Ising model without a field on a non-planar graph is $\mathsf{NP}$-complete~\cite{related_ising_complexity_2}. The Ising model with random fields is in $\mathsf{P}$ when the coupling is ferromagnetic, and is $\mathsf{NP}$-complete when the coupling is anti-ferromagnetic in 2 dimensions or more~\cite{related_ising_complexity_3}. Ref.~\cite{spin_glass_3d} gives a further examination of the lower bound on the complexity of the 3D Ising model with a magnetic field. These previous works consider either the most general Ising model or very specific models, in terms of the interaction and the field terms, which are challenging to construct in current experiments.
The Ising models that naturally appear in ion traps are mostly random field Ising models with long-range interactions: the interaction between spins can usually be approximated by the form $r^{-\alpha}$~\cite{poly_diminish_1,poly_diminish_2}, with $r$ the distance between the two spins and $\alpha$ a constant in the range $0$--$3$ depending on the frequency detuning of the operating laser beams. Hence, it is crucial to study the computational characteristics of this family of Ising models.
In this paper, we prove several computational characteristics of random field Ising models (RFIM)~\cite{random_field_ising_model} with long-range interactions. In section~\ref{sec:notation}, we give the background of our proof and define the notation. In section~\ref{sec:outline} we give the outline of the proof. In section~\ref{sec:NPC}, we prove that the ground-state problem of the 2D grid anti-ferromagnetic long-range interaction RFIM is $\mathsf{NP}$-complete. In section~\ref{sec:APX}, we consider the Ising model as an optimization problem, prove that the 1D long-range interaction RFIM has a polynomial-time approximation scheme (i.e., is in the complexity class $\mathsf{PTAS}$), and conjecture that the 2D long-range interaction RFIM is not likely to be in $\mathsf{PTAS}$.
\section{Background and Notations}\label{sec:notation}
A Random Field Ising Model (RFIM) is an undirected graph $G=(V,E)$ together with a Hamiltonian $H$ of the form:
$$
H=-\sum_{\{i, j\}\in E} J_{ij} \sigma_i \sigma_j - \sum_{k\in V} h_k \sigma_k
$$
A vertex of the graph is called a \emph{spin}. The \emph{state} $\sigma$ assigns a value $\pm 1$ to each spin. The \emph{interaction} assigns a real value $J_{ij}$ to each edge $\{i, j\}\in E$. The \emph{field} assigns a real value $h_{i}$ to each spin $i$.
Given the interaction and the field, the Hamiltonian gives the energy of a state: $E(\sigma) = H(\sigma;J,h)$. A state is called a \emph{ground state} if no state has lower energy.
Now we consider Ising models that are embedded into Euclidean space. Each spin $i$ is assigned a position $\vec{r}_i$ in the Euclidean space, and the interaction $J_{ij}$ is dictated by the positions of the two spins: $J_{ij} = f(\vec{r}_i, \vec{r}_j)$.
Here we consider a specific family of $f$: $f(\vec{r}_i, \vec{r}_j)=C||\vec{r}_i-\vec{r}_j||^{-\alpha}$ with $\alpha>0$, i.e., the interaction between two spins is determined by their distance, and the system is ferromagnetic (anti-ferromagnetic) if $C$ is positive (negative). This family of interactions naturally occurs in ion traps and is the most common family of interactions that can be engineered in current laboratories. We call this family of interactions \emph{long-range interaction}, and we say that such an Ising model has an $\alpha$-long-range interaction.
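As a concrete illustration of the model (a minimal Python sketch of ours, not used in any proof), the following code builds the couplings $J_{ij}=C\,||\vec{r}_i-\vec{r}_j||^{-\alpha}$ from given positions and evaluates $H(\sigma)$ for a spin configuration.
\begin{verbatim}
import numpy as np

def long_range_couplings(positions, C=-1.0, alpha=1.0):
    """J_ij = C * ||r_i - r_j||^(-alpha), anti-ferromagnetic if C < 0."""
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(r, np.inf)          # no self-interaction
    return C * r ** (-alpha)

def energy(J, h, sigma):
    """H(sigma) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i."""
    sigma = np.asarray(sigma, dtype=float)
    return -0.5 * sigma @ J @ sigma - h @ sigma

# four spins on a unit square, anti-ferromagnetic, arbitrary fields
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
J = long_range_couplings(pos, C=-1.0, alpha=2.0)
h = np.array([0.1, -0.2, 0.0, 0.3])
print(energy(J, h, [+1, -1, -1, +1]))
\end{verbatim}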
\section{Outline of the Proof}\label{sec:outline}
The main idea of the proof of the NP-completeness of the 2D grid anti-ferromagnetic long-range interaction RFIM is to construct several systems that have different exponents for the long-range interaction, and to construct maps between the states of these systems. These systems and mappings have special properties such that each mapping only introduces a minor relative perturbation to the energies of states. The composition of the mappings connects a realistic system (an Ising model with relatively long-range interactions) to an ideal system (an Ising model with perfect nearest-neighbour interactions), and if there exists an oracle that gives the ground state of the realistic system, we can utilize the oracle and the mapping, which can be executed in polynomial time, to solve an NP-complete problem.
We also examine the computational complexity of the one-dimensional system by giving a proper definition of an $\epsilon$-approximation of the solution and a dynamic-programming-style algorithm for efficient approximation. We further expand the set of Ising models that can be efficiently approximated to Ising models with a specific graph structure, and give a conjecture on conditions under which the Ising model cannot be approximated.
\section{NP-completeness of 2D grid Anti-ferromagnetic Long-Range Interaction RFIM}\label{sec:NPC}
Here we prove that finding the ground state of the 2D grid anti-ferromagnetic long-range interaction RFIM is $\mathsf{NP}$-complete.
\subsection{Low energy state preserving maps}
Here we consider two Ising models and a map between the states of the two models.
For an Ising model, we call a set $G$ of states a \emph{low-energy set} if there exist no states $\sigma'\not\in G$ and $\sigma\in G$ with $E(\sigma')<E(\sigma)$. In particular, the set that contains all ground states of an Ising model is a low-energy set.
The \emph{energy gap} $\Delta_G$ of a low-energy set characterizes how far this set is from the other states: $\Delta_G=\min_{\sigma\not\in G} E(\sigma) - \max_{\sigma\in G} E(\sigma)$.
Now we consider two Ising models with state spaces $\Sigma = \{\sigma\}$ and $\Sigma' = \{\sigma'\}$. Let $\tilde{\Sigma}$ be a subset of $\Sigma$ and $\tilde{\Sigma'}$ be a subset of $\Sigma'$. Consider a bijective map $g:\tilde{\Sigma'} \rightarrow \tilde{\Sigma}$; we call it an $(a,\delta)$-map if:
$$
\forall \sigma'\in \tilde{\Sigma'}, \quad |E(\sigma') - aE(g(\sigma'))| \le a\delta
$$
This map preserves low-energy sets under the following conditions:
\begin{lem}\label{thm:gap}
For a low-energy set $G\subseteq \tilde{\Sigma}$, if:
\begin{enumerate}
\item $\delta<\frac{1}{2}\Delta_G$.
\item $\Tilde{\Sigma'}$ is a low-energy set.
\end{enumerate}
Then, $g^{-1}(G)$ is a low-energy set with energy gap $\Delta_{g^{-1}(G)} \ge a(\Delta_G-2\delta)$.
\end{lem}
Besides, we consider the composition of two maps:
\begin{lem}\label{thm:composite}
Consider three systems with state spaces $\Sigma, \Sigma', \Sigma''$, and let $\tilde{\Sigma},\tilde{\Sigma'},\tilde{\Sigma''}$ be subsets of $\Sigma, \Sigma', \Sigma''$ respectively. Consider two maps $g:\tilde{\Sigma'}\rightarrow \tilde{\Sigma}$ and $g':\tilde{\Sigma''} \rightarrow \tilde{\Sigma'}$. If $g$ is an $(a,\delta)$-map and $g'$ is an $(a',\delta')$-map, then the composition $g\circ g'$ is an $(a'a,\frac{\delta'}{a}+\delta)$-map.
\end{lem}
\subsection{Arbitrary placement of spins}
Here we consider how slightly changing the positions of the spins of an Ising model affects its energy states. The result shows that although we are limited to putting spins on grid points, we can, with arbitrarily small error, construct a system that behaves like one whose spins are placed arbitrarily.
\begin{lem}\label{thm:mapgrid}
Consider an Ising model with $\alpha$-long-range interaction (original system) in 2D space with $n$ spins, with each spin $i$ located at $\vec{r}_i =(x_i,y_i)$. The minimal distance between two spins is $d$. Then, consider another Ising model with $\alpha$-long-range interaction (new system) in 2D space, constructed as follows:
For each spin $i$, we put a spin $i'$ at $(\lfloor\frac{x_i}{\epsilon}\rfloor,\lfloor\frac{y_i}{\epsilon}\rfloor)$, and apply the magnetic field $h_{i'} = h_i\epsilon^\alpha$.
Then we construct a map $g:\Sigma'\rightarrow \Sigma$: a state $\sigma'$ of the new system is sent to the state $\sigma$ of the original system with $\sigma_{i}=\sigma'_{i'}$ for every spin $i$ of the original system.
With $\epsilon<\frac{d^{\alpha+1}\delta}{2(\alpha+1)n(n+1)}$, $g$ is an $(\epsilon^\alpha, \delta)$-map.
\end{lem}
This lemma implies that an Ising model with arbitrarily placed spins can be simulated, to arbitrary precision and with the same number of spins, by an Ising model whose spins are placed on a square grid.
\subsection{Logical spins}
Consider an anti-ferromagnetic Ising model with an $\alpha$-long-range interaction that has 8 spins. Four of the spins are placed at $(\pm 1, \pm 1)$, and four of the spins are placed at $(0,\pm1)$ and $(\pm 1,0)$. With no magnetic field applied to any spin, this Ising model has two ground states.
The ground state of this system is 2-fold degenerate: either the corner spins are $+1$ and the edge spins are $-1$, or the opposite. We use $\sigma_+$ and $\sigma_-$ to denote these two states. The energy gap between these two ground states and the other states is determined by $\alpha$: $\Delta(\alpha) = 4-4\cdot 2^{-\frac{\alpha}{2}} - 6\cdot 4^{-\frac{\alpha}{2}} + 8\cdot 5^{-\frac{\alpha}{2}} + 2\cdot 8^{-\frac{\alpha}{2}}$. The energy gap is a constant for fixed $\alpha$.
We call $\sigma_\pm$ \emph{valid states}, and if we only consider the valid states of the system, we can use a single variable $\tilde{\sigma}$ to denote the two states: $\tilde{\sigma}=1$ indicates that the system is in the $\sigma_+$ state, and $\tilde{\sigma}=-1$ indicates that the system is in the $\sigma_-$ state.
Also, if we apply a magnetic field $+\frac{h}{8}$ on the corner spins and $-\frac{h}{8}$ on the edge spins, then $E(\sigma_-)-E(\sigma_+)=2h$; we call this field scheme a \emph{logical field} $h$. Then, if we only consider the valid states of this model, it is equivalent to a model consisting of a single spin with magnetic field $h$ applied to it, hence we call this model a \emph{logical spin}. For a model that has multiple logical spins, a state is called valid if all the logical spins are in valid states.
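The degeneracy and the gap of this eight-spin gadget can be checked by brute force over all $2^8$ configurations. The sketch below (ours, for illustration only) enumerates the spectrum for a given $\alpha$ and prints the two lowest levels, which are expected to coincide, and the gap above them.
\begin{verbatim}
import itertools
import numpy as np

corners = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
edges = [(0, 1), (0, -1), (1, 0), (-1, 0)]
pos = np.array(corners + edges, dtype=float)

def gadget_spectrum(alpha, h=0.0):
    """Energies of all states of the 8-spin anti-ferromagnetic gadget
    (J_ij = -r_ij^(-alpha)), with logical field h (+h/8 on corners,
    -h/8 on edges).  Returns the sorted list of energies."""
    n = len(pos)
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    fields = np.array([+h / 8] * 4 + [-h / 8] * 4)
    energies = []
    for sigma in itertools.product([+1, -1], repeat=n):
        s = np.array(sigma)
        e = sum(r[i, j] ** (-alpha) * s[i] * s[j]   # -J_ij s_i s_j
                for i in range(n) for j in range(i + 1, n))
        e -= fields @ s
        energies.append(e)
    return sorted(energies)

spec = gadget_spectrum(alpha=2.0)
print("two lowest levels:", spec[0], spec[1])
print("gap to the next level:", spec[2] - spec[1])
\end{verbatim}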
The following lemma gives a condition under which the set of valid states is a low-energy set, given that the other spins are fixed:
\begin{lem} \label{thm:valid}
Consider an Ising model with a logical spin with logical field $h$ at the origin and $n$ further spins, each at distance greater than $R = (\frac{1}{8n}(\frac{\Delta}{2}-|h|))^{-\frac{1}{\alpha}} + \sqrt{2}$ from the origin. Then, if all these $n$ spins are fixed, $\sigma_\pm$ are the two lowest-energy states among all states of the logical spin.
\end{lem}
So, if several logical spins are placed in space and they are not too close to each other, we can expect that if the state of some logical spin is not one of $\sigma_\pm$, then the overall state cannot be a ground state.
Next, we consider the interaction between two logical spins that satisfy the above condition.
\begin{lem}\label{thm:interaction}
Consider an Ising model with two logical spins placed so as to satisfy Lemma~\ref{thm:valid}, with distance $r$ between them. Ignoring the invalid states, the system can be described by $\Tilde{\sigma} = (\Tilde{\sigma}_1, \Tilde{\sigma}_2)$. We apply logical fields $h_1,h_2$ to the two logical spins, respectively. Up to a constant, the Hamiltonian of the system is:
$$
H(\Tilde{\sigma}) = -h_1\Tilde{\sigma}_1 -h_2\Tilde{\sigma}_2 - I_{12} \Tilde{\sigma}_1\Tilde{\sigma}_2
$$
And $I_{12}$ satisfies:
$$
\left|I_{12} + \alpha^2 (\alpha+2)^2 r^{-(\alpha+4)}\right| \le f(\alpha) r^{-\alpha-5}
$$
\end{lem}
This lemma indicates that if the system has an $\alpha$-long-range interaction, the interaction between two logical spins is similar to that between two physical spins with an $(\alpha+4)$-long-range interaction. So we can simulate a system with an $(\alpha+4)$-long-range interaction and $n$ spins on a system with an $\alpha$-long-range interaction and $8n$ spins:
\begin{lem}\label{thm:maplogical}
Consider an anti-ferromagnetic Ising model with $(\alpha+4)$-long-range interaction and $n$ spins that satisfies $\max_{v\in G} |h_v| \le 1$. Then, with a parameter $\epsilon$, we can construct an Ising model with $\alpha$-long-range interaction as follows:
For each spin of the original system at $(x,y)$ with magnetic field $h$, we put a logical spin at $(\frac{x}{\epsilon},\frac{y}{\epsilon})$, and apply the logical field $h \alpha^2(\alpha+2)^2 \epsilon^{\alpha+4}$ to it.
Then, with $\epsilon$ small enough, Lemma~\ref{thm:valid} is satisfied, so we can restrict attention to the valid states. We can construct a map $g$ from the valid states of the new system to the states of the original system by setting $g(\tilde{\sigma})_i = \tilde{\sigma}_i$.
With $\epsilon$ smaller than a threshold depending on $n$, $\alpha$ and $\delta$, $g$ is an $(\alpha^2(\alpha+2)^2 \epsilon^{\alpha+4}, \delta)$-map.
\end{lem}
\subsection{Polynomial reduction}
In this section, we reduce a known $\mathsf{NP}$-complete problem to the ground-state problem. We consider this particular problem: find a maximum independent set of a graph whose nodes lie on a 2D plane, where an edge connects two nodes if and only if their distance is $1$, and the maximum degree of each node is 3. Since each node has at most 3 neighbours, we can make the angle between two edges at the same node greater than $\frac{\pi}{2}$, so that if two nodes are not connected, the distance between them is greater than $\sqrt{2}$.
This problem is $\mathsf{NP}$-complete, following directly from the $\mathsf{NP}$-completeness of finding a maximum independent set of a planar graph with maximum degree 3.
From \cite{related_ising_complexity_1}, there is:
\begin{lem}\label{thm:nn}
Following \cite{related_ising_complexity_1}, for a planar graph $G=(V,E)$ with max degree 3, we consider an Ising model on this graph with Hamiltonian:
$$
H = \sum_{\{u,v\}\in E} \sigma_u\sigma_v + \sum_{w\in V} \sigma_w
$$
Then this graph has an independent set of cardinality $\ge k$ if there exists a state with $H(\sigma) \le \frac{1}{2}|V|-4k$. Also, from the form of this Hamiltonian, the energy gap of a low-energy set of this system is at least $1$.
\end{lem}
We can now relate two Ising models: one built directly on the graph, and one whose spins are placed on the nodes but whose interaction is a long-range interaction with a large enough exponent.
\begin{lem}\label{thm:upperbound}
Consider a planar graph $G=(V,E)$ with maximum degree 3, each node assigned a coordinate, such that two nodes are connected if and only if their distance is 1, and the distance between any two non-connected nodes is $\ge \sqrt{2}$.
We construct two Ising models:
\begin{enumerate}
\item An Ising model whose Hamiltonian is:
$$H = \sum_{\{u,v\}\in E} \sigma_u\sigma_v + \sum_{w\in V} \sigma_w$$
\item An anti-ferromagnetic $\alpha$-long-range interaction Ising model with a spin $\tilde{v}$ placed at the coordinate of each node $v$ and a constant magnetic field $1$ applied to each spin.
\end{enumerate}
Then, we can construct a map $g$ from the second system to the first system by $g(\sigma)_v=\sigma_{\tilde{v}}$. With $\alpha>\frac{2}{\ln 2} \ln (|V|(|V|+1))+2$, this map is a $(1,\frac{1}{4})$-map.
\end{lem}
We can finally prove that finding the ground state of a random field Ising model with anti-ferromagnetic long-range interaction is $\mathsf{NP}$-complete by composing all the maps constructed above:
\begin{thm}
Given a planar graph $G=(V,E)$ with maximum degree 3, each node assigned a coordinate, such that two nodes are connected if and only if their distance is 1, and the distance between any two nodes is either exactly $1$ or greater than $\sqrt{2}$. Then, for any $\alpha$, there exists an integer $t$ such that $\alpha+4t>\frac{2}{\ln 2}\ln(|V|(|V|+1))+2$. We choose the smallest such $t$. The size of the problem is characterized by $n$.
\begin{enumerate}
\item Construct an Ising model according to Lemma~\ref{thm:nn}, denoted by $M_{\text{NN}}$.
\item Construct the second system, an RFIM with $(\alpha+4t)$-long-range interaction, according to Lemma~\ref{thm:upperbound}, denoted by $M_t$. Call the mapping obtained by this construction $g_{\text{NN}}$, which is a $(1,\frac{1}{4})$-mapping.
\item For $i$ going from $t$ to $1$, construct an RFIM with $(\alpha+4(i-1))$-long-range interaction $M_{i-1}$ from $M_{i}$ according to Lemma~\ref{thm:maplogical}. The map $g_i$ obtained by this construction is a $(F_i, \delta_i)$-map, where $F_i$ is determined by $n,i,\alpha$ and $\delta_i$, and we choose $\delta_i$ so that $\delta_i<\frac{1}{12t\prod_{i<j\le t} F_j}$; this is possible because $\delta_i$ can be chosen arbitrarily small, and the dependency of the variables forms an acyclic graph.
\item Construct an RFIM $M_\text{Grid}$ with $\alpha$-long-range interaction and each spin placed on the grid from $M_0$ according to Lemma~\ref{thm:mapgrid}. Choose the parameters such that the map $g_\text{Grid}$ obtained by the construction is a $(\epsilon^\alpha, \delta)$-map with $\delta<\frac{\prod_i F_i}{12}$.
\end{enumerate}
Then, the composite map $\tilde{g} = g_\text{NN} \circ g_t\circ \cdots \circ g_1 \circ g_\text{Grid}$ is an $(A, \Delta)$-map with $\Delta < \frac{5}{12}$ by Lemma~\ref{thm:composite}. Because the energy gap of the set of ground states of $M_\text{NN}$ is at least $1$, its preimage under $\tilde{g}$ in the states of $M_\text{Grid}$ is a low-energy set, so if we find a ground state $\sigma_0$ of $M_\text{Grid}$, then $\tilde{g}(\sigma_0)$ is a ground state of $M_\text{NN}$ (Lemma~\ref{thm:gap}).
The number of spins of $M_\text{Grid}$ is $n\times 8^t=O(n(8^{\frac{1}{\ln 2}})^{\ln n}) = O(n^4)$, and the computation of $\tilde{g}$ requires $\text{poly}(n)$ time. So, if we are given a poly-time oracle that gives a ground state of a square-grid anti-ferromagnetic $\alpha$-long-range RFIM, then we can construct a poly-time algorithm that gives a maximum independent set of the graph. Hence there exists a polynomial-time reduction from an $\mathsf{NP}$-complete problem to the ground-state problem of the square-grid anti-ferromagnetic $\alpha$-long-range RFIM, indicating that this problem is $\mathsf{NP}$-complete.
\end{thm}
\section{Approximability of Long-Range Interaction RFIM}\label{sec:APX}
Here we view the computational properties of the RFIM from a different perspective, as we try to obtain a state whose energy is close to the ground-state energy.
This type of problem is usually formulated as an optimization problem, and we prove that approximating the one-dimensional RFIM lies in a complexity class that is considered feasible.
Besides, we consider a wider class of interactions: $|f(\vec{r_i},\vec{r_j})|\le ||\vec{r_i}-\vec{r_j}||^{-\alpha}$ with $\alpha>0$. We will still call this family of interactions $\alpha$-long-range interaction because the interaction is still dominated by $r^{-\alpha}$.
\subsection{Computational Complexity Classes for Optimization Problems}
Decision problems usually have the following form: given an instance of the problem, the algorithm is required to answer ``yes'' or ``no''. For example, given the Hamiltonian of an Ising model and an energy $E$, deciding whether this Ising model has a state with energy less than $E$ is a decision problem. Optimization problems, however, require the algorithm to give a ``solution'' to the problem that is close enough to the ideal solution; for example, given a graph and a real number $\epsilon$, returning an independent subset $V'$ such that $|V'|\ge (1-\epsilon) |V^*|$, where $V^*$ is a maximum independent set of the graph, is an optimization problem. The $\epsilon$-approximation requirement on the solution is used in the definitions of some important complexity classes, and many practical problems have enough structure that an $\epsilon$-approximation can be defined~\cite{optimization_problems}; this includes finding the ground state of Ising models, as defined below.
Some important complexity classes for optimization problems are defined as follows~\cite{optimization_classes}:
\begin{itemize}
\item $\mathsf{APX}$ (approximable) is defined as the class of NP optimization problems for which there exists a polynomial-time approximation algorithm that gives an $\epsilon$-approximation solution for some constant $\epsilon$.
\item $\mathsf{PTAS}$ (Polynomial-Time Approximation Scheme) is defined as the class of NP optimization problems that have an approximation algorithm that returns an $\epsilon$-approximation for any given $\epsilon$, with time complexity polynomial in the problem length $n$. The exponent of the polynomial may still depend on $\epsilon$. For example, a problem that has an algorithm giving an $\epsilon$-approximation within $O(n^{\frac{1}{\epsilon!}})$ time is still considered to be in $\mathsf{PTAS}$, although good approximations may not be feasible to compute.
\item $\mathsf{FPTAS}$ (Fully Polynomial-Time Approximation Scheme) is defined as the class of NP optimization problems that have an approximation algorithm that returns an $\epsilon$-approximation for any given $\epsilon$, with time complexity polynomial in $n$ and $\frac{1}{\epsilon}$.
\end{itemize}
The relations between these complexity classes are still an open problem, but unless $\mathsf{P} = \mathsf{NP}$, we have $\mathsf{FPTAS} \subsetneq \mathsf{PTAS} \subsetneq \mathsf{APX}$~\cite{approximation_inclusion}.
\subsection{Definition of $\epsilon$-approximation}
For an Ising model with $n$ spins, we denote its ground state by $\sigma_0$. A state $\sigma$ is an $\epsilon$-approximation if:
$$
E(\sigma)-E(\sigma_0) \le n\epsilon
$$
That is, each spin contributes $\epsilon$ error in energy on average.
The reason why we choose this definition is that it is consistent with the optimization version of the independent set problem discussed before: consider a planar graph with maximum degree 3 on which we want to find a maximum independent set. We can construct an Ising model as in Lemma~\ref{thm:nn}. Suppose the cardinality of the maximum independent set is $n_0$, which is at least $\frac{|V|}{4}$ because the graph has maximum degree 3. Then the corresponding ground state of the Ising model has energy $\frac{1}{2}|V| -4(n_0+1)\le E_0\le \frac{1}{2}|V| - 4n_0$. Consider a state with energy $E\le E_0+n\epsilon$; its energy satisfies $E\le \frac{1}{2}|V|-4n_0(1-\epsilon)$, which yields an independent set of cardinality at least $(1-\epsilon)n_0$, i.e., an $\epsilon$-approximation for the independent set problem.
\subsection{Ground state of tree-like Ising models}\label{sec:dp}
The tree decomposition groups the vertices of a graph into bags that have a tree structure, which is often used to accelerate solving certain problems on graphs. The definition of a tree decomposition is as follows~\cite{graph_tree_dp}:
For a graph $G=(V,E)$, a tree decomposition of the graph is a pair $(X,T)$, where each element $X_i\in X$ is a subset of $V$ and $T$ is a tree with nodes in $X$. They satisfy:
\begin{enumerate}
\item $\bigcup_i X_i = V$, that is, each node $v$ is in at least one bag.
\item If there exists an edge $\{u,v\}$, then there exists an $X_i$ such that both $u$ and $v$ are in $X_i$.
\item If both $X_i$ and $X_j$ contain $v$, then every bag on the unique path between $X_i$ and $X_j$ in $T$ contains $v$ as well.
\end{enumerate}
The width of a decomposition is defined as the size of the largest bag minus 1, and the treewidth of the graph is defined as the minimum width over all tree decompositions. Treewidth is a measure of how close the graph is to a tree: a tree clearly has treewidth 1, and any graph that contains a cycle has treewidth at least 2. Notice that there exists no upper bound for planar graphs, as the treewidth of an $n\times n$ square grid is $n$. Some problems on graphs with low treewidth can be solved efficiently by dynamic programming, and finding the ground state of an Ising model is one of them.
For a tree decomposition $(X,T)$, we define an \emph{assignment} $H$ as a set of maps $h_i:X_i\rightarrow \{-1,+1\}$. An assignment is \emph{consistent} if for any vertex $v\in V$, if $v\in X_i$ and $v\in X_j$, then $h_i(v)=h_j(v)$. A consistent assignment gives a vertex only one value no matter which bag it is in, so a consistent assignment corresponds to a state.
We can find the ground state of an Ising model given its tree decomposition by the following algorithm~\cite{graph_tree_dp}:
Picking a bag $X_i$, we enumerate all of its assignments $h_i$ and, for each, find the lowest-energy consistent extension. Removing $X_i$ from $T$ gives a forest $\{T_j\}$; denote by $X_j$ the bags that are adjacent to $X_i$ in $T$ and lie in $T_j$. For each $T_j$, a part of its vertices has already been assigned, and we use this algorithm recursively to solve the ground state of $T_j$ with these vertices fixed, which gives the lowest-energy assignment of $T$ with $h_i$ fixed. Finally, the ground state is given by the lowest-energy assignment among all $h_i$.
Suppose the width of the tree decomposition is $D$; then, with the dynamic-programming technique, solving the ground state takes $O(2^D |X|)$ time.
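To make the dynamic programme concrete, the following Python sketch (ours, purely illustrative) treats the simplest case used below: a chain whose couplings vanish beyond range $R$, for which the bags $\{v_i,\dots,v_{i+R}\}$ form a path decomposition of width $R$. It computes the exact ground-state energy in $O(2^R n R)$ time and cross-checks against brute force on a small instance.
\begin{verbatim}
import itertools

def energy(h, J, sigma):
    """H(sigma) = -sum_{i<j} J[i][j] s_i s_j - sum_i h[i] s_i."""
    n = len(h)
    e = -sum(h[i] * sigma[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * sigma[i] * sigma[j]
    return e

def ground_state_banded(h, J, R):
    """Minimal energy of a chain whose couplings J[i][j] vanish for
    |i - j| > R, via dynamic programming over the last R spins
    (a path decomposition of width R)."""
    n = len(h)
    dp = {(): 0.0}          # suffix of the last assigned spins -> best energy
    for i in range(n):
        new_dp = {}
        for suffix, e in dp.items():
            for s in (+1, -1):
                de = -h[i] * s
                for k, sp in enumerate(reversed(suffix), start=1):
                    de -= J[i - k][i] * sp * s
                key = (suffix + (s,))[-R:]
                val = e + de
                if key not in new_dp or val < new_dp[key]:
                    new_dp[key] = val
        dp = new_dp
    return min(dp.values())

# small cross-check: a 10-spin chain with couplings truncated at range R = 3
n, R, alpha = 10, 3, 1.5
h = [((-1) ** i) * 0.3 for i in range(n)]
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, min(i + R, n - 1) + 1):
        J[i][j] = J[j][i] = -abs(i - j) ** (-alpha)   # anti-ferromagnetic
brute = min(energy(h, J, s) for s in itertools.product([+1, -1], repeat=n))
print(ground_state_banded(h, J, R), brute)
\end{verbatim}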
\subsection{Approximation scheme}
Now, given a long-range interaction, we want to prune the interactions between some spins to obtain a graph of low treewidth, while introducing as small an error in the energies of states as possible.
For an RFIM, the interactions between the spins can be described by a graph $G=(V,E)$, where only interactions along edges of the graph are counted. We define an approximation graph $\tilde{G}=(V, \tilde{E})$ as a subgraph of $G$, and we can construct another Hamiltonian of the system on this approximation graph:
$$
H_{\tilde{G}} = -\sum_{\{i, j\}\in \tilde{E}} J_{ij} \sigma_i\sigma_j - \sum_{k\in V} h_k\sigma_k
$$
If an approximation graph satisfies $\forall \sigma, |H(\sigma) - H_{\tilde{G}}(\sigma)|\le \frac{1}{2} n\epsilon$, then we call it an \emph{$\epsilon$-approximation graph}.
The following lemma gives a sufficient condition for the existence of an efficient approximation:
\begin{lem}\label{thm:apx}
For an RFIM, if there exists an $\epsilon$-approximation graph that has treewidth $D$, then, given a corresponding tree decomposition of width $D$, the algorithm described in Section~\ref{sec:dp} gives an $\epsilon$-approximation in $O(2^D |X|)$ time.
\end{lem}
Following the lemma above, we immediately obtain a sufficient condition for a problem to be in $\mathsf{PTAS}$:
\begin{thm}\label{thm:ptas}
Consider a problem of approximating RFIMs. If, for every instance of size $n$ and for every arbitrarily small $\epsilon>0$, the instance has an $\epsilon$-approximation graph, and this graph has a tree decomposition $(X,T)$ with $|X|$ polynomial in $n$ and with the width of the decomposition bounded by a constant $D$ independent of $n$ (but possibly depending on $\epsilon$), then this problem is in $\mathsf{PTAS}$.
\end{thm}
Using this theorem, we can show that approximating the 1-dimensional $\alpha$-long-range interaction RFIM is in $\mathsf{PTAS}$ for $\alpha > 1$:
\begin{cor}\label{thm:ptas1d}
For a 1-dimensional $\alpha$-long-range interaction RFIM ($n$ spins, $\alpha>1$) with every spin on an integer coordinate, there is an $\epsilon$-approximation graph $\tilde{G}$ that excludes the edges between vertices at distance greater than $R=(\epsilon(\alpha-1))^\frac{1}{1-\alpha}$. There exists a tree decomposition $(X,T)$ of this approximation graph such that $X_i = \{v_i,\cdots,v_{i+\lceil R\rceil}\}$ and $X_i,X_{i+1}$ are connected in $T$. The width of this decomposition is constant in $n$ and $|X|=O(n)$, hence this problem is in $\mathsf{PTAS}$.
\end{cor}
This corollary does not apply for $\alpha\le 1$, as in this scenario the energy is dominated by interactions between spins that are far away from each other.
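For $\alpha>1$, the pruning radius in Corollary~\ref{thm:ptas1d} is easy to evaluate. The short sketch below (ours, illustrative) computes $R$ for a given $\epsilon$ together with the per-spin weight $\sum_{r>\lceil R\rceil} r^{-\alpha}$ of the discarded couplings, which is of order $\epsilon$; the resulting banded couplings can then be handed to a window dynamic programme such as the one sketched in the previous section.
\begin{verbatim}
from math import ceil

def pruning_radius(eps, alpha):
    """R from the corollary (1D case): couplings beyond distance R are dropped."""
    assert alpha > 1
    return (eps * (alpha - 1)) ** (1.0 / (1.0 - alpha))

def tail_weight(R, alpha, terms=10**6):
    """sum_{r > ceil(R)} r^(-alpha), truncated; per-spin weight of dropped couplings."""
    return sum(r ** (-alpha) for r in range(ceil(R) + 1, terms))

for eps in (0.1, 0.01):
    R = pruning_radius(eps, alpha=1.5)
    print(eps, R, tail_weight(R, 1.5))
\end{verbatim}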
There is an intuition that the 2-dimensional square-grid long-range RFIM is not in $\mathsf{PTAS}$, due to the following observation: the edges of the square grid carry interactions of strength $1$, which implies that an approximation graph excluding many square-grid edges is unlikely to be an $\epsilon$-approximation graph for $\epsilon<1$. So it is unlikely that, for a fixed $\epsilon<1$, there exists an $\epsilon$-approximation graph with bounded treewidth, which suggests that the 2-dimensional square-grid long-range RFIM is probably not in $\mathsf{PTAS}$.
\begin{acknowledgments}
We thank Y.-K. Wu for discussions. This work was supported by the Frontier Science Center for Quantum Information of the Ministry of Education of China and the Tsinghua University Initiative Scientific Research Program.
\end{acknowledgments}
\appendix
\section{Appendix A: Proofs for the theorems}
\begin{proof}[Proof for Lemma~\ref{thm:gap}]
We have:
$$
\begin{aligned}
&\min_{\sigma' \notin g^{-1}(G)} E(\sigma') - \max_{\sigma' \in g^{-1}(G)} E(\sigma')\\
\ge{}& a(\min_{\sigma \notin G} E(\sigma) - \delta - \max_{\sigma \in G} E(\sigma)-\delta)\\
={}& a(\Delta_G - 2\delta)
\end{aligned}
$$
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:composite}]
We have:
$$
\begin{aligned}
& |E(\sigma'') - a'aE(g(g'(\sigma'')))|\\
\le{}& |E(\sigma'') - a'E(g'(\sigma''))| + a'|E(g'(\sigma'')) - aE(g(g'(\sigma'')))|\\
\le{}& a'\delta' + a'a\delta\\
={}& a'a(\frac{\delta'}{a} + \delta)
\end{aligned}
$$
So the composite map $g\circ g'$ is a $(a'a,\frac{\delta'}{a} + \delta)$-map.
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:mapgrid}]
For any two spins, let $a, b$ be their positions in the original system, and let $a', b'$ be the positions of the two corresponding constructed spins. We have (with $d(x,y)=||x-y||$):
$$
\begin{aligned}
\frac{1}{\epsilon} d(a, b) -2 \le d(a',b') \le \frac{1}{\epsilon} d(a, b) + 2\\
\end{aligned}
$$
Moreover, for $|x|<\frac{t}{2(\alpha+1)}<\frac{1}{2(\alpha+1)}$ we have:
$$
|(1+x)^{-\alpha}-1| \le \alpha (1-x)^{-\alpha-1} |x|\le t
$$
The Hamiltonian of the system is:
$$
\begin{aligned}
H'(\sigma')=&-\sum_{a',b'\in S'} J_{a'b'} \sigma_{a'}\sigma_{b'} - \sum_{c'\in S'} h'_{c'} \sigma_{c'}\\
=&-\sum_{a', b'\in S'} -d^{-\alpha}(a',b') \sigma_{a'} \sigma_{b'} - \sum_{c'\in S'} h_{c} \epsilon^\alpha \sigma_{c'}\\
\end{aligned}
$$
So:
$$
\begin{aligned}
& \left| H'(\sigma')+(\sum_{a',b'\in S'}-d^{-\alpha}(a,b)\sigma_{a'}\sigma_{b'}+\sum_{c'\in S'} h_{c} \sigma_{c'})\epsilon^\alpha\right| \\
=& \left|\sum_{a',b'\in S'} (d^{-\alpha}(a',b')-\epsilon^\alpha d^{-\alpha}(a,b) ) \sigma_{a'}\sigma_{b'}\right|\\
\le & \sum_{a', b'} \left| d^{-\alpha}(a',b')-\epsilon^\alpha d^{-\alpha}(a,b) \right|\\
=& \sum_{a', b'} \epsilon^\alpha d^{-\alpha}(a, b) \left| (\frac{\epsilon d(a', b')}{d(a,b)})^{-\alpha} -1\right|\\
\le & \sum_{a', b'} \epsilon^\alpha d^{-\alpha}(a, b) \left| (1+\frac{\epsilon d(a', b')-d(a,b)}{d(a,b)})^{-\alpha} -1\right|\\
\end{aligned}
$$
Now if we set $\epsilon \le \frac{t d}{4(\alpha+1)}$ with $t\in(0,1]$, then for all $a', b'$ we have:
$$
\left|\frac{\epsilon d(a', b')-d(a,b)}{d(a,b)}\right|\le \frac{2\epsilon}{d(a,b)} \le \frac{t}{2(\alpha+1)}
$$
It follows that:
$$
\begin{aligned}
&|H'(\sigma') - \epsilon^\alpha H(g(\sigma'))|\\
=& \left| H'(\sigma')+(\sum_{a',b'\in S'}-d^{-\alpha}(a,b)\sigma_{a'}\sigma_{b'}+\sum_{c'\in S'} h_{c} \sigma_{c'})\epsilon^\alpha\right| \\
\le & \sum_{a', b'} \epsilon^{\alpha}d^{-\alpha}(a,b) t\\
\le & \epsilon^{\alpha} \frac{n(n+1)}{2} t d^{-\alpha}
\end{aligned}
$$
By setting $t\le \frac{2\delta d^\alpha}{n(n+1)}$, which corresponds to $\epsilon<\frac{d^{\alpha+1} \delta}{2(\alpha+1)n(n+1)}$, we conclude that $g$ is an $(\epsilon^\alpha, \delta)$-mapping.
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:valid}]
The interactions acting on the logical spin satisfy the following condition: the total disturbance from the other spins, together with the applied field, is smaller than $|h|+8n(R-\sqrt{2})^{-\alpha} < \frac{\Delta}{2}$, so $\{\sigma_+, \sigma_-\}$ remains a set of low-energy states.
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:interaction}]
Let $L$ and $L'$ denote the sets that contain the spins in two logical spins respectively.
The Hamiltonian of the system writes:
$$
\begin{aligned}
H(\sigma) =& -\sum_{a,b\in L\cup L'} J_{a,b}\sigma_a\sigma_b - \sum_{c} h_c\sigma_c\\
=&-\sum_{a,b \in L} J_{a,b}\sigma_a\sigma_b - \sum_{a,b \in L'} J_{a,b}\sigma_a\sigma_b\\
&- \sum_{c\in L} h_c\sigma_c - \sum_{c\in L'} h_c\sigma_c - \sum_{a\in L, b\in L'} J_{a,b}\sigma_a\sigma_b\\
=&2C -\tilde{h_1}\tilde{\sigma_1} - \tilde{h_2}\tilde{\sigma_2} - I_{12} \tilde{\sigma_{1}}\tilde{\sigma_2}\\
\end{aligned}
$$
where $C$ is a function of $\alpha$.
To calculate $I_{12}$, we expand $J_{a,b}$ in a Taylor series; keeping terms up to the 4th order shows that:
$$
I_{12} = -\alpha^2(\alpha+2)^2 r^{-(\alpha+4)} (1+O(r^{-1}))
$$
So we can conclude that:
$$
\left|I_{12} +\alpha^2(\alpha+2)^2 r^{-(\alpha+4)}\right| \le f(\alpha) r^{-\alpha-5}
$$
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:maplogical}]
Using Lemma~\ref{thm:interaction}, the Hamiltonian of the constructed system is, up to a constant:
$$
\begin{aligned}
H'(\Tilde{\sigma}) =& -\sum_{c} \tilde{h_c} \tilde{\sigma_c} -\sum_{a,b} I_{a,b} \tilde{\sigma_a}\tilde{\sigma_b}
\end{aligned}
$$
Let $\sigma = g(\Tilde{\sigma})$; we have:
$$
\begin{aligned}
&\left| H'(\Tilde{\sigma}) - \alpha^2(\alpha+2)^2 \epsilon^{\alpha+4} H(\sigma) \right|\\
={}&\left| \sum_c \tilde{h}_c\tilde{\sigma_c} + \sum_{a,b} I_{a,b} \tilde{\sigma_a}\tilde{\sigma_b}\right. \\&- \left.\sum_c \alpha^2(\alpha+2)^2 \epsilon^{\alpha+4} h_c \tilde{\sigma_c} - \sum_{a,b} -\alpha^2(\alpha+2)^2 \epsilon^{\alpha+4} \tilde{\sigma_a}\tilde{\sigma_b} \right|\\
={}&\left|\sum_{a,b} (I_{a,b} + \alpha^2(\alpha+2)^2 \epsilon^{\alpha+4} )\tilde{\sigma_a}\tilde{\sigma_b} \right|\\
\le{}& \frac{n(n+1)}{2} f(\alpha) \epsilon^{\alpha+5}
\end{aligned}
$$
By choosing $\epsilon < \delta \frac{2\alpha^2(\alpha+2)^2}{n(n+1)f(\alpha)}$, $g$ is a $(\alpha^2(\alpha+2)^2\epsilon^{\alpha+4}, \delta)$-mapping.
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:upperbound}]
The Hamiltonian of the constructed Ising model is:
$$
H'(\sigma') = -\sum_{u,v\in V} -r_{uv}^{-\alpha}\sigma'_u\sigma'_v - \sum_{w\in V} -\sigma'_w
$$
So:
$$
\begin{aligned}
&\left| H'(\sigma') - H(g(\sigma')) \right|\\
=&\left| \sum_{u,v\in V} r_{uv}^{-\alpha} \sigma'_u \sigma'_v - \sum_{(u,v)\in E} \sigma'_u\sigma'_v\right|\\
=&\left| \sum_{(u,v) \notin E} r_{uv}^{-\alpha} \sigma'_u\sigma'_v \right|\\
\le& \frac{|V|(|V|+1)}{2} 2^{-\frac{\alpha}{2}}\\
\le & \frac{1}{4}
\end{aligned}
$$
\end{proof}
\begin{proof}[Proof for Lemma~\ref{thm:apx}]
Suppose the algorithm finds $\sigma$, and let $\sigma_0$ be the ground state. We have:
$$
\begin{aligned}
&H(\sigma) - H(\sigma_0)\\
={}& (H(\sigma) - H_{\tilde{G}}(\sigma)) + (H_{\tilde{G}}(\sigma) - H_{\tilde{G}}(\sigma_0) )+ (H_{\tilde{G}}(\sigma_0) - H(\sigma_0))\\
\le{}& \frac{1}{2}n\epsilon + 0 + \frac{1}{2}n\epsilon\\
\le{}& n\epsilon
\end{aligned}
$$
\end{proof}
\end{document} |
\begin{document}
\title[Mathematical Models]{Mathematical Models of Contemporary Elementary
Quantum Computing Devices}
\author{G. Chen}
\address{Department of Mathematics, Texas A\&M University, College Station,
TX~77843, U.S.A.}
\email{[email protected]}
\author{D. A. Church}
\address{Department of Physics, Texas A\&M University, College Station,
TX~77843, U.S.A.}
\email{[email protected]}
\author{B.-G. Englert}
\address{Department of Physics, National University of Singapore, Singapore}
\email{[email protected]}
\author{M. S. Zubairy}
\address{Department of Physics, Texas A\&M University, College Station,
TX~77843, U.S.A.}
\email{[email protected]}
\thanks{The first and fourth authors are also of the Institute of Quantum
Studies, Texas A\&M University, College
Station, TX 77843, U.S.A., and are supported in part by DARPA QuIST Contract
F49620-01-1-0566 and Texas A\&M University TITF initiative.
The third author was supported in part by NSF Grant PHY9876899.}
\begin{abstract}
Computations with a future quantum computer will be implemented through
the operations by elementary quantum gates. It is now well known that the
collection of 1-bit and 2-bit quantum gates are universal for
quantum computation, i.e., any $n$-bit unitary operation can be carried out
by concatenations of 1-bit and 2-bit elementary quantum gates.
Three contemporary quantum devices--cavity QED, ion traps and quantum
dots--have been widely regarded as perhaps the most promising candidates
for the construction of elementary quantum gates. In this paper, we describe
the physical properties of these devices, and show the mathematical
derivations based on the interaction of the laser field as control with
atoms, ions or electron spins, leading to the following:
\begin{itemize}
\item[(i)] the 1-bit unitary rotation gates; and
\item[(ii)] the 2-bit quantum phase gates and the controlled-not gate.
\end{itemize}
This paper is aimed at providing a sufficiently self-contained survey
account of analytical nature for mathematicians, physicists and computer
scientists to aid interdisciplinary understanding in the
research of quantum computation.
\end{abstract}
\maketitle
\section{Introduction}\label{sec1}
The design and construction of the quantum computer is a major project by
the scientific community of the 21$^{\rm st}$ Century. This project is
interdisciplinary by nature, requiring close collaboration between
physicists, engineers, computer scientists and mathematicians.
Physicists have taken the lead in the design of quantum devices utilizing the
peculiar properties of quantum mechanics. Most or perhaps all of such
devices will eventually become micro- or nano-fabricated. Their controls are
achieved through the
tuning of laser pulses. As hardware, many types of material or
devices have been either used or proposed: NMR, cavity QED, ion or atom
traps, quantum dots, SQUID, etc. See some discussions in \cite[Chap.~7]{NC},
\cite{S}, for example. NMR (nuclear magnetic resonance) is the first
scheme utilized by researchers to demonstrate the principle of quantum
computing through quantum search.
NMR seems to have provided the most successful demonstration of quantum
computing so far.
However, this kind of
``quantum computing in a coffee mug'' utilizes bulk material, which is not
considered as really quantum by many people. In addition, NMR has a severe
weakness in the lack of scalability.
Concerning SQUIDs\footnote{The SQUID can be
viewed as a high-$Q$
microwave frequency LC resonator.
The first excited state, with excitation energy $\hbar \omega$
where $\omega$ is the microwave frequency, is doubly degenerate in zero
magnetic field. These two states are the qubit states. To lift
the degeneracy, Josephson junction(s) are placed in the current loop.
Magnetic field can then be used to tune the levels back to degeneracy
as required for qubit coupling, etc.
The coupling is magnetic, as in mutual inductance of two current loops.
The major breakthrough made last year (2001--02) is the single quantum
flux gate for readout.
It is 100\% efficient and does not add decoherence. Expect to
see rapid progress in the near future as a result of this breakthrough.
We thank Prof. Philip R. Hemmer for the above communication.}
(superconducting quantum interference devices), there
has been major progress in the design and control of such devices
recently.
However, the authors' analytical knowledge about SQUIDs still appears
very limited for the time being and, thus, any detailed mathematical
representations must be deferred to a future account.
We focus our attention on the following three quantum
devices: cavity QED, ion traps and quantum dots. They seem to have
received the most attention in the contemporary literature, and have been
widely identified by many as the most promising. The writing of this paper
all began with our
simple desire to try to understand, analytically, how and why
such devices work. Analytical studies of these devices are scattered in
many references in the physics literature, which admittedly are not easy
reading for most mathematicians. Indeed, this is the most commonly
encountered difficulty in any interdisciplinary research. But even
many physicists specializing in quantum devices find certain difficulty
in their attempt to understand the working of other quantum devices, if such
devices do not fall exactly within their own specialty. Therefore, our
objectives in this paper are three-fold:
\begin{itemize}
\item[(1)] Provide a sufficiently self-contained account of the physical
description of these primary contemporary quantum computing devices and
the derivation of their mathematical representations.
\item[(2)] Supply the control-theoretic aspect of quantum control via the
shaping
of laser pulses in quantum computing, which is the main theme of the
conference.
\item[(3)] Write an accessible survey of three important quantum devices for
mathematicians and other scientists who are interested in understanding
the basic interdisciplinary link between physics, mathematics and computer
science, in a mathematically rigorous way, for quantum computing.
\end{itemize}
Our main emphasis will be
on the mathematical modeling. As far as the physics is concerned
in this paper, we acknowledge the many segments
that we have extracted, directly or indirectly, from the original sources
cited in the references. New devices and designs emerge almost daily, and we
must concede that the list of
references is neither exhaustively comprehensive nor totally up-to-date as
any attempt to achieve such by us is nearly impractical, or even
impossible given the time and various other constraints. We
nevertheless believe that the basic mathematics underlying most of the quantum
computing devices remains largely similar, and hope that we have provided
enough ideas and information for the readers
to trace the literature on their own.
In the main body of this paper to follow, there are four sections.
Section 2 provides a quick summary of universality results of
quantum gates. Sections 3, 4 and 5 deal with, respectively, cavity QED, ion
traps and quantum dots. In the final Section 6, we briefly discuss some
issues on laser control in quantum computing as the conclusion of this
paper.
\section{Universality of Elementary Quantum Gates}\label{sec2}
We begin this section by introducing some basic notations used in quantum
mechanics, as such notations do not appear to be familiar to a majority of
mathematicians. A useful reference can be found in \cite{SZ}.
Let $\cl H$ be a {\em complex\/} Hilbert space with inner product $\langle
~~,~~\rangle$. For any subset $S\subseteq \cl H$, define
\begin{align}
\text{span } S &= \text{closure of } \left\{\sum^n_{j=1} \alpha_j\psi_j \mid
\alpha_j\in\bb C,\ \psi_j \in S,\ j=1,2,\ldots, n\right\}\nonumber\\
\label{eq2.1}
&\qquad \text{ in } \cl H,
\end{align}
where in the above, $\bb C$ denotes the set of all complex numbers.
Elements in
$\cl H$, such as $\psi_j$ in \eqref{eq2.1}, are called vectors. The inner
product of two vectors $\psi_j$ and $\psi_k$ is $\langle\psi_j,\psi_k\rangle$.
However, in quantum mechanics, the Dirac bra-ket notation writes the vector
$\psi_k$ as $|k\rangle$, pronounced {\em ket\/} $k$, where $k$ labels the
vector. The role of
$\psi_j$ in the inner product $\langle\psi_j,\psi_k\rangle$ is
that $\psi_j$ is identified as a linear functional on $\cl H$ by
\begin{equation}
\psi^*_j\colon \ \cl H\longrightarrow \bb C;\quad \psi^*_j(\psi_k) \equiv
\langle\psi_j,\psi_k\rangle.
\end{equation}
The Dirac bra-ket notation writes $\psi^*_j$ as $\langle j|$, pronounced
{\em bra\/} $j$. The inner product is now written as
\begin{equation}
\langle\psi_j,\psi_k\rangle = \langle j|k\rangle.
\end{equation}
Let $A\colon \ \cl H\to \cl H$ be a linear operator. Then in Dirac's notation
we
write $\langle \psi_j, A\psi_k\rangle$ as $\langle j|A|k\rangle$.
The set of all kets $(|k\rangle)$ generates the linear space $\cl H$, and the
set of all bras $(\langle j|)$ also generates the dual linear space $\cl H^*$.
These two linear spaces are called, respectively, the ket space and the bra
space, which are dual to each other. The natural isomorphism from $H$ onto
$H^*$, or from $H^*$ onto $H$, is called ``taking the adjoint'' by physicists.
An expression like $\langle j|A|k\rangle$ has an intended symmetry:\ one can
think of $A$ acting on $|k\rangle$ on the right as a matrix times a column
vector, or $A$ acting on $\langle j|$ on the left as a row vector times a
matrix.
Any 2-dimensional complex Hilbert space $\cl H$ is isomorphic to $\bb C^2$.
The standard basis of $\bb C^2$ is
\begin{equation}
\pmb{e}_1 = \left[\begin{matrix} 1\\ 0\end{matrix}\right],\quad \pmb{e}_2 =
\left[\begin{matrix} 0\\ 1\end{matrix}\right].
\end{equation}
In quantum mechanics, the preferred notation for basis is, respectively,
$|0\rangle$ and
$|1\rangle$,
where $|0\rangle$ and $|1\rangle$ generally refer to the ``spin'' state of a
certain quantum system under discussion.
``Spin up'' and ``spin down''
form a binary quantum alternative and, therefore, a quantum bit or,
a \emph{qubit}. These spin states may then further be
identified with $\pmb{e}_1$ and $\pmb{e}_2$ through, e.g.,
\begin{equation}\label{eq2.1a}
|0\rangle = \left[\begin{matrix} 1\\ 0\end{matrix}\right],\quad |1\rangle =
\left[\begin{matrix} 0\\ 1\end{matrix}\right].
\end{equation}
For a column vector, say $\pmb{e}_1$,
we use superscript $T$ to denote its transpose,
such as $\pmb{e}^T_1 = [1~~0]$,
for example. The spin states $|0\rangle$ and $|1\rangle$ form a standard
orthonormal basis for the (single) qubit's ket space. The tensor product of a
set of $n$ vectors
$|z_j\rangle\in
\bb C^2_j$, specified by the quantum numbers $z_j$
for $j=1,2,\ldots, n$, is written interchangeably as
\begin{equation}\label{eq2.2}
|z_1\rangle \otimes |z_2\rangle \otimes\cdots \otimes |z_n\rangle =
|z_1\rangle |z_2\rangle \cdots |z_n\rangle = |z_1z_2 \cdots
z_n\rangle.
\end{equation}
The tensor product space of $n$ copies of $\bb C^2$ is defined to be
\begin{equation}\label{eq2.3}
(\bb C^2)^{\otimes n} = \text{span}\{|j_1j_2\cdots j_n\rangle \mid j_k\in
\{0,1\}, k=1,2,\ldots, n\}.
\end{equation}
It is a $2^n$-dimensional complex Hilbert space with the induced inner product
from $\bb C^2$. A {\em quantum state\/} is a vector in $(\bb C^2)^{\otimes n}$
(or in any Hilbert space $\cl H$) with the unit norm\footnote{This type of
quantum state is actually a more restrictive type, called a
\emph{pure state}.
A pure state, say ket $|k\rangle$, could be identified with the projection
operator $|k\rangle\langle k|$.
Then a \emph{general state} is a convex sum of
such projectors, i.e., $\sum\limits_k c_k|k\rangle\langle k|$ with $c_k\ge 0$
and $\sum\limits_k c_k=1$.}.
A quantum state is said to be \emph{entangled} if it is \emph{not}
a tensor product of the form \eqref{eq2.2}.
A \emph{quantum operation} is a \emph{unitary linear transformation}
on $\cl H$.
The \emph{unitary group} $U(\cl H)$ on $\cl H$ consists of all unitary linear
transformations on $\cl H$ with composition as the (noncommutative) natural
multiplication operation of the group.
It is often useful to write the basis vectors $|j_1j_2\cdots j_n\rangle \in
(\bb
C^2)^{\otimes n}$, $j_k\in \{0,1\}$, $k=1,2,\ldots, n$, as column vectors
according to the lexicographic ordering:
\begin{align}
|00\cdots 00\rangle &= \left[\begin{matrix} 1\\ 0\\ 0\\ \vdots\\ 0\\ 0\\
0\end{matrix}\right] , |00\cdots 01\rangle = \left[\begin{matrix} 0\\ 1\\ 0\\
\vdots\\ 0\\ 0\\ 0\end{matrix}\right], |00\cdots 10\rangle =
\left[\begin{matrix} 0\\ 0\\ 1\\ \vdots\\ 0\\ 0 \\ 0\end{matrix}\right],
\cdots\nonumber\\
\label{eq2.4}
|11\cdots 10\rangle &= \left[\begin{matrix} 0\\ 0\\ 0\\ \vdots\\ 0\\ 1\\
0\end{matrix}\right] , |11\cdots 11\rangle = \left[\begin{matrix} 0\\ 0\\ 0\\
\vdots\\ 0\\ 0\\ 1\end{matrix}\right];
\end{align}
cf.\ \eqref{eq2.1a}. Any $2\times 2$ unitary matrix is an admissible quantum
operation on one qubit.
We call it a 1-bit gate. Similarly, we call a $2^k\times 2^k$ unitary
transformation a $k$-bit gate. Let a 1-bit gate be
\begin{equation}
U = \left[\begin{matrix} u_{00}&u_{01}\\ u_{10}&u_{11}\end{matrix}\right];
\end{equation}
we define the operator $\Lambda_m(U)$ \cite{Ba} on $(m+1)$-qubits (with
$m=0,1,2,\ldots$) through its action on the basis by
\begin{equation}
\Lambda_m(U)(|x_1x_2\cdots x_my\rangle) =
\left\{\begin{array}{@{}l@{\ }l@{\ }l@{}}
|x_1x_2\cdots x_my\rangle&\text{if}&\wedge^m_{k=1} x_k=0,\\
u_{0y}|x_1x_2\cdots x_m0\rangle + u_{1y}|x_1x_2\cdots x_m1\rangle
&\text{if}&
\wedge^m_{k=1}x_k=1,\end{array}\right.
\end{equation}
where ``$\wedge$'' denotes the Boolean operator AND. The matrix representation
for $\Lambda_m(U)$ according to the ordered basis \eqref{eq2.4} is
\begin{equation}
\Lambda_m(U) = \left[\begin{matrix}
1\\ &\ddots&&\bigcirc\\ &&1\\ &\bigcirc&&\left[\begin{matrix} u_{00}&u_{01}\\
u_{10}&u_{11}\end{matrix}\right]\end{matrix}\right]_{2^{m+1}\times 2^{m+1}}.
\end{equation}
This unitary operator is called the $m$-bit-controlled $U$ operation.
An important logic operation on one qubit is the NOT-gate
\begin{equation}
\sigma_x = \left[\begin{matrix} 0&1\\ 1&0\end{matrix}\right];
\end{equation}
cf.\ the Pauli matrices in \eqref{eq5.1a} of Section \ref{sec5}. From
$\sigma_x$,
define the important 2-bit operation $\Lambda_1(\sigma_x)$, which is the
controlled-not gate, henceforth acronymed the CNOT gate.
For any 1-bit gate $A$, denote by $A(j)$ the operation (defined through
its action on the basis) on the slot $j$:
\begin{equation}
A(j)|x_1x_2\cdots x_j\cdots x_n\rangle = |x_1\rangle \otimes |x_2\rangle
\otimes\cdots \otimes |x_{j-1}\rangle \otimes [A|x_j\rangle] \otimes |x_{j+1}
\rangle \otimes\cdots\otimes |x_n\rangle.
\end{equation}
Similarly, for a 2-bit gate $B$, we define $B(j,k)$ to be the operation on the
two qubit slots $j$ and $k$ \cite{BB}. A simple 2-bit gate is the swapping gate
\begin{equation}\label{eq2.5}
U_\mathrm{sw}
|x_1x_2\rangle = |x_2x_1\rangle, \text{ for } x_1,x_2\in \{0,1\}.
\end{equation}
Then $U_\mathrm{sw}(j,k)$ swaps the qubits between the $j$-th
and the $k$-th slots.
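To make the slot notation concrete, here is a minimal NumPy sketch that we add for illustration (the helper name is ours, not from \cite{BB}); with the lexicographic ordering \eqref{eq2.4}, qubit $1$ is the leftmost tensor factor.
\begin{verbatim}
import numpy as np
from functools import reduce

def on_slot(A, j, n):
    """Embed the 1-bit gate A into slot j of an n-qubit register, i.e. A(j)."""
    factors = [A if k == j else np.eye(2) for k in range(1, n + 1)]
    return reduce(np.kron, factors)  # 2^n x 2^n matrix acting only on qubit j
\end{verbatim}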
The tensor product of two 1-bit quantum gates $S$ and $T$ is defined through
\begin{equation}
(S\otimes T) (|x\rangle \otimes |y\rangle) = (S|x\rangle) \otimes (T|y\rangle),
\text{ for } x,y\in \{0,1\}.
\end{equation}
A 2-bit quantum gate $V$ is said to be \emph{primitive}
(\cite[Theorem 4.2]{BB}) if
\begin{equation}
V = S\otimes T\quad \text{or}\quad V = (S\otimes T) U_\mathrm{sw}
\end{equation}
for some 1-bit gates $S$ and $T$. Otherwise, $V$ is said to be
\emph{imprimitive}.
It is possible to factor an $n$-bit quantum gate $U$ as the composition of
$k$-bit
quantum gates for $k\le n$. The earliest result of this {\em universality\/}
study was given by Deutsch \cite{De} in 1989 for $k=3$. Then in 1995, Barenco
\cite{BaO}, Deutsch, Barenco and Ekert \cite{DBE}, DiVincenzo \cite{DiV} and
Lloyd \cite{Ll} gave the result for $k=2$. Here, we quote the elegant
mathematical treatment and results by J.-L. Brylinski and R.~K. Brylinski in
\cite{BB}.
\begin{defn}[$\lbrack$\ref{BB}, Def.~4.2$\rbrack$]\label{defn2.1}
A collection of 1-bit gates $A_i$ and 2-bit gates $B_j$ is called (exactly)
{\em universal\/} if, for each $n\ge 2$, any $n$-bit gate can be obtained by
composition of the gates $A_i(\ell)$ and $B_j(\ell,m)$, for
$1\le \ell, m\le n$.
$\square$
\end{defn}
\begin{thm}[$\lbrack$\ref{BB}, Theorem 4.1$\rbrack$]\label{thm2.1}
Let $V$ be a given 2-bit gate. Then the following are equivalent:
\begin{itemize}
\item[(i)] The collection of all 1-bit gates $A$ together with $V$ is
universal.
\item[(ii)] $V$ is imprimitive.$
\square$
\end{itemize}
\end{thm}
A common type of 1-bit gate from AMO (atomic, molecular and optical)
devices is the unitary rotation gate
\begin{equation}\label{eq2.6}
U_{\theta,\phi} \equiv \left[\begin{matrix} \cos\theta&-ie^{-i\phi}\sin\theta\\
-ie^{i\phi}\sin\theta&\cos\theta\end{matrix}\right],
\qquad 0\le \theta,\phi \le 2\pi.
\end{equation}
We have (the determinant) $\det U_{\theta,\phi} = 1$ for any $\theta$ and
$\phi$
and, thus, the collection of all such $U_{\theta,\phi}$ is not dense in $U(\bb
C^2)$. Any unitary matrix with determinant equal to 1 is said to be {\em
special
unitary\/} and the collection of all special unitary matrices on $(\bb
C^2)^{\otimes n}$, $SU((\bb C^2)^{\otimes n})$,
is a proper subgroup of $U((\bb
C^2)^{\otimes n})$. It is known that the gates $U_{\theta,\phi}$ generate
$SU(\bb C^2)$.
Another common type of 2-bit gate we shall encounter in the next two sections
is the quantum phase gate (QPG)
\begin{equation}\label{eq2.7}
Q_\eta = \left[\begin{matrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\
0&0&0&e^{i\eta}\end{matrix}\right],\qquad 0\le \eta\le 2\pi.
\end{equation}
\begin{thm}[$\lbrack$\ref{BB}, Theorem 4.4$\rbrack$]\label{thm2.2}
The collection of all the 1-bit gates $U_{\theta,\phi}$, $0\le \theta,\phi\le 2\pi$,
together with any 2-bit gate $Q_\eta$ where $\eta\not\equiv 0$
(mod $2\pi$), is universal.$
\square$
\end{thm}
Note that the CNOT gate $\Lambda_1(\sigma_x)$ can be written as
\begin{equation}\label{eq2.8}
\Lambda_1(\sigma_x) = U_{\pi/4,\pi/2}(2) Q_\pi U_{\pi/4,-\pi/2}(2);
\end{equation}
cf.\ \cite[(2.2)]{DZC}. Thus, we also have the following.
\begin{cor}\label{cor2.3}
The collection of all the 1-bit gates $U_{\theta,\phi}$, $0\le \theta,\phi \le
2\pi$, together with the CNOT gate $\Lambda_1(\sigma_x)$, is universal.
$
\square$
\end{cor}
Diagrammatically, the quantum circuit for \eqref{eq2.8} is given in
Fig.~\ref{fig2.1}.
\begin{figure}
\caption{}
\label{fig2.1}
\end{figure}
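Readers who wish to check \eqref{eq2.8} numerically may use the following short NumPy sketch, which we add here for illustration; it only relies on the conventions of \eqref{eq2.4}, \eqref{eq2.6} and \eqref{eq2.7} and is not taken from \cite{DZC}.
\begin{verbatim}
import numpy as np

def U(theta, phi):                       # 1-bit rotation gate, eq. (2.6)
    return np.array([[np.cos(theta), -1j*np.exp(-1j*phi)*np.sin(theta)],
                     [-1j*np.exp(1j*phi)*np.sin(theta), np.cos(theta)]])

def on_qubit2(A):                        # A(2): identity on qubit 1, A on qubit 2
    return np.kron(np.eye(2), A)

Q_pi = np.diag([1.0, 1.0, 1.0, -1.0])    # the QPG of eq. (2.7) with eta = pi
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

lhs = on_qubit2(U(np.pi/4, np.pi/2)) @ Q_pi @ on_qubit2(U(np.pi/4, -np.pi/2))
assert np.allclose(lhs, CNOT)            # eq. (2.8) is verified
\end{verbatim}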
\section{Two-Level Atoms and Cavity QED}\label{sec3}
The main objective of this section is to show that the 1-bit unitary gates
\eqref{eq2.6} and the 2-bit QPG \eqref{eq2.7} can be implemented using the
methods of, respectively, 2-level atoms and cavity QED. For clarity, we divide
the discussions in three subsections.
\subsection{Two-Level Atoms}\label{sec3.1}
The atomic energy levels are very susceptible
to excitation by electromagnetic radiation. The
structure of electronic eigenstates and the interaction between electrons and
photons can be quite complicated. But under certain assumptions we have in
effect a 2-level atom. These assumptions are as follows:
\begin{itemize}
\item[(i)] the difference in energy levels of the electrons matches the
energy of the incident photon;
\item[(ii)] the symmetries of the atomic structure and the resulting
``selection rules'' allow the transition of the electrons between the two
levels; and
\item[(iii)] all the other levels are sufficiently ``detuned'' in frequency
separation with respect to
the frequency of the incident field such that there is no transition to those
levels.
\end{itemize}
Then the model of a 2-level atom provides a good approximation.
\begin{figure}
\caption{}
\label{fig3.1}
\end{figure}
The wave function $\psi(\pmb{r},t)$ of the electron of the 2-level atom,
which is coupled to the external electric field $\pmb{E}(\pmb{r},t)$ by its
electric charge $e$, as shown in Fig.~\ref{fig3.1}, obeys the
Schr\"odinger equation
\begin{equation}\label{eq3.3}
i\hbar \frac\partial{\partial t} \psi(\pmb{r},t) = H\psi(\pmb{r},t)
= (H_0+H_1) \psi(\pmb{r},t),
\end{equation}
where $|\psi(\pmb{r},t)|^2$ is the probability density
of finding that electron at position $\pmb{r} = (x,y,z)$ at time $t$, and
\begin{align}\label{eq3.6c}
H_0 &\equiv -\frac{\hbar^2}{2m} \pmb{\nabla}^2 + V(\pmb{r}) &
\begin{minipage}[t]{200pt}
is the differential operator for the unperturbed Hamiltonian of the electron,
with the gradient operator
$\pmb{\nabla}=(\partial/\partial x, \partial/\partial y, \partial/\partial
z)$
and the electrostatic potential $V(\pmb{r})$ between
the electron and the nucleus,
\end{minipage}
\\
\label{eq3.6d}
H_1 &\equiv -e\pmb{r} \cdot \pmb{E}(\pmb{r}_0,t) &
\begin{minipage}[t]{200pt}
is the interaction
Hamiltonian between the field and the electron of the atom.
\end{minipage}
\end{align}
\begin{rem}\label{rem3.0.c}
The wavelength of visible light, typical for atomic transitions,
is about a few
thousand times the diameter of an atom. Therefore, there is no significant
spatial variation of the electric field across an atom and so we can replace
$\pmb{E}(\pmb{r},t)$ by $\pmb{E}(\pmb{r}_0,t)$, the field at a reference point
inside the atom such as the position of the nucleus or the center of mass.
Consistent with this long-wavelength approximation (known as {\em dipole
approximation\/} in the physics literature) is that the magnetic field
$\pmb{B}$ satisfies $\pmb{B}(\pmb{r},t) \cong 0$, so that the Lorentz force of
the radiation field on the electron is $e\pmb{E}(\pmb{r}_0,t)$, the negative
gradient of $-e\pmb{r}\cdot \pmb{E}(\pmb{r}_0,t)$, which adds to the potential
energy $V(\pmb{r})$ in the Hamiltonian operator.$
\square$
\end{rem}
Let the external electromagnetic field be a monochromatic plane-wave field
linearly polarized along the $x$-axis, which interacts with the atom placed at
$\pmb{r}_0 = \pmb{0}$. The electric field takes the form
\begin{equation}\label{eq3.6e}
\pmb{E}(\pmb{0},t)
= \cl E\cos(\nu t) \pmb{e}_x;\quad \pmb{e}_x \equiv [1,0,0]^T.
\end{equation}
Let $|1\rangle$ and $|0\rangle$ represent upper and lower level states of the
atom, which are eigenstates of the unperturbed part $H_0$ of the Hamiltonian
corresponding to the eigenvalues $\hbar \omega_1$ and $\hbar \omega_0$,
respectively, with $\omega \equiv \omega_1-\omega_0$, cf.\ Fig.~\ref{fig3.1}.
For the unperturbed 2-level atom system, i.e., without the external
electromagnetic field,
let the eigenstates of the electron be $|\psi_0\rangle =
|0\rangle$ and $|\psi_1\rangle= |1\rangle$, i.e.,
\begin{equation}\label{eq3.7}
\left\{\begin{array}{lll}
H_0|\psi_0\rangle = E_0|\psi_0\rangle,&\text{or}&H_0|0\rangle = \hbar
\omega_0|0\rangle;\qquad
\langle 0|0\rangle = 1,\\[1ex]
H_0|\psi_1\rangle = E_1|\psi_1\rangle,&\text{or}&H_0|\psi_1\rangle = \hbar
\omega_1|1\rangle;
\qquad \langle 1|1\rangle = 1.\end{array}\right.
\end{equation}
We invoke the assumption of 2-level atom that the electron
lives only on states
$|0\rangle$ and $|1\rangle$:
\begin{equation}\label{eq3.8}
|\psi(t)\rangle = C_0(t)|0\rangle + C_1(t)|1\rangle,
\end{equation}
where $C_0(t)$ and $C_1(t)$ are complex-valued functions such that $|C_0(t)|^2
+
|C_1(t)|^2 = 1$, with $|C_0(t)|^2$ and $|C_1(t)|^2$ being the probability of
the
electron in, respectively, state $|0\rangle$ and $|1\rangle$ at time $t$.
Since $H_1$ (after quantization) is an operator effective only on the subspace
\begin{equation}
S = \text{span}\{|0\rangle, |1\rangle\},
\end{equation}
$H_1$ is the zero operator on $S^\bot$ (the orthogonal complement of $S$
in the Hilbert space $\cl H$ which
has an orthonormal basis consisting of all eigenstates of $H_0$). We utilize
the property that on $S$ the projection operator is
$|0\rangle \langle 0| + |1\rangle\langle 1|$, and obtain
\begin{align}
H_1 &= - e\pmb{r}\cdot \pmb{E}(\pmb{0},t) = -e(x\pmb{e}_x + y\pmb{e}_y +
z\pmb{e}_z)\cdot (\cl E\cos (\nu t)\pmb{e}_x) \nonumber\\
&= -e\cl Ex \cos(\nu t) \text{ (by \eqref{eq3.6d} and
\eqref{eq3.6e})}\nonumber\\
&= -e\cl E[|0\rangle\langle 0| + |1\rangle\langle 1|] x [|0\rangle \langle
0| + |1\rangle\langle 1|] \cos (\nu t)\nonumber\\
&= -e\cl E\{[\langle 0| x |0\rangle] |0\rangle\langle 0| + [\langle 0| x
|1\rangle] |0\rangle \langle 1|\nonumber\\
\label{eq3.9}
&\quad + [\langle 1| x |0\rangle] |1\rangle\langle 0| + [\langle 1| x
|1\rangle] |1\rangle\langle 1|\} \cos (\nu t).
\end{align}
But from the symmetry property and the selection rules that
$|\psi_j(\pmb{r})|^2 = |\psi_j(-\pmb{r})|^2$ for $j=0$
and 1, we have
\begin{align}
\langle 0| x |0\rangle = \int\limits^\infty_{-\infty}
\int\limits^\infty_{-\infty}\int\limits^\infty_{-\infty} x
|\psi_0(\pmb{r})|^2 dxdydz =0
\end{align}
since $x=\pmb{r}\cdot\pmb{e}_x$ is an odd function of $\pmb{r}$ whereas
$|\psi_0(\pmb{r})|^2$ is even.
Similarly, $\langle 1|x |1\rangle = 0$. Now, write $P_{01} \equiv e\langle
0| x |1\rangle$ and $P_{10} \equiv e\langle 1|x |0\rangle$. Then
$P_{10} = \overline P_{01}$. To solve the partial differential
equation \eqref{eq3.3}, we need only focus our attention on the
2-dimensional invariant subspace $S$ and, thus, a dramatic reduction of
dimensionality. Substituting \eqref{eq3.8} into \eqref{eq3.3}, utilizing
\eqref{eq3.7} and \eqref{eq3.9} and simplifying, we obtain a $2\times 2$ first
order linear ODE
\begin{equation}\label{eq3.10}
\frac{d}{dt} {\left[\begin{matrix} C_0(t)\\ C_1(t)\end{matrix}\right]} =
\left[\begin{matrix} -i\omega_0&i\Omega_R e^{-i\phi}\cos \nu t\\ i\Omega_R
e^{i\phi} \cos \nu t&-i\omega_1\end{matrix}\right] \left[\begin{matrix}
C_0(t)\\ C_1(t)\end{matrix}\right],
\end{equation}
where
\begin{equation}\label{eq3.11}
\Omega_R \equiv \frac{|P_{10}| \cl E}\hbar = \text{the Rabi frequency},
\end{equation}
and
\begin{equation}
\phi = \text{phase of } P_{10}\colon \ P_{10} = |P_{10}|e^{i\phi}.
\end{equation}
Equation \eqref{eq3.10} has time-varying coefficients and exact solutions of
such equations are not always easy to come by. Here, we make another round of
approximation, called the \emph{rotating wave approximation}, by first setting
\begin{equation}\label{eq3.17.1}
c_0(t) = C_0(t) e^{i\omega_0t},\quad c_1(t) = C_1(t)e^{i\omega_1t},
\end{equation}
and substituting them into \eqref{eq3.10} and then dropping terms involving
$e^{\pm i(\omega+\nu)t}$, where $\omega\equiv \omega_1-\omega_0$ is called the
atomic transition frequency. (From an experimental point of view, such $e^{\pm
i(\omega+\nu)t}$ terms
represent high frequency oscillations which cannot be observed in laboratory
instrumentation.
Furthermore, integrals involving highly oscillatory terms make
insignificant contributions. In view of the fact that other non-resonant terms
have already been discarded, these $e^{\pm i(\omega+\nu)t}$ terms representing
highly non-resonant contributions should be neglected for consistency.)
Then we obtain
\begin{equation}\label{eq3.12}
\left\{\begin{array}{l}
\dot c_0(t) = \dfrac{i\Omega_R}{2} e^{-i\phi} e^{i(\omega-\nu)t} c_1(t),\\[1ex]
\dot c_1(t) = \dfrac{i\Omega_R}{2} e^{i\phi} e^{-i(\omega-\nu)t} c_0(t),
\end{array}\right.
\end{equation}
which in turn leads to a second order single ODE with constant coefficients
\begin{equation}\label{eq3.13}
\ddot c_0 - i(\omega-\nu) \dot c_0 + \frac{\Omega^2_R}{4} c_0 = 0.
\end{equation}
Solving for $c_0$ and $c_1$ in \eqref{eq3.12} and \eqref{eq3.13},
we obtain the
explicit solution in terms of the initial condition $(c_0(0), c_1(0)$):
\begin{equation}
\left[\begin{matrix} c_0(t)\\ c_1(t)\end{matrix}\right] = \left[\begin{matrix}
e^{i\frac\Delta2 t} \left[\cos\left(\frac\Omega2 t\right) - i\frac\Delta\Omega
\sin\left(\frac\Omega2 t\right)\right]&
e^{i\frac\Delta2 t} \cdot i \cdot \frac{\Omega_R}{\Omega} e^{-i\phi} \sin
\left(\frac\Omega2 t\right)\\
e^{-i\frac\Delta2 t} \cdot i \frac{\Omega_R}\Omega e^{i\phi} \sin
\left(\frac\Omega2 t\right)&
e^{-i\frac\Delta2 t} \left[\cos \left(\frac\Omega2 t\right) +
i\frac\Delta\Omega
\sin \left(\frac\Omega2 t\right)\right]\end{matrix}\right] \!
\left[\begin{matrix} c_0(0)\\ c_1(0)\end{matrix}\right],
\end{equation}
where $\Omega=\sqrt{\Omega_R^2+(\omega-\nu)^2}$.
At resonance,
\begin{equation}
\Delta = \omega-\nu = 0,
\end{equation}
and, thus
\begin{equation}
\frac\Delta\Omega = 0,\quad \Omega_R = \Omega.
\end{equation}
The above yields
\begin{equation}\label{eq3.13b}
\left[\begin{matrix} c_0(t)\\ c_1(t)\end{matrix}\right] = \left[\begin{matrix}
\cos\left(\frac\Omega2 t\right)&ie^{-i\phi} \sin\left(\frac\Omega2 t\right)\\
ie^{i\phi} \sin\left(\frac\Omega2 t\right)&\cos\left(\frac\Omega2 t\right)
\end{matrix}\right] \left[\begin{matrix} c_0(0)\\ c_1(0)\end{matrix}\right].
\end{equation}
(Note that if, instead, we relate $C_j(t)$ to $C_j(0)$ for $j=0,1$, then
because of \eqref{eq3.17.1}, unlike the (special) unitary matrix in
\eqref{eq3.13b} whose determinant is 1, the corresponding unitary matrix will
contain some additional phase factor(s).)
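As a quick numerical check of \eqref{eq3.12} and \eqref{eq3.13b} at resonance (a sketch we add here; the parameter values are arbitrary illustrations), one can integrate the amplitude equations with a fixed-step Runge--Kutta scheme and compare with the closed-form Rabi solution:
\begin{verbatim}
import numpy as np

Omega_R, phi, t, dt = 2.0, 0.7, 1.3, 1.0e-4     # arbitrary illustrative values

def rhs(c):                                      # eq. (3.12) with nu = omega
    c0, c1 = c
    return np.array([0.5j*Omega_R*np.exp(-1j*phi)*c1,
                     0.5j*Omega_R*np.exp(+1j*phi)*c0])

c = np.array([1.0 + 0j, 0.0 + 0j])               # start in the lower state |0>
for _ in range(round(t/dt)):                     # fixed-step RK4 integration
    k1 = rhs(c); k2 = rhs(c + dt/2*k1)
    k3 = rhs(c + dt/2*k2); k4 = rhs(c + dt*k3)
    c = c + dt/6*(k1 + 2*k2 + 2*k3 + k4)

half = 0.5*Omega_R*t                             # at resonance Omega = Omega_R
M = np.array([[np.cos(half), 1j*np.exp(-1j*phi)*np.sin(half)],
              [1j*np.exp(1j*phi)*np.sin(half), np.cos(half)]])
assert np.allclose(c, M @ np.array([1.0, 0.0]))  # matches eq. (3.13b)
\end{verbatim}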
Write
\begin{align}\label{eq3.13a}
\theta &= \frac\Omega2 t\\
\phi' &= \phi+\pi,\nonumber
\end{align}
and re-name $\phi'$ as $\phi$. Then the above matrix becomes
\begin{equation}\label{eq3.14}
U_{\theta,\phi} \equiv \left[\begin{matrix}
\cos\theta&-ie^{-i\phi} \sin\theta\\
-ie^{i\phi}\sin\theta&\cos\theta\end{matrix}\right],
\end{equation}
which is the {\em 1-bit rotation unitary gate}.
Note that $\theta$ in \eqref{eq3.14} depends on $t$.
\begin{rem}\label{rem3.1A}
If we write the Hamiltonian as (\cite[(5.2.44), p.~157]{SZ})
\begin{equation}\label{eq3.13c}
H = -\frac{\hbar \Omega}2
[e^{-i\phi} |1\rangle\langle 0| + e^{i\phi} |0\rangle
\langle 1|],
\end{equation}
then we obtain
\begin{equation}
e^{-\frac{i}\hbar Ht} = \left[\begin{matrix}
\cos\left(\frac\Omega2 t\right)&ie^{-i\phi} \sin \left(\frac\Omega2 t\right)\\
ie^{i\phi} \sin\left(\frac\Omega2 t\right)&\cos \left(\frac\Omega2 t\right)
\end{matrix}\right],
\end{equation}
which is the same as the matrix in \eqref{eq3.13b}. This Hamiltonian in
\eqref{eq3.13c} is
thus called the {\em effective (interaction) Hamiltonian\/} for the 2-level
atom. It
represents the essence of the original Hamiltonian in \eqref{eq3.3} after
simplifying assumptions. From now on for the many physical systems under
discussion, we will simply use the effective Hamiltonians for such systems as
supported by theory and/or experiments, rather than to derive the Hamiltonians
from scratch based on the Schr\"odinger's equations coupled with the
electromagnetic field.
$
\square$
\end{rem}
\subsection{Quantization of the Electromagnetic Field}\label{sec3.2}
The quantization of the electromagnetic radiation field as simple harmonic
oscillators is
important in quantum optics. This fundamental contribution is due to Dirac.
Here
we provide a motivation by following
the approach of \cite[Chap.~1, pp.~3--4]{SZ}.
We begin with the classical description of the field based on
Maxwell's equations. These equations relate the electric and
magnetic field
vectors $\pmb{E}$ and $\pmb{B}$, respectively.
Maxwell's equations lead to the following wave equation for the
electric field:
\begin{equation}\label{3.2.4}
\nabla^2 \pmb{E} - \frac1{c^2} \frac{\partial^2 \pmb{E}}{\partial t^2}
= 0,
\end{equation}
along with a corresponding wave equation for the magnetic field.
The electric field has the spatial dependence
appropriate for a cavity resonator of length $L$. We take the
electric field to be linearly polarized in the $x$-direction and
expand in the
normal modes (so-called in the sense that they constitute orthogonal
coordinates for oscillations) of the cavity
\begin{equation}\label{3.2.5}
E_x(z,t) = \sum_j A_j q_j (t) \sin (k_j z),
\end{equation}
where $q_j$ is the normal mode amplitude with the dimension of a length,
$k_j = j \pi /L$, with $j = 1,2,3, \ldots$, and
\begin{equation}\label{3.2.6}
A_j = \left(\frac{2\nu^2_j m_j}{V \epsilon_0}\right)^{1/2},
\end{equation}
with $\nu_j = j \pi c/L$ being the cavity eigenfrequency, $V = LA$ ($A$ is
the transverse area of the optical resonator) is the volume of the resonator
and $m_j$ is a constant with the dimension of mass. The constant $m_j$ has
been included only to establish the analogy between the dynamical problem of a
single mode of the electromagnetic field and that of the simple harmonic
oscillator. The equivalent mechanical oscillator will have a mass $m_j$, and
a Cartesian coordinate $q_j$. The nonvanishing component of the magnetic
field $B_y$ in the cavity
is obtained from Eq.\ (\ref{3.2.5}):
\begin{equation}\label{3.2.7}
B_y = \sum_j A_j \left( \frac{\dot{q}_j \epsilon_0}{k_j} \right) \cos (k_j
z).
\end{equation}
The classical Hamiltonian for the field is
\begin{equation}\label{3.2.8}
H_{cl} = \frac{1}{2} \int_V d\tau(\epsilon_0 E^2_x + \frac{1}{\mu_0}
B^2_y),
\end{equation}
where the integration is over the volume of the cavity. It follows, on
substituting from (\ref{3.2.5}) and (\ref{3.2.7}) for $E_x$ and $B_y$,
respectively,
in (\ref{3.2.8}), that
\begin{align}
H_{cl}
&= \frac{1}{2} \sum_j (m_j \nu_j^2 q_j^2 + m_j \dot q_j^2) \nonumber \\
&= \frac{1}{2} \sum_j \left(m_j \nu_j^2 q_j^2 + \frac{p_j^2}{ m_j} \right),
\label{3.2.9}
\end{align}
where $p_j = m_j \dot q_j$ is the canonical momentum of the $j$th mode.
Equation (\ref{3.2.9}) expresses the Hamiltonian of the radiation field as a
sum of independent oscillator energies.
This suggests correctly that each mode of the field is
dynamically equivalent to a mechanical harmonic oscillator.
The present dynamical problem can be quantized by identifying $q_j$ and
$p_j$ as operators which obey the commutation relations
\begin{align}
[q_j, p_{j^\prime}]
&= i \hbar \delta_{jj^\prime} ,\nonumber\\
\label{3.2.10b}
[q_j, q_{j^\prime}]
&= [p_j, p_{j^\prime}] = 0.
\end{align}
In the following we shall restrict ourselves to a single mode of the radiation
field modeled by a simple harmonic oscillator.
The corresponding Hamiltonian is therefore given by
\begin{equation}\label{eq3.15}
H = \frac1{2m} p^2+ \frac12 m\nu^2 x^2,
\end{equation}
where
\begin{align}
\nonumber
p &= \text{the particle momentum operator } = \frac\hbar{i} \frac{d}{dx},\\
\nonumber
m &= \text{a constant with the dimensions of mass,}\\
\nonumber
x &= \text{the position operator (corresponding to the $q$ variable above),}\\
\nonumber
\nu &= \text{the natural (circular) frequency of the oscillator}\\
&\phantom{=}~\text{~~(a parameter related to the potential depth).}
\end{align}
So the eigenstates $\psi$ of the Schr\"odinger equation satisfy
\begin{equation}\label{eq3.16}
H\psi = -\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} + \frac12 m\nu^2 x^2 \psi =
E\psi.
\end{equation}
Let us make a change of variables
\begin{equation}\label{eq3.16'}
y = \sqrt{\frac{m\nu}{\hbar}}\ x,\quad \lambda = \frac{E}{\hbar \nu},
\end{equation}
then \eqref{eq3.16} becomes
\begin{equation}\label{eq3.17}
\frac12 \left[\frac{d^2\psi}{dy^2} - y^2\psi\right] = -\lambda\psi.
\end{equation}
We now define two operators
\begin{equation}
a = \frac1{\sqrt 2} (d/dy + y),\quad a^{\dag} = -\frac1{\sqrt 2} (d/dy-y).
\end{equation}
Note that $a^{\dag}$ is the Hermitian adjoint operator of $a$ with respect to
the $L^2(\bb R)$
inner product. Then it is easy to check that for any sufficiently smooth
function $\phi$ on $\bb R$,
\begin{equation}
(aa^{\dag}-a^{\dag}a)\phi = \phi,
\end{equation}
i.e.,
\begin{equation}\label{eq3.18}
\text{the commutator of $a$ with } a^{\dag} =[a,a^{\dag}] =\pmb{1},
\end{equation}
where $\pmb{1}$ denotes the identity operator but is often simply written as
(the scalar) 1 in the
physics literature. It is now possible to verify the following:
\noindent (i)\ Let $\widetilde H \equiv - \frac12\left(\frac{d^2}{dy^2} -
y^2\right)$.
Then
\begin{equation}
\widetilde H = a^{\dag}a + \frac12 \pmb{1} = aa^{\dag} - \frac12 \pmb{1}.
\end{equation}
(ii)\ If $\psi_\lambda$ is an eigenstate of $\widetilde H$ satisfying
\begin{equation}
\widetilde H \psi_\lambda = \lambda\psi_\lambda\qquad \text{(i.e.,
\eqref{eq3.17}),}
\end{equation}
then so are $a\psi_\lambda$ and $a^{\dag}\psi_\lambda$:
\begin{align*}
\widetilde H(a\psi_\lambda) &= \left(\lambda - 1\right) a\psi_\lambda,\\
\widetilde H(a^{\dag}\psi_\lambda) &= \left(\lambda + 1\right) a^{\dag}\psi_\lambda.
\end{align*}
(iii)\ If $\lambda_0$ is the lowest eigenvalue (or energy level)
of $\widetilde H$, then
\begin{equation}\label{eq3.18a}
\lambda_0 = \frac12.
\end{equation}
(iv)\ All of the eigenvalues of $\widetilde H$ are given by
\begin{equation}\label{eq3.18b}
\lambda_n = \left(n + \frac12\right),\qquad n=0,1,2,\ldots~.
\end{equation}
(v)\ Denote the $n$-th eigenstate $\psi_n$ by $|n\rangle$, for
$n=0,1,2,\ldots$~.
Then
\begin{equation}\label{eq3.19}
|n\rangle = \frac{(a^{\dag})^n}{(n!)^{1/2}} |0\rangle.
\end{equation}
Furthermore,
\begin{align}
\nonumber
a^{\dag}a|n\rangle &= n|n\rangle,\qquad n=0,1,2,\ldots;\\
\nonumber
a^{\dag}|n\rangle &= \sqrt{n+1}\ |n+1\rangle,\qquad n=0,1,2,\ldots;\\
\label{eq3.21}
a|n\rangle &= \sqrt n\ |n-1\rangle,\qquad n=1,2,\ldots;\quad a|0\rangle = 0.
\end{align}
(vi) The completeness relation is
\begin{equation}
\pmb{1} = \sum^\infty_{n=0} |n\rangle\langle n|.
\end{equation}
(vii) The wave functions are
\begin{equation}
\psi_j(y) = N_jH_j(y) e^{-y^2/2},\qquad y
= \left(\frac{m\nu}\hbar\right)^{1/2}
x,
\end{equation}
$j=0,1,2,\ldots$, where $H_j(y)$ are the Hermite polynomials of degree $j$,
and
$N_j$ is a normalization factor.
\begin{rem}\label{rem3.1}
\begin{itemize}
\item[(i)] The energy levels $\left(n+\frac12\right) \hbar\nu$ of $H$ (after
converting back to the $x$-coordinate from the $y$-coordinate in
\eqref{eq3.18b})
may be interpreted as the presence of $n$ quanta or photons of
energy $\hbar \nu$. The eigenstates $|n\rangle$ are called the Fock states or
the photon number states.
\item[(ii)] The energy level $E=\frac12 \hbar\nu$ (from \eqref{eq3.16'} and
\eqref{eq3.18a}) is called the ground state energy.
\item[(iii)] Because of \eqref{eq3.21}, we call $a$ and
$a^{\dag}$, respectively, the annihilation and creation
operators.$
\square$
\end{itemize}
\end{rem}
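The relations \eqref{eq3.18}--\eqref{eq3.21} are easy to verify numerically with the annihilation and creation operators truncated to the lowest $N$ Fock states (an illustrative sketch that we add here; the failure of the commutation relation in the last row and column is an artifact of the truncation):
\begin{verbatim}
import numpy as np

N = 8                                         # keep the Fock states |0>,...,|7>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # a|n> = sqrt(n)|n-1>, eq. (3.21)
adag = a.conj().T                             # creation operator a^dagger

assert np.allclose(adag @ a, np.diag(np.arange(N)))   # a^dagger a |n> = n |n>
comm = a @ adag - adag @ a                    # [a, a^dagger]
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))     # = 1 away from the cutoff
\end{verbatim}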
\subsection{Cavity QED for the Quantum Phase Gate}\label{sec3.3}
Cavity QED (quantum electrodynamics) is a system that enables the coupling
of single atoms to only a few photons in a resonant cavity. It is realized by
applying a large laser electric field in a narrow band of frequencies
within a small Fabry--Perot cavity consisting of highly
reflective mirrors. See a primitive drawing in Fig.~\ref{fig3.2}.
\begin{figure}
\caption{}
\label{fig3.2}
\end{figure}
A 3-level atom is injected into the cavity. An electron in the atom has three
levels, $|\alpha\rangle$, $|\beta\rangle$ and $|\gamma\rangle$, as shown in
Fig.~\ref{fig3.3}.
Actually, the state $|\alpha\rangle$ will be used only as an {\em auxiliary\/}
level because later we will define the states $|\beta\rangle$ and
$|\gamma\rangle$ as the first qubit, $|1\rangle$ and $|0\rangle$, respectively.
Once the atom enters
the cavity, the strong electromagnetic field of the privileged cavity mode
causes transitions of
the electron between $|\alpha\rangle$ and $|\beta\rangle$,
and a photon or photons are released or absorbed in this process.
Of the photon
states $|0\rangle$, $|1\rangle$, $|2\rangle,\ldots$ inside the cavity
only $|0\rangle$ and $|1\rangle$ will be important
(which physically mean 0 or 1 photon inside the cavity), and they define
the second qubit.
The creation and annihilation operators $a^{\dag}$ and $a$ act on the photon
states
$|n\rangle$ for $n=0,1,2,\ldots,\infty$, according to \eqref{eq3.21}.
\begin{figure}
\caption{}
\label{fig3.3}
\end{figure}
The Hamiltonian for the atom-cavity field interaction is given by
\begin{equation}\label{eq3.22}
H = H_0 + H_1 + H_2,
\end{equation}
where
\begin{align}
\nonumber
H_0 &= \frac{\hbar \omega_{\alpha\beta}}{2} (|\alpha\rangle \langle\alpha| -
|\beta\rangle\langle\beta|) = \text{ the atom's Hamiltonian,}\\
\label{eq3.24}
H_1 &= \hbar \nu a^{\dag}a
= \text{ the Hamiltonian of the laser electric field of
the cavity,}
\end{align}
and
\begin{align}
\label{eq3.25}
H_2 = \hbar g(|\alpha\rangle \langle \beta|a + |\beta\rangle\langle
\alpha|a^{\dag})
=&\text{ the interaction Hamiltonian of}\\[-0.25\baselineskip]
&\text{ the laser field with the atom; with $g>0$.}\nonumber
\end{align}
See the derivation of \eqref{eq3.22} in \cite{Ra}. Note that the operators $a$
and $a^{\dag}$ appearing in \eqref{eq3.22} operate only on the second bit,
while the
rest of the operators operate only on the first bit. Also, operator
$|\alpha\rangle \langle\alpha| - |\beta\rangle \langle\beta|$
can be written as
$\sigma_z$; see equation \eqref{eq5.1a} later, because its matrix
representation with
respect to the ordered basis $\{|\alpha\rangle, |\beta\rangle\}$ is exactly
$\sigma_z$.
\begin{lem}\label{lem3.1}
Let the underlying Hilbert space be
\begin{equation}\label{eq3.26}
\cl H = \text{span}\{|\alpha,n\rangle, |\beta,n\rangle, |\gamma,n\rangle \mid
n=0,1,2,\ldots\}.
\end{equation}
Then the Hamiltonian operator \eqref{eq3.22} has a family of 2-dimensional
invariant subspaces
\begin{equation}\label{eq3.27}
V_n = \text{span}\{|\alpha,n-1\rangle, |\beta,n\rangle\},\qquad n=1,2,\ldots~.
\end{equation}
Indeed, with respect to the ordered basis in \eqref{eq3.27}, the matrix
representation of $H$ on $V_n$ is given by
\begin{equation}\label{eq3.27a}
H|_{V_n} = \hbar \left[\begin{matrix}
\frac{1}{2}\omega_{\alpha\beta} + \nu(n-1)&g\sqrt n\\
g\sqrt n&-\frac{1}{2}\omega_{\alpha\beta} + \nu n\end{matrix}\right],\qquad
n=1,2,3,\ldots~.
\end{equation}
\end{lem}
\begin{proof}
Even though the verification is straightforward, let us provide some details
for
the ease of future reference. We have, from \eqref{eq3.21}, and
\eqref{eq3.22}--\eqref{eq3.25},
\begin{align}
H|\alpha,n-1\rangle &= H_0|\alpha,n-1\rangle + H_1|\alpha,n-1\rangle +
H_2|\alpha,n-1\rangle\noindentonumber\\
&= \left[\frac{\hbar\omega_{\alpha\beta}}{2} |\alpha,n-1\rangle\right]
+ [\hbar
\nu(n-1)|\alpha,n-1\rangle] + [\hbar g\sqrt n|\beta,n\rangle]\nonumber\\
\label{eq3.27b}
&= \hbar \left[\frac{\omega_{\alpha\beta}}2
+\nu(n-1)\right] |\alpha,n-1\rangle
+ \hbar [g\sqrt n] |\beta,n\rangle.
\end{align}
Similarly,
\begin{equation}\label{eq3.27c}
H|\beta,n\rangle= \hbar \left[-\frac{\omega_{\alpha\beta}}2 + \nu n\right]
|\beta,n\rangle + \hbar [g\sqrt n] |\alpha, n-1\rangle.
\end{equation}
Therefore, we obtain \eqref{eq3.27a}.
\end{proof}
\begin{lem}\label{lem3.2}
On the invariant subspace $V_n$ in \eqref{eq3.27},
the Hamiltonian operator $H$
has two eigenstates
\begin{equation}\label{eq3.28}
\left\{\begin{array}{l}
|+\rangle_n \equiv \cos \theta_n|\alpha,n-1\rangle -
\sin\theta_n|\beta,n\rangle,\\[1ex]
|-\rangle_n \equiv \sin\theta_n|\alpha,n-1\rangle +
\cos\theta_n|\beta,n\rangle,
\end{array}\right.
\end{equation}
where
\begin{align}
\sin\theta_n &\equiv \frac{\Omega_n-\Delta}D,\quad \cos \theta_n \equiv
\frac{2g\sqrt n}D,\nonumber\\
D &\equiv [(\Omega_n-\Delta)^2 + 4g^2n]^{1/2},\nonumber\\
\Omega_n &\equiv (\Delta^2 +4g^2n)^{1/2},\nonumber\\
\label{eq3.28a}
\Delta &\equiv \nu -\omega_{\alpha\beta} = \text{ the detuning frequency,}
\end{align}
with eigenvalues (i.e., energy levels)
\begin{equation}\label{eq3.29}
E_{\pm(n)} = \hbar\left[n\nu + \frac12(-\nu \mp \Omega_n)\right],
\end{equation}
such that
\begin{equation}\label{eq3.30}
H|+\rangle_n = E_{+(n)} |+\rangle_n,\quad H|-\rangle_n = E_{-(n)}|-\rangle_n.
\end{equation}
\end{lem}
\begin{proof}
These eigenvalues and eigenvectors can be computed in a straightforward way,
using the $2\times 2$ matrix representation of $H$ on $V_n$, \eqref{eq3.27a},
with respect to the ordered
basis of $V_n$ chosen as in \eqref{eq3.27}.
\end{proof}
\begin{rem}\label{rem3.1a}
The states $|+\rangle_n$ and $|-\rangle_n$ in \eqref{eq3.28} are called the
{\em dressed states\/} in the sense that atoms are dressed by electromagnetic
fields.
These two states are related to the splitting of the spectral
lines due to the electric field.
$
\square$
\end{rem}
Now, let us assume {\em large detuning}:
\begin{equation}\label{eq3.31}
|\Delta| \gg 2g\sqrt n~.
\end{equation}
Also, $|\Delta|$ remains small in comparison to the
transition frequency so that approximate resonance is maintained.
Accordingly, this is satisfied only for small values of $n$.
Then
\begin{align}\nonumber
\Omega_n &= (\Delta^2+4g^2n)^{1/2} = \Delta\left(1 +
\frac{4g^2n}{\Delta^2}\right)^{1/2} \approx \Delta + \frac{2g^2n}\Delta;\\
\sin\theta_n &= (\Omega_n-\Delta)/D\approx 0;\quad \cos\theta_n = 2g\sqrt
n/D\approx 1.
\end{align}
Thus, from \eqref{eq3.28},
\begin{equation}\label{eq3.32}
|+\rangle_n \approx |\alpha,n-1\rangle, \quad |-\rangle_n\approx
|\beta,n\rangle.
\end{equation}
\begin{rem}\label{rem3.2}
By the assumption \eqref{eq3.31} of large detuning, from \eqref{eq3.29} using
\eqref{eq3.28a} and \eqref{eq3.31}, we now have
\begin{align}
E_{+(n)} &= \hbar\left[n\nu + \frac12 (-\nu-\Omega_n)\right]\nonumber\\
&\approx \hbar\left[\left(n-\frac12\right)\nu - \frac12 \left(\Delta +
\frac{2g^2n}\Delta\right)\right]\nonumber\\
&= \hbar\left[\left(n-\frac12\right)\nu - \frac12 (\nu-\omega_{\alpha\beta}) -
\frac{g^2n}\Delta\right]\nonumber\\
\label{eq3.32a}
&= \hbar \left[\frac{\omega_{\alpha\beta}}2 + \nu(n-1)\right] - \frac{\hbar
g^2n}\Delta.
\end{align}
Similarly, we have
\begin{equation}\label{eq3.32b}
E_{-(n)} \approx \hbar \left[-\frac{\omega_{\alpha\beta}}2 +\nu n\right] +
\frac{\hbar g^2n}\Delta,
\end{equation}
such that
\begin{align}\nonumber
H|+\rangle_n &\approx H|\alpha,n-1\rangle = E_{+(n)}|\alpha,n-1\rangle,\\
H|-\rangle_n &\approx H|\beta,n\rangle = E_{-(n)} |\beta,n\rangle,
\end{align}
where $E_{\pm(n)}$ are now given by \eqref{eq3.32a} and \eqref{eq3.32b}.
Thus, we see that the assumption \eqref{eq3.31} of large detuning causes the
off-diagonal terms in the matrix \eqref{eq3.27a} (which are contributed by the
{\em interaction Hamiltonian\/} $H_2$ in \eqref{eq3.25}) to disappear. The
de facto, or {\em effective interaction Hamiltonian}, has now become
\begin{equation}\label{eq3.32c}
\widetilde H_2
= -\frac{\hbar g^2}\Delta (aa^{\dag}|\alpha\rangle\langle\alpha|
-a^{\dag}a|\beta\rangle\langle\beta|),
\end{equation}
such that $\widetilde H \equiv H_0 + H_1 + \widetilde H_2$ now admits a
diagonal
matrix representation
\begin{equation}\label{eq3.32d}
\widetilde H|_{V_n} = \left[\begin{matrix} E_{+(n)}&0\\ 0&E_{-(n)}\end{matrix}
\right]
\end{equation}
with respect to the ordered basis $\{|\alpha,n-1\rangle, |\beta,n\rangle\}$ of
$V_n$, with $E_{\pm(n)}$ here given by \eqref{eq3.32a} and \eqref{eq3.32b}.
$
\square$
\end{rem}
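For a quick numerical illustration of Lemma~\ref{lem3.2} and Remark~\ref{rem3.2} (our sketch; we set $\hbar=1$ and choose arbitrary parameter values satisfying $|\Delta|\gg 2g\sqrt n$), one can diagonalize the $2\times 2$ block \eqref{eq3.27a} and compare the exact energies \eqref{eq3.29} with the large-detuning approximations \eqref{eq3.32a}--\eqref{eq3.32b}:
\begin{verbatim}
import numpy as np

g, nu, n = 0.01, 1.0, 1
omega_ab = 0.8                   # so Delta = nu - omega_ab = 0.2 >> 2 g sqrt(n)
Delta = nu - omega_ab

H_Vn = np.array([[0.5*omega_ab + nu*(n - 1), g*np.sqrt(n)],
                 [g*np.sqrt(n), -0.5*omega_ab + nu*n]])   # eq. (3.27a), hbar = 1
E_exact = np.sort(np.linalg.eigvalsh(H_Vn))               # [E_{+(n)}, E_{-(n)}]

E_plus_approx = 0.5*omega_ab + nu*(n - 1) - g**2*n/Delta  # eq. (3.32a)
E_minus_approx = -0.5*omega_ab + nu*n + g**2*n/Delta      # eq. (3.32b)
assert np.allclose(E_exact, [E_plus_approx, E_minus_approx])
\end{verbatim}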
We now define the first qubit by
\begin{equation}\label{eq3.33}
|\beta\rangle = |1\rangle,\quad |\gamma\rangle = |0\rangle,
\end{equation}
where $|\gamma\rangle$ is the state in Fig.~\ref{fig3.3}
that is {\em detached\/} (or
{\em off-resonance\/}) from the Hamiltonian $H$, i.e.,
\begin{equation}\label{eq3.34}
H|\gamma\rangle = 0.
\end{equation}
For the second qubit, we choose $n=1$ for $V_n$ in \eqref{eq3.27} and then see
that the
second bit is only 0 or 1, i.e., the cavity can have 0 or 1 photon. The cavity
field is nearly resonant with the $|\alpha\rangle \leftrightarrow
|\beta\rangle$
transition: $\nu = \omega_{\alpha\beta} + \Delta$.
\begin{thm}\label{thm3.3}
Let the effective total Hamiltonian be
\begin{align}
\widetilde H = H_0 + H_1 + \widetilde H_2= \hbar&
\left[\frac{\omega_{\alpha\beta}}2 (|\alpha\rangle \langle\alpha| -
|\beta\rangle \langle\beta|) + \nu a^{\dag}a\right.\nonumber\\
\label{eq3.35}
&\enskip \left. - \frac{g^2}\Delta (aa^{\dag}
|\alpha\rangle\langle\alpha| - a^{\dag}a|\beta\rangle \langle\beta|)\right].
\end{align}
Then
\begin{equation}\label{eq3.36}
\widetilde V \equiv \text{span}\{|0,0\rangle, |0,1\rangle, |1,0\rangle,
|1,1\rangle\}
\end{equation}
is an invariant subspace of $\widetilde H$ in the underlying Hilbert space
\eqref{eq3.26}, such that
\begin{equation}\label{eq3.37}
\left\{\begin{array}{l}
\widetilde H|0,0\rangle = 0,\quad \widetilde H |0,1\rangle = 0,\quad
\widetilde
H|1,0\rangle = 0\\ \displaystyle
\widetilde H|1,1\rangle = E_{-(1)} |1,1\rangle, \text{ where } E_{-(1)}
= \hbar
\left(-\frac{\omega_{\alpha\beta}}2 + \nu + \frac{g^2}\Delta\right).
\end{array}\right.
\end{equation}
Consequently, with respect to the ordered basis of $\widetilde V$ in
\eqref{eq3.36}, $\widetilde H$ admits a diagonal matrix representation
\begin{equation}\label{eq3.38}
\widetilde H|_{\widetilde V} = \left[\begin{matrix}
0&&&\bigcirc\\
&0\\
&&0\\
\bigcirc&&&E_{-(1)}\end{matrix}\right],
\end{equation}
with the evolution operator
\begin{equation}\label{eq3.39}
e^{-i\widetilde Ht/\hbar}|_{\widetilde V} = \left[\begin{matrix}
1&&&\bigcirc\\ &1\\ &&1\\
\bigcirc&&&\exp\left(-i\frac{E_{-(1)}t}\hbar \right)\end{matrix}\right].
\end{equation}
\end{thm}
\begin{proof}
We have
\begin{equation}
\widetilde H|0,0\rangle = \widetilde H|\gamma,0\rangle = 0\quad
\text{and}\quad
\widetilde H|0,1\rangle = \widetilde H|\gamma,1\rangle = 0, \text{ by
\eqref{eq3.34}.}
\end{equation}
Also,
\begin{equation}
\widetilde H|1,0\rangle = \widetilde H|\beta,0\rangle = 0, \text{ by
\eqref{eq3.32d},}
\end{equation}
where $V_n$ is chosen to be $V_0$ by setting $n=0$ therein. But
\begin{equation}
\widetilde H|1,1\rangle = \widetilde H|\beta,1\rangle = E_{-(1)} |1,1\rangle
\end{equation}
by \eqref{eq3.32d} where $V_n$ is chosen to be $V_1$ by setting $n=1$ therein.
So both \eqref{eq3.38} and \eqref{eq3.39} follow.
\end{proof}
The unitary operator in \eqref{eq3.39} gives us the quantum phase gate
\begin{equation}\label{eq3.40}
Q_\eta = \left[\begin{matrix}
1&&&\bigcirc\\ &1\\ &&1\\ \bigcirc&&&e^{i\eta}\end{matrix}\right]
\end{equation}
with
\begin{equation}\label{eq3.41}
\eta \equiv - \frac{E_{-(1)}}\hbar t.
\end{equation}
We can now invoke Theorem \ref{thm2.2} in Section \ref{sec2} to conclude the
following.
\begin{thm}\label{thm3.4}
The collection of 1-bit gates $U_{\theta,\phi}$ in \eqref{eq3.14} from the
2-level atoms and the 2-bit gates $Q_\eta$ in \eqref{eq3.40} from cavity QED
is universal for quantum computation.$
\square$
\end{thm}
\section{Ion Traps}\label{sec4}
Ion traps utilize charged atoms with internal electronic states to represent
qubits. Ion traps isolate and confine small numbers of charged atoms by means
of electromagnetic fields. The ions are then cooled using laser beams until
their kinetic energy is much smaller than the inter-ionic potential energy.
Under these conditions, the ions form a regular array in the trap. Laser beams
can be tuned to excite the electronic states of particular ions and to couple
these internal states to the center-of-mass (CM) vibrational motion of the ion
array. This coupling provides entanglement for quantum computation.
\begin{figure}
\caption{\label{fig4.1}}
\end{figure}
A Paul, or radio-frequency, ion trap confines ions by the time average of an
oscillating force $F(\pmb{r},t)$, where $\pmb{r} = (x,y,z)$, arising from a
non-uniform electric field $E(\pmb{r}) \cos(\Omega t)$ acting on an ion of
charge $e$ and mass $m$.
In one dimension, $\langle F(X,t)\rangle = -\partial\langle
U(X,t)\rangle/\partial X$, where $\langle U(X,t)\rangle$ is an effective
time-averaged potential energy, and
$U(X,t) = e^2E^2(X)\cos^2(\Omega t)/(m\Omega^2)$. The time average
is denoted by $\langle\,\cdot\,\rangle$ and is defined here by $\langle
y(t)\rangle = \frac\Omega{2\pi} \int^{2\pi/\Omega}_0 y(t)\,dt$ for $y(t)$
periodic with period $2\pi/\Omega$. The coordinate $X$ describes the
(relatively slow) motion of a
guiding center, about which the ions execute small oscillations at frequency
$\Omega$. The trap is constructed so that the guiding center motion in two
dimensions (say $x$ and $y$) is harmonic, with frequencies $\nu_x$ and $\nu_y$
which are much smaller than the frequency $\Omega$. The trap electrodes may
confine
the ions in a circle or other closed 2-dimensional geometry, or a linear trap
may be produced by using a dc potential to confine the ions in the third
spatial dimension $z$; see Fig.~\ref{fig4.1}.
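In particular, since $\langle\cos^2(\Omega t)\rangle = 1/2$, the formulas
above give $\langle U(X,t)\rangle = e^2E^2(X)/(2m\Omega^2)$, so the
time-averaged force $\langle F(X,t)\rangle$ pushes an ion toward regions of
weak field, which is what provides the radial confinement.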
Once confined in the trap, the ions may be cooled using laser light, so that
they condense into a linear array along the trap axis. Each ion oscillates
with small amplitude about the zero of the time-averaged radial potential.
The electrostatic repulsion of the ions along the linear axis of the trap
is balanced by a spatially varying dc potential applied for axial confinement,
so that below millikelvin $(10^{-3}\,\mathrm{K})$ temperatures,
the ions in the array are
separated by several micrometers $(10^{-6}\,\mathrm{m})$. Under these conditions, the ion
confinement time is very long, many hours or days, and each ion can be
separately excited by a focused laser beam, so that long-lived internal levels
of each ion can serve as the
two states of a qubit (Fig.~\ref{fig4.2}).
Regarding the axial kinetic motion, the ions
oscillate axially as a group, so collective oscillations such as the CM
oscillations couple the individual ions, or qubits, permitting the
implementation of multi-qubit gates based on quantum entanglement of the CM
motion with each laser-excited qubit state. There are many CM modes of
oscillation, depending on the oscillations of individual ions or groups of
ions,
each with a characteristic frequency $\omega_{\text{CM}}$.
\begin{figure}
\caption{\label{fig4.2}}
\end{figure}
A prototype scheme was proposed by Cirac and Zoller \cite{CZ}, which is based
on a confined ion system as outlined above. Assume the quantized axial CM
normal mode oscillation has frequency $\omega_{\text{CM}}$ and discrete
levels $n\hbar \omega_{\text{CM}}$ with quantum number $n=0,1,2,\ldots$ (the
ground state zero-point energy $\hbar \omega_{\text{CM}}/2$ in Remark
\ref{rem3.1}(ii) is disregarded); see Fig.~\ref{fig4.3}(b).
A quantum of
oscillation $\hbar\omega_{\text{CM}}$ is called a {\em phonon}. Each ion is
individually excited by focused standing wave laser beams. Two internal
hyperfine structure states of the ion (i.e., states coupling the electrons and
the nucleus of the ion) in the ground term (or lowest electronic set of
states),
separated by the frequency $\omega_0$, can be identified as the qubit states
$|0\rangle$ and $|1\rangle$. Alternatively, a ground level and a metastable
excited level might be used.
By thus coupling only these selected states of the ion by electromagnetic
fields we are
led to a {\em 2-level atom\/}, in analogy with Section \ref{sec3}. These
states can be coupled using electric dipole transitions to higher-lying
states,
and there is also an internal auxiliary level, denoted $|\alpha\rangle$, which
can be coupled to the qubit states; see Fig.~\ref{fig4.2}.
Consequently a linear $N$-ion
system can be specified in terms of quantum numbers by the wavefunction
$|\psi\rangle_N = |j_1\rangle |j_2\rangle\cdots |j_N\rangle
|k\rangle_{\text{CM}}$,
where $j_1,j_2,\ldots, j_N\in \{0,1\}$ are the states of the individual ions and
$k=0,1,2,\ldots,
N-1$ describes the populated quantum state of the CM motion. The system is
initialized with $k=0$ when the ions are very cold, and with all qubit states
$j_1,j_2,\ldots, j_N$ set to 0 by inducing appropriate laser transitions in
the individual ions.
\begin{figure}
\caption{\label{fig4.3}}
\end{figure}
Fig.~\ref{fig4.3}(b) gives a diagram of the harmonic potential well of the axial ion
center-of-mass vibration, showing the lowest energy levels. The six ions in
Fig.~\ref{fig4.3}(a) would populate one of these energy levels,
acting as a single entity described by the CM coordinate.
The acting laser beam tuned to frequency $\omega_0$ causes a dipole
transition coupling the two qubit levels $|0\rangle$ and $|1\rangle$ in
Fig.~\ref{fig4.2}.
Using the same rotating wave approximation as in Section \ref{sec3}, we obtain
the 1-bit rotation matrix $U_{\theta,\phi}$ as in \eqref{eq3.14}.
Therefore, according to the theory of universal quantum computing (Theorem
\ref{thm2.1})
in Section \ref{sec2}, now all we need to show is an entanglement operation
between two qubits.
We assume that two ions corresponding to the two qubits are coupled
in the {\em Lamb--Dicke regime}, i.e.,
the amplitude of the ion oscillation in the ion trap
potential is small compared to the wavelength of the incident laser beam. This
is expressed in terms of the Lamb--Dicke criterion:
\begin{equation}\label{eq4.1}
\eta(\Omega/2\omega_{\text{CM}}) \ll 1
\end{equation}
where
\begin{equation}
\begin{array}{rl }
\eta \equiv k z_0\,:& \text{the Lamb--Dicke parameter,}\\
z_0\,:& \text{a ``zero-point'' oscillation amplitude,}\\
& \text{at the lowest energy of CM vibration,}\\
k = 2\pi/\lambda \,:& \text{the wave number of the laser radiation,}\\
\omega_{\text{CM}} \,:& \text{the frequency of the CM motion referred to
earlier,}\\
\Omega\,:&\text{the Rabi frequency (cf.\ \eqref{eq3.11}).}
\end{array}
\end{equation}
\begin{figure}
\caption{\label{fig4.4}}
\end{figure}
When \eqref{eq4.1} is satisfied, the ``sideband'' frequencies $\omega_0\pm
\omega_{\text{CM}}$ arising from the CM oscillation in the laser standing wave
are {\em resolved\/} (see Fig.~\ref{fig4.4} and the caption therein),
and can be
separately excited by the laser beam; see Fig.~\ref{fig4.3}(a).
Then the interaction
Hamiltonian coupling the internal qubit states of each ion $j$, $j=1,\ldots,
N$, to the CM motion is similar to \eqref{eq3.25}, with coupling constant $g =
\frac{\eta\Omega}{2\sqrt N}$:
\begin{equation}\label{eq4.2}
H_j = \frac{\hbar\eta\Omega}{2\sqrt N}
[|1\rangle_j \langle 0|_j e^{-i\phi} a +
|0\rangle_j \langle 1|_j e^{i\phi} a^{\dag}],\qquad j=1,\ldots, N,
\end{equation}
where $a$ and $a^{\dag}$ are, respectively, the annihilation and creation
operators
(see Subsection \ref{sec3.2}) for the phonons defined by
\begin{align}\nonumber
a|n\rangle_{\text{CM}} &= \sqrt n\ |n-1\rangle_{\text{CM}}, \text{ for }
n=1,2,\ldots;\quad a|0\rangle_{\text{CM}} = 0|0\rangle_{\text{CM}} = 0,\\
a^{\dag}|n\rangle_{\text{CM}} &= \sqrt{n+1}\ |n+1\rangle_{\text{CM}}, \text{
for }
n=0,1,2,\ldots; \text{ cf.\ \eqref{eq3.21},}
\end{align}
and
\begin{align}\nonumber
\phi = &\text{ the angle in the spherical coordinate system $(r,\theta,\phi)$
with respect}\\[-0.25\baselineskip]
&\text{ to the $x$-axis of the laboratory coordinate frame.}
\end{align}
The derivation of \eqref{eq4.2} may be found in \cite[Section 5.1]{Hu}
where the
authors give an argument based on normal modes of phonons. At low enough
temperature (i.e., in the Lamb--Dicke regime), only the lowest-order mode, the
CM mode,
plays a role, and this leads to the Hamiltonian \eqref{eq4.2} used by Cirac and
Zoller in \cite{CZ}.
\begin{rem}\label{rem4.1}
It can be shown that the Rabi frequency $\Omega_R$ for the internal qubit
transition coupled by $a$ and $a^{\dag}$ to the vibration is
$\eta\Omega/(2\sqrt
N)$,
much smaller than that, $\Omega$, in (\ref{eq3.11}) for a single qubit
(rotation)
operation.$
\square$
\end{rem}
\begin{lem}\label{lem4.1}
Let $H_j$ be given by \eqref{eq4.2} for $j=1,2,\ldots, N$. Then $H_j$ has a
family of invariant two-dimensional subspaces
\begin{equation}
V_{j,k} \equiv \text{ span}\{|0\rangle_j |k+1\rangle_{\text{CM}}, |1\rangle_j
|k\rangle_{\text{CM}}\}, \quad k=0,1,2,\ldots,
\end{equation}
in the Hilbert space
\begin{equation}
V_j \equiv \text{ span}\{|0\rangle_j |k\rangle_{\text{CM}},
|1\rangle_j|k\rangle_{\text{CM}}\mid k=0,1,2,\dots\}.
\end{equation}
On the orthogonal complement of all the $V_{j,k}$'s, i.e., $\widetilde V_j =
\left(\text{span } \bigcup\limits^\infty_{k=0} V_{j,k}\right)^\bot \subset
V_j$,
the action of $H_j$ is $0\colon \ H_j |\psi\rangle = 0$ for all $|\psi\rangle\in
\widetilde V_j$.
\end{lem}
\begin{proof}
We have
\begin{align}\nonumber
H_j|0\rangle_j|k+1\rangle_{\text{CM}} &= \frac{\hbar\eta\Omega}{2\sqrt N}
[|1\rangle_j \langle 0|0\rangle_j e^{-i\phi} (a|k+1\rangle_{\text{CM}}) +
|0\rangle_j \langle 1|0\rangle_j e^{i\phi}(a^{\dag}|k+1\rangle_{\text{CM}})]\\
&= \frac{\hbar\eta\Omega}{2\sqrt N} e^{-i\phi} \sqrt{k+1}\ |1\rangle_j
|k\rangle_{\text{CM}},
\end{align}
and, similarly,
\begin{equation}
H_j|1\rangle_j |k\rangle_{\text{CM}} = \frac{\hbar\eta\Omega}{2\sqrt N}
e^{i\phi} \sqrt{k+1}\ |0\rangle_j |k+1\rangle_{\text{CM}}.
\end{equation}
Thus, $V_{j,k}$ is an invariant two-dimensional subspace of $H_j$ in $V_j$.
\end{proof}
Let the time-evolution operator of $H_j$ (depending on $\phi$) on $V_j$ be
$U_j(t,\phi) = e^{-\frac{i}{\hbar} H_jt}$. Then on the subspace $V_{j,k}$,
$k=0,1,2,\ldots$, with respect to the ordered basis $\{|0\rangle_j
|k+1\rangle_{\text{CM}}, |1\rangle_j |k\rangle_{\text{CM}}\}$, $H_j$
admits the following matrix representation
\begin{equation}
H_j = \frac{\hbar\eta\Omega}{2} \sqrt{\frac{k+1}N} \left[\begin{matrix}
0&e^{-i\phi}\\ e^{i\phi}&0\end{matrix}\right].
\end{equation}
Thus, with $H_j$ restricted to $V_{j,k}$, its time-evolution operator is given
by
\begin{align}
U_{j,k}(t,\phi) &\equiv U_j(t,\phi)\big|_{V_{j,k}} = e^{-\frac{i}{\hbar}
H_jt}\Big|_{V_{j,k}}\noindentonumber\\
\label{eq4.3}
&= \left[\begin{matrix} \cos(\cl E_kt)&-ie^{-i\phi} \sin (\cl E_kt)\\
-ie^{i\phi}\sin(\cl E_kt)&\cos(\cl E_kt)\end{matrix}\right],\quad \cl E_k
\equiv
\frac{\hbar\eta\Omega}{2} \sqrt{\frac{k+1}N}.
\end{align}
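As an illustrative numerical check of \eqref{eq4.3} (not part of the original
derivation; it assumes the NumPy and SciPy libraries, and the parameter values
are arbitrary samples), one may exponentiate the $2\times 2$ restriction of
$H_j$ directly and compare with the closed form:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Sketch: compare expm(-i*H*t/hbar) on V_{j,k} with the closed form (4.3).
# hbar = 1; eta, Omega, N, k, phi, t are arbitrary sample values.
eta, Omega, N, k, phi, t = 0.1, 1.0, 2, 0, 0.3, 5.0
E_k = (eta * Omega / 2) * np.sqrt((k + 1) / N)   # rotation frequency

H = E_k * np.array([[0, np.exp(-1j * phi)],
                    [np.exp(1j * phi), 0]])      # H_j restricted to V_{j,k}
U_num = expm(-1j * H * t)
U_closed = np.array(
    [[np.cos(E_k * t), -1j * np.exp(-1j * phi) * np.sin(E_k * t)],
     [-1j * np.exp(1j * phi) * np.sin(E_k * t), np.cos(E_k * t)]])
print(np.allclose(U_num, U_closed))              # True
\end{verbatim}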
Also, note that
\begin{equation}\label{eq4.4}
U_j(t,\phi)|\psi\rangle = |\psi\rangle\quad \text{for all}\quad |\psi\rangle
\in
\widetilde V_j,
\end{equation}
because $H_j$ annihilates $\widetilde V_j$. In physics,
the states in $\widetilde V_j$ are said to be {\em off-resonance}.
Next, define
\begin{equation}\label{eq4.5}
H^{\mathrm{aux}}_j
= \frac{\hbar\eta\Omega}{2\sqrt N} [|\alpha\rangle_j \langle
0|_j e^{-i\phi}a + |0\rangle_j \langle\alpha|_j e^{i\phi}a^{\dag}],\qquad
j=1,2,
\end{equation}
where $|\alpha\rangle$ is the auxiliary quantum state
as indicated in Fig.~\ref{fig4.2}.
Then, similarly to Lemma \ref{lem4.1}, we have the following.
\begin{lem}\label{lem4.2}
Let $H^{\mathrm{aux}}_j$ be given by \eqref{eq4.5} for $j=1,2,\ldots, N$. Then
$H^{\mathrm{aux}}_j$ has a family of invariant two-dimensional subspaces
\begin{equation}
V^{\mathrm{aux}}_{j,k} \equiv
\text{span}\{|0\rangle_j |k+1\rangle_{\text{CM}},
|\alpha\rangle_j |k\rangle_{\text{CM}}\},\qquad k=0,1,2,\ldots,
\end{equation}
in the Hilbert space
\begin{equation}
V^{\mathrm{aux}}_j \equiv \text{span}\{|0\rangle_j |k\rangle_{\text{CM}},
|\alpha\rangle_j |k\rangle_{\text{CM}}\mid k=0,1,2,\ldots\}.
\end{equation}
On the orthogonal complement $\widetilde V^{\mathrm{aux}}_j \equiv
\left(\text{span }
\bigcup\limits^\infty_{k=0} V^{\mathrm{aux}}_{j,k}\right)^\bot
\subset V^{\mathrm{aux}}_j$, the action of $H^{\mathrm{aux}}_j$ is $0\colon \
H^{\mathrm{aux}}_j |\psi\rangle = 0$ for all $|\psi\rangle \in \widetilde
V^{\mathrm{aux}}_j$.$
\square$
\end{lem}
The time-evolution operator corresponding to $H^{\mathrm{aux}}_j$ on
$V^{\mathrm{aux}}_j$ is denoted as
$U^{\mathrm{aux}}_j(t,\phi) = e^{-\frac{i}\hbar
H^{\mathrm{aux}}_jt}$. Now, restrict $H^{\mathrm{aux}}_j$ to the invariant
two-dimensional subspace $V^{\mathrm{aux}}_{j,k}$ with ordered basis
$\{|0\rangle_j |k+1\rangle_{\text{CM}}, |\alpha\rangle_j
|k\rangle_{\text{CM}}\}$; its evolution operator has the matrix representation
\begin{align}\nonumber
U^{\mathrm{aux}}_{j,k}(t,\phi) &\equiv
U^{\mathrm{aux}}_j(t,\phi)\big|_{V^{\mathrm{aux}}_{j,k}} = e^{-\frac{i}\hbar
H^{\mathrm{aux}}_jt}\big|_{V^{\mathrm{aux}}_{j,k}}\\
&= \left[\begin{matrix} \cos(\cl E_kt)&-ie^{-i\phi} \sin (\cl E_kt)\\
-ie^{i\phi}\sin(\cl E_kt)&\cos(\cl E_kt)\end{matrix}\right],\quad \cl E_k
\equiv
\frac{\eta\Omega}{2} \sqrt{\frac{k+1}N}.\label{eq4.6}
\end{align}
Here, we again also have
\begin{equation}\label{eq4.7}
U^{\mathrm{aux}}_j (t,\phi)|\psi\rangle
= |\psi\rangle \quad \text{for all}\quad
|\psi\rangle\in \widetilde V^{\mathrm{aux}}_j,
\end{equation}
because $H^{\mathrm{aux}}_j$ annihilates the subspace $\widetilde
V^{\mathrm{aux}}_j$.
Using the CM mode as the bus, we can now derive the following 2-bit quantum
phase gate.
\begin{thm}\label{thm4.3}
Let $U_j(t,\phi), U_{j,k}(t,\phi), U^{\mathrm{aux}}_j(t,\phi)$ and
$U^{\mathrm{aux}}_{j,k}(t,\phi)$ be defined as above satisfying
\eqref{eq4.3}--\eqref{eq4.7} for $j=1,2$ and $k=0$. Then for
\begin{equation}
U \equiv U_1(T,0) U^{\mathrm{aux}}_2(2T,0) U_1(T,0),\quad T\equiv
\frac\pi{\eta\Omega} \sqrt N,
\end{equation}
we have
\begin{align}
\label{eq4.8}
U|0\rangle_1 |0\rangle_2|0\rangle_{\text{CM}} &= |0\rangle_1 |0\rangle_2
|0\rangle_{\text{CM}},\\
\label{eq4.9}
U|1\rangle_1 |0\rangle_2 |0\rangle_{\text{CM}} &= |1\rangle_1 |0\rangle_2
|0\rangle_{\text{CM}},\\
\label{eq4.10}
U|0\rangle_1 |1\rangle_2 |0\rangle_{\text{CM}} &= |0\rangle_1 |1\rangle_2
|0\rangle_{\text{CM}},\\
\label{eq4.11}
U|1\rangle_1 |1\rangle_2 |0\rangle_{\text{CM}} &= -|1\rangle_1 |1\rangle_2
|0\rangle_{\text{CM}}.
\end{align}
Consequently, by ignoring the last CM-bit $|0\rangle_{\text{CM}}$, we have the
phase gate
\begin{equation}\label{eq4.11a}
U = Q_\pi,\quad \text{cf. (\ref{eq2.7}).}
\end{equation}
\end{thm}
\begin{proof}
We first verify \eqref{eq4.8}:
\begin{align}
U|0\rangle_1 |0\rangle_2 |0\rangle_{\text{CM}}
=~&U_1(T,0) U^{\mathrm{aux}}_2 (2T,0) [U_1(T,0) |0\rangle_1
|0\rangle_{\text{CM}}] |0\rangle_2\nonumber\\
=~&U_1(T,0) U^{\mathrm{aux}}_2(2T,0) [|0\rangle_1 |0\rangle_{\text{CM}}]
|0\rangle_2\nonumber\\
&\quad \text{(by \eqref{eq4.4} because $|0\rangle_1 |0\rangle_{\text{CM}} \in
\widetilde V_j$ for $j=1$)}\nonumber\\
=~&U_1(T,0) [U^{\mathrm{aux}}_2 (2T,0) |0\rangle_2 |0\rangle_{\text{CM}}]
|0\rangle_1\nonumber\\
=~&U_1(T,0) [|0\rangle_2 |0\rangle_{\text{CM}}] |0\rangle_1\nonumber\\
&\quad \text{(by \eqref{eq4.7} because $|0\rangle_2 |0\rangle_{\text{CM}} \in
\widetilde V^{\mathrm{aux}}_j$ for $j=2$)}\nonumber\\
=~&[U_1(T,0) |0\rangle_1 |0\rangle_{\text{CM}}] |0\rangle_2\nonumber\\
=~&|0\rangle_1 |0\rangle_{\text{CM}} |0\rangle_2\nonumber\\
&\quad \text{(by
\eqref{eq4.4} because $|0\rangle_1 |0\rangle_{\text{CM}} \in \widetilde
V_j$ for $j=1$)}\nonumber\\
=~&|0\rangle_1 |0\rangle_2 |0\rangle_{\text{CM}}.
\end{align}
Next, we verify \eqref{eq4.9}:
\begin{align}
U|1\rangle_1 |0\rangle_2 |0\rangle_{\text{CM}}
=~&U_1(T,0) U^{\mathrm{aux}}_2 (2T,0) [U_1(T,0)
|1\rangle_1 |0\rangle_{\text{CM}}] |0\rangle_2\nonumber\\
=~&U_1(T,0) U^{\mathrm{aux}}_2 (2T,0) [-i|0\rangle_1 |1\rangle_{\text{CM}}]
|0\rangle_2\nonumber\\
&\quad \text{(using \eqref{eq4.3} with $k=0, \phi=0$ and $\cl E_kT=\pi/2$)}
\nonumber\\
=~&U_1(T,0) \{(-i) [U^{\mathrm{aux}}_2(2T,0) |0\rangle_2 |1\rangle_{\text{CM}}]
|0\rangle_1\}\nonumber\\
=~&U_1(T,0) \{[i|0\rangle_2 |1\rangle_{\text{CM}}] |0\rangle_1\}\nonumber\\
&\quad \text{(using \eqref{eq4.6} with $k=0,\phi=0$ and
$2\cl E_kT=\pi$)}\nonumber\\
=~&i[U_1(T,0)|0\rangle_1 |1\rangle_{\text{CM}}]|0\rangle_2\nonumber\\
=~&i[(-i) |1\rangle_1 |0\rangle_{\text{CM}}] |0\rangle_2\nonumber\\
&\quad \text{(using \eqref{eq4.3} with $k=0,\phi=0$ and $\cl E_kT=\pi/2$)}
\nonumber\\
=~&|1\rangle_1 |0\rangle_2 |0\rangle_{\text{CM}}.
\end{align}
The verification of \eqref{eq4.10} can be done in the same way as that for
\eqref{eq4.8}.
Finally, we verify \eqref{eq4.11}:
\begin{align}
U|1\rangle_1 |1\rangle_2 |0\rangle_{\text{CM}}
=~&U_1(T,0)U^{\mathrm{aux}}_2 (2T,0) [U_1(T,0)
|1\rangle_1 |0\rangle_{\text{CM}}]
|1\rangle_2\nonumber\\
=~&U_1(T,0) U^{\mathrm{aux}}_2(2T,0) [-i|0\rangle_1 |1\rangle_{\text{CM}}]
|1\rangle_2\nonumber\\
&\quad \text{(using \eqref{eq4.3} with $j=1, k=0, \phi=0$ and $\cl
E_kT=\pi/2$)}\nonumber\\
=~&U_1(T,0) [(-i) U^{\mathrm{aux}}_2 (2T,0) |1\rangle_2 |1\rangle_{\text{CM}}]
|0\rangle_1\nonumber\\
=~&U_1(T,0) [(-i) |1\rangle_2 |1\rangle_{\text{CM}}] |0\rangle_1\nonumber\\
&\quad \text{(because $H^{\mathrm{aux}}_2$ annihilates $|1\rangle_2
|1\rangle_{\text{CM}}$, so the state is left fixed; cf.\ \eqref{eq4.7})}\nonumber\\
=~&(-i) [U_1(T,0) |0\rangle_1 |1\rangle_{\text{CM}}] |1\rangle_2\nonumber\\
=~&(-i)[(-i) |1\rangle_1 |0\rangle_{\text{CM}}] |1\rangle_2\nonumber\\
&\quad \text{(using \eqref{eq4.3} with $j=1, k=0,\phi=0$ and $\cl
E_kT=\pi/2$)}\nonumber\\
=~&-|1\rangle_1 |1\rangle_2 |0\rangle_{\text{CM}}.
\end{align}
The verifications are complete.
\end{proof}
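The composition in Theorem \ref{thm4.3} can also be checked numerically. The
following sketch (not part of the original text; it assumes NumPy and SciPy,
truncates the phonon space to three levels, and models ion 2 with the three
levels $|0\rangle, |1\rangle, |\alpha\rangle$) builds the Hamiltonians
\eqref{eq4.2} and \eqref{eq4.5} with $\phi=0$ and verifies
\eqref{eq4.8}--\eqref{eq4.11}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 2                                # two ions
eta, Omega, hbar = 0.1, 1.0, 1.0     # sample values
g = hbar * eta * Omega / (2 * np.sqrt(N))
T = np.pi * np.sqrt(N) / (eta * Omega)

nph = 3                                      # phonon truncation
a = np.diag(np.sqrt(np.arange(1, nph)), 1)   # annihilation operator
ad = a.conj().T

I2, I3 = np.eye(2), np.eye(3)
s10 = np.zeros((2, 2)); s10[1, 0] = 1        # |1><0| for ion 1
t_a0 = np.zeros((3, 3)); t_a0[2, 0] = 1      # |alpha><0| for ion 2

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

H1   = g * (kron(s10, I3, a) + kron(s10.T, I3, ad))    # eq. (4.2), j=1
H2ax = g * (kron(I2, t_a0, a) + kron(I2, t_a0.T, ad))  # eq. (4.5), j=2

U1   = expm(-1j * H1 * T / hbar)
U2ax = expm(-1j * H2ax * 2 * T / hbar)
U = U1 @ U2ax @ U1                           # Theorem 4.3

def basis(j1, j2, n):                        # |j1>_1 |j2>_2 |n>_CM
    e = lambda d, idx: np.eye(d)[idx]
    return kron(e(2, j1), e(3, j2), e(nph, n))

for j1 in (0, 1):
    for j2 in (0, 1):
        v = basis(j1, j2, 0)
        print(j1, j2, np.round(v.conj() @ (U @ v), 6))
# prints 1, 1, 1 for |0,0>, |0,1>, |1,0>, and -1 for |1,1>
\end{verbatim}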
The CNOT gate can now be obtained according to Corollary \ref{cor2.3}. As a
consequence, we have the following.
\begin{thm}\label{thm4.4}
The quantum computer made of confined ions in a trap is universal.
\end{thm}
\begin{proof}
Combine \eqref{eq2.8} and \eqref{eq4.11a} with Corollary \ref{cor2.3}.
\end{proof}
A very similar gate has been implemented experimentally using a cold, confined
Be$^+$ ion \cite{MMKIW}. To conclude this section, we mention two more schemes
using ion traps.
\noindent\textbf{(a)}\ The S\o{}rensen--M\o{}lmer scheme \cite{SM}.\\
Experimental inconveniences in implementation of the Cirac--Zoller scheme on
confined ions include the requirement that the ions be initialized in the
ground
vibrational state $|0\rangle_{\text{CM}}$, which requires sophisticated laser
cooling techniques,
and the experimental observation that the CM vibration of a
linear string of ions in a trap has limited coherence.
The S\o{}rensen--M\o{}lmer
scheme applies to a string of ions, but the ions can have thermal motion, as
long
as they remain within the Lamb--Dicke regime. The ions never populate a
vibrational mode, so decoherence effects are reduced. The idea is to
simultaneously excite an ion {\em pair\/} using two laser beams detuned by
frequency $\delta$ from the upper and lower vibrational sidebands $\omega_0 +
\omega_{\text{CM}} -\delta$ and $\omega_0 - \omega_{\text{CM}} + \delta$, so
that the energy (or frequency) sum is $2\omega_0$, corresponding to excitation
of both ions by two interfering transition paths
from $|0,0\rangle|n\rangle$ to
$|1,1\rangle|n\rangle$, without changing the vibrational state. The lasers are
detuned from the red and blue sideband transitions of {\em single\/} ion
excitation, so the vibration is only virtually (negligibly) excited, and the
vibration levels are never actually populated. One laser beam is applied to
each
ion. For any two ions, each illuminated by {\em both\/} of the laser beams, an
additional two interference paths exist, and the effective Rabi frequency for
the two-ion transition becomes $\Omega_{\text{SM}} =
-(\eta\Omega)^2/(\omega_{\text{CM}}-\delta)$, where $\Omega$ represents the
single ion Rabi frequency at the same laser power. The two-ion interaction
Hamiltonian can be written
$(\hbar\Omega_{\text{SM}}/2)(|0,0\rangle \langle1,1|
+ |1,1\rangle \langle0,0| + |1,0\rangle\langle 0,1| +
|0,1\rangle\langle1,0|)$. The Rabi frequency can be approximated as
$\Omega_{\text{SM}} = \eta^2\Omega^2/\delta$, since $\omega_{\text{CM}} -
\delta\approx \delta$. Full entanglement is achieved at a particular
interaction
time $T=\pi/(2\Omega_{\text{SM}})$, when the wavefunction becomes a coherent
superposition of maximally entangled $|0,0\rangle$ and $|1,1\rangle$ states
$|\psi\rangle = 2^{-1/2}(|0,0\rangle -i|1,1\rangle)$. Once entanglement is
achieved, the CNOT gate can be implemented with additional single particle
rotations, carried out analogously to those in the Cirac--Zoller scheme, with
individual laser beams exciting single ions. Simultaneous entanglement of any
{\em even\/} number of ions can be carried out similarly, although the CNOT
gate
applied to {\em any pair\/} of ions is sufficient for computation. The
technique
has been experimentally demonstrated with four ions in a small linear ion trap
\cite{Sa}.
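As a quick check of the stated interaction time, note that on the subspace
spanned by $|0,0\rangle$ and $|1,1\rangle$ the above two-ion Hamiltonian acts
like $(\hbar\Omega_{\text{SM}}/2)\sigma_x$, so evolving $|0,0\rangle$ for a
time $T$ yields $\cos(\Omega_{\text{SM}}T/2)\,|0,0\rangle
- i\sin(\Omega_{\text{SM}}T/2)\,|1,1\rangle$; at
$T=\pi/(2\Omega_{\text{SM}})$ this is indeed the maximally entangled state
$2^{-1/2}(|0,0\rangle - i|1,1\rangle)$ quoted above.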
\noindent\textbf{(b)}~The Jonathan--Plenio scheme \cite{JP}.\\
When an atomic transition is driven by a laser beam at resonance, an AC Stark
shift (or light shift) results, changing the energies of the coupled
levels due
to the coupling to the laser field. The laser drive on an ion in a linear
string
in a trap produces Rabi transitions between the coupled levels,
but if the Rabi
frequency matches one of the CM vibration modes $\nu_p$ of the ion string, an
exchange of excitation energy between the internal and vibrational variables
also occurs. This exchange can be used to derive a CNOT gate between any two
ions in the linear chain, based on an interference analogous to that of the
S\o{}rensen--M\o{}lmer scheme discussed above. This is expected to result in
relatively fast quantum gates, but requires cold ions.
When the light shift is
used to drive virtual two-phonon transitions simultaneously, using more than
one vibrational mode, it is predicted that gate speeds will be somewhat
reduced,
but the ions remain insensitive to heating, possibly offering
an advantage over
the S\o{}rensen--M\o{}lmer scheme. Two ion states of the form
$|1,0\rangle|n_p\rangle$ and $|0,1\rangle|n_p\rangle$ are coupled. The
effective time-independent Hamiltonian is written $-\hbar\omega[|1,0\rangle
\langle 0,1| + |0,1\rangle\langle 1,0|]$ plus a sum over states $p$ which
depend
on the ion motion, but which can be cancelled by reversing the sign of a
parameter half way through the operation. The states in the Hamiltonian are
assumed now to be ``dressed'', i.e., formed from the original states coupled
and
shifted by interaction with the external field. The frequency is $\omega =
(\Omega^2/2)\sum^N_{p=1} \eta_{1p} \eta_{2p} \nu_p/(\Omega^2-\nu^2_p)$.
Maximally entangled states can be produced at the time $T = |\pi/4\omega|$.
This
proposed technique has not yet been experimentally demonstrated.
\section{Quantum Dots}\label{sec5}
Quantum dots are fabricated from semiconductor materials, metals, or small
molecules. They work by confining individual charge quanta (electrons) in
three-dimensional boxes with electrostatic potentials; the spin of the
confined charge quantum serves as the qubit.
The spin in a single quantum dot can be manipulated (single-qubit operations)
by applying pulsed local electromagnetic fields, through a scanning-probe
tip, for example. Two-qubit operations can be achieved by spectroscopic
manipulation or by a purely electrical gating of the tunneling barrier
between neighboring quantum dots.
The spin operator of an electron, in units of $\hbar/2$, is given by
\begin{equation}\label{eq5.1}
\pmb{S} = (\sigma_x, \sigma_y,\sigma_z)^T = \sigma_x\pmb{e}_x + \sigma_y \pmb{
e}_y + \sigma_z \pmb{e}_z
\end{equation}
where $\sigma_x, \sigma_y$ and $\sigma_z$ are the usual Pauli matrices:
\begin{equation}\label{eq5.1a}
\sigma_x = \left[\begin{matrix} 0&1\\ 1&0\end{matrix}\right],\quad \sigma_y =
\left[\begin{matrix} 0&-i\\ i&0\end{matrix}\right],\quad \sigma_z =
\left[\begin{matrix} 1&0\\ 0&-1\end{matrix}\right],
\end{equation}
and
\begin{equation}
\pmb{e}_x = \left[\begin{matrix} 1\\ 0\\ 0\end{matrix}\right],\quad
\pmb{e}_y =
\left[\begin{matrix} 0\\ 1\\ 0\end{matrix}\right],\quad \pmb{e}_z =
\left[\begin{matrix} 0\\ 0\\ 1\end{matrix}\right]
\end{equation}
are the unit vectors in the directions of $x,y$ and $z$. Let $\pmb{S}_i$ and
$\pmb{S}_j$ denote, respectively, the spin of the electric charge quanta at,
respectively, the $i$-th and $j$-th location of the quantum dots. Then the
usual
physics of the Hubbard model \cite{AM} gives the Hamiltonian of the system of
$n$
quantum dots as
\begin{equation}\label{eq5.2}
H = \sum^n_{j=1} \mu_B g_j(t) \pmb{B}_j(t) \cdot \pmb{S}_j + \sum_{1\le j<k\le
n} J_{jk}(t) \pmb{S}_j\cdot \pmb{S}_k,
\end{equation}
where the first summation denotes the sum of energy due to the application of
a magnetic field $\pmb{B}_j$ to the electron spin at dot $j$, while the second
denotes the interaction Hamiltonian through the tunneling effect of a gate
voltage
applied between the dots, and
\begin{equation}
\parbox[b]{0.9\textwidth}{
\begin{itemize}
\item[$\mu_B$:] is the Bohr magneton;
\item[$g_j(t)$:] is the effective $g$-factor;
\item[$\pmb{B}_j(t)$:] is the applied magnetic field;
\item[$J_{jk}(t)$:] is the time-dependent exchange constant \cite[see
Ref.~10 therein]{LD}, with $J_{jk}(t)
= 4t^2_{jk}(t)/u$, which is produced by the turning
on and off of the tunneling
matrix element $t_{jk}(t)$ between quantum dots $j$ and $k$, with $u$
being the charging energy of a single dot.
\end{itemize}}
\end{equation}
Note that for
\begin{equation}
\pmb{S}_j = \sigma^{(j)}_x \pmb{e}_x + \sigma^{(j)}_y \pmb{e}_y +
\sigma^{(j)}_z
\pmb{e}_z,\qquad j=1,2,\ldots, n,
\end{equation}
and
\begin{equation}
\pmb{B}_j(t) = b^{(j)}_x(t)\pmb{e}_x + b^{(j)}_y(t) \pmb{e}_y + b^{(j)}_z(t)
\pmb{e}_z,\qquad j=1,2,\ldots, n,
\end{equation}
the dot products in \eqref{eq5.2} are defined by
\begin{align}\nonumber
\pmb{S}_j\cdot \pmb{S}_k &= \sigma^{(j)}_x \sigma^{(k)}_x + \sigma^{(j)}_y
\sigma^{(k)}_y + \sigma^{(j)}_z \sigma^{(k)}_z,\\
\pmb{B}_j(t)\cdot\pmb{S}_j &= b^{(j)}_x(t) \sigma^{(j)}_x + b^{(j)}_y(t)
\sigma^{(j)}_y + b^{(j)}_z(t)\sigma^{(j)}_z.
\end{align}
In Fig.~\ref{fig5.1}, we include a quantum dot array design given in
Burkard, Engel and Loss \cite{BEL}.
\begin{figure}
\caption{\label{fig5.1}}
\end{figure}
Again, by the universal quantum computing theorems in Section \ref{sec2}, we need only
discuss the case of a system with two quantum dots. The spin-up $\uparrow$ and
spin-down $\downarrow$ of the electric charge quanta in each dot are denoted
as $|0\rangle$ and $|1\rangle$, respectively.
Then the underlying Hilbert space is
\begin{equation}\label{eq5.3a}
\cl H = \text{span}\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}.
\end{equation}
The wave function $|\psi(t)\rangle$ of the system has four components:
\begin{equation}\label{eq5.3b}
|\psi(t)\rangle = \psi_1(t)|00\rangle + \psi_2(t)|01\rangle + \psi_3(t)
|10\rangle + \psi_4(t)|11\rangle,
\end{equation}
satisfying the Schr\"odinger equation
\begin{equation}\label{eq5.3}
\left\{\begin{array}{ll}\displaystyle
i\hbar \frac\partial{\partial t} |\psi(t)\rangle = H(t) |\psi(t)\rangle,&
t>0,\\[1.5ex]
|\psi(0)\rangle = |\psi_0\rangle \in \cl H,
\end{array}\right.
\end{equation}
where
\begin{equation}\label{eq5.4}
H(t) \equiv \frac\hbar2 [\pmb{\Omega}_1(t) \cdot \pmb{\sigma} + \pmb{
\Omega}_2(t)
\cdot
\pmb{\tau} + \omega(t) \pmb{\sigma}\cdot \pmb{\tau}],
\end{equation}
following from \eqref{eq5.2} by rewriting the notation
\begin{equation}\label{eq5.5}
\pmb{S}_1 =\pmb{\sigma},\quad \pmb{S}_2 = \pmb{\tau}\,; \quad \mu_B g_j(t)
\pmb{
B}_j(t) = \frac\hbar2 \pmb{\Omega}_j(t),\quad j=1,2;\quad J_{12}(t) =
\frac\hbar2
\omega(t).
\end{equation}
The $\pmb{\Omega}_1(t), \pmb{\Omega}_2(t)$ and $\omega(t)$
in \eqref{eq5.4} are
the {\em control pulses}.
Let us now give a brief heuristic physics discussion for the derivation of the
Hamiltonian in \eqref{eq5.2} and \eqref{eq5.4} as a remark below.
\begin{rem}\label{rem4.2}
The suggestion to use quantum dots as qubits for the purposes of quantum
information processing in general, and quantum computing in particular, is
based on the following observations.
To set the stage, we note that a quantum dot is formed by a single
electron that is tightly bound to a ``center'' -- some distinguished small
area in a solid.
Typically, the binding is strongly confining in one spatial direction (the
$z$ direction) while the electron has some freedom of motion in the plane
perpendicular to it (the $xy$ plane), with its dynamics governed by the forces
that bind it to its center.
In addition, there is the possibility of applying electric and magnetic
fields of varying strength and direction, whereby the experimenter can exert
some external control over the evolution of the electron's state.
In particular, a magnetic field $\brel{B}(t)$ will couple to the magnetic
moment of the electron, which gives rise to a term in the Hamilton operator
that is proportional to $\brel{B}\cdot\brel\sigma$, where $\brel\sigma$ is the
Pauli spin vector operator of the electron, so that
\begin{align}\label{eq5.8d}
H=\frac12\hbar\brel\Omega(t)\cdot\brel\sigma
\end{align}
is the structure of the effective single-qubit Hamilton operator, with
$\brel\Omega(t)\propto\brel{B}(t)$
being externally controlled by the experimenter.
By suitably choosing $\brel{B}(t)$ various single-qubit gates can be
realized.
To suppress the unwanted effect of the Lorentz force (which couples to the
electron's charge, results in a time-dependent energy, and thus produces an
accumulated phase change), it is expedient to have the magnetic field vector
in the $xy$ plane, so that $\brel{B}\cdot\brel{e}_z=0$ is imposed.
Nevertheless, general single qubit gates can be realized by a succession of
rotations of $\brel\sigma$ about axes in the $xy$ plane.
This is just an application of Euler's classical result that general
rotations can be composed of successive rotations about two orthogonal axes
only (standard choice is the $z$ axis and the $x$ axis but, of course, the
$x$ axis and the $y$ axis serve this purpose just as well).
More complicated is the implementation of two-qubit gates.
Here one needs two quantum dots in close proximity, by which is meant that
they have a considerable interaction, predominantly originating in the
Coulomb repulsion of the charges of the two electrons.
It is important that external electric fields can be used to modify the
effective interaction strength in a wide range, and in particular one can
``turn it off'' by separating the two quantum dots (or rather shielding them
from each other).
Consider, thus, two quantum dots with some interaction potential in
addition to the potentials that bind them to their respective centers.
This situation is reminiscent of the hydrogen dimer, the $\mathrm{H}_2$
molecule, except that the confinement to a plane and the form of the binding
potential, and also of the effective interaction, are quite different.
It is, indeed, not so simple to model the various potentials reasonably well,
and one must be practiced in the art of solid-state theory to do it well.
We shall, therefore, refer the interested reader to the specialized
literature, perhaps starting with
\cite{BLD} and the references therein.
The general picture, however, can be grasped without getting involved with
such details.
First, we note that at sufficiently low temperatures, only the ground state
and the first excited state of the interacting two-dot system will be
dynamically relevant, at least as long as we carefully avoid exciting other
states.
For symmetry reasons, the ground state is a spin-singlet (total spin angular
momentum of $0\hbar$) and has a symmetric spatial wave function, whereas
the first excited state is a spin-triplet
(total spin angular momentum of $1\hbar$)
and has an antisymmetric spatial wave function.
The excited state is long-lived because
triplet-to-singlet transitions tend to have very small matrix elements.
The total spin angular momentum vector operator
is $\brel{S}=\frac12\hbar(\brel\sigma+\brel\tau)$
where we denote
the Pauli vector operators of the two electrons
by $\brel\sigma$ and $\brel\tau$, respectively.
The eigenvalues of $\brel{S}^2$ are $0\hbar^2$ in the spin singlet and
$2\hbar^2$ in the spin triplet. In view of $\brel\sigma^2=3\pmb{1}$ and
$\brel\tau^2=3\pmb{1}$, this says that $\brel\sigma\cdot\brel\tau$ has
eigenvalue
$-3$ in the singlet and $+1$ in the triplet.
As a consequence,
\begin{equation}
\left\{\begin{array}{l}
\dfrac14\bigl(\pmb{1}-\brel\sigma\cdot\brel\tau\bigr)
\text{ projects on the singlet, and}\\
\noalign{\smallskip}
\dfrac14\bigl(3\pmb{1}+\brel\sigma\cdot\brel\tau\bigr)
\text{ projects on the triplet.}\end{array}\right.
\end{equation}
Effectively, then, they project on the ground state and the excited state,
respectively.
Denoting by $E_0$ the ground state energy, and by $E_1=E_0+2\hbar\omega$
that of the excited state, we have
\begin{align}\nonumber
H&=\frac14\bigl(\pmb{1}-\brel\sigma\cdot\brel\tau\bigr)E_0
+\frac14\bigl(3\pmb{1}+\brel\sigma\cdot\brel\tau\bigr)E_1\\
&=\frac14\bigl(E_0+3E_1)\pmb{1}+\frac14(E_1-E_0)\brel\sigma\cdot\brel\tau
\end{align}
or, after dropping the irrelevant additive constant,
\begin{align}\label{eq5.8g}
H=\frac12\hbar\omega(t)\brel\sigma\cdot\brel\tau
\end{align}
for the effective Hamilton operator of the coupling between the two quantum
dots.
We have indicated the externally controlled time dependence of $\omega$, the
``turning on'' and ``turning off'' of the interaction at the experimenter's
discretion, by $t$.
In summary, then, we have two terms of the form (\ref{eq5.8d}), one for
each quantum dot, and the interaction term (\ref{eq5.8g}) in the effective
Hamilton operator
\begin{equation}
H=\frac12\hbar\bigl[\brel\Omega_1(t)\cdot\brel\sigma
+\brel\Omega_2(t)\cdot\brel\tau
+\omega(t)\brel\sigma\cdot\brel\tau\bigr]\,.
\end{equation}
Within reasonable ranges, the experimenter is capable of realizing any
$\brel\Omega_1(t)$, $\brel\Omega_2(t)$, and $\omega(t)$,
where it is fully sufficient to have two of them vanishing at any instant.
$
\square$
\end{rem}
\begin{lem}\label{lem5.1}
Let $\pmb{\sigma}$ and $\pmb{\tau}$ be defined as in \eqref{eq5.5}. Then we
have
\begin{align}\label{eq5.9a}
&{\rm ~(i)}~~\left(\dfrac{\pmb{1}+\pmb{\sigma}\cdot\pmb{\tau}}{2}\right)^2 =
\pmb{1} \text{ and }
(\pmb{\sigma}\cdot\pmb{\tau})^2 = 3\pmb{1} -
2\pmb{\sigma} \cdot
\pmb{\tau}.\\
\label{eq5.10}
&{\rm (ii)}~~U_{\mathrm{sw}} = \frac12 (\pmb{1}
+\pmb{\sigma}\cdot
\pmb{\tau}) =
U^{\dagger}_{\mathrm{sw}} = U^{-1}_{\mathrm{sw}},
\text{ where $U_{\mathrm{sw}}$ is the swapping gate}
\end{align}
\begin{equation}
U_{\mathrm{sw}}|jk\rangle
= |kj\rangle\quad \text{for}\quad j,k=0,1; \text{cf.\
(\ref{eq2.5}).}
\end{equation}
\hspace{.3in}{\rm (iii)}
\begin{equation}\label{eq5.9}
U^{\dagger}_{\mathrm{sw}} \pmb{\sigma} U_{\mathrm{sw}}
= \pmb{\tau},\quad U^{\dagger}_{\mathrm{sw}} \pmb{\tau} U_{\mathrm{sw}} =
\pmb{\sigma}
\end{equation}
\begin{equation}\label{eq5.12a}
\hspace{-3.1in}{\rm (iv)}~~U^2_{\mathrm{sw}} = \pmb{1}.
\end{equation}
\end{lem}
\begin{proof}
Let us recall first how the various $\sigma_j$ and $\tau_j$
transform the basis vectors, for $j=x,y,z$:
\begin{align}
\sigma_x|0\cdot\rangle&=|1\cdot\rangle\,,\qquad
&\sigma_x|1\cdot\rangle&=|0\cdot\rangle\,,\nonumber\\
\sigma_y|0\cdot\rangle&=i|1\cdot\rangle\,,\qquad
&\sigma_y|1\cdot\rangle&=-i|0\cdot\rangle\,,\nonumber\\
\sigma_z|0\cdot\rangle&=|0\cdot\rangle\,,\qquad
&\sigma_z|1\cdot\rangle&=-|1\cdot\rangle\,,
\end{align}
and likewise for $\tau_x$, $\tau_y$, and $\tau_z$
acting on $|\cdot0\rangle$ and $|\cdot1\rangle$.
Thus
\begin{align}
\brel\sigma\cdot\brel\tau|00\rangle
&=(1^2+i^2)|11\rangle+|00\rangle=|00\rangle\,,\nonumber\\
\brel\sigma\cdot\brel\tau|01\rangle
&=(1^2-i^2)|10\rangle-|01\rangle=2|10\rangle-|01\rangle\,,\nonumber\\
\brel\sigma\cdot\brel\tau|10\rangle
&=(1^2-i^2)|01\rangle-|10\rangle=2|01\rangle-|10\rangle\,,\nonumber\\
\brel\sigma\cdot\brel\tau|11\rangle
&=(1^2+i^2)|00\rangle+|11\rangle=|11\rangle\,,
\end{align}
and we see that the $4\times4$ matrix for $\brel\sigma\cdot\brel\tau$ is given
by
\begin{align}
\brel\sigma\cdot\brel\tau=\left[
\begin{matrix}
1&0&0&0\\ 0&-1&2&0\\ 0&2&-1&0 \\ 0&0&0&1
\end{matrix}\right]\,,
\end{align}
and the first equation in \eqref{eq5.9a} follows.
We read off that $\brel\sigma\cdot\brel\tau$ has a 3-fold eigenvalue $+1$ and
a single eigenvalue $-3$.
The respective projectors to the subspaces characterized by these eigenvalues
are easily verified to be
\begin{align}
\bb P_1 \equiv
\frac{1}{4}(3\pmb{1}+\brel\sigma\cdot\brel\tau)\quad\mbox{and}\quad
\bb P_2\equiv \frac{1}{4}(\pmb{1}-\brel\sigma\cdot\brel\tau)\,,
\end{align}
where $\bb P^2_j = \bb P_j$ for $j=1,2$, and $\bb P_j\bb P_k=0$ for $j\ne k$.
Therefore, for any sufficiently well-behaved function, including polynomials
and analytic functions, of
$\brel\sigma\cdot\brel\tau$,
\begin{align}\nonumber
f(\brel\sigma\cdot\brel\tau)&=
f(1)\frac{1}{4}(3\pmb{1}+\brel\sigma\cdot\brel\tau)
+f(-3) \frac{1}{4}(\pmb{1}-\brel\sigma\cdot\brel\tau)
\\ \label{eq5.10a}
&=\frac{1}{4}\left[3f(1)+f(-3)\right]\pmb{1}
+\frac{1}{4}\left[f(1)-f(-3)\right]\brel\sigma\cdot\brel\tau\,,
\end{align}
according to the spectral theorem.
As a first application, consider $f(x)=x^2$ and find
\begin{align}
(\brel\sigma\cdot\brel\tau)^2=3\pmb{1}-2\brel\sigma\cdot\brel\tau\,,
\end{align}
which is the second equation in \eqref{eq5.9a}.
Further, in view of the matrix for $U_{\mathrm{sw}}$,
\begin{align}
U_{\mathrm{sw}}=\frac{1}{2}(\pmb{1}+\brel\sigma\cdot\brel\tau)
=\left[\begin{matrix}
1&0&0&0\\ 0&0&1&0\\ 0&1&0&0 \\ 0&0&0&1
\end{matrix}\right]
\end{align}
the swapping gate really swaps: $U_{\mathrm{sw}}|jk\rangle=|kj\rangle$ (no
effect on $|00\rangle$ and $|11\rangle$, whereas $|01\rangle$ and
$|10\rangle$ are interchanged).
To verify
\begin{align}
U_{\mathrm{sw}}\brel\sigma=\brel\tau U_{\mathrm{sw}}
\end{align}
it should suffice to inspect the pair
\begin{align}
\sigma_x&=|00\rangle\langle10|+|01\rangle\langle11|+
|10\rangle\langle00|+|11\rangle\langle01|\\
\intertext{and}
\tau_x&=|00\rangle\langle01|+|10\rangle\langle11|+
|01\rangle\langle00|+|11\rangle\langle10|\,,
\end{align}
for example.
One can also have a purely algebraic, but more lengthy, argument
that exploits nothing but the basic identities, such as $\sigma_x^2=\pmb{1}$
and $\sigma_x\sigma_y=i\sigma_z$, etc. Thus, the rest easily follows.
\end{proof}
\begin{thm}\label{thm5.2}
Denote by $U(t)$ the time evolution operator for the quantum system
\eqref{eq5.3} and \eqref{eq5.4} for time duration $t\in [0,T]$. Choose $\pmb{
\Omega}_1(t) = \pmb{\Omega}_2(t) = 0$ in \eqref{eq5.4} and let $\omega(t)$
therein satisfy
\begin{equation}\label{eq5.11}
\int^T_0 \omega(t)dt = \frac\pi2.
\end{equation}
Then we have $U(T) =e^{-\pi i/4} U_{\mathrm{sw}}$, i.e.,
$U(T)$ is the swapping gate
(with a nonessential phase factor $e^{-\pi i/4}$).
\end{thm}
\begin{proof}
By assumptions, we have now
\begin{equation}
H(t) = \hbar\omega(t) \pmb{\sigma}\cdot\pmb{\tau}/2.
\end{equation}
Since $\omega(t)$ is scalar-valued, we have the commutativity
\begin{equation}
H(t_1) H(t_2) =H(t_2)H(t_1),\quad \text{for any}\quad t_1,t_2\in [0,T].
\end{equation}
Therefore
\begin{align}
U(T) &= e^{-i \int^T_0 H(t)dt/\hbar}
= e^{\left[-\frac{i}2 \int^T_0 \omega(t)
dt\right]\pmb{\sigma}\cdot \pmb{\tau}}\nonumber\\
&= e^{-i\phi\pmb{\sigma}\cdot\pmb{\tau}}\hspace{1.5in}
\left(\phi\equiv \frac12
\int^T_0 \omega(t)dt\right)\nonumber\\
\label{eq5.12}
&= \cos(\phi\pmb{\sigma}\cdot\pmb{\tau}) - i \sin(\phi\pmb{\sigma}\cdot
\pmb{\tau}),
\end{align}
where $e^{-i\phi\pmb{\sigma}\cdot\pmb{\tau}}$, $\cos(\phi\pmb{\sigma}\cdot
\pmb{\tau})$ and $\sin(\phi\pmb{\sigma}\cdot\pmb{\tau})$ are $4\times 4$
matrices.
We now utilize \eqref{eq5.10a} to calculate the exponential matrix $U(T)$:
\begin{equation}\label{eq5.13}
U(T) = e^{-i\phi\pmb{\sigma}\cdot\pmb{\tau}}
= e^{-i\phi}\cdot \frac14 (3\pmb{1} + \pmb{\sigma} \cdot\pmb{\tau}) +
e^{3i\phi} \cdot \frac14 (\pmb{1} - \pmb{\sigma}\cdot\pmb{\tau}).
\end{equation}
Thus, with a little manipulation, \eqref{eq5.13} becomes
\begin{align}
U(T) &= e^{i\phi} \left[\cos(2\phi) \pmb{1} - i\sin(2\phi) \frac{\pmb{1} +
\pmb{\sigma}\cdot\pmb{\tau}}2\right]\nonumber\\
\label{eq5.14}
&= e^{i\phi} [\cos(2\phi)\pmb{1} - i\sin(2\phi) U_{\mathrm{sw}}],\quad
\text{by (\ref{eq5.10}).}
\end{align}
By \eqref{eq5.11}, $\phi = \pi/4$, so $U(T) = e^{i\pi/4}(-i)U_{\mathrm{sw}}
= e^{-\pi i/4}U_{\mathrm{sw}}$, which is the desired conclusion.
\end{proof}
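As an illustrative numerical check of Theorem \ref{thm5.2} (not part of the
original proof; it assumes NumPy and SciPy), one may verify directly that
$e^{-i(\pi/4)\pmb{\sigma}\cdot\pmb{\tau}} = e^{-\pi i/4}U_{\mathrm{sw}}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

sigma_dot_tau = sum(np.kron(s, s) for s in (sx, sy, sz))
U_sw = 0.5 * (np.eye(4) + sigma_dot_tau)     # swap gate of Lemma 5.1(ii)

UT = expm(-1j * (np.pi / 4) * sigma_dot_tau)
print(np.allclose(UT, np.exp(-1j * np.pi / 4) * U_sw))   # True
\end{verbatim}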
\begin{cor}\label{cor5.3}
The square roots of the swapping gate, $U^{1/2}_{\mathrm{sw}}$, are
\begin{equation}\label{eq5.15}
U^{1/2}_{\mathrm{sw}} = \frac{e^{\pm\pi i/4}}{\sqrt 2}
(\pmb{1} \mp iU_{\mathrm{sw}}).
\end{equation}
\end{cor}
\begin{proof}
From \eqref{eq5.14}, we first obtain
\begin{equation}
U_{\mathrm{sw}} = ie^{-\frac{\pi i}4} U(T).
\end{equation}
Then use $\phi = \pm \pi/8$ in \eqref{eq5.14} to obtain
\begin{equation}
U^{1/2}_{\mathrm{sw}} = (ie^{-\frac{\pi i}4})^{1/2} e^{\pm\pi i/8}
\left[\frac1{\sqrt 2}
(\pmb{1} \mp i U_{\mathrm{sw}})\right]
\end{equation}
and the desired conclusion.
(Note that these two square roots of $U_{\mathrm{sw}}$
reflect the choices of $\sqrt 1=1$ and $\sqrt{-1}=\pm i$
for the square roots of the eigenvalues of $U_{\mathrm{sw}}$.)
\end{proof}
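The two roots in \eqref{eq5.15} can likewise be checked numerically (an
illustrative sketch, assuming NumPy):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U_sw = 0.5 * (np.eye(4) + sum(np.kron(s, s) for s in (sx, sy, sz)))

for sign in (+1, -1):          # the two choices of sign in (5.15)
    R = (np.exp(sign * 1j * np.pi / 4) / np.sqrt(2)
         * (np.eye(4) - sign * 1j * U_sw))
    print(np.allclose(R @ R, U_sw))          # True, True
\end{verbatim}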
Theorem \ref{thm5.2} gives the choice of control pulse $\omega(t)$ (with
$\pmb{\Omega}_1(t)$ and $\pmb{\Omega}_2(t)$ set to 0) that yields the
swapping gate $U_{\mathrm{sw}}$, which by itself does not cause entanglement.
However,
$U^{1/2}_{\mathrm{sw}}$ in Corollary \ref{cor5.3} does cause entanglement; see
Corollary \ref{cor5.5} shortly.
\begin{thm}\label{thm5.4}
Let $\phi,\theta \in [0,2\pi]$ be given. Denote $\pmb{e}(\phi) = \cos\phi\pmb{
e}_x + \sin \phi \pmb{e}_y + 0 \pmb{e}_z$ for the given $\phi$. Let $U_{1,\pmb{
\Omega}_1}(t)$ be the time evolution operator corresponding to the quantum
system
\eqref{eq5.3} and \eqref{eq5.4} for $t\in [0,T]$ where the pulses are chosen
such that
\begin{equation}\label{eq5.16}
\pmb{\Omega}_1(t) = \Omega_1(t) \pmb{e}(\phi),\quad
\pmb{\Omega}_2(t) = 0,\quad
\omega(t) = 0,\qquad t\in [0,T],
\end{equation}
with $\Omega_1(t)$ satisfying
\begin{equation}\label{eq5.17}
\int^T_0 \Omega_1(t) dt = 2\theta,\quad \text{for the given } \theta.
\end{equation}
Then the action of $U_{1,\pmb{\Omega}_1}(T)$ on the first qubit satisfies
\begin{equation}
U_{1,\pmb{\Omega}_1}(T) = U_{\theta,\phi}, \text{ the 1-bit unitary rotation
gate
(\ref{eq2.6}).}
\end{equation}
\end{thm}
\begin{proof}
We have
\begin{align}
U_{\theta,\phi} &= \left[\begin{matrix} \cos\theta&-ie^{-i\phi}\sin\theta\\
-ie^{i\phi}\sin\theta&\cos\theta\end{matrix}\right]\nonumber\\
&= \cos\theta \pmb{1} -ie^{-i\phi} \sin\theta \left(\frac{\sigma_x +
i\sigma_y}2\right) - ie^{i\phi} \sin\theta
\left(\frac{\sigma_x-i\sigma_y}2\right)\nonumber\\
&= \cos\theta \pmb{1} - i \sin \theta \cos \phi \sigma_x - i\sin \theta \sin
\phi \sigma_y\nonumber\\
&= \cos\theta \pmb{1} - i\sin\theta(\cos\phi \sigma_x
+\sin\phi\sigma_y)\nonumber\\
&= \cos\theta \pmb{1} - i\sin \theta \pmb{e}(\phi) \cdot\pmb{\sigma}\nonumber\\
\label{eq5.18}
&= e^{-i\theta \pmb{e}(\phi)\cdot\pmb{\sigma}},
\end{align}
noting that in the above, we have utilized the fact that the $2\times 2$ matrix
\begin{equation}
\pmb{e}(\phi)\cdot \pmb{\sigma} = \left[\begin{matrix}
0&\cos\phi - i\sin\phi\\
\cos \phi+ i\sin \phi&0\end{matrix}\right]
\end{equation}
satisfies $(\pmb{e}(\phi)\cdot\pmb{\sigma})^{2n} = \pmb{1}$ for
$n=0,1,2,\ldots$~.
With the choices of the pulses as given in \eqref{eq5.16}, we see that the
second
qubit remains unchanged during the time evolution of the system.
The Hamiltonian, now, is
\begin{equation}
H_1(t) = \frac\hbar2 \Omega_1(t) \pmb{e}_1(\phi)\cdot\pmb{\sigma}
\end{equation}
and acts only on the first qubit (where the subscript 1 of $\pmb{e}_1(\phi)$
denotes that this is the vector $\pmb{e}(\phi)$ for the first bit). Because
$\Omega_1(t)$ is scalar-valued, we
have
\begin{equation}
H_1(t_1) H_1(t_2) = H_1(t_2) H_1(t_1)\quad \text{for any}\quad t_1,t_2\in
[0,T].
\end{equation}
Thus
\begin{align}
U_{1,\pmb{\Omega}_1}(T)&= e^{-\frac{i}2 \int^T_0 \Omega_1(t) \pmb{e}_1(\phi)
\cdot \pmb{\sigma} \ dt}\nonumber\\
&= e^{\left[-\frac{i}2 \int^T_0\Omega_1(t) dt\right] \pmb{e}_1(\phi)\cdot
\pmb{\sigma}}\nonumber\\
&= e^{-i\theta\pmb{e}_1(\phi)\cdot \pmb{\sigma}},\hspace{1.3in} \text{(by
\eqref{eq5.17})}
\end{align}
using \eqref{eq5.18}. The proof is complete.
\end{proof}
We may define $U_{2,\pmb{\Omega}_2}$ in a similar way as in Theorem
\ref{thm5.4}.
\begin{cor}\label{cor5.5}
The quantum phase gate $Q_\pi$ \eqref{eq2.7} is given by
\begin{equation}
Q_\pi = (-i) U_{1,\pmb{\Omega}^{(2)}_1}
U_{2,\pmb{\Omega}_2}
U^{1/2}_{\mathrm{sw}} U_{1,\pmb{\Omega}^{(1)}_1} U^{1/2}_{\mathrm{sw}},
\end{equation}
where
\begin{equation}\label{eq5.21a}
\left\{\begin{array}{l}
\displaystyle\int \pmb{\Omega}^{(1)}_1 (t)\, dt = -\pi \pmb{e}_{1z},\\
\displaystyle\int \pmb{\Omega}^{(2)}_1 (t)\, dt = \pi \pmb{e}_{1z}/2,\\
\displaystyle\int \pmb{\Omega}_2(t) \,dt = -\pi \pmb{e}_{2z}/2,
\end{array}\right.
\end{equation}
and $\pmb{e}_{1z}, \pmb{e}_{2z}$ denote the $\pmb{e}_z$ vector of,
respectively, the first and the second qubit.
\end{cor}
\begin{rem}
In order to realize this succession of gates, only one of the
$\pmb{\Omega}(t)$ in \eqref{eq5.21a} is nonzero at any given instant $t$, with
the period when $\pmb{\Omega}^{(1)}_1(t)\neq0$ earlier than that when
$\pmb{\Omega}_2(t)\neq0$, and that when $\pmb{\Omega}^{(2)}_1(t)\neq0$
even later. Earliest is the period when $\omega(t)\neq0$ for the first
$U^{1/2}_{\mathrm{sw}}$, and another period when $\omega(t)\neq0$ is
intermediate between those when $\pmb{\Omega}^{(1)}_1(t)\neq0$
and $\pmb{\Omega}_2(t)\neq0$.
\end{rem}
\begin{proof}
Define
\begin{equation}\label{eq5.24.1}
U_{\text{XOR}} \equiv e^{\frac{\pi i}4 \sigma_z} e^{-\frac{\pi i}4 \tau_z}
U^{1/2}_{\mathrm{sw}} e^{i\frac\pi2 \sigma_z} U^{1/2}_{\mathrm{sw}},
\end{equation}
with $U^{1/2}_{\mathrm{sw}} = \frac{e^{-\frac\pi4 i}}{\sqrt 2}
(\pmb{1} + iU_{\mathrm{sw}})$
chosen from \eqref{eq5.15}. Then it is straightforward to check that
\begin{align}\nonumber
U_{\text{XOR}}|00\rangle &= |00\rangle (i), \quad U_{\text{XOR}} |01\rangle =
|01\rangle(i),\\
U_{\text{XOR}}|10\rangle &= |10\rangle (i),\quad U_{\text{XOR}} |11\rangle =
|11\rangle (-i),
\end{align}
so that
\begin{align}
U_{\text{XOR}} &= i(|00\rangle \langle 00| + |01\rangle \langle 01| +
|10\rangle\langle 10| - |11\rangle\langle11|)\nonumber\\
&= iQ_\pi.
\end{align}
\end{proof}
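As an illustrative numerical check of the last computation (not part of the
original proof; it assumes NumPy and SciPy), one may verify
$U_{\text{XOR}} = iQ_\pi$ directly from \eqref{eq5.24.1}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

sigma_dot_tau = sum(np.kron(s, s) for s in (sx, sy, sz))
U_sw = 0.5 * (np.eye(4) + sigma_dot_tau)
sqrt_sw = np.exp(-1j * np.pi / 4) / np.sqrt(2) * (np.eye(4) + 1j * U_sw)

sz1, sz2 = np.kron(sz, I2), np.kron(I2, sz)  # sigma_z and tau_z
U_xor = (expm(1j * np.pi / 4 * sz1) @ expm(-1j * np.pi / 4 * sz2)
         @ sqrt_sw @ expm(1j * np.pi / 2 * sz1) @ sqrt_sw)

Q_pi = np.diag([1, 1, 1, -1]).astype(complex)
print(np.allclose(U_xor, 1j * Q_pi))         # True
\end{verbatim}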
\begin{rem}\label{rem4.2a}
In \cite[eq.~(2)]{LD}, Loss and DiVincenzo write their $U_{\text{XOR}}$
gate as
\begin{equation}
U_{\text{XOR}} = e^{i\frac\pi2 S^z_1}
e^{-i\frac\pi2 S^z_2} U^{1/2}_{\mathrm{sw}}
e^{i\pi
S^z_1} U^{1/2}_{\mathrm{sw}}.
\end{equation}
Thus there is a difference in notation between the above and \eqref{eq5.24.1}:
\begin{equation}
S^z_1 = \sigma_z/2,\quad S^z_2 = \tau_z/2.
\end{equation}
$\square$
\end{rem}
\begin{thm}\label{thm5.6}
The quantum computer made of quantum dots is universal.
\end{thm}
\begin{proof}
This follows easily from \eqref{eq2.8}, Theorem \ref{thm5.4}, Corollary
\ref{cor5.5}, and finally Corollary \ref{cor2.3}.
\end{proof}
Throughout this paper we have been employing the Schr\"odinger
point of view and could therefore afford to identify the ket vectors with
particular numerical column vectors, the bra vectors with the corresponding
row vectors, and we did not distinguish between operators and matrices.
Now, let us give a brief, heuristic exposition
from the point of view of the Heisenberg picture, in which it is important to
discriminate between kets and their columns, bras and their rows, operators
and their matrices.
Dirac's classic text \cite{DiracQMbook} and the recent textbook by Schwinger
\cite{SchwingerQMbook} treat quantum mechanics consistently in the Heisenberg
picture.
For a start we return to (\ref{eq5.3a}) and (\ref{eq5.3b}) where each
$|jk\rangle$ is a joint
eigenket of $\sigma_z$ and $\tau_z$ with eigenvalues $1-2j$ and $1-2k$,
respectively for $j,k=0,1$.
A simultaneous measurement of $\sigma_z$ and $\tau_z$ yields the actual one of
the four pairs of values and tells us which of the four possibilities is the
case.
Now, a measurement of $\sigma_z$, say, at time $t_1$ is not the same as a
measurement of $\sigma_z$ at time $t_2$; one really has to perform two
different experiments (perhaps using the same lab equipment twice).
Therefore, according to the reasoning of Heisenberg and Dirac, just to
mention the main quantum physicists, we should also have two mathematical
symbols for
these two measurements: $\sigma_z(t_1)$ and $\sigma_z(t_2)$.
As a consequence, then, also the eigenkets must refer to the time of
measurement: $|jk,t_1\rangle$ and $|jk,t_2\rangle$.
Accordingly, the eigenvector equations read
\begin{align}\label{eq19}
\setlength{\arraycolsep}{2pt}
\left.\begin{array}{rcl}
\sigma_z(t)|jk,t\rangle&=&|jk,t\rangle(1-2j)\\[1ex]
\tau_z(t)|jk,t\rangle&=&|jk,t\rangle(1-2k)
\end{array}\right\}
\mbox{for all $t$, for $j,k=0,1$.}
\setlength{\arraycolsep}{5pt}
\end{align}
On the other hand, the state of the system, which is typically specified by a
statement of the kind ``at time $t_0$ both spins were up in the $z$
direction'' (formally: state ket $|\Psi\rangle=|00,t_0\rangle$),
does not depend on time by its nature.
So, rather than (\ref{eq5.3b}), we write
\begin{align}
\label{eq5.20}
|\Psi\rangle
= \psi_1(t)|00,t\rangle + \psi_2(t)|01,t\rangle + \psi_3(t)
|10,t\rangle + \psi_4(t)|11,t\rangle\,,
\end{align}
where the \emph{dynamical} time dependences of the kets $|00,t\rangle$, \dots,
$|11,t\rangle$ are compensated for by
the \emph{parametric} time dependence of the probability amplitudes
$\psi_1(t)$, \dots, $\psi_4(t)$, so that the state ket
$|\Psi\rangle$ is constant in time, as it should be.
The probability amplitudes are thus given by
\begin{align} \label{eq5.21}
\psi_1(t)=\langle00,t|\Psi\rangle\,,\quad
\psi_2(t)=\langle01,t|\Psi\rangle\,,\quad
\psi_3(t)=\langle10,t|\Psi\rangle\,,\quad
\psi_4(t)=\langle11,t|\Psi\rangle\,,
\end{align}
and they have the same meaning as before: $|\psi_2(t)|^2$, for example, is
the probability that the first spin is found up and the second down at time
$t$. And, of course, they obey the same Schr\"odinger equation as before,
namely (\ref{eq5.3}) where
$|\psi(t)\rangle$ is just the column
$\left[\psi_1(t)\ \psi_2(t)\ \psi_3(t)\ \psi_4(t)\right]^\mathrm{T}$
and $H(t)$ the $4\times4$ matrix that is meant in (\ref{eq5.4}).
With the Hamilton \emph{operator} (not matrix!)
\begin{align}\label{eq5.22}
H\bigl(\brel\sigma(t),\brel\tau(t),t\bigr)=
\frac\hbar2 [\brel \Omega_1(t) \cdot \brel\sigma(t)
+ \brel \Omega_2(t) \cdot \brel\tau(t)
+ \omega(t) \brel\sigma(t)\cdot \brel\tau(t)]\,,
\end{align}
which has a dynamical time dependence in $\brel\sigma(t)$ and $\brel\tau(t)$
and a
parametric time dependence in $\brel\Omega_j(t)$, $j=1,2$, and $\omega(t)$,
the four rows of this Schr\"odinger equation are just
\begin{align}\label{eq5.23}
i\hbar\frac{\partial}{\partial t}\langle jk,t|\Psi\rangle=
\langle jk,t| H\bigl(\brel\sigma(t),\brel\tau(t),t\bigr)|\Psi\rangle
\qquad\mbox{for $jk=00,01,10,11$}
\end{align}
and the elements of the $4\times4$ matrix of (\ref{eq5.4}) are
\begin{align}\label{eq5.24}
\langle jk,t| H\bigl(\brel\sigma(t),\brel\tau(t),t\bigr)|lm,t\rangle
\qquad\mbox{with $jk=00,01,10,11$ and $lm=00,01,10,11$.}
\end{align}
The Pauli matrices in (\ref{eq5.1a}) appear when we express the components
of $\brel\sigma(t)$ explicitly in terms of the eigenkets and eigenbras of
$\sigma_z(t)$ and $\tau_z(t)$,
\begin{align}\label{eq5.25}
\sigma_x(t)&=\sum_{k=0,1}\bigl(
|0k,t\rangle\langle1k,t|+|1k,t\rangle\langle0k,t|\bigr)
\nonumber\\&=\sum_{k=0,1}
\left[\begin{matrix}|0k,t\rangle\\|1k,t\rangle\end{matrix}\right]
\left[\begin{matrix}0&1\\1&0\end{matrix}\right]
\bigl[\langle0k,t|,\langle1k,t|\bigr]\,,\nonumber\\
\sigma_y(t)&=\sum_{k=0,1}
\left[\begin{matrix}|0k,t\rangle\\|1k,t\rangle\end{matrix}\right]
\left[\begin{matrix}0&-i\\i&0\end{matrix}\right]
\bigl[\langle0k,t|,\langle1k,t|\bigr]\,,\nonumber\\
\sigma_z(t)&=\sum_{k=0,1}
\left[\begin{matrix}|0k,t\rangle\\|1k,t\rangle\end{matrix}\right]
\left[\begin{matrix}1&0\\0&-1\end{matrix}\right]
\bigl[\langle0k,t|,\langle1k,t|\bigr]\,,
\end{align}
and likewise for $\brel\tau(t)$, compactly written as
\begin{align}\label{eq5.26}
\brel\tau(t)=\sum_{j=0,1}
\left[\begin{matrix}|j0,t\rangle\\|j1,t\rangle\end{matrix}\right]
\left(\left[\begin{matrix}0&1\\1&0\end{matrix}\right]\brel{e}_x
+\left[\begin{matrix}0&-i\\i&0\end{matrix}\right]\brel{e}_y
+\left[\begin{matrix}1&0\\0&-1\end{matrix}\right]\brel{e}_z\right)
\bigl[\langle j0,t|,\langle j1,t|\bigr]\,.
\end{align}
Note how numerical $2\times2$ matrices are sandwiched by 2-component columns
of kets and 2-component rows of bras to form linear combinations of products
of the dyadic ket-times-bra type.
The Schr\"odinger equation (\ref{eq5.23}) is the equation of motion of the
bras $\langle jk,t|$ and those of the kets $|jk,t\rangle$ is the adjoint
system.
The equation of motion for the operator $\brel\sigma(t)$
follows from them,
\begin{align}\label{eq5.27}
i\hbar\frac{d}{dt}\brel\sigma(t)
=\brel\sigma(t)H_t-H_t\brel\sigma(t)=\bigl[\brel\sigma(t),H_t\bigr]\,,
\end{align}
and likewise for $\brel\tau(t)$,
\begin{align}\label{eq5.28}
i\hbar\frac{d}{dt}\brel\tau(t)
=\bigl[\brel\tau(t),H_t\bigr]\,,
\end{align}
where $H_t$ abbreviates $H\bigl(\brel\sigma(t),\brel\tau(t),t\bigr)$,
the Hamilton operator of (\ref{eq5.22}).
These are particular examples of the more general \emph{Heisenberg equation
of motion}, which states that any operator -- here: any function $O_t$ of
$\brel\sigma(t)$, $\brel\tau(t)$, and $t$ itself -- obeys
\begin{align}\label{eq5.29}
\frac{d}{dt}O_t= \frac{\partial}{\partial t}O_t
+\frac{1}{i\hbar}\bigl[O_t,H_t\bigr]\,,
\end{align}
where $\frac{d}{dt}$ is the total time derivative whereas
$\frac{\partial}{\partial t}$ differentiates only the parametric time
dependence, so that the commutator term accounts for the dynamical change in
time of $O_t$.
Note that (\ref{eq5.29}) has exactly the same structure as the Hamilton
equation of motion in classical mechanics.
A particular $O_t$ is the unitary evolution operator
$U_{t_0,t}\equiv U\bigl(t_0;\brel\sigma(t),\brel\tau(t),t\bigr)$
that transforms a bra at time $t_0$ into the corresponding bra at time $t$
(earlier or later),
\begin{align}\label{eq5.30}
\langle\dots,t|=\langle\dots,t_0|U_{t_0,t}\,,
\end{align}
where the ellipsis stands for any corresponding set of quantum numbers (such
as $00$, \dots $11$).
$U_{t_0,t}$ has a parametric dependence on both $t_0$ and $t$,
in addition to the dynamical $t$ dependence that originates in
$\brel\sigma(t)$ and $\brel\tau(t)$.
The group property $U_{t_0,t_1}U_{t_1,t_2}=U_{t_0,t_2}$ follows from the
definition and so does the unitarity,
and thus
\begin{align}\label{eq5.31}
{U_{t_0,t}}^\dagger={U_{t_0,t}}^{-1}=U_{t,t_0}
\end{align}
is implied immediately.
Accordingly, we have
\begin{align}\label{eq5.32}
\brel\sigma(t)=U_{t,t_0}\brel\sigma(t_0)U_{t_0,t}\,,\qquad
\brel\tau(t)=U_{t,t_0}\brel\tau(t_0)U_{t_0,t}\,,
\end{align}
and therefore
$U\bigl(t_0;\brel\sigma(t),\brel\tau(t),t\bigr)=
U\bigl(t_0;\brel\sigma(t_0),\brel\tau(t_0),t\bigr)$, that is to say: it
doesn't matter if we regard $U_{t,t_0}$ as a function of the spin operators
at the initial or the final time.
Upon combining (\ref{eq5.30}) with the Schr\"odinger equation
(\ref{eq5.23}), we find first
\begin{align}\label{eq5.33}
i\hbar\frac{d}{dt}U_{t_0,t}=U_{t_0,t}H_t\,,
\end{align}
and then with Heisenberg's general equation of motion (\ref{eq5.29}),
\begin{align}\label{eq5.34}
i\hbar\frac{\partial}{\partial t}U_{t_0,t}=H_tU_{t_0,t}\,.
\end{align}
To begin with, the implicit $\brel\sigma$ and $\brel\tau$ are all at time $t$,
but since the derivative in (\ref{eq5.34}) refers to the parametric
dependence on $t$, not to the dynamical one, we can just as well take all
$\brel\sigma$ and $\brel\tau$ at time $t_0$, so that the $t$ dependence is
then only parametric.
With the understanding, then, that we are only dealing with parametric $t$
dependences, we can combine (\ref{eq5.34}) with the initial condition
$U_{t_0,t_0}=\pmb{1}$ into the integral equation
\begin{align}\label{eq5.35}
U_{t_0,t}=\pmb{1}-\frac{i}{\hbar}\int_{t_0}^tdt_1H_{t_1}U_{t_0,t_1}\,,
\end{align}
which can be iterated to produce a formal series expansion, a variant of both
the Born series of scattering theory and the Dyson series of field theory,
of $U_{t_0,t}$ in powers of the (time-integrated) Hamilton operator.
It begins with the terms
\begin{align}\label{eq5.36}
U_{t_0,t}=\pmb{1}-\frac{i}{\hbar}\int_{t_0}^tdt_1H_{t_1}
-\frac{1}{\hbar^2}\int_{t_0}^tdt_1\int_{t_0}^{t_1}dt_2H_{t_1}H_{t_2}
+\cdots\,.
\end{align}
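For a concrete feel for the iteration of (\ref{eq5.35}), the following minimal numerical
sketch (Python, with $\hbar=1$ by default) approximates $U_{t_0,t}$ by a product of
short-time exponentials, which is what the series (\ref{eq5.36}) resums in the parametric
picture of (\ref{eq5.34}); the matrix-valued function \texttt{H} is a user-supplied
stand-in for the Hamilton operator and is not fixed by the text.
\begin{verbatim}
import numpy as np

def evolution_operator(H, t0, t, steps=2000, hbar=1.0):
    # Approximate U_{t0,t} solving i*hbar dU/dt = H(t) U with U(t0) = 1, cf. (5.34)/(5.35).
    # H is a callable returning a Hermitian matrix; this is an illustrative sketch only.
    dim = H(t0).shape[0]
    U = np.eye(dim, dtype=complex)
    ts = np.linspace(t0, t, steps + 1)
    for a, b in zip(ts[:-1], ts[1:]):
        dt = b - a
        w, V = np.linalg.eigh(H(0.5 * (a + b)))       # exact exponential of the midpoint slice
        U = (V @ np.diag(np.exp(-1j * w * dt / hbar)) @ V.conj().T) @ U
    return U                                          # later time slices act from the left
\end{verbatim}
Expanding each slice exponential to low order and multiplying out reproduces, slice by
slice, the first terms of (\ref{eq5.36}); unitarity and the group property
(\ref{eq5.31}) hold for the product by construction, up to round-off.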
For the Hamilton operator (\ref{eq5.22}), the Heisenberg equations of motion
(\ref{eq5.27}) and (\ref{eq5.28}) read
\begin{equation}\label{eq5.37}
\left\{\begin{array}{l}
\dfrac{d}{dt} \brel\sigma(t) =
\brel\Omega_1(t) \times\brel\sigma(t) - \omega(t) \brel\sigma(t) \times
\brel\tau(t)\,,\\
\dfrac{d}{dt}\brel\tau(t) =
\brel\Omega_2(t) \times \brel\tau(t) + \omega(t) \brel\sigma(t)
\times\brel\tau(t)\,.\end{array}\right.
\end{equation}
\begin{exm}\label{exm5.1}
Let us give the solution of \eqref{eq5.37} for the case when
$\brel\Omega_1(t)=0$,
$\brel\Omega_2(t)=0$:
\begin{equation}\label{eq5.38}
\left\{\begin{array}{l}
\brel\sigma(t) =\dfrac{1}{2}\bigl[\brel\sigma(t_0)+\brel\tau(t_0)\bigr]
+\dfrac{1}{2}\bigl[\brel\sigma(t_0)-\brel\tau(t_0)\bigr]\cos\varphi
-\dfrac{1}{2}\brel\sigma(t_0)\times\brel\tau(t_0)\sin\varphi
\,,\\
\noalign{\smallskip}
\brel\tau(t) =\dfrac{1}{2}\bigl[\brel\sigma(t_0)+\brel\tau(t_0)\bigr]
-\dfrac{1}{2}\bigl[\brel\sigma(t_0)-\brel\tau(t_0)\bigr]\cos\varphi
+\dfrac{1}{2}\brel\sigma(t_0)\times\brel\tau(t_0)\sin\varphi,
\end{array}\right.
\end{equation}
where
\begin{align}\label{eq5.39}
\varphi=2\int_{t_0}^{t}dt'\omega(t')\,.
\end{align}
One can verify by inspection that (\ref{eq5.38}) solves (\ref{eq5.37}).
$\square$
\end{exm}
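Beyond inspection, the claim can also be checked numerically. The following sketch
(our illustration, based on the assumption that $\brel\sigma$ and $\brel\tau$ may be
represented as Pauli vectors acting on the first and second spin, respectively) builds
$\brel\sigma(t)$ and $\brel\tau(t)$ from (\ref{eq5.38}) with a constant $\omega$, so that
$\varphi=2\omega(t-t_0)$ by (\ref{eq5.39}), and confirms the first equation of
(\ref{eq5.37}) to machine precision.
\begin{verbatim}
import numpy as np

# Assumed concrete representation: sigma_a = pauli_a (x) 1,  tau_a = 1 (x) pauli_a.
pauli = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
I2 = np.eye(2)
sig0 = [np.kron(p, I2) for p in pauli]
tau0 = [np.kron(I2, p) for p in pauli]

def cross(u, v):
    # componentwise operator cross product: (u x v)_a = eps_{abc} u_b v_c
    return [u[1] @ v[2] - u[2] @ v[1],
            u[2] @ v[0] - u[0] @ v[2],
            u[0] @ v[1] - u[1] @ v[0]]

omega, t0, t = 0.7, 0.0, 1.3                 # hypothetical constant coupling and times
phi = 2 * omega * (t - t0)                   # eq. (5.39)
c0 = cross(sig0, tau0)
sig_t = [0.5 * (sig0[a] + tau0[a]) + 0.5 * (sig0[a] - tau0[a]) * np.cos(phi)
         - 0.5 * c0[a] * np.sin(phi) for a in range(3)]      # eq. (5.38)
tau_t = [0.5 * (sig0[a] + tau0[a]) - 0.5 * (sig0[a] - tau0[a]) * np.cos(phi)
         + 0.5 * c0[a] * np.sin(phi) for a in range(3)]
dsig = [2 * omega * (-0.5 * (sig0[a] - tau0[a]) * np.sin(phi)
                     - 0.5 * c0[a] * np.cos(phi)) for a in range(3)]
st_cross = cross(sig_t, tau_t)
residual = [dsig[a] + omega * st_cross[a] for a in range(3)]
print(max(np.abs(res).max() for res in residual))            # of the order of 1e-16
\end{verbatim}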
Summarizing the above discussions, we have established the following.
\begin{thm}\label{thm5.7}
Let \eqref{eq5.3} and \eqref{eq5.4} be satisfied. Then we have
\begin{equation}
\left\{\begin{array}{l}
\dfrac{d}{dt} \pmb{\sigma}(t) = \dfrac1{i\hbar} [\pmb{\sigma}, H] (t) =
\pmb{\Omega}_1(t) \times\pmb{\sigma}(t) - \omega(t) \pmb{\sigma}(t) \times
\pmb{\tau}(t),\\
\noalign{\smallskip}
\dfrac{d}{dt}\pmb{\tau}(t) = \dfrac1{i\hbar} [\pmb{\tau}, H](t) =
\pmb{\Omega}_2(t) \times \pmb{\tau}(t) + \omega(t) \pmb{\sigma}(t)
\times\pmb{\tau}(t)
\end{array}\right.
\end{equation}
and
\begin{equation}
\left\{\begin{array}{l}
\pmb{\sigma}(t) = U_{t,t_0}\pmb{\sigma}(t_0) U_{t_0,t}\,,\\
\pmb{\tau}(t) = U_{t,t_0} \pmb{\tau}(t_0) U_{t_0,t}
\end{array}\right.
\qquad\text{for any}\quad t_0,t\ge 0.
\end{equation}
$\square$
\end{thm}
\section{Conclusion}\label{sec6}
New or improved versions of the elementary devices cited in this paper are
constantly emerging. For example, in the design of cavity QED,
instead of using a particle beam to shoot atoms through an optical
cavity as illustrated in
Fig.~3.2, there have been proposals to enclose ion traps inside a cavity.
Such proposals thus combine features of both ion traps and cavity QED as
studied in this paper.
Even though it is not possible for us to describe {\em all\/}
major types of contemporary quantum computing devices or proposals, we hope
that
our mathematical analysis and derivations herein have provided reasonably
satisfactory rigor and basic principles that will be useful for future
interdisciplinary research in quantum computation.
From the control-theoretic point of view, the best theoretical model for laser
and quantum control seems to be
the Schr\"odinger equation offered by (\ref{eq5.3}) with the Hamiltonian given
by (\ref{eq5.2}), wherein the $\pmb{B}_j(t)$'s and $J_{jk}(t)$'s
are the {\em bilinear
controllers\/}.
Equivalently, the Heisenberg equations of motion as presented in Theorem
\ref{thm5.7} (which deal with only two quantum dots) form a {\em
bi-trilinear control system\/} with $\pmb{\Omega}_1(t)$ and
$\pmb{\Omega}_2(t)$
therein as bilinear controllers, while $\omega(t)$ is a {\em trilinear\/}
controller.
Such control systems
(as presented in Theorem \ref{thm5.7}, e.g.) look nearly identical to the
classical rigid body rotation dynamics and, thus, a fair portion of the
existing
control results appear to be readily applicable. In our discussion in Section
\ref{sec5}, we have derived certain formulas for the shaping of laser pulses
(see (\ref{eq5.11}), (\ref{eq5.16}), (\ref{eq5.17}), \eqref{eq5.21a}, etc.)
which can achieve the desired elementary 1-bit and 2-bit quantum
gates. However, we wish to emphasize that such laser pulse shaping is {\em
underutilized\/} as far as the control effects are concerned. To obtain more
optimal control effects, {\em feedback strategies\/} need to be evaluated.
Quantum feedback control is currently an important topic
in quantum science and technology.
For the models of 2-level atoms, cavity QED and ion traps,
our discussions here have shown that the admissible class
of controllers is quite limited, the reason
being that the control or excitation laser must operate at specified
frequencies in order to stay within the two or three levels,
and to avoid the excitation of unmodeled dynamics.
We can exercise control mainly through the duration of the
activation of laser pulses, as \eqref{eq3.13a} has shown.
There are many interesting mathematical problems in the study of laser-driven
control systems. Quantum computing provides an abundant source of such
problems for mathematicians and control theorists to explore.
\end{document} |
\begin{document}
\begin{center}\begin{large}
New approach to Minkowski fractional inequalities using generalized k-fractional integral operator
\end{large}\end{center}
\begin{center}
Vaijanath L. Chinchane\\
Department of Mathematics,\\
Deogiri Institute of Engineering and Management\\
Studies Aurangabad-431005, INDIA\\
[email protected]
\end{center}
\begin{abstract}
In this paper, we obtain new results related to the Minkowski fractional integral inequality using the generalized k-fractional integral operator, which is defined in terms of the Gauss hypergeometric function.
\end{abstract}
\textbf{Keywords:} Minkowski fractional integral inequality, generalized k-fractional integral operator, Gauss hypergeometric function.\\
\textbf{Mathematics Subject Classification:} 26D10, 26A33, 05A30.\\
\section{Introduction }
\paragraph{} In the last decades many researchers have worked on fractional integral inequalities using the Riemann-Liouville, generalized Riemann-Liouville, Hadamard and Saigo operators, see \cite{C1,C2,C3,D1,D2,D3,D4}. W. Yang \cite{YA} proved Chebyshev- and Gr\"{u}ss-type integral inequalities for the Saigo fractional integral operator. S. Mubeen and S. Iqbal \cite{MU} proved Gr\"{u}ss-type integral inequalities for the generalized k-fractional integral. In \cite{BA1,C5,KI2,YI} the authors studied some fractional integral inequalities using the generalized k-fractional integral operator (in terms of the Gauss hypergeometric function). Recently many researchers have developed fractional integral inequalities associated with hypergeometric functions, see \cite{SH1,KI2,P1,R1,S1,SA,V1,W1,YI}. Also, in \cite{C2,D1} the authors established reverse Minkowski fractional integral inequalities using the Hadamard and Riemann-Liouville integral operators, respectively.
\paragraph{}In the literature only a few results have been obtained on fractional integral inequalities using the Saigo fractional integral operator, see \cite{C4,K4,P1,P2,YI}. Motivated by \cite{C1,C5,D1,KI2}, our purpose in this paper is to establish some new results using the generalized k-fractional integral in terms of the Gauss hypergeometric function. The paper is organized as follows. In Section 2, we recall basic definitions and a proposition related to the generalized k-fractional integral. In Section 3, we give results on the reverse Minkowski fractional integral inequality using the generalized k-fractional integral. In Section 4, we give some other inequalities using the generalized k-fractional integral.
\section{Preliminaries}
\paragraph{} In this section, we give some necessary definitions which will be used later.
\begin{definition} \cite{KI2,YI}
A function $f(x)$, $x>0$, is said to be in the space $L_{p,k}[0,\infty)$ if
\begin{equation}
L_{p,k}[0,\infty)=\left\{f: \|f\|_{L_{p,k}[0,\infty)}=\left(\int_{0}^{\infty}|f(x)|^{p}x^{k}dx\right)^{\frac{1}{p}} < \infty,\ 1 \leq p < \infty,\ k \geq 0\right\}.
\end{equation}
\end{definition}
\begin{definition} \cite{KI2,SAO,YI}
Let $f \in L_{1,k}[0,\infty)$. The generalized Riemann-Liouville fractional integral $I^{\alpha,k}f(x)$ of order $\alpha>0$, $k \geq 0$, is defined by
\begin{equation}
I^{\alpha,k}f(x)= \frac{(k+1)^{1-\alpha}}{\Gamma (\alpha)}\int_{0}^{x}(x^{k+1}-t^{k+1})^{\alpha-1}t^{k} f(t)dt.
\end{equation}
\end{definition}
\begin{definition} \cite{KI2,YI}
Let $k\geq 0$, $\alpha>0$, $\mu >-1$ and $\beta, \eta \in \mathbb{R}$. The generalized k-fractional integral $I^{\alpha,\beta,\eta,\mu}_{x,k}$ (in terms of the Gauss hypergeometric function) of order $\alpha$ of a real-valued continuous function $f(t)$ is defined by
\begin{equation}
\begin{split}
I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]&
= \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)d\tau.
\end{split}
\end{equation}
\end{definition}
Here the function $_{2}F_{1}(\cdot)$ on the right-hand side of (2.3) is the Gauss hypergeometric function defined by
\begin{equation}
_{2}F_{1} (a, b; c; x)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}} \frac{x^{n}}{n!},
\end{equation}
and $(a)_{n}$ is the Pochhammer symbol\\
$$(a)_{n}=a(a+1)\cdots(a+n-1)=\frac{\Gamma(a+n)}{\Gamma(a)}, \qquad (a)_{0}=1.$$
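As a computational illustration (not needed for the proofs below), the operator (2.3) can be evaluated by direct numerical quadrature. The following Python sketch assumes that SciPy is available and that the parameters satisfy the restrictions imposed in the theorems below; the weak singularity of the kernel at $\tau=x$ (when $\alpha<1$) is left to the quadrature routine.
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def k_fractional_integral(f, x, alpha, beta, eta, mu, k):
    # Numerical sketch of the generalized k-fractional integral (2.3).
    c = (k + 1) ** (mu + beta + 1) * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / gamma(alpha)
    def integrand(tau):
        return (tau ** ((k + 1) * mu)
                * (x ** (k + 1) - tau ** (k + 1)) ** (alpha - 1)
                * hyp2f1(alpha + beta + mu, -eta, alpha, 1 - (tau / x) ** (k + 1))
                * tau ** k * f(tau))
    value, _ = quad(integrand, 0.0, x, limit=200)
    return c * value

# Hypothetical example: f(t) = t on (0, 1) with admissible parameter values.
print(k_fractional_integral(lambda t: t, 1.0, 1.5, 0.2, -0.3, 0.5, 1.0))
\end{verbatim}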
Consider the function
\begin{equation}
\begin{split}
F(x,\tau)&= \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\tau^{(k+1)\mu}\\
&(x^{k+1}-\tau^{k+1})^{\alpha-1} \times _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\\
&=\sum_{n=0}^{\infty}\frac{(\alpha+\beta+\mu)_{n}(-\eta)_{n}}{\Gamma(\alpha+n)n!}x^{(k+1)(-\alpha-\beta-2\mu-n)}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1+n}(k+1)^{\mu+\beta+1}\\
&=\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}(k+1)^{\mu+\beta+1}}{x^{(k+1)(\alpha+\beta+2\mu)}\Gamma(\alpha)}+\\
&\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(-\eta)}{x^{(k+1)(\alpha+\beta+2\mu+1)}\Gamma(\alpha+1)}+\\
&\frac{\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha+1}(k+1)^{\mu+\beta+1}(\alpha+\beta+\mu)(\alpha+\beta+\mu+1)(-\eta)(-\eta+1)}{x^{(k+1)(\alpha+\beta+2\mu+2)}\Gamma(\alpha+2)\,2!}+\cdots
\end{split}
\end{equation}
It is clear that $F(x,\tau)$ is positive for all $\tau \in (0, x)$, $x>0$, since each term in (2.5) is positive.
\section{Reverse Minkowski fractional integral inequality }
\paragraph{}In this section, we establish reverse Minkowski fractional integral inequality using generalized k-fractional integral operator (in terms of the Gauss hypergeometric function).
\begin{theorem} Let $p\geq1$ and let $f$, $g$ be two positive functions on $[0, \infty)$ such that, for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$, then we have
\begin{equation}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\leq \frac{1+M(m+2)}{(m+1)(M+1)}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}},
\end{equation}
for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem}
\textbf{Proof}: Using the condition $\frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$, $x>0$, we have $(M+1)f(\tau)\leq M(f+g)(\tau)$, and hence
\begin{equation}
(M+1)^{p}f^{p}(\tau)\leq M^{p}(f+g)^{p}(\tau).
\end{equation}
Multiplying both sides of (3.2) by $F(x,\tau)\tau^{k}$ and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we get
\begin{equation}
\begin{split}
&(M+1)^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f^{p}(\tau)d\tau\\
&\leq M^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{p}(\tau)d\tau,
\end{split}
\end{equation}
\noindent which is equivalent to
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \leq \frac{M^{p}}{(M+1)^{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right],
\end{equation}
\noindent hence, we can write
\begin{equation}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{1}{p}} \leq \frac{M}{(M+1)} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}.
\end{equation}
On the other hand, using the condition $m\leq \frac{f(\tau)}{g(\tau)}$, we obtain
\begin{equation}
(1+\frac{1}{m})g(\tau)\leq \frac{1}{m}(f(\tau)+g(\tau)),
\end{equation}
therefore,
\begin{equation}
(1+\frac{1}{m})^{p}g^{p}(\tau)\leq(\frac{1}{m})^{p}(f(\tau)+g(\tau))^{p}.
\end{equation}
Now, multiplying both sides of (3.7) by $F(x,\tau)\tau^{k}$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we have
\begin{equation}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}} \leq \frac{1}{(m+1)} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]\right]^{\frac{1}{p}}.
\end{equation}
The inequality (3.1) follows by adding the inequalities (3.5) and (3.8).
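As a sanity check, inequality (3.1) can also be tested on concrete data. The short script below is an illustrative sketch only: the functions $f$, $g$, the bounds $m$, $M$ and the parameter values are hypothetical choices satisfying the hypotheses of Theorem 3.1, and the operator is evaluated by the same quadrature as in the sketch of Section 2 (inlined here to keep the script self-contained).
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

alpha, beta, eta, mu, k, p, x = 1.5, 0.2, -0.3, 0.5, 1.0, 2.0, 1.0

def I_op(h):
    # compact quadrature version of the generalized k-fractional integral (2.3)
    c = (k + 1) ** (mu + beta + 1) * x ** ((k + 1) * (-alpha - beta - 2 * mu)) / gamma(alpha)
    kern = lambda t: (t ** ((k + 1) * mu) * (x ** (k + 1) - t ** (k + 1)) ** (alpha - 1)
                      * hyp2f1(alpha + beta + mu, -eta, alpha, 1 - (t / x) ** (k + 1)) * t ** k)
    return c * quad(lambda t: kern(t) * h(t), 0.0, x, limit=200)[0]

f = lambda t: 2.0 + t          # hypothetical data: f/g decreases from 2 to 3/2 on (0, 1]
g = lambda t: 1.0 + t
m, M = 1.5, 2.0
lhs = I_op(lambda t: f(t) ** p) ** (1 / p) + I_op(lambda t: g(t) ** p) ** (1 / p)
rhs = (1 + M * (m + 2)) / ((m + 1) * (M + 1)) * I_op(lambda t: (f(t) + g(t)) ** p) ** (1 / p)
print(lhs, "<=", rhs, lhs <= rhs)
\end{verbatim}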
\paragraph{}Our second result is as follows.
\begin{theorem} Let $p\geq1$ and let $f$, $g$ be two positive functions on $[0, \infty)$, such that for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in (0,x)$ then we have
\begin{equation}
\begin{split}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{2}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)] \right]^{\frac{2}{p}}\geq &(\frac{(M+1)(m+1)}{M}-2)\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)] \right]^{\frac{1}{p}}\times\\
&\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)] \right]^{\frac{1}{p}},
\end{split}
\end{equation}
for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem}
\textbf{Proof}: Multiplying the inequalities (3.5) and (3.8), we obtain
\begin{equation}
\frac{(M+1)(m+1)}{M}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}\times \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}\leq \left([I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]]^{\frac{1}{p}}\right)^{2}.
\end{equation}
Applying the Minkowski inequality to the right-hand side of (3.10), we have
\begin{equation}
(\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{1}{p}})^{2}\leq (\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}}+\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}})^{2},
\end{equation}
which implies that
\begin{equation}
\begin{split}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[(f(x)+g(x))^{p}]\right]^{\frac{2}{p}}\leq & \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{2}{p}}+
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{2}{p}}\\
&+2\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{p}(x)]\right]^{\frac{1}{p}}.
\end{split}
\end{equation}
Hence, from (3.10) and (3.12), we obtain (3.9).
Theorem 3.2 is thus proved.
\section{ Other fractional integral inequalities related to Minkowski inequality}
\paragraph{}In this section, we establish some new integral inequalities related to Minkowski inequality using generalized k-fractional integral operator (in terms of the Gauss hypergeometric function).
\begin{theorem} Let $p>1$, $\frac{1}{p}+\frac{1}{q}=1$, and let $f$, $g$ be two positive functions on $[0, \infty)$ such that, for all $x>0$, $I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]<\infty$ and $I^{\alpha,\beta,\eta,\mu}_{x,k}[g(x)]<\infty$. If $0<m\leq \frac{f(\tau)}{g(\tau)}\leq M < \infty$, $\tau \in [0,x]$, then we have
\begin{equation}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)]\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}[g(x)]\right]^{\frac{1}{q}}
\leq (\frac{M}{m})^{\frac{1}{pq}}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}}]\right],
\end{equation}
for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem}
\textbf{Proof:-} Since $\frac{f(\tau)}{g(\tau)}\leq M$, $\tau \in[0,x]$, $x> 0$, we have\\
\begin{equation}
[g(\tau)]^{\frac{1}{q}}\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}},
\end{equation}
and also,
\begin{equation}
\begin{split}
[f(\tau)]^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}&\geq M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}}[f(\tau)]^{\frac{1}{p}}\\
&= M^{\frac{-1}{q}}[f(\tau)]^{\frac{1}{q}+\frac{1}{p}}\\
&= M^{\frac{-1}{q}}f(\tau).
\end{split}
\end{equation}
Multiplying both sides of (4.3) by $F(x,\tau)\tau^{k}$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we have
\begin{equation}
\begin{split}
&\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)^{\frac{1}{p}}g(\tau)^{\frac{1}{q}}d\tau \\
&\geq M^{\frac{-1}{q}}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)d\tau,
\end{split}
\end{equation}
which implies that
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right] \geq M^{\frac{-1}{q}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}f(x)\right].
\end{equation}
Consequently,
\begin{equation}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}f(x)\right]^{\frac{1}{p}} \leq M^{\frac{1}{pq}} \left(I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right]\right)^{\frac{1}{p}},
\end{equation}
On the other hand, since $m g(\tau)\leq f(\tau)$, $\tau \in[0,x)$, $x>0$, we have
\begin{equation}
[f(\tau)]^{\frac{1}{p}}\geq m^{\frac{1}{p}}[g(\tau)]^{\frac{1}{p}},
\end{equation}
multiplying equation (4.7) by $[g(\tau)]^{\frac{1}{q}}$, we have
\begin{equation}
[f(\tau)]^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}\geq m^{\frac{1}{p}}[g(\tau)]^{\frac{1}{q}}[g(\tau)]^{\frac{1}{p}}= m^{\frac{1}{p}}[g(\tau)].
\end{equation}
Multiplying both sides of (4.8) by $F(x,\tau)\tau^{k}$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we have
\begin{equation}
\begin{split}
&\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f(\tau)^{\frac{1}{p}}g(\tau)^{\frac{1}{q}}d\tau \\
&\geq m^{\frac{1}{p}}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g(\tau)d\tau,
\end{split}
\end{equation}
which implies that
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right] \geq m^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g(x)\right].
\end{equation}
Hence, we can write
\begin{equation}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}g(x)\right]^{\frac{1}{q}} \leq m^{\frac{-1}{pq}} \left(I^{\alpha,\beta,\eta,\mu}_{x,k}\left[[f(x)]^{\frac{1}{p}}[g(x)]^{\frac{1}{q}} \right]\right)^{\frac{1}{q}}.
\end{equation}
Multiplying (4.6) and (4.11), we get the result (4.1).
\begin{theorem} Let $f$ and $g$ be two positive functions on $[0, \infty)$, such that, for all $x>0$,\\ $I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]<\infty$ and
$I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]<\infty$. If $0<m\leq \frac{f(\tau)^{p}}{g(\tau)^{q}}\leq M < \infty$, $\tau \in [0,x]$, then we have
\begin{equation*}
\left[I^{\alpha,\beta,\eta,\mu}_{x,k}f^{p}(x)\right]^{\frac{1}{p}} \left[I^{\alpha,\beta,\eta,\mu}_{x,k}g^{q}(x)\right]^{\frac{1}{q}}\leq (\frac{M}{m})^{\frac{1}{pq}}\left[I^{\alpha,\beta,\eta,\mu}_{x,k}(f(x)g(x))\right],
\end{equation*}
where $p>1$, $\frac{1}{p}+\frac{1}{q}=1$, for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem}
\textbf{Proof:-}
Replacing $f(\tau)$ and $g(\tau)$ by $f(\tau)^{p}$ and $g(\tau)^{q}$, $\tau \in [0,x]$, $x>0$, in Theorem 4.1, we obtain the required inequality.
\paragraph{} Now we present a fractional integral inequality related to the Minkowski inequality as follows.
\begin{theorem} Let $p>1$, $\frac{1}{p}+\frac{1}{q}=1$, and let $f$ and $g$ be two positive integrable functions on $[0, \infty)$ such that $0<m<\frac{f(\tau)}{g(\tau)}<M$, $\tau \in (0,x)$, $x>0$. Then we have
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}\{fg\}(x)\leq \frac{2^{p-1}M^{p}}{p(M+1)^{p}}\left(I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}+g^{p}](x)\right)+\frac{2^{q-1}}{q(m+1)^{q}}\left(I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{q}+g^{q}](x)\right),
\end{equation}
for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem}
\textbf{Proof:-} Since $\frac{f(\tau)}{g(\tau)}<M$, $\tau \in (0,x)$, $x>0$, we have
\begin{equation}
(M+1)f(\tau)\leq M(f+g)(\tau).
\end{equation}
Taking the $p$th power on both sides, multiplying the resulting inequality by $F(x,\tau)\tau^{k}$, and integrating with respect to $\tau$ from $0$ to $x$, we obtain
\begin{equation}
\begin{split}
&(M+1)^{p}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} f^{p}(\tau)d\tau\\
&\leq M^{p} \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{p}(\tau)d\tau,
\end{split}
\end{equation}
therefore,
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]\leq \frac{M^{p}}{(M+1)^{p}}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)],
\end{equation}
On the other hand, since $0<m<\frac{f(\tau)}{g(\tau)}$, $\tau \in (0,x)$, $x>0$, we can write
\begin{equation}
(m+1)g(\tau)\leq (f+g)(\tau),
\end{equation}
therefore, taking the $q$th power, multiplying by $F(x,\tau)\tau^{k}$ and integrating with respect to $\tau$ from $0$ to $x$,
\begin{equation}
\begin{split}
&(m+1)^{q}\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g^{q}(\tau)d\tau\\
&\leq \frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} (f+g)^{q}(\tau)d\tau,
\end{split}
\end{equation}
consequently, we have
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)]\leq \frac{1}{(m+1)^{q}}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)].
\end{equation}
Now, using Young's inequality
\begin{equation}
[f(\tau)g(\tau)]\leq \frac{f^{p}(\tau)}{p}+\frac{g^{q}(\tau)}{q}.
\end{equation}
Multiplying both sides of (4.19) by $F(x,\tau)\tau^{k}$, which is positive for $\tau \in(0,x)$, $x>0$, and then integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we get
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x))]\leq \frac{1}{p}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{p}(x)]+\frac{1}{q}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{q}(x)],
\end{equation}
from equation (4.15), (4.18) and (4.20) we get
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[f(x)g(x))]\leq \frac{M^{p}}{p(M+1)^{p}}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)]+\frac{1}{q(m+1)^{q}}\,I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)],
\end{equation}
now using the inequality $(a+b)^{r}\leq 2^{r-1}(a^{r}+b^{r}), r>1, a,b \geq 0,$ we have
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{p}(x)] \leq 2^{p-1}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f^{p}+g^{p})(x)],
\end{equation}
and
\begin{equation}
I^{\alpha,\beta,\eta,\mu}_{x,k}[(f+g)^{q}(x)] \leq 2^{q-1}I^{\alpha,\beta,\eta,\mu}_{x,k}[(f^{q}+g^{q})(x)].
\end{equation}
Substituting (4.22) and (4.23) into (4.21) we get the required inequality (4.12). This completes the proof.
\begin{theorem}
Let $\gamma,\delta>0$ and let $f$, $g$ be two positive functions on $[0, \infty)$, such that $f$ is non-decreasing and $g$ is non-increasing. Then
\begin{equation}
\begin{split}
I^{\alpha,\beta,\eta,\mu}_{x,k}f^{\gamma}(x) g^{\delta}(x)&\leq (k+1)^{-\mu-\beta}x^{(k+1)(\mu+\beta)}\frac{\Gamma(1-\beta)\Gamma(1+\mu+\eta+1)}{\Gamma(1-\beta+\eta)\Gamma(\mu+1)} \\
&\times I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)],
\end{split}\end{equation}
for all $k \geq 0,$ $\alpha > max\{0,-\beta-\mu\}$, $\beta < 1,$ $\mu >-1,$ $\beta -1< \eta <0.$
\end{theorem}
\textbf{Proof:-} Let $\tau,\rho \in [0,x]$, $x>0$. For any $\delta>0$, $\gamma>0$, we have
\begin{equation}
\left(f^{\gamma}(\tau)-f^{\gamma}(\rho)\right)\left(g^{\delta}(\rho)-g^{\delta}(\tau)\right) \geq 0,
\end{equation}
\begin{equation}
f^{\gamma}(\tau)g^{\delta}(\rho)-f^{\gamma}(\tau)g^{\delta}(\tau)- f^{\gamma}(\rho)g^{\delta}(\rho)+f^{\gamma}(\rho)g^{\delta}(\tau) \geq 0,
\end{equation}
therefore
\begin{equation}
f^{\gamma}(\tau)g^{\delta}(\tau)+f^{\gamma}(\rho)g^{\delta}(\rho)\leq f^{\gamma}(\tau)g^{\delta}(\rho)+f^{\gamma}(\rho)g^{\delta}(\tau).
\end{equation}
Now, multiplying both sides of (4.27) by $F(x,\tau)\tau^{k}$, $\tau \in(0,x)$, $x>0$, where $F(x,\tau)$ is defined by (2.5), and integrating the resulting inequality with respect to $\tau$ from $0$ to $x$, we have
\begin{equation}
\begin{split}
&\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}[f^{\gamma}(\tau)g^{\delta}(\tau)]d\tau\\
&+ f^{\gamma}(\rho)g^{\delta}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}[1]d\tau \\
&\leq g^{\delta}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k}f^{\gamma}(\tau)d\tau\\
&+f^{\gamma}(\rho)\frac{(k+1)^{\mu+\beta+1}x^{(k+1)(-\alpha-\beta-2\mu)}}{\Gamma (\alpha)}\int_{0}^{x}\tau^{(k+1)\mu}(x^{k+1}-\tau^{k+1})^{\alpha-1}
\times \\
& _{2}F_{1} (\alpha+ \beta+\mu, -\eta; \alpha; 1-(\frac{\tau}{x})^{k+1})\tau^{k} g^{\delta}(\tau)d\tau,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]+f^{\gamma}(\rho)g^{\delta}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\\
&\leq g^{\delta}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]+f^{\gamma}(\rho)I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)].
\end{split}
\end{equation}
Again, multiplying both sides of (4.29) by $F(x,\rho)\rho^{k}$, $\rho \in(0,x)$, $x>0$, where $F(x,\rho)$ is defined by (2.5), and integrating the resulting inequality with respect to $\rho$ from $0$ to $x$, we have
\begin{equation*}
\begin{split}
&I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[1]+I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)]
I^{\alpha,\beta,\eta,\mu}_{x,k}[1]\\
&\leq I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]
+I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)],
\end{split}
\end{equation*}
then we can write
\begin{equation*}
2\,I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)g^{\delta}(x)] \leq \frac{2}{I^{\alpha,\beta,\eta,\mu}_{x,k}[1]}\, I^{\alpha,\beta,\eta,\mu}_{x,k}[f^{\gamma}(x)]\,I^{\alpha,\beta,\eta,\mu}_{x,k}[g^{\delta}(x)].
\end{equation*}
This proves the result (4.24).\\
\textbf{Competing interests}\\
The author declares that there are no competing interests.
\end{document} |
\begin{document}
\begin{center}
{\Large{More results on the number of zeros of multiplicity at least $r$}}\\
\ \\
Olav Geil and Casper Thomsen\\
Department of Mathematical Sciences\\
Aalborg University\\
Fr.\ Bajersvej 7G\\
9220 Aalborg {\O}\\
Denmark\\
\ \\
Email: [email protected] and [email protected]
\end{center}
\ \\
\begin{center}
\begin{minipage}{12cm}
{\small{{\textbf{Abstract:}}
\ We consider multivariate polynomials and investigate how many zeros of multiplicity at least $r$ they can have over a Cartesian product of finite subsets of a field. Here $r$ is any prescribed positive integer and the
definition of
multiplicity that we use is the one related to Hasse derivatives. As a generalization of material in~\cite{augot,dvir} a general version of the Schwartz-Zippel bound was presented in~\cite{weighted} which, from the leading monomial -- with respect to a lexicographic ordering -- estimates the number of zeros counted with multiplicity. The corresponding corollary on the number of zeros of multiplicity at least $r$ is in general not sharp and therefore in~\cite{weighted} a recursively defined function $D$ was introduced from which one can derive improved information. The recursive function being rather complicated, the only known closed formula consequences of it are for the case of two variables~\cite{weighted}. In the present paper we derive closed formula consequences for arbitrarily many variables, provided the exponents in the leading monomial are not too large. Our bound
can be viewed as a generalization of the footprint bound~\cite{onorin,gh} -- the classical footprint bound not taking multiplicity into account.\\
\ \\
\noindent {\textbf{Keywords:}} Footprint bound, multiplicity, multivariate polynomial,
Schwartz-Zippel bound, zeros of polynomial \\
\noindent {\textbf{MSC classifications:}} Primary: 12Y05. Secondary:
11T06, 12E05, 13P05, 26C99 }}
\end{minipage}
\end{center}
\ \\
\section{Introduction}\label{sec1}
Given a univariate polynomial over an arbitrary field it is an easy
task to estimate the number of zeros of multiplicity at least $r$, for any fixed positive integer $r$. As is well-known the number of such zeros is less than or equal to the
degree of the polynomial divided by $r$. For multivariate
polynomials the situation is much more complicated as these polynomials
on the one hand typically have an infinite number of zeros when the
field is infinite and on the other hand have only a finite number of
zeros when not. A meaningful reformulation of the problem which works
independently of the field -- and which will be taken in the present
paper -- is to restrict to point sets that are Cartesian products of
finite sets. This of course includes the important case where the
point set is ${\mathbb{F}}_q \times \cdots \times {\mathbb{F}}_q$,
${\mathbb{F}}_q$ being the finite field with $q$ elements. Another
concern is which definition of multiplicity to use as for multivariate
polynomials there are more competing definitions. In the present paper
we use the one related to Hasse derivatives (see
Definition~\ref{defmult} below).\\
The interest in studying the outlined
problem originally came from applications to Guruswami-Sudan style \cite{GS} list decoding algorithms for
$q$-ary Reed-Muller codes, weighted Reed-Muller codes and their likes
\cite{pw_ieee,pw_expanded,manyauthors,augot,weighted}. The first bound on the number of zeros of prescribed
multiplicity was developed by Pellikaan and Wu in~\cite{pw_ieee,pw_expanded}. Later Augot
and Stepanov improved upon Pellikaan and Wu's bound (see~\cite[Prop.\ 13]{weighted}) by generalizing the Schwartz-Zippel bound to also deal with
multiplicity \cite{augot}. The proof of this bound was later given by Dvir et
al.\ in~\cite{dvir} where it was used to estimate the size of
Kakeya sets over finite fields. The mentioned Schwartz-Zippel bound estimates the sum of zeros when counted with multiplicity. From this, one obtains an easy corollary on the number of zeros of multiplicity $r$ or more. All of the above mentioned bounds are
stated in terms of the total degree of the involved polynomials
and the point set under consideration is always ${\mathbb{F}}_q \times
\cdots \times
{\mathbb{F}}_q$. In~\cite[Th.\ 5]{weighted} the generalization of the Schwartz-Zippel
bound was taken a step further to now work for arbitrary finite point
sets $S_1 \times \cdots \times S_m$, $S_i \subseteq {\mathbb{F}}$, $i=1, \ldots , m$
(where ${\mathbb{F}}$ is any field) and to take into account the
leading monomial with respect to a lexicographic ordering. Again one
obtains an easy corollary on the number of zeros of multiplicity at
least $r$ \cite[Cor.\ 3]{weighted}. Whereas the generalized
Schwartz-Zippel bound~\cite[Th.\ 5]{weighted} is tight in the sense
that we can always find polynomials attaining it (see
Proposition~\ref{prosharp} below) a similar result does not hold for
its corollary~\cite[Cor.\ 3]{weighted}. To address this problem we
introduced in~\cite{weighted} a recursively defined function $D$ to
estimate the number of zeros of multiplicity at least
$r$. Unfortunately, the function $D$ is quite complicated and only for
the case of two variables some simple closed formula upper bounds were derived~\cite[Prop.\ 16]{weighted}.
The purpose of the present paper is to establish for the general case of arbitrarily many variables a class of cases in which from $D$ we can derive a simple closed formula expression which is still an improvement to the Schwartz-Zippel bound for zeros of multiplicity at least $r$ (\cite[Cor.\ 3]{weighted}). The bound that we derive turns out to be a natural generalization of the
footprint bound \cite{onorin,gh} which estimates the number of zeros without
taking multiplicity into consideration.\\
The paper is organized as follows. In Section~\ref{sec2} we start by defining multiplicity and by recalling the general Schwartz-Zippel bound and as a corollary the Schwartz-Zippel bound for zeros of multiplicity at least $r$. The rest of Section~\ref{sec2} is devoted to a discussion of the method from~\cite{weighted}. In Section~\ref{sec3} we give the new results regarding a simple closed formula upper bound for the case of the exponents in the leading monomial being small. The concept of being small in general is rather involved and we therefore establish simple sufficient conditions for this to happen.
\section{Background}\label{sec2}
We first recall the concept of Hasse derivatives.
\begin{Definition}
Given $F(X_1, \ldots , X_m)\in {\mathbb{F}}[X_1, \ldots , X_m]$ and
$\vec{k}=(k_1, \ldots , k_m) \in {\mathbb{N}}_0^m$ the $\vec{k}$'th
Hasse derivative of $F$, denoted by $F^{(\vec{k})}(X_1, \ldots , X_m)$ is the
coefficient of $Z_1^{k_1} \cdots Z_m^{k_m}$ in $F(X_1+Z_1, \ldots , X_m+Z_m)\in {\mathbb{F}}[X_1, \ldots , X_m][Z_1, \ldots , Z_m]$. In other words
$$F(X_1+Z_1, \ldots , X_m+Z_m)=\sum_{\vec{k}} F^{(\vec{k})}(X_1, \ldots , X_m)Z_1^{k_1}
\cdots Z_m^{k_m}.$$
\end{Definition}
Observe that the next definition includes the usual concept of multiplicity for univariate polynomials as a special case.
\begin{Definition}\label{defmult}
For $F(X_1, \ldots , X_m) \in {\mathbb{F}}[X_1, \ldots , X_m]\backslash \{ {0} \}$ and
$\vec{a}=(a_1, \ldots , a_m) \in {\mathbb{F}}^m$ we define the multiplicity of $F$ at $\vec{a}$
denoted by ${\mbox{mult}}(F,\vec{a})$ as follows. Let $r$ be an
integer such that for every $\vec{k}=(k_1, \ldots ,
k_m) \in {\mathbb{N}}_0^m$ with $k_1+\cdots +k_m < r$, $F^{(\vec{k})}(a_1, \ldots , a_m)=0$
holds, but for some $\vec{k}=(k_1, \ldots ,
k_m) \in {\mathbb{N}}_0^m$ with $k_1+\cdots +k_m = r$,
$F^{(\vec{k})}(a_1, \ldots , a_m)\neq 0$ holds, then
${\mbox{mult}}(F,\vec{a})=r$. If $F=0$ then we define ${\mbox{mult}}(F,\vec{a})=\infty$.
\end{Definition}
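For experimentation over ${\mathbb{Q}}$, both notions are easy to implement. The following SymPy sketch (our illustration, not part of the original material) computes Hasse derivatives by expanding $F(X_1+Z_1, \ldots , X_m+Z_m)$ and reading off coefficients, and then the multiplicity as in Definition~\ref{defmult}.
\begin{verbatim}
from itertools import product as iproduct
import sympy as sp

def hasse_derivative(F, xs, kvec):
    # the kvec-th Hasse derivative: coefficient of Z_1^{k_1}...Z_m^{k_m} in F(X+Z)
    zs = sp.symbols(f'z:{len(xs)}')
    shifted = sp.expand(F.subs({x: x + z for x, z in zip(xs, zs)}, simultaneous=True))
    coeff = shifted
    for z, kk in zip(zs, kvec):
        coeff = coeff.coeff(z, kk)
    return sp.expand(coeff)

def multiplicity(F, xs, point):
    # smallest total order r at which some Hasse derivative does not vanish at the point
    r = 0
    while True:
        kvecs = [kv for kv in iproduct(range(r + 1), repeat=len(xs)) if sum(kv) == r]
        if any(hasse_derivative(F, xs, kv).subs(dict(zip(xs, point))) != 0 for kv in kvecs):
            return r
        r += 1

x1, x2 = sp.symbols('x1 x2')
F = (x1 - 1) ** 2 * (x2 - 2) ** 3
print(multiplicity(F, (x1, x2), (1, 2)))   # expected: 5 (the sum of the factor multiplicities)
\end{verbatim}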
The above definition is the one that is usually given in the literature. For our purpose the below equivalent description shall also prove useful.
\begin{Definition}\label{defeq}
Let $F(X_1, \ldots , X_m) \in {\mathbb{F}}[X_1, \ldots , X_m]\backslash \{0\}$ and
$\vec{a}=(a_1, \ldots , a_m) \in {\mathbb{F}}^m$.
Consider the ideal
\begin{eqnarray}
J_t=\langle (X_1-a_1)^{p_1}\cdots(X_m-a_m)^{p_m} \mid p_1+\cdots
+p_m=t \rangle \subseteq {\mathbb{F}}[X_1, \ldots , X_m]. \nonumber
\end{eqnarray}
We have
${\mbox{mult}}(F,\vec{a})=r$ if $F\in J_r \backslash
J_{r+1}$. If $F=0$ we have ${\mbox{mult}}(F,\vec{a})=\infty$.
\end{Definition}
We next state the most general form of the Schwartz-Zippel bound for fields
\cite[Th.\ 5]{weighted}. Here, and in the rest of the paper $S_1,
\ldots , S_m \subset {\mathbb{F}}$ are finite subsets of the field
${\mathbb{F}}$ and we write $s_1 =|S_1|, \ldots , s_m=|S_m|$. We note
that the below theorem was generalized to arbitrary commutative rings
in~\cite[Th.\ 7.10]{bishnoi2015zeros} where it was called the
generalized Schwartz Theorem.
\begin{Theorem}\label{prop-sz-gen}
Let $F(X_1, \ldots , X_m) \in {\mathbb{F}}[X_1, \ldots , X_m]$ be a non-zero polynomial and
let $X_1^{i_1} \cdots X_m^{i_m}$ be its leading
monomial with respect to a lexicographic ordering $\prec_{lex}$. Then for
any finite sets $S_1, \ldots ,S_m \subseteq {\mathbb{F}}$
\begin{eqnarray}
\sum_{\vec{a}\in S_1 \times \cdots \times S_m}{\mbox{mult}}(F,\vec{a})
\leq i_1s_2\cdots s_m+s_1i_2s_3 \cdots s_m+\cdots +s_1\cdots s_{m-1}i_m.\nonumber
\end{eqnarray}
\end{Theorem}
Turning to the problem of estimating the number of zeros of multiplicity at least $r$ -- which is the topic of the present paper -- we have the following corollary corresponding to~\cite[Cor.\ 3]{weighted}. We may think of it as the Schwartz-Zippel bound for zeros of multiplicity at least $r$.
\begin{Corollary}\label{cor-sz-gen}
Let $F(X_1, \ldots , X_m) \in {\mathbb{F}}[X_1, \ldots , X_m]$ be a non-zero polynomial and
let $X_1^{i_1} \cdots X_m^{i_m}$ be its leading
monomial with respect to the lexicographic ordering. Assume $S_1, \ldots
,S_m \subseteq {\mathbb{F}}$ are finite sets.
Then over
$S_1 \times \cdots \times S_m $ the number
of zeros of multiplicity at least $r$ is less than or equal to the
minimum of
\begin{eqnarray}
\big( i_1s_2\cdots s_m+s_1i_2s_3\cdots s_m+\cdots +s_1\cdots
s_{m-1}i_m \big) /r\nonumber
\end{eqnarray}
and $s_1 \cdots s_m$.
\end{Corollary}
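For later comparisons it is convenient to have the bound of Corollary~\ref{cor-sz-gen} available in executable form; the following direct Python transcription (ours, for illustration) takes the floor, which is permissible since the number of zeros is an integer.
\begin{verbatim}
from math import prod

def sz_bound(i, r, s):
    # Corollary bound on the number of zeros of multiplicity >= r in S_1 x ... x S_m
    total = sum(i[t] * prod(s) // s[t] for t in range(len(i)))   # i_1 s_2...s_m + ...
    return min(total // r, prod(s))
\end{verbatim}
For instance \texttt{sz\_bound((3, 11), 3, (5, 5))} returns $23$, in agreement with Table~\ref{tabny2} below.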
As mentioned in the introduction one
obtains better estimates than Corollary~\ref{cor-sz-gen}
by using the recursively defined function $D$. In particular
Corollary~\ref{cor-sz-gen} is not tight. Before giving the details we
pause for a moment to show that on the other hand
Theorem~\ref{prop-sz-gen} is tight (a fact that has not been reported
before). For this purpose we shall need the notation
$$S_j=\{\alpha_1^{(j)}, \ldots , \alpha_{s_j}^{(j)} \}$$
for $j=1, \ldots , m$, and the below proposition:
\begin{Proposition}\label{pronuogsaa}
Consider
\begin{equation}
F(X_1, \ldots , X_m)=\prod_{u=1}^{m} \prod_{v=1}^{s_u}(X_u-\alpha_v^{(u)})^{r_{v}^{(u)}}.\label{eqlinfac}
\end{equation}
The multiplicity of $(\alpha_{j_1}^{(1)}, \ldots ,
\alpha_{j_m}^{(m)})$ in $F(X_1, \ldots , X_m)$ equals
\begin{equation}
r_{j_1}^{(1)}+\cdots +r_{j_m}^{(m)}. \label{eqHjubi}
\end{equation}
\end{Proposition}
\noindent\emph{Proof:}
Clearly, the multiplicity is greater than or equal to
$r=r_{j_1}^{(1)}+\cdots +r_{j_m}^{(m)}$. Using Gr\"{o}bner basis theory we now
show that it is not larger. We substitute
${\mathcal{X}}_i=X_i-\alpha_{j_i}^{(i)}$ for $i=1, \ldots ,m$ and observe
that by Buchberger's S-pair criterion
$${\mathcal{B}}=\{{\mathcal{X}}_1^{r_1} \cdots {\mathcal{X}}_m^{r_m}
\mid r_1+\cdots +r_m=r+1\}$$
is a Gr\"{o}bner basis (with respect to any fixed monomial
ordering).
The support of $F({\mathcal{X}}_1, \ldots , {\mathcal{X}}_m)$ contains
a monomial of the form ${\mathcal{X}}_1^{i_1} \cdots
{\mathcal{X}}_m^{i_m}$ with $i_1+\cdots +i_m=r$. Every monomial in
${\mathcal{B}}$ has total degree $r+1$, so this monomial is not
divisible by any leading monomial of ${\mathcal{B}}$, and therefore the
remainder of $F({\mathcal{X}}_1, \ldots ,
{\mathcal{X}}_m)$ modulo ${\mathcal{B}}$ is non-zero.
It is well known that if a
polynomial is reduced modulo a Gr\"{o}bner basis then the remainder is
zero if and only if it belongs to the ideal generated by the elements
in the basis. Hence $F\notin J_{r+1}$ and the multiplicity is not
larger than $r$. \qed
\ \\
We are now ready to show that Theorem~\ref{prop-sz-gen} is tight.
\begin{Proposition}\label{prosharp}
Let $S_1, \ldots , S_m \subseteq {\mathbb{F}}$ be finite sets. If
$F(X_1, \ldots , X_m)\in {\mathbb{F}}[X_1, \ldots , X_m]$ is a product
of univariate linear factors -- meaning that it is of the form (\ref{eqlinfac}) -- then the number of
zeros of $F$ counted with multiplicity reaches the generalized Schwartz-Zippel bound
(Theorem~\ref{prop-sz-gen}).
\end{Proposition}
\noindent\emph{Proof:}
Consider the polynomial
$$F(X_1, \ldots , X_m)=\prod_{u=1}^m \prod_{v=1}^{s_u}\big(X_u-\alpha_v^{(u)}\big)^{r_v^{(u)}}.$$
Write $i_u=\sum_{v=1}^{s_u}r_v^{(u)}$, $u=1, \ldots , m$. Applying carefully Proposition~\ref{pronuogsaa} we obtain
\begin{eqnarray}
\sum_{\vec{a} \in S_1 \times \cdots \times S_m}
{\mbox{mult}}(F,\vec{a})
&=\sum_{t=1}^{s_1}(s_2 \cdots s_m )r_t^{(1)}+\cdots +
\sum_{t=1}^{s_m}(s_1 \cdots s_{m-1} )r_t^{(m)} \nonumber
\\
&=i_1s_2 \cdots s_{m} +\cdots +s_1
\cdots s_{m-1} i_m \nonumber
\end{eqnarray}
and we are through. \qed
\ \\
We next return to the problem of improving
Corollary~\ref{cor-sz-gen} for which we introduced
in~\cite[Def.\ 5]{weighted}
the function $D$.
\begin{Definition}\label{defD}
Let $r \in {\mathbb{N}}, i_1, \ldots , i_m \in {\mathbb{N}}_0$. Define
$$D(i_1,r,s_1)=\min \big\{\big\lfloor \frac{i_1}{r} \big\rfloor,s_1\big\}$$
and for $m \geq 2$
\begin{multline*}
D(i_1, \ldots , i_m,r,s_1, \ldots ,s_m)=
\\
\begin{split}
\max_{(u_1, \ldots ,u_r)\in A(i_m,r,s_m) }&\bigg\{ (s_m-u_1-\cdots -u_r)D(i_1,\ldots ,i_{m-1},r,s_1,
\ldots ,s_{m-1})\\
&\quad+u_1D(i_1, \ldots , i_{m-1},r-1,s_1, \ldots ,s_{m-1})+\cdots
\\
&\quad +u_{r-1}D(i_1, \ldots ,i_{m-1},1,s_1, \ldots , s_{m-1})+u_rs_1\cdots
s_{m-1} \bigg\}
\end{split}
\end{multline*}
where
\begin{multline}
A(i_m,r,s_m)= \\
\{ (u_1, \ldots , u_r) \in {\mathbb{N}}_0^r \mid u_1+ \cdots
+u_r \leq s_m {\mbox{ \ and \ }} u_1+2u_2+\cdots +ru_r \leq i_m\}.\label{eqA}
\end{multline}
\end{Definition}
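Definition~\ref{defD} translates directly into a short recursive program. The following Python sketch (our illustration; a brute-force enumeration of $A(i_m,r,s_m)$ with no attempt at efficiency) can be used to reproduce the tables of Example~\ref{exny1} below.
\begin{verbatim}
from functools import lru_cache
from itertools import product as iproduct
from math import prod

@lru_cache(maxsize=None)
def D(i, r, s):
    # i = (i_1, ..., i_m) and s = (s_1, ..., s_m) as tuples, r >= 1
    if len(i) == 1:
        return min(i[0] // r, s[0])
    best = 0
    rng = range(min(s[-1], i[-1]) + 1)
    for u in iproduct(rng, repeat=r):                       # candidates for (u_1, ..., u_r)
        if sum(u) > s[-1] or sum((j + 1) * uj for j, uj in enumerate(u)) > i[-1]:
            continue                                        # not in A(i_m, r, s_m)
        val = (s[-1] - sum(u)) * D(i[:-1], r, s[:-1])
        val += sum(u[j] * D(i[:-1], r - 1 - j, s[:-1]) for j in range(r - 1))
        val += u[r - 1] * prod(s[:-1])
        best = max(best, val)
    return best

print(D((3, 11), 3, (5, 5)), D((8, 5), 3, (5, 5)))          # expected: 19 and 20
\end{verbatim}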
Throughout the rest of the paper we shall always assume that $r \in
{\mathbb{N}}$ and that $i_1, \ldots , i_m \in {\mathbb{N}}_0$.
The improvement of Corollary~\ref{cor-sz-gen} was given in~\cite[Th.\ 6]{weighted}
as follows:
\begin{Theorem}\label{prorec}
For a polynomial $F(X_1, \ldots , X_m)\in {\mathbb{F}}[X_1, \ldots , X_m]$ let $X_1^{i_1}\cdots X_m^{i_m}$ be its leading monomial with
respect to the lexicographic ordering $\prec_{lex}$ with $X_m \prec_{lex} \cdots \prec_{lex} X_1$. Then $F$ has at most $D(i_1, \ldots , i_m,r,s_1,
\ldots ,s_m)$ zeros of multiplicity at least $r$ in $S_1\times \cdots
\times S_m$. The corresponding recursive algorithm produces a number
that is at most equal to the number found in
Corollary~\ref{cor-sz-gen} and is at most equal to $s_1 \cdots s_m$.
\end{Theorem}
When
$\lfloor i_1/s_1 \rfloor + \cdots + \lfloor i_m/s_m \rfloor \geq r$
Proposition~\ref{pronuogsaa} guarantees the existence of polynomials $F(X_1, \ldots , X_m)$ with leading monomial $X_1^{i_1} \cdots X_m^{i_m}$ having all elements of $S_1 \times \cdots \times S_m$ as zeros of multiplicity at least $r$. Hence, we only need to apply Theorem~\ref{prorec} to the case
$
\lfloor i_1/s_1 \rfloor + \cdots + \lfloor i_m/s_m \rfloor < r, $
and in particular we can assume $i_t < r s_t$.
\begin{Example}\label{exny1}
In this example we estimate the number of zeros of multiplicity
$3$ or more for polynomials in two
variables. Both $S_1$ and $S_2$ are assumed to be of size $5$.
From the above discussion, for
\begin{eqnarray}
(i_1,i_2)&\in&\{(\alpha,\beta) \mid \alpha \geq 15\} \cup \{ (\alpha,\beta) \mid \alpha \geq 10
{\mbox{ and }} \beta \geq 5\}\nonumber \\
&&\cup \{(\alpha,\beta) \mid \alpha \geq 5 {\mbox{ and }} \beta \geq 10\} \cup \{(\alpha,\beta)
\mid \beta \geq 15\}\nonumber
\end{eqnarray}
we have $D(i_1,i_2,3,5,5)=25$. Table~\ref{tabny1} shows
information obtained from our algorithm for the remaining possible
choices of exponents $(i_1,i_2)$. Observe that the table is not symmetric,
meaning that $D(i_1,i_2,3,5,5)$ does not always equal
$D(i_2,i_1,3,5,5)$. The corresponding values of the
Schwartz-Zippel bound
(Corollary~\ref{cor-sz-gen}) are displayed in Table~\ref{tabny2}, from
which it is clear that indeed the function $D$ can sometimes give a
dramatic improvement. For instance $D(3,11,3,5,5)$ equals $19$, but the
Schwartz-Zippel bound only gives the estimate $23$. Similarly,
$D(2,12,3,5,5)$ equals $20$ and the Schwartz-Zippel bound gives $23$.
\begin{table}
\centering
\caption{$D(i_1,i_2,3,5,5)$}
\begin{tabular}{@{}c@{}cc@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{}}
\toprule
&&\multicolumn{15}{c}{$i_1$}\\
&&0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14\\
\addlinespace
\multirow{15}{*}{$i_2$}
&\multicolumn{1}{c}{0 }&0 &0 &0 &5 &5 &5 &10&10&10&15&15&15&20&20&20\\
&\multicolumn{1}{c}{1 }&0 &0 &1 &5 &6 &6 &11&11&12&{{16}}&17&17&{{21}}&21&21\\
&\multicolumn{1}{c}{2 }&0 &1 &2 &7 &8 &9 &{{13}}&13&14&{{17}}&{{19}}&19&{{22}}&22&22\\
&\multicolumn{1}{c}{3 }&5 &5 &5 &9 &9 &10&{{14}}&14&{{16}}&{{18}}&{21}&{21}&{23}&{23}&23\\
&\multicolumn{1}{c}{4 }&5 &5 &6 &9 &11&{13}&{16}&{16}&{18}&{19}&{23}&{23}&{24}&{24}&{24}\\
&\multicolumn{1}{c}{5 }&5 &6 &7 &11&12&{14}&{17}&{17}&{20}&{20}\\
&\multicolumn{1}{c}{6 }&10&10&10&13&14&{17}&{19}&{19}&{21}&{21}\\
&\multicolumn{1}{c}{7 }&10&10&11&13&15&{18}&{20}&{20}&{22}&{22}\\
&\multicolumn{1}{c}{8 }&10&11&12&15&{17}&{21}&{22}&{22}&{23}&{23}\\
&\multicolumn{1}{c}{9 }&15&15&15&17&{18}&{22}&{23}&{23}&{24}&{24}\\
&\multicolumn{1}{c}{10}&15&15&16&17&{20}\\
&\multicolumn{1}{c}{11}&15&16&17&19&{21}\\
&\multicolumn{1}{c}{12}&20&20&20&21&{22}\\
&\multicolumn{1}{c}{13}&20&20&21&21&{23}\\
&\multicolumn{1}{c}{14}&20&21&22&23&{24}\\
\bottomrule
\end{tabular}
\label{tabny1}
\end{table}
\begin{table}
\centering
\caption{The Schwartz-Zippel bound (sz) for zeros of multiplicity at least $3$}
\begin{tabular}{c|rrrrrrrrrrrr}
$i_1+i_2$&0&1&2&3&4&5&6&7&8&9&10&11\\
sz&0 &1 &3 &5 &6 &8 &10 &11 &13 &15 &16 &18\\
\ \\
$i_1+i_2$&12&13&14&15&16&17&18\\
sz &20
&21 &23&25&25&25&25
\end{tabular}
\label{tabny2}
\end{table}
It is easy to establish a lower bound on the maximal number of
possible zeros of multiplicity at least $r=3$ for polynomials with any leading monomial
$X_1^{i_1}X_2^{i_2}$. This is done by inspecting polynomials of the
form~(\ref{eqlinfac}). As an example
$\prod_{u=1}^{4}(X_1-\alpha_{u}^{(1)})^2\prod_{v=1}^5(X_2-\alpha_v^{(2)})$
has $20$ zeros of multiplicity (at least) $3$. But $D(8,5,3,5,5)=20$ and
therefore the true value of the maximal number of zeros of
multiplicity at least $3$ is $20$ in this case. In Table~\ref{tabnyyy3} we
list the difference between $D(i_1,i_2,3,5,5)$ and the lower bound
found by using the above method. The large number of zeros in the table shows that
$D(i_1,i_2,3,5,5)$ often equals the true maximal number of zeros of
multiplicity at least $3$.
\begin{table}
\centering
\caption{Difference between upper and lower bound in Example~\ref{exny1}}\label{tabny6}
\begin{tabular}{@{}c@{}cc@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{}}
\toprule
&&\multicolumn{15}{c}{$i_1$}\\
&&0&1&2&3&4&5&6&7&8&9&10&11&12&13&14\\
\addlinespace
\multirow{15}{*}{$i_2$}
&\multicolumn{1}{c}{0} &0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\
&\multicolumn{1}{c}{1} &0&0&0&0&1&0&1&1&1&1&2&1&1&1&0\\
&\multicolumn{1}{c}{2} &0&0&0&2&2&2&3&2&2&2&3&2&2&1&0\\
&\multicolumn{1}{c}{3} &0&0&0&0&0&1&1&1&3&1&4&3&2&2&0\\
&\multicolumn{1}{c}{4} &0&0&0&0&2&3&3&3&2&2&3&2&2&1&0\\
&\multicolumn{1}{c}{5} &0&0&0&2&2&3&2&2&0&0\\
&\multicolumn{1}{c}{6} &0&0&0&0&1&2&3&2&1&0\\
&\multicolumn{1}{c}{7} &0&0&0&0&2&3&3&3&1&0\\
&\multicolumn{1}{c}{8} &0&0&0&2&1&1&2&1&2&0\\
&\multicolumn{1}{c}{9} &0&0&0&0&1&2&2&1&1&0\\
&\multicolumn{1}{c}{10}&0&0&0&0&0\\
&\multicolumn{1}{c}{11}&0&0&0&1&0\\
&\multicolumn{1}{c}{12}&0&0&0&0&0\\
&\multicolumn{1}{c}{13}&0&0&0&0&0\\
&\multicolumn{1}{c}{14}&0&0&0&0&0\\
\bottomrule
\end{tabular}
\label{tabnyyy3}
\end{table}
\end{Example}
In~\cite[Pro.\ 16]{weighted} we derived the following
closed formula upper bounds for
the case of two variables.
\begin{Proposition}\label{protwovar}
For $k=1, \ldots , r-1$, $D(i_1,i_2,r,s_1,s_2)$ is upper bounded by\\
$\begin{array}{cl}
{\mbox{(C.1)}}& {\displaystyle{s_2\frac{i_1}{r}+\frac{i_2}{r}\frac{i_1}{r-k}}}\\
&{\mbox{if \ }}(r-k)\frac{r}{r+1}s_1 \leq i_1 < (r-k)s_1
{\mbox{ \ and \ }} 0\leq i_2 <ks_2\\
{\mbox{(C.2)}}&
{\displaystyle{s_2\frac{i_1}{r}+((k+1)s_2-i_2)(\frac{i_1}{r-k}-\frac{i_1}{r})+(i_2-ks_2)(s_1-\frac{i_1}{r})}}\\
& {\mbox{if \ }}(r-k)\frac{r}{r+1}s_1 \leq i_1 < (r-k)s_1 {\mbox{ \
and \ }} ks_2\leq i_2 <(k+1)s_2\\
{\mbox{(C.3)}}&
{\displaystyle{s_2\frac{i_1}{r}+\frac{i_2}{k+1}(s_1-\frac{i_1}{r})}}\\
&{\mbox{if \ }} (r-k-1)s_1 \leq i_1 < (r-k)\frac{r}{r+1}s_1 {\mbox{ \
and \ }} 0 \leq i_2 < (k+1)s_2.
\end{array}
$\\
Finally,\\
$\begin{array}{cl}
{\mbox{(C.4)}}& {\displaystyle{D(i_1,i_2,r,s_1,s_2)=s_2\lfloor \frac{i_1}{r} \rfloor
+i_2(s_1-\lfloor \frac{i_1}{r} \rfloor )}}\\
& {\mbox{if \ }} s_1(r-1) \leq i_1 < s_1r {\mbox{ \ and \ }} 0 \leq i_2 < s_2.
\end{array}
$\\
The above numbers are at most equal to $\min\{(i_1s_2+s_1i_2)/r, s_1s_2 \}$.
\end{Proposition}
If in (C.3) of the above proposition we substitute $k=r-1$ then we derive
\begin{equation}
D(i_1,i_2,r,s_1,s_2) \leq s_1s_2-(s_1-\frac{i_1}{r})(s_2-\frac{i_2}{r}) \label{eqmer2}
\end{equation}
for $0\leq i_1< \frac{r}{r+1}s_1$ and $0 \leq i_2 < r s_2$. Actually, (\ref{eqmer2}) holds under the weaker assumption
\begin{equation}
0\leq i_1 \leq \frac{r}{r+1}s_1, 0 \leq i_2 < rs_2 \label{eqbett}
\end{equation}
which is seen by plugging in the values $k=r-1$ and
$i_1=\frac{r}{r+1}s_1$ into the expressions in (C.1), (C.2) and
(\ref{eqmer2}). This is the result that we will generalize to more
variables in the next section.
\begin{Example}\label{extwo}
This is a continuation of Example~\ref{exny1} where we investigated
$D(i_1,i_2,3,5,5)$. Although
condition~(\ref{eqbett}) reads $i_1 \leq 3$ and $i_2 \leq 14$ we print in
Table~\ref{tabny3} the value of~(\ref{eqmer2}) for all possible
$(i_1,i_2)$. The single, as well as double, underlined numbers correspond to entries where the
number is strictly smaller than
$D(i_1,i_2,3,5,5)$. For such entries (\ref{eqmer2}) certainly doesn't hold
true. By inspection, condition~(\ref{eqbett}) seems rather sharp. The
double underlined numbers correspond to cases where the value is even
smaller than the lower bounds on the maximal number of zeros that
we established at the end of Example~\ref{exny1}. Hence, (\ref{eqmer2})
can serve neither as a general upper bound on $D$ nor as a general
upper bound on the maximal number of zeros of multiplicity at least $r$.
\begin{table}
\centering
\caption{$\lfloor 25-(5-i_1/3)(5-i_2/3)\rfloor$}
\begin{tabular}{@{}c@{}cc@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{}}
\toprule
&&\multicolumn{15}{c}{$i_1$}\\
&&0 &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 &11 &12 &13 &14\\
\addlinespace
\multirow{15}{*}{$i_2$}
&\multicolumn{1}{c}{0 }&0 &1 &3 &5 &\it{6} &\it{8}&\it{10}&\it{11}&\it{13}&\it{15}&\it{16}&\it{18}&\it{20}&\it{21}&\it{23}\\
&\multicolumn{1}{c}{1 }&1 &3 &4 &6 &\it{7} &\it{9} &\it{11}&\it{12}&{\it{14}}&{\bf{\underline{\it{15}}}}&\it{17}&\it{18}&{\underline{\it{20}}}&\it{21}&\it{23}\\
&\multicolumn{1}{c}{2 }&3 &4 &6 &7 &\it{9} &\it{10} &{\underline{\it{12}}}&\it{13}&\it{14}&{\underline{\it{16}}}&{\underline{\it{17}}}&\it{19}&{\underline{\it{20}}}&\it{22}&\it{23}\\
&\multicolumn{1}{c}{3 }&5 &6 &7 &9 &\it{10} &\it{11}&{\underline{\it{13}}}&\it{14}&{\underline{\it{15}}}&{\underline{\it{17}}}&\underline{\it{18}}&\underline{\it{19}}&\underline{\it{21}}&\underline{\it{22}}&\it{23}\\
&\multicolumn{1}{c}{4 }&6 &7 &9 &10 &\it{11}&\underline{\it{12}}&\underline{\it{14}}&\underline{\it{15}}&\underline{\it{16}}&\underline{\it{17}}&\underline{\underline{\it{18}}}&\underline{\underline{\it{20}}}&\underline{\underline{\it{21}}}&\underline{\underline{\it{22}}}&\underline{\underline{\it{23}}}\\
&\multicolumn{1}{c}{5 }&8 &9 &10 &11&\it{12}&\underline{\it{13}}&\underline{\it{15}}&\underline{\it{16}}&\underline{\underline{\it{17}}}&\underline{\underline{\it{18}}}\\
&\multicolumn{1}{c}{6 }&10&11&12&13&\it{14}&\underline{\it{15}}&\underline{\it{16}}&\underline{\it{17}}&\underline{\underline{\it{18}}}&\underline{\underline{\it{19}}}\\
&\multicolumn{1}{c}{7 }&11&12&13&14&\it{15}&\underline{\it{16}}&\underline{\it{17}}&\underline{\it{17}}&\underline{\underline{\it{18}}}&\underline{\underline{\it{19}}}\\
&\multicolumn{1}{c}{8 }&13&14&14&15&\underline{\it{16}}&
\underline{\underline{\it{17}}}&\underline{\underline{\it{18}}}&\underline{\underline{\it{18}}}&\underline{\underline{\it{19}}}&\underline{\underline{\it{20}}}\\
&\multicolumn{1}{c}{9 }&15&15&16&17&\underline{\it{17}}&\underline{\underline{\it{18}}}&\underline{\underline{\it{19}}}&\underline{\underline{\it{19}}}&\underline{\underline{\it{20}}}&\underline{\underline{\it{21}}}\\
&\multicolumn{1}{c}{10}&16&17&17&18&\underline{\underline{\it{18}}}\\
&\multicolumn{1}{c}{11}&18&18&19&19&\underline{\underline{\it{20}}}\\
&\multicolumn{1}{c}{12}&20&20&20&21&\underline{\underline{\it{21}}}\\
&\multicolumn{1}{c}{13}&21&21&22&22&\underline{\underline{\it{22}}}\\
&\multicolumn{1}{c}{14}&23&23&23&23&\underline{\underline{\it{23}}}\\
\bottomrule
\end{tabular}
\label{tabny3}
\end{table}
\end{Example}
{\section{A closed formula expression when $(i_1, \ldots ,i_m)$ is small}\label{secismall}\label{sec3}}
\noindent
Since already the case $m=2$ requires four different closed formula
expressions (Proposition~\ref{protwovar}), the situation gets
very complicated for more variables. Assuming, however, that
the exponent $(i_1, \ldots , i_m)$ in the leading monomial is ``small''
-- a concept that will be formally defined in Definition~\ref{defconda} below -- we
can give a simple formula which is a generalization of (\ref{eqmer2})
and which is also strongly related to the footprint bound from
Gr\"{o}bner basis theory.\\
Given a zero dimensional ideal of a
multivariate polynomial ring, and a fixed
monomial ordering, the well-known footprint bound states that the size of the
corresponding variety is at most equal to the number of monomials that
cannot be found as the leading monomial of any polynomial in the
ideal (if moreover the ideal is radical, then equality holds).
More details on the footprint bound can be found in
\cite{clo,onorin,gh} -- in particular see \cite[Pro.\ 4, Sec.\ 5.3]{clo}.
We
have the following easy corollary. \\
\begin{Corollary}\label{footprspecial}
Given a polynomial $F(X_1, \ldots , X_m) \in {\mathbb{F}}[X_1,
\ldots , X_m]$, and a monomial ordering, let $X_1^{i_1}\cdots
X_m^{i_m}$ be the leading monomial of $F$, and assume $i_1 < s_1,
\ldots , i_m < s_m$. The number of elements in
$S_1 \times \cdots \times S_m$ that are zeros of $F$ is at most
equal to
\begin{equation}
s_1 \cdots s_m-(s_1-i_1)(s_2-i_2)\cdots (s_m-i_m).\label{eqthisisfoot}
\end{equation}
\end{Corollary}
\noindent\emph{Proof:}
The set of zeros of $F$ from $S_1 \times \cdots \times S_m$ equals the
variety of the ideal $\langle F, G_1, \ldots , G_m\rangle$ where
$G_i=\prod_{u=1}^{s_i}(X_i-\alpha_u^{(i)})$. Here, we used the
notation introduced prior to Proposition~\ref{pronuogsaa}. The above ideal clearly is
zero-dimensional. In fact, the monomials that are not leading monomial
of any polynomial in the ideal must belong to the set
$$\{X_1^{j_1} \cdots X_m^{j_m} \mid j_1 < s_1, \ldots, j_m < s_m,
X_1^{j_1} \cdots X_m^{j_m}{\mbox{ is not divisible by }} X_1^{i_1}
\cdots X_m^{i_m} \},$$
the size of which equals (\ref{eqthisisfoot}).
The result now follows from the footprint bound. \qed\\
The above corollary and (\ref{eqmer2}) are clearly related, as
(\ref{eqthisisfoot}) equals the right side of (\ref{eqmer2}) for
$m=2$; similarly, (\ref{eqbett}) equals the assumption in the corollary. Observe, however, that in (\ref{eqmer2}), and in this paper in
general, we always assume that the monomial ordering is the
lexicographic ordering described in Theorem~\ref{prorec}. The
master theorem of the present paper is the following result, where
(\ref{eqendnuenstjerne}) is the generalization of (\ref{eqmer2}) to
more variables and
where the mentioned Condition A is the generalization
of~(\ref{eqbett}). Recall that $D(i_1, \ldots , i_m,r,s_1, \ldots ,
s_m)$ serves as an upper bound on the number of zeros of multiplicity
at least $r$ for polynomials with leading monomial $X_1^{i_1}
\cdots X_m^{i_m}$ with respect to the lexicographic ordering. As a consequence, the master theorem can also be viewed as a
generalization of Corollary~\ref{footprspecial} when restricted to the lexicographic ordering.
\begin{Theorem}\label{prosmall}
Assume that $(i_1, \ldots , i_m,r,s_1, \ldots , s_m)$ with $m\geq 2$ satisfies Condition
A in Definition~\ref{defconda} below. We have
\begin{eqnarray}
D(i_1, \ldots ,i_m,r,s_1,\ldots , s_m) \leq s_1\cdots s_m-(s_1-\frac{i_1}{r})\cdots
(s_m-\frac{i_m}{r})\label{eqendnuenstjerne}
\end{eqnarray}
which is at most equal to $\min \{ (i_1s_2\cdots s_m+\cdots +s_1\cdots
s_{m-1}i_m)/r, s_1\cdots s_m\}$.
\end{Theorem}
We postpone the proof of Theorem~\ref{prosmall} till the end of the
section. \\
\begin{Definition}\label{defconda}
Let $m \geq 2$. We say that $(i_1, \ldots , i_m,r,s_1, \ldots , s_m)$
satisfies Condition A if the following hold
$$
\begin{array}{rl}
(A.1)& 0 \leq i_1\leq s_1, \ldots , 0 \leq i_{m-1} \leq s_{m-1}, 0
\leq i_m < r s_m\\
(A.2)&s(s_1-\frac{i_1}{\ell}) \cdots (s_{m-2}-\frac{i_{m-2}}{\ell}) \leq \ell
(s_1-\frac{i_1}{s})\cdots (s_{m-2}-\frac{i_{m-2}}{s})\\
& {\mbox{ \ for all \ }}
\ell=2, \ldots , r,\ s=1, \ldots , \ell-1.\\
(A.3)&s(s_1-\frac{i_1}{r}) \cdots (s_{m-1}-\frac{i_{m-1}}{r}) \leq r
(s_1-\frac{i_1}{s})\cdots (s_{m-1}-\frac{i_{m-1}}{s})\\
&{\mbox{ \ for all \ }} s=1, \ldots
, r-1.
\end{array}
$$
\end{Definition}
We note that one could actually replace $\ell=2, \ldots , r$ in (A.2)
with the weaker $\ell =2, \ldots, r-1$ as the case $\ell=r$ follows
from (A.3).\\
Admittedly, the definition of the exponent being small (Condition A) is
rather technical. However:
\begin{itemize}
\item If $(i_1, \ldots , i_m)$ is small
then all $(i_1^\prime, \ldots , i_m^\prime)$ with
$i_1^\prime \leq i_1, \ldots , i_m^\prime \leq i_m$ are also small
(Proposition~\ref{proalso}). Hence, it is enough to check if $(i_1,
\ldots , i_m)$ satisfies Condition A.
\item Condition A is satisfied when $i_t \leq s_t \min \left\{ \frac{\sqrt[m-1]{r}-1}{\sqrt[m-1]{r}-\frac{1}{r}},
\frac{\sqrt[m-2]{2}-1}{\sqrt[m-2]{2}-\frac{1}{2}}\right\}$, $t=1, \ldots
,m-1$, $i_m <r s_m$ (Theorem~\ref{themain}).
\item As already mentioned, Condition A and the master theorem reduce
to well-known results when $r=1$ or when $m=2$ (see Remark~\ref{remsmart}
for the details).
\item For arbitrary $m$ but
$r=2$ and $s_1 = \cdots =s_m$,
Condition A reduces
to a simple expression (Proposition~\ref{proI} and Example~\ref{exsmalll}).
\end{itemize}
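As a small numerical sanity check of Condition A and of the bound in Theorem~\ref{prosmall} (the numbers below are chosen for illustration only), take $m=3$, $r=2$, $s_1=s_2=s_3=4$ and $(i_1,i_2,i_3)=(1,1,3)$. Condition (A.1) holds since $i_1,i_2\leq 4$ and $i_3=3<2\cdot 4$; condition (A.2) (only $\ell=2$, $s=1$ occurs) reads $1\cdot(4-\tfrac{1}{2})\leq 2\cdot(4-1)$; and condition (A.3) ($s=1$) reads $1\cdot(4-\tfrac{1}{2})^2\leq 2\cdot(4-1)^2$. Theorem~\ref{prosmall} then gives
$$D(1,1,3,2,4,4,4)\leq 4^3-(4-\tfrac{1}{2})(4-\tfrac{1}{2})(4-\tfrac{3}{2})=64-30.625=33.375,$$
which is indeed smaller than $\min\{(1\cdot 16+1\cdot 16+3\cdot 16)/2,\ 64\}=40$.\\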
\begin{Proposition}\label{proalso}
If $(i_1, \ldots , i_m,r,s_1, \ldots , s_m)$ satisfies Condition
A then for all $i_1^\prime, \ldots ,i_m^\prime$ with $0 \leq
i_1^\prime \leq i_1, \ldots , 0 \leq i_m^\prime \leq i_m$ also
$(i_1^\prime , \ldots , i_m^\prime , r, s_1, \ldots , s_m)$ satisfies
Condition A.
\end{Proposition}
\noindent\emph{Proof:}
It is enough to show that
\begin{equation}
\frac{s_t-\frac{i_t}{s}}{s_t-\frac{i_t}{\ell}} \leq \frac{s_t-\frac{ai_t}{s}}{s_t-\frac{ai_t}{\ell}} \label{eqa}
\end{equation}
holds for all rational numbers $a$ and integers $t$ with $0 < a < 1$
and $1 \leq t \leq m-1$. But~(\ref{eqa}) is equivalent to
$(1-a)(\ell-s)\geq 0$
which is a valid inequality when $\ell >s$. \qed
\ \\
We now give the most important theorem of the paper.
\begin{Theorem}\label{themain}
If $i_m < r s_m$ and if for $t=1, \ldots , m-1$
$$i_t \leq s_t \min \left\{ \frac{\sqrt[m-1]{r}-1}{\sqrt[m-1]{r}-\frac{1}{r}},
\frac{\sqrt[m-2]{2}-1}{\sqrt[m-2]{2}-\frac{1}{2}}\right\}$$
then $D(i_1, \ldots, i_m,r,s_1, \ldots, s_m) \leq s_1 \cdots
s_m-(s_1-\frac{i_1}{r}) \cdots (s_m-\frac{i_m}{r})$.
\end{Theorem}
\noindent\emph{Proof:}
The idea behind Theorem~\ref{themain} is to choose $i_t$, $t=1, \ldots
, m-1$ such that
\begin{equation}
\sqrt[m-1]{s}(s_t-\frac{i_t}{r}) \leq \sqrt[m-1]{r}(s_t-\frac{i_t}{s}),
{\mbox{ \ \ for }} s=1, \ldots , r-1, \label{TREKANT1}
\end{equation}
and such that
\begin{equation}
\sqrt[m-2]{s}(s_t-\frac{i_t}{\ell}) \leq \sqrt[m-2]{\ell}(s_t-\frac{i_t}{s}),
{\mbox{ \ \ for }} \ell=2,\ldots , r, \ s=1, \ldots , \ell-1. \label{TREKANT2}
\end{equation}
The first set of inequalities guarantees (A.3) and the second set
guarantees (A.2). Now (\ref{TREKANT1}) and (\ref{TREKANT2}),
respectively, translates to
\begin{equation}
\frac{i_t}{s_t} \leq
\frac{\sqrt[m-1]{r}-\sqrt[m-1]{s}}{\frac{\sqrt[m-1]{r}}{s}-\frac{\sqrt[m-1]{s}}{r}}, \label{nabel1}
\end{equation}
\begin{equation}
\frac{i_t}{s_t} \leq
\frac{\sqrt[m-2]{\ell}-\sqrt[m-2]{s}}{\frac{\sqrt[m-2]{\ell}}{s}-\frac{\sqrt[m-2]{s}}{\ell}}, \label{nabel2}
\end{equation}
respectively, and then also (A.1) is clearly satisfied. We shall show
that the right side of (\ref{nabel1}) is smallest possible when $s=1$,
in which case it equals
$(\sqrt[m-1]{r}-1)/(\sqrt[m-1]{r}-1/r)$. And we shall show that the
right side of (\ref{nabel2}) is smallest possible when $\ell=2, s=1$,
in which case it equals $(\sqrt[m-2]{2}-1)/(\sqrt[m-2]{2}-1/2)$.\\
We first consider (\ref{nabel1}) where we substitute $S=\sqrt[m-1]{s}$
and $R=\sqrt[m-1]{r}$ to obtain
$$\frac{i_t}{s_t} \leq \frac{R^mS^{m-1}-R^{m-1}S^m}{R^m-S^m}.$$
We want to demonstrate that the right side is minimal on $[ 1, R [$
when $S=1$. The derivative is
$$\frac{(m-1)R^{2m}S^{m-2}+R^mS^{2m-2}-mR^{2m-1}S^{m-1}}{(R^m-S^m)^2}.$$
Hence, it suffices to show that the numerator is always positive on
$] 0,R[$. Writing $S=Ra$ with $a \in ] 0, 1[$ the condition that
the numerator should be positive becomes $m-1+a^m-ma >0$. Plugging in
$a=1$, equality holds. Therefore the result follows from the fact that
the derivative of $m-1+a^m-ma$ is negative on $]0,1[$.\\
The above proof not only shows that the minimum of the right side of
(\ref{nabel1}) is obtained for $s=1$. It also applies to demonstrate
that the minimum of the right side of (\ref{nabel2}) is attained in
one of the following cases $(\ell=2, s=1)$, $(\ell=3, s=1), \ldots ,
(\ell=r,s=1)$. We next substitute $m-2$ with $m$ on the right side of
(\ref{nabel2}) to obtain $(\ell^{1/m}-1)/(\ell^{1/m}-1/\ell)$. We want
to show that the minimal value for $\ell \in [2, \infty [$ is
attained when $\ell=2$. The derivative is
$$\frac{-\left(\ell^{1/m}m-\ell^{(m+1)/m}+\ell^{1/m}-m\right)}{m
\left(\ell^{(m+1)/m}-1\right)^2}$$
where the denominator is always positive. The term $\ell^{1/m}m-\ell^{(m+1)/m}+\ell^{1/m}-m$ vanishes at $\ell=1$, and since
$$\frac{d}{d\ell}\left(\ell^{1/m}m-\ell^{(m+1)/m}+\ell^{1/m}-m\right)=\frac{(m+1)(\ell^{-(m-1)/m}-\ell^{1/m})}{m}$$
is negative on $]1,\infty[$, this term is negative, and hence the numerator above is positive, for all $\ell>1$. Consequently the expression $(\ell^{1/m}-1)/(\ell^{1/m}-1/\ell)$ is increasing on $[2,\infty[$ and its minimum is attained at $\ell=2$, as claimed.
\qed
\begin{Remark}\label{remsmart}
If $r=1$ then (A.2) and (A.3) do not apply and therefore Condition A
reduces to $i_1 \leq s_1, \ldots , i_m \leq s_m$. Hence, in this case Theorem~\ref{prosmall} in combination with Theorem~\ref{prorec} reduces to
Corollary~\ref{footprspecial}.\\
For $m=2$ and $r$ arbitrary condition (A.2) does not apply and
condition (A.3)
simplifies to
$$i_1 \leq \frac{rs}{r+s}s_1$$
for all integers $s$ with $1 \leq s < r$. The minimal upper bound on
$i_1$ is attained for $s=1$. Hence, in the case of two variables Condition A reads $i_1
\leq \frac{r}{r+1}s_1$, $i_2 < r s_2$. For $m=2$ and arbitrary $r$, Theorem~\ref{prosmall} therefore coincides with (\ref{eqmer2}) under the assumption (\ref{eqbett}).
\end{Remark}
\begin{Proposition}\label{proI}
Assume $r=2$ and $s_1=\cdots =s_m =q$. Then Condition A simplifies to
$$\sum_{t=1}^{m-1} (-1)^{t+1} \frac{2^{t+1}-1}{2^t}\sum_{1 \leq j_1 <
\cdots < j_t \leq m-1} (I_{j_1} \cdots I_{j_t}) \leq 1 {\mbox{ \ and \ }} I_m < 2$$
where $I_1=i_1/q, \ldots , I_m=i_m/q$.
\end{Proposition}
\noindent\emph{Proof:}
For $r=2$, condition (A.2) follows from condition (A.3) by the remark after Definition~\ref{defconda}, and condition (A.3) becomes
$$\big(s_1-\frac{i_1}{2}\big) \cdots\big(s_{m-1}-\frac{i_{m-1}}{2}\big) \leq
2\big(s_1-i_1\big) \cdots\big(s_{m-1}-i_{m-1}\big)$$
which is equivalent to
\begin{eqnarray}
&&\left( 1-\frac{I_1}{2}\right)\cdots \left(
1-\frac{I_{m-1}}{2}\right)\leq 2(1-I_1)\cdots (1-I_{m-1}) \nonumber
\\
&\Updownarrow \nonumber \\
&&1+\sum_{t=1}^{m-1}(-1)^t(\frac{1}{2})^t\sum_{1 \leq j_1 < \cdots < j_t
\leq m-1} (I_{j_1} \cdots I_{j_t})\leq \nonumber \\
&&{\mbox{ \ \ \ \ \ \ }} 2 +
2\sum_{t=1}^{m-1}(-1)^t\sum_{1 \leq j_1 < \cdots <j_t \leq m-1}(I_{j_1}
\cdots I_{j_t}) \nonumber \\
&\Updownarrow \nonumber \\
&&\sum_{t=1}^{m-1} (-1)^{t+1} \frac{2^{t+1}-1}{2^t}\sum_{1 \leq j_1 <
\cdots < j_t \leq m-1} (I_{j_1} \cdots I_{j_t}) \leq 1 \nonumber
\end{eqnarray}
and we are through. \qed
\begin{Example}\label{exsmalll}
Let the notation be as in Proposition~\ref{proI}.
For $r=2$, $m=3$ and $s_1=s_2=s_3=q$ Condition A reads
$$\frac{3}{2}(I_1+I_2)-\frac{7}{4}I_1I_2 \leq 1, {\mbox{ \ \ }} I_3
< 2.$$ For $r=2$, $m=4$ and $s_1=s_2=s_3=s_4=q$ Condition A reads
$$\frac{3}{2}(I_1+I_2+I_3)-\frac{7}{4}(I_1I_2+I_1I_3+I_2I_3)+\frac{15}{8}I_1I_2I_3 \leq 1, {\mbox{ \ \ }} I_4 < 2.$$
This is illustrated in Figure~\ref{figimp}.
\begin{figure}
\caption{The surface $\frac{3}{2}(I_1+I_2+I_3)-\frac{7}{4}(I_1I_2+I_1I_3+I_2I_3)+\frac{15}{8}I_1I_2I_3 = 1$.}
\label{figimp}
\end{figure}
\end{Example}
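As a concrete data point (again for illustration only): with $q=6$ and $i_1=i_2=2$, i.e.\ $I_1=I_2=\tfrac{1}{3}$, the first condition above evaluates to $\tfrac{3}{2}\cdot\tfrac{2}{3}-\tfrac{7}{4}\cdot\tfrac{1}{9}=1-\tfrac{7}{36}\leq 1$, so every exponent $(2,2,i_3)$ with $i_3<12$ satisfies Condition A for $r=2$ and $s_1=s_2=s_3=6$.\\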
From Proposition~\ref{proI} it is clear that in the case of $r=2$, for
Condition A to hold we must have $i_t \leq \frac{2}{3} s_t$, $t=1,
\ldots , m-1$. The general picture for $r$ arbitrary is described in
the following proposition.
\begin{Proposition}\label{lem}
Assume that $(i_1, \ldots , i_m,r,s_1, \ldots , s_m)$ with $m\geq 2$ satisfies Condition
A. If $r \geq 2$ then
\begin{equation}
i_1 \leq \frac{r}{r+1}s_1, \ldots , i_{m-1} \leq \frac{r}{r+1}s_{m-1}.\label{eqeqeq}\end{equation}
\end{Proposition}
\noindent\emph{Proof:}
Follows from (A.3), the last part of Remark~\ref{remsmart}, and the fact that
$$s_t-\frac{i_t}{\ell}\geq s_t-\frac{i_t}{s}$$
holds for $t=1, \ldots , m-1$. \qed
\ \\
\noindent\emph{Proof of Theorem~\ref{prosmall}:}
Let $(i_1, \ldots ,i_m,r,s_1, \ldots ,s_m)$ with $m\geq 2$ be
such that Condition A holds. We give an induction proof
that
\begin{equation}
\begin{array}{r}
D(i_1, \ldots , i_t,\ell,s_1,\ldots ,s_t) \leq s_1\cdots s_t-(s_1-\frac{i_1}{\ell})
\cdots (s_t-\frac{i_{t}}{\ell})\\
{\mbox{ \ for all \ }}1 \leq t < m , 1 \leq
\ell\leq r.
\end{array}
\label{eqminus1}
\end{equation}
For $t=1$ the
result is clear. Let $1 < t<m$ and assume the result holds when $t$ is
substituted with $t-1$. According to Definition~\ref{defD} we have
\begin{multline*}
D(i_1, \ldots , i_t,\ell,s_1, \ldots ,s_t)=
\\
\begin{split}
\max_{(u_1, \ldots ,u_\ell)\in A(i_t,\ell,s_t) }\bigg\{& (s_t-u_1-\cdots -u_\ell)D(i_1,\ldots ,i_{t-1},\ell,s_1,
\ldots ,s_{t-1})\\
&+u_1D(i_1, \ldots , i_{t-1},\ell-1,s_1, \ldots ,s_{t-1})+\cdots
\\
&+u_{\ell-1}D(i_1, \ldots ,i_{t-1},1,s_1, \ldots ,
s_{t-1})+u_\ell s_1\cdots s_{t-1} \bigg\}
\end{split}
\end{multline*}
where
\begin{eqnarray}
A(i_t,\ell,s_t)&=&\{(u_1, \ldots ,u_\ell) \in {\mathbb{N}}_0^\ell \mid
u_1+ \cdots +u_\ell \leq s_t, {\mbox{ \ }}u_1+2u_2+\cdots + \ell u_\ell \leq i_t\}\nonumber
\end{eqnarray}
follows from Definition~\ref{defD}. By the induction hypothesis, and since $A(i_t,\ell,s_t)\subseteq B(i_t,\ell,s_t)$, this implies that
\begin{multline}
D(i_1, \ldots ,i_t,\ell,s_1, \ldots ,s_t) \leq
\\
\shoveleft
\max_{(u_1, \ldots ,u_\ell)\in B(i_t,\ell,s_t) } \bigg\{ s_t\big(
s_1 \cdots s_{t-1}-(s_1-\frac{i_1}{\ell})\cdots(s_{t-1}-\frac{i_{t-1}}{\ell})\big)\\
\begin{split}
&+u_1\big((s_1-\frac{i_1}{\ell})\cdots(s_{t-1}-\frac{i_{t-1}}{\ell})-(s_1-\frac{i_1}{\ell-1})\cdots(s_{t-1}-\frac{i_{t-1}}{\ell-1}) \big)\\
&+ \cdots
\\
&+u_{\ell-1}\big(
(s_1-\frac{i_1}{\ell})\cdots(s_{t-1}-\frac{i_{t-1}}{\ell})-(s_1-\frac{i_1}{1})\cdots(s_{t-1}-\frac{i_{t-1}}{1})
\big)\\
&+u_{\ell}\big((s_1-\frac{i_1}{\ell})\cdots(s_{t-1}-\frac{i_{t-1}}{\ell})\big) \bigg\}\label{eqznabel}
\end{split}
\end{multline}
where
\begin{eqnarray}B(i_{t},\ell,s_t)&=&\{(u_1, \ldots , u_\ell) \in {\mathbb{Q}}^\ell \mid 0
\leq u_1, \ldots , u_\ell, {\mbox{ \ }} u_1 + \cdots +u_\ell \leq s_t,
\nonumber \\
&&{\mbox{ \ \hspace{4cm} and \ }} u_1+2u_2+\cdots +\ell u_\ell \leq i_t\}.\nonumber
\end{eqnarray}
We have $t<m $ and therefore condition (A.2) applies. We note that
$$s(s_1-\frac{i_1}{\ell}) \cdots (s_{t-1}-\frac{i_{t-1}}{\ell})
\leq \ell (s_1-\frac{i_1}{s})\cdots (s_{t-1}-\frac{i_{t-1}}{s})$$
for $s=1, \ldots , \ell-1$ is equivalent to
$$(\ell-s)(s_1-\frac{i_1}{\ell}) \cdots (s_{t-1}-\frac{i_{t-1}}{\ell})
\leq \ell (s_1-\frac{i_1}{\ell-s})\cdots (s_{t-1}-\frac{i_{t-1}}{\ell-s})$$
for $s=1, \ldots , \ell-1$ which again is equivalent to
$$\ell\big( (s_1-\frac{i_1}{\ell}) \cdots
(s_{t-1}-\frac{i_{t-1}}{\ell})-(s_1-\frac{i_1}{\ell-s}) \cdots
(s_{t-1}-\frac{i_{t-1}}{\ell-s}) \big)
\leq s (s_1-\frac{i_1}{\ell})\cdots (s_{t-1}-\frac{i_{t-1}}{\ell})$$
for $s=1, \ldots , \ell-1$. Therefore the maximal value
of~(\ref{eqznabel}) is attained for $u_1=\cdots =u_{\ell-1}=0$ and
$u_\ell=\frac{i_t}{\ell}$. This concludes the induction proof of (\ref{eqminus1}).\\
To show (\ref{eqendnuenstjerne}) we apply similar arguments
to the case
$t=m$
but use condition (A.3) rather than condition (A.2). \\
Finally we address the last part of Theorem~\ref{prosmall}. It is
clear that the right side of~(\ref{eqendnuenstjerne}) is smaller than or equal to $s_1\cdots s_m$. To
see that it is also smaller than or equal to
\begin{equation}
\sum_{t=1}^m \big( (\prod_{\begin{array}{c}j=1,\ldots , m\\j \neq t
\end{array}}s_j)\frac{i_t}{r}\big) \label{eqsumprod}
\end{equation}
we start by observing that
$$\big( \prod_{\begin{array}{c}j=1, \ldots , m\\j \neq
t\end{array}}s_j \big) \frac{i_t}{r}$$
equals the volume of
\begin{eqnarray}
N(t,\frac{i_t}{r})&=&\{(a_1, \ldots , a_m) \in {\mathbb{R}}_0^m \mid 0
\leq a_t < \frac{i_t}{r}, 0 \leq a_j \leq s_j \nonumber \\
&& {\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ for }} j \in \{1,
\ldots , m\}\backslash \{ t \} \}. \nonumber
\end{eqnarray}
The sum of the volumes of $N(t,\frac{i_t}{r})$, $t=1, \ldots , m$, is larger
than or equal to the volume of
\begin{eqnarray}
\cup_{t=1}^m N(t, \frac{i_t}{r})&=&\{(a_1, \ldots ,a_m) \in
{\mathbb{R}}_0^m \mid 0 \leq a_t \leq s_t {\mbox{ for }} t=1, \ldots
, m \nonumber \\
&&{\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ and not all $j$ satisfy }} \frac{i_j}{r} \leq a_j\} \nonumber
\end{eqnarray}
which equals the right side of~(\ref{eqendnuenstjerne}). \qed
\section{Concluding remarks}
The results in this paper use the lexicographic
ordering. We pose it as a research problem to investigate if some of
them hold for arbitrary monomial orderings.
\section*{Acknowledgments}
This work was supported by the Danish Council for Independent
Research (grant no.\ DFF-4002-00367) and by the Danish National
Research Foundation and the National Natural Science
Foundation of China (Grant No.\ 11061130539 -- the Danish-Chinese
Center for Applications of Algebraic Geometry in Coding Theory and
Cryptography).
\end{document}
\begin{document}
\setlength{\arraycolsep}{2pt}
\title[]{\textbf{Random access test as an identifier of nonclassicality}}
\author[Heinosaari]{Teiko Heinosaari}
\author[Lepp\"aj\"arvi]{Leevi Lepp\"aj\"arvi}
\email{Teiko Heinosaari: [email protected]}
\email{Leevi Lepp\"aj\"arvi: [email protected]}
\address[Heinosaari]{(1) Department of Physics and Astronomy, University of Turku, Finland, (2)
Quantum algorithms and software, VTT Technical Research Centre of Finland Ltd}
\address[Lepp\"aj\"arvi]{RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava, Slovakia }
\begin{abstract}
Random access codes are an intriguing class of communication tasks that reveal an operational and quantitative difference between classical and quantum information processing. We formulate a natural generalization of random access codes and call them random access tests, defined for any finite collection of measurements in an arbitrary finite dimensional general probabilistic theory. These tests can be used to examine collective properties of collections of measurements. We show that the violation of a classical bound in a random access test is a signature of either measurement incompatibility or super information storability. The polygon theories are exhaustively analyzed and a critical difference between even and odd polygon theories is revealed.
\end{abstract}
\maketitle
\section{Introduction}
The central theoretical aim of quantum information processing is to understand how quantum physics can be exploited in computing and communication. For example, in the celebrated superdense coding protocol quantum entanglement is used to communicate a certain number of bits of information by transmitting smaller number of qubits and sharing beforehand entangled qubits \cite{BeWi92}. In general, quantum protocols are superior to classical protocols in some information processing tasks, but not in all (such as in nonlocal computation \cite{LiPoShWi07} and some quantum guessing games \cite{AlBaBrAcGiPi10}). Every task showing a quantum advantage also reveals something about quantum theory itself, although the quantum resources behind the advantage may not always be so straightforward to identify.
Random access codes (RACs) are simple communication protocols where a number of bits are encoded into a smaller number of bits and it is later randomly decided which bit should be decoded. These tasks become interesting when compared to their quantum versions, known as quantum random access codes (QRACs), where a number of bits are encoded into a smaller number of qubits, or more generally, into a quantum system with smaller dimension than the dimension of the classical encoding space \cite{Wiesner83, AmLeMaOz09}. It is known that in many cases a quantum system is better than a classical system of the same operational dimension (i.e.\ with the same maximal number of perfectly distinguishable states), and quantum random access codes have been investigated from various angles and generalized into different directions, see e.g.\ \cite{PaZu10, AgBoMiPa18,AmBaChKrRa19,DoMo21}. It has been shown, for instance, that QRAC provides a robust self-testing of mutually unbiased bases, which are the unique quantum measurements giving the optimal performance in a particular QRAC \cite{FaKa19}. Further, it has been shown that to get any quantum advantage at all, one must use an incompatible pair of measurements \cite{CaHeTo20}. In that way, QRAC can be used as a semi-device-independent certification of quantum incompatibility.
In the current investigation we adopt the approach started in \cite{CaHeTo20} and generalize random access codes to \emph{random access tests} (RATs) where the number of measurement outcomes and operational dimension of the communication medium are independent from each other (in RACs it is assumed that these are the same).
We formulate random access tests in the framework of general probabilistic theories (GPTs), hence enabling us to compare the performances in different theories.
We connect the performance of a collection of measurements in a random access test to the decoding power of their specific approximate joint measurement, which we call the \emph{harmonic approximate joint measurement}. This observation links the optimal performance of the RAT to the information storability of the full theory, a concept introduced in \cite{MaKi18}. Remarkably, it is found that measurement incompatibility is not a necessary condition for a performance over the classical bound (as it is in quantum theory); a phenomenon called \emph{super information storability} can also enable it. By super information storability we mean that the information storability is larger than the operational dimension of the theory; its existence was recognized and studied in \cite{MaKi18}.
We make a detailed investigation of the performance of RATs in polygon theories. The optimal performances reveal a difference between even and odd polygon theories and, perhaps more surprisingly, also a finer division into different classes in both polygons. Our investigation indicates that the optimal performance in information processing tasks is a route to a deeper understanding of GPTs and their nonclassical features, thereby also to a better understanding of quantum theory.
The present investigation is organized as follows. In Sec. \ref{sec:GPT} we recall the needed definitions and machinery of general probabilistic theories. Sec. \ref{sec:IS} focuses on the concepts of decoding power of a measurement and information storability of a whole theory. In Sec. \ref{sec:APPROX} we define a specific kind of approximate joint measurement for any collection of measurements, which turns out to be a useful tool. In Sec. \ref{sec:RAT} we are finally ready to define random access tests and show how they link to the earlier concepts. Sec. \ref{sec:MAX} connects the definite success of special random access tests to maximal incompatibility. In Sec. \ref{sec:POLY} we present a detailed study of random access tests in polygon state spaces and demonstrate how the maximal success probabilities separate different theories.
\section{General probabilistic theories}\label{sec:GPT}
General probabilistic theories constitute a generalized framework for quantum and classical theories based on operational principles. In addition to quantum and classical theories, GPTs include countless toy theories in which various operational features and tasks can be tested and explored. This enables us to compare different theories to each other based on how these features behave in them. In particular, looking at the known nonclassical features of quantum theory (such as incompatibility \cite{BuHeScSt13,BaGaGhKa13}, steering \cite{StBu14,Banik15} and nonlocality \cite{PoRo94,JaGoBaBr11}) in this more general operational framework helps us understand what makes quantum theory special among all other possible theories. Furthermore, studying these features in the full scope of GPTs gives us insight into the features themselves, which deepens our understanding of them and helps us make connections between different features.
GPTs are built around operational concepts such as \emph{preparations}, \emph{transformations} and \emph{measurements}, which are used to describe a physical experiment. The preparation procedure involves preparing a (physical) system in a \emph{state} that contains the information about the system's properties. The set of possible states is described by a state space $\mathcal{S}$ which is taken to be a compact convex subset of a finite-dimensional vector space $\mathcal{V}$. Whereas compactness and finite-dimensionality are technical assumptions which are often made to simplify the mathematical treatment of the theory, convexity follows from the possibility to have probabilistic mixtures of different preparation devices: if we prepare the system in a state $s_1\in \mathcal{S}$ with probability $p \in [0,1]$ or state $s_2 \in \mathcal{S}$ with probability $1-p \in [0,1]$ in different rounds of the experiment, then the prepared state is statistically described by the mixture $ps_1+(1-p)s_2$ and must thus be a valid state in $\mathcal{S}$. The extreme points of $\mathcal{S}$ are called pure and the set of pure states is denoted by $\mathcal{S}^{ext}$. If a state is not pure, then it is called mixed.
In the popular ordered vector space formalism (see e.g. \cite{Lami17, Plavala21} for more details) the state space $\mathcal{S}$ is embedded as a compact convex base of a closed, generating proper cone $\mathcal{V}_+$ in a finite-dimensional ordered vector space $\mathcal{V}$. This means that $\mathcal{V}_+$ is convex, it spans $\mathcal{V}$, it satisfies $\mathcal{V}_+ \cap - \mathcal{V}_+ = \{0\}$ and that every element $x \in \mathcal{V}_+ \setminus \{0\}$ has a unique base decomposition $x = \alpha s$, where $\alpha>0$ and $s \in \mathcal{S}$. In this case the state space can be expressed as
\begin{align*}
\mathcal{S} = \{ x \in \mathcal{V} \, | \, x \geq 0, \ u(x)=1 \},
\end{align*}
where the partial order $\geq$ in $\mathcal{V}$ is the one induced by the proper cone $\mathcal{V}_+$ in the usual way, i.e., $x \geq y$ if and only if $x-y \in \mathcal{V}_+$, and $u$ is an order unit in the dual space $\mathcal{V}^*$, or equivalently, a strictly positive functional on $\mathcal{V}$.
The measurement events are described by \emph{effects} which are taken to be affine functionals from $\mathcal{S}$ to the interval $[0,1]$; for an effect $e: \mathcal{S} \to [0,1]$ we interpret $e(s)$ as the probability that the measurement event corresponding to $e$ is detected when the system is in a state $s \in \mathcal{S}$. The affinity of the effects follows from the statistical correspondence between states and effects according to which the effects respect the convex structure of the states so that
$$
e(ps_1+(1-p)s_2) = pe(s_1) + (1-p)e(s_2)
$$
for all $p\in [0,1]$ and $s_1,s_2 \in \mathcal{S}$.
The set of effects on a state space $\mathcal{S}$ is called the effect space of $\mathcal{S}$ and it is denoted by $\mathcal{E}(\mathcal{S})$. Two distinguished effects are the zero effect $o$ and the unit effect $u$ for which $o(s)=0$ and $u(s)=1$ for all $s \in \mathcal{S}$. The effect space is clearly convex, and again the extreme elements of $\mathcal{E}(\mathcal{S})$ are called pure and others are mixed. The set of pure effects on a state space $\mathcal{S}$ is denoted by $\mathcal{E}^{ext}(\mathcal{S})$.
In the ordered vector space formalism we extend the effects to linear functionals, and in this case we can express the effect space as
\begin{align*}
\mathcal{E}(\mathcal{S}) = \mathcal{V}^*_+ \cap (u-\mathcal{V}^*_+) = \{ e \in \mathcal{V}^* \, | \, o \leq e \leq u \},
\end{align*}
where the partial order on the dual space $\mathcal{V}^*$ is induced by the dual cone $\mathcal{V}^*_+ = \{f \in \mathcal{V}^* \, | \, f(x) \geq 0 \ \forall x \in \mathcal{V}_+ \}$.
An important class of effects is the set of indecomposable effects. Following \cite{KiNuIm10} we say that a nonzero effect $e\in \mathcal{E}(\mathcal{S})$ is indecomposable if the decomposition $e=e_1+e_2$ for some two nonzero effects $e_1, e_2 \in \mathcal{E}(\mathcal{S})$ implies that $e = c_1 e_1= c_2 e_2$ for some $c_1,c_2 > 0$. It was shown in \cite{KiNuIm10} that every effect can be expressed as a finite sum of indecomposable effects and that every indecomposable effect can be expressed as a positive multiple of a pure indecomposable effect. Geometrically, indecomposable effects are exactly those that lie on the extreme rays of the dual cone $\mathcal{V}^*_+$. We denote the set of indecomposable effects on $\mathcal{S}$ by $\mathcal{E}_{ind}(\mathcal{S})$ and the set of pure indecomposable effects by $\mathcal{E}^{ext}_{ind}(\mathcal{S})$.
A \emph{measurement} on a state space $\mathcal{S}$ with a finite number of outcomes is a mapping $\mathsf{M}: x \mapsto \mathsf{M}_x$ from a finite outcome set $\Omega$ to the set of effects $\mathcal{E}(\mathcal{S})$ such that $\sum_{x \in \Omega} \mathsf{M}_x(s) = 1$ for all $s \in \mathcal{S}$, or equivalently, $\sum_{x \in \Omega} \mathsf{M}_x = u$. We interpret $\mathsf{M}_x(s)$ as the probability that an outcome $x$ is obtained when the system in a state $s \in \mathcal{S}$ is measured with the measurement $\mathsf{M}$. We denote the set of all measurements on $\mathcal{S}$ by $\mathcal{M}(\mathcal{S})$.
The set of measurements with a fixed outcome set is convex, and we denote the set of all extreme measurements with any outcome set by $\mathcal{M}^{ext}(\mathcal{S})$. We say that a measurement is indecomposable if each of its nonzero effects is indecomposable. We denote the set of indecomposable measurements on $\mathcal{S}$ by $\mathcal{M}_{ind}(\mathcal{S})$ and the set of indecomposable extreme measurements by $\mathcal{M}^{ext}_{ind}(\mathcal{S})$. A measurement $\mathsf{T}$ with an outcome set $\Omega$ is said to be trivial if it does not give any information about the measured state, i.e., it is of the form $\mathsf{T}_x = p_x u$ for all $x \in \Omega$ for some probability distribution $p:=(p_x)_{x \in \Omega}$ on $\Omega$.
There are two basic ways of forming new measurements: \emph{mixing} and \emph{post-processing}. First, as pointed out above, the set of measurements with a fixed set of outcomes is convex and thus we can make convex mixtures of measurements: if $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(n)}$ are measurements with an outcome set $\Omega$ and $(p_i)_{i=1}^n$ is a probability distribution, then $\sum_{i=1}^n p_i \mathsf{M}^{(i)}$ is a measurement with effects $\sum_{i=1}^n p_i \mathsf{M}^{(i)}_x$ for all $x\in \Omega$. Clearly, those measurements that cannot be written as a nontrivial mixture are the extreme measurements.
Second, we say that a measurement $\mathsf{N}$ with an outcome set $\Lambda$ is a post-processing of a measurement $\mathsf{M}$ with an outcome set $\Omega$ if there exists a stochastic matrix $\nu:= (\nu_{xy})_{x \in \Omega, y \in \Lambda}$, i.e., $\nu_{xy} \geq 0$ for all $x \in \Omega, y \in \Lambda$ and $\sum_{y \in \Lambda} \nu_{xy} =1$ for all $x \in \Omega$, such that
\begin{align*}
\mathsf{N}_y = \sum_{x \in \Omega} \nu_{xy} \mathsf{M}_x
\end{align*}
for all $y \in \Lambda$. In this case we denote $\mathsf{N} = \nu \circ \mathsf{M}$. The post-processing relation defines a preorder on the set of measurements as follows: $\mathsf{N} \preceq \mathsf{M}$ if and only if $\mathsf{N} = \nu \circ \mathsf{M}$ for some stochastic matrix $\nu$. The set of maximal elements in $\mathcal{M}(\mathcal{S})$ with respect to the post-processing preorder is known to be exactly the set of indecomposable measurements $\mathcal{M}_{ind}(\mathcal{S})$ \cite{MaMu90a, FiHeLe18}.
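As a simple illustration of post-processing (a toy example of our own), let $\mathsf{M}$ be a three-outcome measurement with $\Omega=\{1,2,3\}$ and let $\Lambda=\{a,b\}$. Choosing the stochastic matrix with $\nu_{1a}=\nu_{2a}=\nu_{3b}=1$ and all other entries zero yields the two-outcome measurement $\mathsf{N}=\nu\circ\mathsf{M}$ with $\mathsf{N}_a=\mathsf{M}_1+\mathsf{M}_2$ and $\mathsf{N}_b=\mathsf{M}_3$; every coarse-graining of outcomes is a post-processing of this type.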
\begin{example}[Quantum theory]
Let $\mathcal{H}$ be a $d$-dimensional Hilbert space. We denote by $\mathcal{L(H)}$ the algebra of linear operators on $\mathcal{H}$ and by $\mathcal{L}_s(\mathcal{H})$ the real vector space of self-adjoint operators on $\mathcal{H}$. The state space of a $d$-dimensional quantum theory is defined as
\begin{align*}
\mathcal{S(H)} = \{ \varrho \in \mathcal{L}_s(\mathcal{H}) \, | \, \varrho \geq O, \ \tr{\varrho}=1\},
\end{align*}
where $O$ is the zero-operator and the partial order is induced by the cone of positive semi-definite matrices according to which a self-adjoint matrix $A$ is positive semi-definite, $A \geq O$, if and only if $\ip{\varphi}{A \varphi} \geq 0$ for all $\varphi \in \mathcal{H}$. The pure states are exactly the rank-1 projections on $\mathcal{H}$.
The set of effects $\mathcal{E}(\mathcal{S(H)})$ can be shown (see e.g. \cite{MLQT12}) to be isomorphic to the set $\mathcal{E(H)}$ of self-adjoint operators bounded between $O$ and $\mathbbm{1}$, where $\mathbbm{1}$ is the identity operator on $\mathcal{H}$, i.e.,
\begin{align*}
\mathcal{E}(\mathcal{S(H)}) \simeq \mathcal{E(H)} := \{ E \in \mathcal{L}_s(\mathcal{H}) \, | \, O \leq E \leq \mathbbm{1} \}.
\end{align*}
The pure effects then correspond to the projections on $\mathcal{H}$ and the indecomposable effects to the rank-1 effect operators.
Measurements with a finite number of outcomes on $\mathcal{H}$ are described by positive operator-valued measures (POVMs), i.e., maps of the form $M: x \mapsto M(x)$ from a finite outcome set $\Omega$ to the set of effect operators $\mathcal{E(H)}$ such that $\sum_{x \in \Omega} M(x) = \mathbbm{1}$. The indecomposable POVMs are those whose nonzero effects are all rank-1 operators.
\end{example}
\section{Decoding power and information storability}\label{sec:IS}
\subsection{Base norms and order unit norms}
Let us start by introducing some more structure on GPTs that is needed in order to define decoding power and information storability (for more details on GPTs see e.g. \cite{Lami17, Plavala21}). Let $\mathcal{S}$ be a state space on an ordered vector space $\mathcal{V}$. On the vector spaces $\mathcal{V}$ and $\mathcal{V}^*$ we can define two natural norms that are induced by the cones $\mathcal{V}_+$ and $\mathcal{V}_+^*$ respectively. We will introduce them next.
In the ordered vector space $\mathcal{V}$ we have that $\mathcal{S}$ is a compact base of the closed, generating proper cone $\mathcal{V}_+$ so that in particular every element $x \in \mathcal{V}$ can be expressed as $x= \alpha y- \beta z$ for some $\alpha, \beta \geq 0$ and $y,z \in \mathcal{S}$. The \emph{base norm} $\no{\cdot}_\mathcal{V}$ on $\mathcal{V}$ is then defined as
\begin{align*}
\no{x}_\mathcal{V} = \inf \{ \alpha + \beta \, | \, \ x=\alpha y- \beta z, \ \alpha, \beta \geq 0, \ y,z \in \mathcal{S} \}
\end{align*}
for all $x \in \mathcal{V}$. It follows that if $x \in \mathcal{V}_+$, then $\no{x}_\mathcal{V} = u(x)$. In particular we have that $\mathcal{S} = \{ x \in \mathcal{V}_+ \, | \, \no{x}_\mathcal{V} = 1\}$.
In the dual space $\mathcal{V}^*$ we have the order unit $u \in \mathcal{V}^*_+$, i.e., for every $f \in \mathcal{V}^*$ there exists $\lambda > 0$ such that $f \leq \lambda u$, so that we can define the \emph{order unit norm} $\no{\cdot}_{\mathcal{V}^*}$ on $\mathcal{V}^*$ as
\begin{align*}
\no{f}_{\mathcal{V}^*} = \inf \{ \lambda \geq 0 \, | \, - \lambda u \leq f \leq \lambda u\}
\end{align*}
for all $f \in \mathcal{V}^*$. It follows that $\mathcal{E}(\mathcal{S}) = \{ f \in \mathcal{V}^*_+ \, | \, \no{f}_{\mathcal{V}^* } \leq 1\}$. Furthermore, it can be shown that the base and the order unit norm are dual to each other, i.e.,
\begin{align*}
\no{x}_{\mathcal{V}} = \sup_{ \no{f}_{\mathcal{V}^*} \leq 1} |f(x)|, \qquad \no{f}_{\mathcal{V}^*} = \sup_{ \no{x}_{\mathcal{V}} \leq 1} |f(x)|.
\end{align*}
In particular, we can express the order unit norm on $\mathcal{V}^*$ as the supremum norm over $\mathcal{S}$ so that
\begin{align} \label{eq:sup-norm}
\no{f}_{\mathcal{V}^*} = \sup_{ s \in \mathcal{S}} |f(s)|
\end{align}
for all $f \in \mathcal{V}^*$. Since $\mathcal{S}$ is compact, the supremum is always attained.
As we will mostly consider the properties of effects and measurements, we will be only using the norm $\no{ \cdot}_{\mathcal{V}^*}$. For this reason, in order to simplify our notation, from now on we will write $\no{ \cdot}$ instead of $\no{ \cdot}_{\mathcal{V}^*}$. We note that in particular for effects and other elements in $\mathcal{V}^*_+$ the absolute values in Eq. \eqref{eq:sup-norm} can be removed, and thus we have that
\begin{align*}
\no{e} = \max_{ s \in \mathcal{S}} e(s)
\end{align*}
for all $e \in \mathcal{V}^*_+$.
\begin{example}[Quantum theory]
In the case of quantum theory $\mathcal{S(H)}$ we note that for an effect $e \in \mathcal{E}(\mathcal{S(H)})$ the norm $\no{e}$ corresponds to the operator norm $\no{ E }$ of the corresponding effect operator $E \in \mathcal{E(H)}$, which in finite-dimensional quantum theory equals the maximal eigenvalue of $E$.
\end{example}
\subsection{Decoding power of a measurement}
For a measurement $\mathsf{M} \in \mathcal{M}(\mathcal{S})$ with an outcome set $\Omega$, we denote
\begin{align*}
\lambda_{max}(\mathsf{M}) := \sum_{x \in \Omega} \no{\mathsf{M}_x} \, .
\end{align*}
An operational interpretation of $\lambda_{max}(\mathsf{M})$ is the following: two parties, a sender and a receiver, communicate by transferring physical systems, where messages $1,\ldots,n$ are encoded in states $s_1,\ldots,s_n$. The receiver is bound to use $\mathsf{M}$ to decode the messages, but the sender can freely choose the states. For each outcome $x$ of $\mathsf{M}$, the sender hence chooses a state $s_x$ such that the correct inference is as likely as possible, i.e., $\mathsf{M}_x(s_x)$ is maximal. Assuming that the initial messages occur with uniform probability, the number $\lambda_{max}(\mathsf{M}) / n$ is the maximal probability for the receiver to infer the correct messages by using $\mathsf{M}$. Based on this operational motivation, we call $\lambda_{max}(\mathsf{M})$ the \emph{decoding power} of $\mathsf{M}$.
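Let us record two elementary observations about the range of this quantity. Since $\sum_{x\in\Omega}\no{\mathsf{M}_x}\geq\no{\sum_{x\in\Omega}\mathsf{M}_x}=\no{u}=1$, every measurement satisfies $\lambda_{max}(\mathsf{M})\geq 1$, and a trivial measurement attains this minimum because $\no{p_xu}=p_x$. At the other end, an $n$-outcome measurement that perfectly distinguishes $n$ states has $\no{\mathsf{M}_x}=1$ for every outcome $x$ and hence $\lambda_{max}(\mathsf{M})=n$.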
In the following we show that $\lambda_{max}$ has certain monotonicity properties that make it a reasonable quantification of the quality of measurements. In particular, we see that the decoding power behaves under mixing and post-processing in the following way.
\begin{proposition}\label{prop:lmax-pp}
For any measurement $\mathsf{M}$ with an outcome set $\Omega$ and post-processing $\nu =(\nu_{xy})_{x \in \Omega, y \in \Lambda}$, we have that
\begin{align*}
\lambda_{max}(\nu \circ \mathsf{M}) \leq \lambda_{max}(\mathsf{M}).
\end{align*}
\end{proposition}
\begin{proof}
With a direct calculation using the triangle inequality and the absolute homogeneity of $\no{\cdot }$ as well as the stochasticity of $\nu$ we see that
\begin{align*}
\lambda_{max}(\nu \circ \mathsf{M}) & = \sum_{y \in \Lambda} \no{(\nu \circ \mathsf{M})_y}
= \sum_{y \in \Lambda} \no{ \sum_{x \in \Omega} \nu_{xy} \mathsf{M}_x } \\
&\leq \sum_{y \in \Lambda} \sum_{x \in \Omega} \nu_{xy} \no{ \mathsf{M}_x } = \sum_{x \in \Omega} \left( \sum_{y \in \Lambda} \nu_{xy} \right) \no{ \mathsf{M}_x } \\
& = \sum_{x \in \Omega} \no{ \mathsf{M}_x}
= \lambda_{max}(\mathsf{M}) \, .
\end{align*}
\end{proof}
\begin{proposition}\label{prop:lmax-mix}
For any collection of measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(n)}$ with an outcome set $\Omega$ and probability distribution $p:=(p_i)_{i=1}^n$ we have that
\begin{align*}
\lambda_{max}\left(\sum_{i=1}^n p_i \mathsf{M}^{(i)}\right) \leq \sum_{i=1}^n p_i \lambda_{max}(\mathsf{M}^{(i)}) \leq \max_{i\in \{1, \ldots, n\}} \lambda_{max}(\mathsf{M}^{(i)}).
\end{align*}
\end{proposition}
\begin{proof}
With a direct calculation using the triangle inequality and the absolute homogeneity of $\no{\cdot }$ as well as the normalization of $p$ we see that
\begin{align*}
\lambda_{max}\left(\sum_{i=1}^n p_i \mathsf{M}^{(i)}\right) & = \sum_{x \in \Omega} \no{\sum_{i=1}^n p_i \mathsf{M}^{(i)}_x} \leq \sum_{x \in \Omega} \sum_{i=1}^n p_i \no{\mathsf{M}^{(i)}_x} \\
&= \sum_{i=1}^n p_i \left( \sum_{x \in \Omega} \no{ \mathsf{M}^{(i)}_x} \right) = \sum_{i=1}^n p_i \lambda_{max}(\mathsf{M}^{(i)}) \\
& \leq \max_{i\in \{1, \ldots, n\}} \lambda_{max}(\mathsf{M}^{(i)}).
\end{align*}
\end{proof}
In the context of resource theories of measurements, the decoding power of a measurement is related to the robustness of measurements, defined as the minimal amount of noise needed to make a measurement trivial, studied in quantum theory e.g. in \cite{SkLi19, OsBi19} and in GPTs in \cite{Kuramochi20b}.
\subsection{Information storability}
The previously defined decoding power is a quality of a single measurement. As we saw earlier, it is the maximal probability for a receiver to infer correct messages when the states used in encoding are optimized. We can also think of a scenario where the measurement is optimized over all possible measurements in the given theory. This quantity is known as the \emph{information storability} \cite{MaKi18} and for a theory with a state space $\mathcal{S}$ we denote it as
\begin{align}\label{eq:lmax-state}
\lambda_{max}(\mathcal{M}(\mathcal{S})) := \sup_{\mathsf{M} \in \mathcal{M}(\mathcal{S})} \lambda_{max}(\mathsf{M}) \, .
\end{align}
In the same way we can define the information storability for any subset $\mathcal{T}$ of measurements. This is particularly relevant when we study restrictions on measurements \cite{FiGuHeLe20}. For a subset $\mathcal{T} \subseteq \mathcal{M}(\mathcal{S})$ we denote the information storability of $\mathcal{T}$ as
\begin{align*}
\lambda_{max}(\mathcal{T}) := \sup_{\mathsf{M} \in \mathcal{T}} \lambda_{max}(\mathsf{M}) \, .
\end{align*}
In the cases where there is no risk of confusion we will use the simpler notation $\lambda_{max}(\mathcal{S})$ for $\lambda_{max}(\mathcal{M}(\mathcal{S}))$.
\begin{example}[Quantum theory]\label{ex:quantum-is}
In a finite $d$-dimensional quantum theory for a POVM $M$ we have
\begin{align*}
\lambda_{max}(M) = \sum_x \no{M(x)} \leq \sum_x \tr{M(x)} = \tr{ \sum_x M(x)} = \tr{\mathbbm{1}} = d
\end{align*}
and we also see that this upper bound is reached whenever every operator $M(x)$ is rank-1.
\end{example}
Based on Example \ref{ex:quantum-is} one could presume that the information storability is always the same as the operational dimension of the theory, as is the case for quantum theory. The latter dimension is defined as the maximal number of perfectly distinguishable states. The operational dimension of a theory clearly is a lower bound for the information storability. However, the information storability can be larger than the operational dimension and we call this phenomenon \emph{super information storability}. This is the case e.g. in odd polygon theories (see Sec. \ref{sec:POLY}).
From Prop. \ref{prop:lmax-pp} and \ref{prop:lmax-mix} it follows that the supremum in Eq. \eqref{eq:lmax-state} is attained for extreme measurements that are maximal in the post-processing preorder. Thus, in a state space $\mathcal{S}$ we have that
\begin{align}\label{eq:lmax-ind}
\lambda_{max}(\mathcal{S}) = \sup_{\mathsf{M} \in \mathcal{M}(\mathcal{S})} \lambda_{max}(\mathsf{M}) = \sup_{\mathsf{M} \in \mathcal{M}^{ext}_{ind}(\mathcal{S})} \lambda_{max}(\mathsf{M}).
\end{align}
We note that the set of extreme indecomposable measurements $\mathcal{M}^{ext}_{ind}(\mathcal{S})$ is exactly the set of extreme simulation irreducible measurements in \cite{FiHeLe18}.
For state spaces with a particular structure we can show a simple way of calculating the information storability of the theory. In particular, we will use this result for polygon state spaces in Sec. \ref{sec:POLY}.
\begin{proposition}\label{prop:lambda-max-constant}
Suppose that there exists a state $s_0 \in \mathcal{S}$ such that $e(s_0) = f(s_0)=: \lambda_0$ for all $e,f \in \mathcal{E}^{ext}_{ind}(\mathcal{S})$. Then $\lambda_{max}(\mathsf{M}) = 1/ \lambda_0$ for all $\mathsf{M} \in \mathcal{M}_{ind}(\mathcal{S})$ and therefore $\lambda_{max}(\mathcal{S})=1/ \lambda_0$.
\end{proposition}
\begin{proof}
Let $\mathsf{M} \in \mathcal{M}_{ind}(\mathcal{S})$ with an outcome set $\Omega$. Since each effect $\mathsf{M}_x$ is indecomposable, by \cite{KiNuIm10} we have that $\mathsf{M}_x = \alpha_x m_x$ for some $\alpha_x > 0$ and $m_x \in \mathcal{E}^{ext}_{ind}(\mathcal{S})$ for all $x \in \Omega$. Furthermore, by \cite{KiNuIm10}, for all $m_x$ there exists a pure state $s_x \in \mathcal{S}^{ext}$ such that $m_x(s_x)=1$ so that $\no{\mathsf{M}_x} = \alpha_x$ and $\lambda_{max}(\mathsf{M}) = \sum_{x \in \Omega} \alpha_x$.
From the normalization $\sum_{x \in \Omega} \mathsf{M}_x= u$ and the assumption that $e(s_0) = \lambda_0$ for all $e \in \mathcal{E}^{ext}_{ind}(\mathcal{S})$ it follows that
\begin{align*}
\dfrac{1}{\lambda_0} = \dfrac{1}{\lambda_0} u(s_0) = \dfrac{1}{\lambda_0} \sum_{x \in \Omega} \alpha_x m_x(s_0) = \sum_{x \in \Omega} \alpha_x = \lambda_{max}(\mathsf{M}).
\end{align*}
The last claim follows from this and Eq. \eqref{eq:lmax-ind}.
\end{proof}
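As an illustration of Prop.~\ref{prop:lambda-max-constant}, consistent with Example~\ref{ex:quantum-is}, in $d$-dimensional quantum theory one may take $s_0$ to be the maximally mixed state $\tfrac{1}{d}\mathbbm{1}$: every pure indecomposable effect is a rank-1 projection $P$ and $\tr{\tfrac{1}{d}P}=\tfrac{1}{d}=:\lambda_0$, so the proposition gives $\lambda_{max}(\mathcal{S(H)})=1/\lambda_0=d$.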
\section{Harmonic approximate joint measurements}\label{sec:APPROX}
A \emph{joint measurement} of measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ with outcome sets $\Omega_1, \ldots, \Omega_k$ is a measurement $\mathsf{J}$ with the product outcome set $\Omega_1 \times \cdots \times \Omega_k$ and satisfying the marginal properties
\begin{align*}
\sum_{x_2 \in \Omega_2} \cdots \sum_{x_k \in \Omega_k} \mathsf{J}_{x_1,\ldots, x_k} &= \mathsf{M}^{(1)}_{x_1} \quad \forall x_1 \in \Omega_1, \\
\vdots \qquad \\
\sum_{x_1 \in \Omega_1} \cdots \sum_{x_{k-1} \in \Omega_{k-1}} \mathsf{J}_{x_1,\ldots, x_k} &= \mathsf{M}^{(k)}_{x_k} \quad \forall x_k \in \Omega_k\, .
\end{align*}
Measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ are called \emph{compatible} if they have a joint measurement and otherwise they are \emph{incompatible}.
In classical theories, i.e., in theories whose state spaces are simplices, all measurements are compatible, whereas in every nonclassical GPT there are pairs of measurements that are incompatible \cite{Plavala16,Kuramochi20}. However, even a set of incompatible measurements can be made compatible if we allow for some amount of error. Most commonly this error is quantified by the amount of noise that needs to be added to an incompatible set of measurements in order to make them compatible \cite{BuHeScSt13}. Formally, for each $(\lambda_1, \ldots, \lambda_k) \in [0,1]^k$ and every choice of trivial measurements $\mathsf{T}^{(1)}, \ldots, \mathsf{T}^{(k)}$ the measurements $\lambda_1 \mathsf{M}^{(1)} + (1-\lambda_1) \mathsf{T}^{(1)}, \ldots, \lambda_k \mathsf{M}^{(k)} + (1-\lambda_k) \mathsf{T}^{(k)}$ are considered to be \emph{noisy versions} of the measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$, where the amount of noise added to each observable $\mathsf{M}^{(i)}$ is characterized by the parameter $1-\lambda_i$. In this case we call a joint measurement $\tilde{\mathsf{J}}$ of any noisy versions of the measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ an \emph{approximate joint measurement} of $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$.
Following \cite{FiHeLe17} we can form a class of approximate joint measurements for $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ by fixing convex weights $\lambda_1,\ldots,\lambda_k$ (so that $\lambda_1+ \cdots+ \lambda_k=1$) and probability distributions $p^{(1)},\ldots,p^{(k)}$ on the sets $\Omega_2\times \cdots \times \Omega_k, \ldots, \Omega_1\times \cdots \times \Omega_{k-1}$ respectively and by setting
\begin{align}\label{eq:approx}
\tilde{\mathsf{J}}_{x_1,\ldots,x_k} = \lambda_1 p^{(1)}_{x_2,\ldots,x_k} \mathsf{M}^{(1)}_{x_1} + \cdots +\lambda_k p^{(k)}_{x_1,\ldots,x_{k-1}} \mathsf{M}^{(k)}_{x_k}
\end{align}
for all $(x_1, \ldots, x_k) \in \Omega_1 \times \cdots \times \Omega_k$.
The marginals of $\tilde{\mathsf{J}}$ are
\begin{align*}
\sum_{x_2 \in \Omega_2} \cdots \sum_{x_k \in \Omega_k} \tilde{\mathsf{J}}_{x_1,\ldots, x_k} &= \lambda_1 \mathsf{M}^{(1)}_{x_1} + (1-\lambda_1) \mathsf{T}^{(1)}_{x_1} \quad \forall x_1 \in \Omega_1, \\
\vdots \qquad \\
\sum_{x_1 \in \Omega_1} \cdots \sum_{x_{k-1} \in \Omega_{k-1}} \tilde{\mathsf{J}}_{x_1,\ldots, x_k} &= \lambda_k \mathsf{M}^{(k)}_{x_k} + (1-\lambda_k) \mathsf{T}^{(k)}_{x_k} \quad \forall x_k \in \Omega_k ,
\end{align*}
where $\mathsf{T}^{(1)},\ldots,\mathsf{T}^{(k)}$ are trivial measurements.
In the previous construction we are free to choose the convex weights and the probability distributions. It turns out that a particular choice is useful for our following developments. We choose all probability distributions $p^{(1)},\ldots,p^{(k)}$ to be uniform distributions and the convex weights are chosen to be
\begin{align*}
\lambda_i = \frac{h(m_1,...,m_k)}{k\, m_i} \, ,
\end{align*}
where $h(m_1,\ldots,m_k)$ is the harmonic mean of the numbers $m_1,\ldots,m_k$ and $m_i$ is the number of outcomes of $\mathsf{M}^{(i)}$. We denote this specific measurement as $\mathsf{H}^{(1,\ldots,k)}$ and call it \emph{harmonic approximate joint measurement} of $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$. Inserting the specific choices into Eq. \eqref{eq:approx} we observe that the harmonic approximate joint measurement can be written in the form
\begin{align}\label{eq:M-sum-gen}
\mathsf{H}^{(1,\ldots,k)}_{x_1,\ldots, x_k} = \frac{1}{\kappa(m_1,\ldots,m_k)}( \mathsf{M}^{(1)}_{x_1} + \cdots + \mathsf{M}^{(k)}_{x_k} ) \, ,
\end{align}
for all $(x_1, \ldots, x_k) \in \Omega_1 \times \cdots \times \Omega_k$ with
\begin{align*}
\kappa(m_1,\ldots,m_k) := \prod_i m_i \sum_i \frac{1}{m_i} \, .
\end{align*}
We remark that any approximate joint measurement of the form of Eq. \eqref{eq:approx} can be obtained from $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ by a suitable mixing and post-processing, hence can be simulated by using those measurements only \cite{FiHeLe18}. In this sense, they are all trivial approximate joint measurements and among these trivial approximate joint measurements our specific choice $\mathsf{H}^{(1,\ldots,k)}$ stands out by having a particularly symmetric form. Although trivial approximate joint measurements can be formed for any collection of measurements, we will show that the harmonic approximate joint measurement $\mathsf{H}^{(1,\ldots,k)}$ is related to the incompatibility of $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ in an intriguing way. The link is explained in the following sections.
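For instance, if all the measurements have the same number of outcomes, $m_1=\cdots=m_k=m$, then $h(m,\ldots,m)=m$, $\lambda_i=1/k$ and $\kappa(m,\ldots,m)=k\,m^{k-1}$, so that $\mathsf{H}^{(1,\ldots,k)}_{x_1,\ldots,x_k}=\tfrac{1}{km^{k-1}}(\mathsf{M}^{(1)}_{x_1}+\cdots+\mathsf{M}^{(k)}_{x_k})$; in particular, for two $d$-outcome measurements $\mathsf{H}^{(1,2)}_{x_1,x_2}=\tfrac{1}{2d}(\mathsf{M}^{(1)}_{x_1}+\mathsf{M}^{(2)}_{x_2})$.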
\section{Random access tests}\label{sec:RAT}
\subsection{Classical and quantum random access codes}
As a motivation for the later developments, we recall that in the $(n,d)$--\emph{random access code} (RAC), Alice is given $n$ input dits $\vec{x}=(x_1, \ldots, x_n) \in \{0, \ldots, d-1\}^n$ based on which she prepares one dit and sends it to Bob. Bob is then given a number $j \in \{1, \ldots, n\}$ and the task of Bob is to guess the corresponding dit $x_j$ of Alice. The temporal order is here important: Alice does not know $j$ and therefore cannot simply send $x_j$ to Bob. It is clear that, assuming all inputs are sampled uniformly, Bob will make errors. The performance depends on the strategy that Alice and Bob agree to follow. The choices of the inputs given to Alice and Bob are sampled with uniform probability and the average success probability quantifies the quality of their strategy. Generally, we denote by $\bar{P}^{(n,d)}_{c}$ the best average success probability that Alice and Bob can achieve with a coordinated strategy.
For instance, Alice and Bob can agree that Alice always sends the value of the first dit. If Bob has to announce the value of the first dit, he makes no errors. This happens with the probability $1/n$. On the other hand, with the probability $(n-1)/n$ Bob has to announce the value of some other dit, in which case the information received from Alice does not help Bob and he has to make a random choice, thereby guessing the right value with the probability $1/d$. Therefore, with this strategy the average success probability is $(d+n-1)/(nd)$. It can be shown that in the case when $n=2$ this is also the optimal strategy so that $\bar{P}^{(2,d)}_c = 1/2(1+1/d)$ \cite{AmKrRa15,TaHaMaBo15}.
In $(n,d)$--\emph{quantum random access code} (QRAC), Alice is given $n$ input dits $\vec{x}=(x_1, \ldots, x_n) \in \{0, \ldots, d-1\}^n$ based on which she prepares a $d$-dimensional quantum system into a state $\varrho_{\vec{x}} \in \mathcal{S}(\mathcal{H}_d)$ and sends it to Bob. Bob is then given a number $j \in \{1, \ldots, n\}$ and the task of Bob is to guess the corresponding dit $x_j$ of Alice by performing a measurement on the state sent by Alice. If Bob performs a measurement described by a $d$-outcome POVM $M^{(j)}$, the average success probability of QRAC when the inputs are assumed to be uniformly distributed is given by
\begin{align*}
\dfrac{1}{n d^n} \sum_{\vec{x}} \tr{\varrho_{\vec{x}} (M^{(1)}(x_1) + \cdots + M^{(n)}(x_n) )}.
\end{align*}
The best average success probability $\bar{P}^{(n,d)}_{q}$ of a $(n,d)$-QRAC is then obtained by optimizing over the states and the measurements so that
\begin{align*}
\bar{P}^{(n,d)}_{q} = \sup_{ M^{(1)}, \ldots, M^{(n)} \in \mathcal{M}(\mathcal{H}_d)} \dfrac{1}{n d^n} \sum_{\vec{x}} \no{M^{(1)}(x_1) + \cdots + M^{(n)}(x_n)},
\end{align*}
where the norm is the operator norm on $\mathcal{L}(\mathcal{H}_d)$. In the case that $n=2$ it is known that $\bar{P}^{(2,d)}_q = 1/2(1+1/\sqrt{d}) > \bar{P}^{(2,d)}_c$ and that this bound can only be attained with rank-1 projective and mutually unbiased POVMs \cite{FaKa19}.
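For concreteness, in the qubit case $d=2$ these formulas give $\bar{P}^{(2,2)}_c=3/4$ and $\bar{P}^{(2,2)}_q=\tfrac{1}{2}(1+\tfrac{1}{\sqrt{2}})\approx 0.854$, the latter being attained with two qubit measurements in mutually unbiased bases.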
\subsection{Random access tests in GPTs} \label{subsec:gpt-rac}
In the following we formulate a class of tests that are similar to QRACs but are free from certain assumptions. Instead of restricting the number of measurement outcomes to match the (operational) dimension of the theory, we look at tests where the number of outcomes can be arbitrary. Furthermore, we look at these tests from the point of view of testing properties of collections of measurements in a given GPT.
The tests are defined as follows: Let $\mathcal{S}$ be a state space and let $\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}$ be measurements on $\mathcal{S}$ with outcome sets $\Omega_1, \ldots, \Omega_k$ with $m_1,\ldots,m_k$ outcomes respectively. As in the QRAC setting, Alice prepares a state $s_{\vec{x}} \in \mathcal{S}$, where $\vec{x}=(x_1,\ldots,x_k) \in \Omega_1 \times \cdots \times \Omega_k$. The task is the same as before: Bob is told an index $j\in \{1,\ldots,k\}$ and he should guess the label $x_j$ by performing a measurement $\mathsf{M}^{(j)}$ on the state $s_{\vec{x}}$. When the queries are uniformly distributed the average success probability of this \emph{random access test (RAT)} is then
\begin{align*}
\frac{1}{km_1\cdots m_k} \sum_{\vec{x}} \left(\mathsf{M}_{x_1}^{(1)} + \cdots + \mathsf{M}^{(k)}_{x_k} \right)(s_{\vec{x}})\, .
\end{align*}
When Bob is required to use the fixed measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ and one optimizes over the states, the maximum average success probability becomes
\begin{align}\label{eq:P-def}
\bar{P}(\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}):=\frac{1}{km_1\cdots m_k} \sum_{\vec{x}} \no{ \mathsf{M}_{x_1}^{(1)} + \cdots + \mathsf{M}^{(k)}_{x_k}},
\end{align}
where the norm is now the order unit norm in the dual space $\mathcal{V}^*$ of the ordered vector space $\mathcal{V}$ in which the state space $\mathcal{S}$ is embedded. The number $\bar{P}(\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)})$ is therefore the maximal success probability of the test for Alice and Bob if they are bound to use the measurements $\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}$ but free to choose the states.
A direct calculation reveals the following formula.
\begin{proposition}
\begin{align}\label{eq:connection}
\bar{P}(\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)})=\tfrac{1}{h(m_1,\ldots,m_k)} \, \lambda_{max}(\mathsf{H}^{(1,\ldots,k)})\, .
\end{align}
\end{proposition}
This observation shows that the two operationally accessible quantities $\bar{P}(\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)})$ and $\lambda_{max}(\mathsf{H}^{(1,\ldots,k)})$ are connected in a simple way. However, let us emphasize that in real experiments one would find lower bounds for these quantities as they are defined via optimal states.
From Eq. \eqref{eq:P-def} (or equivalently from Eq. \eqref{eq:connection}) one gets the following upper bound
\begin{align*}
\bar{P}(\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}) \leq \frac{1}{k} \sum_{i=1}^k \frac{\lambda_{max}(\mathsf{M}^{(i)})}{m_i}\, .
\end{align*}
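This estimate follows from the triangle inequality: $\no{\mathsf{M}^{(1)}_{x_1}+\cdots+\mathsf{M}^{(k)}_{x_k}}\leq\sum_{i=1}^k\no{\mathsf{M}^{(i)}_{x_i}}$, and in the sum over $\vec{x}$ each term $\no{\mathsf{M}^{(i)}_{x_i}}$ occurs $\prod_{j\neq i}m_j$ times, so that $\sum_{\vec{x}}\no{\mathsf{M}^{(i)}_{x_i}}=\big(\prod_{j\neq i}m_j\big)\lambda_{max}(\mathsf{M}^{(i)})$; dividing by $km_1\cdots m_k$ gives the right-hand side above.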
Thus, the maximum average success probability of a random access test is upper bounded by the weighted average of the decoding powers of the measurements used in the test. Clearly $\lambda_{max}(\mathsf{M}^{(i)}) \leq m_i$ for all $i \in \{1, \ldots, k\}$, where the equality holds only when $\mathsf{M}^{(i)}$ perfectly distinguishes $m_i$ (not necessarily different) states which is the only case when the bound is trivial. However, we note that in order to calculate this bound one has to be familiar with the description of the measurements $\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}$.
On the other hand, if we do not know the exact specification of the measurements but only know that $\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}$ belong to a set $\mathcal{T} \subseteq \mathcal{M}(\mathcal{S})$ that is closed under post-processing and mixing, implying that $\mathsf{H}^{(1,\ldots,k)}$ belongs to $\mathcal{T}$, then Eq. \eqref{eq:connection} gives an upper bound
\begin{align}\label{eq:universal-bound}
\bar{P}(\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}) \leq \tfrac{1}{h(m_1,\ldots,m_k)} \lambda_{max}(\mathcal{T})\, .
\end{align}
The minimal subset $\mathcal{T} \subseteq \mathcal{M}(\mathcal{S})$ that includes the measurements $\mathsf{M}^{(1)}$, $\ldots$, $\mathsf{M}^{(k)}$ and that is closed with respect to mixing and post-processing is the simulation closure of the set $\{\mathsf{M}^{(i)}\}_{i=1}^k$ (see \cite{FiHeLe18}), i.e., a set formed of all possible mixtures and/or post-processings of the measurements $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$. From Prop. \ref{prop:lmax-pp} and \ref{prop:lmax-mix} we see that in this case $\lambda_{max}(\mathcal{T}) = \max_i \lambda_{max}(\mathsf{M}^{(i)})$.
We note that in the case when $\lambda_{max}(\mathcal{T})=m_1=\cdots= m_k=d$, where $d$ is the operational dimension of the theory (such as in the case of $(k,d)$--QRAC where $\mathcal{T}= \mathcal{M}(\mathcal{H}_d)$) the right hand side of Eq. \eqref{eq:universal-bound} is 1 and therefore it does not give a nontrivial bound. However, in different settings the bound can be nontrivial.
\begin{example}[$n$-tomic measurements]
Let $\mathcal{T}$ consist of all measurements on $\mathcal{S}$ which can be simulated with measurements with $n$ or less outcomes, i.e., every measurement in $\mathcal{T}$ is obtained as a mixture and/or post-processing of some set of $n$-outcome measurements. Measurements of this type are called $n$-tomic and have been considered e.g. in \cite{FiGuHeLe20, KlCa16, GuBaCuAc17, Huetal18}. It was shown in \cite{FiGuHeLe20} that $\lambda_{max}(\mathcal{T}) \leq n$ so that Eq. \eqref{eq:universal-bound} gives $\bar{P}(\mathsf{M}^{(1)},\ldots,\mathsf{M}^{(k)}) \leq n/h(m_1,\ldots,m_k) $. For example, if $n=k=2$, then we obtain $\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) \leq (m_1+m_2)/(m_1m_2) $ which is nontrivial for measurements $\mathsf{M}^{(1)},\mathsf{M}^{(2)} \in \mathcal{T}$ with $m_1,m_2 >2$.
\end{example}
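For instance, in the setting of the example above, for two $2$-tomic measurements with $m_1 = m_2 = 3$ outcomes the bound reads
\begin{align*}
\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) \leq \frac{m_1+m_2}{m_1 m_2} = \frac{2}{3} \, ,
\end{align*}
which is clearly below the trivial bound $1$.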
One can show that $\bar{P}(\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)})$ is a convex function of all of its arguments. To see this, let $\mathsf{M}^{(1)}, \ldots, \mathsf{M}^{(k)}$ be measurements and w.l.o.g. let $\mathsf{M}^{(1)} = \sum_{i=1}^l p_i \mathsf{N}^{(i)}$ for some measurements $\mathsf{N}^{(1)}, \ldots, \mathsf{N}^{(l)}$ and some probability distribution $(p_i)_{i=1}^l$. We see that
\begin{align*}
\bar{P}(\mathsf{M}^{(1)},\ldots, \mathsf{M}^{(k)}) &= \frac{1}{k m_1 \cdots m_k} \sum_{\vec{x}} \no{ \mathsf{M}^{(1)}_{x_1} + \cdots + \mathsf{M}^{(k)}_{x_k}}\\
&= \frac{1}{k m_1 \cdots m_k} \sum_{\vec{x}} \no{\sum_{i=1}^l p_i \mathsf{N}^{(i)}_{x_1} + \mathsf{M}^{(2)}_{x_2} + \cdots + \mathsf{M}^{(k)}_{x_k}}\\
&= \frac{1}{k m_1 \cdots m_k} \sum_{\vec{x}} \no{\sum_{i=1}^l p_i \left( \mathsf{N}^{(i)}_{x_1} + \mathsf{M}^{(2)}_{x_2} + \cdots + \mathsf{M}^{(k)}_{x_k}\right)}\\
&\leq \frac{1}{k m_1 \cdots m_k} \sum_{\vec{x}} \sum_{i=1}^l p_i \no{ \mathsf{N}^{(i)}_{x_1} + \mathsf{M}^{(2)}_{x_2} + \cdots + \mathsf{M}^{(k)}_{x_k} }\\
&= \sum_{i=1}^l p_i \bar{P}(\mathsf{N}^{(i)}, \mathsf{M}^{(2)}, \ldots, \mathsf{M}^{(k)})\, .
\end{align*}
Furthermore, one sees that
\begin{align*}
\bar{P}\left( \sum_i p_i \mathsf{N}^{(i)}, \mathsf{M}^{(2)}, \ldots, \mathsf{M}^{(k)} \right) &\leq \sum_{i=1}^l p_i \bar{P}(\mathsf{N}^{(i)}, \mathsf{M}^{(2)}, \ldots, \mathsf{M}^{(k)}) \\
& \leq \sum_{i=1}^l p_i \max_j \bar{P}(\mathsf{N}^{(j)}, \mathsf{M}^{(2)}, \ldots, \mathsf{M}^{(k)}) \\
&= \max_j \bar{P}(\mathsf{N}^{(j)}, \mathsf{M}^{(2)}, \ldots, \mathsf{M}^{(k)})\, .
\end{align*}
Thus, the success probability within the whole theory is maximized for extreme measurements.
\subsection{Upper bound for compatible pairs of measurements}
As was stated before, for the classical $(2,d)$-RAC it is known that the optimal success probability is $\bar{P}^{(2,d)}_c = \half \left(1+\frac{1}{d} \right)$. Generalizing the result from \cite{CaHeTo20}, let us consider random access tests with two measurements with $d$ outcomes on a state space $\mathcal{S}$, where the operational dimension of the theory is $d$. In such a GPT, there exist $d$ pure states $s_1, \ldots,s_d \in \mathcal{S}^{ext}$ and a $d$-outcome measurement $\mathsf{M}$ such that $\mathsf{M}_i(s_j)=\delta_{ij}$ for all $i,j \in \{1, \ldots, d\}$. We note that then $\no{\mathsf{M}_i}=1$ for all $i \in \{1, \ldots, d\}$, so that in particular $\no{\mathsf{M}_i + \mathsf{M}_i} =2$ and $\no{\mathsf{M}_i + \mathsf{M}_j} =1$ for all $i\neq j$ (since $\sum_i \mathsf{M}_i =u$). Hence, we see that
\begin{align*}
\bar{P}(\mathsf{M},\mathsf{M}) &= \frac{1}{2d^2} \sum_{i,j=1}^d \no{\mathsf{M}_i + \mathsf{M}_j} = \frac{1}{d^2} \sum_{i=1}^d \no{\mathsf{M}_i} + \frac{1}{2d^2} \sum_{\substack{i,j=1 \\ i \neq j} }^d \no{\mathsf{M}_i + \mathsf{M}_j} \\
&= \frac{1}{2d^2}(2 d + d(d-1))= \frac{1}{2} \left(1+\frac{1}{d} \right) \, ,
\end{align*}
so that the optimal success probability of the classical $(2,d)$--RAC can always be achieved in a theory with operational dimension $d$.
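The calculation above is elementary enough to be checked numerically. The following sketch (a minimal illustration, not part of the argument) represents the classical state space as the probability simplex in $\mathbb{R}^d$, the distinguishing measurement $\mathsf{M}$ by the coordinate functionals, and evaluates $\no{\mathsf{M}_i+\mathsf{M}_j}$ as the maximum over the $d$ pure states.
\begin{verbatim}
import numpy as np

# \bar{P}(M, M) for the d-outcome distinguishing measurement on a classical
# (simplex) state space: the effects M_i are the coordinate functionals and
# the norm of an effect sum is its maximum over the d pure states (vertices).
for d in range(2, 7):
    M = np.eye(d)                                     # M_i(s_j) = delta_ij
    total = sum(np.max(M[i] + M[j]) for i in range(d) for j in range(d))
    P = total / (2 * d**2)
    print(d, round(P, 6), round(0.5 * (1 + 1/d), 6))  # the two columns agree
\end{verbatim}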
We see that the classical bound can always be achieved with the trivial strategy of choosing the same measurement twice in the random access test. For quantum theory it was shown in \cite{CaHeTo20} that this bound cannot be surpassed even if we allow for two different measurements that are compatible. However, in general we can show a bound for compatible measurements that in some cases differs from the classical one.
The following result is a generalization of Prop. 3 in \cite{CaHeTo20}.
\begin{proposition}\label{prop:compatible}
Let $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ be two compatible measurements with $m_1$ and $m_2$ outcomes, respectively.
If $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ have a joint measurement belonging to $\mathcal{T} \subseteq \mathcal{M}(\mathcal{S})$, then
\begin{align}\label{eq:compatible}
\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) \leq \frac{1}{2} \left( 1 + \frac{\lambda_{max}(\mathcal{T})}{m_1m_2} \right) \, .
\end{align}
If $\lambda_{max}(\mathcal{T})=m_1=m_2=d$, where $d$ is the operational dimension of the theory, then the right hand side is the classical bound for (2,d)--RAC.
\end{proposition}
\begin{proof}
Let $\mathsf{J} \in \mathcal{T}$ be a joint measurement of $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$.
We have
\begin{align*}
&h(m_1,m_2) \cdot \bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) = \lambda_{max}(\mathsf{H}^{(1,2)}) = \sum_{x,y} \no{ \tfrac{1}{\kappa(m_1,m_2)}(\mathsf{M}^{(1)}_x+ \mathsf{M}^{(2)}_y) } \\
&= \frac{1}{\kappa(m_1,m_2)} \sum_{x,y} \no{ \sum_a \mathsf{J}_{x,a} + \sum_b \mathsf{J}_{b,y} } \\
&= \frac{1}{\kappa(m_1,m_2)} \sum_{x,y} \no{\mathsf{J}_{x,y}+ \sum_{\substack{a\\ a \neq y}} \mathsf{J}_{x,a} + \sum_{b} \mathsf{J}_{b,y} } \\
& \leq \frac{1}{\kappa(m_1,m_2)} \sum_{x,y} \no{\mathsf{J}_{x,y}}+ \frac{1}{\kappa(m_1,m_2)}\sum_{x,y}\no{ \sum_{\substack{a\\ a \neq y}} \mathsf{J}_{x,a} +\sum_{b} \mathsf{J}_{b,y} } \\
& = \frac{1}{\kappa(m_1,m_2)} \lambda_{max}(\mathsf{J})+ \frac{1}{\kappa(m_1,m_2)}\sum_{x,y}\no{ \sum_{\substack{a\\ a \neq y}} \mathsf{J}_{x,a} +\sum_{b} \mathsf{J}_{b,y} }
\end{align*}
For the second summand, we observe that
\begin{align*}
0\leq \sum_{\substack{a\\ a \neq y}} \mathsf{J}_{x,a} + \sum_{b} \mathsf{J}_{b,y} \leq \sum_{a,b} \mathsf{J}_{a,b} = u \, .
\end{align*}
We hence get
\begin{align*}
&h(m_1,m_2) \cdot \bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) \leq \frac{1}{\kappa(m_1,m_2)} \left( \lambda_{max}(\mathcal{T}) + m_1m_2 \right) \, .
\end{align*}
Inserting the expressions of $h(m_1,m_2)$ and $\kappa(m_1,m_2)$ we arrive at Eq. \eqref{eq:compatible}.
\end{proof}
From the latter part of Prop. \ref{prop:compatible} it is clear that in the case of $(2,d)$--QRAC the bound given by Eq. \eqref{eq:compatible} for $\mathcal{T}= \mathcal{M}(\mathcal{S})$ matches the optimal success probability of the classical $(2,d)$--RAC both for quantum and classical state spaces. In addition, in \cite{MaKi18} it was shown that $\lambda_{max}(\mathcal{S})=d=2$ for all state spaces $\mathcal{S}$ that are point-symmetric (or centrally symmetric) so that also in this case the bound given by Eq. \eqref{eq:compatible} matches the optimal success probability of the classical $(2,2)$--RAC for two dichotomic measurements.
The bound given in Eq. \eqref{eq:compatible} can be used to detect incompatibility for pairs of measurements. To detect the incompatibility of any pair of measurements on a state space $\mathcal{S}$ we choose $\mathcal{T}= \mathcal{M}(\mathcal{S})$ so that in terms of the harmonic approximate joint measurement we can express the previous result as follows:
\begin{corollary}
Let $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ be measurements on a state space $\mathcal{S}$ with $m_1$ and $m_2$ outcomes, respectively. If
\begin{align}\label{eq:incompatible}
\lambda_{max}(\mathsf{H}^{(1,2)}) > \frac{\lambda_{max}(\mathcal{S})+m_1m_2}{m_1 +m_2},
\end{align}
then $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are incompatible.
\end{corollary}
We remark that the previous incompatibility test becomes useless for large $m_1$ and $m_2$. Namely, $\lambda_{max}(\mathsf{H}^{(1,2)}) \leq \lambda_{max}(\mathcal{S})$, so that a necessary requirement for the inequality in Eq. \eqref{eq:incompatible} to hold for some $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ is that
\begin{align}\label{eq:useful}
\lambda_{max}(\mathcal{S}) > \frac{m_1m_2}{m_1 + m_2 -1} \, .
\end{align}
In the usual QRAC scenario, i.e., when $m_1=m_2=d$, where $d$ is the operational dimension of the theory, from the fact that $\lambda_{max}(\mathcal{S})\geq d$ it follows that Eq. \eqref{eq:useful} holds for all theories with $d>1$. Furthermore, if $m_1=m_2 = \lambda_{max}(\mathcal{S})>1$, then Eq. \eqref{eq:useful} also holds. Some different types of examples as well as examples where the random access test does not detect incompatibility of some pairs of measurements in the quantum case have been presented in \cite{CaHeTo20}.
\section{Maximal incompatibility of dichotomic measurements} \label{sec:MAX}
There are various ways to quantify the level of incompatibility of two measurements \cite{DeFaKa19}. As introduced in \cite{BuHeScSt13}, the \emph{degree of incompatibility} $d(\mathsf{M}^{(1)},\mathsf{M}^{(2)})$ of two measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ is defined as the maximal value $\lambda \in [0,1]$ such that the noisy versions $\lambda \mathsf{M}^{(1)} + (1-\lambda) \mathsf{T}^{(1)}$ and $\lambda \mathsf{M}^{(2)} + (1-\lambda) \mathsf{T}^{(2)}$ of $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are compatible for some choices of trivial measurements $\mathsf{T}^{(1)}$ and $\mathsf{T}^{(2)}$. This quantity has a universal lower bound $d(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) \geq \half$, and two measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are called \emph{maximally incompatible} if $d(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) = \half$. For instance, the canonical position and momentum measurements in quantum theory are maximally incompatible \cite{HeScToZi14}. In the following we concentrate on maximal incompatibility of two dichotomic measurements. In quantum theory it is known that no such pairs exist \cite{BuHeScSt13}.
We say that a dichotomic measurement $\mathsf{M}$ with outcomes $+$ and $-$ discriminates two sets of states $S_+, S_- \subset \mathcal{S}$ if $\mathsf{M}_+(s_+) = 1$ for all $s_+ \in S_+$ and $\mathsf{M}_+(s_-)=0$ for all $s_- \in S_-$. This concept now allows us to formulate the following result.
\begin{proposition}\label{prop:2-2-maximal-prob}
Let $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ be two dichotomic measurements. Then $\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) =1$ if and only if there are four states $s_1,s_2,s_3,s_4 \in \mathcal{S}$ such that $\mathsf{M}^{(1)}$ discriminates $\{s_1,s_2\}$ and $\{s_3,s_4\}$ and $\mathsf{M}^{(2)}$ discriminates $\{s_1, s_4\}$ and $\{s_2,s_3\}$.
\end{proposition}
\begin{proof}
Let $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ be dichotomic measurements with effects $\mathsf{M}^{(1)}_+, \mathsf{M}^{(1)}_-$ and $\mathsf{M}^{(2)}_+, \mathsf{M}^{(2)}_-$ respectively. Since $\mathsf{M}^{(1)}_+ + \mathsf{M}^{(1)}_- = u$, $\mathsf{M}^{(2)}_+ + \mathsf{M}^{(2)}_- = u$ and $u(s) =1$ for all $s \in \mathcal{S}$, we can express the success probability $\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)})$ as follows:
\begin{align*}
\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) &= \frac{1}{8} \left[ \no{\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+} + \no{\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_-} \right. \\
& \quad \quad \quad \left. + \no{\mathsf{M}^{(1)}_- + \mathsf{M}^{(2)}_+}+ \no{\mathsf{M}^{(1)}_- + \mathsf{M}^{(2)}_-}\right] \\
&= \frac{1}{8} \left[ \sup_{s_1 \in \mathcal{S}} (\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_1) + \sup_{s_2 \in \mathcal{S}} (u+\mathsf{M}^{(1)}_+- \mathsf{M}^{(2)}_+)(s_2) \right. \\
& \quad \quad \left. + \sup_{s_3 \in \mathcal{S}} (u-\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_3)+ \sup_{s_4 \in \mathcal{S}} (2u-\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_4)\right] \\
&= \frac{1}{8} \left[4 + \sup_{s_1 \in \mathcal{S}}(\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_1) + \sup_{s_2 \in \mathcal{S}}(\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_2) \right. \\
& \quad \quad \left. + \sup_{s_3 \in \mathcal{S}}(-\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_3)+ \sup_{s_4 \in \mathcal{S}}(-\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_4)\right]\, .
\end{align*}
Since $\mathsf{M}^{(1)}_+$ and $\mathsf{M}^{(2)}_+$ are effects, we clearly have that
\begin{align*}
& \sup_{s_1 \in \mathcal{S}}(\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_1) \in [0,2], \quad \ \, \quad \sup_{s_2 \in \mathcal{S}}(\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_2) \in [-1,1], \\
&\sup_{s_3 \in \mathcal{S}}(-\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_3)\in [-1,1], \quad \sup_{s_4 \in \mathcal{S}}(-\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_4) \in [-2,0].
\end{align*}
Now it is clear that $\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)})=1$ if and only if there are four states $s_1,s_2,s_3,s_4$ (relabeling the maximizing states above if necessary) such that
\begin{align*}
(\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_1) &=2, \quad (\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_2) =1, \\
(-\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+)(s_4) &=1, \quad (-\mathsf{M}^{(1)}_+ - \mathsf{M}^{(2)}_+)(s_3) =0,
\end{align*}
which holds if and only if
\begin{align}
\mathsf{M}^{(1)}_+(s_1) = \mathsf{M}^{(1)}_+(s_2) = 1, \quad \mathsf{M}^{(1)}_+(s_3) = \mathsf{M}^{(1)}_+(s_4)=0, \label{eq:discrimination1} \\
\mathsf{M}^{(2)}_+(s_1) = \mathsf{M}^{(2)}_+(s_4) = 1, \quad \mathsf{M}^{(2)}_+(s_2) = \mathsf{M}^{(2)}_+(s_3)=0. \label{eq:discrimination2}
\end{align}
\end{proof}
In \cite{JePl17} it was shown that two dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are maximally incompatible if and only if Eq. \eqref{eq:discrimination1} and \eqref{eq:discrimination2} hold for states $s_1,s_2,s_3,s_4 \in \mathcal{S}$ such that $\half (s_1 +s_3) = \half (s_2+s_4)$. Equivalently, \emph{two dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are maximally incompatible if and only if there is an affine subspace $\mathcal{K} \subset \aff{\mathcal{S}}$ such that $\mathcal{F}=\mathcal{K} \cap \mathcal{S}$ is a parallelogram and $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ discriminate the opposite edges of $\mathcal{F}$}. By a parallelogram we mean a 2-dimensional convex body that is a convex hull of its four vertices $s_1, s_2, s_3, s_4$ such that $\half (s_1 +s_3) = \half (s_2+s_4)$. In this case measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ discriminate the opposite edges of the parallelogram if $\mathsf{M}^{(1)}$ discriminates $\{s_1,s_2\}$ and $\{s_3,s_4\}$ and $\mathsf{M}^{(2)}$ discriminates $\{s_1, s_4\}$ and $\{s_2,s_3\}$. Thus, from Prop. \ref{prop:2-2-maximal-prob} and the aforementioned result from \cite{JePl17} we see that \emph{two maximally incompatible dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ satisfy $\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) =1$}. In particular, we can conclude the following:
\begin{corollary}
If $\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)})<1$ for two dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$, then they are not maximally incompatible.
\end{corollary}
On the other hand, in the case that the states in Prop. \ref{prop:2-2-maximal-prob} can be chosen to be affinely dependent, we can also show the converse of the previous statement. To see this, let us consider the case of Prop. \ref{prop:2-2-maximal-prob} where the states $\{s_1,s_2,s_3,s_4\}$ are affinely dependent. First, let us note that from the discrimination of the sets $\{s_1,s_2\}$ and $\{s_3,s_4\}$ together with the discrimination of the sets $\{s_1,s_4\}$ and $\{s_2,s_3\}$ it follows that all the states are indeed different and must lie on the boundary of $\mathcal{S}$ so that in particular $\dim(\aff{\{s_1,s_2,s_3,s_4\}}) \geq 2$. Now, since $\{s_1,s_2,s_3,s_4\}$ are affinely dependent, we must have that $\dim(\aff{\{s_1,s_2,s_3,s_4\}}) =2$. In this case, in order for the discrimination of states to be possible, we must also have that
\begin{align*}
\aff{\{s_1,s_2,s_3,s_4\}} & =\aff{\{s_1,s_2,s_3\}} = \aff{\{s_1,s_2,s_4\}}\\
& = \aff{\{s_1,s_3,s_4\}}= \aff{\{s_2,s_3,s_4\}},
\end{align*}
which in particular means that the states $s_2,s_3,s_4$ must form an affine basis of $\aff{\{s_1,s_2,s_3,s_4\}}$. Hence, we must have that $s_1 = \alpha_2 s_2 + \alpha_3 s_3 + \alpha_4 s_4$ for some $\alpha_2, \alpha_3, \alpha_4 \in \mathbb{R}$ such that $\alpha_2 + \alpha_3+ \alpha_4=1$. By applying $\mathsf{M}^{(1)}_+$ and $\mathsf{M}^{(2)}_+$ to this decomposition, Eq. \eqref{eq:discrimination1} and \eqref{eq:discrimination2} give $\alpha_2=1$ and $\alpha_4=1$, and the normalization of the coefficients then forces $\alpha_3 =-1$, so that $\half(s_1+s_3)=\half(s_2+s_4)$. By the result of \cite{JePl17}, we then must have that $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are maximally incompatible. We then arrive at the following result.
\begin{proposition}
Two dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are maximally incompatible if and only if $\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) =1$, where the optimal states can be chosen to be affinely dependent.
\end{proposition}
In the first nonclassical polygon state space, the square state space $\mathcal{S}_4$, it is known that the measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ defined by setting $\mathsf{M}^{(1)}_+ =e_1$ and $\mathsf{M}^{(2)}_+ =e_2$ (see Sec. \ref{sec:POLY} for more details) are maximally incompatible \cite{BuHeScSt13}. Indeed, one can check that then $\mathsf{M}^{(1)}$ discriminates $\{s_1,s_2\}$ and $\{s_3,s_4\}$ and $\mathsf{M}^{(2)}$ discriminates $\{s_1, s_4\}$ and $\{s_2,s_3\}$, where the states $s_1, \ldots, s_4$ are the affinely dependent pure states of the square state space. In similar fashion, one can construct maximally incompatible dichotomic measurements on theories whose state spaces are hypercubes of any dimension.
On the other hand, in the following example we demonstrate that the affine dependence of the optimal states is indeed necessary for measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ with $\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) =1$ to be maximally incompatible.
\begin{example}[Tetrahedron]
Let $\mathcal{S} = \conv{\{s_1,s_2,s_3,s_4\}}$, where $s_1$, $s_2$, $s_3$, $s_4$ are the vertices of a regular tetrahedron. Since the tetrahedron is a simplex, $\mathcal{S}$ is a classical state space with $d=4$ distinguishable pure states. Let $\mathsf{M}$ be the (unique) measurement that distinguishes the states $s_1,s_2,s_3,s_4$, i.e., $\mathsf{M}_i(s_j) = \delta_{ij}$ for all $i,j \in \{1,2,3,4\}$. We can define two dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ by setting $\mathsf{M}^{(1)}_+ = \mathsf{M}_1 + \mathsf{M}_2$ and $\mathsf{M}^{(2)}_+ = \mathsf{M}_1+\mathsf{M}_4$, and see that they satisfy Prop. \ref{prop:2-2-maximal-prob} only with the affinely independent pure states $s_1,s_2,s_3,s_4$. However, as $\mathcal{S}$ is a classical state space, all measurements on $\mathcal{S}$ are compatible, so in particular $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are not maximally incompatible.
\end{example}
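A quick numerical check of the example (an illustrative sketch only): representing the four vertices as the standard basis of $\mathbb{R}^4$ and $\mathsf{M}_i$ as the coordinate functionals, the success probability of the constructed pair indeed equals one.
\begin{verbatim}
import numpy as np

M = np.eye(4)                          # M_i(s_j) = delta_ij on the tetrahedron
M1p, M1m = M[0] + M[1], M[2] + M[3]    # M^(1)_+ = M_1 + M_2, M^(1)_- = M_3 + M_4
M2p, M2m = M[0] + M[3], M[1] + M[2]    # M^(2)_+ = M_1 + M_4, M^(2)_- = M_2 + M_3
norm = lambda a: float(np.max(a))      # ||a|| = max over the four vertices
P = (norm(M1p + M2p) + norm(M1p + M2m)
     + norm(M1m + M2p) + norm(M1m + M2m)) / 8.0
print(P)   # 1.0, even though M^(1) and M^(2) are compatible (classical theory)
\end{verbatim}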
\section{Random access tests in polygon state spaces}\label{sec:POLY}
\subsection{Polygon theories}
Following \cite{JaGoBaBr11} we define a regular $n$-sided polygon (or $n$-gon) state space $\mathcal{S}_n$ embedded in $\mathbb{R}^3$ as the convex hull of its $n$ extreme points
\begin{align*}
s_j=
\begin{pmatrix}
r_n \cos\left(\dfrac{2 j \pi}{n}\right) \\
r_n \sin\left(\dfrac{2 j \pi }{n}\right) \\
1
\end{pmatrix}, \quad j = 1,\ldots,n,
\end{align*}
where we have defined $r_n = \sqrt{\sec\left( \frac{\pi}{n}\right)}$. For $n=2$ the state space is a line segment which is the state space of the bit, i.e., the 2-dimensional classical system, and for $n=3$ the state space is a triangle which is the state space of the trit, i.e., the 3-dimensional classical system, while for all $n\geq 4$ we get nonclassical state spaces with the first one being the square state space.
Clearly, we now have the zero effect $o = (0, 0, 0)^T$ and the unit effect $u = (0, 0, 1)^T$. Depending on the parity of $n$, the effect space can have different structures (see \cite{FiHeLe18,HeLePl19} for details). Let us denote $s_0 = (0,0,1)^T$. For even $n$ the effect space $\mathcal{E}(\mathcal{S}_n)$ has $n$ nontrivial extreme points,
\begin{align*}
e_k= \dfrac{1}{2}
\begin{pmatrix}
r_n \cos\left(\dfrac{(2k-1) \pi }{n}\right) \\
r_n \sin\left(\dfrac{(2k-1) \pi }{n}\right) \\
1
\end{pmatrix}, \quad k = 1,\ldots,n,
\end{align*}
so that $\mathcal{E}(\mathcal{S}_n) = \conv{\{o,u,e_1, \ldots,e_n\}}$.
All the nontrivial extreme effects lie on a single plane determined by those points $e$ such that $e(s_0)=1/2$.
In the case of odd $n$, the effect space has $2n$ nontrivial extreme effects,
\begin{align*}
g_k
= \dfrac{1}{1+r^2_n}
\begin{pmatrix}
r_n \cos\left(\dfrac{2k \pi }{n}\right) \\
r_n \sin\left(\dfrac{2k \pi }{n}\right) \\
1
\end{pmatrix}, \quad \quad f_k = u-g_k ,
\end{align*}
for $k = 1,\ldots,n$. Now $\mathcal{E}(\mathcal{S}_n) = \conv{\{o,u, g_1, \ldots, g_n, f_1 ,\ldots, f_n\}}$ and the nontrivial effects are scattered on two different planes determined by all those points $g$ and $f$ such that $g(s_0) = \frac{1}{1+r^2_n}$ and $f(s_0) = \frac{r^2_n}{1+r^2_n}$. The first few polygons and their effect spaces are depicted in Fig. \ref{fig:polygons}.
\begin{figure}
\caption{\label{fig:polygons} The first few polygon state spaces $\mathcal{S}_n$ and their effect spaces.}
\end{figure}
Since $e \in \mathcal{E}^{ext}_{ind}(\mathcal{S}_n)$ if and only if $e = e_k$ for some $k \in \{1, \ldots, n\}$ when $n$ is even, and $e = g_k$ for some $k \in \{1, \ldots, n\}$ when $n$ is odd, and because $e_k(s_0) = \half$ and $g_k(s_0) = 1/(1+r^2_n)$ for all $k \in \{1, \ldots,n\}$, we can use Propositions \ref{prop:lmax-pp} and \ref{prop:lambda-max-constant} to obtain the following corollary:
\begin{corollary}\label{cor:lmax-polygon}
$\lambda_{max}(\mathcal{S}_n) = 2$ when $n$ is even and $\lambda_{max}(\mathcal{S}_n)=1+\sec \left(\frac{\pi}{n}\right) >2$ when $n$ is odd.
\end{corollary}
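The polygon states and effects defined above are explicit enough to be handled numerically. The following sketch (an illustration only, with the reciprocal relation $\lambda_{max}(\mathcal{S}_n)=1/e(s_0)$ read off from the corollary and the propositions cited above) checks that the listed extreme effects take values in $[0,1]$ on all pure states and evaluates them at $s_0$.
\begin{verbatim}
import numpy as np

def polygon(n):
    r2 = 1.0 / np.cos(np.pi / n); r = np.sqrt(r2)      # r_n^2 = sec(pi/n)
    states = [np.array([r*np.cos(2*j*np.pi/n), r*np.sin(2*j*np.pi/n), 1.0])
              for j in range(1, n + 1)]
    if n % 2 == 0:   # even polygon: extreme effects e_k
        eff = [0.5*np.array([r*np.cos((2*k-1)*np.pi/n),
                             r*np.sin((2*k-1)*np.pi/n), 1.0])
               for k in range(1, n + 1)]
    else:            # odd polygon: extreme effects g_k
        eff = [np.array([r*np.cos(2*k*np.pi/n), r*np.sin(2*k*np.pi/n), 1.0])/(1.0+r2)
               for k in range(1, n + 1)]
    return states, eff

s0 = np.array([0.0, 0.0, 1.0])
for n in range(4, 11):
    states, eff = polygon(n)
    valid = all(-1e-12 <= float(e @ s) <= 1 + 1e-12 for e in eff for s in states)
    at_s0 = float(eff[0] @ s0)          # common value e_k(s_0) resp. g_k(s_0)
    print(n, valid, round(at_s0, 4), round(1.0/at_s0, 4))   # last column: lambda_max
\end{verbatim}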
We note that for the nonclassical polygon state spaces $\mathcal{S}_n$ with $n\geq 4$, in both even and odd cases, the maximal number of distinguishable pure states is two, i.e., $d=2$ in both cases. On the other hand, clearly for the classical cases $n=2$ and $n=3$ we have $d=2$ and $d=3$, respectively. Thus, from the above Corollary we conclude that
\begin{quote}
\emph{all nonclassical odd polygon state spaces have super information storability.}
\end{quote}
Also, we note that the suitable classical reference for polygons is the bit ($n=d=2$) whereas the trit ($n=d=3$) does not share many of the features of the other polygons so that we often exclude the case $n=3$ when we consider the properties of the polygons as a whole. Therefore, in order to follow the (Q)RAC-like scenario, we will next focus on the simplest RATs on the nonclassical polygon state spaces $\mathcal{S}_n$ with $n\geq 4$, i.e., RATs with two measurements which both have $d=2$ outcomes.
\subsection{Maximum success probability for compatible measurements}
From Cor. \ref{cor:lmax-polygon} we see that for even $n$, we have that $\lambda_{max}(\mathcal{S}_n)=2=d$ so that the bound given in Prop. \ref{prop:compatible} for two compatible measurements is exactly the classical bound for $(2,2)$--RAC.
Thus,
\begin{quote}
\emph{in the case of even polygon theories the classical bound can be achieved with compatible measurements but violation of the classical bound can only be achieved with incompatible measurements, just like in quantum theory}.
\end{quote}
However, for odd $n$, we have that $\lambda_{max}(\mathcal{S}_n) = 1+r^2_n >2 =d$ and we can explicitly construct two compatible dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ such that $\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) > \frac{3}{4}= \bar{P}^{(2,2)}_{c}$. Let us consider one of the indecomposable extreme measurements $\mathsf{C} \in \mathcal{M}^{ext}_{ind}(\mathcal{S}_n)$ with effects
\begin{align*}
\mathsf{C}_1 = g_1 \, , \quad \mathsf{C}_2= \half r^2_n \, g_{\frac{n+1}{2}} \, , \quad \mathsf{C}_3= \half r^2_n \, g_{\frac{n+3}{2}} \, .
\end{align*}
By denoting the two outcomes of $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ by $+$ and $-$, we take
\begin{align*}
&\mathsf{M}^{(1)}_+=\mathsf{C}_1 = g_1, \quad && \mathsf{M}^{(1)}_- = \mathsf{C}_2+\mathsf{C}_3 = \half r^2_n(g_{\frac{n+1}{2}}+g_{\frac{n+3}{2}}), \\
&\mathsf{M}^{(2)}_+ = \mathsf{C}_1 +\mathsf{C}_2 = g_1 +\half r^2_n g_{\frac{n+1}{2}}, \quad && \mathsf{M}^{(2)}_-= \mathsf{C}_3 = \half r^2_n g_{\frac{n+3}{2}}.
\end{align*}
\begin{figure}
\caption{\label{fig:polygon-comp-max} The maximum success probability of a random access test for a compatible pair of dichotomic measurements on the polygon state spaces.}
\end{figure}
Clearly $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$ are compatible since they are both post-processings of $\mathsf{C}$ and thus by Prop. \ref{prop:compatible} we have that
\begin{align*}
\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) \leq \frac{1}{2} \left( 1 + \frac{1+r^2_n}{4} \right).
\end{align*}
By using the states $s_1, s_{\frac{n+1}{2}}, s_{\frac{n+3}{2}}$ one can confirm that
\begin{align*}
\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) &= \frac{1}{8} \left( \no{\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_+} +\no{\mathsf{M}^{(1)}_+ + \mathsf{M}^{(2)}_-} \right. \\
& \quad \quad \quad \left. + \no{\mathsf{M}^{(1)}_- + \mathsf{M}^{(2)}_+}+ \no{\mathsf{M}^{(1)}_- + \mathsf{M}^{(2)}_-}\right) \\
&= \frac{1}{8} \left( \no{2 \mathsf{C}_1 + \mathsf{C}_2} + \no{\mathsf{C}_1 + \mathsf{C}_3} \right. \\
& \quad \quad \quad \left. + \no{\mathsf{C}_1 + 2 \mathsf{C}_2 + \mathsf{C}_3} + \no{\mathsf{C}_2 + 2 \mathsf{C}_3} \right) \\
&\geq \frac{1}{8} \left[ (2 \mathsf{C}_1 + \mathsf{C}_2)(s_1) + (\mathsf{C}_1 + \mathsf{C}_3)(s_1) \right. \\
& \quad \quad \left. + (\mathsf{C}_1 + 2 \mathsf{C}_2 + \mathsf{C}_3)(s_{\frac{n+1}{2}}) + (\mathsf{C}_2 + 2 \mathsf{C}_3)(s_{\frac{n+3}{2}}) \right] \\
&= \frac{1}{2} \left( 1 + \frac{1+r^2_n}{4} \right).
\end{align*}
Therefore, by combining the above inequalities we see that the given states actually maximize the respective expressions and that
\begin{align}\label{eq:odd-polygon-comp-max}
\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)})= \frac{1}{2} \left( 1 + \frac{1+r^2_n}{4} \right).
\end{align}
Since $\lambda_{max}(\mathcal{S}_n) =1+r^2_n > 2$, we conclude that
\begin{align*}
\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)}) > \frac{3}{4}= \bar{P}^{(2,2)}_{c} \, .
\end{align*}
Hence,
\begin{quote}
\emph{in the case of odd polygon theories the classical bound can be surpassed with suitably chosen compatible measurements}.
\end{quote}
We note that the right hand side of Eq. \eqref{eq:odd-polygon-comp-max} approaches the classical bound $\bar{P}^{(2,2)}_{c}=3/4$ as $n \to \infty$. The maximum success probability of a RAT for a compatible pair of dichotomic measurements on the polygon state spaces is illustrated in Fig. \ref{fig:polygon-comp-max}.
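The construction above can also be verified numerically. The following sketch (an illustration only) evaluates $\bar{P}(\mathsf{M}^{(1)},\mathsf{M}^{(2)})$ directly from the definitions, with the norms taken as maxima over the pure states, and compares the result with the right hand side of Eq. \eqref{eq:odd-polygon-comp-max} for the first few odd polygons.
\begin{verbatim}
import numpy as np

def g(k, n, r2, r):
    return np.array([r*np.cos(2*k*np.pi/n), r*np.sin(2*k*np.pi/n), 1.0]) / (1.0 + r2)

for n in [5, 7, 9, 11, 13]:
    r2 = 1.0 / np.cos(np.pi / n); r = np.sqrt(r2)
    states = [np.array([r*np.cos(2*j*np.pi/n), r*np.sin(2*j*np.pi/n), 1.0])
              for j in range(1, n + 1)]
    norm = lambda a: max(float(a @ s) for s in states)
    C1 = g(1, n, r2, r)
    C2 = 0.5*r2*g((n + 1)//2, n, r2, r)
    C3 = 0.5*r2*g((n + 3)//2, n, r2, r)
    M1p, M1m = C1, C2 + C3                 # M^(1) = (C_1 | C_2 + C_3)
    M2p, M2m = C1 + C2, C3                 # M^(2) = (C_1 + C_2 | C_3)
    P = (norm(M1p+M2p) + norm(M1p+M2m) + norm(M1m+M2p) + norm(M1m+M2m)) / 8.0
    print(n, round(P, 6), round(0.5*(1 + (1 + r2)/4), 6))   # both exceed 3/4
\end{verbatim}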
\subsection{Maximum success probability for incompatible measurements}
\subsubsection{Rebit state space}
The state space of the \emph{rebit}, or real qubit, is defined similarly to that of the qubit, but with the field of complex numbers replaced by the field of real numbers. Thus, the `Bloch ball' of the qubit is replaced by the `Bloch disc', so that the rebit can be seen as a restriction of the qubit. Formally, the pure states and the nontrivial extreme effects of the rebit are of the form $s_\theta = (\cos \theta, \sin \theta,1)^T$ and $e_\theta=\half (\cos \theta, \sin \theta,1)^T$ for $\theta \in [0,2\pi)$, respectively. The zero and the unit effect are the same as in the polygons, i.e., $o=(0,0,0)^T$ and $u=(0,0,1)^T$. In many ways the polygon state spaces can be thought of as discretized versions of the rebit state space. The following analysis of the RAT with two dichotomic measurements on the rebit will be useful in our later analysis of the polygon theories.
We will explicitly show that the maximum success probability $\bar{P}$ of the RAT with two dichotomic measurements on the rebit is the same as in the corresponding $(2,2)$--QRAC on the qubit, i.e., $\half \left( 1+ \frac{1}{\sqrt{2}} \right)$. To this end, let us consider a RAT with two dichotomic measurements $\mathsf{M}^{(1)}$ and $\mathsf{M}^{(2)}$. We denote $e:= \mathsf{M}^{(1)}_+$ and $f := \mathsf{M}^{(2)}_+$, and the average success probability can then be written in the form
\begin{align}
\bar{P}(\mathsf{M}^{(1)}, \mathsf{M}^{(2)}) &= \frac{1}{8} \left[ \sup_{t_1 \in \mathcal{S}} (e+f)(t_1) + \sup_{t_2 \in \mathcal{S}}(u-e+f)(t_2) +\right. \nonumber \\
&\ \ \ \left. \sup_{t_3 \in \mathcal{S}}(u-e+u-f)(t_3) + \sup_{t_4 \in \mathcal{S}}(e+u-f)(t_4)\right]. \label{eq:P-2-2}
\end{align}
As was shown in Sec. \ref{subsec:gpt-rac}, the average success probability $\bar{P}$ is maximized for extreme measurements. On a state space $\mathcal{S}$ this means that Eq. \eqref{eq:P-2-2} is maximized for some effects $e,f \in \mathcal{E}^{ext}(\mathcal{S})$. Furthermore, the sums of effects in Eq. \eqref{eq:P-2-2} are maximized for pure states so that we can also choose the optimal values for $t_1, t_2, t_3, t_4$ to be pure if needed.
Due to the symmetry of the rebit system we can freely choose $\mathsf{M}^{(1)}_+=e=e_0=\half(1,0,1)^T$. Let us denote $f=e_\theta$ for some $\theta \in [0,2 \pi)$ and $t_i = s_{\varphi_i}$ for some $\varphi_i \in [0,2\pi)$ for all $i \in \{1,2,3,4\}$. The optimal success probability for a RAT with two dichotomic measurements on a rebit system then reads
\begin{align*}
\bar{P}&= \sup_{\theta \in [0,2\pi)} \frac{1}{8} \left[ \sup_{\varphi_1 \in [0,2\pi)} (e_0+e_\theta)(s_{\varphi_1}) + \sup_{\varphi_2 \in [0,2\pi)} (u-e_0+e_\theta)(s_{\varphi_2}) +\right. \\
&\ \ \ \left. \sup_{\varphi_3 \in [0,2\pi)} (u-e_0+u-e_\theta)(s_{\varphi_3}) + \sup_{\varphi_4 \in [0,2\pi)} (e_0+u-e_\theta)(s_{\varphi_4})\right].
\end{align*}
By expanding the above expression and by using some trigonometric identities we can rewrite the above equation as
\begin{align*}
\bar{P}&= \sup_{\theta \in [0,2\pi)} \frac{1}{8} \left[ \sup_{\varphi_1 \in [0,2\pi)} \left(1+ \cos\left(\frac{\theta}{2} \right) \cos\left(\frac{\theta}{2}-\varphi_1 \right) \right) \right. \\
& \quad \quad \quad \quad \quad \ \ + \sup_{\varphi_2 \in [0,2\pi)} \left(1- \sin\left(\frac{\theta}{2} \right) \sin\left(\frac{\theta}{2}-\varphi_2 \right) \right) \\
& \quad \quad \quad \quad \quad \ \ + \sup_{\varphi_3 \in [0,2\pi)} \left(1- \cos\left(\frac{\theta}{2} \right) \cos\left(\frac{\theta}{2}-\varphi_3 \right) \right) \\
& \left. \quad \quad \quad \quad \quad \ \ + \sup_{\varphi_4 \in [0,2\pi)} \left(1+ \sin\left(\frac{\theta}{2} \right) \sin\left(\frac{\theta}{2}-\varphi_4 \right) \right) \right].
\end{align*}
We can get an upper bound for $\bar{P}$ by noting that each of the inner supremums is bounded above by either $1+|\cos(\theta/2)|$ or $1+|\sin(\theta/2)|$ (these values are attained for suitable choices of the angles $\varphi_1, \varphi_2, \varphi_3, \varphi_4$). After this the outer supremum can be calculated and we get the following upper bound:
\begin{align*}
\bar{P} &\leq \sup_{\theta \in [0,2\pi)} \frac{1}{8} \left[ 4 + 2 \left| \cos\left( \frac{\theta}{2} \right) \right| + 2 \left| \sin\left( \frac{\theta}{2} \right) \right|\right] = \frac{1}{2} \left( 1+ \frac{1}{\sqrt{2}} \right)
\end{align*}
\begin{figure}
\caption{\label{fig:rebit} The optimal effects and states (up to rotational symmetry) for the random access test with two dichotomic measurements on the rebit.}
\end{figure}
Furthermore, one can check that this bound is obtained with the following parameters: $\theta=\pi/2$, $\varphi_1 = \pi/4$, $\varphi_2 = 3\pi/4$, $\varphi_3=5\pi/4$ and $\varphi_4= 7\pi/4$. Thus, the optimal success probability of the random access test in the rebit coincides with the corresponding optimal success probability in the qubit. The optimal effects and states (up to rotational symmetry) are depicted in Fig. \ref{fig:rebit}. We note that the optimal extreme effect $f$ aligns itself furthest away from both $e$ and $u-e$ along the semicircle between them (this is because we are optimizing both $e+f$ and $u-e+f$ at the same time) and the optimal states are uniquely determined by the sums $e+f$, $u-e+f$, $e+u-f$ and $u-e+u-f$ along the same directions in the $(x,y)$-projection as is seen in Fig. \ref{fig:rebit}.
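The optimization above can also be checked by a simple grid search (an illustrative sketch only; the grid resolution is arbitrary): fixing $e=e_0$ by symmetry and scanning over $\theta$ and the pure states, the maximum agrees with $\frac{1}{2}(1+1/\sqrt{2}) \approx 0.8536$ up to the discretization error.
\begin{verbatim}
import numpy as np

phis = np.linspace(0.0, 2*np.pi, 2881, endpoint=False)
S = np.stack([np.cos(phis), np.sin(phis), np.ones_like(phis)], axis=1)  # pure states
u = np.array([0.0, 0.0, 1.0])
e = lambda t: 0.5*np.array([np.cos(t), np.sin(t), 1.0])   # extreme effects e_theta
norm = lambda a: float(np.max(S @ a))                     # ||a|| over sampled states

best = 0.0
for th in np.linspace(0.0, 2*np.pi, 1441, endpoint=False):
    e0, et = e(0.0), e(th)
    val = (norm(e0+et) + norm(e0+u-et) + norm(u-e0+et) + norm(2*u-e0-et)) / 8.0
    best = max(best, val)
print(round(best, 5), round(0.5*(1 + 1/np.sqrt(2)), 5))
\end{verbatim}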
\subsubsection{Maximum success probability for polygons}
With the rebit as a reference, we can now compare the behaviour of the maximal success probability of RATs on polygons. In many ways polygons can be thought of as discretized versions of the rebit system, and depending on the coarseness of the discretization the studied properties may resemble those of the rebit more or less closely. This is also the case for the optimal success probability of the random access test.
\begin{figure}
\caption{\label{fig:polygon-max} The maximum success probability $\bar{P}_n$ of a random access test with two dichotomic measurements on the polygon state spaces $\mathcal{S}_n$ for polygons with up to $30$ vertices.}
\end{figure}
As was established before, the maximum success probability for two dichotomic measurements on a regular polygon state space $\mathcal{S}_n$, denoted by $\bar{P}_n$, is attained for some nontrivial extreme effects $e,f \in \mathcal{E}^{ext}(\mathcal{S}_n)$ and pure states $t_1,t_2,t_3,t_4 \in \mathcal{S}^{ext}_n$. Because $\mathcal{E}^{ext}(\mathcal{S}_n) = \{o,u,e_1, \ldots, e_n\}$ when $n$ is even and $\mathcal{E}^{ext}(\mathcal{S}_n) = \{o,u,g_1, \ldots, g_n, f_1, \ldots, f_n\}$ when $n$ is odd, so that $\mathcal{E}^{ext}(\mathcal{S}_n)$ (and $\mathcal{S}^{ext}_n$) is finite in both cases, it is easy to calculate the maximum value of $\bar{P}_n$ in the case of two dichotomic measurements for small values of $n$ by an exhaustive search. The results are shown in Fig. \ref{fig:polygon-max} for polygons with up to $30$ vertices.
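For completeness we sketch the kind of brute-force computation behind Fig. \ref{fig:polygon-max} (an illustration under our own implementation choices, not the code used for the figure): it enumerates all pairs of nontrivial extreme effects and evaluates the norms as maxima over the pure states.
\begin{verbatim}
import numpy as np
from itertools import product

def polygon(n):
    r2 = 1.0 / np.cos(np.pi / n); r = np.sqrt(r2)       # r_n^2 = sec(pi/n)
    states = [np.array([r*np.cos(2*j*np.pi/n), r*np.sin(2*j*np.pi/n), 1.0])
              for j in range(1, n + 1)]
    u = np.array([0.0, 0.0, 1.0])
    if n % 2 == 0:
        eff = [0.5*np.array([r*np.cos((2*k-1)*np.pi/n),
                             r*np.sin((2*k-1)*np.pi/n), 1.0])
               for k in range(1, n + 1)]
    else:
        gs = [np.array([r*np.cos(2*k*np.pi/n), r*np.sin(2*k*np.pi/n), 1.0])/(1.0+r2)
              for k in range(1, n + 1)]
        eff = gs + [u - g for g in gs]                   # g_k and f_k = u - g_k
    return states, eff, u

def P_bar(n):
    states, eff, u = polygon(n)
    norm = lambda a: max(float(a @ s) for s in states)   # ||a|| = max_j a(s_j)
    return max((norm(e+f) + norm(e+u-f) + norm(u-e+f) + norm(2*u-e-f)) / 8.0
               for e, f in product(eff, repeat=2))

for n in range(4, 31):
    print(n, round(P_bar(n), 6))      # compare with 0.5*(1+1/sqrt(2)) ~ 0.853553
\end{verbatim}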
From Fig. \ref{fig:polygon-max} we observe at least three interesting points. First, the maximum success probability is at least as high as that of the qubit (and the rebit) in every polygon state space, and in many of them it is higher. Second, as one would expect, in both cases $\bar{P}_n$ seems to approach the maximum success probability of the qubit (and the rebit) as the number of vertices increases and the polygon starts to approximate the rebit system more closely geometrically. Third, the behaviour of $\bar{P}_n$ is drastically different for even and odd $n$: for even $n$ the maximum success probability $\bar{P}_n$ seems to oscillate with decreasing amplitude whereas for odd $n$ it seems that $\bar{P}_n$ approaches the qubit limit more monotonically. In the following we will analyse the results of Fig. \ref{fig:polygon-max} further by taking a closer look at the optimizing effects and states. We provide formulas for $\bar{P}_n$ for every $n$.
\subsubsection{Optimality results for the maximum success probability in the even polygons}
We will now take a closer look at Fig. \ref{fig:polygon-max} for the even polygons. We will prove that the maximum success probability $\bar{P}_n$ for two dichotomic measurements on an even polygon state space $\mathcal{S}_n$ depends on $n$ as follows:
\begin{align}
&\bar{P}_n = \frac{1}{2} \left( 1+ \frac{\sec\left( \frac{\pi}{n} \right)}{\sqrt{2}} \right) \quad \textrm{$n=4m$ for odd $m \in \mathbb N$ }\label{eq:n=4m-odd} \\
& \bar{P}_n = \frac{1}{2} \left( 1+ \frac{1}{\sqrt{2}} \right) \quad \textrm{$n=4m$ for even $m \in \mathbb N$}\label{eq:n=4m-even} \\
& \bar{P}_n = \frac{1}{4} \left[ 2 + r_n^2 \cos\left(\frac{m\pi}{n} \right) + \sin\left(\frac{m\pi}{n} \right) \right] \quad \textrm{$n=4m+2$ for odd $m \in \mathbb N$}\label{eq:n=4m+2-odd} \\
& \bar{P}_n = \frac{1}{4} \left[ 2 + \cos\left(\frac{m\pi}{n} \right) + r_n^2 \sin\left(\frac{m\pi}{n} \right) \right] \quad \textrm{$n=4m+2$ for even $m \in \mathbb N$}\label{eq:n=4m+2-even}
\end{align}
(In the last two expressions $r_n = \sqrt{\sec\left( \frac{\pi}{n}\right)}$ as before.) Thus, we will see that in the cases when $n=4m+2$, where $m$ is odd or even, or $n=4m$, where $m$ is odd, the maximum success probability is strictly larger than that of the qubit and the rebit, whereas for $n=4m$, where $m$ is even, it is exactly the same as in the qubit. In all cases the limit $n \to \infty$ matches the qubit value.
We start by deriving an upper bound for the maximum success probability $\bar{P}_n$ similarly to how we did in the case of the rebit. Let us consider the RAT for two dichotomic measurements as stated in Eq. \eqref{eq:P-2-2}. The polygons are symmetrical in the sense that we can fix the first extreme effect $e$ to be any of the nontrivial extreme effects and for even polygons we choose $e=e_1$. Furthermore, we have that $f = e_k$ for some $k \in \{1, \ldots, n\}$ and $t_i = s_{j_i}$ for some $j_i \in \{1, \ldots, n\}$ for all $i \in \{1,2,3,4\}$. With this notation we can write $\bar{P}_n$ for even polygons as
\begin{align}\label{eq:Pn-poly}
\begin{split}
\bar{P}_n&= \sup_{k \in \{1, \ldots, n\}} \frac{1}{8} \left[ \sup_{j_1 \in \{1, \ldots, n\}} (e_1+e_k)(s_{j_1}) + \sup_{j_2 \in \{1, \ldots, n\}} (u-e_1+e_k)(s_{j_2}) \right. \\
&\ \ \ \left. + \sup_{j_3 \in \{1, \ldots, n\}} (u-e_1+u-e_k)(s_{j_3}) + \sup_{j_4 \in \{1, \ldots, n\}} (e_1+u-e_k)(s_{j_4})\right].
\end{split}
\end{align}
First we note that we can restrict $k \in \{1, \ldots, n/2\}$ since otherwise we can just take $f=u-e_k = e_{k+n/2}$ instead of $f=e_k$. By expanding the previous expression and by using some trigonometric identities we can rewrite the previous equation as
\begin{align*}
\bar{P}_n&= \sup_{k \in \{1, \ldots, n/2\}} \frac{1}{8} \left[\sup_{j_1 \in \{1, \ldots, n\}} \left(1+ r_n^2 \cos\left(\frac{(k-1)\pi}{n} \right) \cos\left(\frac{(k-2j_1)\pi}{n} \right) \right) \right. \\
& \quad \quad \quad \quad \quad \ \ + \sup_{j_2 \in \{1, \ldots, n\}} \left(1- r_n^2 \sin\left(\frac{(k-1)\pi}{n} \right) \sin\left(\frac{(k-2j_2)\pi}{n} \right) \right) \\
& \quad \quad \quad \quad \quad \ \ + \sup_{j_3 \in \{1, \ldots, n\}} \left(1- r_n^2 \cos\left(\frac{(k-1)\pi}{n} \right) \cos\left(\frac{(k-2j_3)\pi}{n} \right) \right) \\
& \left. \quad \quad \quad \quad \quad \ \ + \sup_{j_4 \in \{1, \ldots, n\}} \left(1+ r_n^2 \sin\left(\frac{(k-1)\pi}{n} \right) \sin\left(\frac{(k-2j_4)\pi}{n} \right) \right) \right].
\end{align*}
Analogously to the rebit, we can upper bound the inner supremums by the terms $1+r_n^2 |\cos((k-1)\pi/n)|$ and $1+r_n^2 |\sin((k-1)\pi/n)|$ from which we can omit the absolute values since $\cos((k-1)\pi/n) \geq 0$ and $\sin((k-1)\pi/n) \geq 0$ for all $k \in \{1, \ldots, n/2\}$. For the remaining (outer) supremum we can use the upper bound $\cos((k-1)\pi/n)+\sin((k-1)\pi/n) \leq \sqrt{2}$ so that in the end we get the following upper bound for $\bar{P}_n$ for all even polygons:
\begin{align}
\bar{P}_n &\leq \sup_{k \in \{1, \ldots,n/2\}} \frac{1}{8} \left[ 4 + 2 r_n^2 \cos\left(\frac{(k-1)\pi}{n} \right) + 2 r_n^2 \sin\left(\frac{(k-1)\pi}{n} \right) \right] \nonumber \\
&\leq \frac{1}{2} \left( 1+ \frac{r_n^2}{\sqrt{2}} \right) = \frac{1}{2} \left( 1+ \frac{\sec\left(\frac{\pi}{n}\right)}{\sqrt{2}} \right). \label{eq:P_n-even-bound}
\end{align}
One can confirm that the above bound is attained when we take $k=1+n/4$, $j_1=k/2$, $j_2=k/2+n/4$, $j_3=k/2+n/2$ and $j_4=k/2+3n/4$. However, since $k,j_1,j_2,j_3,j_4$ must be integers, this upper bound can be attained only in the case when $n=4m$ for some $m\in \mathbb N$ (so that $k$ is an integer) and $m$ is odd (so that $k=1+m$ is even and $j_1,j_2,j_3,j_4$ are integers). Thus, in this case we have that $f=e_{m+1}$, $t_1 = s_{\frac{m+1}{2}}$, $t_2 = s_{\frac{3m+1}{2}}$, $t_3 = s_{\frac{5m+1}{2}}$ and $t_4 = s_{\frac{7m+1}{2}}$.
From the previous result it becomes immediate that we must consider different cases also within the even polygons. Let us next consider the case when $n=4m$ but $m$ is even. As we saw above, we cannot saturate the previous bound for $\bar{P}_n$ in this case, because the optimizing values for $j_i$'s are not integers. However, the expressions for the inner supremums which we are upper bounding, namely $\cos((k-2j_1)\pi/n)$, $-\sin((k-2j_2)\pi/n)$, $-\cos((k-2j_3)\pi/n)$ and $\sin((k-2j_4)\pi/n)$, are discrete and of simple form so that we know that even if we cannot attain the optimal value $1$ for these expressions with the optimal parameters, the actual supremums are attained with parameters close to the ones presented above.
Let us consider separately two different cases when $n=4m$ and $m$ is even. First, let us consider the case that $k$ is even so that the optimal values for $j_i$'s are integers and the first inequality in Eq. \eqref{eq:P_n-even-bound} is saturated, i.e., the inner supremums have the same optimal values as above. Again, in this case they are either of the form $1+r_n^2 \cos((k-1)\pi/n)$ or $1+r_n^2 \sin((k-1)\pi/n)$ and they are attained with $j_1=k/2$, $j_2=k/2+m$, $j_3=k/2+2m$ and $j_4=k/2+3m$. However, the second inequality is saturated, i.e., the outer supremum attains the previous optimal value, only when $k=m+1$ which would make $k$ odd since $m$ is even. Therefore we cannot attain the previous bound in this case. Instead, for the outer supremum we must maximize $\cos((k-1)\pi/n)+\sin((k-1)\pi/n)$ for all even values of $k \in \{1, \ldots, n/2\}$. From the form of this expression we see that the supremum must be attained with the closest even integer value to $m+1$, i.e., either with $k=m$ or $k=m+2$. We can verify that then for both of these values of $k$ we have that $\cos((k-1)\pi/n)+\sin((k-1)\pi/n) = \sqrt{2}/r_n^2$ and hence $\bar{P}_n(e,f) = 1/2(1+1/\sqrt{2})$ with the optimizing states $t_1 = s_{\frac{m}{2}}$, $t_2 = s_{\frac{3m}{2}}$, $t_3 = s_{\frac{5m}{2}}$ and $t_4 = s_{\frac{7m}{2}}$ for the optimizing effect $f= e_m$, and $t_1 = s_{\frac{m}{2}+1}$, $t_2 = s_{\frac{3m}{2}+1}$, $t_3 = s_{\frac{5m}{2}+1}$ and $t_4 = s_{\frac{7m}{2}+1}$ for the optimizing effect $f= e_{m+2}$.
Second, let us assume that $k$ is odd so that the first inequality in Eq. \eqref{eq:P_n-even-bound} is not saturated, i.e., that the previously presented optimal values for $j_i$'s are not integers. In this case the actual optimizing values for the inner supremums must then be the closest integers to the values considered before. Thus, the inner supremums must be attained with the following values: $j_{\pm 1} = (k\pm 1)/2$, $j_{\pm 2} = (k\pm 1)/2+m$, $j_{\pm 3} = (k\pm 1)/2+2m$ and $j_{\pm 4} = (k\pm 1)/2+3m$. It is straightforward to verify that in both cases, i.e., when one uses either $j_{+i}$'s or $j_{-i}$'s, the inner supremums take the values $1+\cos((k-1)\pi/n)$ or $1+\sin((k-1)\pi/n)$ depending on which expression we are evaluating. Now the outer supremum can attain the same optimal value as before with the parameter value $k=m+1$ (which is odd because $m$ is even). Finally we obtain that also in this case $\bar{P}_n(e,f) = 1/2(1+1/\sqrt{2})$ with the effect $f=e_{m+1}$ and optimizing states $t_1 = s_{\frac{m}{2}}$ or $t_1 = s_{\frac{m}{2}+1}$, $t_2 = s_{\frac{3m}{2}}$ or $t_2 = s_{\frac{3m}{2}+1}$, $t_3 = s_{\frac{5m}{2}}$ or $t_3 = s_{\frac{5m}{2}+1}$ and $t_4 = s_{\frac{7m}{2}}$ or $t_4 = s_{\frac{7m}{2}+1}$.
To conclude, we have shown that the maximum success probability for an even polygon with $n=4m$ for some $m \in \mathbb N$ are given by Eqs. \eqref{eq:n=4m-odd} and \eqref{eq:n=4m-even}. All the optimal effects and states are explicitly expressed in Table \ref{table1}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|} \hline
$n$ & \multicolumn{4}{|c|}{$4m$} \\ \hline
$m$ & odd & \multicolumn{3}{|c|}{even} \\ \hline
$e$ & $e_1$ & \multicolumn{3}{|c|}{$e_1$} \\ \hline
$f$ & $e_{m+1}$ & $e_{m}$ & $e_{m+1}$ & $e_{m+2}$ \\ \hline
$t_1$ & $s_{\frac{m+1}{2}}$ & $s_{\frac{m}{2}}$ & $s_{\frac{m+1 \pm 1}{2}}$ & $s_{\frac{m}{2}+1}$ \\ \hline
$t_2$ & $s_{\frac{3m+1}{2}}$ & $s_{\frac{3m}{2}}$ & $s_{\frac{3m+1 \pm 1}{2}}$ & $s_{\frac{3m}{2}+1}$ \\ \hline
$t_3$ & $s_{\frac{5m+1}{2}}$ & $s_{\frac{5m}{2}}$ & $s_{\frac{5m+1 \pm 1}{2}}$ & $s_{\frac{5m}{2}+1}$ \\ \hline
$t_4$ & $s_{\frac{7m+1}{2}}$ & $s_{\frac{7m}{2}}$ & $s_{\frac{7m+1 \pm 1}{2}}$ & $s_{\frac{7m}{2}+1}$ \\ \hline
$\bar{P}_n$ & $\frac{1}{2}\left(1+ \frac{r_n^2}{\sqrt{2}} \right)$ & \multicolumn{3}{|c|}{$\frac{1}{2}\left(1+ \frac{1}{\sqrt{2}} \right)$ } \\ \hline
\end{tabular}\vspace*{0.4cm}
\caption{\label{table1} The optimal effects and states for the random access test with two dichotomic measurements on even polygon state spaces $\mathcal{S}_n$, where $n=4m$ for some $m \in \mathbb N$. When $m$ is even, the optimizing effect $f$ is not unique and thus also the optimizing states are different for different choices of $f$; furthermore, they may not be unique even for a fixed choice of $f$.}
\end{table}
Let us then consider the case when $n\neq 4m$ for any $m \in \mathbb N$ so that we must actually then have that $n = 4m+2$ for some $m \in \mathbb N$. In this case we can again distinguish two different cases: when $m$ is odd and when $m$ is even. To see the reason for this, let us consider the expression for $\bar{P}_n$ when $n=4m+2$. In this case we see that the maximum possible value for the inner supremums can be attained if $j_1=k/2$, $j_2=k/2+m+1/2$, $j_3=k/2+2m+1$ and $j_4=k/2+3m+3/2$. However, these cannot all be integers, regardless of whether $k$ is odd or even.
If we assume that $k$ is even, then the closest integers with which we can attain the supremum are $j'_1=k/2$, $j'_{\pm 2}=(k\pm 1)/2+m+1/2$, $j'_3=k/2+2m+1$ and $j'_{\pm 4}=(k\pm 1)/2+3m+3/2$. For these parameters we get that
\begin{align*}
\bar{P}_n &= \sup_{k \in \{1, \ldots, n/2\}} \frac{1}{8} \left[ 4 + 2 r_n^2 \cos\left(\frac{(k-1)\pi}{n} \right) + 2 \sin\left(\frac{(k-1)\pi}{n} \right) \right]
\end{align*}
It can be shown that in this case the optimizing value for $k$ can be restricted to be either $m+1$ or $m+2$. Since we are assuming that $k$ is even, this leads to two different considerations: when $m$ is even and when $m$ is odd. For odd $m$ we have that the optimal value is $k = m+1$ and in this case Eq. \eqref{eq:n=4m+2-odd} holds. Similarly for even $m$ we have that the optimal value is $k = m+2$ and in this case Eq. \eqref{eq:n=4m+2-even} holds.
On the other hand, if we assume that $k$ is odd, then the optimizing values for $j_i$'s are $j''_{\pm 1}=(k\pm 1)/2$, $j''_{2}=k/2+m+1/2$, $j''_{\pm 3}=(k\pm 1)/2+2m+1$ and $j''_{4}=k/2+3m+3/2$. For these parameters we get that
\begin{align*}
\bar{P}_n &= \sup_{k \in \{1, \ldots, n/2\}} \frac{1}{8} \left[ 4 + 2 \cos\left(\frac{(k-1)\pi}{n} \right) + 2 r_n^2 \sin\left(\frac{(k-1)\pi}{n} \right) \right]
\end{align*}
Also in this case the optimizing value for $k$ can be shown to be either $m+1$ or $m+2$. Again we have different cases for even and odd $m$. For odd $m$ we have that the optimal value is $k = m+2$ and then one can confirm that we get Eq. \eqref{eq:n=4m+2-odd}. Similarly for even $m$ we have that the optimal value is $k = m+1$ and then one can confirm that we get Eq. \eqref{eq:n=4m+2-even}.
To conclude, the maximum success probabilities in the case $n=4m+2$ are given by Eq. \eqref{eq:n=4m+2-odd} when $m$ is odd and by Eq. \eqref{eq:n=4m+2-even} when $m$ is even. One can show that both of these expressions are strictly between the qubit (and the rebit) value $1/2(1+1/\sqrt{2})$ and the upper bound given in Eq. \eqref{eq:P_n-even-bound} for all finite $m \in \mathbb N$. In the limit $m \to \infty$ the maximum success probability approaches the qubit value. All the optimal effects and states are explicitly expressed in Table \ref{table2}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|} \hline
$n$ & \multicolumn{4}{|c|}{$4m+2$} \\ \hline
$m$ & \multicolumn{2}{|c|}{odd} & \multicolumn{2}{|c|}{even} \\ \hline
$e$ & \multicolumn{2}{|c|}{$e_1$} & \multicolumn{2}{|c|}{$e_1$} \\ \hline
$f$ & $e_{m+1}$ & $e_{m+2}$ & $e_{m+1}$ & $e_{m+2}$ \\ \hline
$t_1$ & $s_{\frac{m+1}{2}}$ & $s_{\frac{m\pm 1}{2}+1}$ & $s_{\frac{m+1\pm 1}{2}}$ & $s_{\frac{m}{2}+1}$\\ \hline
$t_2$ & $s_{\frac{3m\pm 1}{2}+1}$ & $s_{\frac{3m+1}{2}+1}$ & $s_{\frac{3m}{2}+1}$ & $s_{\frac{3m+1\pm 1}{2}+1}$\\ \hline
$t_3$ & $s_{\frac{5m+1}{2}+1}$ & $s_{\frac{5m\pm 1}{2}+2}$ & $s_{\frac{5m+1\pm 1}{2}+1}$ & $s_{\frac{5m}{2}+2}$\\ \hline
$t_4$ & $s_{\frac{7m \pm 1}{2}+2}$ & $s_{\frac{7m+1}{2}+2}$ & $s_{\frac{7m}{2}+2}$ & $s_{\frac{7m+1\pm 1}{2}+2}$\\ \hline
& \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} \\[-0.4cm]
$\bar{P}_n$ & \multicolumn{2}{|c|}{
$\frac{1}{4} \left[ 2 + r_n^2 \cos\left(\frac{m\pi}{n} \right) + \sin\left(\frac{m\pi}{n} \right) \right]$} & \multicolumn{2}{|c|}{
$\frac{1}{4} \left[ 2 + \cos\left(\frac{m\pi}{n} \right) + r_n^2 \sin\left(\frac{m\pi}{n} \right) \right]$} \\[0.1cm] \hline
\end{tabular}\vspace*{0.4cm}
\caption{\label{table2} The optimal effects and states for the random access test with two dichotomic measurements on even polygon state spaces $\mathcal{S}_n$, where $n=4m+2$ for some $m \in \mathbb N$. For both even and odd $m$ the optimizing effect $f$ is not unique and thus also the optimizing states are different for different choices of $f$ and furthermore they may not be unique even for a fixed choice of $f$.}
\end{table}
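As a numerical sanity check (a sketch only), the closed-form expressions \eqref{eq:n=4m-odd}--\eqref{eq:n=4m+2-even} can be evaluated directly and compared with the brute-force values produced by the sketch of the previous subsection.
\begin{verbatim}
import numpy as np

def P_even(n):   # Eqs. (n=4m-odd), (n=4m-even), (n=4m+2-odd), (n=4m+2-even)
    r2 = 1.0 / np.cos(np.pi / n)
    if n % 4 == 0:
        m = n // 4
        return 0.5*(1 + r2/np.sqrt(2)) if m % 2 == 1 else 0.5*(1 + 1/np.sqrt(2))
    m = (n - 2) // 4
    if m % 2 == 1:
        return 0.25*(2 + r2*np.cos(m*np.pi/n) + np.sin(m*np.pi/n))
    return 0.25*(2 + np.cos(m*np.pi/n) + r2*np.sin(m*np.pi/n))

for n in range(4, 31, 2):
    print(n, round(P_even(n), 6))
\end{verbatim}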
\subsubsection{Comparing the rebit to the even polygon state spaces}
As was established in the previous section, instead of having a single oscillatory upper bound that would explain the behaviour of the maximum success probability depicted in Fig. \ref{fig:polygon-max} we must in fact divide the even polygons in four distinct cases: i) $n=4m$ and $m$ is odd, ii) $n=4m$ and $m$ is even, iii) $n=4m+2$ and $m$ is odd, and iv) $n=4m+2$ and $m$ is even for some $m \in \mathbb N$.
\begin{figure}
\caption{\label{fig:optimal-4-6-8-10} Examples of optimal effects and states for the first four even polygon state spaces ($n=4,6,8,10$).}
\end{figure}
Regarding the geometry of the optimal effects and the states, as was explained previously, we can freely choose $e=e_1$. According to our analysis, similarly to the rebit case, the second extreme effect $f$ is aligned so that it is `furthest' away from both $e$ and $u-e$ along the circumsphere of the polygon, i.e., taking values $f=e_k$ where $k \in \{m,m+1,m+2\}$ depending on the polygon. Finally, again just as in the rebit, the optimizing states $t_1, t_2,t_3,t_4$ are (not necessarily uniquely) determined by the sums $e+f$, $u-e+f$, $e+u-f$ and $u-e+u-f$ as close to the same directions of these sums in the 2-dimensional $(x,y)$-projections as possible.
We illustrate the previous properties along with pointing out the differences of the four aforementioned cases by depicting some of the optimal effects and states in the first four even polygons in Fig. \ref{fig:optimal-4-6-8-10}. We see the first two cases when $n=4m$ with $m=1$ and $m=2$ depicted on the left, whereas the first two cases when $n=4m+2$ with $m=1$ and $m=2$ are depicted on the right. Furthermore, the top figures are examples of the cases when $m$ is odd and the bottom ones are examples of the cases when $m$ is even. The depicted states and effects are chosen from Tables \ref{table1} and \ref{table2} in such a way that we take $f=e_{m+1}$ for all of the different cases, and if there is more than one optimizing state for these optimizing effects, then we take the equal convex mixture of such states for the reasons explained below.
From the figure we see that in the cases when $n=4m$ the alignment of the optimal effects $e$ and $f$ can be chosen to be just as in the rebit state space. However, the alignment of the optimal states differs for odd and even $m$: if $m$ is odd, then the optimal states $t_1, t_2, t_3, t_4$ are unique pure states and aligned in the same direction in the $(x,y)$-projection as the sums of the optimal effects (just as in the rebit), but in the case that $m$ is even, the optimal states are not unique and they can be chosen from the corresponding edges of the state space. In particular, for illustration purposes, for even $m$ we chose the optimal states to be of the form $t_i=1/2(s_{j_i}+s_{j_i+1})$ for some $j_i \in \{1, \ldots, n\}$ for all $i \in \{1,2,3,4\}$, so that they align in the same direction as the sums of the optimal effects $e$ and $f$ in the 2-dimensional $(x,y)$-projection, just as in the case of odd $m$ and the rebit. However, in the case that $n=4m+2$ the situation differs from that of the rebit and the qubit already for $m=1$ and $m=2$, since the effect $f$ cannot be chosen to be orthogonal to $e$ in the $(x,y)$-projection in the $z=1/2$--plane as before. For this reason also the optimal states are determined differently: after fixing the effects $e=e_1$ and $f=e_{m+1}$, for odd $m$ we have that $t_1$ and $t_3$ are uniquely determined and $t_2$ and $t_4$ are not (and we choose them along the same direction as the sums of the respective optimizing effects), whereas for even $m$ it is the opposite.
\subsubsection{Odd polygons}
In odd polygon theories the maximum success probability $\bar{P}_n$ for two dichotomic measurements depends on $n$ as follows:
\begin{align}
& \bar{P}_n = \frac{1}{4} \left[ 2 + \cos\left(\tfrac{m\pi}{n} \right) + r_{2n}^2 \sin\left(\tfrac{m\pi}{n} \right) \right] \quad \textrm{ $n=4m+1$ for $m \in \mathbb N$}\label{eq:n=4m+1} \\
& \bar{P}_n = \frac{1}{4} \left[ 2 + \cos\left(\tfrac{(m+1)\pi}{n} \right) + r_{2n}^2 \sin\left(\tfrac{(m+1)\pi}{n} \right) \right] \ \textrm{ $n=4m+3$ for $m \in \mathbb N$},\label{eq:n=4m+3}
\end{align}
where $r_{2n} = \sqrt{\sec\left( \frac{\pi}{2n}\right)}$.
The proof of these results is similar to our previous proof in the case of the even polygon theories. In Eq. \eqref{eq:P-2-2} we can fix the first extreme effect $e$ to be any of the nontrivial extreme effects, and for odd polygons we choose $e=g_1$. Furthermore, we can choose $f = g_k$ for some $k \in \{1, \ldots, n\}$ and $t_i = s_{j_i}$ for some $j_i \in \{1, \ldots, n\}$ for all $i \in \{1,2,3,4\}$. We can write $\bar{P}_n$ for odd polygons as
\begin{align*}
\bar{P}_n&= \sup_{k \in \{1, \ldots, n\}} \frac{1}{8} \left[ \sup_{j_1 \in \{1, \ldots, n\}} (g_1+g_k)(s_{j_1}) + \sup_{j_2 \in \{1, \ldots, n\}} (u-g_1+g_k)(s_{j_2}) \right. \\
&\ \ \ \left. + \sup_{j_3 \in \{1, \ldots, n\}} (u-g_1+u-g_k)(s_{j_3}) + \sup_{j_4 \in \{1, \ldots, n\}} (g_1+u-g_k)(s_{j_4})\right].
\end{align*}
By expanding the previous expression and by using some trigonometric identities we can rewrite the previous equation as
\begin{align*}
\bar{P}_n&= \sup_{k \in \{1, \ldots, n\}} \frac{1}{8} \left[\sup_{j_1 \in \{1, \ldots, n\}} \frac{1}{1+r_n^2}\left(2+ 2 r_n^2 \cos\left(\tfrac{(k-1)\pi}{n} \right) \cos\left(\tfrac{(k+1-2j_1)\pi}{n} \right) \right) \right. \\
& \quad \quad \ \ + \sup_{j_2 \in \{1, \ldots, n\}} \frac{1}{1+r_n^2}\left(1+r_n^2- 2r_n^2 \sin\left(\tfrac{(k-1)\pi}{n} \right) \sin\left(\tfrac{(k+1-2j_2)\pi}{n} \right) \right) \\
& \quad \quad \ \ + \sup_{j_3 \in \{1, \ldots, n\}} \frac{1}{1+r_n^2}\left(2 r_n^2- 2 r_n^2 \cos\left(\tfrac{(k-1)\pi}{n} \right) \cos\left(\tfrac{(k+1-2j_3)\pi}{n} \right) \right) \\
& \left. \quad \quad \ \ + \sup_{j_4 \in \{1, \ldots, n\}} \frac{1}{1+r_n^2}\left(1+r_n^2+ 2r_n^2 \sin\left(\tfrac{(k-1)\pi}{n} \right) \sin\left(\tfrac{(k+1-2j_4)\pi}{n} \right) \right) \right].
\end{align*}
As we did in the case of the even polygons, we can confirm that the algebraic maximum, and thus an upper bound for the actual maximum success probability, would be attained with parameters $k=1+n/4$, $j_1=(k+1)/2$, $j_2=(k+1)/2 + n/4$, $j_3 = (k+1)/2+n/2$ and $j_4 = (k+1)/2+3n/4$, in which case we have that
\begin{align*}
\bar{P}_n \leq \frac{1}{2} \left( 1+ \frac{ \sqrt{2}r_n^2}{1+r_n^2} \right) = \frac{1}{2} \left( 1+ \frac{ \sqrt{2}}{1+\cos\left( \frac{\pi}{n} \right)} \right).
\end{align*}
We note that for any finite $n$ this bound is always larger than the maximum success probability in the qubit, and in the limit $n \to \infty$ the bound coincides with the qubit value. However, unlike in the even polygon case, we cannot attain this upper bound in any odd polygon since $k=1+n/4$ is never an integer for odd $n$. But since the optimized expressions are of a similar form as for the even polygons, we again know that the actual supremum values will be attained for parameter values close to those that attain the algebraic maximum. All the optimizing effects and states are presented in Table \ref{table3}. We depict some of these optimal effects and states in Fig. \ref{fig:optimal-5-7-9-11} in a similar fashion as in Fig. \ref{fig:optimal-4-6-8-10} for the even polygons.
\begin{figure}
\caption{\label{fig:optimal-5-7-9-11} Examples of optimal effects and states for the first four nonclassical odd polygon state spaces ($n=5,7,9,11$).}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|} \hline
$n$ & \multicolumn{2}{|c|}{$4m+1$} & \multicolumn{2}{|c|}{$4m+3$} \\ \hline
$m$ & odd & even & odd & even \\ \hline
$e$ & $g_1$ & $g_1$ & $g_1$ & $g_1$ \\ \hline
$f$ & $g_{m+1}$ & $g_{m+1}$ & $g_{m+2}$ & $g_{m+2}$ \\ \hline
$t_1$ & $s_{\frac{m\pm 1}{2}+1}$ & $s_{\frac{m}{2}+1}$ & $s_{\frac{m+1}{2}+1}$ & $s_{\frac{m+1\pm1}{2}+1}$\\ \hline
$t_2$ & $s_{\frac{3m+1}{2}+1}$ & $s_{\frac{3m}{2}+1}$ & $s_{\frac{3m+1}{2}+2}$ & $s_{\frac{3m}{2}+2}$\\ \hline
$t_3$ & $s_{\frac{5m+ 1}{2}+1}$ & $s_{\frac{5m+1\pm 1}{2}+1}$ & $s_{\frac{5m\pm 1}{2}+3}$ & $s_{\frac{5m}{2}+3}$\\ \hline
$t_4$ & $s_{\frac{7m+1}{2}+1}$ & $s_{\frac{7m}{2}+2}$ & $s_{\frac{7m+1}{2}+3}$ & $s_{\frac{7m}{2}+4}$\\ \hline
& \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} \\[-0.4cm]
$\bar{P}_n$ & \multicolumn{2}{|c|}{
$\frac{1}{4} \left[ 2 + \cos\left(\frac{m\pi}{n} \right) + r_{2n}^2 \sin\left(\frac{m\pi}{n} \right) \right]$} & \multicolumn{2}{|c|}{
$\frac{1}{4} \left[ 2 + \cos\left(\frac{(m+1)\pi}{n} \right) + r_{2n}^2 \sin\left(\frac{(m+1)\pi}{n} \right) \right]$} \\[0.1cm] \hline
\end{tabular}
\caption{\label{table3} The optimal effects and states for the random access test with two dichotomic measurements on odd polygon state spaces. For both cases $n=4m+1$ and $n=4m+3$ the optimizing states are different for odd and even $m$ and they may not be unique. }
\end{table}
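Analogously, Eqs. \eqref{eq:n=4m+1} and \eqref{eq:n=4m+3} can be evaluated numerically (a sketch only) and compared with the brute-force values for the odd polygons.
\begin{verbatim}
import numpy as np

def P_odd(n):    # Eqs. (n=4m+1) and (n=4m+3); r_{2n}^2 = sec(pi/(2n))
    r2n2 = 1.0 / np.cos(np.pi / (2*n))
    m = (n - 1) // 4 if n % 4 == 1 else (n - 3) // 4
    a = m if n % 4 == 1 else m + 1
    return 0.25*(2 + np.cos(a*np.pi/n) + r2n2*np.sin(a*np.pi/n))

for n in range(5, 30, 2):
    print(n, round(P_odd(n), 6))
\end{verbatim}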
\section{Conclusions}
We introduced a generalization of the well-known quantum random access code (QRAC) information processing task, namely the random access tests (RATs), in the framework of general probabilistic theories. In particular, we formulated the random access tests to study the properties of the measurements that are used to decode the information in the test. We showed that the figure of merit of these tasks, the average success probability, is linked to the decoding power of the harmonic approximate joint measurement of the used measurements. In this way the harmonic approximate joint measurement, which can be defined for any set of measurements, can be used to give upper bounds on the maximum average success probability of the RAT for the given measurements.
In quantum theory it was previously known that one has to use incompatible measurements in order to obtain a quantum advantage in QRACs over the classical case. We generalized this result by showing that, in order to obtain an advantage in RATs over the classical case, either the measurements have to be incompatible or the theory itself must possess a property which we call super information storability, meaning that the information storability is strictly larger than the operational dimension of the theory. In the case of quantum and point-symmetric state spaces, super information storability does not hold, so that in these cases the violation of the classical bound implies incompatibility of the measurements. In general, our result can be used as a semi-device-independent certification of incompatibility in GPTs. We also showed that maximal incompatibility of two dichotomic measurements, i.e., maximal robustness under noise in the form of mixing, is linked to their performance in the RAT. More precisely, we proved that two dichotomic measurements are maximally incompatible if and only if they can be used to accomplish the RAT with certainty by using a set of affinely dependent states.
As examples of state spaces other than quantum and classical we considered the regular polygon theories, in which we exhaustively examined the RATs with two dichotomic measurements (as their operational dimension is also two). The even polygons are point-symmetric, so they do not have super information storability, but for odd polygons we gave an explicit construction of compatible measurements that violate the classical bound, thus detecting the super information storability property of these theories. Furthermore, we solved the optimal success probabilities of these RATs in all polygons and saw that for incompatible measurements it is possible to violate the quantum bound as well, both in even and odd polygons.
\end{document}
\begin{document}
\begin{abstract}
Our main result is elementary and concerns the relationship between the multiplicative groups of the coordinate and endomorphism rings of the formal additive group
over a field of characteristic $p>0$. The proof involves the combinatorics of base $p$ representations of positive integers in a striking way.
We apply the main result to construct a canonical quotient of the module of Coleman power series over the Iwasawa algebra
when the base local field is of characteristic $p$. This gives information in a situation
which apparently has never previously been investigated.
\end{abstract}
\title{Digit patterns and Coleman power series}
\section{Introduction}
\subsection{Overview and motivation}
Our main result (Theorem~\ref{Theorem:MainResult} below) concerns the relationship between the multiplicative groups of the coordinate and endomorphism rings of the formal additive group
over a field of characteristic $p>0$. Our result is elementary and does not require a great deal of apparatus for its statement. The proof of the main result involves the combinatorics of base $p$ representations of positive integers
in a striking way.
We apply our main result (see Corollary~\ref{Corollary:ColemanApp} below) to construct a canonical quotient of the module of Coleman power series over the Iwasawa algebra
when the base local field is of characteristic $p$. By {\it Coleman power series} we mean the telescoping power series
introduced and studied in Coleman's classical paper \cite{Coleman}.
Apart from Coleman's \cite{ColemanLocModCirc} complete results in the important special case of the formal multiplicative group over ${\mathbb{Z}}_p$, little is known about the structure of the module of Coleman power series over the Iwasawa algebra, and, so far as we can tell,
the characteristic $p$ situation has never previously been investigated. We undertook this research in an attempt to fill the gap in characteristic $p$.
Our results are far from being as complete as Coleman's, but they are surprising on account of their ``digital'' aspect, and they raise further questions worth investigating.
\subsection{Formulation of the main result} The notation introduced under this heading is in force throughout the paper.
\subsubsection{Rings and groups of power series} Fix a prime number $p$
and a field $K$ of characteristic $p$.
Let $q$ be a power of $p$. Consider: the (commutative) power series ring
$$K[[X]]=\left\{\left.\sum_{i=0}^\infty a_i X^i\right| a_i\in K\right\};$$
the (in general noncommutative) ring
$$R_{q,K}=\left\{\left.\sum_{i=0}^\infty a_i X^{q^i}\right| a_i\in K\right\},$$
in which by definition multiplication is power series composition;
and the subgroup
$$\Gamma_{q,K}=\left\{\left.X+\sum_{i=1}^\infty a_i X^{q^i}\right| a_i\in K\right\}\subset R_{q,K}^\times,$$
where in general $A^\times$ denotes the group of units of a ring $A$ with unit.
Note that $K[[X]]^\times$ is a right $\Gamma_{q,K}$-module
via composition of power series.
\subsubsection{Logarithmic differentiation}
Given $F=F(X)\in K[[X]]^\times$,
put
$${\mathbf{D}}[F](X)=XF'(X)/F(X)\in XK[[X]].$$
Note that
\begin{equation}\label{equation:Homogeneity}
{\mathbf{D}}[F(\alpha X)]={\mathbf{D}}[F](\alpha X)
\end{equation}
for all $\alpha\in K^\times$. Note that the sequence
\begin{equation}\label{equation:Factoid}
1\rightarrow K[[X^p]]^\times\subset
K[[X]]^\times\xrightarrow{{\mathbf{D}}}
\left\{\left.\sum_{i=1}^\infty a_i X^i\in XK[[X]]\right|
a_{pi}=a_i^p\;\mbox{for all}\;i\in {\mathbb{N}} \right\}\rightarrow 0
\end{equation}
is exact,
where ${\mathbb{N}}$ denotes the set of positive integers.
\subsubsection{$q$-critical integers}
Given $c\in {\mathbb{N}}$, let
$$
O_q(c)=\{n\in {\mathbb{N}}\vert (n,p)=1\;\mbox{and}
\;n\equiv p^ic\bmod{q-1}\;\mbox{for some $i\in {\mathbb{N}}\cup\{0\}$}\}.
$$
Given $n\in {\mathbb{N}}$, let
$\ord_p n$ denote the exact order with which $p$ divides $n$.
We define
$$
C^0_q=\left\{c\in {\mathbb{N}}\cap(0,q)\left|
(c,p)=1\;\mbox{and}\;\frac{c+1}{p^{\ord_p(c+1)}}=\min_{n\in O_q(c)\cap(0,q)}
\frac{n+1}{p^{\ord_p(n+1)}}\right.\right\},
$$
and we call elements of this set {\em $q$-critical integers}.
In the simplest case $p=q$ one has $C^0_p=\{1,\dots,p-1\}$, but in general
the set $C^0_q$ is somewhat complicated.
Put
$$
C_q=\bigcup_{c\in C_q^0}\{q^i(c+1)-1\vert i\in {\mathbb{N}}\cup\{0\}\},
$$
noting that the union is disjoint, since the sets in the union are contained in
different congruence classes modulo $q-1$.
See below for informal ``digital'' descriptions of the sets $C_q^0$ and $C_q$.
\subsubsection{The homomorphism $\psi_q$}
We define a homomorphism
$$\psi_q:XK[[X]]\rightarrow X^2K[[X]]$$ as follows:
given $F=F(X)=\sum_{i=1}^\infty a_iX^i\in XK[[X]]$,
put
$$
\psi_{q}[F]=X\cdot \sum_{k\in C_q}a_kX^{k}.
$$
Note that the composite map
$$\psi_q\circ {\mathbf{D}}:K[[X]]^\times\rightarrow
\left\{\left.\sum_{k\in C_q}a_k X^{k+1}\right| a_k\in K\right\}$$
is surjective by exactness of sequence (\ref{equation:Factoid}).
Further, since the set $\{k+1\vert k\in C_q\}$ is stable under multiplication by $q$,
the target of $\psi_q\circ {\mathbf{D}}$ comes equipped with the structure of left $R_{q,K}$-module. More precisely, the target of $\psi_q\circ {\mathbf{D}}$ is a free left $R_{q,K}$-module for which the set $\{X^{k+1}\vert k\in C_q^0\}$ is a basis.
The following is the main result of the paper.
\begin{Theorem}\label{Theorem:MainResult}
The formula
\begin{equation}\label{equation:MainResult}
\psi_{q}[{\mathbf{D}}[F\circ \gamma]]=\gamma^{-1}\circ \psi_{q}[{\mathbf{D}}[F]]
\end{equation}
holds for all $\gamma\in \Gamma_{q,K}$ and $F\in K[[X]]^\times$.
\end{Theorem}
\noindent
In \S\S\ref{section:Reduction}--\ref{section:DigitMadness2} we give the proof of the theorem. More precisely,
we first explain in \S\ref{section:Reduction} how to reduce the proof of the theorem to a couple of essentially combinatorial assertions
(Theorems~\ref{Theorem:DigitMadness}
and \ref{Theorem:DigitMadness2}),
and then we prove the latter in
\S\ref{section:DigitMadness} and \S\ref{section:DigitMadness2}, respectively.
In \S\ref{section:ColemanApp} we make the application (Corollary~\ref{Corollary:ColemanApp}) of Theorem~\ref{Theorem:MainResult} to Coleman power series. The application does not require any of the apparatus of the proof of Theorem~\ref{Theorem:MainResult}.
\subsection{Informal discussion}
\subsubsection{``Digital'' description of $C_q^0$}
The definition of $C_q^0$ can readily be understood in terms
of simple operations on digit strings.
For example, to verify that $39$ is $1024$-critical, begin by writing out the base $2$ representation of $39$ thus:
$$39=100111_2$$
Then put enough place-holding $0$'s on the left so as to
represent $39$ by a digit string of length $\ord_2 1024=10$:
$$39=0000100111_2$$
Then calculate as follows:
$$\begin{array}{cclr}
\mbox{permute cyclically}
&0000100111_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&100_2\\
&0001001110_2&\mbox{ignore: terminates with a $0$}\\
&0010011100_2&\mbox{ignore: terminates with a $0$}\\
\downarrow&0100111000_2&\mbox{ignore: terminates with a $0$}\\
&1001110000_2&\mbox{ignore: terminates with a $0$}\\
&0011100001_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&1110000_2\\
&0111000010_2&\mbox{ignore: terminates with a $0$}\\
&1110000100_2&\mbox{ignore: terminates with a $0$}\\
\downarrow
&1100001001_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&110000100_2\\
&1000010011_2&\xrightarrow{\mbox{\tiny strike trailing $1$'s and leading $0$'s}}&10000100_2\\
\end{array}$$
Finally, conclude that $39$ is $1024$-critical
because the first entry of the last column is the smallest in that column.
This numerical example conveys some of the flavor of the combinatorial considerations
coming up in the proof of Theorem~\ref{Theorem:MainResult}.
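The recipe above is easy to automate. The following Python sketch (ours, not part of the paper; it assumes $p=2$ and $q$ a power of $2$, and the helper names are our own) tests $q$-criticality by exactly this cyclic-shift procedure and confirms the computation for $39$:
\begin{verbatim}
def strike(bits):
    """Strike trailing 1's and leading 0's from a binary digit string;
    return the value of what remains (0 for the empty string)."""
    s = bits.rstrip('1').lstrip('0')
    return int(s, 2) if s else 0

def is_q_critical(c, q):
    """Digital test of q-criticality for odd c in (0, q), with p = 2."""
    lam = q.bit_length() - 1            # q = 2**lam
    if c % 2 == 0 or not (0 < c < q):
        return False
    bits = format(c, '0{}b'.format(lam))
    column = []
    for i in range(lam):                # all cyclic permutations
        row = bits[i:] + bits[:i]
        if row.endswith('1'):           # ignore rows terminating with a 0
            column.append(strike(row))
    return column[0] == min(column)     # first entry corresponds to c itself

print(is_q_critical(39, 1024))          # True
print([c for c in range(1, 32) if is_q_critical(c, 32)])
\end{verbatim}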
\subsubsection{``Digital'' description of $C_q$}
Given $c\in C_q^0$, let $[c_1,\cdots,c_m]_p$
be a string of digits representing $c$ in base $p$.
(The digit string notation is defined below in
\S\ref{section:Reduction}.)
Then each digit string of the form
$$[c_1,\dots,c_m,\underbrace{p-1,\dots,p-1}_{ n\ord_p q}]_p$$
represents an element of $C_q$. Moreover, each element of $C_q$ arises this way, for unique $c\in C_q^0$ and $n\in {\mathbb{N}}\cup\{0\}$.
\subsubsection{Miscellaneous remarks}
$\;$
(i) The set $C_q$ is a subset of the set of {\em magic numbers} (relative to the choice of $q$) as defined and studied in \cite[\S8.22, p.\ 309]{Goss}. For the moment we do not understand this connection
on any basis other than ``numerology'', but we suspect that it runs much deeper.
(ii) A well-ordering of the set of positive integers distinct from the usual one,
which we call the {\em $p$-digital well-ordering}, plays a key role
in the proof of Theorem~\ref{Theorem:MainResult}, via Theorems~\ref{Theorem:DigitMadness}
and \ref{Theorem:DigitMadness2} below.
In particular, Theorem~\ref{Theorem:DigitMadness2}, via Proposition~\ref{Proposition:DigitMadness2},
characterizes the sets $C_q^0$ and $C_q$ in terms of the $p$-digital well-ordering
and congruences modulo $q-1$.
(iii) The results of this paper were discovered by extensive
computer experimentation with base $p$ expansions
and binomial coefficients modulo $p$. No doubt refinements of our results can be discovered
by continuing such experiments.
(iv) It is an open problem to find a minimal set of generators
for $K[[X]]^\times$ as a topological right $\Gamma_{q,K}$-module, the topologies here being the $X$-adically induced ones. It seems very likely that the module is always infinitely generated,
even when $K$ is a finite field.
Computer experimentation (based on the method of proof of Proposition~\ref{Proposition:Reduction} below) with the simplest case of the problem (in which $K$ is the two-element field and $p=q=2$) has revealed some interesting patterns. But still we are unable to hazard any detailed guess about the solution.
\section{Application to Coleman power series}
\label{section:ColemanApp}
We assume that the reader already knows about Lubin-Tate formal groups and Coleman power series, and is familiar with their applications. We refer the less well-versed reader to \cite{LubinTate}, \cite{Coleman}, \cite{ColemanLocModCirc} and \cite{ColemanAI} to get started up the mountain of literature.
\subsection{Background} We review \cite{LubinTate}, \cite{Coleman} and \cite{ColemanLocModCirc} just far enough to fix a consistent system of notation and to frame precisely the general structure problem motivating our work. \subsubsection{The setting}
Let $k$ be a nonarchimedean local field with maximal
compact subring ${\mathcal{O}}$ and uniformizer $\pi$.
Let $q$ and $p$ be the cardinality and characteristic, respectively,
of the residue field ${\mathcal{O}}/\pi$. Let $\bar{k}$ be an algebraic closure of $k$.
Let $H$ be a complete unramified extension of $k$ in the completion of $\bar{k}$,
let $\varphi$ be the arithmetic Frobenius automorphism
of $H/k$, and let ${\mathcal{O}}_H$ be the ring of integers of $H$. Let the action of $\varphi$ be
extended coefficient-by-coefficient to the power series
ring ${\mathcal{O}}_H[[X]]$.
\subsubsection{Lubin-Tate formal groups}
We say that formal power series with coefficients in ${\mathcal{O}}$ are {\em congruent modulo $\pi$}
if they are so coefficient-by-coefficient,
and we say they are {\em congruent modulo degree $2$}
if the constant and linear terms agree.
Let ${\mathbb{F}}F_\pi$ be the set of one-variable power series $f=f(X)$ such that
$$f(X)\equiv \pi X\bmod{\deg 2},\;\;\;f(X)\equiv X^q\bmod{\pi}.$$
The simplest example of an element of ${\mathbb{F}}F_\pi$ is
$\pi X+X^q$.
The general example of an element of ${\mathbb{F}}F_\pi$ is
$\pi X+X^q+\pi X^2e(X)$,
where $e(X)\in {\mathcal{O}}[[X]]$ is arbitrary.
Given $f\in {\mathbb{F}}F_\pi$, there exists unique $F_f=F_f(X,Y)\in {\mathcal{O}}[[X,Y]]$
such that
$$F_f(X,Y)\equiv X+Y\bmod{\deg 2},\;\;\;f(F_f(X,Y))=F_f(f(X),f(Y)).$$
The power series $F_f(X,Y)$ is a {\em commutative formal group law}.
Given $a\in {\mathcal{O}}$ and $f,g\in {\mathbb{F}}F_\pi$, there exists unique
$[a]_{f,g}=[a]_{f,g}(X)\in {\mathcal{O}}[[X]]$ such that
$$[a]_{f,g}(X)\equiv aX\bmod{\deg 2},\;\;f([a]_{f,g}(X))=[a]_{f,g}(g(X)).$$
We write $[a]_f=[a]_f(X)=[a]_{f,f}(X)$ to abbreviate notation.
The family \linebreak $\{[a]_f(X)\}_{a\in {\mathcal{O}}}$ is a system of {\em formal complex multiplications}
for the formal group law $F_f(X,Y)$. For each fixed $f\in {\mathbb{F}}F_\pi$, the package
$$(F_f(X,Y),\{[a]_f(X)\}_{a\in {\mathcal{O}}})$$
is a {\em Lubin-Tate formal group}.
The formal properties of the ``big package''
$$\left(\{F_f(X,Y)\}_{f\in {\mathbb{F}}F_\pi},
\{[a]_{f,g}(X)\}_{\begin{subarray}{c}
a\in {\mathcal{O}}\\
f,g\in {\mathbb{F}}F_\pi
\end{subarray}}\right)$$
are detailed in \cite[Thm.\ 1, p.\ 382]{LubinTate}. In particular,
one has
\begin{equation}\label{equation:LTformal}
[\pi]_f(X)=f(X),\;\;\;[1]_f(X)=X,\;\;\;
[a]_{f,g}\circ[b]_{g,h}=[ab]_{f,h}
\end{equation}
for all $a,b\in {\mathcal{O}}$ and $f,g,h\in {\mathbb{F}}F_\pi$.
We remark also that
\begin{equation}\label{equation:LTformalbis}
[\omega]_{\pi X+X^q}(X)=\omega X
\end{equation}
for all roots $\omega$ of unity in $k$ of order prime to $p$.
\subsubsection{Coleman power series}
By Coleman's theory \cite{Coleman} there exists for each $f\in {\mathbb{F}}F_\pi$ a unique
group homomorphism
$${\mathbb{N}}N_f:{\mathcal{O}}_H[[X]]^\times\rightarrow {\mathcal{O}}_H[[X]]^\times$$
such that
$$
{\mathbb{N}}N_f[h](f(X))=\prod_{\begin{subarray}{c}
\lambda\in \bar{k}\\
f(\lambda)=0
\end{subarray}}h(F_f(X,\lambda))
$$
for all $h\in {\mathcal{O}}_H[[X]]^\times$.
Let
$${\mathcal{M}}_f=\{h\in {\mathcal{O}}_H[[X]]^\times\vert {\mathbb{N}}N_f[h]=\varphi h\}.$$
We refer to elements of ${\mathcal{M}}_f$ as {\em Coleman power series}.
\subsubsection{Natural operations on Coleman power series}
The group
${\mathcal{M}}_f$ comes equipped with the structure of right ${\mathcal{O}}^\times$-module
by the rule
\begin{equation}\label{equation:OOModuleRule}
((h,a)\mapsto h\circ [a]_f):{\mathcal{M}}_f\times {\mathcal{O}}^\times\rightarrow {\mathcal{M}}_f,
\end{equation}
and we have at our disposal a canonical isomorphism
\begin{equation}\label{equation:Cformal}
(h\mapsto h\circ [1]_{g,f}):{\mathcal{M}}_g\rightarrow {\mathcal{M}}_f
\end{equation}
of right ${\mathcal{O}}^\times$-modules
for all $f,g\in {\mathbb{F}}F_\pi$, as one verifies by applying the formal properties (\ref{equation:LTformal}) of the big Lubin-Tate package in a straightforward way. We also have at our disposal a canonical group isomorphism
\begin{equation}\label{equation:CBijection}
(h\mapsto h\bmod{\pi}):{\mathcal{M}}_f\rightarrow ({\mathcal{O}}_H/\pi)[[X]]^\times
\end{equation}
as one verifies by applying \cite[Lemma 13, p.\ 103]{Coleman}
in a straightforward way.
The {\em Iwasawa algebra} (completed group ring)
${\mathbb{Z}}_p[[{\mathcal{O}}^\times]]$ associated to $k$ acts naturally on the slightly modified version
$${\mathcal{M}}_f^0=\{h\in {\mathcal{O}}_H[[X]]^\times \vert h\in {\mathcal{M}}_f,\;h(0)\equiv 1\bmod{\pi}\}$$
of ${\mathcal{M}}_f$.
\subsubsection{The structure problem}
Little seems to be known in general about the structure of the ${\mathcal{O}}^\times$-module ${\mathcal{M}}_f$. To determine this structure is a fundamental problem in local class field theory, and the problem remains open. Essentially everything we do know about the problem is due to Coleman.
Namely, in the special case
$$k={\mathbb{Q}}_p=H,\;\;\;\pi=p,\;\;\;f(X)=(1+X)^p-1\in {\mathbb{F}}F_\pi,$$
Coleman showed \cite{ColemanLocModCirc}
that ${\mathcal{M}}^0_f$ is ``nearly'' a free ${\mathbb{Z}}_p[[{\mathcal{O}}^\times]]$-module of rank $1$,
and in the process recovered Iwasawa's result on the structure of local units modulo circular units. Moreover, Coleman's methods are strong enough to analyze ${\mathcal{M}}^0_f$ completely in the case of general $H$, even though this level of generality is not explicitly considered in \cite{ColemanLocModCirc}. So in the case of the formal multiplicative group over ${\mathbb{Z}}_p$
we have a complete and satisfying description of structure. Naturally one wishes for so
complete a description in the general case. We hope with the present work to contribute to the solution of the structure problem.
Here is the promised application of Theorem~\ref{Theorem:MainResult}, which makes the ${\mathcal{O}}^\times$-module structure of a certain quotient of ${\mathcal{M}}_f$ explicit when $k$ is of characteristic $p$.
\begin{Corollary}\label{Corollary:ColemanApp} Assume that $k$ is of characteristic $p$ and fix $f\in {\mathbb{F}}F_\pi$.
Then there exists a surjective group homomorphism
$$\Psi_f:{\mathcal{M}}_f\rightarrow\left\{\left.\sum_{k\in C_q}a_kX^{k+1}\right| a_k\in {\mathcal{O}}_H/\pi\right\}$$
such that
\begin{equation}\label{equation:ColemanApp}
\Psi_f[h\circ [\omega u]_f]\equiv
[(\omega u)^{-1}]_{\pi X+X^q}\circ \Psi_f[h]\circ[\omega]_{\pi X+X^q}\bmod{\pi}
\end{equation}
for all $h=h(X)\in {\mathcal{M}}_{f}$, $u\in 1+\pi{\mathcal{O}}$
and roots of unity $\omega\in {\mathcal{O}}^\times$.
\end{Corollary}
\proof If we are able to construct $\Psi_{\pi X+X^q}$ with the desired properties,
then in the general case the map
$$\Psi_f=(h\mapsto \Psi_{\pi X+X^q}[h\circ [1]_{f,\pi X+X^q}])$$
has the desired properties by (\ref{equation:LTformal}) and (\ref{equation:Cformal}).
We may therefore assume without loss of generality that
$$f=\pi X+X^q,$$
in which case
$F_f(X,Y)=X+Y$,
i.~e., the formal group underlying the Lubin-Tate formal group attached to $f$ is additive.
By (\ref{equation:LTformalbis}) and the definitions, given
$a\in {\mathcal{O}}$ and writing
$$a=\sum_{i=0}^\infty \alpha_i \pi^i\;\;\;(\alpha_i^q=\alpha_i),$$
in the unique possible way, one has
$$[a]_f\equiv \sum_{i=0}^\infty \alpha_i X^{q^i}\bmod{\pi},$$
and hence
the map
$a\mapsto [a]_{f}\bmod \pi$ gives rise to an isomorphism
$$\theta:{\mathcal{O}}\iso R_{q,{\mathcal{O}}/\pi}$$
of rings. Let
$$\rho:{\mathcal{M}}_f\rightarrow ({\mathcal{O}}_H/\pi)[[X]]^\times$$
be the isomorphism (\ref{equation:CBijection}). We claim that
$$\Psi_f=\psi_q\circ {\mathbf{D}}\circ \rho$$
has all the desired properties. In any case, since $\rho$ is an isomorphism and $\psi_q\circ {\mathbf{D}}$ by (\ref{equation:Factoid}) is surjective, $\Psi_f$ is surjective, too.
To verify (\ref{equation:ColemanApp}), we calculate as follows:
$$\begin{array}{rcl}
\psi_q[{\mathbf{D}}[\rho(h\circ [\omega u]_f)]]&=&\psi_q[{\mathbf{D}}[\rho(h\circ [\omega]_f\circ[ u]_f)]]\\
&=&
\psi_q[{\mathbf{D}}[\rho(h)\circ \theta(\omega)\circ \theta(u)]]\\
&=&\theta(u^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)\circ \theta(\omega)]]\\
&=&\theta(u^{-1})\circ \psi_q[{\mathbf{D}}[\rho(h)]\circ \theta(\omega)]\\
&=&\theta(u^{-1})\circ\theta(\omega^{-1})\circ
\psi_q[{\mathbf{D}}[\rho(h)]]\circ \theta(\omega)\\
&=&\theta((u\omega)^{-1})\circ
\psi_q[{\mathbf{D}}[\rho(h)]]\circ \theta(\omega)
\end{array}
$$
The third and fourth steps are justified by
(\ref{equation:MainResult}) and
(\ref{equation:Homogeneity}), respectively.
The remaining steps are clear. The claim is proved, and with it the corollary.
\qed
\section{Reduction of the proof}\label{section:Reduction} We put Coleman power series behind us for the rest of the paper. We return to the elementary point of view taken in the introduction. In this section
we explain how to reduce the proof of Theorem~\ref{Theorem:MainResult} to a couple of combinatorial assertions.
\subsection{Digital apparatus}
\subsubsection{Base $p$ expansions}
Given an additive decomposition
$$n=\sum_{i=1}^s n_ip^{s-i}\;\;\;(n_i\in {\mathbb{Z}}\cap [0,p),\;n\in {\mathbb{N}}),$$
we write
$$n=[n_1,\dots,n_s]_p,$$
we call the latter a {\em base $p$ expansion} of $n$ and
we call the coefficients $n_i$ {\em digits}.
Note that we allow base $p$ expansions to have leading $0$'s.
We say that a base $p$ expansion is
{\em minimal} if the first digit is positive.
For convenience, we set the empty base $p$ expansion $[]_p$ equal to $0$ and declare it to be minimal.
We always read base $p$ expansions left-to-right, as though they were words spelled in the alphabet $\{0,\dots,p-1\}$. In this notation the well-known theorem of Lucas takes the form
$$\left(\begin{array}{c}
\;[a_1,\dots,a_n]_p\\
\;[b_1,\dots,b_n]_p
\end{array}\right)\equiv \left(\begin{array}{c}
a_1\\
b_1
\end{array}\right)
\cdots \left(\begin{array}{c}
a_n\\
b_n
\end{array}\right)\bmod{p}.$$
(For all $n\in {\mathbb{N}}\cup\{0\}$ and $k\in {\mathbb{Z}}$ we set
$\left(\begin{subarray}{c}
n\\
k\end{subarray}\right)=
\frac{n!}{k!(n-k)!}$ if $0\leq k\leq n$ and $\left(\begin{subarray}{c}
n\\
k\end{subarray}\right)=0$ otherwise.)
The theorem of Lucas implies that for all
integers $k,\ell,m\geq 0$ such that $m=k+\ell$, the binomial
coefficient $\left(\begin{subarray}{c}
m\\
k
\end{subarray}\right)$ does not vanish modulo $p$ if and only if the addition of $k$ and $\ell$ in base $p$ requires no ``carrying''.
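Since the no-carrying criterion is used repeatedly below, we record a small Python sketch (illustrative only; helper names are ours) that checks the digit-by-digit form of the theorem of Lucas against direct computation:
\begin{verbatim}
from math import comb

def digits(n, p):
    """Base p digits of n, least significant first."""
    d = []
    while n:
        d.append(n % p); n //= p
    return d or [0]

def binom_mod_p(m, k, p):
    """Binomial coefficient modulo p via Lucas' theorem."""
    dm, dk = digits(m, p), digits(k, p)
    dk += [0] * (len(dm) - len(dk))
    out = 1
    for a, b in zip(dm, dk):
        out = out * comb(a, b) % p   # comb(a, b) = 0 exactly when carrying occurs
    return out

p = 3
for m in range(60):
    for k in range(m + 1):
        assert binom_mod_p(m, k, p) == comb(m, k) % p
print("Lucas check passed for p =", p)
\end{verbatim}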
\subsubsection{The $p$-core function $\kappa_p$}
Given $n\in {\mathbb{N}}$, we define
$$
\kappa_p(n)=(n/p^{\ord_p n}+1)/p^{\ord_p(n/p^{\ord_p n}+1)}-1.
$$
We call $\kappa_p(n)$ the {\em $p$-core}
of $n$.
For example, $\kappa_p(n)=0$
iff $n=p^{k-1}(p^\ell-1)$
for some $k,\ell\in {\mathbb{N}}$. The meaning of the $p$-core function
is easiest to grasp in terms of minimal base $p$ expansions. One calculates $\kappa_p(n)$ by
discarding trailing $0$'s and then discarding trailing $(p-1)$'s.
For example, to calculate the $3$-core
of
$963=[1,0,2,2,2,0,0]_3$,
first discard trailing $0$'s to get
$[1,0,2,2,2]_3=107$,
and then discard trailing $2$'s to get
$\kappa_3(963)=[1,0]_3=3$. \subsubsection{The $p$-defect function $\delta_p$}
For each $n\in{\mathbb{N}}$, let $\delta_p(n)$ be the length of the minimal base $p$
representation of $\kappa_p(n)$. We call $\delta_p(n)$ the {\em $p$-defect}
of $n$. For example, since as noted above $\kappa_3(963)=[1,0]_3$,
one has $\delta_3(963)=2$.
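A direct Python transcription of these definitions (an illustrative sketch; helper names are ours) reproduces the worked example $\kappa_3(963)=3$ and $\delta_3(963)=2$:
\begin{verbatim}
def ord_p(n, p):
    """Exact order with which p divides n (n >= 1)."""
    e = 0
    while n % p == 0:
        n //= p; e += 1
    return e

def kappa(n, p):
    """p-core: discard trailing 0's, then trailing (p-1)'s."""
    n //= p ** ord_p(n, p)
    m = n + 1
    return m // p ** ord_p(m, p) - 1

def delta(n, p):
    """p-defect: length of the minimal base p expansion of kappa(n, p)."""
    c, length = kappa(n, p), 0
    while c:
        c //= p; length += 1
    return length

print(kappa(963, 3), delta(963, 3))     # 3 2
\end{verbatim}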
\subsubsection{The $p$-digital well-ordering}
We equip the set of positive integers with a well-ordering
$\leq_p$ by declaring $m\leq_{p}n$
if
$$ \kappa_p(m)<\kappa_p(n)$$
or
$$\kappa_p(m)=\kappa_p(n)\;\mbox{and}\;m/p^{\ord_p m}<n/p^{\ord_pn}$$
or
$$\kappa_p(m)=\kappa_p(n)\;\mbox{and}\;m/p^{\ord_p m}=n/p^{\ord_p n}\;\mbox{and} \;m\leq n.$$
In other words, to verify $m\leq _p n$, first compare $p$-cores of $m$ and $n$,
then in case of a tie compare numbers of $(p-1)$'s trailing the $p$-core, and in case of another tie compare numbers of trailing $0$'s. We call $\leq_p$ the {\em $p$-digital well-ordering}.
In the obvious way we derive order relations $<_p$, $\geq_{p}$ and $>_{p}$
from $\leq_{p}$. We remark that
$$\delta_p(m)<\delta_p(n)\Rightarrow m<_p n,\;\;\;
m\leq_p n\Rightarrow \delta_p(m)\leq \delta_p(n);$$
in other words, the function $\delta_p$ gives a reasonable if rough approximation
to the $p$-digital well-ordering.
\subsubsection{The function $\mu_q$}
Given $c\in {\mathbb{N}}$, let $\mu_q(c)$ be the unique element of the set
$$\{n\in {\mathbb{N}}\vert n\equiv p^i c\bmod{q-1}\;\mbox{for some}\;i\in {\mathbb{N}}\cup\{0\}\}
$$
minimal with respect to the $p$-digital well-ordering.
Note that $\mu_q(c)$ cannot be divisible by $p$.
Consequently $\mu_q(c)$ may also be characterized as the unique element of the set $O_q(c)$ minimal with respect to the $p$-digital well-ordering.
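In computational terms, $m\leq_p n$ is a lexicographic comparison of the triples $(\kappa_p(m),\,m/p^{\ord_p m},\,m)$ and $(\kappa_p(n),\,n/p^{\ord_p n},\,n)$, and $\mu_q(c)$ is obtained by minimizing this key over the residue classes $p^ic\bmod{q-1}$. The following Python sketch (ours; restricting the search to the interval $(0,q)$ is justified only by relation (\ref{equation:DigitMadness2}), which is proved later) recovers $\mu_{1024}(39)=39$, in agreement with the numerical example above:
\begin{verbatim}
def ord_p(n, p):
    e = 0
    while n % p == 0:
        n //= p; e += 1
    return e

def kappa(n, p):
    n //= p ** ord_p(n, p)
    m = n + 1
    return m // p ** ord_p(m, p) - 1

def key_p(n, p):
    """Sort key realizing the p-digital well-ordering <=_p."""
    return (kappa(n, p), n // p ** ord_p(n, p), n)

def mu(c, q, p):
    """mu_q(c); the search over (0, q) relies on max_c mu_q(c) < q."""
    lam, m = 0, q
    while m % p == 0:
        m //= p; lam += 1                  # assumes q = p**lam
    residues = {pow(p, i, q - 1) * c % (q - 1) for i in range(lam)}
    candidates = [n for n in range(1, q) if n % (q - 1) in residues]
    return min(candidates, key=lambda n: key_p(n, p))

print(mu(39, 1024, 2))                     # 39
\end{verbatim}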
\subsubsection{$p$-admissibility}
We say that a quadruple $(j,k,\ell,m)\in {\mathbb{N}}^4$
is {\em $p$-admissible} if
$$(m,p)=1,\;\;\;m=k+j(p^\ell-1),\;\;\;\left(\begin{array}{c}
k-1\\
j
\end{array}\right)\not\equiv 0\bmod{p}.$$
This is the key technical definition of the paper.
Let ${\mathcal{A}}_p$ denote the set of $p$-admissible quadruples.
\begin{Theorem}\label{Theorem:DigitMadness}
For all $(j,k,\ell,m)\in {\mathcal{A}}_p$, one has (i)
$k<_p m$, and moreover, (ii)
if \linebreak $\kappa_p(k)=\kappa_p(m)$, then
$j=(p^{\ord_p k}-1)/(p^\ell-1)$.
\end{Theorem}
\noindent We will prove this result in \S\ref{section:DigitMadness}. Note that the conclusion of part (ii) of the theorem
implies $\ord_pk>0$ and $\ell\vert \ord_p k$.
\begin{Theorem}\label{Theorem:DigitMadness2}
One has
\begin{equation}\label{equation:DigitMadness2}
\max_{c\in {\mathbb{N}}}\mu_q(c)<q,
\end{equation}
\begin{equation}\label{equation:DigitMadness2bis}
\begin{array}{cl}
&\displaystyle\{(\mu_q(c)+1)q^{i}-1\;\vert\; i\in {\mathbb{N}}\cup\{0\},\;c\in {\mathbb{N}}\}\\\\
=&\displaystyle\left\{c\in {\mathbb{N}}\left|(c,p)=1,\;
\kappa_p(c)=\min_{n\in O_q(c)}\kappa_p(n)\right.\right\}.
\end{array}
\end{equation}
\end{Theorem}
\noindent
We will prove this result in \S\ref{section:DigitMadness2}. We have phrased the result in a way emphasizing the $p$-digital well-ordering.
But perhaps it is not clear what the theorem means in the context of Theorem~\ref{Theorem:MainResult}.
The next result provides an explanation.
\begin{Proposition}\label{Proposition:DigitMadness2}
Theorem~\ref{Theorem:DigitMadness2} granted,
one has
\begin{equation}\label{equation:DigitMadness2quad}
C_q^0=\{\mu_q(c)\vert c\in {\mathbb{N}}\},
\end{equation}
\begin{equation}\label{equation:DigitMadness2ter}
C_q=\left\{c\in {\mathbb{N}}\left|(c,p)=1,\;
\kappa_p(c)=\kappa_p(\mu_q(c))\right.\right\}.
\end{equation}
\end{Proposition}
\proof
The definition of $C^0_q$ can be rewritten $$C^0_q=\left\{c\in {\mathbb{N}}\cap(0,q)\left|
(c,p)=1,\;\kappa_p(c)=\min_{n\in O_q(c)\cap(0,q)}\kappa_p(n)\right.\right\}.$$
Therefore relation (\ref{equation:DigitMadness2}) implies containment $\supset$ in (\ref{equation:DigitMadness2quad}) and moreover, supposing failure of equality in (\ref{equation:DigitMadness2quad}), there exist
$c,c'\in C_q^0$
such that
$$c=\mu_q(c)\neq c',\;\;\;\kappa_p(c)=\kappa_p(c').$$
But $c'=q^i(c+1)-1$ for some $i\in {\mathbb{N}}$ by (\ref{equation:DigitMadness2bis}), hence $c'\geq q$,
and hence $c'\not\in C_q^0$. This contradiction establishes equality in (\ref{equation:DigitMadness2quad})
and in turn containment $\subset$ in (\ref{equation:DigitMadness2ter}).
Finally, (\ref{equation:DigitMadness2bis})
and (\ref{equation:DigitMadness2quad}) imply equality in (\ref{equation:DigitMadness2ter}). \qed
The following is the promised reduction of the proof of Theorem~\ref{Theorem:MainResult}.
\begin{Proposition}\label{Proposition:Reduction}
If Theorems~\ref{Theorem:DigitMadness}
and \ref{Theorem:DigitMadness2} hold, then
Theorem~\ref{Theorem:MainResult} holds, too.
\end{Proposition}
\noindent Before turning to the proof, we pause to discuss the groups in play.
\subsection{Generators for $K[[X]]^\times$,
${\mathbf{D}}[K[[X]]^\times]$ and $\Gamma_{q,K}$}
\label{subsection:Convenient}
Equip $K[[X]]^\times$ with the topology for which the family
$\{1+X^nK[[X]]\vert n\in {\mathbb{N}}\}$ is a neighborhood base at the origin. Then the set
$$\{1+\alpha X^k\vert \alpha \in K^\times,\;k\in {\mathbb{N}}\}\cup K^\times$$
generates $K[[X]]^\times$ as a topological group.
Let ${\mathbb{F}}_p$ be the residue field of ${\mathbb{Z}}_p$.
Let
$E_p=E_p(X)\in {\mathbb{F}}_p[[X]]$
be the reduction modulo $p$ of the Artin-Hasse exponential
$$\exp\left(\sum_{i=0}^\infty
\frac{X^{p^i}}{p^i}\right)\in ({\mathbb{Q}}\cap{\mathbb{Z}}_p)[[X]],$$
noting that
$${\mathbf{D}}[E_p]=\sum_{i=0}^\infty X^{p^i}.$$
Since
$E_p(X)=1+X+O(X^2)$,
the set
$$\{E_p(\alpha X^k)\;\vert \;\alpha\in K^\times,\;k\in {\mathbb{N}},\;(k,p)=1\}\cup K[[X^p]]^\times$$
generates $K[[X]]^\times$ as a topological group.
For each $k\in {\mathbb{N}}$ such that $(k,p)=1$ and $\alpha\in K^\times$, put
$$W_{k,\alpha}=W_{k,\alpha}(X)=k^{-1}{\mathbf{D}}[E_p(\alpha X^k)]=\sum_{i=0}^\infty \alpha^{p^i}X^{kp^i}\in XK[[X]].
$$
Equip ${\mathbf{D}}[K[[X]]^\times]$ with the relative $X$-adic topology. The set
$$\{W_{k,\alpha}\vert k\in {\mathbb{N}},\;(k,p)=1,\;\alpha\in K^\times\}$$
generates ${\mathbf{D}}[K[[X]]^\times]$ as a topological group, cf.\ exact sequence (\ref{equation:Factoid}).
Equip $\Gamma_{q,K}$ with the relative $X$-adic topology.
Note that
$$
(X+\beta X^{q^\ell})^{-1}=\sum_{i=0}^\infty
(-1)^{i}\beta^{\frac{q^{\ell i}-1}{q^\ell-1}}X^{q^{\ell i}}\in \Gamma_{q,K}
$$
for all $\ell\in {\mathbb{N}}$ and $\beta \in K^\times$. The inverse operation here is of course understood in the functional rather than multiplicative sense.
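As a small sanity check of this inversion formula (not needed for the argument), the following Python sketch verifies it by truncated power series composition over ${\mathbb{F}}_3$, taking $p=q=3$, $\ell=1$ and $\beta=2$; the truncation degree and helper names are our own choices:
\begin{verbatim}
p, q, beta, N = 3, 3, 2, 200          # N = truncation degree

def trunc_mul(a, b):
    """Product of coefficient lists modulo p, truncated below degree N."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] = (out[i + j] + ai * bj) % p
    return out

def compose(outer, inner):
    """outer(inner(X)) modulo X^N; outer is given as {exponent: coefficient}."""
    result, power, last = [0] * N, [0] * N, 0
    power[0] = 1                      # inner ** 0
    for e in sorted(outer):
        for _ in range(e - last):
            power = trunc_mul(power, inner)
        last = e
        result = [(r + outer[e] * s) % p for r, s in zip(result, power)]
    return result

gamma = {1: 1, q: beta}               # X + beta * X^q
inv = [0] * N                         # sum_i (-1)^i beta^{(q^i-1)/(q-1)} X^{q^i}
i, qi = 0, 1
while qi < N:
    inv[qi] = ((-1) ** i * pow(beta, (qi - 1) // (q - 1), p)) % p
    i, qi = i + 1, qi * q

expected = [0] * N
expected[1] = 1
print(compose(gamma, inv) == expected)  # True: gamma(inv(X)) = X mod X^N
\end{verbatim}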
The set
$$\{X+\beta X^{q^\ell}\;\vert\;
\beta\in K^\times,\;\ell\in {\mathbb{N}}\}$$
generates $\Gamma_{q,K}$ as a topological group.
\subsection{Proof of the proposition}
It is enough to verify (\ref{equation:MainResult}) with $F$ and $\gamma$ ranging over sets of generators for the topological groups $K[[X]]^\times$
and $\Gamma_{q,K}$, respectively. The generators mentioned in the preceding paragraph are the convenient ones. So fix $\alpha,\beta\in K^\times$
and $k,\ell\in {\mathbb{N}}$ such that $(k,p)=1$. It will be enough to verify that
\begin{equation}\label{equation:Nuff}
\psi_{q}[M_{k,\alpha,\ell,\beta}]=
\left\{\begin{array}{rl}\displaystyle
\alpha X^{k+1}+\sum_{\ell\vert f\in {\mathbb{N}}} (-1)^{f/\ell}\alpha^{q^f}\beta^{\frac{q^f-1}{q^\ell-1}}X^{q^f(k+1)}&\mbox{if $k\in C_q$,}\\\\
0&\mbox{otherwise,}
\end{array}\right.
\end{equation}
where \begin{equation}\label{equation:MExpansion}
\begin{array}{cl}
&M_{k,\alpha,\ell,\beta}=M_{k,\alpha,\ell,\beta}(X)=k^{-1}{\mathbf{D}}[E_p(\alpha(X+\beta X^{q^\ell})^k)]\\\\
=&\displaystyle W_{k,\alpha}+\sum_{i=0}^\infty
\sum_{j=1}^\infty \left(\begin{array}{c}
p^ik-1\\ j
\end{array}\right)\alpha^{p^i}\beta^{j}X^{p^ik+j(q^\ell-1)}.\end{array}
\end{equation}
By Theorem~\ref{Theorem:DigitMadness}, many terms
on the right side of (\ref{equation:MExpansion}) vanish, and more precisely,
we can rewrite (\ref{equation:MExpansion}) as follows:
\begin{equation}\label{equation:Nuff2}
\begin{array}{rcl}
M_{k,\alpha,\ell,\beta}&\equiv&\displaystyle\alpha X^k+
\sum_{\begin{subarray}{c}
m\in O_q(k)\\
m>_pk
\end{subarray}}
\left(\sum_{\begin{subarray}{c}
i\in {\mathbb{N}}\cup\{0\}, j\in{\mathbb{N}}\\
(j,p^ik,\ord_p q^\ell,m)\in {\mathcal{A}}_p
\end{subarray}}
\left(\begin{array}{c}
p^ik-1\\
j
\end{array}\right)\alpha^{p^i}\beta^j\right)X^m\\\\
&&\hskip 5cm\bmod{X^pK[[X^p]]}.
\end{array}
\end{equation}
By Theorem~\ref{Theorem:DigitMadness2} as recast in the form of Proposition~\ref{Proposition:DigitMadness2}, along with formula (\ref{equation:Nuff2}) and the definitions, both sides of (\ref{equation:Nuff})
vanish unless $k\in C_q$. So now fix $c\in C_q^0$ and $g\in {\mathbb{N}}\cup\{0\}$
and put
$$k=(c+1)q^g-1\in C_q$$ for the rest of the proof of the proposition.
Also fix $f\in {\mathbb{N}}\cup\{0\}$ and put
$$m=q^f(k+1)-1=(c+1)q^{f+g}-1\in C_q$$ for the rest of the proof.
It is enough to evaluate the coefficient
of $X^m$ in (\ref{equation:Nuff2}). By part (ii) of Theorem~\ref{Theorem:DigitMadness}, there is no term in the sum on the right side of (\ref{equation:Nuff2}) of degree $m$ unless $\ell\vert f$, in
which case there is exactly one term, namely
$$\left(\begin{array}{c}
q^fk-1\\
\frac{q^f-1}{q^\ell-1}
\end{array}\right)\alpha^{q^f}\beta^{\frac{q^f-1}{q^\ell-1}}X^m,$$
and by the theorem of Lucas, the binomial coefficient mod $p$ evaluates
to $(-1)^{f/\ell}$.
Therefore (\ref{equation:Nuff}) does indeed hold.
\qed
\subsection{Remarks}
$\;$
(i) By formula (\ref{equation:Nuff2}), the $p$-digital well-ordering actually gives rise to a $\Gamma_{q,K}$-stable complete separated filtration of the quotient $K[[X]]^\times/K[[X^p]]^\times$ distinct from the $X$-adically induced one. Theorem~\ref{Theorem:MainResult} merely describes the structure of $K[[X]]^\times/K[[X^p]]^\times$ near the top of the ``$p$-digital filtration''.
(ii) Computer experimentation based on formula (\ref{equation:MExpansion}) was helpful in making the discoveries detailed in this paper. We believe that continuation of such experiments could lead to further progress, e.g., to the discovery of a minimal set of generators for $K[[X]]^\times$ as a topological right $\Gamma_{q,K}$-module.
\section{Proof of Theorem~\ref{Theorem:DigitMadness}}\label{section:DigitMadness}
\begin{Lemma}\label{Lemma:DigitGames}
Fix $(j,k,\ell,m)\in {\mathcal{A}}_p$. Put
$$e=\ord_p(m+1),\;\;\;f=\ord_p k,\;\;\;g=\ord_p(k/p^f+1).$$
Then there exists a unique integer $r$ such that
\begin{equation}\label{equation:DigitGames0}
0\leq r\leq e+\ell-1,\;\; r\equiv 0\bmod{\ell},\;\; j\equiv \frac{p^r-1}{p^\ell-1}\bmod{p^{e}},
\end{equation}
and moreover
\begin{equation}\label{equation:DigitGames1}
f+g\geq e,
\end{equation}
\begin{equation}\label{equation:DigitGames2}
\kappa_p(m)\geq \kappa_p(k).
\end{equation}
\end{Lemma}
\noindent This lemma is the key technical result of the paper.
\subsection{Completion of the proof of the theorem, granting the lemma}
Fix $(j,k,\ell,m)\in {\mathcal{A}}_p$.
Let $e,f,g,r$ be as defined in Lemma~\ref{Lemma:DigitGames}.
Since the number of digits in the minimal base $p$ expansion of $k$ cannot exceed the number of digits in the minimal base $p$ expansion of $m$, one has
\begin{equation}\label{equation:DigitGames3}
\delta_p(k)+f+g\leq \delta_p(m)+e.
\end{equation}
Combining (\ref{equation:DigitGames1})
and (\ref{equation:DigitGames3}), one has
\begin{equation}\label{equation:DigitGames4}
\delta_p(k)=\delta_p(m)\Rightarrow f+g=e.
\end{equation}
Now in general one has
$$m+1=(\kappa_p(m)+1)p^e,\;\;\;
k+p^f=(\kappa_p(k)+1)p^{f+g},$$
and hence
$$
\kappa_p(k)=\kappa_p(m)\Rightarrow
\left(j=\frac{p^{f}-1}{p^\ell-1}\;\mbox{and}\;e>g\right)
$$
via
(\ref{equation:DigitGames4}).
Theorem~\ref{Theorem:DigitMadness} now follows via (\ref{equation:DigitGames2})
and the definition of the $p$-digital well-ordering.
\qed
\subsection{Proof of Lemma~\ref{Lemma:DigitGames}}
Since $e$ is the number of trailing $(p-1)$'s in the minimal base $p$ expansion of $m$, the lemma is trivial in the case $e=0$.
We therefore assume that $e>0$ for the rest of the proof.
Let
$$m=[m_1,\dots,m_t]_p\;\;\;(t>0,\;m_1>0,\;\;m_t>0)$$
be the minimal base $p$ expansion of $m$. For convenience, put
$$d=\delta_p(m)\geq 0,\;\;\;m_\nu=0\;\mbox{for $\nu<1$}.$$
Then
$$t=e+d,\;\;\;m_{d+1}=\cdots=m_{d+e}=p-1,\;\;\;m_{d}< p-1.$$
By hypothesis
$$\left(\begin{array}{c}
k-1\\
j
\end{array}\right)=\left(\begin{array}{c}
m-jp^\ell-1+j\\
m-jp^\ell-1
\end{array}\right)>0,$$
hence
$$m>jp^\ell,$$
and hence the number of digits in the minimal base $p$ expansion of $jp^\ell$ does not exceed that of $m$. Accordingly,
$$t> \ell$$
and one has a base $p$ expansion
for $j$ of the form
$$j=[j_1,\dots,j_{t-\ell}]_p,$$
which perhaps is not minimal.
For convenience, put
$$j_\nu=0\;\mbox{for $\nu<1$ and also for $\nu>t-\ell$}.$$
This state of affairs is summarized by the ``snapshot''
$$m=[m_1,\dots,m_t]=[m_{1},\dots,m_{d},\underbrace{p-1,\dots,p-1}_e]_p,\;\;\kappa_p(m)=[m_{1},\dots,m_{d}]_p,$$
$$jp^\ell=[j_1,\dots,j_t]_p=[j_1,\dots,j_{t-\ell},\underbrace{0,\dots,0}_\ell]_p
,$$
which the reader should keep in mind as we proceed.
We are ready now to prove the existence and uniqueness of $r$. One has
$$m-jp^\ell-1=k-1-j=[m_1',\dots,m_d',p-1-j_{d+1},\dots,
p-1-j_{t-1},p-2]_p,$$
where the digits $m'_1,\dots,m'_d$ are defined by the equation
\begin{equation}\label{equation:Swivel}
\kappa_p(m)-[j_{1},\dots,j_{d}]_p=[m'_1,\dots,m'_d]_p.
\end{equation}
By hypothesis and the theorem of Lucas, the addition of $k-1-j$ and $j$ in base $p$ requires no ``carrying'', and hence
\begin{equation}\label{equation:BigDigit}
k-1=
\left\{\begin{array}{ll}
\;[m_1'+j_{1-\ell},\dots,m_d'+j_{d-\ell},\\
\;p-1-j_{d+1}+j_{d+1-\ell},\dots,
p-1-j_{d+e-1}+j_{d+e-1-\ell},p-2+j_{d+e-\ell}]_p.
\end{array}\right.
\end{equation}
From the system of inequalities for the last $e+\ell$ digits of the base $p$ expansion of $jp^\ell$
implicit in (\ref{equation:BigDigit}), it follows that there exists $r_0\in {\mathbb{N}}\cup\{0\}$ such that
\begin{equation}\label{equation:BigDigitBis}
jp^\ell=[j_{1-\ell},\dots,j_{d-\ell},
\overbrace{0,\dots,0,\underbrace{\underbrace{1,0,\dots,0}_{\ell},\dots,\underbrace{1,0,\dots,0}_{\ell}}_{\mbox{\tiny $r_0$ blocks}},0}^{e+\ell}]_p.
\end{equation}
Therefore $r=r_0\ell$ has the required properties (\ref{equation:DigitGames0}). Uniqueness of $r$ is clear. For later use, note the relation
\begin{equation}\label{equation:HeadScratcher}
r\geq e\Leftrightarrow [j_{d-\ell+1},\dots,j_{d}]_p\neq 0\Rightarrow [j_{d-\ell+1},\dots,j_{d}]_p=p^{r-e},
\end{equation}
which is easy to see from the point of view adopted here to prove (\ref{equation:DigitGames0}).
By (\ref{equation:DigitGames0}) one has
\begin{equation}\label{equation:DigitGames2.2}
k+p^r-(m+1)+j' p^e (p^\ell-1)=0\;\;\;\mbox{for some $j'\in {\mathbb{N}}\cup\{0\}$,}
\end{equation}
and hence one has
\begin{equation}\label{equation:DigitGames2.5}
r\geq \min(f,e),\;\;\;f\geq \min(r,e).
\end{equation}
This proves (\ref{equation:DigitGames1}),
since either one has $f\geq e$, in which case (\ref{equation:DigitGames1}) holds trivially,
or else $f<e$, in which case $r=f$ by (\ref{equation:DigitGames2.5}), and hence
(\ref{equation:DigitGames1}) holds by (\ref{equation:DigitGames2.2}).
Put
$$k-1=[k'_1,\dots,k'_{d+e}]_p,\;\;\;\;{\mathbf{1}}_{r\geq e}=\left\{\begin{array}{ll}
1&\mbox{if $r\geq e$,}\\
0&\mbox{if $r<e$.}
\end{array}\right.
$$
Comparing (\ref{equation:BigDigit}) and (\ref{equation:BigDigitBis}), we see that
the digits $k'_{d+1},\dots,k'_{d+e}$ are all $(p-1)$'s with at most one exception,
and the exceptional digit if it exists is a $p-2$.
Further, one has
$$k'_{d+1}=\dots=k'_{d+e}=p-1\Leftrightarrow f\geq e\Leftrightarrow {\mathbf{1}}_{r\geq e}=1$$
by (\ref{equation:DigitGames2.5}).
Therefore one has
$$\kappa_p(k)\leq[k'_1,\dots,k'_d]+{\mathbf{1}}_{r\geq e}.$$
Finally, via (\ref{equation:Swivel}), (\ref{equation:BigDigit}) and (\ref{equation:HeadScratcher}), it follows that
$$
\begin{array}{rcl}
\kappa_p(k)&\leq &[m_1'+j_{1-\ell},\dots,m_d'+j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\
&=&\kappa_p(m)-[j_1,\dots,j_d]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\
&=&\kappa_p(m)-[j_{1-\ell},\dots,j_d]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p+{\mathbf{1}}_{r\geq e}\\
&=&\kappa_p(m)-[j_{d-\ell+1},\dots,j_{d}]_p+{\mathbf{1}}_{r\geq e}
\\
&&-[j_{1-\ell},\dots,j_{d-\ell},\underbrace{0,\dots,0}_\ell]_p+[j_{1-\ell},\dots,j_{d-\ell}]_p\\
&=&\kappa_p(m)-{\mathbf{1}}_{r\geq e}(p^{r-e}-1)-(p^\ell-1)[j_{1-\ell},\dots,j_{d-\ell}]_p\\
&\leq &\kappa_p(m).
\end{array}
$$
Thus (\ref{equation:DigitGames2}) holds and the proof of the lemma is complete.
\qed
\section{Proof of Theorem~\ref{Theorem:DigitMadness2}}\label{section:DigitMadness2}
\subsection{Further digital apparatus}
Put $\lambda=\ord_p q$.
For each $c\in {\mathbb{N}}$, let
$$\bracket{c}_q=\min\{n\in {\mathbb{N}}\vert n\equiv c\bmod{q-1}\},\;\;\;\tau_p(c)=c/p^{\ord_pc}.
$$
Note that
$$0<\bracket{c}_q<q,\;\;\;
\bracket{c}_q=\bracket{c'}_q\Leftrightarrow c\equiv c'\bmod{q-1}$$
for all $c,c'\in {\mathbb{N}}$.
Given $c\in {\mathbb{N}}$, and writing $\bracket{c}_q=[c_1,\dots,c_\lambda]_p$,
note that
$$\{c_1,\dots,c_\lambda\}\neq \{0\},\;\;
\bracket{pc}_q=[c_2,\dots,c_\lambda,c_1]_p,$$
$$\langle c\rangle_q\geq \tau_p(\bracket{c}_q)=[c_1,\dots,c_{\max\{i\vert c_i\neq 0\}}]_p\geq \kappa_p(\langle c\rangle_q).$$
\begin{Lemma}\label{Lemma:Necklace2}
$\langle p^ic\rangle_q\leq
p^i-1\Rightarrow \tau_p(\langle c\rangle_q)\leq\langle p^ic\rangle_q$ for $c\in {\mathbb{N}}$ and $i\in {\mathbb{N}}\cap(0,\lambda)$.
\end{Lemma}
\begin{Lemma}\label{Lemma:Necklace3}
$\displaystyle
\min_{i=0}^{\lambda-1}\tau_p(\langle p^ic+1\rangle_q)=1+\min_{i=0}^{\lambda-1}\kappa_p(\langle p^ic\rangle_q)=1+\kappa_p(\mu_q(c))$ for $c\in {\mathbb{N}}$.
\end{Lemma}
\begin{Lemma}\label{Lemma:Necklace4}
$i\not\equiv 0\bmod{\lambda}\Rightarrow p^i(\mu_q(c)+1)-1\not\in O_q(c)$ for $i,c\in {\mathbb{N}}$.
\end{Lemma}
\subsection{Completion of the proof of the theorem, granting the lemmas}
Relation (\ref{equation:DigitMadness2}) holds
by Lemma~\ref{Lemma:Necklace3}.
Relation (\ref{equation:DigitMadness2bis}) holds
by Lemma~\ref{Lemma:Necklace4}.
\qed
\subsection{Proof of Lemma~\ref{Lemma:Necklace2}} Write $\langle c\rangle_q=[c_1,\dots,c_\lambda]_p$.
By hypothesis
$$\langle p^ic\rangle_q=
[\underbrace{0,\dots,0}_{\lambda-i},c_1,\dots,c_i]_p,\;\;\;
\langle c\rangle_q=[c_1,\dots,c_i,\underbrace{0,\dots,0}_{\lambda-i}]_p,
$$
and hence $\tau_p(\langle c\rangle_q)\leq \langle p^ic\rangle_q$.
\qed
\subsection{Proof of Lemma~\ref{Lemma:Necklace3}}
Since
$$\mu_q(c)=(\kappa_p(\mu_q(c))+1)p^g-1\in O_q(c),$$
for some $g\in {\mathbb{N}}\cup\{0\}$,
one has
$$\kappa_p(\mu_q(c))+1\geq
\min_{i=0}^{\lambda-1}
\min_{j=0}^{\lambda-1}
\langle p^i(p^jc+1)\rangle_q.$$
One has
$$
\tau_p(\langle n+1\rangle_q)\geq 1+\kappa_p(\langle n\rangle_q)
$$
for all $n\in {\mathbb{N}}$, as can be verified by a somewhat tedious case analysis
which we omit. Clearly, the inequalities $\geq$ hold in the statement we are trying to prove.
Therefore it will be enough to prove that
$$
\min_{i=0}^{\lambda-1}
\min_{j=0}^{\lambda-1}
\langle p^i(p^jc+1)\rangle_q\geq \min_{j=0}^{\lambda-1} \tau_p(\langle p^jc+1\rangle_q).
$$
Fix $i=1,\dots,\lambda-1$ and $j=0,\dots,\lambda-1$.
It will be enough just to prove that
\begin{equation}\label{equation:AlmostLastNuff}
\langle p^i(p^jc+1)\rangle_q
<\tau_p(\langle p^jc+1\rangle_q)
\Rightarrow \langle p^i(p^jc+1)\rangle_q
\geq \tau_p(\langle p^{i+j}c+1\rangle_q).
\end{equation}
But by the preceding lemma, under the hypothesis of (\ref{equation:AlmostLastNuff}), one has
$$p^i-1<\langle p^i(p^jc+1)\rangle_q$$
and hence
$$\langle p^i(p^jc+1)\rangle_q=
\langle p^{i+j}c+1\rangle_q+p^i-1\geq \tau_p(
\langle p^{i+j}c+1\rangle_q).$$
Thus (\ref{equation:AlmostLastNuff}) is proved, and with
it the lemma.
\qed
\subsection{Proof of Lemma~\ref{Lemma:Necklace4}}
We may assume without loss of generality that \linebreak $0<i<\lambda$ and $c=\mu_q(c)$.
By the preceding lemma $c<q$.
Write
$c=[c_1,\dots,c_\lambda]_p$
and define $c_k$ for all $k$ by enforcing the rule
$c_{k+\lambda}=c_k$.
Supposing that the desired conclusion does not hold, one has
$$p^{\lambda-i}[c_1,\dots,c_\lambda,\underbrace{p-1,\dots,p-1}_{i}]_p\equiv [c_1,\dots,c_\lambda,\underbrace{p-1,\dots,p-1}_{i},\underbrace{0,\dots,0}_{\lambda-i}]_p$$
$$
\equiv
[c_1,\dots,c_\lambda]_p+[\underbrace{p-1,\dots,p-1}_{i},\underbrace{0,\dots,0}_{\lambda-i}]_p
$$
$$
\equiv
[c_1,\dots,c_\lambda]_p-[\underbrace{0,\dots,0}_{i},\underbrace{p-1,\dots,p-1}_{\lambda-i}]_p$$
$$\equiv [c_{1+m},\dots,c_{\lambda+m}]_p=\bracket{p^mc}_q$$
for some integer $m$, where all the congruences are modulo $q-1$. It is impossible to have
$c_1=\cdots=c_i=0$ since this would force
the frequency of occurrence of the digit $0$
to differ in the digit strings $c_1,\dots,c_\lambda$
and $c_{1+m},\dots,c_{\lambda+m}$, which after all are just cyclic permutations one of the other.
Similarly we can rule out the possibility $c_{i+1}=\cdots=c_\lambda=p-1$.
Thus the base $p$ expansion of $c$ takes the form
$$c=[\underbrace{0,\dots,0}_{\alpha},
\underbrace{\bullet,\dots,\bullet}_{\beta},
\underbrace{p-1,\dots,p-1}_{\gamma}
]_p,$$
where
$$\alpha< i,\;\;\beta>0,\;\;\;\gamma<\lambda-i,\;\;\;
\alpha+\beta+\gamma=\lambda,$$
and the bullets hold the place of a digit string not beginning with a $0$ and not ending with a $p-1$.
Then one has
$$\begin{array}{rcl}
1+\kappa_p(c)&=&(c+1)/p^\gamma\\
&>&(c+1-p^{\lambda-i})/p^\gamma+1\;\;(\mbox{strict inequality!})\\
&\geq &\tau_p(c+1-p^{\lambda-i})+1\\
&=&\tau_p(\langle p^mc\rangle_q)+1\\
&\geq &\kappa_p(\langle p^mc\rangle_q)+1
\end{array}
$$
in contradiction to the preceding lemma. This contradiction finishes the proof. \qed
\end{document}
\begin{document}
\renewcommand{\vec}[1]{\ensuremath{\boldsymbol{#1}}}
\newcommand{\bra}[1]{\ensuremath{\langle #1|}}
\newcommand{\ket}[1]{\ensuremath{| #1\rangle}}
\title{Optimal control of entangling operations for trapped ion quantum computing}
\author{V. Nebendahl$^{1}$}
\author{H. H{\"a}ffner$^{1,2}$}
\author{C. F. Roos$^{1,2}$}
\email{[email protected]}
\affiliation{$^1$Institut f\"ur Quantenoptik und
Quanteninformation, \"Osterreichische Akademie der Wissenschaften,
Otto-Hittmair-Platz 1, A-6020 Innsbruck, Austria}
\affiliation{$^2$Institut f\"ur Experimentalphysik, Universit\"at
Innsbruck, Technikerstr.~25, A-6020 Innsbruck, Austria}
\date{\today}
\begin{abstract}
Optimal control techniques are applied for the decomposition of
unitary quantum operations into a sequence of single-qubit gates
and entangling operations. To this end, we modify a
gradient-ascent algorithm developed for systems of coupled nuclear
spins in molecules to make it suitable for trapped ion quantum
computing. We decompose unitary operations into entangling gates
that are based on a nonlinear collective spin operator and
complemented by global spin flip and local light shift gates.
Among others, we provide explicit decompositions of controlled-NOT
and Toffoli gates, and a simple quantum error correction protocol.
\end{abstract}
\pacs{03.67-a, 32.80.Qk, 37.10.Ty}
\maketitle
\section{\label{sec:intro}Introduction}
Choosing the laws of quantum physics as the physical basis for
constructing models of computation \cite{Deutsch:1985} allows for
solving certain computational problems more efficiently than in
models based on classical physics \cite{Nielsen2000}. In the
quantum circuit model, information is encoded in quantum bits
(qubits) and manipulated by applying unitary operations acting on
the joint state space of the qubits. It has been shown that
arbitrary unitary operations can be broken down into sequences of
elementary gate operations, consisting of single-qubit operations
and entangling operations acting on pairs of qubits
\cite{Barenco1995}. It is a non-trivial task to find optimum
decompositions of unitary operations into a minimum number of
elementary gates. Often, the controlled-NOT (CNOT) gate operation
is chosen as the entangling operation. However, it has been shown
that almost any entangling operation can be used for this purpose
as well \cite{Lloyd1995}.
In experiments processing quantum information, the available
physical processes determine the choice of the entangling gate. In
the case of trapped ions manipulated by coherent laser light
\cite{Blatt:2008}, a qubit is realized by encoding quantum
information in a pair of long-lived internal states, consisting of
hyperfine or Zeeman ground states, or in a combination of a ground
state and a metastable state with an energy difference of a few
electron volts. Gates acting on a single qubit are achieved by
lasers coupling the qubit states by either dipole-forbidden
single-photon transitions or Raman transitions. Pairs of ions are
entangled by qubit-qubit interactions mediated by a coupling to a
vibrational mode of the ions' motion in the trap.
To achieve a gate on either a specific ion or a pair of ions
within an ion string, two strategies are pursued:
\begin{enumerate}
\item The laser beam is tightly focussed so that it interacts only
with a single ion at a time \cite{Naegerl:1999}. Then, a CNOT gate
or an equivalent entangling gate between an arbitrary pair of ions
is achieved by a sequence of pulses with the laser addressed to
either one or the other ion of the pair \cite{Schmidt-Kaler2003a}.
\item Alternatively, a laser with wider beam diameter is employed
in combination with ions held in a segmented ion trap. In this
approach, trap potentials are dynamically transformed to enable
the transport of one or two ions into the interaction region with
the laser beam. To entangle a pair of ions $i,j$, a bichromatic
laser field is used to realize either a conditional phase gate
\cite{Leibfried2003a} induced by an effective Hamiltonian
$H_{PG}\propto\sigma_z^{(i)}\sigma_z^{(j)}$ or a
M{\o}lmer-S{\o}rensen gate \cite{Sackett2000} induced by
$H_{MS}\propto\sigma_x^{(i)}\sigma_x^{(j)}$ where $\sigma_n^{(k)}$
denotes a Pauli spin operator $\mathbf{\boldsymbol{\sigma}\cdot
n}$ acting on the $k$'th ion.
\end{enumerate}
Recently, the latter interaction has been used to entangle a pair
of ions with fidelities of up to 99.3(1)\%, with a coupling
mediated by the longitudinal center-of-mass mode of the ion string
\cite{Benhelm:2008b}. Employing the same kind of interaction to
$N>2$ ions would realize a unitary operation
$U_{MS}^{X}(\theta)=\exp(-i\frac{\theta}{4}{S_x}^2)$ with
$S_x=\sum_{k=1}^N\sigma_x^{(k)}$, yielding an equal pairwise
coupling between all ions in the string. In addition, using a
single frequency resonant with the qubit transition instead of a
bichromatic field coupling to the transitions' motional sidebands,
the same laser beam could induce spin flips on all ions described
by the unitary $U_X(\theta)=\exp(-i\frac{\theta}{2}{S_x})$. This
opens up the interesting prospect of performing arbitrary unitary
operations by complementing these unitaries by single-qubit phase
shift gates
$U_z^{(k)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_z^{(k)})$ that
could be induced by a tightly focussed off-resonant laser beam
interacting only with the $k$'th ion. In this approach, no
interferometric stability is required between the optical path
lengths of the focussed and the wide beam. Moreover, the use of light shift gates facilitates addressing a single qubit in a string of ion qubits without introducing unwanted state transformations on the neighboring ions since the
phase shift $\theta$ is proportional to the intensity of the laser
field as compared to single-qubit spin flip gates where the
rotation angle $\theta$ is proportional to the field amplitude \cite{OptControlQGates_Footnote0}. In
this way, unitary operations could be realized on a small group of
ions without the need to split and rearrange the ion string
in-between \cite{Rowe2002} and with more modest requirements
regarding the spatial mode profile of the tightly focussed laser
beam.
In this paper, we will use optimal control techniques to find
decompositions of $N$-qubit gates into unitary operations induced
by the set of Hamiltonians
\begin{equation}
\label{eq:setofH}
{\cal
S}=\{{S_x}^2,S_x,\sigma_z^{(1)},\sigma_z^{(2)},\ldots,
\sigma_z^{(N)}\}\,.
\end{equation}
In contrast to similar applications of optimal control in nuclear
magnetic resonance (NMR) experiments \cite{Khaneja2005} and in
laser-induced femto-chemistry \cite{Tesch:2002,Palao:2003}, we are
interested in the case where only one of the Hamiltonians is
applied at a time. As a consequence, the optimal control algorithm
is required to find decompositions of a given quantum gate by
sequences of laser pulses that interact either with all ions in
the same way or with an individual ion.
The goal of this paper is to find gate decompositions of interest for current state-of-the-art ion trap experiments that are more efficient than decompositions based on gates acting on only one or two qubits at a time.
\section{Basis set of operations}
The mean-field interaction ${S_x}^2$ acting on a string of $N$
ions entangles each ion qubit with each other ion qubit. Thus, on
the one hand, application of this Hamiltonian endows us with the
power to entangle arbitrary pairs of qubits. On the other hand, it
induces entangling interactions that are not always desired. The
situation we are encountering is somewhat similar to the one in
NMR quantum computing where the system Hamiltonian $H_{\rm sys}$
consisting of spin-spin interactions and chemical shifts leads to
a system dynamics that needs to be controlled by radio-frequency
fields interacting with single spins at a time. For this purpose,
techniques have been developed for selectively switching off
certain spin-spin interactions by refocussing techniques
\cite{Vandersypen2004}. In the scenario we envision for the ion
trap system, the role of the radio-frequency fields is taken over
by laser pulses inducing single-qubit gate operations that are
intermittently applied to particular ions.
Refocussing techniques are also applicable to the trapped ion
system. For example, to entangle qubits 1 and 2 in a system of
three qubits, the entangling pulse could be split into two parts
and interleaved with a refocussing pulse to obtain the sequence
$U=U_{MS}^X(\pi/4)U_z^{(3)}(\pi)U_{MS}^X(\pi/4)$. The light shift
pulse on the third qubit flips its phase and effectively reverses
the entangling interactions with qubits 1 and 2. Substituting the
last pulse by its inverse, results in a sequence
$U=U_{MS}^X(-\pi/4)U_z^{(3)}(\pi)U_{MS}^X(\pi/4)$ that entangles
qubits 1 and 2 with qubit 3 without inducing entangling
interactions between 1 and 2.
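Both statements are easily verified numerically. The following Python/NumPy sketch (illustrative only; the qubit ordering, the use of SciPy's matrix exponential and the comparison up to a global phase are our choices) checks that the first sequence acts as an $x$-$x$ entangling gate on qubits 1 and 2 combined with a local $z$ rotation on qubit 3, while the second sequence couples qubit 3 to qubits 1 and 2 without coupling qubits 1 and 2 to each other:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, k):
    """Embed a single-qubit operator on qubit k (k = 0, 1, 2) of three."""
    mats = [I2, I2, I2]
    mats[k] = single
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

Sx = op(sx, 0) + op(sx, 1) + op(sx, 2)
U_MS = lambda th: expm(-1j * th / 4 * Sx @ Sx)       # U_MS^X(theta)
U_z3 = lambda th: expm(-1j * th / 2 * op(sz, 2))     # U_z^(3)(theta)

def equal_up_to_phase(A, B):
    return np.isclose(abs(np.trace(A.conj().T @ B)), 8)   # 8 = 2**3

# refocussing sequence: entangles qubits 1 and 2 only
seq1 = U_MS(np.pi / 4) @ U_z3(np.pi) @ U_MS(np.pi / 4)
eff1 = expm(-1j * np.pi / 4 * op(sx, 0) @ op(sx, 1)) @ U_z3(np.pi)
print(equal_up_to_phase(seq1, eff1))                 # True

# inverted last pulse: entangles qubit 3 with qubits 1 and 2 only
seq2 = U_MS(-np.pi / 4) @ U_z3(np.pi) @ U_MS(np.pi / 4)
eff2 = expm(1j * np.pi / 4 * (op(sx, 0) @ op(sx, 2)
                              + op(sx, 1) @ op(sx, 2))) @ U_z3(np.pi)
print(equal_up_to_phase(seq2, eff2))                 # True
\end{verbatim}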
The set of Hamiltonians (\ref{eq:setofH})
is sufficient for generating arbitrary unitary operations on a
string of $N$ qubits as can be shown by an explicit construction:
A single-qubit x-rotation acting on qubit $k$ is generated by the
spin echo sequence
$U_x^{(k)}(\theta)=U_z^{(k)}(-\pi)\,U_X(-\theta/2)\,U_z^{(k)}(\pi)\,U_X(\theta/2)$.
Arbitrary single-qubit gates can then be performed when combining
this operation with single-qubit phase shift gates
$U_z^{(k)}(\theta)$. Similarly, an operation $U_{MS}^{X-x_k}$ that
entangles all ions with each other except for qubit $k$ is
produced from the $N$-qubit entangling gate $U_{MS}^X(\theta)$ by
the pulse sequence
$U_{MS}^{X-x_k}=U_z^{(k)}(-\pi)\,U_{MS}^X(\theta/2)\,U_z^{(k)}(\pi)\,U_{MS}^X(\theta/2)$.
Substituting $U_{MS}^X$ in the above sequence by $U_{MS}^{X-x_k}$,
a similar sequence is obtained that entangles all qubits except
for two, and by induction, a two-qubit entangling gate is
constructed between any pair of qubits $m$ and $n$ which together
with arbitrary single qubit gates forms a universal set of gates.
While this construction shows that in principle arbitrary unitary
operations are realizable by pulse sequences generated from the
set $\cal S$, it is of no practical use. For the implementation of
$N$-qubit gate operations, we are interested in finding pulse
sequences that minimize gate errors occurring in ion trap quantum
computing. Therefore, we will be searching for sequences having
either a minimum number of (entangling) pulses or a minimum length
in terms of the sum of pulse angles $\theta_n$ of the individual
pulses.
\section{Optimal control of unitary transformations}
Optimal control techniques have been applied to the problem of
generating specific unitary transformations
\cite{Palao:2003,Khaneja2005,Tesch:2002,Grace:2007} with
applications to systems as different as NMR, neutral atoms in
optical lattices, Josephson junction qubits and trapped ions
\cite{Khaneja2005,Montangero:2007,DeChiara:2008,Timoney:2008}. To
find decompositions of entangling ion trap gates in terms of
pulses generated by Hamiltonians $H_k$ from $\cal S$, we modify a
gradient-ascent algorithm that was developed by Khaneja et al.
\cite{Khaneja2005} in the context of NMR experiments. In their
approach, a unitary transformation $U_{\rm target}$ was searched
for
by constructing a unitary operation
\begin{equation}
U=\prod_{m=1}^M U_m=\prod_{m=1}^M e^{\left(-\frac{i}{\hbar}\Delta
t\left(H_{\rm sys}+\sum_{k=1}^K u_{km}H_k\right)\right)},
\label{eq:UKhaneja}
\end{equation}
where $H_k\in {\cal S}$, that maximized the performance function
$\Phi(\{u_{km}\})=|\mbox{Tr}(U^\dagger U_{\rm target})|^2$. The
authors noted that for small time increments $\Delta t$ the
calculation of the gradient \cite{OptControlQGates_Footnote1}
\begin{equation}
\frac{\partial\Phi}{\partial
u_{km}}\approx-2\mbox{Re}(\mbox{Tr}(\frac{i\Delta t}{\hbar}
W_mH_kV_m)\mbox{Tr}(W_mV_m)^\ast), \label{eq:gradient}
\end{equation}
with $W_m=U_{\rm target}^\dagger U_M\cdots U_{m+1}$ and
$V_m=U_m\cdots U_1$, could be carried out efficiently, requiring
only about $3M$ matrix multiplications and about $KM$ calculations
of traces. Then, a gradient-based algorithm was devised to
increase the value of the performance function by modifying the
control amplitudes
\begin{equation}
u_{km}\rightarrow u_{km}+\epsilon\frac{\partial\Phi}{\partial
u_{km}}\label{eq:parameterupdate}
\end{equation}
using a suitable step size $\epsilon$. Repeated application of the
gradient calculation followed by updating the control amplitudes
maximized the performance function and resulted in a unitary
transformation realizing the target operation $U_{\rm target}$.
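
To make this bookkeeping concrete, the following sketch implements the
construction of Eqs.~(\ref{eq:UKhaneja})--(\ref{eq:parameterupdate})
for a toy two-qubit problem (a $ZZ$ drift with local $x$ controls and
$\hbar=1$); the model, target and step size are illustrative choices of
ours and not those of Ref.~\cite{Khaneja2005}.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def expm_herm(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

H_sys = np.kron(sz, sz)                        # drift Hamiltonian (toy stand-in)
H_ctrl = [np.kron(sx, I2), np.kron(I2, sx)]    # control Hamiltonians H_k
K, M, dt, eps = 2, 40, 0.05, 1e-3

rng = np.random.default_rng(1)
u_true = rng.uniform(-1, 1, (K, M))            # target generated by the same controls

def total_U(u):
    U = np.eye(4, dtype=complex)
    for m in range(M):
        U = expm_herm(H_sys + sum(u[k, m] * H_ctrl[k] for k in range(K)), dt) @ U
    return U

U_target = total_U(u_true)

def phi_and_grad(u):
    Us = [expm_herm(H_sys + sum(u[k, m] * H_ctrl[k] for k in range(K)), dt)
          for m in range(M)]
    V, acc = [], np.eye(4, dtype=complex)
    for m in range(M):                  # V[m]: product of pulses up to and including m
        acc = Us[m] @ acc
        V.append(acc)
    W, acc = [None] * M, U_target.conj().T   # W[m]: U_target^dag times all later pulses
    for m in reversed(range(M)):
        W[m] = acc
        acc = acc @ Us[m]
    c = np.trace(U_target.conj().T @ V[-1])  # equals Tr(W_m V_m) for every m
    grad = np.empty((K, M))
    for m in range(M):
        for k in range(K):
            grad[k, m] = -2 * np.real(np.trace(1j * dt * W[m] @ H_ctrl[k] @ V[m])
                                      * np.conj(c))
    return abs(c) ** 2, grad

u = np.zeros((K, M))
for it in range(500):
    Phi, grad = phi_and_grad(u)
    u += eps * grad                            # control-amplitude update rule
print(Phi / 16)    # normalized performance; approaches 1 only with enough
                   # iterations and a suitably tuned step size eps
\end{verbatim}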
There are a few important differences between coupled spin systems
in NMR and in laser-manipulated strings of trapped ions. In the
NMR context of ref.~\cite{Khaneja2005}, the product of unitaries
in (\ref{eq:UKhaneja}) arises from a discretization of the time
variable that required the approximation of a continuous control
amplitude $u_k(t)$ by a piecewise constant function with values
$u_{km}$. Restrictions on the values of $u_k(t)$ are only due to
technical requirements like amplitude or bandwidth limitations of
the radio-frequency equipment used for producing the control
fields. In ion trap experiments, however, limitations exist for
the values that the functions $u_k(t)$ can take on because the
simultaneous application of different control Hamiltonians is
either technically very challenging or physically impossible. In
the former case, the simultaneous application of single qubit
phase shift gates to more than a single ion would require control
of the spatial profile of a laser beam inducing phase shifts by
the ac-Stark effect. More importantly, in the latter case, the
entangling interaction ${S_x}^2$ is produced by an effective
Hamiltonian that precludes the simultaneous application of
single-qubit phase shifts. As a consequence, the control functions
need to satisfy the condition $u_k(t)u_l(t)=0$ for all $k\neq l$ at
all times $t$. Therefore, the unitary transformation is naturally
decomposed into a product of unitaries and Eq.~(\ref{eq:UKhaneja})
is replaced by
\begin{equation}
U=\prod_{m=1}^M\exp\left(-i\theta_m H_{k_m}\right),
\label{eq:UNebendahl}
\end{equation}
where $\theta_m=\Delta t/\hbar\,u_m$ and $k_m$ labels the
Hamiltonian from set $\cal S$ that is to be used for the {\sl m}th
pulse. As the system is stationary in the absence of laser
interactions, the system Hamiltonian $H_{\rm sys}$ was omitted in
Eq.~(\ref{eq:UNebendahl}).
Using the gradient-ascent method for updating the control
amplitudes $\theta_m$ increases the performance function but
leaves the pulse ordering defined via the indices $k_m$ unchanged.
Thus, for the optimum pulse order to be included in the
configuration space, the number of pulses $M$ needs to be much
larger than the expected minimum number of pulses finally
realizing the target operation $U_{\rm target}$. Therefore, the
search algorithm has to be complemented by a penalty function like $\Phi_p=\sum_{m=1}^M |\theta_m|^\gamma$, $0<\gamma<1$, that tries to eliminate
short pulses that do not contribute much to increasing the
performance function $\Phi$. The functional form of $\Phi_p$
ensures that a change in the length $\theta_m$ of the $m$th pulse
by $d\theta_m$ penalizes already short pulses much more than
longer ones since $d\Phi_p={\rm
sign}{(\theta_m)}\gamma|\theta_m|^{\gamma-1}\, d\theta_m$. In the optimization routine used for finding the pulse decompositions presented in section \ref{sec_examples}, the exponent $\gamma$ ranged from 0.5 to 0.8.
Now, the performance function $\Phi$ is replaced by
$\hat{\Phi}=\Phi-\alpha\Phi_p$ where $\alpha$ is a suitably chosen
weight.
For updating the pulse lengths $\theta_m$, we do not calculate the
gradient of $\hat{\Phi}$ and move in the direction of steepest
ascent but perform consecutive one-dimensional maximizations of
$\theta_m$ instead. This has the advantage that the step size
$\epsilon$ of Eq.~(\ref{eq:parameterupdate}) can be individually
adjusted for the different directions by considering also the
curvatures $\partial^2\hat{\Phi}/\partial \theta_m^2$. For a
negative curvature, the pulse length $\theta_m$ is updated by going
to the maximum of the parabola approximating $\hat{\Phi}$,
whereas for a positive curvature a fixed step size is used for
updating $\theta_m$.
To avoid becoming trapped in a local maximum of the performance
function, we combine the uphill search algorithm with elements of
simulated annealing. Instead of choosing the pulse length
$\theta_m^\ast$ corresponding to the maximum of the parabolic
approximation to $\hat{\Phi}$, the algorithm samples the region
around the maximum by randomly choosing a pulse length
$\theta_m=\theta_m^\ast+\Delta\theta$ where $\Delta\theta$ is
randomly drawn from a normal distribution with probability density
$\propto\exp((\partial^2\hat{\Phi}/\partial
\theta_m^2)(\Delta\theta)^2/T_{\rm eff})$ where the effective
temperature $T_{\rm eff}$ determines the spread of the
distribution around $\theta_m^\ast$. In the course of the
optimization, $T_{\rm eff}$ is lowered to zero. In addition, the
algorithm tries to introduce new pulses into the sequence from
time to time to achieve a variation of the pulse order. Unless
otherwise mentioned, the program is started from a random sequence
of pulses of sufficient length.
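
The following sketch summarizes our reading of the update scheme
described above (penalized performance function, consecutive
one-dimensional parabolic steps, and Gaussian sampling controlled by
$T_{\rm eff}$). The toy basis set, target, step sizes and annealing
schedule are illustrative choices and not the settings used to produce
the sequences of section \ref{sec_examples}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def expm_herm(H, th):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * th * w)) @ V.conj().T

# a small stand-in basis set for two qubits: global x, its square, local z terms
Sx = 0.5 * (np.kron(sx, I2) + np.kron(I2, sx))
S = [Sx, Sx @ Sx, 0.5 * np.kron(sz, I2), 0.5 * np.kron(I2, sz)]

U_target = expm_herm(np.kron(sx, sx), np.pi / 4)   # toy target operation
M = 16                                             # more pulses than strictly needed
k_idx = rng.integers(0, len(S), size=M)            # random initial pulse ordering
theta = 0.3 * rng.standard_normal(M)               # random initial pulse lengths
alpha, gamma = 0.05, 0.6                           # penalty weight and exponent

def perf(theta):
    U = np.eye(4, dtype=complex)
    for th, k in zip(theta, k_idx):
        U = expm_herm(S[k], th) @ U
    Phi = abs(np.trace(U_target.conj().T @ U)) ** 2 / 16.0
    return Phi - alpha * np.sum(np.abs(theta) ** gamma)

T_eff, h, step = 0.5, 1e-4, 0.05
for sweep in range(100):
    for m in range(M):                 # consecutive one-dimensional maximizations
        f0 = perf(theta)
        tp, tm = theta.copy(), theta.copy()
        tp[m] += h
        tm[m] -= h
        d1 = (perf(tp) - perf(tm)) / (2 * h)             # slope
        d2 = (perf(tp) - 2 * f0 + perf(tm)) / h ** 2     # curvature
        if d2 < 0:
            # jump towards the vertex of the local parabola (clipped for
            # robustness) and add annealing noise of width ~ sqrt(T_eff/2|d2|)
            move = np.clip(-d1 / d2, -1.0, 1.0)
            move += np.sqrt(T_eff / (2 * abs(d2))) * rng.standard_normal()
        else:
            move = step * np.sign(d1)  # positive curvature: fixed uphill step
        theta[m] += move
    T_eff *= 0.97                      # lower the effective temperature towards zero
print(perf(theta))
\end{verbatim}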
The computational overhead for performing an update of the pulse
amplitudes is the same for a method following the gradient and for
the $M$ consecutive one-dimensional optimizations of the pulse
lengths. Choosing the one-dimensional optimizations allows us to
calculate the curvatures $\partial^2\hat{\Phi}/\partial
\theta_m^2$ at no additional cost as well as to include the
annealing technique. It would be preferable to use all the
information encoded in the elements $\partial^2\hat{\Phi}/\partial
\theta_m\partial\theta_n$ of the Hessian matrix; however, we judged
the computational cost amounting to ${\cal O}(M^2)$ matrix
multiplications to be prohibitively high.
\section{Examples\label{sec_examples}}
The optimization routine was used to search for decompositions of
unitary transformations of interest in systems of three to five
qubits. In the following subsections, examples will be given for
pulse sequences found by the program. Interestingly, it turns out
that in most cases the sequences consist of pulses having pulse
lengths that are simple fractions of $\pi$ even though the
optimization routine was allowed to vary the pulse lengths
continuously. Moreover, the pulse sequences listed below realize
the desired target operation not only approximately but exactly.
Depending on the initial pulse sequence and the values of the
parameters controlling the optimization process, the optimization
algorithm can converge to different solutions realizing a target
operation $U_{\rm target}$. This demonstrates that we cannot be
sure that the pulse sequences found by the program necessarily
represent the optimum solution. However, as we are interested in
discovering sequences of practical interest to be used in
experiments, this is hardly a drawback.
In the following, we simplify our notation by using the short-hand
notation $U_z^{(k)}(\theta) \leftrightarrow [\theta]_z^k$,
$U_X(\theta) \leftrightarrow [\theta]_X$, $U_{MS}^{X}(\theta)
\leftrightarrow [\theta]_{XX}$ to achieve a convenient and compact
representation of the pulse sequences. Pulses are separated by
hyphens and temporally ordered from left to right. Whenever a
sequence is given, the number of qubits it is operating on is
mentioned in the text.
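
For readers who wish to check the sequences below numerically, the
following helper converts this shorthand into a matrix. It relies on
assumed conventions ($U_X(\theta)=\exp(-i\theta S_x)$,
$U_{MS}^X(\theta)=\exp(-i\theta S_x^2)$,
$U_z^{(k)}(\theta)=\exp(-i\theta\sigma_z^{(k)}/2)$, and the analogous
definitions for the $y$ variants of
note~\cite{OptControlQGates_Footnote2}); if the actual definitions
differ, the printed overlap deviates from one and the angles have to be
rescaled accordingly.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def op(single, k, N):
    out = np.array([[1.0]], dtype=complex)
    for j in range(1, N + 1):
        out = np.kron(out, single if j == k else np.eye(2))
    return out

def expm_herm(H, theta):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T

def pulse(tag, theta, N, k=None):
    Sx = 0.5 * sum(op(sx, j, N) for j in range(1, N + 1))
    Sy = 0.5 * sum(op(sy, j, N) for j in range(1, N + 1))
    if tag == "z":
        H = 0.5 * op(sz, k, N)                      # [theta]_z^k
    else:
        H = {"X": Sx, "Y": Sy, "XX": Sx @ Sx, "YY": Sy @ Sy}[tag]
    return expm_herm(H, theta)

def sequence(pulses, N):
    # pulses are (tag, theta) or ("z", theta, k), temporally ordered left to right
    U = np.eye(2 ** N, dtype=complex)
    for p in pulses:
        U = pulse(p[0], p[1], N, *p[2:]) @ U
    return U

# usage: the first CNOT sequence of the next subsection, on N = 3 qubits
pi = np.pi
seq = [("X", pi / 2), ("z", pi / 2, 1), ("XX", pi / 4), ("X", pi / 4),
       ("z", pi, 3), ("X", pi / 4), ("XX", pi / 4), ("z", pi / 2, 1),
       ("X", pi / 2), ("z", pi, 3)]
U = sequence(seq, 3)
U_t = 0.5 * (np.eye(8) + op(sz, 1, 3) + op(sx, 2, 3)
             - op(sz, 1, 3) @ op(sx, 2, 3))
print(abs(np.trace(U_t.conj().T @ U)) / 8)   # 1 if the assumed conventions match
\end{verbatim}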
\subsection{CNOT gates on three qubits}
In a system of three qubits, a CNOT gate between two qubits is
realized by the sequence
\begin{eqnarray*}
&&
[\frac{\pi}{2}]_{X}-[\frac{\pi}{2}]_z^{1}-[\frac{\pi}{4}]_{XX}-
[\frac{\pi}{4}]_{X}-[\pi]_z^{3}-[\frac{\pi}{4}]_{X}-
\\ &&
[\frac{\pi}{4}]_{XX}-[\frac{\pi}{2}]_z^{1}-[\frac{\pi}{2}]_{X}-[\pi]_z^{3},
\end{eqnarray*}
where the target qubit 2 is controlled by qubit 1, corresponding
to the operation $U=({\mathcal
I}+\sigma_z^{(1)}+\sigma_x^{(2)}-\sigma_z^{(1)}\sigma_x^{(2)})/2$.
A unitary transformation consisting of two CNOT operations with
qubit 1 controlling the other two qubits is described by the
unitary operation $U=({\mathcal
I}+\sigma_z^{(1)}+\sigma_x^{(2)}\sigma_x^{(3)}-\sigma_z^{(1)}\sigma_x^{(2)}\sigma_x^{(3)})/2$.
One way of decomposing it into elementary operations is given by
the sequence
\begin{eqnarray*}
&&
[\frac{\pi}{2}]_{X}-[-\frac{\pi}{2}]_z^{1}-[\frac{\pi}{4}]_{XX}-
[-\frac{\pi}{4}]_{X}-[\pi]_z^{1}-[-\frac{\pi}{4}]_{X}-
\\ &&
[-\frac{\pi}{4}]_{XX}-[\frac{\pi}{2}]_z^{1}-[\frac{\pi}{2}]_{X}.
\end{eqnarray*}
As an example of a pulse sequence found by the search algorithm
that is not composed of pulses having pulse angles which are
simple rational fractions of $\pi$, we present another
decomposition of the three-qubit operation $U$ given by \linebreak
\begin{equation*}
[\beta_2]_{XX}-[\alpha_2]_z^{1}-[\beta_2]_{XX}-[\alpha_1]_z^{1}-[\beta_1]_{XX}-[\alpha_1]_z^{1}.
\end{equation*}
For $\alpha_1\approx 0.7121\pi$, $\alpha_2\approx -0.4241\pi$,
$\beta_1\approx -0.2121\pi$ and $\beta_2\approx 0.3560\pi$, it
also realizes, up to an unimportant global phase, two CNOT gates on
three qubits with qubit 1 controlling the other two. After the program
had provided the pulse angles, we found in a second step that
the angles satisfy the algebraic relations $\alpha_1=2\beta_2$,
$\alpha_2=\pi-4\beta_2$, $\beta_1=\pi/2-2\beta_2$ with
$\beta_2=\frac{3}{8}\pi-\frac{1}{4}\arcsin(\sqrt{5}-2)$.
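
These relations are easy to check; a minimal numerical confirmation of
the quoted approximate values reads:
\begin{verbatim}
import numpy as np

beta2 = 3 * np.pi / 8 - 0.25 * np.arcsin(np.sqrt(5) - 2)
alpha1 = 2 * beta2
alpha2 = np.pi - 4 * beta2
beta1 = np.pi / 2 - 2 * beta2
print([round(x / np.pi, 4) for x in (alpha1, alpha2, beta1, beta2)])
# [0.7121, -0.4241, -0.2121, 0.356], matching the values quoted above
\end{verbatim}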
\subsection{CNOT gates on more than three qubits}
In general, a sequence realizing a two-qubit gate in a system with
$N=3$ qubits will not function correctly in the case $N>3$, as it
risks entangling the spectator qubits with each other. On the
other hand, a sequence realizing a two-qubit gate for $N=4$ which
does not contain any phase shift gates on the spectator qubits is
also applicable to $N>4$ because of the symmetry of the
interactions between the different spectator qubits. A pulse
sequence realizing a CNOT gate for $N\ge 4$ is given by
\begin{eqnarray}
&&
[\frac{\pi}{2}]_{X}-[-\frac{\pi}{2}]_z^{1}-[-\frac{\pi}{8}]_{XX}-
[\pi]_z^{2}-[\frac{\pi}{8}]_{XX}-
\nonumber\\ &&
[\frac{\pi}{4}]_{X}-[\pi]_z^{1}-[-\frac{\pi}{8}]_{XX}-
[\pi]_z^{2}-[\frac{\pi}{8}]_{XX}- \label{eq:CNOTonNQubits}
\\ &&
[-\frac{\pi}{4}]_{X}-[-\frac{\pi}{2}]_z^{1}-[-\frac{\pi}{2}]_{X}
\nonumber
\end{eqnarray}
where again the qubit 1 controls the target qubit 2.
\subsection{Further multi-qubit operations}
Apart from CNOT gate operations, we also searched for a
decomposition of a quantum Toffoli gate operation. The sequence
\cite{OptControlQGates_Footnote2}
\begin{eqnarray*}
&&
[\frac{\pi}{2}]_{Y}-[\frac{\pi}{4}]_z^{3}-[\frac{\pi}{2}]_{XX}-
[-\frac{\pi}{2}]_{X}-[-\frac{\pi}{2}]_z^{3}-[-\frac{\pi}{4}]_{X}-
\\ &&
[\frac{\pi}{4}]_{XX}-[\frac{\pi}{2}]_z^{3}-[\frac{\pi}{2}]_{XX}-
[\frac{\pi}{2}]_{X}-[-\frac{\pi}{2}]_{Y}
\end{eqnarray*}
is applicable to a system of three qubits and flips the state of
the third qubit depending on the state of qubits 1 and 2, thus
realizing the operation $U=(3{\mathcal
I}+\sigma_x^{(3)}+(\sigma_z^{(1)}+\sigma_z^{(2)}-\sigma_z^{(1)}\sigma_z^{(2)})(\mathcal{
I}-\sigma_x^{(3)}))/4$.
In a system with an even number $N=2M$ of qubits,
a mapping between the Bell basis and the product state basis for
each pair of qubits $(2m-1,2m)$, $m=1,\ldots,M$, is of interest as
it could be used for measuring the multipartite concurrence of an
$M$-qubit quantum state available in two copies \cite{Aolita:2006}
and for implementing entanglement purification protocols. For the
case of four qubits, the sequence
\begin{equation*}
[\frac{\pi}{4}]_{XX}-[\pi]_z^{1}-[\pi]_z^{2}-[\frac{\pi}{4}]_{XX}-[\pi]_z^{1}-[\pi]_z^{2}
\end{equation*}
realizes the desired mapping described by
$U_{(1-2,3-4)}=\exp(-i\frac{\pi}{4}(\sigma_x^{(1)}\sigma_x^{(2)}+\sigma_x^{(3)}\sigma_x^{(4)}))$.
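
Under the assumed conventions stated earlier, this claim can again be
verified numerically; the sketch below builds the six-pulse sequence
for four qubits and compares it, up to a global phase, with
$U_{(1-2,3-4)}$.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def op(single, k, N=4):
    out = np.array([[1.0]], dtype=complex)
    for j in range(1, N + 1):
        out = np.kron(out, single if j == k else np.eye(2))
    return out

def expm_herm(H, theta):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T

Sx = 0.5 * sum(op(sx, j) for j in range(1, 5))
U_MS = lambda th: expm_herm(Sx @ Sx, th)
U_z = lambda k, th: expm_herm(0.5 * op(sz, k), th)

pi = np.pi
half = U_z(2, pi) @ U_z(1, pi) @ U_MS(pi / 4)    # [pi/4]_XX-[pi]_z^1-[pi]_z^2
U = half @ half                                  # the full six-pulse sequence
H_ref = op(sx, 1) @ op(sx, 2) + op(sx, 3) @ op(sx, 4)
U_ref = expm_herm(H_ref, pi / 4)                 # exp(-i pi/4 (X1 X2 + X3 X4))
print(abs(np.trace(U_ref.conj().T @ U)) / 16)    # 1.0, i.e. equal up to a phase
\end{verbatim}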
The approach can be extended to higher numbers of qubits.
Executing the sequence
\begin{eqnarray*}
&&
[\frac{\pi}{8}]_{XX}-[\pi]_z^{1}-[\pi]_z^{2}-[\pi]_z^{3}-[\pi]_z^{4}-
\\ &&
[\frac{\pi}{8}]_{XX}-[\pi]_z^{1}-[\pi]_z^{2}-[\pi]_z^{5}-[\pi]_z^{6}
\end{eqnarray*}
twice realizes the unitary transformation
$U_{(1-2,3-4,\ldots)}=\exp(-i\frac{\pi}{4}\sum_{m=1}^M\sigma_x^{(2m-1)}\sigma_x^{(2m)})$
for the case $N=6,8$. Moreover, this approach could also be used
to create linear cluster states and to realize the Hamiltonian of
a 1-D Ising model by concatenating the unitaries
$U_{(1-2,3-4,\ldots)}$ and $U_{(2-3,4-5,\ldots)}$.
\subsection{Quantum error correction}
\begin{figure}
\caption{Circuit for repetitive quantum error correction using a three-qubit code and two ancilla qubits for syndrome detection (see text).\label{fig:errorcorrection}}
\end{figure}
In experimental quantum information processing, first steps have
been taken to demonstrate quantum error correction
\cite{Cory1998,Chiaverini:2004a} using three-qubit codes. A major
step forward would be the realization of repetitive error
correction where the quantum information remains encoded in a
logical qubit all the time. In the example shown in
Fig.~\ref{fig:errorcorrection}, a qubit state $\alpha|0\rangle +
\beta |1\rangle$ is encoded in a logical qubit consisting of three
qubits as $|\psi_L\rangle=\alpha|000\rangle + \beta |111\rangle$.
The circuit detects single bit flip errors $\sigma_x^{(m)}$,
$m=1,2,3$, by means of two additional ancilla qubits for syndrome
detection and corrects the errors coherently by application of
quantum Toffoli gates in order to restore the state
$|\psi_L\rangle$.
Using gate decompositions similar to Eq.~(\ref{eq:CNOTonNQubits})
for constructing the unitary transformation $U_{\rm QEC}$ that
detects and corrects errors from CNOT and Toffoli gates would
result in a pulse sequence with more than 100 pulses that does not
seem to be realizable with current technology. Therefore, the only
practical approach seems to be to search for a gate decomposition
of the complete operation $U_{\rm QEC}$. However, as the time
needed for finding gate decompositions scales exponentially with
the number of qubits, the task of decomposing the five-qubit
operation $U_{\rm QEC}$ might seem quite challenging at first
glance. But actually, there is a whole class of operations
$\tilde{U}_{\rm QEC}$ equivalent to $U_{\rm QEC}$ accomplishing
the error correction protocol. To see this, note that we only
require $\tilde{U}_{\rm QEC}$ to perform the mapping
$U^{(m)}|\psi_L\rangle|00\rangle\rightarrow|\psi_L\rangle|\psi_A^{(m)}\rangle$
where
$U^{(m)}\in\{\mathcal{I},\sigma_x^{(1)},\sigma_x^{(2)},\sigma_x^{(3)}\}$
and $|\psi_A^{(m)}\rangle$ are arbitrary orthonormal vectors
describing the ancilla state at the end of the correction step.
Therefore, any valid transformation $\tilde{U}_{\rm QEC}$ can be
put into the form
\begin{eqnarray}
\tilde{U}_{\rm
QEC}P^A_{00}=&\sum_m&\!\!\!\!\left(|000\rangle\langle
000|U_m\otimes|\psi_{A,0}^m\rangle\langle 00|\right.\nonumber\\
&+&\!\!\left.|111\rangle\langle
111|U_m\otimes|\psi_{A,1}^m\rangle\langle 00|\right)
\end{eqnarray}
where the ancilla states need to satisfy the constraint
$\psi_{A,0}^m=\psi_{A,1}^m$.
Here, $P^A_{00}={\mathcal I}\otimes|00\rangle\langle 00|$ is a
projector onto the initial ancilla state.
The condition $\langle\psi_{A,0}^m|\psi_{A,1}^m\rangle=1$ expressing
the constraint $\psi_{A,0}^m=\psi_{A,1}^m$ ensures that no
information about the state of the logical qubit can be obtained
from detecting the ancilla state. To search for a gate decomposition, the performance function can
now be modified by replacing the trace $\mbox{Tr}(U^\dagger U_{\rm
QEC})$ by
$\Phi=\mbox{Re}(\sum_m\langle\psi_{A,0}^{(m)}|\psi_{A,1}^{(m)}\rangle)$.
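
As an illustration of this figure of merit, the quantity $\Phi$ can be
evaluated for any candidate five-qubit unitary as in the sketch below
(our own bookkeeping conventions: qubits 1--3 carry the code, qubits 4
and 5 are the ancillas, and qubit 1 is the most significant bit of the
state vector); a perfect correction step gives $\Phi=4$.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def op(single, k, N=5):
    out = np.array([[1.0]], dtype=complex)
    for j in range(1, N + 1):
        out = np.kron(out, single if j == k else np.eye(2))
    return out

def basis_state(bits):            # |b1 b2 b3 b4 b5>, qubit 1 most significant
    v = np.array([1.0], dtype=complex)
    for b in bits:
        v = np.kron(v, np.eye(2)[:, b])
    return v

errors = [np.eye(32, dtype=complex)] + [op(sx, m) for m in (1, 2, 3)]

def qec_merit(U):
    total = 0.0
    for E in errors:
        psi0 = U @ E @ basis_state((0, 0, 0, 0, 0))   # logical |000>, ancillas |00>
        psi1 = U @ E @ basis_state((1, 1, 1, 0, 0))   # logical |111>, ancillas |00>
        a0 = psi0.reshape(8, 4)[0, :]                 # ancilla vector next to <000|
        a1 = psi1.reshape(8, 4)[7, :]                 # ancilla vector next to <111|
        total += np.real(np.vdot(a0, a1))
    return total                                      # 4 for a perfect correction step

# example: a Haar-random unitary is, unsurprisingly, far from the maximum of 4
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
U_random, _ = np.linalg.qr(A)
print(qec_merit(U_random))
\end{verbatim}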
Using this approach, the search program found the following pulse
decomposition $\tilde{U}_{\rm QEC}$ for realizing an operation equivalent to $U_{\rm QEC}$:
\begin{widetext}
\begin{eqnarray*}
&&
[-\frac{\pi}{2}]_{X}-[\frac{\pi}{2}]_z^{5}-[\frac{\pi}{2}]_z^{4}-
[\pi]_z^{3}-[-\frac{\pi}{8}]_{YY}-[\pi]_z^{2}-[-\frac{\pi}{8}]_{YY}-
[\pi]_z^{5}-[\pi]_z^{3}-[\frac{\pi}{8}]_{YY}-[\pi]_z^{2}-[-\frac{3\pi}{8}]_{YY}-[\frac{\pi}{2}]_{X}-[\frac{\pi}{2}]_z^{3}-
\\
&&
[\frac{\pi}{2}]_z^{1}-
[-\frac{\pi}{8}]_{XX}-[\pi]_z^{5}-[\pi]_z^{4}-[\frac{\pi}{8}]_{XX}-[\pi]_z^{1}-
[\frac{\pi}{2}]_z^{4}-[\pi]_z^{5}-[\frac{\pi}{8}]_{XX}-[\pi]_z^{2}-[-\frac{3\pi}{8}]_{X}-[\pi]_z^{4}-[-\frac{\pi}{8}]_{XX}-[\pi]_z^{5}-
\\
&&
[\frac{\pi}{8}]_{XX}-[\frac{\pi}{8}]_{X}-[\pi]_z^{2}-[\frac{\pi}{8}]_{XX}-
[-\frac{\pi}{2}]_z^{4}-[-\frac{\pi}{4}]_{XX}.
\end{eqnarray*}
\end{widetext}
Here, the uphill search was extremely slow when starting the
search algorithm from a random sequence of pulses. However,
starting the algorithm from a sequence constructed from gate
decompositions of the CNOT and Toffoli gates and initially driving
it away from this undesired solution by increasing the effective
temperature $T_{\rm eff}$ proved to be an effective strategy for
finding an improved solution.
\section{Summary and outlook}
In conclusion, we have developed an algorithm based on optimal
control techniques for finding decompositions of unitary
transformations into finite sequences of pulses that correspond to
the application of Hamiltonians drawn from a given set. For a set
of Hamiltonians that are of interest for ion trap quantum
computing, the algorithm provides gate decompositions that would
be difficult to find otherwise.
Searching for gate decompositions that exploit parallel two-qubit interactions
on a small number of ions provided in many cases (as, for example, for the
quantum Toffoli gate) a pulse sequence much shorter than what could have been
achieved with sequential two-qubit interactions and single-qubit gates. In a
few other cases, however, this strategy did not pay off (as we noted when
looking for a decomposition of the quantum Fredkin gate).
While our investigation was limited to a particular basis set of
Hamiltonians $\cal S$, the program could be adapted to search for
gate decompositions using other sets that might be more relevant
for realizations of quantum computing in other physical systems
where, for example, the natural entangling interaction is given by
a $\sqrt{i\rm SWAP}$ gate \cite{Steffen2006a} or an exchange
interaction \cite{DiVincenzo2000}. In the context of
ion trap quantum computing, another interesting question is
whether the algorithm could be modified to make it applicable to
quantum gates realized by a tightly focussed laser
interacting with a single ion at a time. In this case, the Hilbert
space would comprise not only the qubit states but also the
harmonic oscillator the qubits couple to, a
configuration found not only in ion trap quantum computing but
also in cavity-QED setups where an electromagnetic field mode
interacts with cold atoms or superconducting flux qubits.
\newline
\begin{acknowledgments}
We gratefully acknowledge the support of the European network
SCALA, the Institut f{\"u}r Quanteninformation GmbH and IARPA.
C.~F.~R. would like to thank T.~Schulte-Herbr{\"u}ggen and
S.~Glaser for useful discussions and C.~Kruszynska for help with
symbolic calculations.
\end{acknowledgments}
\begin{thebibliography}{27}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Deutsch}(1985)}]{Deutsch:1985}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Deutsch}},
\bibinfo{journal}{Proc. R. Soc. Lond. A} \textbf{\bibinfo{volume}{400}},
\bibinfo{pages}{97} (\bibinfo{year}{1985}).
\bibitem[{\citenamefont{Nielsen and Chuang}(2000)}]{Nielsen2000}
\bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Nielsen}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{I.~L.} \bibnamefont{Chuang}},
\emph{\bibinfo{title}{Quantum Computation and Quantum Information}}
(\bibinfo{publisher}{Cambridge Univ. Press, Cambridge},
\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Barenco et~al.}(1995)\citenamefont{Barenco, Bennett,
Cleve, DiVincenzo, Margolus, Shor, Sleator, Smolin, and
Weinfurter}}]{Barenco1995}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Barenco}},
\bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Bennett}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Cleve}},
\bibinfo{author}{\bibfnamefont{D.~P.} \bibnamefont{DiVincenzo}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Margolus}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Shor}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Sleator}},
\bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Smolin}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibinfo{journal}{Phys.~Rev.~A} \textbf{\bibinfo{volume}{52}},
\bibinfo{pages}{3457} (\bibinfo{year}{1995}).
\bibitem[{\citenamefont{Lloyd}(1995)}]{Lloyd1995}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}},
\bibinfo{journal}{Phys.~Rev.~Lett.} \textbf{\bibinfo{volume}{75}},
\bibinfo{pages}{346} (\bibinfo{year}{1995}).
\bibitem[{\citenamefont{Blatt and Wineland}(2008)}]{Blatt:2008}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blatt}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Wineland}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{453}},
\bibinfo{pages}{1008} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{N\"agerl et~al.}(1999)\citenamefont{N\"agerl,
Leibfried, Rohde, Thalhammer, Eschner, Schmidt-Kaler, and
Blatt}}]{Naegerl:1999}
\bibinfo{author}{\bibfnamefont{H.~C.} \bibnamefont{N\"agerl}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Leibfried}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Rohde}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Thalhammer}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eschner}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Schmidt-Kaler}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blatt}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{60}},
\bibinfo{pages}{145} (\bibinfo{year}{1999}).
\bibitem[{\citenamefont{Schmidt-Kaler et~al.}(2003)\citenamefont{Schmidt-Kaler,
H\"affner, Gulde, Riebe, Lancaster, Deuschle, Becher, H\"ansel, Eschner, Roos
et~al.}}]{Schmidt-Kaler2003a}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Schmidt-Kaler}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{H\"affner}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gulde}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Riebe}},
\bibinfo{author}{\bibfnamefont{G.~P.~T.} \bibnamefont{Lancaster}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Deuschle}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Becher}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{H\"ansel}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eschner}},
\bibinfo{author}{\bibfnamefont{C.~F.} \bibnamefont{Roos}},
\bibnamefont{et~al.}, \bibinfo{journal}{Appl.~Phys.~B}
\textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{789} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Leibfried et~al.}(2003)\citenamefont{Leibfried,
DeMarco, Meyer, Lucas, Barrett, Britton, Itano, Jelenkovi{\'c}, Langer,
Rosenband et~al.}}]{Leibfried2003a}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Leibfried}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{DeMarco}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Meyer}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Lucas}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Barrett}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Britton}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Jelenkovi{\'c}}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Langer}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rosenband}},
\bibnamefont{et~al.}, \bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{422}}, \bibinfo{pages}{412} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Sackett et~al.}(2000)\citenamefont{Sackett, Kielpinski,
King, Langer, Meyer, Myatt, Rowe, Turchette, Itano, Wineland
et~al.}}]{Sackett2000}
\bibinfo{author}{\bibfnamefont{C.~A.} \bibnamefont{Sackett}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Kielpinski}},
\bibinfo{author}{\bibfnamefont{B.~E.} \bibnamefont{King}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Langer}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Meyer}},
\bibinfo{author}{\bibfnamefont{C.~J.} \bibnamefont{Myatt}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Rowe}},
\bibinfo{author}{\bibfnamefont{Q.~A.} \bibnamefont{Turchette}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Wineland}},
\bibnamefont{et~al.}, \bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{404}}, \bibinfo{pages}{256} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Benhelm et~al.}(2008)\citenamefont{Benhelm, Kirchmair,
Roos, and Blatt}}]{Benhelm:2008b}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Benhelm}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Kirchmair}},
\bibinfo{author}{\bibfnamefont{C.~F.} \bibnamefont{Roos}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blatt}},
\bibinfo{journal}{Nat.~Phys.} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{463} (\bibinfo{year}{2008}).
\bibitem[{Opt({\natexlab{a}})}]{OptControlQGates_Footnote0}
\bibinfo{note}{Assuming that a single-qubit gate is performed with a laser beam
with intensity $I$ and a Gaussian spatial beam profile with waist $w$, it is
advantageous to work with an interaction whose strengths falls off as $\theta
(r)\propto\exp(-2r^2/w^2)$ instead of $\theta
(r)\propto\sqrt{I}\propto\exp(-r^2/w^2)$.}
\bibitem[{\citenamefont{Rowe et~al.}(2002)\citenamefont{Rowe, Ben-Kish,
DeMarco, Leibfried, Meyer, Beall, Britton, Hughes, Itano, Jelenkovi{\'c}
et~al.}}]{Rowe2002}
\bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Rowe}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Ben-Kish}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{DeMarco}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Leibfried}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Meyer}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Beall}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Britton}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Hughes}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Jelenkovi{\'c}}},
\bibnamefont{et~al.}, \bibinfo{journal}{Quant.~Inf.~Comp.}
\textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{257} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Khaneja et~al.}(2005)\citenamefont{Khaneja, Reiss,
Kehlet, Schulte-Herbr{\"u}ggen, and Glaser}}]{Khaneja2005}
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Khaneja}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Reiss}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kehlet}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Schulte-Herbr{\"u}ggen}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~J.}
\bibnamefont{Glaser}}, \bibinfo{journal}{J.~Magn.~Reson.}
\textbf{\bibinfo{volume}{172}}, \bibinfo{pages}{296} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Tesch and de~Vivie-Riedle}(2002)}]{Tesch:2002}
\bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Tesch}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{de~Vivie-Riedle}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{89}},
\bibinfo{pages}{157901} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Palao and Kosloff}(2003)}]{Palao:2003}
\bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Palao}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Kosloff}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{68}},
\bibinfo{pages}{062308} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Vandersypen and Chuang}(2004)}]{Vandersypen2004}
\bibinfo{author}{\bibfnamefont{L.~M.~K.} \bibnamefont{Vandersypen}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~L.}
\bibnamefont{Chuang}}, \bibinfo{journal}{Rev.~Mod.~Phys.}
\textbf{\bibinfo{volume}{76}}, \bibinfo{pages}{1037} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Grace et~al.}(2007)\citenamefont{Grace, Brif, Rabitz,
Walmsley, Kosut, and Lidar}}]{Grace:2007}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Grace}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Brif}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Rabitz}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Walmsley}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Kosut}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Lidar}}, \bibinfo{journal}{J.
Phys. B} \textbf{\bibinfo{volume}{40}}, \bibinfo{pages}{S103}
(\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Montangero et~al.}(2007)\citenamefont{Montangero,
Calarco, and Fazio}}]{Montangero:2007}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Montangero}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Calarco}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Fazio}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}},
\bibinfo{pages}{170501} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Chiara et~al.}(2008)\citenamefont{Chiara, Calarco,
Anderlini, Montangero, Lee, Brown, Phillips, and Porto}}]{DeChiara:2008}
\bibinfo{author}{\bibfnamefont{G.~D.} \bibnamefont{Chiara}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Calarco}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Anderlini}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Montangero}},
\bibinfo{author}{\bibfnamefont{P.~J.} \bibnamefont{Lee}},
\bibinfo{author}{\bibfnamefont{B.~L.} \bibnamefont{Brown}},
\bibinfo{author}{\bibfnamefont{W.~D.} \bibnamefont{Phillips}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~V.} \bibnamefont{Porto}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{77}},
\bibinfo{pages}{052333} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Timoney et~al.}(2008)\citenamefont{Timoney, Elman,
Glaser, Weiss, Johanning, Neuhauser, and Wunderlich}}]{Timoney:2008}
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Timoney}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Elman}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Glaser}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Weiss}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Johanning}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Neuhauser}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Wunderlich}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{77}},
\bibinfo{pages}{052334} (\bibinfo{year}{2008}).
\bibitem[{Opt({\natexlab{b}})}]{OptControlQGates_Footnote1}
\bibinfo{note}{If the sum in eq.~(\ref{eq:UKhaneja}) consists of only a single
term, then eq.~(\ref{eq:gradient}) becomes exact.}
\bibitem[{Opt({\natexlab{c}})}]{OptControlQGates_Footnote2}
\bibinfo{note}{As the Hamiltonians in set $\cal S$ are sufficient for
generating arbitrary unitary operations, we did not include the Hamiltonians
$S_Y$, $S_Y^2$. However, as these Hamiltonians are experimentally as easily
generated as $S_X$, $S_X^2$ by shifting the optical phase of the control
laser field by $\pi/2$ with the help of an acousto-optical modulator, we use
them in some of the pulse sequences for constructing the desired target
operation.}
\bibitem[{\citenamefont{Aolita and Mintert}(2006)}]{Aolita:2006}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Aolita}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Mintert}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{97}},
\bibinfo{pages}{050501} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Cory et~al.}(1998)\citenamefont{Cory, Price, Maas,
Knill, Laflamme, Zurek, Havel, and Somaroo}}]{Cory1998}
\bibinfo{author}{\bibfnamefont{D.~G.} \bibnamefont{Cory}},
\bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Price}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Maas}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Knill}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Laflamme}},
\bibinfo{author}{\bibfnamefont{W.~H.} \bibnamefont{Zurek}},
\bibinfo{author}{\bibfnamefont{T.~F.} \bibnamefont{Havel}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.~S.} \bibnamefont{Somaroo}},
\bibinfo{journal}{Phys.~Rev.~Lett.} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{2152} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Chiaverini et~al.}(2004)\citenamefont{Chiaverini,
Leibfried, Schaetz, Barrett, Blakestad, Britton, Itano, Jost, Knill, Langer
et~al.}}]{Chiaverini:2004a}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Chiaverini}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Leibfried}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Schaetz}},
\bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Barrett}},
\bibinfo{author}{\bibfnamefont{R.~B.} \bibnamefont{Blakestad}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Britton}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{J.~D.} \bibnamefont{Jost}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Knill}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Langer}},
\bibnamefont{et~al.}, \bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{432}}, \bibinfo{pages}{602} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Steffen et~al.}(2006)\citenamefont{Steffen, Ansmann,
Bialczak, Katz, Lucero, McDermott, Neeley, Weig, Cleland, and
Martinis}}]{Steffen2006a}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Steffen}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Ansmann}},
\bibinfo{author}{\bibfnamefont{R.~C.} \bibnamefont{Bialczak}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Katz}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Lucero}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{McDermott}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Neeley}},
\bibinfo{author}{\bibfnamefont{E.~M.} \bibnamefont{Weig}},
\bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{Cleland}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~M.}
\bibnamefont{Martinis}}, \bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{313}}, \bibinfo{pages}{1423} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{DiVincenzo et~al.}(2000)\citenamefont{DiVincenzo,
Bacon, Kempe, Burkard, and Whaley}}]{DiVincenzo2000}
\bibinfo{author}{\bibfnamefont{D.~P.} \bibnamefont{DiVincenzo}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bacon}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kempe}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Burkard}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.~B.} \bibnamefont{Whaley}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{408}},
\bibinfo{pages}{339} (\bibinfo{year}{2000}).
\end{thebibliography}
\end{document}
\begin{document}
\title{High Fidelity Quantum Gates for Trapped Ions under Micromotion}
\author{C. Shen and L.-M. Duan}
\affiliation{Department of Physics, University of Michigan, Ann Arbor, MI 48109}
\affiliation{Center for Quantum Information, IIIS, Tsinghua University, Beijing
100084, China}
\begin{abstract}
Two or three dimensional Paul traps can confine a large number of
ions forming a Wigner crystal, which would provide an ideal architecture
for scalable quantum computation except for the micromotion, an issue
that is widely believed to be the killer for high fidelity quantum
gates. Surprisingly, here we show that the micromotion is not an obstacle
at all for design of high fidelity quantum gates, even though the
magnitude of the micromotion is significantly beyond the requirement
of the Lamb-Dicke condition. Through exact solution of the quantum
Mathieu equations, we demonstrate the principle of the gate design
under micromotion using two ions in a quadrupole Paul trap as an example.
The proposed micromotion quantum gates can be extended to the many
ion case, paving a new way for scalable trapped ion quantum computation.
\end{abstract}
\pacs{03.67.Lx, 03.67.Ac, 37.10.Ty}
\maketitle
Trapped ions constitute one of the most promising systems for realization
of quantum computation \cite{1}. All the quantum information processing
experiments so far are done in linear Paul traps, where the ions form
a one-dimensional (1D) crystal along the trap axis \cite{1,2,3,4}.
In this configuration, the external radio-frequency (r.f.) Paul trap
can be well approximated by a static trapping potential with negligible
micromotion, which is believed to be critical for design of high fidelity
quantum gates. However, in terms of scalability, the linear configuration
is not the optimal one for realization of large scale quantum computation:
first, the number of ions in a linear trap is limited \cite{5}; and
second, the linear configuration is not convenient for realization
of fault-tolerant quantum computation. The effective qubit coupling
in a large ion chain is dominated by the dipole interaction, which
is only good for short-range quantum gates because of its fast decay
with distance. In a linear chain with short range quantum gates, the
error threshold for fault tolerance is very stringent and hard to meet
experimentally \cite{6,raussendorf_fault-tolerant_2007}.
From a scalability point of view, two (2D) or three dimensional (3D)
Paul traps would be much better suited for quantum computation than
a linear chain. In a 2D or 3D trap, one can hold a large number of
qubits with a high error threshold for fault tolerance, in the range
of a percent level, even with just the nearest neighbor quantum gates
\cite{raussendorf_fault-tolerant_2007}. Thousands to millions of
ions have been successfully trapped to form 2D or 3D Wigner crystals
in a Paul trap \cite{2D_crystal}. However, there is a critical problem
in using this system for quantum computation, namely the micromotion
issue. In the 2D or 3D configuration, micromotion cannot be compensated,
and the magnitude of micromotion for each ion can be significantly
beyond the optical wavelength (i.e., outside of the Lamb-Dicke region).
As the micromotion is driven by the r.f. field of the Paul trap, it
cannot be laser cooled. The messy and large-magnitude micromotion
well beyond the Lamb-Dicke condition is believed to be a critical
hurdle for design of entangling quantum gate operations in this architecture.
In this paper, we show that the micromotion surprisingly is not an
obstacle at all for design of high-fidelity quantum gates. When the
ions form a crystal in a time-dependent Paul trap, they will be described
by a set of Mathieu equations. We solve exactly the quantum Mathieu
equations in general with an inhomogeneous driving term and find that
the micromotion is dominated by a well-defined classical trajectory
with no quantum fluctuation. This large classical motion is far outside
of the Lamb-Dicke region, however, it does not lead to infidelity
of quantum gates if it is appropriately taken into account in the
gate design. The quantum part of the Mathieu equation is described
by the secular mode with a micromotion correction to its mode function.
This part of motion still satisfies the Lamb-Dicke condition at the
Doppler temperature, which is routine to achieve for experiments.
We use two ions in a quadrupole trap, which have large micromotion,
as an example to show the principle of the gate design, and give the
explicit gate scheme both in the slow and the fast gate regions using
multi-segment laser pulses \cite{zhu_arbitrary-speed_2006,zhu_trapped_2006},
with the intrinsic gate infidelity arbitrarily approaching zero under
large micromotion. We finally give a brief discussion of the general
procedure of the gate design under micromotion, which in principle
can work for any number of ions, with important implication for large-scale
quantum computation.
To illustrate the general feature of micromotion in a Paul trap and
the principle of the gate design under micromotion, we consider a
three-dimensional (3D) anisotropic quadrupole trap with a time dependent
potential $\Phi(x,y,z)=\left(U_{0}+V_{0}\cos\left(\Omega_{T}t\right)\right)\left(\frac{x^{2}+y^{2}-2z^{2}}{d_{0}^{2}}\right)\equiv\alpha(t)(x^{2}+y^{2}-2z^{2})$
from an electric field oscillating at the r.f. $\Omega_{T}$, where
$U_{0},V_{0}$ are voltages for the d.c. and a.c. components and $d_{0}$
characterizes the size of the trap. We choose a positive $U_{0}$
to reduce the effective trap strength along the $z$ direction so
that the two ions align along the $z$-axis. Since the motions in
different directions do not couple to each other under quadratic expansion,
we focus our attention on the $z$ direction. The total potential
energy of two ions (each with charge $e$ and mass $m$) is
\begin{equation}
V(z_{1},z_{2})=-2e\alpha(t)\left(z_{1}^{2}+z_{2}^{2}\right)+\frac{e^{2}}{4\pi\epsilon_{0}\left\vert z_{1}-z_{2}\right\vert }.\label{1}
\end{equation}
Define center-of-mass (CM) coordinate $u_{\text{cm}}=(z_{1}+z_{2})/2$
and relative coordinate $u_{\text{r}}=z_{1}-z_{2}$. Without loss
of generality, we assume $u_{\text{r}}>0$ and its average $\bar{u}_{\text{r}}=u_{0}$.
We assume the magnitude of the ion motion is significantly less than
the ion separation, which is always true for the ions in a crystal
phase. The Coulomb interaction can then be expanded around the average
distance $\bar{u}_{\text{r}}$ up to the second order of $\left\vert u_{\text{r}}-u_{0}\right\vert $.
Under this expansion, the total Hamiltonian $H=p_{\text{cm}}^{2}/4m+p_{\text{r}}^{2}/m+V(z_{1},\, z_{2})$
is quadratic (although time-dependent) in terms of the coordinate
operators $u_{\text{cm}},u_{\text{r}}$ and the corresponding momentum
operators $p_{\text{cm}}=p_{1}+p_{2}$, $p_{\text{r}}=\left(p_{1}-p_{2}\right)/2$.
The Heisenberg equations under this Hamiltonian $H$ yield the following
quantum Mathieu equations respectively for the coordinate operators
$u_{\text{cm}}$ and $u_{\text{r}}$
\begin{equation}
\frac{d^{2}u_{\text{cm}}}{d\xi^{2}}+\left(a_{\text{cm}}-2q_{\text{cm}}\cos\left(2\xi\right)\right)u_{\text{cm}}=0\label{eq:EOM-CM}
\end{equation}
\begin{equation}
\frac{d^{2}u_{\text{r}}}{d\xi^{2}}+\left(a_{\text{r}}-2q_{\text{r}}\cos\left(2\xi\right)\right)u_{\text{r}}=f_{0}\label{eq:EOM-REL}
\end{equation}
where the dimensionless parameters $a_{\text{cm}}=-16eU_{0}/\left(md_{0}^{2}\Omega_{T}^{2}\right)$,
$a_{\text{r}}=a_{\text{cm}}+4e^{2}/\left(\pi\epsilon_{0}mu_{0}^{3}\Omega_{T}^{2}\right)$,
$q_{\text{cm}}=q_{\text{r}}=8eV_{0}/\left(md_{0}^{2}\Omega_{T}^{2}\right)$
and the dimensionless time $\xi=\Omega_{T}t/2$. The driving term
$f_{0}=6e^{2}/\left(\pi\epsilon_{0}mu_{0}^{2}\Omega_{T}^{2}\right)$.
The quantum operators $u_{\text{cm}}$ and $u_{\text{r}}$ satisfy
the same form of the Mathieu equations (except for the driving term
$f_{0}$) as for the classical variables. As these equations are linear,
we can use the solutions known for the classical Mathieu equation
to construct a quantum solution that takes into account the quantum
fluctuations.
It is well known that the solution to the classical Mathieu equation $\frac{d^{2}}{d\xi^{2}}v+\left(a-2q\cos\left(2\xi\right)\right)v=0$
is a combination of Mathieu sine $S(a,\, q,\,\xi)$ and Mathieu cosine
$C(a,\, q,\,\xi)$ functions, which reduce to the conventional sine
and cosine functions when micromotion is neglected \cite{mclachlan_mathieu_1947}.
The solution to a homogeneous quantum Mathieu equation $\frac{d^{2}}{d\xi^{2}}\hat{u}+\left(a-2q\cos\left(2\xi\right)\right)\hat{u}=0$
can be described using the reference oscillator technique \cite{ref_oscillator}.
From the classical solution $v$ and the quantum operator $\hat{u}$,
one can introduce the following annihilation operator of a reference
oscillator (remember that $\xi=\Omega_{T}t/2$ is the dimensionless
time)
\begin{equation}
\hat{a}(t)=\sqrt{\frac{m}{2\hbar\omega}}i\left(v(t)\dot{\hat{u}}(t)-\dot{v}(t)\hat{u}(t)\right),\label{eq:annihlation}
\end{equation}
where $\omega$ is a normalization constant typically taken as the
secular motion frequency of the corresponding Mathieu equation. In
addition, we impose the initial condition for $v(t)$ with $\left.v(t)\right\vert _{t=0}=1$
and $\left.\dot{v}(t)\right\vert _{t=0}=i\omega$. The position operator
$\hat{u}(t)$ and its conjugate momentum $\hat{p}(t)\equiv m\dot{\hat{u}}(t)$
satisfy the commutator $\left[\hat{u}(t),\hat{p}(t)\right]=i\hbar$.
From the above definition, one can easily check that $\frac{d}{dt}\hat{a}(t)\propto v\frac{d^{2}}{d\xi^{2}}\hat{u}-\hat{u}\frac{d^{2}}{d\xi^{2}}v=0$,
so $\hat{a}(t)\equiv\hat{a}$ is a constant of motion. Furthermore,
$\hat{a}$ and $\hat{a}^{\dagger}$ satisfy the standard commutator
\[
\left[\hat{a},\hat{a}^{\dagger}\right]=\left(m/2\hbar\omega\right)\left(i\hbar/m\right)\left.\left(v(t)\dot{v}^{\ast}(t)-v^{\ast}(t)\dot{v}(t)\right)\right\vert _{t=0}=1.
\]
When micromotion is neglected, $v(t)=e^{i\omega t}$ and $\hat{a}$
reduces to the annihilation operator of a harmonic oscillator. In
the presence of micromotion, $v(t)=C(a,\, q,\,\xi)+iS(a,\, q,\,\xi)$.
The solution to the position operator $\hat{u}$ takes the form
\begin{equation}
\hat{u}(t)=u_{0}\left(v^{\ast}(t)\hat{a}+v(t)\hat{a}^{\dagger}\right)\label{5}
\end{equation}
where $u_{0}\equiv\sqrt{\hbar/2m\omega}$ is the oscillator length.
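
As a side remark, the mode function $v$ is easy to obtain numerically.
The sketch below (with illustrative parameter values) integrates the
classical Mathieu equation in the dimensionless time $\xi$ with the
initial conditions stated above, i.e.\ $v(0)=1$ and
$dv/d\xi|_{0}=i\beta$ with $\beta=2\omega/\Omega_{T}$, and checks that
the Wronskian-type invariant behind $[\hat{a},\hat{a}^{\dagger}]=1$ is
conserved along the trajectory.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, q = 0.0, 0.2                 # illustrative Mathieu parameters
beta = np.sqrt(a + q**2 / 2)    # lowest-order secular exponent, beta = 2*omega/Omega_T

def rhs(xi, y):                 # y = [Re v, Im v, Re dv/dxi, Im dv/dxi]
    c = a - 2 * q * np.cos(2 * xi)
    return [y[2], y[3], -c * y[0], -c * y[1]]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0, beta],
                rtol=1e-10, atol=1e-12)
v = sol.y[0] + 1j * sol.y[1]
vp = sol.y[2] + 1j * sol.y[3]
wronskian = np.imag(np.conj(v) * vp)     # equals beta at xi = 0
print(wronskian.min(), wronskian.max())  # both stay at beta, up to integration error
\end{verbatim}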
The above solution gives a complete description of the center-of-mass
motion with the operator
\begin{equation}
u_{\text{cm}}(t)=u_{0\text{cm}}\left(v_{\text{cm}}^{\ast}(t)\hat{a}_{\text{cm}}+v_{\text{cm}}(t)\hat{a}_{\text{cm}}^{\dagger}\right),\label{6}
\end{equation}
where $u_{0\text{cm}}\equiv\sqrt{\hbar/4m\omega_{\text{cm}}}$ and
$\omega_{cm}$ is the secular frequency of the center of mass mode.
The relative motion $u_{\text{r}}$ satisfies the inhomogeneous quantum
Mathieu equation (3). To solve it, we let $u_{\text{r}}=u_{\text{r}}^{\prime}+\bar{u}_{\text{r}}$,
where $u_{\text{r}}^{\prime}$ is an operator that inherits the commutators
for $u_{\text{r}}$ and satisfies the homogenous quantum Mathieu equation
and $\bar{u}_{\text{r}}$ is a classical variable corresponding to
a special solution of the Mathieu equation $\frac{d^{2}\bar{u}_{\text{r}}}{d\xi^{2}}+\left(a_{\text{r}}-2q_{\text{r}}\cos\left(2\xi\right)\right)\bar{u}_{\text{r}}=f_{0}$.
The special solution $\bar{u}_{\text{r}}$ can be found through the
series expansion $\bar{u}_{\text{r}}=f_{0}\sum_{n=0}^{+\infty}c_{n}\cos(2n\xi)$,
where the expansion coefficients $c_{n}$ satisfy the recursion relations
$a_{\text{r}}c_{0}-q_{r}c_{1}=1$ and $c_{n}=D_{n}\left(c_{n-1}+c_{n+1}+c_{0}\delta_{n,1}\right)$
for $n\geq1$ with $D_{n}\equiv-q_{r}/\left(4n^{2}-a_{r}\right)$.
When $a_{r}\ll1$ and $q_{\text{r}}\ll1$, which is typically true
under real experimental configurations, $c_{n}$ rapidly decays to
zero with $\left\vert c_{n+1}/c_{n}\right\vert \approx q_{\text{r}}/4\left(n+1\right)^{2}$
and we can keep only the first few terms in the expansion and obtain
an approximate analytical expression for $\bar{u}_{\text{r}}$ \cite{supplementary}.
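
A simple way to obtain these coefficients numerically (a sketch with
illustrative parameter values) is to truncate the recursion at some
order $N$ and solve the resulting linear system; inserting the
truncated series back into the driven Mathieu equation confirms that
the residual is negligible for $a_{\text{r}},q_{\text{r}}\ll1$.
\begin{verbatim}
import numpy as np

a_r, q_r, f0 = 0.02, 0.2, 1.0   # illustrative values (f0 only rescales the result)
N = 8                           # truncation order, c_{N+1} is set to zero

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], A[0, 1], b[0] = a_r, -q_r, 1.0        # a_r c_0 - q_r c_1 = 1
for n in range(1, N + 1):
    A[n, n] = a_r - 4 * n**2
    A[n, n - 1] += -q_r
    if n + 1 <= N:
        A[n, n + 1] = -q_r
    if n == 1:
        A[n, 0] += -q_r                        # the extra c_0 delta_{n,1} term
c = np.linalg.solve(A, b)
print(c[:4])                    # rapid decay, |c_{n+1}/c_n| ~ q_r / (4 (n+1)^2)

# residual check: insert u_bar = f0 * sum_n c_n cos(2 n xi) into the driven equation
xi = np.linspace(0.0, 20.0, 4001)
u = f0 * sum(cn * np.cos(2 * n * xi) for n, cn in enumerate(c))
upp = f0 * sum(-(2 * n)**2 * cn * np.cos(2 * n * xi) for n, cn in enumerate(c))
print(np.max(np.abs(upp + (a_r - 2 * q_r * np.cos(2 * xi)) * u - f0)))
# essentially zero, limited only by the truncation and floating point
\end{verbatim}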
The complete solution of $u_{\text{r}}$ is then given by
\begin{equation}
u_{\text{r}}(t)=u_{0\text{r}}\left(v_{\text{r}}^{\ast}(t)\hat{a}_{\text{r}}+v_{\text{r}}(t)\hat{a}_{\text{r}}^{\dagger}\right)+\bar{u}_{\text{r}}(t),\label{7}
\end{equation}
where $u_{0\text{r}}\equiv\sqrt{\hbar/m\omega_{\text{r}}}$ and $\omega_{r}$
is the secular frequency of the relative mode.
Now we show how to design high fidelity quantum gates under micromotion.
To perform the controlled phase flip (CPF) gate, we apply laser induced
spin dependent force on the ions, with the interaction Hamiltonian
described by \cite{zhu_trapped_2006}
\begin{equation}
H=\sum_{j=1}^{2}\hbar\Omega_{j}\cos\left(k_{\delta}z_{j}+\mu_{\delta}t+\phi_{j}\right)\sigma_{j}^{z}.\label{8}
\end{equation}
where $k_{\delta}$ is the wave vector difference of the two Raman
beams along the $z$ direction, $\mu_{\delta}$ is the two-photon
Raman detuning, $\Omega_{j}$ (real) is the Raman Rabi frequency for
the ion $j$, and $\phi_{j}$ is the corresponding initial phase.
In terms of the normal modes, the position operators $z_{j}=u_{\text{cm}}-(-1)^{j}u_{\text{r}}/2$,
where $u_{\text{cm}},\, u_{\text{r}}$ are given by Eqs. (6) and (7).
We introduce three Lamb-Dicke parameters, $\eta_{\text{cm}}\equiv k_{\delta}u_{0\text{cm}}$
for the CM mode, $\eta_{\text{\text{r}}}\equiv k_{\delta}u_{0\text{r}}/2$
for the relative mode, and $\eta_{\text{mm}}\equiv k_{\delta}\bar{u}_{\text{r}}/2$
for pure micromotion. Under typical experimental configurations, $\eta_{\text{cm}}\sim\eta_{\text{r}}\ll1$.
The parameter $\eta_{\text{mm}}$ is a classical variable that oscillates
rapidly with time by multiples of the micromotion frequency $\Omega_{T}$.
In Fig. 1(a), we show a typical trajectory of $\eta_{\text{mm}}\left(t\right)$.
The magnitude of variation of $\eta_{\text{mm}}$ is considerably
larger than $1$. In Fig. 1(b), we also plot the function $v_{\text{cm}}(t)$,
which is dominated by the oscillation at the secular motion frequency
$\omega_{\text{cm}}$ with small correction from the micromotion.
The magnitude of $v_{\text{cm}}(t)$ is bounded by a constant slightly
larger than 1. The function $v_{r}(t)$ has very similar behavior
except that $\omega_{\text{cm}}$ is replaced by $\omega_{\text{r}}$.
From this consideration of parameters, we can expand the term $\cos\left(k_{\delta}z_{j}+\mu_{\delta}t+\phi_{j}\right)$
with small parameters $\eta_{\text{cm}},\eta_{\text{r}}$, but $\eta_{\text{mm}}$
is a large term that needs to be treated exactly. After the expansion,
to leading order in $\eta_{\text{cm}}$ and $\eta_{\text{r}}$, the
Hamiltonian $H$ takes the form
\begin{equation}
H\approx-\left[\chi_{1}(t)\sigma_{1}^{z}+\chi_{2}(t)\sigma_{2}^{z}\right]\hat{f}_{\text{cm}}-\left[\chi_{1}(t)\sigma_{1}^{z}-\chi_{2}(t)\sigma_{2}^{z}\right]\hat{f}_{\text{r}},\label{9}
\end{equation}
where we have defined
\begin{eqnarray}
\hat{f}_{\mu} & \equiv & \eta_{\mu}\left(v_{_{\mu}}^{\ast}(t)\hat{a}_{\mu}+v_{\mu}(t)\hat{a}_{\mu}^{\dagger}\right),\\
\chi_{j}(t) & \equiv & \hbar\Omega_{j}\sin\left[\mu_{\delta}t+\phi_{j}-(-1)^{j}\eta_{\text{mm}}\left(t\right)\right],
\end{eqnarray}
where the subscript $\mu=$ cm, r and $j=1,\,2$. In Eq. (9), we have
dropped the term $\cos\left(\mu_{\delta}t+\phi_{j}\pm\eta_{\text{mm}}\right)$
which induces single-bit phase shift but is irrelevant for the CPF\ gate.
The evolution operator at the gate time $\tau$ generated by the Hamiltonian
$H$ can be expressed as
\begin{equation}
U(\tau)=D_{\text{cm}}(\alpha_{\text{cm}})D_{\text{r}}(\alpha_{\text{r}})\exp\left[i(\gamma_{\text{r}}-\gamma_{\text{\text{cm}}})\sigma_{1}^{z}\sigma_{2}^{z}\right],\label{11}
\end{equation}
where the displacement operator $D_{\mu}(\alpha_{\mu})\equiv\exp\left(\alpha_{\mu}\hat{a}_{\mu}^{\dagger}-\alpha_{\mu}^{\ast}\hat{a}_{\mu}\right)$
($\mu=$ cm, r). Let $j_{\mu}=1$ for $\mu=$ cm and $j_{\mu}=-1$
for $\mu=$ r. The displacement $\alpha_{\mu}$ and the accumulated
phase $\gamma_{\mu}$ have the following expression
\begin{align}
\alpha_{\mu} & =i\eta_{\mu}\int_{0}^{\tau}\left(\chi_{1}(t)\sigma_{1}^{z}+j_{\mu}\chi_{2}(t)\sigma_{2}^{z}\right)u_{\mu}(t)\, dt\label{12}\\
\gamma_{\mu} & =\left(\eta_{\mu}\right)^{2}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\,\mathcal{S}[\chi_{1}\chi_{2}]\,\text{Im}\left[u_{\mu}(t_{1})u_{\mu}^{\ast}(t_{2})\right]
\end{align}
where $\mathcal{S}[\chi_{1}\chi_{2}]\equiv\chi_{1}(t_{1})\chi_{2}(t_{2})+\chi_{1}(t_{2})\chi_{2}(t_{1})$.
\begin{figure}
\caption{(Color online) (a) A typical trajectory of the time-dependent micromotion parameter $\eta_{\rm mm}(t)$. (b) The mode function $v_{\rm cm}(t)$, dominated by oscillation at the secular frequency $\omega_{\rm cm}$ with a small micromotion correction.\label{fig:trajectory}}
\end{figure}
To realize the CPF gate, we require $\alpha_{\mu}=0$ and $\gamma_{\text{r}}-\gamma_{\text{\text{cm}}}=\pi/4$.
The integrals $\alpha_{\mu}$ can be evaluated semi-analytically \cite{supplementary}
or purely numerically. We normally take $\Omega_{1}=\Omega_{2}\equiv\Omega$.
Note that even in this case $\chi_{1}(t)\neq\chi_{2}(t)$
because of the micromotion term $\eta_{\text{mm}}\left(t\right)$. This
is different from the case of a static trap. From Eq. (12), we see
that $\alpha_{\mu}=0$ for a fixed $\mu$ gives two complex and thus
four real constraints. With excitation of $N$ motional modes, the
total number of (real) constraints to realize the CPF\ gate is therefore
$4N+1$ (the condition $\gamma_{\text{r}}-\gamma_{\text{\text{cm}}}=\pi/4$
gives one constraint). To satisfy these constraints, we divide the
Rabi frequency $\Omega\left(t\right)$ $\left(0\leq t\leq\tau\right)$
into $m$ equal-time segments, and take a constant $\Omega_{\beta}$
$\left(\beta=1,2,\cdots,m\right)$ for the $\beta$th segment \cite{zhu_arbitrary-speed_2006,zhu_trapped_2006}.
This kind of modulation can be conveniently done with an acousto-optic
modulator in experiments \cite{choi_optimal_2014}. The Rabi
frequencies are our control parameters. For the two ion case, under
fixed detuning $\mu_{\delta}$ and gate time $\tau$, in general we
can find a solution for the CPF\ gate with $m=9$ segments. For some
specific detuning $\mu_{\delta}$ very close to a secular mode frequency,
off-resonant excitations become negligible and a solution is possible
under one segment of pulse by tuning of the gate time $\tau$, which
corresponds to the case of the S\o{}rensen-M\o{}lmer gate \cite{3}
generalized to include the micromotion correction.
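
To make the counting of constraints concrete, the sketch below
illustrates the linear-algebra structure of this construction: with a
piecewise-constant Rabi frequency the closure conditions
$\alpha_{\mu}=0$ are linear in the segment amplitudes $\Omega_{\beta}$,
so a nontrivial solution is a null vector of an $8\times9$ moment
matrix, and the overall amplitude is subsequently fixed by the phase
condition $\gamma_{\text{r}}-\gamma_{\text{cm}}=\pi/4$. The mode
functions and $\eta_{\text{mm}}(t)$ used here are crude stand-ins of
ours (and the laser phases $\phi_{j}$ are set to zero); in the actual
gate design they are the Mathieu solutions of Eqs. (6) and (7).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

w_cm, w_r = 1.0, 1.732          # secular frequencies (units of omega_cm)
Omega_T = 20.0                  # r.f. frequency, stand-in value
mu_d = 1.4 * w_cm               # Raman detuning, as in the segmented example above
tau = 1.31 * 2 * np.pi / w_cm   # gate time
m_seg = 9                       # number of equal-time segments

eta_mm = lambda t: 1.5 * np.cos(Omega_T * t)        # stand-in micromotion phase
v = {"cm": lambda t: np.exp(1j * w_cm * t),         # stand-in mode functions
     "r": lambda t: np.exp(1j * w_r * t)}
chi = {1: lambda t: np.sin(mu_d * t + eta_mm(t)),   # ion 1: +eta_mm
       2: lambda t: np.sin(mu_d * t - eta_mm(t))}   # ion 2: -eta_mm

edges = np.linspace(0.0, tau, m_seg + 1)
rows = []
for mode in ("cm", "r"):
    for j in (1, 2):
        f = lambda t, mode=mode, j=j: chi[j](t) * v[mode](t)
        re = [quad(lambda t: np.real(f(t)), edges[b], edges[b + 1], limit=400)[0]
              for b in range(m_seg)]
        im = [quad(lambda t: np.imag(f(t)), edges[b], edges[b + 1], limit=400)[0]
              for b in range(m_seg)]
        rows += [re, im]

Mmat = np.array(rows)             # 8 real closure conditions, 9 segment amplitudes
_, s, Vt = np.linalg.svd(Mmat)
Omega = Vt[-1]                    # relative amplitudes closing all phase-space loops
print(np.max(np.abs(Mmat @ Omega)))   # ~0: every mode returns to its initial state
# The absolute scale of Omega (and hence the laser power) then follows from
# imposing the two-qubit phase condition gamma_r - gamma_cm = pi/4.
\end{verbatim}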
To characterize the quality of the gate, we use the fidelity $F\equiv\text{tr}_{\mu}\left[\rho_{\mu}\left\vert \left\langle \Psi_{0}\right\vert U_{\text{CPF}}^{\dagger}U(\tau)\left\vert \Psi_{0}\right\rangle \right\vert ^{2}\right]$,
defined as the overlap of the evolution operator $U(\tau)$ with the
perfect one $U_{\text{CPF}}\equiv e^{i\pi\sigma_{1}^{z}\sigma_{2}^{z}/4}$
under the initial state $\left\vert \Psi_{0}\right\rangle $ for the
ion spins and the thermal state $\rho_{\mu}$ for the phonon modes.
In our calculation, without loss of generality, we take $\left\vert \Psi_{0}\right\rangle =\left(\left\vert 0\right\rangle +\left\vert 1\right\rangle \right)\otimes\left(\left\vert 0\right\rangle +\left\vert 1\right\rangle \right)/2$
and assume the Doppler temperature $T_{D}$ for all the phonon modes.
For any given detuning $\mu_{\delta}$ and gate time $\tau$, we optimize
the control parameters $\Omega_{\beta}$ $\left(\beta=1,2,\cdots,m\right)$
to get the maximum fidelity $F$. In Fig.~\ref{fig:fidelity_mu0.9},
we show the gate fidelity as a function of gate time for $\mu_{\delta}=0.95\omega_{\text{cm}}$
(close to a secular frequency) by applying a single segment laser
pulse of a constant Rabi frequency $\Omega$. In the figure, the dashed
line corresponds to the result in a static harmonic trap with the
same secular frequencies but no micromotion. If we take into account
the micromotion contribution but do not change the gate design, the
result is described by the dash-dot line, with a low fidelity about
only $50\%$. When we optimize the gate design (optimize $\Omega_{\beta}$)
including the micromotion correction, the gate fidelity is represented
by the solid line, which approaches the optimal fidelity achievable
in a static trap. The gate infidelity $\delta F\equiv1-F$ approaches
$2\times10^{-3}$ at the optimal gate time $\tau=20.005T_{z}$, where
$T_{z}\equiv2\pi/\omega_{\text{cm}}$.
By applying $9$ segments of laser pulses with optimized $\Omega_{\beta}$
$\left(\beta=1,2,\cdots,9\right)$, the gate fidelity $F$ can attain
unity at arbitrary detuning $\mu_{\delta}$ for the two-ion case.
As an example, in Fig.~\ref{fig:segmented}(a), we show the optimized solution of $\Omega_{\beta}$
(blue lines) at an arbitrarily chosen detuning $\mu_{\delta}=1.4\omega_{\text{cm}}$.
For comparison, the red lines represent the solution of $\Omega_{\beta}$
in a static harmonic trap with otherwise the same parameters. The
maximum magnitude of $\left\vert \Omega_{\beta}\right\vert $ significantly
increases in the presence of micromotion. This is understandable as
fast oscillations of the micromotion tend to lower the effective Rabi
frequencies. In Fig. 3(b), we show the maximum magnitude of $\left\vert \Omega_{\beta}\right\vert $
as a function of the gate time $\tau$. Compared with the solution
in a static harmonic trap, the maximum $\left\vert \Omega_{\beta}\right\vert $
in general needs to increase by about an order of magnitude under
micromotion.
\begin{figure}
\caption{(Color online) (a) The fidelity of a two-ion conditional phase flip
gate as a function of gate time, with the unit of time $T_{z}\equiv2\pi/\omega_{\text{cm}}$.}
\label{fig:fidelity_mu0.9}
\end{figure}
\begin{figure}
\caption{(Color online) (a) The waveform of the optimal segmented pulse calculated
for the gate with duration $\tau=1.31T_{z}$.}
\label{fig:segmented}
\end{figure}
In conclusion, we demonstrate that arbitrarily high fidelity quantum
gates can be achieved under large micromotion. The demonstration in
this paper uses the example of two ions in a quadrupole trap, which
has the micromotion magnitude significantly beyond the Lamb-Dicke
limit. The idea here is clearly applicable to the many-ion case.
For a system of $N$ ions in any dimension, as long as the ions crystallize,
each ion has an average equilibrium position. We can then expand the
Coulomb potential around these equilibrium positions. Under the r.f.
Paul trap and the Coulomb interaction, the motion of the ions can
then be described by a set of coupled time-dependent Mathieu equations.
Using the technique in this paper, we can solve the motional dynamics
and optimize the gate design that explicitly takes into account all
the micromotion contributions. The gate design technique under micromotion
proposed in this paper removes a major obstacle to high-fidelity quantum
computation in real r.f. traps beyond the 1D limitation and opens
a new route to scalable quantum computation based on large 2D or 3D
trapped-ion crystals in Paul traps.
\textit{Acknowledgments. }This work was supported by the NBRPC (973
Program) 2011CBA00300 (2011CBA00302), the IARPA MUSIQC program, the
ARO and the AFOSR MURI programs, and the DARPA OLE program.
\begin{widetext}
\section*{Appendix: Supplementary Material}
In this appendix, we show in detail how to solve the driven Mathieu
equation and give an approximate treatment of the motional integrals.
\subsection*{SOLUTION OF DRIVEN MATHIEU EQUATION}
We show in detail how to solve the Mathieu equation with a constant
drive term.
\[
\frac{d^{2}u}{d\xi^{2}}+\left(a-2q\cos\left(2\xi\right)\right)u=f_{0}
\]
Let us assume that $u(\xi)=f_{0}\sum_{n=0}^{\infty}c_{n}\cos(2n\xi)$
and insert it into the equation. After re-organization, we get
\[
ac_{0}-qc_{1}+\sum_{n=1}^{\infty}\left[(a-4n^{2})c_{n}-q(c_{n-1}+c_{n+1})-qc_{0}\delta_{n,1}\right]\cos(2n\xi)=1.
\]
Defining $D_{n}\equiv(a-4n^{2})/q$, we have the following set of
linear equations
\begin{eqnarray*}
ac_{0}-qc_{1} & = & 1\\
c_{n}-\frac{1}{D_{n}}(c_{n-1}+c_{n+1}+c_{0}\delta_{n,1}) & = & 0.
\end{eqnarray*}
In matrix form,
\begin{equation}
\left(\begin{array}{cccccc}
a & -q & 0 & \cdots & & 0\\
-\frac{2}{D_{1}} & 1 & -\frac{1}{D_{1}}\\
0 & -\frac{1}{D_{2}} & 1 & -\frac{1}{D_{2}}\\
\vdots & & -\frac{1}{D_{3}} & 1 & -\frac{1}{D_{3}}\\
& & & \ddots & \ddots\\
0
\end{array}\right)\cdot\left(\begin{array}{c}
c_{0}\\
c_{1}\\
c_{2}\\
\vdots\\
\\
\\
\end{array}\right)=\left(\begin{array}{c}
1\\
0\\
0\\
\vdots\\
\\
\\
\end{array}\right).\label{eq:matrix_eq}
\end{equation}
The factor $1/D_{n}$ decreases very fast as $n$ increases and we
can truncate the expansion of $u(\xi)$ at a small $n$. Numerically
we observe that typically keeping up to $c_{2}$ already gives enough
accuracy. We can thus get a very accurate analytical expression
\begin{eqnarray*}
c_{0} & \approx & \frac{64+a(a-20)-q^{2}}{(32-3a)q^{2}+a(a-4)(a-16)},\\
c_{1} & \approx & \frac{2(a-16)q}{(32-3a)q^{2}+a(a-4)(a-16)},\\
c_{2} & \approx & \frac{2q^{2}}{(32-3a)q^{2}+a(a-4)(a-16)}.
\end{eqnarray*}
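As a quick numerical sanity check (not part of the original derivation), the truncated system (\ref{eq:matrix_eq}) can be solved directly and compared with the closed-form coefficients above. A minimal Python sketch, assuming NumPy is available and using the parameter values quoted just below for the relative motion, reads:
\begin{verbatim}
import numpy as np

def mathieu_drive_coeffs(a, q, nmax=2):
    # Truncated system for c_0..c_nmax of u(xi) = f0 * sum_n c_n cos(2 n xi):
    #   row 0:  a c_0 - q c_1 = 1
    #   row n:  c_n - (c_{n-1} + c_{n+1} + c_0 delta_{n,1}) / D_n = 0,
    # with D_n = (a - 4 n^2)/q and c_{nmax+1} dropped by the truncation.
    M = np.zeros((nmax + 1, nmax + 1))
    M[0, 0], M[0, 1] = a, -q
    for n in range(1, nmax + 1):
        Dn = (a - 4.0 * n**2) / q
        M[n, n] = 1.0
        M[n, n - 1] -= 1.0 / Dn
        if n == 1:
            M[n, 0] -= 1.0 / Dn          # extra c_0 term from delta_{n,1}
        if n + 1 <= nmax:
            M[n, n + 1] = -1.0 / Dn
    rhs = np.zeros(nmax + 1)
    rhs[0] = 1.0
    return np.linalg.solve(M, rhs)

a_r, q_r = -0.0388, 0.283                 # values quoted in the text
c = mathieu_drive_coeffs(a_r, q_r)
den = (32 - 3*a_r)*q_r**2 + a_r*(a_r - 4)*(a_r - 16)
print(c[0], (64 + a_r*(a_r - 20) - q_r**2)/den)   # the two values agree
\end{verbatim}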
For the example in the main text, $a_{r}=-0.0388$ and $q_{r}=0.283$,
we have $c_{0}=1132.8$ and $u_{r}(\xi)=f_{0}c_{0}\left[1-0.14\cos(2\xi)+0.0025\cos(4\xi)+\cdots\right].$
The micromotion-corrected equilibrium position is $f_{0}c_{0}$ and
should be identified with $u_{0}$, around which we expand the Coulomb
potential in the first place; thus $u_{0}$ must be determined self-consistently.
Taking the relative motion in the manuscript as an example,
both $a_{\text{r}}\equiv\frac{-16eU_{0}}{md_{0}^{2}\Omega_{T}^{2}}+\frac{4e^{2}}{\pi\epsilon_{0}mu_{0}^{3}\Omega_{T}^{2}}$
and $f_{0}\equiv\frac{6e^{2}}{\pi\epsilon_{0}mu_{0}^{2}\Omega_{T}^{2}}$
are functions of $u_{0}$, so the self-consistent equation
\begin{eqnarray*}
u_{0}=f_{0}c_{0} & \approx & f_{0}\frac{64+a_{\text{r}}(a_{\text{r}}-20)-q_{\text{r}}^{2}}{(32-3a_{\text{r}})q_{\text{r}}^{2}+a_{\text{r}}(a_{\text{r}}-4)(a_{\text{r}}-16)}
\end{eqnarray*}
gives the correct $u_{0}$. With the iterative method it typically
takes only a few iterations to converge to the correct value when
starting from a proper initial value of $u_{0}$.
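A minimal sketch of this self-consistent iteration is given below. Writing $a_{\text{r}}(u_{0})=-a_{\text{dc}}+B/u_{0}^{3}$ and $f_{0}(u_{0})=\tfrac{3}{2}B/u_{0}^{2}$ with $a_{\text{dc}}=16eU_{0}/(md_{0}^{2}\Omega_{T}^{2})$ and $B=4e^{2}/(\pi\epsilon_{0}m\Omega_{T}^{2})$, the constants \texttt{a\_dc} and \texttt{B} are placeholders to be filled with the actual trap parameters, and convergence of the plain fixed-point map from a proper initial value is assumed, as observed in practice.
\begin{verbatim}
def c0_approx(a, q):
    # closed-form approximation of c_0 derived above
    den = (32 - 3*a)*q**2 + a*(a - 4)*(a - 16)
    return (64 + a*(a - 20) - q**2) / den

def solve_u0(a_dc, B, q_r, u0, tol=1e-12, max_iter=100):
    # fixed-point iteration u0 <- f0(u0) * c0(a_r(u0), q_r)
    for _ in range(max_iter):
        a_r = -a_dc + B / u0**3
        u0_new = (1.5 * B / u0**2) * c0_approx(a_r, q_r)
        if abs(u0_new - u0) < tol * abs(u0):
            return u0_new
        u0 = u0_new
    return u0
\end{verbatim}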
\subsection*{TWO-STAGE TIME INTEGRAL}
Here we offer an approximate treatment of motional integrals. We notice
that the secular frequency $\omega$ and the micromotion frequency
$\Omega$ are well separated, i.e. $\omega\ll\Omega$. This means
quantities with characteristic frequency $\omega$ or below stay approximately constant
within one period of micromotion. So we can perform the time integral
in two steps: we first integrate over one period of the micromotion,
obtaining a slowly varying integrand, which we then integrate again.
By doing this we will show that the dominant effect of micromotion
is to modulate the effective Rabi frequency. Notice that the integrals
$\int_{0}^{\tau}\chi(t)u(t)\, dt$ can be reduced to the form below
(ignoring micromotion frequencies $n\Omega\pm\omega$ with $n\ge2$)
\begin{eqnarray*}
I & = & \int_{0}^{\tau}\sin\left(a_{0}(t)+a_{1}(t)\cos(\Omega t+\phi(t))\right)\left(b_{0}(t)+b_{1}(t)\cos\left(\Omega t+\varphi(t)\right)\right)dt,
\end{eqnarray*}
where $a_{0}(t)$, $a_{1}(t)$, $b_{0}(t)$, $b_{1}(t)$, $\phi(t)$
and $\varphi(t)$ are all real slowly varying functions within one
period of micromotion $\frac{2\pi}{\Omega}$. The above integral can
be further broken into two parts, $I_{1}$ and $I_{2}$, where
\begin{eqnarray*}
I_{1} & \approx & \int_{0}^{\tau}dt\frac{\Omega}{2\pi}\int_{t}^{t+2\pi/\Omega}dt_{1}\sin\left(a_{0}(t)+a_{1}(t)\cos(\Omega t_{1}+\phi)\right)b_{0}(t)\\
& = & \int_{0}^{\tau}dt\,\frac{1}{2\pi}\int_{-\pi}^{\pi}dt'\,\sin\left(a_{0}(t)+a_{1}(t)\cos(t')\right)b_{0}(t)\\
& = & \text{Im}\left[\int_{0}^{\tau}dt\,\exp\left(i\, a_{0}(t)\right)\frac{1}{2\pi}\int_{-\pi}^{\pi}dt'\,\exp\left(i\, a_{1}\cos(t')\right)b_{0}(t)\right]\\
& = & \text{Im}\left[\int_{0}^{\tau}dt\,\exp\left(i\, a_{0}(t)\right)J_{0}(a_{1})b_{0}(t)\right]\\
& = & \int_{0}^{\tau}dt\,\sin\left(a_{0}(t)\right)b_{0}(t)J_{0}(a_{1}(t))
\end{eqnarray*}
and
\begin{eqnarray*}
I_{2} & = & \int_{0}^{\tau}dt\sin\left(a_{0}(t)+a_{1}(t)\cos(\Omega t+\phi)\right)b_{1}(t)\cos\left(\Omega t+\varphi(t)\right)\\
& \approx & \int_{0}^{\tau}dt\frac{\Omega}{2\pi}\int_{t}^{t+2\pi/\Omega}dt_{1}\sin\left(a_{0}(t)+a_{1}(t)\cos(\Omega t_{1}+\phi)\right)b_{1}(t)\cos\left(\Omega t_{1}+\varphi\right)\\
& = & \int_{0}^{\tau}dt\,\cos(a_{0}(t))\cos(\varphi-\phi)J_{1}(a_{1}(t))\, b_{1}(t)
\end{eqnarray*}
where $J_{0}$ and $J_{1}$ denote the Bessel functions. In both cases,
the micromotion gives rise to slowly varying modulation factors, $J_{0}(a_{1}(t))$
and $\cos(\varphi-\phi)J_{1}(a_{1}(t))$. Moreover in $I_{2}$ the
phase of the original integrand is also shifted, $\sin(a_{0}(t))\rightarrow\cos(a_{0}(t))$.
For the actual experimental system, the term $I_{2}$ contributes
much less than $I_{1}$ to the target integral $I$, because the
coefficient of the micromotion component in $v(t)$ is much smaller than that of
the secular component. So to leading order, micromotion
reduces the laser Rabi frequency seen by the ion by a factor on the
order of $J_{0}(a_{1}(t))$.
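As a simple numerical check of this leading-order statement (not in the original text), one can verify that averaging $\sin\left(a_{0}+a_{1}\cos\theta\right)$ over one micromotion period indeed yields $\sin(a_{0})J_{0}(a_{1})$; the sketch below assumes NumPy and SciPy are available and uses arbitrary test values for $a_{0},a_{1}$.
\begin{verbatim}
import numpy as np
from scipy.special import j0

a0, a1 = 0.7, 1.3                       # arbitrary test values
theta = np.linspace(-np.pi, np.pi, 20001)
avg = np.trapz(np.sin(a0 + a1*np.cos(theta)), theta) / (2*np.pi)
print(avg, np.sin(a0)*j0(a1))           # the two numbers coincide
\end{verbatim}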
\end{widetext}
\end{document} |
\begin{document}
\title{When to stop value iteration:\\
stability and near-optimality versus computation}
\newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}}
\renewcommand{\dagger}{}
\newif\ifproofs
\newcommand{\ifproof}[1]{\ifproofs{{#1}}\fi}
\newcommand{\ifnotproof}[1]{\ifproofs\else{{#1}}\fi}
\newcommand{\mproof}[1]{\ifproofs\noindent\textbf{Proof.} {\color{red}{#1}} $
\blacksquare$
\fi}
\proofstrue
\def \wDelta{\widetilde\Delta}
\def \wdelta{\widetilde\delta}
\newcommand{\overline{\alpha}_W}{\overline{\alpha}_W}
\newcommand{\overline{\alpha}_V}{\overline{\alpha}_V}
\newcommand{\overline{\alpha}_Y}{\overline{\alpha}_Y}
\newcommand{\overline{\alpha}_Yi}{\overline{\alpha}_Y^{-1}}
\newcommand{\underline{\alpha}_Y}{\underline{\alpha}_Y}
\newcommand{\underline{\alpha}_Yi}{\underline{\alpha}_Y^{-1}}
\newcommand{\alpha_W}{\alpha_W}
\newcommand{\aw^{-1}}{\aw^{-1}}
\newcommand{{\alpha}_Y}{{\alpha}_Y}
\newcommand{\ay^{-1}}{\ay^{-1}}
\newcommand{\widetilde{\alpha}_Y}{\widetilde{\alpha}_Y}
\newcommand{\widehat{\alpha}_Y}{\widehat{\alpha}_Y}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{R}_{\geq 0}}{\mathbb{R}_{\geq 0}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathbb{Z}_{>0}}{\mathbb{Z}_{>0}}
\newcommand{\mathbb{Z}_{\geq 0}}{\mathbb{Z}_{\geq 0}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\bm{u}}{\bm{u}}
\NewDocumentCommand{\ustar}{o}{
\IfNoValueTF{#1}
{\bm{u}_{d}^{\pmb{*}}}
{\bm{u}_{#1}^{\pmb{*}}}
}
\NewDocumentCommand{\us}{o}{
\IfNoValueTF{#1}
{u^*}
{u^* _ {#1}}
}
\newcommand{\mathcal{U}star}{\mathcal{U}^*_{d}}
\newcommand{\bm{\hat{u}}_{d}}{\bm{\hat{u}}_{d}}
\newcommand{\widehat{\mathcal{U}}_{d}}{\widehat{\mathcal{U}}_{d}}
\NewDocumentCommand{\phis}{o}{
\IfNoValueTF{#1}
{\phi}
{\phi^* _ {#1}}
}
\NewDocumentCommand{\ells}{o}{
\IfNoValueTF{#1}
{\ell}
{\ell^*_{#1}}
}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{K}K}{\mathcal{KK}}
\newcommand{\mathcal{K}L}{\mathcal{KL}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\NewDocumentCommand{\V}{o}{
\IfNoValueTF{#1}
{V_{d}}
{V_{#1}}
}
\NewDocumentCommand{\J}{o}{
\IfNoValueTF{#1}
{J_{d}}
{J_{#1}}
}
\NewDocumentCommand{\Y}{o}{
\IfNoValueTF{#1}
{Y_{d}}
{Y_{#1}}
}
\NewDocumentCommand{\Vhat}{o}{
\IfNoValueTF{#1}
{{\widehat V}_{d}}
{{\widehat V}_{#1}}
}
\NewDocumentCommand{\Yhat}{o}{
\IfNoValueTF{#1}
{{\widehat Y}_{d}}
{{\widehat Y}_{#1}}
}
\newcommand{\gfactor}{\frac{1-\gamma}{\gamma}}
\newcommand{\tgfactor}{\tfrac{1-\gamma}{\gamma}}
\newcommand{\gdfactor}{\gfactor\frac{1}{1-\gamma^{d}}}
\newcommand{\tgdfactor}{\tgfactor\tfrac{1}{1-\gamma^{d}}}
\newcommand{\gstarfactor}{\frac{1-\gamma^*}{\gamma^*}}
\newcommand{\tgstarfactor}{\tfrac{1-\gamma^*}{\gamma^*}}
\newcommand{\gdstarfactor}{\gstarfactor\frac{1}{1-(\gamma^*)^{d^*}}}
\newcommand{\tgdstarfactor}{\tgstarfactor\tfrac{1}{1-(\gamma^*)^{d^*}}}
\DeclarePairedDelimiter\floor{\lfloor}{\rfloor}
\NewDocumentCommand{\uref}{m}{
{\textsubscript{(\ref{#1})}}
}
\newtheorem{SA}{Standing Assumption}
\newcommand{\XXX}{}
\NewDocumentCommand\diff{m}{{\color{blue}#1}}
\NewDocumentCommand\diffr{m}{{\color{blue}#1}}
\begin{abstract}
Value iteration (VI) is a ubiquitous algorithm for optimal control, planning, and reinforcement learning schemes. Under the right assumptions, VI is a vital tool to generate inputs with desirable properties for the controlled system, like optimality and Lyapunov stability. As VI usually requires an infinite number of iterations to solve general nonlinear optimal control problems, a key question is when to terminate the algorithm to produce a ``good'' solution, with a measurable impact on optimality and stability guarantees. By carefully analysing VI under general stabilizability and detectability properties, we provide novel, explicit relationships between the stopping criterion and the resulting near-optimality, stability and performance guarantees,
thus allowing these desirable properties to be tuned against the induced computational cost. The considered class of stopping criteria encompasses those encountered in the control, dynamic programming and reinforcement learning literature, and it also allows for new ones, which may further reduce the computational cost while still guaranteeing stability and near-optimality properties. We therefore lay a foundation to endow machine learning schemes based on VI with stability and performance guarantees, while reducing computational complexity.
\end{abstract}
\hypertarget{introduction}{
\section{Introduction}\label{introduction}}
Value iteration (VI) is an established method for optimal control, which
plays a key role in reinforcement learning
\citep{Sutton,LewisRLDP2009,BUSONIU20188,pang2019adaptive}. This
algorithm consists in iteratively constructing approximations of the
optimal value function, based on which near-optimal control inputs are
derived for a given nonlinear dynamical system and a given stage cost.
The convergence of said approximations to the optimal value function is
established in, e.g., \citep{Bertsekas:12,Bertsekas:TNNLS} under mild
conditions. To benefit from this convergence property, VI often needs to
be iterated infinitely many times. However, in practice, we cannot do so
and must stop iterating the algorithm earlier to manage the computational
burden, which may be critical in online applications. Heuristics are
often used in the literature to stop iterating by comparing the mismatch
between the value functions obtained at the current step and at the
previous one, see, e.g.,
\citep{Bertsekas:12,Sutton,pang2019adaptive,kiumarsi2017h,derongliu2015}.
An important question is then how far the obtained approximate value
function is from the optimal one. To the best of our knowledge, this is
only analysed in general when the cost is discounted and the stage cost
takes values in a bounded set \citep{Bertsekas:12}. An alternative
consists in asking for a sufficiently large number of iterations, as the
near-optimality gap vanishes as the number of iterations increases,
e.g.~\citep{Bertsekas:12,heydari,heydari2014adp,heydari2016acc,derongliu2015,granzotto2020},
but the issue is then the computational cost. Indeed, any estimate of
the number of iterations is in general subject to conservatism, and, as
a result, we may iterate many more times than what is truly required to
ensure ``good'' near-optimality properties. There is therefore a need
for stopping criteria for VI whose impact on near-optimality is
analytically established, and which are not too computationally
demanding.
Our main goal is to use VI to simultaneously ensure near-optimal control
and stability properties for physical systems. Stability is critical in
many applications, as: (i) it provides analytical guarantees on the
behavior of the controlled system solutions as time evolves; (ii) it endows
robustness properties and is thus associated with safety considerations,
see, e.g., \citep{berkenkamp2017safe}. We therefore consider systems and
costs where general stability properties are bestowed by VI based
schemes, which follows from assumed general stabilizability and
detectability properties of the plant model and the stage cost as in
\citep{grimm2005,romain2016,granzotto2020}.
In this context, we consider state-dependent stopping criteria for VI
and we analyse their impact on the near-optimality and stability
properties of the obtained policies for general deterministic nonlinear
plant models and stage costs, where no discount factor is employed.
Instead of relying on a uniform contraction property as in, e.g.,
\citep{Bertsekas:12,derongliu2015}, our analysis is centered on and
exploits Lyapunov stability properties. Our work covers the
state-independent stopping criteria considered in the control, dynamic
programming and reinforcement learning literature
\citep{Sutton,LewisRLDP2009,BUSONIU20188}, but provides analytical
guarantees for undiscounted stage costs taking values in unbounded sets.
By carefully analysing the stopping criterion's impact on
near-optimality, stability and closed-loop cost guarantees, we provide
means to tune these properties against the induced computational cost,
thus clarifying the tradeoff between ``good enough'' convergence of VI
and ``good properties'' of generated inputs. Considering that VI is, via
Q-learning, the basis of many state-of-the-art reinforcement learning
methods, we believe the results of this paper contribute to the
(near)-optimality analysis for reinforcement learning, as we lay a
foundation to endow such schemes with stability and performance
guarantees, while reducing computational complexity.
The paper and its contributions are organized as follows. In Section
\ref{problem-statement}, we formally state the problem and the main
assumptions. We introduce the design of stopping criteria for VI in
Section \ref{stopping-criterion-design}, and show that the VI stopping
criterion is indeed verified with a finite number of iterations. Our
main results are found in Section \ref{main-results}. There, we provide
near-optimal guarantees, i.e.~a bound on the mismatch between the
approximated value function and the true optimal value function. The
bound can be easily and directly tuned by the designed stopping
criterion. Additionally, stability and performance guarantees of the
closed-loop system with inputs generated by VI are provided, given that
the stopping criterion is appropriately chosen. In Section
\ref{illustrative-example}, we provide an example to illustrate our
results. Concluding remarks are drawn in Section
\ref{conclusion}.\ifnotproof{ The proofs are omitted and available in the associated technical report \citep{granzottoVIcontrolreport}.}\ifproof{ The proofs are provided in this technical report in Section \ref{sec:proofs}.}
\noindent\textbf{Prior literature.} The classical stopping criterion is
analysed in \citep{Bertsekas:12}, albeit restricted to when the cost is
discounted and the stage cost takes values in a bounded set. Concerning
stability, works like
\citep{granzotto2020,heydari2017stability,wei2015value} provide
conditions to ensure that the feedback law obtained ensures a stability
property for a dynamical system. In particular, it is required in
\citep{granzotto2020} that the number of iterations \(d\) be sufficiently
large, and lower bounds on \(d\) are provided, but these are subject to
some conservatism. As explained above, by adapting the number of
iterations with data available during computations, the algorithm avoids
the conservatism often incurred by offline estimations for stability and
near-optimality guarantees. This is indeed the case in an example (see
Section \ref{S:example}), where we observe \(91\%\) fewer iterations for
comparable guarantees. Similar ideas related to the stopping criterion
were exploited in \citep{granzotto2020optimistic}, for a different
purpose, namely for the redesign of optimistic planning \citep{hren2008}
to address the near-optimal control of switched systems. We are also
aware of the work in \citep{Pavlov2019}, which adapts the stopping criterion
with stability considerations for interior point solvers for reduced
computational complexity for nonlinear model predictive control
applications.
\noindent\textbf{Notation.} Let \(\mathbb{R}:= (-\infty,\infty)\),
\(\mathbb{R}_{\geq 0}:= [0,\infty)\),
\(\mathbb{Z}_{\geq 0}:= \{0,1,2,\ldots\}\) and
\(\mathbb{Z}_{>0}:= \{1,2,\ldots\}\). We use \((x,y)\) to denote
\([x^\top,y^\top]^\top\), where
\((x,y) \in \mathbb{R}^n\times\mathbb{R}^m\) and
\(n,m\in\mathbb{Z}_{>0}\). A function
\(\chi : \mathbb{R}_{\geq 0}\to \mathbb{R}_{\geq 0}\) is of class
\(\mathcal{K}\) if it is continuous, zero at zero and strictly
increasing, and it is of class \(\mathcal{K}_\infty\) if it is of class
\(\mathcal{K}\) and unbounded. A continuous function
\(\beta: \mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to \mathbb{R}_{\geq 0}\)
is of class \(\mathcal{KL}\) when \(\beta(\cdot,t)\) is of class
\(\mathcal{K}\) for any \(t\geq0\) and \(\beta(s,\cdot)\) is decreasing
to 0 for any \(s\geq0\). The notation \(\mathbb{I}\) stands for the
identity map from \(\mathbb{R}_{\geq 0}\) to \(\mathbb{R}_{\geq 0}\).
For any sequence \(\bm{u}=[u_0,u_1,\dots]\) of length
\(d\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\) where
\(u_i \in \mathbb{R}^m\), \(i \in \{0,\ldots,d\}\), and any
\(k\in\{0,\ldots,d\}\), we use \(\bm{u}|_k\) to denote the first \(k\)
elements of \(\bm{u}\), i.e.~\(\bm{u}|_k = [u_0,\dots,u_{k-1}]\) and
\(\bm{u}|_0=\varnothing\) by convention. Let \(g\ :\
\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\), we use \(g^{(k)}\) for the
composition of function \(g\) with itself \(k\) times, where
\(k\in\mathbb{Z}_{\geq 0}\), and \(g^{(0)}=\mathbb{I}\).
\hypertarget{problem-statement}{
\section{Problem Statement}\label{problem-statement}}
Consider the system
\begin{equation}\label{eq:sys} x^+ = f(x,u),\end{equation} with state
\(x \in \mathbb{R}^n\), control input \(u \in \mathcal{U}(x)\) where
\(\mathcal{U}(x)\subseteq\mathbb{R}^m\) is the set of admissible inputs,
and \(f: \mathcal{W} \to \mathbb{R}^n\) where
\(\mathcal{W}:=\{(x,u) : x\in\mathbb{R}^n, u\in\mathcal{U}(x)\}\). We
use \(\phi(k,x,\bm{u}|_ k)\) to denote the solution to system
(\ref{eq:sys}) at time \(k \in \mathbb{Z}_{\geq 0}\) with initial
condition \(x\) and inputs sequence
\(\bm{u}|_k=[u_{0},u_{1},\ldots,u_{k-1}]\), with the convention
\(\phi(0,x,\bm{u}|_0)=x\).
We consider the infinite-horizon cost \begin{equation}
\J[\infty](x,\bm{u}):=\sum\limits_{k=0}^{\infty}\ell(\phi(k,x,\bm{u}|_k),u_k),
\label{eq:Jinfty}
\end{equation} where \(x\in\mathbb{R}^n\) is the initial state,
\(\bm{u}\) is an infinite sequence of admissible inputs,
\(\ell :\mathcal{W}\to \mathbb{R}_{\geq 0}\) is the stage cost. Finding
an infinite sequence of inputs which minimizes (\ref{eq:Jinfty}) given
\(x\in\mathbb{R}^n\) is very difficult in general. Therefore, we instead
generate sequences of admissible inputs that \emph{nearly} minimize
(\ref{eq:Jinfty}), in a sense made precise below, while ensuring the
stability of the closed-loop system. For this purpose, we consider VI,
see e.g.~\citep{Bertsekas:12}. VI is an iterative procedure based on
Bellman equation, which we briefly recall next. Assuming the optimal
value function, denoted \(V_\infty\), exists for any
\(x\in\mathbb{R}^n\), the Bellman equation is \begin{equation}
V_{\infty}(x) = \min_{u\in\mathcal{U}(x)} \bigg\{\ell(x,u) + V_{\infty}(f(x,u))\bigg\}. \label{eq:Bellman2}
\end{equation} If we could solve (\ref{eq:Bellman2}) and find
\(V_\infty\), it would then be easy to derive an optimal policy, by
computing the \(\mathop{\mathrm{arg\,min}}\) corresponding to the right
hand-side of (\ref{eq:Bellman2}). However, it is in general very
difficult to solve (\ref{eq:Bellman2}). VI provides an iterative
procedure based on (\ref{eq:Bellman2}) instead, which allows obtaining
value functions (and associated control inputs), which converge to
\(V_\infty\). Hence, given an initial cost function
\(V_{-1}:\mathbb{R}^n\to \mathbb{R}_{\geq 0}\), VI generates a sequence
of value functions \(V_{d}\), \(d\in\mathbb{Z}_{\geq 0}\), for any
\(x\in\mathbb{R}^n\), by iterating \begin{equation}
V_{d}(x) :=\min_{u\in\mathcal{U}(x)} \bigg\{\ell(x,u) + V_{d-1}(f(x,u))\bigg\}. \label{eq:Bellman3}
\end{equation} For any \(d\in\mathbb{Z}_{\geq 0}\), the associated
input, also called policy, is defined as, for any \(x\in\mathbb{R}^n\),
\begin{equation}
u^*_{d}(x) \in \mathop{\mathrm{arg\,min}}_{u\in\mathcal{U}(x)} \bigg\{\ell(x,u) + V_{d-1}(f(x,u))\bigg\}, \label{eq:BellmanVIinput}
\end{equation} which may be set-valued. The convergence of \(V_d\),
\(d\in\mathbb{Z}_{\geq 0}\), to \(V_\infty\) in (\ref{eq:Bellman2}) is
ensured under mild conditions in \citep{Bertsekas:TNNLS}. In the sequel
we make assumptions that ensure that the \(\mathop{\mathrm{arg\,min}}\)
in (\ref{eq:BellmanVIinput}) exists for each \(x\in\mathbb{R}^n\).
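For concreteness, a minimal sketch of one VI backup (\ref{eq:Bellman3}) over a finite set of states and a fixed set of quantized inputs is given below. This is only an illustration of the recursion, not the approximation scheme used later in the example; the transition map \texttt{f}, the stage cost \texttt{ell} and the projection \texttt{nearest} onto the state grid are placeholders to be supplied by the user, and the state-independent input set is a simplification of \(\mathcal{U}(x)\).
\begin{verbatim}
import numpy as np

def vi_backup(V_prev, states, inputs, f, ell, nearest):
    # One backup V_d(x) = min_u { ell(x,u) + V_{d-1}(f(x,u)) } on a grid;
    # nearest(x_next) returns the index of the grid point closest to x_next.
    V_new = np.empty_like(V_prev)
    policy = np.empty(len(states), dtype=int)
    for i, x in enumerate(states):
        costs = [ell(x, u) + V_prev[nearest(f(x, u))] for u in inputs]
        j = int(np.argmin(costs))
        V_new[i], policy[i] = costs[j], j
    return V_new, policy
\end{verbatim}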
In practice, we often stop iterating VI when a stopping criterion is
verified, such as, for instance, when for any \(x\in\mathbb{R}^n\),
\begin{equation} \V[d](x)- \V[d-1](x)\leq \varepsilon, \label{eq:uniformliterature} \end{equation}
where \(\varepsilon\in\mathbb{R}_{>0}\), see, e.g.,
\citep{Bertsekas:12,Sutton,pang2019adaptive,kiumarsi2017h}. However,
this stopping criterion leaves much to be desired in control
applications, for the following reasons: (i) it is not yet established
how \(\varepsilon\) impacts the stability properties of the closed-loop
system; (ii) tools to bound the mismatch between \(\V[d]\) and
\(\V[\infty]\) for this stopping criterion often require a discount
factor in cost function (\ref{eq:Jinfty}), which impacts stability, as
shown in \citep{romain2016,romainAVICDC2019}; (iii) when \(\V[d]\) is
radially unbounded, i.e.~\(\V[d](x)\to\infty\) when \(|x|\to\infty\),
this stopping criterion is in general impossible to verify for all
\(x\in\mathbb{R}^n\). When the system is linear and the cost quadratic,
as in
\citep{ArnoldRiccati84,anderson2007optimal,JIANG20122699,BIAN2016348},
the convergence to the optimal cost function is shown to be quadratic
and often the stopping criterion is instead of the form
\(\V[d](x)- \V[d-1](x)\leq|\varepsilon| |x|^2\). However, the link
between the value of \(\varepsilon\) and resulting near-optimality and
stability guarantees is not established, and in practice it is
implicitly assumed that parameter \(\varepsilon\) is small enough.
We consider VI terminated by a general stopping criterion. That is, for
any \(x\in\mathbb{R}^n\),
\begin{equation}\label{eq:stop} \V[d](x)- \V[d-1](x)\leq c_\text{stop}(\varepsilon,x),\end{equation}
where \(c_\text{stop}(\varepsilon,x)\geq0\) is a stopping function,
which we design and which may depend on state vector \(x\) and a vector
of tuneable parameters \(\varepsilon\in\mathbb{R}^{n_\varepsilon}\) with
\(n_\varepsilon\in\mathbb{Z}_{>0}\). The design of \(c_\text{stop}\) is
explained in Section \ref{vischeme}. In that way, we cover the above
examples as particular cases, namely
\(c_\text{stop}(\varepsilon,x)=|\varepsilon|\) and
\(c_\text{stop}(\varepsilon,x)= |\varepsilon| |x|^2\) and allow
considering more general ones,
e.g.~\(c_\text{stop}(\varepsilon,x)= \max\{|\varepsilon_1|,| \varepsilon_2 | |x|^2\}\)
where \((\varepsilon_1,\varepsilon_2):=\varepsilon\in\mathbb{R}^2\) or
\(c_\text{stop}(\varepsilon,x)= x^\top S(\varepsilon)x\) for some
positive definite matrix \(S(\varepsilon)\) with
\(\varepsilon\in\mathbb{R}^{n_\varepsilon}\) and
\(n_\varepsilon\in\mathbb{Z}_{>0}\). The main novelty of this work is
the provided explicit link between \(c_\text{stop}(\varepsilon,x)\),
near-optimality and stability guarantees. As a result, we can tune
\(\varepsilon\) for the desired near-optimality and stability
properties, and the algorithm stops when the cost (hence, the generated
inputs) is such that these properties are verified.
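To illustrate how the stopping rule (\ref{eq:stop}) enters the algorithm, the following sketch wraps a generic backup (such as the one sketched above) in a loop that terminates as soon as \(\V[d](x)-\V[d-1](x)\leq c_\text{stop}(\varepsilon,x)\) holds at every grid point. The function \texttt{backup} and the array \texttt{c\_stop\_vals} (the stopping function evaluated on the grid) are placeholders; the commented lines recall the uniform and quadratic criteria mentioned above.
\begin{verbatim}
import numpy as np

def vi_with_stopping(V_init, backup, c_stop_vals, d_max=1000):
    # Iterate VI until V_d - V_{d-1} <= c_stop(eps, x) at every grid point;
    # c_stop_vals[i] = c_stop(eps, x_i) is precomputed on the state grid.
    V_prev = V_init
    for d in range(1, d_max + 1):
        V, policy = backup(V_prev)
        if np.all(V - V_prev <= c_stop_vals):
            return V, policy, d
        V_prev = V
    raise RuntimeError("stopping criterion not satisfied within d_max iterations")

# examples of stopping functions from the text, on a grid with x_norm2[i] = |x_i|^2:
# c_stop_vals = abs(eps) * np.ones(n_states)     # c_stop(eps, x) = |eps|
# c_stop_vals = abs(eps) * x_norm2               # c_stop(eps, x) = |eps| |x|^2
\end{verbatim}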
The analysis relies on the next
assumption\footnote{The assumption is stated globally, for any $x\in\mathbb{R}^n$ and $u\in\mathcal{U}(x)$. We leave for future work
the case where the assumption holds on compact sets.} like in e.g.,
\citep{grimm2005,romain2016,granzotto2020}.
\begin{SA}[SA\ref{SAa}]\label{SAa}
There exist $\overline{\alpha}_V,\alpha_W\in \mathcal{K}_\infty$ and continuous function $\sigma: \mathbb{R}^n \to \mathbb{R}_{\geq 0}$ such that the following conditions hold.
\begin{itemize}[itemindent=\widthof{(ii)}]
\item[(i)] For any $x \in \mathbb{R}^n$, there exists an infinite sequence of admissible inputs $\bm{u}^*_{\infty}(x)$, called
\textit{optimal input sequence}, which minimizes (\ref{eq:Jinfty}), i.e.
$\V[\infty](x)=\J[\infty](x,\ustar[\infty](x))$, and $\V[\infty](x) \leq \overline{\alpha}_V(\sigma(x))$.
\item[(ii)] For any $(x,u)\in\mathcal{W}$, $\alpha_W(\sigma(x)) \leq \ell(x,u)$.
$
\square$
\end{itemize}
\end{SA}
Function $\sigma :\mathbb{R}^n\to\mathbb{R}_{\geq 0}$ in SA\ref{SAa} is a ``measuring'' function that we use to define stability, which depends
on the problem. For instance, by defining $\sigma=|\cdot|,\sigma=|\cdot|^2$ or $\sigma:x\mapsto x^\top Q x$ with
$Q=Q^\top>0$, one would be studying the stability of the origin, and by taking $\sigma=|\cdot|_{\cal A}$, one would
study stability of non-empty compact set ${\cal A}\subset \mathbb{R}^n$. General conditions to ensure the first part of item (i), i.e. the fact that $\V[\infty](x)$ is finite for any $x\in\mathbb{R}^{n}$ and the existence of optimal inputs, can be found in \citep{keerthi1985}. The second part of item (i) is related to the stabilizability of system (\ref{eq:sys}) with respect to stage cost $\ell$ in relation to
$\sigma$. Indeed, it is shown in \citep[Lemma 1]{grimm2005} that, for instance, when the stage cost $\ell(x,u)$ is uniformly globally
exponentially controllable to zero with respect to $\sigma$ for system (\ref{eq:sys}), see \citep[Definition 2]{grimm2005}, then
item (i) of SA\ref{SAa} is satisfied. We do not need to know $\V[\infty]$ to guarantee the last inequality in item (i) of SA\ref{SAa}. Indeed, it suffices to find, for any $x\in\mathbb{R}^n$, a sequence of inputs $\bm{u}(x)$, such that the associated infinite-horizon cost verifies $J(x,\bm{u}(x)) \leq \overline{\alpha}_V(\sigma(x))$ for some $\overline{\alpha}_V\in\mathcal{K}_\infty$. Then, since $\V[\infty]$ is the optimal value function, for any $x\in\mathbb{R}^n$, $\V[\infty](x) \leq J(x,\bm{u}(x)) \leq \overline{\alpha}_V(\sigma(x))$. On the other hand, item (ii) of SA\ref{SAa} is a detectability property of the stage cost $\ell$ with
respect to $\sigma$, as when $\ell(x,u)$ is small, so is $\sigma(x)$.
We are ready to explain how to design the stopping criterion in
(\ref{eq:stop}).
\hypertarget{stopping-criterion-design}{
\section{\texorpdfstring{Stopping criterion design
\label{vischeme}}{Stopping criterion design }}\label{stopping-criterion-design}}
\hypertarget{key-observation}{
\subsection{Key observation}\label{key-observation}}
We start with the known observation
\citep{granz2019,bertsekas2005dynamic} that,
given\footnote{The case where $V_{-1}\neq0$ will be investigated in further work.}
\(V_{-1}=0\), at each iteration \(d\in\mathbb{Z}_{\geq 0}\), VI
generates the optimal value function for the finite-horizon cost
\begin{equation}
\J[d](x,\bm{u}_d):= \sum\limits_{k=0}^{d}\ell(\phi(k,x,\bm{u}_d|_k),u_k), \label{eq:J}
\end{equation} where \(\bm{u}_{d}=[u_0,u_1,...,u_d]\) are admissible
inputs. We assume below that the minimum of (\ref{eq:J}) exists with
respect to \(\bm{u}_{d}\) for any \(x\in\mathbb{R}^n\) and
\(d\in\mathbb{Z}_{\geq 0}\).
\begin{SA}[SA\ref{SAb}]\label{SAb}
For every $d\in\mathbb{Z}_{\geq 0}$, $x\in\mathbb{R}^n$, there exists $\ustar[d](x)$ such that \ifnotproof{$\V[d](x) = \J[d](x,\ustar[d]) = \min_{\bm{u}_d} \J[d](x,\bm{u}_d)$.}
\ifproof{\begin{equation}
\V[d](x) = \J[d](x,\ustar[d]) = \min_{\bm{u}_d} \J[d](x,\bm{u}_d).
\label{eq:Vd}\end{equation}}
$
\Box$
\end{SA}
SA\ref{SAb} is for instance verified when \(f\) and \(\ell\) are
continuous and \(\mathcal{U}(x)=\mathcal{U}\) is a compact set. More
general conditions to verify SA\ref{SAb} can be found in
e.g.~\citep{keerthi1985}. For the sake of convenience, we employ the
following notation for the technical aspects of this paper. For any
\(k\in\{0,1,\ldots,d\}\) and \(x\in\mathbb{R}^n\), we denote
\(\ell_d^*(k,x):=\ell(\phi(k,x,\ustar[d](x)|_{k}),u_k)\), where
\(\phi(k,x,\ustar[d](x)|_{k})\) is the solution to system (\ref{eq:sys})
with optimal inputs for cost \(\V[d](x)\), so that
\ifnotproof{$\V[d](x)= \sum\limits_{k=0}^{d}\ell^*_d(k,x)$.}
\ifproof{\begin{equation}\V[d](x)= \sum\limits_{k=0}^{d}\ell^*_d(k,x).\label{eq:lddef}\end{equation} }
The next property plays a key role in the forthcoming analysis.
\begin{proposition} \label{prop:terminal}
For any $x\in\mathbb{R}^n$ and $d\in\mathbb{Z}_{\geq 0}$, $\ell_d^*(d,x)\leq \V[d](x)-\V[d-1](x)$. $
\Box$
\end{proposition}
When the stopping criterion (\ref{eq:stop}) is verified,
i.e.~\(\V[d](x)-\V[d-1](x)\leq c_\text{stop}(\varepsilon,x)\), then
\(\ell_d^*(d,x)\leq c_\text{stop}(\varepsilon,x)\) in view of
Proposition \ref{prop:terminal}. Therefore,
\(c_\text{stop}(\varepsilon,x)\) is an upper-bound on the value of stage
cost \(\ell^*_d(d,x)\). By item (ii) of SA\ref{SAa}, this implies that
we also have an upper-bound for the \emph{$d$-horizon} state measure
\(\sigma(\phi(d,x,\ustar[d](x)|_{d}))\), namely
\(\sigma(\phi(d,x,\ustar[d](x)|_{d}))\leq\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\),
which can be made as small as desired by reducing
\(c_\text{stop}(\varepsilon,x)\), which, again, we design. We exploit
this property to analyse the near-optimality and the stability of the
closed-loop system. Having said that, the challenges are: (i) to show
that condition (\ref{eq:stop}) is indeed verified for any
\(x\in\mathbb{R}^n\) and some \(d\in\mathbb{Z}_{\geq 0}\); (ii) to
select \(c_\text{stop}\) to ensure stability properties when closing the
loop of system (\ref{eq:sys}) with inputs (\ref{eq:BellmanVIinput});
(iii) to study the impact of \(c_\text{stop}\) on the performance, that
is, the cost along solutions, of the closed-loop system.
\hypertarget{satisfaction-of-the-stopping-criterion}{
\subsection{Satisfaction of the stopping
criterion}\label{satisfaction-of-the-stopping-criterion}}
We make the next assumption without loss of generality as we are free to
design \(c_\text{stop}\).
\newtheorem{assumption}{Assumption}
\begin{assumption}\label{a:stop}
One of the next properties is verified.
\begin{enumerate}[itemindent=\widthof{(ii)}]
\item[(i)] For any $\varepsilon\in\mathbb{R}^{n_\varepsilon}$, there is $\underline\epsilon>0$ such that, for any $x\in\mathbb{R}^n$, $c_\text{stop}(\varepsilon,x)\geq\underline\epsilon$.
\item[(ii)] There exist $L,\bar a_V, a_W>0$, such that SA\ref{SAa} holds with $\overline{\alpha}_V(s)\leq\bar a_V s$, $\overline{\alpha}_W(s)\leq\bar a_W s$ and $\alpha_W(s)\geq a_W s$ for any $s\in[0,L]$. Furthermore, for any $\varepsilon\in\mathbb{R}^{n_\varepsilon}$, there is $\underline\epsilon>0$ such that for any $ x\in\mathbb{R}^n$, $c_\text{stop}(\varepsilon,x)\geq\underline\epsilon \sigma(x)$.
$
\Box$
\end{enumerate}
\end{assumption}
Item (i) of Assumption \ref{a:stop} can be ensured by taking
\(c_\text{stop}(\varepsilon,x) = |\varepsilon| + \tilde c_\text{stop}(\varepsilon,x)\)
with \(\tilde c_\text{stop}(\varepsilon,x)\geq 0\) for any
\(x\in\mathbb{R}^n\), \(\varepsilon\in\mathbb{R}^{n_\varepsilon}\),
which covers (\ref{eq:uniformliterature}), to give an example. Item (ii)
of Assumption \ref{a:stop} means that the functions
\(\overline{\alpha}_V,\overline{\alpha}_W,\alpha_W\) in SA\ref{SAa} can
be upper-bounded, respectively lower-bounded, by linear functions on the
interval \([0,L]\). These conditions make it possible to select \(c_\text{stop}\)
such that \(c_\text{stop}(\varepsilon,x)\to0\) when \(\sigma(x)\to0\)
with \(x\in\mathbb{R}^n\), contrary to item (i) of Assumption
\ref{a:stop}; that is, \(c_\text{stop}\) may vanish on the set
\(\{x : \sigma(x)=0\}\). This is important to provide stronger stability
and performance properties for systems whose inputs are given by our VI
scheme as shown in Section \ref{main-results}. Under item (ii) of
Assumption \ref{a:stop}, we can design \(c_\text{stop}\) as, e.g.,
\(c_\text{stop}(\varepsilon,x)= |\varepsilon|\sigma(x)\),
\(c_\text{stop}(\varepsilon,x)= \min\{|\varepsilon_1|,| \varepsilon_2 | |x|^2\}\)
where \((\varepsilon_1,\varepsilon_2)=:\varepsilon\in\mathbb{R}^2\) or
\(c_\text{stop}(\varepsilon,x)= x^\top S(\varepsilon)x\) for some
positive definite matrix \(S(\varepsilon)\) as mentioned before.
The next theorem ensures the existence of \(d\in\mathbb{Z}_{\geq 0}\)
such that, for any \(x\in\mathbb{R}^n\), (\ref{eq:stop}) holds based on
Assumption \ref{a:stop}.
\begin{theorem}\label{theo:terminates}
Suppose Assumption \ref{a:stop} holds. Then, for any $\Delta>0$ there exists $d\in\mathbb{Z}_{\geq 0}$ such that, for any $x\in\{z\in\mathbb{R}^n: \sigma(z) \leq \Delta \}$, (\ref{eq:stop}) holds. Moreover, when item (ii) of Assumption 1 holds with $L=\infty$, there exists $d\in\mathbb{Z}_{\geq 0}$ such that, for any $x\in\mathbb{R}^n$, (\ref{eq:stop}) is satisfied.
$
\Box$
\end{theorem}
Theorem \ref{theo:terminates} guarantees the stopping condition in
(\ref{eq:stop}) is always satisfied by iterating the VI algorithm
sufficiently many times, and that the required number of iterations is
uniform over sets of initial conditions of the form
\(\{x:\sigma(x)\leq \Delta\}\) for given \(\Delta>0\) in general, unless
item (ii) of Assumption \ref{a:stop} holds with \(L=\infty\), in which
case there exists a common, global, \(d\) for any \(x\in\mathbb{R}^n\).
Note that, while the proof of Theorem \ref{theo:terminates} provides a
conservative estimate of \(d\) such that (\ref{eq:stop}) is verified,
\ifnotproof{see \citep{granzottoVIcontrolreport},} this horizon estimate
is not utilized in the stopping criterion, which in turn implies that VI
stops with a smaller horizon in general, as illustrated in Section
\ref{S:example}.
In the following, we denote the cost calculated at iteration \(d\) as
\(\V[\varepsilon](x):=\V[d](x)\), like in
\citep{granzotto2020optimistic}, to emphasize that the cost returned is
parameterized by \(\varepsilon\) via
\(c_\text{stop}(\varepsilon,\cdot)\), and denote by
\(\ustar[\varepsilon](x)\) an associated optimal sequence of inputs,
i.e.~
\begin{equation}\V[\varepsilon](x) = \J[d](x,\ustar[\varepsilon](x)).\label{eq:V}\end{equation}
We are ready to state the main results.
\hypertarget{main-results}{
\section{Main results}\label{main-results}}
In this section, we analyze the near-optimality properties of VI with
the stopping criterion in (\ref{eq:stop}). We then provide conditions
under which system (\ref{eq:sys}), whose inputs are generated by
applying the state-feedback \(\ustar[\varepsilon](x)\) in
receding-horizon fashion, exhibits stability properties. Afterwards, the
cost function (\ref{eq:J}) along the solutions of the induced
closed-loop system is analysed, which we refer to as the performance or
running cost \citep{gruneperformance}.
\hypertarget{relationship-between-vvarepsilon-and-vinfty}{
\subsection{\texorpdfstring{Relationship between \(\V[\varepsilon]\) and
\(\V[\infty]\)
\label{openloopoptimality}}{Relationship between \textbackslash V{[}\textbackslash varepsilon{]} and \textbackslash V{[}\textbackslash infty{]} }}\label{relationship-between-vvarepsilon-and-vinfty}}
A key question is how far \(\V[\varepsilon]\) is from \(\V[\infty]\)
when we stop VI using (\ref{eq:stop}). Since \(\ell(x,u)\) is not
constrained to take values in a given compact set, and we do not
consider discounted costs, the tools found in the dynamic programming
literature to analyze this relationship are no longer applicable, see
\citep{Bertsekas:12}. We overcome this issue by exploiting SA\ref{SAa},
and adapting the results of \citep{granzotto2020} with the stopping
criterion and Proposition \ref{prop:terminal} in the next theorem.
\begin{theorem} \label{Vestimates}
Suppose Assumption \ref{a:stop} holds. For any $\varepsilon\in \mathbb{R}^{n_\varepsilon}$, $\Delta>0$ and $x \in\{z\in\mathbb{R}^n,\sigma(z)\leq\Delta \}$,
\begin{equation}\label{eq:Vestimates}
\V[\varepsilon](x) \leq \V[\infty](x) \leq \V[\varepsilon](x)+v_{\varepsilon}(x),
\end{equation}
where
$v_\varepsilon(x):=\overline{\alpha}_V\circ\alpha_W^{-1}(c_{\text{stop}}(\varepsilon,x))$ with $\overline{\alpha}_V,\alpha_W$ from SA\ref{SAa}. Moreover, when item (ii) of Assumption \ref{a:stop} holds with $L=\infty$, we may take $\Delta=\infty$ and
$v_{\varepsilon}(x)\leq \frac{\bar a_V}{a_W} c_\text{stop}(\varepsilon,x)$. $
\square$
\end{theorem}
The lower-bound in (\ref{eq:Vestimates}) trivially holds from the
optimality of \(\V[\varepsilon](x)=\V[d](x)\) for some \(d<\infty\), and
the fact that \(\ell(x,u)\geq0\) for any \(x\in\mathbb{R}^n\) and
\(u\in\mathcal{U}(x)\). The upper-bound, on the other hand, implies that
the infinite-horizon cost is at most \(v_{\varepsilon}(x)\) away from
the finite-horizon \(\V[\varepsilon](x)\). The error term
\(v_{\varepsilon}(x)\) is small when \(c_\text{stop}(\varepsilon,x)\) is
small as \(\overline{\alpha}_V\circ\alpha_W^{-1}\in\mathcal{K}_\infty\).
Given that we know \(\overline{\alpha}_V,\alpha_W^{-1}\) a priori, and
we are free to design \(c_\text{stop}\) as wanted, we can therefore
directly make \(\V[\varepsilon](x)\) as close as desired to
\(\V[\infty](x)\) by adjusting \(c_\text{stop}\); the price to pay will
be more computations. Moreover, when item (ii) of Assumption \ref{a:stop} holds with
\(L=\infty\), inequality (\ref{eq:Vestimates}) is verified for every
\(x\in\mathbb{R}^n\).
\hypertarget{stability}{
\subsection{Stability}\label{stability}}
We now consider the scenario where system (\ref{eq:sys}) is controlled
in a receding-horizon fashion by inputs that calculate cost
(\ref{eq:V}). That is, at each time instant \(k\in\mathbb{Z}_{\geq 0}\),
the first element of optimal sequence \(\ustar[\varepsilon](x_k)\),
calculated by VI, is then applied to system (\ref{eq:sys}). This leads
to the closed-loop system
\begin{equation}\label{eq:autosys} x^+ \in f(x,\mathcal{U}^*_{\varepsilon}(x)) =: F^*_{\varepsilon}(x), \end{equation}
where \(f(x,\mathcal{U}^*_{\varepsilon}(x))\) is the set
\(\{f(x,u) : u \in \mathcal{U}^*_{\varepsilon}(x)\}\) and
\(\mathcal{U}_{\varepsilon}^*(x):= \big\{ u_0 : \exists u_1,\ldots,u_{d} \in \mathcal{U}(x) \text{ such that } \V[\varepsilon](x)=\J[d](x,[u_0,\ldots,u_{d}])\big\}\)
is the set of the first input of \(d\)-horizon optimal input sequences
at \(x\), with \(d\) as defined in (\ref{eq:stop}). We denote by
\(\phi(k,x)\) a solution to (\ref{eq:autosys}) at time
\(k\in\mathbb{Z}_{\geq 0}\) with initial condition \(x\in\mathbb{R}^n\),
with some abuse of notation.
We assume next that \(c_\text{stop}\) can be made as small as desired
by taking \(|\varepsilon|\) sufficiently small. As we are free to design
\(c_\text{stop}\) as wanted, this is without loss of generality.
\begin{assumption}\label{cstop2}
There exists $\theta:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$, with $\theta(\cdot,s)\in\mathcal{K}$ and $\theta(s,\cdot)$ non-decreasing
for any $s>0$, such that $c_\text{stop}(\varepsilon,x)\leq\theta(|\varepsilon|,\sigma(x))$ for any $x\in\mathbb{R}^n$ and
$\varepsilon\in\mathbb{R}^{n_\varepsilon}$. $
\Box$
\end{assumption}
Example of functions \(c_\text{stop}\) which satisfy Assumption
\ref{cstop2} are
\(c_{\text{stop}}(\varepsilon,x)=|\varepsilon|\sigma(x)\),
\(c_{\text{stop}}(\varepsilon,x)=\max\{|\varepsilon_1| \alpha(\sigma(x)),|\varepsilon_2|\}\)
for \(\varepsilon=(\varepsilon_1,\varepsilon_2)\in\mathbb{R}^2\),
\(\alpha\in\mathcal{K}\) and \(x\in\mathbb{R}^n\) to give a few.
The next theorem provides stability guarantees for system
(\ref{eq:autosys}).
\begin{theorem}\label{algostab}
Consider system (\ref{eq:autosys}) and suppose $c_\text{stop}$ verifies Assumptions \ref{a:stop} and \ref{cstop2}. There exists $\beta
\in\mathcal{KL}$ such that, for any $\delta,\Delta>0$, there exists $\varepsilon^*>0$ such that for any $x \in \{z \in\mathbb{R}^n
\, : \, \sigma(z) \leq \Delta \}$ and $\varepsilon\in\mathbb{R}^{n_\varepsilon}$ with $|\varepsilon| <\varepsilon^*$,
any solution $\phi(\cdot,x)$ to system (\ref{eq:autosys}) satisfies, for all $k\in \mathbb{Z}_{\geq 0}$,
$\sigma(\phi(k,x)) \leq \max \{\beta(\sigma(x),k),\delta \}$. $
\square$
\end{theorem}
Theorem \ref{algostab} provides a uniform semiglobal practical stability
property for the set \(\{z : \sigma(z) = 0 \}\). This implies that
solutions to (\ref{eq:autosys}), with initial state \(x\) such that
\(\sigma(x)\leq\Delta\), where \(\Delta\) is any given (arbitrarily
large) strictly positive constant, will converge to the set
\(\{z : \sigma(z) \leq \delta\}\), where \(\delta\) is any given
(arbitrarily small) strictly positive constant, by taking
\(\varepsilon^*\) sufficiently close to \(0\), thereby making
\(c_\text{stop}\) sufficiently small. An explicit formula for
\(\varepsilon^*\) is given in the proof of Theorem
\ref{algostab}\ifproof{,}\ifnotproof{ in \citep{granzottoVIcontrolreport},}
which is nevertheless subject to some conservatism. The result should
rather be appreciated qualitatively, in the sense that Theorem
\ref{algostab} holds for small enough \(\varepsilon^*\).
Under stronger assumptions, global exponential stability is ensured as
shown in the next corollary.
\begin{corollary}\label{Yges}
Suppose item (ii) of Assumption \ref{a:stop} holds and that $c_\text{stop}(\varepsilon,x)\leq
|\varepsilon|\sigma(x)$ for any $x\in\mathbb{R}^n$ and $\varepsilon\in\mathbb{R}^{n_\varepsilon}$. Let $\varepsilon^*>0$ be such that
$ \varepsilon^* < \frac{a_W^2}{\bar a_V}$.
Then, for any $x \in \mathbb{R}^n$ and $\varepsilon\in\mathbb{R}^{n_\varepsilon}$ such that $|\varepsilon|\leq\varepsilon^*$,
any solution $\phi(\cdot,x)$ to system (\ref{eq:autosys}) satisfies $\sigma(\phi(k,x)) \leq \frac{\bar a_V}{a_W} \left(1-\frac{a_W^2-|\varepsilon| \bar a_V}{ \bar a_V a_W}\right)^k\sigma(x)$
for all $k \in \mathbb{Z}_{\geq 0}$. $
\square$
\end{corollary}
Corollary \ref{Yges} ensures a uniform global exponential stability
property of set \(\{x:\sigma(x)=0\}\) for system (\ref{eq:autosys}).
Indeed, in Corollary \ref{Yges}, the decay rate is given by
\(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\) and takes values
in \((0,1)\) as
\(|\varepsilon| \leq \varepsilon^* < \frac{a_W^2}{\bar a_V}\) as
required by Corollary \ref{Yges}, hence
\(\left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^k\to0\) as
\(k\to\infty\). Furthermore, the estimated decay rate can be tuned via
\(\varepsilon\) from \(1\) to \(1-\frac{a_W}{\bar a_V}\) as
\(|\varepsilon|\) decreases to zero. We can therefore make the guaranteed decay
rate smaller, i.e.~the convergence faster, by adjusting \(c_\text{stop}\), as in Theorem \ref{Vestimates}.
Hence, by tuning \(\varepsilon\), we can tune how fast the closed-loop
converges to the attractor \(\{x:\sigma(x)=0\}\), and the price to pay
is more computations in general.
\hypertarget{policy-performance-guarantees}{
\subsection{\texorpdfstring{Policy performance guarantees
\label{near-opti}}{Policy performance guarantees }}\label{policy-performance-guarantees}}
In Section \ref{openloopoptimality}, we have provided relationships
between the finite-horizon cost \(\V[\varepsilon]\) and the
infinite-horizon cost \(\V[\infty]\). This is an important feature of
VI, but this does not directly provide us with information on the actual
value of the cost function (\ref{eq:Jinfty}) along solutions to
(\ref{eq:autosys}). Therefore, we analyse the running cost
\citep{gruneperformance} defined as \begin{equation}
\begin{split}
\mathcal{V}_{\varepsilon}^{\text{run}}(x) := \Bigg\{\sum_{k=0}^\infty
\ell_{\mathcal{U}^*_{\varepsilon}(\phi(k,x))}(\phi(k,x)) : \phi(\cdot,x)\text{ is a solution to (\ref{eq:autosys})}\Bigg\},
\label{eq:Vrun}
\end{split}
\end{equation} where
\(\ell_{\mathcal{U}^*_{\varepsilon}(\phi(k,x))}(\phi(k,x))\) is the
actual stage cost incurred at time step \(k\). It has to be noted that
\(\mathcal{V}_{\varepsilon}^\text{run}(x)\) is a set, since solutions of
(\ref{eq:autosys}) are not necessarily unique. Each element
\(V_{\varepsilon}^{\text{run}}(x) \in \mathcal{V}_{\varepsilon}^{\text{run}}(x)\)
corresponds then to the cost of a solution of (\ref{eq:autosys}).
Clearly, \(V_\varepsilon^{\text{run}}(x)\) is not necessarily bounded,
as the stage costs may not decrease to 0 in view of Theorem
\ref{algostab}. Indeed, only practical convergence is ensured in Theorem
\ref{algostab} in general. On the other hand, when the set
\(\{x\in\mathbb{R}^n : \sigma(x)=0\}\) is globally exponentially stable
as in Corollary \ref{Yges}, the elements of
\(\mathcal{V}_{\varepsilon}^{\text{run}}(x)\) in (\ref{eq:Vrun}) are
bounded and satisfy the next property.
\begin{theorem}\label{Vrunestimates}
Consider system (\ref{eq:autosys}) and suppose the conditions of Corollary \ref{Yges} hold. For any $\varepsilon$ such that
$|\varepsilon|<\varepsilon^*$, $x \in \mathbb{R}^n$, and $V_{\varepsilon}^{\text{run}}(x) \in
\mathcal{V}_{\varepsilon}^{\text{run}}(x)$, it follows that\ifnotproof{ $ \V[\infty](x)\leq V_\varepsilon^{\text{run}}(x) \leq \V[\infty](x) + w_{\varepsilon}\sigma(x)$},
\ifproof{\begin{equation}
\V[\infty](x)\leq V_\varepsilon^{\text{run}}(x) \leq \V[\infty](x) + w_{\varepsilon}\sigma(x),
\label{eq:Vrunestimates}
\end{equation}}
with $ w_{\varepsilon}:=\displaystyle\frac{\bar a_V^3}{a_W}\frac{|\varepsilon|}{a_W^2-\bar a_V|\varepsilon|}$, where the constants come from Corollary \ref{Yges}. $
\square$
\end{theorem}
The inequality \(\V[\infty](x) \leq V^{\text{run}}_{\varepsilon}(x)\) of
Theorem \ref{Vrunestimates} directly follows from the optimality of
\(\V[\infty]\). The inequality
\(V_\varepsilon^{\text{run}}(x) \leq \V[\infty](x) + w_{\varepsilon}\sigma(x)\)
relates the running cost
\(V_{\varepsilon}^\text{run}(x)\) to the infinite-horizon cost at state
\(x\), \(\V[\infty](x)\), and
confirms the intuition coming from Theorem \ref{Vestimates} that a
smaller stopping criterion leads to tighter near-optimality guarantees.
That is, when \(|\varepsilon|\to0\), \(w_{\varepsilon}\to0\) and
\(V^{\text{run}}_{\varepsilon}(x)\to\V[\infty](x)\) for any
\(x\in\mathbb{R}^n\), provided that Corollary \ref{Yges} holds. In
contrast with Theorem \ref{Vestimates}, stability of system
(\ref{eq:autosys}) is essential in Theorem \ref{Vrunestimates}. Indeed,
the term \(\scriptstyle\frac{1}{a_W^2-\bar a_V |\varepsilon|}\) in the
expression of \(w_\varepsilon\) shows that the running cost is large
when \(|\varepsilon|\) is close to \(\frac{a_W^2}{\bar a_V}\), hence,
when stability is not guaranteed, the running cost might be unbounded.
\hypertarget{illustrative-example}{
\section{\texorpdfstring{Illustrative Example
\label{S:example}}{Illustrative Example }}\label{illustrative-example}}
We consider the discrete cubic integrator, also seen in
\citep{grimm2005,granzotto2020}, which is given by
\ifnotproof{$(x_1^+,x^+_2)= (x_1+u,x_2+u^3)$}\ifproof{\begin{equation}
\begin{split}
x_1^+&=x_1+u\\
x_2^+&=x_2+u^3, \label{eq:cubicsys}
\end{split}
\end{equation}} where
\((x_1,x_2):=x\in\mathbb{R}^2\) and \(u\in\mathbb{R}\). Let
\(\sigma(x)=|x_1|^3+|x_2|\) and consider cost (\ref{eq:J}) with
\(\ell(x,u)=|x_1|^3+|x_2|+|u|^3\) for any
\((x,u)\in\mathbb{R}^2\times\mathbb{R}\). It is shown in
\citep{granzotto2020} that SA\ref{SAa} holds with
\(\overline{\alpha}_V=14\mathbb{I}\) and \(\alpha_W:=\mathbb{I}\).
Because it is notoriously difficult to exactly compute \(\V[d](x)\) and
associated sequence of optimal inputs for every \(x\in\mathbb{R}^2\), we
use an approximate scheme. In particular, we rely on a simple finite
difference approximation, with \(N=340^2\) points equally distributed in
\([-10,10]\times[-10^3,10^3]\) for the state space or, equivalently,
\(\{x\in\mathbb{R}^n : \sigma(x) \leq 2000\}\), and \(909\) equally
distributed quantized inputs in \([-20,20]\) centered at 0. We consider
three types of stopping criteria for which \(\varepsilon\) is a scalar.
For each stopping criterion, we discuss the type of guaranteed stability
and we provide in Table \ref{ExampleSTOPtable} the corresponding horizon
for different values of \(\varepsilon\), which is related to the
computation cost. Then, for each horizon, we give in Table
\ref{ExampleVIqualitable} estimates of the running cost for initial
condition \(x=(10,-10^3)\), by computing the sum in \eqref{eq:Vrun} up
to \(k=40\) instead of \(k=\infty\), as well as the value of
\(\sigma(\phi(40,x))\) to evaluate the convergence accuracy of the
corresponding policy.
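A rough sketch of how such an approximation can be implemented is given below; it is only meant to illustrate the ingredients (dynamics, stage cost, measuring function, state grid, quantized inputs, relative stopping criterion) and uses a much coarser grid and a simple nearest-neighbour projection, so it is not expected to reproduce the exact figures reported in the tables.
\begin{verbatim}
import numpy as np

# discrete cubic integrator and problem data of this example
f   = lambda x, u: (x[0] + u, x[1] + u**3)
ell = lambda x, u: abs(x[0])**3 + abs(x[1]) + abs(u)**3

# coarse grids (the paper uses 340^2 states and 909 inputs)
x1_grid = np.linspace(-10, 10, 41)
x2_grid = np.linspace(-1e3, 1e3, 41)
inputs  = np.linspace(-20, 20, 91)
sigma_grid = np.add.outer(np.abs(x1_grid)**3, np.abs(x2_grid))  # sigma(x) on the grid

def nearest(x):
    # project a successor state onto the grid by simple snapping
    return (np.abs(x1_grid - x[0]).argmin(), np.abs(x2_grid - x[1]).argmin())

V, eps = np.zeros((x1_grid.size, x2_grid.size)), 0.01            # V_{-1} = 0
for d in range(1, 100):
    V_new = np.empty_like(V)
    for i, x1 in enumerate(x1_grid):
        for j, x2 in enumerate(x2_grid):
            V_new[i, j] = min(ell((x1, x2), u) + V[nearest(f((x1, x2), u))]
                              for u in inputs)
    if np.all(V_new - V <= eps * sigma_grid):   # relative criterion |eps|*sigma(x)
        break
    V = V_new
\end{verbatim}
With the relative criterion, the loop is expected to terminate after a handful of iterations, in line with Table \ref{ExampleSTOPtable}.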
We first take the uniform stopping criterion
as in (\ref{eq:uniformliterature}), like in,
e.g., \citep{Bertsekas:12,Sutton,pang2019adaptive,kiumarsi2017h,derongliu2015},
i.e.~\(c_\text{stop}(\varepsilon,x):=|\varepsilon|,\) with different
values of \(\varepsilon\). In this case, we have no global exponential
stability or performance guarantees like in Corollary \ref{Yges} and
Theorem \ref{Vrunestimates} a priori. Only near-optimal guarantees as in
Theorem \ref{Vestimates} and semiglobal practical stability as in
Theorem \ref{algostab} hold. For instance, by taking
\(\varepsilon=0.01\), Theorem \ref{Vestimates} holds with
\(v_\varepsilon(x) = 14 \cdot 0.01 = 0.14\) for any
\(x\in\mathbb{R}^n\).
We also consider the following relative stopping criterion, for any
\(x\in\mathbb{R}^n\) and \(\varepsilon\in\mathbb{R}\),
\(c_\text{stop}(\varepsilon,x):=|\varepsilon| \sigma(x)\). The
exponential stability of Corollary \ref{Yges} holds for any
\(\varepsilon\in\mathbb{R}\) such that
\(|\varepsilon|<\frac{a_W^2}{\bar a_V} = \frac{1}{14}\) in this case.
Moreover, we have near-optimality and performance properties as in
Theorems \ref{Vestimates} and \ref{Vrunestimates}, which were not
available for the previous stopping criterion
\(c_\text{stop}(\varepsilon,x)=|\varepsilon|\). Moreover, for
\(\varepsilon=0.01<\frac{1}{14}\), Theorem \ref{Vestimates} holds with
\(v_\varepsilon(x) = 14 \cdot 0.01 \cdot \sigma(x) = 0.14 \sigma(x)\)
for any \(x\in\mathbb{R}^n\), which is small when \(\sigma(x)\) is
small, and vice versa. Compared to the previous stopping criterion,
which leads to constant guaranteed near-optimality bound, here we have
better guarantees when \(\sigma(x)\) is small (and worse ones when
\(\sigma(x)\) is large). We observe fewer computations and better a
priori near-optimality properties for states near the attractor,
i.e.~when \(\sigma(x)<1\), compared to the previous stopping
criterion. We finally consider the mixed stopping criterion
\(c_\text{stop}(\varepsilon,x):=|\varepsilon| \min\{\sigma(x),1\}\),
which provides better near-optimality guarantees than both considered
stopping criteria. We see from Table \ref{ExampleSTOPtable}, and Table
\ref{ExampleVIqualitable}, that by increasing the number of iterations, we usually
obtain smaller and thus better running costs as well as tighter
convergence properties.
Compared to previous work \citep{granzotto2020}, where stability
properties of (approximate) value iteration are given, we require a
smaller number of iterations. Indeed, in view of
\citep[Corollary 2]{granzotto2020},
\(d\geq\bar d= \floor*{\frac{0-\ln 14^2}{\ln 13 -\ln 14}}= 71\). Of
course, this analysis is conservative and a different derivation of
\(\overline{\alpha}_V\) might provide different bounds on \(\bar d\).
Here, as the algorithm is free to choose the required number of
iterations via the stopping criterion, we significantly reduce its
conservatism. This induces smaller computational complexity, as,
e.g.~for \(\varepsilon=0.01<\frac{1}{14}\), exponential stability is
ensured with the stopping criterion verified at \(d=6\), that is,
\(8.5\%\) of iterations required by the lower bound \(\bar d =71\) of
\citep{granzotto2020}.
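For completeness, the bound quoted above follows from a direct evaluation,
\begin{equation*}
\bar d \;=\; \left\lfloor \frac{-\ln 14^{2}}{\ln 13-\ln 14}\right\rfloor \;=\; \left\lfloor \frac{\ln 196}{\ln (14/13)}\right\rfloor \;=\; \left\lfloor 71.2\right\rfloor \;=\; 71,
\end{equation*}
so the \(d=6\) iterations at which the stopping criterion is verified indeed amount to roughly \(6/71\approx8.5\%\) of this conservative offline estimate.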
\begin{table}
\small
\begin{center}
\begin{tabular}{
r l
| c c c c c c c
}
\bottomrule
& & \multicolumn{7}{c}{$\bm{\varepsilon}$} \\[-0.1em]
& & 10 & 0.75 & 0.1 & 0.075& 0.05& 0.025 & 0.005 \\
\bottomrule
\multirow{3}{*}{$\bm{c_\text{stop}(\varepsilon,x):}$} & $\bm{|\varepsilon|}$& $d=6$& $d=7$& $d=8$& $d=8$& $d=8$ &$d=8$ & $d=9$ \\
& $\bm{|\varepsilon|\sigma(x)}$ & $d=0$ & $d=1$& $d=3$& $d=4$& $d=5$ &$d=6$ & $d=7$ \\
& $\bm{|\varepsilon|\min\{\sigma(x),1\}}$ & $d=6$& $d=7$& $d=8$& $d=8$& $d=8$ &$d=8$ & $d=9$ \\\bottomrule
\end{tabular}
\caption{Required iterations to fulfill each stopping criteria for $N=340^2$ points equally distributed in $\{z\in\mathbb{R}^n : \sigma(z) \leq 2000\}$. \label{ExampleSTOPtable}}
\end{center}
\normalsize
\end{table}
\begin{table}
\small
\begin{center}
\begin{tabular}{
c
| c c c c c c c c c
}
\bottomrule
& \multicolumn{9}{c}{$\bm{d}$} \\[-0.1em]
& 0 & 1 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\bottomrule
$\bm{V^\text{run}_d(x)}$ & 77313 & 45497 & 19931 & 19965 & 19802 & 20090 & 20359 & 20261 & 20261 \\
$\bm{\sigma(\phi(40,x))}$ & 1982 & 1138 & 2.56 & 2.84 & 1.71 & 2.25 & 1.84 & 1.62 & 1.62 \\ \bottomrule
\end{tabular}
\caption{Estimation of the running cost $V^\text{run}_d(x)$ for $x=(10,-10^3)$ and the value of $\sigma(\phi(40,x))$. \label{ExampleVIqualitable}}
\end{center}
\normalsize
\end{table}
\section{Concluding remarks}\label{conclusion}\label{concluding-remarks}
Future work includes relaxing the initial condition for VI and the main
assumptions. Another direction is extending the work to stochastic
problems and online algorithms, with the final goal of
stability-based computational-performance tradeoffs in reinforcement
learning.
\ifproofs
\section{Proofs}\label{sec:proofs}
\subsection{Proof of Proposition \ref{prop:terminal} }
Let \(x\in\mathbb{R}^n\) and \(d\in\mathbb{Z}_{>0}\). Since
\(\V[d-1](x)\) is the optimal value function associated to the
\((d-1)\)-horizon cost (\ref{eq:Vd}), it follows
\(\V[d-1](x)\leq \sum\limits_{k=0}^{d-1}\ell^*_d(k,x)\), and by
definition of \(\ell_d^*\),
\(\V[d](x)= \sum\limits_{k=0}^{d}\ell^*_d(k,x)\). Hence
\(\ell^*_d(d,x)=\sum\limits_{k=0}^{d}\ell^*_d(k,x)-\sum\limits_{k=0}^{d-1}\ell^*_d(k,x) \leq \V[d](x)- \V[d-1](x)\),
which gives the desired result.
\subsection{Proof of Theorem \ref{theo:terminates}}
We first consider the case when item (i) of Assumption \ref{a:stop}
holds. Let \(\Delta>0\) and \(x\in\mathbb{R}^n\) such that
\(\sigma(x)\leq \Delta\), \(\varepsilon\in \mathbb{R}^{n_\varepsilon}\)
and \(\underline\epsilon>0\) as in item (i) of Assumption \ref{a:stop}.
In view of \citep[Theorem 3]{granzotto2020} with \(\gamma=1\), it
follows that
\begin{equation}\V[d](x) \leq \V[\infty](x) \leq \V[d](x) + \overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d}\circ\overline{\alpha}_Y(\sigma(x)),\end{equation}
for all \(d\in\mathbb{Z}_{>0}\) and \(x\in\mathbb{R}^n\), where
\(\overline{\alpha}_Y:=\overline{\alpha}_V\) and
\(\underline{\alpha}_Y,{\alpha}_Y:=\alpha_W\). Hence
\(-\V[d-1](x) - \overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\sigma(x))\leq -\V[\infty](x)\)
and since \(\V[d](x)\leq \V[\infty](x)\), we derive that
\begin{equation}\V[d](x) -\V[d-1](x) - \overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\sigma(x)) \leq \V[\infty](x)-\V[\infty](x)=0,\end{equation}
thus
\begin{equation}\label{eq:diff} \V[d](x) -\V[d-1](x) \leq \overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\sigma(x)).\end{equation}
When \(\sigma(x)=0\),
\(\V[d](x)-\V[d-1](x)\leq\overline{\alpha}_V(0)=0\). Hence,
\(\V[d](x)- \V[d-1](x)\leq c_\text{stop}(\varepsilon,x)\) for any
\(d\in\mathbb{Z}_{>0}\) since \(c_{\text{stop}}(\varepsilon,x)\geq0\).
When \(\sigma(x)>0\),
\(\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\sigma(x))\)
is upper-bounded by
\(\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\Delta)\),
since \(\sigma(x)\leq \Delta\) and the considered function is
non-decreasing. The term
\(\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\Delta)\)
can be made arbitrarily close to \(0\) by increasing \(d\), according to
\citep[Lemma 3]{granzotto2020} as \(\gamma=1\) and
\(\sigma(x)\leq\Delta\). In particular, we take \(d^*\) sufficiently
large such that
\(\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d^*-1}\circ\overline{\alpha}_Y(\Delta)\leq\underline\epsilon\)
where \(\underline\epsilon\) comes from item (i) of Assumption
\ref{a:stop} and is such that
\(\underline\epsilon\leq c_{\text{stop}}(\varepsilon,x)\). Hence
\(\V[d](x) -\V[d-1](x)\leq\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\sigma(x))\leq c_{\text{stop}}(\varepsilon,x)\)
for any \(d\geq d^*\), where \(d^*\) depends a priori on \(\Delta\) but
not on \(x\). We have proved the desired result.
Consider now the case when item (ii) of Assumption \ref{a:stop} holds
with \(L<\infty\). Let \(\Delta>0\) and \(x\in\mathbb{R}^n\) be such that
\(\sigma(x)\leq \Delta\), let \(\varepsilon\in \mathbb{R}^{n_\varepsilon}\),
and let \(\underline\epsilon>0\) be as in item (ii) of Assumption \ref{a:stop},
so that
\(\underline\epsilon \sigma(x)\leq c_\text{stop}(\varepsilon,x)\). It
follows from the inequalities of item (ii) of Assumption \ref{a:stop}
that there exists some \(\delta>0\) such that
\(\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d}\circ\overline{\alpha}_Y(s)\leq \frac{\bar a_V^2}{a_W} \left(1-\frac{a_W}{\bar a_V}\right)^{d} s\)
for any \(s\in[0,\delta]\), see
\citep[proof of Corollary 1, equation (47)]{granzotto2020}. Note that
\(1-\frac{a_W}{\bar a_V}\in(0,1)\), hence
\(\left(1-\frac{a_W}{\bar a_V}\right)^{d-1}\) can be made as small as
desired. Therefore, there exists \(\bar d\) such that
\(\frac{\bar a_V^2}{a_W} \left(1-\frac{a_W}{\bar a_V}\right)^{\bar d-1} \leq \underline\epsilon\),
where \(\underline\epsilon\) comes from item (ii) of Assumption
\ref{a:stop}. Thus, when \(\sigma(x)\in[0,\delta]\) and
in view of (\ref{eq:diff}), it follows that
\(\V[d](x) -\V[d-1](x) \leq \overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d-1}\circ\overline{\alpha}_Y(\sigma(x)) \leq \frac{\bar a_V^2}{a_W} \left(1-\frac{a_W}{\bar a_V}\right)^{d-1} \sigma(x) \leq \underline\epsilon \sigma(x) \leq c_\text{stop}(\varepsilon,x)\)
for any \(d\geq \bar d\). When \(\sigma(x)\in[\delta,\Delta]\), we have
\(c_\text{stop}(\varepsilon,x) \geq \underline\epsilon \delta>0\), and
we recover \(d^*\) such that
\(\V[d](x) -\V[d-1](x)\leq c_{\text{stop}}(\varepsilon,x)\) for
\(\sigma(x)\in[\delta,\Delta]\) by applying the steps made above, for
when item (i) of Assumption \ref{a:stop} holds. Therefore, it follows
that, for any \(d\geq\max\{\bar d, d^*\}\),
\(\V[d](x) -\V[d-1](x)\leq c_{\text{stop}}(\varepsilon,x)\) holds for
any \(\sigma(x)\in[0,\Delta]\). We have proved the desired result.
Suppose now that item (ii) of Assumption \ref{a:stop} holds with
\(L=\infty\). Let \(x\in\mathbb{R}^n\),
\(\varepsilon\in \mathbb{R}^{n_\varepsilon}\) and
\(\underline\epsilon>0\) as in item (ii) of Assumption \ref{a:stop}. It
follows then that
\(\overline{\alpha}_V\circ \underline{\alpha}_Y^{-1}\circ \left(\mathbb{I}-{\alpha}_Y\circ\overline{\alpha}_Y^{-1}\right)^{d}\circ\overline{\alpha}_Y(\sigma(x))\leq \frac{\bar a_V^2}{a_W} \left(1-\frac{a_W}{\bar a_V}\right)^{d} \sigma(x)\)
for all \(x\in\mathbb{R}^n\). Hence, by invoking the same arguments as
above when item (ii) of Assumption \ref{a:stop} holds, there exists
\(\bar d>0\) such that
\(\V[d](x)- \V[d-1](x)\leq c_\text{stop}(\varepsilon,x)\) for any
\(d\geq\bar d\). The proof is complete.
\subsection{Proof of Theorem \ref{Vestimates}}
Let \(x\in\mathbb{R}^n\) be such that \(\sigma(x)\leq\Delta\), let
\(\varepsilon\in\mathbb{R}^{n_\varepsilon}\), and let
\(d\in\mathbb{Z}_{\geq 0}\) be as in (\ref{eq:stop}), which exists since
Theorem \ref{theo:terminates} holds. Hence, the optimal sequence
\([\us[0],\us[1],\ldots,\us[d]]:=\ustar[\varepsilon](x)\) and the cost
\(\V[\varepsilon](x)\) defined in (\ref{eq:V}) are well-defined. Since
\(\V[\varepsilon]\) is a finite-horizon optimal cost,
\(\V[\varepsilon](x)\leq\V[\infty](x)\). On the other hand, consider the
infinite-horizon sequence
\(\bm{u}=[\us[0],\us[1],\ldots \us[d-1],\ustar[\infty](\phi(d,x,\ustar[\varepsilon](x)|_{d}))]\),
where \(\ustar[\infty](\phi(d,x,\ustar[\varepsilon](x)|_{d}))\) exists
in view of item (i) of SA\ref{SAa}. It follows from the optimality of
\(\V[\infty](x)\) that \(\V[\infty](x)\leq\J[\infty](x,\bm{u})\), and
from the definition of \(\bm{u}\) that
\(\J[\infty](x,\bm{u})= \V[\varepsilon](x)+\V[\infty](\phi(d,x,\ustar[\varepsilon](x)|_{d}))\),
which is finite. By invoking item (i) of SA\ref{SAa}, we derive
\(\V[\infty](x)\leq\V[\varepsilon](x)+\overline{\alpha}_V(\sigma(\phi(d,x,\ustar[\varepsilon](x)|_{d})))\).
In view of Proposition \ref{prop:terminal} and item (ii) of SA\ref{SAa},
\(\sigma(\phi(d,x,\ustar[\varepsilon](x)|_{d}))\leq\alpha_W^{-1}(c_{\text{stop}}(\varepsilon,x))\),
thus
\(\V[\infty](x)\leq\V[\varepsilon](x)+ \overline{\alpha}_V\circ\alpha_W^{-1}(c_{\text{stop}}(\varepsilon,x))\)
and the proof is complete.
\subsection{Proof of Theorem \ref{algostab}}
First, we prove the following result, which provides Lyapunov properties
for system (\ref{eq:autosys}) that we use to derive the main stability
result afterwards.
\begin{theorem}\label{YLyapunovProp}
Let $\Y[]:=\V[\infty]$, the following holds.
\begin{itemize}[itemindent=\widthof{(ii)}]
\item[(i)] For any $x\in\mathbb{R}^n$, \begin{equation}\underline{\alpha}_Y(\sigma(x)) \leq \Y[](x) \leq \overline{\alpha}_Y(\sigma(x)),\end{equation}
where $\underline{\alpha}_Y:=\alpha_W,\overline{\alpha}_Y:=\overline{\alpha}_V$, with $\alpha_W,\overline{\alpha}_V$ from SA\ref{SAa}.
\item[(ii)] For any $x\in\mathbb{R}^n$, $\varepsilon\in\mathbb{R}^{n_\varepsilon}$, $v \in F^*_{\varepsilon}(x)$, \begin{equation} \Y[](v)-\Y[](x)
\leq -{\alpha}_Y(\sigma(x)) + \overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\end{equation}
where ${\alpha}_Y=\alpha_W$, with $\alpha_W$ and $\overline{\alpha}_V$ from SA\ref{SAa}, and $c_\text{stop}$ comes from (\ref{eq:stop}). $
\square$
\end{itemize}
\end{theorem}
\noindent\textbf{Proof.} Let
\(\varepsilon\in\mathbb{R}^{n_\varepsilon}\), \(x\in\mathbb{R}^n\) and
\(v \in F^* _ {\varepsilon}(x)\), which is well-defined in view of
Theorem \ref{theo:terminates}. There exists
\([\us[0],\us[1],\ldots,\us[d]]=\bm{u}^*_{\varepsilon}(x)\) such that
\(v=f(x,\us[0])\) and \(\bm{u}^*_{\varepsilon}(x)\) is an optimal input
sequence for system (\ref{eq:sys}) and cost (\ref{eq:J}) with horizon
\(d\), hence \(\V[\varepsilon](x)=\J[d](x,\bm{u}^*_{\varepsilon}(x))\).
Moreover, in view of Proposition \ref{prop:terminal} and item (ii) of
SA\ref{SAa},
\(\sigma(\phi(d,x,\ustar[\varepsilon](x)|_{d}))\leq\alpha_W^{-1}(c_{\text{stop}}(\varepsilon,x))\).
From items (i) and (ii) of SA\ref{SAa}, we have
\(\Y[](x)=\V[\infty](x)\leq\overline{\alpha}_V(\sigma(x))=:\overline{\alpha}_Y(\sigma(x))\).
On the other hand, we have from item (ii) of SA\ref{SAa} that
\(\alpha_W(\sigma(x))\leq \ell^*_d(0,x)\). This implies that
\(\alpha_W(\sigma(x))\leq \V[\varepsilon](x)\leq\V[\infty](x)=\Y[](x)\).
Hence item (i) of Theorem \ref{YLyapunovProp} holds with
\(\underline{\alpha}_Y=\alpha_W\).
Consider the sequence
\(\hat{\bm{u}}:=[\us[1],\us[2],\ldots,\us[d-1],\bar{\bm{u}}]\) where
\(\bar{\bm{u}}:=\bm{u}^*_{\infty}( \phi(d , x , \bm{u}^*_{\varepsilon}(x)|_{d}) )\),
\(\bm{u}^*_{\varepsilon}(x)|_{d}=[\us[0],\ldots,\us[d-1]]\) and \(\phi\)
denotes the solution of system (\ref{eq:sys}). The sequence
\(\hat{\bm{u}}\) consists of the elements \(\us[1],\ldots,\us[d-1]\) of
\(\ustar[\varepsilon](x)|_{d}\), i.e.~all of them except \(\us[0]\), followed by an optimal
input sequence of infinite length at state
\(\phi(d,x,\ustar[\varepsilon](x)|_{d})\), which exists according to
item (i) of SA\ref{SAa}. Sequence \(\bar{\bm{u}}\) minimizes
\(\J[\infty](\phi(d,x,\ustar[\varepsilon](x)|_{d}),\bar{\bm{u}})\) by
virtue of item (i) of SA\ref{SAa}. From the definition of cost \(\J\) in
(\ref{eq:J}) and \(\V[\infty](v)\) in view of item (i) of SA\ref{SAa},
\begin{equation}
\label{eq:propeqVa}
\begin{split}
\V[\infty](v)\quad
&\leq\quad
\J[\infty](v,\hat{\bm{u}})\\
&=\quad
\J[d-1](v,\hat{\bm{u}}|_{d-1})\\&\quad+\J[\infty](\phi(d-1,v,\hat{\bm{u}}|_{d-1}),\bar{\bm{u}}).
\end{split}
\end{equation} From Bellman optimality principle, we have
\(\V[\varepsilon](x) = \V[d](x)=\ell^*_d(0,x) + \V[d-1](v) = \ell^*_d(0,x)+\J[d-1](v,\hat{\bm{u}}|_{d-1})\),
hence
\begin{equation}\J[d-1](v,\hat{\bm{u}}|_{d-1})=V_{\varepsilon}(x)-\ell^*_d(0,x)\label{eq:propeqVb}.
\end{equation} Moreover, by item (i) of SA\ref{SAa}, \begin{align*}
\MoveEqLeft \J[\infty](\phi(d-1,v,\hat{\bm{u}}|_{d-1}),\bar{\bm{u}}) \\&\leq\overline{\alpha}_V(\sigma(\phi(d-1,v,\hat{\bm{u}}|_{d-1}))).\addtocounter{equation}{1}\tag{\theequation}\label{eq:propeqVc}
\end{align*} Consequently, in view of (\ref{eq:propeqVa}),
(\ref{eq:propeqVb}) and (\ref{eq:propeqVc}), \begin{align*}
\MoveEqLeft \V[\infty](v) \leq \V[\varepsilon](x) -\ell^*_d(0,x)\\
&\qquad+\overline{\alpha}_V(\sigma(\phi(d-1,v,\hat{\bm{u}}|_{d-1}))).
\addtocounter{equation}{1}\tag{\theequation}\label{eq:propeqa}
\end{align*} Since
\(\phi(d-1,v,\hat{\bm{u}}|_{d-1})=\phi(d,x,\ustar[\varepsilon](x)|_{d})\)
and
\(\sigma(\phi(d,x,\ustar[\varepsilon](x)|_{d}))\leq\alpha_W^{-1}(c_{\text{stop}}(\varepsilon,x))\)
holds, it follows \begin{equation}
\V[\infty](v)
\leq \V[\varepsilon](x)-\ell^*_d(0,x)+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x)).
\label{eq:propeqb}
\end{equation} By Theorem \ref{Vestimates},
\(\V[\varepsilon](x)\leq\V[\infty](x)\), thus \begin{equation}
\V[\infty](v)
\leq \V[\infty](x)-\ell^*_d(0,x)+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x)).
\label{eq:propeqfinal}
\end{equation} By invoking item (ii) of SA\ref{SAa}, we derive
\(\V[\infty](v) \leq \V[\infty](x) -\alpha_W(\sigma(x))+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\),
and since \(\Y[]=\V[\infty]\), the proof is completed with
\({\alpha}_Y:=\alpha_W\). \(
\blacksquare\)
Item (i) states that \(\Y[]\) is positive definite and radially
unbounded with respect to the set \(\{x:\sigma(x)=0\}\). Item (ii) of
Theorem \ref{YLyapunovProp} shows that \(\Y[]\) strictly decreases along
the solutions to (\ref{eq:autosys}) up to a perturbative term
\(\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\),
which can be made as small as desired by selecting \(|\varepsilon|\)
close to \(0\) as
\(\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\leq\overline{\alpha}_V\circ\alpha_W^{-1}(\theta(|\varepsilon|,\sigma(x)))\),
per Assumption \ref{cstop2}.
We are ready to prove Theorem \ref{algostab}. The proof heavily borrows
from \citep[Theorem 2]{granzottoCDC2019} and
\citep[Theorem 3.3]{granz2019}; we nevertheless include it in this
technical report for completeness. Let \(\Delta,\delta>0\). We
select \(\varepsilon^*>0\) such that \begin{alignat*}{3}
\theta(\varepsilon^*,\underline{\alpha}_Y^{-1}(\widetilde\Delta)) &< \alpha_W\circ\overline{\alpha}_V^{-1}(\frac{1}{2}\widetilde{\alpha}_Y(\widetilde\delta)), \addtocounter{equation}{1}\tag{\theequation}\label{eq:gdcondC}
\end{alignat*} where
\(\widetilde{\alpha}_Y:=\alpha_W\circ\overline{\alpha}_Y^{-1}\),
\(\widetilde\Delta:= \overline{\alpha}_Y(\Delta)\),
\(\widetilde\delta:= \left(\mathbb{I}-\frac{\widetilde{\alpha}_Y}{2}\right)^{-1}\circ\underline{\alpha}_Y(\delta)\)
and \(\theta\) comes from Assumption \ref{cstop2}. Note that
\(\left(\mathbb{I}-\frac{\widetilde{\alpha}_Y}{2}\right)^{-1}\) is
indeed of class \(\mathcal{K}_\infty\) as we assume without loss of
generality that\footnote{See \citep{granz2019} for details.}
\(\mathbb{I}-\widetilde{\alpha}_Y\in\mathcal{K}_\infty\), hence
\(\mathbb{I}-\widetilde{\alpha}_Y+\frac{\widetilde{\alpha}_Y}{2}\in\mathcal{K}_\infty\)
and so is its inverse. Inequality (\ref{eq:gdcondC}) can always be
verified by taking \(\varepsilon^*\) sufficiently small since
\(\theta(\cdot,\underline{\alpha}_Y^{-1}(\widetilde\Delta))\in\mathcal{K}\),
and
\(\alpha_W\circ\overline{\alpha}_V^{-1}(\frac{1}{2}\widetilde{\alpha}_Y(\widetilde\delta))>0\).
It follows from \(\theta(\cdot,s)\in\mathcal{K}\) for any \(s>0\) and
\(\theta(s,\cdot)\) is non-decreasing for any \(s\geq0\), that
\(\theta(|\varepsilon|,\underline{\alpha}_Y^{-1}(s))\leq\theta(\varepsilon^*,\underline{\alpha}_Y^{-1}(\widetilde\Delta))\)
for any \(s\in[0,\widetilde\Delta]\) and
\(|\varepsilon|<\varepsilon^*\). Furthermore, from Assumption
\ref{cstop2} and item (i) of Theorem \ref{YLyapunovProp}, we derive
\(c_\text{stop}(\varepsilon,x)\leq \theta(|\varepsilon|,\underline{\alpha}_Y^{-1}(\Y[](x)))\).
Thus, in view of (\ref{eq:gdcondC}),
\begin{equation}\label{eq:cstopcond}
c_\text{stop}(\varepsilon,x)\leq \alpha_W\circ\overline{\alpha}_V^{-1}\left(\frac{1}{2}\widetilde{\alpha}_Y(\widetilde\delta)\right)
\end{equation} for any \(x\) such that \(\Y[](x)\leq\widetilde\Delta\).
On the other hand, we have
\(\alpha_W\circ\overline{\alpha}_V^{-1}(\frac{1}{2}\widetilde{\alpha}_Y(\widetilde\delta))\leq\alpha_W\circ\overline{\alpha}_V^{-1}(\frac{1}{2}\widetilde{\alpha}_Y(s))\)
for any \(s\in[\widetilde\delta,\infty)\). Hence, for any
\(x\in\mathbb{R}^n\) such that
\(\Y[](x)\in[\widetilde\delta,\widetilde\Delta]\) and
\(|\varepsilon|<\varepsilon^*\), \begin{alignat*}{3}
\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x)) &\leq \frac{\widetilde{\alpha}_Y(\widetilde\delta)}{2} \leq \frac{\widetilde{\alpha}_Y(\Y[](x))}{2}. \addtocounter{equation}{1}\tag{\theequation}\label{eq:gdcondD}
\end{alignat*}
Let \(x\in\mathbb{R}^n\) with \(\sigma(x)\leq\Delta\) and
\(v \in F^* _ {\varepsilon}(x)\). In view of (\ref{eq:cstopcond}) and
items (i) and (ii) of Theorem \ref{YLyapunovProp}, \begin{equation}
\Y[](v)-\Y[](x) \leq -\widetilde{\alpha}_Y(\Y[](x)) +\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x)).
\label{eq:Ycontracttildepre}
\end{equation} Since \(\sigma(x) \leq \Delta\),
\(\Y[](x) \leq \overline{\alpha}_Y(\sigma(x)) \leq \overline{\alpha}_Y(\Delta) = \widetilde\Delta\).
Consider \(\Y[](x) \in [0,\widetilde\delta)\). Since
\(c_\text{stop}(\varepsilon,x)\leq \alpha_W\circ\overline{\alpha}_V^{-1}(\frac{1}{2}\widetilde{\alpha}_Y(\widetilde\delta))\)
holds for \(\Y[](x)\leq\widetilde\Delta\), it holds here. Furthermore,
since \(\mathbb{I}-\widetilde{\alpha}_Y\in\mathcal{K}_\infty\) holds
without loss of generality, and in view of (\ref{eq:Ycontracttildepre}),
\begin{equation}
\begin{split}
\Y[](v) &\leq \Y[](x)-\widetilde{\alpha}_Y(\Y[](x)) + \overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\\
&\leq \left(\mathbb{I}-\widetilde{\alpha}_Y\right)(\widetilde\delta) + \frac{1}{2}\widetilde{\alpha}_Y(\widetilde\delta).
\end{split}
\end{equation} Given the definition of \(\widetilde\delta\),
\begin{equation}
\Y[](v) \leq \left(\mathbb{I}-\frac{\widetilde{\alpha}_Y}{2}\right)(\widetilde\delta) = \underline{\alpha}_Y(\delta). \label{eq:Yattracted}
\end{equation}
When \(\Y[](x)\geq\widetilde\delta\), we derive from (\ref{eq:gdcondD})
that
\(-\widetilde{\alpha}_Y(\Y[](x))+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x))\leq-\tfrac{1}{2}\widetilde{\alpha}_Y(\Y[](x))\).
Thus, from (\ref{eq:Ycontracttildepre}), \begin{align*}
\Y[](v)-\Y[](x) \leq -\tfrac{1}{2}\widetilde{\alpha}_Y(\Y[](x)).
\addtocounter{equation}{1}\tag{\theequation}\label{eq:Ycontracting}
\end{align*} In view of (\ref{eq:Yattracted}) and
(\ref{eq:Ycontracting}), it follows for any \(k\in\mathbb{Z}_{\geq 0}\)
that \begin{equation} \label{eq:Ysolonestep}
\Y[](\phi(k+1,x))\leq \max\left\{(\mathbb{I}-\tfrac{1}{2}\widetilde{\alpha}_Y)(\Y[](x)), \underline{\alpha}_Y(\delta)\right\},
\end{equation} where \(\phi(k,x)\) is a solution starting at \(x\) for
system (\ref{eq:autosys}). Furthermore, when
\(\Y[](x)\leq \underline{\alpha}_Y(\delta)\),
\(\Y[](v) \leq \underline{\alpha}_Y(\delta)\) follows. Indeed, if
\(\Y[](x) \in [\widetilde\delta,\widetilde\Delta]\),
\(\Y[](v) \leq \Y[](x)\leq \underline{\alpha}_Y(\delta)\) from
(\ref{eq:Ycontracting}), and if \(\Y[](x) \in[0,\widetilde\delta)\), we
deduce \(\Y[](v) \leq \underline{\alpha}_Y(\delta)\) from
(\ref{eq:Yattracted}). Hence the set
\(\{z\in\mathbb{R}^n \,:\, \Y[](z)\leq \underline{\alpha}_Y(\delta)\}\)
is forward invariant for system (\ref{eq:autosys}). By iterating
(\ref{eq:Ysolonestep}), we obtain \begin{equation}
\Y[](\phi(k,x)) \leq \max \left\{ \widetilde\beta(\Y[](x),k),\underline{\alpha}_Y(\delta) \right\},
\end{equation} where
\(\widetilde\beta(s,k)=\left(\mathbb{I}-\frac{1}{2}\widetilde{\alpha}_Y\right)^{(k)}(s)\)
for any \(s\geq0\), with \(\widetilde\beta\in\mathcal{KL}\) as
\(\lim_{k\to\infty}\left(\mathbb{I}-\frac{1}{2}\widetilde{\alpha}_Y\right)^{(k)}(s)=0\)
for any \(s\geq0\), since\footnote{See \citep[Lemma B.1]{granz2019}}
\(\left(\mathbb{I}-\frac{1}{2}\widetilde{\alpha}_Y\right)(s)<s\) for
\(s>0\) and
\(\left(\mathbb{I}-\frac{1}{2}\widetilde{\alpha}_Y\right)(0)=0\).
Finally, invoking
\(\underline{\alpha}_Y(\sigma(x))\leq \Y[](x) \leq \overline{\alpha}_Y(\sigma(x))\),
we deduce \begin{equation}
\sigma(\phi(k,x)) \leq \max \left\{\underline{\alpha}_Y^{-1}\left(\widetilde\beta(\overline{\alpha}_Y(\sigma(x)),k)\right),\delta \right \}.
\end{equation} Thus Theorem \ref{algostab} holds with
\(\beta(s,k) = \underline{\alpha}_Y^{-1}\left(\widetilde\beta(\overline{\alpha}_Y(s),k)\right)\)
for any \(s\geq0\) and \(k\in\mathbb{Z}_{\geq 0}\).
\subsection{Proof of Corollary \ref{Yges}}
We follow the steps made in \citep[Corollary 3.2]{granz2019}. Let
\(x \in \mathbb{R}^n\). We select \(\varepsilon^*<\frac{a_W^2}{\bar a_V}\) as
in Corollary \ref{Yges} and let
\(\varepsilon\in\mathbb{R}^{n_\varepsilon}\) such that
\(|\varepsilon|\leq\varepsilon^*\) and \(v \in F^*_{\varepsilon}(x)\).
In particular, from item (i) of Theorem \ref{YLyapunovProp},
\(\alpha_W(\sigma(x))\leq \Y[](x) \leq \overline{\alpha}_V(\sigma(x))\),
and since item (ii) of Assumption \ref{a:stop} holds with \(L=\infty\)
and \(a_W s \leq \alpha_W(s)\), \(\overline{\alpha}_V(s)\leq \bar a_V s\)
with \(a_W,\bar a_V>0\) for any \(s>0\), we obtain \begin{equation}
a_W \sigma(x) \leq \Y[](x) \leq \bar a_V \sigma(x).
\label{eq:ygesybound}
\end{equation} Similarly, in view of item (ii) of Theorem
\ref{YLyapunovProp},
\(\Y[](v)-\Y[](x) \leq -\alpha_W(\sigma(x)) +\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,x)),\)
and since \(c_\text{stop}(\varepsilon,x)\leq |\varepsilon|\sigma(x)\)
holds, we derive
\begin{equation}
\Y[](v)-\Y[](x) \leq-\left(\frac{a_W^2-|\varepsilon|\bar
a_V}{a_W}\right)\sigma(x).
\label{eq:ygesa}
\end{equation} Note that for \(|\varepsilon|\leq\varepsilon^*\),
\(\left(\frac{a_W^2-|\varepsilon|\bar a_V}{a_W}\right) > 0\). Hence, in
view of \eqref{eq:ygesa} and \eqref{eq:ygesybound},
\begin{equation}\Y[](v)-\Y[](x) \leq-\left(\frac{a_W^2-|\varepsilon|\bar
a_V}{\bar a_V a_W}\right)\Y[](x)\end{equation} holds for any
\(\sigma(x)\geq0\). Let \(\phi(k,x)\) denote
a corresponding solution to (\ref{eq:autosys}) at time
\(k\in \mathbb{Z}_{\geq 0}\); then it holds that
\(\Y[](\phi(k,x))\leq \left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}\Y[](x)\).
In view of \eqref{eq:ygesybound}, it follows from
\(\Y[](\phi(k,x)) \leq \left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}\Y[](x)\)
that
\(a_W \sigma(\phi(k,x))\leq \left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k} \bar a_V \sigma(x)\)
hence
\(\sigma(\phi(k,x)) \leq \frac{\bar a_V}{a_W}\sigma(x)\left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}\)
and the proof is concluded.
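As a quick numerical illustration of this exponential decay, the snippet below evaluates the envelope \(\sigma(\phi(k,x)) \leq \frac{\bar a_V}{a_W}\sigma(x)\left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}\); the constants \(a_W=1\), \(\bar a_V=14\) and \(|\varepsilon|=0.01\) are assumptions chosen to be consistent with the numbers appearing in the example earlier in the paper, not values stated explicitly by the text.
\begin{verbatim}
# Sketch: exponential-stability envelope of the corollary (constants are
# assumptions: a_W = 1, bar a_V = 14, eps = 0.01, sigma(x) = 2000).
a_W, a_V_bar, eps = 1.0, 14.0, 0.01
assert eps < a_W**2 / a_V_bar                 # requirement eps* < a_W^2 / bar a_V

rate = 1.0 - (a_W**2 - eps * a_V_bar) / (a_V_bar * a_W)   # contraction factor
gain = a_V_bar / a_W                                      # overshoot factor
sigma_x = 2000.0

for k in (0, 10, 40, 100, 200):
    print(k, gain * sigma_x * rate**k)
\end{verbatim}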
\subsection{Proof of Theorem \ref{Vrunestimates}}
Let \(x \in \mathbb{R}^n\), \(\varepsilon\in\mathbb{R}^{n_\varepsilon}\)
such that \(|\varepsilon|<\varepsilon^*\) where \(\varepsilon^*\) is
selected as in Corollary \ref{Yges},
\(\phi(k+1,x)\in F^*_{\varepsilon}(\phi(k,x))\) for any
\(k\in\mathbb{Z}_{\geq 0}\) where \(\phi\) is a solution to
(\ref{eq:autosys}) initialized at \(x\). Let
\(u^r_k\in\mathcal{U}^*_{\varepsilon}(\phi(k,x))\) such that
\(\phi(k+1,x)=f(\phi(k,x),u^r_k)\), and note, since inputs from
(\ref{eq:autosys}) are the first input of \(\ustar[\varepsilon]\)
applied in a receding horizon fashion,
\(u^r_k=\ustar[\varepsilon](\phi(k,x))|_0\) and therefore
\(\ell(\phi(k,x),u_k^r)=\ell(\phi(k,x),\ustar[\varepsilon](\phi(k,x))|_0)= \ell^*_d(0,\phi(k,x))\)
by definition of \eqref{eq:lddef}. Consider then \begin{equation}
V_{\varepsilon}^\text{run}(x)= \sum_{k=0}^\infty \ell^*_d(0,\phi(k,x)).
\label{eq:Vavgdiffphi}
\end{equation} Note that indeed
\(V_{\varepsilon}^\text{run}(x)\in\mathcal{V}_{\varepsilon}^{\text{run}}(x)\).
It follows from the proof of Theorem \ref{YLyapunovProp}, in particular
\eqref{eq:propeqfinal}, that \begin{equation}
\V[\infty](\phi(k+1,x))
\leq \V[\infty](\phi(k,x))-\ell^*_d(0,\phi(k,x))+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(k,x))),
\end{equation} from which we deduce, for any \(N\geq0\), \begin{align*}
\MoveEqLeft \sum_{k=0}^N \ell^*_d(0,\phi(k,x))\\
&\leq\V[\infty](\phi(0,x)) -\V[\infty](\phi(1,x))+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(0,x)))\\
&\quad+\V[\infty](\phi(1,x)) -\V[\infty](\phi(2,x))+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(1,x)))\\
&\quad+\ldots\\
&\quad+\V[\infty](\phi(N,x)) -\V[\infty](\phi(N+1,x))\\
&\quad+\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(N,x)))\\
&\leq \V[\infty](\phi(0,x)) + \sum_{k=0}^{N} \overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(k,x))),
\addtocounter{equation}{1}\tag{\theequation}\label{eq:Vsumdiffphi}
\end{align*} Hence, when \(N\to\infty\), \begin{equation}
\begin{split}
V_{\varepsilon}^\text{run}(x) &\leq \V[\varepsilon](\phi(0,x)) + \sum_{k=0}^{\infty} \overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(k,x))).
\end{split}\label{eq:Vsumdiffphiestimates}
\end{equation} All that remains is to compute a bound on
\(\sum_{k=0}^{\infty} \overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(k,x)))\),
which is possible by recalling that
\(\sigma(\phi(k,x))\leq \frac{\bar a_V}{a_W}\sigma(x)\left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}\)
holds from Corollary \ref{Yges} and
\(\overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(k,x)))\leq \frac{\bar a_V}{a_W} |\varepsilon|\sigma(\phi(k,x))\)
as the conditions of Corollary \ref{Yges} are assumed to hold.
Specifically,
\(\sum_{k=0}^{\infty} \overline{\alpha}_V\circ\alpha_W^{-1}(c_\text{stop}(\varepsilon,\phi(k,x))) \leq |\varepsilon|\frac{\bar a_V^2}{a_W^2}\sigma(x) \sum_{k=0}^{\infty} \left(1-\frac{a_W^2 -|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}\),
which provides (\ref{eq:Vrunestimates}) as
\(\sum_{k=0}^{\infty} \left(1-\frac{a_W^2-|\varepsilon|\bar a_V}{\bar a_V a_W}\right)^{k}=\frac{\bar a_V a_W}{a_W^2-\bar a_V |\varepsilon|}\).
The lower bound \(\V[\infty](x)\leq V_{\varepsilon}^\text{run}(x)\)
follows from the optimality of \(\V[\infty](x)\). Since
(\ref{eq:Vsumdiffphiestimates}) holds for an arbitrary solution of
(\ref{eq:autosys}), \(\phi(k+1,x)=f(\phi(k,x),u^r_k)\) for any
\(k\in\mathbb{Z}_{\geq 0}\), the resulting bound holds for any
\(V_{\varepsilon}^\text{run}(x)\in\mathcal{V}_{\varepsilon}^{\text{run}}(x)\).
\fi
\end{document}
\begin{document}
\title*{Optimal point sets for quasi--Monte Carlo integration of bivariate periodic functions with bounded mixed derivatives}
\titlerunning{Optimal point sets for quasi--Monte Carlo integration of bivariate periodic functions}
\author{Aicke Hinrichs \and Jens Oettershagen}
\institute{
Aicke Hinrichs
\at Institut f\"ur Analysis, Johannes-Kepler-Universit\"at Linz, Altenberger Stra\ss e 69, 4040 Linz, Austria \\
\email{[email protected]}
\and
Jens Oettershagen
\at Institute for Numerical Simulation, Wegelerstra\ss e 6, 53115 Bonn, Germany \\
\email{[email protected]}
}
\maketitle
\abstract{We investigate quasi-Monte Carlo (QMC) integration of bivariate periodic functions with dominating mixed smoothness of order one. While there exist several QMC constructions which asymptotically yield the optimal rate of convergence of $\mathcal{O}(N^{-1}\log(N)^{\frac{1}{2}})$, it is yet unknown which point set is optimal in the sense that it is a global minimizer of the worst case integration error.
We will present a computer-assisted proof by exhaustion that the Fibonacci lattice is the unique minimizer of the QMC worst case error in periodic $H^1_\text{mix}$ for small Fibonacci numbers $N$. Moreover, we investigate the situation for point sets whose cardinality $N$ is not a Fibonacci number. It turns out that for $N=1,2,3,5,7,8,12,13$ the optimal point sets are integration lattices.}
\section{Introduction}
Quasi-Monte Carlo (QMC) rules are equal-weight quadrature rules which can be used to approximate integrals defined on the $d$-dimensional unit cube $[0,1)^d$
\begin{equation*}
\int_{[0,1)^d} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} \approx \frac{1}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i),
\end{equation*}
where $\ensuremath{\mathcal{P}}_N=\{\boldsymbol{x}_1,\boldsymbol{x}_2,\ldots,\boldsymbol{x}_{N}\}$ are deterministically chosen quadrature points in $[0,1)^d$.
The integration error for a specific function $f$ is given as
\begin{equation*}
\left| \int_{[0,1)^d} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} - \frac{1}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i) \right|.
\end{equation*}
To study the behavior of this error as $N$ increases for $f$ from a Banach space $(\ensuremath{\mathcal{H}}, \|\cdot\|)$ one considers the worst case error
\begin{equation*}
{\rm wce}(\ensuremath{\mathcal{H}},\ensuremath{\mathcal{P}}_N)=\sup_{\satop{f \in \ensuremath{\mathcal{H}}}{\|f\| \le 1}} \left| \int_{[0,1)^d} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} - \frac{1}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i) \right|.
\end{equation*}
Particularly nice examples of such function spaces are reproducing kernel Hilbert spaces \cite{Aronszajn_1950}.
Here, we will consider the reproducing kernel Hilbert space $H^1_\text{mix}$ of 1-periodic functions with mixed smoothness. Details on these spaces are given in Section 2. The reproducing kernel is a
tensor product kernel of the form
$$ K_{d,\gamma}(\boldsymbol{x},\boldsymbol{y}) = \prod_{j=1}^d K_{1,\gamma} (x_j,y_j) \ \mbox{for} \ \boldsymbol{x}=(x_1,\dots,x_d),\boldsymbol{y}=(y_1,\dots,y_d) \in [0,1)^d$$
with
$ K_{1,\gamma}(x,y)= 1 + \gamma k(|x - y|)$ and $k(t)=\frac{1}{2}(t^2-t+\frac{1}{6})$ and a parameter $\gamma>0$.
It turns out that minimizing the worst case error ${\rm wce}(H^1_\text{mix},\ensuremath{\mathcal{P}}_N)$ among all $N$-point sets $\ensuremath{\mathcal{P}}_N=\{\boldsymbol{x}_1,\ldots,\boldsymbol{x}_{N}\}$ with respect to the Hilbert space norm
corresponding to the kernel $K_{d,\gamma}$ is equivalent to minimizing the double sum
$$ G_\gamma ( \boldsymbol{x}_1,\ldots,\boldsymbol{x}_{N} ) = \sum_{i,j=1}^N K_{d,\gamma}(\boldsymbol{x}_i,\boldsymbol{x}_j).$$
There is a general connection between the discrepancy of a point set and the worst case error of integration. Details can be found in \cite[Chapter 9]{NW10}.
In our case, the relevant notion is the $L_2$-norm of the periodic discrepancy.
We describe the connection in detail in Section \ref{sec_dis}.
There are many results on the rate of convergence of worst case errors and of the optimal discrepancies for $N\to \infty$, see e.g. \cite{Niederreiter,NW10},
but results on the optimal point configurations for fixed $N$ and $d>1$ are scarce.
For discrepancies, we are only aware of \cite{W77}, where the point configurations minimizing the standard $L_\infty$-star-discrepancy for $d=2$ and $N=1,2,\dots,6$
are determined, \cite{PVC06}, where for $N=1$ the point minimizing the standard $L_\infty$- and $L_2$-star discrepancy for $d \ge 1$ is found, and
\cite{LP07}, where this is extended to $N=2$.
It is the aim of this paper to provide a method which for $d=2$ and $N>2$ yields the optimal points for the periodic $L_2$-discrepancy and worst case error in $H^1_\text{mix}$.
Our approach is based on a decomposition of the global optimization problem into exponentially many local ones which each possess unique solutions that can be approximated efficiently by a nonlinear block Gau\ss-Seidel method. Moreover, we use the symmetries of the two-dimensional torus to significantly reduce the number of local problems that have to be considered.
It turns out that in the case that $N$ is a (small) Fibonacci number, the Fibonacci lattice yields the optimal point configuration.
It is common wisdom, see e.g. \cite{BTY12,NS84,SJ94,SZ82}, that Fibonacci lattices provide very good point sets for integrating periodic functions. Now our results support the
conjecture that they are actually the best points.
These results may suggest that the optimal point configurations are integration lattices or at least lattice point sets. This seems to be true for some numbers $N$ of points, for example
for Fibonacci numbers, but not always. However, it can be shown that integration lattices are always {\em local} minima of ${\rm wce}(H^1_\text{mix},\ensuremath{\mathcal{P}}_N)$.
Moreover, our numerical results also suggest that for small $\gamma$ the optimal points are always \emph{close} to a lattice point set, i.e. $N$-point sets of the form
$$ \left\{ \left( \frac{i}{N}, \frac{\sigma(i)}{N} \right) \,:\, i=0,\dots,N-1\right\} ,$$
where $\sigma$ is a permutation of $\{0,1,\dots,N-1\}$.
The remainder of this article is organized as follows: In Section 2 we recall Sobolev spaces with bounded mixed derivatives, the notion of the worst case integration error in reproducing kernel Hilbert spaces and the connection to periodic discrepancy.
In Section 3 we discuss necessary and sufficient conditions for optimal point sets and derive lower bounds of the worst case error on certain local patches of the whole $[0,1)^{2N}$. In Section 4 we compute candidates for optimal point sets up to machine precision. Using arbitrary precision rational arithmetic we prove that they are indeed near the global minimum which also turns out to be unique up to torus-symmetries.
For certain point numbers the global minima are integration lattices as is the case if $N$ is a Fibonacci number.
We close with some remarks in Section 5.
\section{Quasi--Monte Carlo Integration in $H^1_\text{mix}(\mathbb{T}^2)$}
\subsection{Sobolev Spaces of Periodic Functions}
We consider univariate 1-periodic functions $f: \mathbb{R} \rightarrow \mathbb{R}$ which are given by their values on the torus $\mathbb{T} = [0,1)$.
For $k\in\mathbb{Z}$, the $k$-th Fourier coefficient of a function $f \in L_2(\mathbb{T})$ is given by $\hat{f}_k = \int_0^1 f(x) \exp(2\pi \mathrm{i} \, k x) \, \,\mathrm{d} x$.
The definition
\begin{equation}
\|f\|_{H^{1,\gamma}}^2 = |\hat{f}_0|^2 + \gamma \sum_{k \in \mathbb{Z}} |2\pi k|^2 |\hat{f}_k|^2 = \left(\int_{\mathbb{T}} f(x) \, \,\mathrm{d} x\right)^2 + \gamma \int_{\mathbb{T}} f'(x)^2 \, \,\mathrm{d} x
\end{equation}
for a function $f$ in the univariate Sobolev space $H^1(\mathbb{T}) = W^{1,2}(\mathbb{T}) \subset L_2(\mathbb{T})$ of functions with first weak derivatives bounded in $L_2$
gives a Hilbert space norm $\|f\|_{H^{1,\gamma}}$ on $H^1(\mathbb{T})$ depending on the parameter $\gamma>0$.
The corresponding inner product is given by
\[
(f,g)_{H^{1,\gamma}(\mathbb{T})} = \left( \int_0^1 f(x)\, \,\mathrm{d} x \right) \left( \int_0^1 g(x) \, \,\mathrm{d} x \right) + \gamma \int_0^1 f'(x) g'(x) \, \,\mathrm{d} x .
\]
We denote the Hilbert space $H^1(\mathbb{T})$ equipped with this inner product by $H^{1,\gamma}(\mathbb{T})$.
Since $H^{1,\gamma}(\mathbb{T})$ is continuously embedded in $C^0(\mathbb{T})$ it is a reproducing kernel Hilbert space (RKHS), see \cite{Aronszajn_1950}, with a symmetric and positive definite kernel \linebreak
$K_{1,\gamma}: \mathbb{T} \times \mathbb{T} \rightarrow \mathbb{R}$, given by \cite{Wahba75}
\begin{equation}
\begin{aligned}
K_{1,\gamma}(x,y) := & 1 + \gamma \sum_{k \in \mathbb{Z} \setminus \{0\}} |2\pi k|^{-2} \exp(2\pi \mathrm{i} k (x-y)) \\
= & 1+ \gamma k(|x-y|) ,
\end{aligned}
\end{equation}
where $k(t) = \frac{1}{2}(t^2-t+\frac{1}{6})$ is the Bernoulli polynomial of degree two divided by two.
This kernel has the property that it reproduces point evaluations in $H^1$, i.e. \linebreak $f(x) = (f(\cdot),K(\cdot, x))_{H^{1,\gamma}}$ for all $f \in H^1$.
The reproducing kernel of the tensor product space $H^{1,\gamma}_\text{mix}(\mathbb{T}^2) := H^1(\mathbb{T}) \otimes H^1(\mathbb{T}) \subset C(\mathbb{T}^2)$ is the product of the univariate kernels, i.e.
\begin{equation}
\begin{aligned}
K_{2,\gamma}(\boldsymbol{x},\boldsymbol{y}) = & K_{1,\gamma}(x_1,y_1) \cdot K_{1,\gamma}(x_2,y_2) \\
= & 1 + \gamma k(|x_1 - y_1|) + \gamma k(|x_2 - y_2|) + \gamma^2 k(|x_1 - y_1|) k(|x_2 - y_2|) .
\end{aligned}
\end{equation}
\subsection{Quasi--Monte Carlo Cubature} \label{sec_wce}
A linear cubature algorithm $Q_N(f) := \frac{1}{N} \sum_{i=1}^{N} f(\boldsymbol{x}_i)$ with uniform weights $\frac{1}{N}$ on a point set $\ensuremath{\mathcal{P}}_N=\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_{N}\}$ is called a QMC cubature rule.
Well-known examples for point sets used in such quadrature methods are digital nets, see e.g. \cite{DickPillich, Niederreiter}, and lattice rules \cite{SJ94}.
A two-dimensional integration lattice is a set of $N$ points given as
$$ \left\{ \left( \frac{i}{N}, \frac{i g}{N} \mod 1 \right) \,:\, i=0,\dots,N-1\right\} $$
for some $g\in \{1,\dots,N-1\}$ coprime to $N$.
A special case of such a rank-1 lattice rule is the so-called Fibonacci lattice, which only exists when $N$ is a Fibonacci number $F_n$ and which is given by the generating vector $(1,g) = (1,F_{n-1})$, where $F_n$ denotes the $n$-th Fibonacci number. It is well known that Fibonacci lattices yield the optimal rate of convergence in certain spaces of periodic functions.
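Generating such lattices is straightforward; the following short Python sketch (a minimal illustration, not code from this paper) produces a rank-1 lattice and, in particular, the Fibonacci lattice.
\begin{verbatim}
# Sketch: rank-1 integration lattice {(i/N, i*g/N mod 1) : i = 0,...,N-1}
# and the Fibonacci lattice N = F_n, g = F_{n-1}.
import numpy as np

def rank1_lattice(N, g):
    i = np.arange(N)
    return np.column_stack((i / N, (i * g % N) / N))

def fibonacci_lattice(n):
    F = [1, 1]                        # indexing F[0] = F[1] = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return rank1_lattice(F[n], F[n - 1])

print(fibonacci_lattice(7).shape)     # (21, 2): N = F[7] = 21, generator g = F[6] = 13
\end{verbatim}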
In the setting of a reproducing kernel Hilbert space with kernel $K$ on a general domain $D$, the worst case error of the
QMC-rule $Q_N$ can be computed as
$$ {\rm wce}(\ensuremath{\mathcal{H}},\ensuremath{\mathcal{P}}_N)^2 = \int_{D} \int_{D} K(\boldsymbol{x},\boldsymbol{y}) \, \,\mathrm{d} \boldsymbol{x} \,\mathrm{d} \boldsymbol{y} - \frac{2}{N}\sum_{i=1}^N \int_{D} K(\boldsymbol{x}_i,\boldsymbol{y}) \, \,\mathrm{d} \boldsymbol{y} + \frac{1}{N^2} \sum_{i,j=1}^{N} K(\boldsymbol{x}_i,\boldsymbol{x}_j) ,$$
which is the norm of the error functional, see e.g. \cite{DickPillich,NW10}.
For the kernel $K_{2,\gamma}$ we obtain
$$ {\rm wce}( H^{1,\gamma}_\text{mix}(\mathbb{T}^2) ,\ensuremath{\mathcal{P}}_N)^2 = - 1 + \frac{1}{N^2} \sum_{i=1}^N \sum_{j=1}^N K_{2,\gamma}(\boldsymbol{x}_i, \boldsymbol{x}_j).$$
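The double sum above is straightforward to evaluate; the sketch below (an illustration of the formula, not the authors' code) computes ${\rm wce}( H^{1,\gamma}_\text{mix}(\mathbb{T}^2) ,\ensuremath{\mathcal{P}}_N)^2$ for an arbitrary point set, here the Fibonacci lattice with $N=21$.
\begin{verbatim}
# Sketch: squared worst case error in H^{1,gamma}_mix(T^2) for a point set P
# (rows are points in [0,1)^2), with K_2 = prod_j (1 + gamma * k(|x_j - y_j|))
# and k(t) = (t^2 - t + 1/6) / 2.
import numpy as np

def bernoulli_k(t):
    return 0.5 * (t**2 - t + 1.0 / 6.0)

def wce_squared(P, gamma=6.0):
    N = P.shape[0]
    K = np.ones((N, N))
    for j in range(P.shape[1]):                      # tensor-product kernel
        D = np.abs(P[:, j, None] - P[None, :, j])    # |x_j - y_j| for all pairs
        K *= 1.0 + gamma * bernoulli_k(D)
    return -1.0 + K.sum() / N**2

i = np.arange(21)                                    # Fibonacci lattice, N = 21, g = 13
P_fib = np.column_stack((i / 21, (13 * i % 21) / 21))
print(wce_squared(P_fib, gamma=6.0))
\end{verbatim}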
There is a close connection between the worst case error of integration in \linebreak ${\rm wce}( H^{1,\gamma}_\text{mix}(\mathbb{T}^2) ,\ensuremath{\mathcal{P}}_N)$ for the case $\gamma=6$ and periodic $L_2$-discrepancy, which we will describe in the following.
\subsection{Periodic Discrepancy} \label{sec_dis}
The periodic $L_2$-discrepancy is measured with respect to periodic boxes. In dimension $d=1$, periodic intervals $I(x,y)$ for $x,y\in [0,1)$ are given by
$$ I(x,y)=[x,y) \ \mbox{if} \ x\le y \qquad \mbox{and} \qquad I(x,y)=[x,1) \cup [0,y) \ \mbox{if} \ x > y. $$
In dimension $d>1$, the periodic boxes $B(\boldsymbol{x},\boldsymbol{y})$ for $\boldsymbol{x}=(x_1,\dots,x_d)$ and $\boldsymbol{y}=(y_1,\dots,y_d)\in [0,1)^d$ are products of the one-dimensional intervals, i.e.
$$ B(\boldsymbol{x},\boldsymbol{y}) = I(x_1,y_1) \times \dots \times I(x_d,y_d).$$
The discrepancy of a set $\ensuremath{\mathcal{P}}_N=\{ \boldsymbol{x}_1,\dots,\boldsymbol{x}_N \} \subset [0,1)^d$ with respect to such a periodic box $B=B(\boldsymbol{x},\boldsymbol{y})$ is the deviation of the relative number of points of $\ensuremath{\mathcal{P}}_N$ in $B$
from the volume of $B$
$$ D({\ensuremath{\mathcal{P}}}_N, B ) = \frac{\# ({\ensuremath{\mathcal{P}}}_N \cap B) }{N} - {\rm vol} (B). $$
Finally, the periodic $L_2$-discrepancy of $\ensuremath{\mathcal{P}}_N$ is the $L_2$-norm of the discrepancy function taken over all periodic boxes $B=B(\boldsymbol{x},\boldsymbol{y})$, i.e.
$$ D_2(\ensuremath{\mathcal{P}}_N) = \left( \int_{[0,1)^d} \int_{[0,1)^d} D({\cal P}_N, B(\boldsymbol{x},\boldsymbol{y}) )^2 \,\mathrm{d} \, \boldsymbol{y} \,\mathrm{d} \boldsymbol{x} \right)^{1/2}. $$
It turns out, see \cite[page 43]{NW10}, that the periodic $L_2$-discrepancy can be computed as
\begin{align*}
D_2 ({\ensuremath{\mathcal{P}}}_N)^2 = & -3^{-d} + \frac{1}{N^2} \sum_{\boldsymbol{x},\boldsymbol{y} \in {\cal P}_N} \tilde{K}_d(\boldsymbol{x},\boldsymbol{y}) \\
= & 3^{-d} {\rm wce}( H^{1,6}_\text{mix}(\mathbb{T}^d) ,\ensuremath{\mathcal{P}}_N)^2,
\end{align*}
where $\tilde{K}_d$ is the tensor product of $d$ kernels $\tilde{K}_1(x,y)= |x-y|^2 -|x-y| + \frac12$.
So minimizing the periodic $L_2$-discrepancy is equivalent to minimizing the worst case error in $H^{1,\gamma}_\text{mix}$ for $\gamma=6$.
Let us also remark that the periodic $L_2$-discrepancy is (up to a factor) sometimes also called diaphony.
This terminology was introduced in \cite{Zin1997}.
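The relation between the periodic $L_2$-discrepancy and the worst case error is easy to check numerically via the two double-sum expressions; the snippet below (a sanity check, not part of the paper) does so for a random point set in dimension $d=2$.
\begin{verbatim}
# Sketch: check D_2(P)^2 = 3^{-d} * wce(H^{1,6}_mix, P)^2 using the two double
# sums, with tilde K_1(x,y) = |x-y|^2 - |x-y| + 1/2 and K_{1,6} = 1 + 6*k(|x-y|).
import numpy as np

rng = np.random.default_rng(0)
d, N = 2, 17
P = rng.random((N, d))

k = lambda t: 0.5 * (t**2 - t + 1.0 / 6.0)

Kt = np.ones((N, N))      # tensor product of tilde K_1
K6 = np.ones((N, N))      # tensor product of K_{1,6}
for j in range(d):
    D = np.abs(P[:, j, None] - P[None, :, j])
    Kt *= D**2 - D + 0.5
    K6 *= 1.0 + 6.0 * k(D)

disc2 = -3.0**(-d) + Kt.sum() / N**2     # periodic L2-discrepancy squared
wce2  = -1.0 + K6.sum() / N**2           # squared worst case error for gamma = 6
print(disc2, 3.0**(-d) * wce2)           # both numbers agree up to rounding
\end{verbatim}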
\section{Optimal Cubature Points}
In this section we deal with (local) optimality conditions for a set of two-dimensional points $\ensuremath{\mathcal{P}}_N \equiv (\boldsymbol{x}, \boldsymbol{y}) \subset \mathbb{T}^2$, where $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{T}^N$ denote the
vectors of the first and second components of the points, respectively.
\subsection{Optimization Problem}
We want to minimize the squared worst case error
\begin{align*}
{\rm wce} & ( H^{1,\gamma}_\text{mix}(\mathbb{T}^2) ,\ensuremath{\mathcal{P}}_N)^2 = -1 + \frac{1}{N^2} \sum_{i,j=0}^{N-1} K_{1,\gamma}(x_i, x_j) \, K_{1,\gamma}(y_i, y_j) \\
= & -1 + \frac{1}{N^2} \sum_{i,j=0}^{N-1} \left(1 + \gamma k(|x_i - x_j|) + \gamma k(|y_i - y_j|) + \gamma^2 k(|x_i - x_j|) k(|y_i - y_j|) \right) \\
= & \frac{\gamma}{N^2} \sum_{i,j=0}^{N-1} \left( k(|x_i - x_j|) + k(|y_i - y_j|) + \gamma k(|x_i - x_j|) k(|y_i - y_j|) \right) \\
= & \frac{\gamma ( 2 k(0) + \gamma k(0)^2) }{N} \\
& \quad + \frac{2 \gamma}{N^2} \sum_{i=0}^{N-2} \sum_{j=i+1}^{N-1} \left( k(|x_i - x_j|) + k(|y_i - y_j|) + \gamma k(|x_i - x_j|) k(|y_i - y_j|) \right) .
\end{align*}
Thus, minimizing ${\rm wce} ( H^{1,\gamma}_\text{mix}(\mathbb{T}^2) ,\ensuremath{\mathcal{P}}_N)^2$ is equivalent to minimizing either
\begin{equation}
F_\gamma(\boldsymbol{x}, \boldsymbol{y}) := \sum_{i=0}^{N-2} \sum_{j=i+1}^{N-1} \left( k(|x_i - x_j|) + k(|y_i - y_j|) + \gamma k(|x_i - x_j|) k(|y_i - y_j|) \right)
\end{equation}
or
\begin{equation}
G_\gamma(\boldsymbol{x}, \boldsymbol{y}) := \sum_{i,j=0}^{N-1} (1+\gamma k(|x_i - x_j|))(1+\gamma k(|y_i - y_j|) ).
\end{equation}
For theoretical considerations we will sometimes use $G_\gamma$, while for the numerical implementation we will use $F_\gamma$ as objective function, since it has fewer summands.
Let $\tau,\sigma \in S_N$ be two permutations of $\{0,1,\dots,N-1\}$. Define the sets
\begin{equation}
D_{\tau, \sigma} = \left\{\boldsymbol{x} \in [0,1)^N, \boldsymbol{y} \in [0,1)^N: \begin{matrix}
x_{\tau(0)} \leq x_{\tau(1)} \leq \cdots \leq x_{\tau(N-1)} \\
y_{\sigma(0)} \leq y_{\sigma(1)} \leq \cdots \leq y_{\sigma(N-1)}
\end{matrix}
\right\}
\end{equation}
on which all points maintain the same order in both components; hence $|x_i - x_j| = s_{i,j} (x_i-x_j)$ holds with signs $s_{i,j} \in \{-1,1\}$ that are constant on $D_{\tau,\sigma}$, and analogously for the $y$-components. It follows that the restriction of $F_\gamma$ to $D_{\tau, \sigma}$, i.e. $F_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\tau,\sigma}}$, is a polynomial of degree $4$ in $(\boldsymbol{x}, \boldsymbol{y})$. Moreover, $F_{\gamma{|D_{\tau,\sigma}}}$ is convex for sufficiently small $\gamma$.
\begin{proposition}\label{prop_conv}
$F_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\tau,\sigma}}$ and $G_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\tau,\sigma}}$ are convex if $\gamma \in [0, 6]$.
\end{proposition}
\begin{proof}
It is enough to prove the claim for $$G_\gamma(\boldsymbol{x}, \boldsymbol{y}) = \sum_{i,j=0}^{N-1} (1+\gamma k(|x_i - x_j|))(1+\gamma k(|y_i - y_j|) ).$$
Since the sum of convex functions is convex and since $f(x-y)$ is convex if $f$ is, it is enough to show that
$ f(s,t) = \big( 1+\gamma k(s) \big) \big( 1+\gamma k(t) \big) $ is convex for $s,t \in [0,1]$.
To this end, we show that the Hesse matrix $\ensuremath{\mathcal{H}}(f)$ is positive definite if $0 \le \gamma<6$.
First, $f_{ss} = \gamma \big( 1+\gamma k(t) \big)$ is positive if $\gamma<24$.
Hence it is enough to check that the determinant of $\ensuremath{\mathcal{H}}(f)$ is positive, which is equivalent to the inequality
$$ \big( 1+\gamma k(s) \big) \big( 1+\gamma k(t) \big) > \gamma^2 \left( s-\frac12 \right)^2 \left( t-\frac12 \right)^2.$$
So it remains to see that $$ 1+\gamma k(s) = 1 + \frac{\gamma}{2} \left( s^2-s+\frac16 \right) > \gamma \left( s-\frac12 \right)^2.$$
But this is elementary to check for $0\le \gamma < 6$ and $s\in [0,1]$.
In the case $\gamma=6$ the determinant of $\ensuremath{\mathcal{H}}(f)$ can vanish and some additional argument, which we omit here, is necessary.
\qed\end{proof}
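The sign conditions used in this proof are easy to check numerically; the small sketch below (a sanity check, not part of the proof) evaluates the minimum of $\det \ensuremath{\mathcal{H}}(f)$ over a grid on $[0,1]^2$ for several values of $\gamma$.
\begin{verbatim}
# Sketch: Hessian of f(s,t) = (1 + gamma*k(s)) * (1 + gamma*k(t)) with
# k(t) = (t^2 - t + 1/6)/2, so k'(t) = t - 1/2 and k''(t) = 1.
import numpy as np

k  = lambda t: 0.5 * (t**2 - t + 1.0 / 6.0)
dk = lambda t: t - 0.5

def min_det_hessian(gamma, m=201):
    s = np.linspace(0.0, 1.0, m)
    S, T = np.meshgrid(s, s)
    f_ss = gamma * (1.0 + gamma * k(T))          # uses k'' = 1
    f_tt = gamma * (1.0 + gamma * k(S))
    f_st = gamma**2 * dk(S) * dk(T)
    return np.min(f_ss * f_tt - f_st**2)

for gamma in (1.0, 5.9, 6.0, 6.1):
    print(gamma, min_det_hessian(gamma))
# The minimum is positive for gamma < 6, zero for gamma = 6 (attained at the
# corners of [0,1]^2) and negative for gamma slightly above 6.
\end{verbatim}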
Since
\[
[0,1)^N \times [0,1)^N = \bigcup_{(\tau, \sigma) \in S_N \times S_N} D_{\tau, \sigma} ,
\]
one can obtain the global minimum of $F_\gamma$ on $[0,1)^N \times [0,1)^N$ by computing $\argmin_{(\boldsymbol{x}, \boldsymbol{y}) \in D_{\tau,\sigma}} F_\gamma(\boldsymbol{x}, \boldsymbol{y})$ for all $(\tau, \sigma) \in S_N \times S_N$ and choosing the smallest of all the local minima.
\subsection{Using the Torus Symmetries}
We now want to analyze how symmetries of the two dimensional torus $\mathbb{T}^2$ allow to reduce the number of regions $D_{\tau,\sigma}$ for which the optimization problem has to be solved.
The symmetries of the torus $\mathbb{T}^2$ which do not change the worst case error for the considered classes of periodic functions are generated by
\begin{enumerate}
\item Shifts in the first coordinate $x \mapsto x + c \mod 1$ and shifts in the second coordinate $y \mapsto y + c \mod 1$.
\item Reflection of the first coordinate $x \mapsto 1-x$ and reflection of the second coordinate $y \mapsto 1-y$.
\item Interchanging the first coordinate $x$ and the second coordinate $y$.
\item The points are indistinguishable, hence relabeling the points does not change the worst case error.
\end{enumerate}
Applying finite compositions of these symmetries to all the points in the point set $\ensuremath{\mathcal{P}}_N = \{ (x_0,y_0), \dots , (x_{N-1},y_{N-1}) \}$ leads to an equivalent point set with the same
worst case integration error.
This shows that the group of symmetries $G$ acting on the pairs $(\tau,\sigma)$ indexing $D_{\tau,\sigma}$ generated by the following operations
\begin{enumerate}
\item replacing $\tau$ or $\sigma$ by a shifted permutation: $\tau \mapsto ( \tau(0) + k \mod N , \dots, \tau(N-1) + k \mod N )$ or $\sigma \mapsto ( \sigma(0) + k \mod N , \dots, \sigma(N-1) + k \mod N )$
\item replacing $\tau$ or $\sigma$ by its flipped permutation: $\tau \mapsto ( \tau(N-1), \tau(N-2), \dots, \tau(1), \tau(0) )$ or $\sigma \mapsto ( \sigma(N-1), \sigma(N-2), \dots, \sigma(1), \sigma(0) )$
\item interchanging $\sigma$ and $\tau$: $(\tau,\sigma) \mapsto ( \sigma,\tau)$
\item applying a permutation $\pi\in S_N$ to both $\tau$ and $\sigma$ : $(\tau,\sigma) \mapsto (\pi \tau, \pi \sigma)$
\end{enumerate}
lead to equivalent optimization problems. So let us call the pairs $(\tau,\sigma)$ and $(\tau',\sigma')$ in $S_N \times S_N$ equivalent if they are in the same orbit with respect to the action of $G$.
In this case we write $(\tau,\sigma) \sim (\tau',\sigma')$.
Using the torus symmetries 1. and 4. it can always be arranged that $\tau = \text{id}$ and $\sigma(0)=0$, which together with fixing the point $(x_0,y_0)=(0,0)$ leads to the
sets
\begin{equation}
D_{\sigma} = \left\{\boldsymbol{x} \in [0,1)^N, \boldsymbol{y} \in [0,1)^N: \begin{array}{rl}
0= & x_0 \leq x_1 \leq \ldots \leq x_{N-1} \\
0= & y_{0} \leq y_{\sigma(1)} \leq \cdots \leq y_{\sigma(N-1)}
\end{array}
\right\},
\end{equation}
where $\sigma \in S_{N-1}$ denotes a permutation of $\{1,2,\ldots, N-1\}$.
But there are many more symmetries and it would be algorithmically desirable to cycle through exactly one representative of each equivalence class
without ever touching the other equivalent $\sigma$. This seems to be difficult to implement, hence we settled for a little less, which still reduces the number of permutations to be handled considerably.
To this end, let us define the symmetrized metric
\begin{equation}
d(i,j) = \min \{ |i-j| , N - |i-j| \} \qquad \mbox{for}\qquad 0 \le i,j \le N-1
\end{equation}
and the following subset of $S_{N}$.
\begin{definition}
The set of \emph{semi-canonical} permutations $\mathfrak{C}_N \subset S_N$ consists of permutations $\sigma$ which fulfill
\begin{enumerate}
\item[(i)] $\sigma(0)=0$
\item[(ii)] $d(\sigma(1),\sigma(2)) \le d(0,\sigma(N-1))$
\item[(iii)] $ \sigma(1) = \min \left\{ d(\sigma(i),\sigma(i+1)) \mid i=0,1,\dots,N-1 \right\}$
\item[(iv)] $\sigma$ is lexicographically smaller than $\sigma^{-1}$.
\end{enumerate}
Here we identify $\sigma(N)$ with $0=\sigma(0)$.
\end{definition}
This means that $\sigma$ is {\em semi-canonical} if the distance between $0=\sigma(0)$ and $\sigma(1)$ is minimal among all distances between $\sigma(i)$ and $\sigma(i+1)$,
which can be arranged by a shift. Moreover, the distance between $\sigma(1)$ and $\sigma(2)$ is at most as large as the distance between $\sigma(0)$ and $\sigma(N-1)$,
which can be arranged by a reflection and a shift if it is not the case.
Hence we have obtained the following lemma.
\begin{lemma}
For any permutation $\sigma \in S_N$ with $\sigma(0)=0$ there exists a semi-canonical $\sigma'$ such that the sets $D_\sigma$ and $D_{\sigma'}$ are equivalent up to torus symmetry.
\end{lemma}
Thus we need to consider only semi-canonical $\sigma$ which is easy to do algorithmically.
\begin{remark}
If $\sigma \in S_{N}$ is semi-canonical, it holds $\sigma(1) \leq N/2$.
\end{remark}
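Enumerating the semi-canonical permutations is a simple filter over $S_N$; the sketch below implements the four conditions directly. Here we read ``lexicographically smaller'' in condition (iv) as ``not larger'', so that permutations with $\sigma=\sigma^{-1}$ are kept; this reading is an assumption of the sketch.
\begin{verbatim}
# Sketch: brute-force enumeration of semi-canonical permutations of {0,...,N-1}.
import math
from itertools import permutations

def dist(i, j, N):                      # symmetrized distance d(i,j)
    return min(abs(i - j), N - abs(i - j))

def is_semi_canonical(sig, N):
    if sig[0] != 0:                                               # (i)
        return False
    gaps = [dist(sig[i], sig[(i + 1) % N], N) for i in range(N)]  # sigma(N) := sigma(0)
    if sig[1] != min(gaps):                                       # (iii)
        return False
    if dist(sig[1], sig[2], N) > dist(0, sig[N - 1], N):          # (ii)
        return False
    inv = [0] * N
    for i, s in enumerate(sig):
        inv[s] = i
    return list(sig) <= inv                                       # (iv), read as "not larger"

def semi_canonical(N):
    return [s for s in permutations(range(N)) if is_semi_canonical(s, N)]

for N in range(3, 9):
    print(N, len(semi_canonical(N)), "of", math.factorial(N), "permutations")
\end{verbatim}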
Another main advantage of considering our objective function only on the domains $D_\sigma$ is that it is not only convex but strictly convex there. This is due
to the fact that we fix $(x_0,y_0)=(0,0)$.
\begin{proposition} \label{prop_convexity}
$F_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\sigma}}$ and $G_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\sigma}}$ are strictly convex if $\gamma \in [0, 6]$.
\end{proposition}
\begin{proof}
Again it is enough to prove the claim for $$G_\gamma(\boldsymbol{x}, \boldsymbol{y}) = \sum_{i,j=0}^{N-1} (1+\gamma k(|x_i - x_j|))(1+\gamma k(|y_i - y_j|) ).$$
Now we use that the sum of a convex and a strictly convex function is again strictly convex.
Hence it is enough to show that the function
\begin{align*}
f(x_1,\dots,x_{N-1},y_1,\dots,y_{N-1} ) & = \sum_{i=1}^{N-1} (1+\gamma k(|x_i - x_0|))(1+\gamma k(|y_i - y_0|) ) \\
& = \sum_{i=1}^{N-1} (1+\gamma k(x_i))(1+\gamma k(y_i) )
\end{align*}
is strictly convex on $[0,1]^{N-1} \times [0,1]^{N-1}$.
In the proof of Proposition \ref{prop_conv} it was actually shown that $f_i(x_i,y_i)=(1+\gamma k(x_i))(1+\gamma k(y_i) )$ is strictly convex for
$(x_i,y_i) \in [0,1]^2$ for each fixed $i=1,\dots,N-1$.
Hence the strict convexity of $f$ follows from the following easily verified lemma. \qed
\begin{lemma}
Let $f_i:D_i \to \mathbb{R}$, $i=1,\dots,m$, be strictly convex functions on convex domains $D_i \subset \mathbb{R}^{d_i}$.
Then the function $$ f: D= D_1 \times \dots\times D_m \to \mathbb{R}, (z_1,\dots,z_m) \mapsto \sum_{i=1}^m f_i(z_i)$$ is strictly convex.
\end{lemma}
\end{proof}
Hence we have indeed a unique point in each $D_\sigma$ where the minimum of $F_\gamma$ is attained.
\subsection{Minimizing $F_\gamma$ on $D_\sigma$}
Our strategy will be to compute the local minimum of $F_\gamma$ on each region \linebreak $D_\sigma \subset [0,1)^N \times [0,1)^N$ for all semi-canonical permutations $\sigma \in \mathfrak{C}_N \subset S_{N}$ and determine the global minimum by choosing the smallest of all the local ones.
This gives for each $\sigma \in \mathfrak{C}_N$ the constrained optimization problem
\begin{equation} \label{eqn_constr_mini}
\min_{(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma} F_\gamma(\boldsymbol{x}, \boldsymbol{y}) \quad \text{ subject to } v_i(\boldsymbol{x}) \geq 0 \text { and } w_i(\boldsymbol{y}) \geq 0 \text{ for all } i=1,\ldots, N-1 ,
\end{equation}
where the inequality constraints are linear and given by
\begin{equation}
v_i(\boldsymbol{x}) = x_{i} - x_{i-1} \quad \text{ and } \quad w_i(\boldsymbol{y}) = y_{\sigma(i)} - y_{\sigma(i-1)} \quad \text{ for } i=1,\ldots, N-1 .
\end{equation}
In order to use the necessary (and due to local strict convexity also sufficient) conditions for local minima
\begin{align*}
\frac{\partial}{\partial x_k} F_\gamma(\boldsymbol{x}, \boldsymbol{y}) = 0 \quad \text{ and } \quad \frac{\partial}{\partial y_k} F_\gamma(\boldsymbol{x}, \boldsymbol{y}) = 0 \quad \text{ for } k=1,\ldots, N-1
\end{align*}
for $(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma$ we need to evaluate the partial derivatives of $F_\gamma$.
\begin{proposition}
For a given permutation $\sigma \in \mathfrak{C}_N$ the partial derivative of $F_{\gamma|D_\sigma}$ with respect to the second component $\boldsymbol{y}$ is given by
\begin{equation} \label{eqn_first_deriv}
\frac{\partial}{\partial y_k} F_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_\sigma} = y_k \left( \sum_{\substack{ i=0 \\ i \neq k} }^{N-1} c_{i,k} \right) - \sum_{\substack{ i=0 \\ i \neq k} }^{N-1} c_{i,k} y_i + \frac{1}{2} \left( \sum_{i=0}^{k-1} c_{i,k} s_{i,k} - \sum_{j=k+1}^{N-1} c_{k,j} s_{k,j} \right),
\end{equation}
where $s_{i,j} = \text{sgn}(y_i - y_j)$ and $c_{i,j} := 1+ \gamma k(|x_i - x_j|) = c_{j,i}$.
Interchanging $\boldsymbol{x}$ and $\boldsymbol{y}$ the same result holds for the partial derivatives with respect to $\boldsymbol{x}$ with the obvious modification to $c_{i,j}$ and the simplification that $s_{i,j} = -1$.
The second order derivatives with respect to $\boldsymbol{y}$ are given by
\begin{equation} \label{eqn_second_deriv}
\frac{\partial^2}{\partial y_k \partial y_j} F(\boldsymbol{x}, \boldsymbol{y})_{|D_\sigma} = \begin{cases}
\sum_{i=0 }^{k-1} c_{i,k} + \sum_{i=k+1 }^{N-1} c_{i,k} & \text{ for } j=k \\
-c_{k,j} & \text{ for } j \neq k
\end{cases}, \quad k,j \in \{1,\ldots, N-1\}
\end{equation}
Again, the analogue for $\frac{\partial^2}{\partial x_k \partial x_j} F(\boldsymbol{x}, \boldsymbol{y})_{|D_\sigma}$ is obtained with the obvious modification $c_{i,j} = 1+\gamma k(|y_i-y_j|)$.
\end{proposition}
\begin{proof}
We prove the claim for the partial derivative with respect to $\boldsymbol{y}$:
\begin{align*}
\frac{\partial}{\partial y_k} F_\gamma(\boldsymbol{x}, \boldsymbol{y}) = & \sum_{i=0}^{N-2} \sum_{j=i+1}^{N-1} \frac{\partial}{\partial y_k} k(|y_i - y_j|) \underbrace{ \left( 1+ \gamma k(|x_i - x_j|) \right) }_{ =: c_{i,j} } + \frac{\partial}{\partial y_k} k(|x_i - x_j|) \\
= & \sum_{i=0}^{N-2} \sum_{j=i+1}^{N-1} c_{i,j} \, \frac{\partial}{\partial y_k} k(|y_i - y_j|) \\
= & \sum_{i=0}^{N-2} \sum_{j=i+1}^{N-1} c_{i,j}\; k'( s_{i,j} \, (y_i - y_j )) \cdot \begin{cases}
s_{i,j} & \text{ for } i = k \\
-s_{i,j} & \text{ for } j = k \\
0 & \text{ else }
\end{cases} \\
= & \sum_{j=k+1}^{N-1} c_{k,j} s_{k,j} \; \left( s_{k,j} \, (y_k - y_j ) - \frac{1}{2} \right) - \sum_{i=0}^{k-1} c_{i,k} s_{i,k} \; \left( s_{i,k} \, (y_i - y_k ) - \frac{1}{2} \right) \\
= & y_k \left( \sum_{\substack{ i=0 \\ i \neq k}}^{N-1} c_{i,k} \right) - \sum_{\substack{ i=0 \\ i \neq k}}^{N-1} c_{i,k} y_i + \frac{1}{2} \left( \sum_{i=0}^{k-1} c_{i,k} s_{i,k} - \sum_{j=k+1}^{N-1} c_{k,j} s_{k,j} \right) .
\end{align*}
From this we immediately get the second derivative \eqref{eqn_second_deriv}. \qed
\end{proof}
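For completeness, the gradient formula \eqref{eqn_first_deriv} is straightforward to implement and to validate against finite differences of $F_\gamma$; the following sketch (an illustration, not the authors' code) does this for a random point set with pairwise distinct components, which lies in the interior of some $D_{\tau,\sigma}$.
\begin{verbatim}
# Sketch: objective F_gamma and the closed-form partial derivatives w.r.t. y
# (formula (eqn_first_deriv)), validated against central finite differences.
import numpy as np

k = lambda t: 0.5 * (t**2 - t + 1.0 / 6.0)

def F(x, y, gamma):
    val = 0.0
    for i in range(x.size - 1):
        for j in range(i + 1, x.size):
            a, b = k(abs(x[i] - x[j])), k(abs(y[i] - y[j]))
            val += a + b + gamma * a * b
    return val

def grad_y(x, y, gamma):
    N = x.size
    C = 1.0 + gamma * k(np.abs(x[:, None] - x[None, :]))   # c_{i,j}
    S = np.sign(y[:, None] - y[None, :])                    # s_{i,j} = sgn(y_i - y_j)
    g = np.empty(N)
    for m in range(N):
        c = C[:, m]
        g[m] = (y[m] * (c.sum() - c[m]) - (c @ y) + c[m] * y[m]
                + 0.5 * (c[:m] @ S[:m, m] - C[m, m+1:] @ S[m, m+1:]))
    return g

rng = np.random.default_rng(1)
N, gamma = 6, 6.0
x = np.sort(rng.random(N)); x[0] = 0.0
y = rng.random(N); y[0] = 0.0

num, h = np.zeros(N), 1e-6
for i in range(N):
    e = np.zeros(N); e[i] = h
    num[i] = (F(x, y + e, gamma) - F(x, y - e, gamma)) / (2 * h)
print(np.max(np.abs(num - grad_y(x, y, gamma))))   # small, e.g. of order 1e-9
\end{verbatim}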
\subsection{Lower Bounds of $F_\gamma$ on $D_\sigma$}
So far we are able to approximate local minima of $F_\gamma$ on a given $D_\sigma$. If this is done for all $\sigma \in \mathfrak{C}_N$, we obtain a candidate for the global minimum, but due to the finite precision of floating point arithmetic one can never be sure to be close to the actual global minimum.
However, it is also possible to compute a lower bound on the minimum of $F_\gamma$ over each $D_\sigma$ using Wolfe duality for constrained optimization.
It is known \cite{NW06} that for a convex problem with linear inequality constraints like \eqref{eqn_constr_mini} the Lagrangian
\begin{align}
\ensuremath{\mathcal{L}}_F(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\lambda}, \boldsymbol{\mu}) := & F(\boldsymbol{x}, \boldsymbol{y}) - \boldsymbol{\lambda}^T \boldsymbol{v}(\boldsymbol{x}) - \boldsymbol{\mu}^T \boldsymbol{w}(\boldsymbol{y}) \\
= & F(\boldsymbol{x}, \boldsymbol{y}) - \sum_{i=1}^{N-1} \left( \lambda_i v_i(\boldsymbol{x}) + \mu_i w_{i}(\boldsymbol{y}) \right)
\end{align}
gives a lower bound on $F$, i.e.
$$\min_{(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma} F(\boldsymbol{x}, \boldsymbol{y}) \geq \ensuremath{\mathcal{L}}_F(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{y} }, \boldsymbol{\lambda}, \boldsymbol{\mu}) $$
for all $(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{y}}, \boldsymbol{\lambda}, \boldsymbol{\mu})$ that fulfill the constraint
\begin{equation}
\nabla_{(\boldsymbol{x}, \boldsymbol{y})} \ensuremath{\mathcal{L}}_F(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{y}}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = 0 \quad \text{ and } \quad \boldsymbol{\lambda}, \boldsymbol{\mu} \geq 0 \text{ (component-wise)}.
\end{equation}
Here, $\nabla_{(\boldsymbol{x}, \boldsymbol{y})} = (\nabla_{\boldsymbol{x}}, \nabla_{\boldsymbol{y}})$, where $\nabla_{\boldsymbol{x}}$ denotes the gradient of a function with respect to the variables in $\boldsymbol{x}$.
Hence it is our goal to find for each $D_\sigma$ an admissible point $(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{y}}, \boldsymbol{\lambda}, \boldsymbol{\mu})$ that yields a lower bound larger than the value of some given candidate for the global minimum. If the relevant computations are carried out in infinite precision rational number arithmetic these bounds are mathematically reliable.
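As a simple one-dimensional illustration of this bound, consider minimizing $f(x)=x^2$ subject to $v(x)=x-1\geq 0$, whose minimum is $1$. Any $\tilde{x}$ with $\ensuremath{\mathcal{L}}'_f(\tilde{x},\lambda)=2\tilde{x}-\lambda=0$ and $\lambda\geq 0$ gives
$$\ensuremath{\mathcal{L}}_f(\tilde{x},\lambda)=\tilde{x}^2-2\tilde{x}(\tilde{x}-1)=2\tilde{x}-\tilde{x}^2\leq 1,$$
with equality exactly at the constrained minimizer $\tilde{x}=1$; the closer the admissible point is to the minimizer, the tighter the lower bound.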
In order to accomplish this we first have to compute the Lagrangian of \eqref{eqn_constr_mini}. To this end, let ${\bf P}_\sigma \in \{0,1\}^{(N-1) \times (N-1)}$ denote the permutation matrix corresponding to $\sigma \in S_{N-1}$ and
\begin{equation} \label{eqnB}
\boldsymbol{B} := \begin{pmatrix}
1 & -1 & 0 & \ldots & 0 & 0 \\
0 & 1 & -1 & \ldots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & & \ldots & 0 & 1 & -1 \\
0 & & \ldots & &0 & 1 \\
\end{pmatrix} \in \mathbb{R}^{(N-1) \times (N-1)} .
\end{equation}
Then the partial derivatives of $\ensuremath{\mathcal{L}}_F$ with respect to $\boldsymbol{x}$ and $\boldsymbol{y}$ are given by
\begin{align}
\nabla_{\boldsymbol{x}} \ensuremath{\mathcal{L}}_F(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = & \nabla_{\boldsymbol{x}} F(\boldsymbol{x}, \boldsymbol{y}) - \begin{pmatrix}
\lambda_1 - \lambda_2 \\
\vdots \\
\lambda_{N-2} - \lambda_{N-1} \\
\lambda_{N-1}
\end{pmatrix}
= \nabla_{\boldsymbol{x}} F(\boldsymbol{x}, \boldsymbol{y}) - \boldsymbol{B} \boldsymbol{\lambda}
\end{align}
and
\begin{align}
\nabla_{\boldsymbol{y}} \ensuremath{\mathcal{L}}_F(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = & \nabla_{\boldsymbol{y}} F(\boldsymbol{x}, \boldsymbol{y}) - \begin{pmatrix}
\mu_{\sigma(1)} - \mu_{\sigma(2)} \\
\vdots \\
\mu_{\sigma(N-2)} - \mu_{\sigma(N-1)} \\
\mu_{\sigma(N-1)}
\end{pmatrix}
= \nabla_{\boldsymbol{y}} F(\boldsymbol{x}, \boldsymbol{y}) - \boldsymbol{B} {\bf P}_\sigma \boldsymbol{\mu} .
\end{align}
This leads to the following theorem.
\begin{theorem} \label{thm_lower_bound}
For $\sigma \in \mathfrak{C}_N$ and $\delta > 0$ let the point $(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) \in D_\sigma$ fulfill
\begin{equation} \label{eqn_offset_grad}
\frac{\partial}{\partial x_k} F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) = \delta \quad \text{ and } \quad\frac{\partial}{\partial y_k} F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) = \delta \quad \text{ for } k=1,\ldots, N-1 .
\end{equation}
Then
\begin{align} \label{eqn_LowerBound}
F(\boldsymbol{x}, \boldsymbol{y}) \geq & F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) - \delta \sum_{i=1}^{N-1} \left( (N-i) \cdot v_i(\tilde{\boldsymbol{x}}_\sigma) + \sigma(N-i) w_i(\tilde{\boldsymbol{y}}_\sigma) \right) \\
> & F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) - \delta N^2 \label{eqn_LowerBound2}
\end{align}
holds for all $(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma$.
\end{theorem}
\begin{proof}
Choosing
\begin{equation} \label{eqn_multipl}
\boldsymbol{\lambda} = \boldsymbol{B}^{-1} \nabla_{\boldsymbol{x}} F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) \quad \text{ and } \quad \boldsymbol{\mu} = {\bf P}_\sigma^{-1} \boldsymbol{B}^{-1} \nabla_{\boldsymbol{y}} F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma)
\end{equation}
yields
\begin{equation}
\nabla_{\boldsymbol{x}} F(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{y}}) = \boldsymbol{B} \boldsymbol{\lambda} \quad \text{ and } \quad \nabla_{\boldsymbol{y}} F(\tilde{\boldsymbol{x}}, \tilde{\boldsymbol{y}}) = \boldsymbol{B} {\bf P}_\sigma \boldsymbol{\mu} .
\end{equation}
A short computation shows that the inverse of $\boldsymbol{B}$ from \eqref{eqnB} is given by
\[
\boldsymbol{B}^{-1} := \begin{pmatrix}
1 & 1 & \ldots & 1 \\
0 & 1 & \ldots & 1 \\
\vdots & 0 & \ddots & \vdots \\
0 & \ldots & 0 & 1 \\
\end{pmatrix} \in \mathbb{R}^{(N-1) \times (N-1)} ,
\]
which, together with \eqref{eqn_offset_grad}, yields $\boldsymbol{\lambda}, \boldsymbol{\mu} > 0$ and hence by Wolfe duality gives \eqref{eqn_LowerBound}. The second inequality \eqref{eqn_LowerBound2} then follows from noting that both $|v_i(\boldsymbol{x})|$ and $|w_i(\boldsymbol{y})|$ are bounded by $1$, so that the sum in \eqref{eqn_LowerBound} is bounded by $\sum_{i=1}^{N-1} \left( (N-i) + \sigma(N-i) \right) = 2\sum_{i=1}^{N-1} i = N(N-1) < N^2$. \qed
\end{proof}
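The particular coefficients in \eqref{eqn_LowerBound} can also be read off directly: since $\boldsymbol{B}^{-1}$ is the upper triangular matrix of ones, \eqref{eqn_multipl} together with \eqref{eqn_offset_grad} gives $\lambda_i = \sum_{j=i}^{N-1}\delta = (N-i)\,\delta$, and the components of $\boldsymbol{\mu}$ are the same values rearranged according to $\sigma$, which accounts for the factors $\sigma(N-i)$ multiplying $w_i(\tilde{\boldsymbol{y}}_\sigma)$.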
Now suppose we have some candidate $(\boldsymbol{x}^*, \boldsymbol{y}^*)\in D_{\sigma^*}$ for an optimal point set. If we can find for all other $\sigma \in \mathfrak{C}_N$ points $(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma)$ that fulfill \eqref{eqn_offset_grad} and
$$F_\gamma(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) - \delta N^2 \geq F_\gamma(\boldsymbol{x}^*, \boldsymbol{y}^*)$$
for some $\delta > 0$, we can be sure that $D_{\sigma^*}$ is (up to torus symmetry) the unique domain $D_\sigma$ that contains the globally optimal point set.
\section{Numerical Investigation of Optimal Point Sets}
In this section we numerically obtain optimal point sets with respect to the worst case error in $H^1_\text{mix}$. Moreover, we present a proof by exhaustion that these point sets are indeed approximations to the unique (modulo torus symmetry) minimizers of $F_\gamma$.
Since integration lattices are local minima, if the $D_\sigma$ containing the global minimizer corresponds to an integration lattice, this integration lattice is the exact global minimizer.
\subsection{Numerical Minimization with Alternating Directions}
In order to obtain the global minimum $(\boldsymbol{x}^*, \boldsymbol{y}^*)$ of $F_\gamma$ we are going to compute
\begin{equation}
\sigma^* := \argmin_{\sigma \in \mathfrak{C}_N} \min_{(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma} F_\gamma(\boldsymbol{x}, \boldsymbol{y}),
\end{equation}
where the inner minimum has a unique solution due to Proposition \ref{prop_convexity}. Moreover, since $D_\sigma$ is a convex domain we know that the local minimum of $F_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\sigma}}$ is not on the boundary. Hence we can restrict our search for optimal point sets to the interior of $D_\sigma$, where $F_\gamma$ is differentiable.
Instead of directly employing a local optimization technique, we will make use of the special structure of $F_\gamma$.
While $F_\gamma(\boldsymbol{x}, \boldsymbol{y})_{|D_{\sigma}}$ is a polynomial of degree four, the functions
\begin{equation} \label{eqn_quadpoly}
\boldsymbol{x} \mapsto F_\gamma(\boldsymbol{x}, \boldsymbol{y}_0)_{|D_{\sigma}} \quad \text{ and } \quad \boldsymbol{y} \mapsto F_\gamma(\boldsymbol{x}_0, \boldsymbol{y})_{|D_{\sigma}},
\end{equation}
where one coordinate direction is fixed, are quadratic polynomials, which have unique minima in $D_\sigma$. We are going to use this property within an alternating minimization approach. This means that the objective function $F$ is not minimized along all coordinate directions simultaneously, but with respect to successively alternating blocks of coordinates. If these blocks have size one, this method is usually referred to as \emph{coordinate descent} \cite{LuoTseng} or as the nonlinear Gau\ss{}-Seidel method \cite{Grippo2000}. It is successfully employed in various applications such as expectation maximization or tensor approximation \cite{McLachlan, Uschmajew}.
In our case we will alternate between minimizing $F_\gamma(\boldsymbol{x}, \boldsymbol{y})$ along the first coordinate block $\boldsymbol{x} \in (0,1)^{N-1}$ and the second one $\boldsymbol{y} \in (0,1)^{N-1}$, which can be done exactly due to the quadratic polynomial property of the partial objectives \eqref{eqn_quadpoly}. The method is outlined in Algorithm \ref{algo_ao}, which for off-set parameter $\delta=0$ approximates the local minimum of
$F_\gamma$ on $D_\sigma$. For $\delta>0$ it obtains feasible points that fulfill \eqref{eqn_offset_grad}, i.e., $\nabla_{(\boldsymbol{x}, \boldsymbol{y})} F_\gamma = (\delta, \ldots, \delta)= \delta {\bf 1}$.
Linear convergence of the alternating optimization method for strictly convex functions was for example proven in \cite{Ortega, bezdek87}.
\begin{algorithm}[tb]
\textbf{Given:} Permutation $\sigma \in \mathfrak{C}_N$, tolerance $\varepsilon > 0$ and off-set $\delta \geq 0$. \\
\textbf{Initialize:}
\begin{enumerate}
\item $\boldsymbol{x}^{(0)} := (0, \frac{1}{N}, \ldots, \frac{N-1}{N})$ and $\boldsymbol{y}^{(0)} = (0, \frac{\sigma(1)}{N}, \ldots, \frac{\sigma(N-1)}{N})$.
\item $k:=0$.
\end{enumerate}
\Repeat{ $ \sqrt{ \|\nabla_{\boldsymbol{x}} - \delta \mathbf{1}\|^2 + \|\nabla_{\boldsymbol{y}} - \delta \mathbf{1}\|^2 } < \varepsilon$ }{
\begin{enumerate}
\item Compute $\boldsymbol{H}_{\boldsymbol{x}} := \left( \partial_{x_i} \partial_{x_j} F_\gamma(\boldsymbol{x}^{(k)}, \boldsymbol{y}^{(k)}) \right)_{i,j=1}^{N-1}$ and $\nabla_{\boldsymbol{x}} := \left( \partial_{x_i} F_\gamma(\boldsymbol{x}^{(k)}, \boldsymbol{y}^{(k)}) \right)_{i=1}^{N-1}$ by \eqref{eqn_second_deriv} and \eqref{eqn_first_deriv}.
\item Update $\boldsymbol{x}^{(k+1)} := \boldsymbol{x}^{(k)} - \boldsymbol{H}_{\boldsymbol{x}}^{-1} \left( \nabla_{\boldsymbol{x}} - \delta \mathbf{1} \right) $ via Cholesky factorization.
\item Compute $\boldsymbol{H}_{\boldsymbol{y}} := \left( \partial_{y_i} \partial_{y_j} F_\gamma(\boldsymbol{x}^{(k+1)}, \boldsymbol{y}^{(k)}) \right)_{i,j=1}^{N-1}$ and $\nabla_{\boldsymbol{y}} := \left( \partial_{y_i} F_\gamma(\boldsymbol{x}^{(k+1)}, \boldsymbol{y}^{(k)}) \right)_{i=1}^{N-1}$.
\item Update $\boldsymbol{y}^{(k+1)} := \boldsymbol{y}^{(k)} - \boldsymbol{H}_{\boldsymbol{y}}^{-1} \left( \nabla_{\boldsymbol{y}} - \delta \mathbf{1} \right)$ via Cholesky factorization.
\item $k := k+1$.
\end{enumerate}
}
\textbf{Output}: point set $(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma$ with $\nabla_{\boldsymbol{x}} F_\gamma(\boldsymbol{x}, \boldsymbol{y}) \approx \delta \mathbf{1}$ and $\nabla_{\boldsymbol{y}} F_\gamma(\boldsymbol{x}, \boldsymbol{y}) \approx \delta \mathbf{1}$.
\caption{Alternating minimization algorithm. For off-set $\delta=0$ it finds local minima of $F_\gamma$. For $\delta > 0$ it obtains feasible points used by Algorithm \ref{algo_lowerbound}. } \label{algo_ao}
\end{algorithm}
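The core iteration of Algorithm \ref{algo_ao} can be sketched in a few lines of Python. The gradient and Hessian oracles are kept abstract here (in practice they are given by \eqref{eqn_first_deriv} and \eqref{eqn_second_deriv}, and the linear solves would use a Cholesky factorization); the toy objective at the end, which is quadratic in each block, is only meant to illustrate that the iteration drives both block gradients to $\delta\mathbf{1}$, and is not the cubature functional $F_\gamma$.
\begin{verbatim}
import numpy as np

def alternating_newton(grad_x, hess_x, grad_y, hess_y, x, y,
                       delta=0.0, eps=1e-12, maxit=200):
    # Alternating exact block minimization (one Newton step per block);
    # for delta > 0 the iteration targets grad F = delta * 1 in both blocks.
    one = np.ones_like(x)
    for _ in range(maxit):
        x = x - np.linalg.solve(hess_x(x, y), grad_x(x, y) - delta * one)
        y = y - np.linalg.solve(hess_y(x, y), grad_y(x, y) - delta * one)
        res = np.hypot(np.linalg.norm(grad_x(x, y) - delta * one),
                       np.linalg.norm(grad_y(x, y) - delta * one))
        if res < eps:
            break
    return x, y

# toy stand-in objective, convex and quadratic in each block:
A = np.array([[2.0, 0.3], [0.3, 2.0]])
gx = lambda x, y: A @ x + 0.1 * y - 1.0
gy = lambda x, y: A @ y + 0.1 * x - 0.5
x, y = alternating_newton(gx, lambda *_: A, gy, lambda *_: A,
                          np.zeros(2), np.zeros(2), delta=1e-3)
print(gx(x, y), gy(x, y))   # both approach (1e-3, 1e-3)
\end{verbatim}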
\subsection{Obtaining Lower Bounds}
By now we are able to obtain a point set $(\boldsymbol{x}^*, \boldsymbol{y}^*) \in D_{\sigma^*}$ as a candidate for a global minimum of $F_\gamma$ by finding local minima on each $D_\sigma, \sigma \in \mathfrak{C}_N$. At first sight we cannot be sure that we have chosen the right $\sigma^*$, because the value of $\min_{(\boldsymbol{x}, \boldsymbol{y}) \in D_\sigma} F_\gamma(\boldsymbol{x}, \boldsymbol{y})$ can only be computed numerically.
On the other hand, Theorem \ref{thm_lower_bound} allows us to compute lower bounds for all the other domains $D_\sigma$ with $\sigma \in \mathfrak{C}_N$. If we were able to obtain for each $\sigma$ a point $(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma)$, such that
$$\min_{(\boldsymbol{x},\boldsymbol{y}) \in D_{\sigma^*}} F_\gamma(\boldsymbol{x},\boldsymbol{y}) \approx \theta_N := F_\gamma(\boldsymbol{x}^*,\boldsymbol{y}^*) < \ensuremath{\mathcal{L}}_F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) - 2N^2 \delta \leq F_\gamma(\boldsymbol{x}, \boldsymbol{y}),$$
we could be sure that the global optimum is indeed located in $D_{\sigma^*}$ and that $(\boldsymbol{x}^*, \boldsymbol{y}^*)$ is a good approximation to it. Luckily, this is the case. Of course certain computations cannot be done in standard double floating point arithmetic. Instead we use arbitrary precision rational number (APR) arithmetic from the GNU Multiple Precision arithmetic library GMP (\url{http://www.gmplib.org}). Compared to standard floating point arithmetic in double precision this is very expensive, but it only has to be used in certain parts of the algorithm.
The resulting procedure is outlined in Algorithm \ref{algo_lowerbound}, where we marked those parts which require APR arithmetic.
\begin{algorithm}[tb]
\textbf{Given:} Optimal point candidate $\ensuremath{\mathcal{P}}_N:= (\boldsymbol{x}^*, \boldsymbol{y}^*) \in D_\sigma$ with $\sigma \in \mathfrak{C}_N$, tolerance $\varepsilon > 0$ and off-set $\delta > 0$. \\
\textbf{Initialize:}
\begin{enumerate}
\item Compute $\theta_N := F_\gamma(\boldsymbol{x}^*, \boldsymbol{y}^*)$ \begin{color}{red} (in APR arithmetic) \end{color}.
\item $\Xi_N := \emptyset$.
\end{enumerate}
\For{ \textbf{\emph{all}} \,\, $\sigma \in \mathfrak{C}_N$ }{
\begin{enumerate}
\item Find $(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) \in D_\sigma$ s.t. $\nabla_{(\boldsymbol{x}, \boldsymbol{y})}F_\gamma(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) \approx \delta \mathbf{1}$ by Algorithm \ref{algo_ao}.
\item Compute $\boldsymbol{\lambda} := \boldsymbol{B}^{-1} \nabla_{\boldsymbol{x}} F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma) \quad \text{ and } \quad \boldsymbol{\mu} := {\bf P}_\sigma^{-1} \boldsymbol{B}^{-1} \nabla_{\boldsymbol{y}} F(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma)$ \begin{color}{red} (in APR arithmetic) \end{color}.
\item Verify $\boldsymbol{\lambda}, \boldsymbol{\mu} > 0$.
\item Evaluate $\beta_\sigma := \ensuremath{\mathcal{L}}_{F_\gamma}(\tilde{\boldsymbol{x}}_\sigma, \tilde{\boldsymbol{y}}_\sigma, \boldsymbol{\lambda}, \boldsymbol{\mu})$ \begin{color}{red} (in APR arithmetic) \end{color}.
\item \textbf{If} ( $\beta_\sigma \leq \theta_N$ ) $\Xi_N := \Xi_N \cup \{\sigma\}$.
\end{enumerate}
}
\textbf{Output}: Set $\Xi_N$ of permutations $\sigma$ for which the computed lower bound on $D_\sigma$ is smaller than $\theta_N$.
\caption{Computation of lower bound on $D_\sigma$.} \label{algo_lowerbound}
\end{algorithm}
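Steps 2 and 3 of Algorithm \ref{algo_lowerbound} are cheap even in exact arithmetic, because $\boldsymbol{B}^{-1}$ is the upper triangular matrix of ones, so that $\boldsymbol{B}^{-1}\boldsymbol{g}$ is just a reversed cumulative sum. The following sketch uses Python's \texttt{fractions.Fraction} in place of the GMP-based APR arithmetic; the gradient values are hypothetical, the permutation by ${\bf P}_\sigma^{-1}$ is left out (it does not affect positivity), and the evaluation of $\ensuremath{\mathcal{L}}_{F_\gamma}$ in step 4 additionally needs the constraint functions $v_i, w_i$ and is omitted here.
\begin{verbatim}
from fractions import Fraction

def Binv_times(g):
    # (B^{-1} g)_i = sum_{j >= i} g_j  (B^{-1} is upper triangular with ones)
    out, acc = [], Fraction(0)
    for v in reversed(g):
        acc += v
        out.append(acc)
    return out[::-1]

# hypothetical exact gradient values at the feasible point (illustration only)
grad_x = [Fraction(1, 1000), Fraction(3, 2000), Fraction(1, 500)]
grad_y = [Fraction(1, 1024), Fraction(1, 512), Fraction(3, 1024)]

lam = Binv_times(grad_x)                       # lambda = B^{-1} grad_x F
mu  = Binv_times(grad_y)                       # B^{-1} grad_y F, still to be
                                               # permuted by P_sigma^{-1}
print(lam, mu, all(v > 0 for v in lam + mu))   # positivity check (step 3)
\end{verbatim}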
\subsection{Results}
In Figures \ref{fig_plots1} and \ref{fig_plots2} the optimal point sets for $N=2,\ldots, 16$ and both $\gamma =1$ and $\gamma =6$ are plotted. It can be seen that they are close to lattice point sets, which justifies using them as start points in Algorithm \ref{algo_ao}. The distance to lattice points seems to be small if $\gamma$ is small.
In Table \ref{table_results} we list the permutations $\sigma$ for which $D_\sigma$ contains an optimal set of cubature points. In the second column the total number $|\mathfrak{C}_N|$ of semi-canonical permutations that had to be considered is shown. It grows approximately like $\frac{1}{2}(N-2)!$. Moreover, we computed the minimal worst case error and periodic $L_2$-discrepancies.
In some cases we found more than one semi-canonical permutation $\sigma$ for which $D_\sigma$ contains a point set yielding the optimal worst case error. However, these turn out to be equivalent permutations.
In the following list, the torus symmetries used to show the equivalency of the permutations are given. All operations are modulo 1.
\begin{itemize}
\item $N=7$: $(x,y) \mapsto (1-y,x)$
\item $N=9$: $(x,y) \mapsto (y-2/9, x-1/9)$
\item $N=11$: $(x,y) \mapsto (y+5/11 , x-4/11 )$
\item $N=14$: $(x,y) \mapsto (x-4/14 , y+6/14 )$
\item $N=15$: $(x,y) \mapsto (y+3/15 , x+2/15 ), (y-2/15 , 12/15-x ), (y-6/15 , 4/15-x ) $
\item $N=16$: $(x,y) \mapsto (1/16-x , 3/16-y )$
\end{itemize}
In all the examined cases $N \in \{2,\ldots, 16\}$ Algorithm \ref{algo_lowerbound} produced sets $\Xi_N$ which contained exactly the permutations that were previously obtained by Algorithm \ref{algo_ao} and are listed in Table \ref{table_results}. Thus we can be sure that the respective $D_\sigma$ contained minimizers of $F_\gamma$, which on each $D_\sigma$ are unique. Hence we know that our numerical approximation of the minimum is close to the true global minimum, which (modulo torus symmetries) is unique. In the cases $N=1,2,3,5,7,8,12,13$ the obtained global minima are integration lattices.
\begin{table}[t]
\begin{tabular}{r|c|c|c|l|c}
$N$ & $| \mathfrak{C}_N |$ & ${\rm wce}(H^{1,1}_\text{mix},\ensuremath{\mathcal{P}}_N^*)$ & $D_2 ({\ensuremath{\mathcal{P}}}_N^*)$ & $\sigma^*$ & Lattice \\
\hline
\bf 1 & 0 & 0.416667 & 0.372678 & (0) & \checkmark \\
\hline
\bf 2 & 1 & 0.214492 & 0.212459 & (0 1) & \checkmark \\
\hline
\bf 3 & 1 & 0.146109 & 0.153826 & (0 1 2) & \checkmark \\
\hline
4 & 2 & 0.111307 & 0.121181 & (0 1 3 2) & \\
\hline
\bf 5 & 5 & 0.0892064 & 0.0980249 & (0 2 4 1 3) & \checkmark \\
\hline
6 & 13 & 0.0752924 & 0.0850795 & (0 2 4 1 5 3) & \\
\hline
7 & 57 & 0.0650941 & 0.0749072 & (0 2 4 6 1 3 5), (0 3 6 2 5 1 4) & \checkmark \\
\hline
\bf 8 & 282 & 0.056846 & 0.0651562 & (0 3 6 1 4 7 2 5) & \checkmark \\
\hline
9 & 1,862 & 0.0512711 & 0.0601654 & (0 2 6 3 8 5 1 7 4), (0 2 7 4 1 6 3 8 5) & \\
\hline
10 & 14,076 & 0.0461857 & 0.054473 & (0 3 7 1 4 9 6 2 8 5) & \\
\hline
11 & 124,995 & 0.0422449 & 0.050152 & \begin{minipage}{0.4\linewidth}(0 3 8 1 6 10 4 7 2 9 5),\\ (0 3 9 5 1 7 10 4 8 2 6) \end{minipage} & \\
\hline
12 & 1,227,562 & 0.0370732 & 0.0456259 & (0 5 10 3 8 1 6 11 4 9 2 7) & \checkmark \\
\hline
\bf 13 & 13,481,042 & 0.0355885 & 0.0421763 & (0 5 10 2 7 12 4 9 1 6 11 3 8) & \checkmark \\
\hline
14 & 160,456,465 & 0.0333232 & 0.0400524 & \begin{minipage}{0.4\linewidth}(0 5 10 2 8 13 4 11 6 1 9 3 12 7),\\ (0 5 10 3 12 7 1 9 4 13 6 11 2 8) \end{minipage} & \\
\hline
15 & 2,086,626,584 & 0.0312562 & 0.0379055 & \begin{minipage}{0.4\linewidth}(0 4 9 13 6 1 11 3 8 14 5 10 2 12 7),\\ (0 5 11 2 7 14 9 3 12 6 1 10 4 13 8),\\ (0 5 11 2 8 13 4 10 1 6 14 9 3 12 7),\\ (0 5 11 2 8 13 6 1 10 4 14 7 12 3 9) \end{minipage} & \\
\hline
16 & 29,067,602,676 & 0.0294507 & 0.0359673 & \begin{minipage}{0.4\linewidth}(0 3 11 5 14 9 1 7 12 4 15 10 2 6 13 8),\\ (0 3 11 6 13 1 9 4 15 7 12 2 10 5 14 8)\end{minipage} & \\
\hline
\end{tabular}
\caption{List of semi-canonical permutations $\sigma$, such that $D_\sigma$ contains an optimal set of cubature points for $N=1,\ldots, 16$. } \label{table_results}
\end{table}
\begin{figure}
\caption{Optimal point sets for $N=2, \ldots, 16$ and $\gamma = 1$.}
\label{fig_plots1}
\end{figure}
\begin{figure}
\caption{Optimal point sets for $N=2, \ldots, 16$ and $\gamma = 6$.}
\label{fig_plots2}
\end{figure}
\section{Conclusion}
In the present paper we computed optimal point sets for quasi--Monte Carlo cubature of bivariate periodic functions with mixed smoothness of order one by decomposing the required global optimization problem into approximately $(N-2)! / 2$ local ones. Moreover, we computed lower bounds for each local problem using arbitrary precision rational number arithmetic. Thereby we verified that our approximation of the global minimum is indeed close to the true solution.
In the special case of $N$ being a Fibonacci number our approach showed that for $N \in \{1,2,3,5,8,13\}$ the Fibonacci lattice is the unique global minimizer of the worst case integration error in $H^1_\text{mix}$. We strongly conjecture that this is true for all Fibonacci numbers. Also in the cases $N=7,12$, the global minimizer is the obtained integration lattice.
In the future we are planning to prove that optimal points are close to lattice points.
Moreover, we will investigate $H^r_\text{mix}$, i.e. Sobolev spaces with dominating mixed smoothness of order $r\geq 2$ and other suitable kernels and discrepancies.
\end{document} |
\begin{document}
\title{Quantum versus classical chirps in a Rydberg atom}
\author{Tsafrir Armon and Lazar Friedland}
\email{[email protected]}
\begin{abstract}
The interplay between quantum-mechanical and classical evolutions in a
chirped driven Rydberg atom is discussed. It is shown that the system allows
two continuing resonant excitation mechanisms, i.e., successive two-level
transitions (ladder climbing) and a continuing classical-like nonlinear phase
locking (autoresonance). The persistent $1:1$ and $2:1$ resonances between
the driving and the Keplerian frequencies are studied in detail and
characterized in terms of two dimensionless parameters $P_{1,2}$
representing the driving strength and the nonlinearity in the problem,
respectively. The quantum-mechanical rotating wave and the classical single
resonance approximations are used to describe the regimes of efficient
classical or quantum-mechanical excitation in this two-parameter space.
\end{abstract}
\maketitle
\affiliation{Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel}
\section{Introduction}
\label{introduction}
Rydberg atoms possess many unique properties. With their large principal
quantum number, $n\gg 1$, they exhibit a long radiative lifetime (scaling as $n^{3}$
), a large orbital radius and dipole moment (scaling as $n^{2}$) and more (see
\cite{Lit1} and references therein). As a result, they show great promise in
many applications including quantum nondemolition measurements of photons
\cite{Lit2}, digital communication \cite{Lit3}, measurement of microwave
fields \cite{Lit4,Lit5}, quantum information (see \cite{Lit6} for a
comprehensive review), and more. The ability to manipulate and control
Rydberg atoms is thus of great importance.
Of particular interest are circular Rydberg states (CRSs), i.e., Rydberg atoms
in states with $l=n-1$, where $l$ is the orbital quantum number and $m$, the magnetic quantum number, can take any value $\left|m\right|\le n-1$. Such states
have the longest radiative lifetime and magnetic moment \cite{Lit7}, which
makes them better suited for many applications. Various techniques for the
creation of CRSs have been proposed and implemented over the years \cite
{Lit7, Lit8,Lit9,Lit10}. CRSs have been used in several innovative
advances in cavity quantum electrodynamics \cite{Lit11,Lit12} and are
proposed for future applications like quantum simulators of many-body
physics \cite{Lit13}.
In recent years, chirped frequency drives were studied as a tool for control
and manipulation of various oscillatory systems, including Rydberg atoms
\cite{Lit14,Gros,Lit14a,Lit15,Lit15a}. In many cases, the response of the
system to the chirped drive could take a quantum-mechanical, classical or
mixed form. In the classical limit, a persistent nonlinear phase locking
between the driver and the system, known as autoresonance (AR) \cite{Lit15b}
, yields a continuing excitation. In contrast, in the quantum limit, the
system undergoes successive Landau-Zener (LZ) transitions \cite{Landau,Zener}
, i.e., the quantum energy ladder climbing (LC). Both regimes of operation
have been demonstrated and used in atoms and molecules \cite
{Lit16,Lit17,Lit18,Lit19,Lit19b}, Josephson junctions \cite{Lit20}, plasma
waves \cite{Lit21,Lit22}, discrete nonlinear systems \cite{DNLSE}, and cold
neutrons \cite{Lit23}.
In this work, we study the effects of a linearly polarized, chirped
frequency electric field on a Rydberg atom initialized in a CRS. Even though we usually associate large quantum numbers with the emergence of classical phenomena, we show that it is not a sufficient condition. Particularly, Rydberg atoms in a CRS, while having $n,l\gg 1$, can exhibit both classical and quantum-mechanical responses to the chirped drive.
We describe the characteristics of quantum-mechanical persistent resonance and compare it to the previously studied case of classical autoresonance \cite{Gros,1D}. With the use of a unified parametrization, the necessary conditions for each resonant regime are mapped, allowing one to easily determine what evolution should be expected for a given parameter choice, and when each regime is accessible.
The persistent $1:1$ and $2:1$ resonances between the
driving and the Keplerian frequency of the atom are discussed in detail.
The scope of the paper is as follows: In Sec. \ref{Sec1}, we introduce
the model and its parametrization. Section \ref{Sec2} characterizes the
resonant structure of the problem and Sec. \ref{Sec3} analyzes the
quantum and classical resonant regimes and the associated parameter space
for the $1:1$ resonance. Section \ref{Sec4} builds on Sec. \ref{Sec3} and
analyzes the $2:1$ resonance, while our conclusions are summarized in Sec.
\ref{summary}.
\section{The Model \& Parameterization}
\label{Sec1}
We consider a Rydberg atom driven by an oscillating electric field of
constant amplitude and a down chirped frequency $\omega _{d}$ such that $
d\omega _{d}/dt=-\alpha $, with $\alpha $ being a constant chirp rate. The
Hamiltonian of the problem $\hat{H}=\hat{H_{0}}+\hat{U}$ includes the usual
unperturbed part
\begin{equation}
\hat{H_{0}}=\frac{\hat{\vec{p}}^{2}}{2m_{e}}-\frac{e^{2}}{\hat{r}},
\label{H0}
\end{equation}
and the driving part $\hat{U}=\varepsilon \cos {\phi _{d}}\hat{z}$, where $
m_{e}$ and $e$ are the electron's mass and charge, $\varepsilon $ \ and $
\phi _{d}=\int_{0}^{t} \omega _{d}\left(t'\right)dt'$ are the driving amplitude and phase
respectively, and the driving field is in the $z$ direction. The operator $
\hat{z}$ is dimensionless, with the normalization constant included in $
\varepsilon $. The eigenfunctions $\left\vert n,l,m\right\rangle $ of $\hat{
H_{0}}$ satisfy $\hat{H_{0}}
\left\vert n,l,m\right\rangle =E_{n}\left\vert n,l,m\right\rangle $ where $
E_{n}=-R_{y}n^{-2}$ and $R_{y}$ is the Rydberg energy. Note that we
neglected the corrections to the energy due to other quantum defects \cite
{Defect1,Defect2}, as they are fairly constant and do not have notable
consequences for this work. As an initial condition we consider a
single $\left\vert n_{0},n_{0}-1,m\right\rangle $ CRS.
The resonances in the problem are studied in detail in Sec. \ref{Sec2},
but their nature is important for the choice of a suitable parametrization.
These resonances correspond to a $q:1$ ratio between $\omega _{d}$ and the
Keplerian frequency (approximately given by $\frac{dE_{n}}{dn}/\hbar $), and
they affect transitions between the states $\left\vert n,l,m\right\rangle
\leftrightarrow \left\vert n+q,l+1,m\right\rangle $, which are coupled due
to the driving field via normalized coupling coefficients
\begin{equation}
c_{n,l,m}^{\pm q}=\frac{\left\langle n,l,m\left\vert \hat{z}\right\vert n\pm
q,l\pm 1,m\right\rangle }{C_{0}},
\end{equation}
where $C_{0}=\left\vert \left\langle n_{0},n_{0}-1,m\left\vert \hat{z}
\right\vert n_{0}+q,n_{0},m\right\rangle \right\vert $. Note that because of
the z-polarization of the driving field, $m$ is conserved throughout the
evolution while $l$ is only coupled to $l\pm 1$. Due to the strong
nonlinearity of the coupling coefficients and $E_{n}$ with respect to $n$,
many quantities in the problem may change by orders of magnitude when $n$
varies. Therefore, every parametrization will always be, in some sense,
local, helping one to study the vicinity of a specific value of $n$.
For the initial condition comprised of a single $\left\vert
n_{0},n_{0}-1,m\right\rangle $ CRS, one can identify three time scales in
the initial setting of the problem, i.e., the nonlinearity time scale $
T_{nl}=q^{2}\left\vert \frac{d^{2}E_{n_{0}}}{dn_{0}^{2}}\right\vert /\hbar
\alpha $ approximating the time between the first two successive
transitions, the frequency sweep time scale $T_{s}=\alpha ^{-1/2}$ and the
Rabi time scale $T_{R}=2\hbar /C_{0}\varepsilon $. Using these three
timescales we define the dimensionless time $\tau =t/T_{s}=t\sqrt{\alpha }$
and two dimensionless parameters
\begin{equation}
P_{1}=\frac{T_{s}}{T_{R}}=\frac{C_{0}\varepsilon }{2\hbar \sqrt{\alpha }},
\label{P1}
\end{equation}
\begin{equation}
P_{2}=\frac{T_{nl}}{T_{s}}=\frac{6q^{2}R_{y}}{\hbar \sqrt{\alpha }n_{0}^{4}},
\label{P2}
\end{equation}
characterizing the driving strength and the nonlinearity in the problem,
respectively. The parameters $P_{1,2}$ fully define the evolution of the system.
Indeed, upon expansion of the wave function in terms of the eigenfunctions $
\left\vert \psi \right\rangle =\sum_{n,l,m}a_{n,l,m}\left\vert
n,l,m\right\rangle $ one can write the dimensionless Schrodinger equation
for the coefficients $a_{n,l,m}$
\begin{equation}
i\frac{da_{n,l,m}}{d\tau }=\overline{E}_{n}a_{n,l,m}+2P_{1}\cos {\phi _{d}}
\sum_{n^{\prime }}\sum_{\Delta l=\pm 1}c_{n,l,m}^{n^{\prime },l^{\prime
}}a_{n^{\prime },l^{\prime },m}, \label{OrigEq}
\end{equation}
where now $\overline{E}_{n}$ is the dimensionless energy $
-P_{2}n_{0}^{4}/6q^{2}n^{2}$, $l^{\prime }=l+\Delta l$ and the summation
over $n^{\prime },\Delta l$ follows the restrictions on the quantum numbers,
i.e., $n^{\prime }\geq 1$, $0\leq l^{\prime }<n^{\prime }$, $\left\vert
m\right\vert \leq l^{\prime }$.
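For orientation, the two parameters are easily evaluated from laboratory quantities. The short Python sketch below does this for purely illustrative numbers (the chirp rate, driving amplitude, and coupling $C_{0}$ are not taken from any specific experiment); the driving amplitude is understood in energy units, with the normalization of the dimensionless $\hat{z}$ absorbed as described above.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34       # J s
Ry   = 2.1798723611e-18      # Rydberg energy, J

def chirp_parameters(alpha, eps, C0, n0, q=1):
    """P1, P2 of Eqs. (P1)-(P2) from the chirp rate alpha [rad/s^2],
    driving amplitude eps [J], normalized coupling C0 and initial n0."""
    Ts  = alpha ** -0.5                              # frequency-sweep time
    TR  = 2.0 * hbar / (C0 * eps)                    # Rabi time
    Tnl = q**2 * 6.0 * Ry / (n0**4 * hbar * alpha)   # nonlinearity time
    return Ts / TR, Tnl / Ts

# illustrative values (not from the paper):
P1, P2 = chirp_parameters(alpha=2.6e18, eps=3.4e-25, C0=1.0, n0=40)
print(P1, P2)    # roughly P1 ~ 1, P2 ~ 30
\end{verbatim}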
The resonant dynamics emerging from Eq. (\ref{OrigEq}) are the main
focus of this work, and it is helpful to first examine the different types
of evolutions one can expect when changing the parameters $P_{1,2}$. As a
representative example, we choose the $1:1$ resonance, with a single CRS
initial condition: $n_{0}=40$, $l_{0}=n_{0}-1=39$, $m=l_{0}=39$. Figure \ref
{FigLC} shows the numerical solution of Eq. (\ref{OrigEq}) for $P_{1}=1$, $
P_{2}=30$ (For details on the numerical simulations see Appendix \ref{AppB}). At
time $\tau =0$ the driving frequency passes the resonance with the Keplerian
frequency $d\overline{E}_{n}/dn$ associated with $n_{0}$.
\begin{figure}
\caption{Quantum mechanical numerical simulation of the $1:1$ resonance
dynamics for an initial CRS with $n_{0}=40$, $m=l_{0}=39$ and parameters $P_{1}=1$, $P_{2}=30$. (a) Normalized average unperturbed energy versus time; the inset highlights the initial sharp two-level transitions. (b) Population distribution $P_{n}$ at the six times marked in (a).}
\label{FigLC}
\end{figure}
Figure \ref{FigLC}(a) shows the average unperturbed energy $\left\langle
\overline{E}_{n}\right\rangle =\sum_{n,l}\overline{E}_{n}\left\vert
a_{n,l,m}\right\vert ^{2}$ normalized with respect to the initial energy $
\left\vert \overline{E}_{n0}\right\vert $, as a function of time and one can
see a continuing increase in the energy of the system at later times.
Furthermore, the initial growth of the energy proceeds in sharp "jumps" as
highlighted in the inset. The dashed lines in the inset show the unperturbed
energies $\overline{E}_{n_{0}},\overline{E}_{n_{0}+1},\overline{E}_{n_{0}+2}$
normalized by $\left\vert \overline{E}_{n_{0}}\right\vert $. The sharp
transitions between these energy values indicate a full population transfer
between neighboring $n$ states. As a further illustration, Fig. \ref{FigLC}(b) shows
the distribution of the population between the different $n$ values $
P_{n}=\sum_{l}\left\vert a_{n,l,m}\right\vert ^{2}$, at six specific times,
corresponding to markers 1,2,..,6 in Fig. \ref{FigLC}(a). Comparing the distributions
at times 1 and 2 in the figure, one can observe a full population transfer
between $n_{0}$ and $n_{0}+1$ states. This trend continues at time 3 which
is in the middle of a two-level transition. However, later the energy
growth smooths, and the distribution $P\left( n\right) $ broadens -
revealing multilevel transitions in the system.
\begin{figure}
\caption{The same as Fig. \ref{FigLC}, but for $P_{2}=0.6$. The gray shaded area in (a) shows $100$ classical trajectories with the corresponding initial conditions and uniformly distributed initial driving phases.}
\label{FigAR}
\end{figure}
Next, we compare the results in Fig. \ref{FigLC} with those in Fig. \ref
{FigAR}, corresponding to the same initial conditions and parameters, but $
P_{2}=0.6$ instead of $30$. Figure \ref{FigAR}(a) still exhibits an energy
increase, but no sharp "jumps" associated with two-level transitions are
observed. Additionally, the distributions $P\left( n\right) $ in Fig. \ref
{FigAR}(b) become wide shortly after the beginning of the process and are much
wider than in Fig. \ref{FigLC}(b) at the final times. Such wide distributions
are indicative of classical-like behavior. This is illustrated in Fig. \ref
{FigAR}(a), where the gray shaded area represents $100$ classical trajectories
with the corresponding initial conditions and uniformly distributed initial
driving phases (the details of the classical simulation are given in Appendix
\ref{AppB}) and one observes that the quantum-mechanical energy of the
system with $P_{2}=0.6$ follows closely the classical evolution. It should be
noted that the oscillations visible in the quantum-mechanical solution in
this case have twice the driving frequency, and, thus, represent a
non-resonant effect.
In the following sections we show that indeed the dynamics in Fig. \ref
{FigLC} represents a quantum-mechanical LC process, while that in Fig. \ref
{FigAR} corresponds to the classical AR. We describe how Eq. (\ref
{OrigEq}) yields the aforementioned two types of evolutions and discuss the
resonant excitation efficiency in the problem in our $P_{1,2}$ parameter
space.
\section{Resonant Structure}
\label{Sec2}
We have illustrated above that the response of our system to the chirped
drive is dominated by resonant interactions. These resonances can be studied
conveniently via transformation to a rotating frame of reference, and
application of the rotating wave approximation (RWA) to neglect all rapidly
oscillating terms in Eq. (\ref{OrigEq}). To this end, we define $
b_{n,l,m}=a_{n,l,m}e^{il\phi _{d}}$ which transforms Eq. (\ref{OrigEq}) into:
\begin{equation}
i\frac{db_{n,l}}{d\tau }\approx -\Gamma _{n,l}b_{n,l}+P_{1}\left[
c_{n,l}^{+q}b_{n+q,l+1}+c_{n,l}^{-q}b_{n-q,l-1}\right] , \label{RWAEq}
\end{equation}
where index $m$ was omitted for brevity and $\Gamma _{n,l}=\overline{E}
_{n}+l\omega _{d}$. In this rotating frame of reference only the states with
similar pseudoenergies $\Gamma _{n,l}$ are resonant, while all other states
oscillate rapidly and can be neglected. The resonance condition between
states $n,l$ and states $n+q,l+1$ is then given by equating $\Gamma _{n,l}=\Gamma
_{n+q,l+1}$. In the limit of large $n$, one finds this condition to be
\begin{equation}
\omega _{d}\approx q\frac{d\overline{E}_{n}}{dn}, \label{ResCond}
\end{equation}
which as mentioned above, corresponds to a $q:1$ ratio between the driving
frequency and the Keplerian frequency $d\overline{E}_{n}/dn$. Since $\omega
_{d}=\omega _{0}-\tau $ (here and below we use $\omega _{d}$ and $\omega _{0}
$ normalized by $1/T_{s}=\sqrt{\alpha }$), it is possible to solve for the
value of $n$ satisfying the resonance condition, (\ref{ResCond}), as a
function of the time and use it to define the resonant value for the energy. The
dashed red lines in Figs. \ref{FigLC} and \ref{FigAR} show this resonant
energy as a function of the time and we observe that the evolution of the energy
of the system follows closely the resonant energy.
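For reference, inserting $\overline{E}_{n}=-P_{2}n_{0}^{4}/6q^{2}n^{2}$ and $\omega _{d}=\omega _{0}-\tau $ into Eq. (\ref{ResCond}) gives the resonant level explicitly,
\begin{equation*}
n_{\mathrm{res}}\left( \tau \right) \approx \left[ \frac{P_{2}n_{0}^{4}}{3q\left( \omega _{0}-\tau \right) }\right] ^{1/3},
\end{equation*}
and the resonant energy shown by the dashed red lines is $\overline{E}_{n_{\mathrm{res}}\left( \tau \right) }$.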
When neglecting rapidly oscillating terms in Eq. (\ref{RWAEq}), one must
verify that other pseudo-energy crossings (for different resonance ratios $q$
) do not interfere with the desired resonant chain. To this end, we define
the time $\tau _{n}^{q}$ when the resonance condition, (\ref{ResCond}), is met
for the transition $n,l\leftrightarrow n+q,l+1$. Note that since $d\overline{
E}_{n}/dn$ is monotonic in $n$, the resonant transitions along a given
resonant chain are ordered consecutively (i.e., $\tau _{n}^{q}<\tau
_{n+q}^{q} $). Nevertheless, if $\tau _{n+q}^{q+1}$ is larger than $\tau
_{n}^{q}$, but smaller than $\tau _{n+q}^{q}$, the two resonant chains will
mix. This leads to two conditions which must be met to avoid this mixing,
i.e., $\tau _{n+q}^{q+1}>\tau _{n}^{q}$ and $\tau _{n+q}^{q}<\tau
_{n+q}^{q+1}$. One can show that the first condition is always met, while
the second is only true starting from some minimal $n$ value. This minimal $
n $ can be found numerically, and above this $n$ the resonant chains do not
mix. For example, the minimal $n$ is $6$, $17$ and $34$ for $q=1,2,$ and $3$,
respectively \cite{SideNote1}. Within the RWA, when the resonant chains are
separated, one can study them individually, and this is what we do next.
\section{1:1 Resonance}
\label{Sec3}
Section \ref{Sec1} illustrated that parameter $P_{2}$ may change the nature
of the evolution of the system, and that two efficient excitation regimes are possible. However, such efficient excitation is not guaranteed, and depends on the choice of parameters $P_{1,2}$. To further understand this effect, we
now discuss the excitation efficiency of the $1:1$ resonance for a CRS initial condition with maximal $m$, i.e. $m=l_{0}=n_{0}-1$. As a measure for the efficiency we examine the
fraction of excited population with $n$ exceeding a certain threshold value $
n_{th}$ at the final time of evolution.
We proceed and show the numerical solution of Eq. (\ref{RWAEq}) for the distribution of
the excitation efficiency in $P_{1,2}$ parameter space in Fig. \ref{Fig1-1a}
(a). The initial (final) driving frequency in these simulations is given by Eq. (\ref{ResCond}) for $n=30$ ($n=60$), while $n_{th}=50$
and the initial condition is a single CRS with $n_{0}=40$ and $m=l_{0}=39$. The choice of $n_{0}$ puts the transition frequencies in the readily accessible microwave regime, while the choice of $n_{th}$ is discussed in Sec. \ref{SubSecIon}.
\begin{figure}
\caption{Quantum-mechanical numerical simulations for the $1:1$ resonance case. Equation (\protect\ref{RWAEq}) is solved for a single CRS initial condition with $n_{0}=40$ and $m=l_{0}=39$, and the excitation efficiency (the fraction of the population exceeding $n_{th}=50$ at the final time) is shown in the $P_{1,2}$ parameter space. The solid, dashed, and dashed-dotted lines are discussed in the text; the black circle and diamond mark the parameters of Figs. \ref{FigLC} and \ref{FigAR}, respectively.}
\label{Fig1-1a}
\end{figure}
As expected, the higher the driving amplitude (characterized by $P_{1}$),
the higher the excitation efficiency, up to $100\%$. However, the gradual
transition from no excitation to full excitation happens in the vicinity of
two distinct lines in the parameter space represented by the dashed diagonal
line and the dashed-dotted vertical line. Clearly, this hints at two
different resonant mechanisms in play and their study is our next goal.
\subsection{Quantum Mechanical Ladder-Climbing}
\label{SubSecLC}
Motivated by the two-level transitions shown in Fig. \ref{FigLC}, we
analyze Eq. (\ref{RWAEq}) again, but this time for two neighboring levels
only,
\begin{equation}
i\frac{d}{d\tau }\left(
\begin{array}{c}
b_{n} \\
b_{n+1}
\end{array}
\right) =\left(
\begin{array}{cc}
\overline{E}_{n}-\left( n-1\right) \tau & P_{1}c_{n}^{+1} \\
P_{1}c_{n}^{+1} & \overline{E}_{n+1}-n\tau
\end{array}
\right) \left(
\begin{array}{c}
b_{n} \\
b_{n+1}
\end{array}
\right) , \label{2Level}
\end{equation}
where index $l$ was omitted because the difference $n-l$ is conserved at $1$
along the $1:1$ resonance chain starting from a CRS. The initial driving
frequency was omitted from $\omega _{d}$ in (\ref{2Level}) for brevity, as
it could be canceled by shifting time. Equation (\ref{2Level}) describes a
two-level Landau-Zener transition \cite{Landau,Zener}. If the transitions'
times, as found from Eq. (\ref{ResCond}), are well separated the system will
undergo successive LZ transitions, commonly known as quantum energy LC. This
explains the initial dynamics shown in Fig. \ref{FigLC}(a). To see the
relevance of the LC process to the parameter space of Fig. \ref{Fig1-1a}
one needs to examine the efficiency of the process. The efficiency of a
single LZ transition, i.e., the fraction of the population transferring from
level $n$ to level $n+1$, depends on $P_{1}$ only and is given by the
LZ formula $1-\exp {\left[ -2\pi \left( P_{1}c_{n}^{+1}\right) ^{2}
\right] }$. Indeed,\ one can see in Fig. \ref{Fig1-1a} that for large
values of $P_{2}$, the efficiency of the excitation is independent of $P_{2}$
. Furthermore, one can find the efficiency of the full LC process, by
multiplying the efficiencies of successive single transitions:
\begin{equation}
P=\prod_{n=n_{0}}^{n_{th}}\left( 1-\exp {\left[ -2\pi \left( P_{1}c_{n}^{+1}\right)^{2}\right] }\right). \label{PLC}
\end{equation}
By setting Eq. (\ref{PLC}) equal to $1/2$ we can define the threshold value $
P_{1,th}^{LC}$ for which half the population will reach the target state $
n_{th}$. In principle, this value depends on $n_{th}$, but for the
parameters of this problem within a few transitions $c_{n}^{+1}$ scales as $
n^{2}/n_{0}^{3/2}$ , so the product (\ref{PLC}) converges rapidly and only
weakly depends on $n_{th}$. One finds numerically that $P_{1,th}^{LC}\approx
0.39$, and this value is plotted as a dashed-dotted line in Fig. \ref
{Fig1-1a}, showing a good agreement with the numerical simulations when $
P_{2}$ is sufficiently large.
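A short calculation illustrates this weak dependence. The sketch below evaluates Eq. (\ref{PLC}) with a schematic coupling model, $c_{n_{0}}^{+1}=1$ and $c_{n}^{+1}\simeq n^{2}/n_{0}^{3/2}$ for $n>n_{0}$ (the asymptotic scaling quoted above), so it is not meant to reproduce the exact threshold; presumably the exact matrix elements of Appendix \ref{AppA}, which grow more gradually over the first few steps, are what shift the numerically found threshold to $P_{1,th}^{LC}\approx 0.39$.
\begin{verbatim}
import numpy as np

def lc_efficiency(P1, n0=40, n_th=50):
    # Eq. (PLC) with a schematic coupling model (c_{n0}=1, then n^2/n0^{3/2});
    # the paper uses the exact matrix elements of Appendix A instead.
    n = np.arange(n0, n_th + 1)
    c = np.where(n == n0, 1.0, n**2 / n0**1.5)
    return np.prod(1.0 - np.exp(-2.0 * np.pi * (P1 * c) ** 2))

for P1 in (0.30, 0.33, 0.36, 0.39):
    print(P1, lc_efficiency(P1), lc_efficiency(P1, n_th=60))
# The product barely changes between n_th = 50 and 60, and with this schematic
# coupling the 50% crossing sits near the single-step value
# sqrt(ln 2 / (2 pi)) ~ 0.33.
\end{verbatim}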
It should be noted that our choice of initial conditions in a circular state
is not incidental. Indeed, since $m=n_{0}-1$ is conserved, the $n_{0}$ CRS
is not "connected" from below to any other state (it is the "ground state"
of the resonant chain). Therefore, the LZ transitions can only transfer the
population up the resonant chain. However, if the initial conditions were
chosen such that there existed a state below $n_{0}$, the sweeping driving
frequency would have driven the population down to this state, and the
excitation process would have stopped.
Lastly, one still needs to find the values of $P_{2}$ for which the LC
framework is applicable. As mentioned above, the LZ transitions must be well
separated in time so they can be treated as separate two-level
transitions. To check when this condition is met, we follow in the footsteps of
\cite{Lit19} and \cite{Ido} and compare the time between two successive LZ transitions
and the time-width of a single transition. The width of a single LZ
transition can be estimated as $\Delta \tau _{LZ}=1+P_{1}c_{n}^{+1}$ \cite
{Ido}, while the time between two successive transitions can be found using
Eq. (\ref{ResCond}) yielding $\Delta \tau _{between}\approx d^{2}\overline{E}
_{n}/dn^{2}$. Therefore, the condition $\Delta \tau _{between}\gg \Delta \tau
_{LZ}$ guarantees that the transitions are well separated. Explicitly, the
condition reads:
\begin{equation}
P_{2}\left( \frac{n_{0}}{n}\right) ^{4}\gg 1+P_{1}c_{n}^{+1}, \label{LCCond}
\end{equation}
where again index $l$ was omitted. For several initial LZ steps starting
from some $n_{0}\gg 1$, this condition can be relaxed by substituting $n_{0}$
for $n$ and recalling that by construction $c_{n_{0}}^{+1}=1:$
\begin{equation}
P_{2}\gg 1+P_{1}. \label{LCCond2}
\end{equation}
The solid line in the parameter space in Fig. \ref{Fig1-1a} represents $
P_{2}=1+P_{1}$ and one can see that above this line the efficiency depends
only on $P_{1}$, while $P_{1,th}^{LC}$ (dashed-dotted line in Fig. \ref{Fig1-1a})
bounds the region of an efficient LC process. Returning to Fig. \ref{FigLC}
which exhibits LC type evolution, one can see that its $P_{1,2}$
parameters are well inside the quantum region (the parameters are marked by
a black circle in Fig. \ref{Fig1-1a}). If condition (\ref{LCCond2}) is
not satisfied, the transitions are not well separated and many states mix.
This type of evolution is demonstrated in Fig. \ref{FigAR}, where the
parameters are well below the separation line (marked by the black diamond in
Fig. \ref{Fig1-1a}). The nature of the evolution in this case is studied
next.
\subsection{Classical Autoresonance}
\label{SubSecAR}
The previous subsection showed that quantum-mechanical analysis does not fully explain the results in Fig. \ref{Fig1-1a}. One can see in the figure two different threshold lines for efficient excitation. We show now that the dashed diagonal line corresponds to classical evolution. To understand this region in the parameter space, we turn to the classical analysis in Ref. \cite{Gros}. We present the main results of \cite{Gros} here for
completeness, while reformulating this theory in terms of our dimensionless quantum-mechanical parameters $P_{1,2}$.
The classical problem of the driven atom is conveniently
analyzed using the three pairs of action-angle variables of the unperturbed
problem. The actions $I_{2},I_{1}$ are associated with the total angular
momentum and its projection on the $z$ axis, respectively, while the action $
I_{3}$ characterizes the unperturbed Hamiltonian which is proportional to $
I_{3}^{-2}$. The semi-classical approximation then yields $I_{3}\approx
\hbar n$, $I_{2}\approx \hbar l$, $I_{1}\approx \hbar m$.
The classical theory in \cite{Gros} used the single resonance approximation (SRA), a classical analog of the RWA. It resulted in a dimensionless Hamiltonian, which in terms of our parameters $P_{1,2}$ can be written as
\begin{equation}
H\left( \Theta _{1,2,3},\bar{I}_{1,2,3}\right) =-\frac{P_{2}n_{0}^{4}}{6\bar{
I}_{3}^{2}}+\frac{\sqrt{2}P_{1}}{n_{0}^{3/2}}\bar{I}_{3}^{2}\sin {i}\sin {
\Phi }, \label{ClassH}
\end{equation}
where the actions are normalized by $\hbar $ ($\bar{I}_{1,2,3}=I_{1,2,3}/
\hbar $), time is normalized by $1/\sqrt{\alpha }$, distances are
normalized with respect to Bohr's radius $a_{0}$, $i$ is the inclination angle ($\sin i=I_{1}/I_{2}$) and $\Phi =\Theta
_{3}+\Theta _{2}-\phi _{d}$ with $\Theta _{2,3}$ being the angle variables
corresponding to actions $I_{2,3}$. The physical meaning of the phase
mismatch $\Phi $ is revealed when examining its temporal derivative
\begin{equation}
\frac{d\Phi }{d\tau }=\frac{d\Theta _{3}}{d\tau }+\frac{d\Theta _{2}}{d\tau }
-\omega _{d}\approx \frac{d\Theta _{3}}{d\tau }-\omega _{d}+O\left(
P_{1}\right) .
\end{equation}
If $d\Phi /d\tau \approx 0,$ the orbital frequency $d\Theta _{3}/d\tau $
approximately follows the driving frequency. This classical resonance
condition is actually the same as the quantum one, (\ref{ResCond}), within the
semiclassical approximation. It is shown in Ref. \cite{Gros} that if the
driving frequency starts sufficiently far from the resonance, Hamiltonian (
\ref{ClassH}) yields a continued phase-locking $\Phi \approx 0$ after
passage through resonance provided that (using our parametrization)
\begin{equation}
\sqrt{P_{2}}P_{1}>0.41. \label{CTh}
\end{equation}
If this sharp threshold condition is satisfied, the resulting phase locking
yields a continuous increase in the energy as the system self-adjusts to
stay in resonance for an extended period of time. Note that the form of the
left hand side in the classical threshold condition, (\ref{CTh}), could have
been predicted even without the detailed analysis in \cite{Gros}, as it is
the only combination of parameters $P_{1,2}$, which does not depend on $\hbar $
(which cancels out after we replace $n_{0}$ with the initial dimensionless $
I_{3}$). We illustrate the sharp threshold phenomenon of the classical
autoresonance in Fig. \ref{Fig1-1b}(a), showing the excitation efficiency as a
function of parameters $P_{1,2}$ using the \textit{exact} classical
equations of motion (for details on these simulations see Appendix \ref{AppB}).
In order to check that the capture of the system into autoresonance is independent of the initial phase mismatch, we started the simulation on a
circular orbit with spherical angles $\varphi =\theta =0$, and averaged over
the initial driving phase between $0$ and $2\pi $. All other parameters are
the same as in Fig. \ref{Fig1-1a}. One can see that both the classical [Fig. \ref{Fig1-1b}(a)] and the quantum
(Fig. \ref{Fig1-1a}) simulations correctly recreate the threshold condition (dashed line) even
though the threshold region is much narrower in the classical results.
Naturally, the classical simulations entirely ignore the quantum-classical
separation (solid line) given by condition (\ref{LCCond2}), further
demonstrating the quantum nature of the evolution above the separation line,
and identifying the dashed line as the classical threshold. The broadening
of the threshold region in Fig. \ref{Fig1-1a} can be attributed to quantum
fluctuations of the initial state, which were absent in the classical
simulations.
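The various borderlines discussed so far can be collected into a schematic map of the parameter plane. The toy classifier below merely restates the criteria of this section, i.e., the quantum-classical separation (\ref{LCCond2}), the LC threshold $P_{1,th}^{LC}\approx 0.39$, and the AR threshold (\ref{CTh}); the $\gg$ of Eq. (\ref{LCCond2}) is replaced by a plain inequality purely for illustration, and the classifier ignores the SRA-breakdown (ionization) region discussed below.
\begin{verbatim}
def chirp_regime(P1, P2):
    # schematic: LCCond2 separation, LC threshold ~0.39, AR threshold (CTh)
    if P2 > 1.0 + P1:                       # quantum (ladder-climbing) side
        return "LC" if P1 > 0.39 else "inefficient (quantum)"
    return "AR" if P2**0.5 * P1 > 0.41 else "inefficient (classical)"

for p in [(1.0, 30.0), (1.0, 0.6), (0.2, 30.0), (0.2, 0.6)]:
    print(p, chirp_regime(*p))
# the first two points reproduce the LC and AR examples discussed earlier
\end{verbatim}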
\begin{figure*}
\caption{Classical numerical simulations for the $1:1$ resonance case, with parameters and initial conditions corresponding to Fig. \ref{Fig1-1a}. (a) Excitation efficiency; (b) ionization probability. The dashed, solid, and dotted lines are discussed in the text.}
\label{Fig1-1b}
\end{figure*}
The previous and current subsections describe \textit{purely} LC or AR
evolutions. However, as mentioned above, one must also consider an
intermediate case where the initial evolution is of an LC nature (i.e.,
condition (\ref{LCCond2}) is met), but in the final evolution stage
condition (\ref{LCCond}) is violated, and we expect a dynamical transition
from LC to AR at later times. In fact, this situation is relevant to
practically all of the region in Fig. \ref{Fig1-1a} above the
quantum-classical separation line. Nevertheless, as demonstrated in Fig.
\ref{FigLC}(b), the efficiency of the excitation remains high and smooth
despite the transition from LC to AR. The reason for this smooth transition
can be explained by observing that the LC\ process closely follows the
resonance with the drive. In turn, this also means that at the transition to
the classical regime, the evolution is phase-locked to the drive and the
classical phase mismatch $\Phi $ remains bounded. This guarantees smooth
transition to the AR regime as the classical dynamics emerges during the
chirped excitation process.
Finally, one can see in the lower right part in Fig. \ref{Fig1-1b}(a) that
the transition region to efficient excitation is not as narrow. We attribute
this effect to the breaking of the SRA when parameter $P_{1}$ becomes large,
as discussed in the next section.
\subsection{The breaking of the single resonance approximation and ionization}
\label{SubSecIon}
Our quantum-mechanical model does not include ionization channels, so we
discuss the problem of ionization within the classical theory. Classically,
the ionization in the driven-chirped problem can occur when the SRA loses
its validity. This effect was studied in \cite{Gros}, where it is shown that
the breakdown of the SRA happens when the frequency of oscillations of $\Phi
$ in autoresonance becomes of the order of the driving frequency and other
resonant terms become important. When this happens the dynamics is not
dominated by the $1:1$ resonance, and ionization may soon follow via chaotic
dynamics. Based on \cite{Gros} the condition for breakdown of the SRA is
\begin{equation}
\frac{P_{1}}{P_{2}}>\gamma \frac{n_{0}^{7/2}}{9\sqrt{2}\sqrt{1-\left( \frac{m
}{n}\right) ^{2}}n^{4}}, \label{SRABreak}
\end{equation}
where we have used the semi-classical approximation for the dimensionless actions
and $\gamma $ is a numerical factor smaller than $1$. Condition (\ref
{SRABreak}) is again local, and gets easier to satisfy for higher $n$.
Therefore, for estimation, we substitute $n=n_{f}$ \ in (\ref{SRABreak}),
where $n_{f}$ is the resonant value of $n$ at the end of the excitation
process. Figure \ref{Fig1-1b}(b) shows the classical ionization probability
for the same parameters as in Fig. \ref{Fig1-1b}(a). The diagonal dotted line
in the figure is given by Eq. (\ref{SRABreak}) for $n=n_{f}$ and $\gamma =1$
. One can see that the ionization regime is centered around this line.
Furthermore, our quantum-mechanical simulations in Fig. \ref{Fig1-1a} are
performed in the portion of the parameter space for which no ionization
occurs classically, so we do not expect ionization to play a role there.
In order to avoid destruction of AR at large $n$ [see condition (\ref{SRABreak})], we limit the value of $n_{f}$, so that a sizable part of the parameter space avoids ionization. The value of $n_{th}$ was taken halfway between $n_{0}$ and $n_{f}$ to identify significant excitation.
\section{2:1 Resonance}
\label{Sec4}
Section \ref{Sec3} revolved around analytic and numerical results for the $
1:1$ resonance, but the analysis is not limited to this choice. In this
section, we show that the same considerations could be applied to the $
2:1$ resonance, leading to similar results. Consider a CRS initial condition
defined by $n_{0}$ and $m$, such that $m$ is not restricted and can take any value $\left\vert m\right\vert <n_{0}$. The choice of $n_{0}$ and $n_{th}$ follows the same considerations as in Sec. \ref{Sec3}. The
driving frequency now sweeps through twice the Keplerian frequency i.e., $2d\overline{E
}_{n_{0}}/dn_{0}$, and the resonant transitions are $n,l,m\leftrightarrow
n+2,l+1,m$. The analysis again starts with Eq. (\ref{RWAEq}), but now for $
q=2$, so a two-level description similar to Eq. (\ref{2Level}) follows
immediately. The width of a single LZ transition and the time between two
transitions are found similarly to Sec. \ref{Sec3} and the
quantum-classical separation criterion is found to be
\begin{equation}
P_{2}\left( \frac{n_{0}}{n}\right) ^{4}\gg 1+P_{1}c_{n,l}^{+2}.
\label{LCCond2-1}
\end{equation}
As with the $1:1$ resonance, the initial stages of the evolution are the
most important and condition (\ref{LCCond2-1}) could be replaced by its
version for $n=n_{0}$, yielding the same result as Eq. (\ref{LCCond2}).
Figure \ref{Fig2-1} shows numerical simulations for the efficiency of
excitation by passage through the $2:1$ resonance. Figure \ref{Fig2-1}(a) shows
quantum mechanical simulations for $n_{0}=90$, $m=0$, while Fig. \ref{Fig2-1}(b)
shows classical simulations for the corresponding initial condition with $
\bar{I}_{3}=\bar{I}_{2}=90$ and $\bar{I}_{1}=0$. In the quantum simulations
the efficiency is determined by the fraction of population exceeding $n_{th}=100$
, while for the classical simulations it is defined by the fraction of
initial conditions, out of uniformly distributed initial driving phases, that reach
a final unperturbed energy corresponding to $\bar{I}_{3}>100$. Note that the
range of $P_{1,2}$ in Fig. \ref{Fig2-1} is the same as that in Fig. \ref
{Fig1-1a}.
\begin{figure*}
\caption{Numerical simulations of the $2:1$ resonance. (a) Solution of Eq. (
\protect\ref{RWAEq}) for a CRS initial condition with $n_{0}=90$ and $m=0$; (b) classical simulations for the corresponding initial condition with $\bar{I}_{3}=\bar{I}_{2}=90$ and $\bar{I}_{1}=0$. The solid, dashed, and dashed-dotted lines are discussed in the text.}
\label{Fig2-1}
\end{figure*}
The solid lines in Figs. \ref{Fig2-1}(a) and \ref{Fig2-1}(b) separate the quantum and
classical regions of the evolution. One can again observe the two regimes in Fig. \ref{Fig2-1}(a) separated by this line, and the absence of
this separation in the classical simulation in Fig. \ref{Fig2-1}(b). The
efficiency of the LC process above this separation line could be calculated
similarly to Eq. (\ref{PLC}) by successively multiplying the efficiencies of
individual LZ transitions. Once again, because the coupling coefficients
grow rapidly, the threshold value $P_{1,th}^{LC}$ for which the total
efficiency is $0.5$ depends only weakly on the number of transitions. It
also depends rather weakly on the value of $m$. For example, the parameters $
n_{0}=90$, $m=0$ as in Fig. \ref{Fig2-1}, yield $P_{1,th}^{LC}\approx 0.39$
(same as in Sec. \ref{Sec3}), but when $m=89$, $P_{1,th}^{LC}\approx 0.34$.
The value of $P_{1,th}^{LC}$ is represented in Figs. \ref{Fig2-1}(a) and \ref{Fig2-1}(b) by
dashed-dotted vertical lines. One can observe good agreement between the
predictions of the quantum mechanical simulation and this line in Fig. \ref
{Fig2-1}(a).
It should be noted that like the $1:1$ resonance in Sec. \ref{Sec3}, the CRS
studied here has the property that it is not "connected" from below to any
other state along the resonant chain. The state below the initial condition
in the chain would have had $n=n_{0}-2=l$ which is not a physical state.
Since this is true for every $m$, the $2:1$ LC continuing excitation process
could be applied to any $m$, unlike the $1:1$ LC.
When condition (\ref{LCCond2}) is violated, the classical dynamics emerges
and one can use the results in \cite{Gros} to find that the capture into
classical AR is only possible when
\begin{equation}
\sqrt{P_{2}}P_{1}>0.41. \label{CTh2}
\end{equation}
Remarkably, this result is identical to the one observed for the $1:1$ resonance [Eq. (\ref{CTh})], demonstrating the universality of the parametrization used in this work.
The dashed lines in Fig. \ref{Fig2-1} show the threshold $\sqrt{P_{2}}
P_{1}=0.41$ for efficient excitation. The classical simulations exhibit a
sharp transition at this line, except for low $P_{2}$ as one gets closer to
the breaking of the SRA (the corresponding breaking line, as in Fig. \ref
{Fig1-1b} is outside the range of the $P_{1,2}$ values in Fig. \ref{Fig2-1}
). In the quantum mechanical simulations, the classical threshold is
retrieved below the quantum-classical separation line, (\ref{LCCond2}), but
is broadened compared to the classical simulations due to quantum
fluctuations.
\section{Summary}
\label{summary}
In conclusion, we have studied the problem of resonant excitation of a
Rydberg atom starting in a CRS using chirped drive. Based on three
characteristic timescales in the problem, we introduce two dimensionless
parameters, $P_{1,2}$ [Eqs. (\ref{P1}) and (\ref{P2})], and study the resonant
nature of the problem in this parameter space within the rotating-wave
approximation. We have shown how this approximation allows one to reduce the
three-dimensional problem to one-dimensional resonant interactions
characterized by the $q:1$ ratio ($q=1,2$) between the driving and the Keplerian
frequencies. The $1:1$ and $2:1$ resonances are studied in detail each
showing two distinct persistent resonance regimes, i.e., the
quantum-mechanical ladder climbing and the classical autoresonance. The
major criteria (borderlines) in the $P_{1,2}$ parameter space are
discussed, including (a) the separation line between the two regimes and (b)
the regions of efficient excitation in the two regimes. In both regimes very
high efficiencies ($\sim 100\%$) are possible, but the LC process yields
significantly narrower (in $n$) excited wave packets. Our analytic results
are supported by classical and quantum-mechanical numerical simulations
demonstrating the validity of our theoretical approach, as well as the
quantum-classical correspondence, and other effects such as quantum
fluctuations. The ionization process in the chirped-driven excitation is
discussed classically in the framework of breaking of the single resonance
approximation in the problem. It is shown that the ionization effect is
negligible in the areas of interest in our quantum-mechanical simulations.
The results of this work extend previous studies of the chirped-driven
Rydberg atom into the boundary between the quantum and the classical evolution.
From a broader perspective, it is also the first use of the formalism for
studying such quantum-classical transitions in a three-dimensional problem.
The processes described in this work enlarge the toolbox for the control
and manipulation of Rydberg atoms, and may lead to new applications. In the
future, it will be interesting to study this problem for initial conditions
other than a CRS. Generally speaking, such initial conditions should
not exhibit sharp thresholds for capture into AR, but, rather, a different
capture process which could be conveniently studied in phase-space \cite
{Lit14a,phasespace}. Quantum-mechanically, such initial conditions will not
be the ``ground state'' of their resonant chain, so the climb up the energy
ladder would require starting close to the resonance rather than sweeping
through it. Another avenue for research could be studying time-varying chirp
rates. The time between LZ transitions decreases by orders of magnitude as one
climbs up the energy ladder; thus, lowering the chirp rate in time may allow one to
prolong the LC process and reduce the possibility of ionization.
\begin{acknowledgments}
This work was supported by Israel Science Foundation Grant No. 30/14.
\end{acknowledgments}
\appendix
\section{Coupling Coefficients}
\label{AppA}
In computing the coupling coefficients $\left\langle n,l,m\left\vert \hat{z}
\right\vert n^{\prime },l^{\prime },m^{\prime }\right\rangle $ we use the
spherical coordinates $r,\theta ,\varphi $ and separate the integral for
the coefficients into the radial and angular parts. The angular part is
found by expressing $z$ as a function of $r$ and the spherical harmonic $
Y_{1}^{0}\left( \theta ,\varphi \right) $. The functions $\psi _{n^{\prime
},l^{\prime },m^{\prime }}$ and $\psi _{n,l,m}^{\ast }$ contribute two more
spherical harmonics, and the product of the three can be integrated in
terms of the Wigner 3j symbol, yielding the angular contribution as well as
the selection rules $m=m^{\prime }$ and $l=l^{\prime }\pm 1$. For the radial
part we first normalize $r$ by $m_{e}a_{0}/2\mu $, where $a_{0}$ is Bohr's
radius and $\mu $ the reduced mass (the normalization factor is absorbed
into $\varepsilon $). The radial integral is then given by
\begin{gather*}
\int_{0}^{\infty }r^{3}R_{n,l}^{\ast }\left( r\right) R_{n^{\prime
},l^{\prime }}\left( r\right) dr, \\
R_{n,l}\left( r\right) =\sqrt{\frac{\left( n-l-1\right) !}{2n^{4}\left[
\left( n+l\right) !\right] }}e^{-\frac{r}{2n}}\left( \frac{r}{n}\right)
^{l}L_{n-l-1}^{2l+1}\left( \frac{r}{n}\right) ,
\end{gather*}
where $L_{a}^{b}$ is the generalized Laguerre polynomial. Note that $
L_{a}^{b}$ is a polynomial of order $a$, and therefore the product $
r^{3}R_{n,l}^{\ast }\left( r\right) R_{n^{\prime },l^{\prime }}\left(
r\right) $ can be broken into a sum of $(n-l)\times (n^{\prime
}-l^{\prime })$ terms proportional to $r^{k}e^{-rp}$, where $k,p>0$. The
integral for each term yields $p^{-1-k}\Gamma \left( 1+k\right) $, with $
\Gamma $ the Euler Gamma function. The final result reads
\begin{multline*}
c_{n,l,m}^{n^{\prime },l+1}=\frac{1}{2}\sqrt{\frac{\left( l-m+1\right)
\left( l+m+1\right) }{\left( 2l+3\right) \left( 2l+5\right) }}\times \\
\times \sqrt{\left( n-l-1\right) !\left( n^{\prime }-l-2\right) !\left(
n+l\right) !\left( n^{\prime }+l+1\right) !}\times \\
\times \sum_{i=0}^{n-l-1}\sum_{j=0}^{n^{\prime
}-l-2}f_{i}^{n,l}f_{j}^{n^{\prime },l+1}D,
\end{multline*}
where
\begin{equation*}
\begin{array}{ccc}
D & = & \left( \frac{2nn^{\prime }}{n+n^{\prime }}\right) ^{2l+5+i+j}\left(
2l+4+i+j\right) !, \\
f_{i}^{n,l} & = & \left( -1\right) ^{i}\left[ n^{i+l+2}\left( n-l-1-i\right)
!\left( 2l+1+i\right) !\left( i\right) !\right] ^{-1}.
\end{array}
\end{equation*}
These $c_{n,l,m}^{n^{\prime },l+1}$ were computed using symbolic software
to avoid numerical accuracy issues. Note that the coupling of CRSs to other
CRSs, or nearly circular states, contains only a small number of
contributions and can be calculated explicitly. Namely, in the limit $
n_{0}\gg 1$ the value of $C_{0}$ is $\sqrt{2}n_{0}^{3/2}$ for the $1:1$
resonance with $m=n_{0}-1$, and $\sqrt{1-\left( m/n_{0}\right) ^{2}}
n_{0}^{3/2}/\sqrt{2}$ for the $2:1$ resonance.
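For completeness, the radial integral above can also be reproduced with a few
lines of symbolic computation; the following sketch (not the code used for this
work; the function names and the example quantum numbers are purely
illustrative) evaluates it exactly in the scaled units of this appendix:
\begin{verbatim}
# Illustrative sketch: exact radial integral int_0^infty r^3 R_{n,l} R_{n',l'} dr
# in the scaled units of this appendix, computed with sympy so that no
# floating-point accuracy is lost.
import sympy as sp

r = sp.symbols('r', positive=True)

def R(n, l):
    # Radial function R_{n,l}(r) in the normalization used above.
    norm = sp.sqrt(sp.factorial(n - l - 1) / (2 * n**4 * sp.factorial(n + l)))
    return norm * sp.exp(-r / (2 * n)) * (r / n)**l * \
        sp.assoc_laguerre(n - l - 1, 2 * l + 1, r / n)

def radial(n, l, n2, l2):
    # Exact value of the radial part of <n,l|z|n2,l2>.
    return sp.integrate(r**3 * R(n, l) * R(n2, l2), (r, 0, sp.oo))

# Example: coupling between neighbouring nearly circular states.
print(radial(5, 4, 6, 5))
\end{verbatim}
Multiplying the output by the angular factor in the expression for
$c_{n,l,m}^{n^{\prime },l+1}$ above should reproduce the corresponding
coupling coefficient.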
\section{Numerical Simulations}
\label{AppB}
The quantum mechanical simulations in Figs. \ref{FigLC} and \ref{FigAR} use
Eq. (\ref{OrigEq}). The maximal value of $n$ and $n-l$ was chosen such that
only a negligible portion of the population reaches the states along those
numerical boundaries in the Hilbert space. The simulations in Figs. \ref
{Fig1-1a} and \ref{Fig2-1}(a), however, are based on the RWA [Eq. (\ref
{RWAEq})], i.e., include only the states which are connected to the initial
condition through the resonant interaction. The validity of this assumption
improves as $P_{2}$ increases and breaks down completely in the portion of
the parameter space where ionization occurs. For this reason the quantum
mechanical simulations are limited to the region of the parameter space
where no ionization is observed (classically). For the $1:1$ resonance we
have also tested the effect of the RWA by solving the same equation set with
more states outside the resonant chain (i.e., states with higher values of $n-l
$) and found no significant changes in the results presented in Fig. \ref
{Fig1-1a}.
Our classical simulations are based on solving the classical Hamilton
equations for the Hamiltonian:
\begin{equation*}
H=\frac{P_{2}n_{0}^{4}}{6q^{2}}\left[ p_{r}^{2}+\frac{p_{\theta }^{2}}{r^{2}}
+\frac{p_{\phi }^{2}}{r^{2}\sin ^{2}\theta }-\frac{2}{r}\right] +\frac{2P_{1}
}{C_{0}}\cos {\phi _{d}}r\cos \theta ,
\end{equation*}
where $r,\theta ,\phi $ are spherical coordinates and $p_{r},p_{\theta
},p_{\phi }$ their conjugate momenta. Naturally, the quantum mechanical
initial condition does not translate directly to a classical initial
condition. We used initial conditions corresponding to a classical circular
Keplerian case, but averaged over the initial driving phase $\phi _{d}$
between $0$ and $2\pi $ in Fig. \ref{Fig1-1b} and over $\theta $ between
$0$ and $\pi $ in Fig. \ref{Fig2-1}(b) for testing the validity of the
single resonance approximation. One can observe that in both figures all
initial conditions yield the same results except for the bottom-right corner
of the parameter space where the SRA is not valid.
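For concreteness, a minimal sketch of how one such classical trajectory could
be integrated is shown below. This is not the production code used for the
figures; the parameter values, the placeholder drive phase $\phi_d(t)$ and the
initial condition are illustrative only, and no averaging over initial phases
is performed.
\begin{verbatim}
# Illustrative sketch: Hamilton's equations for the Hamiltonian of Appendix B,
# with A = P2*n0**4/(6*q**2) and B = 2*P1/C0. phi_d(t) is a placeholder for
# the chirped driving phase.
import numpy as np
from scipy.integrate import solve_ivp

P1, P2, n0, q = 0.3, 0.5, 90, 1                      # placeholder parameters
C0 = np.sqrt(2) * n0**1.5                            # 1:1 CRS coupling, n0 >> 1
A, B = P2 * n0**4 / (6 * q**2), 2 * P1 / C0
phi_d = lambda t: 0.0 * t                            # placeholder drive phase

def rhs(t, y):
    r, th, ph, pr, pth, pph = y
    s, c = np.sin(th), np.cos(th)
    drive = B * np.cos(phi_d(t))
    return [2 * A * pr,                                          # dr/dt
            2 * A * pth / r**2,                                  # dtheta/dt
            2 * A * pph / (r**2 * s**2),                         # dphi/dt
            2 * A * (pth**2 / r**3 + pph**2 / (r**3 * s**2)
                     - 1 / r**2) - drive * c,                    # dp_r/dt
            2 * A * pph**2 * c / (r**2 * s**3) + drive * r * s,  # dp_theta/dt
            0.0]                                                 # dp_phi/dt (phi is cyclic)

y0 = [n0**2, np.pi / 2, 0.0, 0.0, 0.0, float(n0)]    # circular Kepler-like start
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)
\end{verbatim}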
\end{document}
\begin{document}
\title[Electrostatic system with divergence-free Bach tensor]{Electrostatic system with divergence-free Bach tensor and non-null cosmological constant}
\author{Benedito Leandro}
\address{Universidade Federal de Goiás\\IME\\ Caixa postal 131, CEP 74690-900, Goi\^ania, GO, Brazil.
}
\curraddr{}
\email{[email protected]}
\thanks{The authors were partially supported by CNPq Grant 403349/2021-4.}
\author{Róbson Lousa}
\address{Universidade Federal de Goiás\\IME\\ Caixa postal 131, CEP 74690-900, Goi\^ania, GO, Brazil.}\address{Instituto Federal de Roraima \\Câmpus Amajari\\ CEP 69343-000, Amajari, RR, Brazil.}
\curraddr{}
\email{[email protected]}
\thanks{Róbson Lousa was partially supported by PROPG-CAPES [Finance Code 001].}
\subjclass[2020]{83C22, 83C05, 53C18.}
\date{}
\dedicatory{}
\begin{abstract}
We prove that three-dimensional electrostatic manifolds with divergence-free Bach tensor are locally conformally flat, provided that the electric field and the gradient of the lapse function are linearly dependent. Consequently, a three-dimensional electrostatic manifold admits a local warped product structure with a one-dimensional base and a constant curvature surface fiber.
\end{abstract}
\maketitle
\section{Introduction and main results}
In this paper, we will consider the following system (cf. \cite{cederbaum2016uniqueness, chrusciel2005non, chrusciel2017non, tiarlos, kunduri2018, lucietti} and the references therein).
\begin{definition}\label{def1}
Let $(M^3,g)$ be a Riemannian manifold with $E$ a tangent vector field on $M$ and $f\in C^\infty(M)$ satisfying
\begin{equation}\label{s1}
\begin{array}{rcll}
\nabla^2f&=&f(\textnormal{Ric}-\Lambda g+2E^\flat\otimes E^\flat-|E|^2g),\\\\
\Delta f&=&(|E|^2-\Lambda)f,\quad0=\textnormal{div}E\quad\mbox{and}\quad 0\,=\,\textnormal{curl}(fE).
\end{array}
\end{equation}
Here, $\textnormal{Ric}$, $\nabla^2$, $\textnormal{div}$ and $\Delta$ stand for the Ricci tensor, Hessian tensor, divergence and Laplacian operator with respect to the metric $g$, respectively. Moreover, $E^\flat$ is the one-form metrically dual to $E$. We refer to the above equations as {\it electrostatic system} with cosmological constant $\Lambda$ for the electrostatic spacetime
associated to $(M^3, g, f, E)$.
\end{definition}
Recall that $\textnormal{curl}$ stands for the operator that describes the circulation (or rotation) of a vector field. Thus, we have $\textnormal{curl}(fE)=0$ if and only if
\begin{eqnarray}\label{curlXld}
df\wedge E^\flat+fdE^\flat=0.
\end{eqnarray}
The smooth function $f$ is called the lapse function, the field $E$ is known as the electric field and $M^3$ is the spatial factor of the static electrostatic spacetime. Moreover, $f > 0$ on $M$. If $M$ has boundary $\partial M$, we assume in addition that $f^{-1}(0)= \partial M$ (cf. \cite{chrusciel2005non, chrusciel2017non,tiarlos, kunduri2018}).
Note that, taking the contraction of the first equation and combining it with the Laplacian of $f$ in \eqref{s1}, we obtain a useful equation that relates the scalar curvature $R$, the cosmological constant, and the electric field:
\begin{equation}\label{rrr}
R=2(|E|^2+\Lambda).
\end{equation}
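Indeed, tracing the first equation in \eqref{s1} gives
\begin{equation*}
\Delta f=f\big(R-3\Lambda+2|E|^{2}-3|E|^{2}\big)=f\big(R-3\Lambda-|E|^{2}\big),
\end{equation*}
and comparing this with $\Delta f=(|E|^{2}-\Lambda)f$ yields $R-3\Lambda-|E|^{2}=|E|^{2}-\Lambda$, which is exactly \eqref{rrr}.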
Furthermore, the system \eqref{s1} implies that the electric field and the gradient of the lapse function are linearly dependent on $\partial M=f^{-1}(0)$ (see \cite[Lemma 4]{tiarlos}).
There are some well-known examples of solutions for the electrostatic system, and we refer to \cite[Section 3]{tiarlos} for a good overview. For instance, the {charged} Nariai system is a $3$-dimensional space $\left[0,\,\frac{\pi}{\alpha}\right]\times\mathbb{S}^2$ with metric tensor $g=dr^2 + \varphi^2g_{\mathbb{S}^2},$ where $\varphi$ is a constant and $g_{\mathbb{S}^2}$ is the standard metric of the sphere $\mathbb{S}^2$ of radius $1$. The electric field and the lapse function are given by
\begin{eqnarray*}
E=\frac{q}{\varphi^2}\partial_r\quad\mbox{and}\quad f(r(x)) = \sin(\alpha r(x)),
\end{eqnarray*}
where $r(x)^2=x_1^2+x_2^2+x_3^2$, with $(x_1,\,x_2,\,x_3)$ Cartesian coordinates, $\alpha=\sqrt{\Lambda - \frac{q^2}{\varphi^4}}$ and $\frac{1}{2\Lambda}<\varphi^2<\frac{1}{\Lambda}.$ Moreover, $0<m^2=\frac{1}{18\Lambda}\left[1+12q^2\Lambda+\sqrt{(1-4q^2\Lambda)^3}\right]$ and $0<|q|\leq\varphi^2\sqrt{\Lambda}.$ It is important to point out that the {charged} Nariai system is locally conformally flat (see \cite[Corollary 1.34]{chow2006hamilton}). In this work, it is also important to recall the cold black hole system and the ultracold black hole system; they are also locally conformally flat standard electrostatic models.
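As a quick consistency check on the charged Nariai data above: since $\varphi$ is constant, the Laplacian of a function of $r$ alone reduces to $f''$, and hence
\begin{equation*}
\Delta f=f''=-\alpha^{2}\sin(\alpha r)=-\Big(\Lambda-\frac{q^{2}}{\varphi^{4}}\Big)f=\big(|E|^{2}-\Lambda\big)f,
\end{equation*}
in agreement with the equation $\Delta f=(|E|^{2}-\Lambda)f$ in \eqref{s1}.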
The cold black hole is a $3$-dimensional space $[0,\,\infty)\times\mathbb{S}^2$ with metric tensor $g=dr^2 + \varphi^2g_{\mathbb{S}^2},$ where $\varphi$ is a constant and $g_{\mathbb{S}^2}$ is the standard metric of the sphere $\mathbb{S}^2$ of radius $1$. The electric field and the lapse function are given by
\begin{eqnarray*}
E=\frac{q}{\varphi^2}\partial_r\quad\mbox{and}\quad f(r(x)) = \sinh(\beta r(x)),
\end{eqnarray*}
where $r(x)^2=x_1^2+x_2^2+x_3^2$, $\beta=\sqrt{\frac{q^2}{\varphi^4}-\Lambda}$ and $0<\varphi^2<\frac{1}{2\Lambda}.$ Moreover, $0<m^2=\frac{1}{18\Lambda}\left[1+12q^2\Lambda+\sqrt{(1-4q^2\Lambda)^3}\right]$ and $\varphi^2\sqrt{\Lambda}\leq|q|.$
The ultracold black hole is a $3$-dimensional space $[0,\,\infty)\times\mathbb{S}^2$ with metric tensor $g=dr^2 + \varphi^2g_{\mathbb{S}^2},$ where $\varphi^2=\frac{1}{4\Lambda}=q^2$. The electric field and the lapse function are given by
\begin{eqnarray*}
E=\sqrt{\Lambda}\partial_r\quad\mbox{and}\quad f(r) = r.
\end{eqnarray*}
Moreover, $m=\frac{1}{3}\sqrt{\frac{2}{\Lambda}}.$
The Reissner-Nordström-de Sitter (RNdS) manifold is an important electrostatic system $(M^3,g,f,E)$ where $$M=[r_+,r_c]\times\mathbb{S}^2$$
for some positive constants $r_+$ and $r_c$ which are zeros of the lapse function given by
$$f=\left(1-\dfrac{2m}{r}+\dfrac{q^2}{r^2}-\dfrac{\Lambda r^2}{3}\right)^{\frac{1}{2}}.$$ Also, it is possible to extend $[r_+,r_c]$ to the entire real line.
The metric and the electric field are given by
$$g=f(r)^{-2}dr^2+r^2g_{\mathbb{S}^2}, \quad E=\dfrac{q}{r^2}f(r)\partial_r.$$
In the above, $q$, $m$ and $g_{\mathbb{S}^2}$ stand for the charge, the mass and the standard metric of the unit sphere $\mathbb{S}^2$, respectively. Since the cosmological constant $\Lambda$ is positive, from \eqref{rrr} we can see that $R>0$.
The RNdS solution can be rewritten in cosmological
coordinates. For instance, the Kastor-Traschen solution represents $N$ charge-equal-to-mass (i.e., $m=|q|$) black holes in a spacetime with a positive cosmological constant $\Lambda:$
$$ds^2=-W^{-2}dt^2 + W^2(dx_{1}^2+dx_{2}^2+dx_{3}^2),$$
where $W=-\sqrt{\frac{\Lambda}{3}}t + \displaystyle\sum_{i=1}^N\frac{m_{i}}{r_{i}}.$ Here, the $m_i$ stand for the black hole masses. Moreover, $r_i(x)=\sqrt{(x_1-a_i)^2+(x_2-b_i)^2+(x_3-c_i)^2}$ is the distance from a fixed point $(a_i,\,b_i,\,c_i).$ It is interesting to point out that this solution is time-dependent and corresponds to the Majumdar-Papapetrou solution when $\Lambda=0.$
Keeping the electrostatic solutions in mind, we know that the electric field and the lapse function are related. In fact, from \eqref{curlXld} the electric field and the gradient of the lapse function must be linearly dependent at the boundary $\partial M$.
There are some well-known classification results for important geometric structures, such as static vacuum manifolds and Ricci solitons, carrying a metric whose Bach tensor is divergence-free (cf. \cite{cao, catino2017gradient, hwang2021vacuum, benedito, qing2013note}). Any three-dimensional Riemannian manifold is locally conformally flat if, and only if, its Cotton tensor $C$ is identically zero.
In the three-dimensional case the Cotton tensor is associated with the Bach tensor, $B$, according to $B=\textnormal{div}C$. The Bach tensor was defined in 1921 by Rudolf Bach and is connected to general relativity and conformal geometry. This tensor appeared naturally in studies of Huygens' principle and has physical significance, mainly concerning wave propagation (see for instance \cite{szekeres1968conformal} and the references therein).
The main goal of this work is to show that an electrostatic system with divergence-free Bach tensor, i.e., $\textnormal{div}^2B=0$, must be locally conformally flat. It is important to say that $\textnormal{div}^2B=0$ is less restrictive (topologically speaking) than asymptotically flat conditions.
To state our main results we need to define an important function. To that end, we will say that the electric field $E$ and the gradient of the lapse function $\nabla f$ are linearly dependent if there exists a smooth function $\rho$ such that $E=\rho\nabla f.$ As an interesting consequence of $\textnormal{curl}(fE)=0$, if $E=\nabla \psi$ for some smooth function $\psi:M\to\mathbb{R}$, then $E$ must be parallel to the gradient of $f$. Also, it is natural to consider the case $fE=\nabla\psi$.
We define the function $$Q=2(1-f^2\rho^2).$$
\begin{theorem}\label{teoharmonico'}
Let $(M^3,\,g,\,f,\,E)$ be a compact (without boundary) electrostatic system such that the electric field and the gradient of the lapse function are linearly dependent. Suppose that the Bach tensor is divergence-free and $Q>0$ (or $Q<0$). Then, $(M^3,\,g)$ is locally conformally flat.
\end{theorem}
The next result proves the noncompact case.
\begin{theorem}\label{proper'}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system such that the electric field and the gradient of the lapse function are linearly dependent. Suppose that the Bach tensor is divergence-free and $Q>0$ (or $Q<0$). If $f$ is a proper function, then $(M^3,\,g)$ is locally conformally flat.
\end{theorem}
Now, we are able to provide the geometric structure for a $3$-dimensional electrostatic system.
\begin{theorem}
\label{fiber007-3}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system such that the electric field and the gradient of the lapse function are linearly dependent. Suppose that the Bach tensor is divergence-free and $Q>0$ (or $Q<0$). If $f$ is a proper function, around any regular point of $f$ the manifold is locally a warped product with a one-dimensional base with fiber $(N^2,\,\overline{g})$ of constant curvature, i.e.,
$$(M^{3},\,g)=(I,\,dr^{2})\times_{\varphi}(N^{2},\,\overline{g}),$$
where $I\subset\mathbb{R}$ and $\varphi(r)=c_1\int\frac{dr}{\sqrt{f(r)}}+c_2;$ $c_1$ and $c_2$ are constants.
\end{theorem}
\begin{remark}
It is important to point out that if $M^3$ is compact in Theorem \ref{fiber007-3}, it is not necessary to ask for $f$ to be a proper function.
\end{remark}
\section{Structural lemmas}\label{lemmas}
This section is reserved for some preliminary results needed to prove the main theorems of this work. We start by constructing a covariant $V$-tensor similar to the tensor defined in \cite{andrade}.
To that end, first we combine \eqref{s1} with \eqref{rrr} to obtain
\begin{equation}\label{combinado}
\nabla^2f=f\left(\textnormal{Ric}+2E^\flat\otimes E^\flat-\dfrac{R}{2}g\right).
\end{equation}
On the other hand, it is well known that in any Riemannian manifold we can relate the Riemann curvature tensor with a smooth function by using the Ricci identity
\begin{equation*}\label{4}
\nabla_i\nabla_j\nabla_kf-\nabla_j\nabla_i\nabla_kf=R_{ijkl}\nabla^lf.
\end{equation*}
Since the Hessian operator is symmetric, taking the covariant derivative of \eqref{combinado} over $i$ and $j$ and then subtracting, we get
\begin{eqnarray*}
R_{ijkl}\nabla^lf&=&\nabla_i\nabla_j\nabla_kf-\nabla_j\nabla_i\nabla_kf\\&=&f(\nabla_iR_{jk}-\nabla_jR_{ik})-\frac{f}{2}(\nabla_iRg_{jk}-\nabla_jRg_{ik})\\&&-\frac{R}{2}(\nabla_ifg_{jk}-\nabla_jfg_{ik})+(R_{jk}\nabla_if-R_{ik}\nabla_jf)\\
&&+2f(E^{\flat_j}\nabla_iE^{\flat_k}-E^{\flat_i}\nabla_jE^{\flat_k}+\nabla_iE^{\flat_j}E^{\flat_k}-\nabla_jE^{\flat_i}E^{\flat_k})\\
&&+2(\nabla_ifE^{\flat_j}E^{\flat_k}-\nabla_jfE^{\flat_i}E^{\flat_k}).
\end{eqnarray*}
Here, we are considering $\{e_i\}^{3}_{i=1}$ as a basis for the tangent space of $M$. Moreover, $E^{\flat_i}=E^{\flat}(e_i)$.
Note that the Cotton tensor over a $3$-dimensional Riemannian manifold is defined by
\begin{equation}\label{ct}
C_{ijk}=\nabla_iR_{jk}-\nabla_jR_{ik}-\frac{1}{4}(\nabla_iRg_{jk}-\nabla_jRg_{ik}).
\end{equation}
Furthermore, the Riemann curvature tensor is given by
\begin{eqnarray*}
R_{ijkl}&=&R_{ik}g_{jl}-R_{il}g_{jk}+R_{jl}g_{ik}-R_{jk}g_{il}-\frac{R}{2}(g_{ik}g_{jl}-g_{il}g_{jk}).
\end{eqnarray*}
Therefore, combining these equations we get
\begin{eqnarray}\label{construção1}
fC_{ijk}&=&(R_{jl}\nabla^lfg_{ik}-R_{il}\nabla^lfg_{jk})+R(\nabla_ifg_{jk}-\nabla_jfg_{ik})+2(R_{ik}\nabla_jf-R_{jk}\nabla_if)\nonumber\\
&&-2f(E^{\flat_j}\nabla_iE^{\flat_k}-E^{\flat_i}\nabla_jE^{\flat_k}+\nabla_iE^{\flat_j}E^{\flat_k}-\nabla_jE^{\flat_i}E^{\flat_k})\\
&&-2E^{\flat_k}(E^{\flat_j}\nabla_if-E^{\flat_i}\nabla_jf)+\dfrac{f}{4}(\nabla_iRg_{jk}-\nabla_jRg_{ik}).\nonumber
\end{eqnarray}
Now, using $\textnormal{curl}(fE)=0$ we can infer that
\begin{eqnarray*}
fdE^{\flat}(e_i,\,e_j)=-(df\wedge E^\flat)(e_i,\,e_j)&=& E^{\flat}(e_i)df(e_j)-E^{\flat}(e_j)df(e_i)\\
&=&E^{\flat_i}\nabla_jf - E^{\flat_j}\nabla_if.
\end{eqnarray*}
On the other hand, let $dE^{\flat}(e_i,\,e_j)=E^{\flat_{ij}}$, then, by definition we have
\begin{eqnarray}\label{2forma}
E^{\flat_{ij}}=\nabla_iE^{\flat_j} - \nabla_jE^{\flat_i}.
\end{eqnarray}
Further, we can see that
\begin{eqnarray}\label{curlajuda}
f(\nabla_iE^{\flat_j} - \nabla_jE^{\flat_i})=E^{\flat_i}\nabla_jf - E^{\flat_j}\nabla_if.
\end{eqnarray}
We can rewrite \eqref{construção1} using $\textnormal{curl}(fE)=0$. So,
\begin{eqnarray}\label{construção}
fC_{ijk}&=&(R_{jl}\nabla^lfg_{ik}-R_{il}\nabla^lfg_{jk})+2(R_{ik}\nabla_jf-R_{jk}\nabla_if)+R(\nabla_ifg_{jk}-\nabla_jfg_{ik})\nonumber\\
&&+\dfrac{f}{4}(\nabla_iRg_{jk}-\nabla_jRg_{ik})-2f(E^{\flat_j}\nabla_iE^{\flat_k}-E^{\flat_i}\nabla_jE^{\flat_k}).
\end{eqnarray}
Define the covariant $3$-tensor $V_{ijk}$ by
\begin{eqnarray}\label{tt}
V_{ijk}&=&2f(E^{\flat_i}\nabla_jE^{\flat_k}-E^{\flat_j}\nabla_iE^{\flat_k})+\dfrac{f}{4}(\nabla_iRg_{jk}-\nabla_jRg_{ik})+R(\nabla_ifg_{jk}-\nabla_jfg_{ik})\nonumber\\
&&-(R_{il}\nabla^lfg_{jk}-R_{jl}\nabla^lfg_{ik})-2(\nabla_ifR_{jk}-\nabla_jfR_{ik}),
\end{eqnarray}
where $E^{\flat_i}=E^\flat(e_i).$ The $V$-tensor has the same symmetries as the Cotton tensor $C$ and it is trace-free. Hence, from \eqref{construção} and \eqref{tt} we can conclude our next result.
\begin{lemma}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system. Then,
\begin{equation}\label{ttt3}
fC_{ijk}=V_{ijk}.
\end{equation}
\end{lemma}
Our next results follow the same strategy used by \cite{andrade}, \cite{catino2017gradient} and \cite{benedito}. We will sketch the proofs here for the sake of completeness.
\begin{lemma}\label{lemma3.3}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system. Then,
\begin{equation*}
C_{kji}R^{ik}=\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{equation*}
\end{lemma}
\begin{proof}
In dimension $n=3$, the Bach tensor is defined by
\begin{equation*}\label{bach3}
B_{ij}=\nabla^kC_{kij}=\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{equation*}
Taking the derivative over $i$, we have
\begin{equation*}
\nabla^iB_{ij}=\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{equation*}
On the other hand,
\begin{equation*}
\nabla^jB_{ij}=-C_{ijk}R^{jk}, \qquad C_{ijk}=-C_{jik},\qquad \nabla^kC_{kij}=\nabla^kC_{kji}
\end{equation*}
and
\begin{equation*}\label{soma}
C_{ijk}+C_{kij}+C_{jki}=0.
\end{equation*}
Then, from a straightforward computation, we obtain
\begin{equation*}
\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right)=\nabla^iB_{ij}=-C_{jik}R^{ik}=-C_{jki}R^{ik}=C_{kji}R^{ik},
\end{equation*}
which is the expected result.
\end{proof}
\begin{lemma}\label{lemma4.3}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system. Then,
\begin{equation*}
\begin{aligned}
\frac{1}{2}|C|^2+R^{ik}\nabla^jC_{jki}=-\nabla^j\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
Taking the divergence in Lemma \ref{lemma3.3}, we get
\begin{equation*}
C_{kji}\nabla^jR^{ik}+R^{ik}\nabla^jC_{kji}=\nabla^j\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{equation*}
Now, from the symmetries of the $C$-tensor and renaming indices, we can infer that
\begin{equation*}
(\nabla^jR^{ik}-\nabla^kR^{ij})C_{jki}=C_{jki}\nabla^jR^{ik}+C_{kji}\nabla^kR^{ij}= 2C_{jki}\nabla^jR^{ik}.
\end{equation*}
Hence,
\begin{equation*}
\frac{1}{2}C_{kji}(\nabla^jR^{ik}-\nabla^kR^{ij})+R^{ik}\nabla^jC_{kji}=\nabla^j\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{equation*}
Now, since the Cotton tensor is trace-free, from \eqref{ct} we obtain
\begin{equation*}
-\frac{1}{2}C_{kji}C^{kji}-R^{ik}\nabla^jC_{jki}=\nabla^j\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{equation*}
Therefore, the result holds.
\end{proof}
From this point, the structure of the electrostatic system plays an important role.
\begin{theorem}\label{teo3.3'}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system. For every $C^2$-function $\phi:\mathbb{R}\rightarrow\mathbb{R}$, with $\phi(f)$ having compact support $K\subseteq M$ such that $K\cap\partial M=\emptyset$, we have
\begin{eqnarray*}
\frac{1}{4}\int_M\phi(f)|C|^2=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}+\int_M\phi(f)E^{\flat_j}\nabla^kE^{\flat_i}C_{jki}.
\end{eqnarray*}
\end{theorem}
\begin{proof}
From Lemma \ref{lemma4.3}, we obtain
\begin{equation*}
\begin{aligned}
\frac{1}{2}|C|^2\phi(f)+\phi(f)R^{ik}\nabla^jC_{jki}=-\phi(f)\nabla^j\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{aligned}
\end{equation*}
Integrating this expression, we get
\begin{equation*}
\begin{aligned}
\frac{1}{2}\int_M|C|^2\phi(f)+\int_M\phi(f)R^{ik}\nabla^jC_{jki}=\int_M\dot{\phi}(f)\nabla^jf\nabla^i\nabla^k\left(\frac{V_{kij}}{f}\right).
\end{aligned}
\end{equation*}
Thus, from Lemma \ref{lemma3.3} we have
\begin{equation*}
\begin{aligned}
\frac{1}{2}\int_M|C|^2\phi(f)+\int_M\phi(f)R^{ik}\nabla^jC_{jki}=&-\int_M\dot{\phi}(f)R^{ik}\nabla^jfC_{jki}.
\end{aligned}
\end{equation*}
We will integrate by parts some of the terms in the above equation separately, using \eqref{combinado} and the fact that $C_{ijk}$ is trace-free and skew-symmetric. First,
\begin{eqnarray*}
\int_M\phi(f)R^{ik}\nabla^jC_{jki}&=&\int_M\dfrac{\phi(f)}{f}\nabla^i\nabla^kf\nabla^jC_{jki}-2\int_M\phi(f)E^{\flat_i}E^{\flat_k}\nabla^jC_{jki}\\
&=&\int_M\dfrac{\phi(f)}{f}\nabla^i\nabla^kf\nabla^jC_{jki}+2\int_M\dot{\phi}(f)\nabla^jfE^{\flat_i}E^{\flat_k}C_{jki}\\
&&+2\int_M\phi(f)\nabla^j(E^{\flat_i}E^{\flat_k})C_{jki}.
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\int_M\dot{\phi}(f)R^{ik}\nabla^jfC_{jki}&=&\int_M\dfrac{\dot{\phi}(f)}{f}\nabla^jf\nabla^i\nabla^kfC_{jki}-2\int_M\dot{\phi}(f)\nabla^jfE^{\flat_i}E^{\flat_k}C_{jki}.\\
\end{eqnarray*}
Note that, since the Hessian tensor is symmetric
\begin{equation*}
2\nabla^j\nabla^kf C_{jki}=\nabla^k\nabla^jf C_{jki}+\nabla^j\nabla^kf C_{kji}=\nabla^k\nabla^jf(C_{jki}+C_{kji})=0.
\end{equation*}
Hence,
\begin{align*}
\frac{1}{2}\int_M|C|^2\phi(f)+&\int_M\dfrac{\phi(f)}{f}\nabla^i\nabla^kf\nabla^jC_{jki}+2\int_M\phi(f)\nabla^j(E^{\flat_i}E^{\flat_k})C_{jki}\\
=&-\int_M\dfrac{\dot{\phi}(f)}{f}\nabla^jf\nabla^i\nabla^kfC_{jki}=\int_M\dfrac{\dot{\phi}(f)}{f}\nabla^jf\nabla^if\nabla^kC_{jki}\\
=&-\int_M\dfrac{\dot{\phi}(f)}{f}\nabla^if\nabla^kf\nabla^jC_{jki}
=-\int_M\dfrac{\nabla^i{\phi}(f)}{f}\nabla^kf\nabla^jC_{jki}\\
=&-\int_M\frac{\phi(f)}{f^2}\nabla^if\nabla^kf\nabla^jC_{jki}+\int_M\dfrac{{\phi}(f)}{f}\nabla^i\nabla^kf\nabla^jC_{jki}\\
&+\int_M\dfrac{{\phi}(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}.
\end{align*}
\iffalse
Now, from the first equation of the system \eqref{s1} and the fact that $C_{ijk}$ is trace-free and skew-symmetric we get
\begin{equation*}
\begin{aligned}
\frac{1}{2}\int_M|C|^2\phi(f)+&\int_M\frac{\phi(f)}{f}(\nabla^i\nabla^kf-2fE^{\flat_i}E^{\flat_k})\nabla^jC_{jki}\\
=&-\int_M\frac{\dot{\phi}(f)}{f}\nabla^jf(\nabla^k\nabla^if-2fE^{\flat_i}E^{\flat_k})C_{jki}\\
=&-\int_M\frac{\dot{\phi}(f)}{f}\nabla^jf\nabla^k\nabla^ifC_{jki}+2\int_M\dot{\phi}(f)\nabla^jfE^{\flat_i}E^{\flat_k}C_{jki}\\
=&\int_M\left(\frac{\ddot{\phi}(f)}{f}-\frac{\dot{\phi}(f)}{f^2}\right)\nabla^jf\nabla^kf\nabla^ifC_{jki}+\int_M\frac{\dot{\phi}(f)}{f}\nabla^k\nabla^jf\nabla^ifC_{jki}\\
&+\int_M\frac{\dot{\phi}(f)}{f}\nabla^jf\nabla^if\nabla^kC_{jki}+\textcolor{red}{2\int_M\dot{\phi}(f)(fE^{\flat_{ij}}+\nabla^ifE^{\flat_j})E^{\flat_k}C_{jki}}\\
=&\int_M\frac{\dot{\phi}(f)}{f}\nabla^jf\nabla^if\nabla^kC_{jki}+\textcolor{red}{2\int_Mf\dot{\phi}(f)E^{\flat_k}E^{\flat_{ij}}C_{jki}}.
\end{aligned}
\end{equation*}
Note that we used that
\begin{equation*}\label{hessiana}
2\nabla^j\nabla^kf C_{jki}=\nabla^k\nabla^jf C_{jki}+\nabla^j\nabla^kf C_{kji}=\nabla^k\nabla^jf(C_{jki}+C_{kji})=0.
\end{equation*}
From now, we rename the indices and, integrating by parts again, we infer
\begin{equation*}
\begin{aligned}
\frac{1}{2}\int_M|C|^2\phi(f)&+\int_M\frac{\phi(f)}{f}\nabla^i\nabla^kf\nabla^jC_{jki}-2\int_M\phi(f)E^{\flat_i}E^{\flat_k}\nabla^jC_{jki}\\
=&-\int_M\frac{\dot{\phi}(f)}{f}\nabla^kf\nabla^if\nabla^jC_{jki}+2\int_Mf\dot{\phi}(f)E^{\flat_k}E^{\flat_{ij}}C_{jki}\\
=&-\int_M\frac{\nabla^i\phi(f)}{f}\nabla^kf\nabla^jC_{jki}+2\int_Mf\dot{\phi}(f)E^{\flat_k}E^{\flat_{ij}}C_{jki}\\
=&\int_M\frac{\phi(f)}{f}\nabla^k\nabla^if\nabla^jC_{jki}-\int_M\frac{\phi(f)}{f^2}\nabla^kf\nabla^if\nabla^jC_{jki}\\
&+\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki} +2\int_Mf\dot{\phi}(f)E^{\flat_k}E^{\flat_{ij}}C_{jki}.
\end{aligned}
\end{equation*}
\fi
Therefore, we get
\begin{eqnarray}\label{integral}
\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}&=&\frac{1}{2}\int_M|C|^2\phi(f)+\int_M\frac{\phi(f)}{f^2}\nabla^if\nabla^kf\nabla^jC_{jki}\nonumber\\
&&+2\int_M\phi(f)\nabla^j(E^{\flat_i}E^{\flat_k})C_{jki}.
\end{eqnarray}
Then, since the Cotton tensor is trace-free and skew-symmetric, another integration by parts gives us
\begin{eqnarray*}
\int_M\frac{\phi(f)}{f^2}\nabla^if\nabla^kf\nabla^jC_{jki}&=&\int_M\dfrac{\phi(f)}{f^2}\nabla^j\nabla^if\nabla^kfC_{kji}\\
&=&\int_M\dfrac{\phi(f)}{f}(R^{ij}+2E^{\flat_i}E^{\flat_j})\nabla^kfC_{kji}.
\end{eqnarray*}
We used \eqref{combinado} in the last equality. Thus, \eqref{integral} can be rewritten in the following form:
\begin{eqnarray*}
\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}&=&\frac{1}{2}\int_M|C|^2\phi(f)+2\int_M\phi(f)\nabla^j(E^{\flat_i}E^{\flat_k})C_{jki}\nonumber\\
&&-2\int_M\dfrac{\phi(f)}{f}E^{\flat_i}E^{\flat_j}\nabla^kfC_{jki}+\int_M\dfrac{\phi(f)}{f}R^{ij}\nabla^kfC_{kji}.
\end{eqnarray*}
Now, from \eqref{tt} and \eqref{ttt3}, we have
\begin{eqnarray*}\label{ricci}
R^{ij}\nabla^kfC_{kji}&=&\frac{1}{2}C_{kji}(\nabla^kfR^{ji}-\nabla^jf R^{ki})\nonumber\\
&=&-\frac{1}{2}fC_{kji}\left[\dfrac{1}{2}C^{kji}+(E^{\flat_j}\nabla^kE^{\flat_i}-E^{\flat_k}\nabla^jE^{\flat_i})\right]\\
&=&-\frac{1}{4}f|C|^2-\dfrac{1}{2}f\left(E^{\flat_j}\nabla^kE^{\flat_i}-E^{\flat_k}\nabla^jE^{\flat_i}\right)C_{kji}.\nonumber
\end{eqnarray*}
Thus,
\begin{eqnarray*}
&&\frac{1}{4}\int_M\phi(f)|C|^2=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}\nonumber\\
&&+2\int_M\dfrac{\phi(f)}{f}\left[E^{\flat_j}E^{\flat_i}\nabla^kf-f\nabla^j(E^{\flat_i}E^{\flat_k})+\dfrac{f}{4}\left(E^{\flat_k}\nabla^jE^{\flat_i}-E^{\flat_j}\nabla^kE^{\flat_i}\right)\right]C_{jki}.
\end{eqnarray*}
Furthermore, from \eqref{curlajuda} we have
\begin{eqnarray*}
E^{\flat_i}\nabla^kf - E^{\flat_k}\nabla^if =f(\nabla^iE^{\flat_k}-\nabla^kE^{\flat_i}).
\end{eqnarray*}
Combining the last two equations with the fact that the Cotton tensor is skew-symmetric yields
\begin{eqnarray*}
&&\frac{1}{4}\int_M\phi(f)|C|^2=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}\nonumber\\
&&+2\int_M\phi(f)\left[E^{\flat_j}(\nabla^iE^{\flat_k}-\nabla^kE^{\flat_i})-\nabla^j(E^{\flat_i}E^{\flat_k})+\dfrac{1}{4}\left(E^{\flat_k}\nabla^jE^{\flat_i}-E^{\flat_j}\nabla^kE^{\flat_i}\right)\right]C_{jki}\\&&=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}\nonumber\\
&&+2\int_M\phi(f)\left[E^{\flat_j}\nabla^iE^{\flat_k}-E^{\flat_i}\nabla^jE^{\flat_k}-\frac{3}{4}E^{\flat_k}\nabla^jE^{\flat_i}-\frac{5}{4}E^{\flat_j}\nabla^kE^{\flat_i}\right]C_{jki}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
&&E^{\flat_j}\nabla^iE^{\flat_k}C_{jki} = - E^{\flat_k}\nabla^iE^{\flat_j}C_{jki},\qquad E^{\flat_i}\nabla^jE^{\flat_k}C_{jki} = - E^{\flat_i}\nabla^kE^{\flat_j}C_{jki},\\ &&E^{\flat_k}\nabla^jE^{\flat_i}C_{jki} = - E^{\flat_j}\nabla^kE^{\flat_i}C_{jki} \quad\mbox{and}\quad E^{\flat_j}\nabla^kE^{\flat_i}C_{jki} = - E^{\flat_k}\nabla^jE^{\flat_i}C_{jki}.
\end{eqnarray*}
Then,
\begin{eqnarray*}
\frac{1}{4}\int_M\phi(f)|C|^2&=&\int_M\phi(f)\left[2E^{\flat_j}\nabla^iE^{\flat_k}-2E^{\flat_i}\nabla^jE^{\flat_k}+E^{\flat_k}\nabla^jE^{\flat_i}\right]C_{jki}\nonumber\\
&& + \int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}.
\end{eqnarray*}
Since from \eqref{2forma} we have $$2E^{\flat_i}\nabla^jE^{\flat_k}C_{jki}=E^{\flat_i}E^{\flat_{jk}}C_{jki},$$ we can infer that
\begin{eqnarray*}
\frac{1}{4}\int_M\phi(f)|C|^2&=&\int_M\phi(f)\left[-2E^{\flat_k}\nabla^iE^{\flat_j}-E^{\flat_i}E^{\flat_{jk}}+E^{\flat_k}\nabla^jE^{\flat_i}\right]C_{jki}\\
&&+\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}\\
&=&\int_M\phi(f)\left[-E^{\flat_k}\nabla^iE^{\flat_j}+E^{\flat_i}E^{\flat_{kj}}+E^{\flat_k}E^{\flat_{ji}}\right]C_{jki}\\
&&+\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}\\
&=&\int_M\phi(f)\left[E^{\flat_k}E^{\flat_{ji}}+E^{\flat_j}\nabla^iE^{\flat_k}+E^{\flat_i}E^{\flat_{kj}}\right]C_{jki}\\
&&+\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}\\
&=&\int_M\phi(f)\left[E^{\flat_k}E^{\flat_{ji}}+E^{\flat_j}E^{\flat_{ik}}+E^{\flat_i}E^{\flat_{kj}}+E^{\flat_j}\nabla^kE^{\flat_i}\right]C_{jki}\\
&&+\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}.
\end{eqnarray*}
On the other hand, since $E^\flat\wedge dE^\flat=0$, from \eqref{curlajuda} we get
\begin{eqnarray*}
f(E^{\flat_k}E^{\flat_{ji}}+E^{\flat_j}E^{\flat_{ik}}+E^{\flat_i}E^{\flat_{kj}})&=&E^{\flat_k}E^{\flat_j}\nabla^if-E^{\flat_k}E^{\flat_i}\nabla^jf+E^{\flat_j}E^{\flat_i}\nabla^kf\\&&-E^{\flat_j}E^{\flat_k}\nabla^if+E^{\flat_i}E^{\flat_k}\nabla^jf-E^{\flat_i}E^{\flat_j}\nabla^kf\\&=&0.
\end{eqnarray*}
Finally,
\begin{eqnarray*}
\frac{1}{4}\int_M\phi(f)|C|^2=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}+\int_M\phi(f)E^{\flat_j}\nabla^kE^{\flat_i}C_{jki}.
\end{eqnarray*}
\end{proof}
\section{Proof of the main results}\label{proofs}
It is well-known {(see \cite[Lemma 4]{tiarlos})} that on $\partial M=f^{-1}(0)$ the electric field and the gradient of the lapse function are linearly dependent (LD). Motivated by the {charged Nariai solution and the cold black hole system}, we assume that both fields are linearly dependent on $M$, that is, there exists a smooth function $\rho$ such that $E=\rho\nabla f$. Thus we can rewrite the $V$-tensor \eqref{tt} as follows.
\begin{lemma}\label{ld1}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system in which $E=\rho\nabla f$. Then the $V$-tensor is given by
\begin{eqnarray*}
V_{ijk}&=&f\rho|\nabla f|^2(\nabla_i\rho g_{jk}-\nabla_j\rho g_{ik})+\left(R-\dfrac{1}{2}Rf^2\rho^2-f^2\rho^2\Lambda\right)(\nabla_ifg_{jk}-\nabla_jfg_{ik})\\
&&+2(f^2\rho^2-1)(\nabla_ifR_{jk}-\nabla_jfR_{ik})+(f^2\rho^2-1)(R_{il}\nabla^lfg_{jk}-R_{jl}\nabla^lfg_{ik}).
\end{eqnarray*}
\end{lemma}
\begin{proof}
Since the electric field and the gradient of the lapse function are linearly dependent (LD), there exists a smooth function $\rho$ such that $E=\rho\nabla f$. Using \eqref{combinado} we get
\begin{eqnarray*}
\nabla_iE^{\flat_j}&=&\nabla_i(\rho \nabla_jf)\\
&=&\nabla_i\rho\nabla_jf+\rho \nabla_i\nabla_jf\\
&=&\nabla_i\rho\nabla_jf+2f\rho^3\nabla_jf\nabla_if+f\rho R_{ij}-\dfrac{f}{2}\rho Rg_{ij}.
\end{eqnarray*}
On the other hand, from \eqref{rrr} we get
\begin{eqnarray*}
\nabla_i R&=&2\nabla_i|E|^2\\
&=&4\rho|\nabla f|^2\nabla_i\rho+2\rho^2\nabla_i|\nabla f|^2.
\end{eqnarray*}
From \eqref{combinado} we know that
$$\nabla_i|\nabla f|^2=2f\left(R_{il}\nabla^l f+2\rho^2|\nabla f|^2\nabla_i f-\frac{R}{2}\nabla_i f\right).$$
Combining the last two equations and using \eqref{rrr}, we have
\begin{eqnarray*}
\nabla_i R&=&4\rho|\nabla f|^2\nabla_i\rho+4f\rho^2\left(R_{il}\nabla^l f+2\rho^2|\nabla f|^2\nabla_i f-\frac{R}{2}\nabla_i f\right).
\end{eqnarray*}
Then, from \eqref{tt} it follows that
\begin{eqnarray*}
V_{ijk}&=&2f\rho(\nabla_if\nabla_j\rho\nabla_kf-\nabla_jf\nabla_i\rho\nabla_kf)+f\rho|\nabla f|^2(\nabla_i\rho g_{jk}-\nabla_j\rho g_{ik})\\
&&+2(f^2\rho^2-1)(\nabla_ifR_{jk}-\nabla_jfR_{ik})+(f^2\rho^2-1)(R_{il}\nabla^lfg_{jk}-R_{jl}\nabla^lfg_{ik})\\
&&+\left[\left(1-\dfrac{3}{2}f^2\rho^2\right)R+2f^2\rho^4|\nabla f|^2\right](\nabla_ifg_{jk}-\nabla_jfg_{ik}).
\end{eqnarray*}
Since $\textnormal{curl}(fE)=0$ implies that $E^{\flat_{ij}}=0$ (the fields are LD, i.e., $df\wedge E^\flat=0$), we have $\nabla_iE^{\flat_j}=\nabla_jE^{\flat_i}.$ So, $$\nabla_i\rho\nabla_jf=\nabla_j\rho\nabla_if.$$
Finally, combining \eqref{rrr} with the last two identities the result follows.
\end{proof}
Moreover, from Theorem \ref{teo3.3'} we obtain the following corollary.
\begin{corollary}\label{corolario2}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system where the electric field and the gradient of the lapse function are linearly dependent. For every $C^2$
function $\phi:\mathbb{R}\rightarrow\mathbb{R}$ with $\phi(f)$ having compact support $K\subseteq M$, we have
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2\phi(f)=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki},
\end{eqnarray*}
where $Q=2(1-f^2\rho^2)\neq0$.
\end{corollary}
\begin{proof}
Taking into account that $E=\rho\nabla f$ in Theorem \ref{teo3.3'}, and since the Cotton tensor is skew-symmetric and trace-free, we obtain
\begin{eqnarray*}
\frac{1}{4}\int_M\phi(f)|C|^2&=&\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}+\int_M\phi(f)f\rho^2\nabla^jfR^{ki}C_{jki},
\end{eqnarray*}
where we used that $\textnormal{curl}(fE)=0$, i.e., $$\nabla^k\rho\nabla^if=\nabla^i\rho\nabla^kf.$$
On the other hand, from Lemma \ref{ld1} we have
\begin{eqnarray*}
\nabla^jfR^{ki}C_{jki}&=&\dfrac{1}{2}C_{jki}(\nabla^jfR^{ki}-\nabla^kfR^{ji})\\
&=&\dfrac{1}{2Q}C_{jki}V^{jki}\\
&=&\dfrac{1}{2Q}f|C|^2.
\end{eqnarray*}
\end{proof}
Now, we are able to demonstrate our main results.
\begin{theorem}[Theorem \ref{teoharmonico'}]
Let $(M^3,\,g,\,f,\,E)$ be a compact electrostatic system such that the electric field and the gradient of the lapse function are linearly dependent. Suppose that the Bach tensor is divergence-free and $Q$ has defined sign everywhere. Then $(M^3,\,g)$ is locally conformally flat.
\end{theorem}
\begin{proof}
Since $M$ is compact {without boundary}, taking $\phi(f)=f^4$ in Corollary \ref{corolario2} we obtain
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2f^4&=&\int_Mf^3\nabla^kf\nabla^i\nabla^jC_{jki}\\
&=&\frac{1}{4}\int_M\nabla^if^4\nabla^k\nabla^jC_{jki}\\
&=&-\frac{1}{4}\int_Mf^4\nabla^i\nabla^k\nabla^jC_{jki}.
\end{eqnarray*}
Since $\textnormal{div}^2B = 0$ and, by definition, $\textnormal{div}^3C = \textnormal{div}^2B$, the right-hand side is identically zero, i.e.,
\begin{eqnarray*}
\int_M\dfrac{1}{Q}|C|^2f^4=0.
\end{eqnarray*}
Since $f>0$ over $M$ and $Q$ has a definite sign everywhere (equivalently, $f^2\rho^2\neq1$), the integrand always has the same sign; therefore $C$ must be identically zero, and the result holds.
\end{proof}
\begin{theorem}[Theorem \ref{proper'}]
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system such that the electric field and the gradient of the lapse function are linearly dependent. Suppose that the Bach tensor is divergence-free and $Q$ has defined sign everywhere. If $f$ is a proper function, then $(M^3,\,g)$ is locally conformally flat.
\end{theorem}
\begin{proof}
Let $s>0$ be a fixed real number, and take $\chi\in C^3$ a real non-negative function defined by $\chi=1$ in {$[0,s]$}, $\chi'\leq0$ in $[s,2s]$ and $\chi=0$ in $[2s,+\infty)$. Since $f$ is a proper function, we have that $\phi(f)=f^4\chi(f)$ has compact support in $M$ for $s>0$. From Corollary \ref{corolario2}, we get
\begin{eqnarray*}
-\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2f^4\chi(f)&=&\int_Mf^3\chi(f)\nabla^kf\nabla^i\nabla^jC_{jki}\\&=&\frac{1}{4}\int_M\chi(f)\nabla^if^4\nabla^k\nabla^jC_{jki}\\
&=&-\frac{1}{4}\int_M\chi(f)f^4\nabla^i\nabla^k\nabla^jC_{jki}\\
&+&\frac{1}{4}\int_M\dot{\chi}(f)f^4\nabla^if\nabla^k\nabla^jC_{jki}.
\end{eqnarray*}
In the last equality we used integration by parts. Now, since $\textnormal{div}^2 B=0$ and taking $\phi(f)=f^5\dot{\chi}(f)$ in Corollary \ref{corolario2} one more time, we obtain
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2f^4\chi(f)&=&-\frac{1}{8}\int_M\dfrac{1}{Q}|C|^2f^5\dot{\chi}(f),
\end{eqnarray*}
i.e.,
\begin{equation*}
\begin{aligned}
\int_M\dfrac{1}{Q}f^4|C|^2[\chi(f)+\frac{1}{4}f\dot{\chi}(f)]=0.
\end{aligned}
\end{equation*}
Let $M_s=\{x\in M;\, f(x)\leq s\}$. Then, by definition, $\chi(f)+\frac{1}{4}f\dot{\chi}(f)=1$ on the compact set $M_s$. Thus, on $M_s$,
$$\int_{M_s}\dfrac{1}{Q}f^4|C|^2=0.$$
Therefore, since $Q$ has defined sign everywhere and $f$ is positive, $C=0$ in $M_s$. Taking $s\rightarrow+\infty$, we obtain that $C=0$ on $M$.
\end{proof}
\section{The Warped Product Structure}\label{warped}
In this section we will prove the warped product structure of a locally conformally flat electrostatic system following the ideas of \cite{cao, sun}. Consequently, the proof of Theorem \ref{fiber007-3} is given.
We consider an orthonormal frame $\{e_{1}, e_{2}, e_{3}\}$ diagonalizing the Ricci tensor $\textnormal{Ric}$ at a regular point $p\in\Sigma=f^{-1}(c)$, with associated eigenvalues $R_{kk}$, $k=1,\,2,\, 3,$ respectively. That is, $R_{ij}(p)=R_{ii}\delta_{ij}(p)$.
Now, from Theorem \ref{teoharmonico'} and Theorem \ref{proper'} we can infer that $V_{ijk}=0$ (since $(M,\,g)$ is locally conformally flat). Then, from Lemma \ref{ld1}, for all $i\neq j$ we get
\begin{eqnarray}\label{Vautovalor}
0=V_{ijj}&=&f\rho|\nabla f|^2\nabla_i\rho +(f^2\rho^2-1)(2R_{jj} + R_{ii})\nabla_if\nonumber\\
&&+\left(R-\dfrac{1}{2}Rf^2\rho^2-f^2\rho^2\Lambda\right)\nabla_if.
\end{eqnarray}
Without loss of generality, consider $\nabla_{i}f\neq0$ and $\nabla_{j}f=0$ for all $j\neq i$. Observe that $\textnormal{Ric}(\nabla f)=R_{ii}\nabla f$, i.e., $\nabla f$ is an eigenvector of $\textnormal{Ric}$. From \eqref{Vautovalor}, we obtain that $R_{ii}$ and $R_{jj}, \ j\neq i,$ have multiplicity $1$ and $2$, respectively. In fact,
\begin{eqnarray*}
-f\rho|\nabla f|^2\frac{\nabla_1\rho}{\nabla_1f} -\left(R-\dfrac{1}{2}Rf^2\rho^2-f^2\rho^2\Lambda\right)-(f^2\rho^2-1)R_{11}=2(f^2\rho^2-1)R_{jj},
\end{eqnarray*}
for $j=2,\,3.$
Moreover, suppose that $\nabla_{i}f\neq0$ for at least two distinct directions. Assume $\nabla_1f\neq0$, $\nabla_2f\neq0$ and $\nabla_3f=0$. So, for instance, we have
\begin{eqnarray*}
-f\rho|\nabla f|^2\frac{\nabla_1\rho}{\nabla_1f} -\left(R-\dfrac{1}{2}Rf^2\rho^2-f^2\rho^2\Lambda\right)-(f^2\rho^2-1)R_{11}=2(f^2\rho^2-1)R_{33}
\end{eqnarray*}
and
\begin{eqnarray*}
-f\rho|\nabla f|^2\frac{\nabla_2\rho}{\nabla_2f} -\left(R-\dfrac{1}{2}Rf^2\rho^2-f^2\rho^2\Lambda\right)-(f^2\rho^2-1)R_{22}=2(f^2\rho^2-1)R_{33}.
\end{eqnarray*}
Then, using that $\textnormal{curl}(fE)=0$, i.e., $$\nabla^k\rho\nabla^if=\nabla^i\rho\nabla^kf,$$ we can conclude that $$\frac{\nabla_1\rho}{\nabla_1f}=\frac{\nabla_2\rho}{\nabla_2f}.$$ Thus, $R_{11}=R_{22}.$ Analogously, if $\nabla_if\neq0$ for all $i\in\{1,\,2,\,3\}$, then $R_{11}=R_{22}=R_{33}.$ So, we can conclude that $\textnormal{Ric}$ has at most two distinct eigenvalues $\lambda$ and $\mu$, with one of them having multiplicity $2$.
Therefore, in any case we have that $\nabla f$ is an eigenvector for $\textnormal{Ric}$. From the above discussion we can take $\{e_{1}=\frac{\nabla f}{|\nabla f|},e_{2},\,e_{3}\}$ as an orthonormal frame for $\Sigma$ diagonalizing the Ricci tensor $\textnormal{Ric}$ for the metric $g$.
Now, from \eqref{combinado} we have
\begin{eqnarray*}
\nabla_a|\nabla f|^2=2f\left(R_{al}\nabla^l f+2\rho^2|\nabla f|^2\nabla_a f-\frac{R}{2}\nabla_a f\right);\quad a\in\{2,\,3\}.
\end{eqnarray*}
Hence, since $R_{a1}=0$ and $\nabla_af=0$ for $a\in\{2,\,3\}$, $|\nabla f|$ is constant on $\Sigma$. Thus, we can express the metric $g$ locally in the form
\begin{eqnarray*}
g_{ij} = \frac{1}{|\nabla f|^{2}}df^{2} + g_{ab}(f,\theta)d\theta_{a}d\theta_{b},
\end{eqnarray*}
where $g_{ab}(f, \theta)d\theta_{a}d\theta_{b}$ is the induced metric and $(\theta_{2},\,\theta_{3})$ is any local coordinate system on $\Sigma$. We can find a good overview of the level set structure in \cite{cao,benedito}.
Observe that there is no open subset $\Omega$ of $M^{3}$ where $\{\nabla f=0\}$ is dense. In fact, if $f$ is constant in $\Omega$ and $M^{3}$ is complete, we have that $f$ is analytic, which implies that $f$ is constant everywhere. Thus, we consider $\Sigma$ a connected component of the level surface $f^{-1}(c)$ (possibly disconnected), where $c$ is any regular value of the function $f$. Suppose that $I$ is an open interval containing $c$ such that $f$ has no critical points in the open neighborhood $U_{I}=f^{-1}(I)$ of $\Sigma$. For the sake of simplicity, let $U_{I}\subset M\backslash\{f=0\}$ be a connected component of $f^{-1}(I)$. Then, we can make a change of variables
\begin{eqnarray*}
r(x)=\int\frac{df}{|\nabla f|}
\end{eqnarray*}
such that the metric $g$ in $U_{I}$ can be expressed by
\begin{eqnarray*}
g_{ij}=dr^{2}+g_{ab}(r,\theta)d\theta_{a}d\theta_{b}.
\end{eqnarray*}
Let $\nabla r=\frac{\partial}{\partial r}$, then $|\nabla r|=1$ and $\nabla f=f'(r)\frac{\partial}{\partial r}$ on $U_{I}$. Note that $f^{\prime}(r)$ does not change
sign on $U_{I}$. Thus, we may assume $I = (-\varepsilon,\,\varepsilon)$
with $f'(r)>0$ for $r\in I$. Moreover, we have $\nabla_{\partial r}\partial r=0.$
Then the second fundamental form on $\Sigma$ is given by
\begin{eqnarray}\label{eq555}
h_{ab}&=& - \langle e_{1},\,\nabla_{a}e_{b}\rangle=\frac{\nabla_{a}\nabla_{b}f}{|\nabla f|}\nonumber\\
&=&\frac{1}{|\nabla f|}\left(f R_{ab}-\frac{Rf}{2}g_{ab}\right)=\frac{f}{|\nabla f|}\left(\mu-\frac{R}{2}\right)g_{ab}=\frac{H}{2}g_{ab},
\end{eqnarray}
where $H=H(r)$, since $H$ is constant in $\Sigma$. In fact, contracting the Codazzi equation
\begin{eqnarray*}
R_{1cab}=\nabla_{a}h_{bc}-\nabla_{b}h_{ac}
\end{eqnarray*}
over $c$ and $b$, it gives
\begin{eqnarray*}
R_{1a}=\nabla_{a}(H)-\frac{1}{2}\nabla_{a}(H)=\frac{1}{2}\nabla_{a}(H).
\end{eqnarray*}
On the other hand, since $R_{1a}=0,$ we conclude that $H$ is constant in $\Sigma$.
For what follows, we fix a local coordinate system
$$(x_{1},\, x_2,\, x_{3}) = (r,\,\theta_2,\, \theta_{3}) $$
in $U_{I}$, where $(\theta_{2},\theta_{3})$ is any local coordinate system on the level surface $\Sigma_{c}$. Considering that $a, b, c,\cdots\in\{2 , 3\}$, we have
\begin{eqnarray*}
h_{ab}=-g(e_1,\, \nabla_{a}\partial_{b})=-g(e_1, \Gamma^{1}_{ab}\partial_{r})=\frac{-1}{|\nabla f|}\Gamma^{1}_{ab}.
\end{eqnarray*}
Now, by definition
\begin{eqnarray*}
\Gamma^{1}_{ab}=\frac{1}{2}g^{11}\left(-\frac{\partial}{\partial r}g_{ab}\right)=\frac{1}{2}|\nabla f|\frac{\partial}{\partial r}g_{ab}.
\end{eqnarray*}
Then,
\begin{eqnarray*}
\frac{\partial}{\partial r}g_{ab}= - H(r)g_{ab}
\end{eqnarray*}
implies that
\begin{eqnarray*}
g_{ab}(r,\theta)=\varphi(r)^{2}g_{ab}(r_{0},\theta),
\end{eqnarray*}
where $\varphi(r)=e^{\left(-\int^{r}_{r_{0}}H(s)ds\right)}$ and the level set $\{r=r_{0}\}$ corresponds to the connected component $\Sigma$ of $f^{-1}(c)$.
Now, we can apply the warped product structure (see \cite{besse}). Hence, we consider $$(M^{3},\,g)=(I,\,dr^{2})\times_{\varphi}(N^{2},\,\overline{g}),$$ where $g=dr^2+\varphi^2\overline{g}.$
The Ricci tensor of $(M^3,\,g)$ is
\begin{eqnarray}\label{wps}
R_{11}=-2\frac{\varphi''}{\varphi},\qquad R_{1a}=0
\end{eqnarray}
and
\begin{eqnarray*}
R_{ab}=\overline{R}_{ab}-\left[(\varphi')^2+\varphi\varphi''\right]\bar{g}_{ab}\qquad (a,\,b\in\{2,\,3\}).
\end{eqnarray*}
Since $\overline{R}_{ab}=\frac{\overline{R}}{2}\overline{g}_{ab}$,
\begin{eqnarray*}
R_{ab}=\left[\frac{\overline{R}}{2}-(\varphi')^2-\varphi\varphi''\right]\overline{g}_{ab}.
\end{eqnarray*}
On the other hand, {since
\begin{eqnarray*}
R=\varphi^{-2}\overline{R}-2\left(\frac{\varphi'}{\varphi}\right)^2-4\frac{\varphi''}{\varphi},
\end{eqnarray*}
we get}
\begin{eqnarray*}\label{Rbar=R}
\overline{R} = \varphi^2R+2(\varphi')^2+4\varphi\varphi''.
\end{eqnarray*}
\iffalse
\begin{eqnarray*}\label{Rbar=R}
\overline{R} = \varphi^2R+2\left(\frac{\varphi'}{\varphi}\right)^2+4\frac{\varphi''}{\varphi}.
\end{eqnarray*}
\fi
Since $R=2(\rho^2|\nabla f|^2+\Lambda)$ we get
\begin{eqnarray}\label{barR}
\overline{R} = 2\varphi^2\rho^2(f')^2 + 2(\varphi')^2 + 4\varphi\varphi'' + 2\varphi^2\Lambda.
\end{eqnarray}
Moreover, from \eqref{combinado} we know that
$$\frac{1}{|\nabla f|^2}\langle\nabla|\nabla f|^2,\,\nabla f\rangle=2f\left(R_{11}+2\rho^2|\nabla f|^2-\frac{R}{2}\right).$$
That is, from \eqref{rrr} and \eqref{wps} we get
\begin{eqnarray*}
\langle\nabla|\nabla f|^2,\,\nabla f\rangle=2f(f')^2\left[\rho^2(f')^2-2\frac{\varphi''}{\varphi} - \Lambda\right].
\end{eqnarray*}
Hence, using that $\nabla f = f'\partial_r$ we obtain
\begin{eqnarray*}
2(f')^2f''=2f(f')^2\left[\rho^2(f')^2-2\frac{\varphi''}{\varphi} - \Lambda\right].
\end{eqnarray*}
So,
\begin{eqnarray*}
\rho^2=\frac{1}{(f')^2}\left[\frac{f''}{f}+2\frac{\varphi''}{\varphi}+\Lambda\right].
\end{eqnarray*}
Combining the above identity with \eqref{barR} we can conclude that $\overline{R}$ does not depend on $\theta$. Therefore, $\overline{R}$ is a constant.
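Explicitly, this combination reads
\begin{equation*}
\overline{R}=2\varphi^{2}\frac{f''}{f}+2(\varphi')^{2}+8\varphi\varphi''+4\varphi^{2}\Lambda,
\end{equation*}
which depends on $r$ alone; since $\overline{R}$ is the scalar curvature of the fiber $(N^{2},\,\overline{g})$, it must indeed be constant.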
\iffalse
Moreover, we know that $\textnormal{div}(E)=0$ implies that $\langle\nabla\rho,\,\nabla f\rangle+\rho\Delta f=0$. So, {from \eqref{s1},} $f'\rho'+\rho(\rho^2(f')^2-\Lambda)f=0.$ Thus, we have
\begin{eqnarray*}
\rho\left(f''+2\frac{\varphi''}{\varphi}f\right)+\rho'f'=0.
\end{eqnarray*}
\fi
\iffalse
A straightforward computation proves (see \cite[Equation 2.1]{dominguez2018introduction}) that the mean curvature of $\Sigma$ also is given by
\begin{eqnarray*}
H = |\nabla f|^{-1}\left(\Delta f - \frac{1}{2|\nabla f|^2}\langle\nabla|\nabla f|^2,\,\nabla f\rangle\right).
\end{eqnarray*}
Therefore,
\begin{eqnarray}\label{H1}
H= \frac{1}{f'}\left[(\rho^2(f')^2-\Lambda)f- f''\right].
\end{eqnarray}
\fi
Furthermore, from \eqref{eq555} we have
\begin{eqnarray*}
\frac{1}{2}|\nabla f|Hg_{ab}=\nabla_a\nabla_bf=f\left(R_{ab}+2\rho^2\nabla_af\nabla_bf-\frac{R}{2}g_{ab}\right).
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\left(\frac{1}{2}|\nabla f|H+\frac{Rf}{2}\right)\varphi^2\overline{g}_{ab}=fR_{ab}.
\end{eqnarray*}
On the other hand, \begin{eqnarray*}
fR_{ab}=f\left[\frac{\overline{R}}{2}-(\varphi')^2-\varphi\varphi''\right]\overline{g}_{ab}.
\end{eqnarray*}
Then,
\begin{eqnarray*}
f\left[\frac{1}{2}\varphi^2R + \varphi\varphi''\right]=\left(\frac{1}{2}f'H+\frac{Rf}{2}\right)\varphi^2,
\end{eqnarray*}
i.e.,
\begin{eqnarray*}
H=2\frac{f}{f'}\frac{\varphi''}{\varphi}.
\end{eqnarray*}
Since $\varphi=e^{-\int^r_{r_0} H(s)ds}$, we conclude
\begin{eqnarray*}
\varphi' + 2\frac{f}{f'}\varphi''=0\quad\Rightarrow\quad\varphi'(r)=c_1{f(r)}^{-1/2},
\end{eqnarray*}
where $c_1\in\mathbb{R}.$
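Spelling out the last step: the ODE $\varphi'+2\frac{f}{f'}\varphi''=0$ separates as $\frac{\varphi''}{\varphi'}=-\frac{f'}{2f}$, so that $\ln\varphi'=-\frac{1}{2}\ln f+\mathrm{const}$. Integrating once more gives
\begin{equation*}
\varphi(r)=c_{1}\int\frac{dr}{\sqrt{f(r)}}+c_{2},
\end{equation*}
which is precisely the warping function in Theorem \ref{fiber007-3}.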
\iffalse
Let us use the Gauss-Codazzi equations, i.e., $R=\overline{R}+2R_{11} + |h|^2-H^2$, and \eqref{eq555} to obtain
\begin{eqnarray*}
4\frac{\varphi''}{\varphi}=\overline{R}-R-\frac{H^2}{2}.
\end{eqnarray*}
Thus, from \eqref{Rbar=R} we get
\begin{eqnarray*}
\overline{R} = \varphi^2R+2\left(\frac{\varphi'}{\varphi}\right)^2+4\frac{\varphi''}{\varphi}.
\end{eqnarray*}
\fi
\iffalse
A particular case of assumption of the electric field and the gradient of the lapse function being linearly dependent on $M$ is to assume that there exists a smooth function depending on $f$, i.e., $\rho=\rho(f)$ such that $E=\rho(f)\nabla f$ on $M$, then $E^{\flat_i}=\rho(f)\nabla_if$. Thus, from Lemma \ref{ld1} we can rewrite the $V$-tensor \eqref{tt} as follows.
\begin{lemma}\label{ld}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system where $E=\rho(f)\nabla f$. Then the $V$-tenor is given by
\begin{eqnarray*}
V_{ijk}&=&P(R_{il}\nabla^lfg_{jk}-R_{jl}\nabla^lfg_{ik})+Q(\nabla_ifR_{jk}-\nabla_jfR_{ik})\\
&&+U(\nabla_ifg_{jk}-\nabla_jfg_{ik}),\nonumber
\end{eqnarray*}
where $$P=(f^2\rho(f)^2-1),\,\quad Q=2(f^2\rho(f)^2-1)$$ and
$$U=R-\dfrac{1}{2}Rf^2\rho(f)^2-f^2\rho(f)^2\Lambda+f\dot{\rho}(f)\rho(f)|\nabla f|^2.$$
\end{lemma}
\begin{proof}
Since there exists a smooth function depending on $f$, $\rho=\rho(f)$, such that $E=\rho(f)\nabla f$, hence $\nabla\rho(f)=\dot{\rho}(f)\nabla f$. Thus, by replacing it in Lemma \ref{ld1}, we obtain the result.
\iffalse replacing it in \eqref{tt}, we get
\begin{eqnarray}\label{substituir}
V_{ijk}&=&(1-f^2\rho^2)R(\nabla_ifg_{jk}-\nabla_jfg_{ik})-(R_{il}\nabla^lfg_{jk}-R_{jl}\nabla^lfg_{ik})\nonumber\\
&&-2(1-f^2\rho^2)(\nabla_ifR_{jk}-\nabla_jfR_{ik})+\dfrac{f}{4}(\nabla_iRg_{jk}-\nabla_jRg_{ik}).
\end{eqnarray}
On other hand, tanking derivative of \eqref{rrr},
\begin{eqnarray*}
\nabla_iR&=&2\nabla_i|E|^2\\
&=&4\rho\dot{\rho}\nabla_if|\nabla f|^2+2\rho^2\nabla_i|\nabla f|^2.
\end{eqnarray*}
From \eqref{combinado} we know that
$$\nabla_i|\nabla f|^2=2f\left(R_{il}\nabla^lf+2\rho(f)^2|\nabla f|^2\nabla_if-\dfrac{R}{2}\nabla_if\right).$$
Replacing it in the above equation and using \eqref{rrr}, we have
\begin{eqnarray*}
\nabla_iR&=&2\left(2\rho(f)\dot{\rho}(f)|\nabla f|^2+4f\rho(f)^4|\nabla f|^2-f\rho(f)R\right)\nabla_if+4f\rho^2R_{il}\nabla^lf.
\end{eqnarray*}
Finally, replacing in \eqref{substituir}, the result holds.\fi
\end{proof}
Moreover, from Theorem \ref{teo3.3'} we obtain the following corollary
\begin{corollary}\label{corolario2}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system where $E=\rho(f)\nabla f$. For every $\phi:\mathbb{R}\rightarrow\mathbb{R}$, $C^2$
function with $\phi(f)$ having compact support $K\subseteq M$ we have
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2\phi(f)=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki},
\end{eqnarray*}
where $Q=2(1-f^2\rho(f)^2)$.
\end{corollary}
\begin{proof}
Taking in accounting that $E=\rho(f)\nabla f$ in the Theorem \ref{teo3.3'}, since the Cotton tensor is skew-symmetric we obtain that
\begin{eqnarray*}
\frac{1}{4}\int_M\phi(f)|C|^2=\int_M\frac{\phi(f)}{f}\nabla^kf\nabla^i\nabla^jC_{jki}+\int_M\phi(f)\rho(f)^2\nabla^jf\nabla^k\nabla^ifC_{jki}.
\end{eqnarray*}
Performing integration we have
\begin{eqnarray*}
\int_M\phi(f)\rho(f)^2\nabla^jf\nabla^k\nabla^ifC_{jki}&=&-\int_M\phi(f)\rho(f)^2\nabla^i\nabla^jf\nabla^kfC_{jki}\\
&=&\int_Mf\phi(f)\rho(f)^2(R^{ij}+2\rho(f)^2\nabla^if\nabla^jf)\nabla^kfC_{kji}\\
&=&\int_Mf\phi(f)\rho(f)^2R^{ij}\nabla^kfC_{kji},
\end{eqnarray*}
where we used \eqref{s1}. From \eqref{ricci} and Lemma \ref{ld}
\begin{eqnarray*}
\int_M\phi(f)\rho(f)^2\nabla^jf\nabla^k\nabla^ifC_{jki}&=&-\dfrac{1}{4}\int_M\phi(f)|C|^2\dfrac{f^2\rho(f)^2}{Q},
\end{eqnarray*}
therefore, the result holds.
\end{proof}
From now, we are able to demonstrate our main results.
\begin{theorem}
Let $(M^3,\,g,\,f,\,E)$ be a compact (without boundary) electrostatic system such that $E=\rho(f)\nabla f$ where $f^2\rho(f)^2\neq1$ everywhere, with third-order divergence-free Cotton tensor, i.e., $\textnormal{div}^3C = 0$. Then, the Cotton tensor is identically zero, i.e., $(M^3,\,g)$ is locally conformally flat.
\end{theorem}
\begin{proof}
Since $M$ is compact without boundary, we assume $\phi(f)=f^4$, then from Corollary \ref{corolario2}, we obtain
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2f^4&=&\int_Mf^3\nabla^kf\nabla^i\nabla^jC_{jki}\\
&=&\frac{1}{4}\int_M\nabla^if^4\nabla^k\nabla^jC_{jki}\\
&=&-\frac{1}{4}\int_Mf^4\nabla^i\nabla^k\nabla^jC_{jki},
\end{eqnarray*}
Since $\textnormal{div}^3C = 0$, the right-hand side is identically zero, i.e.,
\begin{eqnarray*}
\int_M\dfrac{1}{Q}|C|^2f^4=0,
\end{eqnarray*}
Since $f>0$ over $M$ and $f^2\rho(f)^2\neq1$ everywhere, the integral has the same sign, therefore $C$ must be identically zero, thus the result holds.
\end{proof}
\begin{theorem}
Let $(M^3,\,g,\,f,\,E)$ be an electrostatic system such that $E=\rho(f)\nabla f$ where $f^2\rho(f)^2\neq1$ everywhere, with third-order divergence-free Cotton tensor, i.e., $\textnormal{div}^3C = 0$. If $f$ is a proper function, then the Cotton tensor is identically zero, i.e., $(M^3,\,g)$ is locally conformally flat.
\end{theorem}
\begin{proof}
Let $s>0$ be a real number fixed, and so we take $\chi\in C^3$ a real non-negative function defined by $\chi=1$ in {$[0,s]$}, $\chi'\leq0$ in $[s,2s]$ and $\chi=0$ in $[2s,+\infty]$. Since $f$ is a proper function, we have that $\phi(f)=f^4\chi(f)$ has compact support in $M$ for $s>0$. From Corollary \ref{corolario2}, we get
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2f^4\chi(f)&=&\int_Mf^3\chi(f)\nabla^kf\nabla^i\nabla^jC_{jki}\\&=&\frac{1}{4}\int_M\chi(f)\nabla^if^4\nabla^k\nabla^jC_{jki}\\
&=&-\frac{1}{4}\int_M\chi(f)f^4\nabla^i\nabla^k\nabla^jC_{jki}\\
&+&\frac{1}{4}\int_M\dot{\chi}(f)f^4\nabla^if\nabla^k\nabla^jC_{jki}.
\end{eqnarray*}
In the last equality we used integration by parts. Now, since $\textnormal{div}^3 C=0$, taking $\phi(f)=f^5\dot{\chi}(f)$ in Corollary \ref{corolario2} one more time we obtain
\begin{eqnarray*}
\frac{1}{2}\int_M\dfrac{1}{Q}|C|^2f^4\chi(f)&=&-\frac{1}{8}\int_M\dfrac{1}{Q}|C|^2f^5\dot{\chi}(f),
\end{eqnarray*}
i.e.,
\begin{equation*}
\begin{aligned}
\int_M\dfrac{1}{Q}f^4|C|^2[\chi(f)+\frac{1}{4}f\dot{\chi}(f)]=0.
\end{aligned}
\end{equation*}
Let be $M_s=\{x\in M; f(x)\leq s\}$. Thus, by definition, $\chi(f)+\frac{1}{4}f\dot{\chi}(f)=1$ on the compact set $M_s$. Thus, on $M_s$,
$$\int_{M_s}\dfrac{1}{Q}f^4|C|^2=0.$$
Therefore, $C=0$ in $M_s$ since $f>0$ over $M_s$ and $f^2\rho(f)^2\neq1$ everywhere. Taking $s\rightarrow+\infty$, we obtain that $C=0$ on $M$.
\end{proof}
\fi
\end{document}
\begin{document}
\date{\today}
\title{The dynamical Ising-Kac model in $3D$ converges to $\Phi^4_3$}
\author{P.~Grazieschi$^1$, K.~Matetski$^2$ and H.~Weber$^3$}
\institute{University of Bath, \email{[email protected]} \and Michigan State University, \email{[email protected]} \and University of M\"{u}nster, \email{[email protected]}}
\mathrm{d}ate{\today}
\titleindent=0.65cm
\maketitle
\begin{abstract}
We consider the Glauber dynamics of a ferromagnetic Ising-Kac model on a three-dimensional periodic lattice of size $(2 N + 1)^3$, in which the flipping rate of each spin depends on an average field in a large neighbourhood of radius $\gamma^{-1} \ll N$. We study the random fluctuations of a suitably rescaled coarse-grained spin field as $N \to \infty$ and $\gamma \to 0$; we show that near the mean-field value of the critical temperature, the process converges in distribution to the solution of the dynamical $\Phi^4_3$ model on a torus. Our result settles a conjecture of Giacomin, Lebowitz and Presutti \cite{PresuttiGiacomin}.
The dynamical $\Phi^4_3$ model is given by a non-linear stochastic partial differential equation (SPDE) which is driven by an additive space-time white noise and which requires renormalisation of the non-linearity. A rigorous notion of solution for this SPDE and its renormalisation is provided by the framework of regularity structures \cite{Regularity}.
As in the two-dimensional case \cite{IsingKac}, the renormalisation corresponds to a small shift of the inverse temperature of the discrete system away from its mean-field value.
\end{abstract}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}
We consider the Glauber dynamics of the three-dimensional Ising-Kac model on the discrete torus $\Z^3 \slash (2N + 1) \Z^3$. The spins take values $+1$ and $-1$ and flip randomly, where the flipping rate at a site $k$ depends on an average field in a large neighbourhood of radius $\gamma^{-1} \ll N$ around $k$.
We study the random fluctuations of a suitably rescaled coarse-grained spin field $X_\gamma$ as $N \to \infty$ and $\gamma \to 0$. We prove that there is a choice of the inverse temperature such that if the initial states converge in a suitable topology, then $X_\gamma$ converges in distribution to the solution of the dynamical $\Phi^4_3$ model, which is formally given by the SPDE
\begin{equation}\label{eq:equation-intro}
(\partial_t - \Delta) X = - \frac{1}{3} X^3 + A X + \sqrt 2\, \xi, \qquad \qquad x \in \mathbb{T}^3,
\end{equation}
where $\xi$ denotes a Gaussian space-time white noise.
The Ising-Kac model was introduced in the 60s to recover rigorously the van der Waals theory of phase transition \cite{Kac}. Various scaling regimes for the Glauber dynamics were studied in the nineties \cite{MR1275526,MR1374000,MR1373999,MR2460018}, and in particular it was conjectured that in $1$, $2$ and $3$ dimensions, and in a very specific scaling, non-linear fluctuations described by \eqref{eq:equation-intro} can be observed \cite{PresuttiGiacomin}. For $d=4$ equation \eqref{eq:equation-intro} is not expected to have a non-trivial meaning \cite{MR4276286}, and this is reflected in the dimension-dependent scaling relation \eqref{constant:isingkac_variable_dimension} below, which can be satisfied in dimensions $d=1,2,3$ but not for $d=4$.
The one-dimensional convergence result was proved three decades ago in \cite{MR1317994, MR1358083}. The two-dimensional case was settled much more recently \cite{IsingKac}. In this article we treat the three-dimensional case, thereby completely settling the conjecture from \cite{PresuttiGiacomin}.
The main difference between the one-dimensional case $d=1$ and the cases $d=2$ and $d=3$ lies in the increased irregularity of solutions to \eqref{eq:equation-intro} in higher dimensions. In fact, for $d=1$ solutions are continuous functions and a solution theory is classical (see e.g. \cite{MR3236753}). For $d= 2,3$ solutions are Schwartz distributions and \eqref{eq:equation-intro} has to be renormalised by adding an infinite counter-term. Formally, the equation becomes
\begin{equation*}
(\partial_t - \Delta) X = - \frac{1}{3} \bigl(X^3 - 3\, \infty \times X\bigr) + A X + \sqrt 2\, \xi.
\end{equation*}
For $d=2$ this renormalisation procedure was implemented rigorously in the influential paper by Da Prato-Debussche \cite{MR2016604} (see also \cite{MR3693966} for a solution theory on the full space $\R_t \times \R^2_x$).
Consequently, the convergence proof for Ising-Kac for $d=2$ consists of adapting their solution method to a discrete approximation (already found in \cite{PresuttiGiacomin}).
A key technical step was to show that, up to well-controlled error terms, the renormalisation of products of martingales is similar to the Wick renormalisation of Gaussian processes. Moreover, the renormalisation of the non-linearity in the discrete equation corresponds to a small shift (of order $\gamma^2 \log \gamma$ in the notation of that work) of the inverse temperature from the critical value of the mean-field model (in fact this shift had already been suggested in \cite{MR1467623}).
The solution theory for \eqref{eq:equation-intro} for $d=3$ is much more involved than in the $d=2$ case and was understood only much more recently. Short-time solution theories were contained in the groundbreaking theories of regularity structures \cite{Regularity} and paracontrolled distributions \cite{MR3406823,MR3846835}, and a solution theory is by now completely developed \cite{MR3846835,from-infinity,MR3951704,MR4164267}; see Section~\ref{sec:phi4Section} for a brief review. In particular, it is known that the renormalisation procedure is more complex --- beyond the leading order ``Wick'' renormalisation an additional logarithmic divergence (the ``sunset diagram'') appears.
In this article we develop an analysis for the discrete approximation to \eqref{eq:equation-intro} provided in \cite{PresuttiGiacomin,IsingKac}, based on the theory of regularity structures. More specifically, we rely on the discretisation framework for regularity structures developed in \cite{erhard2017discretisation,HairerMatetski}, which of course has to be adapted to the situation at hand. A key part of this analysis is the construction and derivation of bounds for a suitable discrete \emph{model}. Following \cite{IsingKac}, the discrete analogue of Hairer's \emph{model} is defined based on a linearised version of the discrete equation. The elements of this model can be represented as iterated stochastic integrals with respect to a jump martingale. Our companion article \cite{Martingales} develops a systematic theory of these integrals which provides the necessary bounds. We encounter the same ``divergencies'' as in the continuum, and as in the two-dimensional case, these correspond to small shifts (of order $\gamma^3$ and of order $\gamma^6 \log \gamma^{-1}$) of the temperature. Additionally, we encounter an order $1$ shift (corresponding to a shift of order $\gamma^6$ of the temperature) in the analysis of the approximate Wick constant. This term comes from the analysis of the predictable quadratic variation of the discrete martingales and does not have a counterpart in the continuous theory.
\subsection{Structure of the article}
In Section~\ref{sec:IsingKac} we define the dynamical Ising-Kac model and state in Theorem~\ref{thm:main} our main convergence result. We recall the solution theory of the dynamical $\Phi^4_3$ model \eqref{eq:equation-intro} in Section~\ref{sec:phi4Section}. In Section~\ref{sec:discreteRegStruct} we construct a regularity structure for the discrete equation describing the Ising-Kac model. Furthermore, we give the definitions of discrete models and modelled distributions on this regularity structure, which are required to solve the equation. A particular discrete renormalised model is constructed in Section~\ref{sec:lift}. Section~\ref{sec:martingales} contains some properties of the driving martingales and bounds on auxiliary processes, which allow us to prove moment bounds for the discrete models in Section~\ref{sec:convergence-of-models}. In Section~\ref{sec:discrete-solution} we write and solve the discrete equation on the regularity structure. Theorem~\ref{thm:main} is proved in Section~\ref{sec:ConvFinal}. Appendix~\ref{sec:kernels} contains some properties of the discrete kernels used throughout the paper.
\subsection{Notation}
\label{sec:notation}
We use $\N$ for the set of natural numbers $1, 2, \ldots$, and we set $\N_0 := \N \cup \{0\}$. The set of non-negative real numbers is denoted by $\R_+ := [0, \infty)$. We typically use the Euclidean distance $|x|$ for points $x \in \R^d$, but sometimes we need the distances $|x|_{1} = |x_1| + \cdots + |x_d|$ and $|x|_{\infty} = \max \{|x_1|, \ldots, |x_d|\}$. We denote by $B(x, r)$ the open ball in $\R^3$ of all points $y$ such that $|y - x| < r$.
For integer $n \geq 0$, we denote by $\mathcal{C}C^n_0$ the set of compactly supported $\mathcal{C}C^n$ functions $\mathtt{var}phi : \R^3 \to \R$. The set $\mathcal{C}B^n$ contains all functions $\mathtt{var}phi \in \mathcal{C}C^n_0$, which are supported on $B(0, 1)$, and which satisfy $\| \mathtt{var}phi \|_{\mathcal{C}C^n} \leq 1$. For a function $\mathtt{var}phi \in \mathcal{C}B^n$, for $x \in \R^3$ and for $\lambdambda \in (0,1]$, we define its rescaled and recentered version
\betagin{equation}\lambdabel{eq:rescaled-function}
\mathtt{var}phi_x^\lambdambda(y) := \mathfrak{r}ac{1}{\lambdambda^{3}} \mathtt{var}phi \Bigl(\mathfrak{r}ac{y-x}{\lambdambda}\Bigr).
\end{equation}
We define the three-dimensional torus $\mathbb{T}^3$ identified with $[-1, 1]^3$, and the space $\mathscr{D}'(\mathbb{T}^3)$ of distributions on $\mathbb{T}^3$. Respectively, we denote by $\mathscr{D}'(\R^d)$ the space of distributions on $\R^d$. When working with distribution-valued stochastic processes, we use the Skorokhod space $\mathcal{C}D \bigl(\R_+, \mathscr{D}'(\mathbb{T}^3)\bigr)$ of c\`{a}dl\`{a}g functions \cite{Billingsley}.
For $\eta < 0$ we define the Besov space $\mathcal{C}C^\eta(\mathbb{T}^3)$ as a completion of smooth functions $f: \mathbb{T}^3 \to \R$, under the seminorm
\betagin{equation}\lambdabel{eq:Besov}
\| f \|_{\mathcal{C}C^\eta} := \mathfrak{s}up_{\mathtt{var}phi \in \mathcal{C}B^r} \mathfrak{s}up_{x \in \R^3} \mathfrak{s}up_{\lambdambda \in (0,1]} \lambdambda^{- \eta} |f (\mathtt{var}phi_x^\lambdambda)| < \infty,
\end{equation}
for $r$ being the smallest integer such that $r > -\eta$, where we extended $f$ periodically to $\R^3$, and where we write $f (\mathtt{var}phi_x^\lambdambda) = \lambdangle f, \mathtt{var}phi_x^\lambdambda \rangle$ for the duality pairing. Then the Dirac delta $\mathrm{d}eltalta$ is an element of the space $\mathcal{C}C^{-3}(\mathbb{T}^3)$.
It is important to define these spaces as completions of smooth functions, because this makes the spaces separable and allows us to use various probabilistic results.
For $\mathtt{var}epsilon > 0$ we define the grid $\mathtt{L}ambda_{\mathtt{var}epsilon} := \mathtt{var}epsilon \Z^3$ of the mesh size $\mathtt{var}epsilon$.\lambdabel{lab:Lambda} Then it is convenient to map a function $f : \mathtt{L}ambda_{\mathtt{var}epsilon} \to \R$ to a distribution as
\betagin{equation}\lambdabel{eq:iota}
(\iota_\mathtt{var}epsilon f)(\mathtt{var}phi) := \mathtt{var}epsilon^{3} \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} f(x) \mathtt{var}phi(x),
\end{equation}
for any continuous and compactly supported function $\mathtt{var}phi$.
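\begin{remark}
The pairing \eqref{eq:iota} of a lattice function against a rescaled test function \eqref{eq:rescaled-function} can be sketched numerically as follows; the bump function, the truncation of the sum and all parameter values below are our own illustrative choices and are not part of the construction above.
\begin{verbatim}
import numpy as np

def bump(y):
    # Smooth bump supported in the unit ball B(0,1); not normalised.
    r2 = np.sum(y**2, axis=-1)
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

def rescaled(phi, x, lam):
    # y -> lam^{-3} phi((y - x)/lam), cf. (eq:rescaled-function).
    return lambda y: phi((y - x) / lam) / lam**3

def iota_pairing(eps, f, test, half_width):
    # (iota_eps f)(test) = eps^3 sum_{x in eps Z^3} f(x) test(x),
    # cf. (eq:iota); the sum is truncated to [-half_width, half_width]^3,
    # which suffices when the test function is supported there.
    grid = np.arange(-half_width, half_width + eps / 2, eps)
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1)
    return eps**3 * np.sum(f(pts) * test(pts))

# Pairing the constant function 1 with a rescaled bump approximates the
# integral of the bump, independently of the centre x and the scale lam.
eps, lam, x0 = 0.01, 0.2, np.array([0.1, 0.0, -0.05])
print(iota_pairing(eps, lambda p: np.ones(p.shape[:-1]),
                   rescaled(bump, x0, lam), half_width=0.5))
\end{verbatim}
\end{remark}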
When working on the time-space domain $\R^4$, we use the \emph{parabolic scaling} $\mathfrak{s} := (2, 1, 1, 1)$, where the first coordinate corresponds to the time variable and the other three correspond to the space variables. Then for any point $(t, x_1, x_2, x_3) \in \R^4$, we introduce the parabolic distance from the origin $\mathcal{V}ert (t, x) \mathcal{V}ert_\mathfrak{s} := | t |^{\mathfrak{r}ac12} + |x_1| + |x_2| + |x_3|$. For a multiindex $k = (k_0, k_1, k_2, k_3) \in \N_0^4$ we define $|k|_\mathfrak{s} := 2 k_0 + k_1 + k_2 + k_3$. \lambdabel{lab:norms-s}
We frequently use the notation $a \lesssim b$, which means that $a \leq C b$ for a constant $C \geq 0$ independent of the relevant quantities (such quantities are always clear from the context). In the case $a \lesssim b$ and $b \lesssim a$ we simply write $a \approx b$. For a vanishing sequence of values $\mathfrak{e}$, the notation $a_\mathfrak{e} \mathfrak{s}igmam \mathfrak{e}^{-1}$ means that $\lim_{\mathfrak{e} \to 0} \mathfrak{e} a_\mathfrak{e}$ exists and is finite.
We write $\mathcal{C}L(V, W)$ for the space of linear bounded operators from $V$ to $W$.
\subsection*{Acknowledgements}
K.~Matetski was partially supported by NSF grant DMS-1953859. P. Grazieschi was supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/L015684/1.
\section{The dynamical Ising-Kac model}
\label{sec:IsingKac}
The Ising-Kac model is a mean-field model with long range potential, which was introduced to recover rigorously the van der Waals theory of phase transition \cite{Kac}. We are interested in the three-dimensional model on a periodic domain. To define the model, let us take $N \in \N$ and let $\T^3_N := \Z^3 \mathfrak{s}lash (2N + 1) \Z^3$ be the three-dimensional discrete torus, i.e. a discrete periodic grid with $2N+1$ points per side. It will be convenient to identify $\T^3_N$ with the set $\{-N, -N + 1, \ldots, 0, \ldots, N\}^3$ and allow points to be multiplied by real numbers in such a way that $r \cdot x = rx \, (\text{mod} \, N)$, for any $x \in \T^3_N$ and $r \in \R$, where the $\text{mod}$ operator is taken on each component of $x$. Each site of the grid $k \in \T^3_N$ has an assigned spin value $\mathfrak{s}igmagma(k) \in \{-1, +1\}$. The set of all spin configurations is $\Sigma_N := \{ -1, +1 \}^{\T^3_N}$ and we write $\mathfrak{s}igmagma = \bigl(\mathfrak{s}igmagma(k): k \in \T^3_N\bigr)$ for an element of $\Sigma_N$.
Let us fix a constant $r_\mathfrak{s}tar > 0$. The range of the interaction is represented by a real number $\gamma \in (0, \gamma_\mathfrak{s}tar)$, for some $\gamma_\mathfrak{s}tar < r^{-1/3}_\mathfrak{s}tar$, and by a smooth, compactly supported, rotation invariant function $\mathfrak{K}: \R^3 \to [0,1]$, supported in the ball $B(0, r_\mathfrak{s}tar)$. (A high regularity of this function is required in the proof of Lemma~\ref{lem:Kg}.) We impose that $\mathfrak{K}(0) = 0$ and
\betagin{equation}\lambdabel{eq:K-moments}
\int_{\R^3} \mathfrak{K}(x)\, \mathrm{d} x = 1, \qquad\qquad \int_{\R^3} \mathfrak{K}(x) |x|^2\, \mathrm{d} x = 6,
\end{equation}
where $|x|$ is the Euclidean norm. Then we define the function $\mathfrak{K}_\gamma: \T^3_N \to [0, \infty)$ as
\betagin{equation}\lambdabel{eq:K-gamma}
\mathfrak{K}_\gamma(k) = \mathtt{var}kappa_{\gamma, 1} \gamma^3 \mathfrak{K}(\gamma k)
\end{equation}
for $k \in \T^3_N$. The constant $\mathtt{var}kappa_{\gamma, 1}$ is given by $\mathtt{var}kappa_{\gamma, 1}^{-1}:= \mathfrak{s}um_{k \in \T^3_N} \gamma^3 \mathfrak{K}(\gamma k)$, and it guarantees that $\mathfrak{s}um_{k \in \T^3_N} \mathfrak{K}_\gamma(k) = 1$. Our assumption $\gamma < \gamma_\mathfrak{s}tar$ makes sure that the radius of interaction $r_\mathfrak{s}tar \gamma^{-1}$ does not exceed the size of the domain $N \approx \gamma^{-4}$ (the precise definition of $N$ is given in \eqref{eq:scalings}). In the rest of this paper, we always consider $\gamma < \gamma_\mathfrak{s}tar$.
The \emph{locally averaged (coarse-grained) field} $h_\gamma: \Sigma_N \times \T^3_N \to \R$ is defined as
\[ h_\gamma(\mathfrak{s}igmagma, k) := \mathfrak{s}um_{j \in \T^3_N} \mathfrak{K}_\gamma (k - j) \mathfrak{s}igmagma(j). \]
Here and in what follows we consider the difference $k - j$ on the torus. The \emph{Hamiltonian} of the system is the function $\mathscr{H}_\gamma: \Sigma_N \to \R$ given by
\betagin{equation}\lambdabel{eq:Hamiltonian}
\mathscr{H}_\gamma(\mathfrak{s}igmagma) := - \mathfrak{r}ac{1}{2} \mathfrak{s}um_{j, k \in \T^3_N} \mathfrak{K}_\gamma (k - j) \mathfrak{s}igmagma(j) \mathfrak{s}igmagma(k) = - \mathfrak{r}ac{1}{2} \mathfrak{s}um_{k \in \T^3_N} \mathfrak{s}igmagma(k) h_\gamma (\mathfrak{s}igmagma, k).
\end{equation}
In other words, two spins $\mathfrak{s}igmagma(j)$ and $\mathfrak{s}igmagma(k)$ interact if they are located at a distance bounded by $r_\mathfrak{s}tar \gamma^{-1}$, where $r_\mathfrak{s}tar$ is the radius of the support of $\mathfrak{K}$.
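\begin{remark}
The following Python sketch shows how the kernel \eqref{eq:K-gamma}, the locally averaged field $h_\gamma$ and the Hamiltonian \eqref{eq:Hamiltonian} can be computed on the discrete torus, with the periodic convolution evaluated via the FFT. The concrete bump standing in for $\mathfrak{K}$, as well as the values of $N$ and $\gamma$, are illustrative choices only; in particular, the moment conditions \eqref{eq:K-moments} are not enforced here.
\begin{verbatim}
import numpy as np

N, gamma = 20, 0.2                 # illustrative; in the paper N ~ gamma^{-4}
side = 2 * N + 1
c = np.arange(-N, N + 1)           # torus coordinates {-N, ..., N}
KX, KY, KZ = np.meshgrid(c, c, c, indexing="ij")
r = gamma * np.sqrt(KX**2 + KY**2 + KZ**2)

# Smooth bump supported in B(0, r_star) with r_star = 1, forced to vanish
# at the origin, so that fK(0) = 0 (no self-interaction).
fK = np.where(r < 1.0, np.exp(-1.0 / np.maximum(1.0 - r**2, 1e-12)), 0.0)
fK[N, N, N] = 0.0
Kgamma = gamma**3 * fK
Kgamma /= Kgamma.sum()             # kappa_{gamma,1}: sum_k K_gamma(k) = 1

rng = np.random.default_rng(0)
sigma = rng.choice([-1.0, 1.0], size=(side, side, side))  # spin configuration

# h_gamma(sigma, k) = sum_j K_gamma(k - j) sigma(j): periodic convolution.
# ifftshift moves the origin of the kernel to index 0, as the FFT expects.
h = np.real(np.fft.ifftn(np.fft.fftn(np.fft.ifftshift(Kgamma))
                         * np.fft.fftn(sigma)))

# Hamiltonian (eq:Hamiltonian): H = -(1/2) sum_k sigma(k) h_gamma(sigma, k).
H = -0.5 * np.sum(sigma * h)
print(H)
\end{verbatim}
\end{remark}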
For a fixed \emph{inverse temperature} $\betata > 0$, the \emph{Gibbs measure} $\lambda_{\ga}$ is the probability measure on $\Sigma_N$
\[
\lambda_{\ga} (\mathfrak{s}igmagma) = \mathfrak{r}ac{1}{\mathscr{Z}_\gamma} \mathrm{ex}p \big( - \betata \mathscr{H}_\gamma (\mathfrak{s}igmagma) \big) , \qquad \text{for} \quad \mathfrak{s}igmagma \in \Sigma_N,
\]
with normalization constant $\mathscr{Z}_\gamma := \mathfrak{s}um_{\mathfrak{s}igmagma \in \Sigma_N} \mathrm{ex}p \big( - \betata \mathscr{H}_\gamma (\mathfrak{s}igmagma) \big)$. Since we consider the Ising-Kac model in a finite volume, the sum is finite and $\mathscr{Z}_\gamma$ is always well-defined.
We are interested in the \emph{Glauber dynamics} of the Ising-Kac model, in which the spins evolve in time as a Markov process on a filtered probability space $\bigl(\Omega, \mathbb{P}, \mathscr{F}, (\mathscr{F}_t)_{t \geq 0}\bigr)$ with the infinitesimal generator
\betagin{equation}\lambdabel{eq:generator}
\mathscr{L}_\gamma f (\mathfrak{s}igmagma) := \mathfrak{s}um_{j \in \T^3_N} c_{\gamma} (\mathfrak{s}igmagma, j) \big( f(\mathfrak{s}igmagma^j) - f(\mathfrak{s}igmagma) \big),
\end{equation}
acting on functions $f: \Sigma_N \to \R$. The configuration $\mathfrak{s}igmagma^j$ is obtained from $\mathfrak{s}igmagma$ by flipping the spin at the site $j$, i.e. for any $k \in \T^3_N$
\[
\mathfrak{s}igmagma^j(k) := \left\{ \betagin{aligned} & \mathfrak{s}igmagma(k) &&\text{if }~ k \neq j, \\ &-\mathfrak{s}igmagma(k) &&\text{if }~ k = j. \end{aligned} \right.
\]
The flipping rates $c_\gamma$ are chosen such that the Gibbs measure $\lambda_{\ga}$ is reversible for the dynamics. For any $\mathfrak{s}igmagma \in \Sigma_N$ and for any $j \in \T^3_N$, we set
\betagin{equation}\lambdabel{eq:rates}
c_\gamma(\mathfrak{s}igmagma, j) := \mathfrak{r}ac{ \lambda_{\ga} (\mathfrak{s}igmagma^j) }{ \lambda_{\ga} (\mathfrak{s}igmagma) + \lambda_{\ga} (\mathfrak{s}igmagma^j) } = \mathfrak{r}ac{1}{2} \Bigl( 1 - \mathfrak{s}igmagma(j) \tanh \big( \beta h_\gamma(\mathfrak{s}igmagma, j) \big) \Bigr).
\end{equation}
One can readily check that the \emph{detailed balance condition} is satisfied (see Proposition~5.3 in \cite{Liggett} and the discussion above it)
\betagin{equation*}
c_\gamma(\mathfrak{s}igmagma^j, j) \lambda_{\ga} (\mathfrak{s}igmagma^j) = c_\gamma(\mathfrak{s}igmagma, j) \lambda_{\ga} (\mathfrak{s}igmagma),
\end{equation*}
for each $j \in \T^3_N$, which implies that indeed the Gibbs measure $\lambda_{\ga}$ is reversible. Given a time variable $t \geq 0$, we denote by $\mathfrak{s}igmagma(t) = \bigl(\mathfrak{s}igmagma(t, k): k \in \T^3_N\bigr)$ the pure jump Markov process with jump rates $c_\gamma$.
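\begin{remark}
One way to simulate the pure jump Markov process generated by \eqref{eq:generator} is the standard kinetic Monte Carlo (Gillespie) step sketched below: the waiting time until the next flip is exponential with parameter equal to the total rate, and the flipped site is drawn proportionally to the individual rates \eqref{eq:rates}. This is a minimal illustration written by us; the kernel, the parameters and the data structures are not taken from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, beta = 8, 1.0                               # illustrative values
side = 2 * N + 1

# Any normalised kernel with fK(0) = 0 will do for this sketch; a flat
# average over a small cube stands in for K_gamma of (eq:K-gamma).
Kgamma = np.zeros((side, side, side))
Kgamma[N-2:N+3, N-2:N+3, N-2:N+3] = 1.0
Kgamma[N, N, N] = 0.0
Kgamma /= Kgamma.sum()
Khat = np.fft.fftn(np.fft.ifftshift(Kgamma))

sigma = rng.choice([-1.0, 1.0], size=(side, side, side))

def local_field(sigma):
    # h_gamma(sigma, .) = K_gamma * sigma (periodic convolution).
    return np.real(np.fft.ifftn(Khat * np.fft.fftn(sigma)))

def glauber_step(sigma, t):
    # One jump of the Glauber dynamics with flip rates (eq:rates).
    rates = 0.5 * (1.0 - sigma * np.tanh(beta * local_field(sigma)))
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # exponential waiting time
    flat = rng.choice(rates.size, p=(rates / total).ravel())
    j = np.unravel_index(flat, rates.shape)
    sigma = sigma.copy()
    sigma[j] *= -1.0                           # flip the selected spin
    return sigma, t

t = 0.0
for _ in range(5):
    sigma, t = glauber_step(sigma, t)
print(t, sigma.mean())
\end{verbatim}
\end{remark}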
We can use properties of the infinitesimal generator (see \cite[App.~1.1.5]{KipnisLandim}) to write
\betagin{equation}\lambdabel{eq:equation-for-sigma}
\mathfrak{s}igmagma(t, k) = \mathfrak{s}igmagma(0, k) + \int_0^t \mathscr{L}_\gamma \mathfrak{s}igmagma(s, k)\, \mathrm{d} s + \mathfrak{m}_\gamma (t, k),
\end{equation}
where $\mathfrak{s}igmagma(0) \in \Sigma_N$ is a fixed initial configuration of spins at time $0$, the generator is applied to the function $f (\mathfrak{s}igmagma) = \mathfrak{s}igmagma(k)$ and $t \mapsto \mathfrak{m}_\gamma(t, k)$ is a family of c\`{a}dl\`{a}g martingales with jumps of size $2$ (because each spin changes values from $+1$ to $-1$ or vice versa). Moreover, the predictable quadratic covariations of these martingales are given by the \emph{carr\'{e} du champ} operator \cite[App.~B]{Mourrat} and may be written as
\betagin{equation}\lambdabel{eq:m-bracket}
\big\lambdangle \mathfrak{m}_\gamma(\bigcdot, k), \mathfrak{m}_\gamma(\bigcdot, k') \big\rangle_t = 4 \mathrm{d}eltalta_{k,k'} \int_0^t c_\gamma ( \mathfrak{s}igmagma(s), k )\, \mathrm{d} s,
\end{equation}
for all $k, k' \in \T^3_N$, where $\mathrm{d}eltalta_{k,k'}$ is the Kronecker delta, i.e. $\mathrm{d}eltalta_{k,k'} = 1$ if $k = k'$ and $\mathrm{d}eltalta_{k,k'} = 0$ otherwise. We recall that the predictable quadratic covariation in \eqref{eq:m-bracket} is the unique increasing process, vanishing at $t = 0$ and such that $t \; \mapsto \; \mathfrak{m}_\gamma(t, k) \mathfrak{m}_\gamma(t, k') - \big\lambdangle \mathfrak{m}_\gamma(\bigcdot, k), \mathfrak{m}_\gamma(\bigcdot, k') \big\rangle_t$ is a martingale. The definitions and properties of the bracket processes for c\`{a}dl\`{a}g martingales can be found in \cite{JS03}.
We denote the dynamical version of the averaged field by
\betagin{equation*}
h_\gamma(t,k) := h_\gamma(\mathfrak{s}igmagma(t), k).
\end{equation*}
\betagin{remark}
As we stated above, we always consider $N \gg \gamma^{-1}$, which together with the property $\mathfrak{K}_\gamma(0) = 0$ means that there is no self-interaction of spins. In contrast to the setting of \cite{IsingKac}, we have to avoid self-interaction by postulating $\mathfrak{K}(0) = 0$. The reason for this assumption can be seen in the proof of Lemma~\ref{lem:Kg}, where the function $K_\gamma$ is required to be differentiable. The weaker bounds in \cite[Lem.~8.2]{IsingKac} in the two-dimensional setting allow this function to have a discontinuity at the origin.
\end{remark}
\subsection{Convergence of a rescaled model}
\label{sec:convergence-statement}
Our main interest lies in understanding the behaviour of a rescaled version of the dynamical Ising-Kac model. For $\varepsilon = 2 / (2 N + 1)$ we introduce the rescaled lattice \[ \T_{\varepsilon}^3 := \big\{ \varepsilon k : k \in \T^3_N \big\}. \] In particular, $\T_{\varepsilon}^3$ is a subset of the three-dimensional torus $\mathbb{T}^3$. In what follows, we use the convolution on the lattice, defined for two functions $f, g : \T_{\varepsilon}^3 \to \R$ as
\betagin{equation}\lambdabel{eq:discrete-convolution}
\bigl(f *_\eps g\bigr) (x) := \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} f(x - y) g(y).
\end{equation}
For any function $g: \T_{\mathtt{var}epsilon}^3 \to \R$, we use the standard definition for the discrete Fourier transform
\betagin{equation}\lambdabel{eq:Fourier}
\widehat{g}(\omega) := \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} g(x) e^{-\pi i \omega \cdot x} \qquad \text{for} \quad \omega \in \{ -N, \ldots, N \}^3.
\end{equation}
We fix two positive real constants $\mathrm{d}eltalta > 0$ and $\alphapha > 0$ and define the family of rescaled martingales
\betagin{equation}\lambdabel{eq:martingales}
\mathfrak{M}_\gamma ( t, x ) := \mathfrak{r}ac{1}{\mathrm{d}eltalta} \mathfrak{m}_\gamma \Bigl( \mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon} \Bigr) \qquad \text{for}~~ x \in \T_{\mathtt{var}epsilon}^3,~ t \geq 0.
\end{equation}
and also
\betagin{equation}\lambdabel{eq:kernel-K}
K_\gamma(x) := \mathfrak{r}ac{1}{\mathtt{var}epsilon^{3}} \mathfrak{K}_\gamma \left( \mathfrak{r}ac{x}{\mathtt{var}epsilon} \right).
\end{equation}
Then from \eqref{eq:equation-for-sigma} we can conclude that the rescaled process
\betagin{equation}\lambdabel{eq:X-gamma}
X_\gamma(t, x) := \mathfrak{r}ac{1}{\mathrm{d}eltalta} h_\gamma \Bigl( \mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon} \Bigr) \qquad \text{for}~~ x \in \T_{\mathtt{var}epsilon}^3,~ t \geq 0,
\end{equation}
solves the following equation (see \cite{IsingKac} for the derivation of an analogous equation in the two-dimensional case)
\betagin{align}
X_\gamma(t, x) &= X_\gamma^0 (x) + \big( K_\gamma *_\eps \mathfrak{M}_\gamma \big) (t, x) \lambdabel{eq:equation-for-X}\\
&\qquad + \int_0^{t} \biggl( \mathfrak{r}ac{{\mathtt{var}epsilon}^2}{\gamma^2 \alphapha} \widetilde{\mathbf{D}elta}_\gamma X_\gamma + \mathfrak{r}ac{\beta-1}{\alphapha} K_\gamma *_\eps X_\gamma - \mathfrak{r}ac{\beta^3 \mathrm{d}eltalta^2}{3 \alphapha} K_\gamma *_\eps X_\gamma^3 + E_\gamma \biggr) (s, x)\, \mathrm{d} s, \nonumber
\end{align}
where $X_\gamma^0 (x) = X_\gamma(0, x)$ is a rescaled initial configuration. The linear part of this equation is given by the discrete operator
\betagin{equation}\lambdabel{eq:Laplacian-gamma}
\widetilde{\mathbf{D}elta}_\gamma f (x) := \mathfrak{r}ac{\gamma^2}{{\mathtt{var}epsilon}^2} \big( K_\gamma *_\eps f - f \big)(x),
\end{equation}
and the ``error term'' $E_\gamma$ is given by
\betagin{equation}\lambdabel{eq:expr_error_term}
E_\gamma (t, x) := \mathfrak{r}ac{1}{\mathrm{d}eltalta \alphapha} \biggl( \tanh \bigl( \beta \mathrm{d}eltalta X_\gamma \bigr) - \beta \mathrm{d}eltalta X_\gamma + \mathfrak{r}ac{1}{3} \bigl(\beta \mathrm{d}eltalta X_\gamma\bigr)^3 \biggr)(t,x).
\end{equation}
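\begin{remark}
The error term \eqref{eq:expr_error_term} is the remainder of $\tanh(\beta \delta X_\gamma)$ after its linear and cubic Taylor terms have been removed, divided by $\delta \alpha$; for a small argument $u = \beta \delta X_\gamma$ it behaves like $\frac{2}{15} u^5 / (\delta\alpha)$. The short Python check below verifies this leading behaviour numerically; the values of $\gamma$, $\beta$ and of the field are illustrative only.
\begin{verbatim}
import numpy as np

gamma = 0.5                                   # illustrative value
alpha, delta = gamma**6, gamma**3             # gamma-dependence fixed later
beta = 1.0                                    # order-one inverse temperature

def error_term(X):
    # E_gamma of (eq:expr_error_term), up to the time-space arguments.
    u = beta * delta * X
    return (np.tanh(u) - u + u**3 / 3.0) / (delta * alpha)

X = 2.0
u = beta * delta * X
print(error_term(X), (2.0 / 15.0) * u**5 / (delta * alpha))
# The two printed numbers agree to leading order, since
# tanh(u) = u - u^3/3 + 2 u^5/15 + O(u^7).
\end{verbatim}
\end{remark}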
As we commented after \eqref{eq:K-gamma}, for all $\gamma$ sufficiently small the function $K_\gamma(x)$ is supported on $(-1, 1)^3$, and its convolutions with periodic processes in \eqref{eq:equation-for-X} make sense.
We are going to take the limit such that all the scaling parameters in \eqref{eq:X-gamma} tend to zero. In order to prevent explosion of the multiplier $(\beta-1) / \alphapha$ in \eqref{eq:equation-for-X}, we need to consider the inverse temperature of the form
\betagin{equation}\lambdabel{eq:beta}
\beta = 1 + \alphapha \big( \mathfrak{C}_\gamma + A \big),
\end{equation}
where $A$ is a fixed constant (its value does not play any significant role and produces a linear term in the limiting equation \eqref{eq:Phi43}) and where $\mathfrak{C}_\gamma$ is a suitably chosen renormalisation constant, which diverges as $\gamma \to 0$ such that $\gamma^{-1} \ll \alpha^{-1}$. In other words, we consider the model near the critical mean-field value of the inverse temperature $\beta_c = 1$, and as we will see later, $\mathfrak{C}_\gamma$ plays the role of a renormalisation constant, which is required in order to obtain a non-trivial limit of the non-linearity $X_\gamma^3$ in \eqref{eq:equation-for-X}. The shift of the critical inverse temperature was observed in \cite{MR1467623}, and in the three-dimensional case it has a significantly more complicated structure than in two dimensions \cite{IsingKac} (see Theorem~\ref{thm:main}).
From \eqref{eq:rates} and \eqref{eq:m-bracket} we conclude that the predictable quadratic covariations of the martingales \eqref{eq:martingales} are
\betagin{equation}\lambdabel{eq:M-bracket}
\big\lambdangle \mathfrak{M}_\gamma(\bigcdot, x), \mathfrak{M}_\gamma(\bigcdot, x') \big\rangle_t = \mathfrak{r}ac{2 \mathtt{var}epsilon^3}{\mathrm{d}eltalta^2 \alphapha} \mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{x,x'} \int_0^{t} \Bigl( 1 - \mathfrak{s}igmagma \Bigl(\mathfrak{r}ac{s}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon}\Bigr) \tanh \bigl( \beta \mathrm{d}eltalta X_\gamma(s, x) \bigr) \Bigr) \mathrm{d} s,
\end{equation}
for any $x, x' \in \T_{\mathtt{var}epsilon}^3$, where $\mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{x,x'} := \mathtt{var}epsilon^{-3} \mathrm{d}eltalta_{x,x'}$ is an approximation of the Dirac's delta.
We would like to have convergence of the operators $\widetilde{\mathbf{D}elta}_\gamma$ to the Laplacian, and of the quadratic covariations for the martingales to those of a cylindrical Wiener process. We also want to have a non-trivial nonlinearity in the limit (given by the cubic term), which translates into the relations between the scaling parameters \[ 1 \approx \mathfrak{r}ac{{\mathtt{var}epsilon}^2}{\gamma^2 \alphapha} \approx \mathfrak{r}ac{\mathrm{d}eltalta^2}{\alphapha} \approx \mathfrak{r}ac{{\mathtt{var}epsilon}^3}{\mathrm{d}eltalta^2 \alphapha}. \]
In the rest of this article, we therefore fix them to be $\gamma$-dependent as
\begin{equation}\label{eq:scalings}
N = \lfloor \gamma^{-4} \rfloor, \qquad \varepsilon = \frac{2}{2N + 1}, \qquad \alpha = \gamma^6, \qquad \delta = \gamma^3.
\end{equation}
This implies $\mathtt{var}epsilon \approx \gamma^4$, and such choice of $\mathtt{var}epsilon$ (rather than $\mathtt{var}epsilon = \gamma^4$) makes the use of the discrete Fourier transform \eqref{eq:Fourier} more convenient. Moreover, we define:
\betagin{equation}\lambdabel{constant:isingkac_one}
\mathtt{var}kappa_{\gamma, 2} := \mathfrak{r}ac{\mathtt{var}epsilon^3}{\mathrm{d}eltalta^2 \alphapha} \approx 1,
\end{equation}
which we will use in the rest of the paper, remembering that it converges to $1$.
The scaling \eqref{eq:scalings} makes the radius of interaction for the rescaled process equal to $\mathfrak{e} := \varepsilon / \gamma \approx \gamma^3$. As such, the model has two scales: $\varepsilon \approx \gamma^4$ is the distance between points on the lattice, and $\mathfrak{e} \approx \gamma^3$ is the distance up to which the interaction between two spins is felt.
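\begin{remark}
The relations above are easy to tabulate; the following Python snippet (an illustration of ours, not part of the argument) computes the parameters of \eqref{eq:scalings}, the interaction scale $\mathfrak{e} = \varepsilon/\gamma$ and the three ratios that are required to stay of order one.
\begin{verbatim}
import numpy as np

def scales(gamma):
    N = int(np.floor(gamma ** -4))               # eq:scalings
    eps = 2.0 / (2 * N + 1)
    alpha, delta = gamma ** 6, gamma ** 3
    eps_bar = eps / gamma                        # interaction scale ~ gamma^3
    ratios = (eps ** 2 / (gamma ** 2 * alpha),   # coefficient of Delta_gamma
              delta ** 2 / alpha,                # coefficient of the cubic term
              eps ** 3 / (delta ** 2 * alpha))   # kappa_{gamma,2}
    return N, eps, alpha, delta, eps_bar, ratios

for gamma in (0.2, 0.1, 0.05):
    print(gamma, scales(gamma))
\end{verbatim}
\end{remark}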
\betagin{remark}
The dynamical Ising-Kac model can be defined for any spatial dimension $d \geq 1$, where the previous conditions on the quantities $\mathtt{var}epsilon$, $\mathrm{d}eltalta$ and $\alphapha$ become
\betagin{equation}\lambdabel{constant:isingkac_variable_dimension}
\mathtt{var}epsilon \approx \gamma^{\mathfrak{r}ac{4}{4-d}}, \qquad \alphapha \approx \gamma^{\mathfrak{r}ac{2d}{4-d}}, \qquad \mathrm{d}eltalta \approx \gamma^{\mathfrak{r}ac{d}{4-d}}.
\end{equation}
Observe that, in order to make these quantities vanish when $\gamma \to 0$, we need to impose $d < 4$. This condition coincides with the \emph{local sub-criticality} condition in the solution theory of the dynamical $\mathbb{P}hi^4_d$ model \cite{Regularity}.
\end{remark}
Our goal is to prove convergence of the rescaled processes \eqref{eq:X-gamma} to the solution of the $\Phi^4_3$ equation (\emph{the dynamical $\Phi^4_3$ model})
\begin{equation}\label{eq:Phi43}
(\partial_t - \Delta) X = - \frac{1}{3} X^3 + A X + \sqrt 2\, \xi, \qquad X(0, \bigcdot) = X^0(\bigcdot),
\end{equation}
on $\R_+ \times \mathbb{T}^3$, where $\xi$ is space-time white noise, and $A$ is the same as in \eqref{eq:beta}. The notion of solution for the singular stochastic PDE \eqref{eq:Phi43} was first provided in \cite{Regularity} using the \emph{theory of regularity structures}, and later in \cite{MR3846835} using \emph{paracontrolled distributions}.
We need to introduce the topology in which convergence of the initial states holds. Namely, for a function $f_\gamma : \T_{\mathtt{var}epsilon}^3 \to \R$, for $\eta < 0$ and for the smallest integer $r$ such that $r > - \eta$, we define the semi-norm
\begin{equation}\label{eq:eps-norm-1}
\| f_\gamma \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} := \sup_{\varphi \in \mathcal{B}^{r}} \sup_{x \in \Lambda_{\varepsilon}} \sup_{\lambda \in [\mathfrak{e}, 1]} \lambda^{-\eta} | (\iota_\varepsilon f_\gamma)(\varphi^\lambda_x) | + \sup_{\varphi \in \mathcal{B}^{r}} \sup_{x \in \Lambda_{\varepsilon}} \sup_{\lambda \in [\varepsilon, \mathfrak{e})} \mathfrak{e}^{-\eta} | (\iota_\varepsilon f_\gamma)(\varphi^\lambda_x) |,
\end{equation}
where the function $f_\gamma$ is extended periodically to $\Lambda_{\varepsilon}$, the set of test functions $\mathcal{B}^{r}$ is defined in Section~\ref{sec:notation}, and the map $\iota_\varepsilon$ is defined in \eqref{eq:iota}. This definition is similar to \eqref{eq:Besov}, where we ``measure'' regularity only above the scale $\mathfrak{e}$. On the smaller scale, we expect the function to be uniformly bounded by a constant multiple of $\mathfrak{e}^{-1}$. One can see that this semi-norm is finite for any function $f_\gamma$, but we will always be interested in the situation when it is bounded uniformly in $\gamma > 0$. If $\lambda < \mathfrak{e}$, then the support of $\varphi^\lambda_x$ contains only the point $x \in \Lambda_{\varepsilon}$, and we readily get
\betagin{equation}\lambdabel{eq:uniform-bound}
\mathfrak{s}up_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} |f_\gamma(x)| \leq \mathfrak{e}^{\eta} \| f_\gamma \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta}.
\end{equation}
To compare this function with a distribution $f \in \mathcal{C}C^{\eta}(\mathbb{T}^3)$, we also define
\betagin{align}\lambdabel{eq:eps-norm-2}
\| f_\gamma; f \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} &:= \mathfrak{s}up_{\mathtt{var}phi \in \mathcal{C}B^{r}} \mathfrak{s}up_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathfrak{s}up_{\lambdambda \in [\mathfrak{e}, 1]} \lambdambda^{-\eta} | (\iota_\mathtt{var}epsilon f_\gamma - f)(\mathtt{var}phi^\lambdambda_x) | \\
&\qquad +\mathfrak{s}up_{\mathtt{var}phi \in \mathcal{C}B^{r}} \mathfrak{s}up_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathfrak{s}up_{\lambdambda \in [\mathtt{var}epsilon, \mathfrak{e})} \mathfrak{e}^{-\eta} | (\iota_\mathtt{var}epsilon f_\gamma)(\mathtt{var}phi^\lambdambda_x) | + \mathfrak{s}up_{\mathtt{var}phi \in \mathcal{C}B^{r}} \mathfrak{s}up_{x \in \R^3} \mathfrak{s}up_{\lambdambda \in (0, \mathfrak{e})} \lambdambda^{-\eta} | f(\mathtt{var}phi^\lambdambda_x) |, \nonumber
\end{align}
where we extended $f_\gamma$ and $f$ periodically to $\mathtt{L}ambda_{\mathtt{var}epsilon}$ and $\R^3$ respectively. In other words, we compare the two functions on the scale above $\mathfrak{e}$, and use the simple control on the smaller scale.
The following is the main result of this article, which is proved in Section~\ref{sec:ConvFinal}. We refer to Section~\ref{sec:notation} for the definitions of the involved spaces.
\begin{theorem}\label{thm:main}
Assume that there exist values $-\frac{4}{7} < \eta < \bar \eta < -\frac{1}{2}$ and $\gamma_\star > 0$, and a distribution $X^0 \in \mathcal{C}^{\bar \eta}(\mathbb{T}^3)$
such that the rescaled initial state of the dynamical Ising-Kac model satisfies
\begin{equation}\label{eq:initial-convergence}
\sup_{\gamma \in (0, \gamma_\star)} \| X^{0}_\gamma\|^{(\mathfrak{e})}_{\mathcal{C}^{\bar \eta}} < \infty, \qquad \lim_{\gamma \to 0} \| X^{0}_\gamma; X^0 \|^{(\mathfrak{e})}_{\mathcal{C}^{\eta}} = 0.
\end{equation}
Then there is a choice of the constant $\mathfrak{C}_\gamma$ in \eqref{eq:beta} such that the processes $t \mapsto \iota_\varepsilon X_\gamma(t)$ converge in law as $\gamma \to 0$ to $t \mapsto X(t)$ with respect to the topology of the Skorokhod space $\mathcal{D} \bigl(\R_+, \mathscr{D}'(\mathbb{T}^3)\bigr)$, where $X$ is the solution of the $\Phi^4_3$ equation \eqref{eq:Phi43} with the initial state $X^0$ and with the constant $A$ from \eqref{eq:beta}.
Furthermore, let $\widehat{K}_\gamma$ be the discrete Fourier transform of the function $K_\gamma$ (since $K_\gamma$ is symmetric, $\widehat{K}_\gamma$ is real valued). Then for all $\gamma > 0$ small enough one has the expansion
\begin{equation}\label{eq:C-expansion}
\mathfrak{C}_\gamma = \mathfrak{c}_\gamma^{(2)} + \mathfrak{c}_\gamma^{(1)} + \mathfrak{c}_\gamma^{(0)},
\end{equation}
where the constants $\mathfrak{c}_\gamma^{(2)} \sim \mathfrak{e}^{-1}$ and $\mathfrak{c}_\gamma^{(1)} \sim \log \mathfrak{e}$ are given by
\begin{align}
\mathfrak{c}_\gamma^{(2)} &= \frac{\gamma^6}{8} \sum_{0 < |\omega|_{\infty} \leq N} \frac{|\widehat{K}_\gamma(\omega)|^2}{1 - \widehat{K}_\gamma(\omega)}, \label{eq:renorm-constants-main}\\
\mathfrak{c}_\gamma^{(1)} &= \frac{\gamma^{18}}{16} \sum_{0 < |\omega_1|_{\infty}, |\omega_2|_{\infty} \leq N} \frac{|\widehat{K}_\gamma(\omega_1)|^2 |\widehat{K}_\gamma(\omega_2)|^2}{(1 - \widehat{K}_\gamma(\omega_1)) (1 - \widehat{K}_\gamma(\omega_2))} \frac{\widehat{K}_\gamma(\omega_1 + \omega_2)}{1 - \widehat{K}_\gamma(\omega_1) - \widehat{K}_\gamma(\omega_2) + \widehat{K}_\gamma(\omega_1 + \omega_2)}, \nonumber
\end{align}
and the constant $\mathfrak{c}_\gamma^{(0)}$ has a finite limit as $\gamma \to 0$. All sums in \eqref{eq:renorm-constants-main} run over $\{-N, \ldots, N\}^3$ with the imposed restrictions, and the denominators of the terms in these sums are non-vanishing.
\end{theorem}
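\begin{remark}
The constant $\mathfrak{c}_\gamma^{(2)}$ in \eqref{eq:renorm-constants-main} can be evaluated numerically once a kernel $\mathfrak{K}$ is fixed. The Python sketch below does this for a particular bump (vanishing at the origin, supported in the unit ball, with the moment conditions \eqref{eq:K-moments} ignored) and for toy values of $\gamma$ far from the asymptotic regime; it is meant only to show how the formula is assembled, not to verify the stated asymptotics. The double sum defining $\mathfrak{c}_\gamma^{(1)}$ is not evaluated here.
\begin{verbatim}
import numpy as np

def c2_constant(gamma):
    N = int(np.floor(gamma ** -4))            # eq:scalings
    side = 2 * N + 1
    c = np.arange(-N, N + 1)
    KX, KY, KZ = np.meshgrid(c, c, c, indexing="ij")
    r = gamma * np.sqrt(KX ** 2 + KY ** 2 + KZ ** 2)
    fK = np.where(r < 1.0,
                  np.exp(-1.0 / np.maximum(1.0 - r ** 2, 1e-12)), 0.0)
    fK[N, N, N] = 0.0                         # fK(0) = 0
    fK /= fK.sum()                            # kappa_{gamma,1} normalisation
    # hat K_gamma(omega) = sum_k fK_gamma(k) exp(-2 pi i omega.k/(2N+1)):
    # a centred DFT, real-valued because fK is symmetric.
    Khat = np.real(np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(fK))))
    mask = np.ones_like(Khat, dtype=bool)
    mask[N, N, N] = False                     # exclude omega = 0
    return gamma ** 6 / 8.0 * np.sum(Khat[mask] ** 2 / (1.0 - Khat[mask]))

for gamma in (0.5, 0.45, 0.4):
    print(gamma, c2_constant(gamma))
\end{verbatim}
\end{remark}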
\betagin{remark}
One should note that the renormalisation constant $\mathfrak{C}_\gamma$ depends non-trivially on the covariations \eqref{eq:M-bracket} of the driving martingales. It can be seen from the proof of Theorem~\ref{thm:main} (more precisely, from the renormalisation of the lift \eqref{eq:cherry} and from the definition of the renormalisation constant \eqref{eq:C-exact} in the discrete equation). The constant $\mathfrak{c}_\gamma'$ in \eqref{eq:cherry} comes from renormalisation of the product in \eqref{eq:M-bracket}, and it is not observed in the two-dimensional case \cite{IsingKac}.
\end{remark}
\betagin{remark}
The precise value of the constant $\mathfrak{c}_\gamma^{(0)}$ may be obtained from \eqref{eq:C-exact}, which does not play a significant role and we omit it here.
\end{remark}
\betagin{remark}\lambdabel{rem:regularity-explanation}
It is natural to consider the initial states of regularity strictly smaller than $-\mathfrak{r}ac{1}{2}$, because this is the spatial regularity of the solution to \eqref{eq:Phi43} (see \cite{Regularity}). We make the assumption $\eta > -\mathfrak{r}ac{4}{7}$ on the regularity of the initial state. It follows from the definition of the model that $X_\gamma$ lives on the scale $\mathfrak{e} \approx \gamma^3$. This implies that, for any $\kappappa > 0$, we expect the following a priori bound
\betagin{equation*}
\| X_\gamma(t) \|_{L^\infty(\T_{\mathtt{var}epsilon}^3)} \lesssim \mathfrak{e}^{-\mathfrak{r}ac{1}{2} - \kappappa}
\end{equation*}
uniformly in $\gamma \in (0,\gamma_\mathfrak{s}tar)$. Hence, for $\kappappa < \mathfrak{r}ac{1}{14}$ we can use the Taylor expansion of order $5$ for the function $\tanh$ in \eqref{eq:expr_error_term}, with the error term bounded by a positive power of $\gamma$. This is the reason for our restriction $\eta = -\mathfrak{r}ac{1}{2} - \kappappa > -\mathfrak{r}ac{4}{7}$. Proving Theorem~\ref{thm:main} for lower regularity of the initial state requires some technicalities. More precisely, for $\eta < -\mathfrak{r}ac{4}{7}$ we need to have a bigger regularity structure, than the one defined in Section~\ref{sec:discreteRegStruct}, we need to control blow-ups of $X_{\gamma}$ at time $t = 0$, similarly to how it was done in \cite{MR3179667, Regularity}, and we may need to work in more complicated spaces (see \cite{Labbe} for continuous equations with irregular initial states).
\end{remark}
\subsection{A mild form of the equation}
In order to define the Green's function for the linear operator in \eqref{eq:equation-for-X}, it is convenient to use the discrete Fourier transform \eqref{eq:Fourier}. We start with recalling some of its basic properties. Every time when a sum runs over $\omega \in \{ -N, \ldots, N \}^3$, we will simply write $|\omega|_{\mathfrak{s}igmanfty} \leq N$. For the function as in \eqref{eq:Fourier}, the Fourier series is
\betagin{equation}\lambdabel{eq:Fourier-series}
g(x) = \mathfrak{r}ac{1}{8}\mathfrak{s}um_{|\omega|_{\mathfrak{s}igmanfty} \leq N} \widehat{g}(\omega) e^{\pi i \omega \cdot x}.
\end{equation}
Then, for two functions $f, g : \T_{\mathtt{var}epsilon}^3 \to \R$, Parseval's theorem reads
\betagin{equation}\lambdabel{eq:Parseval}
\mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} f(x) g(x) = \mathfrak{r}ac{1}{8} \mathfrak{s}um_{|\omega|_{\mathfrak{s}igmanfty} \leq N} \widehat{f}(\omega)\, \overline{\widehat{g}(\omega)},
\end{equation}
where $\overline{\widehat{g}(\omega)}$ is the complex conjugation of $\widehat{g}(\omega)$. Moreover, one has the identities
\betagin{equation}\lambdabel{eq:Fourier-of-product}
\widehat{f g}(\omega) = \mathfrak{r}ac{1}{8} \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} \widehat{f}(\omega - \omega')\, \widehat{g}(\omega'), \qquad\qquad \widehat{f *_\eps g}(\omega) = \widehat{f}(\omega)\, \widehat{g}(\omega),
\end{equation}
where $*_\eps$ is the convolution on $\T_{\mathtt{var}epsilon}^3$, defined in \eqref{eq:discrete-convolution}, and the subtraction $\omega - \omega'$ is performed on the torus $\{ -N, \ldots, N \}^3$. To have a lighter notation in the following formulas we will write $\mathscr{F}_{\!\!\mathtt{var}epsilon} f(\omega)$ for the discrete Fourier transform $\widehat{f}(\omega)$. One can readily see that $\mathscr{F}_{\!\!\mathtt{var}epsilon} f$ converges in a suitable sense as $\gamma \to 0$ to the continuous Fourier transform $\mathscr{F} f$ given by
\betagin{equation*}
\mathscr{F} f (\omega) = \int_{\R^3} f(x) e^{-\pi i \omega \cdot x} \qquad \text{for} \quad \omega \in \R^3.
\end{equation*}
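\begin{remark}
The normalisations above are easy to get wrong in computations, so we record a short Python check (ours, purely illustrative) of Parseval's identity \eqref{eq:Parseval} and of the convolution identity in \eqref{eq:Fourier-of-product} for the transform \eqref{eq:Fourier}, with the torus identified with $[-1,1]^3$ and $\varepsilon = 2/(2N+1)$.
\begin{verbatim}
import numpy as np

N = 4
side = 2 * N + 1
eps = 2.0 / side
k = np.arange(-N, N + 1)                      # lattice indices, x = eps * k
w = np.arange(-N, N + 1)                      # frequencies omega
# exp(-pi i w x) = exp(-2 pi i w k/(2N+1)); (eq:Fourier) factorises per axis.
phase = np.exp(-2j * np.pi * np.outer(w, k) / side)

def dft3(g):
    # hat g(omega) = eps^3 sum_x g(x) exp(-pi i omega.x), cf. (eq:Fourier).
    return eps**3 * np.einsum("ai,bj,ck,ijk->abc", phase, phase, phase, g)

def conv_eps(f, g):
    # (f *_eps g)(x) = eps^3 sum_y f(x-y) g(y), periodic convolution
    # (eq:discrete-convolution), computed with the FFT.
    return eps**3 * np.real(np.fft.ifftn(
        np.fft.fftn(np.fft.ifftshift(f)) * np.fft.fftn(g)))

rng = np.random.default_rng(0)
f = rng.standard_normal((side, side, side))
g = rng.standard_normal((side, side, side))

lhs = eps**3 * np.sum(f * g)                              # eq:Parseval, LHS
rhs = np.sum(dft3(f) * np.conj(dft3(g))).real / 8.0       # eq:Parseval, RHS
print(abs(lhs - rhs))                                     # ~ rounding error
print(np.max(np.abs(dft3(conv_eps(f, g)) - dft3(f) * dft3(g))))
\end{verbatim}
\end{remark}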
It will be convenient to include the factor $\mathtt{var}epsilon^2 / (\gamma^2 \alphapha)$ in \eqref{eq:equation-for-X} into the definition of the linear operator. For this, we write
\betagin{equation}\lambdabel{eq:c-gamma-2}
\mathtt{var}epsilon = \gamma^4 \mathtt{var}kappa_{\gamma, 3} \qquad \text{with} \quad |\mathtt{var}kappa_{\gamma, 3} - 1| < \gamma^4,
\end{equation}
and we define a new operator
\betagin{equation}\lambdabel{eq:Laplacian-discrete}
\mathbf{D}elta_\gamma := \mathtt{var}kappa_{\gamma, 3}^2 \widetilde{\mathbf{D}elta}_\gamma = \mathfrak{r}ac{{\mathtt{var}epsilon}^2}{\gamma^2 \alphapha} \widetilde{\mathbf{D}elta}_\gamma.
\end{equation}
One can see that $\mathbf{D}elta_\gamma$ approximates the continuous Laplace operator $\mathbf{D}elta$ as $\gamma \to 0$, when it is applied to a sufficiently regular function, and we can define the respective approximate heat kernel. More precisely, we define the function $P^\gamma_t : \T_{\mathtt{var}epsilon}^3 \to \R$ solving for $t > 0$ the ODEs
\betagin{equation}\lambdabel{eq:kernels-P}
\mathfrak{r}ac{\mathrm{d}}{\mathrm{d} t} P^\gamma_t = \mathbf{D}elta_\gamma P^\gamma_t,
\end{equation}
with the initial condition $P^\gamma_0(x) = \delta^{(\varepsilon)}_{x,0}$ (the latter is defined below \eqref{eq:M-bracket}). The kernel $P^\gamma$ is the Green's function of the linear operator which appears in equation \eqref{eq:equation-for-X}.
This function can alternatively be defined by its discrete Fourier transform
\betagin{equation}\lambdabel{eq:tildeP}
\mathscr{F}_{\!\!\mathtt{var}epsilon} P^\gamma_t(\omega) = \mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr),
\end{equation}
for all $\omega \in \{ -N, \ldots, N \}^3$. With a little ambiguity, we denote by $P^\gamma_t$ the operator acting on functions $f : \T_{\mathtt{var}epsilon}^3 \to \R$ by the convolution
\betagin{equation}\lambdabel{eq:P-gamma}
\bigl(P^\gamma_t f\bigr)(x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} P^\gamma_t(x - y) f(y).
\end{equation}
It will be also convenient to define the kernel
\betagin{equation}\lambdabel{eq:P-gamma-tilde}
\widetilde{P}^{\gamma}_{t}(x) := P^\gamma_{t} *_\eps K_\gamma(x),
\end{equation}
and the respective integral operator is defined by analogy with \eqref{eq:P-gamma}. We can then rewrite the discrete equation \eqref{eq:equation-for-X} in the mild form
\betagin{align}\lambdabel{eq:IsingKacEqn}
X_\gamma(t, x) &= P^\gamma_t X^0_\gamma(x) + \mathfrak{s}qrt 2\, Y_\gamma(t, x)\\
&\qquad + \int_0^t \widetilde{P}^{\gamma}_{t-s} \Bigl(-\mathfrak{r}ac{\beta^3}{3} X^3_\gamma + \bigl(\mathfrak{C}_\gamma + A\bigr) X_\gamma + E_\gamma \Bigr)(s, x)\, \mathrm{d} s, \nonumber
\end{align}
where we have used the inverse temperature \eqref{eq:beta} and where
\betagin{equation}\lambdabel{eq:Y-def}
Y_\gamma (t, x) := \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_0^t \widetilde{P}^{\gamma}_{t-s}(x-y) \,\mathrm{d} \mathfrak{M}_\gamma(s, y).
\end{equation}
Here and in the following, we always write stochastic integrals with respect to the time variable (which is $s$ in this integral).
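\begin{remark}
The operators $P^\gamma_t$ and $\widetilde{P}^{\gamma}_{t}$ are most easily applied in Fourier variables: by \eqref{eq:Fourier-of-product}, the convolutions \eqref{eq:P-gamma} and \eqref{eq:P-gamma-tilde} act as multiplication by \eqref{eq:tildeP}, respectively by \eqref{eq:tildeP} times $\widehat{K}_\gamma$. The Python sketch below implements this; the nearest-neighbour-type kernel and the parameter values are illustrative stand-ins chosen by us, and the final line only checks that the total mass of an approximate Dirac delta is preserved.
\begin{verbatim}
import numpy as np

def apply_heat_kernel(f, t, Khat, gamma, smoothed=False):
    # Apply P^gamma_t (or tilde P^gamma_t when smoothed=True) to a lattice
    # function f; Khat is the centred, real DFT of K_gamma (index i along
    # each axis corresponds to the frequency i - N).
    N = (Khat.shape[0] - 1) // 2
    eps, alpha = 2.0 / (2 * N + 1), gamma ** 6
    kappa3 = eps / gamma ** 4                 # eq:c-gamma-2
    mult = np.exp(kappa3 ** 2 * (Khat - 1.0) * t / alpha)   # eq:tildeP
    if smoothed:
        mult = mult * Khat                    # eq:P-gamma-tilde in Fourier
    return np.real(np.fft.ifftn(np.fft.ifftshift(mult) * np.fft.fftn(f)))

gamma = 0.5
N = int(np.floor(gamma ** -4))                # = 16
side = 2 * N + 1
eps = 2.0 / side

K = np.zeros((side, side, side))              # illustrative kernel
K[N-1:N+2, N-1:N+2, N-1:N+2] = 1.0
K[N, N, N] = 0.0
K /= K.sum()
Khat = np.real(np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(K))))

f = np.zeros((side, side, side))
f[N, N, N] = 1.0 / eps**3                     # approximate Dirac delta
out = apply_heat_kernel(f, t=1e-4, Khat=Khat, gamma=gamma)
print(eps**3 * out.sum())                     # mass preserved: 1 up to rounding
\end{verbatim}
\end{remark}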
\subsection{A priori bounds}
\label{sec:a-priori}
In the proof of Theorem~\ref{thm:main}, we are going to show convergence of $X_\gamma(t)$ in a stronger topology than $\mathscr{D}'(\mathbb{T}^3)$. For this we need to control this process using the semi-norm \eqref{eq:eps-norm-1}. More precisely, for a fixed constant $\mathfrak{a} \geq 1$ and the value $\eta$ as in the statement of Theorem~\ref{thm:main} we define the stopping time
\betagin{equation}\lambdabel{eq:tau-1}
\tau^{(1)}_{\gamma, \mathfrak{a}} := \inf \Bigl\{t \geq 0 : \| X_\gamma(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{\eta}} \geq \mathfrak{a} \Bigr\}.
\end{equation}
On the random time interval $[0, \tau^{(1)}_{\gamma, \mathfrak{a}})$ we have the a priori bound $\| X_\gamma(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{\eta}} \leq \mathfrak{a}$, while on the closed interval $[0, \tau^{(1)}_{\gamma, \mathfrak{a}}]$ the bound is $ \| X_\gamma(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{\eta}} \leq \mathfrak{a} + 2 \mathtt{var}kappa_{\gamma, 1}$ almost surely. The two bounds are different because there may be a jump of the process at time $\tau^{(1)}_{\gamma, \mathfrak{a}}$, and as one can see from \eqref{eq:equation-for-X} the jump size of $X_\gamma(t, x)$ is bounded by the jump size of $\big( K_\gamma *_\eps \mathfrak{M}_\gamma \big) (t, x)$, and the latter is almost surely bounded by $\mathfrak{r}ac{2 \mathtt{var}epsilon^3}{\mathrm{d}eltalta} \mathfrak{s}up_{x \in \mathtt{L}ambda_\mathtt{var}epsilon} |K_\gamma(x)| \leq 2 \mathtt{var}kappa_{\gamma, 1}$. Here, we used the properties that the jump size of the martingale $\mathfrak{M}_\gamma(t,x)$ is $\mathfrak{r}ac{2}{\mathrm{d}eltalta}$ and a jump at time $t$ may almost surely happen only at one $x$. As follows from the definition of $\mathtt{var}kappa_{\gamma, 1}$ in \eqref{eq:K-gamma}, it converges to $1$ as $\gamma \to 0$. Since we always consider $\gamma$ sufficiently small, we can assume that $\mathtt{var}kappa_{\gamma, 1} \leq 2$, and hence $\| X_\gamma(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{\eta}} \leq \mathfrak{a} + 4 \leq 5 \mathfrak{a}$ almost surely on $[0, \tau^{(1)}_{\gamma, \mathfrak{a}}]$. Using \eqref{eq:uniform-bound} we also have a uniform bound on this process
\betagin{equation}\lambdabel{eq:X-apriori}
|X_\gamma(t, x)| \leq 5 \mathfrak{a} \mathfrak{e}^{\eta}
\end{equation}
almost surely. Staying on the time interval $[0, \tau^{(1)}_{\gamma, \mathfrak{a}}]$ is also sufficient to control the bracket process \eqref{eq:M-bracket}. More precisely, for $t < \tau^{(1)}_{\gamma, \mathfrak{a}}$ we have \eqref{eq:X-apriori} and the random part of \eqref{eq:M-bracket} is bounded by
\betagin{equation*}
\Bigl| \mathfrak{s}igmagma \Bigl(\mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon}\Bigr) \tanh \big( \beta \mathrm{d}eltalta X_\gamma(t, x) \big) \Bigr| \lesssim \mathrm{d}eltalta \mathfrak{a} \mathfrak{e}^{\eta} \lesssim \mathfrak{a} \gamma^{3 (1 + \eta)},
\end{equation*}
where we used the estimate $|\tanh(x)| \leq |x|$ for any $x \in \R$, where we estimated $\betata$ by a constant and where we used the scaling \eqref{eq:scalings}. Since $\eta > -1$, the preceding expression vanishes as $\gamma \to 0$ and the bracket process \eqref{eq:M-bracket} converges to the covariance of a cylindrical Wiener process.
To control the discrete model, constructed in Section~\ref{sec:lift}, we need to introduce another stopping time. For this we define the rescaled spin field
\betagin{equation}\lambdabel{eq:S-def}
S_\gamma(t,x) := \mathfrak{r}ac{1}{\mathrm{d}eltalta} \mathfrak{s}igmagma \Bigl( \mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon} \Bigr) \qquad \text{for}~~ x \in \T_{\mathtt{var}epsilon}^3,~ t \geq 0.
\end{equation}
In Section~\ref{sec:second-symbol} we will need to control the product $S_{\gamma}(t, x) X_\gamma(t, x)$, appearing in the random part of the bracket process \eqref{eq:M-bracket}, in a suitable space of distributions. For this, we will show in Lemma~\ref{lem:Y-approximation} that the spin field $S_\gamma$ can be replaced, up to an error, by its local average. More precisely, we take any smooth, rotation invariant function $\un{\fK} : \R^3 \to \R$, supported in the ball of radius $2$ and centered at the origin, whose continuous Fourier transform satisfies $\mathscr{F} \un{\fK} (\omega) = 1$, for all $\omega \in \R^3$ such that $|\omega|_{\mathfrak{s}igmanfty} \leq 1$. Then for a fixed constant $\un{\kappappa} \in (0, \mathfrak{r}ac{1}{10})$ we define
\betagin{equation}\lambdabel{eq:under-K-def}
\un{\fK}_\gamma(k) := \un{c}_{\gamma, 1} \gamma^{3(1-\un{\kappappa})} \un{\fK} \bigl(\gamma^{1-\un{\kappappa}} k\bigr) \qquad \text{with} \quad \un{c}_{\gamma, 1}^{-1} := \mathfrak{s}um_{k \in \T^3_N} \gamma^{3(1-\un{\kappappa})} \un{\fK} \bigl(\gamma^{1-\un{\kappappa}} k\bigr),
\end{equation}
and
\betagin{equation}\lambdabel{eq:X-under}
\un{X}_{\gamma} (t, x) := \bigl(\un{K}_{\gamma} *_\eps S_\gamma\bigr) (t,x), \qquad\qquad \un{K}_{\gamma}(x) := \mathfrak{r}ac{1}{\mathtt{var}epsilon^{3}} \un{\fK}_\gamma \Bigl( \mathfrak{r}ac{x}{\mathtt{var}epsilon} \Bigr).
\end{equation}
In contrast to \eqref{eq:X-gamma}, where the local average of the rescaled spin field $S_\gamma$ is computed in a ball of radius of order $\gamma^{3}$, the process $\un{X}_{\gamma}(t)$ is defined as a local average of spins in a ball of a smaller radius of order $\gamma^{3 + \un{\kappappa}}$. A precise value of $\un \kappappa$ will not play any significant role, as soon as it is small enough. In particular, taking $\un{\kappappa} < \mathfrak{r}ac{1}{10}$ will later allow us to use Lemma~\ref{lem:unX-X-bound}.
Then for $\eta$ as in the statement of Theorem~\ref{thm:main} and for the constant
\betagin{equation}\lambdabel{eq:c-under}
\un{\mathfrak{C}}_\gamma := 2 \mathtt{var}kappa_{\gamma, 2} \int_{0}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} \bigl(P^\gamma_t *_\eps \un{K}_\gamma\bigr)(x) \widetilde{P}^\gamma_t(x) \,\mathrm{d} t,
\end{equation}
where we use \eqref{constant:isingkac_one}, we define the stopping time
\betagin{equation}\lambdabel{eq:tau-2}
\tau^{(2)}_{\gamma, \mathfrak{a}} := \inf \Bigl\{t \geq 0 : \| \un{X}_\gamma (t) X_\gamma(t) - \un{\mathfrak{C}}_\gamma \|^{(\un{\mathfrak{e}})}_{\mathcal{C}C^{-1-\un\kappappa}} \geq \mathfrak{a} \mathfrak{e}^{\un\kappappa / 2 - 1} \Bigr\},
\end{equation}
where $\un{\mathfrak{e}} := \mathfrak{e} \gamma^{\un\kappappa}$. Since both of the involved processes $\un{X}_\gamma$ and $X_\gamma$ are expected to converge to distributions as $\gamma \to 0$, the product $\un{X}_\gamma X_\gamma$ needs to be renormalised by subtracting the divergent constant $\un{\mathfrak{C}}_\gamma$. From Lemma~\ref{lem:renorm-constant-underline} we have $|\un{\mathfrak{C}}_\gamma| \lesssim \mathfrak{e}^{-1}$, and we expect that $\| \un{X}_\gamma (t) X_\gamma(t) \|^{(\un{\mathfrak{e}})}_{\mathcal{C}C^{-1-\un\kappappa}}$ blows up with the speed $\mathfrak{e}^{-1}$. The speed of blow-up in \eqref{eq:tau-2}, after renormalising the product, is slower. It can be significantly improved, but the presented speed is enough for our estimates in Section~\ref{sec:second-symbol}.
To combine the two stopping times \eqref{eq:tau-1} and \eqref{eq:tau-2}, we set
\betagin{equation}\lambdabel{eq:tau}
\tau_{\gamma, \mathfrak{a}} := \tau^{(1)}_{\gamma, \mathfrak{a}} \wedge \tau^{(2)}_{\gamma, \mathfrak{a}},
\end{equation}
and we restrict the time variable to the interval $[0, \tau_{\gamma, \mathfrak{a}}]$. For this it will be convenient to consider a stopped process $\mathfrak{s}igmagma(t)$, extended beyond the random time $\tau_{\gamma, \mathfrak{a}}$. To define such extension, we introduce a new spin system $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}$ which starts from the configuration $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}\bigl(\mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}\bigr) = \mathfrak{s}igmagma\bigl(\mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}-\bigr)$ and which for the times $t > \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}$ is given by the infinitesimal generator $\mathscr{L}_\gamma'$ given by \eqref{eq:generator} with the flip rates\footnote{The process $\mathfrak{s}igmagma'$ defined by the generator $\mathscr{L}_\gamma'$ is called a ``voter model'' \cite{Liggett}. The scaling limit of the one-dimensional Ising-Kac model near the critical temperature was proved in \cite{MR1358083} by using a coupling of these two models.}
\betagin{equation*}
c'_{\gamma}(\mathfrak{s}igmagma, j) = \mathfrak{r}ac{1}{2} \Bigl( 1 - \mathfrak{s}igmagma(j) h_\gamma(\mathfrak{s}igmagma, j) \Bigr).
\end{equation*}
Then we set
\betagin{equation}\lambdabel{eq:sigma-stopped}
\mathfrak{s}igmagma_{\gamma, \mathfrak{a}}(t) :=
\betagin{cases}
\mathfrak{s}igmagma(t) &\text{for}~~ t < \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}, \\
\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}(t) &\text{for}~~ t \geq \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha},
\end{cases}
\end{equation}
where $\alpha$ is from \eqref{eq:scalings}. The reason for making this particular choice of the extension is the good control it provides on the rescaled spin field $X'_{\gamma, \mathfrak{a}}$, defined as in \eqref{eq:X-gamma} for the process $\sigma'_{\gamma, \mathfrak{a}}$. More precisely, we show in Lemma~\ref{lem:X-prime-bound} that $X'_{\gamma, \mathfrak{a}}$ solves a linear equation which allows us to bound it globally in time.
We define the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}$ via the process $\mathfrak{s}igmagma_{\gamma, \mathfrak{a}}$ in the same way as we defined $\mathfrak{M}_\gamma$ in \eqref{eq:martingales} via the process $\mathfrak{s}igmagma$. For $t < \tau_{\gamma, \mathfrak{a}}$ the martingale $\mathfrak{M}_{\gamma, \mathfrak{a}}(t)$ coincides with $\mathfrak{M}_{\gamma}(t)$, while for $t \geq \tau_{\gamma, \mathfrak{a}}$ we denote $\mathfrak{M}_{\gamma, \mathfrak{a}}(t) = \mathfrak{M}'_{\gamma, \mathfrak{a}}$, where the latter has the predictable quadratic covariations
\betagin{equation}\lambdabel{eq:M-prime-bracket}
\big\lambdangle \mathfrak{M}'_{\gamma, \mathfrak{a}}(\bigcdot, x), \mathfrak{M}'_{\gamma, \mathfrak{a}}(\bigcdot, x') \big\rangle_t = 2 \mathtt{var}kappa_{\gamma, 2} \mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{x,x'} \int_{\tau_{\gamma, \mathfrak{a}}}^{t} \Bigl( 1 - \mathrm{d}eltalta \mathfrak{s}igmagma'_{\gamma, \mathfrak{a}} \Bigl(\mathfrak{r}ac{s}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon}\Bigr) X'_{\gamma, \mathfrak{a}}(s, x) \Bigr) \mathrm{d} s,
\end{equation}
with $\mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{x,x'}$ defined below \eqref{eq:M-bracket} and $\mathtt{var}kappa_{\gamma, 2}$ is defined in \eqref{constant:isingkac_one}. Then for $t \geq \tau_{\gamma, \mathfrak{a}}$ we have
\betagin{equation}\lambdabel{eq:M-a-variation}
\big\lambdangle \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x), \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x') \big\rangle_t = \big\lambdangle \mathfrak{M}_\gamma(\bigcdot, x), \mathfrak{M}_\gamma(\bigcdot, x') \big\rangle_{\tau_{\gamma, \mathfrak{a}}} + \big\lambdangle \mathfrak{M}'_{\gamma, \mathfrak{a}}(\bigcdot, x), \mathfrak{M}'_{\gamma, \mathfrak{a}}(\bigcdot, x') \big\rangle_t.
\end{equation}
We define $X_{\gamma, \mathfrak{a}}$ as the solution of an analogue of equation \eqref{eq:IsingKacEqn}, driven by these new martingales
\betagin{align}\lambdabel{eq:IsingKacEqn-periodic}
X_{\gamma, \mathfrak{a}}(t, x) &= P^\gamma_t X^0_\gamma(x) + \mathfrak{s}qrt 2\, Y_{\gamma, \mathfrak{a}}(t, x) \\
&\qquad + \int_0^t \widetilde{P}^{\gamma}_{t-s} \Bigl( -\mathfrak{r}ac{\beta^3}{3} X^3_{\gamma, \mathfrak{a}} + \bigl(\mathfrak{C}_\gamma + A\bigr) X_{\gamma, \mathfrak{a}} + E_{\gamma, \mathfrak{a}} \Bigr)(s, x)\, \mathrm{d} s, \nonumber
\end{align}
where
\betagin{equation*}
Y_{\gamma, \mathfrak{a}} (t, x) := \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_0^t \widetilde{P}^{\gamma}_{t-s}(x-y)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(s, y),
\end{equation*}
and the error term $E_{\gamma, \mathfrak{a}}$ is defined in the same way as $E_{\gamma}$ in \eqref{eq:expr_error_term}, but via the process $X_{\gamma, \mathfrak{a}}$. For $t \leq \tau_{\gamma, \mathfrak{a}}$ we have $X_{\gamma, \mathfrak{a}}(t) = X_{\gamma}(t)$, and for $t > \tau_{\gamma, \mathfrak{a}}$ we have $X_{\gamma, \mathfrak{a}}(t) = X'_{\gamma, \mathfrak{a}}(t)$.
Working with the process $X_{\gamma, \mathfrak{a}}$ is advantageous, because we can use the a priori bounds provided by the stopping times \eqref{eq:tau-1} and \eqref{eq:tau-2}, which guarantee convergence of the martingales and of their lift to a discrete model (see Proposition~\ref{prop:models-converge}). To prove Theorem~\ref{thm:main}, we will first prove the respective convergence result for $X_{\gamma, \mathfrak{a}}$ and then take the limit $\mathfrak{a} \to \infty$. In order to show that $\tau_{\gamma, \mathfrak{a}}$ almost surely diverges in these limits, we will prove that this stopping time is close to a stopping time of the limiting process $X$, and the latter is almost surely infinite.
\mathfrak{s}ubsection{Periodic extensions}
We are going to write equation \eqref{eq:IsingKacEqn-periodic} in the framework of regularity structures. For this, we need to write this equation on the whole domain $\mathtt{L}ambda_{\mathtt{var}epsilon}$ rather than on the torus $\T_{\mathtt{var}epsilon}^3$. To do this, we denote by $G^\gamma_t : \mathtt{L}ambda_{\mathtt{var}epsilon} \to \R$ the discrete heat kernel, which solves equation \eqref{eq:kernels-P} on $\mathtt{L}ambda_{\mathtt{var}epsilon}$ (one can see that for $\gamma$ small enough, the discrete operator $\mathbf{D}elta_\gamma$ is naturally extended to functions on $\mathtt{L}ambda_{\mathtt{var}epsilon}$). Then we have the identity
\betagin{equation}\lambdabel{eq:From-P-to-G}
\mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} P^\gamma_t(x) f(x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} G^\gamma_t(x) f(x),
\end{equation}
for any $f : \T_{\mathtt{var}epsilon}^3 \to \R$, where on the right-hand side we extended $f$ periodically to $\mathtt{L}ambda_{\mathtt{var}epsilon}$. Analogously, we define
\betagin{equation}\lambdabel{eq:From-P-to-G-tilde}
\widetilde{G}^{\gamma}_{t}(x) := \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} G^{\gamma}_{t}(x - y)K_\gamma(y).
\end{equation}
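Let us remark that the identity \eqref{eq:From-P-to-G} is simply an unfolding of the periodisation: one expects the torus kernel to be the periodisation of the lattice kernel,
\betagin{equation*}
P^\gamma_t(x) = \mathfrak{s}um_{\mathfrak{s}ubstack{y \in \mathtt{L}ambda_{\mathtt{var}epsilon} \\ y \equiv x}} G^\gamma_t(y), \qquad x \in \T_{\mathtt{var}epsilon}^3,
\end{equation*}
where $y \equiv x$ means that $y$ and $x$ define the same point of $\T_{\mathtt{var}epsilon}^3$. Pairing this with the periodic extension of $f$ and regrouping the sum over $\mathtt{L}ambda_{\mathtt{var}epsilon}$ according to these equivalence classes yields \eqref{eq:From-P-to-G}.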
Then equation \eqref{eq:IsingKacEqn-periodic} may be written as
\betagin{align}\lambdabel{eq:IsingKacEqn-new}
X_{\gamma, \mathfrak{a}}(t, x) &= G^\gamma_t X^0_\gamma(x) + \mathfrak{s}qrt 2\, Y_{\gamma, \mathfrak{a}}(t, x) \\
&\qquad + \int_0^t \widetilde{G}^{\gamma}_{t-s} \Bigl( -\mathfrak{r}ac{\beta^3}{3} X^3_{\gamma, \mathfrak{a}} + \bigl(\mathfrak{C}_{\gamma} + A\bigr) X_{\gamma, \mathfrak{a}} + E_{\gamma, \mathfrak{a}} \Bigr)(s, x)\, \mathrm{d} s, \nonumber
\end{align}
where we extended all the involved processes periodically to $\mathtt{L}ambda_{\mathtt{var}epsilon}$.
\mathfrak{s}ection{The dynamical $\mathbb{P}hi^4_3$ model}
\lambdabel{sec:phi4Section}
In this section we recall the notion of solution to the $\mathbb{P}hi^4$ equation \eqref{eq:Phi43} on the three-dimensional torus. Following \cite{Regularity}, we describe the solution in the framework of regularity structures. Throughout the section we are going to use singular modelled distributions and their basic properties, which can be found in \cite{Regularity}. However, we prefer to duplicate some of the definitions here in order to better motivate the setting of Sections~\ref{sec:discreteRegStruct} and~\ref{sec:discrete-solution}.
\mathfrak{s}ubsection{A model space}
\lambdabel{sec:model-space-cont}
In this section we introduce an infinite set $\fT$ and a finite-dimensional regularity structure $\mathscr{T} = (\mathcal{C}A, \mathcal{C}T, \mathcal{C}G)$ with $\mathcal{C}T \mathfrak{s}ubset \fT$, which is needed to describe equation \eqref{eq:Phi43}.
To define the space $\fT$, it is convenient to use some ``abstract symbols'' as its basis elements. Namely, $\blue\Xi$ will represent the driving noise in \eqref{eq:Phi43}, and the integration map $\mathcal{C}I$ will represent the space-time convolution with the heat kernel, i.e. the Green's function of the parabolic operator $\partial_t - \mathbf{D}elta$ on $\R^3$. The symbols $\symbol{X}_i$, $i = 0, \ldots ,3$, will represent the time and space variables, and for $\ell = (\ell_0, \ldots, \ell_3) \in \N_0^4$ we will use the shorthand $\symbol{X}^\ell = \symbol{X}_0^{\ell_0} \symbol{X}_1^{\ell_1} \symbol{X}_2^{\ell_2} \symbol{X}_3^{\ell_3}$, with the special unit symbol $\symbol{\1} := \symbol{X}^0$. We define $\mathcal{C}W_{\mathrm{poly}} := \{\symbol{X}^\ell : \ell \in \N_0^4\}$ to be the set of all monomials.
Then we define the minimal sets $\mathcal{C}V$ and $\mathcal{C}U$ of formal expressions such that $\blue\Xi \in \mathcal{C}V$, $\mathcal{C}W_{\mathrm{poly}} \mathfrak{s}ubset \mathcal{C}V \cap \mathcal{C}U$ and the following implications hold:
\betagin{subequations}\lambdabel{eqs:rules}
\betagin{align}
\tau \in \mathcal{C}V \quad &\Rightarrow \quad \mathcal{C}I(\tau) \in \mathcal{C}U, \lambdabel{eq:rule1}\\
\tau_1, \tau_2, \tau_3 \in \mathcal{C}U \quad &\Rightarrow \quad \tau_1 \tau_2 \tau_3 \in \mathcal{C}V, \lambdabel{eq:rule2}
\end{align}
\end{subequations}
where the product of symbols is commutative with the convention $\symbol{\1} \tau = \tau$. We postulate $\mathcal{C}I(\symbol{X}^\ell) = 0$ and do not include such zero elements into $\mathcal{C}U$ and $\mathcal{C}V$. The set $\mathcal{C}U$ contains the elements needed to describe the solution of \eqref{eq:Phi43}, while $\mathcal{C}V$ contains the elements to describe the expression on the right-hand side of this equation. Namely, the relation \eqref{eq:rule1} means that the elements of $\mathcal{C}U$ are obtained by integrating the elements on the right-hand side of the equation. The rule \eqref{eq:rule2} means that the right-hand side of \eqref{eq:IsingKacEqn-new} contains the third power of the solution (we note that since $\symbol{\1} \in \mathcal{C}U$, the set $\mathcal{C}V$ also contains the symbols $\tau_1$ and $\tau_1 \tau_2$ for all $\tau_1, \tau_2 \in \mathcal{C}U$).
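To illustrate these rules, the first few iterations of \eqref{eqs:rules} produce
\betagin{equation*}
\blue\Xi \quad \Rightarrow \quad \mathcal{C}I(\blue\Xi) \quad \Rightarrow \quad \mathcal{C}I(\blue\Xi)^2,\; \mathcal{C}I(\blue\Xi)^3 \quad \Rightarrow \quad \mathcal{C}I\bigl(\mathcal{C}I(\blue\Xi)^2\bigr),\; \mathcal{C}I\bigl(\mathcal{C}I(\blue\Xi)^3\bigr) \quad \Rightarrow \quad \dotsc
\end{equation*}
where the first and third implications use \eqref{eq:rule1}, and the second one uses \eqref{eq:rule2} together with $\symbol{\1} \in \mathcal{C}U$.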
We set $\fW := \mathcal{C}U \cup \mathcal{C}V$, and for a fixed $\kappappa \in (0, \mathfrak{r}ac{1}{14})$ we define the homogeneity $|\bigcdot| : \fW \to \R$ of each element of $\fW$ by the recursive relations
\betagin{subequations}\lambdabel{eqs:hom}
\betagin{align}
|\symbol{X}^\ell| &= |\ell|_\mathfrak{s}, \lambdabel{eq:hom1}\\
|\blue\Xi| &= -\mathfrak{r}ac{5}{2} - \kappappa,\lambdabel{eq:hom2} \\
|\tau_1 \tau_2| &= |\tau_1| + |\tau_2|, \lambdabel{eq:hom3}\\
|\mathcal{C}I(\tau)| &= |\tau| + 2, \quad \tau \notin \mathcal{C}W_{\mathrm{poly}}. \lambdabel{eq:hom4}
\end{align}
\end{subequations}
The definition \eqref{eq:hom1} takes into account the parabolic scaling of space-time; \eqref{eq:hom2} is the regularity of the space-time white noise; \eqref{eq:hom4} is motivated by the Schauder estimate, i.e. a convolution with the heat kernel increases regularity by $2$. One can readily see that for any $\kappappa < \mathfrak{r}ac{1}{14}$ and for any $\zeta \in \R$ the set $\{\tau \in \fW : |\tau| < \zeta\}$ is finite. The restriction $\kappappa < \mathfrak{r}ac{1}{14}$ will be useful later in Section~\ref{sec:discrete-solution} and it is explained in Remark~\ref{rem:regularity-explanation}.
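As an illustration of the recursion \eqref{eqs:hom}, one has
\betagin{equation*}
|\mathcal{C}I(\blue\Xi)| = -\mathfrak{r}ac{5}{2} - \kappappa + 2 = -\mathfrak{r}ac{1}{2} - \kappappa, \qquad |\mathcal{C}I(\blue\Xi)^3| = 3\, |\mathcal{C}I(\blue\Xi)| = -\mathfrak{r}ac{3}{2} - 3\kappappa, \qquad \bigl|\mathcal{C}I\bigl(\mathcal{C}I(\blue\Xi)^3\bigr)\bigr| = \mathfrak{r}ac{1}{2} - 3\kappappa,
\end{equation*}
which agrees with the values listed in Table~\ref{tab:symbols-cont} below.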
We define $\fT$ to contain all finite linear combinations of the elements in $\fW$, and we view $\mathcal{C}I$ as a linear map $\tau \mapsto \mathcal{C}I(\tau)$, defined on the subspace generated by $\{ \blue\Xi, \mathcal{C}I(\blue\Xi)^2, \mathcal{C}I(\blue\Xi)^3 \}$. Our definition of this map implies that it can be considered as ``an abstract integration map'' from \cite{Regularity}. The set $\fA$ contains the homogeneities $|\tau|$ for all $\tau \in \fW$.
In order to solve equation \eqref{eq:Phi43}, it is enough to consider the elements in $\fW$ with negative homogeneities to describe the right-hand side, while the solution of this equation is described by the elements of homogeneities not exceeding $1 + 3\kappappa$. Hence, we define
\betagin{equation}\lambdabel{eq:basis}
\mathcal{C}W := \{\tau \in \mathcal{C}V : |\tau| \leq 0,\, \tau \neq \blue\Xi\} \cup \{\tau \in \mathcal{C}U : |\tau| \leq 1 + 3 \kappappa\}.
\end{equation}
This is the minimal set of the basis elements of a regularity structure, which will allow us to solve the equation \eqref{eq:abstract_equation}, an abstract version of \eqref{eq:Phi43}. We will see in Section~\ref{sec:lift} that the element $\blue\Xi$ plays a special role; namely, $\blue\Xi$ corresponds to a distribution (a time derivative of a martingale), while the other elements correspond to functions. That is why it will be convenient to remove $\blue\Xi$ from the regularity structure.
As we will see, the set $\mathcal{C}V$ contains the elements describing the right-hand side of \eqref{eq:abstract_equation}, except for the noise element $\blue\Xi$, which we prefer to exclude. In order to get the right-hand side of \eqref{eq:Phi43} after reconstruction of the right-hand side of \eqref{eq:AbstractDiscreteEquation}, it is enough to use the elements of $\mathcal{C}V$ with non-positive homogeneities. This explains why we use only the elements $\{\tau \in \mathcal{C}V : |\tau| \leq 0\}$ in \eqref{eq:basis}. As we explained above, we use the elements $\{\tau \in \mathcal{C}U : |\tau| \leq 1 + 3 \kappappa\}$, because we are going to solve equation \eqref{eq:abstract_equation} in a space of modelled distributions of regularity $1 + 3 \kappappa$.
We define $\mathcal{C}T$ to be the linear span of the elements in $\mathcal{C}W$, and the set $\mathcal{C}A$ contains the homogeneities $|\tau|$ for all elements $\tau \in \mathcal{C}W$.
It is convenient to represent the elements of $\mathcal{C}W$ as trees. Namely, we denote $\blue\Xi$ by a node \<oneNode>\,. When the map $\mathcal{C}I$ is applied to a symbol $\tau$, we draw an edge from the root of the tree representing this symbol $\tau$. For example, the symbol $\mathcal{C}I(\blue\Xi)$ is represented by the diagram $\<1b>$. The product of symbols $\tau_1, \ldots, \tau_n$ is represented by the tree obtained by joining the trees of these symbols at a common root. For example, $\<2b>$ and $\<3b>$ are the diagrams for $\mathcal{C}I(\blue\Xi)^2$ and $\mathcal{C}I(\blue\Xi)^3$ respectively. We use the symbols for the polynomials as before. In Table~\ref{tab:symbols-cont} we provide the elements of $ \mathcal{C}W$ and their homogeneities.
\betagin{table}[h]
\centering
\betagingroup
\mathfrak{s}etlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.2}
\mathfrak{s}ubfloat{
\betagin{tabular}{cc}
\hline
\textbf{Element} & \textbf{Homogeneity} \\
\hline
$\symbol{\1}$ & $0$ \\
$\symbol{X}_i$, $i = 1, 2, 3$ & $1$ \\
$\<1b>$ & $-\mathfrak{r}ac{1}{2}-\kappappa$ \\
$\<2b>$ & $-1-2\kappappa$ \\
$\<2b> \symbol{X}_i$, $i = 1, 2, 3$ & $-2\kappappa$ \\
$\<3b>$ & $-\mathfrak{r}ac{3}{2}-3\kappappa$
\end{tabular} \hspace{0.3cm}}
\mathfrak{s}ubfloat{\hspace{0.3cm}
\betagin{tabular}{cc}
\hline
\textbf{Element} & \textbf{Homogeneity} \\
\hline
$\<3b> \symbol{X}_i$, $i = 1, 2, 3$ & $-\mathfrak{r}ac{1}{2}-3\kappappa$ \\
$\<20b>$ & $\ 1-2\kappappa$ \\
$\<30b>$ & $\mathfrak{r}ac{1}{2}-3\kappappa$ \\
$\<22b>$ & $ -4\kappappa$ \\
$\<31b>$ & $ -4\kappappa$ \\
$\<32b>$ & $-\mathfrak{r}ac{1}{2}-5\kappappa$ \\
\end{tabular}}
\endgroup
\caption{The elements of $\mathcal{C}W$ and their homogeneities. \lambdabel{tab:symbols-cont}}
\end{table}
Every element $f \in \fT$ can be uniquely written as $f = \mathfrak{s}um_{\tau \in \fW} f_\tau \tau$ for $f_\tau \in \R$, and we define
\betagin{equation}\lambdabel{eq:RS-norm}
|f|_\alphapha := \mathfrak{s}um_{\tau \in \fW : |\tau| = \alphapha} |f_\tau|,
\end{equation}
postulating $|f|_\alphapha=0$ if the sum runs over the empty set. We also introduce the projections
\betagin{equation}\lambdabel{eq:RS-projections}
\mathcal{C}Q_{< \alphapha} f := \mathfrak{s}um_{\tau \in \fW : |\tau| < \alphapha} f_\tau \tau, \qquad\qquad \mathcal{C}Q_{\leq \alphapha} f := \mathfrak{s}um_{\tau \in \fW : |\tau| \leq \alphapha} f_\tau \tau.
\end{equation}
Let the model space $\fT_{< \alphapha}$ contain all the elements $f \in \fT$ satisfying $f = \mathcal{C}Q_{< \alphapha} f$. All these definitions can be immediately projected to $\mathcal{C}T$.
\mathfrak{s}ubsection{A structure group}
\lambdabel{sec:structure-group-cont}
In order to use the results of \cite{Regularity}, we need to define a structure group $\mathcal{C}G$. For this, we need to introduce another set of basis elements $\mathcal{C}W_+$, containing $\symbol{X}_i$ for $i = 1, 2, 3$, and the elements of $\mathcal{C}W$ of the form $\mathcal{C}I(\tau)$ for $\tau \neq \blue\Xi$. Then we define $\mathcal{C}T_+$ to be the free commutative algebra generated by the elements of $\mathcal{C}W_+$.
We define a linear map $\mathbf{D}elta : \mathcal{C}T \to \mathcal{C}T \otimes \mathcal{C}T_+$ by the identities
\betagin{subequations}\lambdabel{eqs:coproduct}
\betagin{equation}
\mathbf{D}elta \symbol{\1} = \symbol{\1} \otimes \symbol{\1}, \qquad \mathbf{D}elta \symbol{X}_i = \symbol{X}_i \otimes \symbol{\1} + \symbol{\1} \otimes \symbol{X}_i,
\end{equation}
and then recursively by (we denote by $I$ the identity operator on $\mathcal{C}T_+$)
\betagin{align}
\mathbf{D}elta \tau_1 \tau_2 &= (\mathbf{D}elta \tau_1) (\mathbf{D}elta \tau_2), \\
\mathbf{D}elta \mathcal{C}I(\blue\Xi) &= \mathcal{C}I(\blue\Xi) \otimes \symbol{\1}, \lambdabel{eq:Delta3} \\
\mathbf{D}elta \mathcal{C}I(\tau) &= (\mathcal{C}I \otimes I) \mathbf{D}elta \tau + \symbol{\1} \otimes \mathcal{C}I (\tau), \quad \tau \neq \blue\Xi, \lambdabel{eq:Delta4}
\end{align}
\end{subequations}
for the respective elements $\tau_1, \tau_2, \tau \in \mathcal{C}W$. In Table~\ref{tab:Delta-cont} we write $\mathbf{D}elta \tau$ for all $\tau \in \mathcal{C}W$.
\betagin{table}[h]
\centering
\betagingroup
\mathfrak{s}etlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.2}
\mathfrak{s}ubfloat{
\betagin{tabular}{c}
$\mathbf{D}elta \symbol{\1} = \symbol{\1} \otimes \symbol{\1}$ \\
$\mathbf{D}elta \symbol{X}_i = \symbol{X}_i \otimes \symbol{\1} + \symbol{\1} \otimes \symbol{X}_i$ \\
$\mathbf{D}elta \<1b> = \<1b> \otimes \symbol{\1}$ \\
$\mathbf{D}elta \<2b> = \<2b> \otimes \symbol{\1}$ \\
$\mathbf{D}elta \<2b> \symbol{X}_i = \<2b> \symbol{X}_i \otimes \symbol{\1} + \<2b> \otimes \symbol{X}_i$ \\
$\mathbf{D}elta \<3b> = \<3b> \otimes \symbol{\1}$
\end{tabular} \hspace{0.3cm}}
\mathfrak{s}ubfloat{\hspace{0.5cm}
\betagin{tabular}{c}
$\mathbf{D}elta \<3b> \symbol{X}_i = \<3b> \symbol{X}_i \otimes \symbol{\1} + \<3b> \otimes \symbol{X}_i$ \\
$\mathbf{D}elta \<20b> = \<20b> \otimes \symbol{\1} + \symbol{\1} \otimes \<20b>$ \\
$\mathbf{D}elta\<30b> = \<30b> \otimes \symbol{\1} + \symbol{\1} \otimes \<30b>$ \\
$\mathbf{D}elta\<22b> = \<22b> \otimes \symbol{\1} + \<2b> \otimes \<20b> $ \\
$\mathbf{D}elta\<31b> = \<31b> \otimes \symbol{\1} + \<1b> \otimes \<30b>$ \\
$\mathbf{D}elta\<32b> = \<32b> \otimes \symbol{\1} + \<2b> \otimes \<30b>$
\end{tabular}}
\endgroup
\caption{The image of the operator $\mathbf{D}elta$. \lambdabel{tab:Delta-cont}}
\end{table}
\betagin{remark}
Since we restricted the set of basis elements \eqref{eq:basis}, our definition of the map $\mathbf{D}elta$ is much simpler than that in \cite[Eq.~8.8b]{Regularity}. More precisely, the general definition of $\mathbf{D}elta \mathcal{C}I(\tau)$ should be
\betagin{equation*}
\mathbf{D}elta \mathcal{C}I(\tau) = (\mathcal{C}I \otimes I) \mathbf{D}elta \tau + \mathfrak{s}um_{\mathfrak{s}ubstack{k, \ell \in \N_0^4 \\ |k + \ell|_s < |\tau| + 2}} \mathfrak{r}ac{\symbol{X}^k}{k!} \otimes \mathfrak{r}ac{\symbol{X}^\ell}{\ell!} \mathcal{C}I_{k + \ell} (\tau),
\end{equation*}
where $\mathcal{C}I_{k + \ell}$ are new auxiliary symbols. Our definition \eqref{eq:basis} implies that there is at most one term in this sum, which yields \eqref{eq:Delta3} and \eqref{eq:Delta4}.
\end{remark}
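To illustrate how the recursive definition \eqref{eqs:coproduct} produces the entries of Table~\ref{tab:Delta-cont}, consider the element $\<22b> = \<20b>\,\<2b>$. Since $\mathbf{D}elta \<20b> = \<20b> \otimes \symbol{\1} + \symbol{\1} \otimes \<20b>$ by \eqref{eq:Delta4}, and $\mathbf{D}elta \<2b> = \<2b> \otimes \symbol{\1}$ by \eqref{eq:Delta3} and multiplicativity, one gets
\betagin{equation*}
\mathbf{D}elta \<22b> = \bigl( \<20b> \otimes \symbol{\1} + \symbol{\1} \otimes \<20b> \bigr) \bigl( \<2b> \otimes \symbol{\1} \bigr) = \<22b> \otimes \symbol{\1} + \<2b> \otimes \<20b>,
\end{equation*}
which is the corresponding entry of Table~\ref{tab:Delta-cont}.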
For any linear functional $f : \mathcal{C}T_+ \to \R$ we define the map $\Gamma_f : \mathcal{C}T \to \mathcal{C}T$ as
\betagin{equation}\lambdabel{eq:Gamma-general}
\Gamma_{\!f} \tau := (I \otimes f) \mathbf{D}elta \tau.
\end{equation}
Then the structure group $\mathcal{C}G$ is defined as $\mathcal{C}G := \{\Gamma_{\!f} : f \in \mathcal{C}G_+\}$, where $\mathcal{C}G_+$ contains all linear functionals $f : \mathcal{C}T_+ \to \R$ satisfying $f(\tau \bar \tau) = f(\tau) f(\bar \tau)$ for $\tau, \bar \tau \in \mathcal{C}T_+$.
Since the model space $\mathcal{C}T$ is generated by a small number of elements listed in Table~\ref{tab:symbols-cont}, we can describe the structure group $\mathcal{C}G$ explicitly. More precisely, $\mathcal{C}G$ contains all the transformations listed in Table~\ref{tab:linear_transformations}, for any real constants $a_i$, $i = 1, 2, 3$, $b$ and $c$.
\betagin{table}[h]
\centering
\betagingroup
\mathfrak{s}etlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.2}
\mathfrak{s}ubfloat{
\betagin{tabular}{cc}
\hline
\textbf{Element} & \textbf{Image} \\
\hline
$\symbol{\1}$ & $\symbol{\1}$ \\
$\symbol{X}_i$, $i = 1, 2, 3$ & $\symbol{X}_i + a_i \symbol{\1}$ \\
$\<1b>$ & $\<1b>$ \\
$\<2b>$ & $\<2b>$ \\
$\<2b> \symbol{X}_i$, $i = 1, 2, 3$ & $\<2b> \symbol{X}_i + a_i \<2b>$ \\
$\<3b>$ & $\<3b>$
\end{tabular} \hspace{0.3cm}}
\mathfrak{s}ubfloat{\hspace{0.5cm}
\betagin{tabular}{cc}
\hline
\textbf{Element} & \textbf{Image} \\
\hline
$\<3b> \symbol{X}_i$, $i = 1, 2, 3$ & $\<3b> \symbol{X}_i + a_i \<3b>$ \\
$\<20b>$ & $\<20b> + b \symbol{\1}$ \\
$\<30b>$ & $\<30b> + c \symbol{\1}$ \\
$\<22b>$ & $\<22b> + b \<2b>$ \\
$\<31b>$ & $\<31b> + c \<1b>$ \\
$\<32b>$ & $\<32b> + c \<2b>$
\end{tabular}}
\endgroup
\caption{Linear transformations in $\mathcal{C}G$ of the elements in $\mathcal{C}W$. \lambdabel{tab:linear_transformations}}
\end{table}
The bijection between these constants and the functionals $f \in \mathcal{C}G_+$ is given by
\betagin{equation*}
a_i = f(\symbol{X}_i), \qquad b = f\bigl(\,\<20b>\,\bigr), \qquad c = f\bigl(\,\<30b>\,\bigr).
\end{equation*}
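For instance, for the element $\<22b>$ the definition \eqref{eq:Gamma-general} together with Table~\ref{tab:Delta-cont} gives
\betagin{equation*}
\Gamma_{\!f}\, \<22b> = (I \otimes f) \mathbf{D}elta \<22b> = f(\symbol{\1})\, \<22b> + f\bigl(\,\<20b>\,\bigr)\, \<2b> = \<22b> + b\, \<2b>,
\end{equation*}
where we used that $f(\symbol{\1}) = 1$ for every non-trivial multiplicative functional $f$; this is the corresponding entry of Table~\ref{tab:linear_transformations}.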
In the rest of this section we use the framework of \cite{Regularity} to work with the regularity structure $\mathscr{T} = (\mathcal{C}A, \mathcal{C}T, \mathcal{C}G)$ just introduced.
\mathfrak{s}ubsection{A solution map}
Let $G$ be the heat kernel, i.e. the Green's function of the parabolic operator $\partial_t - \mathbf{D}elta$ on $\R^3$. As in \cite[Sec.~5]{Regularity}, we write it as $G = \mathscr{K} + \mathscr{R}$, where $\mathscr{R}$ is smooth and $\mathscr{K}$ is singular and compactly supported. Let furthermore $Z = (\mathbb{P}i, \Gamma)$ be the model on the regularity structure $\mathscr{T}$ for the equation \eqref{eq:Phi43}, defined in \cite[Sec.~10.5]{Regularity} with respect to the kernel $\mathscr{K}$. Using the value $\kappappa$ from \eqref{eq:hom2}, we define the abstract integration operator
\betagin{equation}\lambdabel{eq:abstract-integral}
\mathcal{C}P := \mathcal{C}K_{\kappappa} + R_{1 + 3 \kappappa} \mathcal{C}R,
\end{equation}
where the operator $\mathcal{C}K_{\kappappa}$ is defined in \cite[Eq.~5.15]{Regularity} via the kernel $\mathscr{K}$ for the values $\betata = 2$ and $\gammamma = \kappappa$, the operator $R_{1 + 3 \kappappa}$ is defined in \cite[Eq.~7.7]{Regularity} as a Taylor expansion of the function $\mathscr{R}$ up to the order $1 + 3 \kappappa$, and $\mathcal{C}R$ is the reconstruction map for the model $Z$ defined in \cite[Thm.~3.10]{Regularity}. The choice of the values $\kappappa$ and $1 + 3\kappappa$ in \eqref{eq:abstract-integral} is motivated as follows. We are going to solve an abstract version of equation \eqref{eq:Phi43} for a modelled distribution $U \in \mathcal{C}D^{\zeta, \eta}$ with $\zeta = 1 + 3 \kappappa$ being the minimal regularity such that the theory can be applied. Then the non-linearity $U^3$ of the equation is an element of the space $\mathcal{C}D^{\zeta + 2 |\mathcal{C}I(\blue\Xi)|, \bar \eta}$, where $|\mathcal{C}I(\blue\Xi)|$ is the regularity of the sector in which $U$ takes values. Since $\zeta + 2 |\mathcal{C}I(\blue\Xi)| = \kappappa$ (see Table~\ref{tab:symbols-cont}), the map $\mathcal{C}P$ should act on elements of $\mathcal{C}D^{\kappappa, \bar \eta}$.
Using this integral operator, we define the modelled distribution
\betagin{equation}\lambdabel{eq:W-def}
W(z) := \mathcal{C}P \mathbf{{1}}_+ (\blue\Xi)(z),
\end{equation}
where $\mathbf{{1}}_+$ is the projection of modelled distributions to $\R_+$ in the time variable. Using the polynomial lift of the convolution $G X^0$, defined in \cite[Lem.~7.5]{Regularity}, we consider the abstract equation
\betagin{equation}\lambdabel{eq:abstract_equation}
U = \mathcal{C}Q_{< \zeta} \Bigl(G X^0 + \mathcal{C}P \mathbf{{1}}_+ F(U) + \mathfrak{s}qrt 2\, W\Bigr),
\end{equation}
where $U \in \mathcal{C}D^{\zeta, \eta} (Z)$ is a modelled distribution, for $\zeta = 1 + 3 \kappappa$ and $\eta \in \R$, and where the non-linearity $F$ is given by
\betagin{equation*}
F(U) := \mathcal{C}Q_{\leq 0} \Bigl(- \mathfrak{r}ac{1}{3} U^3 + A U\Bigr).
\end{equation*}
We note that the product $U^3$ is in general an element of $\fT$ and may contain terms which are not included in the model space $\mathcal{C}T$. The aim of applying the projection $\mathcal{C}Q_{\leq 0}$ is to remove such terms. Similarly, the right-hand side of \eqref{eq:abstract_equation} may contain elements with homogeneities higher than $1$, but we consider only the projection to the homogeneities not exceeding $\zeta$.
Let us now consider a mollified noise $\xi_{\mathrm{d}elta} = \mathtt{var}rho_\mathrm{d}elta \mathfrak{s}tar \xi$, where the mollifier $\mathtt{var}rho_\mathrm{d}elta$ is defined in \eqref{eq:rho} for $\mathrm{d}elta > 0$. Let us define $X_{\mathrm{d}elta}^{0} := \psi_\mathrm{d}elta * X^0$, where the mollifier $\psi_\mathrm{d}elta(x) := \mathfrak{r}ac{1}{\mathrm{d}elta^{3}} \psi (\mathfrak{r}ac{x}{\mathrm{d}elta})$ is defined for a smooth compactly supported function $\psi : \R^3 \to \R$, satisfying $\int_{\R^3} \psi(x) \mathrm{d} x = 1$. Let furthermore $U^{(\mathrm{d}elta)}$ be the solution of equation \eqref{eq:abstract_equation}, defined with respect to the initial condition $X_{\mathrm{d}elta}^{0}$ and the model $Z^{(\mathrm{d}elta)} = (\mathbb{P}i^{(\mathrm{d}elta)}, \Gamma^{(\mathrm{d}elta)})$, defined in \cite[Sec.~10.5]{Regularity} via the mollified noise $\xi_{\mathrm{d}elta}$. Then from \cite[Sec.~9.4]{Regularity} we conclude that the process $X_{\mathrm{d}elta} = \mathcal{C}R^{(\mathrm{d}elta)} U^{(\mathrm{d}elta)}$, where $\mathcal{C}R^{(\mathrm{d}elta)}$ is the reconstruction map for the model $Z^{(\mathrm{d}elta)}$ from \cite[Thm.~3.10]{Regularity}, is the classical solution of the SPDE
\betagin{equation}\lambdabel{eq:Phi43-delta}
\bigl(\partial_t - \mathbf{D}elta\bigr) X_{\mathrm{d}elta} = - \mathfrak{r}ac{1}{3} X_{\mathrm{d}elta}^3 + \bigl(\mathfrak{C}_\mathrm{d}elta + A\bigr) X_{\mathrm{d}elta} + \mathfrak{s}qrt 2\, \xi_{\mathrm{d}elta},
\end{equation}
with the initial condition $X_{\mathrm{d}elta}^{0}$ at time $0$. The renormalisation constant $\mathfrak{C}_{\mathrm{d}elta} \sim \mathrm{d}elta^{-1}$ is defined in \cite{Regularity} and is such that the solution of \eqref{eq:Phi43-delta} converges as $\mathrm{d}elta \to 0$ in a suitable space of distributions.
\betagin{theorem}\lambdabel{thm:Phi-solution}
For $\zeta = 1 + 3 \kappappa$ and for $\eta$ as in Theorem~\ref{thm:main}, equation \eqref{eq:abstract_equation} has a unique local in time solution $U \in \mathcal{C}D^{\zeta, \eta}(Z)$, and the solution map $U = \mathcal{C}S(X^0, Z)$ is locally Lipschitz continuous with respect to the initial state $X^0 \in \mathcal{C}C^\eta(\mathbb{T})$ and the model $Z$.
Then the solution of \eqref{eq:Phi43} is defined as $X = \mathcal{C}R U$, where $\mathcal{C}R$ is the reconstruction map associated to the model $Z$ by \cite[Thm.~3.10]{Regularity}. Moreover, for any $T > 0$ and $p \geq 1$ one has
\betagin{equation*}
\mathbb{E} \biggl[\mathfrak{s}up_{t \in [0, T]} \| X(t) \|^p_{\mathcal{C}C^\eta}\biggr] < \infty,
\end{equation*}
and the same bound holds for $\| (X - \mathfrak{s}qrt 2\, \mathcal{C}R W)(t) \|_{\mathcal{C}C^{3 / 2 + 3 \eta}}$, where $W$ is defined in \eqref{eq:W-def}.
Finally, let $X_{\mathrm{d}elta}$ be the solution of \eqref{eq:Phi43-delta}. Then there exists $\theta > 0$ such that for any $T > 0$, $p \geq 1$ and for some $C > 0$, depending on $T$ and $p$, one has
\betagin{equation}\lambdabel{eq:solution-continuity}
\mathbb{E} \biggl[\mathfrak{s}up_{t \in [0, T]} \| (X - X_{\mathrm{d}elta})(t) \|^p_{\mathcal{C}C^\eta}\biggr] \leq C \mathrm{d}elta^{\theta p}
\end{equation}
uniformly over $\mathrm{d}elta \in (0,1]$.
\end{theorem}
\betagin{proof}
Existence of a local solution and its continuity were proved in \cite[Prop.~9.10]{Regularity}. From \cite[Thm.~1.1]{from-infinity} we obtain the moment bounds on the processes $X$ and $X - \mathfrak{s}qrt 2\, \mathcal{C}R W$.
\end{proof}
One can readily see that the solution $U$ has the following expansion:
\betagin{equation}\lambdabel{eq:U-expansion}
U(z) = \mathfrak{s}qrt 2\, \<1b> + v(z) \symbol{\1} - \mathfrak{r}ac{2 \mathfrak{s}qrt 2}{3}\, \<30b> - 2 v(z) \<20b> + \mathfrak{s}um_{i = 1, 2,3} v^i(z) \symbol{X}_i,
\end{equation}
for some functions $v, v^i : \R_+ \times \R^3 \to \R$. Indeed, this identity follows by writing the integration operator in \eqref{eq:abstract_equation} explicitly as
\betagin{equation*}
U(z) = \mathcal{C}I \Bigl(- \mathfrak{r}ac{1}{3} U(z)^3 + A U(z) + \mathfrak{s}qrt 2\, \blue\Xi\Bigr) + v(z) \symbol{\1} + \mathfrak{s}um_{i = 1, 2,3} v^i(z) \symbol{X}_i,
\end{equation*}
repeating the iterative approximation of the solution several times and truncating all terms with homogeneities strictly bigger than $1$. The function $v$ may be written as $v = X - \mathfrak{s}qrt 2 Y$, where $X = \mathcal{C}R U$ and $Y = \mathcal{C}R W$, with $W$ defined in \eqref{eq:W-def}, and it solves the ``remainder equation''
\betagin{equation}\lambdabel{eq:remainder-equation}
\bigl(\partial_t - \mathbf{D}elta\bigr) v = - \mathfrak{r}ac{1}{3} \bigl(v + \mathfrak{s}qrt 2\, Y\bigl)^3 + A \bigl(v + \mathfrak{s}qrt 2\, Y\bigr),
\end{equation}
with the initial condition $X^0$ at time $t = 0$. Interpretation of the functions $v^i$ is more complicated, and we do not provide it here. Theorem~\ref{thm:Phi-solution} implies that for any $p \geq 1$ and $T > 0$ we have
\betagin{equation*}
\mathbb{E} \biggl[\mathfrak{s}up_{t \in [0, T]} \| v(t) \|^p_{\mathcal{C}C^{3 / 2 + 3 \eta}}\biggr] < \infty.
\end{equation*}
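Let us also record how the numerical coefficients in \eqref{eq:U-expansion} arise. Inserting the leading-order ansatz $U \approx \mathfrak{s}qrt 2\, \<1b> + v\, \symbol{\1}$ into the cubic term, one finds at the level of the tree expansion
\betagin{equation*}
-\mathfrak{r}ac{1}{3}\, U^3 \approx -\mathfrak{r}ac{2\mathfrak{s}qrt 2}{3}\, \<3b> - 2 v\, \<2b> + \dotsb,
\end{equation*}
where the omitted terms, after integration, either exceed the homogeneity $\zeta$ and are removed by $\mathcal{C}Q_{< \zeta}$, or contribute only to the coefficients $v$ and $v^i$. Applying the abstract integration, which at this order acts as $\mathcal{C}I$, turns $\<3b>$ into $\<30b>$ and $\<2b>$ into $\<20b>$, reproducing the terms $-\mathfrak{r}ac{2 \mathfrak{s}qrt 2}{3}\, \<30b> - 2 v\, \<20b>$ in \eqref{eq:U-expansion}.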
\mathfrak{s}ection{A regularity structure for the discrete equation}
\lambdabel{sec:discreteRegStruct}
Proving convergence of the Ising-Kac model requires solving equation \eqref{eq:IsingKacEqn-new} using the theory of regularity structures. For this we are going to use the framework of \cite{erhard2017discretisation}, which is suitable for solving approximate stochastic PDEs. A less general framework developed in \cite{HairerMatetski} could also be applied.
We stress that the regularity structure for equation \eqref{eq:IsingKacEqn-new} is very similar to the one used to solve the $\mathbb{P}hi^4_3$ equation, except for the fact that in our setting we need to describe the additional error term $E_\gamma$ defined in \eqref{eq:expr_error_term}. As we shall see, the local description of this error term involves the local description of the fifth power of the solution of our equation; this is the only reason why we need to introduce new trees which would not appear in the classical $\mathbb{P}hi^4_3$ solution theory.
In the following section we are going to define a regularity structure $\mathscr{T}^\mathrm{ex} = (\mathcal{C}A^\mathrm{ex}, \mathcal{C}T^\mathrm{ex}, \mathcal{C}G^\mathrm{ex})$ which extends the regularity structure $\mathscr{T}$, defined in Section~\ref{sec:phi4Section}, by adding several basis elements. Throughout this section we are going to use the notation from Section~\ref{sec:phi4Section}.
\mathfrak{s}ubsection{A model space}
\lambdabel{sec:model-space}
In addition to the integration map $\mathcal{C}I$ we introduce a new map $\mathcal{C}E$ which will represent the operator of multiplication by $\mathfrak{e}^2 \approx \gammamma^{6}$. Then we define the minimal sets $\mathcal{C}V^\mathrm{ex}$ and $\mathcal{C}U^\mathrm{ex}$ of formal expressions by the implications \eqref{eqs:rules} and
\betagin{align}
\tau_1, \ldots, \tau_5 \in \mathcal{C}U^\mathrm{ex} \quad &\Rightarrow \quad \mathcal{C}E(\tau_1 \cdots \tau_5) \in \mathcal{C}V^\mathrm{ex}, \lambdabel{eq:rule3}
\end{align}
where we postulate $\mathcal{C}E(\symbol{X}^\ell) = 0$ and do not include such zero elements into $\mathcal{C}V^\mathrm{ex}$. The rule \eqref{eq:rule3} describes the remainder \eqref{eq:expr_error_term}, in the Taylor expansion of which the first non-vanishing element is proportional to $\gammamma^6 X_\gamma(t,x)^5$: in fact, the trees arising from the rule \eqref{eq:rule3} are those which will allow a local description of the error term $E_\gamma$ (see also Remark~\ref{rem:regularity-explanation}).
We define the set of elements $\fW^\mathrm{ex} := \mathcal{C}U^\mathrm{ex} \cup \mathcal{C}V^\mathrm{ex}$ with the homogeneity $|\bigcdot| : \fW^\mathrm{ex} \to \R$ defined by \eqref{eqs:hom} and
\betagin{align}
|\mathcal{C}E(\tau_1 \cdots \tau_5)| &= |\tau_1| + \cdots + |\tau_5| + 2, \quad \tau_1 \cdots \tau_5 \notin \mathcal{C}W_{\mathrm{poly}}. \lambdabel{eq:hom5}
\end{align}
The increase of homogeneity by $2$ in \eqref{eq:hom5} comes from the multiplier $\gamma^6 \approx \mathfrak{e}^2$.
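For example, combining \eqref{eqs:hom} and \eqref{eq:hom5} we get
\betagin{equation*}
\bigl|\mathcal{C}E\bigl(\mathcal{C}I(\blue\Xi)^4\bigr)\bigr| = 4 \Bigl( -\mathfrak{r}ac{1}{2} - \kappappa \Bigr) + 2 = - 4\kappappa, \qquad \bigl|\mathcal{C}E\bigl(\mathcal{C}I(\blue\Xi)^5\bigr)\bigr| = 5 \Bigl( -\mathfrak{r}ac{1}{2} - \kappappa \Bigr) + 2 = -\mathfrak{r}ac{1}{2} - 5\kappappa,
\end{equation*}
so that both new symbols have negative homogeneity; these values appear in Table~\ref{tab:symbols} below.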
The set $\fT^\mathrm{ex}$ contains all finite linear combinations of the elements in $\fW^\mathrm{ex}$, and we view $\mathcal{C}I$ and $\mathcal{C}E$ as linear maps $\tau \mapsto \mathcal{C}I(\tau)$ and $\bar \tau \mapsto \mathcal{C}E(\bar\tau)$, defined on the subspaces generated by $\{ \blue\Xi, \mathcal{C}I(\blue\Xi)^2, \mathcal{C}I(\blue\Xi)^3 \}$ and $\{ \mathcal{C}I(\blue\Xi)^4, \mathcal{C}I(\blue\Xi)^5 \}$ respectively. Our definitions of these maps imply that they have the same properties (but, as just stated, different domains), and both of them can be considered as ``abstract integration maps'' from \cite{Regularity}. The set $\fA^\mathrm{ex}$ contains the homogeneities $|\tau|$ for all $\tau \in \fW^\mathrm{ex}$.
By analogy with \eqref{eq:basis} we define
\betagin{equation}\lambdabel{eq:basis-discrete}
\mathcal{C}W^\mathrm{ex} := \{\tau \in \mathcal{C}V^\mathrm{ex} : |\tau| \leq 0,\, \tau \neq \blue\Xi\} \cup \{\tau \in \mathcal{C}U^\mathrm{ex} : |\tau| \leq 1 + 3 \kappappa\} \cup \{\tau : \mathcal{C}E(\tau) \in \mathcal{C}V^\mathrm{ex}, |\tau| \leq -2\},
\end{equation}
where we also add to $\fW^\mathrm{ex}$ those $\tau$ such that $\mathcal{C}E(\tau) \in \mathcal{C}V^\mathrm{ex}$. This is the minimal set of the basis elements of a regularity structure, which will allow us to solve the equation \eqref{eq:AbstractDiscreteEquation}, an abstract version of \eqref{eq:IsingKacEqn-new}. We need the elements $\{\tau : \mathcal{C}E(\tau) \in \mathcal{C}V^\mathrm{ex}, |\tau| \leq -2\}$ to be able to reconstruct the non-linearity \eqref{eq:E2-ga}.
As before, we define $\mathcal{C}T^\mathrm{ex}$ to be the linear span of the elements in $\mathcal{C}W^\mathrm{ex}$, and the set $\mathcal{C}A^\mathrm{ex}$ contains the homogeneities $|\tau|$ for all elements $\tau \in \mathcal{C}W^\mathrm{ex}$. We obviously have $\mathcal{C}W \mathfrak{s}ubset \mathcal{C}W^\mathrm{ex}$ and $\mathcal{C}T \mathfrak{s}ubset \mathcal{C}T^\mathrm{ex}$ for the sets defined in Section~\ref{sec:model-space-cont}.
As in Section~\ref{sec:model-space-cont}, we use the graphical representation of the elements of $\mathcal{C}W^\mathrm{ex}$, where application of the map $\mathcal{C}E$ is represented by the double edge $\<eb>$. For example, the diagram $\<40eb>$ represents the symbol $\mathcal{C}E(\mathcal{C}I(\blue\Xi)^4)$. Table~\ref{tab:symbols} contains those elements of $\mathcal{C}W^\mathrm{ex}$ which are not included in Table~\ref{tab:symbols-cont}. This setting is very similar to the one of the $\mathbb{P}hi^4_3$ solution theory, except that here we have an extra ``integration map''~$\mathcal{C}E$.
\betagin{table}[h]
\centering
\betagingroup
\mathfrak{s}etlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.2}
\mathfrak{s}ubfloat{
\betagin{tabular}{cc}
\hline
\textbf{Element} & \textbf{Homogeneity} \\
\hline
$\<4b>$ & $-2-4\kappappa$ \\
$\<5b>$ & $-\mathfrak{r}ac{5}{2}-5\kappappa$
\end{tabular} \hspace{0.3cm}}
\mathfrak{s}ubfloat{\hspace{0.3cm}
\betagin{tabular}{cc}
\hline
\textbf{Element} & \textbf{Homogeneity} \\
\hline
$\<40eb>$ & $-4\kappappa$ \\
$\<50eb>$ & $-\mathfrak{r}ac{1}{2}-5\kappappa$
\end{tabular}}
\endgroup
\caption{The elements of $\mathcal{C}W^\mathrm{ex}$ and their homogeneities which are not included into Table~\ref{tab:symbols-cont}. \lambdabel{tab:symbols}}
\end{table}
We will use the same notation for the norms and projections of the elements in $\fW^\mathrm{ex}$ as in \eqref{eq:RS-norm} and \eqref{eq:RS-projections}.
\mathfrak{s}ubsection{A structure group}
\lambdabel{sec:structure-group}
We introduce the set of basis elements $\mathcal{C}W^\mathrm{ex}_+$, containing $\symbol{X}_i$ for $i = 1, 2, 3$, and the elements of $\mathcal{C}W^\mathrm{ex}$ of the form $\mathcal{C}I(\tau)$ and $\mathcal{C}E(\bar \tau)$, for $\tau \neq \blue\Xi$. Then we define $\mathcal{C}T^\mathrm{ex}_+$ to be the free commutative algebra generated by the elements of $\mathcal{C}W^\mathrm{ex}_+$. The linear map $\mathbf{D}elta : \mathcal{C}T^\mathrm{ex} \to \mathcal{C}T^\mathrm{ex} \otimes \mathcal{C}T^\mathrm{ex}_+$ is defined by \eqref{eqs:coproduct} and
\betagin{align}
\mathbf{D}elta \mathcal{C}E(\bar \tau) &= (\mathcal{C}E \otimes I) \mathbf{D}elta \bar \tau, \lambdabel{eq:Delta5}
\end{align}
for the respective elements $\bar \tau \in \mathcal{C}W^\mathrm{ex}$. The action of $\mathbf{D}elta$ on the elements of $\mathcal{C}W$ is provided in Table~\ref{tab:Delta-cont}, while its action on the remaining elements of $\mathcal{C}W^\mathrm{ex}$ is trivial and is provided in Table~\ref{tab:Delta}.
\betagin{table}[h]
\centering
\betagingroup
\mathfrak{s}etlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.2}
\mathfrak{s}ubfloat{
\betagin{tabular}{c}
$\mathbf{D}elta\<4b> = \<4b> \otimes \symbol{\1}$ \\
$\mathbf{D}elta\<5b> = \<5b> \otimes \symbol{\1}$
\end{tabular} \hspace{0.3cm}}
\mathfrak{s}ubfloat{\hspace{0.5cm}
\betagin{tabular}{c}
$\mathbf{D}elta\<40eb> = \<40eb> \otimes \symbol{\1}$ \\
$\mathbf{D}elta\<50eb> = \<50eb> \otimes \symbol{\1}$
\end{tabular}}
\endgroup
\caption{The image of the operator $\mathbf{D}elta$ for the elements in $\mathcal{C}W^\mathrm{ex}$ not provided in Table~\ref{tab:Delta-cont}. \lambdabel{tab:Delta}}
\end{table}
The structure group $\mathcal{C}G^\mathrm{ex}$ is defined as $\mathcal{C}G^\mathrm{ex} := \{\Gamma_{\!f} : f \in \mathcal{C}G^\mathrm{ex}_+\}$, where $\Gamma_{\!f}$ is given by \eqref{eq:Gamma-general} and $\mathcal{C}G^\mathrm{ex}_+$ contains all linear functionals $f : \mathcal{C}T^\mathrm{ex}_+ \to \R$ satisfying $f(\tau \bar \tau) = f(\tau) f(\bar \tau)$ for $\tau, \bar \tau \in \mathcal{C}T^\mathrm{ex}_+$. One can readily see that the elements of $\mathcal{C}G^\mathrm{ex}$ act on $\mathcal{C}W$ as described in Table~\ref{tab:linear_transformations}, and they act on the other elements of $\mathcal{C}W^\mathrm{ex}$ as the identity.
We will use the framework of \cite{erhard2017discretisation} to work with the regularity structure $\mathscr{T}^\mathrm{ex} = (\mathcal{C}A^\mathrm{ex}, \mathcal{C}T^\mathrm{ex}, \mathcal{C}G^\mathrm{ex})$ just introduced on the discrete lattice $\mathtt{L}ambda_{\mathtt{var}epsilon}$.
\mathfrak{s}ubsection{Discrete models}
\lambdabel{sec:DiscreteModels}
Let $\mathcal{C}B^2_\mathfrak{s}$ be the set of all \emph{test functions} $\mathtt{var}phi \in \mathcal{C}C^2(\R^4)$, compactly supported in the ball of radius $1$ around the origin (with respect to the parabolic distance $\| \bigcdot \|_\mathfrak{s}$ defined in Section~\ref{sec:notation}), and satisfying $\| \mathtt{var}phi \|_{\mathcal{C}C^2} \leq 1$. By analogy with \eqref{eq:rescaled-function}, for $\mathtt{var}phi \in \mathcal{C}B^2_\mathfrak{s}$, $\lambdambda \in (0, 1]$ and $(s, y) \in \R^4$ we define a rescaled and recentered function
\betagin{equation}\lambdabel{eq:rescaled-function-general}
\mathtt{var}phi^\lambdambda_{(s, y)} (t, x) := \mathfrak{r}ac{1}{\lambdambda^5} \mathtt{var}phi \Bigl( \mathfrak{r}ac{t-s}{\lambdambda^2}, \mathfrak{r}ac{x-y}{\lambdambda} \Bigr).
\end{equation}
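Note that the exponent $5 = 2 + 3$ in \eqref{eq:rescaled-function-general} reflects the parabolic scaling: the change of variables $t \mapsto s + \lambdambda^2 t$, $x \mapsto y + \lambdambda x$ has Jacobian $\lambdambda^5$, so that
\betagin{equation*}
\int_{\R^4} \mathtt{var}phi^\lambdambda_{(s, y)}(t, x)\, \mathrm{d} t\, \mathrm{d} x = \int_{\R^4} \mathtt{var}phi(t, x)\, \mathrm{d} t\, \mathrm{d} x,
\end{equation*}
i.e. the rescaling preserves the total mass of the test function.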
In the rest of the paper we use the time-space domain $D_\mathtt{var}epsilon := \R \times \mathtt{L}ambda_{\mathtt{var}epsilon}$, where the spatial grid $\mathtt{L}ambda_{\mathtt{var}epsilon}$ is defined in Section~\ref{sec:notation}.
In order to use the results of \cite{erhard2017discretisation}, we need to define a \emph{discretisation} for the regularity structure $\mathscr{T}^\mathrm{ex}$ according to \cite[Def.~2.1]{erhard2017discretisation}.
\betagin{definition}\lambdabel{def:discretisation}
\betagin{enumerate}[leftmargin=0.5cm]
\item We define the space $\mathcal{C}X_\mathtt{var}epsilon := L^\infty (D_\mathtt{var}epsilon)$, and we extend the operator \eqref{eq:iota} to $\iota_\mathtt{var}epsilon: \mathcal{C}X_\mathtt{var}epsilon \hookrightarrow L^\infty \bigl(\R, \mathscr{D}'(\R^3)\bigr)$ as
\[
(\iota_\mathtt{var}epsilon f)(t, \bigcdot) := \bigl(\iota_\mathtt{var}epsilon f(t)\bigr)(\bigcdot)
\]
for $f \in \mathcal{C}X_\mathtt{var}epsilon$. For any smooth compactly supported function $\mathtt{var}phi : \R^4 \to \R$ it will be convenient to write
\betagin{equation}\lambdabel{eq:iota-general}
(\iota_\mathtt{var}epsilon f)(\mathtt{var}phi) := \mathtt{var}epsilon^{3} \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} f(t, x) \mathtt{var}phi(t, x)\, \mathrm{d} t.
\end{equation}
\item For any $\zeta \in \R$, $z \in D_\mathtt{var}epsilon$ and a compact set $K_\mathfrak{e} \mathfrak{s}ubset \R^4$ of diameter at most $2 \mathfrak{e}$, we define the following seminorm for $f \in \mathcal{C}X_\mathtt{var}epsilon$:
\betagin{equation}\lambdabel{eq:local-norm}
\mathcal{V}ert f \mathcal{V}ert_{\zeta; K_\mathfrak{e}; z; \mathfrak{e}} := \mathfrak{e}^{-\zeta} \mathfrak{s}up_{\bar z \in K_\mathfrak{e} \cap D_\mathtt{var}epsilon} | f(\bar z) |.
\end{equation}
Obviously, this seminorm is local in the sense that if $f, g \in \mathcal{C}X_\mathtt{var}epsilon$ and $(\iota_\mathtt{var}epsilon f)(\mathtt{var}phi) = (\iota_\mathtt{var}epsilon g)(\mathtt{var}phi)$ for every $\mathtt{var}phi \in \mathcal{C}C^2$ supported in $K_\mathfrak{e}$, then $\mathcal{V}ert f - g \mathcal{V}ert_{\zeta; K_\mathfrak{e}; z; \mathfrak{e}} = 0$.
\item Let the function $\mathtt{var}phi^\mathfrak{e}_{z}$ be defined by \eqref{eq:rescaled-function-general} with $\lambdambda = \mathfrak{e}$, and let $[\mathtt{var}phi^\mathfrak{e}_{z}]$ denote its support. Then from the definition \eqref{eq:local-norm} we readily get the bound
\[
| (\iota_\mathtt{var}epsilon f)(\mathtt{var}phi^\mathfrak{e}_{z}) | \leq \biggl( \mathfrak{s}up_{\bar z \in [\mathtt{var}phi^\mathfrak{e}_{z}] \cap D_\mathtt{var}epsilon} | f(\bar z) | \biggr) \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} | \mathtt{var}phi^\mathfrak{e}_{z}(t, x) | \mathrm{d} t \lesssim \mathfrak{e}^\zeta \mathcal{V}ert f \mathcal{V}ert_{\zeta; [\mathtt{var}phi^\mathfrak{e}_z]; z; \mathfrak{e}},
\]
uniformly over $f \in \mathcal{C}X_\mathtt{var}epsilon$, $z \in D_\mathtt{var}epsilon$, $\zeta \in \R$, and $\mathtt{var}phi \in \mathcal{C}B^2_\mathfrak{s}$.
\item\lambdabel{it:discretisation-md} For any function $\Gamma : D_\mathtt{var}epsilon \times D_\mathtt{var}epsilon \to \mathcal{C}G^\mathrm{ex}$, any compact set $K \mathfrak{s}ubset \R^4$ and any $\zeta \in \R$, we define the following seminorm on the functions $f: D_\mathtt{var}epsilon \to \mathcal{C}T^\mathrm{ex}_{< \zeta}$:
\betagin{equation}\lambdabel{eq:choice_distr_norm}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta; K; \mathfrak{e}} := \mathfrak{s}up_{\mathfrak{s}ubstack{z, \bar z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z - \bar z \mathcal{V}ert_{\mathfrak{s}} \leq \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{e}^{m -\zeta} | f(z) - \Gamma_{\!z \bar z} f(\bar z) |_{m}.
\end{equation}
For a second function $\bar \Gamma : D_\mathtt{var}epsilon \times D_\mathtt{var}epsilon \to \mathcal{C}G^\mathrm{ex}$ and for $\bar f: D_\mathtt{var}epsilon \to \mathcal{C}T^\mathrm{ex}_{< \zeta}$ we also define
\betagin{equation}\lambdabel{eq:distance_mod_distr_eps}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar{f} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta; K; \mathfrak{e}} := \mathfrak{s}up_{\mathfrak{s}ubstack{z, \bar z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z - \bar z \mathcal{V}ert_\mathfrak{s} \leq \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{e}^{m -\zeta} \bigl| f(z) - \Gamma_{\!z \bar z} f(\bar z) - \bar{f}(z) + \bar{\Gamma}_{\!z \bar z} \bar{f}(\bar z) \bigr|_{m}.
\end{equation}
Both seminorms depend only on the values of $f$ and $\bar f$ in a neighbourhood of size $c \mathfrak{e}$ around $K$, for a fixed constant $c > 0$.
\end{enumerate}
\end{definition}
\betagin{remark}
The seminorms \eqref{eq:choice_distr_norm} and \eqref{eq:distance_mod_distr_eps} depend on the functions $\Gamma$ and $\bar \Gamma$. However, we prefer not to indicate this, in order to keep the notation lighter. The choice of these functions will always be clear from the context.
\end{remark}
\betagin{remark}\
Our definitions correspond to the ``semidiscrete'' case in \cite[Sec.~2]{erhard2017discretisation}.
\end{remark}
Following \cite[Def.~2.5]{erhard2017discretisation}, we can define a discrete model on the regularity structure~$\mathscr{T}^\mathrm{ex}$.
\betagin{definition}\lambdabel{def:model}
A \emph{discrete model} $(\mathbb{P}i^\gamma, \Gamma^\gamma)$ on the regularity structure $\mathscr{T}^\mathrm{ex}$ consists of a collection of maps $D_\mathtt{var}epsilon \ni z \mapsto \mathbb{P}i^\gamma_z \in \mathcal{C}L(\mathcal{C}T^\mathrm{ex}, \mathcal{C}X_\mathtt{var}epsilon)$ and $D_\mathtt{var}epsilon \times D_\mathtt{var}epsilon \ni (z, \bar z) \mapsto \Gamma^\gamma_{\!z \bar z} \in \mathcal{C}G^\mathrm{ex}$ with the following properties:
\betagin{enumerate}
\item $\Gamma^\gamma_{\!z z} = \mathrm{id}$ (where $\mathrm{id}$ is the identity operator), and $\Gamma^\gamma_{\!z \bar z} \Gamma^\gamma_{\!\bar z \tilde z} = \Gamma^\gamma_{\!z \tilde z}$ for all $z, \bar z, \tilde z \in D_\mathtt{var}epsilon$,
\item $\mathbb{P}i^\gamma_{\bar z} = \mathbb{P}i^\gamma_{z} \Gamma^\gamma_{\!z \bar z}$ for all $z, \bar z \in D_\mathtt{var}epsilon$.
\end{enumerate}
Furthermore, for any compact set $K \mathfrak{s}ubset \R^4$ the following bounds hold
\betagin{subequations}\lambdabel{eqs:model-bounds}
\betagin{equation}\lambdabel{eq:Pi-bounds}
\mathfrak{s}up_{\mathtt{var}phi \in \mathcal{C}B^2_\mathfrak{s}} \mathfrak{s}up_{z \in K \cap D_\mathtt{var}epsilon} \bigl| \bigl(\iota_\mathtt{var}epsilon \mathbb{P}i^\gamma_{z} \tau\bigr) (\mathtt{var}phi^\lambdambda_{z})\bigr| \lesssim \lambdambda^{|\tau|}, \qquad\qquad \mathfrak{s}up_{K_\mathfrak{e} \mathfrak{s}ubset K} \mathfrak{s}up_{z \in K \cap D_\mathtt{var}epsilon} \| \mathbb{P}i^\gamma_{z} \tau \|_{|\tau|; K_\mathfrak{e}; z; \mathfrak{e}} \lesssim 1,
\end{equation}
uniformly over $\lambdambda \in [\mathfrak{e}, 1]$ and $\tau \in \mathcal{C}W^\mathrm{ex}$, where the supremum in the second bound is over compact sets $K_\mathfrak{e} \mathfrak{s}ubset K$ with the diameter not exceeding $2\mathfrak{e}$. For the function $f^{\tau, \Gamma^\gamma}_{\bar z}(z) := \Gamma^\gamma_{\!z \bar z} \tau - \tau$ one has
\betagin{equation}\lambdabel{eq:Gamma-bounds}
| \Gamma^\gamma_{\!z \bar z} \tau |_m \lesssim \| z - \bar z \|_\mathfrak{s}^{|\tau| - m}, \qquad\qquad \mathfrak{s}up_{\bar z \in D_\mathtt{var}epsilon} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f^{\tau, \Gamma^\gamma}_{\bar z} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{|\tau|; K; \mathfrak{e}} \lesssim 1,
\end{equation}
\end{subequations}
uniformly over $\tau \in \mathcal{C}W^\mathrm{ex}$, $m < |\tau|$ and $z, \bar z \in K \cap D_\mathtt{var}epsilon$ such that $\| z - \bar z \|_\mathfrak{s} \in [\mathfrak{e}, 1]$. In the second bound in \eqref{eq:Gamma-bounds} we consider the seminorm \eqref{eq:choice_distr_norm} with respect to the map $\Gamma^\gamma$.
\end{definition}
\betagin{remark}
The first bounds in \eqref{eqs:model-bounds} control the model on the scale above $\mathfrak{e}$ similarly to continuous models in \cite{Regularity}, and the second bounds in \eqref{eqs:model-bounds} control the model on the scale below $\mathfrak{e}$.
\end{remark}
As we explained in Section~\ref{sec:model-space-cont}, we cannot define a model on the symbol $\blue\Xi$, because it corresponds to a distribution (a time derivative of the martingale) which is not an element of the space $\mathcal{C}X_\mathtt{var}epsilon$ introduced in Definition~\ref{def:discretisation}.
We denote by $\mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K}$ and $\mathcal{V}ert \Gamma^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K}$ the smallest proportionality constants such that the bounds \eqref{eq:Pi-bounds} and \eqref{eq:Gamma-bounds} hold respectively. Then for the model $Z^{\gamma} = (\mathbb{P}i^\gamma, \Gamma^\gamma)$ we set
\betagin{equation*}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{K} := \mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K} + \mathcal{V}ert \Gamma^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K}.
\end{equation*}
For a second model $\bar Z^{\gamma} = (\bar\mathbb{P}i^\gamma, \bar \Gamma^\gamma)$ we define the ``distance''
\betagin{equation*}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma}; \bar Z^{\gamma} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{K} := \mathcal{V}ert \mathbb{P}i^\gamma - \bar \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K} + \mathcal{V}ert \Gamma^\gamma; \bar \Gamma^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K},
\end{equation*}
where $\mathcal{V}ert \Gamma^\gamma; \bar \Gamma^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{K}$ is the smallest proportionality constant such that the following bounds hold
\betagin{equation*}
\| \bigl(\Gamma^\gamma_{\!z \bar z} - \bar \Gamma^\gamma_{\!z \bar z}\bigr) \tau \|_m \lesssim \| z - \bar z \|_\mathfrak{s}^{|\tau| - m}, \qquad \mathfrak{s}up_{\bar z \in D_\mathtt{var}epsilon} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f^{\tau, \Gamma^\gamma}_{\bar z}; f^{\tau, \bar \Gamma^\gamma}_{\bar z} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{|\tau|; K; \mathfrak{e}} \lesssim 1,
\end{equation*}
uniformly over the same quantities as in \eqref{eq:Gamma-bounds}, where in the second bound we consider the distance \eqref{eq:distance_mod_distr_eps} with respect to $\Gamma^\gamma$ and $\bar \Gamma^\gamma$.
\betagin{remark}\lambdabel{rem:model-no-K}
We will often work with models on the set $K = [-T, T] \times [-1,1]^3$. In this case we prefer to remove the set $K$ from the notation and write $\mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{T}$, $\mathcal{V}ert \Gamma^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{T}$, etc.
\end{remark}
\mathfrak{s}ubsection{Modelled distributions}
\lambdabel{sec:DiscreteModelledDistributions}
By analogy with \cite[Sec.~6]{Regularity}, we are going to define a weighted norm for $\mathcal{C}T^\mathrm{ex}$-valued functions with a weight at time $0$. For this we define the following quantities for $z, \bar z \in \R^4$:
\betagin{equation*}
\| z \|_0 := |t|^{\mathfrak{r}ac{1}{2}} \wedge 1, \qquad \| z, \bar z \|_0 := \| z \|_0 \wedge \| \bar z \|_0,
\end{equation*}
where $z = (t,x)$ with $t \in \R$. We also set $\| z, \bar z \|_\mathfrak{e} := \| z, \bar z \|_0 \vee \mathfrak{e}$.
For $\zeta, \eta \in \R$ and for a compact set $K \mathfrak{s}ubset \R^4$, we define in the context of Definition~\ref{def:discretisation}(\ref{it:discretisation-md}) the following quantities (see \cite[Eqs.~3.21, 3.22]{erhard2017discretisation}):
\betagin{equation}\lambdabel{eq:md-small-scale}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; K; \mathfrak{e}} := \mathfrak{s}up_{\mathfrak{s}ubstack{z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}} \leq \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) |_m}{\mathfrak{e}^{(\eta - m) \wedge 0}} + \mathfrak{s}up_{\mathfrak{s}ubstack{z, \bar z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z - \bar z \mathcal{V}ert_{\mathfrak{s}} \leq \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) - \Gamma_{\!z \bar z} f(\bar z) |_{m}}{\mathfrak{e}^{\zeta - m} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta}},
\end{equation}
and
\betagin{align}\lambdabel{eq:md-distance-small-scale}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; K; \mathfrak{e}} &:= \mathfrak{s}up_{\mathfrak{s}ubstack{z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}} \leq \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) - \bar f(z) |_m}{\mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}}^{(\eta - m) \wedge 0}} \\
&\qquad + \mathfrak{s}up_{\mathfrak{s}ubstack{z, \bar z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z - \bar z \mathcal{V}ert_{\mathfrak{s}} \leq \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) - \Gamma_{\!z \bar z} f(\bar z) - \bar f(z) + \bar \Gamma_{\!z \bar z} \bar f(\bar z) |_{m}}{\mathfrak{e}^{\zeta - m} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta}}. \nonumber
\end{align}
Let us now take a discrete model $Z^{\gamma} = (\mathbb{P}i^\gamma, \Gamma^\gamma)$. A \emph{discrete modelled distribution} is an element of the space $\mathcal{C}D^{\zeta, \eta}_\mathfrak{e}(\Gamma^\gamma)$, containing the maps $f: D_\mathtt{var}epsilon \to \mathcal{C}T^\mathrm{ex}_{< \zeta}$ such that, for any compact set $K \mathfrak{s}ubseteq \R^4$,
\betagin{align}\lambdabel{eq:def_dgamma_norm}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta, \eta; K} &:= \mathfrak{s}up_{\mathfrak{s}ubstack{z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}} > \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) |_m}{\mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}}^{(\eta - m) \wedge 0}} \\
&\qquad + \mathfrak{s}up_{\mathfrak{s}ubstack{z, \bar z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z - \bar z \mathcal{V}ert_{\mathfrak{s}} > \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) - \Gamma^\gamma_{\!z \bar z} f(\bar z) |_{m}}{\| z - \bar z \|_\mathfrak{s}^{\zeta - m} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta}} + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; K; \mathfrak{e}} < \infty, \nonumber
\end{align}
where the last term is defined by \eqref{eq:md-small-scale} via $\Gamma^\gamma$. Sometimes it will be convenient to write $\mathcal{C}D^{\zeta, \eta}_\mathfrak{e}(Z^{\gamma})$ for $\mathcal{C}D^{\zeta, \eta}_\mathfrak{e}(\Gamma^\gamma)$, and when the model is clear from the context we will omit it from the notation and will simply write $\mathcal{C}D^{\zeta, \eta}_\mathfrak{e}$. Observe that the first two terms in \eqref{eq:def_dgamma_norm} are the same as in the definition of the modelled distributions in \cite[Def.~6.2]{Regularity}, except that we look at the scale above $\mathfrak{e}$. The last term measures the regularity of $f$ on the scale below~$\mathfrak{e}$.
For another discrete model $\bar Z^{\gamma} = (\bar \mathbb{P}i^\gamma, \bar \Gamma^\gamma)$ and a modelled distribution $\bar{f} \in \mathcal{C}D^{\zeta, \eta}_\mathfrak{e}(\bar Z^{\gamma})$, we set
\betagin{equation*}
\betagin{aligned}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar{f} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta, \eta; K} &:= \mathfrak{s}up_{\mathfrak{s}ubstack{z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}} > \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) - \bar f(z) |_m}{\mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}}^{(\eta - m) \wedge 0}} \\
&\qquad + \mathfrak{s}up_{\mathfrak{s}ubstack{z, \bar z \in K \cap D_\mathtt{var}epsilon \\ \mathcal{V}ert z - \bar z \mathcal{V}ert_{\mathfrak{s}} > \mathfrak{e}}} \mathfrak{s}up_{m<\zeta} \mathfrak{r}ac{| f(z) - \Gamma^\gamma_{\!z \bar z} f(\bar z) - \bar f(z) + \bar \Gamma^\gamma_{\!z \bar z} \bar f(\bar z) |_{m}}{\| z - \bar z \|_\mathfrak{s}^{\zeta - m} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta}} + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; K; \mathfrak{e}},
\end{aligned}
\end{equation*}
where the last term is defined by \eqref{eq:md-distance-small-scale} via $\Gamma^\gamma$ and $\bar \Gamma^\gamma$.
\betagin{remark}\lambdabel{rem:T-space}
When we work on the compact set $K = [-T, T] \times [-1,1]^3$, we simply write $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta, \eta; T}$ and $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar{f} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta, \eta; T}$. The space of modelled distributions restricted to this set $K$ is denoted by $\mathcal{C}D^{\zeta, \eta}_{\mathfrak{e}, T}$.
\end{remark}
\mathfrak{s}ubsection{The reconstruction theorem}
For a discrete model $(\mathbb{P}i^\gamma, \Gamma^\gamma)$ and for a modelled distribution $f \in \mathcal{C}D^{\zeta, \eta}_\mathfrak{e}$, we would like to define a \emph{reconstruction map} $\mathcal{C}R^\gamma: \mathcal{C}D^{\zeta, \eta}_\mathfrak{e} \to \mathcal{C}X_\mathtt{var}epsilon$, which behaves around each point $z$ as $\mathbb{P}i^\gamma_{z} f(z)$. Following the idea of \cite[Def.~4.5]{HairerMatetski}, we define it as
\betagin{equation}\lambdabel{eq:def_rec_op}
(\mathcal{C}R^\gamma f)(z) := \bigl( \mathbb{P}i^\gamma_{z} f(z) \bigr) (z).
\end{equation}
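As a simple illustration, suppose that $f$ takes values in the polynomial sector, $f(z) = \mathfrak{s}um_{\ell} a_\ell(z) \symbol{X}^\ell$, and assume that the model acts on the polynomial symbols in the canonical way, $\bigl(\mathbb{P}i^\gamma_z \symbol{X}^\ell\bigr)(\bar z) = (\bar z - z)^\ell$ (this canonical action on polynomials is an assumption on the model, not part of Definition~\ref{def:model}). Then \eqref{eq:def_rec_op} gives
\betagin{equation*}
(\mathcal{C}R^\gamma f)(z) = \mathfrak{s}um_{\ell} a_\ell(z) \bigl(\mathbb{P}i^\gamma_z \symbol{X}^\ell\bigr)(z) = a_0(z),
\end{equation*}
so the reconstruction simply reads off the coefficient of $\symbol{\1}$.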
In the case $\eta = \zeta$, i.e. when there are no weights in the definition \eqref{eq:def_dgamma_norm}, we have the following ``\emph{reconstruction theorem},'' where we use the shorthand notation $\mathcal{C}D^{\zeta}_\mathfrak{e} := \mathcal{C}D^{\zeta, \zeta}_\mathfrak{e}$.
\betagin{proposition}
For a discrete model $(\mathbb{P}i^\gamma, \Gamma^\gamma)$, a modelled distribution $f \in \mathcal{C}D^{\zeta}_\mathfrak{e}(\Gamma^\gamma)$ with $\zeta > 0$, and a compact set $K \mathfrak{s}ubset \R^4$ one has
\betagin{equation}\lambdabel{eq:ReconThm1}
\bigl| \iota_\mathtt{var}epsilon \bigl( \mathcal{C}R^\gamma f - \mathbb{P}i^\gamma_z f(z) \bigr) (\mathtt{var}phi^\lambdambda_z) \bigr| \lesssim (\lambdambda \vee \mathfrak{e})^\zeta \mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar{K}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta; [\mathtt{var}phi^\lambdambda_z]},
\end{equation}
uniformly over $\mathtt{var}phi \in \mathcal{C}B^2_\mathfrak{s}$, $\lambdambda \in (0, 1]$, $z \in D_\mathtt{var}epsilon$, and $\mathfrak{e} \in (0, 1]$. Here, $\bar{K}$ is the $1$-fattening of $K$, $[\mathtt{var}phi^\lambdambda_z]$ is the support of $\mathtt{var}phi^\lambdambda_z$, and we used the map \eqref{eq:iota-general}.
Let $(\bar{\mathbb{P}i}^\gamma, \bar{\Gamma}^\gamma)$ be another discrete model with the respective reconstruction map $\bar{\mathcal{C}R}^\gamma$, defined by \eqref{eq:def_rec_op}. Then for any $\bar f \in \mathcal{C}D^{\zeta}_{\mathfrak{e}}(\bar \Gamma^\gamma)$ one has
\betagin{align}\lambdabel{eq:ReconThm2}
\bigl| \iota_\mathtt{var}epsilon \big( \mathcal{C}R^\gamma f - \mathbb{P}i^\gamma_z f(z) &- \bar{\mathcal{C}R}^\gamma \bar{f} + \bar{\mathbb{P}i}^\gamma_z \bar{f}(z) \big) (\mathtt{var}phi^\lambdambda_z) \bigr| \\
&\qquad \lesssim (\lambdambda \vee \mathfrak{e})^\zeta \Big( \mathcal{V}ert \bar{\mathbb{P}i}^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar K} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar{f} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta; [\mathtt{var}phi^\lambdambda_z]} + \mathcal{V}ert \mathbb{P}i^\gamma - \bar{\mathbb{P}i}^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar K} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta; [\mathtt{var}phi^\lambdambda_z]} \Big),\nonumber
\end{align}
uniformly over the same quantities as in \eqref{eq:ReconThm1}.
\end{proposition}
\betagin{proof}
For any compact set $K_\mathfrak{e} \mathfrak{s}ubset \R^4$ of diameter smaller than $2 \mathfrak{e}$ and for any $z \in D_\mathtt{var}epsilon$, from the properties of the model and modelled distribution we get
\betagin{equation}\lambdabel{eq:reconstruction-proof}
\betagin{aligned}
\bigl\mathcal{V}ert \mathcal{C}R^\gamma f - \mathbb{P}i^\gamma_z f(z) \bigr\mathcal{V}ert_{\zeta; K_\mathfrak{e}; z; \mathfrak{e}}
&= \mathfrak{e}^{-\zeta} \mathfrak{s}up_{\bar z \in K_\mathfrak{e} \cap D_\mathtt{var}epsilon} \bigl| \mathbb{P}i^\gamma_{\bar z} \big( f(\bar z) - \Gamma^\gamma_{\bar z z} f(z) \big)(\bar z) \bigr| \\
&\lesssim \mathfrak{s}up_{\beta < \zeta} \mathfrak{s}up_{\bar z \in K_\mathfrak{e} \cap D_\mathtt{var}epsilon} \mathfrak{e}^{\beta - \zeta} \mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar K_\mathfrak{e}} \bigl| f(\bar z) - \Gamma^\gamma_{\bar z z}f(z) \big|_\beta.
\end{aligned}
\end{equation}
Using \eqref{eq:md-small-scale}, the latter yields
\betagin{equation}\lambdabel{eq:small_scale_condition}
\bigl\mathcal{V}ert \mathcal{C}R^\gamma f - \mathbb{P}i^\gamma_z f(z) \bigr\mathcal{V}ert_{\zeta; K_\mathfrak{e}; z; \mathfrak{e}} \lesssim \mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar K_\mathfrak{e}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta; K_\mathfrak{e}; \mathfrak{e}}.
\end{equation}
Then \eqref{eq:ReconThm1} follows from \cite[Thm.~3.5]{erhard2017discretisation} and this bound.
The estimate \eqref{eq:ReconThm2} follows again from \cite[Thm.~3.5]{erhard2017discretisation} and from the following bound, which can be proved similarly to \eqref{eq:small_scale_condition},
\betagin{align*}
\bigl\| \mathcal{C}R^\gamma f - \mathbb{P}i^\gamma_z f(z) &- \bar{\mathcal{C}R}^\gamma \bar{f} + \bar{\mathbb{P}i}^\gamma_z \bar{f}(z) \bigr\|_{\zeta; K_\mathfrak{e}; z; \mathfrak{e}} \\
&\qquad\lesssim \mathcal{V}ert \bar{\mathbb{P}i}^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar{K}_\mathfrak{e}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar{f} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta; K_\mathfrak{e}; \mathfrak{e}} + \mathcal{V}ert \mathbb{P}i^\gamma - \bar{\mathbb{P}i}^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar{K}_\mathfrak{e}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta; K_\mathfrak{e}; \mathfrak{e}},
\end{align*}
uniformly over the involved quantities.
\end{proof}
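\betagin{remark}
As a simple sanity check of the definition \eqref{eq:def_rec_op} (an illustration only, not used later): if a modelled distribution has the form $f(z) = u(z) \symbol{\1}$ for a function $u$ on $D_\mathtt{var}epsilon$, and the model satisfies $\bigl(\mathbb{P}i^\gamma_{z} \symbol{\1}\bigr)(\bar z) = 1$ (as is the case for the model constructed in Section~\ref{sec:lift}), then
\betagin{equation*}
(\mathcal{C}R^\gamma f)(z) = \bigl( \mathbb{P}i^\gamma_{z} f(z) \bigr) (z) = u(z) \bigl( \mathbb{P}i^\gamma_{z} \symbol{\1} \bigr) (z) = u(z),
\end{equation*}
i.e. the reconstruction of a purely polynomial modelled distribution is the function itself.
\end{remark}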
In the same way, we can show that the reconstruction theorem \cite[Thm.~3.13]{erhard2017discretisation} holds in our setting. The required Assumptions~3.6 and 3.12 in \cite{erhard2017discretisation} follow readily from our definitions and from estimates similar to \eqref{eq:reconstruction-proof}. We prefer not to duplicate the full statement of this theorem, and only record the estimates which we are going to use later.
\betagin{proposition}\lambdabel{prop:ReconThm_v2}
In the described context, \cite[Thm.~3.13]{erhard2017discretisation} holds. In particular, let $(\mathbb{P}i^\gamma, \Gamma^\gamma)$ be a discrete model and let $f \in \mathcal{C}D^{\zeta, \eta}_\mathfrak{e}(\Gamma^\gamma)$ be a modelled distribution, taking values in a sector of regularity $\alphapha \leq 0$ and such that $\zeta > 0$, $\eta \leq \zeta$ and $\alphapha \wedge \eta > -2$. Then for any compact set $K \mathfrak{s}ubset \R^4$ one has
\betagin{equation*}
\bigl| \iota_\mathtt{var}epsilon \bigl( \mathcal{C}R^\gamma f \bigr) (\mathtt{var}phi^\lambdambda_z) \bigr| \lesssim (\lambdambda \vee \mathfrak{e})^{\alphapha \wedge \eta} \mathcal{V}ert \mathbb{P}i^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar{K}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta; [\mathtt{var}phi^\lambdambda_z]},
\end{equation*}
uniformly over the same quantities as in \eqref{eq:ReconThm1}.
For a second discrete model $(\bar{\mathbb{P}i}^\gamma, \bar{\Gamma}^\gamma)$ and for $\bar f \in \mathcal{C}D^{\zeta, \eta}_{\mathfrak{e}}(\bar \Gamma^\gamma)$ one has
\betagin{equation*}
\bigl| \iota_\mathtt{var}epsilon \bigl( \mathcal{C}R^\gamma f - \bar{\mathcal{C}R}^\gamma \bar f\bigr) (\mathtt{var}phi^\lambdambda_z) \bigr| \lesssim (\lambdambda \vee \mathfrak{e})^{\alphapha \wedge \eta} \Big( \mathcal{V}ert \bar{\mathbb{P}i}^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar K} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f; \bar{f} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta; [\mathtt{var}phi^\lambdambda_z]} + \mathcal{V}ert \mathbb{P}i^\gamma - \bar{\mathbb{P}i}^\gamma \mathcal{V}ert^{(\mathfrak{e})}_{\bar K} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert f \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{\zeta; [\mathtt{var}phi^\lambdambda_z]} \Big),
\end{equation*}
uniformly over the same quantities.
\end{proposition}
\mathfrak{s}ection{A renormalised lift of martingales}
\lambdabel{sec:lift}
Now we will construct a discrete model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}} = (\mathbb{P}i^{\gamma, \mathfrak{a}}, \Gamma^{\gamma, \mathfrak{a}})$ which will be used to write equation \eqref{eq:IsingKacEqn-new} on the regularity structure $\mathscr{T}^\mathrm{ex}$ (as in \cite{Regularity}, we call this model a ``lift'' of the random driving noise $\mathfrak{M}_{\gamma, \mathfrak{a}}$). For this, we are going to use the martingales from \eqref{eq:IsingKacEqn-new}, so that we have the a priori bounds on the solution provided by the stopping time \eqref{eq:tau}.
Since we have only a few basis elements in the regularity structure, we prefer to define $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ directly as a renormalised model, in contrast to \cite[Sec.~8.2]{Regularity}, where the renormalisation of a canonical lift was performed separately.
It will be convenient to use the following short notation:
\betagin{equation*}
\int_{D_\mathtt{var}epsilon} \mathtt{var}phi(z)\, \mathrm{d} z := \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mathtt{var}phi(t,x)\, \mathrm{d} t.
\end{equation*}
Throughout this section we will use the decomposition $\widetilde{G}^\gamma = \mywidetilde{\mathscr{K}}^\gamma + \mywidetilde{\mathscr{R}}^\gamma$ of the discrete kernel \eqref{eq:From-P-to-G-tilde}, defined in Appendix~\ref{sec:decompositions}.
\mathfrak{s}ubsection{Definition of the map $\mathbb{P}Pi^{\gamma, \mathfrak{a}}$}
\lambdabel{sec:PPi}
In order to define the model $(\mathbb{P}i^{\gamma, \mathfrak{a}}, \Gamma^{\gamma, \mathfrak{a}})$, we first introduce an auxiliary map $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \in \mathscr{L}(\mathcal{C}T^\mathrm{ex}, \mathcal{C}X_\mathtt{var}epsilon)$, and then we will use the results from \cite[Sec.~8.3]{Regularity}.
It will be convenient to extend the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}(t, x)$ to all $t \in \R$. For this, we denote by $\widetilde{X}_{\gamma, \mathfrak{a}}(t, x)$ an independent copy of $X_{\gamma, \mathfrak{a}}(t, x)$, defined in Section~\ref{sec:a-priori}. Then $\widetilde{X}_{\gamma, \mathfrak{a}}$ solves equation \eqref{eq:IsingKacEqn-periodic} driven by a martingale $\widetilde{\mathfrak{M}}_{\gamma, \mathfrak{a}}(t, x)$. We define the extension of $\mathfrak{M}_{\gamma, \mathfrak{a}}(t, x)$ to $t < 0$ as
\betagin{equation}\lambdabel{eq:martingale-extension}
\mathfrak{M}_{\gamma, \mathfrak{a}}(t, x) = \widetilde{\mathfrak{M}}_{\gamma, \mathfrak{a}}(-t, x).
\end{equation}
This extension does not affect equation \eqref{eq:IsingKacEqn-new} in any way; it is a technical trick which simplifies the formulas below. In particular, it allows us to define the time integrals in \eqref{eq:Pi-Xi} and later on the whole of $\R$ rather than on $\R_+$. In what follows, the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}(t, x)$ are extended periodically to $x \in \mathtt{L}ambda_{\mathtt{var}epsilon}$.
Using the map \eqref{eq:iota-general}, we start with making the following definition:
\betagin{equation}\lambdabel{eq:Pi-Xi}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \blue\Xi\bigr)(\mathtt{var}phi) = \mathfrak{r}ac{1}{\mathfrak{s}qrt 2}\mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mathtt{var}phi(t,x)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(t, x),
\end{equation}
for any smooth, compactly supported function $\mathtt{var}phi : \R^4 \to \R$, and for the processes $\mathfrak{M}_{\gamma, \mathfrak{a}}$ just introduced. The stochastic integral is taken with respect to the martingale $t \mapsto \mathfrak{M}_{\gamma, \mathfrak{a}}(t, x)$, and it is well defined in the Stieltjes sense since the function $\mathtt{var}phi$ is smooth and compactly supported. We need the factor $\mathfrak{r}ac{1}{\mathfrak{s}qrt 2}$ in order to have convergence of $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \blue\Xi$ to a white noise (as follows from \eqref{eq:M-a-variation}, the martingale $\mathfrak{M}_{\gamma, \mathfrak{a}}$ converges to a cylindrical Wiener process with diffusion $2$). For monomials we set
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \symbol{\1}\bigr) (t,x) = 1, \qquad \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \symbol{X}_i\bigr) (t,x) = x_i.
\end{equation*}
Furthermore, we use the kernel $\mywidetilde{\mathscr{K}}^\gamma$, defined in the beginning of this section, and set
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x) = \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mywidetilde{\mathscr{K}}^\gamma_{t-s}(x - y)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(s, y),
\end{equation*}
as well as
\betagin{equation}\lambdabel{eq:cherry}
\betagin{aligned}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<2b>\bigr)(t, x) &= \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x)^2 - \mathfrak{c}_\gamma - \mathfrak{c}_\gamma',\\
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<3b>\bigr)(t, x) &= \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b> \bigr)(t, x)^3 - 3 \mathfrak{c}_\gamma \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x),
\end{aligned}
\end{equation}
where
\betagin{equation}\lambdabel{eq:renorm-constant1}
\mathfrak{c}_\gamma := \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z)^2\, \mathrm{d} z
\end{equation}
is a diverging renormalisation constant (we show in Lemma~\ref{lem:renorm-constants} that the divergence speed is $\mathfrak{e}^{-1}$),
and
\betagin{equation}\lambdabel{eq:renorm-constant3}
\mathfrak{c}_{\gamma}' := - \beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \un{\mathfrak{C}}_\gamma \mathfrak{c}_\gamma
\end{equation}
is a renormalisation constant which is bounded uniformly in $\gamma$, as follows from Lemmas~\ref{lem:renorm-constants} and~\ref{lem:renorm-constant-underline}. In \eqref{eq:renorm-constant3} we used the constants $\beta$, $\mathtt{var}kappa_{\gamma, 3}$ and $\un{\mathfrak{C}}_\gamma$ defined in \eqref{eq:beta}, \eqref{eq:c-gamma-2} and \eqref{eq:c-under} respectively.
We prefer to separate the two renormalisation constants in \eqref{eq:cherry}, because they have different origins. More precisely, the constant $\mathfrak{c}_\gamma$ would be the same if the driving noise was Gaussian, while $\mathfrak{c}_{\gamma}'$ comes from the renormalisation of the non-linearity of the bracket process \eqref{eq:M-bracket}. The necessity of such renormalisation will be clear from Section~\ref{sec:second-symbol}.
Let $H_n : \R \times \R_+ \to \R$ be the $n$-th Hermite polynomial, defined for $n \in \N$ and a real constant $c > 0$ in the following recursive way:
\betagin{equation}\lambdabel{eq:def_Hermite}
H_1(u, c) = u, \qquad H_{n+1}(u, c) = u H_n(u, c) - c H'_{n}(u, c) ~\text{ for any }~ n \geq 1,
\end{equation}
with $H'_{n}$ denoting the derivative of the polynomial $H_{n}$ with respect to the variable $u$. In particular, the first several Hermite polynomials are $H_1(u, c) = u$, $H_2(u, c) = u^2 - c$, $H_3(u, c) = u^3 - 3 c u$, $H_4(u, c) = u^4 - 6 c u^2 + 3 c^2$ and $H_5(u, c) = u^5 - 10 c u^3 + 15 c^2 u$.
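To make the recursion \eqref{eq:def_Hermite} completely concrete, the following short script (an illustrative sketch only, playing no role in the arguments of this paper; it assumes the Python library \texttt{sympy}) reproduces the polynomials $H_1, \dotsc, H_5$ listed above symbolically.
\betagin{verbatim}
# Sketch: check the recursion H_{n+1}(u,c) = u*H_n(u,c) - c*dH_n/du from (eq:def_Hermite).
import sympy as sp

u, c = sp.symbols('u c')

def hermite(n):
    # H_1 = u; then apply the recursion n-1 times
    H = u
    for _ in range(n - 1):
        H = sp.expand(u * H - c * sp.diff(H, u))
    return H

for n in range(1, 6):
    print(n, hermite(n))
# The printed polynomials agree (up to term ordering) with H_1,...,H_5 listed above,
# e.g. for n = 4 one obtains u**4 - 6*c*u**2 + 3*c**2.
\end{verbatim}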
Observe then that we have the identities $\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<2b>\bigr)(t, x) = H_2 \bigl((\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>)(t, x), \mathfrak{c}_\gamma + \mathfrak{c}_\gamma'\bigr)$ and $\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<3b>\bigr)(t, x) = H_3\bigl((\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>)(t, x), \mathfrak{c}_\gamma\bigr)$, which correspond to the Wick renormalisation of models in the case of a Gaussian noise \cite[Sec.~10]{Regularity}. Hence, in the same spirit we can define $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<4b>$ and $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<5b>$ in terms of the Hermite polynomials:
\betagin{equation}\lambdabel{eq:model-Hermite}
\betagin{aligned}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<4b>\bigr)(t, x) &= H_4\bigl(( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>)(t, x), \mathfrak{c}_\gamma \bigr) \\
&= \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x)^4 - 6 \mathfrak{c}_\gamma \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x)^2 + 3 \mathfrak{c}_\gamma^2, \\
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<5b>\bigr)(t, x) &= H_5\bigl(( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>)(t, x), \mathfrak{c}_\gamma \bigr) \\
&= \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x)^5 - 10 \mathfrak{c}_\gamma \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x)^3 + 15 \mathfrak{c}_\gamma^2 \bigl( \mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr)(t, x).
\end{aligned}
\end{equation}
For the elements of the form $\tau \symbol{X}_i \in \mathcal{C}W^\mathrm{ex}$ we set
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \tau \symbol{X}_i\bigr) (t,x) = \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \tau\bigr) (t,x) \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \symbol{X}_i\bigr) (t,x).
\end{equation*}
For each element $\mathcal{C}E(\tau) \in \mathcal{C}W^\mathrm{ex}$ we define
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \mathcal{C}E(\tau)\bigr) (t,x) = \gamma^6 \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \tau\bigr) (t,x),
\end{equation*}
and for each element $\mathcal{C}I(\tau) \in \mathcal{C}W^\mathrm{ex}$ we set
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \mathcal{C}I(\tau)\bigr) (t,x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mywidetilde{\mathscr{K}}^\gamma_{t-s}(x - y) \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \tau\bigr) (s,y)\, \mathrm{d} s.
\end{equation*}
One can see that this recursive definition of the map $\mathbb{P}Pi^{\gamma, \mathfrak{a}}$ gives its action on all the elements from Tables~\ref{tab:symbols-cont} and \ref{tab:symbols}, except for the three diagrams $\<22b>$, $\<31b>$ and $\<32b>$. It remains to define the map $\mathbb{P}Pi^{\gamma, \mathfrak{a}}$ on these three elements. For the element $\<22b>$ we set
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<22b>\bigr) (t,x) = \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<20b>\bigr) (t,x) \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<2b>\bigr) (t,x) - \mathfrak{c}_\gamma'',
\end{equation*}
where
\betagin{equation}\lambdabel{eq:renorm-constant2}
\mathfrak{c}_\gamma'' := 2 \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z)\, \mywidetilde{\mathscr{K}}^\gamma(z_1)\, \mywidetilde{\mathscr{K}}^\gamma(z_2)\, \mywidetilde{\mathscr{K}}^\gamma(z_1 - z)\, \mywidetilde{\mathscr{K}}^\gamma(z_2 - z)\, \mathrm{d} z \mathrm{d} z_1 \mathrm{d} z_2
\end{equation}
is a new diverging renormalisation constant (we show in Section~\ref{sec:renormalisation} that the divergence order is $\log \mathfrak{e}$). Finally, we define
\betagin{align*}
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<31b>\bigr) (t,x) &= \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<30b>\bigr) (t,x) \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr) (t,x), \\
\bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<32b>\bigr) (t,x) &= \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<30b>\bigr) (t,x) \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<2b>\bigr) (t,x) - 3 \mathfrak{c}_\gamma'' \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \<1b>\bigr) (t,x).
\end{align*}
\mathfrak{s}ubsection{Definition of the model}
\lambdabel{sec:model-lift}
Having defined $\mathbb{P}Pi^{\gamma, \mathfrak{a}}$ on the basis elements of $\mathcal{C}W^\mathrm{ex}$, we extend it linearly, which yields the map $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \in \mathscr{L}(\mathcal{C}T^\mathrm{ex}, \mathcal{C}X_\mathtt{var}epsilon)$. As we pointed out above, we had to exclude the symbol $\blue\Xi$ from $\mathcal{C}T^\mathrm{ex}$, because the definition \eqref{eq:Pi-Xi} suggests that $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \blue\Xi$ does not belong to $\mathcal{C}X_\mathtt{var}epsilon$. A discrete model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}} = (\mathbb{P}i^{\gamma, \mathfrak{a}}, \Gamma^{\gamma, \mathfrak{a}})$ on $\mathscr{T}^\mathrm{ex}$ is then defined via this map $\mathbb{P}Pi^{\gamma, \mathfrak{a}}$ as in \cite[Sec.~8.3]{Regularity}. More precisely, we define
\betagin{equation*}
f^{\gamma, \mathfrak{a}}_z(\symbol{\1}) = - 1, \qquad f^{\gamma, \mathfrak{a}}_z(\symbol{X}_i) = - x_i, \qquad f^{\gamma, \mathfrak{a}}_z(\mathcal{C}I(\tau)) = - \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \mathcal{C}I(\tau)\bigr) (z) \quad\text{for}~ \tau \neq \blue\Xi.
\end{equation*}
We extend this function linearly to $f^{\gamma, \mathfrak{a}}_z : \mathcal{C}T^\mathrm{ex}_+ \to \R$, where $\mathcal{C}T^\mathrm{ex}_+$ is defined in Section~\ref{sec:structure-group}. This is a multiplicative map, i.e. $f^{\gamma, \mathfrak{a}}_z(\tau \bar \tau) = f^{\gamma, \mathfrak{a}}_z(\tau) f^{\gamma, \mathfrak{a}}_z(\bar \tau)$ for $\tau, \bar \tau \in \mathcal{C}T^\mathrm{ex}_+$, and we can use \eqref{eq:Gamma-general} to define
\betagin{equation*}
F^{\gamma, \mathfrak{a}}_{\!z} := \Gamma_{\!f^{\gamma, \mathfrak{a}}_z}.
\end{equation*}
Since $F^{\gamma, \mathfrak{a}}_{\!z}$ is an element of the group $\mathcal{C}G^\mathrm{ex}$, it has the inverse $(F^{\gamma, \mathfrak{a}}_{\!z})^{-1}$. Then the discrete model $(\mathbb{P}i^{\gamma, \mathfrak{a}}, \Gamma^{\gamma, \mathfrak{a}})$ is defined as
\betagin{equation}\lambdabel{eq:Pi-Gamma-lift}
\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau = \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \otimes f^{\gamma, \mathfrak{a}}_z\bigr) \mathbf{D}elta \tau, \qquad\qquad \Gamma^{\gamma, \mathfrak{a}}_{\! z \bar z} = (F^{\gamma, \mathfrak{a}}_{\!z})^{-1} \circ F^{\gamma, \mathfrak{a}}_{\!\bar z},
\end{equation}
where the operator $\mathbf{D}elta$ is defined in Section~\ref{sec:structure-group}. All the properties in Definition~\ref{def:model} follow from the definition of the model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$. However, showing that the bounds \eqref{eqs:model-bounds} hold uniformly in $\gamma > 0$ is non-trivial and we prove these bounds in Section~\ref{sec:convergence-of-models}.
Since the operator $\mathbf{D}elta$ is simple in our case, we can write the map $\mathbb{P}i^{\gamma, \mathfrak{a}}$ explicitly. Namely, we have $\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \symbol{\1}\bigr) (\bar z) = 1$ and $\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \symbol{X}_i\bigr) (\bar z) = \bar x_i - x_i$, for $ z = ( t, x)$ and $\bar z = (\bar t, \bar x)$. Using the same space-time points, we furthermore have
\betagin{equation}\lambdabel{eq:lift-hermite}
\betagin{aligned}
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<1b>\bigr)(\bar z) &= \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mywidetilde{\mathscr{K}}^\gamma_{\bar t-s}(\bar x - y)\, \mathrm{d}\mathfrak{M}_{\gamma, \mathfrak{a}}(s, y), \\
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<2b>\bigr)(\bar z) &= H_2\bigl((\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<1b>)(\bar z), \mathfrak{c}_\gamma + \mathfrak{c}_\gamma' \bigr), \\
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<1b>^n\bigr)(\bar z) &= H_n\bigl((\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<1b>)(\bar z), \mathfrak{c}_\gamma \bigr) \quad \text{for $n = 3, 4, 5$}.
\end{aligned}
\end{equation}
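Note that the right-hand sides in \eqref{eq:lift-hermite} do not depend on the base point $z$: comparing them with the definitions in Section~\ref{sec:PPi}, one sees that $\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau$ coincides with $\mathbb{P}Pi^{\gamma, \mathfrak{a}} \tau$ for all the symbols appearing there. The dependence on $z$ enters only through the symbols involving $\symbol{X}_i$ and $\mathcal{C}I(\tau)$, via the recentring in \eqref{eq:Pi-I} below.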
For $\tau \in \{\<2b>, \<3b>\}$ we have $\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau \symbol{X}_i\bigr) (\bar z) = \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau\bigr) (\bar z) \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \symbol{X}_i\bigr) (\bar z)$, and for $\tau \in \{\<4b>, \<5b>\}$ we have
\betagin{equation}\lambdabel{eq:Pi-E}
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \mathcal{C}E(\tau)\bigr) (\bar z) = \gamma^6 \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau\bigr) (\bar z).
\end{equation}
For the elements $\tau \in \{\<2b>, \<3b>\}$ the following formulas hold:
\betagin{equation}\lambdabel{eq:Pi-I}
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \mathcal{C}I(\tau)\bigr) (\bar z) = \int_{D_\mathtt{var}epsilon} \bigl(\mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde{z}) - \mywidetilde{\mathscr{K}}^\gamma(z - \tilde{z})\bigr) \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau\bigr) (\tilde{z})\, \mathrm{d} \tilde{z}.
\end{equation}
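In particular, evaluating \eqref{eq:Pi-I} at $\bar z = z$ gives $\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \mathcal{C}I(\tau)\bigr) (z) = 0$, which is consistent with the definition $f^{\gamma, \mathfrak{a}}_z(\mathcal{C}I(\tau)) = - \bigl(\mathbb{P}Pi^{\gamma, \mathfrak{a}} \mathcal{C}I(\tau)\bigr) (z)$ above: the subtraction of $\mywidetilde{\mathscr{K}}^\gamma(z - \tilde{z})$ inside the integral is exactly this recentring.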
Finally, we have the identities
\betagin{align}
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<22b>\bigr) (\bar z) &= \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<2b>\bigr) (\bar z)\, \int_{D_\mathtt{var}epsilon} \bigl(\mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde{z}) - \mywidetilde{\mathscr{K}}^\gamma(z - \tilde{z})\bigr) \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<2b>\bigr) (\tilde{z})\, \mathrm{d} \tilde{z} - \mathfrak{c}_\gamma'', \nonumber\\
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<31b>\bigr) (\bar z) &= \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<1b>\bigr) (\bar z)\, \int_{D_\mathtt{var}epsilon} \bigl(\mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde{z}) - \mywidetilde{\mathscr{K}}^\gamma(z - \tilde{z})\bigr) \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<3b>\bigr) (\tilde{z})\, \mathrm{d} \tilde{z}, \lambdabel{eq:Pi-rest} \\
\bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<32b>\bigr) (\bar z) &= \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<2b>\bigr) (\bar z)\, \int_{D_\mathtt{var}epsilon} \bigl(\mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde{z}) - \mywidetilde{\mathscr{K}}^\gamma(z - \tilde{z})\bigr) \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<3b>\bigr) (\tilde{z})\, \mathrm{d} \tilde{z} - 3 \mathfrak{c}_\gamma'' \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \<1b>\bigr) (\bar z).\nonumber
\end{align}
Once the map $\mathbb{P}i^{\gamma, \mathfrak{a}}$ is defined, we can also write the map $\Gamma^{\gamma, \mathfrak{a}}$ explicitly. The latter can be easily obtained from the identity $\mathbb{P}i^{\gamma, \mathfrak{a}}_{\bar z} = \mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z}$, which is a part of Definition~\ref{def:model}. Namely, for fixed $z, \bar z \in D_\mathtt{var}epsilon$, we have that $\Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z}$ is a linear map on $\mathcal{C}T^\mathrm{ex}$, whose action on the elements of $\mathcal{C}W^\mathrm{ex}$ is given in Table~\ref{tab:linear_transformations} with the constants
\betagin{equation}\lambdabel{eq:Gamma-lift}
a_i = - \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \symbol{X}_i\bigr) (\bar z), \qquad b = - \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \<20b>\bigr)(\bar z), \qquad c = - \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \<30b>\bigr)(\bar z).
\end{equation}
\betagin{remark}\lambdabel{rem:positive-vanish}
From the definition of the discrete model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ and the definition of the respective reconstruction map $\mathcal{C}R^{\gamma, \mathfrak{a}}$ in \eqref{eq:def_rec_op}, we can see that $\mathcal{C}R^{\gamma, \mathfrak{a}} \tau \equiv 0$ if $|\tau| > 0$.
\end{remark}
\betagin{remark}\lambdabel{rem:Eps-reconstruct}
For an element $\mathcal{C}E(\tau)$ we obviously have $\mathcal{C}R^{\gamma, \mathfrak{a}} \mathcal{C}E(\tau) = \gamma^6 \mathcal{C}R^{\gamma, \mathfrak{a}} \tau$.
\end{remark}
\betagin{remark}\lambdabel{rem:extension-to-Xi}
We note that in \eqref{eq:Pi-Xi} we defined the action of the map $\mathbb{P}Pi^{\gamma, \mathfrak{a}}$ also on the symbol $\blue\Xi$. This allows us to extend the maps \eqref{eq:Pi-Gamma-lift} to this symbol by setting
\betagin{equation*}
\Gamma^{\gamma, \mathfrak{a}}_{\! z \bar z} \blue\Xi = \blue\Xi, \qquad \qquad \iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \blue\Xi\bigr)(\mathtt{var}phi) = \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mathtt{var}phi(t,x)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(t, x),
\end{equation*}
for any smooth, compactly supported function $\mathtt{var}phi : \R^4 \to \R$, and for the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}$ as in \eqref{eq:Pi-Xi}. We see however that $\mathbb{P}i^{\gamma, \mathfrak{a}}_z \blue\Xi$ is not a function, which explains why we excluded the symbol $\blue\Xi$ from the domain of discrete models in Definition~\ref{def:model}. We can also extend the reconstruction map as
\betagin{equation*}
\iota_\mathtt{var}epsilon \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \blue\Xi\bigr)(\mathtt{var}phi) = \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mathtt{var}phi(t,x)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(t, x).
\end{equation*}
\end{remark}
\mathfrak{s}ubsection{Asymptotics of the renormalisation constants}
We now determine the precise rates of divergence of the renormalisation constants introduced above.
\betagin{lemma}\lambdabel{lem:renorm-constants}
Let $\mathfrak{c}_\gamma$ and $\mathfrak{c}_\gamma''$ be defined in \eqref{eq:renorm-constant1} and \eqref{eq:renorm-constant2} respectively. The constants $\mathfrak{c}_\gamma^{(2)}$ and $\mathfrak{c}_\gamma^{(1)}$ are well-defined by \eqref{eq:renorm-constants-main} for all $\gamma > 0$ small enough. Moreover, $\mathfrak{c}_\gamma^{(2)} \mathfrak{s}igmam \mathfrak{e}^{-1}$ and $\mathfrak{c}_\gamma^{(1)} \mathfrak{s}igmam \log \mathfrak{e}$, and the expressions $\mathfrak{c}_\gamma - \mathfrak{r}ac{1}{2}\mathfrak{c}_\gamma^{(2)}$ and $\mathfrak{c}_\gamma'' - \mathfrak{r}ac{1}{4} \mathfrak{c}_\gamma^{(1)}$ converge as $\gamma \to 0$. This in particular implies the asymptotics of the renormalisation constant $\mathfrak{C}_\gamma$ stated in Theorem~\ref{thm:main}.
\end{lemma}
\betagin{proof}
The kernel $\mywidetilde{\mathscr{K}}^\gamma$, involved in the definitions \eqref{eq:renorm-constant1} and \eqref{eq:renorm-constant2}, is supported in a ball of radius $c \geq 1$ (see Appendix~\ref{sec:decompositions}). Let $D_{c, \mathtt{var}epsilon} := [0, c] \times \T_{\mathtt{var}epsilon}^3 $. Then, without any harm, we can replace the integration domains $D_\mathtt{var}epsilon$ by $D_{c, \mathtt{var}epsilon}$ in these definitions.
We define new constants
\betagin{subequations}\lambdabel{eqs:renorm-constants-new}
\betagin{align}
\tilde{\mathfrak{c}}_\gamma &= \int_{D_{c, \mathtt{var}epsilon}} \widetilde{P}^\gamma(z)^2 \mathrm{d} z, \lambdabel{eq:renorm-constant1-new}\\
\tilde{\mathfrak{c}}_\gamma'' &= 2 \int_{D_{c, \mathtt{var}epsilon}} \int_{D_{c, \mathtt{var}epsilon}} \int_{D_{c, \mathtt{var}epsilon}} \widetilde{P}^\gamma(z) \widetilde{P}^\gamma(z_1) \widetilde{P}^\gamma(z_2) \widetilde{P}^\gamma(z_1 - z) \widetilde{P}^\gamma(z_2 - z) \mathrm{d} z \mathrm{d} z_1 \mathrm{d} z_2. \lambdabel{eq:renorm-constant2-new}
\end{align}
\end{subequations}
We note that if we replace at least one instance of the singular kernel $\mywidetilde{\mathscr{K}}^\gamma$ by a smooth function in the definitions of the renormalisation constants \eqref{eq:renorm-constant1} and \eqref{eq:renorm-constant2}, then we obtain constants which converge as $\gamma \to 0$. This follows from the properties of $\mywidetilde{\mathscr{K}}^\gamma$ stated in Appendix~\ref{sec:decompositions}. Since $\mywidetilde{\mathscr{K}}^\gamma$ is the singular part of the discrete heat kernel, this implies that the limits $\lim_{\gamma \to 0} (\tilde\mathfrak{c}_\gamma - \mathfrak{c}_\gamma)$ and $\lim_{\gamma \to 0} (\tilde\mathfrak{c}_\gamma'' - \mathfrak{c}_\gamma'')$ exist and are finite. Hence, to prove this lemma, we need to show that the required asymptotic behaviours hold if we replace $\mathfrak{c}_\gamma$ and $\mathfrak{c}_\gamma''$ by $\tilde{\mathfrak{c}}_\gamma$ and $\tilde{\mathfrak{c}}_\gamma''$ respectively.
It will be convenient to write the constants \eqref{eqs:renorm-constants-new} in a different form. Applying Parseval's identity \eqref{eq:Parseval} in the spatial variable in \eqref{eq:renorm-constant1-new}, we get
\betagin{equation*}
\tilde{\mathfrak{c}}_\gamma = \int_0^c \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} \widetilde{P}^\gamma_t(x)^2 \mathrm{d} t = \mathfrak{r}ac{1}{8} \int_0^c \mathfrak{s}um_{|\omega|_{\mathfrak{s}igmanfty} \leq N} \mathrm{ex}p \Bigl( 2 \mathtt{var}kappa_{\gamma, 3}^2 \big( \widehat{K}_\gamma(\omega) - 1 \big) \mathfrak{r}ac{t}{\alphapha} \Bigr) |\widehat{K}_\gamma(\omega)|^2 \mathrm{d} t,
\end{equation*}
where we used the Fourier transform \eqref{eq:tildeP}. From the properties of the function \eqref{eq:K-gamma} we have $\mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} K_\gamma(x) = 1$, which yields $\widehat{K}_\gamma(0) = 1$. Furthermore, from Lemma~\ref{lem:Kg} we can conclude that there exists $\gamma_0 > 0$ such that $\widehat{K}_\gamma(\omega) \neq 1$ for $\gamma < \gamma_0$ and all $\omega \in \Z^3$ satisfying $0 < |\omega|_{\mathfrak{s}igmanfty} \leq N$. Then we have
\betagin{align*}
\tilde{\mathfrak{c}}_\gamma &= \mathfrak{r}ac{c}{8} + \mathfrak{r}ac{1}{8} \int_0^c \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathrm{ex}p \Bigl( 2 \mathtt{var}kappa_{\gamma, 3}^2 \big( \widehat{K}_\gamma(\omega) - 1 \big) \mathfrak{r}ac{t}{\alphapha} \Bigr) |\widehat{K}_\gamma(\omega)|^2 \mathrm{d} t \\
&= \mathfrak{r}ac{c}{8} + \mathfrak{r}ac{1}{16} \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{r}ac{\alphapha |\widehat{K}_\gamma(\omega)|^2}{\mathtt{var}kappa_{\gamma, 3}^2 \big(1 - \widehat{K}_\gamma(\omega) \big)} \biggl(1 - \mathrm{ex}p \Bigl( 2 \mathtt{var}kappa_{\gamma, 3}^2 \big( \widehat{K}_\gamma(\omega) - 1 \big) \mathfrak{r}ac{c}{\alphapha} \Bigr)\biggr).
\end{align*}
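Here, for every $\omega$ with $0 < |\omega|_{\mathfrak{s}igmanfty} \leq N$ we simply used the elementary identity
\betagin{equation*}
\int_0^c \mathrm{ex}p ( - q t )\, \mathrm{d} t = \mathfrak{r}ac{1 - \mathrm{ex}p( - q c)}{q}, \qquad q := \mathfrak{r}ac{2 \mathtt{var}kappa_{\gamma, 3}^2 \big( 1 - \widehat{K}_\gamma(\omega) \big)}{\alphapha},
\end{equation*}
which is valid for any $q \neq 0$.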
Let us write $\tilde{\mathfrak{c}}_\gamma = \mathfrak{r}ac{c}{8} + \tilde{\mathfrak{c}}^{(2)}_\gamma - \tilde{\mathfrak{c}}^{(1)}_\gamma$, where
\betagin{align*}
\tilde{\mathfrak{c}}^{(2)}_\gamma &= \mathfrak{r}ac{1}{16} \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{r}ac{\alphapha |\widehat{K}_\gamma(\omega)|^2}{\mathtt{var}kappa_{\gamma, 3}^2 \big(1 - \widehat{K}_\gamma(\omega) \big)}, \\
\tilde{\mathfrak{c}}^{(1)}_\gamma &= \mathfrak{r}ac{1}{16} \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{r}ac{\alphapha |\widehat{K}_\gamma(\omega)|^2}{\mathtt{var}kappa_{\gamma, 3}^2 \big(1 - \widehat{K}_\gamma(\omega) \big)}\mathrm{ex}p \Bigl( 2 \mathtt{var}kappa_{\gamma, 3}^2 \big( \widehat{K}_\gamma(\omega) - 1 \big) \mathfrak{r}ac{c}{\alphapha} \Bigr).
\end{align*}
From Lemma~\ref{lem:Kg} for $0 < \gamma < \gamma_0$ we have the bounds $1 -\widehat{K}_\gamma(\omega) \geq C_1 \big( |\gamma^3 \omega|^2 \wedge 1 \big)$, $|\widehat{K}_\gamma(\omega)| \leq 1$ for $|\omega| \leq \gamma^{-3}$, and $|\widehat{K}_\gamma(\omega)| \leq C_2 |\gamma^3 \omega|^{-k}$ for $|\omega| \geq \gamma^{-3}$ and for any $k \in \N$. Using these bounds and \eqref{eq:c-gamma-2}, we conclude that $\tilde{\mathfrak{c}}^{(2)}_\gamma$ diverges as $\gamma \to 0$ at rate $\mathfrak{e}^{-1}$. Moreover, the constant $\mathtt{var}kappa_{\gamma, 3}$ in the definition of $\tilde{\mathfrak{c}}^{(2)}_\gamma$ can be replaced by $1$, which produces a convergent error, i.e. $\tilde{\mathfrak{c}}^{(2)}_\gamma - \mathfrak{r}ac{1}{2} \mathfrak{c}^{(2)}_\gamma$ has a finite limit as $\gamma \to 0$, where the constant $\mathfrak{c}_\gamma^{(2)}$ is defined in \eqref{eq:renorm-constants-main}. Similarly, we can conclude that $\tilde{\mathfrak{c}}^{(1)}_\gamma$ is bounded uniformly in $0 < \gamma < \gamma_0$, and moreover it converges as $\gamma \to 0$. Thus, $\mathfrak{c}_\gamma - \mathfrak{r}ac{1}{2}\mathfrak{c}_\gamma^{(2)}$ converges as $\gamma \to 0$, which proves the claims of the lemma involving $\mathfrak{c}_\gamma$ and $\mathfrak{c}_\gamma^{(2)}$.
The constant $\mathfrak{c}_\gamma''$ is analysed in a similar way. Namely, we can show that
\betagin{equation*}
\mathfrak{c}_\gamma'' - \mathfrak{r}ac{1}{32} \mathfrak{s}um_{0 < |\omega_1|_\mathfrak{s}igmanfty, |\omega_2|_\mathfrak{s}igmanfty \leq N} \mathfrak{r}ac{\alphapha^3 |\widehat{K}_\gamma(\omega_1)|^2 |\widehat{K}_\gamma(\omega_2)|^2}{(1 - \widehat{K}_\gamma(\omega_1)) (1 - \widehat{K}_\gamma(\omega_2))} \mathfrak{r}ac{ \widehat{K}_\gamma(\omega_1 + \omega_2)}{1 - \widehat{K}_\gamma(\omega_1) - \widehat{K}_\gamma(\omega_2) + \widehat{K}_\gamma(\omega_1 + \omega_2)}
\end{equation*}
converges as $\gamma \to 0$. This expression equals $\mathfrak{c}_\gamma'' - \mathfrak{r}ac{1}{4} \mathfrak{c}_\gamma^{(1)}$, and the double sum diverges at rate $\log \mathfrak{e}$, which gives the claimed asymptotic behaviour of $\mathfrak{c}_\gamma''$ and $\mathfrak{c}_\gamma^{(1)}$.
\end{proof}
In a similar way we can study the asymptotic behaviour of the renormalisation constant \eqref{eq:c-under}.
\betagin{lemma}\lambdabel{lem:renorm-constant-underline}
The constant \eqref{eq:c-under} satisfies $|\un{\mathfrak{C}}_\gamma| \lesssim \mathfrak{e}^{-1}$.
\end{lemma}
\betagin{proof}
Applying identities \eqref{eq:Parseval} and \eqref{eq:Fourier-of-product} to \eqref{eq:c-under} we get
\betagin{equation*}
\un{\mathfrak{C}}_\gamma = \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 2}}{4} \int_0^\infty \mathfrak{s}um_{|\omega|_{\mathfrak{s}igmanfty} \leq N} \mathrm{ex}p \Bigl( 2 \mathtt{var}kappa_{\gamma, 3}^2 \big( \widehat{K}_\gamma(\omega) - 1 \big) \mathfrak{r}ac{t}{\alphapha} \Bigr) \widehat{\un{K}}_\gamma(\omega) \widehat{K}_\gamma(\omega)\, \mathrm{d} t,
\end{equation*}
where we used the Fourier transform \eqref{eq:tildeP}. As in the proof of Lemma~\ref{lem:renorm-constants}, for all $\gamma > 0$ small enough we can compute the integral, which yields
\betagin{equation}\lambdabel{eq:un-fc-bound}
\un{\mathfrak{C}}_\gamma = \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 2}}{8} \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{r}ac{\alphapha \widehat{\un{K}}_\gamma(\omega) \widehat{K}_\gamma(\omega)}{\mathtt{var}kappa_{\gamma, 3}^2 \big(1 - \widehat{K}_\gamma(\omega) \big)}.
\end{equation}
From Lemma~\ref{lem:Kg} for all $\gamma > 0$ small enough we have $1 -\widehat{K}_\gamma(\omega) \geq C_1 \big( |\gamma^3 \omega|^2 \wedge 1 \big)$, $|\widehat{K}_\gamma(\omega)| \leq 1$ for $|\omega| \leq \gamma^{-3}$, and $|\widehat{K}_\gamma(\omega)| \leq C_2 |\gamma^3 \omega|^{-k}$ for $|\omega| \geq \gamma^{-3}$ and for any $k \in \N$. Since the kernel $\un{K}_\gamma$ has the same properties as $K_\gamma$, except that it is rescaled by $\gamma^{3 + \un\kappappa}$ rather than $\gamma^3$, we have the respective bounds $|\widehat{\un{K}}_\gamma(\omega)| \leq C_3$ for $|\omega| \leq \gamma^{-3 - \un \kappappa}$, and $|\widehat{\un{K}}_\gamma(\omega)| \leq C_4 |\gamma^{3 + \un\kappappa} \omega|^{-k}$ for $|\omega| \geq \gamma^{-3 - \un \kappappa}$ and for any $k \in \N$. Then the part of the sum in \eqref{eq:un-fc-bound} running over $0 < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^{-3}$ is bounded by a constant multiple of
\betagin{equation*}
\int_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^{-3}} \alphapha |\gamma^3 \omega|^{-2} \mathrm{d} \omega \lesssim \gamma^{-3},
\end{equation*}
where we made use of \eqref{eq:c-gamma-2}. The part of the sum running over $\gamma^{-3} < |\omega|_{\mathfrak{s}igmanfty} \leq N$ is bounded by a constant times
\betagin{equation*}
\int_{\gamma^{-3} < |\omega|_{\mathfrak{s}igmanfty} \leq N} \alphapha |\gamma^3 \omega|^{-k} \mathrm{d} \omega \lesssim \gamma^{3}.
\end{equation*}
Hence, we have the required bound $|\un{\mathfrak{C}}_\gamma| \lesssim \mathfrak{e}^{-1}$.
\end{proof}
\mathfrak{s}ection{Properties of the martingales and auxiliary results}
\lambdabel{sec:martingales}
In this section we collect several results which will be used to prove moment bounds for the discrete models constructed above. We first show that the martingales $\bigl(\mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x)\bigr)_{x \in \T_{\mathtt{var}epsilon}^3}$ satisfy Assumption~1 in \cite{Martingales}.
\mathfrak{s}ubsection{Properties of the martingales}
\lambdabel{sec:martingales-properties}
The required properties of the predictable quadratic covariations, stated in Assumption~1(1) in \cite{Martingales}, follow from \eqref{eq:M-a-variation} and \eqref{eq:M-bracket}: $\big\lambdangle \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x), \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x') \big\rangle_t = 0$ for $x \neq x'$, and in the case $x = x'$ we have
\betagin{equation}\lambdabel{eq:M-gamma-bracket}
\big\lambdangle \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x) \big\rangle_t = \mathtt{var}epsilon^{-3} \int_0^t \mathbf{C}_{\gamma, \mathfrak{a}} (s,x)\, \mathrm{d} s,
\end{equation}
with an adapted process $t \mapsto \mathbf{C}_{\gamma, \mathfrak{a}} (t,x)$, given by
\betagin{equation}\lambdabel{eq:bC}
\mathbf{C}_{\gamma, \mathfrak{a}} (t,x) :=
\betagin{cases}
2 \mathtt{var}kappa_{\gamma, 2} \Bigl( 1 - \mathfrak{s}igmagma \bigl(\mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon}\bigr) \tanh \bigl( \beta \mathrm{d}eltalta X_\gamma(t, x) \bigr)\Bigr) &\text{for}~~ t < \tau_{\gamma, \mathfrak{a}}, \\
2 \mathtt{var}kappa_{\gamma, 2} \Bigl( 1 - \mathrm{d}eltalta \mathfrak{s}igmagma'_{\gamma, \mathfrak{a}} \bigl(\mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon}\bigr) X'_{\gamma, \mathfrak{a}}(t, x) \Bigr) &\text{for}~~ t \geq \tau_{\gamma, \mathfrak{a}},
\end{cases}
\end{equation}
where $X'_{\gamma, \mathfrak{a}}$ is defined as in \eqref{eq:X-gamma} via the process $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}$ and where the constant $\mathtt{var}kappa_{\gamma, 2}$, which is close to $1$, was introduced in \eqref{constant:isingkac_one}. We observe that the inequality $|\mathbf{C}_{\gamma, \mathfrak{a}} (s,x)| \leq 2$ holds uniformly over $\gamma \in (0,1)$ and $x \in \T_{\mathtt{var}epsilon}^3$.
Assumption~1(2) in \cite{Martingales} follows readily from the definition of the martingales, because every time a spin of the Ising-Kac model flips, only one of the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x)$ changes its value, while the others stay unchanged.
We see that the martingale $\mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x)$, for any fixed $x \in \T_{\mathtt{var}epsilon}^3$, has jumps of size $2 \gamma^{-3}$, because the martingale $\mathfrak{m}_{\gamma}(\bigcdot, x)$ from \eqref{eq:equation-for-sigma} has jumps of size $2$. Therefore, given that $\mathtt{var}epsilon \approx \gamma^4$, Assumption~1(3) in \cite{Martingales} holds with any value of the constant $\mathbf{k_1}$ bigger than $\mathfrak{r}ac{3}{4}$.
For a c\`{a}dl\`{a}g process $f$, we denote by $f(t^-)$ its left limit at time $t$ and we define the jump size at time $t$ as
\betagin{equation}\lambdabel{eq:jump}
\mathbf{D}elta_t f := f(t) - f(t^-).
\end{equation}
For a bounded set $A \mathfrak{s}ubset \R_+$ we then set $\mathbbm{n}^{(\gamma, \mathfrak{a})}_{A}(x) := \# \{t \in A : \mathbf{D}elta_t \mathfrak{M}_{\gamma, \mathfrak{a}}(\cdot, x) \neq 0 \}$ to be the number of jumps of the martingale $\mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x)$ in $A$. Then for $\mathbf{k_2} = \mathfrak{r}ac{3}{2}$, for any $0 \leq a < b$ and $p \geq 1$ we can write
\betagin{equation*}
\mathfrak{s}up_{\gammamma \in (0,1)} \mathfrak{s}up_{x \in \T_{\mathtt{var}epsilon}^3} \mathbb{E} \Bigl[ \bigl| \mathbbm{n}^{\gamma, \mathfrak{a}}_{[ \mathfrak{e}^{\mathbf{k_2}} a, \mathfrak{e}^{\mathbf{k_2}} b)} (x) \bigr|^p \Bigr]^{\mathfrak{r}ac{1}{p}} = \mathfrak{s}up_{\gammamma \in (0,1)} \mathfrak{s}up_{x \in \T_{\mathtt{var}epsilon}^3} \mathbb{E} \Biggl[ \biggl( \mathfrak{s}um_{t \in [\mathfrak{e}^{\mathbf{k_2}} a, \mathfrak{e}^{\mathbf{k_2}} b)} \mathbbm{1}_{ \bigl\{ \mathbf{D}elta_t \mathfrak{M}_{\gamma, \mathfrak{a}}(\cdot, x) \neq 0 \bigr\} } \biggr)^p \Biggr]^{\mathfrak{r}ac{1}{p}}
\end{equation*}
where $\mathbbm{1}$ denotes the indicator function. By the rescaling \eqref{eq:martingales}, the definition \eqref{eq:scalings} and the fact that the martingales are compensated Poisson processes, the entire expression is finite. Hence, Assumption~1(4) in \cite{Martingales} holds.
Using the definition \eqref{eq:martingales} and the identity \eqref{eq:equation-for-sigma}, the total variation norm of the martingale $t \mapsto \mathfrak{M}_{\gamma, \mathfrak{a}}(t, x)$ on the time interval $[0, T]$ can be bounded as
\betagin{align*}
&\mathcal{V}ert \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x) \mathcal{V}ert_{\mathrm{TV}([0, T])} \\
&\qquad \leq \gamma^{-3} \lim_{|\mathcal{P}| \to 0} \mathfrak{s}um_{t_i \in \mathcal{P}} \Biggl| \int_{\gamma^{-6} t_{i}}^{\gamma^{-6} t_{i+1}} \mathfrak{s}um_{j \in \T^3_N} c_{\gamma, \mathfrak{a}} (\mathfrak{s}igmagma_{\gamma, \mathfrak{a}}(s), j) \left( \mathfrak{s}igmagma_{\gamma, \mathfrak{a}}^j (s, \mathtt{var}epsilon^{-1} x) - \mathfrak{s}igmagma_{\gamma, \mathfrak{a}}(s, \mathtt{var}epsilon^{-1} x) \right) \mathrm{d} s \Biggr| \\ & \hspace{150px} + \mathtt{var}epsilon^{\mathbf{k_1} - 3} \int_0^t \bigl| \mathcal{C}d_{\gamma, \mathfrak{a}}(s, x) \bigr| \mathrm{d} s , \nonumber
\end{align*}
where $\mathcal{P}$ is a partition of the interval $[0, T]$ given by finitely many points $t_i$, the limit is taken as the mesh size $|\mathcal{P}|$ of the partition goes to $0$, and the sum runs over the points $t_{i}$ of the partition. According to the definition \eqref{eq:sigma-stopped}, the jump rates $c_{\gamma, \mathfrak{a}}$ are defined as
\betagin{equation*}
c_{\gamma, \mathfrak{a}} (\mathfrak{s}igmagma_{\gamma, \mathfrak{a}}(s), j) =
\betagin{cases}
c_{\gamma} (\mathfrak{s}igmagma_{\gamma, \mathfrak{a}}(s), j) &\text{for}~~ s < \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}, \\
c'_{\gamma} (\mathfrak{s}igmagma_{\gamma, \mathfrak{a}}(s), j) &\text{for}~~ s \geq \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}.
\end{cases}
\end{equation*}
Then we can bound $\mathcal{V}ert \mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x) \mathcal{V}ert_{\mathrm{TV}([0, T])} \lesssim T \gamma^{-9} \mathtt{var}epsilon^{-3} \lesssim \mathtt{var}epsilon^{-\mathfrak{r}ac{21}{4}}$, where in the last step we used $\mathtt{var}epsilon \approx \gamma^4$, so that $\gamma^{-9} \lesssim \mathtt{var}epsilon^{-\mathfrak{r}ac{9}{4}}$. Hence, Assumption~1(5) in \cite{Martingales} holds with $\mathbf{k_3} = \mathfrak{r}ac{21}{4}$.
The process $t \mapsto \mathfrak{s}igmagma_t(k)$ is pure jump, and from equation \eqref{eq:equation-for-sigma} we have $\mathbf{D}elta_t \mathfrak{s}igmagma(k) = \mathbf{D}elta_t \mathfrak{m}_\gamma (k)$. Moreover, from \eqref{eq:generator} and \eqref{eq:rates} we have $\mathscr{L}_\gamma \mathfrak{s}igmagma(k) = \tanh \big( \beta h_\gamma(\mathfrak{s}igmagma, k) \big) - \mathfrak{s}igmagma(k)$. Hence, using the definition \eqref{eq:sigma-stopped} and rescaling equation \eqref{eq:equation-for-sigma} we get
\betagin{equation} \lambdabel{eq:mrt_decomp}
\mathfrak{M}_{\gamma, \mathfrak{a}}(t, x) = J_{\gamma, \mathfrak{a}}(t, x) + \mathtt{var}epsilon^{\mathbf{k_1} - 3} \int_0^t \mathcal{C}d_{\gamma, \mathfrak{a}}(s, x) \mathrm{d} s,
\end{equation}
where $t \mapsto J_{\gamma, \mathfrak{a}}(t, x) = \mathfrak{s}um_{0 \leq s \leq t} \mathbf{D}elta_s \mathfrak{M}_{\gamma, \mathfrak{a}}(x)$ is a pure jump process and
\betagin{equation}\lambdabel{eq:C-gamma}
\mathcal{C}d_{\gamma, \mathfrak{a}}(t, x) =
\betagin{cases}
\mathfrak{s}igmagma \bigl( \mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon} \bigr) - \tanh \bigl( \beta \mathrm{d}eltalta X_{\gamma}( t, x) \bigr) &\text{for}~~ t < \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}, \\
\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}} \bigl( \mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon} \bigr) - \mathrm{d}eltalta X'_{\gamma, \mathfrak{a}}( t, x) &\text{for}~~ t \geq \mathfrak{r}ac{\tau_{\gamma, \mathfrak{a}}}{\alphapha}.
\end{cases}
\end{equation}
The process $t \mapsto \mathcal{C}d_{\gamma, \mathfrak{a}}(t, x)$ is adapted and is bounded uniformly in $x$ and $t$, and Assumption~1(6) in \cite{Martingales} is satisfied.
To conclude this section, we also need to show that the kernel we integrate against satisfies the right assumptions. To this end, we use Lemma~\ref{lem:Pgt} to get $\big| \partial_{t} \widetilde{G}^\gamma_{t}(x) \big| \lesssim t^{-\mathfrak{r}ac{5}{2}} \wedge \mathtt{var}epsilon^{-\mathfrak{r}ac{15}{4}} \wedge |x|^{-5}$; accordingly, we will use Definition~2.8 from \cite{Martingales} with the constant $\mathbf{k_4} = \mathfrak{r}ac{15}{4}$. The exact values of $\mathbf{k_3}$ and $\mathbf{k_4}$ are actually irrelevant, since they do not appear in Theorem~5.4 in \cite{Martingales}, which is our main tool in the following sections.
\mathfrak{s}ubsection{Besov spaces of distributions}
\lambdabel{sec:Besov}
In this section we recall the definition of the Besov spaces using the Littlewood-Paley theory.
According to \cite[Prop.~2.10]{BookChemin} there exist two smooth functions $\widetilde \chi, \chi : \R^3 \to \R$, taking values in $[0,1]$, such that $\widetilde \chi$ is supported on $B(0, \mathfrak{r}ac{4}{3})$, $\chi$ is supported on $B(0, \mathfrak{r}ac{8}{3}) \mathfrak{s}etminus B(0, \mathfrak{r}ac{3}{4})$, and for every $\omega \in \R^3$ they satisfy
\betagin{equation*}
\widetilde \chi(\omega) + \mathfrak{s}um_{k = 0}^\infty \chi(2^{-k} \omega) = 1.
\end{equation*}
Then we define $\chi_{-1} (\omega) := \widetilde \chi(\omega)$ and $\chi_{k} (\omega) := \chi(2^{-k} \omega)$ for $k \geq 0$, and set $\rho_{k} := \mathscr{F}^{-1} \chi_{k}$, where $\mathscr{F}^{-1}$ is the inverse Fourier transform on $\R^3$. For $k \geq 0$ one then has $\rho_k(x) = 2^{3k} \rho(2^k x)$, where $\rho := \rho_0$. The $k$-th \emph{Littlewood-Paley block} of a function or tempered distribution $f$ is defined as
\betagin{equation}\lambdabel{eq:LittlewoodPaleyBlock}
\mathrm{d}eltalta_k f := \rho_k * f = \mathscr{F}^{-1} \bigl(\chi_{k}\, \mathscr{F} f\bigr).
\end{equation}
Then one can show that $f = \mathfrak{s}um_{k \geq -1} \mathrm{d}eltalta_k f$ in the sense of distributions for any tempered distribution $f$.
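For the reader's convenience we sketch one standard way of producing such a pair of functions $\widetilde \chi, \chi$ (this is an illustration only, and not necessarily the specific functions provided by \cite[Prop.~2.10]{BookChemin}): starting from a smooth radial cutoff $\theta$ which equals $1$ on $B(0,1)$ and is supported in $B(0, \mathfrak{r}ac{4}{3})$, one sets $\widetilde \chi := \theta$ and $\chi(\omega) := \theta(\omega/2) - \theta(\omega)$, so that the sum above telescopes. The following short Python script (the function names are ours and purely illustrative) implements this construction and checks the partition of unity numerically.
\betagin{verbatim}
# Sketch: dyadic partition of unity via the telescoping construction
# chi(omega) = theta(omega/2) - theta(omega).
import numpy as np

def smooth_step(r, a, b):
    # smooth in r: equal to 1 for r <= a and to 0 for r >= b
    def f(x):
        return np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-12)), 0.0)
    x = (b - r) / (b - a)
    return f(x) / (f(x) + f(1.0 - x))

def theta(omega):
    # smooth radial cutoff: 1 on B(0,1), supported in B(0,4/3)
    return smooth_step(np.linalg.norm(omega, axis=-1), 1.0, 4.0 / 3.0)

def chi(omega):
    # supported in the annulus 1 <= |omega| <= 8/3
    return theta(omega / 2.0) - theta(omega)

rng = np.random.default_rng(0)
omega = rng.uniform(-50.0, 50.0, size=(10, 3))   # sample frequencies in R^3
K = 12                                           # 2^(K+1) exceeds max |omega| here
total = theta(omega) + sum(chi(omega / 2.0**k) for k in range(K + 1))
print(np.max(np.abs(total - 1.0)))               # close to machine precision
\end{verbatim}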
For $\eta \in \R$ and $p, q \in [1, \infty]$ the Besov space $\mathcal{C}B^\eta_{p, q}(\mathbb{T}^3)$ is defined as the completion of the space of smooth functions $f : \mathbb{T}^3 \to \R$ under the norm
\betagin{equation*}
\| f \|_{\mathcal{C}B^\eta_{p, q}} := \Bigl\| \Bigl(2^{\eta k} \|\mathrm{d}eltalta_k f\|_{L^p}\Bigr)_{k \geq -1} \Bigr\|_{\ell^q},
\end{equation*}
where we extended $f$ periodically on the right-hand side and where we write $\|(a_k)_{k \geq -1}\|_{\ell^q}$ for the $\ell^q$ norm of the sequence $(a_k)_{k \geq -1}$.
It is not hard to see that $\mathcal{C}B^\eta_{\infty, \infty}(\mathbb{T}^3)$ coincides with the space $\mathcal{C}C^\eta(\mathbb{T}^3)$ defined in Section~\ref{sec:notation}.
\mathfrak{s}ubsection{Controlling the processes $\widehat{X}_{\gamma}$ and $S_\gamma$}
We need to prove some auxiliary bounds which will be used in the proof of Proposition~\ref{prop:Pi-bounds}. The following result provides bounds on the high frequency Fourier modes of the process $X_{\gamma}$.
\betagin{lemma}\lambdabel{lem:X-Fourier-a-priori}
For any $\bar \kappappa > 0$ and $M > 0$, there is a non-random constant $C > 0$, such that
\betagin{equation}\lambdabel{eq:X-Fourier-a-priori}
| \widehat{X}_{\gamma} (t, \omega) | \leq C \gamma^M,
\end{equation}
uniformly in $t \in \R_+$, $\gamma^{-3 - \bar \kappappa} \leq |\omega|_\mathfrak{s}igmanfty \leq N$ and $\gamma \in (0,1)$.
\end{lemma}
\betagin{proof}
Using \eqref{eq:X-gamma} and \eqref{eq:S-def}, we may write $X_\gamma = K_\gamma *_\mathtt{var}epsilon S_\gamma$. Parseval's identity \eqref{eq:Parseval} then yields $\widehat{X}_{\gamma} (t, \omega) = \widehat{K}_\gamma(\omega) \widehat{S}_{\gamma} (t, \omega)$. Using the trivial bound $|S_{\gamma} (t, x)| \leq \gamma^{-3}$ we get $|\widehat{S}_{\gamma} (t, \omega)| \lesssim \gamma^{-3}$, and the absolute value of $\widehat{X}_{\gamma} (t, \omega)$ is bounded by $C_1 \gamma^{-3} |\widehat{K}_\gamma(\omega)|$. Furthermore, we use \eqref{eq:K3} to bound it by $C_2 \gamma^{-3} |\gamma^3\omega|^{-m}$ for any integer $m \geq 0$, where the constant $C_2$ depends on $m$. Hence, for any $\bar \kappappa > 0$ and $|\omega|_\mathfrak{s}igmanfty \geq \gamma^{-3 - \bar \kappappa}$ we have $| \widehat{X}_{\gamma} (t, \omega) | \leq C_3 \gamma^{\bar \kappappa m - 3}$; choosing $m$ large enough that $\bar \kappappa m - 3 \geq M$ yields the required bound \eqref{eq:X-Fourier-a-priori}.
\end{proof}
The following result shows that the a priori bound, provided by the stopping time \eqref{eq:tau-1}, yields a bound on the process $S_{\gamma}$ defined in \eqref{eq:S-def}. The bound on $S_{\gamma}$ is however slightly worse than that for the process $X_{\gamma}$: while we control the average values of $X_\gamma$ on all scales above $\mathfrak{e}$ (see the definition \eqref{eq:eps-norm-1} of the seminorm), we bound $S_\gamma$ only on strictly larger scales.
\betagin{lemma}\lambdabel{lem:Z-bound}
Let $\eta$ be as in the statement of Theorem~\ref{thm:main}, let us fix any $\tilde\kappappa \in (0,1)$ and let $r$ be the smallest integer satisfying $r > \mathfrak{r}ac{1 + \eta}{\tilde \kappappa} - \eta$ and $r \geq 2$. Then there exist non-random $\gamma_0 > 0$ and $C > 0$ such that
\betagin{equation*}
\mathfrak{s}up_{t \in [0, \tau^{(1)}_{\gamma, \mathfrak{a}}]} \mathfrak{s}up_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \bigl| \bigl( \iota_\mathtt{var}epsilon S_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) \bigr| \leq C \mathfrak{a} \lambdambda^{\eta},
\end{equation*}
uniformly over $\lambdambda \in [\mathfrak{e}^{1 - \tilde\kappappa}, 1]$, $\mathtt{var}phii \in \mathcal{C}B^{r}$ and $\gamma \in (0, \gamma_0)$. We recall that the stopping time $\tau^{(1)}_{\gamma, \mathfrak{a}}$ is defined in \eqref{eq:tau-1}.
\end{lemma}
\betagin{proof}
From the definitions \eqref{eq:kernel-K} and \eqref{eq:K-gamma} we have $\mathscr{F}_{\!\!\mathtt{var}epsilon}K_\gamma(\omega)= \mathtt{var}kappa_{\gamma, 1} \mathscr{F}_{\!\!\gamma}\fK(\mathtt{var}epsilon \omega / \gamma)$, where $\mathscr{F}_{\!\!\mathtt{var}epsilon}$ is the discrete Fourier transform defined in \eqref{eq:Fourier} and $\mathscr{F}_{\!\!\gamma}$ is defined by replacing $\mathtt{var}epsilon$ with $\gamma$. The first property in \eqref{eq:K-moments} implies that there are $a, c > 0$ such that $\mathscr{F} \fK(\omega) \geq a$ for $|\omega|_\mathfrak{s}igmanfty \leq c$, and for all $\gamma > 0$ small enough we have $\mathscr{F}_{\!\!\gamma} \fK(\omega) \geq a/2$ for $|\omega|_\mathfrak{s}igmanfty \leq c$. Hence, $\mathscr{F}_{\!\!\mathtt{var}epsilon}K_\gamma(\omega) \geq a \mathtt{var}kappa_{\gamma, 1}/2$ for $|\omega|_\mathfrak{s}igmanfty \leq c \gamma^{-3}$, and \eqref{eq:c-gamma-2} yields $a \mathtt{var}kappa_{\gamma, 1}/2 \geq a / 4$ for all $\gamma > 0$ small enough.
We define the function $\psi_\gamma(x)$ by its discrete Fourier transform $\widehat{\psi}_\gamma(\omega) = 1 / \widehat{K}_\gamma(\omega)$ for $|\omega|_\mathfrak{s}igmanfty \leq c \gamma^{-3}$ and $\widehat{\psi}_\gamma(\omega) = 0$ for $|\omega|_\mathfrak{s}igmanfty > c \gamma^{-3}$. Then $|\widehat{\psi}_\gamma(\omega)| \leq 4/a$ for $|\omega|_\mathfrak{s}igmanfty \leq c \gamma^{-3}$, which implies that $\psi_\gamma$ is a rescaled function with the scaling parameter $\gammamma$ (in the sense of \eqref{eq:rescaled-function}).
Let $n_0$ be the smallest integer such that $2^{n_0} > c \gamma^{-3}$. Then we use the Littlewood-Paley blocks, defined in Section~\ref{sec:Besov}, to write $\bigl( \iota_\mathtt{var}epsilon S_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) = \bigl( \iota_\mathtt{var}epsilon S^{(1)}_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) + \bigl( \iota_\mathtt{var}epsilon S^{(2)}_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda)$, where
\betagin{equation*}
\bigl( \iota_\mathtt{var}epsilon S^{(1)}_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) = \mathfrak{s}um_{k \geq n_0} \bigl(\mathrm{d}eltalta_k \iota_\mathtt{var}epsilon S_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda), \qquad \bigl( \iota_\mathtt{var}epsilon S^{(2)}_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) = \mathfrak{s}um_{-1 \leq k < n_0} \bigl(\mathrm{d}eltalta_k \iota_\mathtt{var}epsilon S_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda).
\end{equation*}
We first bound the process $S^{(1)}_{\gamma}$. We note that in the sum we can write $\bigl(\mathrm{d}eltalta_k \iota_\mathtt{var}epsilon S_{\gamma} (t)\bigr) (\mathrm{d}eltalta_k \mathtt{var}phii_x^\lambdambda)$. From the definition \eqref{eq:S-def} we have $|S_\gamma(t,x)| \leq \gamma^{-3}$. Recall that $r$ is an integer satisfying $r > \mathfrak{r}ac{1 + \eta}{\tilde \kappappa} - \eta$ and $r \geq 2$, and set $\kappappa := r + \eta - \mathfrak{r}ac{1 + \eta}{\tilde \kappappa}$, so that $0 < \kappappa < r$. Moreover, for $\mathtt{var}phii \in \mathcal{C}B^{r}$ we have $\| \mathrm{d}eltalta_k \mathtt{var}phii_x^\lambdambda \|_{L^1} \lesssim (\lambdambda 2^{k})^{-r + \kappappa}$, because $\mathcal{C}B^{r}$ is embedded into the Besov space $\mathcal{C}B^{r - \kappappa}_{\infty, \infty}$. Then we have
\betagin{equation*}
\bigl|\bigl( \iota_\mathtt{var}epsilon S^{(1)}_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda)\bigr| \lesssim \gamma^{-3} \mathfrak{s}um_{k \geq n_0} (\lambdambda 2^{k})^{-r + \kappappa} \lesssim \gamma^{-3} (\lambdambda 2^{n_0})^{-r + \kappappa} \lesssim \lambdambda^{-r + \kappappa} \mathfrak{e}^{r - 1 - \kappappa}.
\end{equation*}
If $\lambdambda \geq \mathfrak{e}^{1 - \tilde \kappappa}$, then the latter is bounded by $\lambdambda^\eta$: indeed, by the choice of $\kappappa$ we have $r - \kappappa + \eta = \mathfrak{r}ac{1 + \eta}{\tilde \kappappa} > 0$ and $r - 1 - \kappappa = \mathfrak{r}ac{(1 + \eta)(1 - \tilde \kappappa)}{\tilde \kappappa}$, so that $\lambdambda^{-r + \kappappa} \mathfrak{e}^{r - 1 - \kappappa} \leq \lambdambda^{\eta}$ is equivalent to $\mathfrak{e}^{r - 1 - \kappappa} \leq \lambdambda^{r - \kappappa + \eta}$, which holds as soon as $\lambdambda \geq \mathfrak{e}^{1 - \tilde \kappappa}$.
Now, we will bound $S^{(2)}_{\gamma}$. We note that for $k < n_0$ we can express $S_{\gamma} (t)$ in terms of $\psi_\gamma *_\mathtt{var}epsilon X_{\gamma} (t)$, and using \eqref{eq:LittlewoodPaleyBlock} we may write
\betagin{equation}\lambdabel{eq:Z2-expansion}
\bigl( \iota_\mathtt{var}epsilon S^{(2)}_{\gamma} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) = \bigl( \iota_\mathtt{var}epsilon X_{\gamma} (t)\bigr) \bigl(\mathbb{P}hi_x^{\gamma, \lambdambda}\bigr),
\end{equation}
where
\betagin{equation}\lambdabel{eq:Phi-function-def}
\mathbb{P}hi_x^{\gamma, \lambdambda}(y) = \int_{\R^3} \mathtt{var}epsilon^3 \mathfrak{s}um_{z \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phii_{x}^\lambdambda(z) \psi_\gamma(z - v) \mathfrak{s}um_{-1 \leq k < n_0} \rho_k(v - y)\, \mathrm{d} v.
\end{equation}
This is a convolution of three rescaled functions and hence it can be viewed as a function rescaled by $\lambdambda$. Then the definition \eqref{eq:tau-1} yields $|( \iota_\mathtt{var}epsilon X_{\gamma} (t)) (\mathbb{P}hi_x^{\gamma, \lambdambda})| \lesssim \mathfrak{a} \lambdambda^{\eta}$ for $t \in [0, \tau^{(1)}_{\gamma, \mathfrak{a}}]$. We note that the function $\mathbb{P}hi_x^{\gamma, \lambdambda}$ is not compactly supported, as would be required in the definition of the seminorm \eqref{eq:eps-norm-1}. This however does not play any role, since the process $X_{\gamma} (t)$ is periodic and the function $\mathbb{P}hi_x^{\gamma, \lambdambda}$ has fast decay at infinity (because the function $\mathtt{var}phii$ involved in the definition \eqref{eq:Phi-function-def} is compactly supported and the functions $\psi_\gamma$ and $\rho_k$ have fast decay at infinity). Hence, \eqref{eq:Z2-expansion} is absolutely bounded by a constant multiple of $\mathfrak{a} \lambdambda^{\eta}$, as required.
\end{proof}
\mathfrak{s}ubsection{Controlling the bracket process of the martingales}
In Section~\ref{sec:second-symbol} we need to analyse the process
\betagin{equation*}
Q_{\gamma, \mathfrak{a}}(t, x) := \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{0}^{\tau_{\gamma, \mathfrak{a}}} \!\!\!\mywidetilde{\mathscr{K}}^\gamma_{t-s}( x-y)^2 S_\gamma (s, y) X_\gamma(s, y) \mathrm{d} s,
\end{equation*}
where we used the stopping time \eqref{eq:tau}. In the following lemma we estimate the error after replacing $S_\gamma$ by its local average, i.e. we estimate how close $Q_{\gamma, \mathfrak{a}}$ is to
\betagin{equation*}
\un{Q}_{\gamma, \mathfrak{a}}(t, x) := \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mywidetilde{\mathscr{K}}^\gamma_{t-s} (x-y)^2 \un{X}_{\gamma} (s, y) X_\gamma(s, y) \mathrm{d} s,
\end{equation*}
where the process $\un{X}_{\gamma}$ is defined in \eqref{eq:X-under}.
\betagin{lemma}\lambdabel{lem:Y-approximation}
For every $T > 0$ there exist deterministic constants $\gamma_0 > 0$ and $C > 0$, depending also on the constant $\un{\kappappa}$ fixed in \eqref{eq:X-under}, such that
\betagin{equation}\lambdabel{eq:Y-approximation}
\mathfrak{s}up_{t \in [0, T]} \mathfrak{s}up_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} |Q_{\gamma, \mathfrak{a}}(t, x) - \un{Q}_{\gamma, \mathfrak{a}}(t, x)| \leq C \mathfrak{a} \gamma^{3 (\eta-1)},
\end{equation}
uniformly in $\gamma \in (0, \gamma_0)$. The value $\eta$ is as in the statement of Theorem~\ref{thm:main}.
\end{lemma}
\betagin{proof}
As we stated at the beginning of Section~\ref{sec:lift}, $\widetilde{G}^\gamma = \mywidetilde{\mathscr{K}}^\gamma + \widetilde{\mathscr{R}}^\gamma$. Replacing $\mywidetilde{\mathscr{K}}^\gamma$ by $\widetilde{G}^\gamma$ in the definition of $Q_{\gamma, \mathfrak{a}}$, and using \eqref{eq:X-apriori} and the trivial bound $|S_\gamma(s, y)| \leq \gamma^{-3}$, we obtain
\betagin{equation*}
Q_{\gamma, \mathfrak{a}}(t, x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_0^{\tau_{\gamma, \mathfrak{a}}} \widetilde{G}^\gamma_{t-s}(x-y)^2 S_\gammamma(s, y) X_\gamma(s, y)\, \mathrm{d} s + \mathcal{C}O\bigl(\gamma^{3 (\eta-1)}\bigr),
\end{equation*}
and an analogous formula holds for $\un{Q}_{\gamma, \mathfrak{a}}$. Here, we made use of the estimates
\betagin{equation*}
\mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mywidetilde{\mathscr{K}}^\gamma_{t-s}(y) \widetilde{\mathscr{R}}^\gamma_{t-s}(y)\, \mathrm{d} s \lesssim 1, \qquad \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_0^{\tau_{\gamma, \mathfrak{a}}} \widetilde{\mathscr{R}}^\gamma_{t-s}(y)^2\, \mathrm{d} s \lesssim 1
\end{equation*}
which follow from the smoothness of the function $\widetilde{\mathscr{R}}^\gamma$ and the integrable singularity of $\mywidetilde{\mathscr{K}}^\gamma$ (see Appendix~\ref{sec:kernels}). We note that the value of the stopping time $\tau_{\gamma, \mathfrak{a}}$ does not play a role in this bound, because the kernel $\widetilde{G}^\gamma_{t-s}$ vanishes for $s \geq t$ and the integration interval is contained in $[0, t]$. That is why the error term $\mathcal{C}O\bigl(\gamma^{3 (\eta-1)}\bigr)$ is bounded uniformly in $t \in [0, T]$ and $x \in \mathtt{L}ambda_{\mathtt{var}epsilon}$.
Using the spatial periodicity of the processes we can write furthermore
\betagin{equation*}
Q_{\gamma, \mathfrak{a}}(t, x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mathbb{P}hi^\gamma_{t-s}(x-y) S_\gammamma(s, y) X_\gamma(s, y)\, \mathrm{d} s + \mathcal{C}O\bigl(\gamma^{3 (\eta-1)}\bigr),
\end{equation*}
where by analogy with \eqref{eq:From-P-to-G} the function $\mathbb{P}hi^\gamma$ is defined by
\betagin{equation}\lambdabel{eq:Phi-def}
\mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} \mathbb{P}hi^\gamma_t(x) f(x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \widetilde{G}^\gamma_{t}(x)^2 f(x),
\end{equation}
for any $f : \T_{\mathtt{var}epsilon}^3 \to \R$, where on the right-hand side we extended $f$ periodically to $\mathtt{L}ambda_{\mathtt{var}epsilon}$.
Then we can write $Q_{\gamma, \mathfrak{a}}(t, x) = \un{Q}_{\gamma, \mathfrak{a}}(t, x) + E_{\gamma, \mathfrak{a}}(t, x) + \mathcal{C}O(\gamma^{3 (\eta-1)})$ with the error term
\betagin{equation*}
E_{\gamma, \mathfrak{a}}(t, x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mathbb{P}hi^\gamma_{t-s}(x-y) \bigl(S_\gammamma - \un{X}_{\gamma}\bigr) (s, y) X_\gamma(s, y)\, \mathrm{d} s,
\end{equation*}
and we need to show that this error term is absolutely bounded by the right-hand side of \eqref{eq:Y-approximation}. Applying Parseval's identity \eqref{eq:Parseval} we get
\betagin{equation*}
E_{\gamma, \mathfrak{a}}(t, x) = \mathfrak{r}ac{1}{8} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mathfrak{s}um_{|\omega|_{\mathfrak{s}igmanfty} \leq N} \mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{t-s} (\omega)\, \mathscr{F}_{\!\!\mathtt{var}epsilon}\bigl((S_\gammamma - \un{X}_{\gamma}) X_\gamma \bigr)(s, \omega)\, e^{-\pi i \omega \cdot x}\, \mathrm{d} s.
\end{equation*}
The high frequency Fourier modes of $\mathbb{P}hi^\gamma_{t-s}$ decay very fast, which allows us to control the whole expression under the integral. To separate low and high Fourier modes of the function $\mathbb{P}hi^\gamma_{t-s}$, we take $\kappappa_1 > 0$, whose precise value will be fixed later, and write
\betagin{align*}
E_{\gamma, \mathfrak{a}}(t, x) &= \mathfrak{r}ac{1}{8} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mathfrak{s}um_{\gammamma^{-3 - \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{t-s} (\omega)\, \mathscr{F}_{\!\!\mathtt{var}epsilon}\bigl((S_\gammamma - \un{X}_{\gamma}) X_\gamma \bigr)(s, \omega)\, e^{-\pi i \omega \cdot x} \,\mathrm{d} s \\
&\qquad + \mathfrak{r}ac{1}{8} \int_0^{\tau_{\gamma, \mathfrak{a}}} \mathfrak{s}um_{|\omega|_{\mathfrak{s}igmanfty} \leq \gammamma^{-3 - \kappappa_1}} \mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{t-s} (\omega)\, \mathscr{F}_{\!\!\mathtt{var}epsilon}\bigl((S_\gammamma - \un{X}_{\gamma}) X_\gamma \bigr)(s, \omega)\, e^{-\pi i \omega \cdot x} \,\mathrm{d} s.
\end{align*}
We denote these two terms by $E^{(1)}_{\gamma, \mathfrak{a}}(t, x)$ and $E^{(2)}_{\gamma, \mathfrak{a}}(t, x)$ respectively.
We start by analysing the term $E^{(1)}_{\gamma, \mathfrak{a}}$. The processes $S_\gammamma$ and $\un{X}_{\gamma}$ are uniformly bounded by $\gamma^{-3}$, while for the process $X_\gamma$ we have the bound \eqref{eq:X-apriori}. Then
\betagin{equation*}
|E^{(1)}_{\gamma, \mathfrak{a}}(t, x)| \lesssim \mathfrak{a} \gammamma^{3 (\eta - 1)} \int_0^{t} \mathfrak{s}um_{\gammamma^{-3 - \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq N} |\mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{s} (\omega)|\, \mathrm{d} s,
\end{equation*}
where we used the property $\mathbb{P}hi^\gamma_{s} \equiv 0$ for $s < 0$ to extend the integral to $[0, t]$. The definition \eqref{eq:Phi-def}, the Poisson summation formula and the identity \eqref{eq:tildeP} yield
\betagin{align*}
\mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{s} (\omega) &= \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} \mathscr{F}_{\!\!\mathtt{var}epsilon}{\widetilde{P}^\gamma_{s}}(\omega - \omega') \mathscr{F}_{\!\!\mathtt{var}epsilon}{\widetilde{P}^\gamma_{s}}(\omega') \\
& = \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} \mathrm{ex}p \Bigl( \gamma^{-6} \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega - \omega') + \widehat{K}_\gamma(\omega') - 2 \bigr) s \Bigr) \widehat{K}_\gamma(\omega - \omega') \widehat{K}_\gamma(\omega').
\end{align*}
From this we readily get
\betagin{equation}\lambdabel{eq:double-heat-sum-1}
\int_0^t |\mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{s} (\omega)|\, \mathrm{d} s \leq t \gamma^{6} \mathtt{var}kappa_{\gamma, 3}^{-2} \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} |\widehat{K}_\gamma(\omega - \omega') \widehat{K}_\gamma(\omega')|,
\end{equation}
where we made use of \eqref{eq:K2} to bound the exponential by $1$. Estimating $\widehat{K}_\gamma$ by \eqref{eq:K1B} and \eqref{eq:K3}, for any $k \geq 4$ we bound the preceding expression by a constant multiple of
\betagin{equation}\lambdabel{eq:double-heat-sum}
\gamma^{6} \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} (| \gamma^3 (\omega - \omega')|\vee 1)^{-2k} (| \gamma^3 \omega'| \vee 1)^{-2k}.
\end{equation}
Then we have
\betagin{align}
&|E_{\gamma, \mathfrak{a}}^{(1)}(t, x)| \lesssim \mathfrak{a} \gammamma^{3 (\eta + 1)} \mathfrak{s}um_{\gammamma^{-3 - \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} (| \gamma^3 (\omega - \omega')|\vee 1)^{-2k} (| \gamma^3 \omega'| \vee 1)^{-2k} \nonumber \\
&\qquad \lesssim \mathfrak{a} \gammamma^{3 (\eta - 5)} \int_{\gammamma^{- \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} \int_{|\omega'|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} (| \omega - \omega'| \vee 1)^{-2k} (| \omega'| \vee 1)^{-2k} \mathrm{d} \omega' \mathrm{d} \omega. \lambdabel{eq:E1-integral}
\end{align}
In order to bound this integral, we split the domain of integration into two subdomains.
If $|\omega'|_{\mathfrak{s}igmanfty} \leq |\omega|_{\mathfrak{s}igmanfty} / 2$, then $| \omega - \omega'|_{\mathfrak{s}igmanfty} \geq |\omega|_{\mathfrak{s}igmanfty} / 2$. We also have $|\omega|_{\mathfrak{s}igmanfty} \geq 1$. Then the part of the double integral \eqref{eq:E1-integral}, in which the integration variables satisfy $|\omega'|_{\mathfrak{s}igmanfty} \leq |\omega|_{\mathfrak{s}igmanfty} / 2$, is bounded by a constant times
\betagin{equation*}
\int_{\gammamma^{ - \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} \int_{|\omega'|_{\mathfrak{s}igmanfty} \leq |\omega|_{\mathfrak{s}igmanfty} / 2} | \omega|_{\mathfrak{s}igmanfty}^{-2k} (| \omega'|_{\mathfrak{s}igmanfty} \vee 1)^{-2k}\mathrm{d} \omega' \mathrm{d} \omega \lesssim \int_{\gammamma^{- \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} |\omega|_{\mathfrak{s}igmanfty}^{-2k} \mathrm{d} \omega,
\end{equation*}
and the latter is of order $\gamma^{(2k - 3) \kappappa_1}$. Taking $k$ large enough, we can make the power of $\gamma$ arbitrarily big.
If $|\omega'|_{\mathfrak{s}igmanfty} > |\omega|_{\mathfrak{s}igmanfty} / 2$, then we simply bound $| \omega - \omega'|_{\mathfrak{s}igmanfty} \vee 1 \geq 1$, and the respective part of the double integral \eqref{eq:E1-integral} is bounded by
\betagin{align*}
\int_{\gammamma^{ - \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} \int_{|\omega|_{\mathfrak{s}igmanfty} / 2 < |\omega'|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} | \omega'|_{\mathfrak{s}igmanfty}^{-2k} \mathrm{d} \omega' \mathrm{d} \omega \lesssim \int_{\gammamma^{- \kappappa_1} < |\omega|_{\mathfrak{s}igmanfty} \leq \gamma^3 N} | \omega|_{\mathfrak{s}igmanfty}^{3 -2k} \mathrm{d} \omega,
\end{align*}
which is of order $\gamma^{(2k - 6) \kappappa_1}$. Combining the preceding bounds, we get $|E_{\gamma, \mathfrak{a}}^{(1)}(t, x)| \lesssim \mathfrak{a}$.
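Both powers of $\gamma$ above come from the same elementary tail estimate, which we record for convenience ($R > 1$ and $m > 3$ are placeholders used only here, and $|\cdot|$ denotes the supremum norm used above):
\begin{equation*}
\int_{|\omega| > R} |\omega|^{-m}\, \mathrm{d} \omega \lesssim \int_{R}^{\infty} r^{2 - m}\, \mathrm{d} r \lesssim R^{3 - m},
\end{equation*}
applied with $R = \gamma^{-\kappa_1}$ and $m = 2k$ in the first case, respectively $m = 2k - 3$ in the second case.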
Now, we will analyse the term $E^{(2)}_{\gamma, \mathfrak{a}}$. We have
\betagin{equation}\lambdabel{eq:E2-gamma-bound}
E^{(2)}_{\gamma, \mathfrak{a}}(t, x) = \int_0^{\tau_{\gamma, \mathfrak{a}}} \mathfrak{s}um_{|\omega|_\mathfrak{s}igmanfty \leq \gammamma^{-3 - \kappappa_1}} \mathscr{F}_{\!\!\mathtt{var}epsilon}\mathbb{P}hi^\gamma_{t-s} (\omega)\, \mathscr{F}_{\!\!\mathtt{var}epsilon} \bigl((S_\gammamma - \un{X}_{\gamma}) X_\gamma \bigr)(s, \omega)\, e^{-\pi i \omega \cdot x} \,\mathrm{d} s.
\end{equation}
Furthermore, \eqref{eq:Fourier-of-product} yields
\betagin{align*}
\mathscr{F}_{\!\!\mathtt{var}epsilon} \bigl((S_\gammamma - \un{X}_{\gamma}) X_\gamma \bigr)(s, \omega) &= \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} \bigl(\widehat{Z}_\gamma - \widehat{\un{X}}_\gamma \bigr)(s, \omega') \widehat{X}_\gamma(s, \omega - \omega') \\
&= \mathfrak{s}um_{|\omega'|_{\mathfrak{s}igmanfty} \leq N} \bigl(1 - \widehat{\un{K}}_\gamma(\omega')\bigr) \widehat{Z}_\gamma(s, \omega') \widehat{X}_\gamma(s, \omega - \omega').
\end{align*}
We assumed in Section~\ref{sec:a-priori} that $\mathscr{F} \un{\fK} (\omega) = 1$ for all $\omega \in \R^3$ such that $|\omega|_{\mathfrak{s}igmanfty} \leq 1$, from which we conclude that the terms in the preceding sum may be non-vanishing only for $|\omega'|_{\mathfrak{s}igmanfty} > \gamma^{-3 - \un \kappappa}$. Then the variables in these sums satisfy $|\omega - \omega'|_{\mathfrak{s}igmanfty} \geq c \gammamma^{-3 - \un \kappappa}$ for some $c > 0$, if we take $\kappappa_1 = \un{\kappappa}/2$. From Lemma~\ref{lem:X-Fourier-a-priori} we have $|\widehat{X}_\gamma(s, \omega - \omega')| \lesssim \gammamma^M$ for any $M > 0$, where the proportionality constant depends on $\un{\kappappa}$ and $M$. Applying the preceding estimate to \eqref{eq:E2-gamma-bound}, we get
\betagin{align*}
|E^{(2)}_{\gamma, \mathfrak{a}}(t, x)| \lesssim \gamma^{M -3} \int_0^{t} \mathfrak{s}um_{|\omega|_\mathfrak{s}igmanfty \leq \gammamma^{-3 - \kappappa_1}} \bigl|\mathscr{F}_{\!\!\mathtt{var}epsilon} \mathbb{P}hi^\gamma_{s} (\omega) \bigr|\, \mathrm{d} s,
\end{align*}
where as before we used the bound $|\widehat{Z}_\gamma(s, \omega')| \lesssim \gamma^{-3}$ and extended the integral to the interval $[0, t]$. Using \eqref{eq:double-heat-sum-1} and \eqref{eq:double-heat-sum}, this expression is bounded as
\betagin{align*}
|E^{(2)}_{\gamma, \mathfrak{a}}(t, x)| &\lesssim \gamma^{M + 3} \mathfrak{s}um_{|\omega|_\mathfrak{s}igmanfty \leq \gammamma^{-3 - \kappappa_1}} \mathfrak{s}um_{|\omega'|_\mathfrak{s}igmanfty \leq N} (| \gamma^3 (\omega - \omega')|_\mathfrak{s}igmanfty\vee 1)^{-2k} (| \gamma^3 \omega'|_\mathfrak{s}igmanfty \vee 1)^{-2k} \\
&\lesssim \gamma^{M -15} \int_{|\omega|_\mathfrak{s}igmanfty \leq \gamma^3 \gammamma^{-3 - \kappappa_1}} \int_{|\omega'|_\mathfrak{s}igmanfty \leq \gamma^3 N} (| \omega - \omega'|_\mathfrak{s}igmanfty \vee 1)^{-2k} (| \omega'|_\mathfrak{s}igmanfty \vee 1)^{-2k} \mathrm{d} \omega' \mathrm{d} \omega.
\end{align*}
This expression is of order $\gamma^{M -15}$ which can be made arbitrarily small by taking $M$ large.
\end{proof}
\mathfrak{s}ubsection{Controlling the process $X'_{\gamma, \mathfrak{a}}$}
\lambdabel{sec:X-prime}
We recall that $X'_{\gamma, \mathfrak{a}}$ is defined below \eqref{eq:sigma-stopped} via the spin field $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}$, and let us define
\betagin{equation}\lambdabel{eq:Z-prime-def}
S'_{\gamma, \mathfrak{a}}(t,x) := \mathfrak{r}ac{1}{\mathrm{d}elta} \mathfrak{s}igmagma'_{\gamma, \mathfrak{a}} \Bigl( \mathfrak{r}ac{t}{\alphapha}, \mathfrak{r}ac{x}{\mathtt{var}epsilon} \Bigr) \qquad \text{for}~~ x \in \T_{\mathtt{var}epsilon}^3,~ t \geq 0.
\end{equation}
We need to control these two processes.
\betagin{lemma}\lambdabel{lem:X-prime-bound}
Let $\eta$ be as in Theorem~\ref{thm:main}. There exists $\gammamma_0 > 0$ such that for every $p \geq 1$ and $T > 0$ one has
\betagin{equation}\lambdabel{eq:X-prime-bound}
\mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} \bigl| \bigl( \iota_\mathtt{var}epsilon X'_{\gamma, \mathfrak{a}} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) \bigr|^p\biggr] \leq C \mathfrak{a}^p (\lambdambda \vee \mathfrak{e})^{\eta p},
\end{equation}
uniformly over $\gamma \in (0, \gamma_0)$, $\mathtt{var}phii \in \mathcal{C}B^{1}$, $x \in \mathtt{L}ambda_{\mathtt{var}epsilon}$ and $\lambdambda \in (0, 1]$. The constant $C$ depends only on $p$, $T$ and $\gamma_0$.
\end{lemma}
\betagin{proof}
By the definition in Section~\ref{sec:a-priori} we have $X'_{\gamma, \mathfrak{a}}(\tau_{\gamma, \mathfrak{a}}) = X_{\gamma}(\tau_{\gamma, \mathfrak{a}})$, and in the same way as we derived equation \eqref{eq:IsingKacEqn}, we get for $t \geq \tau_{\gamma, \mathfrak{a}}$
\betagin{equation}\lambdabel{eq:X-prime-equation-P}
X'_{\gamma, \mathfrak{a}}(t, x) = \bigl(P^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} X_{\gamma}\bigr)(\tau_{\gamma, \mathfrak{a}}, x) + \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^t \widetilde{P}^{\gamma}_{t - s}(x-y) \,\mathrm{d} \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y).
\end{equation}
Extending the processes periodically to $x \in \mathtt{L}ambda_{\mathtt{var}epsilon}$ and using \eqref{eq:From-P-to-G}, we replace $P^{\gamma}$, $\widetilde{P}^{\gamma}$ and $\T_{\mathtt{var}epsilon}^3$ in the preceding equation by $G^\gamma$, $\widetilde{G}^{\gamma}$ and $\mathtt{L}ambda_{\mathtt{var}epsilon}$ respectively. Then, for a test function $\mathtt{var}phii \in \mathcal{C}B^{1}$ we have
\betagin{equation}\lambdabel{eq:X-prime-equation}
\bigl( \iota_\mathtt{var}epsilon X'_{\gamma, \mathfrak{a}} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) = \bigl( \iota_\mathtt{var}epsilon G^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} X_{\gamma}(\tau_{\gamma, \mathfrak{a}})\bigr) (\mathtt{var}phii_x^\lambdambda) + \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\tau_{\gamma, \mathfrak{a}}}^t \bigl(\widetilde{G}^{\gamma}_{t -s} *_\mathtt{var}epsilon \mathtt{var}phii_x^\lambdambda\bigr)(y) \,\mathrm{d} \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y).
\end{equation}
We denote the two terms on the right-hand side by $A_{\gamma, \lambdambda}(t)$ and $B_{\gamma, \lambdambda}(t)$ respectively. Then the first term may be written as
\betagin{equation*}
A_{\gamma, \lambdambda}(t) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} G^\gamma_{t - \tau_{\gamma, \mathfrak{a}}}(y) \bigl( \iota_\mathtt{var}epsilon X_{\gamma}(\tau_{\gamma, \mathfrak{a}})\bigr) (\mathtt{var}phii_{x - y}^\lambdambda).
\end{equation*}
Using the a priori bound on $X_{\gamma}$, provided by the stopping time \eqref{eq:tau-1}, we get $|\bigl( \iota_\mathtt{var}epsilon X_{\gamma}(\tau_{\gamma, \mathfrak{a}})\bigr) (\mathtt{var}phii_{x - y}^\lambdambda)| \lesssim \mathfrak{a} (\lambdambda \vee \mathfrak{e})^\eta$ where we used the definition of the seminorm \eqref{eq:eps-norm-1}. Then since the kernel $G^\gamma_t$ integrates to $1$, we get
\betagin{equation}\lambdabel{eq:A-bound}
\bigl|A_{\gamma, \lambdambda}(t)\bigr| \lesssim \mathfrak{a} (\lambdambda \vee \mathfrak{e})^\eta,
\end{equation}
with a proportionality constant independent of the involved values. Here, we used the fact that the discrete heat kernel $G^\gamma_t$ is absolutely summable over $\mathtt{L}ambda_{\mathtt{var}epsilon}$ and the sum is bounded uniformly in $\gamma$ and $t$, which follows from Lemma~\ref{lem:Pgt}.
Now, we will bound the last term in \eqref{eq:X-prime-equation}. For this, we define
\betagin{equation*}
B_{\gamma, \lambdambda}(t', t) := \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\tau_{\gamma, \mathfrak{a}}}^{t'} \bigl(\widetilde{G}^{\gamma}_{t - s} *_\mathtt{var}epsilon \mathtt{var}phii_x^\lambdambda\bigr)(y) \,\mathrm{d} \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y),
\end{equation*}
so that $B_{\gamma, \lambdambda}(t) = B_{\gamma, \lambdambda}(t, t)$ and the process $t' \mapsto B_{\gamma, \lambdambda}(t', t)$ is a martingale on $[\tau_{\gamma, \mathfrak{a}}, t]$. In order to apply the Burkholder-Davis-Gundy inequality \cite[Prop.~A.2]{Martingales} to this martingale, we need to bound its jumps and bracket process. The jump times of $B_{\gamma, \lambdambda}$ coincide with those of $\mathfrak{M}'_{\gamma, \mathfrak{a}}$, and we get
\betagin{equation*}
\bigl|\mathbf{D}elta_s B_{\gamma, \lambdambda}(\bigcdot, t)\bigr| \leq \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \bigl|\bigl(\widetilde{G}^{\gamma}_{t - s} *_\mathtt{var}epsilon \mathtt{var}phii_x^\lambdambda\bigr)(y) \bigr| |\mathbf{D}elta_s \mathfrak{M}'_{\gamma, \mathfrak{a}}(\bigcdot, y)|,
\end{equation*}
for $s \in [\tau_{\gamma, \mathfrak{a}}, t]$, where $\mathbf{D}elta_s \mathfrak{M}'_{\gamma, \mathfrak{a}}$ is the jump of the martingale defined in \eqref{eq:jump}. Moreover, the jump size of $\mathfrak{M}'_{\gamma, \mathfrak{a}}$ is bounded by $2 \gamma^{-3}$, and at a given jump time the jump almost surely occurs only at the points $\{y_* + k : k \in \Z^3\}$ for a unique $y_* \in \T_{\mathtt{var}epsilon}^3$ (recall Section~\ref{sec:martingales-properties} and the periodicity of the martingale). Thus, we get almost surely
\betagin{equation}\lambdabel{eq:B-jump-bound}
\bigl|\mathbf{D}elta_s B_{\gamma, \lambdambda}(\bigcdot, t)\bigr| \leq 2 \gamma^{-3} \mathtt{var}epsilon^3 \mathfrak{s}up_{y_* \in \T_{\mathtt{var}epsilon}^3} \mathfrak{s}um_{k \in \Z^3} \bigl|\bigl(\widetilde{G}^{\gamma}_{t - s} *_\mathtt{var}epsilon \mathtt{var}phii_x^\lambdambda\bigr)(y_* + k) \bigr| \lesssim \gamma^{-3} \mathtt{var}epsilon^3 \lesssim \gamma^9.
\end{equation}
The sum is bounded because the discrete heat kernel decays very fast at infinity (see Lemma~\ref{lem:Pgt}), and the last bound in \eqref{eq:B-jump-bound} uses the scaling $\mathtt{var}epsilon \approx \gamma^4$.
Recalling \eqref{eq:M-prime-bracket}, the bracket process of $B_{\gamma, \lambdambda}(t', t)$ equals
\betagin{equation*}
\big\lambdangle B_{\gamma, \lambdambda}(\bigcdot, t) \big\rangle_{t'} = 2 \mathtt{var}kappa_{\gamma, 2} \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\tau_{\gamma, \mathfrak{a}}}^{t'} \bigl(\widetilde{G}^{\gamma}_{t - s} *_\mathtt{var}epsilon \mathtt{var}phii_x^\lambdambda\bigr)(y)^2 \Bigl( 1 - \mathrm{d}elta \mathfrak{s}igmagma'_{\gamma, \mathfrak{a}} \Bigl(\mathfrak{r}ac{s}{\alphapha}, \mathfrak{r}ac{y}{\mathtt{var}epsilon}\Bigr) X'_{\gamma, \mathfrak{a}}(s, y) \Bigr) \mathrm{d} s.
\end{equation*}
The process in the parentheses is bounded by a constant, and the definition \eqref{eq:scalings} yields
\betagin{equation*}
\bigl|\big\lambdangle B_{\gamma, \lambdambda}(\bigcdot, t) \big\rangle_{t'}\bigr| \lesssim \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{0}^{t} \bigl(\widetilde{G}^{\gamma}_{s} *_\mathtt{var}epsilon \mathtt{var}phii_x^\lambdambda\bigr)(y)^2 \mathrm{d} s,
\end{equation*}
where we used $t' \leq t$.
Similarly to how we estimated \eqref{eq:renorm-constant1}, we can show that
\betagin{equation}\lambdabel{eq:B-bracket-bound}
\bigl|\big\lambdangle B_{\gamma, \lambdambda}(\bigcdot, t) \big\rangle_{t'}\bigr| \lesssim (\lambdambda \vee \mathfrak{e})^{-1}.
\end{equation}
Applying the Burkholder-Davis-Gundy inequality \cite[Prop.~A.2]{Martingales} and using the bounds \eqref{eq:B-jump-bound} and \eqref{eq:B-bracket-bound}, we get
\betagin{equation}\lambdabel{eq:B-bound}
\Bigl(\mathbb{E} \Bigl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} \bigl| B_{\gamma, \lambdambda}(t) \bigr|^p\Bigr]\Bigr)^{\mathfrak{r}ac{1}{p}} \lesssim (\lambdambda \vee \mathfrak{e})^{-\mathfrak{r}ac{1}{2}} + \gamma^9.
\end{equation}
Using then the Minkowski inequality and the bounds \eqref{eq:A-bound} and \eqref{eq:B-bound}, we obtain from \eqref{eq:X-prime-equation} the required result \eqref{eq:X-prime-bound}.
\end{proof}
Using the preceding result, the following one is proved in exactly the same way as Lemma~\ref{lem:Z-bound}.
\betagin{lemma}\lambdabel{lem:Z-prime-bound}
For any $\tilde\kappappa \in (0,1)$ there exists $\gamma_0 > 0$ such that for any $\gamma \in (0, \gamma_0)$, $T > 0$, $\mathtt{var}phii \in \mathcal{C}B^{r}$ and $\lambdambda \in [\mathfrak{e}^{1 - \tilde\kappappa}, 1]$ one has
\betagin{equation*}
\mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} \mathfrak{s}up_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \bigl| \bigl( \iota_\mathtt{var}epsilon S'_{\gamma, \mathfrak{a}} (t)\bigr) (\mathtt{var}phii_x^\lambdambda) \bigr| \leq C \mathfrak{a} \lambdambda^{\eta},
\end{equation*}
where the values $\eta$ and $r$ are the same as in the statement of Lemma~\ref{lem:Z-bound}. The non-random proportionality constant $C$ depends on $T$ and is independent of $\gamma$, $\mathtt{var}phii$ and $\lambdambda$.
\end{lemma}
\mathfrak{s}ubsection{Controlling the process $\un{X}'_{\gamma, \mathfrak{a}}$}
Let us define by analogy with \eqref{eq:c-under} the renormalisation term, which is a function of the time variable,
\betagin{equation}\lambdabel{eq:C-gamma-function}
\un{\mathfrak{C}}_{\gamma}(t) := 2 \mathtt{var}kappa_{\gamma, 2} \int_{0}^t \mathtt{var}epsilon^3 \mathfrak{s}um_{x \in \T_{\mathtt{var}epsilon}^3} \un{P}^{\gamma}_s(x) \widetilde{P}^\gamma_s(x) \,\mathrm{d} s,
\end{equation}
where $\un{P}^{\gamma}_{t} := P^{\gamma}_{t} *_\mathtt{var}epsilon \un{K}_\gamma$ and $\mathtt{var}kappa_{\gamma, 2}$ was defined in \eqref{constant:isingkac_one}. The following result will be useful later.
\betagin{lemma}\lambdabel{lem:renorm-function-underline}
The constant \eqref{eq:c-under} and the function \eqref{eq:C-gamma-function} satisfy $|\un{\mathfrak{C}}_\gamma - \un{\mathfrak{C}}_\gamma(t)| \lesssim t^{-c/2} \mathfrak{e}^{c -1}$ for any $c \in [0, 1)$.
\end{lemma}
\betagin{proof}
The proof of the bound goes along the lines of the proof of Lemma~\ref{lem:renorm-constant-underline}. More precisely, as in \eqref{eq:un-fc-bound} we get
\betagin{equation*}
\un{\mathfrak{C}}_\gamma - \un{\mathfrak{C}}_\gamma(t) = \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 2}}{8} \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{r}ac{\alphapha \widehat{\un{K}}_\gamma(\omega) \widehat{K}_\gamma(\omega)}{\mathtt{var}kappa_{\gamma, 3}^2 \big(1 - \widehat{K}_\gamma(\omega) \big)} \mathrm{ex}p \Bigl( 2 \mathtt{var}kappa_{\gamma, 3}^2 \big( \widehat{K}_\gamma(\omega) - 1 \big) \mathfrak{r}ac{t}{\alphapha} \Bigr).
\end{equation*}
Since the exponent of the exponential is negative, we can use the simple bound $e^{-x} \lesssim x^{-c/2}$, valid for all $x > 0$ and any fixed $c > 0$ (the function $x \mapsto x^{c/2} e^{-x}$ is bounded on $(0, \infty)$), to estimate
\betagin{equation*}
|\un{\mathfrak{C}}_\gamma - \un{\mathfrak{C}}_\gamma(t)| \lesssim \mathfrak{s}um_{0 < |\omega|_{\mathfrak{s}igmanfty} \leq N} \mathfrak{r}ac{\alphapha |\widehat{\un{K}}_\gamma(\omega)| |\widehat{K}_\gamma(\omega)|}{1 - \widehat{K}_\gamma(\omega)} \Bigl(\big(1 - \widehat{K}_\gamma(\omega) \big) \mathfrak{r}ac{t}{\alphapha} \Bigr)^{-c/2}.
\end{equation*}
Proceeding as in the proof of Lemma~\ref{lem:renorm-constant-underline}, we get the desired bound.
\end{proof}
Let us define the process $\un{X}'_{\gamma, \mathfrak{a}}$ as in \eqref{eq:X-under}, but via the spin field $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}$. The following result will be used in Section~\ref{sec:second-symbol}.
\betagin{lemma}\lambdabel{lem:X2-under-prime}
Let $\eta$ be as in Theorem~\ref{thm:main} and let $\un\kappappa$ and $\un\mathfrak{e}$ be as in \eqref{eq:tau-2}. There exists $\gammamma_0 > 0$ such that for every $p \geq 1$ and $T > 0$ one has
\betagin{equation}\lambdabel{eq:X2-prime-bound}
\mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} (t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta p}{2}} \bigl\| \un{X}'_{\gamma, \mathfrak{a}} (t) X'_{\gamma, \mathfrak{a}}(t) - \un{\mathfrak{C}}_\gamma (t-\tau_{\gamma, \mathfrak{a}}) \bigr\|_{L^\infty}^p\biggr] \leq C \mathfrak{a}^{2 p} \un{\mathfrak{e}}^{\eta p},
\end{equation}
uniformly over $\gamma \in (0, \gamma_0)$. The constant $C$ depends only on $p$, $T$, $\gamma_0$ and $\un\kappappa$.
\end{lemma}
\betagin{proof}
Let $I_{\gamma, \mathfrak{a}}(t,x) := \un{X}'_{\gamma, \mathfrak{a}} (t, x) X'_{\gamma, \mathfrak{a}}(t, x) - \un{\mathfrak{C}}_\gamma (t-\tau_{\gamma, \mathfrak{a}})$ be the function which we need to bound. From the proof of Lemma~\ref{lem:X-prime-bound} we know that $X'_{\gamma, \mathfrak{a}}$ solves equation \eqref{eq:X-prime-equation-P}. Similarly, we can show that
\betagin{equation}\lambdabel{eq:X-under-prime-equation}
\un{X}'_{\gamma, \mathfrak{a}} (t, x) = \bigl(P^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} \un{X}_{\gamma}\bigr)(\tau_{\gamma, \mathfrak{a}}, x) + \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^t \un{P}^{\gamma}_{t - s}(x-y) \,\mathrm{d} \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y).
\end{equation}
Here, we need to take $\gamma$ small enough so that the radius of the support of the function $\un{K}_\gamma$ becomes smaller than one. Let us denote by $Y'_{\gamma, \mathfrak{a}}(t, x)$ and $\un{Y}'_{\gamma, \mathfrak{a}} (t, x)$ the last terms in \eqref{eq:X-prime-equation-P} and \eqref{eq:X-under-prime-equation} respectively, and let us define
\betagin{equation}\lambdabel{eq:Y-definitions}
\betagin{aligned}
Y'_{\gamma, \mathfrak{a}}(r, t, x) &:= \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^{r} \widetilde{P}^{\gamma}_{t - s}(x-y) \,\mathrm{d} \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y), \\
\un{Y}'_{\gamma, \mathfrak{a}}(r, t, x) &:= \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^{r} \un{P}^{\gamma}_{t - s}(x-y) \,\mathrm{d} \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y).
\end{aligned}
\end{equation}
Then these two processes are c\`{a}dl\`{a}g martingales in $r \in [\tau_{\gamma, \mathfrak{a}}, t]$, and $Y'_{\gamma, \mathfrak{a}}(t, x) = Y'_{\gamma, \mathfrak{a}}(t, t, x)$ and $\un{Y}'_{\gamma, \mathfrak{a}}(t, x) = \un{Y}'_{\gamma, \mathfrak{a}}(t, t, x)$. Since these martingales have finite total variation, their quadratic covariation may be written as (see \cite{JS03})
\betagin{equation}\lambdabel{eq:Y-jumps}
\bigl[Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x), \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\bigr]_r = \mathfrak{s}um_{\tau_{\gamma, \mathfrak{a}} \leq s \leq r} \mathbf{D}elta_s Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\, \mathbf{D}elta_s \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x),
\end{equation}
where $\mathbf{D}elta_s Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)$ is the jump size of the martingale at time $s$. Moreover, the process
\betagin{equation}\lambdabel{eq:N-prime}
\mathfrak{N}'_{\gamma, \mathfrak{a}}(r, t, x) := \bigl[Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x), \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\bigr]_r - \lambdangle Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x), \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\rangle_r
\end{equation}
is a martingale for $r \in [\tau_{\gamma, \mathfrak{a}}, t]$, where from \eqref{eq:M-gamma-bracket} we have
\betagin{equation*}
\lambdangle Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x), \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\rangle_r = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^r \un{P}^{\gamma}_{t - s}(x-y) \widetilde{P}^{\gamma}_{t - s}(x-y) \mathbf{C}_{\gamma, \mathfrak{a}} (s,y) \,\mathrm{d} s.
\end{equation*}
We denote $\mathfrak{N}'_{\gamma, \mathfrak{a}}(t, x) = \mathfrak{N}'_{\gamma, \mathfrak{a}}(t, t, x)$. Then we multiply \eqref{eq:X-prime-equation-P} and \eqref{eq:X-under-prime-equation}, to get
\betagin{align}
I_{\gamma, \mathfrak{a}}(t, x) &= \bigl(P^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} \un{X}_{\gamma}\bigr)(\tau_{\gamma, \mathfrak{a}}, x)\, X'_{\gamma, \mathfrak{a}}(t, x) + \un{Y}'_{\gamma, \mathfrak{a}}(t, x)\, \bigl(P^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} X_{\gamma}\bigr)(\tau_{\gamma, \mathfrak{a}}, x) + \mathfrak{N}'_{\gamma, \mathfrak{a}}(t, x) \nonumber\\
&\qquad + \biggl(\mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^t \un{P}^{\gamma}_{t - s}(x-y) \widetilde{P}^{\gamma}_{t - s}(x-y) \mathbf{C}_{\gamma, \mathfrak{a}} (s,y) \,\mathrm{d} s - \un{\mathfrak{C}}_\gamma (t-\tau_{\gamma, \mathfrak{a}})\biggr). \lambdabel{eq:X-under-X-prime-expansion}
\end{align}
We denote the four terms on the right-hand side by $I_{\gamma, \mathfrak{a}}^{(i)}(t,x)$, for $i = 1, \ldots, 4$, and we will bound them one by one.
Expanding the discrete kernel as in Appendix~\ref{sec:decompositions} and using the a priori bound provided by the stopping time \eqref{eq:tau-1}, we obtain from Lemma~\ref{lem:P-convolved-with-f}
\betagin{equation*}
| (P^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} X_{\gamma})(\tau_{\gamma, \mathfrak{a}}, x)| \lesssim \mathfrak{a} (t - \tau_{\gamma, \mathfrak{a}})^{\eta/2}, \qquad | (P^\gamma_{t - \tau_{\gamma, \mathfrak{a}}} \un{X}_{\gamma})(\tau_{\gamma, \mathfrak{a}}, x)| \lesssim \mathfrak{a} (t - \tau_{\gamma, \mathfrak{a}})^{\eta/2}.
\end{equation*}
Then the first term in \eqref{eq:X-under-X-prime-expansion} we bound as
\betagin{equation*}
\mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} (t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta p}{2}} \bigl| I_{\gamma, \mathfrak{a}}^{(1)}(t,x) \bigr|^p\biggr] \leq \mathfrak{a}^p \mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} \bigl| X'_{\gamma, \mathfrak{a}}(t, x) \bigr|^{p}\biggr].
\end{equation*}
Applying Lemma~\ref{lem:X-prime-bound}, the preceding expression is bounded by a constant times $\mathfrak{a}^{2 p} \mathfrak{e}^{\eta p}$. The term $I_{\gamma, \mathfrak{a}}^{(2)}(t,x)$ can be bounded similarly. Indeed, $\un{Y}'_{\gamma, \mathfrak{a}}$ coincides with $\un{X}'_{\gamma, \mathfrak{a}}$ when the initial condition is $0$, and Lemma~\ref{lem:X-prime-bound} holds for $\un{X}'_{\gamma, \mathfrak{a}}$ with $\un{\mathfrak{e}}$ used in place of $\mathfrak{e}$. Hence, we have
\betagin{equation*}
\mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} (t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta p}{2}} \bigl| I_{\gamma, \mathfrak{a}}^{(2)}(t,x) \bigr|^p\biggr] \lesssim \mathfrak{a}^{p} \un{\mathfrak{e}}^{\eta p}.
\end{equation*}
To bound the third term in \eqref{eq:X-under-X-prime-expansion}, we use the Burkholder-Davis-Gundy inequality and get
\betagin{equation}\lambdabel{eq:I-3-bound}
\mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} \bigl| I_{\gamma, \mathfrak{a}}^{(3)}(t,x) \bigr|^p\biggr] \lesssim \biggl(\mathbb{E} \Bigl[ \bigl[ \mathfrak{N}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x) \bigr]_t\Bigr] \biggr)^{\mathfrak{r}ac{p}{2}},
\end{equation}
where the quadratic variation is computed for the martingale \eqref{eq:N-prime}. From the definition of the martingale, we get
\betagin{equation}\lambdabel{eq:N-prime-bracket}
\bigl[ \mathfrak{N}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x) \bigr]_t = \mathfrak{s}um_{\tau_{\gamma, \mathfrak{a}} \leq s \leq t} \bigl(\mathbf{D}elta_s \mathfrak{N}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x) \bigr)^2.
\end{equation}
Moreover, \eqref{eq:Y-jumps} yields $\mathbf{D}elta_s \mathfrak{N}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x) = \mathbf{D}elta_s Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\, \mathbf{D}elta_s \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)$. Furthermore, the definitions \eqref{eq:Y-definitions} allow us to bound the jumps of $Y'_{\gamma, \mathfrak{a}}$ and $\un{Y}'_{\gamma, \mathfrak{a}}$ in terms of the jumps of $\mathfrak{M}'_{\gamma, \mathfrak{a}}$. Since the jump size of the latter is bounded by $2 \gamma^{-3}$ (as follows from the scaling \eqref{eq:martingales}) and almost surely $\mathfrak{M}'_{\gamma, \mathfrak{a}}(s, y)$ jumps at a unique point $y$, we get
\betagin{equation*}
\bigl|\mathbf{D}elta_s Y'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\bigr| \leq 2 \gamma^{-3} \mathtt{var}epsilon^3 \bigl\|\widetilde{P}^{\gamma}_{t - s}\bigr\|_{L^\infty}, \qquad \bigl|\mathbf{D}elta_s \un{Y}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x)\bigr| \leq 2 \gamma^{-3} \mathtt{var}epsilon^3 \bigl\|\un{P}^{\gamma}_{t - s}\bigr\|_{L^\infty}.
\end{equation*}
From Lemma~\ref{lem:tilde-P-bound} we have $\bigl\|\widetilde{P}^{\gamma}_{t - s}\bigr\|_{L^\infty} \lesssim (t-s + \mathfrak{e}^2)^{-3/2}$ and $\bigl\|\un{P}^{\gamma}_{t - s}\bigr\|_{L^\infty} \lesssim (t-s + \un{\mathfrak{e}}^2)^{-3/2}$. Using these bounds in \eqref{eq:N-prime-bracket} yields
\betagin{equation*}
\bigl[ \mathfrak{N}'_{\gamma, \mathfrak{a}}(\bigcdot, t, x) \bigr]_t \lesssim \gamma^{18} \mathfrak{s}um_{\tau_{\gamma, \mathfrak{a}} \leq s \leq t} (t-s + \un{\mathfrak{e}}^2)^{-3} \mathbbm{1}_{\{ s: \, \mathfrak{M}'_{\gamma, \mathfrak{a}}(s, x) - \mathfrak{M}'_{\gamma, \mathfrak{a}}(s-, x) \neq 0 \}},
\end{equation*}
where $\mathbbm{1}$ is the indicator function, so that the sum runs over the jump times of the martingale $\mathfrak{M}'_{\gamma, \mathfrak{a}}(\bigcdot, x)$. The moments of the number of jumps of the martingales are of order $\gamma^{-6}$, and hence the expectation of the preceding expression is bounded by a constant times
\betagin{equation*}
\gamma^{12} \int_{\tau_{\gamma, \mathfrak{a}}}^{t} (t-s + \un{\mathfrak{e}}^2)^{-3} \mathrm{d} s \lesssim \gamma^{12} \un{\mathfrak{e}}^{-4} \lesssim \gamma^{-4\un\kappappa}.
\end{equation*}
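For the time integral in the last display we simply used the explicit bound (no new notation is introduced here):
\begin{equation*}
\int_{\tau_{\gamma, \mathfrak{a}}}^{t} (t - s + \un{\mathfrak{e}}^2)^{-3}\, \mathrm{d} s \leq \int_{0}^{\infty} (r + \un{\mathfrak{e}}^2)^{-3}\, \mathrm{d} r = \tfrac{1}{2}\, \un{\mathfrak{e}}^{-4}.
\end{equation*}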
Then the right-hand side of \eqref{eq:I-3-bound} is bounded by a constant multiple of $\gamma^{-2 \un \kappappa p}$.
It remains to bound the last term in \eqref{eq:X-under-X-prime-expansion}. Using \eqref{eq:bC} and \eqref{eq:C-gamma-function}, we have
\betagin{equation*}
I_{\gamma, \mathfrak{a}}^{(4)}(t,x) = - \mathfrak{r}ac{2 \mathtt{var}epsilon^6}{\alphapha} \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^t \un{P}^{\gamma}_{t - s}(x-y) \widetilde{P}^{\gamma}_{t - s}(x-y) S'_{\gamma, \mathfrak{a}} (s, y) X'_{\gamma, \mathfrak{a}}(s, y) \,\mathrm{d} s.
\end{equation*}
Let $I_{\gamma, \mathfrak{a}}^{(5)}(t,x)$ be defined by this formula with $S'_{\gamma, \mathfrak{a}}$ replaced by $\un{X}'_{\gamma, \mathfrak{a}}$. From Lemma~\ref{lem:X-prime-bound} we have $|X'_{\gamma, \mathfrak{a}}(s, y)| \lesssim \mathfrak{e}^{\eta}$, and we also have $|S'_{\gamma, \mathfrak{a}}(s, y)| \lesssim \mathfrak{e}^{-1}$. Then, if we replace the kernels $\un{P}^{\gamma}$ and $\widetilde{P}^{\gamma}$ in $I_{\gamma, \mathfrak{a}}^{(5)}$ by $\un{\mathscr{K}}^{\gamma}$ and $\mywidetilde{\mathscr{K}}^{\gamma}$, we get an error of order $\mathfrak{e}^{1 + \eta}$. Then Lemma~\ref{lem:Y-approximation} yields $|I_{\gamma, \mathfrak{a}}^{(4)}(t,x) - I_{\gamma, \mathfrak{a}}^{(5)}(t,x)| \lesssim \gamma^{3 \eta}$ uniformly in $x$ and locally uniformly in $t$. To bound $I_{\gamma, \mathfrak{a}}^{(5)}$ we write
\betagin{align*}
I_{\gamma, \mathfrak{a}}^{(5)}(t,x) &= - \mathfrak{r}ac{2 \mathtt{var}epsilon^6}{\alphapha} \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^t \un{P}^{\gamma}_{t - s}(x-y) \widetilde{P}^{\gamma}_{t - s}(x-y) I_{\gamma, \mathfrak{a}}(s, y) \,\mathrm{d} s \\
&\qquad + \mathfrak{r}ac{2 \mathtt{var}epsilon^6}{\alphapha} \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_{\tau_{\gamma, \mathfrak{a}}}^t \un{P}^{\gamma}_{t - s}(x-y) \widetilde{P}^{\gamma}_{t - s}(x-y) \un{\mathfrak{C}}_\gamma (s-\tau_{\gamma, \mathfrak{a}}) \,\mathrm{d} s,
\end{align*}
and we denote these two terms by $I_{\gamma, \mathfrak{a}}^{(6)}(t,x)$ and $I_{\gamma, \mathfrak{a}}^{(7)}(t,x)$. Since $|\un{\mathfrak{C}}_\gamma (s)| \lesssim \mathfrak{e}^{-1}$ (which follows from Lemma~\ref{lem:renorm-constant-underline}), we get $|I_{\gamma, \mathfrak{a}}^{(7)}(t,x)| \lesssim 1$. Furthermore, we have $|I_{\gamma, \mathfrak{a}}^{(6)}(t,x)| \lesssim \mathfrak{e} \mathfrak{s}up_{s \in [\tau_{\gamma, \mathfrak{a}}, t]}\| I_{\gamma, \mathfrak{a}}(s)\|_{L^\infty}$.
Combining all the previous bounds, we get
\betagin{align*}
\mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} (t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta p}{2}} \bigl\| I_{\gamma, \mathfrak{a}}(t) \bigr\|_{L^\infty}^p\biggr] &\lesssim \mathfrak{a}^{2 p} \mathfrak{e}^{\eta p} + \mathfrak{a}^{p} \un{\mathfrak{e}}^{\eta p} + \gamma^{-2 \un \kappappa p} + \gamma^{3 \eta p}\\
&\qquad + \mathfrak{e}^p \mathbb{E} \biggl[ \mathfrak{s}up_{t \in [\tau_{\gamma, \mathfrak{a}}, T]} (t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta p}{2}} \bigl\| I_{\gamma, \mathfrak{a}}(t) \bigr\|_{L^\infty}^p\biggr].
\end{align*}
Taking $\gamma$ (and hence $\mathfrak{e}$) small enough that the prefactor $\mathfrak{e}^p$ of the last term is smaller than $1$, we can absorb this term into the left-hand side and obtain the required bound \eqref{eq:X2-prime-bound}.
\end{proof}
\mathfrak{s}ection{Moment bounds for the discrete models}
\lambdabel{sec:convergence-of-models}
Let $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ be the discrete model defined in Section \ref{sec:discreteRegStruct}. In this section, we prove that this model is bounded uniformly in $\gamma$. Moreover, we introduce a new discrete model $Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$, defined as $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ but via mollified martingales. Then we show that the distance between these two models vanishes as $\mathrm{d}elta \to 0$, uniformly in $\gamma$.
Let $\mathtt{var}rho : \R^4 \to \R$ be a symmetric smooth function, supported on the ball of radius $1$ (with respect to the parabolic distance $\| \bigcdot \|_\mathfrak{s}$) and satisfying $\int_{\R^4} \mathtt{var}rho(z) \mathrm{d} z = 1$. For any $\mathrm{d}elta \in (0,1)$ we define its rescaling
\betagin{equation}\lambdabel{eq:rho}
\mathtt{var}rho_\mathrm{d}elta(t, x) := \mathfrak{r}ac{1}{\mathrm{d}elta^{5}} \mathtt{var}rho \Bigl( \mathfrak{r}ac{t}{\mathrm{d}elta^2}, \mathfrak{r}ac{x}{\mathrm{d}elta}\Bigr).
\end{equation}
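The exponent $5$ in \eqref{eq:rho} is dictated by the parabolic scaling in $3+1$ dimensions; as a quick sanity check, the rescaling preserves the total mass:
\begin{equation*}
\int_{\R^4} \varrho_\delta(t, x)\, \mathrm{d} t\, \mathrm{d} x = \frac{1}{\delta^{5}} \int_{\R^4} \varrho \Bigl( \frac{t}{\delta^2}, \frac{x}{\delta} \Bigr)\, \mathrm{d} t\, \mathrm{d} x = \int_{\R^4} \varrho(s, y)\, \mathrm{d} s\, \mathrm{d} y = 1,
\end{equation*}
where we substituted $s = t / \delta^2$ and $y = x / \delta$, whose Jacobian equals exactly $\delta^{5}$.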
We need to modify this function so that its integral over $D_\mathtt{var}epsilon$ equals $1$. For this, we approximate the function by its local averages as
\betagin{equation}\lambdabel{eq:rho-gamma}
\mathtt{var}rho_{\gamma, \mathrm{d}elta}(t, x) := \mathtt{var}epsilon^{-3} \int_{y \in \R^3: |y - x|_\infty \leq \mathtt{var}epsilon/2} \mathtt{var}rho_\mathrm{d}elta(t, y) \mathrm{d} y,
\end{equation}
which satisfies $\int_{D_\mathtt{var}epsilon} \mathtt{var}rho_{\gamma, \mathrm{d}elta}(z) \mathrm{d} z = 1$, since the averaging in \eqref{eq:rho-gamma} preserves the total mass of $\mathtt{var}rho_\mathrm{d}elta$. We regularise the martingales in the following way:
\betagin{equation}\lambdabel{eq:xi-gamma-delta}
\xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}(t,x) := \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^{3} \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mathtt{var}rho_{\gamma, \mathrm{d}elta} (t - s,x - y)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(s, y).
\end{equation}
Then the process $\xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}(t,x)$ is defined on $(t,x) \in \R \times \T_{\mathtt{var}epsilon}^3$, but it is not a martingale anymore. On the other hand, a convolution with this process can be interpreted as a stochastic integral. For example, a convolution with the kernel $\mywidetilde{\mathscr{K}}^\gamma$ may be written as
\betagin{equation*}
\bigl(\mywidetilde{\mathscr{K}}^\gamma \mathfrak{s}tar_\mathtt{var}epsilon \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}\bigr)(t,x) = \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\R} \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}_{t-s}(x - y)\, \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(s, y),
\end{equation*}
where $\mathfrak{s}tar_\mathtt{var}epsilon$ is the convolution on $D_\mathtt{var}epsilon$ and $\mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta} := \mywidetilde{\mathscr{K}}^{\gamma} \mathfrak{s}tar_\mathtt{var}epsilon \mathtt{var}rho_{\gamma, \mathrm{d}elta}$. Then we can easily compare the two kernels as
\betagin{equation*}
\bigl(\mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta} - \mywidetilde{\mathscr{K}}^{\gamma}\bigr)(z) = \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^{\gamma}(z - \bar z)\bigl(\mathtt{var}rho_{\gamma, \mathrm{d}elta}(\bar z) - 1\bigr) \mathrm{d} \bar z,
\end{equation*}
which is the main reason to mollify the noise using the function \eqref{eq:rho}.
Using $\xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}$, we make the following definitions
\betagin{equation*}
\bigl(\mathbb{P}Pi^{\gamma, \mathrm{d}elta, \mathfrak{a}} \blue\Xi\bigr)(z) = \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}(z), \qquad \bigl(\mathbb{P}Pi^{\gamma, \mathrm{d}elta, \mathfrak{a}} \<1b>\bigr)(z) = \bigl(\mywidetilde{\mathscr{K}}^\gamma \mathfrak{s}tar_\mathtt{var}epsilon \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}\bigr)(z).
\end{equation*}
After that we define the linear map $\mathbb{P}Pi^{\gamma, \mathrm{d}elta, \mathfrak{a}}$ on $\mathcal{C}T$ by the same recursive definitions as in Section~\ref{sec:PPi}, but using the following renormalisation constants in place of \eqref{eq:renorm-constant1}, \eqref{eq:renorm-constant3} and \eqref{eq:renorm-constant2} respectively:
\betagin{equation}\lambdabel{eq:convolved_constants}
\betagin{aligned}
\mathfrak{c}_{\gamma, \mathrm{d}elta} &:= \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}(z)^2\, \mathrm{d} z, \\
\mathfrak{c}_{\gamma, \mathrm{d}elta}' &:= -\beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \un{\mathfrak{C}}_\gamma \mathfrak{c}_{\gamma, \mathrm{d}elta}, \\
\mathfrak{c}_{\gamma, \mathrm{d}elta}'' &:= 2 \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z) \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}(z_1) \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}(z_2) \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}(z_1 - z) \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}(z_2 - z)\, \mathrm{d} z\, \mathrm{d} z_1\, \mathrm{d} z_2.
\end{aligned}
\end{equation}
As we did in Section~\ref{sec:model-lift}, we define a discrete model $Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}} = (\mathbb{P}i^{\gamma, \mathrm{d}elta, \mathfrak{a}}, \Gamma^{\gamma, \mathrm{d}elta, \mathfrak{a}})$ from the map $\mathbb{P}Pi^{\gamma, \mathrm{d}elta, \mathfrak{a}}$. In the following proposition we provide moment bounds for this model.
\betagin{proposition}\lambdabel{prop:models-converge}
Let the constants $\kappappa$ and $\un \kappappa$, used in \eqref{eqs:hom} and \eqref{eq:tau-2}, satisfy $\kappappa \geq \un{\kappappa}$. Then for the discrete models $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ and $Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$, there exist $\gamma_0 > 0$ and $\theta > 0$ for which the following holds: for any $p \geq 1$ and $T > 0$ there is $C > 0$ such that
\betagin{equation}\lambdabel{eq:prop:models-converge}
\mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{E} \Bigl[\bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T}^{(\mathfrak{e})}\bigr)^p\Bigr] \leq C, \qquad \mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{E} \Bigl[\bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}; Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_T^{(\mathfrak{e})}\bigr)^p\Bigr] \leq C \mathrm{d}elta^{\theta p},
\end{equation}
for any $\mathrm{d}elta \in (0,1)$. Here, we use the metrics for the discrete models, defined in Remark~\ref{rem:model-no-K}.
\end{proposition}
We prove this proposition in Section~\ref{sec:convergence-of-models-proof}.
For this, we use the framework developed in \cite{Martingales}, which provides moment bounds on multiple stochastic integrals with respect to a quite general class of martingales. We showed in Section~\ref{sec:martingales-properties} that the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}$, introduced in Section~\ref{sec:a-priori}, have the required properties.
\mathfrak{s}ubsection{Bounds on the discrete model}
\lambdabel{sec:Models_bounds}
The basis elements of the regularity structure are listed in Tables~\ref{tab:symbols-cont} and \ref{tab:symbols}. In this section we are going to prove bounds on the map $\mathbb{P}i^{\gamma, \mathfrak{a}}$ from the discrete model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ only on those basis elements of negative homogeneity which do not contain the symbols $\mathcal{C}E$ and $\symbol{X}_i$. More precisely, we consider the set
\betagin{equation*}
\bar{\mathcal{C}W} = \bigl\{\<1b>, \<2b>, \<3b>, \<22b>, \<31b>, \<32b>, \<4b>, \<5b>\bigr\},
\end{equation*}
and prove the following bounds for its elements. We use in the statement of this proposition and in its proof the notation from Section~\ref{sec:DiscreteModels}.
\betagin{proposition}\lambdabel{prop:Pi-bounds}
Let the constants $\kappappa$ and $\un \kappappa$, used in \eqref{eqs:hom} and \eqref{eq:tau-2}, satisfy $\kappappa \geq \un{\kappappa}$. Then there are constants $\bar \kappappa > 0$, $\gamma_0 > 0$ and $\theta > 0$, such that for any $\tau \in \bar{\mathcal{C}W}$, $p \geq 1$ and $T > 0$ there is $C > 0$ for which we have the bounds
\betagin{align}
\Bigl(\mathbb{E} \bigl| \iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau\bigr)(\mathtt{var}phi^\lambdambda_z)\bigr|^p\Bigr)^{\mathfrak{r}ac{1}{p}} &\leq C (\lambdambda \vee \mathfrak{e})^{|\tau| + \bar \kappappa}, \lambdabel{eq:model_bound} \\
\Bigl(\mathbb{E} \bigl| \iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_z \tau - \mathbb{P}i^{\gamma, \mathrm{d}elta, \mathfrak{a}}_z \tau\bigr) (\mathtt{var}phi^\lambdambda_z) \bigr|^p\Bigr)^{\mathfrak{r}ac{1}{p}} &\leq C \mathrm{d}elta^{\theta} (\lambdambda \vee \mathfrak{e})^{|\tau| + \bar \kappappa - \theta},\lambdabel{eq:model_bound-delta}
\end{align}
uniformly in $z \in D_\mathtt{var}epsilon$, $\lambdambda \in (0,1]$, $\mathtt{var}phi \in \mathcal{C}B^2_\mathfrak{s}$ and $\gamma \in (0, \gamma_0)$.
\end{proposition}
\noindent The rest of this section is devoted to the proof of this result. We are going to prove the bounds \eqref{eq:model_bound}-\eqref{eq:model_bound-delta} for any $p$ sufficiently large; the bounds for any $p \geq 1$ then follow by the H\"{o}lder inequality.
For every symbol $\tau \in \bar{\mathcal{C}W}$, we use the definition of the discrete model in Section~\ref{sec:model-lift} and the expansion \cite[Eq.~2.14]{Martingales} to write $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ as a sum of terms of the form
\betagin{align}\lambdabel{eq:terms-in-Pi}
&\int_{D_\mathtt{var}epsilon} \mathtt{var}phii_z^\lambdambda(\bar z) \biggl(\int_{D_\mathtt{var}epsilon^{\otimes n}} F_{\bar z} (z_1, \ldots, z_n) \, \mathrm{d} \mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}} (z_1, \ldots, z_n)\biggr) \mathrm{d} \bar z \\
&\hspace{3cm} = \int_{D_\mathtt{var}epsilon^{\otimes n}} \biggl( \int_{D_\mathtt{var}epsilon} \mathtt{var}phii_z^\lambdambda(\bar z) F_{\bar z} (z_1, \ldots, z_n) \, \mathrm{d} \bar z \biggr) \mathrm{d} \mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}} (z_1, \ldots, z_n), \nonumber
\end{align}
where the measure $\mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}}$ is defined in Section~2.1 in \cite{Martingales} for the martingales $\mathfrak{M}_{\gamma, \mathfrak{a}}$, and $F$ is a function of $n$ space-time variables. Similarly, we write $\iota_\mathtt{var}epsilon(\mathbb{P}i_{z}^{\gamma, \mathrm{d}elta, \mathfrak{a}}\tau)(\mathtt{var}phii_z^\lambdambda)$ as a sum of terms of the form
\betagin{align}\lambdabel{eq:terms-in-Pi-de}
&\int_{D_\mathtt{var}epsilon} \mathtt{var}phii_z^\lambdambda(\bar z) \biggl(\int_{D_\mathtt{var}epsilon^{\otimes n}} F_{\bar z} (z_1, \ldots, z_n) \, \mathrm{d} \mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}, (\mathrm{d}elta)} (z_1, \ldots, z_n)\biggr) \mathrm{d} \bar z \\
&\hspace{3cm} = \int_{D_\mathtt{var}epsilon^{\otimes n}} \biggl( \int_{D_\mathtt{var}epsilon} \mathtt{var}phii_z^\lambdambda(\bar z) \big( F_{\bar z} \mathfrak{s}tar_\mathtt{var}epsilon \mathtt{var}rho_{\gamma, \mathrm{d}elta} \big) (z_1, \ldots, z_n) \, \mathrm{d} \bar z \biggr) \mathrm{d} \mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}} (z_1, \ldots, z_n), \nonumber
\end{align}
where $\mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}, (\mathrm{d}elta)} (z_1, \ldots, z_n)$ stands for the product measure associated to the regularised martingales $\xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ defined in \eqref{eq:xi-gamma-delta}. The functions $F$ will typically be defined in terms of the singular part $\mywidetilde{\mathscr{K}}^\gamma$ of the decomposition $\widetilde{G}^\gamma = \mywidetilde{\mathscr{K}}^\gamma + \mywidetilde{\mathscr{R}}^\gamma$ performed in Appendix~\ref{sec:decompositions}, or in terms of the function $\mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta} := \mathtt{var}rho_{\gamma, \mathrm{d}elta} \mathfrak{s}tar_\mathtt{var}epsilon \mywidetilde{\mathscr{K}}^\gamma$, where $\mathtt{var}rho_{\gamma, \mathrm{d}elta}$ is the mollifier from \eqref{eq:rho-gamma}.
To bound the terms \eqref{eq:terms-in-Pi} and their difference with those in \eqref{eq:terms-in-Pi-de}, we are going to use Corollary~5.6 in \cite{Martingales}. For this, it is convenient to use graphical notation to represent the function $F$ and integrals, where nodes represent variables and arrows represent kernels. In what follows, the vertex ``\,\tikz[baseline=-3] \node [root] {};\,'' labelled with $z$ represents the basis point $z \in D_\mathtt{var}epsilon$; the arrow ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[testfcn] (1,0) to (0,0);\,'' represents a test function $\mathtt{var}phii^\lambdambda_z$; the arrow ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[keps] (0,0) to (1,0);\,'' represents the discrete kernel $\mywidetilde{\mathscr{K}}^\gamma$, and we will write two labels $(a_e, r_e)$ on this arrow, which correspond to the labels on graphs as described in \cite[Sec.~5]{Martingales}. More precisely, since the kernel $\mywidetilde{\mathscr{K}}^\gamma$ satisfies \cite[Assum.~4]{Martingales} with $a_e=3$ (see Lemma~\ref{lem:Pgt}), we draw ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[keps] (0,0) to node[labl,pos=0.45] {\tiny 3,0} (1,0);\,''.
Each variable $z_i$ integrated with respect to the measure $\mathbf{M}^{\otimes n}_{\gamma, \mathfrak{a}}$ with $n \geq 2$ is denoted by a node ``\,\tikz[baseline=-3] \node [var_very_blue] {};\,''; a variable integrated with respect to the martingale $\mathfrak{M}_{\gamma, \mathfrak{a}}$ is denoted by ``\,\tikz[baseline=-3] \node [var_blue] {};\,''. By the node ``\,\tikz[baseline=-3] \node [dot] {};\,'' we denote a variable integrated out in $D_\mathtt{var}epsilon$.
\mathfrak{s}ubsubsection{The element $\tau = \protect\<1b>$}
We represent the function $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau$, defined in \eqref{eq:lift-hermite}, diagrammatically as
\betagin{equation*}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0.9,0.2) [root] (root) {};
\node at (0.9,0.2) [rootlab] {$z$};
\node at (-2.3, 0.2) [dot] (int) {};
\node at (-5.5,0.2) [var_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}\;.
\end{equation*}
This diagram is in the form \eqref{eq:terms-in-Pi} with $n=1$, where in this case, in the inner integral, we have the generalised convolution $\mathcal{C}K^{\lambdambda, \mathfrak{e}}_{\mathcal{C}CG, z}$ given by (as in \cite[Eq.~5.13]{Martingales})
\betagin{equation*}
\mathcal{C}K^{\lambdambda, \mathfrak{e}}_{\mathcal{C}CG, z}(z^{\mathtt{var}}) = \int_{D_\mathtt{var}epsilon} \!\!\mathtt{var}phii_z^\lambdambda(\bar z)\, \mywidetilde{\mathscr{K}}^\gamma(\bar z - z^{\mathtt{var}})\, \mathrm{d} \bar z.
\end{equation*}
One can check that \cite[Assum.~3]{Martingales} is satisfied for this diagram with a trivial contraction and the bound \cite[Eq.~5.16]{Martingales} holds with the sets $\tilde \mathcal{C}CV_{\!\mathtt{var}} = \Gamma = \{ 1 \}$ and labeling $\mathtt{L} = \{ \mathtt{nil} \}$. The set $B$ in this bound has to be $\emptyset$, while $A$ might be either $\{ 1 \}$ or $\emptyset$. From the diagram we see that $| \tilde \mathcal{C}CV_{\!\mathtt{var}}| = 1$ and $|\hat \mathcal{C}CV_{\!\bar\mathfrak{s}tar} \mathfrak{s}etminus \hat \mathcal{C}CV^\uparrow_{\!\mathfrak{s}tar}| = 1$; therefore, the value of the constant $\nu_\gammamma$ in \cite[Eq.~5.15]{Martingales} is $-\mathfrak{r}ac{1}{2}$. Applying \cite[Cor.~5.6]{Martingales}, we get that, for any $\bar\kappappa > 0$ and any $p \geq 2$ large enough:
\betagin{equation*}
\Bigl( \mathbb{E} \bigl| \iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda) \bigr|^p\Bigr)^{\mathfrak{r}ac{1}{p}} \lesssim (\lambdambda \vee \mathfrak{e})^{- \mathfrak{r}ac{1}{2}} \Bigl( 1 + \mathtt{var}epsilon^{\mathfrak{r}ac{9}{4} - \bar\kappappa} \mathfrak{e}^{-\mathfrak{r}ac52} \Bigr).
\end{equation*}
Since $\mathfrak{e} \approx \gamma^3$ and $\mathtt{var}epsilon \approx \gamma^4$, this expression is bounded by a multiple of $(\lambdambda \vee \mathfrak{e})^{- \mathfrak{r}ac{1}{2}}$ as required in \eqref{eq:model_bound} (recall that $|\tau| = -\mathfrak{r}ac{1}{2}-\kappappa$).
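Explicitly, with these scalings the correction term in the last display is bounded by a positive power of $\gamma$, provided $\bar\kappa$ is chosen small enough (say $\bar\kappa < 3/8$):
\begin{equation*}
\varepsilon^{\frac{9}{4} - \bar\kappa}\, \mathfrak{e}^{-\frac{5}{2}} \approx \gamma^{4 (\frac{9}{4} - \bar\kappa)}\, \gamma^{-\frac{15}{2}} = \gamma^{\frac{3}{2} - 4 \bar\kappa} \leq 1.
\end{equation*}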
In what follows, we use the notation and terminology from \cite[Sec.~5]{Martingales} in the same way as we did for this element $\tau$, without repeating the references every time.
\mathfrak{s}ubsubsection{The element $\tau = \protect\<2b>$}
\lambdabel{sec:second-symbol}
Using the definition \eqref{eq:lift-hermite} and the expansion \cite[Eq.~2.11]{Martingales}, the function $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau$ can be represented by the diagrams
\betagin{equation}\lambdabel{eq:Pi2}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,-2.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-1.5,2.5) [var_blue] (left) {};
\node at (1.5,2.5) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_very_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; - \; (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' ) \,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\mathrm{d}raw[testfcn] (int) to (root);
\end{tikzpicture}\;.
\end{equation}
Let us denote by ``\,\tikz[baseline=-3] \node [var_red_triangle] {};\,'' the integration against the family of increasing processes given by the predictable quadratic variations $x \mapsto \lambdangle \mathfrak{M}_{\gamma, \mathfrak{a}}(x) \rangle$, and by ``\,\tikz[baseline=-3] \node [var_red_square] {};\,'' the integration against the family of martingales $x \mapsto [\mathfrak{M}_{\gamma, \mathfrak{a}}(x)] - \lambdangle \mathfrak{M}_{\gamma, \mathfrak{a}}(x) \rangle$. Then we can write \eqref{eq:Pi2} as
\betagin{equation}\lambdabel{eq:Pi2-new}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,-2.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-1.5,2.5) [var_blue] (left) {};
\node at (1.5,2.5) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_square] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\; \left(
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_triangle] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; - \; (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' ) \,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\mathrm{d}raw[testfcn] (int) to (root);
\end{tikzpicture} \right).
\end{equation}
Let us denote the first two of these diagrams by $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 1}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ and $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 2}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ respectively, and let $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 3}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ denote the expression in the brackets in \eqref{eq:Pi2-new}.
Let us analyse the first diagram in \eqref{eq:Pi2-new}. \cite[Assum.~3]{Martingales} is satisfied for it with a trivial contraction, and the bound \cite[Eq.~5.16]{Martingales} holds with the sets $\tilde \mathcal{C}CV_{\!\mathtt{var}} = \Gamma = \{1, 2\}$ and labeling $\mathtt{L} = \{ \mathtt{nil}, \mathtt{nil} \}$. The set $B$ in \cite[Eq.~5.16]{Martingales} needs to be $\emptyset$, while $A$ can be $\emptyset$, $\{1\}$, $\{2\}$ or $\{1, 2\}$. Furthermore, we have $| \hat \mathcal{C}CV_{\!\mathtt{var}}| = 2$ and $|\hat \mathcal{C}CV_{\!\bar\mathfrak{s}tar} \mathfrak{s}etminus \hat \mathcal{C}CV^\uparrow_{\!\mathfrak{s}tar}| = 2$, so that the value of the constant $\nu_\gammamma$ in \cite[Eq.~5.16]{Martingales} is $-1$. Applying \cite[Cor.~5.6]{Martingales} to this diagram, we get, for any $\bar \kappappa > 0$ and any $p \geq 2$ large enough,
\betagin{equation*}
\Bigl(\mathbb{E} \bigl|\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 1}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)\bigr|^p\Bigr)^{\mathfrak{r}ac{1}{p}} \lesssim (\lambdambda \vee \mathfrak{e})^{-1} \Bigl( 1 + \mathtt{var}epsilon^{\mathfrak{r}ac94 - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac52} + \mathtt{var}epsilon^{\mathfrak{r}ac92 - \bar \kappappa} \mathfrak{e}^{-5} \Bigr).
\end{equation*}
Recalling that $|\tau| = -1-2\kappappa$, we get the required bound \eqref{eq:model_bound}.
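Indeed, the first correction term was already computed above, $\mathtt{var}epsilon^{\mathfrak{r}ac{9}{4} - \bar\kappappa} \mathfrak{e}^{-\mathfrak{r}ac{5}{2}} \approx \gamma^{\mathfrak{r}ac{3}{2} - 4 \bar\kappappa}$, while the second one satisfies
\betagin{equation*}
\mathtt{var}epsilon^{\mathfrak{r}ac{9}{2} - \bar\kappappa} \mathfrak{e}^{-5} \approx \gamma^{18 - 4 \bar\kappappa - 15} = \gamma^{3 - 4 \bar\kappappa},
\end{equation*}
so both vanish as $\gamma \to 0$ for $\bar\kappappa$ small enough.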
For the second diagram in \eqref{eq:Pi2-new} we have $\tilde \mathcal{C}CV_{\!\mathtt{var}} = \{1\}$, $\Gamma = \emptyset$ and the labeling $\mathtt{L} = \{ \mathrm{d}iamond \}$. However, the graph does not satisfy \cite[Assum.~3]{Martingales}. To resolve this problem, we note that multiplying a kernel by $\mathfrak{e}^{3-a}$ with $a > 0$ ``improves'' its regularity by $3-a$, in the sense that the singularity of the kernel now diverges like $\mathfrak{e}^{-a}$ instead of $\mathfrak{e}^{-3}$. Then for $0 < a < \mathfrak{r}ac{5}{2}$ we can write
\betagin{equation}\lambdabel{eq:renorm-of-2}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 2}\tau\bigr)(\mathtt{var}phii_z^\lambdambda) = \mathfrak{e}^{2 (a - 3)}
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_square] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny a,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny a,0} (int);
\end{tikzpicture}\;,
\end{equation}
and \cite[Assum.~3]{Martingales} is satisfied. Then for any $\bar \kappappa > 0$ and any $p \geq 2$ large enough \cite[Cor.~5.6]{Martingales} yields
\betagin{equation*}
\Bigl(\mathbb{E} \bigl|\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 2}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)\bigr|^p\Bigr)^{\mathfrak{r}ac{1}{p}} \lesssim \mathfrak{e}^{2 (a - 3)} (\lambdambda \vee \mathfrak{e})^{\mathfrak{r}ac{5}{2} - 2 a} \Bigl( \mathtt{var}epsilon^{\mathfrak{r}ac94} + \mathtt{var}epsilon^{\mathfrak{r}ac92 - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{5}{2}} \Bigr).
\end{equation*}
For $\mathfrak{r}ac{3}{2} < a \leq \mathfrak{r}ac{7}{4}$ and $\bar \kappappa > 0$ small enough the right-hand side is bounded by $c_\gammamma (\lambdambda \vee \mathfrak{e})^{-1}$, where $c_\gammamma$ vanishes as $\gamma \to 0$.
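A sketch of the power counting behind this claim, again with $\mathfrak{e} \approx \gamma^3$ and $\mathtt{var}epsilon \approx \gamma^4$: for $a \leq \mathfrak{r}ac{7}{4}$ one has $(\lambdambda \vee \mathfrak{e})^{\mathfrak{r}ac{5}{2} - 2 a} \leq (\lambdambda \vee \mathfrak{e})^{-1}$ since $\lambdambda \vee \mathfrak{e} \leq 1$, while
\betagin{equation*}
\mathfrak{e}^{2 (a - 3)} \mathtt{var}epsilon^{\mathfrak{r}ac{9}{4}} \approx \gamma^{6 a - 9}, \qquad \mathfrak{e}^{2 (a - 3)} \mathtt{var}epsilon^{\mathfrak{r}ac{9}{2} - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{5}{2}} \approx \gamma^{6 a - \mathfrak{r}ac{15}{2} - 4 \bar\kappappa},
\end{equation*}
and both exponents are strictly positive for $a > \mathfrak{r}ac{3}{2}$ and $\bar\kappappa$ small enough, so one may take $c_\gammamma$ proportional to the sum of these two powers of $\gamma$.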
The term $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 3}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ requires a more complicated analysis. Using the quadratic covariation \eqref{eq:bC} and the definition of the renormalisation constants \eqref{eq:renorm-constant1} and \eqref{eq:renorm-constant3}, we can write
\betagin{align}
&\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 3}\tau\bigr)(\mathtt{var}phii_z^\lambdambda) = \mathfrak{r}ac{1}{2} \int_{D_\mathtt{var}epsilon} \mathtt{var}phii^\lambdambda_z (\bar z) \biggl( \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde{y} \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_0^\infty \mywidetilde{\mathscr{K}}^\gamma_{\bar t - \tilde{s}}(\bar x - \tilde{y})^2 \bigl(\mathbf{C}_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) - 2 + 2 \beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z \nonumber \\
&\quad + \mathfrak{r}ac{1}{2} \int_{D_\mathtt{var}epsilon} \mathtt{var}phii^\lambdambda_z (\bar z) \biggl( \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde{y} \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{-\infty}^0 \mywidetilde{\mathscr{K}}^\gamma_{\bar t - \tilde{s}}(\bar x - \tilde{y})^2 \bigl(\widetilde{\mathbf{C}}_{\gamma, \mathfrak{a}}(-\tilde{s}, \tilde{y}) - 2 + 2 \beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z \lambdabel{eq:Pi-3}
\end{align}
where $\bar z = (\bar t, \bar x)$ and $\widetilde{\mathbf{C}}_{\gamma, \mathfrak{a}}$ is the bracket process \eqref{eq:bC} of the martingale $\widetilde{\mathfrak{M}}_{\gamma, \mathfrak{a}}$ used in \eqref{eq:martingale-extension}. It is the extension \eqref{eq:martingale-extension} that forces us to consider two integrals, over positive and over negative times. Since the two terms in \eqref{eq:Pi-3} are bounded in the same way, we only derive a bound on the first one below.
Using the rescaled process $S_\gamma$ defined in \eqref{eq:S-def}, we then get from formula \eqref{eq:bC}
\betagin{equation*}
\mathbf{C}_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) - 2 =
\betagin{cases}
- 2 \mathtt{var}kappa_{\gamma, 3} \gamma^3 S_\gamma (\tilde{s}, \tilde{y}) \tanh \bigl( \beta \mathrm{d}eltalta X_\gamma(\tilde{s}, \tilde{y}) \bigr) + 2 (1 - \mathtt{var}kappa_{\gamma, 3}) &\text{for}~~ \tilde{s} < \tau_{\gamma, \mathfrak{a}}, \\
- 2 \mathtt{var}kappa_{\gamma, 3} \gamma^6 S'_{\gamma, \mathfrak{a}} (\tilde{s}, \tilde{y}) X'_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) + 2 (1 - \mathtt{var}kappa_{\gamma, 3}) &\text{for}~~ \tilde{s} \geq \tau_{\gamma, \mathfrak{a}}.
\end{cases}
\end{equation*}
From \eqref{eq:c-gamma-2} we have $1 - \mathtt{var}kappa_{\gamma, 3} = \mathcal{C}O(\gamma^{4})$. Moreover, the function $\tanh$ can be approximated by its first-order Taylor polynomial: $\tanh(x) = x + \mathcal{C}O(x^3)$, and \eqref{eq:X-apriori} yields $\| X_\gamma(\cdot) \|_{L^\infty} \lesssim \mathfrak{e}^{\eta}$ almost surely. Hence, $|\tanh \bigl( \beta \mathrm{d}eltalta X_\gamma(\tilde{s}, \tilde{y}) \bigr) - \beta \mathrm{d}eltalta X_\gamma(\tilde{s}, \tilde{y})| \lesssim \mathrm{d}eltalta^3 \mathfrak{e}^{3\eta} \lesssim \gamma^{9 (1 + \eta)}$ almost surely uniformly in $\tilde{y}$ and $\tilde{s} < \tau_{\gamma, \mathfrak{a}}$. Then the preceding expression equals
\betagin{equation}\lambdabel{eq:bC-new}
\mathbf{C}_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) - 2 =
\betagin{cases}
- 2 \mathtt{var}kappa_{\gamma, 3} \beta \gamma^6 S_\gamma (\tilde{s}, \tilde{y}) X_\gamma(\tilde{s}, \tilde{y}) + \mathtt{Err}_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) &\text{for}~~ \tilde{s} < \tau_{\gamma, \mathfrak{a}}, \\
- 2 \mathtt{var}kappa_{\gamma, 3} \gamma^6 S'_{\gamma, \mathfrak{a}} (\tilde{s}, \tilde{y}) X'_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) + \mathtt{Err}_{\gamma, \mathfrak{a}}'(\tilde{s}, \tilde{y}) &\text{for}~~ \tilde{s} \geq \tau_{\gamma, \mathfrak{a}},
\end{cases}
\end{equation}
where the error terms are almost surely uniformly bounded on the respective time intervals by $|\mathtt{Err}_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y})| \lesssim \gamma^{9 (1 + \eta) \wedge 4}$ and $|\mathtt{Err}_{\gamma, \mathfrak{a}}'(\tilde{s}, \tilde{y})| \lesssim \gamma^{4}$. Using \eqref{eq:bC-new}, we then write the first term in \eqref{eq:Pi-3} as
\betagin{align}
&-\beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \int_{D_\mathtt{var}epsilon} \mathtt{var}phii^\lambdambda_z (\bar z) \biggl( \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde{y} \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{0}^{\tau_{\gamma, \mathfrak{a}}} \!\!\!\mywidetilde{\mathscr{K}}^\gamma_{\bar t-\tilde{s}}(\bar x-\tilde{y})^2 \bigl(S_\gamma (\tilde{s}, \tilde{y}) X_\gamma(\tilde{s}, \tilde{y}) - \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z \nonumber\\
&\qquad -\beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \int_{D_\mathtt{var}epsilon} \mathtt{var}phii^\lambdambda_z (\bar z) \biggl( \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde{y} \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \!\!\!\mywidetilde{\mathscr{K}}^\gamma_{\bar t-\tilde{s}}(\bar x-\tilde{y})^2 \bigl(S'_{\gamma, \mathfrak{a}} (\tilde{s}, \tilde{y}) X'_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) - \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z \nonumber \\
&\qquad + \mathtt{Err}^\lambdambda_{\gamma, \mathfrak{a}}(z), \lambdabel{eq:renorm-of-2-last}
\end{align}
where $|\mathtt{Err}^\lambdambda_{\gamma, \mathfrak{a}}(z)| \lesssim \gamma^{6 + 9 \eta}$ almost surely, uniformly in $\gamma \in (0,1]$ and $z \in D_\mathtt{var}epsilon$. In the bound on the error term we used the bounds on the error terms in \eqref{eq:bC-new}, the assumption $-\mathfrak{r}ac{4}{7} < \eta < -\mathfrak{r}ac{1}{2}$ in Theorem~\ref{thm:main}, and the bound $\int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z)^2\, \mathrm{d} z \lesssim \mathfrak{e}^{-1}$ which follows from Lemma~\ref{lem:renorm-constants} and the definition \eqref{eq:renorm-constant1}. We note that the assumptions on $\eta$ imply $6 + 9 \eta > 0$, and hence all moments of the error term $\mathtt{Err}^\lambdambda_{\gamma, \mathfrak{a}}(z)$ vanish as $\gamma \to 0$.
We denote the first two terms in \eqref{eq:renorm-of-2-last} by $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 4}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ and $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 5}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ respectively, and we start by bounding the first of them.
We first show that the rescaled spin field $S_{\gamma}$ can be replaced in this expression by its local average; after that, the product of the two fields in \eqref{eq:renorm-of-2-last} can be treated in the same way as $X_\gamma^2$. More precisely, we write
\betagin{align}
&\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 4}\tau\bigr)(\mathtt{var}phii_z^\lambdambda) \lambdabel{eq:tree2-2-withXX} \\
& = - \beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \int_{D_\mathtt{var}epsilon} \mathtt{var}phii^\lambdambda_z (\bar z) \biggl( \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde{y} \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{0}^{\tau_{\gamma, \mathfrak{a}}} \!\!\!\mywidetilde{\mathscr{K}}^\gamma_{\bar t-\tilde{s}}(\bar x-\tilde{y})^2 \bigl(\un{X}_\gamma(\tilde{s}, \tilde{y}) X_\gamma(\tilde{s}, \tilde{y}) - \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z \nonumber \\
&- \beta \mathtt{var}kappa_{\gamma, 3} \gamma^6 \int_{D_\mathtt{var}epsilon} \mathtt{var}phii^\lambdambda_z (\bar z) \biggl( \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde{y} \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{0}^{\tau_{\gamma, \mathfrak{a}}} \!\!\!\mywidetilde{\mathscr{K}}^\gamma_{\bar t-\tilde{s}}(\bar x-\tilde{y})^2 \big( S_\gamma(\tilde{s}, \tilde{y}) X_\gamma(\tilde{s}, \tilde{y}) - \un{X}_\gamma(\tilde{s}, \tilde{y}) X_\gamma(\tilde{s}, \tilde{y}) \big) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z. \nonumber
\end{align}
Using Lemma~\ref{lem:Y-approximation}, the last term is absolutely bounded by a constant times $\gamma^{3 + 3 \eta}$, and it vanishes as $\gamma \to 0$. Using the a priori bound, provided by the stopping time \eqref{eq:tau-2}, the first term in \eqref{eq:tree2-2-withXX} is absolutely bounded by a constant times
\betagin{equation}
(\lambdambda \vee \mathfrak{e})^{-1- \un\kappappa} \mathfrak{e}^{\un\kappappa/2 - 1} \gamma^6 \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{\tilde y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{\tilde s}(\tilde y)^2 \mathrm{d} \tilde s \lesssim (\lambdambda \vee \mathfrak{e})^{-1- \un\kappappa} \mathfrak{e}^{\un\kappappa/2 - 2} \gamma^6 \lesssim (\lambdambda \vee \mathfrak{e})^{-1- \un\kappappa} \mathfrak{e}^{\un\kappappa/2}.\lambdabel{eq:some-term-again}
\end{equation}
Here, we used Lemma~\ref{lem:renorm-constants} to bound the integral, because it coincides with the renormalisation constant \eqref{eq:renorm-constant1}. Since we assumed $\kappappa \geq \un{\kappappa}$, the preceding expression is bounded by $(\lambdambda \vee \mathfrak{e})^{-1 - \kappappa} \mathfrak{e}^{\un\kappappa/2}$.
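The last step in \eqref{eq:some-term-again} is a simple power count: since $\mathfrak{e} \approx \gamma^3$,
\betagin{equation*}
\mathfrak{e}^{\un\kappappa/2 - 2} \gamma^6 \approx \gamma^{\mathfrak{r}ac{3 \un\kappappa}{2} - 6 + 6} = \gamma^{\mathfrak{r}ac{3 \un\kappappa}{2}} \approx \mathfrak{e}^{\un\kappappa/2}.
\end{equation*}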
It remains to bound the second term in \eqref{eq:renorm-of-2-last}. As in \eqref{eq:tree2-2-withXX}, we get that $\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i_{z}^{\gamma, 5}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)$ equals
\betagin{equation*}
-\beta \gamma^6 \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{s}(y)^2 \biggl( \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phii^\lambdambda_z (\bar t + s, \bar x + y) \Bigl( \un{X}'_{\gamma, \mathfrak{a}}(\bar t, \bar x) X'_{\gamma, \mathfrak{a}}(\bar t, \bar x) - \un{\mathfrak{C}}_\gamma \Bigr) \mathrm{d} \bar t \biggr) \mathrm{d} s
\end{equation*}
up to an error vanishing as $\gamma \to 0$. Here, the process $\un{X}'_{\gamma, \mathfrak{a}}$ is defined as in \eqref{eq:X-under} but via the spin field $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}$. Furthermore, we replace the constant $\un{\mathfrak{C}}_\gamma$ by the function \eqref{eq:C-gamma-function} and get
\betagin{align}
& -\beta \gamma^6 \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{s}(y)^2 \biggl( \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phii^\lambdambda_z (\bar t + s, \bar x + y) \bigl( \un{\mathfrak{C}}_\gamma(\bar t - \tau_{\gamma, \mathfrak{a}}) - \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \bar t \biggr) \mathrm{d} s \nonumber \\
&\qquad -\beta \gamma^6 \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{s}(y)^2 \biggl( \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phii^\lambdambda_z (\bar t + s, \bar x + y) \lambdabel{eq:some-bound} \\
&\hspace{6cm}\times \Bigl( \un{X}'_{\gamma, \mathfrak{a}}(\bar t, \bar x) X'_{\gamma, \mathfrak{a}}(\bar t, \bar x) - \un{\mathfrak{C}}_\gamma(\bar t - \tau_{\gamma, \mathfrak{a}}) \Bigr) \mathrm{d} \bar t \biggr) \mathrm{d} s. \nonumber
\end{align}
Applying Lemma~\ref{lem:renorm-function-underline} with any $c \in (0,1)$, the absolute value of the first term in \eqref{eq:some-bound} is bounded by a constant times
\betagin{equation}\lambdabel{eq:term-1}
\gamma^6 \mathfrak{e}^{c -1} \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{s}(y)^2 \biggl( \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} |\mathtt{var}phii^\lambdambda_z (\bar t + s, \bar x + y)| (\bar t - \tau_{\gamma, \mathfrak{a}})^{-c/2} \mathrm{d} \bar t \biggr) \mathrm{d} s.
\end{equation}
Using the scaling properties of the involved functions, this expression is of order $\gamma^6 \mathfrak{e}^{c -2} (\lambdambda \vee \mathfrak{e})^{-c}$. Recalling that $\mathfrak{e} \approx \gamma^3$, it vanishes as $\gamma \to 0$.
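Indeed, writing out the powers of $\gamma$,
\betagin{equation*}
\gamma^6 \mathfrak{e}^{c - 2} (\lambdambda \vee \mathfrak{e})^{-c} \approx \gamma^{6 + 3 (c - 2)} (\lambdambda \vee \mathfrak{e})^{-c} = \gamma^{3 c} (\lambdambda \vee \mathfrak{e})^{-c},
\end{equation*}
and $\gamma^{3 c} \to 0$ for every fixed $c \in (0, 1)$, while $(\lambdambda \vee \mathfrak{e})^{-c} \leq (\lambdambda \vee \mathfrak{e})^{-1}$.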
Now, we consider the second term in \eqref{eq:some-bound}. Multiplying and dividing the random process in the brackets by $(\bar t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta}{2}}$, we estimate the absolute value of this expression by
\betagin{align*}
\beta \gamma^6 \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{s}(y)^2 &\biggl( \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} |\mathtt{var}phii^\lambdambda_z (\bar t + s, \bar x + y)| (\bar t - \tau_{\gamma, \mathfrak{a}})^{\mathfrak{r}ac{\eta}{2}} \mathrm{d} \bar t \biggr) \mathrm{d} s \\
&\times \biggl( \mathfrak{s}up_{\bar t \in [\tau_{\gamma, \mathfrak{a}}, T]} \mathfrak{s}up_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} (\bar t - \tau_{\gamma, \mathfrak{a}})^{-\mathfrak{r}ac{\eta}{2}} \Bigl| \un{X}'_{\gamma, \mathfrak{a}}(\bar t, \bar x) X'_{\gamma, \mathfrak{a}}(\bar t, \bar x) - \un{\mathfrak{C}}_\gamma(\bar t - \tau_{\gamma, \mathfrak{a}}) \Bigr| \biggr)
\end{align*}
for a sufficiently large $T$. We may restrict the variable $\bar t$ to the interval $[\tau_{\gamma, \mathfrak{a}}, T]$ in this formula because $\mywidetilde{\mathscr{K}}^\gamma$ and $\mathtt{var}phii^\lambdambda_z$ are compactly supported. Applying the H\"{o}lder inequality and Lemma~\ref{lem:X2-under-prime}, the $p$-th moment of this expression is bounded by a constant multiple of
\betagin{equation*}
\gamma^6 \un{\mathfrak{e}}^{\eta} \biggl(\mathbb{E} \biggl[ \int_0^{\infty} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mywidetilde{\mathscr{K}}^\gamma_{s}(y)^2 \biggl( \int_{\tau_{\gamma, \mathfrak{a}}}^\infty \mathtt{var}epsilon^3 \mathfrak{s}um_{\bar x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} |\mathtt{var}phii^\lambdambda_z (\bar t + s, \bar x + y)| (\bar t - \tau_{\gamma, \mathfrak{a}})^{\mathfrak{r}ac{\eta}{2}} \mathrm{d} \bar t \biggr) \mathrm{d} s \biggr]^{2 p}\biggr)^{\mathfrak{r}ac{1}{2p}}.
\end{equation*}
As in \eqref{eq:term-1}, this expression is bounded by a constant times $\gamma^6 \un{\mathfrak{e}}^{\eta} \mathfrak{e}^{-1} (\lambdambda \vee \mathfrak{e})^{\eta}$. Recalling that $\un\mathfrak{e} = \mathfrak{e} \gamma^{\un\kappappa}$ and $\mathfrak{e} \approx \gamma^3$, this expression vanishes as $\gamma \to 0$.
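Spelling this out: since $\un{\mathfrak{e}} = \mathfrak{e} \gamma^{\un\kappappa} \approx \gamma^{3 + \un\kappappa}$ and $\mathfrak{e} \approx \gamma^3$,
\betagin{equation*}
\gamma^6 \un{\mathfrak{e}}^{\eta} \mathfrak{e}^{-1} \approx \gamma^{6 + (3 + \un\kappappa) \eta - 3} = \gamma^{3 + 3 \eta + \un\kappappa \eta},
\end{equation*}
and the exponent is strictly positive because $\eta > -\mathfrak{r}ac{4}{7}$ and $\un\kappappa$ is small.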
The analysis which we performed for the renormalised contraction of two vertices in \eqref{eq:Pi2} will be used many times for the other diagrams below. In order to draw fewer diagrams, we introduce a new vertex
\betagin{equation*}
\betagin{tikzpicture}[scale=0.35, baseline=0.4cm]
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_very_pink] (left) {};
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; := \;
\betagin{tikzpicture}[scale=0.35, baseline=0.4cm]
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_very_blue] (left) {};
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; - \; \mathfrak{c}_\gamma.
\end{equation*}
\mathfrak{s}ubsubsection{The element $\tau = \protect\<3b>$}
\lambdabel{sec:third-element}
The definition \eqref{eq:lift-hermite} of the renormalized model and the expansion \cite[Eq.~2.11]{Martingales} yield a diagrammatical representation of the map $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau$:
\betagin{equation}\lambdabel{eq:Psi3}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,-2.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (2,2) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\;3
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}\;.
\end{equation}
Using \cite[Cor.~5.6]{Martingales}, for any $\bar \kappappa > 0$ and any $p \geq 2$ large enough, we bound the $p$-th moment of the first diagram in \eqref{eq:Psi3} by a constant times
\[
(\lambdambda \vee \mathfrak{e})^{-\mathfrak{r}ac32} \Bigl( 1 + \mathtt{var}epsilon^{\mathfrak{r}ac{9}{4} - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac52} + \mathtt{var}epsilon^{\mathfrak{r}ac{9}{2}- \bar \kappappa} \mathfrak{e}^{-5} + \mathtt{var}epsilon^{\mathfrak{r}ac{27}{4}- \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{15}{2}} \Bigr),
\]
which is the required bound \eqref{eq:model_bound} with $|\tau| = - \mathfrak{r}ac{3}{2}-3\kappappa$.
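The three correction terms are again controlled by power counting: the first two were computed before, and the new one satisfies
\betagin{equation*}
\mathtt{var}epsilon^{\mathfrak{r}ac{27}{4} - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{15}{2}} \approx \gamma^{27 - 4 \bar\kappappa - \mathfrak{r}ac{45}{2}} = \gamma^{\mathfrak{r}ac{9}{2} - 4 \bar\kappappa},
\end{equation*}
so all of them vanish as $\gamma \to 0$ for $\bar\kappappa$ small enough.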
We demonstrate once more how to analyse the renormalised contraction of two vertices, this time in the second diagram in \eqref{eq:Psi3}; we will not repeat the analogous computations in what follows. As in \eqref{eq:Pi2-new} and \eqref{eq:renorm-of-2}, we write
\[
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; = \mathfrak{e}^{2 (a - 3)} \;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_red_square] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny a,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny a,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; + \left(
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_red_triangle] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; - \; \mathfrak{c}_\gamma \,
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (1.5,2) [var_blue] (right) {};
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}\right),
\]
for any $0 < a < \mathfrak{r}ac{5}{2}$. \cite[Assum.~3]{Martingales} is satisfied for the two preceding diagrams, with $\tilde \mathcal{C}CV_{\!\mathtt{var}} = \{ 1, 2 \}$, $\Gamma = \{2\}$ and the labeling is $\mathtt{L} = \{ \mathrm{d}iamond, \mathtt{nil} \}$ for the first diagram and $\mathtt{L} = \{ \triangledown, \mathtt{nil} \}$ for the second. Applying \cite[Cor.~5.6]{Martingales} to the first diagram, we get, for any $\bar \kappappa > 0$ and for any $p \geq 2$ large enough, a bound on the $p$-th moment of the order
\[
\mathfrak{e}^{2 (a - 3)} (\lambdambda \vee \mathfrak{e})^{2 - 2a} \Bigl(\mathtt{var}epsilon^{\mathfrak{r}ac94} + \mathtt{var}epsilon^{\mathfrak{r}ac92 - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{5}{2}}\Bigr).
\]
For $\mathfrak{r}ac{3}{2} < a \leq \mathfrak{r}ac{7}{4}$ and $\bar \kappappa > 0$ small enough the right-hand side is bounded by $c_\gammamma (\lambdambda \vee \mathfrak{e})^{-3/2}$, where $c_\gammamma$ vanishes as $\gamma \to 0$. The second diagram is analysed similarly to the third diagram in \eqref{eq:Pi2-new}, and it can also be bounded by $c_\gammamma (\lambdambda \vee \mathfrak{e})^{-3/2}$.
We now look at the last diagram in \eqref{eq:Psi3}. By using equation \eqref{eq:mrt_decomp}, we can write
\betagin{align*}
&\mathtt{var}epsilon^{9} \mathfrak{s}um_{\tilde x \in \T_{\mathtt{var}epsilon}^3} \mathfrak{s}um_{\tilde s \in \R} \mywidetilde{\mathscr{K}}^\gamma_{\bar t - \tilde s}(\bar x - \tilde x)^3 \big(\mathbf{D}elta_{\tilde s} \mathfrak{M}_{\gamma, \mathfrak{a}}(\tilde x) \big)^{3} = 4 \mathtt{var}epsilon^{9} \gamma^{-6} \mathfrak{s}um_{\tilde x \in \T_{\mathtt{var}epsilon}^3} \mathfrak{s}um_{\tilde s \in \R} \mywidetilde{\mathscr{K}}^\gamma_{\bar t - \tilde s}(\bar x - \tilde x)^3 \mathbf{D}elta_{\tilde s} \mathfrak{M}_{\gamma, \mathfrak{a}}(\tilde x) \\
&\qquad = 4 \mathtt{var}epsilon^{6} \gamma^{-6} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde z)^3 \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(\tilde z) + 4 \mathtt{var}epsilon^{\mathfrak{r}ac{15}{4}} \gamma^{-6} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde z)^3 \mathcal{C}d_{\gamma, \mathfrak{a}}(\tilde z) \mathrm{d} \tilde z.
\end{align*}
As we explained at the beginning of this section, the kernel $\mywidetilde{\mathscr{K}}^\gamma$ satisfies \cite[Assum.~4]{Martingales} with $a_e=3$, and hence \cite[Lem.~5.2]{Martingales} yields the bound $|\mywidetilde{\mathscr{K}}^\gamma(z)| \lesssim (\mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}} \vee \mathfrak{e})^{-3}$. Then we have $|\mywidetilde{\mathscr{K}}^\gamma(z)^3| \lesssim (\mathcal{V}ert z\mathcal{V}ert_{\mathfrak{s}} \vee \mathfrak{e})^{-9}$ and \cite[Lem.~5.2]{Martingales} implies that $\mywidetilde{\mathscr{K}}^\gamma(z)^3$ satisfies \cite[Assum.~4]{Martingales} with $a_e=9$. This allows us to write the last diagram in \eqref{eq:Psi3} as
\betagin{equation}\lambdabel{eq:Psi3-last}
4 \mathtt{var}epsilon^{6} \gamma^{-6} \mathfrak{e}^{a - 9}
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny a,0} (int);
\end{tikzpicture}
\;+\;
4 \mathtt{var}epsilon^{\mathfrak{r}ac{15}{4}} \gamma^{-6} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mathtt{var}phi_z^\lambdambda(\bar z)\, \mywidetilde{\mathscr{K}}^\gamma(\bar z - \tilde z)^3 \mathcal{C}d_{\gamma, \mathfrak{a}}(\tilde z) \mathrm{d} \tilde z \mathrm{d} \bar z,
\end{equation}
where we used the same trick as in \eqref{eq:renorm-of-2} to ``improve'' the singularity of the kernel. For $a < 5$, the first diagram in \eqref{eq:Psi3-last} satisfies \cite[Assum.~3]{Martingales}, and for any $\bar \kappappa > 0$ and $p \geq 2$ large enough, \cite[Cor.~5.6]{Martingales} allows us to bound its $p$-th moment by a constant multiple of
\betagin{equation*}
\mathtt{var}epsilon^{6} \gamma^{-6} \mathfrak{e}^{a - 9} (\lambdambda \vee \mathfrak{e})^{\mathfrak{r}ac{5}{2} - a} \Bigl(1 + \mathtt{var}epsilon^{\mathfrak{r}ac94 - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{5}{2}}\Bigr).
\end{equation*}
For $a > 3$, this expression is of order $c_\gamma (\lambdambda \vee \mathfrak{e})^{\mathfrak{r}ac{5}{2} - a}$, where $c_\gamma \to 0$ as $\gamma \to 0$, which is the required bound \eqref{eq:model_bound}.
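The $\gamma$-dependence of the prefactor can be made explicit: with $\mathtt{var}epsilon \approx \gamma^4$ and $\mathfrak{e} \approx \gamma^3$,
\betagin{equation*}
\mathtt{var}epsilon^{6} \gamma^{-6} \mathfrak{e}^{a - 9} \approx \gamma^{24 - 6 + 3 (a - 9)} = \gamma^{3 a - 9},
\end{equation*}
which indeed vanishes for $a > 3$, while the correction $\mathtt{var}epsilon^{\mathfrak{r}ac{9}{4} - \bar \kappappa} \mathfrak{e}^{-\mathfrak{r}ac{5}{2}} \approx \gamma^{\mathfrak{r}ac{3}{2} - 4 \bar\kappappa}$ stays bounded for $\bar\kappappa$ small.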
Now we analyse the second term in \eqref{eq:Psi3-last}. Because of our extension of the martingales \eqref{eq:martingale-extension}, we need to bound separately the parts of \eqref{eq:Psi3-last} with positive and with negative times. Since the bounds in the two cases are the same, we only present the analysis for positive times. Using \eqref{eq:C-gamma} and changing the integration variables, we can write it as a constant multiple of
\betagin{equation}\lambdabel{eq:Psi3-last-as-two}
\betagin{aligned}
&\gamma^{12} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \int_{\R_+ \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z) \mathcal{C}d_{\gamma, \mathfrak{a}}(\tilde z) \mathrm{d} \tilde z \mathrm{d} \bar z \\
&\qquad = \gamma^{12} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \int_{[0, \tau_{\gamma, \mathfrak{a}}] \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z) \Bigl( S_\gamma(\tilde z) - \gamma^{-3}\tanh \big( \beta \gamma^3 X_\gamma(\tilde z) \big) \Bigr) \mathrm{d} \tilde z \mathrm{d} \bar z \\
&\qquad\qquad + \gamma^{12} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \int_{[\tau_{\gamma, \mathfrak{a}}, \infty) \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z) \bigl(S'_{\gamma, \mathfrak{a}}(\tilde z) - X'_{\gamma, \mathfrak{a}}(\tilde z) \bigr) \mathrm{d} \tilde z \mathrm{d} \bar z,
\end{aligned}
\end{equation}
where we used the rescaled spin field \eqref{eq:S-def} and where $S'_{\gamma, \mathfrak{a}}$ is defined by \eqref{eq:S-def} for the spin field $\mathfrak{s}igmagma'_{\gamma, \mathfrak{a}}$.
Let us bound the first term in \eqref{eq:Psi3-last-as-two}. From the decomposition of the kernel $\mywidetilde{\mathscr{K}}^\gamma$, provided at the beginning of Section~\ref{sec:lift}, we get $| \mywidetilde{\mathscr{K}}^\gamma(z)| \lesssim (\| z \|_\mathfrak{s}\vee \mathfrak{e})^{-3}$. Then from \cite[Lem.~7.3]{HairerMatetski} we get $\int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \mathrm{d} \bar z \lesssim \mathfrak{e}^{-4}$. Approximating the function $\tanh$ by its Taylor expansion and using \eqref{eq:beta}, we write the first term in \eqref{eq:Psi3-last-as-two} as
\betagin{equation}\lambdabel{eq:Psi3-very-last}
\gamma^{12} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \int_{[0, \tau_{\gamma, \mathfrak{a}}] \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z) \bigl( S_\gamma(\tilde z) - X_\gamma(\tilde z) \bigr) \mathrm{d} \tilde z \mathrm{d} \bar z + \mathtt{Err}_{\gammamma, \lambdambda},
\end{equation}
where the error term $\mathtt{Err}_{\gammamma, \lambdambda}$ is absolutely bounded by a constant times
\betagin{align*}
\gamma^{18} \int_{D_\mathtt{var}epsilon} &\mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \int_{[0, \tau_{\gamma, \mathfrak{a}}] \times \mathtt{L}ambda_{\mathtt{var}epsilon}} |\mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z)| \Bigl( \big( \mathfrak{c}_{\ga} + A \big) \| X_\gamma(\tilde t)\|_{L^\infty} + \| X_\gamma(\tilde t) \|^3_{L^\infty} \Bigr) \mathrm{d} \tilde z \mathrm{d} \bar z \\
&\qquad \lesssim \gamma^{18} \mathfrak{e}^{-4} \mathfrak{s}up_{t \geq 0} \Bigl( \big( \mathfrak{c}_{\ga} + A \big) \| X_\gamma( t)\|_{L^\infty} + \| X_\gamma(t) \|^3_{L^\infty} \Bigr),
\end{align*}
with $\tilde t$ being the time variable in $\tilde z$. Here, we used $\int_{D_\mathtt{var}epsilon} |\mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z)| \mathrm{d} \tilde z \lesssim 1$. The a priori bound \eqref{eq:X-apriori} allows us to estimate the preceding expression by $\gamma^{18} \mathfrak{e}^{3 \eta-4} \lesssim \gamma^{6 + 9 \eta}$, which vanishes as $\gammamma \to 0$ because the assumptions of Theorem~\ref{thm:main} imply $\eta > -\mathfrak{r}ac{2}{3}$.
Now, we will bound the first term in \eqref{eq:Psi3-very-last}. From the definitions \eqref{eq:X-gamma} and \eqref{eq:S-def} we conclude that $X_\gamma(t, x) = \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} K_\gamma(x - y) S_\gamma(t, y)$. Then we can write
\betagin{equation}\lambdabel{eq:Z-is-X}
\int_{[0, \tau_{\gamma, \mathfrak{a}}] \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde z) \bigl( S_\gamma(\tilde z) - X_\gamma(\tilde z) \bigr) \mathrm{d} \tilde z = \int_{[0, \tau_{\gamma, \mathfrak{a}}] \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \psi_{z - \bar z}^\lambdambda(\tilde z) S_\gamma(\tilde z) \mathrm{d} \tilde z,
\end{equation}
with $\psi_{z - \bar z}^\lambdambda(\tilde t, \tilde x) = \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde t, \tilde x) - \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \mathtt{var}phi_{z - \bar z}^\lambdambda(\tilde t, y) K_\gamma(y - \tilde x)$. This function can be viewed as a rescaled test function, which for any $\kappappa_1 \in [0, 1)$ and any $k \in \N_0^4$ satisfies $\| D^k \psi_{z - \bar z}^\lambdambda \|_{L^\infty} \lesssim \mathfrak{e}^{\kappappa_1} \lambdambda^{-5 - |k|_\mathfrak{s} - \kappappa_1}$. Then for any $\tilde \kappappa > 0$, Lemma~\ref{lem:Z-bound} yields
\betagin{equation*}
\biggl| \int_{D_\mathtt{var}epsilon} \psi_{z - \bar z}^\lambdambda(\tilde z) S_\gamma(\tilde z) \mathrm{d} \tilde z\biggr| \lesssim \mathfrak{a} \mathfrak{e}^{\kappappa_1} \lambdambda^{\eta - \kappappa_1}
\end{equation*}
uniformly in $\lambdambda \in [\mathfrak{e}^{1- \tilde \kappappa}, 1]$. For $\lambdambda < \mathfrak{e}^{1- \tilde \kappappa}$ we can use $|S_\gamma(\tilde z)| \leq \gamma^{-3}$ and estimate the left-hand side by a constant multiple of $\mathfrak{e}^{-1}$. Since $-1 < \eta < -\mathfrak{r}ac{1}{2}$, from the two preceding bounds we conclude that
\betagin{equation}\lambdabel{eq:3-bound}
\biggl| \int_{D_\mathtt{var}epsilon} \psi_{z - \bar z}^\lambdambda(\tilde z) S_\gamma(\tilde z) \mathrm{d} \tilde z\biggr| \lesssim \mathfrak{a} \mathfrak{e}^{\kappappa_1} (\lambdambda \vee \mathfrak{e})^{-\mathfrak{r}ac{1 + \kappappa_1}{1 - \tilde \kappappa}}.
\end{equation}
Moreover, as above we have $\int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(\bar z)^3 \mathrm{d} \bar z \lesssim \mathfrak{e}^{-4}$. Hence, the first term in \eqref{eq:Psi3-very-last} is absolutely bounded by a constant times $\mathfrak{a} \mathfrak{e}^{\kappappa_1} (\lambdambda \vee \mathfrak{e})^{-\mathfrak{r}ac{1 + \kappappa_1}{1 - \tilde \kappappa}}$. If we take $\tilde \kappappa = \mathfrak{r}ac{1 - 2 \kappappa_1}{3}$, we get an estimate by $\mathfrak{a} \mathfrak{e}^{\kappappa_1} (\lambdambda \vee \mathfrak{e})^{-\mathfrak{r}ac{3}{2}}$, which vanishes as $\gammamma \to 0$. Taking $\kappappa_1$ close to $0$ we make $\tilde \kappappa$ close to $\mathfrak{r}ac{1}{3}$, and Lemma~\ref{lem:Z-bound} suggests that the test functions may be taken from $\mathcal{C}B^2_\mathfrak{s}$.
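For completeness, the exponent arithmetic behind this choice of $\tilde\kappappa$ reads
\betagin{equation*}
1 - \tilde \kappappa = 1 - \mathfrak{r}ac{1 - 2 \kappappa_1}{3} = \mathfrak{r}ac{2 (1 + \kappappa_1)}{3}, \qquad \text{so that} \qquad \mathfrak{r}ac{1 + \kappappa_1}{1 - \tilde \kappappa} = \mathfrak{r}ac{3}{2};
\end{equation*}
in particular, $\tilde\kappappa \to \mathfrak{r}ac{1}{3}$ as $\kappappa_1 \to 0$.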
It remains to bound the last term in \eqref{eq:Psi3-last-as-two}. The identity \eqref{eq:Z-is-X} holds with the time interval $[\tau_{\gamma, \mathfrak{a}}, \infty)$ for the processes $S'_{\gamma, \mathfrak{a}}$ and $X'_{\gamma, \mathfrak{a}}$. Applying Lemma~\ref{lem:Z-prime-bound}, we get a bound analogous to \eqref{eq:3-bound}, namely
\betagin{equation*}
\biggl| \int_{[\tau_{\gamma, \mathfrak{a}}, \infty) \times \mathtt{L}ambda_{\mathtt{var}epsilon}} \psi_{z - \bar z}^\lambdambda(\tilde z) S'_{\gamma, \mathfrak{a}}(\tilde z) \mathrm{d} \tilde z \biggr| \lesssim \mathfrak{e}^{\kappappa_1} (\lambdambda \vee \mathfrak{e})^{-\mathfrak{r}ac{3}{2}}
\end{equation*}
uniformly in $\lambdambda \in (0, 1]$.
\mathfrak{s}ubsubsection{The element $\tau = \protect\<22b>$}
Using the definition \eqref{eq:Pi-rest} and the expansion \cite[Eq.~2.11]{Martingales}, we can represent the map $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau$ diagrammatically as
\betagin{align} \nonumber
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
&\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (2,2) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\;2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_pink] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (2,2) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.9,-1.7) [var_very_pink] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.4] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\\[0.2cm] \lambdabel{eq:22b}
&\qquad \;+\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.9,-1.7) [var_very_pink] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\;2\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (3,-1.45) [var_very_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\;2\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-1.45) [var_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=-80] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture} \\[0.2cm]
&\qquad
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=70] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-70] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left) to[bend right=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; + \left(\;2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,1) [dot] (int) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_very_blue] (right1) {};
\node at (-2,-0.9) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;-\; \mathfrak{c}_\gamma''\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\mathrm{d}raw[testfcn] (int) to (root);
\end{tikzpicture}\right), \nonumber
\end{align}
where the renormalisation constant $\mathfrak{c}_\gamma''$ is defined in \eqref{eq:renorm-constant2}, and the edge with the label ``$3,1$'' represents the kernel in \eqref{eq:Pi-rest}; here ``$1$'' refers to the positive renormalisation (see \cite[Sec.~5]{Martingales}).
Applying \cite[Cor.~5.6]{Martingales}, the high moments of the first and second diagrams are bounded by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-\kappappa/2}$. Analysing contractions of vertices in the same way as we did in \eqref{eq:Pi2} and \eqref{eq:Psi3}, the third to seventh diagrams are bounded by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-\kappappa/2}$ with a constant $c_\gamma$ vanishing as $\gamma \to 0$.
Regarding the eighth diagram, for any $\kappappa > 0$ we first rewrite it as follows (as before, we use \cite[Lem.~5.2]{Martingales} to show that a product of singular kernels again satisfies \cite[Assum.~4]{Martingales}):
\betagin{equation}
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=70] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-70] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left) to[bend right=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture} \; = \mathfrak{e}^{-2 - 2\kappappa} \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 5-$\kappappa$,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 5-$\kappappa$,0} (cent1);
\end{tikzpicture} \, ,
\end{equation}
where we multiplied some kernels by positive powers of $\mathfrak{e}$ in order to satisfy all the hypotheses of \cite[Cor.~5.6]{Martingales}. Applying the latter, we get the bound
\[
\mathfrak{e}^{-2 - 2\kappappa} ( \lambdambda \vee \mathfrak{e} )^{-3+2\kappappa} \big( \mathtt{var}epsilon^{\mathfrak{r}ac92 - \tilde \kappappa} + \mathtt{var}epsilon^{9 - \tilde \kappappa} \mathfrak{e}^{-\mathfrak{r}ac52} + \mathtt{var}epsilon^{9 - \tilde \kappappa} \mathfrak{e}^{-5} \big) \lesssim ( \lambdambda \vee \mathfrak{e} )^{-3+2\kappappa} \mathfrak{e}^{4-2\kappappa},
\]
which vanishes as $\gamma \to 0$.
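A power-count sanity check, with $\mathtt{var}epsilon \approx \gamma^4$ and $\mathfrak{e} \approx \gamma^3$: the dominant term in the brackets is $\mathtt{var}epsilon^{\mathfrak{r}ac{9}{2} - \tilde\kappappa} \approx \gamma^{18 - 4 \tilde\kappappa}$, so that
\betagin{equation*}
\mathfrak{e}^{-2 - 2 \kappappa} \, \mathtt{var}epsilon^{\mathfrak{r}ac{9}{2} - \tilde\kappappa} \approx \gamma^{-6 - 6 \kappappa + 18 - 4 \tilde\kappappa} = \gamma^{12 - 6 \kappappa - 4 \tilde\kappappa},
\end{equation*}
which indeed vanishes as $\gamma \to 0$ for $\kappappa$ and $\tilde\kappappa$ small, consistently with the stated bound up to the arbitrarily small exponent $\tilde\kappappa$.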
Recalling the definition of the positively renormalised kernel in \eqref{eq:Pi-rest}, the expression in the brackets in \eqref{eq:22b} may be written as
\betagin{equation}\lambdabel{eq:22b-renorm}
\left(2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-4.2) [root] (root) {};
\node at (0,-4.2) [rootlab] {$z$};
\node at (0,-4.5) {$$};
\node at (0,1) [dot] (int) {};
\node at (0,-2) [dot] (cent1) {};
\node at (2,-0.5) [var_very_blue] (right1) {};
\node at (-2,-0.5) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;-\; \mathfrak{c}_\gamma''\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\mathrm{d}raw[testfcn] (int) to (root);
\end{tikzpicture}\right)
\;-\; 2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-4.2) [root] (root) {};
\node at (0,-4.2) [rootlab] {$z$};
\node at (0,-4.5) {$$};
\node at (0,1) [dot] (int) {};
\node at (0,-2) [dot] (cent1) {};
\node at (2,-0.5) [var_very_blue] (right1) {};
\node at (-2,-0.5) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw [keps] (int) to[out=0,in=90] (3,-1.4) to [out=-90, in=0] node[labl,pos=0] {\tiny 3,0} (root);
\end{tikzpicture}\;.
\end{equation}
The last diagram in \eqref{eq:22b-renorm} is readily bounded, using \cite[Cor.~5.6]{Martingales}, by a multiple of $(\lambdambda \vee \mathfrak{e})^{-\kappappa / 2}$, while the expression in the brackets requires more work. Using the notation from \eqref{eq:Pi2-new}, it may be written as
\betagin{equation}\lambdabel{eq:22b-last}
2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-4.2) [root] (root) {};
\node at (0,-4.2) [rootlab] {$z$};
\node at (0,-4.5) {$$};
\node at (0,1) [dot] (int) {};
\node at (0,-2) [dot] (cent1) {};
\node at (2,-0.5) [var_red_square] (right1) {};
\node at (-2,-0.5) [var_red_square] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; + 4 \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-4.2) [root] (root) {};
\node at (0,-4.2) [rootlab] {$z$};
\node at (0,-4.5) {$$};
\node at (0,1) [dot] (int) {};
\node at (0,-2) [dot] (cent1) {};
\node at (2,-0.5) [var_red_square] (right1) {};
\node at (-2,-0.5) [var_red_triangle] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}\;
\; + \; \left( 2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-4.2) [root] (root) {};
\node at (0,-4.2) [rootlab] {$z$};
\node at (0,-4.5) {$$};
\node at (0,1) [dot] (int) {};
\node at (0,-2) [dot] (cent1) {};
\node at (2,-0.5) [var_red_triangle] (right1) {};
\node at (-2,-0.5) [var_red_triangle] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;-\; \mathfrak{c}_\gamma''\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\mathrm{d}raw[testfcn] (int) to (root);
\end{tikzpicture}\right),
\end{equation}
where the first two diagrams are bounded, using \cite[Cor.~5.6]{Martingales}, by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-\kappappa/2}$ with a constant $c_\gamma$ vanishing as $\gamma \to 0$.
It remains to bound the expression in the brackets in \eqref{eq:22b-last}. For this, let us define the random kernel
\betagin{equation}\lambdabel{eq:CG-def}
\mathcal{C}G_{\gamma} (z_1, z_2) =
\betagin{tikzpicture}[scale=0.35, baseline=-0.2cm]
\node at (0,1) [root] (int) {};
\node at (0,1.6) {\mathfrak{s}criptsize $z_2$};
\node at (0,-2) [root] (cent1) {};
\node at (0,-2) [rootlab] {$z_1$};
\node at (2,-0.5) [var_red_triangle] (right1) {};
\node at (-2,-0.5) [var_red_triangle] (left1) {};
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}\;,
\end{equation}
which may be written explicitly as
\betagin{align}\lambdabel{eq:CG-explicit}
\mathcal{C}G_{\gamma} (z_1, z_2) &= \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_4) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_2) \\
&\hspace{2cm} \times \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_4) \mathbf{C}_{\gamma, \mathfrak{a}}(z_3) \mathbf{C}_{\gamma, \mathfrak{a}}(z_4) \mathrm{d} z_3 \mathrm{d} z_4,\nonumber
\end{align}
where we used the bracket process \eqref{eq:bC}. Then the expression in the brackets in \eqref{eq:22b-last} is absolutely bounded by
\betagin{equation}\lambdabel{eq:last-term}
\int_{D_\mathtt{var}epsilon} |\mathtt{var}phii_z^\lambdambda(z_1)| \left| 2 \int_{D_\mathtt{var}epsilon} \mathcal{C}G_{\gamma} (z_1, z_2) \mathrm{d} z_2 - \mathfrak{c}_\gamma'' \right| \mathrm{d} z_1 \leq \mathfrak{s}up_{z_1 \in D_\mathtt{var}epsilon} \left| 2 \int_{D_\mathtt{var}epsilon} \mathcal{C}G_{\gamma} (z_1, z_2) \mathrm{d} z_2 - \mathfrak{c}_\gamma'' \right|.
\end{equation}
We need the following bounds on the kernel $\mathcal{C}G_{\gamma}$.
\betagin{lemma}\lambdabel{lem:CG-bounds}
There exists a non-random constant $C > 0$, independent of $\gamma$, such that
\betagin{equation}\lambdabel{eq:CG-bound1}
\bigl| \mathcal{C}G_{\gamma} (z_1, z_2)\bigr| \leq C \bigl(\| z_1 - z_2 \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{-5},
\end{equation}
uniformly in $z_1 \neq z_2$. Moreover, for any $\theta \in (0, 1)$ we have
\betagin{equation}\lambdabel{eq:CG-bound2}
\left| 2 \int_{D_\mathtt{var}epsilon} \mathcal{C}G_{\gamma} (z_1, z_2) \mathrm{d} z_2 - \mathfrak{c}_\gamma'' \right| \leq C \mathfrak{e}^\theta,
\end{equation}
uniformly over $z_1$.
\end{lemma}
\betagin{proof}
As stated in \eqref{eq:bC}, we can bound $\mathbf{C}_{\gamma, \mathfrak{a}}$ uniformly. Moreover, from the decomposition of the kernel $\mywidetilde{\mathscr{K}}^\gamma$ provided in Appendix~\ref{sec:decompositions}, we conclude that $| \mywidetilde{\mathscr{K}}^\gamma(z)| \lesssim (\| z \|_\mathfrak{s}\vee \mathfrak{e})^{-3}$. Then the bound \eqref{eq:CG-bound1} follows from \cite[Lem.~7.3]{HairerMatetski}.
Using the definitions \eqref{eq:CG-explicit} and \eqref{eq:renorm-constant2}, the expression inside the absolute value in \eqref{eq:CG-bound2} may be written explicitly as
\betagin{align*}
&2 \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_4) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_2) \\
&\hspace{3cm} \times \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_4) \bigl(\mathbf{C}_{\gamma, \mathfrak{a}}(z_3) - 2\bigr) \mathbf{C}_{\gamma, \mathfrak{a}}(z_4) \mathrm{d} z_2 \mathrm{d} z_3 \mathrm{d} z_4 \\
&\qquad + 4 \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_4) \mathscr{K}^\gamma(z_1 - z_2) \\
&\hspace{4cm} \times \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_4) \bigl(\mathbf{C}_{\gamma, \mathfrak{a}}(z_4) - 2\bigr) \mathrm{d} z_2 \mathrm{d} z_3 \mathrm{d} z_4.
\end{align*}
Moreover, the difference $\mathbf{C}_{\gamma, \mathfrak{a}} - 2$ can be written as in \eqref{eq:bC-new}. We then apply Lemma~\ref{lem:Y-approximation} to replace the product $S_\gamma X_\gamma$ by $\un{X}_\gamma X_\gamma$, up to an error term, so that the preceding expression equals
\betagin{equation}\lambdabel{eq:22b-very-last}
\betagin{aligned}
&- 4 \beta \gamma^6 \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_4) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_2) \\
&\hspace{2cm} \times \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_4) \un{X}_\gamma (z_3) X_\gamma(z_3) \mathbf{C}_{\gamma, \mathfrak{a}}(z_4) \mathrm{d} z_2 \mathrm{d} z_3 \mathrm{d} z_4 \\
&\qquad - 8 \beta \gamma^6 \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_4) \mywidetilde{\mathscr{K}}^\gamma(z_1 - z_2) \\
&\hspace{3cm} \times \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_3) \mywidetilde{\mathscr{K}}^\gamma(z_2 - z_4) \un{X}_\gamma (z_4) X_\gamma(z_4) \mathrm{d} z_2 \mathrm{d} z_3 \mathrm{d} z_4 + \mathtt{Err}_{\gammamma, \lambdambda},
\end{aligned}
\end{equation}
where the error term satisfies
\betagin{align*}
|\mathtt{Err}_{\gammamma, \lambdambda}| &\lesssim \gamma^{6 + 3 \eta} \,
\left|\; \betagin{tikzpicture}[scale=0.35, baseline=-0.6cm]
\node at (0,0) [dot] (int) {};
\node at (0,-2.9) [root] (cent1) {};
\node at (0,-2.9) [rootlab] {$0$};
\node at (2.5,-1.45) [dot] (right1) {};
\node at (-2.5,-1.45) [dot] (left1) {};
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}\; \right|\;.
\end{align*}
Using the bounds on singular kernels derived in \cite[Lem.~7.3]{HairerMatetski}, we obtain $|\mathtt{Err}_{\gammamma, \lambdambda}| \lesssim \gamma^{6 + 3 \eta - \bar{\kappappa}}$ for any $\bar{\kappappa} > 0$. From \eqref{eq:tau-2} we have the a priori bound $\| \un{X}_\gamma(t) X_\gamma(t)\|_{L^\infty} \lesssim \mathfrak{e}^{\un\kappappa/2-2}$ for $t < \tau_{\gamma, \mathfrak{a}}$, which allows us to bound the first two terms in \eqref{eq:22b-very-last} by a constant multiple of $\gamma^{6 - 3(2 - \un{\kappappa}/2) - \bar{\kappappa}} = \gamma^{3 {\un\kappappa}/2 - \bar{\kappappa}}$. Taking $\bar{\kappappa}$ sufficiently small then gives the required bound \eqref{eq:CG-bound2}.
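For the reader's convenience, the exponent arithmetic in the last step is elementary: expanding the bracket gives
\begin{equation*}
6 - 3\Bigl(2 - \frac{\un{\kappa}}{2}\Bigr) - \bar{\kappa} \;=\; \frac{3\un{\kappa}}{2} - \bar{\kappa} \;>\; 0 \qquad \text{whenever } \bar{\kappa} < \frac{3\un{\kappa}}{2},
\end{equation*}
so the bound on the first two terms in \eqref{eq:22b-very-last} indeed vanishes as $\gamma \to 0$; the factor $3$ multiplying $2 - \un{\kappa}/2$ is the one already implicit in the displayed exponent, which converts the a priori bound $\mathfrak{e}^{\un{\kappa}/2 - 2}$ into a power of $\gamma$.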
\end{proof}
Applying \eqref{eq:CG-bound2}, we bound \eqref{eq:last-term} by a positive power of $\gamma$. This finishes the proof of the required bound \eqref{eq:model_bound} for the element $\tau$.
\subsubsection{The element $\tau = \protect\<31b>$}
The definition \eqref{eq:Pi-rest} and the expansion \cite[Eq.~2.15]{Martingales} allow us to represent the map $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau$ diagrammatically as
\betagin{align}\lambdabel{eq:fourth-symbol}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
&\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (2,2) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.5] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\\[0.2cm]
& \nonumber
\qquad \; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_blue] (left) {};
\node at (2,2) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=-80] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.3) {$$};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture},
\end{align}
where as before the arrow ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[keps] (0,0) to node[labl,pos=0.45] {\tiny 3,1} (1,0);\,'' represents the positively renormalised kernel in \eqref{eq:Pi-rest}.
Corollary~5.6 of \cite{Martingales} allows us to bound the moments of the first two diagrams in \eqref{eq:fourth-symbol} by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-\kappappa}$, which yields the required bound \eqref{eq:model_bound}. Analysing the contractions of two and three vertices as before, all the other diagrams, except the last one, are bounded using \cite[Cor.~5.6]{Martingales} by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-\kappappa}$ with $c_\gamma$ vanishing as $\gamma \to 0$.
To bound the last diagram in \eqref{eq:fourth-symbol}, we use a positive power of $\mathfrak{e}$ to improve the singularity of the kernel:
\betagin{equation*}
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;=\; \mathfrak{e}^{3 a - 9}
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.6] {\tiny a,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.6] {\tiny a,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny a,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture},
\end{equation*}
for $a < \frac{5}{3}$. Then \cite[Cor.~5.6]{Martingales} allows us to bound the right-hand side by a constant multiple of
\betagin{equation*}
\mathfrak{e}^{3a - 9} (\lambdambda \vee \mathfrak{e})^{4 - 3 a} \Bigl( \mathtt{var}epsilon^{9-\bar\kappappa} \mathfrak{e}^{-5} + \mathtt{var}epsilon^{\frac{9}{2}-\bar\kappappa} + \mathtt{var}epsilon^{\frac{27}{4}} \Bigr),
\end{equation*}
for any $\bar\kappappa >0$. Choosing appropriate values of $a$ and $\bar \kappappa$, we can estimate this by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-\kappappa}$, where $c_\gamma$ vanishes as $\gamma \to 0$.
\subsubsection{The element $\tau = \protect\<32b>$}
Using \eqref{eq:Pi-rest} and \cite[Eq.~2.15]{Martingales} we can write
\betagin{align} \lambdabel{eq:sixth-symbol}
&\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (2,2) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\;6\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (2,2) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.9,-1.7) [var_very_pink] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture} \\[0.2cm] \nonumber
&\qquad
\;+\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.9,-1.7) [var_very_pink] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\; 3 \left(
2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2.5,-1.45) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; -\; \mathfrak{c}_\gamma'' \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.7cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture} \right)
\; +\;6\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture} \\[0.2cm] \nonumber
&\qquad
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\node at (-2,-0.9) [var_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,-4.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_blue] (left) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (3,-1.45) [var_very_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;6\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_blue] (left) {};
\node at (2.1,2.3) [var_blue] (right) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=-80] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.9,-1.7) [var_very_pink] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture} \\[0.2cm] \nonumber
&\qquad\; +\;6\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=-80] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;3\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-1.3,2.3) [var_very_pink] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (3,-1.45) [var_very_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to[bend left=-20] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\; 2\,
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2,-0.9) [var_blue] (right1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend right=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.6] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (left) to[bend left=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left) to[bend right=100] node[labl,pos=0.55] {\tiny 3,0} (cent1);
\end{tikzpicture}.
\end{align}
Using \cite[Cor.~5.6]{Martingales}, we bound the moments of the first two diagrams by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-\bar \kappappa}$ for any $\bar \kappappa > 0$. Analysing contracted vertices as before, all the other diagrams, except the expression in the brackets, are bounded by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-\bar \kappappa}$ for any $\bar \kappappa > 0$, with $c_\gamma$ vanishing as $\gamma \to 0$. Here, the contraction of five vertices is analysed in the same way as a contraction of three, the only difference being the powers of $\mathfrak{e}$ appearing in the multipliers.
Now, we will bound the expression in the brackets in \eqref{eq:sixth-symbol}. Recalling the definition of the kernel in \eqref{eq:Pi-rest}, we can write
\betagin{align*}
2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2.5,-1.45) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,1} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; -\; \mathfrak{c}_\gamma'' \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.7cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; = \; \left(
2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2.5,-1.45) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; -\; \mathfrak{c}_\gamma'' \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.7cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture} \right)
\; -\; 2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_very_blue] (right1) {};
\node at (-2.5,-1.45) [var_very_blue] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw [keps] (int) to[out=0,in=90] (3.5,-2.5) to [out=-90, in=0] node[labl,pos=0] {\tiny 3,0} (root);
\end{tikzpicture}\;.
\end{align*}
Applying \cite[Cor.~5.6]{Martingales}, we bound the moments of the last diagram by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-\bar \kappappa}$ for any $\bar \kappappa > 0$. Similarly to \eqref{eq:22b-last}, we can write the expression in the brackets as
\betagin{equation}\lambdabel{eq:sixth-symbol-last}
2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_red_square] (right1) {};
\node at (-2.5,-1.45) [var_red_square] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\; 4\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_red_square] (right1) {};
\node at (-2.5,-1.45) [var_red_triangle] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\;+\; \left(2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_red_triangle] (right1) {};
\node at (-2.5,-1.45) [var_red_triangle] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; -\; \mathfrak{c}_\gamma'' \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.7cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\right),
\end{equation}
where the first two diagrams are bounded, using \cite[Cor.~5.6]{Martingales}, by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-\kappappa/2}$ with a constant $c_\gamma$ vanishing as $\gamma \to 0$.
Now, we will bound the expression in the brackets in \eqref{eq:sixth-symbol-last}. We write
\betagin{equation}\lambdabel{eq:sixth-symbol-last-again}
2\;
\betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [var_red_triangle] (right1) {};
\node at (-2.5,-1.45) [var_red_triangle] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; -\; \mathfrak{c}_\gamma'' \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.7cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; =\;
\left(2\; \betagin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\node at (2.5,-1.45) [dot] (right1) {};
\node at (-2.5,-1.45) [dot] (left1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\; -\; \mathfrak{c}_\gamma'' \;
\betagin{tikzpicture}[scale=0.35, baseline=-0.7cm]
\node at (0,-5.1) [root] (root) {};
\node at (0,-5.1) [rootlab] {$z$};
\node at (0,0) [var_blue] (cent) {};
\node at (0,-2.9) [dot] (cent1) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}
\right)
\; + \;
\mathtt{Err}_{\gamma, \lambdambda}(z),
\end{equation}
where the error term $\mathtt{Err}_{\gamma, \lambdambda}$ is defined via the random kernel \eqref{eq:CG-def} and can be bounded as
\betagin{equation*}
|\mathtt{Err}_{\gamma, \lambdambda}(z)| \lesssim \mathfrak{s}up_{z_1 \in D_\mathtt{var}epsilon} \left| 2 \int_{D_\mathtt{var}epsilon} \mathcal{C}G_{\gamma} (z_1, z_2) \mathrm{d} z_2 - \mathfrak{c}_\gamma'' \right|\; \mathfrak{s}up_{z_2 \in D_\mathtt{var}epsilon} \Bigl| \betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (-2.3, 0.2) [root] (root) {};
\node at (-2.3, 0.2) [rootlab] {$z_2$};
\node at (-5.5,0.2) [var_blue] (left) {};
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (root);
\end{tikzpicture} \Bigr|
\end{equation*}
(we recall the renormalisation constant \eqref{eq:renorm-constant2}). Using \cite[Cor.~5.6]{Martingales} we can bound sufficiently high moments of the last supremum by a constant multiple of $\mathfrak{e}^{-\frac{1}{2}}$, while Lemma~\ref{lem:CG-bounds} allows us to bound the first supremum by a constant multiple of $\mathfrak{e}^\theta$ with $\theta \in (\frac{1}{2}, 1)$. Hence, all sufficiently high moments of the error term vanish as $\gamma \to 0$.
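Explicitly, the bookkeeping is the following (a sketch, using that the bound of Lemma~\ref{lem:CG-bounds} is deterministic and that $\mathfrak{e} \to 0$ as $\gamma \to 0$): for all sufficiently large $p$,
\begin{equation*}
\mathbb{E}\Bigl[ \bigl| \mathtt{Err}_{\gamma, \lambda}(z) \bigr|^p \Bigr]^{\frac{1}{p}} \;\lesssim\; \mathfrak{e}^{\theta} \cdot \mathfrak{e}^{-\frac{1}{2}} \;=\; \mathfrak{e}^{\theta - \frac{1}{2}} \;\longrightarrow\; 0, \qquad \text{since } \theta > \tfrac{1}{2}.
\end{equation*}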
It remains to bound the expression in the brackets in \eqref{eq:sixth-symbol-last-again}. For this, we define the kernel
\betagin{equation*}
G_{\gamma} (z_1, z_2) =
\betagin{tikzpicture}[scale=0.35, baseline=-0.2cm]
\node at (0,1) [root] (int) {};
\node at (0,1.6) {\mathfrak{s}criptsize $z_2$};
\node at (0,-2) [root] (cent1) {};
\node at (0,-2) [rootlab] {$z_1$};
\node at (2,-0.5) [dot] (right1) {};
\node at (-2,-0.5) [dot] (left1) {};
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (int) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (right1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\mathrm{d}raw[keps] (left1) to node[labl,pos=0.45] {\tiny 3,0} (cent1);
\end{tikzpicture}\;,
\end{equation*}
and, for any smooth, compactly supported function $\psi : \R^4 \times \R^4 \to \R$, we define its ``negative renormalisation''
\betagin{equation*}
\bigl(\mathscr{R}_\gamma G_\gamma\bigr)(\psi) := \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} G_\gamma(z_1, z_2) \bigl( \psi(z_1, z_2) - \psi(z_1, z_1) \bigr) \mathrm{d} z_1 \mathrm{d} z_2.
\end{equation*}
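The subtraction of $\psi(z_1, z_1)$ is precisely what makes this integral converge: heuristically (a sketch, assuming the scaled dimension $|\mathfrak{s}| = 5$ and the order-$5$ bound $|G_\gamma(z_1, z_2)| \lesssim \|z_1 - z_2\|_\mathfrak{s}^{-5}$ indicated below), the kernel alone is borderline non-integrable at the diagonal, whereas for smooth $\psi$ the difference gains a positive power of the distance,
\begin{equation*}
\bigl| G_\gamma(z_1, z_2) \bigr| \, \bigl| \psi(z_1, z_2) - \psi(z_1, z_1) \bigr| \;\lesssim\; \|z_1 - z_2\|_\mathfrak{s}^{-5+\theta} \, \|\psi\|_{\mathcal{C}^\theta} \qquad \text{for any } \theta \in (0,1],
\end{equation*}
where $\|\psi\|_{\mathcal{C}^\theta}$ denotes a H\"older norm in the second variable with respect to the scaled distance; the exponent $-5 + \theta$ is then integrable near the diagonal.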
This identity defines $\mathscr{R}_\gamma G_\gamma$ as a distribution on $\R^4 \times \R^4$ (more precisely, $\mathscr{R}_\gamma G_\gamma$ is a function in the first variable and a distribution in the second one). Then the expression in the brackets in \eqref{eq:sixth-symbol-last-again} may be written as
\betagin{equation*}
\int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \int_{D_\mathtt{var}epsilon} \mathtt{var}phii_z^\lambdambda(z_1) \bigl(\mathscr{R}_\gamma G_\gamma\bigr)(z_1, z_2) \mywidetilde{\mathscr{K}}^\gamma(z_3 - z_2) \mathrm{d} \mathfrak{M}_{\gamma, \mathfrak{a}}(z_3) \mathrm{d} z_1 \mathrm{d} z_2.
\end{equation*}
We note that this expression is well defined, because the distribution $\mathscr{R}_\gamma G_\gamma$ is convolved with smooth functions. It will be convenient to represent this expression as a diagram. For this, we denote the random kernel $G_\gamma$ by an edge ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[kernelBig] (0,0) to node[labl,pos=0.45] {\tiny 5,0} (1,0);\,'', and we denote $\mathscr{R}_\gamma G_\gamma$ by ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[kernelBig] (0,0) to node[labl,pos=0.45] {\tiny 5,-1} (1,0);\,''. Here, the label ``$5$'' refers to the order of singularity of $G_\gamma$ (which can be proved similarly to \eqref{eq:CG-bound1}), and the label ``$-1$'' refers to the order of negative renormalisation (see \cite[Sec.~5]{Martingales}). Then the preceding expression can be represented as
\betagin{equation*}
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,0) [root] (root) {};
\node at (0,0) [rootlab] {$z$};
\node at (2.2,0) [dot] (cent1) {};
\node at (5.1,0) [dot] (int) {};
\node at (7.7,0) [var_blue] (cent) {};
\mathrm{d}raw[testfcn] (cent1) to (root);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[kernelBig] (int) to node[labl,pos=0.45] {\tiny 5,-1} (cent1);
\end{tikzpicture}\;.
\end{equation*}
Applying \cite[Cor.~5.6]{Martingales}, we bound sufficiently high moments of this expression by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-\bar \kappappa}$ for any $\bar \kappappa > 0$.
\subsubsection{The element $\tau = \protect\<4b>$}
The definition \eqref{eq:model-Hermite} and the expansion \cite[Eq.~2.15]{Martingales} yield a diagrammatic representation of the map $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau$:
\betagin{align} \nonumber
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda) &\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,-2.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-3.2,2) [var_blue] (left) {};
\node at (-1.1,2.9) [var_blue] (cent_left) {};
\node at (1.1,2.9) [var_blue] (cent_right) {};
\node at (3.2,2) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
\; + \;6\,
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2.2,2) [var_very_pink] (left) {};
\node at (1.1,2.9) [var_blue] (cent) {};
\node at (3.2,2) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-60] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to[bend left=60] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=60] node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
\; + \;4\,
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_very_blue] (left) {};
\node at (2,2) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=60] node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
\\[0.2cm]
&\qquad \lambdabel{eq:Psi4}
\; +\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[out=0, in=0, distance=2.5cm] node[labl,pos=0.55] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=180, in=180, distance=2.5cm] node[labl,pos=0.55] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=-35, in=35] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=225, in=135] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; +\; \;3\,
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2.2,2) [var_very_pink] (left) {};
\node at (2.2,2) [var_very_pink] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}\;. \nonumber
\end{align}
Sufficiently high moments of the first diagram are bounded using \cite[Cor.~5.6]{Martingales} by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-2}$, which is the required bound \eqref{eq:model_bound}. Reducing the singularity of the kernels in the same way as above, \cite[Cor.~5.6]{Martingales} allows us to bound the moments of the other diagrams by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-2 - \bar{\kappappa}}$ for any $\bar{\kappappa} > 0$, where $c_\gamma$ vanishes as $\gamma \to 0$.
\subsubsection{The element $\tau = \protect\<5b>$}
Similarly to the previous element, we can write
\betagin{align*}
&\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z}\tau\bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\;
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,3.2) [var_blue] (cent) {};
\node at (-2.8,0.8) [var_blue] (left) {};
\node at (-2,2.5) [var_blue] (cent_left) {};
\node at (2,2.5) [var_blue] (cent_right) {};
\node at (2.8,0.8) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_right) to node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
+ 10
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,3.2) [var_blue] (cent) {};
\node at (-2,2) [var_very_pink] (cent_left) {};
\node at (2.1,2.8) [var_blue] (cent_right) {};
\node at (3.5,1.5) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (cent_left) to[bend left=45] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_left) to[bend left=-45] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent) to[bend left=45] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=45] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_right) to[bend left=45] node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
+ 10
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_very_blue] (left) {};
\node at (1.5,2.5) [var_blue] (cent_right) {};
\node at (3,1.5) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=60] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_right) to[bend left=45] node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture} \\
& \qquad + 10
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_very_blue] (left) {};
\node at (2.5,0) [var_very_pink] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[bend left=90] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[bend left=-90] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=30] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=-30] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
+ 5
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_very_blue] (left) {};
\node at (3,1.5) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[out=45, in=45, distance=2.5cm] node[labl,pos=0.55] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=225, in=225, distance=2.5cm] node[labl,pos=0.55] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=10, in=80] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=270, in=180] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=60] node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}
+
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.9) [var_very_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to[out=180, in=180, distance=3.2cm] node[labl,pos=0.55] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=0, in=0, distance=3.2cm] node[labl,pos=0.55] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=0, in=0, distance=1.5cm] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to[out=180, in=180, distance=1.5cm] node[labl,pos=0.4] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
+ 15
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (-2,2) [var_very_pink] (cent_left) {};
\node at (2,2) [var_very_pink] (cent_right) {};
\node at (3.5,1.5) [var_blue] (right) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (cent_left) to[bend left=45] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_left) to[bend left=-45] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_right) to[bend left=45] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (cent_right) to[bend left=-45] node[labl,pos=0.45] {\tiny 3,0} (int);
\mathrm{d}raw[keps] (right) to[bend left=45] node[labl,pos=0.4] {\tiny 3,0} (int);
\end{tikzpicture}.
\end{align*}
By \cite[Cor.~5.6]{Martingales}, the moments of the first diagram are bounded by a constant multiple of $(\lambdambda \vee \mathfrak{e})^{-5/2}$ and, reducing the singularity of the kernels as before, the moments of the other diagrams are bounded by $c_\gamma (\lambdambda \vee \mathfrak{e})^{-5/2}$ with $c_\gamma$ vanishing as $\gamma \to 0$.
\subsubsection{Proof of the bounds \eqref{eq:model_bound-delta}}
We draw ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[kepsdot] (0,0) to node[labl,pos=0.45] {\tiny 3,0} (1,0);\,'' for the kernel $\mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}$, because it has the same singularity as $\mywidetilde{\mathscr{K}}^{\gamma}$ (see Appendix~\ref{sec:decompositions}), and we draw ``\tikz[baseline=-0.1cm] \mathrm{d}raw[keps] (0,0) to node[labl,pos=0.45] {\tiny 3+$\theta$,0} (1.5,0);\,'' for the difference $\mathrm{d}elta^{-\theta}(\mywidetilde{\mathscr{K}}^\gamma - \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta})$, because it satisfies \cite[Assum.~4]{Martingales} with $a_e=3+\theta$, for any $\theta > 0$ small enough (see Appendix~\ref{sec:decompositions}).
We start by proving the bound \eqref{eq:model_bound-delta} for the element $\tau = \protect\<1b>$. As described at the beginning of Section~\ref{sec:Models_bounds}, the difference $\mywidetilde{\mathscr{K}}^\gamma - \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta}$ satisfies \cite[Assum.~4]{Martingales} with $a_e=3+\theta$, for any $\theta > 0$ small enough, and we represent this difference by the edge ``\,\tikz[baseline=-0.1cm] \mathrm{d}raw[keps] (0,0) to node[labl,pos=0.45] {\tiny 3+$\theta$,0} (1.5,0);\,'' with the multiplier $\mathrm{d}elta^\theta$. Then we write the function $\mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \tau - \mathbb{P}i_{z}^{\gamma, \mathrm{d}elta, \mathfrak{a}} \tau$ as
\betagin{equation*}
\iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \tau - \mathbb{P}i_{z}^{\gamma, \mathrm{d}elta, \mathfrak{a}} \tau \bigr)(\mathtt{var}phii_z^\lambdambda)
\;=\; \mathrm{d}elta^{\theta}
\betagin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0.9,0.2) [root] (root) {};
\node at (0.9,0.2) [rootlab] {$z$};
\node at (-2.3, 0.2) [dot] (int) {};
\node at (-6,0.2) [var_blue] (left) {};
\mathrm{d}raw[testfcn] (int) to (root);
\mathrm{d}raw[keps] (left) to node[labl,pos=0.45] {\tiny 3+$\theta$,0} (int);
\end{tikzpicture}\;,
\end{equation*}
with the kernel given by
\betagin{equation*}
\mathcal{C}K^{\lambdambda, \mathfrak{e}, \mathrm{d}elta}_{\mathcal{C}CG, z}(z^{\mathtt{var}}) = \int_{D_\mathtt{var}epsilon} \!\!\mathtt{var}phii_z^\lambdambda(\bar z)\, \big( \mywidetilde{\mathscr{K}}^\gamma - \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta} \big)(\bar z - z^{\mathtt{var}}) \, \mathrm{d} \bar z.
\end{equation*}
Applying \cite[Cor.~5.6]{Martingales}, we get for any $\bar\kappappa > 0$ and $p \geq 2$ large enough
\betagin{equation*}
\Bigl( \mathbb{E} \bigl| \iota_\mathtt{var}epsilon \bigl(\mathbb{P}i^{\gamma, \mathfrak{a}}_{z} \tau - \mathbb{P}i_{z}^{\gamma, \mathrm{d}elta, \mathfrak{a}} \tau \bigr)(\mathtt{var}phii_z^\lambdambda) \bigr|^p\Bigr)^{\mathfrak{r}ac{1}{p}} \lesssim \mathrm{d}elta^{\theta} (\lambdambda \vee \mathfrak{e})^{- \mathfrak{r}ac{1}{2}-\theta} \Bigl( 1 + \mathtt{var}epsilon^{\mathfrak{r}ac{9}{4} - \bar\kappappa} \mathfrak{e}^{-\mathfrak{r}ac52} \Bigr) \lesssim \mathrm{d}elta^{\theta} (\lambdambda \vee \mathfrak{e})^{- \mathfrak{r}ac{1}{2}-\theta},
\end{equation*}
which is the required bound \eqref{eq:model_bound-delta} for the element $\tau$.
Now, we will prove the bound \eqref{eq:model_bound-delta} for the element $\tau = \protect\<2b>$. Similarly to \eqref{eq:Pi2-new} we can write
\begin{align}\label{eq:Pi2-conv}
\iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \tau - \Pi_{z}^{\gamma, \delta, \mathfrak{a}} \tau\bigr)(\varphi_z^\lambda)
& \;=\; \delta^{\theta}
\begin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,-2.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-2,2.5) [var_blue] (left) {};
\node at (2,2.5) [var_blue] (right) {};
\draw[testfcn] (int) to (root);
\draw[keps] (left) to node[labl,pos=0.45] {\tiny $3$,0} (int);
\draw[keps] (right) to node[labl,pos=0.45] {\tiny 3+$\theta$,0} (int);
\end{tikzpicture}
\; +\; \delta^{\theta}
\begin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,-2.5) {$$};
\node at (0,0) [dot] (int) {};
\node at (-2,2.5) [var_blue] (left) {};
\node at (2,2.5) [var_blue] (right) {};
\draw[testfcn] (int) to (root);
\draw[keps] (left) to node[labl,pos=0.45] {\tiny 3+$\theta$,0} (int);
\draw[kepsdot] (right) to node[labl,pos=0.45] {\tiny $3$,0} (int);
\end{tikzpicture}
\; +\; \delta^\theta
\begin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_square] (left) {};
\draw[testfcn] (int) to (root);
\draw[keps] (left) to[bend left=80] node[labl,pos=0.45] {\tiny 3+$\theta$,0} (int);
\draw[keps] (left) to[bend left=-80] node[labl,pos=0.45] {\tiny 3,0} (int);
\end{tikzpicture}
\; + \; \delta^\theta
\begin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_square] (left) {};
\draw[testfcn] (int) to (root);
\draw[kepsdot] (left) to[bend left=80] node[labl,pos=0.45] {\tiny $3$,0} (int);
\draw[keps] (left) to[bend left=-80] node[labl,pos=0.45] {\tiny 3+$\theta$,0} (int);
\end{tikzpicture} \; \\[0.2cm]
& +\; \left(
\begin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_triangle] (left) {};
\draw[testfcn] (int) to (root);
\draw[keps] (left) to[bend left=80] node[labl,pos=0.45] {\tiny $3$,0} (int);
\draw[keps] (left) to[bend left=-80] node[labl,pos=0.45] {\tiny $3$,0} (int);
\end{tikzpicture} \; - \; (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' ) \,
\begin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\draw[testfcn] (int) to (root);
\end{tikzpicture} \right)
\; - \; \left(
\begin{tikzpicture}[scale=0.35, baseline=0cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\node at (0,2.5) [var_red_triangle] (left) {};
\draw[testfcn] (int) to (root);
\draw[kepsdot] (left) to[bend left=80] node[labl,pos=0.45] {\tiny $3$,0} (int);
\draw[kepsdot] (left) to[bend left=-80] node[labl,pos=0.45] {\tiny $3$,0} (int);
\end{tikzpicture} \; - \; (\mathfrak{c}_{\gamma, \delta} + \mathfrak{c}_{\gamma, \delta}' ) \,
\begin{tikzpicture}[scale=0.35, baseline=-0.5cm]
\node at (0,-2.2) [root] (root) {};
\node at (0,-2.2) [rootlab] {$z$};
\node at (0,0) [dot] (int) {};
\draw[testfcn] (int) to (root);
\end{tikzpicture} \right). \nonumber
\end{align}
The moments of the first four terms in \eqref{eq:Pi2-conv} are bounded using \cite[Cor.~5.6]{Martingales} by a constant multiple of $\delta^\theta (\lambda \vee \mathfrak{e})^{-1-\theta}$, in the same way as we bounded the respective terms in \eqref{eq:Pi2-new}. We provide more details for the two expressions in parentheses in \eqref{eq:Pi2-conv}. In the same way as in \eqref{eq:Pi-3}, we can write the whole expression in the last line of \eqref{eq:Pi2-conv} as
\begin{align*}
&\frac{1}{2} \int_{D_\varepsilon} \varphi^\lambda_z (\bar z) \biggl( \varepsilon^3 \sum_{\tilde{y} \in \Lambda_{\varepsilon}} \int_0^\infty \Bigl(\mywidetilde{\mathscr{K}}^\gamma_{\bar t - \tilde{s}}(\bar x - \tilde{y})^2 - \mywidetilde{\mathscr{K}}^{\gamma, \delta}_{\bar t - \tilde{s}}(\bar x - \tilde{y})^2\Bigr) \bigl(\mathbf{C}_{\gamma, \mathfrak{a}}(\tilde{s}, \tilde{y}) - 2 + 2 \beta \varkappa_{\gamma, 3} \gamma^6 \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z \\
&+ \frac{1}{2} \int_{D_\varepsilon} \varphi^\lambda_z (\bar z) \biggl( \varepsilon^3 \sum_{\tilde{y} \in \Lambda_{\varepsilon}} \int_{-\infty}^0 \Bigl(\mywidetilde{\mathscr{K}}^\gamma_{\bar t - \tilde{s}}(\bar x - \tilde{y})^2 - \mywidetilde{\mathscr{K}}^{\gamma, \delta}_{\bar t - \tilde{s}}(\bar x - \tilde{y})^2\Bigr) \bigl(\widetilde{\mathbf{C}}_{\gamma, \mathfrak{a}}(-\tilde{s}, \tilde{y}) - 2 + 2 \beta \varkappa_{\gamma, 3} \gamma^6 \un{\mathfrak{C}}_\gamma\bigr) \mathrm{d} \tilde{s} \biggr) \mathrm{d} \bar z.
\end{align*}
We bound this expression in the same way as we bounded \eqref{eq:Pi-3}, with the only difference that now we bound the difference of the two kernels as
\begin{equation*}
\biggl| \int_{D_\varepsilon} \Bigl(\mywidetilde{\mathscr{K}}^\gamma(z)^2 - \mywidetilde{\mathscr{K}}^{\gamma, \delta}(z)^2\Bigr)\, \mathrm{d} z \biggr| \lesssim \delta^\theta \mathfrak{e}^{-1 - \theta}
\end{equation*}
(see the explanations at the beginning of this section). Hence, the expression in the last line of \eqref{eq:Pi2-conv} is bounded by a constant times $\delta^\theta (\lambda \vee \mathfrak{e})^{-1-\theta}$, as required.
The bound \eqref{eq:model_bound-delta} for the other elements in $\bar{\mathcal{W}}$ can be proved analogously, and we only sketch the idea of the proof. For any element $\tau \in \bar{\mathcal{W}}$ we can write
\[
\iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_{z}\tau - \Pi_{z}^{\gamma, \delta, \mathfrak{a}}\tau\bigr)(\varphi_z^\lambda) = \sum_{i \in A} \iota_\varepsilon \bigl(\Pi_{z}^{\gamma, i}\tau - \Pi_{z}^{\gamma, (\delta), i}\tau\bigr)(\varphi_z^\lambda),
\]
for a finite set $A$, where the new maps $\Pi_{z}^{\gamma, i} \tau$ and $\Pi_{z}^{\gamma, (\delta), i} \tau$ come from expanding products of martingales \cite[Eq.~5.1]{Martingales}. These two maps can be represented by diagrams, as we did above, with the only difference that the edges in the diagram of $\Pi_{z}^{\gamma, (\delta), i} \tau$ incident to the noise nodes are given by the kernels $\mywidetilde{\mathscr{K}}^{\gamma, \delta}$. We can further write
\begin{equation}\label{eq:Pi-expansion}
\iota_\varepsilon \bigl(\Pi_{z}^{\gamma, i}\tau - \Pi_{z}^{\gamma, (\delta), i}\tau\bigr)(\varphi_z^\lambda) = \sum_{j \in B_i} \iota_\varepsilon \bigl(\Pi_{z}^{\gamma, (\delta), i, j}\tau\bigr)(\varphi_z^\lambda),
\end{equation}
where the diagram for $\iota_\varepsilon \bigl(\Pi_{z}^{\gamma, (\delta), i, j}\tau\bigr)(\varphi_z^\lambda)$ is obtained from that of $\iota_\varepsilon \bigl(\Pi_{z}^{\gamma, (\delta), i}\tau\bigr)(\varphi_z^\lambda)$ by replacing one of the kernels incident to the noise nodes by $\mywidetilde{\mathscr{K}}^\gamma - \mywidetilde{\mathscr{K}}^{\gamma, \delta}$, and some of the other kernels by $\mywidetilde{\mathscr{K}}^{\gamma, \delta}$.
Applying \cite[Cor.~5.6]{Martingales} to each element in \eqref{eq:Pi-expansion}, in the same way as we did in the previous sections, we get the required bound \eqref{eq:model_bound-delta}.
\subsection{Proof of Proposition~\ref{prop:models-converge}}
\label{sec:convergence-of-models-proof}
We start by proving the required bounds on the maps $\Pi^{\gamma, \mathfrak{a}}$ and $\Pi^{\gamma, \delta, \mathfrak{a}}$. From the preceding sections we conclude that, in the setting of this proposition, for $\bar \kappa > 0$ sufficiently small and for every $\tau \in \mathcal{W}^\mathrm{ex} \setminus \{\<40eb>, \<50eb>, \<20b>, \<30b>\}$ with $|\tau| < 0$, we have
\begin{subequations}\label{eqs:moment-bounds-Pi-Gamma}
\begin{equation}\label{eq:moment-bounds-Pi}
\mathbb{E} \Bigl[ \bigl|\bigl(\iota_\varepsilon \Pi^{\gamma, \mathfrak{a}}_{z} \tau\bigr) (\varphi^\lambda_{z})\bigr|^p\Bigr] \lesssim \lambda^{(|\tau| + \bar \kappa) p}, \qquad \mathbb{E} \Bigl[ \bigl| \bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \tau\bigr) (\bar z) \bigr|^p \Bigr] \lesssim \mathfrak{e}^{(|\tau| + \bar \kappa) p},
\end{equation}
and
\begin{align}
\mathbb{E} \Bigl[ \bigl|\bigl(\iota_\varepsilon \Pi^{\gamma, \mathfrak{a}}_{z} \tau - \iota_\varepsilon \Pi^{\gamma, \delta, \mathfrak{a}}_{z} \tau\bigr) (\varphi^\lambda_{z})\bigr|^p\Bigr] &\lesssim \lambda^{(|\tau| + \bar \kappa) p} \delta^{\theta p}, \\
\mathbb{E} \Bigl[ \bigl|\bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \tau - \Pi^{\gamma, \delta, \mathfrak{a}}_{z} \tau\bigr) (\bar z) \bigr|^p \Bigr] &\lesssim \mathfrak{e}^{(|\tau| + \bar \kappa) p} \delta^{\theta p},
\end{align}
\end{subequations}
uniformly over $z \in [-T, T] \times [-1, 1]^3$, $\| \bar z - z \|_\mathfrak{s} \leq \mathfrak{e}$ and other quantities as in \eqref{eq:Pi-bounds}. In these and the following bounds the proportionality constants depend on $p$ and $T$, but are independent of all the other quantities. These bounds readily yield the respective bounds for the elements $\<40eb>$ and $\<50eb>$, because of the definition \eqref{eq:Pi-E} and $\gamma^6 \lesssim \mathfrak{e}^2 \lesssim (\lambda \vee \mathfrak{e})^2$.
It is left to prove these bounds for the symbols $\<20b>$ and $\<30b>$. We will prove the stronger bounds
\begin{equation}\label{eq:Pi-strong-bound}
\begin{aligned}
\mathbb{E} \Bigl[ \bigl|\bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \bar \tau\bigr)(\bar z)\bigr|^p\Bigr] &\lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{(|\bar \tau| + \bar \kappa) p}, \\
\mathbb{E} \Bigl[ \bigl|\bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \bar \tau - \Pi^{\gamma, \delta, \mathfrak{a}}_{z}\bar \tau\bigr)(\bar z)\bigr|^p\Bigr] &\lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{(|\bar \tau| + \bar \kappa) p} \delta^{\theta p},
\end{aligned}
\end{equation}
for $\bar \tau \in \{\<20b>, \<30b>\}$, from which the required bounds \eqref{eqs:moment-bounds-Pi-Gamma} follow at once. From the definition \eqref{eq:Pi-I} and the expansion of $\mywidetilde{\mathscr{K}}^\gamma$, provided in Appendix~\ref{sec:decompositions}, we have
\begin{equation}\label{eq:Pi-strong-bound-proof}
\Bigl(\mathbb{E} \Bigl[ \bigl|\bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \bar \tau\bigr)(\bar z)\bigr|^p\Bigr]\Bigr)^{\frac{1}{p}} \lesssim \sum_{n = 0}^{\mywidetilde{M}} \Bigl(\mathbb{E} \Bigl[ \bigl| \iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_z \tau\bigr) \bigl(\widetilde{K}^{\gamma, n}(\bar z - \bigcdot) - \widetilde{K}^{\gamma, n}(z - \bigcdot)\bigr) \bigr|^p\Bigr]\Bigr)^{\frac{1}{p}},
\end{equation}
for $\tau \in \{\<2b>, \<3b>\}$. In order to estimate this sum, we need to consider two cases: $\| z - \bar z \|_\mathfrak{s} \geq 2^{-n}$ and $\| z - \bar z \|_\mathfrak{s} < 2^{-n}$.
If $\| z - \bar z \|_\mathfrak{s} \geq 2^{-n}$, then we apply the Minkowski inequality to bound the $n$-th term in \eqref{eq:Pi-strong-bound-proof} by
\begin{equation}\label{eq:Pi-strong-bound-proof1}
\Bigl(\mathbb{E} \Bigl[ \bigl| \iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_z \tau\bigr) \bigl(\widetilde{K}^{\gamma, n}(\bar z - \bigcdot)\bigr) \bigr|^p\Bigr]\Bigr)^{\frac{1}{p}} + \Bigl(\mathbb{E} \Bigl[ \bigl| \iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_z \tau\bigr) \bigl(\widetilde{K}^{\gamma, n}(z - \bigcdot)\bigr) \bigr|^p\Bigr]\Bigr)^{\frac{1}{p}}.
\end{equation}
Moreover, from the identities $\Pi^{\gamma, \mathfrak{a}}_{z} = \Pi^{\gamma, \mathfrak{a}}_{\bar z} \Gamma^{\gamma, \mathfrak{a}}_{\!\bar z z}$ and $\Gamma^{\gamma, \mathfrak{a}}_{\!\bar z z} \tau = \tau$ for $\tau \in \{\<2b>, \<3b>\}$ (the first identity follows from the definition of the model, and the second follows from Table~\ref{tab:linear_transformations}), we can replace $\Pi^{\gamma, \mathfrak{a}}_z$ in the first term in \eqref{eq:Pi-strong-bound-proof1} by $\Pi^{\gamma, \mathfrak{a}}_{\bar z}$. Then the bounds \eqref{eq:Kn_bound} and \eqref{eq:moment-bounds-Pi} allow us to estimate \eqref{eq:Pi-strong-bound-proof1} by a constant multiple of $2^{- (|\bar \tau| + \bar \kappa) n}$. Hence, the part of the sum in \eqref{eq:Pi-strong-bound-proof} over $n$ satisfying $\| z - \bar z \|_\mathfrak{s} \geq 2^{-n}$ is bounded by a constant times
\begin{equation*}
\sum_{\substack{ 0 \leq n \leq \mywidetilde{M} : \\ \| z - \bar z \|_\mathfrak{s} \geq 2^{-n}}} 2^{- (|\bar \tau| + \bar \kappa) n} \lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{|\bar \tau| + \bar \kappa}.
\end{equation*}
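Let us record the elementary step behind the last bound: since the two symbols $\<20b>$ and $\<30b>$ have strictly positive homogeneity, so that $|\bar \tau| + \bar \kappa > 0$, the sum is geometric and dominated by its smallest index $n_0$, for which $2^{-n_0} \approx \| z - \bar z \|_\mathfrak{s}$,
\begin{equation*}
\sum_{n \geq n_0} 2^{- (|\bar \tau| + \bar \kappa) n} \lesssim 2^{- (|\bar \tau| + \bar \kappa) n_0} \lesssim \| z - \bar z \|_\mathfrak{s}^{|\bar \tau| + \bar \kappa} \leq \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{|\bar \tau| + \bar \kappa},
\end{equation*}
where the last inequality again uses the positivity of the exponent.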
If $\| z - \bar z \|_\mathfrak{s} < 2^{-n}$, then we need to distinguish the two cases $n < \mywidetilde{M}$ and $n = \mywidetilde{M}$. For $n < \mywidetilde{M}$ we can write
\begin{equation*}
\widetilde{K}^{\gamma, n}(\bar z - \tilde z) - \widetilde{K}^{\gamma, n}(z - \tilde z) = \sum_{i = 0}^3 \int_{L_i} \partial_{u_i} \widetilde{K}^{\gamma, n}(z + u - \tilde z) \mathrm{d} u,
\end{equation*}
for line segments $L_i$, parallel to the coordinate axes, such that their union is a path connecting the origin and $\bar z - z$. In particular, the length of each $L_i$ is bounded by $\| z - \bar z \|^{\mathfrak{s}_i}_\mathfrak{s}$, where $\mathfrak{s}_0 = 2$ and $\mathfrak{s}_i = 1$ for $i = 1,2,3$. Then we have
\begin{equation*}
\iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_z \tau\bigr) \bigl(\widetilde{K}^{\gamma, n}(\bar z - \bigcdot) - \widetilde{K}^{\gamma, n}(z - \bigcdot)\bigr) = \sum_{i = 0}^3 \int_{L_i} \iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_{z + u} \tau\bigr) \bigl(\partial_{u_i} \widetilde{K}^{\gamma, n}(z + u - \bigcdot)\bigr) \mathrm{d} u,
\end{equation*}
where we replaced $\Pi^{\gamma, \mathfrak{a}}_{z}$ by $\Pi^{\gamma, \mathfrak{a}}_{z + u}$ in the same way as we did in \eqref{eq:Pi-strong-bound-proof1}. The bounds \eqref{eq:Kn_bound} and \eqref{eq:moment-bounds-Pi} yield
\begin{align*}
\Bigl(\mathbb{E} \Bigl[ \bigl| \iota_\varepsilon \bigl(\Pi^{\gamma, \mathfrak{a}}_z \tau\bigr) \bigl(\widetilde{K}^{\gamma, n}(\bar z - \bigcdot) - \widetilde{K}^{\gamma, n}(z - \bigcdot)\bigr) \bigr|^p\Bigr]\Bigr)^{\frac{1}{p}} \lesssim \sum_{i = 0}^3 2^{- (|\bar \tau| - \mathfrak{s}_i + \bar \kappa) n} \| z - \bar z \|^{\mathfrak{s}_i}_\mathfrak{s}.
\end{align*}
Since $|\bar \tau| - \mathfrak{s}_i < 0$, we can take $\bar \kappa > 0$ small enough that $|\bar \tau| - \mathfrak{s}_i + \bar \kappa < 0$. Then the part of the sum in \eqref{eq:Pi-strong-bound-proof} over $n$ satisfying $\| z - \bar z \|_\mathfrak{s} < 2^{-n}$ is bounded by a constant times
\begin{equation*}
\sum_{i = 0}^3 \| z - \bar z \|^{\mathfrak{s}_i}_\mathfrak{s} \sum_{\substack{ 0 \leq n < \mywidetilde{M} : \\ \| z - \bar z \|_\mathfrak{s} < 2^{-n}}} 2^{- (|\bar \tau| - \mathfrak{s}_i + \bar \kappa) n} \lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{|\bar \tau| + \bar \kappa}.
\end{equation*}
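The same type of geometric-sum estimate is used here: since $|\bar \tau| - \mathfrak{s}_i + \bar \kappa < 0$, the inner sum is dominated by its largest index $n_0$, for which $2^{-n_0} \approx \| z - \bar z \|_\mathfrak{s}$, so that
\begin{equation*}
\| z - \bar z \|^{\mathfrak{s}_i}_\mathfrak{s} \sum_{n \leq n_0} 2^{- (|\bar \tau| - \mathfrak{s}_i + \bar \kappa) n} \lesssim \| z - \bar z \|^{\mathfrak{s}_i}_\mathfrak{s}\, \| z - \bar z \|_\mathfrak{s}^{|\bar \tau| - \mathfrak{s}_i + \bar \kappa} = \| z - \bar z \|_\mathfrak{s}^{|\bar \tau| + \bar \kappa},
\end{equation*}
and the positivity of the exponent $|\bar \tau| + \bar \kappa$ allows us to bound this by the right-hand side of the previous display.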
In the case $n = \mywidetilde{M}$ we have $\| z - \bar z \|_\mathfrak{s} < \mathfrak{e}$, and the radius of support of the function $\widetilde{K}^{\gamma, \mywidetilde{M}}(\bar z - \tilde z) - \widetilde{K}^{\gamma, \mywidetilde{M}}(z - \tilde z)$ in $\tilde z$ is of order $\mathfrak{e}$. Then \eqref{eq:Kn_bound} and the second bound in \eqref{eq:moment-bounds-Pi} yield
\begin{align*}
\Bigl(\mathbb{E} \Bigl[ \bigl| \iota_\varepsilon &\bigl(\Pi^{\gamma, \mathfrak{a}}_z \tau\bigr) \bigl(\widetilde{K}^{\gamma, \mywidetilde{M}}(\bar z - \bigcdot) - \widetilde{K}^{\gamma, \mywidetilde{M}}(z - \bigcdot)\bigr) \bigr|^p\Bigr]\Bigr)^{\frac{1}{p}} \\
&\qquad \lesssim \mathfrak{e}^{|\tau| + \bar \kappa} \int_{D_\varepsilon} \bigl| \widetilde{K}^{\gamma, \mywidetilde{M}}(\bar z - \tilde z) - \widetilde{K}^{\gamma, \mywidetilde{M}}(z - \tilde z)\bigr| \mathrm{d} \tilde z \lesssim \mathfrak{e}^{|\bar \tau| + \bar \kappa} \lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{|\bar \tau| + \bar \kappa}.
\end{align*}
This finishes the proof of the first bound in \eqref{eq:Pi-strong-bound}.
The second bound in \eqref{eq:Pi-strong-bound} can be proved analogously, but instead of \eqref{eq:Kn_bound} we need to use
\begin{equation*}
\bigl|D^k \bigl(\widetilde{K}^{\gamma, n} - \widetilde{K}^{\gamma, n} \star_\varepsilon \varrho_{\gamma, \delta}\bigr)(z)\bigr| \leq C \delta^\theta 2^{n(3 + |k|_\mathfrak{s} - \theta)},
\end{equation*}
for the respective $n$ and $k$. This bound follows readily from the properties of $\widetilde{K}^{\gamma, n}$ and $\varrho_{\gamma, \delta}$.
The bounds on $\Pi^{\gamma, \mathfrak{a}}$ yield the bounds on $\Gamma^{\gamma, \mathfrak{a}}$. Indeed, the definition provided above \eqref{eq:Gamma-lift} yields $\Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z} \tau = \tau - (\Pi^{\gamma, \mathfrak{a}}_{z} \tau)(\bar z) \symbol{\1}$ for $\tau= \<20b>$, and from \eqref{eq:Pi-strong-bound} we get
\begin{align*}
\mathbb{E} \Bigl[ \bigl| \Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z} \tau \bigr|_0^p\Bigr] &= \mathbb{E} \Bigl[ \bigl|\bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \tau\bigr)(\bar z)\bigr|^p\Bigr] \lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{(|\tau| + \bar \kappa) p}, \\
\mathbb{E} \Bigl[ \bigl| \bigl(\Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z} - \Gamma^{\gamma, \delta, \mathfrak{a}}_{\!z \bar z}\bigr) \tau \bigr|_0^p\Bigr] &= \mathbb{E} \Bigl[ \bigl|\bigl(\Pi^{\gamma, \mathfrak{a}}_{z} \tau - \Pi^{\gamma, \delta, \mathfrak{a}}_{z} \tau\bigr)(\bar z)\bigr|^p\Bigr] \lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{(|\tau| + \bar \kappa) p} \delta^{\theta p},
\end{align*}
which is the required bound. In the same way we get bounds for all other elements $\tau \in \mathcal{W}^\mathrm{ex}$ such that $\Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z} \tau \neq \tau$.
Now we will use a Kolmogorov-type result to show that the bounds \eqref{eqs:moment-bounds-Pi-Gamma} and \eqref{eq:Pi-strong-bound} yield \eqref{eq:prop:models-converge}, with a small loss of regularity. For every $\tau \in \mathcal{W}^\mathrm{ex} \setminus \{\<40eb>, \<50eb>\}$ with $|\tau| < 0$ the bounds
\begin{align*}
\mathbb{E} \biggl[ \sup_{\lambda \in [\mathfrak{e}, 1]}\sup_{\varphi \in \mathcal{B}^{2}_\mathfrak{s}} \sup_{z \in K} \lambda^{-|\tau| p} \bigl|\bigl(\iota_\varepsilon \Pi^{\gamma, \mathfrak{a}}_{z} \tau\bigr) (\varphi^\lambda_{z})\bigr|^p\biggr] &\lesssim 1,\\
\mathbb{E} \biggl[ \sup_{\lambda \in [\mathfrak{e}, 1]}\sup_{\varphi \in \mathcal{B}^{2}_\mathfrak{s}} \sup_{z \in K} \lambda^{-|\tau| p} \bigl|\bigl(\iota_\varepsilon \Pi^{\gamma, \mathfrak{a}}_{z} \tau - \iota_\varepsilon \Pi^{\gamma, \delta, \mathfrak{a}}_{z} \tau\bigr) (\varphi^\lambda_{z})\bigr|^p\biggr] &\lesssim \delta^{\theta p},
\end{align*}
uniformly in $\gamma > 0$, can be proved in exactly the same way as \cite[Lem.~10.2]{Regularity}. The corresponding bounds for the elements $\<40eb>$ and $\<50eb>$ then follow readily, as in \eqref{eqs:moment-bounds-Pi-Gamma}. Furthermore, from \eqref{eq:Pi-strong-bound} and the Kolmogorov continuity criterion \cite{Kallenberg} we conclude that
\begin{equation*}
\mathbb{E} \biggl[ \sup_{z, \bar z \in K} \frac{|(\Pi^{\gamma, \mathfrak{a}}_{z} \bar \tau)(\bar z)|^p}{(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e})^{|\bar \tau| p}}\biggr] \lesssim 1, \qquad \mathbb{E} \biggl[ \sup_{z, \bar z \in K} \frac{|(\Pi^{\gamma, \mathfrak{a}}_{z} \bar \tau - \Pi^{\gamma, \delta, \mathfrak{a}}_{z}\bar \tau)(\bar z)|^p}{(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e})^{|\bar \tau| p}}\biggr] \lesssim \delta^{\theta p},
\end{equation*}
for $\bar \tau \in \{\<20b>, \<30b>\}$ and for any compact set $K \subset \R^4$. Finally, we get the required bounds on the maps $\Gamma^{\gamma, \mathfrak{a}}$ and $\Gamma^{\gamma, \delta, \mathfrak{a}}$, because they are defined in \eqref{eq:Gamma-lift} via $\Pi^{\gamma, \mathfrak{a}}$ and $\Pi^{\gamma, \delta, \mathfrak{a}}$.
\section{A discrete solution map}
\label{sec:discrete-solution}
In order to prove the desired convergence in Theorem~\ref{thm:main}, we first need to write equation \eqref{eq:IsingKacEqn-new} in the framework of regularity structures.
We use the discrete model $Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}}$ constructed in Section~\ref{sec:lift}, and define the integration operators on the space of modelled distributions via the kernel $\widetilde{G}^\gamma$ as in \cite[Sec.~4]{erhard2017discretisation}. More precisely, we write $\widetilde{G}^\gamma = \mywidetilde{\mathscr{K}}^\gamma + \mywidetilde{\mathscr{R}}^\gamma$ as in the beginning of Section~\ref{sec:lift}. Then we use the singular part $\mywidetilde{\mathscr{K}}^\gamma$ to define the map $\mathcal{K}^\gamma_{\kappa}$ as in \cite[Eq.~4.6]{erhard2017discretisation} for the value $\beta = 2$. We use the regularity $\kappa$ by analogy with \eqref{eq:abstract-integral}. We note that we do not need to consider the map $\mathcal{A}^\gamma$ from \cite[Eq.~4.16]{erhard2017discretisation}, since it vanishes in our case (see \cite[Rem.~4.10]{erhard2017discretisation}). We lift the smooth part $\mywidetilde{\mathscr{R}}^\gamma$ to a modelled distribution $R^\gamma_{1 + 3 \kappa}$ by a Taylor expansion as in \cite[Eq.~5.17]{HairerMatetski}. Then we define the map
\begin{equation}\label{eq:P-operator}
\mathcal{P}^\gamma := \mathcal{K}^\gamma_{\kappa} + R^\gamma_{1 + 3 \kappa} \mathcal{R}^{\gamma, \mathfrak{a}}
\end{equation}
on a suitable space of modelled distributions, where $\mathcal{R}^{\gamma, \mathfrak{a}}$ is the reconstruction map associated to the model by \eqref{eq:def_rec_op}. In order to use Theorem~4.8 and Lemma~6.2 in \cite{erhard2017discretisation}, we need to show that the respective assumptions in \cite{erhard2017discretisation} are satisfied. Assumptions~4.1 and 4.4 hold trivially, because the action of the model $Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}}$ on polynomials coincides with that of the canonical continuous polynomial model. Assumption~4.3 follows from our definition of the space $\mathcal{X}_\varepsilon$ in Section~\ref{sec:DiscreteModels} and the properties of the kernel $\mywidetilde{\mathscr{K}}^\gamma$. Assumption~4.7 can be shown by brute-force bounds on the terms in \cite[Eq.~4.6]{erhard2017discretisation}, combined with the definitions of discrete models and modelled distributions from Sections~\ref{sec:DiscreteModels} and \ref{sec:DiscreteModelledDistributions}. Finally, Assumption~6.1 follows readily from Taylor approximation and the smoothness of the function $\mywidetilde{\mathscr{K}}^\gamma$. As we said above, the map $\mathcal{A}^\gamma$ vanishes in our case, and Assumption~6.3 holds trivially.
Our goal is to write the solution of \eqref{eq:IsingKacEqn-new} as a reconstruction of an abstract equation of the form \eqref{eq:abstract_equation}. However, the complicated non-linearity in \eqref{eq:IsingKacEqn-new} makes the definition of this equation more difficult.
As follows from \eqref{eq:hom5}, applying $\mathcal{E}$ increases homogeneity by $2$. However, applying $\mathcal{E}$ to a modelled distribution $f \in \mathcal{D}^\zeta$ does not in general give an element of $\mathcal{D}^{\zeta+2}$, because $\mathcal{E}$ vanishes on polynomials. To resolve this problem, we define the domain of this map as
\begin{equation*}
\mathbf{D}om_{\mathcal{E}} := \{\<4b>, \<5b>, \symbol{\1}\}
\end{equation*}
and consider a modelled distribution of the form
\begin{equation}\label{eq:f-in-the-domain}
f(z) = \sum_{\tau \in \mathbf{D}om_{\mathcal{E}}} f_{\tau}(z) \tau.
\end{equation}
Then we define the map
\begin{equation}\label{eq:E-hat}
\bigl(\widehat{\mathcal{E}}_\gamma f\bigr)(z) := \mathcal{E}\bigl(f(z)\bigr) + \gamma^6 f_{\symbol{\1}}(z) \symbol{\1}.
\end{equation}
We need to consider $f$ of the form \eqref{eq:f-in-the-domain}, because $f$ should take values in the domain of the map $\mathcal{E}$. If $\mathcal{R}^{\gamma, \mathfrak{a}}$ is the reconstruction map for the model $Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}}$, then we use Remark~\ref{rem:Eps-reconstruct} to conclude
\begin{equation}\label{eq:E-reconstruction}
\mathcal{R}^{\gamma, \mathfrak{a}} \bigl(\widehat{\mathcal{E}}_\gamma f\bigr)(z) = \gamma^6 \sum_{\tau \in \mathbf{D}om_{\mathcal{E}}} f_{\tau}(z) \bigl(\mathcal{R}^{\gamma, \mathfrak{a}}\tau\bigr) (z) = \gamma^6 \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} f\bigr)(z).
\end{equation}
Moreover, we can show that this map increases regularity. Throughout this section we are going to use the time-dependent norms on modelled distributions introduced in Remark~\ref{rem:T-space}.
As we showed in Remark~\ref{rem:extension-to-Xi}, the model and the reconstruction map extend to the symbol $\blue\Xi$. Then the map \eqref{eq:P-operator} can be applied to this symbol, and we define the modelled distribution
\begin{equation}\label{eq:V-gamma}
W_{\gamma, \mathfrak{a}}(z) := \mathcal{P}^\gamma \mathbf{1}_+ (\blue\Xi)(z).
\end{equation}
Furthermore, for $\zeta = 1 + 3 \kappa$ and $\eta \in \R$ we define the abstract equation
\begin{equation}\label{eq:AbstractDiscreteEquation}
U_{\gamma, \mathfrak{a}} = \mathcal{Q}_{< \zeta} \Bigl(G^\gamma X_\gamma^0 + \mathcal{P}^\gamma \mathbf{1}_+ \bigl( F_\gamma(U_{\gamma, \mathfrak{a}}) + E^{(1)}_\gamma(U_{\gamma, \mathfrak{a}}) + E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}}) \bigr) + \sqrt 2\, W_{\gamma, \mathfrak{a}}\Bigr),
\end{equation}
for a modelled distribution $U_{\gamma, \mathfrak{a}} \in \mathcal{D}_\mathfrak{e}^{\zeta, \eta} (Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}})$, where $G^\gamma X_\gamma^0$ is the polynomial lift of the operator \eqref{eq:P-gamma} applied to $X_\gamma^0$, and where the discrete heat kernel $G_t^\gamma$ is defined on $\Lambda_{\varepsilon}$ by \eqref{eq:From-P-to-G}. The function $F_\gamma$ describes the non-linearity in \eqref{eq:IsingKacEqn-new} and is defined as
\begin{equation}\label{eq:F-ga}
F_\gamma(U_{\gamma, \mathfrak{a}}) := \mathcal{Q}_{\leq 0} \Bigl( \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) U_{\gamma, \mathfrak{a}}^3 + (A_\gamma + A) U_{\gamma, \mathfrak{a}}\Bigr),
\end{equation}
for constants $A_\gamma$ and $B_\gamma$ whose values will be chosen in Lemma~\ref{lem:solution}. We need to introduce these constants because of our definition of the renormalised products in \eqref{eq:model-Hermite}. As we will see in the following lemma, in order to obtain exactly \eqref{eq:IsingKacEqn-new} after reconstruction of \eqref{eq:AbstractDiscreteEquation}, we need to take constants $A_\gamma$ and $B_\gamma$ in \eqref{eq:F-ga} which vanish as $\gamma \to 0$. The function $E^{(1)}_\gamma$ in \eqref{eq:AbstractDiscreteEquation} describes the remainder of the Taylor approximation of the function $\tanh$ in \eqref{eq:expr_error_term}, and is given by
\begin{equation}\label{eq:E1-ga}
E^{(1)}_\gamma(U_{\gamma, \mathfrak{a}}) := \frac{1}{\delta \alpha} R_5 \bigl(\beta \gamma^3 \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}\bigr) \symbol{\1},
\end{equation}
where $\mathcal{R}^{\gamma, \mathfrak{a}}$ is the reconstruction map, defined in \eqref{eq:def_rec_op}, and $R_5 : \R \to \R$ is the remainder of the fifth-order Taylor approximation of the function $\tanh$, i.e.
\begin{equation}\label{eq:Taylor}
R_5(x) := \tanh x - x + \frac{x^3}{3} - \frac{x^5}{5}.
\end{equation}
The function $E^{(2)}_\gamma$ in \eqref{eq:AbstractDiscreteEquation} is defined as
\begin{equation}\label{eq:E2-ga}
E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}}) := \frac{\beta^5}{5} \widehat{\mathcal{E}}_\gamma \biggl(\sum_{\tau \in \{\scalebox{0.7}{\<4b>}, \scalebox{0.7}{\<5b>}\}} \Bigl( \mathcal{Q}_{\tau} U_{\gamma, \mathfrak{a}}^5 - \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \mathcal{Q}_{\tau} U_{\gamma, \mathfrak{a}}^5\bigr) \symbol{\1} \Bigr) + H_5 \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}, 2 \mathfrak{c}_\gamma\bigr) \symbol{\1} \biggr),
\end{equation}
where $\mathcal{Q}_\tau$ is the projection from the model space to the span of $\tau$, $H_5$ is the $5$-th Hermite polynomial \eqref{eq:def_Hermite} and the renormalisation constant $\mathfrak{c}_\gamma$ is defined in \eqref{eq:renorm-constant1}. The expression in the brackets in \eqref{eq:E2-ga} is spanned by the elements of $\mathbf{D}om_{\mathcal{E}}$, which allows us to apply the map $\widehat{\mathcal{E}}_\gamma$.
A natural definition of the non-linearity \eqref{eq:E2-ga} would be $\frac{\beta^5}{5} \mathcal{Q}_{\leq 0} \widehat{\mathcal{E}}_\gamma \mathcal{Q}_{\leq 0}U_{\gamma, \mathfrak{a}}^5$. This definition, however, uses elements of negative homogeneity which appear in the product $U_{\gamma, \mathfrak{a}}^5$. We could make sense of it only if we added extra elements to the model space $\mathcal{T}^\mathrm{ex}$ and defined the map $\widehat{\mathcal{E}}_\gamma$ on these elements. In order to keep the dimension of $\mathcal{T}^\mathrm{ex}$ minimal, we use the more complicated definition \eqref{eq:E2-ga}. More precisely, in the brackets in \eqref{eq:E2-ga} we keep only the two elements of $U_{\gamma, \mathfrak{a}}^5$ with the smallest homogeneities (these are $\mathcal{Q}_{\tau} U_{\gamma, \mathfrak{a}}^5$ with $\tau \in \{\<4b>, \<5b>\}$). The other parts of $U_{\gamma, \mathfrak{a}}^5$ we reconstruct and write in \eqref{eq:E2-ga} as a multiple of $\symbol{\1}$. Then, if we apply the reconstruction map $\mathcal{R}^{\gamma, \mathfrak{a}}$ to the expression in the brackets in \eqref{eq:E2-ga}, we get $H_5 \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}, 2 \mathfrak{c}_\gamma\bigr)$, which is a renormalised fifth power of the solution $\mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$. We use the renormalisation constant $2 \mathfrak{c}_\gamma$ because of the factor $\sqrt 2$ in front of the forcing term $Y_{\gamma, \mathfrak{a}}$ in \eqref{eq:IsingKacEqn-new} and a scaling property of Hermite polynomials. More precisely, in order to renormalise the fifth power of $\sqrt 2\, Y_{\gamma, \mathfrak{a}}$, we need to use $H_5 \bigl(\sqrt 2\, Y_{\gamma, \mathfrak{a}}, 2 \mathfrak{c}_\gamma\bigr)$.
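To make the scaling argument explicit, note that with the normalisation of \eqref{eq:def_Hermite} (the Hermite polynomials being monic in their first argument, as in the expansion of $H_5$ used below) one has
\begin{equation*}
H_n(\lambda x, \lambda^2 c) = \lambda^n H_n(x, c), \qquad \text{and in particular} \qquad H_5\bigl(\sqrt 2\, Y_{\gamma, \mathfrak{a}}, 2 \mathfrak{c}_\gamma\bigr) = 4 \sqrt 2\, H_5\bigl(Y_{\gamma, \mathfrak{a}}, \mathfrak{c}_\gamma\bigr),
\end{equation*}
which is why the constant $2 \mathfrak{c}_\gamma$ appears in \eqref{eq:E2-ga}.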
We can now show that the reconstruction of \eqref{eq:AbstractDiscreteEquation} recovers the discrete equation \eqref{eq:IsingKacEqn-new}.
\begin{lemma}\label{lem:solution}
Let $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ be the model constructed in Section~\ref{sec:lift}, and let the reconstruction map $\mathcal{R}^{\gamma, \mathfrak{a}}$ be defined for the model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ in \eqref{eq:def_rec_op}. Let $U_{\gamma, \mathfrak{a}} \in \mathcal{D}^{1 + 3 \kappa, \eta}_\mathfrak{e} (Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}})$ be a solution of \eqref{eq:AbstractDiscreteEquation}. Then it may be written as
\begin{equation}\label{eq:expansion}
U_{\gamma, \mathfrak{a}} (z) = \sqrt2\, \<1b> + v_{\gamma, \mathfrak{a}}(z) \symbol{\1} + \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) \Bigl(2 \sqrt{2}\, \<30b> + 6 v_{\gamma, \mathfrak{a}}(z) \<20b>\Bigr) + \sum_{i = 1,2,3} v^{(i)}_{\gamma, \mathfrak{a}}(z) \symbol{X}_i,
\end{equation}
for some functions $v_{\gamma, \mathfrak{a}}, v^{(i)}_{\gamma, \mathfrak{a}} : \R_+ \times \T_{\varepsilon}^3 \to \R$. More precisely, we have $v_{\gamma, \mathfrak{a}} = X_{\gamma, \mathfrak{a}} - \sqrt2\, Y_{\gamma, \mathfrak{a}}$, where $X_{\gamma, \mathfrak{a}} := \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$, $Y_{\gamma, \mathfrak{a}} = \mathcal{R}^{\gamma, \mathfrak{a}}\<1b>$, and $v_{\gamma, \mathfrak{a}}$ solves the ``remainder equation''
\begin{align}\label{eq:discrete-remainder}
v_{\gamma, \mathfrak{a}}(t, x) = P^\gamma_t X_\gamma^0(x) + \int_0^t \widetilde{P}^\gamma_{t-s} &\Bigl( \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) \bigl(v_{\gamma, \mathfrak{a}} + \sqrt2\, Y_{\gamma, \mathfrak{a}}\bigr)^3 \\
&\qquad + (\mathfrak{C}_\gamma + A) \bigl(v_{\gamma, \mathfrak{a}} + \sqrt2\, Y_{\gamma, \mathfrak{a}}\bigr) + E_{\gamma, \mathfrak{a}} \Bigr)(s, x)\, \mathrm{d} s, \nonumber
\end{align}
where $E_{\gamma, \mathfrak{a}}$ is given by \eqref{eq:expr_error_term} with $X_{\gamma, \mathfrak{a}}$ replaced by $v_{\gamma, \mathfrak{a}} + \sqrt2\, Y_{\gamma, \mathfrak{a}}$.
Furthermore, there exist $A_\gamma$ and $B_\gamma$, vanishing as $\gamma \to 0$, such that the function $X_{\gamma, \mathfrak{a}} = \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$ solves \eqref{eq:IsingKacEqn-new} with the renormalisation constant
\begin{equation}\label{eq:C-exact}
\mathfrak{C}_\gamma = 2 \bigl(\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' - 2 \mathfrak{c}_\gamma''\bigr),
\end{equation}
where $\mathfrak{c}_\gamma$, $\mathfrak{c}_\gamma'$ and $\mathfrak{c}_\gamma''$ are defined in \eqref{eq:renorm-constant1}, \eqref{eq:renorm-constant3} and \eqref{eq:renorm-constant2} respectively.
\end{lemma}
\begin{proof}
The expansion \eqref{eq:expansion} is obtained in the same way as \eqref{eq:U-expansion}, by iteration of \eqref{eq:AbstractDiscreteEquation}. If we define the functions $X_{\gamma, \mathfrak{a}} = \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$ and $Y_{\gamma, \mathfrak{a}} = \mathcal{R}^{\gamma, \mathfrak{a}}\<1b>$, then we obtain
\begin{equation}\label{eq:XandY}
X_{\gamma, \mathfrak{a}}(z) = \sqrt2\, Y_{\gamma, \mathfrak{a}}(z) + v_{\gamma, \mathfrak{a}}(z),
\end{equation}
where the reconstructions of the elements with strictly positive homogeneities vanish (see Remark~\ref{rem:positive-vanish}). Using \eqref{eq:expansion}, we can write
\begin{align*}
&\mathcal{Q}_{\leq 0} U_{\gamma, \mathfrak{a}}^3(z) = 2 \sqrt 2\, \<3b> + 6 v_{\gamma, \mathfrak{a}}(z)\, \<2b> + 12 \sqrt 2 \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) \<32b> + 3 \sqrt 2 v_{\gamma, \mathfrak{a}}(z)^2\, \<1b> \\
& + 24 \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) v_{\gamma, \mathfrak{a}}(z)\, \<31b> + 36 \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) v_{\gamma, \mathfrak{a}}(z)\, \<22b> + 4 \sum_{i = 1, 2,3} v^{(i)}_{\gamma, \mathfrak{a}}(z) \symbol{X}_i \<2b> + v_{\gamma, \mathfrak{a}}(z)^3 \symbol{\1}.
\end{align*}
From our definition of the model in Section~\ref{sec:model-lift} and the reconstruction map in \eqref{eq:def_rec_op} we have $( \mathcal{R}^{\gamma, \mathfrak{a}} \symbol{\1})(z) = 1$, $( \mathcal{R}^{\gamma, \mathfrak{a}} \<2b>)(z) = H_2(Y_{\gamma, \mathfrak{a}}(z), \mathfrak{c}_\gamma + \mathfrak{c}_\gamma')$, $(\mathcal{R}^{\gamma, \mathfrak{a}} \<1b>^n)(z) = H_n(Y_{\gamma, \mathfrak{a}}(z), \mathfrak{c}_\gamma)$ for $n \neq 2$, $( \mathcal{R}^{\gamma, \mathfrak{a}} \symbol{X}_i \<2b>)(z) = 0$, $(\mathcal{R}^{\gamma, \mathfrak{a}} \<32b>)(z) = - 3 \mathfrak{c}_\gamma'' Y_{\gamma, \mathfrak{a}}(z)$, $(\mathcal{R}^{\gamma, \mathfrak{a}} \<31b>)(z) = 0$ and $(\mathcal{R}^{\gamma, \mathfrak{a}} \<22b>)(z) = - \mathfrak{c}_\gamma''$. Applying the reconstruction map to the preceding expansion, we get
\begin{align*}
&\bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \mathcal{Q}_{\leq 0} U_{\gamma, \mathfrak{a}}^3 \bigr)(z) = 2 \sqrt 2 \bigl(Y_{\gamma, \mathfrak{a}}(z)^3 - 3\mathfrak{c}_\gamma Y_{\gamma, \mathfrak{a}}(z)\bigr) + 6 v_{\gamma, \mathfrak{a}}(z) \bigl(Y_{\gamma, \mathfrak{a}}(z)^2 - \mathfrak{c}_\gamma - \mathfrak{c}_\gamma'\bigr) \\
& - 36 \sqrt 2 \mathfrak{c}_\gamma'' \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) Y_{\gamma, \mathfrak{a}}(z) + 3 \sqrt 2 v_{\gamma, \mathfrak{a}}(z)^2 Y_{\gamma, \mathfrak{a}}(z) - 36 \mathfrak{c}_\gamma'' \Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) v_{\gamma, \mathfrak{a}}(z) + v_{\gamma, \mathfrak{a}}(z)^3 \\
&\hspace{3cm}= X_{\gamma, \mathfrak{a}}(z)^3 - 6 \bigl(\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' - 2 \mathfrak{c}_\gamma'' (\beta^3 - 3 B_\gamma)\bigr) X_{\gamma, \mathfrak{a}}(z),
\end{align*}
where we used \eqref{eq:XandY}. Hence, the reconstruction $\bigl(\mathcal{R}^{\gamma, \mathfrak{a}} F_\gamma(U_{\gamma, \mathfrak{a}}) \bigr)(z)$ of the function \eqref{eq:F-ga} gives
\begin{equation*}
\Bigl(- \frac{\beta^3}{3} + B_\gamma\Bigr) \Bigl( X_{\gamma, \mathfrak{a}}(z)^3 - 6 \bigl(\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' - 2 \mathfrak{c}_\gamma'' (\beta^3 - 3 B_\gamma)\bigr) X_{\gamma, \mathfrak{a}}(z) \Bigr) + (A_\gamma + A) X_{\gamma, \mathfrak{a}}(z).
\end{equation*}
Reconstruction of the function \eqref{eq:E1-ga} is trivial: $\bigl(\mathcal{R}^{\gamma, \mathfrak{a}} E^{(1)}_\gamma(U_{\gamma, \mathfrak{a}}) \bigr)(z) = \frac{1}{\delta \alpha} R_5\bigl(\beta \gamma^3 X_{\gamma, \mathfrak{a}}(z)\bigr)$.
Now, we turn to the reconstruction of the function \eqref{eq:E2-ga}. The expansion \eqref{eq:expansion} yields
\begin{equation}\label{eq:U5-expansion}
U_{\gamma, \mathfrak{a}}(z)^5 = 4 \sqrt2\, \<5b> + 20 v_{\gamma, \mathfrak{a}}(z)\, \<4b> + \widetilde{U}_{\gamma, \mathfrak{a}}(z),
\end{equation}
where the remainder $\widetilde{U}_{\gamma, \mathfrak{a}}(z)$ takes values in the span of elements with homogeneities greater than $-\frac{3}{2} - 3 \kappa$. Then the expression in the brackets in \eqref{eq:E2-ga} is
\begin{equation*}
4 \sqrt2\, \<5b> + 20 v_{\gamma, \mathfrak{a}}(z)\, \<4b> - \Bigl(4 \sqrt2\, \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \<5b>\bigr)(z) + 20 v_{\gamma, \mathfrak{a}}(z)\, \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \<4b>\bigr)(z) - H_5 \bigl(X_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr)\Bigr) \symbol{\1}.
\end{equation*}
Using \eqref{eq:E-hat}, the function \eqref{eq:E2-ga} equals
\begin{align*}
E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}})(z) &= \frac{\beta^5}{5} \biggl(4 \sqrt2\, \<50eb> + 20 v_{\gamma, \mathfrak{a}}(z) \<40eb> \\
&\qquad - \gamma^6 \Bigl(4 \sqrt2\, \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \<5b>\bigr)(z) + 20 v_{\gamma, \mathfrak{a}}(z)\, \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \<4b>\bigr)(z) - H_5 \bigl(X_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr)\Bigr) \symbol{\1} \biggr),
\end{align*}
and applying the reconstruction map gives
\begin{align*}
\bigl(\mathcal{R}^{\gamma, \mathfrak{a}} E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}})\bigr)(z) &= \gamma^6 \frac{\beta^5}{5} H_5 \bigl(X_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr) \\
&= \gamma^6 \frac{\beta^5}{5} \Bigl( X_{\gamma, \mathfrak{a}}(z)^5 - 20 \mathfrak{c}_\gamma X_{\gamma, \mathfrak{a}}(z)^3 + 60 \mathfrak{c}_\gamma^2 X_{\gamma, \mathfrak{a}}(z) \Bigr).
\end{align*}
Here, we used the definition of the reconstruction map \eqref{eq:def_rec_op} and Remark~\ref{rem:Eps-reconstruct}.
Applying the reconstruction map to both sides of equation \eqref{eq:AbstractDiscreteEquation}, using the property $\mathcal{R}^{\gamma, \mathfrak{a}} \mathcal{P}^\gamma = \widetilde{G}^\gamma$ and using all previous identities, we obtain
\begin{align*}
X_{\gamma, \mathfrak{a}}(t, x) &= G^\gamma_t X_\gamma^0(x) + \sqrt 2\, Y_{\gamma, \mathfrak{a}}(t, x) \\
&\qquad + \int_0^t \widetilde{G}^\gamma_{t-s} \Bigl( \Bigl(- \frac{\beta^3}{3} + B_\gamma - 4 \gamma^6 \beta^5 \mathfrak{c}_\gamma\Bigr) X^3_{\gamma, \mathfrak{a}} + \bigl(\mathfrak{C}_\gamma + A\bigr) X_{\gamma, \mathfrak{a}} + E_{\gamma, \mathfrak{a}} \Bigr)(s, x)\, \mathrm{d} s,
\end{align*}
where the error term $E_{\gamma, \mathfrak{a}}$ is the same as in \eqref{eq:IsingKacEqn-new} and where
\begin{equation}\label{eq:C-A}
\mathfrak{C}_\gamma = 2 \bigl(\beta^3 - 3 B_\gamma\bigr) \bigl(\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' - 2 \mathfrak{c}_\gamma'' (\beta^3 - 3 B_\gamma)\bigr) + 12 \mathfrak{c}_\gamma^2 \gamma^6 \beta^5 + A_\gamma.
\end{equation}
In order to make this equation coincide with \eqref{eq:IsingKacEqn-new}, we need to take $B_\gamma = 4 \gamma^6 \beta^5 \mathfrak{c}_\gamma$ and $A_\gamma$ determined by the previous identity. Lemma~\ref{lem:renorm-constants} implies that $|B_\gamma| \lesssim \mathfrak{e}$, which vanishes as $\gamma \to 0$.
It remains to show that if we take $\mathfrak{C}_\gamma$ of the form \eqref{eq:C-exact}, then the constant $A_\gamma$, defined via \eqref{eq:C-A}, vanishes as $\gamma \to 0$. We recall that $\beta$ depends on $\mathfrak{C}_\gamma$ via \eqref{eq:beta}. From \eqref{eq:C-exact} and \eqref{eq:C-A} we have
\begin{equation}\label{eq:A-expansion}
A_\gamma = - 2 (\beta^3 - 1) (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma') + 4 (\beta^6 - 1) \mathfrak{c}_\gamma'' + 6 (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma') B_\gamma - 12 \mathfrak{c}_\gamma'' B_\gamma (2 \beta^3 - 3 B_\gamma) - 12 \mathfrak{c}_\gamma^2 \gamma^6 \beta^5.
\end{equation}
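The expression \eqref{eq:A-expansion} can be obtained by equating \eqref{eq:C-exact} with \eqref{eq:C-A}, solving for $A_\gamma$, and using the elementary rearrangement
\begin{equation*}
(\beta^3 - 3 B_\gamma)^2 - 1 = (\beta^6 - 1) - 3 B_\gamma \bigl(2 \beta^3 - 3 B_\gamma\bigr).
\end{equation*}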
Using \eqref{eq:beta} and \eqref{eq:C-exact}, we can write
\begin{align*}
\beta^3 - 1 &= \sum_{k = 1, 2, 3} {3 \choose k} \gamma^{6 k} \bigl(2 (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' - 2 \mathfrak{c}_\gamma'') + c + A \bigr)^k, \\
\beta^6 - 1 &= \sum_{k = 1, \ldots, 6} {6 \choose k} \gamma^{6 k} \bigl(2 (\mathfrak{c}_\gamma + \mathfrak{c}_\gamma' - 2 \mathfrak{c}_\gamma'') + c + A \bigr)^k.
\end{align*}
From Lemma~\ref{lem:renorm-constants} we have $\mathfrak{c}_\gamma = c_2 \mathfrak{e}^{-1} + \tilde{\mathfrak{c}}_\gamma$ and $\mathfrak{c}_\gamma'' = c_1 \log \mathfrak{e} + \tilde{\mathfrak{c}}''_\gamma$, where $|\tilde{\mathfrak{c}}_\gamma| \leq C |\log \mathfrak{e}|$ and $|\tilde{\mathfrak{c}}''_\gamma| \leq C$ for some constant $C > 0$ independent of $\gamma$. Moreover, the definition \eqref{eq:renorm-constant3} implies that $\mathfrak{c}_\gamma'$ is bounded uniformly in $\gamma \in (0,1]$. From \eqref{eq:c-gamma-2} we furthermore have $\mathfrak{e} = \gamma^3 \varkappa_{\gamma, 3}$ and hence $\mathfrak{e}^{-1} = \gamma^{-3} + \gamma^{-3} c_{\gamma, 3}$, where $|c_{\gamma, 3}| \leq \gamma^4 / (1 - \gamma^4) \to 0$ as $\gamma \to 0$. Using these bounds in \eqref{eq:A-expansion}, we can see that $A_\gamma$ vanishes as $\gamma \to 0$.
\end{proof}
\begin{remark}
In what follows we will always consider equation \eqref{eq:AbstractDiscreteEquation} with the values $A_\gamma$ and $B_\gamma$ from Lemma~\ref{lem:solution}, which makes the reconstructed solution of \eqref{eq:AbstractDiscreteEquation} coincide with the solution of \eqref{eq:IsingKacEqn-new}.
\end{remark}
Let $Z^{\gamma, \delta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ be another random discrete model, constructed in Section~\ref{sec:convergence-of-models}, and let us consider the equation
\begin{equation}\label{eq:AbstractDiscreteEquation-delta}
U_{\gamma, \delta, \mathfrak{a}} = \mathcal{Q}_{< \zeta} \Bigl(G^{\gamma} X_{\gamma, \delta}^{0} + \mathcal{P}^{\gamma, \delta} \mathbf{1}_+ \mathcal{Q}_{\leq 0} \Bigl(- \frac{\beta^3}{3} \bigl(U_{\gamma, \delta, \mathfrak{a}}\bigr)^3 + (A_{\gamma, \delta} + A) U_{\gamma, \delta, \mathfrak{a}}\Bigr) + \sqrt 2\, W_{\gamma, \delta, \mathfrak{a}}\Bigr),
\end{equation}
which is defined in the same way as \eqref{eq:AbstractDiscreteEquation}, but with respect to the model $Z^{\gamma, \delta, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}}$. The initial condition at time $0$ is
\begin{equation}\label{eq:initial-gamma-delta}
X_{\gamma, \delta}^{0}(x) := \varepsilon^3 \sum_{y \in \Lambda_{\varepsilon}} \psi_{\gamma, \delta}(x - y) X_\gamma^0(y),
\end{equation}
where $X_\gamma^0$ is defined in the statement of Theorem~\ref{thm:main} and the function $\psi_{\gamma, \delta}$ is a discrete approximation of the function $\psi_{\delta}$ from \eqref{eq:Phi43-delta}:
\begin{equation*}
\psi_{\gamma, \delta}(x) := \varepsilon^{-3} \int_{|y - x| \leq \varepsilon/2} \psi_\delta(y) \mathrm{d} y.
\end{equation*}
As in Lemma~\ref{lem:solution} we can readily conclude that there is a choice of $A_{\gamma, \delta}$ such that the function $X_{\gamma, \delta, \mathfrak{a}} = \mathcal{R}^{\gamma, \delta, \mathfrak{a}} U_{\gamma, \delta, \mathfrak{a}}$ solves
\begin{equation}\label{eq:Discrete-equation-regular-noise}
X_{\gamma, \delta, \mathfrak{a}}(t, x) = P^\gamma_t X_{\gamma, \delta}^{0}(x) + \int_0^t \widetilde{P}^\gamma_{t-s} \Bigl( -\frac{\beta^3}{3} \bigl(X_{\gamma, \delta, \mathfrak{a}}\bigr)^3 + (\mathfrak{C}_{\gamma, \delta} + A) X_{\gamma, \delta, \mathfrak{a}} + \sqrt 2\,\xi_{\gamma, \delta, \mathfrak{a}} \Bigr)(s, x)\, \mathrm{d} s,
\end{equation}
where the driving noise is defined in \eqref{eq:xi-gamma-delta}. This equation is a modification of the Ising-Kac equation \eqref{eq:IsingKacEqn-new}, driven by a mollified noise and without the error term. We take the renormalisation constant $\mathfrak{C}_{\gamma, \delta}$ to be of the form \eqref{eq:C-exact}, but defined via the constants $\mathfrak{c}_{\gamma, \delta}$, $\mathfrak{c}_{\gamma, \delta}'$ and $\mathfrak{c}_{\gamma, \delta}''$ introduced in \eqref{eq:convolved_constants}.
Now we will study the solution map of \eqref{eq:AbstractDiscreteEquation}. In particular, we need to show that it is continuous with respect to the model and the initial state.
\begin{proposition}\label{prop:SolutionMap}
Let $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ be the random discrete model constructed in Section~\ref{sec:lift} and let the initial state $X_\gamma^0$ satisfy the assumptions of Theorem~\ref{thm:main}. Then for almost every realisation of $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ there exists a (possibly infinite) $T_{\gamma, \mathfrak{a}} > 0$ such that \eqref{eq:AbstractDiscreteEquation} has a unique solution $U_{\gamma, \mathfrak{a}} \in \mathcal{D}^{\zeta, \eta}_\mathfrak{e}(Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}})$ on the time interval $[0, T_{\gamma, \mathfrak{a}})$, where $\zeta = 1 + 3 \kappa$ and the constant $\eta$ is from Theorem~\ref{thm:main}.
Let moreover $X_{\gamma, \mathfrak{a}} = \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$, where $\mathcal{R}^{\gamma, \mathfrak{a}}$ is the reconstruction map \eqref{eq:def_rec_op} associated to the model. Then for every $L > 0$ there is $T^L_{\gamma, \mathfrak{a}} \in (0, T_{\gamma, \mathfrak{a}})$, such that $\lim_{L \to \infty} T^L_{\gamma, \mathfrak{a}} = T_{\gamma, \mathfrak{a}}$ almost surely, and
\begin{equation}\label{eq:SolutionMap-bound}
\sup_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \| X_{\gamma, \mathfrak{a}}(t) \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} \leq C,
\end{equation}
for any $T > 0$, provided $\| X^0_{\gamma, \mathfrak{a}} \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} \leq L$ and $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \leq L$, where we use the norm \eqref{eq:eps-norm-1}. The constant $C$ depends on $L$ and is independent of $\gamma$.
Let $Z^{\gamma, \delta, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}}$ be the model defined in Section~\ref{sec:convergence-of-models}. Then there is a solution $U_{\gamma, \delta, \mathfrak{a}} \in \mathcal{D}^{\zeta, \eta}_\mathfrak{e}(Z^{\gamma, \delta, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}})$ of equation \eqref{eq:AbstractDiscreteEquation-delta} on an interval $[0, T_{\gamma, \delta, \mathfrak{a}})$. Let furthermore $X_{\gamma, \delta, \mathfrak{a}} = \mathcal{R}^{\gamma, \delta, \mathfrak{a}} U_{\gamma, \delta, \mathfrak{a}}$, where $\mathcal{R}^{\gamma, \delta, \mathfrak{a}}$ is the respective reconstruction map. Then there exist $\delta_0 > 0$, $\theta > 0$ and $T^L_{\gamma, \delta, \mathfrak{a}} \in (0, T_{\gamma, \delta, \mathfrak{a}})$, such that $\lim_{L \to \infty} T^L_{\gamma, \delta, \mathfrak{a}} = T_{\gamma, \delta, \mathfrak{a}}$ almost surely and
\begin{equation}\label{eq:SolutionMap-bound-delta}
\sup_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}} \wedge T^L_{\gamma, \delta, \mathfrak{a}}]} \| (X_{\gamma, \mathfrak{a}} - X_{\gamma, \delta, \mathfrak{a}})(t) \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} \leq C \delta^{\theta},
\end{equation}
uniformly over $\delta \in (0, \delta_0)$, provided $\| X^0_{\gamma, \mathfrak{a}} \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} \leq L$, $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \leq L$, $\| X^0_{\gamma, \mathfrak{a}} - X^{0}_{\gamma, \delta, \mathfrak{a}} \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} \leq \delta^{\theta}$ and $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}; Z^{\gamma, \delta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \leq \delta^{\theta}$.
\end{proposition}
\begin{proof}
To prove the existence of a local solution, we use a purely deterministic argument. For this, we take $T > 0$ and any realisation of the discrete model $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ such that $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})}$ is finite; Proposition~\ref{prop:models-converge} shows that this happens almost surely. The spaces of modelled distributions below are considered with respect to $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$.
We proved in Lemma~\ref{lem:solution} that if a solution $U_{\gamma, \mathfrak{a}}$ exists, then it has the form \eqref{eq:expansion}. Hence, in this proof we will look for a solution of this form.
Let $\mathcal{M}^{\gamma, \mathfrak{a}}_{T}(U_{\gamma, \mathfrak{a}})$ be the right-hand side of \eqref{eq:AbstractDiscreteEquation}, restricted to the time interval $[0, T]$. We need to prove that $\mathcal{M}^{\gamma, \mathfrak{a}}_{T}$ is a contraction map on $\mathcal{D}^{\zeta, \eta}_{\mathfrak{e}, T}$, uniformly in $\gamma$, for $T > 0$ small enough (see Remark~\ref{rem:T-space} for the definition of the time-dependent space). More precisely, let us take $U_{\gamma, \mathfrak{a}}, \bar U_{\gamma, \mathfrak{a}} \in \mathcal{D}^{\zeta, \eta}_{\mathfrak{e}, T}$. Then we will prove that for some $\nu > 0$ we have
\begin{subequations}\label{eqs:contraction}
\begin{align}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert\mathcal{M}^{\gamma, \mathfrak{a}}_{T}(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} &\lesssim \| X_\gamma^0 \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} + T^\nu \bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}\bigr)^5, \label{eq:contraction1}\\
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert \mathcal{M}^{\gamma, \mathfrak{a}}_{T}(U_{\gamma, \mathfrak{a}}); \mathcal{M}^{\gamma, \mathfrak{a}}_{T}(\bar U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} &\lesssim T^\nu \bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}\bigr)^4 \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}}; \bar U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}. \label{eq:contraction2}
\end{align}
\end{subequations}
Then for any $T > 0$ small enough, $\mathcal{M}^{\gamma, \mathfrak{a}}_{T}$ is a contraction map on $\mathcal{D}^{\zeta, \eta}_{\mathfrak{e}, T}$. The proportionality constants in these bounds are multiples of $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{T+1}$, which implies that $0 < T < T_{\gamma, \mathfrak{a}}$, for some $T_{\gamma, \mathfrak{a}} > 0$ depending on $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\mathfrak{e})}_{T+1}$.
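One way to quantify the choice of $T$ (a standard fixed-point argument, with $C$ denoting a common proportionality constant in \eqref{eqs:contraction}) is the following: set $R := 2 C \bigl(\| X_\gamma^0 \|^{(\mathfrak{e})}_{\mathcal{C}^\eta} + 1\bigr)$ and choose $T > 0$ so small that
\begin{equation*}
C T^\nu (1 + R)^5 \leq \frac{R}{2} \qquad \text{and} \qquad C T^\nu (1 + R)^4 \leq \frac{1}{2}.
\end{equation*}
Then \eqref{eq:contraction1} shows that $\mathcal{M}^{\gamma, \mathfrak{a}}_{T}$ maps the ball of radius $R$ in $\mathcal{D}^{\zeta, \eta}_{\mathfrak{e}, T}$ into itself, while \eqref{eq:contraction2} shows that it is a $\frac{1}{2}$-contraction on this ball, so that it has a unique fixed point there.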
We first prove the bound \eqref{eq:contraction1}. For $\bar\zeta > 0$ and $\bar\eta > - 2$, we apply \cite[Thm.~4.22]{erhard2017discretisation} and get
\begin{align}\label{eq:M-bound}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert\mathcal{M}^{\gamma, \mathfrak{a}}_{T}(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} &\lesssim \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert G^\gamma X_\gamma^0 \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert W_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \\
&\qquad + T^\nu \Bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert F_\gamma(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\bar \zeta, \bar\eta; T}^{(\mathfrak{e})} + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert E^{(1)}_\gamma(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\bar \zeta, \bar\eta; T}^{(\mathfrak{e})} + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\bar \zeta, \bar\eta; T}^{(\mathfrak{e})} \Bigr), \nonumber
\end{align}
for some $\nu > 0$. We are going to bound the terms on the right-hand side one by one, and a precise choice of $\bar \zeta$ and $\bar \eta$ will be clear from these bounds.
Similarly to \cite[Lem.~7.5]{Regularity}, we get $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert G^\gamma X_\gamma^0 \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \lesssim \| X_\gamma^0 \|^{(\mathfrak{e})}_{\mathcal{C}^\eta}$. Furthermore, from \cite[Lem.~2.3]{Martingales} and \cite[Thm.~4.22]{erhard2017discretisation} we have the bound $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert W_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \lesssim T^\nu$ on the term \eqref{eq:V-gamma}.
Now, we will bound the function \eqref{eq:F-ga}. From \cite[Sec.~4 and 6.2]{Regularity} we get
\begin{equation*}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert F_\gamma(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta_1, \eta_1; T}^{(\mathfrak{e})} \lesssim \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}}^3 \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta_1, \eta_1; T}^{(\mathfrak{e})} + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta_1, \eta_1; T}^{(\mathfrak{e})} \lesssim \bigl(\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}\bigr)^3 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})},
\end{equation*}
for $\zeta_1 \leq \zeta - 1 - 2\kappa$ and $\eta_1 \leq \eta - 1 - 2 \kappa$. Here, we used the fact that $U_{\gamma, \mathfrak{a}}$ lives in a sector of regularity $\alpha = -\frac{1}{2} - \kappa$. Recalling that $\zeta = 1 + 3 \kappa$ and $\kappa < \frac{1}{14}$, the ranges of $\zeta_1$ and $\eta_1$ allow us to choose $\bar \zeta$ and $\bar \eta$ as in \eqref{eq:M-bound}.
Now we will bound the function \eqref{eq:E1-ga}. From Proposition~\ref{prop:ReconThm_v2} we get $| (\mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}) (z) | \lesssim \mathfrak{e}^{\alpha \wedge \eta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}$. Then for $r \in (6, 7)$ the definition \eqref{eq:Taylor} yields
\begin{align*}
\Bigl|\frac{1}{\delta \alpha} R_5 \bigl(\beta \gamma^3 \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z)\bigr)\Bigr| &\lesssim \frac{1}{\delta \alpha} |\beta \gamma^3 \mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z)|^{r} \\
&\lesssim \gamma^{3 r - 9} \mathfrak{e}^{r (\alpha \wedge \eta)} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \lesssim \gamma^{\frac{3 r}{2} - 9 - 3 r\kappa} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}.
\end{align*}
From this we obtain the following bound on the function \eqref{eq:E1-ga}:
\begin{equation*}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert E^{(1)}_\gamma(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\bar\zeta, \bar\eta; T}^{(\mathfrak{e})} \lesssim \gamma^{\frac{3 r}{2} - 9 - 3 r\kappa} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}.
\end{equation*}
If $\kappa < \frac{1}{14}$, then $\frac{6}{1 - 2\kappa} < 7$, so we can choose $r \in (6, 7)$ for which the exponent $\frac{3 r}{2} - 9 - 3 r\kappa$ is strictly positive; for such $r$ the last term vanishes as $\gamma \to 0$.
In order to bound the function \eqref{eq:E2-ga}, we need to bound the modelled distribution inside the brackets in \eqref{eq:E2-ga}, which we denote by $\widetilde{V}_{\gamma, \mathfrak{a}}$. Using the expansion \eqref{eq:U5-expansion}, we can write
\begin{equation}\label{eq:U5-expansion-new}
U_{\gamma, \mathfrak{a}}(z)^5 = 4 \sqrt2\, \<5b> + 20 v_{\gamma, \mathfrak{a}}(z)\, \<4b> + \widetilde{U}_{\gamma, \mathfrak{a}}(z),
\end{equation}
where the elements spanning $\widetilde{U}_{\gamma, \mathfrak{a}}$ have homogeneities greater than $-\frac{3}{2} - 7 \kappa$. We note that $\widetilde{U}_{\gamma, \mathfrak{a}}(z)$ does not belong to $\mathcal{T}^\mathrm{ex}$, but is rather an element of $\fT^\mathrm{ex}$ (see Section~\ref{sec:model-space} for the definition of this space). In particular, we cannot apply the model to $\widetilde{U}_{\gamma, \mathfrak{a}}(z)$, and hence we cannot measure the regularity of $\widetilde{U}_{\gamma, \mathfrak{a}}(z)$ as a modelled distribution. Instead, we write
\begin{equation}\label{eq:H}
\widetilde{V}_{\gamma, \mathfrak{a}}(z) = 4 \sqrt2\, \<5b> + 20 v_{\gamma, \mathfrak{a}}(z)\, \<4b> + \biggl(H_5 \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr) - \sum_{\tau \in \{\scalebox{0.7}{\<4b>}, \scalebox{0.7}{\<5b>}\}} \bigl(\mathcal{R}^{\gamma, \mathfrak{a}} \mathcal{Q}_{\tau} U_{\gamma, \mathfrak{a}}^5\bigr)(z) \biggr) \symbol{\1},
\end{equation}
and we are going to show that this is a modelled distribution in a suitable space. Table~\ref{tab:linear_transformations} shows that $\Gamma^{\gamma, \mathfrak{a}}_{\bar z z} \widetilde{V}_{\gamma, \mathfrak{a}}(z) = \widetilde{V}_{\gamma, \mathfrak{a}}(z)$, and hence the second term in the definition \eqref{eq:def_dgamma_norm} of modelled distributions contains the difference $\widetilde{V}_{\gamma, \mathfrak{a}}(z) - \widetilde{V}_{\gamma, \mathfrak{a}}(\bar z)$. Now, we will derive bounds on $\widetilde{V}_{\gamma, \mathfrak{a}}(z)$ and $\widetilde{V}_{\gamma, \mathfrak{a}}(z) - \widetilde{V}_{\gamma, \mathfrak{a}}(\bar z)$.
For the first term in \eqref{eq:H} we have
\betagin{equation}\lambdabel{eq:H-bound1}
| \widetilde{V}_{\gamma, \mathfrak{a}}(z) |_{|\mathfrak{s}calebox{0.7}{\<5b>}|} = 4 \mathfrak{s}qrt2, \qquad\qquad | \widetilde{V}_{\gamma, \mathfrak{a}}(z) - \widetilde{V}_{\gamma, \mathfrak{a}}(\bar z)|_{|\mathfrak{s}calebox{0.7}{\<5b>}|} = 0.
\end{equation}
Since $U_{\gamma, \mathfrak{a}} \in \mathcal{C}D^{\zeta, \eta}_{\mathfrak{e}, T}$ and the expansion \eqref{eq:expansion} holds, we conclude that
\betagin{align}
|v_{\gamma, \mathfrak{a}}(z)| &= \mathfrak{r}ac{1}{2 \betata^3} |U_{\gamma, \mathfrak{a}} (z)|_{0} \lesssim \bigl(\| z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{\eta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}, \lambdabel{eq:v-bound}\\
|v_{\gamma, \mathfrak{a}}(z) - v_{\gamma, \mathfrak{a}}(\bar z)| &= \mathfrak{r}ac{1}{2 \betata^3} |U_{\gamma, \mathfrak{a}} (z) - \Gamma^{\gamma, \mathfrak{a}}_{\!z \bar z} U_{\gamma, \mathfrak{a}}(\bar z)|_{|\mathfrak{s}calebox{0.7}{\<20b>}|} \\
&\lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{\zeta - |\mathfrak{s}calebox{0.7}{\<20b>}|} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})},\nonumber
\end{align}
where we used the definition of the modelled distribution \eqref{eq:def_dgamma_norm}. Hence, for the second term in \eqref{eq:H} we have
\betagin{equation}\lambdabel{eq:H-bound2}
\betagin{aligned}
| \widetilde{V}_{\gamma, \mathfrak{a}}(z) |_{|\mathfrak{s}calebox{0.7}{\<4b>}|} &\lesssim \bigl(\| z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{\eta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}, \\
| \widetilde{V}_{\gamma, \mathfrak{a}}(z) - \widetilde{V}_{\gamma, \mathfrak{a}}(\bar z)|_{|\mathfrak{s}calebox{0.7}{\<4b>}|} &\lesssim \bigl(\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{\zeta - |\mathfrak{s}calebox{0.7}{\<20b>}|} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}.
\end{aligned}
\end{equation}
Now, we will bound the last term in \eqref{eq:H}. From the expansion \eqref{eq:expansion} and Remark~\ref{rem:positive-vanish}, we get
\betagin{equation*}
\mathcal{C}R^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z) = \mathfrak{s}qrt2\, Y_{\gamma, \mathfrak{a}}(z) + v_{\gamma, \mathfrak{a}}(z),
\end{equation*}
where $Y_{\gamma, \mathfrak{a}} = \mathcal{C}R^{\gamma, \mathfrak{a}}\<1b>$. Using then the expansion \eqref{eq:U5-expansion-new}, the definition of the reconstruction map \eqref{eq:def_rec_op} and the definition of the model \eqref{eq:model-Hermite}, the last term in \eqref{eq:H} may be written as
\betagin{align}
&H_5 \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr) - \mathfrak{s}um_{\tau \in \{\mathfrak{s}calebox{0.7}{\<4b>}, \mathfrak{s}calebox{0.7}{\<5b>}\}} \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \mathcal{C}Q_{\tau} U_{\gamma, \mathfrak{a}}^5\bigr) (z) \lambdabel{eq:last-term} \\
&\qquad = H_5 \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr) - 4 \mathfrak{s}qrt2\, \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \<5b>\bigr)(z) - 20 v_{\gamma, \mathfrak{a}}(z)\, \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \<4b>\bigr)(z) \nonumber \\
&\qquad = H_5 \bigl(\mathfrak{s}qrt2\, Y_{\gamma, \mathfrak{a}}(z) + v_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr) - 4 \mathfrak{s}qrt2\, H_5 \bigl(Y_{\gamma, \mathfrak{a}}(z), \mathfrak{c}_\gamma\bigr) - 20 v_{\gamma, \mathfrak{a}}(z)\, H_4 \bigl(Y_{\gamma, \mathfrak{a}}(z), \mathfrak{c}_\gamma\bigr). \nonumber
\end{align}
The Hermite polynomials satisfy the addition formula
\betagin{equation*}
H_n(u + v, c) = \mathfrak{s}um_{m = 0}^n {n \choose m} H_m(u, c) v^{n - m},
\end{equation*}
which can be found in \cite{Abramowitz}. Moreover, from the definition \eqref{eq:def_Hermite} we get the scaling identity $H_n(a u, a^2 c) = a^n H_n(u, c)$ for any $a > 0$. Applying these two identities, the expression \eqref{eq:last-term} becomes
\betagin{align}
&H_5 \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}(z), 2 \mathfrak{c}_\gamma\bigr) - \mathfrak{s}um_{\tau \in \{\mathfrak{s}calebox{0.7}{\<4b>}, \mathfrak{s}calebox{0.7}{\<5b>}\}} \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \mathcal{C}Q_{\tau} U_{\gamma, \mathfrak{a}}^5\bigr) (z) \lambdabel{eq:last-term-new} \\
&\quad = \mathfrak{s}um_{m = 0}^3 {5 \choose m} 2^{\mathfrak{r}ac{m}{2}} H_m \bigl(Y_{\gamma, \mathfrak{a}}(z), \mathfrak{c}_\gamma\bigr) v_{\gamma, \mathfrak{a}}(z)^{5 - m} = \mathfrak{s}um_{m = 0}^3 {5 \choose m} 2^{\mathfrak{r}ac{m}{2}} \bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \<1b>^{m}\bigr)(z) v_{\gamma, \mathfrak{a}}(z)^{5 - m}. \nonumber
\end{align}
where we use the convention $\bigl(\mathcal{C}R^{\gamma, \mathfrak{a}} \<1b>^{0}\bigr)(z) = 1$. For $m \in \{1, 2, 3\}$, the definition \eqref{eq:def_rec_op} and the bound \eqref{eq:Pi-bounds} yield $| (\mathcal{C}R^{\gamma, \mathfrak{a}} \<1b>^{m} )(z) | \lesssim \mathfrak{e}^{|\scalebox{0.7}{\<1b>}| m} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})}$. Combining this with the bound on the function $v_{\gamma, \mathfrak{a}}$ in \eqref{eq:v-bound}, we estimate the expression \eqref{eq:last-term-new} as
\betagin{equation}\lambdabel{eq:H-bound3}
| \widetilde{V}_{\gamma, \mathfrak{a}}(z) |_{0} \lesssim \mathfrak{s}um_{m = 0}^3 \mathfrak{e}^{|\mathfrak{s}calebox{0.7}{\<1b>}| m} \bigl(\| z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{(5 - m) \eta},
\end{equation}
where the proportionality constant is a multiple of $\bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})}\bigr) \bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \bigr)^5$.
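As a consistency check of the two Hermite identities used in \eqref{eq:last-term-new}, assume that the normalisation in \eqref{eq:def_Hermite} gives $H_0(u, c) = 1$, $H_1(u, c) = u$ and $H_2(u, c) = u^2 - c$ (the standard convention, recalled here only for the purpose of this check); then
\begin{equation*}
H_2(u + v, c) = (u + v)^2 - c = H_2(u, c) + 2 H_1(u, c)\, v + H_0(u, c)\, v^2,
\qquad
H_2(a u, a^2 c) = a^2 u^2 - a^2 c = a^2 H_2(u, c),
\end{equation*}
in agreement with the addition formula and the scaling identity above.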
Using the derived bounds, we can now estimate the function \eqref{eq:E2-ga}. From Table~\ref{tab:linear_transformations} we conclude that $\Gamma^{\gamma, \mathfrak{a}}_{\bar z z} (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) = (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z)$, and the second term in the definition of the norm \eqref{eq:def_dgamma_norm} contains only the difference $(\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) - (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(\bar z)$. Hence, we need to bound $(\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z)$ and $(\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) - (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(\bar z)$.
From \eqref{eq:H-bound1} we get
\betagin{equation*}
| (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) |_{|\mathfrak{s}calebox{0.7}{\<5b>}| + 2} = 4 \mathfrak{s}qrt2, \qquad \qquad | (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) - (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(\bar z) |_{|\mathfrak{s}calebox{0.7}{\<5b>}| + 2} = 0.
\end{equation*}
Similarly, from \eqref{eq:H-bound2} we have
\betagin{align*}
| (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) |_{|\mathfrak{s}calebox{0.7}{\<4b>}| + 2} &\lesssim (\| z \|_\mathfrak{s} \vee \mathfrak{e})^{\eta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}, \\
| (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(z) - (\widehat{\mathcal{C}E}_\gamma\widetilde{V}_{\gamma, \mathfrak{a}})(\bar z)|_{|\mathfrak{s}calebox{0.7}{\<4b>}| + 2} &\lesssim (\| z - \bar z \|_\mathfrak{s} \vee \mathfrak{e})^{\zeta - |\mathfrak{s}calebox{0.7}{\<20b>}|} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta - \zeta} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})}.
\end{align*}
Finally, \eqref{eq:H-bound3} yields
\betagin{align*}
| (\widehat{\mathcal{C}E}_\gamma \widetilde{V}_{\gamma, \mathfrak{a}})(z) |_{0} &\lesssim \gamma^6 \mathfrak{s}um_{m = 0}^3 \mathfrak{e}^{|\mathfrak{s}calebox{0.7}{\<1b>}| m} \bigl(\| z \|_\mathfrak{s} \vee \mathfrak{e}\bigr)^{(5 - m) \eta} \lesssim \mathfrak{e}^{\mathtt{var}theta} (\| z \|_{\mathfrak{s}} \vee \mathfrak{e})^{\eta_2-\mathtt{var}theta} ,
\end{align*}
for any $0 < \mathtt{var}theta \leq \mathfrak{r}ac{1}{2} - 3 \kappappa$ and $\eta_2 = 5 \eta + 2$ (recall that $|\<1b>| = -\mathfrak{r}ac{1}{2} - \kappappa$ and $|\<20b>| = 1 - 2\kappappa$), and where the proportionality constant is a multiple of $\bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})}\bigr) \bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \bigr)^5$. Using this bound, we get furthermore
\betagin{align*}
&| (\widehat{\mathcal{C}E}_\gamma \widetilde{V}_{\gamma, \mathfrak{a}})(z) - (\widehat{\mathcal{C}E}_\gamma \widetilde{V}_{\gamma, \mathfrak{a}})(\bar z) |_{0} \leq | (\widehat{\mathcal{C}E}_\gamma \widetilde{V}_{\gamma, \mathfrak{a}})(z) |_{0} + |(\widehat{\mathcal{C}E}_\gamma \widetilde{V}_{\gamma, \mathfrak{a}})(\bar z) |_{0}\\
&\qquad \lesssim \mathfrak{e}^{\mathtt{var}theta} \Bigl((\| z \|_{\mathfrak{s}} \vee \mathfrak{e})^{\eta_2-\mathtt{var}theta} + (\| \bar z \|_{\mathfrak{s}} \vee \mathfrak{e})^{\eta_2-\mathtt{var}theta}\Bigr) \lesssim \mathfrak{e}^{\mathtt{var}theta - \bar \mathtt{var}theta} (\| z - \bar z\|_\mathfrak{s} \vee \mathfrak{e})^{\bar \mathtt{var}theta} \mathcal{V}ert z, \bar z\mathcal{V}ert_{\mathfrak{e}}^{\eta_2-\mathtt{var}theta} ,
\end{align*}
for any $0 < \bar \mathtt{var}theta < \mathtt{var}theta$. Combining the preceding bounds on $\widehat{\mathcal{C}E}_\gamma \widetilde{V}_{\gamma, \mathfrak{a}}$, we conclude that the following bound holds for the function \eqref{eq:E2-ga}:
\betagin{equation}\lambdabel{eq:E2-bound}
\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}}) \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta_3, \eta_3; T}^{(\mathfrak{e})} \lesssim \mathfrak{e}^{\mathtt{var}theta - \bar \mathtt{var}theta} \bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})}\bigr) \bigl(1 + \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert U_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\mathfrak{e})} \bigr)^5,
\end{equation}
for any $\zeta_3$ and $\eta_3$ satisfying $\zeta_3 \leq \bar \vartheta$, $\zeta_3 \leq \zeta - |\<20b>| + |\<4b>| + 2$, $\eta_3 \leq \eta_2 - \vartheta$, $\eta_3 \leq \eta - |\<20b>| + |\<4b>| + 2$, $\eta_3 - \zeta_3 \leq \eta - \zeta$ and $\eta_3 - \zeta_3 \leq \eta_2 - \vartheta$. Taking $\vartheta = 2 \kappa$, $\bar \vartheta = \kappa$, $\zeta_3 = \kappa$ and $\eta_3 = \eta - 1 - 2 \kappa$, all these conditions are satisfied and moreover we have $\zeta_3 > 0$ and $\eta_3 > -2$, which allows us to take $\bar \zeta$ and $\bar \eta$ as in \eqref{eq:M-bound}. We note that \eqref{eq:E2-bound} vanishes as $\gamma \to 0$, because the power of $\mathfrak{e}$ is strictly positive.
We have just finished the proof of the bound \eqref{eq:M-bound}, from which \eqref{eq:contraction1} follows. The bound \eqref{eq:contraction2} can be proved similarly and we prefer to omit the details. Then the Banach fixed point theorem yields existence of a fixed point of the map $\mathcal{C}M^{\gamma, \mathfrak{a}}_{T}$, and hence we get a local solution of equation \eqref{eq:AbstractDiscreteEquation}. By patching the local solution in the standard way, we get the maximal time $T_{\gamma, \mathfrak{a}}$ such that the solution exists on the time interval $[0, T_{\gamma, \mathfrak{a}})$. One can see that the time $T_{\gamma, \mathfrak{a}}$ is the one at which $\| X_{\gamma, \mathfrak{a}}(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta}$ diverges. Applying Proposition~\ref{prop:ReconThm_v2} to the function $X_{\gamma, \mathfrak{a}} = \mathcal{C}R^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$, we then get the required bound \eqref{eq:SolutionMap-bound}.
The analogous bound on the solutions $U_{\gamma, \delta, \mathfrak{a}}$ is proved in the same way.
Furthermore, arguing exactly as for \eqref{eq:SolutionMap-bound}, we obtain the bound \eqref{eq:SolutionMap-bound-delta}.
\end{proof}
Proposition~\ref{prop:SolutionMap} gives a local solution $X_{\gamma, \mathfrak{a}}$, and by analogy with \eqref{eq:remainder-equation} we can also study the respective solution $v_{\gamma, \mathfrak{a}}$ of the remainder equation \eqref{eq:discrete-remainder}. More precisely, we define it as $v_{\gamma, \mathfrak{a}} = X_{\gamma, \mathfrak{a}} - \mathfrak{s}qrt2\, Y_{\gamma, \mathfrak{a}}$, where $Y_{\gamma, \mathfrak{a}} = \mathcal{C}R^{\gamma, \mathfrak{a}}\<1b>$. Then from Proposition~\ref{prop:SolutionMap} we can conclude that in the setting of \eqref{eq:SolutionMap-bound} we have
\betagin{equation}\lambdabel{eq:discrete-remainder-bound}
\mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \| v_{\gamma, \mathfrak{a}}(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{3 / 2 + 3 \eta}} \leq C.
\end{equation}
In the same way, for a local solution $X_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ we set $Y_{\gamma, \mathrm{d}elta, \mathfrak{a}} = \mathcal{C}R^{\gamma, \mathrm{d}elta, \mathfrak{a}}\<1b>$ and $v_{\gamma, \mathrm{d}elta, \mathfrak{a}} = X_{\gamma, \mathrm{d}elta, \mathfrak{a}} - \mathfrak{s}qrt2\, Y_{\gamma, \mathrm{d}elta, \mathfrak{a}}$. Then in the setting of \eqref{eq:SolutionMap-bound-delta} we have
\betagin{equation}\lambdabel{eq:discrete-remainder-bound-delta}
\mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}} \wedge T^L_{\gamma, \mathrm{d}elta, \mathfrak{a}}]} \| (v_{\gamma, \mathfrak{a}} - v_{\gamma, \mathrm{d}elta, \mathfrak{a}})(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{3 / 2 + 3 \eta}} \leq C \mathrm{d}elta^\theta.
\end{equation}
\mathfrak{s}ubsection{Controlling the process $\un{X}_{\gamma, \mathfrak{a}}$}
Similarly to $X_{\gamma, \mathfrak{a}}$, we can also control the process $\un{X}_{\gamma, \mathfrak{a}}$ defined in \eqref{eq:X-under}. For this, we define the discrete kernel $\un{P}^\gamma_t(x) := \bigl(P^\gamma_t *_\eps \un{K}_\gamma\bigr)(x)$ on $x \in \T_{\mathtt{var}epsilon}^3$ and by analogy with \eqref{eq:IsingKacEqn} we then get
\betagin{align*}
\un{X}_\gamma(t, x) &= P^\gamma_t \un{X}^0_\gamma(x) + \mathfrak{s}qrt 2\, \un{Y}_\gamma(t, x) \\
&\qquad + \int_0^t \un{P}^{\gamma}_{t-s} \Bigl( -\mathfrak{r}ac{\beta^3}{3} X^3_\gamma + \bigl(\mathfrak{C}_\gamma + A\bigr) X_\gamma + E_\gamma \Bigr)(s, x)\, \mathrm{d} s,
\end{align*}
where
\betagin{equation*}
\un{Y}_\gamma (t, x) := \mathfrak{r}ac{1}{\mathfrak{s}qrt 2} \mathtt{var}epsilon^3 \mathfrak{s}um_{y \in \T_{\mathtt{var}epsilon}^3} \int_0^t \un{P}^{\gamma}_{t-s}(x-y) \,\mathrm{d} \mathfrak{M}_\gamma(s, y).
\end{equation*}
We define the respective kernel $\un{G}^\gamma_t(x)$, for $x \in \Lambda_{\varepsilon}$, as in \eqref{eq:From-P-to-G}. This kernel differs from $\widetilde{G}^\gamma$ only in the scale, which is $\mathfrak{e}$ for the latter and $\un{\mathfrak{e}} := \mathfrak{e} \gamma^{\un\kappa}$ for the former. Hence, in the same way as in Appendix~\ref{sec:decompositions}, we may write $\un{G}^\gamma = \un{\mathscr{K}}^\gamma + \un{\mathscr{R}}^\gamma$ and define the respective abstract map $\un{\mathcal{C}P}^\gamma$ as in \eqref{eq:P-operator}. We also define the respective lift of the martingales $\un{Z}^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$, in the same way as $Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}$ in Section~\ref{sec:model-lift}, but using the kernel $\un{\mathscr{K}}^\gamma$ in the definitions \eqref{eq:lift-hermite} and \eqref{eq:Pi-I}. We note that these objects have to be measured in the norms at scale $\un{\mathfrak{e}}$, i.e.\ the quantity $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert \un{Z}^{\gamma, \mathfrak{a}}_{\scaleto{\mathrm{lift}}{4pt}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert^{(\un{\mathfrak{e}})}_{T}$ is bounded and $\un{\mathcal{C}P}^\gamma$ acts on the spaces $\mathcal{C}D^{\zeta, \eta}_{\un{\mathfrak{e}}, T}$. If $U_{\gamma, \mathfrak{a}}$ is a solution of \eqref{eq:AbstractDiscreteEquation}, then we define
\betagin{equation}\lambdabel{eq:U-bar}
\un{U}_{\gamma, \mathfrak{a}} = \mathcal{C}Q_{< \zeta} \Bigl(G^\gamma \un{X}_\gamma^0 + \un{\mathcal{C}P}^\gamma \mathbf{{1}}_+ \bigl( F_\gamma(U_{\gamma, \mathfrak{a}}) + E^{(1)}_\gamma(U_{\gamma, \mathfrak{a}}) + E^{(2)}_\gamma(U_{\gamma, \mathfrak{a}}) \bigr) + \mathfrak{s}qrt 2\, \un{W}_{\gamma, \mathfrak{a}}\Bigr),
\end{equation}
where $\un{W}_{\gamma, \mathfrak{a}}(z) := \un{\mathcal{C}P}^\gamma \mathbf{{1}}_+ (\blue\Xi)(z)$. We have from Lemma~\ref{lem:solution} that the solution of \eqref{eq:IsingKacEqn-new} is obtained as $X_{\gamma, \mathfrak{a}} = \mathcal{C}R^{\gamma, \mathfrak{a}} U_{\gamma, \mathfrak{a}}$. Recalling that $X_{\gamma, \mathfrak{a}}$ equals $X_{\gamma}$, the solution of \eqref{eq:IsingKacEqn}, on the time interval $[0, \tau_{\gamma, \mathfrak{a}}]$, we conclude that $\un{X}_{\gamma} = \un{\mathcal{C}R}^\gamma \un{U}_{\gamma, \mathfrak{a}}$ on $[0, \tau_{\gamma, \mathfrak{a}}]$. Furthermore, we may get a bound on $\un{X}_{\gamma, \mathfrak{a}} = \un{\mathcal{C}R}^\gamma \un{U}_{\gamma, \mathfrak{a}}$.
\betagin{proposition}\lambdabel{prop:under-X}
Let $X_{\gamma, \mathfrak{a}}$ be the local solution defined in Proposition~\ref{prop:SolutionMap}, and let $\un{X}_{\gamma, \mathfrak{a}}$ be as above. Then in the setting of \eqref{eq:SolutionMap-bound} one has
\betagin{equation*}
\mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \| \un{X}_{\gamma, \mathfrak{a}}(t) \|^{(\un{\mathfrak{e}})}_{\mathcal{C}C^\eta} \leq C,
\end{equation*}
where we use the norm \eqref{eq:eps-norm-1} with the scale $\un{\mathfrak{e}} := \mathfrak{e} \gamma^{\un\kappappa}$.
\end{proposition}
\betagin{proof}
For any $0 < \tilde{\mathfrak{e}} \leq \mathfrak{e}$ we have $\| X^{0}_\gamma\|^{(\tilde{\mathfrak{e}})}_{\mathcal{C}C^{\bar \eta}} \leq \| X^{0}_\gamma\|^{(\mathfrak{e})}_{\mathcal{C}C^{\bar \eta}}$. Taking $\un{\mathfrak{e}} < \tilde{\mathfrak{e}} < \mathfrak{e}$, Lemma~\ref{lem:Z-bound} yields $\bigl| \bigl( \iota_\varepsilon S_{\gamma} (t)\bigr) (\varphi_x^\lambda) \bigr| \lesssim \lambda^{\bar\eta} \| X^{0}_\gamma\|^{(\mathfrak{e})}_{\mathcal{C}C^{\bar \eta}}$ for any smooth compactly supported $\varphi$ and any $\lambda \in [\un{\mathfrak{e}}, 1]$. By the assumption \eqref{eq:initial-convergence} on the initial condition, the last quantity is bounded uniformly in $\gamma \in (0, \gamma_\star)$. Since the function $\un{K}_\gamma$ is smooth and rescaled by $\un{\mathfrak{e}}$, we get $\sup_{\gamma \in (0, \gamma_\star)} \| \un{X}^{0}_\gamma\|^{(\un{\mathfrak{e}})}_{\mathcal{C}C^{\bar \eta}} < \infty$. Estimating the right-hand side of \eqref{eq:U-bar} in exactly the same way as we bounded \eqref{eq:contraction1}, we get $\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert\un{U}_{\gamma, \mathfrak{a}} \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{\zeta, \eta; T}^{(\un{\mathfrak{e}})} \lesssim 1$ for any $T \in (0, T_{\gamma, \mathfrak{a}})$, where the proportionality constant is independent of $\gamma$ and $T$. Recalling that $\un{X}_{\gamma, \mathfrak{a}} = \un{\mathcal{C}R}^\gamma \un{U}_{\gamma, \mathfrak{a}}$, the claimed bound follows from Proposition~\ref{prop:ReconThm_v2} and moment bounds for the model.
\end{proof}
Let us define $\un{v}_{\gamma, \mathfrak{a}} := \un{X}_{\gamma, \mathfrak{a}} - \mathfrak{s}qrt2\, \un{Y}_{\gamma, \mathfrak{a}}$ with $\un{Y}_{\gamma, \mathfrak{a}} = \un{\mathcal{C}R}^{\gamma, \mathfrak{a}}\<1b>$. Then by bounding the right-hand side of \eqref{eq:U-bar} without the term $\mathfrak{s}qrt 2\, \un{W}_{\gamma, \mathfrak{a}}$, in the same way as we did in the proof of Proposition~\ref{prop:under-X}, in the setting of \eqref{eq:SolutionMap-bound} we get
\betagin{equation}\lambdabel{eq:remainder-bound}
\mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \| \un{v}_{\gamma, \mathfrak{a}}(t) \|^{(\un{\mathfrak{e}})}_{\mathcal{C}C^{3 / 2 + 3 \eta}} \leq C.
\end{equation}
We also need to control the process $\un{X}_\gamma X_\gamma$ which appears in the definition of the stopping time \eqref{eq:tau-2}. In what follows, when using the norm $\| \bigcdot \|_{L^\infty}$ of these processes, we compute the norm on $\T_{\mathtt{var}epsilon}^3$. Writing as before $X_{\gamma, \mathfrak{a}} = \mathfrak{s}qrt2\, Y_{\gamma, \mathfrak{a}} + v_{\gamma, \mathfrak{a}}$ and $\un{X}_{\gamma, \mathfrak{a}} = \mathfrak{s}qrt2\, \un{Y}_{\gamma, \mathfrak{a}} + \un{v}_{\gamma, \mathfrak{a}}$ with $Y_{\gamma, \mathfrak{a}} = \mathcal{C}R^{\gamma, \mathfrak{a}}\<1b>$ and $\un{Y}_{\gamma, \mathfrak{a}} = \un{\mathcal{C}R}^{\gamma, \mathfrak{a}}\<1b>$, we get
\betagin{align}
\bigl\| \bigl(\un{X}_{\gamma, \mathfrak{a}} X_{\gamma, \mathfrak{a}} &- 2\, \un{Y}_{\gamma, \mathfrak{a}} Y_{\gamma, \mathfrak{a}}\bigr)(t) \bigr\|^{(\un{\mathfrak{e}})}_{\mathcal{C}C^{3 / 2 + 3 \eta}} \lambdabel{eq:X-un-X} \\
&\qquad \lesssim \| \un{Y}_{\gamma, \mathfrak{a}}(t)\|_{L^\infty} \| v_{\gamma, \mathfrak{a}}(t) \|_{L^\infty} + \| \un{v}_{\gamma, \mathfrak{a}}(t) \|_{L^\infty} \bigl(\| Y_{\gamma, \mathfrak{a}}(t) \|_{L^\infty} + \| v_{\gamma, \mathfrak{a}}(t) \|_{L^\infty}\bigr). \nonumber
\end{align}
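For the reader's convenience, the bound \eqref{eq:X-un-X} comes from the elementary decomposition
\begin{equation*}
\un{X}_{\gamma, \mathfrak{a}} X_{\gamma, \mathfrak{a}} - 2\, \un{Y}_{\gamma, \mathfrak{a}} Y_{\gamma, \mathfrak{a}}
= \sqrt2\, \un{Y}_{\gamma, \mathfrak{a}}\, v_{\gamma, \mathfrak{a}} + \un{v}_{\gamma, \mathfrak{a}} \bigl( \sqrt2\, Y_{\gamma, \mathfrak{a}} + v_{\gamma, \mathfrak{a}} \bigr),
\end{equation*}
in which each factor on the right-hand side is then estimated in the supremum norm.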
Propositions~\ref{prop:models-converge} and \ref{prop:ReconThm_v2} yield $\mathbb{E} \bigl[\mathfrak{s}up_{t \in [0, T]} \| Y_{\gamma, \mathfrak{a}}(t) \|_{L^\infty}^p \bigr] \lesssim \mathfrak{e}^{|\<1b>| p}$ for all $p \geq 1$ large enough, and respectively $\mathbb{E} \bigl[\mathfrak{s}up_{t \in [0, T]} \| \un{Y}_{\gamma, \mathfrak{a}}(t) \|_{L^\infty}^p \bigr] \lesssim \un{\mathfrak{e}}^{|\<1b>| p}$. Moreover, from \eqref{eq:discrete-remainder-bound} and \eqref{eq:remainder-bound} we get
\betagin{equation*}
\mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \| v_{\gamma, \mathfrak{a}}(t) \|_{L^\infty} \lesssim \mathfrak{e}^{\mathfrak{r}ac{3}{2} + 3 \eta}, \qquad \mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \| \un{v}_{\gamma, \mathfrak{a}}(t) \|_{L^\infty} \lesssim \un{\mathfrak{e}}^{\mathfrak{r}ac{3}{2} + 3 \eta}.
\end{equation*}
Using these bounds and the Minkowski inequality, we get from \eqref{eq:X-un-X}
\betagin{equation}\lambdabel{eq:unX-X-minus-Y}
\mathfrak{s}up_{t \in [0, T \wedge T_{\gamma, \mathfrak{a}}]} \bigl\| \bigl(\un{X}_{\gamma, \mathfrak{a}} X_{\gamma, \mathfrak{a}} - 2\, \un{Y}_{\gamma, \mathfrak{a}} Y_{\gamma, \mathfrak{a}}\bigr)(t) \bigr\|_{L^\infty} \lesssim \mathfrak{e}^{|\<1b>| + \mathfrak{r}ac{3}{2} + 3 \eta} \gamma^{|\<1b>| \un \kappappa},
\end{equation}
where we used the definition $\un{\mathfrak{e}} = \mathfrak{e} \gamma^{\un\kappappa}$ and the bounds on $\eta$ in the statement of Theorem~\ref{thm:main}. If we take $\un \kappappa \leq \kappappa < \mathfrak{r}ac{1}{10}$, where $\kappappa$ is the value used in the definition of the regularity structure \eqref{eqs:hom}, the preceding expression is bounded by $C \mathfrak{e}^{\un\kappappa - 1}$. Furthermore, for any $\un{\eta} < -1$ we get the estimate
\betagin{equation*}
\mathbb{E} \biggl[\mathfrak{s}up_{t \in [0, T]} \Bigl(\| \un{Y}_{\gamma, \mathfrak{a}}(t) Y_{\gamma, \mathfrak{a}}(t) - \tfrac{1}{2} \un{\mathfrak{C}}_\gamma(t) \|^{(\un{\mathfrak{e}})}_{\mathcal{C}^{\un{\eta}}}\Bigr)^p\biggr] \lesssim 1,
\end{equation*}
for any sufficiently large $p \geq 1$ and any $T > 0$. This estimate is obtained in the same way as Lemma~\ref{lem:X2-under-prime}, because the processes involved in the two estimates differ only in their initial states. Moreover, we have $|\un{\mathfrak{C}}_\gamma - 2\, \un{\mathfrak{C}}_\gamma(t)| \lesssim 1$, where the constant $\un{\mathfrak{C}}_\gamma$ is defined in \eqref{eq:c-under}. Combining these bounds with \eqref{eq:unX-X-minus-Y}, we get the following result.
\betagin{lemma}\lambdabel{lem:unX-X-bound}
Let $\un \kappappa \leq \kappappa$, where $\kappappa$ is the value used in \eqref{eqs:hom} and $\un{\kappappa}$ is from \eqref{eq:under-K-def}. For any $\un{\eta} < -1$, $T > 0$ and any $p \geq 1$ large enough, in the setting of \eqref{eq:SolutionMap-bound} one has
\betagin{equation*}
\mathbb{E} \biggl[\mathfrak{s}up_{t \in [0, T \wedge T^L_{\gamma, \mathfrak{a}}]} \Bigl(\| \un{X}_{\gamma, \mathfrak{a}}(t) X_{\gamma, \mathfrak{a}}(t) - \un{\mathfrak{C}}_\gamma \|^{(\un{\mathfrak{e}})}_{\mathcal{C}^{\un{\eta}}}\Bigr)^p\biggr] \lesssim \mathfrak{e}^{(\un\kappappa - 1) p},
\end{equation*}
where $\eta$ is from the statement of Theorem~\ref{thm:main} and $\un{\mathfrak{e}} = \mathfrak{e} \gamma^{\un\kappappa}$.
\end{lemma}
\mathfrak{s}ection{Proof of Theorem~\ref{thm:main}}
\lambdabel{sec:ConvFinal}
Let $X_\gamma$ be the rescaled spin field of the Ising-Kac model \eqref{eq:X-gamma}, and let $X$ be the solution of the $\mathbb{P}hi^4_3$ equation \eqref{eq:Phi43}. Our goal is to prove that
\betagin{equation}\lambdabel{eq:weak-limit}
\lim_{\gamma \to 0} \mathbb{E} \bigl[ F (\iota_\mathtt{var}epsilon X_\gamma) \bigr] = \mathbb{E} \bigl[ F ( X )\bigr],
\end{equation}
for any bounded, uniformly continuous function $F : \mathcal{C}D \bigl([0, T], \mathscr{D}'(\mathbb{T}^3)\bigr) \to \R$. We note that the processes $X_\gamma$ and $X$ are not required to be coupled, and the expectations in \eqref{eq:weak-limit} may be on different probability spaces. We fix the value $T > 0$ throughout this section.
It will be convenient to introduce some intermediate processes. More precisely, for $\mathrm{d}eltalta > 0$ we define $X_{\mathrm{d}elta}$ to be the solution of the SPDE \eqref{eq:Phi43-delta} and we define $X_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ to be the solution of equation \eqref{eq:Discrete-equation-regular-noise}. Then the limit \eqref{eq:weak-limit} follows if for some $\gamma_0 > 0$ we have
\betagin{subequations}
\betagin{align}
&\lim_{\gamma \to 0} \mathbb{E} \bigl[ F (\iota_\mathtt{var}epsilon X_{\gamma, \mathfrak{a}}) \bigr] = \mathbb{E} \bigl[ F ( X )\bigr], \lambdabel{eq:bound-1}\\
\lim_{\mathfrak{a} \to \infty} &\sup_{\gamma \in (0, \gamma_0)} \mathbb{E} \bigl| F (\iota_\varepsilon X_\gamma) - F (\iota_\varepsilon X_{\gamma, \mathfrak{a}}) \bigr| = 0, \label{eq:bound-2}
\end{align}
\end{subequations}
where \eqref{eq:bound-1} holds for each fixed $\mathfrak{a} \geq 1$. Note that the two processes in \eqref{eq:bound-2} are defined on the same probability space. Furthermore, \eqref{eq:bound-1} follows if for some $\mathrm{d}elta_0 > 0$ we have
\betagin{subequations}
\betagin{align}
&\lim_{\mathrm{d}elta \to 0} \mathbb{E} \bigl| F (X_\mathrm{d}elta) - F ( X ) \bigr| = 0, \lambdabel{eq:bound-3} \\
&\lim_{\gamma \to 0} \mathbb{E} \bigl[ F (\iota_\mathtt{var}epsilon X_{\gamma, \mathrm{d}elta, \mathfrak{a}}) \bigr] = \mathbb{E} \bigl[ F ( X_\mathrm{d}elta )\bigr], \lambdabel{eq:bound-4} \\
\lim_{\mathrm{d}elta \to 0} &\mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{E} \bigl| F (\iota_\mathtt{var}epsilon X_{\gamma, \mathfrak{a}}) - F (\iota_\mathtt{var}epsilon X_{\gamma, \mathrm{d}elta, \mathfrak{a}}) \bigr| = 0, \lambdabel{eq:bound-5}
\end{align}
\end{subequations}
where \eqref{eq:bound-4} holds for every fixed $\delta \in (0, \delta_0)$. Again we use that the pairs of processes in \eqref{eq:bound-3} and \eqref{eq:bound-5} are defined on the same probability spaces. The limit \eqref{eq:bound-3} follows from the much stronger convergence stated in Theorem~\ref{thm:Phi-solution}, and the limit \eqref{eq:bound-4} is proved in Lemma~\ref{lem:convergence2}.
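Both reductions are simply applications of the triangle inequality; for the first one,
\begin{equation*}
\bigl| \mathbb{E} \bigl[ F (\iota_\varepsilon X_\gamma) \bigr] - \mathbb{E} \bigl[ F ( X )\bigr] \bigr|
\leq \mathbb{E} \bigl| F (\iota_\varepsilon X_\gamma) - F (\iota_\varepsilon X_{\gamma, \mathfrak{a}}) \bigr|
+ \bigl| \mathbb{E} \bigl[ F (\iota_\varepsilon X_{\gamma, \mathfrak{a}}) \bigr] - \mathbb{E} \bigl[ F ( X )\bigr] \bigr|,
\end{equation*}
so that one first takes $\mathfrak{a}$ large to make the first term uniformly small over $\gamma \in (0, \gamma_0)$ by \eqref{eq:bound-2}, and then sends $\gamma \to 0$ in the second term for this fixed $\mathfrak{a}$ using \eqref{eq:bound-1}.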
In order to compare the discrete and continuous heat kernels, we introduce the metric
\betagin{equation}\lambdabel{eq:heat-metric}
\| G^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1} := \mathfrak{s}um_{x \in \mathtt{L}ambda_{\mathtt{var}epsilon}} \int_{|\bar x - x| \leq \mathtt{var}epsilon} \bigl| G^\gamma_t(x) - G_t(\bar x)\bigr| \mathrm{d} \bar x.
\end{equation}
Here, $G_t(x) = (4 \pi t)^{-3/2} e^{- |x|^2 / (4 t)}$ is the heat kernel, and $G^\gamma_t : \Lambda_{\varepsilon} \to \R$ is the discrete kernel defined in \eqref{eq:From-P-to-G}. We will also use the discrete kernel $\widetilde{G}^\gamma$ defined in \eqref{eq:From-P-to-G-tilde}.
\betagin{lemma}\lambdabel{lem:heat-bounds}
For any $0 < t \leq 1$ one has
\betagin{equation}\lambdabel{eq:heat-limits}
\lim_{\gamma \to 0} \| G^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1} = 0, \qquad\qquad \lim_{\gamma \to 0} \| \widetilde{G}^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1} = 0.
\end{equation}
\end{lemma}
\betagin{proof}
From the explicit formula for the heat kernel we can get (see \cite[Lem.~7.4]{Regularity})
\betagin{equation*}
|G_t(x) - G_t(\bar x)| \leq C \bigl(t^{1/2} + (|x| \wedge |\bar x|)\bigr)^{-3 - \theta} |x - \bar x|^\theta,
\end{equation*}
for any $\theta \in [0, 1]$. Similarly, from the bounds on the discrete kernels provided at the end of Appendix~\ref{sec:decompositions} we get
\betagin{equation}\lambdabel{eq:heat-bounds}
|G^\gamma_t(x) - G_t(x)| \leq C \varepsilon^\theta \bigl(t^{1/2} + |x| + \varepsilon\bigr)^{-3 - \theta}.
\end{equation}
Then the integral over $\bar x$ in \eqref{eq:heat-metric} is estimated by a constant multiple of $\varepsilon^{3 + \theta} \bigl(t^{1/2} + |x| + \varepsilon\bigr)^{-3 - \theta}$, and summing over $x \in \Lambda_{\varepsilon}$ we can estimate the whole expression \eqref{eq:heat-metric} by a constant (depending on $t$) times $\varepsilon^\theta$. This gives the first limit in \eqref{eq:heat-limits}, and the second follows in the same way, since the bounds for $\widetilde{G}^\gamma_t$ are of the form \eqref{eq:heat-bounds} with $\varepsilon$ replaced by $\mathfrak{e}$.
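In more detail (a sketch, with the implicit constant allowed to depend on $t$ and on $\theta \in (0, 1]$): combining \eqref{eq:heat-bounds} with the H\"older bound on $G_t$ above, we have $|G^\gamma_t(x) - G_t(\bar x)| \lesssim \varepsilon^\theta \bigl(t^{1/2} + |x| + \varepsilon\bigr)^{-3-\theta}$ for $|\bar x - x| \leq \varepsilon$, and hence
\begin{equation*}
\| G^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1}
\lesssim \varepsilon^{\theta} \sum_{x \in \Lambda_{\varepsilon}} \varepsilon^{3} \bigl(t^{1/2} + |x| + \varepsilon\bigr)^{-3 - \theta}
\lesssim \varepsilon^{\theta} \int_{\R^3} \bigl(t^{1/2} + |y|\bigr)^{-3 - \theta}\, \mathrm{d} y
\lesssim \varepsilon^{\theta}\, t^{-\theta/2},
\end{equation*}
which indeed vanishes as $\gamma \to 0$ for every fixed $t \in (0, 1]$.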
\end{proof}
\betagin{lemma}\lambdabel{lem:convergence2}
For any $\mathfrak{a} \geq 1$, $\mathrm{d}elta \in (0, 1)$ and $T > 0$, the process $X_{\gamma, \mathrm{d}elta, \mathfrak{a}}(t)$ is almost surely uniformly bounded on $[0, T]$. Moreover, the limit \eqref{eq:bound-4} holds.
\end{lemma}
\betagin{proof}
We note that the formula \eqref{eq:xi-gamma-delta} makes sense on $\R \times \mathbb{T}^3$ (and not just on $\R \times \T_{\varepsilon}^3$). Let then $\bar{\xi}_{\gamma, \delta, \mathfrak{a}}$ be defined by \eqref{eq:xi-gamma-delta} on $\R \times \mathbb{T}^3$. It will be convenient to introduce an additional process $\bar{X}_{\gamma, \delta, \mathfrak{a}}$ on $\R \times \mathbb{T}^3$, which is the solution of the SPDE
\betagin{equation}\lambdabel{eq:auxiliary}
\bigl( \partial_t - \mathbf{D}elta \bigr) \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} = - \mathfrak{r}ac{\beta^3}{3} \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}}^3 + \bigl( \mathfrak{C}_{\mathrm{d}elta} + A \bigr) \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} + \mathfrak{s}qrt 2 \,\bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}},
\end{equation}
with the initial condition $X_\mathrm{d}elta^{0}$, the same as for equation \eqref{eq:Phi43-delta}. Then the limit \eqref{eq:bound-4} follows from
\betagin{subequations}
\betagin{align}
&\lim_{\gamma \to 0} \mathbb{E} \bigl[ F (\bar X_{\gamma, \mathrm{d}elta, \mathfrak{a}}) \bigr] = \mathbb{E} \bigl[ F ( X_\mathrm{d}elta )\bigr], \lambdabel{eq:bound-4-1} \\
&\lim_{\gamma \to 0} \mathbb{E} \bigl| F (\iota_\mathtt{var}epsilon X_{\gamma, \mathrm{d}elta, \mathfrak{a}}) - F (\bar X_{\gamma, \mathrm{d}elta, \mathfrak{a}}) \bigr| = 0, \lambdabel{eq:bound-4-2}
\end{align}
\end{subequations}
and we are going to prove these two limits.
For $T > 0$ we will use the shorthand notation $L^\infty_T := L^{\infty}([0, T] \times \mathbb{T}^3)$, and we will consider all the spaces and norms on $\mathbb{T}^3$ in the spatial variable, which we prefer not to write every time.
We start with the limit \eqref{eq:bound-4-1}. For this we will show continuous dependence of the solution of equation \eqref{eq:auxiliary} on the driving noise and on the initial state. More precisely, for $f_0 \in L^\infty$, for $T > 0$ and for a function $\zeta \in L^{\infty}_T$ we consider the PDE
\betagin{equation}\lambdabel{eq:auxiliary-general}
( \partial_t - \mathbf{D}elta ) f = - \mathfrak{r}ac{\beta^3}{3} f^3 + ( \mathfrak{C}_{\mathrm{d}elta} + A ) f + \mathfrak{s}qrt 2\, \zeta
\end{equation}
on $[0, T] \times \mathbb{T}^3$ with initial condition $f_0$ at time $0$. Of course, the solution $f$ depends on $\delta$ and $\gamma$ through the constants $\mathfrak{C}_{\delta}$ and $\beta$ (see \eqref{eq:beta}), but we prefer not to indicate this dependence, to keep the notation light. By our assumptions, there exists $L > 0$ such that $\| f_0 \|_{L^{\infty}} \leq L$ and $\| \zeta \|_{L^\infty_T} \leq L$. We are going to prove that there is a unique solution $f \in L^\infty_T$, and that the solution map $f = \mathcal{C}S_T(\zeta, f_0)$ is locally Lipschitz continuous from $L^{\infty}_T \times L^{\infty}$ to $L^{\infty}_T$.
Let $P : \R_+ \times \mathbb{T}^3 \to \R$ be the heat kernel, i.e.\ the Green's function of the parabolic operator $\partial_t - \Delta$. Then, with a slight abuse of notation, we write $P_t$ for the heat semigroup, whose action on functions is given by convolution with $P_t$ on $\mathbb{T}^3$. The mild form of \eqref{eq:auxiliary-general} is then
\betagin{equation}\lambdabel{eq:f-mild}
f_t(x) = P_t f_0 (x) + \int_0^t P_{t-s} \Bigl(- \mathfrak{r}ac{\beta^3}{3} f_s^3 + ( \mathfrak{C}_{\mathrm{d}elta} + A ) f_s + \mathfrak{s}qrt 2\, \zeta_s\Bigr)(x) \mathrm{d} s.
\end{equation}
We denote by $\mathcal{C}M_t(f)(x)$ the right-hand side, and we are going to prove that $\mathcal{C}M_t(f)$ is a contraction map on $\mathcal{C}B_{L, t} := \{f : \| f \|_{L^\infty_t} \leq L+1\}$ for a sufficiently small $0 < t < T$.
Taking $f \in \mathcal{C}B_{L, t}$ and using Young's inequality together with the identity $\| P_t \|_{L^1} = 1$, we get
\betagin{align*}
\| \mathcal{C}M_t(f) \|_{L^\infty} &\leq \| f_0 \|_{L^\infty} + t \| f\|^3_{L^\infty_t} + t | \mathfrak{C}_{\mathrm{d}elta} + A | \| f \|_{L^\infty_t} + t \mathfrak{s}qrt 2\, \| \zeta \|_{L^\infty_t} \\
&\leq L + t \bigl( (L +1)^3 + | \mathfrak{C}_{\mathrm{d}elta} + A | (L+1) + \mathfrak{s}qrt 2\, L \bigr),
\end{align*}
where we estimated $\beta^3 \leq 3$, which follows from \eqref{eq:beta} for all $\gammamma > 0$ sufficiently small. Taking $t > 0$ small enough, we get
\betagin{equation*}
\| \mathcal{C}M_t(f) \|_{L^\infty} \leq L + 1,
\end{equation*}
which means that $\mathcal{C}M_t$ maps $\mathcal{C}B_{L, t}$ to itself.
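For instance, any
\begin{equation*}
t \leq \Bigl( (L + 1)^3 + | \mathfrak{C}_{\delta} + A |\, (L + 1) + \sqrt 2\, L \Bigr)^{-1}
\end{equation*}
suffices for the preceding display, since then the second summand contributes at most $1$.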
Let us now take $f, \bar f \in \mathcal{C}B_{L, t}$ with $f_0 = \bar f_0$. Then
\betagin{equation*}
\big( \mathcal{C}M_t(f) - \mathcal{C}M_t(\bar f) \big)(x) = - \mathfrak{r}ac{\beta^3}{3} \int_{0}^t P_{t-s} \big( f_s^3 - \bar f_s^3 \big)(x) \mathrm{d} s + ( \mathfrak{C}_{\mathrm{d}elta} + A ) \int_{0}^t P_{t-s} \big( f_s - \bar f_s \big)(x) \mathrm{d} s,
\end{equation*}
which, arguing as above, yields
\betagin{align*}
\| \mathcal{C}M_t(f) - \mathcal{C}M_t(\bar f)\|_{L^\infty} &\leq t \| f^3 - \bar f^3\|_{L^\infty_t} + t | \mathfrak{C}_{\delta} + A | \| f - \bar f \|_{L^\infty_t} \\
&\leq t \bigl( 3 (L+1)^2 + | \mathfrak{C}_{\delta} + A | \bigr) \| f - \bar f \|_{L^\infty_t}.
\end{align*}
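Here the cubic difference was controlled via the elementary factorisation (recall that $f, \bar f \in \mathcal{C}B_{L, t}$, so their $L^\infty_t$ norms are at most $L + 1$):
\begin{equation*}
f_s^3 - \bar f_s^3 = \bigl( f_s - \bar f_s \bigr) \bigl( f_s^2 + f_s \bar f_s + \bar f_s^2 \bigr),
\qquad\text{so that}\qquad
\| f^3 - \bar f^3 \|_{L^\infty_t} \leq 3 (L+1)^2 \| f - \bar f \|_{L^\infty_t}.
\end{equation*}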
Taking $t > 0$ small enough, we get $t \bigl( 3 (L+1)^2 + | \mathfrak{C}_{\delta} + A | \bigr) < 1$, which means that $\mathcal{C}M_t$ is a contraction on $\mathcal{C}B_{L, t}$. By the Banach fixed point theorem, there exists a unique solution $f \in L^\infty_t$ of equation \eqref{eq:f-mild}.
Let us now denote by $f = \mathcal{C}S_t(\zeta, f_0)$ the solution map of \eqref{eq:f-mild} on $\mathcal{C}B_{L, t}$. We are going to show that it is continuous with respect to $\zeta$ and $f_0$, satisfying $\| f_0 \|_{L^{\infty}} \leq L$ and $\| \zeta \|_{L^\infty_T} \leq L$. For this we take $\| \bar f_0 \|_{L^{\infty}} \leq L$ and $\| \bar \zeta \|_{L^\infty_T} \leq L$, and for $\bar f = \mathcal{C}S_t(\bar \zeta, \bar f_0)$ we have
\betagin{align*}
\big( f_t - \bar f_t \big)(x) &= P_t (f_0 - \bar f_0) (x) - \mathfrak{r}ac{\beta^3}{3} \int_{0}^t P_{t-s} \big( f_s^3 - \bar f_s^3 \big)(x) \mathrm{d} s \\
&\qquad + ( \mathfrak{C}_{\mathrm{d}elta} + A ) \int_{0}^t P_{t-s} \big( f_s - \bar f_s \big)(x) \mathrm{d} s + \mathfrak{s}qrt 2 \int_{0}^t P_{t-s} \big( \zeta_s - \bar \zeta_s \big)(x) \mathrm{d} s.
\end{align*}
Computing the norms as above, we get
\betagin{equation*}
\| f - \bar f \|_{L^\infty_t} \leq \| f_0 - \bar f_0 \|_{L^\infty} + t \bigl( 3 (L+1)^2 + | \mathfrak{C}_{\delta} + A | \bigr) \| f - \bar f\|_{L^\infty_t} + t \sqrt 2\, \| \zeta - \bar \zeta \|_{L^\infty_t}.
\end{equation*}
Since $t$ is such that $t \bigl( 3 (L+1)^2 + | \mathfrak{C}_{\delta} + A | \bigr) < 1$, we can move the term proportional to $\| f - \bar f\|_{L^\infty_t}$ to the left-hand side and get
\betagin{equation*}
\| f - \bar f \|_{L^\infty_t} \leq C \| f_0 - \bar f_0 \|_{L^\infty} + C \| \zeta - \bar \zeta \|_{L^\infty_t},
\end{equation*}
where the constant $C$ depends on $\delta$ and $L$. Thus, the solution map is locally Lipschitz continuous.
The extension of the solution to longer time intervals $[0, T]$ is standard, and is done by patching local solutions. Since the function $V : \R \to \R$ given by $V(u) = u^2$ is a Lyapunov function for equation \eqref{eq:auxiliary-general}, the solution is global in time and $T$ can be taken arbitrarily large (this standard result can be found for example in \cite[Prop.~6.23]{Hai09}).
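A sketch of the underlying a priori estimate: testing \eqref{eq:auxiliary-general} with $f_t$ and integrating by parts on $\mathbb{T}^3$ gives, at least formally,
\begin{equation*}
\frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t} \| f_t \|_{L^2}^2
= - \| \nabla f_t \|_{L^2}^2 - \frac{\beta^3}{3} \| f_t \|_{L^4}^4
+ ( \mathfrak{C}_{\delta} + A ) \| f_t \|_{L^2}^2 + \sqrt 2 \int_{\mathbb{T}^3} \zeta_t f_t \, \mathrm{d} x
\leq C \bigl( 1 + \| f_t \|_{L^2}^2 \bigr),
\end{equation*}
with $C$ depending only on $\mathfrak{C}_{\delta}$, $A$ and $\| \zeta \|_{L^\infty_T}$, since the gradient and cubic terms have a favourable sign; Gr\"onwall's inequality then rules out blow-up of the $L^2$ norm, while the dissipativity of the cubic nonlinearity prevents growth of the supremum norm (this is the role of the Lyapunov function mentioned above).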
Let us now look back at \eqref{eq:bound-4-1}. Using the constructed solution map we can write $X_{\mathrm{d}elta} = \mathcal{C}S(\xi_{\mathrm{d}elta}, X_{\mathrm{d}elta}^{0})$ and $\bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} = \mathcal{C}S(\bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}, X_{\mathrm{d}elta}^{0})$. By Lemma $2.3$ in \cite{Martingales} we have the convergence in law in the topology of the Skorokhod space $\mathcal{C}D(\R_+, \mathscr{D}'(\mathbb{T}^3))$ of the family of martingales $(\mathfrak{M}_{\gamma, \mathfrak{a}}(\bigcdot, x))_{x \in \T_{\mathtt{var}epsilon}^3}$ to a cylindrical Wiener process on $L^2(\mathbb{T}^3)$. For any $T > 0$, we therefore get convergence in law of $\bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ to $\xi_{\mathrm{d}elta}$, as $\gamma \to 0$, in the topology of $L^\infty([0, T] \times \mathbb{T}^3)$. Then from continuity of the solution map $\mathcal{C}S$ we conclude that $\bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ converges in law to $X_{\mathrm{d}elta}$, as $\gamma \to 0$, in the topology of $L^\infty([0, T] \times \mathbb{T}^3)$. This yields the required limit \eqref{eq:bound-4-1}.
Now, we will prove the limit \eqref{eq:bound-4-2}. We observe that the two processes appearing there are driven by the same noise and live on the same probability space. It will be convenient to define an analogue of the $L^\infty$ norm that compares a discrete function with a continuous one. Namely, for $f_\gamma : \Lambda_{\varepsilon} \to \R$ and $f : \R^3 \to \R$ we set
\betagin{equation*}
\| f_\gamma; f \|^{(\mathfrak{e})}_{L^\infty} := \sup_{\substack{x \in \Lambda_{\varepsilon}, \bar x \in \R^3 \\ |\bar x - x|_\infty \leq \varepsilon / 2}} \bigl| f_{\gamma}(x) - f(\bar x)\bigr|.
\end{equation*}
If moreover the functions depend on the time variable, we set $\| f_\gamma; f \|^{(\mathfrak{e})}_{L^\infty_T} := \sup_{t \in [0, T]} \| f_\gamma(t); f(t) \|^{(\mathfrak{e})}_{L^\infty}$. Then the limit \eqref{eq:bound-4-2} holds if we show
\betagin{equation}\lambdabel{eq:second-term-converges}
\lim_{\gamma \to 0} \mathbb{E} \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}; \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_T} = 0.
\end{equation}
Now, we will prove the limit \eqref{eq:second-term-converges}. The mild form of \eqref{eq:auxiliary} is
\betagin{equation}\lambdabel{eq:mild1}
\bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}}(t, x) = P_t X_{\mathrm{d}elta}^{0} (x) + \int_0^t P_{t-s} \Bigl(- \mathfrak{r}ac{\beta^3}{3} \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}}^3 + ( \mathfrak{C}_{\mathrm{d}elta} + A ) \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} + \mathfrak{s}qrt 2\, \bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}\Bigr)(s, x) \mathrm{d} s.
\end{equation}
As a consequence of our analysis of equation \eqref{eq:auxiliary-general}, if we take $\| X_{\delta}^{0} \|_{L^{\infty}} \leq L$ and $\|\bar \xi_{\gamma, \delta, \mathfrak{a}} \|_{L^\infty_T} \leq L$, then for $0 < t \leq T$ small enough we have $\|\bar{X}_{\gamma, \delta, \mathfrak{a}} \|_{L^\infty_t} \leq L + 1$. We will use this value of $t$ in what follows. The same analysis shows that if $\| X_{\delta}^{0} \|_{L^{\infty}} \leq L$ then $\|X_{\gamma, \delta, \mathfrak{a}} \|_{L^\infty_t} \leq L + 1$; we do not repeat the argument.
We extend the processes periodically in the spatial variables. This means that we need to replace $P^\gamma$ and $\widetilde{P}^\gamma$ by $G^\gamma$ and $\widetilde{G}^\gamma$ respectively; and we need to replace $P$ by $G$ in \eqref{eq:mild1}. In what follows we are going to work with these periodic extensions.
Using the metric \eqref{eq:heat-metric}, one can readily get the bound
\betagin{equation}\lambdabel{eq:discret-continuous-bound}
\| G^\gamma_t X_{\gamma, \mathrm{d}elta, \mathfrak{a}}^{0}; G_t X_{\mathrm{d}elta}^{0} \|^{(\mathfrak{e})}_{L^\infty} \leq \| G_t \|_{L^1} \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}^{0}; X_{\mathrm{d}elta}^{0} \|^{(\mathfrak{e})}_{L^\infty} + \| G^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1} \| X_{\mathrm{d}elta}^{0}\|_{L^\infty}.
\end{equation}
We have $\| G_t \|_{L^1} = 1$, and from Lemma~\ref{lem:heat-bounds} we know that $Q^\gamma_t := \| G^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1}$ and $\widetilde{Q}^\gamma_t := \| \widetilde{G}^\gamma_t; G_t \|^{(\mathfrak{e})}_{L^1}$ vanish as $\gamma \to 0$. Using the bound $\|\bar{X}_{\gamma, \delta, \mathfrak{a}} \|_{L^\infty_t} \leq L + 1$, subtracting the two mild equations, and arguing as for \eqref{eq:discret-continuous-bound}, we get
\betagin{align*}
&\| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}; \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t} \leq \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}^{0}; X_{\mathrm{d}elta}^{0} \|^{(\mathfrak{e})}_{L^\infty} + Q^\gamma_t L + t \mathfrak{s}qrt 2\, \| \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}; \bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}\|^{(\mathfrak{e})}_{L^\infty_t}\\
&\qquad + t \Bigl( \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}^3; \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}}^3 \|^{(\mathfrak{e})}_{L^\infty_t} + | \mathfrak{C}_{\mathrm{d}elta} + A | \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}; \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t} + |\mathfrak{C}_{\gamma, \mathrm{d}elta} - \mathfrak{C}_{\mathrm{d}elta}| L\Bigr) \\
&\qquad + t \widetilde{Q}^\gamma_t \Bigl( (L + 1)^3 + | \mathfrak{C}_{\mathrm{d}elta} + A | (L + 1) + \mathfrak{s}qrt 2\, L\Bigr).
\end{align*}
We can readily show that $\| X_{\gamma, \delta, \mathfrak{a}}^3; \bar{X}_{\gamma, \delta, \mathfrak{a}}^3 \|^{(\mathfrak{e})}_{L^\infty_t} \leq 3 (L+1)^2 \| X_{\gamma, \delta, \mathfrak{a}}; \bar{X}_{\gamma, \delta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t}$, and the choice of $t$ allows us to absorb the term proportional to $\| X_{\gamma, \delta, \mathfrak{a}}; \bar{X}_{\gamma, \delta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t}$ into the left-hand side and obtain the bound
\betagin{align*}
\| X_{\gamma, \delta, \mathfrak{a}}; \bar{X}_{\gamma, \delta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t} &\lesssim \| X_{\gamma, \delta, \mathfrak{a}}^{0}; X_{\delta}^{0} \|^{(\mathfrak{e})}_{L^\infty} + Q^\gamma_t L + t \| \xi_{\gamma, \delta, \mathfrak{a}}; \bar \xi_{\gamma, \delta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t} + t |\mathfrak{C}_{\gamma, \delta} - \mathfrak{C}_{\delta}| L\\
&\qquad + t \widetilde{Q}^\gamma_t \Bigl( (L + 1)^3 + | \mathfrak{C}_{\mathrm{d}elta} + A | (L + 1) + \mathfrak{s}qrt 2\, L\Bigr),
\end{align*}
where the proportionality constant depends on $t$ and $L$. From our assumptions in Theorem~\ref{thm:main} on the initial states we conclude that $\lim_{\gamma \to 0} \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}^{0}; X_{\mathrm{d}elta}^{0} \|^{(\mathfrak{e})}_{L^\infty} = 0$. Furthermore, we have $\lim_{\gamma \to 0} \mathbb{E} \| \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}; \bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t} = 0$. Finally, from the definitions of the renormalisation constants we get $\lim_{\gamma \to 0} \mathfrak{C}_{\gamma, \mathrm{d}elta} = \mathfrak{C}_{\mathrm{d}elta}$, because the constants are defined in terms of the heat kernels and these converge uniformly as $\gamma \to 0$ (see Lemma~\ref{lem:Pgt}). Then from the preceding inequality we obtain
\betagin{equation*}
\mathbb{E} \| X_{\gamma, \mathrm{d}elta, \mathfrak{a}}; \bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}} \|^{(\mathfrak{e})}_{L^\infty_t} \leq C_{\gamma} (L, t),
\end{equation*}
where $\lim_{\gamma \to 0} C_{\gamma} (L, t) = 0$. Since $\bar \xi_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ is almost surely bounded, the process $\bar{X}_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ almost surely does not blow up in a finite time (see the argument above), and we conclude that the same is true for $X_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ and \eqref{eq:second-term-converges} holds for any $T > 0$.
\end{proof}
Our next aim is to prove the limit \eqref{eq:bound-5}. It will be convenient to prove the required convergence in probability. For this we need to restrict the time interval to $[0, T^L_{\gamma, \mathfrak{a}} \wedge T^L_{\gamma, \mathrm{d}elta, \mathfrak{a}}]$, where the stopping times $T^L_{\gamma, \mathfrak{a}}$ and $T^L_{\gamma, \mathrm{d}elta, \mathfrak{a}}$ are defined in Proposition~\ref{prop:SolutionMap}. Moreover, we need to introduce auxiliary stopping times providing a bound on the models. More precisely, for $L > 0$ we define
\betagin{align*}
\tau^{L}_{\gamma, \mathfrak{a}} &:= \inf \Bigl\{ t \geq 0 : \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{t+1}^{(\mathfrak{e})} \geq L \Bigr\} \wedge T^L_{\gamma, \mathfrak{a}}, \quad \tau^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}} := \inf \Bigl\{ t \geq 0 : \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{t+1}^{(\mathfrak{e})} \geq L \Bigr\} \wedge T^L_{\gamma, \mathrm{d}elta, \mathfrak{a}}.
\end{align*}
Then for any $A > 0$, $L > 0$ and $T > 0$ we have
\betagin{align}
&\mathbb{P}\biggl( \mathfrak{s}up_{t \in [0, T]} \| (X_{\gamma, \mathrm{d}elta, \mathfrak{a}} - X_{\gamma, \mathfrak{a}})(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} \geq A \biggr) \nonumber\\
&\hspace{1cm}\leq \mathbb{P}\biggl( \mathfrak{s}up_{t \in [0, T \wedge \tau^{L}_{\gamma, \mathfrak{a}} \wedge \tau^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}}]} \| (X_{\gamma, \mathrm{d}elta, \mathfrak{a}} - X_{\gamma, \mathfrak{a}})(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} \geq A \biggr) + \mathbb{P} \bigl(\tau^{L}_{\gamma, \mathfrak{a}} \wedge \tau^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}} < T \bigr). \lambdabel{eq:weak-limit-3}
\end{align}
From the assumptions of Theorem~\ref{thm:main} we conclude that there exists $L_\mathfrak{s}tar > 0$ such that $\| X^0_{\gamma} \|^{(\mathfrak{e})}_{\mathcal{C}C^{\bar \eta}} \leq L_\mathfrak{s}tar$ uniformly in $\gamma \in (0, \gamma_\mathfrak{s}tar)$. Moreover, the definition \eqref{eq:initial-gamma-delta} yields $\mathfrak{s}up_{\gamma \in (0, \gamma_\mathfrak{s}tar)} \| X^0_{\gamma} - X^{0}_{\gamma, \mathrm{d}eltalta} \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} \lesssim \mathrm{d}elta^{\theta}$ for any $\eta < \bar \eta$ and any $\theta > 0$ small enough. We fix $0 < \gamma_0 \leq \gamma_\mathfrak{s}tar$ such that the result of Proposition~\ref{prop:models-converge} holds. Then from Proposition~\ref{prop:SolutionMap} we conclude that
\betagin{equation}\lambdabel{eq:X-delta-stopped}
\lim_{L \to \infty} \lim_{\mathrm{d}elta \to 0} \mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{P}\biggl( \mathfrak{s}up_{t \in [0, T \wedge \tau^{L}_{\gamma, \mathfrak{a}} \wedge \tau^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}}]} \| (X_{\gamma, \mathrm{d}elta, \mathfrak{a}} - X_{\gamma, \mathfrak{a}})(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} \geq A \biggr) = 0.
\end{equation}
Furthermore, we have
\betagin{equation}\lambdabel{eq:tau-bounds}
\mathbb{P} \bigl(\tau^{L}_{\gamma, \mathfrak{a}} \wedge \tau^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}} < T \bigr) \leq \mathbb{P} \bigl(T^{L}_{\gamma, \mathfrak{a}} \wedge T^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}} < T \bigr) + \mathbb{P}\bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \geq L \bigr) + \mathbb{P}\bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \geq L \bigr).
\end{equation}
Markov's inequality yields $\mathbb{P} \bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \geq L\bigr) \leq L^{-p}\, \mathbb{E} \bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})}\bigr)^p$, for any $p \geq 1$. From Proposition~\ref{prop:models-converge} we conclude that for any $p$ the preceding expectation is bounded uniformly in $\gamma \in (0,\gamma_0)$. In the same way from Proposition~\ref{prop:models-converge} we conclude that $\mathbb{P} \bigl( \vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert Z^{\gamma, \mathrm{d}elta, \mathfrak{a}}_{{\scaleto{\mathrm{lift}}{4pt}}}\vert\hspace{-1.5pt}\vert\hspace{-1.5pt}\vert_{T+1}^{(\mathfrak{e})} \geq L\bigr)$ is bounded uniformly in $\gamma \in (0,\gamma_0)$ and $\mathrm{d}elta \in (0,1)$, and hence from \eqref{eq:tau-bounds} we get
\betagin{equation}\lambdabel{eq:X-delta-stopped-better}
\lim_{L \to \infty} \mathfrak{s}up_{\mathrm{d}elta \in (0,1)} \mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{P} \bigl(\tau^{L}_{\gamma, \mathfrak{a}} \wedge \tau^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}} < T \bigr) \leq \lim_{L \to \infty} \mathfrak{s}up_{\mathrm{d}elta \in (0,1)} \mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{P} \bigl(T^{L}_{\gamma, \mathfrak{a}} \wedge T^{L}_{\gamma, \mathrm{d}elta, \mathfrak{a}} < T \bigr).
\end{equation}
Lemma~\ref{lem:convergence2} implies that the lifetime $T_{\gamma, \delta, \mathfrak{a}}$ of the process $X_{\gamma, \delta, \mathfrak{a}}$ is almost surely infinite, and hence Proposition~\ref{prop:SolutionMap} yields $\lim_{L \to \infty} T^L_{\gamma, \delta, \mathfrak{a}} = +\infty$ almost surely. Then the right-hand side of \eqref{eq:X-delta-stopped-better} equals
\betagin{equation}\lambdabel{eq:X-delta-stopped-better-2}
\lim_{L \to \infty} \mathfrak{s}up_{\mathrm{d}elta \in (0,1)} \mathfrak{s}up_{\gamma \in (0, \gamma_0)} \mathbb{P} \bigl(T^{L}_{\gamma, \mathfrak{a}} < T \bigr).
\end{equation}
Furthermore, as we stated after \eqref{eq:IsingKacEqn-periodic}, we have $X_{\gamma, \mathfrak{a}}(t) = X_{\gamma}(t)$ for $t \leq \tau_{\gamma, \mathfrak{a}}$ and $X_{\gamma, \mathfrak{a}}(t) = X'_{\gamma, \mathfrak{a}}(t)$ for $t > \tau_{\gamma, \mathfrak{a}}$. Then $X_{\gamma, \mathfrak{a}}$ is almost surely bounded on each bounded time interval, because for $t \leq \tau_{\gamma, \mathfrak{a}}$ the process is bounded due to the definition of the stopping times \eqref{eq:tau-1}-\eqref{eq:tau}, and for $t > \tau_{\gamma, \mathfrak{a}}$ the process is bounded due to Lemma~\ref{lem:X-prime-bound}. Hence, we conclude that the lifetime of the process $X_{\gamma, \mathfrak{a}}$ is almost surely infinite, and $\lim_{L \to \infty} T^L_{\gamma, \mathfrak{a}} = +\infty$ almost surely. This implies that \eqref{eq:X-delta-stopped-better-2} vanishes.
From the preceding argument we conclude that the expression in \eqref{eq:weak-limit-3} vanishes, which yields convergence of the process $X_{\gamma, \delta, \mathfrak{a}}$ to $X_{\gamma, \mathfrak{a}}$ as $\delta \to 0$, in probability and uniformly over $\gamma \in (0, \gamma_0)$, with respect to the topology appearing in \eqref{eq:weak-limit-3}. Since $F$ is bounded and uniformly continuous, this implies \eqref{eq:bound-5}.
We have proved the limit \eqref{eq:bound-1} and it is left to prove \eqref{eq:bound-2}. We are going to prove this limit in probability. Recalling the definition of $X_{\gamma, \mathfrak{a}}$, for any $A > 0$ we get
\betagin{equation}
\mathbb{P}\biggl( \mathfrak{s}up_{t \in [0, T]} \| (X_{\gamma, \mathfrak{a}} - X_\gamma)(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} \geq A \biggr) \leq \mathbb{P}\bigl( \tau_{\gamma, \mathfrak{a}} < T \bigr), \lambdabel{eq:weak-limit-4}
\end{equation}
where the supremum vanishes if $\tau_{\gamma, \mathfrak{a}} \geq T$. From the definition \eqref{eq:tau} we have
\betagin{equation}
\mathbb{P} \bigl(\tau_{\gamma, \mathfrak{a}} < T \bigr) \leq \mathbb{P} \bigl(\tau^{(1)}_{\gamma, \mathfrak{a}} < T \bigr) + \mathbb{P} \bigl(\tau^{(2)}_{\gamma, \mathfrak{a}} < T \bigr). \lambdabel{eq:weak-limit-5}
\end{equation}
We write the stopping time \eqref{eq:tau-1} as $\tau^{(1)}_{\gamma, \mathfrak{a}} = \inf \bigl\{t \geq 0 : \| X_{\gamma, \mathfrak{a}}(t) \|^{(\mathfrak{e})}_{\mathcal{C}C^{\eta}} \geq \mathfrak{a} \bigr\}$, and hence it coincides with the stopping time $T^{L_\mathfrak{a}}_{\gamma, \mathfrak{a}}$ defined in Proposition~\ref{prop:SolutionMap} with a suitable value $L_\mathfrak{a}$, depending on $\mathfrak{a}$ and such that $\lim_{\mathfrak{a} \to \infty} L_\mathfrak{a} = \infty$. Then we have $\lim_{\mathfrak{a} \to \infty}\sup_{\gamma \in (0, 1)} \mathbb{P} \bigl(\tau^{(1)}_{\gamma, \mathfrak{a}} < T \bigr) = 0$. Convergence of the last term in \eqref{eq:weak-limit-5} to zero, uniformly in $\gamma \in (0, 1)$ as $\mathfrak{a} \to \infty$, follows from Lemma~\ref{lem:unX-X-bound}.
\mathfrak{s}ubsection{The renormalisation constant}
\lambdabel{sec:renormalisation}
We readily conclude from Lemma~\ref{lem:renorm-constants} that the renormalisation constant \eqref{eq:C-exact} may be written in the form \eqref{eq:C-expansion}.
\appendices
\mathfrak{s}ection{Properties of the discrete kernels}
\lambdabel{sec:kernels}
The main result of this appendix is Lemma~\ref{lem:Pgt}, which provides bounds on continuous extensions of the functions $G^\gamma$ and $\widetilde{G}^\gamma$ defined in \eqref{eq:From-P-to-G} and \eqref{eq:From-P-to-G-tilde}.
Before proving this result, we first establish several bounds on the function $K_\gamma$. By the definitions \eqref{eq:K-gamma} and \eqref{eq:Laplacian-gamma} we conclude that there exists $\gamma_0 > 0$ (depending on the radius of the support of the function $\fK$) such that for $\gamma \in (0, \gamma_0)$ and $\omega \in \{-N, \ldots, N\}^3$
\betagin{equation}\lambdabel{eq:B1}
\widehat{K}_\gamma (\omega) = \mathtt{var}epsilon ^3 \mathfrak{s}um_{x\in \T_{\mathtt{var}epsilon}^3} K_\ga(x) e^{- i \pi \omega \cdot x} = \mathtt{var}kappa_{\gamma, 1} \gamma^{3} \mathfrak{s}um_{x \in \gamma \Z^3} \fK(x) e^{- i \pi \gamma^3 \omega \cdot x},
\end{equation}
where we used the fact that $\fK$ is compactly supported to extend the sum to all $x \in \gamma \Z^3$. In what follows we will always consider $\gamma \in (0, \gamma_0)$. Furthermore, it will be convenient to view $\widehat{K}_\gamma$ as a function of a continuous argument by evaluating \eqref{eq:B1} for all $\omega \in \R^3$. In this way, the function $\widehat{K}_\gamma(\omega)$ is smooth and we will use the notation $\omega = (\omega_1, \omega_2, \omega_3)$ and $\partial_j$ for the partial derivative with respect to $\omega_j$. For a multiindex $k \in \N_0^3$ we will write $D^k = \prod_{j= 1}^3 \partial_j^{k_j}$ for a mixed derivative.
\betagin{lemma}\lambdabel{lem:Kg0}
For any $c > 0$ there exists a constant $C_1 > 0$ such that
\betagin{subequations}
\betagin{align}
\bigl| \gamma^{-6} \bigl(1 -\widehat{K}_\gamma(\omega) \bigr) - \pi^2 |\omega|^2 \bigr| &\leq C_1 \gamma^3 | \omega|^3, \lambdabel{eq:K2.4} \\
\bigl| \gamma^{-6} \partial_j \widehat{K}_\gamma(\omega) + 2 \pi^2 \omega_j \bigr| &\leq C_1 \gamma^3 | \omega|^2, \lambdabel{eq:K2.3}
\end{align}
\end{subequations}
uniformly over $\gamma \in (0, \gamma_0)$, $|\omega| \leq c\gamma^{-3}$ and $j \in \{1,2,3\}$.
\end{lemma}
\betagin{proof}
For $|\omega| \leq c \gamma^{-3}$ a Taylor expansion and \eqref{eq:B1} yield
\betagin{equation}\lambdabel{eq:one-minus-K}
\betagin{aligned}
1 - \widehat{K}_\gamma (\omega)
&= \mathtt{var}kappa_{\gamma, 1} \gamma^{3} \mathfrak{s}um_{x\in \gamma \Z^3} \fK( x) \bigl(1 - e^{- i \pi \gamma^3 \omega \cdot x} \bigr) \\
&= \mathtt{var}kappa_{\gamma, 1} \gamma^{3} \mathfrak{s}um_{x\in \gamma \Z^3} \fK( x) \Big( i \pi \gamma^3 \omega \cdot x + \tfrac{1}{2} \big( \pi \gamma^3 \omega \cdot x\big)^2\Big) + \mathtt{Err}_\gamma(\omega),
\end{aligned}
\end{equation}
where the error term satisfies $| \mathtt{Err}_\gamma(\omega)| \leq \mathtt{var}kappa_{\gamma, 1} \mathfrak{r}ac{\pi^3}{6} \gamma^{12} |\omega|^3 \mathfrak{s}um_{x\in \gamma \Z^3} |x|^3 \fK(x) \lesssim \gamma^9 |\omega|^3$.
In the first identity in \eqref{eq:one-minus-K} we used the definition of the constant $\mathtt{var}kappa_{\gamma, 1}$ in \eqref{eq:K-gamma}. By the symmetry of the kernel $\fK(x)$, we have $\mathfrak{s}um_{x\in \gamma \Z^3} \fK( x) ( \omega \cdot x ) =0$ and $\mathfrak{s}um_{x\in \gamma \Z^3} x_i x_j \fK(x) =0$ for $i \neq j$. Furthermore, the sums $\gamma^{3} \mathfrak{s}um_{x\in \gamma \Z^3} \fK( x) x_j^2$ converge to $\int_{\R^3} \fK(x) x_j^2 \mathrm{d} x = 2$ as $\gamma \to 0$ with an error $\mathcal{C}O(\gamma^3)$. The last identity follows from \eqref{eq:K-moments} and symmetry of the function $\fK$. Then from \eqref{eq:one-minus-K} we obtain \eqref{eq:K2.4}.
The remaining bound \eqref{eq:K2.3} follows in a similar manner. More precisely, using a Taylor expansion we write
\betagin{align*}
-\partial_j \widehat{K}_\gamma(\omega) &= \mathtt{var}kappa_{\gamma, 1} i \pi \gamma^6 \mathfrak{s}um_{x\in \gamma \Z^3} x_j \fK(x) \big( e^{- i \pi \gamma^3 \omega \cdot x} -1 \big) \\
&= \mathtt{var}kappa_{\gamma, 1} \pi^2 \gamma^9 \omega_j \mathfrak{s}um_{x \in \gamma \Z^3} x_j^2 \fK(x) + \mathtt{Err}'_\gamma(\omega),
\end{align*}
for an error term satisfying $|\mathtt{Err}'_\gamma(\omega)| \lesssim \gamma^9 |\omega|^2$. Here, we have used the symmetry of the kernel $\fK$ to add the term $-1$ in the first equality and to remove the sums containing the products $x_i x_j$ for $i \neq j$ in the Taylor expansion in the second line. The bound \eqref{eq:K2.3} then follows similarly to \eqref{eq:K2.4}.
\end{proof}
\betagin{lemma}\lambdabel{lem:Kg}
For any $k \in \N_0^3$ and $m \in \N_0$ there are constants $C_1, C_2, C_3 > 0$ (where only $C_2$ and $C_3$ depend on $k$, and only $C_3$ depends on $m$) such that the following estimates hold uniformly over $\gamma \in (0, \gamma_0)$, $\omega \in \bigl[-N-\mathfrak{r}ac12, N +\mathfrak{r}ac12\bigr]^3$ and $j \in \{1,2,3\}$:
\betagin{enumerate}
\item (Most useful for $|\omega| \lesssim \gamma^{-3}$)
\betagin{subequations}\lambdabel{eqs:K-bounds}
\betagin{align}
|\widehat{K}_\gamma(\omega) | &\leq 1, \lambdabel{eq:K1} \\
|\partial_j \widehat{K}_\gamma(\omega) | &\leq C_1 \gamma^3 \big( |\gamma^3 \omega| \wedge 1 \big), \lambdabel{eq:K1A} \\
|D^{k} \widehat{K}_\gamma(\omega) | &\leq C_2 \gamma^{3 | k |_{\mathfrak{s}one}}, \lambdabel{eq:K1B}
\end{align}
\end{subequations}
\item (Most useful for $|\omega| \gtrsim \gamma^{-3}$)
\betagin{equation}
| \gamma^3 \omega|^{2m} \big| D^{k} \widehat{K}_\gamma (\omega) \big|\leq C_3 \gamma^{3 |k|_{\mathfrak{s}one}}. \lambdabel{eq:K3}
\end{equation}
\end{enumerate}
Furthermore, the value of $\gamma_0 > 0$ can be chosen small enough so that
\betagin{equation}\lambdabel{eq:K2}
1 -\widehat{K}_\gamma(\omega) \geq C_4 \big( |\gamma^3 \omega|^2 \wedge 1 \big),
\end{equation}
uniformly over the same values of $\gamma$ and $\omega$, for some $C_4 > 0$.
\end{lemma}
\betagin{proof}
We can get \eqref{eq:K1} from \eqref{eq:B1} as $|\widehat{K}_\gamma(\omega) | \leq \mathtt{var}kappa_{\gamma, 1} \gamma^{3} \mathfrak{s}um_{x \in \gamma \Z^3} \fK(x) = 1$, where we used the definition of the constant $\mathtt{var}kappa_{\gamma, 1}$ in \eqref{eq:K-gamma}. Similarly, from \eqref{eq:B1} we get
\betagin{equation}\lambdabel{eq:K-hat-deriv}
D^k \widehat{K}_\gamma(\omega) = \mathtt{var}kappa_{\gamma, 1} \gamma^3 \mathfrak{s}um_{x\in \gamma \Z^3} (-i \pi \gamma^3 x)^k \fK(x) e^{- i \pi \gamma^3 \omega \cdot x },
\end{equation}
with the notation $x^k = \prod_{j= 1}^3 x_j^{k_j}$. Then we can prove \eqref{eq:K1B} as follows
\betagin{equation*}
|D^{k} \widehat{K}_\gamma(\omega) | \lesssim \mathtt{var}kappa_{\gamma, 1} \gamma^{3 (|k|_\mathfrak{s}one + 1)} \mathfrak{s}um_{x \in \gamma \Z^3} \fK(x) |x|^{|k|_\mathfrak{s}one} \lesssim \gamma^{3 |k|_\mathfrak{s}one},
\end{equation*}
where we estimated the sum by an integral, which is bounded because $\fK$ is bounded and compactly supported. For $|\omega| \geq \gamma^{-3}$, the estimate \eqref{eq:K1A} is a particular case of \eqref{eq:K1B}, and for $|\omega| \leq \gamma^{-3}$ it follows from \eqref{eq:K2.3}.
The proof of \eqref{eq:K3} is more involved. If $|\omega| \leq \gamma^{-3}$, then the bound \eqref{eq:K3} follows from \eqref{eq:K1B}, and we need to prove it for $|\omega| \geq \gamma^{-3}$. For any function $f \colon \gamma \Z^3 \to \R$, we define the discrete Laplacian
\betagin{equation*}
\uDelta_\gamma f (x) := \gamma^{-2} \mathfrak{s}um_{y \mathfrak{s}igmam x} \bigl(f(y) - f(x)\bigr),
\end{equation*}
where the sum runs over $y \in \gamma \Z^3$, which are nearest neighbours of $x$, i.e. $|y - x| = 1$. For a fixed $\omega \in \R^3$ we define the function $\mathbbm{e}_\omega : x \mapsto e^{-i \pi \gamma^3 \omega \cdot x}$, for which we have
\betagin{equation*}
\uDelta_\gamma \mathbbm{e}_\omega(x) = \mathfrak{f}_\gamma(\omega) \mathbbm{e}_\omega(x) \qquad \text{with}\quad \mathfrak{f}_\gamma(\omega) := -2 \gamma^{-2} \mathfrak{s}um_{j = 1}^3 \bigl(1 - \cos(\pi \gamma^3 \omega_j)\bigr).
\end{equation*}
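For completeness, the eigenvalue identity can be verified directly. The following computation is only a sketch and assumes that the sum defining $\uDelta_\gamma$ runs over the six points $y = x \pm h e_j$, $j \in \{1,2,3\}$, for a fixed step $h$ (the displayed formula for $\mathfrak{f}_\gamma$ corresponds to $h = 1$, as in the definition above):
\begin{equation*}
\uDelta_\gamma \mathbbm{e}_\omega(x) = \gamma^{-2} \sum_{j=1}^{3} \Bigl( e^{-i \pi \gamma^3 \omega \cdot (x + h e_j)} + e^{-i \pi \gamma^3 \omega \cdot (x - h e_j)} - 2 e^{-i \pi \gamma^3 \omega \cdot x} \Bigr) = -2 \gamma^{-2} \sum_{j=1}^{3} \bigl( 1 - \cos(\pi \gamma^3 h \omega_j) \bigr)\, \mathbbm{e}_\omega(x),
\end{equation*}
where we used $e^{i\theta} + e^{-i\theta} - 2 = -2(1-\cos\theta)$ with $\theta = \pi \gamma^3 h \omega_j$.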
We note that for $\omega \neq 0$ we have $\mathfrak{f}_\gamma(\omega) \neq 0$, and this identity allows us to write \eqref{eq:B1} as
\betagin{equation*}
\widehat{K}_\gamma (\omega) = \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 1} \gamma^{3}}{\mathfrak{f}_\gamma(\omega)^m} \mathfrak{s}um_{x \in \gamma \Z^3} \fK(x) \bigl(\uDelta^m_\gamma \mathbbm{e}_\omega\bigr)(x),
\end{equation*}
for any integer $m \geq 0$. After a summation by parts we get
\betagin{equation}\lambdabel{eq:K-hat-bound}
\widehat{K}_\gamma (\omega) = \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 1} \gamma^{3}}{\mathfrak{f}_\gamma(\omega)^m} \mathfrak{s}um_{x \in \gamma \Z^3} \bigl(\uDelta_\gamma^m \fK\bigr)(x) \,\mathbbm{e}_\omega(x).
\end{equation}
The function $\uDelta_\gamma^m \fK(x)$ converges uniformly to $\mathbf{D}elta^m \fK$ as $\gamma \to 0$, where $\mathbf{D}elta$ is the three-dimensional Laplace operator (recall that $\fK$ is smooth). Hence, $\gamma^{3} \mathfrak{s}um_{x \in \gamma \Z^3} \bigl(\uDelta_\gamma^m \fK\bigr)(x) \mathbbm{e}_\omega(x)$ can be absolutely estimated by an integral of $|\mathbf{D}elta^m \fK(x)|$. Recalling the scaling \eqref{eq:scalings}, one can see that there is a constant $C > 0$ such that $|\mathfrak{f}_\gamma(\omega)^{-m}| \leq C ( \gamma^{3} |\omega| )^{-2m}$ uniformly over $\gamma > 0$ and $|\omega_j| \leq N+\mathfrak{r}ac12$ for $j \in \{1, 2, 3\}$. Then from \eqref{eq:K-hat-bound} we get the required bound \eqref{eq:K3} for $k = 0$.
For $k \neq 0$, we use \eqref{eq:K-hat-deriv} and similarly to \eqref{eq:K-hat-bound} we get
\betagin{equation*}
D^{k} \widehat{K}_\gamma (\omega) = \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 1} \big(-i \pi \gamma^3\big)^{|k|_{\mathfrak{s}one}}}{\mathfrak{f}_\gamma(\omega)^m} \gamma^{3} \mathfrak{s}um_{x \in \gamma \Z^3} \bigl(\uDelta_\gamma^m \widetilde{\fK}_k\bigr)(x) \,\mathbbm{e}_\omega(x),
\end{equation*}
where $\widetilde{\fK}_k(x) := x^k \fK(x)$. Estimating the sum and the function $\mathfrak{f}_\gamma$ as before, we get \eqref{eq:K3} for any $k$.
Let us proceed to the proof of \eqref{eq:K2}. From \eqref{eq:K3}, we conclude that there exists $\overline{c}>0$ such that for $|\omega| \geq \overline{c} \gamma^{-3}$ we have $|\widehat{K}_\gamma(\omega)| \leq \mathfrak{r}ac12$. Hence, \eqref{eq:K2} holds for such $\omega$. Next, we consider $\omega$ such that $|\omega| < \underline{c} \gamma^{-3}$ for a constant $\underline{c}>0$ to be fixed below. For such $\omega$, \eqref{eq:K2.4} implies the existence of $\underline{C}$ such that
\betagin{equation*}
1 - \widehat{K}_\gamma (\omega) \geq \pi^2 |\omega|^2 \gamma^6 - \underline{C} |\omega|^3 \gamma^9 \geq \bigl( \pi^2 -\underline{C}\, \underline{c}\bigr) |\omega|^2 \gamma^6,
\end{equation*}
which is bounded from below by $\pi^2 |\omega|^2 \gamma^6 / 2$ as soon as we choose $\underline{c} \leq \pi^2 / (2 \underline{C})$.
Finally, in order to treat the case $\underline{c} \gamma^{-3} \leq |\omega | \leq \overline{c} \gamma^{-3}$, we observe that the Riemann sums
\betagin{equation*}
\widehat{K}_\gamma( \gamma^{-3} \omega ) = \mathtt{var}kappa_{\gamma, 1} \gamma^{3} \mathfrak{s}um_{x \in \gamma \Z^3} \fK(x) e^{- i \pi \omega \cdot x}
\end{equation*}
approximate $(\mathscr{F} \fK)(\omega)$ uniformly for $ |\omega| \in [\underline{c}, \overline{c}]$, where $\mathscr{F} \fK$ is the continuous Fourier transform on $\R^3$. On the other hand, $\mathscr{F} \fK$ is the Fourier transform of a probability measure with a density on $\R^3$, and as such, it is continuous and $|(\mathscr{F} \fK)(\omega)|<1$ if $\omega \neq 0$. In particular, $|(\mathscr{F} \fK)(\omega)|$ is bounded away from $1$ uniformly for $ |\omega| \in [\underline{c}, \overline{c}]$. Combining these facts, we see that for $\gamma$ small enough, $\widehat{K}_\gamma(\omega)$ is bounded away from $1$ uniformly in $\underline{c} \gamma^{-3} \leq |\omega | \leq \overline{c} \gamma^{-3}$.
\end{proof}
The next lemma provides estimates on the kernels $G^\gamma$ and $\widetilde{G}^\gamma$, defined in \eqref{eq:From-P-to-G} and \eqref{eq:From-P-to-G-tilde} respectively. One way to extend the function $G^\gamma_t$ off the grid is by its Fourier transform
\betagin{equation*}
(\mathscr{F} G^\gamma_t)(\omega) = \mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr) \mathbf{{1}}_{|\omega|_{\mathfrak{s}igmanfty} \leq N}
\end{equation*}
for all $\omega \in \R^3$, where $\mathscr{F}$ is the Fourier transform on $\R^3$ (this formula follows from \eqref{eq:tildeP}, \eqref{eq:From-P-to-G} and the Poisson summation formula). However, such an extension is not convenient to work with, because its Fourier transform is not smooth; in particular, it does not allow us to obtain the bounds in Lemma~\ref{lem:Pgt} below.
In order to define an extension of $G^\gamma_t$ with a smooth Fourier transform we use the idea of \cite[Sec.~5.1]{HairerMatetski}. Namely, from \eqref{eq:From-P-to-G} and \eqref{eq:kernels-P} we conclude that the function $G^\gamma_t$ solves the equation
\betagin{equation*}
\mathfrak{r}ac{\mathrm{d}}{\mathrm{d} t} G^\gamma_t(x) = \mathbf{D}elta_\gamma G^\gamma_t(x), \qquad x \in \mathtt{L}ambda_{\mathtt{var}epsilon},
\end{equation*}
with the initial condition $G^\gamma_0(x) = \mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{x,0}$ (the latter is defined below \eqref{eq:M-bracket} and $\mathbf{D}elta_\gamma$ is defined in \eqref{eq:Laplacian-discrete}). Then we can write
\betagin{equation*}
G^\gamma_t(x) = \bigl(e^{t \mathbf{D}elta_\gamma} \mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{\cdot,0}\bigr)(x), \qquad x \in \mathtt{L}ambda_{\mathtt{var}epsilon},
\end{equation*}
where $e^{t \mathbf{D}elta_\gamma}$ is the semigroup generated by the bounded operator $\mathbf{D}elta_\gamma$, acting on the space of bounded functions on $\mathtt{L}ambda_{\mathtt{var}epsilon}$. We applied the semigroup to the function $x \mapsto \mathrm{d}eltalta^{(\mathtt{var}epsilon)}_{x,0}$. We take a Schwartz function $\mathtt{var}phii : \R^3 \to \R$, such that $\mathtt{var}phii(0) = 1$ and $\mathtt{var}phii(x) = 0$ for all $x \in \Z^3 \mathfrak{s}etminus \{0\}$, and such that $(\mathscr{F} \mathtt{var}phii)(\omega) = 0$ for $|\omega|_\mathfrak{s}igmanfty \geq \mathfrak{r}ac{3}{4}$.\footnote{We can define $\mathtt{var}phii$ by its Fourier transform $\mathscr{F} \mathtt{var}phii = (\mathscr{F} \fD) * \psi$, where $\fD(x) = \prod_{j=1}^3 \mathfrak{r}ac{\mathfrak{s}igman(\pi x_j)}{\pi x_j}$ is the Dirichlet kernel and $\psi \in \mathcal{C}C^\infty(\R^3)$ is supported in the ball of radius $\mathfrak{r}ac{1}{4}$ with center at the origin and satisfies $\int_{\R^3} \psi(x) \mathrm{d} x = 1$. Then $(\mathscr{F} \mathtt{var}phii)(\omega)$ is smooth and vanishes for $|\omega|_\mathfrak{s}igmanfty \geq \mathfrak{r}ac{3}{4}$, because the Fourier transform of $\fD$ vanishes for $|\omega|_\mathfrak{s}igmanfty > \mathfrak{r}ac{1}{2}$. Moreover, $\mathtt{var}phii$ is Schwartz, because its Fourier transform is Schwartz. Finally, $\mathtt{var}phii$ takes the required values at the integer points, because $\fD$ takes the same values.} We note that the formula \eqref{eq:Laplacian-gamma} makes sense for all functions $f$ from $\mathcal{C}C_b(\R^3)$, which is the space of bounded continuous functions on $\R^3$, equipped with the supremum norm. Then \eqref{eq:Laplacian-discrete} allows to view $\mathbf{D}elta_\gamma$ as a bounded operator acting on $\mathcal{C}C_b(\R^3)$. Setting $\mathtt{var}phii^\mathtt{var}epsilon(x) := \mathtt{var}epsilon^{-3} \mathtt{var}phii(\mathtt{var}epsilon^{-1} x)$ we then define the extension of $G^\gamma_t$ off the grid by
\betagin{equation}\lambdabel{eq:G-extension}
G^\gamma_t(x) := \bigl(e^{t \mathbf{D}elta_\gamma} \mathtt{var}phii^{\mathtt{var}epsilon}\bigr)(x), \qquad x \in \R^3.
\end{equation}
The respective extension of the function $\widetilde{G}^{\gamma}_{t}$ is given by \eqref{eq:From-P-to-G-tilde} for all $x \in \R^3$. The advantage of this definition of the extension is that its Fourier transform is smooth.
It will be convenient to treat these functions on the space-time domain $\R_+ \times \R^3$. For this, we write $G^\gamma(z)$ where $z = (t,x)$ with $t \in \R_+$ and $x \in \R^3$, and we write $D^k G^\gamma(z)$ for the mixed derivative of order $k = (k_0, \ldots, k_3) \in \N_0^4$, where the index $k_0$ corresponds to the time variable $t$ and the other indices $k_i$ correspond to the respective spatial variables. We recall the parabolically rescaled quantities $|k|_\mathfrak{s}$ and $\| z\|_\mathfrak{s}$ defined in Section~\ref{sec:notation}.
\betagin{lemma}\lambdabel{lem:Pgt}
Let the constant $\gamma_0>0$ be as in the statement of Lemma~\ref{lem:Kg}, and let $|t|_a := |t|^{1/2} + a$ for any $a > 0$. Then for every $r \in \N$, $k \in \N_0^4$ with $k_0 \leq r$ and $n \in \N_0$ there is $C > 0$ such that
\betagin{subequations}\lambdabel{eqs:G-bounds}
\betagin{align}\lambdabel{eq:G-bound}
\big| D^k G^\gamma(t,x) \big| &\leq C |t|_\mathtt{var}epsilon^{-3 -|k|_\mathfrak{s} + n} \bigl(\| (t,x) \|_{\mathfrak{s}} + \mathtt{var}epsilon\bigr)^{-n},\\
\lambdabel{eq:tildeG-bound}
\big| D^k \widetilde{G}^\gamma(t,x) \big| &\leq C |t|_\mathfrak{e}^{-3 -|k|_\mathfrak{s} + n} \bigl(\| (t,x) \|_{\mathfrak{s}} + \mathfrak{e}\bigr)^{-n},
\end{align}
\end{subequations}
uniformly over $(t,x) \in \R^4$ with $t > 0$ and $\gamma \in (0, \gamma_0)$.
\end{lemma}
From the bounds \eqref{eqs:G-bounds} we can apply \cite[Lem.~5.4]{HairerMatetski} and get the expansion as described in the beginning of Section~\ref{sec:lift}. For this, we note that the bounds \eqref{eq:tildeG-bound} imply that $\widetilde{G}^\gamma$ is a Schwartz function in $x$, which satisfies
\betagin{equation*}
\big| D^k \widetilde{G}^\gamma(t,x) \big| \leq C \bigl(\| (t,x) \|_{\mathfrak{s}} + \mathfrak{e}\bigr)^{-3 -|k|_\mathfrak{s}}.
\end{equation*}
Moreover, we can smoothly extend $\widetilde{G}^\gamma$ to $\R^4$ in the same way as in \cite[Sec.~5.1]{HairerMatetski}, so that $\widetilde{G}^\gamma(t) \equiv 0$ for $t < 0$.
\betagin{proof}[Proof of Lemma~\ref{lem:Pgt}]
We start by proving \eqref{eq:G-bound}. Using \eqref{eq:tildeP}, the Fourier transform of \eqref{eq:G-extension} equals
\betagin{equation*}
(\mathscr{F} G^\gamma_t)(\omega) = (\mathscr{F} \mathtt{var}phii^{\mathtt{var}epsilon}) (\omega) \mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr),
\end{equation*}
where $(\mathscr{F} \mathtt{var}phii^{\mathtt{var}epsilon}) (\omega) = (\mathscr{F} \mathtt{var}phii) (\mathtt{var}epsilon \omega)$. Then the inverse Fourier transform yields
\betagin{equation}\lambdabel{eq:G-deriv}
D^k G^\gamma (t, x) = \int_{\R^3} F^{\gamma}_t(\omega) e^{2 \pi i \omega \cdot x}\, \mathrm{d} \omega
\end{equation}
with
\betagin{equation}\lambdabel{eq:F-def}
F^{\gamma}_t(\omega) := (\mathscr{F} \mathtt{var}phii) \bigl(\mathtt{var}epsilon \omega\bigr) \Bigl( \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 3}^2}{\alphapha} \Bigr)^{k_0} \bigl(2 \pi i \omega\bigr)^{\bar k} \mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr).
\end{equation}
To bound the integral, we consider two cases: $|\omega| \leq \gamma^{-3}$ and $|\omega| > \gamma^{-3}$.
In the case $|\omega| \leq \gamma^{-3}$, according to \eqref{eq:K2.4} and \eqref{eq:K2}, there exists $c > 0$ such that for all $\gamma \in (0, \gamma_0)$ we have
\betagin{equation*}
|F^{\gamma}_t(\omega)| \lesssim \bigl( |\omega|^2 + \gamma^3 | \omega|^3 \bigr)^{k_0} |\omega^{\bar k}| \mathrm{ex}p \bigl(- c |\omega|^2 t \bigr) \lesssim |\omega|^{|k|_\mathfrak{s}} \mathrm{ex}p \bigl(- c |\omega|^2 t \bigr),
\end{equation*}
where we used the scaling relations \eqref{eq:scalings} and the bound \eqref{eq:c-gamma-2}, and we bounded the Fourier transform of $\mathtt{var}phii$ by a constant. Restricting the domain of integration in \eqref{eq:G-deriv} to $|\omega| \leq \gamma^{-3}$, we estimate the integral by a constant times
\betagin{equation*}
\int_{|\omega| \leq \gamma^{-3}} |\omega|^{|k|_\mathfrak{s}} \mathrm{ex}p \bigl(- c |\omega|^2 t \bigr) \mathrm{d} \omega.
\end{equation*}
If $t \geq \gamma^6$, then we change the variable of integration to $u = \mathfrak{s}qrt t \omega$ and the integral can be estimated by $C t^{- (3 + |k|_\mathfrak{s}) / 2}$. On the other hand, if $t < \gamma^6$, then we change the variable to $u = \gamma^{3} \omega$ and the integral gets bounded by $C \mathfrak{e}^{- 3 - |k|_\mathfrak{s}}$ (recall that $\mathfrak{e} \approx \gamma^3$).
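For instance, in the case $t \geq \gamma^6$ the substitution can be spelled out as follows (this is only a sketch; the last integral is a Gaussian moment and is finite for every fixed value of $|k|_\mathfrak{s}$):
\begin{equation*}
\int_{|\omega| \leq \gamma^{-3}} |\omega|^{|k|_\mathfrak{s}} e^{- c |\omega|^2 t}\, \mathrm{d} \omega = t^{-(3 + |k|_\mathfrak{s})/2} \int_{|u| \leq \sqrt{t}\, \gamma^{-3}} |u|^{|k|_\mathfrak{s}} e^{- c |u|^2}\, \mathrm{d} u \leq t^{-(3 + |k|_\mathfrak{s})/2} \int_{\R^3} |u|^{|k|_\mathfrak{s}} e^{- c |u|^2}\, \mathrm{d} u.
\end{equation*}
In the case $t < \gamma^6$ one substitutes $u = \gamma^{3} \omega$ instead and bounds the exponential by $1$, which produces the factor $\gamma^{-3(3 + |k|_\mathfrak{s})}$.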
Now we will consider the case $|\omega| > \gamma^{-3}$. Since $\mathtt{var}phii$ is Schwartz, the same is true for its Fourier transform, and for any $m \in \N_0$ we have $\bigl| (\mathscr{F} \mathtt{var}phii) \bigl(\mathtt{var}epsilon \omega\bigr) \bigr| \lesssim (1 + \mathtt{var}epsilon |\omega|)^{-m}$. Then, using \eqref{eq:K3} and \eqref{eq:K2}, we get
\betagin{align}\lambdabel{eq:F-bound}
|F^{\gamma}_t(\omega)| &\lesssim (1 + \mathtt{var}epsilon |\omega|)^{-m} \Bigl( \bigl(1 + | \gamma^3 \omega|^{-8} \bigr)/ \alphapha \Bigr)^{k_0} |\omega^{\bar k}| \mathrm{ex}p \bigl(- c t / \alphapha \bigr) \\
&\lesssim \gamma^{-6 k_0} (1 + \mathtt{var}epsilon |\omega|)^{-m} |\omega|^{|\bar k|_\mathfrak{s}one} \mathrm{ex}p \bigl(- c \gamma^{-6} t \bigr). \nonumber
\end{align}
Then the part of the integral \eqref{eq:G-deriv}, with the domain of integration restricted to $|\omega| > \gamma^{-3}$, is estimated by a constant times
\betagin{equation}\lambdabel{eq:integral-bound}
\gamma^{-6 k_0} \mathrm{ex}p \bigl(- c \gamma^{-6} t \bigr) \int_{|\omega | > \gamma^{-3}} (1 + \mathtt{var}epsilon |\omega|)^{-m} |\omega|^{|\bar k|_\mathfrak{s}one} \mathrm{d} \omega \lesssim \gamma^{ -6 k_0} \mathtt{var}epsilon^{-3-|\bar k|_\mathfrak{s}one} \mathrm{ex}p \bigl(- c \gamma^{-6} t \bigr).
\end{equation}
The integral is finite as soon as we take $m > |\bar k|_\mathfrak{s}one + 3$. If $t \leq \gamma^4$, then this expression is bounded by $C \gamma^{ -6 k_0} \mathtt{var}epsilon^{-3-|\bar k|_\mathfrak{s}one} \lesssim C \mathtt{var}epsilon^{-3 - |k|_\mathfrak{s}}$ (recall that $\mathtt{var}epsilon \approx \gamma^{4}$). If $t \geq \gamma^4$, then we bound $\mathrm{ex}p (- c \gamma^{-6} t ) \leq \mathrm{ex}p (- c \gamma^{-2}/2 ) \mathrm{ex}p (- c \gamma^{-6} t/2 )$ and the exponentials can be estimated by rational functions as $\mathrm{ex}p (- c \gamma^{-2}/2 ) \lesssim \gamma^{(3 + |\bar k|_\mathfrak{s}one)/2}$ and $\mathrm{ex}p (- c \gamma^{-6} t/2 ) \lesssim (\gamma^{-6} t)^{-(3 + |k|_\mathfrak{s})/2}$. Then \eqref{eq:integral-bound} is bounded by $C t^{- (3 + |k|_\mathfrak{s}) / 2}$.
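Two elementary facts are used in this step, and we record them for the reader's convenience (this is only a sketch, in the same notation as above). First, the substitution $u = \mathtt{var}epsilon \omega$ gives
\begin{equation*}
\int_{|\omega| > \gamma^{-3}} (1 + \mathtt{var}epsilon |\omega|)^{-m} |\omega|^{|\bar k|_\mathfrak{s}one}\, \mathrm{d} \omega \leq \mathtt{var}epsilon^{-3 - |\bar k|_\mathfrak{s}one} \int_{\R^3} (1 + |u|)^{-m} |u|^{|\bar k|_\mathfrak{s}one}\, \mathrm{d} u,
\end{equation*}
and the last integral is finite precisely when $m > |\bar k|_\mathfrak{s}one + 3$. Second, the estimates of the exponentials by rational functions follow from the elementary bound $\sup_{s > 0} s^{p} e^{-s} < \infty$ for every $p \geq 0$, applied with $s = c \gamma^{-2}/2$ and $s = c \gamma^{-6} t/2$ respectively.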
From the preceding analysis we conclude that
\betagin{equation}\lambdabel{eq:G-bound-1}
\big| D^k G^\gamma(t,x) \big| \leq C \big( |t|^{1/2} + \mathtt{var}epsilon \big)^{-3-|k|_\mathfrak{s}},
\end{equation}
and to complete the proof of \eqref{eq:G-bound} we need to bound this function with respect to $x$. For this, it suffices to consider $|x| \geq t^{1/2} \vee \mathtt{var}epsilon$, since for $|x| \leq t^{1/2} \vee \mathtt{var}epsilon$ the required bound \eqref{eq:G-bound} follows from \eqref{eq:G-bound-1}.
For the function $\mathbbm{e}_x : \omega \mapsto e^{2 \pi i \omega \cdot x}$ we have $\mathbf{D}elta_\omega \mathbbm{e}_x(\omega) = -|2 \pi x|^2 \mathbbm{e}_x(\omega)$, where $\mathbf{D}elta_\omega$ is the Laplace operator with respect to $\omega$. Then the function $e^{2 \pi i \omega \cdot x}$ in \eqref{eq:G-deriv} can be replaced by $(-1)^\ell |2 \pi x|^{-2 \ell} \mathbf{D}elta_\omega^\ell\mathbbm{e}_x(\omega)$ for any integer $\ell \geq 0$. Applying repeated integration by parts we get
\betagin{equation}\lambdabel{eq:tildeG-new}
D^k G^\gamma(t,x) = (-1)^\ell |2 \pi x|^{-2\ell} \int_{\R^3} \mathbf{D}elta_\omega^\ell F^{\gamma}_t(\omega)\, \mathbbm{e}_x(\omega)\, \mathrm{d} \omega.
\end{equation}
There are no boundary terms in the integration by parts, because $F^{\gamma}_t(\omega)$ and its derivatives decay at infinity. The Leibniz rule and the Fa\`{a} di Bruno formula allow us to bound the absolute value of the integrand by a constant multiple of
\betagin{align}
&\max_{|n_1|_{\mathfrak{s}one} + \cdots + |n_4|_{\mathfrak{s}one} = 2\ell} \biggl| \mathtt{var}epsilon^{|n_1|_\mathfrak{s}one} D^{n_1} (\mathscr{F} \mathtt{var}phii) \bigl(\mathtt{var}epsilon \omega\bigr) D^{n_2}\Bigl( \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 3}^2}{\alphapha} \Bigr)^{k_0} \lambdabel{eq:G-spatial-bound}\\
&\hspace{5cm}\times \bigl(D^{n_3}\omega^{\bar k}\bigr) D^{n_4}\mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr)\biggr|, \nonumber
\end{align}
where the maximum is over $n_1, \ldots, n_4 \in \N_0^3$ with $n_3 \leq \bar k$. As before, we need to consider two cases: $|\omega| \leq \gamma^{-3}$ and $|\omega| > \gamma^{-3}$.
In the case $|\omega| \leq \gamma^{-3}$, from \eqref{eqs:K-bounds} we conclude that
\betagin{equation*}
\Bigl|D^{n}\Bigl( \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 3}^2}{\alphapha} \Bigr)\Bigr| \lesssim p_{\gamma, n}(\omega) \qquad \text{with} \quad
p_{\gamma, n}(\omega) =
\betagin{cases}
|\omega|^{2 - |n|_{\mathfrak{s}one}} &\text{for}~ |n|_{\mathfrak{s}one} \leq 2, \\
\gamma^{3 (|n|_{\mathfrak{s}one}-2)} &\text{for}~ |n|_{\mathfrak{s}one} \geq 3,
\end{cases}
\end{equation*}
and the Fa\`{a} di Bruno formula yields
\betagin{align*}
&\Bigl| D^{n}\Bigl( \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 3}^2}{\alphapha} \Bigr)^{k_0} \Bigr| \\
&\qquad \lesssim \Bigl| \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 3}^2}{\alphapha} \Bigr|^{(k_0 - |n|_{\mathfrak{s}one}) \vee 0} \max_{r_1 + \cdots + r_{|n|_\mathfrak{s}one} = n} \prod_{i = 1}^{|n|_\mathfrak{s}one} \Bigl| D^{r_i} \Bigl( \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{\mathtt{var}kappa_{\gamma, 3}^2}{\alphapha} \Bigr) \Bigr| \\
&\qquad \lesssim |\omega|^{2 (k_0 - |n|_{\mathfrak{s}one}) \vee 0} \max_{r_1 + \cdots + r_{|n|_\mathfrak{s}one} = n} \prod_{i = 1}^{|n|_\mathfrak{s}one} p_{\gamma, r_i}(\omega).
\end{align*}
Combining this bound with the Fa\`{a} di Bruno formula and \eqref{eq:K2}, we get
\betagin{align*}
&\Bigl|D^{n}\mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr)\Bigr| \\
&\qquad \lesssim \mathrm{ex}p \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr) \max_{\ell_1 + \cdots + \ell_{|n|_\mathfrak{s}one} = n} \prod_{i = 1}^{|n|_\mathfrak{s}one} \Bigl| D^{\ell_i} \Bigl( \mathtt{var}kappa_{\gamma, 3}^2 \bigl( \widehat{K}_\gamma(\omega) - 1 \bigr) \mathfrak{r}ac{t}{\alphapha} \Bigr) \Bigr| \\
&\qquad \lesssim t^{|n|_{\mathfrak{s}one}} \mathrm{ex}p \bigl(- c |\omega|^2 t \bigr) \max_{\ell_1 + \cdots + \ell_{|n|_\mathfrak{s}one} = n} \prod_{i = 1}^{|n|_\mathfrak{s}one} p_{\gamma, \ell_i}(\omega).
\end{align*}
Using these bounds and $\bigl|D^{n_1}(\mathscr{F} \mathtt{var}phii) \bigl(\mathtt{var}epsilon \omega\bigr)\bigr| \lesssim 1$, the expression inside the maximum in \eqref{eq:G-spatial-bound} is estimated by a constant times
\betagin{equation*}
\mathtt{var}epsilon^{|n_1|_\mathfrak{s}one} |\omega|^{2 (k_0 - |n_2|_{\mathfrak{s}one}) \vee 0 + |\bar k|_{\mathfrak{s}one} - |n_3|_{\mathfrak{s}one}} t^{|n_4|_{\mathfrak{s}one}} F_{n_2, n_4}(\omega) \mathrm{ex}p \bigl(- c |\omega|^2 t \bigr),
\end{equation*}
with
\betagin{equation*}
F_{n_2, n_4}(\omega) := \left(\max_{r_1 + \cdots + r_{|n_2|_\mathfrak{s}one} = n_2} \prod_{i = 1}^{|n_2|_\mathfrak{s}one} p_{\gamma, r_i}(\omega)\right) \left(\max_{\ell_1 + \cdots + \ell_{|n_4|_\mathfrak{s}one} = n_4} \prod_{i = 1}^{|n_4|_\mathfrak{s}one} p_{\gamma, \ell_i}(\omega)\right).
\end{equation*}
Hence, the part of the integral \eqref{eq:tildeG-new}, in which the integration variable is restricted to $|\omega| \leq \gamma^{-3}$, is bounded by a multiple of
\betagin{equation*}
|x|^{-2\ell} \mathtt{var}epsilon^{|n_1|_\mathfrak{s}one} t^{|n_4|_{\mathfrak{s}one}} \int_{|\omega| \leq \gamma^{-3}} |\omega|^{2 (k_0 - |n_2|_{\mathfrak{s}one}) \vee 0 + |\bar k|_{\mathfrak{s}one} - |n_3|_{\mathfrak{s}one}} F_{n_2, n_4}(\omega) \mathrm{ex}p \bigl(- c |\omega|^2 t \bigr) \mathrm{d} \omega.
\end{equation*}
If $t \geq \gamma^6$, then we change the variable to $u = t^{1/2} \omega$ and estimate this expression by $C |x|^{-2\ell} t^{\ell - \mathfrak{r}ac{1}{2}(|k|_\mathfrak{s} + 3)}$. If $t < \gamma^6$, then we change the variable to $u = \gamma^3 \omega$ and estimate the preceding expression by a multiple of $|x|^{-2\ell} \mathtt{var}epsilon^{2\ell - (|k|_\mathfrak{s} + 3)}$.
In the case $|\omega| > \gamma^{-3}$ we use \eqref{eq:K2} and $\bigl|D^{n_1}(\mathscr{F} \mathtt{var}phii) \bigl(\mathtt{var}epsilon \omega\bigr)\bigr| \lesssim (1 + |\mathtt{var}epsilon \omega|)^{-m}$ to bound the function inside the maximum in \eqref{eq:G-spatial-bound} by a constant multiple of
\betagin{align*}
&\mathtt{var}epsilon^{|n_1|_{\mathfrak{s}one}} \gamma^{3 (|n_2|_{\mathfrak{s}one} + |n_4|_{\mathfrak{s}one} )} (1 + |\mathtt{var}epsilon \omega|)^{-m} |\omega|^{|\bar k|_{\mathfrak{s}one} - |n_3|_{\mathfrak{s}one}} \mathrm{ex}p \bigl(- c t / \alphapha \bigr) \mathfrak{r}ac{t^{|n_4|_{\mathfrak{s}one}}}{\alphapha^{k_0 + |n_4|_{\mathfrak{s}one}}} \\
&\qquad = \mathtt{var}epsilon^{|n_1|_{\mathfrak{s}one}} \gamma^{3 ( |n_2|_{\mathfrak{s}one} - |n_4|_{\mathfrak{s}one} - 2 k_0)} (1 + |\mathtt{var}epsilon \omega|)^{-m} |\omega|^{|\bar k|_{\mathfrak{s}one} - |n_3|_{\mathfrak{s}one}} t^{|n_4|_{\mathfrak{s}one}} \mathrm{ex}p \bigl(- c \gamma^{-6} t \bigr).
\end{align*}
Then the part of the integral \eqref{eq:tildeG-new} for $|\omega| > \gamma^{-3}$ is bounded by a multiple of
\betagin{equation*}
|x|^{-2\ell} \mathtt{var}epsilon^{|n_1|_{\mathfrak{s}one}} \gamma^{3 ( |n_2|_{\mathfrak{s}one} - |n_4|_{\mathfrak{s}one} - 2 k_0 )} t^{|n_4|_{\mathfrak{s}one}} \mathrm{ex}p \bigl(- c \gamma^{-6} t \bigr) \int_{|\omega| > \gamma^{-3}} (1 + |\mathtt{var}epsilon \omega|)^{-m} |\omega|^{|\bar k|_{\mathfrak{s}one} - |n_3|_{\mathfrak{s}one}} \mathrm{d} \omega.
\end{equation*}
This integral is finite if we take $m$ sufficiently large. We proceed in the same way as in \eqref{eq:integral-bound}. For $t \leq \gamma^4$ this expression is bounded by $|x|^{-2\ell} \gamma^{3 ( 2\ell - | k|_\mathfrak{s} - 3)}$. For $t \geq \gamma^4$ we estimate the exponential by a rational function and bound the preceding expression by $C |x|^{-2\ell} t^{\ell - \mathfrak{r}ac{1}{2}(|k|_\mathfrak{s} + 3)}$.
Taking $n = 2 \ell$, we have just proved that for $|x| \geq |t|^{1/2} \vee \mathtt{var}epsilon$ we have
\betagin{equation*}
\big| D^k G^\gamma(t,x) \big| \leq C |t|_\mathtt{var}epsilon^{-3 -|k|_\mathfrak{s} + n} |x|^{-n},
\end{equation*}
which together with \eqref{eq:G-bound-1} gives the required bound \eqref{eq:G-bound}.
The bound \eqref{eq:tildeG-bound} can be proved in a similar way. More precisely, from \eqref{eq:G-deriv} we get
\betagin{equation*}
D^k \widetilde{G}^\gamma(t,x) = \int_{\R^3} \widehat{K}_\gamma(\omega) F^{\gamma}_t(\omega) e^{2 \pi i \omega \cdot x} \mathrm{d} \omega.
\end{equation*}
The rest of the proof goes in the same way as before, with the only difference that the role of $\mathscr{F} \mathtt{var}phii$ is now played by $\widehat{K}_\gamma$, which by Lemma~\ref{lem:Kg} satisfies $|D^n \widehat{K}_\gamma(\omega)| \lesssim \mathfrak{e}^{|n|_\mathfrak{s}one} (1 + \mathfrak{e} |\omega|)^{-m}$ for every $m \geq 0$. Hence, all the scalings $\mathtt{var}epsilon$ should be replaced by $\mathfrak{e}$.
\end{proof}
As a corollary of the previous lemma, we can obtain a bound on the periodic heat kernel $\widetilde{P}^\gamma$.
\betagin{lemma}\lambdabel{lem:tilde-P-bound}
In the setting of Lemma~\ref{lem:Pgt} one has the following bound uniformly in $t \geq 0$:
\betagin{equation}\lambdabel{eq:tilde-P-bound}
\|\widetilde{P}^\gamma_t\|_{L^\infty} \leq C |t|_\mathfrak{e}^{-3}.
\end{equation}
\end{lemma}
\betagin{proof}
From \eqref{eq:From-P-to-G} we have $\widetilde{P}^\gamma_t(x) = \mathfrak{s}um_{m \in 2\Z^3} \widetilde{G}^\gamma_t(x + m)$. Using \eqref{eq:tildeG-bound} with any $n > 3$ and estimating the sum by an integral, we get the required bound \eqref{eq:tilde-P-bound}.
\end{proof}
\mathfrak{s}ubsection{Decompositions of discrete kernels}
\lambdabel{sec:decompositions}
Lemma~B.3 in \cite{Martingales} allows us to apply \cite[Lem.~5.4]{HairerMatetski} for any integer $r \geq 2$ and to write the discrete kernel as $G^\gamma = \mathscr{K}^\gamma + \mathscr{R}^\gamma$, where
\betagin{enumerate}
\item $\mathscr{R}^\gamma$ is compactly supported and non-anticipative, i.e. $\mathscr{R}^\gamma(t,x) = 0$ for $t < 0$, and $\| \mathscr{R}^\gamma \|_{\mathcal{C}C^r}$ is bounded uniformly in $\gamma \in (0, 1]$.
\item $\mathscr{K}^\gamma$ is non-anticipative and may be written as $\mathscr{K}^\gamma = \mathfrak{s}um_{n = 0}^{M} K^{\gamma, n}$ with $M = - \lfloor \log_2 \mathtt{var}epsilon \rfloor$, where the functions $\{K^{\gamma, n}\}_{0 \leq n \leq M}$ are defined on $\R^4$ and have the following properties:
\betagin{enumerate}
\item
the function $K^{\gamma, n}(z)$ is supported on the set $\{ z : \|z\|_\mathfrak{s} \leq c 2^{-n}\}$, where the constant $c \geq 1$ is the same for all the functions $\{K^{\gamma, n}\}_{0 \leq n \leq M}$;
\item
for some $C > 0$, independent of $\gamma$, one has
\betagin{equation}\lambdabel{eq:Kn_bound}
|D^k K^{\gamma, n}(z)| \leq C 2^{n(3 + |k|_\mathfrak{s})},
\end{equation}
uniformly in $z$, $k \in \N_0^4$ such that $|k|_\mathfrak{s} \leq r$, and $0 \leq n < M$; for $n = M$ the bound \eqref{eq:Kn_bound} holds only for $k = 0$ (in particular, the function $K^{\gamma, M}$ does not have to be differentiable);
\item
for all $0 \leq n < M$ and $k \in \N_0^4$, such that $|k|_\mathfrak{s} \leq r$, one has
\betagin{equation*}
\int_{D_\mathtt{var}epsilon} z^k K^{\gamma, n}(z)\, \mathrm{d} z = 0;
\end{equation*}
for $n = M$ this identity holds only for $k = 0$.
\end{enumerate}
\end{enumerate}
Throughout the article we use interchangeably the notations $\mathscr{K}^\gamma(z)$ and $\mathscr{K}^\gamma_t(x)$ (and respectively for other kernels) for a point $z = (t,x)$ with $t \in \R$ and $x \in \R^3$.
In the same way we can write $\widetilde{G}^\gamma = \mywidetilde{\mathscr{K}}^\gamma + \mywidetilde{\mathscr{R}}^\gamma$, where the last two functions have the same properties as above, with the only difference that $\mywidetilde{\mathscr{K}}^\gamma$ is decomposed into a sum of $\mywidetilde{M} = - \lfloor \log_2 \mathfrak{e} \rfloor$ functions as $\mywidetilde{\mathscr{K}}^\gamma = \mathfrak{s}um_{n = 0}^{\mywidetilde{M}} \widetilde{K}^{\gamma, n}$.
In particular, this decomposition allows us to bound the convolution of a function with the discrete heat kernel.
\betagin{lemma}\lambdabel{lem:P-convolved-with-f}
For a function $f : \T_{\mathtt{var}epsilon}^3 \to \R$ and for $\eta < 0$ the following bound holds uniformly in $\gamma \in (0,1)$ and locally uniformly in $t \geq 0$:
\betagin{equation*}
\|P^\gamma_t *_\mathtt{var}epsilon f\|_{L^\infty} \leq C |t|_\mathtt{var}epsilon^{\eta} \| f \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta}.
\end{equation*}
\end{lemma}
\betagin{proof}
Using \eqref{eq:From-P-to-G} we write $P^\gamma_t *_\mathtt{var}epsilon f = G^\gamma_t *_\mathtt{var}epsilon f$, where $f$ is extended periodically on the right-hand side. Using the decomposition of $G^\gamma$ as in the beginning of this section, we get $G^\gamma_t *_\mathtt{var}epsilon f = \mathfrak{s}um_{n = 0}^{M} K^{\gamma, n}_t *_\mathtt{var}epsilon f + \mathscr{R}^\gamma_t *_\mathtt{var}epsilon f$. Since $K^{\gamma, n}$ is supported in a ball of radius $c 2^{-n}$, for a fixed $t \geq 0$ we have $K^{\gamma, n}_t \equiv 0$ if $|t|^{1/2} > c 2^{-n}$. Then the preceding sum can be restricted to those $0 \leq n \leq M$ satisfying $|t|^{1/2} \leq c 2^{-n}$. Furthermore, the definition \eqref{eq:eps-norm-1} yields
\betagin{equation*}
\| K^{\gamma, n}_t *_\mathtt{var}epsilon f \|_{L^\infty} \lesssim 2^{-\eta n} \| f \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta}, \qquad\qquad \| \mathscr{R}^\gamma_t *_\mathtt{var}epsilon f \|_{L^\infty} \lesssim \| f \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta}.
\end{equation*}
Then for $\eta < 0$ we have
\betagin{equation*}
\|G^\gamma_t *_\mathtt{var}epsilon f\|_{L^\infty} \lesssim \mathfrak{s}um_{\mathfrak{s}ubstack{0 \leq n \leq M: \\ |t|^{1/2} \leq c 2^{-n}}} 2^{-\eta n} \| f \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta} \lesssim |t|_\mathtt{var}epsilon^{\eta} \| f \|^{(\mathfrak{e})}_{\mathcal{C}C^\eta},
\end{equation*}
as required.
\end{proof}
Using the function $\mathtt{var}rho_{\gamma, \mathrm{d}elta}$ defined in \eqref{eq:rho-gamma}, we introduce new kernels $G^{\gamma, \mathrm{d}elta} := G^\gamma \mathfrak{s}tar_\mathtt{var}epsilon \mathtt{var}rho_{\gamma, \mathrm{d}elta}$ and $\widetilde{G}^{\gamma, \mathrm{d}elta} := \widetilde{G}^\gamma \mathfrak{s}tar_\mathtt{var}epsilon \mathtt{var}rho_{\gamma, \mathrm{d}elta}$. Then the decompositions of the kernels yield $G^{\gamma, \mathrm{d}elta} = \mathscr{K}^{\gamma, \mathrm{d}elta} + \mathscr{R}^{\gamma, \mathrm{d}elta}$ and $\widetilde{G}^{\gamma, \mathrm{d}elta} = \mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta} + \mywidetilde{\mathscr{R}}^{\gamma, \mathrm{d}elta}$, where $\mathscr{K}^{\gamma, \mathrm{d}elta} = \mathfrak{s}um_{n = 0}^{M} K^{\gamma, \mathrm{d}elta, n}$ and $\mywidetilde{\mathscr{K}}^{\gamma, \mathrm{d}elta} = \mathfrak{s}um_{n = 0}^{\mywidetilde{M}} \widetilde{K}^{\gamma, \mathrm{d}elta, n}$, and all the functions have the same properties as described above. Moreover, from the definition \eqref{eq:rho-gamma} we have the bounds
\betagin{equation*}
\bigl|D^k \bigl(K^{\gamma, \mathrm{d}elta, n} - K^{\gamma, n}\bigr)(z)\bigr| \leq C \mathrm{d}elta^\theta 2^{n(3 + \theta + |k|_\mathfrak{s})}, \qquad \bigl|D^k \bigl(\widetilde{K}^{\gamma, \mathrm{d}elta, n} - \widetilde{K}^{\gamma, n}\bigr)(z)\bigr| \leq C \mathrm{d}elta^\theta 2^{n(3 + \theta + |k|_\mathfrak{s})},
\end{equation*}
for any $\theta \in (0,1]$, as well as $\| \mathscr{R}^{\gamma, \mathrm{d}elta} - \mathscr{R}^{\gamma} \|_{\mathcal{C}C^{r-1}} \leq C \mathrm{d}elta^\theta$ and $\|\, \mywidetilde{\mathscr{R}}^{\gamma, \mathrm{d}elta} - \mywidetilde{\mathscr{R}}^{\gamma} \|_{\mathcal{C}C^{r-1}} \leq C \mathrm{d}elta^\theta$.
\end{document} |
\begin{document}
{\Large\bfseries\boldmath\scshape Bad News for Chordal Partitions}
Alex Scott\footnotemark[3] \quad
Paul Seymour\footnotemark[4] \quad
David R. Wood\footnotemark[5]
\DateFootnote
\footnotetext[3]{Mathematical Institute, University of Oxford, Oxford, U.K.\ (\texttt{[email protected]}).}
\footnotetext[4]{Department of Mathematics, Princeton University, New Jersey, U.S.A. (\texttt{[email protected]}). Supported by ONR grant N00014-14-1-0084 and NSF grant DMS-1265563.}
\footnotetext[5]{School of Mathematical Sciences, Monash University, Melbourne, Australia\\ (\texttt{[email protected]}). Supported by the Australian Research Council.}
\emph{Abstract.} Reed and Seymour [1998] asked whether every graph has a partition into induced connected non-empty bipartite subgraphs such that the quotient graph is chordal. If true, this would have significant ramifications for Hadwiger's Conjecture. We prove that the answer is `no'. In fact, we show that the answer is still `no' for several relaxations of the question.
\hrule
\renewcommand{\thefootnote}{\arabic{footnote}}
\section{Introduction}
Hadwiger's Conjecture \citep{Hadwiger43} says that for all $t\geqslant 0$ every graph with no $K_{t+1}$-minor is $t$-colourable. This conjecture is easy for $t\leqslant 3$, is equivalent to the 4-colour theorem for $t=4$, is true for $t=5$ \citep{RST-Comb93}, and is open for $t\geqslant 6$. The best known upper bound on the chromatic number is $O(t\sqrt{\log t})$, independently due to \citet{Kostochka82,Kostochka84} and \citet{Thomason84,Thomason01}. This conjecture is widely considered to be one of the most important open problems in graph theory; see \citep{SeymourHC} for a survey.
Throughout this paper, we employ standard graph-theoretic definitions (see \citep{Diestel4}), with one important exception: we say that a graph $G$ \emph{contains} a graph $H$ if $H$ is isomorphic to an induced subgraph of $G$.
Motivated by Hadwiger's Conjecture, \citet{ReedSeymour-JCTB98} introduced the following definitions\footnote{\citet{ReedSeymour-JCTB98} used different terminology: `chordal decomposition' instead of chordal partition, and `touching pattern' instead of quotient.}. A \emph{vertex-partition}, or simply \emph{partition}, of a graph $G$ is a set $\mathcal{P}$ of non-empty induced subgraphs of $G$ such that each vertex of $G$ is in exactly one element of $\mathcal{P}$. Each element of $\mathcal{P}$ is called a \emph{part}. The \emph{quotient} of $\mathcal{P}$ is the graph, denoted by $G/\mathcal{P}$, with vertex set $\mathcal{P}$ where distinct parts $P,Q\in \mathcal{P}$ are adjacent in $G/\mathcal{P}$ if and only if some vertex in $P$ is adjacent in $G$ to some vertex in $Q$. A partition of $G$ is \emph{connected} if each part is connected. We (almost) only consider connected partitions. In this case, the quotient is the minor of $G$ obtained by contracting each part into a single vertex. A partition is \emph{chordal} if it is connected and the quotient is chordal (that is, contains no induced cycle of length at least four). Every graph has a chordal partition (with a 1-vertex quotient). Chordal partitions are a useful tool when studying graphs $G$ with no $K_{t+1}$ minor. Then for every connected partition $\mathcal{P}$ of $G$, the quotient $G/\mathcal{P}$ contains no $K_{t+1}$, so if in addition $\mathcal{P}$ is chordal, then $G/\mathcal{P}$ is $t$-colourable (since chordal graphs are perfect). \citet{ReedSeymour-JCTB98} asked the following question (repeated in \citep{KawaReed08,SeymourHC}).
\begin{ques}
\label{ChordalPartition}
Does every graph have a chordal partition such that each part is bipartite?
\end{ques}
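As a toy illustration of the objects in \cref{ChordalPartition} (included here only for concreteness), consider the cycle $C_6$ with vertices $v_1,\dots,v_6$ in this cyclic order. The partition into the three parts induced by $\{v_1,v_2\}$, $\{v_3,v_4\}$ and $\{v_5,v_6\}$ is connected, each part is a single edge and hence bipartite, and the quotient is a triangle, which is chordal. \cref{ChordalPartition} asks whether such a partition exists for every graph.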
If true, this would imply that every graph with no $K_{t+1}$-minor is $2t$-colourable, by taking the product of the $t$-colouring of the quotient with the 2-colouring of each part. This would be a major breakthrough for Hadwiger's Conjecture. The purpose of this note is to answer Reed and Seymour's question in the negative. In fact, we show the following stronger result.
\begin{thm}
\label{NoChordal}
For every integer $k\geqslant 1$ there is a graph $G$, such that for every chordal partition $\mathcal{P}$ of $G$, some part of $\mathcal{P}$ contains $K_k$.
Moreover, for every integer $t\geqslant 4$ there is a graph $G$ with tree-width at most $t-1$ (and thus with no $K_{t+1}$-minor) such that
for every chordal partition $\mathcal{P}$ of $G$, some part of $\mathcal{P}$ contains a complete graph on at least $\floor{(3t-11)^{1/3}}$ vertices.
\end{thm}
\cref{NoChordal} says that it is not possible to find a chordal partition in which each part has bounded chromatic number. What if we work with a larger class of partitions? The following natural class arises. A partition of a graph is \emph{perfect} if it is connected and the quotient graph is perfect. If $\mathcal{P}$ is a perfect partition of a $K_{t+1}$-minor free graph $G$, then $G/\mathcal{P}$ contains no $K_{t+1}$ and is therefore $t$-colourable. So if every part of $\mathcal{P}$ has small chromatic number, then we can control the chromatic number of $G$. We are led to the following relaxation of \cref{ChordalPartition}: does every graph have a perfect partition in which every part has bounded chromatic number? Unfortunately, this is not the case.
\begin{thm}
\label{NoPerfect}
For every integer $k\geqslant 1$ there is a graph $G$, such that for every perfect partition $\mathcal{P}$ of $G$, some part of $\mathcal{P}$ contains $K_k$.
Moreover, for every integer $t\geqslant 6 $ there is a graph $G$ with tree-width at most $t-1$ (and thus with no $K_{t+1}$-minor),
such that for every perfect partition $\mathcal{P}$ of $G$, some part of $\mathcal{P}$ contains a complete graph on at least $\floor{(\frac{3}{2}t-8)^{1/3}}$
vertices.
\end{thm}
\cref{NoChordal,NoPerfect} say that it is hopeless to improve on the $O(t\sqrt{\log t})$ bound for the chromatic number of $K_t$-minor-free graphs using chordal or perfect partitions directly. Indeed, the best possible upper bound on the chromatic number using the above approach would be $O(t^{4/3})$ (since the quotient is $t$-colourable, and the best possible upper bound on the chromatic number of the parts would be $O(t^{1/3})$).
What about using an even larger class of partitions? Chordal graphs contain no 4-cycle, and perfect graphs contain no 5-cycle. These are the only properties of chordal and perfect graphs used in the proofs of \cref{NoChordal,NoPerfect}. Thus the following result is a qualitative generalisation of both \cref{NoChordal,NoPerfect}. It says that there is no hereditary class of graphs for which the above colouring strategy works.
\begin{thm}
\label{General}
For every integer $k\geqslant 1$ and graph $H$, there is a graph $G$, such that for every connected partition $\mathcal{P}$ of $G$, either some part of $\mathcal{P}$ contains $K_k$ or the quotient $G/\mathcal{P}$ contains $H$.
\end{thm}
Before presenting the proofs, we mention some applications of chordal partitions and related topics. Chordal partitions have proven to be a useful tool in the study of the following topics for $K_{t+1}$-minor-free graphs: cops and robbers pursuit games \citep{Andreae86}, fractional colouring \citep{ReedSeymour-JCTB98,KawaReed08}, generalised colouring numbers \citep{HOQRS17}, and defective and clustered colouring \citep{vdHW}. These papers show that every graph with no $K_{t+1}$ minor has a chordal partition in which each part has desirable properties. For example, in \citep{ReedSeymour-JCTB98}, each part has a stable set on at least half the vertices, and in \citep{vdHW}, each part has maximum degree $O(t)$ and is 2-colourable with monochromatic components of size $O(t)$.
Several papers \citep{DMW05,KP-DM08,Wood-JGT06} have shown that graphs with tree-width $k$ have chordal partitions in which the quotient is a tree, and each part induces a subgraph with tree-width $k-1$, amongst other properties. Such partitions have been used for queue and track layouts \citep{DMW05} and non-repetitive graph colouring \citep{KP-DM08}. A \emph{tree partition} is a (not necessarily connected) partition of a graph whose quotient is a tree; these have also been widely studied \citep{Edenbrandt86,Halin91,Seese85,Wood09,DO95,DO96,BodEng-JAlg97,Bodlaender-DMTCS99}. Here the goal is to have few vertices in each part of the partition. For example, a referee of \citep{DO95} proved that every graph with tree-width $k$ and maximum degree $\Delta$ has a tree partition with $O(k\Delta)$ vertices in each part.
\section{Chordal Partitions: Proof of \cref{NoChordal}}
Let $\mathcal{P}=\{P_1,\dots,P_m\}$ be a partition of a graph $G$, and let $X$ be an induced subgraph of $G$. Then the \emph{restriction} of $\mathcal{P}$ to $X$ is the partition of $X$ defined by $$\RP{X} := \{G[V(P_i)\cap V(X)]:i\in\{1,\dots,m\},V(P_i)\cap V(X)\neq\emptyset\}.$$ Note that the restriction of a connected partition to a subgraph need not be connected. The following lemma gives a scenario where the restriction is connected.
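To see that connectivity can indeed fail, consider the following small example (added only for illustration): if $G$ is the path $v_1v_2v_3$, $\mathcal{P}=\{G\}$ is the trivial partition, and $X=G-v_2$, then $\RP{X}$ consists of the single part with vertex set $\{v_1,v_3\}$ and no edges, which is disconnected. Note that the neighbourhood of the component $\{v_2\}$ of $G-V(X)$ is $\{v_1,v_3\}$, which is not a clique, so the hypothesis of the lemma below excludes exactly this behaviour.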
\begin{lem}
\label{InducedPartition}
Let $X$ be an induced subgraph of a graph $G$, such that the neighbourhood of each component of $G-V(X)$ is a clique (in $X$).
Let $\mathcal{P}$ be a connected partition of $G$ with quotient $G/\mathcal{P}$. Then $\RP{X}$ is a connected partition of $X$,
and the quotient of $\RP{X}$ is the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $X$.
\end{lem}
\begin{proof}
We first prove that for every connected subgraph $G'$ of $G$, if $V(G')\cap V(X)\neq\emptyset$, then $G'[V(G')\cap V(X)]$ is connected. Consider non-empty sets $A,B$ that partition $V(G')\cap V(X)$. Let $P$ be a shortest path from $A$ to $B$ in $G'$. Then no internal vertex of $P$ is in $V(X)$ (otherwise a proper subpath of $P$ would be a shorter path from $A$ to $B$). If $P$ has an internal vertex, then its whole interior belongs to one component $C$ of $G-V(X)$, implying that the endpoints of $P$ are in the neighbourhood of $C$ and are therefore adjacent, contradicting the minimality of $P$. Thus $P$ has no interior, so there is an edge of $G'$ between $A$ and $B$. Since $A$ and $B$ were arbitrary, $G'[V(G')\cap V(X)]$ is connected.
Apply this observation with each part of $\mathcal{P}$ as $G'$. It follows that $\RP{X}$ is a connected partition of $X$. Moreover, if adjacent parts $P$ and $Q$ of $\mathcal{P}$ both intersect $X$, then by the above observation with $G'=G[V(P)\cup V(Q)]$, there is an edge between $V(P)\cap V(X)$ and $V(Q)\cap V(X)$. Conversely, if there is an edge between $V(P)\cap V(X)$ and $V(Q)\cap V(X)$ for some parts $P$ and $Q$ of $\mathcal{P}$, then $PQ$ is an edge of $G/\mathcal{P}$. Thus the quotient of $\RP{X}$ is the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $X$.
\end{proof}
The next lemma with $r=1$ implies \cref{NoChordal}. To obtain the second part of \cref{NoChordal} apply \cref{ChordalWork} with $k=\floor{(3t-11)^{1/3}}$, in which case $s(k,1)\leqslant t$.
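For completeness, here is the short computation behind the claim $s(k,1)\leqslant t$; it is a sketch using only that $k\geqslant 1$ and $k^3\leqslant 3t-11$:
$$s(k,1)=\tfrac13(k^3-k)+4\leqslant \tfrac13\bigl((3t-11)-k\bigr)+4=t+\tfrac{1-k}{3}\leqslant t.$$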
\begin{lem}
\label{ChordalWork}
For all integers $k\geqslant 1$ and $r\geqslant 1$, if
$$s(k,r):=\tfrac13(k^3-k) +(r-1)k + 4 , $$
then there is a graph $G(k,r)$ with tree-width at most $s(k,r)-1$ (and thus with no $K_{s(k,r)+1}$-minor),
such that for every chordal partition $\mathcal{P}$ of $G$, either:\\
(1) $G$ contains a $K_{kr}$ subgraph intersecting each of $r$ distinct parts of $\mathcal{P}$ in $k$ vertices, or\\
(2) some part of $\mathcal{P}$ contains $K_{k+1}$.
\end{lem}
\begin{proof}
Note that $s(k,r)$ is the upper bound on the size of the bags in the tree-decomposition of $G(k,r)$ that we construct. We proceed by induction on $k$ and then $r$.
When $k=r=1$, the graph with one vertex satisfies (1) for every chordal partition and has a tree-decomposition with one bag of size $1<s(1,1)$.
First we prove that the $(k,1)$ and $(k,r)$ cases imply the $(k,r+1)$ case. Let $A:=G(k,1)$ and $B:=G(k,r)$. Let $G$ be obtained from $A$ as follows. For each $k$-clique $C$ in $A$, add a copy $B_C$ of $B$ (disjoint from the current graph), where $C$ is complete to $B_C$. We claim that $G$ has the claimed properties of $G(k,r+1)$.
By assumption, in every chordal partition of $A$ some part contains $K_k$, $A$ has a tree-decomposition with bags of size at most $s(k,1)$, and for each $k$-clique $C$ in $A$, there is a tree-decomposition of $B_C$ with bags of size at most $s(k,r)$. For every tree-decomposition of a graph and for each clique $C$, there is a bag containing $C$. Add an edge between the node corresponding to a bag containing $C$ in the tree-decomposition of $A$ and any node of the tree in the tree-decomposition of $B_C$, and add $C$ to every bag of the tree-decomposition of $B_C$. We obtain a tree-decomposition of $G$ with bags of size at most $\max\{s(k,1),s(k,r)+k\} = s(k,r)+k = s(k,r+1)$, as desired.
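For completeness, the bag-size identity used in the last step is immediate from the definition of $s$:
$$s(k,r)+k=\tfrac13(k^3-k)+(r-1)k+4+k=\tfrac13(k^3-k)+rk+4=s(k,r+1),$$
and $s(k,r)+k-s(k,1)=rk\geqslant 0$, so the maximum above is indeed $s(k,r)+k$.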
Consider a chordal partition $\mathcal{P}$ of $G$. By \cref{InducedPartition}, $\RP{A}$ is a connected partition of $A$, and the quotient of $\RP{A}$ equals the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $A$. Since $G/\mathcal{P}$ is chordal, the quotient of $\RP{A}$ is chordal. Since $A=G(k,1)$, by induction, $\RP{A}$ satisfies (1) with $r=1$ or (2). If outcome (2) holds, then some part of $\mathcal{P}$ contains $K_{k+1}$ and outcome (2) holds for $G$.
Now assume that $\RP{A}$ satisfies outcome (1) with $r=1$; that is, some part $P$ of $\mathcal{P}$ contains some $k$-clique $C$ of $A$. If some vertex of $B_C$ is in $P$, then $P$ contains $K_{k+1}$ and outcome (2) holds for $G$. Now assume that no vertex of $B_C$ is in $P$. Since each part of $\mathcal{P}$ is connected, the parts of $\mathcal{P}$ that intersect $B_C$ do not intersect $G-V(B_C)$. Thus, $\RP{B_C}$ is a connected partition of $B_C$, and the quotient of $\RP{B_C}$ equals the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $B_C$, and is therefore chordal. Since $B=G(k,r)$, by induction, $\RP{B_C}$ satisfies (1) or (2). If outcome (2) holds, then the same outcome holds for $G$. Now assume that outcome (1) holds for $B_C$. Thus $B_C$ contains a $K_{kr}$ subgraph intersecting each of $r$ distinct parts of $\mathcal{P}$ in $k$ vertices. None of these parts are $P$. Since $C$ is complete to $B_C$, $G$ contains a $K_{k(r+1)}$ subgraph intersecting each of $r+1$ distinct parts of $\mathcal{P}$ in $k$ vertices, and outcome (1) holds for $G$. Hence $G$ has the claimed properties of $G(k,r+1)$.
It remains to prove the $(k,1)$ case for $k\geqslant 2$. By induction, we may assume the $(k-1,k+1)$ case. Let $A:=G(k-1,k+1)$. As illustrated in \cref{FirstConstruction}, let $G$ be obtained from $A$ as follows: for each set $\mathcal{C}=\{C_1,\dots,C_{k+1}\}$ of pairwise-disjoint $(k-1)$-cliques in $A$, whose union induces $K_{(k-1)(k+1)}$, add a $K_{k+1}$ subgraph $B_{\mathcal{C}}$ (disjoint from the current graph), whose $i$-th vertex is adjacent to every vertex in $C_i$. We claim that $G$ has the claimed properties of $G(k,1)$.
\begin{figure}
\caption{Construction of $G(k,1)$ in \cref{ChordalWork}.}
\label{FirstConstruction}
\end{figure}
By assumption, $A$ has a tree-decomposition with bags of size at most $s(k-1,k+1)$. For each set $\mathcal{C}=\{C_1,\dots,C_{k+1}\}$ of pairwise-disjoint $(k-1)$-cliques in $A$, whose union induces $K_{(k-1)(k+1)}$, choose a node $x$ corresponding to a bag of the tree-decomposition of $A$ containing $C_1\cup\dots\cup C_{k+1}$, and add a new node adjacent to $x$ with corresponding bag $V(B_{\mathcal{C}})\cup C_1\cup\dots\cup C_{k+1}$. We obtain a tree-decomposition of $G$ with bags of size at most $\max\{s(k-1,k+1),(k+1)k\} = s(k-1,k+1)=s(k,1)$, as desired.
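Again for completeness, the two facts about $s$ used in this paragraph can be checked directly (assuming only $k\geqslant 2$). The new bag $V(B_{\mathcal{C}})\cup C_1\cup\dots\cup C_{k+1}$ has $(k+1)+(k+1)(k-1)=(k+1)k$ vertices, and
$$s(k-1,k+1)-s(k,1)=k(k-1)+\tfrac13\bigl((k-1)^3-(k-1)-k^3+k\bigr)=k(k-1)-k(k-1)=0,$$
while $(k+1)k\leqslant s(k,1)$ is equivalent to $k(k+1)(4-k)\leqslant 12$, which holds for every $k\geqslant 2$ (with equality for $k\in\{2,3\}$).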
Consider a chordal partition $\mathcal{P}$ of $G$. By \cref{InducedPartition}, $\RP{A}$ is a connected partition of $A$ and the quotient of $\RP{A}$ equals the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $A$, and is therefore chordal. Since $A=G(k-1,k+1)$, by induction, $\RP{A}$ satisfies (1) or (2). If outcome (2) holds for $\RP{A}$, then some part of $\mathcal{P}$ contains $K_{k}$ and outcome (1) holds for $G$ (with $r=1$). Now assume that outcome (1) holds for $\RP{A}$. Thus $A$ contains a $K_{(k-1)(k+1)}$ subgraph intersecting each of $k+1$ distinct parts $P_1,\dots,P_{k+1}$ of $\mathcal{P}$ in $k-1$ vertices. Let $C_i$ be the corresponding $(k-1)$-clique in $P_i$. Let $\mathcal{C}:=\{C_1,\dots,C_{k+1}\}$ and $\widehat{\mathcal{C}}:= C_1\cup\dots\cup C_{k+1}$.
If for some $i\in\{1,\dots,k+1\}$, the neighbour of $C_i$ in $B_{\mathcal{C}}$ is in $P_i$,
then $P_i$ contains $K_{k}$ and outcome (1) holds for $G$.
Now assume that for each $i\in\{1,\dots,k+1\}$, the neighbour of $C_i$ in $B_{\mathcal{C}}$ is not in $P_i$.
Suppose that some vertex $x$ in $B_{\mathcal{C}}$ is in $P_i$ for some $i\in\{1,\dots,k+1\}$.
Then, since $P_i$ is connected and contains both $C_i$ and $x$, there is a path in $G$ between $C_i$ and $x$ with all its vertices in $P_i$; in particular, this path avoids the neighbourhood of $C_i$ in $B_{\mathcal{C}}$, which is not in $P_i$.
Every such path intersects $\widehat{\mathcal{C}}\setminus C_i$, but none of these vertices are in $P_i$, a contradiction.
Thus, no vertex in $B_{\mathcal{C}}$ is in $P_1\cup\dots\cup P_{k+1}$.
If $B_{\mathcal{C}}$ is contained in one part, then outcome (2) holds.
Now assume that there are vertices $x$ and $y$ of $B_{\mathcal{C}}$ in distinct parts $Q$ and $R$ of $\mathcal{P}$.
Then $x$ is adjacent to every vertex in $C_i$ and $y$ is adjacent to every vertex in $C_j$, for some distinct $i,j\in\{1,\dots,k+1\}$.
Observe that $(Q,R,P_j,P_i)$ is a 4-cycle in $G/\mathcal{P}$.
Moreover, there is no $QP_j$ edge in $G/\mathcal{P}$
because $(\widehat{\mathcal{C}}\setminus C_j)\cup\{y\}$ separates $x\in Q$ from $C_j\subseteq P_j$,
and none of these vertices are in $Q\cup P_j$.
Similarly, there is no $RP_i$ edge in $G/\mathcal{P}$.
Hence $(Q,R,P_j,P_i)$ is an induced 4-cycle in $G/\mathcal{P}$,
which contradicts the assumption that $\mathcal{P}$ is a chordal partition.
Therefore $G$ has the claimed properties of $G(k,1)$.
\end{proof}
\section{Perfect Partitions: Proof of \cref{NoPerfect}}
The following lemma with $r=1$ implies \cref{NoPerfect}.
To obtain the second part of \cref{NoPerfect} apply \cref{NoPerfectLemma} with $k=\floor{(\frac{3}{2}t-8)^{1/3}}$, in which case $t(k,1)\leqslant t$.
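As in the chordal case, the claim $t(k,1)\leqslant t$ is a one-line computation (a sketch using only $k\geqslant 1$ and $k^3\leqslant\frac32 t-8$):
$$t(k,1)=\tfrac23(k^3-k)+6\leqslant \tfrac23\bigl(\tfrac32 t-8-k\bigr)+6=t+\tfrac{2(1-k)}{3}\leqslant t.$$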
The proof is very similar to \cref{ChordalWork} except that we force $C_5$ in the quotient instead of $C_4$.
\begin{lem}
\label{NoPerfectLemma}
For all integers $k\geqslant 1$ and $r\geqslant 1$, if
$$t(k,r):=\tfrac23 (k^3-k) + (r-1)k + 6, $$
then there is a graph $G(k,r)$ with tree-width at most $t(k,r)-1$ (and thus with no $K_{t(k,r)+1}$-minor),
such that for every perfect partition $\mathcal{P}$ of $G$, either:\\
(1) $G$ contains a $K_{kr}$ subgraph intersecting each of $r$ distinct parts of $\mathcal{P}$ in $k$ vertices, or\\
(2) some part of $\mathcal{P}$ contains $K_{k+1}$.
\end{lem}
\begin{proof}
Note that $t(k,r)$ is the upper bound on the size of the bags in the tree-decomposition of $G(k,r)$ that we construct.
We proceed by induction on $k$ and then $r$.
For the base case, the graph with one vertex satisfies (1) for $k=r=1$ and has a tree-decomposition with one bag of size $1< t(1,1)$. The proof that the $(k,1)$ and $(k,r)$ cases imply the $(k,r+1)$ case is identical to the analogous step in the proof of \cref{ChordalWork}, so we omit it.
It remains to prove the $(k,1)$ case for $k\geqslant 2$. By induction, we may assume the $(k-1,2k+1)$ case.
Let $A:=G(k-1,2k+1)$. Let $B$ be the graph consisting of two copies of $K_{k+1}$ with one vertex in common. Note that $|V(B)|=2k+1$. As illustrated in \cref{SecondConstruction}, let $G$ be obtained from $A$ as follows: for each set $\mathcal{C}=\{C_1,\dots,C_{2k+1}\}$ of pairwise-disjoint $(k-1)$-cliques in $A$, whose union induces $K_{(k-1)(2k+1)}$, add a subgraph $B_{\mathcal{C}}$ isomorphic to $B$ (disjoint from the current graph), whose $i$-th vertex is adjacent to every vertex in $C_i$. We claim that $G$ has the claimed properties of $G(k,1)$.
\begin{figure}
\caption{Construction of $G(k,1)$ in \cref{NoPerfectLemma}.}
\label{SecondConstruction}
\end{figure}
By assumption, $A$ has a tree-decomposition with bags of size at most $t(k-1,2k+1)$. For each set $\mathcal{C}=\{C_1,\dots,C_{2k+1}\}$ of pairwise-disjoint $(k-1)$-cliques in $A$, whose union induces $K_{(k-1)(2k+1)}$, choose a node $x$ corresponding to a bag containing $C_1\cup\dots\cup C_{2k+1}$ in the tree-decomposition of $A$, and add a new node adjacent to $x$ with corresponding bag $V(B_{\mathcal{C}})\cup C_1\cup\dots\cup C_{2k+1}$. We obtain a tree-decomposition of $G$ with bags of size at most $\max\{t(k-1,2k+1),(2k+1)k\} = t(k-1,2k+1)=t(k,1)$, as desired.
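The final equality can be checked directly:
$$t(k,1)-t(k-1,2k+1)=\tfrac23\left[(k^3-k)-\left((k-1)^3-(k-1)\right)\right]-2k(k-1)=\tfrac23(3k^2-3k)-2k(k-1)=0,$$
and $(2k+1)k\leqslant t(k-1,2k+1)$ for all $k\geqslant 2$, so the maximum above is indeed attained by $t(k-1,2k+1)$.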
Consider a perfect partition $\mathcal{P}$ of $G$. By \cref{InducedPartition}, $\RP{A}$ is a connected partition of $A$ and the quotient of $\RP{A}$ equals the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $A$, and is therefore perfect. Recall that $A=G(k-1,2k+1)$. If outcome (2) holds for $\RP{A}$, then some part of $\mathcal{P}$ contains $K_{k}$ and outcome (1) holds for $G$ (with $r=1$). Now assume that outcome (1) holds for $\RP{A}$. Thus $A$ contains a $K_{(k-1)(2k+1)}$ subgraph intersecting each of $2k+1$ distinct parts $P_1,\dots,P_{2k+1}$ of $\mathcal{P}$ in $k-1$ vertices. Let $C_i$ be the corresponding $(k-1)$-clique in $P_i$. Let $\mathcal{C}:=\{C_1,\dots,C_{2k+1}\}$ and $\widehat{\mathcal{C}}:= C_1\cup\dots\cup C_{2k+1}$.
If for some $i\in\{1,\dots,2k+1\}$, the neighbour of $C_i$ in $B_{\mathcal{C}}$ is in $P_i$,
then $P_i$ contains a $K_{k}$ subgraph and outcome (1) holds for $G$.
Now assume that for each $i\in\{1,\dots,2k+1\}$, the neighbour of $C_i$ in $B_{\mathcal{C}}$ is not in $P_i$.
Suppose that some vertex $x$ in $B_{\mathcal{C}}$ is in $P_i$ for some $i\in\{1,\dots,2k+1\}$.
Then since $P_i$ is connected, there is a path in $G$ between $C_i$ and $x$ avoiding the neighbourhood of $C_i$ in $B_{\mathcal{C}}$.
Every such path intersects $\widehat{\mathcal{C}}\setminus C_i$, but none of these vertices are in $P_i$.
Thus, no vertex in $B_{\mathcal{C}}$ is in $P_1\cup\dots\cup P_{2k+1}$.
By construction, $B_{\mathcal{C}}$ consists of two $(k+1)$-cliques $B^1$ and $B^2$, intersecting in one vertex $v$.
Say $v$ is in part $P$ of $\mathcal{P}$.
If $B^1 \subseteq V(P)$, then outcome (2) holds.
Now assume that there is a vertex $x$ of $B^1$ in some part $Q$ distinct from $P$.
Similarly, assume that there is a vertex $y$ of $B^2$ in some part $R$ distinct from $P$.
Now, $Q\neq R$, since $\widehat{\mathcal{C}}\cup\{v\}$ separates $x$ and $y$,
and none of these vertices are in $Q\cup R$.
By construction, $x$ is adjacent to every vertex in $C_i$ and $y$ is adjacent to every vertex in $C_j$, for some distinct $i,j\in\{1,\dots,2k+1\}$.
Observe that $(Q,P,R,P_j,P_i)$ is a 5-cycle in $G/\mathcal{P}$.
Moreover, there is no $QP_j$ edge in $G/\mathcal{P}$
because $(\widehat{\mathcal{C}}\setminus C_j)\cup\{y\}$
separates $x\in Q$ from $C_j\subseteq P_j$, and none of these vertices are in $Q\cup P_j$.
Similarly, there is no $RP_i$ edge in $G/\mathcal{P}$.
There is no $PP_j$ edge in $G/\mathcal{P}$
because $(\widehat{\mathcal{C}}\setminus C_j)\cup\{y\}$
separates $v\in P$ from $C_j\subseteq P_j$, and none of these vertices are in $P\cup P_j$.
Similarly, there is no $PP_i$ edge in $G/\mathcal{P}$.
Hence $(Q,P,R,P_j,P_i)$ is an induced 5-cycle in $G/\mathcal{P}$,
which contradicts the assumption that $\mathcal{P}$ is a perfect partition.
Therefore $G$ has the claimed properties of $G(k,1)$.
\end{proof}
\section{General Partitions: Proof of \cref{General}}
To prove \cref{General} we show the following stronger result, in which $G$ only depends on $|V(H)|$.
\begin{lem}
\label{GeneralGeneral}
For all integers $k,t,r\geqslant 1$, there is a graph $G=G(k,t,r)$, such that for every connected partition $\mathcal{P}$ of $G$ either:\\
(1) $G$ contains a $K_{kr}$ subgraph intersecting each of $r$ distinct parts of $\mathcal{P}$ in $k$ vertices, or\\
(2) $G/\mathcal{P}$ contains every $t$-vertex graph, or\\
(3) some part of $\mathcal{P}$ contains $K_{k+1}$.
\end{lem}
\begin{proof}
We proceed by induction on $k+t$ and then $r$. We first deal with two base cases.
First suppose that $t=1$. Let $G:=G(k,1,r):=K_1$. Then for every partition $\mathcal{P}$ of $G$, the quotient $G/\mathcal{P}$ has at least one vertex, and (2) holds. Now assume that $t\geqslant 2$.
Now suppose that $k=1$. Let $G:=G(1,t,r):=K_r$. Then for every connected partition $\mathcal{P}$ of $G$, if some part of $\mathcal{P}$ contains an edge, then (3) holds; otherwise each part is a single vertex, and (1) holds.
Now assume that $k\geqslant 2$.
The proof that the $(k,t,1)$ and $(k,t,r)$ cases imply the $(k,t,r+1)$ case is identical to the analogous step in the proof of \cref{ChordalWork}, so we omit it.
It remains to prove the $(k,t,1)$ case for $k\geqslant 2$ and $t\geqslant 2$.
By induction, we may assume the $(k,t-1,1)$ case and the $(k-1,t,r)$ case for all $r$.
Let $B:=G(k,t-1,1)$ and $n:=|V(B)|$.
Let $S^1,\dots,S^{2^n}$ be the distinct subsets of $V(B)$.
Let $A:=G(k-1,t,2^n)$.
Let $G$ be obtained from $A$ as follows:
for each set $\mathcal{C}=\{C_1,\dots,C_{2^n}\}$ of pairwise-disjoint $(k-1)$-cliques in $A$, whose union induces $K_{(k-1)2^n}$,
add a copy $B_{\mathcal{C}}$ of $B$ (disjoint from the current graph), where $C_i$ is complete to $S_{\mathcal{C}}^i$ for all $i\in\{1,\dots,2^n\}$,
where we write $S_{\mathcal{C}}^i$ for the subset of $V(B_{\mathcal{C}})$ corresponding to $S^i$.
We claim that $G$ has the claimed properties of $G(k,t,1)$.
Consider a connected partition $\mathcal{P}$ of $G$. By \cref{InducedPartition}, $\RP{A}$ is a connected partition of $A$, and the quotient of $\RP{A}$ equals the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $A$. Recall that $A=G(k-1,t,2^n)$. If $\RP{A}$ satisfies outcome (2), then the quotient of $\RP{A}$ contains every $t$-vertex graph and outcome (2) is satisfied for $G$. If outcome (3) holds for $\RP{A}$, then some part of $\mathcal{P}$ contains $K_{k}$ and outcome (1) holds for $G$ (with $r=1$). Now assume that outcome (1) holds for $\RP{A}$. Thus $A$ contains a $K_{(k-1)2^n}$ subgraph intersecting each of $2^n$ distinct parts $P_1,\dots,P_{2^n}$ of $\mathcal{P}$ in $k-1$ vertices. Let $C_i$ be the corresponding $(k-1)$-clique in $P_i$. Let $\mathcal{C}:=\{C_1,\dots,C_{2^n}\}$.
If for some $i\in\{1,\dots,2^n\}$, some neighbour of $C_i$ in $B_{\mathcal{C}}$ is in $P_i$,
then $P_i$ contains $K_k$ and outcome (1) holds for $G$.
Now assume that for each $i\in\{1,\dots,2^n\}$, no neighbour of $C_i$ in $B_{\mathcal{C}}$ is in $P_i$.
Suppose that some vertex $x$ in $B_{\mathcal{C}}$ is in $P_i$ for some $i\in\{1,\dots,2^n\}$.
Then since $P_i$ is connected, $G$ contains a path between $C_i$ and $x$ avoiding the neighbourhood of $C_i$ in $B_{\mathcal{C}}$.
Every such path intersects $C_1\cup\dots\cup C_{i-1}\cup C_{i+1}\cup \dots\cup C_{2^n}$, but none of these vertices are in $P_i$.
Thus, no vertex in $B_{\mathcal{C}}$ is in $P_1\cup\dots\cup P_{2^n}$.
Hence, no part of $\mathcal{P}$ contains vertices in both $B_{\mathcal{C}}$ and in the remainder of $G$.
Therefore, $\RP{B_{\mathcal{C}}}$ is a connected partition of $B_{\mathcal{C}}$, and the quotient of $\RP{B_{\mathcal{C}}}$ equals the subgraph of $G/\mathcal{P}$ induced by those parts that intersect $B_{\mathcal{C}}$.
Since $B=G(k,t-1,1)$, by induction, $\RP{B_{\mathcal{C}}}$ satisfies (1), (2) or (3).
If outcome (1) or (3) holds for $\RP{B_{\mathcal{C}}}$, then the same outcome holds for $G$.
Now assume that outcome (2) holds for $\RP{B_{\mathcal{C}}}$.
We now show that outcome (2) holds for $G$.
Let $H$ be a $t$-vertex graph, let $v$ be a vertex of $H$, and let $N_H(v)=\{w_1,\dots,w_d\}$.
Since outcome (2) holds for $\RP{B_{\mathcal{C}}}$,
the quotient of $\RP{B_{\mathcal{C}}}$ contains $H-v$.
Let $Q_1,\dots,Q_{d}$ be the parts corresponding to $w_1,\dots,w_d$.
Then $S_{\mathcal{C}}^i=V(Q_1\cup \dots\cup Q_{d})$ for some $i\in\{1,\dots,2^n\}$.
In $G/\mathcal{P}$, the vertex corresponding to $P_i$ is adjacent to $Q_1,\dots,Q_d$ and to no other vertices corresponding to parts contained in $B_\mathcal{C}$. Thus, including $P_i$, $G/\mathcal{P}$ contains $H$ and outcome (2) holds for $\mathcal{P}$.
Hence $G$ has the claimed properties of $G(k,t,1)$.
\end{proof}
\end{document} |
\begin{document}
\title{Bayesian Nonparametrics for Sparse Dynamic Networks}
\titlerunning{Bayesian Nonparametrics for Sparse Dynamic Networks}
\author{Cian Naik\inst{1} \and
Fran\c cois Caron\inst{1} \and
Judith Rousseau\inst{1} \and
Yee Whye Teh\inst{1,3}\and
Konstantina Palla\inst{2}}
\authorrunning{C. Naik et al.}
\institute{Department of Statistics, University of Oxford, Oxford, United Kingdom \and
Microsoft Research, Cambridge, United Kingdom \and Google Deepmind, London, United Kingdom}
\maketitle
\begin{abstract}
In this paper we propose a Bayesian nonparametric approach to modelling sparse time-varying networks. A positive parameter is associated with each node of a network, which models the sociability of that node. Sociabilities are assumed to evolve over time, and are modelled via a dynamic point process model. The model is able to capture the long-term evolution of the sociabilities. Moreover, it yields sparse graphs, where the number of edges grows subquadratically with the number of nodes. The evolution of the sociabilities is described by a tractable time-varying generalised gamma process. We provide some theoretical insights into the model and apply it to three datasets: a simulated network, a network of hyperlinks between communities on Reddit, and a network of co-occurrences of words in Reuters news articles after the September $11^{th}$ attacks.
\keywords{Bayesian nonparametrics \and Poisson random measures \and networks \and random graphs \and sparsity \and point processes}
\end{abstract}
\section{Introduction}
This article is concerned with the analysis of dynamic networks, where one observes the evolution of links among a set of objects over time. As an example, links may represent social interactions between individuals over time or the co-occurrence of words in a newspaper over time. Probabilistic approaches treat the dynamic networks of interest as random graphs, where the vertices (nodes) and edges correspond to objects and links respectively. In the graph setting, sparsity is defined in terms of the rate at which the number of edges grows as the number of nodes increases. In a \textit{sparse} graph the number of edges grows sub-quadratically in the number of nodes. Hence, in a large graph, two nodes chosen at random are very unlikely to be linked.
While sparsity is a property found in many real-world network datasets \citep{Newman2009}, most of the popular Bayesian models used in network analysis give rise to dense graphs, i.e. graphs where the number of edges grows quadratically in the number of nodes; see \citep{Orbanz2015} for a review. A recent Bayesian nonparametric approach, proposed by \citep{Caron2017} and later developed in a number of articles~\citep{Veitch2015,Herlau2015,Borgs2018,Todeschini2016,naik2021sparse}, represents the graph as an infinite point process on $\mathbb{R}^2_+$, giving rise to a class of sparse random graphs. This class of sparse models is projective and admits a representation theorem due to \citep{Kallenberg1990}.
In this paper, we are interested in the dynamic domain and aim to probabilistically model the evolution of sparse graphs over time, where edges may appear and disappear, and the node popularity may change over time. We build on the sparse graph model of \citep{Caron2017} and extend it to deal with time series of network data. We describe a fully generative and projective approach for the construction of sparse dynamic graphs. It is challenging to perform exact inference using the framework we introduce, and thus we consider an approximate inference method, using a finite-dimensional approximation introduced by \citep{lee2016finite}.
The rest of the article is structured as follows. In Section \ref{sec:background} we give some background on the sparse network model of \citep{Caron2017}. Section \ref{sec:model} describes the novel statistical dynamic network model we introduce in detail. In Section \ref{sec:properties}, we describe the sparsity properties of the proposed model. The approximate inference method, based on a truncation of the infinite-dimensional model is described in Section~\ref{sec:inference}. In Section \ref{sec:experiments} we present illustrations of our approach to three different dynamic networks with thousands of nodes and edges.
\section{Background: model of Caron and Fox for sparse static networks}
\label{sec:background}
We recall in this section the model of \citep{Caron2017} for sparse multigraphs. Let $\alpha>0$ be a positive real tuning the size of the network. A finite multigraph of size $\alpha>0$ is represented by a point process on $[0,\alpha]^2$
\[
N=\sum_{i, j} n_{ij} \delta_{(\theta_i,\theta_j)}
\]
where $n_{ij}=n_{ji}\in\{0,1,2,\ldots\}$, $i\leq j$, represents the number of interactions between individuals $i$ and $j$, and the $\theta_i\in[0,\alpha]$ can be interpreted as node labels. These node labels are introduced for the model's construction, but are neither observed nor inferred. Each node $i$ is assigned a sociability parameter $w_i>0$. Let $W=\sum_i w_i\delta_{\theta_i}$ be the corresponding random measure on $[0,\alpha]$. We assume that $W$ is a generalised gamma completely random measure~\citep{Kingman1967,Hougaard1986,Brix1999,Lijoi2007}. That is, $\{(w_i,\theta_i)_{i\geq 1}\}$ are the points of a Poisson point process with mean measure $\nu(w)dw \1{\theta\leq \alpha}d\theta$ where $\mathbbm{1}_{A}=1$ if the statement $A$ is true and $0$ otherwise, and $\nu$ is a L\'evy intensity on $(0,\infty)$ defined as
\begin{equation}
\nu(w)=\frac{1}{\Gamma(1-\sigma)}w^{-1-\sigma}e^{-\tau w}
\end{equation}
with hyperparameters $\sigma<1$ and $\tau>0$. We write simply $W\sim \text{GG}(\alpha,\sigma,\tau)$.
To each pair of nodes $i,j$, we assign a number of latent interactions $n_{ij}$, where
\begin{equation}n_{ij}\mid w_{i},w_{j} \sim\left \{
\begin{array}{ll}
\Poisson{2w_{i}w_{j}} & i< j , \quad n_{ji}= n_{ij}\\
\Poisson{w_{i}w_{j}} & i=j
\end{array}\right .
\end{equation}
Finally, two nodes are said to be connected if they have at least one interaction; let $z_{ij}=\1{n_{ij}>0}$ be the binary variable indicating if two nodes are connected. When $\sigma>0$, this model yields sparse graphs with power-law degree distributions~\citep{Caron2017,Caron2017a}.
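For illustration, the generative mechanism above is straightforward to simulate once a finite vector of sociabilities is available. The following Python snippet is a minimal sketch, not part of the model specification: it assumes the weights $w_1,\dots,w_K$ have been sampled separately (for instance via the finite approximation of Section~\ref{sec:inference}) and draws the interaction counts and the induced binary graph.
\begin{verbatim}
import numpy as np

def simulate_static_graph(w, rng=None):
    """Draw counts n_ij ~ Poisson(2 w_i w_j) for i < j, n_ii ~ Poisson(w_i^2),
    and the binary adjacency z_ij = 1{n_ij > 0}, given sociabilities w."""
    rng = np.random.default_rng() if rng is None else rng
    rate = 2.0 * np.outer(w, w)          # off-diagonal rates 2 w_i w_j
    np.fill_diagonal(rate, np.asarray(w) ** 2)
    n = np.triu(rng.poisson(rate))       # one Poisson draw per unordered pair
    n = n + np.triu(n, 1).T              # symmetrise: n_ji = n_ij
    z = (n > 0).astype(int)              # connected iff at least one interaction
    return n, z

# toy usage with arbitrary illustrative weights
n, z = simulate_static_graph(np.array([0.5, 1.2, 0.1, 2.0]))
\end{verbatim}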
\begin{figure}
\caption{Graphical representation of the model.}
\label{fig:graphical_ii}
\end{figure}
\section{Dynamic statistical network model}
\label{sec:model}
In order to study dynamically evolving networks, we assume that at each time $t=1,2,\ldots,T$, we observe a set of interactions between a number of nodes. This set of interactions is represented by a point process $N_t$ over $[0,\alpha]^2$ as in Equation~\eqref{eq:Zt}, where $\alpha$ tunes the size of the graphs.
\begin{equation}
N_t=\sum_{i, j} n_{tij} \delta_{(\theta_i,\theta_j)}.
\label{eq:Zt}
\end{equation}
Here, $n_{tij}$ is the number of interactions between $i$ and $j$ at time $t$, and the $\theta_i$ are unique node labels.
The dynamic point process $N_t$ is obtained as follows. We assume that each node $i$ at time $t$ has a \textit{sociability} parameter $w_{ti}\in \mathbb{R}_+$, which can be thought of as a measure of the node's willingness to interact with other nodes at time $t$. We consider the associated collection of random measures on $\mathbb R_+$, for $t=1,\ldots,T$
$$
W_t=\sum_i w_{ti} \delta_{\theta_i},~~~t=1,\ldots,T.
$$
We first describe in Section \ref{sec:latentcount} the model for the latent interactions. Then we describe in Section \ref{sec:sociability} the model for the time-varying sociability parameters $(W_t)_{t\geq 1}$. The overall probabilistic model is summarised in Figure \ref{fig:graphical_ii}.
\subsection{Dynamic network model based on observed interactions}
\label{sec:latentcount}
In the dynamic setting, what we observe in practice is often counts of interactions between nodes, e.g. hyperlinks, emails or co-occurrences, rather than a binary indicator of whether there is a connection between them. So for each pair of nodes $i\leq j$, we let $n_{tij}$, for $t=1,\ldots,T$, denote the number of interactions between them at time $t$. We assume that $n_{tij}$ can be modelled as
\begin{equation}\label{eq:n_poiss}
n_{tij}\mid w_{ti},w_{tj} \sim\left \{
\begin{array}{ll}
\Poisson{2w_{ti}w_{tj}} & i< j, \quad n_{tji} = n_{tij}\\
\Poisson{w_{ti}w_{tj}} & i=j
\end{array}\right.
\end{equation}
This model can easily be adapted to graphs with directed edges, by modifying Equation~\eqref{eq:n_poiss} to
$$
n_{tij}\sim \Poisson{w_{ti}w_{tj}}
$$
for all $i\neq j$, where $n_{tij}$ now represents the number of interactions from $i$ to $j$. The resulting inference algorithm essentially remains the same, and from now on we assume we are in the undirected edge setting.
As in the static case, we can reconstruct the binary graph by letting $z_{tij}= \1{(n_{tij}>0)}$ be the binary variable indicating if nodes $i$ and $j$ are connected at time $t$, i.e. two nodes are connected at time $t$ if and only if $n_{tij}>0$. To avoid ambiguity, we say that the number of edges in the graph at time $t$ is $\sum_{i>j}z_{tij}$, rather than counting the number of interactions between pairs of nodes. Marginalizing out the interaction counts $n_{tij}$, we have for $i\neq j$:
\begin{equation}
\Pr(z_{tij}=1\mid (w_{t-k,i},w_{t-k,j})_{k=0,\ldots,t-1})=1-e^{-2w_{ti}w_{tj}}.\label{eq:marginalz}
\end{equation}
\subsection{A dependent generalised gamma process for the sociability parameters}
\label{sec:sociability}
\begin{figure*}
\caption{Degree distributions over time, for a network simulated from the GG model with $T=4$, $\alpha = 200$, $\tau = 1$, $\phi = 1$ and varying values of $\sigma$.}
\label{fig:sigma_degree_t_0}
\label{fig:sigma_degree_t_1}
\label{fig:sigma_degree_t_2}
\label{fig:sigma_degree_t_3}
\label{fig:ggp_simulation_sigma_degree}
\end{figure*}
\begin{figure}
\caption{Evolution of weights over time, for a network simulated from the GG model with $T=100$, $\alpha = 1$, $\sigma = 0.01$, $ \tau = 1$ and (a) $\phi = 20$ (b) $\phi=2000$.}
\label{fig:phi_20_weights}
\label{fig:phi_2000_weights}
\label{fig:ggp_simulation_phi_weights}
\end{figure}
We consider here that the sequence of random measures $(W_t)_{t=1,2,\ldots}$ follows a Markov model, such that $W_t$ is marginally distributed as $\text{GG}(\alpha,\sigma,\tau)$. To this end, we build on the generic construction of \citep{Pitt2005}. A similar model has been derived by \citep{CarTeh2012a} for dependent gamma processes (corresponding to the case $\sigma=0$ here). As in \citep{Caron2017}, we use the generalised gamma process here because of the flexibility the sparsity parameter $\sigma$ gives us. In particular, this setup allows us to capture power-law degree distributions, unlike with the gamma process.
For a sequence of additional latent variables $(C_t)_{t=1,2,\ldots}$, we consider a Markov chain $W_{t}
\rightarrow C_{t}\rightarrow W_{t+1}$ starting with $W_{1}\sim\text{GG}(\alpha
,\sigma,\tau)$ that leaves $W_{t}$ marginally $\text{GG}(\alpha
,\sigma,\tau)$. For $t=1,\ldots,T-1$, define
\begin{equation}
C_{t}=\sum_{i=1}^{\infty}c_{ti}\delta_{\theta_{i}}\quad c_{ti}|W_{t}
\sim\text{Poisson}(\phi w_{ti})\label{eq:latentCt}
\end{equation}
where $\phi>0$ is a parameter tuning the correlation. Given $C_t$, the measure $W_{t+1}$ is then constructed as a combination of masses defined
by $C_{t}$ and GG innovation:
\begin{align}
W_{t+1}=W_{t+1}^{\ast}+\sum_{i=1}^{\infty}w_{t+1,i}^{\ast}\delta_{\theta_{i}
}
\end{align}
with
\begin{align}
W_{t+1}^{\ast}&\sim\text{GG}(\alpha,\sigma,\tau+\phi)\\
w_{t+1,i}^{\ast}|C_{t}&\sim\text{Gamma}(\max(c_{ti}
-\sigma,0),\tau+\phi).
\end{align}
By convention, $\text{Gamma}(0,\tau)=\delta_0$; hence $w_{t+1,i}^{\ast}=0$ if $c_{ti}=0$ and $w_{t+1,i}^{\ast}>0$ otherwise. Because the conditional laws of
$W_{t+1}|C_{t}$ coincide with those of $W_{t}|C_{t}$ \citep{Prunster2002,James2002,James2009},
the construction guarantees that $W_{t+1}$ has the same marginal distribution as $W_{t}$,
i.e., they are both distributed as $\text{GG}(\alpha,\sigma,\tau)$. Moreover,
as proved in Section \ref{sec:app:proofs} of the Appendix,
the
conditional mean of $W_{t+1}$ given $W_{t}=\sum_{i}w_{ti}\delta_{\theta_i}$ has the form
\begin{align}
E[W_{t+1}&|W_{t}]=\left (\frac{\tau}{\tau+\phi} \right )^{1-\sigma}E[W_t] \nonumber\\
&~+\frac{1}{\tau+\phi}\sum_{i=1}^{\infty}[\phi
w_{ti}-\sigma(1-e^{-\phi w_{ti}})]\delta_{\theta_{i}}\label{eq:conditionalmean}
\end{align}
In the gamma process case $(\sigma=0)$, the above expression reduces to
\[
E[W_{t+1}|W_{t}]=\frac{\tau}{\tau+\phi}E[W_{t}]+\frac{\phi}{\tau+\phi}
W_{t}.
\]
\subsection{Summary of the model's hyperparameters}
The model is parameterised by $(\alpha, \sigma, \tau, \phi)$, where:
\begin{itemize}
\item $\alpha$ tunes the overall size of the networks, with a larger value of $\alpha$ corresponding to larger networks.
\item $\sigma$ controls the sparsity and power-law properties of the graph, as will be shown in Section~\ref{sec:properties}. In Figure \ref{fig:ggp_simulation_sigma_degree} we see that different values of $\sigma$ give rise to different power-law degree distributions.
\item $\tau$ induces an exponential tilting of large degrees in the degree distribution.
\item $\phi$ tunes the correlation of the sociabilities of each node over time. As we see in Figure \ref{fig:ggp_simulation_phi_weights}, larger values correspond to higher correlation and smoother evolution of the weights.\\
\end{itemize}
\section{Sparsity and power-law properties of the model}
\label{sec:properties}
By construction,
the interactions at time $t$, $n_{tij}$, are drawn from the same (static) model as in \citep{Caron2017}, with the generalised gamma L\'evy intensity, and so, applying Proposition 18 in \citep{Caron2017a}, we obtain the following asymptotic properties.
\begin{proposition}
Let $N_{t,\alpha}$ be the number of active nodes at time $t$, $N_{t,\alpha}^{(e)} = \sum_{i\leq j} z_{tij}$ be the number of edges and $N_{t,\alpha,j}$ the number of nodes of degree $j$ in the graph at time $t$, then
as $\alpha$ tends to infinity, almost surely, we have for any $t$:
if $\sigma >0$,
$
N_{t,\alpha}^{(e)} \asymp N_{t,\alpha}^{2/(1+\sigma)},
$ if $\sigma=0$, $N_{t,\alpha}^{(e)} \asymp N_{t,\alpha}^{2}/\log^2(N_{t,\alpha})$ and if $\sigma<0$, $N_{t,\alpha}^{(e)} \asymp N_{t,\alpha}^{2}$.
Also, almost surely, for any $t\geq 1$ and $j\geq 1$, if $\sigma \in (0,1)$,
$$\frac{N_{t,\alpha,j}}{N_{t,\alpha}}\rightarrow p_j, \quad p_j=\frac{\sigma\Gamma(j-\sigma)}{j!\Gamma(1-\sigma)},$$
while if $\sigma\leq 0$, $N_{t,\alpha,j}/N_{t,\alpha} \rightarrow 0$ for all $j\geq 1$.
\end{proposition}
Hence the graphs are sparse if $\sigma\geq 0$ and dense if $\sigma<0$.
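As a quick numerical illustration of the power-law regime, the limiting probabilities $p_j$ above can be evaluated on the log scale; for large $j$ they behave like $\frac{\sigma}{\Gamma(1-\sigma)}j^{-1-\sigma}$. The short Python snippet below (illustration only) computes them.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def limiting_degree_probs(sigma, j_max):
    """p_j = sigma * Gamma(j - sigma) / (j! * Gamma(1 - sigma)), j = 1..j_max,
    computed on the log scale for numerical stability (sigma in (0, 1))."""
    j = np.arange(1, j_max + 1)
    log_pj = (np.log(sigma) + gammaln(j - sigma)
              - gammaln(j + 1.0) - gammaln(1.0 - sigma))
    return np.exp(log_pj)

# e.g. with sigma = 0.2 the tail of p_j decays approximately like j^(-1.2)
p = limiting_degree_probs(0.2, 1000)
\end{verbatim}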
\section{Approximate inference}
\label{sec:inference}
\subsection{Finite-dimensional approximation}
Performing exact inference using this model is quite challenging, and we consider instead an approximate inference method, using a finite-dimensional approximation to the GG prior, introduced by \citep{lee2016finite}. This approximation gives rise to a particularly simple conjugate construction, enabling posterior inference to be performed.
Let $\text{BFRY}\left(\eta,\tau,\sigma\right)$ denote a (scaled and exponentially tilted) BFRY random variable\footnote{The name was coined by \citep{Devroye2014} after \citep{Bertoin2006}.} on $(0,\infty)$ with probability density function
\begin{align*}
g_{\eta,\tau,\sigma}(w) = \frac{\sigma w^{-1-\sigma}e^{-\tau w}\left( 1-e^{- (\sigma /\eta)^{1/\sigma}w} \right) }{\Gamma(1-\sigma)\left\{ \left( \tau +(\sigma /\eta)^{1/\sigma} \right)^{\sigma} -\tau^{\sigma}\right\} }
\end{align*}
with parameters $\sigma\in(0,1)$, $\tau>0$ and $\eta>0$.
At time 1, consider the finite-dimensional measure
\[
W_{1}=\sum_{i=1}^{K}w_{1i}\delta_{\theta_{i}}
\]
where $w_{1i} \sim \text{BFRY}\left(\alpha/K,\tau,\sigma\right)$ and $K<\infty$ is the truncation level. As shown by~\citep{lee2016finite}, for $\sigma\in(0,1)$
$$W_{1}\overset{d}{\to}\text{GG}(\alpha,\sigma,\tau)$$
as the truncation level $K$ tends to infinity. By this we mean that, if $W \sim \text{GG}(\alpha,\sigma,\tau)$, then
$$
\lim_{K \to \infty} \mathcal{L}_f(W_1) = \mathcal{L}_f(W)
$$
for an arbitrary measurable and positive $f$, where $\mathcal{L}_f(W) \coloneqq \mathbb{E}\left[e^{-W(f)}\right]$ is the Laplace functional of $W$ as defined by \citep{lee2016finite}.
To use this finite approximation with our dynamic model, we consider Poisson latent variables as in Equation~\eqref{eq:latentCt}. The measure $W_{t+1}$ is then constructed as:
\begin{align*}
W_{t+1}&=\sum_{i=1}^{K}w_{t+1,i}\delta_{\theta_{i}}\\
w_{t+1,i}|C_{t}&\sim\text{BFRY}\left(\alpha'_t/K,\tau+\phi,\sigma - c_{ti}\right).
\end{align*}
where
\begin{align*}
\alpha'_t = K(\sigma - c_{ti})(\sigma K/\alpha)^{\frac{c_{ti}-\sigma}{\sigma}}.
\end{align*}
This construction mirrors that of Section \ref{sec:sociability}, with the key difference being that we now obtain a stationary $\text{BFRY}\left(\alpha/K,\tau,\sigma\right)$ distribution for the $w_{ti}$. We can easily see this by noting that if
\begin{align*}
w_{ti} &\sim \text{BFRY}\left(\alpha/K,\tau,\sigma\right)\\
c_{ti}|w_{ti} &\sim\text{Poisson}(\phi w_{ti})
\end{align*}
then
\begin{align}\label{eq:t_conditional}
p(w_{ti}|c_{ti}) \propto w_{ti}^{-1-\sigma+c_{ti}}e^{-(\tau + \phi)w_{ti}}\left( 1-e^{- (\sigma K/\alpha)^{1/\sigma}w_{ti}} \right)
\end{align}
which we then recognise as a $\text{BFRY}\left(\alpha'_t/K,\tau+\phi,\sigma - c_{ti}\right)$ with $\alpha'_t$ as above.
Thus, the use of a finite-dimensional approximation with BFRY random variables gives us the simple conjugate construction that we desired. The reason we introduce this non-standard distribution is that, as far as we know, it is not possible to approximate the generalised gamma process using a finite measure with i.i.d. gamma random weights. In the specific case $\sigma=0$, we could use the simpler finite approximation based on $\text{Gamma}(\alpha / K, \tau)$ random variables. However, this would preclude us from modelling networks with power-law degree distributions, as discussed previously.
\begin{figure*}
\caption{(Top row) Posterior Predictive Degree Distribution and (bottom row) 95\% credible intervals for the sociabilities of the nodes with the highest degrees, for a network simulated from the GG model. True weights are represented by a green cross.}
\label{fig:sim_post_pred_t_0}
\label{fig:sim_post_pred_t_1}
\label{fig:sim_post_pred_t_2}
\label{fig:sim_post_pred_t_3}
\label{fig:sim_high_weights_t_0}
\label{fig:sim_high_weights_t_1}
\label{fig:sim_high_weights_t_2}
\label{fig:sim_high_weights_t_3}
\label{fig:ggp_simulation_post_pred}
\end{figure*}
\begin{figure}
\caption{Evolution of weights of high degree nodes for the network simulated from the GG model. The dotted line shows the true value of the weights in each case.}
\label{fig:sim_high_weights}
\label{fig:sim_weights}
\end{figure}
\subsection{Posterior Inference Algorithm}
In order to perform posterior inference with the approximate method, we use a Gibbs sampler. We introduce auxiliary variables $\{u_{ti}\}^K_{i=1}, t=1,\ldots,T$ following a truncated exponential distribution. The overall sampler is as follows (see details in Section \ref{sec:app:mcmc_alg} of the Appendix):
$1.$ Update the weights $w_{ti}$ given the rest using Hamiltonian Monte Carlo (HMC).\\
$2.$ Update the latent $c_{ti}$ given the rest using Metropolis Hastings (MH).\\
$3.$ Update the hyperparameters $\alpha$, $\sigma$, $\phi$ and $\tau$ and the latent variables $u_{ti}$ given the rest.
For the hyperparameters, we place gamma priors on $\alpha$, $\tau$ and $\phi$, and a Beta prior on $\sigma$.
\section{Experiments}
\label{sec:experiments}
\begin{figure}
\caption{Trace plots for (a) $\sigma$, (b) $\phi$, (c) $\tau$ and (d) $\alpha$ for different truncation levels, when fitting to a network simulated from the GG model}
\label{fig:sigma_comparison}
\label{fig:phi_comparison}
\label{fig:tau_comparison}
\label{fig:alpha_comparison}
\label{fig:sim_truncation_comparison}
\end{figure}
\subsection{Simulated Data}
In order to assess the approximate inference scheme, we first consider a synthetic dataset. We simulate a network with $T=4$, $\alpha = 100$, $\sigma = 0.2$, $\tau = 1$, $\phi = 10$ from the exact model (see Section~\ref{sec:model}), and then estimate it using our approximate inference scheme described in Section~\ref{sec:inference}. The generated network has $3,096$ nodes and $101,571$ edges. We run an MCMC chain of length $600,000$, with the first $300,000$ samples discarded as burn-in. We set the truncation threshold to $K=15,000$.
The approximate inference scheme estimates the model's parameters well. This can be seen in terms of the fit of the posterior predictive degree distribution to the empirical as seen in Figures \ref{fig:sim_post_pred_t_0}-\ref{fig:sim_post_pred_t_3}, and the coverage of the credible intervals for the weights, as we see in Figures \ref{fig:sim_high_weights_t_0}-\ref{fig:sim_high_weights_t_3} for the 50 nodes with the highest degree, and in Figure~\ref{fig:sim_weights} for the evolution over time of some high degree nodes.
\begin{figure*}
\caption{Posterior Predictive Degree Distribution over time for the Reddit hyperlink network}
\label{fig:reddit_post_pred_t_1}
\label{fig:reddit_post_pred_t_5}
\label{fig:reddit_post_pred_t_9}
\label{fig:reddit_post_pred_t_11}
\label{fig:reddit_post_pred}
\end{figure*}
To see the influence of the truncation level $K$, we rerun the algorithm for different truncation levels $K=5,000$ and $K=10,000$. Trace plots of the hyperparameters are shown in Figure \ref{fig:sim_truncation_comparison}. The approximate posterior distributions are quantitatively similar, but we see that increasing the value of $K$ leads to a more correlated Markov chain. However, we note that $\phi$ is slightly under-estimated in our model, and this problem is more severe the lower the truncation level is taken to be. Further comparison is given in Section \ref{sec:app:mcmc_plots} of the Appendix.
\subsection{Real Data}
We illustrate the use of our model on two more dynamic network datasets: the Reddit Hyperlink Network \citep{kumar2018community}\footnote{\url{https://snap.stanford.edu/data/soc-RedditHyperlinks.html}} and the Reuters Terror dataset\footnote{\url{http://vlado.fmf.uni-lj.si/pub/networks/data/CRA/terror.htm}}.
\subsubsection{Reddit Hyperlink Network}
\begin{figure}
\caption{Evolution of (a) degrees and (b) weights of high degree nodes for the Reddit hyperlink network}
\label{fig:reddit_high_degrees}
\label{fig:reddit_high_weights}
\label{fig:reddit_weights}
\end{figure}
\begin{figure*}
\caption{Evolution of the weights, for the Reddit hyperlink network.}
\label{fig:reddit_word_weights_0}
\label{fig:reddit_word_weights_1}
\label{fig:reddit_word_weights_2}
\label{fig:reddit_subreddit_weights}
\end{figure*}
The Reddit hyperlink network represents hyperlink connections between subreddits (communities on Reddit) over a period of $T=12$ consecutive months in $2016$. Nodes are subreddits (communities) and the symmetric edges represent hyperlinks originating in a post in one community and linking to a post in another community. The network has $N= 28,810$ nodes and $388,574$ interactions. The observations here are hyperlinks between the pair of subreddits $i,j$ at time $t$. The dataset has been made symmetric by placing an edge between nodes $i$ and $j$ if there is a hyperlink between them in either direction. We also assume that there are no loops in the network, that is $n_{tij} = 0$ for $i=j$. We run the Gibbs sampler for $400,000$ iterations, with the first $200,000$ samples discarded as burn-in. In this case, we choose a truncation level of $K=40,000$.
From Figure \ref{fig:reddit_post_pred} we see that our model is capturing the empirical degree distribution well. Furthermore, in Figure \ref{fig:reddit_weights} we see that the model is able to capture the evolution of weights associated with each subreddit in a fashion that agrees with the observed frequency of interactions. The high-degree nodes here are interpretable as either communities with a very large number of followers, such as ``askreddit'' or ``bestof'', or communities which frequently link to others, such as ``drama'' or ``subredditdrama''.
In particular, we see in Figure \ref{fig:reddit_word_weights_0} that the weights of the controversial but popular political subreddit ``The Donald'' increase to a peak in November, corresponding to the 2016 U.S. presidential election. Conversely, the weights of the subreddit ``Sandersforpresident'' decrease as the year goes on, corresponding to the end of Senator Bernie Sanders' presidential campaign, a trend that again agrees with the evolution of the corresponding observed degrees.
\subsubsection{Reuters Terror Dataset}
\begin{figure*}
\caption{Posterior Predictive Degree Distribution for the Reuters terror dataset}
\label{fig:reuters_post_pred_t_0}
\label{fig:reuters_post_pred_t_2}
\label{fig:reuters_post_pred_t_3}
\label{fig:reuters_post_pred_t_4}
\label{fig:reuters_post_pred}
\end{figure*}
The final dataset we consider is the Reuters terror news network dataset. It is based on all stories released during $T=7$ consecutive weeks (the original data was day-by-day, but was shortened and collated for our purposes) by the Reuters news agency concerning the $09/11/01$ attack on the U.S. Nodes are words and edges represent the co-occurrence of words in a sentence in the news stories. The network has $N= 13,332$ nodes (different words) and $473,382$ interactions. The observations here are the frequencies of co-occurrence between the pair of words $i,j$ at time $t$. We assume that there are no loops in the network, that is $n_{tij} = 0$ for $i=j$. We run the Gibbs sampler for $200,000$ iterations, with the first $100,000$ samples discarded as burn-in. In this case, we choose a truncation level of $K=20,000$.
Figure \ref{fig:reuters_post_pred} suggests that the empirical degree distribution does not follow a power law distribution, and our model therefore provides a moderate fit to the empirical degree distribution. The model is however able to capture the evolution of the popularity of the different words, as shown in Figure \ref{fig:reuters_word_weights_0}. For example, the weights of the words ``plane'' and ``attack'' decrease over time after 9/11, while the words ``letter" and ``anthrax" show a peak a few weeks after the attack. These correspond to the anthrax attacks that occurred over several weeks starting a week after 9/11.
Due to the empirical degree distribution not following a power-law, the estimated value of $\sigma$ is very close to $0$, which causes a slow convergence of the MCMC algorithm.
\begin{figure*}
\caption{Evolution of the weights, for the Reuters terror dataset}
\label{fig:reuters_word_weights_0}
\label{fig:reuters_word_weights_1}
\label{fig:reuters_word_weights_2}
\label{fig:reuters_words_weights}
\end{figure*}
\section{Discussion and Extensions}
A large body of work exists on modelling dynamic networks; here we restrict ourselves to Bayesian approaches. Much of this work has centred around extending static models. For example, \citep{Xu2014} and \citep{Durante2014} extend the stochastic block model to the dynamic setting by allowing for parameters that evolve over time. There has also been work on extending the mixed membership stochastic blockmodel \citep{Fu2009, ho2011evolving, xing2010state}, the infinite relational model \citep{ng2017dynamic} and the latent feature relational model \citep{foulds2011dynamic, heaukulani2013dynamic, kim2013nonparametric}. The problem inherent in these models is that they lead to networks which are dense almost surely \citep{Aldous1981, Hoover1979}, a property considered unrealistic for many real-world networks \citep{Orbanz2015} such as social and internet networks.
In order to build models for sparse dynamic networks, \citep{Ghalebi2019} build on the framework of edge-exchangeable networks \citep{cai2016edge, crane2018edge, williamson2016nonparametric}, in which graphs are constructed based on an infinitely exchangeable sequence of edges. As in our case, this framework allows for sparse networks with power-law degree distributions. This work, along with others in this framework \citep{ng2017dynamic, ghalebi2018dynamic}, utilises the mixture of Dirichlet network distributions (MDND) to introduce structure in the networks.
Conversely, our work builds on a different notion of exchangeability \citep{Kallenberg1990, Caron2017}. Within this framework, \citep{Miscouridou2018} use mutually-exciting Hawkes processes to model temporal interaction data. The difference between our work and theirs is that, in their model, the sociabilities of the nodes are constant throughout time, with the time-evolving element driven by previous interactions via the Hawkes process. Their work also builds on that of \citep{Todeschini2016}, incorporating community structure into the network. Exploring how communities appear, evolve and merge could have many practical uses. Thus, expanding our model to capture both evolving node popularity and dynamically changing community structure could be a useful extension.
Furthermore, our model assumes that the time between observations of the network is constant. If this is not the case, it may be helpful to use a continuous-time version of our model. This could be done by considering a birth-death process for the interactions between nodes, where each interaction has a certain lifetime distribution. The continuously evolving node sociabilities could then be described by the Dawson-Watanabe superprocess \citep{Watanabe1968, Dawson1975}.
\appendix
\section{Proofs}\label{sec:app:proofs}
\subsection{Proof of Equation~\eqref{eq:conditionalmean}}
\begin{align*}
&E[W_{t+1}|W_{t}]\\& =E[E[W_{t+1}|C_{t}]|W_{t}]\\ & =E[W_{t+1}^{\ast}]+\frac
{1}{\tau+\phi}\sum_{i=1}^{\infty}E[\max(c_{ti}-\sigma,0)|W_{t}]\delta
_{\theta_{i}}\\
& =E[W_{t+1}^{\ast}]+\frac{1}{\tau+\phi}\sum_{i=1}^{\infty}[\phi
w_{ti}-\sigma(1-e^{-\phi w_{ti}})]\delta_{\theta_{i}}
\end{align*}
Finally, note that, for any measurable set $A\subseteq [0,\alpha]$, using Campbell's theorem,
$$E[W_{t+1}(A)]=\lambda(A)\int_0^\infty w\nu(w)dw=\lambda(A)\tau^{\sigma-1}$$
and similarly
\begin{align*}
E[W^\ast_{t+1}(A)]&=\lambda(A)(\tau+\phi)^{\sigma-1}\\
&=\left(\frac{\tau}{\tau+\phi}\right)^{1-\sigma}E[W_{t+1}(A)]
\end{align*}
where $\lambda$ denotes the Lebesgue measure. Thus,
\begin{align}
E[W_{t+1}&|W_{t}]=\left (\frac{\tau}{\tau+\phi} \right )^{1-\sigma}E[W_t] \nonumber\\
&~+\frac{1}{\tau+\phi}\sum_{i=1}^{\infty}[\phi
w_{ti}-\sigma(1-e^{-\phi w_{ti}})]\delta_{\theta_{i}}
\end{align}
\section{MCMC algorithm details}\label{sec:app:mcmc_alg}
For $t=1,\ldots,T$, let $\mathbf n_t=(n_{tij})_{1\leq i,j\leq K}$, $\mathbf w_t=(w_{ti})_{i=1,\ldots,K}$, $\mathbf u_t=(u_{ti})_{i=1,\ldots,K}$ and $\mathbf c_t=(c_{ti})_{i=1,\ldots,K}$.
\subsection{Conditional distribution for the interactions $n_{tij}$} \label{sec:multigraph_conditional}
The conditional distribution of $(\mathbf n_{t})_{t=1,\ldots, T}$ given the other variables can be found by recalling Equation~\eqref{eq:n_poiss}. We have
\begin{align*}
p((\mathbf n_{t})_{t=1,\ldots, T}\mid \text{rest})=\prod_{t=1}^T p(\mathbf n_t \mid \mathbf w_t)
\end{align*}
where
\begin{align*}
p(\mathbf n_t \mid \mathbf w_t) &= \prod_{1\leq i<j\leq K} \frac{(2w_{ti}w_{tj})^{n_{tij}}e^{-2w_{ti}w_{tj}} }{n_{tij}!} \prod_{i=1}^K \frac{(w_{ti}^2)^{n_{tii}}e^{-w_{ti}^2} }{n_{tii}!}\\
&\propto \Big[\prod^{K}_{i=1} w^{m_{ti}}_{ti}\Big] e^{- (\sum^{K}_{i=1} w_{ti})^2 }
\end{align*}
where $m_{ti}=n_{tii}+\sum_{j=1}^K n_{tij}$.
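Only the sufficient statistics $m_{ti}$ enter the likelihood of the weights. As a small illustration (not part of the algorithm specification), the corresponding log-likelihood term, up to an additive constant, can be computed as follows.
\begin{verbatim}
import numpy as np

def loglik_counts(w_t, m_t):
    """log p(n_t | w_t) up to an additive constant, using the sufficient
    statistics m_ti = n_tii + sum_j n_tij."""
    return np.sum(m_t * np.log(w_t)) - np.sum(w_t) ** 2
\end{verbatim}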
\subsection{Update of $u_{ti}$ and $w_{ti}$}
The auxiliary variables $u_{ti}$ are introduced in order to help with numerical instability problems with the update of the weights $w_{ti}$. These generally stem from the $\left( 1-e^{- (\sigma K/\alpha)^{1/\sigma}w_{ti}}\right)$ term in the BFRY distribution, which can be unstable when $(\sigma K/\alpha)^{1/\sigma}$ is large. Noting that the truncation level $K$ is a fixed value that we choose, we denote this term by
\begin{align}\label{eq:t_alphasigma_definition}
t_{\alpha,\sigma} = (\sigma K/\alpha)^{1/\sigma}
\end{align}
in everything that follows.
When defining $u_{ti}$, we want to cancel this problematic term out from the distribution of $w_{ti}$, and thus we sample $u_{ti}$:
\begin{align}
u_{ti}\mid w_{ti} \sim \text{tExp}(w_{ti},t_{\alpha,\sigma})
\end{align}
where $\text{tExp}(\lambda,a)$ denotes an Exponential distribution with rate parameter $\lambda$, truncated on $[0,a]$. The density function of $u_{ti}|w_{ti}$ is thus given by:
\begin{align}
p(u_{ti}|w_{ti}) = \frac{w_{ti}e^{-u_{ti} w_{ti}}}{\left( 1-e^{- t_{\alpha,\sigma}w_{ti}}\right)}, \qquad 0 \leq u_{ti} \leq t_{\alpha,\sigma}
\end{align}
We can sample $u_{ti}$ directly using inverse transform sampling. However, due to non-conjugacy we cannot sample directly from the posterior of $w_{ti}$. For that reason we use Hamiltonian Monte Carlo (HMC) \citep{Neal2000}.
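Concretely, the inverse-transform step for $u_{ti}$ takes the following form (a minimal Python sketch, using the quantity $t_{\alpha,\sigma}$ defined in Equation~\eqref{eq:t_alphasigma_definition}).
\begin{verbatim}
import numpy as np

def sample_u(w, t_as, rng=None):
    """Inverse-transform sample from tExp(w, t_as), i.e. an Exponential(rate = w)
    truncated to [0, t_as]; vectorised over the weights w."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.uniform(size=np.shape(w))
    # F^{-1}(v) = -log(1 - v * (1 - exp(-w * t_as))) / w
    return -np.log1p(-v * (1.0 - np.exp(-w * t_as))) / w
\end{verbatim}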
For a given value of $t$, the posterior of $\mathbf w_t$ is given by:
\begin{align*}
&p(\mathbf w_t|\mathbf n_t, \mathbf c_t, \mathbf c_{t-1}, \mathbf u_t) \\
&\propto p(\mathbf n_t |\mathbf w_t)p(\mathbf w_t| \mathbf c_t, \mathbf c_{t-1})p(\mathbf u_t|\mathbf w_t)\\
&\propto \Big[\prod^{K}_{i=1} w^{m_i}_{ti}\Big] e^{- (\sum^{K}_{i=1} w_{ti})^2 } \Big[ \prod^{K}_{i=1} w^{ c_{ti} +\mathds{1}_{t>1}c_{t-1i}-1 -\sigma}_{ti} e^{- (\phi+\mathds{1}_{t>1}\phi+\tau)w_{ti}}\left( 1-e^{- t_{\alpha,\sigma}w_{ti}}\right)\Big]\\
&~~~~\times \Big[\prod^{K}_{i=1} \frac{w_{ti}e^{-u_{ti} w_{ti}}}{\left( 1-e^{- t_{\alpha,\sigma}w_{ti}}\right)} \Big]\\
&\propto \Big[ \prod^{K}_{i=1} w^{ c_{ti} +\mathds{1}_{t>1}c_{t-1i}+1+m_i-1 -\sigma}_{ti} e^{- (\phi+\mathds{1}_{t>1}\phi+\tau + u_{ti} )w_{ti}}\Big] e^{- (\sum^{K}_{i=1} w_{ti})^2 }
\end{align*}
where some of the terms only appear when $t>1$, because $\mathbf c_0$ does not exist, and thus
$p(w_{1i}|c_{1i}, c_{0,i})=p(w_{1i}|c_{1i})$. Here, we used that
\begin{align}
p(\mathbf w_t| \mathbf c_t, \mathbf c_{t-1}) \propto \prod^{K}_{i=1} \bigg[ \text{BFRY}\left(\alpha''_t/K,\tau+2\phi,\sigma - c_{ti} -c_{t-1i}\right) \bigg]
\end{align}
while noting that the form of $\alpha''_t$ is defined such that
\begin{align*}
t_{\alpha,\sigma} = (\sigma K/\alpha)^{1/\sigma} = \left(\frac{(\sigma-c_{ti}-c_{t-1i}) K}{\alpha''_t}\right)^{\frac{1}{\sigma-c_{ti}-c_{t-1i}}}
\end{align*}
We use change of variables $y_{ti} = \log w_{ti}$. The HMC algorithm requires computing
the gradient of the log-posterior which is:
\begin{align}
\frac{\partial \log p(\mathbf y_t | \mathbf n_t, \mathbf c_t, \mathbf c_{t-1}, \mathbf u_{t})}{dy_{ti}} &= ( c_{ti} +\mathds{1}_{t>1}c_{t-1i}+1+m_{ti} -\sigma) \nonumber \\
&~~~~-w_{ti}\left(\phi+\mathds{1}_{t>1}\phi+\tau + u_{ti} + 2\sum^{K}_{j=1} w_{tj} \right)
\end{align}
The algorithm then proceeds as follows, for each $t=1,\ldots,T$:
\begin{enumerate}
\item Sample momentum variables $\mathbf p_t=(p_{ti})_{i=1,\ldots,K}$ as:
\begin{eqnarray*}
p_{ti} & \overset{iid}{\sim} & \mathcal{N}(0,1)\quad i=1,\ldots,K
\end{eqnarray*}
\item Simulate $L$ steps of the discretized Hamiltonian via
\begin{eqnarray*}
\log\tilde{\mathbf w}_{t}^{(0)} & = & \log \mathbf w_t\\
\tilde{p}_{ti}& = & p_{ti}+\frac{\varepsilon}{2}\frac{\partial \log p(\log \mathbf w_t | \mathbf n_t, \mathbf c_t, \mathbf c_{t-1}, \mathbf u_{t})}{d(\log w_{ti})} \quad i=1,\ldots,K
\end{eqnarray*}
and for $l=1,\ldots,L-1$
\begin{eqnarray*}
\log\tilde{\mathbf w}_{t}^{(l)} & = & \log\tilde{\mathbf w}_{t}^{(l-1)}+\varepsilon\tilde{\mathbf p}_{t}^{(l-1)}\\
\tilde{p}_{ti}^{(l)} & = & \tilde{p}_{ti}^{(l-1)}+\varepsilon\frac{\partial \log p(\log \tilde{\mathbf w}_t^{(l)} | \mathbf n_t, \mathbf c_t, \mathbf c_{t-1}, \mathbf u_{t})}{d(\log \tilde{w}^{(l)}_{ti})}
\end{eqnarray*}
where $\tilde{\mathbf p}_{t}=(\tilde{p}_{ti})_{i=1,\ldots,K}$. Finally, set
\begin{eqnarray*}
\log\tilde{\mathbf w}_{t} & = & \log\tilde{\mathbf w}_{t}^{(L-1)}+\varepsilon\tilde{\mathbf p}_{t}^{(L-1)}\\
\tilde{p}_{ti}& = & -\left[\tilde{p}_{ti}^{(L-1)}+\frac{\varepsilon}{2}\frac{\partial \log p(\log \tilde{\mathbf w}_t | \mathbf n_t, \mathbf c_t, \mathbf c_{t-1}, \mathbf u_{t})}{d(\log \tilde{w}_{ti})}\right] \quad i=1,\ldots,K
\end{eqnarray*}
\item Accept $\tilde{\mathbf w}_{t},$ with probability $a = \min(1, r)$, where $r$ is given by:
\begin{align}
r &= \Big[\prod^{K}_{i=1} \Big(\frac{\tilde{w}_{ti} } {w_{ti}} \Big)^{c_{ti} +\mathds{1}_{t>1}c_{t-1i}+1+m_{ti} -\sigma} \Big] \nonumber \\
&\times e^{-(\sum^{K}_{i=1} \tilde{w}_{ti})^2 + (\sum^{K}_{i=1} w_{ti})^2 - (\phi+\mathds{1}_{t>1}\phi+\tau)(\sum^{K}_{i=1} \tilde{w}_{ti} - \sum^{K}_{i=1} w_{ti}) }\nonumber \\
&\times e^{-\sum^{K}_{i=1}u_{ti} (\tilde{w}_{ti} - w_{ti})} e^{-\frac{1}{2} \sum^{K}_{i=1}(\tilde{p}_{ti}^2 - p_{ti}^2)}
\end{align}
\end{enumerate}
In our case, we take $L=10$, and tune $\varepsilon$ on the first $10,000$ samples in order to achieve an acceptance rate of $0.65$.
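The three steps above amount to a standard leapfrog HMC update on $\mathbf y_t=\log \mathbf w_t$. The following Python sketch (illustration only; the functions \texttt{log\_post} and \texttt{grad\_log\_post}, which evaluate the log-posterior of $\mathbf y_t$ and its gradient, are assumed to be supplied) summarises one such update.
\begin{verbatim}
import numpy as np

def hmc_step(y, log_post, grad_log_post, eps, L, rng=None):
    """One HMC update of y = log(w_t) following the leapfrog scheme above.
    log_post(y) and grad_log_post(y) return the log-posterior and its gradient."""
    rng = np.random.default_rng() if rng is None else rng
    p0 = rng.standard_normal(y.shape)
    y_new = y.copy()
    p = p0 + 0.5 * eps * grad_log_post(y_new)
    for _ in range(L - 1):
        y_new = y_new + eps * p
        p = p + eps * grad_log_post(y_new)
    y_new = y_new + eps * p
    p = -(p + 0.5 * eps * grad_log_post(y_new))
    log_r = (log_post(y_new) - log_post(y)
             - 0.5 * np.sum(p ** 2) + 0.5 * np.sum(p0 ** 2))
    accept = np.log(rng.uniform()) < log_r
    return (y_new, True) if accept else (y, False)
\end{verbatim}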
\subsection{Update of $c_{ti}$}
We update $\mathbf c_t |\mathbf w_{t}, \mathbf w_{t+1}$ for each $t$. The conditional distribution is:
\begin{align}
p(c_{tk} | w_{tk}, w_{t+1k}) \propto& p(c_{tk}| w_{tk})p(w_{t+1k} | c_{tk})
\end{align}
where
\begin{align}
p(c_{tk}| w_{tk}) =& \Poisson{c_{tk}; \phi w_{tk}}\\
p(w_{t+1, k}| c_{tk}, w_{tk}) =&\text{BFRY}\left(w_{t+1, k};\alpha'_t/K,\tau+\phi,\sigma - c_{tk}\right)
\end{align}
for each $t=1, \dots, T$ and $k=1, \dots, K$. We use Metropolis-Hastings to sample $c_{tk}$ with a Poisson proposal distribution, i.e. $\tilde{c}_{tk} \sim \Poisson{\phi w_{tk}}$. We accept with probability $\min(1, r)$, where
\begin{align}
r = \frac{p(w_{t+1, k} | \tilde{c}_{tk})}{p(w_{t+1, k} | c_{tk})} = \frac{\text{BFRY}\left(w_{t+1, k};\tilde{\alpha}'_t/K,\tau+\phi,\sigma - \tilde{c}_{tk}\right)}{\text{BFRY}\left(w_{t+1, k};\alpha'_t/K,\tau+\phi,\sigma - c_{tk}\right)}
\end{align}
where we recall that $\alpha'_t$ depends on $c_{tk}$ (and $\tilde{\alpha}'_t$ on $\tilde{c}_{tk}$).
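For concreteness, the acceptance step can be implemented by evaluating the BFRY log-density directly in terms of $t_{\alpha,\sigma}$; the terms not involving $c_{tk}$ cancel in the ratio, but for simplicity the following Python sketch (illustration only, scalar arguments assumed) evaluates the full log-density.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def bfry_logpdf(w, c, sigma, tau_plus_phi, t_as):
    """log-density of BFRY(alpha'_t/K, tau + phi, sigma - c) at w, written in
    terms of t_as = t_{alpha,sigma}. The absolute values handle sigma - c < 0,
    where the numerator and the normalising difference are both negative."""
    s = sigma - c
    log_norm = (gammaln(1.0 - s)
                + np.log(np.abs((tau_plus_phi + t_as) ** s - tau_plus_phi ** s))
                - np.log(np.abs(s)))
    return ((-1.0 - s) * np.log(w) - tau_plus_phi * w
            + np.log1p(-np.exp(-t_as * w)) - log_norm)

def update_c(c, w_t, w_next, phi, sigma, tau, t_as, rng=None):
    """Metropolis-Hastings update of c_tk with proposal Poisson(phi * w_tk)."""
    rng = np.random.default_rng() if rng is None else rng
    c_prop = rng.poisson(phi * w_t)
    log_r = (bfry_logpdf(w_next, c_prop, sigma, tau + phi, t_as)
             - bfry_logpdf(w_next, c, sigma, tau + phi, t_as))
    return c_prop if np.log(rng.uniform()) < log_r else c
\end{verbatim}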
\subsection{Update of $\alpha$, $\sigma$, $\tau$, $\phi$}
The joint posterior is given by
\begin{align}\label{eq:jointpost}
p(\alpha, \sigma, \tau, \phi|C,W)\propto& \prod^{K}_{k=1} \bigg(p(w_{1k}) p(c_{1k}| \phi, w_{1k})\bigg) \prod^{T}_{t=2}
\bigg[\prod^{K}_{k=1} \bigg(p(w_{tk} |c_{t-1k}) p(c_{tk}| \phi, w_{tk})\bigg)\bigg] \nonumber \\
& \times p(\alpha) p(\sigma) p(\tau) p(\phi)\nonumber \\
\propto& \prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{1k};\alpha/K,\tau,\sigma \right) \frac{(w_{1k}\phi)^{c_{1k}}e^{-\phi w_{1k}}}{c_{1k}!}\bigg) \nonumber \\
& \times \prod^{T}_{t=2}
\bigg[\prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{tk};\alpha'_t/K,\tau+\phi,\sigma - c_{t-1,k}\right) \frac{(w_{tk}\phi)^{c_{tk}}e^{-\phi w_{tk}}}{c_{tk}!}\bigg)\bigg] \nonumber \\
&\times p(\alpha) p(\sigma) p(\tau)p(\phi)
\end{align}
We place Gamma priors on $\alpha$, $\tau$ and $\phi$, and a Beta prior on $\sigma$:
\begin{align}
p(\alpha) &=\GammaD{\alpha; a_1, a_2}\\
p(\sigma) &= \BetaD{\sigma; s_1, s_2} = \frac{\sigma^{s_1 -1 } (1-\sigma)^{s_2-1} }{B(s_1, s_2)}\\
p(\tau) &= \GammaD{\tau; t_1, t_2} \\
p(\phi) &= \GammaD{\phi; f_1, f_2}
\end{align}
We then use a Metropolis Hastings sampler, with proposals given by:
\begin{align}
q(\tilde{\alpha}|\alpha ) &=\logn{\tilde{\alpha}; \log(\alpha), \sigma_{\alpha}} = \frac{1}{\sigma_\alpha \sqrt{2\pi} \tilde{\alpha}} \exp{- \frac{(\log{\tilde{\alpha}} - \log{\alpha} )^2}{2\sigma^2_\alpha} } \\
q(\tilde{\tau}|\tau ) &=\logn{\tilde{\tau}; \log(\tau), \sigma_{\tau}} = \frac{1}{\sigma_\tau \sqrt{2\pi} \tilde{\tau}} \exp{- \frac{(\log{\tilde{\tau}} - \log{\tau} )^2}{2\sigma^2_\tau} } \\
q(\tilde{\phi}|\phi ) &=\logn{\tilde{\phi}; \log(\phi), \sigma_{\phi}} = \frac{1}{\sigma_\phi \sqrt{2\pi} \tilde{\phi}} \exp{- \frac{(\log{\tilde{\phi}} - \log{\phi} )^2}{2\sigma^2_\phi} } \\
q(\tilde{\sigma}|\sigma ) &= \frac{1}{\sigma_\sigma \sqrt{2\pi} \tilde{\sigma}(1- \tilde{\sigma}) } \exp{- \frac{(\log{\frac{\tilde{\sigma}}{1 - \tilde{\sigma}} } - \log{\frac{\sigma}{1-\sigma}} )^2}{2\sigma^2_\sigma} }
\end{align}
The parts of the MH ratio involving the priors and proposals of the hyperparameters are given by:
\begin{align}
\frac{p(\tilde{\alpha})q(\alpha| \tilde{\alpha} )}{p(\alpha)q(\tilde{\alpha}|\alpha )} &=e^{-a_2(\tilde{\alpha}-\alpha)}\left(\frac{\tilde{\alpha}}{\alpha}\right)^{a_1}\\
\frac{p(\tilde{\tau})q(\tau| \tilde{\tau} )}{p(\tau)q(\tilde{\tau}|\tau )} &=e^{-t_2(\tilde{\tau}-\tau)}\left(\frac{\tilde{\tau}}{\tau}\right)^{t_1}\\
\frac{p(\tilde{\phi})q(\phi| \tilde{\phi} )}{p(\phi)q(\tilde{\phi}|\phi )} &=e^{-f_2(\tilde{\phi}-\phi)}\left(\frac{\tilde{\phi}}{\phi}\right)^{f_1}\\
\frac{p(\tilde{\sigma})q(\sigma| \tilde{\sigma} )}{p(\sigma)q(\tilde{\sigma}|\sigma )} &=\left(\frac{\tilde{\sigma}}{\sigma}\right)^{s_1}\left(\frac{1-\tilde{\sigma}}{1-\sigma}\right)^{s_2}
\end{align}
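For completeness, the random-walk proposals above can be generated as follows (a small Python sketch, illustration only; \texttt{step} collects the proposal standard deviations $\sigma_\alpha,\sigma_\sigma,\sigma_\tau,\sigma_\phi$).
\begin{verbatim}
import numpy as np

def propose_hyperparams(alpha, sigma, tau, phi, step, rng=None):
    """Log-normal random-walk proposals for alpha, tau, phi and a logit-normal
    random walk for sigma, matching the proposal densities q(.|.) above."""
    rng = np.random.default_rng() if rng is None else rng
    alpha_p = np.exp(np.log(alpha) + step['alpha'] * rng.standard_normal())
    tau_p = np.exp(np.log(tau) + step['tau'] * rng.standard_normal())
    phi_p = np.exp(np.log(phi) + step['phi'] * rng.standard_normal())
    logit = np.log(sigma / (1.0 - sigma)) + step['sigma'] * rng.standard_normal()
    sigma_p = 1.0 / (1.0 + np.exp(-logit))
    return alpha_p, sigma_p, tau_p, phi_p
\end{verbatim}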
The MH ratio without the priors and proposals on $\alpha, \sigma, \tau$ and $\phi$ is given by
\begin{align}
\dfrac{\splitdfrac{ \prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{1k};\tilde{\alpha}/K,\tilde{\tau},\tilde{\sigma} \right) \frac{(w_{1k}\tilde{\phi})^{c_{1k}}e^{-\tilde{\phi} w_{1k}}}{c_{1k}!}\bigg)}{\times \prod^{T}_{t=2}
\bigg[\prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{tk};\tilde{\alpha}'_t/K,\tilde{\tau}+\tilde{\phi},\tilde{\sigma} - c_{t-1,k}\right) \frac{(w_{tk}\tilde{\phi})^{c_{tk}}e^{-\tilde{\phi} w_{tk}}}{c_{tk}!}\bigg)\bigg]}}
{\splitdfrac{\prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{1k};\alpha/K,\tau,\sigma \right) \frac{(w_{1k}\phi)^{c_{1k}}e^{-\phi w_{1k}}}{c_{1k}!}\bigg)}{\times \prod^{T}_{t=2}\bigg[\prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{tk};\alpha'_t/K,\tau+\phi,\sigma - c_{t-1,k}\right) \frac{(w_{tk}\phi)^{c_{tk}}e^{-\phi w_{tk}}}{c_{tk}!}\bigg)\bigg]}}
\end{align}
The logarithm of the ratio is then given by:
\begin{align}
&K\left[\log(\tilde{\sigma}) - \log(\sigma) \right] + K\left[\log(\Gamma(1-\sigma)) - \log(\Gamma(1-\tilde{\sigma})) \right] + \left[- (\tilde{\tau} + \tilde{\phi}) + (\tau + \phi) \right]\sum_{k=1}^K w_{1k}\nonumber\\
&+K\left[\log\left(\left\{ \left( \tau +t_{\alpha,\sigma} \right)^{\sigma} -\tau^{\sigma}\right\} \right) - \log\left(\left\{ \left( \tilde{\tau} +t_{\tilde{\alpha},\tilde{\sigma}} \right)^{\tilde{\sigma}} -\tilde{\tau}^{\tilde{\sigma}}\right\} \right) \right]\nonumber\\
&+\left(\sigma - \tilde{\sigma}\right)\sum_{t=1}^T\sum_{k=1}^K\log(w_{tk}) +\left[\log(\tilde{\phi}) - \log\phi)\right]\sum_{t=1}^T\sum_{k=1}^Kc_{tk}\nonumber\\
&+ \sum_{t=1}^T\sum_{k=1}^K \left[ \log\left(1-e^{-t_{\tilde{\alpha},\tilde{\sigma}}w_{tk}}\right) - \log\left(1-e^{-t_{\alpha,\sigma}w_{tk}}\right) \right]\nonumber\\
&+\sum_{t=2}^T\sum_{k=1}^K\log\left( \frac{\Gamma(1-\sigma + c_{t-1k})}{\Gamma(1-\tilde{\sigma} + c_{t-1k})}\right) + \left[- (\tilde{\tau} + 2\tilde{\phi}) + (\tau + 2\phi) \right]\sum_{t=2}^T\sum_{k=1}^K w_{tk}\nonumber\\
&+\sum_{t=2}^T\sum_{k=1}^K\log\left( \frac{\tilde{\sigma} - c_{t-1k}}{\sigma - c_{t-1k}}\right) + \sum_{t=2}^T\sum_{k=1}^K\log\left( \frac{\left\{ \left( \tau + \phi +t_{\alpha,\sigma} \right)^{\sigma - c_{t-1k}} -(\tau+\phi)^{\sigma- c_{t-1k}}\right\}}{\left\{ \left( \tilde{\tau}+\tilde{\phi} +t_{\tilde{\alpha},\tilde{\sigma}} \right)^{\tilde{\sigma}- c_{t-1k}} -\left(\tilde{\tau}+\tilde{\phi}\right)^{\tilde{\sigma}- c_{t-1k}}\right\}}\right)
\end{align}
\subsection{Log-posterior density}
\label{subsec:app:jointposterior}
The posterior probability density function, given the latent variables and up to a normalizing constant, thus takes the form:
\begin{align}
&p\left( \left(w_{tk},c_{tk}\right)_{k=1,\ldots,K, t= 1, \ldots, T},\sigma,\tau, \phi,\alpha \mid (n_{tij})_{1\leq i,j\leq K, t=1,\ldots, T}\right)\nonumber\\
& \propto \prod^{T}_{t=1}\bigg[p(\mathbf n_t| \mathbf w_t)\bigg]\prod^{K}_{k=1} \bigg(p(w_{1k}) p(c_{1k}| \phi, w_{1k})\bigg) \prod^{T}_{t=2}
\bigg[\prod^{K}_{k=1} \bigg(p(w_{tk} |c_{t-1k}) p(c_{tk}| \phi, w_{tk})\bigg)\bigg] \nonumber \\
&~~~~\times p(\alpha) p(\sigma) p(\tau) p(\phi)\nonumber \\
& \propto \prod^{T}_{t=1}\bigg[\Big[\prod^{K}_{k=1} w^{m_{tk}}_{tk}\Big] e^{- (\sum^{K}_{k=1} w_{tk})^2 }\bigg] \prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{1k};\alpha/K,\tau,\sigma \right) \frac{(w_{1k}\phi)^{c_{1k}}e^{-\phi w_{1k}}}{c_{1k}!}\bigg)\nonumber\\
&~~~~\times \prod^{T}_{t=2} \bigg[\prod^{K}_{k=1} \bigg(\text{BFRY}\left(w_{tk};\alpha'_t/K,\tau+\phi,\sigma - c_{t-1,k}\right) \frac{(w_{tk}\phi)^{c_{tk}}e^{-\phi w_{tk}}}{c_{tk}!}\bigg)\bigg] \nonumber \\
&~~~~\times p(\alpha) p(\sigma) p(\tau)p(\phi).
\label{eq:app:jointposterior}
\end{align}
\section{MCMC plots}\label{sec:app:mcmc_plots}
In this section we give the MCMC trace plots for the simulated and real data experiments. In each case, the samples are thinned so that we have $100$ samples from the posterior.
\subsection{Simulated Data}
In Figure \ref{fig:mcmc_trace_simulated_parameters} we see the trace plots for the hyperparameters $\alpha, \sigma, \tau$ and $\phi$, as well as the log-posterior (up to a constant), for the network simulated from the GG model. We can see that the parameters other than $\phi$ converge well to their true values, and that in each case the mixing of the three chains is good. Furthermore, we can see that the log-posterior has converged to a stable value.
\begin{figure}
\caption{MCMC trace plots of hyperparameters and logposterior for the network simulated from the GG model, with $\alpha = 100$, $\sigma = 0.2$, $\tau = 1$ and $\phi= 10$. The true values in each case are denoted by the dotted green line.}
\label{fig:mcmc_trace_simulated_parameters}
\end{figure}
\subsection{Real Data}
\subsubsection{Reddit Hyperlink Network}
In Figure \ref{fig:mcmc_trace_reddit_parameters} we see the trace plots for the hyperparameters $\alpha, \sigma, \tau$ and $\phi$, as well as the log-posterior, for the Reddit hyperlink network. We can see that the parameters and the log-posterior have converged to stable values.
\begin{figure}
\caption{MCMC trace plots of hyperparameters and logposterior for the Reddit hyperlink network.}
\label{fig:mcmc_trace_reddit_parameters}
\end{figure}
\subsubsection{Reuters Terror Dataset}
In Figure \ref{fig:mcmc_trace_reuters_parameters} we see the trace plots for the hyperparameters $\alpha, \sigma, \tau$ and $\phi$, as well as the log-posterior, for the Reuters terror network. In this case, we see that the hyperparameters have not yet converged. However, we found that running a longer chain provided similar credible intervals for the weights and degree distribution.
\begin{figure}
\caption{MCMC trace plots of hyperparameters and logposterior for the Reuters terror network.}
\label{fig:mcmc_trace_reuters_parameters}
\end{figure}
\end{document} |
\begin{document}
\title{High fidelity quantum cloning of two known nonorthogonal quantum states via weak measurement}
\author{Ming-Hao Wang}
\affiliation{State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics,
Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China}
\affiliation{University of Chinese Academy of Sciences, Beijing 100049, China}
\author{Qing-Yu Cai}
\thanks{Corresponding author. \\Electronic address: [email protected].}
\affiliation{State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071, China}
\begin{abstract}
We propose a scheme to enhance the fidelity of a symmetric quantum cloning machine using weak measurement. By adjusting the strength $p$ of the weak measurement, we obtain copies with different optimal fidelities, and by choosing a proper value of $p$ we can obtain perfect copies of the initial qubits. In this paper, we focus on $1$-$2$ quantum cloning for two nonorthogonal states; sets containing more than two linearly independent states are also discussed briefly. Since weak measurements are probabilistic, the high fidelity comes at the price of a nonunit success probability. If the weak measurement succeeds, we perform the subsequent operations to obtain copies with high fidelity; otherwise, the cloning process fails and nothing further needs to be done. From this perspective, the scheme we propose is economical, saving quantum resources and time, which may be very useful in quantum information processing.
\end{abstract}
\pacs{03.65-w, 03.47.-a, 89.70+c}
\maketitle
\section{Introduction}
In recent years quantum information science has developed rapidly, and the study of quantum information processing (QIP) has been attracting much attention from various
research communities. Various schemes for logic gates, such as CNOT and SWAP, analogous to those required in classical computers, have been proposed theoretically and implemented experimentally in many systems, including optical photons~\cite{Knill2001,Nagali2009,Li2013,Tischler2017,Zeuner2017}, trapped ions~\cite{Cirac1995,Monroe1997,Kielpinski2002}, cavity quantum electrodynamics~\cite{Rauschenbeutel1999,Zubairy2003,Yan2018}, and liquid-state nuclear magnetic resonance~\cite{Vandersypen2001,Baugh2007,Suter2008}. Quantum computing therefore appears to have a promising future. Peculiar principles of quantum mechanics, such as linearity, unitarity, and inseparability, have been exploited to realize quantum computation~\cite{Nielsen2011}. On one hand, these principles enhance the capacity of information processing; on the other hand, they also impose some limitations~\cite{Pati2002}. A fundamental restriction in QIP is that an unknown quantum state cannot be copied perfectly~\cite{Wootters1982}, in contrast with the ubiquitous replication of information in the classical world~\cite{Wang2018}. This is a consequence of the linearity of quantum mechanics, and it makes a qubit fundamentally different from a classical bit. This limitation is known as the no-cloning theorem, and it has found applications in quite different fields of quantum information theory, such as quantum computation and quantum cryptography~\cite{Gisin2002}.
However, if we pay some price, approximate or even exact cloning becomes possible: the no-cloning theorem does not prohibit approximate cloning of an arbitrary state of a quantum mechanical system. Bu{\v{z}}ek and Hillery first presented a scheme in which, given an unknown qubit, two identical output qubits as close as possible to the input qubit are produced~\cite{Buzek1996}. After their seminal paper, quantum cloning has been extensively studied and many important achievements have been made, both theoretically and experimentally~\cite{Fan2014,Scarani2005}. The Bu{\v{z}}ek-Hillery quantum cloning machine is state-independent and is known as the universal quantum cloning machine (UQCM); its optimal fidelity is $5/6$. Shortly after, a new quantum cloning machine was proposed~\cite{Brus2000,Fan2002}, called the quantum phase-covariant cloning machine (QPCCM). The fidelity of the QPCCM reaches $0.854$, which is higher than that of the UQCM. The QPCCM is particularly important in quantum cryptography, as it provides the optimal eavesdropping strategy for a large class of attacks on quantum cryptographic protocols~\cite{Bennett1984,Ekert1991,Ferenczi2012}. Besides deterministic quantum cloning machines, Duan and Guo~\cite{Duan1998,Duan1998a} showed that a quantum state secretly chosen from a linearly independent set of states can be probabilistically cloned with unit fidelity; they called this process probabilistic quantum cloning (PQC). This kind of cloning differs from deterministic cloning machines in that there is a nonzero probability that the cloning process fails; however, when it succeeds, perfect copies of the initial qubits are obtained. The scheme proposed in this paper is different from PQC, and the detailed differences between them will be discussed later.
It is by now well understood that qubits can be manipulated more effectively if some price is paid; the QPCCM is one example. Recently, an optimal quantum cloning machine that clones qubits with an arbitrary symmetric distribution around a Bloch vector was investigated~\cite{Bartkiewicz2010}. More generally, states in a block regime, i.e., a simply connected region enclosed by a ``longitude-latitude grid'' on the Bloch sphere, were also investigated~\cite{Kang2016}. All of these quantum cloning machines are based on the maximin principle, making full use of
a priori information about the amplitude and phase of the to-be-cloned input set. As expected, their performance is better than that of the UQCM. In addition to the price of limiting the range of input states, other resources can also be sacrificed, such as the probability of success; PQC is one such example.
Inspired by these previous works, we propose a new scheme that combines weak measurement and unitary transformation. Given a qubit secretly chosen from the state set $\{\ket{\psi_1},\ket{\psi_2}\}$, our task is to duplicate it. Many related works have been reported~\cite{Brus1998,Duan1998a,Zhang2012}; in all of these schemes, the given qubits are cloned directly. Compared with these works, our scheme has a particular advantage: before the complicated cloning transformation, we pretreat the given qubits, and this pretreatment allows us to obtain copies with higher fidelity. Whether the subsequent operations are carried out depends on the measurement outcome, which economizes quantum resources and makes our scheme an economical one. This may be useful in QIP. The rest of this paper is organized as follows. In Sect.~II, we briefly review weak measurement and quantum cloning machines, especially optimal quantum cloning for two nonorthogonal states. In Sect.~III, we present our scheme for higher cloning fidelity in detail. Finally, a concise summary is given in Sect.~IV.
\section{Theory}
\subsection{Weak measurement}
The projection postulate is one of the basic postulates of the standard quantum theory and it states that measurement of a variable of a quantum system irrevocably collapses the initial state to one of the eigenstates (corresponding to the measurement outcome) of the measurement operator. Once the initial state collapses due to a projection measurement on a quantum system, it can never be recovered. However, the situation is different for the case that the measurement is not sharp, i.e., non-projective measurement~\cite{Kim2009}. For weak measurements, the information extracted from the quantum system is deliberately limited, thereby keeping the measured system's state from randomly collapsing towards an eigenstate. It is possible to reverse the measurement-induced state collapse and the unsharpness of a measurement has been shown to be related to the probabilistic nature of the reversing operation which can serve as a probabilistic quantum error correction~\cite{Koashi1999}.
Consider a qubit whose initial state is a pure state $\ket{\phi}$, and let the measurement operators $P_{1}$ and $P_{2}$ be orthogonal projectors whose sum $P_{1}+P_{2}=I$ is the identity. We introduce the operators
\begin{equation}\label{weak_measurement}
\hat{M}_{yes}=\sqrt{p}\hat{P}_{1}+\hat{P}_{2}, \qquad \hat{M}_{no}=\sqrt{1-p}\hat{P}_{1}, \quad p\in [0,1].
\end{equation}
It should be noted that $\hat{M}_{yes}^{\dag}\hat{M}_{yes}+\hat{M}_{no}^{\dag}\hat{M}_{no}=I$, and therefore $\hat{M}_{yes}$ and $\hat{M}_{no}$ describe a measurement. Consider the effect of these operators on a pure state $\ket{\phi}$. The state can be rewritten as $\ket{\phi}=\sqrt{ p_1}\ket{\phi_1}+\sqrt{p_2}\ket{\phi_2}$, where $\ket{\phi_{1,2}}=\hat{P}_{1,2}\ket{\phi}/\sqrt{p_{1,2}}$ are the two possible outcomes of the projective measurement and $p_{1,2}=\bra{\phi}\hat{P}_{1,2}\ket{\phi}$ are the corresponding probabilities. The operator $\hat{M}_{yes}$ decreases the ratio $\frac{p_1}{p_2}$, moving $\hat{M}_{yes}\ket{\phi}$ toward $\ket{\phi_2}$, while the operator $\hat{M}_{no}$ collapses $\ket{\phi}$ onto $\ket{\phi_1}$.
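As a quick numerical sanity check (ours, not part of the original derivation), the following Python snippet verifies the completeness relation $\hat{M}_{yes}^{\dag}\hat{M}_{yes}+\hat{M}_{no}^{\dag}\hat{M}_{no}=I$ for the particular choice $\hat P_1=\ket{+}\bra{+}$, $\hat P_2=\ket{-}\bra{-}$ used later in the paper, and for an arbitrary strength $p$.
\begin{verbatim}
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
P1 = np.outer(plus, plus)     # |+><+|
P2 = np.outer(minus, minus)   # |-><-|

p = 0.3                        # any value in [0, 1]
M_yes = np.sqrt(p) * P1 + P2
M_no = np.sqrt(1 - p) * P1

completeness = M_yes.conj().T @ M_yes + M_no.conj().T @ M_no
assert np.allclose(completeness, np.eye(2))   # holds for every p, so {M_yes, M_no} is a measurement
\end{verbatim}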
\subsection{Optimal quantum cloning for two nonorthogonal states}
Suppose we are given, with equal probability, one quantum state from a set of two known nonorthogonal quantum states of the form
\begin{equation}\label{state_set}
\ket{\psi_1}=\cos(\xi)\ket{0}+\sin(\xi)\ket{1}, \quad \ket{\psi_2}=\sin(\xi)\ket{0}+\cos(\xi)\ket{1},
\end{equation}
where $\xi \in [0,\pi/4]$, with the inner product
\begin{equation}
\bra{\psi_1}\ket{\psi_2}=\sin(2\xi).
\end{equation}
The symmetric 1-2 state-dependent cloning transformation takes the following form:
\begin{equation}\label{transformation}
\begin{split}
\ket{00} & \rightarrow a\ket{00}+b(\ket{01}+\ket{10})+c\ket{11},\\
\ket{10} & \rightarrow a\ket{11}+b(\ket{10}+\ket{01})+c\ket{00},
\end{split}
\end{equation}
where we assume the cloning coefficients $a$, $b$ and $c$ are real numbers. Due to the unitarity of the transformation, the following conditions must be satisfied:
\begin{equation}\label{unitary_condition}
a^2+2b^2+c^2=1,\quad a c+ b^2=0.
\end{equation}
Solving Eq.~(\ref{unitary_condition}), we obtain:
\begin{equation}\label{value_of_a_and_c}
a=\frac{1}{2}(\sqrt{1-4b^2}+1),\quad c=\frac{1}{2}(\sqrt{1-4b^2}-1).
\end{equation}
In previous work, the fidelity is often used as the figure of merit; it is defined as $F=\bra{\phi_{in}}\rho_{out}\ket{\phi_{in}}$, where $\rho_{out}$ is the reduced density matrix of output qubit $1$ or $2$. Due to the symmetry of the transformation given by Eq.~(\ref{transformation}), we obtain the fidelity of the copies as
\begin{equation}
F(\ket{\psi_{1}})=F(\ket{\psi_{2}})=\frac{1}{4} (3 a^2+4 (a+b) (b+c) \sin (2 \xi )+(a+c) \cos (4 \xi ) (a-2 b-c)+2 a b+4 b^2+2 b c+c^2)
\end{equation}
After some calculation, using the method of Lagrange multipliers, we can determine the cloning coefficient $b$ as
\begin{equation}\label{value_of_ b}
b=\frac{1}{8} (1-\csc (2 \xi )+\csc (2 \xi )\sqrt{9 \sin ^2(2 \xi )-2 \sin (2 \xi )+1} ).
\end{equation}
Combining Eq.~(\ref{value_of_ b}) with Eq.~(\ref{value_of_a_and_c}), we obtain the explicit transformation and the maximum fidelity of the optimal cloner.
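The formulas above are easy to evaluate numerically. The following Python sketch (ours, for illustration only) computes $b$ from Eq.~(\ref{value_of_ b}), $a$ and $c$ from Eq.~(\ref{value_of_a_and_c}), checks the unitarity constraints of Eq.~(\ref{unitary_condition}), and evaluates the fidelity of the optimal state-dependent cloner for a given $\xi$.
\begin{verbatim}
import numpy as np

def cloner_coefficients(xi):
    """Cloning coefficients (a, b, c) of the optimal state-dependent cloner."""
    s2 = np.sin(2 * xi)
    b = (1 - 1 / s2 + np.sqrt(9 * s2**2 - 2 * s2 + 1) / s2) / 8
    root = np.sqrt(1 - 4 * b**2)
    return (root + 1) / 2, b, (root - 1) / 2

def fidelity(xi):
    """Copy fidelity of the cloner, following the expressions in this subsection."""
    a, b, c = cloner_coefficients(xi)
    assert np.isclose(a**2 + 2 * b**2 + c**2, 1) and np.isclose(a * c + b**2, 0)
    return (3 * a**2 + 4 * (a + b) * (b + c) * np.sin(2 * xi)
            + (a + c) * np.cos(4 * xi) * (a - 2 * b - c)
            + 2 * a * b + 4 * b**2 + 2 * b * c + c**2) / 4

print(fidelity(np.pi / 8))   # optimal fidelity for overlap sin(2 xi) = 1/sqrt(2)
\end{verbatim}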
\section{Scheme for higher fidelity}
In this section, we investigate how to enhance the fidelity by using weak measurement. Unlike the schemes proposed by others, we perform a weak measurement as a pretreatment before the cloning transformation. After the measurement, we obtain new intermediate qubits; feeding them into the cloning machine, we obtain the desired final qubits with high fidelity at its output. Let us now present our scheme in detail. Suppose we are given a qubit selected randomly from Eq.~(\ref{state_set}). We first perform the weak measurement described by Eq.~(\ref{weak_measurement}) on the given state. Let $\hat{P}_1=\ket{+}\bra{+}$ and $\hat{P}_2=\ket{-}\bra{-}$; then $\ket{\psi_{1,2}}$ can be rewritten as
\begin{equation*}
\ket{\psi_{1,2}}=\frac{\sqrt{2}}{2}\big(\cos(\xi)+\sin(\xi)\big)\ket{+}\pm \frac{\sqrt{2}}{2}\big(\cos(\xi)-\sin(\xi)\big)\ket{-}.
\end{equation*}
After the weak measurement, if the outcome is `yes', the initial qubits become
\begin{equation}
\ket{\psi_{1,2}'}=\frac{\hat{M}_{yes} \ket{\psi_{1,2}}}{\sqrt{p_{yes}}}=\frac{\sqrt{p} (\sin (\xi )+\cos (\xi ))}{\sqrt{(p-1) \sin (2 \xi )+p+1}}\ket{+}\pm \frac{\cos (\xi )-\sin (\xi )}{\sqrt{(p-1) \sin (2 \xi )+p+1}}\ket{-},
\end{equation}
with the probability
\begin{equation}\label{probability}
p_{yes}=\frac{1}{2} ((p-1) \sin (2 \xi )+p+1).
\end{equation}
Similarly, we obtain the inner product of the two possible intermediate states as
\begin{equation}\label{inner_product}
\bra{\psi_{1}'}\ket{\psi_{2}'}=\frac{(p+1) \sin (2 \xi )+p-1}{(p-1) \sin (2 \xi )+p+1},
\end{equation}
Next, we apply a transformation to the intermediate qubit $\ket{\psi'_{1,2}}$ and an ancillary qubit, which is initially in the blank state $\ket{0}$. For the sake of convenience, we substitute $\frac{\sqrt{p} (\sin (\xi )+\cos (\xi ))}{\sqrt{(p-1) \sin (2 \xi )+p+1}}$ and $\frac{\cos (\xi )-\sin (\xi )}{\sqrt{(p-1) \sin (2 \xi )+p+1}}$ with $\frac{\sqrt{2}}{2}\big(\cos(\xi')+\sin(\xi')\big)$ and $\frac{\sqrt{2}}{2}\big(\cos(\xi')-\sin(\xi')\big)$,
respectively, so the intermediate qubits can be rewritten as:
\begin{gather}\label{intermediate_states}
\ket{\psi_{1}'}=\frac{\sqrt{2}}{2}\big(\cos(\xi')+\sin(\xi')\big)\ket{+}+\frac{\sqrt{2}}{2}\big(\cos(\xi')-\sin(\xi')\big)\ket{-}=\cos(\xi')\ket{0}+\sin(\xi')\ket{1},\\
\ket{\psi_{2}'}=\frac{\sqrt{2}}{2}\big(\cos(\xi')+\sin(\xi')\big)\ket{+}-\frac{\sqrt{2}}{2}\big(\cos(\xi')-\sin(\xi')\big)\ket{-}=\sin(\xi')\ket{0}+\cos(\xi')\ket{1},
\end{gather}
Combining these with Eq.~(\ref{inner_product}), we obtain the key relation
\begin{equation}\label{equation}
\sin(2\xi')=\frac{(p+1) \sin (2 \xi )+p-1}{(p-1) \sin (2 \xi )+p+1}.
\end{equation}
By applying the transformation of Eq.~(\ref{transformation}), we obtain the fidelity of the copies as
\begin{equation}\label{fidelity}
F(\ket{\psi_{1}'})=F(\ket{\psi_{2}'})=\frac{1}{2} \big(1+(a+c) \cos (2 \xi ) \cos (2 \xi')+2 b (a+c)\sin (2 \xi)(\sin (2 \xi' )+1)\big).
\end{equation}
We want to derive the optimal fidelity under the constraints of Eq.~(\ref{unitary_condition}). After some algebra, we determine the cloning coefficient $b$ as
\begin{gather}
b=\frac{\csc(2 \xi) \sqrt{8 (\sin (2 \xi' )+1)^2 \sin ^2(2 \xi)+\cos ^2(2 \xi' ) \cos ^2(2 \xi)}}{8 (\sin (2 \xi' )+1)}-\frac{\cos (2 \xi' ) \cot(2 \xi)}{8 (\sin (2 \xi' )+1)}.
\end{gather}
The optimal fidelity then takes the expression
\begin{equation}\label{final_fidelity}
\begin{split}
F=&\frac{1}{32} \{16+\frac{3 \sqrt{2} \cos (2 \xi' ) \cos (2 \xi)}{(\sin (2 \xi' )+1)}\big[4 \sin ^2(2 \xi' )+8 \sin (2 \xi' )-\cos ^2(2 \xi' ) \cot ^2(2 \xi)\\
&+\cos (2 \xi' ) \cot (2 \xi) \sqrt{ (\cos ^2(2 \xi' ) \cot ^2(2 \xi)+8 (1+\sin (2\xi' ))^2)}+4\big]^{1/2}\\
+&\frac{\sqrt{2}\sin(2 \xi)}{(\sin (2 \xi' )+1)^2}\sqrt{ (\cos ^2(2 \xi' ) \cot ^2(2 \xi)+8 (1+\sin (2\xi' ))^2)} \big[4 \sin ^2(2 \xi' )+8 \sin (2 \xi' )-\cos ^2(2 \xi' ) \cot ^2(2 \xi)\\
&+\cos (2 \xi' ) \cot (2 \xi)\sqrt{ (\cos ^2(2 \xi' ) \cot ^2(2 \xi)+8 (1+\sin (2\xi' ))^2)}+4\big]^{1/2}\}
\end{split}
\end{equation}
\begin{figure}
\caption{(Color) The dependence of the fidelity on the angles $\xi$ and $\xi'$. The dotted line corresponds to $\sin(2\xi')=\sin^2(2\xi)$, along which the fidelity reaches unity.}
\label{figure_fidelity}
\end{figure}
Analyzing Eq.~(\ref{final_fidelity}), as shown in Fig.~\ref{figure_fidelity}, we find that under the condition $\sin(2\xi')=\sin^2(2\xi)$, $F$ always reaches unity, which means we obtain two perfect copies of the initial state. For instance, for $\xi=\pi/8$ and $\xi'=\pi/12$, which satisfy the above condition, all parameters are determined and we obtain the initial qubits
\[\ket{\psi_1}=\frac{1}{2} \sqrt{2+\sqrt{2}} \ket{0} +\frac{1}{2} \sqrt{2-\sqrt{2}} \ket{1}, \quad \ket{\psi_2}=\frac{1}{2}\sqrt{2-\sqrt{2}} \ket{0} +\frac{1}{2} \sqrt{2+\sqrt{2}} \ket{1},\]
intermediate qubits
\[\ket{\psi'_1}=\frac{1}{4} (\sqrt{2}+\sqrt{6})\ket{0}+\frac{1}{4} (\sqrt{6}-\sqrt{2})\ket{1}, \quad \ket{\psi'_2}=\frac{1}{4} (\sqrt{6}-\sqrt{2})\ket{0}+\frac{1}{4} (\sqrt{2}+\sqrt{6})\ket{1}, \]
transformation coefficients
\[b=\frac{1}{2 \sqrt{3}}, \quad a=\frac{1}{2} (\sqrt{\frac{2}{3}}+1), \quad c=\frac{1}{2} (\sqrt{\frac{2}{3}}-1),\]
and the final qubits
\begin{gather}\label{final_qubits}
\ket{\psi_1}\rightarrow \ket{\psi'_1} \rightarrow \left(\frac{1}{2} \sqrt{2+\sqrt{2}} \ket{0} +\frac{1}{2} \sqrt{2-\sqrt{2}} \ket{1}\right)\otimes \left(\frac{1}{2} \sqrt{2+\sqrt{2}} \ket{0} +\frac{1}{2} \sqrt{2-\sqrt{2}} \ket{1}\right),\\
\ket{\psi_2}\rightarrow \ket{\psi'_2} \rightarrow \left(\frac{1}{2} \sqrt{2-\sqrt{2}} \ket{0} +\frac{1}{2} \sqrt{2+\sqrt{2}} \ket{1}\right)\otimes \left(\frac{1}{2} \sqrt{2-\sqrt{2}} \ket{0} +\frac{1}{2} \sqrt{2+\sqrt{2}} \ket{1}\right),
\end{gather}
which are exactly the perfect copies of the initial qubits.
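The worked example can be checked end to end with a few lines of code. In the sketch below (ours, purely illustrative), the weak-measurement strength $p$ is obtained by solving Eq.~(\ref{equation}) for $\sin(2\xi')=\sin^2(2\xi)$, giving $p=(1+\sin^2 2\xi)/(1+\sin 2\xi)^2$; this closed form is our own intermediate step and is not stated explicitly in the text.
\begin{verbatim}
import numpy as np

xi, xip = np.pi / 8, np.pi / 12
s = np.sin(2 * xi)
p = (1 + s**2) / (1 + s) ** 2            # solves Eq. (equation) for sin(2 xi') = sin^2(2 xi)

# Weak measurement of Eq. (weak_measurement) with P1 = |+><+|, P2 = |-><-|
plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
M_yes = np.sqrt(p) * np.outer(plus, plus) + np.outer(minus, minus)

psi1 = np.array([np.cos(xi), np.sin(xi)])
post = M_yes @ psi1
p_yes = post @ post
psi1_prime = post / np.sqrt(p_yes)
assert np.isclose(p_yes, 1 / (1 + s))                        # Duan-Guo bound
assert np.allclose(psi1_prime, [np.cos(xip), np.sin(xip)])   # intermediate qubit of Eq. (intermediate_states)

# Cloning transformation of Eq. (transformation) with the a, b, c of the worked example
a, b, c = (np.sqrt(2 / 3) + 1) / 2, 1 / (2 * np.sqrt(3)), (np.sqrt(2 / 3) - 1) / 2
out = psi1_prime[0] * np.array([a, b, b, c]) + psi1_prime[1] * np.array([c, b, b, a])
assert np.allclose(out, np.kron(psi1, psi1))                 # perfect copies of |psi_1>
\end{verbatim}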
\begin{figure}
\caption{(Color) The dependence of the probability of success and the fidelity on the parameters $p$ and $\xi$.}
\label{dependence_on_p}
\end{figure}
Next, we discuss the probability of success, which is given by Eq.~(\ref{probability}). It is a function of $\xi$ and $p$, where $\xi$ ranges from $0$ to $\pi/4$ and $p$ ranges from $0$ to $1$. Generally, $\xi$ is known and fixed, and we choose a proper value of $p$ so as to balance the probability of success against the fidelity. The dependence of the probability of success and of the fidelity on $p$, for fixed $\xi$, is shown in Fig.~\ref{dependence_on_p}. The probability is monotonically increasing, while the fidelity first increases and then decreases after it reaches unity. It is worth noting that when the fidelity is unity, we obtain the relationship $\frac{(p+1) \sin (2 \xi )+p-1}{(p-1) \sin (2 \xi )+p+1}=\sin^2(2\xi)$, corresponding to the success probability $p_{yes}=\frac{1}{1+\sin(2\xi)}$, which is exactly the Duan-Guo bound~\cite{Duan1998a}.
\section{Conclusion}
In this paper, we have proposed a scheme to duplicate qubits chosen randomly from a nonorthogonal state set using weak measurement. We have demonstrated that weak measurement can indeed be useful for achieving high fidelity in the quantum cloning process. Compared to a general quantum cloning machine, we first perform a weak measurement on the qubit to make it easier to clone. If the result of the measurement is `yes', we feed the qubit into the quantum cloning machine. After the cloning process is accomplished, we obtain copies with high fidelity, depending on the value of $p$; by choosing a proper $p$, we can even obtain perfect copies. The scheme is easily implemented with current experimental techniques. It is worth emphasizing that our scheme also works for state sets that contain more than two linearly independent states: analogously, what we need to do is find a set of proper operators that pretreat the qubits before the unitary transformation. Since the weak measurement is non-unitary, the whole process is probabilistic and the probability of success depends on $p$. Sometimes we may obtain nothing, but, at the risk of failure, we can obtain copies with higher fidelity. There are several differences between our scheme and PQC. The main difference is that in PQC one can only obtain perfect copies with some probability, or nothing at all, while in our scheme one can adjust $p$ to strike a balance between the probability of success and the fidelity of the copies. This flexibility may be of great use in QIP. For instance, in some quantum key distribution protocols, using our cloner, Eve can sacrifice the success probability in order to obtain information, and Alice and Bob cannot detect the eavesdropping if they only use the disturbance criterion; Eve hides in the qubit loss. In general, the best eavesdropping strategy for Eve is to choose a proper $p$ that balances probability and fidelity, obtaining as much information as possible while remaining hidden in the qubit loss and noise. Moreover, we perform the weak measurement before the cloning transformation, which consists of a series of complicated quantum logic gates in experiment. Only when the outcome is `yes' do we perform the further transformation; otherwise, we quit. This can save a lot of resources, such as time and ancillary qubits, so our scheme is economical from this perspective. It should also be emphasized that the method used here is not restricted to the quantum cloning process; it is suitable for many other situations in QIP. The core idea is to pretreat the target qubits, for which some price may be paid; in return, tasks may be accomplished beyond the usual limitations.
\section*{Acknowledgments}
Financial support from the National Natural Science Foundation of China under Grant Nos. 11725524, 61471356 and 11674089 is gratefully acknowledged.
\end{document} |
\begin{document}
\mbox{}
\begin{center}
{\huge Further Collapses in $\TFNP$}
\\[1.3cm] \large
\setlength\tabcolsep{1.2em}
\begin{tabular}{cccc}
Mika G\"o\"os&
Alexandros Hollender&
Siddhartha Jain&
Gilbert Maystre\\[-1mm]
\small\slshape EPFL &
\small\slshape University of Oxford &
\small\slshape EPFL &
\small\slshape EPFL
\end{tabular}
\begin{tabular}{ccc}
William Pires&
Robert Robere&
Ran Tao\\[-1mm]
\small\slshape McGill University &
\small\slshape McGill University &
\small\slshape McGill University
\end{tabular}
\large
\end{center}
\begin{quote}
\noindent\small
{\bf Abstract.}~
We show $\EOPL={\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$. Here the class $\EOPL$ consists of all total search problems that reduce to the {\scshape End-of-Potential-Line} problem, which was introduced in the works by Hub{\'a}{\v{c}}ek and Yogev ({\footnotesize SICOMP 2020}) and Fearnley et~al.~({\footnotesize JCSS 2020}). In particular, our result yields a new simpler proof of the breakthrough collapse $\CLS={\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$ by Fearnley et~al.~({\footnotesize STOC 2021}). We also prove a companion result $\SOPL={\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}$, where $\SOPL$ is the class associated with the {\scshape Sink-of-Potential-Line} problem.
\end{quote}
\section{Introduction}
Our main results are two collapses of total $\NP$ search problem ($\TFNP$) classes.
\begin{theorem}\label{thm:EOPL-collapse}
$\EOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$.
\end{theorem}
\begin{theorem}\label{thm:SOPL-collapse}
$\SOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}$.
\end{theorem}
In particular, \cref{thm:EOPL-collapse} answers a question asked by Daskalakis in his Nevanlinna Prize lecture~\cite[Open Question 10]{Daskalakis2019}. Let us explain what these collapses mean and how they fit into the diverse complexity zoo of search problem classes, as summarised in \cref{fig:classes}. The classes ${\text{\upshape\sffamily PLS}}$, ${\text{\upshape\sffamily PPAD}}$, ${\text{\upshape\sffamily PPADS}}$ are classical. They were all introduced in the original pioneering works~\cite{Megiddo1991, Johnson1988,Papadimitriou1994} that founded the theory of $\TFNP$. To define these classes, it is most convenient to describe a canonical complete problem for each class. (See \cref{sec:grid} for more formal definitions).
\begin{description}
\item[\bf ${\text{\upshape\sffamily PLS}}$:] $\sodLong$ ($\sod$). We are given \emph{implicit access} to a directed graph $G=(V,E)$ that is acyclic, has out-degree at most $1$, and has exponentially many nodes, $|V|=2^n$. The graph is described by a $\textup{poly}(n)$-sized circuit: for any node $v\in V$, we can compute its unique \emph{successor} (out-neighbour) $u$, if any, and also an integer \emph{potential}, which is guaranteed to increase along the direction of the edge~$(v,u)$. The goal is to find a \emph{sink} node (in-degree~$\geq 1$, out-degree~0).
\item[\bf ${\text{\upshape\sffamily PPAD}}$:] $\eolLong$ ($\eol$). We are given access to a directed graph $G=(V,E)$ that has in/out-degree at most $1$, and has $|V|=2^n$ nodes. The graph is described by a $\textup{poly}(n)$-sized circuit: for any $v\in V$, we can compute its \emph{successor} $u$ and \emph{predecessor}~$u'$, if any. We are guaranteed that if $v$'s successor is $u$, then $u$'s predecessor is $v$, and vice versa. In addition, we are given the name of a \emph{distinguished source} node $v^*$ (in-degree~0, out-degree~1). The goal is to find any source or sink other than $v^*$.
\item[\bf ${\text{\upshape\sffamily PPADS}}$:] $\solLong$ ($\sol$). Same as $\eol$ except the goal is to find a sink.
\end{description}
\begin{figure}\label{fig:classes}
\end{figure}
\paragraph{Modern classes.}
Research in the past decade has studied several relatively weak classes of search problems that lie below~${\text{\upshape\sffamily PLS}}$ and ${\text{\upshape\sffamily PPAD}}$. The intersection class ${\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$ is, of course, one immediate such example. This class, however, feels quite artificial at first glance. It does not seem to admit any ``natural'' complete problem. Motivated by this, Daskalakis and Papadimitriou~\cite{Daskalakis2011} introduced the \emph{continuous local search} class $\CLS\subseteq {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$, which, by its very definition, admits natural complete problems related to the local optimisation of continuous functions over the real numbers (computed by arithmetic circuits). The class $\CLS$ is exceptional in that it captures the complexity of \emph{real continuous} optimisation problems, while most classical search problem classes are designed to capture \emph{combinatorial principles}, often phrased in terms of directed graphs.
In order to understand $\CLS$ from a more combinatorial perspective, Hub{\'a}{\v{c}}ek and Yogev~\cite{Hubacek2020} and Fearnley, Gordon, Mehta, and Savani~\cite{Fearnley2020} introduced the class~$\EOPL\subseteq\CLS$, whose complete problem is the namesake $\eoplLong$ ($\eopl$) problem, defined below. (The paper~\cite{Hubacek2020} initially defined a more restricted ``metered'' version of this problem, but we use the formulation from~\cite{Fearnley2020}, which they prove is equivalent to the one from~\cite{Hubacek2020}.) It is also natural to define a \emph{sink-only} version of $\EOPL$ as suggested by~\cite{Goos2018}.
\begin{description}
\item[\bf $\EOPL$:] $\eoplLong$ ($\eopl$). We are given access to a directed graph $G=(V,E)$ that is acyclic, has in/out-degree at most $1$, and has $|V|=2^n$ nodes; that is, $G$ is a disjoint union of directed paths. The graph is described by a $\textup{poly}(n)$-sized circuit: for any node we can compute its successor and predecessor, if any, and also an integer potential, which is guaranteed to increase along the directed edges. In addition, we are given the name of a distinguished source $v^*$. The goal is to find any source or sink other than $v^*$.
\item[\bf $\SOPL$:] $\soplLong$ ($\sopl$). Same as $\eopl$ except the goal is to find a sink.
\end{description}
It is comforting to know that the definition of $\EOPL$ is robust: Ishizuka~\cite{Ishizuka2021} showed that a version of $\eopl$ that guarantees $\textup{poly}(n)$ many distinguished sources is still equivalent (via polynomial-time reductions) to the above standard version with a single source.
Fearnley et al.~\cite{Fearnley2020} also defined a more restricted subclass $\UEOPL\subseteq\EOPL$ where the complete problem is $\ueoplLong$, a version of $\eopl$ with a \emph{unique} directed path. They showed that this class contains many important search problems with unique witnesses, such as unique sink orientations, linear complementarity problems, and {\scshape Arrival}~\cite{Dohrau2017,Gaertner2018}. Other problems known to lie in \UEOPL are a restricted version of the Ham-Sandwich problem~\cite{Chiu2020} and a pizza cutting problem~\cite{Schnider2021}. Fearnley et al.~\cite{Fearnley2020} conjecture that $\UEOPL\neq\EOPL$.
\paragraph{A surprising collapse.}
In a breakthrough, Fearnley, Goldberg, Hollender, and Savani~\cite{Fearnley2021} showed that, despite appearances to the contrary, $\CLS={\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$. This goes against the conjecture of Daskalakis and Papadimitriou~\cite{Daskalakis2011} that the classes are distinct, a belief which underlay much of their original motivation for introducing $\CLS$. The nontrivial direction of the collapse is a reduction from a canonical complete problem $\sod\curlywedge\eol\in {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$ (defined below) to a problem $\kkt\in\CLS$, which involves computing a Karush--Kuhn--Tucker point of a smooth function. We may summarise the main technical result of Fearnley et al.~\cite{Fearnley2021} as
\begin{align} \label{eq:kkt}
\sod\curlywedge\eol &~\leq~ \kkt
&&\text{which implies}\enspace {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}\subseteq\CLS.
\end{align}
Here we use $\leq$ to denote a polynomial-time reduction between search problems. The operator~$\curlywedge$ produces the \emph{meet} of two search problems: the input to problem $\pA\curlywedge\pB$ is a pair $(x,y)$ where $x$ is an instance of $\pA$ and $y$ is an instance of $\pB$ and the goal is to output either a solution to~$x$ or to $y$. Then $\sod\curlywedge\eol$ is the canonical (albeit ``unnatural'') complete problem for ${\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$~\cite{Daskalakis2011}.
\paragraph{Our new collapses.}
Our main results, \cref{thm:EOPL-collapse,thm:SOPL-collapse}, follow from two new reductions, the first one of which strengthens the reduction \cref{eq:kkt} from~\cite{Fearnley2021}:
\begin{align}
\sod\curlywedge\eol &~\leq~ \eopl
&&\text{which implies}\enspace {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}\subseteq\EOPL,
\label{eq:eopl} \\
\sod\curlywedge\sol &~\leq~ \sopl
&&\text{which implies}\enspace {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}\subseteq\SOPL.
\end{align}
These reductions are between purely combinatorially defined search problems. In the case of~\cref{eq:eopl}, this bypasses the continuous middle-man of $\CLS$ and makes our reduction relatively simple to describe. In particular, we get a new simpler proof of the breakthrough collapse of~\cite{Fearnley2021} by combining~\cref{eq:eopl} with the inclusion $\EOPL\subseteq\CLS$ proved by~\cite{Hubacek2020}. Furthermore, the new collapse implies that problems related to Tarski's fixpoint theorem~\cite{Etessami2020} and to a colourful version of Carath{\'e}odory's theorem~\cite{Meunier2017} lie in \EOPL.
\paragraph{A further surprise?}
Given that the collapse $\CLS={\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$ was considered extremely surprising by most experts, how surprised should we be by the further collapse
\[
\EOPL\, =\, \CLS\, =\, {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}\enspace ?
\]
Fearnley et al.~\cite{Fearnley2020} wrote regarding $\EOPL$ vs.\ $\CLS$ that ``we actually think it could go either way.'' In the wake of their breakthrough, the paper~\cite{Fearnley2021} explicitly conjectured $\EOPL\neq\CLS$.
For the authors of the present paper, the new collapse did come as an utter shock. When we began work on this project, our intuitions convinced us that, again, $\EOPL\neq \CLS$, a conjecture which had just found its way to the second author's PhD thesis~\cite[Section~7.5]{Hollender2021}. In our convictions, we set out to prove this separation in the \emph{black-box model} where, instead of circuits, the directed graphs are described by black-box oracles. We tried in vain for nine months. The upshot is that \cref{thm:EOPL-collapse,thm:SOPL-collapse} now crush this possibility, as they hold even in the black-box model.
\section{A Unified View: The \texorpdfstring{$\grid$}{Grid} Problem}\label{sec:grid}
In this section we formally define all the problems of interest. We take the unusual approach of defining a single problem (which we call the $\grid$ problem) with various parameters which can be tweaked to obtain all of the problems we study in this paper. This mainly serves two purposes. First of all, it is particularly convenient for presenting our reductions, since it allows us to combine instances from different problems more easily. The second reason is that we believe that this unified view of seemingly very different problems is of independent interest.
\paragraph{The $\grid$ problem.}
For $n \in \mathbb{N}$, let $[n] \coloneqq \{1,2,\dots,n\}$. We define a general problem on a grid $[N] \times [M]$, where $N$ and $M$ should be thought of as being (potentially) exponentially large. The problem involves $A$ paths starting from column $1$ ($[N] \times \{1\}$) and moving from column $i$ ($[N] \times \{i\}$) to column $i+1$ ($[N] \times \{i+1\}$). On the last column ($[N] \times \{M\}$) there are at most $B$ valid ends of paths. If paths are not allowed to merge, then by the Pigeonhole Principle $A > B$ ensures the existence of a solution, i.e., a path that does not end at a valid position on the last column. If paths are allowed to merge, then a solution is guaranteed to exist as long as $B = 0$. To make things more precise, the paths start from nodes $1$ to $A$ in the first column (i.e., $[A] \times \{1\}$), and the valid termination points are nodes $1$ to $B$ in the last column (i.e., $[B] \times \{M\}$).
In more detail, we are given a boolean circuit $S\colon [N] \times [M] \to [N] \cup \{\textup{\textsf{null}}\}$, the \emph{successor circuit}, which allows us to efficiently compute the outgoing edge at a node. If $S(x,y) = \textup{\textsf{null}}$, then $(x,y)$ does not have an outgoing edge. Otherwise, there is an outgoing edge from $(x,y)$ to $(S(x,y),y+1)$. The problem also has two parameters which are used to tweak the definition: $r$ (\emph{reversible}) and $b$ (\emph{bijective}). Intuitively, when $r=1$, we change the representation of paths to make them \emph{reversible}. Namely, in addition to the successor circuit $S$, we are also given access to a \emph{predecessor circuit} $P \colon [N] \times [M] \to [N] \cup \{\textup{\textsf{null}}\}$, which, analogously to $S$, allows us to efficiently compute the incoming edge at a node. In particular, when $r=1$, every node can have at most one incoming edge, i.e., two paths cannot merge. When $r=1$, the other parameter $b$ is used to introduce additional solutions. Namely, when $b=1$, then we do not allow any new paths apart from the original $A$ paths, and we also require that all $B$ valid ends of paths are actually reached by a path. The combination $r=0, b=1$ is not allowed.
We use the term \emph{sink} to refer to a node with at least one incoming edge but no outgoing edge. Similarly, a \emph{source} is a node with an outgoing edge but no incoming edge. The formal definition of the problem is as follows.
\begin{definition}\label{definition:grid_problem}
In the \grid problem, given $N,M,A,B$ with $N \geq A > B \geq 0$ and $M \geq 2$, boolean circuits $S, P \colon [N] \times [M] \to [N] \cup \{\textup{\textsf{null}}\}$, and bits $r,b \in \{0,1\}$, output any of the following:
\begin{enumerate}
\item $x \in [A]$ such that $S(x,1) = \textup{\textsf{null}}$,
\emph{(missing pigeon/source)}
\item $x \in [N]$ such that $S(x,M-1) > B$,
\emph{(invalid hole/sink)}
\item $x \in [N]$ and $y \in [M-2]$ such that\\
$S(x,y) \neq \textup{\textsf{null}}$ and $S(S(x,y),y+1) = \textup{\textsf{null}}$,
\emph{(pigeon interception/sink)}
\item If $r=1$ and $b=1$:
\begin{enumerate}
\item $(x,y) \in ([N] \times [M-1]) \setminus ([A] \times \{1\})$ such that\\
$S(x,y) \neq \textup{\textsf{null}}$ and $P(x,y) = \textup{\textsf{null}}$, or
\emph{(pigeon genesis/source)}
\item $x \in [B]$ such that $P(x,M) = \textup{\textsf{null}}$.
\emph{(empty hole/sink)}
\end{enumerate}
\end{enumerate}
We also enforce the following two conditions syntactically:
\begin{itemize}
\item If $r=0$, then $b=0$ and $B=0$.
\item If $r=1$, then the successor and predecessor circuits are consistent, which can be enforced as follows. The circuit $S$ is replaced by the circuit $\overline{S}$, which on input $(x,y)$ computes $x ' \coloneqq S(x,y)$ and outputs $x'$, unless $(x',y+1) \notin [N] \times [M]$ or $P(x',y+1) \neq x$, in which case it outputs $\textup{\textsf{null}}$. Similarly, the circuit $P$ is replaced by the circuit $\overline{P}$, which on input $(x,y)$ computes $x ' \coloneqq P(x,y)$ and outputs $x'$, unless $(x',y-1) \notin [N] \times [M]$ or $S(x',y-1) \neq x$, in which case it outputs $\textup{\textsf{null}}$.
\end{itemize}
\end{definition}
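To make the definition concrete, the following toy Python sketch (ours, purely illustrative) treats $S$ and $P$ as ordinary functions standing in for the circuits and enumerates the grid to list all solutions of \cref{definition:grid_problem}; it is of course exponential in the instance size and is only meant to clarify the solution types.
\begin{verbatim}
def grid_solutions(N, M, A, B, S, P, r, b):
    """Enumerate all solutions of a (tiny) GRID instance, following the definition."""
    sols = []
    for x in range(1, A + 1):                                  # 1. missing pigeon/source
        if S(x, 1) is None:
            sols.append(("missing source", (x, 1)))
    for x in range(1, N + 1):                                  # 2. invalid hole/sink
        sx = S(x, M - 1)
        if sx is not None and sx > B:
            sols.append(("invalid sink", (x, M - 1)))
    for x in range(1, N + 1):                                  # 3. pigeon interception/sink
        for y in range(1, M - 1):
            if S(x, y) is not None and S(S(x, y), y + 1) is None:
                sols.append(("intercepted", (x, y)))
    if r == 1 and b == 1:
        for x in range(1, N + 1):                              # 4a. pigeon genesis/source
            for y in range(1, M):
                if not (y == 1 and x <= A):
                    if S(x, y) is not None and P(x, y) is None:
                        sols.append(("extra source", (x, y)))
        for x in range(1, B + 1):                              # 4b. empty hole/sink
            if P(x, M) is None:
                sols.append(("empty hole", (x, M)))
    return sols

# Example: a Sink-of-DAG-style instance (r=0, b=0, A=1, B=0) on a 3 x 3 grid with the single
# edge (1,1) -> (2,2); node (1,1) is then an intercepted pigeon, since (2,2) has no successor.
S = lambda x, y: {(1, 1): 2}.get((x, y))
print(grid_solutions(N=3, M=3, A=1, B=0, S=S, P=lambda x, y: None, r=0, b=0))
\end{verbatim}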
\paragraph{Canonical complete problems as special cases of $\grid$.}
As defined above, the inputs $N, M, A, B$ of the $\grid$ problem are completely unrestricted, apart from the natural restrictions $N \geq A > B \geq 0$ and $M \geq 2$. By imposing various additional restrictions on these inputs, we obtain the following canonical complete problems; see~\cref{figure:grid_problem_summary}. (Here $\iphp$/$\bphp$ stand for Injective/Bijective Pigeonhole Principle.)
\begin{itemize}
\item $\sod$: $r=0, b=0, A=1, B=0$. ({\text{\upshape\sffamily PLS}}-complete)
\item $\sopl$: $r=1, b=0, A=1, B=0$. (\SOPL-complete)
\item $\eopl$: $r=1, b=1, A=1, B=0$. (\EOPL-complete)
\item $\iphp$: $r=1, b=0, M=2, N=A=B+1$. ({\text{\upshape\sffamily PPADS}}-complete)
\item $\bphp$: $r=1, b=1, M=2, N=A=B+1$. ({\text{\upshape\sffamily PPAD}}-complete)
\end{itemize}
Note that beyond those restrictions, the inputs are left unrestricted. For example, in $\sod$, the input $M$ can be very large, which is indeed needed for the problem to be {\text{\upshape\sffamily PLS}}-complete.
\begin{figure}\label{figure:intro_iter}
\label{figure:intro_php}
\label{figure:intro_sopl}
\label{figure:grid_problem_summary}
\end{figure}
\begin{remark}
Here we have slightly abused notation by calling these problems $\sod$, $\sopl$ and $\eopl$ even though their original definitions (in~\cite{Johnson1988,Goos2018,Fearnley2020}, respectively) do not use a grid structure, and instead come with an additional circuit computing the potential of any node. It is not too hard to see that these grid-versions of the problems are indeed polynomial-time equivalent to the original versions. The main idea is that the grid implicitly provides a potential value for every node $(x,y)$, namely its column number $y$. Thus, given such a problem on a grid, it is easy to define a potential circuit by simply assigning the potential value $y$ to any node $(x,y)$ of the grid.
The other direction is slightly more involved. Consider an instance of one of the original problems with vertex set $V = [N]$ and potential values lying in $P = [M]$. Without loss of generality, we can assume that along any edge the potential increases by exactly one. Indeed, this was proved explicitly by~\cite{Fearnley2020} when they reduced $\eopl$ to $\eoml$, and the same idea applies to $\sod$ and $\sopl$ as well. The reduction to the grid-version of the problem is then obtained by identifying a vertex $x \in V$ that has potential $p \in P$ with the node $(x,p)$ on the $[N] \times [M]$ grid.
\end{remark}
The following is essentially folklore (see, e.g., \cite{Buresh2004}), so we only provide a brief proof sketch.
\begin{lemma}\label{lem:PPAD-as-PHP}
$\iphp$ and $\bphp$ are respectively {\text{\upshape\sffamily PPADS}}- and {\text{\upshape\sffamily PPAD}}-complete.
\end{lemma}
\begin{proof}[Proof Sketch]
To see that $\bphp$ lies in {\text{\upshape\sffamily PPAD}}, we can reduce to $\eol$ (see, e.g., \cite{Daskalakis2009} for a formal definition) with vertex set $V = [N] \times [2]$ as follows: add a directed edge from node $(x,2)$ to node $(x,1)$ for all $x \leq A-1$. By taking $(A,1)$ as the distinguished source node, this yields an $\eol$ instance with the same solutions as the original $\bphp$ instance. On the other hand, given an instance of $\eol$ with vertex set $V = [N]$ and distinguished source node $N$ (without loss of generality), we construct an instance of $\bphp$ on $[N] \times [2]$ as follows: for any isolated vertex $x \in [N]$, create an edge from $(x,1)$ to $(x,2)$; for any edge from $x$ to $y$, create an edge from $(x,1)$ to $(y,2)$. This simple reduction proves the {\text{\upshape\sffamily PPAD}}-hardness of $\bphp$. The exact same constructions can be used to prove that $\iphp$ is {\text{\upshape\sffamily PPADS}}-complete, by reducing to and from the $\sol$ problem (formally defined by Beame et al.~\cite{Beame1998}, who call it {\scshape Sink}).
\end{proof}
In \cref{sec:discussion}, we briefly explain how an extended version of the $\grid$ problem can be used to also capture {\text{\upshape\sffamily PPP}}, the class defined by Papadimitriou~\cite{Papadimitriou1994} to capture a version of the Pigeonhole Principle where edges can only be computed efficiently in the forward direction. We do not currently see any natural way of extending the definition of the $\grid$ problem so that it also captures the class {\text{\upshape\sffamily PPA}}.
\section{Path-Pigeonhole Problems}
In this section we use the $\grid$ problem to define some interesting extensions of the two pigeonhole problems. Namely, we consider the case where, instead of just two columns, there are many columns. In a certain sense, this corresponds to allowing the pigeons to travel for a long time before reaching a hole. In particular, we can no longer efficiently tell in which hole a given pigeon will land. This allows us to show that the problems remain hard even when there are significantly more pigeons than holes. This fact, stated in \cref{lem:inj-path-php} below, will be crucial to obtain our main result later.
Let $f\colon \mathbb{N} \to \mathbb{N}$ be a polynomial-time computable function with $f(t) > t$. In this section, we consider the following restrictions of $\grid$:
\begin{itemize}
\item $\pathiphp_f$: $r=1, b=0, A = f(B)$.
\item $\pathbphp_f$: $r=1, b=1, A = f(B)$.
\end{itemize}
The following lemma is an important ingredient for the proof of our main result.
\begin{lemma}\label{lem:inj-path-php}
Let $f(t)>t$ be polynomial-time computable.
There exists a reduction $\iphp\leq \pathiphp_f$ that maps an instance with parameters $(A,B) = (T+1,T)$ to an instance with parameters $(A,B) = (f(T),T)$ and $(N,M) = (f(T),f(T)-T+1)$.
\end{lemma}
\begin{proof}
The idea behind this reduction is very simple. Intuitively, we have the ability to ``merge'' $T+1$ paths into $T$ paths by using the $\iphp$ instance. Namely, we can go from having $T+1$ paths on some column $i$ to having only $T$ paths on the next column $i+1$, and such that finding a ``mistake'', i.e., a path that stops between the two columns, requires solving the $\iphp$ instance. In particular, if we start with some $N$ paths, where $N \geq T+1$, then we can ``merge'' those paths into $N-1$ paths by ``merging'' the first $T+1$ paths into $T$ paths, and leaving the remaining $N-(T+1)$ paths unchanged. Applying this idea repeatedly, we can ``merge'' $f(T)$ paths into just $T$ paths in $f(T)-T$ steps. This results in an instance of $\pathiphp_f$ with $f(T)-T+1$ columns, where every solution yields a solution to the $\iphp$ instance; see \cref{figure:reduction_php_pathphp}. More formally, let $(S,P)$ denote an instance of $\iphp$ with parameters $A=T+1$ and $B=T$. Without loss of generality, we can assume that no pigeon goes to the invalid hole, i.e., $S(x,1) \neq T+1$ for all $x \in [T+1]$. Indeed, if there is such an edge, we can just remove it and this does not change the set of solutions; the node pointing to the invalid hole was a solution before, and now it is still a solution, because it has no successor. We construct an instance $(\widehat{S}, \widehat{P})$ of $\pathiphp_f$ on the $[N] \times [M]$ grid, where $N=f(T)$, $M=f(T)-T+1$, $A=f(T)$ and $B=T$. The successor circuit $\widehat{S}$ is defined as follows:
\begin{equation*}
\widehat{S}(x,y)
~\coloneqq~
\begin{cases}
S(x,1) & \text{if } x \in [T+1] \text{ and } y \in [M-1],\\
x-1 & \text{if } T+2 \leq x \leq f(T) - y + 1 \text{ and } y \in [M-1],\\
\textup{\textsf{null}} & \text{otherwise},
\end{cases}
\end{equation*}
and the predecessor circuit $\widehat{P}$ is then defined accordingly to be consistent with $\widehat{S}$. Both circuits can be constructed in polynomial time, given $S$ and $P$, and given that $f$ can be computed in polynomial time. It is straightforward to check that any solution of the constructed instance yields a solution to the original $\iphp$ instance.
\end{proof}
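The successor circuit $\widehat{S}$ from the proof is straightforward to transcribe into code. The following Python sketch (ours, for illustration only) constructs it for $f(t)=t^2$ from a toy $\iphp$ instance; the predecessor circuit $\widehat{P}$, which is simply defined to be consistent with $\widehat{S}$, is omitted for brevity.
\begin{verbatim}
def make_path_iphp(S, T, f):
    """Build the successor circuit S_hat of the PATH-IPHP_f instance constructed in the
    proof, given the successor circuit S of an IPHP instance with A = T+1 pigeons, B = T holes."""
    N, M = f(T), f(T) - T + 1
    def S_hat(x, y):
        if not (1 <= y <= M - 1):
            return None
        if x <= T + 1:
            return S(x, 1)              # the first T+1 rows are routed through the IPHP instance
        if T + 2 <= x <= f(T) - y + 1:
            return x - 1                # the remaining rows are shifted down by one per column
        return None
    return S_hat, N, M

# Toy IPHP instance with T = 2: pigeons 1,2,3 mapped to holes 1,2 (pigeons 1 and 3 collide,
# so the consistent circuits would expose one of them as a solution).
S = lambda x, y: {1: 1, 2: 2, 3: 1}.get(x)
S_hat, N, M = make_path_iphp(S, T=2, f=lambda t: t * t)     # a 4 x 3 grid with A = 4, B = 2
print([[S_hat(x, y) for x in range(1, N + 1)] for y in range(1, M + 1)])
\end{verbatim}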
The same proof idea also yields that $\bphp\leq \pathbphp_f$.
\begin{figure}\label{figure:reduction_php_pathphp}
\end{figure}
\section{\texorpdfstring{$\SOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}$}{SOPL = PLS ∩ PPADS}}
In this section, we prove \cref{thm:SOPL-collapse}, namely $\SOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}$. To prove this we provide a reduction from a ${\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}$-complete problem to the canonical \SOPL-complete problem.
\begin{lemma}
$\sod\curlywedge\iphp \leq \sopl$.
\label{lem:iphp_or_iter_reduces_to_sopl}
\end{lemma}
\begin{proof}[Proof Sketch]
There are two obstacles to a direct reduction from \sod to \sopl: (i) we can only compute edges in the forward direction (i.e., we only have access to a successor circuit), and (ii) multiple edges can point to the same node.
To resolve the first issue, we modify the original $[N] \times [M]$ grid of the \sod instance by taking $N$ copies of each node. This ensures that there is a separate copy of each node $v$ for each potential predecessor on the previous column. As a result, edges from different predecessor nodes will point to different copies of $v$. This means that predecessor nodes can now also be computed efficiently. Namely, in order to compute the predecessors of the $i$th copy of node $v$, it suffices to check whether in the original \sod instance the $i$th node on the previous column points to $v$. If that is the case, then all copies of this $i$th node are predecessors in the modified instance. Otherwise, there are no predecessors. However, the second issue remains: since we have made $N$ copies of each node, there are also $N$ copies of each predecessor node, and thus $N$ edges pointing to the corresponding copy of $v$. This is not acceptable, since \sopl allows at most one incoming edge.
To overcome the second obstacle, we make use of the following high-level idea: use the $\iphp$ instance (which maps $K+1$ pigeons to $K$ holes) to ``hide'' the fact that multiple edges can point to a single node. Unfortunately, we cannot use the $\iphp$ instance to hide the fact that $N$ paths merge into a single node. But, if we take~$K N$ copies of each original node, instead of just $N$, then we have $KN$ paths and $K$ target nodes. By \cref{lem:inj-path-php}, the $\iphp$ instance can be turned into a $\pathiphp_f$ instance that hides the fact that $KN$ paths merge into $K$ paths. Thus, we replace each original node $v$ of the \sod instance by a gadget that has $KN$ nodes in the left-most column and $K$ nodes in the right-most column, and such that finding the sink of a path inside the gadget requires solving the $\iphp$ instance. Importantly, we only construct the paths inside this gadget when the original node $v$ has a successor in the \sod instance. This ensures that when $v$ is an isolated node, the corresponding gadget does not contain any edges. \Cref{figure:reduction_php_or_iter_sopl} illustrates the construction for $N=3$, $M=4$ and $K=2$.
\begin{figure}\label{figure:reduction_php_or_iter_sopl}
\end{figure}
Note that although we might have added many new sources to the graph (which are irrelevant for \sopl), it remains the case that from any sink of the new graph, we can extract either a solution to \sod or to $\iphp$.
In the final construction, edges can indeed be computed in both directions efficiently. Namely, given any node, we can determine in polynomial time if it has an incoming and/or outgoing edge, as well as the identity of the potential predecessor and successor nodes. Here, we crucially use the fact that edges can be computed efficiently in both directions in the $\iphp$ instance.
\end{proof}
\begin{proof}
Let $S$ be an instance of \sod on the grid $[N] \times [M]$. We are also given an instance of $\iphp$ with parameters $(A,B) = (K+1,K)$. Without loss of generality we can assume that $K = N$, because we can easily pad the \sod or $\iphp$ instance with additional rows without changing the set of solutions. By \cref{lem:inj-path-php}, we can reduce this $\iphp$ instance to a $\pathiphp_{t^2}$ instance on the grid $[N^2] \times [M']$ with parameters $(A,B) = (N^2,N)$. Without loss of generality, we can assume that $M' = M$, because we can pad the \sod or $\pathiphp_{t^2}$ instance with additional columns, if needed. This is not important for the reduction, but will be convenient.
We will take $N^2$ copies of each node in the original \sod instance, and make $M$ copies of each column. As a result, our \sopl instance will be defined on the $[N^3] \times [M^2]$ grid. It will be convenient to use some special notation to refer to points in this grid. For $\alpha \in [N^2]$ and $x \in [N]$, we use the notation $(\alpha, x)$ to denote the row $\alpha + (x-1) \cdot N^2 \in [N^3]$. This corresponds to indexing the $\alpha$th copy of row $x$ of the original instance. We also introduce some additional notation to index these $[N^2]$ copies: for $i,j \in [N]$, we let $[i,j] \coloneqq i + (j-1) \cdot N \in [N^2]$. Thus, $([i,j], x)$ denotes the $[i,j]$th copy of row $x$. The ``$[i,j]$'' notation essentially subdivides $[N^2]$ into $N$ blocks containing $N$ values each, which will be useful for routing incoming edges to the correct copy of a node. Using the analogous subdivision also on the columns, the notation $(\alpha,x;k,y) \in [N^2] \times [N] \times [M] \times [M]$ denotes the node $(\alpha + (x-1) \cdot N^2, k + (y-1) \cdot M) \in [N^3] \times [M^2]$. In particular, the notation $([i,j],x;k,y)$ is well-defined.
The circuits $\widehat{S}, \widehat{P}$ of the \sopl instance on $[N^3] \times [M^2]$ are defined as follows:
\begin{align*}
\widehat{S}([i,j],x;k,y)
&~\coloneqq~
\begin{cases}
([i,x],S(x,y)) & \text{if } k = M \text{ and } j = 1,\\
(S'([i,j],k),x) & \text{if } k < M \text{ and } S(x,y) \neq \textup{\textsf{null}},\\
\textup{\textsf{null}} & \text{otherwise}
\end{cases} \\[1em]
\widehat{P}([i,j],x;k,y)
&~\coloneqq~
\begin{cases}
([i,1],j) & \text{if } k = 1 \text{ and } y > 1 \text{ and } S(j,y-1) = x,\\
(P'([i,j],k),x) & \text{if } k > 1 \text{ and } S(x,y) \neq \textup{\textsf{null}},\\
\textup{\textsf{null}} & \text{otherwise}
\end{cases}
\end{align*}
where $(\alpha,z) \in [N^2] \times [N]$ is interpreted as an element in $[N^3]$ as above, and where we use the convention $(\ast,\textup{\textsf{null}}) = (\textup{\textsf{null}},\ast) = \textup{\textsf{null}}$. Using the fact that $S'$ and $P'$ are consistent, it can be checked that $\widehat{S}$ and $\widehat{P}$ are also consistent.
In order to argue about the correctness of the reduction, consider any sink $([i,j],x;k,y)$ of the \sopl instance. If $2 \leq k \leq M-1$, then it must be that $([i,j],k)$ is a sink of the $\pathiphp_{t^2}$ instance $(S',P')$. If $k = M$ and $j = 1$, then $([i,j],x;k,y)$ cannot be a sink, since $\widehat{P}([i,j],x;k,y) \neq \textup{\textsf{null}}$ implies that $S(x,y) \neq \textup{\textsf{null}}$, and thus $\widehat{S}([i,j],x;k,y) \neq \textup{\textsf{null}}$. If $k = M$ and $j > 1$, then $([i,j],k)$ is an invalid sink on the last column of the $\pathiphp_{t^2}$ instance, and so in particular a solution. If $k = 1$ and $S(x,y) \neq \textup{\textsf{null}}$, then $([i,j],k)$ is a missing source on the first column of the $\pathiphp_{t^2}$ instance, and so again a solution. Finally, if $k = 1$ and $S(x,y) = \textup{\textsf{null}}$, then it must be that $S(j,y-1) = x$ and thus $(x,y)$ is a sink of the original \sod instance, and this is witnessed by the node $(j,y-1)$.
\end{proof}
\section{\texorpdfstring{$\EOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$}{EOPL = PLS ∩ PPAD}}
In this section, we prove \cref{thm:EOPL-collapse}, namely $\EOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}$. The equality $\SOPL = {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPADS}}$ (\cref{thm:SOPL-collapse}) proved in the previous section, together with the fact that ${\text{\upshape\sffamily PPAD}} \subseteq {\text{\upshape\sffamily PPADS}}$, immediately implies that
\[\SOPL \cap {\text{\upshape\sffamily PPAD}} \,=\, {\text{\upshape\sffamily PLS}} \cap {\text{\upshape\sffamily PPAD}}.\]
As a result, in order to prove \cref{thm:EOPL-collapse}, it suffices to give a reduction from an $\SOPL \cap {\text{\upshape\sffamily PPAD}}$-complete problem to an $\EOPL$-complete problem:
\begin{lemma}
$\sopl \curlywedge \bphp\leq\eopl$.
\label{lem:bphp_or_sopl_reduces_to_eopl}
\end{lemma}
\begin{proof}[Proof Sketch]
A very natural attempt at a reduction from \sopl to \eopl is to try to remove all undistinguished sources, i.e., all sources except the trivial one. Then, clearly, any \eopl-solution would have to be a sink, and thus also a solution to \sopl.
There is a simple trick that \emph{almost} achieves this. First, make a \emph{reversed} copy of the \sopl instance, i.e., reverse the direction of all edges, and the ordering of the potential. Note that sources of the original instance have now become sinks in the reversed copy, and vice versa. Then, for each source node $v$ of the original graph, add an edge pointing from its copy $\overline{v}$ (which is a sink) to $v$.
\begin{figure}\label{figure:reduction_sopl_eopl}
\end{figure}
The only problem with this reduction is that we have eliminated \emph{all} sources of the original graph, including the distinguished one. In particular, the distinguished source $v_0$ of the original instance is no longer a source, since there is an edge from its copy $\overline{v}_0$ to $v_0$. As a result, the reduction fails, because the instance of \eopl we have constructed does not have a distinguished source. Furthermore, we cannot hope to turn one of the new sources into a distinguished source, since any such source yields a solution to the original instance (where it is a sink).
In order to address this issue, we add a new node $u$ and select it as our new distinguished source. Clearly, $u$ is a solution of the instance, since it is a distinguished source that is not actually a source, but just an isolated node. Now, imagine that we remove the edge $(\overline{v}_0,v_0)$ and instead introduce an edge $(u,v_0)$; see \cref{figure:reduction_sopl_eopl}. Then, $u$ is no longer a solution, but $\overline{v}_0$ becomes a sink, and thus a solution, instead. In other words, the reduction can pick whether it wants $u$ or $\overline{v}_0$ to be a solution by changing this edge. Of course, in both cases, the resulting instance is very easy to solve, but this minor observation already provides the idea for the next step.
Take $k$ copies of the instance we have constructed (before adding $u$). There are now $k$ copies $v_0^{(1)}, \dots, v_0^{(k)}$ of the original distinguished source, and $k$ copies of the reverse copy $\overline{v}_0^{(1)}, \dots, \overline{v}_0^{(k)}$. Remove the edges $(\overline{v}_0^{(i)}, v_0^{(i)})$ for $i = 1, \dots, k$. If we now introduce the new distinguished source $u$, we have $k+1$ nodes that ``need'' an outgoing edge in order to not be solutions (namely, $u, \overline{v}_0^{(1)}, \dots, \overline{v}_0^{(k)}$) and $k$ nodes that ``need'' an incoming edge (namely, $v_0^{(1)}, \dots, v_0^{(k)}$). Clearly, no matter how we introduce edges here, one of $u, \overline{v}_0^{(1)}, \dots, \overline{v}_0^{(k)}$ will not have an outgoing edge and will be a solution. However, we can use a $\bphp$ instance to make it hard to find such a solution. Let $K$ denote the parameter of the $\bphp$ instance, i.e., $K+1$ points are mapped to $K$ points. Then, we let $k \coloneqq K$ and add edges between $u, \overline{v}_0^{(1)}, \dots, \overline{v}_0^{(k)}$ and $v_0^{(1)}, \dots, v_0^{(k)}$ according to the $\bphp$ instance. An example of the construction is depicted in \Cref{figure:reduction_php_or_sopl_eopl}.
\begin{figure}\label{figure:reduction_php_or_sopl_eopl}
\end{figure}
Now, it is easy to check that any undistinguished source or any sink of the resulting graph must yield a solution to the $\bphp$ instance or a solution of the \sopl instance. In particular, if $u$ is not a source, then this yields a solution to $\bphp$.
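To make the sketch concrete, the following minimal Python rendering of the construction may be helpful; it works at the level of abstract successor maps rather than the grid encoding used in the formal proof below, and all helper names are ours, introduced purely for illustration.
\begin{verbatim}
def combine_sopl_with_bphp(sopl_succ, sources, v0, bphp_map, K):
    """Graph-level sketch of the reduction behind the lemma.

    sopl_succ : dict mapping each node of the SOPL instance to its
                successor (only nodes with an outgoing edge are keys;
                S and P are assumed consistent)
    sources   : nodes without an incoming edge; v0 is the distinguished one
    bphp_map  : dict sending pigeons 1..K+1 to holes in 1..K
    Returns the successor map of the EOPL instance; the fresh node 'u'
    is its distinguished source.
    """
    succ = {}
    for i in range(1, K + 1):
        # i-th copy of the instance and i-th reversed copy (edges reversed)
        for v, w in sopl_succ.items():
            succ[('fwd', i, v)] = ('fwd', i, w)
            succ[('rev', i, w)] = ('rev', i, v)
        # every undistinguished source gets an edge from its reversed copy
        for s in sources:
            if s != v0:
                succ[('rev', i, s)] = ('fwd', i, s)
    # pigeons: the K reversed copies of v0 plus 'u'; holes: the K copies
    # of v0; the connecting edges follow the BPHP instance
    pigeons = [('rev', i, v0) for i in range(1, K + 1)] + ['u']
    for index, pigeon in enumerate(pigeons, start=1):
        if index in bphp_map:
            succ[pigeon] = ('fwd', bphp_map[index], v0)
    return succ
\end{verbatim}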
\end{proof}
\begin{proof}
Let $(S,P)$ be an instance of \sopl on the grid $[N] \times [M]$. Without loss of generality, we can assume that all sources occur on the first column, i.e., for any source $(x,y) \in [N] \times [M]$ it holds that $y=1$. Indeed, by appropriately increasing $N$, for each source $(x,y)$ we can add a path that starts on the first column and ends at $(x,y)$, thus effectively ``transferring'' the source to the first column. Let $(S',P')$ be an instance of $\bphp$ on the grid $[K+1] \times [2]$ that maps $K+1$ pigeons to $K$ holes.
We take $K$ copies of the \sopl instance and $K$ copies of the reversed \sopl instance, all together in a single grid. This grid will be of the form $[KN] \times [2M]$. For clarity, we will use the notation $(i,x;y) \in [K] \times [N] \times [2M]$ to denote the element $(x + (i-1) \cdot N,y) \in [KN] \times [2M]$. The $i$th copy of the instance will be embedded in $\{i\} \times [N] \times ([2M] \setminus [M])$, while the $i$th reversed copy will be in $\{i\} \times [N] \times [M]$. Formally, we define new successor and predecessor circuits $\widehat{S}, \widehat{P}$ on $[KN] \times [2M]$ as follows:
\begin{align*}
\widehat{S}(i,x;y)
&~\coloneqq~
\begin{cases}
(i,P(x,M-y+1)) & \text{if } y \leq M,\\
(i,S(x,y-M)) & \text{if } y \geq M+1
\end{cases}\\[1em]
\widehat{P}(i,x;y)
&~\coloneqq~
\begin{cases}
(i,S(x,M-y+1)) & \text{if } y \leq M,\\
(i,P(x,y-M)) & \text{if } y \geq M+1
\end{cases}
\end{align*}
where $(i,z) \in [K] \times [N]$ represents the element $z + (i-1) \cdot N \in [KN]$, and where we use the convention $(i,\textup{\textsf{null}}) = \textup{\textsf{null}}$.
Since $S$ and $P$ are consistent, $\widehat{S}$ and $\widehat{P}$ are also consistent. Note that there are currently no edges between column $M$ and column $M+1$. In the second step of the reduction we add edges between these two columns as follows. For every $i \in [K]$ and $x \in [N] \setminus \{1\}$, if $(i,x;M+1)$ is a source, then we add an edge from $(i,x;M)$ to $(i,x;M+1)$. Note that in that case $(i,x;M)$ was a sink. The case where $x=1$ is handled separately, because it corresponds to nodes that are copies of the distinguished source of the original \sopl instance. For any $i \in [K]$ and for $x=1$, if $S'(i,1) = j \neq \textup{\textsf{null}}$, we add an edge from $(i,1;M)$ to $(j,1;M+1)$. Note that here we also use $P'$ (which is assumed to be consistent with $S'$) to implement this edge in $(\widehat{S},\widehat{P})$.
Finally, we introduce a new special node $u$ on column $M$ which will act as our new distinguished source. If $S'(K+1,1) = j \neq \textup{\textsf{null}}$, then we add an edge from $u$ to $(j,1;M+1)$. By extending the grid to be $[KN+1] \times [2M]$, by renaming nodes and by ``transferring'' the source $u$ to the first column as before, we can ensure that the distinguished source is $(1,1)$.
It is easy to check that the new circuits $\widehat{S}, \widehat{P}$ can be constructed in polynomial time. For the correctness of the reduction, note that any source or sink that occurs on columns $[2M] \setminus \{M,M+1\}$ must correspond to a sink of the original \sopl instance. On the other hand, any source or sink that occurs on column $M$ or $M+1$ must correspond to a solution of the $\bphp$ instance (namely, a pigeon without a hole, or a hole without a pigeon). This completes the reduction.
\end{proof}
\section{Discussion}\label{sec:discussion}
As mentioned in the introduction, it remains open whether $\UEOPL \overset{?}{=} \EOPL$. Separating the two classes in the black-box model would be an important first step towards pinning down the complexity of the various natural problems contained in \UEOPL, since it would provide strong evidence that these problems are unlikely to be complete for $\mathsf{PLS} \cap \mathsf{PPAD}$.
The techniques developed in this paper do not seem to yield any other major class collapse. Indeed, our reductions are all black-box, and the main classes are known to be distinct in that model~\cite{Beame1998,Morioka2001,Buresh2004,Goos2022}.
In the remainder of this section we briefly present some observations about the path pigeonhole problems, as well as a further consequence of our reduction techniques: a version of $\sod$ where paths are not allowed to merge turns out to be $\mathsf{PLS} \cap \mathsf{PPP}$-complete.
\paragraph{Path-Pigeonhole problems.}
\cref{lem:inj-path-php} in particular establishes that $\pathiphp_f$ is $\mathsf{PPADS}$-hard. Membership in $\mathsf{PPADS}$ can be shown by reducing to $\iphp$ using a construction similar to the reduction from $\eol$ to $\bphp$ in the proof of \cref{lem:PPAD-as-PHP}.
The statement of \cref{lem:inj-path-php} also holds for $\bphp\leq \pathbphp_f$, and the proof is essentially the same. This shows that $\pathbphp_f$ is $\mathsf{PPAD}$-hard. However, it is unclear whether $\pathbphp_f$ lies in $\mathsf{PPAD}$. Indeed, using the same idea as for $\pathiphp_f \leq \iphp$ yields an instance with $A \gg B$, and we cannot increase $B$ artificially here (whereas this is possible in $\iphp$). Another way to state this is to say that we can reduce $\pathbphp_f$ to an instance of $\eol$ that has many distinguished source nodes, instead of just one. It is known that $\eol$ with a polynomial number of distinguished sources remains $\mathsf{PPAD}$-complete~\cite{Goldberg2021}, but in general we will obtain an exponential number of such sources here.
\paragraph{Extending the $\grid$ problem to capture $\mathsf{PPP}$.}
The canonical $\mathsf{PPP}$-complete problem is $\pigeoncircuit$~\cite{Papadimitriou1994}: given a circuit mapping $N$ pigeons to $N-1$ holes, find a \emph{collision}, i.e., two pigeons that are mapped to the same hole. Importantly, unlike in $\iphp$ or $\bphp$, we are not given a circuit to compute the mapping in the other direction, i.e., from holes to pigeons. In order to capture this problem, we extend the definition of $\grid$ by introducing an additional parameter bit $c \in \{0,1\}$, which stands for \emph{collision}. We also introduce a new solution type (a small verifier sketch is given further below):
\begin{enumerate}
\item[5.] If $r=0$ and $c=1$: $x_1,x_2 \in [N]$ and $y \in [M-1]$ such that \\
$x_1 \neq x_2$ and $S(x_1,y) = S(x_2,y) \neq \textup{\textsf{null}}$,
\emph{(pigeon collision/merging)}
\end{enumerate}
Furthermore, the syntactic condition ``If $r=0$, then $b=0$ and $B=0$'' is replaced by the condition:
\begin{itemize}
\item If $r=0$, then $b=0$. If $r=0$ and $c=0$, then $B=0$.
\end{itemize}
The $\mathsf{PPP}$-complete problem $\pigeoncircuit$ is then obtained by setting $r=0, c=1, M=2, N=A=B+1$. In fact, $\grid$ remains in $\mathsf{PPP}$ even if we just set $r=0, c=1$ and leave the other parameters unfixed. This can be shown by using a construction similar to the reduction from $\eol$ to $\bphp$ in the proof of \cref{lem:PPAD-as-PHP}.
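For concreteness, the new solution type can be checked directly; the following minimal Python sketch (with a hypothetical callable \texttt{S} for the successor circuit and \texttt{None} playing the role of $\textup{\textsf{null}}$) illustrates the check:
\begin{verbatim}
def is_pigeon_collision(S, x1, x2, y):
    # Solution type 5 (r = 0, c = 1): two distinct pigeons x1, x2 in
    # column y are mapped by the successor circuit to the same
    # non-null node.
    return x1 != x2 and S(x1, y) is not None and S(x1, y) == S(x2, y)
\end{verbatim}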
\paragraph{$\sod$ without merging.}
What is the complexity of $\sod$ if paths are not allowed to merge? In other words, what is the complexity of the $\grid$ problem with parameters $r=0, c=1, A=1, B=0$? Clearly, this restricted version still lies in $\mathsf{PLS}$, and by the previous paragraph it also lies in $\mathsf{PPP}$. Using the ideas developed in this paper, it can be shown that the problem is in fact $\mathsf{PLS} \cap \mathsf{PPP}$-complete. To see this, note that using the simple construction in the proof of \cref{lem:inj-path-php} we can reduce $\pigeoncircuit$ to a path-version of the problem where $f(T)$ pigeons are mapped to $T$ holes. Then, the construction in the proof of \cref{lem:iphp_or_iter_reduces_to_sopl} can be used to reduce $\sod \curlywedge \pigeoncircuit$ to $\sod$ without merging.
\end{document} |
\begin{document}
\title[Plurisubharmonic defining functions]{A note on plurisubharmonic defining functions in $\mathbb{C}^{n}$}
\author{J. E. Forn\ae ss, A.-K. Herbig}
\subjclass[2000]{32T35, 32U05, 32U10}
\keywords{Plurisubharmonic defining functions, Stein neighborhood basis, DF exponent}
\thanks{Research of the first author was partially supported by an NSF grant.}
\thanks{Research of the second author was supported by FWF grant P19147}
\address{Department of Mathematics, \newline University of Michigan, Ann Arbor, Michigan 48109, USA}
\email{[email protected]}
\address{Department of Mathematics, \newline University of Vienna, Vienna, A-1090, Austria}
\email{[email protected]}
\date{}
\begin{abstract}
Let $\Omega\subset\subset\mathbb{C}^{n}$, $n\geq 3$, be a smoothly bounded domain. Suppose that
$\Omega$ admits
a smooth defining function which is plurisubharmonic on the boundary of $\Omega$. Then the
Diederich--Forn\ae ss exponent can be chosen arbitrarily close to $1$, and the closure of $\Omega$
admits a Stein neighborhood basis.
\end{abstract}
\maketitle
\section{Introduction}
Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain. Throughout, we suppose
that $\Omega$ admits a $\mathcal{C}^{\infty}$-smooth defining function $\rho$ which is
plurisubharmonic on the boundary, $b\Omega$, of $\Omega$. That is,
\begin{align}\label{E:psh}
H_{\rho}(\xi,\xi)(z):=\sum_{j,k=1}^{n}\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}(z)
\xi_{j}\overline{\xi}_{k}\geq 0\;\;\text{for all}\;z\in b\Omega,\;\xi\in\mathbb{C}^{n}.
\end{align}
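For orientation we record the simplest example, not needed in what follows: for the unit ball with $\rho(z)=|z|^{2}-1$ one has $\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}=\delta_{j}^{k}$, so that
\begin{align*}
H_{\rho}(\xi,\xi)(z)=|\xi|^{2}\geq 0\;\;\text{for all}\;z\in\mathbb{C}^{n},\;\xi\in\mathbb{C}^{n},
\end{align*}
and \eqref{E:psh} holds, with strict inequality whenever $\xi\neq 0$.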
The question we are concerned with is what condition \eqref{E:psh} tells us about the
behaviour of the complex Hessian of $\rho$, or of some other defining function of $\Omega$,
away from the boundary of $\Omega$.
That $\rho$ is not necessarily plurisubharmonic in any neighborhood of $b\Omega$ can be seen
easily; for an example, see Section 2.3 in \cite{For-Her}. In \cite{For-Her}, we showed that if $n=2$,
then for any $\epsilon>0$, $K>0$ there exist smooth defining functions
$\rho_{i}$, $i=1,2$, and a neighborhood $U$ of $b\Omega$ such that
\begin{align}\label{E:dim2result}
H_{\rho_{i}}(\xi,\xi)(q_{i})\geq-\epsilon|\rho_{i}(q_{i})|\cdot|\xi|^{2}
+K|\langle\partial\rho_{i}(q_{i}),\xi\rangle|^{2}
\end{align}
for all $\xi\in\mathbb{C}^{2}$ and $q_{1}\in\overline{\Omega}\cap U$,
$q_{2}\in\Omega^{c}\cap U$.
The estimates \eqref{E:dim2result} imply the existence of particular exhaustion functions for
$\Omega$ and the complement of $\overline{\Omega}$,
which is not a direct consequence of \eqref{E:psh}.
A Diederich--Forn\ae ss
exponent of a domain
is a number $\tau\in(0,1]$ for which there exists a smooth defining function $s$ such
that $-(-s)^{\tau}$ is strictly plurisubharmonic in the domain.
It was shown in \cite{DF1,Ran} that all smoothly
bounded, pseudoconvex domains have a Diederich--Forn\ae ss exponent. However, it is also known
that there are smoothly bounded, pseudoconvex domains for which the largest possible
Diederich--Forn\ae ss exponent is arbitrarily close to $0$ (see \cite{DF2}). In
\cite{For-Her}, we showed that \eqref{E:dim2result}, $i=1$, implies that the Diederich--Forn\ae ss
exponent can be chosen arbitrarily close to $1$. We also showed that
\eqref{E:dim2result}, $i=2$, yields that the complement of $\Omega$ can be exhausted by bounded,
strictly plurisubharmonic functions. In particular, the closure of $\Omega$ admits a Stein
neighborhood basis.
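For the unit ball, for instance, every exponent $\tau\in(0,1)$ works: taking $s(z)=|z|^{2}-1$, a direct computation (recorded here only as an illustration) gives
\begin{align*}
H_{-(-s)^{\tau}}(\xi,\xi)(z)
=\tau\left(1-|z|^{2}\right)^{\tau-2}
\left[\left(1-|z|^{2}\right)|\xi|^{2}
+(1-\tau)\Bigl|\sum_{j=1}^{n}\xi_{j}\overline{z}_{j}\Bigr|^{2}\right]>0
\end{align*}
for $|z|<1$ and $\xi\neq 0$, so that $-(-s)^{\tau}$ is strictly plurisubharmonic on the ball.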
For $n\geq 3$ we obtain the following:
\begin{theorem}\label{T:Main}
Let $\Omega\subset\subset\mathbb{C}^{n}$ be a smoothly bounded domain. Suppose that
$\Omega$ has a smooth defining function which is plurisubharmonic on the boundary of
$\Omega$. Then for any $\epsilon>0$ there exist a neighborhood $U$ of $b\Omega$ and smooth
defining functions $r_{1}$ and $r_{2}$ such that
\begin{align}\label{E:Main1}
H_{r_{1}}(\xi,\xi)(q)\geq -\epsilon
\left[|r_{1}(q)|\cdot|\xi|^{2}+\frac{1}{|r_{1}(q)|}\cdot\left|\langle\partial r_{1}(q),\xi\rangle\right|^{2}\right]
\end{align}
holds for all $q\in\Omega\cap U$, $\xi\in\mathbb{C}^{n}$, and
\begin{align}\label{E:Main2}
H_{r_{2}}(\xi,\xi)(q)\geq
-\epsilon
\left[r_{2}(q)\cdot|\xi|^{2}+\frac{1}{r_{2}(q)}\cdot\left|\langle\partial r_{2}(q),\xi\rangle\right|^{2}\right]
\end{align}
holds for all $q\in(\overline{\Omega})^{c}\cap U$, $\xi\in\mathbb{C}^{n}$.
\end{theorem}
Let us remark that our proof of Theorem \ref{T:Main} also works when $n=2$. However,
the results of Theorem \ref{T:Main} are weaker than \eqref{E:dim2result}. Nevertheless,
they are still strong enough to obtain that the Diederich--Forn\ae ss exponent can be chosen arbitrarily close to 1 and that the closure of the domain admits a Stein neighborhood basis. In particular, we have the following:
\begin{corollary}\label{C:DF}
Assume the hypotheses of Theorem \ref{T:Main} hold. Then
\begin{enumerate}
\item[(i)] for all $\eta\in (0,1)$ there exists a smooth defining function $\tilde{r}_{1}$ of $\Omega$ such that
$-(-\tilde{r}_{1})^{\eta}$ is strictly plurisubharmonic on $\Omega$,
\item[(ii)] for all $\eta>1$ there exist a smooth defining function $\tilde{r}_{2}$ of $\Omega$ and a
neighborhood $U$ of $\overline{\Omega}$ such that $\tilde{r}_{2}^{\eta}$ is strictly plurisubharmonic
on $(\overline{\Omega})^{c}\cap U$.
\end{enumerate}
\end{corollary}
We note that in \cite{DF3} it was proved that (i) and (ii) of Corollary \ref{C:DF} hold for
so-called regular
domains. Furthermore, in \cite{DF4} it was shown that pseudoconvex domains with real-analytic boundary are regular domains.
This article is structured as follows. In Section \ref{S:prelim}, we give the setting and define our basic notions. Furthermore, we show in this section which piece of the complex Hessian of $\rho$ at a given point $p$ in $b\Omega$ constitutes an obstruction for inequality \eqref{E:Main1} to hold for a given $\epsilon>0$. In Section \ref{S:modification}, we construct a local defining function which does not possess this obstruction term to \eqref{E:Main1} at a given boundary point $p$. Since this fixes our problem with \eqref{E:Main1} only at this point $p$ (and at nearby boundary points at which the Levi form is of the same rank as at $p$), we will need to patch the newly constructed local defining functions without letting the obstruction term arise again. This is done
in Section \ref{S:cutoff}. In Section \ref{S:proof}, we finally prove \eqref{E:Main1} and remark at the end how to obtain \eqref{E:Main2}. We conclude this paper with the proof of Corollary
\ref{C:DF} in Section \ref{S:DF}.
We would like to thank J. D. McNeal for fruitful discussions on this project; in particular, we are very grateful to him for providing us with Lemma \ref{L:McNeal} and its proof.
\section{Preliminaries and pointwise obstruction}\label{S:prelim}
Let $(z_{1},\dots,z_{n})$ denote the coordinates of $\mathbb{C}^{n}$. We shall identify the vector
$\langle\xi_{1},\dots,\xi_{n}\rangle$ in $\mathbb{C}^{n}$ with
$\sum_{i=1}^{n}\xi_{i}\frac{\partial}{\partial z_{i}}$ in the $(1,0)$-tangent bundle of $\mathbb{C}^{n}$ at
any given point. This means in particular that if $X$, $Y$ are $(1,0)$-vector fields with
$X(z)=\sum_{i=1}^{n}X_{i}(z)\frac{\partial}{\partial z_{i}}$ and
$Y(z)=\sum_{i=1}^{n}Y_{i}(z)\frac{\partial}{\partial z_{i}}$, then
\begin{align*}
H_{\rho}(X,Y)(z)=\sum_{j,k=1}^{n}\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}(z)
X_{j}(z)\overline{Y}_{k}(z).
\end{align*}
Suppose $Z$ is another $(1,0)$-vector field with
$Z(z)=\sum_{l=1}^{n}Z_{l}(z)\frac{\partial}{\partial z_{l}}$. For notational convenience, and for lack of a better notation, we shall write
\begin{align*}
(ZH_{\rho})(X,Y)(z):=
\sum_{j,k,l=1}^{n}\frac{\partial^{3}\rho}{\partial z_{j}\partial\overline{z}_{k}\partial z_{l}}(z)
X_{j}(z)\overline{Y}_{k}(z) Z_{l}(z).
\end{align*}
We use the pointwise hermitian inner product $\langle .,.\rangle$ defined by
$\langle\frac{\partial}{\partial z_{j}},\frac{\partial}{\partial z_{k}}\rangle=\delta_{j}^{k}$. Hoping that it will not cause any confusion, we also write $\langle .,.\rangle$ for contractions of vector fields and forms.
We will employ the so-called (sc)-(lc) inequality: $|ab|\leq\tau|a|^{2}+\frac{1}{4\tau}|b|^{2}$ for
$\tau>0$. Furthermore, we shall write $|A|\lesssim|B|$ to mean $|A|\leq c|B|$ for some constant $c>0$ which does not depend on any of the relevant parameters. In particular, we will only use this notation when $c$ depends solely on absolute constants, e.g., the dimension or quantities related to the given defining function
$\rho$.
Let us now work on proving inequality \eqref{E:Main1}.
Since $b\Omega$ is smooth, there exists a neighborhood $U$ of $b\Omega$ and a smooth map
\begin{align*}
\pi:\overline{\Omega}\cap U&\longrightarrow b\Omega\\
q&\longmapsto\pi(q)=p
\end{align*}
such that $\pi(q)=p$ lies on the line normal to $b\Omega$ passing through $q$ and $|p-q|$ equals
the Euclidean distance, $d_{b\Omega}(q)$, of $q$ to $b\Omega$.
After possibly shrinking $U$, we can assume that $\partial\rho\neq 0$ on $U$. We set
$N(z)=\frac{1}{|\partial\rho(z)|}\sum_{j=1}^{n}\frac{\partial\rho}{\partial\overline{z}_{j}}(z)
\frac{\partial}{\partial z_{j}}$. If $f$ is a smooth function on $U$, then it follows from Taylor's theorem
that
\begin{align}\label{E:BasicTaylor}
f(q)=f(p)-d_{b\Omega}(q)\left(\operatorname{Re} N\right)(f)(p)+\mathcal{O}\left(d_{b\Omega}^{2}(q)\right)\;\;
\text{for}\;\;q\in \overline{\Omega}\cap U;
\end{align}
for details see, for instance, Section 2.1 in \cite{For-Her}\footnote{Equation \eqref{E:BasicTaylor} above differs from (2.1) in \cite{For-Her} by a factor of $2$ in the second term on the right hand side. This stems from mistakenly using the outward normal of length $1/2$ instead of the one of unit length in
\cite{For-Her}. However, this mistake is inconsequential for the results in \cite{For-Her}.}.
Let $p\in b\Omega\cap U$ be given. Let $W\in\mathbb{C}^{n}$ be a weak, complex tangential vector at
$p$, i.e., $\langle\partial\rho(p),W\rangle=0$ and $H_{\rho}(W,W)(p)=0$. If $q\in\Omega\cap U$ with
$\pi(q)=p$, then \eqref{E:BasicTaylor} implies
\begin{align}\label{E:BasicTaylorH}
H_{\rho}(W,W)(q)=H_{\rho}(W,W)(p)-d_{b\Omega}(q)\left(\operatorname{Re} N\right)
\left(H_{\rho}(W,W)\right)(p)+\mathcal{O}(d^{2}_{b\Omega}(q))|W|^{2}.
\end{align}
Since $H_{\rho}(W,W)$ is a real-valued function, we have
\begin{align*}
\left(\operatorname{Re} N\right)\left(H_{\rho}(W,W)\right)=\operatorname{Re}\left[N\left(H_{\rho}(W,W)\right)\right].
\end{align*}
Moreover,
$H_{\rho}(W,W)$ is non-negative on $b\Omega\cap U$ and equals $0$ at $p$. That is,
$H_{\rho}(W,W)_{|_{b\Omega\cap U}}$ attains a local minimum at $p$. Therefore, any tangential
derivative of $H_{\rho}(W,W)$ vanishes at $p$. Since $N-\overline{N}$ is tangential to $b\Omega$,
we obtain
\begin{align*}
\operatorname{Re}\left[N\left(H_{\rho}(W,W)\right)\right](p)=N\left(H_{\rho}(W,W)\right)(p)
=(NH_{\rho})(W,W)(p),
\end{align*}
where the last equality holds since $W$ is a fixed vector.
Hence, \eqref{E:BasicTaylorH} becomes
\begin{align}\label{E:BasicTaylorHW}
H_{\rho}(W,W)(q)=-d_{b\Omega}(q)(NH_{\rho})(W,W)(p)
+\mathcal{O}\left(d_{b\Omega}^{2}(q)\right)|W|^{2}.
\end{align}
Clearly, we have a problem with obtaining \eqref{E:Main1}
when $(NH_{\rho})(W,W)$ is strictly positive at $p$. That is, when $H_{\rho}(W,W)$
is strictly decreasing along the real inward normal to $b\Omega$ at $p$, i.e.,
$H_{\rho}(W,W)$ becomes negative there, then \eqref{E:Main1} cannot hold for the complex
Hessian of $\rho$ when $\epsilon>0$ is sufficiently close to zero. The question is whether we can find another smooth defining function $r$ of
$\Omega$ such that $(NH_{r})(W,W)(p)$ is less than $(NH_{\rho})(W,W)(p)$. The construction of
such a function $r$ is relatively easy and straightforward when $n=2$ (see Section 2.3 in \cite{For-Her}
for a non-technical derivation of $r$). The difficulty in higher dimensions arises simply from the fact that
the Levi form of a defining function might vanish in more than one
complex tangential direction at a given boundary point.
\section{Pointwise Modification of $\rho$}\label{S:modification}
Let $\Sigma_{i}\subset b\Omega$ be the set of boundary points at which the Levi form of $\rho$
has rank $i$, $i\in\{0,\dots,n-1\}$. Note that $\cup_{i=0}^{j}\Sigma_{i}$ is closed in $b\Omega$ for any
$j\in\{0,\dots,n-1\}$. Moreover,
$\Sigma_{j}$ is relatively closed in $b\Omega\setminus \cup_{i=0} ^{j-1}\Sigma_{i}$ for
$j\in\{1,\dots,n-1\}$. Of course, $\Sigma_{n-1}$ is the set of strictly pseudoconvex
boundary points of $\Omega$.
Let $p\in b\Omega\cap\Sigma_{i}$ for some $i\in\{0,\dots, n-2\}$ be given. Then there exist a
neighborhood $V\subset U$ of $p$ and smooth, linearly independent $(1,0)$-vector fields $W^{\alpha}$,
$1\leq\alpha\leq n-1-i$, on $V$, which are complex tangential to $b\Omega$ on $b\Omega\cap V$ and
satisfy $H_{\rho}(W^{\alpha},W^{\alpha})=0$ on $\Sigma_{i}\cap V$.
We consider those points $q\in\Omega\cap V$ with $\pi(q)=p$.
We shall work with the smooth function
\begin{align*}
r(z)=\rho(z)\cdot e^{-C\sigma(z)},\;\text{where}\; \;
\sigma(z)=\sum_{\alpha=1}^{n-1-i}H_{\rho}(W^{\alpha},W^{\alpha})(z)
\end{align*}
for $z\in V$. Here, the constant $C>0$ is fixed and to be chosen later.
Note that $r$ is a defining function for $\Omega$ on $V$.
Furthermore, $\sigma$ is a smooth function on $V$ which is non-negative on $b\Omega\cap V$ and
vanishes on the set $\Sigma_{i}\cap
V$. That means that $\sigma_{|_{b\Omega\cap V}}$ attains a local minimum at each point in $\Sigma_{i}
\cap V$. Therefore, any tangential derivative of $\sigma$ vanishes on $\Sigma_{i}\cap V$.
Moreover, if $z\in\Sigma_{i}\cap V$ and $T\in\mathbb{C}T_{z}b\Omega$ is such that $H_{\rho}(T,T)$
vanishes at $z$, then $H_{\sigma}(T,T)$ is non-negative at that point.
Let $W\in
\mathbb{C}^{n}$
be a vector contained in the span of the vectors
$\left\{W^{\alpha}(p)\right\}_{\alpha=1}^{n-1-i}$. Then, using \eqref{E:BasicTaylorHW}, it follows that
\begin{align}\label{E:explaincutoff}
H_{r}(W,W)(q)
=&e^{-C\sigma(q)}\left[H_{\rho}(W,W)
-2C\operatorname{Re}\left(\langle\partial\rho,W\rangle\overline{\langle\partial\sigma,W\rangle
}\right)\right.\notag\\
&\hspace{3.5cm}+\left.\rho\left(C^{2}\left|\langle\partial\sigma,W\rangle\right|^{2}
-CH_{\sigma}(W,W)\right)\right](q)\notag\\
=&e^{-C\sigma(q)}
\Bigl[
-d_{b\Omega}(q)(NH_{\rho})(W,W)(p)-C\rho(q)H_{\sigma}(W,W)(q)\bigr.
+\mathcal{O}\left(d_{b\Omega}^{2}(q)\right)|W|^{2}\notag\\
&\hspace{1.5cm}+C^{2}\rho(q)\left|\langle\partial\sigma(q),W\rangle \right|^{2}\bigl.-2C\operatorname{Re}\left(
\langle\partial\rho,W\rangle\overline{\langle\partial\sigma,W\rangle
}
\right)(q)
\Bigr].
\end{align}
Since $\langle\partial\sigma(p),W\rangle=0=\langle\partial\rho(p),W\rangle$, Taylor's theorem gives
\begin{align*}
\langle\partial\sigma(q),W\rangle=\mathcal{O}\left(r(q)\right)|W|
\quad\text{and}\quad
\langle\partial\rho(q),W\rangle=\mathcal{O}\left(r(q)\right)|W|.
\end{align*}
Therefore, we obtain
\begin{align}\label{E:BasicTaylorr}
H_{r}(W,W)(q)\geq -d_{b\Omega}(q)e^{-C\sigma(q)}(NH_{\rho})(W,W)(p)&-Cr(q)H_{\sigma}(W,W)(q)\\
&+\mathcal{O}\left(r^{2}(q)\right)|W|^{2},\notag
\end{align}
where the constant in the last term depends on the choice of the constant $C$. However, in view of our claim \eqref{E:Main1}, this is inconsequential. From here on, we will not point out such negligible dependencies.
We already know that $H_{\sigma}(W,W)(p)$ is non-negative, i.e., of the right sign to correct
$(NH_{\rho})(W,W)(p)$ when necessary. The question is whether the sizes of
$(NH_{\rho})(W,W)(p)$ and
$H_{\sigma}(W,W)(p)$ are comparable in some sense. The following proposition clarifies this.
\begin{proposition}\label{P:compare}
There exists a constant $K>0$ such that
\begin{align*}
\left|(NH_{\rho})(W,W)(z_{0})\right|^{2}
\leq K|W|^{2}\cdot H_{\sigma}(W,W)(z_{0})
\end{align*}
holds for all $z_{0}\in\Sigma_{i}\cap V$ and $W\in\mathbb{C}T_{z_{0}}b\Omega$
with $H_{\rho}(W,W)(z_{0})=0$.
\end{proposition}
In order to prove Proposition \ref{P:compare}, we need the following lemma:
\begin{lemma}\label{L:compare}
Let $z_{0}\in b\Omega$ and $U$ a neighborhood of $z_{0}$. Let $Z$ be a smooth $(1,0)$-vector field
defined
on $U$, which is complex tangential to $b\Omega$ on $b\Omega\cap U$, and let
$Y\in\mathbb{C}^{n}$ be a vector belonging to $
\mathbb{C}T_{z_{0}}b\Omega$. Suppose that $Y$ and $Z$ are such that
\begin{align*}
H_{\rho}(Y,Y)(z_{0})=0=H_{\rho}(Z,Z)(z_{0}).
\end{align*}
Set $X=\sum_{j=1}^{n}\overline{Y}(Z_{j})\frac{\partial}{\partial z_{j}}$. Then the following holds:
\begin{enumerate}
\item $X$ is complex tangential to $b\Omega$ at $z_{0}$,
\item $\left(YH_{\rho}\right)(X,Z)(z_{0})=0$,
\item $H_{H_{\rho}(Z,Z)}(Y,Y)(z_{0})\geq H_{\rho}(X,X)(z_{0})$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{L:compare}]
(1) That $X$ is complex tangential to $b\Omega$ at $z_{0}$ was shown in Lemma 3.4 of
\cite{For-Her}.
(2) The plurisubharmonicity of $\rho$ says that
both $H_{\rho}(Y,Y)_{|_{b\Omega\cap U}}$ and $H_{\rho}(Z,Z)_{|_{b\Omega\cap U}}$
attain a local minimum at $z_{0}$. In fact, the function $H_{\rho}(aY+bZ,aY+bZ)_{|_{b\Omega}}$,
$a,b\in\mathbb{C}$, attains a local minimum at $z_{0}$. This means that any tangential derivative of
either one of those three functions must vanish at that point. In particular, we have
\begin{align*}
0&=\langle\partial H_{\rho}(aY+bZ,aY+bZ),X\rangle(z_{0})\\
&=|a|^{2}\langle\partial H_{\rho}(Y,Y),X\rangle(z_{0})
+2\operatorname{Re}\left(a\overline{b}\langle\partial H_{\rho}(Y,Z),X\rangle
\right)(z_{0})+|b|^{2}\langle\partial H_{\rho}(Z,Z),X\rangle(z_{0})\\
&=2\operatorname{Re}\left(a\overline{b}\langle\partial H_{\rho}(Y,Z),X\rangle
\right)(z_{0}).
\end{align*}
Since this is true for all $a,b\in\mathbb{C}$, it follows that $\langle\partial H_{\rho}(Y,Z),X\rangle$
must vanish
at $z_{0}$. But the plurisubharmonicity of $\rho$ at $z_{0}$ yields
\begin{align*}
\langle\partial H_{\rho}(Y,Z),X\rangle=\left(XH_{\rho}\right)(Y,Z)(z_{0})=
\left(YH_{\rho}\right)(X,Z)(z_{0}),
\end{align*}
which proves the claim.
(3) Consider the function
\begin{align*}
f(z)=\left(
H_{\rho}(Z,Z)\cdot H_{\rho}(X,X)-\left|H_{\rho}(X,Z)\right|^{2}\right)(z)\;\;\text{for}\;\;z\in U.
\end{align*}
Note that $f_{|_{b\Omega\cap U}}$ attains a local minimum at $z_{0}$. Since $Y$ is a weak direction at
$z_{0}$, it follows that $H_{f}(Y,Y)(z_{0})$ is
non-negative. This implies that
\begin{align}\label{E:comparetemp}
\left(H_{H_{\rho}(Z,Z)}(Y,Y)\cdot H_{\rho}(X,X)\right)(z_{0})\geq
\left|
\langle \partial H_{\rho}(X,Z), Y\rangle(z_{0})
\right|^{2},
\end{align}
where we used that both $H_{\rho}(Z,Z)$ and any tangential derivative of $H_{\rho}(Z,Z)$ at $z_{0}$
are zero. We compute
\begin{align*}
\langle\partial H_{\rho}(X,Z),Y\rangle(z_{0})
=(YH_{\rho})(X,Z)(z_{0})+H_{\rho}(X,X)(z_{0})
+H_{\rho}\left(\sum_{j=1}^{n}Y(X_{j})\frac{\partial}{\partial z_{j}},Z \right)(z_{0}).
\end{align*}
The first term on the right hand side equals zero by part (2) of Lemma \ref{L:compare}, and the third
term is zero as well since $\rho$ is plurisubharmonic at $z_{0}$ and $Z$ is a weak direction there. Therefore,
\eqref{E:comparetemp} becomes
\begin{align*}
H_{\rho}(X,X)(z_{0})\leq H_{H_{\rho}(Z,Z)}(Y,Y)(z_{0}).
\end{align*}
\end{proof}
Now we can proceed to show Proposition \ref{P:compare}.
\begin{proof}[Proof of Proposition \ref{P:compare}]
Recall that we are working with vectors $W\in\mathbb{C}^{n}$ contained in the span of
$W^{1}(z_{0}),\dots,W^{n-1-i}(z_{0})$. We consider the function
\begin{align*}
h(z)=\left(
\sigma\cdot H_{\rho}(N,N)-\sum_{\alpha=1}^{n-1-i}\left|
H_{\rho}(N,W^{\alpha})
\right|^{2}
\right)(z) \;\;\text{for}\;\;z\in U,
\end{align*}
where $\sigma=\sum_{\alpha=1}^{n-1-i}H_{\rho}(W^{\alpha},W^{\alpha})$. Again, since $\rho$ is
plurisubharmonic on $b\Omega$, $h_{|_{b\Omega\cap V}}$ has a local minimum at $z_{0}$.
This, together with $H_{\rho}(W,W)(z_{0})=0$, implies that
$H_{h}(W,W)(z_{0})$ is non-negative. Since both $\sigma$ and $\langle\partial\sigma,W\rangle$ vanish at $z_{0}$,
it
follows that
\begin{align*}
&\left(
H_{\sigma}(W,W)\cdot H_{\rho}(N,N)
\right)(z_{0})\\
&\geq
\sum_{\alpha=1}^{n-1-i}\left|
(WH_{\rho})(N,W^{\alpha})+H_{\rho}\left(\sum_{j=1}^{n}W(N_{j})\frac{\partial}{\partial z_{j}},W^{\alpha}
\right)
+H_{\rho}\left(
N,\sum_{j=1}^{n}\overline{W}(W_{j}^{\alpha})\frac{\partial}{\partial z_{j}}
\right)
\right|^{2}(z_{0})\\
&=
\sum_{\alpha=1}^{n-1-i}\left|
(WH_{\rho})(N,W^{\alpha})+H_{\rho}\left(
N,\sum_{j=1}^{n}\overline{W}(W_{j}^{\alpha})\frac{\partial}{\partial z_{j}}
\right)
\right|^{2}(z_{0}),
\end{align*}
where the last step follows from $\rho$ being plurisubharmonic at $z_{0}$ and the
$W^{\alpha}$'s being
weak directions there.
Moreover, we have that $(WH_{\rho})(N,W^{\alpha})$ equals $(NH_{\rho})(W,W^{\alpha})$ at $z_{0}$. Writing
$X^{\alpha}=\sum_{j=1}^{n}\overline{W}(W_{j}^{\alpha})\frac{\partial}{\partial z_{j}}$, we obtain
\begin{align*}
\left(
H_{\sigma}(W,W)\cdot H_{\rho}(N,N)
\right)(z_{0})
&\geq
\sum_{\alpha=1}^{n-1-i}\left|
(NH_{\rho})(W,W^{\alpha})+H_{\rho}(N,X^{\alpha})
\right|^{2}(z_{0})\\
&\geq
\sum_{\alpha=1}^{n-1-i}\left(
\frac{1}{2}\left|
(NH_{\rho})(W,W^{\alpha})
\right|^{2}-3
\left|H_{\rho}(N,X^{\alpha})\right|^{2}
\right)(z_{0}).
\end{align*}
Here the last step follows from the (sc)-(lc) inequality.
Since $\rho$ is plurisubharmonic at $z_{0}$, we can apply the Cauchy--Schwarz inequality
\begin{align*}
\left|
H_{\rho}(N,X^{\alpha})(z_{0})
\right|^{2}
&\leq
\left(
H_{\rho}(N,N)\cdot H_{\rho}(X^{\alpha},X^{\alpha})
\right)(z_{0})\\
&\leq \left(
H_{\rho}(N,N)\cdot H_{H_{\rho}(W^{\alpha},W^{\alpha})}(W,W)
\right)(z_{0}),
\end{align*}
where the last estimate follows by part (3) of Lemma \ref{L:compare} with $W$ and $W^{\alpha}$
in place of $Y$ and $Z$, respectively. Thus we have
\begin{align*}
\sum_{\alpha=1}^{n-1-i}\left|H_{\rho}(N,X^{\alpha})(z_{0}) \right|^{2}
\leq
\left(H_{\rho}(N,N)\cdot H_{\sigma}(W,W)\right)(z_{0}),
\end{align*}
which implies that
\begin{align}\label{E:comparetemp2}
\sum_{\alpha=1}^{n-1-i}\left|
(NH_{\rho})(W,W^{\alpha})(z_{0})
\right|^{2}
\leq
8\left(H_{\sigma}(W,W)\cdot H_{\rho}(N,N)\right)(z_{0}).
\end{align}
Since $W$ is a linear combination of $\{W^{\alpha}(z_{0})\}_{\alpha=1}^{n-1-i}$, we can write
$W=\sum_{\alpha=1}^{n-1-i}a_{\alpha}W^{\alpha}(z_{0})$ for some scalars $a_{\alpha}\in\mathbb{C}$.
Because of the linear independence of the $W^{\alpha}$'s on $V$, there exists a constant $K_{1}>0$
such
that
\begin{align*}
\sum_{\alpha=1}^{n-1-i}|b_{\alpha}|^{2}\leq K_{1}\left|
\sum_{\alpha=1}^{n-1-i}b_{\alpha}W^{\alpha}(z)
\right|^{2}\;\text{for all}\;z\in b\Omega\cap V,\;b_{\alpha}\in\mathbb{C}.
\end{align*}
Thus it follows that
\begin{align*}
\sum_{\alpha=1}^{n-1-i}
\left|
(NH_{\rho})(W,W^{\alpha})(z_{0})
\right|^{2}
&\geq
\frac{1}{K_{1}|W|^{2}}
\sum_{\alpha=1}^{n-1-i}\left|(NH_{\rho})(W,a_{\alpha}W^{\alpha})(z_{0})\right|^{2}\\
&\geq
\frac{1}{K_{1}(n-1-i)|W|^{2}}
\left|(NH_{\rho})(W,W)(z_{0})\right|^{2}.
\end{align*}
Hence, \eqref{E:comparetemp2} becomes
\begin{align*}
\left|(NH_{\rho})(W,W)(z_{0})\right|^{2}\leq
8K_{1}(n-1-i)|W|^{2}\left(H_{\sigma}(W,W)\cdot H_{\rho}(N,N)\right)(z_{0}).
\end{align*}
Let $K_{2}>0$ be a constant such that
$H_{\rho}(N,N)_{|_{b\Omega}}\leq K_{2}$ holds. Setting $K=8K_{1}K_{2}(n-1-i)$,
it follows that
\begin{align*}
\left|(NH_{\rho})(W,W)(z_{0})\right|^{2}\leq K|W|^{2}H_{\sigma}(W,W)(z_{0}).
\end{align*}
\end{proof}
Recall that we are considering a fixed boundary point $p\in\Sigma_{i}$ and all $q\in\Omega\cap V$,
$\pi(q)=p$, for
some sufficiently small neighborhood of $p$. After possibly shrinking $V$ it follows by Taylor's theorem
that
\begin{align*}
H_{\sigma}(W,W)(q)=H_{\sigma}(W,W)(\pi(q))+\mathcal{O}\left(d_{b\Omega}(q)\right)|W|^{2}
\end{align*}
holds for all $q\in\Omega\cap V$ with $\pi(q)=p$.
Using this and Proposition \ref{P:compare}, we get for $q\in\Omega\cap V$ with
$\pi(q)=p$ that
\begin{align*}
\left|(NH_{\rho})(W,W)(p)\right|^{2}\leq K|W|^{2}\left[H_{\sigma}(W,W)(q)
+\mathcal{O}\left(d_{b\Omega}(q)\right)|W|^{2}\right].
\end{align*}
Therefore, our basic estimate \eqref{E:BasicTaylorr} of the complex Hessian of $r$ in direction $W$
becomes
\begin{align*}
H_{r}(W,W)(q)\geq
- d_{b\Omega}(q)e^{-C\sigma(q)}(NH_{\rho})(W,W)(p)
&-r(q)\frac{C}{K|W|^{2}}\left|(NH_{\rho})(W,W)(p)\right|^{2}\\
&+\mathcal{O}\left(r^{2}(q)\right)|W|^{2}.
\end{align*}
Let $c_{1}>0$ be such that $d_{b\Omega}(z)\leq c_{1}|\rho(z)|$ for all $z$ in $V$.
Then, if we choose
\begin{align}\label{E:chooseC}
C\geq\max_{z\in b\Omega, T\in\mathbb{C}^{n}, |T|=1}
\left\{
0,\frac{c_{1}\operatorname{Re}\left[(NH_{\rho})(T,T)(z)\right]-\frac{\epsilon}{2}}{\left|(NH_{\rho})(T,T)(z)\right|^{2}}K
\right\},
\end{align}
we obtain, after possibly shrinking $V$,
\begin{align}\label{E:BasicTaylorrfinal}
H_{r}(W,W)(q)\geq -\epsilon |r(q)|\cdot|W|^{2}
\end{align}
for all $q\in\Omega\cap V$ with $\pi(q)=p$ and $W\in\mathbb{C}^{n}$ in the span of
$\{W^{\alpha}(p)\}_{\alpha=1}^{n-1-i}$. In fact, after possibly shrinking $V$, \eqref{E:BasicTaylorrfinal} holds with, say, $2\epsilon$ in place of $\epsilon$ for all
$q\in\Omega\cap V$ satisfying $\pi(q)\in\Sigma_{i}\cap V$ and $W\in\mathbb{C}^{n}$ belonging to
the span of $\{W^{\alpha}(\pi(q))\}_{\alpha=1}^{n-1-i}$.
A problem with this construction is that
$r$ is not necessarily plurisubharmonic at those weakly pseudoconvex boundary points which are not
in $\Sigma_{i}$. This possible loss of plurisubharmonicity occurs because the
$W^{\alpha}$'s are not necessarily weak directions at those points. This means
that we cannot simply copy this construction with $r$ in place of $\rho$ to get good estimates near, say,
$\Sigma_{i+1}$. Let us be more explicit. Suppose $\tilde{p}\in\Sigma_{i+1}\cap V$ is such that
at least one of the $W^{\alpha}$'s is not a weak direction at $\tilde{p}$. That means, if
$T$ is a weak complex tangential direction at $\tilde{p}$, then neither does $|\langle\partial\sigma(\tilde{p}),T\rangle|^{2}$ have to be
zero nor does $H_{\sigma}(T,T)(\tilde{p})$ have to be non-negative. In view of \eqref{E:explaincutoff},
this says that it might actually happen that
$(NH_{r})(T,T)(\tilde{p})$ is greater than $(NH_{\rho})(T,T)(\tilde{p})$ for such a vector $T$. That is, by
removing the obstruction term at $p$ we might
have worsened the situation at $\tilde{p}$.
One might think that this does not cause any real problems since we
still need to introduce a correcting function $\tilde{\sigma}$ to remove the obstruction to
\eqref{E:Main1} on the set $\Sigma_{i+1}\cap V$. However, it might be the case that
$(NH_{\rho})(T,T)(\tilde{p})=0$. In this case we do not know whether $H_{\tilde{\sigma}}(T,T)$ is
strictly positive at $\tilde{p}$, i.e., we do not know whether $H_{\tilde{\sigma}}(T,T)(\tilde{p})$ can make
up for any obstructing terms at $\tilde{p}$ introduced by $\sigma$. This says that we need to smoothly
cut off $\sigma$ in a manner such that, away from $\Sigma_{i}\cap V$,
$|\langle\partial\sigma,T\rangle|^{2}$ stays close to zero and $H_{\sigma}(T,T)$ does not become too
negative (relative to $\epsilon|T|^{2}$). The construction of such a cut off function will be done in the
next section.
\section{The cutting off}\label{S:cutoff}
Let us recall our setting: we are considering a given boundary point $p\in\Sigma_{i}$,
$0\leq i \leq n-2$, $V$ a
neighborhood of $p$ and smooth, linearly independent $(1,0)$-vector fields
$\{W^{\alpha}\}_{\alpha=1}^{n-1-i}$ on $V$, which are complex tangential
to $b\Omega$ on $b\Omega\cap V$ and satisfy $H_{\rho}(W^{\alpha},W^{\alpha})=0$ on
$\Sigma_{i}\cap V$. From now on, we also suppose that $V$ and the $W^{\alpha}$'s are
chosen such that the
span of $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$ contains the null space of the Levi form of $\rho$ at $z$
for all $z\in\Sigma_{j}\cap V$ for $j\in\{i+1,\dots,n-2\}$. This can be done by first selecting
smooth $(1,0)$-vector fields $\{S^{\beta}(z)\}_{\beta=1}^{i}$ which are complex tangential to
$b\Omega$ on $b\Omega\cap V$ for some neighborhood $V$ of $p$ and orthogonal to each other
with respect to the Levi form of $\rho$ such that $H_{\rho}(S^{\beta},S^{\beta})>0$ holds on
$b\Omega\cap V$ after possibly shrinking $V$. Then one completes the basis of the complex
tangent space with smooth $(1,0)$-vector fields $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$ such that
the $W^{\alpha}$'s are orthogonal to the $S^{\beta}$'s with respect to the Levi form of $\rho$.
Let $V'\subset\subset V$ be another neighborhood of $p$. Let $\zeta\in C^{\infty}_{c}(V,[0,1])$ be a function
which equals $1$ on $V'$. For given $m>2$, let $\chi_{m}\in C^{\infty}(\mathbb{R})$ be
an increasing function with $\chi_{m}(x)=1$ for all $x\leq 1$ and $\chi_{m}(x)=e^{m}$ for all
$x\geq e^{m}$ such that
\begin{align*}
\frac{x}{\chi_{m}(x)}\leq 2,\;\;
\chi_{m}'(x)\leq 2,\;\text{and}\;\;
x\cdot\chi_{m}''(x)\leq 4\;\;\text{for all}\;\; x\in[1,e^{m}].
\end{align*}
Set $\chi_{m,\tau}(x)=\chi_{m}\left(\frac{x}{\tau}\right)$ for given $\tau>0$. The above properties then
become
\begin{align*}
\frac{x}{\chi_{m,\tau}(x)}\leq 2\tau,\;\;
\chi_{m,\tau}'(x)\leq \frac{2}{\tau},\;\text{and}\;\;
x\cdot\chi_{m,\tau}''(x)\leq \frac{4}{\tau}\;\;\text{for all}\;\; x\in[\tau,\tau e^{m}].
\end{align*}
Set $g_{m,\tau}(x)=1-\frac{\ln(\chi_{m,\tau}(x))}{m}$. It follows by a straightforward computation that
\begin{align}\label{E:propsg}
&g_{m,\tau}(x)=1\;\text{for}\;x\leq\tau,\;\;0\leq g_{m,\tau}(x)\leq 1\;\text{for all}\;x\in\mathbb{R},
\;\text{and}\notag\\
&|g_{m,\tau}'(x)|\leq\frac{4}{m}\cdot\frac{1}{x},\;\;g_{m,\tau}''(x)\geq-\frac{8}{m}\cdot\frac{1}{x^{2}}\;\;
\text{for}\;x\in(\tau, \tau e^{m}).
\end{align}
For given $m,\;\tau> 0$ we define
\begin{align*}
s_{m,\tau}(z)=\zeta(z)\cdot\sigma(z)\cdot g_{m,\tau}(\sigma(z))\;\;\text{for}\;z\in V
\end{align*}
and $s_{m,\tau}=0$ outside of $V$. This function has the properties described at the
end of Section \ref{S:modification} if $m,\tau$ are chosen appropriately:
\begin{lemma}\label{L:cutoff}
For all $\delta>0$, there exist $m,\;\tau>0$ such that
$s_{m,\tau}$ satisfies:
\begin{enumerate}
\item[(i)] $s_{m,\tau}=\zeta\sigma$ for $\sigma\in[0,\tau]$,
\item[(ii)] $0\leq s_{m,\tau}\leq\delta$ on $b\Omega$.
\end{enumerate}
Moreover, if $z\in \left(b\Omega\cap V\right)\setminus\Sigma_{i}$ and
$T\in\mathbb{C}T_{z}b\Omega$, then
\begin{enumerate}
\item[(iii)] $|\langle\partial s_{m,\tau}(z),T\rangle|\leq\delta|T|$,
\item[(iv)] $H_{s_{m,\tau}}(T,T)(z)\geq-\delta|T|^{2}$ if $T\in\operatorname{span}\{W^{\alpha}(z)\}$.
\end{enumerate}
\end{lemma}
Note that part (iv) of Lemma \ref{L:cutoff} in particular says that if
$z\in\cup_{j=i+1}^{n-2}\Sigma_{j}\cap V$, then $H_{s_{m,\tau}}(T,T)(z)\geq-\delta|T|^{2}$ for all $T$
which are weak complex tangential vectors at $z$.
To prove Lemma \ref{L:cutoff}, we will need $|\langle\partial\sigma,T\rangle|^{2}
\lesssim\sigma|T|^{2}$ on
$\overline{\operatorname{supp}(\zeta)}\cap b\Omega$. That this is in fact true we learned from J. D. McNeal.
\begin{lemma}[\cite{McN}]\label{L:McNeal}
Let $U\subset\subset\mathbb{R}^{n}$ be open. Let $f\in C^{2}(U)$ be a non-negative
function on $U$. Then for any compact set $K\subset\subset U$, there exists a constant $c>0$ such that
\begin{align}\label{E:McNeal}
|\nabla f(x)|^{2}\leq c f(x)\;\;\text{for all}\;\;x\in K.
\end{align}
\end{lemma}
Since the proof by McNeal is rather clever, and since we are not aware of it being published,
we shall give it here.
\begin{proof}
Let $F$ be a smooth, non-negative function such that $F=f$ on $K$ and $F=0$ on
$\mathbb{R}^{n}\setminus U$.
For a given $x\in K$, we have for all $h\in\mathbb{R}^{n}$ that
\begin{align}\label{E:McNealTaylor}
0\leq F(x+h)&=F(x)+\sum_{k=1}^{n}\frac{\partial F}{\partial x_{k}}(x)h_{k}
+\frac{1}{2}\sum_{k,l=1}^{n}\frac{\partial^{2} F}{\partial x_{k}\partial x_{l}}(\xi)h_{k}h_{l}\notag\\
&=f(x)+\sum_{k=1}^{n}\frac{\partial f}{\partial x_{k}}(x)h_{k}+
\frac{1}{2}\sum_{k,l=1}^{n}\frac{\partial^{2} F}{\partial x_{k}\partial x_{l}}(\xi)h_{k}h_{l}
\end{align}
holds for some $\xi\in U$.
Note that \eqref{E:McNeal} is true if $\left(\nabla f\right)(x)=0$. So assume now that
$\left(\nabla f\right)(x)\neq 0$ and choose
$h_{k}=\frac{\frac{\partial f}{\partial x_{k}}(x)}{\left|\left(\nabla f\right)(x)\right|}\cdot t$ for
$t\in\mathbb{R}$. Then
\eqref{E:McNealTaylor} becomes
\begin{align*}
0\leq f(x)+\left|\left(\nabla f\right)(x)\right|t
+nL\cdot\frac{\sum_{k=1}^{n}\left|\frac{\partial f}{\partial x_{k}}(x)\right|^{2}}{\left|\left(\nabla f\right)
(x) \right|^{2}}
\cdot t^{2}\;\;\text{for all}\;\;t\in\mathbb{R},
\end{align*}
where $L=\frac{1}{2}\sup
\left\{\left|\frac{\partial^{2}F}{\partial x_{k}\partial x_{l}}(\xi)
\right|\;|\;\xi\in U,\;1\leq k,l\leq n\right\}$.
Therefore, \eqref{E:McNealTaylor} becomes
\begin{align*}
0\leq f(x)+\left|\left(\nabla f\right)(x)\right|\cdot t+nL\cdot t^{2}\;\;\text{for all}\;\;t\in\mathbb{R}.
\end{align*}
In particular, the following must hold for all
$t\in\mathbb{R}$:
\begin{align*}
-\frac{f(x)}{nL}+\frac{\left|\left(\nabla f\right)(x)\right|^{2}}{(2nL)^{2}}
\leq
\left(
t+\frac{\left|\left(\nabla f\right)(x)\right|}{2nL}
\right)^{2},
\end{align*}
which implies that
\begin{align*}
\left|\left(\nabla f\right)(x)\right|^{2}\leq 4nL\cdot f(x).
\end{align*}
\end{proof}
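A one-dimensional example, included only as an illustration, shows that an estimate of the form \eqref{E:McNeal} cannot be improved in its dependence on $f$: for $f(x)=x^{2}$ one has $|\nabla f(x)|^{2}=4x^{2}=4f(x)$, which is consistent with the bound $4nL\cdot f$ obtained in the proof.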
We can assume that $V$ is such that there exists a diffeomorphism $\phi:V\cap b\Omega
\longrightarrow U$ for some open set $U\subset\subset\mathbb{R}^{2n-1}$.
Set $f:=\sigma\circ\phi^{-1}$. Then $f$ satisfies the
hypotheses of Lemma \ref{L:McNeal}. Hence we get that there exists a constant $c>0$ such that
$\left|\left(\nabla f\right)(x)\right|^{2}\leq cf(x)$ for all $x\in K=\phi(\overline{\operatorname{supp}(\zeta)})$. This implies
that there exists a constant $c_{1}>0$, depending on $\phi$, such that
\begin{align}\label{E:McNealapplied}
\left|\langle\partial\sigma(z),T\rangle\right|^{2}\leq c_{1}\sigma(z)|T|^{2}\;\;
\text{for all}\;\; z\in\overline{\operatorname{supp}(\zeta)}\;\text{and}\; T\in\mathbb{C}T_{z}b\Omega.
\end{align}
Now we can prove Lemma \ref{L:cutoff}.
\begin{proof}[Proof of Lemma \ref{L:cutoff}]
Note first that $s_{m,\tau}$ is identically zero
on $b\Omega\setminus V$ for any $m>2$ and $\tau>0$.
Now let $\delta>0$ be given, let $m$ be a large, positive number, fixed and to be chosen later
(that is, in the proof of (iv)).
Below we will show how to
choose $\tau>0$ once $m$ has been chosen.
Part (i) follows directly from the definition of $s_{m,\tau}$ for any choice of $m>2$ and $\tau>0$.
Part (ii) also follows
straightforwardly, if $\tau>0$ is such that $\tau e^{m}\leq\delta$.
Notice that for all $z\in b\Omega\cap V$ with $\sigma(z)>\tau e^{m}$, $s_{m,\tau}(z)=0$, and hence
(iii), (iv) hold trivially there. Thus, to
prove (iii) and (iv) we only need to consider the two sets
\begin{align*}
S_{1}=\{z\in b\Omega\cap V\;|\;\sigma(z)\in(0,\tau)\}\;\;\text{and}\;\;
S_{2}=\{z\in b\Omega\cap V\;|\;\sigma(z)\in[\tau,\tau e^{m}]\}.
\end{align*}
\emph{Proof of (iii):} If $z\in S_{1}$, then $s_{m,\tau}(z)=\zeta(z)\cdot\sigma(z)$ and
if $T\in\mathbb{C}T_{z}b\Omega$, we get
\begin{align*}
\left|\langle\partial s_{m,\tau}(z),T\rangle
\right|&=
\left|
\sigma\cdot\langle\partial\zeta,T\rangle+\zeta\cdot\langle\partial\sigma, T\rangle
\right|(z)\\
&\overset{\eqref{E:McNealapplied}}{\leq} \left(c_{2}\sigma(z)
+\left(c_{1}\sigma(z)\right)^{\frac{1}{2}}\right)|T|, \;
\text{where}\;\; c_{2}=\max_{z\in b\Omega\cap V}|\partial\zeta(z)|.
\end{align*}
Thus, if we choose $\tau>0$ such that $c_{2}\tau+(c_{1}\tau)^{\frac{1}{2}}\leq\delta$, then
(iii) holds on the set $S_{1}$.\\
Now suppose that $z\in S_{2}, T\in\mathbb{C}T_{z}b\Omega$ and compute:
\begin{align*}
\left|
\langle\partial s_{m,\tau}(z),T\rangle
\right|
&=\left|
\sigma\cdot g_{m,\tau}(\sigma)\cdot\langle\partial\zeta,T\rangle
+\zeta\cdot\langle\partial\sigma,T\rangle
\left(
g_{m,\tau}(\sigma)+\sigma\cdot g_{m,\tau}'(\sigma)
\right)
\right|(z)\\
&\overset{\eqref{E:McNealapplied},\eqref{E:propsg}}{\leq}
\left(
c_{2}\sigma(z)+\left(c_{1}\sigma(z)\right)^{\frac{1}{2}}\cdot\left(1+\frac{4}{m}\right)
\right)|T|.
\end{align*}
Thus, if we choose $\tau>0$ such that $c_{2}\tau e^{m}+2(c_{1}\tau e^{m})^{\frac{1}{2}}
\leq\delta$, then (iii) also holds on the set $S_{2}$.
\emph{Proof of (iv):} Let us first consider the case when $z\in S_{1}$. Then, again,
$s_{m,\tau}(z)=\zeta(z)\cdot\sigma(z)$ and if $T$ is in the span of
$\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$, we obtain
\begin{align*}
H_{s_{m,\tau}}(T,T)(z)=\left[\sigma\cdot H_{\zeta}(T,T)
+2\operatorname{Re}\left(\langle\partial\zeta,T\rangle\cdot
\overline{\langle\partial\sigma,T\rangle}\right)
+\zeta\cdot H_{\sigma}(T,T)
\right](z).
\end{align*}
Let $c_{3}>0$ be a constant such that $H_{\zeta}(\xi,\xi)(z)\geq
-c_{3}|\xi|^{2}$ for all $z\in b\Omega\cap V$, $\xi\in\mathbb{C}^{n}$. Then it follows, using
\eqref{E:McNealapplied} again, that
\begin{align*}
H_{s_{m,\tau}}(T,T)(z)\geq
-\left(c_{3}\sigma(z)+2c_{2}\left(c_{1}\sigma(z) \right)^{\frac{1}{2}}\right) |T|^{2}
+\zeta(z)\cdot H_{\sigma}(T,T)(z).
\end{align*}
Note that for $z$ and $T$ as above, $H_{\sigma}(T,T)(z)\geq 0$ when $\sigma(z)=0$.
Furthermore, the set
\begin{align*}
\left\{(z,T)\;|\;z\in b\Omega\cap\overline{\operatorname{supp}\zeta},\;\sigma(z)=0,\;
T\in\operatorname{span}\{W^{1}(z),\dots,W^{n-1-i}(z)\}\right\}
\end{align*}
is a closed subset of the complex tangent bundle of $b\Omega$.
Thus there exists a neighborhood $U\subset V$
of $\{z\in b\Omega\cap\overline{\operatorname{supp}\zeta}\;|\;\sigma(z)=0\}$ such that
\begin{align*}
H_{\sigma}(T,T)(z)\geq-\frac{\delta}{2}|T|^{2}
\end{align*}
holds for all $z\in b\Omega\cap U$, $T$ in the span of $\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$.
Let $\nu_{1}$ be the maximum of $\sigma$ on the closure of $b\Omega\cap U$, and let
$\nu_{2}$ be the minimum of $\sigma$ on
$\left(b\Omega\cap\overline{\operatorname{supp}\zeta}\right)\setminus U$.
Now choose $\tau>0$ such that
$\tau \leq\min\{\nu_{1},\frac{\nu_{2}}{2}\}$. Then $z\in S_{1}$ implies that $z\in b\Omega\cap U$ and
therefore
$\zeta(z)\cdot H_{\sigma}(T,T)(z)\geq-\frac{\delta}{2}|T|^{2}$ for all $T$ in the span of
the $W^{\alpha}(z)$'s.
If we also make sure that $\tau>0$ is such that $c_{3}\tau+2c_{2}\left(c_{1}\tau\right)^{\frac{1}{2}}
\leq\frac{\delta}{2}$, then (iv) is true on $S_{1}$.
Now suppose that $z\in S_{2}$ and $T$ in the span of
$\{W^{\alpha}(z)\}_{\alpha=1}^{n-1-i}$. We compute
\begin{align*}
H_{s_{m,\tau}}&(T,T)(z)=
\Bigl[\sigma g_{m,\tau}(\sigma) H_{\zeta}(T,T)
+2\operatorname{Re}\left(\langle\partial\zeta,T\rangle\cdot
\overline{\langle\partial\sigma,T\rangle}
\right)
\left(
g_{m,\tau}(\sigma)+\sigma g_{m,\tau}'(\sigma)
\right)\Bigr.\\
&\Bigl.+
\zeta\left|\langle\partial\sigma,T\rangle\right|^{2}
\left(2g_{m,\tau}'(\sigma)+\sigma g_{m,\tau}''(\sigma)\right)
+\zeta H_{\sigma}(T,T)
(g_{m,\tau}(\sigma)+\sigma g_{m,\tau}'(\sigma))
\Bigr](z)\\
&=\operatorname{I}+\operatorname{II}+\operatorname{III}+\operatorname{IV}.
\end{align*}
If we choose $\tau>0$ such that $\tau e^{m}c_{3}\leq\frac{\delta}{4}$, then it follows that
$\operatorname{I}\geq-\frac{\delta}{4}|T|^{2}$. Estimating the term $\operatorname{II}$ we get
\begin{align*}
\operatorname{II}\overset{\eqref{E:propsg}}{\geq}-2c_{2}\left|\langle\partial\sigma(z),T\rangle
\right||T|\left(1+\frac{4}{m}\right)
\overset{\eqref{E:McNealapplied}}{\geq} -4c_{2}\left(c_{1}\sigma(z)\right)^{\frac{1}{2}}|T|^{2}
\geq-\frac{\delta}{4}|T|^{2},
\end{align*}
if we choose $\tau>0$ such that $4c_{2}\left(c_{1}\tau e^{m}\right)^{\frac{1}{2}}\leq\frac{\delta}{4}$.
To estimate term
$\operatorname{IV}$, we only need to make sure that $\tau>0$ is so small that $z\in S_{2}$ implies that
$2\zeta(z)\cdot H_{\sigma}(T,T)(z)\geq
-\frac{\delta}{4}|T|^{2}$. This can be done similarly to the case when $z\in S_{1}$.
Note that up to this point the size of the parameter $m$ played no role. That is, we obtain above
results for any choice of $m$ as long as $\tau>0$ is sufficiently small. The size of $m$ only matters
for the estimates on term $\operatorname{III}$: \eqref{E:propsg} and \eqref{E:McNealapplied} yield
\begin{align*}
\operatorname{III}&\geq -\left|\langle\partial\sigma(z),T\rangle\right|^{2}
\left[2|g_{m,\tau}'(\sigma(z))|+\sigma(z)g_{m,\tau}''(\sigma(z))\right]\\
&\geq -\left|\langle\partial\sigma(z),T\rangle\right|^{2}\frac{16}{m\sigma(z)}
\geq-\frac{16 c_{1}}{m}|T|^{2}.
\end{align*}
We now choose $m>0$ such that $\frac{16 c_{1}}{m}\leq\frac{\delta}{4}$, and then we choose
$\tau>0$ according to our previous computations.
\end{proof}
\section{Proof of \eqref{E:Main1}}\label{S:proof}
We shall prove \eqref{E:Main1} by induction over the rank of the Levi form of $\rho$. To start the
induction we construct a smooth defining function $r_{0}$ of $\Omega$ which satisfies
\eqref{E:Main1} on $U_{0}\cap\Omega$ for some neighborhood $U_{0}$ of $\Sigma_{0}$.
Let $\{V_{j,0}\}_{j\in J_{0}}$, $\{V_{j,0}'\}_{j\in J_{0}}\subset\subset\mathbb{C}^{n}$ be
finite, open covers of $\Sigma_{0}$ with $V_{j,0}'\subset\subset V_{j,0}$ such that there exist smooth,
linearly independent $(1,0)$-vector fields $W_{j,0}^{\alpha}$, $\alpha\in\{1,\dots,n-1\}$,
defined on $V_{j,0}$, which
are complex tangential to $b\Omega$ on $b\Omega\cap V_{j,0}$ and satisfy:
\begin{enumerate}
\item $H_{\rho}(W_{j,0}^{\alpha},W_{j,0}^{\alpha})=0$ on $\Sigma_{0}\cap V_{j,0}$ for all
$j\in J_{0}$,
\item the span of the $W_{j,0}^{\alpha}(z)$'s contains the null space of the Levi form of
$\rho$ at $z$ for all boundary points $z$ in $\cup_{l=1}^{n-2}\Sigma_{l}\cap V_{j,0}$.
\end{enumerate}
We shall write $V_{0}=\cup_{j\in J_{0}}V_{j,0}$
and $V_{0}'=\cup_{j\in J_{0}}V_{j,0}'$.
Choose smooth, non-negative functions
$\zeta_{j,0}$, $j\in J_{0}$, such that
\begin{align*}
\sum_{j\in J_{0}}\zeta_{j,0}=1\;\text{on}\;V_{0}',\;\sum_{j\in J_{0}}\zeta_{j,0}\leq 1\;\text{on}\;V_{0},
\;\text{and}\;\overline{\operatorname{supp}\zeta_{j,0}}\subset V_{j,0}\;\text{for all}\;j\in J_{0}.
\end{align*}
Set $\sigma_{j,0}=\sum_{\alpha=1}^{n-1}H_{\rho}(W_{j,0}^{\alpha},W_{j,0}^{\alpha})$. For given
$\epsilon>0$, choose $C_{0}$ according to \eqref{E:chooseC}. Then choose $m_{j,0}$
and $\tau_{j,0}$ as in Lemma \ref{L:cutoff} such that
\begin{align*}
s_{m_{j,0},\tau_{j,0}}(z)=
\begin{cases}
\zeta_{j,0}(z)\cdot\sigma_{j,0}(z)\cdot g_{m_{j,0},\tau_{j,0}}(\sigma_{j,0}(z)) &\text{if}\;\;z\in V_{j,0}\\
0 &\text{if}\;\;z\in (V_{j,0})^{c}
\end{cases}
\end{align*}
satisfies (i)-(iv) of Lemma \ref{L:cutoff} for $\delta_{0}=\frac{\epsilon}{C_{0}|J_{0}|}$.
Finally, set $s_{0}=\sum_{j\in J_{0}}s_{m_{j,0},\tau_{j,0}}$ and
define the smooth defining function $r_{0}=\rho e^{-C_{0}s_{0}}$.
By our choice of $r_{0}$ we have for all
$q\in\Omega\cap V_{0}'$ with $\pi(q)\in\Sigma_{0}\cap V_{0}'$ that
\begin{align*}
H_{r_{0}}(W,W)(q)\geq H_{r_{0}}(W,W)(\pi(q))-\epsilon \left|r_{0}(q)\right|\cdot |W|^{2}
=-\epsilon \left|r_{0}(q)\right|\cdot |W|^{2}
\end{align*}
for all $W\in\mathbb{C}T_{\pi(q)}b\Omega$.
In fact, by continuity there exists a neighborhood $U_{0}\subset V_{0}'$ of $\Sigma_{0}$ such that
\begin{align*}
H_{r_{0}}(W,W)(q)\geq H_{r_{0}}(W,W)(\pi(q))-2\epsilon |r_{0}(q)||W|^{2}
\end{align*}
holds for all $q\in\Omega\cap U_{0}$ with $\pi(q)\in b\Omega\cap U_{0}$ and
$W\in\mathbb{C}T_{\pi(q)}b\Omega$.
Now let $\xi\in\mathbb{C}^{n}$. For each $q\in\Omega\cap U_{0}$
with $\pi(q)\in b\Omega\cap U_{0}$ we shall
write $\xi=W+M$, where $W\in\mathbb{C}T_{\pi(q)}b\Omega$ and $M$ in the span of $N(\pi(q))$.
Note that then $|\xi|^{2}=|W|^{2}+|M|^{2}$.
We get for the complex Hessian of $r_{0}$ at $q$:
\begin{align*}
H_{r_{0}}(\xi,\xi)(q)&=H_{r_{0}}(W,W)(q)+2\operatorname{Re}\left(
H_{r_{0}}(W,M)(q)
\right)
+H_{r_{0}}(M,M)(q)\\
&\geq H_{r_{0}}(W,W)(\pi(q))-2\epsilon\left|r_{0}(q)\right|\cdot |W|^{2}+
2\operatorname{Re}\left(
H_{r_{0}}(W,M)(q)
\right)
+H_{r_{0}}(M,M)(q).
\end{align*}
Note that Taylor's theorem yields
\begin{align*}
H_{r_{0}}(W,M)(q)&=H_{r_{0}}(W,M)(\pi(q))+\mathcal{O}(d_{b\Omega}(q))|W||M|\\
&=
e^{-C_{0}s_{0}(\pi(q))}\left(H_{\rho}(W,M)-C_{0}\overline{\langle\partial\rho,M\rangle}
\langle\partial s_{0},W\rangle\right)(\pi(q))+\mathcal{O}(d_{b\Omega}(q))|W||M|.
\end{align*}
It follows by property (iii) of Lemma \ref{L:cutoff} that
$|\langle\partial s_{0},W\rangle|\leq\frac{\epsilon}{C_{0}}|W|$ on $b\Omega$.
After possibly shrinking $U_{0}$ we get
\begin{align*}
2\operatorname{Re}\left(H_{r_{0}}(W,M)(\pi(q))\right)\geq-4\epsilon|\partial\rho| |W| |M|
+e^{-C_{0}s_{0}(\pi(q))}2\operatorname{Re}\left(H_{\rho}(W,M)\right)(\pi(q)).
\end{align*}
Putting the above estimates together, we have
\begin{align*}
H_{r_{0}}(\xi,\xi)(q)\geq
&-2\epsilon|r_{0}(q)||W|^{2}-4\epsilon|\partial\rho||W||M|+H_{r_{0}}(M,M)(q)\\
&+e^{-C_{0}s_{0}(\pi(q))}
\left[
H_{\rho}(W,W)(\pi(q))+2\operatorname{Re}\left(H_{\rho}(W,M)(\pi(q))\right)
\right].
\end{align*}
An application of the (sc)-(lc) inequality yields
\begin{align*}
-\epsilon |W||M|\geq-\epsilon\left(|r_{0}(q)||\xi|^{2}
+\frac{1}{|r_{0}(q)|}\left|\langle\partial r_{0}(\pi(q)),\xi\rangle\right|^{2}
\right),
\end{align*}
where we used that $|\xi|^{2}=|W|^{2}+|M|^{2}$. Taylor's
theorem also gives us that
\begin{align*}
\left|\langle\partial r_{0}(\pi(q)),\xi\rangle\right|
&=e^{-C_{0}s_{0}(\pi(q))}\cdot\left|
\langle\partial\rho(\pi(q)),\xi\rangle
\right|\\
&\leq e^{-C_{0}s_{0}(\pi(q))}\cdot\left(
\left|\langle\partial\rho(q),\xi\rangle\right|+
\mathcal{O}\left(\rho(q)\right)|\xi|
\right),
\end{align*}
where the constant in the last term is independent of $\epsilon$. After possibly shrinking $U_{0}$, we
obtain for all $q\in\Omega\cap U_{0}$
\begin{align*}
\left|\langle\partial r_{0}(\pi(q)),\xi\rangle\right|
&\leq 2e^{-C_{0}s_{0}(q)}\cdot\left|\langle\partial\rho(q),\xi\rangle\right|
+\mathcal{O}\left(r_{0}(q)\right)|\xi|\\
&\leq 2\left|\langle\partial r_{0}(q),\xi\rangle\right|+2\left|\rho(q)\right|
\cdot\left|\langle\partial e^{-C_{0}s_{0}(q)},\xi\rangle\right|
+\mathcal{O}\left(r_{0}(q)\right)|\xi|,
\end{align*}
where, again, the constant in the last term is independent of $\epsilon$. Using that
$\left|\langle\partial s_{0},W\rangle\right|\leq\frac{\epsilon}{C_{0}}|W|$ on $b\Omega$, we get
\begin{align*}
\left|\langle\partial e^{-C_{0}s_{0}(q)},\xi\rangle\right|
\leq 2\epsilon |W|+\mathcal{O}\left( \left|\langle\partial\rho(\pi(q)),\xi\rangle \right| \right),
\end{align*}
which implies that
$|\langle\partial r_{0}(\pi(q)),\xi\rangle|\lesssim|\langle\partial r_{0}(q),\xi\rangle|+
|r_{0}(q)||\xi|$. Thus we have
\begin{align*}
-\epsilon|W||M|\gtrsim-\epsilon\left(|r_{0}(q)||\xi|^{2}
+\frac{1}{|r_{0}(q)|}\left|\langle\partial r_{0}(q),\xi\rangle\right|^{2}
\right).
\end{align*}
Since
\begin{align*}
H_{\rho}(W,W)(\pi(q))+2\operatorname{Re}\left(H_{\rho}(W,M)(\pi(q))\right)=H_{\rho}(\xi,\xi)(\pi(q))-H_{\rho}(M,M)(\pi(q)),
\end{align*}
and $H_{\rho}(\xi,\xi)(\pi(q))$ is non-negative, it follows that
\begin{align*}
H_{r_{0}}(\xi,\xi)(q)\gtrsim -\epsilon\left(|r_{0}(q)||\xi|^{2}+\frac{1}{|r_{0}(q)|}
\left|\langle\partial r_{0}(q),\xi\rangle\right|^{2}\right)
+\mu_{0}H_{\rho}(\xi,\xi)(\pi(q))
\end{align*}
for some positive constant $\mu_{0}$.
Since the constants in $\gtrsim$ do not depend on the choice of $\epsilon$, this
proves \eqref{E:Main1} in an open neighborhood $U_{0}$ of $\Sigma_{0}$.
Let $l\in\{0,\dots,n-3\}$ be fixed and suppose that there exist a smooth defining function
$r_{l}$ of $b\Omega$ and an open neighborhood $U_{l}\subset\mathbb{C}^{n}$ of
$\cup_{i=0}^{l}\Sigma_{i}$ such that
\begin{align}\label{E:ihypotheses1}
H_{r_{l}}(\xi,\xi)(q)\geq -\epsilon
\left(
|r_{l}(q)||\xi|^{2}+\frac{1}{|r_{l}(q)|}\left|\langle\partial r_{l}(q),\xi\rangle\right|^{2}
\right)+\mu_{l}H_{\rho}(\xi,\xi)(\pi(q))
\end{align}
for all $q\in\Omega\cap U_{l}$ with $\pi(q)\in b\Omega\cap U_{l}$ and $\xi\in\mathbb{C}^{n}$. Here,
$\mu_{l}$ is some positive constant.
Furthermore, we suppose that the function $\vartheta_{l}$ defined by $r_{l}=\rho e^{-\vartheta_{l}}$
satisfies
the following
\begin{align}
\left|
\langle\partial\vartheta_{l}(z),T\rangle\right|&\leq\epsilon|T|\;\;\text{for all}\;\;z\in b\Omega, T\in
\mathbb{C}T_{z}b\Omega\label{E:ihypothesis2}\\
H_{\vartheta_{l}}(T,T)(z)&\geq-\epsilon|T|^{2}\;\;\text{for all}\;\; z\in\cup_{j=l+1}^{n-2}\Sigma_{j},
T\in\mathbb{C}T_{z}b\Omega\;\;\text{with}\;\;H_{\rho}(T,T)(z)=0\label{E:ihypothesis3}.
\end{align}
Let $k=l+1$. We shall now show that there exist a smooth defining function $r_{k}$ and
a neighborhood $U_{k}$ of $\cup_{i=0}^{k}\Sigma_{i}$ such that for some positive constant
$\mu_{k}$
\begin{align}\label{E:claimistep}
H_{r_{k}}(\xi,\xi)(q)\geq -\epsilon\left(
|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|
\langle\partial r_{k}(q),\xi\rangle
\right|^{2}
\right)+\mu_{k}H_{\rho}(\xi,\xi)(\pi(q))
\end{align}
holds for all $q\in \Omega\cap U_{k}$ with $\pi(q)\in b\Omega\cap U_{k}$ and $\xi\in\mathbb{C}^{n}$.
Let $\{V_{j,k}\}_{j\in J_{k}}$, $\{V_{j,k}'\}_{j\in J_{k}}$ be finite, open covers of $\Sigma_{k}\setminus
U_{k-1}$ such that
\begin{enumerate}
\item[(1)] $V_{j,k}'\subset\subset V_{j,k}$ and $\overline{V}_{j,k}
\cap\left(\cup_{i=0}^{k-1}\Sigma_{i}\right)=\emptyset$
for all $j\in J_{k}$, and
\item[(2)] there exist smooth, linearly
independent $(1,0)$-vector fields $W_{j,k}^{\alpha}$, $\alpha\in\{1,\dots,n-1-k\}$,
and $S_{j,k}^{\beta}$, $\beta\in\{1,\dots,k\}$, defined on $V_{j,k}$, which are
complex tangential to $b\Omega$ on $b\Omega\cap V_{j,k}$ and satisfy the following:
\begin{enumerate}
\item[(a)] $H_{\rho}(W_{j,k}^{\alpha},W_{j,k}^{\alpha})=0$ on $\Sigma_{k}\cap V_{j,k},\;
\alpha\in\{1,\dots,n-1-k\},\;j\in J_{k}$,
\item[(b)] the span of $\{W_{j,k}^{1}(z),\dots,W_{j,k}^{k}(z)\}$ contains the null space of the Levi form
of $\rho$ at all boundary points $z$ belonging to $\cup_{i=k+1}^{n-2}\Sigma_{i}\cap V_{j,k}$,
\item[(c)] $H_{\rho}(S_{j,k}^{\beta}, S_{j,k}^{\beta})> 0$ on $b\Omega\cap \overline{V_{j,k}},\;
\beta\in\{1,\dots,k\},\;j\in J_{k}$,
\item[(d)] $H_{\rho}(S_{j,k}^{\beta},S_{j,k}^{\tilde{\beta}})=0$ for $\beta\neq\tilde{\beta}$ on
$b\Omega\cap V_{j,k}$, $\beta,\tilde{\beta}\in\{1,\dots,k\},\;j\in J_{k}$.
\end{enumerate}
\end{enumerate}
Note that the above vector fields $\{W_{j,k}^{\alpha}\}$
always exist in some neighborhood of a given point in
$\Sigma_{k}$. However, we might not be able to cover $\Sigma_{k}$ with finitely many such
neighborhoods, when the closure of $\Sigma_{k}$ contains boundary points at which the Levi
form is of lower rank. Moreover, if the latter is the case, then (c) above is also impossible. These are the
reasons for proving \eqref{E:Main1} via induction over the rank of the Levi form of
$\rho$.
If $S$ is in the span of $\{S_{j,k}^{\beta}(z)\}_{\beta=1}^{k}$ and
$W$ is in the span of $\{W_{j,k}^{\alpha}(z)\}_{\alpha=1}^{n-1-k}$
for some $z\in V_{j,k}$, $j\in J_{k}$, then there is some constant
$\kappa_{k}>0$, independent of $z$ and $j$, such that $|S|^{2}+|W|^{2}\leq\kappa_{k}|S+W|^{2}$.
We shall write $V_{k}=\cup_{j\in J_{k}}V_{j,k}$ and $V_{k}'=\cup_{j\in J_{k}}V_{j,k}'$.
Let $\zeta_{j,k}$ be non-negative, smooth functions such that
\begin{align*}
\sum_{j\in J_{k}}\zeta_{j,k}=1\;\text{on}\;V_{k}',\;\sum_{j\in J_{k}}\zeta_{j,k}\leq 1\;
\text{on}\;V_{k}\;\text{and}\;\overline{\operatorname{supp}\zeta_{j,k}}\subset V_{j,k}.
\end{align*}
Set $\sigma_{j,k}=\sum_{\alpha=1}^{n-k-1}H_{\rho}(W_{j,k}^{\alpha},W_{j,k}^{\alpha})$. Recall that
$\epsilon>0$ is given. Choose $C_{k}$ according to \eqref{E:chooseC} with
$\frac{\epsilon}{\kappa_{k}}$ in place of $\epsilon$ there.
We now choose $m_{j,k},\;\tau_{j,k}>0$ such that
\begin{align*}
s_{m_{j,k},\tau_{j,k}}(z)=
\begin{cases}
\zeta_{j,k}(z)\cdot\sigma_{j,k}(z)\cdot g_{m_{j,k}\tau_{j,k}}(\sigma_{j,k}(z))&\text{if}\;
z\in V_{j,k}\\
0&\text{if}\;z\in (V_{j,k})^{c}
\end{cases}
\end{align*}
satisfies (i)-(iv) of Lemma \ref{L:cutoff} with
$\delta_{k}=\frac{\epsilon}{C_{k}|J_{k}|\kappa_{k}}$. Set $s_{k}=\sum_{j\in J_{k}}s_{m_{j,k},\tau_{j,k}}$
and define the smooth defining function
$r_{k}=r_{k-1}e^{-C_{k}s_{k}}$. We claim that this choice of $r_{k}$ satisfies
\eqref{E:claimistep}.
We shall first see that \eqref{E:claimistep} is true for all $q\in\Omega\cap U_{k-1}\cap V_{k}$
with $\pi(q)\in b\Omega\cap U_{k-1}\cap V_{k}$.
A straightforward computation yields
\begin{align}\label{E:istepint1}
H_{r_{k}}(\xi,\xi)(q)=
e^{-C_{k}s_{k}(q)}
\biggl[
H_{r_{k-1}}(\xi,\xi)\biggr.&+r_{k-1}\left(
C_{k}^{2}\left|\langle\partial s_{k},\xi\rangle\right|^{2}-C_{k}H_{s_{k}}(\xi,\xi)
\right)\\
\biggl.&-2C_{k}\operatorname{Re}\left(
\overline{\langle\partial r_{k-1},\xi\rangle}\langle\partial s_{k},\xi\rangle
\right)
\biggr](q).\notag
\end{align}
By induction hypothesis \eqref{E:ihypotheses1} we have good control over the first term in
\eqref{E:istepint1}:
\begin{align*}
e^{-C_{k}s_{k}(q)}H_{r_{k-1}}(\xi,\xi)(q)\geq
&-\epsilon\left(
|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle
e^{-C_{k}s_{k}(q)}\partial r_{k-1}(q),\xi\rangle
\right|^{2}
\right)\\
&+e^{-C_{k}s_{k}(q)}\mu_{k-1}H_{\rho}(\xi,\xi)(\pi(q)).
\end{align*}
Note that
\begin{align*}
\left|
\langle
e^{-C_{k}s_{k}(q)}\partial r_{k-1}(q),\xi\rangle
\right|^{2}
\leq
2\left|
\langle\partial r_{k}(q),\xi\rangle
\right|^{2}
+r_{k}^{2}(q)C_{k}^{2}\left|
\langle\partial s_{k}(q),\xi\rangle
\right|^{2}.
\end{align*}
Moreover, part (iii) of Lemma \ref{L:cutoff} implies that
\begin{align}\label{E:istepint2}
C_{k}^{2}\left|
\langle\partial s_{k}(q),\xi\rangle
\right|^{2}
\leq
2\epsilon|\xi|^{2}+\mathcal{O}\left(\left|
\langle\partial r_{k}(\pi(q)),\xi\rangle
\right|^{2}\right)
\leq 3\epsilon|\xi|^{2}+\mathcal{O}\left(\left|
\langle\partial r_{k}(q),\xi\rangle
\right|^{2}\right)
\end{align}
after possibly shrinking $U_{k-1}$ (in normal direction only). Thus
we have
\begin{align*}
e^{-C_{k}s_{k}(q)}H_{r_{k-1}}(\xi,\xi)(q)\gtrsim
-\epsilon\left(
|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle
\partial r_{k}(q),\xi\rangle
\right|^{2}
\right)+\mu H_{\rho}(\xi,\xi)(\pi(q)).
\end{align*}
for some positive constant $\mu\leq\mu_{k-1}$. Thus the first term on the right hand side of
\eqref{E:istepint1} is taken care of.
Now suppose $q\in\Omega\cap U_{k-1}\cap V_{k}$ is such that $\pi(q)\in b\Omega\cap V_{j,k}$ for
some $j\in J_{k}$. To be able to deal with the term $H_{s_{k}}(\xi,\xi)(q)$ in \eqref{E:istepint1}, we shall
write $\xi=S+W+M$, where
\begin{align*}
S\in\operatorname{span}\left(\{S_{j,k}^{\beta}(\pi(q))\}_{\beta=1}^{k} \right),\;\;
W\in\operatorname{span}\left(\{W_{j,k}^{\alpha}(\pi(q))\}_{\alpha=1}^{n-1-k}\right),\;\;\text{and}\;\;
M\in\operatorname{span}\left(N(\pi(q))\right).
\end{align*}
Then the (sc)-(lc) inequality gives
\begin{align*}
C_{k}H_{s_{k}}(\xi,\xi)(q)&\geq C_{k}H_{s_{k}}(W,W)(q)-\frac{\epsilon}{\kappa_{k}}|W|^{2}+
\mathcal{O}\left(|S|^{2}+|M|^{2}\right)\\
&\geq -2\frac{\epsilon}{\kappa_{k}}|W|^{2}+\mathcal{O}\left(|S|^{2}+|M|^{2}\right),
\end{align*}
where the last step holds since $s_{k}$ satisfies part (iv) of Lemma \ref{L:cutoff}.
The last inequality together with \eqref{E:istepint2} lets us estimate the second term
in \eqref{E:istepint1} as follows
\begin{align*}
e^{-C_{k}s_{k}(q)}r_{k-1}(q)&\left(
C_{k}^{2}\left|\langle\partial s_{k},\xi\rangle\right|^{2}-C_{k}H_{s_{k}}(\xi,\xi)
\right)(q)\\
&\gtrsim
-\epsilon
\left(
|r_{k}||\xi|^{2}+\frac{1}{|r_{k}|}\left|\langle
\partial r_{k},\xi\rangle
\right|^{2}
\right)(q)+\mathcal{O}(r_{k}(q))|S|^{2}.
\end{align*}
For the third term in \eqref{E:istepint1} we use \eqref{E:istepint2} again and obtain
\begin{align*}
-2C_{k}e^{-C_{k}s_{k}(q)}\operatorname{Re}\left(
\overline{\langle\partial r_{k-1},\xi\rangle}\langle \partial s_{k},\xi\rangle
\right)(q)
\gtrsim
-\epsilon\left(|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle\partial r_{k-1}(q),\xi\rangle
\right|^{2}\right).
\end{align*}
Collecting all these estimates, using $\frac{1}{\kappa_{k}}|W|^{2}\leq|\xi|^{2}$, we now have
for $q\in\Omega\cap U_{k-1}\cap V_{k}$ with $\pi(q)\in b\Omega\cap U_{k-1}\cap V_{k}$ and for some
$\mu>0$
\begin{align*}
H_{r_{k}}(\xi,\xi)(q)\gtrsim
&-\epsilon
\left(|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle\partial r_{k-1}(q),\xi\rangle
\right|^{2}\right)\\
&\hspace{4cm}+\mu H_{\rho}(\xi,\xi)(\pi(q))+\mathcal{O}(r_{k}(q))|S|^{2}\\
\gtrsim&-\epsilon
\left(|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|\langle\partial r_{k-1}(q),\xi\rangle
\right|^{2}\right)
+\frac{\mu}{2}H_{\rho}(\xi,\xi)(\pi(q)).
\end{align*}
Here, the last estimate holds, after possibly shrinking $U_{k-1}\cap V_{k}$ (in normal direction only),
since $\rho$ is plurisubharmonic on $b\Omega$.
We still need to show that \eqref{E:claimistep} is true in some neighborhood of
$\Sigma_{k}\setminus U_{k-1}$. Let $U\subset V_{k}$ be a neighborhood of
$\Sigma_{k}\setminus U_{k-1}$, $q\in \Omega\cap U$ with $\pi(q)\in b\Omega\cap V_{j,k}$ for
some $j\in J_{k}$
and $\xi\in\mathbb{C}^{n}$. Writing $\vartheta_{k}=\vartheta_{k-1}+C_{k}s_{k}$, we get
\begin{align*}
H_{r_{k}}(\xi,\xi)(q)
=e^{-\vartheta_{k}(q)}
\Bigl[
H_{\rho}(\xi,\xi)-&2\operatorname{Re}\left(
\overline{\langle\partial\rho,\xi\rangle}
\langle\partial\vartheta_{k},\xi\rangle
\right)\Bigl.\\
&\Bigr.+\rho\left(
\left|\langle\partial\vartheta_{k},\xi\rangle \right|^{2}
-H_{\vartheta_{k-1}}(\xi,\xi)-C_{k}H_{s_{k}}(\xi,\xi)
\right)
\Bigr](q)\\
&=\operatorname{I}+\operatorname{II}+\operatorname{III}+\operatorname{IV}+\operatorname{V}.
\end{align*}
We write again $\xi=S+W+M$.
By construction of $s_{k}$ and by the induction hypotheses \eqref{E:ihypothesis2} and
\eqref{E:ihypothesis3} on
$\vartheta_{k-1}$, we can do estimates similar to the ones below \eqref{E:istepint1} to obtain
\begin{align*}
\operatorname{II}+\operatorname{III}+\operatorname{IV}\gtrsim
-\epsilon\left(
|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|
\langle\partial r_{k}(q),\xi\rangle
\right|^{2}
\right)+\mathcal{O}(r_{k}(q))|S|^{2}\;\;\text{for}\;\;q\in U.
\end{align*}
So we are left with the terms $\operatorname{I}$ and $\operatorname{V}$.
Let us first consider the term $\operatorname{I}$. By Taylor's Theorem, we have
\begin{align*}
e^{-\vartheta_{k}(q)}H_{\rho}(\xi,\xi)(q)
=
e^{-\vartheta_{k}(q)}&\Bigl(H_{\rho}(\xi,\xi)(\pi(q))
-d_{b\Omega}(q)\operatorname{Re}\bigl[ (N
H_{\rho})(\xi,\xi)(\pi(q))\bigr]\Bigr)\\
&+\mathcal{O}\left(r_{k}^{2}(q)\right)|\xi|^{2}.
\end{align*}
By the (sc)-(lc) inequality we have
\begin{align*}
\operatorname{Re} \left[(NH_{\rho})(\xi,\xi)(\pi(q))\right]
\leq &\operatorname{Re}\left[(NH_{\rho})(W,W)(\pi(q))\right]\\
&+\frac{\epsilon}{c_{1}\kappa_{k}}|W|^{2}+
\mathcal{O}\left(\frac{c_{1}\kappa_{k}}{\epsilon}\right)(|S|^{2}+|M|^{2}),
\end{align*}
where $c_{1}>0$ is such that $d_{b\Omega}(q)\leq c_{1}|\rho(q)|$.
Therefore we obtain for some $\mu>0$
\begin{align*}
e^{-\vartheta_{k}(q)}H_{\rho}(\xi,\xi)(q)
\geq
&-d_{b\Omega}(q)e^{-\vartheta_{k}(q)}\operatorname{Re}\left[(NH_{\rho})(W,W)(\pi(q))\right]
+r_{k}(q)\frac{\epsilon}{\kappa_{k}}|W|^{2}\\
&+\mu H_{\rho}(\xi,\xi)(\pi(q))+\mathcal{O}\left(r_{k}(q)\right)(|S|^{2}+|M|^{2})
+\mathcal{O}\left(r_{k}^{2}(q)\right)|\xi|^{2}.
\end{align*}
To estimate the term $\operatorname{V}$ we use the (sc)-(lc) inequality again:
\begin{align*}
-r_{k}(q)C_{k}H_{s_{k}}(\xi,\xi)(q)
\geq -r_{k}(q)\left(C_{k}H_{s_{k}}(W,W)(q)-\frac{\epsilon}{\kappa_{k}}|W|^{2}\right)+
\mathcal{O}\left(r_{k}(q)\right)(|S|^{2}+|M|^{2}).
\end{align*}
After possibly shrinking $U$, we get for some $\mu>0$
\begin{align*}
\operatorname{I}+\operatorname{V}\geq
&
-d_{b\Omega}(q)e^{-\vartheta_{k}(q)}\operatorname{Re}\left[(NH_{\rho})(W,W)(\pi(q))\right]
+r_{k}(q)\left(-C_{k}H_{s_{k}}(W,W)(q)+\frac{2\epsilon}{\kappa_{k}}|W|^{2}
\right)\\
&+\mu H_{\rho}(\xi,\xi)(\pi(q))+\mathcal{O}\left(r_{k}(q)\right)|M|^{2}
+\mathcal{O}\left(r_{k}^{2}(q)\right)|\xi|^{2}.
\end{align*}
By our choice of $s_{k}$ and $C_{k}$, it follows that for all $q\in\Omega\cap U$ with
$\pi(q)\in b\Omega\cap U$ we have
\begin{align*}
-d_{b\Omega}(q)e^{-\vartheta_{k}(q)}\operatorname{Re}\left[ (NH_{\rho})(W,W)(\pi(q))\right]
-r_{k}(q)C_{k}H_{s_{k}}(W,W)(q)\geq\frac{\epsilon}{\kappa_{k}}r_{k}(q)|W|^{2}.
\end{align*}
Putting our estimates for the terms $\operatorname{I}$--$\operatorname{V}$ together and letting
$U_{k}$ be the union of $U_{k-1}$ and
$U$, we obtain: for all $q\in\Omega\cap U_{k}$ with $\pi(q)\in b\Omega\cap U_{k}$, the
function $r_{k}$ satisfies
\begin{align*}
H_{r_{k}}(\xi,\xi)(q)\gtrsim -\epsilon\left(
|r_{k}(q)||\xi|^{2}+\frac{1}{|r_{k}(q)|}\left|
\langle\partial r_{k}(q),\xi\rangle
\right|^{2}
\right)+\mu_{k}H_{\rho}(\xi,\xi)(\pi(q))
\end{align*}
for some $\mu_{k}>0$.
Since the constants in $\gtrsim$ do
not depend on $\epsilon$ or on any other parameters which come
up in the construction of $r_{k}$, \eqref{E:claimistep} follows. Moreover, by construction, $\vartheta_{k}$
satisfies \eqref{E:ihypothesis2} and \eqref{E:ihypothesis3}.
Note that $\Sigma_{n-1}\setminus U_{n-2}$ is
a closed subset of the set of strictly pseudoconvex boundary points. Thus for
any smooth defining function $r$ there is some
neighborhood $U$ of $\Sigma_{n-1}\setminus U_{n-2}$ such that
\begin{align*}
H_{r}(\xi,\xi)\gtrsim |\xi|^{2}
+\mathcal{O}(|\langle\partial r(q),\xi\rangle|^{2})\;\;\text{for all}\;\;\xi\in\mathbb{C}^{n}
\end{align*}
holds on $U\cap\Omega$.
This concludes the proof of \eqref{E:Main1}.
The proof of \eqref{E:Main2} is essentially the same as the one of \eqref{E:Main1} except that a few
signs change. That is, the basic estimate \eqref{E:BasicTaylorH} for $q\in\overline{\Omega}^{c}\cap U$
becomes
\begin{align*}
H_{\rho}(W,W)(q)=2d_{b\Omega}(q)NH_{\rho}(W,W)(\pi(q))+\mathcal{O}\left(d_{b\Omega}^{2}(q)
\right)|W|^{2}
\end{align*}
for any vector $W\in\mathbb{C}^{n}$ which is a weak complex tangential direction at $\pi(q)$.
So an obstruction for \eqref{E:Main2} to hold at $q\in\overline{\Omega}^{c}\cap U$
occurs when $NH_{\rho}(W,W)$ is negative at $\pi(q)$ --
note that this happens exactly when we have no problem with \eqref{E:Main1}.
Since the obstruction terms
to \eqref{E:Main1} and \eqref{E:Main2} only differ by a sign, one would expect that the necessary
modifications of
$\rho$ also just differ by a sign. In fact, let $\vartheta_{n-2}$ be as in the proof of \eqref{E:Main1} --
that is, $r_{1}=\rho e^{-\vartheta_{n-2}}$ satisfies \eqref{E:Main1} for a given $\epsilon>0$. Then
$r_{2}=\rho e^{\vartheta_{n-2}}$ satisfies \eqref{E:Main2} for the same $\epsilon$.
\section{Proof of Corollary \ref{C:DF}}\label{S:DF}
We shall now prove Corollary \ref{C:DF}. We begin with part (i) by
showing first that for
any $\eta\in(0,1)$ there exist a $\delta>0$, a smooth defining function $r$ of $\Omega$ and a
neighborhood $U$ of $b\Omega$ such that
$h=-(-re^{-\delta|z|^{2}})^{\eta}$ is strictly plurisubharmonic on $\Omega\cap U$.
Let $\eta\in(0,1)$ be fixed, and $r$ be a smooth defining function of $\Omega$. For notational ease
we write $\phi(z)=\delta|z|^{2}$ for $\delta>0$. Here, $r$ and $\delta$ are to be chosen later.
Let us compute the complex Hessian of $h$ on $\Omega\cap U$:
\begin{align*}
H_{h}(\xi,\xi)=&\eta(-r)^{\eta-2}e^{-\phi\eta}
\Bigl[
(1-\eta)\Bigr.\left|\langle\partial r,\xi\rangle\right|^{2}-rH_{r}(\xi,\xi)\\
&+2r\eta \operatorname{Re}\left(\langle\partial r,\xi\rangle\overline{\langle\partial\phi,\xi\rangle}\right)
\Bigl.
-r^{2}\eta\left|\langle\partial\phi,\xi\rangle\right|^{2}
+r^{2}H_{\phi}(\xi,\xi)\Bigr].
\end{align*}
An application of the (sc)-(lc) inequality gives
\begin{align*}
2r\eta \operatorname{Re}\left(\langle\partial r,\xi\rangle\overline{\langle\partial\phi,\xi\rangle}\right)
\geq -\frac{1-\eta}{2}\left|\langle\partial r,\xi\rangle\right|^{2}
-\frac{2r^{2}\eta^{2}}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}.
\end{align*}
Therefore, we obtain for the complex Hessian of $h$ on $\Omega$ the following:
\begin{align*}
H_{h}(\xi,\xi)
\geq
\eta(-r)^{\eta-2}e^{-\phi\eta}
\Biggl[\frac{1-\eta}{2}|\langle\partial r,\xi\rangle|^{2}
\Biggr.&-rH_{r}(\xi,\xi)\\
&+r^{2}
\left\{
-\frac{\eta(1+\eta)}{1-\eta}|\langle\partial\phi,\xi\rangle|^{2}+H_{\phi}(\xi,\xi)
\right\}
\Biggl.\Biggr].
\end{align*}
Set $\delta=\frac{1-\eta}{2\eta(1+\eta)D}$, where $D=\max_{z\in\overline{\Omega}}|z|^{2}$. Then we
get
\begin{align*}
H_{\phi}(\xi,\xi)-\frac{\eta(1+\eta)}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}
=
\delta\left(H_{|z|^{2}}(\xi,\xi)-\frac{\eta(1+\eta)}{1-\eta}\delta\left|\langle\overline{z},
\xi\rangle\right|^{2}
\right)
\geq
\frac{\delta}{2}|\xi|^{2}.
\end{align*}
This implies that
\begin{align}\label{E:generalDFest}
H_{h}(\xi,\xi)\geq \eta(-r)^{\eta-2}e^{-\phi\eta}
\left[
\frac{1-\eta}{2}|\langle \partial r,\xi\rangle|^{2}
-rH_{r}(\xi,\xi)+\frac{\delta}{2}r^{2}|\xi|^{2}
\right]
\end{align}
holds on $\Omega$.
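The algebra behind the choice of $\delta$ above can be double-checked symbolically. The short sketch below (purely illustrative, not part of the proof; it uses the sympy library) verifies that in the worst case $|\langle\overline{z},\xi\rangle|^{2}=D|\xi|^{2}$ the expression in the curly braces above equals exactly $\frac{\delta}{2}|\xi|^{2}$, which is the inequality used above.
\begin{verbatim}
# Symbolic check of the choice delta = (1-eta)/(2*eta*(1+eta)*D).
import sympy as sp
eta, D = sp.symbols('eta D', positive=True)
delta = (1 - eta) / (2 * eta * (1 + eta) * D)
# Coefficient of |xi|^2 in the braces when |<z_bar, xi>|^2 = D |xi|^2:
worst = delta - eta * (1 + eta) / (1 - eta) * delta**2 * D
print(sp.simplify(worst - delta / 2))   # prints 0, i.e. the worst case equals delta/2
\end{verbatim}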
Set $\epsilon=\min\{\frac{1-\eta}{4},\frac{1-\eta}{8\eta(1+\eta)D}\}$.
By \eqref{E:Main1}
there exist a neighborhood $U$ of $b\Omega$ and a smooth defining function $r_{1}$
of $\Omega$ such that
\begin{align*}
H_{r_{1}}(\xi,\xi)(q)
\geq
-\epsilon\left(|r_{1}(q)||\xi|^{2}+\frac{1}{|r_{1}(q)|}|\langle\partial r_{1}(q),\xi\rangle|^{2}\right)
\end{align*}
holds for all $q\in\Omega\cap U$, $\xi\in\mathbb{C}^{n}$.
Setting $r=r_{1}$ and using \eqref{E:generalDFest}, we obtain
\begin{align*}
H_{h}(\xi,\xi)(q)\geq
\eta(-h(q))\cdot\epsilon|\xi|^{2}\;\;\text{for}\;\;q\in\Omega\cap U,\;\xi\in\mathbb{C}^{n}.
\end{align*}
It follows by standard arguments that there exists a defining function $\widetilde{r}_{1}$ such that
$-(-\widetilde{r}_{1})^{\eta}$ is strictly
plurisubharmonic on $\Omega$; for details see pg.\ 133 in \cite{DF1}.
This proves part (i) of Corollary \ref{C:DF}.
A proof similar to the one of part (i), using \eqref{E:Main2},
shows that for each $\eta>1$
there exist a smooth
defining function $r_{2}$, a neighborhood $U$ of $b\Omega$ and a $\delta>0$ such that
$(r_{2}e^{\delta|z|^{2}})^{\eta}$ is strictly plurisubharmonic on $\overline{\Omega}^{c}\cap U$.
\end{document} |
\begin{document}
\title{Areas of triangles and $\text{SL}_2(R)$}
\begin{abstract}
In Euclidean space, one can use the dot product to give a formula for the area of a triangle in terms of the coordinates of each vertex. Since this formula involves only addition, subtraction, and multiplication, it can be used as a definition of area in $R^2$, where $R$ is an arbitrary ring. The result is a quantity associated with triples of points which is still invariant under the action of $\text{SL}_2(R)$. One can then look at a configuration of points in $R^2$ in terms of the triangles determined by pairs of points and the origin, considering two such configurations to be of the same type if corresponding pairs of points determine the same areas. In this paper we consider the cases $R=\mathbb{F}_q$ and $R=\mathbb{Z}/p^\ell \mathbb{Z}$, and prove that sufficiently large subsets of $R^2$ must produce a positive proportion of all such types of configurations.
\end{abstract}
\section{Introduction}
There are several interesting combinatorial problems asking whether a sufficiently large subset of a vector space over a finite field must generate many different objects of some type. The most well known example is the Erdos-Falconer problem, which asks whether such a set must contain all possible distances, or at least a positive proportion of distances. More precisely, given $E\subset\mathbb{F}_q^d$ we define the distance set
\[
\Delta(E)=\{(x_1-y_1)^2+\cdots +(x_d-y_d)^2:x,y\in E\}.
\]
Obviously, $\Delta(E)\subset\mathbb{F}_q$. The Erdos-Falconer problem asks for an exponent $s$ such that $\Delta(E)=\mathbb{F}_q$, or more generally $|\Delta(E)|\gtrsim q$, whenever $|E|\gtrsim q^s$. (Throughout, the notation $X\lesssim Y$ means there is a constant $C$ such that $X\leq CY$, $X\approx Y$ means $X\lesssim Y$ and $Y\lesssim X$, and $O(X)$ denotes a quantity that is $\lesssim X$.) In \cite{IR}, Iosevich and Rudnev proved that $\Delta(E)=\mathbb{F}_q$ if $|E|\gtrsim q^{\frac{d+1}{2}}$. In \cite{Sharpness} it is proved by Hart, Iosevich, Koh, and Rudnev that the exponent $\frac{d+1}{2}$ cannot be improved in odd dimensions, although it has been improved to $4/3$ in the $d=2$ case (first in \cite{WolffExponent} in the case $q\equiv 3 \text{ (mod 4)}$ by Chapman, Erdogan, Hart, Iosevich, and Koh, then in general in \cite{GroupAction} by Bennett, Hart, Iosevich, Pakianathan, and Rudnev). Several interesting variants of the distance problem have been studied as well. A result of Pham, Phuong, Sang, Valculescu, and Vinh studies the problem when distances between pairs of points are replaced with distances between points and lines in $\mathbb{F}_q^2$; they prove that if sets $P$ and $L$ of points and lines, respectively, satisfy $|P||L|\gtrsim q^{8/3}$, then they determine a positive proportion of all distances \cite{Lines}. Birklbauer, Iosevich, and Pham proved an analogous result about distances determined by points and hyperplanes in $\mathbb{F}_q^d$ \cite{Planes}.\\
We can replace distances with dot products and ask the analogous question. Let
\[
\Pi(E)=\{x_1y_1+\cdots +x_dy_d:x,y\in E\},
\]
and again ask for an exponent $s$ such that $|E|\gtrsim q^s$ implies $\Pi(E)$ contains all possible dot products (or at least a positive proportion of them). Hart and Iosevich prove in \cite{HI} that the exponent $s=\frac{d+1}{2}$ works for this question as well. The proof is quite similar to the proof of the same exponent in the Erdos-Falconer problem; in each case, the authors consider a function which counts, for each $t\in\mathbb{F}_q$, the number of representations of $t$ as, respectively, a distance and a dot product determined by the set $E$. These representation functions are then studied using techniques from Fourier analysis. \\
Another interesting variant of this problem was studied in \cite{Angles}, where Lund, Pham, and Vinh defined the angle between two vectors in analogue with the usual geometric interpretation of the dot product. Namely, given vectors $x$ and $y$, they consider the quantity
\[
s(x,y)=1-\frac{(x\cdot y)^2}{\|x\|\|y\|},
\]
where $\|x\|=x_1^2+\cdots x_d^2$ is the finite field distance defined above. Note that since we cannot always take square roots in finite fields, the finite field distance corresponds to the square of the Euclidean distance; therefore, $s(x,y)$ above is the correct finite field analogue of $\sin^2\theta$, where $\theta$ is the angle between the vectors $x$ and $y$. This creates a variant of the dot product problem, since one can obtain different dot products from the same angle by varying length. The authors go on to prove that the exponent $\frac{d+2}{2}$ guarantees a positive proportion of angles. \\
It is of interest to generalize these types of results to point configurations. By a $(k+1)$-point configuration in $\mathbb{F}_q^d$, we simply mean an element of $(\mathbb{F}_q^d)^{k+1}$. Throughout, we will use superscripts to denote different vectors in a given configuration, and subscripts to denote the coordinates of each vector. For example, a $(k+1)$ point configuration $x$ is made up of vectors $x^1,...,x^{k+1}$, each of which has coordinates $x_1^i,x_2^i$. Given a set $E\subset\mathbb{F}_q^d$, we can consider $(k+1)$-point configurations in $E$ (i.e., elements of $E^{k+1}$) and ask whether $E$ must contain a positive proportion of all configurations, up to some notion of equivalence. For example, we may view $(k+1)$-point configurations as simplices, and our notion of equivalence is geometric congruence; any two simplices are congruent if there is a translation and a rotation that maps one onto the other. Since a $2$-simplex is simply a pair of points, and two such simplices are congruent if and only if the distance is the same, congruence classes simply correspond to distance. Hence, the Erdos-Falconer distance problem may be viewed as simply the $k=1$ case of the simplex congruence problem. In \cite{Ubiquity}, Hart and Iosevich prove that $E$ contains the vertices of a congruent copy of every non-degenerate simplex (non-degenerate here means the points are in general position) whenever $|E|\gtrsim q^{\frac{kd}{k+1}+\frac{k}{2}}$. However, in order for this result to be non-trivial the exponent must be $<d$, and that only happens when $\binom{k+1}{2}<d$. So, the result is limited to fairly small configurations. This result is improved in \cite{GroupAction} by Bennett, Hart, Iosevich, Pakianathan, and Rudnev, who prove that for any $k\leq d$ a set $E\subset\mathbb{F}_q^d$ determines a positive proportion of all congruence classes of $(k+1)$-point configurations provided $|E|\gtrsim q^{d-\frac{d-1}{k+1}}$. This exponent is clearly non-trivial for all $k$. In \cite{Me}, I extended this result to the case $k\geq d$. \\
In this paper, we consider a different notion of equivalence. We will consider the problem over both finite fields and rings of integers modulo powers of primes, so I will define the equivalence relation in an arbitrary ring.
\begin{dfn}
Let $R$ be a ring, and let $E\subset R^2$. We define an equivalence relation $\sim$ on $E^{k+1}$ by $(x^1,...,x^{k+1})\sim (y^1,...,y^{k+1})$ (or more briefly $x\sim y$) if and only if for each pair $i,j$ we have $x^i\cdot x^{j\perp}=y^i\cdot y^{j\perp}$. Define $\mathcal{C}_{k+1}(E)$ to be the set of equivalence classes of $E^{k+1}$ under this relation; by a slight abuse of notation, we also write $\mathcal{C}_{k+1}(E)$ for the number of these classes.
\end{dfn}
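To make the definition concrete, the following short Python snippet (purely illustrative and not part of the paper) computes the pairwise invariants $x^i\cdot x^{j\perp}$ of a configuration over $\mathbb{Z}/m\mathbb{Z}$ and checks that they are unchanged when the configuration is moved by an element of $\text{SL}_2(\mathbb{Z}/m\mathbb{Z})$; the sample configuration and matrix are arbitrary choices.
\begin{verbatim}
def areas(config, m):
    """Pairwise invariants x^i . (x^j)^perp mod m of a configuration in (Z/mZ)^2."""
    inv = {}
    for i in range(len(config)):
        for j in range(i + 1, len(config)):
            (a, b), (c, d) = config[i], config[j]
            inv[(i, j)] = (a * d - b * c) % m
    return inv

m = 7
x = [(1, 2), (3, 1), (0, 5)]          # a 3-point configuration over Z/7Z
g = [[2, 1], [3, 2]]                  # det g = 1, so g lies in SL_2(Z/7Z)
gx = [((g[0][0] * u + g[0][1] * v) % m, (g[1][0] * u + g[1][1] * v) % m)
      for (u, v) in x]
print(areas(x, m) == areas(gx, m))    # True: the pairwise areas agree
\end{verbatim}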
In the Euclidean setting, $\frac{1}{2} |x\cdot y^\perp|$ is the area of the triangle with vertices $0,x,y$. So, we may view each pair of points in a $(k+1)$-point configuration as determining a triangle with the origin, and we consider two such configurations to be equivalent if the triangles they determine all have the same areas. As we will prove in section $2$, this equivalence relation is closely related to the action of $\text{SL}_2(R)$ on tuples of points; except for some degenerate cases, two configurations are equivalent if and only if there is a unique $g$ mapping one to the other. This allows us to analyze the problem in terms of this action; in section 2, we define a counting function $f(g)$ and reduce matters to estimating the sum $\sum_g f(g)^{k+1}$. In section 3, we show how to turn an estimate for $\sum_g f(g)^2$ into an estimate for $\sum_g f(g)^{k+1}$. Since we already understand the $k=1$ case (it is essentially the same as the dot product problem discussed above), this reduction allows us to obtain a non-trivial result. Our first theorem is as follows.
\begin{thm}
\label{MT1}
Let $q$ be a power of an odd prime, and let $E\subset\mathbb{F}_q^2$ satisfy $|E|\gtrsim q^s$, where $s={2-\frac{1}{k+1}}$. Then $\mathcal{C}_{k+1}(E)\gtrsim \mathcal{C}_{k+1}(\mathbb{F}_q^2)$.
\end{thm}
In addition to proving this theorem, we will consider the case where the finite field $\mathbb{F}_q$ is replaced by the ring $\mathbb{Z}/p^\ell\mathbb{Z}$. The structure of the proof is largely the same; the dot product problem over such rings is studied in \cite{CIP}, giving us the $k=1$ case, and the machinery which lifts that case to arbitrary $k$ works the same way. However, many details in the proofs are considerably more complicated. The theorem is as follows.
\begin{thm}
\label{MT2}
Let $p$ be an odd prime, let $\ell\geq 1$, and let $E\subset(\mathbb{Z}/p^\ell\mathbb{Z})^2$ satisfy $|E|\gtrsim \ell^{\frac{2}{k+1}}p^{\ell s}$, where $s=2-\frac{1}{\ell(k+1)}$. Then $\mathcal{C}_{k+1}(E)\gtrsim \mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2).$
\end{thm}
We first note that, as we would expect, Theorem \ref{MT2} coincides with Theorem \ref{MT1} in the case $\ell=1$. We also note that, for fixed $p$ and $k$, the exponent in Theorem \ref{MT2} is always less than 2, but it tends to $2$ as $\ell\to\infty$. This does not happen in the finite field case, where the exponent depends on $k$ but not on the size of the field. \\
Finally, we want to state the extent to which these results are sharp. There are examples which show that the exponent must tend to $2$ as $k\to\infty$ in the finite field case, and as either $k\to\infty$ or $\ell\to\infty$ in the $\mathbb{Z}/p^\ell\mathbb{Z}$ case.
\begin{thm}[Sharpness]
We have the following:
\begin{enumerate}[i]
\item For any $s<2-\frac{2}{k+1}$, there exists $E\subset\mathbb{F}_q^2$ such that $|E|\approx q^s$ and $\mathcal{C}_{k+1}(E)=o(\mathcal{C}_{k+1}(\mathbb{F}_q^2))$.
\item For any $s<2-\min\left(\frac{2}{k+1},\frac{1}{\ell}\right)$, there exists $E\subset(\mathbb{Z}/p^\ell\mathbb{Z})^2$ such that $|E|\approx p^{\ell s}$ and $\mathcal{C}_{k+1}(E)=o(\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2))$.
\end{enumerate}
\end{thm}
\section{Characterization of the equivalence relation in terms of the $\text{SL}_2(R)$ action}
Our main tool in reducing the problem of $(k+1)$-point configurations to the $k=1$ case is the fact that we can express the equivalence relation in terms of the action of the special linear group; with some exceptions, tuples $x$ and $y$ are equivalent if and only if there exists a unique $g\in \text{SL}_2$ such that for each $i$, we have $y^i=gx^i$. In order to use this, we need to bound the number of exceptions to this rule. This is easy in the finite field case, and a little more tricky in the $\mathbb{Z}/p^\ell\mathbb{Z}$ case. The goal of this section is to describe and bound the number of exceptional configurations in each case. We begin with a definition.
\begin{dfn}
Let $R$ be a ring. A configuration $x=(x^1,...,x^{k+1})\in (R^2)^{k+1}$ is called \textbf{good} if there exist two indices $i,j$ such that $x^i\cdot x^{j\perp}$ is a unit. A configuration is \textbf{bad} if it is not good.
\end{dfn}
As we will see, the good configurations are precisely those for which equivalence is determined by the action of $\text{SL}_2(R)$. To prove this, we will need the following theorems about determinants of matrices over rings, which can be found in \cite{DF}, section 11.4.
\begin{thm}
\label{DF1}
Let $R$ be a ring, let $A_1,...,A_n$ be the columns of an $n\times n$ matrix $A$ with entries in $R$. Fix an index $i$, and let $A'$ be the matrix obtained from $A$ by replacing column $A_i$ by $c_1A_1+\cdots+c_nA_n$, for some $c_1,...,c_n\in R$. Then $\det(A')=c_i\det(A)$.
\end{thm}
\begin{thm}
\label{DF2}
Let $R$ be a ring, and let $A$ be an $n\times n$ matrix with entries in $R$. The matrix $A$ is invertible if and only if $\det(A)$ is a unit in $R$.
\end{thm}
\begin{thm}
\label{DF3}
Let $R$ be a ring, and let $A$ and $B$ be $n\times n$ matrices with entries in $R$. Then $\det(AB)=\det(A)\det(B)$.
\end{thm}
We are now ready to prove that equivalence of good configurations is given by the action of the special linear group.
\begin{lem}
\label{SL2}
Let $R$ be a ring, and let $x,y$ be good configurations such that $x^i\cdot x^{j\perp}=y^i\cdot y^{j\perp}$ for every pair of indices $i,j$. Then there exists a unique $g\in \text{SL}_2(R)$ such that $y^i=gx^i$ for each $i$.
\end{lem}
\begin{proof}
Because $x$ and $y$ are good, there exist indices $i$ and $j$ such that $x^i\cdot x^{j\perp}$ is a unit; equivalently, the determinant of the $2\times 2$ matrix with columns $x^i$ and $x^j$ is a unit. Denote this matrix by $(x^i\ x^j)$. By theorem \ref{DF2}, this matrix is invertible. Let
\[
g=(y^i\ y^j)(x^i\ x^j)^{-1}.
\]
Since $g(x^i\ x^j)=(gx^i\ gx^j)$, it follows that $y^i=gx^i$ and $y^j=gx^j$. Also note that by Theorem \ref{DF3}, we have $\det(g)=1$. Let $n$ be any other index. We want to write $x^n=ax^i+bx^j$; this amounts to solving the matrix equation
\[
\begin{pmatrix}
x_1^i & x_1^j \\
x_2^i & x_2^j
\end{pmatrix}
\begin{pmatrix}
a \\
b
\end{pmatrix}
=x^n
\]
Since we have already established the matrix $(x^i\ x^j)$ is invertible, we can solve for $a$ and $b$. Similarly, let $y^n=a'y^i+b'y^j$. By Theorem \ref{DF1}, we have $\det(x^i\ x^n)=b\det(x^i\ x^j)$ and $\det(y^i\ y^n)=b'\det(y^i\ y^j)$. Since the hypothesis gives $\det(x^i\ x^n)=\det(y^i\ y^n)$ and $\det(x^i\ x^j)=\det(y^i\ y^j)$, and the latter is a unit, it follows that $b=b'$; an analogous argument yields $a=a'$. Therefore,
\[
gx^n=g(ax^i+bx^j)=agx^i+bgx^j=ay^i+by^j=y^n.
\]
So, we have established existence. To prove uniqueness, note that $g$ must satisfy $g(x^i\ x^j)=(y^i\ y^j)$, and since $(x^i\ x^j)$ is invertible we can solve for $g$.
\end{proof}
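The construction in the proof can be carried out explicitly. The sketch below is illustrative only and not part of the paper; it assumes Python 3.8+ so that pow(x, -1, m) computes modular inverses, and the pair and matrix are made-up test data. It recovers $g=(y^i\ y^j)(x^i\ x^j)^{-1}$ from a good pair over $\mathbb{Z}/9\mathbb{Z}$.
\begin{verbatim}
def inv2x2(M, m):
    """Inverse of a 2x2 matrix over Z/mZ; requires det(M) to be a unit mod m."""
    (a, b), (c, d) = M
    det_inv = pow((a * d - b * c) % m, -1, m)   # ValueError if det is not a unit
    return [[(d * det_inv) % m, (-b * det_inv) % m],
            [(-c * det_inv) % m, (a * det_inv) % m]]

def matmul(A, B, m):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) % m
             for j in range(2)] for i in range(2)]

def matvec(g, x, m):
    return ((g[0][0] * x[0] + g[0][1] * x[1]) % m,
            (g[1][0] * x[0] + g[1][1] * x[1]) % m)

m = 9                                  # the ring Z/3^2 Z
x1, x2 = (1, 2), (4, 1)                # x^1 . (x^2)^perp = -7, a unit mod 9: a good pair
g = [[2, 1], [5, 3]]                   # det g = 1 mod 9
y1, y2 = matvec(g, x1, m), matvec(g, x2, m)
X = [[x1[0], x2[0]], [x1[1], x2[1]]]   # matrix with columns x^1, x^2
Y = [[y1[0], y2[0]], [y1[1], y2[1]]]
print(matmul(Y, inv2x2(X, m), m) == g) # True: g = (y^1 y^2)(x^1 x^2)^{-1}
\end{verbatim}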
Now that we know that good tuples allow us to use the machinery we need, we must prove that the bad tuples are negligible.
\begin{lem}
\label{countbad}
Let $R$ be a ring and let $E\subset R^2$. We have the following:
\begin{enumerate}[i]
\item If $R=\mathbb{F}_q$, then $E^{k+1}$ contains $\lesssim q^k|E|$ bad tuples. In particular, if $|E|\gtrsim q^{1+\e}$ for any constant $\e>0$, the number of bad tuples in $E^{k+1}$ is $o(|E|^{k+1})$.
\item If $R=\mathbb{Z}/p^\ell \mathbb{Z}$, the number of bad tuples in $(R^2)^{k+1}$ is $\lesssim p^{(2\ell-1)(k+1)+1}$. In particular, if $|E|\gtrsim p^{2\ell-1+\frac{1}{k+1}+\e}$ for any constant $\e>0$, then the number of bad tuples in $E^{k+1}$ is $o(|E|^{k+1})$.
\end{enumerate}
\end{lem}
\begin{proof}
We first prove the first claim. Since the only non-unit of $\mathbb{F}_q$ is 0, a bad tuple must consist of $k+1$ points which all lie on a line through the origin. Therefore, we may choose $x^1$ to be anything in $E$, after which the next $k$ points must be chosen from the $q$ points on the line through the origin and $x^1$. \\
To prove the second claim, first observe that the number of tuples where at least one coordinate is a non-unit is $p^{2(\ell-1)(k+1)}$, which is less than the claimed bound. So, it suffices to bound the number of bad tuples where all coordinates are units. Let $B$ be this set. Define
\[
\psi(x_1^1,x_2^1,\cdots ,x_1^{k+1},x_2^{k+1})=(p^{\ell-1}x_1^1,x_2^1,\cdots ,p^{\ell-1}x_1^{k+1},x_2^{k+1}).
\]
If $x\in B$, then $x^i\cdot x^{j\perp}$ is a non-unit, meaning it is divisible by $p$, and
\[
(p^{\ell-1}x_1^i,x_2^i)\cdot (p^{\ell-1}x_1^j,x_2^j)^{\perp}=p^{\ell-1}\,x^i\cdot x^{j\perp}=0.
\]
Therefore, $\psi$ maps bad tuples $x$ to tuples $y$ with $y^i\cdot y^{j\perp}=0$, or $y_1^iy_2^j-y_1^jy_2^i=0$. Rearranging, using the fact that the second coordinate of each $y^i$ is a unit, we conclude that $\frac{y_1^i}{y_2^i}$ is a constant independent of $i$ which is divisible by $p^{\ell-1}$. In other words, each $y^i$ is on a common line through the origin and a point $(n,1)$ where $p^{\ell-1}|n$. There are $p$ such lines, and once we fix a line there are $p^{\ell(k+1)}$ choices of tuples $y$. Therefore, $|\psi(B)|\leq p\cdot p^{\ell(k+1)}$. Finally, we observe that the map $\psi$ is $p^{(\ell-1)(k+1)}$-to-1. This gives us the claimed bound on $|B|$.
\end{proof}
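As a small sanity check of part (ii) (not part of the proof), one can count bad pairs, i.e.\ the case $k=1$, by brute force for $p=3$ and $\ell=2$ and compare with $p^{(2\ell-1)(k+1)+1}$; the two quantities agree up to a small constant factor.
\begin{verbatim}
p, l, k = 3, 2, 1
m = p ** l
R2 = [(a, b) for a in range(m) for b in range(m)]      # the plane (Z/p^l Z)^2
bad = sum(1 for x in R2 for y in R2
          if (x[0] * y[1] - x[1] * y[0]) % p == 0)     # x . y^perp is a non-unit
print(bad, p ** ((2 * l - 1) * (k + 1) + 1))           # brute-force count vs. the bound
\end{verbatim}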
\begin{lem}
\label{flemma}
Let $R$ be either $\mathbb{F}_q$ or $\mathbb{Z}/p^\ell\mathbb{Z}$. Let $E\subset R^2$, and let $G\subset E^{k+1}$ be the set of good tuples. Suppose $|E|\gtrsim q^{1+\e}$ if $R=\mathbb{F}_q$ and $|E|\gtrsim p^{2\ell-1+\frac{1}{k+1}+\e}$ if $R=\mathbb{Z}/p^\ell\mathbb{Z}$. For $g\in \text{SL}_2(R)$, define $f(g)=\sum_x E(x)E(gx)$, where $E(\cdot)$ denotes the indicator function of the set $E$. Then
\[
|E|^{2(k+1)}\lesssim \mathcal{C}_{k+1}(E)\sum_{g\in\text{SL}_2(R)} f(g)^{k+1}.
\]
\end{lem}
\begin{proof}
By Cauchy-Schwarz, we have
\[
|G|^2\leq |G/\sim|\cdot |\{(x,y)\in G\times G:x\sim y\}|.
\]
By assumption and Lemma \ref{countbad}, $|E|^{k+1}\approx |G|$, and therefore the left hand side above is $\approx |E|^{2(k+1)}$. Since $G\subset E^{k+1}$ the right hand side above is $\leq \mathcal{C}_{k+1}(E)|\{(x,y)\in G\times G:x\sim y\}|$. It remains to prove $|\{(x,y)\in G\times G: x\sim y\}|\leq \sum_{g\in\text{SL}_2(R)} f(g)^{k+1}$. By lemma \ref{SL2},
\[
|\{(x,y)\in G\times G: x\sim y\}|=\sum_{x,y\in G}\sum_{\substack{g \\ y=gx}} 1.
\]
By extending the sum over $G$ to one over all of $E^{k+1}$, we bound the above sum by
\begin{align*}
&\sum_{x,y\in E^{k+1}}\sum_{\substack{g \\ y=gx}} 1 \\
=&\sum_x E(x^1)\cdots E(x^{k+1})\sum_g E(gx^1)\cdots E(gx^{k+1}) \\
=&\sum_g\left(\sum_{x^1} E(x^1)E(gx^1)\right)^{k+1} \\
=&\sum_g f(g)^{k+1}
\end{align*}
\end{proof}
\section{Lifting $L^2$ estimates to $L^{k+1}$ estimates}
In both of the cases $R=\mathbb{F}_q$ and $R=\mathbb{Z}/p^\ell\mathbb{Z}$, results are known for pairs of points, which is essentially the $k=1$ case. The finite field version was studied in \cite{HI}, and the case of the ring of integers modulo $p^\ell$ was studied in \cite{CIP}. In section 2, we defined a function $f$ on $\text{SL}_2(R)$ and related the number of equivalence classes determined by a set to the sum $\sum_g f(g)^{k+1}$. Since results are known for the $k=1$ case, we have information about the sum $\sum_g f(g)^2$. We wish to turn that into a bound for $\sum_g f(g)^{k+1}$. This is achieved with the following lemma.
\begin{lem}
\label{induction}
Let $S$ be a finite set, and let $F:S\to \mathbb{R}_{\geq 0}$. Let
\[
A=\frac{1}{|S|}\sum_{x\in S}F(x)
\]
denote the average value of $F$, and
\[
M=\sup_{x\in S}F(x)
\]
denote the maximum. Finally, suppose
\[
\sum_{x\in S}F(x)^2=A^2|S|+R.
\]
Then there exist constants $c_k$, depending only on $k$, such that
\[
\sum_{x\in S}F(x)^{k+1}\leq c_k(M^{k-1}R+A^{k+1}|S|).
\]
\end{lem}
\begin{proof}
We proceed by induction. For the base case, let $c_1=1$ and observe that the claimed bound is the one we assumed for $\sum_x F(x)^2$. Now, let $\{c_k\}$ be any sequence such that $k\binom{k}{j}c_j\leq c_k$ holds for all $j<k$; for example, $c_k=2^{k^2}$ works. Now, suppose the claimed bound holds for all $1\leq j<k$, and also observe that the bound is trivial for $j=0$. By direct computation, we have
\begin{align*}
&\sum_{x\in S}(F(x)-A)^2 \\
=&\sum_{x\in S}F(x)^2-2A\sum_{x\in S}F(x)+A^2|S| \\
=&\sum_{x\in S}F(x)^2-A^2|S| \\
=&R.
\end{align*}
We also have
\[
\sum_{x\in S}F(x)^{k+1}=\sum_{x\in S}(F(x)-A)^kF(x)+\sum_{j=0}^{k-1}\binom{k}{j}(-1)^{k-j+1}A^{k-j}\sum_{x\in S}F(x)^{j+1}.
\]
To bound the first term, we simply use the trivial bound. Since $F(x)\leq M$ for all $x$, $A\leq M$, and $F(x),A\geq 0$, we conclude $|F(x)-A|\leq M$ for each $x$. Therefore,
\[
\sum_{x\in S}(F(x)-A)^kF(x)\leq M^{k-1}\sum_{x\in S}(F(x)-A)^2=M^{k-1}R.
\]
To bound the second term, we use the inductive hypothesis and the triangle inequality. We have
\begin{align*}
&\left|\sum_{j=0}^{k-1}\binom{k}{j}(-1)^{k-j+1}A^{k-j}\sum_{x\in S}F(x)^{j+1}\right| \\
\leq &k\cdot\sup_{0\leq j<k} \binom{k}{j}A^{k-j}\sum_{x\in S}F(x)^{j+1} \\
\leq &k\cdot\sup_{0\leq j<k} \binom{k}{j}A^{k-j}c_j(M^{j-1}R+A^{j+1}|S|) \\
\leq & c_k \cdot\sup_{0\leq j<k}(A^{k-j}M^{j-1}R+A^{k+1}|S|)
\end{align*}
Since $A\leq M$, it follows that $A^{k-j}M^{j-1}R\leq M^{k-1}R$ for any $j<k$, so the claimed bound holds.
\end{proof}
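The conclusion of the lemma is easy to test numerically. The sketch below (illustrative only, with arbitrary random data) draws nonnegative values of $F$ and checks the inequality with the explicit constants $c_k=2^{k^2}$ used in the proof.
\begin{verbatim}
import random
random.seed(0)
F = [random.random() for _ in range(1000)]   # values F(x) >= 0 on a set S of size 1000
n = len(F)
A = sum(F) / n                               # average value of F
M = max(F)                                   # maximum value of F
R = sum(v * v for v in F) - A * A * n        # so that sum F^2 = A^2 |S| + R
for k in range(1, 6):
    lhs = sum(v ** (k + 1) for v in F)
    rhs = 2 ** (k * k) * (M ** (k - 1) * R + A ** (k + 1) * n)
    print(k, lhs <= rhs)                     # expect True for every k
\end{verbatim}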
\section{Some lemmas about the action of $\text{SL}_2(R)$}
\begin{lem}
\label{action}
Let $G$ be a finite group acting transitively on a finite set $X$. Define $\ph:X\times X\to\mathbb{N}$ by $\ph(x,y)=|\{g\in G:gx=y\}|$. We have
\[
\ph(x,y)=\frac{|G|}{|X|}
\]
for every pair $x,y$. If $h:X\to\mathbb{C}$ and $x_0\in X$, then
\[
\sum_{g\in G} h(gx_0)=\frac{|G|}{|X|}\sum_{x\in X}h(x).
\]
\end{lem}
\begin{proof}
The second statement follows from the first by a simple change of variables. To prove the first, we have
\[
\sum_{x,y\in X}\ph(x,y)=\sum_{g\in G}\sum_{\substack{x,y\in X \\ gx=y}}1.
\]
On the right, for any fixed $g$, one can choose any $x$ and there is a unique corresponding $y$, so the inner sum is $|X|$ and the right hand side is therefore $|G||X|$. On the other hand, $\ph$ is constant. To prove this, let $x,y,z,w\in X$ and let $h_1,h_2\in G$ such that $h_1x=z$ and $h_2w=y$. This means for any $g$ with $gz=w$, we have $(h_2gh_1)x=y$, so $\ph(z,w)\leq \ph(x,y)$. By symmetry, equality holds. If $c$ is the constant value of $\ph(x,y)$, the left hand side above must be $c|X|^2$, and therefore $c=\frac{|G|}{|X|}$ as claimed.
\end{proof}
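For example, taking $G=\text{SL}_2(\mathbb{F}_3)$ acting on $X=\mathbb{F}_3^2\setminus\{0\}$, a direct enumeration (purely illustrative, not part of the proof) confirms that $\ph(x,y)=|G|/|X|=3$ for every pair $x,y$.
\begin{verbatim}
p = 3
G = [((a, b), (c, d)) for a in range(p) for b in range(p)
     for c in range(p) for d in range(p) if (a * d - b * c) % p == 1]
X = [(u, v) for u in range(p) for v in range(p) if (u, v) != (0, 0)]
def act(g, x):
    return ((g[0][0] * x[0] + g[0][1] * x[1]) % p,
            (g[1][0] * x[0] + g[1][1] * x[1]) % p)
counts = {(x, y): 0 for x in X for y in X}
for g in G:
    for x in X:
        counts[(x, act(g, x))] += 1
print(len(G), len(X), set(counts.values()))   # 24, 8, {3}: every pair is hit |G|/|X| times
\end{verbatim}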
\begin{lem}
\label{SL2size}
We have $|\text{SL}_2(\mathbb{F}_q)|=q^3-q$ and $|\text{SL}_2(\mathbb{Z}/p^\ell\mathbb{Z})|=p^{3\ell}-p^{3\ell-2}$.
\end{lem}
\begin{proof}
We are counting solutions to the equation $ad-bc=1$ where $a,b,c,d\in\mathbb{F}_q$. We consider two cases. If $a$ is zero, then $d$ can be anything, and we must have $bc=-1$. This means $b$ can be anything non-zero, and $c$ is determined. So, there are $q^2-q$ solutions with $a=0$. With $a\neq 0$, $b$ and $c$ can be anything, and $d$ is determined, giving $q^3-q^2$ solutions in this case. So, there are $(q^3-q^2)+(q^2-q)$ total solutions. \\
Next, we want to count solutions to $ad-bc=1$ with $a,b,c,d\in\mathbb{Z}/p^\ell\mathbb{Z}$. The arguments are essentially the same as in the proof of the finite field case, but slightly more complicated because there are non-zero elements which are still not units. We again consider separately two cases according to whether $a$ is a unit or not. If $a$ is a unit, then $b,c$ can be anything and then $d$ is determined, so there are $(p^{\ell}-p^{\ell-1})p^{2\ell}$ such solutions. If $a$ is not a unit, then $b$ and $c$ must be units, as otherwise $1$ would be divisible by $p$. So there are $p^{\ell-1}$ choices for $a$, $p^{\ell}$ choices for $d$, $p^{\ell}-p^{\ell-1}$ for $b$, and $c$ is determined. Putting this together, we get the claimed number of solutions.
\end{proof}
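Both cardinalities can be confirmed by brute force for small moduli; the following snippet (not part of the proof) simply enumerates the solutions of $ad-bc=1$.
\begin{verbatim}
def sl2_size(m):
    """Number of 2x2 matrices over Z/mZ with determinant 1."""
    return sum(1 for a in range(m) for b in range(m) for c in range(m) for d in range(m)
               if (a * d - b * c) % m == 1)

for q in (3, 5, 7):                                  # the fields F_q
    print(q, sl2_size(q), q ** 3 - q)
p, l = 3, 2                                          # the ring Z/3^2 Z
print(sl2_size(p ** l), p ** (3 * l) - p ** (3 * l - 2))
\end{verbatim}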
\section{Proof of Theorem \ref{MT1}}
We are now ready to prove theorem \ref{MT1}.
\begin{proof}
First observe that good tuples are equivalent to $\approx q^{3}$ distinct tuples, so there are $\approx q^{2k-1}$ equivalence classes of good tuples. Since the only non-unit in the finite field case is 0, the bad tuples are all in the same equivalence class. So, our goal is to prove $\mathcal{C}_{k+1}(E)\gtrsim q^{2k-1}$. We first must prove the estimate
\[
\sum_g f(g)^2=\frac{|E|^4}{q}+O(q^2|E|^2).
\]
We expand the sum on the left hand side and change variables to obtain
\[
\sum_g f(g)^2=\sum_{x^1,x^2,y^1,y^2}E(x^1)E(x^2)E(y^1)E(y^2)\left(\sum_{\substack{g \\ gx=y}}1\right).
\]
We first observe we may ignore the pairs $x,y$ which are on a line through the origin. This is because if $x^2=tx^1$ and $y^2=sy^1$, there will exist $g$ with $gx=y$ if and only if $t=s$, in which case there are $\approx q$ choices for $g$. So, we have $|E|$ choices for $x^1$ and $y^1$, $q$ choices for $t$, and $\approx q$ choices for $g$ giving an error of $O(q^2|E|^2)$, as claimed. For all other pairs $x,y$, the inner sum in $g$ is 1 if $x\sim y$ and 0 otherwise. Therefore, if $\nu(t)=|\{(x,y)\in E\times E:x\cdot y^\perp=t\}|$, we have
\[
\sum_g f(g)^2=O(|E|^2q^2)+\sum_t\sum_{\substack{x^1, x^2, y^1, y^2 \\ x^1\cdot x^{2\perp}=t \\ y^1\cdot y^{2\perp}=t}}E(x^1)E(x^2)E(y^1)E(y^2)=O(|E|^2q^2) +\sum_t\nu(t)^2.
\]
The proof of theorem 1.4 in \cite{HI} shows that $\nu(t)=\frac{|E|^2}{q}+O(|E|q^{1/2})$, so this gives
\[
\sum_t \nu(t)^2-\frac{|E|^4}{q}=\sum_t \left(\nu(t)-\frac{|E|^2}{q}\right)^2=O(|E|^2q^2),
\]
which proves the equation above. We now apply lemma \ref{induction} with $F=f$. Lemmas \ref{SL2size} and \ref{action} imply
\[
A=\frac{1}{|\text{SL}_2(\mathbb{F}_q)|}\sum_xE(x)\sum_gE(gx)=\frac{1}{(q^2-1)}|E|^2=\frac{|E|^2}{q^2}+O\left(\frac{|E|^2}{q^4}\right)
\]
and
\[
|S|=q^3+O(q^2).
\]
Putting this together gives
\[
A^2|S|=\frac{|E|^4}{q}+O\left(\frac{|E|^4}{q^2}\right),
\]
and therefore
\[
\sum_g f(g)^2=A^2|S|+R
\]
with $R=O(q^2|E|^2)$. Finally, we observe that $f$ has maximum $M\leq |E|$. Therefore, lemma \ref{induction} gives
\[
\sum_g f(g)^{k+1}\lesssim q^2|E|^{k+1}+\frac{|E|^{2(k+1)}}{q^{2k-1}}.
\]
Together with lemma \ref{flemma}, this gives
\[
|E|^{2(k+1)}\lesssim \mathcal{C}_{k+1}(E)\left(q^2|E|^{k+1}+\frac{|E|^{2(k+1)}}{q^{2k-1}}\right).
\]
If the second term on the right is bigger, we get the result for free. If the first term is bigger, we get
\[
\mathcal{C}_{k+1}(E)\gtrsim \frac{|E|^{k+1}}{q^2}.
\]
This will be $\gtrsim q^{2k-1}$ when $|E|\gtrsim q^{2-\frac{1}{k+1}}$, as claimed.
\end{proof}
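The concentration of $\nu(t)$ around $|E|^2/q$ that drives the proof is easy to observe empirically. The sketch below is illustrative only; the prime $q=61$, the size $|E|=1200$ and the random choice of $E$ are arbitrary, and a random set concentrates much better than the worst-case bound quoted above.
\begin{verbatim}
import random
random.seed(1)
q = 61
grid = [(a, b) for a in range(q) for b in range(q)]
E = random.sample(grid, 1200)
nu = [0] * q                                   # nu[t] = #{(x,y) in E x E : x . y^perp = t}
for x in E:
    for y in E:
        nu[(x[0] * y[1] - x[1] * y[0]) % q] += 1
print(len(E) ** 2 / q, min(nu), max(nu))       # expected value vs. extreme values of nu(t)
\end{verbatim}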
\section{Size of $\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2)$}
Since $|\text{SL}_2(R)|\approx |R|^3$, we expect each tuple in $(R^2)^{k+1}$ to be equivalent to $\approx |R|^3$ other tuples, and therefore we expect the number of congruence classes to be $\approx|R|^{2k-1}$. In the finite field case, this was proved as the first step of the proof of Theorem \ref{MT1}, but the proof in the case $R=\mathbb{Z}/p^\ell\mathbb{Z}$ is more complicated, so we will prove it here, separately from the proof of Theorem \ref{MT2} in the next section.
\begin{thm}
\label{target}
We have $\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2)\approx (p^\ell)^{2k-1}$. More precisely, the good $(k+1)$-point configurations of $(\mathbb{Z}/p^\ell\mathbb{Z})^2$ determine $\approx (p^\ell)^{2k-1}$ classes, and the bad configurations determine $o((p^\ell)^{2k-1})$ classes.
\end{thm}
\begin{proof}
We first establish that there are $\approx p^{\ell(2k-1)}$ classes of good tuples. This is easy; if $x$ is a good tuple, we have seen the map $g\mapsto gx$ is injective, so each class has size $\approx p^{3\ell}$ and there are $p^{2\ell (k+1)}$ tuples, meaning there are $p^{2\ell(k+1)-3\ell}$ classes. \\
It remains to bound the number of bad classes. We first establish the $k=1,2$ cases. When $k=1$, we want to prove there are $o(p^\ell)$ bad equivalence classes. This is clear, because in the $k=1$ case we are looking at pairs $(x^1,x^2)$ whose class is determined by the scalar $x^1\cdot x^{2\perp}$. The classes therefore correspond to the underlying set of scalars in $\mathbb{Z}/p^\ell\mathbb{Z}$, and the bad classes correspond to non-units. In the $k=2$ case, we are looking at triples $(x^1,x^2,x^3)$ whose class is determined by the three scalars $(x^1\cdot x^{2\perp},x^2\cdot x^{3\perp},x^3\cdot x^{1\perp})$. So, the space of equivalence classes can be identified with a subset of $(\mathbb{Z}/p^\ell\mathbb{Z})^3$, and the bad classes correspond to triples of non-units. \\
For $k\geq 3$, we use the following theorem, which is really just a more specific version of Theorem \ref{DF2}, also found in \cite{DF}, chapter 11. \\
\begin{thm*}[\ref{DF2}']
For any $2\times 2$ matrix $A$ with entries in a ring $R$, there exists a $2\times 2$ matrix $B$ with entries in $R$ such that $AB=BA=(\det(A))I_2$, where $I_2$ is the $2\times 2$ identity matrix.
\end{thm*}
We also make a more specific version of the definition of good and bad tuples. Namely, let $x$ be a $(k+1)$-point configuration in $(\mathbb{Z}/p^\ell\mathbb{Z})^2$, and let $m\leq \ell$ be maximal with respect to the property that $p^m$ divides $x^i\cdot x^{j\perp}$ for every pair of indices $(i,j)$. We say that $x$ is \textbf{$m$-bad}. Observe that according to our previous definition, good tuples are $0$-bad and bad tuples are $m$-bad for some $m>0$. Also observe that $m$-badness is preserved by equivalence, so we may define $m$-bad equivalence classes analogously. An easy variant of the argument in Lemma \ref{countbad} shows that the number of $m$-bad tuples is $\lesssim p^{(2\ell-m)(k+1)+m}$; note that this bound can be rewritten as $p^{\ell(2k-1)+3\ell-km}$. We claim that every $m$-bad equivalence class has at least $p^{3\ell-2m}$ elements. It follows from the claim that there are $\lesssim p^{\ell(2k-1)+(2-k)m}$ $m$-bad classes, and since we may assume $k\geq 3$ the theorem follows from here. To prove the claim, note that the equivalence class containing $x$ also contains $gx$ for any $g\in\text{SL}_2(\mathbb{Z}/p^\ell\mathbb{Z})$, so for a lower bound on the size of a class we need to determine the size of the image of the map $g\mapsto gx$. First note that we may assume without loss of generality that each coordinate of $x^1$ is a unit. This is because given $x$ we can shift any factor of $p$ from $x^1$ onto each other vector $x^i$ and obtain another representative of the same equivalence class. Next, observe that if $x$ is $m$-bad and $gx=hx$, then by Theorem \ref{DF2}' we have $p^mg=p^mh$. It follows that $h=g+p^{\ell-m}A$ for some matrix $A$ with entries between $0$ and $p^m$. Using the fact that
\[
\det(A+B)=\det(A)+\det(B)+\mathcal{B}(A,B),
\]
where $\mathcal{B}$ is bilinear, we conclude that if $h=g+p^{\ell-m}A$ and $\det(g)=\det(h)=1$, we must have
\[
0=p^{2(\ell-m)}\det(A)+p^{\ell-m}\mathcal{B}(g,A).
\]
Let $m'$ be maximal such that $p^{m'}$ divides all entries of $A$. Since the entries of $g$ cannot all be divisible by $p$, it follows that $p^{\ell-m+m'}$ is the maximal power of $p$ which divides the second term above. Since $p^{2(\ell-m+m')}$ divides the first term, it follows that both terms must be 0 for the equation to hold. In particular, we must have $\mathcal{B}(g,A)=0$. Since at least one entry of $g$ must be a unit, we can solve for one entry of $A$ in terms of the others. Now observe that in order to have $gx=hx$, we must have $p^{\ell-m}Ax=0$. In particular, $p^{\ell-m}Ax^1=0$. Since each coordinate of $x^1$ is a unit, we may solve for another entry of the matrix $A$. This means there are at most $p^{2m}$ choices for $A$, and hence the map $g\mapsto gx$ is at most $p^{2m}$-to-one. It follows that $m$-bad classes have at least $p^{3\ell-2m}$ elements, as claimed.
\end{proof}
\section{Proof of Theorem \ref{MT2}}
\begin{proof}
In keeping with the rest of this paper, the proof of the $\mathbb{Z}/p^\ell\mathbb{Z}$ case is essentially the same as the finite field case, but more complicated casework is required to deal with non-units. By our work in the previous section, our goal is to show $\mathcal{C}_{k+1}(E)\gtrsim (p^\ell)^{2k-1}$. Following the line of reasoning in the proof of Theorem \ref{MT1}, we want to establish the estimate
\[
\sum_g f(g)^2=\frac{|E|^4}{p^\ell}+O(\ell^2|E|^2(p^\ell)^{3-\frac{1}{\ell}}).
\]
We have, after a change of variables,
\[
\tag{$*$}
\sum_g f(g)^2=\sum_{x^1,x^2,y^1}E(x^1)E(x^2)E(y^1)\sum_{\substack{g \\ gx^1=y^1}}E(gx^2).
\]
We first want to throw away terms where $x^1,y^1$ have non-units in their first coordinates. Note that there are $\approx p^{4\ell-2}$ such pairs. For each, there are $|E|$ many choices for $x^2$. We claim that there are $\leq p^\ell$ choices of $g$ which map $x^1$ to $y^1$ under this constraint. It follows from this claim that those terms contribute $\lesssim p^{5\ell-2}|E|$ to ($*$), which is less than the claimed error term. To prove the claim, observe that we are counting solutions to the system of equations
\begin{align*}
ax_1^1+bx_2^1&=y_1^1 \\
cx_1^1+dx_2^1&=y_2^1\\
ad-bc&=1
\end{align*}
in $a,b,c,d$. Since $x_1^1$ is a unit, we can solve the first two equations for $a$ and $c$, respectively. Plugging these solutions into the third equation yields
\[
1=\frac{y_1^1}{x_1^1}d-\frac{y_2^1}{x_1^1}b.
\]
Since $y_1^1$ is a unit, for every $b$ there is a unique $d$ satisfying the equation. This proves the claim. Now, we want to remove all remaining terms from ($*$) corresponding to $x^1,x^2$ where $x^1\cdot x^{2\perp}$ is not a unit. To bound this contribution, we observe that for any such pair, we can write $x^2=tx^1+k$, where $0\leq t<p$ and $k$ is a vector where both entries are non-units. Therefore, there are $\leq |E|^2$ choices for $(x^1,y^1)$, there are $\leq p^{2\ell-1}$ choices for $x^2$, and there are $\leq p^\ell$ choices for $g$ as before. This gives the bound $|E|^2p^{3\ell-1}$, smaller than the claimed error term. This means, up to the error term, ($*$) can be written as
\[
\sum_{\substack{x^1,x^2,y^1,y^2 \\ x^1\cdot x^{2\perp}=y^1\cdot y^{2\perp}}}E(x^1)E(x^2)E(y^1)E(y^2)=\sum_t \nu(t)^2,
\]
where $\nu(t)=|\{(x,y)\in E\times E:x\cdot y^\perp=t\}|$. This function was studied in \cite{CIP}; in that paper, it is proved that $\nu(t)=\frac{|E|^2}{p^\ell}+O(\ell|E|(p^\ell)^{\frac{1}{2}(2-\frac{1}{\ell})})$, leading to the claimed estimate for $\sum_g f(g)^2$, using the same reasoning as in the proof of Theorem \ref{MT1}. Applying Lemma \ref{induction} and Lemma \ref{flemma} with $A\approx \frac{|E|^2}{p^{2\ell}},|S|\approx p^{3\ell},M\leq |E|, R=O(\ell^2 |E|^2(p^\ell)^{3-\frac{1}{\ell}})$ gives
\[
|E|^{2(k+1)}\lesssim \mathcal{C}_{k+1}(E)\left(\ell^2|E|^{k+1}(p^\ell)^{3-\frac{1}{\ell}}+\frac{|E|^{2(k+1)}}{p^{\ell(2k-1)}}\right).
\]
If the second term on the right is bigger, we get the result for free. If the first term is bigger, we have
\[
\mathcal{C}_{k+1}(E)\gtrsim \frac{|E|^{k+1}}{\ell^2p^{3\ell-1}}.
\]
If $|E|\gtrsim \ell^{\frac{2}{k+1}}p^{\ell s}$, then this is $\gtrsim p^{\ell s(k+1)-3\ell+1}$, which is $\gtrsim p^{\ell(2k-1)}$ when $s\geq 2-\frac{1}{\ell(k+1)}$.
\end{proof}
\section{Proof of sharpness}
\begin{proof}
We first consider the finite field case. Let $1\leq s<2-\frac{2}{k+1}$, and let $E$ be a union of $q^{s-1}$ circles of distinct radii. Since each circle has size $\approx q$, this is a set of size $\approx q^s$. Observe that for any $x\in E^{k+1}$ and any $g$ in the special orthogonal group $SO_2(\mathbb{F}_q)=O_2(\mathbb{F}_q)\cap\text{SL}_2(\mathbb{F}_q)$, we have $gx\in E^{k+1}$. Therefore, every configuration of points in $E$ is equivalent to at least $|SO_2(\mathbb{F}_q)|\approx q$ other configurations of points in $E$. This means that
\[
\mathcal{C}_{k+1}(E)\lesssim q^{-1}|E|^{k+1}\approx q^{s(k+1)-1}=o(q^{2k-1}),
\]
where in the last step we use the assumed bound on $s$. \\
Now, consider the $\mathbb{Z}/p^\ell\mathbb{Z}$ case. Let $1\leq s<2-\min\left(\frac{2}{k+1},\frac{1}{\ell}\right)$. We consider two different examples, according to which of $\frac{2}{k+1}$ or $\frac{1}{\ell}$ is smaller. In the first case, the example that works for finite fields also works here; circles still have size $\approx p^\ell$, so nothing is changed. In the second, let
\[
E=\{(t+pn,t+pm):0\leq t<p,\ 0\leq m,n< p^{\ell-1}\}.
\]
Clearly $|E|=p^{2\ell -1}=(p^\ell)^{2-\frac{1}{\ell}}$, but it is also easy to check that $x\cdot y^\perp$ is never a unit for any $x,y\in E$. Therefore, every configuration of points in $E$ is bad, and by Theorem \ref{target} the bad configurations determine only $o(p^{\ell(2k-1)})=o(\mathcal{C}_{k+1}((\mathbb{Z}/p^\ell\mathbb{Z})^2))$ classes.
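The second construction is easy to verify directly; the following snippet (illustrative only, not part of the proof) checks for $p=3$, $\ell=2$ that $|E|=p^{2\ell-1}$ and that $x\cdot y^{\perp}$ is never a unit.
\begin{verbatim}
p, l = 3, 2
E = [(t + p * n, t + p * m) for t in range(p)
     for n in range(p ** (l - 1)) for m in range(p ** (l - 1))]
print(len(E) == p ** (2 * l - 1))                      # the set has the claimed size
print(all((x[0] * y[1] - x[1] * y[0]) % p == 0         # every pairwise area is divisible by p
          for x in E for y in E))
\end{verbatim}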
\end{proof}
\end{document} |
\begin{document}
\mainmatter
\title{Bandits with an Edge}
\titlerunning{Bandits with an edge}
\author{Dotan Di Castro$^1$
\and Claudio Gentile$^2$\and Shie Mannor$^1$}
\authorrunning{Di Castro, Gentile, Mannor}
\institute{$^1$Technion, Israel Institute of Technology, Haifa, Israel\\
$^2$Universit\`a dell'Insubria, Varese, Italy\\
\mailsa, \mailsb, \mailsc
}
\maketitle
\begin{abstract}
We consider a bandit problem over a graph where the rewards are not directly observed.
Instead, the decision maker can compare two nodes and receive (stochastic) information pertaining to the
difference in their value. The graph structure describes the set of possible comparisons.
Consequently, comparing between two nodes that are relatively far requires estimating the difference between every pair of nodes on the path between them.
We analyze this problem from the perspective of sample complexity: How many queries are needed to find an approximately optimal node with probability more than $1-\delta$ in the PAC setup?
We show that the topology of the graph plays a crucial role in defining the sample complexity: graphs with a low diameter have a much better sample complexity.
\end{abstract}
\section{Introduction}
We consider a graph where every edge can be sampled. When sampling an edge, the decision maker
obtains a signal that is related to the value of the nodes defining the edge. The objective of the decision maker
is to locate the node with the highest value. Since there is no possibility to sample the value of the nodes directly, the decision maker has to infer which is the best node by considering the differences between the nodes.
As a motivation, consider the setup where a user interacts with a webpage. In the webpage, several links or ads can be presented, and the response of the user is to click one or none of them. Essentially, in this setup we query the user to compare the different alternatives. The response of the user is comparative: a preference for one alternative over another will be reflected in a higher probability of choosing that alternative. It is much harder to obtain direct feedback from a user by asking her to provide an evaluation of the worth of the selected alternative. In such a setup, not all pairs of alternatives can be directly compared or, even if so, there might be constraints on the number of times a pair of ads can be presented to a user. For example, in the context of ads it is reasonable to require that ads for similar items will not appear in the same page (e.g., two competing brands of luxury cars will not appear on the same page).
In these contexts, a click on a particular link cannot be seen as an absolute relevance judgement~(e.g., \cite{j02}), but rather as a relative preference. Moreover, feedback can be noisy and/or inconsistent, hence aggregating the choices into a coherent picture may be a non-trivial task. Finally, in such contexts pairwise comparisons occur more frequently than multiple comparisons, and are also
more natural from a cognitive point of view~(e.g.,\cite{t27}).
We model this learning scenario as bandits on graphs where the information that is obtained is differential.
We assume that there is an inherent and unknown value per node, and that the graph describes the allowed (pairwise)
comparisons.
That is, nodes $i$ and $j$ are connected by an edge if they can be compared by a single query. In this case, the query
returns a random variable whose distribution depends, in general, on the values of $i$ and $j$.
For the sake of simplicity, we assume that the observation of the edge between nodes $i$ and $j$ is a random variable
that depends only on the {\em difference} between the values of $i$ and $j$.
Since this assumption is restrictive in terms of applicability of the algorithms,
we also consider the more general setup where contextual information is observed before sampling the edges.
This is intended to model a more practical setting where, say, a web system has preliminary access to a set
of user profile features.
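As a toy illustration of the observation model just described (not taken from the paper; the node values, the Gaussian noise and the sample sizes are all made up), the following Python sketch places five nodes on a path, lets every edge query return a noisy version of the value difference of its endpoints, and estimates the difference between the two endpoints of the path by summing per-edge estimates, which illustrates why far-apart nodes require sampling every edge between them.
\begin{verbatim}
import random
random.seed(0)
values = [0.0, 0.4, 0.1, 0.9, 0.3]            # unknown node values, nodes 0..4 on a path
def query(i, j, sigma=0.5):
    """Sample the edge (i, j): noisy observation of values[j] - values[i]."""
    return values[j] - values[i] + random.gauss(0.0, sigma)
samples_per_edge = 2000
# Estimate values[4] - values[0] by summing per-edge estimates along the path.
diff = 0.0
for i in range(4):
    diff += sum(query(i, i + 1) for _ in range(samples_per_edge)) / samples_per_edge
print(round(diff, 3), values[4] - values[0])   # estimate vs. the true difference 0.3
\end{verbatim}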
In this paper, our goal is to identify the node with the highest value, a problem that has been studied
extensively in the machine learning literature (e.g., \cite{evendar2006action,abm10}).
More formally, our objective is to find an approximately optimal node (i.e.,
a node whose value is at most
$\epsilon$ smaller than the highest value) with a given failure probability $\delta$ as quickly as
possible.
When contextual information is added, the goal becomes to progressively reduce the time
needed to identify a good node for the user at hand, as more and more users interact with
the system.
\noindent{\bf Related work. }
There are two common objectives in stochastic bandit problems: minimizing the regret
and identifying the ``best'' arm.
While both objectives are of interest, regret minimization seems particularly difficult in our setup.
In fact, a recent line of research related to our paper is the \emph{Dueling Bandits Problem}
of Yue et al. \cite{yue2011,yj11} (see also \cite{frpu94}).
In the dueling bandit setting, the learner
has at its disposal a {\em complete} graph of comparisons between pairs of nodes, and each edge
$(i,j)$ hosts an unknown preference probability $p_{i,j}$ to be interpreted as the probability that
node $i$ will be preferred over node $j$. Further consistency assumptions (stochastic transitivity
and stochastic triangle inequality) are added. The complete graph assumption
allows the authors to define a well-founded notion of regret, and analyze a regret
minimization algorithm which is further enhanced in \cite{yj11}
where the consistency assumptions are relaxed.
Although at first look our paper seems to deal with the same setup, we highlight here the main
differences. First, the setups are different with respect to the topology handled.
In \cite{yue2011,yj11} the topology is always a complete graph, which makes it possible to directly
compare any two nodes. In our work (as in real life) the topology is \emph{not} a complete
graph, resulting in a framework where a comparison of two nodes requires sampling all the edges on a path between
them. In the extreme case of a line graph we need to sample all the edges in the graph
in order to compare the two nodes that are farthest apart.
Second, the objective of minimizing the regret is natural for a complete graph, where it amounts to
repeatedly comparing the pairs actually chosen with the best arm.
In a topology other than the complete graph this notion is less
clear, since one has to restrict choices to the edges that are available.
Finally, the algorithms in \cite{yue2011,yj11} are geared towards the elimination of
arms that are not optimal with high probability. In our setup one \emph{cannot} eliminate such nodes
and edges, because they remain crucial for comparing candidates for the optimal node. Therefore, the resulting
algorithms and analyses are quite different.
On the other hand, constraining to a given set of allowed comparisons leads us to make less
general statistical assumptions than \cite{yue2011,yj11}, in that our algorithms are based on the
ability to reconstruct the reward difference on adjacent nodes by observing their connecting edge.
From a different perspective, the setup we consider is reminiscent of online learning with partial monitoring
\cite{LugosiMStoltz08Strategies}. In the partial monitoring setup, one usually does not observe the reward directly,
but rather a signal that is related (probabilistically) to the unobserved reward. However, as far as we know,
the alternatives (called arms usually) in the partial monitoring setup are separate and there is no additional
structure: when sampling an arm a reward that is related to this arm alone is obtained but not observed.
Our work differs in imposing an additional structure: the signal is always relative to adjacent nodes.
This means that comparing two nodes that are not
adjacent requires sampling all the edges on a path between the two nodes, so deciding which of two remote
nodes has the higher value requires a high degree of certainty regarding all the comparisons along that path.
Another research area which is somewhat related to this paper is learning to rank via
comparisons
(a very partial list of references includes~\cite{crs99,j02,dms04,bsr05,jr05,cqltl07,hfcb08,xlwzl08,Liu09}).
Roughly speaking, in this problem we have a collection of training instances to be associated with a finite set of
possible alternatives or classes (the graph nodes in our setting). Every training example is assigned a set of
(possibly noisy or inconsistent) pairwise (or groupwise) preferences between the classes.
The goal is to learn a function
that maps a new training example to a total order (or ranking) of the classes. We emphasize that
the goal in this paper
is different in that we work in the bandit setup with a given structure for the comparisons and, in addition,
we are just aiming at identifying the (approximately) best class, rather than ranking them all.
\noindent{\bf Content of the paper. }
The rest of the paper is organized as follows. We start from the formal model in Section \ref{sec:model}.
We analyze the basic linear setup, where each node is comparable to at most two other nodes, in
Section~\ref{sec:Linear-Topology}.
We then move to the tree setup and analyze it in Section \ref{s:tree}.
The general setup of a network is treated in Section \ref{s:net}.
Some experiments are then presented in Section \ref{s:sims} to elucidate the theoretical findings in
previous sections.
In Section \ref{s:extensions} we discuss the more general setting with contextual information.
We close with some directions for future research.
\section{Model and Preliminaries}\label{sec:model}
In this section we describe the classical Multi-Armed Bandit (MAB)
setup, describe the Graphical Bandit (GB) setup, state/recall two
concentration bounds for sequences of random variables, and review a few terms from
graph theory.
\subsection{The Multi-Armed Bandit Problem}
The MAB model \cite{lai1985asymptotically} is comprised of a set of \emph{arms} $A\triangleq\left\{ 1,\ldots,n\right\} $.
When sampling arm $i\in A$, a \emph{reward} which is a random variable
$R_{i}$, is provided.
Let $r_{i}=\mathbb{E}\left[R_{i}\right]$. The goal in the MAB
setup is to find the arm with the highest expected reward, denoted
by $r^{*}$, where we term this arm's reward the \emph{optimal
reward}. An arm whose expected reward is strictly less than $r^{*}$
is called a \emph{non-best arm}. An arm $i$ is called an $\epsilon$-optimal
arm if its expected reward is at most $\epsilon$ from the optimal
reward, i.e., $\mathbb{E}\left[R_{i}\right]\ge r^{*}-\epsilon$. In
some cases, the goal in the MAB setup is to find an $\epsilon$-optimal arm.
A typical algorithm for the MAB problem does the following. At each
time step $t$ it samples an arm $i_{t}$ and receives a reward $R_{i_t}$.
When making its selection,
the algorithm may depend on the history (i.e., the actions and rewards)
up to time $t-1$. Eventually the algorithm must commit to a single
arm and select it. Next we define the desired properties of such
an algorithm.
\begin{definition}
(PAC-MAB) An algorithm is an $\left(\epsilon,\delta\right)$-probably
approximately correct (or $\left(\epsilon,\delta\right)$-PAC) algorithm
for the MAB problem with sample complexity $T$, if it terminates
and outputs an $\epsilon$-optimal arm with probability at least $1-\delta$,
and the number of times it samples arms before
termination is bounded by $T$.
\end{definition}
In the case of standard MAB problems there is no structure defined over
the arms. In the next section we describe the setup of our work where such a
structure exists.
\subsection{The Graphical Bandit Problem}
Suppose that we have an undirected and connected graph $G = (V,E)$ with nodes
$V=\left\{ 1,\ldots,n\right\}$ and edges $E$. The nodes are associated with reward
values $r_{1},\ldots,r_{n}$, respectively, that are unknown to us. We denote
the node with highest value by $i^{*}$ and, as before, $r^* = r_{i^*}$. Define
$u\triangleq \min_{j\ne i^*} \left(r_{i^*}-r_{j}\right)$ to be the difference between the value of the node
with the highest value and that of the node with the second highest value. We call $u$ the reward
{\em gap}, and interpret it as a measure of how
easy it is to discriminate between the two best nodes in the network.
As expected, the gap $u$ has a significant influence on the sample complexity bounds
(provided the accuracy parameter $\epsilon$ is not large).
We say that nodes $i$ and $j$ are \emph{neighbors}
if there is an edge in $E$ connecting them (and denote the edge random variable by $E^{ij}$).
This edge value is a random variable whose distribution is determined by
the nodes it is connecting, i.e., $(i,j)$'s statistics are determined by $r_{i}$
and $r_{j}$. In this work, we assume that\footnote{
Notice that, although the graph is undirected, we view edge $(i,j)$ as a directed
edge from $i$ to $j$. It is understood that $E^{ji} = -E^{ij}$.
}
$\mathbb{E}\left[E^{ij}\right]= r_{j}-r_{i}$.
Also, for the sake of concreteness, we assume the edge values are bounded in $[-1,1]$.
In this model, we can only sample the graph edges $E^{ij}$ that provide independent
realizations of the node differences.
For instance, we may interpret $E^{ij} = +1$ if the feedback we receive says that
item $j$ is preferred over item $i$, $E^{ij} = -1$ if $i$ is preferred over $j$, and
$E^{ij} = 0$ if no feedback is received. Then the reward difference $r_j - r_i$
becomes equal to the difference between the probability of preferring
$j$ over $i$ and the probability of preferring $i$ over $j$.
Let us denote the realizations of $E^{ij}$ by $E_{t}^{ij}$
where the subscript $t$ denotes time.
Our goal is to find an $\epsilon$-optimal
node, i.e., a node $i$ whose reward satisfies $r_{i}\ge r^{*}-\epsilon$.
Whereas neighboring nodes can be directly compared by sampling their connecting
edge, if the nodes are far apart, a comparison between the two can only be done
indirectly, by following a path connecting them.
We denote a \emph{path} between node $i$ and node $j$ by $\pi_{ij}$.
Observe that there can be several paths in $G$ connecting $i$ to $j$.
For a given path $\pi_{ij}$ from $i$ to $j$, we define the
\emph{composed edge} value $E_{\pi}^{ij}$ by $E_{\pi}^{ij} = \sum_{(k,l)\in\pi_{ij}} E^{kl}$,
with $E_{\pi}^{ii} = 0$.
By telescoping, the average value of a composed edge $E_{\pi}^{ij}$ only depends
on its endpoints, i.e.,
\begin{equation}\label{eq:depends_edge}
\mathbb{E}\left[E_{\pi}^{ij}\right]=\sum_{(k,l)\in\pi_{ij}}\mathbb{E}\left[E^{kl}\right]
=\sum_{(k,l)\in\pi_{ij}}\left(r_{l}-r_{k}\right) = r_{j}-r_{i},
\end{equation}
independent of $\pi$.
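As a small sanity check (with made-up values), consider a three-node path $1-2-3$ with $r_{1}=0.2$, $r_{2}=0.5$ and $r_{3}=0.4$. Then
\[
\mathbb{E}\left[E_{\pi}^{13}\right]=\mathbb{E}\left[E^{12}\right]+\mathbb{E}\left[E^{23}\right]=(0.5-0.2)+(0.4-0.5)=0.2=r_{3}-r_{1},
\]
so the composed edge compares nodes $1$ and $3$ even though they are not adjacent, at the price of accumulating the noise of both underlying edges.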
Similarly, define $E_{\pi,t}^{ij}$ to be the time-$t$ realization
of the composed edge random variable $E_{\pi}^{ij}$, obtained when we pull once
all the edges along the path $\pi_{ij}$ joining $i$ to $j$. A schematic illustration of the GB setup is presented in Figure \ref{fig:setup}.
\begin{figure}
\caption{Schematic illustration of the GB setup for 6 nodes}
\label{fig:setup}
\end{figure}
The algorithms we present in the next sections hinge on constructing reliable estimates
of edge reward differences, and then combining them into a suitable node selection procedure.
This procedure heavily depends on the graph topology.
In a tree-like (i.e., acyclic) structure no inconsistencies can arise due to the noise
in the edge estimators. Hence the node selection procedure just aims at identifying the
node with the largest estimated reward difference with respect to a given reference node.
On the other hand, if the graph has cycles, we have to rely on a more robust node
elimination procedure, akin to the one investigated in \cite{evendar2006action}
(see also the more recent \cite{abm10}).
\subsection{Large Deviations Inequalities}
In this work we use Hoeffding's maximal inequality (e.g., \cite{cesa2006}).
\begin{lemma}\label{l:hoeff}
\label{lem:Hoeffding_maximal}Let $X_{1},\ldots,X_{N}$ be independent
random variables with zero mean satisfying $a_{i}\le X_{i}\le b_{i}$
w.p. 1. Let $S_{i}=\sum_{j=1}^{i}X_{j}$. Then, \[
P\left(\max_{1\le i\le N}S_{i}>\epsilon\right)
\le\exp\left(-\frac{\epsilon^{2}}{\sum_{i=1}^{N}\left(b_{i}-a_{i}\right)^{2}}
\right).
\]
\end{lemma}
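As a direct specialization (stated here only for intuition), if all the variables share the same range $-c\le X_{i}\le c$, the lemma gives
\[
P\left(\max_{1\le i\le N}S_{i}>\epsilon\right)\le\exp\left(-\frac{\epsilon^{2}}{4Nc^{2}}\right),
\]
so a deviation of order $\epsilon$ for \emph{any} prefix sum is unlikely as soon as $\epsilon$ is large compared to $2c\sqrt{N}$.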
\section{Linear topology and sample complexity}\label{sec:Linear-Topology}
As a warm-up, we start by considering the GB setup in the case of a linear graph,
i.e., $E=\left\{ (i,i+1): 1\le i\le n-1\right\} $.
We call it the \emph{linear setup}.
The algorithm
for finding the best node in the linear setup is presented in Algorithm
\ref{alg:Algorithm-linear}. The algorithm samples all the edges, computes for each
edge its empirical mean, and based on these statistics finds the node with the highest estimated value.
Algorithm \ref{alg:Algorithm-linear} will also serve as a subroutine for the
tree-topology discussed in Section \ref{s:tree}.
\begin{algorithm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE {$\epsilon>0$, $\delta>0$, line graph with edge set $E=\left\{ (i,i+1): 1\le i\le n-1\right\} $ }
\FOR{$i=1,\ldots,n-1$}
\STATE {Pull edge $(i,i+1)$ for $T^{i}$ times}
\STATE {Let $\hat{E}^{i,i+1}=\frac{1}{T^{i}}\sum_{t=1}^{T^{i}}E_{t}^{i,i+1}$ be the empirical average of edge $(i,i+1)$}
\STATE {Let $\hat{E}_{\pi_{1,i+1}}^{1,i+1} = \sum_{k=1}^{i}\hat{E}^{k,k+1}$ be the empirical average of the
composed edge $E_{\pi_{1,i+1}}^{1,i+1}$, where $\pi_{1,i+1}$ is the (unique) path from 1 to $i+1$.}
\ENDFOR
\ENSURE {Node $k = {\rm argmax}_{i = 1,\ldots, n} \hat{E}_{\pi_{1i}}^{1i}$ (with the convention $\hat{E}_{\pi_{11}}^{11}=0$).
}
\end{algorithmic}
\caption{The algorithm for the linear setup\label{alg:Algorithm-linear}}
\end{algorithm}
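To make the procedure concrete, the following Python sketch simulates Algorithm \ref{alg:Algorithm-linear} on synthetic data. The noise model (uniform noise added to the reward difference and clipped to $[-1,1]$), the reward values and the function names are illustrative assumptions, not part of the algorithm itself; nodes are indexed from $0$ rather than from $1$.
\begin{verbatim}
import random

def sample_edge(r, i, noise=0.5):
    # One pull of edge (i, i+1): reward difference plus bounded noise,
    # clipped to [-1, 1] (illustrative noise model).
    x = (r[i + 1] - r[i]) + random.uniform(-noise, noise)
    return max(-1.0, min(1.0, x))

def linear_setup_algorithm(r, T):
    """Algorithm 1 (sketch): pull each edge T times and return the node
    with the largest estimated composed-edge value relative to node 0."""
    n = len(r)
    edge_means = []
    for i in range(n - 1):
        pulls = [sample_edge(r, i) for _ in range(T)]
        edge_means.append(sum(pulls) / T)
    # Composed-edge estimates are prefix sums of the edge means.
    composed = [0.0]
    for m in edge_means:
        composed.append(composed[-1] + m)
    return max(range(n), key=lambda i: composed[i])

if __name__ == "__main__":
    rewards = [0.1, 0.4, 0.9, 0.3, 0.6]  # unknown node values (illustrative)
    best = linear_setup_algorithm(rewards, T=2000)
    print("estimated best:", best, "true best:", rewards.index(max(rewards)))
\end{verbatim}
The prefix-sum step mirrors line 4 of Algorithm \ref{alg:Algorithm-linear}: composed-edge estimates are obtained by summing the per-edge empirical means along the unique path from the first node.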
The following proposition gives the sample complexity of Algorithm \ref{alg:Algorithm-linear} in the case
when the edges are bounded.
\begin{proposition}\label{prop:linear_PAC_bounded}
If $-1\le E^{i,i+1}\le 1$ holds, then Algorithm \ref{alg:Algorithm-linear}
operating on a linear graph with reward gap $u$
is an $\left(\epsilon,\delta\right)$-PAC algorithm when the $T^i$
satisfy
\[
\left(\sum_{i=1}^{n-1}\frac{4}{T^{i}}\right)^{-1}\ge\frac{1}{\max\{\epsilon,u\}^{2}}\log\left(\frac{2}{\delta}\right).
\]
If $T^{i}=T$ for all $i$,
then it suffices that each edge be sampled
$
T\ge\frac{4n}{\max\{\epsilon,u\}^{2}}\log\left(\frac{2}{\delta}\right)
$
times.
Hence the sample complexity of the algorithm is $O\left(\frac{n^2}{\max\{\epsilon,u\}^{2}}
\log\left(\frac{1}{\delta}\right)\right).$
\end{proposition}
\begin{proof}
Let
$
\tilde{E}_{t}^{i,i+1}\triangleq\frac{(r_{i+1}-r_{i})-E_{t}^{i,i+1}}{T^{i}},\quad t = 1,\ldots, T^{i}.
$
Each $\tilde{E}_{t}^{i,i+1}$ has zero mean with
$-2/T^{i}\le\tilde{E}_{t}^{i,i+1}\le 2/T^{i}$.
Hence
\begin{equation}\label{eq:sequence}
\tilde{E}_{1}^{1,2},\ldots,\tilde{E}_{T^{1}}^{1,2},
\tilde{E}_{1}^{2,3},\ldots,\tilde{E}_{T^{2}}^{2,3},
\ldots,
\tilde{E}_{1}^{n-1,n},\ldots,\tilde{E}_{T^{n-1}}^{n-1,n}
\end{equation}
is a sequence of $\sum_{i=1}^{n-1}T^{i}$ zero-mean and independent random
variables.
Set for brevity $\tilde{\epsilon} = \max\{\epsilon,u\}$, and
suppose, without loss of generality, that node $j$ has the highest value.
The probability that Algorithm \ref{alg:Algorithm-linear}
fails, i.e., returns a node whose value is more than $\tilde{\epsilon}$ below the optimal value, is bounded by
\begin{equation}\label{e:err}
\Pr\left(\exists i = 1,\ldots,n : \hat{E}_{\pi_{1i}}^{1i} - \hat{E}_{\pi_{1j}}^{1j}> 0 \textrm{ and }
r_i < r_j - \tilde{\epsilon} \right)~.
\end{equation}
We can write
\begin{align}
(\ref{e:err})
& \le \Pr\left(\exists i = 1,\ldots,n : \hat{E}_{\pi_{1i}}^{1i} - \hat{E}_{\pi_{1j}}^{1j} -\left(r_{i}-r_{j}\right)
> \tilde{\epsilon} \right)\notag\\
& = \Pr\left(\exists i = 1,\ldots,n : \sum_{k=1}^{i-1}\sum_{t=1}^{T^{k}}\tilde{E}_{t}^{k,k+1} > \tilde{\epsilon}\right)\notag\\
& \le \Pr\left(\exists\textrm{ partial sum in \eqref{eq:sequence} with magnitude} > \tilde{\epsilon}\right)\notag\\
& \le2\exp\left(-\frac{\tilde{\epsilon}^{2}}{\sum_{k=1}^{n-1}\sum_{t=1}^{T^{k}}\left(2/T^{k}\right)^{2}}\right),\notag
\end{align}
where in the last inequality we used Lemma \ref{lem:Hoeffding_maximal}.
Requiring this probability to be bounded by $\delta$
yields the claimed inequality.
\qed
\end{proof}
The sample sizes $T^i$ in Proposition \ref{prop:linear_PAC_bounded}
encode constraints on the number of times the edges $(i,i+1)$ can be sampled.
Notice that the statement therein implies
$T^i \geq \frac{4}{\max\{\epsilon,u\}^{2}}\log\left(\frac{2}{\delta}\right)$ for all $i$,
i.e., we cannot afford in a line graph to undersample any edge. This is because
every edge in a line graph is a {\em bridge}, hence a poor estimation of any such
edge would affect the differential reward estimation throughout the graph.
In this respect, this proposition only allows for a partial tradeoff among these numbers.
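As a numerical illustration (the values are arbitrary), take $n=100$, $\epsilon=0.1$, $\delta=0.05$ and suppose $u\le\epsilon$. The uniform choice $T^{i}=T$ then requires
\[
T\ \ge\ \frac{4\cdot 100}{(0.1)^{2}}\,\log\left(\frac{2}{0.05}\right)\ \approx\ 1.5\cdot 10^{5}
\]
pulls per edge, i.e., roughly $1.5\cdot 10^{7}$ pulls overall, in line with the $O\!\left(n^{2}/\epsilon^{2}\right)$ scaling of the proposition.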
\iffalse
****************************************************************
\begin{proof}
As in the proof of Proposition \ref{prop:linear_PAC_bounded}, suppose
that $p_{1}>p_{i}$ for $2\le i\le M$. The random variable $\hat{E}^{i,i+1}$
is a Gaussian random variable with mean $p_{i}-p_{i+1}$ and variance
$\sigma_{i,i+1}^{2}/T^{i,i+1}$. Now \begin{equation}
\begin{aligned} & \Pr\left(e_{1}\right)\\
& \le\Pr\left(\exists k\in\left\{ 2,\ldots,M\right\} :\left|\hat{E}^{1,k}-\left(p_{1}-p_{k}\right)\right|>\epsilon\right)\\
& =\Pr\left(\exists k\in\left\{ 2,\ldots,M\right\} :\left|\sum_{i=1}^{k-1}\left(\hat{E}^{i,i+1}-\left(p_{i}-p_{i+1}\right)\right)\right|>\epsilon\right)\\
& \le\Pr\left(\exists\textrm{partial sum in sequence \eqref{eq:sequence}}\ge\epsilon\right)\\
& \le2\exp\left(-\frac{\epsilon^{2}}{\sum_{k=1}^{M-1}\textrm{Var}\left(\hat{E}^{i,i+1}\right)}\right)\\
& =2\exp\left(-\frac{\epsilon^{2}}{\sum_{k=1}^{M-1}\sigma_{k,k+1}^{2}/T^{k,k+1}}\right),\end{aligned}
\label{eq:err_linear_Gauss}\end{equation}
where in the third inequality we use \ref{lem:bernstein_like}. Going
on similar lines as in Proposition \ref{prop:linear_PAC_bounded}
we get\[
\left(\sum_{k=1}^{M-1}\frac{\sigma_{k_{k+1}}^{2}}{T^{k,k+1}}\right)^{-1}\ge\frac{1}{\epsilon^{2}}\log\left(\frac{2}{\delta}\right).\]
The result for the case $T^{i,i+1}=T$ and $c_{i}=c$ is straight
forward.
\end{proof}
****************************************************************
\fi
\section{Tree topology and its sample complexity}\label{s:tree}
In this section we investigate PAC algorithms for finding the best
node in a tree. Let then $G = (V,E)$ be an $n$-node
tree with diameter $D$ and a set of leaves $L \subseteq V$.
Without loss of generality
we can assume that the tree is rooted at node 1 and that all edges
are directed downwards to the leaves. Algorithm \ref{alg:Algorithm-tree}
considers all possible paths from the root to the leaves and treats
each one of them as a line graph to be processed as in Algorithm
\ref{alg:Algorithm-linear}.
We have the following proposition where, for simplicity of presentation,
we no longer differentiate among the sample sizes $T^{i,j}$ associated
with the edges $(i,j)$.
\begin{algorithm}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE {$\epsilon>0$, $\delta>0$, tree graph with set of leaves $L \subseteq V$}
\STATE {Pull each edge $(i,j) \in E$ for $T$ times}
\STATE {Let $\hat{E}^{ij}=\frac{1}{T}\sum_{t=1}^{T}E_{t}^{ij}$
be the empirical average of edge $(i,j)$}
\FOR{all leaves $k \in L$}
\STATE {Let $m_k = {\rm argmax}_{i \in \pi_{1k}} \hat{E}_{\pi_{1i}}^{1i}$
be the node with the largest empirical composed-edge average along path $\pi_{1k}$ (as in Algorithm \ref{alg:Algorithm-linear})}
\ENDFOR
\ENSURE {Node $m = m_{k^*}$, where $k^* = {\rm argmax}_{k \in L} \hat{E}_{\pi_{1 m_k}}^{1 m_k}$.}
\end{algorithmic}
\caption{The algorithm for the tree setup\label{alg:Algorithm-tree}}
\end{algorithm}
\begin{proposition}\label{prop:tree_PAC_bounded}
If $-1\le E^{ij}\le 1$ holds, then Algorithm \ref{alg:Algorithm-tree}
operating on a tree graph with reward gap $u$
is an $\left(\epsilon,\delta\right)$-PAC algorithm when
the sample complexity $T$ of each edge satisfies
$
T\ge\frac{4D}{\max\{\epsilon,u\}^{2}}\log\left(\frac{2|L|}{\delta}\right).
$\
Hence the sample complexity of the algorithm is
$O\left(\frac{nD}{\max\{\epsilon,u\}^{2}}\log\left(\frac{|L|}{\delta}\right)\right).$
\end{proposition}
\begin{proof}
The probability that Algorithm \ref{alg:Algorithm-tree}
returns a node whose average reward is more than $\epsilon$ below the optimal one
coincides with the probability that there exists a leaf $k \in L$ such that
Algorithm \ref{alg:Algorithm-linear}
operating on the linear graph $\pi_{1,k}$ singles out a node $m_k$ whose average
reward is more than $\epsilon$ below the optimal one within $\pi_{1,k}$.
By Proposition \ref{prop:linear_PAC_bounded}, taking
$T \ge \frac{4|\pi_{1,k}|}{\tilde{\epsilon}^{2}}\log\left(\frac{2|L|}{\delta}\right)$,
with $\tilde{\epsilon} = \max\{\epsilon,u\}$,
ensures that, for any fixed leaf $k$, this happens with probability at most $\delta/|L|$.
Since $|\pi_{1,k}|\le D$, it suffices to sample each edge
$\frac{4D}{\tilde{\epsilon}^{2}}\log\left(\frac{2|L|}{\delta}\right)$ times, and the
claim follows by a standard union bound over $L$. \qed
\end{proof}
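To appreciate the role of the diameter, consider the following back-of-the-envelope comparison. On a line graph with $n$ nodes we have $D=n-1$, and the bound of Proposition \ref{prop:linear_PAC_bounded} scales as $n^{2}/\max\{\epsilon,u\}^{2}$; on a balanced binary tree with $n$ nodes we have $D=O(\log n)$ and $|L|=O(n)$, so the bound of Proposition \ref{prop:tree_PAC_bounded} scales as $\frac{n\log n}{\max\{\epsilon,u\}^{2}}\log\frac{n}{\delta}$. Low-diameter topologies are thus sampled far more efficiently.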
\section{Network Sample Complexity\label{s:net}}
In this section we deal with the problem
of finding the optimal reward in a general connected and undirected
graph $G = (V,E)$, being $|V|=n$.
We describe a node elimination algorithm that works in phases,
sketch an efficient implementation, and provide a sample complexity bound.
The following ancillary definitions will be useful.
We say that a node is a \emph{local maximum}
in a graph if none of its neighboring nodes has a higher expected reward
than the node itself. The distance between node $i$ and node $j$
is the length of the shortest path between the two nodes.
Finally, the diameter $D(G)$ of a graph $G$ is the largest distance
between any pair of nodes.
Our suggested algorithm operates in at most $\log n$ phases. For notational
simplicity, it will be convenient to use subscripts to denote the
phase number.
We begin with Phase 1, where the graph $G_1 = (V_1,E_1)$
is the original graph, i.e., at the beginning all nodes are participating,
and $n_1 = |V_1| = n.$
We then find a subgraph of $G_1$, which we call
\emph{sampled graph} denoted by $G_{1}^{S}$,
that includes all the edges involved in shortest paths between
all nodes in $V_1$. We sample each edge in the subgraph $G_{1}^{S}$ for $T_{1}$ times,
and compute the corresponding sample averages. Based on these averages,
we find the local maxima\footnote
{
Ties can be broken arbitrarily.
}
of $G_{1}^{S}$.
The key observation is that there can be at most $n_{1}/2$ maxima.
Denote this set of maxima by $V_{2}$. Now, define a subgraph, denoted by
$G_{2}$, whose nodes are $V_{2}$. We repeat the process of getting
a sampled graph, denoted by $G_{2}^{S}$. We sample the edges of the
sampled graph $G_{2}^{S}$ for $T_{2}$ times and define, based on its maxima, a new subgraph.
Denote the set of maxima by $V_{3}$, and the process continues until
only one node is left. We call this algorithm NNE (\emph{Network Node Elimination}),
which is similar to the action elimination procedure of \cite{evendar2006action}
(see also \cite{abm10}). The algorithm
is summarized in Algorithm \ref{alg:network}.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE {$\epsilon>0$, $\delta>0$, graph $G=(V,E)$, $i=1$}
\STATE {Initialize $G_1=G$, $V_1=V$}
\STATE {Compute the shortest path between all pairs of nodes of $G_1$,
and denote each path by $\pi_{ij}$. }
\STATE {Initialize the shortest path set by $SP_1 = \{\pi_{ij} | i,j \in V_1 \}$ }
\WHILE{$|V_i|>1$}
\STATE {$n_i=|V_i|$}
\STATE {Using the shortest paths in $SP_i$, find a sampled graph $G^S_i$ of $G_i$}
\STATE {$D_{i} = D(G^S_{i})$}
\STATE {Pull each edge in $G^S_i$ for $T_i$ times}
\STATE {Find the local maxima set, $V_{i+1}$, on $G^S_i$,
and get a subgraph $G_{i+1}$ that contains $V_{i+1}$}
\STATE {$SP_{i+1} = \{\pi_{ij} \in SP_i | i,j \in V_{i+1} \}$ }
\STATE{$i \leftarrow i+1$}
\ENDWHILE
\ENSURE {The remaining node}
\end{algorithmic}
\caption{The Network Node Elimination Algorithm\label{alg:network}}
\end{algorithm}
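The following Python sketch implements one possible reading of a single NNE phase; it is only meant to make the elimination step concrete. The stochastic oracle \texttt{pull\_edge(a, b)} (returning one noisy observation of $r_{b}-r_{a}$), the adjacency-list representation and the tie handling are assumptions of the sketch, and the graph is assumed to be connected.
\begin{verbatim}
from collections import deque

def bfs_prev(adj, source):
    # Shortest-path predecessors from `source` via BFS (unweighted graph).
    prev = {source: None}
    q = deque([source])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in prev:
                prev[w] = v
                q.append(w)
    return prev

def path(prev, target):
    # Reconstruct the node sequence from the BFS source to `target`.
    p = []
    while target is not None:
        p.append(target)
        target = prev[target]
    return p[::-1]

def nne_phase(adj, surviving, pull_edge, T):
    # One (simplified) NNE phase: sample the edges lying on shortest paths
    # between surviving nodes, estimate node values relative to a reference
    # node, and keep the local maxima of the sampled subgraph.
    surviving = list(surviving)
    sampled_edges = set()
    for s in surviving:
        prev = bfs_prev(adj, s)
        for t in surviving:
            nodes = path(prev, t)
            for a, b in zip(nodes, nodes[1:]):
                sampled_edges.add((min(a, b), max(a, b)))
    # Pull each sampled edge T times; mean[(a, b)] estimates r_b - r_a (a < b).
    mean = {e: sum(pull_edge(*e) for _ in range(T)) / T for e in sampled_edges}
    sub_adj = {v: [] for v in surviving}
    for a, b in sampled_edges:
        sub_adj.setdefault(a, []).append(b)
        sub_adj.setdefault(b, []).append(a)
    # Relative value estimates via BFS over the sampled subgraph.
    root = surviving[0]
    est = {root: 0.0}
    q = deque([root])
    while q:
        v = q.popleft()
        for w in sub_adj[v]:
            if w not in est:
                m = mean[(min(v, w), max(v, w))]
                est[w] = est[v] + (m if v < w else -m)   # E^{ji} = -E^{ij}
                q.append(w)
    # Surviving nodes that are local maxima of the sampled subgraph.
    return [v for v in surviving
            if all(est[v] >= est[w] for w in sub_adj[v])]
\end{verbatim}
Iterating \texttt{nne\_phase} until a single node survives (with the phase-dependent sample sizes $T_i$ of Algorithm \ref{alg:network}) gives the overall elimination procedure.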
Two points should be made regarding the NNE algorithm. First, as
will be observed below, the sequence $\left\{ D(G_{i}^{S})\right\} _{i=1}^{\log n}$
of diameters is nonincreasing. Second, from the implementation viewpoint, a data-structure
maintaining all shortest paths between nodes is crucial, in order to efficiently
eliminate nodes while tracking the shortest paths between the surviving
nodes of the graph. In fact, this data structure might just be a collection of $n$
breadth-first spanning trees rooted at each node, that encode the shortest path between the root
and any other node in the graph. When node $i$ gets eliminated, we first eliminate the
spanning tree rooted at $i$, and also prune all the other spanning trees in which node $i$
occurs as a leaf. If $i$ is a (non-root) internal node of another tree,
then $i$ should not be removed from that tree,
since $i$ may still lie on the shortest path between a pair of surviving nodes. Note that connectivity is maintained throughout the process.
The following result gives a PAC bound for Algorithm \ref{alg:network} in the case
when the $E^{i,j}$ are bounded.
\begin{proposition}\label{lem:network}
Suppose that $-1\le E^{i,j}\le 1$ for every $(i,j)\in E$. Then
Algorithm \ref{alg:network} operating on a general graph $G$ with diameter $D$ and
reward gap $u$ is an $\left(\epsilon,\delta\right)$-PAC
algorithm with edge sample complexity
\begin{align}
T \le ~\frac{\sum_{i=1}^{\log n}n_i D_i}{\left(\max\{\epsilon,u\}/\log n\right)^2}
\,\log\left(\frac{n}{\delta/\log n}\right)
\le \frac{n\,D}{\left(\max\{\epsilon,u\}/\log n \right)^2}
\,\log\left(\frac{n}{\delta/\log n}\right).\notag
\end{align}
\end{proposition}
\begin{proof}
In each phase we have at most half the nodes of the previous
phase, i.e., $n_{i+1}\le n_{i}/2$. Therefore, the algorithm stops
after at most $\log n$ phases. Also, because we retain the shortest paths between
the surviving nodes, we have $D_{i+1} \leq D_i \leq D$. At each phase, as
in the previous sections, we make sure that the probability of failing to identify an
$(\epsilon/\log n)$-optimal node among the surviving ones is at most $\delta/\log n$.
Therefore, it suffices to pull the edges in each sampled graph $G_{i}^{S}$
for $T_{i}\le\frac{n_i D_i}{\left(\max\{\epsilon,u\}/\log (n) \right)^2}
\log\left(\frac{n_i}{\delta/\log n}\right)$
times. Hence the overall sample complexity for an $\left(\epsilon,\delta\right)$-PAC
bound is at most $\sum_{j=1}^{\log n} T_j$, as claimed.
The last inequality just follows from $n_{i+1}\le n_{i}/2$
and $D_i \leq D$ for all $i$.
\qed
\end{proof}
Being more general, the bound contained in Proposition \ref{lem:network} is weaker
than the ones in previous sections when specialized to line graphs or trees.
In fact, one is left wondering whether it is always convenient to
reduce the identification problem on a general graph $G$ to the
identification problem on trees by, say, extracting a suitable spanning tree
of $G$ and then invoking Algorithm \ref{alg:Algorithm-tree} on it.
The answer is actually negative, as the simulations reported
in the next section show.
\section{Simulations\label{s:sims}}
In this section we briefly investigate the role of the graph topology
in the sample complexity.
In our simple experiment we compare Algorithm \ref{alg:Algorithm-tree}
(with two types of spanning trees) to Algorithm \ref{alg:network} over the
``spider web graph" illustrated in Figure \ref{fig:linear} (a).
This graph is made up of 15 nodes arranged in 3 concentric circles
(5 nodes each), where the circles are connected so as to resemble a spider web.
Node rewards are independently generated from the uniform distribution
on [0,1], edge rewards are just uniform in [-1,+1].
The two mentioned spanning trees are the longest diameter spanning tree (diameter 14)
and the shortest diameter spanning tree (diameter 5).
As we see from Figure \ref{fig:linear} (b), the latter tends to outperform the former.
However, both spanning tree-based algorithms are eventually outperformed by
NNE on this graph. This is because in later stages
NNE tends to handle smaller subgraphs, hence it only needs to compare subsets of ``good'' nodes.
\begin{figure}
\caption{
$(a)$ The spider-web topology.
$(b)$ Empirical error vs. time for the graph setup in (a) and spanning trees thereof.
Three algorithms are compared: NNE (red solid line),
the tree-based algorithm operating on a smallest diameter spanning tree
(black dashed line), and the tree-based algorithm operating on a
largest diameter spanning tree (blue dash-dot line).
The parameters are $n=15$ and $\epsilon=0$.
Average of 200 runs.
}
\label{fig:linear}
\end{figure}
\section{Extensions}\label{s:extensions}
We now sketch an extension of our framework to the case when the algorithm
receives contextual information in the form of feature vectors before sampling the edges.
This is intended to model a more practical setting where, say, a web system has preliminary
access to a set of user profile features.
This extension is reminiscent of the so-called {\em contextual bandit} learning
setting (e.g., \cite{lz07}), also called {\em bandits with covariates} (e.g., \cite{rz10}).
In such a setting, it is reasonable to assume that different users $\boldsymbol{x}_s$ have
different preferences (i.e., different associated best nodes), but also that similar
users tend to have similar preferences.
A simple learning model that accommodates the above
(and is also amenable to theoretical analysis) is to assume each node $i$ of $G$ to host a
linear function $\boldsymbol{u}_i\,:\, \boldsymbol{x} \rightarrow \boldsymbol{u}_i^{\top}\boldsymbol{x}$ where, for simplicity,
$||\boldsymbol{u}_i|| = ||\boldsymbol{x}|| = 1$ for all $i$ and $\boldsymbol{x}$.
The optimal node $i^*(\boldsymbol{x})$ corresponding to vector $\boldsymbol{x}$ is
$i^*(\boldsymbol{x}) = \arg\max_{i \in V} \boldsymbol{u}_i^{\top}\boldsymbol{x}$. Our goal is to identify,
for the given $\boldsymbol{x}$ at hand, an $\epsilon$-optimal node $j$ such that
$\boldsymbol{u}_j^{\top}\boldsymbol{x} \geq \boldsymbol{u}_{i^*}^{\top}\boldsymbol{x} -\epsilon$.
Again, we do not directly observe node rewards, but only the differential rewards
provided by edges.\footnote
{
For simplicity of presentation, we disregard the reward gap here.
}
When we operate on input $\boldsymbol{x}$ and pull edge $(i,j)$,
we receive an independent observation of random variable
$E^{ij}(\boldsymbol{x})$ such that
$\mathbb{E}[E^{ij}(\boldsymbol{x})] = \boldsymbol{u}_j^{\top}\boldsymbol{x}- \boldsymbol{u}_i^{\top}\boldsymbol{x}$.
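As a toy instance (with made-up numbers), let $d=2$, $\boldsymbol{u}_1=(1,0)$, $\boldsymbol{u}_2=(0,1)$ and $\boldsymbol{x}=(\cos\theta,\sin\theta)$ for some $\theta\in[0,\pi/2]$. Pulling edge $(1,2)$ then returns a noisy observation of $\sin\theta-\cos\theta$, so node $2$ is the better node for users with $\theta>\pi/4$ and node $1$ for users with $\theta<\pi/4$: the same graph has different optimal nodes depending on the context $\boldsymbol{x}$.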
Learning proceeds in a sequence of {\em stages} $s=1, \ldots, S$, each stage
being in turn a sequence of time steps corresponding to the edge pulls taking
place in that stage. In Stage 1 the algorithm gets input $\boldsymbol{x}_1$, is allowed
to pull (several times) the graph edges $E^{ij}(\boldsymbol{x}_1)$, and is required to
output an $\epsilon$-optimal node for $\boldsymbol{x}_1$. Let $T(\boldsymbol{x}_1)$ be the sample complexity
of this stage. In Stage 2, we retain the information gathered in Stage 1,
receive a new vector $\boldsymbol{x}_2$ (possibly close to $\boldsymbol{x}_1$) and repeat the same kind of
inference, with sample complexity $T(\boldsymbol{x}_2)$. The game continues until $S$ stages
have been completed.
For any given sequence $\boldsymbol{x}_1$, $\boldsymbol{x}_2$, $\ldots$, $\boldsymbol{x}_S$, one expects the cumulative
sample size
\(
\sum_{s=1}^S T(\boldsymbol{x}_s)
\)
to grow {\em less than linearly} in $S$. In other words, the additional effort the
algorithm makes
in the identification problem diminishes with time, as more and more users are interacting
with the system, especially when these users are similar to each other, or even occur
more than once in the sequence $\boldsymbol{x}_1$, $\boldsymbol{x}_2$, $\ldots$, $\boldsymbol{x}_S$.
In fact, we can prove stronger results of the following kind. Notice that the bound
does not depend on the number $S$ of stages, but only on the dimension of the input space.\footnote
{
A slightly different statement holds in the case when the input dimension is infinite.
This statement quantifies the cumulative sample size w.r.t.
the amount to which the vectors $\boldsymbol{x}_1$, $\boldsymbol{x}_2$, $\ldots$, $\boldsymbol{x}_S$ are close to each other.
Details are omitted due to lack of space.
}
\begin{proposition}
Under the above assumptions, if $G = (V,E)$ is a connected and undirected graph, with $n$ nodes and
diameter $D$, and $\boldsymbol{x}_1$, $\boldsymbol{x}_2$, $\ldots$, $\boldsymbol{x}_S \in R^d$ is any sequence of unit-norm feature vectors,
then with probability at least $1-\delta$ a version of
the NNE algorithm exists which outputs at each stage $s$ an $\epsilon$-optimal
node for $\boldsymbol{x}_s$, and achieves the following cumulative sample size
\[
\sum_{s=1}^S T(\boldsymbol{x}_s) = O\left( B\,\log^2 B\right),
\]
where $B = \frac{n\,D}{\left(\epsilon/\log n\right)^2}\,\log\left(\frac{n}{\delta/\log n}\right)\,d^2.$
\end{proposition}
\begin{proof}[Sketch]
The algorithm achieving this bound combines linear-regression-like estimators with NNE. In particular,
every edge $(i,j)$ of $G$ maintains a linear estimator $\hat{\boldsymbol{u}}^{ij}$ intended to approximate the difference
$\boldsymbol{u}_j-\boldsymbol{u}_i$ across stages and across sampling times within each stage.
At stage $s$ and sampling time $t$ within stage $s$,
the vector $\hat{\boldsymbol{u}}^{ij}_{s,t}$ suitably summarizes all past feature vectors $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_{s}$
observed so far, along with the corresponding edge reward observations.
By using tools from ridge regression in adversarial settings (see, e.g., \cite{dgs10}),
one can show high-probability approximation results of the form
\begin{align}
\Bigl(\bigl(\hat{\boldsymbol{u}}^{ij}_{s,t}\bigr)^{\top}\boldsymbol{x} - (\boldsymbol{u}_j-\boldsymbol{u}_i)^{\top}\boldsymbol{x}\Bigr)^2
\leq \boldsymbol{x}^{\top}A_{s,t}^{-1}\boldsymbol{x}\,\left(d\,\log \Sigma_{s,t} + \log\frac{1}{\delta}\right), \label{e:approx}
\end{align}
where $\Sigma_{s,t} = \sum_{k \leq s-1} T(\boldsymbol{x}_k) + t$, and
$A_{s,t}$ is the matrix
\[
A_{s,t} = I + \sum_{k \leq s-1} T(\boldsymbol{x}_k)\boldsymbol{x}_k\boldsymbol{x}_k^{\top} + t\,\boldsymbol{x}_s\boldsymbol{x}_s^{\top}~.
\]
In stage $s$, NNE is able to output an $\epsilon$-optimal node for input $\boldsymbol{x}_s$
as soon as the RHS of (\ref{e:approx}) is as small as $c\epsilon^2$, for a
suitable constant $c$
depending on the current graph topology NNE is operating on.
Then the key observation is that in stage $s$ the number of times we sample an edge $(i,j)$
such that the above is false cannot be larger than
\[
\frac{1}{c\epsilon^2}\,\log \frac{|A_{s,T(\boldsymbol{x}_s)}|}{|A_{s,0}|}\left(d\,\log \Sigma_{s,T(\boldsymbol{x}_s)} + \log\frac{1}{\delta}\right),
\]
where $|\cdot|$ denotes the determinant of its matrix argument.
This follows from standard inequalities of the form
$\sum_{t=1}^{T(\boldsymbol{x}_s)} \boldsymbol{x}_s^{\top}A_{s,t}^{-1}\boldsymbol{x}_s \leq \log \frac{|A_{s,T(\boldsymbol{x}_s)}|}{|A_{s,0}|}$.\qed
\end{proof}
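For concreteness, the following Python/NumPy sketch shows the kind of per-edge ridge estimator the argument above has in mind. The class and method names, the handling of the $\log$ term and the constants are illustrative assumptions, not a verbatim restatement of the analysis.
\begin{verbatim}
import numpy as np

class EdgeRidgeEstimator:
    """Per-edge ridge estimator of the difference u_j - u_i, built from
    contextual observations (x, y) with E[y | x] = (u_j - u_i)^T x."""

    def __init__(self, d, delta=0.05):
        self.d = d
        self.delta = delta
        self.A = np.eye(d)      # A_{s,t} = I + sum of x x^T over all pulls
        self.b = np.zeros(d)    # sum of y * x over all pulls
        self.count = 0          # Sigma_{s,t}: total number of pulls so far

    def update(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x
        self.count += 1

    def estimate(self, x):
        # Estimated reward difference (u_j - u_i)^T x for context x.
        u_hat = np.linalg.solve(self.A, self.b)
        return float(u_hat @ x)

    def width(self, x):
        # Confidence width mirroring the right-hand side of the bound:
        # sqrt( x^T A^{-1} x * (d log Sigma + log(1/delta)) ).
        q = float(x @ np.linalg.solve(self.A, x))
        return float(np.sqrt(q * (self.d * np.log(max(self.count, 2))
                                  + np.log(1.0 / self.delta))))
\end{verbatim}
An NNE-style routine would keep pulling edge $(i,j)$ on the current context until the returned width drops below the accuracy it needs, which mirrors the stopping rule used in the proof sketch above.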
\section{Discussion}\label{s:discussion}
This paper falls in the research thread of
analyzing online decision problems where
the information that is obtained is comparative between arms. We analyzed
a simple setup where the structure of comparisons is provided by a given graph
which, unlike previous works on this subject \cite{yue2011,yj11}, led
us to focus on the notion of
finding an $\epsilon$-optimal arm with high probability. We then described
an extension to the important contextual setup.
There are several issues that call for further research that we outline below.
First, we only addressed the exploratory bandit problem. It would be interesting to consider
the regret minimization version of the problem. While naively one can think of it as a problem
with an arm per edge of the graph, this may not be a very
effective model because the number of arms may go as $n^2$ but the number of parameters grows
like $n$. On top of this, defining a meaningful notion of regret may not be trivial
(see the discussion in the introductory section).
Second, we only considered graphs as opposed to hypergraphs. Considering comparisons of more
than two nodes raises interesting modeling as well as computational issues.
Third, we assumed that all samples are equivalent in the sense that all the pairs we can
compare have the same cost. This is not a realistic assumption in many applications.
An approach akin to budgeted learning \cite{Madani04thebudgeted} would be interesting here.
Fourth, we focused on upper bounds and constructive algorithms. Obtaining lower bounds
that depend on the network topology would be interesting. The upper bounds we have provided
are certainly loose for the case of a general network.
Furthermore, more refined upper bounds are likely to exist which take into account the distance
on the graph between the good nodes (e.g., between the best and the second best ones).
In any event, the algorithms we developed for the network case are certainly not optimal.
There is room for improvement by reusing information better and by adaptively selecting which
portions of the network to focus on. This is especially interesting
under smoothness assumptions on the expected rewards. Relevant references in the MAB setting
to start off with include \cite{a07,ksu08,bmss08}.
\appendix
\section{Appendix}
In this appendix we prove Lemma \ref{lem:Hoeffding_maximal}. We begin
by stating a general maximal bound for sums of independent zero-mean random
variables in terms of their rate functions (Theorem \ref{thm:LDT}). Specializing it to bounded
variables yields Lemma \ref{lem:Hoeffding_maximal}, and specializing it to Gaussian variables yields Lemma \ref{lem:bernstein_like}.
\begin{theorem}\label{thm:LDT}
Let $X_{1},\ldots,X_{N}$ be independent random variables
where $\mathbb{E}\left[X_{i}\right]=0$ for $1\le i\le N$. For each
$1\le i\le N$ define the logarithmic generating function
$\rho_{i}\left(s\right)=\log\left(\mathbb{E}\left[e^{sX_{i}}\right]\right)$,
and assume that they are finite in a neighborhood of $0$.
Let $\epsilon_{1},\ldots,\epsilon_{N}$
be a sequence of positive numbers, and define the $i$-th rate function $f_i$
as $f_{i}(\epsilon_{i})=\sup_{s}(\epsilon_{i}s-\rho_{i}(s))$.
Let $\bar{X}_{i}=\sum_{j=1}^{i}X_{j}$ and $\bar{\epsilon}_{i}=\sum_{j=1}^{i}\epsilon_{j}$.
Then, for $1\le i\le N$ we have $f_{i}\left(\epsilon_{i}\right)>0$
and
\begin{align}
P\Bigl(\max_{1\le i\le N}\bar{X}_{i}\ge\bar{\epsilon}_{N}\Bigl)
& \le \exp\Bigl(-\sum_{i=1}^{N}f_{i}(\epsilon_{i})\Bigl).\label{eq:thrm1}
\end{align}
\end{theorem}
\begin{proof}
Define $S_{0}=0$ and
\begin{equation}
S_{k+1}=\begin{cases}
S_{k}+X_{k+1} & \textrm{ if }S_{k}<\sum_{i=1}^{N}\epsilon_{i},\\
S_{k} & \textrm{ if }S_{k}\ge\sum_{i=1}^{N}\epsilon_{i},\end{cases}\label{eq:s_k_def}
\end{equation}
i.e., $S_{k}$ is identical to $\bar{X}_{k}$ until $S_{k}$ exceeds
$\sum_{i=1}^{N}\epsilon_{i}$, and afterward it maintains the highest
value. Note that the following two events are equal:
\begin{equation}
\Bigl\{ \max_{1\le i\le N}\bar{X}_{i}>\sum_{i=1}^{N}\epsilon_{i}\Bigl\}
=\Bigl\{ S_{N}\ge\sum_{i=1}^{N}\epsilon_{i}\Bigl\} .\label{eq:events_equal}
\end{equation}
For the rest of the proof we concentrate on the r.h.s. of \eqref{eq:events_equal}.
Notice that
$
\rho_{i}(0)=\log\left(\mathbb{E}\left[e^{0\cdot X_{i}}\right]\right)
=\log\left(1\right)=0.
$
Moreover, using Jensen's inequality and the fact that the variables
have zero mean, we have for all $s\ne0$
\begin{equation}
\mathbb{E}\left[\exp\left(sX_{i}\right)\right]\ge\exp\left(\mathbb{E}\left[sX_{i}\right]\right)=e^{0}=1.\label{eq:Jensenineq}\end{equation}
Taking the $\log$ of \eqref{eq:Jensenineq} and using $\rho_{i}(0) = 0$
yields
\(
\rho_{i}\left(s\right)\ge 0.
\)
Let us fix some $s$ where $\rho_{i}\left(s\right)$ is finite (such
$s$ exists by the theorem's assumptions) and define the following
stopping time
\[
L=\begin{cases}
N & \textrm{ if }\bar{X}_{n}<\sum_{i=1}^{n}\epsilon_{i}\quad\textrm{for }n=1,\ldots,N,\\
\min\left\{ n:\bar{X}_{n}\ge\sum_{i=1}^{n}\epsilon_{i}\right\} & \textrm{ otherwise}.\end{cases}
\]
Define $Y_{k}$ to be
\begin{equation}
Y_{k}=\exp\Biggl(sS_{k}-\sum_{i=1}^{\min\left(k,L\right)}\rho_{i}\left(s\right)\Biggl).\label{eq:Y_k_def}
\end{equation}
We claim that $Y_{k}$ is a martingale. Trivially, if $k\ge L$ then
$\min\left(L,k\right)=L$, by \eqref{eq:s_k_def} we have $S_{k+1}=S_{k}$,
and by \eqref{eq:Y_k_def} $Y_{k+1}=Y_{k}$. Let us define the $\sigma$-algebra
$\mathcal{F}_{k}\triangleq\sigma\left(X_{1},\ldots,X_{k}\right)$.
If $k<N$ then
\begin{align*}
&\mathbb{E}[Y_{k+1}|\mathcal{F}_{k}]\\
& = \mathbb{E}\left[\left.\exp\left(sS_{k+1}-\sum_{i=1}^{k+1}\rho_{i}\left(s\right)\right)\right|\mathcal{F}_{k}\right]\\
& = \mathbb{E}\left[\exp\left(sS_{k}-\sum_{i=1}^{k}\rho_{i}\left(s\right)\right)\right.
\cdot\exp\left(sX_{k+1}-\rho_{k+1}\left(s\right)\right)|\mathcal{F}_{k}\Biggr]\\
& = \exp\left(sS_{k}-\sum_{i=1}^{k}\rho_{i}\left(s\right)\right)
\cdot\mathbb{E}\left[\exp\left(sX_{k+1}-\rho_{k+1}\left(s\right)\right)|\mathcal{F}_{k}\right]\\
& = Y_{k}\mathbb{E}\left[\exp\left(sX_{k+1}-\log\left(\mathbb{E}\left[e^{sX_{k+1}}\right]\right)\right)\right]
= Y_{k}\mathbb{E}\left[\exp\left(sX_{k+1}\right)\left(\mathbb{E}\left[e^{sX_{k+1}}\right]\right)^{-1}\right]
= Y_{k},
\end{align*}
and $\left(Y_{k}\right)_{k=1}^{N}$ is a martingale. Next, we claim
that $\mathbb{E}\left[Y_{1}\right]=1$~:
\[
\mathbb{E}\left[Y_{1}\right]
= \mathbb{E}\left[\exp\left(sX_{1}-\sum_{i=1}^{\min\left(1,L\right)}\rho_{i}\left(s\right)\right)\right]
= \mathbb{E}\left[\exp\left(sX_{1}\right)\exp\left(-\log\left(\mathbb{E}\left[e^{sX_{1}}\right]\right)\right)\right]
= 1,
\]
therefore, $\mathbb{E}\left[Y_{N}\right]=\mathbb{E}\left[Y_{N-1}\right]=\ldots=\mathbb{E}\left[Y_{1}\right]=1$.
Now,
\[
\mathbb{E}\left[\exp\left(sS_{N}-\sum_{i=1}^{N}\rho_{i}\left(s\right)\right)\right]
\le \mathbb{E}\left[\exp\left(sS_{N}-\sum_{i=1}^{\min\left(L,N\right)}\rho_{i}\left(s\right)\right)\right]
= \mathbb{E}\left[Y_{N}\right]
= 1,
\]
which yields
\begin{equation}
\mathbb{E}\left[\exp\left(sS_{N}\right)\right]\le\exp\left(\sum_{i=1}^{N}\rho_{i}\left(s\right)\right).\label{eq:x_less_rho}
\end{equation}
Next, using Markov inequality and \eqref{eq:x_less_rho} we can write
\begin{align}
P\Bigl(S_{N}\ge\sum_{i=1}^{N}\epsilon_{i}\Bigl)
&= P\Biggl(\exp\left(sS_{N}\right)\ge\exp\Bigl(s\sum_{i=1}^{N}\epsilon_{i}\Bigl)\Biggl)
\le \frac{\mathbb{E}\Bigl[\exp\left(sS_{N}\right)\Bigl]}{\exp\Bigl(s\sum_{i=1}^{N}\epsilon_{i}\Bigl)}\notag\\
& \le \exp\Bigl(-\Bigl(s\sum_{i=1}^{N}\epsilon_{i}-\sum_{i=1}^{N}\rho_{i}(s)\Bigl)\Bigl)
= \exp\Bigl(-\sum_{i=1}^{N}\Bigl(s\epsilon_{i}-\rho_{i}(s)\Bigl)\Bigl)~.\notag
\end{align}
Because this is true
for every $s$ for which the $\rho_{i}\left(s\right)$ are finite, we can
take the infimum of the r.h.s. over $s$, which yields \eqref{eq:thrm1}.
\qed
\end{proof}
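For completeness, we indicate how Lemma \ref{lem:Hoeffding_maximal} follows from Theorem \ref{thm:LDT}. If $\mathbb{E}[X_i]=0$ and $a_i\le X_i\le b_i$, Hoeffding's lemma gives $\rho_i(s)\le s^{2}(b_i-a_i)^{2}/8$, hence
\[
f_i(\epsilon_i)\ \ge\ \sup_{s}\left(\epsilon_i s-\frac{s^{2}(b_i-a_i)^{2}}{8}\right)\ =\ \frac{2\epsilon_i^{2}}{(b_i-a_i)^{2}}.
\]
Choosing $\epsilon_i=\epsilon\,(b_i-a_i)^{2}/\sum_{j=1}^{N}(b_j-a_j)^{2}$, so that $\bar{\epsilon}_{N}=\epsilon$, yields
\[
\sum_{i=1}^{N}f_i(\epsilon_i)\ \ge\ \frac{2\epsilon^{2}}{\sum_{i=1}^{N}(b_i-a_i)^{2}},
\]
and Theorem \ref{thm:LDT} then gives a bound that is in fact slightly stronger than the one stated in Lemma \ref{lem:Hoeffding_maximal}.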
The following lemma characterizes the rate function of a Gaussian random variable.
\begin{lemma}\label{lem:rate_func}
Let $X$ be a Gaussian random variable with
variance $\sigma^{2}$ and let $\epsilon>0$. Then, \[
f\left(\epsilon\right)\triangleq\sup_{s}\left(\epsilon s-\log\left(\mathbb{E}\left[e^{sX}\right]\right)\right)
=\frac{\epsilon^{2}}{2\sigma^{2}}.\]
\end{lemma}
\begin{proof}
We have
\[
\begin{aligned}
& \sup_{s}\left\{ \epsilon s-\log\left(\mathbb{E}\left[e^{sX}\right]\right)\right\} \\
& =\sup_{s}\left\{ \epsilon s-\log\left(e^{\frac{1}{2}\sigma^{2}s^{2}}\right)\right\} \\
& =\sup_{s}\left\{ \epsilon s-\frac{1}{2}\sigma^{2}s^{2}\right\} =\frac{\epsilon^{2}}{2\sigma^{2}}.
\end{aligned}
\]
\qed
\end{proof}
Next, we prove Lemma \ref{lem:bernstein_like}.
\begin{proof}(Lemma \ref{lem:bernstein_like})
Directly follows from Theorem \ref{thm:LDT}
after observing that
the rate function of a Gaussian variable with variance $\sigma_i^2$ satisfies
\(
f_{i}\left(\epsilon\right)\ge\frac{\epsilon^{2}}{2\sigma_{i}^{2}}.
\)
\qed
\end{proof}
\end{document}
\begin{document}
\setcounter{page}{1}
\begin{bottomstuff}
A preliminary version of this article appeared in {\it Proceedings of the
35$^{\text{th}}$ International Colloquium on Automata, Languages, and
Programming (ICALP), Reykjavik, Iceland, July 7-11, 2008.}\newline
Author's addresses: Bernhard Haeupler, CSAIL, Massachusetts Institute of
Technology, Cambridge, MA 02139, United States, \email{[email protected]}; work
done while the author was a visiting student at Princeton University. Telikepalli
Kavitha, Tata Institute of Fundamental Research, Mumbai, India,
\email{[email protected]}; work done while the author was at Indian
Institute of Science. Rogers Mathew, Indian Institute of Science, Bangalore,
India, \email{[email protected]}. Siddhartha Sen, Department of Computer Science, Princeton University, Princeton, NJ 08540, United States, \email{[email protected]}. Robert E. Tarjan, Department of Computer
Science, Princeton University, Princeton, NJ 08540, United States and HP
Laboratories, Palo Alto, CA 94304, United States, \email{[email protected]}.\newline
Research at Princeton University partially supported by NSF grants CCF-0830676
and CCF-0832797. The information contained herein does not necessarily reflect
the opinion or policy of the federal government and no official endorsement
should be inferred.
\end{bottomstuff}
\title{Incremental Cycle Detection, Topological Ordering, and Strong Component Maintenance}
\section{Introduction} \label{sec:intro}
In this paper we consider three related problems on dynamic directed graphs: cycle
detection, maintaining a topological order, and maintaining strong components. We
begin with a few standard definitions. A {\em topological order} of a directed graph
is a total order ``$<$'' of the vertices such that for every arc $(v, w)$, $v < w$.
A directed graph is {\em strongly connected} if every vertex is reachable from every
other. The {\em strongly connected components} of a directed graph are its maximal
strongly connected subgraphs. These components partition the vertices
\cite{Harary1965}. Given a directed graph $G$, its {\em graph of strong components}
is the graph whose vertices are the strong components of $G$ and whose arcs are
all pairs $(X, Y)$ with $X \ne Y$ such that there is an arc in the original graph from a vertex
in $X$ to a vertex in $Y$. The graph of strong components is acyclic
\cite{Harary1965}.
A directed graph has a topological order (and in general more than one) if and only
if it is acyclic. The first implication is equivalent to the statement that every
partial order can be embedded in a total order, which, as Knuth~\cite{Knuth1973}
noted, was proved by Szpilrajn~\cite{Szpilrajn1930} in 1930, for infinite as well as
finite sets. Szpilrajn remarked that this result was already known to at least
Banach, Kuratowski, and Tarski, though none of them published a proof.
Given a fixed $n$-vertex, $m$-arc graph, one can find either a cycle or a topological
order in $\mathrm{O}(n + m)$ time by either of two methods: repeated deletion of sources
(vertices of in-degree zero)~\cite{Knuth1973,Knuth1974} or depth-first search
\cite{Tarjan1972}. The former method (but not the latter) extends to the enumeration
of all possible topological orders~\cite{Knuth1974}. One can find strong components,
and a topological order of the strong components in the graph of strong components,
in $\mathrm{O}(n + m)$ time using depth-first search, either one-way
\cite{Cheriyan1996,Gabow2000,Tarjan1972} or two-way~\cite{Sharir1981,Aho1983}.
In some situations the graph is not fixed but changes over time. An {\em
incremental} problem is one in which vertices and arcs can be added; a {\em
decremental} problem is one in which vertices and arcs can be deleted; a {\em
(fully) dynamic} problem is one in which vertices and arcs can be added or
deleted. Incremental cycle detection or topological ordering occurs in circuit evaluation
\cite{Alpern1990}, pointer analysis~\cite{Pearce2003}, management of compilation
dependencies~\cite{Marchetti1993,Omohundro1992}, and deadlock detection
\cite{Belik1990}. In some applications cycles are not fatal; strong components, and
possibly a topological order of them, must be maintained. An example is
speeding up pointer analysis by finding cyclic relationships~\cite{Pearce2003b}.
We focus on incremental problems. We assume that the vertex set is fixed and
given initially, and that the arc set is initially empty. We denote by $n$
the number of vertices and by $m$ the number of arcs added. For simplicity in stating time
bounds we assume that $m = \mathrm{\Omega}(n)$. We do not allow multiple arcs, so $m \le
{n \choose 2}$. One can easily extend our algorithms to support vertex additions in
$\mathrm{O}(1)$ time per vertex addition. (A new vertex has no incident arcs.) Our
topological ordering algorithms, as well as all others in the literature, can handle
arc deletions as well as insertions, since an arc deletion preserves topological
order, but our time bounds are no longer valid. Maintaining strong components as
arcs are deleted, or inserted and deleted, is a harder problem, as is maintaining the
transitive closure of a directed graph under arc insertions and/or deletions. These
problems are quite interesting and much is known, but they are beyond the scope of
this paper. We refer the interested reader to Roditty and Zwick~\cite{RodittyZ2008}
and the references given there for a thorough discussion of results on these
problems.
Our goal is to develop algorithms for incremental cycle detection and topological
ordering that are significantly more efficient than running an algorithm for a static
graph from scratch after each arc addition. In Section~\ref{sec:lim-search} we
discuss the use of graph search to solve these problems, work begun by Shmueli
\cite{Shmueli1983} and realized more fully by Marchetti-Spaccamela et
al.~\cite{Marchetti1996}, whose algorithm runs in $\mathrm{O}(nm)$ time. In
Section~\ref{sec:2way-search} we develop a two-way search method that we call
{\em compatible search}. Compatible search is essentially a generalization of
two-way ordered search, which was first proposed by Alpern et
al.~\cite{Alpern1990}. They gave a time bound for their algorithm in an
incremental model of computation, but their analysis does not give a good bound
in terms of $n$ and $m$. They also considered batched arc additions. Katriel
and Bodlaender~\cite{Katriel2006} gave a variant of two-way ordered search with
a time bound of $\mathrm{O}(\min\{m^{3/2}\log n, m^{3/2} + n^2\log n\})$. Liu and
Chao~\cite{Liu2007} improved the bound of this variant to $\mathrm{\Theta}(m^{3/2} +
mn^{1/2}\log n)$, and Kavitha and Mathew~\cite{Kavitha2007} gave another variant with a bound of
$\mathrm{O}(m^{3/2} + nm^{1/2}\log n)$.
A two-way search need not be ordered to solve the topological ordering problem. We
apply this insight in Section~\ref{sec:soft-search} to develop a version of
compatible search that we call {\em soft-threshold search}. This method uses
either median-finding (which can be approximate) or random sampling in place of
the heaps (priority queues) needed in ordered search, resulting in a time bound
of $\mathrm{O}(m^{3/2})$. We also show that any algorithm among a natural class of
algorithms takes $\mathrm{\Omega}(nm^{1/2})$ time in the worst case. Thus for sparse
graphs ($m/n = \mathrm{O}(1)$) our bound is best possible in this class of algorithms.
The algorithms discussed in Sections~\ref{sec:2way-search} and~\ref{sec:soft-search}
have two drawbacks. First, they require a sophisticated data structure, namely a
{\em dynamic ordered list}~\cite{Bender2002,Dietz1987}, to maintain the topological
order. One can address this drawback by maintaining the topological order as an
explicit numbering of the vertices from $1$ through $n$. Following Katriel
\cite{Katriel2004}, we call an algorithm that does this a {\em topological sorting}
algorithm. The one-way search algorithm of Marchetti-Spaccamela et
al.~\cite{Marchetti1996} is such an algorithm. Pearce and Kelly~\cite{Pearce2006}
gave a two-way-search topological sorting algorithm. They claimed it is fast in
practice, although they did not give a good time bound in terms of $n$ and $m$.
Katriel~\cite{Katriel2004} showed that any topological sorting algorithm that has a
natural {\em locality} property takes $\mathrm{\Omega}(n^2)$ time in the worst case even if
$m/n = \mathrm{\Theta}(1)$.
The second drawback of the algorithms discussed in
Sections~\ref{sec:2way-search} and~\ref{sec:soft-search} is that using graph
search to maintain a topological order becomes less and less efficient as the
graph becomes denser. Ajwani et al.~\cite{Ajwani2006} addressed this drawback
by giving a topological sorting algorithm with a running time of
$\mathrm{O}(n^{11/4})$. In Section~\ref{sec:top-search} we simplify and improve this
algorithm. Our algorithm searches the topological order instead of the graph.
We show that it runs in $\mathrm{O}(n^{5/2})$ time. This bound may be far from
tight. We obtain a lower bound of $\mathrm{\Omega}(n2^{\sqrt{2\lg n}})$ on
the running time of the algorithm by relating its efficiency to a generalization
of the $k$-levels problem of combinatorial geometry.
In Section~\ref{sec:strong} we extend the algorithms of Sections
\ref{sec:soft-search} and~\ref{sec:top-search} to the incremental maintenance of
strong components. We conclude in Section~\ref{sec:remarks} with some remarks and
open problems.
This paper is an improvement and extension of a conference paper
\cite{Haeupler2008b}, which itself is a combination and condensation of two on-line
reports~\cite{Haeupler2008,Kavitha2007}. Our main improvement is a simpler analysis
of the algorithm presented in Section~\ref{sec:top-search} and originally in
\cite{Kavitha2007}. At about the same time as~\cite{Kavitha2007} appeared and also
building on the work of Ajwani et al., Liu and Chao~\cite{Liu2008} independently
obtained a topological sorting algorithm that runs in $\mathrm{O}(n^{5/2}\log^2 n)$ or
$\mathrm{O}(n^{5/2}\log n)$ time, depending on the details of the implementation. More
recently, Bender, Fineman, and Gilbert~\cite{Bender2009} have presented a topological
ordering algorithm that uses completely different techniques and runs in
$\mathrm{\Theta}(n^2\log n)$ time.
\section{One-Way Search} \label{sec:lim-search}
\SetKwFunction{limitedsearch}{Limited-Search}
The simplest of the three problems we study is that of detecting a cycle when an arc
addition creates one. All the known efficient algorithms for this problem, including
ours, rely on the maintenance of a topological order. When an arc $(v, w)$ is added,
we can test for a cycle by doing a search forward from $w$ until either reaching $v$
(there is a cycle) or visiting all vertices reachable from $w$ without finding $v$.
This method takes $\mathrm{\Theta}(m)$ time per arc addition in the worst case, for a total of
$\mathrm{\Theta}(m^2)$ time. By maintaining a topological order, we can improve this method.
When a new arc $(v, w)$ is added, test if $v < w$. If so, the order is still
topological, and the graph is acyclic. If not, search for $v$ from $w$. If the
search finishes without finding $v$, we need to restore topological order, since (at
least) $v$ and $w$ are out of order. We can make the order topological by moving all
the vertices visited by the search to positions after all the other vertices, and
ordering the visited vertices among themselves topologically.
We need a way to represent the topological order. A simple numbering scheme suffices.
Initially, number the vertices arbitrarily from $1$ through $n$ and initialize a
global counter $c$ to $n$. When a search occurs, renumber the vertices visited by
the search consecutively from $c + 1$, in a topological order with respect to the
subgraph induced by the set of visited vertices, and increment $c$ to be the new
maximum vertex number. One way to order the visited vertices is to make the search
depth-first and order the vertices in reverse postorder \cite{Tarjan1972}. With this
scheme, all vertex numbers are positive integers no greater than $nm$.
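To make the numbering scheme concrete, the following is a minimal Python sketch (our illustration, not code from the paper): \texttt{graph} maps each vertex to its set of out-neighbours, \texttt{num} holds the current vertex numbers, and \texttt{counter} is the global counter $c$.
\begin{verbatim}
def add_arc(graph, num, counter, v, w):
    """Add arc (v, w); return (cycle_found, new_counter)."""
    graph[v].add(w)
    if num[v] < num[w]:                  # order still consistent: nothing to do
        return False, counter
    postorder, visited = [], set()
    def dfs(x):                          # search forward from w for v
        visited.add(x)
        for y in graph[x]:
            if y == v or (y not in visited and dfs(y)):
                return True              # v is reachable: (v, w) closes a cycle
        postorder.append(x)
        return False
    if dfs(w):
        return True, counter
    for x in reversed(postorder):        # reverse postorder is topological
        counter += 1                     # renumber consecutively from c + 1
        num[x] = counter
    return False, counter
\end{verbatim}
Starting from an arbitrary numbering $1$ through $n$ and $c = n$, repeated calls keep the numbers consistent with all arcs added so far.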
Shmueli \cite{Shmueli1983} proposed this method as a heuristic for cycle detection,
although he used a more-complicated two-part numbering scheme and he did not mention
that the method maintains a topological order. In the worst case, every new arc can
invalidate the current topological order and trigger a search that visits a large
part of the graph, so the method does not improve the $\mathrm{O}(m^2)$ worst-case bound for
cycle detection. But it is the starting point for asymptotic improvement.
To do better we use the topological order to limit the searching. The search for $v$
from $w$ need not visit vertices larger than $v$ in the current order, since no such
vertex, nor any vertex reachable from such a vertex, can be $v$. Here is the
resulting method in detail. When a new arc $(v, w)$ has $v > w$, search for $v$
from $w$ by calling \limitedsearch{$v$,$w$}, where the function \limitedsearch
is defined in Figure~\ref{alg:lim-search}. In this and later functions and
procedures, a minus sign denotes set subtraction.
\begin{figure}
\caption{Implementation of limited search.}
\label{alg:lim-search}
\end{figure}
In \limitedsearch, $F$ is the set of vertices visited by the search, and $A$
is the set of arcs to be traversed by the search. An iteration of
the while loop that deletes an arc $(x, y)$ from $A$ does a {\em traversal} of
$(x, y)$. The choice of which arc in $A$ to traverse is arbitrary. If the
addition of $(v, w)$ creates a cycle, \limitedsearch{$v$,$w$} returns an arc
$(x, y) \ne (v, w)$ on such a cycle; otherwise, it returns null. If it returns
null, restore topological order by moving all vertices in $F$ just after $v$
(and before the first vertex following $v$, if any). Order the vertices within
$F$ topologically, for example by making the search depth-first and ordering
the vertices in $F$ in reverse postorder with respect to the search.
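Since the pseudocode of Figure~\ref{alg:lim-search} is not reproduced here, the following Python sketch shows one possible realization of limited search, consistent with the description above but not taken from the paper; \texttt{num} gives the current topological numbering.
\begin{verbatim}
def limited_search(graph, num, v, w):
    """Return an arc (x, y) != (v, w) on a cycle through (v, w), or None."""
    F = {w}                                # visited vertices
    A = [(w, y) for y in graph[w]]         # arcs still to be traversed
    while A:
        x, y = A.pop()                     # traverse an arbitrary arc of A
        if y == v:
            return (x, y)                  # adding (v, w) creates a cycle
        if y not in F and num[y] < num[v]: # never visit vertices larger than v
            F.add(y)
            A.extend((y, z) for z in graph[y])
    return None
\end{verbatim}
If the call returns null, the vertices in \texttt{F} are moved just after $v$ and ordered among themselves, for example in reverse postorder of a depth-first version of this search.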
Figure~\ref{fig:lim-search} shows an example of limited search and reordering.
\begin{figure}
\caption{Limited search followed by vertex reordering. Initial topological
order is left-to-right. Arcs are numbered in order of traversal; the search is
depth-first. Visited vertices are $w$, $c$, $f$, $h$, $i$, $j$. They are
numbered in reverse postorder with respect to the search and reordered
correspondingly.}
\label{fig:lim-search}
\end{figure}
Before discussing how to implement the reordering, we bound the total time for
the limited searches. If we represent $F$ and $A$ as linked lists and mark
vertices as they are added to $F$, the time for a search is $\mathrm{O}(1)$ plus
$\mathrm{O}(1)$ per arc traversal. Only the last search, which does at most $m$ arc
traversals, can report a cycle. To bound the total number of arc traversals,
we introduce the notion of {\em relatedness}. We define a vertex and an arc to
be {\em related} if some path contains both the vertex and the arc, and {\em
unrelated} otherwise. This definition does not depend on whether the vertex or
the arc occurs first on the path; they are related in either case. If the
graph is acyclic, only one order is possible, but in a cyclic graph, a vertex
can occur before an arc on one path and after the arc on a different path. If
either case occurs, or both, the vertex and the arc are related.
\begin{lemma}
\label{lem:lim-search-rel}
Suppose the addition of $(v, w)$ does not create a cycle but does trigger a search.
Let $(x, y)$ be an arc traversed during the (unsuccessful) search for $v$ from $w$. Then $v$
and $(x, y)$ are unrelated before the addition but related after it.
\end{lemma}
\begin{proof}
Let $<$ be the topological order before the addition of $(v, w)$. Since the (unsuccessful) search visits only vertices smaller than $v$ and traverses only arcs out of visited vertices, $x < v$. Hence for $v$ and $(x, y)$ to be related before the addition, there must be a path containing
$(x, y)$ followed by $v$. But then there is a path from $x$ to $v$. Since there is
a path from $w$ to $x$, the addition of $(v, w)$ creates a cycle, a contradiction.
Thus $v$ and $(x, y)$ are unrelated before the addition. After the addition there is a
path from $v$ through $(v, w)$ to $(x, y)$, so $v$ and $(x, y)$ are related. \qed
\end{proof}
The number of related vertex-arc pairs is at most $nm$, so the number of arc
traversals during all limited searches, including the last one, is at most $nm + m$.
Thus the total search time is $\mathrm{O}(nm)$.
Shmueli \cite{Shmueli1983} suggested this method but did not analyze it. Nor did he
give an efficient way to do the reordering; he merely hinted that one could modify
his numbering scheme to accomplish this. According to Shmueli, ``This may force us
to use real numbers (not a major problem).'' In fact, it {\em is} a major problem,
because the precision required may be unrealistically high.
To do the reordering efficiently, we need a representation more complicated than a
simple numbering scheme. We use instead a solution to the {\em dynamic ordered list}
problem: represent a list of distinct elements so that order queries (does $x$ occur
before $y$ in the list?), deletions, and insertions (insert a given non-list element
just before, or just after, a given list element) are fast. Solving this problem is
tantamount to addressing the precision question that Shmueli overlooked. Dietz and
Sleator \cite{Dietz1987} gave two related solutions. Each takes $\mathrm{O}(1)$ time
worst-case for an order query or a deletion. For an insertion, the first takes
$\mathrm{O}(1)$ amortized time; the second, $\mathrm{O}(1)$ time worst-case. Bender et
al. \cite{Bender2002} simplified the Dietz-Sleator methods. With any of these
methods, the time for reordering after an arc addition is bounded by a constant
factor times the search time, so $m$ arc additions take $\mathrm{O}(nm)$ time.
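For concreteness, the operations such a structure must support can be sketched as follows. This naive list-based stand-in has the right interface but takes linear rather than constant time per operation; it is only an illustration, not the Dietz-Sleator or Bender et al.\ structure.
\begin{verbatim}
class OrderedList:
    """Naive dynamic ordered list: correct interface, linear-time updates."""
    def __init__(self, items):
        self.items = list(items)

    def before(self, x, y):                 # does x occur before y?
        return self.items.index(x) < self.items.index(y)

    def delete(self, x):
        self.items.remove(x)

    def insert_after(self, x, anchor):      # insert non-list element x
        self.items.insert(self.items.index(anchor) + 1, x)

    def insert_before(self, x, anchor):
        self.items.insert(self.items.index(anchor), x)
\end{verbatim}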
There is a simpler way to do the reordering, but it requires rearranging all {\em
affected vertices}, those between $w$ and $v$ in the order (inclusive): move all
vertices visited by the search after all other affected vertices, preserving the
original order within each of these two sets. Figure~\ref{fig:lim-search-alt}
illustrates this alternative reordering method. We call a topological ordering
algorithm {\em local} if it reorders only affected vertices. Except for
Shmueli's unlimited search algorithm and the recent algorithm of Bender et
al.~\cite{Bender2009}, all the algorithms we discuss are local.
\begin{figure}
\caption{Alternative method of restoring topological order after a limited
search of the graph in Figure~\ref{fig:lim-search}.}
\label{fig:lim-search-alt}
\end{figure}
We can do this reordering efficiently even if the topological order is
explicitly represented by a one-to-one mapping between the vertices and the
integers from $1$ through $n$. This makes the method a topological sorting
algorithm as defined in Section \ref{sec:intro}. This method was proposed and
analyzed by Marchetti-Spaccamela et al. \cite{Marchetti1996}. The reordering
time is $\mathrm{O}(n)$ per arc addition; the total time for $m$ arc additions is
$\mathrm{O}(nm)$.
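A sketch of this explicit-numbering reordering (our illustration, with 0-based positions): \texttt{order} is the list of vertices in topological order, \texttt{pos} its inverse, and \texttt{visited} is the set of vertices visited by the limited search.
\begin{verbatim}
def local_reorder(order, pos, visited, v, w):
    """Move the visited vertices after all other affected vertices,
    preserving the original order within each of the two sets."""
    lo, hi = pos[w], pos[v]
    affected = order[lo:hi + 1]             # vertices between w and v inclusive
    kept  = [x for x in affected if x not in visited]
    moved = [x for x in affected if x in visited]
    order[lo:hi + 1] = kept + moved
    for i in range(lo, hi + 1):             # refresh the explicit numbering
        pos[order[i]] = i
\end{verbatim}
The scan over the affected interval makes the cost $\mathrm{O}(n)$ per arc addition, as stated above.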
\section{Two-Way Search} \label{sec:2way-search}
\SetKwFunction{compatiblesearch}{Compatible-Search}
\SetKwFunction{vertexguidedsearch}{Vertex-Guided-Search}
\SetKwFunction{searchstep}{Search-Step}
We can further improve cycle detection and topological ordering by making the search
two-way instead of one-way: when a new arc $(v, w)$ has $v > w$, concurrently search
forward from $w$ and backward from $v$ until some vertex is reached from both
directions (there is a cycle), or enough arcs are traversed to guarantee that the
graph remains acyclic; if so, rearrange the visited vertices to restore topological
order.
Each step of the two-way search traverses one arc $(u, x)$ forward and one arc $(y,
z)$ backward. To make the search efficient, we make sure that these arcs are {\em
compatible}, by which we mean that $u < z$ (in the topological order before $(v, w)$
is added). Here is the resulting method in detail. For ease of notation we adopt
the convention that the minimum of an empty set is bigger than any other value and
the maximum of an empty set is smaller than any other value. Every vertex is in one
of three states: {\em unvisited}, {\em forward} (first visited by the forward
search), or {\em backward} (first visited by the backward search). Before any arcs
are added, all vertices are unvisited. The search maintains the set $F$ of
forward vertices and the set $B$ of backward vertices: if the search does not
detect a cycle, certain vertices in $B \cup F$ must be reordered to restore
topological order. The search also maintains the set $A_F$ of arcs to be
traversed forward and the set $A_B$ of arcs to be traversed backward. If the
search detects a cycle, it returns an arc other than $(v,w)$ on the cycle; if
there is no cycle, the search returns null.
When a new arc $(v, w)$ has $v > w$, search forward from $w$ and backward from
$v$ by calling \compatiblesearch{$v$,$w$}, where the function \compatiblesearch
is defined in Figure~\ref{alg:comp-search}.
\begin{figure}
\caption{Implementation of compatible search.}
\label{alg:comp-search}
\end{figure}
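Because the pseudocode of Figure~\ref{alg:comp-search} is not reproduced here, the following Python sketch is a reconstruction of compatible search from the description above, not the paper's own code. \texttt{gin} maps a vertex to its in-neighbours and \texttt{num} to its topological position; the compatible pair is found by linear scans purely for simplicity, whereas the algorithm leaves this choice arbitrary.
\begin{verbatim}
def compatible_search(graph, gin, num, v, w):
    """Return an arc != (v, w) on a cycle through (v, w), or None."""
    F, B = {w}, {v}
    A_F = [(w, x) for x in graph[w]]       # arcs to traverse forward
    A_B = [(y, v) for y in gin[v]]         # arcs to traverse backward
    while A_F and A_B:
        u, x = min(A_F, key=lambda a: num[a[0]])   # smallest tail in A_F
        y, z = max(A_B, key=lambda a: num[a[1]])   # largest head in A_B
        if num[u] >= num[z]:
            break                          # no compatible pair remains
        A_F.remove((u, x))
        A_B.remove((y, z))
        if x in B:
            return (u, x)                  # forward search met a backward vertex
        if y in F:
            return (y, z)                  # backward search met a forward vertex
        if x not in F:
            F.add(x)
            A_F.extend((x, q) for q in graph[x])
        if y not in B:
            B.add(y)
            A_B.extend((q, y) for q in gin[y])
    return None
\end{verbatim}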
In compatible search, an iteration of the while loop is a {\em search step}.
The step does a {\em forward traversal} of the arc $(u, x)$ that it deletes from
$A_F$ and a {\em backward traversal} of the arc $(y, z)$ that it deletes from $A_B$.
The choice of which pair of arcs to traverse is arbitrary, as long as they are
compatible. If the addition of $(v, w)$ creates a cycle, it is possible for a
single arc $(u, z)$ to be added to both $A_F$ (when $u$ becomes forward) and to
$A_B$ (when $z$ becomes backward). It is even possible for such an arc to be
traversed both forward and backward in the same search step, but if this happens
it is the last search step. Such a double traversal does not affect the
correctness of the algorithm. Unlike limited search, compatible search can
visit unaffected vertices (those less than $w$ or greater than $v$ in
topological order), but this does not affect correctness, only efficiency. If
the search returns null, restore topological order as follows. Let $t =
\min(\{v\} \cup \{u| \exists (u, x) \in A_F \})$. Let $F_< = \{x \in F | x < t
\}$ and $B_> = \{y \in B | y > t\}$. If $t = v$, reorder as in limited search
(Section~\ref{sec:lim-search}): move all vertices in $F_<$ just after $t$. (In
this case $B_> = \{\}$.) Otherwise $(t < v)$, move all vertices in $F_<$ just
before $t$ and all vertices in $B_>$ just before all vertices in $F_<$. In
either case, order the vertices within $F_<$ and within $B_>$ topologically.
Figure~\ref{fig:comp-search} illustrates compatible search and reordering.
\begin{figure}
\caption{Compatible search of the graph in Figure~\ref{fig:lim-search}.}
\label{fig:comp-search}
\end{figure}
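The computation of $t$, $F_<$, and $B_>$ described above can be sketched as follows (an illustration only), given the set of arcs left untraversed in $A_F$ at the end of the search.
\begin{verbatim}
def reorder_sets(num, F, B, A_F, v):
    """Compute the threshold t and the sets F_< and B_> that must move."""
    t = min([v] + [u for (u, x) in A_F], key=lambda q: num[q])
    F_lt = {x for x in F if num[x] < num[t]}
    B_gt = {y for y in B if num[y] > num[t]}
    return t, F_lt, B_gt
\end{verbatim}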
\begin{theorem}
\label{thm:2way-corr}
Compatible search correctly detects cycles and maintains a topological order.
\end{theorem}
\begin{proof}
The algorithm maintains the invariant that every forward vertex is reachable from $w$
and $v$ is reachable from every backward vertex. Thus if $(u, x)$ with $x \in
B$ is traversed forward, there is a cycle consisting of a path from $w$ to $u$,
the arc $(u, x)$, a path from $x$ to $v$, and the arc $(v, w)$. Symmetrically,
if $(y,z)$ with $y \in F$ is traversed backward, there is a cycle. Thus if the
algorithm reports a cycle, there is one.
Suppose the addition of $(v, w)$ creates a cycle. Such a cycle consists of a
pre-existing path $P$ from $w$ to $v$ and the arc $(v, w)$. The existence of $P$
implies that $v > w$, so the addition of $(v, w)$ will trigger a search. The
search maintains the invariant that either there are distinct arcs $(u, x)$ and
$(y, z)$ on $P$ with $x \le y$, $(u, x)$ is in $A_F$, and $(y, z)$ is in $A_B$,
or there is an arc $(u, z)$ in both $A_F$ and $A_B$. In either case there is
a compatible arc pair, so the search can only stop by returning a non-null
arc. Thus if there is a cycle the algorithm will report one.
It remains to show that if $v > w$ and the addition of $(v, w)$ does not create a cycle,
then the algorithm restores topological order. This is a case analysis. First
consider $(v, w)$. If $t = v$, then $w$ is in $F_<$. If $t < v$, then $v$ is in $B_>$ and
$w$ is in $\{t\} \cup F_<$. In either case, $v$ precedes $w$ after the reordering.
Second, consider an arc $(x, y)$ other than $(v, w)$. Before the reordering $x < y$; we
must show that the reordering does not reverse this order. There are five cases:
Case 1: neither $x$ nor $y$ is in $F_< \cup B_>$. Neither $x$ nor $y$ is reordered.
Case 2: $x$ is in $F_<$. Vertex $y$ must be forward. If $y < t$ then $y$ is in $F_<$, and the
order of $x$ and $y$ is preserved because the reordering within $F_<$ is topological. If $y
= t$, then $t < v$, so the reordering inserts $x$ before $t = y$. If $y > t$, the reordering
does not move $y$ and inserts $x$ before $y$.
Case 3: $y$ is in $F_<$ but $x$ is not. Vertex $x$ is not moved, and $y$ follows $x$ after the
reordering since vertices in $F_<$ are only moved higher in the order.
Case 4: $y$ is in $B_>$. Vertex $x$ must be backward. Then $x \ne t$, since $x = t$ would
imply $t = v$ (since $x$ is backward) and $y > v$, which is impossible. If $x > t$ then $x$ is
in $B_>$, and the order of $x$ and $y$ is preserved because the reordering within $B_>$ is
topological. If $x < t$, the reordering does not move $x$ and inserts $y$ after $x$.
Case 5: $x$ is in $B_>$ but $y$ is not. Vertex $y$ is not moved, and $y$ follows $x$ after the
reordering since vertices in $B_>$ are only moved lower in the order.
We conclude that the reordering restores topological order. \qed
\end{proof}
A number of implementation details remain to be filled in. Before doing this, we
prove the key result that bounds the efficiency of two-way compatible search: the
total number of arc traversals over $m$ arc additions is $\mathrm{O}(m^{3/2})$. To
prove this, we extend the notion of relatedness used in Section~\ref{sec:lim-search} to arc
pairs: two distinct arcs are {\em related} if they are on a common path. Relatedness
is symmetric: the order in which the arcs occur on the common path is irrelevant.
(In an acyclic graph only one order is possible, but in a graph with cycles both
orders can occur, on different paths.) The following lemma is analogous to
Lemma~\ref{lem:lim-search-rel}:
\begin{lemma}
\label{lem:2way-search-rel}
Suppose the addition of $(v, w)$ triggers a search but does not create a cycle. Let
$(u, x)$ and $(y, z)$, respectively, be compatible arcs traversed forward and backward
during the search, not necessarily during the same search step. Then $(u, x)$ and $(y,
z)$ are unrelated before the addition of $(v, w)$ but are related after the addition.
\end{lemma}
\begin{proof}
Since adding $(v, w)$ does not create a cycle, $(u, x)$ and $(y, z)$ must be
distinct. Suppose $(u, x)$ and $(y, z)$ were related before the addition of $(v,
w)$. Let $P$ be a path containing both. The definition of compatibility is $u < z$.
But $u < z$ implies that $(u, x)$ precedes $(y, z)$ on $P$. Since $u$ is forward and
$z$ is backward, the addition of $(v, w)$ creates a cycle, consisting of a path from
$w$ to $u$, the part of $P$ from $u$ to $z$, a path from $z$ to $v$, and the arc $(v,
w)$. This contradicts the hypothesis of the lemma. Thus $(u, x)$ and $(y, z)$ are
unrelated before the addition of $(v, w)$.
After the addition of $(v, w)$, there is a path containing both $(u, x)$ and $(y,
z)$, consisting of $(y, z)$, a path from $z$ to $v$, the arc $(v, w)$, a path from
$w$ to $u$, and the arc $(u, x)$. Thus $(u, x)$ and $(y, z)$ are related after the
addition. \qed
\end{proof}
\begin{theorem}
\label{thm:2way-search-arcs}
Over $m$ arc additions, two-way compatible search does at most $4m^{3/2} + m + 1$
arc traversals.
\end{theorem}
\begin{proof}
Only the last arc addition can create a cycle; the corresponding search does at most
$m + 1$ arc traversals. (One arc may be traversed twice.) Consider any search
other than the last. Let $A$ be the set of arcs traversed forward during the search. Let $k$ be the number of arcs in $A$.
Each arc $(u, x)$ in $A$ has a distinct {\em twin} $(y, z)$ that was traversed
backward during the search step that traversed $(u, x)$. These twins are compatible;
that is, $u < z$. Order the arcs $(u, x)$ in $A$ in non-decreasing order on
$u$. Each arc $(u, x)$ in $A$ is compatible not only with its own twin but also with the
twin of each arc $(q, r)$ following $(u, x)$ in the order within $A$, because if $(y,
z)$ is the twin of $(q, r)$, $u \le q < z$. Thus if $(u, x)$ is $i^{\text{th}}$ in
the order within $A$, $(u, x)$ is compatible with at least $k - i + 1$ twins of
arcs in $A$. By Lemma~\ref{lem:2way-search-rel}, each such compatible pair is unrelated
before the addition of $(v, w)$ but is related after the addition. Summing over all
arcs in $A$, we find that the addition of $(v, w)$ increases the number of related
arc pairs by at least $k(k + 1)/2$.
Call a search other than the last one {\em small} if it does no more than $2m^{1/2}$ arc
traversals and {\em big} otherwise. Since there are at most $m$ small searches, together
they do at most $2m^{3/2}$ arc traversals. A big search that does $2k$ arc traversals is
triggered by an arc addition that increases the number of related arc pairs by at
least $k(k + 1)/2 > km^{1/2}/2$. Since there are at most ${m \choose 2} <
m^2/2$ related arc pairs, the total number of arc traversals during big searches is at most
$2m^{3/2}$. \qed
\end{proof}
The example in Figure~\ref{fig:comp-search} illustrates the argument in the
proof of Theorem~\ref{thm:2way-search-arcs}. The arcs traversed forward,
arranged in non-decreasing order by first vertex, are $(w, h)$ with twin $(d,
v)$, $(w, c)$ with twin $(g, v)$, $(c, f)$ with twin $(a, d)$, and $(f, h)$ with
twin $(e, g)$. Arc $(w, h)$ is compatible with the twins of all arcs in $A$,
$(w, c)$ is compatible with its own twin and those of $(c, f)$ and $(f, h)$,
$(c, f)$ is compatible with its own twin and that of $(f, h)$, and $(f, h)$ is
compatible with its own twin. There can be other compatible pairs, and indeed
there are in this example, but the proof does not use them.
Our goal now is to implement two-way compatible search so that the time per arc
addition is $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal. By
Theorem~\ref{thm:2way-search-arcs}, this would give a time bound of $\mathrm{O}(m^{3/2})$
for $m$ arc additions. First we discuss the graph representation, then the
maintenance of the topological order, and finally (in this and the next section) the
detailed implementation of the search algorithm.
We represent the graph using forward and backward incidence lists: each vertex has a
list of its outgoing arcs and a list of its incoming arcs, which we call the {\em
outgoing list} and {\em incoming list}, respectively. Singly linked lists
suffice. We denote by $\mathit{first\mbox{-}out}(x)$ and $\mathit{first\mbox{-}in}(x)$ the first arc on the outgoing
list and the first arc on the incoming list of vertex $x$, respectively. We
denote by $\mathit{next\mbox{-}out}((x, y))$ and $\mathit{next\mbox{-}in}((x, y))$ the arcs after $(x, y)$ on the
outgoing list of $x$ and the incoming list of $y$, respectively. In each case,
if there is no such arc, the value is null. Adding a new arc $(v, w)$ to this
representation takes $\mathrm{O}(1)$ time. If the addition of an arc $(v, w)$
triggers a search, we can update the graph representation either before or
after the search: arc $(v, w)$ will never be added to either $A_F$ or $A_B$.
We represent the topological order by a dynamic ordered list. (See
Section~\ref{sec:lim-search}.) If adding $(v, w)$ leaves the graph acyclic but
triggers a search, we reorder the vertices after the search as follows. Determine $t$.
Determine the sets $F_<$ and $B_>$. Determine the subgraphs induced by the vertices in
$F_<$ and $B_>$. Topologically sort these subgraphs using either of the two linear-time
static methods (repeated deletion of sources or depth-first search). Move the
vertices in $F_<$ and $B_>$ to their new positions using dynamic ordered list deletions
and insertions. The number of vertices in $F \cup B$ is at most two plus the number of
arcs traversed by the search. Furthermore, all arcs out of $F_<$ and all arcs into $B_>$
are traversed by the search. It follows that the time for the topological
sort and reordering is at most linear in one plus the number of arcs traversed,
not including the time to determine $t$. We discuss how to determine $t$ after
presenting some of the details of the search implementation.
We want the time of a search to be $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal.
There are three tasks that are hard to implement in $\mathrm{O}(1)$ time: (1) adding arcs
to $A_F$ and $A_B$ (the number of arcs added as the result of an arc traversal may
not be $\mathrm{O}(1)$), (2) testing whether to continue the search, and (3) finding
a compatible pair of arcs to traverse.
By making the search vertex-guided instead of arc-guided, we simplify all of these
tasks, as well as the determination of $t$. We do not maintain $A_F$ and $A_B$
explicitly. Instead we partition $F$ and $B$ into {\em live} and {\em dead}
vertices. A vertex in $F$ is live if it has at least one outgoing untraversed
arc; a vertex in $B$ is live if it has at least one incoming untraversed arc;
all vertices in $F \cup B$ that are not live are dead. For each
vertex $x$ in $F$ we maintain a {\em forward pointer} $\mathit{out}(x)$ to the first
untraversed arc on its outgoing list, and for each vertex $y$ in $B$ we maintain
a {\em backward pointer} $\mathit{in}(y)$ to the first untraversed arc on its incoming
list; each such pointer is null if there are no untraversed arcs. We also
maintain the sets $F_L$ and $B_L$ of live vertices in $F$ and $B$, respectively.
When choosing arcs to traverse, we always choose a forward arc indicated by a
forward pointer and a backward arc indicated by a backward pointer. The test
whether to continue the search becomes ``$\min F_L < \max B_L$.''
When a new arc $(v,w)$ has $v > w$, do the search by calling
\vertexguidedsearch{$v$,$w$}, where the function \vertexguidedsearch is defined
in Figure~\ref{alg:vertex-search}. It uses an auxiliary macro \searchstep, defined in Figure~\ref{alg:search-step}, intended to be expanded in-line; each
return from \searchstep returns from \vertexguidedsearch as well. If
\vertexguidedsearch{$v$,$w$} returns null, let $t = \min(\{v\} \cup \{x \in
F | \mathit{out}(x) \ne \textit{null}\})$ and reorder the vertices in $F_<$ and $B_>$ as discussed above.
\begin{figure}
\caption{Implementation of vertex-guided search.}
\label{alg:vertex-search}
\end{figure}
\begin{figure}
\caption{Implementation of a search step.}
\label{alg:search-step}
\end{figure}
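Since the pseudocode of Figures~\ref{alg:vertex-search} and~\ref{alg:search-step} is not reproduced here, the following Python sketch shows what one search step does; it is an illustration under simplified state, not the paper's code. The dictionary \texttt{st} holds the incidence lists \texttt{out\_arcs} and \texttt{in\_arcs} (assumed to map every vertex to a list), the pointer indices \texttt{out\_ptr} and \texttt{in\_ptr}, and the sets \texttt{F}, \texttt{B}, \texttt{F\_L}, \texttt{B\_L}; the caller supplies a live pair $u$, $z$ with $u < z$.
\begin{verbatim}
def search_step(st, u, z):
    """Traverse one arc forward from u and one arc backward into z.
    Return an arc on a cycle through the new arc, or None to continue."""
    x = st['out_arcs'][u][st['out_ptr'][u]]      # forward traversal of (u, x)
    y = st['in_arcs'][z][st['in_ptr'][z]]        # backward traversal of (y, z)
    st['out_ptr'][u] += 1
    st['in_ptr'][z] += 1
    if st['out_ptr'][u] == len(st['out_arcs'][u]):
        st['F_L'].discard(u)                     # no untraversed arcs: u is dead
    if st['in_ptr'][z] == len(st['in_arcs'][z]):
        st['B_L'].discard(z)
    if x in st['B']:
        return (u, x)                            # forward search met B: cycle
    if y in st['F']:
        return (y, z)                            # backward search met F: cycle
    if x not in st['F']:
        st['F'].add(x)
        st['out_ptr'].setdefault(x, 0)
        if st['out_arcs'][x]:
            st['F_L'].add(x)                     # newly live forward vertex
    if y not in st['B']:
        st['B'].add(y)
        st['in_ptr'].setdefault(y, 0)
        if st['in_arcs'][y]:
            st['B_L'].add(y)
    return None
\end{verbatim}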
If we represent $F$ and $B$ by singly linked lists and $F_L$ and $B_L$ by doubly
linked lists (so that deletion takes $\mathrm{O}(1)$ time), plus flag bits for each
vertex indicating whether it is in $F$ and/or $B$, then the time for a search
step is $\mathrm{O}(1)$. The time to determine $t$ and to reorder the vertices is at
most $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal.
It remains to implement tasks (2) and (3): testing whether to continue the
search and finding a compatible pair of arcs to traverse. In vertex-guided
search these tasks are related: it suffices to test whether $\min F_L < \max
B_L$; and, if so, to find $u \in F_L$ and $z \in B_L$ with $u < z$. The
historical solution is to store $F_L$ and $B_L$ in heaps (priority queues),
$F_L$ in a min-heap and $B_L$ in a max-heap, and in each iteration of the
while loop to choose $u = \min F_L$ and $z = \max B_L$. This guarantees that $u
< z$, since otherwise the continuation test for the search would have failed.
With an appropriate heap implementation, the test $\min F_L < \max B_L$ takes
$\mathrm{O}(1)$ time, as does choosing $u$ and $z$. Each insertion into a heap takes
$\mathrm{O}(1)$ time as well, but each deletion from a heap takes $\mathrm{O}(\log n)$
time, resulting in an $\mathrm{O}(\log n)$ time bound per search step and an
$\mathrm{O}(m^{3/2}\log n)$ time bound for $m$ arc additions.
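As an illustration only, the continuation test and the choice of $u$ and $z$ might be realized with binary heaps as follows; Python's \texttt{heapq} does not have the constant-time insertion bound quoted above, and dead entries are discarded lazily. Heap entries are (key, vertex) pairs keyed by topological position, negated for the max-heap.
\begin{verbatim}
import heapq

def peek_live(heap, live):
    """Return the top live vertex of the heap, discarding dead entries."""
    while heap and heap[0][1] not in live:
        heapq.heappop(heap)                     # lazy deletion of dead vertices
    return heap[0][1] if heap else None

def choose_pair(F_heap, B_heap, F_L, B_L, num):
    """Return (u, z) = (min F_L, max B_L) if min F_L < max B_L, else None."""
    u = peek_live(F_heap, F_L)                  # F_heap keyed by num[x]
    z = peek_live(B_heap, B_L)                  # B_heap keyed by -num[x]
    if u is None or z is None or num[u] >= num[z]:
        return None                             # continuation test fails
    return u, z
\end{verbatim}
A vertex $x$ entering $F_L$ is pushed as \texttt{heappush(F\_heap, (num[x], x))}, and one entering $B_L$ as \texttt{heappush(B\_heap, (-num[x], x))}.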
This method is in essence the algorithm of Alpern et al.~\cite{Alpern1990}, although
their algorithm does not strictly alternate forward and backward arc traversals, and
they did not obtain a good total time bound. Using heaps but relaxing the
alternation of forward and backward arc traversals gives methods with slightly better
time bounds~\cite{Alpern1990,Katriel2006,Kavitha2007}, the best bound to date being
$\mathrm{O}(m^{3/2} + nm^{1/2}\log n)$~\cite{Kavitha2007}. One can further reduce the
running time by using a faster heap implementation, such as those of van Emde
Boas~\cite{Boas1977b,Boas1977}, Thorup~\cite{Thorup2004}, and Han and
Thorup~\cite{Han2002}. Our goal is more ambitious: to reduce the overall running
time to $\mathrm{O}(m^{3/2})$ by eliminating the use of heaps. This we do in the next
section.
\section{Soft-Threshold Search} \label{sec:soft-search}
\SetKwFunction{softthresholdsearch}{Soft-Threshold-Search}
To obtain a faster implementation of vertex-guided search, we exploit the flexibility
inherent in the algorithm by using a {\em soft threshold} $s$ to help choose $u$ and
$z$ in each search step. Vertex $s$ is a forward or backward vertex, initially $v$.
We partition the sets $F_L$ and $B_L$ into {\em active} and {\em passive} vertices.
Active vertices are candidates for the current search step, passive vertices are
candidates for future search steps. We maintain the sets $F_A$ and $F_P$, and
$B_A$ and $B_P$, of active and passive vertices in $F_L$ and $B_L$,
respectively. All vertices in $F_P$ are greater than $s$; all vertices in $B_P$ are less than $s$; vertices
in $F_A \cup B_A$ can be on either side of $s$. Searching continues while
$F_A \ne \{\}$ and $B_A \ne \{\}$. The algorithm chooses $u$ from $F_A$ and $z$
from $B_A$ arbitrarily. If $u < z$, the algorithm traverses an arc out of $u$
and an arc into $z$ and makes each newly live vertex active. If $u > z$, the
algorithm traverses no arcs. Instead, it makes $u$ passive if $u > s$ and
makes $z$ passive if $z < s$; $u > z$ implies that at least one of $u$ and $z$
becomes passive. When $F_A$ or $B_A$ becomes empty, the algorithm updates $s$
and the vertex partitions, as follows. Suppose $F_A$ is empty; the updating is
symmetric if $B_A$ is empty. The algorithm makes all vertices in $B_P$ dead,
makes $s$ dead if it is live, chooses a new $s$ from $F_P$, and makes active
all vertices $x \in F_P$ such that $x \le s$.
Here are the details of this method, which we call {\em soft-threshold search}.
When a new arc $(v, w)$ has $v > w$, do the search by calling
\softthresholdsearch{$v$,$w$}, where the function \softthresholdsearch is
defined in Figure~\ref{alg:soft-search}, and procedure \searchstep is defined
as in Figure~\ref{alg:search-step}, but with $F_A$ and $B_A$ replacing $F_L$ and
$B_L$, respectively. If \softthresholdsearch{$v$,$w$} returns null, let $t =
\min(\{v\} \cup \{x \in F| \mathit{out}(x) \ne \textit{null}\})$ and reorder the vertices in
$F_<$ and $B_>$ as discussed above. Figure~\ref{fig:soft-search}
illustrates soft-threshold search.
\begin{figure}
\caption{Implementation of soft-threshold search.}
\label{alg:soft-search}
\end{figure}
\begin{figure}
\caption{Soft-threshold search of the graph in Figure~\ref{fig:lim-search}.}
\label{fig:soft-search}
\end{figure}
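The decision logic of one iteration of the soft-threshold loop can be sketched as follows; this is an illustration in which \texttt{traverse} stands for a search step as in the previous section and all comparisons are by topological position \texttt{num}.
\begin{verbatim}
def soft_threshold_step(num, s, u, z, F_A, F_P, B_A, B_P, traverse):
    """One iteration: u is an arbitrary active forward vertex, z an
    arbitrary active backward vertex, s the current soft threshold."""
    if num[u] < num[z]:
        return traverse(u, z)        # compatible pair: traverse two arcs;
                                     # newly live vertices are made active
    if num[u] > num[s]:              # u > z: traverse nothing, demote instead
        F_A.discard(u); F_P.add(u)
    if num[z] < num[s]:
        B_A.discard(z); B_P.add(z)
    return None
\end{verbatim}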
Soft-threshold search is an implementation of vertex-guided search except that it
makes additional vertices dead, not just those with no incident arcs left to
traverse. Once dead, a vertex stays dead. We need to prove that this does not
affect the search outcome. First we prove that soft-threshold search terminates.
\begin{theorem}
\label{thm:soft-search-steps}
A soft-threshold search terminates after at most $n^2 + m + n$ iterations of
the while loop.
\end{theorem}
\begin{proof}
Each iteration either traverses one or two arcs or makes one or two vertices
passive. The number of times a vertex can become passive is at most the number of times it becomes
active. Vertices become active only when they are visited (once per vertex) or when
$s$ changes. Each time $s$ changes, the old $s$ becomes dead if it was not dead
already. Thus $s$ changes at most $n$ times. The number of times vertices become
active is thus at most $n + n^2$ (once per vertex visit plus once per vertex per
change in $s$). \qed
\end{proof}
To prove correctness, we need two lemmas.
\begin{lemma}
\label{lem:soft-search-passive}
If $x$ is a passive vertex, $x > s$ if $x$ is in $F_P$, $x < s$ if $x$ is in
$B_P$.
\end{lemma}
\begin{proof}
If $x$ is a passive vertex, $x$ satisfies the lemma when it becomes passive, and it
continues to satisfy the lemma until $s$ changes. Suppose $x$ is forward; the
argument is symmetric if $x$ is backward. If $s$ changes because $F_A$ is empty, $x$
becomes active unless it is greater than the new $s$. If $s$ changes because
$B_A$ is empty, $x$ becomes dead. The lemma follows by induction on the number of search
steps. \qed
\end{proof}
\begin{lemma}
\label{lem:soft-search-live}
Let $A_F$ be the set of untraversed arcs out of vertices in $F$, let $A_B$
be the set of untraversed arcs into vertices in $B$, let $q = \min\{u| \exists (u, x) \in
A_F\}$, and let $r = \max\{z| \exists (y, z) \in A_B\}$. Then $q$ and $r$ remain live vertices
until $q > r$.
\end{lemma}
\begin{proof}
If $q$ and $r$ remain live vertices until $q = \infty$ or $r = -\infty$, the lemma
holds. Thus suppose $q$ dies before $r$ and before either $q = \infty$ or $r =
-\infty$. When $q$ dies, $q = s$ or $q$ is passive, and $B_A = \{\}$. Since $r$
is still live, $r$ is passive. By Lemma~\ref{lem:soft-search-passive}, $q \ge s
> r$. The argument is symmetric if $r$ dies before $q$ and before either $q =
\infty$ or $r = -\infty$. \qed
\end{proof}
\begin{theorem}
Soft-threshold search is correct.
\end{theorem}
\begin{proof}
Let $q$ and $r$ be defined as in Lemma~\ref{lem:soft-search-live}. By that
lemma, the search will continue until a cycle is detected or $q > r$. While the
search continues, it traverses arcs in exactly the same way as vertex-guided
search. Once $q > r$, the continuation test for vertex-guided search fails.
If the graph is still acyclic, the continuation test for soft-threshold search
may not fail immediately, but no additional arcs can be traversed; any
additional iterations of the while loop merely change the
state (active, passive, or dead) of various vertices. Such changes do not affect the
outcome of the search. \qed
\end{proof}
To implement soft-threshold search, we maintain $F_A$, $F_P$, $B_A$, and
$B_P$ as doubly-linked lists. The time per search step is $\mathrm{O}(1)$, not counting the
computations associated with a change in $s$ (the two code blocks at the end of the
while loop that are executed if $F_A$ or $B_A$ is empty, respectively).
The remaining freedom in the algorithm is the choice of $s$. The following
observation guides this choice. Suppose $s$ changes because $F_A$ is empty. The
algorithm chooses a new $s$ from $F_P$ and makes active all vertices in $F_P$ that
are no greater than $s$. Consider the next change in $s$. If this change occurs
because $F_A$ is again empty, then all the vertices that were made active by the
first change of $s$, including $s$, are dead, and hence can never become active
again. If, on the other hand, this change occurs because $B_A$ is empty, then all
the forward vertices that remained passive after the first change in $s$ become dead,
and $s$ becomes dead if it is not dead already. That is, either the vertices in
$F_P$ no greater than the new $s$, or the vertices in $F_P$ no less than the new $s$,
are dead after the next change in $s$. Symmetrically, if $s$ changes because
$B_A$ is empty, then either all the vertices in $B_P$ no less than the new $s$,
or all the vertices in $B_P$ no greater than the new $s$, are dead after the
next change in $s$. To minimize the worst case, we always select $s$ to be the
median of the set of choices. This takes time linear in the number of
choices~\cite{Blum1973,Schonhage1976,DorZ1999}.
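The update performed when $F_A$ becomes empty might look as follows (the case of $B_A$ empty is symmetric). This is a sketch, not the paper's code: the new $s$ is found by sorting rather than by a linear-time selection algorithm, and marking the old $s$ and the dying vertices dead in the global state is left to the caller.
\begin{verbatim}
def refill_from_F(num, F_A, F_P, B_P):
    """F_A is empty: passive backward vertices die, a median of F_P
    becomes the new threshold, and the small half of F_P is activated.
    Returns the new s, or None if no candidate remains."""
    B_P.clear()                              # all passive backward vertices die
    if not F_P:
        return None                          # no candidate for s: search ends
    s = sorted(F_P, key=lambda x: num[x])[(len(F_P) - 1) // 2]
    activated = {x for x in F_P if num[x] <= num[s]}
    F_A |= activated                         # vertices no greater than s activate
    F_P -= activated
    return s
\end{verbatim}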
\begin{theorem}
\label{thm:soft-search-time}
If each new $s$ is selected to be the median of the set of choices, soft-threshold
search takes $\mathrm{O}(m^{3/2})$ time over $m$ arc additions.
\end{theorem}
\begin{proof}
Consider a soft-threshold search. For each increase in $s$ we charge an amount
equal to the number of vertices in $F_P$ when the change occurs; for each
decrease in $s$ we charge an amount equal to the number of vertices in $B_P$
when the change occurs. The charge covers the time spent in the code block
associated with the change, including the time to find the new $s$ (the median)
and the time to make vertices passive or dead, all of which is linear in the
charge. The charge also covers any time spent later to make passive any
vertices that became active as a result of the change; this time is $\mathrm{O}(1)$
for each such vertex. The remainder of the search time is $\mathrm{O}(1)$ for
initialization plus $\mathrm{O}(1)$ per arc traversal. We claim that the total
charge is $\mathrm{O}(1)$ per arc traversal. The theorem follows from the claim and
Theorem~\ref{thm:2way-search-arcs}.
The number of vertices in $F \cup B$ is at most the number of arc traversals. We
divide the total charge among these vertices, at most two units per vertex. The
claim follows.
Consider a change in $s$ other than the last. Suppose this is an increase. Let $k$
be the number of vertices in $F_P$ when this change occurs; $k$ is also the charge
for the change. Since $s$ is selected to be the median of $F_P$, at
least $\lceil k/2 \rceil$ vertices in $F_P$ are no greater than $s$, and
at least $\lceil k/2 \rceil$ vertices in $F_P$ are no less than $s$. If the
next change in $s$ is an increase, all the vertices in $F_P$ no greater than $s$ must be dead by the time of the next change. If the next
change in $s$ is a decrease, all the vertices in $F_P$ no less than $s$ will be made
dead by the next change, including $s$ if it is not dead already. In either case we
associate the charge of $k$ with the at least $\lceil k/2 \rceil$ vertices that
become dead after the change in $s$ but before or during the next change in $s$.
A symmetric argument applies if $s$ decreases. The charge for the last change in $s$
we associate with the remaining live vertices, at most one unit per vertex. \qed
\end{proof}
Theorem~\ref{thm:soft-search-time} holds (with a bigger constant factor) if each new
$s$ is an approximate median of the set of choices; that is, if $s$ is larger
than at least $\epsilon k$ of the $k$ choices and smaller than at least
$\epsilon k$ of them, for some fixed $\epsilon > 0$. An alternative randomized method is to select each new
$s$ uniformly at random from among the choices.
\begin{theorem}
\label{thm:soft-search-rtime}
If each new $s$ is chosen uniformly at random from among the set of choices,
soft-threshold search takes $\mathrm{O}(m^{3/2})$ expected time over $m$ arc additions.
\end{theorem}
\begin{proof}
Each selection of $s$ takes time linear in the number of choices. We charge for the
changes in $s$ exactly as in the proof of Theorem~\ref{thm:soft-search-time}. The
search time is then $\mathrm{O}(1)$ plus $\mathrm{O}(1)$ per arc traversal plus $\mathrm{O}(1)$ per unit of
charge. We shall show that the expected total charge for a search is at most
linear in the number of vertices in $F \cup B$, which in turn is at most the
number of arc traversals. The theorem follows from the bound on expected total
charge and Theorem~\ref{thm:2way-search-arcs}.
The analysis of the expected total charge is much like the
analysis~\cite{Knuth1972} of Hoare's ``quick select''
algorithm~\cite{Hoare1961}. We construct an appropriate recurrence and prove a
linear bound by induction. Consider the situation just before some search
step. Let $\mathrm{E}(k)$ be the maximum expected total future charge, given that at
most $k$ distinct vertices are candidates for $s$ during future changes of $s$.
(A vertex can be a candidate more than once, but we only count it once.) The
maximum is over the possible current states of all the data structures; the
expectation is over future choices of $s$. We prove by induction on $k$ that
$\mathrm{E}(k) \le 4k$.
If $s$ does not change in the future, or if the next change in $s$ is the last
one, then the total future charge is at most $k$. Suppose the next change of $s$
is not the last, and the next choice of $s$ is from among $j$ candidates. Each of
these $j$ candidates is selected with probability $1/j$. If the new $s$ is the
$i^{\text{th}}$ smallest among the candidates, then at least $\min\{i, j - i + 1\}$
of these candidates cannot be future candidates. The charge for this change in $s$
is $j$. The maximum expected future charge, including that for this change in $s$,
is at most $j + \sum_{i=1}^{j/2} (2\mathrm{E}(k - i)/j)$ if $j$ is even, at most $j +
\mathrm{E}(k - \lceil j/2 \rceil)/j + \sum_{i=1}^{\lfloor j/2 \rfloor} (2\mathrm{E}(k - i)/j)$ if $j$ is
odd. Using the induction hypothesis $\mathrm{E}(k') \le 4k'$ for $k' < k$, we find that the
maximum expected future charge is at most $j + \sum_{i=1}^{j/2} (8(k - i)/j) =
4k + j - \sum_{i=1}^{j/2} (8i/j) = 4k + j - (4/j)(j/2)(j/2+1) = 4k - 2$ if $j$
is even, at most $j + 4(k - \lceil j/2 \rceil)/j + \sum_{i=1}^{\lfloor j/2
\rfloor} (8(k - i)/j) = 4k + j - 4\lceil j/2 \rceil / j - \sum_{i=1}^{\lfloor j/2 \rfloor} (8i/j) = 4k +
j - (4/j)(j/2 + 1/2 + (j/2-1/2)(j/2+1/2)) < 4k + j - (4/j)(j/2)^2 = 4k$ if $j$
is odd. By induction $\mathrm{E}(k) \le 4k$ for all $k$.
Over the entire search, there are at most $|F \cup B|$ candidates for $s$. It
follows that the expected total charge over the entire search is at most $4|F
\cup B|$, which is at most four times the number of arcs traversed during the
search. \qed
\end{proof}
Soft-threshold search with either method of choosing $s$ uses $\mathrm{O}(n + m)$
space, as do all the algorithms we have discussed so far. Katriel and
Bodlaender~\cite{Katriel2006} give a set of examples on which soft-threshold
search takes $\mathrm{\Omega}(m^{3/2})$ time no matter how $s$ is chosen, so the bounds
in Theorems~\ref{thm:soft-search-time} and~\ref{thm:soft-search-rtime} are tight.
It is natural to ask whether there is a faster algorithm. To address this question,
we consider algorithms that maintain (at least) an explicit list of the vertices in
topological order and that do any needed reordering by moving one vertex at a time to
a new position in this list. All known algorithms do this or can be modified to do
so with at most a constant-factor increase in running time. We further restrict our
attention to {\em local} algorithms, those that update the order after an arc $(v,
w)$ with $v > w$ is added by reordering only affected vertices (defined in
Section~\ref{sec:lim-search}: those vertices between $w$ and $v$, inclusive). These
vertices form an interval in the old order and must form an interval in the new
order; within the interval, any permutation is allowed as long as it restores
topological order. Our algorithms, as well as all previous ones except for
those of Shmueli~\cite{Shmueli1983} and Bender et al.~\cite{Bender2009}, are
local. The following theorem gives a lower bound of $\mathrm{\Omega}(n\sqrt{m})$ on the
worst-case number of vertices that must be moved by any local algorithm. Thus
for sparse graphs ($m/n = \mathrm{O}(1)$), soft-threshold search is as fast as
possible among local algorithms.
\begin{theorem}
\label{thm:local-lb}
Any local algorithm must reorder $\mathrm{\Omega}(n\sqrt{m})$ vertices, and hence must take
$\mathrm{\Omega}(n\sqrt{m})$ time.
\end{theorem}
\begin{proof}
Let $p$ and $k$ be arbitrary positive integers such that $p \le k$. We shall give an
example with $n = p(k + 1)$ vertices and $m = n - k - 1 + k(k + 1)/2$ arcs that
requires at least $pk(k + 1)/2 = nk/2$ vertex movements. Since $p \le k$, $k(k + 1)/2 \le m
\le 3k(k + 1)/2$, so $\sqrt{m} = \mathrm{\Theta}(k)$. The example is such that, after $n
- k - 1$ initial arc additions, each subsequent arc addition forces at least $p$
vertices to be moved in the topological order, assuming the algorithm is local. The total number
of vertex movements is thus at least $pk(k+1)/2 = \mathrm{\Omega}(n\sqrt{m})$. Given any
target number of vertices $n'$ and target number of arcs $m'$, we can choose $p$ and
$k$ so that $n = \mathrm{\Theta}(n')$ and $m = \mathrm{\Theta}(m')$, which gives the theorem.
The construction is quite simple. Let the $n$
vertices be numbered 1 through $n$ in their original topological order. Add $n
- k - 1$ arcs so that each interval of $p$ consecutive vertices ending in an
integer multiple of $p$ forms a path of the vertices in increasing order (so
that vertices 1 through $p$ form a path from 1 to $p$, $p + 1$ through $2p$
form a path from $p + 1$ to $2p$, and so on). Now there are $k + 1$ paths,
each containing $p$ vertices. Call these paths $P_1, P_2,\ldots,P_{k + 1}$, in
increasing order by first (and last) vertex. Add an arc from the last vertex
of $P_2$ (vertex $2p$) to the first vertex of $P_1$ (vertex 1). This forms a
path from $p + 1$ through $p + 2, p + 3,\ldots$ to $2p$, then through $1, 2,
\ldots$ to $p$. The affected vertices are the vertices 1 through $2p$, and
the only way to rearrange them to restore topological order is to move $p + 1$
through $2p$ before 1 through $p$, which takes at least $p$ individual vertex
moves. The effect is to swap $P_1$ and $P_2$ in the topological order. Now add
an arc from the last vertex of $P_3$ to the first vertex of $P_1$. This forces
$P_1$ to swap places with $P_3$, again requiring at least $p$ vertex moves.
Continue adding one arc at a time in this way, forcing $P_1$ to swap places
with $P_4, P_5,\ldots, P_{k + 1}$. After $k$ additions of arcs from the
last vertex of $P_2, P_3,\ldots, P_{k + 1}$ to the first vertex of $P_1$, path
$P_1$ has been forced all the way to the top end of the topological order. Now
ignore $P_1$ and repeat the construction with $P_2$, forcing it to move past
$P_3, P_4,\ldots, P_{k + 1}$ by adding arcs $(3p, p + 1), (4p, p + 1),\ldots,
((k + 1)p, p + 1)$. Do the same with $P_3, P_4,\ldots, P_k$. The total number
of arcs added that force vertex moves is $k(k + 1)/2$. Each of these added
arcs forces at least $p$ vertex moves. Figure~\ref{fig:soft-lb} gives an
example of the construction. \qed
\end{proof}
\begin{figure}
\caption{The $\mathrm{\Omega}(n\sqrt{m})$ lower-bound construction of Theorem~\ref{thm:local-lb}.}
\label{fig:soft-lb}
\end{figure}
The $\mathrm{\Omega}(n\sqrt{m})$ bound on vertex reorderings is tight. An algorithm that
achieves this bound is a two-way search that does not alternate forward and backward
arc traversals but instead does forward arc traversals until visiting an unvisited
vertex less than $v$, then does backward arc traversals until visiting an
unvisited vertex greater than $w$, and repeats. Each forward traversal is
along an arc $(u, x)$ with $u$ minimum; each backward traversal is along an arc
$(y, z)$ with $z$ maximum. Searching continues until a cycle is detected or
there is no compatible pair of untraversed arcs. If the search stops without
detecting a cycle, the algorithm reorders the vertices in the same way as in
two-way compatible search. One can prove that this method reorders
$\mathrm{O}(n\sqrt{m})$ vertices over $m$ arc additions by counting related vertex
pairs (as defined in the next section: two vertices are related if one path
contains both). Unfortunately we do not know an implementation of this
algorithm with an overall time bound approaching the bound on vertex reorderings.
For algorithms that reorder one vertex at a time but are allowed to move
unaffected vertices, the only lower bound known is the much weaker one of
Ramalingam and Reps~\cite{Ramalingam1994}. They showed that $n - 1$ arc
additions can force any algorithm, local or not, to do $\mathrm{\Omega}(n\log n)$ vertex
moves.
\section{Topological Search} \label{sec:top-search}
\SetKwFunction{topologicalsearch}{Topological-Search}
\SetKwFunction{reorder}{Reorder}
Soft-threshold search is efficient on sparse graphs but becomes less and less
efficient as the graph becomes denser; indeed, if $m = \mathrm{\Omega}(n^2)$ the time bound is
$\mathrm{O}(n^3)$, the same as that of one-way limited search (Section
\ref{sec:lim-search}). In this section we give an alternative algorithm that is efficient for dense graphs.
The algorithm uses two-way search, but differs in three ways from the methods
discussed in Sections \ref{sec:2way-search} and \ref{sec:soft-search}: it balances
vertices visited instead of arcs traversed (as in the method sketched at the end of
Section~\ref{sec:soft-search}); it searches the topological order instead of the
graph; and it uses a different reordering method, which has the side benefit of
making it a topological sorting algorithm. We call the algorithm {\em topological
search}.
We represent the topological order by an explicit mapping between the vertices
and the integers from 1 to $n$. We denote by $\mathit{position}(v)$ the number of
vertex $v$ and by $\mathit{vertex}(i)$ the vertex with number $i$. We implement
$\mathit{vertex}$ as an array. The initial numbering is arbitrary; it is topological
since there are no arcs initially. If $v$ and $w$ are vertices, we test $v <
w$ by comparing $\mathit{position}(v)$ to $\mathit{position}(w)$. We represent the graph by an
adjacency matrix $A: A(v, w) = 1$ if $(v, w)$ is an arc, $A(v, w) = 0$ if not.
Testing whether $(v, w)$ is an arc takes $\mathrm{O}(1)$ time, as does adding an
arc. Direct representation of $A$ uses $\mathrm{O}(n^2)$ bits of space;
representation of $A$ by a hash table reduces the space to $\mathrm{O}(n + m)$ but
makes the algorithm randomized.
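A minimal sketch of these data structures, with 0-based positions for brevity and a Python set of vertex pairs standing in for the adjacency matrix or hash table:
\begin{verbatim}
class TopOrder:
    def __init__(self, vertices):
        self.vertex = list(vertices)                    # vertex(i)
        self.position = {x: i for i, x in enumerate(self.vertex)}
        self.arcs = set()                               # adjacency

    def add_arc(self, v, w):
        self.arcs.add((v, w))

    def is_arc(self, v, w):                             # O(1) adjacency test
        return (v, w) in self.arcs

    def before(self, v, w):                             # test v < w
        return self.position[v] < self.position[w]
\end{verbatim}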
To simplify the running time analysis and the extension to strong component
maintenance (Section \ref{sec:strong}), we test for cycles after the search.
Thus the algorithm consists of three parts: the search, the cycle test, and the
vertex reordering. Let $(v, w)$ be a new arc with $v > w$. The search
examines every affected vertex (those between $w$ and $v$ in the order,
inclusive). It builds a queue $F$ of vertices reachable from $w$ by searching
forward from $w$, using a current position $i$, initially $\mathit{position}(w)$.
Concurrently, it builds a queue $B$ of vertices from which $v$ is
reachable by searching backward from $v$, using a current position $j$,
initially $\mathit{position}(v)$. It alternates between adding a vertex to $F$
and adding a vertex to $B$ until the forward and backward searches meet. When
adding a vertex $z$ to $F$ or $B$, the method sets $\mathit{vertex}(\mathit{position}(z)) =
\textit{null}$.
In giving the details of this method, we use the following notation for queue
operations: $[\,]$ denotes an empty queue; $\mathit{inject}(x, Q)$ adds element $x$ to
the back of queue $Q$; $\mathit{pop}(Q)$ deletes the front element $x$ from queue $Q$
and returns $x$; if $Q$ is empty, $\mathit{pop}(Q)$ leaves $Q$ empty and returns null.
Do the search by calling \topologicalsearch{$v$,$w$}, where procedure
\topologicalsearch is defined in Figure~\ref{alg:top-search}.
\begin{figure}
\caption{Implementation of topological search.}
\label{alg:top-search}
\end{figure}
Once the search finishes, test for a cycle by checking whether there is an arc
$(u, z)$ with $u$ in $F$ and $z$ in $B$. If there is no such arc, reorder the vertices as
follows. Let $F$ and $B$ be the queues at the end of the search, and let $k$ be
the common value of $i$ and $j$ at the end of the search. Then $\mathit{vertex}(k) =
\textit{null}$. If the search stopped after incrementing $i$, then $\mathit{vertex}(k)$ was
added to $B$, and $F$ and $B$ contain the same number of vertices. Otherwise,
the search stopped after decrementing $j$, $\mathit{vertex}(k)$ was added to $F$, and
$F$ contains one more vertex than $B$. In either case, the number of positions
$g \ge k$ such that $\mathit{vertex}(g) = \textit{null}$ is $|F|$, and the number of positions $g
< k$ such that $\mathit{vertex}(g) = \textit{null}$ is $|B|$. Reinsert the vertices in $F \cup B$
into the vertex array, moving additional vertices as necessary, by calling
\reorder, using as the initial values of $F$, $B$, $i$, $j$ their values at the
end of the search, where procedure \reorder is defined in
Figure~\ref{alg:reorder}.
\begin{figure}
\caption{Implementation of reordering.}
\label{alg:reorder}
\end{figure}
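The cycle test performed between the search and the reordering is a pairwise check and can be sketched directly; \texttt{is\_arc} is the adjacency test and $F$ and $B$ are the queues built by the search.
\begin{verbatim}
def creates_cycle(is_arc, F, B):
    """After the search: adding (v, w) created a cycle if and only if
    some arc goes from a vertex in F to a vertex in B."""
    return any(is_arc(u, z) for u in F for z in B)
\end{verbatim}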
The reordering process consists of two almost-symmetric while loops. The first loop
reinserts the vertices in $F$ into positions $k$ and higher. Variable $i$ is the current
position. If $\mathit{vertex}(i)$ is a vertex $q$ with an arc from a vertex currently in $F$,
vertex $q$ is added to the back of $F$ and $\mathit{vertex}(i)$ becomes null: vertex $q$ must be
moved to a higher position. If $\mathit{vertex}(i)$ becomes null, or if $\mathit{vertex}(i)$ was already
null, the front vertex in $F$ is deleted from $F$ and becomes $\mathit{vertex}(i)$. The
second loop reinserts the vertices in $B$ into positions $k - 1$ and lower in symmetric
fashion. The only difference between the loops is that the forward loop increments $i$
last, whereas the backward loop decrements $j$ first, to avoid examining $\mathit{vertex}(k)$.
The forward and backward loops are completely independent and can be executed in
parallel. (This is not true of the forward and backward searches.)
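Since Figure~\ref{alg:reorder} is likewise not reproduced here, the following Python sketch gives one possible rendering of the two reordering loops just described, using the same illustrative helper names as before.
\begin{verbatim}
def reorder(F, B, k, vertex, position, has_arc):
    # F, B: deques returned by the search; k: common final scan position.
    i = j = k
    while F:                              # forward loop: fill positions k, k+1, ...
        q = vertex[i]
        if q is not None and any(has_arc(x, q) for x in F):
            F.append(q); vertex[i] = None # q must move to a higher position
        if vertex[i] is None:
            q = F.popleft(); vertex[i] = q; position[q] = i
        i += 1                            # forward loop increments i last
    while B:                              # backward loop: fill positions k-1, k-2, ...
        j -= 1                            # backward loop decrements j first
        q = vertex[j]
        if q is not None and any(has_arc(q, y) for y in B):
            B.append(q); vertex[j] = None # q must move to a lower position
        if vertex[j] is None:
            q = B.popleft(); vertex[j] = q; position[q] = j
\end{verbatim}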
Figure~\ref{fig:top-search} gives an example of topological search and
reordering.
\begin{figure}
\caption{Topological search and reordering of the graph in Figure~\ref{fig:lim-search}.}
\label{fig:top-search}
\end{figure}
\begin{theorem}
\label{thm:top-corr}
Topological search is correct.
\end{theorem}
\begin{proof}
Let $(v, w)$ be a new arc such that $v > w$. The search maintains the invariant
that every vertex in $F$ is reachable from $w$ and $v$ is reachable from every vertex in
$B$. Thus if there is an arc $(u, z)$ with $u$ in $F$ and $z$ in $B$, there is a cycle.
Suppose the addition of $(v, w)$ creates a cycle. The cycle consists of $(v,
w)$ and a path $P$ from $w$ to $v$ of vertices in increasing order. Let $u$ be the
largest vertex on $P$ that is in $F$ at the end of the search. Since $u \ne v$, there is an arc $(u, z)$
on $P$. Vertex $z$ must be in $B$, or the search would not have stopped. We conclude that
the algorithm reports a cycle if and only if the addition of $(v, w)$ creates one.
Suppose the addition of $(v, w)$ does not create a cycle. When the search stops, the
number of positions $g \ge i$ such that $\mathit{vertex}(g) = \textit{null}$ is $|F|$. The
forward reordering loop maintains this invariant as it updates $F$. It also maintains the invariant that
once position $i$ is processed, every position from $k$ to $i$, inclusive, is non-null.
Thus if $i = n + 1$, $F$ must be empty, and the loop terminates. Symmetrically, the
backward reordering loop terminates before $j$ can become 0. Thus all vertices in $F \cup
B$ at the end of the search are successfully reordered; some other vertices may also
be reordered. Let $\overline{F}$ and $\overline{B}$ be the sets of vertices added to $F$ and to $B$,
respectively, during the search and reordering. Vertices in $\overline{F}$ move to higher
positions, vertices in $\overline{B}$ move to lower positions, and no other vertices move.
All vertices in $\overline{F}$ are reachable from $w$, and $v$ is reachable from all
vertices in $\overline{B}$. We show by case analysis that after the reordering every
arc $(x, y)$ has $x < y$. There are five cases, of which two pairs are symmetric.
Suppose $x$ and $y$ are both in $\overline{F} \cup \overline{B}$. Since there is no
cycle, it cannot be the case that $x$ is in $\overline{F}$ and $y$ is in
$\overline{B}$. The reordering moves all vertices in $\overline{F}$ after all
vertices in $\overline{B}$ without changing the order of vertices in $\overline{F}$
and without changing the order of vertices in $\overline{B}$. It follows that $x <
y$ after the reordering. This includes the case $(x, y) = (v, w)$, since $w$ is in
$\overline{F}$ and $v$ is in $\overline{B}$. Suppose $y$ is in $\overline{F}$
and $x$ is not in $\overline{F} \cup \overline{B}$.
The reordering does not move $x$ and moves $y$ higher in the order, so $x < y$ after the reordering. The case of $x$
in $\overline{B}$ and $y$ not in $\overline{F} \cup \overline{B}$ is symmetric.
Suppose $x$ is in $\overline{F}$ and $y$ is not in $\overline{F} \cup \overline{B}$.
Since $x < y$ before the reordering, the first loop of the reordering must reinsert
$x$ before it reaches the position of $y$; otherwise $y$ would be in $\overline{F}$.
Thus $x < y$ after the reordering. The case $y$ in $\overline{B}$ and $x$ not in
$\overline{F} \cup \overline{B}$ is symmetric. \qed
\end{proof}
To bound the running time of topological search, we extend the concept of
relatedness to vertex pairs. We say two vertices are {\em related} if they are
on a common path. Relatedness is symmetric; order on the path does not matter.
\begin{lemma}
\label{lem:top-cycle-time}
Over $m$ arc additions, topological search spends $\mathrm{O}(n^2)$ time testing for cycles.
\end{lemma}
\begin{proof}
Suppose addition of an arc $(v, w)$ triggers a search. Let $F$ and $B$ be the values
of the corresponding variables at the end of the search. The test for cycles takes
$\mathrm{O}(|F||B|)$ time. If this is the last arc addition, the test takes $\mathrm{O}(n^2)$ time. Each
earlier addition does not create a cycle; for such an addition, each pair $x$ in $F$
and $y$ in $B$ is related after the addition but not before: before the reordering $x <
y$, so if $x$ and $y$ were related there would be a path from $x$ to $y$, and the addition of
$(v, w)$ would create a cycle, consisting of a path from $w$ to $x$, the path
from $x$ to $y$, a path from $y$ to $v$, and arc $(v,w)$. Since there are at
most ${n \choose 2}$ related vertex pairs, the time for all cycle tests other than the last is $\mathrm{O}(n^2)$. \qed
\end{proof}
For each move of a vertex during reordering, we define the {\em distance} of the move
to be the absolute value of the difference between the positions of the vertex in the
old and new orders.
\begin{lemma}
\label{lem:sum-distances}
Over all arc additions, except the last one if it creates a cycle, the time spent by
topological search doing search and reordering is at most a constant times the
sum of the distances of all the vertex moves.
\end{lemma}
\begin{proof}
Consider an arc addition that triggers a search and reordering. Consider a vertex $q$
that is moved to a higher position; that is, it is added to $F$ during either the
search or the reordering and eventually placed in a new position during the
reordering. Let $i_1$ be its position before the reordering and $i_2$ its
position after the reordering. When $q$ is added to $F$, $i = i_1$; when $q$ is removed
from $F$, $i = i_2$. For each value of $i$ greater than $i_1$ and no greater
than $i_2$, there may be a test for the existence of an arc $(q, \mathit{vertex}(i))$: such a test
can occur during forward search or forward reordering but not both. The number of
such tests is thus at most $i_2 - i_1$, which is the distance $q$ moves. A
symmetric argument applies to a vertex moved to a lower position. Every test for an
arc is covered by one of these two cases. Thus the number of arc tests is at most
the sum of the distances of vertex moves. The total time spent in search and
reordering is $\mathrm{O}(1)$ per increment of $i$, per decrement of $j$, and per arc test. For
each increment of $i$ or decrement of $j$ there is either an arc test or an insertion of
a vertex into its final position. The number of such insertions is at most one per
vertex moved. The lemma follows. \qed
\end{proof}
It remains to analyze the sum of the distances of the vertex moves. To simplify the
analysis, we decompose the moves into pairwise swaps of vertices. Consider sorting a
permutation of $1$ through $n$ by doing a sequence of pairwise swaps of out-of-order
elements. The {\em distance} of a swap is twice the absolute value of the difference
between the positions of the swapped elements; the factor of two accounts for the two
elements that move. The sequence of swaps is {\em proper} if, once a pair is
swapped, no later swap reverses its order.
Consider the behavior of topological search over a sequence of arc additions,
excluding the last one if it creates a cycle. Identify the vertices with their final
positions. Then the topological order is a permutation, and the final permutation is
sorted.
\begin{lemma}
\label{lem:proper-swaps}
There is a proper sequence of vertex swaps whose total distance equals the sum of the
distances of all the reordering moves.
\end{lemma}
\begin{proof}
Consider an arc addition that triggers a search and reordering. As in the proof of
Theorem~\ref{thm:top-corr}, let $\overline{F}$ and $\overline{B}$ be the sets of
vertices added to $F$ and to $B$, respectively, during the search and reordering.
Consider the positions of the vertices in $\overline{F} \cup \overline{B}$ before and
after the reordering. After the reordering, these positions from lowest to highest
are occupied by the vertices in $\overline{B}$ in their original order, followed by
the vertices in $\overline{F}$ in their original order. We describe a sequence of
swaps that moves the vertices in $\overline{F} \cup \overline{B}$ from their positions
before the reordering to their positions after the reordering. Given the
outcome of the swaps so far, the next swap is of any two vertices $x$ in $\overline{F}$ and $y$
in $\overline{B}$ such that $x$ is in a smaller position than $y$ and no vertex in
$\overline{F} \cup \overline{B}$ is in a position between that of $x$ and that of $y$.
The swap of $x$ and $y$ moves $x$ higher, moves $y$ lower, and preserves the order of
the vertices in $\overline{F}$ as well as the order of the vertices in
$\overline{B}$. If no swap is possible, all vertices in $\overline{F}$ must follow
all vertices in $\overline{B}$, and since swaps preserve the order within
$\overline{F}$ and within $\overline{B}$ the vertices are now in their positions
after the reordering. Only a finite number of swaps can occur, since each vertex can
only move a finite distance (higher for a vertex in $\overline{F}$, lower for a
vertex in $\overline{B}$). The total distance of the moves of the vertices in
$\overline{F}$ is exactly half the distance of the swaps, as is the total distance of
the moves of the vertices in $\overline{B}$. Any particular pair of vertices is
swapped at most once. Repeat this construction for each arc addition. If an arc
addition causes a swap of $x$ and $y$, with $x$ moving higher and $y$ moving lower,
then the arc addition creates a path from $y$ to $x$, and no later arc addition can
cause a swap of $x$ and $y$. Thus the swap sequence is proper. \qed
\end{proof}
The following lemma was proved by Ajwani et al. \cite{Ajwani2006} as part of the
analysis of their $\mathrm{O}(n^{11/4})$-time algorithm. Their proof uses a linear
program. We give a combinatorial argument.
\begin{lemma}
\label{lem:swap-distance}
{\em \cite{Ajwani2006}} Given an initial permutation of $1$ through $n$, any proper
sequence of swaps has total distance $\mathrm{O}(n^{5/2})$.
\end{lemma}
\begin{proof}
If $\mathrm{\Pi}$ is a permutation of 1 to $n$, we denote by $\mathrm{\Pi}(i)$ the $i^{\text{th}}$
element of $\mathrm{\Pi}$. We define the {\em potential} of $\mathrm{\Pi}$ to be $\sum_{i<j}
(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$. The potential is always between $-n^3$ and $n^3$. We
compute the change in potential caused by a swap in a proper swap sequence.
Let $\mathrm{\Pi}$ be the permutation before the swap, and let $i < j$ be the positions
in $\mathrm{\Pi}$ of the pair of elements ($\mathrm{\Pi}(i)$ and $\mathrm{\Pi}(j)$) that are swapped.
The distance $d$ of the swap is $2(j - i)$. Since the swap sequence is proper,
$\mathrm{\Pi}(i) > \mathrm{\Pi}(j)$. Swapping $\mathrm{\Pi}(i)$ and $\mathrm{\Pi}(j)$ reduces the contribution
to the potential of the pair $i,j$ by $2(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$. The swap also
changes the contributions to the potential of pairs other than $i,j$,
specifically those pairs exactly one of whose elements is $i$ or $j$. We
consider three cases for the other element of the pair, say $k$. If $k < i$,
the swap increases the contribution of $k,i$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$ and
decreases the contribution of $k,j$ by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$, for a net change of
zero. Similarly, if $j < k$, the swap decreases the contribution of $i,k$ by
$\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$ and increases the contribution of $j,k$ by $\mathrm{\Pi}(i)
- \mathrm{\Pi}(j)$, for a net change of zero. More interesting is what happens if $i < k
< j$. In this case the swap decreases the contribution of both $i,k$ and $k,j$
by $\mathrm{\Pi}(i) - \mathrm{\Pi}(j)$. There are $j - i - 1$ such values of $k$. Summing
over all pairs, we find that the swap decreases the potential of the
permutation by $2(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))( 1 + j - i - 1) = d(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$.
Call a swap of $\mathrm{\Pi}(i)$ and $\mathrm{\Pi}(j)$ {\em small} if $\mathrm{\Pi}(i) - \mathrm{\Pi}(j) < \sqrt{n}$
and {\em big} otherwise. Because the swap sequence is proper, a given pair
can be swapped at most once. Thus there are $\mathrm{O}(n^{3/2})$ small swaps. Each has
distance at most $2(n - 1)$, so the sum of the distances of all small swaps is
$\mathrm{O}(n^{5/2})$. A big swap of distance $d$ reduces the potential by at least
$d\sqrt{n}$. Since the total decrease in potential over all swaps is
$\mathrm{O}(n^3)$, the sum of the distances of all big swaps is $\mathrm{O}(n^{5/2})$. \qed
\end{proof}
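The following snippet is a quick numerical sanity check of the potential bookkeeping in the proof (purely illustrative; the permutation and positions are arbitrary): swapping an out-of-order pair at positions $i < j$ decreases the potential by exactly $d(\mathrm{\Pi}(i) - \mathrm{\Pi}(j))$ with $d = 2(j - i)$.
\begin{verbatim}
def potential(pi):
    n = len(pi)
    return sum(pi[a] - pi[b] for a in range(n) for b in range(a + 1, n))

pi = [7, 3, 9, 1, 10, 4, 8, 2, 6, 5]    # an arbitrary permutation of 1..10
i, j = 2, 7                              # 0-based positions with pi[i] > pi[j]
d = 2 * (j - i)
drop_expected = d * (pi[i] - pi[j])      # decrease predicted by the proof
before = potential(pi)
pi[i], pi[j] = pi[j], pi[i]
assert before - potential(pi) == drop_expected
\end{verbatim}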
The proof of Lemma \ref{lem:swap-distance} does not require that the swap sequence be
proper; it suffices that every swap is of an out-of-order pair and no pair of
elements is swapped more than once. The lemma may even be true if all swaps
are of out-of-order pairs with some pairs swapped repeatedly, but our proof
fails in this case, because our bound on the distance of the small swaps
requires that there be $\mathrm{O}(n^{3/2})$ of them.
\begin{theorem}
\label{thm:top-search-time}
Over $m$ arc additions, topological search spends $\mathrm{O}(n^{5/2})$ time.
\end{theorem}
\begin{proof}
Topological search spends $\mathrm{O}(n^2)$ time on the last arc addition. By
Lemmas~\ref{lem:top-cycle-time}-\ref{lem:swap-distance}, it spends
$\mathrm{O}(n^{5/2})$ on all the rest. \qed
\end{proof}
The bound of Theorem~\ref{thm:top-search-time} may be far from tight. In the
remainder of this section we discuss lower bounds on the running time of topological
search, and we speculate on improving the upper bound.
Katriel~\cite{Katriel2004} showed that any topological sorting algorithm that is
local (as defined in Section~\ref{sec:lim-search}: the algorithm reorders
only affected vertices) must do $\mathrm{\Omega}(n^2)$ vertex renumberings on a sequence of arc
additions that form a path. This bound is $\mathrm{\Omega}(n)$ amortized per arc on a graph of
$\mathrm{O}(n)$ arcs. She also proved that the topological sorting algorithm of
Pearce and Kelly~\cite{Pearce2006} does $\mathrm{O}(n^2)$ vertex renumberings. Since topological
search is a local topological sorting algorithm, her lower bound applies to this
algorithm. Her lower bound on vertex reorderings is tight for topological search,
since a proper sequence of swaps contains at most ${n \choose 2}$ swaps, and
each pair of reorderings corresponds to at least one swap.
To get a bigger lower bound, we must bound the total distance of vertex moves, not
their number. Ajwani~\cite{AjwaniThesis} gave a result for a related problem that
implies the following: on a sequence of arc additions that form a path, topological
search can take $\mathrm{\Omega}(n^2\log n)$ time. We proved this result independently in
our conference paper~\cite{Haeupler2008b}; our proof uses the same construction
as Ajwani's proof. This bound is $\mathrm{\Omega}(n\log n)$ amortized per arc on a graph
of $\mathrm{O}(n)$ arcs.
We do not know if Ajwani's bound is tight for graphs with $\mathrm{O}(n)$ arcs, but it is
not tight for denser graphs. There is an interesting connection between the running
time of topological search and the notorious $k$-levels problem of computational
geometry. Uri Zwick~(private communication, 2009) pointed this out to us. The
$k$-levels problem is the following: Consider the intersections of $n$ lines in
the plane in general position: each intersection is of only two lines, and the
intersections have distinct $x$-coordinates. An intersection is a {\em
$k$-intersection} if there are exactly $k$ lines below it (and $n - k - 2$
lines above it). What is the maximum number of $k$-intersections as a function
of $n$ and $k$? For our purposes it suffices to consider $n$ even and $k = n/2
- 1$. We call an intersection with $n/2 - 1$ lines below it a {\em halving
intersection}. The current best upper and lower bounds on the maximum number of
halving intersections are $\mathrm{O}(n^{4/3})$~\cite{Dey1998} and
$\mathrm{\Omega}(n2^{\sqrt{2\lg n}}/\sqrt{\lg n})$ (\cite{Nivasch2008}; see also~\cite{Toth2001}).
The relationship between the $k$-levels problem and our problem does not require
that the lines be straight; it only requires that each pair intersect only once.
Thus instead of a set of lines we consider a set of {\em pseudolines},
arbitrary continuous functions from the real numbers to the real numbers, each
pair of which intersect at most once. Such a set is in {\em general position}
if no point is common to three or more pseudolines, no two intersections of
pseudolines have the same $x$-coordinate, and each intersection is a crossing
intersection: if pseudolines $P$ and $Q$ intersect and $P$ is above $Q$ to the
left of the intersection, then $Q$ is above $P$ to the right of the
intersection. The best bounds on the number of halving intersections of $2n$
pseudolines in general position are $\mathrm{O}(n^{4/3})$ (\cite{TamakiT2003}; see
also~\cite{SharirS2003}) (the same as for lines) and $\mathrm{\Omega}(n2^{\sqrt{2\lg
n}})$~\cite{Zwick2005}. The latter gives a lower bound of $\mathrm{\Omega}(n^2
2^{\sqrt{2\lg n}})$ on the worst-case running time of topological search, as we
now show.
\begin{theorem}
\label{thm:halving-lb}
Let $n$ be even. On a graph of $3n/2$ vertices, topological search can spend
$\mathrm{\Omega}(n)$ time per arc addition for at least $H(n)$ arc additions, where $H(n)$
is the maximum number of halving intersections of $n$ pseudolines in the plane
in general position.
\end{theorem}
\begin{proof}
Given a set of $n$ pseudolines with $H(n)$ halving intersections, we construct a
sequence of $H(n)$ arc additions on a graph of $3n/2$ vertices on which
topological search spends $\mathrm{\Omega}(n)$ time on each arc addition. Given such a
set of pseudolines, choose a value $x_0$ of the $x$-coordinate sufficiently
small that all the halving intersections have $x$-coordinates larger than $x_0$. Number
the pseudolines from 1 to $n$ from highest to lowest $y$-coordinate at $x_0$, so
that the pseudoline with the highest $y$-coordinate gets number 1 and the one with
the lowest gets number $n$. Construct a graph with $3n/2$ vertices and an initial
(arbitrary) topological order. Number the first $n/2$ vertices in order from
$n$ down to $n/2 + 1$, and number the last $n/2$ vertices in order from $n/2$
down to 1, so that the first vertex gets number $n$, the $(n/2)^{\text{th}}$
gets number $n/2 + 1$, the middle $n/2$ get no number, the $(n + 1)^{\text{st}}$
gets number $n/2$, and the last gets number 1. These numbers are permanent and
are a function only of the initial order. Identify vertices by their number.
Process the halving intersections in order by $x$-coordinate. If the
$k^{\text{th}}$ halving intersection is of pseudolines $i$ and $j$ with $i < j$,
add an arc $(i, j)$ to the graph. To the left of the intersection, pseudoline
$i$ is above pseudoline $j$; to the right of the intersection, pseudoline $j$ is
above pseudoline $i$. Figure~\ref{fig:halving-lb} illustrates this construction.
\begin{figure}
\caption{(a) A set of $n = 8$ pseudolines with $H(n) = 7$ halving intersections.
Although the pseudolines are straight in this example, in general they need not
be. (b) The corresponding sequence of arc additions on a graph of $3n/2 = 12$
vertices on which topological search takes $\Omega(nH(n))$ time. The arc
additions correspond to the halving intersections processed in increasing order
by $x$-coordinate; only the first four arc additions are shown.}
\label{fig:halving-lb}
\end{figure}
Since each arc $(i, j)$ has $i < j$, the graph remains acyclic. Since two
pseudolines have only one intersection, a given arc is added only once.
Consider running topological search on this set of arc additions. We claim that each arc
addition moves exactly one vertex from the last third of the topological order
to the first third and vice-versa; the vertices in the middle third are never
reordered. Each such arc addition takes $\mathrm{\Omega}(n)$ time, giving the theorem.
To verify the claim, we prove the following invariant by induction on the number of arc
additions: the set of vertices in the first or last third, respectively, of the
topological order has the same numbers as the bottom or top half of the
pseudolines, respectively. In particular, a halving intersection of two
pseudolines $i, j$ with $i < j$ corresponds to a swap of vertex $i$, currently in the
last third, with vertex $j$, currently in the first third, giving the claim.
Intersections that are not halving intersections preserve the invariant.
Suppose the invariant is true just to the left of a halving intersection of
pseudolines $i$ and $j$ with $i < j$. Just to the left of the intersection,
pseudolines $i$ and $j$ are the $n/2$ and $n/2 + 1$ highest pseudolines,
respectively. By the induction hypothesis, just before the addition of $(i, j)$ vertex $i$ is in the
last third of the topological order and vertex $j$ is in the first third.
Suppose that just before the addition of $(i, j)$ there is an arc $(j, k)$ with
$k$ in the first third. Then $j < k$, but pseudoline $k$ is in the bottom half
and hence must be below pseudoline $j$. This is impossible, since the existence
of the arc $(j, k)$ implies that pseudoline $k$ crossed above pseudoline $j$ to
the left of the intersection of $i$ and $j$. Thus there can be no such arc $(j, k)$.
Symmetrically, there can be no arc $(k, i)$ with $k$ in the last third. It
follows that the topological search triggered by the addition of $(i, j)$ will
compute a set $F$ all of whose vertices except $j$ are in the last third and a
set $B$ all of whose vertices except $i$ are in the first third. The
subsequent reordering will move $i$ to the first third, move $j$ to the last
third, and possibly reorder other vertices within the first and last thirds.
Thus the invariant remains true after the addition of $(i, j)$. By induction
the invariant holds, giving the claim, and the theorem. \qed
\end{proof}
\begin{corollary}
There is a constant $c > 0$ such that, for all $n$, there is a
sequence of arc additions on which topological search takes at least $cn^2
2^{\sqrt{2\lg n}}$ time.
\end{corollary}
Unfortunately the reduction in the proof of Theorem~\ref{thm:halving-lb} goes
only one way. We have been unable to construct a reduction in the other
direction, nor are we able to derive a better upper bound for topological
search via the methods used to derive upper bounds on the number of halving
intersections.
\section{Strong Components} \label{sec:strong}
All the known topological ordering algorithms can be extended to the maintenance of
strong components with at most a constant-factor increase in running time. Pearce
\cite{Pearce2005} and Pearce and Kelly \cite{Pearce2003b} sketch how to extend their
algorithm and that of Marchetti-Spaccamela et al.~\cite{Marchetti1996} to strong
component maintenance. Here we describe how to extend soft-threshold search and
topological search. The techniques differ slightly for the two algorithms, since one
algorithm is designed for the sparse case and the other for the dense case.
We formulate the problem as follows: Maintain the partition of the vertices defined
by the strong components. For each strong component, maintain a {\em canonical
vertex}. The canonical vertex represents the component; the algorithm is free to
choose any vertex in the component to be the canonical vertex. Support the query
$\mathit{find}(v)$, which returns the canonical vertex of the component containing vertex $v$.
Maintain a list of the canonical vertices in a topological order of the corresponding
components.
To represent the vertex partition, we use a disjoint set data
structure~\cite{Tarjan1975,Tarjan1984}. This structure begins with the partition
consisting of singletons and supports find queries and the operation $\mathit{unite}(x, y)$,
which, given canonical vertices $x$ and $y$, forms the union of the sets containing
$x$ and $y$ and makes $x$ the canonical vertex of the new set. If the sets are
represented by trees, the finds are done using path compression, and the unites are
done using union by rank, the amortized time per find is $\mathrm{O}(1)$ if the total time
charged for the unites is $\mathrm{O}(n\log n)$~\cite{Tarjan1984}. (In fact, the time
charged to the unites can be made much smaller, but this weak bound suffices for us.)
Since searching and reordering take far more than $\mathrm{O}(n\log n)$ time, we can
treat the set operations as taking $\mathrm{O}(1)$ amortized time each.
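A minimal Python sketch of such a disjoint set structure (illustrative only): \texttt{find} uses path compression, and \texttt{unite} links by rank but records $x$ as the canonical vertex of the merged set, as required above.
\begin{verbatim}
class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))   # tree structure
        self.rank = [0] * n
        self.canon = list(range(n))    # canonical vertex stored at each root

    def _root(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path compression
            v = self.parent[v]
        return v

    def find(self, v):
        return self.canon[self._root(v)]

    def unite(self, x, y):
        # x and y must be canonical; x becomes canonical for the union
        rx, ry = self._root(x), self._root(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        self.canon[rx] = x             # the union is named after x
\end{verbatim}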
To maintain strong components using soft-threshold search, we represent the graph by
storing, for each canonical vertex $x$, a list of arcs out of its component, namely
those arcs $(y, z)$ with $\mathit{find}(y) = x$, and a list of arcs into its component, namely
those arcs $(y, z)$ with $\mathit{find}(z) = x$. This represents the graph of strong components,
except that there may be multiple arcs between the same pair of strong components,
and there may be loops, arcs whose ends are in the same component. When doing a
search, we delete loops instead of traversing them. When the addition of an arc $(v,
w)$ combines several components into one, we form the incoming list and the outgoing
list of the new component by combining the incoming lists and outgoing lists,
respectively, of the old components. This takes $\mathrm{O}(1)$ time per old component, if
the incoming and outgoing lists are circular. Deletion of a loop takes $\mathrm{O}(1)$ time
if the arc lists are doubly linked.
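The constant-time combination and loop deletion rely on circular doubly linked arc lists; the following Python sketch (our own minimal node type, not the paper's) shows the two operations.
\begin{verbatim}
class ArcNode:
    """Node of a circular doubly linked arc list."""
    def __init__(self, arc):
        self.arc = arc
        self.prev = self.next = self

def splice(a, b):
    """Combine two circular lists, given one node of each; O(1)."""
    if a is None: return b
    if b is None: return a
    a_next, b_next = a.next, b.next
    a.next, b_next.prev = b_next, a
    b.next, a_next.prev = a_next, b
    return a

def delete(node):
    """Remove one node (e.g. an arc that became a loop); O(1)."""
    if node.next is node:
        return None                     # list becomes empty
    node.prev.next, node.next.prev = node.next, node.prev
    return node.next
\end{verbatim}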
Henceforth we identify each strong component with its canonical vertex, and we
abbreviate $\mathit{find}(x)$ by $f(x)$. If a new arc $(v, w)$ has $f(v) > f(w)$, do a
soft-threshold search forward from $f(w)$ and backward from $f(v)$. During the
search, do not stop when a forward arc traversal reaches a component in $B$ or
when a backward arc traversal reaches a component in $F$. Instead, allow
components to be in both $F$ and $B$. Once the search stops, form the new
component, if any. Then reorder the canonical vertices and delete from the
order those that are no longer canonical. Here are the details. When a new
arc $(v, w)$ has $f(v) > f(w)$, do the search by calling
\softthresholdsearch{$f(v)$,$f(w)$}, where \softthresholdsearch is defined
as in Section~\ref{sec:soft-search} but with the macro \searchstep
redefined as in Figure~\ref{alg:search-step-strong}. The new version of
\searchstep is just like the old one except that it visits canonical vertices
instead of all vertices, it uses circular instead of linear arc lists, and it
does not do cycle detection: \softthresholdsearch terminates only when $F_A$
or $B_A$ is empty, and it always returns null.
\begin{figure}
\caption{Redefinition of \searchstep to find strong components using
soft-threshold search.}
\label{alg:search-step-strong}
\end{figure}
Once the search finishes, let $t = \min(\{f(v)\} \cup \{x \in F| \mathit{out}(x) \ne
\textit{null}\})$. Compute the sets $F_<$ and $B_>$. Find the new component, if any, by
running a static linear-time strong components algorithm on the subgraph of the
graph of strong components whose vertex set is $X = F_< \cup \{t\} \cup B_>$
and whose arc set is $Y = \{(f(u), f(x))|(u, x) \text{ is an arc with } f(u)
\text{ in } \linebreak[0] F_< \text{ and } f(u) \ne f(x)\} \cup \{(f(y),
f(z))|(y, z) \text{ is an arc with } f(z) \in B_> \text{ and } f(y) \ne f(z)\}$. If a new
component is found, combine the old components it contains into a new component
with canonical vertex $v$.
Reorder the list of vertices in topological order by moving the vertices in $X -
\{t\}$ as in Section~\ref{sec:2way-search}. Then delete from the list all
vertices that are no longer canonical, namely the canonical vertices other than $f(v)$ of the
old components contained in the new component.
{\em Remark}: Since the addition of $(v, w)$ can only form a single new component,
running a strong components algorithm to find this component is overkill. A simpler
alternative is to unmark all vertices in $X$ and then run a forward depth-first
search from $f(w)$, traversing arcs in $Y$. During the search, mark vertices as
follows: Mark $f(v)$ if it is reached. When retreating along an arc $(f(u), f(x))$,
mark $f(u)$ if $f(x)$ is marked. At the end of the search, the marked vertices are
the canonical vertices contained in the new component.
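A Python sketch of this marking search (the names are ours; it relies on the fact that the old component graph, and hence the subgraph on $X$ with arc set $Y$, is acyclic):
\begin{verbatim}
def new_component_vertices(X, Y, fw, fv):
    """Canonical vertices on a path from fw to fv in the DAG (X, Y);
    an empty result means the new arc creates no new component."""
    succ = {x: [] for x in X}
    for (a, b) in Y:
        succ[a].append(b)
    marked, done = set(), set()

    def dfs(u):
        done.add(u)
        if u == fv:
            marked.add(u)
        for x in succ[u]:
            if x not in done:
                dfs(x)
            if x in marked:        # retreating along (u, x): mark u if x is marked
                marked.add(u)

    dfs(fw)
    return marked
\end{verbatim}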
\begin{theorem}
\label{thm:strong-soft-corr}
Maintaining strong components via soft-threshold search is correct.
\end{theorem}
\begin{proof}
By induction on the number of arc additions. Consider the graph of strong components
just before an arc $(v, w)$ is added. This addition forms a new component if and
only if $f(v) > f(w)$ and there is a path from $f(w)$ to $f(v)$. Furthermore the old
components contained in the new component are exactly the components on paths
from $f(w)$ to $f(v)$. The components on such a path are in increasing order,
so the path consists of a sequence of one or more components in $F_<$, possibly
$t$, and a sequence of one or more components in $B_>$. Each arc on such a
path is in $Y$. It follows that the algorithm correctly forms the new
component. If there is no new component, the reordering is exactly the same as
in Section~\ref{sec:2way-search}, so it correctly restores topological order.
Suppose there is a new component. Then certain old components are combined
into one, and their canonical vertices other than $f(v)$ are deleted from the
list of canonical vertices in topological order. We must show that the new
order is topological. The argument in the proof of Theorem~\ref{thm:2way-corr}
applies, except that there are some new possibilities. Consider an arc $(x, y)$
other than $(v, w)$. One of the cases in the proof of
Theorem~\ref{thm:2way-corr} applies unless at least one of $x$ and $y$ is in
the new component. If both are in the new component, then $(x, y)$ becomes a
loop. Suppose just one, say $y$, is in the new component. Then $f(x)$ cannot
be forward, or it would be in the new component. Either $f(x)$ is in $B_>$ or
$f(x)$ is not in $X$; in either case, $f(x)$ precedes $f(v)$ after the
reordering. The argument is symmetric if $x$ but not $y$ is in the new
component. \qed
\end{proof}
To bound the running time of the strong components algorithm we need to extend
Lemma~\ref{lem:2way-search-rel} and Theorem~\ref{thm:2way-search-arcs}.
\begin{lemma}
\label{lem:strong-search-rel}
Suppose the addition of $(v, w)$ triggers a search. Let $(u, x)$ and $(y, z)$,
respectively, be arcs traversed forward and backward during the search, not
necessarily during the same search step, such that $f(u) < f(z)$. Then either $(u,
x)$ and $(y, z)$ are unrelated before the addition of $(v, w)$ but related afterward,
or they are related before the addition and the addition makes them into loops.
\end{lemma}
\begin{proof}
After $(v, w)$ is added, there is a path containing both of them, so they are related
after the addition. If they were related before the addition, then there must be a
path containing $(u, x)$ followed by $(y, z)$. After the addition there is a path
from $z$ to $u$, so $u$, $x$, $y$, and $z$ are in the new component, and both $(u,
x)$ and $(y, z)$ become loops. \qed
\end{proof}
\begin{theorem}
\label{thm:strong-arcs}
Over $m$ arc additions, the strong components algorithm does $\mathrm{O}(m^{3/2})$ arc
traversals.
\end{theorem}
\begin{proof}
Divide the arc traversals during a search into those of arcs that become loops as a
result of the arc addition that triggered the search, and those that do not. Over
all searches, there are at most $2m$ traversals of arcs that become loops: each
such arc can be traversed both forward and backward. By
Lemma~\ref{lem:strong-search-rel} and the proof of
Theorem~\ref{thm:2way-search-arcs}, there are at most $4m^{3/2}$ traversals of
arcs that do not become loops. \qed
\end{proof}
\begin{theorem}
Maintaining strong components via soft-threshold search takes \linebreak
$\mathrm{O}(m^{3/2})$ time over $m$ arc additions, worst-case if $s$ is always a median or approximate
median of the set of choices, expected if $s$ is always chosen uniformly at
random.
\end{theorem}
\begin{proof}
Consider the addition of an arc $(v, w)$ such that $f(v) > f(w)$. Each search step
either traverses two arcs or deletes one or two loops. An arc can only become a
loop once and be deleted once, so the extra time for such events is $\mathrm{O}(m)$
over all arc additions. The arcs in $Y$ were traversed by the search, so the
time to form the new component and to reorder the vertices is $\mathrm{O}(1)$ per
arc traversal. The theorem follows from Theorem~\ref{thm:strong-arcs} and the
proofs of Theorems~\ref{thm:soft-search-time} and~\ref{thm:soft-search-rtime}.
\qed
\end{proof}
To maintain strong components via topological search, we represent the graph of
strong components by an adjacency matrix $A$ with one row and one column per
canonical vertex. If $x$ and $y$ are canonical vertices, $A(x, y) = 1$ if $x \ne y$
and there is an arc $(q, r)$ with $f(q) = x$ and $f(r) = y$; otherwise, $A(x, y) =
0$. We represent the topological order of components by an explicit numbering of the
canonical vertices using consecutive integers starting from one. We also store the
inverse of the numbering. If $x$ is a canonical vertex, $\mathit{position}(x)$ is
its number; if $i$ is a vertex number, $\mathit{vertex}(i)$ is the canonical vertex with
number $i$. Note that the matrix $A$ is indexed by vertex, {\em not} by vertex number; the
numbers change too often to allow indexing by number.
To maintain strong components via topological search, initialize all entries of $A$
to zero. Add a new arc $(v, w)$ by setting $A(f(v), f(w)) = 1$. If $f(v) > f(w)$,
search forward from $f(w)$ and backward from $f(v)$ by executing
\topologicalsearch{$f(v)$, $f(w)$} where \topologicalsearch is defined as in
Section~\ref{sec:top-search}. Let $k$ be the common value of $i$ and $j$ when
the search stops. After the search, find the vertex set of the new component,
if any, by running a linear-time static strong components algorithm on the graph
whose vertex set is $X = F \cup B$ and whose arc set is $Y = \{(x, y)|x
\text{ and } y \text{ are in } F \cup B \text{ and } A(x, y) = 1\}$. Whether or
not there is a new component, reorder the old canonical vertices exactly as in
Section~\ref{sec:top-search}. Finally, if there is a new component, do the
following: form its vertex set by combining the vertex sets of the old
components contained in it. Let the canonical vertex of the new component be
$\mathit{vertex}(k)$. Form a row and column of $A$ representing the arcs out of and
into the new component by combining those of the old components contained in
it. Delete from the topological order all the vertices that are no longer
canonical. Number the remaining canonical vertices consecutively from 1.
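A sketch of the row and column combination, using a Boolean NumPy matrix indexed by vertex (our own rendering of the bookkeeping, not the paper's code):
\begin{verbatim}
import numpy as np

def combine_component(A, old_canonicals, new_canonical):
    """OR together the rows and the columns of the old canonical vertices,
    store the result in the row/column of new_canonical, and clear the
    others.  A is a boolean matrix indexed by vertex, not by vertex number."""
    rows = list(old_canonicals)
    A[new_canonical, :] = A[rows, :].any(axis=0)
    A[:, new_canonical] = A[:, rows].any(axis=1)
    A[new_canonical, new_canonical] = False   # arcs inside the component are loops
    for x in rows:
        if x != new_canonical:
            A[x, :] = False
            A[:, x] = False                   # x is no longer canonical
\end{verbatim}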
{\em Remark}: As in soft-threshold search, using a static strong components
algorithm to find the new component is overkill; a better method is the one described
in the remark before Theorem~\ref{thm:strong-soft-corr}: run a forward
depth-first search from $f(w)$, marking vertices when they are found to be in
the new component.
\begin{theorem}
\label{thm:strong-top-corr}
Maintaining strong components via topological search is correct.
\end{theorem}
\begin{proof}
By induction on the number of arc additions. Consider the addition of an arc $(v,
w)$. Let $f$ and $f'$ be the canonical vertex function just before and just after
this addition, respectively. The addition creates a new component if and only if
$f(v) > f(w)$ and there is a path from $f(w)$ to $f(v)$. Suppose $f(v) > f(w)$ and
let $F$ and $B$ be the values of the corresponding variables just after the search
stops. Any path from $f(w)$ to $f(v)$ consists of a sequence of one or more vertices
in $F$ followed by a sequence of one or more vertices in $B$. Each arc on such a
path is in $Y$. It follows that the algorithm correctly finds the new component. If
there is no new component, the algorithm reorders the vertices exactly as in
Section~\ref{sec:top-search} and thus restores topological order. Suppose there is a
new component. Let $k$ be the common value of $i$ and $j$ when the search
stops. The reordering sets $\mathit{vertex}(k) = f(w)$. This vertex is the canonical vertex of the new component.
Let $(x, y)$ be an arc. The same argument as in the proof of Theorem~\ref{thm:top-corr} shows
that $f'(x) = f(x) < f(y) = f'(y)$ after the reordering unless $f(x)$ or $f(y)$
or both are in the new component. If both are in the new component, then $(x,
y)$ is a loop after the addition of $(v, w)$. Suppose $f(x)$ but not $f(y)$ is
in the new component. Then $f(x) \in F \cup B$. If $f(x) \in F$, then
$\mathit{position}(f(y)) > k$ after the reordering but before the renumbering, so $f'(x)
< f'(y)$ after the reordering and renumbering. If $f(x) \in B$, then $f(y)
\notin B$, since otherwise $f(y)$ is in the component. It follows that
$\mathit{position}(f(y)) > k$ before the reordering, and also after the reordering but
before the renumbering, so $f'(x) < f'(y)$ after the reordering and renumbering.
A symmetric argument applies if $f(y)$ but not $f(x)$ is in the new component.
\qed
\end{proof}
\begin{theorem}
Maintaining strong components via topological search takes $\mathrm{O}(n^{5/2})$ time over
all arc additions.
\end{theorem}
\begin{proof}
The time spent combining rows and columns of $A$ and renumbering vertices after
deletion of non-canonical vertices is $\mathrm{O}(n)$ per deleted vertex, totaling
$\mathrm{O}(n^2)$ time over all arc additions. The time spent to find the new component
after a search is $\mathrm{O}((|F| + |B|)^2) = \mathrm{O}(|F||B|)$ since $|B| \le |F| \le |B| + 1$,
where $F$ and $B$ are the values of the respective variables at the end of the search.
If $x$ is in $F$ and $y$ is in $B$, then either $x$ and $y$ are unrelated before
the arc addition that triggered the search but related after it (and possibly in the
same component), or they are related and in different components before the arc
addition but in the same component after it. A given pair of
vertices can become related at most once and can be combined into one component at
most once. There are ${n \choose 2}$ vertex pairs. Combining these facts, we
find that the total time spent to find new components is $\mathrm{O}(n^2)$.
To bound the rest of the computation time, we apply
Theorem~\ref{thm:top-search-time}. To do this, we modify the strong components
algorithm so that it does not delete non-canonical vertices from the topological
order but leaves them in place. Such vertices have no incident arcs and are never
moved again. This only makes the search and reordering time longer, since the
revised algorithm examines non-canonical vertices during search and reordering,
whereas the original algorithm does not. The proof of
Theorem~\ref{thm:top-search-time} applies to the revised algorithm, giving a bound of
$\mathrm{O}(n^{5/2})$ on the time for search and reordering. \qed
\end{proof}
\section{Remarks} \label{sec:remarks}
We are far from a complete understanding of the incremental topological ordering
problem. Indeed, we do not even have a tight bound on the running time of
topological search. Given the connection between this running time and the
$k$-levels problem (see Section~\ref{sec:top-search}), getting a tighter bound seems
a challenging problem. As mentioned in the introduction, Bender et
al.~\cite{Bender2009} have proposed a completely different algorithm with a running
time of $\mathrm{\Theta}(n^2\log n)$.
A more general problem is to find an algorithm that is efficient for any graph
density. Our lower bound on the number of vertex reorderings is $\mathrm{\Omega}(nm^{1/2})$
for any local algorithm (see the end of Section \ref{sec:soft-search}); we conjecture
that there is an algorithm with a matching running time, to within a polylogarithmic
factor. For sparse graphs, soft-threshold search achieves this bound to within a
constant factor. For dense graphs, the algorithm of Bender, Fineman, and Gilbert
achieves it to within a logarithmic factor. For graphs of intermediate density,
nothing interesting is known.
We have used total running time to measure efficiency. An alternative is to use an
incremental competitive model \cite{Ramalingam1991}, in which the time spent to
handle an arc addition is compared to the minimum work that must be done by any
algorithm, given the same topological order and the same arc addition. The minimum
work that must be done is the minimum number of vertices that must be reordered,
which is the measure that Ramalingam and Reps used in their lower bound. (See the
end of Section \ref{sec:soft-search}.) But no existing algorithm handles an arc
addition in time polynomial in the minimum number of vertices that must be reordered.
To obtain positive results, researchers have compared the performance of their
algorithms to the minimum sum of degrees of reordered vertices \cite{Alpern1990}, or
to a more-refined measure that counts out-degrees of forward vertices and in-degrees
of backward vertices \cite{Pearce2006}. For these models, appropriately balanced
forms of ordered search are competitive to within a logarithmic factor
\cite{Alpern1990,Pearce2006}. In such a model, semi-ordered search is competitive to
within a constant factor. We think, though, that these models are misleading: they
ignore the possibility that different algorithms may maintain different topological
orders, they do not account for the correlated effects of multiple arc additions, and
good bounds have only been obtained for models that overcharge the adversary.
Alpern et al. \cite{Alpern1990} and Pearce and Kelly \cite{Pearce2007} studied
batched arc additions as well as single ones. Pearce and Kelly give an algorithm
that handles an addition of a batch of arcs in $\mathrm{O}(m')$ time, where $m'$ is
the total number of arcs after the addition, and such that the total time for all arc additions
is $\mathrm{O}(nm)$. Thus on each batch the algorithm has the same time bound as a
static algorithm, and the overall time bound is that of the incremental algorithm of
Marchetti-Spaccamela et al. \cite{Marchetti1996}.
This result is not surprising, because {\em any} incremental topological ordering
algorithm can be modified so that each batch of arc additions takes $\mathrm{O}(m')$
time but the overall running time increases by at most a constant factor. The idea is to run
a static algorithm concurrently with the incremental algorithm, each
maintaining its own topological order. Here are the details. The incremental
algorithm maintains a set of added arcs that have not yet been processed.
Initially this set is empty. To handle a new batch of arcs, add them to the
graph and to the set of arcs to be processed. Then start running a static
algorithm; concurrently, resume the incremental algorithm on the expanded set
of new arcs. The incremental algorithm deletes an arc at a time from this set
and does the appropriate processing. Allocate time in equal amounts to the two
algorithms. If the static algorithm stops before the incremental algorithm
processes all the arcs, suspend the incremental algorithm and use the
topological order computed by the static algorithm as the current order. If
the incremental algorithm processes all the arcs, stop the static algorithm and
use the topological order computed by the incremental algorithm as the current
order. This algorithm runs a constant factor slower than the incremental
algorithm and spends $\mathrm{O}(m')$ time on each batch of arcs.
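A rough Python sketch of this interleaving, with the two algorithms modeled as generators that do $\mathrm{O}(1)$ work per step (\texttt{static\_sort\_steps} and \texttt{incremental\_steps} are placeholders of our own, not functions defined in this paper):
\begin{verbatim}
def handle_batch(graph, order, pending, new_arcs,
                 static_sort_steps, incremental_steps):
    # Both *_steps arguments are generators that do O(1) work per yield and
    # return a topological order when they finish.
    pending.extend(new_arcs)
    stat = static_sort_steps(graph)                  # sorts the whole graph
    incr = incremental_steps(graph, order, pending)  # processes arcs in pending
    while True:                                      # equal time slices
        try:
            next(stat)
        except StopIteration as done:
            return done.value   # static algorithm finished first: use its order
        try:
            next(incr)
        except StopIteration as done:
            return done.value   # incremental algorithm caught up: use its order
\end{verbatim}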
For the special case of soft-threshold search, this method can be improved to
maintain a single topological order, and to restart the incremental algorithm
each time the static algorithm completes first. The time bound remains the
same. If the static algorithm stops first, replace the topological order
maintained by the incremental algorithm by the new one computed by the static
algorithm, and empty the set of new arcs. These arcs do not need to be
processed by the incremental algorithm. This works because the running time
analysis of soft-threshold search does not use the current topological order,
only the current graph, specifically the number of related arc pairs. Whether
something similar works for topological search is open. Much more interesting
would be an overall time bound based on the size and number of batches that is
an improvement for cases other than one batch of $m$ arcs and $m$ batches of
single arcs.
Alpern et al. \cite{Alpern1990} also allowed unrelated vertices to share a position
in the order. More precisely, their algorithm maintains a numbering of the vertices
such that if $(v, w)$ is an arc, $v$ has a lower number than $w$, but unrelated
vertices may have the same number. This idea is exploited by Bender, Fineman, and
Gilbert in their new algorithm.
\begin{acks}
{\normalsize
The last author thanks Deepak Ajwani, whose presentation at the 2007 Data Structures
Workshop in Bertinoro motivated the work described in Section \ref{sec:soft-search},
and Uri Zwick, who pointed out the connection between the running time of topological
search and the $k$-levels problem. All the authors thank the anonymous referees
for their insightful suggestions that considerably improved the presentation.
}
\end{acks}
\end{document} |
\begin{document}
\title{Thermal Entanglement Phase Transition in Coupled Harmonic Oscillators with Arbitrary Time-Dependent Frequencies}
\author{DaeKil Park$^{1,2}$}
\affiliation{$^1$Department of Electronic Engineering, Kyungnam University, Changwon
631-701, Korea \\
$^2$Department of Physics, Kyungnam University, Changwon
631-701, Korea
}
\begin{abstract}
We derive explicitly the thermal state of the two coupled harmonic oscillator system when the spring and coupling constants are
arbitrarily time-dependent. In particular, we focus on the case of a sudden change of frequencies. In this case we compute the purity
function, the R\'{e}nyi and von Neumann entropies, and the mutual information analytically and examine their temperature dependence. We also discuss the
thermal entanglement phase transition by making use of the negativity-like quantity. Our calculation shows that the critical temperature $T_c$ increases with
increasing difference between the initial and final frequencies. In this way one can protect the entanglement against the external temperature by introducing a large difference between the initial and final frequencies.
\end{abstract}
\maketitle
\section{Introduction}
Entanglement\cite{schrodinger-35,text,horodecki09} is a key physical resource in quantum information processing. For example,
it plays a crucial role in quantum teleportation\cite{teleportation},
superdense coding\cite{superdense}, quantum cloning\cite{clon}, quantum cryptography\cite{cryptography,cryptography2}, quantum
metrology\cite{metro17}, and quantum computing\cite{qcomputer,qcreview}. In particular, physical realization of quantum cryptography and quantum computers seems likely to be accomplished in the near future\footnote{see Ref. \cite{white} and web page
https://www.computing.co.uk/ctg/news/3065541/european-union-reveals-test-projects-for-first-tranche-of-eur1bn-quantum-computing-fund.}.
Although entanglement is a highly useful property of quantum states, it is normally fragile when a quantum system interacts with its surroundings.
Interaction with the environment makes the given quantum system undergo decoherence\cite{zurek03} and, as a result, it loses its quantum properties.
Thus, decoherence significantly changes the quantum entanglement. Sometimes entanglement simply decays exponentially in time.
Sometimes, however, entanglement sudden death (ESD) occurs when the entangled multipartite quantum system is
embedded in Markovian environments\cite{markovian,yu05-1,yu06-1,yu09-1,almeida07,park-16}. This means that the entanglement is completely disentangled at finite times.
The most typical environment is an external temperature. In finite-temperature quantum mechanics the external temperature is introduced through an imaginary time in the
zero-temperature formalism. Thus, the exponential decay or an ESD-like phenomenon can also occur with respect to temperature. If the external temperature induces
an ESD-like phenomenon, there exists a critical temperature $T_c$, below or above which the entanglement of a system is
nonzero or completely zero, respectively. We will call this phenomenon the thermal entanglement phase transition (TEPT) between the nonzero-entanglement phase and
the zero-entanglement phase. The TEPT and the critical temperature $T_c$ were recently explored\cite{park-19} by making use of the concurrence\cite{form2, form3}
in the anisotropic Heisenberg $X Y Z$ spin model with Dzyaloshinskii-Moriya interaction\cite{dzya58,mori60}.
The purpose of this paper is to study the TEPT phenomenon in a continuous-variable system. The simplest continuous-variable system is arguably the two coupled harmonic
oscillators. For this reason we choose this system to explore the TEPT when the spring constant $k_0$ and the coupling constant $J$ are arbitrarily
time-dependent. Another reason for this choice is that the thermal state of this system is Gaussian. It is known that the
Peres-Horodecki positive partial transposition (PPT) criterion\cite{peres96,horodecki96,horodecki97} provides a necessary and sufficient condition for separability of
Gaussian continuous-variable states\cite{duan-2000,simon-2000}. Thus, the temperature dependence of the entanglement can be roughly deduced
by considering the negativity-like quantity\cite{vidal01}. We are interested in examining how the arbitrarily time-dependent parameters affect the
critical temperature. In particular, we focus in this paper on the sudden quenched model, where the system parameters change abruptly at $t=0$.
The paper is organized as follows. In section II we derive the thermal state of a single harmonic oscillator when the frequency is arbitrarily
time-dependent. We focus on the case of the sudden quenched model (SQM). For the SQM we derive the purity function and the von Neumann entropy
of the thermal state analytically. In section III we derive explicitly the thermal state of the two coupled harmonic oscillator system when the spring constant $k_0$
and the coupling constant $J$ are arbitrarily time-dependent. In section IV we compute the purity function, the R\'{e}nyi and von Neumann entropies, and the mutual information analytically for the thermal state of the two coupled harmonic oscillator system in the case of the SQM. It is shown that, at a given external temperature, the thermal state becomes less mixed as the
difference between the initial and final frequencies increases. The mutual information shows that the common information shared by parties $A$ and $B$ does not completely vanish even in the infinite-temperature limit. In section V the TEPT is discussed for the case of the SQM
by making use of the negativity-like quantity. It is shown that the critical temperature $T_c$ increases with increasing frequency difference. Thus, using the SQM with a large difference between the initial and final frequencies, it seems possible to protect the entanglement against the external temperature. In section VI a brief conclusion is given. In appendix A the eigenvalue equation for the thermal state of the coupled harmonic oscillator system is explicitly solved.
\section{Thermal State for single harmonic oscillator with arbitrary time-dependent frequency}
Let us consider a single harmonic oscillator with time-dependent frequency, whose Hamiltonian is
\begin{equation}
\label{revise-hamil-1}
H_1 = \frac{1}{2} p^2 + \frac{1}{2} \omega^2 (t) x^2.
\end{equation}
Then, the action functional of this system is given by
\begin{equation}
\label{action}
S[x] = \int_0^t \left[ \frac{1}{2} \dot{x}^2 - \frac{1}{2} \omega^2 (t) x^2 \right].
\end{equation}
Usually, the Kernel for a quantum system can be derived by computing the path-integral\cite{feynman}
\begin{equation}
\label{path-integral}
K[x', x : t] = \int_{(0,x)}^{(t,x')} {\cal D}x e^{i S[x]}.
\end{equation}
Although the path-integral with constant frequency can be computed\cite{feynman,kleinert}, it does not seem to be a simple
matter to compute the path-integral explicitly when $\omega$ is arbitrarily time-dependent. However, it is possible to derive the Kernel without computing the
path-integral if one uses the Schr\"{o}dinger description of the Kernel
\begin{equation}
\label{finaleq}
K[{\bm x'}, t_2 : {\bm x}, t_1] = \sum_n \psi_n \left( {\bm x'}, t_2 \right) \psi_n^* \left( {\bm x}, t_1 \right)
\end{equation}
where $n$ represents all possible quantum numbers and the $\psi_n \left( {\bm x}, t \right)$ are linearly independent solutions of the time-dependent
Schr\"{o}dinger equation (TDSE).
The TDSE of our system was solved exactly in Refs. \cite{lewis68,lohe09}. The linearly independent solutions $\psi_n (x, t) \hspace{.1cm} (n=0, 1, \cdots)$ are expressed in the form
\begin{equation}
\label{TDSE-1}
\psi_n (x, t) = e^{-i E_n \tau(t)} e^{\frac{i}{2} \left( \frac{\dot{b}}{b} \right) x^2} \phi_n \left( \frac{x}{b} \right)
\end{equation}
where
\begin{eqnarray}
\label{TDSE-2}
&& E_n = \left( n + \frac{1}{2} \right) \omega(0), \hspace{1.0cm} \tau (t) = \int_0^t \frac{d s}{b^2 (s)} \\ \nonumber
&&\phi_n (x) = \frac{1}{\sqrt{2^n n!}} \left( \frac{ \omega (0)} {\pi b^2} \right)^{1/4} H_n \left(\sqrt{\omega (0)} x \right) e^{-\frac{\omega (0)}{2} x^2 }.
\end{eqnarray}
In Eq. (\ref{TDSE-2}) $H_n (z)$ is the $n^{th}$-order Hermite polynomial and $b(t)$ satisfies the Ermakov equation
\begin{equation}
\label{ermakov-1}
\ddot{b} + \omega^2 (t) b = \frac{\omega^2 (0)}{b^3}
\end{equation}
with $b(0) = 1$ and $\dot{b} (0) = 0$.
Solutions of the Ermakov equation were discussed in Ref. \cite{pinney50}. If $\omega(t)$ is time-independent, $b(t)$ is simply one.
If $\omega (t)$ is instantly changed as
\begin{eqnarray}
\label{instant-1}
\omega (t) = \left\{ \begin{array}{cc}
\omega_0 & \hspace{1.0cm} t = 0 \\
\omega & \hspace{1.0cm} t > 0,
\end{array} \right.
\end{eqnarray}
then $b(t)$ becomes
\begin{equation}
\label{scale-1}
b(t) = \sqrt{ \frac{\omega^2 - \omega_0^2}{2 \omega^2} \cos (2 \omega t) + \frac{\omega^2 + \omega_0^2}{2 \omega^2}}.
\end{equation}
For a more general time-dependent frequency the Ermakov equation should be solved numerically, for instance as sketched below.
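For instance, one may hand Eq. (\ref{ermakov-1}) to an off-the-shelf ODE solver; the following Python sketch (with illustrative parameter values, not taken from the paper) integrates it for the sudden quench and checks the result against the closed form (\ref{scale-1}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

omega0, omega = 1.0, 2.0              # illustrative initial / final frequencies

def ermakov(t, y):                    # y = (b, db/dt); omega(t > 0) = omega
    b, bdot = y
    return [bdot, -omega**2 * b + omega0**2 / b**3]

ts = np.linspace(0.0, 5.0, 401)
sol = solve_ivp(ermakov, (0.0, 5.0), [1.0, 0.0], t_eval=ts,
                rtol=1e-10, atol=1e-12)

b_exact = np.sqrt((omega**2 - omega0**2) / (2 * omega**2) * np.cos(2 * omega * ts)
                  + (omega**2 + omega0**2) / (2 * omega**2))   # closed form
print(np.max(np.abs(sol.y[0] - b_exact)))   # agreement with the closed form
\end{verbatim}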
Inserting Eq. (\ref{TDSE-1}) into Eq. (\ref{finaleq}) and using
\begin{equation}
\label{formula-1}
\sum_{n=0}^{\infty} \frac{t^n}{n!} H_n (x) H_n (y) = (1 - 4 t^2)^{-1/2} \exp \left[ \frac{4 t x y - 4 t^2 (x^2 + y^2)}{1 - 4 t^2} \right],
\end{equation}
the Kernel for this system becomes
\begin{equation}
\label{kernel-single}
K[x', x: t] = e^{\frac{i}{2} \left( \frac{\dot{b}}{b} \right) x'^2} \frac{(\omega_0 \omega')^{1/4}}{\sqrt{2 \pi i \sin \Gamma(t)}}
\exp \left[\frac{i}{2 \sin \Gamma(t)} \left\{ (\omega_0 x^2 + \omega' x'^2) \cos \Gamma(t) - 2 \sqrt{\omega_0 \omega'} x x' \right\} \right]
\end{equation}
where
\begin{equation}
\label{boso-1}
\omega_0 = \omega (t = 0), \hspace{1.0cm} \omega'(t) = \frac{\omega_0}{b^2 (t)}, \hspace{1.0cm} \Gamma (t) = \int_0^t \omega'(s) d s.
\end{equation}
For the time-independent case $b(t) = 1$, $\omega_0 = \omega' = \omega$, and $\Gamma (t) = \omega t$. Then, the Kernel in Eq. (\ref{kernel-single}) reduces to
the well-known harmonic oscillator Kernel
\begin{equation}
\label{usual1}
K[x', x : t] = \sqrt{\frac{\omega}{2 \pi i \sin \omega t}} \exp \left[ \frac{i \omega}{2 \sin \omega t}
\left\{ (x^2 + x'^2) \cos \omega t - 2 x x' \right\} \right].
\end{equation}
It is worth noting that the $x \leftrightarrow x'$ symmetry of Eq. (\ref{usual1}) is broken in Eq. (\ref{kernel-single}) due to the time-dependence of the
frequency. In fact, this is manifest because the system parameters at $t=0$ differ from those at $t > 0$.
From now on in this section we consider only the case of SQM given in Eq. (\ref{instant-1}). In this case the Kernel becomes
\begin{equation}
\label{sudden-1}
K[x', x : t] = e^{- \left( \frac{i (\omega^2 - \omega_0^2)}{4 \omega b^2} \sin 2 \omega t \right) x'^2} \sqrt{\frac{\omega_0}{2 \pi i b \sin \Gamma(t)}}
\exp \left[ \frac{i \omega_0}{2 \sin \Gamma(t)} \left\{ \left(x^2 + \frac{x'^2}{b^2} \right) \cos \Gamma(t) - \frac{2 x x'}{b} \right\} \right]
\end{equation}
where $b(t)$ is given in Eq. (\ref{scale-1}) and $\Gamma (t)$ becomes
\begin{equation}
\label{sudden-2}
\Gamma (t) = \tan^{-1} \left(\frac{\omega_0}{\omega} \tan \omega t \right)
= \frac{1}{2 i} \ln \frac{\omega + i \omega_0 \tan \omega t}{\omega - i \omega_0 \tan \omega t}.
\end{equation}
In quantum mechanics the inverse temperature $\beta = 1 / k_B T$ is introduced as a Euclidean time $\beta = i t$ (see Ch. $10$ of Ref.\cite{feynman}),
where $k_B$ is the Boltzmann constant. Then, the thermal density matrix is defined as
\begin{equation}
\label{thermal-1}
\rho_T [x', x : \beta] = \frac{1}{{\cal Z} (\beta)} G[x', x : \beta]
\end{equation}
where $\beta = 1 / k_B T$, $G[x', x : \beta] = K[x', x : -i \beta]$, and ${\cal Z} (\beta) = \mbox{tr} G[x', x : \beta]$ is the partition function.
Throughout this paper we use $k_B = 1$ for convenience. For the SQM case $b(t)$ and $\Gamma (t)$ are changed into $b (\beta)$ and
$\Gamma (\beta)$, whose explicit expressions are
\begin{equation}
\label{sudden-3}
b(\beta) = \sqrt{ \frac{\omega^2 - \omega_0^2}{2 \omega^2} \cosh (2 \omega \beta) + \frac{\omega^2 + \omega_0^2}{2 \omega^2}}, \hspace{1.0cm}
\Gamma (\beta) = -i \Gamma_0 (\beta)
\end{equation}
where\footnote{In fact, one can show that $b (\beta)$ in Eq. (\ref{sudden-3}) is a solution of $\frac{d^2 b}{d \beta^2} - \omega^2 b = - \frac{\omega_0^2}{b^3}$.}
\begin{equation}
\label{sudden-4}
\Gamma_0 (\beta) = \frac{1}{2} \ln \left(\frac{\omega + \omega_0 \tanh \omega \beta}{\omega - \omega_0 \tanh \omega \beta}\right).
\end{equation}
Then, the partition function of this system becomes
\begin{equation}
\label{partition-1}
{\cal Z} (\beta) = \sqrt{\frac{\omega_0}{2 \pi b \sinh \Gamma_0}} \sqrt{\frac{\pi}{a_{0, -}}}
\end{equation}
where
\begin{equation}
\label{partition-2}
a_{0, \pm} (\beta) = A_0 (\beta) + \frac{\omega_0}{2 \sinh \Gamma_0} \left[ \left( 1 + \frac{1}{b^2} \right) \cosh \Gamma_0 \pm \frac{2}{b} \right]
\end{equation}
with
\begin{equation}
\label{partition-3}
A_0 (\beta) = \frac{\omega^2 - \omega_0^2}{4 \omega b^2} \sinh (2 \omega \beta).
\end{equation}
Using the partition function one can derive the thermal density matrix in a form
\begin{equation}
\label{thermal-2}
\rho_0 [x', x : \beta] = \sqrt{\frac{a_{0, -}}{\pi}} e^{-A_0 (\beta) x'^2}
\exp\left[ - \frac{\omega_0}{2 \sinh \Gamma_0} \left\{ \left( x^2 + \frac{x'^2}{b^2} \right) \cosh \Gamma_0 - \frac{2 x x'}{b} \right\} \right].
\end{equation}
The thermal density matrix is in general a mixed state. In order to explore how much it is mixed we first compute the purity function
$P_0 (\beta) = \mbox{tr} \left( \rho_0 \right)^2$. If it is one, $\rho_0$ is a pure state; if it is zero, $\rho_0$ is a completely mixed state.
If $0 < P_0 (\beta) < 1$, $\rho_0$ is a partially mixed state. It is not difficult to show that the purity function of this system is
\begin{equation}
\label{purity-1}
P_0 (\beta) \equiv \int dx dx' \rho_0 [ x', x : \beta] \rho_0 [ x, x' : \beta] = \sqrt{\frac{a_{0,-}}{a_{0,+}}}.
\end{equation}
Another quantity we want to compute is the von Neumann entropy $S[\rho_0]$ of $\rho_0$. If $\rho_0$ is a pure state, $S[\rho_0]$ is zero. As its
mixedness increases, $S[\rho_0]$ also increases from zero and eventually diverges for a completely mixed state in this continuum case. In order to compute
the von Neumann entropy we should solve the eigenvalue equation
\begin{equation}
\label{von-1}
\int d x \rho_0 [x', x : \beta] f_n (x) = \lambda_n (\beta) f_n (x').
\end{equation}
One can show that the eigenvalue equation
\begin{equation}
\label{von-2}
\int d x \left( A e^{-a_1 x^2 - a_2 x'^2 + 2 b x x'} \right) f_n (x) = \lambda_n f_n (x')
\end{equation}
can be solved, and the eigenfunction and corresponding eigenvalue are
\begin{eqnarray}
\label{von-3}
&&f_n (x) = {\cal C}_n^{-1} H_n (\sqrt{\epsilon_0} x) e^{-\frac{\alpha_0}{2} x^2} \\ \nonumber
&&\lambda_n = A \sqrt{\frac{2 \pi}{(a_1 + a_2) + \epsilon_0}} \left[ \frac{(a_1 + a_2) - \epsilon_0}{(a_1 + a_2) + \epsilon_0} \right]^{n/2}
\end{eqnarray}
where $\epsilon_0 = \sqrt{(a_1 + a_2)^2 - 4 b^2}$ and $\alpha_0 = \epsilon_0 - (a_1 - a_2)$.
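As a quick consistency check (added here; not part of the original derivation), Eq. (\ref{von-3}) can be verified directly for $n=0$: taking $f_0 (x) \propto e^{-\frac{\alpha_0}{2} x^2}$ and performing the Gaussian integral gives
\begin{eqnarray}
\int d x \left( A e^{-a_1 x^2 - a_2 x'^2 + 2 b x x'} \right) e^{-\frac{\alpha_0}{2} x^2}
&=& A \sqrt{\frac{\pi}{a_1 + \frac{\alpha_0}{2}}} \exp \left[ \left( \frac{b^2}{a_1 + \frac{\alpha_0}{2}} - a_2 \right) x'^2 \right] \nonumber \\
&=& A \sqrt{\frac{2 \pi}{(a_1 + a_2) + \epsilon_0}}\, e^{-\frac{\alpha_0}{2} x'^2}, \nonumber
\end{eqnarray}
where the last equality uses $\alpha_0 = \epsilon_0 - (a_1 - a_2)$, so that $a_1 + \frac{\alpha_0}{2} = \frac{(a_1 + a_2) + \epsilon_0}{2}$ and $b^2 = \left( a_1 + \frac{\alpha_0}{2} \right)\left( a_2 - \frac{\alpha_0}{2} \right)$. This reproduces $\lambda_0$ of Eq. (\ref{von-3}).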
By making use of the integral formula\cite{integral}
\begin{eqnarray}
\label{integral2-1}
&&\int_{-\infty}^{\infty} e^{-(x - y)^2} H_{m} (c x) H_n (c x) d x \\ \nonumber
&&= \sqrt{\pi} \sum_{k=0}^{\min (m, n)} 2^k k! \left( \begin{array}{c} m \\ k \end{array} \right)
\left( \begin{array}{c} n \\ k \end{array} \right) (1 - c^2)^{\frac{m+n}{2} - k} H_{m+n - 2 k} \left( \frac{c y}{\sqrt{1 - c^2}} \right)
\end{eqnarray}
and various properties of the Gamma function\cite{handbook}, the normalization constant ${\cal C}_n$ can be written in the form
\begin{equation}
\label{normalization2-1}
{\cal C}_n^2 = \frac{1}{\sqrt{\alpha_0}} \sum_{k=0}^n 2^{2 n - k} \left( \frac{\epsilon_0}{\alpha_0} - 1 \right)^{n-k}
\frac{\Gamma^2 (n+1) \Gamma \left(n - k + \frac{1}{2} \right)}{\Gamma (k+1) \Gamma^2 (n - k + 1)}.
\end{equation}
If $a_1 = a_2$, then $\alpha_0 = \epsilon_0$, so that the only nonvanishing term in the $k$-summation of Eq. (\ref{normalization2-1}) is the one with $k = n$. Then,
${\cal C}_n$ becomes the usual harmonic oscillator normalization constant
\begin{equation}
\label{normalization2-2}
{\cal C}_n ^{-1} = \frac{1}{\sqrt{2^n n!}} \left( \frac{\epsilon_0}{\pi} \right)^{1 / 4}.
\end{equation}
Using Eqs. (\ref{von-2}) and (\ref{von-3}) the eigenvalue of
Eq. (\ref{von-1}) becomes
\begin{equation}
\label{von-4}
\lambda_n (\beta) = (1 - \xi_0) \xi_0^n
\end{equation}
where
\begin{equation}
\label{von-5}
\xi_0 = \frac{\sqrt{a_{0,+}} - \sqrt{a_{0,-}}}{\sqrt{a_{0,+}} + \sqrt{a_{0,-}}} = \frac{1 - P_0 (\beta)} {1 + P_0 (\beta)}.
\end{equation}
Thus, the spectral decomposition of $\rho_0$ can be written as
\begin{equation}
\label{single-spectral}
\rho_0 [x', x : \beta] = \sum_n
\lambda_n (\beta) f_n (x': \beta) f_n^* (x: \beta),
\end{equation}
where $f_n (x: \beta)$ is given by Eq. (\ref{von-3}) with $\epsilon_0 = \sqrt{a_{0,+} a_{0,-}}$ and
$\alpha_0 = \epsilon_0 + A_0 (\beta) - \frac{\omega_0 \cosh \Gamma_0}{2 \sinh \Gamma_0} \left( 1 - \frac{1}{b^2} \right).$
Eq. (\ref{von-4}) implies $\sum_n \lambda_n (\beta) = 1$, which is consistent with $\mathrm{tr} \rho_0 = 1$. Then the von Neumann entropy of $\rho_0$ becomes
\begin{equation}
\label{von-6}
S[\rho_0] \equiv - \sum_n \lambda_n (\beta) \ln \lambda_n (\beta) = - \ln (1 - \xi_0) - \frac{\xi_0}{1 - \xi_0} \ln \xi_0.
\end{equation}
For constant frequency, {\it i.e.} $\omega = \omega_0$, we have $A_0 = 0$, $a_{0,+} = \omega \coth \frac{\omega \beta}{2}$,
$a_{0,-} = \omega \tanh \frac{\omega \beta}{2}$, and $\xi_0 = e^{-\omega \beta}$. For the case of SQM, $A_0$ and $a_{0,\pm}$ become
larger than in the constant-frequency case over the entire range of temperature. As a result, $P_0 (\beta)$ becomes larger and $\xi_0$ smaller than in the
constant-frequency case. Since $-\ln (1 - x) - \frac{x}{1 - x} \ln x$ is a monotonically increasing function in the range $0 \leq x \leq 1$, this decreases the
von Neumann entropy.
The temperature dependence of the purity function and the von Neumann entropy is plotted in Fig. 1(a) and Fig. 1(b) for $\omega = 3$ (black line), $5$ (red line),
and $7$ (blue line) with $\omega_0 = 3$. Both figures show that $\rho_0$ becomes more and more mixed with increasing external temperature. They also show that
$\rho_0$ becomes less mixed with increasing $|\omega - \omega_0|$ at a given temperature. Thus, we can use the SQM model to protect the purity against the external temperature.
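For illustration, a minimal numerical sketch (not part of the original text; $k_B = 1$ and the parameter values are assumptions matching the choices quoted above) of how $P_0(\beta)$ of Eq. (\ref{purity-1}) and $S[\rho_0]$ of Eq. (\ref{von-6}) can be evaluated for the SQM case is:
\begin{verbatim}
# Minimal sketch: purity and von Neumann entropy of the single-mode SQM
# thermal state, following Eqs. (sudden-3), (partition-2), (partition-3),
# (purity-1), (von-5) and (von-6) above.  Requires w > w0.
import numpy as np

def single_mode_quantities(T, w0=3.0, w=5.0):
    beta = 1.0 / T
    b = np.sqrt((w**2 - w0**2) / (2 * w**2) * np.cosh(2 * w * beta)
                + (w**2 + w0**2) / (2 * w**2))
    G0 = 0.5 * np.log((w + w0 * np.tanh(w * beta))
                      / (w - w0 * np.tanh(w * beta)))
    A0 = (w**2 - w0**2) / (4 * w * b**2) * np.sinh(2 * w * beta)
    base = w0 / (2 * np.sinh(G0))
    a_plus = A0 + base * ((1 + 1 / b**2) * np.cosh(G0) + 2 / b)
    a_minus = A0 + base * ((1 + 1 / b**2) * np.cosh(G0) - 2 / b)
    P0 = np.sqrt(a_minus / a_plus)                # purity
    xi0 = (1 - P0) / (1 + P0)
    S = -np.log(1 - xi0) - xi0 / (1 - xi0) * np.log(xi0)
    return P0, S

for T in (0.5, 1.0, 2.0, 5.0):
    print(T, single_mode_quantities(T))
\end{verbatim}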
\section{Thermal State for two coupled harmonic oscillators with arbitrary time-dependent frequencies}
In this section we will derive the thermal state for two coupled harmonic oscillator system, whose Hamiltonian is
\begin{equation}
\label{revise-hamil-2}
H_2 = \frac{1}{2} \left(p_1^2 + p_2^2 \right) + V(x_1, x_2).
\end{equation}
We choose the potential $V(x_1, x_2)$ as a quadratic function with arbitrary time-dependent spring and coupling parameters.
The explicit expression of the potential is chosen in a form
\begin{equation}
\label{revise-potential-1}
V(x_1, x_2) = \frac{1}{2} \left[ k_0 (t) (x_1^2 + x_2^2) + J(t) (x_1 - x_2)^2 \right].
\end{equation}
Then, the action functional of this system is
\begin{equation}
\label{action2}
S[x_1, x_2] = \int_0^t dt \left[ \frac{1}{2} \left( \dot{x}_1^2 + \dot{x}_2^2 \right) - V(x_1, x_2) \right].
\end{equation}
It is easy to show that the potential is diagonalized by introducing $y_1 = \frac{1}{\sqrt{2}} (x_1 + x_2)$ and $y_2 = \frac{1}{\sqrt{2}} (x_1 - x_2)$ .
In terms of new canonical variables the action becomes that of two non-interacting harmonic oscillators in a form
\begin{equation}
\label{action3}
S[x_1, x_2] = \int_0^t dt \left[ \frac{1}{2} \left( \dot{y}_1^2 + \dot{y}_2^2 \right) - \frac{1}{2} \left\{\omega_1^2 (t) y_1^2 + \omega_2^2 (t) y_2^2 \right\} \right]
\end{equation}
where $\omega_1 (t) = \sqrt{k_0 (t)}$ and $\omega_2 (t) = \sqrt{k_0 (t) + 2 J (t)}$. Thus, the Kernel for this system is
\begin{eqnarray}
\label{kernel2-1}
&&K[x_1', x_2':x_1, x_2: t] = \\ \nonumber
&&\prod_{j=1}^2 \left[ e^{\frac{i}{2} \left( \frac{\dot{b}_j}{b_j} \right) y_j'^2} \frac{\left(\omega_{j,0} \omega_j'\right)^{1/4}}{\sqrt{2 \pi i \sin \Gamma_j (t)}} \exp \left[\frac{i}{2 \sin \Gamma_j (t)} \left\{ (\omega_{j,0} y_j^2 + \omega_j' y_j'^2) \cos \Gamma_j (t) - 2 \sqrt{\omega_{j,0} \omega_j'}
y_j y_j' \right\} \right] \right]
\end{eqnarray}
where $\omega_{j,0} = \omega_j (t = 0)$, $\omega_j' = \frac{\omega_{j,0}}{b_j^2 (t)}$, and $\Gamma_j (t) = \int_0^t \omega_j' (s) ds$. Of course,
$b_1 (t)$ and $b_2 (t)$ satisfy the Ermakov equation
\begin{equation}
\label{ermakov2-1}
\ddot{b}_j + \omega_j^2 (t) b_j = \frac{\omega_{j,0}^2}{b_j^3} \hspace{1.0cm} (j = 1, 2)
\end{equation}
with $\dot{b}_j (0) = 0$ and $b_j (0) = 1$. Then the thermal density matrix of this system is given by
\begin{equation}
\label{thermal2-1}
\rho_T[x_1', x_2':x_1, x_2:\beta] = \frac{1}{{\cal Z} (\beta)} K[x_1',x_2':x_1,x_2:-i \beta]
\end{equation}
where $\beta = 1 / k_B T$ and ${\cal Z} (\beta) = \mbox{tr} K[x_1',x_2':x_1,x_2:-i \beta]$.
In this paper we will examine only the case of SQM. More general time-dependent cases will be explored elsewhere. If the spring and
coupling constants are abruptly changed as
\begin{eqnarray}
\label{instant2-1}
k_0 = \left \{ \begin{array}{cc}
k_{0,i} & \hspace{1.0cm} t = 0 \\
k_{0,f} & \hspace{1.0cm} t > 0
\end{array} \right.
\hspace{2.0cm}
J = \left \{ \begin{array}{cc}
J_{i} & \hspace{1.0cm} t = 0 \\
J_{f} & \hspace{1.0cm} t > 0,
\end{array} \right.
\end{eqnarray}
$\omega_1$ and $\omega_2$ become
\begin{eqnarray}
\label{instant2-2}
\omega_1 = \left \{ \begin{array}{cc}
\omega_{1,0} \equiv \omega_{1,i} = \sqrt{k_{0,i}} & \hspace{.3cm} t = 0 \\
\omega_{1,f} = \sqrt{k_{0,f}} & \hspace{.3cm} t > 0,
\end{array} \right.
\hspace{0.7cm}
\omega_2 = \left \{ \begin{array}{cc}
\omega_{2,0} \equiv \omega_{2,i} = \sqrt{k_{0,i} + 2 J_i} & \hspace{.3cm} t = 0 \\
\omega_{2,f} = \sqrt{k_{0,f} + 2 J_f} & \hspace{.3cm} t > 0.
\end{array} \right.
\end{eqnarray}
Then, the thermal density matrix of this system is given by
\begin{eqnarray}
\label{thermal2-2}
&&\rho_T[x_1', x_2':x_1,x_2:\beta] \\ \nonumber
&&=\prod_{j=1}^2 \sqrt{\frac{a_{j,-}}{\pi}} \exp \left[ -A_j y_j'^2 - \frac{\omega_{j,i}}{2 \sinh \Gamma_{E,j}}
\left\{ \left( y_j^2 + \frac{y_j'^2}{b_j^2} \right) \cosh \Gamma_{E,j} - \frac{2 y_j y_j'}{b_j} \right\} \right]
\end{eqnarray}
where\footnote{The subscript $E$ in $\Gamma_{E,j}$ stands for ``Euclidean''. This subscript is attached to stress the point that the inverse temperature $\beta$ is introduced as a Euclidean time.}
\begin{eqnarray}
\label{sudden2-1}
&& b_j = \sqrt{\frac{\omega_{j,f}^2 - \omega_{j,i}^2}{2 \omega_{j,f}^2} \cosh (2 \omega_{j,f} \beta) + \frac{\omega_{j,f}^2 + \omega_{j,i}^2}
{2 \omega_{j,f}^2}},
\hspace{.2cm} \Gamma_{E,j} = \frac{1}{2} \ln \frac{\omega_{j,f} + \omega_{j,i} \tanh (\omega_{j,f} \beta)}{\omega_{j,f} - \omega_{j,i} \tanh (\omega_{j,f} \beta)} \\ \nonumber
&&A_j = \frac{\omega_{j,f}^2 - \omega_{j,i}^2}{4 \omega_{j,f} b_j^2} \sinh (2 \omega_{j,f} \beta), \hspace{.2cm}
a_{j,\pm} = A_j + \frac{\omega_{j,i}}{2 \sinh \Gamma_{E,j}} \left[ \left(1 + \frac{1}{b_j^2} \right) \cosh \Gamma_{E,j} \pm \frac{2}{b_j} \right]
\end{eqnarray}
with $j=1,2$. In the limit $\omega_{j,i} = \omega_{j,f} \equiv \omega_j$, we have $A_j = 0$, $b_j = 1$, $\Gamma_{E,j} = \omega_j \beta$,
$a_{j,+} = \omega_j \coth (\omega_j \beta / 2)$, and $a_{j,-} = \omega_j \tanh (\omega_j \beta / 2)$. In terms of the $x_j$-coordinates the thermal state
reduces to
\begin{eqnarray}
\label{thermal2-3}
&&\rho_T[x_1', x_2':x_1,x_2:\beta] = \frac{\sqrt{a_{1,-}a_{2,-}}}{\pi} \exp \bigg[ -\alpha_1 (x_1'^2 + x_2'^2) - \alpha_2 (x_1^2 + x_2^2)
\\ \nonumber
&&\hspace{2.0cm}+ 2 \alpha_3 x_1' x_2' + 2 \alpha_4 x_1 x_2 + 2 \alpha_5 (x_1 x_1' + x_2 x_2') + 2 \alpha_6 (x_1 x_2' + x_2 x_1') \bigg]
\end{eqnarray}
where
\begin{eqnarray}
\label{instant2-3}
&&\alpha_1 = \sum_{j=1}^{2} \left[ \frac{A_j}{2} + \frac{\omega_{j,i} \cosh \Gamma_{E,j}}{4 b_j^2 \sinh \Gamma_{E,j}} \right],
\hspace{.5cm} \alpha_2 = \sum_{j=1}^2 \frac{\omega_{j,i} \cosh \Gamma_{E,j}}{4 \sinh \Gamma_{E,j}} \\ \nonumber
&& \alpha_3 = \sum_{j=1}^2 (-1)^j \left[ \frac{A_j}{2} + \frac{\omega_{j,i} \cosh \Gamma_{E,j}}{4 b_j^2 \sinh \Gamma_{E,j}} \right],
\hspace{.5cm} \alpha_4 = \sum_{j=1}^2 (-1)^j \frac{\omega_{j,i} \cosh \Gamma_{E,j}}{4 \sinh \Gamma_{E,j}} \\ \nonumber
&&\hspace{1.0cm} \alpha_5 = \sum_{j=1}^2 \frac{\omega_{j,i}}{4 b_j \sinh \Gamma_{E,j}}, \hspace{.5cm}
\alpha_6 = \sum_{j=1}^2 (-1)^{j-1} \frac{\omega_{j,i}}{4 b_j \sinh \Gamma_{E,j}}.
\end{eqnarray}
It is worthwhile noting that the $\alpha_j$ satisfy
\begin{eqnarray}
\label{instant2-4}
&&\hspace{3.0cm}\alpha_1 + \alpha_2 = \frac{(a_{1,+} + a_{1,-}) + (a_{2,+} + a_{2,-})}{4} \\ \nonumber
&&\hspace{3.0cm}\alpha_3 + \alpha_4 = - \frac{(a_{1,+} + a_{1,-}) - (a_{2,+} + a_{2,-})}{4} \\ \nonumber
&&\alpha_5 = \frac{1}{8} \left[ (a_{1,+} - a_{1,-}) + (a_{2,+} - a_{2,-}) \right], \hspace{.5cm}
\alpha_6 = \frac{1}{8} \left[ (a_{1,+} - a_{1,-}) - (a_{2,+} - a_{2,-}) \right].
\end{eqnarray}
Using Eq. (\ref{instant2-4}) it is straightforward to show that $\mbox{tr}\, \rho_T = 1$. In the next section we compute analytically several quantum information quantities,
which measure how much $\rho_T$ is mixed.
\section{Various Quantities of Thermal State: Case of SQM}
In this section we compute the purity function, the R\'{e}nyi and von Neumann entropies, and the mutual information of $\rho_T[x_1', x_2':x_1, x_2:\beta]$ given
in Eq. (\ref{thermal2-2}) or equivalently Eq. (\ref{thermal2-3}). As a by-product we derive the spectral decomposition of $\rho_T[x_1', x_2':x_1, x_2:\beta]$.
\subsection{Purity function}
The purity function is defined as
\begin{equation}
\label{purity3-1}
P(\beta) = \mbox{tr} \rho_T^2 \equiv
\int dx_1' dx_2' dx_1 dx_2 \rho_T[x_1', x_2':x_1, x_2:\beta] \rho_T[x_1, x_2:x_1', x_2':\beta].
\end{equation}
If $P(\beta) = 1$ or $0$, $\rho_T$ is a pure or a completely mixed state, respectively. A direct calculation shows
\begin{equation}
\label{purity3-2}
P(\beta) = \sqrt{\frac{a_{1,-} a_{2,-}}{a_{1,+} a_{2,+}}}.
\end{equation}
For the case of constant frequencies $\omega_{j,i} = \omega_{j,f} \equiv \omega_j$ it reduces to
\begin{equation}
\label{purity3-3}
P(\beta) = \tanh \frac{\otimesmega_1 \beta}{2} \tanh \frac{\otimesmega_2 \beta}{2}.
\end{equation}
The temperature dependence of the purity function is plotted in Fig. 2(a) for $k_{0,f} = J_f = 6$ (red line) and $k_{0,f} = J_f = 9$ (blue line), with $k_{0,i}$
and $J_i$ fixed at $k_{0,i} = J_i = 3$. The black dashed line corresponds to the constant-frequency case $k_0 = J = 3$. As expected, $\rho_T$ becomes more and more mixed with increasing temperature.
Fig. 2(a) also shows that $\rho_T$ is less mixed when $|k_{0, f} - k_{0, i}|$ and $|J_f - J_i|$ increase.
\subsection{ R\'{e}nyi and von Neumann entropies}
In order to compute the R\'{e}nyi and von Neumann entropies of $\rho_T$ we should solve the eigenvalue equation
\begin{equation}
\label{eigen3-1}
\int dx_1 dx_2 \rho_T [x_1', x_2':x_1, x_2:\beta] u_{mn} (x_1, x_2:\beta) = p_{mn} (\beta) u_{mn} (x_1', x_2':\beta).
\end{equation}
Eq. (\ref{eigen3-1}) is solved in appendix A and the eigenvalue $p_{mn} (\beta)$ is
\begin{equation}
\label{eigen3-2}
p_{mn} (\beta) = (1 - \xi_1) \xi_1^m (1 - \xi_2) \xi_2^n
\end{equation}
where
\begin{equation}
\label{eigen3-3}
\xi_1 = \frac{\sqrt{a_{1,+}} - \sqrt{a_{1,-}}}{\sqrt{a_{1,+}} + \sqrt{a_{1,-}}}, \hspace{1.0cm}
\xi_2 = \frac{\sqrt{a_{2,+}} - \sqrt{a_{2,-}}}{\sqrt{a_{2,+}} + \sqrt{a_{2,-}}}.
\end{equation}
In terms of $\xi_1$ and $\xi_2$ the purity function $P(\beta)$ in Eq. (\ref{purity3-2}) can be written as
\begin{equation}
\label{purity3-4}
P (\beta) = \frac{1 - \xi_1}{1 + \xi_1} \frac{1 - \xi_2}{1 + \xi_2}.
\end{equation}
Then, the R\'{e}nyi and von Neumann entropies of $\rho_T$ reduce to
\begin{equation}
\label{von3-1}
S_{\alpha} = S_{1, \alpha} + S_{2, \alpha} \hspace{1.0cm} S_{von} = S_{1,von} + S_{2,von}
\end{equation}
where
\begin{equation}
\label{von3-2}
S_{j,\alpha} = \frac{1}{1 - \alpha} \ln \frac{(1 - \xi_j)^{\alpha}}{1 - \xi_j^{\alpha}}, \hspace{1.0cm}
S_{j,von} = -\ln (1 - \xi_j) - \frac{\xi_j}{1 - \xi_j} \ln \xi_j
\end{equation}
with $j=1,2$.
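As an illustration (not part of the original text; $k_B = 1$ and the parameter values are assumptions), $\xi_1$, $\xi_2$ and the entropies of Eq. (\ref{von3-2}) can be evaluated numerically from Eq. (\ref{sudden2-1}) as follows:
\begin{verbatim}
# Minimal sketch: xi_1, xi_2 and the Renyi / von Neumann entropies of the
# two-oscillator SQM thermal state.
import numpy as np

def mode_a_pm(beta, w_i, w_f):
    # a_{j,+/-} of Eq. (sudden2-1) for one normal mode (requires w_f > w_i)
    b = np.sqrt((w_f**2 - w_i**2) / (2 * w_f**2) * np.cosh(2 * w_f * beta)
                + (w_f**2 + w_i**2) / (2 * w_f**2))
    G = 0.5 * np.log((w_f + w_i * np.tanh(w_f * beta))
                     / (w_f - w_i * np.tanh(w_f * beta)))
    A = (w_f**2 - w_i**2) / (4 * w_f * b**2) * np.sinh(2 * w_f * beta)
    base = w_i / (2 * np.sinh(G))
    common = base * (1 + 1 / b**2) * np.cosh(G)
    return A + common + base * 2 / b, A + common - base * 2 / b

def entropies(beta, k0i=3.0, Ji=3.0, k0f=6.0, Jf=6.0, alpha=2.0):
    modes = ((np.sqrt(k0i), np.sqrt(k0f)),
             (np.sqrt(k0i + 2 * Ji), np.sqrt(k0f + 2 * Jf)))
    S_renyi = S_von = 0.0
    for w_i, w_f in modes:
        ap, am = mode_a_pm(beta, w_i, w_f)
        xi = (np.sqrt(ap) - np.sqrt(am)) / (np.sqrt(ap) + np.sqrt(am))
        S_renyi += np.log((1 - xi)**alpha / (1 - xi**alpha)) / (1 - alpha)
        S_von += -np.log(1 - xi) - xi / (1 - xi) * np.log(xi)
    return S_renyi, S_von
\end{verbatim}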
One can show also that the normalized eigenfunction $u_{mn} (x_1, x_2:\beta)$ is
\begin{equation}
\label{eigen3-4}
u_{mn}(x_1,x_2:\beta) = \left( \frac{1}{{\cal C}_{1,m}} H_m (\sqrt{\epsilon_1} y_1) e^{-\mu_1 y_1^2} \right)
\left( \frac{1}{{\cal C}_{2,n}} H_n (\sqrt{\epsilon_2} y_2) e^{-\mu_2 y_2^2} \right)
\end{equation}
where
\begin{eqnarray}
\label{eigen3-5}
&&\epsilon_1 = \sqrt{a_{1,+} a_{1,-}} \hspace{4.2cm} \epsilon_2 = \sqrt{a_{2,+} a_{2,-}} \\ \nonumber
&&\mu_1 = \frac{1}{2} \left[ \epsilon_1 + (\alpha_1 - \alpha_2) - (\alpha_3 - \alpha_4) \right], \hspace{.5cm}
\mu_2 = \frac{1}{2} \left[ \epsilon_2 + (\alpha_1 - \alpha_2) + (\alpha_3 - \alpha_4) \right] \\ \nonumber
&& {\cal C}_{1,m}^2 = \frac{1}{\sqrt{2 \mu_1}} \sum_{k=0}^m 2^{2m-k} \left(\frac{\epsilon_1}{2\mu_1} - 1 \right)^{m-k}
\frac{\Gamma^2 (m+1) \Gamma (m- k + 1/2)}{\Gamma(k+1) \Gamma^2 (m-k+1)} \\ \nonumber
&& {\cal C}_{2,n}^2 = \frac{1}{\sqrt{2 \mu_2}} \sum_{k=0}^n 2^{2n-k} \left(\frac{\epsilon_2}{2\mu_2} - 1 \right)^{n-k}
\frac{\Gamma^2 (n+1) \Gamma (n- k + 1/2)}{\Gamma(k+1) \Gamma^2 (n-k+1)} .
\end{eqnarray}
For the case of constant frequencies $\alpha_1 - \alpha_2 = \alpha_3 - \alpha_4 = 0$, which results in $\mu_j = \frac{\epsilon_j}{2}$. In this case the sum in
${\cal C}_{1,m}^2$ or ${\cal C}_{2,n}^2$ is nonzero only when $k=m$ or $k=n$, and this yields the well-known quantities
${\cal C}_{1,m}^{-1} = \frac{1}{\sqrt{2^m m!}} \left( \frac{\epsilon_1}{\pi} \right)^{1/4}$ and
${\cal C}_{2,n}^{-1} = \frac{1}{\sqrt{2^n n!}} \left( \frac{\epsilon_2}{\pi} \right)^{1/4}$. Thus, the spectral decomposition of $\rho_T$ can be written as
\begin{equation}
\label{spectral3-1}
\rho_T [x_1',x_2':x_1,x_2:\beta] = \sum_{m,n} p_{mn} (\beta) u_{mn} (x_1',x_2':\beta) u_{mn}^* (x_1,x_2:\beta).
\end{equation}
The temperature dependence of the von Neumann entropy is plotted in Fig. 2(b) for $k_{0,f} = J_f = 6$ (red line) and $k_{0,f} = J_f = 9$ (blue line), with $k_{0,i}$ and $J_i$ fixed at $k_{0,i} = J_i = 3$. The black dashed line corresponds to the constant-frequency case $k_0 = J = 3$. As expected, $\rho_T$ becomes more and more mixed with increasing temperature. Fig. 2(b) also shows that $\rho_T$ is less mixed when $|k_{0, f} - k_{0, i}|$ and $|J_f - J_i|$ increase, as the purity function also exhibits.
\subsection{Mutual information}
From $\rho_T$ in Eq. (\ref{thermal2-3}) one can derive the substates $\rho_{T,A} = \mbox{tr}_B \rho_T$ and $\rho_{T,B} = \mbox{tr}_A \rho_T$
by performing partial trace appropriately. Then, the substates become
\begin{equation}
\label{mutual3-1}
\rho_{T,A}[x', x:\beta] = \rho_{T,B} [x',x:\beta] = \sqrt{\frac{a_{1,-} a_{2,-}}{\pi (\alpha_1 + \alpha_2 - 2 \alpha_5)}} e^{-B_1 x^2 - B_2 x'^2 + 2 B_3 x x'}
\end{equation}
where
\begin{eqnarray}
\label{mutual3-2}
&&B_1 = \frac{\alpha_2 (\alpha_1 + \alpha_2 - 2 \alpha_5) - (\alpha_4 + \alpha_6)^2}{\alpha_1 + \alpha_2 - 2 \alpha_5}, \hspace{0.2cm}
B_2 = \frac{\alpha_1 (\alpha_1 + \alpha_2 - 2 \alpha_5) - (\alpha_3 + \alpha_6)^2}{\alpha_1 + \alpha_2 - 2 \alpha_5} \\ \nonumber
&&\hspace{2.5cm}B_3 = \frac{\alpha_5 (\alpha_1 + \alpha_2 - 2 \alpha_5) + (\alpha_3 + \alpha_6) (\alpha_4 + \alpha_6)}{\alpha_1 + \alpha_2 - 2 \alpha_5}.
\end{eqnarray}
It is not difficult to show that the eigenvalues of $\rho_{T,A}$ or $\rho_{T,B}$ are $(1 - \zeta) \zeta^n$, where
\begin{equation}
\label{mutual3-3}
\zeta = \frac{2 B_3}{(B_1 + B_2) + \nu}
\end{equation}
with $\nu = \sqrt{(B_1 + B_2)^2 - 4 B_3^2}$.
Using the eigenvalues the R\'{e}nyi and von Neumann entropies of $\rho_{T,A}$ and $\rho_{T,B}$ can be obtained as
\begin{equation}
\label{mutual3-4}
S_{A,\alpha} = S_{B,\alpha} = \frac{1}{1 - \alpha} \ln \frac{(1 - \zeta)^{\alpha}}{1 - \zeta^{\alpha}}, \hspace{.5cm}
S_{A,von} = S_{B,von} = - \ln (1 - \zeta) - \frac{\zeta}{1 - \zeta} \ln \zeta.
\end{equation}
Therefore, the mutual information of $\rho_T$ is given by
\begin{equation}
\label{mutual3-5}
I (\rho_T) = S_{A,von} + S_{B,von} - S_{von}.
\end{equation}
The temperature dependence of the mutual information is plotted in Fig. 2(c) for $k_{0,f} = J_f = 6$ (red line) and $k_{0,f} = J_f = 9$ (blue line), with $k_{0,i}$ and $J_i$ fixed at $k_{0,i} = J_i = 3$. The black dashed line corresponds to the constant-frequency case $k_0 = J = 3$. Like the other quantities, the mutual information also decreases with
increasing temperature. However, it does not completely vanish at $T = \infty$. Fig. 2(c) shows that the mutual information seems to approach $0.144$ in the
large-temperature limit. This implies that the common information shared by parties $A$ and $B$ does not completely vanish even in the infinite-temperature limit.
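The following is a minimal numerical sketch (added for illustration; $k_B = 1$ and the parameter values are assumptions) of the mutual information of Eq. (\ref{mutual3-5}), built from Eqs. (\ref{instant2-3}), (\ref{mutual3-2}) and (\ref{mutual3-3}):
\begin{verbatim}
# Minimal sketch: mutual information of the coupled-oscillator SQM state.
import numpy as np

def mode_data(beta, w_i, w_f):
    b = np.sqrt((w_f**2 - w_i**2) / (2 * w_f**2) * np.cosh(2 * w_f * beta)
                + (w_f**2 + w_i**2) / (2 * w_f**2))
    G = 0.5 * np.log((w_f + w_i * np.tanh(w_f * beta))
                     / (w_f - w_i * np.tanh(w_f * beta)))
    A = (w_f**2 - w_i**2) / (4 * w_f * b**2) * np.sinh(2 * w_f * beta)
    return b, G, A

def mutual_information(T, k0i=3.0, Ji=3.0, k0f=6.0, Jf=6.0):
    beta = 1.0 / T
    modes = ((np.sqrt(k0i), np.sqrt(k0f)),
             (np.sqrt(k0i + 2 * Ji), np.sqrt(k0f + 2 * Jf)))
    a1 = a2 = a3 = a4 = a5 = a6 = S_von = 0.0
    for j, (wi, wf) in enumerate(modes, start=1):
        b, G, A = mode_data(beta, wi, wf)
        s = (-1.0) ** j
        c1 = A / 2 + wi * np.cosh(G) / (4 * b**2 * np.sinh(G))
        c2 = wi * np.cosh(G) / (4 * np.sinh(G))
        c5 = wi / (4 * b * np.sinh(G))
        a1, a2, a5 = a1 + c1, a2 + c2, a5 + c5
        a3, a4, a6 = a3 + s * c1, a4 + s * c2, a6 - s * c5
        ap = A + wi / (2 * np.sinh(G)) * ((1 + 1 / b**2) * np.cosh(G) + 2 / b)
        am = A + wi / (2 * np.sinh(G)) * ((1 + 1 / b**2) * np.cosh(G) - 2 / b)
        xi = (np.sqrt(ap) - np.sqrt(am)) / (np.sqrt(ap) + np.sqrt(am))
        S_von += -np.log(1 - xi) - xi / (1 - xi) * np.log(xi)
    D = a1 + a2 - 2 * a5
    B1 = (a2 * D - (a4 + a6) ** 2) / D
    B2 = (a1 * D - (a3 + a6) ** 2) / D
    B3 = (a5 * D + (a3 + a6) * (a4 + a6)) / D
    zeta = 2 * B3 / (B1 + B2 + np.sqrt((B1 + B2) ** 2 - 4 * B3 ** 2))
    S_sub = -np.log(1 - zeta) - zeta / (1 - zeta) * np.log(zeta)
    return 2 * S_sub - S_von        # I = S_A + S_B - S
\end{verbatim}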
\section{Thermal Entanglement Phase Transition: Case of SQM}
Since the thermal state $\rho_T$ given in Eq. (\ref{thermal2-3}) is a mixed state, its entanglement is in general defined via the convex-roof
method\cite{benn96,uhlmann99-1};
\begin{equation}
\label{final-5}
{\cal E} (\rho_T) = \min \sum_j p_j {\cal E} (\psi_j),
\end{equation}
where the minimum is taken over all possible pure-state decompositions, i.e. $\rho_T = \sum_j p_j \ket{\psi_j} \bra {\psi_j}$, with $0 \leq p_j \leq 1$
and $\sum_j p_j = 1$.
The decomposition which yields the minimum value is called the optimal decomposition.
However, deriving the optimal decomposition seems to be a highly difficult problem in the continuous variable system.
Because of this difficulty, we will consider the negativity-like quantity\cite{vidal01} of $\rho_T$. Let $\sigma_T$ be a partial transpose of $\rho_T$, i.e.,
\begin{eqnarray}
\label{partial-T-1}
&&\sigma_T [x'_1,x'_2:x_1, x_2: \beta]
\equiv \rho_T [x_1, x'_2:x'_1,x_2:\beta] \\ \nonumber
&=& \frac{\sqrt{a_{1,-} a_{2,-}}}{\pi}
\exp \bigg[ -\alpha_1 (x_1^2 + x_2'^2) - \alpha_2 (x_1'^2 + x_2^2) + 2 \alpha_3 x_1 x_2' + 2 \alpha_4 x_1' x_2 \\ \nonumber
&& \hspace{6.0cm} + 2 \alpha_5 (x_1 x_1' + x_2 x_2') + 2 \alpha_6 (x_1 x_2 + x_1' x_2') \bigg].
\end{eqnarray}
Then, the negativity-like quantity ${\cal N} (\rho_T)$ is defined as
\begin{equation}
\label{negat-1}
{\cal N} (\rho_T) = \sum_{m,n} |\Lambda_{mn}| - 1,
\end{equation}
where $\Lambda_{mn}$ is an eigenvalue of $\sigma_T$, i.e.,
\begin{equation}
\label{negat-2}
\int dx_1 dx_2 \sigma_T [x_1', x_2':x_1, x_2:\beta] f_{mn} (x_1, x_2) = \Lambda_{mn} (\beta) f_{mn} (x_1', x_2': \beta).
\end{equation}
One may wonder why the negativity-like quantity is introduced, because PPT is known to be a necessary and sufficient criterion of
separability only for $2 \times 2$ qubit-qubit and $2 \times 3$ qubit-qutrit states\cite{peres96,horodecki96,horodecki97}. However, as Refs. \cite{duan-2000,simon-2000} have shown, PPT also provides
a necessary and sufficient criterion of separability for Gaussian continuous variable quantum states. Furthermore, distillation protocols to a maximally entangled state have already been suggested for Gaussian states in Ref. \cite{duan-00-1,giedke-2000}. Thus, our negativity-like quantity is valid at least to determine whether a given Gaussian state is entangled or not.
Since ${\cal N} (\rho_T)$ is proportional to ${\cal E} (\rho_T)$, ${\cal N} (\rho_T) = 0$ at the critical temperature $T= T_c$ of the TEPT
if the external temperature induces the ESD phenomenon.
Thus, if the eigenvalue equation (\ref{negat-2}) is solved, it is possible to compute $T_c$.
As we will show in the following, however, it seems to be very difficult to solve Eq. (\ref{negat-2}) directly. In order to solve Eq. (\ref{negat-2}) we define
\begin{equation}
\label{negat-3}
f_{mn} (x_1, x_2: \beta) = e^{\frac{\alpha_1 - \alpha_2}{2} (x_1^2 - x_2^2)} g_{mn} (x_1, x_2, \beta).
\end{equation}
Then, Eq. (\ref{negat-2}) can be written as
\begin{eqnarray}
\label{negat-4}
&&\frac{\sqrt{a_{1,-} a_{2,-}}}{\pi} \exp \left[ - \frac{\alpha_1 + \alpha_2}{2} \left( x_1'^2 + x_2'^2 \right) + 2 \alpha_6 x_1' x_2' \right]
\\ \nonumber
&&\times \int dx_1 dx_2 \exp \left[ - \frac{\alpha_1 + \alpha_2}{2} (x_1^2 + x_2^2) + 2 \alpha_6 x_1 x_2 +
2 \left( \begin{array}{cc}
x_1', & x_2'
\end{array} \right) \left( \begin{array}{cc}
\alpha_5 & \alpha_4 \\
\alpha_3 & \alpha_5
\end{array} \right) \left( \begin{array}{c}
x_1 \\
x_2
\end{array} \right) \right] \\ \nonumber
&&\hspace{6.0cm} \times g_{mn} (x_1, x_2: \beta) = \Lambda_{mn} (\beta) g_{mn} (x_1', x_2': \beta).
\end{eqnarray}
If one changes the variables as $y_1 = \frac{1}{\sqrt{2}} (x_1 + x_2)$ and $y_2 = \frac{1}{\sqrt{2}} (-x_1 + x_2)$, Eq. (\ref{negat-4}) reduces to
\begin{eqnarray}
\label{negat-5}
&&\frac{\sqrt{a_{1,-} a_{2,-}}}{\pi} e^{-\mu_- y_1'^2 - \mu_+ y_2'^2}
\int dy_1 dy_2 \exp \left[ -\mu_- y_1^2 - \mu_+ y_2^2 + \left( \begin{array}{cc} y_1', & y_2' \end{array} \right) A
\left( \begin{array}{c} y_1 \\ y_2 \end{array} \right) \right] \\ \nonumber
&& \hspace{5.0cm} \times g_{mn} (y_1, y_2: \beta) = \Lambda_{mn} (\beta) g_{mn} (y_1', y_2': \beta)
\end{eqnarray}
where
\begin{eqnarray}
\label{negat-6}
\mu_{\pm} = \frac{\alpha_1 + \alpha_2}{2} \pm \alpha_6, \hspace{1.0cm}
A = \left( \begin{array}{cc}
2 \alpha_5 + (\alpha_3 + \alpha_4) & - (\alpha_3 - \alpha_4) \\
\alpha_3 - \alpha_4 & 2 \alpha_5 - (\alpha_3 + \alpha_4)
\end{array} \right).
\end{eqnarray}
The difficulty arises from the fact that $A$ is not a symmetric matrix if $\alpha_3 \neq \alpha_4$. Due to this fact it seems to be impossible to
factorize Eq. (\ref{negat-5}) into two single-party eigenvalue equations as we do in appendix A.
However, Eq. (\ref{negat-5}) can be solved for the case of constant frequencies, i.e., $\omega_{1,i} = \omega_{1,f} \equiv \omega_1$ and
$\omega_{2,i} = \omega_{2,f} \equiv \omega_2$, because in this case $\alpha_3$ is exactly equal to $\alpha_4$. Furthermore, in this case we get
\begin{eqnarray}
\label{negat-7}
&&\mu_+ = \frac{1}{4} \left[ \omega_1 \coth \frac{\omega_1 \beta}{2} + \omega_2 \tanh \frac{\omega_2 \beta}{2} \right], \hspace{.5cm}
\mu_- = \frac{1}{4} \left[ \omega_1 \tanh \frac{\omega_1 \beta}{2} + \omega_2 \coth \frac{\omega_2 \beta}{2} \right] \nonumber \\
&& \hspace{2.0cm} \nu_+ \equiv \alpha_5 - \frac{\alpha_3 + \alpha_4}{2} = \frac{1}{4} \left[ \omega_1 \coth \frac{\omega_1 \beta}{2} - \omega_2 \tanh \frac{\omega_2 \beta}{2} \right] \\ \nonumber
&& \hspace{2.0cm} \nu_- \equiv \alpha_5 + \frac{\alpha_3 + \alpha_4}{2} = -\frac{1}{4} \left[ \omega_1 \tanh \frac{\omega_1 \beta}{2} - \omega_2 \coth \frac{\omega_2 \beta}{2} \right].
\end{eqnarray}
Since $\alpha_3 = \alpha_4$ in this case, Eq. (\ref{negat-5}) is factorized into the following two single-party eigenvalue equations:
\begin{eqnarray}
\label{negat-8}
&&e^{-\mu_- y_1'^2} \int dy_1 e^{-\mu_- y_1^2 + 2 \nu_- y_1' y_1} g_{1,m} (y_1: \beta) = p_m (\beta) g_{1,m} (y_1':\beta) \\ \nonumber
&&e^{-\mu_+ y_2'^2} \int dy_2 e^{-\mu_+ y_2^2 + 2 \nu_+ y_2' y_2} g_{2,n} (y_2: \beta) = q_n (\beta) g_{2,n} (y_2':\beta).
\end{eqnarray}
Then, the total eigenvalue $\Lambda_{mn}$ and the normalized eigenfunction $f_{mn} (x_1, x_2: \beta)$ are expressed as
\begin{eqnarray}
\label{negat-9}
&&\Lambda_{mn} = \frac{1 }{\pi}\sqrt{\omega_1 \omega_2 \tanh \frac{\omega_1 \beta }{2} \tanh \frac{\omega_2 \beta}{2}} p_m (\beta) q_n (\beta)
\\ \nonumber
&& f_{mn} (x_1, x_2:\beta) = g_{1,m} (y_1:\beta) g_{2,n} (y_2:\beta),
\end{eqnarray}
where $g_{1,m} (y_1:\beta)$ and $g_{2,n} (y_2:\beta)$ are normalized eigenfunctions of Eq. (\ref{negat-8}).
Solving Eq. (\ref{negat-8}) it is straightforward to show that the normalized eigenfunctions are
\begin{eqnarray}
\label{negat-10}
&&g_{1,m} (y_1:\beta) = \frac{1}{\sqrt{2^m m!}} \left( \frac{\epsilon_1}{\pi} \right)^{1/4} H_m \left( \sqrt{\epsilon_1} y_1 \right) e^{-\frac{\epsilon_1}{2} y_1^2} \\ \nonumber
&&g_{2,n} (y_2:\beta) = \frac{1}{\sqrt{2^n n!}} \left( \frac{\epsilon_2}{\pi} \right)^{1/4} H_n \left( \sqrt{\epsilon_2} y_2 \right) e^{-\frac{\epsilon_2}{2} y_2^2},
\end{eqnarray}
where
\begin{eqnarray}
\label{negat-11}
&&\epsilon_1 = 2 \sqrt{\mu_-^2 - \nu_-^2} = \sqrt{\omega_1 \omega_2 \tanh \frac{\omega_1 \beta}{2} \coth \frac{\omega_2 \beta}{2}}
\\ \nonumber
&&\epsilon_2 = 2 \sqrt{\mu_+^2 - \nu_+^2} = \sqrt{\omega_1 \omega_2 \coth \frac{\omega_1 \beta}{2} \tanh \frac{\omega_2 \beta}{2}}.
\end{eqnarray}
One can also show that the eigenvalue $\Lambda_{mn}$ is
\begin{equation}
\label{negat-12}
\Lambda_{mn} = (1 - \zeta_1) (1 - \zeta_2) \zeta_1^m \zeta_2^n
\end{equation}
where
\begin{eqnarray}
\label{negat-13}
&&\zeta_1 = \frac{\nu_-}{\mu_- + \frac{\epsilon_1}{2}} = \frac{\sqrt{\mu_- + \nu_-} - \sqrt{\mu_- - \nu_-}}{\sqrt{\mu_- + \nu_-} + \sqrt{\mu_- - \nu_-}} = - \frac{\sqrt{\omega_1 \tanh \frac{\omega_1 \beta}{2}} - \sqrt{ \omega_2 \coth \frac{\omega_2 \beta}{2}}}
{\sqrt{\omega_1 \tanh \frac{\omega_1 \beta}{2}} + \sqrt{ \omega_2 \coth \frac{\omega_2 \beta}{2}}} \\ \nonumber
&&\zeta_2 = \frac{\nu_+}{\mu_+ + \frac{\epsilon_2}{2}} = \frac{\sqrt{\mu_+ + \nu_+} - \sqrt{\mu_+ - \nu_+}}{\sqrt{\mu_+ + \nu_+} + \sqrt{\mu_+ - \nu_+}} = \frac{\sqrt{\omega_1 \coth \frac{\omega_1 \beta}{2}} - \sqrt{ \omega_2 \tanh \frac{\omega_2 \beta}{2}}}
{\sqrt{\omega_1 \coth \frac{\omega_1 \beta}{2}} + \sqrt{ \omega_2 \tanh \frac{\omega_2 \beta}{2}}}.
\end{eqnarray}
One can compute $\pm 1 - \zeta_1$ and $\pm 1 - \zeta_2$ explicitly, which results in $-1 < \zeta_1, \zeta_2 \leq 1$ for arbitrary temperature. Thus,
it is easy to show that $\sum_{m,n} \Lambda_{mn} (\beta) = 1$ as expected. Eq. (\ref{negat-1}) and Eq. (\ref{negat-12}) then give
\begin{equation}
\label{nega-2}
{\cal N} (\beta) = \frac{(1 - \zeta_1) (1 - \zeta_2)}{(1 - |\zeta_1|) (1 - |\zeta_2|)} - 1.
\end{equation}
The $T$-dependence of ${\cal N} (\beta)$ is plotted in Fig. 3 for (a) positive and (b) negative $J$ with $k_0 = 1$ fixed. Both figures show that
${\cal N} (\beta)$ is zero at $T \geq T_c$. Similar results were obtained for general bosonic harmonic lattice systems\cite{peres96,plenio}.
Since ${\cal N} (\beta)$ is proportional to the entanglement of $\rho_T$, this implies that
$\rho_T$ is an entangled (or separable) state at $T < T_c$ (or $T \geq T_c$). The critical temperature $T_c$ increases with increasing
$|J|$.
From Eq. (\ref{nega-2}) it is evident that $\rho_T$ is separable when $\zeta_1 \geq 0$ and $\zeta_2 \geq 0$. Eq. (\ref{negat-13}) implies that
this separability criterion can be rewritten in the form
\begin{equation}
\label{separable-1}
x \tanh x - y \coth y \leq 0, \hspace{1.0cm} x \coth x - y \tanh y \geq 0
\end{equation}
where $x = \omega_1 \beta / 2$ and $y = \omega_2 \beta / 2$. If $J \geq 0$, the first inequality of Eq. (\ref{separable-1}) is automatically satisfied, so
the second one plays the role of the genuine separability criterion. If $J < 0$, the first inequality is the relevant criterion. It is worthwhile noting that the two inequalities
in Eq. (\ref{separable-1}) can be transformed into each other by interchanging $x$ and $y$. This implies that the region in the $x$-$y$ plane where the
separable states reside is symmetric with respect to $y = x$.
The shaded region in Fig. 4(a) is a region where the separable states of $\rho_T$ reside in $x$-$y$ plane. As expected, the region is symmetric with respect to $y = x$.
It is shown that most separable states are accumulated in $0 \leq x, y \leq 1$. The boundary of the region contains information about the critical
temperature $T_c$. The black dashed line in the region is $y = x \coth x$. Since this is very close to the upper boundary, it can be used to compute $T_c$
approximately.
Let the upper boundary of Fig. 4(a) be expressed by $y_c = x_c g (x_c)$, where $x_c$ and $y_c$ are the values of $x$ and $y$ at $T = T_c$. Then the lower boundary is
$x_c = y_c g (y_c)$. The function $g(z)$ can be derived numerically from Eq. (\ref{separable-1}) after changing the inequality into an equality. Then,
$T_c$ can be computed as
\begin{equation}
\label{critical-1}
T_c = \frac{\omega_{min}}{2 g^{-1} \left( \frac{\omega_{max}}{\omega_{min}} \right)}
\end{equation}
where $\omega_{min} = \min (\omega_1, \omega_2)$ and $\omega_{max} = \max (\omega_1, \omega_2)$. If one uses $g(z) \approx \coth z$,
the critical temperature is approximately
\begin{equation}
\label{critical-2}
T_c \approx \frac{\omega_{min}}{\ln \left(\frac{\omega_{max} + \omega_{min}}{\omega_{max} - \omega_{min}} \right)}.
\end{equation}
In Fig. 4(b) the $J$-dependence of $T_c$ is plotted for $k_0 = 1$. The black solid line and the red dashed line correspond to Eq. (\ref{critical-1}) and
Eq. (\ref{critical-2}), respectively. It is shown that $T_c$ increases with increasing $|J|$, as expected from Fig. 3.
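For the constant-frequency case, $T_c$ can also be obtained directly by root finding; a minimal sketch (added for illustration; $k_B = 1$ and the parameter values are assumptions) is:
\begin{verbatim}
# Minimal sketch: for J > 0 the boundary of Eq. (separable-1) is
# x coth x = y tanh y with x = omega_1 beta / 2 and y = omega_2 beta / 2.
import numpy as np
from scipy.optimize import brentq

def critical_temperature(k0=1.0, J=5.0):
    w1, w2 = np.sqrt(k0), np.sqrt(k0 + 2 * J)
    def boundary(beta):
        x, y = w1 * beta / 2, w2 * beta / 2
        return x / np.tanh(x) - y * np.tanh(y)
    beta_c = brentq(boundary, 1e-6, 50.0)    # bracket values are assumptions
    return 1.0 / beta_c

def critical_temperature_approx(k0=1.0, J=5.0):
    w1, w2 = np.sqrt(k0), np.sqrt(k0 + 2 * J)
    wmin, wmax = min(w1, w2), max(w1, w2)
    return wmin / np.log((wmax + wmin) / (wmax - wmin))   # Eq. (critical-2)

print(critical_temperature(), critical_temperature_approx())
\end{verbatim}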
As we commented earlier, for the case of SQM it seems to be a highly difficult problem to solve the eigenvalue equation (\ref{negat-2}) directly. However,
we can conjecture the eigenvalue $\Lambda_{mn} (\beta)$ without deriving the eigenfunction $f_{mn} [x_1, x_2:\beta]$ as follows. Since
$\sum_{m,n} \Lambda_{mn} (\beta) = 1$, $\Lambda_{mn} (\beta)$ might be represented as in Eq. (\ref{negat-12}). If this is right, we can compute
$\zeta_1$ and $\zeta_2$ by making use of the R\'{e}nyi entropy.
$\zeta_1$ and $\zeta_2$ by making use of the R\'{e}nyi entropy.
If the eigenvalue is represented as Eq. (\ref{negat-12}), the R\'{e}nyi entropy of $\sigma_T$
can be written as
\begin{equation}
\label{negat-18}
S_{\alpha} [\sigma_T] \equiv \frac{1}{1 - \alpha} \ln \mbox{tr} \left[ (\sigma_T)^{\alpha} \right]
= \frac{1}{1 - \alpha} \left[ \ln \frac{(1 - \zeta_1)^{\alpha}}{1 - \zeta_1^{\alpha}} + \ln \frac{(1 - \zeta_2)^{\alpha}}{1 - \zeta_2^{\alpha}}\right].
\end{equation}
Putting $\alpha=2$ or $3$ in Eq. (\ref{negat-18}), it is possible to derive
\begin{equation}
\label{negat-19}
\frac{(1 - \zeta_1) (1 - \zeta_2)}{(1 + \zeta_1) (1 + \zeta_2)} = \beta_1, \hspace{1.0cm}
\frac{(1 - \zeta_1)^2 (1 - \zeta_2)^2}{(1 + \zeta_1 + \zeta_1^2) (1 + \zeta_2 + \zeta_2^2)} = \beta_2
\end{equation}
where
\begin{equation}
\label{negat-20}
\beta_1 \equiv \mbox{tr} \sigma_T^2 = \sqrt{\frac{X_1}{X_2}}, \hspace{1.0cm}
\beta_2 \equiv \mbox{tr} \sigma_T^3 = \frac{4 X_1}{X_1 + 3 X_2 - 12 (\alpha_5^2 - \alpha_3 \alpha_4)}
\end{equation}
with
\begin{equation}
\label{negat-21}
X_1 = (\alpha_1 + \alpha_2 - 2 \alpha_5)^2 - (\alpha_3 + \alpha_4 + 2 \alpha_6)^2, \hspace{.5cm}
X_2 = (\alpha_1 + \alpha_2 + 2 \alpha_5)^2 - (\alpha_3 + \alpha_4 - 2 \alpha_6)^2.
\end{equation}
Solving Eq. (\ref{negat-19}), we get
\begin{eqnarray}
\label{negat-22}
&&u = \zeta_1 + \zeta_2 = \frac{1 - \beta_1}{2 (4 \beta_1^2 - \beta_1^2 \beta_2 - 3 \beta_2)} \left[ -3 \beta_2 (1 + \beta_1) + \sqrt{3 \beta_2 \left[ 16 \beta_1^2 - \beta_2 (3 - \beta_1)^2 \right]} \right] \\ \nonumber
&& v = \zeta_1 \zeta_2 = -1 + \frac{1 + \beta_1}{2 (4 \beta_1^2 - \beta_1^2 \beta_2 - 3 \beta_2)} \left[ -3 \beta_2 (1 + \beta_1) + \sqrt{3 \beta_2 \left[ 16 \beta_1^2 - \beta_2 (3 - \beta_1)^2 \right]} \right].
\end{eqnarray}
Thus, $\zeta_1$ and $\zeta_2$ for the case of SQM become
\begin{equation}
\label{negat-23}
\zeta_1 = \frac{u + \sqrt{u^2 - 4 v}}{2}, \hspace{1.0cm} \zeta_2 = \frac{u - \sqrt{u^2 - 4 v}}{2}.
\end{equation}
In the case of constant frequencies, $\beta_1$ and $\beta_2$ reduce to
\begin{equation}
\label{negat-24}
\beta_1 = \frac{\epsilon_1 \epsilon_2}{4 (\mu_+ + \nu_+) (\mu_- + \nu_-)}, \hspace{1.0cm}
\beta_2 = \frac{4 (\mu_+ - \nu_+) (\mu_- - \nu_-)}{(2 \mu_+ + \nu_+) (2 \mu_- + \nu_-)}.
\end{equation}
Using Eq. (\ref{negat-24}), and after a tedious calculation, one can show that $\zeta_1$ and $\zeta_2$ in Eq. (\ref{negat-23}) exactly coincide with those in
Eq. (\ref{negat-13}) when $\omega_{1,i} = \omega_{1,f} = \omega_1$ and $\omega_{2,i} = \omega_{2,f} = \omega_2$. Then, the negativity-like
quantity can be written in the form
\begin{equation}
\label{negat-25}
{\cal N} (\beta) = \frac{1 - u + v}{1 + |v| - (|\zeta_1| + |\zeta_2|)} - 1.
\end{equation}
The temperature dependence of ${\cal N} (\beta) / {\cal N} (\infty)$ is plotted in Fig. 5. In Fig. 5(a) we choose $k_{0,f} = 1$ (black dashed line),
$k_{0,f} = 20$ (red line), and $k_{0,f} = 40$ (blue line) when $k_{0,i} = 1$ and $J_i = J_f = 5$. As this figure exhibits, the critical temperature $T_c$ increases with
increasing $|k_{0,f} - k_{0,i}|$. In Fig. 5(b) we choose $J_f = 5$ (black dashed line), $J_f = 25$ (red line), and $J_f = 45$ (blue line) when $k_{0,i} = k_{0,f} = 1$
and $J_i = 5$. This figure also shows that $T_c$ increases with increasing $|J_f - J_i|$.
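The following minimal sketch (added for illustration; $k_B = 1$, and the bracketing interval and parameter values are assumptions) evaluates ${\cal N}(\beta)$ for the SQM case through Eqs. (\ref{negat-20})--(\ref{negat-23}) and locates $T_c$ as the point where the smaller of $\zeta_1$, $\zeta_2$ changes sign:
\begin{verbatim}
# Minimal sketch: negativity-like quantity for the SQM case.
import numpy as np
from scipy.optimize import brentq

def alphas(beta, k0i, Ji, k0f, Jf):
    # alpha_1 ... alpha_6 of Eq. (instant2-3)
    modes = ((np.sqrt(k0i), np.sqrt(k0f)),
             (np.sqrt(k0i + 2 * Ji), np.sqrt(k0f + 2 * Jf)))
    a = np.zeros(6)
    for j, (wi, wf) in enumerate(modes, start=1):
        b = np.sqrt((wf**2 - wi**2) / (2 * wf**2) * np.cosh(2 * wf * beta)
                    + (wf**2 + wi**2) / (2 * wf**2))
        G = 0.5 * np.log((wf + wi * np.tanh(wf * beta))
                         / (wf - wi * np.tanh(wf * beta)))
        A = (wf**2 - wi**2) / (4 * wf * b**2) * np.sinh(2 * wf * beta)
        s = (-1.0) ** j
        c1 = A / 2 + wi * np.cosh(G) / (4 * b**2 * np.sinh(G))
        c2 = wi * np.cosh(G) / (4 * np.sinh(G))
        c5 = wi / (4 * b * np.sinh(G))
        a += [c1, c2, s * c1, s * c2, c5, -s * c5]
    return a

def zetas(T, k0i=1.0, Ji=5.0, k0f=20.0, Jf=5.0):
    a1, a2, a3, a4, a5, a6 = alphas(1.0 / T, k0i, Ji, k0f, Jf)
    X1 = (a1 + a2 - 2 * a5)**2 - (a3 + a4 + 2 * a6)**2
    X2 = (a1 + a2 + 2 * a5)**2 - (a3 + a4 - 2 * a6)**2
    b1 = np.sqrt(X1 / X2)                                   # tr sigma_T^2
    b2 = 4 * X1 / (X1 + 3 * X2 - 12 * (a5**2 - a3 * a4))    # tr sigma_T^3
    den = 2 * (4 * b1**2 - b1**2 * b2 - 3 * b2)
    root = -3 * b2 * (1 + b1) + np.sqrt(3 * b2 * (16 * b1**2
                                                  - b2 * (3 - b1)**2))
    u, v = (1 - b1) / den * root, -1 + (1 + b1) / den * root
    d = np.sqrt(u**2 - 4 * v)
    return (u + d) / 2, (u - d) / 2

def negativity(T, **kw):
    z1, z2 = zetas(T, **kw)
    return (1 - z1) * (1 - z2) / ((1 - abs(z1)) * (1 - abs(z2))) - 1

# T_c: the smaller zeta changes sign there (bracket values are assumptions)
T_c = brentq(lambda T: min(zetas(T)), 0.5, 50.0)
\end{verbatim}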
\section{Conclusions}
In this paper we derive explicitly the thermal state of the two coupled harmonic oscillator system when the spring and coupling constants are
arbitrarily time-dependent. In particular, we focus on the SQM model (see Eq. (\ref{instant2-1}) and Eq. (\ref{instant2-2})). In this model we compute the purity
function, the R\'{e}nyi and von Neumann entropies, and the mutual information analytically and examine their temperature dependence. We also discuss the TEPT by making use of the negativity-like quantity. Our calculation shows that the critical temperature $T_c$ increases with
increasing difference between the initial and final frequencies. In this way we can use the SQM model to protect the entanglement against the external temperature by introducing a large difference of frequencies, i.e. $|\omega_f - \omega_i| \gg 1$.
There are several issues related to our paper. Since the SQM model we consider involves a discontinuity at $t=0$, it is in some sense unrealistic. In order to
avoid this, one can introduce a time dependence of the frequencies of the form $\omega = \omega_i + (\omega_f - \omega_i) \sin \Omega t$.
Then, the Ermakov equation has to be solved numerically. In this case the critical temperature $T_c$ might depend on $\Omega$ and
$|\omega_f - \omega_i|$. Then, it may be possible to protect the entanglement in the thermal bath by adjusting $\Omega$ and $|\omega_f - \omega_i|$
appropriately.
In this paper we introduce the negativity-like quantity to examine the thermal entanglement, because we do not know how to derive the
optimal decomposition of Eq. (\ref{final-5}). Recently, upper and lower bounds on the entanglement of formation (EoF) have been examined for arbitrary two-mode
Gaussian states\cite{ralph19}. It seems to be of interest to examine the TEPT with the EoF.
\begin{appendix}{\centerline{\bf Appendix A}}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
In this section we examine the eigenvalue equation of the following bipartite Gaussian state:
\begin{eqnarray}
\label{type2}
&& \rho_2 [x_1', x_2': x_1, x_2]
= A \exp \bigg[ -a_1 (x_1'^2 + x_2'^2) - a_2 (x_1^2 + x_2^2) + 2 b_1 x_1' x_2' + 2 b_2 x_1 x_2 \\ \nonumber
&& \hspace{7.0cm} + 2 c (x_1 x_1' + x_2 x_2') + 2 f (x_1 x_2' + x_2 x_1') \bigg]
\end{eqnarray}
where $A = \sqrt{(a_1 + a_2 - 2c)^2 - (b_1 + b_2 + 2 f)^2} / \pi$. If $a_1 = \alpha_1$, $a_2 = \alpha_2$, $b_1 = \alpha_3$, $b_2 = \alpha_4$, $c = \alpha_5$,
and $f = \alpha_6$, $\rho_2$ is exactly the same as the thermal state $\rho_T$ given in Eq. (\ref{thermal2-3}).
Now let us consider the eigenvalue equation
\begin{equation}
\label{eigen4-1}
\int dx_1 dx_2 \rho_2 [x_1', x_2': x_1, x_2] f_{mn} (x_1, x_2) = \lambda_{mn} f_{mn} (x_1', x_2').
\end{equation}
First we change the variables as
\begin{equation}
\label{change4-1}
y_1 = \frac{1}{\sqrt{2}} (x_1 + x_2), \hspace{2.0cm} y_2 = \frac{1}{\sqrt{2}} (x_1 - x_2).
\end{equation}
Then Eq. (\ref{eigen4-1}) is simplified as
\begin{eqnarray}
\label{eigen4-2}
&& A e^{-(a_1 - b_1) y_1'^2 - (a_1 + b_1) y_2'^2}
\int dy_1 dy_2 e^{- (a_2 - b_2) y_1^2 - (a_2 + b_2) y_2^2 + 2 (c + f) y_1' y_1 + 2 (c - f) y_2' y_2} f_{mn} (y_1, y_2) \nonumber \\
&& \hspace{8.0cm}= \lambda_{mn} f_{mn}(y_1', y_2').
\end{eqnarray}
Now, we define
\begin{equation}
\label{define4-1}
f_{mn} (y_1, y_2) = g_m (y_1) h_n (y_2).
\end{equation}
Then, Eq. (\ref{eigen4-2}) is solved if one solves the following two single-party eigenvalue equations:
\begin{eqnarray}
\label{eigen4-3}
&& e^{-(a_1 - b_1) y_1'^2} \int dy_1 e^{-(a_2 - b_2) y_1^2 + 2 (c + f) y_1' y_1} g_m (y_1) = p_m g_m (y_1') \\ \nonumber
&& e^{-(a_1 + b_1) y_2'^2} \int dy_2 e^{-(a_2 + b_2) y_2^2 + 2 (c - f) y_2' y_2} h_n (y_2) = q_n h_n (y_2').
\end{eqnarray}
The eigenvalue of Eq. (\ref{eigen4-1}) can be computed as $\lambda_{mn} = A p_m q_n$.
By making use of Eq. (\ref{von-2}) and Eq. (\ref{von-3}) one can show that $\lambda_{mn} = (1 - \xi_1) \xi_1^m (1 - \xi_2) \xi_2^n$, where
\begin{eqnarray}
\label{eigen4-4}
&&\xi_1 = \frac{2 (c + f)}{(a_1 + a_2 - b_1 - b_2) + \epsilon_1} \\ \nonumber
&&\hspace{.5cm}= \frac{\sqrt{(a_1 + a_2 - b_1 - b_2) + 2 (c+f)} - \sqrt{(a_1 + a_2 - b_1 - b_2) - 2 (c+f)}}
{\sqrt{(a_1 + a_2 - b_1 - b_2) + 2 (c+f)} + \sqrt{(a_1 + a_2 - b_1 - b_2) - 2 (c+f)}}
\\ \nonumber
&&\xi_2 = \frac{2 (c - f)}{(a_1 + a_2 + b_1 + b_2) + \epsilon_2} \\ \nonumber
&&\hspace{.5cm}= \frac{\sqrt{(a_1 + a_2 + b_1 + b_2) + 2 (c-f)} - \sqrt{(a_1 + a_2 + b_1 + b_2) - 2 (c-f)}}
{\sqrt{(a_1 + a_2 + b_1 + b_2) + 2 (c-f)} + \sqrt{(a_1 + a_2 + b_1 + b_2) - 2 (c-f)}}
\end{eqnarray}
with
\begin{equation}
\label{eigen4-5}
\epsilon_1 = \sqrt{(a_1 + a_2 - b_1 - b_2)^2 -4 (c + f)^2}, \hspace{1.0cm} \epsilon_2 = \sqrt{(a_1 + a_2 + b_1 + b_2)^2 -4 (c - f)^2}.
\end{equation}
We can also use Eq. (\ref{von-2}) and Eq. (\ref{von-3}) to derive the normalized eigenfunction, whose explicit expression is
\begin{equation}
\label{eigen4-6}
f_{mn} (x_1, x_2) = \left(\frac{1}{{\cal C}_{1,m}} H_m (\sqrt{\epsilon_1} y_1) e^{-\frac{\alpha_1}{2} y_1^2} \right)
\left(\frac{1}{{\cal C}_{2,n}} H_n (\sqrt{\epsilon_2} y_2) e^{-\frac{\alpha_2}{2} y_2^2} \right)
\end{equation}
where
\begin{equation}
\label{eigen4-7}
\alpha_1 = \epsilon_1 + (a_1 - a_2) - (b_1 - b_2), \hspace{1.0cm} \alpha_2 = \epsilon_2 + (a_1 - a_2) + (b_1 - b_2)
\end{equation}
and the normalization constants ${\cal C}_{1,m}$ and ${\cal C}_{2,n}$ are
\begin{eqnarray}
\label{eigen4-8}
&&{\cal C}_{1,m}^2 = \frac{1}{\sqrt{\alpha_1}} \sum_{k=0}^m 2^{2m - k} \left( \frac{\epsilon_1}{\alpha_1} - 1 \right)^{m-k}
\frac{\Gamma^2 (m+1) \Gamma (m - k + 1/2)}{\Gamma (k + 1) \Gamma^2 (m-k+1)} \\ \nonumber
&&{\cal C}_{2,n}^2 = \frac{1}{\sqrt{\alpha_2}} \sum_{k=0}^n 2^{2n - k} \left( \frac{\epsilon_2}{\alpha_2} - 1 \right)^{n-k}
\frac{\Gamma^2 (n+1) \Gamma (n - k + 1/2)}{\Gamma (k + 1) \Gamma^2 (n-k+1)}.
\end{eqnarray}
Thus, the spectral decomposition of $\rho_2$ is
\begin{equation}
\label{sepctral4-1}
\rho_2 [x_1', x_2':x_1, x_2] = \sum_{m,n} \langlembda_{mn} f_{mn} (x_1', x_2') f_{mn}^* (x_1, x_2),
\end{equation}
where $\lambda_{mn}$ and $f_{mn}$ are given in Eq. (\ref{eigen4-4}) and Eq. (\ref{eigen4-6}) respectively.
\end{appendix}
\end{document}
\begin{document}
\title{Total stretch minimization on single and identical parallel machines}
\author{ Abhinav Srivastav\inst{1,2}, Denis Trystram\inst{1,3}}
\institute{Univ. Grenoble Alpes \inst{1}, CNRS-Verimag\inst{2} \& Institut Universitaire de France \inst{3}
\\ \email{[email protected], [email protected]}
}
\maketitle
\begin{abstract}
We consider the classical problem of scheduling $n$ jobs with release dates on both single and identical parallel machines.
We measure the quality of service provided to each job by its stretch, which is defined as the ratio of its response time to processing time.
Our objective is to schedule these jobs non-preemptively so as to minimize total stretch.
So far, there have been very few results for total stretch minimization especially for the non-preemptive case.
For the preemptive version, the Shortest Remaining Processing Time (SRPT) algorithm is known to be $2$-competitive for total stretch on a single machine, while it is $13$-competitive on identical parallel machines.
We study the problem under some additional assumptions and present stronger competitive ratios.
We show that the Shortest Processing Time (SPT) algorithm is $(\Delta - \frac{1}{\Delta}+1)$-competitive for non-preemptive total stretch minimization on a single machine and $(\Delta -\frac{1}{\Delta}+ \frac{3}{2} -\frac{1}{2m})$-competitive on $m$ identical parallel machines, where $\Delta$ is an upper bound on the ratio between the maximum and the minimum processing time of the jobs.
\end{abstract}
\section{Introduction}
\par We consider the problem of non-preemptive scheduling of jobs with release dates on single and identical parallel machines.
Our objective is to schedule these jobs so as to guarantee a \textit{``fair"} quality of service to individual jobs.
The stretch is defined as the factor by which a job is slowed down with respect to the time it would take on an unloaded system~\cite{Bender98flowand}.
Formally, we are given a set of $n$ jobs where job $J_j$ has a processing time $p_j$ and a release date $r_j$ before which it cannot be scheduled; the stretch $s_j$ of job $J_j$ is then defined as $\frac{F_j}{p_j}$, where $F_j = C_j - r_j$ denotes the flow time ($C_j$ being the completion time of job $J_j$ in the schedule).
Our objective is to schedule the stream of jobs arriving online so as to minimize $\sum s_j$ for all the instances.
This objective is often referred to as the \textit{average stretch} or \textit{total stretch} optimization problem.
In this paper, we restrict our attention to schedule the jobs non-preemptively on single and parallel machines.
In the classical scheduling notation introduced by Graham et al.~\cite{Graham:1979}, these problems are respectively denoted $1|r_j|\sum s_j$ and $Pm|r_j|\sum s_j$.
Legrand et al.~\cite{Arnaud2008} showed, using a reduction from the \emph{partition} problem, that $1|r_i|\sum s_i$ is NP-complete.
\subsection{Related works}
\par Muthukrishnan et al.~\cite{min-average-stretch} showed that the classical scheduling policy Shortest Remaining Processing Time (SRPT) is $2$- and $13$-competitive for the problems $1|r_j, pmtn| \sum s_j$ and $Pm|r_j, pmtn| \sum s_j$, respectively.
Later, Chekuri et al.~\cite{Chekuri:2001} presented an algorithm that achieves competitive ratios of $13$ and $19$ for the migratory and non-migratory models of $Pm|r_j, pmtn| \sum s_j$.
Bender et al.~\cite{Bender03} presented a PTAS for the uniprocessor preemptive case of total stretch with running time $O(n^{\mathrm{poly}(\frac{1}{\epsilon})})$.
A more general problem than total stretch is minimizing the sum of weighted flow times ($\sum w_i F_i$).
No online algorithm with a constant competitive ratio is known for the sum of weighted flow times.
Bansal et al.~\cite{Nikhil:2007} showed using resource augmentation that there is an $O(1)$-speed $O(1)$-approximation for the offline version of the weighted sum flow problem.
Leonardi et al.~\cite{Leonardi:2007} proved a lower bound of $\Omega(n^{\frac{1}{2} -\epsilon})$ for $1|r_j|\sum F_j$, while Kellerer et al.~\cite{Kellerer95approximability} showed a lower bound of $\Omega(n^{\frac{1}{3}-\epsilon})$ for $Pm|r_j|\sum F_j$.
Considering that such strong lower bounds exist for sum flow time, we assume the additional information that the ratio of the maximum to the minimum processing time over all jobs is bounded by $\Delta$.
Using this assumption, Bunde~\cite{Bunde04sptis} proved that the Shortest Processing Time (SPT) algorithm is $\frac{\Delta+1}{2}$-competitive for sum flow time on a single machine.
Chekuri et al.~\cite{Chekuri:2001} provided an online algorithm for $1|r_i,pmtn|\sum_i w_i F_i$ that is $O(\log^2\Delta)$-competitive.
They also gave a quasi-polynomial time $(2+\epsilon)$-approximation for the offline case when the weights and processing times are polynomially bounded.
Tao et al.~\cite{Tao:2013} showed that Weighted Shortest Processing Time is $(\Delta+1)$- and $(\Delta + \frac{3}{2} - \frac{1}{2m})$-competitive for the sum of weighted flow times on single and parallel machines, respectively.
Their analysis is based on the idea of instance transformation, which inherently assumes that the weights are independent of the jobs' parameters. For stretch minimization this assumption is not valid.
We provide a proof for this special case where the weights depend on the processing times, i.e. $w_i = \frac{1}{p_i}$. Moreover, the competitive ratios presented in this paper are tighter than those of Tao et al.~\cite{Tao:2013}.
\subsection{Contributions}
\par In this paper, we extend the understanding of the competitiveness of stretch for non-preemptive schedules by presenting new competitive ratios.
We show that SPT is $(\Delta - \frac{1}{\Delta}+1)$- and $(\Delta -\frac{1}{\Delta} + \frac{3}{2} - \frac{1}{2m})$-competitive, running in $O(n\log n)$ time, for the problems $1|r_j|\sum s_j$ and $Pm|r_j|\sum s_j$ respectively, where $m$ is the number of machines.
Our analysis for a single machine is based on careful observations of the structural similarity between the SPT and SRPT schedules.
On the other hand, our analysis for parallel machines is based on converting SPT on parallel machines into a new schedule on a virtual single machine.
\par This paper is organised as follows. In Section \ref{pre}, we present basic definitions and notations used in this paper.
In Section \ref{1machine}, we analyze the SPT algorithm on a single machine while in section~\ref{m-machine}, we present the analysis of SPT on identical parallel machines.
Section~\ref{conc} provides some concluding remarks for this work.
\section{Preliminaries} \label{pre}
\par In this section, we introduce some basic definitions and notations, that are used frequently in the remainder of this paper.
We consider the following clairvoyant online scheduling scenario.
A sequence of jobs arrive over time and the processing time of each job is known at its time of arrival.
Our goal is to execute the continuously arriving stream of jobs. Let $\mathcal{I}$ be a given scheduling instance specified by a set of jobs $\mathit{J}$, and for each job $J_j\in\mathit{J}$, a release time $r_j$ and a processing time $p_j$.
Without loss of generality, we assume that the smallest and largest processing times are equal to 1 and $\Delta$, respectively.
\par The proposed work is focused on studying two different well-known algorithms, namely SRPT and SPT.
The Shortest Remaining Processing Time (abbreviated as SRPT) rule gives a preemptive schedule defined as follows:
at any time $t$, the available job $J_j$ with the shortest remaining processing time $\rho_j(t)$ is processed until it is either completed or another job $J_i$ with $\rho_i(r_i)< \rho_j(r_i)$
becomes available, where the remaining processing time $\rho_j(t)$ of job $J_j$ is the amount of processing of $J_j$ that has not been performed before time $t$.
In the second case, job $J_j$ is preempted and job $J_i$ is processed.
On the other hand, the Shortest Processing Time (abbreviated as SPT) rule gives a non-preemptive, non-waiting schedule that runs the shortest available job in the queue whenever a processor becomes idle.
\par Formally, an online algorithm $\mathcal{A}_{on}$ is said to be $\alpha$-competitive with respect to an offline algorithm $\mathcal{A}_{off}$ if the worst-case ratio (over all possible instances) of the performance of $\mathcal{A}_{on}$ to the performance of $\mathcal{A}_{off}$ is no more than $\alpha$.
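To make the setting concrete, the following is a minimal sketch (not part of the original text; the job data are illustrative) of the SPT policy described above and of the total stretch it produces:
\begin{verbatim}
# Minimal sketch: non-preemptive SPT on m identical machines and the
# resulting total stretch.  Jobs are (release_date, processing_time) pairs.
import heapq

def spt_total_stretch(jobs, m=1):
    jobs = sorted(jobs)                    # by release date
    machines = [0.0] * m                   # time at which each machine is free
    heapq.heapify(machines)
    pending = []                           # heap of (p_j, r_j) of released jobs
    i, total, n = 0, 0.0, len(jobs)
    while i < n or pending:
        t = heapq.heappop(machines)
        if not pending and i < n and jobs[i][0] > t:
            t = jobs[i][0]                 # idle until the next release
        while i < n and jobs[i][0] <= t:   # collect everything released by t
            heapq.heappush(pending, (jobs[i][1], jobs[i][0]))
            i += 1
        p, r = heapq.heappop(pending)      # shortest released job
        completion = t + p
        total += (completion - r) / p      # stretch of this job
        heapq.heappush(machines, completion)
    return total

# Example: three jobs on a single machine.
print(spt_total_stretch([(0.0, 3.0), (1.0, 1.0), (1.0, 2.0)], m=1))
\end{verbatim}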
\section{Analysis of $1|r_i|\sum s_i$} \label{1machine}
\par We begin by introducing some notions that play a central role in our analysis. We first show in Section \ref{srpt-spt} the structural similarity between the SRPT and SPT schedules. Then, we construct a non-preemptive schedule by changing SRPT into a new schedule (called POS) and show that POS is $(\Delta - \frac{1}{\Delta}+1)$-competitive with respect to SRPT (Section~\ref{inters}).
Later, we show in Section~\ref{spt-ana} that the total stretch of SPT and SRPT is no worse than that of POS and of non-preemptive schedules, respectively.
Thus, the cost of SPT is within a factor $(\Delta-\frac{1}{\Delta}+1)$ of the cost of an optimal non-preemptive schedule.
\subsection{Structure of SRPT and SPT} \label{srpt-spt}
\par Without loss of generality (w.l.o.g.), we assume that SRPT resumes one of the jobs with equal remaining processing time before executing a new job.
It may choose arbitrarily between jobs with equal initial processing times, provided that SPT uses the same order.
In SRPT we define an active interval $I_j = [S_j,C_j]$ for each job $J_j$, where $S_j$ is the start time of $J_j$ in the preemptive schedule.
Note that due to the preemptive nature of schedule, the length of $I_j$ (denoted by $|I_j|$) is greater than or equal to the processing time $p_j$ of job $J_j$.
When two such active intervals intersect, one is contained in the other and there is no machine idle time between them~\cite{Kellerer95approximability}.
\par Based on such strong containment relations, we define a \textit{directed ordered forest} as shown in Figure~\ref{srpt-dof}. The vertices consist of the jobs $1,\ldots,n$.
There exists a directed edge going from job $J_i$ to $J_j$ if and only if $I_j \subseteq I_i$ and there does not exist a job $J_k$ with $I_j \subseteq I_k\subseteq I_i$.
For every vertex $i$, its children are ordered from left to right according to the ordering of their corresponding intervals in $I_i$.
Hence, we have a collection of directed out-trees $\mathcal{T} = \{T_1,\ldots,T_r\}$.
We also order the roots $\gamma(T_k)$ of the trees from left to right according to the ordering of their corresponding intervals.
Hence, SRPT runs the jobs in the order of the out-trees, that is:
all the jobs belonging to an out-tree $T_a$ are executed before those of $T_b$ if and only if $I_{\gamma(T_a)}<I_{\gamma(T_b)}$.
Bunde showed in~\cite{Bunde04sptis} that SPT also runs the jobs in a similar fashion.
Thus, the difference between SRPT and SPT comes from the order of execution of the jobs within each out-tree; a computational sketch of the forest construction is given below.
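The following is a minimal sketch (not from the original text) of how the active intervals of a single-machine SRPT schedule and the corresponding containment forest could be computed; ties between equal remaining processing times are broken by job index, which is one admissible choice under the convention above.
\begin{verbatim}
# Minimal sketch: single-machine SRPT intervals and the containment forest.
import heapq

def srpt_intervals(jobs):
    """jobs: list of (r_j, p_j); returns {j: (S_j, C_j)}."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready, start, finish = [], {}, {}
    t, k = 0.0, 0
    while k < len(order) or ready:
        if not ready:                                  # idle until next release
            t = max(t, jobs[order[k]][0])
        while k < len(order) and jobs[order[k]][0] <= t:
            j = order[k]
            heapq.heappush(ready, (jobs[j][1], j))
            k += 1
        rem, j = heapq.heappop(ready)                  # shortest remaining job
        start.setdefault(j, t)
        horizon = jobs[order[k]][0] if k < len(order) else float("inf")
        run = min(rem, horizon - t)                    # run until done or release
        t += run
        if run < rem:
            heapq.heappush(ready, (rem - run, j))      # possibly preempted
        else:
            finish[j] = t
    return {j: (start[j], finish[j]) for j in start}

def containment_forest(intervals):
    """parent[j] = smallest interval properly containing I_j, or None."""
    parent = {}
    for j, (s, c) in intervals.items():
        best = None
        for i, (si, ci) in intervals.items():
            if i != j and si <= s and c <= ci and (c - s) < (ci - si):
                if best is None or (ci - si) < (intervals[best][1]
                                                - intervals[best][0]):
                    best = i
        parent[j] = best
    return parent
\end{verbatim}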
\begin{figure}
\caption{Example showing a SRPT schedule and its corresponding directed ordered forest}
\label{srpt-dof}
\end{figure}
\subsection{Intermediate schedule} \label{inters}
\par Starting from SRPT schedule, we construct a new non-preemptive schedule called \textit{POS} (which stands for Post Order Schedule).
During the interval $I_j = [S_{j},C_{j}]$, where $j = \gamma(T_a)$, POS runs the jobs of $T_a$, starting with $J_j$ and then running the other jobs of $T_a$
in order of increasing SRPT completion time (post-order traversal), as shown in Figure~\ref{srpt-2-pos}.
\begin{figure}
\caption{Transformation showing SRPT to POS schedule}
\label{srpt-2-pos}
\end{figure}
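Given the intervals and the parent map from the sketch above, the POS order itself can be produced as follows (again a sketch, not the authors' code):
\begin{verbatim}
# Minimal sketch: POS runs, within each out-tree, the root first and then
# the remaining jobs by increasing SRPT completion time.
def pos_order(intervals, parent):
    roots = sorted((j for j, p in parent.items() if p is None),
                   key=lambda j: intervals[j][0])     # trees left to right
    def tree_members(r):
        members, frontier = [r], [r]
        while frontier:
            frontier = [j for j, p in parent.items() if p in frontier]
            members += frontier
        return members
    order = []
    for r in roots:
        rest = [j for j in tree_members(r) if j != r]
        order += [r] + sorted(rest, key=lambda j: intervals[j][1])
    return order
\end{verbatim}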
\begin{myd} \label{def-compact}
A schedule is said to be compact if there is no idle time between the execution of jobs except due to the unavailability of jobs.
\end{myd}
\begin{myp} \label{pos-compact}
POS is compact if and only if SRPT is compact
\end{myp}
\begin{proof}
There can be idle time in the SRPT schedule only when no job is available for execution.
During the execution of the jobs belonging to the out-tree $T_a$ in SRPT, at any time $t \in [S_{\gamma(T_a)},C_{\gamma(T_a)}]$ there is at least one partially completed job (specifically, $\gamma(T_a)$) available for execution.
Thus, idle time can only exist between the intervals of out-trees $T_a$ and $T_{a+1}$.
It is sufficient to show that POS completes all the jobs belonging to $T_a$ in the interval $I_{\gamma(T_a)}$.
\par At any time $t\geq S_{\gamma(T_a)}+p_{\gamma(T_a)}$, the job scheduled in POS has already been completed by SRPT.
This follows from the fact that SRPT has, at any time, finished at least as many jobs as any other algorithm~\cite{Chung:2010}.
At any time $t < S_{\gamma(T_a)} + p_{\gamma(T_a)}$, POS runs the root job of $T_a$, i.e. $\gamma(T_a)$.
Hence, POS is busy for the entirety of the interval $I_{\gamma(T_a)}$ and completes all the jobs belonging to $T_a$.
\end{proof}
The following corollary is a direct consequence of proposition~\ref{pos-compact}.
\begin{myc} \label{pos-srpt-order}
POS and SRPT only differ in the order of execution of the jobs within each out-tree.
\end{myc}
\begin{myl} \label{pos-comp}
POS is $(\Delta-\frac{1}{\Delta}+1)$-competitive for total stretch with respect to SRPT schedule.
\end{myl}
\begin{proof}
From Corollary~\ref{pos-srpt-order}, it is sufficient to show that the POS schedule is $(\Delta-\frac{1}{\Delta}+1)$-competitive with respect to the SRPT schedule for any out-tree.
Let $\sum\limits^{srpt}$ and $\sum\limits^{pos}$ denote the total stretch of SRPT and POS for an out-tree $T_a$, respectively.
The completion time of $\gamma(T_a)$ in POS is $\sum\limits_{\substack{J_k \in T_a\\ J_k \neq \gamma(T_a)}} p_k$ time units earlier than that in SRPT.
Consequently, $s_{\gamma(T_a)}^{pos} = s_{\gamma(T_a)}^{srpt} - \sum\limits_{\substack{J_k \in T_a\\ J_k \neq \gamma(T_a)}} \frac{p_k}{p_{\gamma(T_a)}}$.
On the other hand, each of the remaining jobs is delayed in POS by at most $p_{\gamma(T_a)}$.
Therefore, $s_k^{pos}\leq s_k^{srpt} + \frac{p_{\gamma(T_a)}}{p_k}$ for all $J_k \in T_a$ with $J_k \neq \gamma(T_a)$. Then, the total stretch of all jobs in $T_a$ satisfies
\begin{align*}
\sum_{k\in T_a} s_k^{pos} &\leq \sum_{k\in T_a} s_k^{srpt} + \sum_{\substack{J_k \in T_a\\ J_k \neq \gamma(T_a)}}\frac{p_{\gamma(T_a)}}{p_k} - \sum_{\substack{J_k \in T_a\\ J_k \neq \gamma(T_a)}} \frac{p_k}{p_{\gamma(T_a)}},
\end{align*}
that is,
\begin{align*}
\sum\limits^{pos} &\leq \sum\limits^{srpt} + \sum_{\substack{J_k \in T_a\\ J_k \neq \gamma(T_a)}}\left(\frac{p_{\gamma(T_a)}}{p_k} - \frac{p_k}{p_{\gamma(T_a)}}\right) \\
&\leq \sum\limits^{srpt} + \sum_{\substack{J_k \in T_a\\ J_k \neq \gamma(T_a)}} \left(\Delta - \frac{1}{\Delta}\right) \\
&\leq \sum\limits^{srpt} + ||I_a|| \left(\Delta - \frac{1}{\Delta}\right) \leq \left(\Delta - \frac{1}{\Delta} +1\right)\sum\limits^{srpt}
\end{align*}
where $||I_a||$ denotes the number of jobs executed in interval $I_a$. The second inequality is a direct consequence of the fact that $1 \leq \frac{p_{\gamma(T_a)}}{p_k} \leq \Delta$ for every $J_k \in T_a$. The last inequality follows from the fact that $s_j \geq 1$ for all $j \in J$, which gives $||I_a|| \leq \sum\limits^{srpt}$.
\end{proof}
\subsection{SPT versus POS} \label{spt-ana}
\par In this section, we show that SPT achieves a total stretch no larger than that of POS. Our proof is based on iteratively transforming the SPT schedule into POS by removing the first difference between them.
\par W.l.o.g., assume that SPT runs the jobs in numerical order: $J_1$ followed by $J_2$ and so on. Suppose the first difference between SPT and POS occurs when SPT starts a job $J_i$ while POS starts another job $J_j$, as shown in Figure~\ref{spt2pos}.
SPT is changed by moving $J_j$ before $J_i$ and shifting every job from $J_i$ to $J_{j-1}$ accordingly. The increase in the total stretch (denoted by $\delta_{j}$) caused by this transformation is given by
\begin{align} \label{eq1}
\delta_{j} &= \sum\limits_{k=i}^{j-1}\frac{p_j}{p_k} - \sum\limits_{k=i}^{j-1}\frac{p_k}{p_j} = \sum\limits_{k=i}^{j-1}(\frac{p_j}{p_k} - \frac{p_k}{p_j}) = \sum\limits_{k=i}^{j-1}\delta_{jk} \nonumber
\end{align}
where $\delta_{jk}$ is the local increase in stretch by swapping job $J_k$ $(i \leq k \leq j-1)$ with $J_j$.
\begin{figure}
\caption{Successive transformations from SPT to POS}
\label{spt2pos}
\end{figure}
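For concreteness, the following minimal sketch evaluates the quantity $\delta_j$ above for one transformation step; the function name and the indexing convention are illustrative assumptions.
\begin{verbatim}
def swap_cost(p, i, j):
    # delta_j: moving J_j in front of J_i, ..., J_{j-1} in the SPT order;
    # p[k] is the processing time of job J_k
    return sum(p[j] / p[k] - p[k] / p[j] for k in range(i, j))
\end{verbatim}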
Based on the sizes of $J_k$ and $J_j$, we classify the transformation moves into two sets according to the sign of $\delta_{jk}$, namely
$\delta_{jk} <0$ if $p_j < p_k$ and $\delta_{jk} \geq 0$ otherwise. We now present a series of technical results to prove that $\sum \delta_{j} \geq 0$ for all $j \in J$.
\par Let us consider two jobs $J_j$ and $J_k$ such that $\delta_{jk} < 0$; $J_j \prec J_k$ in POS and $J_k \prec J_j$ in the SPT schedule, where $a \prec b$ denotes that job $a$ is executed before job $b$ in the schedule.
\begin{myl} \label{l>j}
There exists a job $J_l$ ($l\neq j$) such that $J_l \prec J_k$ in POS while $J_k \prec J_l$ in SPT and $p_{l} \geq p_k >p_j$.
\end{myl}
\begin{proof}
Let $t'$ be the time at which SPT schedules job $J_k$. Since $p_j < p_k$ and $J_k \prec J_j$ in SPT, it follows that $t' < r_j$. Let $t \geq r_j >t'$ be the time at which POS starts executing $J_k$. Therefore, at time $t'$, POS schedules some job $J_l$ while SPT schedules $J_k$. Now if $p_l \geq p_k$, our lemma holds. Otherwise, if $p_l < p_k$, then by applying the same argument iteratively, it can be shown that there is some other job $J_{l'}$ such that $J_k \prec J_{l'}$ in SPT while $J_{l'} \prec J_k$ in POS.
\end{proof}
\begin{myo} \label{order-pos}
$J_l \prec J_j \prec J_k$ in POS schedule.
\end{myo}
\begin{myl} \label{J_j,J_l}
$J_j \prec J_l$ in SPT.
\end{myl}
\begin{proof}
Assume that in SPT $J_l \prec J_j$.
Let $t$ and $t'$ be the time at which $J_l$ start executing in SPT and POS schedules, respectively.
Then using Lemma~\ref{l>j}, $t' < t$.
Using SPT principle of the schedule at $t$, we get $r_j > t$ and $p_l > p_j$ in which case POS cannot schedule $J_l$ at time $t'$ or $p_l \leq p_j$.
Hence, this yields a direct contradiction to our assumption that $J_l \prec J_j$.
\end{proof}
\begin{myc} \label{sum>0}
For each $\delta_{jk} <0$, there exists a job $l$ such that $\delta_{jk} + \delta_{lj} \geq 0$.
\end{myc}
\begin{proof}
It follows from Lemma~\ref{l>j} and Lemma~\ref{J_j,J_l} that there is always a job $l$ such that the ordering of the jobs in SPT is $J_k \prec J_j \prec J_l$, while the same set of jobs is executed in the order $J_l \prec J_j \prec J_k$ in POS. Moreover,
the move contributing $\delta_{lj}$ can be coupled with the one contributing $\delta_{jk}$ such that:
\begin{align} \nonumber
\delta_{lj} + \delta_{jk} &= \frac{p_l}{p_j} - \frac{p_j}{p_l} + \frac{p_j}{p_k} - \frac{p_k}{p_j}
= \frac{p_k p_l + p_j^2}{p_j p_k p_l} (p_l - p_k) \geq 0 \nonumber
\end{align}
The last inequality follows from Lemma~\ref{l>j}.
\end{proof}
\begin{myp} \label{all>0}
$\sum \delta_{j} \geq 0$
\end{myp}
\begin{proof}
It follows from Corollary~\ref{sum>0} that for every $\delta_{jk} < 0$, there exists a job $l$ such that $\delta_{jk}
+ \delta_{lj} \geq 0$. We then say that job $k$ is matched to job $l$. Our proposition holds if this matching is injective over all jobs $k_i$ with $\delta_{jk_i} < 0$. If instead several such jobs $k_i \in \mathcal{K}$ are matched to the same job $l$, then, using a construction similar to the proof of Lemma~\ref{l>j} with $p_k$ replaced by $\sum p_{k_i}$, we get $p_l \geq \sum\limits_{k_i \in \mathcal{K}} p_{k_i}$. Consequently, the decrease in the total stretch due to the jobs $k_i$ can be coupled with $\delta_{lj}$ such that $\delta_{lj} + \sum\limits_{k_i \in \mathcal{K}} \delta_{jk_i} \geq 0$.
\end{proof}
The following result is an immediate consequence of Proposition~\ref{all>0} and Lemma~\ref{pos-comp}.
\begin{myc} \label{spt-approx-pos}
SPT is a $(\Delta - \frac{1}{\Delta} +1)$-approximation with respect to SRPT.
\end{myc}
\subsection{SRPT versus Optimal offline schedule}
\begin{myl} \label{srpt-as-lower-bound}
The total stretch of the SRPT schedule is no worse than that of an optimal offline non-preemptive schedule.
\end{myl}
\begin{proof}
The proof is based on a comparison of the optimal and SRPT schedules at successive idle times.
The detailed proof is provided in Appendix~\ref{Proof-srpt-as-lower-bound}.
\end{proof}
\begin{myt} \label{SPT-competitive}
SPT is $(\Delta - \frac{1}{\Delta}+1)$-competitive with respect to the non-preemptive optimal total stretch.
\end{myt}
\begin{proof}
The theorem follows from the combination of Lemma~\ref{srpt-as-lower-bound} and Corollary~\ref{spt-approx-pos}.
\end{proof}
\section{Analysis for $Pm|r_i|\sum s_i$} \label{m-machine}
The construction of the directed ordered forest is no longer feasible in the case of $m$ machines, since the jobs may be processed on different machines.
Therefore, we propose in section~\ref{construction-omms} a transformation of SPT on $m$ identical machines into a schedule (called OMMS) on a virtual machine with $m$ times the speed of a single machine.
Then, it is shown in section~\ref{omms-sptm} that the OMMS schedule is a $(\Delta-\frac{1}{\Delta}+1)$-approximation with respect to SPT on the virtual machine (SPTM) with $m$ times the speed.
Using the lower bound established by Chou et al.~\cite{Chou:2006} for the weighted sum flow problem, we finally show that SPT is $(\Delta - \frac{1}{\Delta}+ \frac{3}{2}-\frac{1}{2m})$-competitive.
\subsection{Intermediate schedule OMMS} \label{construction-omms}
\par To every instance $\mathcal{I}$ of the $m$-identical parallel machines problem, we associate an instance $\mathcal{I}^m$ with the same job set $J$, where, for each job $J_j \in J$, the processing time of $J_j$ is $p_j^m = p_j/m$. The job release dates $r_j$ are unchanged. Intuitively, the $m$ identical parallel machines are replaced by a virtual machine with $m$ times the speed of a single machine. Let $C_j^m$ and $F_j^m$ denote the completion time and flow time (defined as $C_j^m - r_j$) of job $j\in J$ on the virtual machine. We now define a general rule for transforming any $m$-identical machine schedule into a feasible schedule on the virtual machine.
\begin{myd}
We construct a feasible schedule (called the OMMS schedule) for instance $\mathcal{I}^m$ on an $m$-speed virtual machine by transforming the SPT schedule for instance $\mathcal{I}$ on $m$ identical parallel machines. The jobs are executed in OMMS in increasing order of their starting times in SPT, where ties are broken by executing the jobs in non-decreasing order of processing time.
\end{myd}
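As an illustration of this rule, the following sketch computes OMMS completion times on the virtual machine; the argument names (\texttt{spt\_start}, \texttt{release}, \texttt{proc}) are assumptions made for the example, not notation from the paper.
\begin{verbatim}
def omms_schedule(spt_start, release, proc, m):
    # Run the jobs at speed m, in increasing order of SPT start time,
    # breaking ties by smaller processing time, never before release.
    order = sorted(spt_start, key=lambda j: (spt_start[j], proc[j]))
    t, completion = 0.0, {}
    for j in order:
        t = max(t, release[j]) + proc[j] / m
        completion[j] = t
    return completion
\end{verbatim}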
\begin{myl} \label{faster-machine-flow}
\begin{align*}
\sum\limits^{spt}_m \leq \sum\limits^{omms} + (1-\frac{1}{m})n
\end{align*}
where $\sum\limits^{spt}_m$ denotes the total stretch for SPT on $m$-identical parallel machines and $\sum\limits^{omms}$ denotes the total stretch of OMMS on a virtual machine.
\end{myl}
\begin{proof}
While transforming the SPT schedule into the OMMS schedule, the processing time of each job $j$ is reduced from $p_j$ to $p_j/m$. Therefore, $C_j - C_j^m \leq (1-\frac{1}{m})p_j$ for every job $j \in J$. Summing the resulting per-job stretch differences over all $j \in J$, we obtain $\sum\limits^{spt}_m \leq \sum\limits^{omms} + (1-\frac{1}{m})n$.
\end{proof}
\subsection{Relationship between OMMS and SPT on virtual machine} \label{omms-sptm}
\par Here, we define a \textit{block} structure for the OMMS schedule based on the \textit{compactness} notion of Definition~\ref{def-compact}. Our approach consists of partitioning the set of jobs into \textit{blocks} $B(1)$, $B(2)$, and so on, such that the jobs belonging to any block can be scheduled independently of the jobs belonging to other blocks. Finally, we show that OMMS is a $(\Delta+1-\frac{1}{\Delta})$-approximation with respect to SPT on an $m$-speed virtual machine.
\par Let $R=\{r(1), r(2), \dots, r(n')\}$, where $n'\leq n$, be the set of all distinct release times. Assume w.l.o.g. that $r(1)< r(2) < \dots < r(n')$. We partition the jobs according to their release times into a set of blocks. Let $Q(i) = \{J_j: r_j = r(i)\}$, for $i = 1,\dots,n'$, denote the set of jobs released at time $r(i)$.
The block $B(w)$ is defined as follows:
\begin{align}
B(w) &= \bigcup\limits_{i=b_{w-1}+1}^{b_w}Q(i) \nonumber
\end{align}
where $b_w$ is the smallest positive integer such that
\begin{align}
r(b_{w-1}+1) + \sum\limits_{i=b_{w-1}+1}^{b_w}\sum\limits_{J_j\in Q(i)} p_j^m < r(b_w+1) \nonumber
\end{align}
Intuitively, all the jobs in $B(w)$ can be \textit{compactly} scheduled between $r_{B(w)} = \min_{J_j\in B(w)} r_j$ and the time at which the first job of $B(w+1)$ is released. Hence, the jobs belonging to the first block $B(1)$ are completed by time $r(b_1+1)$ at the latest. A greedy sketch of this block decomposition is given below.
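The following Python sketch illustrates the intended greedy construction of the blocks (the field names and the closing condition are our own rendering of the definition above).
\begin{verbatim}
def build_blocks(jobs, m):
    # jobs: list of dicts with release date "r" and processing time "p";
    # a new block starts whenever all previously released work finishes
    # (compactly, at speed m) strictly before the next release date.
    order = sorted(jobs, key=lambda j: j["r"])
    result, current, finish = [], [], None
    for j in order:
        if current and j["r"] > finish:
            result.append(current)
            current, finish = [], None
        finish = (j["r"] if finish is None else max(finish, j["r"])) + j["p"] / m
        current.append(j)
    if current:
        result.append(current)
    return result
\end{verbatim}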
\par Thus, we restrict our attention to the jobs belonging to a \emph{single block}. We redefine $\mathcal{I}^m$ to denote the instance formed by the jobs of one block.
Hence, $n$ denotes the number of jobs and $J = \{J_1,J_2,\dots,J_n\}$ the set of jobs in $\mathcal{I}^m$.
Our next objective is to bound the OMMS term appearing in Lemma~\ref{faster-machine-flow}.
\begin{myd}
We construct a new schedule (SPTM) by scheduling all the jobs of instance $\mathcal{I}^m$ according to the SPT rule on the $m$-speed virtual machine.
\end{myd}
\begin{myo}\label{omms-sptm-block}
OMMS and SPTM have the same sets of jobs in each block.
\end{myo}
\begin{myd}
We construct a new schedule (D-SPTM) by delaying the start of each job in SPTM by $\Delta - \frac{1}{\Delta}$ time units.
\end{myd}
\begin{myl}\label{omms-approx-sptm}
\begin{align*}
\sum\limits^{omms} \leq \sum\limits^{d-sptm} \leq (\Delta - \frac{1}{\Delta} + 1)\sum\limits^{sptm}
\end{align*}
where $\sum\limits^{\eta}$ denotes the total stretch of schedule $\eta \in \{$omms, d-sptm, sptm$\}$.
\end{myl}
\begin{proof}
The first inequality can be proved by removing the first difference between the D-SPTM and OMMS schedules, as was done for the SPT and POS schedules in section~\ref{spt-ana}. The second inequality is a direct consequence of the construction of D-SPTM from SPTM, in which each job's completion time is increased by $\Delta - \frac{1}{\Delta}$ with respect to SPTM.
\end{proof}
\begin{myc} \label{spt-sptm}
Combining the results of lemma~\ref{faster-machine-flow} and lemma~\ref{omms-approx-sptm}, it follows that:
\begin{align}
\sum\limits^{spt}_m \leq (\Delta - \frac{1}{\Delta}+1)\sum\limits^{sptm} + (1 - \frac{1}{m})n \nonumber
\end{align}
where $\sum\limits^{sptm}$ denotes the total stretch for SPTM.
\end{myc}
Next, using the bound established by Chou et al. in~\cite{Chou:2006}, we show the relationship between SPTM and the optimal total stretch on $m$ machines.
They gave a lower bound for the weighted completion time problem $Pm|r_j|\sum w_j C_j$ in terms of the LP schedule\footnote{which was first defined in the work of Goemans et al.~\cite{Goemans02}} on an $m$-speed virtual machine.
They proved that:
\begin{align}
\sum w_j C_j^* \geq \sum w_j C_j^{LP} + \frac{1}{2}(1-\frac{1}{m}) \sum w_j p_j
\end{align}
For the non-preemptive scheduling problem, the order of execution of the jobs in the LP schedule is the same as in SPT.
Here, we extend the above result to the total stretch problem, using the equivalence between the problems $Pm|r_j|\sum w_j C_j$ and $Pm|r_j|\sum s_j$ when $w_j = \frac{1}{p_j}$ for each job $j$.
\begin{myc} \label{opt-lb}
Let $\sum\limits^{opt}_{m}$ denote the optimal total stretch for $m$ identical parallel machines; then
\begin{align*}
\sum\limits^{opt}_m &\geq \sum\limits^{sptm} + \frac{1}{2}(1 - \frac{1}{m})n
\end{align*}
\end{myc}
\begin{myt} \label{final-theorem}
SPT is $(\Delta - \frac{1}{\Delta} + \frac{3}{2} -\frac{1}{2m})$-competitive for total stretch on $m$ identical parallel machines.
\end{myt}
\begin{proof}
The proof follows directly from Corollaries~\ref{opt-lb} and~\ref{spt-sptm}, together with the fact that $\sum\limits^{opt}_m \geq n$. For details, please refer to Appendix~\ref{final-proof}.
\end{proof}
\section{Concluding remarks} \label{conc}
\par In this paper, we investigated the problem of minimizing the total stretch (equivalently, the average stretch) on a single machine and on parallel machines.
We give a tighter bound than the previous work of Tao et al.~\cite{Tao:2013} on weighted sum flow time.
The main results, obtained through a series of intermediate schedules, show that the well-known scheduling policy SPT achieves competitive ratios of $(\Delta - \frac{1}{\Delta}+1)$ and $(\Delta - \frac{1}{\Delta}+ \frac{3}{2} - \frac{1}{2m})$ for the single-machine and parallel-machine cases, respectively.
\appendix
\section{Proof of Lemma~\ref{srpt-as-lower-bound}} \label{Proof-srpt-as-lower-bound}
\begin{proof}
Without loss of generality, assume that the optimal schedule runs the jobs in numerical order: $J_1$ followed by $J_2$ and so on.
\begin{itemize}
\item First, we consider the case where the optimal schedule is \emph{compact}. Let $J_k$ and $J_{k+1}$ be two consecutive jobs, and let $t$ be the time at which $J_{k}$ starts its execution. Then either
\begin{itemize}
\item $p_{k+1} > p_{k}$: then SRPT also schedules $J_k$ before $J_{k+1}$ at time $t$.
\item $p_{k+1} \leq p_{k}$ and $t < r_{k+1} < t+p_k$: then
$s_k = \frac{t+p_k-r_k}{p_k}$ and $s_{k+1} = \frac{t+p_k+p_{k+1}-r_{k+1}}{p_{k+1}}$.
Let $w = r_{k+1} - t$. Since the schedule is optimal and compact, exchanging $J_k$ and $J_{k+1}$ (i.e., executing $J_{k+1}$ at its release time $r_{k+1}$ and then $J_k$) cannot decrease their contribution to the total stretch, hence
\begin{align}
s_k+s_{k+1} &\leq \frac{t+w+p_k+p_{k+1}-r_k}{p_k} + \frac{t+w+p_{k+1}-r_{k+1}}{p_{k+1}}, \nonumber
\end{align}
which simplifies to $w \geq p_k-p_{k+1}$.
At $r_{k+1}$, the remaining processing time of $J_k$ is $p_k - w \leq p_{k+1}$. Therefore, SRPT completes the execution of $J_k$ before scheduling $J_{k+1}$.
\end{itemize}
\item On the other hand, consider the case where the optimal schedule is \emph{not compact}.
Let $t$ be the first moment at which there is idle time in the optimal schedule between two consecutive jobs $J_k$ and $J_{k+1}$, even though some job $J_l$ is available.
If $w$ denotes the length of this idle time, $w = S_{k+1} - C_{k}$, then from the previous case it follows that $w<p_l - p_k$.
Therefore, during this idle time, SRPT runs job $J_l$, whose remaining processing time at $t+w$ is $p_l - w > p_k$.
Hence, SRPT preempts job $J_l$ at $t+w$ and schedules $J_k$.
Therefore, at time $t+w$, SRPT and the optimal schedule have the same set of uncompleted jobs, and the amount of work left in SRPT is at most the amount of work left in the optimal schedule.
\end{itemize}
Iteratively applying the above arguments at each idle period, it follows that the total stretch of SRPT is a lower bound on that of the optimal non-preemptive schedule.
\end{proof}
\section{Proof of theorem~\ref{final-theorem}} \label{final-proof}
\begin{proof}
Using the bound stated in Corollary~\ref{opt-lb} to bound $\sum\limits^{sptm}$ in Corollary~\ref{spt-sptm}, we get:
\begin{align*}
\sum\limits_{m}^{spt} &\leq (\Delta - \frac{1}{\Delta} + 1) (\sum_{m}^{opt} - \frac{1}{2}(1 - \frac{1}{m})n) + (1-\frac{1}{m})n \\
&\leq (\Delta - \frac{1}{\Delta} + 1) \sum_{m}^{opt} + (1-\frac{1}{m})n - \frac{\Delta - \frac{1}{\Delta} + 1}{2} (1- \frac{1}{m})n \\
&\leq (\Delta - \frac{1}{\Delta} + 1) \sum_{m}^{opt} + \frac{1}{2}(1-\frac{1}{m})n
\end{align*}
The last inequality uses $\Delta - \frac{1}{\Delta} + 1 \geq 1$. Moreover, since $s_j \geq 1$ for every $j \in J$ in any schedule, we have $n \leq \sum\limits^{opt}_m$. Substituting this into the above inequality, we obtain:
\begin{align*}
\sum\limits_{m}^{spt} &\leq \left(\Delta - \frac{1}{\Delta} + 1 + \frac{1}{2}(1 - \frac{1}{m})\right) \sum_{m}^{opt} \\
&\leq (\Delta - \frac{1}{\Delta} + \frac{3}{2} - \frac{1}{2m}) \sum_{m}^{opt}
\end{align*}
\end{proof}
\end{document} |
\begin{document}
\begin{abstract}
We consider solutions to the Cauchy problem for an internal-wave model derived by Camassa-Choi \cite{MR1389977}. This model is a natural generalization of the Benjamin-Ono and Intermediate Long Wave equations in the case of weak transverse effects. We prove the existence and long-time dynamics of global solutions from small, smooth, spatially localized initial data on \(\mathbb R^2\).
\end{abstract}
\maketitle
\section{Introduction}
In this article we consider real-valued solutions \(u\colon\mathbb R_t\times\mathbb R^2_{(x,y)}\rightarrow\mathbb R\) to the Cauchy problem for an internal-wave model derived by Camassa-Choi \cite{MR1389977},
\begin{equation}
\label{eqn:cc-fd}
\left( u_t + \mathcal T_h^{-1}u_{xx} + h^{-1}u_x - u u_x \right)_x + u_{yy} = 0,
\end{equation}
where \(h>0\) is the depth and the operator \(\mathcal T_h^{-1}\) has symbol \(i\coth(h\xi)\). In the limit \(h \rightarrow \infty\) we obtain the infinite depth equation,
\begin{equation}
\label{eqn:cc}
\left( u_t + \mathcal H^{-1}u_{xx} - u u_x \right)_x + u_{yy} = 0,
\end{equation}
where the inverse of the Hilbert transform \(\mathcal H^{-1} = - \mathcal H\) has symbol \(i\sgn\xi\). These are natural \(2\)-dimensional versions of the Intermediate Long Wave (ILW) and Benjamin-Ono equations in the case of weak transverse effects. Our goal is to investigate the long-time dynamics of solutions with sufficiently small, smooth and spatially localized initial data.
The infinite depth equation \eqref{eqn:cc} is a special case of the dispersion-generalized (or fractional) Kadomtsev-Petviashvili II (KP-II) equation,
\begin{equation}\label{eqn:gKPII}
\left( u_t - |D_x|^\alpha u_x + uu_x\right)_x + u_{yy} = 0.
\end{equation}
The original KP-II equation corresponds to the case \(\alpha = 2\) and is completely integrable in the sense that it possesses both a Lax pair and an infinite number of formally conserved quantities (see for example the survey article \cite{Klein2015}). To the authors' knowledge, a Lax pair is not known for \eqref{eqn:cc-fd} or \eqref{eqn:cc}, although both of their \(1d\) counterparts, the ILW and Benjamin-Ono equations, are integrable in this sense.
Both the finite and infinite depth equations are Hamiltonian with formally conserved energies,
\begin{gather}
E_h[u] = \int \left( u\mathcal T_h^{-1}\partial_xu + h^{-1}u^2 + (\partial_x^{-1}\partial_yu)^2 - \frac13 u^3\right)\,dxdy,\\
E_\infty[u] = \int \left(u\mathcal H^{-1}\partial_xu + (\partial_x^{-1}\partial_yu)^2 - \frac13u^3\right)\,dxdy,
\end{gather}
respectively. Both equations also conserve the \(L^2\)-norm,
\begin{equation}
M[u] = \int u^2\,dxdy.
\end{equation}
The infinite depth equation is invariant with respect to the scaling
\begin{equation}\label{Scaling}
u(t,x,y)\mapsto \lambda u(\lambda^2t,\lambda x,\lambda^{\frac{3}{2}}y),\qquad \lambda>0.
\end{equation}
Taking \(\lambda = h\), this scaling also maps solutions to the finite depth equation with depth \(h\) to solutions with depth \(1\). Both the finite and infinite depth equations are invariant with respect to Galilean shifts of the form,
\begin{equation}\label{Galilean}
u(t,x,y)\mapsto u(t,x+cy - c^2t,y - 2ct),\qquad c\in\mathbb R.
\end{equation}
The natural spaces in which to consider local well-posedness for the infinite depth equation are the homogeneous anisotropic Sobolev spaces \(\dot H^{s_1,s_2} = \dot H^{s_1}_x\dot H^{s_2}_y\) with norm
\[
\|u\|_{\dot H^{s_1,s_2}} = \||D_x|^{s_1}|D_y|^{s_2}u\|_{L^2_{x,y}},
\]
for which the scaling-critical, Galilean-invariant space is given by \((s_1,s_2) = (\frac14,0)\).
Small data global well-posedness and scattering were proved for the KP-II at the scaling-critical regularity \((s_1,s_2) = (-\frac12,0)\) by Hadac-Herr-Koch \cite{MR2526409,MR2629889}. Local well-posedness results are also available in higher dimensions \cite{2016arXiv160806730K} and for the dispersion generalized equation \eqref{eqn:gKPII} on \(\mathbb R_{x,y}^2\) provided \(\alpha>\frac43\) \cite{MR2434299}. While preparing this paper we also learned of a recent result of Linares-Pilod-Saut~\cite{2017arXiv170509744L} who prove several local well-posedness and ill-posedness results for \eqref{eqn:cc-fd}, \eqref{eqn:cc} and other similar generalizations of the KP-II.
We define the linear operator
\[
\mathcal L_h = \partial_t + \mathcal T_h^{-1}\partial_x^2 + h^{-1}\partial_x + \partial_x^{-1}\partial_y^2,
\]
with the corresponding modification when \(h = \infty\). Here \(\partial_x^{-1}\) is interpreted as the Fourier multiplier \(\mathrm{p.v.}\,(i\xi)^{-1}\), which for \(f\in L^1\) gives us
\[
\partial_x^{-1}f = \frac12\int_{-\infty}^xf(s)\,ds - \frac12\int_x^\infty f(s)\,ds.
\]
For \(\xi\neq0\), the dispersion relation associated to \eqref{eqn:cc-fd} is given by
\begin{equation}\label{OMEGAh}
\omega_h(\mathbf k) = \xi^2\coth(h \xi) - h^{-1}\xi - \xi^{-1}\eta^2,
\end{equation}
where \(\mathbf k = (\xi,\eta)\), and in the limit \(h = \infty\) we obtain
\begin{equation}\label{OMEGAinf}
\omega_\infty(\mathbf k) = \xi|\xi| - \xi^{-1}\eta^2.
\end{equation}
We take \(S_h(t)\) to be the associated linear propagator, defined using the Fourier transform\footnote{We use the isometric normalization of the Fourier transform, \(\hat f(\mathbf k) = \frac1{2\pi}\int f(x,y)e^{-i(x,y)\cdot\mathbf k}\,dxdy\).} as
\begin{equation}\label{Propagator}
S_h(t)f = \frac{1}{2\pi}\lim\limits_{\epsilon\downarrow0}\int_{|\xi|>\epsilon} e^{it\omega_h(\mathbf k)}\hat f(\mathbf k)e^{i(x,y)\cdot\mathbf k}\,d\mathbf k,
\end{equation}
with the corresponding modification when \(h = \infty\). We note that the linear propagator \eqref{Propagator} extends to a well-defined unitary map \(S_h(t)\colon L^2(\mathbb R^2)\rightarrow L^2(\mathbb R^2)\) without the need for additional moment assumptions on \(f\).
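For illustration, the propagator \eqref{Propagator} can be approximated on a periodic box by applying the multiplier \(e^{it\omega_h(\mathbf k)}\) on a discrete Fourier grid and discarding the \(\xi = 0\) modes, mimicking the principal value. The following sketch (the grid, box size and function names are our own choices) is only a rough numerical illustration of the linear flow, not part of the analysis.
\begin{verbatim}
import numpy as np

def apply_propagator(f, t, h, Lx=2*np.pi, Ly=2*np.pi):
    # f: real array of shape (Nx, Ny) sampling u(0) on a periodic box
    Nx, Ny = f.shape
    xi  = 2*np.pi*np.fft.fftfreq(Nx, d=Lx/Nx)   # x-wavenumbers
    eta = 2*np.pi*np.fft.fftfreq(Ny, d=Ly/Ny)   # y-wavenumbers
    XI, ETA = np.meshgrid(xi, eta, indexing="ij")
    fhat = np.fft.fft2(f)
    with np.errstate(divide="ignore", invalid="ignore"):
        omega = XI**2/np.tanh(h*XI) - XI/h - ETA**2/XI
    mask = XI != 0                               # drop the xi = 0 modes
    uhat = np.zeros_like(fhat)
    uhat[mask] = np.exp(1j*t*omega[mask])*fhat[mask]
    return np.real(np.fft.ifft2(uhat))
\end{verbatim}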
Linear solutions satisfy the dispersive estimates,
\begin{equation}\label{est:Dispersive}
\|h^{\frac12}\langle hD_x\rangle^{-\frac12} S_h(t)u_0\|_{L^\infty}\lesssim |t|^{-1}\|u_0\|_{L^1},
\end{equation}
which may be readily seen from the fact that the kernel \(K_h\) of the linear propagator \(S_h\) is given by
\[
K_h(t,x,y) = \frac1{\sqrt{2ht}}\,k\left(h^{-2}t,h^{-1}(x + \frac1{4t}y^2)\right),
\]
where the oscillatory integral \(k\) is given by
\[
k(t,x) = \frac1{\sqrt{2\pi}}\lim\limits_{\epsilon\downarrow0}\int_{|\xi|>\epsilon} |\xi|^{\frac12}e^{it(\xi^2\coth\xi - \xi) + ix\xi}\,d\xi.
\]
For a more detailed proof see \cite[Lemmas~4.7,~4.8]{2017arXiv170509744L}.
Due to the \(O(|t|^{-1})\) decay, bilinear interactions are long-range so it is natural to seek a normal form transformation that upgrades the quadratic nonlinearity to a cubic one. Resonant nonlinear interactions correspond to solutions of the system
\[
\begin{cases}
\omega_h(\mathbf k_1) + \omega_h(\mathbf k_2) + \omega_h(\mathbf k_3) = 0
\\
\mathbf k_1+\mathbf k_2+\mathbf k_3 = 0,
\end{cases}
\]
and some elementary algebraic manipulations show that this cannot be satisfied for non-zero \(\xi_j\). As a consequence the Camassa-Choi nonlinearity is non-resonant and formally we may construct a normal form leading to enhanced lifespan solutions.
Given the non-resonance of the bilinear interactions, one might expect the methods used for large values of \(\alpha\) to apply to the Camassa-Choi. However, in the corresponding \(1\)-dimensional cases of the ILW and Benjamin-Ono it is known that Picard iteration methods fail \cite{MR1885293,MR2172940} due to strong high-low bilinear interactions. While the additional dispersion in \(2d\) should allow for improved results over the corresponding \(1d\) case, one may apply the methods of \cite{MR1885293,2016arXiv160806730K} to show that Picard iteration still fails in the infinite depth case \eqref{eqn:cc} in the anisotropic Sobolev space \(\dot H^{\frac14,0}\) and almost all of the Besov-type refinements considered in \cite{2016arXiv160806730K}. For completeness we provide a brief proof of this ill-posedness result in Appendix~\ref{app:IP}.
Instead we will assume additional spatial localization on the initial data and establish global existence using a similar approach to work of the first author with Ifrim and Tataru on the KP-I equation \cite{2014arXiv1409.4487H}. Here we will apply the \emph{method of testing by wave packets}, originally developed in work of Ifrim-Tataru on the \(1d\) cubic NLS \cite{MR3382579} and \(2d\) gravity water waves \cite{2014arXiv1404.7583I}, and subsequently applied in several other contexts \cite{MR3462131,2014arXiv1406.5471I,2014arXiv1409.4487H}. The key difficulty we encounter when adapting this method to the setting of the Camassa-Choi is the presence of the non-local operator \(\mathcal T_h^{-1}\). Indeed, a testament to the robust nature of this approach is that it may be applied to obtain global solutions even in this context. We note that a related approach to establishing global well-posedness is via the space-time resonances method, simultaneously developed by Germain-Masmoudi-Shatah \cite{MR2542891,MR2482120} and Guftason-Nakanishi-Tsai \cite{MR2360438} as a significant upgrade to the method of normal forms, originally applied in the context of dispersive PDE by Shatah \cite{MR803256}.
We expect that linear solutions initially localized in space near \((x,y) = 0\) and at frequency \(\mathbf k = (\xi,\eta)\) will travel along rays of the Hamiltonian flow
\[
\Gamma_{\mathbf v} = \{(x,y) = t\mathbf v\},
\]
where the group velocity
\[
\mathbf v = - \nabla\omega_h(\mathbf k).
\]
In order to measure this localization we define the operators \(L_{x,h}\), \(L_y\) with corresponding symbols \(x + t\partial_\xi\omega_h(\mathbf k)\), \(y + t\partial_\eta\omega_h(\mathbf k)\) respectively. A simple computation yields the explicit expressions,
\[
L_{x,h} = x - 2t\mathcal T_h^{-1}\partial_x - th\partial_x^2(1 + \mathcal T_h^{-2}) - th^{-1} + t\partial_x^{-2}\partial_y^2,\qquad L_y = y - 2t\partial_x^{-1}\partial_y,
\]
and in the limit \(h = \infty\),
\[
L_{x,\infty} = x - 2t\mathcal H^{-1}\partial_x + t\partial_x^{-2}\partial_y^2.
\]
Further, by construction the operators \(L_{x,h},L_y\) commute with \(\mathcal L_h\).
As the equations possess a Galilean invariance, in the spirit of \cite{MR3382579,2014arXiv1409.4487H} we will consider well-posedness in Galilean invariant spaces. The vector field \(\partial_xL_y\) is the generator of the Galilean symmetry. However, the operators \(\partial_y\), \(L_{x,h}\) do not commute with the Galilean group, so we will instead measure \(x\)-localization with the Galilean-invariant operator
\[
J_h = L_{x,h}\partial_x + L_y\partial_y.
\]
We then define the time-dependent space \(X_h\) of distributions on \(\mathbb R^2\) with finite norm,
\[
\|u\|_{X_h}^2 = h^{-\frac12}\|u\|_{L^2}^2 + h^{\frac{15}2}\|\partial_x^4u\|_{L^2}^2 + h^{-\frac92}\|L_y^2\partial_x u\|_{L^2}^2 + h^{-\frac12}\|J_hu\|_{L^2}^2.
\]
We note that this space is uniform with respect to \(h\) and that \(S_h(-t)X_h = X_h(0)\), where the initial data space \(X_h(0)\) is defined by the norm
\[
\|u_0\|_{X_h(0)}^2 = h^{-\frac12}\|u_0\|_{L^2}^2 + h^{\frac{15}2}\|\partial_x^4u_0\|_{L^2}^2 + h^{-\frac92}\|y^2\partial_xu_0\|_{L^2}^2 + h^{-\frac12}\|(x\partial_x + y\partial_y)u_0\|_{L^2}^2.
\]
Our main result in the finite depth case is the following:
\begin{thm}\label{thm:Main}
There exists \(0<\epsilon\ll1\) so that for all \(h>0\) and \(u(0)\in X_h(0)\) satisfying the estimate
\begin{equation}\label{est:Init}
\|u(0)\|_{X_h(0)} \leq\epsilon,
\end{equation}
there exists a unique global solution \(u\in C(\mathbb R;X_h)\) to \eqref{eqn:cc-fd} satisfying the energy estimate
\begin{equation}\label{Fdn}
\|u\|_{X_h(t)}\lesssim \epsilon \langle h^{-2}t\rangle^{C\epsilon}
\end{equation}
and the pointwise decay estimate
\begin{equation}\label{est:PTWISEDECAY}
h^{2}\|u_x\|_{L^\infty}\lesssim \epsilon |h^{-2}t|^{-\frac12}\langle h^{-2}t\rangle^{-\frac12}.
\end{equation}
Further, this solution scatters in the sense that there exists some \(W\in L^2\) so that \(\|W\|_{L^2} = \|u_0\|_{L^2}\) and
\begin{equation}\label{Fds}
h^{-\frac14}\|S_h(-t)u - W\|_{L^2} \lesssim \epsilon^2 (h^{-2}t)^{-\frac1{96} + C\epsilon},\qquad t\rightarrow+\infty.
\end{equation}
\end{thm}
\begin{rem}
As in~\cite{MR3382579,2014arXiv1409.4487H,MR3462131} we may interpolate between the estimates \eqref{Fdn} and \eqref{Fds} to show that \(W\in (X_h(0),h^{\frac14}L^2)_{C\epsilon,2}\) for some \(C>0\) where \((X_h(0),h^{\frac14}L^2)_{C\epsilon,2}\) is the usual real interpolation space.
\end{rem}
In the infinite depth case we define the modification
\[
\|u\|_{X_\infty}^2 = \|u\|_{L^2}^2 +\|\partial_x^4u\|_{L^2}^2 + \|L_y^2\partial_x u\|_{L^2}^2 + \|J_\infty u\|_{L^2}^2,
\]
for which the initial data space is given by
\[
\|u_0\|_{X_\infty(0)}^2 = \|u_0\|_{L^2}^2 + \|\partial_x^4u_0\|_{L^2}^2 + \|y^2\partial_xu_0\|_{L^2}^2 + \|(x\partial_x + y\partial_y)u_0\|_{L^2}^2.
\]
Our main result in the infinite depth case is then:
\begin{thm}\label{thm:MainInf}
There exists \(0<\epsilon\ll1\) so that for all \(u(0)\in X_\infty(0)\) satisfying the estimate
\begin{equation}
\|u(0)\|_{X_\infty(0)} \leq\epsilon,
\end{equation}
there exists a unique global solution \(u\in C(\mathbb R;X_\infty)\) to \eqref{eqn:cc} satisfying the energy estimate
\begin{equation}
\|u\|_{X_\infty(t)}\lesssim \epsilon\langle t\rangle^{C\epsilon}
\end{equation}
and the pointwise decay estimate
\begin{equation}
\|u_x\|_{L^\infty}\lesssim \epsilon |t|^{-\frac12}\langle t\rangle^{-\frac12}.
\end{equation}
Further, this solution scatters in the sense that there exists some \(W\in L^2\) so that
\begin{equation}
\|S_\infty(-t)u - W\|_{L^2} \lesssim \epsilon t^{-\frac1{48} + C\epsilon},\qquad t\rightarrow+\infty.
\end{equation}
\end{thm}
\begin{rem}
The spaces \(X_h\) and \(X_\infty\) are essentially identical to the spaces considered in \cite{2014arXiv1409.4487H}. However, due to the reduced dispersion of \eqref{eqn:cc-fd} and \eqref{eqn:cc} at high frequency, we require an additional \(x\)-derivative to obtain sufficient decay.
\end{rem}
For simplicity we now restrict our attention to the case \(h = 1\) and drop the dependence on \(h\) throughout our notation. We remark that the remaining finite depth cases may be obtained by scaling. Namely, we have
\[
u^{(1)}(t,x,y) = hu^{(h)}(h^2t,hx,h^{\frac32}y).
\]
The proof in the infinite depth case (Theorem~\ref{thm:MainInf}) is essentially identical and is thus omitted. However, for completeness we will outline a few of the required modifications when they deviate sufficiently from the finite depth proof.
The remainder of the paper is structured as follows: in Section~\ref{sect:Prelim} we give some brief technical preliminaries; in Section~\ref{sect:AP} we prove local well-posedness and a priori energy estimates for solutions to \eqref{eqn:cc-fd}; in Section~\ref{sect:KS} we prove pointwise bounds in the spirit of Klainerman-Sobolev estimates; in Section~\ref{sect:TWP} we complete the proof of Theorem~\ref{thm:Main} using the method of testing by wave packets. In Appendix~\ref{app:IP} we prove an ill-posedness result for \eqref{eqn:cc}.
\section{Preliminaries}\label{sect:Prelim}
In this section we briefly state several technical preliminaries required for the proof.
\subsection{The resonance function}
We define the resonance function
\[
\Omega(\mathbf k_1,\mathbf k_2,\mathbf k_3) = \omega(\mathbf k_1) + \omega(\mathbf k_2) + \omega(\mathbf k_3).
\]
If we restrict the resonance function to the hyperplane \(\{\mathbf k_1 + \mathbf k_2 + \mathbf k_3 = 0\}\) we may compute a simplified expression for
\(
\Omega(\mathbf k_1,\mathbf k_2) = \Omega(\mathbf k_1,\mathbf k_2,-(\mathbf k_1 + \mathbf k_2)),
\)
\begin{equation}\label{ResonanceFunction}
\Omega(\mathbf k_1,\mathbf k_2) = - \xi_1\xi_2(\xi_1 + \xi_2)\left(\frac{(\xi_1 + \xi_2)^2\coth(\xi_1 + \xi_2) - \xi_1^2\coth\xi_1 - \xi_2^2\coth\xi_2}{\xi_1\xi_2(\xi_1 + \xi_2)} + \frac{\left|\dfrac{\eta_1}{\xi_1} - \dfrac{\eta_2}{\xi_2}\right|^2}{(\xi_1 + \xi_2)^2}\right).
\end{equation}
By considering the asymptotic behavior of \(\coth\xi\) we obtain the lower bound
\[
\frac{(\xi_1 + \xi_2)^2\coth(\xi_1 + \xi_2) - \xi_1^2\coth\xi_1 - \xi_2^2\coth\xi_2}{\xi_1\xi_2(\xi_1 + \xi_2)} \gtrsim \frac1{1 + \max\{|\xi_1|,|\xi_2|,|\xi_1 + \xi_2|\}},
\]
and hence we obtain lower bounds for the resonance function \(\Omega\) in the low-high (\(|\xi_1|\ll|\xi_2|\)) and high-high (\(|\xi_1 + \xi_2|\ll|\xi_2|\)) asymptotic regions,
\begin{equation}\label{FDResonanceLB}
|\Omega(\mathbf k_1,\mathbf k_2)|\gtrsim \begin{cases}\dfrac{|\xi_1||\xi_2|^2}{\langle \xi_2\rangle}\left(1 + \dfrac{\langle\xi_2\rangle}{|\xi_1|^2}\left|\dfrac{\eta_1 + \eta_2}{\xi_1 + \xi_2} - \dfrac{\eta_2}{\xi_2}\right|^2\right),&\quad |\xi_1|\ll|\xi_2|,
\\\dfrac{|\xi_1 + \xi_2||\xi_2|^2}{\langle \xi_2\rangle}\left(1 + \dfrac{\langle\xi_2\rangle }{|\xi_1 + \xi_2|^2}\left|\dfrac{\eta_1}{\xi_1} - \dfrac{\eta_2}{\xi_2}\right|^2\right),&\quad |\xi_1 + \xi_2|\ll|\xi_2|.\end{cases}
\end{equation}
The resonance function in the infinite depth case is given by the slightly more straightforward expression,
\begin{equation}\label{ResonanceInf}
\Omega_\infty(\mathbf k_1,\mathbf k_2) = \xi_1\xi_2\xi_3\left(\frac{2}{\max\{|\xi_1|,|\xi_2|,|\xi_3|\}} + \frac{\left|\dfrac{\eta_1}{\xi_1} - \dfrac{\eta_2}{\xi_2}\right|^2}{\xi_3^2}\right).
\end{equation}
\subsection{Littlewood-Paley decomposition} We take \(0<\delta\ll1\) to be a fixed universal constant that determines the resolution of our frequency decomposition. The size of \(\delta\) will be determined in the ODE estimates of Section~\ref{sect:TWP} but will otherwise be irrelevant. For \(N\in 2^{\delta \mathbb Z}\) we take \(P_N\) to project to \(x\)-frequencies \(2^{-\delta}N<|\xi|<2^\delta N\) so that
\[
1 = \sum\limits_{N\in 2^{\delta\mathbb Z}}P_N.
\]
We write \(u_N = P_Nu\), and for \(A>0\) we take \(u_{<A} = \sum_{N<A}u_N\) and \(u_{\geq A} = \sum_{N\geq A}u_N\), where the sums are understood to be over \(N\in 2^{\delta\mathbb Z}\). We observe that for any \(A>0\) we have
\[
\|u\|_X^2 \sim \|u_{<A}\|_X^2 + \sum\limits_{N\geq A}\|u_N\|_X^2.
\]
We may further decompose \(u_N = u_N^+ + u_N^-\) where \(u_N^+\) is the projection to positive wavenumbers in \(x\)-frequency. For real-valued \(u\) we have \(u_N = 2\,\mathrm{Re}(u_N^+)\) and hence
\[
\|u_N\|_X \sim \|u_N^+\|_X \sim \|u_N^-\|_X.
\]
\subsection{Symbol classes and elliptic operators}
Given \(d\geq1\) we define the symbol class \(\mathscr S\) to consist of functions \(p\in C^\infty(\mathbb R^d\times \mathbb R^d\backslash\{0\})\) so that, writing \(x = (x_1,\dots,x_d)\) and \(\xi = (\xi_1,\dots,\xi_d)\), we have
\[
|D_x^\beta D_\xi^\alpha p(x,\xi)|\lambdaesssim_{|\alpha|,|\beta|} |\xi|^{-|\alpha|}.
\]
Given a symbol \(p\in\mathscr S\) we may define a pseudo-differential operator
\[
p(x,D)u = \frac1{(2\partiali)^{\frac d2}}\int_{\mathbb R^d} p(x,\xi)\hat f(\xi) e^{ix\cdot\xi}\,d\xi,
\]
and then have the estimate (see for example \cite{MR1766415})
\begin{equation}
\|p(x,D)u\|_{L^2}\lambdaesssim \|u\|_{L^2}.
\end{equation}
We also recall that if \(r(x,\xi) = p(x,\xi)q(x,\xi)\) for \(p,q\in \mathscr S\) then we have the estimate
\begin{equation}\lambdaanglebel{ProductRule}
\|r(x,D)f\|_{L^2} \lambdaesssim \|p(x,D)q(x,D)f\|_{L^2} + \|f\|_{H^{-1}}.
\end{equation}
Further, if \(p\in\mathscr S\) satisfies the estimate (again see \cite{MR1766415})
\[
|p(x,\xi)^{-1}|\lambdaesssim 1,
\]
then we say \(p\) is \emph{elliptic} and for \(f\in L^2(\mathbb R^d)\) and any \(s\in \mathbb R\) we have the estimate
\begin{equation}\lambdaanglebel{OPSElliptic}
\|f\|_{L^2} \lambdaesssim_s \|p(x,D)f\|_{L^2} + \|f\|_{H^s}.
\end{equation}
\subsection{Multilinear Fourier multipliers}
If \(d = 2n\) and \(m(\mathbf k_1,\dots,\mathbf k_n)\in\mathscr S\) is independent of the spatial variables, we may define a multilinear Fourier multiplier \(M\) with symbol \(m\) by
\[
M[u_1,\dots,u_n] = \frac1{(2\partiali)^n}\int_{\mathbb R^{2n}} m(\mathbf k_1,\dots,\mathbf k_n) \hat u_1(\mathbf k_1)\dots\hat u_n(\mathbf k_n) e^{i(x,y)\cdot(\mathbf k_1 + \dots + \mathbf k_n)}\,d\mathbf k_1\dots d\mathbf k_n.
\]
We then recall the Coifman-Meyer Theorem (see for example~\cite{MuscaluPipherTaoThiele} and references therein)
\begin{equation}\lambdaanglebel{CM}
\|M[u_1,\dots,u_n]\|_{L^p}\lambdaesssim \|u_1\|_{L^{p_1}}\dots\|u_n\|_{L^{p_n}},
\end{equation}
provided \(\frac 1p = \frac1{p_1} + \dots + \frac1{p_n}\), \(1\lambdaeq p <\infty\), \(1<p_j\lambdaeq\infty\).
\subsection{Sobolev estimates}
We recall the Sobolev estimate,
\begin{equation}
\|f\|_{L^\infty} \lambdaesssim \|f\|_{L^2}^{\frac14}\|f_x\|_{L^2}^{\frac12}\|f_{yy}\|_{L^2}^{\frac14},\lambdaanglebel{est:Sobolev}
\end{equation}
and the H\"older space modification,
\begin{equation}
\|f\|_{\dot C^{0,\alpha}} \lambdaesssim \lambdaeft(\|f\|_{L^2}^{\frac14 - \alpha}\|f_x\|_{L^2}^\alpha + \|f\|_{L^2}^{\frac14 - \frac\alpha 2}\|f_{yy}\|_{L^2}^{\frac\alpha2}\right)\|f_x\|_{L^2}^{\frac12}\|f_{yy}\|_{L^2}^{\frac14},\qquad 0<\alpha\lambdaeq\frac14.\lambdaanglebel{est:Holder}
\end{equation}
\section{Local well-posedness and energy estimates}\lambdaanglebel{sect:AP}
In this section we prove a priori estimates for the solution to \eqref{eqn:cc-fd} in the case \(h = 1\). As a consequence of the usual energy method we obtain local well-posedness in the spaces \(Z^k\), where the norm
\begin{equation}
\|u\|_{Z^k}^2 = \|u\|_{L^2}^2 + \|\partialartial_x^ku\|_{L^2}^2 + \|L_y^2\partialartial_x u\|_{L^2}^2,
\end{equation}
and we note that \(X\subset Z^4\). Our local well-posedness result is the following:
\begin{thm}\lambdaanglebel{thm:LWP}
Let \(h = 1\) and \(k\geq3\). Then for all \(u_0\in Z^k(0)\), the equation \eqref{eqn:cc-fd} is locally well-posed in \(Z^k\) and the solution exists at least as long as \(\lambdaeft|\int_0^t \|u_x(s)\|_{L^\infty}\,ds\right| <\infty\).
\end{thm}
\begin{rem}
Our definition of local well-posedness in Theorem~\ref{thm:LWP} includes:
\begin{itemize}
\item \emph{Existence.} There exists a solution \(S(-t)u\in C([-T,T];Z^k(0))\).
\item \emph{Uniqueness.} The solution \(S(-t)u\) is unique in the space \(Z^k(0)\).
\item \emph{Continuity.} The solution map \(u_0 \mapsto S(-t)u(t)\) is continuous in the \(Z^k(0)\) topology.
\item \emph{Persistence of regularity.} If \(u_0\in Z^{k'}(0)\) for \(k'\geq k\) then \(u\in Z^{k'}\).
\end{itemize}
\end{rem}
\begin{rem}
The result of Theorem~\ref{thm:LWP} is certainly not optimal in terms of regularity but will suffice for the purposes of establishing the existence of global solutions. Indeed, an elementary application of the usual Littlewood-Paley trichotomy allows us to obtain local well-posedness in \(Z^{\frac 72+}\). We also mention several other local well-posedness results in other topologies are proved in~\cite{2017arXiv170509744L}.
\end{rem}
\begin{rem}
As usual, it suffices to consider times \(t\geq0\) as the equation is invariant under the transformation
\(
u(t,x,y)\mapsto u(-t,-x,y).
\)
\end{rem}
The key ingredient for local well-posedness will be a priori estimates for the solution in the space \(Z^k\). We will supplement these a priori bounds with a further estimate when the initial data \(u_0\in X\) satisfies the smallness condition \eqref{est:Init} and obtain energy estimates for the solution depending upon the size of
\begin{equation}\lambdaanglebel{ControlQuantity}
\mathcal M = \frac1\epsilon\sup\lambdaimits_{t\in[0,T]} |t|^{\frac12}\lambdaangle t\rangle^{\frac12}\|u_x\|_{L^\infty}
\end{equation}
Our main a priori bound is the following:
\begin{prop}\lambdaanglebel{prop:NRG}
Let \(h=1\) and \(u\) be a smooth solution to \eqref{eqn:cc-fd} on the time interval \([0,T]\). Then we have the a priori estimate,
\begin{equation}\lambdaanglebel{est:Zk}
\|u(t)\|_{Z^k} \lambdaeq \|u_0\|_{Z^k(0)} \exp\lambdaeft(C\int_0^t \|u_x(s)\|_{L^\infty}\,ds\right).
\end{equation}
Further, if the initial data \(u_0\in X(0)\) satisfies \eqref{est:Init} and \(0<\epsilon\lambdal \mathcal M^{-1}\) is sufficiently small, we have the improved estimate
\begin{equation}\lambdaanglebel{est:NRG}
\|u(t)\|_X\lambdaesssim \epsilon \lambdaangle t\rangle^{C\mathcal M\epsilon}.
\end{equation}
\end{prop}
\begin{proof}
In order to justify the various computations we note that by standard approximation arguments it suffices to assume that \(u\) is a Schwartz function and that for some \(0<\nu\lambdal1\) we have \(P_{< \nu}u = 0\).
\emph{Estimates for \(\partialartial_x^ku\).}
Differentiating the equation \(k\) times we obtain
\[
\mathcal L\partialartial_x^ku = u\partialartial_x^{k+1}u + \frac12\sum\lambdaimits_{j=1}^k\binom{k+1}{j}\partialartial_x^ju\cdot\partialartial_x^{k+1-j}u.
\]
Integrating by parts for the first term and using the elementary interpolation estimate
\[
\|\partialartial_x^ju\|_{L^{\frac{2(k-1)}{j-1}}}^{k-1}\lambdaesssim \|u_x\|_{L^\infty}^{k-j}\|\partialartial_x^ku\|_{L^2}^{j - 1},\qquad 1\lambdaeq j\lambdaeq k,
\]
for the second term we obtain the bound
\begin{equation}\lambdaanglebel{est:D4u}
\frac{d}{dt}\|\partialartial_x^ku\|_{L^2}^2\lambdaesssim \|u_x\|_{L^\infty}\|\partialartial_x^ku\|_{L^2}^2.
\end{equation}
\emph{Estimates for \(L_y^2\partialartial_xu\).} Again we start by calculating
\[
\mathcal LL_y^2\partialartial_xu = uL_y^2\partialartial_x^2u + (L_y\partialartial_xu)^2.
\]
For the first term we may simply integrate by parts. For the second term it suffices to show that
\begin{equation}\lambdaanglebel{L4Bound}
\|L_yu_x\|_{L^4}^2\lambdaesssim \|u_x\|_{L^\infty}\|L_y^2u_x\|_{L^2},
\end{equation}
from which we obtain the estimate
\begin{equation}\lambdaanglebel{est:Lyu}
\frac{d}{dt}\|L_y^2\partialartial_xu\|_{L^2}^2\lambdaesssim \|u_x\|_{L^\infty}\|L_y^2\partialartial_xu\|_{L^2}^2.
\end{equation}
To prove the estimate \eqref{L4Bound} we first make a change of variables \(f(t,x,y) = u(t,x - \frac1{4t}y^2,y)\) so that
\[
\|f_x\|_{L^\infty} = \|u_x\|_{L^\infty},\qquad\|f_y\|_{L^4} = \|L_y\partialartial_xu\|_{L^4},\qquad \|\partialartial_x^{-1}f_{yy}\|_{L^2} = \|L_y^2\partialartial_xu\|_{L^2}.
\]
We first observe that by symmetry we may decompose by frequency as
\[
\|f_y\|_{L^4}^4 = \sum\lambdaimits_{N_1\sim N_2}\int f_{N_1,y} f_{N_2,y} (f_{\lambdaeq N_2,y})^2\,dxdy.
\]
We then integrate by parts to obtain
\begin{align*}
\|f_y\|_{L^4}^4 &= - \sum\lambdaimits_{N_1\sim N_2}\int f_{N_1} f_{N_2,yy} (f_{\lambdaeq N_2,y})^2\,dxdy - 2\sum\lambdaimits_{N_1\sim N_2}\int f_{N_1} f_{N_2,y} f_{\lambdaeq N_2,y} f_{\lambdaeq N_2,yy}\,dxdy\\
&\lambdaesssim \|f_x\|_{L^\infty}\lambdaeft(\|C_1[\partialartial_x^{-1}f_{yy},f_y,f_y]\|_{L^1} + \|C_2[f_y,f_y,\partialartial_x^{-1}f_{yy}]\|_{L^1}\right),
\end{align*}
where the trilinear Fourier multipliers
\[
C_1[f,g,h] = \sum\lambdaimits_{N_1\sim N_2}\partialartial_x^{-1}P_{N_1}(\partialartial_xf_{N_2} g_{\lambdaeq N_2}h_{\lambdaeq N_2}),\quad C_2[f,g,h] = \sum\lambdaimits_{N_1\sim N_2}\partialartial_x^{-1}P_{N_1}(f_{N_2}g_{\lambdaeq N_2}\partialartial_xh_{\lambdaeq N_2}),
\]
may be bounded using the Coifman-Meyer Theorem \eqref{CM} to obtain
\[
\|C_1[\partialartial_x^{-1}f_{yy},f_y,f_y]\|_{L^1} + \|C_2[f_y,f_y,\partialartial_x^{-1}f_{yy}]\|_{L^1}\lambdaesssim \|f_y\|_{L^4}^2\|\partialartial_x^{-1}f_{yy}\|_{L^2}.
\]
\emph{Proof of \eqref{est:Zk}.} The estimate \eqref{est:Zk} then follows from the conservation of mass and estimates \eqref{est:D4u}, \eqref{est:Lyu} and Gronwall's inequality.
\emph{Estimates for \(Ju\): Short times.} For short times (\(0<t<1\)) we take \(w = Ju + tuu_x\) and calculate
\[
\mathcal Lw = (uw)_x - t\partialartial_x[\mathcal T^{-1}\partialartial_x,u]u_x - t\partialartial_x[(1 + \mathcal T^{-2})\partialartial_x^2,u]u_x.
\]
We observe that \(\mathcal T^{-1}\partialartial_x\) is a smooth Fourier multiplier with principle symbol homogeneous of order \(1\) and that \((1 + \mathcal T^{-2})\partialartial_x^2\) is a smooth Fourier multiplier with Schwartz symbol. Standard commutator estimates (see for example~\cite{MR1766415}) then yield the bounds
\begin{align*}
\|\partialartial_x[\mathcal T^{-1}\partialartial_x,u]u_x\|_{L^2} &\lambdaesssim \|u_x\|_{L^\infty}\lambdaeft(\|u_{xx}\|_{L^2} + \|u\|_{L^2}\right),\\
\|\partialartial_x[(1 + \mathcal T^{-2})\partialartial_x^2,u]u_x\|_{L^2} &\lambdaesssim \|u_x\|_{L^\infty}\|u\|_{L^2}.
\end{align*}
Integrating by parts in the first term we then obtain the estimate
\begin{equation}\lambdaanglebel{est:JShort}
\frac{d}{dt}\|w\|_{L^2}^2\lambdaesssim \|u_x\|_{L^\infty}\|w\|_{L^2}\lambdaeft(\|w\|_{L^2} + \|u_{xx}\|_{L^2} + \|u\|_{L^2}\right).
\end{equation}
\emph{Estimates for \(Ju\): Long times.}
For times \(t\geq1\) we write
\[
J = \mathbf S - \frac12 L_y\partialartial_y - 2t\mathcal L,
\]
where the operator
\[
\mathbf S = 2t\partialartial_t + x\partialartial_x + \frac32 y\partialartial_y - t\partialartial_x^3(1 + \mathcal T^{-2}) + t\partialartial_x,
\]
satisfies
\[
[\mathcal L,\mathbf S] = 2\mathcal L.
\]
We note that in the limit \(h = \infty\) we obtain the operator \(\mathbf S_\infty = 2t\partialartial_t + x\partialartial_x + \frac32y\partialartial_y\), which is the generator of the scaling symmetry \eqref{Scaling}.
As a consequence, we define
\[
w = \mathbf Su - \frac12 L_y\partialartial_y u + \frac12u = Ju + \frac12u - 2tuu_x,
\]
and calculate
\[
\mathcal Lw = (uw)_x - \frac1{4t}\lambdaeft(u_xL_y^2\partialartial_xu - (L_y\partialartial_xu)^2\right) - t\partialartial_x[\partialartial_x^2(1 + \mathcal T^{-2}),u]u_x
\]
In order to obtain long time bounds for the commutator term we will take advantage of the non-resonance and use a normal form transformation to upgrade it to a cubic term. We start by using the Fourier transform to write
\[
\mathcal F\lambdaeft[\partialartial_x[\partialartial_x^2(1 + \mathcal T^{-2}),u]v_x\right] = \int_{\mathbf k_1 + \mathbf k_2 = \mathbf k}k(\xi_1,\xi_2)\hat u(\mathbf k_1)\hat v(\mathbf k_2)\,d\mathbf k,
\]
where the symbol
\[
k(\xi_1,\xi_2) = (\xi_1 + \xi_2)\xi_2\lambdaeft((\xi_1 + \xi_2)^2\cosech^2(\xi_1 + \xi_2) - \xi_2^2\cosech^2(\xi_2) \right).
\]
Next we symmetrize to obtain
\begin{align*}
k_{\mathrm{sym}}(\xi_1,\xi_2) &= \frac12k(\xi_1,\xi_2) + \frac12k(\xi_2,\xi_1)\\
&= \frac12(\xi_1 + \xi_2)^4\cosech^2(\xi_1 + \xi_2) - \frac12(\xi_1 + \xi_2)\xi_1^3\cosech^2\xi_1 - \frac12(\xi_1 + \xi_2)\xi_2^3\cosech^2\xi_2.
\end{align*}
We then construct a symmetric bilinear Fourier multiplier \(B[u,v]\) with symbol
\[
b(\mathbf k_1,\mathbf k_2) = \frac{k_{\mathrm{sym}}(\xi_1,\xi_2)}{\Omega(\mathbf k_1,\mathbf k_2)},
\]
where the resonance function \(\Omega\) is defined as in \eqref{ResonanceFunction}. The symbol \(b\) may be readily seen to be rapidly decaying at high frequencies. However, due to the commutator structure of \(k\) it also has additional smallness at low frequencies that will allow us to obtain bounds in terms of pointwise norms of \(u_x\) rather than \(u\). We remark that in the infinite depth case, we replace \(1 + \mathcal T^{-2}\) by \(1 + \mathcal H^{-2}\) and hence this term vanishes, i.e. \(k\equiv0\).
By construction we have
\[
\mathcal L B[u,u] = \partialartial_x[\partialartial_x^2(1 + \mathcal T^{-2}),u]u_x + 2B[u,uu_x],
\]
so taking \(q = w + tB[u,u]\) we obtain
\[
\mathcal Lq = (uq)_x - \frac1{4t}\lambdaeft(u_xL_y^2\partialartial_xu - (L_y\partialartial_xu)^2\right) + (1 - tu_x)B[u,u] + 2t \lambdaeft(B[u,uu_x] - uB[u,u_x]\right).
\]
We claim that
\begin{align}
\|B[u,u]\|_{L^2} &\lambdaesssim \|u_x\|_{L^\infty}\|u\|_{L^2},\lambdaanglebel{GoodNFBounds1}\\
\|B[u,uu_x] - uB[u,u_x]\|_{L^2} &\lambdaesssim \|u_x\|_{L^\infty}^2\|u\|_{L^2},\lambdaanglebel{GoodNFBounds2}
\end{align}
so applying these bounds along with the \(L^4\) estimate \eqref{L4Bound} and integrating by parts in the first term we obtain the estimate
\[
\frac d{dt}\|q\|_{L^2}^2\lambdaesssim \|u_x\|_{L^\infty}\|q\|_{L^2}^2 + \|u_x\|_{L^\infty}(1 + t\|u_x\|_{L^\infty})\|q\|_{L^2}\lambdaeft(\|u\|_{L^2} + \|L_y^2\partialartial_xu\|_{L^2}\right),
\]
which suffices to complete the proof of \eqref{est:NRG}.
It remains to prove the estimates \eqref{GoodNFBounds1}, \eqref{GoodNFBounds2}. By considering the asymptotic behavior of \(\coth\xi\) we obtain the following asymptotic behavior in the low-high and high-high regimes,
\[
|k_{\mathrm{sym}}(\xi_1,\xi_2)|\lambdaesssim \begin{cases}|\xi_1||\xi_2|^3\lambdaangle \xi_2\rangle^{-2},&\quad |\xi_1|\lambdal|\xi_2|,
\\|\xi_1 + \xi_2|^2|\xi_2|^2\lambdaangle \xi_2\rangle^{-2},&\quad |\xi_1 + \xi_2|\lambdal|\xi_2|.\end{cases}
\]
Combining these bounds with the estimate \eqref{FDResonanceLB} for the resonance function we obtain a (crude) bound for the symbol \(b\) of the bilinear operator \(B\),
\[
|b(\mathbf k_1,\mathbf k_2)|\lambdaesssim \begin{cases}|\xi_2|\lambdaangle\xi_2\rangle^{-1},&\quad |\xi_1|\lambdal|\xi_2|,
\\|\xi_1 + \xi_2|\lambdaangle\xi_2\rangle^{-1},&\quad |\xi_1 + \xi_2|\lambdal|\xi_2|.\end{cases}
\]
Next we decompose the operator \(B\) using the Littlewood-Paley trichotomy as
\[
B[u,v] = Q_1[u,v_x] + Q_1[v,u_x] + \partialartial_xQ_2[u,v],
\]
where we define the bilinear forms
\[
Q_1[u,v] = \sum\lambdaimits_N B[u_{\lambdal N},\partialartial_x^{-1}v_N],\qquad Q_2[u,v] =\sum\lambdaimits_{N_1\sim N_2} \partialartial_x^{-1}B[u_{N_1},v_{N_2}].
\]
From the above estimates for \(b\) we see that the corresponding symbols \(q_1,q_2\) are bounded and applying similar estimates for the derivatives we obtain \(q_1,q_2\in\mathscr S\). We may then apply the Coifman-Meyer Theorem \eqref{CM} to obtain the estimates
\[
\|Q_1[u,u_x]\|_{L^2}\lambdaesssim \|u_x\|_{L^\infty}\|u\|_{L^2},\qquad \|\partialartial_xQ_2[u,u]\|_{L^2}\lambdaesssim \|u_x\|_{L^\infty}\|u\|_{L^2},
\]
which suffice to complete the proof of \eqref{GoodNFBounds1}.
The proof of the estimate \eqref{GoodNFBounds2} is similar, taking advantage of the commutator structure. We first write that the difference
\[
B[u,uu_x] - uB[u,u_x] = C[u,u,u_x],
\]
where the operator \(C\) has symbol,
\[
C(\mathbf k_1,\mathbf k_2,\mathbf k_3) = b(\mathbf k_1,\mathbf k_2 + \mathbf k_3) - b(\mathbf k_1,\mathbf k_3).
\]
Next we decompose \(C\) according to frequency balance of the last two terms,
\[
C[u,u,u_x] = R_1[u,u_x,u_x] + R_2[u,u_x,u_x],
\]
where we define the trilinear forms,
\[
R_1[u,v,w] = \sum\lambdaimits_NC[u,\partialartial_x^{-1}v_N,w_{\lambdaeq N}],\qquad R_2[u,v,w] = \sum\lambdaimits_NC[u,\partialartial_x^{-1}v_N,w_{> N}].
\]
For the first term we may use the above estimates for the symbol \(b\) to see that \(R_1\) has symbol \(r_1\in\mathscr S \). We then apply the Coifman-Meyer Theorem \eqref{CM} to obtain the estimate
\[
\|R_1[u,u_x,u_x]\|_{L^2}\lambdaesssim \|u_x\|_{L^\infty}^2\|u\|_{L^2}.
\]
For the second term we instead use the commutator structure of \(C\), writing
\[
C(\mathbf k_1,\mathbf k_2,\mathbf k_3) = \int_0^1 \nabla_{\mathbf k_2}b(\mathbf k_1,h\mathbf k_2 + \mathbf k_3)\cdot\mathbf k_2\,dh,
\]
and using similar computations to above in the region \(|\xi_2|\lambdal|\xi_3|\) we have the estimate
\[
|C(\mathbf k_1,\mathbf k_2,\mathbf k_3)|\lambdaesssim |\xi_2|.
\]
Applying similar estimates for the derivatives we may show that \(r_2\in\mathscr S\) and once again we apply the Coifman-Meyer Theorem \eqref{CM} to obtain the estimate
\[
\|R_2[u,u_x,u_x]\|_{L^2}\lambdaesssim \|u_x\|_{L^\infty}^2\|u\|_{L^2},
\]
which completes the proof of \eqref{GoodNFBounds2}.
\end{proof}
The proof of Theorem~\ref{thm:LWP} now follows from a standard application of the energy method using the a priori estimate \eqref{prop:NRG} and the following Sobolev estimate:
\begin{lem}\lambdaanglebel{lem:ShortTimes}
For times \(t>0\) we have the estimate
\begin{equation}\lambdaanglebel{ShortTime}
\|u\|_{L^\infty} + \|u_x\|_{L^\infty}\lambdaesssim t^{-\frac12}\|u\|_{Z^3}
\end{equation}
\end{lem}
\begin{proof}
We start by decomposing \(u\) with respect to \(x\)-frequency as
\[
u = u_{<1} + \sum\lambdaimits_{N\geq1}u_N.
\]
Writing
\[
f(t,x,y) = u_{<1}(t,x - \frac{1}{4t}y^2,y),
\]
we may apply the Sobolev estimate \eqref{est:Sobolev} to obtain
\[
\|u_{<1}\|_{L^\infty}\lambdaesssim t^{-\frac12}\|u\|_{Z^0},\qquad \|\partialartial_xu_{<1}\|_{L^\infty}\lambdaesssim t^{-\frac12}\|u\|_{Z^0}.
\]
Replacing \(u_{<1}\) by \(u_N\) for \(N\in 2^{\delta\mathbb Z}\) we obtain a similar bound,
\[
\|u_N\|_{L^\infty}\lambdaesssim t^{-\frac12}N^{-\frac32}\|u_N\|_{Z^3},\qquad \|\partialartial_xu_N\|_{L^\infty}\lambdaesssim t^{-\frac12}N^{-\frac12}\|u_N\|_{Z^3}.
\]
Summing over \(N\geq1\) we obtain the estimate \eqref{ShortTime}.
\end{proof}
\section{Pointwise bounds}\lambdaanglebel{sect:KS}
In this section we prove that the energy estimates for solutions proved in Section~\ref{sect:AP} lead to corresponding pointwise bounds. In particular, we have the following result:
\begin{prop}\lambdaanglebel{prop:BasicPointwise}
For \(t>0\) we have the estimate
\begin{equation}\lambdaanglebel{Starter410}
\|u_x\|_{L^\infty}\lambdaesssim |t|^{-\frac12}\lambdaangle t\rangle^{-\frac12}\|u\|_X.
\end{equation}
\end{prop}
\begin{rem}
By combining the a priori estimates of Proposition~\ref{prop:NRG} with Proposition~\ref{prop:BasicPointwise} and a standard bootstrap argument, we obtain the local well-posedness of \eqref{eqn:cc-fd} on \emph{almost} global timescales \(T \approx e^{C \epsilon^{-1}}\).
\end{rem}
For times \(0<t< 1\), Proposition~\ref{prop:BasicPointwise} is a corollary of the estimate \eqref{ShortTime}. As a consequence, it suffices to consider times \(t\geq 1\). Here we will prove a slightly more involved result that we will subsequently use to upgrade almost global existence to global existence via a bootstrap argument.
We recall that linear solutions initially localized near the origin in space will propagate along the rays \(\Gamma_{\mathbf v}\) of the Hamiltonian flow. In particular, if the solution is localized at \(x\)-frequency \(N\in 2^{\delta \mathbb Z}\) then at time \(t\) it should be localized in the spatial region \(\{z\approx t m(N)\}\), where we define the spatial variable
\begin{equation}\lambdaanglebel{zVar}
z = -\lambdaeft(x + \frac1{4t}y^2\right),
\end{equation}
and the non-negative symbol
\begin{equation}\lambdaanglebel{LittleM}
m(\xi) = 2\xi\coth\xi - \xi^2\cosech^2\xi - 1.
\end{equation}
With this heuristic in mind we decompose
\[
u = u^\mr{hyp} + u^\mr{ell},
\]
where the part of the hyperbolic piece localized at frequency \(N\in 2^{\delta\mathbb Z}\) is localized in space so that
\[
t^{-1}z\in B_N^\mr{hyp} := \{v>0: v \sim m(N)\}.
\]
We note that
\[
m(\xi)\sim \frac{\xi^2}{\lambdaangle\xi\rangle},
\]
so due to the uncertainty principle such a localization is only possible when \(N\geq t^{-\frac13}\). As a consequence we include the low frequencies \(N<t^{-\frac13}\) (for which we may obtain improved decay) in the elliptic piece.
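To make the role of the uncertainty principle explicit we record the heuristic count behind this threshold. The region \(\{t^{-1}z\sim m(N)\}\) has width \(\sim t\,m(N)\) in \(z\), so a cutoff to this region is localized in frequency at scale \(\sim (t\,m(N))^{-1}\), and compatibility with the frequency localization \(|\xi|\sim N\) requires
\[
\frac1{t\,m(N)}\lesssim N,\qquad\text{i.e.}\qquad t\,N\,m(N)\gtrsim 1.
\]
As \(m(N)\sim N^2\) for \(N\lesssim 1\), this is precisely the threshold \(N\gtrsim t^{-\frac13}\).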
To make this construction rigorous, for each \(N\geq t^{-\frac13}\) we take a smooth bump function \(\chi_N^\mr{hyp} \in C^\infty_0(\mathbb R_+)\), identically \(1\) on the set \(B_N^\mr{hyp}\) and localized up to rapidly decaying tails at frequencies \(|\xi|\lambdaesssim \frac1{m(N)}\). We then define
\[
u^\mr{hyp} = \sum\lambdaimits_{N>t^{-\frac13},\partialm}u_N^{\mr{hyp},\partialm},\qquad u^\mr{ell} = u - u^\mr{hyp},
\]
where, for each \(N\geq t^{-\frac13}\),
\[
u_N^{\mr{hyp},\partialm}(t,x,y) = \chi_N^\mr{hyp}(t^{-1}z) u_N^\partialm(t,x,y),\qquad u_N^{\mr{ell}}(t,x,y) = \lambdaeft(1 - \chi_N^\mr{hyp}(t^{-1}z)\right)u_N(t,x,y).
\]
\begin{figure}
\caption{A phase space illustration of the hyperbolic region \(B_N^{\mr{hyp}}\).}
\end{figure}
If the solution \(u\) behaves like a linear wave, we expect that most of its energy will be concentrated in the hyperbolic piece \(u^\mr{hyp}\), and hence that the elliptic piece \(u^\mr{ell}\) will decay faster than the expected \(O(t^{-1})\) linear rate of decay. As a consequence, we obtain the following pointwise bounds, similar to \cite[Proposition 3.1]{2014arXiv1409.4487H}:
\begin{lem}\lambdaanglebel{lem:FDR2} For \(t\geq 1\) and a.e. \((x,y)\in\mathbb R^2\) we have the estimates
\begin{gather}
|u^\mr{hyp}| \lambdaesssim t^{-1}v^{-\frac38}\lambdaangle v\rangle^{-\frac78}\|u\|_X,\quad |u_x^\mr{hyp}|\lambdaesssim t^{-1}v^{\frac18}\lambdaangle v\rangle^{-\frac38}\|u\|_X,\lambdaanglebel{est:FDR2-HYPERBOLIC-2}\\
|u^\mr{ell}| \lambdaesssim t^{-\frac34}\lambdaangle t^{\frac23}v\rangle^{-\frac34}\lambdaeft(1 + \lambdaog\lambdaangle t^{\frac23}v\rangle\right)\|u\|_X,\quad |u_x^\mr{ell}|\lambdaesssim t^{-\frac{13}{12}}\lambdaangle t^{\frac23}v\rangle^{-\frac14}\lambdaangle t^{\frac12}v\rangle^{\frac14}\lambdaangle t^{-\frac12}v\rangle^{-\frac5{12}}\|u\|_X\lambdaanglebel{est:FDR2-ELLIPTIC-2}.
\end{gather}
\end{lem}
\begin{rem}
The unusual scaling of the estimate \eqref{est:FDR2-ELLIPTIC-2} is a consequence of the fact that due to the weaker dispersion we must use additional derivatives to control the high \(x\)-frequencies rather than just the vector fields. This breaks the natural scaling of the other estimates.
\end{rem}
\begin{rem}
In the infinite depth case we have \(m(\xi) = 2|\xi|\) and hence the high frequency threshold is \(N\geq t^{-\frac12}\) rather than \(N\geq t^{-\frac13}\). The slightly different form of the function \(m\) leads to minor adjustments to the numerology of Lemma~\ref{lem:FDR2}:
\begin{lem} If \(h = \infty\) then for \(t\geq 1\) and a.e. \((x,y)\in\mathbb R^2\) we have the estimates
\begin{gather}
|u^\mr{hyp}| \lambdaesssim t^{-1}v^{-\frac14}\lambdaangle v\rangle^{-1}\|u\|_X,\quad |u_x^\mr{hyp}|\lambdaesssim t^{-1}v^{\frac34}\lambdaangle v\rangle^{-1}\|u\|_X,\\
|u^\mr{ell}| \lambdaesssim t^{-\frac78}\lambdaangle t^{\frac12}v\rangle^{-\frac34}\lambdaeft(1 + \lambdaog\lambdaangle t^{\frac12}v\rangle\right)\|u\|_X,\quad |u_x^\mr{ell}|\lambdaesssim t^{-\frac98}\lambdaangle t^{-\frac12}v\rangle^{-\frac5{12}}\|u\|_X.
\end{gather}
\end{lem}
\end{rem}
In order to prove Lemma~\ref{lem:FDR2} we require some auxiliary estimates so we delay the proof until Section~\ref{sect:ProofPointwise}.
\subsection{The solution to the eikonal equation}
In this section we construct the solution \(\partialhi\) to the eikonal equation
\begin{equation}\lambdaanglebel{eikonal}
\mr{ell}l(\partialhi_t,\partialhi_x,\partialhi_y) = 0,
\end{equation}
where \(\mr{ell}l\) is the symbol of the linear operator \(\mathcal L\), given by
\[
\mr{ell}l(\tau,\xi,\eta) = \tau - \xi^2\coth\xi + \xi + \xi^{-1}\eta^2.
\]
If we make the ansatz that for \(z>0\),
\[
\partialhi(t,x,y) = t\Phi(t^{-1}z),
\]
then we obtain an ODE for \(\Phi = \Phi(v)\),
\begin{equation}\lambdaanglebel{EikonalODE}
\Phi - v\Phi' + (\Phi')^2\coth(\Phi') - \Phi' = 0.
\end{equation}
Differentiating we obtain
\[
(m(\Phi') - v)\Phi'' = 0,
\]
and hence
\[
\Phi'(v) = m^{-1}(v),
\]
where the positive inverse \(m^{-1}(v)>0\) may be defined using the Inverse Function Theorem for \(v>0\). As a consequence we have the following lemma:
\begin{lem}
For \(z>0\) the solution to the eikonal equation \eqref{eikonal} is given by
\begin{equation}\lambdaanglebel{PHASE}
\partialhi(t,x,y) = t\Phi(t^{-1}z),
\end{equation}
where
\[
\Phi(v) = (m^{-1}(v))^2\coth(m^{-1}(v)) - (1 + v)m^{-1}(v).
\]
\end{lem}
We finish this section by noting that we have the estimates
\[
0< m(\xi)\sim \frac{\xi^2}{\lambdaangle \xi\rangle},
\]
and
\[
0<m^{-1}(v)\sim \lambdaangle v\rangle^{\frac12}v^{\frac12}.
\]
In particular, we may show that the solution to the eikonal equation satisfies the estimate
\[
|\Phi(v)|\sim \lambdaangle v\rangle^{\frac12}v^{\frac32}.
\]
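As a consistency check (using only the formula of the preceding lemma), in the two regimes we have: for \(0<v\ll1\), \(m^{-1}(v)\approx v^{\frac12}\) and \(\coth(v^{\frac12})\approx v^{-\frac12} + \frac13v^{\frac12}\), so
\[
\Phi(v)\approx -\tfrac23v^{\frac32};
\]
for \(v\gg1\), \(m^{-1}(v)\approx \tfrac12(v+1)\) and \(\coth(m^{-1}(v))\approx 1\), so \(\Phi(v)\approx -\tfrac14(v+1)^2\). Both regimes are consistent with the stated bound \(|\Phi(v)|\sim \langle v\rangle^{\frac12}v^{\frac32}\).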
\begin{rem}
In the infinite depth case we have \(\Phi(v) = -\frac14 v^2\) and hence the solution to the eikonal equation is simply
\[
\partialhi = -\frac{z^2}{4t}.
\]
\end{rem}
\begin{figure}
\caption{The graph of the function \(v = m(\xi)\).}
\end{figure}
\subsection{Elliptic estimates}
In this section we establish weighted \(L^2\)-estimates for the frequency localized pieces \(u_N\). As we expect to obtain improved decay at very low frequencies \(N< t^{-\frac13}\) regardless, we restrict our attention to high frequencies \(N\geq t^{-\frac13}\).
In order to control the localization of solutions we define an operator adapted to the hyperbolic/elliptic decomposition of \(u\) by
\begin{equation}\lambdaanglebel{Lz}
L_z = z - tm(D_x),
\end{equation}
so that the symbol of \(L_zP_N\) is elliptic away from the set \(B_N^\mr{hyp}\). We note that we may write
\[
L_z\partialartial_x = - \lambdaeft(J + \frac{1}{4t}L_y^2\partialartial_x + \frac12\right),
\]
and hence for \(t\geq 1\) we have the estimate
\begin{equation}\lambdaanglebel{est:LzBound}
\|L_z\partialartial_xu_N\|_{L^2}\lambdaesssim \|u_N\|_X.
\end{equation}
In order to obtain more detailed estimates for the hyperbolic piece we observe that for a given \(z>0\) there exist two solutions to the equation \(m(\xi) = t^{-1}z\). As a consequence we construct operators adapted to each of these roots,
\[
L_z^\partialm = m^{-1}(t^{-1}z) \partialm i\partialartial_x,
\]
where \(L_z^-\) (respectively \(L_z^+\)) will be elliptic if \(u_N\) is localized to positive (respectively negative) wavenumbers. A useful observation, and indeed our main motivation for introducing these operators is that if \(\partialhi\) is the solution to the eikonal equation \eqref{eikonal} defined as in \eqref{PHASE} then we have
\[
\partialartial_x(e^{-i\partialhi}f) = -ie^{-i\partialhi}L_z^+f.
\]
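This identity is a one-line computation: using that \(\partial_x\phi = m^{-1}(t^{-1}z)\) (as recalled below in the proof of Lemma~\ref{lem:Microlocal}), we have, at least formally,
\[
\partial_x(e^{-i\phi}f) = e^{-i\phi}\left(\partial_xf - i(\partial_x\phi)f\right) = -ie^{-i\phi}\left(m^{-1}(t^{-1}z)f + i\partial_xf\right) = -ie^{-i\phi}L_z^+f.
\]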
Our main elliptic estimates are then the following:
\begin{lem}\lambdaanglebel{lem:Elliptic}
For \(t\geq1\) and \(N\geq t^{-\frac13}\) we have the estimates
\begin{align}
\frac N{\lambdaangle N\rangle}\|L_z^+ u_N^{\mr{hyp},+}\|_{L^2} &\lambdaesssim \frac1{tN}\|u_N\|_X ,\lambdaanglebel{est:FDR2-Elliptic1}\\
\frac{N^2}{\lambdaangle N\rangle}\|u_N^\mr{ell}\|_{L^2} + \|v u_N^\mr{ell}\|_{L^2} &\lambdaesssim \frac1{tN}\|u_N\|_X,\lambdaanglebel{est:FDR2-Elliptic2}
\end{align}
where \(v = t^{-1}z\).
\end{lem}
\begin{rem}
In the infinite depth case the operator \(L_z\) takes the form
\[
L_z = z - 2t|D_x|.
\]
As a consequence, the natural analogues,
\(
L_z^\partialm = \dfrac z{2t} \partialm i\partialartial_x
\),
satisfy
\[
L_z^\partialm P_\partialm = \frac1{2t}L_z P_\partialm,
\]
so we do not expect a gain of regularity for the hyperbolic piece as in \eqref{est:FDR2-Elliptic1} (see also~\cite{MR3382579}). For the elliptic piece we instead have the estimate
\[
N\|u_N^\mr{ell}\|_{L^2} + \|v u_N^\mr{ell}\|_{L^2} \lambdaesssim \frac1{tN}\|u_N\|_X.
\]
\end{rem}
\begin{proof}[Proof of Lemma~\ref{lem:Elliptic}]
As these are effectively one-dimensional estimates, we ignore the dependence upon \(y\) and treat \(t\geq1\) as a fixed parameter.
\emph{Proof of \eqref{est:FDR2-Elliptic1}.}
For high, positive wavenumbers \(\xi\approx N\geq1\) we may write the symbol \(m^{-1}(t^{-1}z) - \xi\) of \(L_z^+\) in terms of the symbol \(z - tm(\xi)\) of \(L_z\) as
\[
m^{-1}(t^{-1}z) - \xi = t^{-1}(z - tm(\xi))p(t^{-1}z,\xi),
\]
where the smooth function
\[
p(v,\xi) = \frac{m^{-1}(v) - \xi}{v - m(\xi)},
\]
is elliptic in the region \(v\sim m(\xi)\) for \(\xi\geq1\). We recall that \(u_{N}^{\mr{hyp},+}\) is localized in space in the set \(B_N^\mr{hyp}\) and in frequency at positive wavenumbers \(\xi\approx N\) up to rapidly decaying tails at scale \(tN\). In particular, we may harmlessly localize \(p(v,\xi)\) using cutoffs in space and frequency and then apply the product rule \eqref{ProductRule} and the elliptic estimate \eqref{OPSElliptic} to obtain
\[
\|L_{z}^+u_N^{\mr{hyp},+}\|_{L^2}\lesssim \frac1t\left(\|L_zu_N^\mr{hyp}\|_{L^2} + \|u_N^\mr{hyp}\|_{L^2}\right) \lesssim \frac1{tN}\|u_N\|_X.
\]
For low, positive wavenumbers \(\xi\approx N\) so that \(t^{-\frac13}\lambdaeq N < 1\) we instead write the product of the symbols of \(L_z^-L_z^+\) as
\[
(m^{-1}(t^{-1}z) + \xi)(m^{-1}(t^{-1}z) - \xi) = t^{-1}(z - tm(\xi))q(t^{-1}z,\xi),
\]
where
\[
q(v,\xi) = \frac{(m^{-1}(v))^2 - \xi^2}{v - m(\xi)}\in\mathscr S
\]
A similar estimate to above yields the bound
\[
\|L_z^-L_z^+u_{N}^{\mr{hyp},+}\|_{L^2}\lambdaesssim \frac1{tN}\lambdaeft(\|L_z\partialartial_xu_N\|_{L^2} + \|u_N\|_{L^2}\right).
\]
For sufficiently smooth \(w\) we then calculate
\[
\|wf\|_{L^2}^2 + \|f_x\|_{L^2}^2 = \|L_z^-f\|_{L^2}^2 + 2\mathop{\rm Im}\nolimits\int wf\cdot\bar f_x.
\]
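(This identity is simply the expansion of the square: writing \(L_z^-f = wf - if_x\) with \(w\) real-valued, we have
\[
\|L_z^-f\|_{L^2}^2 = \|wf\|_{L^2}^2 + \|f_x\|_{L^2}^2 - 2\mathop{\rm Im}\nolimits\int wf\cdot\bar f_x,
\]
which rearranges to the stated formula.)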
Applying this with \(w = m^{-1}(t^{-1}z)\) and \(f = L_z^+u_{N}^{\mr{hyp},+}\), which is localized at positive wavenumbers \(\xi \sim N\) and in space in the region \(B_N^\mr{hyp}\) up to rapidly decaying tails, we obtain the estimate
\begin{align*}
\|f\|_{L^2}^2 &\lesssim \frac1{N^2}\left(\|wf\|_{L^2}^2 + \|f_x\|_{L^2}^2\right) +\frac1{t^2N^4}\|u_N\|_{L^2}^2\\
&\lesssim \frac1{N^2} \|L_z^-f\|_{L^2}^2 + \frac1{N^2}\left|\mathop{\rm Im}\nolimits\int wf\cdot\bar f_x\right| + \frac1{t^2N^4}\|u_N\|_{L^2}^2\\
&\lesssim \frac1{t^2N^4}\left(\|L_z\partial_xu_N\|_{L^2}^2 + \|u_N\|_{L^2}^2\right).
\end{align*}
Combining these estimates we obtain \eqref{est:FDR2-Elliptic1}.
\emph{Proof of \eqref{est:FDR2-Elliptic2}.}
We define the Fourier multiplier
\[
a(\xi) = \sqrt{m(\xi)},
\]
and take \(A = a(D_x)\) so that
\[
L_z = z - tA^2.
\]
Integrating by parts for real-valued \(f\) we obtain the identity
\begin{equation}\lambdaanglebel{est:FDR2-IBP}
\|v f_x\|_{L^2}^2 - 2\int v|Af_x|^2\,dx + \|A^2f_x\|_{L^2}^2 = t^{-2}\|L_z\partialartial_xf\|_{L^2}^2,
\end{equation}
where we have used that the symbol of the operator \(A[A,v]\) is skew-adjoint.
We then smoothly decompose the elliptic part of \(u_N\) as
\[
u_N^\mr{ell} = \chi_{\{|z|\lambdal tm(N)\}}u_N + \chi_{\{z\approx - tm(N)\}}u_N + \chi_{\{|z|\gg tm(N)\}}u_N,
\]
where the smooth cutoffs are assumed to have compact support and be localized in frequency near zero at the scale of uncertainty.
For the first and last piece we apply the estimate \eqref{est:FDR2-IBP} with \(f = \chi_{\{|v|\lambdal m(N)\}}u_N\), \(\chi_{\{|z|\gg tm(N)\}}u_N\) respectively to obtain
\[
\|v f_x\|_{L^2} + m(N)\|f_x\|_{L^2} \lambdaesssim \frac 1t\|u_N\|_X + \sqrt{m(N)}\|\sqrt vf_x\|_{L^2},
\]
so from the spatial localization of \(f\) we obtain,
\[
\|v f_x\|_{L^2} + m(N)\|f_x\|_{L^2}\lambdaesssim \frac 1t\|u_N^\mr{ell}\|_X.
\]
For the remaining piece we use that for \(f = \chi_{\{z\approx - tm(N)\}}u_N\) the function \(Af_x\) is localized in the spatial region \(v <0 \) up to rapidly decaying tails to obtain a similar estimate,
\[
\|v f_x\|_{L^2} + m(N)\|f_x\|_{L^2}\lambdaesssim \frac 1t\|u_N^\mr{ell}\|_X.
\]
Combining these bounds with the fact that \(f\) is localized at frequencies \(\sim N\) up to rapidly decaying tails, we obtain the estimate \eqref{est:FDR2-Elliptic2}.
\end{proof}
\begin{rem}
We observe that we may combine the estimate \eqref{est:FDR2-Elliptic2} with the elementary low frequency estimate
\[
\|\partialartial_xu_{\lambdaeq t^{-\frac13}}\|_{L^2}\lambdaesssim t^{-\frac13}\|u\|_{L^2},
\]
to obtain the estimate for the elliptic piece
\begin{equation}\lambdaanglebel{est:EllipticGain}
\|\partialartial_xu^{\mr{ell}}\|_{L^2}\lambdaesssim t^{-\frac13}\|u\|_X.
\end{equation}
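To sketch how \eqref{est:EllipticGain} follows: from \eqref{est:FDR2-Elliptic2} and the frequency localization of \(u_N^\mr{ell}\), for each \(N\geq t^{-\frac13}\) we have
\[
\|\partial_xu_N^{\mr{ell}}\|_{L^2}\lesssim N\|u_N^{\mr{ell}}\|_{L^2}\lesssim \frac{\langle N\rangle}{tN^2}\|u_N\|_X,
\]
and summing over dyadic \(N\geq t^{-\frac13}\) (a heuristic summation; we do not track here how the \(X\)-norms of the pieces recombine) the right-hand side is dominated by the lowest frequencies, which contribute \(\sim t^{-1}\cdot t^{\frac23} = t^{-\frac13}\).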
Further, in the infinite depth case we have the corresponding estimate,
\begin{equation}
\|\partialartial_xu^{\mr{ell}}\|_{L^2}\lambdaesssim t^{-\frac12}\|u\|_X.
\end{equation}
\end{rem}
\subsection{Proof of Lemma~\ref{lem:FDR2}}\lambdaanglebel{sect:ProofPointwise}
We now apply the elliptic estimates of Lemma~\ref{lem:Elliptic} to prove Lemma~\ref{lem:FDR2}. As the estimates are linear in \(u\) we will assume that \(\|u\|_X = 1\). Throughout this section we use the notation \(v = t^{-1}z\).
\emph{Low frequencies.} We first consider the low frequency part \(u_{\lambdaeq t^{-\frac13}}\). We recall that at low frequency \(m(\xi)\approx \xi^2\) and hence the operator \(t^{-1}L_z = v - m(D_x) \approx v\) whenever \(v\gg t^{-\frac23}\) whereas \(t^{-1}L_z\approx \partialartial_x^2\) whenever \(v\lambdal t^{-\frac23}\).
If \(|v|< t^{-\frac23}\) we take \(f(t,x,y) = u_{< t^{-\frac13}}(t,x - \frac{1}{4t}y^2,y)\). We then apply Bernstein's inequality to obtain the bounds
\begin{gather*}
\|f\|_{L^2}\lambdaesssim \|u_{<t^{-\frac13}}\|_{L^2}\lambdaesssim 1,\qquad \|f_x\|_{L^2}\lambdaesssim \|\partialartial_xu_{<t^{-\frac13}}\|_{L^2}\lambdaesssim t^{-\frac13},\\ \|f_{yy}\|_{L^2}\lambdaesssim t^{-2}\|L_y^2\partialartial_x^2u_{<t^{-\frac13}}\|_{L^2}\lambdaesssim t^{-\frac73}.
\end{gather*}
Applying the Sobolev estimate as in Lemma~\ref{lem:ShortTimes} we obtain the estimate,
\[
\|u_{<t^{-\frac13}}\|_{L^\infty}\lambdaesssim t^{-\frac34}.
\]
Essentially identical estimates applied to \(f_x\) yield a similar bound,
\[
\|\partialartial_xu_{< t^{-\frac13}}\|_{L^\infty}\lambdaesssim t^{-\frac{13}{12}}.
\]
If instead \(|v|\geq t^{-\frac23}\) we dyadically decompose in space, taking \(\chi_M\) to localize to the spatial region \(\{|v|\sim M\}\) for each \(M>t^{-\frac23}\). We then decompose \(\chi_Mu_{\lambdaeq t^{-\frac13}}\) at the scale of the uncertainty principle as
\[
\chi_Mu_{< t^{-\frac13}} = \chi_Mu_{< (tM)^{-1}} + \sum\lambdaimits_{(tM)^{-1}\lambdaeq N< t^{-\frac13}}\chi_Mu_N.
\]
For any \(N<t^{-\frac13}\) we have the estimate
\[
\|v\partialartial_xu_N\|_{L^2}\lambdaesssim t^{-1}\lambdaeft(\|L_z\partialartial_xu\|_{L^2} + tNm(N)\|u\|_{L^2}\right) \lambdaesssim t^{-1},
\]
where we have used that \(Nm(N)\approx N^3\lambdaesssim t^{-1} \) whenever \(N<t^{-\frac13}\). As a consequence we obtain the bounds,
\[
\|\partialartial_x(\chi_Mu_{< (tM)^{-1}})\|_{L^2}\lambdaesssim (tM)^{-1},\qquad \|\partialartial_x(\chi_Mu_N)\|_{L^2}\lambdaesssim (tM)^{-1}.
\]
We then apply the Sobolev estimate \eqref{est:Sobolev} as before to
\[
f(t,x,y) = \chi_M(t^{-1}x)u_{< (tM)^{-1}}(t,x - \frac{1}{4t}y^2,y),\qquad f(t,x,y) = \chi_M(t^{-1}x)u_N(t,x - \frac{1}{4t}y^2,y),
\]
respectively to obtain the bounds,
\[
\|\chi_M u_{< (tM)^{-1}}\|_{L^\infty} + \|\chi_M u_N\|_{L^\infty}\lambdaesssim t^{-\frac12}(tM)^{-\frac34}.
\]
Replacing \(u\) by \(u_x\) we obtain similar bounds,
\[
\|\chi_M \partialartial_xu_{< (tM)^{-1}}\|_{L^\infty}\lambdaesssim t^{-\frac12}(tM)^{-\frac74},\qquad \|\chi_M \partialartial_xu_N\|_{L^\infty} \lambdaesssim t^{-\frac12}(tM)^{-\frac34}N.
\]
Summing over \(N\) we obtain the estimates,
\[
|\chi_Mu_{<t^{-\frac13}}|\lambdaesssim t^{-\frac34}(t^{\frac23}M)^{-\frac34}\lambdaog(t^{\frac23}M),\qquad |\chi_M\partialartial_xu_{<t^{-\frac13}}|\lambdaesssim t^{-\frac{13}{12}}(t^{\frac23}M)^{-\frac34},
\]
where the logarithmic loss arises due to summation over frequencies \((tM)^{-1}\lambdaeq N\lambdaeq t^{-\frac13}\) (see also the corresponding bound in \cite[Proposition~3.1]{2014arXiv1409.4487H}).
Combining these bounds with the estimate in the region for \(|v|<t^{-\frac23}\) we obtain the low frequency estimates
\begin{equation}
|u_{< t^{-\frac13}}|\lambdaesssim t^{-\frac34}\lambdaangle t^{\frac23} v\rangle^{-\frac34}\lambdaeft(1 + \lambdaog\lambdaangle t^{\frac23} v\rangle\right),\qquad |\partialartial_xu_{< t^{-\frac13}}|\lambdaesssim t^{-\frac{13}{12}}\lambdaangle t^{\frac23} v\rangle^{-\frac34}.\lambdaanglebel{est:FDR2-LF-2}
\end{equation}
\emph{Elliptic piece.} For the high frequency part of the elliptic piece we proceed similarly to the low frequency piece. For \(N\geq t^{-\frac13}\) we apply the Sobolev estimate \eqref{est:Sobolev} and the elliptic estimate \eqref{est:FDR2-Elliptic2} to \(f(t,x,y) = u_N^\mr{ell}(t,x - \frac{1}{4t}y^2,y)\) on dyadic spatial intervals (as for the low frequency piece) to obtain the pointwise bound,
\[
|u^\mr{ell}_N|\lambdaesssim t^{-\frac54}\min\{N^{-\frac32}\lambdaangle N\rangle^{\frac34},|v|^{-\frac34}\}.
\]
If \(|v|< t^{-\frac23}\) then we sum in \(N\) to obtain
\[
|u^\mr{ell}_{\geq t^{-\frac13}}|\lambdaesssim \sum\lambdaimits_{N\geq t^{-\frac13}}t^{-\frac54}N^{-\frac32}\lambdaangle N\rangle^{\frac34}\lambdaesssim t^{-\frac34}.
\]
If \(|v|\geq t^{-\frac23}\) then we decompose the sum as
\[
|u^\mr{ell}_{\geq t^{-\frac13}}|\lambdaesssim \sum\lambdaimits_{t^{-\frac13}\lambdaeq N\lambdaeq v^{\frac12}\lambdaangle v\rangle^{\frac12}} t^{-\frac54}|v|^{-\frac34} + \sum\lambdaimits_{N\geq v^{\frac12}\lambdaangle v\rangle^{\frac12}}t^{-\frac54}N^{-\frac32}\lambdaangle N\rangle^{\frac34} \lambdaesssim t^{-\frac34}(t^{\frac23}|v|)^{-\frac34}\lambdaeft(1 + \lambdaog (t^{\frac13}v^{\frac12}\lambdaangle v\rangle^{\frac12})\right).
\]
Combining these, we obtain the bound,
\[
|u^\mr{ell}_{\geq t^{-\frac13}}|\lambdaesssim t^{-\frac34}\lambdaangle t^{\frac23}v\rangle^{-\frac34}\lambdaeft(1 + \lambdaog\lambdaangle t^{\frac23}v\rangle\right).
\]
As \(u_N^\mr{ell}\) is localized at frequencies \(|\xi|\sim N\) up to rapidly decaying tails, we obtain a similar estimate for the derivative,
\[
|\partialartial_xu^\mr{ell}_N|\lambdaesssim t^{-\frac54}N\min\{N^{-\frac32}\lambdaangle N\rangle^{\frac34},|v|^{-\frac34}\},
\]
Summing over \(t^{-\frac13}\lambdaeq N<1\) we then obtain the bound,
\[
|\partialartial_xu^\mr{ell}_{t^{-\frac13}\lambdaeq \cdot <1}|\lambdaesssim t^{-\frac{13}{12}}\lambdaangle t^{\frac23}v\rangle^{-\frac14}.
\]
For \(N>1\) we may use the fact that \(\|\partialartial_x^4u_N\|_{L^2}\lambdaesssim 1\) in lieu of the elliptic estimate \eqref{est:FDR2-Elliptic2} to obtain the slight modification
\[
|\partialartial_xu^\mr{ell}_N|\lambdaesssim t^{-\frac12}N\min\{(tN)^{-\frac34},(t|v|)^{-\frac34},N^{-\frac94}\},
\]
from which we obtain the bound
\[
|\partialartial_xu^\mr{ell}_{>1}|\lambdaesssim t^{-\frac98}\lambdaangle t^{-\frac12}v\rangle^{-\frac5{12}},
\]
completing the proof of \eqref{est:FDR2-ELLIPTIC-2}.
\emph{Hyperbolic piece.} We define the phase function \(\partialhi\) as in \eqref{PHASE} and observe that if we apply the Sobolev estimate \eqref{est:Sobolev} to \(f(t,x,y) = e^{-i\partialhi}u_N^{\mr{hyp},+}(t,x - \frac{1}{4t}y^2,y)\) we obtain the estimate
\[
|u_N^{\mr{hyp},+}|\lambdaesssim t^{-\frac12}\|u_N\|_{L^2}^{\frac14}\|L_z^+u_N^{\mr{hyp},+}\|_{L^2}^{\frac12}\|(L_y\partialartial_x)^2u_N\|_{L^2}^{\frac14}.
\]
We then use the elliptic estimate \eqref{est:FDR2-Elliptic1} and that \(u_N^{\mr{hyp},+}\) is localized in the spatial region \(v\approx m(N)\) and at frequencies \(\sim N\) up to rapidly decaying tails to obtain
\[
|u_N^{\mr{hyp},+}|\lambdaesssim t^{-1}N^{-\frac34}\lambdaangle N\rangle^{-\frac12},\qquad |\partialartial_xu_N^{\mr{hyp},+}|\lambdaesssim t^{-1}N^{\frac14}\lambdaangle N\rangle^{-\frac12}
\]
If \(t^{-\frac13}\lambdaeq N<1\) then on the support of \(u_N^{\mr{hyp},+}\) we have \(v\approx N^2\) so using that the supports of the \(u_N^{\mr{hyp},+}\) are essentially disjoint we may sum to obtain
\[
|u_{t^{-\frac13}\lambdaeq \cdot<1}^{\mr{hyp},+}|\lambdaesssim t^{-1}v^{-\frac38},\qquad |\partialartial_xu_{t^{-\frac13}\lambdaeq \cdot<1}^{\mr{hyp},+}|\lambdaesssim t^{-1}v^{\frac18}.
\]
Similarly, if \(N\geq1\) we have \(v\approx N\) on the support of \(u_N^{\mr{hyp},+}\) and hence
\[
|u_{\geq 1}^{\mr{hyp},+}|\lambdaesssim t^{-1}v^{-\frac54},\qquad |\partialartial_xu_{\geq 1}^{\mr{hyp},+}|\lambdaesssim t^{-1}v^{-\frac14}.
\]
By combining these estimates we obtain the bound \eqref{est:FDR2-HYPERBOLIC-2}, which completes the proof of Lemma~\ref{lem:FDR2}.\qed
\section{Testing by wave packets}\lambdaanglebel{sect:TWP}
We now turn to the problem of proving the existence of global solutions to \eqref{eqn:cc-fd} using a bootstrap argument. We assume that for some \(T>1\) there exists a solution \(S(-t)u\in C([0,T];X(0))\) to \eqref{eqn:cc-fd} satisfying the bootstrap assumption,
\begin{equation}\lambdaanglebel{BS}
\sup\lambdaimits_{t\in[0,T]}\|u_x\|_{L^\infty}\lambdaeq \mathcal M\epsilon t^{-\frac12}\lambdaangle t\rangle^{-\frac12}.
\end{equation}
Applying the a priori bound \eqref{est:NRG} we obtain the estimate
\begin{equation}\lambdaanglebel{AP}
\|u\|_X\lambdaesssim \epsilon \lambdaangle t\rangle^{C\mathcal M\epsilon},
\end{equation}
where the constants are independent of \(\mathcal M\).
From the pointwise bounds proved in Lemma~\ref{lem:FDR2}, we see that the worst decay occurs in the region \(\{z\approx t\}\). As a consequence we define the time-dependent set
\[
\Sigma_t = \{v\in\mathbb R: t^{-\frac1{12}}< v < t^{\frac1{12}}\},
\]
so that the worst behavior will occur whenever \(t^{-1}z\in \Sigma_t\). If we take \(\chi_{\Sigma_t^c}\) to be a smooth bump function supported in the complement \(\Sigma_t^c = \mathbb R\backslash\Sigma_t\), we may then apply the estimates of Lemma~\ref{lem:FDR2} to obtain
\begin{equation}\lambdaanglebel{ImprovedDecay}
\|u_x\ \chi_{\Sigma_t^c}(t^{-1}z)\|_{L^\infty}\lambdaesssim t^{-\frac{97}{96}}\|u\|_X.
\end{equation}
The additional \(t\)-decay in the estimate \eqref{ImprovedDecay} leads to an improvement of \eqref{BS} for sufficiently large times. As a consequence it remains to consider improved pointwise bounds for \(u_x\) in the region \(\Sigma_t\).
\begin{figure}
\caption{The region \(\{\frac zt\in \Sigma_t\}\).}
\end{figure}
\subsection{Construction of the wave packets} Given a time \(t\geq 1\) and a velocity \(\mathbf v = (v_x,v_y)\in\mathbb R^2\) such that
\[
v = - \lambdaeft(v_x + \frac 14 v_y^2\right) \in \Sigma_t,
\]
we construct a wave packet adapted to the associated ray \(\Gamma_{\mathbf v} = \{(x,y) = t\mathbf v\}\) of the Hamiltonian flow by
\[
\Psi_{\mathbf v}(t,x,y) = \partialartial_x\lambdaeft(\frac1{i\partialartial_x\partialhi}e^{i\partialhi}\chi\lambdaeft(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y)\right)\right),
\]
where \(\chi\in C^\infty_0(\mathbb R^2)\) is a smooth, non-negative, real-valued, compactly supported function, localized near \(0\) in space and frequency at scale \(\lambdaesssim 1\), the phase \(\partialhi\) is defined as in \eqref{PHASE} and the scales
\[
\lambdaanglembda_z = t^{-\frac12}v^{-\frac14}\lambdaangle v\rangle^{\frac14},\qquad \lambdaanglembda_y = t^{-\frac12}v^{\frac14}\lambdaangle v\rangle^{\frac14}.
\]
For simplicity we normalize \(\int_{\mathbb R^2} \chi(z,y)\,dzdy = 1\). We also note that by a slight abuse of notation we consider \(v\) to be a fixed parameter (independent of \(t,z\)) in this section.
\begin{rem}
If our initial data is localized near the origin in space and at frequency \(\mathbf k_0 = (\xi_0,\eta_0)\) then the corresponding linear solution will be spatially localized along the ray \(\Gamma_{\mathbf v} = \{(x,y) = t\mathbf v\}\) of the Hamiltonian flow, where the group velocity
\[
\mathbf v = - \nabla\omega(\mathbf k_0) = (- m(\xi_0) - \xi_0^{-2}\eta_0^2,2\xi_0^{-1}\eta_0).
\]
We note that the frequency may then be written in terms of the velocity as
\[
\mathbf k_0 = \lambdaeft(m^{-1}(v),\frac12v_y m^{-1}(v)\right).
\]
If a linear solution is localized near the ray \(\Gamma_{\mathbf v}\) in space at scale \(\lambdaanglembda_z^{-1}\) in \(z\) and \(\lambdaanglembda_y^{-1}\) in \(y\) then from the uncertainty principle, the Fourier transform may be localized at most such that
\[
\lambdaeft|\xi - \xi_0\right|\lambdaesssim\lambdaanglembda_z,\qquad \lambdaeft|(\eta - \eta_0) - \frac{\eta_0}{\xi_0}(\xi - \xi_0)\right|\lambdaesssim \lambdaanglembda_y.
\]
In order for a function to be coherent on timescales \(\approx T\) we require that \(\omega(\mathbf k)\) may be well-approximated by its linearization, with errors of size \(\lambdal T^{-1}\). Computing the Taylor expansion of the dispersion relation \(\omega(\mathbf k)\) at frequency \(\mathbf k = \mathbf k_0\) we obtain
\[
\omega(\mathbf k) = \omega(\mathbf k_0) - \mathbf v\cdot(\mathbf k - \mathbf k_0) + \frac12(\mathbf k - \mathbf k_0)\cdot\nabla^2\omega(\mathbf k_0) (\mathbf k - \mathbf k_0) + \dots.
\]
With the above localization we calculate
\[
\lambdaeft|(\mathbf k - \mathbf k_0)\cdot\nabla^2\omega(\mathbf k_0) (\mathbf k - \mathbf k_0)\right|\lambdaesssim m'(\xi_0) \lambdaanglembda_z^2 + \xi_0^{-1} \lambdaanglembda_y^2.
\]
Thus we require,
\[
T\lambdaanglembda_z^2\lambdaesssim \frac1{m'(\xi_0)}\sim \frac{\lambdaangle v\rangle^{\frac12}}{v^{\frac12}},\qquad T\lambdaanglembda_y^2\lambdaesssim \xi_0\sim v^{\frac12}\lambdaangle v\rangle^{\frac12},
\]
which motivates the choice of scales.
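Concretely, with \(T\sim t\) and the scales \(\lambda_z,\lambda_y\) chosen above, one checks that
\[
t\lambda_z^2 = v^{-\frac12}\langle v\rangle^{\frac12}\sim \frac1{m'(\xi_0)},\qquad t\lambda_y^2 = v^{\frac12}\langle v\rangle^{\frac12}\sim \xi_0,
\]
so both quadratic terms in the Taylor expansion contribute frequency errors of size \(O(t^{-1})\), saturating the coherence requirement above.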
\end{rem}
\begin{rem}
In the infinite depth case \(h = \infty\) essentially identical reasoning yields the scales
\[
\lambdaanglembda_z = t^{-\frac12},\qquad \lambdaanglembda_y = t^{-\frac12}v^{\frac12}.
\]
\end{rem}
\begin{figure}
\caption{An illustration of the localization of the wave packet \(\Psi_{\mathbf v}\).}
\end{figure}
In order to clarify these heuristics we first make the following definition:
\begin{defn}
Given a fixed time \(t\geq1\), a velocity \(\mathbf v\in \mathbb R^2\) so that \(v\in \Sigma_t\) and a (possibly \(t,v\)-dependent) constant \(\Upsilon>0\), we say \(f\in WP(t,\mathbf v,\Upsilon)\) if \(f\in C^\infty_0(\mathbb R^2)\) is supported in the set \(\{\frac zt\in \tilde\Sigma_t\}\), where \(\tilde \Sigma_t\) is a slight dilation of \(\Sigma_t\), and for all \(\alpha,\beta,\mu,\nu\geq 0\) we have the estimate,
\begin{equation}
\lambdaeft|(z - tv)^\mu (y - tv_y)^\nu(\partialartial_x - i\partialartial_x\partialhi)^\alpha (\partialartial_xL_y)^\beta f\right|\lambdaesssim_{\alpha,\beta,\mu,\nu}\lambdaanglembda_z^{\alpha - \mu}\lambdaanglembda_y^{\beta - \nu}\Upsilon.
\end{equation}
\end{defn}
\noindent We note that if \(f\in WP(t,\mathbf v,\Upsilon)\) then it is localized in space near the ray \(\Gamma_{\mathbf v}\) and in frequency near the corresponding frequency \(\mathbf k = \lambdaeft(m^{-1}(v),\frac12v_y m^{-1}(v)\right)\) at the scale of uncertainty. Using this definition we may clarify the structure of the wave packet \(\Psi_{\mathbf v}\):
\begin{lem}\lambdaanglebel{lem:Microlocal}
For all times \(t\gg1\) sufficiently large and all \(\mathbf v\in \mathbb R^2\) such that \(v\in\Sigma_t\), the associated wave packet satisfies \(\Psi_{\mathbf v}\in WP(t,\mathbf v,1)\) and \(\mathcal L\Psi_{\mathbf v}\in WP(t,\mathbf v,t^{-1})\).
Further, writing \(\chi = \chi(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y))\), we have the decomposition
\begin{equation}\lambdaanglebel{LinearForm}
\begin{aligned}
e^{-i\partialhi}\mathcal L\Psi_{\mathbf v} &= - \partialartial_x\lambdaeft(\frac1{2t}(z - tv)\chi\right) - \partialartial_xL_y\lambdaeft(\frac1{4t^2}(y - tv_y)\chi\right) + \partialartial_x\lambdaeft(\frac12im'(\partialartial_x\partialhi)\partialartial_x\chi\right)\\
&\quad + (\partialartial_xL_y)^2\lambdaeft(\frac1{i4t^2\partialartial_x\partialhi}\chi\right) + \mb{err},
\end{aligned}
\end{equation}
where the error term \(\mb{err}\in WP(t,\mathbf v,t^{-\frac32}v^{-\frac34}\lambdaangle v\rangle^{-\frac14})\).
\end{lem}
\begin{proof}
We may write the wave packet in the form
\begin{equation}\lambdaanglebel{LeadingOrderDecomp}
e^{-i\partialhi}\Psi_{\mathbf v} = \chi\lambdaeft(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y)\right) + \frac{\lambdaanglembda_z}{i\partialartial_x\partialhi}\chi_z\lambdaeft(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y)\right).
\end{equation}
We recall that \(\partialartial_x\partialhi = m^{-1}(t^{-1}z)\) and a simple application of the Inverse Function Theorem yields the estimates
\begin{gather*}
m^{-1}(v)\sim v^{\frac12}\lambdaangle v\rangle^{\frac12},\qquad |\partialartial_vm^{-1}(v)|\lambdaesssim v^{-\frac12}\lambdaangle v\rangle^{\frac12},\\
|\partialartial_v^\alpha m^{-1}(v)|\lambdaesssim_k v^{-(\frac12 + \alpha)}\lambdaangle v\rangle^{-k},\quad \alpha\geq 2.
\end{gather*}
We then recall that if \((x,y)\in \operatorname{supp}\Psi_{\mathbf v}\) then \(|z - tv|\lambdaesssim \lambdaanglembda_z^{-1} \lambdaesssim t^{\frac12}v^{-\frac14}\lambdaangle v\rangle^{\frac14}\) so provided \(t\gg1\) we have
\[
m^{-1}(t^{-1}z)\sim m^{-1}(v)
\]
As a consequence we may differentiate to obtain
\[
\lambdaeft|\partialartial_x^\alpha\lambdaeft(\frac{\lambdaanglembda_z}{i\partialartial_x\partialhi}\right)\right|\lambdaesssim_\alpha t^{-(\frac12+\alpha)} v^{-(\frac34 + \alpha)}\lambdaangle v\rangle^{-\frac14}\lambdal \lambdaanglembda_z^\alpha,
\]
whenever \(\alpha\geq 0\), \(t\gg1\), \((x,y)\in \operatorname{supp}\Psi_{\mathbf v}\) and \(v\in \Sigma_t\). Differentiating \eqref{LeadingOrderDecomp} with respect to \(\partialartial_x\), \(\partialartial_xL_y\) and using the fact that \(\chi\) is compactly supported near \(0\), we obtain \(\Psi_{\mathbf v}\in WP(t,\mathbf v,1)\).
In order to calculate \(\mathcal L\Psi_{\mathbf v}\), we first define the Fourier multiplier
\[
M(\xi) = \xi^2\coth\xi - \xi,
\]
so that \(M(D_x) = i(\partialartial_x^2\mathcal T^{-1} - \partialartial_x)\) and \(m = \partialartial_\xi M\). Using the Taylor expansion of the symbol about the point \(\xi = \partialartial_x\partialhi\) we obtain
\begin{equation}\lambdaanglebel{NonlocalExpansion}
\begin{aligned}
M(D_x)\Psi_{\mathbf v} &= M(\partialartial_x\partialhi)\Psi_{\mathbf v} + m(\partialartial_x\partialhi)\lambdaeft(D_x - \partialartial_x\partialhi\right)\Psi_{\mathbf v} + \frac12m'(\partialartial_x\partialhi)\lambdaeft(D_x - \partialartial_x\partialhi\right)^2\Psi_{\mathbf v}\\
&\quad + \frac i{2t}\Psi_{\mathbf v} + \mb{err}_0,
\end{aligned}
\end{equation}
where we have used that \(\partialartial_x\partialhi = m^{-1}(t^{-1}z)\) so \(m'(\partialartial_x\partialhi)\partialartial_x^2\partialhi = -\frac1t\), and the error term \(\mb{err}_0\) may be written using the Fourier transform as
\begin{align*}
\mb{err}_0 &= \frac1{2\partiali}\int \lambdaeft(\frac12\int_0^1m''\lambdaeft(\partialartial_x\partialhi + \tau(\xi - \partialartial_x\partialhi)\right)(1 - \tau)^2(\xi - \partialartial_x\partialhi)^3\,d\tau\right)\hat\Psi_{\mathbf v}(t,\xi,\eta)e^{i(\xi x + \eta y)}\,d\xi d\eta.
\end{align*}
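(For completeness we record the computation behind the identity used above: differentiating \(\partial_x\phi = m^{-1}(t^{-1}z)\) in \(x\) and using \(\partial_xz = -1\) gives \(\partial_x^2\phi = -t^{-1}(m^{-1})'(t^{-1}z) = -\bigl(tm'(\partial_x\phi)\bigr)^{-1}\), and hence \(m'(\partial_x\phi)\partial_x^2\phi = -\frac1t\).)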
Using the facts that \(\Psi_{\mathbf v}\in WP(t,\mathbf v,1)\) and \(|m''(\xi)|\lambdaesssim_k \lambdaangle \xi\rangle^{-k}\) it is quickly observed that the error term satisfies \(\mb{err}_0\in WP(t,\mathbf v,t^{-\frac32}v^{-\frac34}\lambdaangle v\rangle^{-k})\).
As a consequence it remains to consider the approximate linear operator,
\[
\tilde{\mathcal L} = \partial_t - iM(\partial_x\phi) - im(\partial_x\phi)\left(D_x - \partial_x\phi\right) - \frac12im'(\partial_x\phi)\left(D_x - \partial_x\phi\right)^2 + \frac1{2t} + \partial_x^{-1}\partial_y^2,
\]
which satisfies
\(
\mathcal L\Psi_{\mathbf v} = \tilde{\mathcal L}\Psi_{\mathbf v} + \mb{err}_0.
\)
Next we change variables \((t,x,y)\mapsto (t,z,y)\) so that with \(\partialhi = \partialhi(t,z) = t\Phi(t^{-1}z)\) we may write the approximate linear operator \(\tilde{\mathcal L}\) as
\begin{align*}
\mathcal{\tilde L} = \lambdaeft(\partialartial_t - i\partialartial_t\partialhi\right) + \frac zt\lambdaeft(\partialartial_z - i\partialartial_z\partialhi\right) + \frac12m'(-\partialartial_z\partialhi)\lambdaeft(\partialartial_z - i\partialartial_z\partialhi\right)^2 + \frac1t - \partialartial_z^{-1}\partialartial_y^2 + \frac yt\partialartial_y.
\end{align*}
As a consequence, for a function \(f = f(t,z,y)\) we obtain
\begin{align*}
e^{-i\partialhi}\tilde{\mathcal L}\partialartial_z\lambdaeft(\frac{e^{i\partialhi}}{i\partialartial_z\partialhi}f\right) &= \lambdaeft(\partialartial_t + \frac zt\partialartial_z + \frac12m'(-\partialartial_z\partialhi)\partialartial_z^2 + \frac1t + \frac yt\partialartial_y\right)\lambdaeft(f + \frac1{i\partialartial_z\partialhi}\partialartial_zf\right) - \partialartial_y^2\lambdaeft(\frac1{i\partialartial_z\partialhi}f\right).
\end{align*}
Taking \(f(t,z,y) = \chi\lambdaeft(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y)\right)\) we calculate
\[
\partialartial_tf = - \frac1{2t}(z - tv)\partialartial_z\chi - \frac1{2t}(y - tv_y)\partialartial_y\chi - v\partialartial_z\chi - v_y\partialartial_y\chi,
\]
and similarly for \(\partialartial_zf\). Plugging these into the above expression we obtain
\begin{align*}
e^{-i\partialhi}\tilde{\mathcal L}\partialartial_z\lambdaeft(\frac{e^{i\partialhi}}{i\partialartial_z\partialhi}\chi\right) &= \partialartial_z\lambdaeft(\frac1{2t}(z - tv)\chi\right) + \partialartial_y\lambdaeft(\frac1{2t}(y - tv_y)\chi\right) + \partialartial_z\lambdaeft(\frac12m'(-\partialartial_z\partialhi)\partialartial_z\chi\right)\\
&\quad - \partialartial_y^2\lambdaeft(\frac1{i\partialartial_z\partialhi}\chi\right) + \mb{err}_1,
\end{align*}
where the error term is given by
\begin{align*}
\mb{err}_1 &= \partialartial_z\lambdaeft(\frac1{2t}(z - tv)\lambdaeft(\frac1{i\partialartial_z\partialhi}\partialartial_z\chi\right)\right) + \partialartial_y\lambdaeft(\frac1{2t}(y - tv_y)\lambdaeft(\frac1{i\partialartial_z\partialhi}\partialartial_z\chi\right)\right) \\
&\quad + \partialartial_z\lambdaeft(\frac12m'(-\partialartial_z\partialhi)\partialartial_z\lambdaeft(\frac1{i\partialartial_z\partialhi}\partialartial_z\chi\right)\right) + \frac12m''(- \partialartial_z\partialhi)\partialartial_z^2\partialhi \partialartial_z\lambdaeft(\chi + \frac1{i\partialartial_z\partialhi}\partialartial_z\chi\right)\\
&\quad + i\frac zt\frac{\partialartial_z^2\partialhi}{(\partialartial_z\partialhi)^2}\partialartial_z\chi,
\end{align*}
and we have used the fact that \(\partialartial_z\partialartial_t\partialhi = - \frac zt\partialartial_z^2\partialhi\).
Returning to the original variables, \((t,z,y)\mapsto (t,x,y)\), we obtain
\begin{align*}
e^{-i\partialhi}\tilde{\mathcal L}\Psi_{\mathbf v} &= - \partialartial_x\lambdaeft(\frac1{2t}(z - tv)\chi\right) - \partialartial_xL_y\lambdaeft(\frac1{4t^2}(y - tv_y)\chi\right)\\
&\quad + \partialartial_x\lambdaeft(\frac12im'(\partialartial_x\partialhi)\partialartial_x\chi\right) + (\partialartial_xL_y)^2\lambdaeft(\frac1{i4t^2\partialartial_x\partialhi}\chi\right) + \mb{err}_1,
\end{align*}
where we may use the bounds
\begin{gather*}
\partialartial_x\partialhi \sim v^{\frac12}\lambdaangle v\rangle^{\frac12},\qquad |\partialartial_x^2\partialhi|\lambdaesssim t^{-1}v^{-\frac12}\lambdaangle v\rangle^{\frac12}\\
|\partialartial_x^\alpha \partialhi|\lambdaesssim_k t^{-(1 + \alpha)}v^{\frac12 - \alpha}\lambdaangle v\rangle^{-k},\quad \alpha\geq2,
\end{gather*}
to show that whenever \(v\in \Sigma_t\) and \(t\gg1\), the error term \(\mb{err}_1\in WP(t,\mathbf v,t^{-\frac32}v^{-\frac34}\lambdaangle v\rangle^{-\frac14})\).
Finally, we observe that we may expand the leading order terms in this expression to obtain
\[
\tilde{\mathcal L}\Psi_{\mathbf v}\in WP(t,\mathbf v,t^{-1}),
\]
which completes the proof.
\end{proof}
\subsection{Testing by wave packets}
We recall that for a given velocity \(\mathbf v\) the wave packet has similar spatial localization to the hyperbolic part \(u^{\mr{hyp},+}_N\) of \(u\) localized at positive \(x\)-frequency \(N\sim m^{-1}(v)\). From the pointwise estimates of Lemma~\ref{lem:FDR2} we expect that in the region \((x,y)\approx t\mathbf v\) the leading order part of \(u(t,x,y)\) is given by \(u_N^{\mr{hyp},\partialm}(t,x,y)\). As a consequence, we should be able to recover the leading order behavior of \(u\) by testing it against \(\Psi_{\mathbf v}\).
This heuristic motivates the definition of the function
\[
\gamma(t,\mathbf v) = \int u_x\bar\Psi_{\mathbf v}\,dxdy.
\]
Due to the normalization that \(\int \chi\,dxdy = 1\) we then expect that
\[
u(t,t\mathbf v)\approx 2t^{-1}\lambdaangle v\rangle^{\frac12}\mathbb Re\lambdaeft(e^{i\partialhi}\gamma(t,\mathbf v)\right),
\]
where we note that \(t^{-1}\lambdaangle v\rangle^{\frac12} = \lambdaanglembda_z\lambdaanglembda_y\). To make this heuristic precise we prove the following lemma:
\begin{lem}\lambdaanglebel{lem:GammaBounds}
For \(t\gg1\) we have the estimate
\begin{equation}\lambdaanglebel{est:BasicGammaBound}
\|\lambdaangle v\rangle^{\frac12}\gamma\chi_{\Sigma_t}\|_{L^\infty_{\mathbf v}}\lambdaesssim t\|u_x\|_{L^\infty_{x,y}}.
\end{equation}
as well as the estimate for the difference,
\begin{equation}
\|(u_x(t,t\mathbf v) - 2t^{-1}\lambdaangle v\rangle^{\frac12}\mathbb Re(e^{i\partialhi}\gamma(t,\mathbf v)))\chi_{\Sigma_t}\|_{L^\infty_{\mathbf v}} \lambdaesssim t^{-\frac{13}{12}}\|u\|_X.\lambdaanglebel{est:DifferenceLInf}
\end{equation}
\end{lem}
\begin{proof}
The pointwise estimate \eqref{est:BasicGammaBound} follows from the fact that for \(v\in \Sigma_t\) we have
\[
\|\Psi_{\mathbf v}\|_{L^1_{x,y}}\lambdaesssim t\lambdaangle v\rangle^{-\frac12}.
\]
For the pointwise difference \eqref{est:DifferenceLInf} we first define
\[
u_{x,v}^{\mr{hyp},+} = \sum\lambdaimits_{m(N)\sim v}\partialartial_xu_N^{\mr{hyp},+},
\]
and use the pointwise bound \eqref{est:FDR2-ELLIPTIC-2} for the elliptic piece \(u^\mr{ell}\) as well as the spatial localization of \(u_N^\mr{hyp} = 2\mathbb Re u_N^{\mr{hyp},+}\) to obtain
\[
\lambdaeft|u_x(t,t\mathbf v) - 2\mathbb Re u_{x,v}^{\mr{hyp},+}(t,t\mathbf v)\right|\lambdaesssim t^{-\frac{13}{12}}\|u\|_X.
\]
Next we use the pointwise bound \eqref{est:FDR2-ELLIPTIC-2} for the elliptic piece to obtain
\[
|t^{-1}\langle v\rangle^{\frac12}\langle u_x^\mr{ell},\Psi_{\mathbf v}\rangle|\lesssim t^{-\frac{13}{12}}.
\]
We may then use the spatial localization of \(\Psi_{\mathbf v}\) to obtain
\[
|t^{-1}\lambdaangle v\rangle^{\frac12}\lambdaangle u^\mr{hyp}_x - u_{x,v}^\mr{hyp},\Psi_{\mathbf v}\rangle| \lambdaesssim_k t^{-k}\|u\|_X,
\]
and similarly, recalling that \(u_N^{\mr{hyp},-} = \chi_N^\mr{hyp} u_N^-\), we have
\[
|\langle u_{x,v}^{\mr{hyp},-} ,\Psi_{\mathbf v}\rangle |\lesssim \sum\limits_{m(N)\sim v}|\langle \partial_xu_N^-,\chi_N^\mr{hyp} \Psi_{\mathbf v}\rangle|\lesssim t^{-k}\|u\|_X,
\]
where the rapid decay follows from the fact that \(\chi_N^\mr{hyp}\Psi_{\mathbf v}\in WP(t,\mathbf v,1)\) is localized at \emph{positive} wavenumbers \(\sim m^{-1}(v)\) up to rapidly decaying tails at scale \(\lambdaanglembda_z\lambdaesssim t^{-\frac{23}{48}}\). Combining these bounds we obtain
\[
t^{-1}\lambdaangle v \rangle^{\frac12}\lambdaeft|\gamma(t,\mathbf v) - \lambdaangle u_{x,v}^{\mr{hyp},+},\Psi_{\mathbf v}\rangle\right| \lambdaesssim t^{-\frac{13}{12}}\|u\|_X.
\]
Thus it remains to consider the difference
\[
\mathfrak D = \lambdaeft|e^{-i\partialhi}u_{x,v}^{\mr{hyp},+}(t,t\mathbf v) - t^{-1}\lambdaangle v\rangle^{\frac12}\gamma(t,\mathbf v)\right|
\]
Next we define
\[
w(t,z,y) = e^{-i\partialhi}u_{x,v}^{\mr{hyp},+}(t,x,y),
\]
and write the difference as
\begin{align*}
\mathfrak D &= w(t,tv,tv_y) - t^{-1}\lambdaangle v\rangle^{\frac12}\int w(t,z,y)\chi(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y))\,dzdy\\
&= t^{-1}\lambdaangle v\rangle^{\frac12}\int \lambdaeft( w(t,tv,tv_y) - w(t,z,y) \right) \chi(\lambdaanglembda_z(z - tv),\lambdaanglembda_y(y - tv_y))\,dzdy
\end{align*}
Applying the elliptic estimate \eqref{est:FDR2-Elliptic1} with the frequency localization of \(u_{x,v}^{\mr{hyp},+}\) we obtain the estimates
\begin{gather*}
\|w_z\|_{L^2}\lesssim \|L_z^+u_{x,v}^{\mr{hyp},+}\|_{L^2}\lesssim t^{-1}v^{-\frac12}\langle v\rangle^{\frac12}\|u\|_X,\\
\|w_{yy}\|_{L^2}\lesssim t^{-2}\|(L_y\partial_x)^2u_{x,v}^{\mr{hyp},+}\|_{L^2}\lesssim t^{-2}v\langle v\rangle\|u\|_X,
\end{gather*}
so we may apply the Sobolev estimate \eqref{est:Holder} with \(\alpha = \frac14\) to \(w\) to obtain
\[
|w(t,tv,tv_y) - w(t,z,y)|\lesssim t^{-\frac54}v^{-\frac18}\langle v\rangle^{\frac78}\left(|z - tv|^{\frac14} + |y - tv_y|^{\frac14}\right)\|u\|_X.
\]
As a consequence we obtain the estimate
\[
|\mathfrak D|\lambdaesssim t^{-\frac98}v^{-\frac3{16}}\lambdaangle v\rangle^{\frac{15}{16}}\|u\|_X,
\]
which completes the proof of \eqref{est:DifferenceLInf}.
\end{proof}
\subsection{The ODE for \(\gamma\)}
From Lemma~\ref{lem:GammaBounds} we see that \(\gamma\) may be used to estimate the size of \(u_x\) up to errors that decay in time. In order to obtain bounds for \(\gamma\) we will treat \(\mathbf v\) as a fixed parameter and consider the ODE satisfied by \(\gamma\)
\begin{equation}\lambdaanglebel{ResonantODE}
\dot\gamma(t,\mathbf v) = \lambdaangle (uu_x)_x,\Psi_{\mathbf v}\rangle + \lambdaangle u_x,\mathcal L\Psi_{\mathbf v}\rangle.
\end{equation}
For the first of these terms we may use that there are no parallel resonances to show that at least one of the \(u\) terms must be elliptic and have improved decay. For the second term we use the expression \eqref{LinearForm} to see that to leading order \(\mathcal L\Psi_{\mathbf v}\) has a divergence-type structure so we may integrate by parts to obtain improved decay. As a consequence we obtain the following lemma:
\begin{lem}
If \(u\) is a solution to \eqref{eqn:cc-fd} then for \(t\gg1\) we have the estimate
\begin{equation}
\|\dot\gamma\chi_{\Sigma_t}\|_{L^\infty_{\mathbf v}} \lambdaesssim t^{-\frac{13}{12}} \|u\|_X\lambdaeft(1 + \|u\|_X\right).\lambdaanglebel{ODEestPwse}
\end{equation}
\end{lem}
\begin{proof}
We start by considering the nonlinear term appearing in \eqref{ResonantODE}. Integrating by parts we obtain
\[
\frac12\lambdaangle (u^2)_{xx},\Psi_{\mathbf v}\rangle = - \frac12\lambdaangle ((u^\mr{hyp})^2)_x,\partialartial_x\Psi_{\mathbf v}\rangle - \lambdaangle (u^\mr{hyp} u^\mr{ell})_x,\partialartial_x\Psi_{\mathbf v}\rangle - \frac12\lambdaangle ((u^\mr{ell})^2)_x,\partialartial_x\Psi_{\mathbf v}\rangle.
\]
For the second and third terms we may apply the pointwise bounds of Lemma~\ref{lem:FDR2} to obtain
\[
|\lambdaangle (u^\mr{hyp} u^\mr{ell})_x,\partialartial_x\Psi_{\mathbf v}\rangle| + |\lambdaangle ((u^\mr{ell})^2)_x,\partialartial_x\Psi_{\mathbf v}\rangle| \lambdaesssim t^{-\frac{13}{12}}\|u\|_{X}^2.
\]
For the remaining term we first use the spatial localization of \(\Psi_{\mathbf v}\) to replace \(u^\mr{hyp}\) by \(u^\mr{hyp}_v\), where we recall that
\[
u^\mr{hyp}_v = \sum\lambdaimits_{\substack{m(N)\sim v}} u_N^\mr{hyp}.
\]
Recalling the definition of \(u_N^\mr{hyp} = \chi_N^\mr{hyp} u_N\) we may write
\[
\lambdaangle (u^\mr{hyp}_v)^2,\Psi_{\mathbf v}\rangle = \sum\lambdaimits_{m(N_1),m(N_2)\sim v}\lambdaangle u_{N_1}u_{N_2},\chi_{N_1}^\mr{hyp}\chi_{N_2}^\mr{hyp}\Psi_{\mathbf v}\rangle.
\]
We observe that for sufficiently large \(t\gg1\) (independent of \(v\)) the function \(\Theta = \chi_{N_1}^\mr{hyp}\chi_{N_2}^\mr{hyp}\Psi_{\mathbf v}\in WP(t,\mathbf v,1)\) and hence is localized at frequency \(m^{-1}(v)\) up to rapidly decaying tails at scale \(\lambda_z\lesssim t^{- \frac{23}{48}}\). In particular, for \(t\gg1\) sufficiently large (independently of \(v\)) we have
\[
|P_{< \frac12m^{-1}(v)}\Theta |\lesssim_k t^{-k},\qquad |P_{\geq \frac32 m^{-1}(v)}\Theta|\lesssim_k t^{-k}.
\]
However, the product \(u_{N_1}u_{N_2}\) has compact Fourier support in neighborhoods of size \(O(\delta m^{-1}(v))\) about the frequencies \(0, \partialm 2m^{-1}(v)\). In particular, by choosing \(0<\delta\lambdal1\) sufficiently small (independently of \(v\)) we may ensure that
\[
P_{\frac12 m^{-1}(v)\lambdaeq \cdot < \frac32 m^{-1}(v)}(u_{N_1}u_{N_2}) = 0,
\]
and hence
\[
|\lambdaangle (u^\mr{hyp}_v)^2,\Psi_{\mathbf v}\rangle|\lambdaesssim t^{-k}\|u\|_X^2.
\]
To complete the estimate we consider the linear term. We first recall from Lemma~\ref{lem:Microlocal} that \(\mathcal L\Psi_{\mathbf v}\in WP(t,\mathbf v,t^{-1})\) and hence satisfies \(\|\lambdaangle v\rangle^{\frac12}\mathcal L\Psi_{\mathbf v}\|_{L^1_{x,y}}\lambdaesssim 1\). Estimating as in Lemma~\ref{lem:GammaBounds} we then obtain
\[
|\lambdaangle u_x^\mr{hyp} - u_{x,v}^{\mr{hyp},+},\mathcal L\Psi_{\mathbf v}\rangle|\lambdaesssim_k t^{-k}\|u\|_X.
\]
Next we recall the expression \eqref{LinearForm} for \(e^{-i\phi}\mathcal L\Psi_{\mathbf v}\). In particular, we may take \(w = e^{-i\phi}u_{x,v}^{\mr{hyp},+}(t,x,y)\) as before and integrate by parts to obtain
\begin{align*}
\lambdaangle u_{x,v}^{\mr{hyp},+},\mathcal L\Psi_{\mathbf v}\rangle &= \lambdaangle w_z,\frac1{2t}(z - tv)\chi\rangle + \lambdaangle w_y,\frac1{4t^2}(y - tv_y)\chi\rangle\\
&\quad - \lambdaangle w_z,\frac12im'(\partialartial_x\partialhi) \partialartial_x\chi\rangle + \lambdaangle w_{yy},\frac1{4it^2\partialartial_x\partialhi}\chi\rangle + \lambdaangle w,\mb{err}\rangle,
\end{align*}
where the error term \(\mb{err}\in WP(t,\mathbf v,t^{-\frac32}v^{-\frac34}\lambdaangle v\rangle^{-\frac14})\). Applying the \(L^2\)-estimates for \(w\) as in Lemma~\ref{lem:GammaBounds} and the pointwise estimate \eqref{est:FDR2-HYPERBOLIC-2} for the final term, we obtain the estimate \eqref{ODEestPwse}.
\end{proof}
\subsection{Proof of global existence}~
We now complete the proof of Theorem~\ref{thm:Main}. We choose \(\mathcal T_0\geq 1\); by taking \(\mathcal M\gg1\) sufficiently large and \(0<\epsilon\ll1\) sufficiently small, we may find a solution \(S(-t)u\in C([0,T];X(0))\) to \eqref{eqn:cc-fd} for some \(T\geq \mathcal T_0\). Next we assume that the bootstrap assumption \eqref{BS} holds on the interval \([0,T]\), from which we obtain the energy estimate \eqref{AP}.
Next we use the estimate \eqref{est:BasicGammaBound} to bound \(\gamma\) at time \(\mathcal T_0\) in terms of \(\|u_x\|_{L^\infty}\) and the Sobolev estimate \eqref{Starter410} to obtain
\[
|\gamma(\mathcal T_0,\mathbf v)|\lambdaesssim \epsilon \mathcal T_0^{C\mathcal M\epsilon}.
\]
We may then solve the ODE satisfied by \(\gamma\) on the time interval \([\mathcal T_0,T]\) using the estimate \eqref{ODEestPwse} to obtain
\[
\|\langle v\rangle^{\frac12}\gamma(t) \chi_{\Sigma_t}\|_{L^\infty_{\mathbf v}}\lesssim \epsilon \mathcal T_0^{C\mathcal M\epsilon} + \int_{\mathcal T_0}^t \|\langle v\rangle^{\frac12}\dot\gamma(s) \chi_{\Sigma_s}\|_{L^\infty_{\mathbf v}}\,ds\lesssim \epsilon \mathcal T_0^{C\mathcal M\epsilon} + \epsilon \mathcal T_0^{-\frac1{12} + 2C\mathcal M \epsilon},
\]
provided \(0<\epsilon\lambdal1\) is sufficiently small. We may then apply the estimate \eqref{est:DifferenceLInf} for the difference between \(u_x\) and \(2t^{-1}\lambdaangle v\rangle^{\frac12}\mathbb Re(e^{i\partialhi}\gamma)\) to obtain
\[
\|u_x\chi_{\Sigma_t}\|_{L^\infty}\lambdaesssim \epsilon t^{-1}\lambdaeft(\mathcal T_0^{C\mathcal M\epsilon} + \mathcal T_0^{-\frac1{12} + 2C\mathcal M \epsilon}\right).
\]
By choosing \(\mathcal M\gg1\) sufficiently large and \(0<\epsilon\lambdal1\) sufficiently small we may combine this with the estimate \eqref{ImprovedDecay} for \(u_x\) in the region \(\Sigma_t^c\) to obtain the bound
\[
\|u_x\|_{L^\infty}\lambdaeq \frac12\mathcal M\epsilon t^{-\frac12}\lambdaangle t\rangle^{-\frac12},
\]
which closes the bootstrap. The solution \(u\) then exists globally and satisfies the energy estimate \eqref{Fdn} and the pointwise estimate \eqref{est:PTWISEDECAY}.
\subsection{Proof of scattering}~
It remains to prove that our solution scatters in \(L^2\). As in~\cite{2014arXiv1409.4487H} we do not have scattering in the sense that \(\mathcal L u\in L^1_tL^2_{x,y}\), but we are able to construct a normal form correction to remove the worst bilinear interactions and show that \(S(-t)u(t)\) converges in \(L^2\) as \(t\rightarrow\infty\). We note that for translation invariant initial data the worst nonlinear interactions are the high-low interactions (see Appendix~\ref{app:IP}). However, the spatial localization ensures that these interactions can only occur on very short timescales, thus attenuating their effect. From the pointwise and elliptic estimates of Section~\ref{sect:KS} we see that the worst nonlinear interactions for spatially localized initial data are the high-high (hyperbolic) interactions, for which we may construct a well-defined normal form.
We first define the leading order part of \(u\) by
\[
w = P_{t^{-\frac16}<\cdot\lambdaeq t^{\frac1{12}}}u,
\]
and then have the following lemma:
\begin{lem}
For \(t\gg1\) we have the estimate
\begin{equation}\lambdaanglebel{Reduction}
\|uu_x - 2\mathbb Re(w^+w_x^+)\|_{L^2}\lambdaesssim t^{-\frac{97}{96}}\|u\|_X^2.
\end{equation}
\end{lem}
\begin{proof}
We start by using the estimate \eqref{ImprovedDecay} to reduce the estimate to the region \(\Sigma_t\),
\[
\|uu_x\chi_{\Sigma_t^c}\|_{L^2}\lambdaesssim \|u_x \chi_{\Sigma_t^c}\|_{L^\infty}\|u\|_{L^2}\lambdaesssim t^{-\frac{97}{96}}\|u\|_X^2.
\]
Next we use the pointwise estimates of Lemma~\ref{lem:FDR2} to reduce to the hyperbolic parts,
\[
\|(uu_x - u^\mr{hyp} u^\mr{hyp}_x) \chi_{\Sigma_t}\|_{L^2}\lesssim\|u_x^\mr{ell}\|_{L^\infty} \|u\|_{L^2} + \|u^\mr{ell}\chi_{\Sigma_t}\|_{L^\infty}\|u_x^\mr{hyp}\|_{L^2}\lesssim t^{-\frac{97}{96}}\|u\|_X^2.
\]
We observe that
\[
u^\mr{hyp} u_x^\mr{hyp} = 2\mathbb Re(u^{\mr{hyp},+}u_x^{\mr{hyp},+}) + \partialartial_x|u^{\mr{hyp},+}|^2,
\]
and that
\[
\partialartial_x|u^{\mr{hyp},+}|^2 = 2\mathop{\rm Im}\nolimits (\bar u^{\mr{hyp},+}L_z^+u^{\mr{hyp},+}),
\]
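(Indeed, writing \(L_z^+ = m^{-1}(t^{-1}z) + i\partial_x\) with \(m^{-1}(t^{-1}z)\) real-valued, we have \(\mathop{\rm Im}\nolimits(\bar u^{\mr{hyp},+}L_z^+u^{\mr{hyp},+}) = \mathop{\rm Re}\nolimits(\bar u^{\mr{hyp},+}\partial_xu^{\mr{hyp},+}) = \frac12\partial_x|u^{\mr{hyp},+}|^2\).)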
so applying the elliptic estimate \eqref{est:FDR2-Elliptic1} we obtain
\[
\|(u^\mr{hyp} u_x^\mr{hyp} - 2\mathbb Re(u^{\mr{hyp},+}u_x^{\mr{hyp},+}))\chi_{\Sigma_t}\|_{L^2} \lambdaesssim \|u^\mr{hyp} \chi_{\Sigma_t}\|_{L^\infty}\|L_z^+u^{\mr{hyp},+}\chi_\mr{hyp}^+\|_{L^2}\lambdaesssim t^{-\frac{97}{96}}\|u\|_X^2.
\]
Taking \(w^{\mr{hyp},+} = \sum\lambdaimits_N \chi_N^\mr{hyp} w_N^+\) as above we see that
\[
w^{\mr{hyp},+} \chi_{\Sigma_t} = u^{\mr{hyp},+} \chi_{\Sigma_t},
\]
and hence
\[
\|uu_x - 2\mathbb Re(w^{\mr{hyp},+} w_x^{\mr{hyp},+})\|_{L^2} \lambdaesssim t^{-\frac{97}{96}}\|u\|_X^2.
\]
Finally we may once again apply the pointwise estimates of Lemma~\ref{lem:FDR2} to obtain the bound
\[
\|w^+w_x^+ - w^{\mr{hyp},+}w_x^{\mr{hyp},+}\|_{L^2}\lambdaesssim t^{-\frac{97}{96}}\|u\|_X^2,
\]
which completes the proof.
\end{proof}
We now construct a normal form for the nonlinear term \(2\mathbb Re(w^+w_x^+)\). Here we essentially proceed as in Proposition~\ref{prop:NRG} and define a symmetric bilinear form \(B[u,v]\) with symbol
\[
b(\mathbf k_1,\mathbf k_2) = \frac{\xi_1 + \xi_2}{2\Omega(\mathbf k_1,\mathbf k_2)},
\]
where \(\Omega\) is the resonance function defined as in \eqref{ResonanceFunction}. By construction we have
\[
\mathcal L B[f,g] = \frac12(fg)_x + B[f,\mathcal Lg] + B[\mathcal L f,g].
\]
Further, we have the following lemma:
\begin{lem}
We have the estimates
\begin{align}
\|B[w^+,w^+]\|_{L^2} &\lambdaesssim t^{-\frac23}\|u\|_X^2,\lambdaanglebel{AMC}\\
\|B[w^+,\mathcal L w^+]\|_{L^2} &\lesssim t^{-\frac32}\|u\|_X^2(1 + \|u\|_X).\label{BMC}
\end{align}
\end{lem}
\begin{proof}
We note that here we need only consider high frequency outputs as \(\xi_1,\xi_2>0\) have the same sign. From the estimate \eqref{FDResonanceLB} we see that for \(0<\xi_1\lambdaesssim \xi_2\), the symbol \(b\) satisfies the bounds
\[
b(\mathbf k_1,\mathbf k_2)\lambdaesssim \frac{\lambdaangle \xi_2\rangle}{\xi_1\xi_2},
\]
and hence we may decompose
\[
B[u^+,v^+] = Q[\partialartial_x^{-1}u^+,v^+] + Q[v^+,\partialartial_x^{-1}u^+],
\]
where \(Q\) is given by
\[
Q[u^+,v^+] = \sum\lambdaimits_{N}B[u_{<N}^+,v_N^+].
\]
We may then verify that the corresponding symbol \(q\in \mathscr S\) and applying the Coifman-Meyer Theorem \eqref{CM} with the frequency localization of \(w^+\) we obtain the estimates
\begin{align*}
\|B[w^+,w^+]\|_{L^2}&\lesssim \|w^+\|_{L^\infty}\|\partial_x^{-1}w^+\|_{L^2},\\
\|B[w^+,\mathcal L w^+]\|_{L^2} &\lesssim \|w^+\|_{L^\infty}\|\partial_x^{-1}\mathcal Lw^+\|_{L^2}.
\end{align*}
For the estimate \eqref{AMC} we may use the pointwise estimates of Lemma~\ref{lem:FDR2} and the frequency localization to obtain
\[
\|w^+\|_{L^\infty}\lambdaesssim t^{-\frac34}\|u\|_X,\qquad \|\partialartial_x^{-1}w^+\|_{L^2}\lambdaesssim t^{\frac16}\|u\|_X.
\]
For the estimate \eqref{BMC} we instead compute
\[
\partialartial_x^{-1}\mathcal L w^+ = \frac12 P_{t^{-\frac16}<\cdot\lambdaeq t^{\frac1{12}}}^+(u^2) + \partialartial_x^{-1}[\partialartial_t,P_{t^{-\frac16}<\cdot\lambdaeq t^{\frac1{12}}}^+]u.
\]
Using the frequency localization we then obtain
\[
\|\partialartial_x^{-1}\mathcal L w^+\|_{L^2}\lambdaesssim \|u\|_{L^\infty}\|u\|_{L^2} + t^{-\frac56}\|u\|_{L^2}\lambdaesssim t^{-\frac34}\|u\|_X^2 + t^{-\frac56}\|u\|_X,
\]
which completes the proof.
\end{proof}
To complete the proof of scattering we apply the estimates \eqref{Reduction}, \eqref{BMC} with the energy estimate \eqref{Fdn} to obtain the bound
\[
\|\mathcal L(u - 2\mathbb Re B[w^+,w^+])\|_{L^2}\lambdaesssim t^{-\frac{97}{96} + C\epsilon}\epsilon^2.
\]
In particular, provided \(0<\epsilon\ll1\) is sufficiently small, the time integrability of the nonlinear interactions shows that \(S(-t)(u - 2\mathbb Re B[w^+,w^+])\) is Cauchy in \(L^2\) as \(t\to\infty\), and hence converges to some \(W\in L^2\) such that, for \(t\gg1\),
\[
\|S(-t) (u - 2\mathbb Re B[w^+,w^+]) - W\|_{L^2}\lambdaesssim t^{-\frac1{96} + C\epsilon}\epsilon^2.
\]
Applying the estimate \eqref{AMC} we have
\[
\|B[w^+,w^+]\|_{L^2}\lambdaesssim t^{-\frac34 + C\epsilon}\epsilon^2,
\]
and hence \(u\) satisfies the estimate \eqref{Fds}. Finally we note that \(\|W\|_{L^2} = \|u_0\|_{L^2}\) by conservation of mass.
\begin{appendix}
\section{Ill-posedness in Besov-type spaces}\lambdaanglebel{app:IP}
In this section we show that the infinite depth equation \eqref{eqn:cc} is ill-posed in (almost all) the natural Galilean-invariant, scale-invariant Besov-type refinements of \(\dot H^{\frac14,0}\) considered in \cite{2016arXiv160806730K}.
To define these spaces we make an almost orthogonal decomposition
\[
u = \sum_{N\in2^\mathbb Z}\sum_{k\in\mathbb Z}u_{N,k},
\]
where each \(u_{N,k}\) has Fourier-support in the trapezium
\[
\mathcal Q_{N,k} = \lambdaeft\{(\xi,\eta)\in\mathbb R^2: \frac12 N<|\xi|<2N,\ \lambdaeft|\frac\eta\xi - kN^{\frac12}\right| < \frac34 N^{\frac12}\right\}.
\]
We then define the space \(\mr{ell}l^q\mr{ell}l^p L^2\) with norm
\[
\|u\|_{\mr{ell}l^q\mr{ell}l^p L^2}^q = \sum\lambdaimits_{N\in 2^\mathbb Z} N^{\frac14 q}\lambdaeft(\sum\lambdaimits_{k\in \mathbb Z}\|u_{N,k}\|_{L^2}^p\right)^{\frac qp}.
\]
It is straightforward to verify that these spaces are indeed both scale-invariant and Galilean invariant by recalling that the Galilean shift \eqref{Galilean} corresponds to the map
\[
\hat u(t,\xi,\eta)\mapsto e^{-ic^2t\xi}e^{2ict\eta}\hat u(t,\xi,\eta - c\xi).
\]
Further, it is clear that when \(p = q = 2\) we have \(\mr{ell}l^2\mr{ell}l^2L^2 = \dot H^{\frac14,0}\). We remark that analogously to \cite[Theorem~1.4]{2016arXiv160806730K} we may show that \(\mr{ell}l^q\mr{ell}l^pL^2\) embeds continuously into the space of distributions whenever \(1\lambdaeq q\lambdaeq\infty\) and \(1\lambdaeq p<\frac43\) and that it contains the Schwartz functions for all \(p > 1\).
Using similar ideas to \cite{2016arXiv160806730K,MR1885293}, we then obtain the following ill-posedness result:
\begin{thm}\label{thm:IP}
Let \(1\leq q\leq\infty\) and \(1<p\leq\infty\). Then there does not exist a continuously embedded space \(X_T\subset C([-T,T]:\ell^q\ell^pL^2)\) so that for all \(\phi\in \ell^q\ell^pL^2\),
\begin{gather}
\|S_\infty(t)\phi\|_{X_T}\lesssim \|\phi\|_{\ell^q\ell^pL^2},\\
\left\|\int_0^tS_\infty(t - t')[u(t')\ \partial_xu(t')]\,dt'\right\|_{X_T}\lesssim \|u\|_{X_T}^2,
\end{gather}
where \(S_\infty(t)\) is the infinite depth linear propagator, defined as in \eqref{Propagator}.
In particular, for the infinite depth equation \eqref{eqn:cc}, the solution map \(u_0\mapsto u(t)\) (considered as a map on \(\ell^q\ell^pL^2\)) fails to be twice differentiable at \(u_0 = 0\).
\end{thm}
\begin{proof}
We proceed by contradiction. Suppose that such a space \(X_T\) does exist. Then for any \(\phi\in \ell^q\ell^pL^2\) and \(t\in[-T,T]\) we have the estimate
\begin{equation}\label{est:BoundToFail}
\left\|\int_0^tS_\infty(t - t')\partial_x[(S_\infty(t')\phi)^2]\,dt'\right\|_{\ell^q\ell^pL^2}\lesssim \|\phi\|_{\ell^q\ell^pL^2}^2.
\end{equation}
Our goal is to show that this estimate must fail for a suitable choice of \(\phi\). We note that as we will only work with \(O(1)\) choices of \(x\)-frequency, our argument is independent of the choice of \(q\).
We first choose low and high frequency parameters \(0<\delta\ll 1\ll N\), where for convenience we assume that both are dyadic numbers. We then define the high and low frequency sets by
\begin{align*}
E_\mr{high} &:= \left\{\mathbf k\in\mathbb R^2:- \frac14\delta<|\xi| - N<\frac14\delta,\ \left|\frac\eta\xi\right|< N^{\frac12}\right\},\\
E_\mr{low} &:= \left\{\mathbf k\in\mathbb R^2:-\frac14\delta<|\xi| - \delta<\frac14\delta,\ \left|\frac\eta\xi\right|< N^{\frac12}\right\}.
\end{align*}
We observe that
\[
|E_\mr{high}|\sim\delta N^{\frac32},\qquad |E_\mr{low}|\sim \delta^2N^{\frac12}.
\]
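For completeness we record the elementary computation behind these bounds: in either set \(\xi\) runs over a union of two intervals of total length \(\sim\delta\), while for each such \(\xi\) the variable \(\eta\) runs over an interval of length \(2N^{\frac12}|\xi|\), so that
\[
|E_\mr{high}|\sim \delta\cdot N^{\frac12}\cdot N = \delta N^{\frac32},\qquad
|E_\mr{low}|\sim \delta\cdot N^{\frac12}\cdot \delta = \delta^2 N^{\frac12}.
\]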
Further, an elementary algebraic calculation gives us that,
\begin{align*}
E_\mr{high} + E_\mr{low} &\subset \left\{\mathbf k\in\mathbb R^2:- \frac12\delta<|\xi| - N \mp \delta<\frac12\delta,\ \left|\frac\eta\xi\right|< 10 N^{\frac12}\right\},\\
E_\mr{high} + E_\mr{low}&\supset \left\{\mathbf k\in\mathbb R^2:- \frac12\delta<|\xi| - N \mp \delta<\frac12\delta,\ \left|\frac\eta\xi\right|< \frac1{10}N^{\frac12}\right\},
\end{align*}
as well as
\[
(E_\mr{high} + E_\mr{low})\cap (E_\mr{low} + E_\mr{low}) = \emptyset,\qquad (E_\mr{high} + E_\mr{low})\cap(E_\mr{high} + E_\mr{high}) = \emptyset.
\]
We define functions associated to the sets \(E_\mr{high},E_\mr{low}\) by
\[
\hat\phi_\mr{high} = \delta^{-\frac12}N^{-1}\mathbf1_{E_\mr{high}},\qquad \hat\phi_\mr{low} = \delta^{\frac1{2p}-\frac32}N^{-\frac1{2p}}\mathbf1_{E_\mr{low}},
\]
and take
\[
\phi = \phi_\mr{high} + \phi_\mr{low}.
\]
As the high frequency set satisfies \(E_\mr{high}\subset\mathcal Q_{N,0}\), we have the estimate
\begin{equation}\label{est:HiBound}
\|\phi_\mr{high}\|_{\ell^q\ell^pL^2}\sim 1.
\end{equation}
As the low frequency set satisfies
\(
E_\mr{low}\subset \bigcup_{|k|\leq \delta^{-\frac12}N^{\frac12}}\mathcal Q_{\delta,k},
\)
we obtain the low frequency bound
\begin{equation}\label{est:LoBound}
\|\phi_\mr{low}\|_{\ell^q\ell^pL^2}\sim 1.
\end{equation}
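These normalizations may be verified directly (we record the routine computation): as \(E_\mr{high}\) meets only boundedly many of the cells \(\mathcal Q_{N,k}\), Plancherel's theorem gives
\[
\|\phi_\mr{high}\|_{\ell^q\ell^pL^2}\sim N^{\frac14}\|\hat\phi_\mr{high}\|_{L^2} = N^{\frac14}\,\delta^{-\frac12}N^{-1}|E_\mr{high}|^{\frac12}\sim N^{\frac14}\,\delta^{-\frac12}N^{-1}\,\delta^{\frac12}N^{\frac34} = 1,
\]
while \(E_\mr{low}\) meets \(\sim\delta^{-\frac12}N^{\frac12}\) of the cells \(\mathcal Q_{\delta,k}\), each of which carries a comparable portion, of measure \(\sim\delta^{\frac52}\), of \(E_\mr{low}\), so that
\[
\|\phi_\mr{low}\|_{\ell^q\ell^pL^2}\sim \delta^{\frac14}\cdot\bigl(\delta^{-\frac12}N^{\frac12}\bigr)^{\frac1p}\cdot\delta^{\frac1{2p}-\frac32}N^{-\frac1{2p}}\cdot\delta^{\frac54}\sim 1.
\]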
Combining \eqref{est:HiBound} and \eqref{est:LoBound} with the fact that \(E_\mr{high}\cap E_\mr{low} = \emptyset\) we obtain
\begin{equation}
\|\phi\|_{\ell^q\ell^pL^2}\sim 1.
\end{equation}
Further, from the bounds on \(|E_\mr{high} + E_\mr{low}|\) and \(|E_\mr{low}|\) we obtain the convolution estimate
\[
\|\hat\phi_\mr{high}*\hat\phi_\mr{low}\|_{L^2} \gtrsim \delta^{\frac12 + \frac1{2p}}N^{\frac14 - \frac1{2p}}.
\]
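One way to obtain this bound is via the Cauchy-Schwarz inequality: the convolution \(\mathbf1_{E_\mr{high}}*\mathbf1_{E_\mr{low}}\) is supported in \(E_\mr{high}+E_\mr{low}\), a set of measure \(\lesssim\delta N^{\frac32}\) by the above, and has total integral \(|E_\mr{high}|\,|E_\mr{low}|\), so
\[
\|\hat\phi_\mr{high}*\hat\phi_\mr{low}\|_{L^2}
= \delta^{\frac1{2p}-2}N^{-1-\frac1{2p}}\,\|\mathbf1_{E_\mr{high}}*\mathbf1_{E_\mr{low}}\|_{L^2}
\gtrsim \delta^{\frac1{2p}-2}N^{-1-\frac1{2p}}\,\frac{|E_\mr{high}|\,|E_\mr{low}|}{|E_\mr{high}+E_\mr{low}|^{\frac12}}
\sim \delta^{\frac12+\frac1{2p}}N^{\frac14-\frac1{2p}}.
\]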
We now consider the left hand side of \eqref{est:BoundToFail}. Using the support properties of the sums of \(E_\mr{high} + E_\mr{high}\), \(E_\mr{low} + E_\mr{low}\) and \(E_\mr{high} + E_\mr{low}\) we obtain the lower bound
\[
\left\|\int_0^tS_\infty(t - s)\partial_x[(S_\infty(s)\phi)^2]\,ds\right\|_{\ell^q\ell^pL^2} \gtrsim N^{\frac54}\left\|\int_0^t\int_{\mathbf k = \mathbf k_1 + \mathbf k_2}e^{is\Omega(\mathbf k_1,\mathbf k_2)}\hat\phi_\mr{high}(\mathbf k_1)\hat\phi_\mr{low}(\mathbf k_2)\,d\mathbf k_1\,ds\right\|_{L^2},
\]
where the resonance function \(\Omega = \Omega_\infty\) is defined as in \eqref{ResonanceInf}. If \(\mathbf k_1\in E_\mr{high}\), \(\mathbf k_2\in E_\mr{low}\) and \(\mathbf k = \mathbf k_1 + \mathbf k_2\) then provided \(N\gg 1\) we obtain the estimate
\[
|\Omega(\mathbf k_1,\mathbf k_2)|\lesssim N\delta.
\]
If we choose \(\delta = N^{-(1 + \epsilon)}\) then for \(|t|\sim 1\) we obtain
\[
\left|\int_0^t e^{it'\Omega(\mathbf k_1,\mathbf k_2)}\,dt'\right| = \left|\frac{e^{it\Omega(\mathbf k_1,\mathbf k_2)} - 1}{i\Omega(\mathbf k_1,\mathbf k_2)}\right| \gtrsim 1 + O(N^{-\epsilon}).
\]
As \(\hat\phi_\mr{high},\hat\phi_\mr{low}\geq0\), for \(N\gg1\) we obtain
\[
\left\|\int_0^t\int_{\mathbf k = \mathbf k_1 + \mathbf k_2}e^{is\Omega(\mathbf k_1,\mathbf k_2)}\hat\phi_\mr{high}(\mathbf k_1)\hat\phi_\mr{low}(\mathbf k_2)\,d\mathbf k_1\,ds\right\|_{L^2}\gtrsim \|\hat\phi_\mr{high}*\hat\phi_\mr{low}\|_{L^2},
\]
and hence
\[
\left\|\int_0^tS_\infty(t - s)\partial_x[(S_\infty(s)\phi)^2]\,ds\right\|_{\ell^q\ell^pL^2}\gtrsim N^{1 - \frac1p}N^{- \epsilon(\frac12 + \frac1{2p})}.
\]
If \(p>1\) then by choosing \(0<\epsilon\ll1\) sufficiently small we may take \(N\rightarrow+\infty\) to obtain a contradiction.
\end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{On finite factors of centralizers of parabolic subgroups in Coxeter groups\footnotetext{MSC2000: 20F55 (primary), 20E34 (secondary)}}
\begin{abstract}
It has been known that the centralizer $Z_W(W_I)$ of a parabolic subgroup $W_I$ of a Coxeter group $W$ is a split extension of a naturally defined reflection subgroup by a subgroup defined by a $2$-cell complex $\mathcal{Y}$.
In this paper, we study the structure of $Z_W(W_I)$ further and show that, if $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$, then every element of the finite irreducible components of the inner factor is fixed by a natural action of the fundamental group of $\mathcal{Y}$.
This property has an application to the isomorphism problem in Coxeter groups.
\end{abstract}
\section{Introduction}
\label{sec:intro}
A pair $(W,S)$ of a group $W$ and its (possibly infinite) generating set $S$ is called a \emph{Coxeter system} if $W$ admits the following presentation
\begin{displaymath}
W=\langle S \mid (st)^{m(s,t)}=1 \mbox{ for all } s,t \in S \mbox{ with } m(s,t)<\infty \rangle \enspace,
\end{displaymath}
where $m \colon (s,t) \mapsto m(s,t) \in \{1,2,\dots\} \cup \{\infty\}$ is a symmetric mapping in $s,t \in S$ with the property that we have $m(s,t)=1$ if and only if $s=t$.
A group $W$ is called a \emph{Coxeter group} if $(W,S)$ is a Coxeter system for some $S \subseteq W$.
Since Coxeter systems and some associated objects, such as root systems, appear frequently in various topics of mathematics, algebraic or combinatorial properties of Coxeter systems and those associated objects have been investigated very well, forming a long history and establishing many beautiful theories (see e.g., \cite{Hum} and references therein).
For example, it has been well known that, given an arbitrary Coxeter system $(W,S)$, the mapping $m$ by which the above group presentation defines the same group $W$ is uniquely determined.
In recent decades, not only the properties of a Coxeter group $W$ associated to a specific generating set $S$, but also the group-theoretic properties of an arbitrary Coxeter group $W$ itself have been studied well.
One of the recent main topics in the study of group-theoretic properties of Coxeter groups is the \emph{isomorphism problem}, that is, the problem of determining which of the Coxeter groups are isomorphic to each other as abstract groups.
In other words, the problem is to investigate the possible \lq\lq types'' of generating sets $S$ for a given Coxeter group $W$.
For example, it has been known that for a Coxeter group $W$ in certain classes, the set of reflections $S^W := \{wsw^{-1} \mid w \in W \mbox{ and } s \in S\}$ associated to a possible generating set $S$ of $W$ (as a Coxeter group) is the same for every such $S$, that is, independent of the choice of $S$ (see e.g., \cite{Bah}).
A Coxeter group $W$ having this property is called \emph{reflection independent}.
One of the simplest nontrivial examples of a Coxeter group which is not reflection independent is the Weyl group of type $G_2$ (or the finite Coxeter group of type $I_2(6)$) with two simple reflections $s,t$, which admits another generating set $\{s,ststs,(st)^3\}$ of type $A_1 \times A_2$ involving an element $(st)^3$ that is not a reflection with respect to the original generating set.
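This example can also be checked by a direct computation; the following short script (an illustrative numerical sketch using NumPy, not part of the mathematical argument) realizes $s,t$ as reflections of the Euclidean plane and verifies that $\{s,ststs,(st)^3\}$ consists of involutions, generates the whole group of order $12$, and satisfies the relations of type $A_1 \times A_2$.
\begin{verbatim}
import numpy as np

s = np.array([[1.0, 0.0], [0.0, -1.0]])         # reflection across the x-axis
t = np.array([[np.cos(np.pi/3),  np.sin(np.pi/3)],
              [np.sin(np.pi/3), -np.cos(np.pi/3)]])   # reflection across the
                                                      # line at angle pi/6

def order(g, bound=20):
    p = np.eye(2)
    for k in range(1, bound + 1):
        p = p @ g
        if np.allclose(p, np.eye(2)):
            return k
    raise ValueError("order exceeds bound")

assert order(s @ t) == 6                         # so <s,t> has type I_2(6)

u1, u2, u3 = s, s @ t @ s @ t @ s, np.linalg.matrix_power(s @ t, 3)
assert order(u1) == order(u2) == order(u3) == 2  # three involutions
assert order(u1 @ u2) == 3                       # <u1,u2> has type A_2
assert np.allclose(u3, -np.eye(2))               # (st)^3 = -id is central

# the closure of {u1, u2, u3} has 12 elements, i.e. it is all of W
elems, frontier = {tuple(np.eye(2).ravel())}, [np.eye(2)]
while frontier:
    g = frontier.pop()
    for h in (u1, u2, u3):
        key = tuple(np.round(g @ h, 6).ravel())
        if key not in elems:
            elems.add(key)
            frontier.append(g @ h)
print(len(elems))                                # prints 12
\end{verbatim}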
One of the main branches of the isomorphism problem in Coxeter groups is to determine the possibilities of a group isomorphism between two Coxeter groups which preserves the sets of reflections (with respect to some specified generating sets).
Such an isomorphism is called \emph{reflection-preserving}.
In a recent study by the author of this paper, it is revealed that some properties of the centralizers $Z_W(r)$ of reflections $r$ in a Coxeter group $W$ (with respect to a generating set $S$) can be applied to the study of reflection independent Coxeter groups and reflection-preserving isomorphisms.
An outline of the idea is as follows.
First, by a general result on the structures of the centralizers of parabolic subgroups \cite{Nui11} or the normalizers of parabolic subgroups \cite{Bri-How} in Coxeter groups applied to the case of a single reflection, we have a decomposition $Z_W(r) = \langle r \rangle \times (W^{\perp r} \rtimes Y_r)$, where $W^{\perp r}$ denotes the subgroup generated by all the reflections except $r$ itself that commute with $r$, and $Y_r$ is a subgroup isomorphic to the fundamental group of a certain graph associated to $(W,S)$.
The above-mentioned general results also give a canonical presentation of $W^{\perp r}$ as a Coxeter group.
Then the unique maximal reflection subgroup (i.e., subgroup generated by reflections) of $Z_W(r)$ is $\langle r \rangle \times W^{\perp r}$.
Now suppose that $W^{\perp r}$ has no finite irreducible components.
In this case, the maximal reflection subgroup of $Z_W(r)$ has only one finite irreducible component, that is $\langle r \rangle$.
Now it can be shown that, if the image $f(r)$ of $r$ by a group isomorphism $f$ from $W$ to another Coxeter group $W'$ is not a reflection with respect to a generating set of $W'$, then the finite irreducible components of the unique maximal reflection subgroup of the centralizer of $f(r)$ in $W'$ have more elements than $\langle r \rangle$, which is a contradiction.
Hence, in such a case of $r$, the image of $r$ by any group isomorphism from $W$ to another Coxeter group is always a reflection.
See the author's preprint \cite{Nui_ref} for more detailed arguments.
As we have seen in the previous paragraph, it is worthy to look for a class of Coxeter groups $W$ for which the above subgroup $W^{\perp r}$ of the centralizer $Z_W(r)$ of each reflection $r$ has no finite irreducible components.
The aim of this paper is to establish a tool for finding Coxeter groups having the desired property.
The main theorem (in a special case) of this paper can be stated as follows:
\begin{quote}
\textbf{Main Theorem (in a special case).}
Let $r \in W$ be a reflection, and let $s_{\gamma}$ be a generator of $W^{\perp r}$ (as a Coxeter group) which belongs to a finite irreducible component of $W^{\perp r}$.
Then $s_{\gamma}$ commutes with every element of $Y_r$.
(See the previous paragraph for the notations.)
\end{quote}
By virtue of this result, to show that $W^{\perp r}$ has no finite irreducible components, it suffices to find (by using the general structural results in \cite{Nui11} or \cite{Bri-How}) for each generator $s_{\gamma}$ of $W^{\perp r}$ an element of $Y_r$ that does not commute with $s_{\gamma}$.
A detailed argument along this strategy is given in the preprint \cite{Nui_ref}.
In fact, the main theorem (Theorem \ref{thm:YfixesWperpIfin}) of this paper is not only proven for the above-mentioned case of single reflection $r$, but also generalized to the case of centralizers $Z_W(W_I)$ of parabolic subgroups $W_I$ generated by some subsets $I \subseteq S$, with the property that $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$.
(We notice that there exists a counterexample when the assumption on $I$ is removed; see Section \ref{sec:counterexample} for details.)
In the generalized statement, the group $W^{\perp r}$ is replaced naturally with the subgroup of $W$ generated by all the reflections except those in $I$ that commute with every element of $I$, while the group $Y_r$ is replaced with a subgroup of $W$ isomorphic to the fundamental group of a certain $2$-cell complex defined in \cite{Nui11}.
We emphasize that, although the general structures of these subgroups of $Z_W(W_I)$ have been described in \cite{Nui11} (or \cite{Bri-How}), the main theorem of this paper is still far from being trivial; moreover, to the author's best knowledge, no other results on the structures of the centralizers $Z_W(W_I)$ which are in a significantly general form and involve more detailed information than the general structural results \cite{Bri-How,Nui11} have been known in the literature.
The paper is organized as follows.
In Section \ref{sec:Coxetergroups}, we summarize some fundamental properties and definitions for Coxeter groups.
In Section \ref{sec:properties_centralizer}, we summarize some properties of the centralizers of parabolic subgroups relevant to our argument in the following sections, which have been shown in some preceding works (mainly in \cite{Nui11}).
In Section \ref{sec:main_result}, we give the statement of the main theorem of this paper (Theorem \ref{thm:YfixesWperpIfin}), and give a remark on its application to the isomorphism problem in Coxeter groups (also mentioned in a paragraph above).
The proof of the main theorem is divided into two main steps: First, Section \ref{sec:proof_general} presents some auxiliary results which do not require the assumption, put in the main theorem, on the subset $I$ of $S$ that $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$.
Then, based on the results in Section \ref{sec:proof_general}, Section \ref{sec:proof_special} deals with the special case as in the main theorem that $I$ has no such irreducible components, and completes the proof of the main theorem.
The proof of the main theorem makes use of the list of positive roots given in Section \ref{sec:Coxetergroups} several times.
Finally, in Section \ref{sec:counterexample}, we describe in detail a counterexample of our main theorem when the assumption that $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$ is removed.
\paragraph*{Acknowledgments.}
The author would like to express his deep gratitude to everyone who helped him, especially to Professor Itaru Terada who was the supervisor of the author during the graduate course in which a part of this work was done, and to Professor Kazuhiko Koike, for their invaluable advice and encouragement.
The author would also like to thank the anonymous referee for the valuable comments, especially for the suggestion to reduce the size of the counterexample shown in Section \ref{sec:counterexample}, which was originally of larger size.
A part of this work was supported by JSPS Research Fellowship (No.~16-10825).
\section{Coxeter groups}
\label{sec:Coxetergroups}
The basics of Coxeter groups summarized here are found in \cite{Hum} unless otherwise noticed.
For some omitted definitions, see also \cite{Hum} or the author's preceding paper \cite{Nui11}.
\subsection{Basic notions}
\label{sec:defofCox}
A pair $(W,S)$ of a group $W$ and its (possibly infinite) generating set $S$ is called a \emph{Coxeter system}, and $W$ is called a \emph{Coxeter group}, if $W$ admits the following presentation
\begin{displaymath}
W=\langle S \mid (st)^{m(s,t)}=1 \mbox{ for all } s,t \in S \mbox{ with } m(s,t)<\infty \rangle \enspace,
\end{displaymath}
where $m \colon (s,t) \mapsto m(s,t) \in \{1,2,\dots\} \cup \{\infty\}$ is a symmetric mapping in $s,t \in S$ with the property that we have $m(s,t)=1$ if and only if $s=t$.
Let $\Gamma$ denote the \emph{Coxeter graph} of $(W,S)$, which is a simple undirected graph with vertex set $S$ in which two vertices $s,t \in S$ are joined by an edge with label $m(s,t)$ if and only if $m(s,t) \geq 3$ (by usual convention, the label is omitted when $m(s,t)=3$; see Figure \ref{fig:finite_irreducible_Coxeter_groups} below for example).
If $\Gamma$ is connected, then $(W,S)$ is called \emph{irreducible}.
Let $\ell$ denote the length function of $(W,S)$.
For $w,u \in W$, we say that $u$ is a \emph{right divisor} of $w$ if $\ell(w) = \ell(wu^{-1}) + \ell(u)$.
For each subset $I \subseteq S$, the subgroup $W_I := \langle I \rangle$ of $W$ generated by $I$ is called a \emph{parabolic subgroup} of $W$.
Let $\Gamma_I$ denote the Coxeter graph of the Coxeter system $(W_I,I)$.
For two subsets $I,J \subseteq S$, we say that $I$ is \emph{adjacent to} $J$ if an element of $I$ is joined by an edge with an element of $J$ in the Coxeter graph $\Gamma$.
We say that $I$ is \emph{apart from} $J$ if $I \cap J = \emptyset$ and $I$ is not adjacent to $J$.
For the terminologies, we often abbreviate a set $\{s\}$ with a single element of $S$ to $s$ for simplicity.
\subsection{Root systems and reflection subgroups}
\label{sec:rootsystem}
Let $V$ denote the \emph{geometric representation space} of $(W,S)$, which is an $\mathbb{R}$-linear space equipped with a basis $\Pi = \{\alpha_s \mid s \in S\}$ and a $W$-invariant symmetric bilinear form $\langle \,,\, \rangle$ determined by
\begin{displaymath}
\langle \alpha_s, \alpha_t \rangle =
\begin{cases}
-\cos(\pi / m(s,t)) & \mbox{if } m(s,t) < \infty \enspace; \\
-1 & \mbox{if } m(s,t) = \infty \enspace,
\end{cases}
\end{displaymath}
where $W$ acts faithfully on $V$ by $s \cdot v=v-2\langle \alpha_s, v\rangle \alpha_s$ for $s \in S$ and $v \in V$.
Then the \emph{root system} $\Phi=W \cdot \Pi$ consists of unit vectors with respect to the bilinear form $\langle \,,\, \rangle$, and $\Phi$ is the disjoint union of $\Phi^+ := \Phi \cap \mathbb{R}_{\geq 0}\Pi$ and $\Phi^- := -\Phi^+$ where $\mathbb{R}_{\geq 0}\Pi$ signifies the set of nonnegative linear combinations of elements of $\Pi$.
Elements of $\Phi$, $\Phi^+$, and $\Phi^-$ are called \emph{roots}, \emph{positive roots}, and \emph{negative roots}, respectively.
For a subset $\Psi \subseteq \Phi$ and an element $w \in W$, define
\begin{displaymath}
\Psi^+ := \Psi \cap \Phi^+ \,,\, \Psi^- := \Psi \cap \Phi^- \,,\,
\Psi[w] :=\{\gamma \in \Psi^+ \mid w \cdot \gamma \in \Phi^-\} \enspace.
\end{displaymath}
It is well known that the length $\ell(w)$ of $w$ is equal to $|\Phi[w]|$.
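As an illustration of these notions, the following script (an added numerical sketch using NumPy and a rounding tolerance, not part of the argument) builds the geometric representation of the finite Coxeter system of type $B_3$ (with the labelling $m(r_1,r_2)=3$, $m(r_2,r_3)=4$ of Section \ref{sec:longestelement} below) from its Coxeter matrix, enumerates the positive roots, and checks that $\ell(w)=|\Phi[w]|$ for the reduced word $w=r_1r_2r_1r_3$ of length $4$.
\begin{verbatim}
import numpy as np

m = np.array([[1, 3, 2],          # Coxeter matrix of type B_3
              [3, 1, 4],
              [2, 4, 1]])
B = -np.cos(np.pi / m)            # bilinear form <alpha_i, alpha_j>
n = 3
refl = []                         # matrices of r_1, r_2, r_3 in the basis Pi
for i in range(n):
    M = np.eye(n)
    M[i, :] -= 2 * B[i, :]        # r_i . v = v - 2<alpha_i, v> alpha_i
    refl.append(M)

# the root system Phi as the orbit of the simple roots
roots = {tuple(e) for e in np.eye(n)}
frontier = [np.eye(n)[i] for i in range(n)]
while frontier:
    v = frontier.pop()
    for M in refl:
        key = tuple(np.round(M @ v, 6))
        if key not in roots:
            roots.add(key)
            frontier.append(M @ v)
pos = [np.array(r) for r in roots if min(r) > -1e-9]
print(len(pos))                   # 9 positive roots, so |Phi| = 18

# ell(w) = |Phi[w]| for the reduced word w = r_1 r_2 r_1 r_3
w = refl[0] @ refl[1] @ refl[0] @ refl[2]
print(sum(1 for g in pos if max(w @ g) < 1e-9))   # prints 4
\end{verbatim}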
For an element $v=\sum_{s \in S}c_s\alpha_s$ of $V$, define the \emph{support} $\mathrm{Supp}\,v$ of $v$ to be the set of all $s \in S$ with $c_s \neq 0$.
For a subset $\Psi$ of $\Phi$, define the support $\mathrm{Supp}\,\Psi$ of $\Psi$ to be the union of $\mathrm{Supp}\,\gamma$ over all $\gamma \in \Psi$.
For each $I \subseteq S$, define
\begin{displaymath}
\Pi_I := \{\alpha_s \mid s \in I\} \subseteq \Pi \,,\, V_I := \mathrm{span}\,\Pi_I \subseteq V \,,\, \Phi_I := \Phi \cap V_I \enspace.
\end{displaymath}
It is well known that $\Phi_I$ coincides with the root system $W_I \cdot \Pi_I$ of $(W_I,I)$.
We notice the following well-known fact:
\begin{lem}
\label{lem:support_is_irreducible}
The support of any root $\gamma \in \Phi$ is irreducible.
\end{lem}
\begin{proof}
Note that $\gamma \in \Phi_I = W_I \cdot \Pi_I$, where $I= \mathrm{Supp}\,\gamma$.
On the other hand, it follows by induction on the length of $w$ that, for any $w \in W_I$ and $s \in I$, the support of $w \cdot \alpha_s$ is contained in the irreducible component of $I$ containing $s$.
Hence the claim follows.
\end{proof}
For a root $\gamma=w \cdot \alpha_s \in \Phi$, let $s_\gamma := wsw^{-1}$ be the \emph{reflection} along $\gamma$, which acts on $V$ by $s_\gamma \cdot v=v-2 \langle \gamma, v \rangle \gamma$ for $v \in V$.
For any subset $\Psi \subseteq \Phi$, let $W(\Psi)$ denote the \emph{reflection subgroup} of $W$ generated by $\{s_{\gamma} \mid \gamma \in \Psi\}$.
It was shown by Deodhar \cite{Deo_refsub} and by Dyer \cite{Dye} that $W(\Psi)$ is a Coxeter group.
To determine their generating set $S(\Psi)$ for $W(\Psi)$, let $\Pi(\Psi)$ denote the set of all \lq\lq simple roots'' $\gamma \in (W(\Psi) \cdot \Psi)^+$ in the \lq\lq root system'' $W(\Psi) \cdot \Psi$ of $W(\Psi)$, that is, all the $\gamma$ for which any expression $\gamma=\sum_{i=1}^{r}c_i\beta_i$ with $c_i>0$ and $\beta_i \in (W(\Psi) \cdot \Psi)^+$ satisfies that $\beta_i=\gamma$ for every index $i$.
Then the set $S(\Psi)$ is given by
\begin{displaymath}
S(\Psi) := \{s_\gamma \mid \gamma \in \Pi(\Psi)\} \enspace.
\end{displaymath}
We call $\Pi(\Psi)$ the \emph{simple system} of $(W(\Psi),S(\Psi))$.
Note that the \lq\lq root system'' $W(\Psi) \cdot \Psi$ and the simple system $\Pi(\Psi)$ for $(W(\Psi),S(\Psi))$ have several properties that are similar to the usual root systems $\Phi$ and simple systems $\Pi$ for $(W,S)$; see e.g., Theorem 2.3 of \cite{Nui11} for the detail.
In particular, we have the following result:
\begin{thm}
[{e.g., \cite[Theorem 2.3]{Nui11}}]
\label{thm:reflectionsubgroup_Deodhar}
Let $\Psi \subseteq \Phi$, and let $\ell_\Psi$ be the length function of $(W(\Psi),S(\Psi))$.
Then for $w \in W(\Psi)$ and $\gamma \in (W(\Psi) \cdot \Psi)^+$, we have $\ell_\Psi(ws_\gamma)<\ell_\Psi(w)$ if and only if $w \cdot \gamma \in \Phi^-$.
\end{thm}
We say that a subset $\Psi \subseteq \Phi^+$ is a \emph{root basis} if for each pair $\beta,\gamma \in \Psi$, we have
\begin{displaymath}
\begin{cases}
\langle \beta,\gamma \rangle=-\cos(\pi/m) & \mbox{if } s_\beta s_\gamma \mbox{ has order } m<\infty \enspace;\\
\langle \beta,\gamma \rangle \leq -1 & \mbox{if } s_\beta s_\gamma \mbox{ has infinite order}.
\end{cases}
\end{displaymath}
For example, it follows from Theorem \ref{thm:conditionforrootbasis} below that the simple system $\Pi(\Psi)$ of $(W(\Psi),S(\Psi))$ is a root basis for any $\Psi \subseteq \Phi$.
For two root bases $\Psi_1,\Psi_2 \subseteq \Phi^+$, we say that a mapping from $\Psi_1 = \Pi(\Psi_1)$ to $\Psi_2 = \Pi(\Psi_2)$ is an isomorphism if it induces an isomorphism from $S(\Psi_1)$ to $S(\Psi_2)$.
We show some properties of root bases:
\begin{thm}
[{\cite[Theorem 4.4]{Dye}}]
\label{thm:conditionforrootbasis}
Let $\Psi \subseteq \Phi^+$.
Then we have $\Pi(\Psi)=\Psi$ if and only if $\Psi$ is a root basis.
\end{thm}
\begin{prop}
[{\cite[Corollary 2.6]{Nui11}}]
\label{prop:fintyperootbasis}
Let $\Psi \subseteq \Phi^+$ be a root basis with $|W(\Psi)|<\infty$.
Then $\Psi$ is a basis of a positive definite subspace of $V$ with respect to the bilinear form $\langle \,,\, \rangle$.
\end{prop}
\begin{prop}
[{\cite[Proposition 2.7]{Nui11}}]
\label{prop:finitesubsystem}
Let $\Psi \subseteq \Phi^+$ be a root basis with $|W(\Psi)|<\infty$, and $U= \mathrm{span}\,\Psi$.
Then there exist an element $w \in W$ and a subset $I \subseteq S$ satisfying that $|W_I|<\infty$ and $w \cdot (U \cap \Phi^+)=\Phi_I^+$.
Moreover, the action of this $w$ maps $U \cap \Pi$ into $\Pi_I$.
\end{prop}
\subsection{Finite parabolic subgroups}
\label{sec:longestelement}
We say that a subset $I \subseteq S$ is of \emph{finite type} if $|W_I|<\infty$.
The finite irreducible Coxeter groups have been classified as summarized in \cite[Chapter 2]{Hum}.
Here we determine a labelling $r_1,r_2,\dots,r_n$ (where $n = |I|$) of elements of an irreducible subset $I \subseteq S$ of each finite type in the following manner, where the values $m(r_i,r_j)$ not listed here are equal to $2$ (see Figure \ref{fig:finite_irreducible_Coxeter_groups}):
\begin{description}
\item[Type $A_n$ ($1 \leq n < \infty$):] $m(r_i,r_{i+1})=3$ ($1 \leq i \leq n-1$);
\item[Type $B_n$ ($2 \leq n < \infty$):] $m(r_i,r_{i+1})=3$ ($1 \leq i \leq n-2$) and $m(r_{n-1},r_n)=4$;
\item[Type $D_n$ ($4 \leq n < \infty$):] $m(r_i,r_{i+1})=m(r_{n-2},r_n)=3$ ($1 \leq i \leq n-2$);
\item[Type $E_n$ ($n=6,7,8$):] $m(r_1,r_3)=m(r_2,r_4)=m(r_i,r_{i+1})=3$ ($3 \leq i \leq n-1$);
\item[Type $F_4$:] $m(r_1,r_2)=m(r_3,r_4)=3$ and $m(r_2,r_3)=4$;
\item[Type $H_n$ ($n=3,4$):] $m(r_1,r_2)=5$ and $m(r_i,r_{i+1})=3$ ($2 \leq i \leq n-1$);
\item[Type $I_2(m)$ ($5 \leq m < \infty$):] $m(r_1,r_2)=m$.
\end{description}
We call the above labelling $r_1,\dots,r_n$ the \emph{standard labelling} of $I$.
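For reference, the rules above translate into Coxeter matrices as follows; the snippet below (an illustrative sketch, not used elsewhere in the paper) encodes the listed conventions, with all unspecified off-diagonal entries equal to $2$ and with the exceptional ranks restricted as indicated.
\begin{verbatim}
def coxeter_matrix(typ, n, m=None):
    """Coxeter matrix of the finite irreducible type `typ` of rank `n`,
    following the standard labelling; `m` is the label for I_2(m)."""
    M = [[1 if i == j else 2 for j in range(n)] for i in range(n)]
    def put(i, j, v):                    # 1-based labels r_i, r_j
        M[i-1][j-1] = M[j-1][i-1] = v
    if typ == 'A':
        for i in range(1, n): put(i, i+1, 3)
    elif typ == 'B':
        for i in range(1, n-1): put(i, i+1, 3)
        put(n-1, n, 4)
    elif typ == 'D':
        for i in range(1, n-1): put(i, i+1, 3)
        put(n-2, n, 3)
    elif typ == 'E':                     # n = 6, 7, 8
        put(1, 3, 3); put(2, 4, 3)
        for i in range(3, n): put(i, i+1, 3)
    elif typ == 'F':                     # n = 4
        put(1, 2, 3); put(2, 3, 4); put(3, 4, 3)
    elif typ == 'H':                     # n = 3, 4
        put(1, 2, 5)
        for i in range(2, n): put(i, i+1, 3)
    elif typ == 'I':                     # I_2(m), n = 2
        put(1, 2, m)
    return M

print(coxeter_matrix('B', 3))            # [[1, 3, 2], [3, 1, 4], [2, 4, 1]]
\end{verbatim}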
\begin{figure}
\caption{Coxeter graphs of the finite irreducible Coxeter groups (here we write $i$ instead of $r_i$ for each vertex)}
\label{fig:finite_irreducible_Coxeter_groups}
\end{figure}
Let $w_0(I)$ denote the (unique) longest element of a finite parabolic subgroup $W_I$.
It is well known that $w_0(I)^2 = 1$ and $w_0(I) \cdot \Pi_I = -\Pi_I$.
Now let $I$ be irreducible of finite type.
If $I$ is of type $A_n$ ($n \geq 2$), $D_k$ ($k$ odd), $E_6$ or $I_2(m)$ ($m$ odd), then the automorphism of the Coxeter graph $\Gamma_I$ of $W_I$ induced by (the conjugation action of) $w_0(I)$ is the unique nontrivial automorphism of $\Gamma_I$.
Otherwise, $w_0(I)$ lies in the center $Z(W_I)$ of $W_I$ and the induced automorphism of $\Gamma_I$ is trivial, in which case we say that $I$ is of \emph{$(-1)$-type}.
Moreover, if $W_I$ is finite but not irreducible, then $w_0(I)=w_0(I_1) \dotsm w_0(I_k)$ where the $I_i$ are the irreducible components of $I$.
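As a quick illustration of this dichotomy for irreducible $I$ (added for the reader's convenience): if $I=\{s,t\}$ is of type $A_2$, then $w_0(I)=sts$ and
\begin{displaymath}
w_0(I) \cdot \alpha_s = sts \cdot \alpha_s = -\alpha_t \enspace,
\end{displaymath}
so the induced automorphism of $\Gamma_I$ is the nontrivial one exchanging $s$ and $t$; whereas if $I=\{s,t\}$ is of type $B_2 = I_2(4)$, then $w_0(I)=(st)^2$ acts on $V_I$ as $-\mathrm{id}$, hence $w_0(I)$ is central in $W_I$ and $I$ is of $(-1)$-type.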
\section{Known properties of the centralizers}
\label{sec:properties_centralizer}
This section summarizes some known properties (mainly proven in \cite{Nui11}) of the centralizers $Z_W(W_I)$ of parabolic subgroups $W_I$ in Coxeter groups $W$, especially those relevant to the argument in this paper.
First, we fix an abstract index set $\Lambda$ with $|\Lambda| = |I|$, and define $S^{(\Lambda)}$ to be the set of all injective mappings $x \colon \Lambda \to S$.
For $x \in S^{(\Lambda)}$ and $\lambda \in \Lambda$, we put $x_\lambda = x(\lambda)$; thus $x$ may be regarded as a duplicate-free \lq\lq $\Lambda$-tuple'' $(x_\lambda)=(x_\lambda)_{\lambda \in \Lambda}$ of elements of $S$.
For each $x \in S^{(\Lambda)}$, let $[x]$ denote the image of the mapping $x$; $[x] = \{x_{\lambda} \mid \lambda \in \Lambda\}$.
In the following argument, we fix an element $x_I \in S^{(\Lambda)}$ with $[x_I] = I$.
We define
\begin{displaymath}
C_{x,y} := \{w \in W \mid \alpha_{x_\lambda}=w \cdot \alpha_{y_\lambda} \mbox{ for every } \lambda \in \Lambda\} \mbox{ for } x,y \in S^{(\Lambda)} \enspace.
\end{displaymath}
Note that $C_{x,y} \cdot C_{y,z} \subseteq C_{x,z}$ and $C_{x,y}{}^{-1} = C_{y,x}$ for $x,y,z \in S^{(\Lambda)}$.
Now we define
\begin{displaymath}
w \ast y_{\lambda} := x_{\lambda} \mbox{ for } x,y \in S^{(\Lambda)}, w \in C_{x,y} \mbox{ and } \lambda \in \Lambda \enspace,
\end{displaymath}
therefore we have $w \cdot \alpha_s = \alpha_{w \ast s}$ for any $w \in C_{x,y}$ and $s \in [y]$.
(This $\ast$ can be interpreted as the conjugation action of elements of $C_{x,y}$ to the elements of $[y]$.)
Moreover, we define
\begin{displaymath}
w \ast y := x \mbox{ for } x,y \in S^{(\Lambda)} \mbox{ and } w \in C_{x,y}
\end{displaymath}
(this $\ast$ can be interpreted as the diagonal action on the $\Lambda$-tuples).
We define $C_I = C_{x_I,x_I}$, therefore we have
\begin{displaymath}
C_I = \{w \in W \mid w \cdot \alpha_s=\alpha_s \mbox{ for every } s \in I\} \enspace,
\end{displaymath}
which is a normal subgroup of $Z_W(W_I)$.
To describe generators of $C_I$, we introduce some notations.
For subsets $J,K \subseteq S$, let $J_{\sim K}$ denote the set of elements of $J \cup K$ that belong to the same connected component of $\Gamma_{J \cup K}$ as an element of $K$.
Now for $x \in S^{(\Lambda)}$ and $s \in S \smallsetminus [x]$ for which $[x]_{\sim s}$ is of finite type, there exists a unique $y \in S^{(\Lambda)}$ for which the element
\begin{displaymath}
w_x^s := w_0([x]_{\sim s})w_0([x]_{\sim s} \smallsetminus \{s\})
\end{displaymath}
belongs to $C_{y,x}$.
In this case, we define
\begin{displaymath}
\varphi(x,s) := y \enspace,
\end{displaymath}
therefore $\varphi(x,s) = w_x^s \ast x$ in the above notations.
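To illustrate these definitions (a small added example): let $(W,S)$ be of type $A_3$ with the standard labelling $r_1,r_2,r_3$, let $\Lambda=\{1\}$, and take $x=(r_1)$ and $s=r_2$. Then $[x]_{\sim s}=\{r_1,r_2\}$ is of finite type, and
\begin{displaymath}
w_x^s = w_0(\{r_1,r_2\})\,w_0(\{r_1\}) = (r_1r_2r_1)r_1 = r_1r_2 \,,\, w_x^s \cdot \alpha_{r_1} = \alpha_{r_2} \enspace,
\end{displaymath}
so $w_x^s \in C_{y,x}$ and $\varphi(x,s)=y$ for $y=(r_2)$.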
We have the following result:
\begin{prop}
[{see \cite[Theorem 3.5(iii)]{Nui11}}]
\label{prop:generator_C}
Let $x,y \in S^{(\Lambda)}$ and $w \in C_{x,y}$.
Then there are a finite sequence $z_0 = y,z_1,\dots,z_{n-1},z_n = x$ of elements of $S^{(\Lambda)}$ and a finite sequence $s_0,s_1,\dots,s_{n-1}$ of elements of $S$ satisfying that $s_i \not\in [z_i]$, $[z_i]_{\sim s_i}$ is of finite type and $\varphi(z_i,s_i) = z_{i+1}$ for each index $0 \leq i \leq n-1$, and we have $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0}$.
\end{prop}
For subsets $J,K \subseteq S$, define
\begin{displaymath}
\Phi_J^{\perp K} := \{\gamma \in \Phi_J \mid \langle \gamma,\alpha_s \rangle = 0 \mbox{ for every } s \in K\} \,,\, W_J^{\perp K} := W(\Phi_J^{\perp K})
\end{displaymath}
(see Section \ref{sec:rootsystem} for notations).
Then $(W_J^{\perp K},R^{J,K})$ is a Coxeter system with root system $\Phi_J^{\perp K}$ and simple system $\Pi^{J,K}$, where
\begin{displaymath}
R^{J,K} := S(\Phi_J^{\perp K}) \,,\, \Pi^{J,K} := \Pi(\Phi_J^{\perp K})
\end{displaymath}
(see \cite[Section 3.1]{Nui11}).
In the notations, the symbol $J$ will be omitted when $J=S$; hence we have
\begin{displaymath}
W^{\perp I}=W_S^{\perp I}=\langle \{s_\gamma \mid \gamma \in \Phi^{\perp I}\} \rangle \enspace.
\end{displaymath}
On the other hand, we define
\begin{displaymath}
Y_{x,y} := \{w \in C_{x,y} \mid w \cdot (\Phi^{\perp [y]})^+ \subseteq \Phi^+\} \mbox{ for } x,y \in S^{(\Lambda)} \enspace.
\end{displaymath}
Note that $Y_{x,y} = \{w \in C_{x,y} \mid (\Phi^{\perp [x]})^+=w \cdot (\Phi^{\perp [y]})^+\}$ (see \cite[Section 3.1]{Nui11}).
Note also that $Y_{x,y} \cdot Y_{y,z} \subseteq Y_{x,z}$ and $Y_{x,y}{}^{-1} = Y_{y,x}$ for $x,y,z \in S^{(\Lambda)}$.
Now we define $Y_I = Y_{x_I,x_I}$, therefore we have
\begin{displaymath}
Y_I = \{w \in C_I \mid (\Phi^{\perp I})^+ = w \cdot (\Phi^{\perp I})^+\} \enspace.
\end{displaymath}
We have the following results:
\begin{prop}
[{see \cite[Lemma 4.1]{Nui11}}]
\label{prop:charofBphi}
For $x \in S^{(\Lambda)}$ and $s \in S \smallsetminus [x]$, the three conditions are equivalent:
\begin{enumerate}
\item $[x]_{\sim s}$ is of finite type, and $\varphi(x,s) = x$;
\item $[x]_{\sim s}$ is of finite type, and $\Phi^{\perp [x]}[w_x^s] \neq \emptyset$;
\item $\Phi_{[x] \cup \{s\}}^{\perp [x]} \neq \emptyset$.
\end{enumerate}
If these three conditions are satisfied, then we have $\Phi^{\perp [x]}[w_x^s]=(\Phi_{[x] \cup \{s\}}^{\perp [x]})^+=\{\gamma(x,s)\}$ for a unique positive root $\gamma(x,s)$ satisfying $s_{\gamma(x,s)}=w_x^s$.
\end{prop}
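For a small example of this situation (added for illustration): let $(W,S)$ be of type $B_2$ with $S=\{r_1,r_2\}$ and $m(r_1,r_2)=4$, and take $x=(r_1)$ and $s=r_2$. Then $[x]_{\sim s}=S$ is of finite type and
\begin{displaymath}
w_x^s = w_0(\{r_1,r_2\})\,w_0(\{r_1\}) = (r_1r_2)^2r_1 = r_2r_1r_2 \enspace,
\end{displaymath}
which fixes $\alpha_{r_1}$; hence $\varphi(x,s)=x$, and the three conditions above hold with $\gamma(x,s)=\alpha_{r_1}+\sqrt{2}\,\alpha_{r_2}$, the unique positive root orthogonal to $\alpha_{r_1}$, for which indeed $s_{\gamma(x,s)}=r_2r_1r_2=w_x^s$.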
\begin{prop}
\label{prop:factorization_C}
Let $x,y \in S^{(\Lambda)}$.
\begin{enumerate}
\item \label{item:prop_factorization_C_decomp}
(See \cite[Theorem 4.6(i)(iv)]{Nui11}.)
The group $C_{x,x}$ admits a semidirect product decomposition $C_{x,x} = W^{\perp [x]} \rtimes Y_{x,x}$.
Moreover, if $w \in Y_{x,y}$, then the conjugation action by $w$ defines an isomorphism $u \mapsto wuw^{-1}$ of Coxeter systems from $(W^{\perp [y]},R^{[y]})$ to $(W^{\perp [x]},R^{[x]})$.
\item \label{item:prop_factorization_C_generator_Y}
(See \cite[Theorem 4.6(ii)]{Nui11}.)
Let $w \in Y_{x,y}$.
Then there are a finite sequence $z_0 = y,z_1,\dots,z_{n-1},z_n = x$ of elements of $S^{(\Lambda)}$ and a finite sequence $s_0,s_1,\dots,s_{n-1}$ of elements of $S$ satisfying that $z_{i+1} \neq z_i$, $s_i \not\in [z_i]$, $[z_i]_{\sim s_i}$ is of finite type and $w_{z_i}^{s_i} \in Y_{z_{i+1},z_i}$ for each index $0 \leq i \leq n-1$, and we have $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0}$.
\item \label{item:prop_factorization_C_generator_perp}
(See \cite[Theorem 4.13]{Nui11}.)
The generating set $R^{[x]}$ of $W^{\perp [x]}$ consists of elements of the form $w s_{\gamma(y,s)} w^{-1}$ satisfying that $y \in S^{(\Lambda)}$, $w \in Y_{x,y}$ and $\gamma(y,s)$ is a positive root as in the statement of Proposition \ref{prop:charofBphi} (hence $[y]_{\sim s}$ is of finite type and $\varphi(y,s) = y$).
\end{enumerate}
\end{prop}
\begin{prop}
[{see \cite[Proposition 4.8]{Nui11}}]
\label{prop:Yistorsionfree}
For any $x \in S^{(\Lambda)}$, the group $Y_{x,x}$ is torsion-free.
\end{prop}
For the structure of the entire centralizer $Z_W(W_I)$, a general result (Theorem 5.2 of \cite{Nui11}) implies the following proposition in a special case (a proof of the proposition from Theorem 5.2 of \cite{Nui11} is straightforward by noticing the fact that, under the hypothesis of the following proposition, the group $\mathcal{A}$ defined in the last paragraph before Theorem 5.2 of \cite{Nui11} is trivial and hence the group $B_I$ used in Theorem 5.2 of \cite{Nui11} coincides with $Y_I$):
\begin{prop}
[{see \cite[Theorem 5.2]{Nui11}}]
\label{prop:Z_for_special_case}
If every irreducible component of $I$ of finite type is of $(-1)$-type (see Section \ref{sec:longestelement} for the terminology), then we have $Z_W(W_I) = Z(W_I) \times (W^{\perp I} \rtimes Y_I)$.
\end{prop}
We also present an auxiliary result, which will be used later:
\begin{lem}
[{see \cite[Lemma 3.2]{Nui11}}]
\label{lem:rightdivisor}
Let $w \in W$ and $J,K \subseteq S$, and suppose that $w \cdot \Pi_J \subseteq \Pi$ and $w \cdot \Pi_K \subseteq \Phi^-$.
Then $J \cap K=\emptyset$, the set $J_{\sim K}$ is of finite type, and $w_0(J_{\sim K})w_0(J_{\sim K} \smallsetminus K)$ is a right divisor of $w$ (see Section \ref{sec:defofCox} for the terminology).
\end{lem}
\section{Main results}
\label{sec:main_result}
In this section, we state the main results of this paper, and give some relevant remarks.
The proof will be given in the following sections.
The main results deal with the relations between the \lq\lq finite part'' of the reflection subgroup $W^{\perp I}$ and the subgroup $Y_I$ of the centralizer $Z_W(W_I)$.
In general, for any Coxeter group $W$, the product of the finite irreducible components of $W$ is called the \emph{finite part} of $W$; here we write it as $W_{\mathrm{fin}}$.
Then, since $W^{\perp I}$ is a Coxeter group (with generating set $R^I$ and simple system $\Pi^I$) as mentioned in Section \ref{sec:properties_centralizer}, $W^{\perp I}$ has its own finite part $W^{\perp I}{}_{\mathrm{fin}}$.
To state the main theorem, we introduce a terminology: We say that a subset $I$ of $S$ is \emph{$A_{>1}$-free} if $I$ has no irreducible components of type $A_n$ with $2 \leq n < \infty$.
Then the main theorem of this paper is stated as follows:
\begin{thm}
\label{thm:YfixesWperpIfin}
Let $I$ be an $A_{>1}$-free subset of $S$ (see above for the terminology).
Then for each $\gamma \in \Pi^I$ with $s_\gamma \in W^{\perp I}{}_{\mathrm{fin}}$, we have $w \cdot \gamma = \gamma$ for every $w \in Y_I$.
Hence each element of the subgroup $Y_I$ of $Z_W(W_I)$ commutes with every element of $W^{\perp I}{}_{\mathrm{fin}}$.
\end{thm}
Among the several cases for the subset $I$ of $S$ covered by Theorem \ref{thm:YfixesWperpIfin}, we emphasize the following important special case:
\begin{cor}
\label{cor:YfixesWperpIfin}
Let $I \subseteq S$.
If every irreducible component of $I$ of finite type is of $(-1)$-type (see Section \ref{sec:longestelement} for the terminology), then we have
\begin{displaymath}
Z_W(W_I) = Z(W_I) \times W^{\perp I}{}_{\mathrm{fin}} \times (W^{\perp I}{}_{\mathrm{inf}} \rtimes Y_I) \enspace,
\end{displaymath}
where $W^{\perp I}{}_{\mathrm{inf}}$ denotes the product of the infinite irreducible components of $W^{\perp I}$ (hence $W^{\perp I} = W^{\perp I}{}_{\mathrm{fin}} \times W^{\perp I}{}_{\mathrm{inf}}$).
\end{cor}
\begin{proof}
Note that the assumption on $I$ in Theorem \ref{thm:YfixesWperpIfin} is now satisfied.
In this situation, Proposition \ref{prop:Z_for_special_case} implies that $Z_W(W_I) = Z(W_I) \times (W^{\perp I} \rtimes Y_I)$.
Now by Theorem \ref{thm:YfixesWperpIfin}, both $Y_I$ and $W^{\perp I}{}_{\mathrm{inf}}$ centralize $W^{\perp I}{}_{\mathrm{fin}}$, therefore the latter factor of $Z_W(W_I)$ decomposes further as $W^{\perp I}{}_{\mathrm{fin}} \times (W^{\perp I}{}_{\mathrm{inf}} \rtimes Y_I)$.
\end{proof}
We notice that the conclusion of Theorem \ref{thm:YfixesWperpIfin} will not generally hold when we remove the $A_{>1}$-freeness assumption on $I$.
A counterexample will be given in Section \ref{sec:counterexample}.
Here we give a remark on an application of the main results to a study of the isomorphism problem in Coxeter groups.
An important branch in the research on the isomorphism problem in Coxeter groups is to investigate, for two Coxeter systems $(W,S)$, $(W',S')$ and a group isomorphism $f \colon W \to W'$, the possibilities of \lq\lq shapes'' of the images $f(r) \in W'$ by $f$ of reflections $r \in W$ (with respect to the generating set $S$); for example, whether $f(r)$ is always a reflection in $W'$ (with respect to $S'$) or not.
Now if $r \in S$, then Corollary \ref{cor:YfixesWperpIfin} and Proposition \ref{prop:Yistorsionfree} imply that the unique maximal reflection subgroup of the centralizer of $r$ in $W$ is $\langle r \rangle \times W^{\perp \{r\}}$, which has finite part $\langle r \rangle \times W^{\perp \{r\}}{}_{\mathrm{fin}}$.
Moreover, the property of $W^{\perp \{r\}}{}_{\mathrm{fin}}$ shown in Theorem \ref{thm:YfixesWperpIfin} can be used to show that the factor $W^{\perp \{r\}}{}_{\mathrm{fin}}$ is \lq\lq frequently'' almost trivial.
In such a case, the finite part of the unique maximal reflection subgroup of the centralizer of $f(r)$ in $W'$ should be very small, which can be shown to be impossible if $f(r)$ is too far from being a reflection.
Thus the possibilities of the shape of $f(r)$ in $W'$ can be restricted by using Theorem \ref{thm:YfixesWperpIfin}.
See \cite{Nui_ref} for a detailed study along this direction.
The author hopes that such an argument can be generalized to the case that $r$ is not a reflection but an involution whose \lq\lq type'' is $A_{>1}$-free (in a certain appropriate sense).
\section{Proof of Theorem \ref{thm:YfixesWperpIfin}: General properties}
\label{sec:proof_general}
In this and the next sections, we give a proof of Theorem \ref{thm:YfixesWperpIfin}.
First, this section gives some preliminary results that hold for an arbitrary $I \subseteq S$ (not necessarily $A_{>1}$-free; see Section \ref{sec:main_result} for the terminology).
Then the next section will focus on the case that $I$ is $A_{>1}$-free as in Theorem \ref{thm:YfixesWperpIfin} and complete the proof of Theorem \ref{thm:YfixesWperpIfin}.
\subsection{Decompositions of elements of $Y_{z,y}$}
\label{sec:finitepart_decomposition_Y}
It is mentioned in Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}) that each element $u \in Y_{z,y}$ with $y,z \in S^{(\Lambda)}$ admits a kind of decomposition into elements of some $Y$.
Here we introduce a generalization of such decompositions, which will play an important role below.
We give a definition:
\begin{defn}
\label{defn:standard_decomposition}
Let $u \in Y_{z,y}$ with $y,z \in S^{(\Lambda)}$.
We say that an expression $\mathcal{D} := \omega_{n-1} \cdots \omega_1\omega_0$ of $u$ is a \emph{semi-standard decomposition} of $u$ with respect to a subset $J$ of $S$ if there exist $y^{(i)} = y^{(i)}(\mathcal{D}) \in S^{(\Lambda)}$ for $0 \leq i \leq n$, $t^{(i)} = t^{(i)}(\mathcal{D}) \in S$ for $0 \leq i \leq n-1$ and $J^{(i)} = J^{(i)}(\mathcal{D}) \subseteq S$ for $0 \leq i \leq n$, with $y^{(0)} = y$, $y^{(n)} = z$ and $J^{(0)} = J$, satisfying the following conditions for each index $0 \leq i \leq n-1$:
\begin{itemize}
\item We have $t^{(i)} \not\in [y^{(i)}] \cup J^{(i)}$ and $t^{(i)}$ is adjacent to $[y^{(i)}]$.
\item The subset $K^{(i)} = K^{(i)}(\mathcal{D}) := ([y^{(i)}] \cup J^{(i)})_{\sim t^{(i)}}$ of $S$ is of finite type (see Section \ref{sec:properties_centralizer} for the notation).
\item We have $\omega_i = \omega_{y^{(i)},J^{(i)}}^{t^{(i)}} := w_0(K^{(i)})w_0(K^{(i)} \smallsetminus \{t^{(i)}\})$.
\item We have $\omega_i \in Y_{y^{(i+1)},y^{(i)}}$ and $\omega_i \cdot \Pi_{J^{(i)}} = \Pi_{J^{(i+1)}}$.
\end{itemize}
We call the above subset $K^{(i)}$ of $S$ the \emph{support} of $\omega_i$.
We call a component $\omega_i$ of $\mathcal{D}$ a \emph{wide transformation} if its support $K^{(i)}$ intersects with $J^{(i)} \smallsetminus [y^{(i)}]$; otherwise, we call $\omega_i$ a \emph{narrow transformation}, in which case we have $\omega_i = \omega_{y^{(i)},J^{(i)}}^{t^{(i)}} = w_{y^{(i)}}^{t^{(i)}}$.
Moreover, we say that $\mathcal{D} = \omega_{n-1} \cdots \omega_1\omega_0$ is a \emph{standard decomposition} of $u$ if $\mathcal{D}$ is a semi-standard decomposition of $u$ and $\ell(u) = \sum_{j=0}^{n-1} \ell(\omega_j)$.
The integer $n$ is called the \emph{length} of $\mathcal{D}$ and is denoted by $\ell(\mathcal{D})$.
\end{defn}
\begin{exmp}
\label{exmp:semi-standard_decomposition}
We give an example of a semi-standard decomposition.
Let $(W,S)$ be a Coxeter system of type $D_7$, with standard labelling $r_1,\dots,r_7$ of elements of $S$ given in Section \ref{sec:longestelement}.
We put $n := 4$, and define the objects $y^{(i)}$, $t^{(i)}$ and $J^{(i)}$ as in Table \ref{tab:example_semi-standard_decomposition}, where we abbreviate each $r_i$ to $i$ for simplicity.
In this case, the subsets $K^{(i)}$ of $S$ introduced in Definition \ref{defn:standard_decomposition} are determined as in the last row of Table \ref{tab:example_semi-standard_decomposition}.
We have
\begin{displaymath}
\begin{split}
\omega_0 &= w_0(\{r_1,r_2,r_3,r_4,r_5\})w_0(\{r_1,r_2,r_3,r_5\}) = r_2r_3r_4r_5r_1r_2r_3r_4 \enspace, \\
\omega_1 &= w_0(\{r_3,r_4,r_5,r_6\})w_0(\{r_3,r_4,r_5\}) = r_3r_4r_5r_6 \enspace, \\
\omega_2 &= w_0(\{r_4,r_5,r_6,r_7\})w_0(\{r_4,r_5,r_6\}) = r_7r_5r_4r_6r_5r_7 \enspace, \\
\omega_3 &= w_0(\{r_3,r_4,r_5,r_6\})w_0(\{r_4,r_5,r_6\}) = r_6r_5r_4r_3 \enspace.
\end{split}
\end{displaymath}
Let $u$ denote the element $\omega_3\omega_2\omega_1\omega_0$ of $W$.
Then it can be shown that $u \in Y_{z,y}$ where $y := y^{(0)} = (r_1,r_2,r_3)$ and $z := y^{(n)} = (r_5,r_4,r_3)$, and the expression $\mathcal{D} = \omega_3\omega_2\omega_1\omega_0$ is a semi-standard decomposition of $u$ of length $4$ with respect to $J := J^{(0)} = \{r_5\}$.
Moreover, $\mathcal{D}$ is in fact a standard decomposition of $u$ (which is the same as the one obtained by using Proposition \ref{prop:standard_decomposition_existence} below).
Among the four components $\omega_i$, the first one $\omega_0$ is a wide transformation and the other three $\omega_1,\omega_2,\omega_3$ are narrow transformations.
\end{exmp}
\begin{table}[hbt]
\centering
\caption{The data for the example of semi-standard decompositions}
\label{tab:example_semi-standard_decomposition}
\begin{tabular}{|c||c|c|c|c|c|} \hline
$i$ & $4$ & $3$ & $2$ & $1$ & $0$ \\ \hline\hline
$y^{(i)}$ & $(5,4,3)$ & $(6,5,4)$ & $(4,5,6)$ & $(3,4,5)$ & $(1,2,3)$ \\ \hline
$t^{(i)}$ & --- & $3$ & $7$ & $6$ & $4$ \\ \hline
$J^{(i)}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{1\}$ & $\{5\}$ \\ \hline
$K^{(i)}$ & --- & $\{3,4,5,6\}$ & $\{4,5,6,7\}$ & $\{3,4,5,6\}$ & $\{1,2,3,4,5\}$ \\ \hline
\end{tabular}
\end{table}
The next proposition shows existence of standard decompositions:
\begin{prop}
\label{prop:standard_decomposition_existence}
Let $u \in Y_{z,y}$ with $y,z \in S^{(\Lambda)}$, and let $J \subseteq S$ satisfying that $u \cdot \Pi_J \subseteq \Pi$.
Then there exists a standard decomposition of $u$ with respect to $J$.
\end{prop}
\begin{proof}
We proceed by induction on $\ell(u)$.
For the case $\ell(u) = 0$, i.e., $u = 1$, the empty expression satisfies the conditions for a standard decomposition of $u$.
From now, we consider the case $\ell(u) > 0$.
Then there is an element $t = t^{(0)} \in S$ satisfying that $u \cdot \alpha_t \in \Phi^-$.
Since $u \in Y_{z,y}$ and $u \cdot \Pi_J \subseteq \Pi \subseteq \Phi^+$, we have $t \not\in [y] \cup J$ and $\alpha_t \not\in \Phi^{\perp [y]}$, therefore $t$ is adjacent to $[y]$.
Now by Lemma \ref{lem:rightdivisor}, $K = K^{(0)} := ([y] \cup J)_{\sim t}$ is of finite type and $\omega_0 := \omega_{y,J}^{t}$ is a right divisor of $u$ (see Section \ref{sec:defofCox} for the terminology).
By the definition of $\omega_{y,J}^{t}$ in Definition \ref{defn:standard_decomposition}, there exist unique $y^{(1)} \in S^{(\Lambda)}$ and $J^{(1)} \subseteq S$ satisfying that $y^{(1)} = \omega_0 \ast y$ (see Section \ref{sec:properties_centralizer} for the notation) and $\omega_0 \cdot \Pi_J = \Pi_{J^{(1)}}$.
Moreover, since $\omega_0$ is a right divisor of $u$, it follows that $\Phi[\omega_0] \subseteq \Phi[u]$ (see e.g., Lemma 2.2 of \cite{Nui11}), therefore $\Phi^{\perp [y]}[\omega_0] \subseteq \Phi^{\perp [y]}[u] = \emptyset$ and $\omega_0 \in Y_{y^{(1)},y}$.
Put $u' = u\omega_0{}^{-1}$.
Then we have $u' \in Y_{z,y^{(1)}}$, $u' \cdot \Pi_{J^{(1)}} \subseteq \Pi$ and $\ell(u') = \ell(u) - \ell(\omega_0) < \ell(u)$ (note that $\omega_0 \neq 1$).
Hence the concatenation of $\omega_0$ to a standard decomposition of $u' \in Y_{z,y^{(1)}}$ with respect to $J^{(1)}$ obtained by the induction hypothesis gives a desired standard decomposition of $u$.
\end{proof}
We present some properties of (semi-)standard decompositions.
First, we have the following:
\begin{lem}
\label{lem:another_decomposition_Y_no_loop}
For any semi-standard decomposition $\omega_{n-1} \cdots \omega_1\omega_0$ of an element of $W$, for each $0 \leq i \leq n-1$, there exists an element of $\Pi_{K^{(i)} \smallsetminus \{t^{(i)}\}}$ which is not fixed by $\omega_i$.
\end{lem}
\begin{proof}
Suppose to the contrary that $\omega_i$ fixes $\Pi_{K^{(i)} \smallsetminus \{t^{(i)}\}}$ pointwise.
Then by applying Proposition \ref{prop:charofBphi} to the pair of $[y^{(i)}] \cup J^{(i)}$ and $t^{(i)}$ instead of the pair of $[x]$ and $s$, it follows that there exists a root $\gamma \in (\Phi_{K^{(i)}}^{\perp K^{(i)} \smallsetminus \{t^{(i)}\}})^+$ with $\omega_i \cdot \gamma \in \Phi^-$ (note that, in this case, the element $w_x^s$ in Proposition \ref{prop:charofBphi} coincides with $\omega_i$).
By the definition of the support $K^{(i)}$ of $\omega_i$, $K^{(i)}$ is apart from $[y^{(i)}] \smallsetminus K^{(i)}$, therefore this root $\gamma$ also belongs to $(\Phi^{\perp [y^{(i)}]})^+$.
Hence we have $\Phi^{\perp [y^{(i)}]}[\omega_i] \neq \emptyset$, contradicting the property $\omega_i \in Y_{y^{(i+1)},y^{(i)}}$ in Definition \ref{defn:standard_decomposition}.
Hence Lemma \ref{lem:another_decomposition_Y_no_loop} holds.
\end{proof}
For a semi-standard decomposition $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$ of $u \in Y_{z,y}$, let $0 \leq i_1 < i_2 < \cdots < i_k \leq n$ be the indices $i$ with the property that $[y^{(i+1)}(\mathcal{D})] = [y^{(i)}(\mathcal{D})]$ and $J^{(i+1)}(\mathcal{D}) = J^{(i)}(\mathcal{D})$.
Then we define the \emph{simplification} $\widehat{\mathcal{D}}$ of $\mathcal{D}$ to be the expression $\omega_n \cdots \hat{\omega_{i_k}} \cdots \hat{\omega_{i_1}} \cdots \omega_0$ obtained from $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$ by removing all the terms $\omega_{i_j}$ with $1 \leq j \leq k$ (indicated by the hats).
Let $\widehat{u}$ denote the element of $W$ expressed by the product $\widehat{\mathcal{D}}$.
The following lemma is straightforward to prove:
\begin{lem}
\label{lem:another_decomposition_Y_reduce_redundancy}
In the above setting, let $\sigma$ denote the mapping from $\{0,1,\dots,n-k\}$ to $\{0,1,\dots,n\}$ satisfying that $\widehat{\mathcal{D}} = \omega_{\sigma(n-k)} \cdots \omega_{\sigma(1)}\omega_{\sigma(0)}$.
Then we have $\widehat{u} \in Y_{\widehat{z},y}$ for some $\widehat{z} \in S^{(\Lambda)}$ with $[\widehat{z}] = [z]$; $\widehat{\mathcal{D}}$ is a semi-standard decomposition of $\widehat{u}$ with respect to $J^{(0)}(\widehat{\mathcal{D}}) = J^{(0)}(\mathcal{D})$; we have $J^{(n-k+1)}(\widehat{\mathcal{D}}) = J^{(n+1)}(\mathcal{D})$; and for each $0 \leq j \leq n-k$, we have $[y^{(j)}(\widehat{\mathcal{D}})] = [y^{(\sigma(j))}(\mathcal{D})]$, $[y^{(j+1)}(\widehat{\mathcal{D}})] = [y^{(\sigma(j)+1)}(\mathcal{D})]$, $J^{(j)}(\widehat{\mathcal{D}}) = J^{(\sigma(j))}(\mathcal{D})$ and $J^{(j+1)}(\widehat{\mathcal{D}}) = J^{(\sigma(j)+1)}(\mathcal{D})$.
\end{lem}
\begin{exmp}
\label{exmp:simplification}
For the case of Example \ref{exmp:semi-standard_decomposition}, the simplification $\widehat{\mathcal{D}}$ of the standard decomposition $\mathcal{D} = \omega_3\omega_2\omega_1\omega_0$ of $u$ is obtained by removing the third component $\omega_2$, therefore $\widehat{\mathcal{D}} = \omega_3\omega_1\omega_0$.
We have
\begin{displaymath}
\begin{split}
&y^{(0)}(\widehat{\mathcal{D}}) = y^{(0)}(\mathcal{D}) = (r_1,r_2,r_3) \,,\, y^{(1)}(\widehat{\mathcal{D}}) = y^{(1)}(\mathcal{D}) = (r_3,r_4,r_5) \enspace, \\
&y^{(2)}(\widehat{\mathcal{D}}) = y^{(2)}(\mathcal{D}) = (r_4,r_5,r_6) \,,\, y^{(3)}(\widehat{\mathcal{D}}) = (r_3,r_4,r_5) = \widehat{z} \enspace.
\end{split}
\end{displaymath}
Now since $\omega_3$ is the inverse of $\omega_1$, the semi-standard decomposition $\widehat{\mathcal{D}}$ of $\widehat{u}$ is not a standard decomposition of $\widehat{u}$.
\end{exmp}
Moreover, we have the following result:
\begin{lem}
\label{lem:another_decomposition_Y_shift_x}
Let $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element $u \in W$.
Let $r \in [y^{(0)}]$, and suppose that the support of each $\omega_i$ is apart from $r$.
Moreover, let $s \in J^{(0)}$, $s' \in J^{(n+1)}$ and suppose that $u \ast s = s'$.
Then we have $r \in [y^{(n+1)}]$, $u \ast r = r$ and $u \in Y_{z',z}$, where $z$ and $z'$ are elements of $S^{(\Lambda)}$ obtained from $y^{(0)}$ and $y^{(n+1)}$ by replacing $r$ with $s$ and with $s'$, respectively.
\end{lem}
\begin{proof}
We use induction on $n \geq 0$.
Put $\mathcal{D}' = \omega_{n-1} \cdots \omega_1\omega_0$, and let $u' \in Y_{y^{(n)},y^{(0)}}$ be the element expressed by the product $\mathcal{D}'$.
Let $s'' := u' \ast s \in J^{(n)}$.
By the induction hypothesis, we have $r \in [y^{(n)}]$, $u' \ast r = r$ and $u' \in Y_{z'',z}$, where $z''$ is the element of $S^{(\Lambda)}$ obtained from $y^{(n)}$ by replacing $r$ with $s''$.
Now, since the support $K^{(n)}$ of $\omega_n$ is apart from $r \in [y^{(n)}]$, it follows that $r \in [y^{(n+1)}]$ and $\omega_n \ast r = r$, therefore $u \ast r = \omega_n u' \ast r = r$.
On the other hand, we have $z' = \omega_n \ast z''$ by the construction of $z'$ and $z''$.
Moreover, by the definition of $\omega_n$, the set $K^{(n)}$ is apart from $([y^{(n)}] \cup J^{(n)}) \smallsetminus K^{(n)}$, therefore $K^{(n)}$ is also apart from the subset $([z''] \cup J^{(n)}) \smallsetminus K^{(n)}$ of $([y^{(n)}] \cup J^{(n)}) \smallsetminus K^{(n)}$.
Since $[y^{(n)}] \cap K^{(n)} \subseteq [z''] \cap K^{(n)}$, it follows that $\Phi^{\perp [z'']}[\omega_n] = \Phi_{K^{(n)}}^{\perp [z''] \cap K^{(n)}}[\omega_n] \subseteq \Phi_{K^{(n)}}^{\perp [y^{(n)}] \cap K^{(n)}}[\omega_n] = \Phi^{\perp [y^{(n)}]}[\omega_n] = \emptyset$ (note that $\omega_n \in Y_{y^{(n+1)},y^{(n)}}$), therefore we have $\omega_n \in Y_{z',z''}$.
Hence we have $u = \omega_n u' \in Y_{z',z}$, concluding the proof.
\end{proof}
\subsection{Reduction to a special case}
\label{sec:proof_reduction}
Here we give a reduction of our proof of Theorem \ref{thm:YfixesWperpIfin} to a special case where the possibility of the subset $I \subseteq S$ is restricted in a certain manner.
First, for $J \subseteq S$, let $\iota(J)$ denote temporarily the union of the irreducible components of $J$ that are not of finite type, and let $\overline{\iota}(J)$ denote temporarily the set of elements of $S$ that are not apart from $\iota(J)$ (hence $J \cap \overline{\iota}(J) = \iota(J)$).
For example, when $(W,S)$ is given by the Coxeter graph in Figure \ref{fig:example_notation_iota} (where we abbreviate each $r_i \in S$ to $i$) and $J = \{r_1,r_3,r_4,r_5,r_6\}$ (indicated in Figure \ref{fig:example_notation_iota} by the black vertices), we have $\iota(J) = \{r_1,r_5,r_6\}$ and $\overline{\iota}(J) = \{r_1,r_2,r_5,r_6,r_7\}$, therefore $J \cap \overline{\iota}(J) = \{r_1,r_5,r_6\} = \iota(J)$ as mentioned above.
Now we have the following:
\begin{figure}
\caption{An example for the notations $\iota(J)$ and $\overline{\iota}(J)$}
\label{fig:example_notation_iota}
\end{figure}
\begin{lem}
\label{lem:proof_reduction_root_perp}
Let $I$ be an arbitrary subset of $S$.
Then we have $w \in W_{S \smallsetminus \overline{\iota}(I)}$ for any $w \in Y_{y,x_I}$ with $y \in S^{(\Lambda)}$, and we have $\Phi^{\perp I} = \Phi_{S \smallsetminus \overline{\iota}(I)}^{\perp I \smallsetminus \overline{\iota}(I)}$.
\end{lem}
\begin{proof}
First, let $w \in Y_{y,x_I}$ with $y \in S^{(\Lambda)}$.
Then by Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}), there are a finite sequence $z_0 = x_I,z_1,\dots,z_{n-1},z_n = y$ of elements of $S^{(\Lambda)}$ and a finite sequence $s_0,s_1,\dots,s_{n-1}$ of elements of $S$ satisfying that $z_{i+1} \neq z_i$, $s_i \not\in [z_i]$, $[z_i]_{\sim s_i}$ is of finite type and $w_{z_i}^{s_i} \in Y_{z_{i+1},z_i}$ for each index $0 \leq i \leq n-1$, and we have $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0}$.
We show, by induction on $0 \leq i \leq n-1$, that $\iota([z_{i+1}]) = \iota(I)$, $\overline{\iota}([z_{i+1}]) = \overline{\iota}(I)$, and $w_{z_i}^{s_i} \in W_{S \smallsetminus \overline{\iota}(I)}$.
It follows from the induction hypothesis when $i > 0$, and is trivial when $i = 0$, that $\iota([z_i]) = \iota(I)$ and $\overline{\iota}([z_i]) = \overline{\iota}(I)$.
Since $s_i \not\in [z_i]$ and $[z_i]_{\sim s_i}$ is of finite type, it follows from the definition of $\overline{\iota}$ that $[z_i]_{\sim s_i} \subseteq S \smallsetminus \overline{\iota}([z_i])$, therefore we have $w_{z_i}^{s_i} \in W_{S \smallsetminus \overline{\iota}([z_i])} = W_{S \smallsetminus \overline{\iota}(I)}$, $\iota([z_{i+1}]) = \iota([z_i]) = \iota(I)$, and $\overline{\iota}([z_{i+1}]) = \overline{\iota}([z_i]) = \overline{\iota}(I)$, as desired.
This implies that $w = w_{z_{n-1}}^{s_{n-1}} \cdots w_{z_1}^{s_1} w_{z_0}^{s_0} \in W_{S \smallsetminus \overline{\iota}(I)}$, therefore the first part of the claim holds.
For the second part of the claim, the inclusion $\supseteq$ is obvious by the definitions of $\iota(I)$ and $\overline{\iota}(I)$.
For the other inclusion, it suffices to show that $\Phi^{\perp I} \subseteq \Phi_{S \smallsetminus \overline{\iota}(I)}$, or equivalently $\Pi^I \subseteq \Phi_{S \smallsetminus \overline{\iota}(I)}$.
Let $\gamma \in \Pi^I$.
By Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_perp}), we have $\gamma = w \cdot \gamma(y,s)$ for some $y \in S^{(\Lambda)}$, $w \in Y_{x_I,y}$ and a root $\gamma(y,s)$ introduced in the statement of Proposition \ref{prop:charofBphi}.
Now by applying the result of the previous paragraph to $w^{-1} \in Y_{y,x_I}$, it follows that $\iota([y]) = \iota(I)$, $\overline{\iota}([y]) = \overline{\iota}(I)$, and $w \in W_{S \smallsetminus \overline{\iota}(I)}$.
Moreover, since $[y]_{\sim s}$ is of finite type (see Proposition \ref{prop:charofBphi}), a similar argument implies that $[y]_{\sim s} \subseteq S \smallsetminus \overline{\iota}([y]) = S \smallsetminus \overline{\iota}(I)$ and $w_y^s \in W_{S \smallsetminus \overline{\iota}(I)}$, therefore $\gamma(y,s) \in \Phi_{S \smallsetminus \overline{\iota}(I)}$.
Hence we have $\gamma = w \cdot \gamma(y,s) \in \Phi_{S \smallsetminus \overline{\iota}(I)}$, concluding the proof of Lemma \ref{lem:proof_reduction_root_perp}.
\end{proof}
For an arbitrary subset $I$ of $S$, suppose that $\gamma \in \Pi^I$, $s_\gamma \in W^{\perp I}{}_{\mathrm{fin}}$, and $w \in Y_I$.
Then by the second part of Lemma \ref{lem:proof_reduction_root_perp}, we have $\gamma \in \Pi^I = \Pi^{S \smallsetminus \overline{\iota}(I),I \smallsetminus \overline{\iota}(I)}$ and $s_\gamma$ also belongs to the finite part of $W_{S \smallsetminus \overline{\iota}(I)}^{\perp I \smallsetminus \overline{\iota}(I)}$.
Moreover, we have $w \in W_{S \smallsetminus \overline{\iota}(I)}$ by the first part of Lemma \ref{lem:proof_reduction_root_perp}, therefore $w$ also belongs to the group $Y_{I \smallsetminus \overline{\iota}(I)}$ constructed from the pair $S \smallsetminus \overline{\iota}(I)$, $I \smallsetminus \overline{\iota}(I)$ instead of the pair $S$, $I$.
Hence we have the following result: If the conclusion of Theorem \ref{thm:YfixesWperpIfin} holds for the pair $S \smallsetminus \overline{\iota}(I)$, $I \smallsetminus \overline{\iota}(I)$ instead of the pair $S$, $I$, then the conclusion of Theorem \ref{thm:YfixesWperpIfin} also holds for the pair $S$, $I$.
Note that $I \smallsetminus \overline{\iota}(I) = I \smallsetminus \iota(I)$ is the union of the irreducible components of $I$ of finite type.
As a consequence, we may assume without loss of generality that every irreducible component of $I$ is of finite type (note that the $A_{>1}$-freeness in the hypothesis of Theorem \ref{thm:YfixesWperpIfin} is preserved by considering $I \smallsetminus \iota(I)$ instead of $I$).
From now on, we assume that every irreducible component of $I$ is of finite type, as mentioned in the last paragraph.
For any $J \subseteq S$, we say that a subset $\Psi$ of the simple system $\Pi^J$ of $W^{\perp J}$ is an \emph{irreducible component} of $\Pi^J$ if $S(\Psi) = \{s_\beta \mid \beta \in \Psi\}$ is an irreducible component of the generating set $R^J$ of $W^{\perp J}$.
Now, as in the statement of Theorem \ref{thm:YfixesWperpIfin}, let $w \in Y_I$ and $\gamma \in \Pi^I$, and suppose that $s_{\gamma} \in W^{\perp I}{}_{\mathrm{fin}}$.
Let $\Psi$ denote the union of the irreducible components of $\Pi^I$ containing some $w^k \cdot \gamma$ with $k \in \mathbb{Z}$.
Then we have the following:
\begin{lem}
\label{lem:proof_reduction_Psi_finite}
In this setting, $\Psi$ is of finite type; in particular, $|\Psi| < \infty$.
Moreover, the two subsets $I \smallsetminus \mathrm{Supp}\,\Psi$ and $\mathrm{Supp}\,\Psi$ of $S$ are not adjacent.
\end{lem}
\begin{proof}
First, there exists a finite subset $K$ of $S$ for which $w \in W_K$ and $\gamma \in \Phi_K$.
Then, the number of mutually orthogonal roots of the form $w^k \cdot \gamma$ is at most $|K| < \infty$, since those roots are linearly independent and contained in the $|K|$-dimensional space $V_K$.
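Here, the linear independence of mutually orthogonal roots can be seen directly: if $\sum_i c_i \beta_i = 0$ for mutually orthogonal roots $\beta_i$, then pairing with any $\beta_j$ yields
\[
0 = \Bigl\langle \beta_j, \sum_i c_i \beta_i \Bigr\rangle = c_j \langle \beta_j,\beta_j \rangle ,
\]
and since $\langle \beta_j,\beta_j \rangle \neq 0$, we obtain $c_j = 0$ for every $j$.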
This implies that the number of irreducible components of $\Pi^I$ containing some $w^k \cdot \gamma$, which are of finite type by the property $s_\gamma \in W^{\perp I}{}_{\mathrm{fin}}$ and Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_decomp}), is finite.
Therefore, the union $\Psi$ of those irreducible components is also of finite type.
Hence the first part of the claim holds.
For the second part of the claim, suppose to the contrary that some $s \in I \smallsetminus \mathrm{Supp}\,\Psi$ and $t \in \mathrm{Supp}\,\Psi$ are adjacent.
By the definition of $\mathrm{Supp}\,\Psi$, we have $t \in \mathrm{Supp}\,\beta \subseteq \mathrm{Supp}\,\Psi$ for some $\beta \in \Psi$.
Now we have $s \not\in \mathrm{Supp}\,\beta$.
Let $c > 0$ be the coefficient of $\alpha_t$ in $\beta$.
Then the property $s \not\in \mathrm{Supp}\,\beta$ implies that $\langle \alpha_s,\beta \rangle \leq c \langle \alpha_s,\alpha_t \rangle < 0$, contradicting the property $\beta \in \Phi^{\perp I}$.
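Spelled out, the estimate is obtained as follows: writing $\beta = \sum_{r \in \mathrm{Supp}\,\beta} c_r \alpha_r$ with every $c_r > 0$ (and $c_t = c$), the property $s \not\in \mathrm{Supp}\,\beta$ gives $\langle \alpha_s,\alpha_r \rangle \leq 0$ for every $r \in \mathrm{Supp}\,\beta$, hence
\[
\langle \alpha_s,\beta \rangle = \sum_{r \in \mathrm{Supp}\,\beta} c_r \langle \alpha_s,\alpha_r \rangle \leq c \langle \alpha_s,\alpha_t \rangle < 0 .
\]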
Hence the claim holds, concluding the proof of Lemma \ref{lem:proof_reduction_Psi_finite}.
\end{proof}
We temporarily write $L = I \cap \mathrm{Supp}\,\Psi$, and put $\Psi' = \Psi \cup \Pi_L$.
Then we have $\mathrm{Supp}\,\Psi' = \mathrm{Supp}\,\Psi$, therefore by Lemma \ref{lem:proof_reduction_Psi_finite}, $I \smallsetminus \mathrm{Supp}\,\Psi'$ and $\mathrm{Supp}\,\Psi'$ are not adjacent.
On the other hand, we have $|\Psi| < \infty$ by Lemma \ref{lem:proof_reduction_Psi_finite}, therefore $\mathrm{Supp}\,\Psi' = \mathrm{Supp}\,\Psi$ is a finite set.
By these properties and the above-mentioned assumption that every irreducible component of $I$ is of finite type, it follows that $\Pi_L$, like $\Psi$, is of finite type.
Note that $\Psi \subseteq \Pi^I \subseteq \Phi^{\perp L}$.
Hence the two root bases $\Psi$ and $\Pi_L$ are orthogonal, therefore their union $\Psi'$ is also a root basis by Theorem \ref{thm:conditionforrootbasis}, and we have $|W(\Psi')|<\infty$.
By Proposition \ref{prop:fintyperootbasis}, $\Psi'$ is a basis of a subspace $U := \mathrm{span}\,\Psi'$ of $V_{\mathrm{Supp}\,\Psi'}$.
By applying Proposition \ref{prop:finitesubsystem} to $W_{\mathrm{Supp}\,\Psi'}$ instead of $W$, it follows that there exist $u \in W_{\mathrm{Supp}\,\Psi'}$ and $J \subseteq \mathrm{Supp}\,\Psi'$ satisfying that $W_J$ is finite, $u \cdot (U \cap \Phi^+) = \Phi_J^+$ and $u \cdot (U \cap \Pi) \subseteq \Pi_J$.
Now we have the following:
\begin{lem}
\label{lem:proof_reduction_conjugate_by_Y}
In this setting, if we choose such an element $u$ of minimal length, then there exists an element $y \in S^{(\Lambda)}$ satisfying that $u \in Y_{y,x_I}$, the sets $[y] \smallsetminus J$ and $J$ are not adjacent, and $(u \cdot \Psi) \cup \Pi_{[y] \cap J}$ is a basis of $V_J$.
\end{lem}
\begin{proof}
Since $\Psi'$ is a basis of $U$, the property $u \cdot (U \cap \Phi^+) = \Phi_J^+$ implies that $u \cdot \Psi'$ is a basis of $V_J$.
Now we have $u \cdot \Pi_L \subseteq \Pi_J$ since $\Pi_L \subseteq U \cap \Pi$, while $u$ fixes $\Pi_{I \smallsetminus L}$ pointwise since the sets $I \smallsetminus \mathrm{Supp}\,\Psi' = I \smallsetminus L$ and $\mathrm{Supp}\,\Psi'$ are not adjacent.
By these properties, there exists an element $y \in S^{(\Lambda)}$ satisfying that $y = u \ast x_I$, $[y] \cap \mathrm{Supp}\,\Psi' \subseteq J$ and $[y] \smallsetminus \mathrm{Supp}\,\Psi' = I \smallsetminus \mathrm{Supp}\,\Psi'$.
Since $J \subseteq \mathrm{Supp}\,\Psi'$, it follows that $[y] \smallsetminus J$ and $J$ are not adjacent.
On the other hand, since $u \cdot \Pi_{I \smallsetminus L} = \Pi_{I \smallsetminus L}$, $u \cdot (U \cap \Phi^+) = \Phi_J^+$ and $\Pi_{I \smallsetminus L} \cap U = \emptyset$, it follows that $\Pi_{I \smallsetminus L} \cap \Phi_J^+ = \emptyset$, therefore we have $u \cdot \Pi_L = \Pi_{[y] \cap J}$.
Hence $u \cdot \Psi' = (u \cdot \Psi) \cup \Pi_{[y] \cap J}$ is a basis of $V_J$.
Finally, we show that such an element $u$ of minimal length satisfies that $u \cdot \Pi^I \subseteq \Phi^+$, hence $u \cdot (\Phi^{\perp I})^+ \subseteq \Phi^+$ and $u \in Y_{y,x_I}$.
First, we have $u \cdot \Psi \subseteq u \cdot (U \cap \Phi^+) = \Phi_J^+$.
Secondly, let $\beta \in \Pi^I \smallsetminus \Psi$, and suppose to the contrary that $u \cdot \beta \in \Phi^-$.
Then we have $\beta \in \Phi_{\mathrm{Supp}\,\Psi'}$ since $u \in W_{\mathrm{Supp}\,\Psi'}$, therefore $s_{\beta} \in W_{\mathrm{Supp}\,\Psi'}$.
On the other hand, since $\Psi$ is the union of some irreducible components of $\Pi^I$, it follows that $\beta$ is orthogonal to $\Psi$, hence orthogonal to $\Psi'$.
By these properties, the element $u s_{\beta}$ also satisfies the defining properties of the element $u$ above.
However, now the property $u \cdot \beta \in \Phi^-$ implies that $\ell(u s_{\beta}) < \ell(u)$ (see Theorem \ref{thm:reflectionsubgroup_Deodhar}), contradicting the choice of $u$.
Hence we have $u \cdot \beta \in \Phi^+$ for every $\beta \in \Pi^I \smallsetminus \Psi$, therefore $u \cdot \Pi^I \subseteq \Phi^+$, concluding the proof of Lemma \ref{lem:proof_reduction_conjugate_by_Y}.
\end{proof}
For an element $u \in Y_{y,x_I}$ as in Lemma \ref{lem:proof_reduction_conjugate_by_Y}, Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_decomp}) implies that $u \cdot \gamma \in \Pi^{[y]}$ and $s_{u \cdot \gamma} = u s_\gamma u^{-1} \in W^{\perp [y]}{}_{\mathrm{fin}}$.
Now $w$ fixes the root $\gamma$ if and only if the element $uwu^{-1} \in Y_{y,y}$ fixes the root $u \cdot \gamma$.
Moreover, the conjugation by $u$ defines an isomorphism of Coxeter systems $(W_I,I) \to (W_{[y]},[y])$.
Hence, by considering $[y] \subseteq S$, $uwu^{-1} \in Y_{[y]}$, $u \cdot \gamma \in \Pi^{[y]}$ and $u \cdot \Psi \subseteq \Pi^{[y]}$ instead of $I$, $w$, $\gamma$ and $\Psi$ if necessary, we may assume without loss of generality the following conditions:
\begin{description}
\item[(A1)] Every irreducible component of $I$ is of finite type.
\item[(A2)] There exists a subset $J \subseteq S$ of finite type satisfying that $I \smallsetminus J$ and $J$ are not adjacent and $\Psi \cup \Pi_{I \cap J}$ is a basis of $V_J$.
\end{description}
Moreover, if an irreducible component $J'$ of $J$ is contained in $I$, then the smaller subset $J \smallsetminus J'$ also satisfies the assumption (A2) in place of $J$; indeed, now $\Pi_{J'} \subseteq \Pi_{I \cap J}$ spans $V_{J'}$, and since $\Psi \cup \Pi_{I \cap J}$ is a basis of $V_J$ and the support of any root is irreducible (see Lemma \ref{lem:support_is_irreducible}), it follows that the support of any element of $\Psi \cup \Pi_{I \cap (J \smallsetminus J')}$ does not intersect $J'$.
Hence, by choosing a subset $J \subseteq S$ in (A2) as small as possible, we may also assume without loss of generality the following condition:
\begin{description}
\item[(A3)] No irreducible component of $J$ is contained in $I$.
\end{description}
We also notice the following properties:
\begin{lem}
\label{lem:Psi_is_full}
In this setting, we have $\Psi = \Pi^{J,I \cap J}$, hence $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is a basis of $V_J$.
\end{lem}
\begin{proof}
The inclusion $\Psi \subseteq \Pi^{J,I \cap J}$ follows from the definition of $\Psi$ and the condition (A2).
Now suppose to the contrary that there exists $\beta \in \Pi^{J,I \cap J} \smallsetminus \Psi$.
Then we have $\beta \in \Pi^I$ by (A2).
Since $\Psi$ is the union of some irreducible components of $\Pi^I$, it follows that $\beta$ is orthogonal to $\Psi$ as well as to $\Pi_{I \cap J}$.
This implies that $\beta$ belongs to the radical of $V_J$, which is trivial by Proposition \ref{prop:fintyperootbasis}.
This is a contradiction.
Hence the claim holds.
\end{proof}
\begin{lem}
\label{lem:property_w}
In this setting, the element $w \in Y_I$ satisfies that $w \cdot \Phi_J = \Phi_J$, and the subgroup $\langle w \rangle$ generated by $w$ acts transitively on the set of the irreducible components of $\Pi^{J,I \cap J}$.
\end{lem}
\begin{proof}
The second part of the claim follows immediately from the definition of $\Psi$ and Lemma \ref{lem:Psi_is_full}.
It also implies that $w \cdot \Pi^{J,I \cap J} = \Pi^{J,I \cap J}$, while $w \cdot \Pi_{I \cap J} = \Pi_{I \cap J}$ since $w \in Y_I$.
Moreover, $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is a basis of $V_J$ by Lemma \ref{lem:Psi_is_full}.
This implies that $w \cdot V_J = V_J$, therefore we have $w \cdot \Phi_J = \Phi_J$.
Hence the claim holds.
\end{proof}
\subsection{A key lemma}
\label{sec:proof_key_lemma}
Let $I^{\perp}$ denote the set of all elements of $S$ that are apart from $I$.
Then there are two possibilities: $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$, or $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$.
Here we present a key lemma regarding the former possibility (recall the three conditions (A1)--(A3) specified above):
\begin{lem}
\label{lem:finitepart_first_case_irreducible}
If $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$, then we have $I \cap J \neq \emptyset$ and $J$ is irreducible.
\end{lem}
\begin{proof}
First, take an element $\beta \in \Pi^{J,I \cap J} \smallsetminus \Phi_{I^{\perp}}$.
Then we have $\beta \not\in \Phi_I$ since $\Pi^{J,I \cap J} \subseteq \Phi^{\perp I}$.
Moreover, since the support $\mathrm{Supp}\,\beta$ of $\beta$ is irreducible (see Lemma \ref{lem:support_is_irreducible}), there exists an element $s \in \mathrm{Supp}\,\beta \smallsetminus I$ which is adjacent to an element of $I$, say $s' \in I$.
Now the property $\beta \in \Phi^{\perp I}$ implies that $s' \in \mathrm{Supp}\,\beta$, since otherwise we have $\langle \beta,\alpha_{s'} \rangle \leq c \langle \alpha_s,\alpha_{s'} \rangle < 0$ where $c > 0$ is the coefficient of $\alpha_s$ in $\beta$.
Hence we have $s' \in \mathrm{Supp}\,\Pi^{J,I \cap J} \subseteq J$.
Let $K$ denote the irreducible component of $J$ containing $s'$.
Put $\Psi' = \Pi^{J,I \cap J} \cap \Phi_K$.
Then, since $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is a basis of $V_J$ by Lemma \ref{lem:Psi_is_full} and the support of any root is irreducible (see Lemma \ref{lem:support_is_irreducible}), it follows that $\beta \in \Psi'$, $\Psi'$ is orthogonal to $\Pi^{J,I \cap J} \smallsetminus \Psi'$ and $\Psi' \cup \Pi_{I \cap K}$ is a basis of $V_K$.
Now $\Psi'$ is the union of some irreducible components of $\Pi^{J,I \cap J}$.
We first show that $J$ is irreducible provided that $w \cdot \Phi_K = \Phi_K$.
In this case, we have $w \cdot \Psi' = \Psi'$, therefore $\Pi^{J,I \cap J} = \Psi' \subseteq \Phi_K$ by the second part of Lemma \ref{lem:property_w}.
Now by the condition (A3), $J$ has no irreducible components other than $K$ (indeed, if such an irreducible component $J'$ of $J$ exists, then the property $\Pi^{J,I \cap J} \subseteq \Phi_K$ implies that the space $V_{J'}$ should be spanned by a subset of $\Pi_{I \cap J}$, therefore $J' \subseteq I$).
Hence $J = K$ is irreducible.
Thus it suffices to show that $w \cdot \Phi_K = \Phi_K$.
For this purpose, it suffices in turn to show that $w \cdot \Phi_K \subseteq \Phi_K$ (since $K$, like $J$, is of finite type), or equivalently $w \cdot \Pi_K \subseteq \Phi_K$.
Moreover, by the three properties that $K$ is irreducible, $K \cap I \neq \emptyset$ and $w \cdot \Pi_{K \cap I} = \Pi_{K \cap I}$, it suffices to show that $w \cdot \alpha_{t'} \in \Phi_K$ provided $t' \in K$ is adjacent to some $t \in K$ with $w \cdot \alpha_t \in \Phi_K$.
Now note that $w \cdot \Phi_J = \Phi_J$ by Lemma \ref{lem:property_w}.
Suppose to the contrary that $w \cdot \alpha_{t'} \not\in \Phi_K$.
Then we have $w \cdot \alpha_{t'} \in \Phi_J \smallsetminus \Phi_K = \Phi_{J \smallsetminus K}$ since $K$ is an irreducible component of $J$, therefore $w \cdot \alpha_{t'}$ is orthogonal to $w \cdot \alpha_t \in \Phi_K$.
This contradicts the property that $t'$ is adjacent to $t$, since $w$ leaves the bilinear form $\langle\,,\,\rangle$ invariant.
Hence we have $w \cdot \alpha_{t'} \in \Phi_K$, as desired.
\end{proof}
\section{Proof of Theorem \ref{thm:YfixesWperpIfin}: On the special case}
\label{sec:proof_special}
In this section, we introduce the assumption in Theorem \ref{thm:YfixesWperpIfin} that $I$ is $A_{>1}$-free, and continue the argument in Section \ref{sec:proof_general}.
Recall the properties (A1), (A2) and (A3) of $I$, $J$ and $\Psi = \Pi^{J,I \cap J}$ (see Lemma \ref{lem:Psi_is_full}) given in Section \ref{sec:proof_reduction}.
Our aim here is to prove that $w$ fixes $\Pi^{J,I \cap J}$ pointwise, which implies our goal $w \cdot \gamma = \gamma$ since $\gamma \in \Psi = \Pi^{J,I \cap J}$ by the definition of $\Psi$.
We divide the following argument into two cases: $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$, or $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$ (see Section \ref{sec:proof_key_lemma} for the definition of $I^{\perp}$).
\subsection{The first case $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$}
\label{sec:proof_special_first_case}
Here we consider the case that $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$.
In this case, the subset $J \subseteq S$ of finite type is irreducible by Lemma \ref{lem:finitepart_first_case_irreducible}, therefore we can apply the classification of finite irreducible Coxeter groups.
Let $J = \{r_1,r_2,\dots,r_N\}$, where $N = |J|$, be the standard labelling of $J$ (see Section \ref{sec:longestelement}).
We write $\alpha_i = \alpha_{r_i}$ for simplicity.
We introduce some temporary terminology.
We say that an element $y \in S^{(\Lambda)}$ satisfies \emph{Property P} if $[y] \smallsetminus J = I \smallsetminus J$ (hence $[y] \smallsetminus J$ is apart from $J$ by the condition (A2)) and $\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}$ is a basis of $V_J$.
For example, $x_I$ itself satisfies Property P.
For any $y \in S^{(\Lambda)}$ satisfying Property P and any element $s \in J \smallsetminus [y]$ with $\varphi(y,s) \neq y$, we say that the isomorphism $t \mapsto w_y^s \ast t$ from $[y] \cap J$ to $[\varphi(y,s)] \cap J$ is a \emph{local transformation} (note that now $[y]_{\sim s} \subseteq J$ and $w_y^s \in W_J$ by the above-mentioned property that $[y] \smallsetminus J$ is apart from $J$).
By abusing the terminology, in such a case we also call the correspondence $y \mapsto \varphi(y,s)$ a local transformation.
Note that, in this case, $\varphi(y,s)$ also satisfies Property P, we have $w_y^s \in Y_{\varphi(y,s),y}$ and $w_y^s \ast t = t$ for any $t \in [y] \smallsetminus J$, and the action of $w_y^s$ induces an isomorphism from $\Pi^{J,[y] \cap J}$ to $\Pi^{J,[\varphi(y,s)] \cap J}$.
Since $w \cdot \Pi^{J,I \cap J} = \Pi^{J,I \cap J}$, the claim is trivial if $|\Pi^{J,I \cap J}| = 1$.
From now on, we consider the case that $|\Pi^{J,I \cap J}| \geq 2$, so that $N = |J| \geq |I \cap J| + 2 \geq 3$ (note that $I \cap J \neq \emptyset$ by Lemma \ref{lem:finitepart_first_case_irreducible}).
In particular, $J$ is not of type $I_2(m)$.
On the other hand, we have the following results:
\begin{lem}
\label{lem:J_not_A_N}
In this setting, $J$ is not of type $A_N$.
\end{lem}
\begin{proof}
We show that $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ cannot span $V_J$ if $J$ is of type $A_N$; this contradicts Lemma \ref{lem:Psi_is_full} and hence concludes the proof.
By the $A_{>1}$-freeness of $I$, each irreducible component of $I \cap J$ (which is also an irreducible component of $I$) is of type $A_1$.
Now by applying successive local transformations, we may assume without loss of generality that $r_1 \in I$ (indeed, if the minimal index $i$ with $r_i \in I$ satisfies $i \geq 2$, then we have $\varphi(x_I,r_{i-1}) \ast r_i = r_{i-1}$).
In this case, we have $r_2 \not\in I$, while we have $\Phi_J^{\perp I} \subseteq \Phi_{J \smallsetminus \{r_1,r_2\}}$ by the fact that any positive root in the root system $\Phi_J$ of type $A_N$ is of the form $\alpha_i + \alpha_{i+1} + \cdots + \alpha_{i'}$ with $1 \leq i \leq i' \leq N$.
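For the reader's convenience, the inclusion can be verified as follows. For a positive root $\beta = \alpha_i + \alpha_{i+1} + \cdots + \alpha_{i'}$ of $\Phi_J$ we have
\[
\langle \alpha_1,\beta \rangle > 0 \ \text{ if } i = 1, \qquad
\langle \alpha_1,\beta \rangle = \langle \alpha_1,\alpha_2 \rangle < 0 \ \text{ if } i = 2, \qquad
\langle \alpha_1,\beta \rangle = 0 \ \text{ if } i \geq 3,
\]
so any root of $\Phi_J$ orthogonal to $\alpha_1$ (in particular, any element of $\Phi_J^{\perp I}$, since $r_1 \in I$) satisfies $i \geq 3$ and hence lies in $\Phi_{J \smallsetminus \{r_1,r_2\}}$.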
This implies that the subset $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ of $\Phi_J^{\perp I} \cup \Pi_{I \cap J}$ cannot span $V_J$, as desired.
\end{proof}
To prove the next lemma (and some other results below), we give a list of all positive roots of the Coxeter group of type $E_8$.
The list is divided into six parts (Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}).
In the lists, we use the standard labelling $r_1,\dots,r_8$ of generators.
The coefficients of each root are placed at the same relative positions as the corresponding vertices of the Coxeter graph of type $E_8$ in Figure \ref{fig:finite_irreducible_Coxeter_groups}; for example, the last root $\gamma_{120}$ in Table \ref{tab:positive_roots_E_8_6} is $2\alpha_1 + 3\alpha_2 + 4\alpha_3 + 6\alpha_4 + 5\alpha_5 + 4\alpha_6 + 3\alpha_7 + 2\alpha_8$ (which is the highest root of type $E_8$).
In the columns for the actions of generators (the 4th to 11th columns), a blank cell means that the generator $r_j$ fixes the root $\gamma_i$ (or equivalently, $\langle \alpha_j,\gamma_i \rangle = 0$), while a cell filled with \lq\lq ---'' means that $\gamma_i = \alpha_j$.
Moreover, the positive roots of the parabolic subgroup of type $E_6$ (respectively, $E_7$) generated by $\{r_1,\dots,r_6\}$ (respectively, $\{r_1,\dots,r_7\}$) correspond to the rows indicated by \lq\lq $E_6$'' (respectively, \lq\lq $E_7$'').
By the data for actions of generators, it can be verified that the list indeed exhausts all the positive roots.
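As an independent check (a computational sketch, not used in any proof), one can mechanically close the simple roots under the simple reflections and count the resulting positive roots; the following script does this for type $E_8$, assuming the Cartan matrix corresponding to the adjacencies $r_1$--$r_3$, $r_2$--$r_4$, $r_3$--$r_4$, $r_4$--$r_5$, $r_5$--$r_6$, $r_6$--$r_7$, $r_7$--$r_8$ visible in the tables. It reports $120$ positive roots, in accordance with the numbering up to $\gamma_{120}$, and the highest root described above.
\begin{verbatim}
# Sketch: enumerate the roots of type E_8 by closing the simple roots
# under the simple reflections, then count the positive ones.
A = [[ 2, 0,-1, 0, 0, 0, 0, 0],   # Cartan matrix; row/column j corresponds to r_{j+1}
     [ 0, 2, 0,-1, 0, 0, 0, 0],
     [-1, 0, 2,-1, 0, 0, 0, 0],
     [ 0,-1,-1, 2,-1, 0, 0, 0],
     [ 0, 0, 0,-1, 2,-1, 0, 0],
     [ 0, 0, 0, 0,-1, 2,-1, 0],
     [ 0, 0, 0, 0, 0,-1, 2,-1],
     [ 0, 0, 0, 0, 0, 0,-1, 2]]

def reflect(j, root):
    # Image of a root (coefficient vector in the simple roots) under r_{j+1}.
    pairing = sum(A[j][k] * root[k] for k in range(8))
    image = list(root)
    image[j] -= pairing
    return tuple(image)

simple = [tuple(int(k == j) for k in range(8)) for j in range(8)]
roots, frontier = set(simple), set(simple)
while frontier:                    # orbit of the simple roots under the simple reflections
    frontier = {reflect(j, b) for b in frontier for j in range(8)} - roots
    roots |= frontier

positive = [r for r in roots if min(r) >= 0]
print(len(roots), len(positive))   # prints: 240 120
print(max(positive, key=sum))      # prints: (2, 3, 4, 6, 5, 4, 3, 2)
\end{verbatim}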
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $E_8$ (part $1$)}
\label{tab:positive_roots_E_8_1}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11}
height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}\cline{1-11}
$1$ & $1$ & \Eroot{1}{0}{0}{0}{0}{0}{0}{0} & --- & & $9$ & & & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $2$ & \Eroot{0}{1}{0}{0}{0}{0}{0}{0} & & --- & & $10$ & & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $3$ & \Eroot{0}{0}{1}{0}{0}{0}{0}{0} & $9$ & & --- & $11$ & & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $4$ & \Eroot{0}{0}{0}{1}{0}{0}{0}{0} & & $10$ & $11$ & --- & $12$ & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $5$ & \Eroot{0}{0}{0}{0}{1}{0}{0}{0} & & & & $12$ & --- & $13$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $6$ & \Eroot{0}{0}{0}{0}{0}{1}{0}{0} & & & & & $13$ & --- & $14$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $7$ & \Eroot{0}{0}{0}{0}{0}{0}{1}{0} & & & & & & $14$ & --- & $15$ & & $E_7$ \\ \cline{2-11}
& $8$ & \Eroot{0}{0}{0}{0}{0}{0}{0}{1} & & & & & & & $15$ & --- \\ \cline{1-11}
$2$ & $9$ & \Eroot{1}{0}{1}{0}{0}{0}{0}{0} & $3$ & & $1$ & $16$ & & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $10$ & \Eroot{0}{1}{0}{1}{0}{0}{0}{0} & & $4$ & $17$ & $2$ & $18$ & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $11$ & \Eroot{0}{0}{1}{1}{0}{0}{0}{0} & $16$ & $17$ & $4$ & $3$ & $19$ & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $12$ & \Eroot{0}{0}{0}{1}{1}{0}{0}{0} & & $18$ & $19$ & $5$ & $4$ & $20$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $13$ & \Eroot{0}{0}{0}{0}{1}{1}{0}{0} & & & & $20$ & $6$ & $5$ & $21$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $14$ & \Eroot{0}{0}{0}{0}{0}{1}{1}{0} & & & & & $21$ & $7$ & $6$ & $22$ & & $E_7$ \\ \cline{2-11}
& $15$ & \Eroot{0}{0}{0}{0}{0}{0}{1}{1} & & & & & & $22$ & $8$ & $7$ \\ \cline{1-11}
$3$ & $16$ & \Eroot{1}{0}{1}{1}{0}{0}{0}{0} & $11$ & $23$ & & $9$ & $24$ & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $17$ & \Eroot{0}{1}{1}{1}{0}{0}{0}{0} & $23$ & $11$ & $10$ & & $25$ & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $18$ & \Eroot{0}{1}{0}{1}{1}{0}{0}{0} & & $12$ & $25$ & & $10$ & $26$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $19$ & \Eroot{0}{0}{1}{1}{1}{0}{0}{0} & $24$ & $25$ & $12$ & & $11$ & $27$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $20$ & \Eroot{0}{0}{0}{1}{1}{1}{0}{0} & & $26$ & $27$ & $13$ & & $12$ & $28$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $21$ & \Eroot{0}{0}{0}{0}{1}{1}{1}{0} & & & & $28$ & $14$ & & $13$ & $29$ & & $E_7$ \\ \cline{2-11}
& $22$ & \Eroot{0}{0}{0}{0}{0}{1}{1}{1} & & & & & $29$ & $15$ & & $14$ \\ \cline{1-11}
\end{tabular}
\end{table}
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $E_8$ (part $2$)}
\label{tab:positive_roots_E_8_2}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11}
height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}
$4$ & $23$ & \Eroot{1}{1}{1}{1}{0}{0}{0}{0} & $17$ & $16$ & & & $30$ & & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $24$ & \Eroot{1}{0}{1}{1}{1}{0}{0}{0} & $19$ & $30$ & & & $16$ & $31$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $25$ & \Eroot{0}{1}{1}{1}{1}{0}{0}{0} & $30$ & $19$ & $18$ & $32$ & $17$ & $33$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $26$ & \Eroot{0}{1}{0}{1}{1}{1}{0}{0} & & $20$ & $33$ & & & $18$ & $34$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $27$ & \Eroot{0}{0}{1}{1}{1}{1}{0}{0} & $31$ & $33$ & $20$ & & & $19$ & $35$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $28$ & \Eroot{0}{0}{0}{1}{1}{1}{1}{0} & & $34$ & $35$ & $21$ & & & $20$ & $36$ & & $E_7$ \\ \cline{2-11}
& $29$ & \Eroot{0}{0}{0}{0}{1}{1}{1}{1} & & & & $36$ & $22$ & & & $21$ \\ \cline{1-11}
$5$ & $30$ & \Eroot{1}{1}{1}{1}{1}{0}{0}{0} & $25$ & $24$ & & $37$ & $23$ & $38$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $31$ & \Eroot{1}{0}{1}{1}{1}{1}{0}{0} & $27$ & $38$ & & & & $24$ & $39$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $32$ & \Eroot{0}{1}{1}{2}{1}{0}{0}{0} & $37$ & & & $25$ & & $40$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $33$ & \Eroot{0}{1}{1}{1}{1}{1}{0}{0} & $38$ & $27$ & $26$ & $40$ & & $25$ & $41$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $34$ & \Eroot{0}{1}{0}{1}{1}{1}{1}{0} & & $28$ & $41$ & & & & $26$ & $42$ & & $E_7$ \\ \cline{2-11}
& $35$ & \Eroot{0}{0}{1}{1}{1}{1}{1}{0} & $39$ & $41$ & $28$ & & & & $27$ & $43$ & & $E_7$ \\ \cline{2-11}
& $36$ & \Eroot{0}{0}{0}{1}{1}{1}{1}{1} & & $42$ & $43$ & $29$ & & & & $28$ \\ \cline{1-11}
$6$ & $37$ & \Eroot{1}{1}{1}{2}{1}{0}{0}{0} & $32$ & & $44$ & $30$ & & $45$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $38$ & \Eroot{1}{1}{1}{1}{1}{1}{0}{0} & $33$ & $31$ & & $45$ & & $30$ & $46$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $39$ & \Eroot{1}{0}{1}{1}{1}{1}{1}{0} & $35$ & $46$ & & & & & $31$ & $47$ & & $E_7$ \\ \cline{2-11}
& $40$ & \Eroot{0}{1}{1}{2}{1}{1}{0}{0} & $45$ & & & $33$ & $48$ & $32$ & $49$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $41$ & \Eroot{0}{1}{1}{1}{1}{1}{1}{0} & $46$ & $35$ & $34$ & $49$ & & & $33$ & $50$ & & $E_7$ \\ \cline{2-11}
& $42$ & \Eroot{0}{1}{0}{1}{1}{1}{1}{1} & & $36$ & $50$ & & & & & $34$ \\ \cline{2-11}
& $43$ & \Eroot{0}{0}{1}{1}{1}{1}{1}{1} & $47$ & $50$ & $36$ & & & & & $35$ \\ \cline{1-11}
\end{tabular}
\end{table}
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $E_8$ (part $3$)}
\label{tab:positive_roots_E_8_3}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11}
height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}
$7$ & $44$ & \Eroot{1}{1}{2}{2}{1}{0}{0}{0} & & & $37$ & & & $51$ & & & $E_6$ & $E_7$ \\ \cline{2-11}
& $45$ & \Eroot{1}{1}{1}{2}{1}{1}{0}{0} & $40$ & & $51$ & $38$ & $52$ & $37$ & $53$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $46$ & \Eroot{1}{1}{1}{1}{1}{1}{1}{0} & $41$ & $39$ & & $53$ & & & $38$ & $54$ & & $E_7$ \\ \cline{2-11}
& $47$ & \Eroot{1}{0}{1}{1}{1}{1}{1}{1} & $43$ & $54$ & & & & & & $39$ \\ \cline{2-11}
& $48$ & \Eroot{0}{1}{1}{2}{2}{1}{0}{0} & $52$ & & & & $40$ & & $55$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $49$ & \Eroot{0}{1}{1}{2}{1}{1}{1}{0} & $53$ & & & $41$ & $55$ & & $40$ & $56$ & & $E_7$ \\ \cline{2-11}
& $50$ & \Eroot{0}{1}{1}{1}{1}{1}{1}{1} & $54$ & $43$ & $42$ & $56$ & & & & $41$ \\ \cline{1-11}
$8$ & $51$ & \Eroot{1}{1}{2}{2}{1}{1}{0}{0} & & & $45$ & & $57$ & $44$ & $58$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $52$ & \Eroot{1}{1}{1}{2}{2}{1}{0}{0} & $48$ & & $57$ & & $45$ & & $59$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $53$ & \Eroot{1}{1}{1}{2}{1}{1}{1}{0} & $49$ & & $58$ & $46$ & $59$ & & $45$ & $60$ & & $E_7$ \\ \cline{2-11}
& $54$ & \Eroot{1}{1}{1}{1}{1}{1}{1}{1} & $50$ & $47$ & & $60$ & & & & $46$ \\ \cline{2-11}
& $55$ & \Eroot{0}{1}{1}{2}{2}{1}{1}{0} & $59$ & & & & $49$ & $61$ & $48$ & $62$ & & $E_7$ \\ \cline{2-11}
& $56$ & \Eroot{0}{1}{1}{2}{1}{1}{1}{1} & $60$ & & & $50$ & $62$ & & & $49$ \\ \cline{1-11}
$9$ & $57$ & \Eroot{1}{1}{2}{2}{2}{1}{0}{0} & & & $52$ & $63$ & $51$ & & $64$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $58$ & \Eroot{1}{1}{2}{2}{1}{1}{1}{0} & & & $53$ & & $64$ & & $51$ & $65$ & & $E_7$ \\ \cline{2-11}
& $59$ & \Eroot{1}{1}{1}{2}{2}{1}{1}{0} & $55$ & & $64$ & & $53$ & $66$ & $52$ & $67$ & & $E_7$ \\ \cline{2-11}
& $60$ & \Eroot{1}{1}{1}{2}{1}{1}{1}{1} & $56$ & & $65$ & $54$ & $67$ & & & $53$ \\ \cline{2-11}
& $61$ & \Eroot{0}{1}{1}{2}{2}{2}{1}{0} & $66$ & & & & & $55$ & & $68$ & & $E_7$ \\ \cline{2-11}
& $62$ & \Eroot{0}{1}{1}{2}{2}{1}{1}{1} & $67$ & & & & $56$ & $68$ & & $55$ \\ \cline{1-11}
\end{tabular}
\end{table}
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $E_8$ (part $4$)}
\label{tab:positive_roots_E_8_4}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11}
height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}
$10$ & $63$ & \Eroot{1}{1}{2}{3}{2}{1}{0}{0} & & $69$ & & $57$ & & & $70$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $64$ & \Eroot{1}{1}{2}{2}{2}{1}{1}{0} & & & $59$ & $70$ & $58$ & $71$ & $57$ & $72$ & & $E_7$ \\ \cline{2-11}
& $65$ & \Eroot{1}{1}{2}{2}{1}{1}{1}{1} & & & $60$ & & $72$ & & & $58$ \\ \cline{2-11}
& $66$ & \Eroot{1}{1}{1}{2}{2}{2}{1}{0} & $61$ & & $71$ & & & $59$ & & $73$ & & $E_7$ \\ \cline{2-11}
& $67$ & \Eroot{1}{1}{1}{2}{2}{1}{1}{1} & $62$ & & $72$ & & $60$ & $73$ & & $59$ \\ \cline{2-11}
& $68$ & \Eroot{0}{1}{1}{2}{2}{2}{1}{1} & $73$ & & & & & $62$ & $74$ & $61$ \\ \cline{1-11}
$11$ & $69$ & \Eroot{1}{2}{2}{3}{2}{1}{0}{0} & & $63$ & & & & & $75$ & & $E_6$ & $E_7$ \\ \cline{2-11}
& $70$ & \Eroot{1}{1}{2}{3}{2}{1}{1}{0} & & $75$ & & $64$ & & $76$ & $63$ & $77$ & & $E_7$ \\ \cline{2-11}
& $71$ & \Eroot{1}{1}{2}{2}{2}{2}{1}{0} & & & $66$ & $76$ & & $64$ & & $78$ & & $E_7$ \\ \cline{2-11}
& $72$ & \Eroot{1}{1}{2}{2}{2}{1}{1}{1} & & & $67$ & $77$ & $65$ & $78$ & & $64$ \\ \cline{2-11}
& $73$ & \Eroot{1}{1}{1}{2}{2}{2}{1}{1} & $68$ & & $78$ & & & $67$ & $79$ & $66$ \\ \cline{2-11}
& $74$ & \Eroot{0}{1}{1}{2}{2}{2}{2}{1} & $79$ & & & & & & $68$ & \\ \cline{1-11}
$12$ & $75$ & \Eroot{1}{2}{2}{3}{2}{1}{1}{0} & & $70$ & & & & $80$ & $69$ & $81$ & & $E_7$ \\ \cline{2-11}
& $76$ & \Eroot{1}{1}{2}{3}{2}{2}{1}{0} & & $80$ & & $71$ & $82$ & $70$ & & $83$ & & $E_7$ \\ \cline{2-11}
& $77$ & \Eroot{1}{1}{2}{3}{2}{1}{1}{1} & & $81$ & & $72$ & & $83$ & & $70$ \\ \cline{2-11}
& $78$ & \Eroot{1}{1}{2}{2}{2}{2}{1}{1} & & & $73$ & $83$ & & $72$ & $84$ & $71$ \\ \cline{2-11}
& $79$ & \Eroot{1}{1}{1}{2}{2}{2}{2}{1} & $74$ & & $84$ & & & & $73$ & \\ \cline{1-11}
\end{tabular}
\end{table}
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $E_8$ (part $5$)}
\label{tab:positive_roots_E_8_5}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11}
height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}
$13$ & $80$ & \Eroot{1}{2}{2}{3}{2}{2}{1}{0} & & $76$ & & & $85$ & $75$ & & $86$ & & $E_7$ \\ \cline{2-11}
& $81$ & \Eroot{1}{2}{2}{3}{2}{1}{1}{1} & & $77$ & & & & $86$ & & $75$ \\ \cline{2-11}
& $82$ & \Eroot{1}{1}{2}{3}{3}{2}{1}{0} & & $85$ & & & $76$ & & & $87$ & & $E_7$ \\ \cline{2-11}
& $83$ & \Eroot{1}{1}{2}{3}{2}{2}{1}{1} & & $86$ & & $78$ & $87$ & $77$ & $88$ & $76$ \\ \cline{2-11}
& $84$ & \Eroot{1}{1}{2}{2}{2}{2}{2}{1} & & & $79$ & $88$ & & & $78$ & \\ \cline{1-11}
$14$ & $85$ & \Eroot{1}{2}{2}{3}{3}{2}{1}{0} & & $82$ & & $89$ & $80$ & & & $90$ & & $E_7$ \\ \cline{2-11}
& $86$ & \Eroot{1}{2}{2}{3}{2}{2}{1}{1} & & $83$ & & & $90$ & $81$ & $91$ & $80$ \\ \cline{2-11}
& $87$ & \Eroot{1}{1}{2}{3}{3}{2}{1}{1} & & $90$ & & & $83$ & & $92$ & $82$ \\ \cline{2-11}
& $88$ & \Eroot{1}{1}{2}{3}{2}{2}{2}{1} & & $91$ & & $84$ & $92$ & & $83$ & \\ \cline{1-11}
$15$ & $89$ & \Eroot{1}{2}{2}{4}{3}{2}{1}{0} & & & $93$ & $85$ & & & & $94$ & & $E_7$ \\ \cline{2-11}
& $90$ & \Eroot{1}{2}{2}{3}{3}{2}{1}{1} & & $87$ & & $94$ & $86$ & & $95$ & $85$ \\ \cline{2-11}
& $91$ & \Eroot{1}{2}{2}{3}{2}{2}{2}{1} & & $88$ & & & $95$ & & $86$ & \\ \cline{2-11}
& $92$ & \Eroot{1}{1}{2}{3}{3}{2}{2}{1} & & $95$ & & & $88$ & $96$ & $87$ & \\ \cline{1-11}
$16$ & $93$ & \Eroot{1}{2}{3}{4}{3}{2}{1}{0} & $97$ & & $89$ & & & & & $98$ & & $E_7$ \\ \cline{2-11}
& $94$ & \Eroot{1}{2}{2}{4}{3}{2}{1}{1} & & & $98$ & $90$ & & & $99$ & $89$ \\ \cline{2-11}
& $95$ & \Eroot{1}{2}{2}{3}{3}{2}{2}{1} & & $92$ & & $99$ & $91$ & $100$ & $90$ & \\ \cline{2-11}
& $96$ & \Eroot{1}{1}{2}{3}{3}{3}{2}{1} & & $100$ & & & & $92$ & & \\ \cline{1-11}
$17$ & $97$ & \Eroot{2}{2}{3}{4}{3}{2}{1}{0} & $93$ & & & & & & & $101$ & & $E_7$ \\ \cline{2-11}
& $98$ & \Eroot{1}{2}{3}{4}{3}{2}{1}{1} & $101$ & & $94$ & & & & $102$ & $93$ \\ \cline{2-11}
& $99$ & \Eroot{1}{2}{2}{4}{3}{2}{2}{1} & & & $102$ & $95$ & & $103$ & $94$ & \\ \cline{2-11}
& $100$ & \Eroot{1}{2}{2}{3}{3}{3}{2}{1} & & $96$ & & $103$ & & $95$ & & \\ \cline{1-11}
\end{tabular}
\end{table}
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $E_8$ (part $6$)}
\label{tab:positive_roots_E_8_6}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|cc} \cline{1-11}
height & $i$ & root $\gamma_i$ & \multicolumn{8}{|c|}{index $k$ with $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-11}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ & $r_5$ & $r_6$ & $r_7$ & $r_8$ \\ \cline{1-11}
$18$ & $101$ & \Eroot{2}{2}{3}{4}{3}{2}{1}{1} & $98$ & & & & & & $104$ & $97$ \\ \cline{2-11}
& $102$ & \Eroot{1}{2}{3}{4}{3}{2}{2}{1} & $104$ & & $99$ & & & $105$ & $98$ & \\ \cline{2-11}
& $103$ & \Eroot{1}{2}{2}{4}{3}{3}{2}{1} & & & $105$ & $100$ & $106$ & $99$ & & \\ \cline{1-11}
$19$ & $104$ & \Eroot{2}{2}{3}{4}{3}{2}{2}{1} & $102$ & & & & & $107$ & $101$ & \\ \cline{2-11}
& $105$ & \Eroot{1}{2}{3}{4}{3}{3}{2}{1} & $107$ & & $103$ & & $108$ & $102$ & & \\ \cline{2-11}
& $106$ & \Eroot{1}{2}{2}{4}{4}{3}{2}{1} & & & $108$ & & $103$ & & & \\ \cline{1-11}
$20$ & $107$ & \Eroot{2}{2}{3}{4}{3}{3}{2}{1} & $105$ & & & & $109$ & $104$ & & \\ \cline{2-11}
& $108$ & \Eroot{1}{2}{3}{4}{4}{3}{2}{1} & $109$ & & $106$ & $110$ & $105$ & & & \\ \cline{1-11}
$21$ & $109$ & \Eroot{2}{2}{3}{4}{4}{3}{2}{1} & $108$ & & & $111$ & $107$ & & & \\ \cline{2-11}
& $110$ & \Eroot{1}{2}{3}{5}{4}{3}{2}{1} & $111$ & $112$ & & $108$ & & & & \\ \cline{1-11}
$22$ & $111$ & \Eroot{2}{2}{3}{5}{4}{3}{2}{1} & $110$ & $113$ & $114$ & $109$ & & & & \\ \cline{2-11}
& $112$ & \Eroot{1}{3}{3}{5}{4}{3}{2}{1} & $113$ & $110$ & & & & & & \\ \cline{1-11}
$23$ & $113$ & \Eroot{2}{3}{3}{5}{4}{3}{2}{1} & $112$ & $111$ & $115$ & & & & & \\ \cline{2-11}
& $114$ & \Eroot{2}{2}{4}{5}{4}{3}{2}{1} & & $115$ & $111$ & & & & & \\ \cline{1-11}
$24$ & $115$ & \Eroot{2}{3}{4}{5}{4}{3}{2}{1} & & $114$ & $113$ & $116$ & & & & \\ \cline{1-11}
$25$ & $116$ & \Eroot{2}{3}{4}{6}{4}{3}{2}{1} & & & & $115$ & $117$ & & & \\ \cline{1-11}
$26$ & $117$ & \Eroot{2}{3}{4}{6}{5}{3}{2}{1} & & & & & $116$ & $118$ & & \\ \cline{1-11}
$27$ & $118$ & \Eroot{2}{3}{4}{6}{5}{4}{2}{1} & & & & & & $117$ & $119$ & \\ \cline{1-11}
$28$ & $119$ & \Eroot{2}{3}{4}{6}{5}{4}{3}{1} & & & & & & & $118$ & $120$ \\ \cline{1-11}
$29$ & $120$ & \Eroot{2}{3}{4}{6}{5}{4}{3}{2} & & & & & & & & $119$ \\ \cline{1-11}
\end{tabular}
\end{table}
Then we have the following:
\begin{lem}
\label{lem:possibility_J_is_E_6}
In this setting, if $J$ is of type $E_6$, then $|I \cap J| = 1$.
\end{lem}
\begin{proof}
By the property $N \geq |I \cap J| + 2$ and the $A_{>1}$-freeness of $I$, it follows that $I \cap J$ is either $\{r_2,r_3,r_4,r_5\}$ (of type $D_4$) or the union of irreducible components of type $A_1$.
In the former case, we have $\Phi_J^{\perp I} = \emptyset$ (see Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}), a contradiction.
Therefore, $I \cap J$ consists of irreducible components of type $A_1$.
Now suppose to the contrary that $I \cap J$ is not irreducible.
Then, by applying successive local transformations and using symmetry, we may assume without loss of generality that $r_1 \in I$ (cf.\ the proof of Lemma \ref{lem:J_not_A_N}).
Now we have $\Pi^{J,\{r_1\}} = \{\alpha_2,\alpha_4,\alpha_5,\alpha_6,\alpha'\}$ which is the standard labelling of type $A_5$, where $\alpha'$ is the root $\gamma_{44}$ in Table \ref{tab:positive_roots_E_8_3}.
Note that $\Pi_{(I \cap J) \smallsetminus \{r_1\}} \subseteq \Pi^{J,\{r_1\}}$.
Now the same argument as in the proof of Lemma \ref{lem:J_not_A_N} implies that the subspace $V'$ spanned by $\Pi^{J,I \cap J} \cup \Pi_{(I \cap J) \smallsetminus \{r_1\}}$ is a proper subspace of the space spanned by $\Pi^{J,\{r_1\}}$, therefore $\dim V' < 5$.
This implies that the subspace spanned by $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$, which is the sum of $V'$ and $\mathbb{R}\alpha_1$, has dimension less than $6 = \dim V_J$, contradicting the fact that $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ spans $V_J$ (see Lemma \ref{lem:Psi_is_full}).
Hence $I \cap J$ is irreducible, therefore the claim holds.
\end{proof}
We also give a list of all positive roots of the Coxeter group of type $D_n$ (Table \ref{tab:positive_roots_D_n}) in order to prove the next lemma (and some other results below).
The notation is similar to that of the above case of type $E_8$.
Regarding the data for the actions of generators on the roots, if the action $r_k \cdot \gamma$ does not appear in the list, then either $r_k$ fixes $\gamma$ (or equivalently, $\gamma$ is orthogonal to $\alpha_k$), or $\gamma = \alpha_k$.
Again, these data imply that the list indeed exhausts all the positive roots.
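As a quick cardinality check, the four families in Table \ref{tab:positive_roots_D_n} contain $\binom{n-1}{2}$, $n-1$, $n-1$ and $\binom{n-1}{2}$ roots, respectively, so the list has
\[
\binom{n-1}{2} + (n-1) + (n-1) + \binom{n-1}{2} = n(n-1)
\]
entries in total, which is the number of positive roots of a root system of type $D_n$.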
\begin{table}[hbt]
\centering
\caption{List of positive roots for Coxeter group of type $D_n$}
\label{tab:positive_roots_D_n}
\begin{tabular}{|c|c|} \hline
roots & actions of generators \\ \hline
$\gamma^{(1)}_{i,j} := \sum_{h=i}^{j} \alpha_h$ & $r_{i-1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i-1,j}$ ($i \geq 2$) \\
($1 \leq i \leq j \leq n-2$) & $r_i \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i+1,j}$ ($i \leq j-1$) \\
($\gamma^{(1)}_{i,i} = \alpha_i$) & $r_j \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j-1}$ ($i \leq j-1$) \\
& $r_{j+1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j+1}$ ($j \leq n-3$) \\
& $r_{n-1} \cdot \gamma^{(1)}_{i,n-2} = \gamma^{(2)}_i$ \\
& $r_n \cdot \gamma^{(1)}_{i,n-2} = \gamma^{(3)}_i$ \\ \hline
$\gamma^{(2)}_i := \sum_{h=i}^{n-1} \alpha_h$ & $r_{i-1} \cdot \gamma^{(2)}_i = \gamma^{(2)}_{i-1}$ ($i \geq 2$) \\
($1 \leq i \leq n-1$) & $r_i \cdot \gamma^{(2)}_i = \gamma^{(2)}_{i+1}$ ($i \leq n-2$) \\
($\gamma^{(2)}_{n-1} = \alpha_{n-1}$) & $r_{n-1} \cdot \gamma^{(2)}_i = \gamma^{(1)}_{i,n-2}$ ($i \leq n-2$) \\
& $r_n \cdot \gamma^{(2)}_i = \gamma^{(4)}_{i,n-1}$ ($i \leq n-2$) \\ \hline
$\gamma^{(3)}_i := \sum_{h=i}^{n-2} \alpha_h + \alpha_n$ & $r_{i-1} \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i-1}$ ($i \geq 2$) \\
($1 \leq i \leq n-1$) & $r_i \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i+1}$ ($i \leq n-2$) \\
($\gamma^{(3)}_{n-1} = \alpha_n$) & $r_n \cdot \gamma^{(3)}_i = \gamma^{(1)}_{i,n-2}$ ($i \leq n-2$) \\
& $r_{n-1} \cdot \gamma^{(3)}_i = \gamma^{(4)}_{i,n-1}$ ($i \leq n-2$) \\ \hline
$\gamma^{(4)}_{i,j} := \sum_{h=i}^{j-1} \alpha_h + \sum_{h=j}^{n-2} 2\alpha_h + \alpha_{n-1} + \alpha_n$ & $r_{i-1} \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i-1,j}$ ($i \geq 2$) \\
($1 \leq i < j \leq n-1$) & $r_i \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i+1,j}$ ($i \leq j-2$) \\
& $r_{j-1} \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i,j-1}$ ($i \leq j-2$) \\
& $r_j \cdot \gamma^{(4)}_{i,j} = \gamma^{(4)}_{i,j+1}$ ($j \leq n-2$) \\
& $r_{n-1} \cdot \gamma^{(4)}_{i,n-1} = \gamma^{(3)}_i$ \\
& $r_n \cdot \gamma^{(4)}_{i,n-1} = \gamma^{(2)}_i$ \\ \hline
\end{tabular}
\end{table}
Then we have the following:
\begin{lem}
\label{lem:possibility_J_is_D_N}
In this setting, suppose that $J$ is of type $D_N$.
\begin{enumerate}
\item \label{item:lem_possibility_J_is_D_N_case_1}
If $I \cap J$ has an irreducible component of type $D_k$ with $k \geq 4$ and $N - k$ is odd, then we have $|I \cap J| \leq k + (N-k-3)/2$.
\item \label{item:lem_possibility_J_is_D_N_case_2}
If $N$ is odd, $I \cap J$ does not have an irreducible component of type $D_k$ with $k \geq 4$ and $\{r_{N-1},r_N\} \not\subseteq I$, then we have $|I \cap J| \leq (N-3)/2$.
\item \label{item:lem_possibility_J_is_D_N_case_3}
If $N$ is odd, $I \cap J$ does not have an irreducible component of type $D_k$ with $k \geq 4$ and $\{r_{N-1},r_N\} \subseteq I$, then we have $|I \cap J| \leq (N-1)/2$.
\end{enumerate}
\end{lem}
\begin{proof}
Suppose to the contrary that the hypothesis of one of the three cases in the statement is satisfied but the inequality in the conclusion does not hold.
We show that $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ cannot span $V_J$, which contradicts Lemma \ref{lem:Psi_is_full} and therefore concludes the proof.
First, recall the property $N \geq |I \cap J| + 2$ and the $A_{>1}$-freeness of $I$.
Then, in the case \ref{item:lem_possibility_J_is_D_N_case_1}, by applying successive local transformations, we may assume without loss of generality that $I \cap J$ consists of elements $r_{2j}$ with $1 \leq j \leq (N-k-1)/2$ and $r_j$ with $N-k+1 \leq j \leq N$.
Similarly, in the case \ref{item:lem_possibility_J_is_D_N_case_2} (respectively, the case \ref{item:lem_possibility_J_is_D_N_case_3}), by applying successive local transformations and using symmetry, we may assume without loss of generality that $I \cap J$ consists of elements $r_{2j}$ with $1 \leq j \leq (N-1)/2$ (respectively, $r_{2j}$ with $1 \leq j \leq (N-1)/2$ and $r_N$).
In any case, we have $\Phi_J^{\perp I} \subseteq \Phi_{J \smallsetminus \{r_1\}}$ (see Table \ref{tab:positive_roots_D_n}), therefore the subspace spanned by $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ is contained in $V_{J \smallsetminus \{r_1\}}$.
Hence $\Pi^{J,I \cap J} \cup \Pi_{I \cap J}$ cannot span $V_J$, concluding the proof.
\end{proof}
We divide the following argument into two cases.
\subsubsection{Case $w \cdot \Pi_J \not\subseteq \Phi^+$}
\label{sec:proof_special_first_case_subcase_1}
In order to prove that $w \cdot \Pi_J \subseteq \Phi^+$, here we suppose to the contrary that $w \cdot \Pi_J \not\subseteq \Phi^+$ and derive a contradiction.
In this setting, we construct a decomposition of $w$ in the following manner.
Take an element $s \in J$ with $w \cdot \alpha_s \in \Phi^-$.
By Lemma \ref{lem:rightdivisor}, the element $w_{x_I}^s$ is a right divisor of $w$.
This implies that $\Phi^{\perp I}[w_{x_I}^s] \subseteq \Phi^{\perp I}[w] = \emptyset$ (see Lemma 2.2 of \cite{Nui11} for the first inclusion), therefore we have $w_{x_I}^s \in Y_{y,x_I}$ where we put $y := \varphi(x_I,s) \in S^{(\Lambda)}$.
By Proposition \ref{prop:charofBphi}, we have $y \neq x_I$.
This element $w_{x_I}^s$ induces a local transformation $x_I \mapsto y$.
Now if $w(w_{x_I}^s)^{-1} \cdot \Pi_J \not\subseteq \Phi^+$, then we can similarly factor out from $w(w_{x_I}^s)^{-1}$ a right divisor of the form $w_y^t \in Y_{\varphi(y,t),y}$ with $t \in J$.
Iterating this process, we finally obtain a decomposition of $w$ of the form $w = u w_{y_{n-1}}^{s_{n-1}} \cdots w_{y_1}^{s_1} w_{y_0}^{s_0}$ satisfying that $n \geq 1$, $u \in Y_{x_I,z}$ with $z \in S^{(\Lambda)}$, $w_{y_i}^{s_i} \in Y_{y_{i+1},y_i} \cap W_J$ for every $0 \leq i \leq n-1$ where we put $y_0 = x_I$ and $y_n = z$, and $u \cdot \Pi_J \subseteq \Phi^+$.
Put $u' := w_{y_{n-1}}^{s_{n-1}} \cdots w_{y_1}^{s_1} w_{y_0}^{s_0} \neq 1$.
By the construction, the action of $u' \in Y_{z,x_I} \cap W_J$ induces (as the composition of successive local transformations) an isomorphism $\sigma \colon I \cap J \to [z] \cap J$, $t \mapsto u' \ast t$, while $u'$ fixes every element of $\Pi_{I \smallsetminus J}$.
Now $\sigma$ is not the identity mapping; otherwise, we would have $z = x_I$ and $1 \neq u' \in Y_{x_I,x_I}$, while $u'$ has finite order since $|W_J| < \infty$, contradicting Proposition \ref{prop:Yistorsionfree}.
On the other hand, we have $u \cdot \Phi_J = wu'{}^{-1} \cdot \Phi_J = w \cdot \Phi_J = \Phi_J$, therefore $u \cdot \Phi_J^+ = \Phi_J^+$ since $u \cdot \Pi_J \subseteq \Phi^+$.
This implies that $u \cdot \Pi_J = \Pi_J$, therefore the action of $u$ defines an automorphism $\tau$ of $J$.
Since $w = u u' \in Y_I$, the composite mapping $\tau \circ \sigma$ is the identity mapping on $I \cap J$, while $\sigma$ is not the identity, as shown above.
As a consequence, we have $\tau^{-1}|_{I \cap J} = \sigma$ and hence $\tau^{-1}$ is a nontrivial automorphism of $J$, therefore the possibilities of the type of $J$ are $D_N$, $E_6$ and $F_4$ (recall that $J$ is neither of type $A_N$ nor of type $I_2(m)$).
\begin{lem}
\label{lem:proof_special_first_case_subcase_1_not_F_4}
In this setting, $J$ is not of type $F_4$.
\end{lem}
\begin{proof}
Suppose to the contrary that $J = \{r_1,r_2,r_3,r_4\}$ is of type $F_4$.
In this case, neither $r_1$ nor $r_2$ is conjugate in $W_J$ to $r_3$ or $r_4$, by the well-known fact that the conjugacy classes of the simple reflections $r_i$ are determined by the connected components of the graph obtained from the Coxeter graph by removing all edges having non-odd labels (in type $F_4$, the only such edge is the one labelled $4$ joining $r_2$ and $r_3$, so removing it leaves the two components $\{r_1,r_2\}$ and $\{r_3,r_4\}$).
Therefore, the mapping $\tau^{-1}|_{I \cap J} = \sigma$ induced by the action of $u' \in W_J$ cannot map an element $r_i$ ($1 \leq i \leq 4$) to $r_{5-i}$.
This contradicts the fact that $\tau^{-1}$ is a nontrivial automorphism of $J$.
Hence the claim holds.
\end{proof}
From now on, we consider the remaining case that $J$ is either of type $D_N$ with $4 \leq N < \infty$ or of type $E_6$.
Take a standard decomposition $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ of $u \in Y_{x_I,z}$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}).
Note that $J$ is irreducible and $J \not\subseteq [z]$.
This implies that, if $0 \leq i \leq \ell(\mathcal{D})-1$ and $\omega_j$ is a narrow transformation for every $0 \leq j \leq i$, then it follows by induction on $0 \leq j \leq i$ that the support of $\omega_j$ is apart from $J$, the product $\omega_j \cdots \omega_1\omega_0$ fixes $\Pi_J$ pointwise, $[y^{(j+1)}] \cap J = [z] \cap J$, and $[y^{(j+1)}] \smallsetminus J$ is not adjacent to $J$ (note that $[z] \smallsetminus J = I \smallsetminus J$ is not adjacent to $J$).
By these properties, since $u$ does not fix $\Pi_J$ pointwise, $\mathcal{D}$ contains at least one wide transformation.
Let $\omega := \omega_i$ be the first (from right) wide transformation in $\mathcal{D}$, and write $y = y^{(i)}(\mathcal{D})$, $t = t^{(i)}(\mathcal{D})$ and $K = K^{(i)}(\mathcal{D})$ for simplicity.
Note that $J^{(i)}(\mathcal{D}) = J$ by the above argument.
Note also that $\Pi^{K,[y] \cap K} \subseteq \Pi^{[y]}$, since $[y] \smallsetminus K$ is not adjacent to $K$ by the definition of $K$.
Now the action of $\omega_{i-1} \cdots \omega_1\omega_0 u' \in Y_{y,x_I}$ induces an isomorphism $\Pi^I \to \Pi^{[y]}$ which maps $\Pi^{J,I \cap J}$ onto $\Pi^{J,[y] \cap J} = \Pi^{J,[z] \cap J}$.
Hence we have the following (recall that $\Pi^{J,I \cap J}$ is the union of some irreducible components of $\Pi^I$):
\begin{lem}
\label{lem:proof_special_first_case_subcase_1_local_irreducible_components}
In this setting, $\Pi^{J,[y] \cap J}$ is isomorphic to $\Pi^{J,I \cap J}$ and is the union of some irreducible components of $\Pi^{[y]}$.
In particular, each element of $\Pi^{J,[y] \cap J}$ is orthogonal to any element of $\Pi^{K,[y] \cap K} \smallsetminus \Phi_J$.
\end{lem}
Now note that $K = ([y] \cup J)_{\sim t}$ is irreducible and of finite type, and $t$ is adjacent to $[y]$.
Moreover, by Lemma \ref{lem:another_decomposition_Y_no_loop}, the element $\omega = \omega_{y,J}^{t}$ does not fix $\Pi_{K \smallsetminus \{t\}}$ pointwise.
By these properties and symmetry, we may assume without loss of generality that the possibilities of $K$ are as follows:
\begin{enumerate}
\item \label{item:proof_special_first_case_subcase_1_J_E_6}
$J$ is of type $E_6$, and;
\begin{enumerate}
\item \label{item:proof_special_first_case_subcase_1_J_E_6_K_E_8}
$K = J \cup \{t,t'\}$ is of type $E_8$ where $t$ is adjacent to $r_6$ and $t'$, and $t' \in [y]$,
\item \label{item:proof_special_first_case_subcase_1_J_E_6_K_E_7}
$K = J \cup \{t\}$ is of type $E_7$ where $t$ is adjacent to $r_6$, and $r_6 \in [y]$,
\end{enumerate}
\item \label{item:proof_special_first_case_subcase_1_J_D_7}
$J$ is of type $D_7$, $K = J \cup \{t\}$ is of type $E_8$ where $t$ is adjacent to $r_7$, and $r_7 \in [y]$,
\item \label{item:proof_special_first_case_subcase_1_J_D_5}
$J$ is of type $D_5$, and;
\begin{enumerate}
\item \label{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}
$K = J \cup \{t,t'\}$ is of type $E_7$ where $t$ is adjacent to $r_5$ and $t'$, and $t' \in [y]$,
\item \label{item:proof_special_first_case_subcase_1_J_D_5_K_E_6}
$K = J \cup \{t\}$ is of type $E_6$ where $t$ is adjacent to $r_5$, and $r_5 \in [y]$,
\end{enumerate}
\item \label{item:proof_special_first_case_subcase_1_J_D_N}
$J$ is of type $D_N$, $K = J \cup \{t\}$ is of type $D_{N+1}$ where $t$ is adjacent to $r_1$, and $r_1 \in [y]$.
\end{enumerate}
We consider Case \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_8}.
We have $|[y] \cap J| = |I \cap J| = 1$ by Lemma \ref{lem:possibility_J_is_E_6}.
Now by Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6} (where $r_7 = t$ and $r_8 = t'$), we have $\langle \beta,\beta' \rangle \neq 0$ for some $\beta \in \Pi^{J,[y] \cap J}$ and $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ (namely, $(\beta,\beta') = (\alpha_4,\gamma_{84})$ when $[y] \cap J = \{r_1\}$; $(\beta,\beta') = (\gamma_{16},\gamma_{74})$ when $[y] \cap J = \{r_3\}$; and $(\beta,\beta') = (\alpha_1,\gamma_{74})$ when $[y] \cap J = \{r_j\}$ with $j \in \{2,4,5,6\}$, where the roots $\gamma_k$ are as in Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}).
This contradicts Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}.
We consider Case \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_7}.
We have $|[y] \cap J| = |I \cap J| = 1$ by Lemma \ref{lem:possibility_J_is_E_6}, hence $[y] \cap J = \{r_6\}$.
Now we have $\alpha_5 + \alpha_6 + \alpha_t \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$, $\alpha_4 \in \Pi^{J,[y] \cap J}$, and these two roots are not orthogonal, contradicting Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}.
We consider Case \ref{item:proof_special_first_case_subcase_1_J_D_7}.
Note that $N = 7 \geq |I \cap J| + 2 = |[y] \cap J| + 2$, therefore $|[y] \cap J| \leq 5$.
By Lemma \ref{lem:possibility_J_is_D_N} and $A_{>1}$-freeness of $I$, it follows that the possibilities of $[y] \cap J$ are as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_7}, where we put $(r'_1,r'_2,r'_3,r'_4,r'_5,r'_6,r'_7,r'_8) = (t,r_6,r_7,r_5,r_4,r_3,r_2,r_1)$ (hence $K = \{r'_1,\dots,r'_8\}$ is the standard labelling of type $E_8$).
Now by Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}, we have $\langle \beta,\beta' \rangle \neq 0$ for some $\beta \in \Pi^{J,[y] \cap J}$ and $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_7}, where we write $\alpha'_j = \alpha_{r'_j}$ and the roots $\gamma_k$ are as in Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}.
This contradicts Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}.
\begin{table}[hbt]
\centering
\caption{List of roots for Case \ref{item:proof_special_first_case_subcase_1_J_D_7}}
\label{tab:proof_special_first_case_subcase_1_J_D_7}
\begin{tabular}{|c|c|c|} \hline
$[y] \cap J$ & $\beta$ & $\beta'$ \\ \hline
$r'_3 \in [y] \cap J \subseteq \{r'_3,r'_6,r'_7,r'_8\}$ & $\alpha'_2$ & $\gamma_{16}$ \\ \hline
$\{r'_3,r'_5\}$ & $\alpha'_2$ & $\gamma_{31}$ \\ \hline
$\{r'_2,r'_3\} \subseteq [y] \cap J \subseteq \{r'_2,r'_3,r'_4,r'_5,r'_6\}$ & $\alpha'_8$ & $\gamma_{97}$ \\ \cline{1-2}
$\{r'_2,r'_3,r'_7\}$ & $\gamma_{22}$ & \\ \hline
$\{r'_2,r'_3,r'_8\}$ & $\alpha'_6$ & $\gamma_{104}$ \\ \hline
\end{tabular}
\end{table}
We consider Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}.
Note that $N = 5 \geq |I \cap J| + 2 = |[y] \cap J| + 2$, therefore $|[y] \cap J| \leq 3$.
By $A_{>1}$-freeness of $I$, every irreducible component of $[y] \cap J$ is of type $A_1$.
Now by Lemma \ref{lem:possibility_J_is_D_N}, the possibilities of $[y] \cap J$ are as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_5_K_E_7}, where we put $(r'_1,r'_2,r'_3,r'_4,r'_5,r'_6,r'_7) = (r_1,r_4,r_2,r_3,r_5,t,t')$ (hence $K = \{r'_1,\dots,r'_7\}$ is the standard labelling of type $E_7$).
Now by Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}, we have $\langle \beta,\beta' \rangle \neq 0$ for some $\beta \in \Pi^{J,[y] \cap J}$ and $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ as listed in Table \ref{tab:proof_special_first_case_subcase_1_J_D_5_K_E_7}, where we write $\alpha'_j = \alpha_{r'_j}$ and the roots $\gamma_k$ are as in Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}.
This contradicts Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}.
\begin{table}[hbt]
\centering
\caption{List of roots for Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}}
\label{tab:proof_special_first_case_subcase_1_J_D_5_K_E_7}
\begin{tabular}{|c|c|c|} \hline
$[y] \cap J$ & $\beta$ & $\beta'$ \\ \hline
$[y] \cap J \subseteq \{r'_2,r'_4,r'_5\}$ & $\alpha'_1$ & $\gamma_{61}$ \\ \cline{1-2}
$\{r'_3\}$ & $\gamma_{16}$ & \\ \hline
$\{r'_1\}$ & $\alpha'_4$ & $\gamma_{71}$ \\ \hline
\end{tabular}
\end{table}
We consider Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_6}.
For the same reason as in Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7}, every irreducible component of $[y] \cap J$ is of type $A_1$.
Now by Lemma \ref{lem:possibility_J_is_D_N}, there are only two possibilities for $[y] \cap J$: $\{r_5\}$ and $\{r_4,r_5\}$.
In the first case $[y] \cap J = \{r_5\}$, we have $\alpha_2 \in \Pi^{J,[y] \cap J}$, $\alpha_3 + \alpha_5 + \alpha_t \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$, and these two roots are not orthogonal, contradicting Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}.
Hence we consider the second case $[y] \cap J = \{r_4,r_5\}$.
In this case, the action of the first wide transformation $\omega$ in $\mathcal{D}$ maps the elements $r_1$, $r_2$, $r_3$, $r_4$ and $r_5$ to $t$, $r_5$, $r_3$, $r_2$ and $r_4$, respectively (note that $\{t,r_5,r_3,r_2,r_4\}$ is the standard labelling of type $D_5$).
Now, by an argument similar to the above, the second wide transformation $\omega_{i'}$ in $\mathcal{D}$ (if it exists) must be as in Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_6}, where $t'' := t^{(i')}(\mathcal{D})$ is adjacent to either $r_2$ or $r_4$ (note that Case \ref{item:proof_special_first_case_subcase_1_J_D_5_K_E_7} cannot occur as discussed above, while Case \ref{item:proof_special_first_case_subcase_1_J_D_N} cannot occur by the shape of $J$ and the property $r_1 \not\in [y] \cap J$).
This implies that the action of $\omega_{i'}$ either maps the elements $t$, $r_5$, $r_3$, $r_4$ and $r_2$ to $t''$, $r_2$, $r_3$, $r_5$ and $r_4$, respectively (forming a subset of type $D_5$ with the ordering being the standard labelling), or maps the elements $t$, $r_5$, $r_3$, $r_2$ and $r_4$ to $t''$, $r_4$, $r_3$, $r_5$ and $r_2$, respectively (forming a subset of type $D_5$ with the ordering being the standard labelling).
By iterating the same argument, it follows that the sequence of elements $(r_2,r_3,r_4,r_5)$ is mapped by successive wide transformations in $\mathcal{D}$ to one of the following three sequences: $(r_2,r_3,r_4,r_5)$, $(r_5,r_3,r_2,r_4)$ and $(r_4,r_3,r_5,r_2)$.
Hence $u$ itself should map $(r_2,r_3,r_4,r_5)$ to one of the above three sequences; while the action of $u$ induces the nontrivial automorphism $\tau$ of $J$, which maps $(r_1,r_2,r_3,r_4,r_5)$ to $(r_1,r_2,r_3,r_5,r_4)$.
This is a contradiction.
Finally, we consider Case \ref{item:proof_special_first_case_subcase_1_J_D_N}.
First we have the following lemma:
\begin{lem}
\label{lem:proof_special_first_case_subcase_1_J_D_N}
In this setting, suppose further that there exists an integer $k \geq 1$ satisfying that $2k \leq N - 3$, $r_{2j - 1} \in [y]$ and $r_{2j} \not\in [y]$ for every $1 \leq j \leq k$, and $r_{2k + 1} \not\in [y]$.
Then there exist a root $\beta \in \Pi^{J,[y] \cap J}$ and a root $\beta' \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ with $\langle \beta,\beta' \rangle \neq 0$.
\end{lem}
\begin{proof}
Put $J' := \{r_j \mid 2k+1 \leq j \leq N\}$.
First, we have $\beta' := \alpha_t + \sum_{j=1}^{2k} \alpha_j \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ in this case.
On the other hand, $\Pi^{J,[y] \cap J} \smallsetminus \Phi_{J'}$ consists of $k$ roots $\gamma^{(4)}_{2j-1,2j}$ with $1 \leq j \leq k$ (see Table \ref{tab:positive_roots_D_n} for the notation), while $\Pi_{[y] \cap J} \smallsetminus \Phi_{J'}$ consists of $k$ roots $\alpha_{2j-1}$ with $1 \leq j \leq k$.
Hence $|(\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}) \smallsetminus \Phi_{J'}| = 2k$.
Since $\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}$ is a basis of the space $V_J$ of dimension $N$, it follows that the subset $(\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}) \cap \Phi_{J'}$ spans a subspace of dimension $N - 2k = |J'|$.
This implies that $(\Pi^{J,[y] \cap J} \cup \Pi_{[y] \cap J}) \cap \Phi_{J'} \not\subseteq \Phi_{J' \smallsetminus \{r_{2k+1}\}}$, therefore (since $\alpha_{2k+1} \not\in \Pi_{[y] \cap J}$) we have $\Pi^{J,[y] \cap J} \cap \Phi_{J'} \not\subseteq \Phi_{J' \smallsetminus \{r_{2k+1}\}}$, namely there exists a root $\beta \in \Pi^{J,[y] \cap J} \cap \Phi_{J'}$ in which the coefficient of $\alpha_{2k+1}$ is non-zero.
These $\beta$ and $\beta'$ satisfy $\langle \beta,\beta' \rangle \neq 0$ by the construction, concluding the proof.
\end{proof}
By Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} and Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}, the hypothesis of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} should not hold.
By this fact, the $A_{>1}$-freeness of $I$, and the property $N \geq |I \cap J| + 2 = |[y] \cap J| + 2$, it follows that the possibilities for $[y] \cap J$ are as follows (up to the symmetry $r_{N-1} \leftrightarrow r_N$): (I) $[y] \cap J = J \smallsetminus \{r_{2j} \mid 1 \leq j \leq k\}$ for an integer $k$ with $2 \leq k \leq (N-2)/2$ and $2k \neq N-3$; (II) $N$ is odd and $[y] \cap J = \{r_{2j-1} \mid 1 \leq j \leq (N-1)/2\}$; (III) $N$ is even and $[y] \cap J = \{r_{2j-1} \mid 1 \leq j \leq (N-2)/2\}$; (IV) $N$ is even and $[y] \cap J = \{r_{2j-1} \mid 1 \leq j \leq N/2\}$.
For Case (I), by the shape of $J$ and $[y] \cap J$, it follows that $I \cap J = [y] \cap J$, and each local transformation can permute the irreducible components of $I \cap J$ containing neither $r_{N-1}$ nor $r_N$ but it fixes pointwise the irreducible component(s) of $I \cap J$ containing $r_{N-1}$ or $r_N$.
This contradicts the fact that $\sigma = \tau^{-1}|_{I \cap J}$ for a nontrivial automorphism $\tau^{-1}$ of $J$ (note that $\tau^{-1}$ exchanges $r_{N-1}$ and $r_N$).
Case (II) contradicts Lemma \ref{lem:possibility_J_is_D_N}(\ref{item:lem_possibility_J_is_D_N_case_2}).
For Case (III), the roots $\alpha_{N-1} \in \Pi^{J,[y] \cap J}$ and $\alpha_t + \sum_{j=1}^{N-2} \alpha_j \in \Pi^{K,[y] \cap K} \smallsetminus \Phi_J$ are not orthogonal, contradicting Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}.
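Indeed, with the standard bilinear form of the geometric representation, the simple root $\alpha_{N-1}$ does not occur in $\alpha_t + \sum_{j=1}^{N-2} \alpha_j$, all inner products of distinct simple roots are non-positive, and $r_{N-2}$ is adjacent to $r_{N-1}$, hence
\begin{displaymath}
\Bigl\langle \alpha_{N-1}, \alpha_t + \sum_{j=1}^{N-2} \alpha_j \Bigr\rangle \leq \langle \alpha_{N-1},\alpha_{N-2} \rangle = -\cos\frac{\pi}{3} = -\frac{1}{2} < 0 \enspace.
\end{displaymath}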
Finally, for the remaining case, i.e., Case (IV), by the shape of $J$ and $[y] \cap J$, it follows that $I \cap J = [y] \cap J$ and each local transformation leaves the set $I \cap J$ invariant.
By this result and the property that $\sigma = \tau^{-1}|_{I \cap J}$ for a nontrivial automorphism $\tau^{-1}$ of $J$, the only remaining possibility is that $N = 4$, $[y] \cap J = I \cap J = \{r_1,r_3\}$, and $\sigma$ exchanges $r_1$ and $r_3$.
Now we arrange the standard decomposition $\mathcal{D}$ of $u$ as $u = \omega''_{\ell} \omega'_{\ell-1} \omega''_{\ell-1} \cdots \omega'_2 \omega''_2 \omega'_1 \omega''_1$, where each $\omega'_j$ is a wide transformation and each $\omega''_j$ is a (possibly empty) product of narrow transformations.
Let each wide transformation $\omega'_j$ belong to $Y_{z'_j,z_j}$ with $z_j,z'_j \in S^{(\Lambda)}$.
In particular, we have $\omega'_1 = \omega$ and $z_1 = y$.
Now we give the following lemma:
\begin{lem}
\label{lem:proof_special_first_case_subcase_1_J_D_N_case_IV}
In this setting, the following properties hold for every $1 \leq j \leq \ell - 1$: The action of the element $u_j := \omega''_j \omega'_{j-1} \omega''_{j-1} \cdots \omega'_1 \omega''_1$ maps $(r_1,r_2,r_3,r_4)$ to $(r_1,r_2,r_3,r_4)$ when $j$ is odd and to $(r_1,r_2,r_4,r_3)$ when $j$ is even; the subsets $J$ and $[z_j] \smallsetminus J$ are not adjacent; the support of $\omega'_j$ is as in Case \ref{item:proof_special_first_case_subcase_1_J_D_N} above, with $t$ replaced by some element $t_j \in S$; and $\omega'_j$ maps $(r_1,r_2,r_3,r_4)$ to $(r_1,r_2,r_4,r_3)$.
\end{lem}
\begin{proof}
We use induction on $j$.
By the definition of narrow transformations, the first and the second parts of the claim hold obviously when $j = 1$ and follow from the induction hypothesis when $j > 1$.
In particular, we have $u_j \cdot \Pi_J = \Pi_J$.
Put $(h,h') := (3,4)$ when $j$ is odd and $(h,h') := (4,3)$ when $j$ is even.
Then we have $[z_j] \cap J = \{r_1,r_h\}$.
Now, by using the above argument, it follows that the support of $\omega'_j$ is of the form $\{r_1,r_2,r_3,r_4,t_j\}$ which is the standard labelling of type $D_5$, where $t_j$ is adjacent to one of the two elements of $[z_j] \cap J$.
We show that $t_j$ is adjacent to $r_1$, which already holds when $j = 1$ (note that $t_j = t$ when $j = 1$).
Suppose $j > 1$ and assume to the contrary that $t_j$ is adjacent to $r_h$.
In this case, $t_j$ is apart from $[z_j] \smallsetminus \{r_h\}$.
On the other hand, we have $[z'_{j-1}] \cap J = \{r_1,r_h\}$, the subsets $[z'_{j-1}] \smallsetminus J$ and $J$ are not adjacent, and the support of each narrow transformation in $\omega''_j$ is apart from $J$.
Moreover, by the induction hypothesis, we have $[z_{j-1}] \cap J = \{r_1,r_{h'}\}$ and the action of $\omega'_{j-1}$ maps $(r_1,r_2,r_h,r_{h'})$ to $(r_1,r_2,r_{h'},r_h)$ while it fixes every element of $[z_{j-1}] \smallsetminus J$.
This implies that $\omega''_j \in Y_{z'',z_{j-1}}$ for the element $z'' \in S^{(\Lambda)}$ obtained from $z_j$ by replacing $r_h$ with $r_{h'}$.
Now we have $\alpha_{t_j} \in \Pi^{[z'']}$ since $t_j$ is not adjacent to $[z''] = ([z_j] \smallsetminus \{r_h\}) \cup \{r_{h'}\}$, therefore $\beta' := (\omega''_j)^{-1} \cdot \alpha_{t_j} \in \Pi^{[z_{j-1}]}$.
This root belongs to $\Phi_{S \smallsetminus J}$ and has non-zero coefficient of $\alpha_{t_j}$, since the support of each narrow transformation in $\omega''_j$ is not adjacent to $J$ and hence does not contain $t_j$.
Therefore, the roots $\beta' \in \Pi^{[z_{j-1}]} \smallsetminus \Pi^{J,[z_{j-1}] \cap J}$ and $\alpha_1 + 2\alpha_2 + \alpha_3 + \alpha_4 \in \Pi^{J,[z_{j-1}] \cap J}$ are not orthogonal.
This contradicts the fact that $\Pi^{J,[y] \cap J}$ is the union of some irreducible components of $\Pi^{[y]}$ (see Lemma \ref{lem:proof_special_first_case_subcase_1_local_irreducible_components}) and the isomorphism $\Pi^{[y]} \to \Pi^{[z_{j-1}]}$ induced by the action of $\omega''_{j-1}\omega'_{j-2}\omega''_{j-2} \cdots \omega''_2\omega'_1$ maps $\Pi^{J,[y] \cap J}$ to $\Pi^{J,[z_{j-1}] \cap J}$ (since the action of this element leaves the set $\Pi_J$ invariant).
This contradiction proves that $t_j$ is adjacent to $r_1$, therefore the third part of the claim holds.
Finally, the fourth part of the claim follows immediately from the third part.
Hence the proof of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N_case_IV} is concluded.
\end{proof}
By Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N_case_IV}, the action of the element $\omega'_{\ell-1} \omega''_{\ell-1} \cdots \omega'_2 \omega''_2 \omega'_1 \omega''_1$, hence of $u = \omega''_{\ell}\omega'_{\ell-1}u_{\ell-1}$, maps the elements $(r_1,r_2,r_3,r_4)$ to either $(r_1,r_2,r_3,r_4)$ or $(r_1,r_2,r_4,r_3)$.
This contradicts the above-mentioned fact that $\sigma$ exchanges $r_1$ and $r_3$.
Summarizing, we have derived a contradiction in each of the six possible cases, Cases \ref{item:proof_special_first_case_subcase_1_J_E_6_K_E_8}--\ref{item:proof_special_first_case_subcase_1_J_D_N}.
Hence we have proven that the assumption $w \cdot \Pi_J \not\subseteq \Phi^+$ implies a contradiction, as desired.
\subsubsection{Case $w \cdot \Pi_J \subseteq \Phi^+$}
\label{sec:proof_special_first_case_subcase_2}
By the result of Section \ref{sec:proof_special_first_case_subcase_1}, we have $w \cdot \Pi_J \subseteq \Phi^+$.
Since $w \cdot \Phi_J = \Phi_J$ by Lemma \ref{lem:property_w}, it follows that $w \cdot \Phi_J^+ \subseteq \Phi_J^+$, therefore $w \cdot \Phi_J^+ = \Phi_J^+$ (note that $|\Phi_J| < \infty$).
Hence the action of $w$ defines an automorphism $\tau$ of $J$ (in particular, $w \cdot \Pi_J = \Pi_J$).
To show that $\tau$ is the identity mapping (which implies the claim that $w$ fixes $\Pi^{J,I \cap J}$ pointwise), assume to the contrary that $\tau$ is a nontrivial automorphism of $J$.
Then the possibilities of the type of $J$ are as follows: $D_N$, $E_6$ and $F_4$ (recall that $J$ is neither of type $A_N$ nor of type $I_2(m)$).
Moreover, since the action of $w \in Y_I$ fixes every element of $I \cap J$, the subset $I \cap J$ of $J$ is contained in the fixed point set of $\tau$.
This implies that $J$ is not of type $F_4$, since the nontrivial automorphism of a Coxeter graph of type $F_4$ (which exchanges $r_1 \leftrightarrow r_4$ and $r_2 \leftrightarrow r_3$) has no fixed points.
Suppose that $J$ is of type $E_6$.
Then, by the above argument on the fixed points of $\tau$ and Lemma \ref{lem:possibility_J_is_E_6}, we have $I \cap J = \{r_2\}$ or $I \cap J = \{r_4\}$.
Now take a standard decomposition of $w$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}).
Then no wide transformation can appear due to the shape of $J$ and the position of $I \cap J$ in $J$ (indeed, we cannot obtain a subset of finite type by adding to $J$ an element of $S$ adjacent to $I \cap J$).
This implies that the decomposition of $w$ consists of narrow transformations only, therefore $w$ fixes $\Pi_J$ pointwise, contradicting the fact that $\tau$ is a nontrivial automorphism.
Secondly, suppose that $J$ is of type $D_N$ with $N \geq 5$.
Then, by the above argument on the fixed points of $\tau$, we have $I \cap J \subseteq J \smallsetminus \{r_{N-1},r_N\}$, therefore every irreducible component of $I \cap J$ is of type $A_1$ (by $A_{>1}$-freeness of $I$).
Now take a standard decomposition $\mathcal{D}$ of $w$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}).
Note that $\mathcal{D}$ involves at least one wide transformation, since $\tau$ is not the identity mapping.
By the shape of $J$ and the position of $I \cap J$ in $J$, the only possibility for the first (rightmost) wide transformation $\omega = \omega_i$ in $\mathcal{D}$ is the following: $K = J \cup \{t\}$ is of type $D_{N+1}$, $t$ is adjacent to $r_1$, and $r_1 \in [y]$, where we put $y = y^{(i)}(\mathcal{D})$, $t = t^{(i)}(\mathcal{D})$, and $K = K^{(i)}(\mathcal{D})$.
Now the claim of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} in Section \ref{sec:proof_special_first_case_subcase_1} also holds in this case, while $\Pi^{J,[y] \cap J}$ is the union of some irreducible components of $\Pi^{[y]}$ for the same reason as in Section \ref{sec:proof_special_first_case_subcase_1}.
Hence the hypothesis of Lemma \ref{lem:proof_special_first_case_subcase_1_J_D_N} should not hold.
This argument and the properties that $N \geq |I \cap J| + 2 = |[y] \cap J| + 2$ and $I \cap J \subseteq J \smallsetminus \{r_{N-1},r_N\}$ imply that the possibilities for $[y] \cap J$ are the following: $N$ is odd and $[y] \cap J$ consists of the elements $r_{2j-1}$ with $1 \leq j \leq (N-1)/2$; or, $N$ is even and $[y] \cap J$ consists of the elements $r_{2j-1}$ with $1 \leq j \leq (N-2)/2$.
The former possibility contradicts Lemma \ref{lem:possibility_J_is_D_N}(\ref{item:lem_possibility_J_is_D_N_case_2}).
On the other hand, for the latter possibility, the roots $\alpha_{N-1} \in \Pi^{J,[y] \cap J}$ and $\alpha_t + \sum_{j=1}^{N-2} \alpha_j \in \Pi^{[y]} \smallsetminus \Pi^{J,[y] \cap J}$ are not orthogonal, contradicting the above-mentioned fact that $\Pi^{J,[y] \cap J}$ is the union of some irreducible components of $\Pi^{[y]}$.
Hence we obtain a contradiction in either of the two possibilities.
Finally, we consider the remaining case that $J$ is of type $D_4$.
By the property $N = 4 \geq |I \cap J| + 2$ and $A_{>1}$-freeness of $I$, it follows that $I \cap J$ consists of at most two irreducible components of type $A_1$.
On the other hand, by the shape of $J$, the fixed point set of the nontrivial automorphism $\tau$ of $J$ is of type $A_1$ or $A_2$.
Since $I \cap J$ is contained in the fixed point set of $\tau$ as mentioned above, it follows that $|I \cap J| = 1$.
If $I \cap J = \{r_1\}$, then we have $\Pi^{J,I \cap J} = \{\alpha_3,\alpha_4,\beta\}$ where $\beta = \alpha_1 + 2\alpha_2 + \alpha_3 + \alpha_4$ (see Table \ref{tab:positive_roots_D_n}), and every element of $\Pi^{J,I \cap J}$ forms an irreducible component of $\Pi^{J,I \cap J}$.
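As a quick check (with the standard bilinear form of the geometric representation, so that adjacent simple roots of $D_4$ pair to $-1/2$ and non-adjacent ones to $0$), each of these three roots is indeed orthogonal to $\alpha_1$; for the nontrivial one,
\begin{displaymath}
\langle \beta, \alpha_1 \rangle = \langle \alpha_1,\alpha_1 \rangle + 2\langle \alpha_2,\alpha_1 \rangle = 1 + 2 \cdot \bigl( -\tfrac{1}{2} \bigr) = 0 \enspace,
\end{displaymath}
since $r_1$ is adjacent to $r_2$ only.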
However, the property $w \cdot \Pi_J = \Pi_J$ now implies that $w$ fixes $\alpha_2$ and permutes the three simple roots $\alpha_1$, $\alpha_3$ and $\alpha_4$; therefore $w \cdot \beta = \beta$, contradicting the fact that $\langle w \rangle$ acts transitively on the set of the irreducible components of $\Pi^{J,I \cap J}$ (see Lemma \ref{lem:property_w}).
By symmetry, the same result holds when $I \cap J = \{r_3\}$ or $\{r_4\}$.
Hence we have $I \cap J = \{r_2\}$.
Take a standard decomposition of $w$ with respect to $J$ (see Proposition \ref{prop:standard_decomposition_existence}).
Then no wide transformation can appear due to the shape of $J$ and the position of $I \cap J$ in $J$ (indeed, we cannot obtain a subset of finite type by adding to $J$ an element of $S$ adjacent to $I \cap J$).
This implies that the decomposition of $w$ consists of narrow transformations only, therefore $w$ fixes $\Pi_J$ pointwise, contradicting the fact that $\tau$ is a nontrivial automorphism.
Summarizing, we have derived a contradiction in every case from the assumption that $\tau$ is a nontrivial automorphism.
Hence it follows that $\tau$ is the identity mapping, therefore our claim has been proven in the case $\Pi^{J,I \cap J} \not\subseteq \Phi_{I^{\perp}}$.
\subsection{The second case $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$}
\label{sec:proof_special_second_case}
In this subsection, we consider the remaining case that $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$.
In this case, we have $\Pi_{I^{\perp}} \subseteq \Pi^I$, therefore $\Pi^{J,I \cap J} = \Pi_{J \smallsetminus I}$.
Let $L$ be an irreducible component of $J \smallsetminus I$.
Then $L$ is of finite type.
The aim of the following argument is to show that $w$ fixes $\Pi_L$ pointwise; indeed, if this is satisfied, then we have $\Pi^{J,I \cap J} = \Pi_{J \smallsetminus I} = \Pi_L$ since $\langle w \rangle$ acts transitively on the set of irreducible components of $\Pi^{J,I \cap J}$ (see Lemma \ref{lem:property_w}), therefore $w$ fixes $\Pi^{J,I \cap J}$ pointwise, as desired.
Note that $w \cdot \Pi_L \subseteq \Pi_{J \smallsetminus I}$, since now $w$ leaves the set $\Pi^{J,I \cap J} = \Pi_{J \smallsetminus I}$ invariant.
\subsubsection{Possibilities of semi-standard decompositions}
\label{sec:proof_special_second_case_transformations}
Here we investigate the possibilities of narrow and wide transformations in a semi-standard decomposition of the element $w$, in a somewhat wider context.
Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element $u$ of $W$, with the property that $[y^{(0)}]$ is isomorphic to $I$, $J^{(0)}$ is irreducible and of finite type, and $J^{(0)}$ is apart from $[y^{(0)}]$.
Note that any semi-standard decomposition of the element $w \in Y_I$ with respect to the set $L$ defined above satisfies the condition.
Note also that $\mathcal{D}^{-1} := (\omega_0)^{-1}(\omega_1)^{-1} \cdots (\omega_{\ell(\mathcal{D})-1})^{-1}$ is also a semi-standard decomposition of $u^{-1}$, and $(\omega_i)^{-1}$ is a narrow (respectively, wide) transformation if and only if $\omega_i$ is a narrow (respectively, wide) transformation.
The proof of the next lemma uses a concrete description of root systems of all finite irreducible Coxeter groups except types $A$ and $I_2(m)$.
Table \ref{tab:positive_roots_B_n} shows the list for type $B_n$, where the notational conventions are similar to the case of type $D_n$ (Table \ref{tab:positive_roots_D_n}).
The list for type $F_4$ (Table \ref{tab:positive_roots_F_4}) includes only one of the two conjugacy classes of positive roots (denoted by $\gamma_i^{(1)}$); the other positive roots (denoted by $\gamma_i^{(2)}$) are obtained by using the symmetry $r_1 \leftrightarrow r_4$, $r_2 \leftrightarrow r_3$.
In the list, $[c_1,c_2,c_3,c_4]$ signifies a positive root $c_1 \alpha_1 + c_2 \alpha_2 + c_3\alpha_3 + c_4\alpha_4$, and the description in the columns for actions of generators is similar to the case of type $E_8$ (Tables \ref{tab:positive_roots_E_8_1}--\ref{tab:positive_roots_E_8_6}).
The list for type $H_4$ is divided into two parts (Tables \ref{tab:positive_roots_H_4_1} and \ref{tab:positive_roots_H_4_2}).
In the list, $[c_1,c_2,c_3,c_4]$ signifies a positive root $c_1 \alpha_1 + c_2 \alpha_2 + c_3\alpha_3 + c_4\alpha_4$, where we put $c = 2\cos(\pi/5)$ for simplicity, so that $c^2 = c+1$.
The other descriptions follow the same conventions as in the case of type $E_8$, and the marks \lq\lq $H_3$'' indicate the positive roots of the parabolic subgroup of type $H_3$ generated by $\{r_1,r_2,r_3\}$.
\begin{table}[hbt]
\centering
\caption{List of positive roots for Coxeter group of type $B_n$}
\label{tab:positive_roots_B_n}
\begin{tabular}{|c|c|} \hline
roots & actions of generators \\ \hline
$\gamma^{(1)}_{i,j} := \sum_{h=i}^{j} \alpha_h$ & $r_{i-1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i-1,j}$ ($i \geq 2$) \\
($1 \leq i \leq j \leq n-1$) & $r_i \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i+1,j}$ ($i \leq j-1$) \\
($\gamma^{(1)}_{i,i} = \alpha_i$) & $r_j \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j-1}$ ($i \leq j-1$) \\
& $r_{j+1} \cdot \gamma^{(1)}_{i,j} = \gamma^{(1)}_{i,j+1}$ ($j \leq n-2$) \\
& $r_n \cdot \gamma^{(1)}_{i,n-1} = \gamma^{(2)}_{i,n}$ \\ \hline
$\gamma^{(2)}_{i,j} := \sum_{h=i}^{j-1} \alpha_h + \sum_{h=j}^{n-1} 2\alpha_h + \sqrt{2}\alpha_n$ & $r_{i-1} \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i-1,j}$ ($i \geq 2$) \\
($1 \leq i < j \leq n$) & $r_i \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i+1,j}$ ($i \leq j-2$) \\
& $r_{j-1} \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i,j-1}$ ($i \leq j-2$) \\
& $r_j \cdot \gamma^{(2)}_{i,j} = \gamma^{(2)}_{i,j+1}$ ($j \leq n-1$) \\
& $r_n \cdot \gamma^{(2)}_{i,n} = \gamma^{(1)}_{i,n-1}$ \\ \hline
$\gamma^{(3)}_i := \sum_{h=i}^{n-1} \sqrt{2}\alpha_h + \alpha_n$ & $r_{i-1} \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i-1}$ ($i \geq 2$) \\
($1 \leq i \leq n$) & $r_i \cdot \gamma^{(3)}_i = \gamma^{(3)}_{i+1}$ ($i \leq n-1$) \\
($\gamma^{(3)}_n = \alpha_n$) & \\ \hline
\end{tabular}
\end{table}
\begin{table}[hbt]
\centering
\caption{List of positive roots for Coxeter group of type $F_4$}
\label{tab:positive_roots_F_4}
The data of the remaining positive roots $\gamma^{(2)}_i$ are obtained by replacing $[c_1,c_2,c_3,c_4]$ with $[c_4,c_3,c_2,c_1]$ and replacing each $r_j$ with $r_{5-j}$.\\
\begin{tabular}{|c||c|c|c|c|c|c|} \cline{1-7}
height & $i$ & root $\gamma^{(1)}_i$ & \multicolumn{4}{|c|}{$k$; $r_j \cdot \gamma^{(1)}_i = \gamma^{(1)}_k$} \\ \cline{4-7}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ \\ \cline{1-7}
$1$ & $1$ & $[1,0,0,0]$ & --- & $3$ & & \\ \cline{2-7}
& $2$ & $[0,1,0,0]$ & $3$ & --- & $4$ & \\ \cline{1-7}
$2$ & $3$ & $[1,1,0,0]$ & $2$ & $1$ & $5$ & \\ \cline{2-7}
& $4$ & $[0,1,\sqrt{2},0]$ & $5$ & & $2$ & $6$ \\ \cline{1-7}
$3$ & $5$ & $[1,1,\sqrt{2},0]$ & $4$ & $7$ & $3$ & $8$ \\ \cline{2-7}
& $6$ & $[0,1,\sqrt{2},\sqrt{2}]$ & $8$ & & & $4$ \\ \cline{1-7}
$4$ & $7$ & $[1,2,\sqrt{2},0]$ & & $5$ & & $9$ \\ \cline{2-7}
& $8$ & $[1,1,\sqrt{2},\sqrt{2}]$ & $6$ & $9$ & & $5$ \\ \cline{1-7}
$5$ & $9$ & $[1,2,\sqrt{2},\sqrt{2}]$ & & $8$ & $10$ & $7$ \\ \cline{1-7}
$6$ & $10$ & $[1,2,2\sqrt{2},\sqrt{2}]$ & & $11$ & $9$ & \\ \cline{1-7}
$7$ & $11$ & $[1,3,2\sqrt{2},\sqrt{2}]$ & $12$ & $10$ & & \\ \cline{1-7}
$8$ & $12$ & $[2,3,2\sqrt{2},\sqrt{2}]$ & $11$ & & & \\ \cline{1-7}
\end{tabular}
\end{table}
\begin{table}[hbt]
\centering
\caption{List of positive roots for Coxeter group of type $H_4$ (part $1$), where $c = 2\cos(\pi/5)$, $c^2 = c + 1$}
\label{tab:positive_roots_H_4_1}
\begin{tabular}{|c||c|c|c|c|c|c|c} \cline{1-7}
height & $i$ & root $\gamma_i$ & \multicolumn{4}{|c|}{$k$; $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-7}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ \\ \cline{1-7}
$1$ & $1$ & $[1,0,0,0]$ & --- & $5$ & & & $H_3$ \\ \cline{2-7}
& $2$ & $[0,1,0,0]$ & $6$ & --- & $7$ & & $H_3$ \\ \cline{2-7}
& $3$ & $[0,0,1,0]$ & & $7$ & --- & $8$ & $H_3$ \\ \cline{2-7}
& $4$ & $[0,0,0,1]$ & & & $8$ & --- \\ \cline{1-7}
$2$ & $5$ & $[1,c,0,0]$ & $9$ & $1$ & $10$ & & $H_3$ \\ \cline{2-7}
& $6$ & $[c,1,0,0]$ & $2$ & $9$ & $11$ & & $H_3$ \\ \cline{2-7}
& $7$ & $[0,1,1,0]$ & $11$ & $3$ & $2$ & $12$ & $H_3$ \\ \cline{2-7}
& $8$ & $[0,0,1,1]$ & & $12$ & $4$ & $3$ \\ \cline{1-7}
$3$ & $9$ & $[c,c,0,0]$ & $5$ & $6$ & $13$ & & $H_3$ \\ \cline{2-7}
& $10$ & $[1,c,c,0]$ & $13$ & & $5$ & $14$ & $H_3$ \\ \cline{2-7}
& $11$ & $[c,1,1,0]$ & $7$ & $15$ & $6$ & $16$ & $H_3$ \\ \cline{2-7}
& $12$ & $[0,1,1,1]$ & $16$ & $12$ & & $7$ \\ \cline{1-7}
$4$ & $13$ & $[c,c,c,0]$ & $10$ & $17$ & $9$ & $18$ & $H_3$ \\ \cline{2-7}
& $14$ & $[1,c,c,c]$ & $18$ & & & $10$ \\ \cline{2-7}
& $15$ & $[c,c+1,1,0]$ & $19$ & $11$ & $17$ & $20$ & $H_3$ \\ \cline{2-7}
& $16$ & $[c,1,1,1]$ & $12$ & $20$ & & $11$ \\ \cline{1-7}
$5$ & $17$ & $[c,c+1,c,0]$ & $21$ & $13$ & $15$ & $22$ & $H_3$ \\ \cline{2-7}
& $18$ & $[c,c,c,c]$ & $14$ & $22$ & & $13$ \\ \cline{2-7}
& $19$ & $[c+1,c+1,1,0]$ & $15$ & & $21$ & $23$ & $H_3$ \\ \cline{2-7}
& $20$ & $[c,c+1,1,1]$ & $23$ & $16$ & $24$ & $15$ \\ \cline{1-7}
$6$ & $21$ & $[c+1,c+1,c,0]$ & $17$ & $25$ & $19$ & $26$ & $H_3$ \\ \cline{2-7}
& $22$ & $[c,c+1,c,c]$ & $26$ & $18$ & $27$ & $17$ \\ \cline{2-7}
& $23$ & $[c+1,c+1,1,1]$ & $20$ & & $28$ & $19$ \\ \cline{2-7}
& $24$ & $[c,c+1,c+1,1]$ & $28$ & & $20$ & $27$ \\ \cline{1-7}
\end{tabular}
\end{table}
\begin{table}[p]
\centering
\caption{List of positive roots for Coxeter group of type $H_4$ (part $2$), where $c = 2\cos(\pi/5)$, $c^2 = c + 1$}
\label{tab:positive_roots_H_4_2}
\begin{tabular}{|c||c|c|c|c|c|c|c} \cline{1-7}
height & $i$ & root $\gamma_i$ & \multicolumn{4}{|c|}{$k$; $r_j \cdot \gamma_i = \gamma_k$} \\ \cline{4-7}
& & & $r_1$ & $r_2$ & $r_3$ & $r_4$ \\ \cline{1-7}
$7$ & $25$ & $[c+1,2c,c,0]$ & & $21$ & & $29$ & $H_3$ \\ \cline{2-7}
& $26$ & $[c+1,c+1,c,c]$ & $22$ & $29$ & $30$ & $21$ \\ \cline{2-7}
& $27$ & $[c,c+1,c+1,c]$ & $30$ & & $22$ & $24$ \\ \cline{2-7}
& $28$ & $[c+1,c+1,c+1,1]$ & $24$ & $31$ & $23$ & $30$ \\ \cline{1-7}
$8$ & $29$ & $[c+1,2c,c,c]$ & & $26$ & $32$ & $25$ \\ \cline{2-7}
& $30$ & $[c+1,c+1,c+1,c]$ & $27$ & $33$ & $26$ & $28$ \\ \cline{2-7}
& $31$ & $[c+1,2c+1,c+1,1]$ & $34$ & $28$ & & $33$ \\ \cline{1-7}
$9$ & $32$ & $[c+1,2c,2c,c]$ & & $35$ & $29$ & \\ \cline{2-7}
& $33$ & $[c+1,2c+1,c+1,c]$ & $36$ & $30$ & $35$ & $31$ \\ \cline{2-7}
& $34$ & $[2c+1,2c+1,c+1,1]$ & $31$ & $37$ & & $36$ \\ \cline{1-7}
$10$ & $35$ & $[c+1,2c+1,2c,c]$ & $38$ & $32$ & $33$ & \\ \cline{2-7}
& $36$ & $[2c+1,2c+1,c+1,c]$ & $33$ & $39$ & $38$ & $34$ \\ \cline{2-7}
& $37$ & $[2c+1,2c+2,c+1,1]$ & & $34$ & $40$ & $39$ \\ \cline{1-7}
$11$ & $38$ & $[2c+1,2c+1,2c,c]$ & $35$ & $41$ & $36$ & \\ \cline{2-7}
& $39$ & $[2c+1,2c+2,c+1,c]$ & & $36$ & $42$ & $37$ \\ \cline{2-7}
& $40$ & $[2c+1,2c+2,c+2,1]$ & & & $37$ & $43$ \\ \cline{1-7}
$12$ & $41$ & $[2c+1,3c+1,2c,c]$ & $44$ & $38$ & $45$ & \\ \cline{2-7}
& $42$ & $[2c+1,2c+2,2c+1,c]$ & & $45$ & $39$ & $46$ \\ \cline{2-7}
& $43$ & $[2c+1,2c+2,c+2,c+1]$ & & & $46$ & $40$ \\ \cline{1-7}
$13$ & $44$ & $[2c+2,3c+1,2c,c]$ & $41$ & & $47$ & \\ \cline{2-7}
& $45$ & $[2c+1,3c+1,2c+1,c]$ & $47$ & $42$ & $41$ & $48$ \\ \cline{2-7}
& $46$ & $[2c+1,2c+2,2c+1,c+1]$ & & $48$ & $43$ & $42$ \\ \cline{1-7}
$14$ & $47$ & $[2c+2,3c+1,2c+1,c]$ & $45$ & $49$ & $44$ & $50$ \\ \cline{2-7}
& $48$ & $[2c+1,3c+1,2c+1,c+1]$ & $50$ & $46$ & & $45$ \\ \cline{1-7}
$15$ & $49$ & $[2c+2,3c+2,2c+1,c]$ & $51$ & $47$ & & $52$ \\ \cline{2-7}
& $50$ & $[2c+2,3c+1,2c+1,c+1]$ & $48$ & $52$ & & $47$ \\ \cline{1-7}
$16$ & $51$ & $[3c+1,3c+2,2c+1,c]$ & $49$ & & & $53$ \\ \cline{2-7}
& $52$ & $[2c+2,3c+2,2c+1,c+1]$ & $53$ & $50$ & $54$ & $49$ \\ \cline{1-7}
$17$ & $53$ & $[3c+1,3c+2,2c+1,c+1]$ & $52$ & & $55$ & $51$ \\ \cline{2-7}
& $54$ & $[2c+2,3c+2,2c+2,c+1]$ & $55$ & & $52$ & \\ \cline{1-7}
$18$ & $55$ & $[3c+1,3c+2,2c+2,c+1]$ & $54$ & $56$ & $53$ & \\ \cline{1-7}
$19$ & $56$ & $[3c+1,3c+3,2c+2,c+1]$ & $57$ & $55$ & & \\ \cline{1-7}
$20$ & $57$ & $[3c+2,3c+3,2c+2,c+1]$ & $56$ & $58$ & & \\ \cline{1-7}
$21$ & $58$ & $[3c+2,4c+2,2c+2,c+1]$ & & $57$ & $59$ & \\ \cline{1-7}
$22$ & $59$ & $[3c+2,4c+2,3c+1,c+1]$ & & & $58$ & $60$ \\ \cline{1-7}
$23$ & $60$ & $[3c+2,4c+2,3c+1,2c]$ & & & & $59$ \\ \cline{1-7}
\end{tabular}
\end{table}
Then, for the wide transformations in $\mathcal{D}$, we have the following:
\begin{lem}
\label{lem:proof_special_second_case_transformations_wide}
In this setting, if $\omega_i$ is a wide transformation, then there exist only the following two possibilities, where $K^{(i)} = \{r_1,r_2,\dots,r_N\}$ is the standard labelling of $K^{(i)}$ given in Section \ref{sec:longestelement}:
\begin{enumerate}
\item $K^{(i)}$ is of type $A_N$ with $N \geq 3$, $t^{(i)} = r_2$, $[y^{(i)}] \cap K^{(i)} = \{r_1\}$ and $J^{(i)} = \{r_3,\dots,r_N\}$; now the action of $\omega_i$ maps $r_1$ to $r_N$ and $(r_3,r_4,\dots,r_N)$ to $(r_1,r_2,\dots,r_{N-2})$;
\item $K^{(i)}$ is of type $E_7$, $t^{(i)} = r_6$, $[y^{(i)}] \cap K^{(i)} = \{r_1,r_2,r_3,r_4,r_5\}$ and $J^{(i)} = \{r_7\}$; now the action of $\omega_i$ maps $(r_1,r_2,r_3,r_4,r_5)$ to $(r_1,r_5,r_3,r_4,r_2)$ and $r_7$ to $r_7$.
\end{enumerate}
Hence, if $\mathcal{D}$ involves a wide transformation, then $J^{(0)}$ is of type $A_{N'}$ with $1 \leq N' < \infty$.
\end{lem}
\begin{proof}
The latter part of the claim follows from the former part and the fact that the sets $J^{(i)}$ for $0 \leq i \leq \ell(\mathcal{D})$ are all isomorphic to each other.
For the former part, note that $J^{(i)}$ is an irreducible subset of $K^{(i)}$ which is not adjacent to $[y^{(i)}]$ (by the above condition that $J^{(0)}$ is apart from $[y^{(0)}]$), $t^{(i)}$ is adjacent to $[y^{(i)}] \cap K^{(i)}$, and $\omega_i$ cannot fix the set $\Pi_{K^{(i)} \smallsetminus \{t^{(i)}\}}$ pointwise (see Lemma \ref{lem:another_decomposition_Y_no_loop}).
Moreover, since $I$ is $A_{>1}$-free, $[y^{(i)}]$ is also $A_{>1}$-free.
By these properties, a case-by-case argument shows that the possibilities of $K^{(i)}$, $[y^{(i)}]$ and $t^{(i)}$ are as enumerated in Table \ref{tab:lem_proof_special_second_case_transformations_wide} up to symmetry (note that $J^{(i)} = K^{(i)} \smallsetminus ([y^{(i)}] \cup \{t^{(i)}\})$).
Now, for each case in Table \ref{tab:lem_proof_special_second_case_transformations_wide} except the two cases specified in the statement, it follows by using the tables for the root systems of finite irreducible Coxeter groups that there exists a root $\beta \in (\Phi_{K^{(i)}}^{\perp [y^{(i)}] \cap K^{(i)}})^+$ that has non-zero coefficient of $\alpha_{t^{(i)}}$, as listed in Table \ref{tab:lem_proof_special_second_case_transformations_wide} (where the notations for the roots $\beta$ are as in the tables).
This implies that $\omega_i \cdot \beta \in \Phi^-$.
Moreover, the definition of $K^{(i)}$ implies that the set $[y^{(i)}] \smallsetminus K^{(i)}$ is apart from $K^{(i)}$, therefore $\beta \in \Phi^{\perp [y^{(i)}]}$ and $\Phi^{\perp [y^{(i)}]}[\omega_i] \neq \emptyset$.
However, this contradicts the property $\omega_i \in Y_{y^{(i+1)},y^{(i)}}$.
Hence one of the two conditions specified in the statement should be satisfied, concluding the proof of Lemma \ref{lem:proof_special_second_case_transformations_wide}.
\end{proof}
\begin{table}[hbt]
\centering
\caption{List for the proof of Lemma \ref{lem:proof_special_second_case_transformations_wide}}
\label{tab:lem_proof_special_second_case_transformations_wide}
\begin{tabular}{|c|c|c|c|} \hline
type of $K^{(i)}$ & $[y^{(i)}] \cap K^{(i)}$ & $t^{(i)}$ & $\beta$ \\ \hline
$A_N$ ($N \geq 3$) & $\{r_1\}$ & $r_2$ & --- \\ \hline
$B_N$ ($N \geq 4$) & $\{r_{k+1},\dots,r_N\}$ ($3 \leq k \leq N-1$) & $r_k$ & $\gamma^{(3)}_{k}$ \\ \hline
$D_N$ & $\{r_{N-1},r_N\}$ & $r_{N-2}$ & $\gamma^{(4)}_{1,2}$ \\ \cline{2-3}
& $\{r_{k+1},\dots,r_{N-1},r_N\}$ ($2 \leq k \leq N-4$) & $r_k$ & \\ \hline
$E_N$ ($6 \leq N \leq 8$) & $\{r_1\}$ & $r_3$ & $\gamma_{44}$ \\ \hline
$E_7$ & $\{r_1,r_2,r_3,r_4,r_5\}$ & $r_6$ & --- \\ \cline{2-4}
& $\{r_7\}$ & $r_6$ & $\gamma_{61}$ \\ \hline
$E_8$ & $\{r_1,r_2,r_3,r_4,r_5\}$ & $r_6$ & $\gamma_{119}$ \\ \cline{2-3}
& $\{r_1,r_2,r_3,r_4,r_5,r_6\}$ & $r_7$ & \\ \cline{2-4}
& $\{r_8\}$ & $r_7$ & $\gamma_{74}$ \\ \hline
$F_4$ & $\{r_1\}$ & $r_2$ & $\gamma^{(1)}_7$ \\ \hline
$H_4$ & $\{r_1\}$ & $r_2$ & $\gamma_{40}$ \\ \cline{2-3}
& $\{r_1,r_2\}$ & $r_3$ & \\ \cline{2-4}
& $\{r_4\}$ & $r_3$ & $\gamma_{32}$ \\ \hline
\end{tabular}
\end{table}
On the other hand, for the narrow transformations in $\mathcal{D}$, we have the following:
\begin{lem}
\label{lem:proof_special_second_case_transformations_narrow}
In this setting, suppose that $\omega_i$ is a narrow transformation, $[y^{(i+1)}] \neq [y^{(i)}]$, and $K^{(i)} \cap [y^{(i)}] = K^{(i)} \smallsetminus \{t^{(i)}\}$ has an irreducible component of type $A_1$.
Then $K^{(i)}$ is of type $A_2$ or of type $I_2(m)$ with $m$ an odd number.
\end{lem}
\begin{proof}
First, by the condition $[y^{(i+1)}] \neq [y^{(i)}]$ and the definition of $\omega_i$, the action of the longest element of $W_{K^{(i)}}$ induces a nontrivial automorphism of $K^{(i)}$ which does not fix the element $t^{(i)}$.
This property restricts the possibilities of $K^{(i)}$ to one of the following (where we use the standard labelling of $K^{(i)}$; recall that the longest element of an irreducible finite Coxeter group induces a nontrivial automorphism of its Coxeter graph precisely for the types $A_N$ with $N \geq 2$, $D_N$ with $N$ odd, $E_6$, and $I_2(m)$ with $m$ odd): $K^{(i)} = \{r_1,\dots,r_N\}$ is of type $A_N$ and $t^{(i)} \neq r_{(N+1)/2}$; $K^{(i)} = \{r_1,\dots,r_N\}$ is of type $D_N$ with $N$ odd and $t^{(i)} \in \{r_{N-1},r_N\}$; $K^{(i)} = \{r_1,\dots,r_6\}$ is of type $E_6$ and $t^{(i)} \not\in \{r_2,r_4\}$; or $K^{(i)}$ is of type $I_2(m)$ with $m$ odd.
Secondly, by considering the $A_{>1}$-freeness of $I$ (hence of $[y^{(i)}]$), the possibilities are further restricted to the following: $K^{(i)}$ is of type $A_2$; $K^{(i)}$ is of type $E_6$ and $t^{(i)} \in \{r_1,r_6\}$; or $K^{(i)}$ is of type $I_2(m)$ with $m$ odd.
Moreover, by the hypothesis that $K^{(i)} \cap [y^{(i)}]$ has an irreducible component of type $A_1$, the above possibility of type $E_6$ is excluded.
Hence the claim holds.
\end{proof}
\subsubsection{Proof of the claim}
\label{sec:finitepart_secondcase_N_2}
From now, we prove our claim that $w$ fixes the set $\Pi_L$ pointwise.
First, we have $w \cdot \Pi_L \subseteq \Pi_{J \smallsetminus I}$ as mentioned above, therefore Proposition \ref{prop:standard_decomposition_existence} implies that there exists a standard decomposition of $w$ with respect to $L$.
Moreover, $L$ is apart from $I = [x_I]$, since $\Pi_L$ is an irreducible component of $\Pi^I$.
Now if $L$ is not of type $A_N$ with $1 \leq N < \infty$, then Lemma \ref{lem:proof_special_second_case_transformations_wide} implies that the standard decomposition of $w$ involves no wide transformations, therefore $w$ fixes $\Pi_L$ pointwise, as desired (note that any narrow transformation $\omega_i$ fixes $\Pi_{J^{(i)}}$ pointwise by the definition).
Hence, from now, we consider the case that $L$ is of type $A_N$ with $1 \leq N < \infty$.
First, we present some definitions:
\begin{defn}
\label{defn:admissible_for_N_large}
Suppose that $2 \leq N < \infty$.
Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element of $W$.
We say that a sequence $s_1,s_2,\dots,s_{\mu}$ of distinct elements of $S$ is \emph{admissible of type $A_N$} with respect to $\mathcal{D}$, if $J^{(0)}$ is of type $A_N$, $\mu \equiv N \pmod{2}$, and the following conditions are satisfied, where we put $M := \{s_1,s_2,\dots,s_{\mu}\}$ (see Figure \ref{fig:finitepart_secondcase_N_2_definition}):
\begin{enumerate}
\item \label{item:admissible_N_large_J_irreducible_component}
$\Pi_{J^{(0)}}$ is an irreducible component of $\Pi^{[y^{(0)}]}$.
\item \label{item:admissible_N_large_M_line}
$m(s_j,s_{j+1}) = 3$ for every $1 \leq j \leq \mu-1$.
\item \label{item:admissible_N_large_J_in_M}
For each $0 \leq h \leq \ell(\mathcal{D})$, there exists an odd number $\lambda(h)$ with $1 \leq \lambda(h) \leq \mu - N + 1$ satisfying the following conditions, where we put $\rho(h) := \lambda(h) + N - 1$:
\begin{displaymath}
J^{(h)} = \{s_j \mid \lambda(h) \leq j \leq \rho(h)\} \enspace,
\end{displaymath}
\begin{displaymath}
\begin{split}
[y^{(h)}] \cap M &= \{s_j \mid 1 \leq j \leq \lambda(h) - 2 \mbox{ and } j \equiv 1 \pmod{2}\} \\
&\cup \{s_j \mid \rho(h) + 2 \leq j \leq \mu \mbox{ and } j \equiv \mu \pmod{2}\} \enspace.
\end{split}
\end{displaymath}
\item \label{item:admissible_N_large_y_isolated}
For each $0 \leq h \leq \ell(\mathcal{D})$, every element of $[y^{(h)}] \cap M$ forms an irreducible component of $[y^{(h)}]$ of type $A_1$.
\item \label{item:admissible_N_large_narrow_transformation}
For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a narrow transformation, then one of the following two conditions is satisfied:
\begin{itemize}
\item $K^{(h)}$ intersects with $[y^{(h)}] \cap M$, and $[y^{(h+1)}] = [y^{(h)}]$;
\item $K^{(h)}$ is apart from $[y^{(h)}] \cap M$ (hence $[y^{(h+1)}] \cap M = [y^{(h)}] \cap M$).
\end{itemize}
\item \label{item:admissible_N_large_wide_transformation}
For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a wide transformation, then one of the following two conditions is satisfied:
\begin{itemize}
\item $\lambda(h+1) = \lambda(h) - 2$, $K^{(h)} = J^{(h)} \cup \{s_{\lambda(h)-2},s_{\lambda(h)-1}\}$ is of type $A_{N+2}$, $t^{(h)} = s_{\lambda(h)-1}$, and the action of $\omega_h$ maps $s_{\lambda(h)+j} \in J^{(h)}$ ($0 \leq j \leq N-1$) to $s_{\lambda(h+1)+j}$ and maps $s_{\lambda(h)-2} \in [y^{(h)}]$ to $s_{\rho(h)+2}$;
\item $\lambda(h+1) = \lambda(h) + 2$, $K^{(h)} = J^{(h)} \cup \{s_{\rho(h)+1},s_{\rho(h)+2}\}$ is of type $A_{N+2}$, $t^{(h)} = s_{\rho(h)+1}$, and the action of $\omega_h$ maps $s_{\lambda(h)+j} \in J^{(h)}$ ($0 \leq j \leq N-1$) to $s_{\lambda(h+1)+j}$ and maps $s_{\rho(h)+2} \in [y^{(h)}]$ to $s_{\lambda(h)-2}$.
\end{itemize}
\end{enumerate}
Moreover, we say that such a sequence $s_1,s_2,\dots,s_{\mu}$ is \emph{tight} if $M = \bigcup_{h=0}^{\ell(\mathcal{D})} J^{(h)}$.
\end{defn}
\begin{figure}
\caption{Admissible sequence when $N \geq 2$; here $N = 7$, and black circles in the top and the bottom rows indicate elements of $[y^{(h)}]$}
\label{fig:finitepart_secondcase_N_2_definition}
\end{figure}
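To illustrate the bookkeeping in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_in_M}) on a small hypothetical example: if $N = 2$, $\mu = 6$ and $\lambda(h) = 3$, then $\rho(h) = 4$ and
\begin{displaymath}
J^{(h)} = \{s_3,s_4\} \enspace, \quad [y^{(h)}] \cap M = \{s_1\} \cup \{s_6\} = \{s_1,s_6\} \enspace,
\end{displaymath}
while after a wide transformation of the first kind in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}), so that $\lambda(h+1) = 1$, the same formulas give $J^{(h+1)} = \{s_1,s_2\}$ and $[y^{(h+1)}] \cap M = \{s_4,s_6\}$.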
\begin{defn}
\label{defn:admissible_for_N_1}
Suppose that $N = 1$.
Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element of $W$.
We say that a sequence $s_1,s_2,\dots,s_{\mu}$ of distinct elements of $S$ is \emph{admissible of type $A_1$} with respect to $\mathcal{D}$, if $J^{(0)}$ is of type $A_1$ and the following conditions are satisfied, where we put $M = \{s_1,s_2,\dots,s_{\mu}\}$ (see Figure \ref{fig:finitepart_secondcase_N_1_definition}):
\begin{enumerate}
\item \label{item:admissible_N_1_J_irreducible_component}
$\Pi_{J^{(0)}}$ is an irreducible component of $\Pi^{[y^{(0)}]}$.
\item \label{item:admissible_N_1_J_in_M}
For each $0 \leq h \leq \ell(\mathcal{D})$, we have $J^{(h)} \subseteq M$ and $M \smallsetminus J^{(h)} \subseteq [y^{(h)}]$.
\item \label{item:admissible_N_1_y_isolated}
For each $0 \leq h \leq \ell(\mathcal{D})$, every element of $[y^{(h)}] \cap M$ forms an irreducible component of $[y^{(h)}]$ of type $A_1$.
\item \label{item:admissible_N_1_narrow_transformation}
For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a narrow transformation, then one of the following two conditions is satisfied:
\begin{itemize}
\item $K^{(h)}$ intersects with $[y^{(h)}] \cap M$, and $[y^{(h+1)}] = [y^{(h)}]$;
\item $K^{(h)}$ is apart from $[y^{(h)}] \cap M$, hence $[y^{(h+1)}] \cap M = [y^{(h)}] \cap M$.
\end{itemize}
\item \label{item:admissible_N_1_wide_transformation}
For each $0 \leq h \leq \ell(\mathcal{D})-1$, if $\omega_h$ is a wide transformation, then one of the following two conditions is satisfied:
\begin{itemize}
\item $J^{(h+1)} \neq J^{(h)}$, $K^{(h)}$ is of type $A_3$, $K^{(h)} \smallsetminus \{t^{(h)}\} = J^{(h)} \cup J^{(h+1)}$, $J^{(h+1)} \subseteq [y^{(h)}] \cap M$, and the action of $\omega_h$ exchanges the unique element of $J^{(h)}$ and the unique element of $J^{(h+1)}$;
\item $J^{(h+1)} = J^{(h)}$ and $[y^{(h+1)}] = [y^{(h)}]$.
\end{itemize}
\end{enumerate}
Moreover, we say that such a sequence $s_1,s_2,\dots,s_{\mu}$ is \emph{tight} if $M = \bigcup_{h=0}^{\ell(\mathcal{D})} J^{(h)}$.
\end{defn}
\begin{figure}
\caption{Admissible sequence when $N = 1$; here $\omega_h$ is a wide transformation of the first type in Definition \ref{defn:admissible_for_N_1}}
\label{fig:finitepart_secondcase_N_1_definition}
\end{figure}
Note that, if a sequence $s_1,s_2,\dots,s_{\mu}$ is admissible of type $A_N$ with respect to a semi-standard decomposition $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$, then the subsequence of $s_1,s_2,\dots,s_{\mu}$ consisting of the elements of $\bigcup_{j=0}^{\ell(\mathcal{D})} J^{(j)}$ is admissible of type $A_N$ with respect to $\mathcal{D}$ and is tight (for the case $N \geq 2$, the property of wide transformations in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}) implies that $\bigcup_{j=0}^{\ell(\mathcal{D})} J^{(j)} = \{s_i \mid \lambda(k) \leq i \leq \rho(k')\}$ for some $k,k' \in \{0,1,\dots,\ell(\mathcal{D})\}$).
Moreover, the sequence $s_1$, $s_2,\dots,s_{\mu}$ is also admissible of type $A_N$ with respect to $\mathcal{D}^{-1}$.
The above definitions are relevant to our purpose in the following manner:
\begin{lem}
\label{lem:claim_holds_when_admissible}
Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of $w$ with respect to $L$.
If there exists a sequence which is admissible of type $A_N$ with respect to $\mathcal{D}$, then $w$ fixes $\Pi_L$ pointwise.
\end{lem}
\begin{proof}
First, note that $y^{(\ell(\mathcal{D}))} = x_I = y^{(0)}$ since $w \in Y_I$, therefore $[y^{(\ell(\mathcal{D}))}] \cap M = [y^{(0)}] \cap M$ where $M$ is as defined in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$).
Now it follows from the properties in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_in_M}) when $N \geq 2$, or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_J_in_M}) when $N = 1$, that $J^{(\ell(\mathcal{D}))} = J^{(0)} = L$.
Hence $w$ fixes $\Pi_L$ pointwise when $N = 1$.
Moreover, when $N \geq 2$, the property in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}) implies that $\omega_h \ast s_{\lambda(h)+j} = s_{\lambda(h+1)+j}$ for every $0 \leq h \leq \ell(\mathcal{D})-1$ and $0 \leq j \leq N-1$.
Now by this property and the above-mentioned property $J^{(\ell(\mathcal{D}))} = J^{(0)}$, it follows that $w$ fixes the set $\Pi_{J^{(0)}} = \Pi_{L}$ pointwise.
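Spelled out, since every narrow transformation fixes $\Pi_{J^{(h)}}$ pointwise (so that $\lambda(h+1) = \lambda(h)$ at such a step) and $\lambda(\ell(\mathcal{D})) = \lambda(0)$ by the equality $J^{(\ell(\mathcal{D}))} = J^{(0)}$, the above relations compose to
\begin{displaymath}
w \ast s_{\lambda(0)+j} = s_{\lambda(\ell(\mathcal{D}))+j} = s_{\lambda(0)+j} \quad \mbox{for every } 0 \leq j \leq N-1 \enspace.
\end{displaymath}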
Hence the proof is concluded.
\end{proof}
As mentioned above, a standard decomposition of $w$ with respect to $L$ exists.
Therefore, by virtue of Lemma \ref{lem:claim_holds_when_admissible}, it suffices to show that there exists a sequence which is admissible with respect to this standard decomposition.
More generally, we prove the following proposition (note that the above-mentioned standard decomposition of $w$ satisfies the assumption in this proposition):
\begin{prop}
\label{prop:admissible_sequence_exists}
Let $\mathcal{D} = \omega_{\ell(\mathcal{D})-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element.
Suppose that $J^{(0)}$ is of type $A_N$ with $1 \leq N < \infty$, and $\Pi_{J^{(0)}}$ is an irreducible component of $\Pi^{[y^{(0)}]}$.
Then there exists a sequence which is admissible of type $A_N$ with respect to $\mathcal{D}$.
\end{prop}
To prove Proposition \ref{prop:admissible_sequence_exists}, we give the following key lemma, which will be proven below:
\begin{lem}
\label{lem:admissible_sequence_extends}
Let $n \geq 0$.
Let $\mathcal{D} = \omega_n\omega_{n-1} \cdots \omega_1\omega_0$ be a semi-standard decomposition of an element, and put $\mathcal{D}' := \omega_{n-1} \cdots \omega_1\omega_0$, which is also a semi-standard decomposition of an element satisfying that $y^{(0)}(\mathcal{D}') = y^{(0)}(\mathcal{D})$ and $J^{(0)}(\mathcal{D}') = J^{(0)}(\mathcal{D})$.
Suppose that $s_1,\dots,s_{\mu}$ is a sequence which is admissible of type $A_N$ with respect to $\mathcal{D}'$.
For simplicity, put $y^{(j)} = y^{(j)}(\mathcal{D})$, $J^{(j)} = J^{(j)}(\mathcal{D})$, $t^{(j)} = t^{(j)}(\mathcal{D})$, and $K^{(j)} = K^{(j)}(\mathcal{D})$ for each index $j$.
\begin{enumerate}
\item \label{item:lem_admissible_sequence_extends_narrow}
If $\omega_n$ is a narrow transformation, then we have either $[y^{(n+1)}] = [y^{(n)}]$, or $K^{(n)}$ is apart from $[y^{(n)}] \cap \bigcup_{j=0}^{n} J^{(j)}$.
\item \label{item:lem_admissible_sequence_extends_wide_N_1_fix}
If $N = 1$, $\omega_n$ is a wide transformation and $J^{(n+1)} = J^{(n)}$, then we have $[y^{(n+1)}] = [y^{(n)}]$.
\item \label{item:lem_admissible_sequence_extends_wide_N_1}
If $N = 1$, $\omega_n$ is a wide transformation and $J^{(n+1)} \neq J^{(n)}$, then $K^{(n)}$ is of type $A_3$, $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\}) \subseteq [y^{(n)}]$, and the action of $\omega_n$ exchanges the unique element of $J^{(n)}$ and the unique element of $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\})$ (the latter belonging to $[y^{(n)}] \cap J^{(n+1)}$).
\item \label{item:lem_admissible_sequence_extends_wide_N_large}
If $N \geq 2$ and $\omega_n$ is a wide transformation, then $K^{(n)}$ is of type $A_{N+2}$, the unique element $s'$ of $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\})$ belongs to $[y^{(n)}]$, and one of the following two conditions is satisfied:
\begin{enumerate}
\item \label{item:lem_admissible_sequence_extends_wide_N_large_left}
$t^{(n)}$ is adjacent to $s'$ and $s_{\lambda(n)}$, and the action of $\omega_n$ maps the elements $s_{\lambda(n)}$, $s_{\lambda(n)+1}$, $s_{\lambda(n)+2},\dots,s_{\rho(n)}$ and $s'$ to $s'$, $t^{(n)}$, $s_{\lambda(n)},\dots,s_{\rho(n)-2}$ and $s_{\rho(n)}$, respectively.
Moreover;
\begin{enumerate}
\item \label{item:lem_admissible_sequence_extends_wide_N_large_left_branch}
if $\lambda(n) \geq 3$ and $s_{\lambda(n)-2} \in \bigcup_{j=0}^{n} J^{(j)}$, then we have $s' = s_{\lambda(n)-2}$ and $t^{(n)} = s_{\lambda(n)-1}$;
\item \label{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}
otherwise, we have $s' \not\in \bigcup_{j=0}^{n} J^{(j)}$.
\end{enumerate}
\item \label{item:lem_admissible_sequence_extends_wide_N_large_right}
$t^{(n)}$ is adjacent to $s'$ and $s_{\rho(n)}$, and the action of $\omega_n$ maps the elements $s_{\rho(n)}$, $s_{\rho(n)-1}$, $s_{\rho(n)-2},\dots,s_{\lambda(n)}$ and $s'$ to $s'$, $t^{(n)}$, $s_{\rho(n)},\dots,s_{\lambda(n)+2}$ and $s_{\lambda(n)}$, respectively.
Moreover;
\begin{enumerate}
\item \label{item:lem_admissible_sequence_extends_wide_N_large_right_branch}
if $\rho(n) \leq \mu - 2$ and $s_{\rho(n)+2} \in \bigcup_{j=0}^{n} J^{(j)}$, then we have $s' = s_{\rho(n)+2}$ and $t^{(n)} = s_{\rho(n)+1}$;
\item \label{item:lem_admissible_sequence_extends_wide_N_large_right_terminal}
otherwise, we have $s' \not\in \bigcup_{j=0}^{n} J^{(j)}$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{lem}
Then Proposition \ref{prop:admissible_sequence_exists} is deduced by applying Lemma \ref{lem:admissible_sequence_extends} and the next lemma to the semi-standard decompositions $\mathcal{D}_{\nu} := \omega_{\nu-1} \cdots \omega_1\omega_0$ ($0 \leq \nu \leq \ell(\mathcal{D})$) successively (note that, when $\nu = 0$, i.e., $\mathcal{D}_{\nu}$ is an empty expression, the sequence $s_1,\dots,s_N$, where $J^{(0)} = \{s_1,\dots,s_N\}$ is the standard labelling of type $A_N$, is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$):
\begin{lem}
\label{lem:admissible_sequence_existence_from_lemma}
In the situation of Lemma \ref{lem:admissible_sequence_extends}, we define a sequence $\sigma$ of elements of $S$ in the following manner:
\begin{itemize}
\item For Cases \ref{item:lem_admissible_sequence_extends_narrow}, \ref{item:lem_admissible_sequence_extends_wide_N_1_fix}, \ref{item:lem_admissible_sequence_extends_wide_N_large_left_branch} and \ref{item:lem_admissible_sequence_extends_wide_N_large_right_branch}, let $\sigma$ be the sequence $s_1,\dots,s_{\mu}$.
\item For Case \ref{item:lem_admissible_sequence_extends_wide_N_1}, let $s'$ be the unique element of $K^{(n)} \smallsetminus (J^{(n)} \cup \{t^{(n)}\}) = J^{(n+1)}$, and let $\sigma$ be the sequence $s_1,\dots,s_{\mu},s'$ when $s' \not\in \{s_1,\dots,s_{\mu}\}$ and the sequence $s_1,\dots,s_{\mu}$ when $s' \in \{s_1,\dots,s_{\mu}\}$.
\item For Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}, let $\sigma$ be the sequence $s'$, $t^{(n)}$, $s_{\lambda(n)}$, $s_{\lambda(n)+1},\dots,s_{\rho'}$, where $\rho'$ denotes the largest index $1 \leq \rho' \leq \mu$ with $s_{\rho'} \in \bigcup_{j=0}^{n} J^{(j)}$.
\item For Case \ref{item:lem_admissible_sequence_extends_wide_N_large_right_terminal}, let $\sigma$ be the sequence $s'$, $t^{(n)}$, $s_{\rho(n)}$, $s_{\rho(n)-1},\dots,s_{\lambda'}$, where $\lambda'$ denotes the smallest index $1 \leq \lambda' \leq \mu$ with $s_{\lambda'} \in \bigcup_{j=0}^{n} J^{(j)}$.
\end{itemize}
Then $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D} = \omega_n \cdots \omega_1\omega_0$.
\end{lem}
Now our remaining task is to prove Lemma \ref{lem:admissible_sequence_extends} and Lemma \ref{lem:admissible_sequence_existence_from_lemma}.
For the purpose, we present an auxiliary result:
\begin{lem}
\label{lem:admissible_sequence_non_adjacent_pairs}
Let $s_1,\dots,s_{\mu}$ be a sequence which is admissible of type $A_N$, where $N \geq 2$, with respect to a semi-standard decomposition $\mathcal{D}$ of an element of $W$.
Suppose that the sequence $s_1,\dots,s_{\mu}$ is tight.
If $1 \leq j_1 < j_2 \leq \mu$, $j_2 - j_1 \geq 2$, and either $j_1 \equiv 1 \pmod{2}$ or $j_2 \equiv \mu \pmod{2}$, then $s_{j_1}$ is not adjacent to $s_{j_2}$.
\end{lem}
\begin{proof}
By symmetry, we may assume without loss of generality that $j_1 \equiv 1 \pmod{2}$.
Put $\mathcal{D} = \omega_{n-1} \cdots \omega_1\omega_0$.
Since the sequence $s_1,\dots,s_\mu$ is tight, there exists an index $0 \leq h \leq n$ with $s_{j_2} \in J^{(h)}$.
Now the properties \ref{item:admissible_N_large_M_line} and \ref{item:admissible_N_large_J_in_M} in Definition \ref{defn:admissible_for_N_large} imply that $J^{(h)} = \{s_{\lambda(h)},s_{\lambda(h)+1},\dots,s_{\rho(h)}\}$ is the standard labelling of type $A_N$, therefore the claim holds if $s_{j_1} \in J^{(h)}$ (note that $j_2 - j_1 \geq 2$).
On the other hand, if $s_{j_1} \not\in J^{(h)}$, then the property \ref{item:admissible_N_large_J_in_M} in Definition \ref{defn:admissible_for_N_large} and the fact $j_1 < j_2$ imply that $j_1 < \lambda(h)$, therefore $s_{j_1} \in [y^{(h)}]$ since $j_1 \equiv 1 \pmod{2}$.
Hence the claim follows from the fact that $J^{(h)}$ is apart from $[y^{(h)}]$ (see the property \ref{item:admissible_N_large_J_irreducible_component} in Definition \ref{defn:admissible_for_N_large}).
\end{proof}
From now, we prove the pair of Lemma \ref{lem:admissible_sequence_extends} and Lemma \ref{lem:admissible_sequence_existence_from_lemma} by induction on $n \geq 0$.
First, we give a proof of Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $n = n_0$ by assuming Lemma \ref{lem:admissible_sequence_extends} for $0 \leq n \leq n_0$.
Secondly, we will give a proof of Lemma \ref{lem:admissible_sequence_extends} for $n = n_0$ by assuming Lemma \ref{lem:admissible_sequence_extends} for $0 \leq n < n_0$ and Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $0 \leq n < n_0$.
\begin{proof}
[Proof of Lemma \ref{lem:admissible_sequence_existence_from_lemma} (for $n = n_0$) from Lemma \ref{lem:admissible_sequence_extends} (for $n \leq n_0$).]
When $n_0 = 0$, the claim is obvious from the property of $\omega_{n_0}$ specified in Lemma \ref{lem:admissible_sequence_extends}.
From now, we suppose that $n_0 > 0$.
We may assume without loss of generality that the sequence $s_1,\dots,s_{\mu}$ (denoted here by $\sigma'$) which is admissible with respect to $\mathcal{D}'$ is tight, therefore we have $M' := \{s_1,\dots,s_{\mu}\} = \bigcup_{j=0}^{n_0} J^{(j)}$.
We divide the proof according to the possibility of $\omega_{n_0}$ listed in Lemma \ref{lem:admissible_sequence_extends}.
By symmetry, we may omit the argument for Case \ref{item:lem_admissible_sequence_extends_wide_N_large_right} without loss of generality.
In Case \ref{item:lem_admissible_sequence_extends_narrow}, since $M' = \bigcup_{j=0}^{n_0} J^{(j)}$ as above, $\omega_{n_0}$ satisfies the condition for $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_narrow_transformation}) (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_narrow_transformation}) (when $N = 1$), hence $\sigma = \sigma'$ is admissible of type $A_N$ with respect to $\mathcal{D}$.
Similarly, in Case \ref{item:lem_admissible_sequence_extends_wide_N_1_fix}, Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_branch}, and Case \ref{item:lem_admissible_sequence_extends_wide_N_1} with $s' \in M'$, respectively, the wide transformation $\omega_{n_0}$ satisfies the condition for $\sigma'$ in Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}), Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}), and Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}), respectively.
Hence $\sigma = \sigma'$ is admissible of type $A_N$ with respect to $\mathcal{D}$ in these three cases.
From now, we consider the remaining two cases: Case \ref{item:lem_admissible_sequence_extends_wide_N_1} with $s' \not\in M'$, and Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}.
Note that, in Case \ref{item:lem_admissible_sequence_extends_wide_N_large_left_terminal}, the tightness of $\sigma'$ implies that $\lambda(n_0) = 1$ and $\rho' = \mu$, therefore $\sigma$ is the sequence $s'$, $t^{(n_0)}$, $s_1,\dots,s_{\mu}$.
Moreover, in this case the unique element $s'$ of $K^{(n_0)} \cap [y^{(n_0)}]$ does not belong to $\bigcup_{j=0}^{n_0} J^{(j)} = M'$, therefore $t^{(n_0)}$ cannot be adjacent to $[y^{(n_0)}] \cap M'$; hence $t^{(n_0)} \not\in M'$ by the property of $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_in_M}).
Note also that, in both of the two cases, we have $s' \in J^{(n_0+1)}$ and $\{s'\}$ is an irreducible component of $[y^{(n_0)}]$.
We prove by induction on $0 \leq \nu \leq n_0$ that the sequence $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$ and $s' \in [y^{(\nu+1)}(\mathcal{D}_{\nu})]$, where
\begin{displaymath}
\mathcal{D}_{\nu} = \omega'_{\nu}\omega'_{\nu-1} \cdots \omega'_1\omega'_0 := (\omega_{n_0-\nu})^{-1}(\omega_{n_0-\nu+1})^{-1} \cdots (\omega_{n_0-1})^{-1}(\omega_{n_0})^{-1}
\end{displaymath}
is a semi-standard decomposition of an element with respect to $J^{(n_0+1)}$.
Note that $y^{(j)}(\mathcal{D}_{\nu}) = y^{(n_0-j+1)}$, $J^{(j)}(\mathcal{D}_{\nu}) = J^{(n_0-j+1)}$, $t^{(j)}(\mathcal{D}_{\nu}) = t^{(n_0-j+1)}$ and $K^{(j)}(\mathcal{D}_{\nu}) = K^{(n_0-j+1)}$ for each index $j$.
When $\nu = 0$, this claim follows immediately from the property of $\omega_{n_0}$ specified in Lemma \ref{lem:admissible_sequence_extends}, properties of $\sigma'$ and the definition of $\sigma$.
Suppose that $\nu > 0$.
Note that $s' \in [y^{(\nu)}(\mathcal{D}_{\nu-1})]$ (which is equal to $[y^{(\nu)}(\mathcal{D}_{\nu})] = [y^{(n_0-\nu+1)}]$) by the induction hypothesis.
First, we consider the case that $\omega'_{\nu}$ (or equivalently, $\omega_{n_0-\nu}$) is a wide transformation.
In this case, the possibility of $\omega_{n_0-\nu}$ is as specified in the condition of $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation}) (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_wide_transformation}) (when $N = 1$), where $h = n_0-\nu$; in particular, we have $K^{(n_0-\nu)} \smallsetminus \{t^{(n_0-\nu)}\} \subseteq M'$, therefore $[y^{(n_0-\nu+1)}] \smallsetminus M' = [y^{(n_0-\nu)}] \smallsetminus M'$.
Hence the element $s'$ of $[y^{(n_0-\nu+1)}] \smallsetminus M'$ belongs to $[y^{(n_0-\nu)}] = [y^{(\nu+1)}(\mathcal{D}_{\nu})]$, and the property of $\omega_{n_0-\nu} = (\omega'_{n_0})^{-1}$ implies that $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$ as well as $\mathcal{D}_{\nu-1}$.
Secondly, we consider the case that $\omega'_{\nu}$ (or equivalently, $\omega_{n_0-\nu}$) is a narrow transformation.
By applying Lemma \ref{lem:admissible_sequence_extends} (for $n = \nu$) to the pair $\mathcal{D}_{\nu}$, $\mathcal{D}_{\nu-1}$ and the sequence $\sigma$, it follows that either $[y^{(\nu+1)}(\mathcal{D}_{\nu})] = [y^{(\nu)}(\mathcal{D}_{\nu})]$, or the support of $\omega'_{\nu}$ is apart from $[y^{(\nu)}(\mathcal{D}_{\nu})] \cap \bigcup_{j=0}^{\nu} J^{(j)}(\mathcal{D}_{\nu})$.
Now in the former case, we have $s' \in [y^{(\nu)}(\mathcal{D}_{\nu})] = [y^{(\nu+1)}(\mathcal{D}_{\nu})]$.
On the other hand, in the latter case, we have $s' \in [y^{(\nu)}(\mathcal{D}_{\nu})] \cap \bigcup_{j=0}^{\nu} J^{(j)}(\mathcal{D}_{\nu})$ since $s' \in [y^{(\nu)}(\mathcal{D}_{\nu})]$ as above and $s' \in J^{(0)}(\mathcal{D}_{\nu}) = J^{(n_0+1)}$ by the choice of $s'$, therefore $s'$ is apart from the support of $\omega'_{\nu}$.
Hence, it follows in any case that $s' \in [y^{(\nu+1)}(\mathcal{D}_{\nu})]$; and the property of $\omega_{n_0-\nu} = (\omega'_{\nu})^{-1}$ specified by the condition of $\sigma'$ in Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_narrow_transformation}) (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_narrow_transformation}) (when $N = 1$), where $h = n_0-\nu$, implies that $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{\nu}$ as well as $\mathcal{D}_{\nu-1}$.
Hence the claim of this paragraph follows.
By using the result of the previous paragraph with $\nu = n_0$, the sequence $\sigma$ is admissible of type $A_N$ with respect to $\mathcal{D}_{n_0} = \mathcal{D}^{-1}$, hence with respect to $\mathcal{D}$ as well.
This completes the proof.
\end{proof}
By virtue of the above result, our remaining task is finally to prove Lemma \ref{lem:admissible_sequence_extends} for $n = n_0$ by assuming Lemma \ref{lem:admissible_sequence_extends} for $0 \leq n < n_0$ and Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $0 \leq n < n_0$ (in particular, with no assumptions when $n_0 = 0$).
Put $M' := \{s_1,\dots,s_{\mu}\}$.
In the proof, we may assume without loss of generality that the sequence $s_1,\dots,s_{\mu}$ (denoted here by $\sigma'$) which is admissible with respect to $\mathcal{D}'$ is tight (hence we have $J^{(0)} = M'$ when $n_0 = 0$).
Now by Lemma \ref{lem:proof_special_second_case_transformations_wide}, the claim of Lemma \ref{lem:admissible_sequence_extends} holds for the case that $N = 1$ and $\omega_{n_0}$ is a wide transformation.
From now, we consider the other case that either $N \geq 2$ or $\omega_{n_0}$ is a narrow transformation.
Assume to the contrary that the claim of Lemma \ref{lem:admissible_sequence_extends} does not hold.
Then, by Lemma \ref{lem:proof_special_second_case_transformations_wide}, Lemma \ref{lem:proof_special_second_case_transformations_narrow} and the properties of the tight sequence $\sigma'$ in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$), it follows that the possibilities for $\omega_{n_0}$ are as follows (up to symmetry):
\begin{description}
\item[Case (I):] $\omega_{n_0}$ is a narrow transformation, $K^{(n_0)}$ is of type $A_2$ or type $I_2(m)$ with $m$ odd, and we have $s_{\eta} \in K^{(n_0)} \cap [y^{(n_0)}]$ for some index $1 \leq \eta \leq \mu$; hence $t^{(n_0)} \not\in M'$, $K^{(n_0)} = \{s_{\eta},t^{(n_0)}\}$ and the action of $\omega_{n_0}$ exchanges the two elements of $K^{(n_0)}$.
\item[Case (II):] $N \geq 2$, $\omega_{n_0}$ is a wide transformation, $K^{(n_0)}$ is of type $A_{N+2}$, and $t^{(n_0)}$ is adjacent to $s_{\lambda(n_0)}$ and the unique element $s'$ of $[y^{(n_0)}] \cap K^{(n_0)}$; hence the action of $\omega_{n_0}$ maps the elements $s_{\lambda(n_0)}$, $s_{\lambda(n_0)+1}$, $s_{\lambda(n_0)+2},\cdots,s_{\rho(n_0)}$ and $s'$ to $s'$, $t^{(n_0)}$, $s_{\lambda(n_0)},\dots,s_{\rho(n_0)-2}$ and $s_{\rho(n_0)}$, respectively.
Moreover, $t^{(n_0)} \not\in M'$, and
\begin{description}
\item[Case (II-1):] $s' = s_{j_0}$ for an index $\rho(n_0)+2 \leq j_0 \leq \mu$ with $j_0 \equiv \mu \pmod{2}$;
\item[Case (II-2):] $\lambda(n_0) \geq 3$ and $s' \not\in \{s_{\lambda(n_0)-2},s_{\lambda(n_0)-1},\dots,s_{\mu}\}$;
\item[Case (II-3):] $\lambda(n_0) \geq 3$ and $s' = s_{\lambda(n_0)-2}$.
\end{description}
\end{description}
In particular, by the tightness of $\sigma'$, the conditions in the above four cases cannot be satisfied when $n_0 = 0$.
Hence the claim holds when $n_0 = 0$.
From now, we suppose that $n_0 > 0$.
For each of the four cases, we determine an element $\overline{s} \in [y^{(n_0)}] \cap M'$ and an element $\overline{t} \in S \smallsetminus [y^{(n_0)}]$ in the following manner: $\overline{s} = s_{\eta}$ and $\overline{t} = t^{(n_0)}$ in Case (I); $\overline{s} = s_{j_0}$ and $\overline{t} = t^{(n_0)}$ in Case (II-1); $\overline{s} = s_{\lambda(n_0)-2}$ and $\overline{t} = s_{\lambda(n_0)-1}$ in Case (II-2); and $\overline{s} = s_{\lambda(n_0)-2}$ and $\overline{t} = t^{(n_0)}$ in Case (II-3).
Note that $\overline{s}$ and $\overline{t}$ are adjacent by the definition.
Since $\sigma'$ is tight, there exists an index $0 \leq h_0 \leq n_0-1$ with $\overline{s} \in J^{(h_0)}$; let $h_0$ be the largest index with this property.
By the definition of $h_0$, $\omega_{h_0}$ is a wide transformation and $J^{(h_0+1)} \neq J^{(h_0)}$.
Let $\overline{r}$ denote the element of $J^{(h_0+1)}$ with $\omega_{h_0} \ast \overline{s} = \overline{r}$.
Then we have $\overline{r} \in [y^{(h_0)}]$ by the property of $\omega_{h_0}$ and the choice of $\overline{s}$.
Let $\overline{\mathcal{D}} := \omega'_{n'-1} \cdots \omega'_1\omega'_0$ denote the simplification of
\begin{displaymath}
(\omega_{n_0-1} \cdots \omega_{h_0+2}\omega_{h_0+1})^{-1} = (\omega_{h_0+1})^{-1}(\omega_{h_0+2})^{-1} \cdots (\omega_{n_0-1})^{-1}
\end{displaymath}
(see Section \ref{sec:finitepart_decomposition_Y} for the terminology), and let $\overline{u}$ be the element of $W$ expressed by the product $\overline{\mathcal{D}}$.
Here we present the following lemma:
\begin{lem}
\label{lem:proof_lem_admissible_sequence_extends_simplification}
In this setting, the support of each transformation in $\overline{\mathcal{D}}$ does not contain $\overline{t}$ and is apart from $\overline{s}$.
\end{lem}
\begin{proof}
We prove by induction on $0 \leq \nu' \leq n'-1$ that the support $K'$ of $\omega'_{\nu'}$ does not contain $\overline{t}$ and is apart from $\overline{s}$.
Let $(\omega_{\nu})^{-1}$ be the term in $(\omega_{h_0+1})^{-1}(\omega_{h_0+2})^{-1} \cdots (\omega_{n_0-1})^{-1}$ corresponding to the term $\omega'_{\nu'}$ in the simplification $\overline{\mathcal{D}}$.
First, by the definition of simplification and the property of narrow transformations specified in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$), $K'$ is apart from $[y^{(\nu')}(\overline{\mathcal{D}})] \cap M' = [y^{(\nu+1)}] \cap M'$ (see Lemma \ref{lem:another_decomposition_Y_reduce_redundancy} for the equality) if $\omega'_{\nu'}$ (or equivalently, $\omega_{\nu}$) is a narrow transformation.
Now we have $\overline{s} \in [y^{(n_0)}] = [y^{(0)}(\overline{\mathcal{D}})]$ and $\overline{s} \in M'$ by the definition, therefore the induction hypothesis implies that $\overline{s} \in [y^{(\nu')}(\overline{\mathcal{D}})] \cap M'$.
Hence $K'$ is apart from $\overline{s}$ if $\omega'_{\nu'}$ is a narrow transformation.
This also implies that $\overline{t} \not\in K'$ if $\omega'_{\nu'}$ is a narrow transformation, since $\overline{t}$ is adjacent to $\overline{s}$.
From now, we consider the other case that $\omega'_{\nu'}$ (or equivalently, $\omega_{\nu}$) is a wide transformation.
Recall that $\overline{s} \in [y^{(\nu')}(\overline{\mathcal{D}})]$ as mentioned above.
Then, by the property of wide transformation $\omega_{\nu}$ specified in Definition \ref{defn:admissible_for_N_large} (when $N \geq 2$) or Definition \ref{defn:admissible_for_N_1} (when $N = 1$) and the definition of simplification, it follows that $\overline{s} \in J^{(\nu'+1)}(\overline{\mathcal{D}})$ provided $K'$ is not apart from $\overline{s}$.
On the other hand, by the definition of $h_0$, we have $\overline{s} \not\in J^{(j)}$ for any $h_0+1 \leq j \leq n_0$.
This implies that $K'$ should be apart from $\overline{s}$; therefore we have $\overline{t} \not\in K'$, since $\overline{t}$ is adjacent to $\overline{s}$.
Hence the proof of Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} is concluded.
\end{proof}
Now, in all the cases except Case (II-2), the following property holds:
\begin{lem}
\label{lem:proof_lem_admissible_sequence_extends_root_other_cases}
In Cases (I), (II-1) and (II-3), there exists a root $\beta \in \Pi^{[y^{(h_0)}]}$ in which the coefficient of $\alpha_{\overline{s}}$ is zero and the coefficient of $\alpha_{\overline{t}} = \alpha_{t^{(n_0)}}$ is non-zero.
\end{lem}
\begin{proof}
First, Lemma \ref{lem:another_decomposition_Y_reduce_redundancy} implies that $\overline{u} \cdot \Pi_{J^{(n_0)}} = \Pi_{J^{(h_0+1)}}$ and $[y'] = [y^{(h_0+1)}]$ where $y' := y^{(n')}(\overline{\mathcal{D}})$.
Put $r' := \overline{u}^{-1} \ast \overline{r} \in J^{(n_0)}$.
Then by Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} and Lemma \ref{lem:another_decomposition_Y_shift_x}, we have $\overline{u} \in Y_{z',z}$, where $z$ and $z'$ are elements of $S^{(\Lambda)}$ obtained from $y^{(0)}(\overline{\mathcal{D}}) = y^{(n_0)}$ and $y'$ by replacing the element $\overline{s}$ with $r'$ and $\overline{r}$, respectively.
Now by the property of the wide transformation $\omega_{h_0}$, it follows that $y^{(h_0)}$ is obtained from $y^{(h_0+1)}$ by replacing $\overline{s}$ with $\overline{r}$; hence we have $[z'] = [y^{(h_0)}]$.
We show that there exists a root $\beta' \in \Pi^{[z]}$ in which the coefficient of $\alpha_{\overline{s}}$ is zero and the coefficient of $\alpha_{t^{(n_0)}}$ is non-zero.
In Case (I), $t^{(n_0)}$ is apart from both $[y^{(n_0)}] \smallsetminus \{\overline{s}\}$ and $J^{(n_0)}$, while we have $[z] \subseteq ([y^{(n_0)}] \smallsetminus \{\overline{s}\}) \cup J^{(n_0)}$ by the definition; hence $\beta' := \alpha_{t^{(n_0)}}$ satisfies the required condition.
In Case (II-1), we have $\overline{r} = s_{\lambda(h_0+1)}$ by the property of $\omega_{h_0}$, therefore $r' = s_{\lambda(n_0)}$ by the property of wide transformations in $\overline{\mathcal{D}}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation})).
Put $\beta' := \alpha_{t^{(n_0)}} + \alpha_{s_{\lambda(n_0)}} + \alpha_{s_{\lambda(n_0)+1}} \in \Pi^{K^{(n_0)},\{r'\}}$ (note that $N \geq 2$ and $K^{(n_0)}$ is of type $A_{N+2}$).
Now $K^{(n_0)}$ is apart from $[y^{(n_0)}] \smallsetminus \{\overline{s}\} = [z] \smallsetminus \{r'\}$, therefore we have $\beta' \in \Pi^{[z]}$ and $\beta'$ satisfies the required condition.
Moreover, in Case (II-3), we have $\overline{r} = s_{\rho(h_0+1)}$ by the property of $\omega_{h_0}$, therefore $r' = s_{\rho(n_0)}$ by the property of wide transformations in $\overline{\mathcal{D}}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_wide_transformation})).
Now, since $N \geq 2$ and $K^{(n_0)}$ is of type $A_{N+2}$, $t^{(n_0)}$ is not adjacent to $r'$, while $K^{(n_0)}$ is apart from $[y^{(n_0)}] \smallsetminus \{\overline{s}\} = [z] \smallsetminus \{r'\}$.
Hence $\beta' := \alpha_{t^{(n_0)}}$ satisfies the required condition.
By Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification}, the action of $\overline{u}$ does not change the coefficients of $\alpha_{\overline{s}}$ and $\alpha_{\overline{t}}$.
Hence by the result of the previous paragraph, the root $\beta := \overline{u} \cdot \beta' \in \Pi^{[z']} = \Pi^{[y^{(h_0)}]}$ satisfies the required condition, concluding the proof of Lemma \ref{lem:proof_lem_admissible_sequence_extends_root_other_cases}.
\end{proof}
Since $\overline{t} \not\in J^{(h_0)}$ and $\overline{t}$ is adjacent to $\overline{s}$, the root $\beta \in \Pi^{[y^{(h_0)}]}$ given by Lemma \ref{lem:proof_lem_admissible_sequence_extends_root_other_cases} does not belong to $\Pi_{J^{(h_0)}}$ and is not orthogonal to $\alpha_{\overline{s}}$.
However, since $\overline{s} \in J^{(h_0)}$, this contradicts the fact that $\Pi_{J^{(h_0)}}$ is an irreducible component of $\Pi^{[y^{(h_0)}]}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_irreducible_component}) when $N \geq 2$, or Definition \ref{defn:admissible_for_N_1}(\ref{item:admissible_N_1_J_irreducible_component}) when $N = 1$).
Hence we have derived a contradiction in the three cases in Lemma \ref{lem:proof_lem_admissible_sequence_extends_root_other_cases}.
From now, we consider the remaining case, i.e., Case (II-2).
In this case, the following property holds:
\begin{lem}
\label{lem:proof_lem_admissible_sequence_extends_simplification_2}
In this setting, the support of each transformation in $\overline{\mathcal{D}}$ does not contain $t^{(n_0)}$ and is apart from $s'$.
\end{lem}
\begin{proof}
For each $0 \leq i \leq n_0 - h_0 - 1$, let $\mathcal{D}_i$ denote the semi-standard decomposition of an element defined by
\begin{displaymath}
\mathcal{D}_i = \omega''_i \cdots \omega''_1\omega''_0 := (\omega_{n_0-i})^{-1} \cdots (\omega_{n_0-1})^{-1}(\omega_{n_0})^{-1} \enspace.
\end{displaymath}
For each $0 \leq i \leq n_0 - h_0 - 1$, let $\sigma_i$ denote the sequence $s'$, $t^{(n_0)}$, $s_{\lambda(n_0)}$, $s_{\lambda(n_0)+1},\dots,s_{\overline{\rho}(i)}$, where $\overline{\rho}(i)$ denotes the largest index $\lambda(n_0) \leq \overline{\rho}(i) \leq \mu$ with $s_{\overline{\rho}(i)} \in \bigcup_{j=0}^{i+1} J^{(j)}(\mathcal{D}_i)$ ($= \bigcup_{j=n_0-i}^{n_0+1} J^{(j)}$).
We prove the following properties by induction on $1 \leq i \leq n_0 - h_0 - 1$: The sequence $\sigma_i$ is admissible with respect to $\mathcal{D}_i$; we have $s' \in [y^{(i+1)}(\mathcal{D}_i)]$; and we have either $[y^{(i+1)}(\mathcal{D}_i)] = [y^{(i)}(\mathcal{D}_i)]$ and $J^{(i+1)}(\mathcal{D}_i) = J^{(i)}(\mathcal{D}_i)$, or the support $K'' = K^{(i)}(\mathcal{D}_i)$ of $\omega''_i$ is apart from $s'$.
Note that, by the properties of $\omega_{n_0}$ and $\sigma'$, we have $s' \in [y^{(n_0)}] = [y^{(1)}(\mathcal{D}_0)]$, and the sequence $\sigma_0$ (which is $s'$, $t^{(n_0)}$, $s_{\lambda(n_0)},\dots,s_{\rho(n_0)}$) is admissible with respect to $\mathcal{D}_0$.
By the induction hypothesis and Lemma \ref{lem:admissible_sequence_extends} for $n = i$ applied to the sequence $\sigma_{i-1}$ and the pair $\mathcal{D}_i$ and $\mathcal{D}_{i-1}$ (note that $i \leq n_0 - h_0 - 1 \leq n_0 - 1$), it follows that the possibilities of $\omega''_i = (\omega_{n_0-i})^{-1}$ are as listed in Lemma \ref{lem:admissible_sequence_extends}.
Now if $\omega''_i$ is a narrow transformation, then as in Case \ref{item:lem_admissible_sequence_extends_narrow} of Lemma \ref{lem:admissible_sequence_extends}, we have either $[y^{(i+1)}(\mathcal{D}_i)] = [y^{(i)}(\mathcal{D}_i)]$, or $K''$ is apart from $s'$ (note that $s' \in [y^{(i)}(\mathcal{D}_i)]$ by the induction hypothesis, while $s' \in J^{(0)}(\mathcal{D}_i) = J^{(n_0+1)}$).
On the other hand, suppose that $\omega''_i = (\omega_{n_0-i})^{-1}$ is a wide transformation.
Then, by the property of $\sigma'$, the support $K''$ of the wide transformation $\omega_{n_0-i}$ is contained in $M'$, therefore $s' \not\in K''$.
This implies that $K''$ is apart from $s'$, since we have $s' \in [y^{(i)}(\mathcal{D}_i)]$ by the induction hypothesis.
Moreover, in any case of $\omega''_i$, we have $s' \in [y^{(i+1)}(\mathcal{D}_i)]$ by the above-mentioned fact $s' \in [y^{(i)}(\mathcal{D}_i)]$ and the above argument.
On the other hand, the sequence $\sigma$ in Lemma \ref{lem:admissible_sequence_existence_from_lemma} corresponding to the current case is equal to $\sigma_i$, therefore $\sigma_i$ is admissible with respect to $\mathcal{D}_i$ by Lemma \ref{lem:admissible_sequence_existence_from_lemma} for $n = i$ (note again that $i \leq n_0 - 1$).
Hence the claim of the previous paragraph holds.
By the above result, the simplification $\overline{\mathcal{D}} = \omega'_{n'-1} \cdots \omega'_0$ of $\omega''_{n_0-h_0-1} \cdots \omega''_2\omega''_1$ satisfies the following conditions: For each $0 \leq \nu' \leq n'-1$, we have $s' \in [y^{(\nu')}(\overline{\mathcal{D}})]$, and the support of $\omega'_{\nu'}$ is apart from $s'$.
Since $t^{(n_0)}$ is adjacent to $s'$, this implies that the support of each $\omega'_{\nu'}$ does not contain $t^{(n_0)}$.
Hence the proof of Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2} is concluded.
\end{proof}
By Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2}, we have $s' \in [y^{(n')}(\overline{\mathcal{D}})] = [y^{(h_0+1)}]$, therefore the set $K^{(h_0)}$ of type $A_{N+2}$ consisting of $s_{\lambda(n_0)-2}$, $s_{\lambda(n_0)-1},\dots,s_{\rho(n_0)}$ is apart from $s'$.
On the other hand, since $s_{\lambda(n_0)-2} \in [y^{(n_0)}]$, the set $K^{(n_0)}$ of type $A_{N+2}$ is apart from $s'$.
From now, by using these properties, we construct a root $\beta' \in \Pi^{[y^{(n_0)}]} \smallsetminus \Pi_{J^{(n_0)}}$ which is not orthogonal to $\alpha_{s_{\lambda(n_0)+1}} \in \Pi_{J^{(n_0)}}$ (note that $N \geq 2$), in the following five steps.
\textbf{Step 1.}
Note that the set $K^{(n_0)}$ is apart from $[y^{(n_0)}] \smallsetminus K^{(n_0)}$.
Put $z^{(0)} := y^{(n_0)}$.
Then we have $u_1 := w_{z^{(0)}}^{t^{(n_0)}} = s' t^{(n_0)} \in Y_{z^{(1)},z^{(0)}}$, where $z^{(1)} \in S^{(\Lambda)}$ is obtained from $z^{(0)}$ by replacing $s'$ with $t^{(n_0)}$.
Similarly, we have $u_2 := w_{z^{(1)}}^{s_{\lambda(n_0)}} = t^{(n_0)} s_{\lambda(n_0)} \in Y_{z^{(2)},z^{(1)}}$, where $z^{(2)} \in S^{(\Lambda)}$ is obtained from $z^{(1)}$ by replacing $t^{(n_0)}$ with $s_{\lambda(n_0)}$.
Now, since $\beta_0 := \alpha_{s_{\lambda(n_0)}}$ and $\beta'_0 := \alpha_{s_{\lambda(n_0)+1}}$ are non-orthogonal elements of $\Pi_{J^{(n_0)}} \subseteq \Pi^{[z^{(0)}]}$, the roots $\beta_2 := u_2u_1 \cdot \beta_0 = \alpha_{s'}$ and $\beta'_2 := u_2u_1 \cdot \beta'_0 = \alpha_{t^{(n_0)}} + \alpha_{s_{\lambda(n_0)}} + \alpha_{s_{\lambda(n_0)+1}}$ are non-orthogonal elements of $\Pi^{[z^{(2)}]}$.
\textbf{Step 2.}
By the construction, $z^{(2)}$ is obtained from $y^{(n_0)}$ by replacing $s'$ with $s_{\lambda(n_0)}$.
On the other hand, we have $J^{(n_0)} = J^{(h_0+1)}$ and $\overline{u} \ast s_{\lambda(n_0)} = s_{\lambda(n_0)}$ by the property of wide transformations in $\overline{\mathcal{D}}$.
Now by Lemma \ref{lem:another_decomposition_Y_shift_x}, we have $u_3 := \overline{u} \in Y_{z^{(3)},z^{(2)}}$, where $z^{(3)} \in S^{(\Lambda)}$ is obtained from $y^{(n')}(\overline{\mathcal{D}})$ by replacing $s'$ with $s_{\lambda(n_0)}$.
Note that $[z^{(3)}] = ([y^{(n')}(\overline{\mathcal{D}})] \smallsetminus \{s'\}) \cup \{s_{\lambda(n_0)}\} = ([y^{(h_0+1)}] \smallsetminus \{s'\}) \cup \{s_{\lambda(n_0)}\}$.
Put $\beta_3 := u_3 \cdot \beta_2$ and $\beta'_3 := u_3 \cdot \beta'_2$.
Then we have $\beta_3,\beta'_3 \in \Pi^{[z^{(3)}]}$ and $\langle \beta_3,\beta'_3 \rangle \neq 0$.
Moreover, by Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} and Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2}, $u_3$ fixes $\alpha_{s'}$, hence $\beta_3 = \alpha_{s'}$; and the action of $u_3$ does not change the coefficients of $\alpha_{s'}$, $\alpha_{t^{(n_0)}}$, $\alpha_{\overline{s}}$ and $\alpha_{\overline{t}}$, hence the coefficients of these four simple roots in $\beta'_3$ are $0$, $1$, $0$ and $0$, respectively.
This also implies that the coefficient of $\alpha_{s_{\lambda(n_0)}}$ in $\beta'_3$ is non-zero, since $t^{(n_0)}$ is adjacent to $s_{\lambda(n_0)} \in [z^{(3)}]$.
\textbf{Step 3.}
Note that the set $K^{(h_0)}$ is apart from $[y^{(h_0+1)}] \smallsetminus K^{(h_0)}$, hence from $[z^{(3)}] \smallsetminus K^{(h_0)}$.
Then we have $u_4 := w_{z^{(3)}}^{\overline{t}} = \overline{t} s_{\lambda(n_0)} \overline{s} \overline{t} \in Y_{z^{(4)},z^{(3)}}$, where $z^{(4)} \in S^{(\Lambda)}$ is obtained from $z^{(3)}$ by exchanging $s_{\lambda(n_0)}$ and $\overline{s}$.
Now we have $\beta_4 := u_4 \cdot \beta_3 = \alpha_{s'} \in \Pi^{[z^{(4)}]}$, $\beta'_4 := u_4 \cdot \beta'_3 \in \Pi^{[z^{(4)}]}$ and $\langle \beta_4,\beta'_4 \rangle \neq 0$.
Moreover, by the property of coefficients in $\beta'_3$ mentioned in Step 2 and the fact that $\overline{t}$ is adjacent to $s_{\lambda(n_0)}$ and $\overline{s}$, it follows that the coefficient of $\alpha_{\overline{s}}$ in $\beta'_4$ is non-zero.
\textbf{Step 4.}
Since $[z^{(4)}] = [z^{(3)}]$, there exists an element $z^{(5)} \in S^{(\Lambda)}$ satisfying that $[z^{(5)}] = [z^{(2)}]$ and $u_5 := \overline{u}{}^{-1} \in Y_{z^{(5)},z^{(4)}}$.
We have $\beta_5 := u_5 \cdot \beta_4 \in \Pi^{[z^{(5)}]}$, $\beta'_5 := u_5 \cdot \beta'_4 \in \Pi^{[z^{(5)}]}$ and $\langle \beta_5,\beta'_5 \rangle \neq 0$.
Now by Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification} and Lemma \ref{lem:proof_lem_admissible_sequence_extends_simplification_2}, $u_5$ fixes $\alpha_{s'}$, hence $\beta_5 = \alpha_{s'}$; and the action of $u_5$ does not change the coefficient of $\alpha_{\overline{s}}$, hence the coefficient of $\alpha_{\overline{s}}$ in $\beta'_5$ is non-zero.
\textbf{Step 5.}
Put $u_6 := u_2{}^{-1}$ and $u_7 := u_1{}^{-1}$.
Since $[z^{(5)}] = [z^{(2)}]$ as above, there exists an element $z^{(7)} \in S^{(\Lambda)}$ satisfying that $[z^{(7)}] = [z^{(0)}] = [y^{(n_0)}]$ and $u_7u_6 \in Y_{z^{(7)},z^{(5)}}$.
Now we have $\beta_7 := u_7u_6 \cdot \beta_5 = \alpha_{s'}$, since $\beta_5 = \beta_2$.
On the other hand, put $\beta'_7 := u_7u_6 \cdot \beta'_5$.
Then we have $\beta'_7 \in \Pi^{[z^{(7)}]} = \Pi^{[y^{(n_0)}]}$ and $\langle \beta_7,\beta'_7 \rangle \neq 0$.
Moreover, since $u_7u_6 \in W_{S \smallsetminus \{\overline{s}\}}$, the coefficient of $\alpha_{\overline{s}}$ in $\beta'_7$ is the same as the coefficient of $\alpha_{\overline{s}}$ in $\beta'_5$, which is non-zero as mentioned in Step 4.
Hence we have constructed a root $\beta' = \beta'_7$ satisfying the above condition.
However, this contradicts the fact that $\Pi_{J^{(n_0)}}$ is an irreducible component of $\Pi^{[y^{(n_0)}]}$ (see Definition \ref{defn:admissible_for_N_large}(\ref{item:admissible_N_large_J_irreducible_component})).
Summarizing, we have derived a contradiction in any of the four cases, Case (I)--Case (II-3), therefore Lemma \ref{lem:admissible_sequence_extends} for $n = n_0$ holds.
Hence our claim has been proven in the case $\Pi^{J,I \cap J} \subseteq \Phi_{I^{\perp}}$.
This completes the proof of Theorem \ref{thm:YfixesWperpIfin}.
\section{A counterexample for the general case}
\label{sec:counterexample}
In this section, we present an example showing that our main theorem, Theorem \ref{thm:YfixesWperpIfin}, does not hold in general when the assumption of $A_{>1}$-freeness of $I \subseteq S$ is removed.
We consider a Coxeter system $(W,S)$ of rank $7$ with Coxeter graph $\Gamma$ in Figure \ref{fig:counterexample}, where the vertex labelled by an integer $i$ corresponds to a generator $s_i \in S$.
Put $I = \{s_4,s_5\}$ which is of type $A_2$ (hence is not $A_{>1}$-free).
\begin{figure}
\caption{Coxeter graph $\Gamma$ and subset $I \subseteq S$ for the counterexample; here the two duplicated circles correspond to $I = \{s_4,s_5\}$.}
\label{fig:counterexample}
\end{figure}
To determine the simple system $\Pi^I$ of $W^{\perp I}$, Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_perp}) implies that each element of $\Pi^I$ is written as $u \cdot \gamma(y,s)$, where $y \in S^{(\Lambda)}$, $u \in Y_{x_I,y}$, $s \in S \smallsetminus [y]$, $[y]_{\sim s}$ is of finite type, $\varphi(y,s) = y$, and $\gamma(y,s)$ is the unique element of $(\Phi_{[y] \cup \{s\}}^{\perp [y]})^+$ as in Proposition \ref{prop:charofBphi}.
In this case, the element $u^{-1} \in Y_{y,x_I}$ admits a decomposition as in Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}).
In particular, such an element $y$ can be obtained from $x_I$ by applying a finite number of operations of the form $z \mapsto \varphi(z,t)$ with an appropriate element $t \in S$.
Table \ref{tab:list_counterexample} gives a list of all the elements $y \in S^{(\Lambda)}$ obtained in this way.
In the second and the fourth columns of the table, we abbreviate each $s_i$ ($1 \leq i \leq 7$) to $i$ for simplicity.
This table shows, for each $y$, all the elements $t \in S \smallsetminus [y]$ satisfying that $[y]_{\sim t}$ is of finite type and $\varphi(y,t) \neq y$, as well as the corresponding element $\varphi(y,t) \in S^{(\Lambda)}$ (more precisely, the subset $[\varphi(y,t)]$ of $S$).
Now the list of the $y$ in the table is closed under the operations $y \mapsto \varphi(y,t)$ and contains the starting point $x_I$ (No.~I in Table \ref{tab:list_counterexample}), therefore it is indeed a complete list of the possible $y$.
\begin{table}[hbt]
\centering
\caption{List for the counterexample}
\label{tab:list_counterexample}
\begin{tabular}{|c||c|c||c|c|} \hline
No. & $[y]$ & $\gamma \in \Phi^{\perp [y]}$ & $t$ & $\varphi(y,t)$ \\ \hline
I & $\{4,5\}$ & $[10|0\underline{00}|00]$, $[01|0\underline{00}|00]$ & $3$ & II \\ \cline{4-5}
& & & $6$ & III \\ \cline{4-5}
& & & $7$ & IV \\ \hline
II & $\{3,4\}$ & $[10|\underline{11}1|00]$, $[01|\underline{11}1|00]$ & $1$ & V \\ \cline{4-5}
& & & $2$ & VI \\ \cline{4-5}
& & & $5$ & I \\ \hline
III & $\{5,6\}$ & $[10|00\underline{0}|\underline{0}0]$, $[01|00\underline{0}|\underline{0}0]$ & $4$ & I \\ \cline{4-5}
& & & $7$ & IV \\ \hline
IV & $\{5,7\}$ & $[10|00\underline{0}|0\underline{0}]$, $[01|00\underline{0}|0\underline{0}]$ & $4$ & I \\ \cline{4-5}
& & & $6$ & III \\ \hline
V & $\{1,3\}$ & $[\underline{0}0|\underline{0}01|00]$, $[\underline{1}1|\underline{2}21|00]$ & $2$ & VI \\ \cline{4-5}
& & & $4$ & II \\ \hline
VI & $\{2,3\}$ & $[\underline{0}0|\underline{0}01|00]$, $[\underline{1}1|\underline{2}21|00]$ & $1$ & V \\ \cline{4-5}
& & & $4$ & II \\ \hline
\end{tabular}
\end{table}
On the other hand, Table \ref{tab:list_counterexample} also includes some elements of $(\Phi^{\perp [y]})^+$ for each possible $y \in S^{(\Lambda)}$.
In the third column of the table, we abbreviate a root $\sum_{i=1}^{7} c_i \alpha_{s_i}$ to $[c_1c_2|c_3c_4c_5|c_6c_7]$.
Moreover, a line is drawn under the coefficient $c_i$ of $\alpha_{s_i}$ if $s_i$ belongs to $[y]$.
Now for each $y$, each root $\gamma \in (\Phi^{\perp [y]})^+$ and each $t$ appearing in the table, the root $w_y^t \cdot \gamma \in (\Phi^{\perp [\varphi(y,t)]})^+$ also appears in the row corresponding to the element $\varphi(y,t) \in S^{(\Lambda)}$.
Moreover, for each $y$ in the table, if an element $s \in S \smallsetminus [y]$ satisfies that $[y]_{\sim s}$ is of finite type and $\varphi(y,s) = y$, then the corresponding root $\gamma(y,s)$ always appears in the row corresponding to the $y$.
By these properties, the above-mentioned characterization of the elements of $\Pi^I$ and the decompositions of elements of $Y_{x_I,y}$ given by Proposition \ref{prop:factorization_C}(\ref{item:prop_factorization_C_generator_Y}), it follows that all the elements of $\Pi^I$ indeed appear in the list.
Hence we have $\Pi^I = \{\alpha_{s_1},\alpha_{s_2}\}$ (see row I in Table \ref{tab:list_counterexample}); therefore, for both elements of $\Pi^I$, the corresponding reflection belongs to $W^{\perp I}{}_{\mathrm{fin}}$.
Moreover, we consider the following sequence of operations:
\begin{displaymath}
\begin{split}
x_I {}:={} & (s_4,s_5) \overset{3}{\to} (s_3,s_4) \overset{1}{\to} (s_1,s_3) \overset{2}{\to} (s_3,s_2) \overset{4}{\to} (s_4,s_3) \\
&\overset{5}{\to} (s_5,s_4) \overset{6}{\to} (s_6,s_5) \overset{7}{\to} (s_5,s_7) \overset{4}{\to} (s_4,s_5) = x_I \enspace,
\end{split}
\end{displaymath}
where we write $z \overset{i}{\to} z'$ to signify the operation $z \mapsto z' = \varphi(z,s_i)$.
Then a direct calculation shows that the element $w$ of $Y_I$ defined by the product of the elements $w_z^t$ corresponding to the above operations satisfies that $w \cdot \alpha_{s_1} = \alpha_{s_2}$.
Hence the conclusion of Theorem \ref{thm:YfixesWperpIfin} does not hold in this case where the assumption on the $A_{>1}$-freeness of $I$ is not satisfied.
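In principle, such a direct calculation can be mechanized. The following is a minimal sketch of how a word in the simple reflections acts on a root in the geometric representation, assuming that the Coxeter matrix of $\Gamma$ in Figure \ref{fig:counterexample} and an expression of $w$ as a word in the $s_i$ are supplied (neither is reproduced here); the function name is illustrative only.
\begin{verbatim}
import math

def apply_word(coxeter_matrix, word, root):
    # coxeter_matrix[i][j] = m(s_i, s_j); finite labels are assumed.
    # root is the coefficient vector of a root over the simple roots.
    # word is a list of indices of simple reflections; the rightmost
    # reflection is applied first, as for a left action of the product.
    n = len(coxeter_matrix)
    # Bilinear form of the geometric representation.
    B = [[-math.cos(math.pi / coxeter_matrix[i][j]) for j in range(n)]
         for i in range(n)]
    v = list(root)
    for i in reversed(word):
        # s_i(v) = v - 2 B(alpha_i, v) alpha_i
        c = 2 * sum(B[i][j] * v[j] for j in range(n))
        v[i] = v[i] - c
    return v
\end{verbatim}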
\noindent
\textbf{Koji Nuida}\\
Present address: Research Institute for Secure Systems, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan\\
E-mail: k.nuida[at]aist.go.jp
\end{document}
\begin{document}
\title{Critical ($P_5$,bull)-free graphs}
\begin{abstract}
Given two graphs $H_1$ and $H_2$, a graph is $(H_1,H_2)$-free if it contains no induced subgraph isomorphic to $H_1$ or $H_2$. Let $P_t$ and $C_t$ be the path and the cycle on $t$ vertices, respectively. A bull is the graph obtained from a triangle by adding two disjoint pendant edges. In this paper, we show that there are finitely many 5-vertex-critical ($P_5$,bull)-free graphs.
{\bf Keywords.} coloring; critical graphs; forbidden induced subgraphs; strong perfect graph theorem; polynomial-time algorithms.
\end{abstract}
\section{Introduction}
All graphs in this paper are finite and simple. We say that a graph $G$ {\em contains} a graph $H$ if $ H $ is isomorphic to an induced subgraph of $G$. A graph $G$ is {\em H-free} if it does not contain $H$. For a family of graphs $\mathcal{H}$, $G$ is {\em $\mathcal{H}$-free} if $G$ is $H$-free for every $H\in \mathcal{H}$. When $\mathcal{H}$ consists of two graphs, we write $(H_1,H_2)$-free instead of $\{H_1,H_2\}$-free.
A $k$-{\em coloring} of a graph $G$ is a function $\phi:V(G)\rightarrow\{1,...,k\}$ such that $\phi(u)\neq\phi(v)$ whenever $u$ and $v$ are adjacent in $G$. Equivalently, a $k$-coloring of $G$ is a partition of $V(G)$ into $k$ independent sets. We call a graph $k$-{\em colorable} if it admits a $k$-coloring. The {\em chromatic number} of $G$, denoted by $\chi(G)$, is the minimum number $k$ for which $G$ is $k$-colorable. The {\em clique number} of $G$, denoted by $\omega(G)$, is the size of a largest clique in $G$.
A graph $G$ is said to be $k$-{\em chromatic} if $\chi(G)=k$. We say that $G$ is {\em critical} if $\chi(H)<\chi(G)$ for every proper subgraph $H$ of $G$. A $k$-{\em critical} graph is one that is $k$-chromatic and critical. An easy consequence of the definition is that every critical graph is connected. Critical graphs were first investigated by Dirac \cite{Di51,Di52,Di52i} in 1951, and then by Lattanzio and Jensen \cite{L02,J02} among others, and by Goedgebeur \cite{GS18} in recent years.
Vertex-criticality is a weaker notion. A graph $G$ is said to be $k$-{\em vertex-critical} if $G$ has chromatic number $k$ and removing any vertex from $G$ results in a graph that is $(k-1)$-colorable. For a set $\mathcal{H}$ of graphs, we say that $G$ is {\em $k$-vertex-critical $\mathcal{H}$-free} if it is $k$-vertex-critical and $\mathcal{H}$-free. We are interested in the following problem.
{\noindent} \textbf{The finiteness problem.} Given a set $\mathcal{H}$ of graphs and an integer $k\ge 1$, are there only finitely many $ k $-vertex-critical $\mathcal{H}$-free graphs?
This problem is meaningful because the finiteness of the set has a fundamental algorithmic implication.
\begin{theorem}[Folklore]\label{Folklore}
If the set of all $k$-vertex-critical $\mathcal{H}$-free graphs is finite, then there is a polynomial-time algorithm to determine whether an $\mathcal{H}$-free graph is $(k-1)$-colorable. \qed
\end{theorem}
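For illustration, the following is a minimal sketch of the algorithm behind \autoref{Folklore}, assuming graphs are encoded as dictionaries mapping each vertex to its set of neighbors; the function names are illustrative only. An $\mathcal{H}$-free graph is $(k-1)$-colorable exactly when it contains none of the (finitely many) $k$-vertex-critical $\mathcal{H}$-free graphs as an induced subgraph, and each containment test below runs in polynomial time for a fixed pattern graph.
\begin{verbatim}
from itertools import combinations, permutations

def contains_induced(G, H):
    # G, H: dictionaries mapping each vertex to the set of its neighbors.
    # Returns True iff G contains an induced subgraph isomorphic to H.
    # Brute force over the vertex subsets of size |V(H)|; for each fixed H
    # this is polynomial in |V(G)|.
    gv, hv = list(G), list(H)
    for subset in combinations(gv, len(hv)):
        for image in permutations(subset):
            f = dict(zip(hv, image))
            if all((f[v] in G[f[u]]) == (v in H[u])
                   for u, v in combinations(hv, 2)):
                return True
    return False

def is_k_minus_1_colorable(G, critical_list):
    # G is (k-1)-colorable iff it contains none of the finitely many
    # k-vertex-critical H-free graphs as an induced subgraph.
    return not any(contains_induced(G, F) for F in critical_list)
\end{verbatim}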
Let $K_n$ be the complete graph on $n$ vertices. Let $P_t$ and $C_t$ denote the path and the cycle on $t$ vertices, respectively. The {\em complement} of $G$ is denoted by $\overline{G}$. For $s,r\ge 1$, let $K_{r,s}$ be the complete bipartite graph with one part of size $r$ and the other part of size $s$. A class of graphs that has been extensively studied recently is the class of $P_t$-free graphs. In \cite{BHS09}, it was shown that there are finitely many 4-vertex-critical $P_5$-free graphs. This result was later generalized to $P_6$-free graphs \cite{CGSZ16}. In the same paper, an infinite family of 4-vertex-critical $P_7$-free graphs was constructed. Moreover, for every $k\ge 5$, an infinite family of $k$-vertex-critical $P_5$-free graphs was constructed in \cite{HMRSV15}. Thus the finiteness of $k$-vertex-critical $P_t$-free graphs is completely settled for all $t\ge 1$ and $k\ge 4$. We summarize the results in the following table.
\begin{table}[!ht]
\centering
\caption{The finiteness of $k$-vertex-critical $P_t$-free graphs.}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{4mm}{}
\begin{tabular}{|c|p{1.6cm}<{\centering}|p{1.9cm}<{\centering}|p{1.6cm}<{\centering}|p{1.7cm}<{\centering}|}
\hline
\diagbox{$k$}{$t$} & $\le4$ & 5 & 6 & $\ge7$\\
\hline
4 & finite & finite \cite{BHS09}& finite \cite{CGSZ16}& infinite \cite{CGSZ16}\\
\hline
$\ge 5$ & finite & infinite \cite{HMRSV15}& infinite & infinite \\
\hline
\end{tabular}
\end{table}
Because there are infinitely many 5-vertex-critical $P_5$-free graphs, many researchers have investigated the finiteness problem of $k$-vertex-critical $(P_5,H)$-free graphs. Our research is mainly motivated by the following dichotomy result.
\begin{theorem}[\cite{CGHS21}]
Let $H$ be a graph of order 4 and $k\ge 5$ be a fixed integer. Then there are infinitely many $k$-vertex-critical $(P_5,H)$-free graphs if and only if $H$ is $2P_2$ or $P_1+K_3$.
\end{theorem}
This theorem completely solves the finiteness problem of $k$-vertex-critical $(P_5,H)$-free graphs for every graph $H$ of order 4. In \cite{CGHS21}, the authors also posed the natural question of which five-vertex graphs $H$ lead to finitely many $k$-vertex-critical $(P_5,H)$-free graphs. It is known that there are exactly 13 5-vertex-critical $(P_5,C_5)$-free graphs \cite{HMRSV15}, that there are finitely many 5-vertex-critical ($P_5$,banner)-free graphs \cite{CHLS19,HLS19}, and that there are finitely many $k$-vertex-critical $(P_5,\overline{P_5})$-free graphs for every fixed $k$ \cite{DHHMMP17}. In \cite{CGS}, Cai, Goedgebeur and Huang showed that there are finitely many $k$-vertex-critical ($P_5$,gem)-free graphs and finitely many $k$-vertex-critical ($P_5,\overline{P_3+P_2}$)-free graphs. Hell and Huang proved that there are finitely many $k$-vertex-critical $(P_6,C_4)$-free graphs \cite{HH17}. This was later generalized to $(P_5,K_{r,s})$-free graphs in the context of $H$-coloring \cite{KP17}. This gives an affirmative answer for $H=K_{2,3}$.
\noindent {\bf Our contributions.} We continue to study the finiteness of vertex-critical $(P_5,H)$-free graphs when $H$ has order 5. The {\em bull} (see \autoref{bull}) is the graph obtained from a triangle by adding two disjoint pendant edges. In this paper, we prove that there are only finitely many 5-vertex-critical ($P_5$,bull)-free graphs.
\begin{figure}
\caption{The bull graph.}
\label{bull}
\end{figure}
To prove the result, we perform a careful structural analysis around an induced $C_5$, combined with the pigeonhole principle and the properties of 5-vertex-critical graphs.
The remainder of the paper is organized as follows. We present some preliminaries in Section \ref{Preliminarlies} and give structural properties around an induced $C_5$ in a ($P_5$,bull)-free graph in Section \ref{structure}. We then show that there are finitely many 5-vertex-critical ($P_5$,bull)-free graphs in Section \ref{bull-free}.
\section{Preliminaries}\label{Preliminarlies}
For general graph theory notation we follow \cite{BM08}. For $k\ge 4$, an induced cycle of length $k$ is called a {\em $k$-hole}. A $k$-hole is an {\em odd hole} (respectively {\em even hole}) if $k$ is odd (respectively even). A {\em $k$-antihole} is the complement of a $k$-hole. Odd and even antiholes are defined analogously.
Let $G=(V,E)$ be a graph. For $S\subseteq V$ and $u\in V\setminus S$, let $d(u,S)=\min_{v\in S}d(u,v)$, where $d(u,v)$ denotes the length of a shortest path from $u$ to $v$. If $uv\in E$, we say that $u$ and $v$ are {\em neighbors} or {\em adjacent}, otherwise $u$ and $v$ are {\em nonneighbors} or {\em nonadjacent}. The {\em neighborhood} of a vertex $v$, denoted by $N_G(v)$, is the set of neighbors of $v$. For a set $X\subseteq V$, let $N_G(X)=\cup_{v\in X}N_G(v)\setminus X$. We shall omit the subscript whenever the context is clear. For $x\in V$ and $S\subseteq V$, we denote by $N_S(x)$ the set of neighbors of $x$ that are in $S$, i.e., $N_S(x)=N_G(x)\cap S$. For two sets $X,S\subseteq V(G)$, let $N_S(X)=\cup_{v\in X}N_S(v)\setminus X$. For $X,Y\subseteq V$, we say that $X$ is {\em complete} (resp.\ {\em anticomplete}) to $Y$ if every vertex in $X$ is adjacent (resp.\ nonadjacent) to every vertex in $Y$. If $X=\{x\}$, we write ``$x$ is complete (resp.\ anticomplete) to $Y$'' instead of ``$\{x\}$ is complete (resp.\ anticomplete) to $Y$''. If a vertex $v$ is neither complete nor anticomplete to a set $S$, we say that $v$ is {\em mixed} on $S$. For a vertex $v\in V$ and an edge $xy\in E$, if $v$ is mixed on $\{x,y\}$, we say that $v$ is {\em mixed} on $xy$. For a set $H\subseteq V$, if no vertex in $V-H$ is mixed on $H$, we say that $H$ is a {\em homogeneous set}, otherwise $H$ is a {\em nonhomogeneous set}. A vertex subset $S\subseteq V$ is {\em independent} if no two vertices in $S$ are adjacent. A {\em clique} is a set of pairwise adjacent vertices. Two nonadjacent vertices $u$ and $v$ are said to be {\em comparable} if $N(v)\subseteq N(u)$ or $N(u)\subseteq N(v)$. A vertex subset $K\subseteq V$ is a {\em clique cutset} if $G-K$ has more connected components than $G$ and $K$ is a clique. For an induced subgraph $A$ of $G$, we write $G-A$ instead of $G-V(A)$. For $S\subseteq V$, the subgraph \emph{induced} by $S$ is denoted by $G[S]$. For $S\subseteq V$ and an induced subgraph $A$ of $G$, we may write $S$ instead of $G[S]$ and $A$ instead of $V(A)$ for convenience whenever the context is clear.
We proceed with a few useful results that will be needed later. The first one is well-known in the study of $k$-vertex-critical graphs.
\begin{lemma}[Folklore]\label{lem:xy}
A $k$-vertex-critical graph contains no clique cutsets.
\end{lemma}
Another folklore property of vertex-critical graphs is that such graphs contain no comparable vertices. In \cite{CGHS21}, a generalization of this property was presented.
\begin{lemma}[\cite{CGHS21}]\label{lem:XY}
Let $G$ be a $k$-vertex-critical graph. Then $G$ has no two nonempty disjoint subsets $X$ and $Y$ of $V(G)$ that satisfy all the following conditions.
\begin{itemize}
\item $X$ and $Y$ are anticomplete to each other.
\item $\chi(G[X])\le\chi(G[Y])$.
\item $Y$ is complete to $N(X)$.
\end{itemize}
\end{lemma}
A property on bipartite graphs is shown as follows.
\begin{lemma}[\cite{F93}]\label{2K2}
Let $G$ be a connected bipartite graph. If $G$ contains a $2K_2$, then $G$ must contain a $P_5$.
\end{lemma}
As we mentioned earlier, there are finitely many 4-vertex-critical $P_5$-free graphs.
\begin{theorem}[\cite{BHS09,MM12}]\label{thm:finite4Critical}
If $G=(V,E)$ is a 4-vertex-critical $P_5$-free graph, then $|V|\le 13$.
\end{theorem}
A graph $G$ is {\em perfect} if $\chi(H)=\omega(H)$ for every induced subgraph $H$ of $G$. Another result we use is the well-known Strong Perfect Graph Theorem.
\begin{theorem}[The Strong Perfect Graph Theorem\cite{CRST06}]\label{thm:SPGT}
A graph is perfect if and only if it contains no odd holes or odd antiholes.
\end{theorem}
Moreover, we prove a property about homogeneous sets, which will be used frequently in the proof of our results.
\begin{lemma}\label{lem:homogeneous}
Let $G$ be a 5-vertex-critical $P_5$-free graph and $S$ be a homogeneous set of $V(G)$. For each component $A$ of $G[S]$,
\begin{enumerate}[(i)]
\item if $\chi(A)=1$, then $A$ is a $K_1$;
\item if $\chi(A)=2$, then $A$ is a $K_2$;
\item if $\chi(A)=3$, then $A$ is a $K_3$ or a $C_5$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) is clearly true. Moreover, since $V(A)\subseteq S$, $V(A)$ is also a homogeneous set. Next we prove (ii) and (iii).
(ii) Since $\chi(A)=2$, let $\{x,y\}\subseteq V(A)$ induce a $K_2$. Suppose that there is another vertex $z$ in $A$. Because $G$ is 5-vertex-critical, $G-z$ is 4-colorable. Since $\chi(A)=2$, let $\{V_1,V_2,V_3,V_4\}$ be a 4-coloring of $G-z$ where $V(A)\setminus\{z\}\subseteq V_1\cup V_2$. Since $A$ is homogeneous, $\{V_1\cup \{z\},V_2,V_3,V_4\}$ or $\{V_1,V_2\cup \{z\},V_3,V_4\}$ is a 4-coloring of $G$, a contradiction. Thus $A$ is a $K_2$.
(iii) We first show that $A$ must contain a $K_3$ or a $C_5$. If $A$ is $K_3$-free, then $\omega(A)<\chi(A)=3$ and so $A$ is imperfect. Since $A$ is $P_5$-free, $A$ must contain a $C_5$ by \autoref{thm:SPGT}. Thus $A$ contains either a $K_3$ or a $C_5$.
If $A$ contains a $K_3$ induced by $\{x,y,z\}$, suppose that there is another vertex $s$ in $A$. Because $G$ is 5-vertex-critical, $G-s$ is 4-colorable. Since $\chi(A)=3$, let $\{V_1,V_2,V_3,V_4\}$ be a 4-coloring of $G-s$ where $V(A)\setminus\{s\}\subseteq V_1\cup V_2\cup V_3$. Since $A$ is homogeneous, $\{V_1\cup \{s\},V_2,V_3,V_4\}$, $\{V_1,V_2\cup \{s\},V_3,V_4\}$ or $\{V_1,V_2,V_3\cup \{s\},V_4\}$ is a 4-coloring of $G$, a contradiction. Thus $A$ is a $K_3$. Similarly, $A$ is a $C_5$ if $A$ contains a $C_5$.
\end{proof}
\section{Structure around a 5-hole}\label{structure}
Let $G=(V,E)$ be a graph and $H$ be an induced subgraph of $G$. We partition $V\setminus V(H)$ into subsets with respect to $H$ as follows: for any $X\subseteq V(H)$, we denote by $S(X)$ the set of vertices in $V\setminus V(H)$ that have $X$ as their neighborhood among $V(H)$, i.e.,
$$S(X)=\{v\in V\setminus V(H): N_{V(H)}(v)=X\}.$$
\noindent For $0\le m\le|V(H)|$, we denote by $S_m$ the set of vertices in $V\setminus V(H)$ that have exactly $m$ neighbors in $V(H)$. Note that $S_m=\cup_{X\subseteq V(H):|X|=m}S(X)$.
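This partition is straightforward to compute. The following is a minimal sketch, again assuming graphs are encoded as dictionaries mapping each vertex to its set of neighbors; the function name is illustrative only.
\begin{verbatim}
def partition_by_attachment(G, H_vertices):
    # G: dictionary mapping each vertex to the set of its neighbors.
    # H_vertices: the vertex set of the induced subgraph H.
    # Returns (S, S_m), where S[X] is the set of vertices outside H whose
    # neighborhood inside H equals the frozenset X, and S_m[m] collects
    # the vertices with exactly m neighbors in H.
    H = set(H_vertices)
    S = {}
    S_m = {m: set() for m in range(len(H) + 1)}
    for v in G:
        if v in H:
            continue
        X = frozenset(G[v] & H)
        S.setdefault(X, set()).add(v)
        S_m[len(X)].add(v)
    return S, S_m
\end{verbatim}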
Let $G$ be a ($P_5$,bull)-free graph and $C=v_1,v_2,v_3,v_4,v_5$ be an induced $C_5$ in $G$. We partition $V\setminus C$ with respect to $C$ as above. All subscripts below are modulo five. Clearly, $S_1=\emptyset$ and so $V(G)=V(C)\cup S_0\cup S_2\cup S_3\cup S_4\cup S_5$. Since $G$ is ($P_5$,bull)-free, it is easy to verify that $S(v_i,v_{i+1})=S(v_{i-2},v_i,v_{i+2})=\emptyset$. So $S_2=\cup_{1\le i\le 5}S(v_{i-1},v_{i+1})$ and $S_3=\cup_{1\le i\le 5}S(v_{i-1},v_{i},v_{i+1})$. Note that $S_4=\cup_{1\le i\le 5}S(v_{i-2},v_{i-1},v_{i+1},v_{i+2})$. In the following, we write $S_2(i)$ for $S(v_{i-1},v_{i+1})$, $S_3(i)$ for $S(v_{i-1},v_{i},v_{i+1})$ and $S_4(i)$ for $S(v_{i-2},v_{i-1},v_{i+1},v_{i+2})$. We now prove a number of useful properties of $S(X)$ using the fact that $G$ is ($P_5$,bull)-free. All properties are proved for $i=1$ due to symmetry. In the following, if we say that $\{r,s,t,u,v\}$ induces a bull, it means that $r,v$ are two pendant vertices, $s$ is the neighbor of $r$, $u$ is the neighbor of $v$, and $stu$ is a triangle. If we say that $\{r,s,t,u,v\}$ induces a $P_5$, it means that any two consecutive vertices are adjacent.
\begin{enumerate}[label=\bfseries (\arabic*)]
\item {$S_2(i)$ is complete to $S_2(i+1)\cup S_3(i+1)$.}\label{S2(i)S2(i+1)}
Let $x\in S_2(1)$ and $y\in S_2(2)\cup S_3(2)$. If $xy\notin E$, then $\{x,v_5,v_4,v_3,y\}$ induces a $P_5$.
\item {$S_2(i)$ is anticomplete to $S_2(i+2)$.}\label{S2(i)S2(i+2)}
Let $x\in S_2(1)$ and $y\in S_2(3)$. If $xy\in E$, then $\{v_3,v_2,y,x,v_5\}$ induces a bull.
\item {$S_2(i)$ is anticomplete to $S_3(i+2)$.}\label{S2(i)S3(i+2)}
Let $x\in S_2(1)$ and $y\in S_3(3)$. If $xy\in E$, then $\{v_1,v_2,x,y,v_4\}$ induces a bull.
\item {$S_2(i)$ is anticomplete to $S_4(i)$.}\label{S2(i)S4(i)}
Let $x\in S_2(1)$ and $y\in S_4(1)$. If $xy\in E$, then $\{v_1,v_2,x,y,v_4\}$ induces a bull.
\item {$S_2(i)\cup S_3(i)$ is complete to $S_4(i+2)$.}\label{S2(i)S4(i+2)}
Let $x\in S_2(1)\cup S_3(1)$ and $y\in S_4(3)$. If $xy\notin E$, then $\{v_3,v_4,y,v_5,x\}$ induces a bull.
\item {$S_2(i)$ is complete to $S_4(i+1)\cup S_5$.}\label{S2S5}
Let $x\in S_2(1)$ and $y\in S_4(2)\cup S_5$. If $xy\notin E$, then $\{v_3,y,v_1,v_5,x\}$ induces a bull.
\item {$S_3(i)$ is complete to $S_3(i+1)$.}\label{S3(i)S3(i+1)}
Let $x\in S_3(1)$ and $y\in S_3(2)$. If $xy\notin E$, then $\{x,v_5,v_4,v_3,y\}$ induces a $P_5$.
\end{enumerate}
\section{The main result}\label{bull-free}
Let $\mathcal{F}$ be the set of graphs shown in \autoref{fig:5vertexcritical}. It is easy to verify that all graphs in $\mathcal{F}$ are 5-vertex-critical.
\begin{figure}
\caption{$F_1$.}
\caption{$F_2$.}
\caption{$F_3$.}
\caption{$F_4$.}
\caption{$F_5$.}
\caption{$F_6$.}
\caption{$F_7$.}
\caption{$F_8$.}
\caption{$F_9$.}
\caption{Some 5-vertex-critical graphs.}
\label{K5}
\label{F1}
\label{F2}
\label{F3}
\label{F4}
\label{F5}
\label{F6}
\label{F7}
\label{F8}
\label{F9}
\label{fig:5vertexcritical}
\end{figure}
\begin{theorem}\label{th}
There are finitely many 5-vertex-critical ($P_5$,bull)-free graphs.
\end{theorem}
\begin{proof}
Let $G=(V,E)$ be a 5-vertex-critical ($P_5$,bull)-free graph. We show that $|G|$ is bounded. If $G$ has a subgraph isomorphic to a member $F\in\mathcal{F}$, then $|V(G)|=|V(F)|$ by the definition of vertex-critical graph and so we are done. Hence, we assume in the following that $G$ has no subgraph isomorphic to a member in $\mathcal{F}$. Since there are exactly 13 5-vertex-critical $(P_5,C_5$)-free graphs \cite{HMRSV15}, the proof is completed if $G$ is $C_5$-free. So assume that $G$ contains an induced $C_5$ in the following. Let $C={v_1,v_2,v_3,v_4,v_5}$ be an induced $C_5$. We partition $V(G)$ with respect to $C$.
\begin{claim}\label{S5}
$S_5$ is an independent set.
\end{claim}
\begin{proof}
Suppose that $x,y\in S_5$ and $xy\in E$. Then $G$ contains $F_1$, a contradiction.
\end{proof}
\begin{claim}\label{coloring number}
For each $1\le i\le 5$, some properties of $G$ are as follows:
\begin{itemize}
\item $\chi(G[S_3(i)])\le 2$.
\item $\chi(G[S_2(i)\cup S_3(i)])\le 3$.
\item $\chi(G[S_4(i)])\le 2$.
\item $\chi(G[S_5\cup S_0])\le 4$.
\end{itemize}
\end{claim}
\begin{proof}
It suffices to prove the claim for $i=1$. Suppose that $\chi(G[S_3(1)])\ge3$. Since $S_3(1)$ is complete to the edge $v_1v_2$ and $v_3\notin S_3(1)\cup\{v_1,v_2\}$, we would have $\chi(G-v_3)\ge 5$, contradicting that $G$ is 5-vertex-critical. So $\chi(G[S_3(1)])\le2$. Similarly, we can prove the other three properties.
\end{proof}
We first bound $S_0$.
\begin{claim}\label{S0}
{$N(S_0)\subseteq S_5$}.
\end{claim}
\begin{proof}
Let $x\in N(S_0)$ and let $y\in S_0$ be a neighbor of $x$. We show that $x\in S_5$. Let $1\le i\le5$. If $x\in S_2(i)\cup S_3(i)$, then $\{y,x,v_{i+1},v_{i+2},v_{i+3}\}$ induces a $P_5$. If $x\in S_4(i)$, then $\{v_i,v_{i+1},v_{i+2},x,y\}$ induces a bull. Therefore, $x\notin S_2\cup S_3\cup S_4$. It follows that $x\in S_5$.
\end{proof}
\begin{claim}\label{S0 color}
If A is a component of $G[S_0]$, then $\chi(A)=4$.
\end{claim}
\begin{proof}
By \autoref{coloring number}, $\chi(A)\le4$. Suppose that $\chi(A)\le 3$. So $\chi(C)\ge \chi(A)$. Combined with the fact that $C$ is anticomplete to $A$, we know that $C$ is not complete to $N(A)$ by \autoref{lem:XY}. This contradicts the facts that $C$ is complete to $S_5$ and $N(A)\subseteq S_5$. Thus $\chi(A)=4$.
\end{proof}
\begin{claim}\label{S0 connected}
$G[S_0]$ is connected.
\end{claim}
\begin{proof}
Suppose that there are two components $A_1$ and $A_2$ in $G[S_0]$. Since $G$ is connected, there must exist $w_1\in N(A_1)$ and so $w_1\in S_5$ by \autoref{S0}. By \autoref{coloring number}, $w_1$ cannot be complete to $A_1$ and $A_2$. So $w_1$ is mixed on an edge $x_1y_1\in E(A_1)$. Similarly, there exists $w_2\in S_5$ mixed on an edge $x_2y_2\in E(A_2)$ and not complete to $A_1$. So $w_2$ is anticomplete to $A_1$, otherwise if $w_2$ is mixed on an edge $z_1z_2\in E(A_1)$, then $\{z_1,z_2,w_2,x_2,y_2\}$ induces a $P_5$. It follows that $w_2$ is anticomplete to $\{x_1,y_1\}$. Then $\{y_1,x_1,w_1,v_1,w_2\}$ induces a $P_5$, a contradiction.
\end{proof}
By Claims \ref{S0 color}-\ref{S0 connected}, we obtain the following claim.
\begin{claim}\label{S0 4-chromatic}
$G[S_0]$ is a connected 4-chromatic graph.
\end{claim}
\begin{claim}
$N(S_0)=S_5$.
\end{claim}
\begin{proof}
Suppose that $w_1\in S_5$ is anticomplete to $S_0$. Since $G$ is connected, there must exist $w_2\in S_5$, which is a neighbor of $S_0$. By \autoref{coloring number}, $w_2$ is not complete to $S_0$ and so mixed on an edge $xy$ in $G[S_0]$. Thus, $\{w_1,v_1,w_2,x,y\}$ induces a $P_5$, a contradiction.
\end{proof}
To bound $S_0$, we partition $S_0$ into two parts. Let $L=S_0\cap N(S_5)$ and $R=S_0\setminus L$.
\begin{claim}\label{L S5}
If $R\neq\emptyset$, then (i) $L$ is complete to $S_5$; (ii) $N(R)=L$.
\end{claim}
\begin{proof}
Let $L_i=\{l\in L \mid d(l,R)=i\}$, where $i\ge 1$. Let $l\in L_1$. There exists $r\in R$ that is adjacent to $l$. Let $u\in S_5$ be a neighbor of $l$. Note that if $|S_5|=1$, then $S_5$ is a clique cutset of $G$, contradicting \autoref{lem:xy}. So $|S_5|\ge 2$. For each $u'\in S_5\setminus\{u\}$, $u'$ is adjacent to $l$, otherwise $\{r,l,u,v_1,u'\}$ induces a $P_5$. Hence, $L_1$ is complete to $S_5$. Let $l_2\in L_2$. By the definition of $L_2$, there must exist $l_1\in L_1$ that is adjacent to $l_2$. Let $r_1\in R$ and $u_2\in S_5$ be neighbors of $l_1$ and $l_2$, respectively. Since $d(l_2,R)=2$, $l_2r_1\notin E$. Since $L_1$ is complete to $S_5$, $l_1u_2\in E$. Thus $\{v_1,u_2,l_2,l_1,r_1\}$ induces a bull, a contradiction. So $L_2=\emptyset$ and thus $L_i=\emptyset$ for each $i\ge 3$. Then $L=L_1$. Therefore, $L$ is complete to $S_5$ and $N(R)=L$.
\end{proof}
\begin{claim}\label{LR components}
Let $L'$ and $R'$ be components of $G[L]$ and $G[R]$, respectively. Then $L'$ is complete or anticomplete to $R'$.
\end{claim}
\begin{proof}
Let $u\in S_5$. By \autoref{L S5}, $u$ is complete to $L'$. Assume $L'$ is not anticomplete to $R'$. We show that $L'$ is complete to $R'$ in the following. Let $l_1\in V(L')$ and $r_1\in V(R')$ be adjacent. If $l_1$ is mixed on $R'$, then $l_1$ must be mixed on an edge $x_1y_1$ in $R'$ and so $\{v_1,u,l_1,x_1,y_1\}$ induces a $P_5$, a contradiction. So $l_1$ is complete to $R'$. Suppose that $l_2\in V(L')$ is not complete to $R'$, then there exists $r_2\in V(R')$ not adjacent to $l_2$. Since $l_1r_2\in E$, $r_2$ is mixed on $L'$ and so mixed on an edge $x_2y_2$ in $L'$. Thus $\{v_1,u,x_2,y_2,r_2\}$ induces a bull, a contradiction. It follows that $L'$ is complete to $R'$.
\end{proof}
\begin{claim}\label{R finite}
$|R|\le 8$.
\end{claim}
\begin{proof}
Let $R'$ and $R''$ be two arbitrary components of $G[R]$. Let $u_1\in S_5$. If there exists $l_1,l_2\in L$ such that $l_1\in N(R')\setminus N(R'')$ and $l_2\in N(R'')\setminus N(R')$, then $\{u_1,l_1,l_2\}\cup R'\cup R''$ contains an induced bull or an induced $P_5$, depending on whether $l_1l_2\in E$. So $N(R')\subseteq N(R'')$ or $N(R'')\subseteq N(R')$. We may assume $N(R')\subseteq N(R'')$. By \autoref{LR components}, $R''$ is complete to $N(R')$. It follows from \autoref{lem:XY} that $\chi(R'')<\chi(R')$. By \autoref{S0 4-chromatic} and \autoref{LR components}, for each component of $G[R]$, there must exist a vertex in $L$ complete to this component. Since $G[S_0]$ is 4-chromatic, the chromatic number of components of $G[R]$ is at most 3. So there are at most three components $R_1,R_2$ and $R_3$ in $G[R]$. Assume that $\chi(R_1)=1,\chi(R_2)=2$ and $\chi(R_3)=3$. By \autoref{LR components} and the definition of $R$, we know that $R_1,R_2$ and $R_3$ are all homogeneous. By \autoref{lem:homogeneous}, we know that $|R_1|=1$, $|R_2|=2$ and $|R_3|\le 5$. Therefore, $|R|\le 8$.
\end{proof}
\begin{claim}\label{L finite}
If $R\neq \emptyset$, then $|L|\le 8$.
\end{claim}
\begin{proof}
Let $L'$ and $L''$ be two arbitrary components of $G[L]$. By \autoref{L S5}, $L',L''\subseteq N(R)$. Let $u_1\in S_5$. By \autoref{L S5}, \autoref{LR components} and \autoref{coloring number}, each component of $G[L]$ must be complete to some component of $G[R]$ and so $\chi(G[L])\le 3$. Suppose that there exists $r_1,r_2\in R$ such that $r_1\in N(L')\setminus N(L'')$ and $r_2\in N(L'')\setminus N(L')$. Then $r_1$ and $r_2$ belong to different components of $R$ by \autoref{LR components}. So $r_1r_2\notin E$. Then $\{u_1,r_1,r_2\}\cup L'\cup L''$ contains an induced $P_5$, a contradiction. Combined with \autoref{L S5}, we know that $N(L')\subseteq N(L'')$ or $N(L'')\subseteq N(L')$. We may assume $N(L')\subseteq N(L'')$. By \autoref{LR components}, $L''$ is complete to $N(L')$. It follows from \autoref{lem:XY} that $\chi(L'')<\chi(L')$. Note that $\chi(G[L])\le 3$. So there are at most three components $L_1,L_2$ and $L_3$ in $G[L]$. Assume that $\chi(L_1)=1,\chi(L_2)=2$ and $\chi(L_3)=3$. By \autoref{LR components} and \autoref{L S5}, we know that $L_1,L_2$ and $L_3$ are all homogeneous. By \autoref{lem:homogeneous}, we know that $|L_1|=1$, $|L_2|=2$ and $|L_3|\le 5$. Therefore, $|L|\le 8$.
\end{proof}
By Claims \ref{R finite}-\ref{L finite}, we obtain the following claim.
\begin{claim}\label{cla:S0 1}
If $R\neq\emptyset$, then $|S_0|\le 16$.
\end{claim}
Next, we bound $S_0$ when $R=\emptyset$.
\begin{claim}\label{cla:S0 2}
If $R=\emptyset$, then $|S_0|\le 13$.
\end{claim}
\begin{proof}
Since $R=\emptyset$, $S_0\subseteq N(S_5)$. For each $v\in S_0$, $\chi(G-v)=4$ since $G$ is 5-vertex-critical. Let $\pi$ be a 4-coloring of $G-v$. By the fact that $\chi(C)=3$ and $S_5$ is complete to $C$, all vertices in $S_5$ must be colored with the same color in $\pi$. Since $S_0\subseteq N(S_5)$, the vertices in $S_0\setminus\{v\}$ must be colored with the remaining three colors, i.e., $\chi(G[S_0]-v)\le 3$. Combined with \autoref{S0 4-chromatic}, $G[S_0]$ is a $P_5$-free 4-vertex-critical graph. By \autoref{thm:finite4Critical}, $|S_0|\le 13$.
\end{proof}
By Claims \ref{cla:S0 1}-\ref{cla:S0 2}, $|S_0|\le 16$. Next, we bound $S_5$.
\begin{claim}\label{S4(i)S5}
For at most one value of $i$, where $1\le i\le 5$, $S_4(i)$ is not anticomplete to $S_5$.
\end{claim}
\begin{proof}
Suppose that $S_4(i)$ and $S_4(j)$ are not anticomplete to $S_5$, where $1\le i<j\le5$. Then $G$ must have a subgraph isomorphic to $F_2,F_3,F_4$ or $F_5$, a contradiction.
\end{proof}
\begin{claim}
$|S_5|\le 2^{16}$.
\end{claim}
\begin{proof}
Suppose that $|S_5|> 2^{|S_0|}$. By the pigeonhole principle, there are two vertices $u,v\in S_5$ that have the same neighborhood in $S_0$. Since $u$ and $v$ are not comparable, there exists $x\in N(u)\setminus N(v)$ and $y\in N(v)\setminus N(u)$. Clearly, $x,y\in S_3\cup S_4(i)$ by \autoref{S4(i)S5} and \ref{S2S5}, for some $1\le i\le 5$. By symmetry, we assume $i=1$.
Suppose that $x,y\in S_4(1)$. Then $xy\notin E$, otherwise $G$ has a subgraph isomorphic to $F_8$. So $\{x,u,v_1,v,y\}$ induces a $P_5$, a contradiction.
Suppose that $x,y\in S_3$. Without loss of generality, we assume $x\in S_3(1)$. If $y\in S_3(3)\cup S_3(4)$, $G$ must have a subgraph isomorphic to $F_7$, a contradiction. If $y\in S_3(2)\cup S_3(5)$, then $xy\in E$ by \ref{S3(i)S3(i+1)} and so $G$ contains $F_8$, a contradiction. If $y\in S_3(1)$, then $xy\notin E$, otherwise $G$ has a subgraph isomorphic to $F_6$. Then $\{x,u,v_3,v,y\}$ induces a $P_5$, a contradiction.
So we assume that $x\in S_4(1)$ and $y\in S_3$. If $y\in S_3(1)\cup S_3(2)\cup S_3(5)$, then $G$ has a subgraph isomorphic to $F_7$, a contradiction. Thus $y\in S_3(3)\cup S_3(4)$. From \ref{S2(i)S4(i+2)} we know that $xy\in E$. Note that $G$ has a subgraph isomorphic to $F_8$, a contradiction.
Therefore, $|S_5|\le 2^{|S_0|}\le 2^{16}$.
\end{proof}
Next, we bound $S_2$. By \ref{S2(i)S2(i+1)}-\ref{S2S5} and \autoref{S0}, for each $1\le i\le 5$, all vertices in $V\setminus S_2(i)$ are complete or anticomplete to $S_2(i)$, except those in $S_3(i)$. So we divide $S_2(i)$ into two parts. Let $R(i)=S_2(i)\cap N(S_3(i))$ and $L(i)=S_2(i)\setminus R(i)$.
\begin{claim}\label{P3 conclusion1}
If $G[R(i)]$ contains a $P_3$, then the two endpoints of the $P_3$ have the same neighborhood in $S_3(i)$.
\end{claim}
\begin{proof}
Let $uvw$ be a $P_3$ contained in $R(i)$. Let $u'\in S_3(i)$ be a neighbor of $w$. Then $uu'\in E$, otherwise $\{u,v,w,u',v_i\}$ induces a bull or a $P_5$, depending on whether $vu'\in E$. So $N_{S_3(i)}(w)\subseteq N_{S_3(i)}(u)$. Similarly, $N_{S_3(i)}(u)\subseteq N_{S_3(i)}(w)$. Therefore, $u$ and $w$ have the same neighborhood in $S_3(i)$.
\end{proof}
\begin{claim}\label{L(i)}
$|L(i)|\le 8$.
\end{claim}
\begin{proof}
If $S_3(i)=\emptyset$ or $R(i)=\emptyset$, then $S_2(i)$ is homogeneous. If there are two components $X$ and $Y$ in $G[S_2(i)]$, then $Y$ is complete to $N(X)$ and $X$ is complete to $N(Y)$, contradicting \autoref{lem:XY}. So $G[S_2(i)]$ is connected. By \autoref{coloring number} and \autoref{lem:homogeneous}, $G[S_2(i)]$ is a $K_1$, a $K_2$, a $K_3$ or a $C_5$. Thus $|L(i)|\le 5$.
So we assume that $S_3(i)\neq \emptyset$ and $R(i)\neq \emptyset$. Let $u$ be an arbitrary vertex in $R(i)$ and $u'$ be its neighbor in $S_3(i)$. Then $u$ is not mixed on any edge $xy$ in $L(i)$, otherwise $\{y,x,u,u',v_i\}$ induces a $P_5$. Then $u$ is complete or anticomplete to any component of $L(i)$ and so all components of $L(i)$ are homogeneous. By \autoref{lem:homogeneous}, each component of $L(i)$ is a $K_1$, a $K_2$, a $K_3$ or a $C_5$.
We show that there is at most one 3-chromatic component in $L(i)$. Suppose that $X_1$ and $Y_1$ are two 3-chromatic components in $L(i)$. Note that $X_1$ and $Y_1$ are homogeneous. Since $\chi(G[S_2(i)])\le 3$, $X_1$ and $Y_1$ are anticomplete to $R(i)$. So $Y_1$ is complete to $N(X_1)$ and $X_1$ is complete to $N(Y_1)$, which contradicts \autoref{lem:XY}. So, there is at most one 3-chromatic component in $L(i)$.
Then we show that there is at most one $K_2$-component in $L(i)$. Suppose that $X_2=x_1y_1$ and $Y_2=x_2y_2$ are two $K_2$-components in $L(i)$. Note that $X_2$ and $Y_2$ are homogeneous. By \autoref{lem:XY}, there must exist $u_1,u_2\in R(i)$ such that $u_1$ is complete to $X_2$ and anticomplete to $Y_2$ and $u_2$ is complete to $Y_2$ and anticomplete to $X_2$ . Let $u_1',u_2'\in S_3(i)$ be the neighbor of $u_1$ and $u_2$, respectively. Clearly, $u_1'$ and $u_2'$ are not the same vertex, otherwise $\{x_1,u_1,u_1',u_2,x_2\}$ induces a bull or a $P_5$, depending on whether $u_1u_2\in E$. So $u_1'u_2\notin E$ and $u_2'u_1\notin E$. It follows that $u_1u_2\notin E$, otherwise $\{x_2,u_2,u_1,u_1',v_i\}$ induces a $P_5$. Then $\{u_1,u_1',v_i,u_2',u_2\}$ induces a bull or a $P_5$, depending on whether $u_1'u_2'\in E$, a contradiction. So, there is at most one $K_2$-component in $L(i)$.
Similarly, there is at most one $K_1$-component in $L(i)$. It follows that $|L(i)|\le 8$. The proof is completed.
\end{proof}
\begin{figure}
\caption{The graph contained in $G[R(i)]$.}
\label{fig:uvwst}
\end{figure}
\begin{claim}\label{P3 conclusion2}
If $G[R(i)]$ contains a $P_3=uvw$, then $G[R(i)]$ must contain the graph induced by $\{u,v,w,s,t\}$ in \autoref{fig:uvwst}. Moreover, $u,w,s$ and $t$ have the same neighborhood in $S_3(i)$ and $N_{S_3(i)}(u)\cap N_{S_3(i)}(v)=\emptyset$.
\end{claim}
\begin{proof}
Let $u'$ be an arbitrary neighbor of $w$ in $S_3(i)$. By \autoref{P3 conclusion1} we know that $N_{S_3(i)}(u)=N_{S_3(i)}(w)$ and so $uu'\in E$. Since $u$ and $w$ are not comparable, there must exist $s\in N(u)\setminus N(w)$ and $t\in N(w)\setminus N(u)$. Clearly, $s,t\in L(i)\cup R(i)$.
\noindent{\bf Case 1. }$s,t\in L(i)$. Then $st\notin E$, otherwise $\{s,t,w,u',v_i\}$ induces a $P_5$. Moreover, $sv\notin E$, otherwise $\{s,v,w,u',v_i\}$ induces a bull or a $P_5$, depending on whether $vu'\in E$. Similarly, $tv\notin E$. So $\{s,u,v,w,t\}$ induces a $P_5$, a contradiction.
\noindent{\bf Case 2. }One vertex of $\{s,t\}$ belongs to $L(i)$ and the other belongs to $R(i)$. We assume that $s\in L(i)$ and $t\in R(i)$. Then $sv\notin E$, otherwise $\{s,v,w,u',v_i\}$ induces a bull or a $P_5$, depending on whether $vu'\in E$. So $vu'\notin E$, otherwise $\{s,u,v,u',v_i\}$ induces a bull. Let $z'$ be a neighbor of $v$ in $S_3(i)$. Clearly, $\{s,u,v,z',v_i\}$ induces a bull or a $P_5$, depending on whether $uz'\in E$, a contradiction.
\noindent{\bf Case 3. }$s,t\in R(i)$. Suppose that $sv\notin E$. Then $suv$ is a $P_3$ and so $u'$ is complete or anticomplete to $\{s,v\}$ by \autoref{P3 conclusion1}. Suppose that $u'$ is complete to $\{s,v\}$. If $vt\in E$, then $uvt$ is a $P_3$ and so $tu'\in E$ by \autoref{P3 conclusion1}. Then $\{t,v,w,u'\}$ induces a $K_4$, contradicting that $\chi(G[S_2(i)\cup S_3(i)])\le 3$. So $vt\notin E$. Hence $vwt$ is a $P_3$ and then $tu'\in E$ by \autoref{P3 conclusion1}. Then $st\in E$, otherwise $\{s,u,v,w,t\}$ induces a $P_5$. It is easy to verify that $\{s,u,v,w,t,u'\}$ induces a 4-chromatic subgraph, contradicting that $\chi(G[S_2(i)\cup S_3(i)])\le 3$. So $u'$ must be anticomplete to $\{s,v\}$. Then $st\notin E$, otherwise $\{s,t,w,u',v_i\}$ induces a bull or a $P_5$, depending on whether $tu'\in E$. Hence $tv\in E$, otherwise $\{s,u,v,w,t\}$ induces a $P_5$. Let $z'$ be an arbitrary neighbor of $v$ in $S_3(i)$. Since $suv$ is a $P_3$, $sz'\in E$ by \autoref{P3 conclusion1}. Note that $uvt$ and $uvw$ are all $P_3$ and so $N_{S_3(i)}(u)=N_{S_3(i)}(w)=N_{S_3(i)}(t)$. Then $tz'\notin E$, otherwise $\{t,v,z',w\}$ induces a $K_4$. Note that $\{s,z',v_i,u',w\}$ induces a bull or a $P_5$, depending on whether $u'z'\in E$, a contradiction. Thus $sv\in E$. By symmetry, $tv\in E$.
Since $svw$ and $uvt$ are all $P_3$, we know that $u,w,s,t$ have the same neighborhood in $S_3(i)$ by \autoref{P3 conclusion1} and so $su',tu'\in E$. Then $vu'\notin E$, otherwise $\{v,w,t,u'\}$ induces a $K_4$. Since $u'$ is an arbitrary neighbor of $w$ in $S_3(i)$, $v$ is anticomplete to $N_{S_3(i)}(u)$. Thus $N_{S_3(i)}(u)\cap N_{S_3(i)}(v)=\emptyset$.
If $st\in E$, then $ust$ is a $P_3$. From the above proof we know that $s$ is anticomplete to $N_{S_3(i)}(u)$, which contradicts the fact that $su'\in E$. So $st\notin E$. It follows that $\{u,v,w,s,t\}$ induces the graph in \autoref{fig:uvwst}. This completes the proof of the claim.
\end{proof}
\begin{claim}\label{P3-free}
$G[R(i)]$ is $P_3$-free.
\end{claim}
\begin{proof}
Suppose that $G[R(i)]$ contains a $P_3=uvw$. By \autoref{P3 conclusion2}, $G[R(i)]$ contains a subgraph in \autoref{fig:uvwst} induced by $\{u,v,w,s,t\}$. Moreover, $u,w,s,t$ have the same neighborhood in $S_3(i)$ and $v$ is anticomplete to $N_{S_3(i)}(u)$. Let $u'$ and $v'$ be arbitrary neighbors of $u$ and $v$ in $S_3(i)$, respectively. Then $u'$ is complete to $\{u,w,s,t\}$ and nonadjacent to $v$, and $v'$ is anticomplete to $\{u,w,s,t\}$. It follows from \autoref{lem:XY} that $\{w,t\}$ is not complete to $N\{u,s\}$. So there exists $a\in N\{u,s\}$ such that $a$ is not complete to $\{w,t\}$. Clearly, $a\in L(i)\cup R(i)$.
Suppose $a\in L(i)$. Assume that $as\in E$. So $au\in E$, otherwise $\{a,s,u,u',v_i\}$ induces a bull. Then $av\in E$, otherwise $\{a,u,v,v',v_i\}$ induces a $P_5$. Note that $\{a,s,v,u\}$ induces a $K_4$, a contradiction. Thus $a\in R(i)$.
If $a$ is adjacent to only one vertex in $\{s,u\}$, then either $usa$ or $sua$ is a $P_3$ and so $N_{S_3(i)}(s)\cap N_{S_3(i)}(u)=\emptyset$ by \autoref{P3 conclusion2}, contradicting that $su',uu'\in E$. Thus $a$ is complete to $\{s,u\}$. Then $av\notin E$, otherwise $\{s,u,a,v\}$ induces a $K_4$. Because $auv$ is a $P_3$, we know that $au'\notin E$ and $av'\in E$ by \autoref{P3 conclusion2}. Since $a$ is not complete to $\{w,t\}$, we assume that $at\notin E$ by symmetry. Note that $\{t,u',v_i,v',a\}$ induces a bull or a $P_5$, depending on whether $u'v'\in E$, a contradiction.
Therefore, $G[R(i)]$ is $P_3$-free.
\end{proof}
Since $G[R(i)]$ is $P_3$-free, $G[R(i)]$ is a disjoint union of cliques. By \autoref{coloring number}, each component of $G[R(i)]$ is a $K_1$, a $K_2$ or a $K_3$. We next prove that the number of them is finite.
\begin{claim}\label{R(i) 1}
There are at most $2^{|L(i)|}$ $K_1$-components and at most 5 $K_2$-components in $G[R(i)]$.
\end{claim}
\begin{proof}
We first show that there are at most $2^{|L(i)|}$ $K_1$-components in $G[R(i)]$. Suppose there are more than $2^{|L(i)|}$ $K_1$-components in $G[R(i)]$. By the pigeonhole principle, there exist $u,v\in R(i)$ that have the same neighborhood in $L(i)$. Since $u$ and $v$ are not comparable, there exist $u',v'\in S_3(i)$ such that $u'\in N(u)\setminus N(v)$ and $v'\in N(v)\setminus N(u)$. Then $\{u,u',v_i,v',v\}$ induces a bull or a $P_5$, depending on whether $u'v'\in E$, a contradiction. So there are at most $2^{|L(i)|}$ $K_1$-components in $G[R(i)]$.
Next we show that there are at most 5 $K_2$-components in $G[R(i)]$.
Suppose that $A_1$ and $A_2$ are two homogeneous $K_2$-components of $G[R(i)]$. By \autoref{lem:XY}, there exists $x_1\in N(A_1)\setminus N(A_2)$ and $y_1\in N(A_2)\setminus N(A_1)$. Clearly, $x_1,y_1\in S_3(i)\cup L(i)$. Suppose that $x_1,y_1\in L(i)$. Let $w_1,w_2\in S_3(i)$ be the neighbor of $A_1$ and $A_2$, respectively. If $x_1y_1\in E$, then $\{y_1,x_1,w_1,v_i\}\cup A_1$ contains an induced $P_5$. So $x_1y_1\notin E$. Note that $w_2\notin N(A_1)$, otherwise $\{w_2,x_1,y_1\}\cup A_1\cup A_2$ contains an induced $P_5$. Similarly, $w_1\notin N(A_2)$. Then $\{v_i,w_1,w_2\}\cup A_1\cup A_2$ contains an induced bull or an induced $P_5$, depending on whether $w_1w_2\in E$, a contradiction. Suppose that $x_1\in L(i)$ and $y_1\in S_3(i)$. Let $w_3$ be the neighbor of $A_1$ in $S_3(i)$. Note that $w_3\in N(A_2)$, otherwise $\{v_i,w_3,y_1\}\cup A_1\cup A_2$ contains an induced bull or an induced $P_5$, depending on whether $w_3y_1\in E$. Then $w_3y_1\in E$, otherwise $\{x_1,y_1,w_3\}\cup A_1\cup A_2$ contains an induced $P_5$. Then $\{w_3,y_1\}\cup A_2$ induces a $K_4$, contradicting that $\chi(G[S_2(i)\cup S_3(i)])\le 3$. So $x_1,y_1\in S_3(i)$ and then $\{v_i,x_1,y_1\}\cup A_1\cup A_2$ contains an induced bull or an induced $P_5$, depending on whether $x_1y_1\in E$, a contradiction. Thus there is at most one homogeneous $K_2$-component in $G[R(i)]$.
Let $B_1=x_3y_3$ and $B_2=x_4y_4$ be two arbitrary nonhomogeneous $K_2$-components of $G[R(i)]$ and the vertices mixed on $B_1$ or $B_2$ are clearly in $L(i)\cup S_3(i)$. Suppose that each vertex in $S_3(i)$ is complete or anticomplete to $B_1$, then there exists $z'\in L(i)$ mixed on $B_1$. Let $t\in S_3(i)$ be complete to $B_1$, then $\{z',x_3,y_3,t,v_i\}$ induces a bull, a contradiction. So there must exist $z_3\in S_3(i)$ mixed on $B_1$. Similarly, there exists $z_4\in S_3(i)$ mixed on $B_2$. By symmetry, we assume $z_3x_3,z_4x_4\in E$ and $z_3y_3,z_4y_4\notin E$. Then $z_3$ is complete or anticomplete to $B_2$, otherwise $\{y_3,x_3,z_3,x_4,y_4\}$ induces a $P_5$. Similarly, $z_4$ is complete or anticomplete to $B_1$. If $z_3$ is anticomplete to $B_2$ and $z_4$ is anticomplete to $B_1$, then $\{x_3,z_3,v_i,z_4,x_4\}$ induces a bull or a $P_5$, depending on whether $z_3z_4\in E$. If $z_3$ is complete to $B_2$ and $z_4$ is complete to $B_1$, then $\{y_3,z_4,v_i,z_3,y_4\}$ induces a bull or a $P_5$, depending on whether $z_3z_4\in E$. So we assume $z_3$ is anticomplete to $B_2$ and $z_4$ is complete to $B_1$. It follows that $z_3z_4\in E$, otherwise $\{y_4,x_4,z_4,v_i,z_3\}$ induces a $P_5$. So there are at most 4 nonhomogeneous $K_2$-components in $R(i)$, otherwise the vertices in $S_3(i)$ mixed on them respectively can induce a $K_5$, a contradiction.
The above proof shows that there are at most $2^{|L(i)|}$ $K_1$-components and 5 $K_2$-components in $G[R(i)]$.
\end{proof}
\begin{claim}\label{R(i) 2}
There is at most one $K_3$-component in $G[R(i)]$.
\end{claim}
\begin{proof}
Suppose that $T_1=x_1y_1z_1,T_2=x_2y_2z_2$ are two arbitrary $K_3$-components of $G[R(i)]$. Let $x',y'\in S_3(i)$ be the neighbor of $T_1$ and $T_2$, respectively. Since $\chi(G[S_2(i)\cup S_3(i)])\le 3$, $x'$ is mixed on $T_1$ and $y'$ is mixed on $T_2$. By symmetry, we assume that $x'x_1,y'x_2\in E$ and $x'y_1,y'y_2\notin E$. So $x'$ is not mixed on $T_2$, otherwise $\{y_1,x_1,x'\}\cup T_2$ contains an induced $P_5$. Moreover, since $\chi(G[S_2(i)\cup S_3(i)])\le 3$, $x'$ is not complete to $T_2$. Thus $x'$ is anticomplete to $T_2$. Similarly, $y'$ is anticomplete to $T_1$. Then $\{x_1,x',v_i,y',x_2\}$ induces a bull or a $P_5$, depending on whether $x'y'\in E$, a contradiction.
Therefore, there is at most one $K_3$-component in $G[R(i)]$.
\end{proof}
By Claims \ref{L(i)}, \ref{R(i) 1} and \ref{R(i) 2}, $|L(i)|\le 8$ and $|R(i)|\le 2^{|L(i)|}+13$. So $|S_2|\le 5\times(2^8+21)$.
Finally, we bound $S_3$ and $S_4$.
\begin{claim}\label{S3 trivial}
For each $1\le i\le 5$, the number of $K_1$-components in $G[S_3(i)]$ is not more than $2^{|S_2(i)\cup S_5|}$.
\end{claim}
\begin{proof}
It suffices to prove for $i=1$. Suppose that the number of $K_1$-components in $G[S_3(1)]$ is more than $2^{|S_2(1)\cup S_5|}$. The pigeonhole principle shows that there are two $K_1$-components $u,v$ having the same neighborhood in $S_2(1)\cup S_5$. Since $u$ and $v$ are not comparable, there must exist $u'\in N(u)\setminus N(v)$ and $v'\in N(v)\setminus N(u)$. By \ref{S2(i)S2(i+1)}, \ref{S2(i)S3(i+2)}, \ref{S3(i)S3(i+1)} and \ref{S2(i)S4(i+2)}, $u',v'\in S_3(3)\cup S_3(4)\cup S_4(1)\cup S_4(2)\cup S_4(5)$. So $\{u,u',v_3,v',v\}$ induces a bull or a $P_5$, depending on whether $u'v'\in E$, a contradiction.
\end{proof}
\begin{claim}\label{S4 trivial}
For each $1\le i\le 5$, the number of $K_1$-components in $G[S_4(i)]$ is not more than $2^{|S_5|}$.
\end{claim}
\begin{proof}
It suffices to prove for $i=1$. Suppose that the number of $K_1$-components in $G[S_4(1)]$ is more than $2^{|S_5|}$. The pigeonhole principle shows that there are two $K_1$-components $u,v$ having the same neighborhood in $S_5$. Since $u$ and $v$ are not comparable, there must exist $u'\in N(u)\setminus N(v)$ and $v'\in N(v)\setminus N(u)$. By \ref{S2(i)S4(i)}, \ref{S2(i)S4(i+2)} and \ref{S2S5}, $u',v'\in (\cup_{i=1,2,5}S_3(i))\cup (\cup_{2\le i\le 5}S_4(i))$. So $\{u,u',v_1,v',v\}$ induces a bull or a $P_5$, depending on whether $u'v'\in E$, a contradiction.
\end{proof}
\begin{claim}\label{S4(i) 2-chromatic}
If $\chi(S_4(i))=2$ for some $1\le i\le 5$, then $S_3\cup S_4$ is bounded.
\end{claim}
\begin{proof}
Without loss of generality, we assume $\chi(S_4(1))=2$. It follows from \ref{S2(i)S4(i+2)} that $S_3(3)=S_3(4)=\emptyset$, otherwise $S_4(1)\cup S_3(3)\cup \{v_3,v_4\}$ contains an induced $K_5$. Since $G$ has no subgraph isomorphic to $F_9$, $\chi(S_4(i))\le 1$ for each $2\le i\le 5$ and $\chi(S_3(j))\le 1$ for each $j=1,2,5$. By Claims \ref{S3 trivial}-\ref{S4 trivial}, $S_3\cup(\cup_{2\le i\le 5}S_4(i))$ is bounded and the number of $K_1$-components in $G[S_4(1)]$ is also bounded.
We now show that the number of vertices in a 2-chromatic component of $G[S_4(1)]$ is bounded. Let $A$ be a 2-chromatic component of $G[S_4(1)]$ and so $A$ is bipartite. Let the bipartition of $A$ be $(X,Y)$. Suppose that $|X|>2^{|S_3\cup (\cup_{2\le i\le 5}S_4(i))\cup S_5|}$. By the pigeonhole principle, there exists two vertices $x_1,x_2\in X$ which have the same neighborhood in $S_3\cup (\cup_{2\le i\le 5}S_4(i))\cup S_5$. Since $x_1$ and $x_2$ are not comparable, there must exist $y_1\in N(x_1)\setminus N(x_2),y_2\in N(x_2)\setminus N(x_1)$. Clearly, $y_1,y_2\in Y$ and so $\{x_1,x_2,y_1,y_2\}$ induces a $2K_2$ in $A$. Since $A$ is connected and bipartite, $A$ contains a $P_5$ by \autoref{2K2}, a contradiction. Thus $|X|\le 2^{|S_3\cup (\cup_{2\le i\le 5}S_4(i))\cup S_5|}$. Similarly, $|Y|\le 2^{|S_3\cup (\cup_{2\le i\le 5}S_4(i))\cup S_5|}$. Thus the number of vertices in $A$ is bounded.
Then we show that there are at most five 2-chromatic components in $G[S_4(1)]$.
Suppose that $A_1$ and $A_2$ are two homogeneous 2-chromatic components of $G[S_4(1)]$. By \autoref{lem:XY}, $A_1$ is not complete to $N(A_2)$ and $A_2$ is not complete to $N(A_1)$. So there must exist $z_1\in N(A_1)\setminus N(A_2)$ and $z_2\in N(A_2)\setminus N(A_1)$. Clearly, $z_1,z_2\in (\cup_{i=1,2,5}S_3(i))\cup (\cup_{2\le i\le 5}S_4(i))\cup S_5$. Then $\{v_1,z_1,z_2\}\cup A_1\cup A_2$ contains an induced bull or an induced $P_5$, depending on whether $z_1z_2\in E$, a contradiction. Thus there is at most one homogeneous 2-chromatic component in $G[S_4(1)]$.
Let $B_1,B_2$ be two nonhomogeneous 2-chromatic components of $G[S_4(1)]$. So there exists $x'$ mixed on $B_1$ and $y'$ mixed on $B_2$. Let $x'$ be mixed on edge $x_3y_3$ in $B_1$ and $y'$ be mixed on edge $x_4y_4$ in $B_2$. By symmetry, assume that $x'x_3,y'x_4\in E$ and $x'y_3,y'y_4\notin E$. It is evident that $x'$ and $y'$ are not the same vertex, otherwise $\{y_3,x_3,x',x_4,y_4\}$ induces a $P_5$. Similarly, $x'$ is not mixed on $x_4y_4$ and $y'$ is not mixed on $x_3y_3$. Clearly, $x',y'\in (\cup_{i=1,2,5}S_3(i))\cup (\cup_{2\le i\le 5}S_4(i))\cup S_5$. If $x'$ is anticomplete to $\{x_4,y_4\}$ and $y'$ is anticomplete to $\{x_3,y_3\}$, then $\{x_3,x',v_1,y',x_4\}$ induces a bull or a $P_5$, depending on whether $x'y'\in E$. If $x'$ is complete to $\{x_4,y_4\}$ and $y'$ is complete to $\{x_3,y_3\}$, then $\{y_4,x',v_1,y',y_3\}$ induces a bull or a $P_5$, depending on whether $x'y'\in E$. So we assume that $x'$ is complete to $\{x_4,y_4\}$ and $y'$ is anticomplete to $\{x_3,y_3\}$. Then $x'y'\in E$, otherwise $\{y',x_4,y_4,x',x_3\}$ induces a bull. So the number of nonhomogeneous 2-chromatic components of $G[S_4(1)]$ is not more than 4, otherwise the vertices mixed on them respectively can induce a $K_5$.
So there are at most five 2-chromatic components in $G[S_4(1)]$. It follows that $S_3\cup S_4$ is bounded.
\end{proof}
\begin{claim}\label{S3(i) 2-chromatic}
If $\chi(S_3(i))=2$ for some $1\le i\le 5$, then $S_3\cup S_4$ is bounded.
\end{claim}
\begin{proof}
Without loss of generality, we assume $\chi(S_3(3))=2$. It follows from \ref{S3(i)S3(i+1)} that $S_3(2)=S_3(4)=\emptyset$, otherwise $S_3(3)\cup S_3(2)\cup \{v_2,v_3\}$ or $S_3(3)\cup S_3(4)\cup \{v_4,v_3\}$ contains an induced $K_5$. Similarly, it follows from \ref{S2(i)S4(i+2)} that $S_4(1)=S_4(5)=\emptyset$. Since $G$ has no subgraph isomorphic to $F_9$, $\chi(S_4(i))\le 1$ for each $2\le i\le 4$ and $\chi(S_3(j))\le 1$ for each $j=1,5$. By Claims \ref{S3 trivial}-\ref{S4 trivial}, $(\cup_{i=1,5}S_3(i))\cup S_4$ is bounded and the number of $K_1$-components in $G[S_3(3)]$ is also bounded.
We now show that the number of vertices in a 2-chromatic component of $G[S_3(3)]$ is bounded. Let $A$ be a 2-chromatic component of $G[S_3(3)]$ and so $A$ is bipartite. Let the bipartition of $A$ be $(X,Y)$. Suppose that $|X|>2^{|S_2(3)\cup S_5\cup (\cup_{i=1,5}S_3(i))\cup (\cup_{2\le i\le 4}S_4(i))|}$. By the pigeonhole principle, there exists two vertices $x_1,x_2\in X$ which have the same neighborhood in $S_2(3)\cup S_5\cup (\cup_{i=1,5}S_3(i))\cup (\cup_{2\le i\le 4}S_4(i))$. Since $x_1$ and $x_2$ are not comparable, there must exist $y_1\in N(x_1)\setminus N(x_2),y_2\in N(x_2)\setminus N(x_1)$. Clearly, $y_1,y_2\in Y$ and so $\{x_1,x_2,y_1,y_2\}$ induces a $2K_2$ in $A$. Since $A$ is connected and bipartite, $A$ contains a $P_5$ by \autoref{2K2}, a contradiction. Thus $|X|\le 2^{|S_2(3)\cup S_5\cup (\cup_{i=1,5}S_3(i))\cup (\cup_{2\le i\le 4}S_4(i))|}$. Similarly,
\[|Y|\le 2^{|S_2(3)\cup S_5\cup (\cup_{i=1,5}S_3(i))\cup (\cup_{2\le i\le 4}S_4(i))|}.\]
Thus the number of vertices in $A$ is bounded.
Then we show that there are at most $(2^{|S_2(3)|}+4)$ 2-chromatic components in $G[S_3(3)]$.
Suppose that the number of homogeneous 2-chromatic components of $G[S_3(3)]$ is more than $2^{|S_2(3)|}$. By the pigeonhole principle, there are two 2-chromatic components $A_1,A_2$ such that $N_{S_2(3)}(A_1)=N_{S_2(3)}(A_2)$. By \autoref{lem:XY}, $A_1$ is not complete to $N(A_2)$ and $A_2$ is not complete to $N(A_1)$. So there must exist $z_1\in N(A_1)\setminus N(A_2)$ and $z_2\in N(A_2)\setminus N(A_1)$. Clearly, $z_1,z_2\in (\cup_{i=1,5}S_3(i))\cup (\cup_{2\le i\le 4}S_4(i))\cup S_5$. Then $\{v_1,z_1,z_2\}\cup A_1\cup A_2$ contains an induced bull or an induced $P_5$, depending on whether $z_1z_2\in E$, a contradiction. Thus there are at most $2^{|S_2(3)|}$ homogeneous 2-chromatic components in $G[S_3(3)]$.
Let $B_1,B_2$ be two nonhomogeneous 2-chromatic components of $G[S_3(3)]$. So there exists $x'$ mixed on $B_1$ and $y'$ mixed on $B_2$. Let $x'$ be mixed on edge $x_3y_3$ in $B_1$ and $y'$ be mixed on edge $x_4y_4$ in $B_2$. By symmetry, assume that $x'x_3,y'x_4\in E$ and $x'y_3,y'y_4\notin E$. It is evident that $x'$ and $y'$ are not the same vertex, otherwise $\{y_3,x_3,x',x_4,y_4\}$ induces a $P_5$. Similarly, $x'$ is not mixed on $x_4y_4$ and $y'$ is not mixed on $x_3y_3$. Clearly, $x',y'\in S_2(3)\cup S_5\cup (\cup_{i=1,5}S_3(i))\cup (\cup_{2\le i\le 4}S_4(i))$.
\noindent{\bf Case 1.} $x'$ is anticomplete to $\{x_4,y_4\}$ and $y'$ is anticomplete to $\{x_3,y_3\}$. Then $x'$ is nonadjacent to $y'$, otherwise $\{y_3,x_3,x',y',x_4,y_4\}$ induces a $P_6$. If $x',y'\notin S_2(3)$, then $\{x_3,x',v_1,y',x_4\}$ induces a $P_5$. If $x',y'\in S_2(3)$, then $\{x',x_3,v_3,x_4,y'\}$ induces a $P_5$. So assume $x'\in S_2(3)$ and $y'\notin S_2(3)$. Then $\{x_4,v_3,y_3,x_3,x'\}$ induces a bull, a contradiction.
\noindent{\bf Case 2.} $x'$ is complete to $\{x_4,y_4\}$ and $y'$ is anticomplete to $\{x_3,y_3\}$. Then $x'y'\in E$, otherwise $\{y',x_4,y_4,x',x_3\}$ induces a bull. The case when $x'$ is anticomplete to $\{x_4,y_4\}$ and $y'$ is complete to $\{x_3,y_3\}$ is symmetric.
\noindent{\bf Case 3.} $x'$ is complete to $\{x_4,y_4\}$ and $y'$ is complete to $\{x_3,y_3\}$. If $x',y'\notin S_2(3)$, then $\{y_4,x',v_1,y',y_3\}$ induces a bull or a $P_5$, depending on whether $x'y'\in E$, a contradiction. If $x',y'\in S_2(3)$, then $x'y'\in E$, otherwise $\{x',y_4,v_3,y_3,y'\}$ induces a $P_5$. If $x'\in S_2(3)$ and $y'\notin S_2(3)$, then $x'y'\in E$, otherwise $\{v_1,y',y_3,x_3,x'\}$ induces a bull.
We now know that $x'$ must be adjacent to $y'$. So the number of nonhomogeneous 2-chromatic components of $G[S_3(3)]$ is not more than 4, otherwise the vertices mixed on them respectively can induce a $K_5$, a contradiction. It follows that there are at most $(2^{|S_2(3)|}+4)$ 2-chromatic components in $G[S_3(3)]$.
Therefore, $S_3\cup S_4$ is bounded.
\end{proof}
By Claims \ref{S3 trivial}-\ref{S3(i) 2-chromatic}, $S_3\cup S_4$ is bounded and so is $|G|$. This completes the proof of \autoref{th}.
\end{proof}
\end{document} |
\begin{document}
\title[Upper central series for the group of unitriangular automorphisms...]
{Upper central series for the group of unitriangular automorphisms of a free associative algebra}
\author{Valeriy G. Bardakov}
\address{Sobolev Institute of Mathematics, Novosibirsk State University, Novosibirsk 630090, Russia}
\address{and}
\address{Laboratory of Quantum Topology, Chelyabinsk State University, Brat'ev Kashirinykh street 129,
Chelyabinsk 454001, Russia}
\email{[email protected]}
\thanks{The first author thanks the organizers of the Conference ``Groups, Geometry and Dynamics"
(Almora, India, 2012) for this beautiful and interesting Conference.}
\author{Mikhail V. Neshchadim}
\address{Sobolev Institute of Mathematics, Novosibirsk State University, Novosibirsk 630090, Russia}
\email{[email protected]}
\thanks{The authors were supported by the Federal Target Grant ``Scientific
and educational personnel of innovation Russia'' for 2009-2013
(government contract No. 02.740.11.0429). Also, this research was
supported by the Indo-Russian DST-RFBR project
grant DST/INT/RFBR/P-137 (No.~13-01-92697)}
\subjclass[2010]{Primary 16W20; Secondary
20E15, 20F14.}
\keywords{Free associative algebra, group of unitriangular automorphisms, upper central series}
\begin{abstract}
We study some subgroups of the group of unitriangular automorphisms $U_n$
of a free associative algebra over a field of characteristic zero. We find the center
of $U_n$ and describe the hypercenters of $U_2$ and $U_3$. In particular, we
prove that the upper central series for $U_2$ has infinite length.
As a consequence, we prove that the
groups $U_n$ are non-linear for all $n \geq 2$.
\end{abstract}
\maketitle
\section{Introduction}
In this paper we consider a free associative algebra $A_n = K
\langle x_1, x_2,$ $\ldots,$ $x_n \rangle$ over a field $K$ of
characteristic zero. We assume that $A_n$ has unity. The group of $K$-automorphisms of this
algebra, i.e. automorphisms that fix the elements of $K$, is
denoted by $\mathrm{Aut} \, A_n$.
The group of tame automorphisms $\mathrm{TAut} \, A_n$ of $A_n$
is generated by the group of affine automorphism
$\mathrm{Aff} \, A_n$
and the group of unitriangular automorphisms $U_n=U(A_n)$.
From the result of Umirbaev \cite{U} it follows that $\mathrm{Aut} \, A_3 \not= \mathrm{TAut} \, A_3$.
A question about linearity (i.e. about a faithful representation
by finite dimensional matrices over some field) of $\mathrm{TAut}
\, A_n$ was studied in the paper of Roman'kov, Chirkov, Shevelin
\cite{R}, where it was proved that for $n \geq 4$ these groups are
not linear. Sosnovskii \cite{S} proved that for $n \geq 3$ the
group $\mathrm{Aut} \, P_n$ is not linear, where
$P_n = K [x_1, x_2,$ $\ldots,$ $x_n ]$ is the polynomial algebra over a field $K$.
His result follows
from description of the upper central series for the group of
unitriangular automorphisms $U(P_n)$ of $P_n$.
The structure of the present paper is the following. In Section 2 we introduce some notations, recall
some facts about the automorphism group of a free associative algebra and its subgroups.
In the previous article \cite{B} we
found the lower central series and the series of the commutator subgroups for $U_n$.
In Section 3 we study the upper central series for $U_2$ and prove that the length of this series is infinite.
We also prove that $U_2$ is non-linear. In Section 4 we study the upper central series for $U_3$ and describe the
hypercentral subgroups in the terms of some algebras. In Section 5 we find the center of $U_n$ for $n \geq 4$.
Also, in Sections 4 and 5 we formulate some hypotheses and questions
that are connected with the
theory of non-commutative invariants in free associative algebra under the action of some subgroups of $U_n$.
\section{Some previous results and remark}
Let us recall definitions of some automorphisms and subgroups of $\mathrm{Aut} \, A_n$.
For any index $i\in \left\{1, \ldots, n \right\}$, a constant
$\alpha \in K^* = K\backslash \{0\}$ and a polynomial $f = f(x_1,
\ldots , \widehat{x_i}, \ldots ,x_n) \in A_n$ (where the symbol
$\widehat{x_i}$ denotes that $x_i$ is not included in $f$) {\it the
elementary automorphism } $\sigma (i, \alpha, f)$ is an
automorphism in $\mathrm{Aut} \, A_n$ that acts on the
variables $x_1, \ldots ,x_n$ by the rule:
$$
\sigma (i, \alpha, f) :
\left\{
\begin{array}{lcl}
x_i \longmapsto \alpha \, x_i + f, \,\, & \\
x_j \longmapsto x_j, \,\, & \mbox{if} & \,\, j \neq i. \\
\end{array}
\right.
$$
The group of tame automorphisms $\mathrm{TAut} \, A_n$ is
generated by all elementary automorphisms.
The group of affine automorphisms $\mathrm{Aff} \, A_n$ is a
subgroup of $\mathrm{TAut} \, A_n$ that consists of automorphisms
$$
x_i\longmapsto a_{i1} x_1+ \ldots + a_{in} x_n + b_i,\,\, i=1, \ldots , n,
$$
where $a_{ij}$, $b_i\in K$, $i,j=1, \ldots ,n$, and the matrix
$(a_{ij})$ is a non-degenerate one. The group of affine
automorphisms is the semidirect product $K^n \leftthreetimes
\mathrm{GL}_n (K)$ and, in particular, embeds in the group of
matrices $\mathrm{GL}_{n+1} (K)$.
The group of triangular automorphisms $T_n = T(A_n)$ of algebra
$A_n$ is generated by automorphisms
$$
x_i\longmapsto \alpha_i x_i+f_i(x_{i+1}, \ldots , x_n),\,\, i = 1,\ldots , n,
$$
where $\alpha_i \in K^*$, $f_i\in A_n$ and $f_n\in K$. If all
$\alpha_i =1$ then this automorphism is called {\it the
unitriangular automorphism}. The group of unitriangular
automorphisms is denoted by $U_n = U(A_n)$.
In the group $U_n$ let us define a subgroup $G_i$, $i = 1, 2, \ldots, n$ which is generated by automorphisms
$$
\sigma (i, 1, f),~~ \, \mbox{where} \, f = f(x_{i+1}, \ldots , x_n) \in A_n.
$$
Note that the subgroup $G_i$ is abelian and isomorphic to an
additive subgroup of algebra $A_n$ that is generated by $x_{i+1},
\ldots , x_n$, $i = 1, \ldots, n-1$, and the subgroup $G_n $ is
isomorphic to the additive group of the field $K$.
{\it The lower central series}
of a group $G$ is the series
$$
G = \gamma_1 G \geq \gamma_2 G \geq \ldots,
$$
where $\gamma_{i+1} G = [\gamma_i G, G],$ $i = 1, 2, \ldots$. {\it The series of the commutator subgroups} of a
group $G$ is the series
$$
G = G^{(0)} \geq G^{(1)} \geq G^{(2)} \geq \ldots,
$$
where $G^{(i+1)} = [G^{(i)}, G^{(i)}],$ $i = 0, 1, \ldots$. Here for subsets $H$, $K$ of $G$, $[H, K]$
denotes the subgroup of $G$ generated by the commutators $[h, k] = h^{-1} k^{-1} h k$ for $h \in H$ and
$k \in K$.
Recall that the $k$-th hypercenter $Z_k = Z_k(G)$ of the upper central series of $G$ for the
non-limit ordinal $k$ is defined by the rule
$$
Z_k / Z_{k-1} = Z(G / Z_{k-1})
$$
or equivalently,
$$
Z_k = \{ g \in G ~|~[g, h] \in Z_{k-1}~ \mbox{for all} ~h \in G \},
$$
and $Z_1(G) = Z(G)$ is the center of $G$.
If $\alpha$ is a limit ordinal, then define
$$
Z_{\alpha} = \bigcup_{\beta < \alpha} Z_{\beta}.
$$
It was proved in \cite{B} that $U_n$ is a semidirect product of abelian groups:
$$
U_n = (\ldots(G_1 \leftthreetimes G_2)\leftthreetimes \ldots ) \leftthreetimes G_n,
$$
and the lower central
series and the series of commutator subgroups of $U_n$ satisfy the following two properties, respectively:
1) For $n \geq 2$
$$
\gamma_2 U_n = \gamma_3 U_n = \ldots.
$$
In particular, for $n \geq 2$ the group $U_n$ is not nilpotent.
2) The group $U_n$ is solvable of degree $n$ and the
corresponding commutator subgroups have the form:
$$
\begin{array}{l}
U_n= (\ldots(G_1 \leftthreetimes G_2)\leftthreetimes \ldots ) \leftthreetimes G_{n},\\
U_n^{(1)}= (\ldots(G_1 \leftthreetimes G_2)\leftthreetimes \ldots ) \leftthreetimes G_{n-1},\\
.........................................\\
U_n^{(n-1)}= G_1,\\
U_n^{(n)}= 1.
\end{array}
$$
Yu. V. Sosnovskiy \cite{S} found the upper central series for the unitriangular group $U(P_n)$ of the
polynomial algebra $P_n$. (Note that he considered polynomials without free terms.)
He proved that for $n \geq 3$ the group $U(P_n)$ has the upper central series
of length $((n-1)(n-2)/2) \omega + 1$ for any field $K$, where $\omega$ is the first limit ordinal.
If $\mathrm{char} \, K = 0$ then the
hypercenters of $U(P_4)$ have the form\\
$Z_{k} = \{ (x_1 + f_1(x_3, x_4), x_2, x_3, x_4) ~|~
\mathrm{deg}_{x_3} f_1(x_3, x_4) \leq k -1 \}$,\\
$Z_{\omega} = \{ (x_1 + f_1(x_3, x_4), x_2, x_3, x_4) \}$,\\
$Z_{\omega + k} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2, x_3, x_4) ~|~
\mathrm{deg}_{x_2} f_1(x_2, x_3, x_4) \leq k \}$,\\
$Z_{2 \omega} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2, x_3, x_4) \}$,\\
$Z_{2\omega + k} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2 + f_2(x_3, x_4), x_3, x_4) ~|~
\mathrm{deg}_{x_3} f_2(x_3, x_4) \leq k - 1 \}$,\\
$Z_{3 \omega} = \{ (x_1 + f_1(x_2, x_3, x_4), x_2 + f_2(x_3, x_4), x_3, x_4) \}$,\\
$Z_{3 \omega + 1} = U(P_4)$,\\
where $k = 1, 2, \ldots$ runs over the set of natural numbers and
$f_1, f_2$ are arbitrary polynomials in $P_4$ which depend on the corresponding variables.
\section{Unitriangular group $U_2$}
Let $A_2 = K \langle x, y \rangle$ be the free associative algebra over a field $K$ of
characteristic zero with the variables $x$ and $y$. Then
$$
U_2=
\left\{
\varphi= \left( x + f(y), y + b \right) ~|~ f(y) \in K\langle y \rangle,\,\,b \in K \right\}
$$
is the group of unitriangular automorphisms of $A_2$.
It is not difficult to check the following lemma.
\begin{lemma}\label{l:form}
1) If $\varphi = \left( x + f(y), y + b \right) \in U_2$,
then its inverse is equal to
$$
\varphi^{-1}= \left( x - f(y - b), y - b \right);
$$
2) if
$ \varphi= \left( x + f(y), y + b \right)$ and $\psi= \left( x + h(y), y + c \right)\in U_2$,
then the following formulas hold:
-- the formula of conjugation
$$
\psi^{-1}\varphi \psi = \left( x + h(y) - h(y + b) + f(y + c), y + b \right),
$$
-- the formula of commutation
$$
\varphi^{-1}\psi^{-1}\varphi \psi = \left( x + h(y) - h(y + b) + f(y + c) - f(y), y \right).
$$
\end{lemma}
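For example (a direct computation with the formulas above), for $\varphi = \left( x + y^{2},\, y \right)$ and $\psi = \left( x,\, y + 1 \right)$ the formula of commutation gives
$$
\varphi^{-1}\psi^{-1}\varphi \psi = \left( x + (y+1)^{2} - y^{2},\, y \right) = \left( x + 2y + 1,\, y \right),
$$
so already the commutator of two very simple automorphisms has a nonconstant first component.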
Using this Lemma we can describe the center of $U_2$.
\begin{lemma} \label{l:c}
The center of $U_2$ has the form
$Z(U_2) = \left\{ \varphi = \left( x + a, y \right) ~|~ a \in K \right\}$.
\end{lemma}
\begin{proof}
If $\varphi = \left( x + a, y \right)$, then it follows from the formula of conjugation (see Lemma~\ref{l:form}) that $\varphi \in Z(U_2)$. To prove the reverse inclusion
suppose
$ \varphi= \left( x+f(y), y+b \right) \in Z(U_2)$.
Using the formula of conjugation we get
$$
\varphi= \left( x + f(y), y + b \right) = \psi^{-1} \varphi \psi = \left( x + h(y) - h(y + b) + f(y + c), y + b \right),
$$
for any automorphism $\psi = \left( x + h(y), y + c \right) \in U_2$,
i.e.
$ f(y) = h(y) - h(y + b) + f(y + c)$.
Taking $h = 0$ we get $f(y) = f(y + c)$ for every $c \in K$.
Hence, $f(y) = a \in K$. The remaining relation is
$0 = h(y) - h(y + b)$. Since $h(y)$ is arbitrary, it follows that $b=0$.
\end{proof}
\begin{lemma}\label{l:com}
The following properties hold true in $U_2$.
1)
$[U_2, U_2] = \left\{\varphi = \left( x + f(y), y \right) ~|~ f(y) \in K \langle y \rangle \right\}$.
2) If $\varphi = \left( x + f(y), y \right)$, where $f(y) \in K \langle y \rangle \setminus K$, then
$$
C_{U_2}(\varphi) = \left\{\left( x + h(y), y \right) ~|~ h(y)\in K \langle y \rangle \right\},
$$
where $C_{U_2}(\varphi)$ is the centralizer of $\varphi$ in $U_2$, i.e.
$C_{U_2}(\varphi) = \{ \psi \in U_2 ~|~ \psi \varphi = \varphi \psi\}$.
3) If $\varphi= \left( x,y+b \right)$, $b \in K$, then
$C_{U_2}(\varphi) = \left\{ \left( x + a, y + c \right) ~|~ a, c \in K \right\}$.
\end{lemma}
\begin{proof} 1) Let
$\varphi = \left( x + f(y), y + b \right)$, $\psi = \left( x + h(y), y + c \right) \in U_2$.
By the formula of commutation
$$
\varphi^{-1} \psi^{-1} \varphi \psi = \left( x + h(y) - h(y + b) + f(y + c) - f(y), y \right).
$$
It is easy to see that any element of $K \langle y \rangle$ can be represented in the form
$r(y + d) - r(y)$ for some $r(y) \in K \langle y \rangle$ and $d\in K$.
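(For instance, $y^{2} = r(y+1) - r(y)$ for $r(y) = \tfrac{1}{6}\,(2y^{3} - 3y^{2} + y)$.)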
Hence,
$$
K \langle y \rangle = \{ h(y) - h(y + b) + f(y + c) - f(y)~|~h, f \in K \langle y \rangle,~~b, c \in K\}
$$
and 1) is true.
2) Let
$ \varphi = \left( x+f(y), y \right)$, $f(y) \in K \langle y \rangle \setminus K$ and
$ \psi = \left( x + h(y), y + c \right)$ be an arbitrary element of $C_{U_2}(\varphi)$.
Using the formula of conjugation we get
$$
\varphi= \left( x + f(y), y \right) = \psi^{-1} \varphi \psi = \left( x + f(y + c), y \right).
$$
Hence $c = 0$.
3) Let
$ \varphi = \left( x, y + b \right)$ and
$ \psi = \left( x + h(y), y + c \right)$ be an arbitrary element of $C_{U_2}(\varphi)$.
Using the formula of conjugation we get
$$
\varphi = \left( x, y + b \right) = \psi^{-1} \varphi \psi = \left( x + h(y) - h(y + b), y + b \right).
$$
Hence $h(y)=a \in K$.
\end{proof}
\begin{lemma}\label{l:co}
If $s$ is a non-negative integer, then the $(s+1)$-st hypercenter of $U_2$ has the form
$$
Z_{s+1}(U_2) = \left\{\varphi = \left( x+f(y),y \right)
~|~ f(y) \in K \langle y \rangle,\,\, \mathrm{deg} f(y) \leq s \right\}.
$$
\end{lemma}
\begin{proof}
If $s = 0$, then the assertion follows from Lemma \ref{l:c}. Suppose that for
$s + 1$ the assertion holds true. We now prove it for $s + 2$.
Let
$$
\varphi = \left( x + f(y), y + b \right)\in Z_{s+2}(U_2).
$$
Using the formula of commutation (see Lemma \ref{l:form})
for $\varphi$ and $\psi= \left( x + h(y), y + c \right)$
we get
$$
\varphi^{-1}\psi^{-1}\varphi \psi = \left( x + h(y) - h(y + b) + f(y + c) - f(y), y \right).
$$
If $b \neq 0$, then, since $h(y)$ is an arbitrary polynomial in $K \langle y \rangle$, the element
$h(y) - h(y + b) + f(y + c) - f(y)$ can be an arbitrary element of
$K \langle y \rangle$.
This is impossible, since the commutator lies in $Z_{s+1}(U_2)$, and for any automorphism $(x + f(y), y) \in Z_{s+1}(U_2)$ the degree
of $f(y)$ is at most $s$.
Hence $b = 0$. Since $\mathrm{deg} (f(y+c)-f(y)) \leq s$ and $c$ is an arbitrary element of $K$,
we have $\mathrm{deg} f(y) \leq s + 1$. So the inclusion from left to right is proved. The reverse inclusion is
evident.
\end{proof}
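For instance, $Z_1(U_2) = \left\{ \left( x + a,\, y \right) ~|~ a \in K \right\}$ is the center described in Lemma \ref{l:c}, and $Z_2(U_2) = \left\{ \left( x + a_0 + a_1 y,\, y \right) ~|~ a_0, a_1 \in K \right\}$.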
Hence, from Lemma \ref{l:co}
$$
Z_{\omega}(U_2)=\left\{\varphi= \left( x + f(y), y \right) ~|~ f(y) \in K\langle y \rangle \right\},
$$
and using Lemma \ref{l:com} we have $Z_{\omega}(U_2)=[U_2, U_2]$.
Therefore, since $U_2 / Z_{\omega}(U_2) = U_2 / [U_2, U_2]$ is abelian,
$$
Z_{\omega+1}(U_2)=U_2.
$$
\begin{corollary}
The group $U_2$ is not linear.
\end{corollary}
\begin{proof}
We know (see, for example \cite{G}) that if a linear group does not
contain torsion, then the length of the upper central series of this group is finite.
But we proved that the length of the upper central series for
$U_2$ is equal to $\omega + 1$. Hence, the group $U_2$ is not linear.
\end{proof}
Since $U_2$ is a subgroup of $\mathrm{Aut} \, A_2$, it follows that $\mathrm{Aut} \, A_2$ is not linear either.
Moreover, if $P_2$ is the polynomial algebra with unit over
$K$, then $\mathrm{Aut} \, A_2 = \mathrm{Aut} \, P_2$
(see, for example, \cite{C}), and hence $\mathrm{Aut} \, P_2$ is not linear as well.
\begin{corollary}
Let $n \geq 2$. Then the groups $\mathrm{Aut} \, A_n$ and $\mathrm{Aut} \, P_n$ are
not linear.
\end{corollary}
This follows from the fact that $\mathrm{Aut} \, A_2 \leq \mathrm{Aut} \, A_n$
and $\mathrm{Aut} \, P_2 \leq \mathrm{Aut} \, P_n$ for all $n \geq 2$.
\begin{remark}
In \cite{S} the author considered the polynomials without free terms and proved that $\mathrm{Aut} \, P_3$
is not linear. Using his method it is not difficult to prove that if $P_2$ contains free terms, then
$\mathrm{Aut} \, P_2$ is not linear over any field of arbitrary characteristic.
\end{remark}
\section{Unitriangular group $U_3$}
The group $U_3$ is equal to
$$
U_3 = \{ (x_1 + f_1, x_2 + f_2, x_3 + f_3) ~|~f_1 = f_1(x_2, x_3) \in K \langle x_2, x_3 \rangle,
$$
$$
f_2 = f_2(x_3) \in K \langle x_3 \rangle, f_3 \in K \}.
$$
Define an algebra $S$ as a subalgebra of $K \langle x_2, x_3 \rangle$
$$
S = \{ f(x_2, x_3)\in K \langle x_2, x_3 \rangle ~|~ f(x_2 + g(x_3), x_3 + h) = f (x_2, x_3)~
$$
$$
\mbox{for any}~g(x_3) \in K \langle x_3 \rangle,~~h \in K \}
$$
Hence, $S$ is the subalgebra of elements fixed under the action of the group
$$
\{ (x_2 + g, x_3 + h) ~|~g = g(x_3) \in K \langle x_3 \rangle, h \in K \}
$$
which is isomorphic to $U_2$.
The set $S$ is a subalgebra of $A_3$.
Define a set of commutators
$$
c_1 = [x_2, x_3],~~c_{k+1} = [c_k, x_3],~~k = 1, 2, \ldots,
$$
where $[a, b] = a b - b a$ is the ring commutator. Using induction on $k$, it is not difficult to check
the following result.
\begin{lemma}
The commutators $c_k$, $k = 1, 2, \ldots,$ lie in $S$.
\end{lemma}
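For example, for $k = 1$: if $\varphi = (x_2 + g(x_3),\, x_3 + h)$ with $g(x_3) \in K \langle x_3 \rangle$ and $h \in K$, then
$$
c_1^{\varphi} = [x_2 + g(x_3),\, x_3 + h] = [x_2, x_3] + [x_2, h] + [g(x_3), x_3] + [g(x_3), h] = [x_2, x_3] = c_1,
$$
since $h$ is central and $g(x_3)$ commutes with $x_3$; the inductive step for $c_{k+1} = [c_k, x_3]$ is analogous.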
{\bf Hypothesis 1.} The algebra $K \langle c_1, c_2, \ldots \rangle$ is equal to $S$.
Note that the elements $c_1, c_2, \ldots$ are free generators of $K \langle c_1, c_2, \ldots \rangle$
(see \cite[p.~62]{Co}).
\begin{theorem} The center $Z(U_3)$ of the group $U_3$ is equal to
$$
Z(U_3) = \{ (x_1 + f_1, x_2, x_3) ~|~f_1 = f_1(x_2, x_3) \in S \}.
$$
\end{theorem}
\begin{proof} The inclusion $\supseteq$ is evident. Let
$$
\varphi = (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3 + f_3)
$$
be some element in $Z(U_3)$ and
$$
\psi = (x_1 + g_1(x_2, x_3), x_2 + g_2(x_3), x_3 + g_3)
$$
be an arbitrary element of $U_3$. Since $\varphi \psi = \psi \varphi$, we have the equalities\\
$x_1 + g_1(x_2, x_3) + f_1(x_2 + g_2(x_3), x_3 + g_3) =
x_1 + f_1(x_2, x_3) + g_1(x_2 + f_2(x_3), x_3 + f_3),$\\
$x_2 + g_2(x_3) + f_2(x_3 + g_3) = x_2 + f_2(x_3) + g_2(x_3 + f_3),$\\
$x_3 + g_3 + f_3 = x_3 + f_3 + g_3.$ \\
\noindent The third equality holds for all $x_3$, $g_3$ and $f_3$. Rewrite the first and the second equality
in the form\\
$g_1(x_2, x_3) + f_1(x_2 + g_2(x_3), x_3 + g_3) = f_1(x_2, x_3) + g_1(x_2 + f_2(x_3), x_3 + f_3),$\\
$g_2(x_3) + f_2(x_3 + g_3) = f_2(x_3) + g_2(x_3 + f_3).$\\
\noindent Let $g_1 = x_2$, $g_2 = g_3 = 0$. Then
$$
x_2 + f_1(x_2, x_3) = f_1(x_2, x_3) + x_2 + f_2(x_3).
$$
Hence $f_2(x_3) = 0$.
Let $g_1 = x_3$, $g_2 = g_3 = 0$. Then
$$
x_3 + f_1(x_2, x_3) = f_1(x_2, x_3) + x_3 + f_3.
$$
Hence $f_3 = 0.$
We have only one condition
$$
f_1(x_2 + g_2(x_3), x_3 + g_3) = f_1(x_2, x_3),
$$
i.e. $f_1 \in S$.
\end{proof}
Let us define the following subsets in the algebra $A_2 = K\langle x_2, x_3 \rangle$:\\
$S_1 = S$,\\
$S_{m+1} = \{ f \in A_2 ~|~f^{\varphi} - f \in S_m~ \mbox{for all}~ \varphi \in U_2 \}$, $m = 1, 2, \ldots$,\\
$S_{\omega} = \bigcup\limits_{m = 1}^{\infty} S_{m}$, \\
$S_{\omega+1} = \{ f \in A_2 ~|~f^{\varphi} - f \in S_\omega~ \mbox{for all}~ \varphi \in U_2 \}$,\\
$S_{\omega+m+1} = \{ f \in A_2 ~|~f^{\varphi} - f \in S_{\omega+m}~ \mbox{for all}~ \varphi \in U_2 \}$,
$m = 1, 2, \ldots$,\\
$S_{2\omega} = \bigcup\limits_{m = 1}^{\infty} S_{\omega+m}$, \\
$R_{m} = \{ f = f(x_3) \in K \langle x_3 \rangle ~|~\mathrm{deg} \, f \leq m \}$, $m = 0, 1, \ldots ,$\\
$R_{\omega} = \bigcup\limits_{m = 0}^{\infty} R_{m}$. \\
It is not difficult to see that all $S_k$ are modules over $S$.
\begin{remark} If we consider the homomorphism
$$
\pi : K \langle x_1, x_2, x_3 \rangle \longrightarrow K [x_1, x_2, x_3],
$$
then\\
$S^{\pi} = K$,\\
$S_{m+1}^{\pi} = \{ f \in K[x_3] ~|~\mathrm{deg} \, f \leq m \}$, $m = 1, 2, \ldots$,\\
$S_{\omega}^{\pi} = K[x_3]$, \\
$S_{\omega+m+1}^{\pi} = \{ f \in K[x_2, x_3] ~|~\mathrm{deg}_{x_2} \, f \leq m \}$,
$m = 0, 1, \ldots$,\\
$S_{2\omega}^{\pi} = K[x_2, x_3]$, \\
$R_{m}^{\pi} = \{ f \in K[x_3] ~|~\mathrm{deg} \, f \leq m \}$, $m = 1, 2, \ldots ,$\\
$R_{\omega}^{\pi} = K[x_3]$. \\
\end{remark}
\begin{theorem} The following equalities hold
\begin{equation}\label{eq:m}
Z_{m} = \{ (x_1 + f_1(x_2, x_3), x_2, x_3)~|~ f_1 \in S_m \},~~ m = 1, 2, \ldots, 2\omega,
\end{equation}
\begin{multline}\label{eq:2m}
Z_{2\omega+m} = \{ (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3)~|~ f_1 \in K\langle x_2, x_3 \rangle,
f_2(x_3) \in R_m \},\\ m = 1, 2, \ldots, \omega,
\end{multline}
\begin{equation}\label{eq:3m}
Z_{3\omega + 1} = U_3.
\end{equation}
\end{theorem}
\begin{proof} We use induction on $m$. To prove (\ref{eq:m}) for $m+1$, we assume that for all $m$ such that
$1 \leq m < \omega$ equality (\ref{eq:m}) holds.
If
$$
\varphi = (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3 + f_3) \in Z_{m+1}
$$
and
$$
\psi = (x_1 + g_1(x_2, x_3), x_2 + g_2(x_3), x_3 + g_3) \in U_{3},
$$
then for some
$$
\theta = (x_1 + h_1(x_2, x_3), x_2, x_3) \in Z_{m}
$$
holds $\varphi \, \psi = \psi \, \varphi \, \theta$. Acting on the generators $x_1$, $x_2$, $x_3$ by
$\varphi \, \psi$ and $\psi \, \varphi \, \theta$
we have
two relations
\begin{multline}\label{eq:1}
f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) = h_1(x_2, x_3) + \\
+ g_1(x_2 + f_2(x_3), x_3 + f_3) - g_1(x_2, x_3),
\end{multline}
\begin{equation}\label{eq:2}
g_2(x_3) + f_2(x_3 + g_3) = f_2(x_3) + g_2(x_3 + f_3).
\end{equation}
If $g_2 = 0$, then the relation (\ref{eq:2}) has the form
$$
f_2(x_3 + g_3) = f_2(x_3).
$$
Since $g_3$ is an arbitrary element of $K$, we get $f_2 \in K$. In this case (\ref{eq:2}) takes the form
$$
g_2(x_3 + f_3) = g_2(x_3).
$$
Since $g_2(x_3)$ is an arbitrary element of $K\langle x_3 \rangle$, then $f_3 = 0$ and (\ref{eq:1}) has the
form\\
\begin{equation}\label{eq:3}
f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) = h_1(x_2, x_3) + g_1(x_2 + f_2, x_3) - g_1(x_2, x_3).
\end{equation}\\
Let $g_1 = x_2^{N}$ for some natural number $N$. Using the homomorphism
$$
\pi : K \langle x_1, x_2, x_3 \rangle \longrightarrow K[x_1, x_2, x_3]
$$
and the equality $\mathrm{deg}_{x_2}\, h_1^{\pi} = 0$ we see that if $f_2 \not= 0$, then
$$
\mathrm{deg}_{x_2}\, \left( f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \right)^{\pi} = N - 1.
$$
Since $N$ is any non-negative integer, then $f_2 = 0$ and
$$
f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \in S_m,
$$
i.e. $f_1(x_2, x_3) \in S_{m+1}$ and we have proven the equality (\ref{eq:m}) for $m+1$:
$$
Z_{m+1} = \{ (x_1 + f_1(x_2, x_3), x_2, x_3)~|~ f_1 \in S_{m+1} \}.
$$
To prove (\ref{eq:m}) for $\omega+m+1$ assume that for all $\omega+m$ such that
$1 \leq m < \omega$ equality (\ref{eq:m}) holds.
If $\varphi \in Z_{\omega+m+1}$ and $\psi \in U_3$ then for some $\theta \in Z_{\omega+m}$ we have
$\varphi \, \psi = \psi\, \varphi \, \theta$ that give the relations (\ref{eq:1}) and (\ref{eq:2}).
As in the previous case we
can check that $f_2 \in K$, $f_3 = 0$ and (\ref{eq:1})--(\ref{eq:2}) are equivalent to (\ref{eq:3}).
Let $g_1 = x_2^{N}$ for some natural number $N$. Using the homomorphism
$$
\pi : K \langle x_1, x_2, x_3 \rangle \longrightarrow K [x_1, x_2, x_3]
$$
and the inequality $\mathrm{deg}_{x_2}\, h_1^{\pi} \leq m - 1$ we see that if $f_2 \not= 0$, then
$$
\mathrm{deg}_{x_2}\, \left( f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \right)^{\pi} = N - 1
$$
for $N \geq m + 1$. But the degree of the left hand side is bounded.
Hence $f_2 = 0$ and we have
$$
f_1(x_2 + g_2(x_3), x_3 + g_3) - f_1(x_2, x_3) \in S_{\omega+m},
$$
i.e., $f_1(x_2, x_3) \in S_{\omega+m+1}$ and we have proven the equality (\ref{eq:m}) for $\omega+m+1$:
$$
Z_{\omega+m+1} = \{ (x_1 + f_1(x_2, x_3), x_2, x_3)~|~ f_1 \in S_{\omega+m+1} \}.
$$
To prove (\ref{eq:2m}) for $m+1$ assume that for all $m$ such that
$1 \leq m < \omega$ equality (\ref{eq:2m}) holds.
If $\varphi \in Z_{2\omega+m+1}$, $\psi \in U_3$, then for some $\theta \in Z_{2\omega+m}$ we have
$\varphi \, \psi = \psi\, \varphi \, \theta$. If
$$
\varphi = (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3 + f_3),
$$
$$
\psi = (x_1 + g_1(x_2, x_3), x_2 + g_2(x_3), x_3 + g_3)
$$
and
$$
\theta = (x_1 + h_1(x_2, x_3), x_2 + h_2(x_3), x_3),
$$
then we have the relations
\begin{equation*}
\begin{split}
x_1 + g_1(x_2, x_3) + f_1(x_2 + g_2(x_3), x_3 + g_3) & = x_1 + h_1(x_2, x_3) + f_1(x_2 + h_2(x_3), x_3) +\\
& + g_1(x_2 + h_2(x_3) + f_2(x_3), x_3 + f_3),
\end{split}
\end{equation*}
$$
x_2 + g_2(x_3) + f_2(x_3 + g_3) = x_2 + h_2(x_3) + f_2(x_3) + g_2(x_3 + f_3).
$$
Since $h_1$ is an arbitrary element of $K \langle x_2, x_3 \rangle$, we need only consider the second relation, which is equivalent to
$$
f_2(x_3 + g_3) - f_2(x_3) = h_2(x_3) + g_2(x_3 + f_3) - g_2(x_3).
$$
Since $\mathrm{deg} \, h_2 \leq m$ and $g_2(x_3)$ is any element of $K \langle x_3 \rangle$, then $f_3 = 0$.
Hence,
$$
f_2(x_3 + g_3) - f_2(x_3) = h_2(x_3).
$$
From this equality it follows that $\mathrm{deg} \, f_2 \leq m + 1$. We have proven the equality (\ref{eq:2m}) for $m+1$:
$$
Z_{2\omega+m+1} = \{ (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3)~|~ f_1 \in K \langle x_2, x_3 \rangle,
f_2(x_3) \in R_{m+1} \}.
$$
To prove (\ref{eq:3m}) we note that
$$
[U_3, U_3] \subseteq \{ (x_1 + f_1(x_2, x_3), x_2 + f_2(x_3), x_3) \} = Z_{3\omega},
$$
so $U_3 / Z_{3\omega}$ is abelian and hence $Z_{3\omega + 1} = U_3$.
\end{proof}
We described the hypercenters of $U_3$ in terms of the algebras $S_m$ and $S_{\omega+m}$. It is
interesting to find sets of generators for these algebras.
To do this we must answer the following questions.
{\bf Question 1} (see Hypothesis 1). Is it true that
$$
S = K\langle c_1, c_2, \ldots \rangle?
$$
{\bf Question 2.} Is it true that for all $m \geq 1$ the following equalities are true
$$
S_{m+1} = \{ f \in K\langle S, x_3 \rangle ~|~\mathrm{deg}_{x_3} \, f \leq m \}?
$$
{\bf Question 3.} Is it true that
$$
\bigcup_{m=1}^{\infty} S_m = K\langle S, x_3 \rangle?
$$
{\bf Question 4.} Is it true that for all $m \geq 1$ the following equalities are true
$$
S_{\omega+m} = \{ f \in K \langle S, x_3, x_2 \rangle ~|~\mathrm{deg}_{x_2} \, f \leq m \}?
$$
If $R$ is the Specht algebra of $A_2$, i.e. the subalgebra of $A_2$ that is generated by all commutators
$$
[x_2, x_3],~~[[x_2, x_3], x_3],~~[[x_2, x_3], x_2], \ldots
$$
then the following inclusions hold
\begin{equation}\label{eq:4}
S_{m+1} \subset \{ f \in K \langle R, x_3 \rangle ~|~\mathrm{deg}_{x_3} \, f \leq m \},
\end{equation}
\begin{equation}\label{eq:5}
S_{\omega+m} \subset \{ f \in K \langle R, x_3, x_2 \rangle ~|~\mathrm{deg}_{x_2} \, f \leq m \},
\end{equation}
for all $m \geq 1$. It follows from the fact that $K \langle x_2, x_3 \rangle$ is a free left $R$-module
with the set of free generators
$$
x_2^{\alpha} x_3^{\beta}, ~~~\alpha, \beta \geq 0.
$$
Note that the inclusion (\ref{eq:4}) is strict. This follows from the following proposition.
\begin{proposition}
The commutators
$$
[x_2, \underbrace{x_3, \ldots, x_3}_k, x_2] = [c_k, x_2],~~k \geq 1,
$$
do not lie in $S_{m}$, $m \geq 1$.
\end{proposition}
\begin{proof}
Indeed, for the automorphism
$$
\varphi = (x_2 + g_2(x_3), x_3 + g_3)
$$
of the algebra $K \langle x_2, x_3 \rangle$
we have
$$
[c_k, x_2]^{\varphi} = [c_k^{\varphi}, x_2^{\varphi}] = [c_k, x_2 + g_2(x_3)] = [c_k, x_2] + [c_k, g_2(x_3)].
$$
If $g_2(x_3) = x_3^{N}$, then\\
\begin{align*}
[c_k, x_3^{N}] & = c_k \, x_3^{N} - x_3^{N} c_k = (c_k x_3 - x_3 c_k) x_3^{N-1} + x_3 c_k x_3^{N-1} -
x_3^{N} c_k \\
& = c_{k+1} x_3^{N-1} + x_3 (c_k x_3^{N-1} - x_3^{N-1} c_k) \\
& =c_{k+1} x_3^{N-1} +
x_3 \left(( c_k x_3 - x_3 c_k) x_3^{N-2} + x_3 c_k x_3^{N-2} - x_3^{N-1} c_k \right)\\
& =c_{k+1} x_3^{N-1} + x_3 c_{k+1} x_3^{N-2} + x_3^{2} ( c_k x_3^{N-2} - x_3^{N-2} c_k )\\
& = \ldots \\
& = \sum_{p+q = N-1} x_3^p c_{k+1} x_3^q.\\
\end{align*}
Hence, for $\varphi = (x_2 + x_3^{N}, x_3 + g_3)$ we have
$$
[c_k, x_2]^{\varphi} - [c_k, x_2] = \sum_{p+q = N-1} x_3^p c_{k+1} x_3^q.
$$
If
$$
g_2(x_3) = \sum_{n=0}^{N} a_n x_3^n,
$$
then
$$
[c_k, x_2]^{\varphi} - [c_k, x_2] = \sum_{n=1}^{N} a_n \sum_{p+q=n-1} x_3^p c_{k+1} x_3^q.
$$
If $[c_k, x_2] \in S_m$ for some $m$, then
$$
[[c_k, x_2], \varphi ] \equiv [c_k, x_2]^{\varphi} - [c_k, x_2] \in S_{m-1}.
$$
Let
$$
\psi = (x_2 + h_2(x_3), x_3 + h),~~~\varphi = (x_2 + x_3^N, x_3).
$$
Then
\begin{align*}
[[[c_k, x_2], \varphi ], \psi ] & = \left[\sum_{p+q=N-1} x_3^p c_{k+1} x_3^q, \psi \right] \\
& = \sum_{p+q=N-1} (x_3 + h)^p c_{k+1} (x_3 + h)^q - \sum_{p+q=N-1} x_3^p c_{k+1} x_3^q \\
& = \sum_{p+q=N-1} \; \sum_{\substack{0 \leq l \leq p, \ 0 \leq r \leq q \\ (l, r) \neq (p, q)}} C_p^l \, C_q^r \, h^{(p-l)+(q-r)} \, x_3^l c_{k+1} x_3^r \\
\end{align*}
has degree $N-2$ in the variable $x_3$. Continuing this process, we see that
if $\mathrm{deg} \, g_2(x_3) = N$, then
$$
[c_k, x_2, \varphi, \psi_1, \ldots, \psi_{N-1}] \in S
$$
for every $\psi_1, \ldots, \psi_{N-1} \in U_2$. Hence,\\
$[c_k, x_2, \varphi, \psi_1, \ldots, \psi_{N-1}] \in S,$\\
$...............................$ \\
$[c_k, x_2, \varphi ] \in S_N,$\\
$[c_k, x_2] \in S_{N+1}.$\\
Using similar ideas we can prove that
$$
[c_k, x_2] \not\in S_{N}.
$$
Since we can take an arbitrary number $N > m$, it follows that
$$
[c_k, x_2] \not\in S_{m}, ~~m = 1, 2, \ldots
$$
\end{proof}
\section{Center of the unitriangular group $U_n$, $n \geq 4$}
In this section we prove the following assertion
\begin{theorem} Any automorphism $\varphi$ in the center $Z(U_n)$ of $U_n$ has the form
$$
\varphi=
\left( x_1 + f(x_2, \ldots, x_n), x_2, \ldots, x_n \right),
$$
where the polynomial $f$ is such that
$$
f(x_2 + g_2, \ldots, x_n + g_n) = f(x_2, \ldots, x_n)
$$
for every
$ g_2 \in K \langle x_3, \ldots, x_n \rangle$, $g_3\in K \langle x_4, \ldots, x_n \rangle, \ldots, g_n \in K$.
\end{theorem}
We will assume that $U_{n-1}$ is embedded in $U_n$ by the rule
$$
U_{n-1} = \left\{ \varphi = \left( x_1, x_2 + g_2, \ldots, x_n + g_n \right) \in U_n ~|~
g_2 \in K \langle x_3, \ldots, x_n \rangle,\,\, \ldots, g_n \in K
\right\}.
$$
Hence we have the following sequence of inclusions for the subgroups $U_k$, $k=3,\ldots,n$
$$
U_n \geq U_{n-1} \geq \ldots \geq U_{3}.
$$
Under this assumption we can reformulate the theorem in the following manner:
$$
Z(U_n) = \left\{
\varphi= \left( x_1 + f(x_2, \ldots, x_n), x_2, \ldots, x_n \right) ~|~
f^{U_{n-1}}=f
\right\},
$$
where
$$
f^{U_{n-1}} = \{ f^{\psi} ~|~ \psi \in U_{n-1} \}.
$$
\begin{proof}
Let
$$
\varphi= \left( x_1 + f_1, x_2 + f_2, \ldots, x_n + f_n \right) \in Z(U_n)
$$
and
$$
\psi= \left( x_1 + g_1, x_2 + g_2, \ldots, x_n + g_n \right)
$$
be an arbitrary element of $U_n$.
Then $x_k^{\varphi\psi} = x_k^{\psi\varphi}$ for all $k = 1, 2, \ldots, n$.
In particular, if $k = 1$, then
\begin{equation}\label{eq:11}
( x_1 + f_1)^\psi = ( x_1 + g_1)^\varphi.
\end{equation}
Put $g_1 = x_2$, $g_2 = g_3 = \ldots = 0$.
Then this relation has the form
$$
x_1 + x_2 + f_1 = x_1 + f_1 + x_2 + f_2.
$$
Hence, $f_2 = 0$.
Analogously, putting $g_1 = x_3$, $g_2 = g_3 = \ldots = 0$, we get $f_3 = 0$.
Hence, $f_2 = f_3 = \ldots = f_n = 0$.
The relation (\ref{eq:11}) for arbitrary $\psi$ has the form
$$
x_1 + g_1(x_2, \ldots, x_n) + f_1( x_2 + g_2, \ldots, x_n + g_n) = x_1 + f_1( x_2, \ldots, x_n)
+ g_1(x_2, \ldots, x_n).
$$
Hence,
$$
f_1( x_2+g_2, \ldots, x_n + g_n) = f_1( x_2, \ldots, x_n).
$$
\end{proof}
Let us introduce the following notation:
$$
\zeta U_n = \left\{ f(x_2, \ldots, x_n) \in K \langle x_2, \ldots, x_n \rangle ~|~ f^{U_{n-1}} = f \right\},
$$
$$
\zeta U_{n-1} = \left\{ f(x_3, \ldots, x_n) \in K \langle x_3, \ldots, x_n \rangle ~|~ f^{U_{n-2}} = f \right\},
$$
$$
......................................................................................
$$
$$
\zeta U_{3} = \left\{ f(x_{n-1}, x_n) \in K \langle x_{n-1},x_n \rangle ~|~
f^{U_{2}} = f \right\}.
$$
Note that $\zeta U_{3} = S$.
We formulate the next hypothesis on the structure of the algebras $\zeta U_k$, $k=3, \ldots, n$.
{\bf Hypothesis 2.} The following inclusions hold
$$
\zeta U_4 \subseteq K \langle \zeta U_3, x_{n-2} \rangle,
$$
$$
\zeta U_5\subseteq K \langle \zeta U_4, x_{n-3} \rangle,
$$
$$
.................................,
$$
$$
\zeta U_n\subseteq K \langle \zeta U_{n-1}, x_{2} \rangle.
$$
Recall that by Hypothesis 1 we have
$$
\zeta U_3 = K \langle c_1, c_2, \ldots \rangle,
$$
where $c_1 = [x_{n-1}, x_n]$, $c_{k+1} = [c_k, x_n]$, $k = 1, 2, \ldots $
\begin{proposition}
If Hypotheses 1 and 2 are true, then the following equality holds
$$
\zeta U_k = K \langle c_1, c_2, \ldots \rangle,\,\,\,k = 3, 4, \ldots, n.
$$
\end{proposition}
\begin{proof}
For $k = 4$ Hypothesis 2 has the form
$$
\zeta U_4 \subseteq K \langle \zeta U_3, x_{n-2} \rangle,
$$
i.e., every polynomial $f\in \zeta U_4$
can be represented in the form
$$
f = F(x_{n-2}, c_1, c_2, \ldots, c_N)
$$
for some non-negative integer $N$.
Applying the automorphism
$$
\psi = \left( x_1, x_2, \ldots, x_{n-2} + g_{n-2} , x_{n-1}, x_n \right),
$$
we get
$$
F(x_{n-2} + g_{n-2}, c_1, c_2, \ldots, c_N) = F(x_{n-2}, c_1, c_2, \ldots, c_N).
$$
Here $g_{n-2} = g_{n-2}(x_{n-1}, x_n)$ is an arbitrary element of $K \langle x_{n-1}, x_n \rangle$.
Putting in this equality $g_{n-2} = c_{N+1}$ and $x_{n-2} = 0$ we have
$$
F(c_{N+1}, c_1, c_2, \ldots, c_N) = F(0, c_1, c_2, \ldots, c_N).
$$
Since $c_1, c_2, \ldots $ are free generators,
$F$ does not contain the variable $x_{n-2}$. Hence
$$
\zeta U_4 = K \langle c_1, c_2, \ldots \rangle.
$$
Analogously, we can prove the equality
$$
\zeta U_k = K \langle c_1, c_2, \ldots \rangle,\,\,\,k=3, 4, \ldots, n.
$$
\end{proof}
We see that the description of the hypercenters of $U_n$ (see Hypotheses 1 and 2) is connected with the
theory of non-commutative invariants in a free associative algebra under the action of some subgroups of $U_n$.
We will study these invariants in subsequent papers.
\end{document} |
\begin{document}
\theoremstyle{definition}
\newenvironment{fexample}
{\begin{mdframed}\begin{example}}
{\end{example}\end{mdframed}}
\newcommand{\bigslant}[2]{{\raisebox{.2em}{$#1$}\left/\raisebox{-.2em}{$#2$}\right.}}
\title{S-limited shifts}
\pagenumbering{arabic}
\abstract{In this paper, we explore the construction and dynamical properties of $\mathcal{S}$-limited shifts. An $\mathcal{S}$-limited shift is a subshift defined on a finite alphabet $\mathcal{A} = \{1, \ldots,p\}$ by a collection $\mathcal{S} = \{S_1, \ldots, S_p\}$, where $S_i \subseteq \mathbb{N}$ describes the allowable lengths of blocks in which the corresponding letter may appear. We give conditions under which an $\mathcal{S}$-limited shift is a subshift of finite type or sofic. We give an exact formula for the entropy of such a shift and show that an $\mathcal{S}$-limited shift and its factors must be intrinsically ergodic. Finally, we give some conditions under which two such shifts can be conjugate, and additional information about conjugate $\mathcal{S}$-limited shifts.}
\section{Introduction}
$S$-gap shifts are a class of shift spaces defined on the alphabet $\mathcal{A} = \{0,1\}$ by a set $S \subseteq \mathbb{N}_0$ (where $\mathbb{N}_0 = \mathbb{N} \cup \{0\})$, which describes the allowable number of 0s that can separate two 1s in a string in the space. $S$-gap shifts and their dynamical properties have been and continue to be studied thoroughly and used in applications (\cite{Baker}, \cite{Jung}, \cite{Sgap}). In particular, in \cite{Jung}, Jung proved that an $S$-gap shift $X(S)$ is mixing if and only if $gcd\{n+1: n \in S\}=1$, along with other necessary and sufficient conditions for which $X(S)$ satisfies various specification properties. In \cite{Sgap}, Dastjerdi and Jangjoo established a collection of dynamical and topological properties of $S$-gap shifts, including a characterization of the $S$-gap shifts that are subshifts of finite type or sofic.
In \cite{SPrimegap}, Dastjerdi and Jangjooye introduced a broader class of shift spaces, called $(S, S')$-gap shifts, which again are defined on the alphabet $\{0,1\}$ by two sets $S, S' \subseteq \mathbb{N}_0$, which define the allowable lengths of the blocks of 0s and 1s, respectively. They established many properties of this class of shift spaces, including results about the entropy of an $(S,S')$-gap shift, and some specific conditions for conjugacy.
In this paper, we investigate a broader class of shift spaces, called $\mathcal{S}$-limited shifts. These subshifts are defined on an alphabet $\{1, \ldots, p \}$ by a finite set $\mathcal{S} = \{S_1, \ldots, S_p\}$ with $S_i \subseteq \mathbb{N}$ for $1 \leq i \leq p$, and each $S_i$ describes the allowable lengths of blocks of the corresponding letter in a string in the shift space. For the majority of the paper, we restrict the order in which the blocks may appear. In the more general setting in which the order is unrestricted, we refer to the shift as a generalized $\mathcal{S}$-limited shift. In both cases, we study important dynamical properties of these shift spaces. In particular, we prove the following result about the entropy of an $\mathcal{S}$-limited shift.
\begin{theorem*}
Let $\mathcal{S}=\{S_1,S_2,...,S_p\}$ such that $S_i\subseteq\mathbb{N}$ for $1 \leq i \leq p$. Then, the entropy of the $\mathcal{S}$-limited shift $X(\mathcal{S})$ is $\log \lambda$, where $\lambda$ is the unique positive solution to
\[\displaystyle\sum_{\omega \in G_\mathcal{S}}x^{-|\omega|}=1,\]
where $G_\mathcal{S}=\{1^{m_1}2^{m_2}...p^{m_p} : m_i\in S_i\text{ for }1\leq i\leq p\}.$
\end{theorem*}
We note that this theorem is an extension of a result in \cite{SPrimegap}, which states that the entropy of an $(S, S')$-gap shift is given by $\log(\lambda)$, where $\lambda$ is the unique non-negative solution to
\[ \sum_{s+s' \in \{\!\!\{S+S'\}\!\!\} } x^{-(s+s'+2)}=1,\]
where $\{\!\!\{S+S'\}\!\!\} = \{ s+s' : s\in S, s' \in S'\}$ and values of multiplicities are included (that is, if $s_1 + s_1' = s_2 + s_2'$ but $s_1 \neq s_2$ then both $s_1 +s_1'$ and $s_2 + s_2'$ are included in $\{\!\!\{S + S'\}\!\!\}$).
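As a quick illustration of the theorem, consider the golden mean shift (the subshift of $\{1,2\}^{\mathbb{Z}}$ in which no two $1$s are adjacent), realized as the $\mathcal{S}$-limited shift on $\{1,2\}$ with $S_1 = \{1\}$ and $S_2 = \mathbb{N}$. Then $G_\mathcal{S} = \{12^{m} : m \in \mathbb{N}\}$, so the equation becomes
\[\sum_{m \geq 1} x^{-(m+1)} = \frac{1}{x(x-1)} = 1,\]
that is, $x^2 - x - 1 = 0$, and we recover the familiar value $\lambda = \frac{1+\sqrt{5}}{2}$, giving entropy $\log \frac{1+\sqrt{5}}{2}$.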
Next, we obtain the following results regarding conjugacy between two $\mathcal{S}$-limited shifts.
\begin{theorem*}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$, and suppose $X(\mathcal{S})$ is conjugate to $X(\mathcal{T})$. Then, writing $G_\mathcal{S}=\{1^{s_1}\ldots p^{s_p}: s_i \in S_i\}$ and $G_\mathcal{T}=\{1^{t_1} \ldots q^{t_q}: t_i \in T_i\}$, for all $l\in\mathbb{N}$ we have $\abs{\set{x\in G_\mathcal{S} : |x|=l}}=\abs{\set{y\in G_\mathcal{T} : |y|=l}}$.
\end{theorem*}
\begin{theorem*}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_p}$ where for all $i$, $S_i,T_i\subseteq\mathbb{N}$. Let $s_i^m$ denote the $m$-th element of $S_i$ sorted in increasing order, and define $t_i^m$ similarly. If for all $i_1,i_2,...,i_p\in\mathbb{N}$, \[
\sum_{k=1}^p s_k^{i_k}=\sum_{k=1}^p t_k^{i_k},
\]
then $X(\mathcal{S})$ is conjugate to $X(\mathcal{T})$.
\end{theorem*}
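For instance, the second theorem applies with $p = 2$ to $S_1 = \mathbb{N}$, $S_2 = \{2m : m \in \mathbb{N}\}$ and $T_1 = \{m+1 : m \in \mathbb{N}\}$, $T_2 = \{2m-1 : m \in \mathbb{N}\}$: here $s_1^{m} = m$, $s_2^{m} = 2m$, $t_1^{m} = m+1$ and $t_2^{m} = 2m-1$, so $s_1^{i_1} + s_2^{i_2} = i_1 + 2i_2 = t_1^{i_1} + t_2^{i_2}$ for all $i_1, i_2 \in \mathbb{N}$, and hence $X(\mathcal{S})$ and $X(\mathcal{T})$ are conjugate.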
Both theorems are generalizations of existing results for $(S, S')$-gap shifts from \cite{SPrimegap}. The former is a generalization of a theorem that states if $X(S,S')$ and $X(T,T')$ are conjugate $(S,S')$-gap shifts, then $\{\!\!\{S + S'\}\!\!\} = \{\!\!\{T + T'\}\!\!\}$. The latter is a generalization of a theorem that assumes $\{\!\!\{S + S'\}\!\!\} = \{\!\!\{T + T'\}\!\!\}$ and gives specific cases for which $X(S,S')$ is conjugate to $X(T,T')$.
\section{Background and Definitions}
Let $\mathcal{A} = \{1, \ldots, p\}$ be an alphabet, and let $X$ be a shift space on $\mathcal{A}$. Define $B_n(X)$ to be the collection of all words $\omega_1 \ldots \omega_n$ of length $n$ (with each $\omega_i \in \mathcal{A}$) that occur in some point of $X$, and let $\mathcal{L}(X) = \bigcup_{n \geq 1} B_n(X)$ denote the \textbf{language} of $X$. If the shift space is understood in context, we will use $B_n$ and $\mathcal{L}$.
A shift space $X$ is called \textbf{irreducible} if for all $\omega, \tau \in \mathcal{L}(X)$, there exists some $\xi \in \mathcal{L}(X)$ such that $\omega \xi \tau \in \mathcal{L}(X)$. A word $\xi \in \mathcal{L}(X)$ is called \textbf{synchronizing} if for any $\omega \xi, \xi \tau \in \mathcal{L}(X)$, the word $\omega \xi \tau \in \mathcal{L}(X)$. An irreducible shift space $X$ with a synchronizing word is called a \textbf{synchronized system}. A shift space $X$ is called \textbf{mixing} if for all $\omega, \tau \in \mathcal{L}(X)$, there exists some $N$ such that for each $n \geq N$, there is a $\xi \in B_n(X)$ such that $\omega \xi \tau \in \mathcal{L}(X)$.
Let $X$ denote the full shift on $\mathcal{A}$. A \textbf{subshift of finite type (SFT)} is a subshift $X_F \subseteq X$ defined by a finite collection $F\subset \mathcal{L}(X)$ of forbidden words, meaning that $X_F=\{\omega \in X: \tau \text{ appears nowhere in } \omega \text{ for all } \tau \in F\}$. For any given SFT, the list of forbidden words defines both an adjacency matrix and a finite graph presentation that represent the shift space. The finite graph presentation consists of finitely many vertices, and infinite walks correspond to infinite strings in the shift space. Similarly, one can consider a broader class, called \textbf{sofic} subshifts, which includes all subshifts that have a finite graph presentation. As an adjacency matrix can be defined from a finite graph, each sofic subshift has an associated adjacency matrix. For more information on SFTs and sofic subshifts, see \cite{LM}.
One of the simplest constructions of a subshift that is neither sofic nor an SFT on the alphabet $\mathcal{A} = \{0,1\}$ is the prime gap shift, which includes all strings in which any two adjacent 1s are separated by a prime number of 0s. This shift space is an example of an $S$-gap shift.
An \textbf{$\bm{S}$-gap shift} is a subshift of the form \[X_S=\overline{\{...10^{n_{-1}}10^{n_0}10^{n_1}1... : n_i\in S\}},\]
where $S \subseteq \mathbb{N}_0$. While the prime gap shift provides an example of a non-sofic subshift, there exist $S$-gap shifts that are SFTs or sofic as well. However, we cannot guarantee the existence of a finite graph presentation or adjacency matrix for a general $S$-gap shift, and hence new techniques must be used for exploring properties of these shift spaces. A more general class of subshifts that includes all $S$-gap shifts is the $(S, S')$-gap shifts. An $\bm{(S,S')}$\textbf{-gap shift} is a subshift on the alphabet $\mathcal{A} = \{0,1\}$ of the form: \[X(S,S')=\overline{\{...1^{m_{-2}}0^{n_{-1}}1^{m_{-1}}0^{n_0}1^{m_0}0^{n_1}1^{m_1}... : n_i\in S,m_i\in S'\text{ for all } i\in\mathbb{Z}\}},\]
where $S, S' \subseteq \mathbb{N}$.
Again, it is the case that $(S,S')$-gap shifts can be SFT, sofic, or neither. To clarify some of these characterizations, we consider the following examples.
\begin{example}[Golden mean shift]
Let $\mathcal{A} = \{0,1\}$. The golden mean shift is an SFT with forbidden word list $F = \{11\}$. It has the following graph presentation and adjacency matrix:
\begin{center}
\begin{multicols}{2}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,minimum size=1.5em}}
\tikzset{edge/.style = {->,> = latex'}}
\node[vertex] (A) {}
edge[in=210,out=150,loop] node[auto,swap] {0} (A);
\node[vertex] (B) [right=of A] {}
edge[<-, bend right] node[auto,swap] {1} (A)
edge[->, bend left] node[auto] {0} (A);
\end{tikzpicture}
$\begin{bmatrix}
1 & 1 \\
1 & 0
\end{bmatrix}$
\end{multicols}
\end{center}
The golden mean shift is also an $S$-gap shift with $S = \mathbb{N}$.
\end{example}
\begin{example}[Even Shift]
Let $\mathcal{A} = \{0,1\}$. The even shift is a sofic subshift that contains all infinite strings in which any two adjacent 1s are separated by an even number of 0s. It has the following graph presentation and adjacency matrix:
\begin{center}
\begin{multicols}{2}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,minimum size=1.5em}}
\tikzset{edge/.style = {->,> = latex'}}
\node[vertex] (A) {}
edge[in=210,out=150,loop] node[auto,swap] {1} (A);
\node[vertex] (B) [right=of A] {}
edge[<-, bend right] node[auto,swap] {0} (A)
edge[->, bend left] node[auto] {0} (A);
\end{tikzpicture}
$\begin{bmatrix} 1&1 \\ 1&0 \end{bmatrix}$
\end{multicols}
\end{center}
It is an $S$-gap shift with $S = \{2n : n \in \mathbb{N}_0\}$.
\end{example}
For the remainder of this paper, unless otherwise specified, we fix our alphabet $\mathcal{A} = \{1,2, \ldots, p\}$.
Building upon the ideas of an $S$-gap and $(S,S')$-gap shift, we arrive at our definition of an $\mathcal{S}$-limited shift. As a preliminary step, we will introduce our collection of limiting sets $\mathcal{S} = \{S_1, \ldots, S_p\}$ with $S_i \subseteq \mathbb{N}$ for $1 \leq i \leq p$ and a collection of finite blocks
\[G_\mathcal{S}=\{1^{m_1}2^{m_2}...p^{m_p} : m_i\in S_i\text{ for }1\leq i\leq p\}\]
called the \textbf{core set}. An \textbf{$\bm{\mathcal{S}}$-limited shift} is a subshift on $\mathcal{A}$ defined by
\[X(\mathcal{S})=\overline{\{\cdots x_{-1}x_0x_1\cdots : x_i\in G_\mathcal{S}\text{ for all } i \in \mathbb{Z}\}}.\]
In the case that any of the $S_i$ are infinite, we note that taking the closure is a non-trivial step in defining the subshift, as we now consider bi-infinite strings ending or beginning with an infinite string of a single letter.
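For a small illustration, take $p=2$ with $S_1=\{1,2\}$ and $S_2=\{1\}$. Then the core set is $G_\mathcal{S}=\{12,112\}$, and the points of $X(\mathcal{S})$ are obtained from bi-infinite concatenations of these two blocks, such as $\cdots 12\,112\,12\,112\cdots$, together with the limit points supplied by the closure.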
While most of our results concern $\mathcal{S}$-limited shifts, one can extend a few results to a more general setting. So far, we have been restricting the order in which the blocks of letters can appear in $\mathcal{S}$-limited shifts. Let $\mathcal{A}$ and $\mathcal{S}$ be as above. A \textbf{generalized $\mathcal{S}$-limited shift}, denoted $X_\mathcal{S}$, is the closure of the set
\[ \{\ldots \omega_{-1}^{\alpha_{-1}}\omega_0^{\alpha_0}\omega_1^{\alpha_1} \ldots \mid \omega_i \in \mathcal{A}, \omega_i \neq \omega_{i+1}, \text{ and } \alpha_i \in S_{\omega_i} \text{ for all } i\}. \]
The first difference between $\mathcal{S}$-limited shifts and generalized $\mathcal{S}$-limited shifts is that the full shift on $p$ letters is a generalized $\mathcal{S}$-limited shift with $S_i = \mathbb{N}$ for $1 \leq i \leq p$. However, the full shift is not an $\mathcal{S}$-limited shift. The most significant difference between these two classes of shift spaces is the absence of an analogous core set $G_\mathcal{S}$ in the case of a generalized $\mathcal{S}$-limited shift.
As mentioned in the introduction, we will discuss the entropy of $\mathcal{S}$-limited shifts. Although the definition of entropy is more complex in a general setting (see \cite{Walters} for details), the definition reduces nicely in the symbolic dynamics setting. The \textbf{(topological) entropy} of a subshift $X$, denoted by $h(X)$, is given by
\[h(X) = \lim_{n \to \infty} \frac{\log \#B_n(X)}{n},\] where $\#B_n(X)$ denotes the cardinality of $B_n(X)$.
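For instance, for the full shift on $p$ letters we have $\#B_n(X)=p^n$, so
\[h(X)=\lim_{n\to\infty}\frac{\log(p^n)}{n}=\log p.\]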
Entropy can be used to define another class of subshifts. A subshift $X$ is \textbf{almost sofic} if given $\varepsilon >0$, there exists an SFT $Y\subset X$ such that $h(Y) > h(X) - \varepsilon$. We are using the definition from \cite{Peterson}, but analogous definitions are used in other sources, such as \cite{LM}.
Entropy is widely used as a conjugacy invariant in the study of dynamical systems. We say that two subshifts $X$ and $Y$ are conjugate if there exists a homeomorphism $\varphi:X \to Y$ such that $\varphi \circ \sigma = \sigma \circ \varphi$, where $\sigma$ denotes the shift map. In searching for conjugacy between two shift spaces, we often look for an invertible sliding block code. A \textbf{sliding block code} with memory $m$ and anticipation $n$ is a map $\varphi:X\to\mathcal{A}^{\mathbb{Z}}$ defined by $y=\varphi(x)$ with $y_i=\Phi(x_{[i-m,i+n]})$, where $\Phi:\mathcal{B}_{m+n+1}(X)\to\mathcal{A}$ and $x_{[i-m,i+n]} = x_{i-m}x_{i-m+1} \cdots x_{i+n} \in \mathcal{B}_{m+n+1}(X)$.
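For example, the shift map itself is a sliding block code with memory $m=0$ and anticipation $n=1$, induced by the block map $\Phi(x_ix_{i+1})=x_{i+1}$, while the identity map corresponds to $m=n=0$ and $\Phi(x_i)=x_i$.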
\section{Dynamical Properties}
For the remainder of this paper, we will fix our alphabet $\mathcal{A} = \{1, \ldots, p\}$, and let $\mathcal{S} = \{S_1, \ldots S_p\}$ with $S_i \subseteq \mathbb{N}$.
We will use $X(\mathcal{S})$ to denote the $\mathcal{S}$-limited shift and $X_\mathcal{S}$ to denote a generalized $\mathcal{S}$-limited shift. As we turn our attention to the properties of $\mathcal{S}$-limited shifts, we first note some unique features of these subshifts.
In the case that $S_i = \mathbb{N}$ for each $1 \leq i \leq p$, the corresponding $\mathcal{S}$-limited shift is an SFT with forbidden word list $F = \{ nm :m\neq n, m \neq n+1, 1\leq n < p\} \cup \{pi : 1<i < p\}$. Given that this largest possible $\mathcal{S}$-limited shift is an SFT, we consider the following proposition, which gives necessary and sufficient conditions for an $\mathcal{S}$-limited shift to be an SFT.
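For example, when $p=3$ and $S_1=S_2=S_3=\mathbb{N}$, this list is $F=\{13,21,32\}$: after a $1$ one may only see a $1$ or a $2$, after a $2$ only a $2$ or a $3$, and after a $3$ only a $3$ or a $1$.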
\begin{proposition} \label{SFT limited}
$X(\mathcal{S})$ is an SFT if and only if $S_i$ is finite or cofinite for every $S_i\in\mathcal{S}$.
\end{proposition}
\begin{proof} First, recall that we are restricting the order in which blocks may appear. Hence,
\[ F_0 = \{nm : m \neq n, m \neq n+1, 1 \leq n < p\} \cup \{ pi : 1<i < p\} \]
is forbidden for any $X(\mathcal{S})$. Next, fix $1\leq i\leq p$. If $S_i$ is finite, then
\[F_i=\{ai^nb:a,b\in\mathcal{A}\setminus\{i\},n\in\{1,2,...,\max S_i\}\setminus S_i\}\cup\{i^{1+\max S_i}\}\]
is a finite list of forbidden words associated with the set $S_i$. If, on the other hand, $S_i$ is cofinite, then
\[F_i=\{ai^nb:a,b\in\mathcal{A}\setminus\{i\},n\in\mathbb{N}\setminus S_i\}\]
is a finite list of forbidden words associated with the set $S_i$. Hence, $F = \bigcup_{i=0}^p F_i$ is a finite list of forbidden words, and so $X(\mathcal{S})$ is an SFT. \\
Now, assume that $X(\mathcal{S})$ is an SFT, so that it can be defined by forbidden words of length at most $k+1$ for some $k$. Suppose, for contradiction, that some $S_i$ is neither finite nor cofinite. Then there exist $n\in S_i$ and $n'\notin S_i$ with $n,n'>k$. Writing $a$ for the letter that precedes blocks of $i$'s and $b$ for the letter that follows them, every subword of $ai^{n'}b$ of length at most $k+1$ already appears in the allowed word $ai^{n}b$, so $ai^{n'}b$ would have to be allowed as well, a contradiction. Hence, each $S_i \in \mathcal{S}$ must be finite or cofinite.
\end{proof}
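To illustrate the construction in the proof, take $p=2$, $S_1=\{1,2\}$, and $S_2=\mathbb{N}\setminus\{1\}$. Here $F_0=\emptyset$, the finite set $S_1$ contributes $F_1=\{111\}$, and the cofinite set $S_2$ contributes $F_2=\{121\}$, so $X(\mathcal{S})$ is the SFT with forbidden word list $F=\{111,121\}$.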
\begin{proposition} A generalized $\mathcal{S}$-limited shift $X_\mathcal{S}$ is an SFT if and only if $S_i$ is finite or cofinite for every $S_i \in \mathcal{S}$.
\end{proposition}
The proof follows exactly as in Proposition \ref{SFT limited}, except we can discard the forbidden word list $F_0$ since we are no longer restricting the order in which the blocks appear.
Next, we aim to find conditions for which an $\mathcal{S}$-limited shift will be sofic. As a preliminary step, we introduce some terminology and an alternative characterization of sofic subshifts. Let $X$ be a subshift and $\omega \in \mathcal{L}(X)$. The \textbf{follower set} of $\omega$, denoted $\mathcal{F}_X(\omega)$, is the set $\mathcal{F}_X(\omega)=\{\tau\in \mathcal{L}(X) : \omega\tau\in \mathcal{L}(X)\}$. A subshift $X$ is sofic if and only if it has a finite number of distinct follower sets. For more information on sofic subshifts, see \cite{LM}.
We will also use the difference sets associated with each $S_i$. Writing $S_i=\{s_0<s_1<s_2<\cdots\}$, let
\[\Delta(S_i)=\{s_0,d_1,d_2, d_3,\ldots\}, \quad\text{where } d_j=s_{j}-s_{j-1} \text{ for } j \geq 1,\]
be the sequence consisting of the least element of $S_i$ followed by the differences between consecutive entries of $S_i$, for $1 \leq i \leq p$.
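For example, if $S_i=\{1,3,5,7,\ldots\}$ then $\Delta(S_i)=\{1,2,2,2,\ldots\}$ is eventually periodic, whereas if $S_i$ is the set of primes then $\Delta(S_i)=\{2,1,2,2,4,2,\ldots\}$ is not eventually periodic, since prime gaps are unbounded.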
\begin{proposition} \label{sofic limited}
$X(\mathcal{S})$ is sofic if and only if for every $S_i\in\mathcal{S}$, $\Delta(S_i)$ is eventually periodic.
\end{proposition}
\begin{proof}
First, notice that a follower set is determined by the last letter of the last block of letters in the finite word. More specifically, for $\omega \in \mathcal{L}(X(\mathcal{S}))$, $i \in \mathcal{A}$, and any $n \in \mathbb{N}$, we can see that $F(\omega a i^n) = F(ai^n)$, where $a = i-1$ if $2 \leq i \leq p$ and $a = p$ if $i=1$. Hence, we will only consider follower sets of words of the form $a i^n$ or $i$.\\
Now, assume that $\Delta(S_i)$ is eventually periodic. If $S_i = \{s_0, s_1, \ldots\}$, then \newline $\Delta(S_i) = \{d_0, d_1, \ldots, d_{k-1}, \overline { m_1, \ldots, m_l}\}$. Then, assuming $i \in \mathcal{A}$ and $a = i-1$ when $2 \leq i \leq p$ and $a=p$ when $i=1$, the follower sets can take three forms:
\begin{align*}
&F(i), &1 \leq i \leq p \\
&F(ai^n), & 1\leq n \leq s_{k-1}\\
&F(ai^{s_{k+r-2}+j_r}), &0 \leq j_r \leq m_r-1 \text{ and } 1 \leq r \leq l
\end{align*}
Therefore, given our initial statement regarding the form of follower sets, there can only be at most finitely many follower sets for $X(\mathcal{S})$. Thus, $X(\mathcal{S})$ is sofic.\\
Next, assume that $X(\mathcal{S})$ is sofic and so $X(\mathcal{S})$ has only finitely many follower sets. Then, there must exist some integers $m$ and $q$ with $m < q$ such that $F(ai^m) = F(ai^q)$. Choose the smallest such $q$ for which there exists such an $m$. Now, let $S_i \cap (m,q] = \{n_1, \ldots, n_t\}$ and $s_{max} = \max \{ s \in S_i : s \leq m\}$. Then,
\[\Delta(S_i) = \{s_0, s_1-s_0, \ldots, n_1-s_{max}, \overline{n_2-n_1, \ldots, n_t-n_{t-1}, (q-m)+n_1-n_t}\}, \]
and so $\Delta(S_i)$ is eventually periodic. Since this argument does not depend on the choice of $i$, we conclude that each $\Delta(S_i)$ must be eventually periodic.
\end{proof}
\begin{proposition} A generalized $\mathcal{S}$-limited shift $X_\mathcal{S}$ is sofic if and only if $\Delta(S_i)$ is eventually periodic for every $S_i \in \mathcal{S}$.
\end{proposition}
The proof follows from the proof of Proposition \ref{sofic limited} with one small modification. When considering follower sets of the form $F(ai^n)$, we allow $a\in\mathcal{A} \setminus \{i\}$.
Next, we move on to properties involving the language of an $\mathcal{S}$-limited shift. First, notice that all $\mathcal{S}$-limited shifts are irreducible and synchronized with synchronizing words of the form $p1$ or $a(a+1)$ where $1 \leq a < p$. The following example highlights some of the challenges we face when determining which $\mathcal{S}$-limited shifts are mixing.
\begin{example}
Let $\mathcal{A} = \{1,2\}$ and let $S_1 = S_2 = \{ 2n+1 : n \geq 0\}$. Notice that $21, 12 \in \mathcal{L}(X(\mathcal{S}))$.
If $\omega \in \mathcal{L}(X(\mathcal{S}))$ is a word such that $21\omega12 \in \mathcal{L}(X(\mathcal{S}))$, then $\ell(\omega) = 2l+1$ for some $l \geq 0$.
Hence, for any $N \geq 1$ there are even lengths $2n \geq N$ for which no word $\omega \in B_{2n}(X(\mathcal{S}))$ satisfies $21 \omega 12 \in \mathcal{L}(X(\mathcal{S}))$, so this shift is not mixing.
\end{example}
\begin{proposition} \label{mixing}
An $\mathcal{S}$-limited shift $X(\mathcal{S})$ is mixing if and only if $\gcd\{s_1 + \cdots + s_p : s_i \in S_i\} = 1$.
\end{proposition}
\begin{proof}
First, we will assume that $X(\mathcal{S})$ is mixing. Since $p1 \in \mathcal{L}(X(\mathcal{S}))$, there exists some $N$ such that for all $n \geq N$, there is a word $\omega \in B_n(X(\mathcal{S}))$ such that $p1\omega p1 \in \mathcal{L}(X(\mathcal{S}))$.
Any such $\omega$ must be of the form $ 1^{s_1 - 1} 2^{s_2} \ldots p^{s_p} \tau_1 \tau_2 \ldots \tau_m 1^{s_1 '}2^{s_2'}\ldots (p-1)^{s_{p-1}'} p^{s_p'-1}$, where $\tau_k \in G_\mathcal{S}$ for $1 \leq k \leq m$ and $s_i, s_i' \in S_i$ for $1 \leq i \leq p$.
In particular, there exist words of this form, say $\omega_1$ and $ \omega_2$, of lengths $n$ and $n+1$ respectively, for some $n\geq N$.
Thus, the words $1\omega_1p$ and $1\omega_2p$, of lengths $n+2$ and $n+3$ respectively, both consist of concatenated blocks from $G_\mathcal{S}$. Since $\gcd\{ s_1 + \cdots + s_p : s_i \in S_i\}$ divides the length of any concatenation of blocks from $G_\mathcal{S}$, it divides two consecutive integers, and hence $\gcd\{ s_1 + \cdots + s_p : s_i \in S_i\} = 1$. \\
Next, we assume $\gcd \{s_1+\cdots + s_p : s_i \in S_i\} =1$.
Then, since the set of integers expressible as a sum of block lengths from $G_\mathcal{S}$ is closed under addition and has greatest common divisor $1$, it contains all sufficiently large integers; that is, there exists some sufficiently large $N$ such that for all $n \geq N$, there exists a word of length $n$ of the form $\tau_1 \cdots \tau_{m}$, where $\tau_i \in G_\mathcal{S}$ for all $1 \leq i \leq m$. Hence, $p\tau_1 \cdots \tau_m 1 \in B_{n+2}(X(\mathcal{S}))$. Since $p1$ is synchronizing and $X(\mathcal{S})$ is irreducible, $X(\mathcal{S})$ must be mixing.
\end{proof}
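Returning to the example above with $S_1=S_2=\{2n+1 : n\geq 0\}$, every sum $s_1+s_2$ of two odd numbers is even, so $\gcd\{s_1+s_2 : s_i\in S_i\}=2$, and Proposition \ref{mixing} confirms that this shift is not mixing.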
A generalized $\mathcal{S}$-limited shift does not have a core set analogous to the set $G_\mathcal{S}$ in the $\mathcal{S}$-limited shift setting. To parallel our use of $G_\mathcal{S}$ in the previous proof, we can think of words from $X_\mathcal{S}$ in terms of non-repeating blocks, that is, we consider finite words of the form $a_1^{n_1} \tau_1 \tau_2 \ldots \tau_m a_2^{n_2}$, where $a_1, a_2 \in \mathcal{A}$, $n_1, n_2 \in \mathbb{N}$, and $\tau_i = a_{i1}^{s_{i1}}a_{i2}^{s_{i2}}\ldots a_{ik}^{s_{ik}}$, with $a_{ij} \neq a_{il}$ for $j \neq l$ and $s_{ij} \in S_{a_{ij}}$.
\begin{proposition}
A generalized $\mathcal{S}$-limited shift $X_\mathcal{S}$ is mixing if and only if \newline $\gcd\left \{\sum_{i=1}^k s_{a_i} : s_{a_i} \in S_{a_i}, a_i \neq a_j (\text{for } i\neq j), 2\leq k \leq p \right \} =1$.
\end{proposition}
\begin{proof}
First, assume $X_\mathcal{S}$ is mixing. Let $a_1$, $a_2 \in \mathcal{A}$ with $a_1 \neq a_2$. Then, $a_1a_2 \in \mathcal{L}(X_\mathcal{S})$ and hence there exists $N$ such that for all $n \geq N$, there is a word $\omega \in B_n(X_\mathcal{S})$ such that $a_1 a_2 \omega a_1 a_2 \in \mathcal{L}(X_\mathcal{S})$.
Notice that $\omega$ must be of the form $\omega = a_2^{s_{a_2}-1} a_1^{s_{a_1}-1}$ or $\omega = a_2^{s_{a_2}-1} \tau_1 \tau_2 \ldots \tau_m a_1^{s_{a_1}-1}$, where $s_{a_1}\in S_{a_1}$, $s_{a_2}\in S_{a_2}$, and each $\tau_i = b_i^{s_{b_i}}$ is a block of a single letter $b_i\in\mathcal{A}$ with $b_i\neq b_{i+1}$ and $s_{b_i} \in S_{b_i}$. Hence, there exist words $\omega_1 \in B_n(X_\mathcal{S})$ and $\omega_2 \in B_{n+1}(X_\mathcal{S})$ of this form.
Therefore, there exist words $a_2 \omega_1 a_1 \in B_{n+2}(X_\mathcal{S})$ and $a_2 \omega_2 a_1 \in B_{n+3}(X_\mathcal{S})$, and each word consists of concatenated blocks of $i$'s of length $s_i \in S_i$ where $i \in \mathcal{A}$. Since $a_1$ and $a_2$ were chosen arbitrarily, then \newline $\gcd\left \{\sum_{i=1}^k s_{a_i} : s_{a_i} \in S_{a_i}, a_i \neq a_j (\text{for } i\neq j), 2\leq k \leq p \right \} =1$.
Next, assume $\gcd\left \{\sum_{i=1}^k s_{a_i} : s_{a_i} \in S_{a_i}, a_i \neq a_j (\text{for } i\neq j), 2\leq k \leq p \right \} =1$. Then, there exists some sufficiently large $N$ such that for all $n \geq N$, there exists a word $a_1^{s_{a_1}}a_2^{s_{a_2}} \ldots a_m^{s_{a_m}} \in B_n(X_\mathcal{S})$, where $a_i \in \mathcal{A}$, $a_i \neq a_{i+1}$ for $1 \leq i < m$ and $s_{a_1} \in S_{a_1}$. Hence, $a_0 a_1^{s_{a_1}}a_2^{s_{a_2}} \ldots a_m^{s_{a_m}} a_{m+1} \in B_{n+2}(X_\mathcal{S})$, where $a_0 \neq a_1$ and $a_m \neq a_{m+1}$. Notice that $a_i a_j$ is synchronizing for all $a_i \neq a_j$ with $a_i, a_j \in \mathcal{A}$. Since $X_\mathcal{S}$ is irreducible, then $X_\mathcal{S}$ must be mixing.
\end{proof}
\section{Entropy}
In the remaining sections, we will only work with $\mathcal{S}$-limited shifts. Our current methods would require significant modifications to be extended to the generalized $\mathcal{S}$-limited shift setting due to the fact that they rely on the existence of a good core set, $G_\mathcal{S}$.
Entropy is a conjugacy invariant that is often sought in symbolic dynamics. While entropy of SFTs and sofic subshifts is well-understood (see \cite{LM}) due to the existence of adjacency matrices in these settings, entropy calculations can vary in more general shift space settings. Entropy calculations exist for both $S$-gap shifts and $(S, S')$-gap shifts (see \cite{LM} and \cite{SPrimegap}).
\begin{theorem} \label{entropy}
Let $\mathcal{S}=\{S_1,S_2,...,S_p\}$ with $S_i\subseteq\mathbb{N}$ for $1\leq i \leq p$. Then, the entropy of $X(\mathcal{S})$ is $\log \frac{1}{\lambda}$, where $\lambda$ is the unique positive solution to $\displaystyle\sum_{\omega \in G_\mathcal{S}}x^{|\omega|}=1$.
\end{theorem}
\begin{proof}
Consider the generating function $H(z)=\displaystyle\sum_{m=1}^\infty (\#B_m)z^m$, where $\#B_m$ denotes the cardinality of $B_m(X(\mathcal{S}))$. First, we claim that the radius of convergence of $H(z)$ is $e^{-h(X(\mathcal{S}))}$.
To see this, we will show that $\displaystyle\lim_{m\to\infty}\sqrt[m]{|\#B_mz^m|}=1$ when $z=e^{-h(X(\mathcal{S}))}$. First, notice that
\[
e^{-h(X(\mathcal{S}))} = e^{-\lim_{m\to\infty}\frac{\log\#B_m}{m}} =\lim_{m\to\infty}\left(\frac{1}{\#B_m}\right)^{1/m} .
\]
By letting $z = e^{-h(X(\mathcal{S}))}$, we obtain
\[\lim_{m\to\infty}\left(\#B_mz^m\right)^{1/m}=\lim_{m\to\infty} (\#B_m)^{1/m} \left(\frac{1}{\#B_m}\right)^{1/m}=1.\]
Hence, $e^{-h(X(\mathcal{S}))}$ must be the radius of convergence of $H(z)$.
Next, we define the following values which depend on $m$ and $k$: \[
A_m^k=\#\{\omega\in B_m(X(\mathcal{S})):\omega=\tau_1\tau_2\cdots\tau_k\text{, where }\tau_i\in G_\mathcal{S} \text{ for }1\leq i\leq k\}.
\]
Using this, we can define the collection of functions \[
F_k(z)=\sum_{m=1}^\infty A_m^kz^m.
\]
Notice that $F_1(z)=\displaystyle\sum_{\omega \in G_\mathcal{S}}z^{|\omega|}$. It remains to show that the unique positive solution to $F_1(z)=1$ is equal to the radius of convergence of $H(z)$, so that $e^{-h(X(\mathcal{S}))} = \lambda$ and hence $\log\left ( \frac{1}{\lambda}\right) = h(X(\mathcal{S}))$.
First, we will show that $F_k(z)F_l(z)=F_{k+l}(z)$ for all $k,l\in\mathbb{N}$. Let $m\in\mathbb{N}$ and consider the coefficient on the $z^m$ term of $F_{k+l}(z)$, which corresponds to all words $\omega$ formed by concatenating $k+l$ words from $G_\mathcal{S}$ such that $\ell(\omega) = m$. Set $\omega = xy$, where $x$ is composed of $k$ concatenated words from $G_\mathcal{S}$ and $y$ is composed of $l$ concatenated words from $G_\mathcal{S}$. Then, we know that $\ell(y)=m-\ell(x)$. Therefore, the coefficient on $z^m$ in $F_{k+l}(z)$ is $\displaystyle\sum_{i=0}^{m}A_{i}^kA_{m-i}^l$, which is the coefficient on $z^m$ in $F_k(z)F_l(z)$. Therefore, $F_k(z)F_l(z)=F_{k+l}(z)$ as claimed. From this, we notice $\displaystyle\sum_{k=1}^\infty F_k(z) = \sum_{k=1}^\infty (F_1(z))^k$, which converges for all
$z$ for which $F_1(z) < 1$, so its radius of convergence is equal to the positive value of $z$ for which $F_1(z)=1$.
Next, we look to establish a relation between $F_1(z)$ and $H(z)$. While it is clear that $\displaystyle\sum_{k\geq 1}A_m^k \leq\#B_m$, we find that words $\omega\in B_m(X(\mathcal{S}))$ can take three forms, where in all cases the words $x$ and $y$ below contain no factor belonging to $G_\mathcal{S}$:
\begin{enumerate}
\item $\omega = x$ where $|x|=m$
\item $\omega$ is a concatenation of $k$ words from $G_\mathcal{S}$ (these are the words counted by $A_m^k$)
\item $\omega = x\tau y$ where $|x|=i$, $|y|=j$, $|\tau|=m-i-j$ and $\tau$ is the concatenation of $k$ words in $G_\mathcal{S}$
\end{enumerate}
The first question raised is: how many allowable words of length $m$ have no factor belonging to $G_\mathcal{S}$? Notice that any such word must have fewer than $2p$ transitions from one letter to another of a different value; otherwise a factor belonging to $G_\mathcal{S}$ would appear, since our alphabet contains $p$ letters.
Now, we can create an upper bound for the number of such words by summing over the number of transitions between different letters. There are at most $\displaystyle p\sum_{t=0}^{2p}\binom{m}{t}$ such words, where $p$ corresponds to the number of options for a first letter and $t$ corresponds to the positions for the transitions.
This quantity is bounded above by $p(2p+1)m^{2p}$, which is polynomial in $m$. Now, we define the sequence $W_m$ to be the number of allowable words of length $m$ that have no factor belonging to $G_\mathcal{S}$, and based on this upper bound, we can use the root test to see that the radius of convergence of $\displaystyle\sum_{m=1}^\infty W_mz^m$ is at least 1.
Using this, and setting $W_0=1$ to account for an empty prefix or suffix, we can construct the following inequality: \[
\sum_{k\geq 1}A_m^k\leq \#B_m\leq W_m+\sum_{k\geq 1}\sum_{i\geq 0}\sum_{j\geq 0}W_iW_jA_{m-i-j}^k.
\]
Using that chain of inequalities, we obtain the following inequalities: \[
\sum_{k\geq 1}F_k(z)\leq H(z)\leq \sum_{m\geq 1}W_mz^m+\sum_{k\geq 1}F_k(z)\left(\sum_{i\geq 0}W_iz^i\right)\left(\sum_{j\geq 0}W_jz^j\right).
\]
Here, for $0<z<1$, we can see that $H(z)$ converges if and only if $\displaystyle\sum_{k\geq 1}F_k(z)$ converges: the left-hand inequality gives one direction, and for the other direction, all of the other sums on the rightmost side of the inequality have a radius of convergence of at least 1, so they do not affect convergence for $z<1$. Therefore, the radius of convergence of $H(z)$ is equal to the unique positive solution to $F_1(z)=1$, which, together with the first claim, proves the theorem.
\end{proof}
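As a quick check of Theorem \ref{entropy}, take $p=2$ with $S_1=\mathbb{N}$ and $S_2=\{1\}$, so that $G_\mathcal{S}=\{1^m2 : m\geq 1\}$ and, after relabeling $1\mapsto 0$ and $2\mapsto 1$, $X(\mathcal{S})$ is the golden mean shift from the earlier example. The equation in the theorem becomes
\[\sum_{m\geq 1}x^{m+1}=\frac{x^2}{1-x}=1,\]
whose positive solution is $\lambda=\frac{\sqrt{5}-1}{2}$, giving $h(X(\mathcal{S}))=\log\frac{1}{\lambda}=\log\frac{1+\sqrt{5}}{2}$, the well-known entropy of the golden mean shift.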
Now that we have established entropy calculations for $\mathcal{S}$-limited shifts, we can show that every shift of this type is almost sofic.
\begin{proposition}
All $\mathcal{S}$-limited shifts are almost sofic.
\end{proposition}
\begin{proof}
Let $X(\mathcal{S})$ be an $\mathcal{S}$-limited shift with $\mathcal{S} =\{S_1, \ldots, S_p\}$.
Consider the $\mathcal{S}$-limited shift $X(\mathcal{S}\vert_n)$ defined by $\mathcal{S}\vert_n = \{S_1\vert_n, S_2\vert_n, \ldots, S_p\vert_n\}$, where $S_i\vert_n$ consists of the first $n$ entries of $S_i$ for $1\leq i \leq p$. By Proposition \ref{SFT limited}, $X(\mathcal{S}\vert_n)$ is an SFT, and clearly $X(\mathcal{S}\vert_n)\subseteq X(\mathcal{S})$. By Theorem \ref{entropy}, the entropy of an $\mathcal{S}$-limited shift is $\log(1/\lambda)$, where $\lambda$ is the unique positive solution to $\displaystyle\sum_{\omega \in G_\mathcal{S}}x^{|\omega|}=1$. Since $G_{\mathcal{S}\vert_n}\subseteq G_\mathcal{S}$ and $\bigcup_n G_{\mathcal{S}\vert_n}=G_\mathcal{S}$, the corresponding solutions $\lambda_n$ decrease to $\lambda$ as $n\to\infty$, and so we can choose $n$ large enough such that $h(X(\mathcal{S}\vert_n))$ is arbitrarily close to $h(X(\mathcal{S}))$.
\end{proof}
As mentioned earlier, (topological) entropy has a simplified definition for symbolic dynamic spaces. The measure-theoretic entropy of $\sigma$ is denoted by $h_\mu(\sigma)$ where $\mu \in \mathcal{M}_\sigma(X)$ and $\mathcal{M}_\sigma(X)$ denotes the space of shift invariant probability measures on $X$. While we will not state the full definition of measure-theoretic entropy, we will establish the connection between $h(X)$ and $h_\mu(\sigma)$. The variational principle gives the following relationship:
\[h(X) = \sup\{h_\mu(\sigma) : \mu \in \mathcal{M}_\sigma(X)\}.\]
A shift space $X$ is \textbf{intrinsically ergodic} if there exists a unique measure $\mu \in \mathcal{M}_\sigma(X)$ that attains the supremum in the variational principle. For more information on entropy and intrinsically ergodic systems, see \cite{Walters}.
\begin{theorem}
Every subshift factor of an $\mathcal{S}$-limited shift is intrinsically ergodic.
\end{theorem}
\begin{proof}
Consider the following decomposition of $\mathcal{L}(X(\mathcal{S}))$:
\begin{align*}
G_\mathcal{S}^* = &\{\tau_1\tau_2 \ldots \tau_n : \tau_i \in G_\mathcal{S} \text{ for } 1 \leq i \leq n\} \\
\mathcal{C}^P = &\{ l^{n} (l+1)^{s_{l+1}}\cdots p^{s_{p}} : 1< l \leq p, s_i \in S_i \text{ for } l < i \leq p, n \in \mathbb{N}\}\\
\cup &\{ 1^{n} 2^{s_2}\cdots p^{s_{p}} : n \in \mathbb{N}\setminus S_1, s_i \in S_i \text{ for } 2 \leq i \leq p\} \\
\mathcal{C}^S = &\{ 1^{s_1} \cdots (k-1)^{s_{k-1}} k^n : 1 \leq k \leq p, s_i \in S_i \text{ for } 1\leq i <k, n \in \mathbb{N}\} \\
\cup &\{ 1^{s_1} 2^{s_2}\cdots (p-1)^{s_{p-1}}p^n : n \in \mathbb{N}\setminus S_p, s_i \in S_i \text{ for } 1 \leq i \leq p-1\}
\end{align*}
Notice that $\mathcal{L} = \mathcal{C}^P G_\mathcal{S}^* \mathcal{C}^S$, i.e. for every $\omega \in \mathcal{L}$, $\omega = \xi_P \xi_G \xi_S$, where $\xi_P \in \mathcal{C}^P$, $\xi_G \in G_\mathcal{S}^*$, and $\xi_S \in \mathcal{C}^S$. In order to use the results from \cite{CT}, we show that this decomposition satisfies three properties:
\begin{enumerate}[(i)]
\item For any $\tau_1, \ldots, \tau_m \in G_\mathcal{S}^*$, $\tau_1\tau_2\ldots\tau_m \in G_\mathcal{S}^*$, so $G_\mathcal{S}^*$ has specification.
\item We will show that $\tilde{h}(\mathcal{C}^P \cup \mathcal{C}^S) = 0$, where $\tilde{h}(\cdot)$ denotes the growth rate $\tilde{h}(\mathcal{C}) = \limsup_{n\to\infty}\frac{1}{n}\log\#\{\omega\in\mathcal{C} : |\omega|=n\}$. We claim that the number of words of length $n$ in each of $\mathcal{C}^S$ and $\mathcal{C}^P$ is bounded above by a polynomial in $n$ of degree $2^{p-2}$, where $p = \abs{\mathcal{A}}$. We will calculate the upper bound by considering the case where $S_i = \mathbb{N}$ for all $1 \leq i \leq p$.
Notice that when $p=2$, there is exactly 1 block of $n$ 1's and $(n-1)$ blocks of the form $1^m 2^{n-m}$. So, $\abs{B_n(\mathcal{C}^P)} = n$, and similarly $\abs{B_n(\mathcal{C}^S)} = n$.
If we assume that $\abs{B_n(\mathcal{C}^P)} \approx n^{2^{p-2}}$ for $p>2$, notice that $\abs{B_n(\mathcal{C}^P)} \approx \left(n^{2^{p-2}}\right)^2 = n^{2^{p-1}}$ for $(p+1)$ symbols.
Hence, our claim holds. Therefore, we obtain
\[ \tilde{h}(\mathcal{C}^P \cup \mathcal{C}^S) = \lim_{n \to \infty} \frac{1}{n}\log(n^{2^{p-2}}) = 0.\]
\item Given our decomposition, for $M\in\mathbb{N}$, define the collections of words $G_\mathcal{S}(M)$ to be \[G_\mathcal{S}(M) = \{ \xi_P \xi_G \xi_S : \xi_P \in \mathcal{C}^P, \xi_G \in G_\mathcal{S}^*, \xi_S \in \mathcal{C}^S, \abs{\xi_P} \leq M, \abs{\xi_S} \leq M\} .\] Given some $\xi_P\xi_G\xi_S \in G_\mathcal{S}(M)$, notice that $\xi_P$ and $\xi_S$ determine the lengths of words $u$ and $v$ such that $u \xi_P\xi_G \xi_S v \in G_\mathcal{S}^*$. Since there are only finitely many such $\xi_P$ and $\xi_S$, there exists some $t$ such that there exist words $u$ and $v$ with $\abs{u}\leq t$ and $\abs{v} \leq t$ for which $u\xi_P \xi_G \xi_S v \in G_\mathcal{S}^*$.
\end{enumerate}
Since our decomposition satisfies these three properties, by \cite{CT} an $\mathcal{S}$-limited shift and all of its subshift factors must be intrinsically ergodic.
\end{proof}
\section{Conjugacy}
Next, we address the question of conjugacy of $\mathcal{S}$-limited shifts through a generalization of the results in \cite{SPrimegap}. The benefit of restricting the order in which the blocks appear is that we can split an element of an $\mathcal{S}$-limited shift into its building blocks from $G_\mathcal{S}$: a new element of $G_\mathcal{S}$ begins exactly when a run of $1$s begins. Therefore, we can consider elements of $\mathcal{S}$-limited shifts in terms of the elements of $G_\mathcal{S}$ they consist of and the order in which those elements appear.
In the following proof, we consider two $\mathcal{S}$-limited shifts, say $X(\mathcal{S})$ and $X(\mathcal{T})$. We will denote the core sets associated with the shifts $X(\mathcal{S})$ and $X(\mathcal{T})$ by $G_\mathcal{S}$ and $G_\mathcal{T}$, respectively.
\begin{theorem}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$ and suppose $X(\mathcal{S})$ is conjugate to $X(\mathcal{T})$. Then, for all $l\in\mathbb{N}$, $\abs{\set{x\in G_\mathcal{S} : |x|=l}}=\abs{\set{y\in G_\mathcal{T} : |y|=l}}$.
\end{theorem}
\begin{proof}
Let $X(\mathcal{S})$ be conjugate to $X(\mathcal{T})$. Suppose, to the contrary, that there exists some $l\in\mathbb{N}$ such that $\abs{\set{x\in G_\mathcal{S} : |x|=l}}\neq \abs{\set{y\in G_\mathcal{T} : |y|=l}}$.
In particular, let $l$ be the smallest such length and suppose without loss of generality that $\abs{\set{x\in G_\mathcal{S} : |x|=l}} > \abs{\set{y\in G_\mathcal{T} : |y|=l}}$.
Now, consider the number of points of period $l$ in $X(\mathcal{S})$ and $X(\mathcal{T})$. There must be the same number of periodic points that are built using elements of $G_\mathcal{S}$ and $G_\mathcal{T}$ of length less than $l$ because we can construct a length-preserving bijection between the elements of $G_\mathcal{S}$ and $G_\mathcal{T}$ that are shorter than $l$ letters.
However, there will be $\abs{\set{x\in G_\mathcal{S} : |x|=l}}$ elements in $X(\mathcal{S})$ of the form $\bar{x}$ where $x\in G_\mathcal{S}$ and $|x|=l$ and $\abs{\set{y\in G_\mathcal{T} : |y|=l}}$ elements in $X(\mathcal{T})$ of the form $\bar{y}$ where $y\in G_\mathcal{T}$ and $|y|=l$, and those consist of the remaining words of period $l$ in the two subshifts respectively.
Therefore, there are more words of period $l$ in $X(\mathcal{S})$ than there are in $X(\mathcal{T})$. However, conjugacy preserves the number of periodic points in a subshift, so we have a contradiction. Thus, for all $l\in\mathbb{N}$, we must have $\abs{\set{x\in G_\mathcal{S} : |x|=l}}=\abs{\set{y\in G_\mathcal{T} : |y|=l}}$.
\end{proof}
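For instance, the $\mathcal{S}$-limited shifts with core sets $G_\mathcal{S}=\{12,112\}$ and $G_\mathcal{T}=\{12,122\}$, coming from $S_1=\{1,2\}$, $S_2=\{1\}$ and $T_1=\{1\}$, $T_2=\{1,2\}$, each have exactly one core word of length $2$ and one of length $3$, so they satisfy the conclusion of the theorem; as discussed next, however, equality of these counts alone does not establish conjugacy.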
As shown in \cite{SPrimegap} for $(S,S')$-gap shifts (that is, the case where $p=q=2$), this condition, while necessary, is far from sufficient. Next, we provide a sufficient condition for conjugacy, which is a generalization of their result for $(S,S')$-gap shifts.
\begin{theorem} \label{limited conjugacy}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_p}$ where $S_i,T_i\subseteq\mathbb{N}$ for $1\leq i \leq p$. Let $s_i^m$ denote the $m$-th element of $S_i$ sorted in increasing order, and define $t_i^m$ similarly. If for all $i_1,i_2,...,i_p\in\mathbb{N}$, \[
\sum_{k=1}^p s_k^{i_k}=\sum_{k=1}^p t_k^{i_k},
\]
then $X(\mathcal{S})$ is conjugate to $X(\mathcal{T})$.
\end{theorem}
In order to prove this theorem, however, we first need the following definition, which allows us, as in the earlier arguments, to think in terms of elements of $G_\mathcal{S}$.
\begin{definition}
Suppose $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$. Then, we say the sliding block code $\varphi:X(\mathcal{S})\to X(\mathcal{T})$ induced by the block map $\Phi$ of memory $m$ and anticipation $n$ \textbf{induces} $\psi:G_\mathcal{S}\to G_\mathcal{T}$ if for all words $x$ and $y$ of length $m$ and $n$ respectively and for all $z\in G_\mathcal{S}$ with $xzy\in\mathcal{L}(X(\mathcal{S}))$, applying $\Phi$ across $xzy$ (producing one output letter for each position of $z$) yields $\psi(z)$.
\end{definition}
\begin{example}[Induced Conjugacy]\label{induced conjugacy example}
Let $\mathcal{S}=\set{S_1,S_2,S_3}$ where $S_1=\mathbb{N}$, $S_2=\set{2n : n\in \mathbb{N}}$, and $S_3=\set{3,5}$. Also, let $\mathcal{T}=\set{T_1,T_2,T_3}$ where $T_1=\mathbb{N}$, $T_2=\set{2n+1 : n\in\mathbb{N}}$, and $T_3=\set{2,4}$. If we say $s_i^m$ and $t_i^m$ are the $m$th smallest elements of $S_i$ and $T_i$ respectively, we notice that $s_1^m=t_1^m$, $s_2^m+1=t_2^m$, and $s_3^m-1=t_3^m$. Then, we can see that $\psi:G_\mathcal{S}\to G_\mathcal{T}$ where $\psi(1^n2^m3^k)=1^n2^{m+1}3^{k-1}$ is a bijection. If we take the block map $\Phi:\set{1,2,3}^2\to\set{1,2,3}$ with memory 1 and anticipation 0 where \[
\Phi(xy)=\begin{cases}
x \text{ if }xy=23 \\
y \text{ otherwise}
\end{cases}
\]
we can see that the sliding block code induced by $\Phi$ will induce $\psi$.
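As a check, consider the factor $\cdots 3\,122333\,1\cdots$ of a point of $X(\mathcal{S})$, where $122333=1^12^23^3\in G_\mathcal{S}$. Reading off $\Phi$ on consecutive pairs of letters, the images of the six positions of this block are $1,2,2,2,3,3$, that is, $1^12^33^2=\psi(1^12^23^3)$, as claimed.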
\end{example}
Not all elements of $X(\mathcal{S})$, however, will be the concatenation of elements of $G_\mathcal{S}$. Specifically, this fails exactly when an element of $X(\mathcal{S})$ begins or ends with an infinite block of a single letter. Therefore, before we can speak about the action of sliding block codes that induce bijections from some $G_\mathcal{S}$ to some other $G_\mathcal{T}$, we must consider what happens when an element on which such a code acts begins or ends with infinitely many of the same letter.
\begin{lemma} \label{induced infinite block}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$ and $\varphi:X(\mathcal{S})\to \set{1,2,...,q}^\mathbb{Z}$ be a sliding block code induced by the block map $\Phi$ of memory $m$ and anticipation $n$ that induces a bijection $\psi:G_\mathcal{S}\to G_\mathcal{T}$. Also, let $I=\set{1\leq i\leq p : \abs{S_i}=\abs{\mathbb{N}}}$ and $J=\set{1\leq j\leq q : \abs{T_j}=\abs{\mathbb{N}}}$. Then the map $\pi:I\to \set{1,2,...,q}$ defined by $\pi(i)=\Phi(i^r)$, where $r=m+n+1$, must have image $J$.
\end{lemma}
\begin{proof}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$ and $\varphi:X(\mathcal{S})\to \set{1,2,...,q}^\mathbb{Z}$ be a sliding block code induced by the block map $\Phi$ of memory $m$ and anticipation $n$ that induces a bijection $\psi:G_\mathcal{S}\to G_\mathcal{T}$, and set $r=m+n+1$. Also, let $I=\set{1\leq i\leq p : \abs{S_i}=\abs{\mathbb{N}}}$ and $J=\set{1\leq j\leq q : \abs{T_j}=\abs{\mathbb{N}}}$. From this, we can define a function $\pi:\set{1,2,...,p}\to\set{1,2,...,q}$ by $\pi(i)=\Phi(i^r)$. First, we wish to show that for any $j\in J$ there must exist some $i\in I$ so that $\pi(i)=j$, after which we will show $\pi(I)\subseteq J$.
Assume, to the contrary, that there exists $j \in J$ such that there does not exist any $i\in I$ so that $\pi(i)=j$. Now, let $x$ and $y$ be strings over $\set{1,2,...,p}$ of length $m$ and $n$ respectively, and consider the longest block of $j$s that can occur in the image of $\varphi(xzy)$ where $z\in G_\mathcal{S}$ with any block code $\Phi_j$ with memory $m$ and anticipation $n$ where $\Phi_j(i^r)\neq j$ for all $i\in I$. This will occur when $\varphi_j(xzy)=j^{|z|}$, so let us consider the largest $z$ where this can be the case. For each letter in $\set{1,2,...,p}\setminus I$, there is a maximum number of consecutive instances of that letter that can occur in any $z\in G_\mathcal{S}$. Clearly, for the $z$ we are constructing, we wish to have as many of each of these letters as possible. Let us say that $s$ is the largest of all of those values, then the length of $z$ contributed by these letters is at most $p*s$. For the letters in $I$, of which there are at most $p$, we can have at most $r-1$ consecutive instances of that letter. Therefore, the contribution to the length of $z$ from letters in $I$ is less than $p*r$. Thus, we cannot have more than $p(r+s)$ consecutive $j$s in the image of any word in $G_\mathcal{S}$ under $\varphi_j$. Then, since $\Phi_j$ will output a $j$ whenever $\Phi$ does, there cannot be more than $p(r+s)$ consecutive $j$s in the image of any word in $G_\mathcal{S}$ under $\varphi$. However, $T_j$ is infinite and $p(r+s)$ is finite, so there exists a word in $G_\mathcal{T}$ with more than $p(r+s)$ consecutive $j$s. Thus, $\varphi$ cannot induce a bijection from $G_\mathcal{S}$ to $G_\mathcal{T}$, giving us a contradiction. Thus, $\pi(I)\supseteq J$.
Now, suppose that $i\in I$. We wish to show $\pi(i)\in J$. To see this, let $k$ be the maximum value in any set in $\mathcal{T}$ that is not infinite, and fix some $i\in I$. Because $i\in I$, we know that $S_i$ is infinite, so there must be some value $l\in S_i$ where $l > k + r$. Therefore, there must exist $w\in G_\mathcal{S}$ that has $l$ consecutive $i$s. Then, $\varphi(w)$ has a block of $l-r > k$ consecutive $\pi(i)$s. Therefore, $\pi(i)\in J$ because otherwise $\varphi(w)$ could not be in $G_\mathcal{T}$, which would contradict the fact that $\varphi$ induces a bijection from $G_\mathcal{S}$ to $G_\mathcal{T}$. Thus $\pi(I)\subseteq J$. Combining that with the result above that $\pi(I)\supseteq J$, we have $\pi(I)=J$.
\end{proof}
Using Lemma \ref{induced infinite block}, we can establish a link between sliding block codes that induce bijections and conjugacies.
\begin{lemma} \label{induced bijection}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$ and $\varphi:X(\mathcal{S})\to \set{1,2,...,q}^\mathbb{Z}$ be a sliding block code that induces a bijection $\psi:G_\mathcal{S}\to G_\mathcal{T}$. Then, $\varphi$ is a conjugacy from $X(\mathcal{S})$ to $X(\mathcal{T})$.
\end{lemma}
\begin{proof}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$ and suppose $\varphi:X(\mathcal{S})\to \set{1,2,...,q}^\mathbb{Z}$ is a sliding block code that induces a bijection $\psi:G_\mathcal{S}\to G_\mathcal{T}$.
First, we will show that the image of $\varphi$ is $X(\mathcal{T})$.
Let $x\in X(\mathcal{S})$, and suppose that it neither begins nor ends with an infinite block of the same letter. Then, using the definition of an $\mathcal{S}$-limited shift, we can rewrite $x=\cdots x_{-1}x_0x_1\cdots$ where each $x_i\in G_\mathcal{S}$. Because $\varphi$ induces $\psi$, we can see $\varphi(x)=\cdots\psi(x_{-1})\psi(x_0)\psi(x_1)\cdots$. Then, since each $\psi(x_i)\in G_{\mathcal{T}}$, we have $\varphi(x)\in X(\mathcal{T})$.
Now, we must handle the case where $x$ begins and/or ends with an infinite block of the same letter. Suppose $x\in X(\mathcal{S})$ ends with an infinite block of the letter $j$. Then, through possibly reindexing the $x_i$s, we can say $x=\cdots x_{-2}x_{-1}x_0 y j^\mathbb{N}$ where each $x_i\in G_\mathcal{S}$, $y=1^{n_1}2^{n_2}\cdots (j-1)^{n_{j-1}}$ where $n_i\in S_i$ for all $i\in\set{1,2,...,j-1}$. Then, because $x$ ends with infinitely many $j$s, $S_j$ must be infinite, so there must be some $z\in G_\mathcal{S}$ so that $z$ begins $yj^r$ where $r$ is 1 plus the sum of the memory and anticipation of $\Phi$. Then, by Lemma \ref{induced infinite block}, if we say $\Phi(j^r)=k$, we know that we can have a word in $X(\mathcal{T})$ end in a block of infinitely many $k$s. Therefore, we have that $\varphi(x)=\cdots\psi(x_{-2})\psi(x_{-1})\psi(x_0)wk^\mathbb{N}$ where $w$ is the first $|y|$ letters of $\psi(z)$, from which we can see $\varphi(x)\in X(\mathcal{T})$. All of the remaining cases can be shown using the same approach of breaking as much of $x$ as possible up into elements of $G_\mathcal{S}$, and then viewing the remaining part as a substring of an element of $G_\mathcal{S}$ that begins and/or ends with a sufficiently long string of the letter that begins and/or ends $x$. Then, we can use the fact that our sliding block code induces a bijection to see what it will map that word to, and we find that it is in $X(\mathcal{T})$.
Therefore, $\varphi(x) \in X(\mathcal{T})$ for all $x \in X(\mathcal{S})$.
Now, we will show that $\varphi:X(\mathcal{S})\to X(\mathcal{T})$ is a bijection. For that, first we will show that $\varphi$ is onto. Suppose $y\in X(\mathcal{T})$. Again, we first consider the case where $y=\cdots y_{-1}y_0y_1\cdots$ where each $y_i\in G_{\mathcal{T}}$. Then, since $\psi$ is a bijection, we can construct $x\in X(\mathcal{S})$ where $x=\cdots x_{-1}x_0x_1\cdots$ where for every $i\in\mathbb{Z}$, $x_i=\psi^{-1}(y_i)$. Then, using the fact that $\varphi$ induces $\psi$, we can see $\varphi(x)=y$.
Suppose now that $y\in X(\mathcal{T})$ begins and/or ends with an infinite block of the same letter. As above, without loss of generality, say $y=\cdots y_{-2}y_{-1}y_0zj^\infty$ where each $y_i\in G_\mathcal{T}$ and $z=1^{n_1}2^{n_2}\cdots (j-1)^{n_{j-1}}$ where $n_i\in T_i$ for all $i\in\set{1,2,...,j-1}$. Now, for each $l\in \mathbb{N}$, let $w_l\in G_\mathcal{T}$ be a word that begins with $zj^{l+r}$; note that this exists for every $l$ because this can be followed by more $j$s and $T_j$ is infinite. We wish to find a sufficiently large $l$ so that the first $|zj^l|$ letters of $\psi^{-1}(w_l)$ end with $i^r$ where $\Phi(i^r)=j$ with $S_i$ infinite. If we let $l_0=q(r+m)$ where $m$ is the largest element of any finite set in $\mathcal{T}$, then by similar reasoning to that employed in the proof of Lemma \ref{induced infinite block}, this must be the case for $w_{l_0}$. Thus, if $x=\cdots x_{-2}x_{-1}x_0 w i^\infty$ where $x_i=\psi^{-1}(y_i)$ for all $i\leq 0$, and $w$ is the first $|zj^{l_0}|$ letters of $\psi^{-1}(w_{l_0})$, then $\varphi(x)=y$. Therefore, $\varphi$ is onto.
Next, suppose $x,y\in X(\mathcal{S})$ with $x\neq y$. There are three ways in which this could happen: either $x$ and $y$ consist of a different sequence of words from $G_\mathcal{S}$, $x=\sigma^n(y)$ for some $n\neq 0$, or $x$ and $y$ consist of the concatenation of the same complete elements of $G_\mathcal{S}$ but are not equal (and therefore must begin and/or end with infinite blocks of some letter). In the second case, $\varphi(x)=\varphi(\sigma^n(y))=\sigma^n(\varphi(y))$. Thus, for us to have $\varphi(x)=\varphi(y)$, $\varphi(x)$ must have a period that divides $n$. However, this means that $x$ and $\varphi(x)$ would have different periods because $x\neq y$ so $x$ cannot have a period that divides $n$, which gives us a contradiction. Therefore, if $x=\sigma^n(y)$, then $\varphi(x)\neq \varphi(y)$. If $x$ and $y$ consist of a different sequence of words from $G_\mathcal{S}$, then, because $\varphi$ induces the bijection $\psi$, we must have that $\varphi(x)$ and $\varphi(y)$ consist of a different sequence of words from $G_{\mathcal{T}}$. Thus, we must have $\varphi(x)\neq \varphi(y)$. Finally, suppose that $x$ and $y$ consist of the concatenation in the same order of the same complete elements of $G_\mathcal{S}$ but still $x\neq y$. Suppose, without loss of generality, that their difference comes after the end of the rightmost complete element of $G_\mathcal{S}$, and say that the parts of the words after the last complete element of $G_\mathcal{S}$ until there is an infinite block of some letter are $x'$ and $y'$ respectively, and that the letters repeated infinitely many times are $i$ and $j$ respectively. Notice that we must have $x'\neq y'$. Therefore, there must exist words $x''$ and $y''$ in $G_\mathcal{S}$ so that $x''$ begins with $x'i^r$ and $y''$ begins with $y'j^r$, and clearly $x''\neq y''$. We can choose $x''$ and $y''$ to have the same number of every letter following the smaller of $i$ and $j$. Then, since $\psi$ is induced by a sliding block code, we can see that $\psi(x'')$ and $\psi(y'')$ must disagree on the section that is the image of the part up to and including the block of the smaller of the $i$s and $j$s. We can also see that the section of $\psi(x'')$ that is the image of the part of $x''$ up to the block of $i$s and the section of $\psi(y'')$ that is the image of the part of $y''$ up to the block of $j$s must be in $\varphi(x)$ and $\varphi(y)$ respectively, so we must have $\varphi(x)\neq \varphi(y)$. Therefore, we have $\varphi(x)=\varphi(y)$ if and only if $x=y$, so $\varphi$ is one-to-one. Thus, $\varphi$ is an invertible factor map, so it is a conjugacy.
\end{proof}
Before proving Theorem \ref{limited conjugacy}, we will introduce one last definition to keep our notation as simple as possible.
\begin{definition}
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_q}$ and suppose $\varphi:X(\mathcal{S})\to X(\mathcal{T})$ is a sliding block code. Then, we will say that a \textbf{transition point} is any index $i$ so that, if $\varphi(x)=y=\cdots y_{-1}y_0y_1\cdots$ (where here $y_j \in \mathcal{A}$ for all $j$), $y_i\neq y_{i+1}$. If $y_i+1=y_{i+1}$, we will call this an internal transition point, and if $y_{i+1}=1$, we will call this an external transition point.
\end{definition}
The motivation for the distinction between internal and external transition points comes from thinking about elements of $X(\mathcal{S})$ as the concatenation of words from $G_\mathcal{S}$, since the internal transition points will be those within a word in $G_\mathcal{S}$ and the external transition points will be the last index within one word in $G_\mathcal{S}$, and so will be the transition between that word and the next one. Using this, we can finally turn to our proof.
\begin{proof}[Proof of Theorem \ref{limited conjugacy}]
Let $\mathcal{S}=\set{S_1,...,S_p}$ and $\mathcal{T}=\set{T_1,...,T_p}$ with $S_i,T_i\subseteq\mathbb{N}$ for $1\leq i \leq p$. Let $s_i^m$ denote the $m$th element of $S_i$ sorted in increasing order, and define $t_i^m$ similarly. Suppose that for all $i_1,i_2,...,i_p\in\mathbb{N}$, \[
\sum_{k=1}^p s_k^{i_k}=\sum_{k=1}^p t_k^{i_k}.
\]
\noindent As an outline of the rest of the proof, we will first construct a bijection $\psi:G_{\mathcal{S}}\to G_\mathcal{T}$. Then, we will construct a block map $\Phi$ that induces a sliding block code $\varphi$ so that $\varphi$ induces $\psi$. Finally, from there, we will use Lemma \ref{induced bijection} to show that $\varphi$ is a conjugacy.
Let $x\in G_\mathcal{S}$. Then, by the definition of $G_\mathcal{S}$, we can write $x=1^{n_1}2^{n_2}\cdots p^{n_p}$ where each $n_i\in S_i$. Using the indexing of each $S_i$, we can rewrite that as $x=1^{s_1^{i_1}}2^{s_2^{i_2}}\cdots p^{s_p^{i_p}}$. Using that parameterization, we can construct $\psi:G_{\mathcal{S}}\to G_\mathcal{T}$ by letting $\psi(1^{s_1^{i_1}}2^{s_2^{i_2}}\cdots p^{s_p^{i_p}})=1^{t_1^{i_1}}2^{t_2^{i_2}}\cdots p^{t_p^{i_p}}$, which will be our bijection.
Notice that, by the assumption in the statement of the theorem that sums of elements from the $S_i$s and $T_i$s with the same indices are equal, for all $x\in G_\mathcal{S}$ we have $\abs{x}=\abs{\psi(x)}$, which is one of the properties of $X(\mathcal{S})$ and $X(\mathcal{T})$ that will allow us to construct a sliding block code that induces $\psi$.
The other main property we will use also comes from that assumption: for all $j$, there exists some $d_j\in\mathbb{Z}$ so that, for all $i$, $s_j^i+d_j=t_j^i$. Specifically, $d_j=t_j^1-s_j^1$; this follows by subtracting the instance of the assumed identity in which all indices equal $1$ from the instance in which all indices equal $1$ except the index for the $j$th set, which equals $i$. Using this, we can rewrite $\psi$ so that it is expressed solely in terms of the sets $S_j$ and the constants $d_j$. In order to do that, we notice that $1^{t_1^{i_1}}2^{t_2^{i_2}}\cdots p^{t_p^{i_p}}$ can be rewritten as $1^{s_1^{i_1}+d_1}2^{s_2^{i_2}+d_2}\cdots p^{s_p^{i_p}+d_p}$. Therefore, we can say $\psi(1^{s_1^{i_1}}2^{s_2^{i_2}}\cdots p^{s_p^{i_p}})=1^{s_1^{i_1}+d_1}2^{s_2^{i_2}+d_2}\cdots p^{s_p^{i_p}+d_p}$.
Now, we can use this to figure out where the transition points in $x\in X(\mathcal{S})$ must be in any sliding block code that induces $\psi$. Because every element of $G_\mathcal{S}$ is mapped to an element of $G_\mathcal{T}$ of the same length, the external transition points must be the indices $i$ such that $x_i=p$ and $x_{i+1}=1$. Turning to the internal transition points, we can assume that we are only looking at some word $y\in G_\mathcal{S}$ because internal transition points only occur completely within one of those words. For any such $y$, there will be $p-1$ internal transition points: the change from $k$s to $(k+1)$s for all $k\in\set{1,2,...,p-1}$. Then, from the parameterization of $\psi$ in terms of only the $S_j$s and the $d_j$s, we can see that the internal transition point from $k$ to $k+1$ will occur at the letter indexed $\displaystyle\sum_{j=1}^k s_j^{i_j} + \sum_{j=1}^k d_j$. In terms of creating a sliding block code, the value that is of more importance than that, however, is the distance between the change from $k$s to $(k+1)$s in a word in $G_\mathcal{S}$ and the transition point that switches from $k$ to $k+1$ in the image of that word under $\psi$. Using the previous indexing of the transition points, we can see that distance will be $\displaystyle\sum_{j=1}^k d_j$, which we will refer to as $r_k$. Using those, we will define the value $r=\displaystyle 1+\max_k \abs{r_k}$; the block map $\Phi$ we create will have memory and anticipation $r+1$.
To construct $\Phi:B_{2r+3}(X(\mathcal{S}))\to\set{1,2,...,p}$, suppose $x\in B_{2r+3}(X(\mathcal{S}))$ where \newline$x=x_{-r-1}x_{-r}\cdots x_0\cdots x_{r}x_{r+1}$. First, find the transition point as described in the previous paragraph that is the closest to $x_0$, breaking ties to favor transition points that have negative indices in $x$. Using the index offsets described above, there are three cases to address:
\begin{enumerate}
\item If there does not exist a transition point, then we will say $\Phi(x)=x_0$, because we know all the transition points are far enough from $x_0$ so that in $\psi$ (as well as any sliding block code that induces $\psi$), we must have that the letters with that index are the same in both the domain and the image.
\item If the closest transition point comes before $x_0$, suppose it is a transition point from $k$ to $l$, where $l=k+1$ for any internal transition point and $l=1$ for any external transition point. Then, we know that in the image of any word in $X(\mathcal{S})$ of which $x$ is a substring, and in any sliding block code inducing $\psi$, the letter in the image of that word with the same index as $x_0$ must be $l$, so $\Phi(x)=l$.
\item If the closest transition point comes on or after $x_0$, suppose it is a transition point from $k$ to $l$, where $l=k+1$ for any internal transition point and $l=1$ for any external transition point. Then, for similar reasoning as in the previous case, $\Phi(x)=k$.
\end{enumerate}
Now, let $\varphi$ be the sliding block code induced by $\Phi$. By the construction of $\Phi$, we can see $\varphi$ will induce $\psi$. Therefore, by Lemma \ref{induced bijection}, $\varphi$ is a conjugacy from $X(\mathcal{S})$ to $X(\mathcal{T})$.
\end{proof}
With this proof in hand, we can use the same technique to create a sliding block code that induces a bijection between the core sets of two specific $\mathcal{S}$-limited shifts.
\begin{example}[$\mathcal{S}$-limited Shift Conjugacy]
As in Example \ref{induced conjugacy example}, let $\mathcal{S}=\set{S_1,S_2,S_3}$ where $S_1=\mathbb{N}$, $S_2=\set{2n : n\in \mathbb{N}}$, and $S_3=\set{3,5}$, and also $\mathcal{T}=\set{T_1,T_2,T_3}$ where $T_1=\mathbb{N}$, $T_2=\set{2n+1 : n\in\mathbb{N}}$, and $T_3=\set{2,4}$. We can see that $X(\mathcal{S})$ and $X(\mathcal{T})$ satisfy the conditions for Theorem \ref{limited conjugacy}, so $X(\mathcal{S})$ is conjugate to $X(\mathcal{T})$. In the previous example, we constructed a bijection $\psi:G_\mathcal{S}\to G_\mathcal{T}$, and then constructed a sliding block code that induced that bijection. While we could use that, along with Lemma \ref{induced bijection}, to show $X(\mathcal{S})$ is conjugate to $X(\mathcal{T})$, with Theorem \ref{limited conjugacy}, we can show the conjugacy by just noticing $s_1^m=t_1^m$, $s_2^m+1=t_2^m$, and $s_3^m-1=t_3^m$.
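Following the construction in the proof of Theorem \ref{limited conjugacy}, here $d_1=t_1^1-s_1^1=0$, $d_2=t_2^1-s_2^1=1$, and $d_3=t_3^1-s_3^1=-1$, so $r_1=0$, $r_2=1$, $r_3=0$, and the proof would use $r=2$; by contrast, Example \ref{induced conjugacy example} exhibits a block map with memory $1$ and anticipation $0$ that induces the same bijection.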
\end{example}
\end{document} |
\begin{document}
\title{Quadrant marked mesh patterns in $132$-avoiding permutations II}
\begin{abstract}
\noindent
Given a permutation $\sigma = \sigma_1 \ldots \sigma_n$ in the symmetric group
$S_n$, we say that $\sigma_i$ matches the marked mesh pattern
$MMP(a,b,c,d)$ in $\sigma$ if there are at least
$a$ points to the right of $\sigma_i$ in $\sigma$ which are greater than
$\sigma_i$, at least $b$ points to the left of $\sigma_i$ in $\sigma$ which
are greater than $\sigma_i$, at least $c$ points to the left of
$\sigma_i$ in $\sigma$ which are smaller than $\sigma_i$, and
at least $d$ points to the right of $\sigma_i$ in $\sigma$ which
are smaller than $\sigma_i$.
This paper is a continuation of the systematic study of the distribution
of quadrant marked mesh patterns in 132-avoiding permutations
started in \cite{kitremtie} where
we mainly studied the distribution of the number of matches
of $MMP(a,b,c,d)$ in 132-avoiding permutations
where exactly one of $a,b,c,d$ is greater
than zero and the remaining elements are zero. In this paper,
we study the distribution of the number of matches
of $MMP(a,b,c,d)$ in 132-avoiding permutations
where exactly two of $a,b,c,d$ are greater
than zero and the remaining elements are zero.
We provide explicit recurrence relations to enumerate our objects which
can be used to give closed forms for the generating functions associated
with such distributions. In many cases, we provide combinatorial explanations of the coefficients that appear in our generating functions. The case of quadrant marked mesh patterns $MMP(a,b,c,d)$ where three or more of $a,b,c,d$ are
constrained to be greater than 0 will be studied in \cite{kitremtieIII}.\\
\noindent {\bf Keywords:} permutation statistics, quadrant marked mesh pattern, distribution, Pell numbers
\end{abstract}
\tableofcontents
\section{Introduction}
The notion of mesh patterns was introduced by Br\"and\'en and Claesson \cite{BrCl} to provide explicit expansions for certain permutation statistics as, possibly infinite, linear combinations of (classical) permutation patterns. This notion was further studied in \cite{AKV,HilJonSigVid,kitlie,kitrem,kitremtie,Ulf}.
Kitaev and Remmel \cite{kitrem} initiated the systematic study of distribution of quadrant marked mesh patterns on permutations. The study was extended to 132-avoiding permutations by Kitaev, Remmel and Tiefenbruck in \cite{kitremtie}, and the present paper continues this line of research.
Kitaev and Remmel also studied the distribution of quadrant marked
mesh patterns in up-down and down-up permutations \cite{kitrem2,kitrem3}.
Let $\sigma = \sigma_1 \ldots \sigma_n$ be a permutation written in one-line notation. Then we will consider the
graph of $\sigma$, $G(\sigma)$, to be the set of points $(i,\sigma_i)$ for
$i =1, \ldots, n$. For example, the graph of the permutation
$\sigma = 471569283$ is pictured in Figure
\ref{fig:basic}. Then if we draw a coordinate system centered at a
point $(i,\sigma_i)$, we will be interested in the points that
lie in the four quadrants I, II, III, and IV of that
coordinate system as pictured
in Figure \ref{fig:basic}. For any $a,b,c,d \in
\mathbb{N} = \{0,1,2, \ldots \}$ and any $\sigma = \sigma_1 \ldots \sigma_n \in S_n$, the set of all permutations of length $n$, we say that $\sigma_i$ matches the
quadrant marked mesh pattern $\mathrm{MMP}(a,b,c,d)$ in $\sigma$ if,
in $G(\sigma)$ relative
to the coordinate system which has the point $(i,\sigma_i)$ as its
origin, there are at least $a$ points in quadrant I,
at least $b$ points in quadrant II, at least $c$ points in quadrant
III, and at least $d$ points in quadrant IV.
For example,
if $\sigma = 471569283$, the point $\sigma_4 =5$ matches
the marked mesh pattern $\mathrm{MMP}(2,1,2,1)$ since in $G(\sigma)$ relative
to the coordinate system with the origin at $(4,5)$,
there are 3 points in quadrant I,
1 point in quadrant II, 2 points in quadrant III, and 2 points in
quadrant IV. Note that if a coordinate
in $\mathrm{MMP}(a,b,c,d)$ is 0, then there is no condition imposed
on the points in the corresponding quadrant.
In addition, we shall
consider patterns $\mathrm{MMP}(a,b,c,d)$ where
$a,b,c,d \in \mathbb{N} \cup \{\emptyset\}$. Here when
a coordinate of $\mathrm{MMP}(a,b,c,d)$ is the empty set, then for $\sigma_i$ to match
$\mathrm{MMP}(a,b,c,d)$ in $\sigma = \sigma_1 \ldots \sigma_n \in S_n$,
it must be the case that there are no points in $G(\sigma)$ relative
to the coordinate system with the origin at $(i,\sigma_i)$ in the corresponding
quadrant. For example, if $\sigma = 471569283$, the point
$\sigma_3 =1$ matches
the marked mesh pattern $\mathrm{MMP}(4,2,\emptyset,\emptyset)$ since in
$G(\sigma)$ relative
to the coordinate system with the origin at $(3,1)$,
there are 6 points in $G(\sigma)$ in quadrant I,
2 points in $G(\sigma)$ in quadrant II, and no points in
quadrant III or quadrant IV. We let
$\mathrm{mmp}^{(a,b,c,d)}(\sigma)$ denote the number of $i$ such that
$\sigma_i$ matches $\mathrm{MMP}(a,b,c,d)$ in~$\sigma$.
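For example, with $\sigma = 471569283$ we have $\mathrm{mmp}^{(1,0,0,0)}(\sigma)=6$, since every entry of $\sigma$ other than $9$, $8$, and $3$ has at least one larger entry to its right.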
\fig{basic}{The graph of $\sigma = 471569283$.}
Note how the (two-dimensional) notation of \'Ulfarsson \cite{Ulf} for marked mesh patterns corresponds to our (one-line) notation for quadrant marked mesh patterns. For example,
\[
\mathrm{MMP}(0,0,k,0)=\mmpattern{scale=2.3}{1}{1/1}{}{0/0/1/1/k}\hspace{-0.25cm},\ \mathrm{MMP}(k,0,0,0)=\mmpattern{scale=2.3}{1}{1/1}{}{1/1/2/2/k}\hspace{-0.25cm},
\]
\[
\mathrm{MMP}(0,a,b,c)=\mmpattern{scale=2.3}{1}{1/1}{}{0/1/1/2/a} \hspace{-2.07cm} \mmpattern{scale=2.3}{1}{1/1}{}{0/0/1/1/b} \hspace{-2.07cm} \mmpattern{scale=2.3}{1}{1/1}{}{1/0/2/1/c} \ \mbox{ and }\ \ \ \mathrm{MMP}(0,0,\emptyset,k)=\mmpattern{scale=2.3}{1}{1/1}{0/0}{1/0/2/1/k}\hspace{-0.25cm}.
\]
Given a sequence $w = w_1 \ldots w_n$ of distinct integers,
let $\red[w]$ be the permutation found by replacing the
$i$-th smallest integer that appears in $w$ by $i$. For
example, if $w = 2754$, then $\red[w] = 1432$. Given a
permutation $\tau=\tau_1 \ldots \tau_j$ in the symmetric group $S_j$, we say that the pattern $\tau$ {\em occurs} in $\sigma = \sigma_1 \ldots \sigma_n \in S_n$ provided there exists
$1 \leq i_1 < \cdots < i_j \leq n$ such that
$\red[\sigma_{i_1} \ldots \sigma_{i_j}] = \tau$. We say
that a permutation $\sigma$ {\em avoids} the pattern $\tau$ if $\tau$ does not
occur in $\sigma$. Let $S_n(\tau)$ denote the set of permutations in $S_n$
which avoid $\tau$. In the theory of permutation patterns, $\tau$ is called a {\em classical pattern}. See \cite{kit} for a comprehensive introduction to
the study of patterns in permutations.
It has been a rather popular direction of research in the literature on permutation patterns to study permutations avoiding a 3-letter pattern subject to extra restrictions (see \cite[Subsection 6.1.5]{kit}). In \cite{kitremtie},
we started the study of the generating functions
\begin{equation*} \label{Rabcd}
Q_{132}^{(a,b,c,d)}(t,x) = 1 + \sum_{n\geq 1} t^n Q_{n,132}^{(a,b,c,d)}(x)
\end{equation*}
where for any $a,b,c,d \in \{\emptyset\} \cup \mathbb{N}$,
\begin{equation*} \label{Rabcdn}
Q_{n,132}^{(a,b,c,d)}(x) = \sum_{\sigma \in S_n(132)} x^{\mathrm{mmp}^{(a,b,c,d)}(\sigma)}.
\end{equation*}
For any $a,b,c,d$, we will write $Q_{n,132}^{(a,b,c,d)}(x)|_{x^k}$ for
the coefficient of $x^k$ in $Q_{n,132}^{(a,b,c,d)}(x)$.
There is one obvious symmetry in this case which is induced
by the fact that if $\sigma \in S_n(132)$, then $\sigma^{-1} \in S_n(132)$.
That is, the following lemma was proved in \cite{kitremtie}.
\begin{lemma}\label{sym} {\rm (\cite{kitremtie})}
For any $a,b,c,d \in \{\emptyset\} \cup \mathbb{N}$,
\begin{equation*}
Q_{n,132}^{(a,b,c,d)}(x) = Q_{n,132}^{(a,d,c,b)}(x).
\end{equation*}
\end{lemma}
In \cite{kitremtie}, we studied the generating
functions $Q_{132}^{(k,0,0,0)}(t,x)$,
$Q_{132}^{(0,k,0,0)}(t,x) = Q_{132}^{(0,0,0,k)}(t,x)$, and
$Q_{132}^{(0,0,k,0)}(t,x)$ where $k$ can be either
the empty set or a positive integer as well as the
generating functions $Q_{132}^{(k,0,\emptyset,0)}(t,x)$ and
$Q_{132}^{(\emptyset,0,k,0)}(t,x)$. We also showed
that sequences of the form $(Q_{n,132}^{(a,b,c,d)}(x)|_{x^r})_{n \geq s}$
count a variety of combinatorial objects that appear
in the {\em On-line Encyclopedia of Integer Sequences} (OEIS) \cite{oeis}.
Thus, our results gave new combinatorial interpretations
of certain classical sequences such as the Fine numbers and the Fibonacci
numbers, and provided combinatorial interpretations for certain sequences
that appear in the OEIS but previously had none. Another particular result of our studies in \cite{kitremtie} is the enumeration of permutations avoiding simultaneously the patterns 132 and 1234.
The main goal of this paper is to continue the study of
$Q_{132}^{(a,b,c,d)}(t,x)$ and combinatorial interpretations of
sequences of the form
$(Q_{n,132}^{(a,b,c,d)}(x)|_{x^r})_{n \geq s}$ in
the case where $a,b,c,d \in \mathbb{N}$ and exactly two of
these parameters are non-zero. The case when at least three of the parameters are non-zero will be studied in \cite{kitremtieIII}.
Next we list several
results from \cite{kitremtie} which we need in this paper.
\begin{theorem}\label{thm:Qk000} (\cite[Theorem 4]{kitremtie})
\begin{equation*}\label{eq:Q0000}
Q_{132}^{(0,0,0,0)}(t,x) = C(xt) = \frac{1-\sqrt{1-4xt}}{2xt}
\end{equation*}
and, for $k \geq 1$,
\begin{equation*}\label{Qk000}
Q_{132}^{(k,0,0,0)}(t,x) = \frac{1}{1-tQ_{132}^{(k-1,0,0,0)}(t,x)}.
\end{equation*}
Hence
\begin{equation*}\label{eq:Q100(0)}
Q_{132}^{(1,0,0,0)}(t,0) = \frac{1}{1-t}
\end{equation*}
and, for $k \geq 2$,
\begin{equation}\label{x=0Qk000}
Q_{132}^{(k,0,0,0)}(t,0) = \frac{1}{1-tQ_{132}^{(k-1,0,0,0)}(t,0)}.
\end{equation}
\end{theorem}
\begin{theorem}\label{thm:Q00k0} (\cite[Theorem 8]{kitremtie})
For $k \geq 1$,
\begin{align}\label{gf00k0}
Q_{132}^{(0,0,k,0)}(t,x)&=\frac{1+(tx-t)(\sum_{j=0}^{k-1}C_jt^j) -
\sqrt{(1+(tx-t)(\sum_{j=0}^{k-1}C_jt^j))^2 -4tx}}{2tx}\nonumber\\
&=\frac{2}{1+(tx-t)(\sum_{j=0}^{k-1}C_jt^j) + \sqrt{(1+(tx-t)(\sum_{j=0}^{k-1}C_jt^j))^2 -4tx}}\notag
\end{align}
and
\begin{equation*}
Q_{132}^{(0,0,k,0)}(t,0) = \frac{1}{1-t(C_0+C_1 t+\cdots +C_{k-1}t^{k-1})}.
\end{equation*}
\end{theorem}
It follows from Lemma \ref{sym} that $Q_{132}^{(0,k,0,0)}(t,x) =
Q_{132}^{(0,0,0,k)}(t,x)$ for all $k \geq 1$. Thus, our next
theorem (obtained in \cite{kitremtie}) gives an expression for
$Q_{132}^{(0,k,0,0)}(t,x) = Q_{132}^{(0,0,0,k)}(t,x)$.
\begin{theorem}\label{thm:Q0k00} (Theorem 11 of \cite{kitremtie})
\begin{equation*}\label{Q0100}
Q_{132}^{(0,1,0,0)}(t,x) = Q_{132}^{(0,0,0,1)}(t,x) = \frac{1}{1-tC(tx)}.
\end{equation*}
For $k > 1$,
\begin{equation*}\label{Q0100-}
Q_{132}^{(0,k,0,0)}(t,x) = Q_{132}^{(0,0,0,k)}(t,x) = \frac{1+t\sum_{j=0}^{k-2} C_j t^j
(Q_{132}^{(0,k-1-j,0,0)}(t,x) -C(tx))}{1-tC(tx)}
\end{equation*}
and
\begin{equation*}\label{x=0Q0100-}
Q_{132}^{(0,k,0,0)}(t,0) =Q_{132}^{(0,0,0,k)}(t,0) = \frac{1+t\sum_{j=0}^{k-2} C_j t^j
(Q_{132}^{(0,k-1-j,0,0)}(t,0) -1)}{1-t}.
\end{equation*}
\end{theorem}
As was pointed out in \cite{kitremtie}, {\em avoidance} of a marked mesh pattern without quadrants containing the empty set can always be expressed in terms of multi-avoidance of (possibly many) classical patterns. Thus, among our results we will re-derive several known facts in permutation patterns theory. However, our main goals are more ambitious, being aimed at finding the distributions in question.
\section{$Q_{n,132}^{(k,0,\ell,0)}(x)$ where $k,\ell \geq 1$}
Throughout this paper, we shall classify the $132$-avoiding permutations
$\sigma = \sigma_1 \ldots \sigma_n$ by the position of $n$
in $\sigma$. That is, let
$S^{(i)}_n(132)$ denote the set of $\sigma \in S_n(132)$ such
that $\sigma_i =n$.
Clearly each $\sigma \in S_n^{(i)}(132)$ has the structure
pictured in Figure \ref{fig:basic2}. That is, in the graph of
$\sigma$, the elements to the left of $n$, $A_i(\sigma)$, have
the structure of a $132$-avoiding permutation, the elements
to the right of $n$, $B_i(\sigma)$, have the structure of a
$132$-avoiding permutation, and all the elements in
$A_i(\sigma)$ lie above all the elements in
$B_i(\sigma)$. It is well-known that the number of $132$-avoiding
permutations in $S_n$ is the {\em Catalan number}
$C_n = \frac{1}{n+1} \binom{2n}{n}$ and the generating
function for the $C_n$'s is given by
\begin{equation*}\label{Catalan}
C(t) = \sum_{n \geq 0} C_n t^n = \frac{1-\sqrt{1-4t}}{2t}=
\frac{2}{1+\sqrt{1-4t}}.
\end{equation*}
\fig{basic2}{The structure of $132$-avoiding permutations.}
If $k \geq 1$, it is easy to
compute a recursion for $Q_{n,132}^{(k,0,\ell,0)}(x)$ for any
fixed $\ell \geq 1$. It is clear that $n$ can never match
the pattern $\mathrm{MMP}(k,0,\ell,0)$ for $k \geq 1$ in any
$\sigma \in S_n(132)$.
For $i \geq 1$, it is easy to see that as we sum
over all the permutations $\sigma$ in $S_n^{(i)}(132)$, our choices
for the structure for $A_i(\sigma)$ will contribute a factor
of $Q_{i-1,132}^{(k-1,0,\ell,0)}(x)$ to $Q_{n,132}^{(k,0,\ell,0)}(x)$ since
none of the elements to the right of $n$ have
any effect on whether an element in $A_i(\sigma)$ matches
the pattern $\mathrm{MMP}(k,0,\ell,0)$ and the presence of $n$ ensures
that an element in $A_i(\sigma)$ matches $\mathrm{MMP}(k,0,\ell,0)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(k-1,0,\ell,0)$ in $A_i(\sigma)$.
Similarly, our choices
for the structure for $B_i(\sigma)$ will contribute a factor
of $Q_{n-i,132}^{(k,0,\ell,0)}(x)$ to $Q_{n,132}^{(k,0,\ell,0)}(x)$ since
neither $n$ nor any of the elements to the left of $n$ have
any effect on whether an element in $B_i(\sigma)$ matches
the pattern $\mathrm{MMP}(k,0,\ell,0)$.
Thus,
\begin{equation}\label{k0l0rec}
Q_{n,132}^{(k,0,\ell,0)}(x) =
\sum_{i=1}^n Q_{i-1,132}^{(k-1,0,\ell,0)}(x)\
Q_{n-i,132}^{(k,0,\ell,0)}(x).
\end{equation}
Multiplying both sides of (\ref{k0l0rec}) by $t^n$ and summing
over all $n \geq 1$, we obtain that
\begin{equation*}\label{k0l0rec2}
-1+Q_{132}^{(k,0,\ell,0)}(t,x) =
t Q_{132}^{(k-1,0,\ell,0)}(t,x)\ Q_{132}^{(k,0,\ell,0)}(t,x)
\end{equation*}
so that we have the following theorem.
\begin{theorem}\label{thm:Qk0l0} For all $k, \ell \geq 1$,
\begin{equation}\label{k0l0gf}
Q_{132}^{(k,0,\ell,0)}(t,x) =
\frac{1}{1-t Q_{132}^{(k-1,0,\ell,0)}(t,x)}.
\end{equation}
\end{theorem}
Note that by Theorem \ref{thm:Q00k0}, we have an explicit formula
for $Q_{132}^{(0,0,\ell,0)}(t,x)$ for all $\ell \geq 1$ so
that we can then use the recursion (\ref{k0l0gf}) to
compute $Q_{132}^{(k,0,\ell,0)}(t,x)$ for all $k \geq 1$.
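As a concrete sanity check on this scheme (an illustration we add here; the code and
its helper names are ours), the following Python sketch computes the polynomials
$Q_{n,132}^{(k,0,\ell,0)}(x)$: the base case $k=0$ is obtained by brute force over
$S_n(132)$, and the recursion (\ref{k0l0rec}) then produces the polynomials for
$k \geq 1$. For $(k,\ell)=(1,1)$ it reproduces the expansion of
$Q_{132}^{(1,0,1,0)}(t,x)$ displayed below.
\begin{verbatim}
from itertools import permutations

def avoids_132(p):
    n = len(p)
    return not any(p[i] < p[k] < p[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

def mmp_k0l0(p, k, l):
    # entries with at least k larger entries to the right (quadrant I)
    # and at least l smaller entries to the left (quadrant III)
    return sum(1 for i in range(len(p))
               if sum(v > p[i] for v in p[i + 1:]) >= k
               and sum(v < p[i] for v in p[:i]) >= l)

memo = {}

def Q(n, k, l):
    # coefficient list (constant term first) of Q_{n,132}^{(k,0,l,0)}(x)
    if (n, k, l) in memo:
        return memo[(n, k, l)]
    if k == 0:                              # base case by brute force over S_n(132)
        res = [0] * (n + 1)
        for p in permutations(range(1, n + 1)):
            if avoids_132(p):
                res[mmp_k0l0(p, 0, l)] += 1
    elif n == 0:
        res = [1]
    else:                                   # recursion (k0l0rec)
        res = [0] * (n + 1)
        for i in range(1, n + 1):
            for a, ca in enumerate(Q(i - 1, k - 1, l)):
                for b, cb in enumerate(Q(n - i, k, l)):
                    res[a + b] += ca * cb
    while len(res) > 1 and res[-1] == 0:    # drop trailing zero coefficients
        res.pop()
    memo[(n, k, l)] = res
    return res

for n in range(1, 8):
    print(n, Q(n, 1, 1))   # n = 3 gives [4, 1], i.e. 4 + x, and so on
\end{verbatim}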
\subsection{Explicit formulas for $Q^{(k,0,\ell,0)}_{n,132}(x)|_{x^r}$}
Note that
\begin{equation}\label{x=0k0l0gf}
Q_{132}^{(k,0,\ell,0)}(t,0) =
\frac{1}{1-t Q_{132}^{(k-1,0,\ell,0)}(t,0)}.
\end{equation}
Since $Q_{132}^{(1,0,0,0)}(t,0) = Q_{132}^{(0,0,1,0)}(t,0) = \frac{1}{1-t}$,
it follows from the recursions
(\ref{x=0Qk000}) and (\ref{x=0k0l0gf}) that
for all $k \geq 2$,
$Q_{132}^{(k,0,0,0)}(t,0) = Q_{132}^{(k-1,0,1,0)}(t,0)$.
This is easy to see directly. That is,
it is clear that if in $\sigma \in S_n(132)$, $\sigma_j$ matches
$\mathrm{MMP}(k-1,0,1,0)$, then there is an
$i < j$ such that $\sigma_i < \sigma_j$ so that
$\sigma_i$ matches $\mathrm{MMP}(k,0,0,0)$. Vice versa,
suppose that in $\sigma \in S_n(132)$, $\sigma_j$ matches
$\mathrm{MMP}(k,0,0,0)$ where $k \geq 2$. Because
$\sigma$ is $132$-avoiding this means the elements in
the first quadrant relative to the coordinate system with
$(j,\sigma_j)$ as the origin must be increasing. Thus,
there exist $j < j_1 < \cdots < j_k \leq n$ such that
$\sigma_j < \sigma_{j_1} < \cdots < \sigma_{j_k}$ and, hence,
$\sigma_{j_1}$ matches $\mathrm{MMP}(k-1,0,1,0)$.
Thus, the number of $\sigma \in S_n(132)$ where
$\mathrm{mmp}^{(k,0,0,0)}(\sigma)=0$ is equal to the number of $\sigma \in S_n(132)$ where
$\mathrm{mmp}^{(k-1,0,1,0)}(\sigma)=0$ for $k \geq 2$.
In \cite{kitremtie}, we computed the generating function
$Q_{132}^{(k,0,0,0)}(t,0)$ for small $k$. Thus, we have that
\begin{eqnarray*}
Q_{132}^{(2,0,0,0)}(t,0)= Q_{132}^{(1,0,1,0)}(t,0) &=&\frac{1-t}{1-2t};\\
Q_{132}^{(3,0,0,0)}(t,0)= Q_{132}^{(2,0,1,0)}(t,0)&=&\frac{1-2t}{1-3t+t^2};\\
Q_{132}^{(4,0,0,0)}(t,0)= Q_{132}^{(3,0,1,0)}(t,0)&=&\frac{1-3t+t^2}{1-4t+3t^2};\\
Q_{132}^{(5,0,0,0)}(t,0)= Q_{132}^{(4,0,1,0)}(t,0)&=&\frac{1-4t+3t^2}{1-5t+6t^2-t^3};\\
Q_{132}^{(6,0,0,0)}(t,0)= Q_{132}^{(5,0,1,0)}(t,0)&=&\frac{1-5t+6t^2-t^3}{1-6t+10t^2-4t^3}, \mbox{and}\\
Q_{132}^{(7,0,0,0)}(t,0)= Q_{132}^{(6,0,1,0)}(t,0)&=&\frac{1-6t+10t^2-4t^3}{1-7t+15t^2-10t^3+t^4}.
\end{eqnarray*}
Note that
$Q_{132}^{(0,0,2,0)}(t,0) = \frac{1}{1-t-t^2}$ by
Theorem \ref{thm:Q00k0}. Thus, by (\ref{x=0k0l0gf}),
we can compute that
\begin{eqnarray*}
Q_{132}^{(1,0,2,0)}(t,0) &=& \frac{1-t-t^2}{1-2t-t^2};\\
Q_{132}^{(2,0,2,0)}(t,0) &=& \frac{1-2t-t^2}{1-3t+t^3};\\
Q_{132}^{(3,0,2,0)}(t,0) &=& \frac{1-3t+t^3}{1-4t+2t^2+2t^3}, \ \mbox{and}\\
Q_{132}^{(4,0,2,0)}(t,0) &=& \frac{1-4t+2t^2+2t^3}{1-5t+5t^2+2t^3-t^4}.
\end{eqnarray*}
We note that $\{Q^{(1,0,2,0)}_{n,132}(0)\}_{n \geq 1}$ is the sequence
of the Pell numbers which is A000129 in the OEIS. This result should be compared with a known fact \cite[page 250]{kit} that the avoidance of $123$, $2143$ and $3214$ simultaneously gives the Pell numbers (the avoidance of $\mathrm{MMP}(1,0,2,0)$ is equivalent to avoiding simultaneously $2134$ and $1234$).
\begin{problem} Find a combinatorial explanation of the fact that in $S_n$, the number of (132,2134,1234)-avoiding permutations is the same as the number of (123,2143,3214)-avoiding permutations. Can any of the known bijections between $132$- and $123$-avoiding permutations (see \cite[Chapter 4]{kit}) be of help here?\end{problem}
The sequence $\{Q^{(2,0,2,0)}_{n,132}(0)\}_{n \geq 1}$ is sequence
A052963 in the OEIS which has the generating function $\frac{1-t-t^2}{1-3t+t^3}$.
That is, $\frac{1-2t-t^2}{1-3t+t^3} -1 = t\frac{1-t-t^2}{1-3t+t^3}$.
This sequence previously had no listed combinatorial interpretation, so we have now provided one.
Similarly, $Q_{132}^{(0,0,3,0)}(t,0) = \frac{1}{1-t-t^2-2t^3}$. Thus, by (\ref{x=0k0l0gf}), we can compute that
\begin{eqnarray*}
Q_{132}^{(1,0,3,0)}(t,0) &=& \frac{1-t-t^2-2t^3}{1-2t-t^2-2t^3};\\
Q_{132}^{(2,0,3,0)}(t,0) &=& \frac{1-2t-t^2-2t^3}{1-3t-t^3+2t^4};\\
Q_{132}^{(3,0,3,0)}(t,0) &=& \frac{1-3t-t^3+2t^4}{1-4t+2t^2+4t^4}, \ \mbox{and}\\
Q_{132}^{(4,0,3,0)}(t,0) &=& \frac{1-4t+2t^2+4t^4}{1-5t+5t^2+5t^4-2t^5}.
\end{eqnarray*}
In this case, the sequence $(Q_{n,132}^{(1,0,3,0)}(0))_{n \geq 1}$ is
sequence A077938 in the OEIS which has the generating
function $\frac{1}{1-2t-t^2-2t^3}$. That is,
$\frac{1-t-t^2-2t^3}{1-2t-t^2-2t^3} -1 = t\frac{1}{1-2t-t^2-2t^3}$.
This sequence previously had no listed combinatorial interpretation, so we have now provided one.
We can also find the coefficient of the highest power of
$x$ that occurs in $Q_{n,132}^{(k,0,\ell,0)}(x)$ for any $k,\ell \geq 1$.
That is, it is easy to
see that the maximum possible number of matches of $\mathrm{MMP}(k,0,\ell,0)$
for a $\sigma= \sigma_1 \ldots \sigma_n \in S_n(132)$ occurs when
$\sigma_1 \ldots \sigma_{\ell}$ is a $132$-avoiding permutation in $S_{\ell}$
and $\sigma_{\ell+1}\ldots \sigma_{n}$ is an increasing sequence.
Thus, we have the following theorem.
\begin{theorem}\label{maxcoeff0k0l}
For any $k,\ell \geq 1$ and $n \geq k +\ell +1$,
the highest power of $x$ that occurs in $Q_{n,132}^{(k,0,\ell,0)}(x)$
is $x^{n-k-\ell}$ which appears with a coefficient of $C_\ell$.
\end{theorem}
Given that we have computed the generating functions
$ Q_{132}^{(0,0,\ell,0)}(t,x)$, we can then use
(\ref{k0l0gf}) to compute the following.
\begin{align*}
& Q_{132}^{(1,0,1,0)}(t,x) =1+t+2 t^2+ (4+x)t^3+\left(8+5 x+x^2\right)t^4+
\left(16+17 x+8 x^2+x^3\right)t^5+\\
&\left(32+49 x+38 x^2+12 x^3+x^4\right)t^6+
\left(64+129 x+141 x^2+77 x^3+17 x^4+x^5\right)t^7+\\
&\left(128+321 x+453 x^2+361 x^3+143 x^4+23 x^5+x^6\right)t^8+\\
&\left(256+769 x+1326 x^2+1399 x^3+834 x^4+247 x^5+30 x^6+x^7\right)t^9+
\cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(2,0,1,0)}(t,x) =1+t+2 t^2+5 t^3+(13+x)t^4 +
\left(34+7 x+x^2\right)t^5+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&\left(89+32 x+10 x^2+x^3\right)t^6+
\left(233+122 x+59 x^2+14 x^3+x^4\right)t^7+ \\
& \left(610+422 x+272 x^2+106 x^3+19 x^4+x^5\right)t^8+\\
& \left(1597+1376 x+1090 x^2+591 x^3+182 x^4+25 x^5+x^6\right)t^9 +
\cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(3,0,1,0)}(t,x) =1+t+2 t^2+5 t^3+14 t^4+ (41+x)t^5+ \left(122+9 x+x^2\right)t^6+\ \ \ \ \ \ \ \ \ \ \ \\
& \left(365+51 x+12 x^2+x^3\right)t^7+\left(1094+235 x+84 x^2+16 x^3+x^4\right)t^8+ \\
&\left(3281+966 x+454 x^2+139 x^3+21 x^4+x^5\right)t^9 + \cdots.
\end{align*}
We can explain several of the coefficients that appear
in the polynomials $Q_{n,132}^{(k,0,1,0)}(x)$ for various $k$.
\begin{theorem}
$Q_{n,132}^{(1,0,1,0)}(0) = 2^{n-1}$
for $n \geq 1$.
\end{theorem}
\begin{proof}
This follows immediately from the fact that
$Q_{132}^{(1,0,1,0)}(t,0) = \frac{1-t}{1-2t}$. We can
also give a simple inductive proof of this fact.
Clearly $Q_{1,132}^{(1,0,1,0)}(0)=1$. Assume
that $Q_{k,132}^{(1,0,1,0)}(0)=2^{k-1}$ for $k < n$. Then
suppose that $\mathrm{mmp}^{(1,0,1,0)}(\sigma) = 0$ and $\sigma_i =n$. Then
it must be the case that the elements to the left of $\sigma_i$ are
decreasing so that $\sigma_1 \ldots \sigma_{i-1} = (n-1)(n-2) \ldots (n-(i-1))$.
But then the elements to the right of $\sigma_i$ must reduce to a 132-avoiding
permutation in $S_{n-i}$ which has no occurrence of the pattern $\mathrm{MMP}(1,0,1,0)$.
Thus, if $i =n$, we only have one such $\sigma$ and if
$i <n$, we have $2^{n-i-1}$ choices for $\sigma_{i+1} \ldots \sigma_n$ by induction.
It follows that
$$Q_{n,132}^{(1,0,1,0)}(0) = 1+ \sum_{i=1}^{n-1} 2^{i-1} = 2^{n-1}.$$
\end{proof}
The sequence $(Q_{n,132}^{(1,0,1,0)}(x)|_{x})_{n \geq 3}$ is
the sequence A000337 in the OEIS which has the formula
$a(n) = (n-1)2^{n} +1$, and the following theorem confirms this fact.
\begin{theorem}
For $n \geq 3$,
\begin{equation}\label{Q1010x}
Q_{n,132}^{(1,0,1,0)}(x)|_x = (n-3)2^{n-2} +1.
\end{equation}
\end{theorem}
\begin{proof}
To prove (\ref{Q1010x}), we classify
the $\sigma = \sigma_1 \ldots \sigma_n \in S_n(132)$ such
that $\mathrm{mmp}^{(1,0,1,0)}(\sigma) =1$ according to whether
the $\sigma_i$ which matches $\mathrm{MMP}(1,0,1,0)$ occurs to the
left or right of position of $n$ in $\sigma$.
First,
suppose that $\sigma_i=n$ and the $\sigma_s$ matching $\mathrm{MMP}(1,0,1,0)$ in $\sigma$
is such that $s < i$. It follows that $\red[\sigma_1 \ldots \sigma_{i-1}]$ is
an element of $S_{i-1}(132)$ such that $\mathrm{mmp}^{(0,0,1,0)}(\red[\sigma_1 \ldots \sigma_{i-1}]) =1$.
We proved in \cite{kitremtie} that
$Q^{(0,0,1,0)}_{n,132}(x)|_x =\binom{n}{2}$ so that we
have $\binom{i-1}{2}$ choices for $\sigma_1 \ldots \sigma_{i-1}$.
It must be the case that
$\mathrm{mmp}^{(1,0,1,0)}(\sigma_{i+1} \ldots \sigma_n) =0$ so that
we have $2^{n-i-1}$ choices for $\sigma_{i+1} \ldots \sigma_n$. It follows
that there are
$\binom{n-1}{2} + \sum_{i=3}^{n-1} \binom{i-1}{2}2^{n-i-1}$ permutations
$\sigma \in S_n(132)$ where the unique element which matches
$\mathrm{MMP}(1,0,1,0)$ occurs to the left of the position of $n$ in $\sigma$.
Next suppose that $\sigma = \sigma_1 \ldots \sigma_n \in S_n(132)$,
$\mathrm{mmp}^{(1,0,1,0)}(\sigma) =1$, $\sigma_i=n$ and the $\sigma_s$ matching $\mathrm{MMP}(1,0,1,0)$
is such that $s > i$. Then the elements to the left of $\sigma_i$ in $\sigma$ must
be decreasing and the elements to the right of $\sigma_i$ in $\sigma$
must be such that $\mathrm{mmp}^{(1,0,1,0)}(\sigma_{i+1} \ldots \sigma_n) = 1$.
Thus, we have $1+(n-i-3)2^{n-i-2}$ choices for $\sigma_{i+1} \ldots \sigma_n$
by induction. It follows
that there are
$$\sum_{i=1}^{n-3} (1+(n-i-3)2^{n-i-2}) = (n-3) + \sum_{j=1}^{n-4} j2^{j+1}$$ permutations
$\sigma \in S_n(132)$ where the unique element which matches
$\mathrm{MMP}(1,0,1,0)$ occurs to the right of the position of $n$ in $\sigma$.
Thus,
\begin{eqnarray*}
Q^{(1,0,1,0)}_{n,132}(x)|_x &=& (n-3) + \sum_{j=1}^{n-4} j2^{j+1} +
\binom{n-1}{2}+ \sum_{i=3}^{n-1} \binom{i-1}{2}2^{n-i-1} \\
&=& (n-3)2^{n-2} +1.
\end{eqnarray*}
Here the last equality can easily be proved by induction or be verified
by Mathematica.
\end{proof}
We can also find explicit formulas for the second highest coefficient
in $Q_{n,132}^{(k,0,1,0)}(x)$ for $k \geq 1$.
\begin{theorem}
\begin{equation}\label{secondk010}
Q_{n,132}^{(k,0,1,0)}(x)|_{x^{n-2-k}}= 2k+\binom{n-k}{2}
\end{equation}
for all $n \geq k+3$.
\end{theorem}
\begin{proof}
We proceed by induction on $k$.
First we shall prove that $Q_{n,132}^{(1,0,1,0)}(x)|_{x^{n-3}}= 2+\binom{n-1}{2}$ for
$n \geq 4$. That is, suppose that
$\sigma = \sigma_1 \ldots \sigma_n \in S_n(132)$ and
$\mathrm{mmp}^{(1,0,1,0)}(\sigma) =n-3$. If $\sigma_1 =n$, then
$\sigma_2 \ldots \sigma_n$ must be strictly increasing. Similarly,
if $\sigma_{n-1} =n$ so that $\sigma_n =1$, then
$\sigma_1 \ldots \sigma_{n-1}$ must be strictly increasing.
It cannot be that $\sigma_i = n$ where $1 < i < n-1$ because
in that case the most $\mathrm{MMP}(1,0,1,0)$-matches that we can
have in $\sigma$ occurs when $\sigma_1 \ldots \sigma_i$ is
an increasing sequence and $\sigma_{i+1} \ldots \sigma_n$ is
an increasing sequence which would give us a total of $i-2 +n-i -2 = n-4$
matches of $\mathrm{MMP}(1,0,1,0)$. Thus, the only other possibility
is if $\sigma_n =n$ in which case $\mathrm{mmp}^{(0,0,1,0)}(\sigma_1 \ldots \sigma_{n-1}) =
n-3$. We proved in \cite{kitremtie} that
$Q_{n,132}^{(0,0,1,0)}(x)|_{x^{n-2}} = \binom{n}{2}$. Thus, if
$\sigma_n =n$ we have that $\binom{n-1}{2}$ choices for
$\sigma_1 \ldots \sigma_{n-1}$. It follows that
$Q_{n,132}^{(1,0,1,0)}(x)|_{x^{n-3}} = 2 + \binom{n-1}{2}$ for $n \geq 4$.
Assume that $k \geq 2$ and that we have established (\ref{secondk010}) for $k-1$.
We know that the highest power of $x$ that occurs in $Q_{n,132}^{(k,0,1,0)}(x)$ is
$x^{n-1-k}$ which occurs with a coefficient of 1 for $n \geq k+2$.
Now
$$Q_{n,132}^{(k,0,1,0)}(x)|_{x^{n-2-k}}= \sum_{i=1}^n(Q_{i-1,132}^{(k-1,0,1,0)}(x)Q_{n-i,132}^{(k,0,1,0)}(x))|_{x^{n-2-k}}.$$
Since the highest power of $x$ that occurs in
$Q_{i-1,132}^{(k-1,0,1,0)}(x)$ is
$x^{i-1-k}$ and the highest power of $x$ that occurs in $Q_{n-i,132}^{(k,0,1,0)}(x)$ is
$x^{n-i-1-k}$, $(Q_{i-1,132}^{(k-1,0,1,0)}(x)Q_{n-i,132}^{(k,0,1,0)}(x))|_{x^{n-2-k}}=0$
unless $i\in \{1,n-1,n\}$. Thus, we have 3 cases. \\
\ \\
{\bf Case 1.} $i=1$. In that case,
$$(Q_{i-1,132}^{(k-1,0,1,0)}(x)Q_{n-i,132}^{(k,0,1,0)}(x))|_{x^{n-2-k}}=
Q_{n-1,132}^{(k,0,1,0)}(x)|_{x^{n-2-k}} =1.$$
{\bf Case 2.} $i=n-1$. In this case, we are considering permutations of the form
$\sigma = \sigma_1 \ldots \sigma_{n-2} n 1$. Then we must have
$\mathrm{mmp}^{(k-1,0,1,0)}(\red[\sigma_1 \ldots \sigma_{n-2}]) = n-k-2= (n-2)-1-(k-1)$ so that
there is only one choice for $\sigma_1 \ldots \sigma_{n-2}$. Thus, in this case,
$$(Q_{i-1,132}^{(k-1,0,1,0)}(x)Q_{n-i,132}^{(k,0,1,0)}(x))|_{x^{n-2-k}}=
Q_{n-2,132}^{(k-1,0,1,0)}(x)|_{x^{n-2-k}} =1.$$
{\bf Case 3.} $i=n$. In this case,
\begin{eqnarray*}
(Q_{i-1,132}^{(k-1,0,1,0)}(x)Q_{n-i,132}^{(k,0,1,0)}(x))|_{x^{n-2-k}} &=&
Q_{n-1,132}^{(k-1,0,1,0)}(x)|_{x^{n-2-k}} \\
&=& 2(k-1) + \binom{n-1-(k-1)}{2}\\
&=&
2(k-1) + \binom{n-k}{2}
\end{eqnarray*}
for $n-1 \geq k-1 +3$.\\
\ \\
Thus, it follows that
$Q_{n,132}^{(k,0,1,0)}(x)|_{x^{n-2-k}}= 2k+\binom{n-k}{2}$ for $n \geq k+3$.
\end{proof}
Similarly, we have computed the following.
\begin{align*}
&Q_{132}^{(1,0,2,0)}(t,x) =1+t+2 t^2+5 t^3+(12+2 x) t^4+ \left(29+11 x+2 x^2\right) t^5+\ \ \ \ \ \ \ \ \ \\
&\left(70+45 x+15 x^2+2 x^3\right) t^6+ \left(169+158 x+81 x^2+19 x^3+2 x^4\right) t^7+\\
&\left(408+509 x+359 x^2+129 x^3+23 x^4+2 x^5\right) t^8+\\
&\left(985+1550 x+1409 x^2+700 x^3+189 x^4+27 x^5+2 x^6\right) t^9+\cdots.
\end{align*}
\begin{eqnarray*}
&&Q_{132}^{(2,0,2,0)}(t,x) =1+t+ 2 t^2+5 t^3+14 t^4+
(40+2x)t^5+ \left(115+15 x+2 x^2\right)t^6+\\
&& \left(331+77 x+19 x^2+2 x^3\right)t^7+
\left(953+331 x+121 x^2+23 x^3+2 x^4\right)t^8+\\
&& \left(2744+1288 x+624 x^2+177 x^3+27 x^4+2 x^5\right)t^9+ \cdots.
\end{eqnarray*}
In this case, the sequence $(Q_{n,132}^{(2,0,2,0)}(0))_{n \geq 1}$
is A052963 in the OEIS which satisfies
the recursion $a(n) = 3a(n-1)-a(n-3)$ with $a(0)=1$, $a(1) =2$ and $a(2) =5$, and has the generating function
$\frac{1-t-t^2}{1-3t+t^3}$.
\begin{align*}
&Q_{132}^{(3,0,2,0)}(t,x) =1+t+2 t^2+5 t^3+14 t^4+42 t^5+ (130+2x) t^6+
\left(408+19 x+2 x^2\right)t^7+ \\
&\left(1288+117 x+23 x^2+2 x^3\right)t^8+
\left(4076+588 x+169 x^2+27 x^3+2 x^4\right)t^9 + \cdots.
\end{align*}
We have also computed the following.
\begin{eqnarray*}
&&Q_{132}^{(1,0,3,0)}(t,x) =1+t+2 t^2+5 t^3+14 t^4+
(37+5 x)t^5+ \left(98+29 x+5 x^2\right)t^6+\\
&& \left(261+124 x+39 x^2+5 x^3\right)t^7+
\left(694+475 x+207 x^2+49 x^3+5 x^4\right)t^8+\\
&& \left(1845+1680 x+963 x^2+310 x^3+59 x^4+5 x^5\right)t^9 + \\
&& \left(4906+5635 x+4056 x^2+1692 x^3+433 x^4+69 x^5+5 x^6\right) t^{10} +\cdots.
\end{eqnarray*}
\begin{eqnarray*}
&&Q_{132}^{(2,0,3,0)}(t,x)= 1+t+2 t^2+5 t^3+14 t^4+42 t^5+(127+5 x)t^6 +
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&&\left(385+39 x+5 x^2\right)t^7+\left(1169+207 x+49 x^2+5 x^3\right)t^8+ \\
&&\left(3550+938 x+310 x^2+59 x^3+5 x^4\right)t^9 + \\
&& \left(10781+3866 x+1642 x^2+433 x^3+69 x^4+5 x^5\right) t^{10} + \cdots.
\end{eqnarray*}
\begin{eqnarray*}
&&Q_{132}^{(3,0,3,0)}(t,x)= 1+t+2 t^2+5 t^3+14 t^4+
42 t^5+132 t^6+ (424+5 x)t^7+\ \ \ \ \ \ \ \ \\
&& \left(1376+49 x+5 x^2\right)t^8+ \left(4488+310 x+59 x^2+5 x^3\right)t^9
+ \\
&& \left(14672+1617
x+433 x^2+69 x^3+5 x^4\right) t^{10} + \cdots.
\end{eqnarray*}
We can also find a formula for the second highest coefficient
in $Q_{n,132}^{(k,0,m,0)}(x)$ for $m \geq 2$.
\begin{theorem}
For all $k \geq 1$, $m \geq 2$ and $n \geq m+k+2$,
$$Q_{n,132}^{(k,0,m,0)}(x)|_{x^{n-m-k-1}}=C_{m+1}+(2k+1)C_m+2C_m(n-k-m-2).$$
\end{theorem}
\begin{proof}
First we establish the base case which is when
$k =1$ and $m \geq 2$.
In this case,
$$Q_{n,132}^{(1,0,m,0)}(x) = \sum_{i=1}^n
Q_{i-1,132}^{(0,0,m,0)}(x) Q_{n-i,132}^{(1,0,m,0)}(x).$$
Since the highest power of $x$ that can appear in
$Q_{n,132}^{(0,0,m,0)}(x)$ is $x^{n-m}$ for $n >m$ and
the highest power of $x$ that can appear in
$Q_{n,132}^{(1,0,m,0)}(x)$ is $x^{n-m-1}$ for $n >m+1$, it
follows that the highest power of $x$ that appears in
$Q_{i-1,132}^{(0,0,m,0)}(x) Q_{n-i,132}^{(1,0,m,0)}(x)$
will be less than $x^{n-m-2}$ for $i =2, \ldots, n-1$. Thus,
we have three cases to consider. \\
\ \\
{\bf Case 1.} $i=1$. In this case, $Q_{i-1,132}^{(0,0,m,0)}(x) Q_{n-i,132}^{(1,0,m,0)}(x)
= Q_{n-1,132}^{(1,0,m,0)}(x)$ and we know that
$$Q_{n-1,132}^{(1,0,m,0)}(x)|_{x^{n-m-2}} = C_m \ \mbox{for } n \geq m+2.$$
\ \\
{\bf Case 2.} $i=n-1$. In this case, $Q_{i-1,132}^{(0,0,m,0)}(x) Q_{n-i,132}^{(1,0,m,0)}(x)
= Q_{n-2,132}^{(0,0,m,0)}(x)$ and it was proved in
\cite{kitremtie} that
$$Q_{n-2,132}^{(0,0,m,0)}(x)|_{x^{n-m-2}} = C_m \ \mbox{for } n \geq m+2.$$
\ \\
{\bf Case 3.} $i=n$. In this case, $Q_{i-1,132}^{(0,0,m,0)}(x) Q_{n-i,132}^{(1,0,m,0)}(x)
= Q_{n-1,132}^{(0,0,m,0)}(x)$ and it was proved in
\cite{kitremtie} that
$$Q_{n-1,132}^{(0,0,m,0)}(x)|_{x^{n-m-2}} = C_{m+1}-C_m +
2C_m(n-2-m) \ \mbox{for } n \geq m+3.$$
Thus, it follows that
\begin{eqnarray*}
Q_{n,132}^{(1,0,m,0)}(x)|_{x^{n-m-2}} &=& C_{m+1}+C_m +
2C_m(n-2-m) \\
&=& C_{m+1}+3C_m + 2C_m(n-3-m)\ \mbox{for } n \geq m+3.
\end{eqnarray*}
For example, for $m=2$, we get that
$$Q_{n,132}^{(1,0,2,0)}(x)|_{x^{n-4}} = 11+4(n-5) \ \mbox{for } n \geq 5$$
and, for $m=3$, we get that
$$Q_{n,132}^{(1,0,3,0)}(x)|_{x^{n-5}} = 29+10(n-6) \ \mbox{for } n \geq 6$$
which agrees with the series that we computed.
Now assume that $k > 1$ and we have proved the theorem
for $k-1$ and all $m \geq 2$. Then
$$Q_{n,132}^{(k,0,m,0)}(x) = \sum_{i=1}^n
Q_{i-1,132}^{(k-1,0,m,0)}(x) Q_{n-i,132}^{(k,0,m,0)}(x).$$
Since the highest power of $x$ that can appear in
$Q_{n,132}^{(k-1,0,m,0)}(x)$ is $x^{n-m-(k-1)}$ for $n \geq m+k$ and
the highest power of $x$ that can appear in
$Q_{n,132}^{(k,0,m,0)}(x)$ is $x^{n-m-k}$ for $n >m+k$, it
follows that the highest power of $x$ that appears in
$Q_{i-1,132}^{(k-1,0,m,0)}(x) Q_{n-i,132}^{(k,0,m,0)}(x)$
will be less than $x^{n-m-k-1}$ for $i =2, \ldots, n-1$. Thus,
we have three cases to consider. \\
\ \\
{\bf Case 1.} $i=1$. In this case, $Q_{i-1,132}^{(k-1,0,m,0)}(x) Q_{n-i,132}^{(k,0,m,0)}(x)
= Q_{n-1,132}^{(k,0,m,0)}(x)$ and we know that
$$Q_{n-1,132}^{(k,0,m,0)}(x)|_{x^{n-m-k-1}} = C_m \ \mbox{for } n \geq m+k+2.$$
\ \\
{\bf Case 2.} $i=n-1$. In this case, $Q_{i-1,132}^{(k-1,0,m,0)}(x) Q_{n-i,132}^{(k,0,m,0)}(x)
= Q_{n-2,132}^{(k-1,0,m,0)}(x)$ and we know that
$$Q_{n-2,132}^{(k-1,0,m,0)}(x)|_{x^{n-m-k-1}} = C_m \ \mbox{for } n \geq m+k+2.$$
\ \\
{\bf Case 3.} $i=n$. In this case, $Q_{i-1,132}^{(k-1,0,m,0)}(x) Q_{n-i,132}^{(k,0,m,0)}(x)
= Q_{n-1,132}^{(k-1,0,m,0)}(x)$ and we know by induction that
$$Q_{n-1,132}^{(k-1,0,m,0)}(x)|_{x^{n-m-k-1}} = C_{m+1}+(2(k-1)+1)C_m +
2C_m(n-m-k-2) \ \mbox{for } n \geq m+k+2.$$
Thus, it follows that
$$
Q_{n,132}^{(k,0,m,0)}(x)|_{x^{n-m-k-1}}
= C_{m+1}+(2k+1)C_m + 2C_m(n-m-k-2)\ \mbox{for } n \geq m+k+2.$$
\end{proof}
\section{$Q_{n,132}^{(k,0,0,\ell)}(x)=Q_{n,132}^{(k,\ell,0,0)}(x)$
where $k,\ell \geq 1$}
By Lemma \ref{sym}, we know that
$Q_{n,132}^{(k,0,0,\ell)}(x)= Q_{n,132}^{(k,\ell,0,0)}(x)$.
Thus, we will only consider $Q_{n,132}^{(k,0,0,\ell)}(x)$ in
this section.
Suppose that $n \geq \ell +1$.
It is clear that $n$ can never match
the pattern $\mathrm{MMP}(k,0,0,\ell)$ for $k \geq 1$ in any
$\sigma \in S_n(132)$.
For $i \leq n-\ell$, it is easy to see that as we sum
over all the permutations $\sigma$ in $S_n^{(i)}(132)$, our choices
for the structure for $A_i(\sigma)$ will contribute a factor
of $Q_{i-1,132}^{(k-1,0,0,0)}(x)$ to $Q_{n,132}^{(k,0,0,\ell)}(x)$.
That is, all the elements of $A_i(\sigma)$ have the elements of $B_i(\sigma)$ in their
fourth quadrant and $B_i(\sigma)$ consists of at least $\ell$ elements, so
the presence of $n$ ensures
that an element in $A_i(\sigma)$ matches $\mathrm{MMP}(k,0,0,\ell)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(k-1,0,0,0)$ in $A_i(\sigma)$.
Similarly, our choices
for the structure for $B_i(\sigma)$ will contribute a factor
of $Q_{n-i,132}^{(k,0,0,\ell)}(x)$ to $Q_{n,132}^{(k,0,0,\ell)}(x)$ since
neither $n$ nor any of the elements to the left of $n$ have
any effect on whether an element in $B_i(\sigma)$ matches
$\mathrm{MMP}(k,0,0,\ell)$.
Now suppose $i > n-\ell$ and $j =n-i$. In this case, $B_i(\sigma)$ consists
of $j$ elements. In this situation, an element
of $A_i(\sigma)$ matches $\mathrm{MMP}(k,0,0,\ell)$ in $\sigma$ if and only if it matches $\mathrm{MMP}(k-1,0,0,\ell -j)$ in
$A_i(\sigma)$. Thus,
our choices for $A_i(\sigma)$ contribute a factor of
$Q^{(k-1,0,0,\ell-j)}_{i-1,132}(x) = Q^{(k-1,0,0,\ell -j)}_{n-j-1,132}(x)$
to $Q_{n,132}^{(k,0,0,\ell)}(x)$. Similarly, our choices
for the structure for $B_i(\sigma)$ will contribute a factor
of $Q_{n-i,132}^{(k,0,0,\ell)}(x)$ to $Q_{n,132}^{(k,0,0,\ell)}(x)$ since
neither $n$ nor any of the elements to the left of $n$ have
any effect on whether an element in $B_i(\sigma)$ matches
the pattern $\mathrm{MMP}(k,0,0,\ell)$. Note that
since
$j < \ell$, we know that $Q_{n-i,132}^{(k,0,0,\ell)}(x) =C_j$.
It follows that for $n \geq \ell +1$,
\begin{eqnarray}\label{Q-k00l}
Q_{n,132}^{(k,0,0,\ell)}(x) &=& \sum_{i=1}^{n-\ell}
Q_{i-1,132}^{(k-1,0,0,0)}(x)Q_{n-i,132}^{(k,0,0,\ell)}(x) + \nonumber \\
&& \sum_{j=0}^{\ell -1} C_j Q_{n-j-1,132}^{(k-1,0,0,\ell-j)}(x).
\end{eqnarray}
Multiplying both sides of (\ref{Q-k00l}) by $t^n$, summing for $n \geq \ell +1$ and observing
that $Q_{j,132}^{(k,0,0,\ell)}(x) = C_j$ for $j \leq \ell$, we
see that for $k, \ell \geq 1$,
\begin{eqnarray*}\label{Qk00l}
Q_{132}^{(k,0,0,\ell)}(t,x)- \sum_{j=0}^\ell C_jt^j &=&
t Q_{132}^{(k-1,0,0,0)}(t,x)\left(Q_{132}^{(k,0,0,\ell)}(t,x) - \sum_{j=0}^{\ell-1} C_jt^j\right)+
\nonumber \\
&& t \sum_{j=0}^{\ell -1} C_j t^j \left(Q_{132}^{(k-1,0,0,\ell-j)}(t,x) -
\sum_{s=0}^{\ell-j-1} C_st^s\right).
\end{eqnarray*}
Thus, we have the following theorem.
\begin{theorem} \label{thm:k00l}
For all $k, \ell \geq 1$,
\begin{multline}\label{Qk00lgf-}
Q_{132}^{(k,0,0,\ell)}(t,x) = \\
\frac{C_\ell t^\ell + \sum_{j=0}^{\ell -1} C_j t^j (1 -tQ_{132}^{(k-1,0,0,0)}(t,x)
+t(Q_{132}^{(k-1,0,0,\ell-j)}(t,x)-\sum_{s=0}^{\ell -j -1}C_s t^s))}{1-tQ_{132}^{(k-1,0,0,0)}(t,x)}.
\end{multline}
\end{theorem}
Note that we can compute generating functions
of the form $Q_{132}^{(k,0,0,0)}(t,x)$ by Theorem~\ref{thm:Qk000} and
generating functions of the form $Q_{132}^{(0,0,0,\ell)}(t,x)$
by Theorem~\ref{thm:Q0k00} so that we can use
(\ref{Qk00lgf-}) to compute $Q_{132}^{(k,0,0,\ell)}(t,x)$ for
any $k, \ell \geq 0$.
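To illustrate how these ingredients combine (the following sketch and its variable
names are our own addition, not part of the formal development), one can expand the
$k=\ell=1$ instance of (\ref{Qk00lgf-}) with sympy, using
$Q_{132}^{(0,0,0,0)}(t,x)=C(xt)$ and $Q_{132}^{(0,0,0,1)}(t,x)=1/(1-tC(tx))$; the
low-order coefficients agree with the expansion of $Q_{132}^{(1,0,0,1)}(t,x)$ given
later in this section.
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x')
N = 7   # truncation order in t

# Catalan generating function C(xt) = Q_132^{(0,0,0,0)}(t,x), truncated
C = sp.series((1 - sp.sqrt(1 - 4*x*t)) / (2*x*t), t, 0, N).removeO()
Q0000 = C
Q0001 = sp.series(1 / (1 - t*C), t, 0, N).removeO()

# For k = l = 1 the formula reduces to
# (1 - t*Q^{(0,0,0,0)} + t*Q^{(0,0,0,1)}) / (1 - t*Q^{(0,0,0,0)})
Q1001 = sp.series((1 - t*Q0000 + t*Q0001) / (1 - t*Q0000), t, 0, N).removeO()

print(sp.collect(sp.expand(Q1001), t))
# the coefficient of t^3 is 3 + 2*x and that of t^4 is 4 + 6*x + 4*x**2
\end{verbatim}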
\subsection{Explicit formulas for $Q^{(k,0,0,\ell)}_{n,132}(x)|_{x^r}$}
By Theorem \ref{thm:k00l}, we have that
\begin{eqnarray}\label{Qk00lgf--}
Q_{132}^{(k,0,0,1)}(t,x) &=&
\frac{t + (1 -tQ_{132}^{(k-1,0,0,0)}(t,x))
+t(Q_{132}^{(k-1,0,0,1)}(t,x)-1)}{1-tQ_{132}^{(k-1,0,0,0)}(t,x)} \nonumber \\
&=& \frac{1 -tQ_{132}^{(k-1,0,0,0)}(t,x)
+tQ_{132}^{(k-1,0,0,1)}(t,x)}{1-tQ_{132}^{(k-1,0,0,0)}(t,x)}.
\end{eqnarray}
We note that $Q_{132}^{(0,0,0,0)}(t,x) = C(tx)$ so that
$Q_{132}^{(0,0,0,0)}(t,0)=1$. As described in
the previous section, we have computed $Q_{132}^{(k,0,0,0)}(t,0)$
for small values of $k$ in \cite{kitremtie}.
Plugging those generating functions into (\ref{Qk00lgf--}), one
can compute that
\begin{eqnarray*}
Q_{132}^{(1,0,0,1)}(t,0) &=& \frac{1-t+t^2}{(1-t)^2},\\
Q_{132}^{(2,0,0,1)}(t,0) &=& \frac{1-2t+t^2+t^3}{1-3t+2t^2},\\
Q_{132}^{(3,0,0,1)}(t,0) &=& \frac{1-3t+2t^2+t^4}{1-4t+4t^2-t^3},\\
Q_{132}^{(4,0,0,1)}(t,0) &=& \frac{1-4t+4t^2-t^3+t^5}{1-5t+7t^2-3t^3}, \ \mbox{and}\\
Q_{132}^{(5,0,0,1)}(t,0) &=& \frac{1-5t+7t^2-3t^3+t^6}{1-6t+11t^2-7t^3+t^4}.
\end{eqnarray*}
It is easy to see that the maximum number of $\mathrm{MMP}{(1,0,0,1)}$-matches occurs
when either $\sigma$ ends with $1n$ or $n1$. It follows that
for $n \geq 3$, the highest power of $x$ in $Q^{(1,0,0,1)}_{n,132}(x)$ is
$x^{n-2}$ and its coefficient is $2C_{n-2}$.
More generally, it is easy to see that the maximum number of
$\mathrm{MMP}{(k,0,0,1)}$-matches occurs
when $\sigma \in S_n(132)$ ends with a shuffle of $1$ with
$(n-k+1)(n-k+2) \ldots n$. Thus, we have the following
theorem.
\begin{theorem}\label{highQk001}
For $n \geq k+1$, the highest power of $x$ in $Q^{(k,0,0,1)}_{n,132}(x)$ is
$x^{n-k-1}$ and its coefficient is $(k+1)C_{n-k-1}$.
\end{theorem}
We can also compute
\begin{eqnarray*}
&&Q_{132}^{(1,0,0,1)}(t,x)= 1+t+2 t^2+(3+2 x) t^3+(4+6 x+4 x^2) t^4+\\
&&(5+12 x+15 x^2+10 x^3) t^5+(6+20 x+36 x^2+42 x^3+28 x^4) t^6+\\
&&(7+30 x+70 x^2+112 x^3+126 x^4+84 x^5) t^7+\\
&&(8+42 x+120 x^2+240 x^3+360 x^4+396 x^5+264 x^6) t^8+\\
&&(9+56 x+189 x^2+450 x^3+825 x^4+1188 x^5+1287 x^6+858 x^7) t^9+\cdots.
\end{eqnarray*}
It is easy to explain some of these coefficients.
That is, we have the following theorem.
\begin{theorem} \
\begin{itemize}
\item[(i)] $\displaystyle Q^{(1,0,0,1)}_{n,132}(0) = n$ for all $n \geq 1$,
\item[(ii)] $\displaystyle Q^{(1,0,0,1)}_{n,132}(x)|_x = (n-1)(n-2)$ for all $n \geq 3$, and
\item[(iii)] $\displaystyle Q^{(1,0,0,1)}_{n,132}(x)|_{x^{n-3}} = 3C_{n-2}$ for all $n \geq 3$.
\end{itemize}
\end{theorem}
\begin{proof}
To see that $Q^{(1,0,0,1)}_{n,132}(0) =n$ for $n \geq 1$ note that
the only permutations $\sigma \in S_n(132)$ that have no
$\mathrm{MMP}{(1,0,0,1)}$-matches are the identity
$12\ldots n$ plus the permutations
of the form $n(n-1) \ldots (n-k)12 \ldots (n-k-1)$ for $k=0,\ldots, n-2$.
For $n \geq 3$, we claim that
$$a(n) = Q^{(1,0,0,1)}_{n,132}(x)|_x =(n-1)(n-2).$$
This is easy to see by induction. That is, there are
three ways to have a $\sigma \in S_n(132)$ with $\mathrm{mmp}^{(1,0,0,1)}(\sigma) =1$.
That is, $\sigma$ can start with $n$ in which case we have
$a(n-1) =(n-2)(n-3)$ ways to arrange $\sigma_2 \ldots \sigma_n$ or
$\sigma$ can start with $(n-1)n$ in which case there can
be no $\mathrm{MMP}(1,0,0,1)$ matches in $\sigma_3 \ldots \sigma_n$ which means
that we have $(n-2)$ choices to arrange $\sigma_3 \ldots \sigma_n$ or
$\sigma$ can end with $n$ in which case $\sigma_1 \ldots \sigma_{n-1}$ must have exactly one $\mathrm{MMP}(0,0,0,1)$-match so that by our previous results
in \cite{kitremtie}, we have
$n-2$ ways to arrange $\sigma_1 \ldots \sigma_{n-1}$. Thus,
$a(n) = (n-2)(n-3) +2(n-2) = (n-1)(n-2)$.
For $Q^{(1,0,0,1)}_{n,132}(x)|_{x^{n-3}}$, we note that
\begin{eqnarray*}
Q_{n,132}^{(1,0,0,1)}(x) &=& Q_{n-1,132}^{(0,0,0,1)}(x)+\sum_{i=1}^{n-1}
Q_{i-1,132}^{(0,0,0,0)}(x)Q_{n-i,132}^{(1,0,0,1)}(x)\\
&=& Q_{n-1,132}^{(0,0,0,1)}(x)+\sum_{i=1}^{n-1}
C_{i-1}x^{i-1}Q_{n-i,132}^{(1,0,0,1)}(x).
\end{eqnarray*}
Thus,
$$Q_{n,132}^{(1,0,0,1)}(x)|_{x^{n-3}} = Q_{n-1,132}^{(0,0,0,1)}(x)|_{x^{n-3}}
+\sum_{i=1}^{n-2}
C_{i-1}Q_{n-i,132}^{(1,0,0,1)}(x)|_{x^{n-i-2}}.$$
It was proved in \cite{kitremtie} that
$Q_{n,132}^{(0,0,0,1)}(x)|_{x^{n-2}} =C_{n-1}$ for $n \geq 2$ and,
by Theorem \ref{highQk001}, \\
$Q_{n,132}^{(1,0,0,1)}(x)|_{x^{n-2}} =2C_{n-2}$ for
$n \geq 2$. Thus, for $n \geq 3$,
\begin{eqnarray*}
Q_{n,132}^{(1,0,0,1)}(x)|_{x^{n-3}} &=& C_{n-2} + \sum_{i=1}^{n-2}
C_{i-1}2C_{n-i-2}\\
&=&C_{n-2}+ 2\sum_{i=1}^{n-2}
C_{i-1}C_{n-i-2} = C_{n-2} +2C_{n-2} =3C_{n-2}.
\end{eqnarray*}
\end{proof}
One can also compute that
\begin{align*}
&Q_{132}^{(2,0,0,1)}(t,x)= 1+t+2 t^2+5 t^3+(11+3 x) t^4+(23+13 x+6 x^2) t^5+
\ \ \ \ \ \ \ \ \ \ \ \ \ \\
&(47+40 x+30 x^2+15 x^3) t^6+(95+107 x+104 x^2+81 x^3+42 x^4) t^7+\\
&(191+266 x+308 x^2+301 x^3+238 x^4+126 x^5) t^8+\\
&(383+633 x+837 x^2+949 x^3+926 x^4+738 x^5+396 x^6) t^9+\cdots
\end{align*}
and
\begin{align*}
&Q_{132}^{(3,0,0,1)}(t,x) =1+t+2 t^2+5 t^3+14 t^4+
(38+4 x) t^5+(101+23 x+8 x^2) t^6+\ \ \ \ \ \\
&(266+92 x+51 x^2+20 x^3) t^7+(698+320 x+221 x^2+135 x^3+56 x^4) t^8+\\
&(1829+1038 x+821 x^2+614 x^3+392 x^4+168 x^5) t^9+\cdots.
\end{align*}
Here the sequence $(Q_{n,132}^{(2,0,0,1)}(0))_{n \geq 1}$, which
starts out $1,2,5,11,23,47,95,191, \ldots$,
is the sequence A083329 from the OEIS which counts the number
of set partitions $\pi$ of $\{1, \ldots, n\}$ which, when written
in {\em increasing form}, are such that the permutation
$flatten(\pi)$ avoids the patterns 213 and 312. For the increasing form of a set partition $\pi$, one
writes each part in increasing order, separates the parts by backslashes, and
orders the parts so that their minimal elements increase.
Then $flatten(\pi)$ is just the permutation that results
from removing the backslashes. For example, $\pi=13/257/468$ is written
in increasing form and $flatten(\pi) =13257468$.
\begin{problem} Find a bijection between the
$\sigma \in S_n(132)$ such that $\mathrm{mmp}^{(2,0,0,1)}(\sigma) =0$
and the set partitions $\pi$ of $\{1, \ldots, n\}$ such that $flatten(\pi)$ avoids
231 and 312.
\end{problem}
None of the sequences $(Q_{n,132}^{(k,0,0,1)}(0))_{n \geq 1}$ for
$k=3,4,5$ appear in the OEIS.
Similarly, one can compute that
\begin{equation*}\label{Qk002gf}
Q_{132}^{(k,0,0,2)}(t,x) =\frac{1 -(t+t^2)Q_{132}^{(k-1,0,0,0)}(t,x)+tQ_{132}^{(k-1,0,0,2)}(t,x)+
t^2Q_{132}^{(k-1,0,0,1)}(t,x)}{1-tQ_{132}^{(k-1,0,0,0)}(t,x)}.
\end{equation*}
Then one can use this formula to compute that
\begin{align*}
&Q_{132}^{(1,0,0,2)}(t,x)= 1+t+2 t^2+5 t^3+(9+5x) t^4+(14+18 x+10 x^2) t^5+\ \ \ \ \ \\
&(20+42 x+45 x^2+25 x^3) t^6+(27+80 x+126 x^2+126 x^3+70 x^4) t^7+\\
&(35 +135x +280x^2+392x^3+378x^4+210 x^5)t^8+\\
&(44+210 x+540 x^2+960 x^3+1260 x^4+1188 x^5+660 x^6) t^9+\cdots.
\end{align*}
It is easy to see that permutations $\sigma \in S_n(132)$ which
have the maximum number of $\mathrm{MMP}(1,0,0,2)$-matches in $\sigma$ are those
permutations that end in either $12n$, $21n$, $2n1$, $n12$ or $n21$.
Thus, the highest power of $x$ that occurs in $Q_{n,132}^{(1,0,0,2)}(x)$
is $x^{n-3}$ which has a coefficient of $5C_{n-3}$.
\begin{align*}
&Q_{132}^{(2,0,0,2)}(t,x)= 1+t+2 t^2+5 t^3+14 t^4+ (33+9x) t^5 +
(72+42x+18x^2)t^6+\ \ \ \ \ \ \\
&(151+135x+98x^2+45x^3) t^7+(310+370 x+358 x^2+266 x^3+126 x^4) t^8+\\
&(629+931 x+1093 x^2+1047 x^3+784 x^4+378 x^5) t^9+\cdots.
\end{align*}
It is easy to see that permutations $\sigma \in S_n(132)$ which
have the maximum number of $\mathrm{MMP}(2,0,0,2)$-matches in $\sigma$ are those
permutations that end in either a shuffle of $21$ and $(n-1)n$ or
$(n-1)n12$, $(n-1)12n$, and $12(n-1)n$.
Thus, the highest power of $x$ that occurs in $Q_{n,132}^{(2,0,0,2)}(x)$
is $x^{n-4}$ which has a coefficient of $9C_{n-4}$.
\begin{align*}
&Q_{132}^{(3,0,0,2)}(t,x) = 1+t+2 t^2+5 t^3+14 t^4+42 t^5+
(118+14x) t^6+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&(319+82x+28x^2) t^7+(847+329x+184x^2+70x^3) t^8+\\
&(2231+1138x+807x^2+490x^3+196x^4) t^9+\cdots.
\end{align*}
It is easy to see that permutations $\sigma \in S_n(132)$ which
have the maximum number of $\mathrm{MMP}(3,0,0,2)$-matches in $\sigma$ are those
permutations that end in either a shuffle of $21$ and $(n-2)(n-1)n$ or
$(n-2)(n-1)n12$, $(n-2)(n-1)12n$, $(n-2)12(n-1)n$, and $12(n-2)(n-1)n$.
Thus, the highest power of $x$ that occurs in $Q_{n,132}^{(3,0,0,2)}(x)$
is $x^{n-5}$ which has a coefficient of $14C_{n-5}$.
None of the sequences $(Q_{n,132}^{(k,0,0,2)}(0))_{n \geq 1}$ for
$k=1,2,3$ appear in the OEIS.
\section{$Q_{n,132}^{(0,k,\ell,0)}(x)=Q_{n,132}^{(0,0,\ell,k)}(x)$
where $k,\ell \geq 1$}
By Lemma \ref{sym}, we know that
$Q_{n,132}^{(0,k,\ell,0)}(x)= Q_{n,132}^{(0,0,\ell,k)}(x)$.
Thus, we will only consider $Q_{n,132}^{(0,k,\ell,0)}(x)$ in
this section.
Suppose that $n \geq k$.
It is clear that $n$ can never match
the pattern $\mathrm{MMP}(0,k,\ell,0)$ for $k \geq 1$ in any
$\sigma \in S_n(132)$.
For $i \geq k$, it is easy to see that as we sum
over all the permutations $\sigma$ in $S_n^{(i)}(132)$, our choices
for the structure for $A_i(\sigma)$ will contribute a factor
of $Q_{i-1,132}^{(0,k,\ell,0)}(x)$ to $Q_{n,132}^{(0,k,\ell,0)}(x)$ since
none of the elements to the right of $A_i(\sigma)$ have any effect on whether
an element of $A_i(\sigma)$ matches $\mathrm{MMP}(0,k,\ell,0)$.
The presence of $n$ and the elements of $A_i(\sigma)$ ensures
that an element in $B_i(\sigma)$ matches $\mathrm{MMP}(0,k,\ell,0)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(0,0,\ell,0)$ in $B_i(\sigma)$. Thus,
our choices for $B_i(\sigma)$ contribute a factor of
$Q^{(0,0,\ell,0)}_{n-i,132}(x)$ to $Q_{n,132}^{(0,k,\ell,0)}(x)$.
Now suppose $i < k$ and $j =n-i$. In this case, $A_i(\sigma)$ consists
of $i-1$ elements. In this situation, an element
of $B_i(\sigma)$ matches $\mathrm{MMP}(0,k,\ell,0)$ in $\sigma$ if and only if it matches $\mathrm{MMP}(0,k-i,\ell,0)$ in
$B_i(\sigma)$. Thus,
our choices for $B_i(\sigma)$ contribute a factor of
$Q^{(0,k-i,\ell,0)}_{n-i,132}(x)$
to $Q_{n,132}^{(0,k,\ell,0)}(x)$. As before, our choices
for the structure for $A_i(\sigma)$ will contribute a factor
of $Q_{i-1,132}^{(0,k,\ell,0)}(x)$ to $Q_{n,132}^{(0,k,\ell,0)}(x)$ but in
such a situation $Q_{i-1,132}^{(0,k,\ell,0)}(x) =C_{i-1}$.
It follows that for $n \geq k$,
\begin{eqnarray}\label{recQk00l}
Q_{n,132}^{(0,k,\ell,0)}(x) &=& \sum_{i=k}^{n}
Q_{i-1,132}^{(0,k,\ell,0)}(x)Q_{n-i,132}^{(0,0,\ell,0)}(x) + \nonumber \\
&& \sum_{j=1}^{k-1} C_{j-1} Q_{n-j,132}^{(0,k-j,\ell,0)}(x).
\end{eqnarray}
Multiplying both sides of (\ref{recQk00l}) by $t^n$, summing for $n \geq k$ and observing
that $Q_{j,132}^{(0,k,\ell,0)}(x) = C_j$ for $j \leq k-1$, we
see that for $k, \ell \geq 1$,
\begin{eqnarray*}\label{Qk00lc}
Q_{132}^{(0,k,\ell,0)}(t,x)- \sum_{j=0}^{k-1} C_jt^j &=&
t Q_{132}^{(0,0,\ell,0)}(t,x)\left(Q_{132}^{(0,k,\ell,0)}(t,x)-\sum_{s=0}^{k-2} C_st^s\right) +
\nonumber \\
&&t \sum_{i=0}^{k-2} C_{i}t^{i} \left(Q_{132}^{(0,k-i-1,\ell,0)}(t,x)-\sum_{s=0}^{k-i-2} C_st^s\right).
\end{eqnarray*}
It follows that we have the following theorem.
\begin{theorem}\label{thm:Q0kl0}
For all $k,\ell \geq 1$,
\begin{multline}\label{Q0kl0gf}
Q_{132}^{(0,k,\ell,0)}(t,x) = \\
\frac{C_{k-1} t^{k-1} + \sum_{j=0}^{k-2} C_j t^j \left(1 -tQ_{132}^{(0,0,\ell,0)}(t,x)
+t(Q_{132}^{(0,k-j-1,\ell,0)}(t,x)-\sum_{s=0}^{k-j-2}C_s t^s)\right)}{1-tQ_{132}^{(0,0,\ell,0)}(t,x)}.
\end{multline}
\end{theorem}
Since we can compute $Q_{132}^{(0,0,\ell,0)}(t,x)$ by
Theorem \ref{thm:Q00k0}, we can use (\ref{Q0kl0gf}) to
compute \\
$Q_{132}^{(0,k,\ell,0)}(t,x)$ for all $k, \ell \geq 1$.
\subsection{Explicit formulas for $Q^{(0,k,\ell,0)}_{n,132}(x)|_{x^r}$}
It follows from Theorem \ref{thm:Q0kl0} and Theorem \ref{thm:Q00k0} that
\begin{eqnarray*}
Q_{132}^{(0,1,\ell,0)}(t,0) &=& \frac{1}{1-tQ_{132}^{(0,0,\ell,0)}(t,0)}\\
&=& \frac{1}{1-t\frac{1}{1-t(C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1})}}\\
&=& \frac{1-t(C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1})}{1-t(1+C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1})}.
\end{eqnarray*}
Thus, one can compute that
\begin{eqnarray*}
&&Q_{132}^{(0,1,1,0)}(t,0) = \frac{1-t}{1-2t};\\
&&Q_{132}^{(0,1,2,0)}(t,0) = \frac{1-t-t^2}{1-2t-t^2};\\
&&Q_{132}^{(0,1,3,0)}(t,0) = \frac{1-t-t^2-2t^3}{1-2t-t^2-2t^3}, \ \mbox{and}\\
&&Q_{132}^{(0,1,4,0)}(t,0) = \frac{1-t-t^2-2t^3-5t^4}{1-2t-t^2-2t^3-5t^4}.\\
\end{eqnarray*}
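These rational functions are easy to confirm with a few lines of sympy (an added
illustration; the helper name is ours):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')

def Q01l0_at_x0(l):
    # Q_132^{(0,0,l,0)}(t,0) = 1/(1 - t(C_0 + C_1 t + ... + C_{l-1} t^{l-1}))
    S = sum(sp.catalan(j) * t**j for j in range(l))
    Q00l0 = 1 / (1 - t*S)
    # k = 1 case: Q_132^{(0,1,l,0)}(t,0) = 1/(1 - t*Q_132^{(0,0,l,0)}(t,0))
    return sp.cancel(1 / (1 - t*Q00l0))

for l in range(1, 5):
    print(l, Q01l0_at_x0(l))
# prints (up to how sympy normalizes signs) (1-t)/(1-2t),
# (1-t-t^2)/(1-2t-t^2), and so on, matching the list above
\end{verbatim}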
Similarly, one can compute
\begin{equation*}
Q_{132}^{(0,2,\ell,0)}(t,x) = \frac{1-tQ_{132}^{(0,0,\ell,0)}(t,x)+tQ_{132}^{(0,1,\ell,0)}(t,x)}{1-tQ_{132}^{(0,0,\ell,0)}(t,x)} = 1+\frac{tQ_{132}^{(0,1,\ell,0)}(t,x)}{1-tQ_{132}^{(0,0,\ell,0)}(t,x)}.
\end{equation*}
Note that
\begin{eqnarray*}
Q_{132}^{(0,2,\ell,0)}(t,0) &=& 1+ \frac{t\frac{1-t(C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1})}{1-t(1+C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1})}}{1-t\frac{1}{1-t(C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1})}}\\
&=& 1+\frac{t(1-t(C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1}))^2}{(1-t(1+C_0+C_1t+ \cdots +C_{\ell -1}t^{\ell -1}))^2}.
\end{eqnarray*}
Thus, it follows that
\begin{eqnarray*}
&&Q_{132}^{(0,2,1,0)}(t,0) = 1+t\left(\frac{1-t}{1-2t}\right)^2;\\
&&Q_{132}^{(0,2,2,0)}(t,0) = 1+t\left(\frac{1-t-t^2}{1-2t-t^2}\right)^2;\\
&&Q_{132}^{(0,2,3,0)}(t,0) =
1+t\left(\frac{1-t-t^2-2t^3}{1-2t-t^2-2t^3}\right)^2,\ \mbox{and}\\
&&Q_{132}^{(0,2,4,0)}(t,0) =
1+t\left(\frac{1-t-t^2-2t^3-5t^4}{1-2t-t^2-2t^3-5t^4}\right)^2.\\
\end{eqnarray*}
One can use (\ref{Q0kl0gf}) and our previous computations for
$Q_{132}^{(0,0,\ell,0)}(t,x)$ to compute \\
$Q_{132}^{(0,1,\ell,0)}(t,x)$.
\begin{align*}
&Q_{132}^{(0,1,1,0)}(t,x) = 1+t+2 t^2+(4+x) t^3+(8+5 x+x^2) t^4+
(16+17 x+8 x^2+x^3) t^5+\\
&(32+49 x+38 x^2+12 x^3+x^4) t^6+(64+129 x+141 x^2+77 x^3+17 x^4+x^5) t^7+\\
&(128+321 x+453 x^2+361 x^3+143 x^4+23 x^5+x^6) t^8+\\
&(256+769 x+1326 x^2+1399 x^3+834 x^4+247 x^5+30 x^6+x^7) t^9+\cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,1,2,0)}(t,x) = 1+t+2 t^2+5 t^3+(12+2 x) t^4+
(29+11 x+2 x^2) t^5+\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&(70+45 x+15 x^2+2 x^3) t^6+(169+158 x+81 x^2+19 x^3+2 x^4) t^7+\\
& (408+509 x+359 x^2+129 x^3+23 x^4+2 x^5) t^8+\\
&(985+1550 x+1409 x^2+700 x^3+189 x^4+27 x^5+2 x^6) t^9+\cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,1,3,0)}(t,x) = 1+t+2 t^2+5 t^3+14 t^4+ (37+5 x)t^5+
(98+29 x+5 x^2)t^6+\ \ \ \ \ \ \ \ \ \ \\
& (261+124 x+39 x^2+5 x^3)t^7+ (694+475 x+207 x^2+49 x^3+5 x^4)t^8+\\
&(1845+1680 x+963 x^2+310 x^3+59 x^4+5 x^5)t^9 + \cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,1,4,0)}(t,x) =1+t+2 t^2+5 t^3+14 t^4+42 t^5+(118+14 x)t^6+
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
& (331+84 x+14 x^2)t^7+ (934+370 x+112 x^2+14 x^3)t^8+ \\
&(2645+1455 x+608 x^2+140 x^3+14 x^4)t^9 + \cdots.
\end{align*}
We can explain the highest and second highest coefficients
that appear in $Q_{n,132}^{(0,1,\ell,0)}(x)$ for all
$\ell \geq 1$. That is, we have the following theorem.
\begin{theorem}\label{h2ndhQ01l0} \
\begin{itemize}
\item[(i)] For all $\ell \geq 1$ and $n \geq \ell +1$, the highest power of $x$ in
$Q_{n,132}^{(0,1,\ell,0)}(x)$
is $x^{n -\ell -1}$ and its coefficient is $C_\ell$.
\item[(ii)] $Q_{n,132}^{(0,1,1,0)}(x)|_{x^{n-3}} = 2+\binom{n-1}{2}$ for
all $n \geq 4$.
\item[(iii)] For all $\ell \geq 2$,
$Q_{n,132}^{(0,1,\ell,0)}(x)|_{x^{n-\ell -2}} =
C_{\ell+1} +C_{\ell} +2C_{\ell}(n-2 -\ell)$ for
all $n \geq 3+ \ell$.
\end{itemize}
\end{theorem}
\begin{proof}
For (i), it is easy to see that the maximum
number of $\mathrm{MMP}(0,1,\ell,0)$ matches occurs for a $\sigma \in S_n(132)$ if
$\sigma$ starts with $n$ followed by any arrangement of
$S_{\ell}(132)$ followed by $\ell +1, \ell +2, \ldots, n-1$ in
increasing order.
Thus, the highest power of $x$ in $Q_{n,132}^{(0,1,\ell,0)}(x)$
is $x^{n -\ell -1}$ and its coefficient is $C_\ell$.
For parts (ii) and (iii), we use the fact
that
\begin{equation*}
Q_{n,132}^{(0,1,\ell,0)}(x) =
\sum_{i=1}^n Q_{i-1,132}^{(0,1,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x).
\end{equation*}
It was proved in \cite{kitremtie} that for $n > \ell$,
the highest power of $x$ that occurs in
$ Q_{n,132}^{(0,0,\ell,0)}(x)$ is $x^{n-\ell}$ and its coefficient
is $C_{\ell}$. It follows that for $3 \leq i \leq n-1$, the highest power of $x$
that appears in
$Q_{i-1,132}^{(0,1,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x)$ is
less than $n -\ell -2$. Thus, we have three cases to consider. \\
\ \\
{\bf Case 1.} $i=1$. In this case
$Q_{i-1,132}^{(0,1,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x) =
Q_{n-1,132}^{(0,0,\ell,0)}(x)$ so that we get a contribution of $Q_{n-1,132}^{(0,0,\ell,0)}(x)|_{x^{n-\ell -2}}$.\\
\ \\
{\bf Case 2.} $i=2$. In this case
$Q_{i-1,132}^{(0,1,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x) =
Q_{n-2,132}^{(0,0,\ell,0)}(x)$ so that we get a contribution of
$Q_{n-2,132}^{(0,0,\ell,0)}(x)|_{x^{n-\ell -2}}= C_{\ell}$.\\
\ \\
{\bf Case 3.} $i=n$. In this case
$Q_{i-1,132}^{(0,1,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x) =
Q_{n-1,132}^{(0,1,\ell,0)}(x)$ so that we get a contribution of
$Q_{n-1,132}^{(0,1,\ell,0)}(x)|_{x^{n-\ell -2}}= C_{\ell}$.\\
Thus, it follows that
\begin{equation*}
Q_{n,132}^{(0,1,\ell,0)}(x)|_{x^{n-\ell -2}} = 2C_{\ell} +
Q_{n-1,132}^{(0,0,\ell,0)}(x)|_{x^{n-\ell -2}}.
\end{equation*}
Then parts (ii) and (iii) follow from the fact that it was
proved in \cite{kitremtie} that
\begin{itemize}
\item[] $Q_{n,132}^{(0,0,1,0)}(x)|_{x^{n-2}} = \binom{n}{2}$
for $n \geq 2$ and, for all $k \geq 2$,
\item[]
$Q_{n,132}^{(0,0,k,0)}(x)|_{x^{n-k-1}} = C_{k+1}-C_{k}+
2(n-k-1)C_{k}$ for $n \geq k+1$.
\end{itemize}
\end{proof}
Similarly, one can compute the following.
\begin{align*}
&Q_{132}^{(0,2,1,0)}(t,x) = 1+t+2 t^2+5 t^3+ (12+2x)t^4+ (28+12x+2x^2)t^5 +
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&(64+48x+18x^2+2x^3)t^6+(144+160 x+97 x^2+26 x^3+2 x^4)t^7+\\
&(320+480x+408x^2+184x^3+36x^4+2x^5)t^8+\\
& (704+1344 x+1479 x^2+958 x^3+327 x^4+48 x^5+2 x^6)t^9 + \cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,2,2,0)}(t,x) = 1+t+2 t^2+5 t^3+14 t^4+ (38+4 x)t^5+(102+26x+4x^2)t^6+\ \ \ \ \ \ \\
&(271+120 x+34 x^2+4 x^3) t^7 +(714+470 x+200 x^2+42 x^3+4 x^4)t^8+\\
&(1868+1672x+964x^2+304x^3+50x^4+4x^5)t^9 + \cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,2,3,0)}(t,x) = 1+t+2 t^2+5 t^3+14 t^4+42 t^5+(122+10 x)t^6+
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&(351+68 x+10 x^2)t^7+(1006+326x+88x^2+10x^3)t^8 + \\
&(2868+1364x+512x^2+108x^3+10x^4)t^9 + \cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,2,4,0)}(t,x) = 1+t+2 t^2+5 t^3+14 t^4+42 t^5+132 t^6+\\
& (401+28 x)t^7 +(1206 +196x+28x^2)t^8 + (3618+964 x+252 x^2+28 x^3)t^9 + \cdots.
\end{align*}
In this case, we can explicitly calculate the highest and
second highest coefficients that appear in
$Q_{n,132}^{(0,2,\ell,0)}(x)$ for sufficiently large $n$. That is,
we have the following theorem.
\begin{theorem}\label{h2ndhQ02l0} \
\begin{itemize}
\item[(i)] For all $\ell \geq 1$ and $n \geq 3+\ell$, the highest power of $x$ that appears in $Q_{n,132}^{(0,2,\ell,0)}(x)$ is $x^{n-2-\ell}$ which appears
with a coefficient of $2C_\ell$.
\item[(ii)] For all $n \geq 5$, $\displaystyle Q_{n,132}^{(0,2,1,0)}(x)|_{x^{n-4}} = 6 + 2\binom{n-2}{2}$.
\item[(iii)] For all $\ell \geq 2$ and $n \geq 4+ \ell$,
$\displaystyle Q_{n,132}^{(0,2,\ell,0)}(x)|_{x^{n-3-\ell}} =2C_{\ell +1} + 8C_{\ell} +
4C_{\ell}(n-4-\ell)$.
\end{itemize}
\end{theorem}
\begin{proof}
For (i), it is easy to see that the maximum
number of $\mathrm{MMP}(0,2,\ell,0)$-matches occurs for a $\sigma \in S_n(132)$ if
$\sigma$ starts with $(n-1)n$ or $n(n-1)$ followed by any arrangement of
$S_{\ell}(132)$ followed by $\ell +1, \ell +2, \ldots, n-2$ in
increasing order. Thus, the highest power of $x$ in $Q_{n,132}^{(0,2,\ell,0)}(x)$
is $x^{n -\ell -2}$ and its coefficient is $2C_\ell$.
For parts (ii) and (iii), we use the fact
that
\begin{equation*}
Q_{n,132}^{(0,2,\ell,0)}(x) = Q_{n-1,132}^{(0,1,\ell,0)}(x)+
\sum_{i=2}^n Q_{i-1,132}^{(0,2,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x).
\end{equation*}
It was proved in \cite{kitremtie} that for $n > \ell$,
the highest power of $x$ that occurs in
$Q_{n,132}^{(0,0,\ell,0)}(x)$ is $x^{n-\ell}$ and its coefficient
is $C_{\ell}$. Moreover, it was proved in \cite{kitremtie} that
$$Q_{n,132}^{(0,0,1,0)}(x)|_{x^{n-2}} = \binom{n}{2} \ \mbox{for } n \geq 2$$
and, for $\ell \geq 2$,
$$Q_{n,132}^{(0,0,\ell,0)}(x)|_{x^{n-1-\ell}} = C_{\ell +1}-C_{\ell}
+2C_{\ell}(n-1-\ell) \ \mbox{for } n \geq 1+\ell.$$
It follows that for $4 \leq i \leq n-1$, the highest power of $x$
that appears in \\
$Q_{i-1,132}^{(0,2,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x)$ is
less than $n -\ell -3$. Thus, we have four cases to consider when computing
$Q_{n,132}^{(0,2,1,0)}(x)|_{x^{n-4}}$.
\ \\
{\bf Case 1.} $Q_{n-1,132}^{(0,1,1,0)}(x)|_{x^{n-4}}$. In this case,
by Theorem \ref{h2ndhQ01l0}, we have that
$$Q_{n-1,132}^{(0,1,1,0)}(x)|_{x^{n-4}} = 2+\binom{n-2}{2} \ \mbox{for }
n \geq 5.$$
\ \\
{\bf Case 2.} $i=2$. In this case
$Q_{i-1,132}^{(0,2,1,0)}(x) Q_{n-i,132}^{(0,0,1,0)}(x) =
Q_{n-2,132}^{(0,0,1,0)}(x)$ and
$$Q_{n-2,132}^{(0,0,1,0)}(x)|_{x^{n-4}} = \binom{n-2}{2}
\ \mbox{for } n \geq 4.$$
{\bf Case 3.} $i=3$. In this case
$Q_{i-1,132}^{(0,2,1,0)}(x) Q_{n-i,132}^{(0,0,1,0)}(x) =
2Q_{n-3,132}^{(0,0,1,0)}(x)$ so that we get a contribution of
$2Q_{n-3,132}^{(0,0,1,0)}(x)|_{x^{n-4}}= 2C_{1}=2$ for $n \geq 5$.\\
\ \\
{\bf Case 4.} $i=n$. In this case
$Q_{i-1,132}^{(0,2,1,0)}(x) Q_{n-i,132}^{(0,0,1,0)}(x) =
Q_{n-1,132}^{(0,2,1,0)}(x)$ so that we get a contribution of
$Q_{n-1,132}^{(0,2,1,0)}(x)|_{x^{n-4}}= 2C_1=2$ for $n \geq 5$.\\
\ \\
Thus, it follows that
$$Q_{n,132}^{(0,2,1,0)}(x)|_{x^{n-4}}=6 +2\binom{n-2}{2} \ \mbox{for }
n \geq 5.$$
Similarly, we have four cases to consider when computing
$Q_{n,132}^{(0,2,\ell,0)}(x)|_{x^{n-3-\ell}}$ for $\ell \geq 2$.\\
\ \\
{\bf Case 1.} $Q_{n-1,132}^{(0,1,\ell,0)}(x)|_{x^{n-3-\ell}}$. In this case,
by Theorem \ref{h2ndhQ01l0},
we have that
$$Q_{n-1,132}^{(0,1,\ell,0)}(x)|_{x^{n-3-\ell}} =
C_{\ell +1} +C_{\ell} + 2C_{\ell}(n-3-\ell)
\ \mbox{for } n \geq 4+\ell.$$
\ \\
{\bf Case 2.} $i=2$. In this case
$Q_{i-1,132}^{(0,2,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x) =
Q_{n-2,132}^{(0,0,\ell,0)}(x)$ and
$$Q_{n-2,132}^{(0,0,\ell,0)}(x)|_{x^{n-3-\ell}} =
C_{\ell +1} -C_{\ell}+2C_{\ell}(n-3-\ell)
\ \mbox{for } n \geq 3+\ell.$$
{\bf Case 3.} $i=3$. In this case
$Q_{i-1,132}^{(0,2,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x) =
2Q_{n-3,132}^{(0,0,\ell,0)}(x)$ so that we get a contribution of
$2Q_{n-3,132}^{(0,0,\ell,0)}(x)|_{x^{n-3-\ell}}= 2C_{\ell}$ for $n \geq 4+ \ell$.\\
\ \\
{\bf Case 4.} $i=n$. In this case
$Q_{i-1,132}^{(0,2,\ell,0)}(x) Q_{n-i,132}^{(0,0,\ell,0)}(x) =
Q_{n-1,132}^{(0,2,\ell,0)}(x)$ so that we get a contribution of
$Q_{n-1,132}^{(0,2,\ell,0)}(x)|_{x^{n-3-\ell}}= 2C_\ell$ for $n \geq 4+\ell$.\\
\ \\
Thus, it follows that for $n \geq 4+\ell$,
\begin{eqnarray*}
Q_{n,132}^{(0,2,\ell,0)}(x)|_{x^{n-3-\ell}}&=&
2C_{\ell +1} +4C_{\ell}+4C_{\ell}(n-3-\ell) \\
&=& 2C_{\ell +1} +8C_{\ell}+4C_{\ell}(n-4-\ell).
\end{eqnarray*}
For example, when $\ell =2$, we obtain that
$$Q_{n,132}^{(0,2,2,0)}(x)|_{x^{n-5}} =26+ 8(n-6) \ \mbox{for } n \geq 6$$
and, when $\ell =3$, we obtain that
$$Q_{n,132}^{(0,2,3,0)}(x)|_{x^{n-6}} =68+ 20(n-7) \ \mbox{for } n \geq 7$$
which agrees with the series that we computed.
\end{proof}
\section{$Q_{n,132}^{(0,k,0,\ell)}(x)$ where $k,\ell \geq 1$}
Suppose that $n \geq k+\ell $.
It is clear that $n$ can never match
the pattern $\mathrm{MMP}(0,k,0,\ell)$ for $k \geq 1$ in any
$\sigma \in S_n(132)$. There are three cases that we
have to consider when dealing with the contribution of the permutations
of $S^{(i)}_n(132)$ to $Q^{(0,k,0,\ell)}_{n,132}(x)$.\\
\ \\
{\bf Case 1.} $i \leq k-1$.
It is easy to see that as we sum
over all the permutations $\sigma$ in $S_n^{(i)}(132)$, our choices
for the structure for $A_i(\sigma)$ will contribute a factor
of $C_{i-1}$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$ since no element in $A_i(\sigma)$ can match $\mathrm{MMP}(0,k,0,\ell)$. The presence of $n$ plus the
elements in $A_i(\sigma)$ ensure that an element
in $B_i(\sigma)$ matches $\mathrm{MMP}(0,k,0,\ell)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(0,k-i,0,\ell)$ in $B_i(\sigma)$. Hence
our choices for $B_i(\sigma)$ contribute a factor of
$Q^{(0,k-i,0,\ell)}_{n-i,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.
Thus, in this case, the elements of $S_n^{(i)}(132)$ contribute
$C_{i-1}Q^{(0,k-i,0,\ell)}_{n-i,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.\\
\ \\
{\bf Case 2.} $k \leq i \leq n-\ell$. Note that in this case, there are at least $k$ elements in $A_i(\sigma) \cup \{n\}$ and at least $\ell$ in $B_i(\sigma)$.
The presence of the
elements in $B_i(\sigma)$ ensures that an element
in $A_i(\sigma)$ matches $\mathrm{MMP}(0,k,0,\ell)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(0,k,0,0)$ in $A_i(\sigma)$. Hence
our choices for $A_i(\sigma)$ contribute a factor of
$Q^{(0,k,0,0)}_{i-1,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.
The presence of $n$ plus the
elements in $A_i(\sigma)$ ensures that an element
in $B_i(\sigma)$ matches $\mathrm{MMP}(0,k,0,\ell)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(0,0,0,\ell)$ in $B_i(\sigma)$. Thus,
our choices for $B_i(\sigma)$ contribute a factor of
$Q^{(0,0,0,\ell)}_{n-i,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.
Thus, in this case, the elements of $S_n^{(i)}(132)$ contribute
$Q^{(0,k,0,0)}_{i-1,132}(x)Q^{(0,0,0,\ell)}_{n-i,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.\\
\ \\
{\bf Case 3.} $i > n-\ell$.
Let $j = n-i$ so that $j < \ell$.
It is easy to see that as we sum
over all the permutations $\sigma$ in $S_n^{(i)}(132)$, our choices
for the structure for $B_i(\sigma)$ will contribute a factor
of $C_{j}$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$ since no element in $B_i(\sigma)$ can match $\mathrm{MMP}(0,k,0,\ell)$. The presence of the
elements in $B_i(\sigma)$ ensures that an element
in $A_i(\sigma)$ matches $\mathrm{MMP}(0,k,0,\ell)$ in $\sigma$ if
and only if it matches $\mathrm{MMP}(0,k,0,\ell-j)$ in $A_i(\sigma)$. Hence
our choices for $A_i(\sigma)$ contribute a factor of
$Q^{(0,k,0,\ell-j)}_{n-j-1,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.
Thus, in this case, the elements of $S_n^{(i)}(132)$ contribute
$C_{j}Q^{(0,k,0,\ell-j)}_{n-j-1,132}(x)$ to $Q_{n,132}^{(0,k,0,\ell)}(x)$.\\
It follows that for $n \geq k+\ell$,
\begin{multline} \label{Q0k0lrec}
Q^{(0,k,0,\ell)}_{n,132}(x) = \\
\sum_{i=1}^{k-1}
C_{i-1} Q^{(0,k-i,0,\ell)}_{n-i,132}(x) +
\sum_{i=k}^{n-\ell} Q^{(0,k,0,0)}_{i-1,132}(x)Q^{(0,0,0,\ell)}_{n-i,132}(x) +
\sum_{j=0}^{\ell -1} C_j Q^{(0,k,0,\ell -j)}_{n-j-1,132}(x).
\end{multline}
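Before passing to generating functions, the recursion can also be verified numerically for small parameters by computing each $Q$-polynomial by brute force and comparing the two sides of (\ref{Q0k0lrec}). The following Python sketch (with the same conventions as the earlier sketch; the helper names are ours) does this for $(k,\ell)\in\{(1,1),(2,1),(2,2)\}$ and $n\leq 7$.
\begin{verbatim}
# Check the recursion (Q0k0lrec) for small n, k, l by brute force.
from itertools import permutations
from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

def avoids_132(p):
    n = len(p)
    return not any(p[i] < p[k] < p[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

def mmp_matches(p, a, b, c, d):
    n, cnt = len(p), 0
    for i in range(n):
        ne = sum(p[j] > p[i] for j in range(i + 1, n))
        nw = sum(p[j] > p[i] for j in range(i))
        sw = sum(p[j] < p[i] for j in range(i))
        se = sum(p[j] < p[i] for j in range(i + 1, n))
        cnt += (ne >= a and nw >= b and sw >= c and se >= d)
    return cnt

def Q(n, quad):
    # coefficient list of Q_{n,132}^{quad}(x); index = power of x
    coeffs = [0] * (n + 1)
    for p in permutations(range(1, n + 1)):
        if avoids_132(p):
            coeffs[mmp_matches(p, *quad)] += 1
    return coeffs

def add(a, b):
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def rhs(n, k, l):
    total = [0]
    for i in range(1, k):
        total = add(total, [catalan(i - 1) * c for c in Q(n - i, (0, k - i, 0, l))])
    for i in range(k, n - l + 1):
        total = add(total, mul(Q(i - 1, (0, k, 0, 0)), Q(n - i, (0, 0, 0, l))))
    for j in range(l):
        total = add(total, [catalan(j) * c for c in Q(n - j - 1, (0, k, 0, l - j))])
    return total

for k, l in ((1, 1), (2, 1), (2, 2)):
    for n in range(k + l, 8):
        print(k, l, n, trim(Q(n, (0, k, 0, l))) == trim(rhs(n, k, l)))
\end{verbatim}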
Multiplying both sides of (\ref{Q0k0lrec}) by $t^n$ and summing, we see
that \\
\ \\
$\displaystyle Q_{132}^{(0,k,0,\ell)}(t,x)
- \sum_{j=0}^{k+\ell -1}C_jt^j = t\left(\sum_{j=0}^{k-2} C_j t^j \left(Q_{132}^{(0,k-j-1,0,\ell)}(t,x) -
\sum_{s=0}^{k+\ell-j-2} C_s t^s\right)\right) + $ \\
\ \ \ \ $\displaystyle t \left(Q_{132}^{(0,k,0,0)}(t,x) -
\sum_{u=0}^{k-2} C_u t^u\right)\left(Q_{132}^{(0,0,0,\ell)}(t,x) -
\sum_{v=0}^{\ell-1} C_v t^v\right)+ $ \\
\ \ \ \ $\displaystyle t \left(\sum_{j=0}^{\ell-1} C_j t^j \left(Q_{132}^{(0,k,0,\ell-j)}(t,x) -
\sum_{w=0}^{k+\ell-j-2} C_w t^w\right)\right).$\\
\ \\
Thus
\begin{multline}\label{Q0k0lgf}
Q_{132}^{(0,k,0,\ell)}(t,x) = \\
\sum_{j=0}^{k+\ell -1}C_jt^j +
t\left(\sum_{j=0}^{k-2} C_j t^j \left(Q_{132}^{(0,k-j-1,0,\ell)}(t,x) -
\sum_{s=0}^{k+\ell-j-2} C_s t^s\right)\right) + \\
t \left(Q_{132}^{(0,k,0,0)}(t,x) -
\sum_{u=0}^{k-2} C_u t^u\right)\left(Q_{132}^{(0,0,0,\ell)}(t,x) -
\sum_{v=0}^{\ell-1} C_v t^v\right)+ \\
t \left(\sum_{j=0}^{\ell-1} C_j t^j \left(Q_{132}^{(0,k,0,\ell-j)}(t,x) -
\sum_{w=0}^{k+\ell-j-2} C_w t^w\right)\right).
\end{multline}
Note that the $j=0$ summand of the last sum on the right-hand side of
(\ref{Q0k0lgf}) is $t(Q_{132}^{(0,k,0,\ell)}(t,x) - \sum_{w=0}^{k+\ell-2} C_w t^w)$,
so that we can bring the term $tQ_{132}^{(0,k,0,\ell)}(t,x)$ to the other side
and solve for
$Q_{132}^{(0,k,0,\ell)}(t,x)$ to obtain the following
theorem.
\begin{theorem}\label{thm:Q0k0l} For all $k,\ell \geq 1$,
\begin{equation}\label{Q0k0lgf2}
Q_{132}^{(0,k,0,\ell)}(t,x) = \frac{\Phi_{k,\ell}(t,x)}{1-t}
\end{equation}
where \\
$\displaystyle
\Phi_{k,\ell}(t,x) = \sum_{j=0}^{k+\ell -1}C_jt^j -\sum_{j=0}^{k+\ell-2}C_jt^{j+1}
+t\left(\sum_{j=0}^{k-2} C_j t^j \left(Q_{132}^{(0,k-j-1,0,\ell)}(t,x) -
\sum_{s=0}^{k+\ell-j-2} C_s t^s\right)\right) + $ \\
$\displaystyle t \left(Q_{132}^{(0,k,0,0)}(t,x) -
\sum_{u=0}^{k-2} C_u t^u\right)\left(Q_{132}^{(0,0,0,\ell)}(t,x) -
\sum_{v=0}^{\ell-1} C_v t^v\right)+ $ \\
$\displaystyle t \left(\sum_{j=1}^{\ell-1} C_j t^j \left(Q_{132}^{(0,k,0,\ell-j)}(t,x) -
\sum_{w=0}^{k+\ell-j-2} C_w t^w\right)\right)$.
\end{theorem}
Note that we can compute $Q_{132}^{(0,k,0,0)}(t,x)$ and
$Q_{132}^{(0,0,0,\ell)}(t,x)$ by Theorem \ref{thm:Q0k00} so
that we can use (\ref{Q0k0lgf2}) to compute $Q_{132}^{(0,k,0,\ell)}(t,x)$
for all $k,\ell \geq 1$.
\subsection{Explicit formulas for $Q^{(0,k,0,\ell)}_{n,132}(x)|_{x^r}$}
It follows from Theorem \ref{thm:Q0k0l} that
\begin{equation*}\label{Q0101}
Q_{132}^{(0,1,0,1)}(t,x) = \frac{1+tQ_{132}^{(0,1,0,0)}(t,x)(Q_{132}^{(0,0,0,1)}(t,x)-1)}{1-t},
\end{equation*}
and
\begin{multline}\label{Q0201}
Q_{132}^{(0,2,0,1)}(t,x) = \\
\frac{1+tQ_{132}^{(0,1,0,1)}(t,x)+
tQ_{132}^{(0,2,0,0)}(t,x)Q_{132}^{(0,0,0,1)}(t,x)-tQ_{132}^{(0,2,0,0)}(t,x)-tQ_{132}^{(0,0,0,1)}(t,x)}{1-t}.
\end{multline}
Similarly, using the fact that
$$Q_{132}^{(0,2,0,0)}(t,x) =Q_{132}^{(0,0,0,2)}(t,x) \mbox{ and }
Q_{132}^{(0,2,0,1)}(t,x) =Q_{132}^{(0,1,0,2)}(t,x),$$ one can show that
\begin{multline}\label{Q0202}
Q_{132}^{(0,2,0,2)}(t,x) =\\
\frac{1+(t+t^2)Q_{132}^{(0,2,0,1)}(t,x)+ t(Q_{132}^{(0,2,0,0)}(t,x))^2 -(2t+t^2)Q_{132}^{(0,2,0,0)}(t,x)}{1-t}.
\end{multline}
Here are the first few terms of these series.
\begin{align*}
&Q_{132}^{(0,1,0,1)}(t,x) =1+t+2 t^2+(4+x) t^3+(7+5 x+2 x^2) t^4+(11+14 x+12 x^2+5 x^3) t^5+\\
& (16+30 x+39 x^2+33 x^3+14 x^4) t^6+
(22+55 x+95 x^2+117 x^3+98 x^4+42 x^5) t^7+\\
& (29+91 x+195 x^2+309 x^3+368 x^4+306 x^5+132 x^6) t^8+\\
&(37+140 x+357 x^2+684 x^3+1028 x^4+1197 x^5+990 x^6+429 x^7) t^9+\cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,2,0,1)}(t,x) =1+t+2 t^2+5 t^3+(12+2 x) t^4+(25+13 x+4 x^2) t^5+
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
& (46+45 x+31 x^2+10 x^3) t^6+(77+115 x+124 x^2+85 x^3+28 x^4) t^7+\\
&(120+245 x+359 x^2+370 x^3+252 x^4+84 x^5) t^8+\\
&(177+462 x+854 x^2+1159 x^3+1160 x^4+786 x^5+264 x^6) t^9+\cdots.
\end{align*}
\begin{align*}
&Q_{132}^{(0,2,0,2)}(t,x) = 1+t+2 t^2+5 t^3+14 t^4+(38+4 x) t^5+
(91+33 x+8 x^2) t^6+
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \\
&(192+139 x+78 x^2+20 x^3) t^7+(365+419 x+377 x^2+213 x^3+56 x^4) t^8+\\
&(639+1029 x+1280 x^2+1116 x^3+630 x^4+168 x^5) t^9+\cdots.
\end{align*}
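The expansions above can be reproduced independently by brute force over $S_{n}(132)$. For instance, the following Python sketch (same conventions and caveats as the earlier sketches; the helper names are ours) recomputes the coefficients of $t^{7}$ in all three series.
\begin{verbatim}
# Recompute the t^7 coefficients of the three series above by brute force.
from itertools import permutations

def avoids_132(p):
    n = len(p)
    return not any(p[i] < p[k] < p[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

def mmp_matches(p, a, b, c, d):
    n, cnt = len(p), 0
    for i in range(n):
        ne = sum(p[j] > p[i] for j in range(i + 1, n))
        nw = sum(p[j] > p[i] for j in range(i))
        sw = sum(p[j] < p[i] for j in range(i))
        se = sum(p[j] < p[i] for j in range(i + 1, n))
        cnt += (ne >= a and nw >= b and sw >= c and se >= d)
    return cnt

def Q(n, quad):
    coeffs = [0] * (n + 1)
    for p in permutations(range(1, n + 1)):
        if avoids_132(p):
            coeffs[mmp_matches(p, *quad)] += 1
    while coeffs and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

print(Q(7, (0, 1, 0, 1)))   # [22, 55, 95, 117, 98, 42]
print(Q(7, (0, 2, 0, 1)))   # [77, 115, 124, 85, 28]
print(Q(7, (0, 2, 0, 2)))   # [192, 139, 78, 20]
\end{verbatim}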
It is easy to find the coefficients of the highest power of
$x$ in $Q^{(0,k,0,\ell)}_{n,132}(x)$. That is, we have
the following theorem.
\begin{theorem}
For $n \geq k+\ell +1$, the highest power of $x$ that occurs in
$Q^{(0,k,0,\ell)}_{n,132}(x)$ is $x^{n-k -\ell}$
which occurs with a coefficient of $C_kC_{\ell}C_{n-k - \ell}$.
\end{theorem}
\begin{proof}
It is easy to see that the maximum number of
$\mathrm{MMP}(0,k,0,\ell)$-matches occurs for a $\sigma \in S_n(132)$ if
$\sigma$ starts with some $132$-avoiding
rearrangement of $n,n-1, \ldots, n-k+1$ and
ends with some $132$-avoiding rearrangement of $1, 2,\ldots, \ell$.
In the middle of such a permutation, we can choose any $132$-avoiding permutation
of $\ell +1, \ldots, n-k$. It follows that the highest power of
$x$ which occurs in $Q^{(0,k,0,\ell)}_{n,132}(x)$ is $x^{n-k -\ell}$
which occurs with a coefficient of $C_kC_{\ell}C_{n-k - \ell}$.
\end{proof}
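For instance, at $n=9$ this predicts leading coefficients $C_{1}C_{1}C_{7}=429$, $C_{2}C_{1}C_{6}=264$ and $C_{2}C_{2}C_{5}=168$ for $(k,\ell)=(1,1),(2,1),(2,2)$, in agreement with the $t^{9}$ terms of the series displayed above; a quick check in Python:
\begin{verbatim}
from math import comb
catalan = lambda m: comb(2 * m, m) // (m + 1)
print([catalan(k) * catalan(l) * catalan(9 - k - l)
       for k, l in ((1, 1), (2, 1), (2, 2))])   # [429, 264, 168]
\end{verbatim}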
We can also find an explicit formula for the coefficient
of the second highest power of $x$ that occurs in $Q^{(0,1,0,1)}_{n,132}(x)$.
\begin{theorem} For $n \geq 4$,
$$Q^{(0,1,0,1)}_{n,132}(x)|_{x^{n-3}} = 2C_{n-2} +C_{n-3}.$$
\end{theorem}
\begin{proof}
By (\ref{Q0k0lrec}) with $k=\ell=1$, for $n \geq 3$,
\begin{equation}\label{0101rec}
Q^{(0,1,0,1)}_{n,132}(x) = Q^{(0,1,0,1)}_{n-1,132}(x) + \sum_{i=1}^{n-1}
Q^{(0,1,0,0)}_{i-1,132}(x)Q^{(0,0,0,1)}_{n-i,132}(x).
\end{equation}
We proved in \cite{kitremtie} that for all $n \geq 0$,
$Q^{(1,0,0,0)}_{n,132}(x) = Q^{(0,1,0,0)}_{n,132}(x) =
Q^{(0,0,0,1)}_{n,132}(x)$.
In addition, we proved that for $n \geq 1$, the highest power of
$x$ that occurs in $Q^{(1,0,0,0)}_{n,132}(x)$ is $x^{n-1}$ and
$Q^{(1,0,0,0)}_{n,132}(x)|_{x^{n-1}} = C_{n-1}$ and that for
$n \geq 2$, $Q^{(1,0,0,0)}_{n,132}(x)|_{x^{n-2}} = C_{n-1}$.
It follows that for $n \geq 4$,
\begin{eqnarray*}
Q^{(0,1,0,1)}_{n,132}(x)|_{x^{n-3}} &=&
Q^{(0,1,0,1)}_{n-1,132}(x)|_{x^{n-3}} +
Q^{(0,0,0,1)}_{n-1,132}(x)|_{x^{n-3}} + \\
&&
\sum_{i=2}^{n-1}
Q^{(0,1,0,0)}_{i-1,132}(x)|_{x^{i-2}}Q^{(0,0,0,1)}_{n-i,132}(x)|_{x^{n-i-1}}\\
&=&C_{n-3} + C_{n-2} + \sum_{i=2}^{n-1} C_{i-2} C_{n-i-1} \\
&=& C_{n-3} + C_{n-2} + C_{n-2} = 2C_{n-2} +C_{n-3}.
\end{eqnarray*}
\end{proof}
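For $4\leq n\leq 9$ the values $2C_{n-2}+C_{n-3}$ are $5,12,33,98,306,990$, which indeed match the coefficients of $x^{n-3}$ in the series for $Q_{132}^{(0,1,0,1)}(t,x)$ displayed earlier; in Python:
\begin{verbatim}
from math import comb
catalan = lambda m: comb(2 * m, m) // (m + 1)
print([2 * catalan(n - 2) + catalan(n - 3) for n in range(4, 10)])
# [5, 12, 33, 98, 306, 990]
\end{verbatim}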
We can also get explicit formulas for
$Q_{132}^{(0,1,0,1)}(t,0)$, $Q_{132}^{(0,2,0,1)}(t,0)$, and $Q_{132}^{(0,2,0,2)}(t,0)$ based
on the fact that we know that
\begin{eqnarray*}
Q_{132}^{(0,1,0,0)}(t,0)&=& Q_{132}^{(0,0,0,1)}(t,0)= \frac{1}{1-t}\ \mbox{and}\\
Q_{132}^{(0,2,0,0)}(t,0)&=& Q_{132}^{(0,0,0,2)}(t,0)= \frac{1-t+t^2}{(1-t)^2}.
\end{eqnarray*}
Then one can use the above formulas to compute that
\begin{eqnarray*}
Q_{132}^{(0,1,0,1)}(t,0)&=&\frac{1-2t+2t^2}{(1-t)^3};\\
Q_{132}^{(0,2,0,1)}(t,0)&=&\frac{1-3t+4t^2-t^3+t^4}{(1-t)^4},\ \mbox{and}\\
Q_{132}^{(0,2,0,2)}(t,0)&=&\frac{1-4t+7t^2-5t^3+4t^4+2t^5}{(1-t)^5}.
\end{eqnarray*}
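Expanding these rational functions, for instance with SymPy as in the sketch below, reproduces the coefficients of $x^{0}$ in the three series displayed earlier (the sketch and its variable names are ours, included only as a verification aid).
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
fns = {
    '(0,1,0,1)': (1 - 2*t + 2*t**2) / (1 - t)**3,
    '(0,2,0,1)': (1 - 3*t + 4*t**2 - t**3 + t**4) / (1 - t)**4,
    '(0,2,0,2)': (1 - 4*t + 7*t**2 - 5*t**3 + 4*t**4 + 2*t**5) / (1 - t)**5,
}
for name, f in fns.items():
    poly = sp.Poly(sp.series(f, t, 0, 10).removeO(), t)
    print(name, list(reversed(poly.all_coeffs())))
# the first list is 1 + binomial(n,2): 1, 1, 2, 4, 7, 11, 16, 22, 29, 37
\end{verbatim}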
It is then easy to compute $Q^{(0,1,0,1)}_{n,132}(0) = 1 + \binom{n}{2}$
for $n \geq 2$. This is a known fact \cite[Table 6.1]{kit} since avoidance of the pattern $\mathrm{MMP}(0,1,0,1)$ is equivalent to avoiding the (classical) pattern 321 (thus, here we deal with avoidance of 132 and 321).
The sequence $\{Q^{(0,2,0,1)}_{n,132}(0)\}_{n \geq 1}$ is
A116731 in the OEIS counting the number of permutations
of length $n$ which avoid the patterns $321$, $2143$, and $3142$.
\end{document} |
\begin{document}
\title{On the Boundedness of Globally $F$-split varieties}
\author{Liam Stigant}
\address{Department of Mathematics, Imperial College London, 180 Queen's Gate,
London SW7 2AZ, UK}
\email{[email protected]}
\subjclass[2020]{14J30, 14J32, 14M22, 14E30, 14G17, 14B05}
\date{\today}
\pagestyle{myheadings} \markboth{
Liam Stigant
}{
On the Boundedness of Globally $F$-split varieties
}
\begin{abstract}
This paper proposes the use of $F$-split and globally $F$-regular conditions in the pursuit of BAB type results in positive characteristic. The main technical work comes in the form of a detailed study of threefold Mori fibre spaces over positive dimensional bases. As a consequence we prove the main theorem, which reduces birational boundedness for a large class of varieties to the study of prime Fano varieties.
\end{abstract}
\maketitle
\tableofcontents
\setlength{\parskip}{0.5em}
\section{Introduction}
There has been great success proving boundedness results in characteristic zero using the techniques and results of the LMMP. Beyond dimension $2$, however, there has not been much progress in positive characteristic. This is perhaps a consequence of the relative newness of the LMMP results in this setting, but it also points to difficulties unique to characteristic $p$.
In this direction, we prove the following.
\begin{theorem}\label{Main}
Fix $0 < \delta, \epsilon <1$. Let $S_{\delta,\epsilon}$ be the set of threefolds satisfying the following conditions
\begin{itemize}
\item $X$ is a projective variety over an algebraically closed field of characteristic $p >7, \frac{2}{\delta}$;
\item $X$ is terminal, rationally chain connected and $F$-split;
\item $(X,\Deltaelta)$ is $\epsilon$-klt and log Calabi-Yau for some boundary $\Deltaelta$; and
\item The coefficients of $\Deltaelta$ are greater than $\delta$.
\end{itemize}
Then there is a set $S'_{\delta,\epsilon}$, bounded over $\text{Spec}(\mathbb{Z})$ such that any $X\in S_{\delta,\epsilon}$ is either birational to a member of $S'_{\delta,\epsilon}$ or to some $X'\in S_{\delta,\epsilon}$, Fano with Picard number $1$.
\end{theorem}
The constraints on the characteristic of the field are required to control the singularities arising in terminal Mori fibrations. In particular the $p>7$ requirement ensures that terminal del Pezzo fibrations have generically smooth fibres and the $p> \frac{2}{\delta}$ is needed to control the singularities appearing in the base of a conic bundle. This in turn allows for lower dimensional boundedness results to be applied.
The condition that $X$ be terminal is to allow us to reduce to the case that $X$ is a terminal Mori fibre space. While we might normally achieve this by taking a terminalisation $\tilde{X} \to X$, we cannot do so while also ensuring that the coefficients of $\tilde{\Delta}$ are still bounded below. In fact while bounding the coefficients below is used to prove a canonical bundle formula for Mori fibre spaces of relative dimension $1$ it is in many ways the relative dimension $2$ case that forces the assumption $X$ is terminal.
If $(X,\Deltaelta) \to S$ is a klt Mori fibre space with coefficients bounded below by $\frac{2}{p}$ then we may freely take a terminalisation and run an MMP to obtain a tame conic bundle, which is what we require for our boundedness proof. If however the relative dimension is $2$ then after taking a terminalisation and running an MMP we may end with a Mori fibration of relative dimension $1$, where we cannot easily control the singularities of the base. This happens whenever $X$ is singular along a curve $C$ which maps inseparably onto the base and we expect this is the only way it might happen.
The main motivation for this result comes from \cite{chen2018birational} where a similar result is proven in the characteristic zero setting. More generally we have the following generalisation of BAB, which essentially appeared in \cite{mckernan2003threefold} and remains unsolved even in characteristic $0$.
\begin{conjecture}
Fix $\kappa$, an algebraically closed field of characteristic $0$, let $d$ be a natural number and take $\epsilon$ a positive real number. Then the projective varieties $X$ over $\kappa$ such that
\begin{itemize}
\item $X$ has dimension $d$;
\item $(X,B)$ is $\epsilon$-klt for some boundary $B$;
\item $-(K_{X}+B)$ is nef; and
\item $X$ is rationally connected;
\end{itemize}
are bounded.
\end{conjecture}
With the LMMP for klt pairs known in dimension $3$ and characteristic $p>5$, it is natural to turn our attention to results and conjectures of this type in positive and mixed characteristic. There are several major problems one would face in the pursuit of such a result, even in the weaker case of birational boundedness in dimension $3$, which do not arise in characteristic zero. Perhaps the most immediate is that $X$ rationally connected no longer removes the possibility that $K_{X} \equiv 0$. For example, in positive characteristic there are families of K3 surfaces which are rationally connected. It is not clear then, even in dimension $2$, that such a result would hold.
It is also very difficult to control the singularities of the base, and indeed the fibres, of a Mori fibre space, which makes proofs of an inductive nature very challenging. The failure of Kawamata-Viehweg vanishing presents a similar difficulty.
Unique to positive characteristic, we have singularities characterised by properties of the Frobenius morphism. In particular there are notions of globally $F$-split and globally $F$-regular which can be thought of as positive characteristic analogues of lc log Calabi-Yau varieties and klt log Fano varieties. While the exact nature of this analogy is the subject of a variety of results and conjectures, it is expected, and often known, that these varieties should behave similarly to their characteristic zero counterparts.
Most notably, in this context, the $F$-split and globally $F$-regular conditions are preserved under the steps of the LMMP including Mori fibrations. In fact the conditions are also preserved under taking a general fibre of a fibration. They also come naturally equipped with vanishing theorems, with globally $F$-regular pairs satisfying full Kawamata-Viehweg vanishing.
We also have some relevant characterisations of uniruled $F$-split varieties. If $X$ is smooth it cannot be simultaneously $F$-split, Calabi-Yau and uniruled. In particular, an $F$-split, canonical surface cannot be uniruled and have pseudo-effective canonical divisor.
In many ways then, global $F$-singularities begin to resolve the most obvious difficulties in proving positive characteristic boundedness results. They present their own problems, however: there is no satisfactory notion of ``$\epsilon$-$F$-split'' or ``$\epsilon$-globally $F$-regular'', which makes it difficult to work solely with these notions in the context of boundedness.
That said, while the $F$-split and globally $F$-regular conditions fit naturally into the study of log pairs, we may also choose to consider them as properties of the underlying base varieties. In such a way we may formulate the following questions, though in practice even the most optimistic might expect further conditions on the characteristic. One could also reasonably ask that the $\epsilon$-klt pair $(X,B)$ is itself $F$-split, or globally $F$-regular, in place of the base variety.
\begin{question}\label{Q1}
Fix $d$ a natural number and $\epsilon$ a positive real number. Is the set $S$ (resp.\ $S'$) of projective varieties $X$ such that $(1)-(4)$ (resp.\ $(1),(2),(3'),(4')$) hold bounded over $\mathbb{Z}$?
\begin{enumerate}[label=(\arabic*)]
\item $X$ has dimension $d$ over some closed field $\kappa$.
\item $(X,B)$ is $\epsilon$-klt for some boundary $B$.
\item $-(K_{X}+B)$ is big and nef.
\item[$(3')$] $K_{X}+B\equiv 0$.
\item If $\kappa$ has characteristic $p>0$, then $X$ is globally $F$-regular.
\item[$(4')$] If $\kappa$ has characteristic $p>0$, then $X$ is $F$-split and rationally chain connected.
\end{enumerate}
\end{question}
\begin{remark}
Here rationally chain connected is chosen over rationally connected in light of \cite{gongyo2015rational} which shows that globally $F$-regular threefolds are rationally chain connected in characteristic $p > 7$. Further in characteristic zero, under mild assumptions on the singularities (X admits a boundary $\Delta$ with $(X,\Delta)$ dlt), rational chain connectedness coincides with rational connectedness so this is still a natural generalisation. In any case, in dimension $3$ the globally $F$-regular condition is strictly stronger than $F$-split and rationally chain connected whenever the characteristic is greater than $7$.
In fact other than the case of Fano varieties of Picard number $1$, Gongyo et al are able to show separable rational connectedness. This might, therefore, also be a natural condition to impose instead, especially since the classical proof of the boundedness of characteristic zero prime Fano threefolds so heavily relies on the existence of a free curve.
\end{remark}
Given \autoref{Q1}, it is natural to ask what can be gleaned from \autoref{Main} about globally $F$-regular varieties of the type described in \autoref{Q1}. Unfortunately the answer is very little: while every globally $F$-regular variety is $F$-split, and if $X$ is of $\epsilon$-log Fano type it is also of $\epsilon$-LCY type, we cannot sensibly ensure that the resulting $\epsilon$-LCY pair $(X,\Deltaelta)$ has coefficients bounded below, even if we require this for the $\epsilon$-log Fano pair $(X,\Deltaelta')$.\\
As part of this work we prove the following weak BAB result in \autoref{J1} and \autoref{J2}. This draws heavily on the arguments of Jiang in \cite{jiang2014boundedness}.
\begin{theorem}\label{Main2}
Fix $0 < \delta, \epsilon <1$ and let $T_{\delta,\epsilon}$ be the set of threefold pairs $(X,\Deltaelta)$ satisfying the following conditions
\begin{itemize}
\item $X$ is projective over a closed field of characteristic $p >7,\frac{2}{\delta}$;
\item $X$ is terminal, rationally chain connected and $F$-split;
\item $(X,\Deltaelta)$ is $\epsilon$-klt and LCY;
\item The coefficients of $\Deltaelta$ are greater than $\delta$; and
\item $X$ admits a Mori fibre space structure $X \to Z$ where $Z$ is not a point.
\end{itemize}
Then the set $\{\textup{Vol}(-K_{X}) \colon \exists \Deltaelta \text{ with } (X,\Deltaelta) \in T_{\delta,\epsilon}\}$ is bounded above.
\end{theorem}
\begin{remark}
Together with the observation that taking a terminalisation and running a $K_{X}$-MMP can only increase the anti-canonical volume, we reduced weak BAB for varieties in $S_{\Delta,\epsilon}$ to the case of prime Fano varieties of $\epsilon$-LCY type. Over a fixed field, however, this is essentially superseded by the result of \cite{das2018boundedness}, which gives weak BAB for varieties $X$ with $K_{X}+\Delta \equiv 0$ for some boundary $\Delta$ taking coefficients in a DCC set and making $(X,\Delta)$ klt.
Results similar to \autoref{Main} and \autoref{Main2} are proven in \cite[Theorem 1.7, Theorem 1.8]{ZhuangFano} for Fano threefolds satisfying certain conditions on the Seshadri constant at a smooth closed point. Further these conditions are closely related to global $F$-regularity by \cite[Theorem 1.3]{ZhuangFano}.
\end{remark}
We begin by collecting some relevant definitions and results for later usage. Then \autoref{S-CB} establishes key results about the behaviour of conic bundles in sufficiently high characteristic. Next \autoref{S-MFS} contains the key boundedness arguments, with weak BAB deferred to \autoref{S-BAB}. Finally \autoref{Main} is proved in \autoref{S-res}.
\section{Definitions}
\subsection{MMP Singularities}
Here $\mathbb{K}$ will be taken to mean either $\mathbb{R}$ or $\mathbb{Q}$. If no field is specified, it is taken to be $\mathbb{R}$. We outline the key notions of singularity arising in the MMP.
\begin{definition}
Let $X$ be a normal variety. A \emph{$\mathbb{K}$-boundary} is an effective $\mathbb{K}$-divisor $\Deltaelta$ where $K_{X}+\Deltaelta$ is $\mathbb{K}$-Cartier and the coefficients of $\Deltaelta$ are at most $1$.
A \emph{$\mathbb{K}$ pair} is a couple $(X,B)$ where $X$ is normal and $B$ is a $\mathbb{K}$-boundary.
If $B$ is not effective but $(X,B)$ would otherwise be a $\mathbb{K}$ pair we call it a \emph{$\mathbb{K}$ sub pair}.
\end{definition}
Since $K_{X}+\Deltaelta$ is $\mathbb{R}$-Cartier, we may pull it back along any morphism $\pi\colon Y \to X$. If $\pi$ is birational then there is a unique choice of $\Deltaelta_{Y}=\sum -a(Y,E,X,\Deltaelta)E$ which agrees with $\Deltaelta$ away from the exceptional locus of $\pi$ such that $\pi^{*}(K_{X}+\Deltaelta)\sim_{\mathbb{R}}K_{Y}+\Deltaelta_{Y}$. In a slight abuse of notation we write $\pi^{*}(K_{X}+\Deltaelta)=(K_{Y}+ \Deltaelta_{Y})$.
Suppose that $f\colon Y \to X$ is a birational morphism of normal varieties and there is some normal variety $Z$ with a morphism $g\colon Z\to Y$. If $E$ is a divisor on $Y$ with strict transform $E'$ on $Z$ then $a(Z,E',X,\Deltaelta)=a(Z,E',Y,\Deltaelta_{Y})=a(Y,E,X,\Deltaelta)$. We may then view the coefficients $a(Y,E,X,\Deltaelta)$ as being independent of $Y$ and write $a(E,X,\Deltaelta)$ instead.
\begin{definition}
Given a sub pair $(X,\Deltaelta)$ we define the \emph{discrepancy} $$\text{Disc}(X,\Deltaelta):=\inf \{a(E,X,\Deltaelta) \text{ such that } E \text{ is exceptional and has non-empty center on } X\}$$
and the \emph{total discrepancy}
$$\text{TDisc}(X,\Deltaelta):=\inf \{a(E,X,\Deltaelta) \text { such that } E \text{ has non-empty center on } X\}$$
\end{definition}
We then use these to define a suite of singularities.
\begin{definition}
Let $(X,\Deltaelta)$ be a (sub) pair then we say that $(X,\Deltaelta)$ is
\begin{itemize}
\item \emph{(Sub) terminal} if $\text{Disc}(X,\Deltaelta) > 0$.
\item \emph{(Sub) canonical} if $\text{Disc}(X,\Deltaelta)\geq 0$.
\item \emph{(Sub) plt} if $\text{Disc}(X,\Deltaelta) > -1$.
\item \emph{(Sub) $\epsilon$-klt} if $\text{TDisc}(X,\Deltaelta) > \epsilon-1$.
\item \emph{(Sub) $\epsilon$-lc} if $\text{TDisc}(X,\Deltaelta) \geq \epsilon -1$.
\end{itemize}
\end{definition}
For $\epsilon=0$ we say klt, lc respectively.
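To fix ideas, we recall two standard surface examples (included only for orientation). If $\pi\colon Y\to X$ is the blow-up of a smooth point on a smooth surface, with exceptional curve $E$, then $K_{Y}=\pi^{*}K_{X}+E$, so $a(E,X,0)=1$ and smooth surfaces are terminal. If instead $X$ has a cyclic quotient singularity of type $\frac{1}{n}(1,1)$, that is a cone over the rational normal curve of degree $n$, then the minimal resolution $\pi\colon Y\to X$ contracts a single curve $E$ with $E^{2}=-n$ and
$$K_{Y}=\pi^{*}K_{X}+\left(\frac{2}{n}-1\right)E,$$
so $X$ is canonical only for $n\leq 2$ and is $\epsilon$-klt precisely when $\frac{2}{n}>\epsilon$.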
When we have resolution of singularities there is another, more practical version.
\begin{definition}
Let $(X,\Deltaelta)$ be a (sub) pair and $\pi\colon Y\to X$ a log resolution of $(X,\Deltaelta)$. Let $$t=\min\{a(E,X,\Deltaelta) \text{ such that } E \text{ is a divisor on } Y\}$$ and $$d=\min\{a(E,X,\Deltaelta) \text{ such that } E \text{ is an exceptional divisor of } \pi\colon Y\to X\}.$$
Then $(X,\Deltaelta)$ is
\begin{itemize}
\item \emph{(Sub) $\epsilon$-klt} if $t > \epsilon-1$;
\item \emph{(Sub) $\epsilon$-lc} if $t\geq \epsilon -1$.
\end{itemize}
If $\Deltaelta=0$ then $X$ is
\begin{itemize}
\item \emph{terminal} if $d > 0$;
\item \emph{canonical} if $d\geq 0$;
\end{itemize}
\end{definition}
This also gives rise to an additional notion of singularity, which is dependent on the choice of resolution and can be thought of as the limit of a klt pair.
\begin{definition}
A pair $(X,\Deltaelta)$ is called \emph{dlt} if there is a log resolution $\pi\colon Y \to X$ of $(X,\Deltaelta)$ with $K_{Y}+\Deltaelta_{Y}=\pi^{*}(K_{X}+\Deltaelta)$ such that $\text{Coeff}_{E}(\Deltaelta_{Y}) < 1$ for every $E$ exceptional.
\end{definition}
If $(X,\Deltaelta)$ is sub klt or sub lc etc and $\pi\colon Y \to X$ is a birational morphism from a normal variety and $K_{Y}+\Deltaelta_{Y}=\pi^{*}(K_{X}+\Deltaelta)$ then $(Y,\Deltaelta_{Y})$ has the same singularities. Conversely we have the following.
\begin{lemma}\cite[Lemma 3.38]{kollar2008birational}\label{comparisonLemma}
Suppose $(X,\Deltaelta),(X',\Deltaelta')$ are pairs equipped with proper birational morphisms $f\colon X \to Y$ and $f'\colon X'\to Y$ with $f_{*}\Deltaelta=f'_{*}\Deltaelta'$.
Suppose further that $-(K_{X}+\Deltaelta)$ is $f$ nef and $(K_{X'}+\Deltaelta')$ is $f'$ nef. Then $a(E,X,\Deltaelta) \leq a(E,X',\Deltaelta')$ for any $E$ with non-trivial center on $Y$.
\end{lemma}
In particular, these notions of singularity are preserved under a $(K_{X}+\Deltaelta)$ MMP.
\begin{definition}
A (sub) $\epsilon$-klt pair $(X,\Deltaelta)$ where $K_{X}+\Deltaelta \equiv 0$ is said to be \emph{(sub) $\epsilon$-log Calabi-Yau}, or just (sub) $\epsilon$-LCY.
If instead $-(K_{X}+\Deltaelta)$ is big and nef, it is said to be \emph{(sub) $\epsilon$-log Fano}.
\end{definition}
Again for $\epsilon=0$ we just say LCY and log Fano, equally if $\Deltaelta=0$ we drop the log.
Of particular interest is the class of prime Fano varieties which we may think of as Mori fibre spaces over a point.
\begin{definition}
A terminal Fano variety is said to be \emph{prime} if it has Picard rank $1$.
\end{definition}
\begin{corollary}
Suppose that $(X,\Deltaelta)$ is (sub) $\epsilon$-LCY and $f\colon X\dashrightarrow X'$ is either a flip or a divisorial contraction then $(X',f_{*}\Deltaelta)$ is (sub) $\epsilon$-LCY.
\end{corollary}
\begin{proof}
Both $(K_{X}+\Deltaelta)$ and $(K_{X'}+\Deltaelta')$ are numerically trivial so it suffices to show that $(K_{X'}+\Deltaelta')$ is $\mathbb{R}$-Cartier by \autoref{comparisonLemma}.
If $g\colon X \to Y$ is the contraction of an extremal ray and $D\equiv_{g} 0$ is Cartier, there is some $L$ Cartier on $Y$ with $g^{*}L=D$.
Suppose first that $f$ is a divisorial contraction. Then $K_{X}+\Deltaelta=f^{*}L$, say, and so $K_{X'}+\Deltaelta'=L$ by the projection formula.
Otherwise $f$ is a flip and there is $g\colon X \to Y$ a flipping contraction together with $g'\colon X' \to Y$ such that $f=g'^{-1}\circ g$. Hence writing $K_{X}+\Deltaelta=g^{*}L$ again gives $K_{X'}+\Deltaelta'=g'^{*}L$.
In either case, $(K_{X'}+\Deltaelta')$ is $\mathbb{R}$-Cartier.
\end{proof}
We will be interested in LCY varieties in which general points can be connected by rational curves in the following senses.
\begin{definition}
Let $X$ be a variety over a field $\kappa$. Then $X$ is said to be:
\begin{itemize}
\item \emph{Uniruled} if there is a proper family of connected curves $f\colon U \to Y$ where the generic fibres have only rational components together with a dominant morphism $U \to X$ which does not factor through $Y$.
\item \emph{Rationally chain connected (RCC)} if there is $f\colon U \to Y$ as above such that $u^{2}\colon U \times_{Y} U \to X \times_{k} X$ is dominant.
\item \emph{Rationally connected} if there is $f\colon U \to Y$ as above witnessing rational chain connectedness such that the general fibres are irreducible.
\item \emph{Separably rationally connected} if $f$ as above is separable.
\end{itemize}
\end{definition}
If $X \to X'$ is a dominant morphism from $X$ uniruled/RCC/rationally connected then we may compose $U \to X \to X'$ to see that $X'$ is uniruled/RCC/rationally connected.
\subsection{$F$-Singularities of Pairs}
We now introduce Frobenius singularities, unique to positive characteristic. We focus on the $F$-pure and $F$-split conditions, as $F$-regularity will not be needed.
\begin{definition}
Given a $\kappa$ algebra $R$ in positive characteristic we denote the Frobenius morphism by $F\colon R\to R$ sending $x \to x^{p}$. Any $R$ module $M$ then has an induced module structure, denoted $F_{*}M$ where $R$ acts as $r.x=F(r)x=r^{p}x$. Finally $R$ is said to be \emph{$F$-finite} if $F_{*}R$ is a finite $R$ module.
These definitions naturally extend to schemes over $\kappa$.
\end{definition}
Note that all perfect fields are $F$-finite, and so is every variety over an $F$-finite field.
In this context we can view the Frobenius morphism as a map of $R$ modules $F\colon R \to F_{*}R$. We will also write $F^{e}\colon R \to F_{*}^{e}R$ for the $e^{th}$ iterated Frobenius.
\begin{definition}
Let $X$ be a variety over an $F$-finite field.
We say $X$ is:
\begin{itemize}
\item \emph{$F$-pure} if the Frobenius morphism $\ox \to F_{*}\ox$ is pure, or equivalently locally split.
\item \emph{(Globally) $F$-split} if the Frobenius morphism $\ox \to F_{*}\ox$ is split.
\end{itemize}
\end{definition}
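The basic example to keep in mind (a standard computation, recorded here only for illustration) is $R=\kappa[x]$ for a perfect field $\kappa$ of characteristic $p$. Here $F_{*}R$ is a free $R$-module on $1,x,\ldots,x^{p-1}$, and the map $\phi\colon F_{*}R\to R$ given by
$$\phi\Big(\sum_{j}c_{j}x^{j}\Big)=\sum_{p\mid j}c_{j}^{1/p}x^{j/p}$$
is $R$-linear and satisfies $\phi\circ F=\mathrm{id}_{R}$, so $\mathbb{A}^{1}$ is $F$-split. The same computation shows that affine and projective spaces, and more generally toric varieties, are $F$-split, while an elliptic curve is $F$-split if and only if it is ordinary.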
Being $F$-split is a particularly strong condition, giving the following vanishing result almost immediately.
\begin{lemma}\label{vanish}
Let $X$ be an $F$-split variety and $A$ an ample $\mathbb{Q}$-Cartier divisor. Then $H^{i}(X,A)=0$ for all $i>0$.
\end{lemma}
\begin{proof}
By assumption $\ox \to \mathcal{F}e \ox $ splits, and hence so does $A=A\otimes \ox \to A \otimes \mathcal{F}e \ox =\mathcal{F}e A^{p^{e}}$. That is we have $id\colon A \to \mathcal{F}e A^{p^{e}} \to A$, and taking cohomology we see that $H^{i}(X,A)$ injects into $H^{i}(X,\mathcal{F}e A^{p^{e}})=H^{i}(X,A^{p^{e}})$ which vanishes for $e>>0$.
\end{proof}
Take $X$ a normal variety. To mirror the notion of a boundary we introduce pairs $(\mathcal{L}, \phi)$ where $\mathcal{L}$ is a line bundle and $\phi\colon \mathcal{F}e \mathcal{L} \to \ox$. By applying duality on the smooth locus, which contains all the codimension $1$ points we observe that $\textup{Hom}_{\ox}(\mathcal{F}e \mathcal{L},\ox)=H^{0}(X,\mathcal{L}^{-1}((1-p^{e})K_{X}))$. Therefore such a pair corresponds to a divisor $\Deltaelta_{\phi} \geq 0$ with $(p^{e}-1)(K_{X}+\Deltaelta_{\phi}) \sim \mathcal{L}$. \\ Reversing this procedure is slightly more involved. If ${(p^{e}-1)(K_{X}+\Deltaelta) \sim \mathcal{L}}$ (we write this $K_{X}+\Deltaelta \sim_{\mathbb{Z}_{(p)}} \mathcal{L}$) we may obtain $\phi_{\Deltaelta}\colon \mathcal{F}e \mathcal{L} \to \ox$, however we could also write say $(p^{2e}-1)(K_{X}+\Deltaelta) \sim \mathcal{L'}$ where $\mathcal{L'} \not\sim \mathcal{L}$. We introduce, therefore, the following notion of equivalence.
First, we say that two such pairs, $(\mathcal{L}, \phi)$ and $(\mathcal{L}', \phi')$ are equivalent if:
\begin{itemize}
\item There is an isomorphism $\psi: \mathcal{L} \to \mathcal{L'}$ such that following diagram commutes; or
\[\begin{tikzcd}
\mathcal{F}e \mathcal{L} \arrow[rd, "\phi"] \arrow[rr, "\mathcal{F}e \psi"] & & \mathcal{F}e \mathcal{L}' \arrow[ld, "\phi'"] \\
& \ox &
\end{tikzcd}\]
\item $\mathcal{L}'=\mathcal{L}^{p^{e'}+1}$ and $\phi'\colon \mathcal{F}e[e+e']\mathcal{L}^{p^{e'}+1}\to \ox$ is precisely the map given by
$$\mathcal{F}e[e+e'](\mathcal{L}\otimes \mathcal{L}^{p^{e'}}) \xrightarrow{\mathcal{F}e\phi} \mathcal{F}e \mathcal{L} \xrightarrow{\phi} \ox.$$
\end{itemize}
We then expand the notion of equivalence to allow any finite combination of the above equivalences, more precisely we take the transitive closure of our initial relation.
This gives a bijection between equivalence classes of pairs $(\mathcal{L}, \phi)$ and $\Deltaelta \geq 0$ with $(K_{X}+\Deltaelta)$ $\mathbb{Z}_{(p)}$-Cartier. Full details on such pairs can be found in Chapter 16 of Schwede's notes on $F$-singularities \cite{schwede2010f}.
To extend this framework to allow for sub pairs we can instead work with morphisms $\mathcal{F}e\mathcal{L} \to K(X)$ where we view $K(X)$ as a constant sheaf on $X$. Given such a morphism $\phi$, we can always find $E \geq 0$ Cartier such that when we twist by $E$ we obtain $\phi'\colon \mathcal{F}e(\mathcal{L}((1-p^{e})E)) \to \ox$ and thus associate a divisor $\Deltaelta_{\phi'}$ with $(1-p^{e})(K_{X}+\Deltaelta_{\phi'})\sim \mathcal{L}((1-p^{e})E)$ and then take $\Deltaelta_{\phi}=\Deltaelta_{\phi'}-E$.
\begin{lemma}\cite[Lemma 2.3]{das2015f}
With the notation as above, $\Deltaelta_{\phi}$ does not depend on the choice of $E$.
\end{lemma}
\begin{definition}
A sub $\mathbb{Z}_{(p)}$ pair is a couple $(X,B)$ where $(K_{X}+B)$ is $\mathbb{Z}_{(p)}$-Cartier and the coefficients of $B$ are at most $1$. We write $\phi{e}_{B}\colon F_{*}^{e,B}\mathcal{L}_{e,B} \to K(X)$ for the associated morphism dropping the dependence on $B$ when it remains clear. If $B$ is effective $(X,B)$ is called a $\mathbb{Z}_{(p)}$ pair and we view $\phi$ as being a morphism to $\ox$.
Let $(X,B)$ be a (sub) $\mathbb{Z}_{(p)}$ pair, then $(X,B)$ is
\begin{itemize}
\item \emph{(sub) $F$-pure} if $\ox \subseteq \text{Im}(\phi^{e})$ for some $e$.
\item \emph{(sub) $F$-split} if $1\in\text{Im}(H^{0}(X,\phi^{e}))$ for some $e$.
\end{itemize}
\end{definition}
Being $F$-split is also sometimes called globally $F$-split to distinguish it from $F$-pure, which can be thought of as being locally split.
Locally to a point of codimension $1$ these definitions are particularly well-behaved.
\begin{lemma}\cite[Lemma 2.14]{das2015f}
Let $R$ be a regular DVR with parameter $t$, then a sub $\mathbb{Z}_{(p)}$ pair $(R,\lambda t)$ is sub $F$-pure iff $\lambda \leq 1$ and sub $F$-regular iff $\lambda < 1$.
\end{lemma}
In particular we see that the coefficient of $\Deltaelta_{\phi}$ at $E$ depends only on $\phi$ near $E$.
\begin{corollary}\label{local}
Suppose $\phi\colon \mathcal{F}e\mathcal{L} \to K(X)$ has associated divisor $\Deltaelta$ then $1-\text{Coeff}_{E}(\Deltaelta)=\sup\{t\colon (X,\Deltaelta+tE) \text{ is } F \text{-pure at the generic point of } E\}$.
\end{corollary}
While these definitions do not pullback along birational morphisms as obviously as the usual MMP singularities, it is still possible.
\begin{lemma}\cite[Lemma 7.2.1]{blickle2013p}
Suppose that $f\colon X \to Y$ is a birational morphism with $X$ normal and $(Y,\Deltaelta)$ a sub $F$-split pair. Then there is $\Deltaelta'$ on $X$ making $(X,\Deltaelta')$ a sub $F$-split pair such that $(K_{X}+\Deltaelta')=f^{*}(K_{Y}+\Deltaelta)$.
\end{lemma}
\begin{proof}
Take the corresponding map $\phi\colon \mathcal{F}e\mathcal{L} \to K(Y)$. Then we may freely view $\mathcal{L}$ as a subsheaf of $K(Y)$ and so extend $\phi$ to a map $\phi\colon \mathcal{F}e K(Y) \to K(Y)$. Taking the inverse image gives $f^{-1}(\phi)\colon f^{-1}\mathcal{F}e K(Y) \to f^{-1}K(Y)$ and $f^{-1}\mathcal{F}e \mathcal{L} \to f^{-1}K(Y)$. Since $f$ is birational we obtain an isomorphism $f^{-1}K(Y) \to K(X)$. We then have the following situation.
\[\begin{tikzcd}
f^{-1}\mathcal{F}e(\mathcal{L}) \otimes_{f^{-1}\mathcal{F}e\mathcal{O}_{Y}}\mathcal{F}e\ox \arrow[r, hook] & \mathcal{F}e K(X) \arrow[r] & K(X) \\
f^{-1}\mathcal{F}e(\mathcal{L}) \arrow[r, hook] \arrow[u, hook] & f^{-1}\mathcal{F}e K(Y) \arrow[r, "f^{-1}(\phi)"'] \arrow[u, "\sim"] & f^{-1}K(Y) \arrow[u, "\sim", hook']
\end{tikzcd}\]
Note however that $f^{-1}\mathcal{F}e(\mathcal{L}) \otimes_{f^{-1}\mathcal{F}e\mathcal{O}_{Y}}\mathcal{F}e\ox= \mathcal{F}e f^{*}\mathcal{L}$ and hence we obtain the desired map $\tilde{\phi}\colon \mathcal{F}e f^{*}\mathcal{L} \to K(X)$. This induces a divisor $\Deltaelta'$ on $X$ with $$(p^{e}-1)(K_{X}+\Deltaelta') \sim f^{*}\mathcal{L} \sim (p^{e}-1)f^{*}(K_{Y}+\Deltaelta).$$ The coefficient of $\Deltaelta'$ at a codimension one point can be recovered from $\tilde{\phi}$ by working locally around that point. In particular, wherever $f$ is an isomorphism, $\phi$ and $\tilde{\phi}$ agree. Therefore the coefficients of $\Deltaelta$ and $\Deltaelta'$ agree on this locus also, so we have $f^{*}(K_{Y}+\Deltaelta)=(K_{X}+\Deltaelta')$ as required. Moreover commutativity of the earlier diagram gives that whenever $1 \in \text{Im}(H^{0}(Y,\phi))$ then it is also in the image of $H^{0}(X,\tilde{\phi})$, and hence $(X,\Deltaelta')$ is sub $F$-split.
\end{proof}
In general the local forms of these singularities cannot be pushed forward, however the global ones often can be, even along morphisms which are not birational.
\begin{lemma}\cite[Theorem 5.2]{das2015f}
Suppose that $(X,\Deltaelta)$ is sub $F$-split and there is a map $f\colon X \to Y$ with $f_{*}\ox =\mathcal{O}_{Y}$ and $(K_{X}+\Deltaelta) \sim_{\mathbb{Z}_{(p)}} f^{*}\mathcal{L}$ for some line bundle $\mathcal{L}$ on $Y$. If every component of $\Deltaelta$ which dominates $Y$ is effective then there is $\Deltaelta_{Y}$ with $(Y,\Deltaelta_{Y})$ sub $F$-split and $\mathcal{L}\sim_{\mathbb{Z}_{(p)}} (K_{Y}+\Deltaelta_{Y})$.
\end{lemma}
If $f\colon X \to Y$ is birational then the conditions are automatically satisfied and the induced $\Deltaelta_{Y}$ is just the pushforward $f_{*}\Deltaelta$ by \autoref{local}. Therefore if $X$ is sub $F$-split so is every $X'$ birational to $X$. Further if $X$ is $F$-split and $X'$ is obtained by taking a terminalisation or running a $K_{X}+B$ MMP for any $B$ then $X'$ is $F$-split.
\subsection{Boundedness}
Finally we introduce the relevant notions of boundedness.
\begin{definition}\label{d_birationally-bounded} We say that a set $\mathfrak{X}$
of varieties is \emph{birationally bounded over a base $S$} if there is a flat, projective family $Z \to T$, where $T$ is a reduced quasi-projective scheme over $S$, such that every $X\in \mathfrak{X}$ is birational to some geometric fibre of $Z \to T$. If the base is clear from context, say if every $X \in \mathfrak{X}$ has the same base, we omit dependence on $S$.
If for each $X \in \mathfrak{X}$ the map to a geometric fibre is an isomorphism we say that $\mathfrak{X}$ is \emph{bounded over $S$}.
\end{definition}
If $S=\text{Spec }{R}$ we often just say (birationally) bounded over $R$. In practice we characterise boundedness over $\mathbb{Z}$ via the following result, coming from existence of the Hilbert and Chow schemes.
\begin{lemma}\cite[Proposition 5.3]{tanaka2019boundedness}
Fix integers $d$ and $r$. Then there is a flat projective family $Z \to T$ where $T$ is a reduced quasi-projective scheme over $\mathbb{Z}$ satisfying the following property. If
\begin{enumerate}
\item $\kappa$ is a field;
\item $X$ is a geometrically integral projective scheme of dimension $r$ over $\kappa$; and
\item there is a closed immersion $j\colon X \to \mathbb{P}^{m}_{\kappa}$ for some $m\in \mathbb{Z}$ such that $j^{*}(\mathcal{O}(1))^{r} \leq d$.
\end{enumerate}
Then $X$ is realised as a geometric fibre of $Z \to T$
\end{lemma}
\begin{corollary}\label{l_birationally-bounded}
Suppose $\mathfrak{X}$ is a set of varieties over closed fields and there are positive real numbers $d,V$ such that for every $X \in \mathfrak{X}$,
\begin{itemize}
\item $X$ has dimension at most $d$; and
\item There is $M$ on $X$ with $\phi_{|M|}$ birational and $\textup{Vol}(M)\leq V$.
\end{itemize}
Then $\mathfrak{X}$ is birationally bounded over $\mathbb{Z}$. If in fact each $M$ is very ample then $\mathfrak{X}$ is bounded.
\end{corollary}
Conversely, if $S$ is Noetherian then we may always choose $H$ relatively very ample on $Z \to T$ with trivial higher direct images. The restriction of $H$ to any geometric fibre is therefore very ample, and of bounded degree.
\section{Preliminary Results}
In this section we gather necessary results for later usage. We begin with some results on surfaces, followed by some MMP results and their applications. We also collect some Bertini type theorems at the end of the section.
\begin{theorem}\cite[Theorem 6.9]{alexeev1994boundedness}\label{BAB}
Fix $\epsilon >0$ and an algebraically closed field of arbitrary characteristic. Let $S$ be the set of all projective surfaces $X$ which admit a $\Deltaelta$ such that:
\begin{itemize}
\item $(X,\Deltaelta)$ is $\epsilon$-klt;
\item $-(K_{X}+\Deltaelta)$ is nef; and
\item Any of the following holds $K_{X} \not\equiv 0$, $\Deltaelta \neq 0$, $X$ has worse than Du Val singularities.
\end{itemize}
Then $S$ is bounded.
\end{theorem}
Alexeev shows boundedness over a fixed field, however it is not immediately clear if such varieties are collectively bounded over $\mathbb{Z}$. We briefly show that his methods can be extended, via the arguments of \cite{witaszek2015effective} to give a boundedness result in mixed characteristic.
\begin{theorem}\label{SBAB}
Fix $\epsilon$ a positive real number. Let $S$ be the set of projective surfaces $X$ such that following conditions hold:
\begin{itemize}
\item $X$ is a variety over some closed field $\kappa$;
\item $(X,B)$ is $\epsilon$-klt for some boundary $B$;
\item $-(K_{X}+B)$ is nef; and
\item $X$ is rationally chain connected and $F$-split (if $\kappa$ has characteristic $p$).
\end{itemize}
Then $S$ is bounded.
\end{theorem}
\begin{proof}
We consider first $\hat{S}:=\{X \in S\colon K_{X} \not\equiv 0\}$. Take any such $X \in \hat{S}$, then by Alexeev \cite[Chapter 6]{alexeev1994boundedness} we have the following:
\begin{itemize}
\item The minimal resolution $\tilde{X}\to X$ has $\rho(X) < A $, for some constant A, depending only on $\epsilon$ and admits a birational morphism to $\mathbb{P}^{2}$ or $\mathbb{F}_{n}$ for $n < \frac{2}{\epsilon}$. In particular there is a set $T_{\epsilon}$ bounded over $\mathbb{Z}$ such that every $\tilde{X}$ is a blowup of some $Y \in T_{\epsilon}$ along a finite length subscheme of dimension $0$. That is the set of minimal desingularisations is bounded over $\mathbb{Z}$.
\item We may run a $K_{X}$-MMP to obtain $X'$ a Mori fibre space.
\item There is an $N$, independent of the field of definition, such that $NK_{X'}$ is Cartier for any Mori fibre space $X'$ obtained as above.
\item $\textup{Vol}(-K_{X'})$ is bounded independently of the base field.
\item If $X'$ is such a Mori fibre space $X' \to \mathbb{P}^{1}$ and $F$ a general fibre then $-K_{X'} +(\frac{2}{\epsilon}-1)F$ is ample.
\end{itemize}
It is sufficient then to show that $S'=\{X' \text{ an } \epsilon\text{-LCY type Mori fibre space}\}$ is bounded in mixed characteristic; then $\hat{S}$ is bounded by sandwiching, as in Alexeev's original proof, and the full result follows. In turn, by \autoref{l_birationally-bounded} it is enough to find $V$ such that every $X' \in S'$ has a very ample divisor $H$ satisfying $H^{2}\leq V$. We do this first for positive characteristic varieties.
Fix, then, $m > \frac{2}{\epsilon}-1$ and suppose $X'\to \mathbb{P}^{1}$ is a Mori fibre space in positive characteristic. Then $A=-K_{X'} +mF$ is ample and $NA$ is Cartier. Further we have that $A'=7NK_{X'}+27N^{2}A=(7N-27N^{2})K_{X'}+27N^{2}mF$ is very ample by \cite[Theorem 4.1]{witaszek2015effective}. Since $F$ is base point free, we may add further multiples of $F$ and consider the very ample Cartier divisor $\hat{A}=(27N^{2}-7N)(-K_{X'}+2mF)$. Then, by \autoref{vol}, $$\hat{A}^{2}=\textup{Vol}(X',\hat{A})\leq (27N^{2}-7N)^{2}\left(\textup{Vol}(X',-K_{X'})+4m\textup{Vol}(F,-K_{F})\right)$$ which is bounded above, since $\textup{Vol}(X',-K_{X'})$ is bounded and $\textup{Vol}(F,-K_{F})=2$.
Similarly if $X'$ has $\rho(X')=1$ and $-K_{X'}$ ample then $-nK_{X'}$ is a very ample Cartier divisor with vanishing higher cohomology for some $n$ fixed independently of $X'$. Then $(-nK_{X'})^{2}=n^{2}\textup{Vol}(X,-K_{X'})$ is bounded and the result follows similarly.
Suppose then that $X \in S$ with $K_{X} \equiv 0$, then it must have worse than canonical singularities by \autoref{split}. Let $\pi\colon Y \to X$ be a minimal resolution, with $K_{Y}+B=\pi^{*}K_{X} \equiv 0$ and $B >0$, then $Y$ is still $\epsilon$-klt, so $Y \in \hat{S}$. Consequently $X$ has $\mathbb{Q}$-Cartier Index dividing $N$ also. Moreover, there is $H$ on $Y$ very ample with $H^{2}$ bounded above. Let $H'=\pi_{*}H$, so that $NH'$ is ample and Cartier on $X$. Applying \cite[Theorem 4.1]{witaszek2015effective} again we see that $A\equiv 27N^{2}H'$ is very ample, since $K_{X}\equiv 0$, with $A^{2}$ bounded above.
The arguments in characteristic $0$ are essentially the same, making use of Koll{\'a}r's effective base-point freeness result \cite[Theorem 1.1, Lemma 1.2]{kollar1993effective} instead of Witaszek's result, and the existence of very free rational curves on smooth rationally connected surfaces instead of \autoref{split}.
\end{proof}
\begin{remark}
In particular we have an affirmative answer to Question 1 in dimension $2$.
\end{remark}
\begin{theorem}\cite[Theorem 1.2]{patakfalvi2019ordinary}\label{WO-unir}
Let $X$ be a normal, Cohen Macaulay variety with $W\mathcal{O}$-rational singularities over a perfect field of positive characteristic. Then $X$ cannot simultaneously satisfy all the following conditions.
\begin{enumerate}
\item $X$ is uniruled.
\item $X$ is $F$-split.
\item $X$ has trivial canonical bundle.
\end{enumerate}
If in fact $X$ is smooth then we may replace $K_{X}\sim 0$ with $K_{X} \equiv 0$.
\end{theorem}
We refer to \cite[Definition 3.8]{patakfalvi2019ordinary} for a definition of $W\mathcal{O}$-rational singularities. It suffices to know that regular varieties have $W\mathcal{O}$-rational singularities, from which we obtain the following.
\begin{corollary}\label{split}
Let $X$ be a uniruled, $F$-split surface over a perfect field of positive characteristic. If $K_{X} \equiv 0$ then $X$ has worse than canonical singularities.
\end{corollary}
\begin{proof}
Suppose for contradiction that $X$ has canonical singularities. Then we can replace $X$ with its minimal resolution and suppose that $X$ is smooth. In particular it is Cohen-Macaulay and has $W\mathcal{O}$-rational singularities. We then apply \autoref{WO-unir} to obtain the result.
\end{proof}
\begin{lemma}\label{vol}\cite[Lemma 2.5]{jiang2018birational}
Suppose $X$ is projective and normal, $D$ is an $\mathbb{R}$-Cartier divisor and $S$ is a basepoint free normal and prime divisor. Then for any $q >0$,
\[\textup{Vol}(X,D+qS) \leq \textup{Vol}(X,D) + q\dim(X)\textup{Vol}(S,D|_{S}+qS|_{S}).\]
\end{lemma}
We now collect the necessary results from the positive characteristic MMP and consider a few applications.
\begin{theorem}\cite[Theorem 1.7]{birkar2017existence}, \cite[Theorem 1.2]{birkar2013existence} \label{Cone Theorem}
Let $k$ be an algebraically closed field of characteristic $p>5$.
Let $(X, \Deltaelta)$ be a three-dimensional klt pair over $k$, together with a projective morphism $X \to Z$ to a quasi-projective $k$-scheme. Then there exists a $(K_X+\Deltaelta)$-MMP over $Z$ that terminates.
In particular, if $X$ is $\mathbb{Q}$-factorial, then
there is a sequence of birational maps of three-dimensional normal and $\mathbb{Q}$-factorial varieties:
\[
X=:X_0 \overset{\varphi_0}{\dashrightarrow} X_1 \overset{\varphi_1}{\dashrightarrow} \cdots \overset{\varphi_{\ell-1}}{\dashrightarrow} X_{\ell}
\]
such that if $\Deltaelta_i$ denotes the strict transform of $\Deltaelta$ on $X_i$, then
the following properties hold:
\begin{enumerate}
\item
For any $i \in \{0, \ldots, \ell\}$,
$(X_i, \Deltaelta_i)$ is klt and projective over $Z$.
\item
For any $i \in \{0, \ldots, \ell-1\}$,
$\varphi_i\colon X_i \dashrightarrow X_{i+1}$ is either a $(K_{X_i}+\Deltaelta_i)$-divisorial contraction over $Z$ or a $(K_{X_i}+\Deltaelta_i)$-flip over $Z$.
\item
If $K_X+\Deltaelta$ is pseudo-effective over $Z$, then $K_{X_{\ell}}+\Deltaelta_{\ell}$ is nef over $Z$.
\item
If $K_X+\Deltaelta$ is not pseudo-effective over $Z$, then
there exists a $(K_{X_{\ell}}+\Deltaelta_{\ell})$-Mori fibre space $X_{\ell} \to Y$ over $Z$.
\end{enumerate}
\end{theorem}
\begin{theorem}\cite[Theorem 10.4]{fujino2009fundamental}\label{dlt}
Let $X$ be a normal quasi-projective variety of any dimension and characteristic for which the log MMP holds. Let $B$ be an effective divisor with $K_{X}+B$ $\mathbb{R}$-Cartier then there is a birational morphism $f\colon Y \to X$, called a dlt modification, such that the following holds:
\begin{itemize}
\item $Y$ is $\mathbb{Q}$-factorial;
\item $a(E,X,B) \leq -1$ for every $f$ exceptional divisor $E$;
\item If $B_{Y}=f^{-1}_{*}B' + \sum_{E \text{ exceptional}} E$ then $(Y,B_{Y})$ is dlt; and
\item $K_{Y}+B_{Y}+F=f^{*}(K_{X}+B)$ where $F= \sum_{E\colon a(E,X,B)<-1} -(a(E,X,B)+1)E$.
\end{itemize}
where $B'$ has coefficient $\min\{\text{Coeff}_{E}(B),1\}$ at each $E$. Further if $(X,B)$ is a log pair then $F$ is exceptional.
\end{theorem}
\begin{theorem}[Nlc Cone Theorem]\label{NLCT}
Let $(X,\Deltaelta)$ be a threefold $\mathbb{Q}$-pair over a closed field of characteristic $p > 5$. Write $\overline{NE}(X)_{nlc}$ for the cone spanned by curves contained in the non log canonical locus of $(X,\Deltaelta)$. Then we have the following decomposition
\[\overline{NE}(X)=\overline{NE}(X)_{K_{X}+\Deltaelta \geq 0}+ \overline{NE}(X)_{nlc}+\sum_{i}R_{i}\]
where the $R_{i}$ are extremal rays with $R_{i} \cap\overline{NE}(X)_{nlc}=\{0\}$, generated by curves $C_{i}$ such that $0> (K_{X}+\Deltaelta).C_{i} \geq -6$.
\end{theorem}
\begin{proof}
If $(X,\Deltaelta)$ is dlt this is part of the usual Cone Theorem \cite[Theorem 1.1]{birkar2017existence}.
Suppose next that $\Deltaelta=B+F$ where $(X,B)$ is dlt and $F$ has support contained in $\lfloor B \rfloor$. Note that if $C$ is an irreducible curve with $F.C <0$ then $C \subseteq F$. Therefore any effective curve $C$ can be written $C=C_{0} +C_{F}$ where $F.C_{0}\geq 0$ and $C_{F} \subseteq F$. Thus by compactness of the unit ball in a finite dimensional vector space, any $[\gamma] \in \overline{NE}(X/T)$ can be written $[\gamma] = [\gamma_{0}] + [\gamma_{F}]$ with $F.\gamma_{0} \geq 0$ and $[\gamma_{F}] \in \overline{NE}(F/T)$ in the same fashion.
Take any $K_{X}+\Deltaelta$ negative extremal ray $L$. Take a non-zero $[\gamma] \in L$, then as $L$ is extremal we have $[\gamma_{F}],[\gamma_{0}] \in L$. If $[\gamma_{F}] \neq 0$ then $L \subseteq \overline{NE}(F/T)$. Otherwise if $[\gamma_{F}]=0$ then $L$ is $K_{X}+B$ negative. Hence we can conclude the result from the Cone Theorem for dlt pairs.
Suppose finally that $(X,\Deltaelta)$ is not dlt. Let $f \colon Y \to X$ be a dlt modification of $(X,\Deltaelta)$ with $(Y,B_{Y})$ dlt and $K_{Y}+B_{Y}+F=f^{*}(K_{X}+\Deltaelta)$. Take any $K_{X}+\Deltaelta$ negative extremal ray, $L$, such that $L \cap\overline{NE}(X)_{nlc}=\{0\}$. Take any class $\gamma$ with $[\gamma] \in L\setminus \{0\}$ and choose $[\gamma'] \in \overline{NE}(Y/T)$ with $f_{*}[\gamma']=[\gamma]$. Then by the projection formula we have that $(K_{Y}+B_{Y}+F).\gamma'=(K_{X}+\Deltaelta).f_{*}\gamma'=(K_{X}+\Deltaelta).\gamma < 0$.
From above, we can write $\gamma'=C_{0}+C_{F}+ \sum \lambda_{i}C_{i}$ where $\lambda_{i} >0$, $(K_{Y}+B_{Y}+F).C_{0} \geq 0$, $C_{F} \in \overline{NE}(F/T)$ and the $C_{i}$ each generate $(K_{Y}+B_{Y}+F)$ negative extremal rays with $-(K_{Y}+B_{Y}+F).C_{i} \leq 6$.
From our choice of $L$ we must have $f_{*}C_{0}=f_{*}C_{F}=0$ and hence it follows that $[f_{*}C_{k}] \in L\setminus \{0\}$ for some $k$. Thus $(K_{X}+\Deltaelta).f_{*}C_{k}=(K_{Y}+B_{Y}+F).C_{k} \geq -6$.
Since each $R$ is the pushforward of a $(K_{Y}+B_{Y})$ negative extremal ray, there are only countably many generating curves $C_{i}$ and they cannot accumulate in $(K_{X}+\Deltaelta)_{< 0}$ else they would accumulate on $Y$ also.
\end{proof}
\begin{lemma}
Let $X$ be a normal curve over any field and $\Deltaelta \geq 0 $ be a divisor with $-(K_{X}+\Deltaelta)$ big and nef. Then the non-klt locus of $\Deltaelta$ is either empty or geometrically connected.
\end{lemma}
\begin{proof}
If $-(K_{X}+\Deltaelta)$ is big and nef then so is $-K_{X}$. After base changing to $H^{0}(X,\ox)$ if necessary we have $\deg K_{X} = -2$ by \cite[Corollary 2.8]{tanaka2018minimal} giving that $ \deg \Deltaelta <2$. The non-klt locus of $(X,\Deltaelta)$ is precisely the support of $\lfloor \Delta \rfloor$ and hence can contain at most one point.
\end{proof}
\begin{theorem}\cite[Theorem 5.2]{tanaka2018minimal}\label{Tcl}
Let $(X,\Deltaelta)$ be a surface log pair over any field $\kappa$. Let $\pi\colon X \to S$ be a morphism of $\kappa$ schemes with $\pi_{*}\ox =\mathcal{O}_{S}$. Suppose that $-(K_{X}+\Deltaelta)$ is $\pi$-nef and $\pi$-big, then for any $s \in S$, $X_{s}\cap \textup{Nklt}(X,\Deltaelta)$ is either empty or geometrically connected.
\end{theorem}
\begin{theorem}[Weak Connectedness Lemma]\label{WCL}
Let $X$ be a threefold over any closed field $\kappa$ of characteristic $p>5$ together with $\Deltaelta\geq 0$ on $X$ such that $K_{X}+\Deltaelta$ is $\mathbb{R}$-Cartier. Suppose that $-(K_{X}+\Deltaelta)$ is ample, then $\textup{Nklt}(X,\Deltaelta)$ is either empty or connected.
\end{theorem}
\begin{proof}
If $(X,\Delta)$ is klt the result is trivially true, so suppose otherwise.
Let $(Y,\Delta_{Y}) \to (X,\Delta)$ be a dlt modification. Then $-L:=K_{Y}+\Delta_{Y}+F=f^{*}(K_{X}+\Delta)$ with $(Y,\Delta_{Y})$ dlt and $L$ nef and big. We may further write $L=A+E$ with $A$ ample and $E$ effective and exceptional over $X$. In particular $E$ has support contained inside $S_{Y}=\lfloor \Delta_{Y} \rfloor$. Note that $S_{Y}$ maps surjectively onto $\textup{Nklt}(X,\Delta)$ so it is sufficient to show that $S_{Y}$ is connected.
Take a general $G_{Y} \sim \epsilon A +(1-\epsilon) L-\delta S_{Y}$, then for small $\delta$ we may assume $G_{Y}$ is ample, and hence further that $(Y,\Delta_{Y}+G_{Y})$ is dlt. Write $K_{Y}+\Delta_{Y}+G_{Y}\sim - P_{Y}=-(\epsilon E + F + \delta S_{Y})$ and note $\text{Supp}(P_{Y})=S_{Y}$. In particular $K_{Y}+\Delta_{Y}+G_{Y}$ is not pseudo-effective and hence we may run a $(Y,\Delta_{Y}+G_{Y})$ LMMP which terminates in a Mori fibre space $Y' \to Z$. By the arguments of \cite[Theorem 9.3]{birkar2013existence} on the induced pair $(Y',\Delta_{Y'})$, $\textup{Nklt}(Y',\Delta_{Y'})=\text{Supp}(\lfloor \Delta_{Y'} \rfloor)=\text{Supp}(P_{Y'})$ has the same number of connected components as $\textup{Nklt}(X,\Deltaelta)$, so it suffices to prove the result here.
Suppose first that $\dim Z=0$. Then $\rho(Y')=1$. In particular if $D,D'$ are effective and $H$ ample, then $H.D.D' >0$, so certainly $D.D'>0$. Thus $P_{Y'}$ cannot have disconnected support.
Suppose next that $\dim Z > 0 $. Let $T$ be the generic fibre. We must have $P_{Y'}|_{T}> 0$ since $Y' \to Z$ is a $P_{Y'}\sim -(K_{Y'}+\Deltaelta_{Y'}+G_{Y'})$ positive contraction. However $P_{Y'}$ has the same support as $\lfloor \Delta_{Y'} \rfloor$ so at least one connected component must dominate $Z$. Suppose then, for contradiction, there is a second connected component. Clearly it must also dominate $Z$, else it could not possibly be disjoint from the first. Consider then $(T,\Delta_{T}=\Delta_{Y'}|_{T})$. Since $T \to Y'$ is flat, the pullback of $\Delta_{Y'}$ is just the inverse image, and in particular $\lfloor \Delta_{T} \rfloor$ contains the pullback of both connected components. Suppose $R$ is the extremal ray whose contraction induces the Mori fibration. Then we have $-(K_{Y'}+\Delta_{Y'}+G_{Y'}).R >0$, but since $R$ is spanned by a nef curve, as contracting it defines a fibration, and $G_{Y'}$ is effective, we must have $G_{Y'}.R \geq 0$. Hence in fact $-(K_{Y'}+\Delta_{Y'}).R >0$ also, and so $-(K_{T}+\Delta_{T})$ is ample. Then, however, the non-klt locus of $(T,\Delta_{T})$ must be connected, a contradiction.
\end{proof}
\begin{lemma}\label{cc} \cite[Proposition 4.37]{kollar2013singularities}
Suppose that $(S,B)$ is a klt surface and $(K_{S}+B+D) \sim 0$ for $D$ effective, integral and disconnected, then $D$ has exactly two connected components.
\end{lemma}
Finally we collect some needed Bertini type theorems.
\begin{theorem}\cite[Theorem 1]{tanaka2017semiample}
Let $(X,\Deltaelta)$ be a log canonical (resp. klt) pair over an algebraically closed field where $\Deltaelta$ is an effective $\mathbb{Q}$-divisor. Suppose $D$ is a semiample divisor on $X$ then there is an effective divisor $D'\sim D$ with $(X,\Deltaelta+D')$ log canonical (resp. klt).
\end{theorem}
\begin{corollary}\label{average}
Suppose that $(X,\Deltaelta)$ is a sub klt pair over an algebraically closed field together with $D$ a divisor on $X$ and $\pi\colon (X',\Deltaelta') \to X$ a log resolution of $(X,\Deltaelta)$. Further assume that there is some $D'$ on $X'$ with $\pi_{*}D'=D$, $-(K_{X'}+\Deltaelta'+D')$ $\pi$-nef, $(X',\Deltaelta')$ sub klt and $D'$ semiample. Then there is $E \sim D$ on $X$ effective with $(X,\Deltaelta+E)$ sub klt. If in fact $(X,\Deltaelta)$ is $\epsilon$-klt then we may choose $E$ such that $(X,\Deltaelta+E)$ is also.
\end{corollary}
\begin{proof}
We may write $\Deltaelta'=\Deltaelta_{p}-\Deltaelta_{n}$ as the difference of two effective divisors. Since $(X',\Deltaelta')$ is log smooth we must have that $(X',\Deltaelta_{p})$ is klt. Thus by the preceding theorem there is some $E' \sim D'$ with $(X',\Deltaelta_{p}+E')$ klt. Then we must also have that $(X',\Deltaelta'+E')$ is sub klt.
Write $E=\pi_{*}E'$, then $R=\pi^{*}(K_{X}+\Deltaelta+E)- (K_{X'}+\Deltaelta'+E')\equiv_{\pi}-(K_{X'}+\Deltaelta'+D')$ is $\pi$-nef and exceptional. Hence by the negativity lemma we have that $-R$ is effective, and $\pi^{*}(K_{X}+\Deltaelta+E) \leq (K_{X'}+\Deltaelta'+E')$ giving that $(X,\Deltaelta +E)$ is sub klt.
If $(X,\Deltaelta)$ is $\epsilon$-klt then so is $(X',\Deltaelta_{p})$. Let $\delta =\min (1-\epsilon-c_{i})$ where $c_{i}$ are the coefficients of $\Deltaelta_{p}$ and take $m \in \mathbb{N}$ such that $\frac{1}{m} < \delta$. Applying the previous theorem to $mD'$ instead of $D'$ yields $E'' \sim mD'$ with $(X',\Deltaelta_{p}+E'')$ klt. Taking $E'=\frac{1}{m}E''$ and continuing as above gives the required divisor.
\end{proof}
\begin{theorem}\cite[Corollary 1.6]{patakfalvi2017singularities}\label{smoothness}
Let $f\colon X \to Z$ be a projective fibration of relative dimension $2$ from a terminal variety with $f_{*}\ox=\mathcal{O}_{Z}$ over a perfect field of positive characteristic $p > 7$, such that $-K_{X}$ is ample over $Z$. Then a general fibre of $f$ is smooth.
\end{theorem}
\begin{theorem}[Bertini for residually separated morphisms]\cite[Theorem 1]{cumino1986axiomatic}\label{Bertini}
Let $f\colon X \to \mathbb{P}^{n}$ be a residually separated morphism of finite type from a smooth scheme over an algebraically closed field. Then the pullback of a general hyperplane $H$ on $\mathbb{P}^{n}$ is smooth.
\end{theorem}
Here residually separated means that for every point $x\in X$ the induced map on residue fields $\ox[\mathbb{P}^{n}, f(x)] \to \ox[X,x]$ is a separable extension.
\section{Conic Bundles}\label{S-CB}
In this section the ground field will always be algebraically closed of characteristic $p> 0$. In some results we put additional restrictions on the characteristic, most often that $p \neq 2$.
We start with some useful results on finite morphisms and klt singularities.
\begin{definition}
Take a finite, separable and dominant morphism of normal varieties $f\colon X \to Y$.
If $D$ is a divisor on $Y$ then $f$ is said to be \emph{tamely ramified over $D$} if for every prime divisor $D'$ lying over $D$ the ramification index is not divisible by $p$ and the induced residue field extension is separable.
Moreover $f$ is said to be \emph{divisorially tamely ramified} if for any proper birational morphism of normal varieties $Y' \to Y$ we have the following. If $X' \to X$ is the normalisation of the base change $X\times_{Y}Y'$, and $f'\colon X'\to Y'$ the induced map, then $f'$ is tamely ramified over every prime divisor in $Y'$.
If instead $f$ is generically finite, we say it is divisorially tamely ramified if the finite part of its Stein factorisation is so. Equally if either of $X$ or $Y$ is not normal, $f\colon X \to Y$ is said to be divisorially tamely ramified if the induced morphism on their normalisations is.
\end{definition}
If $f$ is generically finite of degree $d <p$ then it is always divisorially tamely ramified. If $D'$ lies over $D$ then both the ramification index, $r_{D'}$ and the inertial degree, $e_{D'}$ are bounded by $d$, in fact $d= \sum_{f(D')=D} r_{D'}e_{D'}$ by multiplicativity of the norm. This remains the case on any higher birational model.
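For concreteness, the elementary estimate behind this observation is the following (it only unwinds the displayed identity, nothing new): if some ramification index $r_{D'}$ were divisible by $p$, then
\[
d=\sum_{f(D')=D}r_{D'}e_{D'}\;\geq\; r_{D'}\;\geq\; p\;>\;d,
\]
a contradiction; similarly each residue field extension has degree $e_{D'}\leq d<p$, so it cannot be inseparable. The same computation applies on any higher birational model.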
\begin{lemma}
Let $f\colon Y \to X$ be a dominant, separable, finite morphism of normal varieties. Suppose that $K_{X}$ is $\mathbb{Q}$-Cartier. Then $K_{Y}=f^{*}K_{X}+\Delta$ where $\Delta \geq 0$. Further, if $f$ is divisorially tamely ramified, then for $Q \in Y$ a codimension $1$ point lying over $P \in X$ we have $\text{Coeff}_{Q}(\Delta)=r_{Q}-1$ where $r_{Q}$ is the ramification index of $f$ at $Q$.
\end{lemma}
\begin{proof}
By localising at the codimension $1$ points of $X$ we reduce to the Riemann-Hurwitz-Hasse formula, which shows that $\Delta$ exists as required and that $\text{Coeff}_{Q}(\Delta)=\delta_{Q}$ where $\delta_{Q} \geq r_{Q}-1$, with equality when $p\nmid r_{Q}$. In particular when $f$ is divisorially tamely ramified we have $\delta_{Q}=r_{Q}-1$.
\end{proof}
The singularities of the domain and image of a finite, divisorially tamely ramified morphism are closely connected, as the following lemma shows.
\begin{lemma}\cite[Proposition 3.16]{kollar1997singularities} \label{finite adjunction}
Let $f\colon X' \to X$ be a dominant, divisorially tamely ramified, finite morphism of normal varieties of degree $d$. Fix $\Delta$ on $X$ with $K_{X}+\Delta$ $\mathbb{Q}$-Cartier. Write $K_{X'}+\Delta'=f^{*}(K_{X}+\Delta)$. Then the following hold:
\begin{enumerate}
\item $1+\text{TDisc}(X,\Delta) \leq 1+\text{TDisc}(X',\Delta') \leq d(1+\text{TDisc}(X,\Delta))$.
\item $(X,\Delta)$ is sub klt (resp. sub lc) iff $(X',\Delta')$ is sub klt (resp. sub lc).
\end{enumerate}
\end{lemma}
\begin{proof}
By restricting to the smooth locus of $X$, which contains all the codimension $1$ points of $X$, we may suppose that $K_{X}$ is Cartier and apply the previous lemma. Hence we get $\Delta'=f^{*}(K_{X}+\Delta)-K_{X'}$ where for $Q\in X'$ lying over $P\in X$ we have $\text{Coeff}_{Q}(\Delta')=r_{Q}(\text{Coeff}_{P}(\Delta))-(r_{Q}-1)$.
Suppose that we have proper birational morphisms $\pi\colon Y \to X$ and we write $Y'$ for the normalisation of $Y\times_{X} X'$ so that we have the following diagram.
\[\begin{tikzcd}
Y' \arrow[d, "\pi'"] \arrow[r, "g"] & Y \arrow[d, "\pi"] \\
X' \arrow[r, "f"] & X
\end{tikzcd}\]
Let $E'$ be a divisor on $Y'$ exceptional over $X'$ and $E$ the corresponding divisor on $Y$.
At $E'$ we can write $$K_{Y'}= \pi'^{*}(K_{X'}+\Delta')+a(E',X',\Delta')E'=g^{*}\pi^{*}(K_{X}+\Delta)+a(E',X',\Delta')E'$$
essentially by definition. Conversely however we have $K_{Y'}=g^{*}K_{Y}+\delta_{E'}E'$ which may be rewritten as
$$K_{Y'}=g^{*}(\pi^{*}(K_{X}+\Delta)+a(E,X,\Delta)E)+\delta_{E'}E'.$$
In particular, equating the two descriptions and using that $\delta_{E'}=r_{E'}-1$ by the previous lemma, we have that
\[r_{E'}a(E,X,\Delta)+(r_{E'}-1)=a(E',X',\Delta')\]
and thus $a(E,X,\Delta)+1=\frac{1}{r_{E'}}(a(E',X',\Delta')+1)$ with $1 \leq r_{E'} \leq d$.
Since, by a theorem of Zariski \cite[Theorem VI.1.3]{kollar1999rational}, every valuation with centre on $X'$ is realised by some birational $Y' \to X'$ occurring as a pullback of a birational morphism $Y \to X$, this is sufficient to show that $1+\text{TDisc}(X,\Delta) \leq 1+\text{TDisc}(X',\Delta') \leq d(1+\text{TDisc}(X,\Delta))$. The second part then follows.
\end{proof}
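To illustrate how part (1) of \autoref{finite adjunction} follows from this relation, consider for instance a hypothetical divisorially tamely ramified cover of degree $d=2$, so that $r_{E'}\in\{1,2\}$ for every $E'$. Then the displayed relation reads
\[
a(E',X',\Delta')+1\;=\;r_{E'}\bigl(a(E,X,\Delta)+1\bigr)\;\in\;\bigl[\,a(E,X,\Delta)+1,\;2\bigl(a(E,X,\Delta)+1\bigr)\,\bigr],
\]
and taking the infimum over all such divisors gives $1+\text{TDisc}(X,\Delta) \leq 1+\text{TDisc}(X',\Delta') \leq 2\bigl(1+\text{TDisc}(X,\Delta)\bigr)$, which is the claimed inequality with $d=2$.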
We will be interested in conic bundles satisfying certain tameness criteria. This in turn will allow us to control the singularities arising on the base of the fibration. This is done in \autoref{cbf}.
\begin{definition}
A \emph{conic bundle} is a threefold sub pair $(X,\Delta)$ equipped with a morphism $f\colon X \to Z$ where $Z$ is a normal surface, $f_{*}\ox=\mathcal{O}_{Z}$, the generic fibre is a smooth rational curve and $K_{X}+\Delta=f^{*}D$ for some $\mathbb{Q}$-Cartier divisor $D$ on $Z$. We will call it \emph{regular} if $X$ and $Z$ are smooth and $f$ is flat; and \emph{terminal} if $X$ is terminal and $f$ has relative Picard rank $1$. Further we call it (sub) $\epsilon$-klt or log canonical if $(X,\Delta)$ is.
If each horizontal component of $\Delta$ is effective and divisorially tamely ramified over $Z$ then the conic bundle is said to be \emph{tame}.
For $P$ a codimension $1$ point of $Z$ we define $$d_{P}=\max\{t\colon (X,\Delta+tf^{*}P) \text{ is lc over the generic point of } P\}.$$
The \emph{discriminant divisor} of $f\colon X \to Z$ is $D_{Z}=\sum_{P \in Z}(1-d_{P})P$.
The \emph{moduli part} $M_{Z}$ is then given by $D-D_{Z}-K_{Z}$.
\end{definition}
In positive characteristic the discriminant divisor is not always well defined for a general fibration, it may be that $d_{P} \neq 1$ for infinitely many $P$. This can be caused by either a failure of generic smoothness or inseparability of the horizontal components of $\Deltaelta$ over the base.
Suppose, however, that $(X,\Deltaelta) \to Z$ is a tame conic bundle. We may take a log resolution $X' \to X$ as this does not change $d_{P}$ and is still a tame conic bundle by \autoref{tame base change}. Thus we may suppose that $\Deltaelta$ is an SNC divisor and hence near $P$, $\Deltaelta+f^{*}P$ is also SNC for all but finitely many $P$, by generic smoothness of the fibres and as the horizontal components are divisorially tamely ramified over $Z$. Hence in fact $D_{Z}$ is well defined in this case.
\begin{lemma}\label{tame base change}
Let $f\colon (X,\Delta) \to Z$ be a tame conic bundle, and $X' \to X$ either a birational morphism from a normal variety or the base change by a divisorially tamely ramified morphism from a normal variety $g\colon Z' \to Z$. Then there is $\Delta'$ with $(X',\Delta')$ a tame conic bundle over $Z$ or $Z'$ as appropriate. Moreover in this case $X' \to X$ is also divisorially tamely ramified.
\end{lemma}
\begin{proof}
If $\pi\colon X' \to X$ is a birational morphism with $K_{X'}+\Deltaelta'=\pi^{*}(K_{X}+\Deltaelta)$ then the only horizontal components of $\Deltaelta'$ are the strict transforms of horizontal components of $\Deltaelta$. Take such a component $D'$ then, normalising if necessary, it factors $D' \to D \to Z$ with $D \to Z$ divisorially tamely ramified but then it must itself be divisorially tamely ramified.
Suppose then $g\colon Z' \to Z$ is generically finite. From above, and by Stein factorisation we may freely suppose that $g$ is finite. Then the base change morphism $g'\colon X' \to X$ is a finite morphism of normal varieties and we may induce $\Deltaelta'$ with $g'^{*}(K_{X}+\Deltaelta)=K_{X'}+\Deltaelta'$. Again the horizontal components of $\Deltaelta'$ are precisely the base changes of the horizontal components of $\Deltaelta$.
It suffices to show then that if $D$ is a horizontal divisor on $X$ such that $D \to Z$ is divisorially tamely ramified then $D' \to Z'$, the base change, is also divisorially tamely ramified. Certainly $D' \to Z'$ is still separable. Suppose $C$ is any curve on $Z$ and $C'$ a curve on $Z'$ lying over it. In turn take any $C_{D'}$ lying over $C'$ on $D'$. Then $C_{D'}$ is the base change of some $C_{D}$. Since $C_{D} \to C$ is separable, so too is $C_{D'} \to C'$. Equally as the ramification indices of $C', C_{D}$ are not divisible by $p$, neither can the ramification index of $C_{D'}$ over $C_{D}$ be. This same argument holds after base change by any higher birational model of $Z$, and by \cite[Theorem VI.1.3]{kollar1999rational} every valuation with centre on $Z'$ is can be realised on the pullback of some such model. Thus $D' \to Z'$ is divisorially tamely ramified and hence $(X',\Deltaelta') \to Z'$ is tame.
It is enough to show that $X' \to X$ is divisorially tamely ramified after base changing by a higher birational model of $Z$. In particular, after taking a flatification we may assume $f\colon X \to Z$ is flat. Now suppose $D$ is a divisor on $X$, lying over some curve $C$ on $Z$. We have $f^{*}C=\sum E_{i}$ with $E_{0}=D$. Let $C_{j}$ be the curves lying over $C$ in $Z'$, then if $E_{i,j}$ are the divisors lying over $E_{i}$, for some fixed $i$, they are in one-to-one correspondence with the $C_{j}$. We have $g'^{*}f^{*}C=\sum r_{i,j}E_{i,j}=\sum_{j} r_{i}\sum _{i}E_{j}$ and thus none of the $r_{i,j}$, in particular the $r_{0,j}$ are divisible by $p$. Moreover the $E_{0,j} \to E_{0}$ must be separable since the $C_{j} \to C$ are.
The same holds after taking a higher birational model of $X$, and thus $X' \to X$ is divisorially tamely ramified as claimed.
\end{proof}
In practice we deal exclusively with tame conic bundles arising in the following fashion.
\begin{lemma}\label{S2}
Suppose that $(X,\Delta)$ is klt and LCY, equipped with a Mori fibre space structure over a surface $Z$, and that the horizontal components of $\Delta$ have coefficients bounded below by $\delta$. Then if $X$ is defined over a field of characteristic $p > \frac{2}{\delta}$, $f\colon (X,\Delta) \to Z$ is a tame conic bundle.
\end{lemma}
\begin{proof}
Since $\delta <1$ the characteristic is larger than $2$, and the general fibre is necessarily a smooth rational curve; in particular $X$ is a conic bundle. Let $G$ be the generic fibre, so that $(G,\Delta_{G})$ is klt and $G$ is also a smooth rational curve. Then if $D$ is some horizontal component of $\Delta$ the degree of $f\colon D \to Z$ is precisely the degree of $D|_{G}$. However $\delta \deg D|_{G} \leq \deg \Delta|_{G} = -\deg K_{G} = 2$ and thus $\deg D \leq \frac{2}{\delta} < p$. Replacing $D$ by its normalisation $D'$ does not change the degree, so $D'\to Z$ has degree $<p$ and thus is divisorially tamely ramified.
\end{proof}
\begin{remark}
One might be tempted to ask if this bound could be further improved for $\epsilon$-klt pairs, $(X,\Deltaelta)$. In this case we have $(G,\Deltaelta_{G})$ is $\epsilon$-klt and so one might attempt to use a bound of the form $p > \frac{1-\epsilon}{\delta}$ to prevent any component of $\Deltaelta$ mapping inseparably onto the base. It does not seem however that such a bound would ensure that every component is divisorially tamely ramified and there may be wild ramification away from the general fibre.
\end{remark}
\begin{theorem}\label{cbf}
Let $f\colon (X,\Delta) \to Z$ be a sub $\epsilon$-klt, tame conic bundle. Then for some choice of $M\sim_{\mathbb{Q}} M_{Z}$ we have $(Z,D_{Z}+M)$ sub $\epsilon$-klt. If in fact $\Delta \geq 0$, we may take $D_{Z},M$ to be effective also.
\end{theorem}
\begin{remark}
The implicit condition that $(X,\Deltaelta)$ is a threefold pair is necessary only in that it assures the existence of log resolutions. This result holds in dimension $d$ so long as the existence of log resolutions of singularities holds in dimensions $d,d-1$.
\end{remark}
We will prove this in several steps. First we consider the case that $\Deltaelta^{h}$, the horizontal part of $\Deltaelta$, is a union of sections of $f$. In this setting we have an even stronger result. After moving to a higher birational model, we have that $(Z,D_{Z})$ is klt and $M_{Z}$ is semiample.
\begin{lemma}\label{ShokurovAdjunction}
Suppose that $f\colon (X,\Delta) \to Z$ is a sub $\epsilon$-klt conic bundle with $\Delta^{h}$ effective and with support that is generically a union of sections of $f$. Then there is a birational morphism $\pi \colon Z'\to Z$ with $(Z',D_{Z'})$ sub $\epsilon$-klt and $M_{Z'}$ semiample. In particular for some choice of $M\sim M_{Z'}$ we have $(Z,D_{Z}+\pi_{*}M)$ sub $\epsilon$-klt.
\end{lemma}
\begin{proof}
This result is well known and essentially comes from \cite{prokhorov2009towards}. Details specific to positive characteristic can be found in \cite[Section 4]{das2016adjunction}, \cite[Lemma 3.1]{witaszek2017canonical} and \cite[Lemma 6.7]{cascini2013base}.
We sketch some key points of the proof.
Since generically $X \to Z$ is a $\mathbb{P}^{1}$ bundle and the horizontal part of $\Deltaelta$ is a union of sections, we induce a rational map $\phi\colon Z \dashrightarrow \overline{\mathcal{M}}_{0, n}$, the moduli space of $n$-pointed stable curves of genus $0$. By taking appropriate resolutions we may suppose that $(X,\Deltaelta)$ is log smooth, $Z$ is smooth and $\phi$ is defined everywhere on $Z$. Blowing down certain divisors on the universal family over $\overline{\mathcal{M}}_{0, n}$ and pulling back to $Z$ we may further assume that $X\to Z$ factors through a $\mathbb{P}^{1}$ bundle over $Z$ via a birational morphism.
Then working locally over each point of codimension $1$ and applying $2$ dimensional inversion of adjunction, we see that in fact $D_{Z}$ is determined by the vertical part of $\Deltaelta$, indeed $\Deltaelta^{V}=f^{*}D_{Z}$, and that $M_{Z}$ is the pullback of an ample divisor on $\overline{\mathcal{M}}_{0, n}$ by $\phi$. In particular $M_{Z}$ is semiample and $D_{Z}$ takes coefficients in the same set as $\Deltaelta^{V}$, therefore they are bounded above by $1-\epsilon$.
From the following lemma, we see that in fact we may further suppose that $(Z,D_{Z})$ is log smooth. Since if $\pi\colon (Z',\Deltaelta')\to Z$ is a log resolution of $(Z,D_{Z})$ we have $K_{Z'}+\Deltaelta'=\pi^{*}(K_{Z}+D_{Z})$, $\pi^{*}M_{Z}=M_{Z'}$ and $K_{Z'}+D_{Z'}+M_{Z'}=\pi^{*}(K_{Z}+D_{Z}+M_{Z})=K_{Z'}+\Deltaelta'+M_{Z'}$, giving $D_{Z'}=\Deltaelta'$ as required. In particular then \autoref{average} gives that $(Z,D_{Z}+M_{Z})$ is sub $\epsilon$-klt.
\end{proof}
\begin{lemma}
Suppose that $Z$ is as given above and $Z'\to Z$ is the birational model found in the proof with $M_{Z'}$ semiample. Suppose further that $Y$ is a normal variety admitting a birational morphism $\pi\colon Y \to Z'$. If $M_{Y}$ is the moduli part coming from the induced conic bundle $X_{Y} \to Y$ then $\pi^{*}M_{Z'}=M_{Y}$.
\end{lemma}
\begin{proof}
Let $\phi\colon Z' \to \overline{\mathcal{M}}_{0, n}$ and $\chi\colon Y \dashrightarrow \overline{\mathcal{M}}_{0, n}$ be the rational maps induced by the base changes of $X\to Z$. By assumption $\phi$ is a morphism.
Although $\chi$ is a priori defined only on some open set, it must factor through $\phi$ whenever it is defined, and hence extends to a full morphism $\chi=\phi \circ \pi$.
Write then that $M_{Z'}=\phi^{*}A$ and $M_{Y}=\chi^{*}A'$. A more careful study of the proof of the previous result would give $A=A'$ and the result follows. However for simplicity one can also note that $M_{Z'}=\pi_{*}M_{Y}=\pi_{*}\chi^{*}A'=\phi^{*}A'$, so that $M_{Y}=\pi^{*}\phi^{*}A'=\pi^{*}M_{Z'}$.
\end{proof}
We now reduce from the general case of \autoref{cbf} to the special case of \autoref{ShokurovAdjunction} to prove the theorem. This requires the following lemma, due essentially to Ambro.
\begin{lemma}\cite[Theorem 3.2]{ambro1999adjunction}
Suppose that $f\colon (X,\Delta) \to Z$ is a tame conic bundle. Let $g\colon Z' \to Z$ be a finite, divisorially tamely ramified morphism of normal varieties and $(X',\Delta') \to Z'$ the induced fibration. Then $(X',\Delta') \to Z'$ is tame and $g^{*}(K_{Z}+D_{Z})=K_{Z'}+D_{Z'}$ for $D_{Z'}$ the induced discriminant divisor of $(X',\Delta') \to Z'$.
\end{lemma}
\begin{proof}
By \autoref{tame base change}, $(X',\Delta') \to Z'$ is tame and hence $D_{Z'}$ is well defined by the discussion preceding \autoref{tame base change}.
It remains to show that $g^{*}(K_{Z}+D_{Z})=K_{Z'}+D_{Z'}$. To see this fix $Q$ a prime of $Z'$ and write $r_{Q}$ for the degree of the induced map onto some $P$ a prime of $Z$.
From the proof of \autoref{finite adjunction} we see that if $K_{Z'}+B=g^{*}(K_{Z}+D_{Z})$ then $1-\text{Coeff}_{Q}(B)=r_{Q}(1-\text{Coeff}_{P}(D_{Z}))$. In particular it then suffices to show that $d_{Q}=r_{Q}d_{P}$. We consider two cases.
Suppose that $c \leq d_{P}$. Then we have $(X,\Delta+cf^{*}P)$ log canonical over $P$. Hence $(X',\Delta'+cg'^{*}f^{*}P)$ is also log canonical by \autoref{finite adjunction}, where $g'^{*}f^{*}P=f'^{*}g^{*}P$. But $f'^{*}g^{*}P \geq r_{Q}f'^{*}Q$ so it must be that $d_{Q} \geq r_{Q}c$. Hence in fact $d_{Q} \geq r_{Q}d_{P}$.
Conversely if $c > d_{P}$ then $(X,\Delta+cf^{*}P)$ is not log canonical over $P$. In particular, replacing $X$ with a suitable birational model $X'' \to X$, we may suppose that there is some prime divisor $E$ of $X$ with $f(E)=P$ and $\text{Coeff}_{E}(\Delta+cf^{*}P) > 1$. Similarly there is $E'$ on $X'$ with $g'(E')=E$ and $f'(E')=Q$ which also has $\text{Coeff}_{E'}(\Delta'+cg'^{*}f^{*}P) > 1$; but $\text{Coeff}_{E'}(cg'^{*}f^{*}P)=\text{Coeff}_{E'}(cr_{Q}f'^{*}Q)$ and hence $cr_{Q} \geq d_{Q}$. Thus we have the equality $d_{Q}=r_{Q}d_{P}$.
\end{proof}
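For the reader's convenience, the coefficient bookkeeping that turns $d_{Q}=r_{Q}d_{P}$ into the claimed pullback formula is the following (it only unwinds the definitions together with the formula recalled above):
\[
\text{Coeff}_{Q}(D_{Z'})\;=\;1-d_{Q}\;=\;1-r_{Q}d_{P}\;=\;r_{Q}\bigl(1-d_{P}\bigr)-(r_{Q}-1)\;=\;r_{Q}\,\text{Coeff}_{P}(D_{Z})-(r_{Q}-1),
\]
which is precisely the coefficient at $Q$ of $B=g^{*}(K_{Z}+D_{Z})-K_{Z'}$ computed via \autoref{finite adjunction}; hence $B=D_{Z'}$ as required.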
Note that in the setup above $g^{*}(K_{Z}+D_{Z}+M_{Z})=K_{Z'}+D_{Z'}+M_{Z'}$ so we must have that $M_{Z'}=g^{*}M_{Z}$.
\begin{lemma}
Suppose that $f\colon X \to Z$ is a tame conic bundle. Then there is a finite, divisorially tamely ramified morphism $g\colon Z' \to Z$ with $g^{*}(K_{Z}+D_{Z}+M_{Z})=K_{Z'}+D_{Z'}+M_{Z'}$ and a birational morphism $h\colon Z'' \to Z'$ such that $M_{Z''}$ is semiample.
\end{lemma}
\begin{proof}
Let $D$ be any horizontal component of $\Delta$ which is not a section of $f$; then $f$ restricts to a divisorially tamely ramified morphism $D \to Z$. After replacing $D$ with its normalisation and Stein factorising, we may suppose that $D\to Z$ is finite with $D$ normal. Taking the fibre product of $X \to Z$ with the normalisation $\tilde{D}$ of $D$ we find $X' \to \tilde{D}$ satisfying the initial conditions, but with this component of $\Delta$ now generically a section.
In this fashion, we eventually get to $Z' \to Z$ with $g^{*}(K_{Z}+D_{Z}+M_{Z})=K_{Z'}+D_{Z'}+M_{Z'}$ and all the horizontal components of $\Deltaelta$ being generically sections. Hence we may apply \autoref{ShokurovAdjunction} to give the result.
\end{proof}
\begin{proof}[Proof of \autoref{cbf}]
Take $f\colon (X,\Deltaelta) \to Z$ as given. Then we have $g\colon Z' \to Z$ and $h\colon Z''\to Z'$ as above. Write $d$ for the degree of $g$.
Fix $B_{Z''}\sim M_{Z''}$ making $(Z'',D_{Z''}+B_{Z''})$ sub $\epsilon$-klt. Write $B_{Z}=\frac{1}{d}g_{*}h_{*}B_{Z''}$. It is sufficient to show that $(Z,D_{Z}+B_{Z})$ is sub $\epsilon$-klt, since $B_{Z} \sim_{\mathbb{Q}} M_{Z}$ is always effective and $D_{Z} \geq 0$ whenever $\Delta$ is.
Let $Y \to Z$ be a log resolution of $(Z,D_{Z}+B_{Z})$ and take $Y',Y''$ appropriate fibre products to form the following diagram.
\[\begin{tikzcd}
Y'' \arrow[r, "\pi''"] \arrow[d, "h'"] & Z'' \arrow[d, "h"] \\
Y' \arrow[r, "\pi'"] \arrow[d, "g'"] & Z' \arrow[d, "g"] \\
Y \arrow[r, "\pi"] & Z
\end{tikzcd}\]
We have that $M_{Y''}=\pi''^{*}M_{Z''}$, so write $B_{Y''}=\pi''^{*}B_{Z''}$ and $\frac{1}{d}g'_{*}h'_{*}B_{Y''}=B_{Y}$. Then we must have that $\pi_{*}B_{Y}=B_{Z}$ and $K_{Y}+D_{Y}+B_{Y}\sim \pi^{*}(K_{Z}+D_{Z}+B_{Z})$. Note further that $\pi^{*}B_{Z}$ and $B_{Y}$ differ only over the exceptional locus, hence $B_{Y}$ has SNC support. Indeed $D_{Y}+B_{Y}$ has SNC support. Further since $(Y'',D_{Y''}+B_{Y''})$ is sub $\epsilon$-klt and $g'_{*}h'_{*}(D_{Y''}+B_{Y''})=d(D_{Y}+B_{Y})$ it must be that $D_{Y}+B_{Y}$ have coefficients strictly less than $1-\epsilon$, thus $(Y,D_{Y}+B_{Y})$ is sub $\epsilon$-klt and therefore so is $(Z,D_{Z}+B_{Z})$.
\end{proof}
\subsection{Generic smoothness}
We will also need to consider the pullbacks of very ample divisors on the base of a suitably smooth conic bundle. This is done to obtain an adjunction result which is required in the next section. We work here under the assumption the ground field is closed of positive characteristic $p>2$. This requirement on the characteristic is due entirely to the following lemma.
\begin{lemma}\label{disc-smooth}
Let $(X,\Delta) \to Z$ be a regular conic bundle. Then there is some, possibly reducible, curve $C$ on $Z$ such that for any $P \in Z$ the fibre, $F_{P}$, over $P$ is determined as follows:
\begin{enumerate}
\item If $P \in Z\setminus C$ then $F_{P}$ is a smooth rational curve.
\item If $P \in C\setminus \textup{Sing}(C)$ then $F_{P}$ is the union of two rational curves meeting transversally.
\item If $P \in \textup{Sing}(C)$ then $F_{P}$ is a non-reduced rational curve.
\end{enumerate}
Further if $H$ is a smooth curve meeting $C$ transversely away from $\textup{Sing}(C)$ then $f^{*}H$ is smooth.
\end{lemma}
\begin{proof}
This is essentially \cite[Proposition 1.8]{sarkisov1983conic}. We sketch the proof as our statement is slightly different.
Since $X$ is smooth $-K_{X}$ is relatively ample and defines an embedding into a $\mathbb{P}^{2}$ bundle over $Z$.
Fix any point $P$ of $Z$; then in some neighbourhood $U$ around $P$, $X_{U}$ is given inside $\mathbb{P}^{2} \times U$ by the vanishing of $x^{t}Qx$. Here $Q$ is a symmetric, diagonalisable $3\times 3$ matrix taking coefficients in $\kappa[U]$, unique up to invertible linear transformation, so we may take $C$ to be the divisor on which the rank of $Q$ is less than $3$. That $Q$ has rank $3$ on some open set follows from smoothness of the generic fibre.
Then the singular points of $C$ are precisely the locus on which $Q$ has rank less than $2$. By taking a diagonalisation of $Q$ we may write $X_{U}$ as the vanishing of $\sum A_{i}x_{i}^{2}$ for some $A_{i} \in \kappa[U]$ and we obtain the classification of fibres by consideration of the rank.
Suppose then $H$ is a smooth curve as given. Away from $C$, $f^{*}H$ is clearly smooth, so it suffices to consider the intersection with $C$, however we can see it is smooth here by computing the Jacobian using the local description of $X$ given above.
\end{proof}
\begin{theorem}[Embedded resolution of surface singularities]\cite[Theorem 1.2]{cutkosky2009resolution}
Suppose that $V$ is a non-singular threefold, $S$ a reduced surface in $V$ and $E$ a simple normal crossings divisor on $V$ then there is a sequence of blowups $\pi\colon V_{n} \to V_{n-1} \to ... V$ such that the strict transform $S_{n}$ of $S$ to $V_{n}$ is smooth. Further each blowup is the blowup of a non-singular curve or a point and the blown up subvariety is contained in the locus of $V_{i}$ on which the preimage of $S+E$ is not log smooth.
\end{theorem}
\begin{corollary}\label{c_res_1}
Suppose $(X,\Deltaelta) \to Z$ is a regular, tame conic bundle and we fix a very ample linear system $|A|$ on $Z$. Then there is a log resolution $(X',\Deltaelta') \to (X,\Deltaelta)$ such that for any sufficiently general element $H\in |A|$, its pullback $G'$ to $X'$ has $(X',G'+E)$ log smooth for $E$ the reduced exceptional divisor of $\pi$.
\end{corollary}
\begin{proof}
By the previous theorem we may find birational morphism $\pi\colon X' \to X$ which is a log resolution of $(X,\Deltaelta)$ factoring as blowups $X'=X_{n} \to X_{n-1} \to .... X_{0}=X$ of smooth subvarieties contained in the non-log smooth locus of each step.
We show first a general $G'$ is smooth. At each stage we blow-up smooth curves $V_{i}$ in the non-log smooth locus. Let $G_{i}$ be the pullback of $H$ to $X_{i}$, suppose for induction it is smooth. That $G_{0}$ is smooth is the content of \autoref{disc-smooth} and so the base case of the induction argument holds.
We may assume that $f_{i,*}V_{i}=V_{Z,i}$ is a curve for $f_{i}\colon X_{i} \to X \to Z$ else a general $H$ avoids it and so a general $G_{i+1}$ is smooth also. Note that each vertical component of $\Deltaelta$ is log smooth near the generic point of their image, since $X$ is a regular conic bundle, so $V_{i}$ must be contained in the strict transform of some horizontal component of $\Deltaelta$. Since $V_{i}$ is not contracted, it follows that $V_{i} \to V_{Z,i}$ is separable as $(X,\Deltaelta,Z)$ is tame. Thus as a general $H$ meets $V_{Z,i}$ transversely, a general $G_{i}$ meets $V_{i}$ transversely and hence a general $G_{i+1}$ is smooth. By induction then $G'=G_{n}$ is smooth.
Suppose that $V$ is a curve contained in the locus on which $\pi^{-1}$ is not an isomorphism that is not contracted by $f$. Then for a general point $P$ of $V$, we claim that the fibre over $P$ is log smooth. As before we argue by induction, the base case being trivially true. Suppose then that we blow up a curve $V_{i}$ lying over $V$ on $X$ and $V_{Z}$ on $Z$. Then $V_{i}$ must meet the fibre over $P$ transversally. Indeed $V_{i} \to V \to V_{Z}$ is separable, as above, forcing $V_{i} \to V$ to be separable also. But then $V_{i}$ meets a general fibre transversally as claimed.
Suppose now that $E$ is an integral exceptional divisor of $X'\to X$. Let $V=\pi_{*}E$, then as before general $G$ meets $V$ transversely if $V$ is a curve, or not at all otherwise. Suppose $V$ is a curve, then for a general point $P$ of $V$, the fibre over $P$ is a system of log smooth curves. Finally then the intersection of a general $G'$ and $E$ is a scheme of pure dimension $1$ contained in the disjoint union of such systems of log smooth curves, in particular it is log smooth.
Suppose then we fix two exceptional divisors $E_{1},E_{2}$ meeting at a curve $V$. Again we suppose that $V$ is not contracted by $f'=f \circ \pi$. Write $\pi_{*}V=V_{X}$ and $f'_{*}V=V_{Z}$. Then $V_{X} \to V_{Z}$ is separable as before and for a general $G'$ meeting $V$ transversely, the intersection of $G$ with $\pi^{*}V'$ is a log smooth system of rational curves, and then $G.V \subseteq G.\pi^{*}V_{X}$ is log smooth, or equally it is finitely many points with multiplicity $1$.
\end{proof}
\begin{theorem}
Let $(X,\Deltaelta) \to Z$ be a regular, tame conic bundle and $|A|$ a very ample linear system on $Z$. Then there is a log resolution $(X',\Deltaelta') \to (X,\Deltaelta)$ such that for a general $H \in |A|$, the pullback $G'$ to $X'$ is smooth with $(X',\Deltaelta'+G')$ log smooth.
\end{theorem}
\begin{proof}
Write $E$ for the reduced exceptional divisor. For a general $H \in |A|$ we let $G=f^{*}H$ be the pullback to $X$. We then take $X'$ as in \autoref{c_res_1}.
Clearly a general $G'$ avoids the intersection of any $3$ components of $\text{Supp}(\Deltaelta')+E$, and from above $(X',G'+E)$ is log smooth. Suppose $D$ is a vertical component of $\Deltaelta$. Then either $G$ can be assumed to avoid it, or to meet it at a smooth fibre. By the usual arguments, since the only non-contracted curves we blow up map separably onto their image, $G'$ meets $D'$ the strict transform of $D$ on $X'$ along a log smooth locus. Further this locus meets any exceptional divisor either transversally or not at all. Now suppose $D_{2}$ is any other component of $\text{Supp}(\Deltaelta')+E$ which does not dominate $Z$. Then if either $D_{2}.D'$ has dimension less than $1$ or is contracted over $Z$ then a general $G'$ avoids it, so suppose otherwise. In which case $D_{2}$ must be exceptional over $X$ with image $V\subseteq D$ on $X$. However $D_{2}.D'$ is just the strict transform of $V$ inside $D'$ and, for a general $G'$, $G'.D_{2}.D$ is log smooth as required.
It remains then to consider the horizontal components of $\Deltaelta$. Let $D$ be any such component and $D'$ its strict transform. Since $(X,\Deltaelta,Z)$ is tame, so is $(X',\Deltaelta',Z)$. In particular then $D' \to Z$ is divisorially tamely ramified and so residually separated over $Z$ away from finitely many points of $Z$. Hence by Bertini's Theorem, \autoref{Bertini}, the pullback of a general $H$, which is just the intersection of a general $G'$ with $D'$ is smooth. Further as $D' \to Z$ is divisorially tamely ramified, if $V$ is any curve on $D'$ not contracted over $Z$ a general $G'|_{D'}$ meets it transversally. Hence for any other component $D_{2}$ of $\text{Supp}(\Deltaelta')+E$ we have $(X',D'+D_{2}+G')$ log smooth for a general $G'$ and the result follows.
\end{proof}
\begin{corollary}\label{tameAdjunction}
Suppose $(X,\Delta,Z)$ is a terminal, sub $\epsilon$-klt, tame conic bundle. Take a general very ample $H$ on $Z$, with $G=f^{*}H$, then
$(G,\Delta_{G}=\Delta|_{G})$ is sub $\epsilon$-klt.
\end{corollary}
\begin{proof}
Throwing away finitely many points of $Z$ we may freely suppose that the conic bundle is regular.
By the previous theorem there is a log resolution $\pi\colon (X',\Delta') \to (X,\Delta)$ with $(X',\Delta'+G')$ log smooth. Write $\pi_{G}\colon G' \to G$ for the restricted map. Then $(K_{X'}+\Delta'+G')|_{G'}=\pi_{G}^{*}(K_{G}+\Delta_{G})=K_{G'}+\Delta'|_{G'}$. However $\Delta'|_{G'}$ is log smooth with coefficients less than $1-\epsilon$ by construction, and hence $(G,\Delta_{G})$ is sub $\epsilon$-klt as required.
\end{proof}
\section{$F$-Split Mori Fibre Spaces}\label{S-MFS}
The aim of this section is to prove the following theorem.
\begin{theorem}\label{setup}
For a field $\kappa$ of positive characteristic we let $S_{\kappa}$ be the set of $\epsilon$-LCY threefold pairs $(X,\Delta)$ with $X$ terminal, globally $F$-split and rationally chain connected over $\kappa$. We further require that $(X,\Delta)$ admits a $K_{X}$ Mori fibration $f\colon (X,\Delta) \to Z$ where either
\begin{enumerate}
\item $Z$ is a smooth rational curve, there is $H$ on $Z$ very ample of degree $1$ and a general fibre $G$ of $X \to Z$ is smooth.
\[\text{or}\]
\item $p>2$ and $(X,\Delta) \to Z$ is a tame, terminal conic bundle such that there is a very ample linear system $|A|$ on $Z$ with $A^{2} \leq c$. In which case $G$, the pullback of a sufficiently general $H \in |A|$, is smooth with $(G,\Delta_{G}=\Delta|_{G})$ $\epsilon$-klt by \autoref{tameAdjunction}.
\end{enumerate}
Then the set of base varieties $$S'=\{X \text{ such that } \exists \Deltaelta \text{ with } (X,\Deltaelta) \in S_{\kappa} \text{ for algebraically closed } \kappa\}$$ is birationally bounded over $\mathbb{Z}$.
\end{theorem}
\begin{remark}
In practice this will be applied to pairs over fields of characteristic $p > 7,\frac{2}{\delta}$ with boundary coefficients bounded below by $\delta$. The constraints on $p$ come from \autoref{smoothness} and \autoref{cbf}, via \autoref{S2}.
\end{remark}
This section is devoted to the proof, but the outline is as follows. We fix a general, very ample divisor $H$ on the base and write $G=f^{*}H$. We then argue that $A=-mK_{X}+nG$ is ample, for $m,n$ not depending on $X,\Delta$ or $G$. This is done by bounding the intersection of $K_{X}$ with curves not contracted by $f$ and generating an extremal ray in the cone of curves. We then show that in fact we may choose these $m,n$ such that $A$ defines a birational map, by lifting sections from $G$ using appropriate boundedness results in lower dimensions. The $F$-split assumption is used to lift sections from $G$ with \autoref{vanish}; it will also be needed to apply \autoref{setup} by ensuring that the bases $Z$ are suitably bounded.
If, for some $t>0$, the non-klt locus of $(X,(1+t)\Delta)$ is contracted then, since $K_{X}+(1+t)\Delta \equiv -tK_{X}$, it follows that every $-K_{X}$ negative extremal ray is generated by a curve $\gamma$ with $K_{X}.\gamma \leq \frac{6}{t}$. In particular, as we have $G.C \geq 1$ for any $-K_{X}$ negative curve $C$, it must be that $-K_{X}+\frac{7}{t}G$ is ample. Clearly for any $(X,\Delta) \to Z$ there is such a $t$; however we wish to find one independent of the pair. For this we may use a result due to Jiang; the original proof is a priori for characteristic $0$, but the proof is arithmetic in nature and holds in arbitrary characteristic.
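To spell out the arithmetic behind the first implication above (with $\gamma$ as there, $K_{X}.\gamma \leq \frac{6}{t}$ and $G.\gamma \geq 1$):
\[
\Bigl(-K_{X}+\tfrac{7}{t}G\Bigr).\gamma \;\geq\; -\tfrac{6}{t}+\tfrac{7}{t}\;=\;\tfrac{1}{t}\;>\;0,
\]
while on the remaining extremal rays both $-K_{X}$ and $G$ are non-negative. This is the inequality that will be used again in \autoref{nAmple}.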
\begin{theorem}\label{Jiang}\cite[Theorem 5.1]{jiang2018birational}
Fix a positive integer $m$ and $\epsilon >0$ a real number. Then there is some $\lambda$ depending only on $m,\epsilon$ satisfying the following property.
Take $(T,B)$ any smooth, projective $\epsilon$-klt surface. Write $B=\sum b_{i}B_{i}$ and suppose $K_{T}+B \equiv N-A$ for $N$ nef and $A$ ample. If $B.N,\sum b_{i}, B^{2} \leq m$ then $(T,(1+\lambda)B)$ is klt.
\end{theorem}
First we show that results of this form lift to characterisations of the non-klt locus of $(X,(1+t)\Deltaelta)$, then show how the result above may be applied here.
\begin{lemma}
We use the notation of \autoref{setup}. Suppose $Z$ is a surface and there is $t$ such that $(G,(1+t)\Delta_{G})$ is klt. Then every curve in the non-klt locus of $(X,(1+t)\Delta)$ is contracted by $f$.
\end{lemma}
\begin{proof}
Let $\pi\colon X' \to X$ be a log resolution of $(X,\Delta+G)$ with $K_{X'}+\Delta'=\pi^{*}(K_{X}+\Delta)$; then $(X',\Delta'+G')$ is log smooth and $\Delta'$ and $G'$ have no common components, where $G'$ is the pullback of $G$. Now $X' \to X$ must also be a log resolution of $(X,(1+t)\Delta)$, and hence if we write $K_{X'}+B=\pi^{*}(K_{X}+(1+t)\Delta)$ then it is also true that $(X',B+G')$ is log smooth and that $B$ and $G'$ have no common components. Hence $(G',B|_{G'})$ is sub klt by assumption and in particular it has coefficients strictly less than $1$.
Suppose $W$ is a non-klt centre of $(X,(1+t)\Delta)$ and $E$ is a prime divisor lying over $W$ inside $X'$ with coefficient at least $1$ in $B$. Since $(X',B+G')$ is log smooth, $E|_{G'}$ is an integral divisor, and it is trivial if and only if $E$ and $G'$ do not meet. As the coefficients of $B|_{G'}$ are strictly less than $1$ we must have $E|_{G'} =\lfloor E|_{G'} \rfloor =0$, and so $E$ does not meet $G'$. Hence $H$ does not meet $f(\pi(E))=f(W)$. In particular if $C$ is a curve in the non-klt locus, then there is an ample divisor $H$ on $Z$ not meeting $f(C)$. This is possible only if $f(C)$ is a point.
\end{proof}
\begin{lemma}
Using the notation of \autoref{setup} suppose that $Z$ is a curve and write $Y$ for the generic fibre of $f\colon X\to Z$. If there is $t$ such that $(Y,(1+t)\Deltaelta_{Y})$ is klt, then every curve in the non-klt locus of $(X,(1+t)\Deltaelta)$ is contracted by $f$.
\end{lemma}
\begin{proof}
This follows essentially as above.
Take a log resolution $\pi\colon (X',\Delta') \to (X,\Delta)$. Write $Y'$ for the generic fibre of $X' \to Z$. Then $(Y',\Delta'|_{Y'}) \to (Y,\Delta_{Y})$ is a log resolution. Again write $K_{X'}+B=\pi^{*}(K_{X}+(1+t)\Delta)$. Then if $B$ has a component $D$ with coefficient at least $1$, $D$ cannot dominate $Z$, else its restriction to $Y'$ would contradict the assumption that $(Y,(1+t)\Delta_{Y})$ is klt. Hence the non-klt locus of $(X,(1+t)\Delta)$ must be contracted as claimed.
\end{proof}
\begin{lemma}
Using the notation of the previous lemmas, there is some $\lambda$, independent of $(X,\Delta)$ and $G$, for which the non-klt locus of $(X,(1+t)\Delta)$ is contracted for all $t \leq \lambda$.
\end{lemma}
\begin{proof}
We consider two cases.
Suppose first $Z$ is a curve, so the generic fibre $Y$ is a regular del Pezzo surface and $(G,\Delta_{G})$ is $\epsilon$-klt LCY. Then, by the work of Tanaka \cite[Corollary 4.8]{tanaka2019boundedness}, $(-K_{G})^{2} \leq 9$. We write $\Delta_{G}=\sum \lambda_{i}D_{i}$ and, since $-K_{G}$ is ample and Cartier on the smooth surface $G$, we have $D_{i}.(-K_{G}) \geq 1$. Hence $\sum \lambda_{i} \leq \Delta_{G}.(-K_{G}) \leq 9$ and $\Delta_{G}^{2} =(-K_{G})^{2} \leq 9$. We conclude the result holds by \autoref{Jiang} with $N=-K_{G}$ and $A=-K_{G}$.
Suppose then that $Z$ is a surface. Then by \autoref{disc-smooth}, $G$ is a smooth surface, geometrically ruled over a general very ample divisor $H$ on $Z$. Further by \autoref{tameAdjunction}, $(G,\Delta_{G})$ is $\epsilon$-klt and by assumption $K_{G}+\Delta_{G}\sim kF$ where $F$ is the general fibre over $H$ and $H^{2}=k \leq c$. Finally note that $\Delta_{G}^{V} \sim_{f,\mathbb{Q}} 0$.
We may write $\Delta_{G}= \sum \lambda_{i}D_{i}+ \sum \mu_{i}F_{i}$ where the $F_{i}$ are fibres over $H$ and the $D_{i}$ dominate $H$. Since $F_{i}$ is a fibre and $G$ is smooth, each $F_{i}$ is reduced by the genus formula and contains at most $2$ components since $-K_{G}.F_{i}=2$. Further $\Delta_{G}.F=(-K_{G}).F=2$ and hence $\Delta_{G}^{2}=(-K_{G}+kF)^{2}=(-K_{G})^{2}-2kK_{G}.F +(kF)^{2} \leq (-K_{G})^{2}+4c$, which in turn is bounded above by $8+4c$ due to \cite[Proposition 11.19]{buadescu2001algebraic}, since $G$ is a smooth geometrically ruled surface.
It remains then to show that the sum of the coefficients of $\Delta_{G}$ is bounded. Note that $\sum \lambda_{i} \leq \sum \lambda_{i}D_{i}.F =\Delta_{G}.F =2$. We therefore need only bound $\sum \mu_{i}$.
Suppose for contradiction that $w=\sum \mu_{i} >3 +k$. Let $B=\sum \lambda_{i}D_{i} +(1-\frac{3+k}{w})\sum \mu_{i}F_{i} \sim -K_{G}-(F^{1}+F^{2}+F^{3})$, for general fibres $F^{i}$.
Then $(G,B)$ is klt and so by \autoref{cc}, $D=F^{1}+F^{2}+F^{3}$ has 2 connected components, a clear contradiction.
Therefore we may choose $A$ small and ample with $A.\Delta_{G} < c$ and write $N=kF+A$ to satisfy the conditions of \autoref{Jiang}. The result then follows as $\Delta_{G}.N=kF.\Delta_{G}+A.\Delta_{G}\leq 2c+c=3c$ is still bounded.
\end{proof}
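Collecting the estimates of the two cases (this is only bookkeeping with the constants computed above): in the del Pezzo fibration case one may take $m=9$ in \autoref{Jiang}, while in the conic bundle case
\[
\sum_{i} b_{i}\;\leq\;2+(3+k)\;\leq\;5+c,\qquad \Delta_{G}^{2}\;\leq\;8+4c,\qquad \Delta_{G}.N\;\leq\;3c,
\]
so \autoref{Jiang} applies with, say, $m=\max(9,\,8+4c)$, and the resulting $\lambda=\lambda(m,\epsilon)$ depends only on $\epsilon$ and $c$, as required.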
\begin{corollary}\label{nAmple}
There is some $n$ such that for any $(X,\Delta) \to Z$ and $G$ as in \autoref{setup} the divisor $-K_{X}+nG$ is ample.
\end{corollary}
\begin{proof}
Take any $n \geq \frac{7}{\lambda}$ for $\lambda$ as in the previous lemma. Suppose $R$ is a $K_{X}+(1+\lambda)\Delta\equiv -\lambda K_{X}$ negative extremal ray. By construction, every curve in the non-klt locus is contracted by $X \to Z$, so $-\lambda K_{X}$ is positive on $\overline{NE(X)}_{nlc}\setminus \{0\}$. Hence by \autoref{NLCT} any such ray is spanned by a curve $C$ with $0< \lambda K_{X}.C \leq 6$; since $-K_{X}$ is $f$-ample, such a $C$ is not contracted by $f$, and as $G$ is the pullback of an ample Cartier divisor we have $G.C \geq 1$. Hence $(-K_{X}+nG).C \geq n-\frac{6}{\lambda} \geq \frac{1}{\lambda} >0$. In particular $-K_{X}+nG$ is ample as claimed.
\end{proof}
\begin{theorem}\label{t-ample}
Let $(X,\Delta) \to Z$ and $G$ be as in \autoref{setup}. Then there is $t$, not depending on the pair $(X,\Delta)$ nor on $G$, with $-3K_{X}+tG$ ample and defining a birational map.
\end{theorem}
\begin{proof}
Consider first the case that $\dim Z=1$. Then $G$ is a smooth del Pezzo surface, so $-3K_{G}$ is globally generated by \cite[Proposition 2.14]{bernasconi2022pezzo}. Let $G_{1},G_{2}$ be other general fibres and consider
\[0 \to \ox(-3K_{X}+kG-G_{1}-G_{2}) \to \ox(-3K_{X}+kG) \to \mathcal{O}_{G_{1}}(-3K_{G_{1}})\oplus \mathcal{O}_{G_{2}}(-3K_{G_{2}}) \to 0.\]
Since $X$ is globally $F$-split, $H^{i}(X,A)=0$ for all $i>0$ and $A$ ample by \autoref{vanish}. In particular $H^{1}(X,\ox(-3K_{X}+kG-G_{1}-G_{2}))$ vanishes when $k\geq 3n+2$, for $n$ as given by the preceding corollary. Therefore we may lift sections of $-3K_{G_{i}}$ to see that $-3K_{X}+kG$ defines a birational map for any $k \geq 3n+2$.
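One way to see the required ampleness explicitly (only rewriting the divisors already introduced, and using that $G_{1}\sim G_{2}\sim G$ as $H$ has degree $1$ on the rational curve $Z$):
\[
-3K_{X}+kG-G_{1}-G_{2}\;\sim\;3(-K_{X}+nG)+(k-3n-2)G,
\]
which is ample for $k\geq 3n+2$, being the sum of an ample and a nef divisor.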
Suppose instead that $\dim Z=2$, so $G$ is a conic bundle. Choose a general $H'\sim H$ on $Z$ and let $G'$ be its pullback. Consider $A_{k}=(-K_{X}+kG)|_{G'}=-K_{G'}+(k-1)dF$ for $d \geq 1$, where $F$ is the general fibre of $G'\to H'$. Then $A_{k}$ is ample for $k >n$ and is Cartier since $G'$ is smooth. In particular, by the Fujita conjecture for smooth surfaces \cite[Corollary 2.5]{terakawa1999d}, $K_{G'}+4A_{k}$ is very ample. Choosing suitable $k,k'$ we may write $K_{G'}+4A_{k}=-3K_{G'}+4(k-1)dF=(-3K_{X}+k'G)|_{G'}$. Consider now
\[0 \to \ox(-3K_{X}+(k'-1)G)\to \ox(-3K_{X}+k'G)\to \mathcal{O}_{G'}(-3K_{G'}+4(k-1)dF) \to 0.\]
Again the higher cohomology of $-3K_{X}+(k'-1)G$ vanishes and we may lift sections to $H^{0}(X,\ox(-3K_{X}+k'G))$ from general fibres. In particular $-3K_{X}+k'G$ separates points on a general $G'$ so $-3K_{X}+(k'+1)G$ separates general points and thus defines a birational map.
We may then pick some suitably large $t$ for which the result holds as $k,k'$ were chosen independently of $(X,\Deltaelta) \to Z$ and $G,G_{1},G_{2}$.
\end{proof}
\begin{lemma}
Let $(X,\Delta) \to Z$, $S_{\kappa}$ and $G$ be as in \autoref{setup} and $t$ as in \autoref{t-ample}. Then there is some constant $C$ with $(-3K_{X}+tG)^{3} \leq C$ for every $(X,\Delta)\in S_{\kappa}$.
\end{lemma}
\begin{proof}
The anticanonical volumes $\textup{Vol}(X,-K_{X})$ are bounded by some $V$ by \autoref{Main2} which is proved in the next section.
Suppose first $\dim Z=1$. Then $\textup{Vol}(G,-K_{G})=(-K_{G})^{2} \leq 9$ and so by \autoref{vol}
\[\textup{Vol}(X,-3K_{X}+tG) \leq \textup{Vol}(X,-3K_{X}) + 3t\textup{Vol}(G,-3K_{G})\leq 27(V+9t)\]
as required.
Suppose instead then that $\dim Z=2$. So $G$ is a conic bundle over some $H$ on $Z$ with $H^{2} \leq c$. Hence we get
\[\textup{Vol}(G,(-3K_{X}+tG)|_{G})= (-3K_{G}+(t+1)H^{2}F)^{2}=9K_{G}^{2}-6(t+1)H^{2}(K_{G}.F)\]
where $F$ is a general fibre of $G \to H$. Hence $F$ is a smooth rational curve with $K_{G}.F=-2$, and $\textup{Vol}(G,(-3K_{X}+tG)|_{G})\leq 72+12(t+1)c$. Then as before we may apply \autoref{vol} to get
\[\textup{Vol}(X,-3K_{X}+tG) \leq \textup{Vol}(X,-3K_{X}) + 3t\textup{Vol}(G,(-3K_{X}+tG)|_{G})\]
and boundedness follows.
\end{proof}
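Explicitly, with the estimates above one possible (certainly not optimal) choice of the constant is
\[
C\;=\;\max\bigl(27(V+9t),\;27V+3t\,(72+12(t+1)c)\bigr),
\]
which is independent of the particular pair $(X,\Delta)$.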
\begin{proof}[Proof of \autoref{setup}]
Suppose $(X,\Delta) \in S_{\kappa}$. Then $A=-3K_{X}+tG$ defines a birational map and has bounded volume by the preceding results. Thus $S'$ is birationally bounded by \autoref{l_birationally-bounded}.
\end{proof}
\section{Weak BAB for Mori Fibre Spaces}\label{S-BAB}
This section is devoted to providing a bound on the volume of $-K_{X}$ under suitable conditions. Namely we show that the claim holds if $X$ belongs to a suitable family of $\epsilon$-LCY Mori fibre spaces whose bases are bounded. We will work over fields of characteristic $p>5$ as we will need to appeal to \autoref{WCL} at several points. In practice these results will be applied under the hypotheses of \autoref{Main2} with the constraints on characteristic needed to ensure $X$ is a tame conic bundle, or a generically smooth del Pezzo fibration as appropriate.
We consider first the case that $X$ is a tame conic bundle over a surface.
\begin{theorem}\label{J1}
Pick $\epsilon,c >0$. Then there is $V(\epsilon,c)$ such that if $f\colon (X,\Delta) \to S$ is any projective, tame conic bundle over any closed field of characteristic $p> 5$, $(X,\Delta)$ is $\epsilon$-klt and $S$ admits a very ample divisor $H$ with $H^{2} \leq c$, then $\textup{Vol}(-K_{X}) \leq V(\epsilon,c)$.
\end{theorem}
We may further assume that $H$ and $G=f^{*}H$ are smooth. Moreover $H$ may be taken so that $(G,\Deltaelta|_{G})$ is $\epsilon$-klt also by \autoref{tameAdjunction}.
If $\textup{Vol}(-K_{X})=0$ the result is trivially true, so we may suppose that $-K_{X}$ is big. In particular we may write $-K_{X}\sim A+E$ where $A$ is ample and $E \geq 0$. Note that $$-K_{X}-(1-\delta)\Deltaelta\sim -\delta K_{X} \sim \delta A + \delta E$$ for any $0 < \delta <1$. Choose $\delta$ such that $(X,(1-\delta)\Deltaelta+\delta E)$ and $(G,(1-\delta)\Deltaelta|_{G}+\delta E|_{G})$ are $\epsilon$-klt and write $B=(1-\delta)\Deltaelta+\delta E$. Then $(X,B)$ is $\epsilon$-log Fano by construction. The proof follows essentially as in characteristic zero, which can be found in \cite{jiang2014boundedness}, but we include a full proof for completeness as some details are modified.
\begin{lemma}\cite[Lemma 6.5]{jiang2014boundedness}
With notation as above, $\textup{Vol}(-K_{X}|_{G}) \leq \frac{8(c+2)}{\epsilon}$.
\end{lemma}
\begin{proof}
Suppose for contradiction $\textup{Vol}(-K_{X}|_{G}) >\frac{8(c+2)}{\epsilon}$ and choose $r$ rational with $\textup{Vol}(-K_{X}|_{G}) > 4r >\frac{8(c+2)}{\epsilon}$.
Write $F$ for the general fibre of $G \to H$. Then $G|_{G}=H^{2}F=kF$ and for suitably divisible $m$ and any $n$ we have the following short exact sequence.
\[0 \to \mathcal{O}_{G}(-mK_{X}|_{G}-nF) \to \mathcal{O}_{G}(-mK_{X}|_{G}-(n-1)F) \to \mathcal{O}_{F}(-mK_{F}) \to 0\]
In particular then $h^{0}(G,-mK_{X}|_{G}-nF) \geq h^{0}(G,-mK_{X}|_{G}-(n-1)F)-h^{0}(F,-mK_{F})$.
Hence by induction we have $h^{0}(G,-mK_{X}|_{G}-nF) \geq h^{0}(G,-mK_{X}|_{G})-n\cdot h^{0}(F,-mK_{F})$.
Note however that, letting $n=mr$ we have $$\lim_{m \to \infty} \frac{2}{m^{2}}(h^{0}(G,-mK_{X}|_{G})-n\cdot h^{0}(F,-mK_{F}))= \textup{Vol}(-K_{X}|_{G})-2r\textup{Vol}(-K_{F}) > 0$$ since $F$ is a smooth rational curve. Hence $-mK_{X}|_{G}-mrF$ admits a section for $m$ sufficiently large and divisible. Choose an effective $D\sim_{\mathbb{Q}} -K_{X}|_{G}-rF$.
Consider now \[(G,(1-\frac{k+2}{r})B|_{G}+\frac{k+2}{r}D+F_{1}+F_{2})\]
for two general fibres $F_{1}, F_{2}$.
This has \begin{align*}
&-\Bigl(K_{G}+(1-\frac{k+2}{r})B|_{G}+\frac{k+2}{r}D+F_{1}+F_{2}\Bigr) \\
\sim & -\Bigl((K_{X}|_{G}+kF)+(1-\frac{k+2}{r})B|_{G}+\frac{k+2}{r}(-K_{X}|_{G}-rF)+F_{1}+F_{2}\Bigr) \\
\sim & -(1-\frac{k+2}{r})(K_{X}+B)|_{G}
\end{align*}
and hence we may apply the Connectedness Lemma for surfaces, \autoref{Tcl}, to see that its non-klt locus is connected. Note that we have $r> c+2 \geq k+2$ and so as $-(K_{X}+B)$ is ample, this pair satisfies the assumptions of the Connectedness Lemma.
Since both $F_{1}$ and $F_{2}$ are contained in the non-klt locus, there must be a non-klt center $W$ dominating $H$. Thus it follows that $(F,(1-\frac{k+2}{r})B|_{F}+\frac{k+2}{r}D|_{F})$ is non-klt. However $(F,(1-\frac{k+2}{r})B|_{F})$ is $\epsilon$-klt so we must have $\deg (\frac{k+2}{r}D|_{F})\geq \epsilon$. Finally since $D|_{F}\sim -K_{X}|_{F}=-K_{F}$ we have $\deg(D|_{F})=2$ and hence $\frac{2(c+2)}{r} \geq \frac{2(k+2)}{r} \geq \epsilon$, contradicting the choice of $r$.
\end{proof}
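Unwinding the contradiction explains the shape of the constant (nothing here is new): the connectedness argument forces
\[
\epsilon\;\leq\;\frac{2(k+2)}{r}\;\leq\;\frac{2(c+2)}{r},\qquad\text{i.e.}\qquad 4r\;\leq\;\frac{8(c+2)}{\epsilon},
\]
which is incompatible with the choice $4r>\frac{8(c+2)}{\epsilon}$; hence the bound $\textup{Vol}(-K_{X}|_{G})\leq\frac{8(c+2)}{\epsilon}$.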
\begin{proof}[Proof of \autoref{J1}]
Take $V(\epsilon,c)=\frac{144(c+2)}{\epsilon^{2}}$ and suppose for contradiction that $ \textup{Vol}(-K_{X}) > \frac{144(c+2)}{\epsilon^{2}}$. Choose $t$ with $\textup{Vol}(-K_{X})> t\cdot\frac{24(c+2)}{\epsilon} > \frac{144(c+2)}{\epsilon^{2}}$ and consider the following short exact sequence.
\[0 \to \ox(-mK_{X}-nG)\to \ox(-mK_{X}-(n-1)G)\to \mathcal{O}_{G}((-mK_{X}-(n-1)G)|_{G}) \to 0\]
Arguing as before we see that $h^{0}(X,-mK_{X}-tmG)$ grows like $\frac{r}{6}m^{3}$ with $r\geq \textup{Vol}(-K_{X})-3t\textup{Vol}(-K_{X}|_{G}) >0$ by the previous lemma. In particular we may find an effective $D \sim_{\mathbb{Q}} -K_{X}-tG$.
Let $\pi\colon Y \to X$ be a log resolution of $(X, (1-\frac{3}{t})B+\frac{3}{t}D)$. We may write $K_{Y}+\Deltaelta_{Y}+E=\pi^{*}(K_{X}+(1-\frac{3}{t})B+\frac{3}{t}D)$ where $(Y,\Deltaelta_{Y})$ is sub klt and $E$ is supported on the non-klt places of $(X, (1-\frac{3}{t})B+\frac{3}{t}D)$.
As shown by Tanaka in \cite[Theorem 1]{tanaka2017semiample}, since $|L|=\pi^{*}f^{*}|H|$ is base point free there is some $m$ with $(Y,\Delta_{Y}+\frac{1}{m}(L_{1}+L_{2}+L_{3}))$ still klt for every choice of $L_{i} \in |L|$. In particular, fixing some general closed point $z\in S$ we may take $H_{i} \in |H|$ containing $z$ for $1\leq i \leq 2m$ such that for any $I \subseteq \{1,\dotsc,2m\}$ with $|I| =3$ the following hold:
\begin{itemize}
\item $(Y,\Deltaelta_{Y}+\sum_{i \in I}\frac{1}{m}\pi^{*}f^{*}H_{i})$ is klt;
\item $\bigcap_{i\in I} H_{i}=\{z\}$.
\end{itemize}
Thus we must have
\[\textup{Nklt}(X,(1-\frac{3}{t})B+\frac{3}{t}D)=\textup{Nklt}(X, (1-\frac{3}{t})B+\frac{3}{t}D+\frac{1}{m}f^{*}H_{i})\]
for each $i$.
Let $F$ be the fibre over $z$ and $G_{1} = \sum_{i=1}^{2m} \frac{1}{m}f^{*}H_{i}$. Then clearly $\text{mult}_{F}(G_{1}) \geq 2$ and hence $(X,G_{1})$ cannot be klt along $F$. By construction we have
\[\textup{Nklt}(X, (1-\frac{3}{t})B+\frac{3}{t}D) \cup F = \textup{Nklt}(X, (1-\frac{3}{t})B+\frac{3}{t}D+G_{1}).\]
Similarly we may further take $G_{2} \sim f^{*}H$ not containing $F$ such that
\[\textup{Nklt}(X, (1-\frac{3}{t})B+\frac{3}{t}D+G_{1}+G_{2}) = \textup{Nklt}(X, (1-\frac{3}{t})B+\frac{3}{t}D+G_{1}).\]
Now $-(K_{X}+(1-\frac{3}{t})B+\frac{3}{t}D+G_{1}+G_{2}) \sim_{\mathbb{Q}} -(1-\frac{3}{t})(K_{X}+B)$ is ample, so we may apply the Connectedness Lemma, \autoref{WCL}, to see there is a curve in the non-klt locus of $(X,(1-\frac{3}{t})B+\frac{3}{t}D)$ meeting $F$. In particular the non-klt locus dominates $S$. Hence we must also have that $(F,(1-\frac{3}{t})B|_{F}+\frac{3}{t}D|_{F})$ is not klt. However $(F,B|_{F})$ is $\epsilon$-klt and $F$ is a smooth rational curve. Therefore by degree considerations, since $-K_{X}|_{F} \sim D|_{F}$, we must have $t \leq \frac{6}{\epsilon}$, contradicting our choice of $t$.
\end{proof}
\begin{theorem}[Ambro-Jiang Conjecture for surfaces]\label{aj}\cite[Theorem 2.8]{jiang2014boundedness}
Fix $0<\epsilon<1$. There is a number $\mu(\epsilon)$ depending only on $\epsilon$ such that for any surface $S$ over any closed field $k$, if $S$ has a boundary $B$ with $(S,B)$ $\epsilon$-klt weak log Fano then \[\inf \{ulct (S,B;G) \text{ where } G \sim_{\mathbb{Q}}-(K_{S}+B) \text{ and } G+B \geq 0\}\geq \mu(\epsilon)\]
\end{theorem}
Here $ulct (S,B;G)= \sup\{t\colon (S,B+tG) \text{ is lc and } 0 \leq t \leq 1\}$ and in particular it is at most the usual lct, if $G$ is effective.
Though the proof is given for characteristic zero, it is essentially an arithmetic proof that the result holds for $\mathbb{P}^{2}$ and $\mathbb{F}_{n}$ for $n \leq \frac{2}{\epsilon}$. The arguments of the proof work over any algebraically closed field and as the bound is given explicitly in terms of $\epsilon$ it is independent of the base field.
By applying this result to a general fibre of a Mori fibration over a curve we obtain the desired boundedness result.
\begin{theorem}\label{J2}
Pick $\epsilon>0$. Suppose that $f\colon X \to \mathbb{P}^{1}$ is a terminal threefold Mori fibre space with smooth generic fibre over a closed field of characteristic $p> 5$. If there is a pair $(X,\Deltaelta)$ which is $\epsilon$-LCY then $\textup{Vol}(-K_{X})\leq W(\epsilon)$ for some $W(\epsilon)$ depending only on $\epsilon$.
\end{theorem}
\begin{proof}
By \autoref{nAmple}, there is some $t(\epsilon)\geq 1$ depending only on $\epsilon$ with $-K_{X}+t(\epsilon)F$ ample, where $F$ is a general fibre.
Let $\mu=\mu(1)$ as given in \autoref{aj} and take $W(\epsilon)= \frac{27(t(\epsilon)+2)}{\mu}$. Suppose for contradiction $\textup{Vol}(-K_{X}) > W(\epsilon)$ and choose $s$ rational with $\textup{Vol}(-K_{X}) > 27s > W(\epsilon)$. Clearly $s > \frac{(t(\epsilon)+2)}{\mu} > t(\epsilon)+2$.
For any $n$ and for sufficiently divisible $m$, we have the following short exact sequence.
\[0 \to \ox (-mK_{X}-nF) \to \ox(-mK_{X}-(n-1)F) \to \mathcal{O}_{F}(-mK_{F})\to 0.\]
This gives $h^{0}(X,-mK_{X}-nF) \geq h^{0}(X,-mK_{X})-nh^{0}(F,-mK_{F})$ and subsequently
\[\lim_{m \to \infty} \frac{6}{m^{3}}\bigl(h^{0}(X,-mK_{X})-smh^{0}(F,-mK_{F})\bigr)= \textup{Vol}(-K_{X})-3s\textup{Vol}(-K_{F}).\] Since $F$ is a smooth del Pezzo surface we have $\textup{Vol}(-K_{F})\leq 9$. So by construction $-mK_{X}-smF$ is effective for large, divisible $m$.
Choose $D\geq 0$ with $D \sim_{\mathbb{Q}} -K_{X}-sF$ and consider $(X,\frac{t(\epsilon)+2}{s}D+F_{1}+F_{2})$ for $F_{1},F_{2}$ general fibres. By construction we have
\begin{align*}
-(K_{X}+\frac{t(\epsilon)+2}{s}D+F_{1}+F_{2})&\sim -(K_{X}-\frac{t(\epsilon)+2}{s}K_{X}-t(\epsilon)F)\\
&\sim (1-\frac{t(\epsilon)+2}{s})(-K_{X}+t(\epsilon)F) + \frac{t(\epsilon)(t(\epsilon)+2)}{s}F
\end{align*}
which is ample since $F$ is nef and $-K_{X}+t(\epsilon)F$ is ample.
Then the Connectedness Lemma, \autoref{WCL}, gives that the non-klt locus is connected, and clearly contains $F_{1},F_{2}$, so it must contain a non-klt center $W$ which dominates $\mathbb{P}^{1}$. Thus it must be that $(F,\frac{t(\epsilon)+2}{s}D|_{F})$ is not klt. However $F$ is smooth, and equivalently terminal, with $-K_{F}\sim D|_{F}$ ample, so by \autoref{aj} it follows that $\frac{t(\epsilon)+2}{s} \geq lct(F,0;D|_{F}) \geq \mu=\mu(1)$. Thus we have $s \leq \frac{t(\epsilon)+2}{\mu}$ contradicting our choice of $s$ and proving the result.
\end{proof}
\section{Birational Boundedness}\label{S-res}
We are now ready to prove the main theorems using the results of the previous sections.
\begin{lemma}\label{S1}
Suppose that $(X,\Deltaelta)$ is an $\epsilon$-klt LCY pair in characteristic $p>5$, with $\Deltaelta \neq 0$ and $X$ both rationally chain connected and $F$-split. Then there is a birational map $\pi\colon X \dashrightarrow X'$ such that $X'$ has a Mori fibre space structure $X' \to Z$ and $\Deltaelta'=\pi_{*}\Deltaelta$ on $X'$ making $(X',\Deltaelta')$ klt and $\epsilon$-LCY. Further both $X'$ and $Z$ are rationally chain connected and $F$-split and if $X$ is terminal, so is $X'$.
\end{lemma}
\begin{proof}
Replacing $X$ by a $\mathbb{Q}$-factorialisation, we can assume $X$ is $\mathbb{Q}$-factorial. This can be done by \autoref{dlt}.
Since $(X,\Deltaelta)$ is klt so is $(X,0)$ and hence we may run a terminating $K_{X}$ MMP $X=X_{0} \dashrightarrow X_{1} \dashrightarrow... \dashrightarrow X_{n}=X'$ by \autoref{Cone Theorem}. At each step $X_{i} \dashrightarrow X_{i+1}$ we may pushforward $\Deltaelta_{i}$ to $\Deltaelta_{i+1}$, which is still klt since $K_{X}+\Deltaelta \equiv 0$. Similarly since $X_{i}$ is $F$-split and rationally chain connected, so is $X_{i+1}$ as these are preserved under birational maps of normal varieties. Since $K_{X}$ cannot be pseudo-effective, $X'$ has a Mori fibre space structure $X' \to Z$, where $Z$ is also rationally chain connected and $F$-split. If $X$ is terminal we may run a $K_{X}$ MMP terminating at a terminal variety, hence $X'$ is terminal also.
\end{proof}
\begin{reptheorem}{Main}
Fix $0 < \delta, \epsilon <1$. Let $S_{\delta,\epsilon}$ be the set of threefolds satisfying the following conditions:
\begin{itemize}
\item $X$ is a projective variety over an algebraically closed field of characteristic $p >7, \frac{2}{\delta}$;
\item $X$ is terminal, rationally chain connected and $F$-split;
\item $(X,\Deltaelta)$ is $\epsilon$-klt and log Calabi-Yau for some boundary $\Deltaelta$; and
\item The coefficients of $\Deltaelta$ are greater than $\delta$.
\end{itemize}
Then there is a set $S'_{\delta,\epsilon}$, bounded over $\text{Spec}(\mathbb{Z})$ such that any $X\in S_{\delta,\epsilon}$ is either birational to a member of $S'_{\delta,\epsilon}$ or to some $X'\in S_{\delta,\epsilon}$, Fano with Picard number $1$.
\end{reptheorem}
\begin{proof}
Take any $(X,\Delta)\in S_{\delta,\epsilon}$ and replace it by a Mori fibre space $(X',\Delta') \to Z$ by \autoref{S1}. Then $Z$ is $F$-split and rationally chain connected. If $Z$ is a surface then $p>\frac{2}{\delta}$ ensures that $(X',\Delta')\to Z$ is a tame conic bundle by \autoref{S2}. In particular $Z$ admits a boundary $\Delta_{Z}$ such that $(Z,\Delta_{Z})$ is $\epsilon$-LCY by \autoref{cbf}. Hence by BAB for surfaces, \autoref{SBAB}, there is $|A|$ a very ample linear system on $Z$ with $A^{2}\leq c$ for some $c$ independent of $X,\Delta,Z$.
On the other hand, if $Z$ is a curve then it is a smooth rational curve and $p>7$ gives that the general fibre of $X \to Z$ is smooth by \autoref{smoothness}. Let then $S'_{\delta,\epsilon,V}$ be the set of such Mori fibre spaces $(X',\Delta') \to Z$ with $Z$ not a point and $\textup{Vol}(-K_{X})\leq V(\epsilon,c)$. In both cases we conclude by \autoref{setup} that the set is birationally bounded.
\end{proof}
\begin{reptheorem}{Main2}
Fix $0 < \delta, \epsilon <1$ and let $T_{\delta,\epsilon}$ be the set of threefold pairs $(X,\Delta)$ satisfying the following conditions
\begin{itemize}
\item $X$ is projective over an algebraically closed field of characteristic $p >7,\frac{2}{\delta}$;
\item $X$ is terminal, rationally chain connected and $F$-split;
\item $(X,\Delta)$ is $\epsilon$-klt and LCY;
\item The coefficients of $\Delta$ are greater than $\delta$; and
\item $X$ admits a Mori fibre space structure $X \to Z$ where $Z$ is not a point.
\end{itemize}
Then the set $\{\textup{Vol}(-K_{X}) \colon \exists \Delta \text{ with } (X,\Delta) \in T_{\delta,\epsilon}\}$ is bounded above.
\end{reptheorem}
\begin{proof}
Take $(X,\Delta) \in T_{\delta,\epsilon}$ and let $X \to Z$ be the associated Mori fibre space structure. If $Z$ is a curve then we conclude that $\textup{Vol}(-K_{X})$ is bounded by \autoref{J2} in light of \autoref{smoothness}. If instead $Z$ is a surface then the set of possible such $Z$ is bounded by \autoref{SBAB} and \autoref{cbf} as above. Hence we conclude the claim by \autoref{J1}.
\end{proof}
\addtocontents{toc}{\protect\setcounter{tocdepth}{-1}}
\end{document} |
\begin{document}
\title[Vanishing cohomology]{Vanishing cohomology and Betti bounds for complex projective hypersurfaces}
\author[L. Maxim ]{Lauren\c{t}iu Maxim}
\address{L. Maxim : Department of Mathematics, University of Wisconsin-Madison, 480 Lincoln Drive, Madison WI 53706-1388, USA}
\email {[email protected]}
\thanks{L. Maxim is partially supported by the Simons Foundation Collaboration (Grant \#567077) and by
the Romanian Ministry of National Education (CNCS-UEFISCDI grant PN-III-P4-ID-PCE-2020-0029).}
\author[L. P\u{a}unescu ]{Lauren\c{t}iu P\u{a}unescu}
\address{L. P\u{a}unescu: Department of Mathematics, University of Sydney, Sydney, NSW, 2006, Australia}
\email {[email protected]}
\author[M. Tib\u{a}r]{Mihai Tib\u{a}r}
\address{M. Tib\u{a}r : Universit\' e de Lille, CNRS, UMR 8524 -- Laboratoire Paul Painlev\'e, F-59000 Lille, France}
\email {[email protected]}
\thanks{M. Tib\u{a}r acknowledges the support of the Labex CEMPI (ANR-11-LABX-0007). }
\keywords{singular projective hypersurface, vanishing cycles, vanishing cohomology, Betti numbers, Milnor fiber, Lefschetz hyperplane theorem}
\subjclass[2010]{32S30, 32S50, 55R55, 58K60}
\date{\today}
\begin{abstract} We employ the formalism of vanishing cycles and perverse sheaves to introduce and study the vanishing cohomology of complex projective hypersurfaces. As a consequence, we give upper bounds for the Betti numbers of projective hypersurfaces, generalizing those obtained by different methods by Dimca in the isolated singularities case, and by Siersma-Tib\u{a}r in the case of hypersurfaces with a $1$-dimensional singular locus. We also prove a supplement to the Lefschetz hyperplane theorem for hypersurfaces, which takes the dimension of the singular locus into account, and we use it to give a new proof of a result of Kato.
\end{abstract}
\maketitle
\section{Introduction. Results}\label{intro}
Let $V=\{f=0\} \subset {\mathbb C} P^{n+1}$ be a reduced complex projective hypersurface of degree $d$, with $n \geq 1$. By the classical Lefschetz Theorem, the inclusion map $j:V \hookrightarrow {\mathbb C} P^{n+1}$ induces cohomology isomorphisms
\begin{equation}\label{one}
j^k:H^k( {\mathbb C} P^{n+1};{\mathbb Z}) \overset{\cong}{\longrightarrow} H^k(V;{\mathbb Z}) \ \ \text{for all} \ \ k<n,
\end{equation}
and a primitive monomorphism for $k=n$ (e.g., see \cite[Theorem 5.2.6]{Di}). Moreover, if $s=\dim V_{\rm sing}<n$ is the complex dimension of the singular locus of $V$ (with $\dim \emptyset=-1$), then Kato \cite{Ka} showed that (see also \cite[Theorem 5.2.11]{Di})
\begin{equation}\label{two}
H^k(V;{\mathbb Z}) \cong H^k( {\mathbb C} P^{n+1};{\mathbb Z}) \ \ \text{for all} \ \ n+s+2\leq k\leq 2n,
\end{equation} and the homomorphism $j^k$ induced by inclusion is given in this range (and for $k$ even) by multiplication by $d=\deg(V)$.
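For instance, if $n\geq 2$ and $V$ has only isolated singularities (i.e., $s=0$), then \eqref{one} and \eqref{two} already determine $H^k(V;{\mathbb Z})$ for every $k \neq n, n+1$, so only the two middle cohomology groups may differ from those of a smooth hypersurface of the same degree.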
It therefore remains to study the cohomology groups $H^k(V;{\mathbb Z})$ for $n \leq k \leq n+s+1$.
In the case when $V\subset {\mathbb C} P^{n+1}$ is a {\it smooth} degree $d$ hypersurface, the above discussion yields that $H^k(V;{\mathbb Z}) \cong H^k( {\mathbb C} P^{n};{\mathbb Z})$ for all $k \neq n$. This is in fact the only information we take as an input in this note (it also suffices to work with \eqref{one}, its homology counterpart, and Poincar\'e duality).
The Universal Coefficient Theorem also yields in this case that $H^n(V;{\mathbb Z})$ is free abelian, and its rank $b_n(V)$ can be easily deduced from the formula for the Euler characteristic of $V$ (e.g., see \cite[Proposition 10.4.1]{M}):
\begin{equation}\label{chi}
\chi(V)=(n+2)-\frac{1}{d} \big[1+(-1)^{n+1}(d-1)^{n+2}\big].
\end{equation}
Specifically, if $V\subset {\mathbb C} P^{n+1}$ is a smooth degree $d$ projective hypersurface, one has:
\begin{equation}\label{bsm}
b_n(V)=\frac{(d-1)^{n+2}+(-1)^{n+1}}{d}+\frac{3(-1)^n+1}{2}.
\end{equation}
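For instance, \eqref{bsm} recovers two familiar computations: for a smooth plane curve ($n=1$) it gives $b_1(V)=\frac{(d-1)^{3}+1}{d}-1=(d-1)(d-2)$, twice the genus, while for a smooth quartic surface in ${\mathbb C} P^{3}$ ($n=2$, $d=4$) it gives $b_2(V)=\frac{3^4-1}{4}+2=22$, as expected for a $K3$ surface.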
The case when $V$ has only isolated singularities was studied by Dimca \cite{Di0,Di} (see also \cite{Mi} and \cite{ST}), while projective hypersurfaces with a one-dimensional singular locus have been more recently considered by Siersma-Tib\u{a}r \cite{ST}.
In the singular case, let us fix a Whitney stratification ${\mathcal V}$ of $V$ and consider a one-parameter smoothing of degree $d$, namely $$V_t:=\{f_t=f-tg=0\}\subset {\mathbb C} P^{n+1} \ \ (t \in {\mathbb C}),$$ for $g$ a general polynomial of degree $d$. Here, the meaning of ``general'' is that the hypersurface $W:=\{g=0\}$ is smooth and transverse to all strata in the stratification ${\mathcal V}$ of $V$.
Then, for $t \neq 0$ small enough, all the $V_t$ are smooth and transverse to the stratification ${\mathcal V}$. Let $$B=\{f=g=0\}$$ be the base locus (axis) of the pencil. Consider the incidence variety
$$V_D:=\{(x,t)\in {\mathbb C} P^{n+1} \times D \mid x \in V_t \}$$
with $D$ a small disc centered at $0 \in {\mathbb C}$ so that $V_t$ is smooth for all $t \in D^*:=D\setminus \{0\}$. Denote by $\pi:V_D \to D$ the proper projection map, and note that $V=V_0=\pi^{-1}(0)$ and $V_t=\pi^{-1}(t)$ for all $t \in D^*$.
In what follows we write $V$ for $V_0$ and use $V_t$ for a smoothing of $V$ (i.e., with $t \in D^*$).
In this setup, one can define the Deligne vanishing cycle complex of the family $\pi$, see \cite[Section 10.3]{M} for a quick introduction. More precisely, one has a bounded constructible complex $$\varphi_\pi \underline{{\mathbb Z}}_{V_D} \in D^b_c(V)$$
on the hypersurface $V$, whose hypercohomology groups fit into a long exact sequence (called the {\it specialization sequence}):
\begin{equation}\label{spec}
\cdots \longrightarrow H^k(V;{\mathbb Z}) \overset{sp^k}{\longrightarrow} H^k(V_t;{\mathbb Z}) \overset{\alpha^k}{\longrightarrow} {\mathbb H}^k(V; \varphi_\pi \underline{{\mathbb Z}}_{V_D}) \longrightarrow H^{k+1}(V;{\mathbb Z}) \overset{sp^{k+1}}{\longrightarrow} \cdots
\end{equation}
The maps $sp^k$ are called the {\it specialization} morphisms, while the $\alpha^k$'s are usually referred to as the {\it canonical} maps.
For any integer $k$, we define $$H^k_\varphi(V):={\mathbb H}^k(V; \varphi_\pi \underline{{\mathbb Z}}_{V_D})$$
and call it the {\it $k$-th vanishing cohomology group of $V$}. This is an invariant of $V$, i.e., it does not depend on the choice of a particular smoothing of degree $d$ (since all smooth hypersurfaces of a fixed degree are diffeomorphic). By its very definition, the vanishing cohomology measures the difference between the topology of a given projective hypersurface $V$ and that of a smooth hypersurface of the same degree.
\begin{rem}\label{r1}
Since the incidence variety $V_D=\pi^{-1}(D)$ deformation retracts to $V=\pi^{-1}(0)$, and the {\it specialization map} $sp^k:H^k(V;{\mathbb Z}) \to H^k(V_t;{\mathbb Z})$ of \eqref{spec} factorizes as $$H^k(V;{\mathbb Z}) \overset{\cong}{\longrightarrow} H^k(V_D;{\mathbb Z}) \longrightarrow H^k(V_t;{\mathbb Z})$$ with $H^k(V_D;{\mathbb Z}) \to H^k(V_t;{\mathbb Z})$ induced by the inclusion map, it follows readily that the vanishing cohomology of $V$ can be identified with the relative cohomology of the pair $(V_D,V_t)$, i.e., \begin{equation} H^k_\varphi(V) \cong H^{k+1}(V_D,V_t;{\mathbb Z}).\end{equation}
In particular, the groups $H^k_\varphi(V)$ are the cohomological version of the {\it vanishing homology groups} $$H_k^{\curlyvee}(V):=H_{k}(V_D,V_t;{\mathbb Z})$$ introduced and studied in \cite{ST} in special situations. For the purpose of computing Betti numbers of projective hypersurfaces, the two ``vanishing'' theories yield the same answer, but additional care is needed to handle torsion when computing the actual integral cohomology groups.
\end{rem}
Our first result gives the concentration degrees of the vanishing cohomology of a projective hypersurface in terms of the dimension of the singular locus.
\begin{thm}\label{th1}
Let $V \subset {\mathbb C} P^{n+1}$ be a reduced complex projective hypersurface with $s=\dim V_{\rm sing}$ the complex dimension of its singular locus. Then
\begin{equation}
H^k_\varphi(V) \cong 0 \ \ \text{ for all integers} \ \ k \notin [n, n+s].
\end{equation}
Moreover, $H^n_\varphi(V)$ is a free abelian group.
\end{thm}
In view of Remark \ref{r1}, one gets by Theorem \ref{th1} and the Universal Coefficient Theorem the concentration degrees of the vanishing homology groups $H_k^{\curlyvee}(V)$ of a projective hypersurface in terms of the dimension of its singular locus:
\begin{cor}\label{corh}
With the above notations and assumptions, we have that
\begin{equation}
H_k^{\curlyvee}(V) \cong 0 \ \ \text{ for all integers} \ \ k \notin [n+1, n+s+1].
\end{equation}
Moreover, $H_{n+s+1}^{\curlyvee}(V)$ is free.
\end{cor}
\begin{rem}
In the case when the projective hypersurface $V \subset {\mathbb C} P^{n+1}$ has a $1$-dimensional singular locus, it was shown in \cite[Theorem 4.1]{ST} that $H_k^{\curlyvee}(V) \cong 0$ for all $k \neq n+1, n+2$. Moreover, Theorem 6.1 of \cite{ST} shows that in this case one also has that $H_{n+2}^{\curlyvee}(V)$ is free. So, Corollary \ref{corh} provides a generalization of the results of \cite{ST} to projective hypersurfaces with arbitrary singularities. Nevertheless, the methods used in its proof are fundamentally different from those in \cite{ST}.
\end{rem}
As a consequence of Theorem \ref{th1}, the specialization sequence \eqref{spec} together with the fact that the integral cohomology of a smooth projective hypersurface is free, yield the following result on the integral cohomology of a complex projective hypersurface (where the estimate on the $n$-th Betti number uses formula \eqref{bsm}):
\begin{cor}\label{corgen}
Let $V \subset {\mathbb C} P^{n+1}$ be a degree $d$ reduced projective hypersurface with a singular locus $V_{\rm sing}$ of complex dimension $s$. Then:
\begin{itemize}
\item[(i)] $H^k(V;{\mathbb Z}) \cong H^k(V_t;{\mathbb Z}) \cong H^k({\mathbb C} P^n;{\mathbb Z})$ \ for all integers $k \notin [n, n+s+1]$.
\item[(ii)] $H^n (V;{\mathbb Z}) \cong \ker(\alpha^n)$ is free.
\item[(iii)] $H^{n+s+1} (V;{\mathbb Z}) \cong H^{n+s+1} ({\mathbb C} P^n;{\mathbb Z}) \oplus \mathop{{\mathrm{coker}}}\nolimits(\alpha^{n+s})$.
\item[(iv)] $H^k(V;{\mathbb Z}) \cong \ker(\alpha^k) \oplus \mathop{{\mathrm{coker}}}\nolimits(\alpha^{k-1})$ for all integers $k \in [n+1,n+s]$, $s\ge 1$.
\end{itemize}
In particular, $$b_n(V) \leq b_n(V_t)=\frac{(d-1)^{n+2}+(-1)^{n+1}}{d}+\frac{3(-1)^n+1}{2},$$
and
$$b_{k}(V) \leq \mathop{{\mathrm{rank}}}\nolimits \ H^{k-1}_{\varphi}(V) + b_{k}({\mathbb C} P^{n}) \ \ \text{ for all integers} \ k \in [n+1,n+s+1], \ s\ge 0.$$
\end{cor}
The homological version of the specialisation sequence \eqref{spec} identifies with the long exact sequence of the pair $(V_{D}, V_{t})$, namely:
\[ \cdots \to H_{k+1}(V_{t};{\mathbb Z}) \to H_{k+1}(V_{D};{\mathbb Z}) \to H_{k+1}^{\curlyvee}(V;{\mathbb Z}) \ \overset{\alpha_{k}}{\longrightarrow} \ H_{k}(V_{t};{\mathbb Z}) \to \cdots
\]
The inclusions $V_{t}\hookrightarrow V_{D}\hookrightarrow {\mathbb C} P^{n+1}\times D$ induce in homology a commutative triangle, where $H_{k}(V_{t};{\mathbb Z}) \to H_{k}({\mathbb C} P^{n+1}\times D;{\mathbb Z})$ is injective for $k\not= n$ (by the Lefschetz Theorem for $k<n$, and
it is multiplication by $d$ for $k>n$, see e.g. Remark \ref{Katoh} for the homological version of the proof of Theorem \ref{Kato}). This
shows that the morphism $H_{k}(V_{t};{\mathbb Z}) \to H_{k}(V_{D};{\mathbb Z})$ is also injective for all $k\not= n$, and therefore $\alpha_{k} =0$ for $k\not= n$. Consequently, the above long exact sequence splits into a $5$-term exact sequence, and short exact sequences:
\begin{equation}\label{sp1}
\begin{split}
0 \to H_{n+1}(V_{t};{\mathbb Z}) \to H_{n+1}(V;{\mathbb Z}) \to H_{n+1}^{\curlyvee}(V;{\mathbb Z}) \stackrel{\alpha_{n}}{\to}
H_{n}(V_{t};{\mathbb Z}) \rightarrow H_{n}(V;{\mathbb Z}) \to 0. \\
0 \to H_{k}(V_{t};{\mathbb Z}) \to H_{k}(V_{D};{\mathbb Z}) \to H_{k}^{\curlyvee}(V;{\mathbb Z}) \to 0
\ \ \ \ \ \ \ \ \mbox{ for } \ k\ge n+1.
\end{split}
\end{equation}
We then get the following homological version of Corollary \ref{corgen}(i-iv), with the same upper bounds for Betti numbers, but with an interesting improvement for (iii) and (iv) showing more explicitly the dependence of the homology of $V$ on the vanishing homology groups:
\begin{cor}\label{corgenhom}
Let $V \subset {\mathbb C} P^{n+1}$ be a degree $d$ reduced projective hypersurface with a singular locus $V_{\rm sing}$ of complex dimension $s$. Then:
\begin{itemize}
\item[(i')] $H_{k}(V;{\mathbb Z}) \cong H_{k}(V_t;{\mathbb Z}) \cong H_{k}({\mathbb C} P^n;{\mathbb Z})$ \ for all $k\le n-1$ and all $k\ge n+s+2$.
\item[(ii')] $H_{n}(V;{\mathbb Z}) \cong \mathop{{\mathrm{coker}}}\nolimits(\alpha_{n})$.
\item[(iii')] $H_{n+1}(V;{\mathbb Z}) \cong \ker(\alpha_{n}) \oplus H_{n+1}({\mathbb C} P^n;{\mathbb Z})$.
\item[(iv')] $H_{k}(V;{\mathbb Z}) \cong H_k^{\curlyvee}(V;{\mathbb Z}) \oplus H_{k}({\mathbb C} P^n;{\mathbb Z})$, for all $ n+2 \le k \le n+s+1$, whenever $s\ge 1$, \\ and $H_{n+s+1}(V;{\mathbb Z})$ is free.
\end{itemize}
\end{cor}
The ranks of the (possibly non-trivial) vanishing (co)homology groups can be estimated in terms of the local topology of singular strata and of their generic transversal types by making use of the hypercohomology spectral sequence. Such estimates can be made precise for hypersurfaces with low-dimensional singular loci.
Concretely, as special cases of Corollaries \ref{corgen} and \ref{corgenhom}, in Section \ref{bounds} we recast Siersma-Tib\u{a}r's \cite{ST} result for $s\le 1$, and in particular Dimca's \cite{Di0,Di} computation for $s=0$. Concerning the estimation of the rank of the highest interesting (co)homology group, we prove the following general result:
\begin{thm}\label{th2}
Let $V \subset {\mathbb C} P^{n+1}$ be a degree $d$ reduced projective hypersurface with a singular locus $V_{\rm sing}$ of complex dimension $s$. For each connected stratum $S_i \subseteq V_{\rm sing}$ of top dimension $s$ in a Whitney stratification of $V$, let $F_i^\pitchfork$ denote its transversal Milnor fiber with corresponding Milnor number $\mu_i^\pitchfork$.
Then:
\begin{equation}\label{bt}
b_{n+s+1}(V) \leq 1+ \sum_i \mu_i^\pitchfork,
\end{equation}
and the inequality is strict for $n+s$ even.
\end{thm}
In fact, the inequality in \eqref{bt} is deduced from
\begin{equation}\label{btb} b_{n+s+1}(V) \leq 1+\mathop{{\mathrm{rank}}}\nolimits \ H^{n+s}_{\varphi}(V),\end{equation}
together with
\begin{equation}
\mathop{{\mathrm{rank}}}\nolimits \ H^{n+s}_{\varphi}(V) \leq \sum_i \mu_i^\pitchfork,
\end{equation}
and the inequality \eqref{btb} is strict for $n+s$ even. For further refinements of Theorem \ref{th2}, see Remark \ref{rem31}. Note also that if $s=0$, i.e., $V$ has only isolated singularities, then $\mu_i^\pitchfork$ is just the usual Milnor number of such a singularity of $V$.
Let us remark that if the projective hypersurface $V \subset {\mathbb C} P^{n+1}$ has singularities in codimension $1$, i.e., $s=n-1$, then $b_{n+s+1}(V)=b_{2n}(V)=r$, where $r$ denotes the number of irreducible components of $V$. Indeed, in this case, one has (e.g., see \cite[(5.2.9)]{Di}):
\begin{equation}\label{top} H^{2n}(V;{\mathbb Z}) \cong {\mathbb Z}^{r}.\end{equation}
In particular, Theorem \ref{th2} yields the following generalization of \cite[Corollary 7.6]{ST}:
\begin{cor} If the reduced projective hypersurface $V \subset {\mathbb C} P^{n+1}$ has singularities in codimension $1$, then the number $r$ of irreducible components of $V$ satisfies the inequality:
\begin{equation}
r \leq 1+\sum_i \mu_i^\pitchfork.
\end{equation}
\end{cor}
\begin{rem}\label{fr} Note that if the projective hypersurface $V \subset {\mathbb C} P^{n+1}$ is a rational homology manifold, then the Lefschetz isomorphism \eqref{one} and Poincar\'e duality over the rationals yield that $b_i(V)=b_i({\mathbb C} P^n)$ for all $i \neq n$. Moreover, $b_n(V)$ can be deduced by computing the Euler characteristic of $V$, e.g., as in \cite[Section 10.4]{M}.\end{rem}
The computation of Betti numbers of a projective hypersurface which is a rational homology manifold can be deduced without appealing to Poincar\'e duality by using the vanishing cohomology instead, as the next result shows:
\begin{prop}\label{p1}
If the projective hypersurface $V \subset {\mathbb C} P^{n+1}$ is a ${\mathbb Q}$-homology manifold, then $H^k_{\varphi}(V) \otimes {\mathbb Q} \cong 0$ for all $k \neq n$. In particular, in this case one gets: $b_i(V)=b_i(V_t)=b_i({\mathbb C} P^n)$ for all $i \neq n$, and $b_n(V)=b_n(V_t)+\mathop{{\mathrm{rank}}}\nolimits H^n_{\varphi}(V)$.
\end{prop}
\medskip
We also note that Corollary \ref{corgen}(i) reproves Kato's isomorphism \eqref{two} about the integral cohomology of $V$, by using only the integral cohomology of a smooth hypersurface (for this it suffices to rely only on the Lefschetz isomorphism \eqref{one}, its homological version, and Poincar\'e duality). In Section \ref{supLHT}, we give a new proof of Kato's result (see Theorem \ref{Kato}),
which relies on the following supplement to the Lefschetz hyperplane section theorem for hypersurfaces, which may be of independent interest:
\begin{thm}\label{thapi}
Let $V \subset {\mathbb C} P^{n+1}$ be a reduced complex projective hypersurface with $s=\dim V_{\rm sing}$ the complex dimension of its singular locus. (By convention, we set $s=-1$ if $V$ is nonsingular.) Let $H \subset {\mathbb C} P^{n+1}$ be a generic hyperplane.
Then
\begin{equation}\label{34api}
H^k(V,V\cap H; {\mathbb Z})=0 \ \ \text{for} \ \ k < n \ \ \text{and} \ \ n+s+1 < k < 2n.
\end{equation}
Moreover, $H^{2n}(V,V \cap H; {\mathbb Z})\cong{\mathbb Z}^r$, where $r$ is the number of irreducible components of $V$, and $H^{n}(V,V \cap H; {\mathbb Z})$ is (torsion-)free.
\end{thm}
Note that the vanishing \eqref{34api} for $k<n$ is equivalent to the classical Lefschetz hyperplane section theorem. The proof of \eqref{34api} for $n+s+1 < k < 2n$ reduces to understanding the homotopy type of the complement of a smooth affine hypersurface transversal to the hyperplane at infinity;
see \cite[Corollary 1.2]{Lib} for such a description. Homological counterparts of Theorem \ref{thapi} and of Kato's result are also explained in Section \ref{supLHT}, see Corollary \ref{corap} and Remark \ref{Katoh}.
\medskip
Finally, let us note that similar techniques apply to the study of Milnor fiber cohomology of complex hypersurface singularity germs. This is addressed by the authors in the follow-up paper \cite{MPT} (see also \cite{ST0} for the case of $1$-dimensional singularities).
\medskip
\noindent{\bf Acknowledgements.} L. Maxim thanks the Sydney Mathematical Research Institute (SMRI) for support and hospitality, and J\"org Sch\"urmann for useful discussions.
\section{Concentration degrees of vanishing cohomology}
The proof of Theorem \ref{th1} makes use of the formalism of perverse sheaves and their relation to vanishing cycles, see \cite{Di1,M} for a brief introduction.
\subsection{Proof of Theorem \ref{th1}}
By definition, the incidence variety $V_D$ is a complete intersection of pure complex dimension $n+1$. It is non-singular if $V=V_0$ has only isolated singularities, but otherwise it has singularities where the base locus $B=V\cap W$ of the pencil $\{f_t\}_{t\in D}$ intersects the singular locus $\Sigma:=V_{\rm sing}$ of $V$.
If $\underline{{\mathbb Z}}_{V_D}$ denotes the constant sheaf with stalk ${\mathbb Z}$ on the complete intersection $V_D$, a result of L\^e \cite{Le} implies that the complex $\underline{{\mathbb Z}}_{V_D}[n+1]$ is a perverse sheaf on $V_D$. It then follows that
$\varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]$ is a ${\mathbb Z}$-perverse sheaf on $\pi^{-1}(0)=V$ (see, e.g., \cite[Theorem 10.3.13]{M} and the references therein).
Recall that the stalks of the cohomology sheaves of $\varphi_\pi \underline{{\mathbb Z}}_{V_D}$ at a point $x \in V$ are computed by (e.g., see \cite[(10.20)]{M}):
\begin{equation}
{\mathcal H}^j(\varphi_\pi \underline{{\mathbb Z}}_{V_D})_x \cong H^{j+1}(B_{x}, B_{x}\cap V_t;{\mathbb Z}),
\end{equation}
where $B_{x}$ denotes the intersection of $V_D$ with a sufficiently small ball in some chosen affine chart ${\mathbb C}^{n+1} \times D$ of the ambient space ${\mathbb C} P^{n+1} \times D$ (hence $B_x$ is contractible). Here $B_{x}\cap V_t=F_{\pi,x}$ is the Milnor fiber of $\pi$ at $x$.
Let us now consider the function $$h=f/g:{\mathbb C} P^{n+1} \setminus W \to {\mathbb C}$$ where $W:=\{g=0\}$, and note that $h^{-1}(0)=V\setminus B$ with $B=V\cap W$ the base locus of the pencil. If $x \in V \setminus B$, then in a neighborhood of $x$ one can describe $V_t$ ($t \in D^*$) as $$\{x \mid f_t(x)=0\}=\{x \mid h(x)=t\},$$
i.e., as the Milnor fiber of $h$ at $x$. Note also that $h$ defines $V$ in a neighborhood of $x \notin B$. Since the Milnor fiber of a complex hypersurface singularity germ does not depend on the choice of a local equation (e.g., see \cite[Remark 3.1.8]{Di}), we can therefore use $h$ or a local representative of $f$ when considering Milnor fibers (of $\pi$) at points in $V \setminus B$. From here on we will use the notation $F_x$ for the Milnor fiber of the hypersurface singularity germ $(V,x)$, and we note for future reference that the above discussion also yields that $F_x$ is a manifold, which moreover is contractible if $x \in V \setminus B$ is a smooth point.
It was shown in \cite[Proposition 5.1]{PP} (see also \cite[Proposition 4.1]{MSS} or \cite[Lemma 4.2]{ST}) that
there are no vanishing cycles along the base locus $B$, i.e.,
\begin{equation} \varphi_\pi \underline{{\mathbb Z}}_{V_D} \vert_B \simeq 0.\end{equation}
Therefore, if $u:V\setminus B \hookrightarrow V$ is the open inclusion, we get that
\begin{equation}\label{s6}
\varphi_\pi \underline{{\mathbb Z}}_{V_D} \simeq u_! u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}.
\end{equation}
Since pullback to open subvarieties preserves perverse sheaves, we note that $u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]$ is a perverse sheaf on the {\it affine} variety $V \setminus B$.
Artin's vanishing theorem for perverse sheaves (e.g., \cite[Corollary 6.0.4]{Sc}) then implies that:
\begingroup
\allowdisplaybreaks
\begin{equation}\label{s1}
\begin{split}
H^k_\varphi(V)
& := {\mathbb H}^{k}(V; \varphi_\pi \underline{{\mathbb Z}}_{V_D}) \\
& \cong {\mathbb H}^{k-n}(V; \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]) \\
& \cong {\mathbb H}^{k-n}(V; u_! u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]) \\
& \cong {\mathbb H}_c^{k-n}(V \setminus B; u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]) \\
& \cong 0
\end{split}
\end{equation}
\endgroup
for all $k-n<0$, or equivalently, for all $k<n$.
Contractibility of Milnor fibers at smooth points of $V \setminus B$ implies that the support of $\varphi_\pi \underline{{\mathbb Z}}_{V_D}$ is in fact contained in $\Sigma \setminus B$, with $\Sigma$ denoting as before the singular locus of $V$. In particular, if $v:\Sigma \setminus B \hookrightarrow V\setminus B$ is the closed inclusion, then
\begin{equation}\label{s5} u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D} \simeq v_!v^*u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}.\end{equation}
Next, consider the composition of inclusion maps $$\Sigma \setminus B \overset{q}{\hookrightarrow}\Sigma \overset{p}{\hookrightarrow} V$$ with $p\circ q=u \circ v$. By using \eqref{s6} and \eqref{s5}, we get:
\begingroup
\allowdisplaybreaks
\begin{equation}\label{s3}
\begin{split} \varphi_\pi \underline{{\mathbb Z}}_{V_D}
& \simeq u_!v_!v^* u^*\varphi_\pi \underline{{\mathbb Z}}_{V_D} \\
& \simeq (u\circ v)_! (u\circ v)^* \varphi_\pi \underline{{\mathbb Z}}_{V_D} \\
& \simeq (p\circ q)_! (p\circ q)^* \varphi_\pi \underline{{\mathbb Z}}_{V_D} \\
& \simeq p_!q_!q^*p^* \varphi_\pi \underline{{\mathbb Z}}_{V_D} \\
&\simeq p_*p^* \varphi_\pi \underline{{\mathbb Z}}_{V_D},
\end{split}
\end{equation}
\endgroup
where the last isomorphism uses the fact that $p^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}$ is supported on $\Sigma \setminus B$, hence $p^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}\simeq q_!q^*p^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}$.
Since the support of the perverse sheaf $\varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]$ on $V$ is contained in the closed subset $\Sigma$, we get that $p^*\varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]$ is a perverse sheaf on $\Sigma$ (e.g., see \cite[Corollary 8.2.10]{M}). Since the complex dimension of $\Sigma$ is $s$, the support condition for perverse sheaves together with the hypercohomology spectral sequence yield that
$${\mathbb H}^{\ell}(\Sigma; p^*\varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]) \cong 0$$ for all $\ell \notin [-s,s]$.
This implies by \eqref{s3} that
\begin{equation}\label{s2} H^k_\varphi(V)={\mathbb H}^{k-n}(V; \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]) \cong
{\mathbb H}^{k-n}(\Sigma;p^*\varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]) \cong 0\end{equation} for all $k \notin [n-s,n+s]$.
The desired concentration degrees for the vanishing cohomology are now obtained by combining \eqref{s1} and \eqref{s2}.
\medskip
Let us finally show that $H^n_\varphi(V)$ is free. Fix a Whitney stratification ${\mathcal V}$ of $V$, so that $V \setminus \Sigma$ is the top stratum. (Note that together with $\pi^{-1}(D^*)$, this also yields a Whitney stratification of $V_D$.) Since $W$ intersects $V$ transversally (i.e., $W$ intersects each stratum $S$ in ${\mathcal V}$ transversally in ${\mathbb C} P^{n+1}$), we can assume without any loss of generality that the base locus $B=V \cap W$ is a closed union of strata of ${\mathcal V}$.
Next, we have by \eqref{s1}
that
$$ H^n_\varphi(V) \cong {\mathbb H}_c^{0}(V \setminus B; u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]),$$
with $${\mathcal P}:=u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n]$$
a ${\mathbb Z}$-perverse sheaf on the affine variety $V \setminus B$ and $u:V \setminus B \hookrightarrow V$ the open inclusion. In particular, this implies that if $S \in {\mathcal V}$ is any stratum in $V \setminus B$ with inclusion $i_S:S \hookrightarrow V \setminus B$ then ${\mathcal H}^k(i_S^!{\mathcal P}) \simeq 0$ for all integers $k<-\dim_{{\mathbb C}} S$.
By the Artin-Grothendieck type result of \cite[Corollary 6.0.4]{Sc}, in order to show that ${\mathbb H}^0_c(V\setminus B;{\mathcal P})$ is free it suffices to check that the perverse sheaf ${\mathcal P}$ satisfies the following costalk condition (see \cite[Example 6.0.2(3)]{Sc}):\footnote{We thank J\"org Sch\"urmann for indicating the relevant references to us.}
\begin{equation}\label{co1}
{\mathcal H}^{-\dim_{{\mathbb C}} S}(i_S^!{\mathcal P})_x \ \text{ is free }
\end{equation}
for any point $x$ in any stratum $S$ in $V \setminus B$ with inclusion $i_S:S \hookrightarrow V \setminus B$.
Let us now fix a stratum $S \in {\mathcal V}$ contained in $V \setminus B$ and let $x \in S$ be a point with inclusion map $k_x:\{x\} \hookrightarrow S$. Consider the composition $i_x:=i_S \circ k_x: \{x\} \hookrightarrow V \setminus B$. Using the fact that $$k_x^*i_S^! \simeq k_x^!i_S^! [2 \dim_{{\mathbb C}} S] \simeq i_x^! [2 \dim_{{\mathbb C}} S]$$
(e.g., see \cite[Remark 6.0.2(1)]{Sc}), the condition \eqref{co1} for $x \in S$ is equivalent to the following:
\begin{equation}\label{co2}
{\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal P}) \ \text{ is free}.
\end{equation}
In fact, the above discussion applies to any algebraically constructible complex ${\mathcal F}^\centerdot \in {^pD}^{\geq 0}$, with $({^pD}^{\leq 0}, {^pD}^{\geq 0})$ denoting the perverse t-structure on $D^b_c(V \setminus B)$.
Furthermore, in our setup (i.e., working with PID coefficients and having finitely generated stalk cohomology) ${\mathcal F}^\centerdot \in {^pD}^{\geq 0}$ satisfies the additional costalk condition \eqref{co1} (or, equivalently, \eqref{co2})
if and only if the Verdier dual ${\mathcal D}{\mathcal F}^\centerdot $ satisfies ${\mathcal D}{\mathcal F}^\centerdot \in {^pD}^{\leq 0}$.
Let $i:V=V_0 \hookrightarrow V_D$ denote the closed inclusion, and consider the following {\it variation triangle} for the projection map $\pi:V_D \to D$:
\begin{equation}\label{var}
i^![1] \longrightarrow \varphi_\pi \overset{var}{\longrightarrow} \psi_\pi \overset{[1]}{\longrightarrow}
\end{equation}
with $\psi_\pi $ denoting the corresponding nearby cycle functor for $\pi$ (e.g., see \cite[(5.90)]{Sc}). Apply the functor $u^!=u^*$ to the triangle \eqref{var}, and then apply the resulting triangle of functors to the complex $\underline{{\mathbb Z}}_{V_D}[n]$ to get the following triangle of constructible complexes on $V\setminus B$:
\begin{equation}\label{var2}
{\mathcal Z}:=u^!i^!\underline{{\mathbb Z}}_{V_D}[n+1] \longrightarrow {\mathcal P}:=u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D}[n] \longrightarrow
{\mathcal R}:=u^* \psi_\pi \underline{{\mathbb Z}}_{V_D}[n] \overset{[1]}{\longrightarrow}
\end{equation}
Let $x \in S$ be a point in a stratum of $V \setminus B$ with inclusion map $i_x:\{x\} \hookrightarrow V \setminus B$ as before, and apply the functor $i_x^!$ to the triangle \eqref{var2} to get the triangle:
\begin{equation}\label{var3}
i_x^!{\mathcal Z} \longrightarrow i_x^!{\mathcal P} \longrightarrow i_x^!{\mathcal R} \overset{[1]}{\longrightarrow}
\end{equation}
The cohomology long exact sequence associated to \eqref{var3} contains the terms
$$\cdots \longrightarrow {\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal Z}) \longrightarrow {\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal P}) \longrightarrow {\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal R}) \longrightarrow \cdots$$
Since the category of (torsion-)free abelian groups is closed under extensions, in order to prove \eqref{co2} it suffices to check that ${\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal Z})$ and ${\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal R})$ are (torsion-)free. (Note that, in fact, all costalks in question are finitely generated.)
Let us first show that ${\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal Z})$ is free. Regard the stratum $S$ containing $x$ as a stratum in $V_D$, and let $r_x:\{x\} \to V_D$ be the point inclusion, i.e., $r_x=i\circ u \circ i_x$. So $i_x^!{\mathcal Z}=r_x^!\underline{{\mathbb Z}}_{V_D}[n+1]$.
Recall that $\underline{{\mathbb Z}}_{V_D}[n+1]$ is a ${\mathbb Z}$-perverse sheaf on $V_D$, i.e., $\underline{{\mathbb Z}}_{V_D}[n+1] \in {^pD}^{\leq 0}(V_D) \cap{^pD}^{\geq 0}(V_D)$.
As already indicated above, in order to show that ${\mathcal H}^{\dim_{{\mathbb C}} S}(r_x^!\underline{{\mathbb Z}}_{V_D}[n+1])$ is free it suffices to verify that ${\mathcal D}(\underline{{\mathbb Z}}_{V_D}[n+1]) \in {^pD}^{\leq 0}(V_D)$, or equivalently, ${\mathcal D}\underline{{\mathbb Z}}_{V_D} \in {^pD}^{\leq -n-1}(V_D)$. This fact is a consequence of \cite[Definition 6.0.4, Example 6.0.11]{Sc}, where it is shown that the complete intersection $V_D$ has a {\it rectified homological depth} equal to its complex dimension $n+1$.
Next note that, due to the local product structure, the Milnor fiber $F_x$ of the hypersurface singularity germ $(V,x)$ with $x \in S$ has the homotopy type of a finite CW complex of real dimension $n-\dim_{{\mathbb C}} S$. In particular, $H_{n-\dim_{{\mathbb C}}S}(F_x;{\mathbb Z})$ is free.
Since by the costalk calculation (cf. \cite[(5.92)]{Sc}) and Poincar\'e duality we have for $x\in S$ that
\begin{equation}
{\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal R}) \cong H_c^{n+\dim_{{\mathbb C}}S}(F_x;{\mathbb Z}) \cong H_{n-\dim_{{\mathbb C}}S}(F_x;{\mathbb Z}),
\end{equation}
it follows that ${\mathcal H}^{\dim_{{\mathbb C}} S}(i_x^!{\mathcal R})$ is free. This completes the proof of Theorem \ref{th1}.
\subsection{Proof of Proposition \ref{p1}}
Since $V$ is a ${\mathbb Q}$-homology manifold, it follows by standard arguments involving the Hamm fibration (e.g., see \cite[Theorem 3.2.12]{Di}) that $V_D$ is also a ${\mathbb Q}$-homology manifold (with boundary). Thus $\underline{{\mathbb Q}}_{V_D}[n+1]$ is a self-dual ${\mathbb Q}$-perverse sheaf on $V_D$. Moreover, since $\varphi_\pi[-1]$ commutes with the Verdier dualizing functor (see \cite[Theorem 3.1]{Ma} and the references therein), we get that ${\mathcal Q}:=\varphi_\pi\underline{{\mathbb Q}}_{V_D}[n]$ is a Verdier self-dual perverse sheaf on $V$. Using the Universal Coefficients Theorem, we obtain:
$$H^k_{\varphi}(V) \otimes {\mathbb Q}={\mathbb H}^{k-n}(V;{\mathcal Q})\cong {\mathbb H}^{k-n}(V;{\mathcal D}{\mathcal Q}) \cong {\mathbb H}^{n-k}(V;{\mathcal Q})^\vee = (H^{2n-k}_{\varphi}(V)\otimes {\mathbb Q})^\vee.$$
The desired vanishing follows now from Theorem \ref{th1}.
\section{Bounds on Betti numbers of projective hypersurfaces}\label{bounds}
In this section, we prove Theorem \ref{th2} and specialize it, along with Corollary \ref{corgen}, in the case when the complex dimension $s$ of the singular locus is $\leq 1$.
\subsection{Proof of Theorem \ref{th2}}
Let $\Sigma:=V_{\rm sing}$ be the singular locus of $V$, of complex dimension $s$, and fix a Whitney stratification ${\mathcal V}$ of $V$ so that $V \setminus \Sigma$ is the top open stratum.
We have by Corollary \ref{corgen} (or by the specialization sequence \eqref{spec}) that
$$b_{n+s+1}(V) \leq 1+\mathop{{\mathrm{rank}}}\nolimits \ H^{n+s}_{\varphi}(V).$$
So it suffices to show that
\begin{equation}\label{rvg}
\mathop{{\mathrm{rank}}}\nolimits \ H^{n+s}_{\varphi}(V) \leq \sum_i \mu_i^\pitchfork,
\end{equation}
where the summation on the right-hand side runs over the top $s$-dimensional connected strata $S_i$ of $\Sigma$, and $\mu_i^\pitchfork$ denotes the corresponding transversal Milnor number for such a stratum $S_i$.
If $s=0$, an easy computation shows that \eqref{rvg} is in fact an equality, see \eqref{nee} below. Let us next investigate the case when $s\geq 1$.
For any $\ell \leq s$, denote by $\Sigma_{\ell}$ the union of strata in $\Sigma$ of complex dimension $\leq \ell$. In particular, we can filter $\Sigma$ by closed (possibly empty) subsets
$$\Sigma=\Sigma_s \supset \Sigma_{s-1} \supset \cdots \supset \Sigma_0 \supset \Sigma_{-1}=\emptyset.$$
Let $$U_\ell:=\Sigma_{\ell} \setminus \Sigma_{\ell-1}$$ be the union of $\ell$-dimensional strata, so $\Sigma_\ell=\sqcup_{k\leq \ell} U_k$. (Here, $\sqcup$ denotes disjoint union.) Recall that the smooth hypersurface $W=\{g=0\}$ was chosen so that it intersects each stratum in $\Sigma$ transversally.
In the notations of the proof of Theorem \ref{th1}, it follows from equations \eqref{s1} and \eqref{s5} that:
$$ H^{n+s}_\varphi(V) \cong {\mathbb H}_c^{n+s}(V \setminus B; u^* \varphi_\pi \underline{{\mathbb Z}}_{V_D})
\cong {\mathbb H}_c^{n+s}(\Sigma \setminus B; v^* u^*\varphi_\pi \underline{{\mathbb Z}}_{V_D}),$$
with $B=V \cap W$ the axis of the pencil, and with $v:\Sigma\setminus B \hookrightarrow V \setminus B$ and $u:V \setminus B \hookrightarrow V$ the inclusion maps.
We also noted that either $h$ or a local representative of $f$ can be used when considering Milnor fibers of $\pi$ at points in $V \setminus B$. For simplicity, let us use the notation
$${\mathcal R} := v^* u^*\varphi_\pi \underline{{\mathbb Z}}_{V_D} \in D^b_c(\Sigma \setminus B),$$
and consider the part of the long exact sequence for the compactly supported hypercohomology of ${\mathcal R}$ associated to the disjoint union
$$\Sigma \setminus B= (U_s \setminus B) \sqcup (\Sigma_{s-1} \setminus B)$$
involving $H^{n+s}_{\varphi}(V)$,
namely:
$$\cdots \to {\mathbb H}^{n+s}_c(U_s \setminus B; {\mathcal R}) \to H^{n+s}_{\varphi}(V) \to {\mathbb H}^{n+s}_c(\Sigma_{s-1} \setminus B; {\mathcal R}) \to \cdots$$
We claim that
\begin{equation}\label{cl1}
{\mathbb H}^{n+s}_c(\Sigma_{s-1} \setminus B; {\mathcal R}) \cong 0,
\end{equation}
so, in particular, there is an epimorphism:
\begin{equation}\label{nee2}{\mathbb H}^{n+s}_c(U_s \setminus B; {\mathcal R}) \twoheadrightarrow H^{n+s}_{\varphi}(V).\end{equation}
In order to prove \eqref{cl1}, consider
the part of the long exact sequence for the compactly supported hypercohomology of ${\mathcal R}$ associated to the disjoint union
$$\Sigma_{s-1} \setminus B= (U_{s-1} \setminus B) \sqcup (\Sigma_{s-2} \setminus B)$$
involving ${\mathbb H}^{n+s}_c(\Sigma_{s-1} \setminus B; {\mathcal R})$,
namely:
$$\cdots \to {\mathbb H}^{n+s}_c(U_{s-1} \setminus B; {\mathcal R}) \to {\mathbb H}^{n+s}_c(\Sigma_{s-1} \setminus B; {\mathcal R}) \to {\mathbb H}^{n+s}_c(\Sigma_{s-2} \setminus B; {\mathcal R}) \to \cdots$$
We first show that \begin{equation}\label{cl2}{\mathbb H}^{n+s}_c(U_{s-1} \setminus B; {\mathcal R}) \cong 0.\end{equation}
Indeed, the $(p,q)$-entry in the $E_2$-term of the hypercohomology spectral sequence computing ${\mathbb H}^{n+s}_c(U_{s-1} \setminus B; {\mathcal R})$ is given by $$E^{p,q}_2=H^p_c(U_{s-1} \setminus B; {\mathcal H}^q( {\mathcal R})),$$ and we are interested in those pairs of integers $(p,q)$ with $p+q=n+s$.
Since a point in an $(s-1)$-dimensional stratum of $V$ has a Milnor fiber which has the homotopy type of a finite CW complex of real dimension $n-s+1$, it follows that $${\mathcal H}^q( {\mathcal R}) \vert_{U_{s-1} \setminus B}\simeq 0 \ \ \text{ for any } \ q>n-s+1.$$
Also, by reasons of dimension, we have that $E_2^{p,q}=0$ if $p> 2s-2$. In particular, the only possibly non-trivial entries on the $E_2$-page of the above spectral sequence are those corresponding to pairs $(p,q)$ with $p\leq 2s-2$ and $q\leq n-s+1$, none of which add up to $n+s$ (indeed, $p+q\leq n+s-1$ in this range).
This proves \eqref{cl2}. If $s=1$, this completes the proof of \eqref{cl1} since $\Sigma_{-1}=\emptyset$. If $s>1$, the long exact sequences for the compactly supported hypercohomology of ${\mathcal R}$ associated to the disjoint union
$$\Sigma_{\ell} \setminus B= (U_{\ell} \setminus B) \sqcup (\Sigma_{\ell-1} \setminus B),$$
$0 \leq \ell \leq s-1$, can be employed to reduce
the proof of \eqref{cl1} to showing that
\begin{equation}\label{cl3}{\mathbb H}^{n+s}_c(U_{\ell} \setminus B; {\mathcal R}) \cong 0\end{equation}
for all $0 \leq \end{enumerate}d{lem}l \leq s-1$. To prove \eqref{cl3}, we make use of the hypercohomology spectral sequence whose $E_2$-term is computed by
$$E^{p,q}_2=H^p_c(U_{\ell} \setminus B; {\mathcal H}^q( {\mathcal R})),$$ and we are interested again in those pairs of integers $(p,q)$ with $p+q=n+s$.
Since a point in an $\ell$-dimensional stratum of $V$ has a Milnor fiber which has the homotopy type of a finite CW complex of real dimension $n-\ell$, it follows that $${\mathcal H}^q( {\mathcal R}) \vert_{U_{\ell} \setminus B}\simeq 0 \ \ \text{ for any } \ q>n-\ell.$$ Moreover, by reasons of dimension, $E_2^{p,q}=0$ if $p> 2\ell$. So the only possibly non-trivial entries on the $E_2$-page are those corresponding to pairs $(p,q)$ with $p\leq 2\ell$ and $q\leq n-\ell$, none of which add up to $n+s$ (indeed, $p+q\leq n+\ell \leq n+s-1$). This proves \eqref{cl3}, and completes the proof of \eqref{cl1} in the general case.
In order to prove \eqref{rvg}, we make use of the epimorphism \eqref{nee2} as follows.
Recall that, in our notations, $U_s \setminus B$ is a disjoint union of connected strata $S_i \setminus B$ of complex dimension $s$. Each $S_i \setminus B$ has a generic transversal Milnor fiber $F_i^\pitchfork$, which has the homotopy type of a bouquet of $\mu_i^\pitchfork$ $(n-s)$-dimensional spheres. So the integral cohomology of $F_i^\pitchfork$ is concentrated in degree $n-s$. Moreover, for each $i$, there is a local system ${\mathcal L}_i^\pitchfork$ on $S_i \setminus B$ with stalk $\widetilde{H}^{n-s}(F_i^\pitchfork;{\mathbb Z})$, whose monodromy is usually referred to as the {\it vertical monodromy}. This is exactly the restriction of the constructible sheaf ${\mathcal H}^{n-s}( {\mathcal R})$ to
$S_i \setminus B$.
It then follows from the hypercohomology spectral sequence computing ${\mathbb H}^{n+s}_c(U_s \setminus B; {\mathcal R}) $ and by Poincar\'e duality that
\begin{equation}\label{last}{\mathbb H}^{n+s}_c(U_s \setminus B; {\mathcal R}) \cong \bigoplus_i \ H^{2s}_c(S_i \setminus B;{\mathcal L}_i^\pitchfork) \cong \bigoplus_i \ H_0(S_i \setminus B;{\mathcal L}_i^\pitchfork)\end{equation}
which readily gives \eqref{rvg}.
$\hfill$ $\square$
\begin{rem}\label{rem31}
Note that the upper bound on $b_{n+s+1}(V)$ can be formulated entirely in terms of coinvariants of vertical monodromies along the top dimensional singular strata of $V$. Indeed, if in the notations of the above proof we further let $h_i^v$ denote the vertical monodromy along $S_i \setminus B$, then each term on the right-hand side of \eqref{last} is computed by the coinvariants of $h_i^v$, i.e., $H_0(S_i \setminus B;{\mathcal L}_i^\pitchfork) \cong \widetilde{H}^{n-s}(F_i^\pitchfork;{\mathbb Z})_{h^v_i}.$ Note that the latter statement, when combined with \eqref{nee2}, yields an epimorphism
\begin{equation}\label{nee2b} \bigoplus_i \widetilde{H}^{n-s}(F_i^\pitchfork;{\mathbb Z})_{h^v_i} \twoheadrightarrow H^{n+s}_{\varphi}(V),\end{equation}
the summation on the left hand side being over the top dimensional singular strata of $V$. One can, moreover, proceed like in \cite{MPT} and give a more precise dependence of all (possibly non-trivial) vanishing cohomology groups $H^{k}_{\varphi}(V)$, $n \leq k \leq n+s$, in terms of the singular strata of $V$. We leave the details to the interested reader.
\end{rem}
\subsection{Isolated singularities}
Assume that the projective hypersurface $V \subset {\mathbb C} P^{n+1}$ has only isolated singularities (i.e., $s=0$).
Then the incidence variety $V_D$ is smooth, since the base locus of the pencil does not meet the (finite) singular locus of $V$, and the projection $\pi:V_D \to D$ has isolated singularities exactly at the singular points of $V$.
The only non-trivial vanishing homology group, $H_{n+1}^{\curlyvee}(V)$, is free, and is computed as:
\begin{equation}\label{nee} H_{n+1}^{\curlyvee}(V) \cong \bigoplus_{x \in V_{\rm sing}} \widetilde{H}_{n}(F_x;{\mathbb Z}) \cong \bigoplus_{x \in V_{\rm sing}} {\mathbb Z}^{\mu_x},
\end{equation}
where $F_x$ denotes the Milnor fiber of the isolated hypersurface singularity germ $(V,x)$, with corresponding Milnor number $\mu_x$. The second isomorphism follows from the fact that $F_x$ has the homotopy type of a bouquet of $\mu_x$ $n$-spheres.
The $5$-term exact sequence \eqref{sp1} then reads as:
\begin{equation}\label{speciso}
0 \to H_{n+1}(V_t;{\mathbb Z}) \to H_{n+1}(V;{\mathbb Z}) {\to} \bigoplus_{x \in V_{\rm sing}} \widetilde{H}_{n}(F_x;{\mathbb Z}) \overset{\alpha_{n}}{\to} H_{n}(V_t;{\mathbb Z}) \to H_{n}(V;{\mathbb Z}) \to 0.
\end{equation}
Therefore Corollary \ref{corgen}(i)--(iii), together with the following bound via Theorem \ref{th2}:
$$b_{n+1}(V) \leq 1+\sum_{x \in V_{\rm sing}} \mu_x,$$
recover \cite[Proposition 2.2]{ST}, which in turn is a homology counterpart of Dimca's result \cite[Theorem 5.4.3]{Di}.
In fact, Dimca's result was formulated in cohomology, and it is a direct
consequence of the specialization sequence \eqref{spec} via Theorem \ref{th1}, together with the observation that the only non-trivial vanishing cohomology group, $H^n_{\varphi}(V)$, is computed as:
\begin{equation}\label{nee} H^n_{\varphi}(V) \cong \bigoplus_{x \in V_{\rm sing}} \widetilde{H}^n(F_x;{\mathbb Z}).\end{equation}
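For instance, for the quadric cone $V=\{x^2+y^2+z^2=0\}\subset {\mathbb C} P^{3}$ (so $n=2$, with a single $A_1$ point of Milnor number $1$), the only non-trivial vanishing cohomology group is $H^2_{\varphi}(V)\cong {\mathbb Z}$; by \eqref{bsm} a smooth quadric surface has $b_2(V_t)=2$, and the sequence \eqref{speciso} becomes $0 \to 0 \to 0 \to {\mathbb Z} \to {\mathbb Z}^2 \to {\mathbb Z} \to 0$, in agreement with $b_2(V)=1$ and $b_3(V)=0$ for the projective cone over a conic.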
\begin{rem}\label{rem3.3}
Let us recall here that if $V\subset {\mathbb C} P^{n+1}$ is a degree $d$ reduced projective hypersurface with only isolated singularities, then its Euler characteristic is computed by the formula (e.g., see \cite[Exercise 5.3.7(i) and Corollary 5.4.4]{Di} or \cite[Proposition 10.4.2]{M}):
\begin{equation}\label{chii}
\chi(V)=(n+2)-\frac{1}{d} \big[1+(-1)^{n+1}(d-1)^{n+2}\big] +(-1)^{n+1}\sum_{x \in V_{\rm sing}} \mu_x,
\end{equation}
with $\mu_x$ denoting as before the Milnor number of the isolated hypersurface singularity germ $(V,x)$. In particular, if $V$ is a projective {\it curve} (i.e., $n=1$), then $H_{0}(V;{\mathbb Z}) \cong {\mathbb Z}$, $H_{2}(V;{\mathbb Z})\cong {\mathbb Z}^r$, with $r$ denoting the number of irreducible components of $V$, and $H_{1}(V;{\mathbb Z})$ is a free group whose rank is computed from \eqref{chii} by the formula:
\begin{equation}\label{b1}b_1(V)=r+1+d^2-3d-\sum_{x \in V_{\rm sing}} \mu_x.\end{equation}
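For instance, for an irreducible nodal plane cubic ($d=3$, $r=1$, a single node with $\mu=1$), formula \eqref{b1} gives $b_1(V)=1+1+9-9-1=1$, as expected for a curve homeomorphic to a sphere with two points identified.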
\end{rem}
\subsection{$1$-dimensional singular locus}
This particular case was treated in homology in \cite[Proposition 7.7]{ST}. Let us recall the preliminaries, in order to point out once more that in this paper we have transposed them to a fully general setting.
One starts with $V \subset {\mathbb C} P^{n+1}$, a degree $d$ projective hypersurface with a singular locus $\Sigma:=V_{\rm sing}$ of complex dimension $1$.
The singular locus $\Sigma$ consists of a union of irreducible projective curves $\Sigma_i$ and a finite set $I$ of isolated singular points. Each curve $\Sigma_i$ has a generic transversal type, with transversal Milnor fiber $F_i^\pitchfork \simeq \bigvee_{\mu_i^\pitchfork} S^{n-1}$ and corresponding transversal Milnor number $\mu_i^\pitchfork$. Each $\Sigma_i$ also contains a finite set $S_i$ of special points of non-generic transversal type. One endows $V$ with the Whitney stratification whose strata are:
\begin{itemize}
\item the isolated singular points in $I$,
\item the special points in $S=\bigcup_i S_i$,
\item the (top) one-dimensional components of $\Sigma \setminus S$,
\item the open stratum $V \setminus \Sigma$.
\end{itemize}
The genericity of the pencil $\{V_{t}\}_{t\in D}$ implies that the base locus $B$ intersects each $\Sigma_i$ in a finite set $B_i$ of general points, which are not contained in $I \cup S_i$.
The total space $V_D$ of the pencil has in this case only isolated singularities (corresponding to the points where $B$ intersects $\Sigma$), and the projection $\pi:V_D \to D$ has a $1$-dimensional singular locus $\Sigma \times \{0\}$.
With the above specified landscape, the Siersma-Tib\u ar result \cite[Proposition 7.7]{ST} reads now as the specialisation for $s=1$ of Corollary \ref{corgenhom}, together with the bound provided by Theorem \ref{th2}.
\section{Examples}
In this section we work out a few specific examples. In particular, in \S\ref{quad} we show that the upper bound given by Theorem \ref{th2} is sharp, \S\ref{rathom} deals with a hypersurface which is a rational homology manifold, while \S\ref{projc} discusses the case of a projective cone on a singular curve. However, as pointed out in \cite{Di0} already in the case of isolated singularities, it is difficult in general to compute the integral cohomology of a hypersurface by means of Corollary \ref{corgen}. It is therefore important to also develop alternative methods for exact calculations of cohomology and/or Betti numbers, e.g., see \cite{Di} for special situations.
\subsection{Singular quadrics}\label{quad} Let $n$ and $q$ be integers satisfying $4 \leq q \leq n+1$, and let
$$f_q(x_0,\ldots, x_{n+1})=\sum_{0\leq i,j \leq n+1} q_{ij} x_i x_j$$ be a quadratic form of rank $q:=\mathop{{\mathrm{rank}}}\nolimits (Q)$ with $Q=(q_{ij})$. The singular locus $\Sigma$ of the quadric hypersurface $V_q=\{f_q=0\} \subset {\mathbb C} P^{n+1}$ is a linear space of complex dimension $s=n+1-q$ satisfying $0 \leq s \leq n-3$. The generic transversal type for $\Sigma={\mathbb C} P^s$ is an $A_1$-singularity, so $\mu^\pitchfork=1$.
Theorem \ref{th2} yields that \begin{equation}\label{ub2} b_{n+s+1}(V_q) \leq 2.\end{equation} In what follows, we show that if the rank $q$ is even (i.e., $n+s+1$ is even), the upper bound on $ b_{n+s+1}(V_q)$ given in \eqref{ub2} is sharp. Indeed, in our notation, the quadric $V_q$ is a projective cone with vertex $\Sigma$ over a smooth quadric $W_q \subset {\mathbb C} P^{n-s}$. Moreover, since $n-s\geq 3$, the homotopy version of the Lefschetz hyperplane theorem yields that $W_q$ is simply-connected (see, e.g., \cite[Theorem 1.6.5]{Di}). Let $U=V_q \setminus \Sigma$ and consider the long exact sequence
$$ \cdots \to H^k_c(U;{\mathbb Z}) \to H^k(V_q;{\mathbb Z}) \to H^k(\Sigma;{\mathbb Z}) \to H^{k+1}_c(U;{\mathbb Z}) \to \cdots $$
Note that projecting from $\Sigma$ gives $U$ the structure of a vector bundle of rank $s+1$ over $W_q$. Let $p:U \to W_q$ denote the bundle map. Then $$H^k_c(U;{\mathbb Z})\cong H^k(W_q;Rp_!\underline{{\mathbb Z}}_U)$$ can be computed by the corresponding hypercohomology spectral sequence (i.e., the compactly supported Leray-Serre spectral sequence of the map $p$), with $E^{a,b}_2=H^a(W_q;R^bp_!\underline{{\mathbb Z}}_U)$. Since $\pi_1(W_q)=0$, the local system $R^bp_!\underline{{\mathbb Z}}_U$ is constant on $W_q$ with stalk
$H^b_c({\mathbb C}^{s+1};{\mathbb Z})$. Since the latter is ${\mathbb Z}$ if $b=2s+2$ and $0$ otherwise, the above spectral sequence yields isomorphisms $H^k_c(U;{\mathbb Z}) \cong H^{k-2-2s}(W_q;{\mathbb Z})$ if $k \geq 2s+2$ and $H^k_c(U;{\mathbb Z}) \cong 0$ if $k < 2s+2$. On the other hand, $H^k(\Sigma;{\mathbb Z})=0$ if $k>2s$, so the above long exact sequence yields:
\begin{equation}
H^k(V_q;{\mathbb Z})\cong
\begin{cases}
H^k(\Sigma;{\mathbb Z}) & 0 \leq k \leq 2s \\
0 & k=2s+1 \\
H^{k-2-2s}(W_q;{\mathbb Z}) & 2s+2 \leq k \leq 2n.
\end{cases}
\end{equation}
Since $W_q$ is a smooth quadric, its integral cohomology is known from \eqref{one}, \eqref{two} and \eqref{bsm}. Altogether, this gives:
\begin{equation}
H^k(V_q;{\mathbb Z})\cong
\begin{cases}
0 & k \text{ odd} \\
{\mathbb Z} & k \text{ even}, \ k \neq n+s+1 \\
{\mathbb Z}^2 & k= n+s+1 \text{ even}.
\end{cases}
\end{equation}
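For instance, for $n=3$ and $q=4$ (so $s=0$), $V_4\subset {\mathbb C} P^{4}$ is the cone over a smooth quadric surface $W_4\cong {\mathbb C} P^1\times {\mathbb C} P^1$, and the computation above gives $H^4(V_4;{\mathbb Z})\cong H^2(W_4;{\mathbb Z})\cong {\mathbb Z}^2$, so the bound \eqref{ub2} is indeed attained in degree $n+s+1=4$.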
\subsection{One-dimensional singular locus with a two-step filtration}\label{rathom}
Let $V=\{f=0\}\subset {\mathbb C} P^4$ be the $3$-fold in homogeneous coordinates $[x:y:z:t:v]$, defined by $$f=y^2z+x^3+tx^2+v^3.$$ The singular locus of $V$ is the projective line $\Sigma=\{[0:0:z:t:0] \mid z,t \in {\mathbb C}\}$. By \eqref{one}, we get: $b_0(V)=1$, $b_1(V)=0$, $b_2(V)=1$. Since $V$ is irreducible, \eqref{top} yields: $b_6(V)=1$.
We are therefore interested in understanding the Betti numbers $b_3(V)$, $b_4(V)$ and $b_5(V)$.
It was shown in \cite[Example 6.1]{M0} that $V$ has a Whitney stratification with strata: $$S_3:=V \setminus \Sigma, \ \ S_1:=\Sigma \setminus [0:0:0:1:0], \ \ S_0:=[0:0:0:1:0],$$ giving $V$ a two-step filtration $V \supset \Sigma \supset [0:0:0:1:0].$
The transversal singularity for the top singular stratum $S_1$ is the Brieskorn type singularity $y^2+x^3+v^3=0$ at the origin of ${\mathbb C}^3$ (in a normal slice to $S_1$), with corresponding transversal Milnor number $\mu_1^\pitchfork =4$. So Theorem \ref{th2} yields that $b_5(V) \leq 5$, while Corollary \ref{corgen} gives $b_3(V) \leq 10$. As we will indicate below, the actual values of $b_3(V)$ and $b_5(V)$ are zero.
It was shown in \cite[Example 6.1]{M0} that the hypersurface $V$ is in fact a ${\mathbb Q}$-homology manifold, so it satisfies Poincar\'e duality over the rationals. In particular, $b_5(V)=b_1(V)=0$ and $b_4(V)=b_2(V)=1$. To determine $b_3(V)$, it suffices to compute the Euler characteristic of $V$, since $\chi(V)=4-b_3(V)$. Let us denote by $Y\subset {\mathbb C} P^4$ a smooth $3$-fold which intersects the Whitney stratification of $V$ transversally. Then \eqref{chi} yields that $\chi(Y)=-6$ and we have by \cite[(10.40)]{M} that
\begin{equation}\label{plug}
\chi(V)=\chi(Y)-\chi(S_1 \setminus Y) \cdot \mu_1^\pitchfork -\chi(S_0) \cdot (\chi(F_0)-1),
\end{equation}
where $F_0$ denotes the Milnor fiber of $V$ at the singular point $S_0$. As shown in \cite[Example 6.1]{M0}, $F_0 \simeq S^3 \vee S^3$. So, using the fact that the general $3$-fold $Y$ intersects $S_1$ at $3$ points (so that $\chi(S_1 \setminus Y)=1-3=-2$, while $\chi(F_0)-1=-2$), we get from \eqref{plug} that $\chi(V)=-6-(-2)\cdot 4-1\cdot(-2)=4$. Therefore, $b_3(V)=0$, as claimed. Moreover, since $H^3(V;{\mathbb Z})$ is free, this also shows that in fact $H^3(V;{\mathbb Z})\cong 0$.
\begin{rem} Note that the hypersurface of the previous example has the same Betti numbers as ${\mathbb C} P^3$. This fact can also be checked directly, by noting that the monodromy operator acting on the reduced homology of the Milnor fiber of $f$ at the origin in ${\mathbb C}^5$ has no eigenvalue equal to $1$ (see \cite[Corollary 5.2.22]{Di}).
More generally, consider a degree $d$ homogeneous polynomial $g(x_0,\ldots, x_n)$ with associated Milnor fiber $F_g$ such that the monodromy operator $h_*$ acting on $\widetilde{H}_*(F_g;{\mathbb Q})$ is the identity. Then the hypersurface $V=\{g(x_0,\ldots, x_n)+x_{n+1}^{d}=0\}\subset {\mathbb C} P^{n+1}$ has the same ${\mathbb Q}$-(co)homology as ${\mathbb C} P^n$. For example, the hypersurface $V_n=\{x_0x_1\ldots x_n+x_{n+1}^{n+1}=0\}$ has singularities in codimension $2$, but the same ${\mathbb Q}$-(co)homology as ${\mathbb C} P^n$. However, $V_n$ does not have in general the ${\mathbb Z}$-(co)homology of ${\mathbb C} P^n$; indeed, $H^3(V_2;{\mathbb Z})$ contains $3$-torsion (cf. \cite[Proposition 5.4.8]{Di}).
\end{rem}
\subsection{Projective cone on a curve}\label{projc}
The projective curve $C=\{xyz=0\}\subset {\mathbb C} P^2$ has three irreducible components and three singularities of type $A_1$ (each having a corresponding Milnor number equal to $1$). Therefore, by Remark \ref{rem3.3} and formula \eqref{b1}, the integral cohomology of $C$ is given by:
$$H^0(C;{\mathbb Z})\cong {\mathbb Z}, \ H^1(C;{\mathbb Z}) \cong {\mathbb Z}, \ H^2(C;{\mathbb Z})\cong {\mathbb Z}^3.$$
The projective cone on $C$ is the surface $V=\{xyz=0\}\subset {\mathbb C} P^3$. The singular locus of $V$ consists of three projective lines intersecting at the point $[0:0:0:1]$, each having a (generic) transversal singularity of type $A_1$, i.e., with corresponding transversal Milnor number equal to $1$.
By \cite[(5.4.18)]{Di}, we have that $$H^k(V;{\mathbb Z}) \cong H^{k-2}(C;{\mathbb Z}), \ \ \text{for all} \ k \geq 2.$$
Together with \eqref{one}, this yields:
\begin{equation}\label{comp1}
H^0(V;{\mathbb Z})\cong {\mathbb Z}, \ H^1(V;{\mathbb Z})\cong 0, \ H^2(V;{\mathbb Z})\cong {\mathbb Z}, \ H^3(V;{\mathbb Z}) \cong {\mathbb Z}, \ H^4(V;{\mathbb Z})\cong {\mathbb Z}^3.
\end{equation}
By Theorem \ref{th1}, the only non-trivial vanishing cohomology groups of $V$ are $H^2_{\varphi}(V)$, which is free, and $H^3_{\varphi}(V)$. These can be explicitly computed by using
\eqref{bsm}, \eqref{sp1} and \eqref{comp1}, to get: $$H^2_{\varphi}(V)\cong {\mathbb Z}^7, \ H^3_{\varphi}(V)\cong {\mathbb Z}^2$$ (compare with \cite[Example 7.5]{ST}).
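As a quick consistency check (for the reader's convenience; we use the identification $H^k_{\varphi}(V)\cong H^{k+1}(V_D,V_t;{\mathbb Z})$ and the isomorphism $H^k(V_D;{\mathbb Z})\cong H^k(V;{\mathbb Z})$ appearing in the final remark of Section \ref{supLHT} below): a smooth cubic surface $V_t\subset {\mathbb C} P^3$ has $\chi(V_t)=9$, while \eqref{comp1} gives $\chi(V)=1-0+1-1+3=4$, so
\[
\sum_k (-1)^k\, {\rm rk}\, H^k_{\varphi}(V)=\chi(V_t)-\chi(V)=9-4=5,
\]
in agreement with ${\rm rk}\, H^2_{\varphi}(V)-{\rm rk}\, H^3_{\varphi}(V)=7-2=5$.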
\section{Supplement to the Lefschetz hyperplane theorem and applications}\label{supLHT}
In this section, we give a new proof of Kato's result mentioned in the Introduction. Our proof is different from that of \cite[Theorem 5.2.11]{Di}, and it relies on a supplement to the Lefschetz hyperplane section theorem (Theorem \ref{thapi}), which is proved in Theorem \ref{thap} below.
\subsection{A supplement to the Lefschetz hyperplane theorem}
In this section, we prove the following result of Lefschetz type:
\begin{thm}\label{thap}
Let $V \subset {\mathbb C} P^{n+1}$ be a reduced complex projective hypersurface with $s=\dim V_{\rm sing}$ the complex dimension of its singular locus. (By convention, we set $s=-1$ if $V$ is nonsingular.) Let $H \subset {\mathbb C} P^{n+1}$ be a generic hyperplane (i.e., transversal to a Whitney stratification of $V$), and denote by $V_H:=V\cap H$ the corresponding hyperplane section of $V$.
Then
\begin{equation}\label{34ap}
H^k(V,V_H; {\mathbb Z})=0 \ \ \text{for} \ \ k < n \ \ \text{and} \ \ n+s+1 < k < 2n.
\end{equation}
Moreover, $H^{2n}(V,V_H; {\mathbb Z})\cong{\mathbb Z}^r$, where $r$ is the number of irreducible components of $V$, and $H^{n}(V,V_H; {\mathbb Z})$ is (torsion-)free.
\end{thm}
\begin{proof}
Let us first note that the long exact sequence for the cohomology of the pair $(V,V_H)$ together with \eqref{top} yield that:
$$H^{2n}(V,V_H; {\mathbb Z})\cong H^{2n}(V;{\mathbb Z})\cong{\mathbb Z}^r.$$
Moreover, we have isomorphisms:
$$H^k(V,V_H; {\mathbb Z}) \cong H^k_c(V^a;{\mathbb Z}),$$
where $V^a:=V\setminus V_H$.
Therefore, the vanishing in \eqref{34ap} for $k<n$ is a consequence of the Artin vanishing theorem (e.g., see \cite[Corollary 6.0.4]{Sc}) for the perverse sheaf $\underline{{\mathbb Z}}_{V^a}[n]$ (cf. \cite{Le}) on the affine hypersurface $V^a$ obtained from $V$ by removing the hyperplane section $V_H$. Indeed,
$$H^k_c(V^a;{\mathbb Z})={\mathbb H}^{k-n}_c(V^a;\underline{{\mathbb Z}}_{V^a}[n]) \cong 0$$ for all $k-n<0$. (Note that vanishing in this range is equivalent to the classical Lefschetz hyperplane section theorem.)
Since $V$ is reduced, we have that $s<n$. If $n=s+1$ then $n+s+1=2n$ and there is nothing else to prove in \eqref{34ap}. So let us now assume that $n>s+1$. For $n+s+1<k<2n$, we have the following sequence of isomorphisms:
\begin{equation}\label{35} \begin{split}
H^k(V, V_H; {\mathbb Z}) &\cong H^k(V \cup H, H; {\mathbb Z}) \\
&\cong H_{2n+2-k} ({\mathbb C} P^{n+1}\setminus H, {\mathbb C} P^{n+1} \setminus (V \cup H); {\mathbb Z}) \\
&\cong H_{2n+1-k}({\mathbb C} P^{n+1}\setminus (V \cup H); {\mathbb Z}),
\end{split}
\end{equation}
where the first isomorphism follows by excision, the second is an application of the Poincar\'e-Alexander-Lefschetz duality, and the third follows from the cohomology long exact sequence of a pair.
Set
$$U={\mathbb C} P^{n+1}\setminus (V \cup H),$$ and let $L = {\mathbb C} P^{n-s}$ be a generic linear subspace (i.e., transversal to both $V$ and $H$). Then, by transversality, $L \cap V$ is a nonsingular hypersurface in $L$, transversal to the hyperplane at infinity $L \cap H$ in $L$. Therefore, $U \cap L=L\setminus \left( (V \cup H) \cap L \right)$ has the homotopy type of a wedge
$$U \cap L \simeq S^1 \vee S^{n-s} \vee \ldots \vee S^{n-s},$$
e.g., see \cite[Corollary 1.2]{Lib}. Thus, by the Lefschetz hyperplane section theorem (applied $s+1$ times), we obtain:
$$H_i(U;{\mathbb Z}) \cong H_i(U \cap L;{\mathbb Z}) \cong 0$$
for all integers $i$ in the range $1 < i < n-s$. Substituting $i = 2n+1-k$ in \eqref{35}, we get that
$H^k(V, V_H; {\mathbb Z})\cong 0$ for all integers $k$ in the range $n+s+1 < k < 2n$.
It remains to show that $H^{n}(V,V_H; {\mathbb Z})\cong H^n_c(V^a;{\mathbb Z})\cong {\mathbb H}^{0}_c(V^a;\underline{{\mathbb Z}}_{V^a}[n])$ is (torsion-)free. This follows as in the proof of Theorem \ref{th1} since the affine hypersurface $V^a$ has rectified homological depth equal to its complex dimension $n$.
This completes the proof of the theorem.
\end{proof}
Theorem \ref{thap} and the Universal Coefficient Theorem now yield the following consequence:
\begin{cor}\label{corap}
In the notations of Theorem \ref{thap} we have that:
\begin{equation}\label{36ap}
H_k(V,V_H; {\mathbb Z})=0 \ \ \text{for} \ \ k < n \ \ \text{and} \ \ n+s+1 < k < 2n.
\end{equation}
Moreover, $H_{2n}(V,V_H; {\mathbb Z})\cong{\mathbb Z}^r$, where $r$ is the number of irreducible components of $V$.
\end{cor}
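To illustrate the numerical content of Theorem \ref{thap} and Corollary \ref{corap}, consider for instance a reduced hypersurface $V \subset {\mathbb C} P^4$ (so $n=3$) with only isolated singularities ($s=0$) and a generic hyperplane section $V_H$. Then
\[
H^k(V,V_H;{\mathbb Z})=0 \ \ \mbox{for} \ k\le 2 \ \mbox{and} \ k=5, \qquad H^{3}(V,V_H;{\mathbb Z}) \ \mbox{is (torsion-)free}, \qquad H^{6}(V,V_H;{\mathbb Z})\cong {\mathbb Z}^r,
\]
so only $H^4(V,V_H;{\mathbb Z})$ and the rank of $H^3(V,V_H;{\mathbb Z})$ are not determined by these general results.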
\subsection{Kato's theorem for hypersurfaces}
The isomorphism \eqref{two} from the introduction was originally proved by Kato \cite{Ka}, and it holds more generally for complete intersections. We derive it here as a consequence of Theorem \ref{thap}.
\begin{thm}[Kato]\label{Kato}
Let $V\subset {\mathbb C} P^{n+1}$ be a reduced degree $d$ complex projective hypersurface with $s=\dim V_{\rm sing}$ the complex dimension of its singular locus. (By convention, we set $s=-1$ if $V$ is nonsingular.) Then
\begin{equation}\label{twoap}
H^k(V;{\mathbb Z}) \cong H^k( {\mathbb C} P^{n+1};{\mathbb Z}) \ \ \text{for all} \ \ n+s+2\leq k\leq 2n.
\end{equation}
Moreover, if $j:V \hookrightarrow {\mathbb C} P^{n+1}$ denotes the inclusion, the induced cohomology homomorphisms
\begin{equation}\label{threeap}
j^k:H^k( {\mathbb C} P^{n+1};{\mathbb Z}) \longrightarrow H^k(V;{\mathbb Z}), \ \ n+s+2\leq k\leq 2n,
\end{equation}
are given by multiplication by $d$ if $k$ is even.
\end{thm}
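For instance, if $V$ has only isolated singularities ($s=0$) and $n\ge 2$, then \eqref{twoap} and \eqref{threeap} concern the range $n+2\le k\le 2n$; in the simplest case of a surface $V\subset {\mathbb C} P^{3}$ this only involves $k=4$, where
\[
H^4(V;{\mathbb Z})\cong H^4({\mathbb C} P^{3};{\mathbb Z})\cong {\mathbb Z} \quad \mbox{and} \quad j^4 \ \mbox{is multiplication by} \ d.
\]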
\begin{proof}
The statement of the theorem is non-vacuous only if $n\geq s+2$ (otherwise the range in \eqref{twoap} is empty), so in particular we can assume that $V$ is irreducible and hence $H^{2n}(V;{\mathbb Z})\cong {\mathbb Z}$.
Moreover, the fact that $j^{2n}$ is multiplication by $d=\deg(V)$ is true regardless of the dimension of the singular locus, see \cite[(5.2.10)]{Di}.
If $n=s+2$ there is nothing else to prove, so we may assume (without any loss of generality) that $n \geq s+3$.
We next proceed by induction on $s$.
If $V$ is nonsingular (i.e., $s=-1$), the assertions are well-known for any $n \geq 1$; we include a proof for completeness. The isomorphism \eqref{twoap} can be obtained in this case from the Lefschetz isomorphism \eqref{one}, its homology analogue, and Poincar\'e duality. The statement about $j^k$ can also be deduced from \eqref{one} and Poincar\'e duality, but we include here a different argument inspired by \cite{Di}. Consider the isolated singularity at the origin of the affine cone $CV \subset {\mathbb C}^{n+2}$ on $V$, and the corresponding link $L_V:=S^{2n+3} \cap CV$, for $S^{2n+3}$ a small enough sphere centered at the origin in ${\mathbb C}^{n+2}$. Then $L_V$ is an $(n-1)$-connected closed oriented manifold of real dimension $2n+1$, so its only possibly nontrivial integral (co)homology appears in degrees $0$, $n$, $n+1$ and $2n+1$. The Hopf fibration $S^1 \hookrightarrow S^{2n+3} \longrightarrow {\mathbb C} P^{n+1}$ induces by restriction to $CV$ a corresponding Hopf fibration for $V$, namely $S^1 \hookrightarrow L_V \longrightarrow V$. Then, for any $n+1 \leq k \leq 2n-2$, the cohomology Gysin sequences for the diagram of fibrations
$$\xymatrix{
S^{2n+3} \ar[r] & {\mathbb C} P^{n+1}\\
L_V \ar[u] \ar[r] & V \ar[u]
}$$
yield commutative diagrams (with ${\mathbb Z}$-coefficients):
\begin{equation}\label{Gys} \CD
0=H^{k+1}(S^{2n+3}) @>>> H^k({\mathbb C} P^{n+1}) @>{\psi}>{\cong}> H^{k+2}({\mathbb C} P^{n+1}) @>>> H^{k+2}(S^{2n+3})=0\\
@VVV @V {j^k}VV @V {j^{k+2}}VV @VVV \\
0=H^{k+1}(L_V) @>>> H^k(V) @>{\psi_V}>{\cong}> H^{k+2}(V) @>>> H^{k+2}(L_V)=0\\
\endCD \end{equation}
Here, if $k=2\ell$ is even, the isomorphism $\psi$ is the cup product with the cohomology generator $a\in H^2({\mathbb C} P^{n+1};{\mathbb Z})$, and similarly, $\psi_V$ is the cup product with $j^2(a)$. The assertion about $j^k$ follows now from \eqref{Gys} by decreasing induction on $\ell$, using the fact mentioned at the beginning of the proof that $j^{2n}$ is given by multiplication by $d$.
Let us next choose a generic hyperplane $H \subset {\mathbb C} P^{n+1}$ (i.e., $H$ is transversal to a Whitney stratification of $V$), and set as before $V_H=V \cap H$. It then follows from Theorem \ref{thap} and the cohomology long exact sequence of the pair $(V,V_H)$ that $H^{2n-1}(V;{\mathbb Z}) \cong 0$. It therefore remains to prove \eqref{twoap} and the corresponding assertion about $j^k$ for $k$ in the range $n+s+2\leq k\leq 2n-2$.
Let us consider the commuting square
$$\CD
V_H @> {\delta} >> H={\mathbb C} P^n\\
@V {\gamma}VV @VVV\\
V @>> {j} > {\mathbb C} P^{n+1}
\endCD$$
and the induced commutative diagram in cohomology:
\begin{equation}\label{di} \CD
H^k({\mathbb C} P^{n+1};{\mathbb Z}) @> {j^k} >> H^k(V;{\mathbb Z}) \\
@V {\cong}VV @VV{\gamma^k}V\\
H^k({\mathbb C} P^{n};{\mathbb Z}) @>> {\delta^k} > H^k(V_H;{\mathbb Z})
\endCD \end{equation}
By Theorem \ref{thap} and the cohomology long exact sequence of the pair $(V,V_H)$ we get that $\gamma^k$ is an isomorphism for all integers $k$ in the range $n+s+2\leq k\leq 2n-2$. Moreover, since $V_H \subset {\mathbb C} P^n$ is a degree $d$ reduced projective hypersurface with an $(s-1)$-dimensional singular locus (by transversality), the induction hypothesis yields that $H^k(V_H;{\mathbb Z}) \cong H^k({\mathbb C} P^n;{\mathbb Z})$ for $n+s \leq k \leq 2n-2$ and that, in the same range and for $k$ even, the homomorphism
$\delta^k$ is given by multiplication by $d$. The commutativity of the above diagram \eqref{di} then yields \eqref{twoap} for all integers $k$ satisfying $n+s+2\leq k\leq 2n-2$, and the corresponding assertion about the induced homomorphism $j^k$ for $k$ even in the same range. This completes the proof of the theorem.
\end{proof}
\begin{rem}
Let us remark here that the proof of Kato's theorem in \cite[Theorem 5.2.11]{Di} relies on the Kato-Matsumoto result \cite{KM} on the connectivity of the Milnor fiber of the singularity at the origin of the affine cone $CV \subset {\mathbb C}^{n+2}$.
\end{rem}
\begin{rem}\label{Katoh}
One can prove the homological version of Theorem \ref{Kato} in a similar manner, namely by using Corollary \ref{corap} instead of Theorem \ref{thap}. This yields the isomorphisms:
\begin{equation}
H_k(V;{\mathbb Z}) \cong H_k( {\mathbb C} P^{n+1};{\mathbb Z}) \ \ \text{for all} \ \ n+s+2\leq k\leq 2n,
\end{equation}
and the homomorphisms induced by the inclusion $j:V\hookrightarrow {\mathbb C} P^{n+1}$ in homology are given in this range (and for $k$ even) by multiplication by $d=\deg(V)$.
\end{rem}
\begin{rem}
We already noted that Theorem \ref{th1} yields the isomorphism \eqref{two} of Kato's theorem (see Corollary \ref{corgen}(i)). On the other hand, Kato's Theorem \ref{Kato} may be used to obtain a weaker version of Theorem \ref{th1} by more elementary means. Indeed, in the notations from the Introduction consider the diagram:
$$ \CD
H^k({\mathbb C} P^{n+1};{\mathbb Z}) @> {\cong}>> H^k({\mathbb C} P^{n+1} \times D;{\mathbb Z}) @> {b^k} >> H^k(V_D;{\mathbb Z}) @> {c^k} >> H^k(V_t;{\mathbb Z})\\
@. @. @V {\cong}VV @. \\
@. @. H^k(V;{\mathbb Z})
\endCD
$$
and let $a^k:=c^k \circ b^k$. By Theorem \ref{Kato}, we have that:
\begin{itemize}
\item[(i)] $a^k$ is multiplication by $d$ if $k>n$ is even, and an isomorphism for $k<n$;
\item[(ii)] $b^k$ is multiplication by $d$ if $n+s+2\leq k \leq 2n$ ($k$ even), and an isomorphism for $k<n$.
\end{itemize}
Therefore, $c^k$ is an isomorphism if $n+s+2 \leq k \leq 2n$ or $k<n$. The cohomology long exact sequence of the pair $(V_D,V_t)$ then yields that $H^k_{\varphi}(V)\cong H^{k+1}(V_D,V_t;{\mathbb Z})\cong 0$ for all integers $k \notin [n-1,n+s+1]$.
\end{rem}
\begin{thebibliography}{99}
\bibitem{Di0} Dimca, Alexandru, {\it On the homology and cohomology of complete intersections with isolated singularities}, Compositio Math. 58 (1986), no. 3, 321--339.
\bibitem{Di} Dimca, Alexandru, {\it Singularities and Topology of Hypersurfaces}, Universitext, Springer, 1992.
\bibitem{Di1} Dimca, Alexandru, {\it Sheaves in Topology}, Universitext, Springer-Verlag, Berlin, 2004.
\bibitem{Ka} Kato, Mitsuyoshi, {\it Topology of $k$-regular spaces and algebraic sets}, Manifolds -- Tokyo 1973 (Proc. Internat. Conf. on Manifolds and Related Topics in Topology), pp. 153--159. Univ. Tokyo Press, Tokyo, 1975.
\bibitem{KM} Kato, Mitsuyoshi, Matsumoto, Yukio,
{\it On the connectivity of the Milnor fiber of a holomorphic function at a critical point}, Manifolds--Tokyo 1973 (Proc. Internat. Conf., Tokyo, 1973), pp. 131--136. Univ. Tokyo Press, Tokyo, 1975.
\bibitem{Le} L\^e, D\~ung Tr\'ang, {\it Sur les cycles \'evanouissants des espaces analytiques}, C. R. Acad. Sci. Paris S\'er. A-B 288(4), A283--A285 (1979).
\bibitem{Lib} Libgober, Anatoly, {\it Homotopy groups of the complements to singular hypersurfaces, II}, Ann. of Math. (2) 139 (1994), 117--144.
\bibitem{Ma} Massey, David, {\it Natural commuting of vanishing cycles and the Verdier dual}, Pacific J. Math. 284 (2016), no. 2, 431--437.
\bibitem{M0} Maxim, Laurentiu, {\it Intersection homology and Alexander modules of hypersurface complements}, Comment. Math. Helv. 81 (2006), no. 1, 123--155.
\bibitem{M} Maxim, Laurentiu, {\it Intersection Homology \& Perverse Sheaves, with Applications to Singularities}, Graduate Texts in Mathematics, Vol. 281, Springer, 2019.
\bibitem{MPT} Maxim, Laurentiu, Paunescu, Laurentiu, Tibar, Mihai, {\it The vanishing cohomology of non-isolated hypersurface singularities},
arXiv:2007.07064
\bibitem{MSS} Maxim, Laurentiu, Saito, Morihiko, Sch\"urmann, J\"org, {\it Hirzebruch-Milnor classes of complete intersections}, Adv. Math. 241 (2013), 220--245.
\bibitem{Mi} Miller, John L., {\it Homology of complex projective hypersurfaces with isolated singularities}, Proc. Amer. Math. Soc. 56 (1976), 310--312.
\bibitem{PP} Parusi\'nski, Adam, Pragacz, Piotr,
{\it Characteristic classes of hypersurfaces and characteristic cycles},
J. Algebraic Geom. 10 (2001), no. 1, 63--79.
\bibitem{ST0} Siersma, Dirk, Tib\u{a}r, Mihai, {\it Milnor fibre homology via deformation}, Singularities and computer algebra, 305--322, Springer, Cham, 2017.
\bibitem{ST} Siersma, Dirk, Tib\u{a}r, Mihai, {\it Vanishing homology of projective hypersurfaces with $1$-dimensional singularities}, Eur. J. Math. 3 (2017), no. 3, 565--586.
\bibitem{Sc} Sch\"urmann, J\"org, {\it Topology of Singular Spaces and Constructible Sheaves}, Birkh\"auser, Monografie Matematyczne 63, 2003.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
This article deals with the uniqueness and stability issues in the inverse problem of determining the unbounded potential of
the Schr\"odinger operator in a bounded domain of $\mathbb{R}^n$, $n \geq 3$, endowed with Robin boundary condition,
from knowledge of its boundary spectral data.
These data are defined by the pairs formed by the eigenvalues and either partial or full Dirichlet measurement of the eigenfunctions on the boundary of the domain.
\end{abstract}
\maketitle
\section{Introduction}
In the present article $\Omega$ is a $C^{1,1}$ bounded domain of $\mathbb{R}^n$, $n\ge 3$, with boundary $\Gamma$, and we equip the two spaces $H:=L^2(\Omega)$ and $V:=H^1(\Omega)$ with their usual scalar product.
Put $p:=2n/(n+2)$ and let $p^\ast:=2n/(n-2)$ be its conjugate exponent, in such a way that $V$ is continuously embedded in $L^{p^\ast}(\Omega)$.
\subsection{The Robin Laplacian}
\label{sec-RL}
For $\alpha \in L^\infty (\Gamma,\mathbb{R})$ and $q\in L^{n/2} (\Omega,\mathbb{R})$, we introduce the following continuous sesquilinear form $\mathfrak{a} : V\times V\rightarrow \mathbb{C}$
\[
\mathfrak{a}(u,v)=\int_\Omega \nabla u\cdot \nabla \overline{v}dx+\int_\Omega qu\overline{v}dx+\int_\Gamma \alpha u\overline{v}ds(x),\quad u,v\in V.
\]
Throughout the entire text, we assume that $\alpha \ge -\mathfrak{c}$ for some constant $\mathfrak{c} \in (0, \mathfrak{n}^{-2})$ almost everywhere on $\Gamma$, where $\mathfrak{n}$ denotes the norm of the (bounded) trace operator $u\in V\mapsto u_{|\Gamma}\in L^2(\Gamma)$.
Set
\[
\mathrm{Q}(\rho,\aleph):=\{q\in L^\rho(\Omega,\mathbb{R});\; \|q\|_{L^\rho(\Omega)}\le \aleph\},\quad \rho \ge n/2,\; \aleph >0.
\]
Then, arguing as in the derivation of \cite[Lemma A2]{Po}, we obtain that
\begin{equation}\label{ii1}
\|qu^2\|_{L^1(\Omega)}\le \epsilon \|u\|_V^2+C_\epsilon\|u\|_H^2,\quad q\in \mathrm{Q}(n/2,\aleph),\; u\in V,\; \epsilon >0,
\end{equation}
for some constant $C_\epsilon>0$ depending only on $n$, $\Omega$, $\aleph$ and $\epsilon$.
Further, applying \eqref{ii1} with $\epsilon =\kappa:=(1-\mathfrak{c}\mathfrak{n}^2)/2$ yields
\begin{equation}\label{co}
\mathfrak{a}(u,u)+ \lambda ^\ast\|u\|_H^2\ge \kappa \|u\|_V^2,\quad u\in V,
\end{equation}
where $\lambda^\ast>0$ is a constant which depends only on $n$, $\Omega$, $\mathfrak{c}$ and $\aleph$.
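For the reader's convenience, here is a short derivation of \eqref{co}, one admissible choice being $\lambda^\ast=1+C_\kappa$: using $\|\nabla u\|_H^2=\|u\|_V^2-\|u\|_H^2$, the lower bound $\alpha \ge -\mathfrak{c}$ on $\Gamma$ and \eqref{ii1} with $\epsilon=\kappa$, we get
\[
\mathfrak{a}(u,u)\ge \|u\|_V^2-\|u\|_H^2-\left(\kappa \|u\|_V^2+C_\kappa\|u\|_H^2\right)-\mathfrak{c}\mathfrak{n}^2\|u\|_V^2=\kappa\|u\|_V^2-(1+C_\kappa)\|u\|_H^2,
\]
since $1-\kappa-\mathfrak{c}\mathfrak{n}^2=\kappa$.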
Let us consider the bounded operator $A:V\rightarrow V^\ast$ defined by
\[
\langle Au,v\rangle=\mathfrak{a}(u,v),\quad u,v\in V,
\]
where $\langle \cdot,\cdot \rangle$ denotes the duality pairing between an arbitrary Banach space and its dual.
Notice that $A$ is self-adjoint and coercive according to \eqref{co}.
\subsection{Boundary spectral data}
\label{sec-BSD}
With reference to \cite[Theorem 2.37]{Mc}, the spectrum of $A$ consists of its eigenvalues
$\lambda_k$, $k \in \mathbb{N}:=\{1,2,\ldots \}$, arranged in non-decreasing order and repeated according to their (finite) multiplicities,
\[
-\infty <\lambda_1\le \lambda_2\le \ldots \le \lambda_k\le \ldots, \quad \mbox{and\ such\ that}\quad \lim_{k \to \infty}\lambda_k = \infty.
\]
Moreover, there exists an orthonormal basis $\{ \phi_k,\ k \in \mathbb{N} \}$ of $H$, made of eigenfunctions $\phi_k\in V$ of $A$, satisfying
\[
\mathfrak{a}(\phi_k,v)=\lambda_k(\phi_k,v),\quad v\in V,\quad k \in \mathbb{N},
\]
where $(\cdot,\cdot)$ is the usual scalar product in $H$. For the sake of shortness, we write
\[
\psi_k:=\phi_k{_{|\Gamma}},\quad k \in \mathbb{N}.
\]
Recall that for $u\in V$, we have $\Delta u\in H^{-1}(\Omega)$, the space dual to $H_0^1(\Omega)$, but that it is not guaranteed that $\Delta u$ lies in $V^\ast$ (which is strictly embedded in $H^{-1}(\Omega)$). Thus, we introduce
\[
W:=\{u\in V; \Delta u\in V^\ast\}.
\]
Endowed with its natural norm
\[
\|u\|_{W}=\|u\|_V+\|\Delta u\|_{V^\ast},\quad u\in W,
\]
the space $W$ is a Banach space.
Next, for $\varphi \in H^{1/2}(\Gamma)$, we set
\[
\dot{\varphi}:=\{v\in V;\; v_{|\Gamma}=\varphi\},
\]
and we equip the space $H^{1/2}(\Gamma)$ with the quotient norm
\[
\|\varphi\|_{H^{1/2}(\Gamma)}=\min\{\|v\|_V;\; v\in \dot{\varphi}\}.
\]
Now, for $u\in W$ fixed, we put
\[
\Phi_u (v):=\langle \Delta u , v\rangle+(\nabla u,\nabla v),\quad v\in V,
\]
apply the Cauchy-Schwarz inequality, and get that
\begin{equation}\label{0.1}
|\Phi_u (v)|\le \|\Delta u\|_{V^\ast}\|v\|_V+\|u\|_V\|v\|_V \leq \|u\|_W\|v\|_V.
\end{equation}
Moreover, since $C_0^\infty (\Omega)$ is dense in $H_0^1(\Omega)$, it is easy to see that $H_0^1(\Omega)\subset \ker \Phi_u$ and consequently that $\Phi_u (v)$ depends only on $v_{|\Gamma}$. This enables us to define the normal derivative of $u$, denoted by $\partial_\nu u$, as the unique vector in $H^{-1/2}(\Gamma)$ satisfying
\[
\langle \partial_\nu u , \varphi\rangle =\Phi_u (v),\quad v \in \dot{\varphi}\hskip.2cm \mbox{is arbitrary}.
\]
As a consequence we have
\[
\|\partial_\nu u\|_{H^{-1/2}(\Gamma)}\le \|u\|_W,
\]
by \eqref{0.1}, and the following generalized Green formula:
\begin{equation}
\label{ggf}
\langle\Delta u , v\rangle+(\nabla u,\nabla v)=\langle \partial_\nu u , v_{|\Gamma}\rangle,\quad u\in W,\; v\in V.
\end{equation}
Pick $f\in V^\ast$ and $\mu \in \mathbb{C}$, and let $u\in V$ satisfy
\begin{equation}\label{vf}
\mathfrak{a}(u,v)+\mu(u,v)=\langle f , v\rangle ,\quad v\in V.
\end{equation}
Using that $C_0^\infty(\Omega) \subset V$, we obtain that
\[
\int_\Omega \nabla u\cdot \nabla \overline{v}dx +\int_\Omega qu\overline{v}dx+\mu\int_\Omega u\overline{v}dx=\langle f , v \rangle,\quad v\in C_0^\infty (\Omega),
\]
which yields $-\Delta u+qu+\mu u=f$ in $\mathscr{D}'(\Omega)$. Thus, bearing in mind that $qu \in V^\ast$, we have $u\in W$, and the generalized Green formula \eqref{ggf} provides
\[
\langle \partial_\nu u+\alpha u_{|\Gamma} , v_{|\Gamma}\rangle =0,\quad v\in V.
\]
Since $v\in V\mapsto v_{|\Gamma}\in H^{1/2}(\Gamma)$ is surjective, the above line reads $\partial_\nu u+\alpha u_{|\Gamma}=0$, showing that \eqref{vf} is the variational formulation of the following boundary value problem (BVP):
\[
(-\Delta +q+\mu)u=f\; \mathrm{in}\; \Omega,\quad \partial_\nu u+\alpha u_{|\Gamma}=0\; \mathrm{on}\; \Gamma.
\]
Thus, taking $\mu=\lambda_k$ for all $k \in \mathbb{N}$, we find that $\phi_k\in W$ satisfies
\begin{equation}\label{ee}
(-\Delta +q-\lambda_k)\phi_k=0\; \mathrm{in}\; \Omega,\quad \partial_\nu \phi_k+\alpha \phi_k{_{|\Gamma}}=0\; \mathrm{on}\; \Gamma.
\end{equation}
\subsection{Statement of the results}
We stick to the notations of the previous sections, that is to say that we write $\tilde{\lambda}_k$ (resp., $\tilde{\phi}_k$, $\tilde{\psi}_k$), $k \in \mathbb{N}$, instead of $\lambda_k$ (resp., $\phi_k$, $\psi_k$) when the potential $\tilde{q}$ is substituted for $q$.
Our first result is as follows.
\begin{theorem}\label{theorem1}
Let $q$ and $\tilde{q}$ be in $L^{r}(\Omega,\mathbb{R})$, where $r=n/2$ when $n \ge 4$ and $r >n/2$ when $n=3$, and let $\ell \in \mathbb{N}$. Then, the conditions
\[
\lambda_k=\tilde{\lambda}_k\ \mbox{for\ all}\ k \ge \ell\quad \mbox{and}\quad \psi_k=\tilde{\psi}_k\ \mbox{on}\ \Gamma\ \mbox{for\ all}\ k\ge 1,
\]
yield that $q=\tilde{q}$ in $\Omega$.
\end{theorem}
The claim of Theorem \ref{theorem1} was first established for smooth bounded potentials, in the peculiar case where $\ell=1$, by Nachman, Sylvester and Uhlmann in \cite{NSU}. In the same context (of smooth bounded potentials), their result was extended to $\ell \geq 1$ through a heuristic approach in \cite{Sm}.
In view of stating our stability results, we denote by $\ell^\infty$ (resp., $\ell^2$) the Banach (resp., Hilbert) space of bounded (resp., square-summable) sequences of complex numbers $(z_k)$
, equipped with the norm
\[
\|(z_k)\|_{\ell^\infty}:=\sup_{k\ge 1}|z_k|\ \left({\rm resp.,}\ \|(z_k)\|_{\ell^2}:=\left(\sum_{k\ge 1}|z_k|^2\right)^{1/2}\right),
\]
and let
\[
\ell^2(L^2(\Gamma)):=\left\{ (w_k) \in L^2(\Gamma)^{\mathbb N}\; \mbox{such that}\; (\|w_k\|_{L^2(\Gamma)})\in \ell^2 \right\}
\]
be endowed with its natural norm
\[
\|(w_k)\|_{\ell^2(L^2(\Gamma))}:=\|(\|w_k\|_{L^2(\Gamma)})\|_{\ell^2}.
\]
\begin{theorem}\label{theorem2}
Fix $\aleph \in (0,\infty)$ and let
$(q,\tilde{q})\in \mathrm{Q}(r,\aleph)^2$, where $r=n/2$ when $n \ge 4$ and $r >n/2$ when $n = 3$, satisfy $q-\tilde{q} \in L^2(\Omega)$. Assume that $(\lambda_k-\tilde{\lambda}_k)\in \ell^\infty$ fulfills $\|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}\le \aleph$ and that $(\psi_k-\tilde{\psi}_k)\in \ell^2(L^2(\Gamma))$.
Then, we have
\[
\|q-\tilde{q}\|_{H^{-1}(\Omega)}\le C\left( \|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}+ \|(\psi_k-\tilde{\psi}_k)\|_{\ell^2(L^2(\Gamma))} \right)^{2(1-2\beta)/(3(n+2))},
\]
where $\beta:=\max \left( 0,n(2-r)/(2r) \right)$ and $C$ is a positive constant depending only on $n$, $\Omega$, $\aleph$ and $\mathfrak{c}$.
\end{theorem}
\begin{remark}\label{remark1}
{\rm
(i) It is worth noticing that we have $\beta=0$ when $n \ge 4$, whereas $\beta \in [0,1/2)$ when $n=3$. Moreover, in the latter case we see that $\beta$ converges to $1/2$ (resp., $0$) as $r$ approaches $3/2$ (resp., $2$) from above (resp., below). \\
(ii) We have $q-\tilde{q} \in L^2(\Omega)$ for all $(q,\tilde{q}) \in \mathrm{Q}(n/2,\aleph)^2$, provided that $n \ge 4$. Nevertheless, this is no longer true when $n=3$, even if $(q,\tilde{q})$ is taken in $\mathrm{Q}(r,\aleph)^2$ with $r \in (n/2,2)$. Hence the additional requirement of Theorem \ref{theorem2} that $q-\tilde{q} \in L^2(\Omega)$ in the three-dimensional case.\\
(iii) When $q-\tilde{q} \in L^\infty(\Omega)$, we have $(\lambda_k-\tilde{\lambda}_k)\in \ell^\infty$ and $\|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}\le \|q-\tilde{q}\|_{L^\infty(\Omega)}$, by the min-max principle. Thus, Theorem \ref{theorem2} remains valid by replacing the condition $\|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}\le \aleph$ by the stronger assumption $\|q-\tilde{q}\|_{L^\infty(\Omega)}\le \aleph$.
}
\end{remark}
To the best of our knowledge, there is no comparable stability result available in the mathematical literature for Robin boundary conditions, even when the potentials are assumed to be bounded. Nevertheless, it should be pointed out that the variable coefficients case was recently addressed by \cite{BCKPS} in the framework of Dirichlet boundary conditions.
Further downsizing the data needed for retrieving the unknown potential, we seek a stability inequality requiring only a local Dirichlet boundary measurement of the eigenfunctions, i.e. boundary observation of the $\psi_k$'s and $\tilde{\psi}_k$'s performed on a strict subset of $\Gamma$. For this purpose we consider a subdomain $\Omega_0$ of $\Omega$ such that $\overline{\Omega}_0$ is a neighborhood of $\Gamma$ in $\overline{\Omega}$, a fixed nonempty open subset $\Gamma_{\ast}$ of $\Gamma$, and for all $\vartheta \in (0,\infty)$ we introduce the function $\Phi_\vartheta : [0,\infty) \to \mathbb{R}$ as
\begin{equation}
\label{def-Phi}
\Phi_\vartheta(t):=
\left\{
\begin{array}{cl}
0 & \mbox{if}\ t=0
\\
|\ln t|^{-\vartheta} & \mbox{if}\ t \in (0,1/e)
\\
t & \mbox{if}\ t \in [1/e,\infty).
\end{array}
\right.
\end{equation}
The corresponding local stability estimate can be stated as follows.
\begin{theorem}
\label{theorem3}
For $\aleph \in (0,\infty)$ fixed, let $(q,\tilde{q}) \in \mathrm{Q}(n,\aleph)^2$ satisfy $q=\tilde{q}$ on $\Omega_0$.
Assume that $\alpha \in C^{0,1}(\Gamma)$, and suppose that $(\lambda_k-\tilde{\lambda}_k)\in \ell^\infty$ and that $(k^{\mathfrak{t}}(\psi_k-\tilde{\psi}_k))\in \ell^2(L^2(\Gamma))$ for some $\mathfrak{t}>4/n+1$, with
\[
\|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}\le \aleph,\quad \|(k^\mathfrak{t}(\psi_k-\tilde{\psi}_k))\|_{\ell^2(L^2(\Gamma))}\le \aleph.
\]
Then there exist two constants $C>0$ and $\vartheta>0$, both of them depending only on $n$, $\Omega$, $\Omega_0$, $\Gamma_{\ast}$, $\aleph$, $\mathfrak{c}$ and $\|\alpha\|_{C^{0,1}(\Gamma)}$, such that we have:
\begin{equation}
\label{thm3}
\|q-\tilde{q}\|_{H^{-1}(\Omega)}\le C\Phi_\vartheta\left( \|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}+ \|(k^{-\mathfrak{t}+2/n}(\psi_k-\tilde{\psi}_k))\|_{\ell^2(H^1(\Gamma_{\ast}))} \right).
\end{equation}
\end{theorem}
\begin{remark}\label{remark2}
{\rm
Bearing in mind that the $k$-th eigenvalue, $k \ge 1$, of the unperturbed Laplacian (i.e. the operator $A$ associated with $q=0$ in $\Omega$ and $\alpha=0$ on $\Gamma$) scales like $k^{2/n}$ when $k$ becomes large, see e.g. \cite[Theorem III.36 and Remark III.37]{Be}, we obtain by combining the min-max principle with \eqref{ii1}, that for all
$q\in \mathrm{Q}(n,\aleph)$,
\begin{equation}\label{waf}
C^{-1}k^{2/n}\le 1+|\lambda_k|\le Ck^{2/n},\ k\ge 1,
\end{equation}
where $C \in (1,\infty)$ is a constant depending only on $n$, $\Omega$, $\mathfrak{c}$ and $\aleph$.
In light of Lemma \ref{lemma5.0} below, which establishes the $H^2$-regularity of the eigenfunctions $\phi_k$, $k \ge 1$, of $A$, together with the energy estimate \eqref{5.1}, it follows from \eqref{waf} that $(k^{-\mathfrak{t}+2/n}\psi_k)\in \ell^2(H^1(\Gamma))$. Therefore, we have $\|(k^{-\mathfrak{t}+2/n}(\psi_k-\tilde{\psi}_k))\|_{\ell^2(H^1(\Gamma_{\ast}))}<\infty$ on the right hand side of \eqref{thm3}.
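More precisely (a quick verification for the reader, relying on the continuity of the trace operator $H^2(\Omega)\to H^1(\Gamma)$): by \eqref{5.1} and \eqref{waf},
\[
\|\psi_k\|_{H^1(\Gamma)}\le C\|\phi_k\|_{H^2(\Omega)}\le C(1+|\lambda_k|)\le Ck^{2/n},\quad k\ge 1,
\]
so that $k^{-\mathfrak{t}+2/n}\|\psi_k\|_{H^1(\Gamma)}\le Ck^{-\mathfrak{t}+4/n}$, which is square-summable since $\mathfrak{t}>4/n+1>4/n+1/2$.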
}
\end{remark}
\subsection{A short bibliography of the existing literature}
\label{sec-literature}
The first published uniqueness result for the multidimensional Borg-Levinson problem can be found in \cite{NSU}. The breakthrough idea of the authors of this article was to relate the inverse spectral problem under analysis to the one of determining the bounded potential from the corresponding elliptic Dirichlet-to-Neumann map. This can be understood from the fact that the Schwartz kernel of the elliptic Dirichlet-to-Neumann operator can be, at least heuristically, fully expressed in terms of the eigenvalues and the normal derivatives of the eigenfunctions. Later on, \cite{Is} proved that the result of \cite{NSU}, which assumes complete knowledge of the boundary spectral data, remains valid when finitely many of them remain unknown.
The stability issue for multidimensional Borg-Levinson type problems was first examined in \cite{AS}. The authors proceed by
relating the spectral data to the corresponding hyperbolic Dirichlet-to-Neumann operator, which stably determines the bounded electric potential. We refer the reader to \cite{BCY1,BCY2,BD} for alternative inverse stability results based on this approach.
In all the aforementioned results, the number of unknown spectral data is at most finite (that is to say that the data are either complete or incomplete). Nevertheless, it was proved in
\cite{CS} that asymptotic knowledge of the boundary spectral data is enough to H\"older stably retrieve the bounded potential. This result was improved in
\cite{KKS,So} by removing all quantitative information on the eigenfunctions of the stability inequality, at the expense of an additional summability condition on their boundary measurements.
In all the articles cited above in this section, the unknown potential is supposed to be bounded. The unique determination of unbounded potentials by either complete or incomplete boundary spectral data is discussed in \cite{PS, Po}, whereas the stability issue for the same problem, but in the variable coefficients case, is examined in \cite{BCKPS}. As for the treatment of the inverse problem of determining the unbounded potential from asymptotic knowledge of the spectral data, we refer the reader to \cite{BKMS} for the uniqueness issue, and to \cite{KS}
for the stability issue.
All the above mentioned results were obtained for multidimensional Laplace operators endowed with Dirichlet boundary conditions, except for \cite{NSU}, which proved that full knowledge of the boundary spectral data of the Robin Laplacian uniquely determines the unknown electric potential. But, apart from the claim, based on a heuristic approach, of \cite{Sm}, that incomplete knowledge of the spectral data of the multidimensional Robin Laplacian uniquely determines the unknown bounded potential, it seems that, even for a bounded unknown potential $q$, there is no reconstruction result of $q$ by incomplete spectral data available in the mathematical literature for such operators. In the present article we prove not only unique identification by incomplete spectral data, but also stable determination by either full or local boundary spectral data, of the singular potential of the multidimensional Robin Laplacian.
\subsection{Outline}
The remaining part of this paper is structured as follows.
In Section \ref{sec-pre} we gather several technical results which are needed by the proof of the three main results of this article. Then we proceed with the proof of
Theorems \ref{theorem1}, \ref{theorem2} and \ref{theorem3} in Section \ref{sec-proof}.
\section{Preliminaries}
\label{sec-pre}
In this section we collect several preliminary results that are needed by the proof of the main results of this article. We start by noticing, upon applying \eqref{co} with $u=\phi_k$, $k\ge 1$, that
\begin{equation}\label{lb}
\lambda_k>-\lambda^\ast,\quad k\ge 1.
\end{equation}
\subsection{Resolvent estimates}
By \cite[Corollary 2.39]{Mc}, the operator $A-\lambda : V \to V^\ast$ has a bounded inverse whenever $\lambda \in \rho (A):=\mathbb{C}\setminus \sigma(A)$, the resolvent set of $A$. Furthermore, for all $f \in V^\ast$ we have
\begin{equation}\label{rf}
(A-\lambda)^{-1}f=\sum_{k\ge 1} \frac{\langle f , \phi_k \rangle}{\lambda_k-\lambda}\phi_k,
\end{equation}
where the series converges in $V$.
For further use, we now establish that the resolvent $(A-\lambda)^{-1}$ may be regarded as a bounded operator from $H$ into the space $K:=\{u\in H;\; Au \in H\}$ endowed with the norm
\[
\|u\|_K:=\|u\|_H+\|Au\|_H,\quad u\in K.
\]
\begin{lemma}
\label{lemma1}
For all $\lambda \in \rho(A)$, the operator $(A-\lambda)^{-1}$ is bounded from $H$ into $K$.
\end{lemma}
\begin{proof}
Put $u:=(A-\lambda)^{-1}f$ where $f\in H$ is fixed. Then, we have $(u , \phi_k )=(f,\phi_k)/(\lambda_k-\lambda)$ for all $k \ge 1$, from \eqref{rf}, whence
\begin{equation}
\label{es8}
Au=\sum_{k\ge1} \frac{\lambda_k}{\lambda_k-\lambda} (f,\phi_k) \phi_k,
\end{equation}
according to \cite[Theorem 2.37]{Mc}, the series being convergent in $V^\ast$.
Moreover, since
$$\sum_{k\ge 1} \frac{\lambda_k^2}{|\lambda_k-\lambda|^2} | (f,\phi_k) |^2 \le \| (\lambda_k / (\lambda_k-\lambda)) \|_{\ell^\infty}^2 \| f \|_H^2<\infty, $$
by the Parseval theorem, the right hand side of \eqref{es8} lies in $H$. Therefore, we have $Au\in H$ and $\| A u \|_H \le \| (\lambda_k / (\lambda_k-\lambda)) \|_{\ell^\infty} \| f \|_H$, and consequently
$u \in K$ and $\| u \|_K \le \| ((1+\lambda_k) / (\lambda_k-\lambda)) \|_{\ell^\infty} \| f \|_H$.
\end{proof}
\begin{proposition}\label{proposition1}
Let $q\in \mathrm{Q}(n/2,\aleph)$ and let $\lambda \in \rho(A)$. Then, for all $f \in V^\ast$, the following estimate
\begin{equation}
\label{re2}
\|(A-\lambda)^{-1}f\|_V \le C \|((\lambda_k+\lambda^\ast)/(\lambda_k-\lambda))\|_{\ell^\infty} \|f\|_{V^\ast}
\end{equation}
holds with $C=\kappa^{-1/2} \| (A+\lambda^\ast)^{-1}\|_{\mathcal{B}(V^\ast,V)}$, where $\mathcal{B}(V^\ast,V)$ denotes the space of linear bounded operators from $V^\ast$ to $V$.
Moreover, in the special case where $f \in H$, we have
\begin{equation}
\label{re1}
\|(A-\lambda)^{-1}f\|_H\le \|(1/(\lambda_k-\lambda))\|_{\ell^\infty}\|f\|_H.
\end{equation}
\end{proposition}
\begin{proof}
Since \eqref{re1} follows directly from \eqref{rf} and the Parseval formula, it is enough to prove \eqref{re2}. To this purpose we set $u:=(A-\lambda)^{-1} f$ and notice from the obvious identity $\Delta u = (q-\lambda) u - f \in V^\ast$ that $u \in W$. Therefore, by applying \eqref{ggf} with $v=u$, we infer from the coercivity estimate \eqref{co} that
\begin{equation}
\label{es7}
\kappa \|u\|_V^2\le \langle (A+\lambda^\ast) u , u \rangle_{V^\ast,V}.
\end{equation}
Let us assume for a while that $f \in H$. Then, with reference to \eqref{es8}, we have
$$ (A+\lambda^\ast) u = \sum_{k \ge 1} \frac{\lambda_k+\lambda^\ast}{\lambda_k-\lambda} (f,\phi_k) \phi_k, $$
where the series converges in $H$. It follows from this, \eqref{rf} and \eqref{es7} that
\begin{eqnarray}
\label{es9}
\kappa \|u\|_V^2 & \le & \sum_{k \ge 1} \frac{\lambda_k+\lambda^\ast}{|\lambda_k-\lambda|^2} |(f,\phi_k)|^2 \\
& \le & \| ( (\lambda_k+\lambda^\ast) / (\lambda_k-\lambda) ) \|_{\ell^\infty}^2
\sum_{k \ge 1} \frac{|(f,\phi_k)|^2}{\lambda_k+\lambda^\ast}. \nonumber
\end{eqnarray}
Further, taking into account that
$$ \sum_{k \ge 1} \frac{|(f,\phi_k)|^2}{\lambda_k+\lambda^\ast} = \| (A+\lambda^\ast)^{-1} f \|_H^2$$
according to \eqref{rf} and the Parseval formula, and then using that $\| (A+\lambda^\ast)^{-1} f \|_H \le \| (A+\lambda^\ast)^{-1}\|_{\mathcal{B}(V^\ast,V)} \| f \|_{V^\ast}$, we infer from \eqref{es9} that
\begin{equation}
\label{es9b}
\|u\|_V \leq \kappa^{-1/2} \| (A+\lambda^\ast)^{-1}\|_{\mathcal{B}(V^\ast,V)} \| ( (\lambda_k+\lambda^\ast) / (\lambda_k-\lambda) ) \|_{\ell^\infty}\| f \|_{V^\ast}.
\end{equation}
Finally, keeping in mind that $u=(A-\lambda)^{-1} f$ and that $(A-\lambda)^{-1} \in \mathcal{B}(V^\ast,V)$,
\eqref{re2} follows readily from \eqref{es9b} by density of $H$ in $V^\ast$.
\end{proof}
As a byproduct of Proposition \ref{proposition1}, we have the following:
\begin{corollary}\label{corollary1}
Let $q\in \mathrm{Q}(n/2,\aleph)$. Then, for all $\tau \in [1,+\infty)$ we have
\begin{equation}
\label{re4}
\|(A-(\tau+i)^2)^{-1}f\|_H\le (2\tau)^{-1}\|f\|_H,\ f \in H.
\end{equation}
Moreover, for all $\tau \ge \tau_\ast:=1+(\max(0,2-\lambda^\ast))^{1/2}$, we have
\begin{equation}
\label{re5}
\|(A-(\tau+i)^2)^{-1}f\|_V \le C (\tau+\lambda^\ast) \|f\|_{V^\ast},\ f \in V^{\ast},
\end{equation}
where $C$ is the same constant as in \eqref{re2}.
\end{corollary}
\begin{proof}
As \eqref{re4} is a straightforward consequence of \eqref{re1}, we shall only prove \eqref{re5}. To do that, we refer to \eqref{re2} and notice that
\begin{equation}
\label{es10}
\frac{\lambda_k+\lambda^\ast}{|\lambda_k-(\tau+i)^2|}= \frac{\lambda_k+\lambda^\ast}{\left( (\lambda_k-(\tau^2-1))^2+4\tau^2 \right)^{1/2}}
\leq 2 \Theta(\lambda_k),\ k \geq 1,
\end{equation}
where we have set
$\Theta(t):=(t+\lambda^\ast)/ (|t-(\tau^2-1)|+ 2\tau)$ for all $t \in [-\lambda^\ast,\infty)$.
Further, taking into account that $\Theta$ is a decreasing function on $[\tau^2-1,\infty)$, provided that $\tau \geq \tau_\ast$, we easily get that
$$ \sup_{t \in [-\lambda^\ast,+\infty)} \Theta(t) \le \frac{\tau^2-1+\lambda^\ast}{2 \tau} \le \frac{\tau+\lambda^\ast}{2}, $$
which along with \eqref{re2} and \eqref{es10}, yields \eqref{re5}.
\end{proof}
\begin{proposition}\label{proposition2}
Let $q \in \mathrm{Q}(n/2,\aleph)$. Then, there exists a constant $C>0$, depending only on
$n$, $\Omega$, $\mathfrak{c}$ and $\aleph$, such that for all $\sigma \in [0,1]$ and all $f \in L^{p_\sigma}(\Omega)$, we have
\begin{equation}\label{re7}
\|(A-(\tau+i)^2)^{-1}f\|_{L^{p_\sigma^\ast}(\Omega)}\le C\tau ^{-1+2\sigma}\|f\|_{L^{p_\sigma}(\Omega)},\ \tau \in [\tau_\ast,\infty),
\end{equation}
where $p_\sigma:=2n / (n+2 \sigma)$ and $p_\sigma^\ast:=2n / (n-2 \sigma)$ is the conjugate exponent of $p_\sigma$.
\end{proposition}
\begin{proof}
In light of \eqref{re5}, we have for all $f \in L^p(\Omega)$,
$$
\| (A-(\tau+i)^2)^{-1} f \|_{L^{p^\ast}(\Omega)} \le C \tau \| f \|_{L^p(\Omega)},\ \tau \in [\tau_\ast,\infty),
$$
by the Sobolev embedding theorem, where $C$ is a positive constant depending only on
$n$, $\Omega$, $\mathfrak{c}$ and $\aleph$.
Thus, \eqref{re7} follows from this and \eqref{re4} by interpolating between $H=L^{p_0}(\Omega)$ and $L^p(\Omega)=L^{p_1}(\Omega)$ with the aid of the Riesz-Thorin theorem (see, e.g., \cite[Theorem IX.17]{RS2}).
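For completeness, let us record the exponent bookkeeping behind this interpolation (a direct check, not part of the argument above): interpolating the $H\to H$ bound \eqref{re4}, with weight $1-\sigma$, against the $L^p(\Omega)\to L^{p^\ast}(\Omega)$ bound above, with weight $\sigma$, gives
\[
\frac{1}{p_\sigma}=\frac{1-\sigma}{2}+\frac{\sigma}{p}=\frac{n+2\sigma}{2n},\qquad
\frac{1}{p_\sigma^\ast}=\frac{1-\sigma}{2}+\frac{\sigma}{p^\ast}=\frac{n-2\sigma}{2n},\qquad
(2\tau)^{-(1-\sigma)}(C\tau)^{\sigma}\le C'\tau^{-1+2\sigma},
\]
which is exactly the content of \eqref{re7}.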
\end{proof}
\subsection{Asymptotic spectral analysis}
Set $\mathfrak{H}:=H^2(\Omega)$ if $n\ne 4$ and put $\mathfrak{H}:=H^{2+\epsilon}(\Omega)$ for some arbitrary $\epsilon >0$, if $n=4$. We notice that $\mathfrak{H} \subset L^\infty(\Omega)$ and that the embedding is continuous, provided that $n=3$ or $n=4$, while $\mathfrak{H}$ is continuously embedded in
$L^{2n/(n-4)}(\Omega)$ when $n>4$. The main purpose for bringing $\mathfrak{H}$ into the analysis here is the following useful property: $fu\in H$
whenever $f\in L^{\max(2,n/2)}(\Omega)$ and $u\in \mathfrak{H}$.
Next we introduce the subspace
\[
\mathfrak{h}:=\{ g=\partial_\nu G +\alpha G_{|\Gamma};\; G\in \mathfrak{H}\}
\]
of $L^2(\Gamma)$, equipped with its natural quotient norm
\[
\|g\|_{\mathfrak{h}}:=\min\{ \|G\|_{\mathfrak{H}};\; G\in \dot{g}\},\quad g\in \mathfrak{h},
\]
where
\[
\dot{g}:=\{ G\in \mathfrak{H};\; \partial_\nu G+\alpha G_{|\Gamma}=g\},\quad g\in \mathfrak{h},
\]
and we consider the non homogenous BVP:
\begin{equation}\label{bvp1}
(-\Delta +q-\lambda )u=0\; \mathrm{in}\; \Omega ,\quad \partial_\nu u+\alpha u_{|\Gamma} =g\; \mathrm{on}\; \Gamma .
\end{equation}
We first examine the well-posedness of \eqref{bvp1}.
\begin{lemma}\label{lemma2}
Let $\lambda\in \rho(A)$ and let $g\in \mathfrak{h}$. Then, the function
\begin{equation}\label{sol1}
u_\lambda (g):=(A-\lambda )^{-1}(\Delta -q+\lambda)G+G
\end{equation}
is independent of $G\in \dot{g}$. Moreover, $u_\lambda (g)\in W$ is the unique solution to \eqref{bvp1} and is expressed as
\begin{equation}\label{rep1}
u_\lambda (g)=\sum_{k\ge 1}\frac{(g,\psi_k)}{\lambda_k-\lambda}\phi_k
\end{equation}
in $H$, where $(\cdot,\cdot)$ denotes the usual scalar product in $L^2(\Gamma)$.
\end{lemma}
\begin{proof}
Since $G\in \mathfrak{H}$, it is clear that $(\Delta -q+\lambda)G\in H$. Thus, the right hand side of \eqref{sol1} lies in $W$ and it is obviously a solution to the BVP \eqref{bvp1}. Moreover, $\lambda$ being taken in the resolvent set of $A$, this solution is unique.
Further, for all $G_1$ and $G_2$ in $\dot{g}$, it is easy to check that $\partial_\nu (G_1-G_2) + \alpha (G_1-G_2)=0$ on $\Gamma$ and that $(A-\lambda )^{-1}(\Delta -q+\lambda)(G_1-G_2)=-(G_1-G_2)$ in $\Omega$. Therefore,
the function $u_\lambda (g)$ given by \eqref{sol1}, is independent of $G\in \dot{g}$.
We turn now to showing \eqref{rep1}. To do that we apply the generalized Green formula \eqref{ggf} with $u=u_\lambda(g)$ and $v=\phi_k$, $k \geq 1$. We obtain
\[
\langle \Delta u_\lambda(g) , \phi_k\rangle+(\nabla u_\lambda(g),\nabla \phi_k)=\langle \partial_\nu u_\lambda(g) , \psi_k\rangle,
\]
which may be equivalently rewritten as
\begin{equation}\label{2.1}
((q-\lambda) u_\lambda(g),\phi_k)+(\nabla u_\lambda(g),\nabla \phi_k)=\langle g-\alpha u_\lambda(g)_{|\Gamma} ,\psi_k\rangle.
\end{equation}
Doing the same with $u=\phi_k$ and $v=u_\lambda(g)$, and taking the conjugate of both sides of the obtained equality, we find that
$$( u_\lambda(g), (q-\lambda_k) \phi_k)+( \nabla u_\lambda(g), \nabla \phi_k)=-\langle u_\lambda(g)_{|\Gamma} , \alpha \psi_k \rangle.
$$
Bearing in mind that $q$ and $\alpha$ are real-valued, and that $\lambda_k \in \mathbb{R}$, this entails that
\begin{equation}\label{2.2}
((q-\lambda_k)u_\lambda(g),\phi_k)+(\nabla u_\lambda(g),\nabla \phi_k)=-\langle \alpha u_\lambda(g)_{|\Gamma} , \psi_k\rangle.
\end{equation}
Now, taking the difference of \eqref{2.1} with \eqref{2.2}, we end up getting that
\[
(\lambda_k-\lambda )(u_\lambda(g),\phi_k)=\langle g , \psi_k \rangle=(g,\psi_k).
\]
This and the basic identity
\[
u_\lambda(g)=\sum_{k\ge 1}(u_\lambda(g),\phi_k)\phi_k
\]
yield \eqref{rep1}.
\end{proof}
The series on the right hand side of \eqref{rep1} converges only in $H$ and thus we cannot deduce an expression of the trace $u_\lambda(g)_{| \Gamma}$ in terms of $\lambda_k$ and $\psi_k$, $k \geq 1$, directly from \eqref{rep1}. To circumvent this difficulty we
establish the following lemma:
\begin{lemma}\label{lemma4}
Let $g\in \mathfrak{h}$. Then, for all $\lambda$ and $\mu$ in $\rho(A)$, we have
\begin{equation}\label{s2}
u_\lambda(g){_{|\Gamma}} -u_\mu(g){_{|\Gamma}}=(\lambda-\mu)\sum_{k\ge 1}\frac{(g,\psi_k)}{(\lambda_k-\lambda)(\lambda_k-\mu)}\psi_k,
\end{equation}
and the series converges in $H^{1/2}(\Gamma)$.
\end{lemma}
\begin{proof}
Notice that
\[
(-\Delta +q-\lambda )(u_\lambda -u_\mu)=(\lambda-\mu)u_\mu
\]
in $\Omega$ and that $\partial_\nu(u_\lambda -u_\mu)+\alpha(u_\lambda -u_\mu)_{|\Gamma}=0$ on $\Gamma$, where, for the sake of shortness, we write $u_\lambda =u_\lambda (g)$ and $u_\mu=u_\mu(g)$.
Thus, we have
\[
u_\lambda -u_\mu=(\lambda-\mu)(A-\lambda)^{-1}u_\mu =(\lambda-\mu)\sum_{k\ge 1}\frac{(u_\mu,\phi_k)}{\lambda_k-\lambda}\phi_k.
\]
On the other hand, since
\[
(u_\mu,\phi_k)=\frac{(g,\psi_k)}{\lambda_k-\mu},\ k \geq 1,
\]
from \eqref{rep1}, we obtain that
\begin{equation}\label{s1}
u_\lambda -u_\mu=(\lambda-\mu)\sum_{k\ge 1}\frac{(g,\psi_k)}{(\lambda_k-\lambda)(\lambda_k-\mu)}\phi_k.
\end{equation}
Moreover, we have
\[
\sum_{k\ge 1}\frac{(g,\psi_k)}{(\lambda_k-\lambda)(\lambda_k-\mu)}(A-\lambda)\phi_k=\sum_{k\ge 1}\frac{(g,\psi_k)}{\lambda_k-\mu}\phi_k,
\]
the series being convergent in $H$. It follows from this and \eqref{s1} that
\[
u_\lambda -u_\mu=(\lambda-\mu) (A-\lambda)^{-1}\sum_{k\ge 1}\frac{(g,\psi_k)}{\lambda_k-\mu}\phi_k,
\]
where the series on the right hand side of \eqref{s1} converges in $V$. As a consequence we have
\begin{equation}
u_\lambda{_{|\Gamma}} -u_\mu{_{|\Gamma}}=(\lambda-\mu)\sum_{k\ge 1}\frac{(g,\psi_k)}{(\lambda_k-\lambda)(\lambda_k-\mu)}\psi_k,
\end{equation}
the series being convergent in $H^{1/2}(\Gamma)$.
\end{proof}
Next, we establish the following {\it a priori} estimate for the solution to \eqref{bvp1}.
\begin{lemma}\label{lemma3}
Let $q\in \mathrm{Q}(n/2,\aleph)$. Then, there exist two constants $\lambda_+>0$ and $C>0$, depending only on $n$, $\Omega$, $\aleph$ and $\mathfrak{c}$, such that for all $\lambda \in (-\infty,-\lambda_+]$ and all $g\in \mathfrak{h}$, the solution $u_\lambda (g)$ to \eqref{bvp1} satisfies the estimate
\begin{equation}\label{lim1}
|\lambda|^{1/2} \|u_\lambda(g)\|_H+\|u_\lambda (g)\|_V\le C\|g\|_{L^2(\Gamma)}.
\end{equation}
\end{lemma}
\begin{proof}
Fix $\lambda \in \rho(A)\cap (-\infty ,0)$. We apply the generalized Green formula \eqref{ggf}
with $u=v:=u_\lambda$, where we write $u_\lambda$ instead of $u_\lambda(g)$. We get that
\begin{equation}\label{ae1}
|\lambda| \|u_\lambda\|_H^2+\|\nabla u_\lambda\|_H^2 \leq \|qu_\lambda ^2\|_{L^1(\Omega)}-(\alpha u_\lambda,u_\lambda)+(g,u_\lambda).
\end{equation}
Next, $\epsilon$ being fixed in $(0,+\infty)$, we combine \eqref{ii1} with \eqref{ae1} and obtain
\begin{equation}\label{ae2}
|\lambda |\|u_\lambda\|_H^2+\|\nabla u_\lambda\|_H^2\le \epsilon\|u_\lambda\|_V^2+C_\epsilon \|u_\lambda\|_H^2+\mathfrak{c}\mathfrak{n} ^2\|u_\lambda\|_V^2+\mathfrak{n} \|g\|_{L^2(\Gamma)}\|u_\lambda\|_V,
\end{equation}
where $C_\epsilon$ is a positive constant depending only on $n$, $\Omega$, $\aleph$ and $\epsilon$.
Taking $\epsilon = \kappa =(1- \mathfrak{c}\mathfrak{n}^2)/2$ in \eqref{ae2} then yields
\[
(|\lambda |-1-C_\kappa)\|u_\lambda\|_H^2+\kappa\|u_\lambda\|_V^2\le \mathfrak{n} \|g\|_{L^2(\Gamma)}\|u_\lambda\|_V.
\]
As a consequence we have
$$ | \lambda | \| u_\lambda \|_H^2 + \| u_\lambda \|_V^2 \leq \frac{2\mathfrak{n}^2}{\kappa^2} \|g\|_{L^2(\Gamma)}^2, $$
whenever $|\lambda| \geq (1+C_\kappa) \slash (1 - \kappa \slash 4)$, and
\eqref{lim1} follows readily from this.
\end{proof}
Armed with Lemma \ref{lemma3} we can examine the dependence of (the trace of) the solution to the BVP \eqref{bvp1} with respect to $q$.
More precisely, we shall establish that the influence of the potential on $u_\lambda(g)$ is, in some sense, dimmed as the spectral parameter $\lambda$ goes to $-\infty$.
\begin{lemma}\label{lemma5.1}
Let $q$ and $\tilde{q}$ be in $\mathrm{Q}(n/2,\aleph)$.
Then, for all
$g\in \mathfrak{h}$, we have
\begin{equation}\label{lim2}
\lim_{\lambda=\Re \lambda \rightarrow -\infty}\|u_\lambda(g)_{|\Gamma}-\tilde{u}_\lambda(g)_{|\Gamma}\|_{H^{1/2}(\Gamma)}=0.
\end{equation}
\end{lemma}
\begin{proof}
Let $\lambda \in (-\infty,-\lambda_+]$, where $\lambda_+$ is the same as in Lemma \ref{lemma3}. We use the same notation as in the proof of Lemma \ref{lemma3} and write $u_\lambda$ (resp., $\tilde{u}_\lambda$) instead of $u_\lambda(g)$ (resp., $\tilde{u}_\lambda(g)$). Since
\[
(-\Delta +q-\lambda )(u_\lambda - \tilde{u}_\lambda)=(\tilde{q}-q)\tilde{u}_\lambda\quad \mathrm{in}\; \Omega
\]
and
\[
\partial_\nu(u_\lambda - \tilde{u}_\lambda)+\alpha (u_\lambda - \tilde{u}_\lambda)_{|\Gamma}=0\quad \mbox{on}\; \Gamma,
\]
we have
$$ u_\lambda - \tilde{u}_\lambda=(A-\lambda)^{-1} ((\tilde{q}-q)\tilde{u}_\lambda), $$
whence
\begin{equation}
\label{a1}
\| u_\lambda - \tilde{u}_\lambda \|_V \le C \|((\lambda_k+\lambda^\ast)/(\lambda_k-\lambda))\|_{\ell^\infty} \| (\tilde{q}-q)\tilde{u}_\lambda\|_{V^\ast},
\end{equation}
by \eqref{re2}, where $C$ is a positive constant which is independent of $\lambda$.
We are left with the task of estimating $\| (\tilde{q}-q)\tilde{u}_\lambda\|_{V^\ast}$. For this purpose, we notice from $\tilde{q}-q \in L^{n/2}(\Omega)$ and from $\tilde{u}_\lambda \in L^{p^\ast}(\Omega)$ that $(\tilde{q}-q)\tilde{u}_\lambda \in L^p(\Omega)$. Thus, bearing in mind that the embedding $V \subset L^{p^\ast}(\Omega)$ is continuous, we infer from H\"older's inequality that
\begin{eqnarray*}
\| (\tilde{q}-q)\tilde{u}_\lambda\|_{V^\ast}
& \le & \| \tilde{q}-q \|_{L^{n/2}(\Omega)} \| \tilde{u}_\lambda \|_{L^{p^\ast}(\Omega)} \\
& \le & 2 \aleph \| \tilde{u}_\lambda \|_{V}.
\end{eqnarray*}
In light of \eqref{lim1}, this entails that
$$\| (\tilde{q}-q)\tilde{u}_\lambda\|_{V^\ast} \leq C \|g\|_{L^2(\Gamma)},$$
for some constant $C$ depending only on $n$, $\Omega$, $\aleph$ and $\mathfrak{c}$. From this, \eqref{a1} and the continuity of the trace operator $w\in V\mapsto w_{|\Gamma}\in H^{1/2}(\Gamma)$, we obtain that
\[
\|(u_\lambda)_{| \Gamma}-(\tilde{u}_\lambda)_{|\Gamma}\|_{H^{1/2}(\Gamma)}\le C \|((\lambda_k+\lambda^\ast)/(\lambda_k-\lambda))\|_{\ell^\infty} \|g\|_{L^2(\Gamma)},
\]
where $C$ is independent of $\lambda$. Now \eqref{lim2} follows immediately from this upon sending $\lambda$ to $-\infty$ on both sides of the above inequality.
\end{proof}
\subsection{$H^2$-regularity of the eigenfunctions}
For all $q \in L^{n/2}(\Omega)$,
we have $\phi_k \in V$, $k \ge 1$, but it is not guaranteed in general that $\phi_k \in H^2(\Omega)$.
Nevertheless, we shall establish that the regularity of the eigenfunctions of $A$ can be upgraded to $H^2$, provided that the potential $q$ is taken in $L^n(\Omega)$.
\begin{lemma}\label{lemma5.0}
Let $q\in \mathrm{Q}(n,\aleph)$ and assume that $\alpha\in C^{0,1}(\Gamma)$. Then, for all $k\in \mathbb{N}$, we have $\phi_k\in H^2(\Omega)$ and the estimate
\begin{equation}\label{5.1}
\|\phi_k\|_{H^2(\Omega)}\le C(1+|\lambda_k|),
\end{equation}
where $C$ is a positive constant depending only on $n$, $\Omega$, $\aleph$ and $\|\alpha\|_{C^{0,1}(\Gamma)}$.
\end{lemma}
\begin{proof}
Let us start by noticing from \eqref{co} that
\begin{equation}\label{5.0}
\|\phi_k\|_V\le \kappa^{-1/2}(\lambda_k+\lambda ^\ast)^{1/2},\ k \geq 1.
\end{equation}
On the other hand we have $q\phi_k\in H$ for all $k\in \mathbb{N}$, and the estimate
\begin{equation}\label{5.0.1}
\|q\phi_k\|_H\le \|q\|_{L^n(\Omega)}\|\phi_k\|_{L^{p^\ast}(\Omega)}\le C_0 \|\phi_k\|_V,
\end{equation}
where $C_0$ is a positive constant depending only on $n$, $\Omega$, $\mathfrak{c}$ and $\aleph$.
Next, bearing in mind that $\alpha \phi_k{_{|\Gamma}} \in H^{1/2}(\Gamma)$, we pick
$\phi_k^0\in H^2(\Omega)$ such that $\partial_\nu \phi_k^0=\alpha \phi_k{_{|\Gamma}}$. Evidently, we have
\[
-\Delta (\phi_k+\phi_k^0)=(\lambda_k-q)\phi_k-\Delta\phi_k^0\; \mbox{in}\; \Omega \quad \mbox{and} \quad \partial_\nu(\phi_k+\phi_k^0)=0\; \mbox{on}\; \Gamma.
\]
Since $(\lambda_k-q)\phi_k-\Delta\phi_k^0\in H$, \cite[Theorem 3.17]{Tr} then yields that $\phi_k+\phi_k^0\in H^2(\Omega)$. As a consequence we have $\phi_k=(\phi_k+\phi_k^0) -\phi_k^0 \in H^2(\Omega)$ and
\[
\|\phi_k\|_{H^2(\Omega)}\le C_1(\|(\lambda_k-q)\phi_k\|_H+\|\phi_k\|_V)
\]
for some constant $C_1>0$ which depends only on $n$, $\Omega$ and $\|\alpha\|_{C^{0,1}(\Gamma)}$, by \cite[Lemma 3.181]{Tr} (see also \cite[Theorem 2.3.3.6]{Gr}). Putting this together with \eqref{5.0}-\eqref{5.0.1}, we obtain \eqref{5.1}.
\end{proof}
\section{Proof of Theorems \ref{theorem1}, \ref{theorem2} and \ref{theorem3}}
\label{sec-proof}
\subsection{Proof of Theorem \ref{theorem1}}
\label{sec-prthm1}
We use the same notations as in the previous sections. Namely, we denote by $\tilde{A}$ the operator generated in $H$ by the form $\mathfrak{a}$ with $\tilde{q}$ substituted for $q$, and we write $u_\lambda$ (resp., $\tilde{u}_\lambda$) instead of $u_\lambda (g)$ (resp., $\tilde{u}_\lambda(g)$).
Let $\lambda \in \mathbb{C}\setminus \mathbb{R}$ and pick $\mu$ in $\rho(A)\cap \rho(\tilde{A})$. Depending on whether $\ell=1$ or $\ell \ge 2$, we have either
\[
(u_\lambda)_{|\Gamma} -(u_\mu)_{|\Gamma}=(\tilde{u}_\lambda)_{|\Gamma} - (\tilde{u}_\mu)_{|\Gamma}
\]
or
\begin{eqnarray*}
& & (u_\lambda)_{|\Gamma} -(u_\mu)_{|\Gamma}-(\lambda-\mu) \sum_{k= 1}^{\ell-1}\frac{(g,\psi_k)}{(\lambda_k-\lambda)(\lambda_k-\mu)}\psi_k
\\
&=& (\tilde{u}_\lambda)_{|\Gamma} -(\tilde{u}_\mu)_{|\Gamma}-(\lambda-\mu)\sum_{k= 1}^{\ell-1}\frac{(g,\psi_k)}{(\tilde{\lambda}_k-\lambda)(\tilde{\lambda}_k-\mu)}\psi_k,
\end{eqnarray*}
by virtue of \eqref{s2}. Sending $\Re \mu$ to $-\infty$ in these two identities, where $\Re \mu$ denotes the real part of $\mu$, we get with the help of \eqref{lim2} that
\begin{equation}\label{RtoD}
(u_\lambda)_{|\Gamma}- (\tilde{u}_\lambda)_{|\Gamma}=R_\lambda^\ell,
\end{equation}
where
\[
R_\lambda^\ell=R_\lambda^\ell (g) := \left\{ \begin{array}{ll} 0 & \mbox{if}\ \ell=1 \\ \sum_{k= 1}^{\ell-1}\frac{(\tilde{\lambda}_k -\lambda_k)(g,\psi_k)}{(\lambda_k-\lambda)(\tilde{\lambda}_k-\lambda)}\psi_k & \mbox{if}\ \ell \ge 2. \end{array} \right.
\]
Notice for further use that there exists $\lambda_*>0$ such that the estimate
\begin{equation}\label{Re}
| \langle R_\lambda ^\ell ,h\rangle|\le \frac{C_\ell}{|\lambda|^2}\|g\|_{L^2(\Gamma)}\|h\|_{L^2(\Gamma)},\quad |\lambda |\ge \lambda_*,\quad g,h\in \mathfrak{h},
\end{equation}
holds for some constant $C_\ell=C_\ell(q,\tilde{q})$ which is independent of $\lambda$.
Let us now consider two functions $G \in \mathfrak{H}$ and $H \in \mathfrak{H}$, that will be made precise below, and put $u:=(A-\lambda)^{-1}(\Delta -q+\lambda)G+G$, $g:=\partial_\nu G+\alpha G_{|\Gamma}$ and $h:=\partial_\nu H +\alpha H_{|\Gamma}$. Then, bearing in mind that $\partial_\nu u + \alpha u_{| \Gamma}=g$, the Green formula yields that
\begin{equation}
\label{eq-G}
\int_\Gamma u\overline{h}ds(x)=\int_\Gamma g\overline{H}ds(x)+\int_\Omega (u\Delta\overline{H}-\Delta u\overline{H})dx.
\end{equation}
Further, taking into account that $\Delta u=(q-\lambda )u$ in $\Omega$, we see that
\begin{eqnarray*}
u\Delta\overline{H}-\Delta u\overline{H} & =& u (\Delta -q+\lambda )\overline{H} \\
& = & \left( (A-\lambda)^{-1}(\Delta -q+\lambda )G+G \right) (\Delta -q+\lambda )\overline{H}.
\end{eqnarray*}
Thus, assuming that $(\Delta +\lambda)G=(\Delta +\lambda)H=0$, the above identity reduces to
\[
u\Delta\overline{H}-\Delta u\overline{H}= -\left( -(A-\lambda)^{-1}qG+G \right) q\overline{H},
\]
and \eqref{eq-G} then reads
\begin{equation}\label{id1}
\int_\Gamma u\overline{h}ds(x)=\int_\Gamma g\overline{H}ds(x)-\int_\Omega \left( -(A-\lambda)^{-1}qG+G \right) q\overline{H} dx.
\end{equation}
This being said, we set $\lambda_\tau:=(\tau+i)^2$ for some fixed $\tau \in [1,+\infty)$, pick two vectors $\omega$ and $\theta$ in $\mathbb{S}^{n-1}$, and we consider the special case where
\[
G(x)=\mathfrak{e}_{\lambda_\tau,\omega}(x):=e^{i\sqrt{\lambda_\tau}\omega \cdot x},\quad \overline{H}(x)=\mathfrak{e}_{\lambda_\tau,-\theta}(x):= e^{-i\sqrt{\lambda_\tau}\theta \cdot x}.
\]
Next, we put
$$
S(\lambda_\tau,\omega ,\theta) :=\int_\Gamma u_\lambda (g)\overline{h}ds(x),\ \quad \tilde{S}(\lambda_\tau,\omega ,\theta) :=\int_\Gamma \tilde{u}_\lambda (g)\overline{h}ds(x),
$$
in such a way that
\begin{equation}
\label{eq-S}
S(\lambda_\tau,\omega ,\theta)-\tilde{S}(\lambda_\tau,\omega ,\theta)=\langle R_{\lambda_\tau} ^\ell(g) , h\rangle.
\end{equation}
Then, taking into account that
\[
g(x)=(i\sqrt{\lambda_\tau}\omega \cdot \nu+\alpha)e^{i\sqrt{\lambda_\tau}\omega \cdot x},\quad \overline{h}(x)= (-i\sqrt{\lambda_\tau}\theta \cdot \nu+\alpha)e^{-i\sqrt{\lambda_\tau}\theta \cdot x},
\]
we have $\|g\|_{L^2(\Gamma)}\|h\|_{L^2(\Gamma)}\le C\tau^2$ for some positive constant $C$ which is independent of $\omega$, $\theta$ and $\tau$, and we infer from \eqref{Re} and \eqref{eq-S} that
\begin{equation}
\label{S1}
\lim_{\tau \rightarrow \infty}\sup_{\omega,\theta \in \mathbb{S}^{n-1}} \left( S(\lambda_\tau,\omega ,\theta)-\tilde{S}(\lambda_\tau,\omega ,\theta) \right)=0.
\end{equation}
On the other hand, \eqref{id1} reads
\begin{equation}
\label{S1b}
S(\lambda_\tau,\omega ,\theta) =S_0(\lambda_\tau,\omega ,\theta)
+\int_\Gamma(i\sqrt{\lambda_\tau}\omega \cdot\nu +\alpha)e^{-i\sqrt{\lambda_\tau}(\theta-\omega)\cdot x}ds(x),
\end{equation}
where
\begin{equation}
\label{id3}
S_0(\lambda_\tau,\omega ,\theta):=\int_\Omega (A-\lambda_\tau)^{-1}(q\mathfrak{e}_{\lambda_\tau,\omega})q\mathfrak{e}_{\lambda_\tau,-\theta}dx-\int_\Omega qe^{-i\sqrt{\lambda_\tau}(\theta-\omega)\cdot x}dx.
\end{equation}
Now, we fix $\xi$ in $\mathbb{R}^n$, pick $\eta \in \mathbb{S}^{n-1}$ such that $\xi \cdot \eta =0$, and for all $\tau \in \left( |\xi|/2,+\infty \right)$ we set
\begin{equation}
\label{es4}
\omega_\tau :=\left(1-|\xi|^2/(4\tau ^2)\right)^{1/2}\eta -\xi/(2\tau),\quad \theta_\tau :=\left(1-|\xi|^2/(4\tau ^2)\right)^{1/2}\eta +\xi/(2\tau)
\end{equation}
in such a way that
\begin{equation}
\label{id3b}
\lim_{\tau \rightarrow +\infty} \sqrt{\lambda_\tau}(\theta_\tau-\omega_\tau) = \xi.
\end{equation}
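Let us note in passing that \eqref{id3b} can be checked directly: since $\sqrt{\lambda_\tau}=\tau+i$ and, by \eqref{es4}, $\theta_\tau-\omega_\tau=\xi/\tau$, we have
\[
\sqrt{\lambda_\tau}(\theta_\tau-\omega_\tau)=\left(1+\frac{i}{\tau}\right)\xi \longrightarrow \xi\quad \mbox{as}\ \tau \rightarrow +\infty.
\]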
Evidently, we have
\begin{equation}
\label{gs0}
\|\mathfrak{e}_{\lambda_\tau,\omega_\tau}\|_{L^\infty (\Omega)}\le \|e^{|x|} \|_{L^\infty (\Omega)},\quad \|\mathfrak{e}_{\lambda_\tau,-\theta_\tau}\|_{L^\infty (\Omega)}\le \|e^{|x|} \|_{L^\infty (\Omega)}.
\end{equation}
Next, with reference to the notations $\beta=\max \left( 0,n(2-r)/(2r) \right)$ and $p_\sigma=2n / (n+2\sigma)$, $\sigma \in [0,1]$, of Theorem \ref{theorem2} and Proposition \ref{proposition2}, respectively,
we see that $\beta=0$ and hence that $p_\beta=p_0=2$, when $n \ge 4$, whereas $p_\beta=r \in (3/2,2)$, when $n=3$. Thus, we have $p_\beta\le r$ whenever $n \ge 3$, and consequently $q \in L^{p_\beta}(\Omega)$. It follows from this and \eqref{gs0} that
$q\mathfrak{e}_{\lambda_\tau,\omega_\tau}$ and $q\mathfrak{e}_{\lambda_\tau,-\theta_\tau}$ lie in $L^{p_\beta}(\Omega)$ and satisfy the estimate
\begin{equation}
\label{gs1}
\|q\mathfrak{e}_{\lambda_\tau,\omega_\tau}\|_{L^{p_\beta}(\Omega)}+\|q\mathfrak{e}_{\lambda_\tau,-\theta_\tau} \|_{L^{p_\beta}(\Omega)}\le C \| q \|_{L^r(\Omega)},\quad \tau \in (|\xi|/2,\infty),
\end{equation}
for some positive constant $C=C(n,\Omega)$ depending only on $n$ and $\Omega$. Moreover, for all $\tau \ge \max(|\xi|/2,\tau_\ast)$, we have
\begin{eqnarray}
\label{gm0}
& & \left| \int_\Omega (A-\lambda_\tau)^{-1}(q\mathfrak{e}_{\lambda_\tau,\omega_\tau})q\mathfrak{e}_{\lambda_\tau,-\theta_\tau}dx \right| \\
& \leq & \| (A-\lambda_\tau)^{-1}(q\mathfrak{e}_{\lambda_\tau,\omega_\tau}) \|_{L^{p_\beta^\ast}(\Omega)} \|q\mathfrak{e}_{\lambda_\tau,-\theta_\tau} \|_{L^{p_\beta}(\Omega)} \nonumber\\
& \le & C \tau^{-1+2\beta} \|q\mathfrak{e}_{\lambda_\tau,\omega_\tau}\|_{L^{p_\beta}(\Omega)} \|q\mathfrak{e}_{\lambda_\tau,-\theta_\tau} \|_{L^{p_\beta}(\Omega)},\ \nonumber
\end{eqnarray}
by \eqref{re7}, where $C>0$ is independent of $\tau$. Since $\beta \in [0,1/2)$ from its very definition, we infer from
\eqref{gs1}-\eqref{gm0} that
\begin{equation}
\label{gs2}
\lim_{\tau \rightarrow \infty}\left| \int_\Omega (A-\lambda_\tau)^{-1}(q\mathfrak{e}_{\lambda_\tau,\omega_\tau})q\mathfrak{e}_{\lambda_\tau,-\theta_\tau}dx \right|=0,
\end{equation}
which together with \eqref{id3}-\eqref{id3b} yields that
\[
\lim_{\tau \rightarrow \infty} S_0(\lambda_\tau ,\omega_\tau ,\theta_\tau) =-\int_\Omega qe^{-i\xi \cdot x}dx,\quad \xi \in \mathbb{R}^n.
\]
From this and the identity
\[
\lim_{\tau \rightarrow \infty}\left( S_0(\lambda_\tau ,\omega_\tau ,\theta_\tau)-\tilde{S}_0(\lambda_\tau ,\omega_\tau ,\theta_\tau) \right)=\lim_{\tau \rightarrow \infty} \left( S(\lambda_\tau,\omega_\tau ,\theta_\tau)-\tilde{S}(\lambda_\tau,\omega_\tau ,\theta_\tau) \right)=0,
\]
arising from \eqref{S1}-\eqref{S1b}, it then follows that
\[
\int_\Omega (q-\tilde{q})e^{-i\xi \cdot x}dx=0,\quad \xi \in \mathbb{R}^n.
\]
Otherwise stated, the Fourier transform of $(q-\tilde{q})\chi_\Omega$, where $\chi_\Omega$ is the characteristic function of $\Omega$, is identically zero in $\mathscr{S}'(\mathbb{R}^n)$. By the injectivity of the Fourier transformation, this entails that $q=\tilde{q}$ in $\Omega$.
\subsection{Proof of Theorem \ref{theorem2}}
\label{sec-prthm2}
Pick $\omega$ and $\theta$ in $\mathbb{S}^{n-1}$, and let $\lambda\in \mathbb{C}\setminus\mathbb{R}$.
We use the same notations as in the proof of Theorem \ref{theorem1}. Namely, for all $x \in \Gamma$,
we write
$$
g(x)=g_\lambda(x)=(i\sqrt{\lambda}\omega \cdot \nu+\alpha)e^{i\sqrt{\lambda}\omega \cdot x},\
\overline{h}(x)=\overline{h}_\lambda(x)= (-i\sqrt{\lambda}\theta \cdot \nu+\alpha)e^{-i\sqrt{\lambda}\theta \cdot x}$$
and we recall that
$S(\lambda,\omega ,\theta)=\int_\Gamma u_\lambda (g)\overline{h}ds(x)$. Next, for all $\mu \in \rho(A)\cap \rho(\tilde{A})$ we set
\begin{equation}
\label{4.0}
T(\lambda ,\mu)=T(\lambda ,\mu,\omega ,\theta):=S(\lambda,\omega ,\theta)-S(\mu,\omega ,\theta)=\int_\Gamma \left(u_\lambda (g)-u_\mu (g) \right)\overline{h}ds(x).
\end{equation}
By Lemma \ref{lemma4}, we have
\[
T(\lambda ,\mu)= (\lambda-\mu)\sum_{k\ge 1}\frac{d_k}{(\lambda_k-\lambda)(\lambda_k-\mu)},\ d_k:=(g,\psi_k)(\psi_k,h),
\]
and hence
\begin{equation}
\label{dt}
T(\lambda ,\mu)-\tilde{T}(\lambda ,\mu)=U(\lambda ,\mu)+V(\lambda ,\mu),
\end{equation}
where
\begin{eqnarray}
U(\lambda ,\mu)&:=&\sum_{k\ge 1} \frac{\lambda-\mu}{\lambda_k-\mu} \frac{d_k-\tilde{d}_k}{\lambda_k-\lambda}, \label{du}
\\
V(\lambda ,\mu)&:=&\sum_{k\ge 1}\left(\frac{\lambda-\mu}{(\lambda_k-\lambda)(\lambda_k-\mu)}-\frac{\lambda-\mu}{(\tilde{\lambda}_k-\lambda)(\tilde{\lambda}_k-\mu)}\right)\tilde{d}_k. \label{dv}
\end{eqnarray}
Notice that for all $k \in \mathbb{N}$, we have $d_k-\tilde{d}_k=(g,\psi_k-\tilde{\psi}_k) (\psi_k,h)+ (g,\tilde{\psi}_k)(\psi_k-\tilde{\psi}_k,h)$, which immediately entails that
\begin{equation}
\label{es0}
\frac{|d_k-\tilde{d}_k|}{|\lambda_k-\lambda|}\le \left(\frac{|(g,\psi_k)|}{|\lambda_k-\lambda|}\|h\|_{L^2(\Gamma)}+\rho_k(\lambda)\frac{|(\tilde{\psi}_k,h)|}{|\tilde{\lambda}_k-\lambda|}\|g\|_{L^2(\Gamma)}\right)\|\psi_k-\tilde{\psi}_k\|_{L^2(\Gamma)},
\end{equation}
where $\rho_k(\lambda):=|\tilde{\lambda}_k-\lambda|/|\lambda_k-\lambda|$. Further, since $0 \le \rho_k(\lambda) \le 1+|\lambda_k-\tilde{\lambda}_k| / |\lambda_k-\lambda|$ and $(\lambda_k-\tilde{\lambda}_k)\in \ell^\infty$ by assumption, with $\|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}\le \aleph$, it is apparent that
$(\rho_k(\lambda ))\in \ell ^\infty$ and that
$$
\|(\rho_k(\lambda ))\|_{\ell ^\infty}\le \zeta(\lambda):=1+\frac{\aleph}{|\Im \lambda|},
$$
where $\Im \lambda$ denotes the imaginary part of $\lambda$.
Thus, applying the Cauchy-Schwarz inequality in \eqref{es0} and Parseval's theorem to the representation formula \eqref{rep1} in Lemma \ref{lemma2}, we get that
\begin{equation}
\label{es1}
\sum_{k=1}^N \frac{|d_k-\tilde{d}_k|}{|\lambda_k-\lambda|}\le M(\lambda)\|(\psi_k-\tilde{\psi}_k)\|_{\ell^2(L^2(\Gamma))},\ N \in \mathbb{N},
\end{equation}
where
\begin{equation}
\label{def-M}
M(\lambda):=\|h\|_{L^2(\Gamma)}\|u_\lambda (g)\|_H +\zeta(\lambda)\|g\|_{L^2(\Gamma)}\|\tilde{u}_\lambda (h)\|_H.
\end{equation}
As a consequence we have $\sum_{k\ge 1} |d_k-\tilde{d}_k| / |\lambda_k-\lambda|<\infty$. Furthermore, taking into account that
$$ \frac{|\lambda-\mu|}{|\lambda_k-\mu|} \le 1 + \frac{|\lambda|}{\lambda_1},\ \mu \in (-\infty,-\lambda_1], $$
we apply the dominated convergence theorem to \eqref{du} and find that
\begin{equation}\label{4.1}
\lim_{\mu=\Re \mu \rightarrow -\infty}U(\lambda ,\mu)=\sum_{k\ge 1}\frac{d_k-\tilde{d}_k}{\lambda_k-\lambda}=:\mathcal{U}(\lambda).
\end{equation}
Moreover, we have
\begin{equation}\label{4.4}
|\mathcal{U}(\lambda)|
\le M(\lambda)\|(\psi_k-\tilde{\psi}_k)\|_{\ell^2(L^2(\Gamma))},
\end{equation}
according to \eqref{es1}.
Arguing as before with $V$ defined by \eqref{dv} instead of $U$, we obtain in a similar fashion that
\begin{equation}\label{4.3}
\lim_{\mu=\Re \mu \rightarrow -\infty}V(\lambda ,\mu)=\sum_{k\ge 1} \frac{\tilde{\lambda}_k-\lambda_k}{(\lambda_k-\lambda)(\tilde{\lambda}_k-\lambda)} \tilde{d}_k=:\mathcal{V}(\lambda)
\end{equation}
and that
\begin{equation}\label{4.7}
|\mathcal{V}(\lambda)|\le \zeta(\lambda)\|(\tilde{\lambda}_k-\lambda_k)\|_{\ell^\infty}\| \tilde{u}_\lambda(g)\|_H\|\tilde{u}_\lambda(h)\|_H.
\end{equation}
Having seen this, we refer to \eqref{4.0}-\eqref{dt} and deduce from Lemma \ref{lemma5.1}, \eqref{4.1} and \eqref{4.3} that
\begin{equation}
\label{es3}
\int_\Gamma \left( u_\lambda (g)-\tilde{u}_\lambda (g) \right) \overline{h}ds(x)=\mathcal{U}(\lambda)+\mathcal{V}(\lambda).
\end{equation}
Now, taking $\lambda=\lambda_\tau=(\tau+i)^2$ for some fixed $\tau \in \left( |\xi|/2, \infty \right)$ and $(\omega,\theta)=(\omega_\tau,\theta_\tau)$, where $\omega_\tau$ and $\theta_\tau$ are the same as in \eqref{es4}, we combine \eqref{S1b}-\eqref{id3} with \eqref{es3}. We obtain that
the Fourier transform $\hat{b}$ of $b:=(\tilde{q}-q)\chi_\Omega$ reads
\begin{equation}\label{4.6}
\hat{b}((1+i/\tau)\xi)= \mathcal{U}(\lambda_\tau )+\mathcal{V}(\lambda_\tau)+\mathfrak{R}(\lambda_\tau),
\end{equation}
where
\[
\mathfrak{R}(\lambda_\tau):=\int_\Omega (\tilde{A}-\lambda_\tau)^{-1}(\tilde{q}\mathfrak{e}_{\lambda_\tau,\omega_\tau})\tilde{q}\mathfrak{e}_{\lambda_\tau,-\theta_\tau}dx-\int_\Omega (A-\lambda_\tau)^{-1}(q\mathfrak{e}_{\lambda_\tau,\omega_\tau})q\mathfrak{e}_{\lambda_\tau,-\theta_\tau}dx.
\]
Moreover, for all $\tau \ge \max(|\xi|/2,\tau_\ast)$, we have
\begin{equation}\label{4.8.1}
|\mathfrak{R}(\lambda_\tau)|\le C \tau^{-1+2\beta},
\end{equation}
by \eqref{gs1}-\eqref{gm0}, where $\beta \in [0,1/2)$ is defined in Theorem \ref{theorem2} and $\tau_\ast$ is the same as in Corollary \ref{corollary1}.
Here and in the remaining part of this proof, $C$ denotes a positive constant depending only on $n$, $\Omega$, $\aleph$ and $\mathfrak{c}$, which may change from line to line.
On the other hand, using that
\begin{eqnarray*}
\left| \hat{b}((1+i/\tau)\xi) -\hat{b}(\xi) \right| & = & \left| \int_{\mathbb{R}^n} e^{-i \xi \cdot x} \left( e^{\frac{\xi}{\tau} \cdot x} - 1 \right) b(x) dx \right| \\
& \le & \frac{| \xi |}{\tau} \left( \sup_{x \in \Omega} |x| e^{(| \xi | / \tau) |x|} \right) \| b \|_{L^1(\mathbb{R}^n)},
\end{eqnarray*}
we get in a similar way to \cite[Eq. (5.1)]{CS} that
$$
|\hat{b}(\xi)|\le |\hat{b}((1+i/\tau)\xi)|+\frac{c|\xi|}{\tau}e^{c|\xi|/\tau}\aleph,\ \tau \in (|\xi|/2, \infty),
$$
for some positive constant $c$ depending only on $\Omega$. Putting this together with
\eqref{4.6}-\eqref{4.8.1} we find that for all $\tau \ge \max(|\xi|/2,\tau_\ast)$,
\begin{equation}\label{4.9}
|\hat{b}(\xi)|\le \frac{C}{\tau^{1-2\beta}}+\frac{c|\xi|}{\tau}e^{c|\xi|/\tau}\aleph+|\mathcal{U}(\lambda_\tau )|+|\mathcal{V}(\lambda_\tau)|.
\end{equation}
To upper bound $|\mathcal{U}(\lambda_\tau )|+|\mathcal{V}(\lambda_\tau)|$ on the right hand side of \eqref{4.9}, we recall from \eqref{sol1} that $u_{\lambda_\tau}(g)=-(A-\lambda_\tau)^{-1} (q \mathfrak{e}_{\lambda_\tau,\omega_\tau}) + \mathfrak{e}_{\lambda_\tau,\omega_\tau}$ and that $\tilde{u}_{\lambda_\tau}(h)=-(\tilde{A}-\lambda_\tau)^{-1} (\tilde{q} \mathfrak{e}_{\lambda_\tau,-\theta_\tau})+\mathfrak{e}_{\lambda_\tau,-\theta_\tau}$, and we combine \eqref{re7} with
\eqref{gs0} and \eqref{gs1}: We get for all $\tau \ge \tau_{\xi}:=\max(1,|\xi|/2,\tau_\ast)$, that
$$\| u_{\lambda_\tau}(g) \|_H + \| \tilde{u}_{\lambda_\tau}(h) \|_H \leq C. $$
This together with the basic estimate $\| g \|_{L^2(\Gamma)} + \| h \|_{L^2(\Gamma)} \le C \tau$, \eqref{def-M}, \eqref{4.4} and \eqref{4.7}, yields that
$$
|\mathcal{U}(\lambda_\tau )|+|\mathcal{V}(\lambda_\tau)|\le C\left(\tau\|(\psi_k-\tilde{\psi}_k)\|_{\ell^2(L^2(\Gamma))}+\|(\tilde{\lambda}_k-\lambda_k)\|_{\ell^\infty}\right),\ \tau \in [ \tau_{\xi},\infty ).
$$
Inserting this into \eqref{4.9}, we find that
\begin{equation}\label{4.12}
|\hat{b}(\xi)|\le \frac{C}{\tau^{1-2\beta}}+\frac{c|\xi|}{\tau}e^{c|\xi|/\tau}\aleph+C \tau \delta,\ \tau \in [ \tau_{\xi},\infty ),
\end{equation}
where we have set
\begin{equation}
\label{def-delta}
\delta:= \| (\psi_k-\tilde{\psi}_k)\|_{\ell^2(L^2(\Gamma))}+\|(\tilde{\lambda}_k-\lambda_k)\|_{\ell^\infty}.
\end{equation}
Let $\varrho \in (0,1)$ be a constant to be made precise below. For all $\tau \in [\tau_\ast,\infty)$, where $\tau_\ast$ is defined in Corollary \ref{corollary1}, it is apparent that the condition $\tau \ge \tau_{\xi}$ is automatically satisfied whenever $\xi \in B(0,\tau^\varrho):= \{ \xi \in \mathbb{R}^n,\ |\xi|< \tau^\varrho \}$.
Thus, squaring both sides of \eqref{4.12} and integrating the obtained inequality over $B(0,\tau^\varrho)$, we get that
$$
\|\hat{b}\|_{L^2(B(0,\tau ^\varrho))}^2\le C\left( \tau^{-2(1-2 \beta) + \varrho n}+ e^{2c\tau^{-(1-\varrho)}} \tau^{\varrho (n+2)-2}+\tau^{2+\varrho n}\delta ^2 \right),\ \tau \in [\tau_\ast,\infty).
$$
Then, taking $\varrho=(1-2\beta)/(n+2)$ in the above line, we obtain that
\begin{equation}
\label{sta1}
\|\hat{b}\|_{L^2(B(0,\tau ^{(1-2\beta)/(n+2)}))}^2\le C \left( \tau^{-(1-2\beta)}+\tau^{(3n+4)/(n+2)}\delta ^2 \right), \tau \in [\tau_\ast,\infty).
\end{equation}
On the other hand, using that the Fourier transform is an isometry from $L^2(\mathbb{R}^n)$ to itself, we have for all $\tau \in [\tau_\ast,\infty)$,
\begin{eqnarray*}
\int_{\mathbb{R}^n \setminus B(0,\tau^{(1-2\beta)/(n+2)})} (1+|\xi|^2)^{-1}|\hat{b}(\xi)|^2d\xi & \le &
\tau^{-2(1-2\beta)/(n+2)}\|b\|_{L^2(\mathbb{R}^n)}^2 \\
& \le & C \tau^{-2(1-2\beta)/(n+2)},
\end{eqnarray*}
which together with
\eqref{sta1} yields that
\[
\|b\|_{H^{-1}(\mathbb{R}^n)}^2\le C \left( \tau^{-2(1-2\beta)/(n+2)}+\tau^{(3n+4)/(n+2)}\delta ^2 \right),\ \tau \in [\tau_\ast,\infty).
\]
Assuming that $\delta < \left( 2(1-2\beta)/(3n+4) \right)^{1 /2}=:\delta_0$, we get by minimizing the right hand side of the above estimate with respect to $\tau \in [\tau_\ast,\infty)$, that
$$
\|b\|_{H^{-1}(\mathbb{R}^n)}\le C\delta^{2(1-2\beta)/(3(n+2))},
$$
and the desired stability inequality follows from this upon recalling that $\|q-\tilde{q}\|_{H^{-1}(\Omega)}\le \|b\|_{H^{-1}(\mathbb{R}^n)}$. Finally, we complete the proof by noticing that for all $\delta \ge \delta_0$, we have
$$ \|q-\tilde{q}\|_{H^{-1}(\Omega)}\le \|q-\tilde{q}\|_{L^2(\Omega)} \leq \left( 2 \aleph \delta_0^{-2(1-2\beta)/(3(n+2))} \right) \delta^{2(1-2\beta)/(3(n+2))}. $$
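For the reader's convenience, let us also record how the minimization used above can be made explicit; this elementary computation is included only as a complement. Writing $a:=2(1-2\beta)/(n+2)$ and $b:=(3n+4)/(n+2)$, the function $\tau \mapsto \tau^{-a}+\tau^{b}\delta^2$ attains its minimum over $(0,+\infty)$ at
$$
\tau_{\mathrm{opt}}=\left(\frac{a}{b\,\delta^2}\right)^{\frac{1}{a+b}},
$$
where its value is, up to a multiplicative constant depending only on $n$ and $\beta$, of order $\delta^{2a/(a+b)}$. Since $a+b=3-4\beta/(n+2)\le 3$ and $\delta<\delta_0<1$, we have $\delta^{2a/(a+b)}\le \delta^{2a/3}=\delta^{4(1-2\beta)/(3(n+2))}$, and taking square roots yields the exponent $2(1-2\beta)/(3(n+2))$ appearing in the stability estimate. Note that the smallness assumption $\delta<\delta_0=(a/b)^{1/2}$ is precisely the condition $\tau_{\mathrm{opt}}>1$; when $\tau_{\mathrm{opt}}<\tau_\ast$, the quantity $\delta$ is bounded from below by a positive constant, and the desired estimate then follows, upon enlarging $C$, from $\|b\|_{H^{-1}(\mathbb{R}^n)}\le \|b\|_{L^2(\mathbb{R}^n)}\le 2\aleph$.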
\subsection{Proof of Theorem \ref{theorem3}}
Upon possibly substituting $q+\lambda^\ast+1$ (resp., $\tilde{q}+\lambda^\ast+1$) for $q$ (resp., $\tilde{q}$), we shall assume without loss of generality in the sequel, that $\lambda_k\ge 1$ (resp., $\tilde{\lambda}_k \ge 1$) for all $k \ge 1$.
Next, taking into account that $q=\tilde{q}$ in $\Omega_0$, we notice that the function $u_k:=\phi_k-\tilde{\phi}_k$, $k \ge 1$, satisfies
\begin{equation}
\label{es20}
(-\Delta +q-\lambda_k)u_k=(\lambda_k-\tilde{\lambda}_k)\tilde{\phi}_k\; \mbox{in}\; \Omega_0,\quad \partial_\nu u_k+\alpha u_k=0\; \mbox{on}\; \Gamma.
\end{equation}
Now, let us recall from \cite[Theorem 2.2]{BCKPS} that for all $s \in (0,1/2)$ fixed,
there exist three constants
$C=C(n,\Omega_0,\Gamma_{\ast})>0$, $\mathfrak{b}=\mathfrak{b}(n,\Omega_0,\Gamma_{\ast},s)>0$ and $\gamma=\gamma(n,\Omega_0)>0$, such that for all $r \in (0,1)$ and all $\lambda \in [0,+\infty)$, we have
\begin{equation}\label{UC1}
C\left(\|u\|_{H^1(\Gamma_0)}+ \|\partial_\nu u\|_{L^2(\Gamma_0)}\right)\le r^{s/4}\|u\|_{H^2(\Omega_0)}+e^{\mathfrak{b}r^{-\gamma}}\mathfrak{C}_\lambda (u),\ u\in H^2(\Omega_0),
\end{equation}
where we have set $\Gamma_0:=\partial \Omega_0$ and
\[
\mathfrak{C}_\lambda (u) :=(1+\lambda)\left(\|u\|_{H^1(\Gamma_{\ast})}+\| \partial_\nu u\|_{L^2(\Gamma_{\ast})}\right)+\|(\Delta -q+\lambda)u\|_{L^2(\Omega_0)}.
\]
Thus, in light of \eqref{5.1} and the embedding $\Gamma \subset \Gamma_0$, we deduce from \eqref{es20} upon applying \eqref{UC1} with $(\lambda,u)=(\lambda_k,(u_k)_{| \Omega_0})$, $k \geq 1$, that for all $r \in (0,1)$, we have
\begin{eqnarray*}
& & C\|\psi_k-\tilde{\psi}_k\|_{L^2(\Gamma)} \\
& \le & r^{s/4}( \lambda_k +\tilde{\lambda}_k )+e^{\mathfrak{b} r^{-\gamma}}\left((1+\|\alpha\|_{C^{0,1}(\Gamma)})\lambda_k\|\psi_k-\tilde{\psi}_k\|_{H^1(\Gamma_{\ast})}+|\lambda_k-\tilde{\lambda}_k|\right),
\end{eqnarray*}
for some constant $C>0$ depending only on $n$, $\Omega$, $\Omega_0$, $\Gamma_\ast$, $\aleph$ and $s$.
From this and Weyl's asymptotic formula \eqref{waf}, it then follows for all $k \ge 1$ and all $r \in (0,1)$, that
\begin{equation}
\label{es21}
C\|\psi_k-\tilde{\psi}_k\|_{L^2(\Gamma)}^2\le r^{s/2}k^{4/n}+e^{2\mathfrak{b}r^{-\gamma}}\left(k^{4/n}\|\psi_k-\tilde{\psi}_k\|_{H^1(\Gamma_{\ast})}^2 + |\lambda_k-\tilde{\lambda}_k|^2 \right).
\end{equation}
Here and in the remaining part of this proof, $C$ denotes a generic positive constant depending only on $n$, $\Omega$, $\Omega_0$, $\Gamma_\ast$, $\aleph$ and $\alpha$, which may change from one line to another. Since the constant $C$ is independent of $k \geq 1$ and
since $\sum_{k\ge 1}k^{-2\mathfrak{t}+4/n}<\infty$ as we have $2\mathfrak{t}>1+4/n$, we find upon multiplying both sides of
\eqref{es21} by $k^{-2\mathfrak{t}}$ and then summing up the result over $k \geq 1$, that
\begin{eqnarray}
\label{es22}
& & C \|(k^{-\mathfrak{t}} (\psi_k-\tilde{\psi}_k))\|_{\ell ^2(L^2(\Gamma))}^2 \\
& \le & r^{s/2}
+e^{2\mathfrak{b}r^{-\gamma}} \left( \| (k^{-\mathfrak{t}+2/n} (\psi_k-\tilde{\psi}_k))\|_{\ell^2(H^1(\Gamma_{\ast}))}^2 + \| (k^{-\mathfrak{t}} (\lambda_k-\tilde{\lambda}_k)) \|_{\ell^2}^2 \right), \nonumber
\end{eqnarray}
uniformly in $r \in (0,1)$.
Further, taking into account that $(k^{\mathfrak{t}}(\psi_k-\tilde{\psi}_k)) \in \ell^2(L^2(\Gamma))$ and $\|(k^{\mathfrak{t}}(\psi_k-\tilde{\psi}_k))\|_{\ell ^2(L^2(\Gamma))}\le \aleph$, we have
\begin{eqnarray*}
\|(\psi_k-\tilde{\psi}_k)\|_{\ell ^2(L^2(\Gamma))}^2
&\le &\|(k^{\mathfrak{t}}(\psi_k-\tilde{\psi}_k))\|_{\ell ^2(L^2(\Gamma))}\|(k^{-\mathfrak{t}}(\psi_k-\tilde{\psi}_k))\|_{\ell ^2(L^2(\Gamma))} \\
& \le & \aleph\|(k^{-\mathfrak{t}}(\psi_k-\tilde{\psi}_k))\|_{\ell ^2(L^2(\Gamma))},
\end{eqnarray*}
by the Cauchy-Schwarz inequality, and hence
\begin{eqnarray}
\label{es23}
& & C \|(\psi_k-\tilde{\psi}_k)\|_{\ell ^2(L^2(\Gamma))}^2 \\
& \le & r^{s/4}
+e^{\mathfrak{b}r^{-\gamma}} \left( \| (k^{-\mathfrak{t}} (\lambda_k-\tilde{\lambda}_k)) \|_{\ell^2} + \| (k^{-\mathfrak{t}+2/n} (\psi_k-\tilde{\psi}_k))\|_{\ell^2(H^1(\Gamma_{\ast}))} \right), \nonumber
\end{eqnarray}
whenever $r \in (0,1)$, by \eqref{es22}.
Moreover, since
$$\| (k^{-\mathfrak{t}} (\lambda_k-\tilde{\lambda}_k)) \|_{\ell^2} \le \left( \sum_{k\ge 1} k^{-2\mathfrak{t}}\right)^{1/2}
\| (\lambda_k-\tilde{\lambda}_k) \|_{\ell^\infty}$$
and $\sum_{k\ge 1} k^{-2\mathfrak{t}}<\infty$ since $2\mathfrak{t}>1$, \eqref{es23} then provides
\begin{equation}
\label{es24}
\|(\psi_k-\tilde{\psi}_k)\|_{\ell ^2(L^2(\Gamma))}^2 \le C \left( r^{s/4}
+e^{\mathfrak{b}r^{-\gamma}} \delta_\ast \right),\ r \in (0,1),
\end{equation}
where we have set
$$ \delta_\ast :=
\| (\lambda_k-\tilde{\lambda}_k) \|_{\ell^\infty} + \| (k^{-\mathfrak{t}+2/n} (\psi_k-\tilde{\psi}_k))\|_{\ell^2(H^1(\Gamma_{\ast}))}. $$
Next, with reference to \eqref{def-delta} we have
\begin{eqnarray*}
\delta^2 & \le & 2 \left( \|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty}^2 + \|(\psi_k-\tilde{\psi}_k)\|_{\ell ^2(L^2(\Gamma))}^2 \right) \\
& \le & 2 \left( \aleph \|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty} + \|(\psi_k-\tilde{\psi}_k)\|_{\ell ^2(L^2(\Gamma))}^2 \right). \nonumber
\end{eqnarray*}
Moreover, since $\|(\lambda_k-\tilde{\lambda}_k)\|_{\ell^\infty} \le e^{\mathfrak{b}r^{-\gamma}} \delta_\ast$ whenever $r \in (0,1)$, the above inequality combined with \eqref{es24} yields that
\begin{equation}
\label{e1}
\delta^2 \leq C \left( r^{s/4}+e^{\mathfrak{b}r^{-\gamma}} \delta_\ast \right),\ r \in (0,1).
\end{equation}
On the other hand, we have
$$
\|q-\tilde{q}\|_{H^{-1}(\Omega)}\le C \delta^{2 (1-2\beta) / (3(n+2))},
$$
from Theorem \ref{theorem2}. Putting this together with \eqref{e1}, we obtain that
\begin{equation}\label{e3}
\|q-\tilde{q}\|_{H^{-1}(\Omega)}\le C \left( r^{s/4}+e^{\mathfrak{b}r^{-\gamma}}\delta_\ast \right)^{(1-2\beta)/(3(n+2))},\ r \in (0,1).
\end{equation}
Let us now examine the two cases $\delta_\ast \in (0,1/e)$ and $\delta_\ast \in [1/e,\infty)$ separately. We start with $\delta_\ast \in (0,1/e)$ and take $r=| \ln \delta_\ast |^{-1/\gamma} \in (0,1)$ in \eqref{e3}, getting that
\begin{eqnarray*}
\|q-\tilde{q}\|_{H^{-1}(\Omega)} & \le & C \left( | \ln \delta_\ast |^{-s/(4\gamma)}+\delta_\ast^{(\mathfrak{b}+1)} \right)^{(1-2\beta)/(3(n+2))} \\
& \le & C \left( | \ln \delta_\ast |^{-s/(4\gamma)}+ e^{-(\mathfrak{b}+1)} | \ln \delta_\ast |^{-(\mathfrak{b}+1)} \right)^{(1-2\beta)/(3(n+2))},
\end{eqnarray*}
where we used in the last line that $\delta_\ast \le 1/(e | \ln \delta_\ast |)$. This immediately yields
\begin{equation}\label{e4}
\|q-\tilde{q}\|_{H^{-1}(\Omega)} \le C | \ln \delta_\ast |^{-\vartheta},\ \delta_\ast \in (0,1/e),
\end{equation}
where
$$\vartheta:=\min \left( s/(4\gamma) , \mathfrak{b}+1 \right)(1-2\beta)/(3(n+2)).$$
Next, for $\delta_\ast \in [1/e,\infty)$, we get upon choosing, say, $r=1/2$ in \eqref{e3}, and then taking into account that
$r < 1 \le e \delta_\ast$ and $(1-2\beta)/(3(n+2)) \ge 0$, that
\begin{eqnarray*}
\|q-\tilde{q}\|_{H^{-1}(\Omega)} & \le & C \left( (e\delta_\ast)^{s/4}+e^{2^\gamma \mathfrak{b}-1} e \delta_\ast \right)^{(1-2\beta)/(3(n+2))} \\
& \le & C (e\delta_\ast)^{(1-2\beta)/(3(n+2))} \\
& \le & C \delta_\ast.
\end{eqnarray*}
Now, with reference to \eqref{def-Phi}, the stability estimate \eqref{thm3} follows readily from this and \eqref{e4}.
\end{document}
\begin{document}
\title{Depth-Width Trade-offs for Neural Networks via Topological Entropy}
\author{Kaifeng Bu$^1$}
\email{[email protected] (K.Bu)}
\author{Yaobo Zhang$^{2,3}$}
\email{[email protected] (Y.Zhang)}
\author{Qingxian Luo$^{4,5}$}
\email{[email protected](Q.Luo)}
\address[$1$]{Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA}
\address[$2$]{Zhejiang Institute of Modern Physics, Zhejiang University, Hangzhou, Zhejiang 310027, China}
\address[$3$]{Department of Physics, Zhejiang University, Hangzhou Zhejiang 310027, China}
\address[$4$]{School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang 310027, China}
\address[$5$]{Center for Data Science, Zhejiang University, Hangzhou Zhejiang 310027, China}
\begin{abstract}
One of the central problems in the study of deep learning theory
is to understand how structural properties, such as the depth, the width and the number of nodes,
affect the expressivity of deep neural networks.
In this work, we show a new connection between the expressivity of deep neural networks and topological entropy
from dynamical systems, which can be used to characterize depth-width trade-offs of neural networks.
We provide an upper bound on the topological entropy of neural
networks with continuous semi-algebraic units in terms of the structure parameters. Specifically,
the topological entropy of a ReLU network with $l$ layers and $m$ nodes per layer is upper bounded by $O(l\log m)$.
Besides, if the neural network is a good approximation of some function $f$, then
the size of the neural network has an exponential lower bound with respect to the topological
entropy of $f$.
Moreover, we discuss the relationship between topological entropy,
the number of oscillations, periods and the Lipschitz constant.
\end{abstract}
\maketitle
\section{Introduction}
Deep neural networks have become a central topic in machine learning, with applications ranging from
pattern recognition to computer vision.
Understanding the representational power of neural networks is one of the key problems in
deep learning theory. The universal approximation theorem tells us that
any continuous function can be approximated by a depth-2 neural network with some
activation function on a bounded domain \cite{Cybenko1989,Hornik1989,Funahashi1989,Barron1994}. However, the size of the neural
network in this approximation can be exponential, which is impractical in real life.
Hence, we are interested in neural networks of bounded size.
One natural question is to investigate the trade-offs between depth and width. The benefit of
depth for the representational power of neural networks has attracted a lot of attention, and there are many
results based on depth separation arguments \cite{Eldan16,Telgarsky2015,Telgarsky2016,Schmitt1999,Montufar2014,Malach2019,Poole2016,Raghu17,Arora2016,Liang2016,Kileel2019}.
Depth separation arguments have also been considered in other
computational models, such as Boolean circuits \cite{Hastad1986,Hastad87, Parberry94,RossmanFOCS15}
and sum-product networks \cite{Shawe11,Martens2014}.
To get a depth separation argument for neural networks,
several measures to quantify the complexity of the functions have been
introduced, such as the number of
linear regions \cite{Montufar2014}, Fourier spectrum \cite{Eldan16},
global curvature \cite{Poole2016},
trajectory length \cite{Raghu17}, fractals \cite{Malach2019} and so on.
Recently, Telgarsky used the number of oscillations as a measure of
the complexity of a function to prove that there exist neural networks with
$\Theta(k^3)$ layers and $\Theta(1)$ nodes per layer which cannot be
approximated by networks with $O(k)$ layers and $O(2^k)$ nodes \cite{Telgarsky2016}.
Moreover, Chatziafratis et al.\ provided a connection between
the representational power of neural networks and the periods
of the function via the well-known Sharkovsky theorem \cite{Chatziafratis2019}.
Furthermore, by revealing a tighter connection between
periods, the Lipschitz constant and the number of oscillations,
Chatziafratis et al.\ gave improved depth-width trade-offs \cite{Chatziafratis2020}.
In this work, we show a connection between
the representational power of neural networks and topological entropy,
a well-known quantity in dynamical systems that quantifies the complexity of
a system.
First, we provide an upper bound on the topological entropy of neural
networks with semi-algebraic units in terms of structure parameters such as depth and
width. For example, for a ReLU network with $l$ layers and $m$ nodes per layer,
the topological entropy is upper bounded by $O(l\log m)$.
Besides, if the neural network is a good approximation of some function $f$, then
its size has an exponential lower bound with respect to the topological
entropy of $f$. Furthermore, we discuss the connection between topological entropy, the
number of oscillations, periods and the Lipschitz constant.
\section{Preliminaries}
\subsection{Background on dynamical systems}
In this subsection, we introduce some basic facts about one-dimensional dynamical systems.
First, let us introduce the definition of
topological entropy. The topological entropy of a dynamical system
quantifies the complexity of the system, such as the number of
different orbits and the sensitivity of the evolution to the initial state.
There are several equivalent definitions of topological
entropy. Here we take the one introduced by Adler, Konheim, and McAndrew \cite{Adler65}.
Let $X$ be a compact Hausdorff space, $f$ be a continuous map from
$X$ to $X$.
Given a set $\mathcal{A}$ of subsets of $X$, if their union is $X$,
then $\mathcal{A}$ is called a cover of $X$. If each element in $\mathcal{A}$ is an open set, then $\mathcal{A}$
is called an open cover of $X$.
Given open covers $\mathcal{A}_1, \mathcal{A}_2,...,\mathcal{A}_n$ of $X$, we denote $\bigvee^{n}_{i=1}\mathcal{A}_i$ as follows,
\begin{eqnarray*}
\bigvee^{n}_{i=1}\mathcal{A}_i:=
\set{
A_1\cap A_2\cap\dots\cap A_n:
A_i\in \mathcal{A}_i , \forall i, ~\text{and}~~
A_1\cap A_2\cap\dots\cap A_n\neq\emptyset
}.
\end{eqnarray*}
Given an open cover
$\mathcal{A}$, we can define the open
cover $f^{-i}(\mathcal{A})$ and $\mathcal{A}^n_{f}$ as follows
\begin{eqnarray*}
f^{-i}(\mathcal{A})&:=&\set{f^{-i}(A): A\in \mathcal{A}},\\
\mathcal{A}^n_{f}&=&\bigvee^{n-1}_{i=0}f^{-i}(\mathcal{A}).
\end{eqnarray*}
Let us denote by
$\mathcal{N}(\mathcal{A})$ the minimal cardinality of
a subcover of $\mathcal{A}$. Mathematically, $\mathcal{N}(\mathcal{A})$ can be defined as follows
\begin{eqnarray*}
\mathcal{N}(\mathcal{A})
=\min\set{Card(\mathcal{B}): \mathcal{B}\subset \mathcal{A} ~~\text{and} ~~\mathcal{B}~~ \text{is a cover of $X$}},
\end{eqnarray*}
where $Card(\mathcal{B})$ denotes the cardinality of $\mathcal{B}$.
Now, we are ready to define topological entropy.
\begin{Def}\cite{Adler65}
Given a compact Hausdorff topological space $X$, and a continuous map $f:X\to X$,
for an open cover $\mathcal{A}$, the topological entropy of $f$
on the cover $\mathcal{A}$ is defined as
\begin{eqnarray*}
h_{top}(f,\mathcal{A})
=\lim_{n\to \infty}
\frac{1}{n}\log_2 \mathcal{N}(\mathcal{A}^{n}_f).
\end{eqnarray*}
The topological entropy of $f$ is defined as
\begin{eqnarray}
h_{top}(f)=\sup_{\mathcal{A}:~ \text{open cover of X}}
h_{top}(f,\mathcal{A}).
\end{eqnarray}
\end{Def}
The topological entropy takes values in $[0, +\infty]$. (See Fig.~\ref{fig:infty} for examples of functions with finite and infinite
topological entropy.)
Topological entropy has some nice properties, which are listed in Appendix \ref{sec:top}.
In this work, we consider the case where $X$ is a closed interval $[a, b]$ and
$f$ is a continuous function from $[a, b]$ to $[a,b]$.
For such interval maps, the topological entropy has several nice
characterizations. In this work, we will use the following one;
other characterizations are listed in Appendix \ref{sec:top}.
\begin{figure}[h]
\center{
\subfigure[]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{graph11}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{graph10-new}
\end{minipage}
}
}
\caption{Examples of functions with finite and infinite topological entropy.
(a) $g:[0,1]\to [0,1]$ with $h_{top}(g)=3$;
(b) $f:[0,1]\to [0,1]$ with $h_{top}(f)=+\infty$, where $f$ is conjugate to $g^n$ on the interval $[2^{-n}, 2^{-(n-1)}]$ for each
integer $n\geq 1$ and $f(0)=0$. (See the definition of conjugacy in Appendix \ref{sec:top}.)}
\label{fig:infty}
\end{figure}
\begin{Def}
A continuous function $f:[a,b]\to [a,b]$ is piecewise monotone
if there exists a finite partition of $[a,b]$ such that $f$ is monotone on each piece. Let us denote by $c(f)$
the minimal number of intervals of monotonicity of $f$.
\end{Def}
\begin{lem}\label{lem:monh}\cite{Misiurewicz80,Young81}
If the continuous function $f:[a,b]\to [a,b]$ is piecewise monotone, then
\begin{eqnarray*}
h_{top}(f)
=\lim_{k\to \infty}
\frac{1}{k}\log_2 c(f^k)=\inf_k \frac{1}{k}\log_2 c(f^k),
\end{eqnarray*}
where $c(f)$ is the number of intervals of monotonicity of $f$.
\end{lem}
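Lemma \ref{lem:monh} also suggests a simple numerical heuristic for estimating the topological entropy of a piecewise monotone map: sample $f^k$ on a fine grid, count the monotone laps of the sampled iterate, and report $\frac{1}{k}\log_2 c(f^k)$. The following Python sketch illustrates this idea; the grid size, the test map and all function names are illustrative choices rather than part of the results above, and the lap count is only correct when the grid resolves every turning point of $f^k$.
\begin{verbatim}
import numpy as np

def lap_number(y):
    # number of monotone pieces of a sampled function (heuristic:
    # assumes the grid resolves every turning point)
    d = np.diff(y)
    s = np.sign(d[d != 0])              # drop exactly flat steps
    if s.size == 0:
        return 1
    return 1 + int(np.sum(s[1:] != s[:-1]))

def entropy_estimates(f, a, b, k_max=10, n_grid=200001):
    # successive estimates (1/k) log2 c(f^k) of h_top(f)
    y = np.linspace(a, b, n_grid)
    out = []
    for k in range(1, k_max + 1):
        y = f(y)                         # y now samples f^k on the grid
        out.append(np.log2(lap_number(y)) / k)
    return out

if __name__ == "__main__":
    tent = lambda x: np.where(x <= 0.5, 2.0 * x, 2.0 * (1.0 - x))
    print(entropy_estimates(tent, 0.0, 1.0))   # approaches log2(2) = 1
\end{verbatim}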
Now let us introduce the definition of periods in a dynamical system.
\begin{Def}
A continuous function $f:[a,b]\to [a,b]$ has a point of period $n$ if
there exists $x_0\in [a,b]$ such that
\begin{eqnarray*}
f^n(x_0)&=&x_0,\\
f^i(x_0)&\neq& x_0,~~ \forall 1\leq i\leq n-1.
\end{eqnarray*}
The set $\set{x_0, f(x_0),...,f^{n-1}(x_0)}$ is called
an $n$-cycle of $f$.
\end{Def}
There is a well-known theorem called
Sharkovsky's Theorem, which describes the structure of the periods of cycles
of the interval map.
\begin{Def}[Sharkovsky's ordering]
Let us define Sharkovsky ordering as follows
\begin{eqnarray*}
&&3\vartriangleright 5 \vartriangleright7 \vartriangleright\cdots\vartriangleright\\
&\vartriangleright& 3\cdot 2\vartriangleright 5\cdot 2 \vartriangleright 7\cdot 2 \vartriangleright \cdots\vartriangleright\\
&\vartriangleright& 3\cdot 2^2\vartriangleright 5\cdot 2^2 \vartriangleright 7\cdot 2^2 \vartriangleright\cdots\vartriangleright\\
&&~~~~~~~~~~~~~~~~\vdots\\
&\vartriangleright& 3\cdot 2^n\vartriangleright 5\cdot 2^n \vartriangleright7\cdot 2^n \vartriangleright \cdots\vartriangleright\\
&&~~~~~~~~~~~~~~~~\vdots\\
&\vartriangleright& \cdots\vartriangleright 2^3 \vartriangleright 2^2 \vartriangleright 2\vartriangleright 1\\
\end{eqnarray*}
\end{Def}
Let us define $Per(f)$ to be the set of periods of cycles of a map $f:[a, b]\to [a, b]$ and denote $\mathbb{N}_{sh}=\mathbb{N}\cup \set{2^{\infty}}$.
Sharkovsky's Theorem tells us that Sharkovsky's ordering can be
used to characterize the periods of a continuous function as follows.
\begin{thm}\cite{Sharkovsky64,Sharkovsky65}
Given a continuous function $f:[a, b]\to [a, b]$, there exists
$s\in \mathbb{N}_{sh}$ such that
$Per(f)=\set{k\in\mathbb{N}: s\vartriangleright k}$. Conversely,
for any $s\in \mathbb{N}_{sh}$, there exists a continuous function
$f:[a,b]\to [a,b]$ such that
$Per(f)=\set{k\in\mathbb{N}: s\vartriangleright k}$.
\end{thm}
Next, let us give the definition of crossings (or oscillations); the relationship between
the number of crossings and periods has been considered in \cite{Chatziafratis2019,Chatziafratis2020}.
\begin{Def}
Given a continuous function $f:[a,b]\to [a,b]$ and $[x,y]\subset[a,b]$,
$f$ crosses $[x,y]$ if there exist $c,d\in[a,b]$ such that
$f(c)=x$ and $f(d)=y$. We use $C_{x,y}(f)$ to denote the number of times that $f$ crosses $[x,y]$, i.e., the largest $t$ such that
there exist $c_1,d_1<c_2,d_2<...<c_t,d_t$ with $f(c_i)=x$ and $f(d_i)=y$ for all $1\leq i\leq t$.
\end{Def}
Finally, let us introduce the concept of $f$-covering \cite{Alseda00}.
\begin{Def}[$f$-covering]
Given a continuous function $f:[a,b]\to [a,b]$ and two intervals $I_1,I_2\subset [a,b]$,
we say that $I_1$ $f$-covers $I_2$
if there exists a subinterval $J$ of $I_1$ such that
$f(J)=I_2$. Besides, we say that $I_1$ $f$-covers $I_2$ $t$ times if there
exist $t$ subintervals $J_1,..,J_t $ of $I_1$ with pairwise disjoint interiors such that
$f(J_i)=I_2$ for $i=1,...,t$.
\end{Def}
Based on the definitions of crossing and $f$-covering, it is easy to see that
$C_{x,y}(f)=t$ iff the maximal number of times that $[a,b]$ $f$-covers $[x,y]$ is equal to $t$.
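As a small illustration of this equivalence, the number of times a sampled map crosses an interval $[x,y]$ can be estimated by counting complete sweeps of the sampled values from one endpoint of $[x,y]$ to the other, each such sweep corresponding to one subinterval that $f$-covers $[x,y]$. The sketch below is a numerical heuristic on a finite grid; the grid size, the tolerance and the function names are illustrative choices and not part of the definitions above. The test map is the full tent map, whose $k$-th iterate covers $[0,1]$ exactly $2^k$ times.
\begin{verbatim}
import numpy as np

def crossings(values, lo, hi, eps=1e-6):
    # heuristic count of how many times a sampled map crosses [lo, hi]:
    # each completed sweep between the endpoints contributes one
    # subinterval that f-covers [lo, hi]
    count, target = 0, None
    for v in values:
        if target is None:                      # wait for a first endpoint hit
            if v <= lo + eps:
                target = hi
            elif v >= hi - eps:
                target = lo
        elif (target == hi and v >= hi - eps) or (target == lo and v <= lo + eps):
            count += 1
            target = lo if target == hi else hi
    return count

if __name__ == "__main__":
    y = np.linspace(0.0, 1.0, 2**17 + 1)
    for k in range(1, 5):
        # y now samples the k-th iterate of the full tent map
        y = np.where(y <= 0.5, 2.0 * y, 2.0 * (1.0 - y))
        print(k, crossings(y, 0.0, 1.0))        # expected: 2**k
\end{verbatim}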
\subsection{Neural networks with semi-algebraic units}
A neural network is a function defined by a connected directed graph with
some activation function $\sigma: \mathbb{R}\to\mathbb{R} $ and a set of parameters:
a weight for each edge and a bias for each node of the graph.
Usually the activation function $\sigma: \mathbb{R}\to\mathbb{R} $ is a nonlinear function.
The root nodes perform the computation on the input vector, while the
internal nodes perform computations on the outputs of other nodes.
The activation functions of different nodes may differ, and there are two common choices:
(1) the ReLU gate: $\vec{x}\to \sigma_R(\langle\vec{a},\vec{x}\rangle+b)$, where $\sigma_R(x)=\max\set{0,x}$;
(2) the maximization gate $Max$: $\vec{x}\to \max^n_{i=1} x_i$.
Here we consider an important class of activation functions, called semi-algebraic units (or semi-algebraic gates) \cite{Telgarsky2016}.
The definition of a semi-algebraic gate is given as follows.
\begin{Def}
A function $\sigma: \mathbb{R}^n \to \mathbb{R}$ is called $(t, d_1, d_2)$ semi-algebraic if there exist
$t$ polynomials $\set{p_i}^{t}_{i=1}$ of degree $\leq d_1$ and
$s$ triples $(L_j, U_j, q_j)^{s}_{j=1}$,
where $L_j$ and $U_j$ are subsets of $\set{1,2,....,t}$ and
each $q_j$ is a polynomial of degree $\leq d_2$,
such that
\begin{eqnarray}
\sigma(\vec{x})
=\sum^{s}_{j=1}
q_j(\vec{x}) \left(\Pi_{i\in L_j}
\mathbb{I}(p_i(\vec{x})<0)\right)
\left(\Pi_{i\in U_j}
\mathbb{I}(p_i(\vec{x})\geq 0)\right),
\end{eqnarray}
where $\mathbb{I}(\cdot)$ is the indicator function.
\end{Def}
Here, we are interested in continuous semi-algebraic units, that is, functions $\sigma: \mathbb{R}^n \to \mathbb{R}$ which are both continuous
and semi-algebraic.
For example, the standard
ReLU gate $\vec{x}\to \sigma_R(\langle\vec{a},\vec{x}\rangle+b)$
is a continuous and $(1,1,1)$ semi-algebraic unit \cite{Telgarsky2016}.
The maximization gate $Max: \mathbb{R}^n\to \mathbb{R}$ defined as $Max(\vec{x})=\max^n_{i=1} x_i$
is a continuous and $(n(n-1),1,1)$ semi-algebraic unit \cite{Telgarsky2016}.
\begin{Def}
A function $\sigma:\mathbb{R} \to \mathbb{R} $ is called
$(t, d)$-poly if there exists a partition
of $\mathbb{R}$ into $\leq t$ intervals such that $\sigma$
is a polynomial of degree $\leq d$ on each interval.
\end{Def}
Denote by $\mathcal{N}_n(l,m,t,d_1,d_2)$
the set of neural networks with $\leq l$ layers,
$\leq m$ nodes per layer, activation functions that
are continuous and $(t, d_1, d_2)$ semi-algebraic, and input dimension $n$.
As the function
$f$ we would like to represent
is a continuous function $f:[a,b]\to [a,b]$,
we consider neural networks with
input dimension $1$, i.e., $\mathcal{N}_1(l,m,t,d_1,d_2)$.
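To make these objects concrete, the following Python sketch builds a randomly weighted network in $\mathcal{N}_1(l,m,1,1,1)$ (a one-dimensional ReLU network), clips its output to a target interval (this plays the role of the truncation map $\tau$ introduced in the next section), and counts its intervals of monotonicity on a grid. The weights, the interval $[0,1]$, the grid size and the function names are illustrative assumptions, not constructions from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(l, m):
    # a random network in N_1(l, m, 1, 1, 1): l hidden ReLU layers,
    # m nodes per layer, scalar input and output (illustrative weights)
    dims = [1] + [m] * l + [1]
    params = [(rng.normal(size=(dims[i + 1], dims[i])),
               rng.normal(size=dims[i + 1]))
              for i in range(len(dims) - 1)]
    def g(x):                                   # x: 1-D array of inputs
        h = x[None, :]
        for i, (W, b) in enumerate(params):
            h = W @ h + b[:, None]
            if i < len(params) - 1:             # ReLU on hidden layers only
                h = np.maximum(h, 0.0)
        return h[0]
    return g

def lap_number(y):
    d = np.diff(y)
    s = np.sign(d[d != 0])
    return 1 if s.size == 0 else 1 + int(np.sum(s[1:] != s[:-1]))

if __name__ == "__main__":
    l, m = 3, 8
    g = random_relu_net(l, m)
    x = np.linspace(0.0, 1.0, 200001)
    y = np.clip(g(x), 0.0, 1.0)                 # plays the role of tau o g
    print("monotone pieces:", lap_number(y),
          "  bound 2*(2m)^l =", 2 * (2 * m) ** l)
\end{verbatim}
The observed number of monotone pieces is typically far below the worst-case value, which is consistent with the fact that the results below are upper bounds on the achievable complexity.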
\section{Informal statement of our main results}
Our first result shows the connection between topological entropy
and the depth and width of deep neural networks, and provides an upper bound
on the topological entropy of neural networks with continuous semi-algebraic units in terms of the
structure parameters.
\begin{thm}[Informal version of Theorem \ref{thm:main1}]
For any neural network $g$ with
$l$ layers, $m$ nodes per layer and
$(t, d_1, d_2)$ semi-algebraic units
as activation functions,
we have
\begin{eqnarray}
h_{top}(\tau\circ g )\leq l(1+\log_2m+\log_2t+\log_2d_1)
+2l^2\log_2 d_2,
\end{eqnarray}
where
$\tau: \mathbb{R} \to \mathbb{R}$ is defined as
follows (see Figure \ref{fig1}):
\begin{equation}
\tau(x)=\left\{
\begin{array}{ll}
a, & x \leq a,\\
x, & a\leq x\leq b,\\
b, & x>b.
\end{array}
\right.
\end{equation}
\end{thm}
\begin{figure}[!h]
\center{\includegraphics[width=5.5cm] {graph1}}
\caption{The graph of the function $\tau(x)$.}
\label{fig1}
\end{figure}
Our second result shows the connection between the topological entropy of a given function $f$ and the
depth-width trade-offs required to have a good approximation of $f$.
\begin{thm}[Informal statement of Theorem \ref{thm:main2}]
Given a continuous function
$f:[a,b]\to [a,b]$ with positive and finite topological entropy,
if $g$ is a good approximation of $f$ with respect to $\norm{\cdot}_{L^{\infty}}$, where
$g$ is a
neural network with
$l$ layers, $m$ nodes per layer and
$(t, d_1, d_2)$ semi-algebraic units
as activation functions, then we have
\begin{eqnarray}
m\geq \frac{\exp(\Omega(\frac{1}{l}h_{top}(f)))}{2td_1d^l_2}.
\end{eqnarray}
Hence, if the neural network $g$ is a good approximation of $f^k$ with respect to $\norm{\cdot}_{L^{\infty}}$, then we have
\begin{eqnarray}
m\geq \frac{\exp(\Omega(\frac{k}{l}h_{top}(f)))}{2td_1d^l_2}.
\end{eqnarray}
\end{thm}
Our third result discusses the connection between the topological entropy, periods, the number of
oscillations and Lipschitz constant.
\section{ Connection between topological entropy and the size of neural networks}
First, let us consider the topological entropy of
the neural networks with $l$ layers, $m$ nodes per layer and
activation function being $(t,d_1,d_2)$ semi-algebraic and continuous, i.e., the functions from
$\mathcal{N}_1(l,m,t,d_1,d_2)$.
Let us define $\tau:\mathbb{R}\to\mathbb{R}$ as follows
\begin{equation*}
\tau(x)=\left\{
\begin{array}{ll}
a,& x \leq a,\\
x,& a\leq x\leq b,\\
b,& x>b.
\end{array}
\right.
\end{equation*}
We can rewrite $\tau(x)$ as follows
\begin{eqnarray}
\tau(x)=
a+
(x-a)\mathbb{I}(x>a)
+(b-x)\mathbb{I}(x>b).
\end{eqnarray}
Hence $\tau$ is continuous and $(2,1,1)$ semi-algebraic.
Therefore, for any $g\in\mathcal{N}_1(l,m,t,d_1,d_2) $,
the function $\tau\circ g$ is a continuous function
from $[a,b]$ to $[a,b]$. Thus, we can compute the
topological entropy of
$\tau\circ g$.
To get an upper bound on the topological entropy of neural networks, we first
need the following lemma, which gives an upper bound on the number of intervals of monotonicity of $f$.
\begin{lem}\label{lem:pm}
If the function
$ f :[a,b]\to [a,b]$ is continuous and $(t,d)$-poly, we have
\begin{eqnarray}
c(f)\leq td.
\end{eqnarray}
\end{lem}
\begin{proof}
Since $ f :[a,b]\to [a,b]$ is $(t,d)$-poly, there exists a partition
of the interval $[a,b]$ into subintervals $\set{J_i}^t_{i=1}$ such that $f$ is a polynomial of degree
$\leq d$ on each subinterval $J_i$.
It is immediate that, for any polynomial of degree
$\leq d$, we can divide $\mathbb{R}$ into
$\leq d$ intervals such that this polynomial
is monotone on each piece.
Hence, we can divide each
subinterval $J_i$ into at most $d$ pieces such that
$f $ is monotone on each piece.
Thus
\begin{eqnarray*}
c(f)\leq td.
\end{eqnarray*}
\end{proof}
Now, we are ready to prove our first result, which gives an upper bound on the topological entropy of the
neural networks by the structure parameters of the neural networks.
\begin{thm}\label{thm:main1}
For any $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$,
the
topological entropy of the function $\tau\circ g:[a,b]\to [a,b]$
is upper bounded
by the structure parameters as follows
\begin{eqnarray}
h_{top}(\tau\circ g )
\leq
l(1+\log_2m+\log_2t+\log_2d_1)
+2l^2\log_2 d_2.
\end{eqnarray}
\end{thm}
\begin{proof}
It has been proved that if the function
$f:\mathbb{R}^n\to \mathbb{R} $ is $(t, d_1, d_2)$ semi-algebraic
and $g_1,...,g_n:\mathbb{R} \to \mathbb{R}$ are $(s, d_3)$-poly, then
$\mu(x):=f(g_1(x),...,g_n(x))$ is
$(stn(1+d_1d_3), d_2d_3)$-poly \cite{Telgarsky2016}.
Thus, by analyzing the neural network layer by layer, for any $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$,
$\tau\circ g $ is $(\alpha_l,\beta_l)$-poly, where
\begin{eqnarray*}
\alpha_l&\leq& 2(2mtd_1)^ld^{\frac{1}{2}l^2+l}_2,\\
\beta_l&\leq& d^l_2.
\end{eqnarray*}
Therefore, by Lemma \ref{lem:pm}, we have
\begin{eqnarray*}
c(\tau\circ g )
\leq 2(2mtd_1)^ld^{2l^2}_2.
\end{eqnarray*}
By Lemma \ref{lem:monh},
we have
\begin{eqnarray*}
\lim_k\frac{1}{k}\log_2 c(f^k)
=\inf_k \frac{1}{k}\log_2 c(f^k)
=h_{top}(f),
\end{eqnarray*}
which implies that
\begin{eqnarray*}
c(f)\geq 2^{h_{top}(f)}.
\end{eqnarray*}
Applying this with $f=\tau\circ g$ and combining it with the above bound on $c(\tau\circ g)$,
we obtain
\begin{eqnarray*}
h_{top}(\tau\circ g)
\leq l(1+\log_2m+\log_2t+\log_2d_1)
+2l^2\log_2 d_2.
\end{eqnarray*}
\end{proof}
Next,
to get the relationship between topological entropy
of the function $f$ and that of the neural networks, we
need to consider the continuity of the topological entropy.
\begin{lem}\label{lem:semcon}\cite{Misiurewicz79hor}
For any continuous function
$f:[a,b]\to [a,b]$, it holds that
\begin{eqnarray}
\liminf_{g\to f}
h_{top}(g)\geq
h_{top}(f),
\end{eqnarray}
where $g:[a,b]\to [a,b]$ is continuous and $g\to f$ in the
$L^{\infty}$ norm.
\end{lem}
Based on the lower semi-continuity of
topological entropy, if the
given function $f$ has finite topological entropy, then for any $\epsilon>0$
there exists $\delta>0$ such that
for any continuous function $g:[a,b]\to [a,b]$ with
$\norm{f-g}_{L^{\infty}}<\delta$, we have
\begin{eqnarray*}
h_{top}(g)
\geq h_{top}(f)-\epsilon.
\end{eqnarray*}
If $0<h_{top}(f)<+\infty$, taking $\epsilon=\frac{1}{2}h_{top}(f)$, there exists
$\delta(f)>0$ such that for any continuous function $g$ with $\norm{f-g}_{L^{\infty}}<\delta(f)$, we have
\begin{eqnarray*}
h_{top}(g)
\geq \frac{1}{2}h_{top}(f).
\end{eqnarray*}
\begin{thm}\label{thm:main2}
Given a continuous function
$f:[a,b]\to [a,b]$ with positive and finite topological entropy,
there exists $\delta(f)>0$ such that
for any $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$ with $\norm{f-g}_{L^{\infty}}\leq \delta(f)$, we have
\begin{eqnarray}
m\geq
\frac{2^{\frac{1}{2l}h_{top}(f)}}{2td_1d^{2l}_2}.
\end{eqnarray}
\end{thm}
\begin{proof}
First, based on Lemma \ref{lem:semcon},
there exists $\delta(f)>0$
such that for any continuous function
$G:[a, b]\to [a,b]$ with $\norm{f-G}_{L^{\infty}}\leq \delta(f)$, we have
\begin{eqnarray*}
h_{top}(G)
\geq \frac{1}{2}h_{top}(f).
\end{eqnarray*}
Besides, it is easy to see that $\tau$ is a Lipschitz function with
$|\tau(x)-\tau(y)|\leq |x-y|$.
Hence, for any
$g\in \mathcal{N}_1(l,m,t,d_1,d_2)$ with $\norm{f-g}_{L^{\infty}}\leq \delta(f)$, since $f$ takes values in $[a,b]$ and hence $\tau\circ f=f$, we have
\begin{eqnarray*}
\norm{\tau\circ g-f}_{L^{\infty}}\leq \norm{g-f}_{L^{\infty}}\leq \delta(f).
\end{eqnarray*}
Then the topological entropy of $\tau\circ g:[a,b]\to [a,b]$ has the following
lower bound,
\begin{eqnarray*}
h_{top}(\tau\circ g)
\geq \frac{1}{2}h_{top}(f).
\end{eqnarray*}
On the other hand, due to Theorem \ref{thm:main1}, for any $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$
we have
\begin{eqnarray*}
h_{top}(\tau\circ g )\leq l(1+\log_2m+\log_2t+\log_2d_1)
+2l^2\log_2 d_2.
\end{eqnarray*}
Therefore, we have
\begin{eqnarray*}
\frac{1}{2}h_{top}(f)
\leq l(1+\log_2m+\log_2t+\log_2d_1)
+2l^2\log_2 d_2.
\end{eqnarray*}
That is,
\begin{eqnarray*}
m\geq
\frac{2^{\frac{1}{2l}h_{top}(f)}}{2td_1d^{2l}_2}.
\end{eqnarray*}
\end{proof}
Theorem \ref{thm:main2} tells us that if
the neural network $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$
is a good approximation of $f$ (i.e., $\norm{f-g}_{L^{\infty}}\leq \delta(f)$),
then the width $m$ has an exponential lower bound with respect to
the topological entropy.
Besides, if we iterate the function $k$ times, i.e., we consider
$f^k$, and the neural network $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$ is a good approximation of
$f^k$, we have the following corollary.
\begin{cor}
Given a continuous function
$f:[a,b]\to [a,b]$ with positive and finite topological entropy,
there exists $\delta(f^k)>0$ such that
for any $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$ with $\norm{f^k-g}_{L^{\infty}}\leq \delta(f^k)$, we have
\begin{eqnarray}
m\geq
\frac{2^{\frac{k}{2l}h_{top}(f)}}{2td_1d^{2l}_2}.
\end{eqnarray}
\end{cor}
\begin{proof}
This corollary follows directly from
Theorem \ref{thm:main2} and the identity
$h_{top}(f^k)=kh_{top}(f)$ for any integer $k\geq 0$. (See Lemma \ref{lem:k}
in Appendix \ref{sec:top}.)
\end{proof}
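To give a feeling for the numbers produced by this corollary, the sketch below evaluates the lower bound for a ReLU network ($t=d_1=d_2=1$) approximating the $k$-th iterate of a map with $h_{top}(f)=1$; the chosen values of $k$ and $l$ and the function name are arbitrary illustrations.
\begin{verbatim}
from math import ceil

def width_lower_bound(h_top, k, l, t=1, d1=1, d2=1):
    # m >= 2^{k h_top/(2l)} / (2 t d1 d2^{2l}), as in the corollary above
    return 2.0 ** (k * h_top / (2 * l)) / (2 * t * d1 * d2 ** (2 * l))

if __name__ == "__main__":
    for l in (2, 5, 10):
        print("l =", l, "  m >=", ceil(width_lower_bound(1.0, k=40, l=l)))
\end{verbatim}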
For example, if we take the activation function to be
the ReLU unit, which is continuous and $(1,1,1)$ semi-algebraic, then the following
statements follow directly from Theorems \ref{thm:main1} and \ref{thm:main2}.
\begin{prop}
For any ReLU network $g$ with at most $l$ layers and at most $m$ nodes per layer, we have
\begin{eqnarray}
h_{top}(\tau\circ g)
\leq l(1+\log_2m).
\end{eqnarray}
\end{prop}
\begin{prop}
Given a continuous function
$f:[a,b]\to [a,b]$ with positive and finite topological entropy,
there exists $\delta(f)>0$ such that
for any ReLU network $g$ with at most $l$ layers and at most $m$ nodes per layer which satisfies
$\norm{f-g}_{L^{\infty}}\leq \delta(f)$, we have
\begin{eqnarray}
m\geq 2^{\frac{1}{2l}h_{top}(f)-1}.
\end{eqnarray}
Moreover, if $g$ is
a good approximation of
$f^k$ with respect to the $L^{\infty}$ norm, i.e., $\norm{f^k-g}_{L^{\infty}}\leq \delta(f^k)$, then
we have
\begin{eqnarray}
m\geq 2^{\frac{k}{2l}h_{top}(f)-1}.
\end{eqnarray}
\end{prop}
If the function $f$ we would like to represent has infinite topological entropy, i.e., $h_{top}(f)=+\infty$,
then, due to the lower semi-continuity
of topological entropy, for any $N>0$ there exists $\delta_{N}(f)>0$ such that
for any continuous function $g: [a,b]\to [a,b]$
with $\norm{g-f}_{L^{\infty}}<\delta_N(f)$,
\begin{eqnarray}
h_{top}(g)\geq N.
\end{eqnarray}
\begin{prop}
Given a continuous function
$f:[a,b]\to [a,b]$ with $h_{top}(f)=+\infty$, for
any $N>0$ there exists
$\delta_N(f)>0$ such that
for any $g\in \mathcal{N}_1(l,m,t,d_1,d_2)$ with $\norm{f-g}_{L^{\infty}}<\delta_N(f)$, we have
\begin{eqnarray}
m\geq
\frac{2^{N/l}}{2td_1d^{2l}_2}.
\end{eqnarray}
\end{prop}
\begin{proof}
The proof is the same as that of Theorem \ref{thm:main2}.
\end{proof}
\subsection{Examples}
First,
let us consider the tent map $t_{\alpha}:[0,1]\to [0,1]$, where $t_{\alpha}(x)$ is defined as follows
\begin{equation*}
t_{\alpha}(x)=\left\{
\begin{array}{ll}
\alpha x, & 0\leq x\leq 1/2,\\
\alpha(1-x), & 1/2<x\leq 1,
\end{array}
\right.
\end{equation*}
where $0\leq \alpha\leq 2$. (See Figure \ref{fig:tent})
\begin{figure}[h]
\center{
\subfigure[Tent map $t_{\alpha}$]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{graph6-new}
\end{minipage}
}
\subfigure[$t^4_{\alpha}$]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{graph7-1}
\end{minipage}
}
}
\caption{Tent map $t_{\alpha}$ and $t^4_{\alpha}$ with different parameters $\alpha$.}
\label{fig:tent}
\end{figure}
The topological entropy of $t_{\alpha}$ can be easily computed by Lemma \ref{lem:slope}, and we have
\begin{equation*}
h_{top}(t_{\alpha})=\left\{
\begin{array}{ll}
0, & 0\leq \alpha \leq 1,\\
\log_2\alpha, & 1<\alpha\leq 2.
\end{array}
\right.
\end{equation*}
(See Figure \ref{fig:toptent}.)
\begin{figure}[h]
\center{\includegraphics[width=5.5cm] {graph5}}
\caption{The topological entropy of the tent map $t_{\alpha}$ for $0<\alpha\leq 2$.}
\label{fig:toptent}
\end{figure}
Hence, based on Theorem \ref{thm:main2}, if we would like to have a good approximation
of $t^{k}_{\alpha}$ for $\alpha>1$, then
the width required to approximate $t^{k}_{\alpha}$ with continuous and $(t,d_1,d_2)$ semi-algebraic units satisfies
\begin{eqnarray*}
m\geq C(t,d_1,d_2)\alpha^{k/l},
\end{eqnarray*}
where $C(t,d_1,d_2)$ is a constant which only depends on
$t,d_1,d_2$.
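A quick numerical sanity check of this example, reusing the lap-counting heuristic sketched after Lemma \ref{lem:monh} (again with illustrative grid sizes and function names), is to estimate $h_{top}(t_\alpha)$ from the lap numbers of the iterates and compare it with $\log_2\alpha$:
\begin{verbatim}
import numpy as np

def lap_number(y):
    d = np.diff(y)
    s = np.sign(d[d != 0])
    return 1 if s.size == 0 else 1 + int(np.sum(s[1:] != s[:-1]))

def tent(alpha):
    return lambda x: np.where(x <= 0.5, alpha * x, alpha * (1.0 - x))

if __name__ == "__main__":
    k = 12
    for alpha in (1.3, 1.7, 2.0):
        y = np.linspace(0.0, 1.0, 400001)
        t_a = tent(alpha)
        for _ in range(k):
            y = t_a(y)                  # y samples the k-th iterate of t_alpha
        print(alpha, np.log2(lap_number(y)) / k, "vs", np.log2(alpha))
\end{verbatim}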
Next,
let us consider the logistic map
$f_{\beta}:[0,1]\to [0,1]$ defined as follows
\begin{eqnarray*}
f_{\beta}(x)=\beta x(1-x),
\end{eqnarray*}
where the parameter $\beta$ is taken from $[0,4]$ (see Figure \ref{fig:log}).
The logistic map has been used to get lower bounds on the size of sigmoidal neural networks \cite{Schmitt1999}.
\begin{figure}[h]
\center{
\subfigure[Logistic map $f_{\beta}$]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{graph8-new}
\end{minipage}
}
\subfigure[$f^4_{\beta}$ ]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=5.5cm]{graph9-new}
\end{minipage}
}
}
\caption{Logistic map $f_{\beta}$ and $f^4_{\beta}$ with different parameters $\beta$.}
\label{fig:log}
\end{figure}
It is easy to see that $ h_{top}(f_\beta)=1$ when $\beta=4$, and $h_{top}(f_\beta)=0$ when
$\beta=2$. Hence, based on Theorem \ref{thm:main2}, if we would like to have a good approximation
of $f^{k}_{4}$, then
the width required to approximate $f^{k}_{4}$ with continuous and $(t,d_1,d_2)$ semi-algebraic units satisfies
\begin{eqnarray*}
m\geq C(t,d_1,d_2)2^{k/l},
\end{eqnarray*}
where $C(t,d_1,d_2)$ is a constant which only depends on
$t,d_1,d_2$.
\section{Relationship between topological entropy and periods, number of crossings and Lipschitz constant }
In this section, we will discuss the connection between
topological entropy and periods, number of crossings and Lipschitz constant.
\subsection{Relationship between topological entropy and periods, the number of crossings }
In fact, the relationship between topological entropy and periods has been discussed in \cite{Alseda00}, which has the following statement.
\begin{lem}[\cite{Alseda00}]
Given a continuous map $f:[a,b]\to [a,b]$, it has positive topological entropy
iff it has a cycle of
period which is not a power of 2.
\end{lem}
In this subsection, we will show the connection between
topological entropy and the number of crossings for a piecewise monotone function
$f:[a,b]\to [a,b]$.
Let us define $C(f)$ as follows
\begin{eqnarray}
C(f):=\sup_{x<y} C_{x,y}(f),
\end{eqnarray}
which is the
maximal number of
crossings over any interval $[x,y]\subset[a,b]$. We find the relationship between
the maximal number of crossings $C(f)$ and
topological entropy $h_{top}(f)$ in the asymptotic case.
\begin{prop}
Given a continuous function $f:[a,b]\to [a,b]$ which is piecewise monotone,
we have
\begin{eqnarray}
\limsup_{k\to\infty}
\frac{1}{k}\log_2 C(f^k)
=h_{top}(f).
\end{eqnarray}
\end{prop}
\begin{proof}
First, since
$f$ is piecewise monotone,
there exists a finite partition of $[a,b]$
into subintervals such that
$f$ is monotone on each subinterval.
On any subinterval where $f$ is monotone,
there is at most one crossing over $[x,y]$.
Thus,
for any $x,y\in [a,b]$, we have
\begin{eqnarray*}
C_{x,y}(f)
\leq c(f),
\end{eqnarray*}
i.e., $C(f)\leq c(f)$. The same argument applies to every iterate $f^k$.
Therefore,
\begin{eqnarray*}
\limsup_{k\to\infty}
\frac{1}{k}\log_2 C(f^k)
\leq \lim_{k\to\infty} \frac{1}{k}\log_2 c(f^k)
=h_{top}(f).
\end{eqnarray*}
Besides,
if $h_{top}(f)=0$, then we already obtain the result, since
\begin{eqnarray*}
\limsup_{k\to\infty}
\frac{1}{k}\log_2 C(f^k)\geq 0.
\end{eqnarray*}
Hence, we only need to consider the case where
$h_{top}(f)>0$.
Let us introduce the concept of an $s$-horseshoe \cite{Misiurewicz79hor,Misiurewicz80hor},
which is
an interval $J\subset [a,b]$ together with a partition $\mathcal{D}$ of
$J$ into $s$ subintervals such that
the closure of each element of $\mathcal{D}$
$ f$-covers $J$.
It has been proved in \cite{Misiurewicz79hor,Misiurewicz80hor} that there exist sequences $\set{k_n}^{\infty}_{n=1}$ and
$\set{s_n}^{\infty}_{n=1}$ of positive integers such that
$\lim_{n\to \infty}k_n=\infty$ and, for each $n$, there exists an $s_n$-horseshoe $(J_n, D_n)$ for $f^{k_n}$ such that
\begin{eqnarray*}
\lim_{n\to \infty}\frac{1}{k_n}
\log_2 s_n
=h_{top}(f).
\end{eqnarray*}
Based on the definition of an $s_n$-horseshoe, for the map $f^{k_n}$,
the closure of each subinterval in $D_n$ $f^{k_n}$-covers $J_n$.
Thus, based on the definition of crossings, we have
\begin{eqnarray*}
C(f^{k_n})\geq C_{J_n}(f^{k_n})\geq
s_n.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\limsup_{k\to\infty}
\frac{1}{k}\log_2 C(f^k)
\geq \lim_{n\to \infty}\frac{1}{k_n}
\log_2 s_n
=h_{top}(f).
\end{eqnarray*}
\end{proof}
\subsection{Relationship between topological entropy and Lipschitz constant}
Let us consider the connection between Lipschitz constant and topological entropy.
Let us denote the Lipschitz constant of $f$ by $L(f)$, that is
\begin{eqnarray}
L(f)
=\inf\set{L\geq 0: |f(x)-f(y)|\leq L|x-y|, \forall x,y\in[a,b]}.
\end{eqnarray}
The connection between periods, the number of crossings and Lipschitz constant has been discussed in \cite{Chatziafratis2020}.
It has been proved that
if the Lipschitz constant matches the number of crossings, i.e., $C_{x,y}(f^k)=L(f^k)$, then
an $L^1$-separation between $f^k$ and ReLU neural networks can be obtained \cite{Chatziafratis2020}.
Here we discuss the relationship between Lipschitz constant and
topological entropy.
\begin{prop}\label{prop:LipT}
Given a continuous function $f:[a,b]\to [a,b]$ which is piecewise monotone,
we have
\begin{eqnarray}
\lim_{k\to\infty}
\frac{1}{k}\log_2 L(f^k)
=\inf_k \frac{1}{k}\log_2 L(f^k),
\end{eqnarray}
and
\begin{eqnarray}
\lim_{k\to \infty}
\max \set{0, \frac{1}{k}\log_2 L(f^k)}
\geq h_{top}(f).
\end{eqnarray}
\end{prop}
\begin{proof}
Based on the definition of the Lipschitz constant,
it is easy to see that
\begin{eqnarray*}
|f^{n+k}(x)-f^{n+k}(y)|
&=&|f^{n}(f^{k}(x))-f^{n}(f^{k}(y))|\\
&\leq& L(f^n) |f^k(x)-f^k(y)|\\
&\leq& L(f^n) L(f^k) |x-y|,
\end{eqnarray*}
for any integers $n,k$ and any $x,y\in [a,b]$.
Thus,
\begin{eqnarray}
L(f^{n+k})
\leq L(f^n) L(f^k),
\end{eqnarray}
i.e., $\log_2 L(f^{n+k})\leq\log_2 L(f^{n})+\log_2 L(f^{k})$.
Hence $\set{\log_2 L(f^k)}_k$
is a subadditive sequence.
Therefore, according to
Lemma \ref{lem:sub} in Appendix \ref{sec:top}, the limit
\begin{eqnarray*}
\lim_{k\to\infty}\frac{1}{k}\log_2 L(f^k)
\end{eqnarray*}
exists and
\begin{eqnarray*}
\lim_{k\to \infty}\frac{1}{k}\log_2 L(f^k)
=\inf_{k}\frac{1}{k}\log_2 L(f^k).
\end{eqnarray*}
Let us use another characterization of the topological entropy of a piecewise monotone function, via the
variation \cite{Alseda00}, namely
\begin{eqnarray*}
\lim_{k\to \infty}
\max \set{0, \frac{1}{k}\log_2 Var(f^k)}
=h_{top}(f),
\end{eqnarray*}
where the variation $Var(f)$ is defined to be
the supremum of
\begin{eqnarray*}
\sum^{t-1}_{i=1}
|f(x_{i+1})-f(x_i)|,
\end{eqnarray*}
over all finite sequences $x_1<x_2<....<x_t$ in $[a,b]$.
(See Lemma \ref{lem:var} in Appendix \ref{sec:top}.)
Due to the definition
of $Var(f)$, it is easy to see that
\begin{eqnarray*}
Var(f^k)
\leq L(f^k)|b-a|.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\lim_{k\to \infty}\frac{1}{k}\log_2 L(f^k)
\geq \lim_{k\to \infty}\frac{1}{k}\log_2 Var(f^k),
\end{eqnarray*}
which implies that
\begin{eqnarray*}
\lim_{k\to \infty}
\max \set{0, \frac{1}{k}\log_2 L(f^k)}
\geq h_{top}(f).
\end{eqnarray*}
\end{proof}
Based on Proposition \ref{prop:LipT}, if $L(f^k)\geq 1$, then $L(f^k)$ has
an exponential lower bound with respect to the topological entropy of
$f$, i.e.,
$L(f^{k})\geq 2^{kh_{top}(f)}$.
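For example, for the tent map $T(x)=1-|2x-1|$ on $[0,1]$ the iterate $T^k$ is piecewise affine with slope $\pm 2^k$, so that $L(T^k)=2^k=2^{k h_{top}(T)}$; hence the exponential lower bound above cannot be improved in general.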
\section{Conclusion}
In this paper, we have investigated the relationship between topological entropy and the
expressivity of deep neural networks.
We provide depth-width
trade-offs based on topological entropy from the theory of dynamical systems.
For example,
the topological entropy of the ReLU network with $l$ layers and $m$ nodes per layer is upper bounded by $O(l\log m)$.
Besides, we show that the size of the neural network required to represent a given function has an exponential lower bound with respect to
the topological entropy of the function, and
the exponential lower bound holds for
$L^{\infty}$-error approximation.
For example, if we would like to represent the function $f$ by a ReLU network with $l$ layers and $m$ nodes per layer,
then the width $m$ has a lower bound $\exp(\Omega(h_{top}(f)/l))$.
Moreover, we discuss the relationship between
topological entropy, periods, Lipschitz constant and the number of crossings,
especially the relationship in the asymptotic case.
Note that one key step in obtaining the exponential lower bound on the size of neural networks for $L^{\infty}$-error approximation
is the lower semi-continuity of
topological entropy with respect to the
$L^{\infty}$ norm.
If the lower semi-continuity of
topological entropy with respect to an $L^p$ norm (e.g., the $L^1$ norm) holds, it
will lead to an exponential lower bound (with respect to topological entropy)
for $L^p$-error approximation.
Further studies on
the (semi-)continuity of topological entropy are therefore desirable.
Besides,
it would be quite interesting to study the
relationship between topological entropy and the
VC dimension; we leave this for future work.
\section{Acknowledgments}
K. B. thanks Arthur Jaffe for the help and support and thanks Weichen Gu for the discussion on topological entropy.
K. B. acknowledges the support of
ARO Grants W911NF-19-1-0302 and
W911NF-20-1-0082.
\begin{equation}gin{thebibliography}{99}
\bibitem[ABMM16]{Arora2016}
Raman Arora, Amitabh Basu, Poorya Mianjy, and Anirbit Mukherjee,
{Understanding deep neural networks with rectified linear units},
{arXiv:1611.01491.}
\bibitem[AKM65]{Adler65}
R. L. Adler, A. G. Konheim and M. H. McAndrew,
{Topological entropy},
{\emph{Transactions of the American Mathematical Society },
\textbf{114}(1965), 309--319.
}
\bibitem[ALM00]{Alseda00}
Llu\'is Alsed\`a, Jaume Llibre, and Micha\l~Misiurewicz,
{Combinatorial dynamics and entropy in dimension one},
(2000).
\bibitem[Bar94]{Barron1994}
Andrew R. Barron,
{Approximation and estimation bounds for artificial neural networks},
{\emph{Machine Learning},
\textbf{14(1)}(1994), 115--133.
}
\bibitem[CNPW19]{Chatziafratis2019}
Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, and Xiao Wang,
{Depth-width trade-offs for relu networks via sharkovsky's theorem},
{arXiv:1912.04378}.
\bibitem[CNP20]{Chatziafratis2020}
Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas,
{Better depth-width trade-offs for neural networks through the lens of dynamical systems},
{arXiv:2003.00777}.
\bibitem[Cyb89]{Cybenko1989}
George Cybenko,
{Approximation by superpositions of a sigmoidal function},
{\emph {Mathematics of Control, Signals and Systems},
\textbf{2(4)}(1989), 303--314.
}
\bibitem[DB11]{Shawe11}
Olivier Delalleau and Yoshua Bengio,
{Shallow vs. deep sum-product networks},
{In \emph{ Advances in Neural Information Processing Systems},
(2011), 666-674.
}
\bibitem[ES16]{Eldan16}
Ronen Eldan and Ohad Shamir,
{The power of depth for feedforward neural networks},
{In \emph{Conference of learning theory},
(2016), 907--940.
}
\bibitem[Fun89]{Funahashi1989}
Ken-Ichi Funahashi,
{On the approximate realization of continuous mappings by neural networks},
{\emph{Neural Networks},
\textbf{2(3)}(1989), 183--192.
}
\bibitem[Has86]{Hastad1986}
John Hastad,
{Almost optimal lower bounds for small depth circuits},
{In \emph{Proceedings of the
eighteenth annual ACM symposium on Theory of computing},
ACM (1986), 6--20.
}
\bibitem[H\r{a}s87]{Hastad87}
Johan H\r{a}stad,
{Computational limitations of small-depth circuits},
{(1987), MIT Press.
}
\bibitem[HMW89]{Hornik1989}
Kurt Hornik, Maxwell Stinchcombe, and Halbert White,
{Multilayer feedforward networks are universal approximators},
{\emph{Neural Networks},
\textbf{2(5)}(1989), 359--366.
}
\bibitem[KTB19]{Kileel2019}
Joe Kileel, Matthew Trager, and Joan Bruna,
{On the expressive power of deep polynomial neural networks},
{arXiv:1905.12207.}
\bibitem[LS16]{Liang2016}
Shiyu Liang and Rayadurgam Srikant,
{Why deep neural networks for function approximation?},
{ arXiv:1610.04161.}
\bibitem[Mis79]{Misiurewicz79hor}
Micha\l~Misiurewicz,
{Horseshoes for mappings of an interval},
{\emph{Bull. Acad.
Pol. Sci., Ser. Sci. Math.},
\textbf{27}(1979), 167--169.
}
\bibitem[Mis80a]{Misiurewicz80hor}
Micha\l~Misiurewicz,
{Horseshoes for continuous mappings of an interval},
{\emph{Dynamical systems},
(1980), 127--135.
}
\bibitem[Mis80b]{Misiurewicz80}
Micha\l~Misiurewicz and Wies\l aw Szlenk,
{Entropy of piecewise monotone mappings},
{\emph{ Studia Math},
\textbf{67}(1980), 45--63.
}
\bibitem[MM14]{Martens2014}
James Martens and Venkatesh Medabalimi,
{On the expressive efficiency of sum product networks},
{arXiv:1411.7717.}
\bibitem[MPCB14]{Montufar2014}
Guido F. Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio,
{On the number of linear regions of deep neural networks},
{In \emph{Advances in Neural Information Processing Systems},
(2014), 2294--2932.
}
\bibitem[MSS19]{Malach2019}
Eran Malach and Shai Shalev-Shwartz,
{Is deeper better only when shallow is good?},
{arXiv:1903.03488.
}
\bibitem[PGM94]{Parberry94}
Ian Parberry, Michael R. Garey, and Albert Meyer,
{Circuit complexity and neural networks},
{(1994), MIT Press.}
\bibitem[PLRDG16]{Poole2016}
Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli,
{Exponential expressivity in deep neural networks through transient chaos},
{In \emph{Advances in Neural Information Processing Systems},
(2016), 3360--3368.
}
\bibitem[RPJKGSD17]{Raghu17}
Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl Dickstein,
{On the expressive power of deep neural networks},
{In \emph{Proceedings of the 34th International Conference on Machine Learning},
\textbf{70}(2017), 2847--2854.
}
\bibitem[RST15]{RossmanFOCS15}
Benjamin Rossman, Rocco A. Servedio, and Li-Yang Tan,
{An average-case depth hierarchy theorem for boolean circuits},
{In \emph{Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS)},
IEEE (2015), 1030--1048.
}
\bibitem[Sch00]{Schmitt1999}
Michael Schmitt,
{Lower bounds on the complexity of approximating continuous functions by sigmoidal neural networks},
{In \emph{Advances in Neural Information Processing Systems},
(2000), 328--334.
}
\bibitem[Sha64]{Sharkovsky64}
O. M. Sharkovsky,
{Coexistence of the cycles of a continuous mapping of the line into itself},
{\emph{Ukrainskij matematicheskij zhurnal},
\textbf{16(01)} (1964), 61--71.
}
\bibitem[Sha65]{Sharkovsky65}
O. M. Sharkovsky,
{On cycles and structure of continuous mapping},
{\emph{Ukrainskij matematicheskij zhurnal},
\textbf{17(03)} (1965), 104--111.
}
\bibitem[Tel15]{Telgarsky2015}
Matus Telgarsky,
{Representation benefits of deep feedforward networks},
{arXiv:1509.08101.}
\bibitem[Tel16]{Telgarsky2016}
Matus Telgarsky,
{Benefits of depth in neural networks},
{In \emph{Conference on Learning Theory},
(2016), 1517--1539.
}
\bibitem[You81]{Young81}
Lai-Sang Young,
{On the prevalence of horseshoes},
{\emph{Transactions of the American Mathematical Society},
\textbf{263}(1981), 75--88.
}
\end{thebibliography}
\begin{equation}gin{appendix}
\section{Properties of topological entropy}\label{sec:top}
Here, we list some useful facts about topological entropy. More information can be found in
\cite{Alseda00}.
\begin{lem}\cite{Alseda00}\label{lem:k}
Given a compact Hausdorff space $X$ and a continuous function $f: X\to X$,
the topological entropies of $f$ and $f^{k}$ satisfy the following relationship
\begin{eqnarray}
h_{top}(f^k)
=kh_{top}(f),
\end{eqnarray}
for any integer $k\geq 0$.
\end{lem}
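As a sanity check, if $f:[a,b]\to[a,b]$ is piecewise monotone and affine with constant absolute slope $s\geq 1$ on each piece of monotonicity, then $f^k$ is piecewise affine with constant absolute slope $s^k$, so Lemma \ref{lem:slope} below gives $h_{top}(f^k)=\log_2 s^k=k\log_2 s=k\,h_{top}(f)$, consistently with Lemma \ref{lem:k}.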
\begin{prop}\cite{Alseda00}
Let $X,Y$ be compact Hausdorff spaces and let
$f:X\to X$, $g:Y\to Y$, $\varphi:X\to Y$ be continuous maps such that the following diagram
\begin{equation}
\begin{array}{cccc}
X&\stackrel{f}{\longrightarrow}&X\\
\small{\varphi}\downarrow~&~~~~~~~~~~~~~~&~\downarrow\small{\varphi}\\
Y&\stackrel{g}{\longrightarrow}&Y
\end{array}
\end{equation}
commutes, i.e., $\varphi\circ f=g\circ \varphi$. Then we have the following properties:
(a) if $\varphi$ is injective, then $h_{top}(f)\leq h_{top}(g)$;
(b) if $\varphi$ is surjective, then $h_{top}(f)\geq h_{top}(g)$;
(c) if $\varphi$ is bijective, then $h_{top}(f)= h_{top}(g)$. In this case $\varphi$ is called a conjugacy between
$f$ and $g$ (and $f$ and $g$ are said to be conjugate).
\end{prop}
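A classical instance of item (c): the tent map $T(x)=1-|2x-1|$ and the logistic map $g(x)=4x(1-x)$ on $[0,1]$ are conjugate via the homeomorphism $\varphi(x)=\sin^2(\pi x/2)$, since $\varphi\circ T=g\circ\varphi$; hence $h_{top}(g)=h_{top}(T)=1$.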
If $X=[a, b]$,
then we have the following
characterization of topological entropy for a continuous function $f:[a,b]\to [a,b]$.
\begin{Def}[\cite{Misiurewicz79hor,Misiurewicz80hor}]
Given a continuous function $f:[a,b]\to [a,b]$, an $s$-horseshoe with $s\geq 2$ for $f$ is a pair
$(J, \mathcal{D})$, where
$J\subset [a,b]$ is an interval and $\mathcal{D}$ is a partition of
$J$ into $s$ subintervals such that
the closure of each element of $\mathcal{D}$
$f$-covers $J$.
\end{Def}
\begin{lem}[\cite{Misiurewicz79hor,Misiurewicz80hor}]
Given a continuous function $f:[a,b]\to [a,b]$ with positive topological entropy, there exist sequences $\set{k_n}^{\infty}_{n=1}$ and
$\set{s_n}^{\infty}_{n=1}$ of positive integers such that
$\lim_{n\to \infty}k_n=\infty$, for each $n$ the map $f^{k_n}$ has an $s_n$-horseshoe, and
\begin{eqnarray}
\lim_{n\to \infty}
\frac{1}{k_n}
\log_2 s_n=h_{top}(f).
\end{eqnarray}
\end{lem}
\begin{Def}\cite{Alseda00}
Given a continuous function $f:[a,b]\to [a,b]$, the variation $Var(f)$ is defined to be
the supremum of
\begin{eqnarray}
\sum^{t-1}_{i=1}
|f(x_{i+1})-f(x_i)|
\end{eqnarray}
over all finite sequences $x_1<x_2<\dots<x_t$ in $[a,b]$.
\end{Def}
\begin{lem}\cite{Alseda00}\label{lem:var}
Given a continuous function $f:[a,b]\to [a,b]$ which is piecewise monotone, we have
\begin{eqnarray}
\lim_{k\to \infty}
\max \set{0, \frac{1}{k}\log_2 Var(f^k)}
=h_{top}(f).
\end{eqnarray}
\end{lem}
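For instance, for the tent map $T(x)=1-|2x-1|$ the iterate $T^k$ has $2^k$ monotone branches, each mapping its subinterval onto $[0,1]$, so $Var(T^k)=2^k$ and $\frac{1}{k}\log_2 Var(T^k)=1=h_{top}(T)$, in agreement with Lemma \ref{lem:var}.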
\begin{lem}\label{lem:slope}\cite{Misiurewicz80}
Given a continuous function $f:[a,b]\to [a,b]$ which is piecewise monotone, if $f$ is
affine with slope of absolute value $s$ on each piece of monotonicity, then
\begin{eqnarray}
h_{top}(f)=
\max\set{0,\log_2 s}.
\end{eqnarray}
\end{lem}
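For example, the piecewise affine map $f:[0,1]\to[0,1]$ defined by $f(x)=3x$ on $[0,1/3]$, $f(x)=2-3x$ on $[1/3,2/3]$ and $f(x)=3x-2$ on $[2/3,1]$ has slope of absolute value $3$ on each of its three pieces of monotonicity, so Lemma \ref{lem:slope} gives $h_{top}(f)=\log_2 3$.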
\begin{lem}\label{lem:sub}\cite{Alseda00}
Given a subadditive sequence $\set{a_k}^{\infty}_{k=1}$ (i.e., $a_{n+k}\leq a_n+a_k$ for all $n,k$), the limit
\begin{eqnarray}
\lim_{k\to \infty}\frac{a_k}{k}
\end{eqnarray}
exists and is equal to $\inf_k\frac{a_k}{k}$.
\end{lem}
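For instance, if $a_k=\alpha k+\beta$ with $\beta\geq 0$, then $a_{n+k}=\alpha(n+k)+\beta\leq a_n+a_k$, and indeed $\lim_{k\to\infty}a_k/k=\alpha=\inf_k(\alpha+\beta/k)$. In the proof of Proposition \ref{prop:LipT} the lemma is applied to $a_k=\log_2 L(f^k)$.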
\end{appendix}
\end{document}
\begin{document}
\title[IBVP for the continuity equation]{Initial-boundary value problems for continuity equations with $BV$ coefficients}
\author[G.~Crippa]{Gianluca Crippa}
\address{G.C. Departement Mathematik und Informatik,
Universit\"at Basel, Rheinsprung 21, CH-4051 Basel, Switzerland.}
\email{[email protected]}
\author[C.~Donadello]{Carlotta Donadello}
\address{C.D. Laboratoire de Math\'ematiques,
Universit\'e de Franche-Comt\'e, 16 route de Gray, F-25030 Besan\c con Cedex,
France.}
\email{[email protected]}
\author[L.~V.~Spinolo]{Laura V.~Spinolo}
\address{L.V.S. IMATI-CNR, via Ferrata 1, I-27100 Pavia, Italy.}
\email{[email protected]}
\maketitle
{
\rightskip .85 cm
\leftskip .85 cm
\parindent 0 pt
\begin{footnotesize}
{\sc Abstract.}
We establish well-posedness of initial-boundary value problems for continuity equations with $BV$ (bounded total variation) coefficients. We do not prescribe any condition on the orientation of the coefficients at the boundary of the domain. We also discuss some examples showing that, regardless of the orientation of the coefficients at the boundary, uniqueness may be violated as soon as the $BV$ regularity deteriorates at the boundary.
\noindent
{\sc Keywords:} Continuity equation, transport equation, initial-boundary value problem,
low regularity coefficients, uniqueness.
\noindent
{\sc MSC (2010):} 35F16.
\end{footnotesize}
}
\section{Introduction}
This work is devoted to the study of the initial-boundary value problem for the continuity equation
\begin{equation}
\label{e:cone}
\partial_t u + {\rm div}\, (bu) = 0, \quad \text{where $b : \, ]0, T[ \times \Omega \to \mathbb{R}^d$ and $u : \, ]0, T[ \times \Omega \to \mathbb{R}$.}
\end{equation}
In the previous expression, $\Omega \subseteq \mathbb{R}^d$ is an open set, $T>0$ is a real number and $\mathrm{div}$ denotes the divergence computed with respect to the space variable only.
The analysis of~\eqref{e:cone} in the case when $b$ has low regularity has recently drawn considerable attention: for an overview of some of the main contributions, we refer to the lecture notes by Ambrosio and Crippa~\cite{AmbrosioCrippa}. Here, we only quote the two main breakthroughs due to DiPerna and Lions~\cite{diPernaLions} and to Ambrosio~\cite{Ambrosio:trabv}, which deal with the case when ${\rm div}\, b$ is bounded and $b$ enjoys Sobolev and $BV$ (bounded total variation) regularity, respectively. More precisely, in~\cite{diPernaLions} and~\cite{Ambrosio:trabv} the authors establish existence and uniqueness results for the Cauchy problem posed by coupling~\eqref{e:cone} with an initial datum in the case when $\Omega= \mathbb{R}^d$.
In the classical framework where $b$ and $u$ are both smooth up to the boundary, the initial-boundary value problem is posed by prescribing
\begin{equation}
\label{e:clpb}
\left\{
\begin{array}{lll}
\partial_t u + {\rm div}\, (bu) = 0 & \text{in $]0, T [ \times \Omega $} \\
u = \bar g & \text{on $\Gamma^-$}\\
u = \bar u & \text{at $t =0$}, \\
\end{array}
\right.
\end{equation}
where $\bar u$ and $\bar g$ are bounded smooth functions
and $\Gamma^-$ is the portion of $]0, T[ \times \partial \Omega$
where the characteristics are entering the domain $]0, T[ \times \Omega$. Note, however, that if $b$ and $u$ are not sufficiently regular (if, for example, $u$ is only an $L^{\infty}$ function), then their values on negligible sets are not, a priori, well defined. In~\S~\ref{ss:df} we provide the distributional formulation of~\eqref{e:clpb} by relying on the theory of normal traces for weakly differentiable vector fields, see the works by Anzellotti~\cite{Anzellotti} and, more recently, by Chen and Frid~\cite{ChenFrid}, Chen, Torres and Ziemer~\cite{ChenTorresZiemer} and by Ambrosio, Crippa and Maniglia~\cite{AmbrosioCrippaManiglia}.
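To fix ideas, in the smooth case, denoting by $\vec n$ the outward pointing, unit normal vector to $\partial \Omega$, the inflow set is
\begin{equation*}
\Gamma^- = \big\{ (t, x) \in \, ]0, T[ \times \partial \Omega: \; b(t, x) \cdot \vec n (x) <0 \big\},
\end{equation*}
and the distributional formulation in \S~\ref{ss:df} replaces $b \cdot \vec n$ with the normal trace $\mathrm{Tr} \, b$, see Definition~\ref{d:rf}.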
Our main positive result reads as follows:
\begin{theorem}
\label{t:wp}
Let $\Omega \subseteq \mathbb{R}^d$ be an open set with uniformly Lipschitz boundary. Assume that the vector field $b$ satisfies the following hypotheses:
\begin{itemize}
\item[1.] $b \in L^{\infty} (]0, T [ \times \Omega ; \mathbb{R}^d)$;
\item[2.] ${\rm div}\, b \in L^{\infty} (]0, T [ \times \Omega)$;
\item[3.] for every open and bounded set $\Omega_\ast \subseteq \Omega$, $b \in L^1_\mathrm{loc}([0, T [ ; BV (\Omega_\ast; \mathbb{R}^d))$.
\end{itemize}
Then, given $\bar u \in L^{\infty} ( \Omega )$ and $\bar g \in L^{\infty} ( \Gamma^-) $, problem~\eqref{e:clpb} admits a unique distributional solution $u \in L^{\infty} (]0, T [ \times \Omega)$.
\end{theorem}
Some remarks are here in order. First, we recall that $\Gamma^-$ is a subset of $]0, T[ \times \partial \Omega$ and we point out that $L^{\infty} (\Gamma^-)$ denotes the space
$L^{\infty} (\Gamma^-,~{\mathscr L}^1~\otimes~{\mathscr H}^{d-1})$.
Second, we refer to the book by Leoni~\cite[Definition 12.10]{Leoni} for the definition of open set with uniformly Lipschitz boundary. In the case when $\Omega$ is bounded, the definition reduces to the classical condition that $\Omega$ has Lipschitz boundary. This regularity assumption guarantees that classical results on the traces of Sobolev and $BV$ functions apply to the set $\Omega$, see again Leoni~\cite{Leoni} for an extended discussion.
Third, several works are devoted to the analysis of the initial-boundary value problem~\eqref{e:clpb}. In particular, we refer to Bardos~\cite{Bardos} for an extended discussion on the case when $b$ enjoys Lipschitz regularity, and to Mischler~\cite{Mischler} for the case when the continuity equation in~\eqref{e:clpb} is the Vlasov equation. Also, we quote reference~\cite{Boyer}, where Boyer establishes uniqueness and existence results for~\eqref{e:clpb} and investigates space continuity properties of the trace of the solution on suitable surfaces. The main assumption in~\cite{Boyer} is that $b$ has Sobolev regularity, and besides this there are the technical assumptions that ${\rm div}\, b \equiv 0$ and that $\Omega$ is bounded. See also the analysis by Girault and Ridgway Scott~\cite{GiraultRidgway} for the case when $b$ enjoys Sobolev regularity and is tangent to the boundary. Note that the extension of Boyer's proof to the case when $b$ has $BV$ regularity is not straightforward.
Our approach is quite different from Boyer's: indeed, the analysis in~\cite{Boyer} is based on careful estimates on the behavior of $b$ and $u$ close to the boundary and involves the introduction of a system of normal and tangent coordinates at $\partial \Omega$, and the use of a local regularization of the equation. Conversely, as mentioned above, in the present work we rely on the theory of normal traces for weakly differentiable vector fields. From the point of view of the results we obtain, the main novelties of the present work can be summarized as follows.
\begin{itemize}
\item We provide a distributional formulation of problem~\eqref{e:clpb} under the sole assumptions that $b \in L^{\infty} (]0, T[ \times \Omega; \mathbb{R}^d)$ and ${\rm div}\, b$ is a locally finite Radon measure, see Lemma~\ref{l:wt} and Definition~\ref{d:rf} in \S~\ref{ss:df}. Conversely, in~\cite{Boyer} the distributional formulation of~\eqref{e:clpb} requires that $b$ enjoys Sobolev regularity.
\item We establish well-posedness of~\eqref{e:clpb} (see Theorem~\ref{t:wp}) under the assumptions that $b$ enjoys $BV$ regularity, while in~\cite{Boyer} Sobolev regularity is required. Note, however, that the main novelty in Theorem~\ref{t:wp} is the uniqueness part, since existence can be established under the sole hypotheses that $b \in L^{\infty} (]0, T[ \times \Omega; \mathbb{R}^d)$ and ${\rm div}\, b \in L^{\infty}(]0, T[ \times \Omega)$ by closely following the same argument as in~\cite{Boyer}, see~\cite{CDS:hyp} for the technical details. We point out in passing that, for the Cauchy problem, the extension of the uniqueness result from Sobolev to $BV$ regularity is one of the main achievements in Ambrosio's paper~\cite{Ambrosio:trabv}. Also, this extension is crucial in view of the applications to some classes of nonlinear PDEs like systems of conservation laws in several space dimensions, see the lecture notes by De Lellis~\cite{CDL:notes} and the references therein.
\item We exhibit some counterexamples (see Theorems~\ref{t:cex} and~\ref{t:cex2} and Corollary~\ref{c:dpt} below) showing that, regardless of the orientation of $b$ at the boundary, uniqueness may be violated as soon as $b$ enjoys $BV$ regularity in every open set $\Omega_\ast$ compactly contained in $\Omega$, but the regularity deteriorates at the boundary of $\Omega$. Also, as the proof of Theorem~\ref{t:cex2} shows, if $BV$ regularity deteriorates at the domain boundary, it may happen that the normal trace of $b$ at $\partial \Omega$ is identically zero, while the normal trace of $bu$ is identically $1$, see \S~\ref{ss:df} for the definition of normal trace of $b$ and $bu$.
\item In~\cite[\S~7.1]{Boyer}, Boyer establishes a space continuity property for the solution of~\eqref{e:clpb} in directions transversal to the vector field $b$ under the assumption that $b$ enjoys Sobolev regularity. Proposition~\ref{p:sc} in the present work ensures that an analogous property holds under $BV$ regularity assumptions. The property we establish is, loosely speaking, the following: assume $\Sigma_r$ is a family of surfaces which continuously depend on the parameter $r$ and assume moreover that the surfaces are all transversal to a given direction. Then the normal trace of the vector field $u b$ on $\Sigma_r$ strongly converges to the normal trace of $ub$ on $\Sigma_{r_0}$ as $r \to r_0$.
\end{itemize}
Here is our first counterexample. In the statement of Theorem~\ref{t:cex}, $\mathrm{Tr} \, b$ denotes the normal trace of $b$ along the outward pointing, unit normal vector to $\partial \Omega$, as defined in~\S~\ref{ss:df}.
\begin{theorem}
\label{t:cex}
Let $\Omega$ be the set $\Omega: = ]0, + \infty[ \times \mathbb{R}^2$. Then there is a vector field ${b: ]0, 1[ \times \Omega \to \mathbb R^3}$ such that
\begin{itemize}
\item[i)] $ b \in L^{\infty} (]0,1[ \times \Omega; \mathbb R^3)$;
\item[ii)] ${\rm div}\, b \equiv 0$;
\item[iii)] for every open and bounded set $\Omega_\ast$ such that its closure
$\bar \Omega_\ast \subseteq \Omega$, we have $b \in L^1([0, 1 [ ; BV (\Omega_\ast; \mathbb{R}^3))$;
\item[iv)] $\mathrm{Tr} \, b \equiv - 1$ on $]0, 1[ \times \partial \Omega$;
\item[v)] the initial-boundary value problem
\begin{equation}
\label{e:cex:ibvp}
\left\{
\begin{array}{lll}
\partial_t u + {\rm div}\, (bu ) = 0 & \text{in $]0, 1[ \times \Omega$} \\
u = 0 & \text{on $ ]0, 1[ \times \partial \Omega$} \\
u = 0 & \text{at $ t=0 $}
\phantom{\int} \\
\end{array}
\right.
\end{equation}
admits infinitely many different solutions.
\end{itemize}
\end{theorem}
Some remarks are here in order. First, since the vector field $b$ is divergence-free, any solution of~\eqref{e:cex:ibvp} is a solution of the transport equation $$\partial_t u + b \cdot \nabla u=0$$ satisfying
zero boundary and initial conditions. Second, the proof of Theorem~\ref{t:cex} uses an intriguing construction due to Depauw~\cite{Depauw}.
Finally, note that property iv) in the statement of Theorem~\ref{t:cex} states that the vector field $b$ is inward pointing at the boundary $\partial \Omega$. This fact is actually crucial for our argument because it allows us to build on Depauw's construction.
When the vector field is outward pointing, one could heuristically expect that the solution would not be affected by the loss of regularity of $b$ at the domain boundary. Indeed, in the smooth case the solution is simply ``carried out'' of the domain along the characteristics and, consequently, the behavior of the solution inside the domain is not substantially affected by what happens close to the boundary. Hence, one would be tempted to guess that, even in the non-smooth case, when $\mathrm{Tr} \, b >0$ on the boundary the solution inside the domain is not affected by boundary behaviors and uniqueness should hold even when the $BV$ regularity of $b$ deteriorates at the boundary. The example discussed in the statement of Theorem~\ref{t:cex2} shows that this is actually not the case and that, even if $b$ is outward pointing at $\partial \Omega$, uniqueness may be violated as soon as the $BV$ regularity deteriorates at the boundary.
\begin{theorem}
\label{t:cex2}
Let $\Omega$ be the set $\Omega: = ]0, + \infty[ \times \mathbb{R}^2$. Then there is a vector field ${b: ]0, 1[ \times \Omega \to \mathbb R^3}$ such that
\begin{itemize}
\item[i)] $ b \in L^{\infty} (]0,1[ \times \Omega; \mathbb R^3)$;
\item[ii)] ${\rm div}\, b \equiv 0$;
\item[iii)] for every open and bounded set $\Omega_\ast$ such that its closure
$\bar \Omega_\ast \subseteq \Omega$, we have $b \in L^1([0, 1 [ ; BV (\Omega_\ast; \mathbb{R}^3))$;
\item[iv)] $\mathrm{Tr} \, b \equiv 1$ on $]0, 1[ \times \partial \Omega$;
\item[v)] the initial-boundary value problem
\begin{equation}
\label{e:cex:ibvp2}
\left\{
\begin{array}{lll}
\partial_t u + {\rm div}\, (bu ) = 0 & \text{in $]0, 1[ \times \Omega$} \\
u = 0 & \text{at $ t=0$} \phantom{\int} \\
\end{array}
\right.
\end{equation}
admits infinitely many different solutions.
\end{itemize}
\end{theorem}
We make some observations. First, by a trivial modification of the proof one can exhibit a vector field $b$ satisfying properties i), ii), iii) and v) above and, instead of property iv), $\mathrm{Tr} \, b \equiv 0$ on $]0, 1[ \times \partial \Omega$. Hence, even in the case when $b$ is tangent to the domain boundary, uniqueness may be violated as soon as the $BV$ regularity deteriorates at the domain boundary.
Second, the proof of Theorem~\ref{t:cex2} does not use Depauw's example~\cite{Depauw}. The key point is constructing a nontrivial solution of~\eqref{e:cex:ibvp2} such that $u(t, x) \ge 0$ for a.e. $(t, x) \in ]0, 1[ \times \Omega$ and $\mathrm{Tr} \, \, (bu) < 0$ (note that $\mathrm{Tr} \, b > 0$ by property iv) in the statement of the theorem). Heuristically, such a solution ``enters'' the domain $\Omega$, although the characteristics are outward pointing at the boundary.
Third, since $\mathrm{Tr} \, b \equiv 1$ on $]0, 1 [ \times \partial \Omega$, then in the formulation of the initial-boundary value problem~\eqref{e:cex:ibvp2} we do not prescribe the value of the solution $u$ at the boundary. In the proof of Theorem~\ref{t:cex2} we exhibit infinitely many different solutions of~\eqref{e:cex:ibvp2} and in general different solutions attain different values on $]0, 1[ \times \partial \Omega$. However, by refining the proof of Theorem~\ref{t:cex2} we obtain the following result.
\begin{corol}
\label{c:dpt}
Let $\Omega$ be the set $\Omega : = ]0, + \infty [ \times \mathbb{R}^2$, then there is a vector field $b~:~]0, 1[ \times \Omega \to \mathbb{R}^3$ satisfying requirements $\mathrm{i)}, \dots, \mathrm{v)}$ in the statement of Theorem~\ref{t:cex2} and such that the initial-boundary value problem~\eqref{e:cex:ibvp2} admits infinitely many solutions that satisfy $\mathrm{Tr} \, (bu) \equiv 0$ on $]0, 1[ \times \partial \Omega$.
\end{corol}
The additional condition
$\mathrm{Tr} \, (bu) \equiv 0$ in the corollary can be heuristically interpreted as (a weak version of)
$u \equiv 0$ on $]0, 1[ \times \partial \Omega$.
We also point out that, again by a trivial modification of the proof, one can exhibit a vector field $b$ satisfying properties i), ii), iii) and v) in the statement of Corollary~\ref{c:dpt} and, instead of property iv), $\mathrm{Tr} \, b \equiv 0$ on $]0, 1[ \times \partial \Omega$. Also, for any given real constant $c$, one can actually construct infinitely many solutions of~\eqref{e:cex:ibvp2} that satisfy $\mathrm{Tr} \, (bu) = c$ on $]0, 1[ \times \partial \Omega$.
\subsection*{Outline}
The paper is organized as follows. In \S~\ref{s:pre} we recall some results on normal traces of vector fields established in~\cite{AmbrosioCrippaManiglia}. In \S~\ref{s:pwp} we establish the uniqueness part of the proof of Theorem~\ref{t:wp} and the space continuity property. In \S~\ref{s:cex} we construct the counter-examples that prove Theorems~\ref{t:cex} and \ref{t:cex2} and Corollary~\ref{c:dpt}.
\subsection*{Notation}
\begin{itemize}
\item $ {\mathscr L}^n$: the $n$-dimensional Lebesgue measure.
\item ${\mathscr H}^m$: the $m$-dimensional Hausdorff measure.
\item $\mu \res E$: the restriction of the measure $\mu$ to the measurable set $E$.
\item $\mathbf{1}_E$: the characteristic function of the set $E$.
\item $\Omega$: an open set in $\mathbb{R}^d$ having uniformly Lipschitz continuous boundary.
\item $L^{\infty} (]0, T[ \times \partial \Omega): = L^{\infty} (]0, T[ \times \partial \Omega, {\mathscr L}^1 \otimes
{\mathscr H}^{d-1} )$, where we denote with $\otimes$ the (tensor) product of two measures.
\item ${\rm div}\, b$: the distributional divergence of the vector field ${b: ]0, T[ \times \Omega \to \mathbb{R}^d}$, computed with respect to the $x \in \Omega$ variable only.
\item ${\rm Div}\, B$: the standard ``full'' distributional divergence of the vector field $B$. In particular, when $B: ]0, T[ \times \Omega \to \mathbb{R}^{d+1}$, then ${\rm Div}\, B$ is the divergence computed with respect to the $(t, x) \in ]0, T[ \times \Omega$ variable.
\item $\nabla \varphi:$ the gradient of the smooth function ${\varphi: ]0, T[ \times \Omega \to \mathbb{R}^d}$, computed with respect to the $x \in \Omega$ variable only.
\item $\mathrm{Tr} \, (b, \Sigma)$: the normal trace of the vector field $b$ on the surface $\Sigma \subseteq \Omega$, as defined in~\cite{AmbrosioCrippaManiglia} (see also \S~\ref{ss:acm} in here).
\item $\mathrm{Tr} \, b$: the normal trace of the vector field $b$ on $]0, T[ \times \partial \Omega$, defined as in~\S~\ref{ss:df}.
\item $\mathcal M_{\infty}(\Lambda)$: the class of bounded, measure-divergence vector fields, namely the functions $B \in L^{\infty} (\Lambda; \mathbb{R}^N)$ such that the distributional divergence ${\rm Div}\, B$ is a locally bounded Radon measure on the open set $\Lambda \subseteq \mathbb{R}^N$.
\item $|x|$: the Euclidean norm of the vector $x \in \mathbb{R}^d$.
\item $\mathrm{supp} \, \rho$: the support of the smooth function $\rho: \mathbb{R}^N \to \mathbb{R}$.
\item $B_R (0)$:
the ball of radius $R>0$ and center at $0$.
\end{itemize}
\section{Normal traces of bounded, measure-divergence vector fields}
\label{s:pre}
\label{ss:acm}
We collect in this section some definitions and properties concerning weak traces of measure-divergence vector fields. Our presentation follows \cite[\S 3]{AmbrosioCrippaManiglia}.
Given an open set $\Lambda \subseteq \mathbb{R}^N$, we denote by $\mathcal M_\infty (\Lambda)$ the family of bounded, measure-divergence vector fields, namely the functions $B \in L^{\infty} (\Lambda; \mathbb{R}^N)$ such that the distributional divergence ${\rm Div}\, B$ is a locally bounded Radon measure on $\Lambda$.
\begin{definition}\label{nomal_trace}
Assume that $\Lambda \subseteq \mathbb{R}^N$ is a domain with uniformly Lipschitz continuous boundary.
Let $B \in \mathcal M_\infty (\Lambda)$, then the normal trace of $B$ on $\partial \Lambda$ can be defined as a distribution by the identity
\begin{equation}\label{e:normal_trace}
\langle{\rm Tr} (B, \partial \Lambda), \varphi\rangle = \int_{\Lambda}\nabla\varphi\cdot B\,dx +\int_{\Lambda}\varphi\,d(
{\rm Div}\, B) \qquad \forall \varphi \in
\mathcal C^{\infty}_c (\mathbb{R}^N).
\end{equation}
\end{definition}
This definition is consistent with the Gauss-Green formula if the vector field $B$ is sufficiently smooth. In this case the distribution is induced by the integration of $B\cdot \vec n$ on $\partial \Lambda$, where $\vec n$ is the outward pointing, unit normal vector to $\partial \Lambda$.
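Indeed, if $B$ is of class $\mathcal C^1$ up to the boundary, the Gauss-Green formula gives, for every $\varphi \in \mathcal C^{\infty}_c (\mathbb{R}^N)$,
\begin{equation*}
\int_{\partial \Lambda} \varphi \, B \cdot \vec n \, d {\mathscr H}^{N-1} = \int_{\Lambda}\nabla\varphi\cdot B\,dx +\int_{\Lambda}\varphi \, {\rm Div}\, B \, dx,
\end{equation*}
so that in this case ${\rm Tr} (B, \partial \Lambda) = B \cdot \vec n$ ${\mathscr H}^{N-1}$-a.e. on $\partial \Lambda$.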
\begin{lemma}\label{proposition3.2_ACM}
The distribution defined above is induced by an $L^\infty$ function on $\partial \Lambda$, which we can still call ${\rm Tr} (B, \partial \Lambda)$, with
\begin{equation}\label{Linfty_bound}
\|{\rm Tr} (B, \partial \Lambda)\|_{L^\infty(\partial\Lambda)}\leq \|B\|_{L^\infty(\Lambda)}.
\end{equation}
Moreover, if $\Sigma$ is a Borel set contained in $\partial \Lambda_1 \cap \partial \Lambda_2$ and if $\vec n_{1}=\vec n_{2}$ on $\Sigma$, then
\begin{equation}
\label{trace_on_sigma}
{\rm Tr} (B, \partial \Lambda_1)= {\rm Tr} (B, \partial \Lambda_2) \quad {\mathscr H}^{N-1}-a.e. \; \text{on} \; \Sigma.
\end{equation}
\end{lemma}
Starting from the identity \eqref{trace_on_sigma}, it is possible to introduce the notion of normal trace on general bounded, oriented, Lipschitz continuous hypersurfaces $\Sigma\subseteq\mathbb{R}^N$. Indeed, once the orientation of $\vec n_\Sigma$ is fixed, we can find an open set $\Lambda_1 \subseteq \mathbb{R}^N$ such that $\Sigma \subseteq\partial \Lambda_1$ and the normal vectors $\vec n_\Sigma$ and $\vec n_1$ coincide. Then we can define
\begin{equation}
{\rm Tr}^- (B, \Sigma): = {\rm Tr} (B, \partial \Lambda_1).
\end{equation}
Analogously, if $\Lambda_2 \subseteq \mathbb{R}^N$ is an open subset such that $\Sigma\subseteq\partial\Lambda_2$, and $\vec n_{2}=-\vec{n}_{\Sigma}$, we can define
\begin{equation}
{\rm Tr}^+ (B, \Sigma): = - {\rm Tr} (B, \partial \Lambda_2).
\end{equation}
Note that we have the formula
\begin{equation} \label{divergence_on_boundary}
({\rm Div}\, B) \res \Sigma= \Big( {\rm Tr}^+(B,\Sigma)-{\rm Tr}^-(B,\Sigma) \Big) {\mathscr H}^{N-1}\res\Sigma.
\end{equation}
In particular, ${\rm Tr}^+$ and ${\rm Tr}^-$ coincide ${\mathscr H}^{N-1}$-a.e.~on $\Sigma$ if and only if $\Sigma$ is negligible for the measure ${\rm Div}\, B$.
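As a simple example, take $B(x) := \mathrm{sign} (x_1) \, e_1$ and $\Sigma := \{ x_1 = 0 \}$, oriented by $\vec n_\Sigma = e_1$. Choosing $\Lambda_1 = \{ x_1 < 0 \}$ and $\Lambda_2 = \{ x_1 > 0 \}$ one obtains ${\rm Tr}^- (B, \Sigma) \equiv -1$ and ${\rm Tr}^+ (B, \Sigma) \equiv 1$, while ${\rm Div}\, B = 2 \, {\mathscr H}^{N-1} \res \Sigma$, in agreement with~\eqref{divergence_on_boundary}.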
We now go over some space continuity results established in~\cite[\S 3]{AmbrosioCrippaManiglia}. We first recall the definition of a family of graphs.
\begin{definition}
\label{d:fg}
Let $I \subseteq \mathbb{R}$ be an open interval. A family of oriented surfaces $\{ \Sigma_r \}_{r \in I} \subseteq \mathbb{R}^N$ is a family of graphs if there are a bounded open set $D \subseteq \mathbb{R}^{N-1}$ and a Lipschitz function $f : D \to \mathbb{R}$ such that the following holds. There is a system of coordinates $(x_1, \cdots, x_N)$ such that, for every $r \in I$,
$$
\Sigma_r = \big\{ (x_1, \dots, x_N): \; f(x_1, \dots, x_{N-1}) - x_N =r \big\}
$$
and $\Sigma_r$ is oriented by the normal $(- \nabla f, 1)/\sqrt{1 + |\nabla f|^2}$.
\end{definition}
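The simplest instance is obtained by taking $f \equiv 0$: in this case $\Sigma_r = D \times \{ -r \}$ is a family of parallel (portions of) hyperplanes, all oriented by the normal vector $e_N$.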
We now quote~\cite[Theorem 3.7]{AmbrosioCrippaManiglia}.
\begin{theorem}
\label{t:wc} Let $B \in \mathcal M_{\infty}(\mathbb{R}^N)$ and let $\{ \Sigma_r \}_{r \in I}$ be a family of graphs as in Definition~\ref{d:fg}.
Given $r_0 \in I$, we define the functions $\alpha_0, \alpha_r: D \to \mathbb{R}$ by setting
$$
\alpha_0 (x_1, \dots, x_{N-1}): = \mathrm{Tr} \,^- (B, \Sigma_{r_0}) \big(x_1, \dots, x_{N-1}, f(x_1, \dots, x_{N-1})-r_0 \big)
$$
and
$$
\alpha_r (x_1, \dots, x_{N-1}): = \mathrm{Tr} \,^+ (B, \Sigma_{r}) \big(x_1, \dots, x_{N-1}, f(x_1, \dots, x_{N-1})-r\big).
$$
Then we have
$$
\alpha_r \weaks\alpha_0
\quad
\text{weakly$^{\ast}$ in $L^{\infty} (D, {\mathscr L}^{N-1} \res D )$ as $r \to r_0^+$.}
$$
\end{theorem}
\section{Proof of Theorem~\ref{t:wp}}
\label{s:pwp}
\subsection{Preliminary results}
In this section we establish some results that are preliminary to the distributional formulation of problem~\eqref{e:clpb}.
\begin{lemma}\label{l:preli2}
Let $B$ be a locally bounded vector field on $\mathbb{R}^N$ and let $\{ \rho_\varepsilon \}_{0< \varepsilon <1}$ be a standard family of mollifiers satisfying $\mathrm{supp} \, \rho_\varepsilon \subseteq B_\varepsilon (0)$ for every $\varepsilon \in ]0, 1[$.
The divergence of $B$ is a locally finite measure if and only if for any $K$ compact in $\mathbb{R}^N$ there exists a positive constant $C$ such that the inequality
\begin{equation}
\label{e:unifb}
\|{\rm Div}\, B \ast \rho_\varepsilon\|_{L^1(K)}\leq C
\end{equation}
holds uniformly in $\varepsilon \in ]0, 1[$.
\end{lemma}
\begin{proof}
If ${\rm Div}\, B$ is a locally finite measure, the inequality \eqref{e:unifb} is satisfied on any compact set $K$ for some constant $C$ independent of $\varepsilon$.
On the other hand, the sequence $({\rm Div}\, B) \ast\rho_\varepsilon = {\rm Div}\, (B \ast\rho_\varepsilon)$ converges to ${\rm Div}\, B$ in the sense of distributions and the uniform bound \eqref{e:unifb} implies that we can extract a subsequence which converges weakly in the sense of measures.
\end{proof}
\begin{lemma}\label{l:preli1}
Let $\Lambda \subseteq \mathbb{R}^N$ be an open subset with uniformly Lipschitz continuous boundary and let $B$ belong to $\mathcal M_\infty (\Lambda)$. Then the vector field
$$
\tilde B(z) : =
\left\{
\begin{array}{ll}
B(z) & z \in \Lambda \\
0 & \text{otherwise} \\
\end{array}
\right.
$$
belongs to $\mathcal M_{\infty} (\mathbb{R}^N)$.
\end{lemma}
\begin{proof}
We only need to check that the distributional divergence of $\tilde B$ is a locally bounded Radon measure. Given $\varepsilon \in ]0, 1[$ we define the $\varepsilon$-neighborhood of $\partial\Lambda$ as
$$
\displaystyle
\partial \Lambda_\varepsilon= \{ z \in \mathbb{R}^N \: : \: \textrm{dist}(z,\partial\Lambda)<\varepsilon \}.
$$
Any compact subset $K$ of $\mathbb{R}^N$ can be decomposed as follows:
\begin{equation}
K =\big(
K \cap
( \Lambda \setminus
\partial \Lambda_\varepsilon)
\big)
\cup
\big(K \cap
\partial \Lambda_\varepsilon \big)
\cup
\big(
K \setminus(\Lambda \cup
\partial \Lambda_\varepsilon)
\big).
\end{equation}
Also, note that ${\rm Div}\,(\tilde B\ast\rho_\varepsilon)$ is zero on $K \setminus ( \Lambda \cup \partial \Lambda_\varepsilon)$ and that its $L^1$ norm is uniformly bounded on $K\cap (\Lambda \setminus \partial \Lambda_\varepsilon)$.
Moreover,
\begin{equation}\label{L1_norm_sigma_eps}
\begin{split}
\int_{K \cap \partial \Lambda_\varepsilon}
|{\rm div}\,(\tilde B\ast\rho_\varepsilon)| \, dz & =
\int_{K\cap \partial \Lambda_\varepsilon}
|\tilde B\ast\nabla\rho_\varepsilon| \, dz \\
& \leq \|\tilde B \|_{L^\infty (\mathbb{R}^N)}
\| \nabla \rho_\varepsilon \|_{L^1
( \mathbb{R}^N)}
{\mathscr L}^N (K \cap \partial
\Lambda_\varepsilon ). \\
\end{split}
\end{equation}
We observe that $\|\tilde B \|_{L^\infty (\mathbb{R}^N)} = \| B\|_{L^{\infty} (\Lambda)}$, that
${\mathscr L}^N (K \cap \partial
\Lambda_\varepsilon ) \leq C_\ast \varepsilon$ and that $\| \nabla \rho_\varepsilon \|_{L^1
( \mathbb{R}^N)} \leq C_{\ast \ast} / \varepsilon$ for suitable constants $C_\ast >0$ and $C_{\ast \ast}>0$. Hence,
$$
\int_{K \cap \partial \Lambda_\varepsilon}
|{\rm div}\,(\tilde B\ast\rho_\varepsilon)| \, dz
\leq \| B \|_{L^\infty (\Lambda)} C_\ast
C_{\ast \ast}
$$
and by relying on Lemma \ref{l:preli2} we conclude.
\end{proof}
\subsection{Distributional formulation of problem~\eqref{e:clpb}}
\label{ss:df}
We can now discuss the distributional formulation of~\eqref{e:clpb}. The following result provides a distributional formulation of the normal trace of $b$ and $bu$ on $]0, T[ \times \partial \Omega$.
\begin{lemma}
\label{l:wt}
Let $\Omega \subseteq \mathbb{R}^d$ be an open set with uniformly Lipschitz boundary and let $T>0$. Assume that $b \in L^{\infty} (]0, T[ \times \Omega; \mathbb{R}^d)$ is a vector
field such that ${\rm div}\, b$ is a locally finite Radon measure on $]0, T[ \times \Omega$. Then there is a unique function, which in the following we denote by $\mathrm{Tr} \, b$, that belongs to {$L^{\infty} (]0, T[ \times \partial \Omega) $} and satisfies
\begin{equation}
\label{e:trb}
\begin{split}
\int_0^T \int_{\partial \Omega} \mathrm{Tr} \, b \, \varphi \, d {\mathscr H}^{d-1} dt - &
\int_\Omega \varphi(0, x) \, dx =
\int_0^T \int_\Omega \partial_t \varphi + b \cdot
\nabla \varphi \, dx dt \\ & +
\int_0^T \int_\Omega \varphi \, d( {\rm div}\, b )
\quad \forall \varphi \in \mathcal C^{\infty}_c
([0, T[ \times \mathbb{R}^d) .
\end{split}
\end{equation}
Also, if $w \in L^{\infty}(]0, T[ \times \Omega)$ and $f \in
L^{\infty}(]0, T[ \times \Omega)$ satisfy
\begin{equation}
\label{e:snsdd}
\int_0^T \int_\Omega w \big( \partial_t \eta+ b \cdot
\nabla \eta \big) dx dt +
\int_0^T \int_\Omega f \eta \, dx dt
=0
\quad \forall \eta \in \mathcal C^{\infty}_c
(]0, T[ \times \Omega),
\end{equation}
then there are two uniquely determined functions, which in the following we denote by $\mathrm{Tr} \, (bw) \in L^{\infty} (]0, T[ \times \partial \Omega) $ and $w_0 \in L^\infty (\Omega)$, that satisfy
\begin{equation}
\label{e:trbu}
\begin{split}
\int_0^T \int_{\partial \Omega} & \mathrm{Tr} \, (bw) \varphi \, d {\mathscr H}^{d-1} dt -
\int_\Omega \varphi(0, \cdot) w_0 \, dx \\
& =
\int_0^T \int_\Omega w \big( \partial_t \varphi + b \cdot
\nabla \varphi \big) dx dt +
\int_0^T \int_\Omega f \varphi \, dx dt
\quad
\forall \, \varphi \in \mathcal C^{\infty}_c
([0, T[ \times \mathbb{R}^d). \\
\end{split}
\end{equation}
\end{lemma}
Note that requirement~\eqref{e:snsdd} is nothing but the distributional formulation of the equation
\begin{equation}
\label{e:effe}
\partial_t w + {\rm div}\, (bw ) = f
\quad \text{in $]0, T[ \times \Omega$}.
\end{equation}
Also note that the existence of the function $w_0$ follows from Lemma~1.3.3 in \cite{Dafermos}, the new part is the existence of the function $\mathrm{Tr} \, (bw)$.
\begin{proof}
We first establish the existence of a function $\mathrm{Tr} \, b $ satisfying~\eqref{e:trb}. Note that the uniqueness of such a function follows from the arbitrariness of the test function $\varphi$. We define the vector field $B: \mathbb{R}^{d+1} \to \mathbb{R}^{d+1}$ by setting
\begin{equation}
\label{e:B}
B(t, x) : =
\left\{
\begin{array}{lll}
(1, b) & (t, x) \in ]0, T[ \times \Omega \\
\; \; \; 0 & \text{elsewhere in $\mathbb{R}^{d+1}$} \\
\end{array}
\right.
\end{equation}
and we note that
$
{\rm Div}\, B \big|_{]0, T[ \times \Omega} = {\rm div}\, b,
$
therefore $B$ satisfies the hypotheses of Lemma~\ref{l:preli1} provided that $\Lambda: = ]0, T[ \times \Omega$. Hence, $ B \in \mathcal M_{\infty} (\mathbb{R}^{d+1}).$
We apply Lemma~\ref{proposition3.2_ACM} and we observe that
$
\mathrm{Tr} \, (B, \partial \Lambda) \big|_{\{ 0 \} \times \Omega} \equiv - 1.
$
We can then conclude by setting
$$
\mathrm{Tr} \, b : = \mathrm{Tr} \, (B, \partial \Lambda)
\big|_{]0, T [ \times \partial \Omega}.
$$
The existence of the function $\mathrm{Tr} \, (bw )$ satisfying~\eqref{e:trbu} can be established by setting
\begin{equation}
\label{e:c}
C(t, x) : =
\left\{
\begin{array}{lll}
(w, bw) & (t, x) \in ]0, T[ \times \Omega \\
\; \; \; 0 & \text{elsewhere in $\mathbb{R}^{d+1}$} \\
\end{array}
\right.
\end{equation}
and observing that condition~\eqref{e:snsdd} implies that
$
{\rm Div}\, C \big|_{]0, T[ \times \Omega} = f.
$
We can then conclude by using the same argument as before, by setting
\begin{equation}
\label{e:tc}
w_0 : = - \mathrm{Tr} \, (C, \partial \Lambda) \big|_{\{ 0 \} \times \Omega}
\quad \text{and} \quad
\mathrm{Tr} \, (bw) : = \mathrm{Tr} \, (C, \partial \Lambda) \big|_{]0, T[ \times \partial \Omega}.
\end{equation}
\end{proof}
We now state the rigorous formulation of problem~\eqref{e:clpb}.
\begin{definition}
\label{d:rf}
Let $\Omega \subseteq \mathbb{R}^d$ be an open set with uniformly Lipschitz boundary. Assume that $b \in L^{\infty} (]0, T[ \times \Omega; \mathbb{R}^d)$ is a vector field such that ${\rm div}\, b$ is a locally finite Radon measure on $]0, T[ \times \Omega.$ A distributional solution of~\eqref{e:clpb} is
a function $u \in L^{\infty} (]0, T[ \times \Omega)$ such that
\begin{itemize}
\item[i)] $u$ satisfies equation~\eqref{e:snsdd} with $f\equiv0$;
\item[ii)] $w_0= \bar u$;
\item[iii)] $\mathrm{Tr} \, (bu) = \bar g \mathrm{Tr} \, b$ on the set $\Gamma^-$ which is defined as follows:
$$
\Gamma^- : = \big\{ (t, x) \in ]0, T[ \times \partial \Omega: \; (\mathrm{Tr} \, \, b) (t, x) < 0 \big\}.
$$
\end{itemize}
\end{definition}
Note that in Definition~\ref{d:rf} we only assume $f \equiv 0$ for the sake of simplicity. By removing the condition $f \equiv 0$ from point i) we obtain the distributional formulation of the initial-boundary value problem obtained by replacing the first line of~\eqref{e:clpb} with~\eqref{e:effe}.
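For orientation, we point out that in the case when $b$ and $u$ are continuous up to the boundary we have $\mathrm{Tr} \, b = b \cdot \vec n$ and $\mathrm{Tr} \, (bu) = u \, b \cdot \vec n$ on $]0, T[ \times \partial \Omega$, so that condition iii) reduces to $u = \bar g$ ${\mathscr L}^1 \otimes {\mathscr H}^{d-1}$-a.e. on $\Gamma^-$ and Definition~\ref{d:rf} is consistent with the classical formulation~\eqref{e:clpb}.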
\subsection{Proof of Theorem~\ref{t:wp}}
First, we observe that the existence of a solution of~\eqref{e:clpb} is established in~\cite{CDS:hyp} by closely following an argument due to Boyer~\cite{Boyer}. The argument to establish uniqueness is organized in two main steps: in \S~\ref{sss:ren} we show that, under the hypotheses of Theorem~\ref{t:wp}, distributional solutions of~\eqref{e:clpb} enjoy renormalization properties. Next, in \S~\ref{sss:gro} we conclude by relying on a by now standard argument based on Gronwall Lemma.
\subsubsection{Renormalization properties}
\label{sss:ren} We fix $u$ distributional solution of~\eqref{e:clpb} and we proceed according to the following steps.
{\sc Step 1:} we use the same argument as in Ambrosio~\cite{Ambrosio:trabv} to establish renormalization properties ``inside" the domain. More precisely, the Renormalization Theorem~\cite[Theorem 3.5]{Ambrosio:trabv} implies that the function $u^2$ satisfies
\begin{equation}
\label{e:a}
\begin{split}
\int_0^T \int_\Omega u^2 \big( \partial_t \psi+ b \cdot
\nabla \psi \big) dx dt - \int_0^T \int_\Omega u^2 {\rm div}\, b \, \psi \, dx dt &
= - \int_\Omega \bar u^2 \psi(0, \cdot) dx \\
&
\forall \psi \in \mathcal C^{\infty}_c
([0, T[ \times \Omega). \\
\end{split}
\end{equation}
{\sc Step 2:} we establish a trace renormalization property.
First, we observe that by combining hypothesis 3 in the statement of Theorem~\ref{t:wp} with Theorem 3.84 in the book by Ambrosio, Fusco and Pallara~\cite{AmbrosioFuscoPallara} we obtain that the vector field $B$ defined as in~\eqref{e:B} satisfies $B(t, \cdot) \in BV (\Omega_\ast)$ for every open and bounded set $\Omega_\ast \subseteq \mathbb{R}^d$ and for ${\mathscr L}^1$-a.e. $t \in ]0, T[$.
Next, we recall that the proof of Lemma~\ref{l:wt} ensures that the vector field $uB$ belongs to $\mathcal M_\infty (\mathbb{R}^{d+1})$. We can then apply~\cite[Theorem 4.2]{AmbrosioCrippaManiglia}, which implies the following trace renormalization property:
\begin{equation}
\label{e:trn}
\mathrm{Tr} \, (u^2 b) (t, x) =
\left\{
\begin{array}{cc}
\displaystyle{
\left( \frac{\mathrm{Tr} \, (ub) }{ \mathrm{Tr} \, \, b }\right)^2 } \mathrm{Tr} \, \, b &
\quad \mathrm{Tr} \, \, b (t, x) \neq 0 \\
\phantom{ciao} \\
0 & \qquad \mathrm{Tr} \, \, b (t, x) = 0. \phantom{\int} \\
\end{array}
\right.
\end{equation}
Some remarks are here in order. First, to define $\mathrm{Tr} \, (u^2 b)$ we recall~\eqref{e:a}, use Lemma~\ref{l:wt} and set
\begin{equation}
\label{e:tru2}
\mathrm{Tr} \, (u^2 b) : = \mathrm{Tr} \, (u^2 B, \partial \Lambda)\big|_{]0, T[ \times \partial \Omega},
\end{equation}
where $\Lambda = ]0, T[ \times \Omega$.
Second, note that, strictly speaking, the statement of~\cite[Theorem 4.2]{AmbrosioCrippaManiglia} requires that the vector field $B$ has $BV$ regularity with respect to the $(t, x)$-variables, which in our case would imply some control on the time derivative of $b$. However, by examining the proof of~\cite[Theorem 4.2]{AmbrosioCrippaManiglia} and using the particular structure of the vector field $B$ one can see that only space regularity is needed to establish~\eqref{e:trn}.
{\sc Step 3:} by combining~\eqref{e:a} with~\eqref{e:tru2} and recalling Lemma \ref{l:wt} we infer that
\begin{equation}
\label{e:trbu2}
\begin{split}
\int_0^T \int_{\partial \Omega} & \mathrm{Tr} \, (u^2 b) \varphi \, d {\mathscr H}^{d-1} dt -
\int_\Omega \bar u^2 \varphi(0, x) \, dx
=
\int_0^T \int_\Omega u^2 \big( \partial_t \varphi + b \cdot
\nabla \varphi \big) dx dt \\ & -
\int_0^T \int_\Omega u^2 {\rm div}\, b \, \varphi \, dx dt
\qquad
\forall \, \varphi \in \mathcal C^{\infty}_c
([0, T[ \times \mathbb{R}^d). \\
\end{split}
\end{equation}
\subsubsection{Conclusion of the proof of Theorem~\ref{t:wp}}
\label{sss:gro}
We conclude by following a by now standard argument, see for example the expository work by De Lellis~\cite[Proposition 1.6]{CDL:bourbaki}. We proceed according to the following steps.
{\sc Step A:} we observe that, the equation being linear, establishing that the distributional solution of~\eqref{e:clpb} is unique amounts to showing that, if $\bar u \equiv 0$ and $\bar g \equiv 0$, then any distributional solution satisfies $u \equiv 0$. Also, note that in the remaining part of the proof we use~\cite[Lemma 1.3.3]{Dafermos} and we identify $u^2$ with the representative such that the map $t \mapsto u^2(t, \cdot)$ is continuous in $L^{\infty}_{\mathrm{loc}} (\Omega)$ endowed with the weak$^{\ast}$ topology.
{\sc Step B:} we fix $\bar t \in ]0, T[$ and we construct a sequence of test functions $\varphi_n$ as follows. First, we choose a function $h: [0, + \infty[ \to \mathbb{R}$ such that
\begin{equation}
\label{e:nu}
h \in \mathcal C^{\infty}_c ([0, + \infty[),
\quad h \ge 0 \; \text{and} \; h' \leq 0 \; \text{everywhere in $[0, + \infty[$.}
\end{equation}
Next, we set
\begin{equation}
\label{e:nu1}
\nu(t, x) : = h \big( \| b \|_{L^{\infty} } |t- \bar t | + | x | \big)
\end{equation}
and we observe that $\nu$ satisfies
\begin{equation}
\label{e:nu2}
\partial_t \nu + b \cdot \nabla \nu \leq
\partial_t \nu + \| b \|_{L^{\infty}} |\nabla \nu| \leq 0 \quad
\text{${\mathscr L}^{d+1}$-a.e. $(t, x) \in ]- \infty, \bar t[ \times \mathbb{R}^d$.}
\end{equation}
We then choose a sequence of cut-off functions $\chi_n \in \mathcal C^{\infty}_c ([0, + \infty[)$ satisfying
\begin{equation}
\label{e:chi}
\chi_n \equiv 1 \; \text{on $[0, \bar t \, ]$}, \; \;
\chi_n \equiv 0 \; \text{on $[ \, \bar t + 1/n, + \infty[$}, \;
\;
\chi_n' \leq 0 \; \text{everywhere on $[0, + \infty[$}.
\end{equation}
Finally, we set
$$
\varphi_n (t, x) : = \chi_n (t) \nu(t, x) \quad (t, x) \in [0, T[ \times \mathbb{R}^d
$$
and we observe that $\varphi_n \ge 0$ everywhere on $[0, + \infty[ \times \mathbb{R}^d$ and that $\varphi_n$ is compactly supported in~$[0, T[ \times \mathbb{R}^d$ provided that $n$ is sufficiently large.
{\sc Step C:} we use $\varphi_n$ as a test function in~\eqref{e:trbu2}. First, we observe that by recalling that $\bar g \equiv 0$ and $\bar u \equiv 0$ and by using the renormalization property~\eqref{e:trn} we obtain that the left hand side of~\eqref{e:trbu2} is nonnegative, namely
\begin{equation*}
\begin{split}
0 & \leq
\int_0^T \! \! \! \int_\Omega u^2 \nu \chi_n' dx dt +
\int_0^T \! \! \! \int_\Omega \chi_n u^2 \big( \partial_t \nu + b \cdot
\nabla \nu \big) dx dt -
\int_0^T\! \! \! \int_\Omega {\rm div}\, b \, u^2 \, \nu \chi_n \, dx dt \,. \\
\end{split}
\end{equation*}
Next, we let $n \to + \infty$ and by recalling properties~\eqref{e:nu2} and~\eqref{e:chi} we obtain
$$
\int_\Omega
\nu(\bar t, \cdot) \, u^2 (\bar t, \cdot) dx \leq \| {\rm div}\, b \|_{L^{\infty}}
\int_0^{\bar t} \int_\Omega \nu \, u^2 dx dt.
$$
We can finally conclude by using Gronwall Lemma and the arbitrariness of the function $h$ in~\eqref{e:nu1}: repeating the above computation with cut-off functions equal to $1$ on $[0, s]$ yields the analogous inequality for every $s \in \, ]0, \bar t \, ]$, so Gronwall Lemma gives $\int_\Omega \nu(\bar t, \cdot) \, u^2(\bar t, \cdot) \, dx = 0$; since $h$ is arbitrary, we obtain $u(\bar t, \cdot) = 0$ ${\mathscr L}^d$-a.e. in $\Omega$. This concludes the proof of Theorem~\ref{t:wp}.
\qed
\subsection{Rigorous statement and proof of the space continuity property}
\label{ss:sc}
We provide a rigorous formulation of the analogue of the space continuity property established in the Sobolev case by Boyer in~\cite[\S~7.1]{Boyer}.
\begin{propos}
\label{p:sc}
Let $b$ be as in the statement of Theorem~\ref{t:wp},
$u~\in~L^{\infty} (]0, T[ \times \Omega)$ be the distributional solution of~\eqref{e:clpb} and $B \in \mathcal M_{\infty} (\mathbb{R}^{d+1}) $ be the same vector field as in~\eqref{e:B}. Given a family of graphs $\{ \Sigma_r \}_{r \in I} \subseteq \mathbb{R}^d$ as in Definition~\ref{d:fg}, we fix $r_0 \in I$ and we define the functions $\gamma_0, \gamma_r: ]0,T[ \times D
\to \mathbb{R}$ by setting
$$
\gamma_0 (t,x_1, \dots, x_{d-1}): = \mathrm{Tr} \,^- (uB, ]0, T[ \times \Sigma_{r_0})
\big(t,x_1, \dots, x_{d-1}, f(x_1, \dots, x_{d-1})-r_0 \big)
$$
and
$$
\gamma_r (t,x_1, \dots, x_{d-1}): = \mathrm{Tr} \,^+ (uB, ]0, T[ \times \Sigma_{r})
\big(t,x_1, \dots, x_{d-1}, f(x_1, \dots, x_{d-1})-r\big).
$$
Then
\begin{equation}
\label{e:sc}
\gamma_r \to \gamma_0
\quad
\text{strongly in $L^1 (]0, T[ \times D )$ as $r \to r_0^+$.}
\end{equation}
\begin{proof}
The argument is organized in three steps.
{\sc Step 1:} we make some preliminary considerations and introduce some notation. With a slight abuse of notation,
we consider $b$ as a vector field defined on $\mathbb{R}^{d+1}$, set equal to zero out of $]0, T[ \times \Omega$.
By combining hypothesis 3 in the statement of Theorem~\ref{t:wp} with~\cite[Theorem 3.84]{AmbrosioFuscoPallara} we obtain that $b (t, \cdot) \in BV_\mathrm{loc} (\mathbb{R}^d)$ for ${\mathscr L}^1$-a.e. $t \in \mathbb{R}$. Hence, the classical theory of $BV$ functions (see for instance~\cite[Section 3.7]{AmbrosioFuscoPallara}) ensures that the outer and inner traces $b(t, \cdot)^+_{\Sigma_r}$ and $b(t, \cdot)^-_{\Sigma_r}$ are well-defined, vector valued functions for ${\mathscr L}^1$-a.e. $t \in \mathbb{R}$ and for every $r$.
{\sc Step 2:} given $B$ as in \eqref{e:B}, we define the functions $\beta_0, \beta_r: ]0,T[ \times D \to \mathbb{R}$ by setting
$$
\beta_0 (t, x_1, \dots, x_{d-1}): = \mathrm{Tr} \,^- (B, ]0, T[ \times \Sigma_{r_0})
\big(t, x_1, \dots, x_{d-1}, f(x_1, \dots, x_{d-1})-r_0 \big)
$$
and
$$
\beta_r (t, x_1, \dots, x_{d-1}): = \mathrm{Tr} \,^+ (B, ]0, T[ \times \Sigma_{r})
\big(t, x_1, \dots, x_{d-1}, f(x_1, \dots, x_{d-1})-r\big).
$$
We claim that
\begin{equation}
\label{e:beta}
\beta_r \to \beta_0
\quad
\text{strongly in $L^1 (]0, T[ \times D )$ as $r \to r_0^+$.}
\end{equation}
To establish~\eqref{e:beta}, we first observe that by using~\cite[Theorem 3.88]{AmbrosioFuscoPallara} and an approximation argument one can show that for every
$r \in I$ we have
$$
\beta_r =
b^+_{\Sigma_r} \cdot \vec m,
\quad \text{and} \quad
\beta_0 = b^-_{\Sigma_{r_0}} \cdot \vec m \quad
\text{for ${\mathscr L}^d$-a.e.
$(t, x) \in ]0, T[ \times D$}.
$$
In the previous expression, $\vec m = (- \nabla f, 1)/ \sqrt{1 + |\nabla f|^2}$ is the unit normal vector defining the orientation of $\Sigma_r$. Also, by again combining~\cite[Theorem 3.88]{AmbrosioFuscoPallara} with an approximation argument we get that
$$
\int_0^T \int_D |\beta_r - \beta_0 | \, dx_1 \dots dx_{d-1} \, dt \leq
\int_0^T |D b (t, \cdot) | ( S ) dt,
$$
which implies~\eqref{e:beta}. In the previous expression, $|Db(t, \cdot)|$ denotes the total variation of the distributional derivative of $b(t, \cdot)$, and $S$ is the set
$$
\begin{aligned}
S:= \Big\{ (x_1, \ldots , x_{d-1}, x_d) \, : \,
&(x_1, \ldots , x_{d-1}) \in D \; \text{ and } \; \\
&f(x_1, \ldots , x_{d-1}) - r < x_d < f(x_1, \ldots , x_{d-1}) - r_0 \Big\} \,.
\end{aligned}
$$
{\sc Step 3:} we conclude the proof of Proposition~\ref{p:sc}.
First, we observe that due to Theorem~\ref{t:wc} we have that
\begin{equation}
\label{e:wc2}
\gamma_r \rightharpoonup \gamma_0 \text{ weakly in $L^2 (]0, T[ \times D )$ as $r \to r_0^+$.}
\end{equation}
Next, we recall that $\gamma_r$ is the normal trace of $u B$ and that $\beta_r$ is the trace of $B$, so that by applying~\cite[Theorem 4.2]{AmbrosioCrippaManiglia} we get
\begin{equation}
\label{e:square}
\gamma^2_r = \beta_r \mathrm{Tr} \,^+ (u^2 B, ]0, T[ \times \Sigma_{r})
\quad \text{and} \quad
\gamma^2_0 = \beta_0 \mathrm{Tr} \,^- (u^2 B, ]0, T[ \times \Sigma_{r_0}).
\end{equation}
By combining~\eqref{e:beta} with the uniform bound $\| \beta_r\|_{L^\infty} \leq \| b \|_{L^\infty}$ we infer that $\beta_r \to \beta_0$ strongly in $L^2 (]0, T[ \times D)$. Then we apply Theorem~\ref{t:wc} to $\mathrm{Tr} \,^+ (u^2 B, ]0, T[ \times \Sigma_{r})$ and hence by recalling~\eqref{e:square} we conclude that
\begin{equation}
\label{e:wc3}
\gamma^2_r \rightharpoonup \gamma^2_0 \text{ weakly in $L^2 (]0, T[ \times D )$ as $r \to r_0^+$.}
\end{equation}
By using~\eqref{e:wc2}, we get that~\eqref{e:wc3} implies that $\gamma_r \to \gamma_0$ strongly in $L^2 (]0, T[ \times D )$ and from this we eventually get~\eqref{e:sc}.
\end{proof}
\end{propos}
\section{Counter-examples}
\label{s:cex}
\subsection{Some notation and a preliminary result}
\label{ss:cex:not}
For the reader's convenience, we collect here some notation
we use in this section.
\begin{itemize}
\item Throughout all \S~\ref{s:cex}, $\Omega$ denotes the set
$]0, + \infty[ \times \mathbb{R}^2$.
\item We use the notation $(r, y) \in ]0, + \infty[ \times \mathbb{R}^2$ or, if needed, the notation $(r, y_1, y_2)~\in~]0, + \infty[ ~\times ~\mathbb{R} ~\times~\mathbb{R}$
to denote points in $\Omega$.
\item $\mathrm{div}$ denotes the divergence computed with respect to the $(r, y)$-variable.
\item $\mathrm{Div}$ denotes the divergence computed with respect to the $(t, r, y)$-variable.
\item ${\rm div}_y$ denotes the divergence computed with respect to the $y$ variable only.
\item We decompose $]0, 1[ \times \Omega$ as
$]0, 1[ \times \Omega = \Lambda^+ \cup \Lambda^- \cup \mathcal S$, where
\begin{equation}
\label{e:lambdap}
\Lambda^+ : = \{ (t, r, y) \in ]0, 1[ \times \Omega: \; r > t \}
\end{equation}
and
\begin{equation}
\label{e:lambdam}
\Lambda^- : = \{ (t, r, y) \in ]0, 1[ \times \Omega: \; r < t \},
\end{equation}
while $\mathcal S$ is the surface
\begin{equation}
\label{e:sigma}
\mathcal S : =
\{ (t, r, y) \in ]0, 1[ \times \Omega: \; r = t \}.
\end{equation}
\end{itemize}
We also observe that, thanks to~\cite[Lemma 1.3.3]{Dafermos}, up to a redefinition of $u(t,x)$ in a negligible set of times, we can assume that the map $t \mapsto u(t, \cdot)$ is continuous from $]0, 1[$ into $L^{\infty} (\Omega)$ endowed with the weak$^{\ast}$ topology, and in particular
$$
u(t, \cdot) \weaks u_0 \quad \text{in $L^{\infty} (\Omega)$
as $t \to 0^+$,}
$$
where $u_0$ is the value attained by $u$ at $t=0$, as in Lemma~\ref{l:wt}.
\subsection{Proof of Theorem~\ref{t:cex}}
\label{s:proof1}
The proof is organized in three steps.
{\sc Step 1:} we recall an intriguing example due to Depauw~\cite{Depauw} which is pivotal to our construction. In~\cite{Depauw}, Depauw explicitly exhibits a vector field $c: ]0, + \infty[ \times \mathbb{R}^2 \to \mathbb{R}^2$ satisfying the following properties:
\begin{itemize}
\item[a)] $c \in L^{\infty} (]0, + \infty[ \times \mathbb{R}^2; \mathbb{R}^2)$.
\item[b)] For every $r>0$, $c(r, \cdot )$ is piecewise smooth and, for almost every $y \in \mathbb R^2$, the characteristic curve through $y$ is well defined.
\item[c)] $ {\rm div}_y c \equiv 0$.
\item[d)] $c \in L^1_\mathrm{loc} \big( ]0, 1 [ ; BV_{\mathrm{loc}}( \mathbb{R}^2; \mathbb R^2) \big)$, but $c \notin L^1 \big(
[0, 1 [ ; BV_{\mathrm{loc}}( \mathbb{R}^2; \mathbb R^2) \big)$. Namely, the $BV$ regularity deteriorates as $r \to 0^+$.
\item[e)] The Cauchy problem
\begin{equation}
\label{e:depauw}
\left\{
\begin{array}{lll}
\partial_r w+ {\rm div}_y (c w ) = 0 & \text{on $]0, 1[ \times \mathbb{R}^2$} \\
w = 0 & \text{at $r=0$} \phantom{\int} \\
\end{array}
\right.
\end{equation}
admits a nontrivial bounded solution, which in the following we denote by $v(r, y)$.
\end{itemize}
{\sc Step 2:} we exhibit a vector field $b$ satisfying properties i), $\dots$, v) in the statement of Theorem~\ref{t:cex}. We recall that the sets $\Lambda^+$, $\Lambda^-$ and $\mathcal S$ are defined by~\eqref{e:lambdap},~\eqref{e:lambdam} and~\eqref{e:sigma}, respectively. We define the vector field $b: ]0, 1[ \times \Omega \to \mathbb{R}^3$ by setting
\begin{equation}
\label{e:b}
b(t, r, y) : =
\left\{
\begin{array}{ll}
\displaystyle{\big(1, c(r, y) \big) } & \text{in $\Lambda^-$} \\
\displaystyle{\big(1, 0 \big) } & \text{in $\Lambda^+$ } \\
\end{array}
\right.
\end{equation}
In the previous expression, $c$ is Depauw's vector field as in {\sc Step 1}. By relying on properties a), c) and d) in {\sc Step 1} one can show that $b$ satisfies properties i), ii), iii) in the statement of Theorem~\ref{t:cex}.
Next, we recall that the initial-boundary value problem~\eqref{e:cex:ibvp} admits the trivial solution $u \equiv 0$ and that any linear combination of solutions is again a solution. Hence, establishing property v) in the statement of Theorem~\ref{t:cex}
amounts to exhibiting a nontrivial solution of~\eqref{e:cex:ibvp}.
We define the function $u$ by setting
\begin{equation}
\label{e:u2}
u(t, r, y) : =
\left\{
\begin{array}{ll} v(r, y) & \text{in $\Lambda^-$} \\
0 & \text{in $\Lambda^+$, } \\
\end{array}
\right.
\end{equation}
where $v$ is the same function as in {\sc Step 1}.
{\sc Step 3:} we show that the function $u$ is a distributional solution of~\eqref{e:cex:ibvp}. We set $C:=(u, bu)$ and we observe that by construction ${\rm Div}\, C \equiv 0$ on $\Lambda^+$. Also, property e) in {\sc Step 1} implies that ${\rm Div}\, C \equiv 0$ on $\Lambda^-$. Finally, by recalling~\eqref{divergence_on_boundary} we infer that ${\rm Div}\, C \res \mathcal S =0$ since the normal trace is $0$ on both sides.
We are left to show that the initial and boundary data are attained. First, we observe that $u(t, \cdot) \weaks 0$ as $t \to 0^+$ and hence $u_0 \equiv 0$ by the weak continuity of $u$ with respect to time. Next, we fix an open and bounded set $D \subseteq \mathbb{R}^2$ and we define the family of graphs $\{ \Sigma_r \}_{r \in ]0, 1[} \subseteq ]0, 1[ \times \Omega$ by setting
\begin{equation}
\label{e:sigmar}
\Sigma_r : = \big\{
(t, r, y_1, y_2): \; t \in ]0, 1[ \; \text{and} \; (y_1, y_2) \in D \big\}.
\end{equation}
The orientation is given by the vector $(0, -1, 0, 0)$. We point out that requirement e) in {\sc Step 1} implies that $v(r, \cdot) \weaks 0$ as $r \to 0^+$. Hence, by recalling that $b$ is given by~\eqref{e:b}, we obtain that $\mathrm{Tr} \,^+ (C, \Sigma_r) \weaks 0$ as $r \to 0^+$. By recalling Theorem~\ref{t:wc} and Definition~\ref{d:rf}, we infer that $\mathrm{Tr} \, (bu) \equiv 0$ on $]0, 1[ \times \partial \Omega$.
This concludes the proof of Theorem~\ref{t:cex}.
\qed
\subsection{Proof of Theorem~\ref{t:cex2}}
The proof is divided into four main steps:
\begin{enumerate}
\item in \S~\ref{sss:betak} we construct the auxiliary vector field $\beta_k$, which will serve as a ``building block" for the construction of the vector field $b$;
\item in \S~\ref{sss:b} we define the vector field $b$;
\item in \S~\ref{sss:reg} we show that $b$ satisfies properties iii) and iv) in the statement of Theorem~\ref{t:cex2};
\item finally, in \S~\ref{sss:nt} we exhibit a non trivial solution of the initial-boundary value problem~\eqref{e:cex:ibvp2}. Since the problem is linear, any linear combination of solutions is also a solution and hence the existence of a nontrivial solution implies the existence of infinitely many different solutions.
\end{enumerate}
\subsubsection{Construction of the vector field $\beta_k$}
\label{sss:betak}
We fix $k \in \mathbb N$ and we construct the vector field $\beta_k$, which is defined on the cell
$$
(r, y_1, y_2) \in \, ]0, 4 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [.
$$
We split the $r$-interval $]0, 4 \cdot 2^{-k}[$ into four equal sub-intervals and we proceed according to the following steps. \\
{\sc Step 1:} if $r \in ] 0, 2^{-k} [$, we consider a ``three-colors chessboard" in the $(y_1,y_2)$-variables at scale $2^{-k}$ as in Figure~\ref{f:s1}, left part. The vector field $\beta_k$ attains the values $(1, 0, 0)$, $(-5, 0, 0)$ and $(0, 0, 0)$ on dashed, black and white squares, respectively. Note that $\beta_k$ satisfies
\begin{equation}
\label{e:dfs1}
{\rm div}\, \beta_k \equiv 0 \quad \text{on $]0, 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [$}
\end{equation}
since $\beta_k$ is piecewise constant and tangent at its discontinuity surfaces.
Here is the rigorous definition of $\beta_k$: we set
$$
D_k : = \bigcup _{n, m=0, 1}
] (2 n) 2^{-k} , (2n+1 )2^{-k} [ \times ] (2 m)2^{-k} , (2m+1 )2^{-k} [
$$
and
\begin{equation}
\label{e:dk}
B_k : = \bigcup _{n, m=0, 1}
] (2 n+1) 2^{-k} , (2n+2 )2^{-k} [ \times ] (2 m+1)2^{-k} , (2m+2 )2^{-k} [.
\end{equation}
Note that $D_k$ and $B_k$ are represented in the left part of Figure~\ref{f:s1} by dashed and black regions, respectively.
Next, we define
\begin{equation}
\label{e:betak1}
\beta_k(r, y_1, y_2):=
\left\{
\begin{array}{lll}
(1, 0, 0) &
\text{if $(y_1, y_2) \in D_k$} \\
\phantom{\int} \\
(-5, 0, 0) &
\text{if $(y_1, y_2) \in B_k$} \\
\phantom{\int} \\
(0, 0, 0) & \text{elsewhere on $\big] 0, 4 \cdot 2^{-k} \big[
\times \big] 0, 4 \cdot 2^{-k} \big[ $} \\
\end{array}
\right.
\end{equation}
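For the reader who wishes to experiment numerically, the chessboard above can be encoded as follows; this is only an illustrative sketch (the function name \texttt{beta\_k\_step1} is ours, and the code is a direct transcription of~\eqref{e:betak1}, valid for $r \in ]0, 2^{-k}[$).
\begin{verbatim}
def beta_k_step1(y1, y2, k):
    # value of beta_k(r, y1, y2) for r in ]0, 2^{-k}[ (Step 1)
    h = 2.0 ** (-k)                     # side length of the small squares
    i, j = int(y1 // h), int(y2 // h)   # indices of the square containing (y1, y2)
    if not (0 <= i < 4 and 0 <= j < 4):
        raise ValueError("point outside the cell ]0, 4*2^{-k}[ x ]0, 4*2^{-k}[")
    if i % 2 == 0 and j % 2 == 0:       # dashed squares D_k
        return (1.0, 0.0, 0.0)
    if i % 2 == 1 and j % 2 == 1:       # black squares B_k
        return (-5.0, 0.0, 0.0)
    return (0.0, 0.0, 0.0)              # white squares
\end{verbatim}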
\begin{figure}
\caption{The vector field $\beta_k(r, y_1, y_2)$ for different values of $r$: the dashed, black and white squares are the region where $\beta_k$ attains the values $(1, 0, 0)$, $(-5, 0, 0)$ and $(0, 0, 0)$, respectively.}
\label{f:s1}
\end{figure}
{\sc Step 2:} if $r \in ] 2^{-k}, 2 \cdot 2^{-k} [$, then the heuristic idea behind the definition of $\beta_k$ is that we want to (i) slide the rightmost dashed squares horizontally to the left and (ii) slide the leftmost black squares horizontally to the right. The final goal is that at $r = 2 \cdot 2^{-k}$ we have reached the configuration of the vector field described in Figure~\ref{f:s1}, center part. The nontrivial issue is that we also require that
\begin{equation}
\label{e:dfs2}
{\rm div}\, \beta_k \equiv 0 \quad \text{on $] 0, 2 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [$}.
\end{equation}
To achieve~\eqref{e:dfs2}, we employ the construction illustrated in Figure~\ref{f:s2}: the vector field $\beta_k$ attains the value $(1, 0, 0)$ on the horizontal part of the dashed region, the value $(1, -1, 0)$ on the inclined part of the dashed region and the value $(0, 0, 0)$ elsewhere. Note that~\eqref{e:dfs2} is satisfied because $\beta_k$ is piecewise constant and it is tangent at its discontinuity surfaces on the interval $r \in ] 2^{-k}, 2 \cdot 2^{-k} [ $. We conclude by recalling~\eqref{e:dfs1} and by observing that the normal trace is continuous at the discontinuity surface $r = 2^{-k}$ and hence no divergence is created there.
Here is the rigorous definition of $\beta_k$ for $r \in ] 2^{-k}, 2 \cdot 2^{-k} [$:
\begin{equation}
\label{e:betak2}
\beta_k(r, y_1, y_2):=
\left\{
\begin{array}{lll}
(1, 0, 0) &
\text{if $(y_1, y_2) \in ]0, 2^{-k}[ \times ]0, 2^{-k}[$} \\
\phantom{\int} & \text{or $(y_1, y_2) \in ]0, 2^{-k}[ \times ]2 \cdot 2^{-k},
3 \cdot 2^{-k}[ $ }\\
\phantom{\int} \\
(1, -1, 0) &
\text{if $ - r + 3 \cdot 2^{-k} < y_1 < - r + 4 \cdot 2^{-k}$} \\
\phantom{\int} & \text{and $y_2 \in ]0, 2^{-k}[$ or
$y_2 \in ]2 \cdot 2^{-k},
3 \cdot 2^{-k}[$ }\\
\phantom{\int} \\
(-5, 0, 0) &
\text{if $(y_1, y_2) \in ]3 \cdot 2^{-k}, 4 \cdot 2^{-k}[ \times ] 2^{-k}, 2 \cdot 2^{-k}[$} \\
\phantom{\int} & \text{or $(y_1, y_2) \in ]3 \cdot 2^{-k}, 4 \cdot 2^{-k}[
\times ]3 \cdot 2^{-k}, 4 \cdot 2^{-k}[$ } \\
\phantom{\int} \\
(-5, -5, 0) &
\text{if $ r < y_1 < r + 2^{-k}$} \\
\phantom{\int} & \text{and $y_2 \in ] 2^{-k}, 2 \cdot 2^{-k}[$ or
$y_2 \in ]3 \cdot 2^{-k}, 4 \cdot 2^{-k}[ $}\\
\phantom{\int} \\
(0, 0, 0) & \text{elsewhere on $\big] 0, 4 \cdot 2^{-k} \big[
\times \big] 0, 4 \cdot 2^{-k} \big[ $} \\
\end{array}
\right.
\end{equation}
\begin{figure}
\caption{The vector field $\beta_k(r, y_1, y_2)$ for $y_2 \in ]0, 2^{-k}[$.}
\label{f:s2}
\end{figure}
{\sc Step 3:} if $r \in ] 2 \cdot 2^{-k}, 3 \cdot 2^{-k} [$, the heuristic idea is defining $\beta_k$ in such a way that (i) we push up the lower black region in Figure~\ref{f:s1}, central part, (ii) we pull down the upper dashed region in Figure~\ref{f:s1}, central part and (iii) we satisfy the requirement that $\beta_k$ is divergence-free. This is done by basically using the same construction as in {\sc Step 2}. Note that at $r = 3 \cdot 2^{-k}$ we have reached the configuration described in Figure~\ref{f:s1}, right part.
Here is the rigorous definition of $\beta_k$ for $r \in ]2 \cdot 2^{-k}, 3 \cdot 2^{-k}[$:
$$
\beta_k(r, y_1, y_2):=
\left\{
\begin{array}{lll}
(1, 0, 0) &
\text{if $(y_1, y_2) \in ]0, 2 \cdot 2^{-k}[ \times ]0, 2^{-k}[$} \\
\phantom{\int} \\
(1, 0, -1) &
\text{if $y_1 \in ]0, 2 \cdot 2^{-k}[$} \\
\phantom{\int} & \text{and $ - r + 4 \cdot 2^{-k} < y_2 < - r + 5 \cdot 2^{-k}$} \\
\phantom{\int} \\
(-5, 0, 0) &
\text{if $(y_1, y_2) \in
]2 \cdot 2^{-k}, 4 \cdot 2^{-k}[ \times ]3 \cdot 2^{-k}, 4 \cdot 2^{-k}[$ } \\
\phantom{\int} \\
(-5, 0, -5) &
\text{if $ y_1 \in ]2 \cdot 2^{-k}, 4 \cdot 2^{-k}[ $} \\
\phantom{\int} & \text{and $ r - 2^{-k} < y_2 < r $}\\
\phantom{\int} \\
(0, 0, 0) & \text{elsewhere on $\big] 0, 4 \cdot 2^{-k} \big[
\times \big] 0, 4 \cdot 2^{-k} \big[ $} \\
\end{array}
\right.
$$
{\sc Step 4:} if $r \in ] 3 \cdot 2^{-k}, 4 \cdot 2^{-k} [$, then we consider the ``three-colors chessboard" in the $(y_1,y_2)$-variables at scale $2 \cdot 2^{-k}$ illustrated in Figure~\ref{f:s1}, right part. The vector field $\beta_k$ attains the values $(1, 0, 0)$, $(-5, 0, 0)$ and $(0, 0, 0)$ on dashed, black and white regions, respectively.
Here is the rigorous definition of $\beta_k$ for $r \in ]3 \cdot 2^{-k}, 4 \cdot 2^{-k}[$:
$$
\beta_k(r, y_1, y_2):=
\left\{
\begin{array}{lll}
(1, 0, 0) &
\text{if $(y_1, y_2) \in ]0, 2 \cdot 2^{-k}[ \times ]0, 2 \cdot 2^{-k}[$} \\
\phantom{\int} \\
(-5, 0, 0) &
\text{if $(y_1, y_2) \in
]2 \cdot 2^{-k}, 4 \cdot 2^{-k}[ \times ]2 \cdot 2^{-k}, 4 \cdot 2^{-k}[$ } \\
\phantom{\int} \\
(0, 0, 0) & \text{elsewhere on $\big] 0, 4 \cdot 2^{-k} \big[
\times \big] 0, 4 \cdot 2^{-k} \big[ $} \\
\end{array}
\right.
$$
Note that by construction
\begin{equation}
\label{e:dfs4}
{\rm div}\, \beta_k \equiv 0 \quad \text{on $]0, 4 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [ \times ]0, 4 \cdot 2^{-k} [$}.
\end{equation}
\subsubsection{Construction of the vector field $b$}
\label{sss:b}
We now define the vector field $b$ by using as a ``building block" the vector field $\beta_k$ defined in \S~\ref{sss:betak}. We proceed in three steps. \\
{\sc Step A:} we extend $\beta_k$ to $]0, 2^{2-k}[ \times \mathbb{R}^2$ by imposing that it is $2^{2-k}$-periodic in both $y_1$ and $y_2$, namely we set
\begin{equation}
\label{e:per}
\beta_k (r, y_1 + m 2^{2-k}, y_2 + n 2^{2-k} ): = \beta_k (r, y_1, y_2)
\end{equation}
for every $m, n \in \mathbb Z$
and
$(y_1, y_2) \in \big] 0, 2^{2-k} \big[
\times \big] 0, 2^{2-k} \big[$.
We recall~\eqref{e:dfs4} and we observe that $\beta_k$ is tangent at the surfaces $y_1= m 2^{2-k}$ and $y_2 = n 2^{2-k}$, $m, n \in \mathbb Z$. We therefore get
\begin{equation}
\label{e:dfsa}
{\rm div}\, \beta_k \equiv 0 \quad \text{on $]0, 2^{2-k} [ \times \mathbb{R}^2$}.
\end{equation}
{\sc Step B:} we define the vector field $b (t, r, y_1, y_2)$ on the set $\Lambda^-$ defined by~\eqref{e:lambdam}. To this end, we introduce the decomposition
\begin{equation}
\label{e:dec}
]0, 1[: = \mathcal N \cup \bigcup_{k =3}^{\infty} I_k ,
\end{equation}
where $\mathcal N$ is an ${\mathscr L}^1$-negligible set and
$$
I_k : =
\left\{
\begin{array}{lll}
]1/2, 1 [ & \text{if $k =3$} \\
\phantom{\int} \\
\displaystyle{ \Big] 1- \sum_{j=3}^{k} 2^{2-j}, 1 - \sum_{j=3}^{k-1} 2^{2-j}
\Big[ }
& \text{if $k \ge 4$}. \\
\end{array}
\right.
$$
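For later use, observe that $\sum_{j=3}^{k} 2^{2-j} = 1 - 2^{2-k}$, so that the above definition reduces to
$$
I_k = \big] 2^{2-k}, 2^{3-k} \big[ \qquad \text{for every $k \ge 3$},
$$
and $\mathcal N$ is the countable set $\{ 2^{-m} : m \ge 1 \}$.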
We then set
\begin{equation}
\label{e:brt}
b(t, r, y_1, y_2) : =
\beta_k (r - 1 + \sum_{j=3}^{k} 2^{2-j}, y_1, y_2) \quad \text{in $\Lambda^-$, when $r \in I_k$.}
\end{equation}
Some remarks are in order here. First, to illustrate the heuristic idea underlying definition~\eqref{e:brt} we focus on the behavior at time $t=1$. The vector field $b(1, \cdot)$ behaves like $\beta_3$ on the interval $r \in ]1/2, 1[$, like $\beta_4$ on the interval $r \in ]1/4, 1/2[$, like $\beta_5$ on the interval $r \in ]1/8, 1/4[$, and so on. In other words, as $r \to 0^+$ the $r$-component of the vector field $b$ oscillates between the values $1$, $-5$ and $0$ on a finer and finer ``three-colors chessboard".
Second, we recall~\eqref{e:dfsa} and we observe that the vector field is continuous at the surfaces $r = 1 - \sum_{j=3}^{k-1} 2^{2-j}$, $k \ge 4$. Hence,
\begin{equation}
\label{e:dfsb}
{\rm div}\, b \equiv 0 \quad \text{on $\Lambda^-$}
\end{equation}
where the set $\Lambda^-$ is the same as in~\eqref{e:lambdam}.
{\sc Step C:} we define the vector field $b (t, r, y_1, y_2)$ on the set $\Lambda^+$ defined by~\eqref{e:lambdap}. To this end, we use the same decomposition of the unit interval as in~\eqref{e:dec} and we set
\begin{equation}
\label{e:btr}
b(t, r, y_1, y_2) : = (1, 0, 0) \mathbf{1}_{D(t)} +
(-5, 0, 0) \mathbf{1}_{B(t)} \quad \text{in $\Lambda^+$, when $t \in I_k$}.
\end{equation}
In the previous expression, $\mathbf{1}_{D(t)}$ denotes the characteristic function of the set
$$
D(t) : = \Big\{ (r, y_1, y_2): \beta_k (t - 1 + \sum_{j=3}^{k} 2^{2-j}, y_1, y_2) \cdot (1, 0, 0) =1 \Big\}
$$
and $\mathbf{1}_{B(t)}$ is the characteristic function of the set
$$
B(t) : = \Big\{ (r, y_1, y_2):
\beta_k (t - 1 + \sum_{j=3}^{k} 2^{2-j}, y_1, y_2) \cdot (1, 0, 0) =-5 \Big\}
$$
Hence, we have
$
{\rm div}\, b \equiv 0$ on $\Lambda^+$
since $b$ is piecewise constant and tangent at the discontinuity surfaces.
Next, we observe that by construction for any $t \in ]0,1[$ the normal trace of $b$ is continuous at the surface $\{ (r, y_1, y_2) : \; r=t \}\subseteq \Omega$. By recalling~\eqref{e:dfsb}, we arrive at
\begin{equation}
\label{e:dfsc}
{\rm div}\, b \equiv 0 \quad \text{on $]0, 1[ \times \Omega$}.
\end{equation}
\subsubsection{Proof of the regularity and trace properties}
\label{sss:reg}
We first observe that, for every open and bounded set $\Omega_\ast$ such that $\bar \Omega_\ast \subseteq \Omega$, we have $b \in L^1 ( [0, T[; BV (\Omega_\ast ; \mathbb{R}^3))$ since it is piecewise constant on $\Omega_\ast$. However, note that the $BV$ regularity degenerates at the boundary $r=0$.
Next, we prove that $\mathrm{Tr} \, b \equiv 1$: we use the family of graphs $\{ \Sigma_r \}_{r \in ]0, 1[}$ defined as in~\eqref{e:sigmar}. By relying on Theorem~\ref{t:wc}, we infer that proving that $\mathrm{Tr} \, b \equiv 1$ amounts to showing that there is a sequence $r_k \to 0^+$ such that
$
\alpha_{r_k} \weaks 1
$
weakly$^{\ast}$ in $L^{\infty} (]0, 1[ \times D, {\mathscr L}^3 \res ]0, 1 [ \times D)$ as $k \to + \infty$, for every $D$ open and bounded in~$\mathbb{R}^2$. Hence, we can conclude by choosing $r_k: = 2^{2-k}$.
\subsubsection{Construction of a nontrivial solution of the initial-boundary value problem~\eqref{e:cex:ibvp2}}
\label{sss:nt}
To exhibit a nontrivial solution of~\eqref{e:cex:ibvp2} we proceed as follows: first, we give the rigorous definition, next we make some heuristic remarks and finally we show that $u$ is actually a distributional solution
of~\eqref{e:cex:ibvp2}.
We recall the decomposition $]0, 1[\times \Omega = \Lambda^+ \cup \Lambda^- \cup \mathcal S$,
where $\Lambda^+$, $\Lambda^-$ and $\mathcal S$ are defined by~\eqref{e:lambdap},~\eqref{e:lambdam} and~\eqref{e:sigma}, respectively. We define the function $u: ]0, 1[ \times \Omega \to \mathbb{R}$ by setting
\begin{equation}
\label{e:vu}
u (t, r, y_1, y_2) : =
\left\{
\begin{array}{ll}
1 & (t, r, y_1, y_2) \in \Lambda^- \; \text{and} \;
b(t, r, y_1, y_2) \cdot (1, 0,0) =1 \\
0 & \text{elsewhere in $]0, 1[ \times \Omega$}.
\end{array}
\right.
\end{equation}
The heuristic idea behind this definition is as follows. We have defined the vector field $b$ in such a way that, although $b$ is overall outward pointing (namely, $\mathrm{Tr} \, b > 0$), there are actually countably many regions where $b$ is
inward pointing (namely, its $r$-component is strictly positive), and these regions accumulate and mix at the domain boundary: they are represented by the dashed squares in Figure~\ref{f:s1}. The function $u$ is defined in such a way that $u$ is transported along the characteristics (which are well defined for a.e. $(r, y)$ in the domain interior) and it is nonzero only on the regions where $b$ is inward pointing. As a result, although $b$ is overall outward pointing, it actually carries the nontrivial function $u$ into the domain. This behavior is made possible by the breakdown of the $BV$
regularity of $b$ at the domain boundary.
We now show that $u$ is a distributional solution of the initial-boundary value problem~\eqref{e:cex:ibvp2}.
First, we observe that $u(t, \cdot) \weaks 0$ as $t \to 0^+$ and hence the weak continuity of $u$ with respect to the time implies that the initial datum is satisfied.
We then set $C:= (u, bu)$ and we observe that
${\rm Div}\, C =0$ on $\Lambda^+$ because $C$ is identically $0$ there. Next, we observe that the vector field $b$ is constant with respect to $t$ in $\Lambda^-$ and, by recalling that $u$ is defined as in~\eqref{e:vu}, we infer that $u$ is also constant with respect to $t$ in $\Lambda^-$. Hence, showing that ${\rm Div}\, C\equiv 0$ in $\Lambda^-$ amounts to showing that
$
{\rm div}\, (bu) \equiv 0
$
in $\Lambda^-$. This can be done by relying on the same arguments we used to obtain~\eqref{e:dfsc}.
Finally, we observe that the normal vector to the surface $\mathcal S$ is (up to an arbitrary choice of the orientation)
$
\vec n : = (1/ \sqrt{2}, -1/ \sqrt{2}, 0, 0).
$
Hence, by construction the normal trace of $C$ is zero on both sides of the surface $\mathcal S$: indeed, on the $\Lambda^-$ side one has $C \cdot \vec n = \big( u - u \, b \cdot (1,0,0) \big)/\sqrt{2}$, which vanishes because $u$ is supported where the first component of $b$ equals $1$, while $C \equiv 0$ on $\Lambda^+$. We conclude that ${\rm Div}\, C \res \mathcal S =0$.
This concludes the proof of Theorem~\ref{t:cex2}.
\qed
\subsection{Proof of Corollary~\ref{c:dpt}}
\label{ss:rm}
We first describe the heuristic idea underlying the construction of the vector field $b$. Loosely speaking, we proceed as in the proof of Theorem~\ref{t:cex2}, but we modify the values of the ``building block" $\beta_k$ on the subinterval $r \in ]0, 2^{-k}[$. Indeed, instead of defining $\beta_k$ as in {\sc Step 1} of \S~\ref{sss:betak}, we introduce nontrivial components in the $(y_1, y_2)$-directions. These non-trivial components are reminiscent of the construction in Depauw~\cite{Depauw} and the resulting vector field can be actually regarded as a localized version of Depauw's vector field. In particular, they enable us to construct a solution that oscillates between $1$, $-1$ and $0$ and undergoes a finer and finer mixing as $r \to 0^+$.
The technical argument is organized in two steps: in \S~\ref{sss:ldp} we introduce the ``localized version" of Depauw vector field, while in \S~\ref{sss:con} we conclude the proof of Corollary~\ref{c:dpt}. Before proceeding, we introduce the following notation:
\begin{itemize}
\item $Q_k$ is the square $(y_1, y_2) \in ]0, 2^{-k}[ \times ]0, 2^{-k}[$;
\item $S_k$ is the square $(y_1, y_2) \in ]0, 2^{2-k}[ \times ]0, 2^{2-k}[$.
\end{itemize}
\subsubsection{A localized version of Depauw~\cite{Depauw} vector field}
\label{sss:ldp}
We construct the vector field $\alpha_k$, which is defined on the cell ${(r, y_1, y_2) \in ]0, 2^{-k}[ \times Q_k}$. Also, for this construction we regard $r$ as a time-like variable and we describe how a given initial datum evolves under the action of $\alpha_k$. The argument is divided into steps.
{\sc Step 1:} we construct the ``building block" $a_k$, which is defined on the square $(y_1, y_2) \in ]- 2^{ -2-k}, 2^{-2-k}[ \times ]- 2^{-2-k}, 2^{-2-k}[$ by setting
\begin{equation}
\label{e:ak}
a_k (y_1, y_2) =
\left\{
\begin{array}{ll}
(0, - 2 y_1) & |y_1| > |y_2| \\
(2 y_2, 0) & |y_1| < |y_2| \,. \\
\end{array}
\right.
\end{equation}
Note that $a_k$ takes values in $\mathbb{R}^2$, it is divergence free and it is tangent at the boundary of the square.
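In more concrete terms, $a_k$ can be evaluated by the following short sketch (the helper name \texttt{a\_k} is ours; the code is a direct transcription of~\eqref{e:ak}):
\begin{verbatim}
def a_k(y1, y2):
    # building block of (e:ak) on the square ]-2^{-2-k}, 2^{-2-k}[^2:
    # within each region the divergence vanishes, and on the sides of
    # the square the normal component is zero
    if abs(y1) > abs(y2):
        return (0.0, -2.0 * y1)
    if abs(y1) < abs(y2):
        return (2.0 * y2, 0.0)
    return (0.0, 0.0)   # on the diagonals (a negligible set)
\end{verbatim}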
{\sc Step 2:} we define the function $\bar z_k: Q_k \to \mathbb{R}$ by considering the chessboard illustrated in Figure~\ref{f:dp4}, left part. The function $\bar z_k$ attains the value $-1$ and $1$ on white and black squares, respectively.
\begin{figure}
\caption{The action of the vector field $\alpha_k (r, \cdot)$ on the solution $z_k$ on the interval $r \in ]0, 2^{-2-k}[$.}
\label{f:dp4}
\end{figure}
{\sc Step 3:} we begin the construction of the vector field
$
\alpha_k: ]0, 2^{-k}[ \times Q_k \to \mathbb{R}^2.
$
If $r \in ]0, 2^{-2-k}[$, then $\alpha_k(r, \cdot)$ is defined by setting
\begin{equation}
\label{e:alphak}
\alpha_k (r, y_1, y_2) =
\left\{
\begin{array}{ll}
a_k (y_1- 2^{-1-k}, y_2 -2^{-1-k}) \\
\qquad \text{if $(y_1, y_2) \in ] 2^{-2-k}, 3 \cdot 2^{-2-k} [ \times ] 2^{-2-k}, 3 \cdot 2^{-2-k} [$}\\ \phantom{ciao} \\
(0, 0) \; \text{elsewhere on $Q_k$}.
\end{array}
\right.
\end{equation}
See Figure~\ref{f:dp4}, central part, for a representation of the values attained by $\alpha_k$ on the interval $r \in ]0, 2^{-2-k}[$.
We term $z_k$ the solution of the initial-boundary value problem
\begin{equation}
\label{e:zk1}
\left\{
\begin{array}{lll}
\partial_r z_k + {\rm div}_y ( \alpha_k z_k ) = 0 &
\text{in $]0, 2^{-k}[ \times Q_k$ } \\
z_k = \bar z_k & \text{at $r=0$,} \\
\end{array}
\right.
\end{equation}
where $\bar z_k$ is defined as in {\sc Step 2}. Note that by construction ${\rm div}_y \alpha_k =0$ and therefore the first line of~\eqref{e:zk1} is actually a transport equation. Hence, the value attained by the function $z_k$ can be determined by the classical method of characteristics. In particular, the function $z_k (2^{-2-k}, \cdot)$ is represented in Figure~\ref{f:dp4}, right part, and it attains the values $1$ and $-1$ on black and white squares, respectively.
{\sc Step 4:} if $r \in ]2^{-2-k}, 3 \cdot 2^{-2-k}[$, then $\alpha_k(r, \cdot)$ is defined by setting
$$
\alpha_k (r, y_1, y_2) =
a_k (y_1- i 2^{-2-k}, y_2 - j 2^{-2-k})
$$
if
$$
(y_1, y_2) \in ](i-1) 2^{-2-k}, (i+1) \cdot 2^{-2-k} [ \times ] (j-1) 2^{-2-k}, (j+1) \cdot 2^{-2-k} [,
$$
where $i, j$ can be either $1$ or $3$. See Figure~\ref{f:dp3}, central part, for a representation of the values attained by $\alpha_k$ on the interval $r \in ]2^{-2-k}, 3 \cdot 2^{-2-k}[$. Note that by construction ${\rm div}_y \alpha_k \equiv 0$ on $]0, 3 \cdot 2^{-2-k}[ \times Q_k$ and hence the solution $z_k$ of~\eqref{e:zk1} evaluated at $r = 3 \cdot 2^{-2-k}$ is as in Figure~\ref{f:dp3}, right part: as usual, the black and white squares represent the regions where $z_k (3 \cdot 2^{-2-k}, \cdot)$ attain the values $1$ and $-1$, respectively.
\begin{figure}
\caption{The action of the vector field $\alpha_k (r, \cdot)$ on the solution $z_k$ on the interval $r \in ]2^{-2-k}, 3 \cdot 2^{-2-k}[$.}
\label{f:dp3}
\end{figure}
{\sc Step 5:} if $r \in ]3 \cdot 2^{-2-k}, 4 \cdot 2^{-2-k}[$, then $\alpha_k(r, \cdot)$ is again defined by~\eqref{e:alphak}. Hence, the values attained by $z_k(2^{-k}, \cdot)$ are those represented in Figure~\ref{f:dp1}, right part.
\begin{figure}
\caption{The action of the vector field $\alpha_k (r, \cdot)$ on the solution $z_k$ on the interval $r \in ]3 \cdot 2^{-2-k}, 4 \cdot 2^{-2-k}[$.}
\label{f:dp1}
\end{figure}
\subsubsection{Conclusion of the proof}
\label{sss:con}
Loosely speaking, the proof of Corollary~\ref{c:dpt} is concluded by combining the construction described in \S~\ref{sss:ldp} with the proof of Theorem~\ref{t:cex2}. The argument is divided in four steps.
{\sc Step A:} we define the vector field $\tilde \beta_k$ and the solution $u_k$ on $(r, y_1, y_2) \in ]0, 2^{-k}[ \times S_k$, where $S_k$ is the square $ ]0, 2^{2-k}[ \times ]0, 2^{2-k}[$.
We recall the definition of $B_k$ provided by~\eqref{e:dk} and we set
\begin{equation*}
\tilde \beta_k (r, y_1, y_2): =
\left\{
\begin{array}{lll}
\big(1, \alpha_k (r, y_1, y_2) \big)
& (y_1, y_2) \in ]0, 2^{-k} [ \times ]0, 2^{-k}[ \\
\phantom{\int} \\
\big(1, \alpha_k (r, y_1-2 \cdot 2^{-k}, y_2) \big)
& (y_1, y_2) \in ]2 \cdot 2^{-k}, 3 \cdot 2^{-k} [ \times ]0, 2^{-k}[ \\
\phantom{\int} \\
\big(1, \alpha_k (r, y_1, y_2- 2 \cdot 2^{-k}) \big)
& (y_1, y_2) \in ]0, 2^{-k}[ \times ]2 \cdot 2^{-k}, 3 \cdot 2^{-k} [\\
\phantom{\int} \\
\big(1, \alpha_k (r, y_1- 2 \cdot 2^{-k}, y_2- 2 \cdot 2^{-k}) \big)
& (y_1, y_2) \in
]2^{-k}, 3 \cdot 2^{-k} [ \times ]2^{-k}, 3 \cdot 2^{-k} [\\
\phantom{\int} \\
(-5, 0, 0) & (y_1, y_2) \in B_k \\
\phantom{\int} \\
(0, 0, 0) & \text{elsewhere in $S_k$} \\
\end{array}
\right.
\end{equation*}
Note that, basically, the definition of $\tilde \beta_k$ is obtained from~\eqref{e:betak1} by changing the value of the vector field on $D_k$ and inserting as a component in the $(y_1, y_2)$-directions the vector field $\alpha_k$ constructed in \S~\ref{sss:ldp}.
Also, we define the function $u_k$ by setting
\begin{equation*}
u_k (r, y_1, y_2): =
\left\{
\begin{array}{lll}
z_k (r, y_1, y_2)
& (y_1, y_2) \in ]0, 2^{-k} [ \times ]0, 2^{-k}[ \\
\phantom{\int} \\
z_k (r, y_1- 2 \cdot 2^{-k}, y_2)
& (y_1, y_2) \in ]2 \cdot 2^{-k}, 3 \cdot 2^{-k} [ \times ]0, 2^{-k}[ \\
\phantom{\int} \\
z_k (r, y_1, y_2- 2 \cdot 2^{-k})
& (y_1, y_2) \in ]0, 2^{-k}[ \times ]2 \cdot 2^{-k}, 3 \cdot 2^{-k} [\\
\phantom{\int} \\
z_k (r, y_1- 2 \cdot 2^{-k}, y_2- 2 \cdot 2^{-k})
& (y_1, y_2) \in
]2 \cdot 2^{-k}, 3 \cdot 2^{-k} [ \times
]2 \cdot 2^{-k}, 3 \cdot 2^{-k} [\\
\phantom{\int} \\
0 & \text{elsewhere in $S_k$}, \\
\end{array}
\right.
\end{equation*}
where $z_k$ is the same function as in \S~\ref{sss:ldp}.
{\sc Step B:} we define the vector field $\tilde \beta_k$ and the solution $u_k$ for $(r, y_1, y_2) \in ]2^{-k}, 2^{2-k}[ \times S_k$.
We set
$
\tilde \beta_k (r, y_1, y_2) : = \beta_k (r, y_1, y_2)
$, where $\beta_k$ denotes the same vector field as in \S~\ref{sss:betak}. The function $u_k$ satisfies
$$
\partial_r u_k + {\rm div}_y (\tilde \beta_k u_k) =0.
$$
Since ${\rm div}_y \tilde \beta_k =0$, the values attained by $u_k$ for $(r, y_1, y_2) \in ]2^{-k}, 2^{2-k}[ \times S_k$ can be computed by the classical method of characteristics. To provide a heuristic intuition of the behavior of $u_k$, we refer to Figure~\ref{f:s1}, center and right part, and we point out that $u_k$ attains the value $0$ on white and black areas, while on dashed areas it attains the same values as in Figure~\ref{f:dp1}, right part.
{\sc Step C:} we extend $\tilde \beta_k$ and $u_k$ to $]0, 2^{2-k}[ \times \mathbb{R}^2$ by periodicity by proceeding as in~\eqref{e:per}.
{\sc Step D:} we finally define a vector field $b$ and the function $u$. We recall the decomposition~\eqref{e:dec} and we define $b$ as in~\eqref{e:brt} and~\eqref{e:btr}, replacing $\beta_k$ with $\tilde \beta_k$. Also, we define $u$ by setting
$$
u(t, r, y_1, y_2) =
\left\{
\begin{array}{lll}
u_k (r, y_1, y_2) & \text{in $\Lambda^-$, when $r \in I_k$} \\
0 & \text{in $\Lambda^+$}.
\end{array}
\right.
$$
By arguing as in the proof of Theorem~\ref{t:cex2}, one can show that $u$ and $b$ satisfy requirements i), $\dots$, v) in the statement of Theorem~\ref{t:cex2} and that moreover $\mathrm{Tr} \, (bu) \equiv 0$. This concludes the proof of Corollary~\ref{c:dpt}.
\qed
\section*{Acknowledgments}
The construction of the counter-examples exhibited in the proofs of Theorem~\ref{t:cex2} and Corollary~\ref{c:dpt} was inspired by a related example due to Stefano Bianchini. Also, the authors wish to express their gratitude to Wladimir Neves for pointing out reference~\cite{Boyer}. Part of this work was done when Spinolo was affiliated with the University of Zurich, which she thanks for the nice hospitality. Donadello and Spinolo thank the University of Basel for the kind hospitality during their visits. Crippa is partially supported by the
SNSF grant 140232, while Donadello acknowledges partial
support from the ANR grant CoToCoLa.
\end{document} |
\begin{document}
\title[Multiplicative refinement paths to Brownian motion]{Bilateral Canonical
Cascades: Multiplicative Refinement Paths to Wiener's and Variant Fractional
Brownian Limits}
\author{Julien Barral}
\author{Beno\^\i t Mandelbrot}
\address{INRIA Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France}
\email{[email protected]}
\address{Pacific Northwest National Laboratory, 222 Third Street, Cambridge,
MA \ 02142, USA}
\email{[email protected]}
\subjclass[2000]{60G57, 60F10, 28A80, 28A78 }
\keywords{Random functions, Martingales, Central Limit Theorem, Brownian Motion, Fractals, Hausdorff dimension}
\thanks{The authors thank Pierre-Lin Pommier for kind help in the numerical simulations.}
\begin{abstract}
The original density is 1 for $t\in (0,1)$, $b$ is an integer base ($b\geq 2$
), and $p\in (0,1)$ is a parameter. The first construction stage divides the
unit interval into $b$ subintervals and multiplies the density in each
subinterval by either $1$ or $-1$ with the respective frequencies of $\frac{1
}{2}+\frac{p}{2}$ and $\frac{1}{2}-\frac{p}{2}$. It is shown that the
resulting density can be renormalized so that, as $n\rightarrow \infty $ ($n$
being the number of iterations) the signed measure converges in some sense
to a non-degenerate limit. If $H=1+\log _{b} p>1/2$, hence $p>b^{-1/2}$,
renormalization creates a martingale, the convergence is strong, and
the limit shares the H\"{o}lder and Hausdorff properties of the fractional
Brownian motion of exponent $H$. If $H\leq 1/2$, hence $p\leq b^{-1/2}$,
this martingale does not converge. However, a different normalization
can be applied, for $H\leq \frac{1}{2}$ to the martingale itself and for $H>
\frac{1}{2}$ to the discrepancy between the limit and a finite
approximation. In all cases the resulting process is found
to converge weakly to the Wiener Brownian motion, independently of $H$ and
of $b$. Thus, to the usual additive paths toward Wiener measure, this
procedure adds an infinity of multiplicative paths.
\end{abstract}
\maketitle
\section{Introduction}
To motivate and clarify a new construction, this introduction compares it with others that are widely familiar. After a non-random construction has been randomized, its outcome may range
from "loosening up" slightly to changing
completely. Both possibilities, as well as intermediate ones, enter in this
paper. The point of departure is a family of non-random "cartoon" functions \cite{Maffinity} that are constructed by multiplicative interpolation. Designed as counterparts of Wiener Brownian motion \cite
{Wiener1,Wiener2} or fractional Brownian motion \cite{Kol,M2}, they have
proven to be very useful in teaching and in applications. They, in turn, are
made random in this paper, in a way that seems a "natural
inverse" but actually fails to be a straightforward step
back to the original. The fact that it reveals new interesting phenomena
suggests that the study of fractals/multifractals continues to be in large
part driven by novel special constructions with odd properties, and not only
by a general theory. Those non-random cartoons, together with a few other
examples, contradict the widely held belief that multifractal functions
(variable H\"{o}lder's $H$) are constructed by
"multiplicative chaos" and unifractal functions (uniform H\"{o}lder's $H$), by "additive chaos".
The non-random prototype described in \cite{Maffinity} is the crudest
cartoon of Wiener Brownian motion illustrated in Figure~\ref{wienerbrownianmotion}. The initiator joins the points $(0,0)$ and $(1,1)$.
The base is $b=4$ and the generator $G(t)$ is graphed by four intervals of
slope $2$ or $-2$ forming a piecewise linear continuous graph linking the
following points: $\left( 0,0\right) $, $(\frac{1}{4},\frac{1}{2})$, $(\frac{
1}{2},0)$, $(\frac{3}{4},\frac{1}{2})$, and $(1,1)$. Recursive interpolation
using this generator yields a curve characterized by the Fickian exponent $H=
\frac{1}{2}$.
\begin{center}
\begin{figure}
\caption{A Basic non-random cartoon of Wiener Brownian motion. Stage 2 (lower left). Stage 3 (lower right). Stage 4 (top panel).}
\label{wienerbrownianmotion}
\end{figure}
\end{center}
A very limited randomization, described as "shuffling", moves the interval of slope $-2$ along the
abscissa from the second position to a randomly chosen position. Shuffling is
a familiar step in binomial or multinomial multifractal measures. More
interesting is the more thorough randomization introduced in \cite{M1} and called "canonical". In this context it chooses each of the four
intervals of the generator at random, independently of the others, so that
increasing and decreasing intervals have probabilities equal to their
frequencies in the original cartoon. Here $p_{+}=\frac{3}{4}$ and $p_{-}=
\frac{1}{4}$. The increment $G(1)-G(0)$ is no longer equal to $1$, but
random with the expected value $1$. As a result, the construction is no
longer a recursive interpolation and can be called a recursive refinement.
A more general construction of a non-random cartoon has an arbitrary base $
b>3$ and a continuous piecewise linear generator made of $b$ intervals of
slope $\frac{b}{c}$, where $c\le b$ is a second integer base. In that case,
defining $H$ by $c=b^{H}$ and $p$ as $p=b^{H-1}$, the frequencies $f_{+}$
and $f_{-}$ are $f_{+}=\frac{1}{2}+\frac{1}{2}\frac{c}{b}=\frac{1}{2}+\frac{
b^{H-1}}{2}=\frac{1}{2}+\frac{p}{2}$ and $f_{-}=\frac{1}{2}-\frac{p}{2}$.
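For instance, the crude cartoon of Figure~\ref{wienerbrownianmotion} has $b=4$ and slopes $\pm 2$, hence $c=2$, $H=\frac{\log 2}{\log 4}=\frac{1}{2}$ and $p=4^{-1/2}=\frac{1}{2}$, so that $f_{+}=\frac{3}{4}$ and $f_{-}=\frac{1}{4}$, in agreement with the values of $p_{+}$ and $p_{-}$ quoted above.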
The limit "cartoon" is of unbounded variation and shares the H\"{o}lder and Hausdorff properties of Wiener or Fractional Brownian Motion with
the same $H$. In the Brownian motions and their cartoons, the correlation
between past and future is known to take the form $2^{2H-1}-1$. It is
positive, lying in $\left( 0,1\right)$, in the persistent case $H>\frac{1}{2}$,
and negative, lying in $\left( -\frac{1}{2},0\right)$, in the antipersistent case
$H<\frac{1}{2}$.
A canonical randomization of any of the Brownian cartoons can now be
described. A first step consists in making all those frequencies into
probabilities. A second step consists in eliminating various constraints on $
H$ and $p$ that are due to their origin in cartoons. The non-random
cartoons of both the Wiener and fractional Brownian motions require that $
0<H<1 $ and that $H$ be a ratio of logarithms of integers, of the form $
\frac{\log c}{\log b}$. We shall allow $p$ to vary from $0$ to $1$, which
implies $-\infty <H<1$, and leave $c$ unrestricted, allowing it even to be
smaller than 1.
This paper's object is to describe the limits of the functions $\left\{
B^H_{n}\right\} _{n\geq 1}$ generated by this procedure.
The derivatives of the functions $\left\{
B^H_{n}\right\} _{n\geq 1}$ (in the sense of distributions) form a signed
measure-valued martingale. Moreover, $B^H_n$ is absolutely continuous, and the correlation of the derivatives of $B^H_n$ and $B^H_{n+1}$ is equal to $p$ almost everywhere. For the classic positive canonical cascades \cite{M1}, those martingales converge strongly to a limit.
But that limit can degenerate to 0, and, if so, no normalization
yielding a non zero limit is known.
For the bilateral canonical cascades
considered in this paper, the situation will be shown to be altogether
different, following a pattern first observed in exploratory simulations. The persistent case $\frac{1}{2}<H<1$ behaves as expected, as
illustrated in Figures \ref{H=.95} and \ref{H=.7}: the martingale
converges and has a strong limit, namely, a non-Gaussian variant process sharing the H\"{o}lder and Hausdorff properties of the persistent Fractional Brownian
Motion of exponent $H$ (Theorem~\ref{th2}). The antipersistent case $-\infty <H\leq \frac{1}{2}$
, to the contrary, defies facile extrapolation. As illustrated in Figures \ref
{H=.5}, \ref{H=.25} and \ref{H=-2}, the
martingale does not converge to zero but oscillates increasingly wildly.
However, there does exist an alternative normalization that yields a
nondegenerate weak limit for $n\rightarrow \infty $, namely, Wiener Brownian
Motion. The fact that the exponent $H$ no longer affects the limit is a surprising form of
what physicists call "universality", a
phenomenon that recalls the Gaussian central limit theorem. It expresses
that the rules of the cascade are destroyed in the limit, leaving only an
accumulation of noise. Our result provides new functional central limit
theorems. Moreover, the normalization factor in the special case $H=1/2$ is
atypical (see Theorem~\ref{th1'} and Corollary~\ref{FTLC'}). Since $H$ is no
longer a H\"{o}lder, negative values of $H$ create no paradox whatsoever.
To ensure a long-range power-law correlation function, exquisite long range order must be present in Fractional Brownian
cartoons. Under canonical randomization this order is robust in the case of persistence. But it is
not robust and is destroyed in the case of antipersistence, with a clear
critical point in the Wiener Brownian case.
This is novel but recalls an
observation concerning the Cauchy-L\'{e}vy stable exponent $\alpha $: it is
constrained to $0\leq \alpha <2$, that is, $H>\frac{1}{2}$. The non-random cartoon of one such process \cite{Mfinance} has a generator joining
the points $(0,0)$, $\left( \frac{1}{2},p\right) $, $\left( \frac{1}{2}
,1-p\right) $, $\left( 1,1\right) $. For all $p\in \left( 0,1\right) $, this
can be interpolated into a discontinuous function in which the
discontinuities $\Delta $ have a distribution of the form $\Pr \left\{
\Delta >\delta \right\} \sim \delta ^{-\alpha }$ with $\alpha =\frac{-1}{
\log _{2}p}$. This exponent can range as $\alpha \in \left( 0,\infty \right)
$. But let us, after $k$ stages, change the number of discontinuities of
given size and the number of continuous steps. Rather than fixed, let them be
random Poissonian with the same expectation. When $\alpha <2$, the process
converges to a stable one, but when $\alpha >2$, it explodes.
In any event, for $H<\frac{1}{2}$ the integral of the covariance of the
Fractional Brownian Motion vanishes. This demands very special correlation
properties that are easily destroyed by diverse manipulations. "Instability"
also characterizes for $H<\frac{1}{2}$ the limits of expressions of the form
$\Sigma G[\Delta B_{H}(t)]$, where $\Delta B_{H}(t)$ is the increment of a
Fractional Brownian Motion over $[t,t+1]$ and $G$ is a strongly non linear
transform \cite{Taqqu}. In this context, the fact that Kolmogorov's
turbulence takes on the unstable value $H=\frac{1}{3}$ may reward a close
look.
In the preceding first-approximation results, we perceived an analogy with
the usual sums of iid random variables $X$ of finite variance. The strong
law of large numbers tells us that the sample average, a normalized sum of $
n $ variables $X_{k}$, strongly converges to $\mathbb{E}(X)$. Then the central limit
theorem tells us that the discrepancy between $\mathbb{E}(X)$ and the $n$th normalized
sum can be subjected to a different normalization --- division
by $\sqrt{n}$ --- and after that converges
weakly to a Wiener Brownian motion. When $H\neq 1/2$, this situation generalizes to our canonical cascades, but with some major changes. Here, $\mathbb{E}(X)$
is replaced for $H>\frac{1}{2}$ by a variant fractional Brownian motion and
for $H<\frac{1}{2}$ by $0$. The rate of convergence for the remainder depends on $H$. It takes
the same analytic form $b^{n\left( \frac{1}{2}-H\right) }$ for all $H\neq 1/2$ but plays different roles: For $H<\frac{1
}{2}$ it compensates for boundless growth, and for $H>1/2$, for a decrease to $0$ (Theorems~\ref{th1} and \ref{th3}).
Our construction possesses a natural extension to the case $H=-\infty$ if we consider the processes $B^H_n/b^{-nH}$ and let $H$ tend to $-\infty$ for every $n\ge 1$. In this case, $p=0$ and $k$
recursions yield $b^{k}$ values of a random walk with symmetric correlated increments taking values in $\{-1,1\}$. After
division by $b^{\frac{k}{2}}$, there is a limit in distribution for the associated piecewise linear function, namely, the Wiener Brownian Motion (see Corollary~\ref{FTLC}). This is an unexpected extension of the usual result known for the classical random
walk obtained by coin tossing. Here, convergence is as "weak" as can be, since the
terms in the sequence are statistically independent.
Understanding bilateral cascades is helped by a step that has been
fruitful since the earliest canonical cascades \cite{M1}. It consists in
keeping $p$ constant, replacing the interval $[0,1]$ by the cube $[0,1]^{E}$, and varying the Euclidean dimension $E$ from a large value down. In all
cascade constructions, the proper distance is not Euclidean but ultrametric.
Hence, the nondegenerate versus degenerate alternative requires no new
argument: it proceeds just as on a linear grid of base $b^{E}$. The critical
$H=\frac{1}{2}$ now corresponds to $p_{crit}=b^{\frac{-E}{2}}$, which $
\searrow 0$ as $E\nearrow \infty $. In a high-dimensional space, a cascade
with the given $p$ yields a variant fractional Wiener signed measure but the
intersections of that measure by subspaces of small $E$
degenerate to an infinitesimal Wiener measure (this extension to higher
dimensions will be studied in a further work). Classically, this is also the
case in birth and death cascades with multiplier values 1 and 0. The novelty
present in the bilateral case is that the term "degenerate" takes a
different meaning.
The martingales considered in this paper are the very simplest special case of the following more general construction. As for positive canonical cascades, given an integer $b\ge 2$, the recursive process consists in associating with each $b$-adic subinterval $J$ of $[0,1]$ a random weight $W_J$, so that these weights are i.i.d. copies of a random variable $W$ whose expectation $\mathbb{E}(W)$ is defined and equal to $1/b$. Then, one gets a sequence of random piecewise linear functions $(F_n)_{n\ge 1}$ by imposing that $F_n(0)=0$ and that the increment of $F_n$ over the interval $J$ of the $n^{\mbox{{\small th}}}$ generation is equal to the product $W_{J_1}W_{J_2}\cdots W_{J_n}$, where $J_k$ is the $b$-adic interval of generation $k$ containing $J(=J_n)$. Observe that this construction falls in the category of infinite products of functions \cite{Fan,BM}. The family $\{F_n\}_{n\ge 1}$ forms a $\mathcal{C}([0,1])$-valued martingale. A sufficient condition for the sequence $F_n$ to converge almost surely uniformly is that the function $\tau_W(q)=q-1-\log_b\mathbb{E}(|bW|^q)$ takes a positive value at some $q\in (1,2]$. In the simplest case studied in this paper, $W$ belongs to $\{-b^{-H},b^{-H}\}$, and the critical value $H=1/2$ separates the domain $H\le 1/2$, for which $\tau_W(q)\le 0$ over $[1,2]$, from the domain $H\in (1/2,1]$, for which we always have $\tau_W(2)>0$. In the general case, when $\tau_W((1,2])\not\subset (-\infty,0]$, the limit of the signed canonical cascade is not a unifractal but a multifractal function -- to be studied in a further work.
The rest of the paper is organized as follows. This section ends with the definitions and notations used in the sequel. Also, the processes studied in this paper are defined more formally than in the previous paragraphs. Sections~\ref{statements} and~\ref{statements2} provide our main results for the cases $p\le b^{-1/2}$ (i.e. $H\le 1/2$) and $p>b^{-1/2}$ (i.e. $H>1/2$) respectively. The next three sections are devoted to the proofs of our main results.
\begin{center}
\begin{figure}
\caption{$B^H_k$ for $k=8,\ 12,\ 18,\ 27$ in the case $b=2$ and $H=0.95$: Fast strong convergence.}
\label{H=.95}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\caption{$B^H_k$ for $k=8,\ 12,\ 18,\ 27$ in the case $b=2$ and $H=0.7$. Strong convergence.}
\label{H=.7}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\caption{$B^H_k/\sigma_{1/2}\sqrt{k}$ in the case $H=0.5$.}
\label{H=.5}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\caption{$B^H_k/\sigma_H b^{k(1/2-H)}$ in the case $H=0.25$.}
\label{H=.25}
\end{figure}
\end{center}
\begin{center}
\begin{figure}
\caption{$B^H_k/\sigma_H b^{k(1/2-H)}$ in the case $H=-2$.}
\label{H=-2}
\end{figure}
\end{center}
\section{Construction of the martingale}
\subsection{Definitions and notations}\label{def}
$\ $
Let $b\geq 2$ be an integer.
For $n\ge 0$ let $\Sigma_n=\{0,\dots,b-1\}^{n}$, where $\Sigma_0$ contains
only the empty word denoted by $\emptyset$. Also let $\Sigma^*=\bigcup_{n\ge
0}\Sigma_n$ and $\Sigma=\{0,\dots,b-1\}^{\mathbb{N}^*}$. The concatenation
operation from $\Sigma^*\times (\Sigma^*\bigcup \Sigma)$ to $
(\Sigma^*\bigcup \Sigma)$ is denoted $\cdot$.
For $x\in \Sigma $ and $n\geq 1$, let $x|n$ be the projection of $x$ on $
\Sigma _{n}$ and $x|\infty=x$. Then for $n\geq 1$ and $w\in \Sigma _{n}$, we set $[w]=\{x\in
\Sigma :x|n=w\}$. Given two words of infinite length $x,y\in \Sigma $, one
defines $x\wedge y$ as $x|n_{0}$, where $n_{0}=\sup \{n\geq
1:x|n=y|n\} $. Adopt the convention that $\sup \emptyset =0$ and $x|0$
is the empty word $\emptyset $.
The length of any element $w$ of $\Sigma_n$ is equal to $n$ and is denoted
by $|w|$.
Denote by $\pi $ the mapping $x\in \Sigma \mapsto \sum_{k=1}^{\infty
}x_{k}b^{-k}$.
If $w\in \Sigma^*$, $t_w$ stands for the number $\sum_{k=1}^{|w|}w_kb^{-k}$
and $I_w$ stands for the closed $b$-adic interval $\pi ([w])$.
For $n\geq 0$ denote by $\mathcal{D}_{n}$ the set of $b$-adic numbers of the
$n^{\mbox{{\small th}}}$ generation in $[0,1]$. Also denote by $\mathcal{D}$ the set
of all $b$-adic numbers of $[0,1]$.
Denote by $\mathcal{C}([0,1])$ the space of real valued continuous
functions on $[0,1]$. Then, for $\alpha\in (0,1]$, $\mathcal{C}
^\alpha([0,1]) $ stands for the subspace of $\mathcal{C}([0,1])$ whose
elements are uniformly $\alpha$-H\"older continuous, i.e. $f\in \mathcal{C}^{\alpha}
([0,1])$ if and only if there exists $C>0$ such that $|f(t)-f(s)|\le
C|t-s|^\alpha$ for all $t,s\in [0,1]$.
If $f\in \mathcal{C}([0,1])$, denote its modulus of continuity by $\omega
(f,\cdot )$ (for $\delta \in \lbrack 0,1]$, $\omega (f,\delta )=\sup_{t,s\in
\lbrack 0,1],|t-s|\leq \delta }|f(t)-f(s)|$).
Recall that the pointwise H\"older exponent of $f$ at $t_0\in [0,1]$ is
defined by
\begin{equation*}
h_f(t_0)=\sup \left \{\alpha\ge 0: \ \exists \ P\in\mathbb{R}[X],\
\sup_{t\in [0,1]\setminus \{t_0\}}\frac{ |f(t)-f(t_0)-P(t)|}{|t-t_0|^\alpha}
<\infty\right \}.
\end{equation*}
If $I$ is a subinterval of $[0,1]$, $\Delta f(I)$ stands for $
|f(\sup(I))-f(\inf(I))|$.
\subsection{A construction of a recursive canonical cascade with values $\pm
1$.}
$\ $
\noindent
Let $(\Omega ,\mathcal{A},\mathbb{P})$ be the probability space on which the
random variables in the sequel are defined. If $Y$ is a random variable, we
shall denote by $\mathcal{L}(Y)$ its probability distribution.
For $0\le k\le b-1$ let $S_k(t)=b^{-1}(t+k)$.
If $H\in \lbrack -\infty ,1]$, define the probability measure $\pi _{b,H}=p_{b,H}^{+}\,\delta
_{1}+p_{b,H}^{-}\,\delta _{-1}$, where
\begin{equation*}
p_{b,H}^+=\frac{1+b^{H-1}}{2}\quad \mbox{and} \quad p_{b,H}^-=1-p_{b,H}^+,
\end{equation*}
with the convention $b^{-\infty}=0$.
For all $H\in \lbrack -\infty ,1]$, let $(\epsilon ^{H}(w))_{w\in \Sigma
^{\ast }}$ be a sequence of mutually independent random variables of common
probability distribution $\pi _{b,H}$. When $H$ is fixed in the sequel,
sometimes we simply write $\epsilon (w)$ for $\epsilon ^{H}(w)$.
If $u\in \lbrack 0,1]\setminus \mathcal{D}$, we identify $u$ with the unique
element $\widetilde{u}\in \Sigma $ such that $u=\pi (\widetilde{u})$. Then,
for every $n\geq 1$, if $H\in (-\infty ,1]$ we consider on $[0,1]$ the continuous
piecewise linear map $B^H_n$ over the $b$-adic intervals of the $n^{\mbox{\small th}}$ generation
such that $B^H_n(0)=0$ and for $w \in \Sigma_n$ the increment of $B^H_n$ over $I_w$ is equal to $\epsilon^H(w|1) \cdots \epsilon^H (w|n) b^{-nH}$, i.e.
\begin{equation*}
B^H_n (t)=b^{-nH}\int_0^t b^n \epsilon^H(u|1) \cdots \epsilon^H (u|n)\, du.
\end{equation*}
We leave it to the reader to verify that the sequence $(B^H_n)_{n\ge 1}$ is a $\mathcal{C}([0,1])$-valued martingale
with respect to the filtration $\big (\sigma(\epsilon (w): \ w\in \Sigma_n)
\big )_{n\ge 1}$.
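A possible simulation scheme, in the spirit of the figures shown in the Introduction, is sketched below; the function name \texttt{sample\_BHn} and the use of NumPy are our own choices, not taken from the simulations mentioned above. It samples $B^H_n$ at the $b$-adic points $k/b^n$ by refining the products $\epsilon^H(w|1)\cdots \epsilon^H(w|n)$ one generation at a time.
\begin{verbatim}
import numpy as np

def sample_BHn(b, H, n, seed=None):
    # values of B^H_n at the points k/b^n, k = 0, ..., b^n
    rng = np.random.default_rng(seed)
    p_plus = (1.0 + b ** (H - 1.0)) / 2.0          # P(epsilon = +1)
    prod = np.ones(1)                              # epsilon-products, one per cell
    for _ in range(n):
        eps = rng.choice([1.0, -1.0], size=(prod.size, b),
                         p=[p_plus, 1.0 - p_plus])
        prod = (prod[:, None] * eps).ravel()       # refine each cell into b children
    incr = prod * b ** (-n * H)                    # increments over generation-n intervals
    return np.concatenate(([0.0], np.cumsum(incr)))
\end{verbatim}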
More generally for all $w\in \Sigma^*$ let
\begin{equation*}
B^H_n(w) (t)=b^{-nH}\int_0^t b^n \epsilon^H(w\cdot u|1) \cdots \epsilon^H
(w\cdot u|n)\, du.
\end{equation*}
Of course, $B^H_n(\emptyset)=B^H_n$ almost surely.
For all $0\le k\le b-1$ and $t\in I_k=[k/b,(k+1)/b]$ we have the relation
\begin{equation} \label{fonc}
B^H_n(t)-B^H_n(k/b)= \epsilon^H(k) b^{-H} B_{n-1}^H(k)\left (S_{k}^{-1}
(t)\right )
\end{equation}
and more generally for all $w\in\Sigma^*$, $t\in I_w$ and $n\ge |w|$
\begin{equation} \label{fonc0}
B^H_n(t)-B^H_n(t_w)= \epsilon^H (w|1) \cdots \epsilon^H (w|n) b^{-nH}
B_{n-|w|}^H(w)\left (S_{w_n}^{-1}\circ \cdots \circ S_{w_1}^{-1} (t)\right ).
\end{equation}
For $n\geq 0$ and $w\in \Sigma ^{\ast }$ we denote by $Z_{n}(w)$ the random
variable $B_{n}^{H}(w)(1)$, with the convention $B_{0}^{H}(w)(1)=1$. When $
w=\emptyset $, we simply write $Z_{n}$ for $Z_{n}(w)$. The relation (\ref
{fonc}) yields for every $n\geq 1$
\begin{equation} \label{fonc2}
Z_{n}=\sum_{k=0}^{b-1}b^{-H}\epsilon _{k}Z_{n-1}(k),
\end{equation}
where the random variables $\epsilon _{0},\dots ,\epsilon
_{b-1},Z_{n-1}(0),\dots ,Z_{n-1}(b-1)$ are mutually independent. Moreover, $
\mathcal{L}(\epsilon _{k})=\pi _{b,H}$ and $\mathcal{L}(Z_{n-1}(k))=\mathcal{
L}(Z_{n-1})$ for all $0\leq k\leq b-1$. Relation (\ref{fonc2}), which will
be useful in the sequel, is familiar from the positive cascade case \cite{M1}.
Finally, for $H\in [-\infty,1/2]$ let
\begin{equation*}
\sigma _{H}=
\begin{cases}
1 & \mbox {if $H=-\infty$} \\
\displaystyle\sqrt{\frac{b-1}{b^{2-2H}-b}+1} &
\mbox {if $H\in
(-\infty,1/2)$} \\
\displaystyle\sqrt{\frac{b-1}{b}} & \mbox {if $H=1/2$}
\end{cases}.
\end{equation*}
\section{Weak convergence of the normalized martingale $B^H_n$ to Wiener Brownian motion, independently of $b$ and $H$ in the antipersistent case $H \leq 1/2$}~\label{statements}
\begin{theorem}
\label{th1} Let $H\in (-\infty,1/2)$. The sequence $\left (\mathcal{L}\big(B_{n}^{H}/\sigma_H b^{n(1/2-H)}\big)\right )_{n\geq 1}
$ converges weakly to the Wiener measure as $n$ goes to $\infty $.
\end{theorem}
\begin{theorem}\label{th1'}
The sequence $\left (\mathcal{L}\big(B_{n}^{1/2}/\sigma_{1/2}\sqrt{n}\big)\right )_{n\geq 1}
$ converges weakly to the Wiener measure as $n$ goes to $\infty $.
\end{theorem}
\begin{remark}\label{dege}
(1) When $H<1/2$, Theorem~\ref{th1} implies that with probability 1, $
\limsup_{n\rightarrow \infty }\frac{\Vert B_{n}^{H}\Vert _{\infty }}{b^{n(1/2-H)}}
>0$. Thus the martingale $B^H_n$ converges strongly neither to a nontrivial limit in $\mathcal{C}([0,1])$ nor to $0$. This fact deserves to be called degeneracy. This shows a strong difference with positive canonical cascades, for which degeneracy means uniform convergence to 0 (\cite{M1,KP}). The same remarks hold when $H=1/2$.
(2) For $H\le 1/2$ define $X^H_n=B^H_n(1)/\sigma_H b^{n(1/2-H)}$ if $H< 1/2$ and $X^H_n=B_{n}^{1/2}(1)/\sigma_{1/2}\sqrt{n}$ otherwise. The reader can check that $X^H_n$ is not a Cauchy sequence in $L^2$ while the $L^2$ norm of $X^H_n$ converges to 1 as $n$ goes to $\infty$. This implies that $X_n^H$ cannot converge almost surely to a standard normal random variable. Consequently, Theorems~\ref{th1} and \ref{th1'} cannot be strengthened into results of almost sure convergence.
\end{remark}
\section{Restatement of Theorems~\ref{th1} and ~\ref{th1'} as functional CLT with atypical normalization when $H=1/2$}\label{TLC}$\ $
If $H\in [-\infty,1/2]$, $n\geq 1$ and $0\leq k<b^{n}$ and $w$ is the unique element of $\Sigma
_{n}$ such that $t_{w}=kb^{-n}$ let
\begin{equation*}
\xi _{k}^{(n,H)}=\prod_{j=1}^{n}\epsilon ^{H}(w|j).
\end{equation*}
For a given $n\ge 1$, the random variables $\xi
_{k}^{(n,H)}$, $0\le k<b^n$, are identically distributed, and they take values in $\{-1,1\}$.
Also, consider the random walk $\big (S_{r}^{(n,H)}\big)_{0\le r<b^n}$ defined by
\begin{equation*}
S_{r}^{(n,H)}=\sum_{k=0}^{r-1}\xi _{k}^{(n,H)}
\end{equation*}
(with the convention $S_{-1}^{(n,H)}=0$).
\begin{corollary}[Functional central limit theorem]
\label{FTLC} Let $H\in [-\infty,1/2)$ and for $n\ge 1$ and $t\in [0,1]$ define $X_{n}^{H}(t)=\displaystyle\frac{1}{\sigma _{H}\sqrt{b^n}}\left [
S_{[b^{n}t]}^{(n,H)}+(b^{n}t-[b^{n}t])\xi _{\lbrack b^{n}t]}^{(n,H)} \right ]$. The sequence $\mathcal{L}(
X^H_n)_{n\ge 1}$ converges weakly to the Wiener measure as $n$ tends to $
\infty$.
\end{corollary}
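For numerical experiments, $X^H_n$ can be evaluated at the points $r/b^n$ by the following companion sketch (Python, with hypothetical names; the array \texttt{prod} is assumed to contain the products $\xi_k^{(n,H)}$ in $b$-adic order, for instance the array built inside \texttt{sample\_BHn} above before the multiplication by $b^{-nH}$, and $H<1/2$).
\begin{verbatim}
import numpy as np

def normalized_walk(prod, b, H):
    # values of X^H_n at the points r/b^n, r = 0, ..., b^n, for H < 1/2
    sigma_H = np.sqrt((b - 1.0) / (b ** (2.0 - 2.0 * H) - b) + 1.0)
    S = np.concatenate(([0.0], np.cumsum(prod)))   # S_r^{(n,H)}
    return S / (sigma_H * np.sqrt(prod.size))      # prod.size = b^n
\end{verbatim}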
\begin{corollary}[Functional central limit theorem]
\label{FTLC'} For $n\ge 1$ and $t\in [0,1]$ define $X_{n}^{1/2}(t)= \displaystyle\frac{1}{\sigma _{1/2}\sqrt{nb^{n}}}
\left [S_{[b^{n}t]}^{(n,1/2)}+(b^{n}t-[b^{n}t])\xi _{\lbrack b^{n}t]}^{(n,1/2)} \right ]$. The sequence $\mathcal{L}(
X^{1/2}_n)_{n\ge 1}$ converges weakly to the Wiener measure as $n$ tends to $
\infty$.
\end{corollary}
\begin{remark}
(1) When $H<1/2$, the stochastic process $X_{n}^{H}$ takes formally the
same form as the processes considered in central limit theorems for weakly
dependent sequences (see \cite{Bil}, Ch. 19 or \cite{Doukhan}). The main difference is that in
the process we consider, the random variables $\xi _{k}^{(n,H)}$ are highly
correlated. Nevertheless, the same asymptotic behavior (weak convergence to
the Wiener measure) holds.
(2) In the case $H=1/2$, the normalizing factor takes an atypical form.
\end{remark}
\begin{remark}
When $H=-\infty$, it is not
difficult to verify that the conclusions of Propositions~\ref{prop1}--\ref
{prop3} hold for $(X^{-\infty}_n)_{n\ge 1}$ instead of the sequence $(B^H_n/a^H_n)_{n\ge
1}$ considered in Section~\ref{proofth1} by using the same approach. Consequently, the proof
is left to the reader in this case.
\end{remark}
\begin{remark}
We mention that functional central limit theorems associated with positive canonical cascades have been established in \cite{LRR} in a very different spirit. There, a square integrable random weight $W$ is fixed which generates a canonical multiplicative cascade in base $b$ and its associated sequence of increasing functions $F^{(b)}_n$ on $[0,1]$ converging to a function $F^{(b)}_\infty$. For each $ n\in \mathbb{N}^*\cup\{\infty\}$ the authors establish a functional central limit theorem for $\big (F_n^{(b)}(t)-t\big )/\sqrt {b}$ as the base $b$ tends to $\infty$: $\big (F_n^{(b)}(t)-t\big )/\sqrt {b}$ converges in law to a multiple of the Brownian motion. As letting $b$ tend to $\infty$ weakens the correlations between the increments of $F^{(b)}$, the existence of such a weak limit is natural.
\end{remark}
\section{Strong convergence of the martingale in the persistent case ${1}/{2}<H\le 1$}~\label{statements2}
\begin{theorem}\label{th2}
Suppose that $H\in (1/2,1]$. The sequence $(B_{n}^{H})_{n\geq 1}$
is a martingale that converges almost surely and in $L^{2}$ norm to a
continuous function $B^{H}$. Moreover, with probability 1,
\begin{enumerate}
\item $B^{H}$ belongs to $\bigcap_{H'<H}\mathcal{C}^{H'}([0,1])$ and it has everywhere a pointwise H\"{o}lder exponent equal to $H$.
\item The Hausdorff and box dimensions of the graph of $B^{H}$ are both equal to $2-H$.
\end{enumerate}
\end{theorem}
\begin{remark}
The limit process $B^H$ is not Gaussian since a computation shows that the third moment of the centered random variable $B^H(1)-1$ does not vanish.
Notice that the case $H=1$ yields the deterministic function $B^1(t)=t$.
\end{remark}
\section{Functional CLT associated with the strong convergence case $1/2<H<1$}
It will be shown in Section~\ref{strong} that $\mathbb{E}(B^H(1)^2)<\infty$ if $H>1/2$. Consequently the number $\sigma_H=\sqrt{\mathbb{E}(B^H(1)^2)-1}$ is positive and finite when $H\in (1/2,1)$.
The following Theorems~\ref{th3} and \ref{th4} must be viewed as counterparts of Theorem~\ref{th1} and Corollary~\ref{FTLC}.
\begin{theorem}\label{th3}
Let $H\in (1/2,1)$. The sequence $\left (\big(B^H-B^H_n\big )/\sigma_H b^{n(1/2-H)}\right )_{n\ge 1}$ converges weakly to the Wiener measure as $n$ tends to $\infty$.
\end{theorem}
If $H\in (1/2,1)$, for every $w\in \Sigma^*$ denote by $B^H(w)$ the almost sure limit of $B^H_n(w)$. Also, if $n\geq 1$ and $0\leq k<b^{n}$ and $w$ is the unique element of $\Sigma
_{n}$ such that $t_{w}=kb^{-n}$ let
$$
\widetilde \xi _{k}^{(n,H)}=\big (B^H(w)(1)-1\big )\prod_{j=1}^{n}\epsilon ^{H}(w|j).
$$
Then define $\displaystyle S_{p}^{(n,H)}=\sum_{k=0}^{p-1}\widetilde \xi _{k}^{(n,H)}$ for $0\le p<b^n$ and finally consider on $[0,1]$ the piecewise linear function
$$
X^H_n(t)=\frac{1}{\sigma _{H}\sqrt{b^{n}}}\left [
S_{[b^{n}t]}^{(n,H)}+(b^{n}t-[b^{n}t])\widetilde \xi _{\lbrack b^{n}t]}^{(n,H)}\right ].
$$
\begin{theorem}\label{th4}
Let $H\in (1/2,1)$. The sequence $\mathcal{L}(
X^H_n)_{n\ge 1}$ converges weakly to the Wiener measure as $n$ tends to $
\infty$.
\end{theorem}
\begin{remark}
Theorem~\ref{th3} implies that $(B^H-B^H_n)(1)/\sigma_H b^{n(1/2-H)}$ converges in law to a $\mathcal{N}(0,1)$ law. This result is of the same nature as Proposition 4.1 in \cite{OW} which deals with central limits theorems associated with non negative canonical cascades. The technique used in \cite{OW} would work to establish the convergence of $\mathcal{L}\big ((B^H-B^H_n )(1)/\sigma_H b^{n(1/2-H)}\big )$. It uses Lindeberg's theorem, while we exploit the functional equation (\ref{fonc2}).
\end{remark}
\section{Proof of Theorems~\ref{th1} and \ref{th1'} and their corollaries concerning the case $H\leq \frac{1}{2}$, i.e., $p\leq b^{-1/2}$}\label{proofth1}
Theorems~\ref{th1} and \ref{th1'} follow from the next three propositions. In fact we are going to show that the sequence $\left (\mathcal{L}\big (B^H_n/a^H_n\big )\right )_{n\ge 1}$ converges weakly to the Wiener measure, where for $n\ge 1$, $\displaystyle a_{n}^{H}=
\displaystyle\sqrt{\left( \frac{b-1}{b}+b^{1-2H}-1\right) \frac{b^{n(1-2H)}-1
}{b^{1-2H}-1}}$ if $H<1/2$ and $\displaystyle a^{H}_n=\sqrt{\frac{b-1}{b}n}$ if $H=1/2$ (observe that, by L'Hospital's rule, $a_{n}^{H}$ converges to $a_{n}^{1/2}$ as $H\nearrow 1/2$). It is easily seen that this will imply Theorems~\ref{th1} and \ref{th1'}, and hence Corollaries~\ref{FTLC} and \ref{FTLC'}.
When $H<1/2$, the normalization by $a_n^H$ is more practical to use than $\sigma_H b^{n(1/2-H)}$ because it appears naturally in the study of the asymptotic behavior of $B^H_n$.
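To make the limit invoked above explicit, write $x=b^{1-2H}$, so that for $H<1/2$,
\begin{equation*}
\big (a_n^H\big )^2=\Big (\frac{b-1}{b}+x-1\Big )\frac{x^{n}-1}{x-1}=\Big (\frac{b-1}{b}+x-1\Big )\sum_{j=0}^{n-1}x^{j};
\end{equation*}
as $H\nearrow 1/2$, i.e. $x\to 1$, the right-hand side tends to $\frac{b-1}{b}\,n=\big (a_n^{1/2}\big )^2$, which is the elementary computation behind the appeal to L'Hospital's rule.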
\begin{proposition}
\label{prop1} Let $H\in (-\infty,1/2]$. The sequence $\big (\mathcal{L}
(B^H_n(1)/a^H_n)\big )_{n\ge 1}$ converges to $\mathcal{N}(0,1)$ as $n$ goes
to $\infty$.
\end{proposition}
\begin{proof}
Let $Y_n=B^H_n(1)/a^H_n=Z_n/a^H_n$. It is enough to show that
\begin{enumerate}
\item for every $p\ge 0$ one has the property $(\mathcal{P}_{2p})$: $
M_{2p}=\lim_{n\to\infty} \mathbb{E}(Y_n^{2p})$ exists. Moreover $M_2=1$;
\item for every $p\ge 0$ one has the property $(\mathcal{P}_{2p+1})$: $
\lim_{n\to\infty} \mathbb{E}(Y_n^{2p+1})=0$;
\item the moments of even orders obey the following induction relation valid
for $p\ge 2$:
\begin{equation*}
M_{2p}=\big (b^p-b\big )^{-1}\sum_{\substack{ 0\le
\alpha_0,\dots,\alpha_{b-1}<p \\ \sum_{k=0}^{b-1}\alpha_k=p}} \frac{(2p)!}{
(2\alpha_0)!\cdots (2\alpha_{b-1})!} \prod_{k=0}^{b-1} M_{2\alpha_k}.
\end{equation*}
\end{enumerate}
Indeed, (1) will ensure that the sequence of probability distributions $
\mathcal{L}(Y_n)$ is tight. Moreover, it is easy to verify that a $\mathcal{N
}(0,1)$ random variable $N$ is such that its moments of even orders satisfy
the same relation as the numbers $M_{2p}$, $p\ge 1$, defined by $M_2=1$ and
the induction relation (3) (to see this, write $N$ as the sum of $b$
independent $\mathcal{N}(0,b^{-1/2})$ random variables). Consequently, since
the law $\mathcal{N}(0,1)$ is characterized by its moments, $Y_n$ must
converge in law to $\mathcal{N}(0,1)$.
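As a quick consistency check of the induction relation (3) against the Gaussian moments, one may take $p=2$: the constraint $\alpha_k<2$ forces the admissible tuples $(\alpha_0,\dots,\alpha_{b-1})$ to have exactly two entries equal to $1$ and the others equal to $0$, so that, with $M_0=M_2=1$,
\begin{equation*}
M_4=\big (b^2-b\big )^{-1}\binom{b}{2}\frac{4!}{2!\,2!}=\big (b^2-b\big )^{-1}\,3b(b-1)=3,
\end{equation*}
which is indeed the fourth moment of an $\mathcal{N}(0,1)$ random variable.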
Let us establish (1), (2) and (3).
Let us take the expectation of the square of (\ref{fonc2}), using the fact that $\mathbb{E}(Z_n)=1$; this explains the introduction of the normalization factor $a_n^H$.
We have (notice that $\mathbb{E}(\epsilon_0)=b^{H-1}$)
\begin{eqnarray} \label{m2}
\mathbb{E}(Z_n^2)&=& b^{1-2H}\mathbb{E}(Z_{n-1}^2)+ b(b-1) b^{-2H}\left (
\mathbb{E}(\epsilon_0)\right )^2 \\
&=&b^{1-2H}\mathbb{E}(Z_{n-1}^2)+\frac{b-1}{b}.
\end{eqnarray}
This yields $\mathbb{E}(Z_n^2)=(a_n^H)^2+1$. In particular, the limit $M_2$
is well defined and equals 1. Moreover, $\lim_{n\to\infty}\mathbb{E}(Y_n)=0$
since $\mathbb{E}(Z_n)=1$ and $\lim_{n\to\infty} a_n^H=\infty$.
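For the reader's convenience, the affine recursion (\ref{m2}) can be solved explicitly. Writing $x=b^{1-2H}$, an immediate induction gives $\mathbb{E}(Z_n^2)=x^{n}\,\mathbb{E}(Z_0^2)+\frac{b-1}{b}\sum_{j=0}^{n-1}x^{j}$, and with the normalization $\mathbb{E}(Z_0^2)=1$ (the value consistent with the identity stated just above) one gets
\begin{equation*}
\mathbb{E}(Z_n^2)=x^{n}+\frac{b-1}{b}\,\frac{x^{n}-1}{x-1}=\Big (\frac{b-1}{b}+x-1\Big )\frac{x^{n}-1}{x-1}+1=(a_n^H)^2+1,
\end{equation*}
the case $H=1/2$ (i.e. $x=1$) being obtained by replacing $\frac{x^{n}-1}{x-1}$ with $n$.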
Now let $q$ be an integer $\ge 3$. Taking the expectation of the $q$-th power of (\ref{fonc}) yields
\begin{equation} \label{Mq}
\mathbb{E}(Z_{n+1}^q)=b^{1-qH}\mathbb{E}(\epsilon_0^q)\mathbb{E}
(Z_{n}^q)+b^{-Hq}\sum_{\substack{ 0\le \beta_0,\dots,\beta_{b-1}<q \\
\sum_{k=0}^{b-1}\beta_k=q}} \gamma_{(\beta_0,\dots, \beta_{b-1})}
\prod_{k=0}^{b-1} \mathbb{E}(\epsilon_0^{\beta_k})\mathbb{E}
(Z_{n}^{\beta_k}),
\end{equation}
where $\displaystyle \gamma_{\beta_0,\dots, \beta_{b-1}} =\frac{q!}{(\beta_0)!\cdots
(\beta_{b-1})!}$.
Let us denote $\mathbb{E}(Y_n^q)$ by $M_q^{(n)}$, the set $\{(\beta_0,\dots,
\beta_{b-1})\in\mathbb{N}^b: 0\le \beta_0,\dots,\beta_{b-1}<q,\
\sum_{k=0}^{b-1}\beta_k=q\}$ by $S_q$, the ratio $\displaystyle\sqrt{\frac{n
}{n +1}}$ by $r^{(1/2)}_n$ and the ratio $\displaystyle \sqrt{\frac{b^{n(1-2H)}-1}{
b^{(n+1)(1-2H)}-1}}$ by $r_n^{(H)}$ when $H<1/2$.
Now, using that $\mathbb{E}(\epsilon_0^q)=b^{H-1}$ or $1$ depending on whether $q$ is odd or even, (\ref{Mq}) yields for $H\le 1/2$
\begin{equation} \label{H}
M_q^{(n+1)}=
\begin{cases}
\displaystyle (r^{(H)}_n)^{q}\left
(b^{-(q-1)H}M_q^{(n)}+b^{-qH}\sum_{\beta\in S_q}\gamma_{ \beta}
\prod_{k=0}^{b-1}\mathbb{E}(\epsilon_0^{\beta_k}) M_{\beta_k}^{(n)}\right )
& \mbox{if $q$ is odd}, \\
\displaystyle (r^{(H)}_n)^{q}\left
(b^{1-qH}M_q^{(n)}+b^{-qH}\sum_{\beta\in S_q}\gamma_{ \beta}
\prod_{k=0}^{b-1} \mathbb{E}(\epsilon_0^{\beta_k})M_{\beta_k}^{(n)}\right )
& \mbox{if $q$ is even}
\end{cases}
.
\end{equation}
We show by induction that $\big ((\mathcal{P}_{2p-1}),(\mathcal{P}_{2p})
\big
)$ holds for $p\ge 1$, and we deduce the relation (3).
We have shown that $\big ((\mathcal{P}_{1}),(\mathcal{P}_{2})\big )$ holds.
Suppose that $\big ((\mathcal{P}_{2k-1}),(\mathcal{P}_{2k})\big )$ holds for
$1\le k\le p-1$, with $p\ge 2$. In particular, $M_{\beta_k}^{(n)}$ goes to 0
as $n$ goes to $\infty$ if $\beta_k$ is an odd integer belonging to $[1,
2p-3]$.
Suppose $H=1/2$ and simply denote $r_n^{(1/2)}$ by $r_n$. Every element of the set $S_{2p-1}$ must contain an odd
component. Due to our induction assumption, this implies that in the
relation (\ref{H}), the term $r_n^{2p-1}b^{-(2p-1)/2}\sum_{\beta\in
S_{2p-1}}\gamma_{\beta} \prod_{k=0}^{b-1}\mathbb{E}(\epsilon_0^{\beta_k})
M_{\beta_k}^{(n)}$ on the right-hand side of the expression for $M_{2p-1}^{(n+1)}$ tends to 0 as $n\to\infty$. This yields
\begin{equation*}
M^{(n+1)}_{2p-1}=r_n^{2p-1}b^{-(p-1)}M^{(n)}_{2p-1}+o(1)
\end{equation*}
as $n\to\infty$. Since $r_n^{2p-1}b^{-(p-1)}\le b^{1-p}<1$, this yields $\lim_{n\to\infty}M^{(n)}_{2p-1}=0$, that is to say $(\mathcal{P}_{2p-1})$.
Now, the same argument as above shows that on the right-hand side of the expression for $M^{(n+1)}_{2p}$, we have
\begin{equation*}
\lim_{n\to\infty} \sum_{\beta\in S_{2p}}\gamma_{ \beta} \prod_{k=0}^{b-1}
\mathbb{E}(\epsilon_0^{\beta_k})M_{\beta_k}^{(n)}=\sum_{\substack{ \beta\in
S_{2p} \\ \beta_k \mbox{ even}}}\gamma_{ \beta} \prod_{k=0}^{b-1}
M_{\beta_k}.
\end{equation*}
Denote by $L$ the right hand side of the above relation and define $
L'=(b^p-b)^{-1} L$. By using (\ref{H}) we deduce from the previous
lines that
\begin{equation} \label{rel}
M^{(n+1)}_{2p}=r_n^{2p}b^{1-p}M^{(n)}_{2p}+b^{-p}L+o(1).
\end{equation}
Then by using that $r_n\to 1$ as $n\to\infty$ and the relation $L'=
b^{1-p}L'+b^{-p} L$ we obtain
\begin{equation*}
M^{(n+1)}_{2p}-L'= r_n^{2p}b^{1-p}(M^{(n)}_{2p}-L')+o(1).
\end{equation*}
This yields both $(\mathcal{P}_{2p})$ and (3) since $r_n^{2p}b^{1-p}\sim b^{1-p}<1$ as $n\to\infty$.
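For completeness, the identity $L'=b^{1-p}L'+b^{-p}L$ used above is a direct computation:
\begin{equation*}
b^{1-p}L'+b^{-p}L=\frac{b^{1-p}}{b^p-b}\,L+\frac{1}{b^{p}}\,L=\frac{b+(b^p-b)}{b^{p}(b^p-b)}\,L=\frac{L}{b^p-b}=L'.
\end{equation*}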
Now suppose that $H<1/2$. Almost the same arguments as when $H=1/2$ yield
the conclusion. The only change is that we have to perform one more
computation to obtain a relation equivalent to (\ref{rel}). Due to the
expression of $r^{(H) }_n$, we have
\begin{eqnarray*}
M^{(n+1)}_{2p}&=&b^{-(1-2H)p}\left (b^{(1-2pH)}M_{2p}^{(n)}+b^{-2pH} L\right
)+o(1) \\
&=& b^{1-p}M_{2p}^{(n)}+b^{-p} L+o(1).
\end{eqnarray*}
\end{proof}
\begin{proposition}
\label{prop2} Let $H\in (-\infty,1/2]$. Let $(W(t))_{t\in [0,1]}$ be a standard Brownian motion. For every $p\ge 1$, the
probability distribution $\mathcal{L}\bigg(\big (B^H_n(t)/a_n^H\big )_{t\in
\mathcal{D}_p}\bigg )$ converges to $\mathcal{L}\big ((W(t))_{t\in\mathcal{D}
_p}\big )$ as $n\to\infty$.
\end{proposition}
\begin{proof}
Let $p\ge 1$ and denote by $0=t_0<t_1<\dots< t_{b^p}=1$ the elements of $\mathcal{D}_p$. Also, simply denote $B^H_n(t)/a_n^H$ by $\mathcal{Y}_n(t)$ and by $\phi_n$ the characteristic function of $\mathcal{Y}_n(1)$ ($\mathcal{Y}_n(1)$ is nothing but the random variable $Y_n$ studied in Proposition~\ref{prop1}). By using the fact that in (\ref{fonc0}) the functions $B^H_{n-|w|}(w)$ are mutually independent, with the same distribution as $a_{n-|w|}^H \mathcal{Y}_{n-|w|}(1)$, and also independent of the products $\epsilon(w|1) \cdots \epsilon (w|n) b^{-nH}$, we obtain that for $(u_w)_{w\in \Sigma_p}\in \mathbb{R}^{b^p}$ and $n>p$
\begin{equation*}
\mathbb{E}\left ( e^{i\sum_{w\in\Sigma_p}u_w \big (\mathcal{Y}_n
(t_w+b^{-p})-\mathcal{Y}_n(t_w)\big )}\right )= \mathbb{E}\prod_{w\in
\Sigma_p}\phi_{n-p}\left (u_w b^{-pH} \frac{a_{n-p}^H}{a_n^H}
\prod_{k=1}^p\epsilon(w|k) \right ).
\end{equation*}
It follows from Proposition~\ref{prop1} that $\phi_{n-p}(t)$ goes to $
e^{-t^2/2}$ as $n$ goes to $\infty$. Moreover, $b^{-pH} a_{n-p}^H/a_n^H$
tends to $b^{-p/2}$ as $n$ goes to $\infty$. Thus, applying the dominated
convergence theorem yields
\begin{eqnarray*}
\lim_{n\to\infty} \mathbb{E}\left ( e^{i\sum_{w\in\Sigma_p}u_w \big (
\mathcal{Y}_n (t_w+b^{-p})-\mathcal{Y}_n(t_w)\big )}\right )&=&\mathbb{E}
\left (\prod_{w\in \Sigma_p} \exp\Big (-2^{-1}u_w^2
b^{-p}\prod_{k=1}^p\epsilon(w|k) ^2\Big)\right ) \\
&=& \prod_{w\in \Sigma_p}\exp\big(-u_w^2 b^{-p}/2\big )
\end{eqnarray*}
since the $\epsilon(w|k)$ take values in $\{-1,1\}$. This yields the
conclusion.
\end{proof}
\begin{proposition}
\label{prop3} Let $H\in (-\infty,1/2]$. The sequence $\big (\mathcal{L}
(B^H_n/a_n^H)\big )_{n\ge 1}$ of probability distributions on $C([0,1])$ is
tight.
\end{proposition}
\begin{proof}
Let us denote by $\mathcal{Y}_n$ the process $B^H_n/a_n^H$ as in the proof of Proposition~\ref{prop2}. By Theorem 7.3
of \cite{Bil}, since $\mathcal{Y}_n(0)=0$ almost surely for all $n\ge 1$, it
is enough to show that for each positive $\varepsilon$
\begin{equation} \label{tighness}
\lim_{\delta\to 0}\limsup_{n\to\infty} \mathbb{P}\big (\omega(\mathcal{Y}
_n,\delta)\ge \varepsilon\big )=0
\end{equation}
(the modulus of continuity $\omega(f,\cdot)$ of a continuous function $f$ is defined in Section~\ref{def}).
Fix $H^{\prime }\in (0,1/2)$ and $K$ a positive integer such that $
2K(1/2-H^{\prime })>1$. It follows from the proof of Proposition~\ref{prop1}
that the sequence $\left (\mathbb{E}\left( \mathcal{Y}_{m}(1)^{2K}\right
)
\right )_{m\ge 1}$ is bounded by a constant $C_K$. Moreover, by construction
there exists a constant $C_{b,H}$ such that for all $n>p\ge 1$ we have $
\frac{a_{n-p}^H}{a_n^H}\le C_{b,H} b^{p(H-1/2)}$. By using (\ref{fonc0}) and
Markov's inequality we obtain that for $n\ge 2$, $1\le p<n$ and $0\le k\le
b^p-1$
\begin{eqnarray*}
&& \mathbb{P}\left ( \left |\mathcal{Y}_n\big ((k+1)b^{-p}\big)- \mathcal{Y}_n\big (kb^{-p}\big )
\right |>b^{-pH^{\prime }}\right ) \\
& \le & b^{2KpH^{\prime }}\mathbb{E}\left (\left |\mathcal{Y}_n\big ((k+1)b^{-p}\big)-
\mathcal{Y}_n\big (kb^{-p}\big )\right |^{2K}\right ) \\
&\le & b^{2K(H^{\prime }-H)p} \left (\frac{a_{n-p}^H}{a_n^H}\right )^{2K}
\mathbb{E}(\mathcal{Y}_{n-p}(1)^{2K}) \\
&\le & C_KC_{b,H}^{2K}b^{2K(H^{\prime }-1/2)p}.
\end{eqnarray*}
Now let $\alpha_p=C_K C_{b,H}^{2K}b^{p(1+2K(H^{\prime }-1/2))}$. By our choice of $H^{\prime }$ and $K$ the series $\sum_{p\ge 1}\alpha_p$ converges. Moreover,
since for $1\le p<n$ the $b$-adic increments of generation $p$ of $\mathcal{
Y}_n$ have the same probability distribution, we have
\begin{equation*}
\mathbb{P}\left (\exists \ 0\le k< b^{p},\ \left |\mathcal{Y}_n\big (
(k+1)b^{-p}\big)- \mathcal{Y}_n\big (kb^{-p}\big )\right |>b^{-pH^{\prime
}}\right )\le \alpha_p.
\end{equation*}
On the other hand, if $p\ge n$ and $0\le k< b^{p}$, by construction since
there exists a constant $c_{b,H}<1$ such that $a_n^H\ge c_{b,H}b^{n(1/2-H)}$
we have
\begin{multline*}
\left |\mathcal{Y}_n\big ((k+1)b^{-p}\big)-\mathcal{Y}_n\big (kb^{-p}\big )
\right |=\frac{b^{-n(H-1)}b^{-p}}{a_n^H} \le \frac{b^{n/2-p}}{c_{b,H}}
\le \frac{b^{-p/2}}{c_{b,H}}
\le \frac{b^{-pH^{\prime }}}{c_{b,H}}.
\end{multline*}
Let $A_p$ denote the remainder $\sum_{j\ge p}\alpha_j$. We deduce from the
previous lines that for all $p\ge 1$,
\begin{equation*}
\sup_{n\ge 2} \mathbb{P}\left (\exists\ j\ge p,\ \exists \ 0\le k< b^{j},\
\left |\mathcal{Y}_n\big ((k+1)b^{-j}\big)- \mathcal{Y}_n\big (kb^{-j}\big )
\right |>c_{b,H}^{-1} b^{-jH^{\prime }}\right )\le A_{p}.
\end{equation*}
The event $\left \{\forall\ j\ge p,\ \forall \ 0\le k< b^{j},\ \left |
\mathcal{Y}_n\big ((k+1)b^{-j}\big)- \mathcal{Y}_n\big (kb^{-j}\big )
\right |\le c_{b,H}^{-1}b^{-jH^{\prime }}\right\}$ is denoted by $E^n_p$.
One has $\mathbb{P}(E_p^n)\ge 1-A_p$. A simple adaptation of the proof of
the Kolmogorov-Centsov theorem \cite{Centsov} (see the proof of Proposition~
\ref{prop4}(3) in the next section) shows that on $E^n_p$, we have
\begin{equation*}
\displaystyle \sup_{n\ge 2}\sup_{\substack{ 0\le s<t\le 1 \\ t-s\le b^{-p}}}
\frac{\left |\mathcal{Y}_n(t)- \mathcal{Y}_n(s)\right |}{(t-s)^{H^{\prime }}}
\le \frac{2(b-1)c_{b,H}^{-1}}{1-b^{-H^{\prime }}}.
\end{equation*}
Consequently, for all $n\ge 2$ we have $\omega\big (\mathcal{Y}_n,b^{-p}\big )\le \frac{2(b-1)c_{b,H}^{-1}b^{-pH^{\prime }}}{1-b^{-H^{\prime }}}$. This yields
\begin{equation*}
\inf_{n\ge 2} \mathbb{P}\left (\omega\big (\mathcal{Y}_n,b^{-p}\big )\le
\frac{2(b-1)c_{b,H}^{-1}b^{-pH^{\prime }}}{1-b^{-H^{\prime }}}\right )\ge
\inf_{n\ge 2} \mathbb{P}(E^n_p)\ge 1-A_p.
\end{equation*}
Since $\lim_{p\to\infty}A_p=0$, the previous inequality yields (\ref
{tighness}).
\end{proof}
\noindent \textit{Proof of Theorem~\ref{th1}.} We use the notations of the
three previous propositions. Suppose that $(\mathcal{Y}_{n_k})_{k\ge 1}$ is a
subsequence of $(\mathcal{Y}_n)_{n\ge 1}$ which converges weakly to a
probability distribution $\mathcal{W}_{\infty}$. Due to Proposition~\ref{prop2}, a process $\mathcal{Y}$ such that $\mathcal{L}(\mathcal{Y})=\mathcal{W}_{\infty}$ has continuous paths and is such that for all $p\ge 1$,
$\mathcal{L}\big ((\mathcal{Y}(t))_{t\in\mathcal{D}_p}\big )=\mathcal{L}\big ((\mathcal{W}(t))_{t\in\mathcal{D}_p}\big )$. Since $\bigcup_{p\ge 1}\mathcal{D}_p$ is dense in $[0,1]$ and the almost sure limit of a sequence of centered Gaussian variables is a centered Gaussian variable with variance equal to the limit of the variances, we conclude that $\mathcal{W}_{\infty}=\mathcal{W}$. Now the final conclusion comes from Proposition~\ref{prop3}.
\section{Proof of Theorem~\ref{th2} concerning strong convergence when $1/2<H<1
$}\label{strong}
We first construct in Proposition~\ref{prop5} a stochastic process thanks to
the almost sure pointwise convergence of $B^H_n$ over the set of $b$-adic
numbers. We establish regularity properties for this process and then
identify it as the almost sure uniform limit of $B^H_n$
(Proposition~\ref{identification}) by using a result on vector-valued martingales.
Finally, we prove the result concerning the Hausdorff and box dimensions of the graph
of the limit $B^H$ of $B^H_n$.
\begin{proposition}
\label{prop5} Let $H\in (1/2,1]$. With probability one
\begin{enumerate}
\item for every $b$-adic number $t$ in $[0,1]$ the sequence $B^H_n(t)$
converges to a limit denoted $B^H_\infty(t)$.
\item The function $B^H_\infty$ defined on the $b$-adic numbers possesses a
(necessarily unique) continuous extension to $[0,1]$ also denoted $B^H_\infty$.
\item The function $B^H_\infty$ belongs to $C^{H^{\prime }}([0,1])$ for all $
H^{\prime }<H$.
\item The pointwise H\"older exponent of $B^H_\infty$ at every point of $
[0,1]$ is equal to $H$.
\end{enumerate}
\end{proposition}
We first establish the following useful result on the martingale $
(B^H_n(1))_{n\ge 1}$.
\begin{lemma}\label{lemmoments}
\label{prop4} Let $H\in (1/2,1]$. The martingale $\big (B^H_n(1)\big )_{n\ge
1}$ is bounded in $L^q$ norm for all $q\ge 1$.
\end{lemma}
\begin{proof}
Denote $B^H_n(1)$ by $Z_n$ as in the proof of Proposition~\ref{prop1}. Since
$(Z_n)_{n\ge 1}$ is a martingale, the sequence $(\mathbb{E}(Z_n^2))_{n\ge 1}$
is non-decreasing. Consequently, it follows from (\ref{fonc2}), (\ref{m2})
and the fact that $b^{1-2H}<1$ since $H>1/2$ that $(Z_n)_{n\ge 1}$ is
bounded in $L^2$ norm and thus it converges almost surely to a limit $
Z_\infty$. Then, the relation (\ref{Mq}) as well as arguments very similar
to those used in the proof of Proposition~\ref{prop1} show that the sequence
$\mathbb{E}(Z_n^q)$ converges for every integer $q\ge 1$ as $n$ goes to $
\infty$. In particular, the martingale $(Z_n)_{n\ge 1}$ is bounded in $L^{2q}$ for every integer $q\ge 1$. This implies that $\mathbb{E}(Z_\infty^{2q})<\infty$ for every integer $q\ge 1$, by Fatou's lemma.
\end{proof}
\begin{proof}
\textit{(of Proposition~\ref{prop5})} (1) Since $B^H_n(0)=0$ for all $n\ge 1$
almost surely, it is enough to establish that for every $p\ge 1$ and $0\le k< b^{p}$ the sequence $\big (\Delta^H_n(p,k)\big)_{n\ge 1}$ defined by $\Delta^H_n(p,k)=B^H_n\big ((k+1)b^{-p}\big )-B^H_n\big (kb^{-p}\big )$
converges almost surely. Indeed, since the set of $b$-adic numbers is
countable, this will imply that with probability one, $\big (\Delta^H_n(p,l)\big)_{n\ge 1}$ converges for every $p\ge 1$ and $0\le l<b^p$ as $n$ goes
to $\infty$, thus $B^H_n\big (kb^{-p}\big )=\sum_{l=0}^{k-1}\Delta^H_n(p,l)$
converges for $p\ge 1$ and $1\le k\le b^p$ as $n$ goes to $\infty$.
Now, it is sufficient to notice that given $p\ge 1$, $0\le k< b^{p}$, and $w=w_1\cdots w_p$ such that $kb^{-p}=\sum_{j=1}^p w_jb^{-j}$, the relation (\ref{fonc0}) yields for $n\ge p+1$
\begin{equation} \label{delta}
\Delta^H_n(p,k)= \epsilon(w_1) \cdots \epsilon (w_1\cdots w_p) b^{-pH}
B^H_{n-p}(w)(1).
\end{equation}
The convergence of $\Delta^H_n(p,k)$ then comes from Lemma~\ref{prop4} which
ensures that the martingale $(B^H_{n-p}(w)(1))_{n\ge 1}$ converges to a
limit $B^H_\infty(w)(1)$ since it is bounded in $L^2$-norm.
Let $\Delta^H_\infty(p,k)$ and $B^H_\infty(kb^{-p})$ denote the limits of $\Delta^H_n(p,k)$ and $B^H_n(kb^{-p})$ respectively. By construction, given $p\ge 1$, $0\le k< b^{p}$, and $w=w_1\cdots w_p$ such that $kb^{-p}=\sum_{j=1}^p w_jb^{-j}$, we have
\begin{eqnarray}
\label{delta'}\Delta^H_\infty(p,k)&=& B^H_\infty\big ((k+1)b^{-p}\big )-B^H_\infty\big (
kb^{-p}\big ) \\
\label{delta"}&= &\epsilon(w_1) \cdots \epsilon (w_1\cdots w_p) b^{-pH} B^H_\infty(w)(1).
\end{eqnarray}
\noindent (2) and (3) We adapt the proof of the Kolmogorov-Centsov theorem
\cite{Centsov,KS}, which uses the dyadic base while we work in an arbitrary base $b$. Let $H^{\prime }\in (0,H)$. Fix an integer $K>1/\big (2(H-H^{\prime })\big )$. Due to (\ref{delta'}) and (\ref{delta"}), for $p\ge 1$
we have
\begin{equation*}
\alpha_p:=\mathbb{P}\left (\exists \ 0\le k<b^p,\ |\Delta^H_\infty(p,k)|\ge
b^{-pH^{\prime }}\right )\le b^{p(1+2K(H^{\prime }-H))} \mathbb{E}\big (
B^H_\infty(1)^{2K}\big).
\end{equation*}
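This estimate follows, for instance, from (\ref{delta"}), Markov's inequality applied to the $2K$-th moment, and a union bound over the $b^p$ values of $k$:
\begin{equation*}
\mathbb{P}\left (|\Delta^H_\infty(p,k)|\ge b^{-pH^{\prime }}\right )\le b^{2KpH^{\prime }}\,\mathbb{E}\left (|\Delta^H_\infty(p,k)|^{2K}\right )=b^{2Kp(H^{\prime }-H)}\,\mathbb{E}\big (B^H_\infty(1)^{2K}\big ).
\end{equation*}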
Since $\sum_{p\ge 1}\alpha_p<\infty$, the Borel-Cantelli lemma implies that with
probability 1, there exists $n_0$ such that
\begin{equation} \label{Centsov0}
\sup_{0\le k<b^n}|\Delta^H_\infty(n,k)|< b^{-nH^{\prime }}, \quad \forall\
n\ge n_0 .
\end{equation}
Now we fix $n\ge n_0$ and show that for all $m>n$,
\begin{equation} \label{Centsov}
|B^H_\infty(t)-B^H_\infty(s)|\le 2(b-1)\sum_{j=n+1}^m b^{-H^{\prime
}j},\quad \forall\ t,s\in \mathcal{D}_m,\ 0<t-s<b^{-n}.
\end{equation}
If $m=n+1$, one has $s=kb^{-(n+1)}$ and $t=k^{\prime }b^{-(n+1)}$ with $
0<k^{\prime }-k<b$, so due to (\ref{Centsov0}) we have $|B^H_\infty(t)-B^H_
\infty(s)|\le (k' -k)b^{-(n+1)H'}$, hence the conclusion.
Suppose that (\ref{Centsov}) holds for $n+1\le m \le M-1$. Let $t,s\in
\mathcal{D}_M$ such that $0<t-s<b^{-n}$ and consider $t_1=\max\{u\in
\mathcal{D}_{M-1}: u\le t\}$ and $s_1=\min\{u\in \mathcal{D}_{M-1}: u\ge s\}$
. One has $s\le s_1 \le t_1\le t$, $t_1-s_1<b^{-n}$, $s_1-s\le (b-1) b^{-M}$
and $t-t_1\le (b-1)b^{-M}$. Now, since $s_1$ and $t_1$ belong to $\mathcal{D}
_{M-1}\subset \mathcal{D}_M$, property (\ref{Centsov0}) implies that $
|B^H_\infty(s)-B^H_\infty(s_1)|\le (b-1) b^{-MH^{\prime }}$ and $
|B^H_\infty(t)-B^H_\infty(t_1)|\le (b-1) b^{-MH^{\prime }}$. Moreover, since
(\ref{Centsov}) holds for $m=M-1$ one has $|B^H_\infty(t_1)-B^H_\infty(s_1)|
\le 2(b-1)\sum_{j=n+1}^{M-1}b^{-H^{\prime }j}$. This is enough to get (\ref
{Centsov}) for $m=M$.
Property (\ref{Centsov}) being established for all $n\ge n_0$, taking $t,s\in
\mathcal{D}$ such that $0<|t-s|<b^{-n_0}$ and $n$ the integer such that $
b^{-(n+1)}\le |t-s|<b^{-n}$, since both $t$ and $s$ belong to $\bigcup_{p>n}
\mathcal{D}_p$ we deduce from (\ref{Centsov}) that
\begin{equation*}
|B^H_\infty(t)-B^H_\infty(s)|\le 2(b-1)\sum_{j=n+1}^\infty b^{-H^{\prime
}j}\le \frac{2(b-1)}{1-b^{-H^{\prime }}}|t-s|^{H^{\prime }}.
\end{equation*}
This is enough to construct on $[0,1]$ a unique continuous extension of $
B^H_\infty$. As a consequence of what precedes, this extension belongs to $
C^{H^{\prime }}([0,1])$ for all $H^{\prime }<H$.
\noindent (4) We need the following lemma which describes the asymptotic
behavior of the characteristic function of $B^H_\infty(1)$. This lemma will
also be useful in finding a lower bound for the Hausdorff dimension of the
graph of $B^H$.
\begin{lemma}
\label{fourier} Let $\varphi$ stand for the characteristic function of $
B^H_\infty(1)$. There exists $\rho\in(0,1)$ such that $\varphi(t)=O\big (
\rho^{|t|^{1/H}}\big )\ \ (|t|\to\infty)$. In particular, the probability
distribution of $B^H_\infty(1)$ possesses an infinitely differentiable
density.
\end{lemma}
\begin{proof}
Since $\mathbb{E}(B^H_\infty(1))=1$, the probability distribution of $
B^H_\infty(1)$ is not concentrated at 0 and thus for every $\eta>0$ there
exists $\alpha\in (0,\eta)$ and $\gamma<1$ such that $\sup_{t, |t|\in
[\alpha,b^H\alpha]}|\varphi(t)|\le \gamma$.
Now, using the fact that
\begin{equation*}
\varphi(t)=\left [p_{b,H}^+\varphi\big (b^{-H}t\big )+p_{b,H}^-\varphi\big (-b^{-H}t\big )\right
]^b,
\end{equation*}
one obtains by induction that
\begin{equation*}
\sup_{t,\ |t|\in [b^{kH}\alpha,b^{(k+1)H}\alpha]}|\varphi(t)|\le \gamma^{b^k}\quad (\forall\ k\ge 0).
\end{equation*}
Since $|t|^{1/H}\le b\alpha^{1/H} b^{k}$ for $|t|\in [b^{kH}\alpha,b^{(k+1)H}\alpha]$, the
conclusion follows with $\rho=\gamma^{1/(b\alpha^{1/H})}$.
The rate of decay of $\varphi$ at $\infty$ yields the conclusion regarding
the probability distribution of $B^H_\infty(1)$.
\end{proof}
It follows from Lemma~\ref{fourier} that $\mathbb{E}(|B^H_\infty(1)|^{-
\gamma})<\infty$ for all $\gamma\in (0,1)$. This will be used with $
\gamma=1/2$ in what follows.
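This can be seen, for instance, as follows: since $\varphi\in L^1$, the law of $B^H_\infty(1)$ has a bounded density $g$, so that for every $\gamma\in(0,1)$,
\begin{equation*}
\mathbb{E}\big (|B^H_\infty(1)|^{-\gamma}\big )\le \|g\|_\infty\int_{-1}^{1}|x|^{-\gamma}\,dx+1<\infty .
\end{equation*}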
We next use an approach similar to that used for the study of the pointwise
H\"older exponents of Brownian motion \cite{Erdos,KS}.
Let $\varepsilon>0$. We show that the subset $\mathcal{O}$ of $\Omega$ of
points $\omega$ such that the corresponding path $B^H_\infty$ possesses
points at which the pointwise H\"older exponent is at least $H+\varepsilon$
is included in a set of null probability.
We fix an integer $K>4/\varepsilon $ and denote by $n_K$ the smallest
integer $n$ such that $Kb^{-n}\le 1$. For $t\in [0,1]$ and $n\ge n_K$ , consider
$S^K_n(t)$ a subset of $[0,1]$ made of $K+1$ consecutive $b$-adic
numbers of generation $n$ such that $t\in [\min\, S^K_n(t),\max\, S^K_n(t)]$.
Also denote by $\boldsymbol{S}^K_n(t)$ the set of $K$ consecutive $b$-adic
intervals delimited by the elements of $S^K_n(t)$. If the pointwise H\"older
exponent at $t$ is larger than or equal to $H+\varepsilon$ then for $n$
large enough one has necessarily $\sup_{s\in
S^K_n(t)}|B_\infty^H(s)-B_\infty^H(t)|\le (Kb^{-n})^{H+\varepsilon/2}$, so
that $\sup_{I\in \boldsymbol{S}^K_n(t)} |\Delta B_\infty^H(I)|\le 2
(Kb^{-n})^{H+\varepsilon/2}$.
Now let $\boldsymbol{S}^K_n$ be the set made of all $K$-tuples of consecutive $b$-adic intervals of generation $n$, and if $S\in \boldsymbol{S}^K_n$, denote the event $\big \{ \sup_{I\in S} |\Delta B_\infty^H(I)|\le 2 (Kb^{-n})^{H+\varepsilon/2}\big \}$ by $E_{S}$. The previous lines
show that
\begin{equation*}
\mathcal{O}\subset \mathcal{O}^{\prime }=\bigcap_{n\ge n_K}\bigcup_{p\ge
n}\bigcup_{S\in \boldsymbol{S}^K_p}E_{S}.
\end{equation*}
By construction, if $S\in \boldsymbol{S}^K_p$, $\big (|\Delta B_\infty^H(I)|\big)_{I\in S}$ is equal to $(b^{-pH}Y_I)_{I\in S}$, where the $K$ random variables $Y_I$ are mutually independent, each with the same distribution as $|B^H_\infty(1)|$.
Consequently, $\mathbb{P}(E_{S})$ depends only on $K$ and $p$, and
\begin{equation*}
\mathbb{P}(E_{S})\le \left [ \mathbb{P}\big (|B^H_\infty(1)|\le 2
(Kb^{-p})^{\varepsilon/2}\big )\right ]^K\le \sqrt{2}^K K^{K \varepsilon/4} b^{-p
K \varepsilon/4}\left [\mathbb{E}(|B^H_\infty(1)|^{-1/2})\right ]^K.
\end{equation*}
Since the cardinality of $\boldsymbol{S}^K_p$ is less than $b^p$, this yields $
\mathbb{P}\big(\bigcup_{S\in \boldsymbol{S}^K_p}E_{S}\big )\le C b^p
b^{-p K \varepsilon/4}$, with $C=\sqrt{2}^K K^{K \varepsilon/4}\left [
\mathbb{E}(|B^H_\infty(1)|^{-1/2})\right ]^K$. Due to our choice for $K$,
this implies that the series $\sum \mathbb{P}\big(\bigcup_{S\in \boldsymbol{S}
^K_p}E_{S}\big )$ converges and $\mathbb{P}(\mathcal{O}^{\prime
})=0$.
\end{proof}
The next proposition makes it possible to conclude that the random sequence
of functions $(B^H_n)_{n\ge 1}$ converges almost surely uniformly to the
function $B^H_\infty$ constructed previously. The same kind of approach is
used to establish the convergence of continuous function-valued martingales
related to multiplicative processes on a homogeneous or Galton-Watson tree
in \cite{Joffe,Biggins,Barral}, but the context in those papers is rather
different from the present one: the martingales $M_n(s)$ considered there
take the form $\sum_{w\in \Sigma_n} \prod_{k=1}^nW(w|k)(s)$, where the random
weights $W(w)(s)$ depend smoothly on a parameter $s$ belonging to some open
subset of $\mathbb{R}^d$ independent of $\Sigma$ (or, more generally, of a
super-critical Galton-Watson tree), while for $B^H_n(s)$ the parameter $s$
is a generic point in $\Sigma$ (identified with $[0,1]$).
\begin{proposition}
\label{identification} One has $\mathbb{E}\left
(\|B^H_\infty\|_\infty
\right
)<\infty$. Consequently, with probability 1, $B^H_n$ converges
uniformly to $B^H_\infty$.
\end{proposition}
\begin{proof}
In fact we are going to prove that $\mathbb{E}\left
(\|B^H_\infty\|_
\infty^2\right )<\infty$. Define
\begin{equation*}
\widetilde Z_n=\sup_{t\in\bigcup_{p\ge 1}\mathcal{D}_p}|B^H_n(t)|, \ \widetilde Z_{n}(k)=
\sup_{t\in\bigcup_{p\ge 1}\mathcal{D}_p}|B^H_n(k)(t)|,\ 0\le k\le b-1.
\end{equation*}
Due to (\ref{fonc}) one has
\begin{equation*}
\widetilde Z_n\le \max_{0\le k\le b-1}\Big (b^{-H}\widetilde Z_{n-1}(k)+|B^H_n(kb^{-1})|\Big )\le \sum_{k=0}^{b-1}\Big (b^{-H}\widetilde Z_{n-1}(k)+|B^H_n(kb^{-1})|\Big ).
\end{equation*}
Thus
\begin{eqnarray*}
\mathbb{E}(\widetilde Z_n^2)&\le& \sum_{k=0}^{b-1}\mathbb{E}\left
(b^{-2H}\widetilde Z_{n-1}(k)^2+2\widetilde Z_{n-1}(k)|B^H_n(kb^{-1})|+|B^H_n(kb^{-1})|^2\right )
\\
&=&b^{1-2H}\mathbb{E}(\widetilde Z_{n-1}^2)+ 2\mathbb{E}\left
(\sum_{k=0}^{b-1}\widetilde Z_{n-1} (k)|B^H_n(kb^{-1})|\right )+\sum_{k=0}^{b-1}\mathbb{E}
(|B^H_n(kb^{-1})|^2 ) \\
&\le & b^{1-2H}\mathbb{E}(\widetilde Z_{n-1}^2)+2\mathbb{E}(\widetilde Z_{n-1}^2)^{1/2}
\sum_{k=0}^{b-1}\left\|B^H_n(kb^{-1})\right \|_2+
\sum_{k=0}^{b-1}\| B^H_n(kb^{-1})\|_2^2.
\end{eqnarray*}
Now we use the fact that the sequence
$\sup_{0\le k\le b-1}\left\| B^H_n(kb^{-1})\right \|_2$ is bounded due to the proof of Proposition~\ref{prop5}. Thus there exists $C>0$ such that for all
$n\ge 1$
\begin{equation} \label{infty}
\mathbb{E}(\widetilde Z_n^2)\le f\big (\mathbb{E}(\widetilde Z_{n-1}^2)\big ),\ \mbox{with } f(x)=
b^{1-2H}x+C\sqrt {x}+C.
\end{equation}
Since $b^{1-2H}<1$, there exists $x_0> 0$ such that $f(x)<x$ for all $x>x_0$
. This remark together with (\ref{infty}) implies that $\mathbb{E}(\widetilde Z_n^2)\le
\max \left (x_0, f\big (\mathbb{E}(\widetilde Z_{1}^2)\big )\right )$ for all $n\ge 2$.
To conclude, we use Proposition V-2-6 in \cite{Neveu} which ensures that since $
\mathcal{C}([0,1])$ is a complete separable Banach space and $
\|B^H_\infty\|_{L^1}<\infty$, $B^H_\infty$ is the almost sure limit of $
\widetilde B^H_n=\mathbb{E}\big (B^H_\infty|\sigma(\epsilon (w): |w|\le n)
\big )$. Furthermore, given $n\ge 1$ and $w\in\Sigma_n$, one can show by
induction on $p\ge 0$ that, with probability 1, $\widetilde B^H_n(t_{w\cdot
u})=B^H_n(t_{w\cdot u})$ for all $u\in \Sigma_p$. This implies that $
\widetilde B^H_n=B^H_n$ almost surely since these functions coincide over $\mathcal{D}$.
\end{proof}
\begin{proposition}
Let $H\in (1/2,1]$. With probability 1, the Hausdorff and box dimensions of
the graph of $B_H$ are equal to $2-H$.
\end{proposition}
We shall need some additional notation. If $w\in\Sigma^*$ and $J=\pi ([w])$
then we define $\boldsymbol{\epsilon}(J)=\prod_{k=1}^{|w|}\epsilon(w|k)$.
\begin{proof}
Let us denote by $\Gamma_H$ the graph $\left \{\big (t,B_H(t)\big ): t\in
[0,1]\right \}$ of $B_H$.
First, the fact that $2-H$ is an upper bound for the box dimension of the
graph of $B_H$ follows from the fact that $B_H\in C^{H^{\prime }}([0,1])$ for
all $H^{\prime }<H$ (see \cite{Falc}, Ch. 11).
To obtain the sharp lower bound $2-H$ for the Hausdorff dimension of $\Gamma_H$, we use the method which consists in showing that, with probability 1, the measure on this graph obtained as the image of the Lebesgue measure restricted to $[0,1]$ under the mapping $t\mapsto \big (t,B_H(t)\big )$ has finite energy with respect to the Riesz kernel $u\in\mathbb{R}^2\setminus\{0\}\mapsto \|u\|^{-\gamma}$ for all $\gamma <2-H$ (see \cite{Falc}, Ch. 11, for more details). This property holds if we show that for all $\gamma <2-H$
\begin{equation*}
\int_{[0,1]^2}\mathbb{E}\left (\frac{1}{\sqrt{|t-s|^2+|B_H(t)-B_H(s)|^2}
^\gamma}\right )\ dtds <\infty.
\end{equation*}
If $I$ is a closed subinterval of $[0,1]$, we denote by $\mathcal{G}(I)$ the
set of closed $b$-adic intervals of maximal length included in $I$, and then
$m_I=\min\bigcup_{J\in\mathcal{G}(I)}J$ and $M_I =\max\bigcup_{J\in\mathcal{G
}(I)}J$.
Let $0<s<t<1$ be two numbers that are not $b$-adic. We define two sequences $(s_p)_{p\ge 0}$ and $(t_p)_{p\ge 0}$ as follows. Let $s_0=m_{[s,t]}$ and $t_0=M_{[s,t]}$. Then define inductively $(s_p)_{p\ge 1}$ and $(t_p)_{p\ge 1}$ by $s_p=m_{[s,s_{p-1}]}$ and $t_p=M_{[t_{p-1},t]}$.
Let us denote by $\mathcal{C}$ the collection of intervals consisting of $[s_0,t_0]$ together with the intervals $[s_p,s_{p-1}]$ and $[t_{p-1},t_p]$, $p\ge 1$. Every interval $I\in \mathcal{C}$ is the union of at most $b-1$
intervals of the same generation $n_I$, the elements of $\mathcal{G}(I)$,
and we have $\Delta B_H(I)=\sum_{J\in\mathcal{G}(I)} \boldsymbol{\epsilon}
(J)b^{-n_IH}Y_J$.
By construction, we have $\min_{I\in\mathcal{C}
}n_I=n_{[s_0,t_0]}$ and $(t-s)/b\le b^{-n_{[s_0,t_0]}}\le (t-s)$.
Also, all the random variables $Y_J$, $J\in \mathcal{G}(I)$, $I\in\mathcal{C}$, are mutually independent and
independent of $\mathcal{T}_{\mathcal{C}}=\sigma (\boldsymbol{\epsilon}
(J):J\in \mathcal{G}(I),\ I\in\mathcal{C})$.
Now, we write
\begin{equation*}
B_H(t)-B_H(s)=b^{-n_{[s_0,t_0]}H}\left (\sum_{J\in \mathcal{G}([s_0,t_0])}
\boldsymbol{\epsilon} (J)Y_J+ Z(s,s_0)+Z(t_0,t)\right),
\end{equation*}
where
\begin{equation*}
\begin{cases}
\displaystyle Z(s,s_0)=\lim_{p\to\infty} \sum_{0\le k\le
p}b^{(n_{[s_0,t_0]}-n_{[s_{k+1},s_k]})H} \sum_{J\in\mathcal{G}
([s_{k+1},s_k])} \boldsymbol{\epsilon} (J)Y_J \\
\displaystyle Z(t_0,t)=\lim_{p\to\infty} \sum_{0\le k\le
p}b^{(n_{[s_0,t_0]}-n_{[t_k, t_{k+1}]})H} \sum_{J\in\mathcal{G}([t_k,
t_{k+1}])} \boldsymbol{\epsilon} (J)Y_J.
\end{cases}
\end{equation*}
Let $\mathcal{Z}(t,s)=\sum_{J\in \mathcal{G}([s_0,t_0])} \boldsymbol{\epsilon
} (J)Y_J+ Z(s,s_0)+Z(t_0,t)$ and fix $J_0\in \mathcal{G}([s_0,t_0])$.
Conditionally on $\mathcal{T}_{\mathcal{C}}$, $\mathcal{Z}(t,s)$ is the sum
of $\pm Y(J_0)$ and a random variable $U$ independent of $Y(J_0)$.
Consequently, the probability distribution of $\mathcal{Z}(t,s)$
conditionally on $\mathcal{T}_{\mathcal{C}}$ possesses a density $f_{t,s}$
and $\|\widehat{f_{t,s}}\|_{L^1}\le \|\varphi\|_{L^1}$, where $\varphi$ is
the characteristic function of $Y(J_0)$ studied in Lemma~\ref{fourier}.
Thus, for $\gamma<2-H$ we have
\begin{eqnarray*}
\mathbb{E}\left (\frac{1}{\sqrt{|t-s|^2+|B_H(t)-B_H(s)|^2}^\gamma}|\mathcal{T
}_{\mathcal{C}}\right )&=& \int_{\mathbb{R}}\frac{f_{t,s}(u)}{\sqrt{
|t-s|^2+b^{-2n_{[s_0,t_0]}H}u^2}^\gamma}\, du\\
&\le& \int_{\mathbb{R}}\frac{f_{t,s}(u)}{\sqrt{
|t-s|^2+b^{-2H}(t-s)^{2H}u^2}^\gamma}\, du \\
&=&|t-s|^{1-H-\gamma} \int_{\mathbb{R}}\frac{f_{t,s}(|t-s|^{1-H}v)}{\sqrt{
1+b^{-2H}v^2}^\gamma}\, dv.
\end{eqnarray*}
The function $f_{t,s}$ is bounded independently of $t,\ s$ and $\mathcal{T}_{\mathcal{
C}}$ since it is bounded by $\|\widehat{f_{t,s}}\|_{L^1}$ and we just saw that
this number is bounded by $\|\varphi\|_{L^1}$. It follows that
\begin{equation*}
\mathbb{E}\left (\frac{1}{\sqrt{|t-s|^2+|B_H(t)-B_H(s)|^2}^\gamma}\right
)\le \|\varphi\|_{L^1}|t-s|^{1-H-\gamma}\int_{\mathbb{R}}\frac{dv}{\sqrt{1+b^{-2H}v^2}^\gamma}
.
\end{equation*}
This yields the conclusion.
\end{proof}
\section{Proof of Theorems~\ref{th3} and \ref{th4} concerning the functional CLT when $1/2<H<1$}
\noindent
{\it Proofs of Theorems~\ref{th3} and \ref{th4}.} We proceed in three steps, as for Theorem~\ref{th1}.
Let $a_n^H=\sigma_Hb^{n(1/2-H)}$. For $w\in\Sigma^*$ and $n\ge 1$, let $Y_n(w)=\big (B^H(w)-B^H_{n}(w)\big )(1)/a_n^H$, and simply denote $Y_n(\emptyset)$ by $Y_n$. By construction, $\mathcal{L}(Y_n(w))=\mathcal{L}(Y_n)$. Also, for $n\ge 1$ let $\mathcal{X}^H_{n}=\big (B^H-B^H_n\big )/a_n^H$.
Step 1: We leave it to the reader to verify that
\begin{equation}\label{zzz}
Y_n=\frac{1}{\sqrt{b}}\sum_{k=0}^{b-1} \epsilon (k)Y_{n-1}(k),
\end{equation}
where $Y_{n-1}(k)\sim Y_{n-1}$, and the $Y_{n-1}(k)$'s are centered, mutually independent, and independent of the $\epsilon(k)$'s.
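For instance, (\ref{zzz}) can be checked as follows. The first-generation decomposition underlying (\ref{fonc2}) gives, at $t=1$,
\begin{equation*}
B^H_n(1)=b^{-H}\sum_{k=0}^{b-1}\epsilon (k)B^H_{n-1}(k)(1)\quad\mbox{and}\quad B^H(1)=b^{-H}\sum_{k=0}^{b-1}\epsilon (k)B^H(k)(1),
\end{equation*}
the second relation being obtained from the first one by letting the number of generations tend to infinity. Subtracting and dividing by $a_n^H=\sigma_Hb^{n(1/2-H)}$, each summand carries the factor $b^{-H}a_{n-1}^H/a_n^H=b^{-H}b^{H-1/2}=b^{-1/2}$, which yields (\ref{zzz}).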
It is then straightforward to show by induction that properties (1), (2) and (3) of the proof of Proposition~\ref{prop1} hold (with the new sequence $(Y_n)$ considered in the present proof).
Thus $X^H_n(1)=\mathcal{X}^H_n(1)$ converges weakly to $\mathcal{N}(0,1)$.
Step 2: Let $(W(t))_{t\in [0,1]}$ be a standard Brownian motion. For every $p\ge 1$, the
probability distributions $\mathcal{L}\bigg(\big (X^H_{p+n}(t)\big )_{t\in
\mathcal{D}_p}\bigg )= \mathcal{L}\bigg(\big (\mathcal{X}^H_{p+n}(t)\big )_{t\in
\mathcal{D}_p}\bigg )$ converge to $\mathcal{L}\big ((W(t))_{t\in\mathcal{D}
_p}\big )$ as $n\to\infty$. This is obtained by following the same approach as in the proof of Proposition~\ref{prop2}, together with Step 1 and the fact that if $w\in\Sigma_p$, for all $n\ge p$ we have
\begin{equation}\label{yyy}
\Delta \mathcal{X}^H_n(I_w)=\Delta X^H_n(I_w)=b^{-p/2}Y_{n-p}(w).
\end{equation}
Step 3: To see that the sequences $\big (\mathcal{L}(X^H_n)\big )_{n\ge 1}$ and $\big (\mathcal{L}(\mathcal{X}^H_n)\big )_{n\ge 1}$ are tight, we follow the same approach as in Proposition~\ref{prop3}.
For $n\ge 2$ and $p\ge 1$, we notice the following.
If $1\le p\le n$ and $w\in \Sigma_p$, then we have $\Delta \mathcal{X}^H_n(I_w)=\Delta X^H_n(I_w)=b^{-p/2}Y_{n-p}(w)$ (this is (\ref{yyy})).
If $p>n$ and $w\in \Sigma_p$ then we have
\begin{eqnarray*}
|\Delta X^H_n(I_w)|= |\Delta X^H_n(I_{w|n})|b^{n-p}&=&\sigma_H^{-1}b^{n/2-p} \big |B^H(w|n)(1)-1\big|\\
&\le &\sigma_H^{-1}b^{-p/2} \big |B^H(w|n)(1)-1\big|.
\end{eqnarray*}
and
\begin{eqnarray*}
|\Delta \mathcal{X}^H_n(I_w)|&=&(a^H_n)^{-1}\left |\Delta B^H(I_w)-\Delta B^H_n(I_w)\right |\\
&=&\sigma_H^{-1}b^{n(H-1/2)}\left |b^{-pH}B^H(w)(1)\prod_{k=1}^p\epsilon (w|k)-b^{-nH}b^{n-p}\prod_{k=1}^n\epsilon (w|k)\right |
\\
&=&\sigma_H^{-1}b^{-p/2} \left |b^{(p-n)(1/2-H)}B^H(w)(1)\prod_{k=n+1}^p\epsilon (w|k)-b^{-(p-n)/2}\right |.
\end{eqnarray*}
Now, since by Lemma~\ref{lemmoments} and Step 1 the sequences $Y_{n}(w)$, $\big |B^H(w|n)(1)-1\big|$ and $\left |b^{(p-n)(1/2-H)}B^H(w)(1)\prod_{k=n+1}^p\epsilon (w|k)-b^{-(p-n)/2}\right |$ are bounded in $L^q$ for all $q\ge 1$, independently of $w$, we deduce (by using an approach similar to that used in the proof of Proposition~\ref{prop3}) from the previous estimates of
$|\Delta \mathcal{X}^H_n(I_w)|$ and $|\Delta X^H_n(I_w)|$ that for every $H'<1/2$, there exists a positive sequence $(\beta_p)_{p\ge 1}$ such that $\sum_{p\ge 1}\beta_p<\infty$ and
$$
\sup_{n\ge 2} \mathbb{P}(\exists\ w\in \Sigma_p: |\Delta X^H_n(I_w)|>b^{-pH'})+\sup_{n\ge 2} \mathbb{P}(\exists\ w\in \Sigma_p: |\Delta \mathcal{X}^H_n(I_w)|>b^{-pH'})\le \beta_p.
$$
In view of the proof of Proposition~\ref{prop3}, this is enough to establish the desired tightness.
\end{document}
\begin{document}
\title{{\huge A Max-Correlation White Noise Test for Weakly Dependent Time
Series\thanks{
We thank three referees and Co-Editor Michael Jansson for helpful comments
and suggestions that led to significant improvements to our manuscript. We
also thank Eric Ghysels, Shigeyuki Hamori, Peter R. Hansen, Yoshihiko
Nishiyama, Kenichiro Tamaki, Kozo Ueda, and Zheng Zhang, seminar
participants at the Kyoto Institute of Economic Research, UNC Chapel Hill,
the University of Essex, and Kobe University, and conference participants at
the 10th Spring Meeting of JSS, 2016 SWET, 2016 AMES, 2016 JJSM, 2016
NBER-NSF Time Series Conference, the 15th International Conference of WEAI,
and SETA 2019 for helpful comments. The second author is grateful for
financial supports from JSPS KAKENHI (Grant Number 16K17104), Kikawada
Foundation, Mitsubishi UFJ Trust Scholarship Foundation, Nomura Foundation,
and Suntory Foundation.}}}
\author{ \ \ Jonathan B. Hill \thanks{
Corresponding author. Department of Economics, University of North Carolina
at Chapel Hill. E-mail: \texttt{[email protected]}; web: \texttt{
https://jbhill.web.unc.edu.}} \qquad\ \ \ \ \ and\qquad\ \ Kaiji Motegi
\thanks{
Graduate School of Economics, Kobe University. E-mail: \texttt{
[email protected]}} \\
University of North Carolina \qquad\ \qquad Kobe University}
\date{\today \\
}
\maketitle
\setstretch{1}
\begin{center}
\textbf{Abstract}
\end{center}
This paper presents a bootstrapped p-value white noise test based on the
maximum correlation, for a time series that may be weakly dependent under
the null hypothesis. The time series may be prefiltered residuals. The test
statistic is a normalized weighted maximum sample correlation coefficient $
\max_{1\leq h\leq \mathcal{L}_{n}}\sqrt{n}|\hat{\omega}_{n}(h)\hat{\rho}
_{n}(h)|$, where $\hat{\omega}_{n}(h)$ are weights and the maximum lag $
\mathcal{L}_{n}$ increases at a rate slower than the sample size $n$. We
only require uncorrelatedness under the null hypothesis, along with a moment
contraction dependence property that includes mixing and non-mixing
sequences. We show Shao's (\citeyear{Shao2011_JoE}) dependent wild bootstrap
is valid for a much larger class of processes than originally considered. It
is also valid for residuals from a general class of parametric models as
long as the bootstrap is applied to a first order expansion of the sample
correlation.
We prove the bootstrap is asymptotically valid without exploiting extreme
value theory (standard in the literature) or recent Gaussian approximation
theory. Finally, we extend Escanciano and Lobato's (
\citeyear{EscancianoLobato2009}) automatic maximum lag selection to our
setting with an unbounded lag set that ensures a consistent white noise
test, and find it works extremely well in controlled experiments.
\newline
\textbf{MSC2010 classifications} : 62J07, 62F03, 62F40. \textbf{JEL
classifications} : C12, C52.
\newline
\textbf{Keywords} : maximum correlation, white noise test, near epoch
dependence, dependent wild bootstrap, automatic lag selection.
\setstretch{1.225}
\section{Introduction\label{sec:intro}}
We present a bootstrap white noise test based on the maximum (in absolute
value) autocorrelation. The data may be observed, or filtered residuals. A
new asymptotic theory approach is used relative to the literature, one that
sidesteps deriving the asymptotic distribution of a max-correlation
statistic, or working with tools specific to Gaussian approximations and
couplings. We operate solely on the bootstrapped p-value. Convergence in
finite dimensional distributions of the sample correlation is combined with
new theory for handling convergence of arbitrary arrays. The latter is
applicable for dealing with the maximum of an increasing sequence of
correlations, in particular when residuals based on a plug-in estimator are
used.
The class of time series models considered here is:
\begin{equation}
y_{t}=f(x_{t-1},\phi _{0})+u_{t}\text{ \ and \ }u_{t}=\epsilon _{t}\sigma
_{t}(\theta _{0}) \label{reg_model}
\end{equation}
where $\phi $ $\in $ $\mathbb{R}^{k_{\phi }}$, $k_{\phi }$ $\geq $ $0$, and $
f(x,\phi )$ is a level response function. The error $\epsilon _{t}$
satisfies $E[\epsilon _{t}]$ $=$ $0$, $E[\epsilon _{t}^{2}]$ $<$ $\infty $,
and the regressors are $x_{t}$ $\in $ $\mathbb{R}^{k_{x}}$, $k_{x}$ $\geq $ $
0$. We assume $\{x_{t},y_{t}\}$ are strictly stationary in order to focus
ideas. Volatility $\sigma _{t}^{2}(\theta _{0})$ is a process measurable
with respect to $\mathcal{F}_{t-1}$ $\equiv $ $\sigma (y_{\tau },x_{\tau }$ $
:$ $\tau $ $\leq $ $t$ $-$ $1)$, where $\theta _{0}$ is decomposed as $[\phi
_{0}^{\prime },\delta _{0}^{\prime }]$ $\in $ $\mathbb{R}^{k_{\theta }}$, $
\delta _{0}\in $ $\mathbb{R}^{k_{\delta }}$ are volatility-specific
parameters, and $(k_{\theta },k_{\delta })$ $\geq $ $0$. The dimensions of $
\phi _{0}$ and $\delta _{0}$ (hence $\theta _{0}$) may be zero, depending on
the model desired and the interpretation of the test variable $\epsilon _{t}$
. Thus, $k_{\phi }$ $=$ $0$ implies a volatility model $y_{t}$ $=$ $\epsilon
_{t}\sigma _{t}(\theta _{0})$, if $k_{\delta }$ $=$ $0$ then $y_{t}$ $=$ $
f(x_{t-1},\phi _{0})$ $+$ $\epsilon _{t}$, and $y_{t}$ $=$ $\epsilon _{t}$
when $k_{\theta }$ $=$ $0$ (i.e. a filter is not used). We want to test if $
\{\epsilon _{t}\}$ is a white noise process:
\begin{equation*}
H_{0}:E\left[ \epsilon _{t}\epsilon _{t-h}\right] =0\text{ }\forall h\in
\mathbb{N}\text{ against }H_{1}:E\left[ \epsilon _{t}\epsilon _{t-h}\right]
\neq 0\text{ for some }h\in \mathbb{N}.
\end{equation*}
Notice $\epsilon _{t}$ need not have a zero conditional mean: we do not
require, e.g., $E[\epsilon _{t}|x_{t-1}]$ $=$ $0$ $a.s.$ This implies that
we do not require $\sigma _{t}^{2}(\theta _{0})$ to be a conditional
variance. Together, (\ref{reg_model}) allows for model mis-specification.
Nevertheless, (\ref{reg_model}) is assumed correct in some sense, whether $
H_{0}$ is true or not, in view of $E[\epsilon _{t}]$ $=$ $0$ and possibly
other moment conditions used to identify $\theta _{0}$. Thus, $\theta _{0}$
should be thought of as a pseudo-true value that can be identified, often by
unconditional moment conditions \citep{KullbackLeibler1951,Sawa1978}.
Complete assumptions are given in Section \ref{sec:max_corr}: see especially
Assumption \ref{assum:plug}.
Unless $y_{t}$ $=$ $\epsilon _{t}$ such that $y_{t}$ is known to have a zero
mean, let $\hat{\theta}_{n}$ $=$ $[\hat{\phi}_{n}^{\prime },\hat{\delta}
_{n}^{\prime }]$ be an estimator of $\theta _{0}$, where $n$ is the sample size, and
define the residual, and its sample serial covariance and correlation at lag
$h$ $\geq $ $1$:
\begin{equation*}
\epsilon _{t}(\hat{\theta}_{n})\equiv \frac{u_{t}(\hat{\phi}_{n})}{\sigma
_{t}(\hat{\theta}_{n})}\equiv \frac{y_{t}-f(x_{t-1},\hat{\phi}_{n})}{\sigma
_{t}(\hat{\theta}_{n})}\text{ \ and \ }\hat{\gamma}_{n}(h)\equiv \frac{1}{n}
\sum_{t=1+h}^{n}\epsilon _{t}(\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}
_{n})\text{ \ and }\hat{\rho}_{n}(h)\equiv \frac{\hat{\gamma}_{n}(h)}{\hat{
\gamma}_{n}(0)}.
\end{equation*}
In the pure volatility model set $f(x_{t-1},\hat{\phi}_{n})$ $=$ $0$, and in
the level model set $\sigma _{t}(\hat{\theta}_{n})$ $=$ $1$.
Our primary test statistic is the normalized weighted sample maximum
correlation,
\begin{equation*}
\mathcal{\hat{T}}_{n}\equiv \sqrt{n}\max_{1\leq h\leq \mathcal{L}
_{n}}\left\vert \hat{\omega}_{n}(h)\hat{\rho}_{n}(h)\right\vert ,
\end{equation*}
where $\hat{\omega}_{n}(h)$ $>$ $0$ are possibly stochastic weights with $
\hat{\omega}_{n}(h)$ $\overset{p}{\rightarrow }$ $\omega (h)$ $>$ $0$, where
$\omega (h)$ are non-stochastic. The weights allow for ($i$) control for
variable dispersion across lags that affect empirical power, or ($ii$) a
decrease in accuracy in probability when $n$ is small and $h$ is large. In
the former case $\hat{\omega}_{n}(h)$ may be an inverted standard deviation
estimator. In the latter case we might use $\hat{\omega}_{n}(h)$ $=$ $(n$ $-$
$2)/(n$ $-$ $h)$ as in \cite{LjungBox1978}. Despite the generality afforded
by weights, we find using $\hat{\omega}_{n}(h)$ $=$ $1$ results in accurate
size and comparably high power in Monte Carlo simulations. Indeed, using an
inverted standard deviation $\hat{\omega}_{n}(h)$ does not improve test
performance in our experiments due to estimation error associated with $\hat{
\omega}_{n}(h)$.
The number of lags $\mathcal{L}_{n}$\ can converge to a finite positive
integer: the theory follows trivially from the proofs of our main results.
In that case our test would not be a formal test of the white noise
hypothesis. We want $\mathcal{L}_{n}$ $\rightarrow $ $\infty $ as $n$ $
\rightarrow $ $\infty $ in order to ensure a white noise test, and that $
\mathcal{L}_{n}$ $=$ $o(n)$ to ensure $\hat{\gamma}_{n}(h)$ $=$ $E[\epsilon
_{t}\epsilon _{t-h}]$ $+$ $O_{p}(1/\sqrt{n})$ for each $h\in \{1,...,
\mathcal{L}_{n}\}$ and therefore yield a consistent test. The limit theory
in that case requires more than convergence in finite dimensional
distributions based on classic arguments
\citep[e.g.][]{HoffJorg1984,HoffJorg1991}, which is one of the major
challenges we address in this paper.
Interest in the maximum of an increasing sequence of deviated covariances $
\sqrt{n}$ $\max_{1\leq h\leq \mathcal{L}_{n}}|$$\hat{\gamma}_{n}$$(h)$ $-$ $
\gamma $$(h)|$ dates in some form to \cite{Berman1964} and \cite{Hannan1974}
. See also \cite{XiaoWu2014} and their references. In this literature the
test variable is observed, and the exact asymptotic distribution form of a
suitably normalized $\sqrt{n}\max_{1\leq h\leq \mathcal{L}_{n}}|\hat{\gamma}
_{n}(h)$ $-$ $\gamma (h)|$ is sought. \cite{XiaoWu2014} impose a moment
contraction property on $y_{t}$, and $\mathcal{L}_{n}$ $=$ $O(n^{\upsilon })$
for some $\upsilon $ $\in $ $(0,1)$ that is smaller with greater allowed
dependence. They show $a_{n}\{\sqrt{n}\max_{1\leq h\leq \mathcal{L}_{n}}|
\hat{\gamma}_{n}(h)$ $-$ $\gamma (h)|/(\sum_{h=0}^{\infty }\gamma
(h)^{2})^{1/2}$ $-$ $b_{n}\}$ $\overset{d}{\rightarrow }$ $\exp \{-\exp
\{-x\}\}$, a Gumbel distribution, with normalizing sequences $a_{n},b_{n}$ $
\sim $ $(2\ln (n))^{1/2}$. See, also, \cite{Jirak2011}. \cite{XiaoWu2014} do
not prove their blocks-of-blocks bootstrap is valid under their assumptions,
and only observed data are allowed. The moment contraction property is also
more restrictive than the Near Epoch Dependence [NED] property used here
\citep[see the supplemental material][Appendix B]{HillMotegi_supp_mat}.
\citet{Chernozhukov_etal2013,Chernozhukov_etal2015,Chernozhukov_etal2016}
significantly improve on results in the literature on Gaussian
approximations and couplings, cf. \cite{Yurinskii1977}, \cite
{DudleyPhilipp1983}, \cite{Portnoy1986}, and \cite{LeCam1988}. They allow
for arbitrary dependence across the sequence of sample means, and the
sequence length may grow at a rate of order $e^{Kn^{\varsigma }}$ for some $
K,\varsigma $ $>$ $0$. Sample autocorrelations, however, only exist for lags
$\{0,...,n-1\}$, and are Fisher consistent for the population
autocorrelations for lags $h$ up to order $o(n)$. The independence
assumption, however, is not feasible for a white noise test since $\epsilon
_{t}\epsilon _{t-h}$ is at best a martingale difference, and may be
generally dependent under either hypothesis. Further, a Gaussian
approximation theory cannot handle the maximum distance between $\hat{\rho}
_{n}(h)$ based on residuals $\epsilon _{t}(\hat{\theta}_{n})\epsilon _{t-h}(
\hat{\theta}_{n})$, and its version based on $\epsilon _{t}\epsilon _{t-h}$
(and other components due to the plug-in estimator $\hat{\theta}_{n}$)
because $\epsilon _{t}\epsilon _{t-h}$ is typically not Gaussian even if $
\epsilon _{t}$ is.\footnote{
When filtered data are used we must prove in Lemma \ref{lm:corr_expan}
that $\max_{1\leq h\leq \mathcal{L}_{n}}|1/\sqrt{n}\sum_{t=1}^{n}\epsilon
_{t}(\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})$ $-$ $1/\sqrt{n}
\sum_{t=1}^{n}z_{t}(h)|$ $\overset{p}{\rightarrow }$ $0$ for some sequence $
\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $\rightarrow $ $\infty $, and some
process $\{z_{t}(h)\}$ that is a function of $\epsilon _{t}\epsilon _{t-h}$
and components of $\hat{\theta}_{n}$. We then prove in Lemma \ref{lm:clt_max}
that $\max_{1\leq h\leq \mathcal{L}_{n}}|1/\sqrt{n}\sum_{t=1}^{n}z_{t}(h)|$ $
\overset{d}{\rightarrow }$ $\max_{1\leq h\leq \infty }|\mathcal{Z}(h)|$ for
some Gaussian process $\{\mathcal{Z}(h)\}$. Under suitable memory and
heterogeneity restrictions, the Gaussian approximation theory of Zhang and
Wu (\citeyear{ZhangWu2017}), cf. \cite{Chernozhukov_etal2013}, can handle $
\max_{1\leq h\leq \mathcal{L}_{n}}|1/\sqrt{n}\sum_{t=1}^{n}z_{t}(h)|$ $
\overset{d}{\rightarrow }$ $\max_{1\leq h\leq \infty }|\mathcal{Z}(h)|$
since $\{\mathcal{Z}(h)\}$ is Gaussian. But their theory cannot determine $
\max_{1\leq h\leq \mathcal{L}_{n}}|1/\sqrt{n}\sum_{t=1}^{n}\epsilon _{t}(
\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})$ $-$ $1/\sqrt{n}
\sum_{t=1}^{n}z_{t}(h)|$ $\overset{p}{\rightarrow }$ $0 $ because that would
require $1/\sqrt{n}\sum_{t=1}^{n}z_{t}(h)$ itself to be Gaussian for each $n$
. The latter generally does not hold because $\epsilon _{t}\epsilon _{t-h}$
is not Gaussian even if $\epsilon _{t}$ is.}
\citet[Appendix
B]{Chernozhukov_etal_manymom_2014}, cf.
\citet[Supplemental
Appendix]{Chernozhukov_etal2018}, allow for \textit{almost surely} bounded $
\beta $-mixing data, but the above problem involving filtered data is not
resolved, boundedness rules out many time series of practical interest, and
our NED environment eclipses a mixing environment
\citep[see Section \ref{sec:assum_expan}, below, and see, e.g.,][Chapter
17]{Davidson1994}.
\cite{ZhangWu2017} extend results in \cite{Chernozhukov_etal2013} to a large
class of dependent processes
\citep[see also][for an extension to
geometrically dependent data in a bootstrap setting]{ZhangWu2014}. Their
framework is the functional dependence or moment contraction notions
popularized in, e.g., \cite{Wu2005}. The possibility of filtered data is
ignored, which requires a non-Gaussian approximation theory. Further, it is
not obvious which processes satisfy the conditions of their main Theorem 3.2
(e.g. nonlinear ARMA-GARCH, stochastic volatility).
Compared to the above literature, we use a different asymptotic theory
approach. We sidestep extreme value theoretic methods by exploiting
convergence of $\{\sqrt{n}(\hat{\gamma}_{n}(h)$ $-$ $\gamma $$(h))$ $:$ $1$ $
\leq $ $h$ $\leq $ $\mathcal{L}\}$ to a Gaussian process, for each finite $
\mathcal{L}$ $\in $ $\mathbb{N}$. Since that is not sufficient for weak
convergence in the classic sense of \citet{HoffJorg1984,HoffJorg1991}, we
develop new theory for double array convergence, which is associated with
arguments dating to \cite{Ramsey1930}. This allows us to prove that under $
H_{0}$ the maximum distance over $1$ $\leq $ $h$ $\leq $ $\mathcal{L}_{n}$
between $\sqrt{n}\hat{\rho}_{n}(h)$ and its bootstrapped version converges
to zero for some sequence of positive integers $\{\mathcal{L}_{n}\}$, with $
\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$,
without using extreme value theoretic arguments or Gaussian approximation
theory. Under additional technical conditions presented in the supplemental
material \citet[Appendix G]{HillMotegi_supp_mat}, we show $\mathcal{L}_{n}$ $
=$ $O(n^{c}/\ln (n))$ must also hold, for some $c$ $\in $ $(0,1)$ that
depends on the rate of convergence of the weights $\hat{\omega}_{n}(h)$ $
\overset{p}{\rightarrow }$ $\omega (h)$, and an asymptotic approximation
expansion for the plug-in $\hat{\theta}_{n}$. Under standard regularity
conditions $c$ $=$ $1/2$, hence $\mathcal{L}_{n}$ $=$ $O(\sqrt{n}/\ln (n))$.
These are our primary contributions. As in \citet{Chernozhukov_etal2013}, we
do not require $\sqrt{n}\max_{1\leq h\leq \mathcal{L}_{n}}|\hat{\omega}
_{n}(h)\hat{\rho}_{n}(h)|$ to converge in law under $H_{0}$ since the
bootstrap is asymptotically valid irrespective of the asymptotic properties
of $\sqrt{n}\max_{1\leq h\leq \mathcal{L}_{n}}|\hat{\omega}_{n}(h)\hat{\rho}
_{n}(h)|$.
Our asymptotic theory covers a class of continuous transforms of $[\sqrt{n}
\hat{\omega}_{n}(h)\hat{\rho}_{n}(h)]_{h=1}^{\mathcal{L}_{n}}$, including
the maximum, but also a weighted average $n\sum_{h=1}^{\mathcal{L}_{n}}\hat{
\omega}_{n}^{2}(h)\hat{\rho}_{n}^{2}(h)$, and therefore portmanteau
statistics \citep[cf.][]{LjungBox1978,Hong1996,Hong2001}.
\citet{Hong1996,Hong2001} presents spectral density methods for testing for
uncorrelatedness, and the proposed test statistic is simply a normalized
portmanteau. The latter is shown to be asymptotically normal under
regularity conditions that ensure $\sqrt{n}\hat{\rho}_{n}^{2}(h)$ is
asymptotically independent across $h$ under $H_{0}$. The approach taken here
alleviates the necessity for the normalized $n\sum_{h=1}^{\mathcal{L}_{n}}
\hat{\omega}_{n}^{2}(h)\hat{\rho}_{n}^{2}(h)$ to converge in law under $
H_{0} $, hence we do not require asymptotic independence. Further, as
opposed to \citet{Hong1996,Hong2001},\ our test statistic achieves the
parametric rate of convergence because we do not use self-normalization. See
Remark \ref{rm:Hong_local} in Section \ref{sec:max_corr}.
We perform a bootstrap p-value test using Shao's (\citeyear{Shao2011_JoE})
dependent wild bootstrap, and prove its validity. In order to control for
the use of filtered sampling errors, the bootstrap is applied to a first
order expansion of the sample covariance. \cite{DelgadoVelasco2011} take a
different approach by using orthogonally transformed jointly standardized
correlations in order to control for residuals and dependence. They assume a
fixed maximum lag $\mathcal{L}$, however, due to joint standardization.
Finally, in order to resolve the choice of $\{\mathcal{L}_{n}\}$ in
practice, we extend Escanciano and Lobato's (\citeyear{EscancianoLobato2009}
) automatic maximum lag selection method to our setting. They develop a
Q-test with bounded maximum lag that is selected based on the magnitude of
the maximum correlation. We allow for selection from an increasing set of
integers, and provide a new asymptotic theory for the automatic maximum lag.
General dependence under the null is allowed in different ways in \cite
{Hong1996}, \cite{RomanoThombs1996}, \cite{Shao2011_JoE}, and \cite
{GuayGuerreLazarova2013}, amongst others. Our NED setting is similar to that
of \cite{Lobato2001} and \citet{NankervisSavin2010,NankervisSavin2012}, but
the former works with observed data and requires a fixed maximum lag, and we
allow for a substantially larger class of filters and parametric estimators
than the latter. NED encompasses mixing and non-mixing processes, hence our
setting is more general than Zhu's (\citeyear{Zhu2015}) for his block-wise
random weighting bootstrap.
\cite{Shao2011_JoE}, \cite{GuayGuerreLazarova2013} and \cite{XiaoWu2014} use
a moment contraction property from \cite{Wu2005} and \cite{WuMin2005} with
(potentially far) greater moment conditions than imposed here
\citep[e.g.][]{Shao2011_JoE,GuayGuerreLazarova2013}. \cite{Shao2011_JoE}
requires a complicated eighth order cumulant condition that is only known to
hold under geometric memory, and residuals are not treated. \cite{XiaoWu2014}
only require slightly more than a $4^{th}$ moment, as we do, but do not
allow for residuals. We show in the supplemental material
\citet[Appendix
B]{HillMotegi_supp_mat} that our NED setting is more general than the moment
contraction properties employed in \cite{Shao2011_JoE} and \cite
{GuayGuerreLazarova2013}, and allows for slower memory decay than \cite
{XiaoWu2014}.
Test statistics that combine serial correlations have a vast history dating
to Box and Pierce's (\citeyear{BoxPierce70}) Q-test. Many generalizations
exist, including letting the maximum lag increase \citep{Hong1996,Hong2001};
bootstrapping or re-scaling for size correction under weak dependence
\citep{RomanoThombs1996,Lobato2001,HorowitzLobatoNankervisSavin2006,KuanLee2006,Zhu2015}
; using a Lagrange Multiplier type statistic to account for weak dependence
\citep[e.g.][]{AndrewsPloberger1996,LobatoNankervisSavin2002}; exploiting an
expansion and orthogonal projection to produce pivotal statistics
\citep{Lobato2001,KuanLee2006,DelgadoVelasco2011};
and using endogenous maximum lag selection (Escanciano and Lobato,
\citeyear{EscancianoLobato2009}, Guay, Guerre, and Lazarov\'{a},
\citeyear{GuayGuerreLazarova2013}).
A related class of estimators exploits the periodogram, an increasing sum of
sample correlations, dating to \cite{GrenanderRossenblatt1952}
\citep[e.g.][]{Hong1996,Deo2000,DelgadoHidalgoVelasco2005,Shao2011_JoE,ZhuLi2015}.
\cite{Hong1996} standardizes a periodogram, resulting in less-than $\sqrt{n}
$-local power, while Cram\'{e}r-von Mises and Kolmogorov-Smirnov transforms
in \cite{Deo2000},
Delgado, Hidalgo, and Velasco \citeyearpar{DelgadoHidalgoVelasco2005}, and
\cite{Shao2011_JoE} result in $\sqrt{n}$-local power. \cite
{GuayGuerreLazarova2013} show that Hong's (\citeyear{Hong1996}) standardized
portmanteau test (but not a Cram\'{e}r-von Mises test) can detect
local-to-null correlation values at a rate faster than $\sqrt{n}$ provided
an adaptive increasing maximum lag is used. Finally, a weighted sum of
correlations also arises in Andrews and Ploberger's (
\citeyear{AndrewsPloberger1996}) sup-LM test
\citep[cf.][]{NankervisSavin2010}.
A simulation study shows our proposed max-correlation test with Shao's (
\citeyear{Shao2011_JoE}) dependent wild bootstrap and automatic lag (denoted
$\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$) dominates a variety of
other tests. In this paper, we compare $\hat{\mathcal{T}}^{dw}(\mathcal{L}
_{n}^{\ast })$ and Shao's (\citeyear{Shao2011_JoE}) dependent wild bootstrap
spectral Cram\'{e}r-von Mises test, which is proposed for observed data. In
the supplemental material \citet[][Appendix H]{HillMotegi_supp_mat}, we
consider other tests, including Hong's (\citeyear{Hong1996}) test based on a
standardized periodogram, a CvM test with Zhu and Li's (\citeyear{ZhuLi2015}
) block-wise random weighting bootstrap, and Andrews and Ploberger's (
\citeyear{AndrewsPloberger1996}) sup-LM test with the dependent wild
bootstrap. Overall the CvM test is one of the strongest competitors of our
test. First, generally $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$
achieves sharp size. Second, $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })
$, the sup-LM, and the CvM tests lead to roughly comparable power when there
exist autocorrelations at small lags. Third, $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ has high power while others have nearly trivial
power when there exist autocorrelations at remote lags. Thus, of the tests
under study, $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$ is the only
white noise test that accomplishes both sharp size in general and high
power. The sharp performance of $\hat{\mathcal{T}}^{dw}(\mathcal{L}
_{n}^{\ast })$ stems from the fact that the automatic lag selection
mechanism trims redundant lags under $H_{0}$, and homes in on the most
informative lag under $H_{1}$.
The remainder of the paper is as follows. Section \ref{sec:max_corr}
contains the assumptions and main results. Automatic lag selection is
developed in Section \ref{sec:lag_select}, and a Monte Carlo study follows
in Section \ref{sec:sim}. Concluding remarks are left for Section \ref
{sec:conclude}. Proofs are gathered in Appendix \ref{app:proofs} and the
supplemental material \citet[][Appendix F]{HillMotegi_supp_mat}.
Throughout $|\cdot |$ is the $l_{1}$-matrix norm; $||\cdot ||$ is the $l_{2}$
-matrix norm; $||\cdot ||_{p}$ is the $L_{p}$-norm. $I(\cdot )$ is the
indicator function: $I(A)$ $=$ $1$ if $A$ is true, else $I(A)$ $=$ $0$. $
\mathcal{F}_{t}$ $\equiv $ $\sigma (y_{\tau },x_{\tau }$ $:$ $\tau $ $\leq $
$t)$. All random variables lie in a complete probability measure space $
(\Omega ,\mathcal{F},\mathcal{P})$, hence $\sigma (\cup _{t\in \mathbb{Z}}
\mathcal{F}_{t})$ $\subseteq $ $\mathcal{F}$. We drop the (pseudo) true
value $\theta _{0}$ from function arguments when there is no confusion.
\section{Max-Correlation Test\label{sec:max_corr}}
We first lay out the assumptions and derive some fundamental properties of
the correlation maximum. We then derive the main results.
\subsection{Assumptions and Asymptotic Expansion\label{sec:assum_expan}}
An expansion of $\epsilon _{t}(\hat{\theta}_{n})$\ around $\theta _{0}$ is
required in order to ensure the bootstrapped statistic captures the
influence of the estimator $\hat{\theta}_{n}$ on $\sqrt{n}\hat{\rho}_{n}(h)$
. This is accomplished under various regularity assumptions. Let $\{\upsilon
_{t}\}$ be a stationary $\alpha $-mixing process with $\sigma $-fields $
\mathfrak{V}_{s}^{t}$ $\equiv $ $\sigma (\upsilon _{\tau }$ $:$ $s$ $\leq $ $
\tau $ $\leq $ $t)$ and $\mathfrak{V}_{t}\equiv \mathfrak{V}_{-\infty }^{t}$
, and coefficients $\alpha _{m}^{(\upsilon )}$ $=$ $\sup_{\mathcal{A}\subset
\mathfrak{V}_{t}^{\infty },\mathcal{B}\subset \mathfrak{V}_{-\infty
}^{t-m}}|P\left( \mathcal{A}\cap \mathcal{B}\right) $ $-$ $P\left( \mathcal{A
}\right) P\left( \mathcal{B}\right) |$ $\rightarrow $ $0$ as $m$ $
\rightarrow $ $\infty $. We say $L_{q}$-bounded $\{\epsilon _{t}\}$ is
stationary $L_{q}$-NED with size $\lambda $ $>$ $0$ on a mixing base $
\{\upsilon _{t}\}$ when $||\epsilon _{t}$ $-$ $E[\epsilon _{t}|\mathfrak{V}
_{t-m}^{t+m}]||_{q}$ $=$ $O(m^{-\lambda -\iota })$ for tiny $\iota $ $>$ $0$.
\footnote{
This definition of size is slightly different from the conventional one,
e.g. \citet[p. 262]{Davidson1994}. We use de Jong's (\citeyear{deJong1997}:
Definition 1) definition because we use his central limit theorem for NED
arrays.} If $\epsilon _{t}$ $=$ $\upsilon _{t}$ then $||\epsilon _{t}$ $-$ $
E[\epsilon _{t}|\mathfrak{V}_{t-m}^{t+m}]||_{q}$ $=$ $0$, hence NED includes
mixing sequences, but it also includes non-mixing sequences since it covers
infinite lag functions of mixing sequences that need not be mixing. NED is
related to McLeish's (\citeyear{McLeish1975}) mixingale property. See
\citet[Chapter 17]{Davidson1994} for historical references and deep results.
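For a textbook illustration (ours, and not part of the formal assumptions),
let $\epsilon _{t}$ $=$ $\sum_{j=0}^{\infty }\psi _{j}\upsilon _{t-j}$ be a
linear filter of the stationary $L_{q}$-bounded base with $|\psi _{j}|$ $=$ $
O(j^{-\lambda -1-\iota })$. Since the conditional expectation $E[\epsilon _{t}|
\mathfrak{V}_{t-m}^{t+m}]$ approximates $\epsilon _{t}$ in $L_{q}$-norm at
least as well, up to a factor of two, as the truncation $\sum_{j=0}^{m}\psi
_{j}\upsilon _{t-j}$, Minkowski's inequality yields
\begin{equation*}
\left\Vert \epsilon _{t}-E[\epsilon _{t}|\mathfrak{V}_{t-m}^{t+m}]\right\Vert
_{q}\leq 2\sum_{j=m+1}^{\infty }\left\vert \psi _{j}\right\vert \left\Vert
\upsilon _{0}\right\Vert _{q}=O(m^{-\lambda -\iota }),
\end{equation*}
hence $\epsilon _{t}$ is $L_{q}$-NED with size $\lambda $ on $\{\upsilon
_{t}\}$, although it is an infinite lag function of the base and therefore
need not itself be mixing.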
\begin{assumption}[data generating process]
\label{assum:dgp}$
$\newline
$a$. $\{x_{t},y_{t}\}$ are stationary, ergodic, and $L_{2+\delta }$-bounded
for tiny $\delta $ $>$ $0.
$\newline
$b$. $\epsilon _{t}$ is stationary, ergodic, $E[\epsilon _{t}]$ $=$ $0$, $
L_{r}$-bounded, $r$ $>$ $4$, and $L_{4}$-NED with size $1/2$ on stationary $
\alpha $-mixing $\{\upsilon _{t}\}$ with coefficients $\alpha
_{h}^{(\upsilon )}$ $=$ $O(h^{-r/(r-4)-\iota })$ for tiny $\iota $ $>$ $
0.
$\newline
$c$. The weights satisfy $\hat{\omega}_{n}(h)$ $>$ $0$ $a.s.$ and $\hat{
\omega}_{n}(h)$ $=$ $\omega (h)+O_{p}(1/n^{\kappa })$ for some $\kappa $ $>$
$0$ and non-random $\omega (h)$ $\in $ $(0,\infty )$, for each $h$.
\end{assumption}
\begin{remark}
\normalfont The assumption $E[\epsilon _{t}]$ $=$ $0$ is typically imposed
in practice with inclusion of a regression model constant term. It is
important that the necessary steps for ensuring $E[\epsilon _{t}]$ $=$ $0$
are taken, since otherwise a white noise test may reject due merely to $
E[\epsilon _{t}]$ $\neq $ $0$.
\end{remark}
\begin{remark}
\normalfont Ergodicity is not required in principle, but is imposed so that
laws of large numbers apply easily to functions of $f(x_{t},\phi )$ and $
\sigma _{t}^{2}(\theta )$ and their derivatives. Indeed, NED does not
necessarily carry over to arbitrary measurable transforms of an NED process.
$\alpha $-mixing, for example, implies ergodicity, extends to measurable
transforms, and is a sub-class of NED. \cite{LobatoNankervisSavin2002}
impose a similar NED property. \cite{NankervisSavin2010}, who generalize the
white noise test of \cite{AndrewsPloberger1996}, allow for NED observed $
y_{t}$, but mistakenly assume $y_{t}$ is only $L_{2}$-NED.\footnote{
A Gaussian central limit theorem requires the \emph{product}, in our case $
\epsilon _{t}\epsilon _{t-h}$, to be $L_{2}$-NED, which holds when $\epsilon
_{t}$ is $L_{p}$-bounded, $p$ $>$ $4$, and $L_{4}$-NED
\citep[Theorem
17.9]{Davidson1994}.}
\end{remark}
\begin{remark}
\normalfont The requirement $\hat{\omega}_{n}(h)$ $=$ $\omega
(h)+O_{p}(1/n^{\kappa })$ will hold under suitable moment conditions,
depending on how $\hat{\omega}_{n}(h)$ is constructed. If $\hat{\omega}
_{n}(h)$ is a standard deviation for the sample correlation, for example, $
\kappa $ $=$ $1/2$ can hold under the existence of higher moments and a
broad memory property like $\alpha $-mixing, even for some kernel estimators
\citep[e.g.][]{AndrewsHAC91}.
\end{remark}
If $y_{t}$ $=$ $\epsilon _{t}$\ is known then a filter is not required and
Assumption \ref{assum:dgp} suffices for our main results. In this case, if $
y_{t}$ is iid under $H_{0}$, then it only needs to be $L_{2}$-bounded.
The next assumption is required if a filter is used. Let $\boldsymbol{0}_{l}$
\ be an $l$-dimensional zero vector. Define
\begin{eqnarray}
&&G_{t}(\phi )\equiv \left[ \frac{\partial }{\partial \phi ^{\prime }}
f(x_{t-1},\phi ),\boldsymbol{0}_{k_{\delta }}^{\prime }\right] ^{\prime }\in
\mathbb{R}^{k_{\theta }}\text{ \ and \ }s_{t}(\theta )\equiv \frac{1}{2}
\frac{\partial }{\partial \theta }\ln \sigma _{t}^{2}(\theta ) \label{GsD}
\\
&& \notag \\
&&\mathcal{D}(h)\equiv E\left[ \left( \epsilon _{t}s_{t}+\frac{G_{t}}{\sigma
_{t}}\right) \epsilon _{t-h}\right] +E\left[ \epsilon _{t}\left( \epsilon
_{t-h}s_{t-h}+\frac{G_{t-h}}{\sigma _{t-h}}\right) \right] \in \mathbb{R}
^{k_{\theta }}. \notag
\end{eqnarray}
We do not require a filter for the above entities to make sense. If $
y_{t}=\epsilon _{t}$, for example, then $G_{t}(\phi )$, $s_{t}(\theta )$ and
therefore $\mathcal{D}(h)$ are each just zero.
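As a heuristic example (ours, and not used in the sequel), consider a linear
AR(1) level filter with constant variance: $f(x_{t-1},\phi )$ $=$ $\phi
y_{t-1}$ with $x_{t-1}$ containing $y_{t-1}$, and $\sigma _{t}^{2}(\theta )$
$=$ $\delta $ $>$ $0$, so $\theta $ $=$ $[\phi ,\delta ]^{\prime }$, $
G_{t}(\phi )$ $=$ $[y_{t-1},0]^{\prime }$ and $s_{t}(\theta )$ $=$ $
[0,1/(2\delta )]^{\prime }$. If $\epsilon _{t}$ is iid with $E[\epsilon
_{t}^{2}]$ $=$ $1$ and $|\phi _{0}|$ $<$ $1$, then the terms in the
definition of $\mathcal{D}(h)$ involving $s_{t}$, $s_{t-h}$ and $G_{t-h}$
vanish for $h$ $\geq $ $1$, and
\begin{equation*}
\mathcal{D}(h)=E\left[ \frac{G_{t}}{\sigma _{t}}\epsilon _{t-h}\right] =\left[
\phi _{0}^{h-1},0\right] ^{\prime },
\end{equation*}
so the estimation effect on the sample correlations decays geometrically in $
h$.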
We require notation that makes use of estimating equations $m_{t}$ $\in $ $
\mathbb{R}^{k_{m}}$ and a matrix $\mathcal{A}$ $\in $ $\mathbb{R}^{k_{\theta
}\times k_{m}}$ defined under Assumption \ref{assum:plug}.c. Define
\begin{eqnarray}
&&r_{t}(h)\equiv \frac{\epsilon _{t}\epsilon _{t-h}-E\left[ \epsilon
_{t}\epsilon _{t-h}\right] -\mathcal{D}(h)^{\prime }\mathcal{A}m_{t}}{E\left[
\epsilon _{t}^{2}\right] }\text{ and }\rho (h)\equiv \frac{E[\epsilon
_{t}\epsilon _{t-h}]}{E[\epsilon _{t}^{2}]} \label{rz} \\
&&z_{t}(h)\equiv r_{t}(h)-\rho (h)r_{t}(0)=\frac{\epsilon _{t}\epsilon
_{t-h}-\rho (h)\epsilon _{t}^{2}-\left( \mathcal{D}(h)-\rho (h)\mathcal{D}
(0)\right) ^{\prime }\mathcal{A}m_{t}}{E\left[ \epsilon _{t}^{2}\right] }.
\notag
\end{eqnarray}
The process that arises in the key approximation is:
\begin{equation}
\mathcal{Z}_{n}(h)\equiv \frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}z_{t}(h).
\label{Zn}
\end{equation}
\begin{assumption}[plug-in: response and identification]
\label{assum:plug} $\ \ \
$\newline
$a$. \emph{Level response.} $f$ $:$ $\mathbb{R}^{k_{x}}\times \Phi $ $
\rightarrow $ $\mathbb{R}$, where $\Phi $ is a compact subset of $\mathbb{R}
^{k_{\phi }}$, $k_{\phi }$ $\geq $ $0$; $f(x,\phi )$ is Borel measurable for
each $\phi $, and for each $x$ three times continuously differentiable,
where $(\partial /\partial \phi )^{j}f(x,\phi )$ is Borel measurable for
each $\phi $ and $j$ $=$ $1,2,3$; $E[\sup_{\phi \in \mathcal{N}_{\phi
_{0}}}|(\partial /\partial \phi )^{j}f(x_{t},\phi )|^{6}]$ $<$ $\infty $ for
$j$ $=$ $0,1,2,3$ and some compact set with positive measure $\mathcal{N}
_{\phi _{0}}\subseteq \Phi $ containing $\phi _{0}$.
\newline
$b$. \emph{Volatility.} $\sigma _{t}^{2}$ $:\Theta $ $\rightarrow $ $
[0,\infty )$ where $\Theta $ $=$ $\Phi $ $\times $ $\Delta $ $\subset $ $\mathbb{
R}^{k_{\theta }}$, and $\Delta $\ is a compact subset of $\mathbb{R}
^{k_{\delta }}$, $k_{\delta }$ $\geq $ $0$; $\sigma _{t}^{2}(\theta )$ is $
\mathcal{F}_{t-1}$-measurable, continuous, and three times continuously
differentiable, where $(\partial /\partial \theta )^{j}\ln \sigma
_{t}^{2}(\theta )$ is Borel measurable for each $\theta $ and $j$ $=$ $1,2,3$
; $\inf_{\theta \in \Theta }|\sigma _{t}^{2}(\theta )|$ $\geq $ $\iota $ $>$
$0$ $a.s.$ and $E[\sup_{\theta \in \mathcal{N}_{\theta _{0}}}|(\partial
/\partial \theta )^{j}\ln \sigma _{t}^{2}(\theta )|^{4}]$ $<$ $\infty $ for $
j$ $=$ $0,1,2,3$ and some compact subset $\mathcal{N}_{\theta _{0}}\subseteq
\Theta $ containing $\theta _{0}$.
\newline
$c$. \emph{Estimator}$.$ $\hat{\theta}_{n}$ $\in $ $\Theta $ for each $n$,
and for a unique interior point $\theta _{0}$ $\in $ $\Theta $ we have $
\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $=$ $\mathcal{A}
n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $\mathcal{R}_{m}(n)$, where
the $k_{m}$ $\times $ $1$ stochastic remainder $\mathcal{R}_{m}(n)$ $=$ $
O_{p}(n^{-\zeta })$\ for some $\zeta $ $>$ $0$, with $\mathcal{F}_{t}$
-measurable estimating equations $m_{t}$ $=$ $[m_{i,t}]_{i=1}^{k_{m}}$ $:$ $
\Theta $ $\rightarrow $ $\mathbb{R}^{k_{m}}$ for $k_{m}$ $\geq $ $k_{\theta }
$, and non-stochastic $\mathcal{A}$ $\in $ $\mathbb{R}^{k_{\theta }\times
k_{m}}$. Moreover, zero mean $m_{t}(\theta _{0})$ is stationary, ergodic, $
L_{r/2}$-bounded and $L_{2}$-NED with size $1/2$ on $\{\upsilon _{t}\}$,
where $r$ $>$ $4$ and $\{\upsilon _{t}\}$\ appear in Assumption \ref
{assum:dgp}.b.
\newline
$d$. \emph{Finite dimensional variance}$.$ Let $\mathcal{L}$ $\in $ $\mathbb{
N}$ be arbitrary, and let $\lambda $ $\equiv $ $[\lambda _{h}]_{h=1}^{
\mathcal{L}}$ $\in $ $\mathbb{R}^{\mathcal{L}}$. Then\linebreak\ $\lim
\inf_{n\rightarrow \infty }\inf_{\lambda ^{\prime }\lambda =1}E[(\sum_{h=1}^{
\mathcal{L}}\lambda _{h}\mathcal{Z}_{n}(h))^{2}]$ $>$ $0$.
\end{assumption}
\begin{remark}
\normalfont Smoothness (a) and (b) ensure a stochastic equicontinuity
property for uniform laws of large numbers. Non-differentiability can be
allowed provided certain other smoothness conditions involving, e.g.,
bracketing numbers apply \citep[see, e.g.,][]{PakesPollard1989,ArconesYu1994}.
\end{remark}
\begin{remark}
\normalfont$E[\sup_{\phi \in \mathcal{N}_{\phi _{0}}}|(\partial /\partial
\phi )^{j}f(x_{t},\phi )|^{4}]$ $<$ $\infty $ and $E[\sup_{\theta \in
\mathcal{N}_{\theta _{0}}}|(\partial /\partial \theta )^{j}\ln \sigma
_{t}^{2}(\theta )|^{4}]$ $<$ $\infty $ are used to prove a uniform law of
large numbers, where the former can imply higher moment bounds than in
Assumption \ref{assum:dgp}, depending on the response $f$. Fourth moments
are needed for a required residual cross-product expansion. $
E[\sup_{\theta \in \mathcal{N}_{\theta _{0}}}|(\partial /\partial \theta
)^{j}\ln \sigma _{t}^{2}(\theta )|^{4}]$ $<$ $\infty $ holds for many linear
and nonlinear volatility models, e.g. GARCH, Quadratic GARCH, GJR-GARCH
\citep{FZ04,FZ10}. The $6^{th}$ moment bound $E[\sup_{\phi \in \mathcal{N}
_{\phi _{0}}}|(\partial /\partial \phi )^{j}f(x_{t},\phi )|^{6}]$ $<$ $
\infty $ is used to determine the rate of convergence of the correlation
expansion approximation, which itself is used to bound the rate of increase
of $\mathcal{L}_{n}$ in Lemma \ref{lm:corr_expan}.
\end{remark}
\begin{remark}
\normalfont Under (c), $\hat{\theta}_{n}$ is asymptotically a linear function
of some zero mean $\mathcal{F}_{t}$-measurable process $m_{t}(\theta _{0})$.
This includes M-estimators, GMM and (Generalized) Empirical Likelihood with
smooth or nonsmooth estimating equations, and estimators with non-smooth
criteria and asymptotic expansions like LAD and quantile regression.
Typically $m_{t}(\theta _{0})$ is a function of $u_{t}$ or $\epsilon _{t}$
and the gradients $(\partial /\partial \phi )f(x_{t},\phi _{0})$ and/or $
(\partial /\partial \theta )\sigma _{t}^{2}(\theta _{0})$, in which case $
E[m_{t}]$ $=$ $0$ represents an orthogonality condition that identifies $
\theta _{0}$, even if $\epsilon _{t}$ is not white noise. The assumption
that $m_{t}$\ is NED in (c), in conjunction with Assumption \ref{assum:dgp},
implies linear combinations of $\epsilon _{t}\epsilon _{t-h}$ and $m_{t}$
are NED \citep[Theorem 17.8]{Davidson1994}, which promotes Gaussian finite
dimensional asymptotics for the residual cross-products.
\end{remark}
\begin{remark}
\normalfont The approximation error in (c) $\sqrt{n}(\hat{\theta}_{n}$ $-$ $
\theta _{0})$ $=$ $\mathcal{A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $
O_{p}(n^{-\zeta })$ is of order $n^{-\zeta }$ for some $\zeta $ $>$ $0$. In
many cases $\zeta $ $=$ $1/2$ under suitable regularity conditions. This
allows us to describe the order of convergence for the remainder term in an
asymptotic expansion of the sample correlation, which we require when
deriving an upper bound on $\mathcal{L}_{n}$. All other technical arguments
only require $\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $=$ $\mathcal{A}
n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $o_{p}(1)$.
\end{remark}
\begin{remark}
\normalfont(d) is a standard nondegeneracy assumption for finite dimensional
asymptotics.
\end{remark}
The theory developed in this paper extends to a class of measurable
functions of $[\sqrt{n}\hat{\rho}_{n}(h)]_{h=1}^{\mathcal{L}_{n}}$.
Specifically:
\begin{equation}
\vartheta :\mathbb{R}^{\mathcal{L}}\rightarrow \lbrack 0,\infty )\text{ for
arbitrary }\mathcal{L}\in \mathbb{N}, \label{phi_map}
\end{equation}
which satisfies the following: $\vartheta (a)$ is continuous; lower bound $
\vartheta (a)$ $=$ $0$ \emph{if and only if} $a$ $=$ $0$; upper bound $
\vartheta (a)$ $\leq $ $K\mathcal{LM}$ for some $K$ $>$ $0$ and any $a$ $=$ $
[a_{h}]_{h=1}^{\mathcal{L}}$ such that $|a_{h}|$ $\leq $ $\mathcal{M}$ for
each $h$; divergence $\vartheta (a)$ $\rightarrow $ $\infty $ as $||a||$ $
\rightarrow $ $\infty $; monotonicity $\vartheta (a_{\mathcal{L}_{1}})$ $
\leq $ $\vartheta ([a_{\mathcal{L}_{1}}^{\prime },c_{\mathcal{L}_{2}-
\mathcal{L}_{1}}^{\prime }]^{\prime })$ where $(a_{\mathcal{L}},c_{\mathcal{L
}})$ $\in $ $\mathbb{R}^{\mathcal{L}}$, $\forall \mathcal{L}_{2}$ $\geq $ $
\mathcal{L}_{1}$ and any $c_{\mathcal{L}_{2}-\mathcal{L}_{1}}$ $\in $ $
\mathbb{R}^{\mathcal{L}_{2}-\mathcal{L}_{1}}$; and the triangle inequality $
\vartheta (a$ $+$ $b)$ $\leq $ $\vartheta (a)$ $+$ $\vartheta (b)$ $\forall
a,b$ $\in $ $\mathbb{R}^{\mathcal{L}}$. Examples include the maximum $
\vartheta (a)$ $=$ $\max_{1\leq h\leq \mathcal{L}}|a_{h}|$, and sums $
\vartheta (a)$ $=$ $\sum_{h=1}^{\mathcal{L}}|a_{h}|$ and $\vartheta (a)$ $=$
$\sum_{h=1}^{\mathcal{L}}a_{h}^{2}$, where $a$ $=$ $[a_{h}]_{h=1}^{\mathcal{L
}}$. The lower bound $\vartheta (a)$ $=$ $0$ \emph{if and only if} $a$ $=$ $
0 $ ensures we omit cases where test power is not asymptotically one. As one
example, when $\tilde{\vartheta}(a)$ $=$ $\sum_{h=1}^{\mathcal{L}}a_{h}$ the
statistic $\tilde{\vartheta}([\sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}
_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ need not diverge under the alternative
because $\tilde{\vartheta}(a)$ $=$ $0$ is possible for $a$ $\neq $ $0$.
We suppress the dependence of $\vartheta $ on $\mathcal{L}$ to reduce
notation. The general test statistic is therefore:
\begin{equation*}
\mathcal{\hat{T}}_{n}\equiv \vartheta \left( \left[ \sqrt{n}\hat{\omega}
_{n}(h)\hat{\rho}_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) .
\end{equation*}
Both $\max_{1\leq h\leq \mathcal{L}_{n}}|\sqrt{n}\hat{\omega}_{n}(h)\hat{\rho
}_{n}(h)|$ and a weighted portmanteau $n\sum_{h=1}^{\mathcal{L}_{n}}\hat{
\omega}_{n}^{2}(h)\hat{\rho}_{n}^{2}(h)$ are covered. Note that the
normalization $\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}
_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $=$ $(2\mathcal{L}_{n})^{-1/2}\sum_{h=1}^{
\mathcal{L}_{n}}\hat{\omega}_{n}(h)\{n\hat{\rho}_{n}^{2}(h)$ $-$ $1\}$ and
similar normalized spectral density estimators used in
\citet[eq.
(3)]{Hong1996} and \cite{Hong2001}\ are not covered here because they violate
positivity $\vartheta $ $:$ $\mathbb{R}^{\mathcal{L}}$ $\rightarrow $ $
[0,\infty )$, lower bound $\vartheta (a)$ $=$ $0$ \emph{if and only if} $a$ $
=$ $0$, and monotonicity. The fix $\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)
\hat{\rho}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $=$ $(2\mathcal{L}
_{n})^{-1/2}|\sum_{h=1}^{\mathcal{L}_{n}}\hat{\omega}_{n}(h)\{n\hat{\rho}
_{n}^{2}(h)$ $-$ $1\}|$ still violates $\vartheta (a)$ $=$ $0$ \emph{if and
only if} $a$ $=$ $0$, and monotonicity.
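For concreteness, the following minimal numerical sketch (our own
illustration, not part of the formal development; all function names are
hypothetical) computes the two leading members of this class from a residual
series, using $\hat{\omega}_{n}(h)$ $=$ $1$ and the $1/n$ normalization for
the sample covariances:
\begin{verbatim}
import numpy as np

def sample_correlations(eps, max_lag):
    """rho_hat(h) = gamma_hat(h)/gamma_hat(0), h = 1,...,max_lag."""
    n = len(eps)
    e = eps - eps.mean()   # numerical safeguard: the theory assumes E[eps_t] = 0
    gamma0 = np.mean(e ** 2)
    gam = np.array([np.dot(e[h:], e[:n - h]) / n for h in range(1, max_lag + 1)])
    return gam / gamma0

def max_corr_stat(eps, max_lag):
    """Max-correlation statistic: max_{1<=h<=L} |sqrt(n) rho_hat(h)|."""
    n = len(eps)
    return np.sqrt(n) * np.max(np.abs(sample_correlations(eps, max_lag)))

def portmanteau_stat(eps, max_lag):
    """Weighted portmanteau with unit weights: n * sum_h rho_hat(h)^2."""
    n = len(eps)
    return n * np.sum(sample_correlations(eps, max_lag) ** 2)
\end{verbatim}
De-meaning the residuals in the sketch is merely a safeguard consistent with
the remark following Assumption \ref{assum:dgp} that $E[\epsilon _{t}]$ $=$ $
0$ must be ensured in practice.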
The following result establishes a key (non-Gaussian) approximation theory
for an increasing sequence of serial correlations. See Appendix \ref
{app:proofs} for all proofs. Recall $\kappa $ $>$ $0$ in $\hat{\omega}
_{n}(h) $ $=$ $\omega (h)+O_{p}(1/n^{\kappa })$ and $\zeta $ $>$ $0$ in $
\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $=$ $\mathcal{A}
n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $O_{p}(n^{-\zeta })$, cf.
Assumptions \ref{assum:dgp}.c and \ref{assum:plug}.c. These will determine
an upper bound on $\mathcal{L}_{n}$ $\rightarrow $ $\infty $.
\begin{lemma}
\label{lm:corr_expan}Let Assumptions \ref{assum:dgp} and \ref{assum:plug}
hold. Then
\begin{equation}
\mathcal{\tilde{X}}_{n}(h)\equiv \left\vert \sqrt{n}\hat{\omega}
_{n}(h)\left\{ \hat{\rho}_{n}(h)-\rho (h)\right\} -\omega (h)\frac{1}{\sqrt{n
}}\sum_{t=1+h}^{n}\left\{ r_{t}(h)-\rho (h)r_{t}(0)\right\} \right\vert
=O_{p}\left( 1/n^{\min \{\zeta ,\kappa ,1/2\}}\right) . \label{expans_rate}
\end{equation}
Moreover, for some non-unique monotonic sequence of positive integers $\{
\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and $\mathcal{
L}_{n}$ $=$ $o(n)$, we have: $|\vartheta (\sqrt{n}[\hat{\omega}_{n}(h)\{\hat{
\rho}_{n}(h)$ $-$ $\rho (h)\}]_{h=1}^{\mathcal{L}_{n}})$ $-$ $\vartheta
([\omega (h)\mathcal{Z}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})|$ $\leq $ $
\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)\{\hat{\rho}_{n}(h)$ $-$ $\rho (h)\}$
$-$ $\omega (h)\mathcal{Z}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $\overset{p}{
\rightarrow }0$. Therefore, under the null hypothesis:
\begin{equation}
\left\vert \vartheta \left( \left[ \sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}
_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) -\vartheta \left( \left[
\omega (h)\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\left\{ \frac{\epsilon
_{t}\epsilon _{t-h}-\mathcal{D}(h)^{\prime }\mathcal{A}m_{t}}{E\left[
\epsilon _{t}^{2}\right] }\right\} \right] _{h=1}^{\mathcal{L}_{n}}\right)
\right\vert \overset{p}{\rightarrow }0. \label{maxmax}
\end{equation}
Finally, if $\vartheta (\cdot )$ is the maximum transform, and $(n^{\min
\{\zeta ,\kappa ,1/2\}}/\ln (n))\mathcal{\tilde{X}}_{n}(h)$ for all $h$\ is
uniformly integrable, then $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa
,1/2\}}/\ln (n))$ must be satisfied.
\end{lemma}
\begin{remark}
\label{rm: bounded_Ln}\normalfont The sequence $\{\mathcal{L}_{n}\}$ is not
unique because for any other $\{\mathcal{\mathring{L}}_{n}\}$, $\mathcal{
\mathring{L}}_{n}$ $\rightarrow $ $\infty $ and $\lim \sup_{n\rightarrow
\infty }\{\mathcal{\mathring{L}}_{n}/\mathcal{L}_{n}\}$ $<$ $1$,
monotonicity $\vartheta (a_{k})$ $\leq $ $\vartheta ([a_{k}^{\prime
},c_{l-k}^{\prime }]^{\prime })$ $\forall a_{k}$ $\in $ $\mathbb{R}^{k}$ and
$\forall c_{l-k}$ $\in $ $\mathbb{R}^{l-k}$\ implies as $n$ $\rightarrow $ $
\infty $:
\begin{eqnarray}
&&\vartheta \left( \left[ \sqrt{n}\hat{\omega}_{n}(h)\{\hat{\rho}
_{n}(h)-\rho (h)\}-\omega (h)\mathcal{Z}_{n}(h)\right] _{h=1}^{\mathcal{
\mathring{L}}_{n}}\right) \label{phiphi} \\
&&\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\leq \vartheta \left( \left[
\sqrt{n}\hat{\omega}_{n}(h)\{\hat{\rho}_{n}(h)-\rho (h)\}-\omega (h)\mathcal{
Z}_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \overset{p}{\rightarrow }0,
\notag
\end{eqnarray}
hence $|\vartheta (\sqrt{n}[\hat{\omega}_{n}(h)\{\hat{\rho}_{n}(h)$ $-$ $
\rho (h)\}]_{h=1}^{\mathcal{\mathring{L}}_{n}})$ $-$ $\vartheta (\left[
\omega (h)\mathcal{Z}_{n}(h)\right] _{h=1}^{\mathcal{\mathring{L}}_{n}})|$ $
\overset{p}{\rightarrow }$ $0$. Indeed, by an identical argument (\ref{phiphi})
trivially applies for \emph{any} positive integer sequence $\{\mathcal{
\mathring{L}}_{n}\}$ that satisfies $\lim \sup_{n\rightarrow \infty }\{
\mathcal{\mathring{L}}_{n}/\mathcal{L}_{n}\}$ $<$ $1$, covering the case $
\mathcal{\mathring{L}}_{n}$ $\rightarrow $ $(0,\infty )$. All subsequent
results therefore extend to this general case, but we do not highlight it
because it does not promote a consistent test.
\end{remark}
\begin{remark}
\normalfont An upper bound on $\mathcal{L}_{n}$ requires the mapping $
\vartheta $ to be specified, so we work with the maximum. By Lemma \ref
{lm:max_p} in Appendix \ref{app:proofs},\ if $(n^{\min \{\zeta ,\kappa
,1/2\}}/\ln (n))\mathcal{\tilde{X}}_{n}(h)$ is uniformly integrable, then
from standard arguments $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa
,1/2\}}/\ln (n))$ must hold. We do not tackle the case where uniform
integrability fails to hold. Additional technical conditions laid out in
\citet[Appendix
G]{HillMotegi_supp_mat} yield uniform integrability. In particular, we
require $||1/\hat{\gamma}_{n}(0)||_{p}$ $=$ $O(1)$ for some $p$ $>$ $1$,
which generally cannot be easily verified. Further, $E[\epsilon _{t}^{6}]$ $<
$ $\infty $, the plug-in $\sqrt{n}||\hat{\theta}_{n}$ $-$ $\theta _{0}||_{4}$
$=$ $O(1)$ and plug-in remainder\ $||n^{\lambda }\mathcal{R}_{m}(n)||_{q}$ $=
$ $O(1)$ for some $\lambda $ $>$ $0$ and $q$ $>$ $2$, and test statistic
weight $n^{\min \{\kappa ,\zeta ,1/2\}}||\hat{\omega}_{n}(h)-\omega (h)||_{r}
$ $=$ $O(1)$\ for some $r$ $>$ $2$. Conditions like $\sqrt{n}||\hat{\theta}
_{n}$ $-$ $\theta _{0}||_{4}$ $=$ $O(1)$ generally require moment conditions
higher than $E[\epsilon _{t}^{6}]$ $<$ $\infty $: see
\citet[Appendix
G: Example 1]{HillMotegi_supp_mat}. These are relatively mild conditions and
hold for most of the data generating processes under the simulation study in
Section \ref{sec:sim}.\footnote{
Some processes in the simulation study evidently fail to have higher
moments, but are used to demonstrate the sensitivity of the proposed test to
moment condition failure. See Section \ref{sec:sim}.} In those cases $\zeta $
$=$ $1/2$, and $\hat{\omega}_{n}(h)$ $=$ $\omega (h)$ $=$ $1$\ so that $
\kappa $ $=$ $\infty $, hence $\mathcal{L}_{n}$ $=$ $O(\sqrt{n}/\ln (n))$.
\end{remark}
The proof of Lemma \ref{lm:corr_expan}\ relies on a new two-fold argument.
It is new because it cannot rely on Gaussian approximation theory for high
dimensional processes. First we prove $\mathcal{A}_{\mathcal{L},n}$ $\equiv $
$\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)\{\hat{\rho}_{n}(h)$ $-$ $\rho (h)\}$
$-$ $\omega (h)\mathcal{Z}_{n}(h)]_{h=1}^{\mathcal{L}})$ $\overset{p}{
\rightarrow }$ $0$ for each $\mathcal{L}$\ $\in $ $\mathbb{N}$. Using
standard weak convergence theory, this does not suffice to show $\mathcal{A}
_{\mathcal{L}_{n},n}$ $\overset{p}{\rightarrow }$ $0$ for some $\mathcal{L}
_{n}$ $\rightarrow $ $\infty $. This follows because weak convergence, in
the broad sense of \citet{HoffJorg1984,HoffJorg1991}, to a Gaussian limit
with a version that has uniformly bounded and uniformly continuous sample
paths, is equivalent to convergence in finite dimensional distributions, the
existence of a pseudo metric $d$ on $\mathbb{N}$\ such that $(\mathbb{N},d)$ is a totally
bounded pseudo metric space, and a stochastic equicontinuity property based
on $d$ holds. If $d$ is the Euclidean distance, for example, then $(\mathbb{N},d)$ is
not totally bounded because $\mathbb{N}$\ is not compact. See
\citet{Dudley1978,Dudley1984} and \citet[Chapters
9-10]{Pollard1990}. We take an approach different from Hoffmann-J{\o}rgensen's
\citeyearpar{HoffJorg1984} notion of weak convergence. We prove that $
\mathcal{A}_{\mathcal{L},n}$ $\overset{p}{\rightarrow }$ $0$ for each $
\mathcal{L}$\ $\in $ $\mathbb{N}$ directly implies $\mathcal{A}_{\mathcal{L}
_{n},n}$ $\overset{p}{\rightarrow }$ $0$ for some sequence of positive
integers $\{\mathcal{L}_{n}\}$ that satisfies $\mathcal{L}_{n}$ $\rightarrow
$ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$. See Lemmas \ref{lm:array_conv}-
\ref{lm:max_dist} in Appendix \ref{app:proofs}. Thus, by sidestepping the
\citet{HoffJorg1984,HoffJorg1991} view of weak convergence, which requires
more than convergence in finite dimensional distributions, we are able to
show that such convergence suffices. Our approach has deep roots in Ramsey
theory \citep{Ramsey1930}, based on its implications for monotone subsequences
\citep[e.g.][]{BoehmeRosenfeld1974,Thomason1988,Myers2002} as applied to
Fr\'{e}chet spaces \citep{BoehmeRosenfeld1974}.
The same array argument, coupled with extant central limit theory for NED
arrays, yields the following fundamental Gaussian approximation result for
the Lemma \ref{lm:corr_expan} approximation process $\{\mathcal{Z}_{n}(h)$ $
: $ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}_{n}\}$. Recall $\mathcal{Z}_{n}(h)$
$\equiv $ $1/\sqrt{n}\sum_{t=1+h}^{n}z_{t}(h)$ where $z_{t}(h)$ $\equiv $ $
r_{t}(h)$ $-$ $\rho (h)r_{t}(0)$ and $r_{t}(h)$ $\equiv $ $\{\epsilon
_{t}\epsilon _{t-h}$ $-$ $E[\epsilon _{t}\epsilon _{t-h}]$ $-$ $\mathcal{D}
(h)^{\prime }\mathcal{A}m_{t}\}/E[\epsilon _{t}^{2}]$.
\begin{lemma}
\label{lm:clt_max}Let Assumptions \ref{assum:dgp} and \ref{assum:plug} hold.
Let $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ be a zero mean Gaussian
process with variance $\lim_{n\rightarrow \infty
}1/n\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(h)]$ $<$ $\infty $, and covariance
function \linebreak $E[\mathcal{Z}(h)\mathcal{Z}(\tilde{h})]$ $=$ $
\lim_{n\rightarrow \infty }1/n\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(\tilde{h})]$.
Then for some $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ and some
non-unique monotonic sequence of positive integers $\{\mathcal{L}_{n}\}$, $
\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$: $
\vartheta ([\omega (h)\mathcal{Z}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $\overset
{d}{\rightarrow }$ $\vartheta ([\omega (h)\mathcal{Z}(h)]_{h=1}^{\infty }).$
\end{lemma}
\begin{remark}
\normalfont If an estimator $\hat{\theta}_{n}$ is not required then $
\mathcal{D}(h)$ $=$ $0$ and the covariance function $E[\mathcal{Z}(h)
\mathcal{Z}(\tilde{h})]$ reduces accordingly. If additionally $\epsilon _{t}$
is iid under the null then $E[\mathcal{Z}(h)\mathcal{Z}(\tilde{h})]$ $=$ $0$
for $h$ $\neq $ $\tilde{h}$, while $E[\mathcal{Z}(h)^{2}]$ $=$ $E[\epsilon
_{t}^{2}\epsilon _{t-h}^{2}]/(E[\epsilon _{t}^{2}])^{2}$, which equals $1$ if
$h$ $\neq $ $0$, and otherwise $E[\epsilon _{t}^{4}]/(E[\epsilon
_{t}^{2}])^{2}$. If $\hat{\theta}_{n}$ is not required
then we can bypass our array convergence argument and use the Gaussian
approximation argument in \cite{ZhangWu2017}, under their moment contraction
assumptions.
\end{remark}
\begin{remark}
\label{rm:Ln:gaussian}\normalfont An upper bound on the rate $\mathcal{L}_{n}
$ $\rightarrow $ $\infty $\ can be provided in the maximum case under
various dependence settings. For example,
\citet[Appendix
B]{Chernozhukov_etal_manymom_2014} impose boundedness and a $\beta $-mixing
property, while \cite{ZhangWu2014} and \cite{ZhangWu2017} work with a
functional dependence property. Under their conditions a limit theory that
supports our Lemma \ref{lm:corr_expan} expansion is evidently possible,
while the bound on $\mathcal{L}_{n}$ $\rightarrow $ $\infty $\ follows from
our Lemma \ref{lm:max_p} and results in
\citet[Appendix
G]{HillMotegi_supp_mat}. In that case, their Theorem 3.2 will apply, hence $
\mathcal{L}_{n}\left( \ln \left( \mathcal{L}_{n}\right) \right) ^{3q/2}$ $=$
$o(n^{q/2-1+\iota })$ for some $\iota $ $>$ $0$, provided $E|z_{t}(h)|^{q}$ $
<$ $\infty $ for all $h$ and some $q$ $\geq $ $4$.\footnote{
See \citet[Theorem 3.2 and p. 1900]{ZhangWu2017}. They yield the optimal
bound $\mathcal{L}_{n}\left( \ln \left( \mathcal{L}_{n}\right) \right)
^{3q/2}$ $=$ $o(n^{q/2-1})$, up to a multiplicative logarithmic term that is
trumped by $n^{\iota }$ for any tiny $\iota $ $>$ $0$.} The latter moment
bound generally requires $\epsilon _{t}$ to be $L_{8}$-bounded. Put $q$ $=$ $
8$ to yield $\mathcal{L}_{n}\left( \ln \left( \mathcal{L}_{n}\right) \right)
^{12}$ $=$ $o\left( n^{3+\iota }\right) $. Hence $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ as fast as $Kn^{3-\iota }$ for tiny $\iota $ $>$ $0$
\ is allowed. Since we require $\mathcal{L}_{n}$ $=$ $o(n)$ for sample
covariance consistency, the binding upper bound on $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ comes from Lemma \ref{lm:corr_expan}, e.g. $\mathcal{
L}_{n}$ $=$ $O(\sqrt{n}/\ln (n))$ under standard regularity conditions and $
\hat{\omega}_{n}(h)$ $=$ $\omega (h)$ $=$ $1$. We leave for future study a
Gaussian approximation theory for high dimensional, heterogeneous and
possibly non-stationary NED processes.
\end{remark}
Combine Lemmas \ref{lm:corr_expan} and \ref{lm:clt_max} and invoke the
triangle inequality to yield the following main result.
\begin{theorem}
\label{th:max_corr_expan}Under Assumptions \ref{assum:dgp} and \ref
{assum:plug}, $\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)\{\hat{\rho}_{n}(h)$ $
- $ $\rho (h)\}]_{h=1}^{\mathcal{L}_{n}})$ $\overset{d}{\rightarrow }$ $
\vartheta ([\omega (h)\mathcal{Z}(h)]_{h=1}^{\infty })$ for some monotonic
sequence of positive integers $\{\mathcal{L}_{n}\}$ that is not unique, $
\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$,
where $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is a zero mean
Gaussian process with variance $\lim_{n\rightarrow \infty
}n^{-1}\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(h)]$ $<$ $\infty $, and covariance
function $\lim_{n\rightarrow \infty }n^{-1}\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(
\tilde{h})]$. Therefore under the null hypothesis $\vartheta ([\sqrt{n}\hat{
\omega}_{n}(h)\hat{\rho}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $\overset{d}{
\rightarrow }$ $\vartheta ([\omega (h)\mathcal{Z}(h)]_{h=1}^{\infty })$,
where $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is a zero mean
Gaussian process with variance $\lim_{n\rightarrow \infty
}n^{-1}\sum_{s,t=1}^{n}E[r_{s}(h)r_{t}(h)]$ $<$ $\infty $ and $r_{t}(h)$ $
\equiv $ $\{\epsilon _{t}\epsilon _{t-h}-$ $\mathcal{D}(h)^{\prime }\mathcal{
A}m_{t}\}/E[\epsilon _{t}^{2}]$. Moreover, if $\vartheta (\cdot )$ is the
maximum transform, and $(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))\mathcal{
\tilde{X}}_{n}(h)$ for all $h$\ is uniformly integrable, where $\mathcal{
\tilde{X}}_{n}(h)$ is defined in (\ref{expans_rate}), then $\mathcal{L}_{n}$
$=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$ must be satisfied.
\end{theorem}
We now have a fundamental result for the maximum weighted autocorrelation
under white noise.
\begin{corollary}
\label{cor:expand_null}Under Assumptions \ref{assum:dgp} and \ref{assum:plug}
, $\max_{1\leq h\leq \mathcal{L}_{n}}|\sqrt{n}\hat{\omega}_{n}(h)\{\hat{\rho}
_{n}(h)$ $-$ $\rho (h)\}|$ $\overset{d}{\rightarrow }$ $\max_{1\leq h\leq
\infty }|\omega (h)\mathcal{Z}(h)|$ for some monotonic sequence of positive
integers $\{\mathcal{L}_{n}\}$ that is not unique, $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$, where $\{\mathcal{Z
}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is defined in Theorem \ref
{th:max_corr_expan}. Therefore, under the white noise null hypothesis $
\max_{1\leq h\leq \mathcal{L}_{n}}|\sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}
_{n}(h)|$ $\overset{d}{\rightarrow }$ $\max_{1\leq h\leq \infty }|\omega (h)
\mathcal{Z}(h)|$. Further, if $(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))
\mathcal{\tilde{X}}_{n}(h)$ for all $h$\ is uniformly integrable, where $
\mathcal{\tilde{X}}_{n}(h)$ is defined in (\ref{expans_rate}), then $
\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$ must be
satisfied.
\end{corollary}
\begin{remark}
\normalfont The conclusions of Theorem \ref{th:max_corr_expan} and Corollary
\ref{cor:expand_null} do not require $\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)
\hat{\rho}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ to have a well-defined limit
law under the null. This is decidedly different from the max-correlation
literature in which $\lim_{n\rightarrow \infty }\max_{1\leq h\leq \mathcal{L}
_{n}}|\omega (h)\mathcal{Z}(h)|$ is characterized under suitable conditions
that ensure asymptotic independence $E[\mathcal{Z}(i)\mathcal{Z}(j)]$ $
\rightarrow $ $0$ as $|i$ $-$ $j|$ $\rightarrow $ $\infty $. See, e.g.,
\citet[Chapter 6]{Leadbetter_etal1983} and \citet{Husler1986}. We do not
require asymptotic independence, nor therefore convergence in law.
\end{remark}
\subsection{Bootstrapped P-Value Test\label{sec:p_val}}
We work with Shao's (\citeyear{Shao2011_JoE}) dependent wild bootstrap.
Recall $m_{t}(\theta )$ are the estimating equations for $\hat{\theta}_{n}$,
let $\widehat{\mathcal{A}}_{n}$ be a consistent estimator of $\mathcal{A}$\
in Assumption \ref{assum:plug}.c, and define
\begin{equation}
\mathcal{\hat{D}}_{n}(h)\equiv \frac{1}{n}\sum_{t=h+1}^{n}\left\{ \left(
\epsilon _{t}(\hat{\theta}_{n})s_{t}(\hat{\theta}_{n})+\frac{G_{t}(\hat{
\theta}_{n})}{\sigma _{t}(\hat{\theta}_{n})}\right) \epsilon _{t-h}(\hat{
\theta}_{n})+\epsilon _{t}(\hat{\theta}_{n})\left( \epsilon _{t-h}(\hat{
\theta}_{n})s_{t-h}(\hat{\theta}_{n})+\frac{G_{t-h}(\hat{\theta}_{n})}{
\sigma _{t-h}(\hat{\theta}_{n})}\right) \right\} . \label{D_hat}
\end{equation}
We now operate on an approximation of $\epsilon _{t}(\hat{\theta}
_{n})\epsilon _{t-h}(\hat{\theta}_{n})$ expanded around $\theta _{0}$ under $
H_{0}$, cf. Lemma \ref{lm:corr_expan}:
\begin{equation*}
\widehat{\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})\equiv \epsilon _{t}(\hat{
\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})-\mathcal{\hat{D}}
_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}m_{t}(\hat{\theta}_{n}).
\end{equation*}
In practice $G_{t}(\theta )$ and $\sigma _{t}(\theta )$ are typically
unobserved and must be iteratively approximated based on initial conditions.
Examples include linear and nonlinear AR-GARCH models. In such cases $
\mathcal{\hat{D}}_{n}(h)$ is infeasible. \cite{Meitz_Saikkonen_2011},
amongst others, lay out sufficient conditions for the QML estimator for a
large class of AR-GARCH models to be consistent and asymptotically normal,
including smoothness conditions similar to Assumption \ref{assum:plug} that
include Lipschitz properties imposed on $f(x_{t},\phi )$ and $\sigma
_{t}(\theta )$. In their setting, initial conditions vanish geometrically
fast and therefore do not play a role in asymptotics both for the QML
estimator, and for sample statistics like a feasible version of $\mathcal{
\hat{D}}_{n}(h)$. See their Assumptions DGP, E, and C1-C3.
\subsection{Dependent Wild Bootstrap\label{sec:dwb}}
The wild bootstrap was originally proposed for iid and mds sequences
\citep{Wu1986,Liu1988,Hansen1996}. \citet{Shao2010,Shao2011_JoE} generalizes
the idea to allow for dependent sequences. \cite{Shao2010} allows for
general dependence by using block-wise iid random draws as weights, with a
covariance function that equals a kernel function. His requirements rule out
a truncated kernel, but allow a Bartlett kernel amongst others. We follow
\cite{Shao2011_JoE} whose draws effectively have a truncated kernel
covariance function.
The algorithm is as follows. Set a block size $b_{n}$ such that $1\leq
b_{n}<n$, $b_{n}$ $\rightarrow $ $\infty $ and $b_{n}/n$ $\rightarrow $ $0$.
Denote the blocks by $\mathcal{B}_{s}=\{(s-1)b_{n}+1,\dots ,sb_{n}\}$ with $
s=1,\dots ,n/b_{n}$. Assume for simplicity that the number of blocks $
n/b_{n} $ is an integer. Generate iid random numbers $\{\xi _{1},\dots ,\xi
_{n/b_{n}}\}$ with $E[\xi _{i}]$ $=$ $0$, $E[\xi _{i}^{2}]$ $=$ $1$, and $
E[\xi _{i}^{4}]$ $<$ $\infty $. Define an auxiliary variable $\varphi
_{t}=\xi _{s}$ if $t$ $\in $ $\mathcal{B}_{s}$. Compute $\mathcal{\hat{T}}
_{n}^{(dw)}$ $\equiv $ $\vartheta ([\sqrt{n}\hat{\rho}_{n}^{(dw)}(h)]_{h=1}^{
\mathcal{L}_{n}})$ from:
\begin{equation}
\hat{\rho}_{n}^{(dw)}(h)\equiv \frac{1}{1/n\sum_{t=1}^{n}\epsilon _{t}^{2}(
\hat{\theta}_{n})}\frac{1}{n}\sum_{t=1+h}^{n}\varphi _{t}\left\{ \widehat{
\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})-\frac{1}{n}\sum_{s=1+h}^{n}\widehat{
\mathcal{E}}_{n,s,h}(\hat{\theta}_{n})\right\} . \label{R_hat_dwb}
\end{equation}
Repeat $M$ times, resulting in bootstrapped statistics $\{\mathcal{\hat{T}}
_{n,i}^{(dw)}\}_{i=1}^{M}$, and an approximate p-value $\hat{p}_{n,M}^{(dw)}$
$\equiv $ $1/M\sum_{i=1}^{M}I(\mathcal{\hat{T}}_{n,i}^{(dw)}$ $\geq $ $
\mathcal{\hat{T}}_{n})$. The proposed test rejects the null at nominal size $
\alpha $ when $\hat{p}_{n,M}^{(dw)}$ $<$ $\alpha $. The standard wild bootstrap
corresponds to block size $b_{n}$ $=$ $1$ and no re-centering by $1/n\sum_{s=1+h}^{n}
\widehat{\mathcal{E}}_{n,s,h}(\hat{\theta}_{n})$.
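A minimal sketch of this resampling scheme (our own illustration; the
function and variable names are hypothetical) is given below for the maximum
transform with $\hat{\omega}_{n}(h)$ $=$ $1$, assuming the residuals and the
expanded cross-products $\widehat{\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})$
have already been computed:
\begin{verbatim}
import numpy as np

def dwb_pvalue(eps, E_hat, b_n, M, seed=0):
    """Dependent wild bootstrap p-value for the max-correlation statistic.

    eps   : residuals eps_t(theta_hat), length n
    E_hat : list where E_hat[h-1] is the length-(n-h) array of expanded
            cross-products for lag h, ordered t = h+1,...,n
    b_n   : block size;  M : number of bootstrap replications
    """
    rng = np.random.default_rng(seed)
    n, L = len(eps), len(E_hat)
    gamma0 = np.mean(eps ** 2)

    # observed statistic: max_h |sqrt(n) rho_hat_n(h)|
    rho = np.array([np.dot(eps[h:], eps[:n - h]) / n for h in range(1, L + 1)]) / gamma0
    T_obs = np.sqrt(n) * np.max(np.abs(rho))

    T_boot = np.empty(M)
    for i in range(M):
        xi = rng.standard_normal(int(np.ceil(n / b_n)))  # iid draws, one per block
        phi = np.repeat(xi, b_n)[:n]                     # phi_t = xi_s for t in block s
        rho_dw = np.array([
            np.sum(phi[h:] * (E_hat[h - 1] - E_hat[h - 1].sum() / n)) / n
            for h in range(1, L + 1)
        ]) / gamma0
        T_boot[i] = np.sqrt(n) * np.max(np.abs(rho_dw))

    return np.mean(T_boot >= T_obs)  # reject at nominal size alpha if below alpha
\end{verbatim}
Standard normal $\xi _{i}$ satisfy the moment requirements above; the sketch
simply truncates the final block when $n/b_{n}$ is not an integer, whereas
the text assumes it is.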
We use a sample version of the first order expansion variable $\epsilon
_{t}\epsilon _{t-h}$ $-$ $\mathcal{D}(h)^{\prime }\mathcal{A}m_{t}$ from
Lemma \ref{lm:corr_expan}. It is \textit{incorrect} to use just $\epsilon
_{t}(\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})$, as with:
\begin{equation}
\hat{\rho}_{n}^{(dw)}(h)\equiv \frac{1}{1/n\sum_{t=1}^{n}\epsilon _{t}^{2}(
\hat{\theta}_{n})}\frac{1}{n}\sum_{t=1+h}^{n}\varphi _{t}\left\{ \epsilon
_{t}(\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})-\frac{1}{n}
\sum_{s=1+h}^{n}\epsilon _{s}(\hat{\theta}_{n})\epsilon _{s-h}(\hat{\theta}
_{n})\right\} . \label{rho_no_ex}
\end{equation}
This follows since $\varphi _{t}$ is mean zero and independent of the data,
hence $1/n\sum_{t=1+h}^{n}\varphi _{t}\epsilon _{t}(\hat{\theta}
_{n})\epsilon _{t-h}(\hat{\theta}_{n})$ $=$ $1/n\sum_{t=1+h}^{n}\varphi
_{t}\epsilon _{t}\epsilon _{t-h}$ $+$ $o_{p}(1/\sqrt{n})$, yet $
1/n\sum_{s=1+h}^{n}\epsilon _{s}(\hat{\theta}_{n})\epsilon _{s-h}(\hat{\theta
}_{n})$ $=$ $E[\epsilon _{t}\epsilon _{t-h}]$ $+$ $O_{p}(1/\sqrt{n})$ by
standard first order arguments and $E[m_{t}]$ $=$ $0$. Hence, $\sqrt{n}\hat{
\rho}_{n}^{(dw)}(h)$ from (\ref{rho_no_ex}) is equivalent to $1/\sqrt{n}
\sum_{t=1+h}^{n}\varphi _{t}\epsilon _{t}\epsilon _{t-h}/E[\epsilon
_{t}^{2}] $ asymptotically with probability approaching one, which under the
null has the same asymptotic properties as $1/\sqrt{n}\sum_{t=1+h}^{n}
\epsilon _{t}\epsilon _{t-h}/E[\epsilon _{t}^{2}]$. The latter is not
equivalent to the Lemma \ref{lm:corr_expan} first order expansion process $\{
\mathcal{Z}_{n}(h)\}$ because asymptotic information from the estimator $
\hat{\theta}_{n}$ has been scrubbed out by the bootstrap variable $\varphi
_{t}$. The bootstrapped $\hat{\rho}_{n}^{(dw)}(h)$ in (\ref{R_hat_dwb}),
however, contains the required information.
\cite{Shao2011_JoE} imposes Wu's (\citeyear{Wu2005}) moment contraction
property with an eighth moment, which we denote MC$_{8}$
\citep[see Appendix
B in][for details]{HillMotegi_supp_mat}. He then applies a Hilbert space
approach for weak convergence of a spectral density process $\{\hat{S}
_{n}(\lambda )$ $:$ $\lambda $ $\in $ $[0,\pi ]\}$ to yield convergence for $
\int_{0}^{\pi }\hat{S}_{n}^{2}(\lambda )d\lambda $.\footnote{
See, e.g., \cite{PolitisRomano1994} for applications of weak convergence in
a Hilbert space to bootstrapped statistics.} Only observed data are
considered. There are several reasons why a different approach is required
here. First, $\hat{S}_{n}(\lambda )$ is a sum of all $\{\hat{\gamma}_{n}(h)$
$:$ $1$ $\leq $ $h$ $\leq $ $n$ $-$ $1\}$, and
\citet[proof of Theorem
3.1]{Shao2011_JoE} uses a variance of conditional variance bound for
probability convergence based on Chebyshev's inequality. This requires $
E[\epsilon _{t}^{8}]$ $<$ $\infty $ and a complicated eighth order joint
cumulant series bound which is only known to hold when $\epsilon _{t}$ is
\textit{geometric} MC$_{8}$ \citep[see][]{ShaoWu2007}. Second, we only need
convergence in distribution of $\sqrt{n}\hat{\gamma}_{n}(h)$, coupled with a
new array convergence result, which are easier to handle than weak
convergence of $\{\hat{S}_{n}(\lambda )$ $:$ $\lambda $ $\in $ $[0,\pi ]\}$
on a Hilbert space. Third, in the Hilbert space approach the supremum is not
a continuous functional on the space of square integrable (with respect to
Lebesgue measure) functions on $[0,\pi ]$. It is therefore not clear how, or
if, Shao's (\citeyear{Shao2011_JoE}: Theorem 3.1) proof applies to our
statistic.
In order to prove that the bootstrapped $\hat{\rho}_{n}^{(dw)}(h)$ has the
same finite dimensional limit distributions as $\hat{\rho}_{n}(h)$ under the
null, it is helpful for the equations $m_{t}(\theta )$ in the Assumption
\ref{assum:plug}.c expansion $\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $
=$ $\mathcal{A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $o_{p}(1)$ to
be a smooth parametric function for a required uniform law of large numbers.
As with response smoothness under Assumption \ref{assum:plug}.a,b, more
general smoothness properties are achievable at the expense of more intense
notation.\footnote{
Nonsmoothness can be allowed provided certain bracketing or other smoothness
properties are applied like a Lipschitz condition or the
Vapnick-Chervonenkis class, which ensure a required stochastic
equicontinuity condition. See, e.g., \cite{Andrews1987}, \cite{ArconesYu1994}
and \cite{GaenssslerZiegler1994}.}
\begin{assumption2c'}
$\hat{\theta}_{n}$ $\in $ $\Theta $ for each $n$, and for a unique interior
point $\theta _{0}$ $\in $ $\Theta $ we have $\sqrt{n}(\hat{\theta}_{n}$ $-$
$\theta _{0})$ $=$ $\mathcal{A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$
$\mathcal{R}_{m}(n)$ where the $k_{m}$ $\times $ $1$ stochastic remainder $
\mathcal{R}_{m}(n)$ $=$ $O_{p}(n^{-\zeta })$\ for some $\zeta $ $>$ $0$,
with $\mathcal{F}_{t}$-measurable estimating equations $m_{t}$ $=$ $
[m_{i,t}]_{i=1}^{k_{m}}$ $:$ $\Theta $ $\rightarrow $ $\mathbb{R}^{k_{m}}$
for $k_{m}$ $\geq $ $k_{\theta }$; and non-stochastic $\mathcal{A}$ $\in $ $
\mathbb{R}^{k_{\theta }\times k_{m}}$. $m_{t}(\theta )$ is twice
continuously differentiable, $(\partial /\partial \theta )^{j}m_{t}(\theta )$
is Borel measurable for each $\theta $ and $j$ $=$ $1,2$, and $
E[\sup_{\theta \in \Theta }|(\partial /\partial \theta )^{i}m_{j,t}(\theta
)|]$ $<$ $\infty $ for each $i$ $=$ $0,1,2$ and $j$ $=$ $1,...,k_{m}$.
Moreover, zero mean $m_{t}$ is stationary, ergodic, $L_{r/2}$-bounded and $
L_{2}$-NED with size $1/2$ on $\{\upsilon _{t}\}$, where $r$ $>$ $4$ and $
\{\upsilon _{t}\}$\ appear in Assumption \ref{assum:dgp}.b.
\end{assumption2c'}
The bootstrapped p-value leads to a valid and consistent test. Note $\kappa $
$>$ $0$ in $\hat{\omega}_{n}(h)$ $=$ $\omega (h)+O_{p}(1/n^{\kappa })$ and $
\zeta $ $>$ $0$ in $\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $=$ $
\mathcal{A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $O_{p}(n^{-\zeta })$
, cf. Assumptions \ref{assum:dgp}.c and \ref{assum:plug}.c$^{\prime }$.
\begin{theorem}
\label{th:p_dep_wild_boot}Let Assumptions \ref{assum:dgp}, \ref{assum:plug}
.a,b,c$^{\prime }$,d hold, and let the number of bootstrap samples $M$ $=$ $
M_{n}$ $\rightarrow $ $\infty $. There exists a non-unique monotonic
sequence of maximum lags $\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$, such that under $
H_{0}$, $P(\hat{p}_{n,M}^{(dw)}$ $<$ $\alpha )$ $\rightarrow $ $\alpha $,
and if $H_{0}$ is false then $P(\hat{p}_{n,M}^{(dw)}$ $<$ $\alpha )$ $
\rightarrow $ $1$. Further, $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa
,1/2\}}/\ln (n))$ must be satisfied.
\end{theorem}
\begin{remark}
\normalfont A similar theory applies to an approximate p-value computed by
wild bootstrap where $\varphi _{t}$ is iid $N(0,1)$, provided $\epsilon _{t}$
forms a mds under the null.
\end{remark}
\begin{remark}
\normalfont The test operates on $\sqrt{n}\hat{\rho}_{n}(h)$ and $\sqrt{n}
\hat{\rho}_{n}^{(dw)}(h)$ and therefore achieves the parametric rate of
local asymptotic power against the sequence of alternatives: $H_{1}^{L}$ $:$
$\rho (h)$ $=$ $r(h)/\sqrt{n}$ for each $h$ where $r(h)$ are fixed
constants, $|r(h)|$ $\leq $ $\sqrt{n}$. See
\citet[Appendix D,
especially Theorem D.1]{HillMotegi_supp_mat}.
\end{remark}
\begin{remark}
\normalfont The bound $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa
,1/2\}}/\ln (n))$ generally must hold. A uniform integrability condition is
not imposed here, as it is in Lemma \ref{lm:corr_expan}, cf. Lemma \ref
{lm:max_p}.a, since the proof operates on conditional probabilities. The
latter are bounded and therefore uniformly integrable. Further, the
conditional probabilities imbed any given transform $\vartheta $ over lags $
1,...,\mathcal{L}$. The maximum transform requirement from Lemma \ref
{lm:max_p} for bounding $\mathcal{L}_{n}$ is merely applied to the
conditional probabilities themselves over lags $\mathcal{L}$ $\in $ $\{1,...,
\mathcal{L}_{n}\}$. See the proof of Theorem \ref{th:p_dep_wild_boot}, cf.
Lemma \ref{lm:max_cor*}.b.
\end{remark}
\begin{remark}
\label{rm:Hong_local}\normalfont Hong's (\citeyear{Hong1996}:\ Section 2)
encompassing class of statistics, which includes \linebreak $(2\mathcal{L}
_{n})^{-1/2}\sum_{h=1}^{\mathcal{L}_{n}}\hat{\omega}_{n}(h)\{n\hat{\rho}
_{n}^{2}(h)$ $-$ $1\}$, does not achieve the parametric rate of convergence
due to the normalizing term $\mathcal{L}_{n}^{-1/2}$. The implied rate is $
n^{1/2}/\mathcal{L}_{n}^{1/4}$, hence Hong's (\citeyear{Hong1996}) class of
statistics has non-trivial power against $n^{1/2}/\mathcal{L}_{n}^{1/4}$
-local alternatives. We bypass self-normalization by working solely in a
bootstrap framework based on finite dimensional asymptotics. As noted above,
our transform class $\vartheta $\ does not allow for self-normalized
statistics. Moreover, we do not need to know the limit distribution of $
\vartheta ([\sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}_{n}(h)]_{h=1}^{\mathcal{L}
_{n}})$, nor even be guaranteed that it has one. Our approach eases the
burden of self-normalization with an increasing maximum lag: we retain $
\sqrt{n}$-asymptotics and therefore $\sqrt{n}$-local power, even for Hong's (
\citeyear{Hong1996}) (non-normalized) $\sum_{h=1}^{\mathcal{L}_{n}}\hat{
\omega}_{n}(h)n\hat{\rho}_{n}^{2}(h)$.
\end{remark}
\section{Automatic Maximum Lag Selection\label{sec:lag_select}}
We approach lag selection from the perspective of the practitioner by
providing a data-driven, or automatic, lag selection method. Our method
closely follows \cite{EscancianoLobato2009}, whose work is motivated by the
automatic Neyman test proposed in \cite{InglotLedwina2006}. Let $\mathcal{L}
_{n}^{\ast }$ denote the data-driven lag selected. Under $H_{0}$, Escanciano
and Lobato's (\citeyear{EscancianoLobato2009}) method leads to $P(\mathcal{L}
_{n}^{\ast }$ $=$ $1)$ $\rightarrow $ $1$ because higher lags do not provide
useful information and incur a high penalty for their use (see below for
details). Contrary to their Q-test method, however, we allow $\mathcal{L}
_{n} $ $\rightarrow $ $\infty $ and by using a bootstrap we do not need to
standardize the sample autocorrelations.
In order to ease notation, we only work with the max-correlation statistic
and weight $\hat{\omega}_{n}(h)$ $=$ $1$, but all subsequent results carry
over to the general transform $\vartheta $ and general $\hat{\omega}_{n}(h)$
$\overset{p}{\rightarrow }$ $\omega (h)$ $>$ $0$ with few additional proof
steps. Hence $\kappa $ $=$ $\infty $ in Assumption \ref{assum:dgp}.c.
The optimal lag $\mathcal{L}_{n}^{\ast }$ is chosen from a set $\{1,...,
\mathcal{\bar{L}}_{n}\}$ for some pre-chosen upper-bound $\mathcal{\bar{L}}
_{n}$ $\rightarrow $ $\infty $. In the case of the maximum and $\hat{\omega}
_{n}(h)$ $=$ $\omega (h)$ $=$ $1$, we have from expansion Lemma \ref
{lm:corr_expan} and dependent wild bootstrap Theorem \ref{th:p_dep_wild_boot}
that $\mathcal{\bar{L}}_{n}$ $=$ $O(n^{\min \{\zeta ,1/2\}}/\ln (n))$ must
hold, where $\zeta $ $>$ $0$ appears in the Assumption \ref{assum:plug}.c or
\ref{assum:plug}.c$^{\prime }$ plug-in expansion $\sqrt{n}(\hat{\theta}_{n}$
$-$ $\theta _{0})$ $=$ $\mathcal{A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$
$+$ $O_{p}(n^{-\zeta })$. Under standard regularity conditions many plug-in
estimators will satisfy $\zeta $ $=$ $1/2$, hence $\mathcal{\bar{L}}_{n}$ $=$
$O(\sqrt{n}/\ln (n))$. We use the integer part of $\delta \sqrt{n}/(\ln (n))$
for certain $\delta $ $>$ $0$\ in our simulation study below. We only
consider sequences $\{\mathcal{L}_{n}\}$ that satisfy $\mathcal{L}_{n}/
\mathcal{\bar{L}}_{n}\rightarrow \lbrack 0,K]$ for any finite $K$ $>$ $0$
and we assume the results of Section \ref{sec:max_corr} hold for any such $\{
\mathcal{L}_{n}\}$. We save notation by fixing $K$ $=$ $1$.
We also need to allow for selection of \textit{any} positive integer
sequence $\{\mathcal{L}_{n}\}$ that satisfies $\mathcal{L}_{n}/\mathcal{\bar{
L}}_{n}\rightarrow \lbrack 0,1]$, hence $\mathcal{L}_{n}$ $\rightarrow $ $
(0,\infty ]$ is assumed such that $\mathcal{L}_{n}$ $\rightarrow $ $\mathcal{
L}$, a finite positive integer, is possible. This is required because
Escanciano and Lobato's (\citeyear{EscancianoLobato2009}) method leads to $P(
\mathcal{L}_{n}^{\ast }$ $=$ $1)$ $\rightarrow $ $1$ under $H_{0}$. See
Remark \ref{rm: bounded_Ln} for discussion of the validity of our main
results when $\mathcal{L}_{n}$ $\rightarrow $ $(0,\infty )$.
\cite{EscancianoLobato2009} work with a penalized Q-statistic, with a
penalty that is an increasing function of the number of included lags.
Similarly, define the \textit{penalized max-correlation} test statistic
\begin{equation}
\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{L})\equiv \mathcal{\hat{T}}_{n}(
\mathcal{L})-\mathcal{P}_{n}(\mathcal{L})\text{ where }\mathcal{\hat{T}}_{n}(
\mathcal{L})\equiv \sqrt{n}\max_{1\leq h\leq \mathcal{L}}\left\vert \hat{\rho
}_{n}(h)\right\vert \label{Tp}
\end{equation}
with penalty function $\mathcal{P}_{n}(\cdot )$:
\begin{equation}
\mathcal{P}_{n}(\mathcal{L})=\left\{
\begin{array}{ll}
\sqrt{\mathcal{L}\ln n} & \text{if }\mathcal{\hat{T}}_{n}(\mathcal{L})\leq
\sqrt{q\ln n} \\
\sqrt{2\mathcal{L}} & \text{if }\mathcal{\hat{T}}_{n}(\mathcal{L})>\sqrt{
q\ln n}
\end{array}
\right. \label{Pn}
\end{equation}
where $q$ is a fixed positive constant. A small value of $q$ leads to the
AIC penalty $\sqrt{2\mathcal{L}}$ being chosen with high probability, while
a large $q$ promotes selection of the BIC penalty. \cite
{EscancianoLobato2009} use $q$ $=$ $2.4$, a choice motivated by their own
simulation evidence, and evidence from \cite{InglotLedwina2006}. \cite
{InglotLedwina2006} develop an automatic Neyman test, and the portmanteau
test explored in \cite{EscancianoLobato2009} belongs to a class of smooth
tests proposed in \cite{Neyman1937}. Hence, it is not surprising that their $
q$ values are similar. We find a slightly larger value $q$ $=$ $3$ leads to
strong results across null and alternative hypotheses for our test: see the
discussion in Section \ref{sec:sim_design}, and see Figure \ref
{fig:size_power_maxcorr_automatic_lag}.
The chosen maximum lag $\mathcal{L}_{n}^{\ast }$ for each $n$\ is:
\begin{equation}
\mathcal{L}_{n}^{\ast }=\min \left\{ \mathcal{L}_{n}:1\leq \mathcal{L}
_{n}\leq \mathcal{\bar{L}}_{n}:\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{L
}_{n})\geq \mathcal{\hat{T}}_{n}^{\mathcal{P}}(l)\text{ for each }l=1,...,
\mathcal{\bar{L}}_{n}\right\} . \label{Ln_*}
\end{equation}
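A minimal sketch of this selection rule (our own illustration; names are
hypothetical) for the unweighted max-correlation, with the residual series
and $\mathcal{\bar{L}}_{n}$ taken as given, is:
\begin{verbatim}
import numpy as np

def automatic_lag(eps, L_bar, q=3.0):
    """Automatic maximum lag: smallest maximizer of T_n(L) - P_n(L), 1 <= L <= L_bar."""
    n = len(eps)
    gamma0 = np.mean(eps ** 2)
    rho = np.array([np.dot(eps[h:], eps[:n - h]) / n for h in range(1, L_bar + 1)]) / gamma0
    T = np.sqrt(n) * np.maximum.accumulate(np.abs(rho))  # T_n(L) = sqrt(n) max_{h<=L} |rho_hat(h)|
    L_grid = np.arange(1, L_bar + 1)
    threshold = np.sqrt(q * np.log(n))
    penalty = np.where(T <= threshold, np.sqrt(L_grid * np.log(n)), np.sqrt(2.0 * L_grid))
    return int(L_grid[np.argmax(T - penalty)])  # argmax returns the first, hence smallest, maximizer
\end{verbatim}
Since \texttt{argmax} returns the first maximizer, the sketch delivers the
smallest maximizer, matching (\ref{Ln_*}).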
We choose $\{\mathcal{L}_{n}\}$ from those integer sequences satisfying $
\mathcal{L}_{n}$ $\geq $ $1$ and $\mathcal{L}_{n}$ $\leq $ $\mathcal{\bar{L}}
_{n}$ to ensure $\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $\rightarrow $ $
[0,1] $ holds in practice, but in theory we may select \textit{any} $\{
\mathcal{L}_{n}\}$ such that $\mathcal{L}_{n}$ $\geq $ $1$ and $\mathcal{L}
_{n}/\mathcal{\bar{L}}_{n}$ $\rightarrow $ $[0,1]$. Notice $l$ may be a
function of $n$, e.g. $l$ $=$ $\mathcal{\bar{L}}_{n}$ $-$ $1$. The penalties
$(\sqrt{\mathcal{L}\ln n},\sqrt{2\mathcal{L}})$ are related to Escanciano
and Lobato's (\citeyear{EscancianoLobato2009}: p. 144) penalties $(\mathcal{L
}\ln n,2\mathcal{L})$ for a fixed horizon Q-statistic. We need the square
root because the max-correlation operates on $\sqrt{n}\hat{\rho}_{n}(h)$
rather than $n\hat{\rho}_{n}^{2}(h)$. Contrary to \cite{EscancianoLobato2009}
, however, our test statistic \textit{and} penalty are based on the
max-correlation, we allow for diverging sequences $\{\mathcal{L}_{n}\}$, and
we do not need to standardize the correlations because we use a bootstrap.
\footnote{\citet[second remark following
Theorem 2]{EscancianoLobato2009} claim that a diverging maximum lag is
possible for their Q-test with an automatic lag, but an asymptotic theory is
not presented. Further, it is not obvious that their current fixed maximum
lag proof can extend to the unbounded maximum lag case. By their eq. (11)
they need $\sum_{l=1}^{\mathcal{\bar{L}}_{n}}P(\mathcal{L}_{n}$\textit{\ }$=$
\textit{\ }$l)$\textit{\ }$\rightarrow $\textit{\ }$0$\textit{\ as }$n$
\textit{\ }$\rightarrow $\textit{\ }$\infty $ hence $P(\mathcal{L}_{n}$
\textit{\ }$=$\textit{\ }$l)$ $\rightarrow $ $0$ fast enough when $\mathcal{
\bar{L}}_{n}$ $\rightarrow $ $\infty $, which may not hold under their
current assumptions.}
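A compact illustrative sketch of the selection rule (\ref{Ln_*}) follows. It
is not a prescription: the maximum lag $\mathcal{\bar{L}}_{n}$ $=$ $[\delta
\sqrt{n}/\ln (n)]$ with $\delta $ $=$ $10$ anticipates the choice made in
Section \ref{sec:sim_design} and is an assumption of the sketch, not part of
the rule itself.
\begin{verbatim}
import numpy as np

def select_lag(eps, q=3.0, delta=10.0):
    """Smallest L in {1,...,Lbar_n} maximizing the penalized statistic, cf. (Ln_*)."""
    e = eps - eps.mean()
    n = len(e)
    Lbar = int(delta * np.sqrt(n) / np.log(n))
    rho = np.array([np.dot(e[h:], e[:n - h]) for h in range(1, Lbar + 1)]) / np.dot(e, e)
    T = np.sqrt(n) * np.maximum.accumulate(np.abs(rho))   # T_n(L) for L = 1,...,Lbar
    L = np.arange(1, Lbar + 1)
    P = np.where(T <= np.sqrt(q * np.log(n)), np.sqrt(L * np.log(n)), np.sqrt(2.0 * L))
    return 1 + int(np.argmax(T - P))   # argmax returns the first (smallest) maximizer
\end{verbatim}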
Define $\rho (\infty )$ $\equiv $ $\lim_{h\rightarrow \infty }\rho (h)$, and
$h^{\ast }$ $\equiv $ $\min \{h$ $:$ $h$ $=$ $\arg \max_{1\leq h\leq \infty
}|\rho (h)|\}$, the smallest lag at which the largest correlation in
magnitude occurs.
\begin{theorem}
\label{th:lag_select}Let Assumptions \ref{assum:dgp} and \ref{assum:plug}
hold. $a.$ Under $H_{0}$, if $(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))
\mathcal{\tilde{X}}_{n}(h)$ for all $h$\ is uniformly integrable, where $
\mathcal{\tilde{X}}_{n}(h)$ is defined in (\ref{expans_rate}), then $
\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$ must hold,
and $P(\mathcal{L}_{n}^{\ast }$ $=$ $1)$ $\rightarrow $ $1$. $b$. Under $
H_{1}$, $\mathcal{L}_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $h^{\ast }$
provided $\mathcal{\bar{L}}_{n}$ $=$ $o(n/\ln (n))$.
\end{theorem}
\begin{remark}
\normalfont We only require the more lenient $\mathcal{\bar{L}}_{n}$ $=$ $
o(n/\ln (n))$ for the proof of $(b)$. The more stringent restriction $
\mathcal{\bar{L}}_{n}$ $=$ $O(n^{\min \{\zeta ,1/2\}}/\ln (n))$ arises under
$(a)$ since there we must prove $P(\mathcal{P}_{n}(\mathcal{L}_{n})$ $=$ $
\sqrt{\mathcal{L}_{n}\ln (n)})$ $\rightarrow $ $1$ by using Lemma \ref
{lm:corr_expan}.
\end{remark}
\begin{remark}
\normalfont Under $H_{1}$ the optimal lag selected satisfies $\mathcal{L}
_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $h^{\ast }$. Notice $h^{\ast }$
may be any value in $\mathbb{N}$ because we allow the maximum lag under
consideration for finite samples to diverge $\mathcal{\bar{L}}_{n}$ $
\rightarrow $ $\infty $. This ensures a consistent white noise test. The
reason $h^{\ast }$ is selected asymptotically is the penalized
max-correlation favors choosing lags that are \emph{at least} as large as
the most informative lag(s), the lag(s) at which the max-correlation takes
place. A nice advantage of the procedure is $\mathcal{L}_{n}^{\ast }$
converges to the \emph{smallest} of such \emph{most informative lags},
ensuring as $n$ $\rightarrow $ $\infty $ that the greatest number of data
points possible are used for computing that correlation magnitude. A
portmanteau statistic, however, sums over \emph{all} squared correlations
over a finite set of lags, hence its penalized version is optimized at the
largest fixed lag $\bar{h}$\ under consideration, so that $P(\mathcal{L}_{n}^{\ast }$ $
=$ $\bar{h})$ $\rightarrow $ $1$
\citep[see the proof of
Theorem 2 in][]{EscancianoLobato2009}.
\end{remark}
\begin{remark}
\normalfont The proof that $\mathcal{L}_{n}^{\ast }$ converges to $1$ in
probability under $H_{0}$ only exploits the null property $\sqrt{n}\hat{\rho}
_{n}(h)$ $=$ $O_{p}(1)$. The latter \emph{also} holds under the $\sqrt{n}$
-local alternative $\rho (h)$ $=$ $r(h)/\sqrt{n}$, since $\sqrt{n}\hat{\rho}
_{n}(h)$ $=$ $\sqrt{n}(\hat{\rho}_{n}(h)$ $-$ $\rho (h))$ $+$ $r(h)$ $=$ $
O_{p}(1)$. Thus, $P(\mathcal{L}_{n}^{\ast }$ $=$ $1)$ $\rightarrow $ $1$
under $H_{1}^{L}$ $:$ $\rho (h)$ $=$ $r(h)/\sqrt{n}$ as well. This means the
max-correlation test with our proposed automatic lag selection will have
trivial asymptotic local power against all directions from the null with $
r(1)$ $=$ $0$.\footnote{
Simply consider $r(h)$ $=$ $0$ $\forall h$ $\neq $ $2$ and $r(2)$ $\neq $ $0$
. Since $\mathcal{L}_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $1$, the above
local drift cannot be detected asymptotically (with probability greater than
the size of the test). We thank a referee for pointing this out.}
\end{remark}
\begin{remark}
\normalfont In our proof, e.g., under $H_{0}$, we show $\mathcal{\hat{T}}
_{n}^{\mathcal{P}}(\mathcal{L}_{n})\geq \mathcal{\hat{T}}_{n}^{\mathcal{P}
}(l)$ for each $l=1,...,\mathcal{\bar{L}}_{n}$ \emph{if and only if} $
\mathcal{L}_{n}$ $\rightarrow $ $1$, while by definition $\mathcal{L}
_{n}^{\ast }$\ is the least of all such sequences. We do this by inspecting
an equivalent expression for $P(\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{
L}_{n})$ $\geq $ $\mathcal{\hat{T}}_{n}^{\mathcal{P}}(l))$ \emph{for each} $
1 $ $\leq $ $l$ $\leq $ $\mathcal{\bar{L}}_{n}$ (equivalence holds
asymptotically with probability approaching one).
\citet[proof of Theorem
1]{EscancianoLobato2009}, by contrast, look at the joint probability that $
\mathcal{L}_{n}^{\ast }$ $\neq $ $1$, hence they must show $\sum_{l=2}^{
\mathcal{\bar{L}}}P(\mathcal{L}_{n}^{\ast }$ $=$ $l)$ $\rightarrow $ $0$
where $\mathcal{\bar{L}}$ is fixed and finite
\citep[see][eq.
(11)]{EscancianoLobato2009}. The joint probability argument does not
obviously transfer to the case where $\mathcal{\bar{L}}$ $\mathcal{=}$ $
\mathcal{\bar{L}}_{n}$ $\rightarrow $ $\infty $ since $\sum_{l=2}^{\mathcal{
\bar{L}}_{n}}P(\mathcal{L}_{n}^{\ast }$ $=$ $l)$ $\rightarrow $ $0$ requires
$P(\mathcal{L}_{n}^{\ast }$ $=$ $l)$ $\rightarrow $ $0$ sufficiently fast,
which need not hold under their assumptions.
\end{remark}
\section{Monte Carlo Experiments \label{sec:sim}}
We now perform a Monte Carlo experiment to gauge the merits of the
max-correlation test and automatic lag (labeled $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$). A main competitor studied here is Shao's (
\citeyear{Shao2011_JoE}) dependent wild bootstrap spectral Cram\'{e}r-von
Mises test (labeled $CvM^{dw}$). See Section \ref{sec:sim_design} for the
simulation design and Section \ref{sec:sim_results} for results. In the
supplemental material \citet[][Appendix H]{HillMotegi_supp_mat} we study
other tests, including the max-correlation with a pre-chosen non-random lag $
\mathcal{L}_{n}$, the Ljung-Box test, Hong's (\citeyear{Hong1996}) test
based on a standardized periodogram, a CvM test with Zhu and Li's (
\citeyear{ZhuLi2015}) block-wise random weighting bootstrap, and Andrews and
Ploberger's (\citeyear{AndrewsPloberger1996}) sup-LM test with the dependent
wild bootstrap. $CvM^{dw}$ is one of the strongest competitors in terms of
empirical size and power.
\subsection{Simulation Design \label{sec:sim_design}}
We consider a variety of data generating processes, filters, and estimation
methods. We first construct an error term $e_{t}$ that drives an observed
variable $y_{t}$. Let $\nu _{t}$ be iid $N(0,1)$. We consider iid $e_{t}=\nu
_{t}$; GARCH(1,1) $e_{t}=\nu _{t}w_{t}$ with random volatility process $
w_{1}^{2}=1$ and $w_{t}^{2}=1+0.2e_{t-1}^{2}+0.5w_{t-1}^{2}$ for $t\geq 2$;
MA(2) $e_{t}=\nu _{t}+0.5\nu _{t-1}+0.25\nu _{t-2}$ for $t$ $\geq $ $3$,
with initial values $e_{1}$ $=$ $0$ and $e_{2}$ $=$ $\nu _{2}+0.5\nu _{1}$;
and AR(1) $e_{t}=0.7e_{t-1}+\nu _{t}$ for $t$ $\geq $ $2$ with initial $
e_{1} $ $=$ $0$. Each error process is strictly stationary and ergodic.
\footnote{
Ergodicity follows since each error process is stationary $\alpha $-mixing.
See, e.g., \cite{KolmogorovRozanov1960} for processes with continuous
bounded spectral densities (e.g. stationary Gaussian AR, Gaussian MA(2));
\cite{Nelson1990} for GARCH process stationarity; and \cite{CarrascoChen2002}
for mixing properties of stationary GARCH processes.} We use each of the
four error terms in each of the following six scenarios; an illustrative
simulation sketch of these processes follows the scenario list below.
\renewcommand\labelitemi{}
\begin{itemize}
\item \textbf{Scenario \#1: Simple} $y_{t}=e_{t}$; mean filter $\epsilon
_{t}=y_{t}-E[y_{t}]$; $\hat{\phi}_{n}=1/n\sum_{t=1}^{n}y_{t}$.
\item \textbf{Scenario \#2: Bilinear} $y_{t}=0.5e_{t-1}y_{t-2}+e_{t}$; mean
filter $\epsilon _{t}=y_{t}-E[y_{t}]$; $\hat{\phi}_{n}=1/n
\sum_{t=1}^{n}y_{t} $.
\item \textbf{Scenario \#3: AR(2)} $y_{t}=0.3y_{t-1}-0.15y_{t-2}+e_{t}$;
AR(2) filter $\epsilon _{t}=y_{t}-\phi _{1}y_{t-1}-\phi _{2}y_{t-2}$; least
squares.
\item \textbf{Scenario \#4: AR(2)} $y_{t}=0.3y_{t-1}-0.15y_{t-2}+e_{t}$;
AR(1) filter $\epsilon _{t}=y_{t}-\phi _{1}y_{t-1}$; least squares.
\item \textbf{Scenario \#5: GARCH(1,1)} $y_{t}=\sigma _{t}e_{t}$, $\sigma
_{t}^{2}=1+0.2y_{t-1}^{2}+0.5\sigma _{t-1}^{2}$; no filter.
\item \textbf{Scenario \#6: GARCH(1,1)} $y_{t}=\sigma _{t}e_{t}$, $\sigma
_{t}^{2}=1+0.2y_{t-1}^{2}+0.5\sigma _{t-1}^{2}$; GARCH(1,1) filter $\epsilon
_{t}=y_{t}/\sigma _{t}$ with $\sigma _{t}^{2}=\omega +\alpha
y_{t-1}^{2}+\beta \sigma _{t-1}^{2}$; quasi-maximum likelihood.\footnote{
QML is performed using the iterated process $\tilde{\sigma}_{1}^{2}(\theta )$
$=$ $\omega $ and $\tilde{\sigma}_{t}^{2}(\theta )$ $=$ $\omega $ $+$ $
\alpha y_{t-1}^{2}$ $+$ $\beta \tilde{\sigma}_{t-1}^{2}(\theta )$ for $t$ $=$
$2,\dots ,n$. We impose $(\omega ,\alpha ,\beta )$ $>$ $0$ and $\alpha $ $+$
$\beta $ $\leq $ $1$ during estimation.}
\end{itemize}
In \#5 and \#6, $e_{t}$ is standardized so that $E[e_{t}^{2}]=1$.
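The scenarios above can be reproduced with elementary code. The following
Python sketch (illustrative only) generates the four error processes and, as
one example, the Scenario \#2 bilinear DGP; the initial values $y_{1}$ $=$ $
e_{1}$ and $y_{2}$ $=$ $e_{2}$ for the bilinear recursion are our own
assumption, since they are not specified above.
\begin{verbatim}
import numpy as np

def make_error(n, kind, rng):
    """Generate one of the four error processes described above."""
    nu = rng.standard_normal(n)
    if kind == "iid":
        return nu
    if kind == "garch":        # e_t = nu_t * w_t, w_t^2 = 1 + .2 e_{t-1}^2 + .5 w_{t-1}^2
        e, w2 = np.zeros(n), np.ones(n)
        e[0] = nu[0]
        for t in range(1, n):
            w2[t] = 1.0 + 0.2 * e[t - 1] ** 2 + 0.5 * w2[t - 1]
            e[t] = nu[t] * np.sqrt(w2[t])
        return e
    if kind == "ma2":          # e_1 = 0, e_2 = nu_2 + .5 nu_1, then MA(2)
        e = np.zeros(n)
        e[1] = nu[1] + 0.5 * nu[0]
        e[2:] = nu[2:] + 0.5 * nu[1:-1] + 0.25 * nu[:-2]
        return e
    if kind == "ar1":          # e_1 = 0, e_t = .7 e_{t-1} + nu_t
        e = np.zeros(n)
        for t in range(1, n):
            e[t] = 0.7 * e[t - 1] + nu[t]
        return e
    raise ValueError(kind)

def scenario2(n, kind, rng):
    """Scenario #2 bilinear DGP; y_1 = e_1 and y_2 = e_2 are assumed start values."""
    e = make_error(n, kind, rng)
    y = e.copy()
    for t in range(2, n):
        y[t] = 0.5 * e[t - 1] * y[t - 2] + e[t]
    return y
\end{verbatim}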
The null is true for \#1, \#2, \#3, \#5 and \#6 when the error $e_{t}$ is
iid or GARCH. For \#4 the null is false for any error $e_{t}$ because a
misspecified AR(1) filter is used. This results in an AR(1) test variable $
\epsilon _{t}$, with geometrically decaying autocorrelations when $e_{t}$ is
iid or GARCH.
In \#1--\#4, $y_{t}$ is stationary for each error. The GARCH(1,1) process in
\#5--\#6 is strong when $e_{t}$ is iid, and semi-strong when $e_{t}$ is
GARCH(1,1) since it is an adapted mds \citep{DrostNijman1993}, hence in
those cases $y_{t}$ is stationary \citep{Nelson1990,LeeHansen1994}. If $
e_{t} $ is MA(2) or AR(1), then both $\{e_{t},y_{t}\}$ are serially
correlated. In the MA(2) error case, it can be verified that GARCH $y_{t}$
is stationary due to the finite feedback structure. It is unknown whether
GARCH $y_{t}$ with a GARCH or AR(1) error has a stationary solution {
\citep[see,
e.g.,][]{DrostNijman1993,StraumannMikosch2006}.}
All of our chosen tests require a finite fourth moment on the test variable $
\epsilon _{t}$, and in all cases $E[e_{t}^{4}]$ $<$ $\infty $. In \#1--\#4, $
E[\epsilon _{t}^{4}]$ $<$ $\infty $ holds for each error type $e_{t}$. In
Scenario \#6 we test the standardized error $\epsilon _{t}$ $=$ $e_{t}$ $=$ $
y_{t}/\sigma _{t}$ which has a finite fourth moment in all cases.
In Scenario \#3 we do not include a constant term in the filter in order to
reduce estimator dispersion, and because $E[y_{t}]$ $=$ $0$ is known to be
correct within this experiment. In practice a constant term would be
included to ensure $E[\epsilon _{t}]$ $=$ $0$.
In Scenario \#5, however, we test GARCH $\epsilon _{t}$ $=$ $y_{t}$ itself. $
E[\epsilon _{t}^{4}]$ $<$ $\infty $ holds when $e_{t}$ is iid or MA(2), but
it is unknown in theory whether a fourth moment exists when $e_{t}$ is
GARCH(1,1) or AR(1).\footnote{
As an experiment not presented in this paper, we simulated $J$ $=$ $10,000$
sample paths $\{y_{t}\}_{t=1}^{250}$ from GARCH $\epsilon _{t}$ $=$ $y_{t}$
with GARCH(1,1) or AR(1) error $e_{t}$. We inspected the median over all $J$
samples of the $4^{th}$ moment for subsamples $\{y_{t}\}_{t=1}^{T}$ with $T$
$=$ $50,...,250$. Denote this statistic as $k(T)$. $k(T)$ grows
exponentially in $T$, suggesting a $4^{th}$ moment does not exist for either
process.} Test results in the latter case should therefore be interpreted
with some caution.
We also consider three additional scenarios in which remote autocorrelations
are present. Only an iid error $e_{t}$ is used for the following processes
in order to focus on autocorrelation remoteness.
\begin{itemize}
\item \textbf{Scenario \#7: Remote MA(6)} $y_{t}=e_{t}+0.25e_{t-6}$; mean
filter $\epsilon _{t}=y_{t}-E[y_{t}]$; $\hat{\phi}_{n}=1/n
\sum_{t=1}^{n}y_{t} $.
\item \textbf{Scenario \#8: Remote MA(12)} $y_{t} = e_{t} + 0.25 e_{t-12}$;
mean filter $\epsilon_{t} = y_{t} - E[y_{t}]$; $\hat{\phi}_{n} = 1/n
\sum_{t=1}^{n} y_{t}$.
\item \textbf{Scenario \#9: Remote MA(24)} $y_{t} = e_{t} + 0.25 e_{t-24}$;
mean filter $\epsilon_{t} = y_{t} - E[y_{t}]$; $\hat{\phi}_{n} = 1/n
\sum_{t=1}^{n} y_{t}$.
\end{itemize}
In Remote MA($q$), $\rho (h)\neq 0$ \textit{if and only if} $h=q$. Hence,
any test with a maximum lag less than $q$ should fail to detect serial
dependence.
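As a quick numerical check of this property, a long simulated Remote MA(12)
path has sample autocorrelations that are negligible at every lag except $h$ $
=$ $12$, where $\rho (12)$ $=$ $0.25/(1+0.25^{2})$ $\approx $ $0.235$
(illustrative Python):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
q, n = 12, 200_000
e = rng.standard_normal(n)
y = e.copy()
y[q:] += 0.25 * e[:-q]                 # Remote MA(12) with theta = 0.25
yc = y - y.mean()
rho = [np.dot(yc[h:], yc[:n - h]) / np.dot(yc, yc) for h in range(1, 25)]
# rho[11] (lag 12) is close to .235; all other entries are near zero
\end{verbatim}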
We draw $J=1000$ Monte Carlo samples of size $n\in \{100,250,500,1000\}$. We
draw $2n$ observations and retain the last $n$ observations for analysis.
The rejection frequency of any test corresponds to its empirical size when
the tested variable $\epsilon _{t}$ is white noise, and empirical power when
$\epsilon _{t}$ is correlated. In Table \ref{table:sim_scenarios_summary} we
summarize the dependence property of $\epsilon _{t}$ under each scenario and
error $e_{t}$.
\begin{table}[th]
\caption{Dependence of Test Variable $\protect\epsilon _{t}$ under Each
Scenario and Error $e_{t}$}
\label{table:sim_scenarios_summary}
\begin{center}
{\fontsize{10pt}{16pt} \selectfont
\begin{tabular}{r|c|c|c|c|c|c|c}
\hline
Scenario & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & \#7, \#8, \#9 \\ \hline
DGP & Simple & Bilinear & AR(2) & AR(2) & GARCH & GARCH & Remote MA \\
Filter & Mean & Mean & AR(2) & AR(1) & None & GARCH & Mean \\ \hline
iid $e_{t}$ & \textbf{iid} & \textbf{wn} & \textbf{iid} & corr & \textbf{mds}
& \textbf{iid} & remote corr \\
GARCH $e_{t}$ & \textbf{mds} & \textbf{wn} & \textbf{mds} & corr & \textbf{
mds} & \textbf{mds} & not considered \\
MA(2) $e_{t}$ & corr & corr & corr & corr & corr & corr & not considered \\
AR(1) $e_{t}$ & corr & corr & corr & corr & corr & corr & not considered \\
\hline
\end{tabular}
}
\end{center}
\par
{\fontsize{10pt}{14pt} \selectfont wn $=$ non-mds white noise. corr $=$
autocorrelated. remote corr $=$ autocorrelation is present at a remote lag.
\textbf{Bold} text is used to highlight when the null is true.}
\end{table}
Our proposed test is the max-correlation test with the dependent wild
bootstrap and automatic lag, $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })
$. The test statistic is $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n}^{\ast })$ $
\equiv $ $\sqrt{n}\max_{1\leq h\leq \mathcal{L}_{n}^{\ast }}|\hat{\omega}
_{n}(h)\hat{\rho}_{n}(h)|$ with weight $\hat{\omega}_{n}(h)$ $=$ $1$.
\footnote{
Other plausible weights include an inverted standard deviation based on a
HAC estimator, and/or the \cite{LjungBox1978} weights. In the present paper,
we demonstrate that the uniform weight leads to accurate size and high
power. In simulations not reported here we also find that an inverted
standard deviation, either parametric (when known) or nonparametric, is
suboptimal due to the added sampling error.} We compute the bootstrapped
statistic $\mathcal{\hat{T}}_{n,i}^{(dw)}(\mathcal{L}_{n,i}^{\ast })$ $
\equiv $ $\sqrt{n}\max_{1\leq h\leq \mathcal{L}_{n,i}^{\ast }}|\hat{\rho}
_{n,i}^{(dw)}(h)|$ for each bootstrap sample $i\in \{1,\dots ,M\}$ with $
M=500$. $\hat{\rho}_{n,i}^{(dw)}(h)$ is computed via (\ref{R_hat_dwb}) based
on the Lemma \ref{lm:corr_expan} correlation expansion, which correctly
accounts for the first order (asymptotic) impact of the $i^{th}$ sample's
plug-in $\hat{\theta}_{n,i}$. Note that $\mathcal{L}_{n,i}^{\ast }$ is the
automatic lag for the $i^{th}$ bootstrap sample specifically. The dependent
wild bootstrap requires a choice of block size $b_{n}$. \cite{Shao2011_JoE}
uses $b_{n}$ $=$ $b\sqrt{n}$ with $b$ $\in $ $\{.5,1,2\}$, leading to
qualitatively similar results. We therefore use the middle value $b=1$.
\footnote{
We compared $b_{n}=b\sqrt{n}$ across $b$ $\in $ $\{.5,1,2\}$ in unreported
simulations and found there is little difference in test performance.} The
approximate p-value is computed as $\hat{p}_{n,M}^{(dw)}=1/M\sum_{i=1}^{M}I(
\mathcal{\hat{T}}_{n,i}^{(dw)}(\mathcal{L}_{n,i}^{\ast })\geq \hat{\mathcal{T
}}_{n}(\mathcal{L}_{n}^{\ast }))$.
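A stripped-down Python sketch of this p-value computation follows. It is
illustrative only: it omits the Lemma \ref{lm:corr_expan} plug-in correction
(i.e., it treats $\epsilon _{t}$ as observed data), uses block-wise constant $
N(0,1)$ multipliers as a simple dependent wild bootstrap variant, and holds
the lag fixed at $\mathcal{L}_{n}^{\ast }$ across bootstrap draws rather than
re-selecting $\mathcal{L}_{n,i}^{\ast }$ for each draw.
\begin{verbatim}
import numpy as np

def dwb_pvalue(eps, Lstar, M=500, rng=None):
    """Approximate p-value for T_n(Lstar) = sqrt(n) * max_{h<=Lstar} |rho_hat(h)|."""
    rng = rng or np.random.default_rng()
    e = eps - eps.mean()
    n = len(e)
    b = int(np.sqrt(n))                          # block size b_n = 1 * sqrt(n)
    g0 = np.dot(e, e) / n                        # gamma_hat(0)
    rho = [np.dot(e[h:], e[:n - h]) / (n * g0) for h in range(1, Lstar + 1)]
    T = np.sqrt(n) * max(abs(r) for r in rho)
    exceed = 0
    for _ in range(M):
        w = np.repeat(rng.standard_normal(n // b + 1), b)[:n]   # block multipliers
        rho_b = []
        for h in range(1, Lstar + 1):
            prod = e[h:] * e[:n - h]
            rho_b.append(np.sum(w[h:] * (prod - prod.sum() / n)) / (n * g0))
        exceed += (np.sqrt(n) * max(abs(r) for r in rho_b) >= T)
    return exceed / M
\end{verbatim}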
The automatic lag selection requires a choice set $\{1,...,\mathcal{\bar{L}}
_{n}\}$ with maximum possible lag length $\mathcal{\bar{L}}_{n}$, and the
tuning parameter $q$ (cf. (\ref{Pn}) and (\ref{Ln_*})). Let $[z]$ denote the
integer part of $z$. We set $\mathcal{\bar{L}}_{n}=[\delta \sqrt{n}/(\ln n)]$
with $\delta =10$ so that $\bar{\mathcal{L}}_{n}\in \{21,28,35,45\}$ for $
n\in \{100,250,500,1000\}$, respectively. In the present simulation design, $
\mathcal{\bar{L}}_{n}$ satisfies the Lemma \ref{lm:corr_expan} and Theorem
\ref{th:lag_select}\ requirement $\mathcal{\bar{L}}_{n}$ $=$ $O(\sqrt{n}/\ln
(n))$ for all processes except possibly when the test variable is GARCH $
y_{t}$ with GARCH or AR(1) error $e_{t}$. Similar and larger values of $\mathcal{\bar{L}}_{n}$ lead to
qualitatively similar results.\footnote{
In experiments not reported here we also used $\mathcal{\bar{L}}_{n}$ $=$ $
[\delta n/(\ln (n))^{c}]$ for various $c$ and $\delta $ and found
essentially the same results. Thus, the value $\mathcal{\bar{L}}_{n}=[\delta
\sqrt{n}/(\ln n)]$ is not essential to test performance, but does satisfy
Lemma \ref{lm:corr_expan} and Theorem \ref{th:lag_select} for most processes
under study.}
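As a quick check of the quoted values, under the stated formula only:
\begin{verbatim}
import numpy as np
for n in (100, 250, 500, 1000):
    print(n, int(10 * np.sqrt(n) / np.log(n)))   # prints 21, 28, 35, 45
\end{verbatim}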
In order to choose a plausible value of $q$ for the penalty function in (\ref
{Pn}), we perform a preliminary simulation study that computes empirical
size and size-adjusted power for the max-correlation test with $\hat{
\mathcal{T}}_{n}(\mathcal{L}_{n}^{\ast })$\ across $q\in \{1.50,1.75,\dots
,4.50\}$. We consider two cases in order to highlight empirical size and
power properties. In Case 1, size is computed under Scenario \#1 with an iid
error; and size-adjusted power is computed under \#4 with an iid error. In
Case 2, size is computed under \#5 with an iid error; and size-adjusted
power is computed under \#5 with MA(2) error. For each case, sample size is $
n\in \{100,500\}$; nominal size is $\alpha =0.05$; $J=1000$ Monte Carlo
samples and $M=500$ bootstrap samples are generated. See Figure \ref
{fig:size_power_maxcorr_automatic_lag} for results. Variation of empirical
size and size-adjusted power for the test based on $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ across the values of $q$ is fairly small in each
experiment, implying that a choice of $q$ should not have a critical impact
on the test performance. For each case and sample size, we obtain relatively
accurate size and high power around $q$ $=$ $3$, hence $q$ $=$ $3$ is used.
We also perform the dependent wild bootstrap Cram\'{e}r-von Mises test in
\cite{Shao2011_JoE}, $CvM^{dw}$. This test is based on the sample spectral
distribution function $F_{n}(\lambda )$ $\equiv $ $\int_{0}^{\lambda
}I_{n}(\omega )d\omega $ with periodogram $I_{n}(\omega )$ $\equiv $ $(2\pi
)^{-1}\sum_{h=1-n}^{n-1}\hat{\gamma}_{n}(h)e^{-ih\omega }$. Define:
\begin{equation*}
S_{n}(\lambda )\equiv \sqrt{n}(F_{n}(\lambda )-\hat{\gamma}_{n}(0)\psi
_{0}(\lambda ))=\sum_{h=1}^{n-1}\sqrt{n}\hat{\gamma}_{n}(h)\psi _{h}(\lambda
),
\end{equation*}
where $\psi _{h}(\lambda )$ $=$ $(h\pi )^{-1}\sin (h\lambda )$ if $h$ $\neq $
$0$,\ else $\psi _{h}(\lambda )$ $=$ $\lambda (2\pi )^{-1}$. The CvM test
statistic is $\mathcal{C}_{n}$ $=$ $\int_{0}^{\pi }S_{n}^{2}(\lambda
)d\lambda $, which has a non-standard limit distribution under the null.
\footnote{
In practice we use a numerical integral based on the midpoint approximation
with an increment of $.01$.} We then use Shao's (\citeyear{Shao2011_JoE},
Section 3) dependent wild bootstrap based on the Lemma \ref{lm:corr_expan}
correlation expansion to compute an approximate p-value. Note that all $
\mathcal{L}_{n}=n-1$ lags are used by construction. \cite{Shao2011_JoE} does
not consider the use of a filter, but we apply the test to all scenarios for
the sake of comparison, and use the correlation expansion to control for a
filter when used.
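A minimal Python sketch of $S_{n}(\lambda )$ and $\mathcal{C}_{n}$ with the
midpoint rule follows; it is illustrative only and, as with the earlier
sketches, omits the correlation-expansion correction for a filter.
\begin{verbatim}
import numpy as np

def cvm_stat(eps, dx=0.01):
    """C_n: midpoint-rule integral of S_n(lambda)^2 over (0, pi)."""
    e = eps - eps.mean()
    n = len(e)
    gam = np.array([np.dot(e[h:], e[:n - h]) / n for h in range(1, n)])  # gamma_hat(h)
    lam = np.arange(dx / 2.0, np.pi, dx)                                 # midpoints
    h = np.arange(1, n)[:, None]
    psi = np.sin(h * lam[None, :]) / (h * np.pi)                         # psi_h(lambda), h >= 1
    S = np.sqrt(n) * (gam[:, None] * psi).sum(axis=0)                    # S_n(lambda)
    return float(np.sum(S ** 2) * dx)
\end{verbatim}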
\subsection{Simulation Results \label{sec:sim_results}}
\subsubsection{Automatic Lag}
We first check the performance of the automatic lag selection itself. Recall
that by Theorem \ref{th:lag_select} $\mathcal{L}_{n}^{\ast }$ $\overset{p}{
\rightarrow }$ $1$ under $H_{0}$, and under $H_{1}$ $\mathcal{L}_{n}^{\ast}$
$\overset{p}{\rightarrow }$ $h^{\ast }$, the smallest lag at which the
largest correlation occurs. Under Scenarios \#1-\#6 when the error $e_{t}$
is iid or GARCH the null is false only for \#4. In the latter case, the test
variable $\epsilon _{t}$ is AR(1) hence $h^{\ast }$ $=$ $1$.
In Table \ref{table:median_automatic_lag} we report the median of optimal
lags $\{\mathcal{L}_{n}^{\ast (1)},\dots ,\mathcal{L}_{n}^{\ast (J)}\}$ for
each scenario, where $\mathcal{L}_{n}^{\ast (j)}$ is the $j^{th}$ sample's
optimal lag. We also report the smallest lag at which the largest
correlation occurs, $h^{\ast }$. In most cases we compute $h^{\ast }$
analytically. In a few cases an analytic solution is not feasible so we use
a large sample simulation. We generate $50000$ samples of size $n$ $=$ $
50000$, compute the autocorrelations of $\epsilon _{t}$ and the implied $
h^{\ast }$ for each sample, and report the median $h^{\ast }$ across all samples.
In \#1--\#6, when $H_{0}$ is true or autocorrelations exist at small lags,
the median of $\mathcal{L}_{n}^{\ast (j)}$ is 1 or 2. This (nearly) matches
the predictions of Theorem \ref{th:lag_select} and the reported $h^{\ast }$
in most cases. In just two cases, (i) bilinear with GARCH error and (ii)
GARCH with GARCH error and without a filter, the reported $h^{\ast }$\ is $4$
. This is higher than the optimally selected lag (1 or 2). These are the
only cases where the median of $\mathcal{L}_{n}^{\ast (j)}$ deviates by more
than 0 or 1 from $h^{\ast }$. In both of these cases the process is highly
volatile, possibly causing the aberrant deviation of $\mathcal{L}_{n}^{\ast
(j)}$ from $h^{\ast }$, and the low empirical size of the max-correlation
test. As suggested in Section \ref{sec:sim_design}, either of these
processes may fail the required moment conditions for the underlying theory
surrounding $\mathcal{L}_{n}^{\ast }$.
In \#7--\#9, where autocorrelations exist at remote lags, the median of $
\mathcal{L}_{n}^{\ast (j)}$ pinpoints those lags given a large enough sample
size. Under Remote MA(12), for example, the median is 1 for $n\leq 250$ but
exactly 12 for $n\geq 500$.
\subsubsection{Empirical Size \label{sec:sim_size}}
We now report rejection frequencies associated with nominal size $\alpha \in
\{.01,.05,.10\}$. See Table \ref
{table:main_paper_rf_automatic_scenario123456} for $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ under \#1--\#6; see Table \ref
{table:main_paper_rf_CvM_scenario123456} for $CvM^{dw}$ under \#1--\#6; and
see Table \ref{table:main_paper_rf_scenario789} for both tests under
\#7--\#9.
We begin with Scenario \#1 (simple), $n=100$, and iid error. The empirical
size with respect to nominal sizes $\alpha \in \{.010,.050,.100\}$ is $
\{.017,.068,.128\}$ for $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$
and $\{.023,.081,.138\}$ for $CvM^{dw}$, hence $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ has reasonably sharp size that is sharper than $
CvM^{dw}$. A similar implication holds for \#2 (bilinear), $n=100$, and iid
error, where the empirical size is $\{.008,.047,.090\}$ for $\hat{\mathcal{T}
}^{dw}(\mathcal{L}_{n}^{\ast })$ and $\{.018,.076,.149\}$ for $CvM^{dw}$. In
general, the empirical size of the test based on $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ is at least as good as (and often better than)
size associated with $CvM^{dw}$.
The reason why $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$ achieves
fairly sharp size in most cases is that, as confirmed in Table \ref
{table:median_automatic_lag}, $\mathcal{L}_{n}^{\ast }$ is sufficiently
close to $1$ in most samples under $H_{0}$. That feature cuts redundant lags
and improves the size of the test. In fact, we find in the supplemental
material \citet[][Appendix H]{HillMotegi_supp_mat} that $\hat{\mathcal{T}}
^{dw}(\mathcal{L}_{n}^{\ast })$ achieves the sharpest size among a variety
of tests.\footnote{
In Scenario \#2 (bilinear) with a GARCH error, the max-correlation test is
undersized, even in large samples $n=1000$. The primary cause is the
bilinear process combined with a GARCH error results in extreme volatility,
which undermines the efficacy of the bootstrap. The test is even more
undersized under Scenario \#5 (GARCH) with a GARCH error. The CvM test is
also undersized for Scenario \#2 with a GARCH error. It is, however, less
affected than the max-correlation test in Scenario \#5 with a GARCH error.
Weighting the correlations for a max-correlation test might alleviate the
under-rejection, for example using weights equal to the inverted standard
errors. The least volatile correlations in this case are given the greatest
weight. We leave that possibility for a future project.} $CvM^{dw}$ uses all
$\mathcal{L}_{n}=n-1$ lags, but the greatest weight is assigned to small
lags by construction. Hence $CvM^{dw}$ tends to have fairly accurate size in
most cases, although generally the max-correlation test dominates.
\subsubsection{Empirical Power \label{sec:sim_power}}
In \#1--\#6, the relative performance of $\hat{\mathcal{T}}^{dw}(\mathcal{L}
_{n}^{\ast })$ and $CvM^{dw}$ under $H_{1}$ varies across cases. The former
is more powerful than the latter in some cases, but not in other cases. In
general, there is not a drastic gap between the two tests. See \#2, $n=1000$
, and AR(1) error, for example. The empirical power with respect to $\alpha
\in \{.010,.050,.100\}$ is $\{.723,.823,.864\}$ for $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ and $\{.474,.697,.810\}$ for $CvM^{dw}$. But in
\#3, with $n=1000$, and an AR(1) error, power is $\{.599,.847,.922\}$ for $
\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$ and $\{.688,.876,.923\}$
for $CvM^{dw}$.
In \#7--\#9, however, $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$
dominates $CvM^{dw}$ completely (see Table \ref
{table:main_paper_rf_scenario789}). $\hat{\mathcal{T}}^{dw}(\mathcal{L}
_{n}^{\ast })$ successfully detects remote autocorrelations given a large
enough sample size, while $CvM^{dw}$ fails to detect them for any $n$. The
power of $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$ under \#8 (Remote
MA(12)), for instance, is $\{.013,.067,.117\}$ for $n=100$, $
\{.024,.134,.244\}$ for $n=250$, $\{.371,.673,.770\}$ for $n=500$, and $
\{.983,.997,.997\}$ for $n=1000$. As expected, power increases as $n$ grows.
The reason that $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{\ast })$\ detects
remote autocorrelations is confirmed in Table \ref
{table:median_automatic_lag} (cf. Theorem \ref{th:lag_select}.b): $\mathcal{L
}_{n}^{\ast }$ converges to $h^{\ast }=12$ when $n\geq 500$ under \#8. The
power of $CvM^{dw}$, by contrast, is $\{.034,.110,.179\}$ for $n=100$, $
\{.025,.087,.155\}$ for $n=250$, $\{.026,.092,.161\}$ for $n=500$, and $
\{.017,.083,.166\}$ for $n=1000$. $CvM^{dw}$ has (almost) no power against
the remote autocorrelation even when $n=1000$. In fact, we find in
\citet[][Appendix H]{HillMotegi_supp_mat} that $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ is the only test that has power against remote
autocorrelations among a variety of tests which have decent size.
The reason why $CvM^{dw}$ fails to capture remote autocorrelations is that
it incorporates \textit{all} available sample correlations, while assigning
the greatest weight to small lags. That feature delivers sharp size and high
power against adjacent correlations like Scenarios \#1--\#6, but critically
low power against remote correlations like Scenarios \#7--\#9.
The (non-weighted) max-correlation, by contrast, operates on the most
informative serial correlation over a range of lags $\{1,...,\mathcal{L}
_{n}^{\ast }\}$. The optimal maximum lag selected $\mathcal{L}_{n}^{\ast }$
asymptotically hones in on the most informative lag range: the range that
includes the smallest lag at which the greatest correlation in magnitude
occurs. Thus, in large samples in particular, $\hat{\mathcal{T}}^{dw}(
\mathcal{L}_{n}^{\ast })$ delivers the single most informative serial
correlation for test purposes, as opposed to a weighted sum of all, and
therefore potentially less useful, correlations. That feature itself
generally delivers accurate size (or under-rejections in some cases) and
competitive power for Scenarios \#1-\#6, and dominant power against remote
correlations.
In some cases against adjacent correlations power is not dominant when a
large pre-chosen non-random $\mathcal{L}_{n}$ is used
\citep[see][Appendix
H]{HillMotegi_supp_mat}, but such a shortcoming is alleviated by using our
proposed automatic lag $\mathcal{L}_{n}^{\ast }$. The combined
max-correlation with automatic lag and bootstrapped p-value leads to a
dominant test over all when size and power are considered, in comparison to
a variety of tests.
\section{Conclusion\label{sec:conclude}}
We present a bootstrap max-correlation test of the white noise hypothesis
for regression model residuals. The maximum correlation over an increasing
lag length has a long history in the statistics literature, but only in
terms of characterizing its limit distribution using extreme value theory
and only for observed data. We apply a bootstrap method to a first order
correlation expansion in order to account for the impact of a plug-in $\hat{
\theta}_{n}$ used to compute model residuals. We prove that Shao's (
\citeyear{Shao2011_JoE}) dependent wild bootstrap yields a valid test in a
more general environment than \cite{Shao2011_JoE} or \cite{XiaoWu2014}
considered. Our approach does not require showing that the original and
bootstrapped max-correlation test statistics have the same limit properties
under the null, allowing us to bypass the extreme value theory approach
altogether. We also extend Escanciano and Lobato's (
\citeyear{EscancianoLobato2009}) automatic lag selection to our setting with
an (asymptotically) unbounded lag set. We prove that the automatic lag
converges in probability to one under the null, and the smallest lag at
which the largest correlation in magnitude occurs under the alternative. In
both cases, the procedure hones in on the most informative lag, offering the
greatest number of data points for analysis, for the given hypothesis.
Simulation experiments show that our test with the automatic lag generally
out-performs a variety of other tests. It achieves sharper empirical size in
most cases than other tests since the automatic lag $\mathcal{L}_{n}^{\ast }$
is sufficiently close to 1 under the null hypothesis. When there exist
serial correlations at small lags, the max-correlation test and some strong
competitors such as the Cram\'{e}r-von Mises test with the dependent wild
bootstrap lead to roughly comparable empirical power. When there exist
correlations only at remote lags, the max-correlation test has (potentially)
high power while the Cram\'{e}r-von Mises test has nearly trivial power for
any sample size due to its weighting structure. Other tests also have
comparatively lower power. This striking difference stems from the fact that
the automatic lag $\mathcal{L}_{n}^{\ast }$ pinpoints the relevant remote
lag, while other tests by construction incorporate many lags into a test
statistic (the CvM test gives the greatest weight to low lags, making it
useless against remote lags).
\setcounter{equation}{0} \renewcommand{\theequation}{{\thesection}.
\arabic{equation}} \setcounter{remark}{0} \renewcommand{\theremark}{
\Alph{section}.\arabic{remark}}
\appendix
\section{Appendix: Proofs\label{app:proofs}}
We assume all random variables exist on a complete measure space such that
majorants and integrals over uncountable families of measurable functions
are measurable, and probabilities where applicable are outer probability
measures. See Pollard's (\citeyear{Pollard1984}: Appendix C) \textit{
permissibility} criteria, and see Dudley's (\citeyear{Dudley1984}: p. 101)
\textit{admissible Suslin} property.
We use the following variance bound for NED sequences repeatedly. If $w_{t}$
\ is zero mean, $L_{p}$-bounded for some $p$ $>$ $2$, and $L_{2}$-NED with
size $1/2$, on an $\alpha $-mixing base with decay $O(h^{-p/(p-2)-\iota })$,
then by Theorem 17.5 in \cite{Davidson1994} and Theorem 1.6 in \cite
{McLeish1975}:
\begin{equation}
E\left[ \left( 1/\sqrt{n}\sum\nolimits_{t=1}^{n}w_{t}\right) ^{2}\right]
=O(1). \label{NED_var}
\end{equation}
The following results are key steps toward sidestepping extreme value theory
and Gaussian approximations when working with the maximum. The first result
expands on a result in \citet[Lemma 1]{BoehmeRosenfeld1974} for first
countable topological spaces. The latter is intimately linked to array
convergence implications of theory developed in \cite{Ramsey1930}, cf. \cite
{BoehmeRosenfeld1974}, \cite{Thomason1988} and \cite{Myers2002}. Recall that
any metric space is a first countable topological space
\citep[see,
e.g.,][p. 131]{Lipschitz1965}.
\begin{lemma}
\label{lm:array_conv}Assume the array $\{\mathcal{A}_{k,n}$ $:$ $1$ $\leq $ $
k$ $\leq $ $\mathcal{I}_{n}\}_{n\geq 1}$ lies in a first countable
topological space, where $\{\mathcal{I}_{n}\}_{n\geq 1}$ is a sequence of
positive integers, $\mathcal{I}_{n}$ $\rightarrow $ $\infty $ as $n$ $
\rightarrow $ $\infty $. Let $\lim_{n\rightarrow \infty }\mathcal{A}_{k,n}$ $
=$ $a_{k}$ for each fixed $k$, and $\lim_{k\rightarrow \infty }a_{k}$ $=$ $a$
. Then $\lim_{n\rightarrow \infty }\mathcal{A}_{\mathcal{L}_{n},n}$ $=$ $a$
for some non-unique sequence of positive integers $\{\mathcal{L}_{n}\}$,
where $\mathcal{L}_{n}$ $\leq $ $\mathcal{I}_{n}$, and $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $. If $\mathcal{I}_{n}$ $=$ $n$ then $\mathcal{L}_{n}$
$\leq $ $n$. Moreover, $\lim_{n\rightarrow \infty }\mathcal{A}_{\mathcal{
\tilde{L}}_{n},n}$ $=$ $a$ for any other monotonic sequence of positive
integers $\{\mathcal{\tilde{L}}_{n}\}_{n=1}^{\infty }$, $\mathcal{\tilde{L}}
_{n}$ $\rightarrow $ $\infty $ that satisfies $\mathcal{\tilde{L}}_{n}/
\mathcal{L}_{n}$ $\rightarrow $ $0$, hence $\mathcal{L}_{n}$ $=$ $o(n)$ can
always be assured.
\end{lemma}
\noindent \textbf{Proof.}\qquad In view of $\lim_{n\rightarrow \infty }
\mathcal{A}_{k,n}$ $=$ $a_{k}$ for each fixed $k$, and $\lim_{k\rightarrow
\infty }a_{k}$ $=$ $a$, there always exists a monotonic sequence of positive
integers $\{\mathcal{L}_{n}\}_{n=1}^{\infty }$, $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ as $n$ $\rightarrow $ $\infty $, satisfying
\begin{equation}
\left\vert \mathcal{A}_{\mathcal{L}_{n},n}-a_{\mathcal{L}_{n}}\right\vert
\leq \frac{1}{\mathcal{L}_{n}}. \label{ALaL}
\end{equation}
Similarly, for any $\{\mathcal{L}_{n}\}_{n=1}^{\infty }$ that satisfies (\ref
{ALaL}), any other monotonic sequence of positive integers $\{\mathcal{
\tilde{L}}_{n}\}_{n=1}^{\infty }$, $\mathcal{\tilde{L}}_{n}$ $\rightarrow $ $
\infty $ and $\mathcal{\tilde{L}}_{n}/\mathcal{L}_{n}$ $\rightarrow $ $0$
also satisfies (\ref{ALaL}). Hence $\{\mathcal{L}_{n}\}_{n=1}^{\infty }$ is
not unique, and $\mathcal{L}_{n}$ $=$ $o(n)$ is always feasible.
Inequality (\ref{ALaL}) yields $a_{\mathcal{L}_{n}}$ $-$ $1/\mathcal{L}_{n}$
$\leq $ $\mathcal{A}_{\mathcal{L}_{n},n}$ $\leq $ $a_{\mathcal{L}_{n}}$ $+$ $
1/\mathcal{L}_{n}$ for each $n$. Then, by the definition of a limit:
\begin{equation*}
\lim_{n\rightarrow \infty }a_{\mathcal{L}_{n}}-\frac{1}{\mathcal{L}_{n}}\leq
\lim_{n\rightarrow \infty }\mathcal{A}_{\mathcal{L}_{n},n}\leq
\lim_{n\rightarrow \infty }a_{\mathcal{L}_{n}}+\frac{1}{\mathcal{L}_{n}}.
\end{equation*}
The Sandwich theorem, $\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and
therefore $\lim_{n\rightarrow \infty }a_{\mathcal{L}_{n}}$ $=$ $a$, now
yields:
\begin{equation*}
\lim_{n\rightarrow \infty }\mathcal{A}_{\mathcal{L}_{n},n}=\lim_{n
\rightarrow \infty }a_{\mathcal{L}_{n}}=a.
\end{equation*}
See also \citet[Lemma 1]{BoehmeRosenfeld1974}. This proves $
\lim_{n\rightarrow \infty }\mathcal{A}_{\mathcal{L}_{n},n}$ $=$ $a$ for some
non-unique monotonic sequence of positive integers $\{\mathcal{L}
_{n}\}_{n=1}^{\infty }$, where $\mathcal{L}_{n}$ $=$ $o(n)$ is always
feasible. By construction $\mathcal{L}_{n}$ $\leq $ $\mathcal{I}_{n}$ must
be satisfied, hence if $\mathcal{I}_{n}$ $=$ $n$ then $\mathcal{L}_{n}$ $
\leq $ $n$. $\mathcal{QED}$.
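As a purely numerical illustration of the construction (no part of the
argument depends on it), take $a_{k}$ $=$ $1$ $-$ $1/k$ and $\mathcal{A}_{k,n}$
$=$ $a_{k}n/(n+k)$, and for each $n$ pick the largest $\mathcal{L}_{n}$ $\leq $
$n$ satisfying (\ref{ALaL}); then $\mathcal{L}_{n}$ grows like $\sqrt{n}$ and $
\mathcal{A}_{\mathcal{L}_{n},n}$ $\rightarrow $ $a$ $=$ $1$:
\begin{verbatim}
a = lambda k: 1.0 - 1.0 / k            # a_k -> a = 1
A = lambda k, n: a(k) * n / (n + k)    # A_{k,n} -> a_k for each fixed k
for n in (10, 100, 1000, 10000):
    # largest L <= n with |A_{L,n} - a_L| <= 1/L, as in (ALaL)
    Ln = max(L for L in range(1, n + 1) if abs(A(L, n) - a(L)) <= 1.0 / L)
    print(n, Ln, round(A(Ln, n), 3))   # L_n increases; A_{L_n,n} approaches 1
\end{verbatim}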
The next result uses Lemma \ref{lm:array_conv} as the basis for deriving
\textit{in probability} convergence of a function of an increasing set of
random variables. This result forms the basis for the proof of the
non-Gaussian correlation expansion Lemma \ref{lm:corr_expan}.
Recall the continuous $\vartheta $ $:$ $\mathbb{R}^{\mathcal{L}_{n}}$ $
\rightarrow $ $[0,\infty )$ satisfies: lower bound $\vartheta (a)$ $=$ $0$
\emph{if and only if} $a$ $=$ $0$; upper bound $\vartheta (a)$ $\leq $ $K
\mathcal{LM}$ for some $K$ $>$ $0$ and any $a$ $=$ $[a_{h}]_{h=1}^{\mathcal{L
}}$ such that $|a_{h}|$ $\leq $ $\mathcal{M}$ for each $h$; divergence $
\vartheta (a)$ $\rightarrow $ $\infty $ as $||a||$ $\rightarrow $ $\infty $;
monotonicity $\vartheta (a_{\mathcal{L}_{1}})$ $\leq $ $\vartheta ([a_{
\mathcal{L}_{1}}^{\prime },c_{\mathcal{L}_{2}-\mathcal{L}_{1}}^{\prime
}]^{\prime })$ where $(a_{\mathcal{L}},c_{\mathcal{L}})$ $\in $ $\mathbb{R}^{
\mathcal{L}}$, $\forall \mathcal{L}_{2}$ $\geq $ $\mathcal{L}_{1}$ and any $
c_{\mathcal{L}_{2}-\mathcal{L}_{1}}$ $\in $ $\mathbb{R}^{\mathcal{L}_{2}-
\mathcal{L}_{1}}$; and the triangle inequality $\vartheta (a$ $+$ $b)$ $\leq
$ $\vartheta (a)$ $+$ $\vartheta (b)$ $\forall a,b$ $\in $ $\mathbb{R}^{
\mathcal{L}_{n}}$.
\begin{lemma}
\label{lm:max_p}Let $\{\mathcal{X}_{n}(i),\mathcal{Y}_{n}(i)$ $:$ $i$ $\in $
$\mathbb{N}\}_{n\geq 1}$ be arrays of random variables.
\newline
$a.$ If $\mathcal{X}_{n}(i)$ $\overset{p}{\rightarrow }$ $0$ as $n$ $
\rightarrow $ $\infty $\ for each $i$ $\in $ $\mathbb{N}$, then $\vartheta ([
\mathcal{X}_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $\overset{p}{\rightarrow }$ $0$
for some non-unique sequence of positive integers $\{\mathcal{L}_{n}\}$ with
$\mathcal{L}_{n}$ $\rightarrow $ $\infty $. Moreover $\mathcal{L}_{n}$ $=$ $
o(n)$ can always be assured.
Further, let $\vartheta $ be the maximum transform: $\vartheta ([\mathcal{X}
_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $=$ $\max_{1\leq i\leq \mathcal{L}_{n}}|
\mathcal{X}_{n}(i)|$. If for some non-stochastic $g$ $:$ $\mathbb{N}$\ $
\rightarrow $ $[0,\infty )$ with $g(n)$ $\rightarrow $ $\infty $ as $n$ $
\rightarrow $ $\infty $, $g(n)\mathcal{X}_{n}(i)$ $\overset{p}{\rightarrow }$
$0$ and $g(n)\mathcal{X}_{n}(i)$\ is uniformly integrable for each $i$, then
$\mathcal{L}_{n}$ $=$ $O(g(n))$ must hold.
\newline
$b.$ If $\mathcal{X}_{n}(i)$ $-$ $\mathcal{Y}_{n}(i)$ $\overset{p}{
\rightarrow }$ $0$ as $n$ $\rightarrow $ $\infty $\ for each $i$ $\in $ $
\mathbb{N}$, then for some non-unique sequence of positive integers $\{
\mathcal{L}_{n}\}$ with $\mathcal{L}_{n}$ $\rightarrow $ $\infty $: $
|\vartheta ([\mathcal{X}_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $-$ $\vartheta ([
\mathcal{Y}_{n}(i)]_{i=1}^{\mathcal{L}_{n}})|$ $\leq $ $|\vartheta ([
\mathcal{X}_{n}(i)$ $-$ $\mathcal{Y}_{n}(i)]_{i=1}^{\mathcal{L}_{n}})|$ $
\overset{p}{\rightarrow }$ $0$. Moreover $\mathcal{L}_{n}$ $=$ $o(n)$ can
always be assured.
Further, let $\vartheta $ be the maximum transform: $\vartheta ([\mathcal{X}
_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $=$ $\max_{1\leq i\leq \mathcal{L}_{n}}|
\mathcal{X}_{n}(i)|$. If for some non-stochastic $g$ $:$ $\mathbb{N}$\ $
\rightarrow $ $[0,\infty )$ with $g(n)$ $\rightarrow $ $\infty $ as $n$ $
\rightarrow $ $\infty $, $g(n)(\mathcal{X}_{n}(i)$ $-$ $\mathcal{Y}_{n}(i))$
$\overset{p}{\rightarrow }$ $0$ and $g(n)(\mathcal{X}_{n}(i)$ $-$ $\mathcal{Y
}_{n}(i))$ is uniformly integrable for each $i$, then $\mathcal{L}_{n}$ $=$ $
O(g(n))$ must hold.
\end{lemma}
\begin{remark}
\normalfont$\mathcal{L}_{n}$ $=$ $o(n)$ is required for sample correlation
consistency.
\end{remark}
\begin{remark}
\normalfont Due to difficulties associated with having a general plug-in and
therefore non-Gaussian approximation theory without imposing dependence or
heterogeneity restrictions, we only bound $\mathcal{L}_{n}$ for the maximum
case using classic arguments. An improved upper bound on $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ is possible with additional assumptions. The
Gaussian approximation or extreme value theoretic approaches yield (possibly
far) sharper bounds, but require information that generally does not hold
when $X_{n}(i)$ is a filtered functional of a plug-in estimator as discussed
in Section \ref{sec:intro}. Indeed, under Gaussianity the
Sudakov--Fernique inequality is available
\citep[see,
e.g.,][]{Chatterjee2005,Chernozhukov_etal2015}. See also the tools developed
in \citet[Sections 6 and 7]{ZhangWu2017}: in their stationary functional
dependence setting, a sharper bound than (\ref{Pmax}), below, is available
for a Gaussian approximation \citep[see, e.g.,][Theorem 6.2]{ZhangWu2017}.
\end{remark}
\begin{remark}
\normalfont In principle we can let $\{g(n)\}_{n=1}^{\infty }$ be an array $
\{g_{n}(i)$ $:$ $i$ $=$ $1,2,...\}_{n=1}^{\infty }$ allowing for a different
rate of convergence for each $i$, e.g. $g_{n}(i)\mathcal{X}_{n}(i)$ $\overset
{p}{\rightarrow }$ $0$. We only treat a single sequence $\{g(n)\}_{n=1}^{
\infty }$ for brevity, and given the environment in which we apply the
theory.\
\end{remark}
\noindent \textbf{Proof.}
\newline
\textbf{Claim (a).}\qquad By assumption each $\mathcal{X}_{n}(i)$ $\overset{p
}{\rightarrow }$ $0$, therefore $\vartheta ([\mathcal{X}_{n}(i)]_{i=1}^{k})$
$\overset{p}{\rightarrow }$ $0$ for each $k$. Define $\mathcal{A}_{k,n}$ $
\equiv $ $1$ $-$ $\exp \{-\vartheta ([\mathcal{X}_{n}(i)]_{i=1}^{k})\}$ $\in
$ $[0,1]$ $a.s$. $\forall (k,n)$, and $\mathcal{P}_{k,n}$ $\equiv $ $
\int_{0}^{\infty }P(\mathcal{A}_{k,n}$ $>$ $\epsilon )d\epsilon $. Note that
$\mathcal{P}_{k,n}$ $\leq $ $\mathcal{P}_{k+1,n}$ because $\mathcal{A}_{k,n}$
$\leq $ $\mathcal{A}_{k+1,n}$ by monotonicity of $\vartheta $. Lebesgue's
dominated convergence theorem, and $\mathcal{A}_{k,n}$ $\overset{p}{
\rightarrow }$ $0$, therefore yield for each $k$:
\begin{equation*}
\lim_{n\rightarrow \infty }\mathcal{P}_{k,n}=\lim_{n\rightarrow \infty
}\int_{0}^{\infty }P\left( \mathcal{A}_{k,n}>\epsilon \right) d\epsilon
=\lim_{n\rightarrow \infty }\int_{0}^{1}P\left( \mathcal{A}_{k,n}>\epsilon
\right) d\epsilon =\int_{0}^{1}\lim_{n\rightarrow \infty }P\left( \mathcal{A}
_{k,n}>\epsilon \right) d\epsilon =0.
\end{equation*}
Hence $\lim_{n\rightarrow \infty }\mathcal{P}_{k,n}$ $=$ $0$ for each $k$,
and therefore $\lim_{k\rightarrow \infty }\lim_{n\rightarrow \infty }
\mathcal{P}_{k,n}$ $=$ $0$.
Now apply Lemma \ref{lm:array_conv} to $\mathcal{P}_{k,n}$ to deduce that
there exists a positive integer sequence $\{\mathcal{L}_{n}\}$ that is not
unique, where $\mathcal{L}_{n}$ $\rightarrow $ $\infty $, and $\mathcal{L}
_{n}$ $=$ $o(n)$ is always feasible, such that $\lim_{n\rightarrow \infty }
\mathcal{P}_{\mathcal{L}_{n},n}$ $=$ $\lim_{n\rightarrow \infty
}\int_{0}^{1}P(\mathcal{A}_{\mathcal{L}_{n},n}$ $>$ $\epsilon )d\epsilon $ $
= $ $0$. Therefore, by construction $E[\mathcal{A}_{\mathcal{L}_{n},n}]$ $=$
$\int_{0}^{1}P(\mathcal{A}_{\mathcal{L}_{n},n}$ $>$ $\epsilon )d\epsilon $ $
\rightarrow $ $0$. Hence $\mathcal{A}_{\mathcal{L}_{n},n}$ $\overset{p}{
\rightarrow }$ $0$ by Markov's inequality, which yields $\vartheta ([
\mathcal{X}_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $\overset{p}{\rightarrow }$ $0$
as claimed.
Now consider an upper bound on $\mathcal{L}_{n}$ $\rightarrow $ $\infty $
when $\vartheta $ is the maximum transform $\vartheta ([\mathcal{X}
_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $=$ $\max_{1\leq i\leq \mathcal{L}_{n}}|
\mathcal{X}_{n}(i)|$. By assumption $g(n)|\mathcal{X}_{n}(i)|$ $\overset{p}{
\rightarrow }$ $0$ $\forall i$ $\in $ $\mathbb{N}$ for some non-random
positive $g(n)$ $\rightarrow $ $\infty $. Bonferroni and Markov inequalities
yield:
\begin{eqnarray}
P\left( \max_{1\leq i\leq \mathcal{L}_{n}}\left\vert \mathcal{X}
_{n}(i)\right\vert >\eta \right) &=&P\left( \bigcup_{i=1}^{\mathcal{L}
_{n}}\left\vert \mathcal{X}_{n}(i)\right\vert >\eta \right) \leq \sum_{i=1}^{
\mathcal{L}_{n}}P\left( \left\vert \mathcal{X}_{n}(i)\right\vert >\eta
\right) \label{Pmax} \\
&\leq &\frac{1}{\eta }\sum_{i=1}^{\mathcal{L}_{n}}E\left\vert \mathcal{X}
_{n}(i)\right\vert =\frac{\mathcal{L}_{n}}{\eta g(n)}\frac{1}{\mathcal{L}_{n}
}\sum_{i=1}^{\mathcal{L}_{n}}g(n)E\left\vert \mathcal{X}_{n}(i)\right\vert .
\notag
\end{eqnarray}
Since $g(n)|\mathcal{X}_{n}(i)|$ $\overset{p}{\rightarrow }$ $0$ $\forall i$
$\in $ $\mathbb{N}$, if $g(n)\mathcal{X}_{n}(i)$ is also uniformly
integrable for each $i$ then $g(n)E|\mathcal{X}_{n}(i)|$ $\rightarrow $ $0$ $
\forall i$ $\in $ $\mathbb{N}$ \citep[e.g.][Theorem
3.5]{Billingsley1999}. Thus $1/\mathcal{L}_{n}\sum_{i=1}^{\mathcal{L}
_{n}}g(n)E|\mathcal{X}_{n}(i)|$ $\rightarrow $ $0$, hence $P(\max_{1\leq
i\leq \mathcal{L}_{n}}|\mathcal{X}_{n}(i)|$ $>$ $\eta )$ $=$ $o(\mathcal{L}
_{n}/g(n))$. Therefore any $\mathcal{L}_{n}$ $=$ $O(g(n))$ yields $
\max_{1\leq i\leq \mathcal{L}_{n}}|\mathcal{X}_{n}(i)|$ $\overset{p}{
\rightarrow }$ $0$ as required.
\newline
\textbf{Claim (b).}\qquad The mapping $\vartheta $ satisfies the triangle
inequality and $\vartheta (\cdot )$ $\geq $ $0$. Apply the inequality twice
to yield
\begin{equation*}
\vartheta \left( \left[ \mathcal{X}_{n}(i)\right] _{i=1}^{\mathcal{L}
_{n}}\right) \leq \vartheta \left( \left[ \mathcal{Y}_{n}(i)\right] _{i=1}^{
\mathcal{L}_{n}}\right) +\vartheta \left( \left[ \mathcal{X}_{n}(i)-\mathcal{
Y}_{n}(i)\right] _{i=1}^{\mathcal{L}_{n}}\right)
\end{equation*}
and
\begin{equation*}
\vartheta \left( \left[ \mathcal{Y}_{n}(i)\right] _{i=1}^{\mathcal{L}
_{n}}\right) \leq \vartheta \left( \left[ \mathcal{X}_{n}(i)\right] _{i=1}^{
\mathcal{L}_{n}}\right) +\vartheta \left( \left[ \mathcal{X}_{n}(i)-\mathcal{
Y}_{n}(i)\right] _{i=1}^{\mathcal{L}_{n}}\right)
\end{equation*}
hence
\begin{equation*}
\left\vert \vartheta \left( \left[ \mathcal{X}_{n}(i)\right] _{i=1}^{
\mathcal{L}_{n}}\right) -\vartheta \left( \left[ \mathcal{Y}_{n}(i)\right]
_{i=1}^{\mathcal{L}_{n}}\right) \right\vert \leq \vartheta \left( \left[
\mathcal{X}_{n}(i)-\mathcal{Y}_{n}(i)\right] _{i=1}^{\mathcal{L}_{n}}\right)
.
\end{equation*}
Now apply (a) to $\mathcal{X}_{n}(i)$ $-$ $\mathcal{Y}_{n}(i)$ to yield $
\vartheta ([\mathcal{X}_{n}(i)$ $-$ $\mathcal{Y}_{n}(i)]_{i=1}^{\mathcal{L}
_{n}})$ $\overset{p}{\rightarrow }$ $0$. The upper bound on $\mathcal{L}_{n}$
follows from the Claim (a) argument. $\mathcal{QED}$.
Lemma \ref{lm:max_dist} similarly uses Lemma \ref{lm:array_conv} as the
basis for \textit{in distribution} convergence of a function of an
increasing set of random variables. This result is used to prove the
Gaussian approximation Lemma \ref{lm:clt_max}. The following result,
however, does not use any distributional assumptions other than convergence.
\begin{lemma}
\label{lm:max_dist}Let $\{\mathcal{X}_{n}(i)$ $:$ $i$ $\in $ $\mathbb{N}
\}_{n\geq 1}$ be an array of random variables, and assume $\{\mathcal{X}
_{n}(i)$ $:$ $1$ $\leq $ $i$ $\leq $ $\mathcal{L}\}$ $\overset{d}{
\rightarrow }$ $\{\mathcal{X}(i)$ $:$ $1$ $\leq $ $i$ $\leq $ $\mathcal{L}\}$
for each $\mathcal{L}$ $\in $ $\mathbb{N}$, where $\{\mathcal{X}(i)$ $:$ $1$
$\leq $ $i$ $\leq $ $\infty \}$ is a stochastic process. Then $\vartheta ([
\mathcal{X}_{n}(i)]_{i=1}^{\mathcal{L}_{n}})$ $\overset{d}{\rightarrow }$ $
\vartheta ([\mathcal{X}(i)]_{i=1}^{\infty })$ for some non-unique sequence
of positive integers $\{\mathcal{L}_{n}\}$, where $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$.
\end{lemma}
\noindent \textbf{Proof.}
\linebreak \textbf{Step 1.}\qquad\
Convergence in finite dimensional distributions, continuity of $\vartheta
(\cdot )$ $\geq $ $0$, and the mapping theorem yield $\vartheta ([\mathcal{X}
_{n}(i)]_{i=1}^{k})$ $\overset{d}{\rightarrow }$ $\vartheta ([\mathcal{X}
(i)]_{i=1}^{k})$ for each $k$ $\in $ $\mathbb{N}$. By the definition of
distribution convergence:
\begin{equation*}
P\left( \vartheta \left( \left[ \mathcal{X}_{n}(i)\right] _{i=1}^{k}\right)
\geq x\right) \rightarrow P\left( \vartheta \left( \left[ \mathcal{X}(i)
\right] _{i=1}^{k}\right) \geq x\right) \text{ for each }k\in \mathbb{N}
\text{ and }x\in \lbrack 0,\infty ).
\end{equation*}
Similarly, by the mapping theorem $P(\exp \{-\vartheta ([\mathcal{X}
_{n}(i)]_{i=1}^{k})\}$ $\geq $ $x)$ $\rightarrow $ $P(\exp \{-\vartheta ([
\mathcal{X}(i)]_{i=1}^{k})\}$ $\geq $ $x)$ for each $x$ $\in $ $[0,1].$ Now
define:
\begin{eqnarray*}
&&\mathcal{I}_{k,n}\equiv \int_{0}^{1}P\left( \exp \left\{ -\vartheta \left(
\left[ \mathcal{X}_{n}(i)\right] _{i=1}^{k}\right) \right\} \geq x\right) dx,
\\
&&\mathcal{I}_{k}\equiv \int_{0}^{1}P\left( \exp \left\{ -\vartheta \left(
\left[ \mathcal{X}(i)\right] _{i=1}^{k}\right) \right\} \geq x\right) dx
\text{ and }\mathcal{I}\equiv \int_{0}^{1}P\left( \exp \left\{ -\vartheta
\left( \left[ \mathcal{X}(i)\right] _{i=1}^{\infty }\right) \right\} \geq
x\right) dx.
\end{eqnarray*}
Notice $\mathcal{I}_{k,n}$, $\mathcal{I}_{k}$\ and $\mathcal{I}$\ are well
defined for any $k$ and $n$ because $\vartheta (\cdot )$ $\geq $ $0$ and $
\exp \{-\vartheta (\cdot )\}$ $\in $ $[0,1]$.\ Then by Lebesgue's dominated
convergence theorem:
\begin{equation*}
\lim_{n\rightarrow \infty }\int_{0}^{1}\left\vert P\left( \exp \left\{
-\vartheta \left( \left[ \mathcal{X}_{n}(i)\right] _{i=1}^{k}\right)
\right\} \geq x\right) -P\left( \exp \left\{ -\vartheta \left( \left[
\mathcal{X}(i)\right] _{i=1}^{k}\right) \right\} \geq x\right) \right\vert
dx=0
\end{equation*}
and
\begin{equation*}
\lim_{k\rightarrow \infty }\int_{0}^{1}\left\vert P\left( \exp \left\{
-\vartheta \left( \left[ \mathcal{X}(i)\right] _{i=1}^{k}\right) \right\}
\geq x\right) -P\left( \exp \left\{ -\vartheta \left( \left[ \mathcal{X}(i)
\right] _{i=1}^{\infty }\right) \right\} \geq x\right) \right\vert dx=0.
\end{equation*}
Scheff\'{e}'s lemma now yields $\lim_{n\rightarrow \infty }\mathcal{I}_{k,n}$
$=$ $\mathcal{I}_{k}$ for each $k$, and $\lim_{k\rightarrow \infty }\mathcal{
I}_{k}$ $=$ $\mathcal{I}$. Now apply Lemma \ref{lm:array_conv} to $\mathcal{I
}_{k,n}$ to yield $\lim_{n\rightarrow \infty }\mathcal{I}_{\mathcal{L}_{n},n}
$ $=$ $\mathcal{I}$ for some non-unique, monotonically increasing positive
integer sequence $\{\mathcal{L}_{n}\}$, where $\mathcal{L}_{n}$ $\rightarrow
$ $\infty $. Therefore:
\begin{equation*}
\lim_{n\rightarrow \infty }\int_{0}^{1}P\left( \exp \left\{ -\vartheta
\left( \left[ \mathcal{X}_{n}(i)\right] _{i=1}^{\mathcal{L}_{n}}\right)
\right\} \geq x\right) dx=\int_{0}^{1}P\left( \exp \left\{ -\vartheta \left(
\left[ \mathcal{X}(i)\right] _{i=1}^{\infty }\right) \right\} \geq x\right)
dx,
\end{equation*}
hence identically
\begin{equation}
\lim_{n\rightarrow \infty }E[\exp \{-\vartheta ([\mathcal{X}_{n}(i)]_{i=1}^{
\mathcal{L}_{n}})\}]=E\left[ \exp \left\{ -\vartheta \left( \left[ \mathcal{X
}(i)\right] _{i=1}^{\infty }\right) \right\} \right] . \label{EE_conv}
\end{equation}
Moreover, in view of Lemma \ref{lm:array_conv}, (\ref{EE_conv}) holds for
any other monotonic sequence of positive integers $\{\mathcal{\tilde{L}}
_{n}\}_{n=1}^{\infty }$, $\mathcal{\tilde{L}}_{n}$ $\rightarrow $ $\infty $
that satisfies $\mathcal{\tilde{L}}_{n}/\mathcal{L}_{n}$ $\rightarrow $ $0$,
hence $\mathcal{L}_{n}$ $=$ $o(n)$ is always feasible.
\newline
\textbf{Step 2.}\qquad\ Repeat Step 1 to deduce that for each $s$ $\in $ $
\mathbb{N}$ and some non-unique, positive, monotonically increasing integer
sequences $\{\mathcal{L}_{s}(n)\}$, $\mathcal{L}_{s}(n)$ $=$ $o(n)$ and $
\mathcal{L}_{s}(n)$ $\rightarrow $ $\infty $:
\begin{equation}
\lim_{n\rightarrow \infty }E\left[ \exp \left\{ -\vartheta \left( \left[
\mathcal{X}_{n}(i)\right] _{i=1}^{\mathcal{L}_{s}(n)}\right) \right\} ^{s}
\right] =E\left[ \exp \left\{ -\vartheta \left( \left[ \mathcal{X}(i)\right]
_{i=1}^{\infty }\right) \right\} ^{s}\right] . \label{EE}
\end{equation}
Each $\{\mathcal{L}_{s}(n)\}$ is monotonic, and any other sequence $\{
\mathcal{\tilde{L}}_{n}\}_{n=1}^{\infty }$, $\mathcal{\tilde{L}}_{n}$ $
\rightarrow $ $\infty $ that satisfies $\mathcal{\tilde{L}}_{n}/\mathcal{L}
_{s}(n)$ $\rightarrow $ $0$ and $\mathcal{\tilde{L}}_{n}$ $=$ $o(n)$\ is
also valid. Hence, there exists a non-unique monotonic sequence of positive
integers $\{\mathcal{L}_{n}\}$ such that $\mathcal{L}_{n}$ $\rightarrow $ $
\infty $, $\mathcal{L}_{n}$ $=$ $o(n)$,\ and $\limsup_{n\rightarrow \infty
}\{\mathcal{L}_{n}/\mathcal{L}_{s}(n)\}$ $<$ $1$ for each $s$, that
satisfies (\ref{EE}) for each $s$. Therefore:
\begin{equation}
\lim_{n\rightarrow \infty }E\left[ \exp \left\{ -\vartheta \left( \left[
\mathcal{X}_{n}(i)\right] _{i=1}^{\mathcal{L}_{n}}\right) \right\} ^{s}
\right] =E\left[ \exp \left\{ -\vartheta \left( \left[ \mathcal{X}(i)\right]
_{i=1}^{\infty }\right) \right\} ^{s}\right] \text{ }\quad \forall s\in
\mathbb{N}. \label{EsEs}
\end{equation}
Property (\ref{EsEs}) implies $\exp \{-\vartheta ([\mathcal{X}
_{n}(i)]_{i=1}^{\mathcal{L}_{n}})\}$ $\overset{d}{\rightarrow }$ $\exp
\{-\vartheta ([\mathcal{X}(i)]_{i=1}^{\infty })\}$
\citep[][Theorem
30.2]{Billingsley1995}. The claim now follows by the mapping theorem. $
\mathcal{QED}$.
Let $h$ $\geq $ $0$. Recall $\rho (h)$ $\equiv $ $E[\epsilon _{t}\epsilon
_{t-h}]/E[\epsilon _{t}^{2}]$ and
\begin{eqnarray*}
&&G_{t}(\phi )\equiv \left[ \frac{\partial }{\partial \phi ^{\prime }}
f(x_{t-1},\phi ),\boldsymbol{0}_{k_{\delta }}^{\prime }\right] ^{\prime }\in
\mathbb{R}^{k_{\theta }}\text{ \ and \ }s_{t}(\theta )\equiv \frac{1}{2}
\frac{\partial }{\partial \theta }\ln \sigma _{t}^{2}(\theta ) \\
&&\mathcal{D}(h)\equiv E\left[ \left( \epsilon _{t}s_{t}+G_{t}/\sigma
_{t}\right) \epsilon _{t-h}\right] +E\left[ \epsilon _{t}\left( \epsilon
_{t-h}s_{t-h}+G_{t-h}/\sigma _{t-h}\right) \right] \in \mathbb{R}^{k_{\theta
}} \\
&&z_{t}(h)\equiv r_{t}(h)-\rho (h)r_{t}(0)\text{ where }r_{t}(h)\equiv \frac{
\epsilon _{t}\epsilon _{t-h}-E\left[ \epsilon _{t}\epsilon _{t-h}\right] -
\mathcal{D}(h)^{\prime }\mathcal{A}m_{t}}{E\left[ \epsilon _{t}^{2}\right] },
\end{eqnarray*}
where $m_{t}$ and $\mathcal{A}$\ appear in plug-in expansion Assumption \ref
{assum:plug}.c: $\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $=$ $\mathcal{
A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $O_{p}(n^{-\zeta })$ for
some $\zeta $ $>$ $0$. The following two lemmas are based on standard
arguments and are therefore proved in \citet[Appendix F]{HillMotegi_supp_mat}
.
\begin{lemma}
\label{lm:expansion}Under Assumptions \ref{assum:dgp} and \ref{assum:plug}:
for some $\zeta $ $>$ $0$ that appears in Assumption \ref{assum:plug}.c, $
\mathcal{X}_{n}(h)$ $\equiv $ $|\sqrt{n}\{\hat{\rho}_{n}(h)$ $-$ $\rho (h)\}$
$-$ $1/\sqrt{n}\sum\nolimits_{t=1+h}^{n}\{r_{t}(h)$ $-$ $\rho (h)r_{t}(0)\}|$
$=$ $O_{p}(1/n^{\min \{\zeta ,1/2\}})$ for each $h$.
\end{lemma}
\begin{lemma}
\label{lm:conv_fdd}Let Assumptions \ref{assum:dgp} and \ref{assum:plug}
hold, and write $\mathcal{Z}_{n}(h)$ $\equiv $ $1/\sqrt{n}
\sum_{t=1+h}^{n}z_{t}(h)$. For each $\mathcal{L}$ $\in $ $\mathbb{N}$: $\{
\mathcal{Z}_{n}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ $\overset{d}{
\rightarrow }$ $\{\mathcal{Z}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$,
where $\{\mathcal{Z}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ is
a zero mean Gaussian process with variance $\lim_{n\rightarrow \infty
}n^{-1}\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(h)]$ $\in $ $(0,\infty )$, and
covariance function \linebreak $\lim_{n\rightarrow \infty
}n^{-1}\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(\tilde{h})]$.
\end{lemma}
\noindent \textbf{Proof of Lemma \ref{lm:corr_expan}.}\qquad Recall
Assumption \ref{assum:dgp}.c states\ $\hat{\omega}_{n}(h)$ $=$ $\omega
(h)+O_{p}(n^{-\kappa })$ for some $\kappa $ $>$ $0$, and under Assumption
\ref{assum:plug}.c, $\sqrt{n}(\hat{\theta}_{n}$ $-$ $\theta _{0})$ $=$ $
\mathcal{A}n^{-1/2}\sum_{t=1}^{n}m_{t}(\theta _{0})$ $+$ $O_{p}(n^{-\zeta })$
for some $\zeta $ $>$ $0$.
Property (\ref{NED_var}) applies to $r_{t}(h)$ $-$ $\rho (h)r_{t}(0)$ under
Assumptions \ref{assum:dgp} and \ref{assum:plug}, cf. Theorem 17.8 in \cite
{Davidson1994}, hence $1/\sqrt{n}\sum_{t=1+h}^{n}\{r_{t}(h)$ $-$ $\rho
(h)r_{t}(0)\}$ $=$ $O_{p}(1)$. By Lemma \ref{lm:expansion}:
\begin{equation*}
\mathcal{X}_{n}(h)\equiv \left\vert \sqrt{n}\left\{ \hat{\rho}_{n}(h)-\rho
(h)\right\} -\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\left\{ r_{t}(h)-\rho
(h)r_{t}(0)\right\} \right\vert =O_{p}\left( 1/n^{\min \{\zeta
,1/2\}}\right) .
\end{equation*}
Therefore:
\begin{eqnarray}
\left\vert \mathcal{\tilde{X}}_{n}(h)\right\vert &\equiv &\left\vert \sqrt{n
}\hat{\omega}_{n}(h)\left\{ \hat{\rho}_{n}(h)-\rho (h)\right\} -\omega (h)
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\left\{ r_{t}(h)-\rho (h)r_{t}(0)\right\}
\right\vert \label{rn_ro_h} \\
&\leq &\left\vert \omega (h)\right\vert \times \mathcal{X}_{n}(h)+\left\vert
\hat{\omega}_{n}(h)-\omega (h)\right\vert \times \mathcal{X}_{n}(h) \notag
\\
&&+\left\vert \hat{\omega}_{n}(h)-\omega (h)\right\vert \times \left\vert
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\left\{ r_{t}(h)-\rho (h)r_{t}(0)\right\}
\right\vert =O_{p}\left( 1/n^{\min \{\zeta ,1/2\}}\right) +O_{p}\left(
1/n^{\kappa }\right) \notag \\
&=&O_{p}\left( 1/n^{\min \{\zeta ,\kappa ,1/2\}}\right) , \notag
\end{eqnarray}
which implies $(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))|\mathcal{\tilde{X}}
_{n}(h)|$ $\overset{p}{\rightarrow }$ $0$. Setting $\hat{\rho}_{n}(h)$ $=$ $0
$ for any $h$ $\notin $ $\{0,...,n\}$, the claims now follow by applications
of Lemma \ref{lm:max_p} to $\{|\mathcal{\tilde{X}}_{n}(h)|\}_{h\in \mathbb{N}
}$. In particular, by (\ref{rn_ro_h}) and Lemma \ref{lm:max_p}, if $(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))\mathcal{\tilde{X}}_{n}(h)$ is uniformly integrable for each $h$, then $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$. $\mathcal{QED}$.
\newline
\textbf{Proof of Lemma \ref{lm:clt_max}.}\qquad Lemma \ref{lm:conv_fdd}
implies $\{\mathcal{Z}_{n}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ $
\overset{d}{\rightarrow }$ $\{\mathcal{Z}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $
\mathcal{L}\}$ for each $\mathcal{L}$ $\in $ $\mathbb{N}$, where $\{\mathcal{
Z}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ is a zero mean Gaussian
process. Now apply Lemma \ref{lm:max_dist} to prove the claim. $\mathcal{QED}
$.
The proof of Theorem \ref{th:p_dep_wild_boot} requires the following uniform
laws and probability bound, and weak convergence of the bootstrapped
p-value. The first result is rudimentary and therefore proved in
\citet[Appendix F]{HillMotegi_supp_mat}. Recall that $m_{t}$ are the
estimating equations in Assumption \ref{assum:plug}.c$^{\prime }$.
\begin{lemma}
\label{lm:unif_law}Under Assumptions \ref{assum:dgp} and \ref{assum:plug}
.a,b,c$^{\prime }$,d: $\sup_{\theta \in \Theta }||1/n\sum_{t=1}^{n}\omega
_{t}(\partial /\partial \theta )m_{t}(\theta )||$ $\overset{p}{\rightarrow }$
$0$, \linebreak $\sup_{\theta \in \Theta }||1/n\sum_{t=1}^{n}(\partial
/\partial \theta )m_{t}(\theta )$ $-$ $E[(\partial /\partial \theta
)m_{t}(\theta )]||$ $\overset{p}{\rightarrow }$ $0$, and $1/\sqrt{n}
\sum_{t=1+h}^{n}\omega _{t}m_{t}$ $=$ $O_{p}(1).$
\end{lemma}
Let $\Rightarrow ^{p}$ denote weak convergence in probability on $l_{\infty
} $ (the space of bounded functions) as defined in
\citet[Section
3]{GineZinn1990}. Recall by Lemma \ref{lm:clt_max} that $|\vartheta ([
\mathcal{Z}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $-$ $\vartheta ([\mathcal{Z}
(h)]_{h=1}^{\mathcal{L}_{n}})|$ $\overset{p}{\rightarrow }$ $0$ for some
zero mean Gaussian process $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$
with variance $\lim_{n\rightarrow \infty
}n^{-1}\sum_{s,t=1}^{n}E[z_{s}(h)z_{t}(h)]$ $<$ $\infty $. Define the
sample:
\begin{equation*}
\mathfrak{X}_{n}\equiv \left\{ m_{t},x_{t},y_{t}\right\} _{t=1}^{n}.
\end{equation*}
\begin{lemma}
\label{lm:max_cor*}Let Assumptions \ref{assum:dgp} and \ref{assum:plug}.a,b,c
$^{\prime }$,d hold.
\newline
$a.$ For each $\mathcal{L}$ $\in $ $\mathbb{N}$, $\{\sqrt{n}\hat{\rho}
_{n}^{(dw)}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ $\Rightarrow ^{p}
$ $\{\mathcal{\mathring{Z}}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\},$
where $\{\mathcal{\mathring{Z}}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is an
independent copy of $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$.
\newline
$b.$ For some monotonic sequence of positive integers $\{\mathcal{L}_{n}\}$,
$\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$:
\begin{equation*}
\sup_{c>0}\left\vert P\left( \vartheta \left( \left[ \hat{\omega}_{n}(h)
\sqrt{n}\hat{\rho}_{n}^{(dw)}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \leq
c|\mathfrak{X}_{n}\right) -P\left( \vartheta \left( \left[ \omega (h)
\mathcal{\mathring{Z}}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \leq
c\right) \right\vert \overset{p}{\rightarrow }0.
\end{equation*}
Finally, $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$
must be satisfied.
\end{lemma}
\noindent \textbf{Proof.}\qquad
\newline
\textbf{Claim (a).}\qquad Let $\{\varphi _{t}\}_{t=1}^{n}$ be a draw of the
auxiliary variables, and write
\begin{equation}
\rho _{n}^{\ast }(h)\equiv \frac{1}{E\left[ \epsilon _{t}^{2}\right] }\frac{1
}{n}\sum_{t=1+h}^{n}\varphi _{t}\left\{ \mathcal{E}_{t,h}-E\left[ \mathcal{E}
_{1,h}\right] \right\} \text{ where }\mathcal{E}_{t,h}\equiv \epsilon
_{t}\epsilon _{t-h}-\mathcal{D}(h)^{\prime }\mathcal{A}m_{t}. \label{g*h}
\end{equation}
Recall $\widehat{\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})$ $\equiv $ $\epsilon
_{t}(\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})$ $-$ $\mathcal{\hat{D
}}_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}m_{t}(\hat{\theta}_{n})$, and:
\begin{equation*}
\hat{\rho}_{n}^{(dw)}(h)\equiv \frac{1}{1/n\sum_{t=1}^{n}\epsilon _{t}^{2}(
\hat{\theta}_{n})}\frac{1}{n}\sum_{t=1+h}^{n}\varphi _{t}\left\{ \widehat{
\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})-\frac{1}{n}\sum_{s=1+h}^{n}\widehat{
\mathcal{E}}_{n,s,h}(\hat{\theta}_{n})\right\} .
\end{equation*}
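As a computational aside, one draw of $\hat{\rho}_{n}^{(dw)}(h)$ requires only the centered $\widehat{\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})$ and block-level multipliers. A minimal sketch follows (Python/NumPy, not production code); the inputs \texttt{eps}, \texttt{m}, \texttt{D\_h}, \texttt{A} and the block length \texttt{b} are assumed to hold $\epsilon _{t}(\hat{\theta}_{n})$, $m_{t}(\hat{\theta}_{n})$, $\mathcal{\hat{D}}_{n}(h)$, $\widehat{\mathcal{A}}_{n}$ and $b_{n}$.
\begin{verbatim}
import numpy as np

def rho_dw(eps, m, D_h, A, h, b, rng):
    # E_hat_{n,t,h} = eps_t eps_{t-h} - D_n(h)' A_n m_t, for t = h+1,...,n
    n = len(eps)
    E_hat = eps[h:] * eps[:n - h] - m[h:] @ (A.T @ D_h)
    E_hat = E_hat - E_hat.sum() / n          # centre by (1/n) sum over t = h+1,...,n
    # dependent wild bootstrap multipliers: phi_t = xi_s on block s of length b
    xi = rng.standard_normal(int(np.ceil(n / b)))
    phi = np.repeat(xi, b)[:n]
    return (phi[h:] @ E_hat) / n / np.mean(eps ** 2)

# Example use: rng = np.random.default_rng(0); then sqrt(n) * rho_dw(...) is
# one bootstrapped component sqrt(n) * rho_n^(dw)(h).
\end{verbatim}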
Let $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ be the Lemma \ref
{lm:clt_max} Gaussian process. It suffices to show:
\begin{eqnarray}
&&\left\{ \sqrt{n}\rho _{n}^{\ast }(h):1\leq h\leq \mathcal{L}\right\}
\Rightarrow ^{p}\left\{ \mathcal{\mathring{Z}}(h):1\leq h\leq \mathcal{L}
\right\}, \label{weak_r} \\
&&\sqrt{n}\left\vert \hat{\rho}_{n}^{(dw)}(h)-\rho _{n}^{\ast
}(h)\right\vert \overset{p}{\rightarrow }0\text{ for each }h\text{,}
\label{rr}
\end{eqnarray}
where $\{\mathcal{\mathring{Z}}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is an
independent copy of $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$. We
shorten the proof by letting $\{\xi _{1},\dots ,\xi _{n/b_{n}}\}$ be iid $N(0,1)$ random variables. The general case, in which the $\xi _{i}$ are iid with $E[\xi _{i}]$ $=$ $0$, $E[\xi _{i}^{2}]$ $=$ $1$ and $E[\xi _{i}^{4}]$ $<$ $\infty $, is similar, except that statements about conditional normality must be replaced with additional steps establishing asymptotic convergence of the conditional distribution.
\ \newline
\textbf{Step 1.}\qquad Consider (\ref{weak_r}). Define $\mathbb{L}$ $\equiv $
$\{1,...,\mathcal{L}\}$. It suffices to prove weak convergence on a Polish
space in the sense of \citet{HoffJorg1984,HoffJorg1991}, cf.
\citet[p. 853
and Theorem 3.1.a]{GineZinn1990}. The latter holds\textit{\ if and only if}
there exists a pseudo metric $d$ on $\mathbb{L}$\ such that $(\mathbb{L},d)$
is a totally bounded pseudo metric space; $\{\sqrt{n}\rho _{n}^{\ast }(h)$ $
: $ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$\ $\overset{d}{\rightarrow }$ $\{
\mathcal{\mathring{Z}}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$; and
the sequence of distributions governing $\{\sqrt{n}\rho _{n}^{\ast }(h)\}_{n\geq 1}$ is stochastically equicontinuous on $\mathbb{L}$. $\mathbb{L}$ is compact, so take $d$ to be the sup-norm. Stochastic equicontinuity
is trivial because $\mathbb{L}$\ is discrete and bounded. It now suffices to
prove convergence in finite dimensional distributions. We follow an argument
given in \citet[proof of Theorem 2]{Hansen1996}.
By construction of $\varphi _{t}$ via $\xi _{t}$:
\begin{equation*}
\rho _{n}^{\ast }(h)=\frac{1}{E\left[ \epsilon _{t}^{2}\right] }\frac{1}{
n/b_{n}}\sum_{s=1}^{n/b_{n}}\xi _{s}\frac{1}{b_{n}}
\sum_{t=(s-1)b_{n}+1+h}^{sb_{n}}\left\{ \mathcal{E}_{t,h}-E\left[ \mathcal{E}
_{1,h}\right] \right\} =\frac{1}{E\left[ \epsilon _{t}^{2}\right] }\frac{1}{
n/b_{n}}\sum_{s=1}^{n/b_{n}}\xi _{s}\frac{1}{b_{n}}\mathfrak{E}_{n,h},
\end{equation*}
say, where $\mathfrak{E}_{n,h}$ $\equiv $ $\sum_{t=(s-1)b_{n}+1+h}^{sb_{n}}\{
\mathcal{E}_{t,h}$ $-$ $E\left[ \mathcal{E}_{1,h}\right] \}$. Operate
conditionally on $\mathfrak{X}_{n}$ $\equiv $ $\{m_{t},x_{t},y_{t}
\}_{t=1}^{n}$, and write $E_{\mathfrak{X}_{n}}[\cdot ]$ $\equiv $ $E[\cdot |
\mathfrak{X}_{n}]$. By joint Gaussianity and independence of $\xi _{s}$, $
\{\sqrt{n}\rho _{n}^{\ast }(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$
is for each $\mathcal{L}$ $\in $ $\mathbb{N}$ a zero mean Gaussian process
with covariance function $nE_{\mathfrak{X}_{n}}[\rho _{n}^{\ast }(h)\rho
_{n}^{\ast }(\tilde{h})]$ $=$ $1/n\sum_{s=1}^{n/b_{n}}\mathfrak{E}_{n,h}
\mathfrak{E}_{n,\tilde{h}}/(E[\epsilon _{t}^{2}])^{2}$. Observe:
\begin{eqnarray}
&&\lim_{n\rightarrow \infty }E\left[ nE_{\mathfrak{X}_{n}}\left[ \rho
_{n}^{\ast }(h)\rho _{n}^{\ast }(\tilde{h})\right] \right] \label{limEnE} \\
&&\text{ \ \ \ \ \ \ \ \ }=\frac{1}{\left[ E\left[ \epsilon _{t}^{2}\right]
\right] ^{2}}\lim_{n\rightarrow \infty }\frac{1}{n}\sum_{s=1}^{n/b_{n}}
\sum_{t,u=(s-1)b_{n}+1+h}^{sb_{n}}E\left[ \left\{ \mathcal{E}_{t,h}-E\left[
\mathcal{E}_{1,h}\right] \right\} \left\{ \mathcal{E}_{u,\tilde{h}}-E\left[
\mathcal{E}_{1,\tilde{h}}\right] \right\} \right] \notag \\
&&\text{ \ \ \ \ \ \ \ \ }=\frac{1}{\left[ E\left[ \epsilon _{t}^{2}\right]
\right] ^{2}}\sum_{i=0}^{\infty }E\left[ \left\{ \mathcal{E}_{1,h}-E\left[
\mathcal{E}_{1,h}\right] \right\} \left\{ \mathcal{E}_{1+i,\tilde{h}}-E\left[
\mathcal{E}_{1,\tilde{h}}\right] \right\} \right] \notag \\
&&\text{ \ \ \ \ \ \ \ \ }=\lim_{n\rightarrow \infty }\frac{1}{n}E\left[
\sum_{t=1}^{n}\frac{\left( \mathcal{E}_{t,h}-E\left[ \mathcal{E}_{t,h}\right]
\right) }{E\left[ \epsilon _{t}^{2}\right] }\sum_{t=1}^{n}\frac{\left(
\mathcal{E}_{t,\tilde{h}}-E\left[ \mathcal{E}_{t,\tilde{h}}\right] \right) }{
E\left[ \epsilon _{t}^{2}\right] }\right] =E\left[ \mathcal{Z}(h)\mathcal{Z}(
\tilde{h})\right] . \notag
\end{eqnarray}
The final equality follows directly from the definition of $\mathcal{Z}(h)$
in Lemma \ref{lm:clt_max}.
Let $\mathfrak{X}$ be the set of samples $\mathfrak{\tilde{X}}_{n}$ such
that $nE_{\mathfrak{\tilde{X}}_{n}}[\rho _{n}^{\ast }(h)\rho _{n}^{\ast }(
\tilde{h})]$ $\overset{p}{\rightarrow }$ $\lim_{n\rightarrow \infty }E[nE_{
\mathfrak{\tilde{X}}_{n}}[\rho _{n}^{\ast }(h)\rho _{n}^{\ast }(\tilde{h})]]$
$=$ $E[\mathcal{Z}(h)\mathcal{Z}(\tilde{h})]$. We will prove:
\begin{equation}
P\left( \mathfrak{X}_{n}\in \mathfrak{X}\right) =1. \label{PXX}
\end{equation}
In conjunction with (\ref{limEnE}), it then follows that the finite
dimensional distributions of $\{\sqrt{n}\rho _{n}^{\ast }(h)$ $:$ $1$ $\leq $
$h$ $\leq $ $\mathcal{L}\}$ converge to those of $\{\mathcal{\mathring{Z}}
(h) $ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$, where $\{\mathcal{
\mathring{Z}}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ is a zero mean
Gaussian process with covariance function $E[\mathcal{Z}(h)\mathcal{Z}(
\tilde{h})]$. Independence of $\xi _{s}$ with respect to the sample $
\mathfrak{X}_{n}$, Gaussianity, and the fact that Gaussian processes are
completely determined by their mean and covariance structure, together imply
$\{\mathcal{\mathring{Z}}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $\mathcal{L}\}$ is
an independent copy of $\{\mathcal{Z}(h)$ $:$ $1$ $\leq $ $h$ $\leq $ $
\mathcal{L}\}$.
Consider (\ref{PXX}). The following exploits arguments presented in
\citet[Appendix]{deJong1997}. Let $\{l_{n}\}$ be any sequence of integers $
l_{n}$ $\in $ $\{1,...,b_{n}\}$ such that $l_{n}$ $\rightarrow $ $\infty $
and $l_{n}$ $=$ $o(b_{n})$. Define:{\small
\begin{equation*}
\mathcal{Y}_{n,s}(h)\equiv \sum_{t=(s-1)b_{n}+l_{n}+1}^{sb_{n}}\left\{
\mathcal{E}_{t,h}-E\left[ \mathcal{E}_{1,h}\right] \right\} \text{, \ }
\mathcal{U}_{n,s}(h)\equiv \sum_{t=(s-1)b_{n}+1}^{(s-1)b_{n}+l_{n}}\left\{
\mathcal{E}_{t,h}-E\left[ \mathcal{E}_{1,h}\right] \right\} \text{, }
\mathcal{R}(h)\equiv -\sum_{t=1}^{h}\{\mathcal{E}_{t,h}-E[\mathcal{E}
_{1,h}]\}.
\end{equation*}
}By construction $\sum_{t=(s-1)b_{n}+1+h}^{sb_{n}}\{\mathcal{E}_{t,h}$ $-$ $
E[\mathcal{E}_{1,h}]\}$ $=$ $\mathcal{Y}_{n,s}(h)$ $+$ $\mathcal{U}_{n,s}(h)$
$+$ $\mathcal{R}(h)$, hence
\begin{eqnarray*}
\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathfrak{E}_{n,h}\mathfrak{E}_{n,\tilde{h}}
&=&\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{Y}_{n,s}(
\tilde{h})+\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{U}_{n,s}(h)\mathcal{U}
_{n,s}(\tilde{h})+\frac{1}{b_{n}}\mathcal{R}(h)\mathcal{R}(\tilde{h}) \\
&&+\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{U}_{n,s}(
\tilde{h})+\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(\tilde{h})
\mathcal{U}_{n,s}(h)+\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)
\mathcal{R}(\tilde{h}) \\
&&+\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(\tilde{h})\mathcal{R}(h)+
\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{U}_{n,s}(h)\mathcal{R}(\tilde{h})+
\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{U}_{n,s}(\tilde{h})\mathcal{R}(h).
\end{eqnarray*}
We will prove that all terms are $o_{p}(1)$ except $1/n\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{Y}_{n,s}(\tilde{h})$, hence:
\begin{equation}
\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathfrak{E}_{n,h}\mathfrak{E}_{n,\tilde{h}}=
\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{Y}_{n,s}(\tilde{h
})+o_{p}(1). \label{EEEE_ZU}
\end{equation}
First, under Assumptions \ref{assum:dgp} and \ref{assum:plug}, $\mathcal{E}
_{t,h}$ is stationary, ergodic and $L_{2}$-bounded. Therefore $||\mathcal{R}(
\tilde{h})||_{2}$ $\leq $ $\sum_{t=1}^{\tilde{h}}||\mathcal{E}_{t,h}$ $-$ $E[
\mathcal{E}_{1,h}]||_{2}$ $\leq $ $K$ for each finite $\tilde{h}$, hence by
the Cauchy-Schwarz inequality $E|b_{n}^{-1}\mathcal{R}(h)\mathcal{R}(\tilde{
h})|$ $\leq $ $K/b_{n}$ $\rightarrow $ $0$.
Second, the NED and moment properties of $\epsilon _{t}$ and $m_{t}$\ in
Assumptions \ref{assum:dgp} and \ref{assum:plug} imply $\mathcal{E}_{t,h}$ $
\equiv $ $\epsilon _{t}\epsilon _{t-h}$ $-$ $\mathcal{D}(h)^{\prime }
\mathcal{A}m_{t}$ is $L_{p}$-bounded, $p$ $>$ $2$, $L_{2}$-NED on an $\alpha
$-mixing base with decay $O(h^{-p/(p-2)})$. Therefore $||1/\sqrt{b_{n}}
\mathcal{Y}_{n,1}(h)||_{2}$ and $||1/\sqrt{l_{n}}\mathcal{U}_{n,1}(\tilde{h}
)||_{2}$ are $O(1)$ by (\ref{NED_var}). Multiply and divide $\mathcal{Y}
_{n,s}(h)$ and $\mathcal{U}_{n,s}(\tilde{h})$\ by $b_{n}$ and $l_{n}$
respectively, and use stationarity, Minkowski and Cauchy-Schwartz
inequalities, and $l_{n}/b_{n}$ $=$ $o(1)$ to yield
\begin{eqnarray*}
&&\left\Vert \frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{U}
_{n,s}(\tilde{h})\right\Vert _{1}=O\left( \left( \frac{l_{n}}{b_{n}}\right)
^{1/2}\left\Vert \frac{1}{\sqrt{b_{n}}}\mathcal{Y}_{n,1}(h)\right\Vert
_{2}\left\Vert \frac{1}{\sqrt{l_{n}}}\mathcal{U}_{n,1}(\tilde{h})\right\Vert
_{2}\right) =O\left( \left( l_{n}/b_{n}\right) ^{1/2}\right) =o(1), \\
&&\left\Vert \frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{R}
_{n}(\tilde{h})\right\Vert _{1}=O\left( \left\Vert \frac{1}{\sqrt{b_{n}}}
\mathcal{Y}_{n,1}(h)\right\Vert _{2}\left\Vert \frac{1}{\sqrt{b_{n}}}
\mathcal{R}_{n}(\tilde{h})\right\Vert _{2}\right) =o(1), \\
&&\left\Vert \frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{U}_{n,s}(h)\mathcal{U}
_{n,s}(\tilde{h})\right\Vert _{1}=O\left( \frac{l_{n}}{b_{n}}\left\Vert
\frac{1}{\sqrt{l_{n}}}\mathcal{U}_{n,1}(h)\right\Vert _{2}\left\Vert \frac{1
}{\sqrt{l_{n}}}\mathcal{U}_{n,1}(\tilde{h})\right\Vert _{2}\right) =o(1), \\
&&\left\Vert \frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{U}_{n,s}(h)\mathcal{R}
_{n}(\tilde{h})\right\Vert _{1}=O\left( \left( \frac{l_{n}}{b_{n}}\right)
^{1/2}\left\Vert \frac{1}{\sqrt{l_{n}}}\mathcal{U}_{n,1}(h)\right\Vert
_{2}\right) =o(1).
\end{eqnarray*}
This proves (\ref{EEEE_ZU}).
Next, de Jong's (\citeyear{deJong1997}: Assumption 2) conditions are
satisfied under the given NED property. Hence, by the proof of de Jong's (
\citeyear{deJong1997}) Theorem 2: $1/n\sum_{s=1}^{n/b_{n}}\mathcal{Y}
_{n,s}^{2}(h)$ $\overset{p}{\rightarrow }$ $\lim_{n\rightarrow \infty
}n^{-1}E[(\sum_{t=1}^{n}\{\mathcal{E}_{t,h}$ $-$ $E[\mathcal{E}
_{1,h}]\})^{2}]$. An identical argument can be used to prove that the
product $\mathcal{Y}_{n,s}(h)\mathcal{Y}_{n,s}(\tilde{h})$\ satisfies:
\begin{equation}
\frac{1}{n}\sum_{s=1}^{n/b_{n}}\mathcal{Y}_{n,s}(h)\mathcal{Y}_{n,s}(\tilde{h
})\overset{p}{\rightarrow }\lim_{n\rightarrow \infty }\frac{1}{n}E\left[
\left( \sum_{t=1}^{n}\left\{ \mathcal{E}_{t,h}-E\left[ \mathcal{E}_{1,h}
\right] \right\} \right) \left( \sum_{t=1}^{n}\left\{ \mathcal{E}_{t,\tilde{h
}}-E\left[ \mathcal{E}_{1,\tilde{h}}\right] \right\} \right) \right] .
\label{ZhZh}
\end{equation}
Property (\ref{PXX}) is proved since combining (\ref{limEnE}), (\ref{EEEE_ZU}
) and (\ref{ZhZh}) yields
\begin{equation*}
nE_{\mathfrak{X}_{n}}\left[ \rho _{n}^{\ast }(h)\rho _{n}^{\ast }(\tilde{h})
\right] \overset{p}{\rightarrow }\lim_{n\rightarrow \infty }E\left[ nE_{
\mathfrak{X}_{n}}\left[ \rho _{n}^{\ast }(h)\rho _{n}^{\ast }(\tilde{h})
\right] \right] =E\left[ \mathcal{Z}(h)\mathcal{Z}(\tilde{h})\right] .
\end{equation*}
\textbf{Step 2.}\qquad Now turn to (\ref{rr}).
\textbf{Step 2.1}\qquad Recall $\mathcal{E}_{t,h}$ $\equiv $ $\epsilon
_{t}\epsilon _{t-h}$ $-$ $\mathcal{D}(h)^{\prime }\mathcal{A}m_{t}$ and $
\widehat{\mathcal{E}}_{n,t,h}(\hat{\theta}_{n})$ $\equiv $ $\epsilon _{t}(
\hat{\theta}_{n})\epsilon _{t-h}(\hat{\theta}_{n})$ $-$ $\mathcal{\hat{D}}
_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}m_{t}(\hat{\theta}_{n})$. We will
prove in Step 2.2 that:
\begin{equation}
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\left\{ \widehat{\mathcal{E}}
_{n,t,h}(\hat{\theta}_{n})-\frac{1}{n}\sum_{s=1+h}^{n}\widehat{\mathcal{E}}
_{n,s,h}(\hat{\theta}_{n})\right\} =\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}
\varphi _{t}\left\{ \mathcal{E}_{t,h}-E\left[ \mathcal{E}_{t,h}\right]
\right\} +o_{p}(1) \label{omEE omEE}
\end{equation}
by showing (\ref{omeeDAm})-(\ref{omDAsumm}), which together are easily seen to imply (\ref{omEE omEE}):
\begin{eqnarray}
&&\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\epsilon _{t}(\hat{\theta}
_{n})\epsilon _{t-h}(\hat{\theta}_{n})=\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}
\varphi _{t}\epsilon _{t}\epsilon _{t-h}+o_{p}(1), \label{omeeDAm} \\
&&\mathcal{\hat{D}}_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}\frac{1}{\sqrt{n
}}\sum_{t=1+h}^{n}\varphi _{t}m_{t}(\hat{\theta}_{n})=\mathcal{D}(h)^{\prime
}\mathcal{A}\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}m_{t}+o_{p}(1),
\label{DAomm} \\
&&\frac{1}{\sqrt{n}}\sum\nolimits_{t=1+h}^{n}\varphi _{t}\frac{1}{n}
\sum_{s=1+h}^{n}\epsilon _{s}(\hat{\theta}_{n})\epsilon _{s-h}(\hat{\theta}
_{n})=\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}E\left[ \epsilon
_{t}\epsilon _{t-h}\right] +o_{p}(1), \label{omSumee} \\
&&\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\mathcal{\hat{D}}
_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}\frac{1}{n}\sum_{t=1+h}^{n}m_{t}(
\hat{\theta}_{n})=o_{p}(1). \label{omDAsumm}
\end{eqnarray}
By the construction of $\varphi _{t}$, for iid $\xi _{s}$\ distributed $
N(0,1)$:
\begin{eqnarray*}
E\left[ \left( \frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\left\{
\mathcal{E}_{t,h}-E\left[ \mathcal{E}_{t,h}\right] \right\} \right) ^{2}
\right] &=&E\left[ \left( \frac{1}{\sqrt{n}}\sum_{s=1}^{n/b_{n}}\xi
_{s}\sum_{t=(s-1)b_{n}+1}^{sb_{n}}\left\{ \mathcal{E}_{t,h}-E
\left[ \mathcal{E}_{t,h}\right] \right\} \right) ^{2}\right] \\
&=&E\left[ \left( \frac{1}{\sqrt{b_{n}}}\sum_{t=1}^{b_{n}}\left\{ \mathcal{E}
_{t,h}-E\left[ \mathcal{E}_{t,h}\right] \right\} \right) ^{2}\right] .
\end{eqnarray*}
Under Assumptions \ref{assum:dgp}.b and \ref{assum:plug}.c$^{\prime }$, (\ref
{NED_var}) applies to $\mathcal{E}_{t,h}$ $-$ $E[\mathcal{E}_{t,h}]$
\citep[Theorems
17.8 and 17.9]{Davidson1994}. Hence $E[(1/\sqrt{b_{n}}\sum_{t=1}^{b_{n}}\{
\mathcal{E}_{t,h}$ $-$ $E\left[ \mathcal{E}_{t,h}\right] \})^{2}]$ $=$ $O(1)$
, and therefore:
\begin{equation}
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\left\{ \mathcal{E}_{t,h}-E
\left[ \mathcal{E}_{t,h}\right] \right\} =O_{p}(1). \label{omEE}
\end{equation}
Further, by application of Lemma \ref{lm:expansion}, $\sqrt{n}\{\hat{\gamma}
_{n}(0)$ $-$ $\gamma (0)\}$ $=$ $n^{-1/2}\sum\nolimits_{t=1}^{n}\{\epsilon
_{t}^{2}$ $-$ $E[\epsilon _{t}^{2}]$ $-$ $\mathcal{D}(0)^{\prime }\mathcal{A}
m_{t}\}$ $+$ $O_{p}(1/\sqrt{n})$. Coupled with stationarity, ergodicity, and
square integrability, this yields:
\begin{equation}
\frac{1}{n}\sum_{t=1}^{n}\epsilon _{t}^{2}(\hat{\theta}_{n})=E\left[
\epsilon _{t}^{2}\right] +o_{p}(1). \label{sume}
\end{equation}
Combine (\ref{omEE omEE}), (\ref{omEE}) and (\ref{sume})\ to yield (\ref{rr}
) as required:{\small
\begin{equation*}
\sqrt{n}\hat{\rho}_{n}^{(dw)}(h)=\frac{1}{1/n\sum_{t=1}^{n}\epsilon _{t}^{2}(
\hat{\theta}_{n})}\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\left\{
\mathcal{E}_{t,h}-E\left[ \mathcal{E}_{t,h}\right] \right\} +o_{p}(1)=\frac{1
}{E\left[ \epsilon _{t}^{2}\right] }\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}
\varphi _{t}\left\{ \mathcal{E}_{t,h}-E\left[ \mathcal{E}_{t,h}\right]
\right\} +o_{p}(1).
\end{equation*}
} \qquad
\textbf{Step 2.2}\qquad We now prove (\ref{omeeDAm})-(\ref{omDAsumm}).
Consider (\ref{omeeDAm}). Since $\varphi _{t}$ is zero mean Gaussian and
independent of the sample, the proof of Lemma \ref{lm:corr_expan} carries
over verbatim to show:
\begin{eqnarray}
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\epsilon _{t}(\hat{\theta}
_{n})\epsilon _{t-h}(\hat{\theta}_{n}) &=&\frac{1}{\sqrt{n}}
\sum_{t=1+h}^{n}\varphi _{t}\epsilon _{t}\epsilon _{t-h}-\sqrt{n}\left( \hat{
\theta}_{n}-\theta _{0}\right) ^{\prime }\frac{1}{n}\sum_{t=1+h}^{n}\varphi
_{t}\left( \epsilon _{t}s_{t}+G_{t}/\sigma _{t}\right) \epsilon _{t-h}
\notag \\
&&-\sqrt{n}\left( \hat{\theta}_{n}-\theta _{0}\right) ^{\prime }\frac{1}{n}
\sum_{t=1+h}^{n}\varphi _{t}\epsilon _{t}\left( \epsilon _{t-h}s_{t-h}+\frac{
G_{t-h}}{\sigma _{t-h}}\right) +o_{p}(1). \label{omee_ee_r}
\end{eqnarray}
By the stated moment bounds and the construction of $\varphi _{t}$ we have:
\begin{eqnarray*}
\frac{1}{n}\sum_{t=1+h}^{n}\varphi _{t}\left( \epsilon
_{t}s_{t}+G_{t}/\sigma _{t}\right) \epsilon _{t-h} &=&\frac{1}{n}
\sum_{t=1}^{n}\varphi _{t}\left( \epsilon _{t}s_{t}+G_{t}/\sigma _{t}\right)
\epsilon _{t-h}+o_{p}(1) \\
&=&\frac{1}{n}\sum_{s=1}^{n/b_{n}}\xi
_{s}\sum_{t=(s-1)b_{n}+1}^{sb_{n}}\left( \epsilon _{t}s_{t}+G_{t}/\sigma
_{t}\right) \epsilon _{t-h}+o_{p}(1).
\end{eqnarray*}
Stationarity, independence of $\xi _{s}$, and $E[(\epsilon _{t}s_{t}$ $+$ $
G_{t}/\sigma _{t})^{2}\epsilon _{t-h}^{2}]$ $<$ $\infty $ under Assumptions
\ref{assum:dgp}.b and \ref{assum:plug}.a,b yield:
\begin{eqnarray*}
E\left[ \left( \frac{1}{n}\sum_{s=1}^{n/b_{n}}\xi _{s}\left\{
\sum_{t=(s-1)b_{n}+1+h}^{sb_{n}}\left( \epsilon _{t}s_{t}+\frac{G_{t}}{
\sigma _{t}}\right) \epsilon _{t-h}\right\} \right) ^{2}\right] &=&\frac{
b_{n}}{n}E\left[ \left\{ \frac{1}{b_{n}}\sum_{t=1}^{b_{n}}\left( \epsilon
_{t}s_{t}+\frac{G_{t}}{\sigma _{t}}\right) \epsilon _{t-h}\right\} ^{2}
\right] \\
&\leq &\frac{b_{n}}{n}\left( \left\Vert \left( \epsilon _{t}s_{t}+\frac{G_{t}
}{\sigma _{t}}\right) \epsilon _{t-h}\right\Vert _{2}\right) ^{2}=o(1).
\end{eqnarray*}
Hence $1/n\sum_{t=1+h}^{n}\varphi _{t}(\epsilon _{t}s_{t}+G_{t}/\sigma
_{t})\epsilon _{t-h}$ $\overset{p}{\rightarrow }$ $0$. Combining that with $
\sqrt{n}(\hat{\theta}_{n}-\theta _{0})$ $=$ $O_{p}(1)$ and (\ref{omee_ee_r})
yields (\ref{omeeDAm}).
Next, (\ref{DAomm}). By Lemma \ref{lm:unif_law}:
\begin{equation}
\sup_{\theta \in \Theta }\left\Vert \frac{1}{n}\sum_{t=1}^{n}\varphi _{t}
\frac{\partial }{\partial \theta }m_{t}(\theta )\right\Vert \overset{p}{
\rightarrow }0\text{ and }\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi
_{t}m_{t}=O_{p}(1). \label{domulln}
\end{equation}
Now write:
\begin{eqnarray*}
\mathcal{\hat{D}}_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}\frac{1}{\sqrt{n}}
\sum_{t=1+h}^{n}\varphi _{t}m_{t}(\hat{\theta}_{n}) &=&\mathcal{D}
(h)^{\prime }\mathcal{A}\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}m_{t}+
\mathcal{D}(h)^{\prime }\mathcal{A}\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi
_{t}\left\{ m_{t}(\hat{\theta}_{n})-m_{t}\right\} \\
&&+\left\{ \mathcal{\hat{D}}_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}-
\mathcal{D}(h)^{\prime }\mathcal{A}\right\} \frac{1}{\sqrt{n}}
\sum_{t=1+h}^{n}\varphi _{t}m_{t} \\
&&+\left\{ \mathcal{\hat{D}}_{n}(h)^{\prime }\widehat{\mathcal{A}}_{n}-
\mathcal{D}(h)^{\prime }\mathcal{A}\right\} \frac{1}{\sqrt{n}}
\sum_{t=1+h}^{n}\varphi _{t}\left\{ m_{t}(\hat{\theta}_{n})-m_{t}\right\} .
\end{eqnarray*}
Note that $\mathcal{\hat{D}}_{n}(h)$ $\overset{p}{\rightarrow }$ $\mathcal{D}
(h)$ by arguments in the proof of Lemma \ref{lm:corr_expan}, and by
supposition $\widehat{\mathcal{A}}_{n}$ $\overset{p}{\rightarrow }$ $
\mathcal{A}$. Moreover, by a mean value theorem argument, Assumption \ref
{assum:plug}.c$^{\prime }$, and (\ref{domulln}):
\begin{equation*}
\left\Vert \frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\left\{ m_{t}(\hat{
\theta}_{n})-m_{t}\right\} \right\Vert \leq \sqrt{n}\left\Vert \hat{\theta}
_{n}-\theta _{0}\right\Vert \times \sup_{\theta \in \Theta }\left\Vert \frac{
1}{n}\sum_{t=1+h}^{n}\varphi _{t}\frac{\partial }{\partial \theta }
m_{t}(\theta )\right\Vert \overset{p}{\rightarrow }0.
\end{equation*}
The latter convergence in probability, combined with (\ref{domulln}),
suffices to prove (\ref{DAomm}).
Proceeding to (\ref{omSumee}), first note that
\begin{equation}
\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\varphi _{t}=b_{n}\frac{1}{\sqrt{n}}
\sum_{s=1}^{n/b_{n}}\xi _{s}=\sqrt{b_{n}}\left( n/b_{n}\right)
^{-1/2}\sum_{s=1}^{n/b_{n}}\xi _{s}=O_{p}\left( \sqrt{b_{n}}\right) .
\label{sum_om}
\end{equation}
Second, by equation (F.5) in the proof of Lemma \ref{lm:expansion} in \cite
{HillMotegi_supp_mat}:
\begin{equation}
\left\vert \sqrt{n}\hat{\gamma}_{n}(h)-n^{-1/2}\sum_{t=1+h}^{n}\epsilon
_{t}\epsilon _{t-h}+\sqrt{n}\left( \hat{\theta}_{n}-\theta _{0}\right)
^{\prime }\mathcal{D}(h)\right\vert \overset{p}{\rightarrow }0.
\label{root(n)g}
\end{equation}
Use (\ref{root(n)g}), and $\hat{\theta}_{n}$ $=$ $\theta _{0}$ $+$ $O_{p}(1/
\sqrt{n})$ to deduce $1/n\sum_{s=1+h}^{n}\epsilon _{s}(\hat{\theta}
_{n})\epsilon _{s-h}(\hat{\theta}_{n})$ $=$ $1/n\sum_{t=1+h}^{n}\epsilon
_{t}\epsilon _{t-h}$ $+$ $O_{p}(1/\sqrt{n})$. Therefore
\begin{equation*}
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\frac{1}{n}\sum_{s=1+h}^{n}
\epsilon _{s}(\hat{\theta}_{n})\epsilon _{s-h}(\hat{\theta}_{n})=\frac{1}{
\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\frac{1}{n}\sum_{t=1+h}^{n}\epsilon
_{t}\epsilon _{t-h}+O_{p}\left( 1/\sqrt{n/b_{n}}\right) .
\end{equation*}
It remains to show
\begin{equation}
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\frac{1}{n}\sum_{t=1+h}^{n}
\epsilon _{t}\epsilon _{t-h}=\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}E
\left[ \epsilon _{t}\epsilon _{t-h}\right] +o_{p}(1). \label{eomEe}
\end{equation}
Under Assumption \ref{assum:dgp}.b, $\epsilon _{t}\epsilon _{t-h}$ $-$ $
E[\epsilon _{t}\epsilon _{t-h}]$\ satisfies (\ref{NED_var}), hence $E[(1/
\sqrt{n}\sum_{t=1}^{n}\{\epsilon _{t}\epsilon _{t-h}$ $-$ $E\left[ \epsilon
_{t}\epsilon _{t-h}\right] \})^{2}]$ $=$ $O(1)$. Further $1/\sqrt{n}
\sum_{t=1+h}^{n}\varphi _{t}$ $=$ $O_{p}(\sqrt{b_{n}})$ from (\ref{sum_om}).
Hence
\begin{equation*}
\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\frac{1}{n}\sum_{t=1+h}^{n}
\left\{ \epsilon _{t}\epsilon _{t-h}-E\left[ \epsilon _{t}\epsilon _{t-h}
\right] \right\} =\frac{1}{\sqrt{n}}\sum_{t=1+h}^{n}\varphi _{t}\times
O_{p}(1/\sqrt{n})=O_{p}\left( 1/\sqrt{n/b_{n}}\right) .
\end{equation*}
Since $b_{n}/n$ $\rightarrow $ $0$, (\ref{eomEe}) follows directly.
Finally, for (\ref{omDAsumm}), since $1/\sqrt{n}\sum_{t=1+h}^{n}\varphi _{t}$
$=$ $O_{p}(\sqrt{b_{n}})$ and $\mathcal{\hat{D}}_{n}(h)^{\prime }\widehat{
\mathcal{A}}_{n}$ $\overset{p}{\rightarrow }$ $\mathcal{D}(h)^{\prime }
\mathcal{A}$ we need only show $1/n\sum_{t=1}^{n}m_{t}(\hat{\theta}_{n})$ $=$
$o_{p}(1/\sqrt{b_{n}})$. A first-order expansion based on the mean value
theorem yields:
\begin{equation*}
\left\Vert \frac{1}{n}\sum_{t=1+h}^{n}m_{t}(\hat{\theta}_{n})-\frac{1}{n}
\sum_{t=1+h}^{n}m_{t}\right\Vert \leq \sup_{\theta \in \Theta }\left\Vert
\frac{1}{n}\sum_{t=1+h}^{n}\frac{\partial }{\partial \theta }m_{t}(\theta
_{n}^{\ast })\right\Vert \left\Vert \hat{\theta}_{n}-\theta _{0}\right\Vert .
\end{equation*}
By Lemma \ref{lm:unif_law}: $\sup_{\theta \in \Theta
}||1/n\sum_{t=1}^{n}(\partial /\partial \theta )m_{t}(\theta )$ $-$ $
E[(\partial /\partial \theta )m_{t}(\theta )]||$ $\overset{p}{\rightarrow }$
$0$, and $\sup_{\theta \in \Theta }||E[(\partial /\partial \theta
)m_{t}(\theta )]||$ $<$ $\infty $ and $\hat{\theta}_{n}$ $-$ $\theta _{0}$ $=
$ $O_{p}(1/\sqrt{n})$\ under Assumption \ref{assum:plug}.c$^{\prime }$.
Moreover, by Assumption \ref{assum:plug}.c$^{\prime }$, $m_{t}$ $=$ $
[m_{i,t}]_{i=1}^{k_{m}}$ satisfies (\ref{NED_var}), hence $E[(1/\sqrt{n}\sum_{t=1}^{n}m_{i,t})^{2}]$ $=$ $O(1)$. This yields $1/n
\sum_{t=1+h}^{n}m_{t}(\hat{\theta}_{n})$ $=$ $1/n\sum_{t=1+h}^{n}m_{t}$ $+$ $
O_{p}(1/\sqrt{n})$ $=$ $O_{p}(1/\sqrt{n})$. Since $b_{n}$ $=$ $o(n)$ the
proof is complete.
\newline
\textbf{Claim (b).}\qquad Weak convergence in probability from Claim (a), the
mapping theorem and Slutsky's theorem yield for each $\mathcal{L}$ $\in $ $
\mathbb{N}$:
\begin{equation}
\vartheta \left( \left[ \sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}_{n}^{(dw)}(h)
\right] _{h=1}^{\mathcal{L}}\right) \Rightarrow ^{p}\vartheta \left( \left[
\omega (h)\mathcal{\mathring{Z}}(h)\right] _{h=1}^{\mathcal{L}}\right) .
\label{maxp_maxR_weak_p}
\end{equation}
Therefore \citep[see, e.g.,][eq. (3.4)]{GineZinn1990}:
\begin{equation*}
\mathcal{X}_{n}(\mathcal{L})\equiv \sup_{c>0}\left\vert P\left( \vartheta
\left( \left[ \sqrt{n}\hat{\omega}_{n}(h)\hat{\rho}_{n}^{(dw)}(h)\right]
_{h=1}^{\mathcal{L}}\right) \leq c|\mathfrak{X}_{n}\right) -P\left(
\vartheta \left( \left[ \omega (h)\mathcal{\mathring{Z}}(h)\right] _{h=1}^{
\mathcal{L}}\right) \leq c\right) \right\vert \overset{p}{\rightarrow }0.
\end{equation*}
Apply Lemma \ref{lm:max_p}.a to $\mathcal{X}_{n}(\mathcal{L})$\ to yield for
some monotonic sequence of positive integers $\{\mathcal{L}_{n}\}_{n\geq 1}$
, $\mathcal{L}_{n}$ $\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$
:
\begin{equation*}
\mathcal{X}_{n}(\mathcal{L}_{n})\leq \max_{1\leq \mathcal{L}\leq \mathcal{L}
_{n}}\mathcal{X}_{n}(\mathcal{L})\overset{p}{\rightarrow }0.
\end{equation*}
Therefore $\mathcal{X}_{n}(\mathcal{L}_{n})$ $\overset{p}{\rightarrow }$ $0$. Finally, since $\mathcal{X}_{n}(\mathcal{L})$ is bounded, it is uniformly
integrable. Hence $\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa
,1/2\}}/\ln (n))$ must be satisfied for $\max_{1\leq \mathcal{L}\leq
\mathcal{L}_{n}}\mathcal{X}_{n}(\mathcal{L})$ $\overset{p}{\rightarrow }$ $0$
and therefore for $\mathcal{X}_{n}(\mathcal{L}_{n})$ $\overset{p}{
\rightarrow }$ $0$, cf. Lemma \ref{lm:max_p}.a. $\mathcal{QED}$.
\newline
\textbf{Proof of Theorem \ref{th:p_dep_wild_boot}.}\qquad To conserve
notation, and without loss of generality, set the weights $\hat{\omega}_{n}(h)$ $=$ $1$. Operate conditionally on $\mathfrak{X}_{n}$ $\equiv $ $
\{m_{t},x_{t},y_{t}\}_{t=1}^{n}$, and recall $\hat{p}_{n,M}^{(dw)}$ $\equiv $
$1/M\sum_{i=1}^{M}I(\mathcal{\hat{T}}_{n,i}^{(dw)}$ $\geq $ $\mathcal{\hat{T}
}_{n})$. First, by the Glivenko-Cantelli theorem:
\begin{equation}
\hat{p}_{n,M}^{(dw)}\overset{p}{\rightarrow }P\left( \vartheta \left( \left[
\sqrt{n}\hat{\rho}_{n}^{(dw)}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \geq
\vartheta \left( \left[ \sqrt{n}\hat{\rho}_{n}(h)\right] _{h=1}^{\mathcal{L}
_{n}}\right) |\mathfrak{X}_{n}\right) \text{ as }M\rightarrow \infty .
\label{p_dw_p}
\end{equation}
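In practice the conditional probability in (\ref{p_dw_p}) is approximated by simulating $M$ independent bootstrapped statistics. A minimal sketch (Python/NumPy) follows, assuming a hypothetical callable \texttt{draw\_T\_dw()} that returns one value of $\mathcal{\hat{T}}_{n,i}^{(dw)}$ $=$ $\vartheta ([\sqrt{n}\hat{\rho}_{n}^{(dw)}(h)]_{h=1}^{\mathcal{L}_{n}})$, and the observed statistic \texttt{T\_obs} $=$ $\mathcal{\hat{T}}_{n}$.
\begin{verbatim}
import numpy as np

def boot_pvalue(T_obs, draw_T_dw, M):
    # p_hat_{n,M}^{(dw)} = (1/M) * sum_i 1{ T_dw_i >= T_obs }
    T_dw = np.array([draw_T_dw() for _ in range(M)])
    return np.mean(T_dw >= T_obs)
\end{verbatim}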
Second, by Lemma \ref{lm:max_cor*}:
\begin{equation}
\sup_{c>0}\left\vert P\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho}
_{n}^{(dw)}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \leq c|\mathfrak{X}
_{n}\right) -P\left( \vartheta \left( \left[ \mathcal{\mathring{Z}}(h)\right]
_{h=1}^{\mathcal{L}_{n}}\right) \leq c\right) \right\vert \overset{p}{
\rightarrow }0, \label{p_dw_R1}
\end{equation}
where $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is a zero mean
Gaussian process with variance $E[\mathcal{Z}(h)^{2}]$ $<$ $\infty $, $\{
\mathcal{\mathring{Z}}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is an independent
copy of $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$, and $\{\mathcal{L}
_{n}\}$ is a monotonic sequence of positive integers, $\mathcal{L}_{n}$ $
\rightarrow $ $\infty $ and $\mathcal{L}_{n}$ $=$ $o(n)$. In particular, $
\mathcal{L}_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$ must be
satisfied.
Impose $H_{0}$ $:$ $\rho (h)$ $=$ $0$ $\forall h$ $\in $ $\mathbb{N}$.
Define $\bar{F}_{n}^{(0)}(c)$ $\equiv $ $P(\vartheta ([\mathcal{\mathring{Z}}
(h)]_{h=1}^{\mathcal{L}_{n}})$ $>$ $c)$. Note that (\ref{p_dw_R1}) implies:
\begin{equation*}
P\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho}_{n}^{(dw)}(h)\right]
_{h=1}^{\mathcal{L}_{n}}\right) \geq \vartheta \left( \left[ \sqrt{n}\hat{
\rho}_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) |\mathfrak{X}_{n}\right)
-P\left( \vartheta \left( \left[ \mathcal{\mathring{Z}}(h)\right] _{h=1}^{
\mathcal{L}_{n}}\right) \geq \vartheta \left( \left[ \sqrt{n}\hat{\rho}
_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \right) \overset{p}{
\rightarrow }0.
\end{equation*}
Since $[\mathcal{\mathring{Z}}(h)]_{h=1}^{\mathcal{L}_{n}}$ is independent
of the sample $\mathfrak{X}_{n}$, we therefore have:
\begin{equation}
P\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho}_{n}^{(dw)}(h)\right]
_{h=1}^{\mathcal{L}_{n}}\right) \geq \vartheta \left( \left[ \sqrt{n}\hat{
\rho}_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) |\mathfrak{X}_{n}\right)
-\bar{F}_{n}^{(0)}\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho}_{n}(h)
\right] _{h=1}^{\mathcal{L}_{n}}\right) \right) \overset{p}{\rightarrow }0.
\label{PF}
\end{equation}
$\bar{F}_{n}^{(0)}$ is continuous by Gaussianity. Theorem \ref
{th:max_corr_expan} and Slutsky's theorem therefore yield:
\begin{equation}
\left\vert \bar{F}_{n}^{(0)}\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho}
_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \right) -\bar{F}
_{n}^{(0)}\left( \vartheta \left( \left[ \mathcal{Z}(h)\right] _{h=1}^{
\mathcal{L}_{n}}\right) \right) \right\vert \overset{p}{\rightarrow }0.
\label{FF}
\end{equation}
Together, (\ref{p_dw_p}), (\ref{PF}) and (\ref{FF}) yield for any sequence
of positive integers $\{M_{n}\}$, $M_{n}$ $\rightarrow $ $\infty $:
\begin{eqnarray}
\hat{p}_{n,M_{n}}^{(dw)} &=&P\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho
}_{n}^{(dw)}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \geq \vartheta \left(
\left[ \sqrt{n}\hat{\rho}_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) |
\mathfrak{X}_{n}\right) +o_{p}(1) \label{p_dw_F} \\
&=&\bar{F}_{n}^{(0)}\left( \vartheta \left( \left[ \mathcal{Z}(h)\right]
_{h=1}^{\mathcal{L}_{n}}\right) \right) +o_{p}(1). \notag
\end{eqnarray}
Since $\{\mathcal{\mathring{Z}}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$ is an
independent copy of $\{\mathcal{Z}(h)$ $:$ $h$ $\in $ $\mathbb{N}\}$, $\bar{F}_{n}^{(0)}(\vartheta ([\mathcal{Z}(h)]_{h=1}^{\mathcal{L}_{n}}))$ is
distributed uniformly on $[0,1]$. Now use (\ref{p_dw_F}) to conclude $P(\hat{p}_{n,M_{n}}^{(dw)}$ $<$ $\alpha )$ $=$ $P(\bar{F}_{n}^{(0)}(\vartheta ([\mathcal{Z}(h)]_{h=1}^{\mathcal{L}_{n}}))$ $<$ $\alpha )$ $+$ $o(1)$ $=$ $\alpha $ $+$ $o(1)$ $\rightarrow $ $\alpha .$
Impose $H_{1}$ $:$ $\rho (h)$ $\neq $ $0$ for some $h$ $\in $ $\mathbb{N}$.
Recall $\vartheta $ satisfies the triangle inequality and diverges: $\vartheta (a)$ $\rightarrow $ $\infty $ as $||a||$ $\rightarrow $ $\infty $.
Theorem \ref{th:max_corr_expan} therefore yields: $\vartheta ([\sqrt{n}\hat{
\rho}_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $\leq $ $\vartheta ([\sqrt{n}\{\hat{
\rho}_{n}(h)$ $-$ $\rho (h)\}]_{h=1}^{\mathcal{L}_{n}})$ $+$ $\vartheta ([
\sqrt{n}\rho (h)]_{h=1}^{\mathcal{L}_{n}})$ $=$ $\vartheta ([\mathcal{Z}
(h)]_{h=1}^{\mathcal{L}_{n}})+\vartheta ([\sqrt{n}\rho (h)]_{h=1}^{\mathcal{L
}_{n}})$ $+$ $o_{p}(1)$, and $\vartheta ([\sqrt{n}\rho (h)]_{h=1}^{\mathcal{L
}_{n}})$ $\leq $ $\vartheta ([\sqrt{n}\{\hat{\rho}_{n}(h)$ $-$ $\rho
(h)\}]_{h=1}^{\mathcal{L}_{n}})$ $+$ $\vartheta ([\sqrt{n}\hat{\rho}
_{n}(h)]_{h=1}^{\mathcal{L}_{n}})$ $=$ $\vartheta ([\mathcal{Z}(h)]_{h=1}^{
\mathcal{L}_{n}})$ $+$ $\vartheta ([\sqrt{n}\hat{\rho}_{n}(h)]_{h=1}^{
\mathcal{L}_{n}})$ $+$ $o_{p}(1)$ $\overset{p}{\rightarrow }$ $\infty $.
Hence:
\begin{equation}
\infty \overset{p}{\leftarrow }\vartheta \left( \left[ \mathcal{Z}(h)\right]
_{h=1}^{\mathcal{L}_{n}}\right) +\vartheta \left( \left[ \sqrt{n}\rho (h)
\right] _{h=1}^{\mathcal{L}_{n}}\right) +o_{p}(1)\geq \vartheta \left( \left[
\sqrt{n}\rho (h)\right] _{h=1}^{\mathcal{L}_{n}}\right) -\vartheta \left(
\left[ \mathcal{Z}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) +o_{p}(1)
\overset{p}{\rightarrow }\infty . \label{phi(rho)}
\end{equation}
Combine (\ref{p_dw_p}), (\ref{p_dw_R1}) and (\ref{phi(rho)}) to deduce $P(
\hat{p}_{n,M_{n}}^{(dw)}$ $<$ $\alpha )$ $\rightarrow $ $1$ for any $\alpha $
$\in $ $(0,1)$\ because:{\small
\begin{eqnarray*}
\hat{p}_{n,M_{n}}^{(dw)} &=&P\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho
}_{n}^{(dw)}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \geq \vartheta \left(
\left[ \sqrt{n}\hat{\rho}_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) |
\mathfrak{X}_{n}\right) +o_{p}(1) \\
&=&P\left( \vartheta \left( \left[ \mathcal{\mathring{Z}}(h)\right] _{h=1}^{
\mathcal{L}_{n}}\right) \geq \vartheta \left( \left[ \sqrt{n}\hat{\rho}
_{n}(h)\right] _{h=1}^{\mathcal{L}_{n}}\right) \right) +o_{p}(1)=\bar{F}
_{n}^{(0)}\left( \vartheta \left( \left[ \sqrt{n}\hat{\rho}_{n}(h)\right]
_{h=1}^{\mathcal{L}_{n}}\right) \right) +o_{p}(1)\overset{p}{\rightarrow }0.
\text{ }\mathcal{QED}.
\end{eqnarray*}
}\textbf{Proof of Theorem \ref{th:lag_select}}.\qquad Let $q$ be any fixed
positive constant. Recall that the penalty $\mathcal{P}_{n}(\mathcal{L})$ $=$
$\sqrt{\mathcal{L}\ln n}$ if $\mathcal{\hat{T}}_{n}(\mathcal{L})$ $\leq $ $
\sqrt{q\ln n}$, else $\mathcal{P}_{n}(\mathcal{L})$ $=$ $\sqrt{2\mathcal{L}}$
.
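As a computational aside, the penalized statistic and the selected lag are easy to evaluate on the grid $\mathcal{L}$ $=$ $1,...,\mathcal{\bar{L}}_{n}$. The sketch below (Python/NumPy) takes the sample correlations as given and returns the smallest maximizer of $\mathcal{\hat{T}}_{n}(\mathcal{L})$ $-$ $\mathcal{P}_{n}(\mathcal{L})$, which is how $\mathcal{L}_{n}^{\ast }$ is treated in the argument that follows; the input name \texttt{rho\_hat} and the default $q=3$ (the value used in the simulation tables) are illustrative assumptions.
\begin{verbatim}
import numpy as np

def select_lag(rho_hat, n, q=3.0):
    # rho_hat: sample correlations rho_hat(1), ..., rho_hat(L_bar)
    L_bar = len(rho_hat)
    L = np.arange(1, L_bar + 1)
    T = np.sqrt(n) * np.maximum.accumulate(np.abs(rho_hat))  # T_n(L) = sqrt(n) max_{h<=L} |rho_hat(h)|
    penalty = np.where(T <= np.sqrt(q * np.log(n)),
                       np.sqrt(L * np.log(n)),   # P_n(L) = sqrt(L ln n) if T_n(L) <= sqrt(q ln n)
                       np.sqrt(2.0 * L))         # P_n(L) = sqrt(2 L) otherwise
    return int(np.argmax(T - penalty)) + 1       # smallest maximiser of the penalised statistic
\end{verbatim}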
\newline
\textbf{Claim (a).}\qquad Let $H_{0}$\ be true. It suffices to prove the
following. First, for any $\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $
\rightarrow $ $(0,\infty ]$ and $\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $
\rightarrow $ $[0,1]$, the penalty term satisfies:
\begin{equation}
P\left( \mathcal{P}_{n}(\mathcal{L}_{n})=\sqrt{\mathcal{L}_{n}\ln (n)}
\right) \rightarrow 1. \label{PP_lnn}
\end{equation}
Hence $\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{L})$ $=$ $\mathcal{\hat{T}}_{n}(\mathcal{L})$ $-$ $\sqrt{\mathcal{L}\ln n}$ asymptotically with
probability approaching one. Second, for such $\{\mathcal{L}_{n}\}$ the
following holds:
\begin{eqnarray}
&&P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-\mathcal{\hat{T}}
_{n}(l)\geq \left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \sqrt{\ln (n)}
\right) \rightarrow 1\text{ if }l\geq \mathcal{L}_{n} \label{PTT} \\
&&P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-\mathcal{\hat{T}}
_{n}(l)\geq \left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \sqrt{\ln (n)}
\right) \rightarrow 0\text{ for fixed }l=1,...,\mathcal{L}_{n}-1. \notag
\end{eqnarray}
Together (\ref{PP_lnn}) and (\ref{PTT}) prove the claim $P(\mathcal{L}
_{n}^{\ast }$ $=$ $1)$ $\rightarrow $ $1$ since\ the following holds \textit{
for every} $l$ $=$ $1,...,\mathcal{\bar{L}}_{n}$ \textit{if and only if} $
\mathcal{L}_{n}$ $\rightarrow $ $1$:
\begin{equation}
\lim_{n\rightarrow \infty }P\left( \mathcal{\hat{T}}_{n}^{\mathcal{P}}(
\mathcal{L}_{n})\geq \mathcal{\hat{T}}_{n}^{\mathcal{P}}(l)\right)
=\lim_{n\rightarrow \infty }P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-
\mathcal{\hat{T}}_{n}(l)\geq \left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right)
\sqrt{\ln (n)}\right) =1, \label{limPlimP}
\end{equation}
while $\mathcal{L}_{n}^{\ast }$ is the smallest of the sequences that satisfy (\ref{limPlimP}) \textit{for every} $l$ $=$ $1,...,\mathcal{\bar{L}}_{n}$.
Now consider (\ref{PP_lnn}). By construction of $\mathcal{P}_{n}(\mathcal{L}
_{n})$ it suffices to prove $P(\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $>$ $
\sqrt{q\ln n})$ $\rightarrow $ $0$. Under $H_{0}$, $\sqrt{n}\hat{\rho}
_{n}(h) $ $=$ $O_{p}(1)$ by (\ref{rn_ro_h}), hence $\sqrt{n}\hat{\rho}
_{n}(h)/\sqrt{q\ln n}$ $\overset{p}{\rightarrow }$ $0$ for any fixed $q$ $
\in $ $(0,\infty )$. Therefore, by Lemma \ref{lm:max_p} for some $\{\mathcal{
\bar{L}}_{n}\}$ that satisfies $\mathcal{\bar{L}}_{n}$ $\rightarrow $ $
\infty $ and $\mathcal{\bar{L}}_{n}$ $=$ $o(n)$:
\begin{equation}
\frac{\mathcal{\hat{T}}_{n}(\mathcal{\bar{L}}_{n})}{\sqrt{q\ln n}}=\frac{
\sqrt{n}\max_{1\leq h\leq \mathcal{\bar{L}}_{n}}\left\vert \hat{\rho}
_{n}(h)\right\vert }{\sqrt{q\ln n}}\overset{p}{\rightarrow }0. \label{T/lnn}
\end{equation}
By the same lemma, if $(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))\mathcal{
\tilde{X}}_{n}(h)$ for all $h$\ is uniformly integrable, where $\mathcal{
\tilde{X}}_{n}(h)$ is defined in (\ref{expans_rate}), then $\mathcal{\bar{L}}
_{n}$ $=$ $O(n^{\min \{\zeta ,\kappa ,1/2\}}/\ln (n))$ must hold. By
monotonicity of $\mathcal{\hat{T}}_{n}(\cdot )$ $\geq $ $0$, (\ref{T/lnn})
holds for any $\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $\rightarrow $ $
(0,\infty ]$ and $\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $\rightarrow $ $
[0,1]$. Thus $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})/\sqrt{q\ln n}$ $\overset
{p}{\rightarrow }$ $0$ for all such $\{\mathcal{L}_{n}\}$.
Now consider (\ref{PTT}). Suppose $l$ $>$ $\mathcal{L}_{n}$. By (\ref{T/lnn}
), $\mathcal{\hat{T}}_{n}(\mathcal{\bar{L}}_{n})/\sqrt{\ln n}$ $=$ $o_{p}(1)$
and therefore $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $-$ $\mathcal{\hat{T}}
_{n}(l)$ $=$ $o_{p}(\sqrt{\ln (n)})$ for any $\{\mathcal{L}_{n}\}$, $
\mathcal{L}_{n}$ $\rightarrow $ $(0,\infty ]$ and $\mathcal{L}_{n}/\mathcal{
\bar{L}}_{n}$ $\rightarrow $ $[0,1]$, and any $1$ $\leq $ $l$ $\leq $ $
\mathcal{\bar{L}}_{n}$. Now use (\ref{PP_lnn}), monotonicity of $\mathcal{
\hat{T}}_{n}(\cdot )$, and $\inf_{n\geq 1}\{\sqrt{l}-\sqrt{\mathcal{L}_{n}}
\} $ $>$ $0$, to yield that as $n$ $\rightarrow $ $\infty $:
\begin{eqnarray*}
P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-\mathcal{\hat{T}}_{n}(l)\geq
\left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \sqrt{\ln (n)}\right)
&=&P\left( \frac{\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-\mathcal{\hat{T}}
_{n}(l)}{\sqrt{\ln (n)}}\geq \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \\
&=&P\left( \sqrt{l}-\sqrt{\mathcal{L}_{n}}\geq \frac{\mathcal{\hat{T}}
_{n}(l)-\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})}{\sqrt{\ln (n)}}\right)
\rightarrow 1.
\end{eqnarray*}
Similarly, if $l$ $=$ $\mathcal{L}_{n}$ then $\sqrt{l}$ $-$ $\sqrt{\mathcal{L
}_{n}}$ $=$ $0$ and $\mathcal{\hat{T}}_{n}(l)$ $-$ $\mathcal{\hat{T}}_{n}(
\mathcal{L}_{n})$ $=$ $0$ hence the above limit holds.
Conversely, suppose $l\in \{1,...,\mathcal{L}_{n}$ $-$ $1\}$\ and $\mathcal{L
}_{n}$ $>$ $1$. Then from $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $=$ $
o_{p}(\sqrt{q\ln n})$\ and $1$ $-$ $\sqrt{l/\mathcal{L}_{n}}>$ $0$ it
follows:
\begin{equation*}
P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-\mathcal{\hat{T}}_{n}(l)\geq
\left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \sqrt{\ln (n)}\right) =P\left(
\frac{\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})-\mathcal{\hat{T}}_{n}(l)}{\sqrt{
\mathcal{L}_{n}}\sqrt{\ln (n)}}\geq \left( 1-\sqrt{\frac{l}{\mathcal{L}_{n}}}
\right) \right) \rightarrow 0.
\end{equation*}
Claim (\ref{PTT}) follows directly.
\newline
\textbf{Claim (b).}\qquad Let $H_{1}$ hold. Let $ap1$ denote \textit{
asymptotically with probability approaching one}. Define $h_{n}^{\ast }$ $
\equiv $ $\min \{h_{n}$ $:$ $h_{n}$ $=$ $\arg \max_{1\leq h\leq \mathcal{
\bar{L}}_{n}}|\hat{\rho}_{n}(h)|\}$, the smallest lag at which the largest
sample correlation in magnitude over lags $1$ $\leq $ $h$ $\leq $ $\mathcal{
\bar{L}}_{n}$ occurs.
Define $\mathbb{N}_{1}$ $\equiv $ $\{h$ $\in $ $\mathbb{N}$ $:$ $E[\epsilon
_{t}\epsilon _{t-h}]$ $\neq $ $0\}$ and $\mathbb{\text{\b{N}}}_{1}$ $\equiv $
$\min \{\mathbb{N}_{1}\}$, the smallest lag at which the autocorrelation is
not zero. We prove in Step 1 that for any integer sequence $\{\mathcal{L}
_{n}\}$ such that $\mathcal{L}_{n}$ $\rightarrow $ $[\mathbb{\text{\b{N}}}
_{1},\infty ]$ and $\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $\rightarrow $ $
[0,1]$:
\begin{equation}
P\left( \mathcal{P}_{n}(\mathcal{L}_{n})=\sqrt{2\mathcal{L}_{n}}\right)
\rightarrow 1. \label{P2L}
\end{equation}
We then prove in Step 2 that \textit{if and only if} $\mathcal{L}
_{n}/h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $[1,\infty ]$:
\begin{equation}
P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})\geq \mathcal{\hat{T}}
_{n}(l)+2(\sqrt{\mathcal{L}_{n}}-\sqrt{l})\right) \rightarrow 1\text{
\textit{for each} }1\leq l\leq \mathcal{\bar{L}}_{n}. \label{TTLl}
\end{equation}
Moreover, that $h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $h^{\ast }$ $\equiv $ $\min \{h$ $:$ $h$ $=$ $\arg \max_{1\leq h<\infty }|\rho (h)|\}$ is an
easy consequence of $\mathcal{\bar{L}}_{n}$ $\rightarrow $ $\infty $,
consistency of the sample correlation under the stated assumptions, and
Slutsky's theorem. Notice $h^{\ast }$ $\in $ $[\mathbb{\text{\b{N}}}
_{1},\infty )$ by construction of $\mathbb{\text{\b{N}}}_{1}$.
The proof of the claim then proceeds as follows. Take any integer sequence $
\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}/h_{n}^{\ast }$ $\overset{p}{
\rightarrow }$ $[1,\infty ]$ and $\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $
\rightarrow $ $[0,1]$. Then (\ref{P2L}) holds because $h^{\ast }$ $\in $ $[
\mathbb{\text{\b{N}}}_{1},\infty )$, hence $\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{L}_{n})$ $=$ $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $-$ $\sqrt{2\mathcal{L}_{n}}$ $ap1$. Since such a sequence implies (\ref{TTLl}),
we have $\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{L}_{n})$ $\geq $ $
\mathcal{\hat{T}}_{n}^{\mathcal{P}}(l)$ $ap1$\ for each $l$ $=$ $1,...,
\mathcal{\bar{L}}_{n}$. Conversely, if (\ref{TTLl}) holds then $\mathcal{L}
_{n}/h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $[1,\infty ]$. This yields (
\ref{P2L}) because $h^{\ast }$ $\in $ $[\mathbb{\text{\b{N}}}_{1},\infty )$.
Therefore $\mathcal{\hat{T}}_{n}^{\mathcal{P}}(\mathcal{L}_{n})$ $\geq $ $
\mathcal{\hat{T}}_{n}^{\mathcal{P}}(l)$ $ap1$\ for each $l$ $=$ $1,...,
\mathcal{\bar{L}}_{n}$ \textit{if and only if} $\mathcal{L}_{n}/h_{n}^{\ast
} $ $\overset{p}{\rightarrow }$ $[1,\infty ]$. Since the optimal $\{\mathcal{L}_{n}^{\ast }\}$ is the smallest of such sequences, the selection $\mathcal{L}_{n}^{\ast }$ satisfies $\mathcal{L}_{n}^{\ast }/h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $1$. Together $\mathcal{L}_{n}^{\ast }/h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $1$ and $h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $h^{\ast }$ prove the claim.
\textbf{Step 1:}\qquad Consider (\ref{P2L}). Use (\ref{rn_ro_h}) to deduce $
\hat{\rho}_{n}(h)$ $-$ $\rho (h)\overset{p}{\rightarrow }0$ for each $h$.
Lemma \ref{lm:max_p} therefore yields for some integer sequence $\{\mathcal{
\bar{L}}_{n}\}$, $\mathcal{\bar{L}}_{n}$ $\rightarrow $ $\infty $:
\begin{equation*}
\left\vert \max_{1\leq h\leq \mathcal{\bar{L}}_{n}}\left\vert \hat{\rho}
_{n}(h)\right\vert -\max_{1\leq h\leq \mathcal{\bar{L}}_{n}}\left\vert \rho
(h)\right\vert \right\vert \leq \left\vert \max_{1\leq h\leq \mathcal{\bar{L}
}_{n}}\left\vert \hat{\rho}_{n}(h)-\rho (h)\right\vert \right\vert \overset{p
}{\rightarrow }0,
\end{equation*}
where $\lim_{n\rightarrow \infty }\max_{1\leq h\leq \mathcal{\bar{L}}
_{n}}\left\vert \rho (h)\right\vert $ $\in $ $(0,\infty )$. By monotonicity,
for any $\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $\rightarrow $ $(0,\infty ]$
and $\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $\rightarrow $ $[0,1]$, and
sufficiently large $n$:
\begin{equation*}
\left\vert \max_{1\leq h\leq \mathcal{L}_{n}}\left\vert \hat{\rho}
_{n}(h)\right\vert -\max_{1\leq h\leq \mathcal{L}_{n}}\left\vert \rho
(h)\right\vert \right\vert \leq \left\vert \max_{1\leq h\leq \mathcal{L}
_{n}}\left\vert \hat{\rho}_{n}(h)-\rho (h)\right\vert \right\vert \leq
\left\vert \max_{1\leq h\leq \mathcal{\bar{L}}_{n}}\left\vert \hat{\rho}
_{n}(h)-\rho (h)\right\vert \right\vert \overset{p}{\rightarrow }0.
\end{equation*}
Therefore for any $\{\mathcal{L}_{n}\}$, $\mathcal{L}_{n}$ $\rightarrow $ $[
\mathbb{\text{\b{N}}}_{1},\infty ]$ and $\mathcal{L}_{n}/\mathcal{\bar{L}}
_{n}$ $\rightarrow $ $[0,1]$:
\begin{equation*}
\frac{\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})}{\sqrt{q\ln n}}=\frac{\sqrt{n}
\max_{1\leq h\leq \mathcal{L}_{n}}\left\vert \hat{\rho}_{n}(h)\right\vert }{
\sqrt{q\ln n}}\overset{p}{\rightarrow }\infty .
\end{equation*}
This proves (\ref{P2L}) by construction (\ref{Pn}) of the penalty term $
\mathcal{P}_{n}(\mathcal{L}_{n})$.
\textbf{Step 2:}\qquad Next we prove (\ref{TTLl}). First, note that by
Theorem \ref{th:max_corr_expan} $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})/
\sqrt{n}$ $\overset{p}{\rightarrow }$ $(0,1)$ for any $\{\mathcal{L}_{n}\}$,
$\mathcal{L}_{n}$ $\rightarrow $ $[\mathbb{\text{\b{N}}}_{1},\infty ]$ and $
\mathcal{L}_{n}/\mathcal{\bar{L}}_{n}$ $\rightarrow $ $[0,1]$. Hence $
\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})/\sqrt{n/\ln (n)}$ $\overset{p}{
\rightarrow }$ $\infty $ for any $\mathcal{L}_{n}$ $\rightarrow $ $[\mathbb{
\text{\b{N}}}_{1},\infty ]$, where $\mathcal{L}_{n}$ $=$ $o(n/\ln (n))$ by
assumption. Monotonicity ensures $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $
\geq $ $\mathcal{\hat{T}}_{n}(l)$ for each $l$ $\leq $ $\mathcal{L}_{n}$,
hence $\mathcal{\hat{T}}_{n}(l)/\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $=$ $
[\mathcal{\hat{T}}_{n}(l)/\sqrt{n}]/[\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})/
\sqrt{n}]$ $\overset{p}{\rightarrow }$ $[0,1]$ for such $l$. Indeed, if both
$(l,\mathcal{L}_{n})\geq h_{n}^{\ast }$ $\equiv $ $\min \{h_{n}$ $:$ $h_{n}$
$=$ $\arg \max_{1\leq h\leq \mathcal{\bar{L}}_{n}}|\hat{\rho}_{n}(h)|\}$
then by construction $\mathcal{\hat{T}}_{n}(l)/\mathcal{\hat{T}}_{n}(
\mathcal{L}_{n})$ $=$ $1$.
Now suppose $1$ $\leq $ $l$ and $l/\mathcal{L}_{n}$ $\rightarrow $ $[0,1)$,
and $\mathcal{L}_{n}/h_{n}^{\ast }$ $\overset{p}{\rightarrow }$ $[0,1)$,
hence $1$ $\leq $ $l$ $<$ $\mathcal{L}_{n}$ $<$ $h_{n}^{\ast }$ as $n$ $
\rightarrow $ $\infty $ $ap1$. Then $\mathcal{\hat{T}}_{n}(l)/\mathcal{\hat{T
}}_{n}(\mathcal{L}_{n})$ $\overset{p}{\rightarrow }$ $[0,1)$ by monotonicity
and the construction of $h_{n}^{\ast }$. Now use $\mathcal{L}_{n}$ $\leq $ $
\mathcal{\bar{L}}_{n}$ $=$ $o(n/\ln (n))$ by assumption, and $\mathcal{\hat{T
}}_{n}(\mathcal{L}_{n})/\sqrt{n/\ln (n)}$ $\overset{p}{\rightarrow }$ $
\infty $ to yield:
\begin{eqnarray}
P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})\geq \mathcal{\hat{T}}
_{n}(l)+2\left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \right) &=&P\left(
\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})\left( 1-\frac{\mathcal{\hat{T}}_{n}(l)
}{\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})}\right) \geq 2\sqrt{\mathcal{L}_{n}}
\left( 1-\sqrt{\frac{l}{\mathcal{L}_{n}}}\right) \right) \text{ \ \ \ \ \ \
\ \ \ } \label{PPP1} \\
&\geq &P\left( \frac{\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})}{\sqrt{n/\ln (n)}
}\left( 1-\frac{\mathcal{\hat{T}}_{n}(l)}{\mathcal{\hat{T}}_{n}(\mathcal{L}
_{n})}\right) \geq 2\sqrt{\frac{\mathcal{L}_{n}}{n/\ln (n)}}\right)
\rightarrow 1. \notag
\end{eqnarray}
Next, consider $1$ $\leq $ $l$ and $l/h_{n}^{\ast }$ $\overset{p}{
\rightarrow }$ $[0,1)$, and $\mathcal{L}_{n}/h_{n}^{\ast }$ $\overset{p}{
\rightarrow }$ $[1,\infty ]$, hence $1$ $\leq $ $l$ $\leq $ $h_{n}^{\ast }$ $
-$ $1$ $ap1$ and $\mathcal{L}_{n}$ $\geq $ $h_{n}^{\ast }$ $ap1$. Then $P(
\mathcal{\hat{T}}_{n}(l)$ $=$ $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n}))$ $\rightarrow $ $0$ since by construction $h_{n}^{\ast }$ is the smallest lag
at which the maximum correlation occurs. Monotonicity therefore yields $
\mathcal{\hat{T}}_{n}(l)/\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $\overset{p}
{\rightarrow }$ $[0,1)$, and again we deduce (\ref{PPP1}).
Now let $(l,\mathcal{L}_{n})$ $\geq $ $h_{n}^{\ast }$ $ap1$. Then by
construction $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $=$ $\mathcal{\hat{T}}
_{n}(l)$ $ap1$. Trivially if $l$ $<$ $\mathcal{L}_{n}$ $(l$ $\geq $ $
\mathcal{L}_{n})$ then $\sqrt{\mathcal{L}_{n}}$ $-$ $\sqrt{l}$ $>$ $0$ ($
\sqrt{\mathcal{L}_{n}}$ $-$ $\sqrt{l}$ $\leq $ $0$). Hence $P(\mathcal{\hat{T
}}_{n}(\mathcal{L}_{n})$ $\geq $ $\mathcal{\hat{T}}_{n}(l)$ $+$ $2[\sqrt{
\mathcal{L}_{n}}$ $-$ $\sqrt{l}])$ $\rightarrow $ $1$ \textit{if and only if}
$l$ $\geq $ $\mathcal{L}_{n}$.
Next, let $\mathcal{L}_{n}$ $<$ $h_{n}^{\ast }$ $\leq $ $l$ $ap1$ such that $
\mathcal{\hat{T}}_{n}(l)$ $=$ $\mathcal{\hat{T}}_{n}(h_{n}^{\ast })$ $ap1$.
Use $\mathcal{L}_{n}/l$ $\rightarrow $ $[0,1)$, $l$ $=$ $o(n/\ln (n))$, $
\mathcal{\hat{T}}_{n}(h_{n}^{\ast })/\sqrt{n/\ln (n)}$ $\overset{p}{
\rightarrow }$ $\infty $, and $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})/
\mathcal{\hat{T}}_{n}(h_{n}^{\ast })$ $\overset{p}{\rightarrow }$ $[0,1)$ to
yield:
\begin{equation*}
P\left( \mathcal{\hat{T}}_{n}(\mathcal{L}_{n})\geq \mathcal{\hat{T}}
_{n}(l)+2\left( \sqrt{\mathcal{L}_{n}}-\sqrt{l}\right) \right) =P\left(
2\left( 1-\sqrt{\frac{\mathcal{L}_{n}}{l}}\right) \sqrt{\frac{l}{n/\ln (n)}}
\geq \frac{\mathcal{\hat{T}}_{n}(h_{n}^{\ast })}{\sqrt{n/\ln (n)}}\left( 1-
\frac{\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})}{\mathcal{\hat{T}}
_{n}(h_{n}^{\ast })}\right) \right) \rightarrow 0.
\end{equation*}
Finally, it is possible that $\mathcal{\hat{T}}_{n}(l)$ $=$ $\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $a.s.$ for some $\{l,\mathcal{L}_{n}\}$ and all but a
finite number of $n$, for example when $l$ $=$ $\mathcal{L}_{n}$. In this case $P(\mathcal{\hat{T}}_{n}(\mathcal{L}_{n})$ $\geq $ $\mathcal{
\hat{T}}_{n}(l)$ $+$ $2(\sqrt{\mathcal{L}_{n}}$ $-$ $\sqrt{l}))$ $=$ $P(0$ $
\geq $ $2(\sqrt{\mathcal{L}_{n}}$ $-$ $\sqrt{l}))$ $\rightarrow $ $1$
\textit{if and only if} $l$ $\geq $ $\mathcal{L}_{n}$.
Combining the above results, we deduce $P(\mathcal{\hat{T}}_{n}(\mathcal{L}
_{n})$ $\geq $ $\mathcal{\hat{T}}_{n}(l)$ $+$ $2[\sqrt{\mathcal{L}_{n}}$ $-$
$\sqrt{l}])$ $\rightarrow $ $1$ \textit{for every} $1$ $\leq $ $l$ $\leq $ $
\mathcal{\bar{L}}_{n}$\ \textit{if and only if} $\mathcal{L}_{n}$ $\geq $ $
h_{n}^{\ast }$, proving (\ref{TTLl}). $\mathcal{QED}$.
\setstretch{.8}
\begin{table}[th]
\caption{Median of Automatically Selected Lags $\mathcal{L}_{n}^{*}$ }
\label{table:median_automatic_lag}
\begin{center}
{\fontsize{10.5pt}{19pt} \selectfont
\begin{tabular}{c|c|c|c|c}
\hline
$e_{t}$ & IID & GARCH(1,1) & MA(2) & AR(1) \\ \hline
$n$ & $\{ 100, 250, 500, 1000 \}$ & $\{ 100, 250, 500, 1000 \}$ & $\{ 100,
250, 500, 1000 \}$ & $\{ 100, 250, 500, 1000 \}$ \\ \hline
\#1 & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1,
1, 1, 1 \}$ \\
& $H_{0}$, $h^{*} = 1$ & $H_{0}$, $h^{*} = 1$ & $H_{1}$, $h^{*} = 1$ & $
H_{1} $, $h^{*} = 1$ \\ \hline
\#2 & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 2, 2, 2 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1,
1, 1, 1 \}$ \\
& $H_{0}$, $h^{*} = 1$ & $H_{1}$, $\hat{h}^{*} = 4$ & $H_{1}$, $\hat{h}^{*}
= 1$ & $H_{1}$, $\hat{h}^{*} = 1$ \\ \hline
\#3 & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1,
1, 1, 2 \}$ \\
& $H_{0}$, $h^{*} = 1$ & $H_{0}$, $h^{*} = 1$ & $H_{1}$, $h^{*} = 1$ & $
H_{1} $, $h^{*} = 1$ \\ \hline
\#4 & $\{ 2, 2, 2, 2 \}$ & $\{ 2, 2, 2, 2 \}$ & $\{ 1, 1, 2, 1 \}$ & $\{ 1,
1, 1, 1 \}$ \\
& $H_{1}$, $\hat{h}^{*} = 1$ & $H_{1}$, $\hat{h}^{*} = 1$ & $H_{1}$, $\hat{h}
^{*} = 1$ & $H_{1}$, $\hat{h}^{*} = 1$ \\ \hline
\#5 & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 2, 2 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1,
1, 1, 1 \}$ \\
& $H_{0}$, $h^{*} = 1$ & $H_{1}$, $\hat{h}^{*} = 4$ & $H_{1}$, $\hat{h}^{*}
= 1$ & $H_{1}$, $\hat{h}^{*} = 1$ \\ \hline
\#6 & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1, 1, 1, 1 \}$ & $\{ 1,
1, 1, 1 \}$ \\
& $H_{0}$, $h^{*} = 1$ & $H_{0}$, $h^{*} = 1$ & $H_{1}$, $h^{*} = 1$ & $
H_{1} $, $h^{*} = 1$ \\ \hline
\#7 & $\{ 1, 1, 6, 6 \}$ & - & - & - \\
& $H_{1}$, $h^{*} = 6$ & - & - & - \\ \hline
\#8 & $\{ 1, 1, 12, 12 \}$ & - & - & - \\
& $H_{1}$, $h^{*} = 12$ & - & - & - \\ \hline
\#9 & $\{ 1, 1, 1, 24 \}$ & - & - & - \\
& $H_{1}$, $h^{*} = 24$ & - & - & - \\ \hline
\end{tabular}
}
\end{center}
\par
{\fontsize{9.5pt}{13pt} \selectfont
\#1: simple $y_{t}=e_{t}$ with a mean filter. \#2: bilinear process with a
mean filter. \#3: AR(2) process with an AR(2) filter. \#4: AR(2) process
with an AR(1) filter. \#5: GARCH(1,1) process without a filter. \#6:
GARCH(1,1) process with a GARCH filter. \#7: Remote MA(6) process with a
mean filter. \#8: Remote MA(12) process with a mean filter. \#9: Remote
MA(24) process with a mean filter. The error term $e_{t}$ is IID,
GARCH(1,1), MA(2), or AR(1) in Scenarios \#1--\#6, while it is IID in
Scenarios \#7--\#9. This table reports the median of automatic lags for
actual test statistics, $\mathcal{L}_{n}^{\ast }$, across $J=1000$ Monte
Carlo samples. The largest possible lag length is $\bar{\mathcal{L}}_{n} =
[10 \sqrt{n} / (\ln n)]$. The tuning parameter that affects the penalty term
$\mathcal{P}_{n}(\mathcal{L})$ is $q=3$. $H_{0}$ implies the test variable $
\{\epsilon _{t}\}$ is white noise, while $H_{1}$ implies $\{\epsilon_{t}\}$
is serially correlated. The smallest lag at which the largest correlation
occurs, $h^{\ast }$, is reported if it can be computed analytically.
Otherwise, we report a simulation-based $\hat{h}^{\ast}$ as follows. $J=50000
$ Monte Carlo samples of size $n=50000$ are generated, and sample
autocorrelations of $\{\epsilon _{t}\}$ at $h=1,\dots ,20$ are computed. Let
$\hat{h}_{j}^{\ast }$ be the smallest lag at which the largest correlation
occurs for the $j^{th}$ sample, then the reported $\hat{h}^{\ast }$ is the
median of $\{\hat{h}_{1}^{\ast },\dots ,\hat{h}_{J}^{\ast}\}$. }
\end{table}
\begin{table}[th]
\caption{Rejection Frequencies of Max-Correlation Test with Automatic Lag $
\mathcal{L}_{n}^{*}$ (Scenarios \#1--\#6) }
\label{table:main_paper_rf_automatic_scenario123456}
\begin{center}
{\fontsize{9.5pt}{15.5pt} \selectfont
\begin{tabular}{c|c|c|c|c|c|c}
\multicolumn{7}{c}{IID Error: $e_{t} = \nu_{t}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
$100$ & .017, .068, .128 & .008, .047, .090 & .002, .061, .129 & .034, .197,
.327 & .006, .031, .068 & .026, .091, .140 \\
$250$ & .011, .045, .087 & .012, .042, .087 & .005, .048, .093 & .178, .479,
.634 & .005, .029, .063 & .019, .064, .125 \\
$500$ & .007, .047, .096 & .010, .036, .077 & .004, .045, .094 & .462, .803,
.894 & .005, .033, .083 & .016, .053, .096 \\
$1000$ & .012, .050, .109 & .008, .056, .103 & .006, .057, .097 & .929,
.996, .998 & .008, .038, .077 & .011, .047, .101 \\ \hline
\multicolumn{7}{c}{} \\
\multicolumn{7}{c}{GARCH(1,1) Error: $e_{t} = \nu_{t} w_{t}$ with $w_{t}^{2}
= 1 + 0.2 e_{t-1}^{2} + 0.5 w_{t-1}^{2}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
$100$ & .005, .026, .075 & .007, .021, .040 & .004, .054, .109 & .027, .150,
.248 & .001, .003, .012 & .026, .090, .162 \\
$250$ & .004, .031, .069 & .008, .023, .038 & .007, .040, .084 & .116, .312,
.451 & .004, .010, .015 & .011, .063, .107 \\
$500$ & .001, .031, .063 & .017, .029, .042 & .008, .032, .078 & .283, .588,
.733 & .003, .005, .006 & .012, .046, .089 \\
$1000$ & .006, .032, .071 & .014, .026, .031 & .008, .033, .085 & .741,
.925, .961 & .002, .002, .002 & .006, .051, .102 \\ \hline
\multicolumn{7}{c}{} \\
\multicolumn{7}{c}{MA(2) Error: $e_{t} = \nu_{t} + 0.5 \nu_{t-1} + 0.25
\nu_{t-2}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
$100$ & .693, .901, .951 & .582, .769, .825 & .012, .068, .135 & .242, .601,
.762 & .461, .707, .788 & .908, .966, .979 \\
$250$ & .993, .998, 1.00 & .841, .935, .962 & .006, .060, .114 & .677, .927,
.982 & .707, .834, .868 & .992, .993, .993 \\
$500$ & 1.00, 1.00, 1.00 & .932, .965, .976 & .019, .086, .151 & .972, .999,
.999 & .798, .874, .908 & 1.00, 1.00, 1.00 \\
$1000$ & 1.00, 1.00, 1.00 & .968, .982, .986 & .063, .184, .257 & 1.00,
1.00, 1.00 & .900, .934, .952 & 1.00, 1.00, 1.00 \\ \hline
\multicolumn{7}{c}{} \\
\multicolumn{7}{c}{AR(1) Error: $e_{t} = 0.7 e_{t-1} + \nu_{t}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
$100$ & .531, .752, .847 & .477, .637, .685 & .021, .128, .227 & .263, .636,
.788 & .179, .345, .432 & .987, .991, .991 \\
$250$ & .903, .979, .990 & .676, .774, .812 & .041, .217, .355 & .714, .954,
.984 & .156, .271, .334 & 1.00, 1.00, 1.00 \\
$500$ & .998, 1.00, 1.00 & .715, .819, .860 & .181, .512, .636 & .991, 1.00,
1.00 & .111, .176, .230 & 1.00, 1.00, 1.00 \\
$1000$ & 1.00, 1.00, 1.00 & .723, .823, .864 & .599, .847, .922 & 1.00,
1.00, 1.00 & .066, .126, .158 & 1.00, 1.00, 1.00 \\ \hline
\end{tabular}
}
\end{center}
\par
{\fontsize{9.5pt}{13pt} \selectfont
\#1: Simple $y_{t} = e_{t}$ with a mean filter. \#2: Bilinear $y_{t} = 0.5
e_{t-1} y_{t-2} + e_{t}$ with a mean filter. \#3: AR(2) $y_{t} = 0.3 y_{t-1}
- 0.15 y_{t-2} + e_{t}$ with an AR(2) filter. \#4: AR(2) $y_{t} = 0.3
y_{t-1} - 0.15 y_{t-2} + e_{t}$ with an AR(1) filter. \#5: GARCH(1,1) $y_{t}
= \sigma_{t} e_{t}$, $\sigma_{t}^{2} = 1 + 0.2 y_{t-1}^{2} + 0.5
\sigma_{t-1}^{2}$ without (wo) a filter. \#6: GARCH(1,1) $y_{t} = \sigma_{t}
e_{t}$, $\sigma_{t}^{2} = 1 + 0.2 y_{t-1}^{2} + 0.5 \sigma_{t-1}^{2}$ with
(w) a GARCH filter. For each scenario, $\nu_{t} \overset{i.i.d.}{\sim}
N(0,1) $. The largest possible lag length is $\bar{\mathcal{L}}_{n} = [10
\sqrt{n} / (\ln n)]$, and the tuning parameter that affects the penalty term
$\mathcal{P}_{n} (\mathcal{L})$ is $q = 3$. This table reports rejection
frequencies with respect to nominal size $\alpha \in \{ 0.01, 0.05, 0.10 \}$
across $J = 1000$ Monte Carlo samples, where the number of bootstrap samples
is $M = 500$. }
\end{table}
\begin{table}[th]
\caption{Rejection Frequencies of Cram\'{e}r-von Mises Test $CvM^{dw}$
(Scenarios \#1--\#6) }
\label{table:main_paper_rf_CvM_scenario123456}
\begin{center}
{\fontsize{9.5pt}{15.5pt} \selectfont
\begin{tabular}{c|c|c|c|c|c|c}
\multicolumn{7}{c}{IID Error: $e_{t} = \nu_{t}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
100 & .023, .081, .138 & .018, .076, .149 & .020, .086, .167 & .133, .338,
.483 & .021, .077, .141 & .034, .087, .144 \\
250 & .016, .072, .144 & .030, .085, .154 & .011, .065, .127 & .370, .615,
.735 & .011, .058, .118 & .019, .065, .112 \\
500 & .010, .051, .102 & .014, .072, .124 & .012, .059, .132 & .710, .882,
.939 & .009, .053, .103 & .016, .072, .141 \\
1000 & .008, .060, .108 & .016, .063, .106 & .010, .049, .102 & .974, .991,
.993 & .015, .058, .107 & .013, .057, .103 \\ \hline
\multicolumn{7}{c}{} \\
\multicolumn{7}{c}{GARCH(1,1) Error: $e_{t} = \nu_{t} w_{t}$ with $w_{t}^{2}
= 1 + 0.2 e_{t-1}^{2} + 0.5 w_{t-1}^{2}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
100 & .017, .081, .149 & .002, .030, .070 & .026, .086, .168 & .118, .287,
.430 & .006, .049, .103 & .036, .100, .168 \\
250 & .013, .059, .108 & .029, .048, .083 & .012, .058, .127 & .242, .501,
.648 & .009, .037, .080 & .020, .075, .132 \\
500 & .015, .066, .115 & .026, .038, .075 & .011, .051, .104 & .550, .802,
.881 & .013, .052, .111 & .026, .072, .143 \\
1000 & .010, .060, .116 & .004, .014, .028 & .008, .056, .105 & .880, .973,
.993 & .006, .032, .065 & .049, .065, .073 \\ \hline
\multicolumn{7}{c}{} \\
\multicolumn{7}{c}{MA(2) Error: $e_{t} = \nu_{t} + 0.5 \nu_{t-1} + 0.25
\nu_{t-2}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
100 & .898, .984, .995 & .450, .743, .866 & .029, .113, .182 & .570, .805,
.898 & .681, .908, .969 & .878, .927, .940 \\
250 & .999, 1.00, 1.00 & .769, .924, .968 & .019, .086, .189 & .951, .996,
.999 & .903, .979, .994 & .983, .989, .991 \\
500 & 1.00, 1.00, 1.00 & .884, .966, .990 & .032, .144, .250 & 1.00, 1.00,
1.00 & .959, .994, .995 & .995, .998, .998 \\
1000 & 1.00, 1.00, 1.00 & .974, .994, .997 & .068, .295, .471 & 1.00, 1.00,
1.00 & .986, .997, 1.00 & .998, .998, .998 \\ \hline
\multicolumn{7}{c}{} \\
\multicolumn{7}{c}{AR(1) Error: $e_{t} = 0.7 e_{t-1} + \nu_{t}$} \\ \hline
& \#1. Simple & \#2. Bilin & \#3. AR2/AR2 & \#4. AR2/AR1 & \#5. GARCH/wo &
\#6. GARCH/w \\ \hline
$n$ & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% & 1\%, 5\%, 10\% &
1\%, 5\%, 10\% & 1\%, 5\%, 10\% \\ \hline
100 & .925, .996, 1.00 & .282, .567, .741 & .064, .193, .299 & .472, .741,
.849 & .564, .818, .923 & .958, .970, .973 \\
250 & .999, 1.00, 1.00 & .341, .572, .718 & .136, .341, .465 & .935, .991,
.999 & .680, .849, .912 & .984, .987, .988 \\
500 & 1.00, 1.00, 1.00 & .393, .630, .781 & .325, .592, .700 & .999, 1.00,
1.00 & .700, .852, .918 & .999, 1.00, 1.00 \\
1000 & 1.00, 1.00, 1.00 & .474, .697, .810 & .688, .876, .923 & 1.00, 1.00,
1.00 & .750, .877, .929 & .998, .999, .999 \\ \hline
\end{tabular}
}
\end{center}
\par
{\fontsize{9.5pt}{13pt} \selectfont
\#1: Simple $y_{t} = e_{t}$ with a mean filter. \#2: Bilinear $y_{t} = 0.5
e_{t-1} y_{t-2} + e_{t}$ with a mean filter. \#3: AR(2) $y_{t} = 0.3 y_{t-1}
- 0.15 y_{t-2} + e_{t}$ with an AR(2) filter. \#4: AR(2) $y_{t} = 0.3
y_{t-1} - 0.15 y_{t-2} + e_{t}$ with an AR(1) filter. \#5: GARCH(1,1) $y_{t}
= \sigma_{t} e_{t}$, $\sigma_{t}^{2} = 1 + 0.2 y_{t-1}^{2} + 0.5
\sigma_{t-1}^{2}$ without (wo) a filter. \#6: GARCH(1,1) $y_{t} = \sigma_{t}
e_{t}$, $\sigma_{t}^{2} = 1 + 0.2 y_{t-1}^{2} + 0.5 \sigma_{t-1}^{2}$ with
(w) a GARCH filter. For each scenario, $\nu_{t} \overset{i.i.d.}{\sim}
N(0,1) $. The dependent wild bootstrap with $M = 500$ samples is used to
compute an approximate p-value of the Cram\'{e}r-von Mises test. All $
\mathcal{L}_{n} = n - 1$ lags are used. This table reports rejection
frequencies with respect to nominal size $\alpha \in \{ 0.01, 0.05, 0.10 \}$
across $J = 1000$ Monte Carlo samples. }
\end{table}
\begin{table}[th]
\caption{Rejection Frequencies of $\hat{\mathcal{T}}^{dw} (\mathcal{L}
_{n}^{*})$ and $CvM^{dw}$ (Scenarios \#7--\#9) }
\label{table:main_paper_rf_scenario789}
\begin{center}
{\fontsize{11pt}{18pt} \selectfont
\begin{tabular}{c|c||c|c|c||c|c|c||c|c|c}
\multicolumn{11}{c}{Max-correlation test with automatically selected lag $
\hat{\mathcal{T}}^{dw} (\mathcal{L}_{n}^{*})$} \\ \hline
& & \multicolumn{3}{c||}{\#7. MA(6)} & \multicolumn{3}{c||}{\#8. MA(12)} &
\multicolumn{3}{c}{\#9. MA(24)} \\ \hline
$n$ & $\bar{\mathcal{L}}_{n}$ & 1\% & 5\% & 10\% & 1\% & 5\% & 10\% & 1\% &
5\% & 10\% \\ \hline
100 & 21 & .016 & .084 & .139 & .013 & .067 & .117 & .017 & .065 & .118 \\
250 & 28 & .155 & .289 & .352 & .024 & .134 & .244 & .012 & .042 & .088 \\
500 & 35 & .710 & .812 & .826 & .371 & .673 & .770 & .024 & .097 & .192 \\
1000 & 45 & .999 & 1.00 & 1.00 & .983 & .997 & .997 & .578 & .833 & .918 \\
\hline
\multicolumn{11}{c}{} \\
\multicolumn{11}{c}{Cram\'{e}r-von Mises test $CvM^{dw}$} \\ \hline
& & \multicolumn{3}{c||}{\#7. MA(6)} & \multicolumn{3}{c||}{\#8. MA(12)} &
\multicolumn{3}{c}{\#9. MA(24)} \\ \hline
$n$ & $\mathcal{L}_{n}$ & 1\% & 5\% & 10\% & 1\% & 5\% & 10\% & 1\% & 5\% &
10\% \\ \hline
100 & 99 & .040 & .098 & .171 & .034 & .110 & .179 & .029 & .098 & .186 \\
250 & 249 & .026 & .080 & .142 & .025 & .087 & .155 & .022 & .088 & .143 \\
500 & 499 & .014 & .087 & .175 & .026 & .092 & .161 & .024 & .071 & .133 \\
1000 & 999 & .038 & .160 & .320 & .017 & .083 & .166 & .028 & .079 & .144 \\
\hline
\end{tabular}
}
\end{center}
\par
{\fontsize{9.5pt}{13pt} \selectfont
Scenario \#7: Remote MA(6) $y_{t} = e_{t} + 0.25 e_{t-6}$ with a mean
filter. Scenario \#8: Remote MA(12) $y_{t} = e_{t} + 0.25 e_{t-12}$ with a
mean filter. Scenario \#9: Remote MA(24) $y_{t} = e_{t} + 0.25 e_{t-24}$
with a mean filter. For each scenario, $e_{t} \overset{i.i.d.}{\sim} N(0,1)$
. For each test, the dependent wild bootstrap is used to compute an
approximate p-value. For the max-correlation test, the largest possible lag
length is $\bar{\mathcal{L}}_{n} = [10 \sqrt{n} / (\ln n)]$, and the tuning
parameter that affects the penalty term $\mathcal{P}_{n} (\mathcal{L})$ is $
q = 3$. For the Cram\'{e}r-von Mises test, all $\mathcal{L}_{n} = n - 1$
lags are used. We report rejection frequencies with respect to nominal size $
\alpha \in \{ 0.01, 0.05, 0.10 \}$ across $J = 1000$ Monte Carlo samples. }
\end{table}
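As an illustrative check of the reported maximum lag lengths (reading $[\cdot]$ as the integer part, which matches the tabulated values), the $\bar{\mathcal{L}}_{n}$ column follows from
$$\Big[\frac{10\sqrt{100}}{\ln 100}\Big]=[21.71]=21,\quad \Big[\frac{10\sqrt{250}}{\ln 250}\Big]=[28.64]=28,\quad
\Big[\frac{10\sqrt{500}}{\ln 500}\Big]=[35.98]=35,\quad \Big[\frac{10\sqrt{1000}}{\ln 1000}\Big]=[45.78]=45,$$
for $n\in\{100,250,500,1000\}$.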
\begin{figure}
\caption{Empirical Size and Size-Adjusted Power of $\hat{\mathcal{T}}^{dw}(\mathcal{L}_{n}^{*})$}
\label{fig:size_power_maxcorr_automatic_lag}
\end{figure}
\end{document}
\begin{document}
\bibliographystyle{plain}
\title{{\bf\Large On the K\"ahler-Ricci flow with small initial $E_1$ energy (I)}}
\tableofcontents
\section{Introduction and main results}
\subsection{The motivation} In \cite{[chen-tian2]},
\cite{[chen-tian1]}, a family of functionals $E_k\ (k = 1,2,\cdots, n)$
was introduced by the first named author and G. Tian to prove
the convergence of the K\"ahler-Ricci flow under appropriate
curvature assumptions. The aim of this program (cf. \cite{[chen1]})
is to study how the lower bound of $E_1$ is used to derive the
convergence of the K\"ahler-Ricci flow, i.e., the existence of
K\"ahler-Einstein metrics. We will address this question in
Subsection 1.2. The corresponding problem of the relation between
the lower bound of $E_0$, which is the $K$-energy introduced by T.
Mabuchi, and the existence of K\"ahler-Einstein metrics has been
extensively studied (cf. \cite{[BaMa]}, \cite{[chen-tian2]},
\cite{[chen-tian1]}, \cite{[Do]}). One interesting question in this
program is how the lower bound of $E_1$ compares to the lower bound
of $E_0$. We will give a satisfactory answer to this question
in Subsection 1.3. \\
\subsection{The lower bound of $E_1$ and K\"ahler-Einstein metrics}
Throughout this paper, let $(M, [\omega])$ be a polarized compact K\"ahler manifold with
$[\omega]=2\pi c_1(M)>0$ (the first Chern class). In
\cite{[chen1]}, the first named author proved a stability theorem for
the K\"ahler-Ricci flow near the infimum of $E_1$ under the
assumption that the initial metric has $Ric>-1$ and $|Rm|$ bounded.
Unfortunately, this stability theorem needs a topological assumption
\begin{equation}
(-1)^n([c_1(M)]^{[2]}[\omega]^{[n-2]}-\frac{2(n+1)}{n}[c_2(M)][\omega]^{[n-2]})\geq 0.\label{c1}
\end{equation}
The only known compact manifold which satisfies this condition is $\mathbb{C}P^n$,
which restricts potential applications of this result. The main
purpose of this paper is to remove this assumption.
\begin{theo}\label{main2}
Suppose that $M$ is pre-stable, and $E_1$ is bounded from below in
$[\omega]$. For any $\delta, \Lambda>0,$ there exists a small positive
constant $\epsilon(\delta, \Lambda)>0$ such that for any metric $\omega_0$ in
the subspace ${\mathcal A}(\delta, \Lambda, \omega, \epsilon)$ of K\"ahler metrics
$$\{\omega_{\phi}=\omega+\partial\bar{\partial} \phi\;|\;
Ric(\omega_{\phi})>-1+\delta, \; |Rm|(\omega_{\phi})\leq \Lambda,\;
E_1(\omega_{\phi})\leq \inf E_1+\epsilon \},$$ where $E_1(\omega')=E_{1,
\omega}(\omega')$, the K\"ahler-Ricci flow will deform it exponentially
fast to a K\"ahler-Einstein metric in the limit.
\end{theo}
\begin{rem} The condition that $M$ is pre-stable (cf. Definition \ref{prestable})
roughly means that the complex structure does not jump
in the limit (cf. \cite{[chen1]}, \cite{[PhSt]}). In G. Tian's
definition of $K$-stability, this condition appears to be one of
three necessary conditions for a complex structure to be $K$-stable
(cf. \cite{[Do]}, \cite{[Tian2]}).
\end{rem}
\begin{rem}This gives a sufficient condition for the existence of K\"ahler-Einstein metrics.
More interestingly, by a theorem of G. Tian \cite{[Tian2]}, this
also gives a sufficient condition for an algebraic manifold to be
weakly $K$-stable. One tempting question is: does this condition
imply weak $K$-stability directly?
\end{rem}
\begin{rem}
If we call the result in \cite{[chen1]} a ``pre-baby'' step in this
ambitious program, then Theorems 1.1 and 1.5 should be viewed as a
``baby step'' in this program. We wish to remove the assumption on
the bound of the bisectional curvature. More importantly (cf.
Theorem 1.8 below), we wish to replace the condition on the Ricci
curvature in both Theorems 1.1 and 1.5 by a condition on the scalar
curvature. Then our theorem really becomes a ``small energy'' lemma.
\end{rem}
If we remove the condition of ``pre-stable'', then
\begin{theo}\label{main} Suppose that $(M, [\omega])$ has no nonzero
holomorphic vector fields and $E_1$ is bounded from below in
$[\omega].$ For any $\delta, B, \Lambda>0,$ there exists a small
positive constant $\epsilon(\delta, B, \Lambda, \omega)>0$ such that for any
metric $\omega_0$ in the subspace ${\mathcal A}(\delta, B, \Lambda, \epsilon)$ of
K\"ahler metrics
$$\{\omega_{\phi}=\omega+\partial\bar{\partial} \phi\;|\; Ric(\omega_{\phi})>-1+\delta,\; |\phi|
\leq B, \;|Rm|(\omega_{\phi})\leq \Lambda, \; E_1(\omega_{\phi})\leq \inf
E_1+\epsilon \}$$ the K\"ahler-Ricci flow will deform it exponentially
fast to a K\"ahler-Einstein metric in the limit.
\end{theo}
\begin{rem} In light of Theorem \ref{main4} below, we can replace the
condition on $E_1$ by a corresponding condition on $E_0.$
\end{rem}
\subsection{The relations between the energy functionals $E_k$}
Song-Weinkove \cite{[SoWe]} recently proved that the $E_k$ have a
lower bound on the space of K\"ahler metrics with nonnegative Ricci curvature
for K\"ahler-Einstein manifolds. Moreover, they also showed that,
modulo holomorphic vector fields, $E_1$
is proper if and only if there exists a K\"ahler-Einstein metric.
Shortly afterwards, N. Pali
\cite{[Pali]} gave a formula relating $E_1$ and the $K$-energy
$E_0$, which says that the $E_1$ energy
is always bigger than the $K$-energy.
Tosatti \cite{[Tosatti]} proved that under some curvature assumptions,
the critical point of $E_k$ is a K\"ahler-Einstein metric.
Pali's theorem means that $E_1$ has a lower bound if the $K$-energy
has a lower bound.
A natural question is whether
the converse holds. To our own surprise, we proved the following
result.
\begin{theo}\label{main4}$E_1$ is bounded from below if and only if
the $K$-energy is bounded from below in the class $[\omega].$
Moreover, we have\footnote{For simplicity of notation, we will often
drop the subscript $\phi$ and write $|\nabla f|^2$ for $|\nabla
f|_{\phi}^2 $. But in an integral, $|\nabla f|^2$ is with respect to
the metric of the volume form. }
$$\inf_{\omega'\in [\omega]} E_{1, \omega}(\omega')=2\inf_{\omega'\in [\omega]} E_{0,
\omega}(\omega')-\frac 1{nV}\int_M\;|\nabla h_{\omega}|^2 \omega^n,$$ where
$h_{\omega}$ is the Ricci potential function with respect to $\omega$.
\end{theo}
A crucial observation leading to this theorem is
\begin{theo} Along the K\"ahler-Ricci flow, $E_1$ will
decrease after finite time.
\label{main3}
\end{theo}
Theorems \ref{main4} and \ref{main3} of course raise more questions
than they answer. For instance, is the properness of $E_k$
equivalent to the properness of $E_l$ for $k\neq l$? More subtly,
are the corresponding notions of semi-stability ultimately
equivalent to each other? Is there a preferred functional in this
family, or a better linear combination of these $E_k$ functionals?
The first named author genuinely believes that
this observation opens the door for more interesting questions. \\
Another interesting question is the relation of $E_k$ with various
notions of stability defined by algebraic conditions. Theorems 1.1
and 1.5 suggest an indirect link between these functionals $E_k$ and
stability. According to A. Futaki \cite{[futaki04]}, these
functionals may be directly linked to asymptotic Chow semi-stability
(note that the right hand side of (1.2) in \cite{[futaki04]} is
precisely equal to $\frac{d E_k}{dt}$ if one takes $p=k+1$ and $\phi
= c_1^k$, cf. Theorem 2.4 below). It is highly interesting to
explore further in this direction.
\subsection{Main ideas of the proofs of Theorems 1.1 and 1.5}
In \cite{[chen1]}, a topological condition is used to control the
$L^2$ norm of the bisectional curvature once the Ricci curvature is
controlled. Using the parabolic Moser iteration arguments, this
gives a uniform bound on the full bisectional curvature. In the
present paper, we need to find a new way to control the full
bisectional curvature under the flow. The whole scheme of obtaining
this uniform estimate on curvatures depends on two crucial steps and
their dynamical interplay.\\
{\bf STEP 1:} The first step is to follow the approach of the
celebrated work of Yau on the Calabi conjecture (cf. \cite{[Cao]},
\cite{[Yau]}). The key point here is to control the $C^0$ norm of
the evolving K\"ahler potential $\phi(t)$, in particular, the growth
of $u = {{\partial\phi}\over {\partial t}}$ along the K\"ahler-Ricci flow. Note
that $u$ satisfies
\[
{ {\partial u}\over {\partial t}} = \triangle_{\phi} u + u.
\]
Therefore, the crucial step is to control the first eigenvalue of
the Laplacian operator (assuming the traceless Ricci form is
controlled via an iteration process which we will describe as {\bf
STEP 2} below).
For our purpose, we need
to show that the first eigenvalue of the evolving Laplacian operator
is bigger than $1 + \gamma$ for some fixed $\gamma > 0.\;$ Such a problem
already appeared in \cite{[chen0]} and \cite{[chen-tian2]} since the
first eigenvalue of the Laplacian operator of K\"ahler-Einstein
metrics is exactly $1.\;$ If $Aut_r(M, J)\neq 0,\;$ the uniqueness
of K\"ahler-Einstein metrics implies that the dimension of the first
eigenspace is fixed, while the vanishing of the Futaki invariant
implies that $u(t)$ is essentially perpendicular to the first
eigenspace of the evolving metrics. These are two crucial
ingredients which allow one to squeeze out a small gap $\gamma$ in the
first eigenvalue estimate. Following the approach in \cite{[chen0]}
and \cite{[chen-tian2]}, we can show that $u$ decays exponentially.
This in turn implies the $C^0$ bound on the evolving K\"ahler
potential. Consequently, this leads to control of all derivatives of
the evolving potential, in particular, the bisectional curvature. In
summary, as long as we have control of the first eigenvalue, one
controls the full bisectional curvature of the evolving K\"ahler
metrics.
For Theorem 1.5, a crucial technical step is to use an estimate
obtained in \cite{[chenhe]} on the Ricci curvature tensor.\\
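To see in a simplified model why such an eigenvalue gap forces decay, suppose for the moment that the
metric is fixed and that $\int_M u\,\omega^n=0$; if the first nonzero eigenvalue $\lambda_1$ of $-\triangle$
satisfies $\lambda_1\geq 1+\gamma$, then
\[
\frac{d}{dt}\int_M u^2\,\omega^n=2\int_M u(\triangle u+u)\,\omega^n\leq -2(\lambda_1-1)\int_M u^2\,\omega^n
\leq -2\gamma\int_M u^2\,\omega^n,
\]
so $\int_M u^2\,\omega^n\leq e^{-2\gamma t}\int_M u(0)^2\,\omega^n$. In the actual proof the metric evolves
and $u(t)$ is only essentially perpendicular to the first eigenspace, which is exactly why the uniform gap
$\gamma$ and the eigenvalue estimates of Section 4.3 are needed.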
{\bf STEP 2:} Here we follow the Moser iteration techniques which
appeared in \cite{[chen1]}. Assuming that the full bisectional
curvature is bounded by some large but fixed number, the norm of the
traceless bisectional curvature and the traceless Ricci tensor both
satisfy the following inequality:
\[
{{\partial u}\over {\partial t}} \leq \triangle_{\phi} u + |Rm|\, u \leq
\triangle_{\phi} u + C\cdot u.
\]
If the curvature of the evolving metric is controlled in $L^p\ (p>
n)$, then the smallness of the energy $E_1$ allows one to control
the norm of the traceless Ricci tensor of the evolving K\"ahler
metrics (cf. formula (\ref{2})). According to Theorems \ref{lem2.8}
and \ref{theo4.18}, this will in turn give an improved estimate on
the first eigenvalue in a slightly longer period, but perhaps
without full uniform control of the bisectional curvature in the
``extra'' time. However, this gives uniform control on the K\"ahler
potential, which in turn gives control of the bisectional curvature in the
extended ``extra'' time. We use the Moser iteration again to obtain
sharper control on the norm of the traceless bisectional
curvature.\\
Hence, the combination of the parabolic Moser iteration with
Yau's estimate yields the desired global estimate. In
comparison, the iteration process in \cite{[chen1]} is instructive
and more direct. The first named author believes that this
approach is perhaps more important than the mild results we obtained
there.
\subsection{The organization} This paper is roughly organized as
follows: In Section 2, we review some basic facts in K\"ahler
geometry and necessary information on the K\"ahler-Ricci flow. We
also include some basic facts on the energy functionals $E_k$. In
Section 3, we prove Theorems \ref{main4} and \ref{main3}. In Section
4, we prove several technical theorems on the K\"ahler-Ricci flow.
The key results are the estimates of the first eigenvalue of the
evolving Laplacian, which are proved in Section 4.3. In Sections 5 and 6
we prove Theorem \ref{main2} and Theorem \ref{main}.\\
\noindent {\bf Acknowledgements}: Part of this work was done while
the second named author was visiting the University of Wisconsin-Madison,
and he would like to express thanks for the hospitality. The second
named author would also like to thank
Professors W. Y. Ding and X. H. Zhu for
their help and encouragement. The first named author would like to
thank Professor P. Li for his interest and encouragement in this
project. The authors would like to thank the referees for numerous
suggestions which helped to improve the presentation. \vskip 1cm
\section{Setup and known results}
\subsection{Setup of notations}
Let $M$ be an $n$-dimensional compact K\"ahler manifold. A K\"ahler
metric can be given by its K\"ahler form $\omega$ on $M$. In local
coordinates $z_1, \cdots, z_n$, this $\omega$ is of the form
\[
\omega = \sqrt{-1} \displaystyle \sum_{i,j=1}^n\;g_{i
\overline{j}}\, d\,z^i\wedge d\,z^{\overline{j}} > 0,
\]
where $\{g_{i\overline {j}}\}$ is a positive definite Hermitian
matrix function. The K\"ahler condition requires that $\omega$ is
a closed positive (1,1)-form. In other words, the following holds
\[
{{\partial g_{i \overline{k}}} \over {\partial z^{j}}} =
{{\partial g_{j \overline{k}}} \over {\partial z^{i}}}\qquad {\rm
and}\qquad {{\partial g_{k \overline{i}}} \over {\partial
z^{\overline{j}}}} = {{\partial g_{k \overline{j}}} \over
{\partial z^{\overline{i}}}}\qquad \forall\;i,j,k=1,2,\cdots, n.
\]
The K\"ahler metric corresponding to $\omega$ is given by
\[
\sqrt{-1} \;\displaystyle \sum_1^n \; {g}_{\alpha
\overline{\beta}} \; d\,z^{\alpha}\otimes d\, z^{\overline{\beta}}.
\]
For simplicity, in the following, we will often denote by $\omega$
the corresponding K\"ahler metric. The K\"ahler class of $\omega$
is its cohomology class $[\omega]$ in $H^2(M,\mathbb{R}).\;$ By the Hodge
theorem, any other K\"ahler metric in the same K\"ahler class is
of the form
\[
\omega_{\phi} = \omega + \sqrt{-1} \displaystyle \sum_{i,j=1}^n\;
{{\partial^2 \phi}\over {\partial z^i \partial z^{\overline{j}}}}
\;dz^i\wedge dz^{\bar j} > 0
\]
for some real valued function $\phi$ on $M.\;$ The functional space
in which we are interested (often referred to as the space of
K\"ahler potentials) is
\[
{\cal P}(M,\omega) = \{ \phi\in C^{\infty}(M, \mathbb{R}) \;\mid\;
\omega_{\phi} = \omega + \sqrt{-1} {\partial} \overline{\partial}
\phi > 0\;\;{\rm on}\; M\}.
\]
Given a K\"ahler metric $\omega$, its volume form is
\[
\omega^n = n!\;\left(\sqrt{-1} \right)^n \det\left(g_{i
\overline{j}}\right) d\,z^1 \wedge d\,z^{\overline{1}}\wedge \cdots
\wedge d\,z^n \wedge d \,z^{\overline{n}}.
\]
Its Christoffel symbols are given by
\[
\Gamma^k_{i\,j} = \displaystyle \sum_{l=1}^n\;g^{k\overline{l}}
{{\partial g_{i \overline{l}}} \over {\partial z^{j}}} ~~~{\rm
and}~~~ \Gamma^{\overline{k}}_{\overline{i} \,\overline{j}} =
\displaystyle \sum_{l=1}^n\;g^{\overline{k}l} {{\partial g_{l
\overline{i}}} \over {\partial z^{\overline{j}}}},
\qquad\forall\;i,j,k=1,2,\cdots, n.
\]
The curvature tensor is
\[
R_{i \overline{j} k \overline{l}} = - {{\partial^2 g_{i \overline
{j}}} \over {\partial z^{k} \partial z^{\overline{l}}}} +
\displaystyle \sum_ {p,q=1}^n g^{p\overline{q}} {{\partial g_{i
\overline{q}}} \over {\partial z^{k}}} {{\partial g_{p
\overline{j}}} \over {\partial z^{\overline{l}}}},
\qquad\forall\;i,j,k,l=1,2,\cdots, n.
\]
We say that $\omega$ is of nonnegative bisectional curvature if
\[
R_{i \overline{j} k \overline{l}} v^i v^{\overline{j}} w^k w^
{\overline{l}}\geq 0
\]
for all non-zero vectors $v$ and $w$ in the holomorphic tangent
bundle of $M$. The bisectional curvature and the curvature tensor
can be mutually determined. The Ricci curvature of $\omega$ is
locally given by
\[
R_{i \overline{j}} = - {{\partial^2 \log \det (g_{k
\overline{l}})} \over {\partial z_i \partial \bar z_j }} .
\]
So its Ricci curvature form is
\[
{\rm Ric}({\omega}) = \sqrt{-1} \displaystyle \sum_{i,j=1}^n \;R_{i
\overline{j}}\; d\,z^i\wedge d\,z^{\overline{j}} = -\sqrt{-1}
\partial \overline {\partial} \log \;\det (g_{k \overline{l}}).
\]
It is a real, closed (1,1)-form. Recall that $[\omega]$ is called a
canonical K\"ahler class if this Ricci form is cohomologous to
$\lambda \,\omega$ for some constant $\lambda$\,. In our setting, we
require $\lambda = 1.\;$
\sigmagmaubsection{The K\"ahler-Ricci flow}
Let us assume that the first Chern class $c_1(M)$ is positive.
Choose an initial K\"ahler metric $\Omegaegaegamega$ in $2\partiali c_1(M).\;$ The
normalized K\"ahler-Ricci flow (cf. \cite{[Ha82]}) on a K\"ahler
manifold $M$ is of the form
\baregin{equation}
{{\partialartialrtial g_{i \Omegaegaegaverline{j}}} \Omegaegaegaver {\partialartialrtial t }} = g_{i
\Omegaegaegaverline{j}} - R_{i \Omegaegaegaverline{j}}, \qquad{\mathfrak f}orall\; i,\; j=
1,2,\cdots ,n. \lambdaambdabel{eq:kahlerricciflow}
\e_2and{equation}
It follows that on the level of K\"ahler potentials, the Ricci
flow becomes
\baregin{equation}
{{\partialartialrtial \partialhi} \Omegaegaegaver {\partialartialrtial t }} = \vskip .1cmg {{\Omegaegaegamega_{\partialhi}}^n
\Omegaegaegaver {\Omegaegaegamega}^n } + \partialhi - h_{\Omegaegaegamega} , \lambdaambdabel{eq:flowpotential}
\e_2and{equation}
where $h_{\Omegaegaegamega}$ is defined by
\[
{\rm Ric}({\Omegaegaegamega})- \Omegaegaegamega = \sigmagmaqrt{-1} \partialartialrtial \Omegaegaegaverline{\partialartialrtial}
h_ {\Omegaegaegamega}, \; {\rm and}\;\deltaisplaystyleplaystyle \sqrt{-1}nt_M\; (e^{h_{\Omegaegaegamega}} -
1) {\Omegaegaegamega}^n = 0.
\]
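For the reader's convenience, we recall the short computation showing that (\ref{eq:flowpotential})
recovers (\ref{eq:kahlerricciflow}): since $\sqrt{-1}\partial\overline{\partial}\log\frac{\omega_{\phi}^n}{\omega^n}
={\rm Ric}(\omega)-{\rm Ric}(\omega_{\phi})$ and $\sqrt{-1}\partial\overline{\partial}\phi=\omega_{\phi}-\omega$,
applying $\sqrt{-1}\partial\overline{\partial}$ to (\ref{eq:flowpotential}) gives
\[
\frac{\partial \omega_{\phi}}{\partial t}=\sqrt{-1}\partial\overline{\partial}\,\frac{\partial\phi}{\partial t}
=\big({\rm Ric}(\omega)-{\rm Ric}(\omega_{\phi})\big)+(\omega_{\phi}-\omega)-\big({\rm Ric}(\omega)-\omega\big)
=\omega_{\phi}-{\rm Ric}(\omega_{\phi}),
\]
which is (\ref{eq:kahlerricciflow}).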
Then the evolution equation for the bisectional curvature is
\begin{eqnarray}{{\partial }\over {\partial t}} R_{i \overline{j} k
\overline{l}} & = & \bigtriangleup R_{i \overline{j} k
\overline{l}} + R_{i \overline{j} p \overline{q}} R_{q
\overline{p} k \overline{l}} - R_{i \overline{p} k \overline{q}}
R_{p \overline{j} q \overline{l}} + R_{i \overline{l} p
\overline{q}} R_{q \overline{p} k \overline{j}} + R_{i
\overline{j} k \overline{l}} \nonumber\\
& & \;\;\; -{1\over 2} \left( R_{i \overline{p}}R_{p \overline{j}
k \overline{l}} + R_{p \overline{j}}R_{i \overline{p} k
\overline{l}} + R_{k \overline{p}}R_{i \overline{j} p
\overline{l}} + R_{p \overline{l}}R_{i \overline{j} k
\overline{p}} \right). \label{eq:evolutio of curvature1}
\end{eqnarray}
Here $\Delta$ is the Laplacian of the metric $g(t).$ The evolution
equations for the Ricci curvature and the scalar curvature are
\begin{eqnarray} {{\partial R_{i \bar j}}\over {\partial t}} & = & \triangle
R_{i\bar j} + R_{i\bar j p \bar q} R_{q \bar p} -R_{i\bar p} R_{p \bar
j},\label{eq:evolutio of curvature2}\\
{{\partial R}\over {\partial t}} & = & \triangle R + R_{i\bar j} R_{j\bar i}- R.
\label{eq:evolutio of curvature3}
\end{eqnarray}
By direct computations using the evolving frames, we can obtain
the following evolution equation for the bisectional curvature:
\begin{equation}
\frac{\partial R_{i\bar jk\bar l}}{\partial t} =\Delta R_{i\bar jk\bar l}- R_{i\bar
jk\bar l}+R_{i \bar j m\bar n}R_{n\bar m k\bar l}-R_{i\bar m k\bar
n}R_{m\bar j n\bar l}+R_{i\bar l m\bar n}R_{n\bar m k\bar l}.
\label{eq:evolution of curvature4}
\end{equation}
As usual, the flow equation (\ref{eq:kahlerricciflow}) or
(\ref{eq:flowpotential}) is referred to as the K\"ahler-Ricci flow
on $M$. It was proved by Cao \cite{[Cao]}, who followed Yau's
celebrated work \cite{[Yau]}, that the K\"ahler-Ricci flow exists
globally for any smooth initial K\"ahler metric. It was proved by S.
Bando \cite{[Bando]} in dimension 3 and N. Mok \cite{[Mok]} in all
dimensions that the positivity of the bisectional curvature is
preserved under the flow. In \cite{[chen-tian2]} and
\cite{[chen-tian1]}, the first named author and G. Tian proved that
the K\"ahler-Ricci flow, on a K\"ahler-Einstein manifold, initiated
from a metric with positive bisectional curvature converges to a
K\"ahler-Einstein metric with constant bisectional curvature. In
unpublished work on the K\"ahler-Ricci flow, G. Perelman proved,
along with other results, that the scalar curvature is always
uniformly bounded.
\subsection{Energy functionals $E_k$}
In \cite{[chen-tian2]}, a family of energy functionals $E_k\ (k=0, 1,
2,\cdots, n)$ was introduced and these functionals played an
important role there. First, we recall the definitions of these
functionals.
\begin{defi} For any $k=0, 1, \cdots, n,$ we define a functional
$E_k^0$ on ${\mathcal P}(M, \omega)$ by
$$E_{k, \omega}^0(\phi)=\frac 1V\int_M\; \Big(\log \frac {\omega^n_{\phi}}
{\omega^n}-h_{\omega}\Big)\Big( \sum_{i=0}^k\; Ric(\omega_{\phi})^i\wedge
\omega^{k-i}\Big)\wedge \omega_{\phi}^{n-k}+\frac 1V\int_M\; h_{\omega}\Big(
\sum_{i=0}^k\; Ric(\omega)^i\wedge \omega^{k-i}\Big)\wedge \omega^{n-k}.$$
\end{defi}
\begin{defi}For any $k=0, 1, \cdots, n$, we define $J_{k, \omega}$ as
follows
$$J_{k, \omega}(\phi)=-\frac {n-k}{V}\int_0^1\int_M\;\frac{\partial \phi(t)}{\partial t}(\omega_{\phi(t)}
^{k+1}-\omega^{k+1})\wedge \omega_{\phi(t)}^{n-k-1} \wedge dt,$$ where
$\phi(t)\ (t\in [0, 1])$ is a path from $0$ to $\phi$ in ${\mathcal P}(M,
\omega)$.
\end{defi}
\begin{defi}For any $k=0, 1, \cdots, n,$ the functional $E_{k, \omega}$
is defined as follows
$$E_{k, \omega}(\phi)=E_{k, \omega}^0(\phi)-J_{k, \omega}(\phi).$$
For simplicity, we will often drop the subscript $\omega$.
\end{defi}
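For instance, in the case $k=0$ the sums above reduce to the single term $i=0$, and the definitions read
$$E_{0, \omega}^0(\phi)=\frac 1V\int_M\Big(\log\frac{\omega_{\phi}^n}{\omega^n}-h_{\omega}\Big)\omega_{\phi}^{n}
+\frac 1V\int_M h_{\omega}\,\omega^{n},\qquad
J_{0, \omega}(\phi)=-\frac nV\int_0^1\int_M\frac{\partial \phi(t)}{\partial t}\,(\omega_{\phi(t)}-\omega)\wedge
\omega_{\phi(t)}^{n-1}\wedge dt,$$
so that $E_{0,\omega}=E_{0,\omega}^0-J_{0,\omega}$; the remark following the next theorem identifies this
functional with the $K$-energy.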
By direct computation, we have
\begin{theo} For any $k=0, 1, 2,\cdots, n,$ we have
\begin{eqnarray*} \frac {dE_k}{dt}&=&\frac {k+1}{V}\int_M\;\Delta_{\phi}\dot\phi\,
Ric(\omega_{\phi})^k\wedge\omega_{\phi}^{n-k}\\
&&-\frac{n-k}{V}\int_M\;\dot\phi
(Ric(\omega_{\phi})^{k+1}-\omega_{\phi}^{k+1})\wedge \omega_{\phi}^{n-k-1}.
\end{eqnarray*} Here $\phi(t)$ is any path in ${\mathcal P}(M, \omega)$.
\end{theo}
\begin{rem}Note that
$$
\frac {dE_0}{dt}=-\frac nV\int_M\; \dot\phi
(Ric(\omega_{\phi})-\omega_{\phi})\wedge \omega_{\phi}^{n-1}.$$ Thus, $E_0$
is the well-known $K$-energy.
\end{rem}
\begin{theo} Along the K\"ahler-Ricci flow where $Ric(\omega_{\phi})>-\omega_{\phi}$ is
preserved, we have
$$\frac {dE_k}{dt}\leq-\frac {k+1}{V}\int_M\;(R(\omega_{\phi})-r) Ric(\omega_{\phi})^k\wedge\omega_{\phi}^{n-k}.$$
When $k=0, 1,$ we have
\begin{eqnarray*}
\frac {dE_0}{dt}&=&-\frac 1V\int_M\;|\nabla \dot\phi|^2\omega_{\phi}^n
\leq 0,\\
\frac {dE_1}{dt}&\leq&-\frac
2V\int_M\;(R(\omega_{\phi})-r)^2\omega_{\phi}^n\leq 0.
\end{eqnarray*}
\end{theo}
Recently, Pali in \cite{[Pali]} found the following formula, which
will be used in this paper.
\begin{theo}\label{pali}For any $\phi\in {\mathcal P}(M, \omega)$, we have
$$E_{1, \omega}(\phi)=2E_{0, \omega}(\phi)+\frac 1{nV}\int_M\; |\nabla u|^2
\omega_{\phi}^n -\frac 1{nV}\int_M\; |\nabla h_{\omega}|^2 \omega^n,$$ where
$$u=\log\frac {\omega^n_{\phi}}{\omega^n}+\phi-h_{\omega}.$$
\end{theo}
\begin{rem}This formula directly implies that if $E_0$ is bounded
from below, then $E_1$ is bounded from below on ${\mathcal P}(M, \omega)$.
\end{rem}
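Indeed, since the term $\frac 1{nV}\int_M |\nabla u|^2\omega_{\phi}^n$ in Theorem \ref{pali} is nonnegative,
one has, for every $\phi\in {\mathcal P}(M, \omega)$,
$$E_{1, \omega}(\phi)\geq 2E_{0, \omega}(\phi)-\frac 1{nV}\int_M\;|\nabla h_{\omega}|^2\omega^n,$$
which is the inequality used at the end of the proof of Theorem \ref{main4} below.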
\begin{rem}In a forthcoming paper \cite{[chenli]}, the second named author will
generalize Theorem \ref{pali} to all the functionals $E_k\ (k\geq 1)$,
and discuss some interesting relations between the $E_k.$
\end{rem}
\section{Energy functionals $E_0$ and $E_1$}
In this section, we want to prove Theorems \ref{main4} and
\ref{main3}.
\subsection{Energy functional $E_1$ along the K\"ahler-Ricci flow}
The following lemma is well known in the literature (cf.
\cite{[chen2]}).
\begin{lem}\label{sec2}The minimum of the scalar curvature along the K\"ahler-Ricci flow,
if negative, will increase to zero
exponentially.
\end{lem}
\begin{proof} Let $\mu(t)=-\min_M R(x, 0)e^{-t},$ then
\begin{eqnarray*}
\frac{\partial}{\partial t}(R+\mu(t))&=&\Delta (R+\mu(t))+|Ric|^2-(R+\mu(t))\\
&\geq &\Delta (R+\mu(t))-(R+\mu(t)).
\end{eqnarray*}
Since $R(x, 0)+\mu(0)\geq 0$, by the maximum principle we have
$$R(x, t)\geq -\mu(t)=\min_M R(x, 0)e^{-t}.$$
\end{proof}
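In particular, taking the minimum over $M$ gives
$$\min_M R(\cdot, t)\geq e^{-t}\min_M R(\cdot, 0),$$
so the negative part of the scalar curvature decays at least at the exponential rate $e^{-t}$, which is the
content of the lemma.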
Using the above lemma, the following theorem is an easy corollary
of Pali's formula.
\begin{theo}\label{thm3.2}Along the K\"ahler-Ricci flow, $E_1$ will
decrease after finite time. In particular, if the initial scalar
curvature $R(0)>-n+1$, then there is a small constant $\delta>0$
depending only on
$n$ and $\min_{x\in M} R(0)$ such that for all time $t>0$, we have
\begin{equation} \frac {d}{dt}E_1\leq -\frac {\delta}V \int_M\;|\nabla \dot \phi|^2
\omega_{\phi}^n\leq 0.\label{aaa}\end{equation}
\end{theo}
\begin{proof} Along the K\"ahler-Ricci flow, the evolution equation
for $|\nabla \dot \phi|^2$ is
$$\frac{\partial}{\partial t}|\nabla \dot \phi|^2=\Delta_{\phi} |\nabla \dot \phi|^2-|\nabla \nabla \dot
\phi|^2-|\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2.$$ By Theorem
\ref{pali}, we have
\begin{eqnarray*}
\frac {d}{dt}E_1&=&-\frac 2V\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n
+\frac 1{nV}\frac {d}{dt}\int_M\; |\nabla \dot \phi|^2\omega_{\phi}^n\\
&=&-\frac 2V\int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n +\frac
1{nV}\int_M\;(-|\nabla \nabla \dot \phi|^2-|\nabla \bar \nabla\dot
\phi|^2+|\nabla \dot \phi|^2+|\nabla \dot \phi|^2(n-R))\omega_{\phi}^n.
\end{eqnarray*}
If the scalar curvature at the initial time satisfies $R(x, 0)\geq
-n+1+n\delta\ (n\geq 2)$ for some small $\delta>0$, by Lemma \ref{sec2}
for all time $t>0$ we have $R(x, t)\geq -n+1+n\delta.$ Then we have
\begin{eqnarray}\frac {d}{dt}E_1 &\leq&-\frac 2V\int_M\;|\nabla \dot
\phi|^2\omega_{\phi}^n +\frac 1{nV}\int_M\;(-|\nabla \nabla \dot \phi|^2-
|\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2+|\nabla \dot
\phi|^2(2n-1-n\delta))\omega_{\phi}^n\nonumber\\
&\leq&-\frac {\delta}V \int_M\;|\nabla \dot \phi|^2\omega_{\phi}^n.
\label{aeq1}\end{eqnarray} Otherwise, by Lemma \ref{sec2} after a finite time
$T_0=\log\frac {-\min_M R(x, 0)}{n-1-n\delta},$ we still have $R(x,
t)>-n+1+n\delta$ for small $\delta>0.$ Thus, the inequality (\ref{aeq1}) holds.
If $n=1$, by direct calculation we have
$$\frac {dE_1}{dt}=-\frac 2V\int_M\; (R(\omega_{\phi})-1)^2\omega_{\phi}=-\frac
2V\int_M\; (\Delta_{\phi}\dot\phi)^2\omega_{\phi}.$$ If the initial
scalar curvature $R(0)>0$, by R. Hamilton's results in \cite{[Ha88]}
the scalar curvature has a uniformly positive lower bound. Thus,
$R(t)\geq c>0$ for some constant $c>0$ and all $t>0.$ Therefore, by
the proof of Lemma \ref{lem2.13} in Section \ref{section4.4.1} the
first eigenvalue of $\Delta_{\phi}$ satisfies $\lambda_1(t)\geq c.$ Then
$$\frac {dE_1}{dt}=-\frac 2V\int_M\; (\Delta_{\phi}\dot\phi)^2\omega_
{\phi}\leq -\frac {2c}V\int_M\; |\nabla\dot\phi|^2\omega_{\phi}.$$ The
theorem is proved.
\end{proof}
\subsection{On the lower bound of $E_0$ and $E_1$} In this section, we
will prove Theorem \ref{main4}. Recall the generalized energy functionals:
\begin{eqnarray*} I_{\omega}(\phi)&=&\frac 1V\sum_{i=0}^{n-1}\int_M\; \sqrt{-1}
\partial \phi\wedge\bar \partial \phi\wedge \omega^i\wedge\omega_{\phi}^
{n-1-i},\\
J_{\omega}(\phi)&=&\frac 1V\sum_{i=0}^{n-1}\frac {i+1}{n+1}\int_M\;
\sqrt{-1}\partial \phi\wedge\bar \partial \phi\wedge
\omega^i\wedge\omega_{\phi}^{n-1-i}. \end{eqnarray*} By direct calculation, we can
prove
$$0\leq I-J\leq I\leq (n+1)(I-J)$$
and, for any path of K\"ahler potentials $\phi(t)$,
$$\frac {d}{dt}(I_{\omega}-J_{\omega})(\phi(t))=-\frac
1V\int_M\;\phi\Delta_{\phi} \dot\phi \;\omega_{\phi}^n.$$
The behaviour of $E_1$ for the family of K\"ahler potentials
$\phi(t)$ satisfying the equation (\ref{eq1}) below has been studied
by Song and Weinkove in \cite{[SoWe]}. Following their ideas, we
have the following lemma.
\begin{lem}\label{lema3.3}For
any K\"ahler metric $\omega_0\in[\omega]$, there exists a K\"ahler
metric $\omega_0'\in [\omega]$ such that $Ric(\omega_0')>0$ and
$$E_0(\omega_0)\geq E_0(\omega_0').$$
\end{lem}
\begin{proof}We consider the complex Monge-Amp\`{e}re equation
\begin{equation}(\omega_0+\partial\bar{\partial} \varphi)^n=e^{th_{0}+c_t}\omega_0^n,\label{eq1}\end{equation}
where $h_{0}$ satisfies the following equation
$$Ric(\omega_0)-\omega_0=\partial\bar{\partial} h_{0},\qquad \frac 1V\int_M\;e^{h_0}
\omega_0^n=1$$ and $c_t$ is the constant chosen so that
$$\int_M\; e^{th_0+c_t}\omega_0^n=V.$$
By Yau's results in \cite{[Yau]}, there exists a unique solution
$\varphi(t)\ (t\in [0, 1])$ to the equation (\ref{eq1}) with $\int_M\;
\varphi \,\omega_0^n=0.$ Then $\varphi(0)=0.$ Note that the equation
(\ref{eq1}) implies \begin{equation} Ric(\omega_{\varphi})=\omega_{\varphi}+(1-t)\partial\bar{\partial}
h_0-\partial\bar{\partial}\varphi,\label{eq2}\end{equation} and
$$\Delta_{\varphi}\dot\varphi=h_0+c_t'.$$ By the definition of $E_0$
we have \begin{eqnarray*} \frac {d}{dt}E_0(\varphi(t))&=&-\frac 1V\int_M\;
\dot\varphi(R(\omega_{\varphi})-n)\omega_{\varphi}^n\\
&=&-\frac 1V\int_M\;
\dot\varphi((1-t)\Delta_{\varphi}h_0-\Delta_{\varphi}\varphi)\omega_
{\varphi}^n\\
&=&-\frac {1-t}{V}\int_M\;\Delta_{\varphi} \dot\varphi\, h_0\;
\omega_{\varphi}^n+\frac
1V\int_M\;\varphi\Delta_{\varphi}\dot\varphi\,\omega_{\varphi}^n\\
&=&-\frac {1-t}{V}\int_M\;(\Delta_{\varphi} \dot\varphi)^2\;\omega_
{\varphi}^n-\frac {d}{dt}(I-J)_{\omega_0}(\varphi). \end{eqnarray*} Integrating
the above formula from $0$ to $1$, we have
$$E_0(\varphi(1))-E_0(\omega_0)=-\frac 1V\int_0^1(1-s)\int_M\;(\Delta_
{\varphi} \dot\varphi)^2\; \omega_{\varphi}^n\wedge
ds-(I-J)_{\omega_0}(\varphi(1))\leq 0.$$ By the equation (\ref{eq2}),
we know $Ric(\omega_{\varphi(1)})> 0$. This proves the lemma.
\end{proof}
Now we can prove Theorem \ref{main4}.
\begin{theo}$E_1$ is bounded from below if and only if the $K$-energy
is bounded from below in the class $[\omega].$ Moreover, we have
$$\inf_{\omega'\in [\omega]} E_{1}(\omega')=2\inf_{\omega'\in [\omega]} E_{0}(\omega')-
\frac 1{nV}\int_M\; |\nabla h_{\omega}|^2\omega^n.$$
\end{theo}
\begin{proof}It is sufficient to show that if $E_1$ is bounded from below,
then $E_0$ is bounded from below. For any K\"ahler metric $\omega_0$,
by Lemma \ref{lema3.3} there exists a K\"ahler metric
$\omega_0'=\omega+\partial\bar{\partial} \varphi_0$ such that
$$Ric(\omega_0')\geq c>0,\qquad E_0(\omega_0)\geq E_0(\omega_0'),$$
where $c$ is a constant depending only on $\omega_0.$ Let $\varphi(t)$
be the solution to the K\"ahler-Ricci flow with the initial metric
$\omega_0',$
$$\frac{\partial \varphi}{\partial t}=\log\frac {\omega_{\varphi}^n}{\omega^n}+\varphi-h_{\omega},
\qquad \varphi(0)=\varphi_0.$$ Then for any $t>s\geq0$, by Theorem
\ref{thm3.2} we have \begin{equation} E_1(t)-E_1(s)\leq
2\delta(E_0(t)-E_0(s)),\label{qeq3}\end{equation} where $E_1(t)=E_1(\omega,
\omega_{\varphi(t)})$ and $\delta=\frac {n-1}{2n}$ if $n\geq 2,$ or
$\delta=c>0 $ if $n=1$. Here $c$ is the constant obtained in the proof of
Theorem \ref{thm3.2}. By Theorem \ref{pali} we have
$$E_1(t)-2E_0(s)-\frac 1{nV}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n
(s)+C_{\omega}\leq \delta(E_1(t) -\frac
1{nV}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(t)+C_{\omega})-2\delta
E_0(s),$$ i.e., \begin{equation} E_1(t)-\frac
1{n(1-\delta)V}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(s)+ \frac
{\delta}{n(1-\delta)V}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(t)+C_{\omega}\leq
2E_0(s),\label{qeq4}\end{equation} where $C_{\omega}=\frac 1{nV}\int_M\; |\nabla
h_{\omega}|^2\omega^n.$ By (\ref{qeq3}) we know $E_0$ is bounded from
below along the K\"ahler-Ricci flow. Thus there exists a sequence of
times $t_m$ such that
$$\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(t_m)\rightarrow 0,\qquad m\rightarrow
\infty.$$ We choose $t=t_m$ and let $m\rightarrow \infty$ in (\ref{qeq4}),
$$\inf E_1-\frac
1{n(1-\delta)V}\int_M\;|\nabla\dot\varphi|^2\omega_{\varphi}^n(s)+C_{\omega}\leq
2E_0(s)\leq 2E_0(\omega_0'),
$$ where the last inequality holds because $E_0$ is decreasing along
the K\"ahler-Ricci flow. Thus, we choose $s=t_m$ again and let $m\rightarrow
\infty$,
$$\inf E_1+C_{\omega}\leq 2E_0(\omega_0')\leq 2E_0(\omega_0).$$
Thus, $E_0$ is bounded from below in $[\omega]$, and
$$\inf E_1\leq 2\inf E_0-C_{\omega}.$$
On the other hand, for any $\omega'\in [\omega]$ we have
$$E_1(\omega')\geq 2E_0(\omega')-C_{\omega}.$$
Combining the last two inequalities, we have $\inf E_1=2\inf
E_0-C_{\omega}$. Thus, the theorem is proved.
\end{proof}
\sigmagmaection{Some technical lemmas}
In this section, we will prove some technical lemmas, which will be
used in the proof of Theorem \ref{main2} and \ref{main}. These
lemmas are based on the K\"ahler-Ricci flow
$$\partiald {\partialhi}{t}=\vskip .1cmg{\mathfrak f}rac {\Omegaegaegamega_{\partialhi}^n}{\Omegaegaegamega^n}+\partialhi-h_{\Omegaegaegamega}.$$
Most of these results are taken from
\cite{[chen1]}-\cite{[chen-tian1]}. The readers are referred to
these papers for the details. Here we will prove some of them for
completeness.
\sigmagmaubsection{Estimates of the Ricci curvature}
The following result shows that we can control the curvature
tensor in a short time.
\baregin{lem}\lambdaambdabel{lem2.1}(cf. \cite{[chen1]})
Suppose that for some $\deltaelta>0$, the curvature of $\Omegaegaegamega_0=\Omegaegaegamega+\partialbp
\partialhi(0)$ satisfies the following conditions
$$\lambdaeft \{\baregin{array}{lll}|Rm|(0)&\lambdaeq& \Lambdaambda,\\
R_{i\barar j}(0)&\mathfrak geq& -1+\deltaelta.
\e_2and{array}\rightarrowght. $$
Then there exists a constant $T(\deltaelta, \Lambdaambda)>0$, such that for the
evolving K\"ahler metric $\Omegaegaegamega_t(0\lambdaeq t\lambdaeq 6T)$, we have the
following
\baregin{equation} \lambdaeft \{\baregin{array}{lll}|Rm|(t)&\lambdaeq& 2\Lambdaambda,\\
R_{i\barar j}(t)&\mathfrak geq& -1+{\mathfrak f}rac {\deltaelta}2.
\e_2and{array}\rightarrowght. \lambdaambdabel{1}\e_2apsilonsilonq
\e_2and{lem}
\baregin{lem}
\lambdaambdabel{lem2.2}(cf. \cite{[chen1]})If $E_1(0)\lambdaeq \sqrt{-1}nf_{\Omegaegaegamega'\sqrt{-1}n
[\Omegaegaegamega] }E_1(\Omegaegaegamega')+\e_2apsilonsilon,$ and
$$Ric(t)+\Omegaegaegamega(t)\mathfrak geq {\mathfrak f}rac {\deltaelta}2>0,\qquad {\mathfrak f}orall t\sqrt{-1}n
[0, T],$$ then along the K\"ahler-Ricci flow we have \baregin{equation}{\mathfrak f}rac
1V\sqrt{-1}nt_0^{T}\sqrt{-1}nt_M\;\;|Ric-\Omegaegaegamega|^2(t)\Omegaegaegamega_ {\partialhi}^n\wedge dt\lambdaeq {\mathfrak f}rac
{\e_2apsilonsilon}2.\lambdaambdabel{2}\e_2apsilonsilonq
\e_2and{lem}
Since we have the estimate of the Ricci curvature, the following
theorem shows that the Sobolev constant is uniformly bounded if
$E_1$ is small.
\baregin{prop}\lambdaambdabel{lem2.3}(cf. \cite{[chen1]}) Along the K\"ahler-Ricci flow, if $E_1(0)\lambdaeq \sqrt{-1}nf_{\tauilde\Omegaegaegamega\sqrt{-1}n [\Omegaegaegamega]
}E_1(\tauilde\Omegaegaegamega)+\e_2apsilonsilon,$ and for any $t\sqrt{-1}n [0, T],$
$$Ric(t)+\Omegaegaegamega(t)\mathfrak geq 0, $$ the diameter of the
evolving metric $\Omegaegaegamega_{\partialhi}$ is uniformly bounded for $t\sqrt{-1}n [0, T].$
As $\e_2apsilonsilon\rightarrow 0, $ we have $D\rightarrow \partiali.$ Let $\sigmagmaigma(\e_2apsilonsilon)$ be the maximum
of the Sobolev and Poincar\'e constants with respect to the metric
$\Omegaegaegamega_{\partialhi}$. As $\e_2apsilonsilon\rightarrow 0,$ we have $\sigmagmaigma(\e_2apsilonsilon)\lambdaeq
\sigmagmaigma<+\sqrt{-1}nfty.$ Here $\sigmagmaigma$ is a constant independent of $\e_2apsilonsilon.$
\e_2and{prop}
Next we state a parabolic version of Moser iteration argument (cf.
\cite{[chen-tian1]}).
\baregin{prop}\lambdaambdabel{lem2.17} Suppose the Sobolev and Poincar\'e
constants of the evolving K\"ahler metrics $g(t)$ are both uniformly
bounded by $\sigmagmaigma$. If a nonnegative function $u$ satisfies the
following inequality
$$\partiald {}tu\lambdaeq {\mathfrak D}el_{\partialhi} u+f(t, x)u, \;\; {\mathfrak f}orall \,t\sqrt{-1}n (a, b),$$
where $|f|_{L^p(M, g(t))}$ is uniformly bounded by some constant $c$
for some $p>{\mathfrak f}rac m2 $, where $m=2n=\deltaim_{{\mathfrak R}R}M$, then for any
$\tauau\sqrt{-1}n (0, b-a)$ and any $t\sqrt{-1}n (a+\tauau, b)$, we have{\mathfrak f}ootnote{The
constant $C$ may differ from line to line. The notation $C(A, B,
...)$ means that the constant $C$ depends only on $A, B, ...$.}
$$u(t)\lambdaeq {\mathfrak f}rac {C(n, \sigmagmaigma, c)}{\tauau^{{\mathfrak f}rac {m+2}{4}}}{\mathfrak B}ig(\sqrt{-1}nt_{t-
\tauau}^t\sqrt{-1}nt_M \;u^2 \,\Omegaegaegamega_{\partialhi}^n\wedge ds{\mathfrak B}ig)^{{\mathfrak f}rac 12}.$$
\e_2and{prop}
By the above Moser iteration, we can show the following lemma.
\begin{lem}\label{lem2.5} For any $\delta, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, \Lambda)>0$ such that if the initial metric $\omega_0$ satisfies the condition
\begin{equation} Ric(0)>-1+\delta,\; |Rm(0)|\leq \Lambda, \;E_1(0)\leq\inf E_1+\epsilon,\label{z1}\end{equation}
then after time $2T$ along the K\"ahler-Ricci flow, we have
\begin{equation}|Ric-\omega|(t)\leq C_1(T, \Lambda)\epsilon, \qquad\forall t\in [2T, 6T]\label{2.5}\end{equation}
and
\begin{equation}|\dot\phi-c(t)|_{C^0}\leq C(\sigma)C_1(T, \Lambda)\epsilon, \qquad\forall t \in [2T, 6T],\label{z2}\end{equation}
where $c(t)$ is the average of $\dot\phi$ with respect to the metric $g(t)$, and $\sigma$ is the uniform upper bound of the Sobolev and Poincar\'e constants in Proposition \ref{lem2.3}.
\end{lem}
\begin{proof} Let $Ric^0=Ric-\omega.$ Then $u=|Ric^0|^2(t)$ satisfies the parabolic inequality
$$\frac{\partial u}{\partial t}\leq \Delta_{\phi} u+c(n)|Rm|_{g(t)}u.$$
Note that by Lemma \ref{lem2.1}, $|Rm|(t)\leq 2\Lambda$ for $0\leq t\leq 6T.$ Then applying Lemma \ref{lem2.1} again and Lemma \ref{lem2.2} for $t\in [2T, 6T]$, we have\footnote{Since the volume $V$ of the K\"ahler manifold $M$ is fixed for the metrics in the same K\"ahler class, the constant $C(T, \Lambda)$ below should depend on $V$, but we don't specify this for simplicity.}
\begin{eqnarray*}
|Ric^0|^2(t)&\leq &C(\Lambda, T)\Big(\int_{0}^{6T}\int_M\;|Ric-\omega|^4(t)\,\omega_{\phi}^n\wedge dt\Big)^{\frac 12}\\
&\leq &C(\Lambda, T)(1+\Lambda)\Big(\int_{0}^{6T}\int_M\;|Ric-\omega|^2(t)\,\omega_{\phi}^n\wedge dt\Big)^{\frac 12}\\
&\leq &C(\Lambda, T)\sqrt{\epsilon}.
\end{eqnarray*}
Thus,
\begin{equation}|Ric-\omega|(t)\leq C(\Lambda, T)\epsilon^{\frac 14}. \label{z3}\end{equation}
Recall that $\Delta_{\phi}\dot\phi=n-R(\omega_{\phi})$; by the above estimate and Proposition \ref{lem2.3} we have
\begin{equation}|\dot\phi-c(t)|_{C^0}\leq C(\sigma)C(T, \Lambda)\epsilon^{\frac 14}, \qquad\forall t\in [2T, 6T].\label{z4}\end{equation}
For simplicity, we can write $\epsilon^{\frac 14}$ in the inequalities (\ref{z3}) and (\ref{z4}) as $\epsilon$, since we can assume $E_1(0)\leq \inf E_1+\epsilon^4$ in the assumption. The lemma is proved.
\end{proof}
\subsection{Estimate of the average of $\frac{\partial\phi}{\partial t}$}
In this section, we want to control $c(t)=\frac 1V\int_M\;\dot \phi\,\omega^n_{\phi}$. Here we follow the argument in \cite{[chen-tian2]}. Notice that the argument essentially needs the lower bound of the $K$-energy, which can be obtained by Theorem \ref{main4} in our case. Observe that for any solution $\phi(t)$ of the K\"ahler-Ricci flow
$$\frac{\partial\phi}{\partial t}=\log\frac {\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega},$$
the function $\tilde\phi(t)=\phi(t)+Ce^t$ also satisfies the above equation for any constant $C$. Since
$$\frac{\partial \tilde \phi}{\partial t}(0)=\frac{\partial\phi}{\partial t}(0)+C,$$
we have $\tilde c(0)=c(0)+C.$ Thus we can normalize the solution $\phi(t)$ such that the average of $\dot\phi(0)$ is any given constant.
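For completeness, the verification that $\tilde\phi(t)=\phi(t)+Ce^t$ is again a solution is immediate; the only point is that adding a function of $t$ alone does not change the metric, i.e. $\omega_{\tilde\phi}=\omega_{\phi}$ since $\sqrt{-1}\partial\bar\partial(Ce^t)=0$. Hence
$$\frac{\partial \tilde\phi}{\partial t}=\frac{\partial\phi}{\partial t}+Ce^t=\log\frac{\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega}+Ce^t=\log\frac{\omega_{\tilde\phi}^n}{\omega^n}+\tilde\phi-h_{\omega}.$$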
The proof of the following lemma will be used in Sections 5 and 6, so we include a proof here.
\begin{lem}\label{lem2.6}(cf. \cite{[chen-tian2]}) Suppose that the $K$-energy is bounded from below along the K\"ahler-Ricci flow. Then we can normalize the solution $\phi(t)$ so that
$$c(0)=\frac 1V\int_0^{\infty}\;e^{-t}\int_M\;|\nabla \dot\phi|^2\,\omega_{\phi}^n\wedge dt<\infty. $$
Then for all time $t>0$, we have
$$0<c(t),\;\;\int_0^{\infty}\;c(t)dt<E_0(0)-E_0(\infty),$$
where $E_0(\infty)=\lim_{t\rightarrow \infty}E_0(t)$.
\end{lem}
\begin{proof} A simple calculation yields
$$c'(t)=c(t)-\frac 1V\int_M\;|\nabla \dot\phi|^2\,\omega_{\phi}^n.$$
Define
$$\epsilon(t)=\frac 1V\int_M\;|\nabla \dot\phi|^2\,\omega_{\phi}^n.$$
Since the $K$-energy has a lower bound along the K\"ahler-Ricci flow, we have
$$\int_0^{\infty}\;\epsilon(t)dt=\frac 1V\int_0^{\infty}\int_M\;|\nabla \dot\phi|^2\,\omega_{\phi}^n\wedge dt= E_0(0)-E_0(\infty).$$
Now we normalize our initial value of $c(t)$ as
\begin{eqnarray*}
c(0)&=&\int_0^{\infty}\;\epsilon(t)e^{-t}dt\\
&=&\frac 1V\int_0^{\infty}\;e^{-t}\int_M\;|\nabla\dot\phi|^2\,\omega_{\phi}^n\wedge dt\\
&\leq &\frac 1V\int_0^{\infty}\int_M\;|\nabla \dot\phi|^2\,\omega_{\phi}^n\wedge dt\\
&= &E_0(0)-E_0(\infty).
\end{eqnarray*}
From the equation for $c(t)$, we have
$$(e^{-t}c(t))'=-\epsilon(t)e^{-t}.$$
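For the reader's convenience, we spell out the elementary integration step implicit here. Indeed, $(e^{-t}c(t))'=e^{-t}(c'(t)-c(t))=-\epsilon(t)e^{-t}$; integrating from $0$ to $t$ and using the normalization of $c(0)$ chosen above, we obtain
$$e^{-t}c(t)=c(0)-\int_0^t\;\epsilon(s)e^{-s}ds=\int_t^{\infty}\;\epsilon(s)e^{-s}ds.$$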
Thus, we have
$$0<c(t)=\int^{\infty}_t \;\epsilon(\tau)e^{-(\tau-t)}d\tau \leq E_0(0)-E_0(\infty)$$
and
$$\lim_{t\rightarrow \infty}c(t)=\lim_{t\rightarrow \infty}\int^{\infty}_t \;\epsilon(\tau) e^{-(\tau-t)}d\tau=0.$$
Since the $K$-energy is bounded from below, we have
$$\int_0^{\infty}\;c(t)dt=\frac 1V\int_0^{\infty}\int_M\;|\nabla \dot\phi|^2\,\omega_{\phi}^n\wedge dt-c(0)\leq E_0(0)-E_0(\infty).$$
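For completeness, the first equality in the last display can be seen, for instance, by Fubini's theorem applied to the formula $c(t)=\int_t^{\infty}\epsilon(\tau)e^{-(\tau-t)}d\tau$ obtained above:
$$\int_0^{\infty}\;c(t)dt=\int_0^{\infty}\;\epsilon(\tau)\Big(\int_0^{\tau}e^{-(\tau-t)}dt\Big)d\tau=\int_0^{\infty}\;\epsilon(\tau)(1-e^{-\tau})\,d\tau=\int_0^{\infty}\;\epsilon(\tau)\,d\tau-c(0).$$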
\end{proof}
\begin{lem}\label{lem5.9} Suppose that $E_1$ is bounded from below on $\mathcal{P}(M, \omega)$. For any solution $\phi(t)$ of the K\"ahler-Ricci flow with the initial metric $\omega_0$ satisfying
$$E_1(0)\leq \inf E_1+\epsilon, $$
after normalization of the K\"ahler potential $\phi(t)$ of the solution, we have
$$0<c(t),\;\;\int_0^{\infty}c(t)\,dt\leq \frac {\epsilon}{2}.$$
\end{lem}
\begin{proof} By Theorem \ref{main4}, the $K$-energy is bounded from below, so one can find a sequence of times $t_m\rightarrow \infty$ such that
$$\int_M\; |\nabla \dot\phi|^2\,\omega^n_{\phi}\Big|_{t=t_m}\rightarrow 0.$$
By Theorem \ref{pali}, we have
$$E_1(t)=2E_0(t)+\frac 1V\int_M\; |\nabla\dot\phi|^2\,\omega_{\phi}^n-C_{\omega}.$$
Then
\begin{eqnarray*}
2(E_0(0)-E_0(t_m))&=&E_1(0)-E_{1}(t_m)-\frac 1V\int_M\;|\nabla\dot\phi|^2\,\omega_{\phi}^n\Big|_{t=0}+ \frac 1V\int_M\;|\nabla\dot\phi|^2\,\omega_{\phi}^n\Big|_{t=t_m}\\
&\leq&\epsilon+\frac 1V\int_M\;|\nabla\dot\phi|^2\,\omega_{\phi}^n\Big|_{t=t_m}\\
&\rightarrow&\epsilon.
\end{eqnarray*}
Since the $K$-energy is decreasing along the K\"ahler-Ricci flow, we have
$$E_0(0)-E_0(\infty)\leq \frac {\epsilon}{2}.$$
By the proof of Lemma \ref{lem2.6}, for any solution of the K\"ahler-Ricci flow we can normalize $\phi(t)$ such that
$$0<c(t),\;\;\int_0^{\infty}c(t)\,dt\leq E_0(0)-E_0(\infty) \leq \frac {\epsilon}{2}. $$
The lemma is proved.
\end{proof}
\subsection{Estimate of the first eigenvalue of the Laplacian operator}
\subsubsection{Case 1: $M$ has no nonzero holomorphic vector fields}\label{section4.4.1}
In this subsection, we will estimate the first eigenvalue of the Laplacian when $M$ has no nonzero holomorphic vector fields. In order to show that the norms of $\phi$ decay exponentially in Section \ref{section4.5}, we need to prove that the first eigenvalue is strictly greater than $1$.
\begin{theo}\label{lem2.8} Assume that $M$ has no nonzero holomorphic vector fields. For any $A, B>0$, there exists $\eta(A, B, \omega)>0$ such that for any metric $\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi,$ if
$$-\eta \omega_{\phi}\leq Ric(\omega_{\phi})-\omega_{\phi}\leq A\omega_{\phi}\;\;\text{and}\;\; |\phi|\leq B,$$
then the first eigenvalue of the Laplacian $\Delta_{\phi}$ satisfies
$$\lambda_1>1+\gamma(\eta, B, A, \omega),$$
where $\gamma>0$ depends only on $\eta, B, A$ and the background metric $\omega$.
\end{theo}
The following lemma is taken from \cite{[chen-tian2]}.
\begin{lem}\label{lem2.9}(cf. \cite{[chen-tian2]}) If the K\"ahler metric $\omega_{\phi}$ satisfies
$$Ric(\omega_{\phi})\geq \alpha\omega_{\phi}\;\;\text{and}\;\; |\phi|\leq B$$
for two constants $\alpha$ and $B,$ then there exists a uniform constant $C$ depending only on $\alpha, B$ and $\omega$ such that
$$\inf_M \log \frac {\omega_{\phi}^n}{\omega^n}(x)\geq -4C(\alpha, B, \Lambda) e^{2(1+\int_M\; \log \frac {\omega_{\phi}^n}{\omega^n}\,\omega^n_{\phi})}.$$
\end{lem}
The following crucial lemma is taken from Chen-He \cite{[chenhe]}. Here we include a proof.
\begin{lem}\label{lem2.10} For any constants $A, B>0$, if $|Ric(\omega_{\phi})|\leq A$ and $|\phi|\leq B,$ then there is a constant $C$ depending only on $A, B$ and the background metric $\omega$ such that $|\phi|_{C^{3, \beta}(M, \omega)}\leq C(A, B, \omega, \beta)$ for any $\beta\in (0, 1)$. In particular, one can find two constants $C_2(A, B, \omega)$ and $C_3(A, B, \omega)$ such that
$$C_2(A, B, \omega)\,\omega\leq \omega_{\phi}\leq C_3(A, B, \omega)\,\omega.$$
\end{lem}
\begin{proof} We use Yau's estimate on the complex Monge-Amp\`ere equation to obtain the $C^{3, \beta}$ norm of $\phi.$ Let $F=\log\frac {\omega^n_{\phi}}{\omega^n}.$ Then we have
\begin{eqnarray*}
\Delta_{\omega} F=g^{i\bar j}\partial_i\partial_{\bar j}\log \frac {\omega^n_{\phi}}{\omega^n} =-g^{i\bar j}R_{i\bar j}(\phi)+R(\omega),
\end{eqnarray*}
where $\Delta_{\omega}$ denotes the Laplacian of $\omega$. On the other hand, we choose normal coordinates at a point such that $g_{i\bar j}=\delta_{ij}$ and $g_{i\bar j}(\phi)=\lambda_i\delta_{ij}$; then
\begin{eqnarray*}
g^{i\bar j}R_{i\bar j}({\phi})=\sum_i R_{i\bar i}(\phi) \leq A\sum_i g_{i\bar i}(\phi) =A(n+\Delta_{\omega} \phi)
\end{eqnarray*}
and
$$g^{i\bar j}R_{i\bar j}(\phi)\geq -A(n+\Delta_{\omega}\phi).$$
Hence, we have
\begin{eqnarray}
\Delta_{\omega} (F-A\phi)&\leq &R(\omega)+An\label{f1}\\
\Delta_{\omega} (F+A\phi)&\geq &R(\omega)-An. \label{f2}
\end{eqnarray}
Applying the Green formula, we can bound $F$ from above. In fact,
\begin{eqnarray*}
F+A\phi&\leq &\frac 1V\int_M\; -G(x, y)\Delta_{\omega}(F+A\phi)(y)\,\omega^n(y)+\frac 1V\int_M\;(F+A\phi)\,\omega^n\\
&\leq &\frac 1V\int_M\; -G(x, y)(R(\omega)-An)\,\omega^n(y)+\frac 1V\int_M\;(F+A\phi)\,\omega^n\\
&\leq &C(\Lambda, A, B),
\end{eqnarray*}
where $\Lambda$ is an upper bound of $|Rm|_{\omega}$. Notice that in the last inequality we used
$$\frac 1V\int_M\; F\,\omega^n\leq \log \Big(\frac 1V\int_M\; e^F \,\omega^n\Big)=0.$$
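This last inequality is simply Jensen's inequality for the concave function $\log$, together with the normalization coming from the fact that $\omega_{\phi}$ and $\omega$ lie in the same K\"ahler class:
$$\frac 1V\int_M\; e^F \,\omega^n=\frac 1V\int_M\; \omega_{\phi}^n=1,\qquad\text{so}\qquad \log\Big(\frac 1V\int_M\; e^F\,\omega^n\Big)=0.$$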
Hence, $F\leq C(\Lambda, A, B).$ Consider the complex Monge-Amp\`ere equation
\begin{equation}(\omega+\sqrt{-1}\partial\bar\partial\phi)^n=e^F\omega^n;\label{f3}\end{equation}
by Yau's estimate we have
\begin{eqnarray*}
\Delta_{\phi}(e^{-k\phi}(n+\Delta_{\omega}\phi))&\geq & e^{-k\phi}(\Delta_{\omega} F-n^2\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\
&-&ke^{-k\phi}n(n+\Delta_{\omega}\phi)+(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+\frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}\\
&\geq &e^{-k\phi}(R(\omega)-An-\Delta_{\omega}\phi-n^2\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\
&-&ke^{-k\phi}n(n+\Delta_{\omega}\phi)+(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}.
\end{eqnarray*}
The function $e^{-k\phi}(n+\Delta_{\omega}\phi)$ must achieve its maximum at some point $p.$ At this point,
$$0\geq -An -\Delta_{\omega}\phi(p)-kn(n+\Delta_{\omega}\phi)+(k-\Lambda)e^{\frac {-F(p)}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}(p).$$
Notice that we can bound $\sup F$ by $C(\Lambda, A, B).$ Thus, the above inequality implies
$$n+\Delta_{\omega}\phi \leq C_4(\Lambda, A, B).$$
Since we have an upper bound on $F$, the lower bound of $F$ can be obtained by Lemma \ref{lem2.9}:
\begin{eqnarray*}
\inf F\geq -4C(\Lambda, A, B) \exp\Big(2+2\int_M\; F \,\omega_{\phi}^n\Big) =C(\Lambda, A, B).
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\inf F\leq\log \frac {\omega^n_{\phi}}{\omega^n}=\log\prod_i \;(1+\phi_{i\bar i})\leq \log \big((n+\Delta_{\omega}\phi)^{n-1}(1+\phi_{i\bar i})\big).
\end{eqnarray*}
Hence, $1+\phi_{i\bar i}\geq C_5(\Lambda, A, B)>0.$ Thus,
$$C_5(\Lambda, A, B)\leq n+\Delta_{\omega} \phi \leq C_4(\Lambda, A, B).$$
By (\ref{f1}) and (\ref{f2}), we have
$$|\Delta_{\omega} F|\leq C(A, B, \Lambda).$$
By the elliptic estimate, $F\in W^{2, p}(M, \omega)$ for any $p>1.$ Since $F$ satisfies the equation (\ref{f3}), we have the H\"older estimate $\phi\in C^{2, \alpha}(M, \omega)$ for some $\alpha\in (0, 1)$ (cf. \cite{[Siu]}, \cite{[Tru]}). Let $\psi$ be a local potential of $\omega$ such that $\omega=\sqrt{-1}\partial\bar\partial \psi$. Differentiating the equation (\ref{f3}), we have
$$\Delta_{\phi}\frac{\partial}{\partial z^i}(\phi+\psi)-\frac{\partial}{\partial z^i}\log \omega^n=\frac{\partial F}{\partial z^i}\in W^{1, p}(M, \omega).$$
Note that the coefficients of $\Delta_{\phi}$ are in $C^{\alpha}(M, \omega)$, so by the elliptic estimate $\phi\in W^{4, p}(M, \omega)$. Then by the Sobolev embedding theorem, for any $\beta\in (0, 1),$
$$|\phi|_{C^{3, \beta}(M, \omega)}\leq C(A, B, \omega, \beta).$$
The lemma is proved.
\end{proof}
For convenience, we introduce the following definition.
\begin{defi} For any K\"ahler metric $\omega,$ we define
$$W(\omega)=\inf_f \Big\{\int_M \;|f_{\alpha\beta}|^2\,\omega^n\;\;\Big|\;f\in W^{2,2}(M, \omega), \int_M\;f^2\,\omega^n=1, \int_M\;f\,\omega^n=0\Big\}.$$
\end{defi}
Assume that $M$ has no nonzero holomorphic vector fields; then the following lemma gives a positive lower bound for $W(\omega).$
\begin{lem}\label{lem2.12} Assume that $M$ has no nonzero holomorphic vector fields. For any constants $A, B>0$, there exists a positive constant $C_6$ depending on $A, B$ and the background metric $\omega$, such that for any K\"ahler metric $\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial\phi,$ if
$$|Ric(\omega_{\phi})|\leq A \;\;\text{and}\;\; |\phi|\leq B,$$
then
$$W(\omega_{\phi})\geq C_6>0.$$
\end{lem}
\begin{proof} Suppose not; then we can find a sequence of metrics $\omega_m=\omega+\sqrt{-1}\partial\bar\partial\phi_m$ and functions $f_m$ satisfying
$$|Ric(\omega_m)|\leq A, \qquad |\phi_m|\leq B,$$
and
$$\int_M\;f_m^2\,\omega_{m}^n=1, \quad \int_M\;f_m\,\omega_{m}^n=0, \quad \int_M \;|f_{m, \alpha\beta}|_{g_m}^2\,\omega_{m}^n\rightarrow 0.$$
Note that the Sobolev constants with respect to the metrics $\omega_m$ are uniformly bounded. By Lemma \ref{lem2.10}, we can assume that $\omega_m$ converges to a K\"ahler metric $\omega_{\infty}$ in the $C^{1, \beta}(M, \omega)$ norm for some $\beta\in (0, 1).$ Now define a sequence of vector fields
\begin{equation} X_m^i=g_m^{i\bar k}\frac{\partial f_{m}}{\partial \bar z^{k}},\qquad X_m=X_m^i\frac{\partial}{\partial z^i}.\label{f4}\end{equation}
By direct calculation, we have
$$|X_m|^2_{g_m}=|\nabla f_m|_{g_m}^2,$$
and
\begin{eqnarray*}
\Big|\frac{\partial X_m}{\partial \bar z}\Big|_{g_m}^2=\sum_{i, j}\Big|\frac{\partial X_m^i}{\partial \bar z^{j}}\Big|_{g_m}^2=|f_{m, \alpha\beta}|_{g_m}^2.
\end{eqnarray*}
Then
\begin{equation}\int_M \;\Big|\frac{\partial X_m}{\partial \bar z}\Big|_{g_m}^2\,\omega_{g_m}^n\rightarrow 0.\label{la1}\end{equation}
Next we claim that there exist two positive constants $C_7$ and $C_8$ which depend only on $A$ and the Poincar\'e constant $\sigma$ such that
\begin{equation}0<C_7(\sigma)\leq \int_M \; |X_m|_{g_m}^2 \,\omega_{g_m}^n\leq C_8(A).\label{la2}\end{equation}
In fact, since the Poincar\'e constant is uniformly bounded in our case,
$$\int_M \; |X_m|_{g_m}^2 \,\omega_{g_m}^n=\int_M \; |\nabla f_m|_{g_m}^2 \,\omega_{g_m}^n\geq C(\sigma)\int_M\;f_m^2\,\omega_{g_m}^n=C(\sigma).$$
On the other hand, since the Ricci curvature has an upper bound, we have
\begin{eqnarray*}
\int_M \; |\Delta_m f_m|_{g_m}^2 \,\omega_{g_m}^n&=&\int_M \;|f_{m, \alpha \beta}|_{g_m}^2\,\omega_{g_m}^n+\int_M \;R_{i\bar j}f_{m, \bar i}f_{m, j}\,\omega_{g_m}^n\\
&\leq &\int_M \;|f_{m, \alpha\beta}|_{g_m}^2\,\omega_{g_m}^n+A\int_M \;|\nabla f_m|_{g_m}^2\,\omega_{g_m}^n\\
&\leq &\int_M \;|f_{m, \alpha\beta}|_{g_m}^2\,\omega_{g_m}^n+\frac 12\int_M \; |\Delta_m f_m|_{g_m}^2 \,\omega_{g_m}^n+\frac {A^2}2 \int_M \;f_m^2\,\omega_{g_m}^n.
\end{eqnarray*}
Then
$$\int_M \; |\Delta_m f_m|_{g_m}^2 \,\omega_{g_m}^n\leq 1+A^2.$$
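For clarity, this bound is obtained by rearranging the previous chain of inequalities. Using the normalization $\int_M f_m^2\,\omega_{g_m}^n=1$, the chain gives
$$\frac 12\int_M\;|\Delta_m f_m|_{g_m}^2\,\omega_{g_m}^n\leq \int_M\;|f_{m,\alpha\beta}|_{g_m}^2\,\omega_{g_m}^n+\frac{A^2}{2},$$
and since $\int_M|f_{m,\alpha\beta}|_{g_m}^2\,\omega_{g_m}^n\rightarrow 0$, the right hand side is at most $\frac 12+\frac{A^2}{2}$ at least for all sufficiently large $m$, which is all that is needed in the sequel.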
Therefore,
\begin{eqnarray*}
\int_M \; |X_m|_{g_m}^2 \,\omega_{g_m}^n&=&\int_M \;|\nabla f_m|_{g_m}^2 \,\omega_{g_m}^n\\
&\leq&\frac 12\int_M \; |\Delta_m f_m|_{g_m}^2\,\omega_{g_m}^n + \frac 12\int_M\;f_m^2\,\omega_{g_m}^n\\
&\leq &C(A).
\end{eqnarray*}
This proves the claim.
Now we have
$$\int_M\;f_m^2\,\omega_{m}^n=1, \qquad \int_M\;|\nabla\bar \nabla f_m|_{g_m}^2\,\omega_{m}^n\leq C(A),\qquad \int_M\; |f_{m, \alpha\beta}|_{g_m}^2 \,\omega_m^n\rightarrow 0,$$
so $f_m\in W^{2,2}(M, \omega_m).$ Note that the metrics $\omega_m$ are $C^{1, \beta}$ equivalent to $\omega_{\infty}$, so $f_m\in W^{2, 2}(M, \omega_{\infty}),$ and thus we can assume $f_m$ converges strongly to $f_{\infty}$ in $W^{1, 2}(M, \omega_{\infty})$. By (\ref{f4}), $X_{m}$ converges strongly to $ X_{\infty}$ in $L^2(M, \omega_{\infty}).$ Thus, by (\ref{la2}),
\begin{equation} 0<C_7\leq \int_M \; |X_{\infty}|^2 \,\omega_{\infty}^n\leq C_8.\label{f5}\end{equation}
Next we show that $X_{\infty}$ is holomorphic. In fact, for any vector valued smooth function $\xi=(\xi^1, \xi^2, \cdots, \xi^n),$
\begin{eqnarray*}
\Big|\int_M\; \xi\cdot\bar\partial X_m\,\omega_{\infty}^n \Big|^2&=&\Big|\int_M\; \xi^k \frac{\partial X_m}{\partial \bar z^{k}}\;\omega_{\infty}^n\Big|^2\\
&\leq &\int_M\; |\xi|^2\,\omega_{\infty}^n\int_M\; \Big|\frac{\partial X_m}{\partial \bar z}\Big|^2\,\omega_{\infty}^n\\
&\leq &C \int_M\;|\xi|^2\,\omega_{\infty}^n\int_M\; \Big|\frac{\partial X_m}{\partial \bar z}\Big|_{g_m}^2\,\omega_{g_m}^n \rightarrow 0.
\end{eqnarray*}
On the other hand,
$$\int_M\; \xi\cdot\bar\partial X_m\,\omega_{\infty}^n =-\int_M\; \bar \partial \xi\cdot X_m \;\omega_{\infty}^n\rightarrow -\int_M\; \bar \partial \xi\cdot X_{\infty} \;\omega_{\infty}^n.$$
Then $X_{\infty}$ is a weak holomorphic vector field, and thus it must be holomorphic. By (\ref{f5}), $X_{\infty}$ is a nonzero holomorphic vector field, which contradicts the assumption that $M$ has no nonzero holomorphic vector fields. The lemma is proved.
\end{proof}
\begin{lem}\label{lem2.13} Suppose the K\"ahler metric $\omega_g$ satisfies $Ric(\omega_g)\geq (1-\eta)\omega_g$, where $0<\eta< \frac {\sqrt{C_6}}{2}$ and $C_6$ is the constant obtained in Lemma \ref{lem2.12}. Then the first eigenvalue of $\Delta_g$ satisfies $\lambda_1\geq 1+\gamma$, where $\gamma=\frac {\sqrt{C_6}}{2}.$
\end{lem}
\begin{proof} Let $u$ be any eigenfunction of $\omega_g$ with eigenvalue $\lambda_1,$ so $\Delta_g u=-\lambda_1u.$ Then by direct calculation, we have
\begin{eqnarray*}
\int_M\; u_{ij}u_{\bar i\bar j}\;\omega_g^n&=&-\int_M\;u_{ij\bar j} u_{\bar i}\;\omega_g^n\\
&=& -\int_M\;(u_{j\bar ji}+R_{i\bar k}u_{k})u_{\bar i}\;\omega_g^n\\
&=&\int_M ((\Delta_g u)^2-R_{i\bar j}u_{j}u_{\bar i})\;\omega_g^n.
\end{eqnarray*}
Note that $\int_M u\,\omega_g^n=-\frac 1{\lambda_1}\int_M \Delta_g u\,\omega_g^n=0$, so the definition of $W(\omega_g)$ together with the bound $W(\omega_g)\geq C_6$ from Lemma \ref{lem2.12} implies
\begin{eqnarray*}
C_6\int_M\; u^2 \,\omega_g^n&\leq & \int_M\;((\Delta_g u)^2-R_{i\bar j}u_{j}u_{\bar i})\;\omega_g^n\\
&\leq &\lambda_1^2 \int_M\; u^2 \,\omega_g^n-(1-\eta)\int_M\;|\nabla u|^2 \,\omega_g^n\\
&=&(\lambda_1^2-(1-\eta)\lambda_1)\int_M\; u^2 \,\omega_g^n.
\end{eqnarray*}
Thus, we have $\lambda_1^2-(1-\eta)\lambda_1-C_6\geq 0.$ Then,
$$\lambda_1\geq 1+\frac {\sqrt{C_6}}{2}.$$
\end{proof}
\begin{flushleft}
\begin{proof}[Proof of Theorem \ref{lem2.8}]
The theorem follows directly from Lemmas \ref{lem2.12} and \ref{lem2.13} above.
\end{proof}
\end{flushleft}
\subsubsection{Case 2: $M$ has nonzero holomorphic vector fields}\label{section4.4.2}
In this subsection, we will consider the case when $M$ has nonzero holomorphic vector fields. Denote by $Aut(M)^{\circ}$ the connected component containing the identity of the holomorphic transformation group of $M$. Let $K$ be a maximal compact subgroup of $Aut(M)^{\circ}$. Then there is a semidirect decomposition of $Aut(M)^{\circ}$ (cf. \cite{[FM]}),
$$Aut(M)^{\circ}=Aut_r(M)\propto R_u,$$
where $Aut_r(M)\subset Aut(M)^{\circ}$ is a reductive algebraic subgroup, the complexification of $K$, and $R_u$ is the unipotent radical of $Aut(M)^{\circ}$. Let $\eta_r(M, J)$ be the Lie algebra of $Aut_r(M, J).$
Now we introduce the following definition, which is a mild modification of the one in \cite{[chen1]} and \cite{[PhSt]}.
\begin{defi}\label{prestable} The complex structure $J$ of $M$ is called pre-stable if no complex structure in the orbit of the diffeomorphism group contains a larger (reduced) holomorphic automorphism group (i.e., $Aut_r(M)$).
\end{defi}
Now we recall the following $C^{k, \alpha}$ convergence theorem for a sequence of K\"ahler metrics, which is well known in the literature (cf. \cite{[PhSt]}, \cite{[Tian4]}).
\begin{theo}\label{conv1} Let $M$ be a compact K\"ahler manifold. Let $(g(t), J(t))$ be any sequence of metrics $g(t)$ and complex structures $J(t)$ such that $g(t)$ is K\"ahler with respect to $J(t)$. Suppose the following is true:
\begin{enumerate}
\item For some integer $k\geq 1$, $|\nabla^lRm|_{g(t)}$ is uniformly bounded for any integer $l$ $(0\leq l< k)$;
\item The injectivity radii $i(M, g(t))$ are all bounded from below;
\item There exist two uniform constants $c_1$ and $c_2$ such that $0<c_1\leq {\rm Vol}(M, g(t))\leq c_2$.
\end{enumerate}
Then there exist a subsequence of $t_j$ and a sequence of diffeomorphisms $F_j: M\rightarrow M$ such that the pull-back metrics $\tilde g(t_j)=F_j^*g(t_j)$ converge in $C^{k, \alpha}(\forall \,\alpha\in (0, 1))$ to a $C^{k, \alpha}$ metric $g_{\infty}$. The pull-back complex structure tensors $\tilde J(t_j)=F_j^*J(t_j)$ converge in $C^{k, \alpha}$ to an integrable complex structure tensor $\tilde J_{\infty}$. Furthermore, the metric $ g_{\infty}$ is K\"ahler with respect to the complex structure $\tilde J_{\infty}$.
\end{theo}
\begin{theo}\label{theo4.18} Suppose $M$ is pre-stable. For any $\Lambda_0, \Lambda_1>0$, there exists $\eta>0$ depending only on $\Lambda_0$ and $\Lambda_1$ such that for any metric $\omega\in 2\pi c_1(M),$ if
\begin{equation}|Ric(\omega)-\omega|\leq \eta,\;\; |Rm(\omega)|\leq \Lambda_0,\;\;|\nabla Rm(\omega)|\leq \Lambda_1, \label{r1}\end{equation}
then for any smooth function $f$ satisfying
$$ \int_M\; f\,\omega^n=0 \;\;\text{and}\;\; Re\left(\int_M\; X(f)\,\omega^n\right)=0, \qquad \forall X\in \eta(M, J),$$
we have
$$\int_M\; |\nabla f|^2\,\omega^n>(1+\gamma(\eta, \Lambda_0, \Lambda_1))\int_M\; |f|^2\,\omega^n,$$
where $\gamma>0$ depends only on $\eta, \Lambda_0$ and $\Lambda_1.$
\end{theo}
\begin{proof} Suppose not. Then there exist positive numbers $\eta_m\rightarrow 0$, a sequence of K\"ahler metrics $\omega_m\in 2\pi c_1(M)$ such that
\begin{equation} |Ric(\omega_m)-\omega_{m}|\leq \eta_m,\;\; |Rm(\omega_m)|\leq \Lambda_0,\;\;\;|\nabla_m Rm(\omega_m)|\leq \Lambda_1,\label{r2}\end{equation}
where the curvature and the covariant derivative are taken with respect to the metric $\omega_m$, and smooth functions $f_m$ satisfying
$$\int_M\; f_m\,\omega_m^n=0, \qquad Re\left(\int_M\; X(f_m)\,\omega_m^n\right)=0, \qquad \forall X\in \eta(M, J),$$
\begin{equation}\int_M\; |\nabla_m f_m|^2\,\omega_m^n<(1+\gamma_m)\int_M\; |f_m|^2\,\omega_m^n,\label{eq5.25}\end{equation}
where $0<\gamma_m\rightarrow 0.$ Without loss of generality, we may assume that
\[
\int_M\; f_m^2 \,\omega_m^n = 1, \qquad \forall m \in \mathbb{N},
\]
which means
\[
\int_M\; |\nabla_m f_m|^2\,\omega_m^n\leq 1 + \gamma_m < 2.
\]
Then $f_m $ will converge in $W^{1,2}$ if $(M, \omega_m)$ converges. However, according to our stated condition, $(M, \omega_m, J)$ will converge in $C^{2, \alpha}(\alpha\in (0, 1))$ to $(M, \omega_\infty, J_\infty).$ In fact, by (\ref{r2}) the diameters of $\omega_m$ are uniformly bounded. Note that all the metrics $\omega_m$ are in the same K\"ahler class, so the volume is fixed. Then by (\ref{r2}) again, the injectivity radii are uniformly bounded from below. Therefore, all the conditions of Theorem \ref{conv1} are satisfied.
Note that the complex structure $J_\infty$ lies in the closure of the orbit of diffeomorphisms, while $\omega_\infty$ is a K\"ahler-Einstein metric in $(M, J_\infty)$. By the standard deformation theorem for complex structures, we have
\[
\dim Aut_r (M, J) \leq \dim Aut_r(M, J_\infty).
\]
By abusing notation, we can write
\[
Aut_r(M, J) \subset Aut_r(M, J_\infty).
\]
By our assumption that $(M, J)$ is pre-stable, we have the inequality in the other direction. Thus, we have
\[
\dim Aut_r(M, J) = \dim Aut_r(M, J_\infty),\;\;\;{\rm or}\;\;\; Aut_r(M, J) = Aut_r(M, J_\infty).
\]
Now, let $f_\infty $ be the $W^{1,2}$ limit of $f_m$; then we have
\[
1 \leq |f_\infty|_{W^{1,2}(M,\, \omega_\infty)} \leq 3
\]
and
\[
\int_M f_\infty \,\omega_\infty^n = 0, \qquad Re\left(\int_M\; X(f_\infty) \,\omega_\infty^n\right) = 0,\qquad \forall X\in \eta(M, J).
\]
Thus, $f_\infty$ is a non-trivial function. Since $\omega_\infty$ is a K\"ahler-Einstein metric, we have
\[
\int_M\; \theta_X f_\infty \,\omega_\infty^n = 0,
\]
where
\[
\mathcal{L}_X \omega_\infty =\sqrt{-1}\partial\bar\partial\theta_X.
\]
This implies that $f_\infty $ is perpendicular to the first eigenspace\footnote{Note that $\triangle \theta_X = -\theta_X $ is totally real for $X \in Aut_r(M, J_\infty).$ Moreover, the first eigenspace consists of all such $\theta_X.$} of $\triangle_{\omega_\infty}.$ In other words, there is a $\delta > 0$ such that
\[
\int_M |\nabla f_\infty|^2 \,\omega_\infty^n > (1+ \delta) \int_M f_\infty^2 \,\omega_\infty^n > 1+ \delta.
\]
However, this contradicts the following fact:
\begin{eqnarray*}
\int_M \;|\nabla f_\infty|^2 \,\omega_\infty^n & \leq & \displaystyle \lim_{m\rightarrow \infty} \int_M |\nabla_m f_m|^2 \,\omega_m^n \\
& \leq & \displaystyle \lim_{m\rightarrow \infty} (1+ \gamma_m) \int_M f_\infty^2 \,\omega_\infty^n = 1.
\end{eqnarray*}
The theorem is then proved.
\end{proof}
\subsection{Exponential decay in a short time}\label{section4.5}
In this subsection, we will show that the $W^{1,2}$ norm of $\dot\phi$ decays exponentially in a short time. Here we follow the argument in \cite{[chen-tian2]} and use the estimate of the first eigenvalue obtained in the previous subsection.
\begin{lem}\label{lem2.14} Suppose for any time $t\in [T_1, T_2]$, we have
$$|Ric-\omega|(t)\leq C_1\epsilon \;\;\text{and}\;\; \lambda_1(t)\geq 1+\gamma>1.$$
Let
$$\mu_0(t)=\frac 1V\int_M\;(\dot\phi-c(t))^2\,\omega_{\phi}^n.$$
If $\epsilon$ is small enough, then there exists a constant $\alpha_0>0$ depending only on $\gamma, \sigma$ and $C_1\epsilon$ such that
$$\mu_0(t)\leq e^{-\alpha_0 (t-T_1)}\mu_0(T_1), \qquad\forall t\in [T_1, T_2].$$
\end{lem}
\begin{proof} By direct calculation, we have
\begin{eqnarray*}
\frac {d}{dt}\mu_0(t)&=&\frac 2V\int_M\;(\dot\phi-c(t))(\ddot \phi-c'(t))\,\omega_{\phi}^n+\frac 1V\int_M\;(\dot \phi-c(t))^2\Delta_{\phi}\dot\phi\,\omega_{\phi}^n\\
&=&-\frac 2V\int_M\;(1+\dot \phi-c(t))|\nabla (\dot\phi-c(t))|^2\,\omega_{\phi}^n+\frac 2V\int_M\;(\dot\phi-c(t))^2\,\omega_{\phi}^n.
\end{eqnarray*}
By the assumption, we have for $t\in [T_1, T_2]$
\begin{eqnarray*}
\frac {d}{dt}\mu_0(t)&=&-\frac 2V\int_M\;(1+\dot\phi-c(t))|\nabla \dot\phi|^2\,\omega_{\phi}^n+\frac 2V\int_M\;(\dot\phi-c(t))^2\,\omega_{\phi}^n\\
&\leq&-\frac 2V\int_M\;(1-C(\sigma)C_1\epsilon)|\nabla \dot\phi|^2\,\omega_{\phi}^n +\frac 2V\int_M\;(\dot \phi-c(t))^2\,\omega_{\phi}^n\\
&\leq&-\frac 2V\int_M\;(1-C(\sigma)C_1\epsilon)(1+\gamma)(\dot \phi-c(t))^2\,\omega_{\phi}^n+\frac 2V\int_M\;(\dot \phi-c(t))^2\,\omega_{\phi}^n\\
&=&-\alpha_0\mu_0(t).
\end{eqnarray*}
Here
$$\alpha_0=2(1-C(\sigma)C_1\epsilon)(1+\gamma)-2>0,$$
if we choose $\epsilon$ small enough. Thus, we have
$$\mu_0(t)\leq e^{-\alpha_0 (t-T_1)}\mu_0(T_1).$$
\end{proof}
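For definiteness, we note that a sufficient (by no means sharp) smallness condition on $\epsilon$ in the proof above is
$$C(\sigma)C_1\epsilon<\frac{\gamma}{1+\gamma},\qquad\text{i.e.}\qquad \epsilon<\frac{\gamma}{C(\sigma)C_1(1+\gamma)},$$
which already guarantees that $\alpha_0=2(1-C(\sigma)C_1\epsilon)(1+\gamma)-2>0$.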
\begin{lem}\label{lem2.15} Suppose for any time $t\in [T_1, T_2]$, we have
$$|Ric-\omega|(t)\leq C_1\epsilon \;\;\text{and}\;\; \lambda_1(t)\geq 1+\gamma>1.$$
Let
$$\mu_1(t)=\frac 1V\int_M\;|\nabla \dot \phi|^2\,\omega_{\phi}^n.$$
If $\epsilon$ is small enough, then there exists a constant $\alpha_1>0$ depending only on $\gamma$ and $C_1\epsilon$ such that
$$\mu_1(t)\leq e^{-\alpha_1(t-T_1)}\mu_1(T_1), \qquad\forall t\in [T_1, T_2].$$
\end{lem}
\begin{proof} Recall that the evolution equation for $|\nabla \dot\phi|^2$ is
$$\frac{\partial}{\partial t}|\nabla \dot \phi|^2=\Delta_{\phi} |\nabla \dot \phi|^2-|\nabla \nabla \dot \phi|^2-|\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2.$$
Then for any time $t\in [T_1, T_2],$
\begin{eqnarray*}
\frac d{dt}\mu_1(t)&=&\frac 1V\int_M\;(-|\nabla \nabla \dot \phi|^2- |\nabla \bar \nabla\dot \phi|^2+|\nabla \dot \phi|^2+|\nabla \dot \phi|^2\Delta_{\phi}\dot\phi)\;\omega_{\phi}^n\\
&\leq &\frac 1V\int_M\;(-\gamma|\nabla \dot\phi|^2+(n-R(\omega_{\phi}))|\nabla \dot \phi|^2)\,\omega_{\phi}^n\\
&\leq& -(\gamma-C_1\epsilon)\mu_1(t).
\end{eqnarray*}
Thus, we have
$$\mu_1(t)\leq e^{-\alpha_1(t-T_1)}\mu_1(T_1),$$
where $\alpha_1=\gamma-C_1\epsilon>0$ if we choose $\epsilon$ small.
\end{proof}
\subsection{Estimate of the $C^0$ norm of $\phi(t)$}\label{section4.6}
In this subsection, we derive some estimates on the $C^0$ norm of $\phi$. Recall that in the previous subsection we proved that the $W^{1, 2}$ norm of $|\dot\phi-c(t)|$ decays exponentially. Based on this result we will use the parabolic Moser iteration to show that the $C^0$ norm of $|\dot\phi-c(t)|$ also decays exponentially.
\begin{lem}\label{lem2.18} Suppose that $\mu_0(t), \mu_1(t)$ decay exponentially for $t\in [T_1, T_2]$ as in Lemmas \ref{lem2.14} and \ref{lem2.15}. Then we have
$$\Big|\frac{\partial\phi}{\partial t}-c(t)\Big|_{C^{0}}\leq \frac {C_9(n, \sigma)}{\tau^{\frac {m}{4}}}\Big(\mu_0(t-\tau)+\frac 1{\alpha_1^2}\mu_1^2(t-\tau)\Big)^{\frac 12},\;\;\;\forall \;t\in [T_1+\tau, T_2],$$
where $m=\dim_{\mathbb{R}}M$ and $\tau<T_2-T_1$.
\end{lem}
\begin{proof} Let $u=\frac{\partial\phi}{\partial t}-c(t)$; the evolution equation for $u$ is
$$\frac{\partial u}{\partial t}=\Delta_{\phi} u+u+\mu_1(t),$$
where $\mu_1(t)=\frac 1V\int_M\;|\nabla \dot \phi|^2\,\omega_{\phi}^n.$
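For the reader's convenience, we recall where this equation comes from: differentiating the K\"ahler-Ricci flow equation in $t$ gives $\frac{\partial}{\partial t}\dot\phi=\Delta_{\phi}\dot\phi+\dot\phi$, and the computation at the beginning of the proof of Lemma \ref{lem2.6} gives $c'(t)=c(t)-\mu_1(t)$; hence
$$\frac{\partial u}{\partial t}=\Delta_{\phi}\dot\phi+\dot\phi-c'(t)=\Delta_{\phi}u+(\dot\phi-c(t))+\mu_1(t)=\Delta_{\phi} u+u+\mu_1(t).$$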
Note that in the proof of Lemma \ref{lem2.15}, we derived
$$\frac{\partial}{\partial t}\mu_1(t)\leq -\alpha_1 \mu_1(t).$$
Thus, we have
$$\frac{\partial}{\partial t}\Big(u_++\frac 1{\alpha_1}\mu_1\Big)\leq \Delta_{\phi} \Big(u_++\frac 1{\alpha_1}\mu_1\Big)+\Big(u_++\frac 1{\alpha_1}\mu_1\Big),$$
where $u_+=\max\{u, 0\}$. Since $u_++\frac 1{\alpha_1}\mu_1$ is a nonnegative function, we can use the parabolic Moser iteration:
$$\Big(u_++\frac 1{\alpha_1}\mu_1\Big)(t)\leq \frac {C(n, \sigma)}{\tau^{\frac {m+2}{4}}} \Big(\int_{t-\tau}^t\;\int_M \Big(u_++\frac 1{\alpha_1}\mu_1\Big)^2(s)\;\omega_{\phi}^n\wedge ds\Big)^{\frac 12}.$$
Since $\mu_0$ and $\mu_1$ are decreasing,
\begin{eqnarray}
& &\Big(u_++\frac 1{\alpha_1}\mu_1\Big)(t)\nonumber\\
&\leq&\frac {C(n, \sigma)}{\tau^{\frac {m+2}{4}}}\Big(\int_{t-\tau}^t \;\Big(\mu_0(s)+\frac 1{\alpha_1^2}\mu_1^2(s)\Big)ds\Big)^{\frac 12}\nonumber\\
&\leq&\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\Big(\mu_0(t-\tau)+\frac 1{\alpha_1^2}\mu_1^2(t-\tau)\Big)^{\frac 12}. \label{x1}
\end{eqnarray}
On the other hand, the evolution equation for $-u$ is
$$\frac{\partial}{\partial t}(-u)=\Delta_{\phi} (-u)+(-u)-\mu_1(t)\leq \Delta_{\phi} (-u)+(-u).$$
Thus,
$$\frac{\partial}{\partial t}(-u)_+\leq \Delta_{\phi} (-u)_++(-u)_+.$$
By the parabolic Moser iteration, we have
\begin{eqnarray}
(-u)_+&\leq &\frac {C(n, \sigma)}{\tau^{\frac {m+2}{4}}}\Big(\int_{t-\tau}^t\;\int_M (-u)_+^2\,\omega_{\phi}^n\wedge ds\Big)^{\frac 12}\nonumber\\
&\leq&\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\mu_0(t-\tau)^{\frac 12}. \label{x2}
\end{eqnarray}
Combining the two inequalities (\ref{x1}) and (\ref{x2}), we obtain the estimate
$$\Big|\frac{\partial\phi}{\partial t}-c(t)\Big|_{C^{0}}\leq \frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\Big(\mu_0(t-\tau)+\frac 1{\alpha_1^2}\mu_1^2(t-\tau)\Big)^{\frac 12}.$$
This proves the lemma.
\end{proof}
\begin{lem}\label{lem2.19} Under the same assumptions as in Lemma \ref{lem2.18}, we have
$$|\phi(t)|\leq |\phi(T_1+\tau)|+\frac {C_{10}(n, \sigma)}{\alpha \tau^{\frac {m}{4}}}\Big(\sqrt{\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1)\Big) + \tilde C,\qquad \forall\, t\in [T_1+\tau, T_2].$$
Here $\tilde C=E_0(0)-E_0(\infty)$ is the constant from Lemma \ref{lem2.6}.
\end{lem}
\begin{proof}
\begin{eqnarray*}
|\phi(t)|&\leq &|\phi(T_1+\tau)|+\int_{T_1+\tau}^{t}\;\Big|\frac{\partial\phi(s)}{\partial s}-c(s)\Big| ds+\int_{T_1+\tau}^{t}\;c(s) ds\\
&\leq &|\phi(T_1+\tau)|+\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\int_{T_1+\tau}^{t}\;\Big(\mu_0(s-\tau)+\frac 1{\alpha_1^2}\mu_1^2(s-\tau)\Big)^{\frac 12}ds + \tilde C\\
&\leq &|\phi(T_1+\tau)|+\frac {C(n, \sigma)}{\tau^{\frac {m}{4}}}\Big(\sqrt {\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1)\Big)\int_{T_1+\tau}^{t}\;e^{-\alpha (s- \tau-T_1)}ds +\tilde C\\
&\leq&|\phi(T_1+\tau)|+\frac {C(n, \sigma)}{\alpha \tau^{\frac {m}{4}}}\Big(\sqrt{\mu_0(T_1)}+\frac 1{\alpha_1}\mu_1(T_1)\Big) +\tilde C,
\end{eqnarray*}
where $\alpha=\min\{\frac {\alpha_0}2, \alpha_1\}$ and $\tilde C=E_0(0)-E_0(\infty)$ is the constant from Lemma \ref{lem2.6}.
\end{proof}
\subsection{Estimate of the $C^k$ norm of $\phi(t)$}
In this subsection, we shall obtain uniform $C^k$ bounds for the solution $\phi(t)$ of the K\"ahler-Ricci flow
$$\frac{\partial\phi}{\partial t}=\log\frac {\omega^n_{\phi}}{\omega^n}+\phi-h_{\omega}$$
with respect to any background metric $\omega$. For simplicity, we normalize $h_{\omega}$ to satisfy
$$\int_M\; h_{\omega}\;\omega^n=0.$$
The following is the main result in this subsection.
\begin{theo}\label{theoRm} For any positive constants $\Lambda, B>0$ and small $\eta>0$, there exists a constant $C_{11}$ depending only on $B, \eta, \Lambda$ and the Sobolev constant $\sigma$ such that if the background metric $\omega$ satisfies
$$|Rm(\omega)|\leq \Lambda, \qquad |Ric(\omega)-\omega|\leq \eta, $$
and $|\phi(t)|, |\dot\phi(t)|\leq B,$ then
$$|Rm|(t)\leq C_{11}(B, \Lambda, \eta, \sigma).$$
\end{theo}
\begin{proof} Note that $R(\omega)-n=\Delta_{\omega} h_{\omega}$; by the assumption we have
$$|\Delta_{\omega} h_{\omega}|\leq \eta.$$
Since the Sobolev constant with respect to the metric $\omega$ is uniformly bounded by a constant $\sigma$, we have
$$|h_{\omega}|_{C^0}\leq C(\sigma)\eta.$$
Now we use Yau's estimate to obtain higher order estimates of $\phi.$ Define
$$F=\dot\phi-\phi+h_{\omega};$$
then the K\"ahler-Ricci flow can be written as
$$(\omega+\sqrt{-1}\partial\bar\partial \phi)^n=e^F \omega^n.$$
By Yau's estimate we have
\begin{eqnarray*}
\Delta_{\phi}(e^{-k\phi}(n+\Delta_{\omega}\phi))&\geq & e^{-k\phi}(\Delta_{\omega} F-n^2\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\
&-&ke^{-k\phi}n(n+\Delta_{\omega}\phi)+(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
\frac{\partial}{\partial t}(e^{-k\phi}(n+\Delta_{\omega}\phi))&=&-k\dot\phi e^{-k\phi}(n+\Delta_{\omega} \phi)+e^{-k\phi}\Delta_{\omega} \dot\phi\\
&=&-k\dot\phi e^{-k\phi}(n+\Delta_{\omega} \phi)+e^{-k\phi}\Delta_{\omega}(F+\phi-h_{\omega}).
\end{eqnarray*}
Combining the above two inequalities, we have
\begin{eqnarray*}
\Big(\Delta_{\phi}-\frac{\partial}{\partial t}\Big)(e^{-k\phi}(n+\Delta_{\omega}\phi))&\geq & e^{-k\phi}(\Delta_{\omega} h_{\omega}+n-n^2\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))\\
&+&(k\dot\phi-kn-1) e^{-k\phi}(n+\Delta_{\omega}\phi)\\
&+&(k+\inf_{i\neq j}R_{i\bar ij\bar j}(\omega))e^{-k\phi+ \frac {-F}{n-1}}(n+\Delta_{\omega}\phi)^{1+\frac 1{n-1}}.
\end{eqnarray*}
Since $\phi$, $\Delta_{\omega} h_{\omega}$, $|h_{\omega}|$, $|Rm(\omega)|$ and $\dot\phi$ are bounded, by the maximum principle we can obtain the following estimate:
$$n+\Delta_{\omega} \phi\leq C_{12}(B, \eta, \Lambda, \sigma).$$
By the definition of $F$,
$$\log\frac {\omega^n_{\phi}}{\omega^n}=F\geq -C_{13}(B, \eta, \sigma).$$
On the other hand, we have
\begin{eqnarray*}
\log\frac {\omega^n_{\phi}}{\omega^n}&=&\log\prod_{i=1}^n(1+\phi_{i\bar i})\leq\log \big((n+\Delta_{\omega}\phi)^{n-1}(1+\phi_{i\bar i})\big).
\end{eqnarray*}
Thus, $1+\phi_{i\bar i}\geq e^{-C_{13}}C_{12}^{-\frac 1{n-1}},$ i.e.\ $C_{14}\omega\leq \omega_{\phi}\leq C_{12}\omega.$ Following Calabi's computation (cf. \cite{[chen-tian1]}, \cite{[Yau]}), we can obtain the following $C^3$ estimate:
$$|\phi|_{C^3(M, \,\omega)}\leq C_{14}(B, \eta, \Lambda, \sigma).$$
Since the metrics $\omega_{\phi}$ are uniformly equivalent, the flow is uniformly parabolic with $C^1$ coefficients. By the standard parabolic estimates, the $C^4$ norm of $\phi$ is bounded, and then all the curvature tensors are also bounded. The theorem is proved.
\end{proof}
\section{Proof of Theorem \ref{main}}
In this section, we shall prove Theorem \ref{main}. This theorem needs the technical condition that $M$ has no nonzero holomorphic vector fields, which will be removed in Section \ref{section6}. The idea is to use the estimate of the first eigenvalue proved in Section \ref{section4.4.1}.
\begin{theo} Suppose that $M$ has no nonzero holomorphic vector fields and $E_1$ is bounded from below in $[\omega].$ For any $\delta, B, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, B, \Lambda, \omega)>0$ such that for any metric $\omega_0$ in the subspace $\mathcal{A}(\delta, B, \Lambda, \epsilon)$ of K\"ahler metrics
$$\{\omega_{\phi}=\omega+\sqrt{-1}\partial\bar\partial \phi\;|\; Ric(\omega_{\phi})>-1+\delta,\; |\phi| \leq B, \;|Rm|(\omega_{\phi})\leq \Lambda, \; E_1(\omega_{\phi})\leq \inf E_1+\epsilon \},$$
the K\"ahler-Ricci flow will deform it exponentially fast to a K\"ahler-Einstein metric in the limit.
\end{theo}
\begin{flushleft}
\begin{proof} Let $\omega_0=\omega+\sqrt{-1}\partial\bar\partial \phi(0)\in \mathcal{A}(\delta, B, \Lambda, \epsilon)$, where $\epsilon$ will be determined later. Note that $E_1(0)\leq \inf E_1 +\epsilon;$ by Lemma \ref{lem5.9} we have
$$E_0(0)-E_0(\infty)\leq \frac {\epsilon}{2}<1.$$
Here we choose $\epsilon<2.$ Therefore, we can normalize the K\"ahler-Ricci flow such that for the normalized solution $\psi(t)$,
\begin{equation} 0< c(t), \;\int_0^{\infty} c(t)dt<1,\label{a1}\end{equation}
where $c(t)=\frac 1V\int_M\;\dot\psi\,\omega^n_{\psi}.$ Now we give the details on how to normalize the solution to satisfy (\ref{a1}). Since $\omega_0=\omega+\sqrt{-1}\partial\bar\partial \phi(0)\in \mathcal{A}(\delta, B, \Lambda, \epsilon)$, by Lemma {\ref{lem2.10}} we have
$$C_2(\Lambda, B, \omega)\omega\leq \omega_0\leq C_3(\Lambda, B, \omega)\omega.$$
By the equation of the K\"ahler-Ricci flow, we have
$$|\dot\phi|(0)= \Big|\log \frac {\omega_{\phi}^n}{\omega^n}+\phi-h_{\omega}\Big|_{t=0}\leq C_{16}(\omega, \Lambda, B).$$
Set $\psi(t)=\phi(t)+C_0e^t,$ where
$$C_0=\frac 1V\int_0^{\infty}\;e^{-t}\int_M\; |\nabla \dot\phi|^2\;\omega^n_{\phi}\wedge dt -\frac 1V\int_M\; \dot\phi\,\omega_{\phi}^n\Big|_{t=0}.$$
Then (\ref{a1}) holds,
$$|C_0|\leq 1+C_{16},$$
and
$$|\psi|(0), \;|\dot\psi|(0)\leq B+1+C_{16}:=B_0.$$
\vskip 10pt
\textbf{STEP 1.} (Estimates for $t\in [2T_1, 6T_1]$.) By Lemma \ref{lem2.1} there exists a constant $T_1(\delta, \Lambda)$ such that
\begin{equation} Ric(t)>-1+\frac {\delta}{2}\;\;\text{and}\;\; |Rm|(t)\leq 2\Lambda, \qquad \forall t\in [0, 6T_1].\label{5.20}\end{equation}
By Lemma {\ref{lem2.5}} and the equation (\ref{5.20}), we can choose $\epsilon$ small enough so that
\begin{equation}|Ric-\omega|(t)\leq C_1(T_1, \Lambda)\epsilon<\frac 12,\qquad \forall t\in [2T_1, 6T_1],\label{5.21}\end{equation}
and
\begin{equation}|\dot \psi-c(t)|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon<1, \qquad\forall t\in [2T_1, 6T_1].\label{5.22}\end{equation}
Then by the inequality (\ref{a1}),
\begin{equation}|\dot\psi|(t)\leq 1+|c(t)|\leq 2,\qquad \forall t\in [2T_1, 6T_1]. \label{5.25}\end{equation}
Since the equation for $\dot\psi$ is
$$\frac{\partial}{\partial t}\dot\psi=\Delta_{\psi} \dot\psi+\dot\psi,$$
the maximum principle (applied to $e^{-t}\dot\psi$) gives
\begin{equation} |\dot\psi|(t)\leq |\dot\psi|(0)e^{2T_1}\leq B_0e^{2T_1},\qquad \forall t\in [0, 2T_1]. \label{5.24} \end{equation}
Thus, for any $t\in [2T_1, 6T_1]$ we have
\begin{eqnarray*}
|\psi|(t)&\leq &|\psi|(0)+\int_0^{2T_1}\;|\dot\psi|\,ds+\int_{2T_1}^t|\dot\psi|\,ds\\
&\leq &B_0+2T_1B_0e^{2T_1}+8T_1,
\end{eqnarray*}
where the last inequality used (\ref{5.25}) and (\ref{5.24}). For simplicity, we define
\begin{eqnarray*}
B_1:&=&B_0+2T_1B_0e^{2T_1}+8T_1+2,\\
B_k:&=&B_{k-1}+2,\qquad 2\leq k\leq 4.
\end{eqnarray*}
Then
$$|\dot\psi|(t),\;|\psi|(t)\leq B_1,\qquad \forall t\in [2T_1, 6T_1].$$
By Theorem \ref{theoRm} we have
$$|Rm|(t)\leq C_{11}(B_1, \Lambda_{\omega}, 1),\qquad \forall t\in [2T_1, 6T_1],$$
where $\Lambda_{\omega}$ is an upper bound of the curvature tensor with respect to the metric $\omega,$ and $C_{11}$ is the constant obtained in Theorem \ref{theoRm}. Set $\Lambda_0=C_{11}(B_4, \Lambda_{\omega}, 1)$; then
$$|Rm|(t)\leq \Lambda_0, \qquad \forall t\in [2T_1, 6T_1].$$
\vskip 10pt
\textbf{STEP 2.} (Estimates for $t\in [2T_1+2T_2, 2T_1+6T_2]$.) By STEP 1, we have
$$|Ric-\omega|(2T_1)\leq C_1\epsilon<\frac 12 \;\;\text{and}\;\; |Rm|(2T_1)\leq\Lambda_0.$$
By Lemma {\ref{lem2.1}}, there exists a constant $T_2(\frac 12, \Lambda_0)\in (0, T_1]$ such that
$$|Rm|(t)\leq 2\Lambda_0\;\;\text{and}\;\; Ric(t)\geq 0,\qquad \forall t\in [2T_1, 2T_1+6T_2].$$
Recall that $E_1\leq \inf E_1+\epsilon;$ by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} there exists a constant $C_1'(T_2, \Lambda_0)$ such that
$$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon, \qquad\forall t\in [2T_1+2T_2, 2T_1+6T_2].$$
Choose $\epsilon$ small enough so that $C_1'(T_2, \Lambda_0)\epsilon<\frac 12.$ Then by Lemma \ref{lem2.5},
$$|\dot\psi-c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon,\qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$
Choose $\epsilon$ small enough so that $C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1.$ Thus, we can estimate the $C^0$ norm of $\psi$ for any $t\in [2T_1+2T_2, 2T_1+6T_2]$:
\begin{eqnarray*}
|\psi(t)|&\leq &|\psi|(2T_1+2T_2)+\Big|\int_{2T_1+2T_2}^t\;\Big(\frac{\partial\psi}{\partial s}-c(s)\Big)ds\Big|+\Big|\int_0^t\;c(s)ds\Big|\\
&\leq &B_1+ 4T_2C(\sigma)C_1'(T_2, \Lambda_0)\epsilon+1.
\end{eqnarray*}
Choose $\epsilon$ small enough such that $4T_2C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1;$ then
$$|\psi(t)|\leq B_2, \qquad\forall t\in [2T_1+2T_2, 2T_1+6T_2].$$
Since $M$ has no nonzero holomorphic vector fields, applying Theorem \ref{lem2.8} with the parameters $\eta=C_1'\epsilon$, $A=1$ and $|\phi|\leq B_4,$ if we choose $\epsilon$ small enough, there exists a constant $\gamma(C_1'\epsilon, B_4, 1, \omega)$ such that the first eigenvalue of the Laplacian $\Delta_{\psi}$ satisfies
$$\lambda_1(t)\geq 1+\gamma>1, \qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$
\textbf{STEP 3.} In this step, we want to prove the following claim:
\begin{claim} For any positive number $S\geq 2T_1+6T_2$, if
$$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon<\frac 12 \;\;\text{and}\;\; |\psi(t)|\leq B_3, \qquad\forall t\in [2T_1+2T_2, S],$$
then we can extend the solution $g(t)$ to $[2T_1+2T_2, S+4T_2]$ such that the above estimates still hold for $t\in [2T_1+2T_2, S+4T_2]$.
\end{claim}
\begin{proof} By the assumption and Lemma {\ref{lem2.5}}, we have
$$|\dot \psi(t)-c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, S].$$
Note that from STEP 2 we know that $C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1.$ Then
$$|\dot\psi|(t)\leq 2,\qquad \forall t\in [2T_1+2T_2, S].$$
Therefore, $|\psi|, |\dot\psi|\leq B_3$. By Theorem \ref{theoRm} and the definition of $\Lambda_0,$ we have
$$|Rm|(t)\leq \Lambda_0,\qquad \forall t\in [2T_1+2T_2, S].$$
By Lemma {\ref{lem2.1}} and the definition of $T_2$,
$$|Rm|(t)\leq 2\Lambda_0, \;\;Ric(t)\geq 0, \qquad\forall t\in [S-2T_2, S+4T_2].$$
Thus, by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} we have
$$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon,\qquad \forall t\in [S, S+4T_2],$$
and
$$|\dot\psi-c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [S, S+4T_2].$$
Then we can estimate the $C^0$ norm of $\psi$ for $t\in [S, S+4T_2]$:
\begin{eqnarray*}
|\psi(t)|&\leq &|\psi|(S)+\Big|\int_S^{S+4T_2}\; \Big(\frac{\partial\psi}{\partial s}-c(s)\Big)ds\Big| +\Big|\int_0^{\infty}\;c(s)ds\Big|\\
&\leq &B_3+4T_2C(\sigma)C_1'(T_2, \Lambda_0)\epsilon+1 \\
&\leq &B_4.
\end{eqnarray*}
Then by Theorem \ref{lem2.8} and the definition of $\gamma$, the first eigenvalue of the Laplacian $\Delta_{\psi}$ satisfies
$$\lambda_1(t)\geq 1+\gamma>1, \qquad\forall t\in [2T_1+2T_2, S+4T_2].$$
Note that
$$\mu_0(2T_1+2T_2)=\frac 1V\int_M\;(\dot\psi-c(t))^2\,\omega_{\psi}^n\leq(C(\sigma)C_1'\epsilon)^2 $$
and
\begin{eqnarray*}
\mu_1(2T_1+2T_2)&=&\frac 1V\int_M\;|\nabla\dot\psi|^2\,\omega_{\psi}^n\\
&=&\frac 1V\int_M\;(\dot\psi-c(t))(R(\omega_{\psi})-n)\,\omega_{\psi}^n\\
&\leq&C(\sigma)(C_1'\epsilon)^2.
\end{eqnarray*}
By Lemma \ref{lem2.19}, we can choose $\epsilon$ small enough such that
\begin{eqnarray*}
|\psi(t)|&\leq& |\psi(2T_1+3T_2)|+\frac {C(n, \sigma)}{\alpha T_2^{\frac {m}{4}}}\Big(\sqrt{\mu_0(2T_1+2T_2)}+\frac 1{\alpha_1}\mu_1(2T_1+2T_2)\Big) +1\\
&\leq&B_2+\frac {C(n, \sigma)}{\alpha T_2^{\frac {m}{4}}}\Big(1+\frac 1{\alpha_1}C_1'\epsilon\Big)C(\sigma)C_1'\epsilon+1\\
&\leq &B_3
\end{eqnarray*}
for $t\in [S, S+4T_2].$ Note that $\epsilon$ doesn't depend on $S$ here, so it won't become smaller as $S\rightarrow \infty.$
\end{proof}
\textbf{STEP 4.} By Step 3, applied repeatedly to extend the time interval by $4T_2$ at each stage, we know the bisectional curvature is uniformly bounded and the first eigenvalue satisfies $\lambda_1(t)\geq 1+\eta>1$ uniformly for some positive constant $\eta>0.$ Thus, following the argument in \cite{[chen-tian2]}, the K\"ahler-Ricci flow converges to a K\"ahler-Einstein metric exponentially fast. This theorem is proved.
\end{proof}
\end{flushleft}
\section{Proof of Theorem \ref{main2}}\label{section6}
In this section, we shall use the pre-stable condition to drop the assumption that $M$ has no nonzero holomorphic vector fields, and to remove the dependence on the initial K\"ahler potential. The proof here is roughly the same as in the previous section, but there are some differences.

In STEP 1 of the proof below, we will choose a new background metric at time $t=2T_1$, so that the new K\"ahler potential with respect to the new background metric is $0$ at $t=2T_1$ and has nice estimates afterwards. Notice that all the estimates, particularly those in Theorems \ref{theo4.18} and \ref{theoRm}, are essentially independent of the choice of the background metric. Therefore the choice of $\epsilon$ will not depend on the initial K\"ahler potential $\phi(0)$. This is why we can remove the assumption on the initial K\"ahler potential.

As in Theorem \ref{main}, the key point of the proof is to use the improved estimate on the first eigenvalue in Section \ref{section4.4.2} (see Claim \ref{last} below). Since the curvature tensors are bounded in some time interval, by Shi's estimates the gradients of the curvature tensors are also bounded. Then the assumptions of Theorem \ref{theoRm} are satisfied, and we can use the estimate of the first eigenvalue.

Now we state the main result of this section.
\begin{theo}Suppose $M$ is pre-stable, and $E_1$ is bounded from below in $[\omega]$. For any $\delta, \Lambda>0,$ there exists a small positive constant $\epsilon(\delta, \Lambda)>0$ such that for any metric $\omega_0$ in the subspace ${\mathcal A}(\delta, \Lambda, \omega, \epsilon)$ of K\"ahler metrics
$$\{\omega_{\phi}=\omega+\partial\bar\partial \phi\;|\; Ric(\omega_{\phi})>-1+\delta, \; |Rm|(\omega_{\phi})\leq \Lambda,\; E_1(\omega_{\phi})\leq \inf E_1+\epsilon \},$$ the K\"ahler-Ricci flow will deform it exponentially fast to a K\"ahler-Einstein metric in the limit.
\end{theo}
\begin{flushleft}
\begin{proof} Let $\omega_0\in {\mathcal A}(\delta, \Lambda, \omega, \epsilon)$, where $\epsilon$ will be determined later. By Lemma \ref{lem2.1} there exists a constant $T_1(\delta, \Lambda)$ such that
$$Ric(t)>-1+\frac {\delta}2 \quad \mbox{and} \quad |Rm|(t)\leq 2\Lambda, \qquad\forall t\in [0, 6T_1].$$ By Lemma \ref{lem2.5}, we can choose $\epsilon$ small enough so that
\begin{equation}|Ric-\omega|(t)\leq C_1(T_1, \Lambda)\epsilon<\frac 12, \qquad\forall t\in [2T_1, 6T_1],\label{8.29}\end{equation}
and
\begin{equation}|\dot \phi-c(t)|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon<1, \qquad\forall t\in [2T_1, 6T_1].\label{8.30}\end{equation}
\textbf{STEP 1.} (Choose a new background metric.) Let $\underline\omega=\omega+\partial\bar\partial \phi(2T_1)$ and let $\underline \phi(t)$ be the solution to the following K\"ahler-Ricci flow
$$\left\{\begin{array}{l}
\frac{\partial \underline \phi(t)}{\partial t}=\log \frac {(\underline \omega+\partial\bar\partial \underline\phi)^n}{\underline \omega^n}+\underline \phi-h_{\underline\omega},\qquad t\geq 2T_1,\\
\underline \phi(2T_1)=0.\\
\end{array}
\right. $$ Here $h_{\underline \omega}$ satisfies the following conditions:
$$Ric(\underline \omega)-\underline \omega=\partial\bar\partial h_{\underline \omega}\quad \mbox{and} \quad \int_M\; h_{\underline \omega}\, \underline\omega^n=0.$$ Then the metric $\underline \omega(t)=\underline \omega+\partial\bar\partial \underline\phi(t)$ satisfies
$$\frac{\partial}{\partial t}\underline\omega(t)=-Ric (\underline \omega(t))+\underline \omega(t)\quad \mbox{and} \quad \underline \omega(2T_1)=\omega+\partial\bar\partial \phi(2T_1).$$ By the uniqueness of the K\"ahler-Ricci flow, we have
$$\underline \omega(t)=\omega+\partial\bar\partial \phi(t),\qquad \forall t\geq 2T_1.$$
Since the Sobolev constant is bounded and
$$|\Delta_{\underline \omega}h_{\underline \omega}|=|R(\underline \omega)-n|\leq C_1(T_1, \Lambda)\epsilon,$$
we have $$\Big|\frac{\partial \underline \phi}{\partial t}\Big|(2T_1)=|h_{\underline \omega}|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon.$$ Since $E_1$ is decreasing in our case, we have
$$E_1(\underline \omega)\leq E_1(\omega+\partial\bar\partial \phi(0))\leq \inf E_1+\epsilon.$$ By Lemma \ref{lem5.9}, we have $$E_0(\underline \omega)\leq \inf E_0+\frac \epsilon2.$$ Thus, by Lemma \ref{lem2.6} we have
$$\frac 1V\int_{2T_1}^{\infty}\; e^{-t}\int_M\; \Big|\nabla \frac{\partial \underline\phi}{\partial t}\Big|^2\underline\omega(t)^n\wedge dt <\frac {\epsilon}2<1.$$ Here we choose $\epsilon<2.$ Set $\psi(t)=\underline \phi(t)+C_0e^{t-2T_1},$ where
$$C_0=\frac 1V\int_{2T_1}^{\infty}\; e^{-t}\int_M\; \Big|\nabla \frac{\partial \underline\phi}{\partial t}\Big|^2\underline\omega(t)^n\wedge dt-\frac 1V \int_M\; \frac{\partial \underline\phi}{\partial t}\,\underline\omega(t)^n\Big|_{t=2T_1}.
$$
Then $$|\psi(2T_1)|,\;\; \Big|\frac{\partial \psi}{\partial t}\Big|(2T_1)\leq 2,$$ and
$$0< \underline c(t),\;\;\int_{2T_1}^{\infty}\; \underline c(t)dt<1,$$
where $\underline c(t)=\frac 1V\int_M\; \frac{\partial \psi}{\partial t}\, \underline\omega_{\psi}^n.$
Since
$$|\dot\psi-\underline c(t)|=|\dot\phi-c(t)|\leq C(\sigma)C_1(T_1, \Lambda)\epsilon,\qquad \forall t\in [2T_1, 6T_1],$$ we have
$$|\dot\psi|(t)\leq 2, \qquad \forall t\in [2T_1, 6T_1],$$
and
\begin{eqnarray*}
|\psi|(t)&\leq& |\psi|(2T_1)+\Big|\int_{2T_1}^{t}\; (\dot\psi-\underline c(s))ds\Big|+\Big|\int_{2T_1}^{\infty}\; \underline c(s)ds\Big|\\
&\leq&3+4T_1C(\sigma)C_1(T_1, \Lambda)\epsilon, \qquad \forall t\in [2T_1, 6T_1].
\end{eqnarray*}
Choose $\epsilon$ small enough such that $4T_1 C(\sigma)C_1(T_1, \Lambda)\epsilon<1$, and define $B_k=2k+2$. Then
$$|\psi|, |\dot\psi|\leq B_1, \qquad \forall t\in [2T_1, 6T_1].$$
By Theorem \ref{theoRm}, we have
$$|Rm|(t)\leq C_{11}(B_1, 2\Lambda, 1),\qquad \forall t\in [2T_1, 6T_1].$$
Here $C_{11}$ is a constant obtained in Theorem \ref{theoRm}. Let $\Lambda_0:= C_{11}(B_3, 2\Lambda, 1)$; then
$$|Rm|(t)\leq \Lambda_0,\qquad \forall t\in [2T_1, 6T_1].$$
\vskip 10pt \textbf{STEP 2.} (Estimates for $t\in [2T_1+2T_2, 2T_1+6T_2]$.) By Step 1, we have
$$|Ric-\omega|(2T_1)\leq C_1(T_1, \Lambda)\epsilon<\frac 12,\quad \mbox{and} \quad |Rm|(2T_1)\leq \Lambda_0.$$ By Lemma \ref{lem2.1}, there exists a constant $T_2(\frac 12, \Lambda_0)\in (0, T_1]$ such that
\begin{equation}|Rm|(t)\leq 2\Lambda_0,\quad \mbox{and} \quad Ric(t)\geq 0,\qquad \forall t\in [2T_1, 2T_1+6T_2].\label{7.28}\end{equation}
Recall that $E_1\leq \inf E_1+\epsilon,$ by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} there exists a constant $C_1'(T_2, \Lambda_0)$ such that
$$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ Choose $\epsilon$ small enough so that $C_1'(T_2, \Lambda_0)\epsilon<\frac 12$. Then by Lemma \ref{lem2.5},
$$|\dot\psi(t)-\underline c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, 2T_1+6T_2].$$ Choose $\epsilon$ small such that $C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1.$ Thus, we can estimate the $C^0$ norm of $\psi$ for any $t\in [2T_1+2T_2, 2T_1+6T_2]$
\begin{eqnarray*}
|\psi(t)|&\leq&|\psi|(2T_1+2T_2)+ \Big|\int_{2T_1+2T_2}^t\; \Big(\frac{\partial \psi}{\partial s}-\underline c(s)\Big)ds\Big|+\Big|\int_{2T_1+2T_2}^{\infty}\; \underline c(s)ds\Big|\\
&\leq &B_1+4T_2 C(\sigma)C_1'(T_2, \Lambda_0)\epsilon +1\\&\leq &B_2.
\end{eqnarray*}
Here we choose $\epsilon$ small enough such that $4T_2 C(\sigma)C_1'(T_2, \Lambda_0)\epsilon<1$. Thus, by the definition of $\Lambda_0,$ we have
$$|Rm|(t)\leq \Lambda_0, \qquad\forall t\in [2T_1+2T_2, 2T_1+6T_2].$$
\textbf{STEP 3.} In this step, we want to prove the following claim:
\begin{claim}\label{last}For any positive number $S\geq 2T_1+6T_2$, if
$$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon<\frac 12, \quad \mbox{and} \quad |Rm|(t)\leq \Lambda_0,\qquad \forall t\in [2T_1+2T_2, S],$$ then we can extend the solution $g(t)$ to $[2T_1+2T_2, S+4T_2]$ such that the above estimates still hold for $t\in [2T_1+2T_2, S+4T_2]$.
\end{claim}
\begin{proof} By Lemma \ref{lem2.1} and the definition of $T_2$,
$$|Rm|(t)\leq 2\Lambda_0,\; Ric(t)\geq 0, \qquad\forall t\in [2T_1+2T_2, S+4T_2].$$ Thus, by Lemma \ref{lem2.2} and Lemma \ref{lem2.5} we have
$$|Ric-\omega|(t)\leq C_1'(T_2, \Lambda_0)\epsilon,\qquad \forall t\in [S-2T_2, S+4T_2].$$ Therefore, we have
$$|\dot\psi(t)-\underline c(t)|_{C^0}\leq C(\sigma)C_1'(T_2, \Lambda_0)\epsilon, \qquad \forall t\in [2T_1+2T_2, S+4T_2].$$ By Theorem \ref{main4} the $K$-energy is bounded from below, and hence the Futaki invariant vanishes. Therefore, we have
$$\int_M\; X(\dot\psi)\,\underline\omega_{\psi}^n=0,\qquad \forall X\in \eta_r(M, J).$$ By the assumption that $M$ is pre-stable and Theorem \ref{theo4.18}, if $\epsilon$ is small enough, there exists a constant $\gamma(C_1'\epsilon, 2\Lambda_0)$ such that
$$\int_M\; |\nabla \dot\psi|^2\,\underline\omega^n_{\psi}\geq (1+\gamma)\int_M\; |\dot\psi-\underline c(t)|^2\,\underline\omega^n_{\psi}.$$ Therefore, Lemma \ref{lem2.14} still holds, i.e.\ there exists a constant $\alpha(\gamma, C_1'\epsilon, \sigma)>0$ such that for any $t\in [2T_1+2T_2, S+4T_2]$
$$\mu_1(t)\leq \mu_1(2T_1+2T_2)e^{-\alpha(t-2T_1-2T_2)},$$
and
$$\mu_0(t)\leq \frac {1}{1-C_1'\epsilon}\mu_1(t)\leq 2\mu_1(2T_1+2T_2)e^{-\alpha(t-2T_1-2T_2)}. $$
Then by Lemma \ref{lem2.19}, we can choose $\epsilon$ small enough such that
\begin{eqnarray*}
|\psi(t)|&\leq& |\psi(2T_1+3T_2)|+\frac {C_{10}(n, \sigma)}{\alpha T_2^{\frac {m}{4}}} (\sqrt{\mu_0(2T_1+2T_2)}+\frac 1{\alpha_1}\mu_1(2T_1+2T_2)) +1\\
&\leq&B_2+\frac {C_{10}(n, \sigma)}{\alpha T_2^{\frac {m}{4}}}(1+\frac 1{\alpha_1}C_1'\epsilon) C(\sigma)C_1'\epsilon+1\\
&\leq &B_3
\end{eqnarray*}
for $t\in [S, S+4T_2].$ By the definition of $\Lambda_0,$ we have
$$|Rm|(t)\leq \Lambda_0,\qquad \forall t\in [S, S+4T_2].$$
\end{proof}
\textbf{STEP 4.} By Step 3, applied repeatedly to extend the time interval by $4T_2$ at each stage, we know the bisectional curvature is uniformly bounded and the $W^{1, 2}$ norm of $\underline {\dot \phi}-\underline c(t)$ decays exponentially. Thus, following the argument in \cite{[chen-tian2]}, the K\"ahler-Ricci flow converges to a K\"ahler-Einstein metric exponentially fast. This theorem is proved.
\end{proof}
\end{flushleft}
\begin{thebibliography}{2}
\bibitem{[Bando]}S. Bando. The $K$-energy map, almost Einstein K\"ahler metrics and an inequality of the Miyaoka-Yau type. Tohoku Math. J. (2) 39 (1987), no. 2, 231--235.
\bibitem{[BaMa]}S. Bando, T. Mabuchi. Uniqueness of Einstein K\"ahler metrics modulo connected group actions. Algebraic geometry, Sendai, 1985, 11--40, Adv. Stud. Pure Math., 10, North-Holland, Amsterdam, 1987.
\bibitem{[Cao]}H. D. Cao. Deformation of K\"ahler metrics to K\"ahler-Einstein metrics on compact K\"ahler manifolds. Invent. Math. 81 (1985), no. 2, 359--372.
\bibitem{[CCZ]}H. D. Cao, B. L. Chen, X. P. Zhu. Ricci flow on compact K\"ahler manifolds of positive bisectional curvature. C. R. Math. Acad. Sci. Paris 337 (2003), no. 12, 781--784.
\bibitem{[chen0]}X. X. Chen. Calabi flow in Riemann surfaces revisited. IMRN 6 (2001), 275--297.
\bibitem{[chen1]}X. X. Chen. On the lower bound of energy functional $E_1$ (I)---a stability theorem on the K\"ahler-Ricci flow. J. Geom. Anal. 16 (2006), 23--38.
\bibitem{[chen2]}X. X. Chen. On K\"ahler manifolds with positive orthogonal bisectional curvature, to appear in Adv. in Math., arXiv:math.DG/0606229.
\bibitem{[chenhe]}X. X. Chen, W. Y. He. On the Calabi flow. math.DG/0603523.
\bibitem{[chen-tian2]}X. X. Chen, G. Tian. Ricci flow on K\"ahler-Einstein surfaces. Invent. Math. 147 (2002), no. 3, 487--544.
\bibitem{[chen-tian1]}X. X. Chen, G. Tian. Ricci flow on K\"ahler-Einstein manifolds. Duke Math. J. 131 (2006), no. 1, 17--73.
\bibitem{[DiTi1]}W. Y. Ding, G. Tian. K\"ahler-Einstein metrics and the generalized Futaki invariant. Invent. Math. 110 (1992), no. 2, 315--335.
\bibitem{[DiTi2]}W. Y. Ding, G. Tian. The generalized Moser-Trudinger inequality. Proceedings of Nankai International Conference on Nonlinear Analysis, 1993.
\bibitem{[Do]}S. K. Donaldson. Scalar curvature and stability of toric varieties. J. Differential Geom. 62 (2002), no. 2, 289--349.
\bibitem{[futaki04]}A. Futaki. Asymptotic Chow semi-stability and integral invariants. Internat. J. Math. 9 (2004), 967--979.
\bibitem{[FM]}A. Futaki, T. Mabuchi. Bilinear forms and extremal K\"ahler vector fields associated with K\"ahler classes. Math. Ann. 301 (1995), 199--210.
\bibitem{[Ha82]}R. S. Hamilton. Three-manifolds with positive Ricci curvature. J. Differential Geom. 17 (1982), no. 2, 255--306.
\bibitem{[Ha88]}R. S. Hamilton. The Ricci flow on surfaces. Mathematics and general relativity, 237--262, Contemp. Math., 71, Amer. Math. Soc., Providence, RI, 1988.
\bibitem{[chenli]}H. Z. Li. A new formula for the Chen-Tian energy functionals $E_k$ and its applications. math.DG/0609724.
\bibitem{[Mok]}N. Mok. The uniformization theorem for compact K\"ahler manifolds of nonnegative holomorphic bisectional curvature. J. Differential Geom. 27 (1988), no. 2, 179--214.
\bibitem{[Pali]}N. Pali. A consequence of a lower bound of the $K$-energy. Int. Math. Res. Not. 2005, no. 50, 3081--3090.
\bibitem{[Pere]}G. Perelman. The entropy formula for the Ricci flow and its geometric applications. math.DG/0211159.
\bibitem{[Pere2]}G. Perelman. Unpublished work on K\"ahler-Ricci flow.
\bibitem{[PhSt]}D. H. Phong, J. Sturm. On stability and convergence of K\"ahler-Ricci flow. math.DG/0412185.
\bibitem{[Shi]}W. X. Shi. Ricci deformation of the metric on complete noncompact Riemannian manifolds. J. Differential Geom. 30 (1989), 303--394.
\bibitem{[Siu]}Y.-T. Siu. Lectures on Hermitian-Einstein metrics for stable bundles and K\"ahler-Einstein metrics. Birkh\"auser Verlag, 1987.
\bibitem{[SoWe]}J. Song, B. Weinkove. Energy functionals and canonical K\"ahler metrics. math.DG/0505476.
\bibitem{[Tian1]}G. Tian. On K\"ahler-Einstein metrics on certain K\"ahler manifolds with $c_1(M)>0$. Invent. Math. 89 (1987), no. 2, 225--246.
\bibitem{[Tian2]}G. Tian. K\"ahler-Einstein metrics with positive scalar curvature. Invent. Math. 130 (1997), no. 1, 1--37.
\bibitem{[Tian2a]}G. Tian. The $K$-energy of hypersurfaces and stability. Comm. Anal. Geom. 2 (1994), 239--265.
\bibitem{[Tian3]}G. Tian. Canonical metrics in K\"ahler geometry. Notes taken by Meike Akveld. Lectures in Mathematics ETH Z\"urich. Birkh\"auser Verlag, Basel, 2000.
\bibitem{[Tian4]}G. Tian. On Calabi's conjecture for complex surfaces with positive first Chern class. Invent. Math. 101 (1990), 101--172.
\bibitem{[Tosatti]}V. Tosatti. On the critical points of the $E_k$ functionals in K\"ahler geometry. math.DG/0506021.
\bibitem{[Tru]}N. S. Trudinger. Local estimates for subsolutions and supersolutions of general second order elliptic quasilinear equations. Invent. Math. 61 (1980), no. 1, 67--79.
\bibitem{[Yau]}S. T. Yau. On the Ricci curvature of a compact K\"ahler manifold and the complex Monge-Amp\`ere equation. I. Comm. Pure Appl. Math. 31 (1978), no. 3, 339--411.
\end{thebibliography}
\vskip3mm
Xiuxiong Chen, Department of Mathematics, University of
Wisconsin-Madison, Madison WI 53706, USA; [email protected]\\
Haozhao Li, School of Mathematical Sciences, Peking University,
Beijing, 100871, P.R. China; [email protected]\\
Bing Wang, Department of Mathematics, University of
Wisconsin-Madison, Madison WI 53706, USA; [email protected]\\
\end{document}
\begin{document}
\begin{abstract}
In this paper, we discuss the dimension of interval orders having a representation using $n$ different interval lengths, and the dimension of interval orders that have a representation using lengths in $[1, r]$.
\end{abstract}
\title{Dimension bounds of classes of interval orders}
\section{Introduction}
In a recent paper by Keller, Trenk, and Young \cite{KTY}, the authors proved that interval orders that have a representation with interval lengths $0$ and $1$ have dimension at most $3$. At the end of their paper, they proposed two problems. (1) Find a good bound on the dimension of interval orders whose representation uses intervals of lengths $r$ and $s$, where $r,s>0$. (2) Find a good bound on the dimension of interval orders that have a representation using at most $r$ different lengths.
Let $f(r)$ denote the best bound in problem (2). In \cite{KTY}, the authors gave the simple upper bound $f(r)\leq 3r+\binom{r}{2}$. We provide a better bound; we also observed that the bound depends not just on the number of lengths but also on the relation between the lengths, so it is natural to discuss the dimension of interval orders that have a representation with interval lengths in a certain range.
In this paper, we use ``$\lg$'' to denote the base-$2$ logarithm.
\section{Background}
A partially ordered set (or poset) is a pair $(X, P)$ where $X$ is a set and $P$ is a reflexive, antisymmetric and transitive binary relation on $X$. We will use $\mathbf{P}$ to denote the pair $(X,P)$. We use $x<y$ in $P$ to denote $(x, y) \in P$, $x\|y$ when $x$ and $y$ are incomparable in $P$. For a subset $Y$ of the ground set $X$, we denote the restriction of $P$ to $Y$ by $P(Y)$; the poset $\mathbf{Q} = (Y, P(Y))$ is a subposet of $\mathbf{P}$.
A linear order $L$ on $X$ is called a linear extension of $P$ if $P \subseteq L$. We denote the dual of $L$ by $L^d$. A family of linear extensions $\mathcal{R}$ on $X$ is called a realizer of $P$ if $\cap \mathcal{R} = P$; in other words, for all $x$, $y$ in $X$, we have $x<y$ in $P$ if and only if $x<y$ in $L$ for every $L \in \mathcal{R}$. The dimension of a poset $\mathbf{P}$ is the minimum cardinality of a realizer of $P$, denoted by $\dim(\mathbf{P})$.
A poset $(X,P)$ is an interval order if for each $x \in X$, an interval $[l_x, r_x]$ can be assigned to $x$, such that for $x$, $y$ $\in X$, $x < y$ in $P$ if and only if $r_x < l_y$. We call the assignment of a collection of intervals to all the vertices in the ground set of $\mathbf{P}$ an interval representation of $\mathbf{P}$. Fishburn \cite{F} showed that a poset is an interval order if and only if it does not contain $S_2 = \mathbf{2}+\mathbf{2}$ as a subposet.
An interval order is a semiorder if it has an interval representation in which all intervals have the same length. Usually, we use unit length for the intervals, so semiorders are also called unit interval orders. Scott and Suppes \cite{SS} proved that an interval order $\mathbf{P}$ is a semiorder if and only if $\mathbf{P}$ does not contain a $\mathbf{3}+\mathbf{1}$ as a subposet.
\section{Twin-free and distinguishing representations}
Let $\mathbf{P}$ be an interval order, and fix a representation for $\mathbf{P}$. Let $x,y\in X$ be such that the same interval is assigned to both. We call such a pair $x$, $y$ a twin. If a representation does not have any twins, we call it twin-free. A representation of an interval order is distinguishing if every real number occurs at most once as an endpoint of an interval of the representation, i.e.~no two intervals share an endpoint. A distinguishing representation is of course twin-free.
Let $\mathbf{P}=(X,P)$ be a poset, and $x,y\in X$. We say $x,y$ have duplicated holdings, if $\{z\in X: z>x\}=\{z\in X: z>y\}$ and $\{z\in X: z<x\}=\{z\in X: z<y\}$; in other words the upsets and the downsets of $x$ and $y$ are the same. If $\mathbf{P}$ is an interval order with a representation in which $x$ and $y$ are twins, then they have duplicated holdings. So if an interval order has no duplicated holdings, then every representation is twin-free.
One important property of two elements with duplicated holdings is that we may discard one of them without reducing the dimension (as long as the dimension is at least $2$). We will use this property later: when proving an upper bound on the dimension of a poset, we may assume that it has no duplicated holdings.
Since this paper studies interval orders for which the lengths of the intervals are not arbitrary, we introduce the following notations. Let $S\subseteq\mathbb{R}^+\cup\{0\}$, $S\neq\emptyset$. An $S$-representation of a poset $\mathbf{P}$ is an interval representation, in which every interval length is in $S$.
It is easy to see that every interval order has a distinguishing $\mathbb{R}^+$-representation. Things get less obvious with restrictions introduced. A simple example would be a $\{0\}$-representation of an antichain of size at least $2$, which can not be made distinguishing, or even twin-free. We will prove that---essentially---this is the only problem case.
Following Fishburn and Graham \cite{FG}, we will use the notation $C(S)$ to denote the family of posets that have an $S$-representation. As a special case, $C([\alpha,\beta])$ denotes the family of posets for which there is a representation with intervals of lengths between $\alpha$ and $\beta$ (inclusive). We will use the shorthands $C[\alpha,\beta]=C([\alpha,\beta])$, and $C(\alpha)=C([1,\alpha])$.
The following observation is obvious due to the scalability of intervals in a representation.
\begin{observation}\label{obs:scaling}
$C[\alpha, \beta] = C[m\alpha, m\beta]$, for all $m\in\mathbb{R}^+$.
\end{observation}
With these notations, $C(\mathbb{R}^+)$ is the family of interval orders, and for $s\neq 0$, $C(\{s\})=C(\{1\})=C(1)$ is the family of semiorders.
Now we are ready to prove the theorem that shows that---in most cases---we can assume that a poset has a distinguishing representation.
\begin{theorem}\label{theorem:representations}
Let $S\subseteq\mathbb{R}^+\cup\{0\}$, $S\neq\emptyset$.
\begin{enumerate}
\item\label{part:1} Every poset $\mathbf{P}\in C(S)$ that has a twin-free $S$-representation also has a distinguishing $S$-representation.
\item\label{part:2} If $0\not\in S$, then every poset $\mathbf{P}\in C(S)$ has a distinguishing $S$-representation.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $S\subseteq\mathbb{R}^+\cup\{0\}$, $S\neq\emptyset$, and let $\mathbf{P} \in C(S)$. Consider an $S$-representation of $\mathbf{P}$; with a slight abuse of notation, the multiset of intervals in this representation will also be referred to as $\mathbf{P}$. We will define two symmetric operations that we will perform repeatedly. These will be used to decrease the number of common endpoints of the intervals. After this, we enter a second phase, in which we remove twins, if possible.
\subsection*{Left and right compression}
Let $c\in\mathbb{R}$, and $\epsilon>0$. Let $L=\{x\in P: l_x<c\}$, and let $R=P-L$. Define $L'=\{[l_x+\epsilon,r_x+\epsilon]:x\in L\}$. Let $P'=L'\cup R$, a multiset of intervals. The operation that creates $P'$ from $P$ is what we call ``left compression'' with parameters $c$ and $\epsilon$.
We can similarly define right compressions. Let $R=\{x\in P: r_x>c\}$, and let $L=P-R$. Define $R'=\{[l_x-\epsilon,r_x-\epsilon]:x\in R\}$. Let $P'=L\cup R'$ to define the operation of right compression.
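
The two operations are straightforward to carry out on a finite list of intervals. The following Python sketch is an illustration only (it is not part of the argument, and the function names are ours); intervals are given as pairs $(l_x,r_x)$.
\begin{verbatim}
def left_compression(intervals, c, eps):
    # Shift every interval whose left endpoint is < c to the right by eps.
    return [(l + eps, r + eps) if l < c else (l, r) for (l, r) in intervals]

def right_compression(intervals, c, eps):
    # Shift every interval whose right endpoint is > c to the left by eps.
    return [(l - eps, r - eps) if r > c else (l, r) for (l, r) in intervals]
\end{verbatim}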
\begin{lemma}
Let $\mathbf{P}$ be a poset (representation), $c\in\mathbb{R}$, and let $\epsilon=\frac12\min\{|a-b|:\text{$a$ and $b$ are distinct endpoints}\}$. Let $\mathbf{P'}$ be the left (right) compression of $\mathbf{P}$ with parameters $c$ and $\epsilon$. Then $\mathbf{P}$ and $\mathbf{P'}$ represent isomorphic posets.
\end{lemma}
\begin{proof}[Proof of lemma]
We will do the proof for left compressions. The argument for right compressions is symmetric.
Notice that if $a$ and $b$ are two endpoints of intervals of $\mathbf{P}$, then their relation will not change, unless $a=b$. More precisely, if $a<b$, then the corresponding endpoints in $\mathbf{P'}$ maintain this relation; similarly for $a>b$.
So if $x$ and $y$ are two intervals in $\mathbf{P}$ with no common endpoints, then their (poset) relation is maintained in $\mathbf{P'}$.
Now suppose that $x$ and $y$ are intervals with some common endpoints. There are a few cases to consider.
If $l_x=l_y$ then either $x,y\in L$ or $x,y\in R$, so either both are shifted, or neither. Therefore $x\|y$ both in $\mathbf{P}$ and in $\mathbf{P'}$.
Now suppose $l_x\neq l_y$; without loss of generality $l_x<l_y$. Also assume $r_x=r_y$. Then $l_x+\epsilon<l_y$, so $x\|y$ both in $\mathbf{P}$ and in $\mathbf{P'}$.
The remaining case is, without loss of generality, $r_x=l_y$. Then $r_x+\epsilon<r_y$ (unless $l_y=r_y=r_x$, which was covered in the second case), so, again $x\|y$ both in $\mathbf{P}$ and in $\mathbf{P'}$.
\end{proof}
Now we return to the proof of the theorem. We will perform left and right compressions until no common endpoints remain except for twins. Let $x$, $y$ be two intervals with a common endpoint, but $x\neq y$. Let $\epsilon=\frac12\min\{|a-b|:\text{$a$ and $b$ are distinct endpoints}\}$, as above.
\begin{itemize}
\item
If $l_x=l_y$ and $r_x\neq r_y$, perform a right compression with $c=\min\{r_x,r_y\}$ and $\epsilon$.
\item
If $r_x=r_y$ and $l_x\neq l_y$, perform a left compression with $c=\max\{l_x,l_y\}$ and $\epsilon$.
\item If $l_x<r_x=l_y<r_y$ (or vice versa), either a left or a right compression will work with $c=r_x=l_y$.
\end{itemize}
Note that even though the definition of $\epsilon$ looks the same in every step, the actual value will change as the representation changes. Indeed, it is easy to see that $\epsilon$ is getting halved in every step.
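
The compression phase just described can be summarised in the following Python sketch (again only an illustration, reusing \texttt{left\_compression} and \texttt{right\_compression} from the previous sketch and assuming a twin-free input, as in part~\ref{part:1}; exact arithmetic such as \texttt{fractions.Fraction} should be used so that equality of endpoints is meaningful).
\begin{verbatim}
from itertools import combinations

def half_min_gap(intervals):
    # epsilon = half of the smallest distance between distinct endpoints
    ends = sorted({e for iv in intervals for e in iv})
    return min(b - a for a, b in zip(ends, ends[1:])) / 2

def make_distinguishing(intervals):
    # Apply compressions until no two distinct intervals share an endpoint.
    intervals = list(intervals)
    changed = True
    while changed:
        changed = False
        eps = half_min_gap(intervals)
        for i, j in combinations(range(len(intervals)), 2):
            (lx, rx), (ly, ry) = intervals[i], intervals[j]
            if (lx, rx) == (ly, ry):      # twins: handled in the second phase
                continue
            if lx == ly and rx != ry:     # shared left endpoints
                intervals = right_compression(intervals, min(rx, ry), eps)
            elif rx == ry and lx != ly:   # shared right endpoints
                intervals = left_compression(intervals, max(lx, ly), eps)
            elif rx == ly or ry == lx:    # a right endpoint meets a left endpoint
                intervals = left_compression(intervals, rx if rx == ly else ry, eps)
            else:
                continue                  # no shared endpoint for this pair
            changed = True
            break
    return intervals
\end{verbatim}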
If $\mathbf{P}$ started with a twin-free representation, then we have arrived at a distinguishing representation, so part~\ref{part:1} is proven.
If $\mathbf{P}$ had twins, those are still present in the representation. Let $x$ and $y$ be identical intervals of the representation, and let $\epsilon=\frac12\min\{|a-b|:\text{$a$ and $b$ are distinct endpoints}\}$ again. If $0\not\in S$, then the length of $x$ (and hence the length of $y$) is positive. Note that this length is at least $\epsilon$. Move $x$ by $\epsilon$ to the right, that is, replace $x$ with the interval $[l_x+\epsilon,r_x+\epsilon]$. The new representation no longer contains the twin $x$, $y$ and represents the same poset. Repeat this until all twins disappear.
\end{proof}
\section{Choice functions}
Let $\mathbf{I}$ be a representation of an interval order $(X, P)$. Kierstead and Trotter \cite{KT} defined choice functions: a choice function $f$ on $\mathbf{I}$ is an injection $f: X \to \mathbb{R}$ such that $l_x \leq f(x) \leq r_x$. For a given choice function $f$, define the linear order $L(f)$ by setting $x<y$ in $L(f)$ if and only if $f(x)<f(y)$ in $\mathbb{R}$. It is easy to see that for each choice function $f$ on $\mathbf{I}$, $L(f)$ is a linear extension of $P$. Indeed, whenever $x < y$ in $P$, the interval $I_x$ lies entirely to the left of $I_y$, hence for any choice function $f$ we have $f(x) < f(y)$.
In \cite{KT}, the following lemma is proven, which is specific to interval orders. We provide a different proof here, which hopefully provides some more insight.
\begin{lemma}\label{lemma:parts}
Let $(X, P)$ be an interval order, $X = X_1 \cup X_2 \cup\cdots\cup X_s$ be a partition. Let $L_i$ be a linear extension of $P(X_i)$ where $i= 1,2,\ldots,s$. Then there exists a linear extension $L$ of $P$ such that $L(X_i) = L_i$.
\end{lemma}
\begin{proof}
We will prove the lemma for $s=2$; the case of $s>2$ then follows by induction.
Let $X_1,X_2,L_1,L_2$ be defined as in the lemma. Define the relation $E=L_1\cup L_2\cup P$, and the directed graph $G=(X,E)$. It is sufficient to show that $G$ has no directed closed walk; indeed, if that is the case, the transitive closure $T$ of $G$ is an extension of the poset $\mathbf{P}$, and any linear extension $L$ of $T$ will satisfy the requirements of the conclusion of the lemma.
Suppose for a contradiction that $G$ contains a directed closed walk.
Since neither $G[X_1]$ nor $G[X_2]$ contains a directed closed walk, every directed closed walk in $G$ must have both an $X_1X_2$ and an $X_2X_1$ edge. We will call these edges \emph{cross-edges}. Let $C$ be a directed closed walk in $G$ with the minimum number of cross-edges.
As we noted, $C$ contains at least one $X_1X_2$ edge; let $(a,b)$ be such an edge. Let $(c,d)$ be the first $X_2X_1$ edge that follows $(a,b)$ in $C$. Observe that $c<d$, $a<b$ in $P$, and $b\leq c$ in $L_2$. If $d=a$, then $c<d=a<b$ in $P$, which would contradict $b\leq c$ in $L_2$. If $d>a$ in $L_1$, then we could eliminate the path $ab\ldots cd$ in $C$, replacing it with the single-edge path $ad$, and thereby decreasing the number of cross-edges in $C$, contradicting the minimality of $C$. (See Figure~\ref{fig:cycles}.)
\begin{figure}
\caption{Minimal oriented cycles}
\label{fig:cycles}
\end{figure}
So we conclude that $d<a$ in $L_1$, and recall that $b\leq c$ in $L_2$. If $b\leq c$ in $P$, then $a<b\leq c<d$ would contradict $d<a$ in $L_1$. (In particular, $b\neq c$.) Obviously, $b\not>c$ in $P$, so $b\|c$ in $P$. A similar argument shows that $d\| a$ in $P$. Hence the set $\{a,b,c,d\}$ induces a $\mathbf{2}+\mathbf{2}$ in $\mathbf{P}$, a contradiction.
\end{proof}
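For finite posets the merging step in the lemma is effectively a topological sort of the union of the relations. The following Python sketch is an illustration only (it assumes $P$ is given as a set of ordered pairs $(x,y)$ meaning $x<y$, and each part as a list sorted by its linear extension $L_i$).
\begin{verbatim}
from graphlib import TopologicalSorter   # Python 3.9+

def merge_linear_extensions(P, parts):
    # Build the digraph with edge set L_1 + ... + L_s + P and sort it
    # topologically; the lemma guarantees acyclicity for interval orders.
    ts = TopologicalSorter()
    for x, y in P:                        # x < y in P: x must precede y
        ts.add(y, x)
    for block in parts:
        for a, b in zip(block, block[1:]):
            ts.add(b, a)                  # consecutive elements of L_i
    return list(ts.static_order())        # raises CycleError on a cycle
\end{verbatim}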
Let $(X, P)$ be a poset, and $X=Y \cup Z$ be a partition of $X$. We say that $Y$ is over $Z$ in a linear extension $L$ of $P$ if $y>z$ in $L$ whenever $y \in Y$, $z \in Z$ and $y\|z$ in $P$.
Using choice functions, Kierstead and Trotter \cite{KT} provided a shorter proof of the lemma below, due to Rabinovitch \cite{R78}. We include the proof for completeness.
\begin{lemma}\label{lemma:partition}
Let $(X, P)$ be an interval order, and $X=X_1 \cup X_2$ be a partition of $X$, where $\mathbf{P}_1 =(X_1, P(X_1))$, $\mathbf{P}_2 =(X_2, P(X_2))$. Then
\[
\dim(\mathbf{P}) \leq \max\{\dim(\mathbf{P}_1), \dim(\mathbf{P}_2)\} + 2.
\]
\end{lemma}
\begin{proof}
Consider a distinguishing representation of $(X, P)$, and let $t=\max\{\dim(\mathbf{P}_1), \dim(\mathbf{P}_2)\}$. By Lemma~\ref{lemma:parts}, there exists a family $\mathcal{R}$ of $t$ linear extensions of $P$ such that the restrictions of the linear extensions in $\mathcal{R}$ to each $X_i$ form a realizer of $\mathbf{P}_i$, for $i=1,2$. Then define two choice functions $f_1$ and $f_2$, where $f_1(x)=l_x$, $f_2(x)=r_x$ for every $x \in X_1$, and $f_1(y)=r_y$, $f_2(y)=l_y$ for every $y \in X_2$. Let $L_1=L(f_1)$, $L_2=L(f_2)$. Clearly, $\mathcal{R} \cup \{L_1, L_2\}$ is a realizer of $P$.
\end{proof}
\begin{theorem}\label{theorem:eofchoice}
Let $\mathbf{P}=(X, P)$ be an interval order with no duplicated holdings, let $\mathbf{I}$ be a distinguishing representation of $\mathbf{P}$. If $L$ is an arbitrary linear extension of $P$, then there exists a choice function $f$ on $\mathbf{I}$, such that $L(f)=L$.
\end{theorem}
\begin{proof}
Without loss of generality, label the ground set $X$ by the linear extension $L=x_1, x_2, \ldots ,x_n$. Let $I$ be the function that maps each $x \in X$ to a closed interval $I_x$ in $\mathbb{R}$. If $I_x$ is an interval of positive length, let $l_x$ be the left endpoint of $I_x$ and $r_x$ be the right endpoint of $I_x$. Otherwise, if $I_x$ is a zero-length interval, we write $I_x \in D$ and let $m_x$ denote the single real number in $I_x$. Meanwhile, let $\epsilon$ be the smallest difference between any two distinct endpoints in $\mathbf{I}$. Since $\mathbf{I}$ is a distinguishing representation, we have $\epsilon>0$.
Without further ado, let us construct a choice function for $\mathbf{I}$ that gives us $L$. For convenience, let $f_i=f(x_i)$ for $i=1,2,\ldots, n$. First, define:
\begin{align*}
f_1&=\begin{cases}
m_{x_1}, &\text{$I_{x_1} \in D$}\\
l_{x_1}+\epsilon/2, &\text{$I_{x_1} \notin D$}\\
\end{cases}\\
\end{align*}
If $I_{x_1}$ is a zero-length interval in $\mathbf{I}$, then $f_1=m_{x_1} \in [l_{x_1}, r_{x_1}]$, since $l_{x_1}=m_{x_1}=r_{x_1}$ in this case. Otherwise, if $I_{x_1} \notin D$, then $l_{x_1}+\epsilon/2 < l_{x_1}+\epsilon \leq r_{x_1}$ by the definition of $\epsilon$. Then, for $i= 2,3,\ldots, n$, define:
\begin{align*}
f_i&=\begin{cases}
m_{x_i}, &\text{$I_{x_i} \in D$}\\
\max\{f_{i-1}+ \epsilon/{2^i}, l_{x_i}+\epsilon/{2^i}\}, &\text{$I_{x_i} \notin D$}\\
\end{cases}\\
\end{align*}
We shall check that $f_i \in [l_{x_i},r_{x_i}]$ for every $i= 1,2,\ldots, n$. We call $f_i$ good if $f_i \in [l_{x_i},r_{x_i}]$. We have already shown that $f_1$ is good, and we proceed by induction. Let us first show that $f_2$ is good. If one or both of $I_{x_1}$ and $I_{x_2}$ are zero-length intervals, it is clear that $f_1$ and $f_2$ are good. Assume that both are intervals of positive length. If $x_2>x_1$ in $P$, then $f_2=l_{x_2}+\epsilon/{4}<l_{x_2}+\epsilon \leq r_{x_2}$, so $f_2$ is good. Otherwise, if $x_1 \| x_2$ in $P$, there are two cases: either $l_{x_1}<l_{x_2}<r_{x_1}$ or $l_{x_2}<l_{x_1}<r_{x_2}$. In the first case, $l_{x_2}>l_{x_1}+\epsilon/2 +\epsilon/4$, hence $f_2=l_{x_2}+\epsilon/4$ and $f_2$ is good. In the second case, $f_2=f_1+\epsilon/4=l_{x_1}+\epsilon/2+\epsilon/4<l_{x_1}+\epsilon\leq r_{x_2}$, hence $f_2$ is also good. Now assume that $f_i$ is good for $i=1,2, \ldots ,k-1$ $(1< k \leq n)$; we need to show that $f_k$ is also good. If $I_{x_{k-1}}$ is a zero-length interval, then either $x_{k-1} < x_k$ or $x_{k-1} \| x_k$ in $P$, and it is clear that $f_k$ is good. So let us assume that $I_{x_{k-1}}$ has positive length. If, after taking the maximum, we obtain $f_k=l_{x_k}+\epsilon/{2^k}$, then $f_k$ is good. The case we need to check is when $f_k= f_{k-1}+ \epsilon/{2^k}>l_{x_k}+\epsilon/{2^k}$ and $f_{k-1}+ \epsilon/{2^k}$ is not in $[l_{x_k}, r_{x_k}]$, i.e.\ $f_{k-1}+ \epsilon/{2^k}>r_{x_k}$. We will show that this is impossible. By the construction of $f_{k-1}$ there exists an interval $I_{x_s}$, $0<s\leq k-1$ (see Figure~\ref{fig:case}), such that $f_{k-1}<l_{x_s}+\epsilon/2 +\epsilon/{2^2} +\cdots < l_{x_s}+\epsilon$. We have $x_s<x_{k-1}<x_k$ in $L$, hence $x_k \| x_s$. Then $f_{k-1}-\epsilon < l_{x_s}<r_{x_k}<f_{k-1}$; notice that $r_{x_k}=m_{x_k}$ if $I_{x_k}$ is a zero-length interval, but both cases give us $r_{x_k}-l_{x_s}< \epsilon$, which is a contradiction. Hence $f_{k}$ is good, and we have $f_i \in [l_{x_i}, r_{x_i}]$ for each $i=1,2, \ldots ,n$. For the rest of the proof, it is easy to see that $f_i \geq f_{i-1} + \epsilon/2^{i} > f_{i-1}$, hence $L(f)=L$.
\begin{figure}\label{fig:case}
\end{figure}
\end{proof}
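The recursion in the proof is constructive and can be implemented directly, as in the following Python sketch (an illustration only; the representation is assumed to be distinguishing and given as a dictionary mapping each element to its pair of endpoints, and \texttt{L} is the linear extension as a list).
\begin{verbatim}
def choice_function(intervals, L):
    ends = sorted({e for iv in intervals.values() for e in iv})
    eps = min(b - a for a, b in zip(ends, ends[1:]))  # smallest endpoint gap
    f, prev = {}, None
    for i, x in enumerate(L, start=1):
        l, r = intervals[x]
        if l == r:                        # zero-length interval: no choice
            f[x] = l
        elif prev is None:                # f_1 = l_{x_1} + eps/2
            f[x] = l + eps / 2
        else:                             # f_i = max{f_{i-1}, l_{x_i}} + eps/2^i
            f[x] = max(f[prev], l) + eps / 2**i
        prev = x
    return f
\end{verbatim}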
\section{Dimension of interval orders using two lengths}
In \cite{KTY}, the following theorem is proven.
\begin{theorem}\label{theorem:ocdim}
If $\mathbf{P}$ is an interval order that has a representation in which every interval is of length $0$ or $1$, then $\dim(\mathbf{P}) \leq 3$.
\end{theorem}
In \cite{KTY}, the authors defined two disjoint sets of incomparable pairs, neither of which contains an alternating cycle; hence there exist linear extensions that reverse all the incomparable pairs in each of the sets. The remaining incomparable pairs can be reversed in one extra linear extension. Here we provide a shorter proof using choice functions, which gives the three linear extensions that realize the interval order directly.
\begin{proof}
Let $\mathbf{P}$ be a twin-free interval order, and let $\mathbf{I}$ be a distinguishing representation of $\mathbf{P}$ which consists only of length-$0$ and length-$1$ intervals. Let $\mathbf{U} = (U, P(U))$ be the subposet of $\mathbf{P}$ consisting of all the points represented by intervals of length $1$ in $\mathbf{I}$, and $\mathbf{D}$ be the subposet of $\mathbf{P}$ consisting of all the points represented by intervals of length $0$ in $\mathbf{I}$. Let $D$ be the ground set of $\mathbf{D}$. For each element $x\in D$, use $R_x$ to denote the unique real number in the interval representing $x$. Partition $U$ into antichains $A_1, A_2,\ldots,A_t$ by taking the minimal elements successively. It is easy to see that $x<y$ in $P$ for every $x \in A_i$, $y \in A_{i+2}$. Let $A_\mathrm{odd} = \{x \in U: x \in A_i \text{ for some $i \in [t]$ with $i$ odd}\}$, and $A_\mathrm{even}=U-A_\mathrm{odd}$.
Let $f_1$, $f_2$ be choice functions on $\mathbf{I}$, defined as follows.
\begin{align*}
f_1(x)&=\begin{cases}
l_x, &\text{$x \in A_\mathrm{odd}$}\\
r_x, &\text{$x \in A_\mathrm{even}$}\\
R_x, &\text{$x \in D$}\\
\end{cases}\\
f_2(x)&=\begin{cases}
r_x, &\text{$x \in A_\mathrm{odd}$}\\
l_x, &\text{$x \in A_\mathrm{even}$}\\
R_x, &\text{$x \in D$}\\
\end{cases}
\end{align*}
Then let $L_1=L(f_1)$ and $L_2=L(f_2)$; both are linear extensions of $P$. It is clear that each incomparable pair $\{x, y\}$ with $x \in A_\mathrm{odd}$, $y \in A_\mathrm{even}$ is reversed in the two linear extensions, as are the incomparable pairs $\{x, y\}$ with $x \in U$, $y \in D$. The only incomparable pairs that still need to be reversed are those in which both points lie in the same $A_i$; we can reverse all of them in:
\[
L_3 = L_1^d(A_1) < L_1^d(A_2) <\cdots< L_1^d(A_t).
\]
Hence, $\{L_1, L_2, L_3\}$ is a realizer of $\mathbf{P}$.
\end{proof}
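The partition into antichains and the two choice functions used above are easy to compute. The following Python sketch is an illustration only (the names are ours; $P$ is a set of ordered pairs and the representation is a dictionary of endpoint pairs).
\begin{verbatim}
def antichain_partition(P, X):
    # Blocks A_1, A_2, ... obtained by repeatedly removing minimal elements.
    remaining, blocks = set(X), []
    while remaining:
        minimal = {x for x in remaining
                   if not any((y, x) in P for y in remaining)}
        blocks.append(minimal)
        remaining -= minimal
    return blocks

def zero_one_choice_functions(intervals, P):
    # f_1 and f_2 from the proof: A_odd takes l_x in f_1 and r_x in f_2,
    # A_even the other way round; zero-length intervals take their point.
    U = [x for x, (l, r) in intervals.items() if l != r]
    blocks = antichain_partition(P, U)
    odd = set().union(*blocks[0::2])      # A_1, A_3, ... (0-based slicing)
    f1, f2 = {}, {}
    for x, (l, r) in intervals.items():
        if l == r:
            f1[x] = f2[x] = l
        elif x in odd:
            f1[x], f2[x] = l, r
        else:
            f1[x], f2[x] = r, l
    return f1, f2
\end{verbatim}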
\section{Dimension of interval orders with representation using multiple positive lengths}
Let $r,s>0$. Recall that $C(\{r,s\})$ denotes the class of interval orders that have a representation, in which every interval is of length $r$ or $s$.
Rabinovitch \cite{R87} proved that the dimension of a semiorder is at most 3. Here, we prove the following bound on the dimension of posets in $C(\{r, s\})$.
\begin{proposition}Let $\mathbf{P}\in C(\{r, s\})$.
Then $\dim(\mathbf{P}) \leq 5$.
\end{proposition}
\begin{proof}
Let $\mathbf{P}\in C(\{r,s\})$, and consider a representation of $\mathbf{P}$.
We can partition $\mathbf{P}$ into the union of two semiorders, $\mathbf{S_r}$ and $\mathbf{S_s}$, which consist of the intervals of length $r$ and $s$, respectively. Since the dimension of a semiorder is at most 3, we can apply Lemma~\ref{lemma:partition} to conclude
\[
\dim(\mathbf{P}) \leq \max\{\dim(\mathbf{S_r}), \dim(\mathbf{S_s})\} + 2 \leq 5.
\]
\end{proof}
Let $f(r)$ be the maximum dimension of an interval order having a representation consisting of intervals of at most $r$ different positive lengths. By partitioning such an interval order into the union of $r$ semiorders and then using similar techniques, we obtain the following bound for $f(r)$.
\begin{proposition}
$f(r) \leq \lceil\lg r\rceil+3$.
\end{proposition}
Whether these bounds are tight is not known. Even for Proposition~6.1, the existence of an interval order in the class $C(\{r, s\})$ of dimension $4$ remains open.
\section{Dimension of interval orders in $C(\alpha)$}
Recall that $C(\alpha)$ is the family of posets that have a representation with intervals of lengths between $1$ and $\alpha$.
\begin{theorem}
Let $\mathbf{P}=(X, P)$ be an interval order with a representation such that each interval is of length $1$ except for one interval, which is of length between $0$ and $2$ (inclusive). Then $\dim(\mathbf{P}) \leq 3$.
\end{theorem}
\begin{proof}
We may assume that $\mathbf{P}$ has no duplicated holdings. By Theorem~\ref{theorem:representations},
it has a distinguishing representation; fix one of these. Let $x_0$ be the interval whose length is not $1$. Let $m_0$ be the midpoint of $x_0$, and let $A_0$ be the set of intervals that contain $m_0$. (See Figure~\ref{fig:onediff}.)
Let $U_0 = \{x \in X: l_x>m_0 \}$, and let $D_0= \{x \in X: r_x<m_0\}$. Let $A_1$ be the set of minimal elements of $U_0$, and let $U_i=U_{i-1} - A_i$, where $A_i$ is the set of minimal elements of $U_{i-1}$ for $i=1,2,\ldots, k$. Similarly, let $B_1$ be the set of maximal elements of $D_0$, and let $D_i=D_{i-1} - B_i$, where $B_i$ is the set of maximal elements of $D_{i-1}$ for $i=1,2,\ldots, s$. Hence we have a partition $P_1$ of $\mathbf{P}$: $B_s \cup\cdots\cup B_1\cup A_0 \cup A_1\cup \cdots \cup A_k$.
For any elements $x$ and $y$, where $x \in A_0$, $y \in A_2$, we have $x<y$ in $P$. Indeed, if $y$ is in $A_2$, there must be an element $w$ in $A_1$ such that $m_0 < l_w<r_w<l_y$. Since $w$ has length 1, we have $l_y>m_0+1$. Given that $x_0$ has length between 0 and 2 inclusive with midpoint $m_0$, and that every other interval in $A_0$ has length 1 and contains $m_0$, we have $r_x \leq m_0+1<l_y$. By symmetry, it can be proved that $x<y$ for every $x\in B_2$, $y \in A_0$. In addition, from the property of semiorders, $x<y$ for every $x\in A_i$, $y \in A_{i+2}$, and for every $x \in B_{j+2}$, $y \in B_j$, where $i=1,2, \ldots, k-2$, $j=1,2, \ldots, s-2$. Finally, for every $x \in B_1$, $y \in A_1$, clearly $x<y$ since $r_x<m_0<l_y$.
Hence if we relabel the parts of the partition $P_1$ from left to right as $S_1, \ldots, S_n$, we have $x<y$ for every $x \in S_i$, $y \in S_{i+2}$, where $i=1,2,\ldots, n-2$. Meanwhile, each $S_i$ is an antichain. Then we apply a method similar to the one in the proof of Theorem~\ref{theorem:ocdim}. Let $S_\mathrm{odd}$ (resp.\ $S_\mathrm{even}$) denote the union of the $S_i$ with $i$ odd (resp.\ even), and let $f_1, f_2$ be the choice functions defined as follows:
\begin{align*}
f_1(x)&=\begin{cases}
l_x, &\text{$x \in S_\mathrm{odd}$}\\
r_x, &\text{$x \in S_\mathrm{even}$}\\
\end{cases}\\
f_2(x)&=\begin{cases}
r_x, &\text{$x \in S_\mathrm{odd}$}\\
l_x, &\text{$x \in S_\mathrm{even}$}\\
\end{cases}
\end{align*}
Let $L_1=L(f_1)$, $L_2=L(f_2)$, and let $L_3 = L_1^d(S_1) < L_1^d(S_2) <\cdots< L_1^d(S_n)$. Clearly $\{L_1, L_2, L_3\}$ is a realizer of $\mathbf{P}$.
\end{proof}
\begin{theorem}\label{thm:C2}
Let $\mathbf{P}\in C(2)$. Then $\dim(\mathbf{P}) \leq 4$.
\end{theorem}
\begin{proof}
Let $\mathbf{P}\in C(2)$, where $\mathbf{P}=(X, P)$. Fix a distinguishing representation of $\mathbf{P}$. We will think of the elements of $\mathbf{P}$ as intervals, and we will use the notation $l_x$, $r_x$ to denote the left and right endpoints of $x$ in $X$, respectively.
We again apply the technique of partitioning the poset by successively removing minimal elements. To be precise, we let $A_1$ be the set of minimal elements of $X$, and let $X_1=X-A_1$. We define $A_i$ recursively as follows: assuming that $A_{i-1}$ and $X_{i-1}$ are defined, we let $A_i$ be the set of minimal elements of $X_{i-1}$, and we let $X_i=X_{i-1}-A_{i}$.
We will show that for all $i$, $A_i<A_{i+3}$; that is, whenever $x\in A_i$ and $y\in A_{i+3}$, we have $x<y$.
Let $i$ be a positive integer, and let $x\in A_i$, $y\in A_{i+3}$. Since $x\not> y$, we just have to prove that $x$ and $y$ can not be incomparable. There exists $z_2\in A_{i+2}$ such that $z_2<y$; and so on, $z_1\in A_{i+1}$ with $z_1<z_2$, and $z_0\in A_i$ with $z_0<z_1$.
Note that
\[
l_y>r_{z_2}\geq l_{z_2}+1>r_{z_1}+1\geq l_{z_1}+2>r_{z_0}+2.
\]
Since $z_0,x\in A_i$, either $z_0=x$ or $z_0\| x$; in both cases $l_x\leq r_{z_0}$. From these we conclude that $l_y>l_x+2$. If $x\|y$, then $r_x\geq l_y$, which would make the length of $x$ more than $2$. So we conclude $x<y$, as desired.
We define three linear extensions with choice functions that reverse most critical pairs. Let the choice functions $f_0,f_1,f_2$ be defined by
\begin{align*}
f_i(x)=\begin{cases}
r_x, &\text{$x \in A_j$ with $j\equiv i\mod{3}$}\\
l_x, &\text{otherwise}.
\end{cases}
\end{align*}
Let $L_1,L_2,L_3$ be the linear extensions defined by these choice functions.
If $x\|y$, and $x\in A_i$, $y\in A_j$ with $i< j$ (say), then $l_x<l_y\leq r_x$, which means that $x$ and $y$ appear in both orders among $L_1,L_2,L_3$. So we only need to reverse critical pairs whose two points lie in a single $A_i$. This can be done with one extra linear extension:
\[
L_4 = L_1^d(A_1) < L_1^d(A_2) <\cdots< L_1^d(A_t).
\]
\begin{figure}\label{fig:onediff}
\end{figure}
\end{proof}
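The same machinery gives the three choice functions in this proof. A Python sketch (an illustration only, reusing \texttt{antichain\_partition} from the earlier sketch):
\begin{verbatim}
def c2_choice_functions(intervals, P):
    # f_0, f_1, f_2: f_i takes r_x on the blocks A_j with j = i (mod 3)
    # and l_x everywhere else.
    blocks = antichain_partition(P, list(intervals))
    block_of = {x: j for j, A in enumerate(blocks, start=1) for x in A}
    return [{x: (r if block_of[x] % 3 == i else l)
             for x, (l, r) in intervals.items()}
            for i in range(3)]
\end{verbatim}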
It is open whether there is a poset in $C(2)$ that is actually four-dimensional. It feels unlikely that adding all numbers between $1$ and $2$ as possible lengths would not increase the dimension from that of semiorders, but finding a four-dimensional poset in $C(2)$ has resisted our efforts.
Recall that $C[\alpha, \beta]$ denotes the class of interval orders that can be represented with intervals of lengths in the range $[\alpha, \beta]$. Note that $C(\alpha) = C[1, \alpha]$. We use $f(C[\alpha, \beta])$ to denote the least upper bound of the dimension of posets in the class $C[\alpha, \beta]$. We just proved that $f(C[1, 2]) \leq 4$.
\begin{theorem}
For $t\geq 2$,
$f(C(t))=f(C[1, t]) \leq 2\lceil \lg\lg t\rceil+4$.
\end{theorem}
\begin{proof}
Let $n=2^{2^{\lceil \lg\lg t\rceil}}$. Since $n\geq t$, it is clear that $f(C[1,t])\leq f(C[1,n])$. We will show by induction that $f(C[1,n])\leq 2\lceil\lg\lg n\rceil+4=2\lg\lg n+4$.
For $n=2$, the statement reduces to Theorem~\ref{thm:C2}. Let $n>2$ be an integer. Note that in this case $\lceil\lg\lg t\rceil\geq 1$, so $n\geq 4$ and $n$ is a perfect square. Let $m=\sqrt{n}=2^{2^{\lceil\lg\lg t\rceil-1}}$. If $\mathbf{P}\in C[1,n]$, then we can partition the intervals of a representation of $\mathbf{P}$ into ``short'' intervals of length at most $m$, and ``long'' intervals of length at least $m$. (Intervals of length $m$, if any, can be placed arbitrarily.) By Lemma~\ref{lemma:partition}, Observation~\ref{obs:scaling}, and the induction hypothesis,
\begin{multline*}
f(C[1,n])\leq \max\{f(C[1,m]),f(C[m,n])\}+2=f(C[1,m])+2\leq\\
2(\lceil \lg\lg t\rceil-1)+4+2=
2\lceil \lg\lg t\rceil+4.
\end{multline*}
\end{proof}
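As an illustration of the recursion (a worked instance only, not needed for the proof), for $t=16$ we have $n=16$ and $m=4$, and two applications of the recursion give
\[
f(C[1,16])\leq f(C[1,4])+2\leq f(C[1,2])+4\leq 4+4=8=2\lceil\lg\lg 16\rceil+4,
\]
using Theorem~\ref{thm:C2} in the last inequality.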
\end{document}
\begin{document}
\title{The reverse Yang-Mills-Higgs flow in a neighbourhood of a critical point}
\author{Graeme Wilkin}
\address{Department of Mathematics, National University of Singapore, Singapore 119076}
\email{[email protected]}
\date{\today}
\begin{abstract}
The main result of this paper is a construction of solutions to the reverse Yang-Mills-Higgs flow converging in the $C^\infty$ topology to a critical point. The construction uses only the complex gauge group action, which leads to an algebraic classification of the isomorphism classes of points in the unstable set of a critical point in terms of a filtration of the underlying Higgs bundle.
Analysing the compatibility of this filtration with the Harder-Narasimhan-Seshadri double filtration gives an algebraic criterion for two critical points to be connected by a flow line. As an application, we can use this to construct Hecke modifications of Higgs bundles via the Yang-Mills-Higgs flow. When the Higgs field is zero (corresponding to the Yang-Mills flow), this criterion has a geometric interpretation in terms of secant varieties of the projectivisation of the underlying bundle inside the unstable manifold of a critical point, which gives a precise description of broken and unbroken flow lines connecting two critical points. For non-zero Higgs field, at generic critical points the analogous interpretation involves the secant varieties of the spectral curve of the Higgs bundle.
\end{abstract}
\maketitle
\thispagestyle{empty}
\baselineskip=16pt
\section{Introduction}
There is a well-known relationship between the Yang-Mills heat flow on a Riemann surface and the notion of stability from algebraic geometry. This began with work of Atiyah and Bott \cite{AtiyahBott83} and continued with Donaldson's proof \cite{Donaldson83} of the Narasimhan-Seshadri theorem \cite{NarasimhanSeshadri65} and subsequent work of Daskalopoulos \cite{Daskal92} and Rade \cite{Rade92}, which shows that the Yang-Mills flow converges to a unique critical point which is isomorphic to the graded object of the Harder-Narasimhan-Seshadri double filtration of the initial condition. In the setting of Higgs bundles, a theorem of Hitchin \cite{Hitchin87} and Simpson \cite{Simpson88} shows that a polystable Higgs bundle is gauge equivalent to the minimum of the Yang-Mills-Higgs functional and that this minimum is achieved by the heat flow on the space of metrics. The results of \cite{Wilkin08} show that the theorem of Daskalopoulos and Rade described above extends to the Yang-Mills-Higgs flow on the space of Higgs bundles over a compact Riemann surface. More generally, when the base manifold is compact and K\"ahler, then these results are due to \cite{Donaldson85}, \cite{Donaldson87-2}, \cite{UhlenbeckYau86}, \cite{Simpson88}, \cite{DaskalWentworth04}, \cite{Sibley15}, \cite{Jacob15} and \cite{LiZhang11}.
Continuing on from these results, it is natural to investigate flow lines between critical points. Naito, Kozono and Maeda \cite{NaitoKozonoMaeda90} proved the existence of an unstable manifold of a critical point for the Yang-Mills functional, however their method does not give information about the isomorphism classes in the unstable manifold, and their proof requires a manifold structure on the space of connections (which is not true for the space of Higgs bundles). Recent results of Swoboda \cite{Swoboda12} and Janner-Swoboda \cite{JannerSwoboda15} count flow lines for a perturbed Yang-Mills functional, however these perturbations destroy the algebraic structure of the Yang-Mills flow, and so there does not yet exist an algebro-geometric description of the flow lines in the spirit of the results described in the previous paragraph. Moreover, one would also like to study flow lines for the Yang-Mills-Higgs functional, in which case the perturbations do not necessarily preserve the space of Higgs bundles, which is singular.
The purpose of this paper is to show that in fact there is an algebro-geometric description of the flow lines connecting given critical points of the Yang-Mills-Higgs functional over a compact Riemann surface. As an application, we show that the Hecke correspondence for Higgs bundles studied by Witten in \cite{witten-hecke} has a natural interpretation in terms of gradient flow lines. Moreover, for the Yang-Mills flow, at a generic critical point there is a natural embedding of the projectivisation of the underlying bundle inside the unstable set of the critical point, and the results of this paper show that the isomorphism class of the limit of the downwards flow is determined if the initial condition lies in one of the secant varieties of this embedding, giving us a geometric criterion to distinguish between broken and unbroken flow lines. For the Yang-Mills-Higgs flow the analogous picture involves the secant varieties of the space of Hecke modifications compatible with the Higgs field. At generic critical points of the Yang-Mills-Higgs functional this space of Hecke modifications is the spectral curve of the Higgs bundle.
The basic setup for the paper is as follows. Let $E \rightarrow X$ be a smooth complex vector bundle over a compact Riemann surface with a fixed Hermitian metric and let $\mathcal{B}$ denote the space of Higgs pairs on $E$. The \emph{Yang-Mills-Higgs functional} is
\begin{equation*}
\mathop{\rm YMH}\nolimits(\bar{\partial}_A, \phi) := \| F_A + [\phi, \phi^*] \|_{L^2}^2
\end{equation*}
and the \emph{Yang-Mills-Higgs flow} is the downwards gradient flow of $\mathop{\rm YMH}\nolimits$ given by the equation \eqref{eqn:YMH-flow}. This flow is generated by the action of the complex gauge group $\mathcal{G}^\mathbb C$. Equivalently, one can fix a Higgs pair and allow the Hermitian metric on the bundle to vary, in which case the flow becomes a nonlinear heat equation on the space of Hermitian metrics (cf. \cite{Donaldson85}, \cite{Simpson88}). At a critical point for this flow the Higgs bundle splits into Higgs subbundles and on each subbundle the Higgs structure minimises $\mathop{\rm YMH}\nolimits$. The \emph{unstable set} of a critical point $(\bar{\partial}_A, \phi)$ consists of all Higgs pairs for which a solution to the $\mathop{\rm YMH}\nolimits$ flow \eqref{eqn:YMH-flow} exists for all negative time and converges in the smooth topology to $(\bar{\partial}_A, \phi)$ as $t \rightarrow - \infty$. The first theorem of the paper gives an algebraic criterion for a complex gauge orbit to intersect the unstable set for the Yang-Mills-Higgs flow.
\begin{theorem}[Criterion for convergence of reverse heat flow]\label{thm:unstable-set-intro}
Let $E$ be a complex vector bundle over a compact Riemann surface $X$, and let $(\bar{\partial}_A, \phi)$ be a Higgs bundle on $E$. Suppose that $E$ admits a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)}) = (E, \phi)$ by Higgs subbundles such that the quotients $(Q_k, \phi_k) := (E^{(k)}, \phi^{(k)}) / (E^{(k-1)}, \phi^{(k-1)})$ are Higgs polystable and $\slope(Q_k) < \slope(Q_j)$ for all $k < j$. Then there exists $g \in \mathcal{G}^\mathbb C$ and a solution to the reverse Yang-Mills-Higgs heat flow equation with initial condition $g \cdot (\bar{\partial}_A, \phi)$ which converges to a critical point isomorphic to $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$.
Conversely, if there exists a solution of the reverse heat flow from the initial condition $(\bar{\partial}_A, \phi)$ converging to a critical point $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$ where each $(Q_j, \phi_j)$ is polystable with $\slope(Q_k) < \slope(Q_j)$ for all $k < j$, then $(E, \phi)$ admits a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)}) = (E, \phi)$ whose graded object is isomorphic to $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$.
\end{theorem}
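To illustrate the statement, the following minimal rank two example is included for orientation only; it is a special case of the theorem, is not used elsewhere, and relies only on the standard fact that a Higgs line bundle is automatically stable. Suppose that $(E, \phi)$ sits in an extension of Higgs bundles
\begin{equation*}
0 \rightarrow (L_1, \phi_1) \rightarrow (E, \phi) \rightarrow (L_2, \phi_2) \rightarrow 0
\end{equation*}
with $L_1, L_2$ line bundles and $\deg L_1 < \deg L_2$. The two-step filtration $(L_1, \phi_1) \subset (E, \phi)$ has stable quotients $(Q_1, \phi_1) = (L_1, \phi_1)$ and $(Q_2, \phi_2) = (L_2, \phi_2)$ with $\slope(Q_1) < \slope(Q_2)$, so Theorem \ref{thm:unstable-set-intro} produces $g \in \mathcal{G}^\mathbb C$ and a solution of the reverse heat flow from the initial condition $g \cdot (\bar{\partial}_A, \phi)$ converging to the critical point $(L_1, \phi_1) \oplus (L_2, \phi_2)$.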
A key difficulty in the construction is the fact that the space of Higgs bundles is singular, and so the existing techniques for constructing unstable sets (see for example \cite{NaitoKozonoMaeda90} for the Yang-Mills flow or \cite[Sec. 6]{Jost05} in finite dimensions) cannot be directly applied since they depend on the manifold structure of the ambient space. One possibility is to study the unstable set of the function $\| F_A + [\phi, \phi^*] \|_{L^2}^2 + \| \bar{\partial}_A \phi \|_{L^2}^2$ on the space of all pairs $(\bar{\partial}_A, \phi)$ without the Higgs bundle condition $\bar{\partial}_A \phi = 0$, however one would then need a criterion to determine when a point in this unstable set is a Higgs bundle and one would also need a method to determine the isomorphism classes of these points.
The construction in the proof of Theorem \ref{thm:unstable-set-intro} is intrinsic to the singular space since it uses the action of the complex gauge group to map the unstable set for the linearised $\mathop{\rm YM}\nolimitsH$ flow (for which we can explicitly describe the isomorphism classes) to the unstable set for the Yang-Mills-Higgs flow. The method used here to compare the flow with its linearisation is called the ``scattering construction'' in \cite{Hubbard05} and \cite{Nelson69} since it originates in the study of wave operators in quantum mechanics (see \cite{ReedSimonVol3} for an overview). The method in this paper differs from \cite{Hubbard05} and \cite{Nelson69} in that (a) the construction here is done using the gauge group action in order to preserve the singular space and (b) the distance-decreasing formula for the flow on the space of metrics \cite{Donaldson85} is used here in order to avoid constructing explicit local coordinates as in \cite{Hubbard05} (the construction of \cite{Hubbard05} requires a manifold structure around the critical points).
As a consequence of Theorem \ref{thm:unstable-set-intro}, we have an algebraic criterion for critical points to be connected by flow lines.
\begin{corollary}[Algebraic classification of flow lines]\label{cor:algebraic-flow-line-intro}
Let $x_u = (\bar{\partial}_{A_u}, \phi_u)$ and $x_\ell = (\bar{\partial}_{A_\ell}, \phi_\ell)$ be critical points with $\mathop{\rm YMH}\nolimits(x_u) > \mathop{\rm YMH}\nolimits(x_\ell)$. Then $x_u$ and $x_\ell$ are connected by a flow line if and only if there exists a Higgs pair $(E, \phi)$ which has a Harder-Narasimhan-Seshadri double filtration whose graded object is isomorphic to $x_\ell$, and which also admits a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)}) = (E, \phi)$ by Higgs subbundles such that the quotients $(Q_k, \phi_k) := (E^{(k)}, \phi^{(k)}) / (E^{(k-1)}, \phi^{(k-1)})$ are polystable and satisfy $\slope(Q_k) < \slope(Q_j)$ for all $k < j$, and the graded object $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$ is isomorphic to $x_u$.
\end{corollary}
As an application of the previous theorem, we can construct Hecke modifications of Higgs bundles via Yang-Mills-Higgs flow lines. First consider the case of a Hecke modification at a single point (minuscule Hecke modifications in the terminology of \cite{witten-hecke}).
\begin{theorem}\label{thm:hecke-intro}
\begin{enumerate}
\item Let $0 \rightarrow (E', \phi') \rightarrow (E, \phi) \stackrel{v}{\rightarrow} \mathbb C_p \rightarrow 0$ be a Hecke modification such that $(E, \phi)$ is stable and $(E', \phi')$ is semistable, and let $L_u$ be a line bundle with $\deg L_u + 1 < \slope(E') < \slope(E)$. Then there exist sections $\phi_u, \phi_\ell \in H^0(K)$, a line bundle $L_\ell$ with $\deg L_\ell = \deg L_u + 1$ and a metric on $E \oplus L_u$ such that $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E_{gr}', \phi_{gr}') \oplus (L_\ell, \phi_\ell)$ are critical points connected by a $\mathop{\rm YMH}\nolimits$ flow line, where $(E_{gr}', \phi_{gr}')$ is isomorphic to the graded object of the Seshadri filtration of $(E', \phi')$.
\item Let $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E', \phi') \oplus (L_\ell, \phi_\ell)$ be critical points connected by a $\mathop{\rm YMH}\nolimits$ flow line such that $L_u, L_\ell$ are line bundles with $\deg L_\ell = \deg L_u + 1$, $(E, \phi)$ is stable and $(E', \phi')$ is polystable with $\deg L_u + 1 < \slope(E') < \slope(E)$. Then $(E', \phi')$ is the graded object of the Seshadri filtration of a Hecke modification of $(E, \phi)$. If $(E', \phi')$ is Higgs stable then it is a Hecke modification of $(E, \phi)$.
\end{enumerate}
\end{theorem}
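For orientation, here is one consistent choice of numerical invariants in part (1); the numbers are purely illustrative and are not used elsewhere in the paper. Take $\rank(E) = 2$ and $\deg E = 1$, so that a Hecke modification at a single point has $\deg E' = 0$, $\slope(E) = \tfrac{1}{2}$ and $\slope(E') = 0$. The condition $\deg L_u + 1 < \slope(E') < \slope(E)$ then forces $\deg L_u \leq -2$, and $\deg L_\ell = \deg L_u + 1 \leq -1$. Note that $\deg E' + \deg L_\ell = \deg E + \deg L_u$, so the degree lost in passing from $E$ to $E'$ is absorbed by the line bundle summand; this is consistent with both critical points living on the same underlying smooth bundle $E \oplus L_u$.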
For Hecke modifications defined at multiple points, we can inductively apply the above theorem to obtain a criterion for two critical points to be connected by a broken flow line. For non-negative integers $m, n$, the definition of $(m, n)$ stability is given in Definition \ref{def:m-n-stable}. The space $\mathcal{N}_{\phi, \phi_u}$ denotes the space of Hecke modifications compatible with the Higgs fields $\phi$ and $\phi_u$ (see Definition \ref{def:Hecke-compatible}).
\begin{corollary}\label{cor:broken-hecke-intro}
Consider a Hecke modification $0 \rightarrow (E', \phi') \rightarrow (E, \phi) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ defined by $n > 1$ distinct points $\{ v_1, \ldots, v_n \} \subset \mathbb{P} E^*$, where $(E, \phi)$ is $(0,n)$ stable. If there exists $\phi_u \in H^0(K)$ such that $v_1, \ldots, v_n \in \mathcal{N}_{\phi, \phi_u}$, then there is a broken flow line connecting $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E_{gr}', \phi_{gr}') \oplus (L_\ell, \phi_\ell)$, where $(E_{gr}', \phi_{gr}')$ is the graded object of the Seshadri filtration of the semistable Higgs bundle $(E', \phi')$.
\end{corollary}
For any gradient flow, given upper and lower critical sets $C_u$ and $C_\ell$, one can define the spaces $\mathcal{F}_{\ell, u}$ (resp. $\mathcal{BF}_{\ell,u}$) of unbroken flow lines (resp. broken or unbroken flow lines) connecting these sets, and the spaces $\mathcal{P}_{\ell, u}$ (resp. $\mathcal{BP}_{\ell, u}$) of pairs of critical points connected by an unbroken flow line (resp. broken or unbroken flow line). These spaces are correspondences with canonical projection maps to the critical sets given by the projection taking a flow line to its upper and lower endpoints.
\begin{equation*}
\xymatrix{
& \mathcal{F}_{\ell, u} \ar[d] \ar@/_/[ddl] \ar@/^/[ddr] & & & \mathcal{BF}_{\ell, u} \ar[d] \ar@/_/[ddl] \ar@/^/[ddr] \\
& \mathcal{P}_{\ell, u} \ar[dl] \ar[dr] & & & \mathcal{BP}_{\ell, u} \ar[dl] \ar[dr] & \\
C_\ell & & C_u & C_\ell & & C_u
}
\end{equation*}
In the setting of Theorem \ref{thm:hecke-intro}, let $d = \deg E$ and $r = \rank(E)$ and let $C_u$ and $C_\ell$ be the upper and lower critical sets. There are natural projection maps to the moduli space of semistable Higgs bundles $C_u \rightarrow \mathcal{M}_{ss}^{Higgs}(r, d)$ and $C_\ell \rightarrow \mathcal{M}_{ss}^{Higgs}(r, d-1)$. Suppose that $\gcd(r,d) = 1$ so that $\mathcal{M}_{ss}^{Higgs}(r, d)$ consists solely of stable Higgs pairs and hence any Hecke modification is semistable. Since the flow is $\mathcal{G}$-equivariant, there is an induced correspondence variety, denoted $\mathcal{M}_{\ell, u}$ in the diagram below.
\begin{equation*}
\xymatrix{
& \mathcal{F}_{\ell, u} \ar[d] \ar@/_/[ddl] \ar@/^/[ddr] & \\
& \mathcal{P}_{\ell, u} \ar[dl] \ar[dr] \ar[d] & \\
C_\ell \ar[d] & \mathcal{M}_{\ell,u} \ar[dl] \ar[dr] & C_u \ar[d] \\
\mathcal{M}_{ss}^{Higgs}(r, d-1) & & \mathcal{M}_{ss}^{Higgs}(r,d)
}
\end{equation*}
As a consequence of Theorem \ref{thm:hecke-intro}, we have the following result.
\begin{corollary}\label{cor:hecke-correspondence-intro}
$\mathcal{M}_{\ell,u}$ is the Hecke correspondence.
\end{corollary}
A natural question from Floer theory is to ask whether a pair of critical points connected by a broken flow line can also be connected by an unbroken flow line, i.e.\ whether $\mathcal{BP}_{\ell, u} = \mathcal{P}_{\ell, u}$. The methods used to prove the previous theorems can be used to investigate this question using the geometry of secant varieties of the space of Hecke modifications inside the unstable set of a critical point. For critical points of the type studied in Theorem \ref{thm:hecke-intro}, generically this space of Hecke modifications is the spectral curve of the Higgs field, and so the problem reduces to studying secant varieties of the spectral curve. This is explained in detail in Section \ref{sec:secant-criterion}. In particular, Corollary \ref{cor:rank-2-classification} gives a complete classification of the unbroken flow lines on the space of rank $2$ Higgs bundles.
The paper is organised as follows. In Section \ref{sec:preliminaries} we set the notation for the paper, prove a slice theorem around the critical points and derive some preliminary estimates for the $\mathop{\rm YM}\nolimitsH$ flow near a critical point. Section \ref{sec:local-analysis} contains the main part of the analysis of the $\mathop{\rm YM}\nolimitsH$ flow around a critical point, which leads to the proof of Theorem \ref{thm:unstable-set-intro} and Corollary \ref{cor:algebraic-flow-line-intro}. In Section \ref{sec:hecke} we interpret the analytic results on flow lines in terms of the Hecke correspondence, leading to the proof of Theorem \ref{thm:hecke-intro}, Corollary \ref{cor:broken-hecke-intro} and Corollary \ref{cor:hecke-correspondence-intro}. Appendix \ref{sec:uniqueness} contains a proof that a solution to the reverse $\mathop{\rm YM}\nolimitsH$ flow with a given initial condition is necessarily unique.
{\bf Acknowledgements.} I would like to thank George Daskalopoulos, M.S. Narasimhan and Richard Wentworth for their interest in the project, as well as George Hitching for useful discussions about \cite{ChoeHitching10} and \cite{Hitching13}.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{The Yang-Mills-Higgs flow on a compact Riemann surface}
Fix a compact Riemann surface $X$ and a smooth complex vector bundle $E \rightarrow X$. Choose a normalisation so that $\mathop{\rm vol}\nolimits(X) = 2\pi$. Fix $\bar{\partial}_{A_0} : \Omega^0(E) \rightarrow \Omega^{0,1}(E)$ such that $\bar{\partial}_{A_0}$ is $\mathbb C$-linear and satisfies the Leibniz rule $\bar{\partial}_{A_0}(fs) = (\bar{\partial} f) s + f (\bar{\partial}_{A_0} s)$ for all $f \in \Omega^0(X)$ and $s \in \Omega^0(E)$. Let $\mathcal{A}^{0,1}$ denote the affine space $\bar{\partial}_{A_0} + \Omega^{0,1}(\mathop{\rm End}\nolimits(E))$. A theorem of Newlander and Nirenberg identifies $\mathcal{A}^{0,1}$ with the space of holomorphic structures on $E$. The \emph{space of Higgs bundles on $E$} is
\begin{equation}
\mathcal{B} := \{ (\bar{\partial}_A, \phi) \in \mathcal{A}^{0,1} \times \Omega^{1,0}(\mathop{\rm End}\nolimits(E)) \, : \, \bar{\partial}_A \phi = 0 \}
\end{equation}
The complex gauge group is denoted $\mathcal{G}^\mathbb C$ and acts on $\mathcal{B}$ by $g \cdot (\bar{\partial}_A, \phi) = (g \bar{\partial}_A g^{-1}, g \phi g^{-1})$. If $X$ is a complex manifold with $\dim_\mathbb C X > 1$ then we impose the extra integrability conditions $(\bar{\partial}_A)^2 = 0$ and $\phi \wedge \phi = 0$. Given a Hermitian metric on $E$, let $\mathcal{A}$ denote the space of connections on $E$ compatible with the metric, and let $\mathcal{G} \subset \mathcal{G}^\mathbb C$ denote the subgroup of unitary gauge transformations. The Chern connection construction defines an injective map $\mathcal{A}^{0,1} \hookrightarrow \mathcal{A}$ which is a diffeomorphism when $\dim_\mathbb C X = 1$. Given $\bar{\partial}_A \in \mathcal{A}^{0,1}$, let $F_A$ denote the curvature of the Chern connection associated to $\bar{\partial}_A$ via the Hermitian metric. The metric induces a pointwise norm $| \cdot | : \Omega^2(\mathop{\rm End}\nolimits(E)) \rightarrow \Omega^0(X, \mathbb R)$ and together with the Riemannian structure on $X$ an $L^2$ norm $\| \cdot \|_{L^2} : \Omega^2(\mathop{\rm End}\nolimits(E)) \rightarrow \mathbb R$. The \emph{Yang-Mills-Higgs functional} $\mathop{\rm YMH}\nolimits : \mathcal{B} \rightarrow \mathbb{R}$ is defined by
\begin{equation}\label{eqn:YMH-def}
\mathop{\rm YMH}\nolimits(\bar{\partial}_A, \phi) = \| F_A + [\phi, \phi^*] \|_{L^2}^2 = \int_X | F_A + [ \phi, \phi^*] |^2 \, dvol
\end{equation}
When $\dim_\mathbb C X = 1$, the Hodge star defines an isometry $* : \Omega^2(\mathop{\rm End}\nolimits(E)) \rightarrow \Omega^0(\mathop{\rm End}\nolimits(E)) \cong \mathop{\rm Lie}\nolimits \mathcal{G}^\mathbb C$. For any initial condition $(A_0, \phi_0)$, the following equation for $g_t \in \mathcal{G}^\mathbb C$ has a unique solution on the interval $t \in [0, \infty)$ (cf. \cite{Donaldson85}, \cite{Simpson88})
\begin{equation}\label{eqn:gauge-flow}
\frac{\partial g}{\partial t} g_t^{-1} = - i * ( F_{g_t \cdot A_0} + [g_t \cdot \phi_0, (g_t \cdot \phi_0)^*]) , \quad g_0 = \id .
\end{equation}
This defines a unique curve $(A_t, \phi_t) = g_t \cdot (A_0, \phi_0) \in \mathcal{B}$ which is a solution to the downwards Yang-Mills-Higgs gradient flow equations
\begin{align}\label{eqn:YMH-flow}
\begin{split}
\frac{\partial A}{\partial t} & = i \bar{\partial}_A * (F_A + [\phi, \phi^*]) \\
\frac{\partial \phi}{\partial t} & = i \left[ \phi, *(F_A + [\phi, \phi^*]) \right] .
\end{split}
\end{align}
for all $t \in [0, \infty)$. The result of \cite[Thm 3.1]{Wilkin08} shows that the solutions converge to a unique limit $(A_\infty, \phi_\infty)$ which is a critical point of $\mathop{\rm YM}\nolimitsH$. Moreover \cite[Thm. 4.1]{Wilkin08} shows that the isomorphism class of this limit is determined by the graded object of the Harder-Narasimhan-Seshadri double filtration of the initial condition $(A_0, \phi_0)$.
\begin{remark}
Since the space $\mathcal{B}$ of Higgs bundles is singular, we define the gradient of $\mathop{\rm YMH}\nolimits$ as the gradient of the function $\| F_A + [\phi, \phi^*] \|_{L^2}^2$ defined on the ambient smooth space $T^* \mathcal{A}^{0,1}$, which contains the space $\mathcal{B}$ as a singular subset. When the initial condition is a Higgs bundle, a solution to \eqref{eqn:YMH-flow} is generated by the action of the complex gauge group $\mathcal{G}^\mathbb C$, which preserves $\mathcal{B}$. Therefore the solution to \eqref{eqn:YMH-flow} is contained in $\mathcal{B}$, and so from now on we consider the flow \eqref{eqn:YMH-flow} as a well-defined gradient flow on the singular space $\mathcal{B}$. Throughout the paper we define a critical point to be a stationary point for the Yang-Mills-Higgs flow.
\end{remark}
\begin{definition}\label{def:critical-point}
A \emph{critical point} for $\mathop{\rm YMH}\nolimits$ is a pair $(A, \phi) \in \mathcal{B}$ such that
\begin{equation}\label{eqn:critical-point}
\bar{\partial}_A * (F_A + [\phi, \phi^*]) = 0, \quad \text{and} \quad \left[ \phi, *(F_A + [\phi, \phi^*]) \right] = 0 .
\end{equation}
\end{definition}
The critical point equations \eqref{eqn:critical-point} imply that the bundle $E$ splits into holomorphic $\phi$-invariant sub-bundles $E_1 \oplus \cdots \oplus E_n$, such that the induced Higgs structure $(\bar{\partial}_{A_j}, \phi_j)$ on the bundle $E_j$ minimises the Yang-Mills-Higgs functional on the bundle $E_j$ (cf. \cite[Sec. 5]{AtiyahBott83} for holomorphic bundles and \cite[Sec. 4]{Wilkin08} for Higgs bundles). In particular, each Higgs pair $(\bar{\partial}_{A_j}, \phi_j)$ is polystable. The decomposition is not necessarily unique due to the possibility of polystable bundles with the same slope, however it is unique if we impose the condition that $(E_1, \phi_1) \oplus \cdots \oplus (E_n, \phi_n)$ is the graded object of the socle filtration of the Higgs bundle $(E, \phi)$ (see \cite{HuybrechtsLehn97} for holomorphic bundles and \cite[Sec. 4]{BiswasWilkin10} for Higgs bundles). With respect to this decomposition the curvature $*(F_A + [\phi, \phi^*]) \in \Omega^0(\mathop{\rm ad}\nolimits(E)) \cong \mathop{\rm Lie}\nolimits(\mathcal{G})$ has the following block-diagonal form
\begin{equation}\label{eqn:critical-curvature}
i * (F_A + [\phi, \phi^*]) = \left( \begin{matrix} \lambda_1 \cdot \id_{E_1} & 0 & \cdots & 0 \\ 0 & \lambda_2 \cdot \id_{E_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \cdot \id_{E_n} \end{matrix} \right)
\end{equation}
where $\lambda_j =\slope(E_j)$ and we order the eigenvalues by $\lambda_j < \lambda_k$ for all $j < k$.
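As a consistency check on the normalisation $\mathop{\rm vol}\nolimits(X) = 2\pi$ (a standard Chern-Weil computation, recorded here only for the reader's convenience and using the convention $\deg(E_j) = \frac{i}{2\pi} \int_X \mathop{\rm Tr}\nolimits(F_{A_j})$), taking the trace of the $j$-th block of \eqref{eqn:critical-curvature} and integrating over $X$ gives
\begin{equation*}
\lambda_j \rank(E_j) \mathop{\rm vol}\nolimits(X) = \int_X \mathop{\rm Tr}\nolimits \left( i F_{A_j} \right) = 2 \pi \deg(E_j) ,
\end{equation*}
since $\mathop{\rm Tr}\nolimits [\phi_j, \phi_j^*] = 0$ pointwise, and so $\lambda_j = \deg(E_j)/\rank(E_j) = \slope(E_j)$ as stated above.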
\begin{definition}
A \emph{Yang-Mills-Higgs flow line} connecting an upper critical point $x_u = (\bar{\partial}_{A_u}, \phi_u)$ and a lower critical point $x_\ell = (\bar{\partial}_{A_\ell}, \phi_\ell)$ is a continuous map $\gamma : \mathbb{R} \rightarrow \mathcal{B}$ such that
\begin{enumerate}
\item $\frac{d\gamma}{dt}$ satisfies the Yang-Mills-Higgs flow equations \eqref{eqn:YMH-flow}, and
\item $\lim_{t \rightarrow - \infty} \gamma(t) = x_u$ and $\lim_{t \rightarrow \infty} \gamma(t) = x_\ell$, where the convergence is in the $C^\infty$ topology on $\mathcal{B}$.
\end{enumerate}
\end{definition}
\begin{definition}\label{def:unstable-set}
The \emph{unstable set} $W_{x_u}^-$ of a non-minimal critical point $x_u = (\bar{\partial}_{A_u}, \phi_u)$ is defined as the set of all points $y_0 \in \mathcal{B}$ such that a solution $y_t$ to the Yang-Mills-Higgs flow equations \eqref{eqn:YMH-flow} exists for all $t \in (-\infty, 0]$ and $y_t \rightarrow x_u$ in the $C^\infty$ topology on $\mathcal{B}$ as $t \rightarrow - \infty$.
\end{definition}
\subsection{A local slice theorem}\label{subsec:local-slice}
In this section we define local slices around the critical points and describe the isomorphism classes in the negative slice.
\begin{definition}\label{def:slice}
Let $x = (\bar{\partial}_A, \phi) \in \mathcal{B}$. The \emph{slice} through $x$ is the set of deformations orthogonal to the $\mathcal{G}^\mathbb C$ orbit at $x$.
\begin{equation}\label{eqn:slice-def}
S_x = \{ (a, \varphi) \in \Omega^{0,1}(\mathop{\rm End}\nolimits(E)) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)) \mid \bar{\partial}_A^* a -*[*\phi^*, \varphi] = 0, (\bar{\partial}_A + a, \phi + \varphi) \in \mathcal{B} \} .
\end{equation}
If $x$ is a critical point of $\mathop{\rm YMH}\nolimits$ with $\beta = *(F_A + [\phi, \phi^*])$, then the \emph{negative slice} $S_x^-$ is the subset
\begin{equation}\label{eqn:neg-slice-def}
S_x^- = \{ (a, \varphi) \in S_x \mid \lim_{t \rightarrow \infty} e^{i \beta t} \cdot (a, \varphi) = 0 \} .
\end{equation}
\end{definition}
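To make the definition concrete, consider (as a guiding special case only, not needed for any of the proofs) a critical point with two summands, $x = (E_1, \phi_1) \oplus (E_2, \phi_2)$ with $\slope(E_1) < \slope(E_2)$. Then $i\beta$ acts on $\mathop{\rm End}\nolimits(E)$ with negative eigenvalue precisely on $\mathop{\rm Hom}\nolimits(E_2, E_1)$, so
\begin{equation*}
S_x^- \subseteq \Omega^{0,1}(\mathop{\rm Hom}\nolimits(E_2, E_1)) \oplus \Omega^{1,0}(\mathop{\rm Hom}\nolimits(E_2, E_1)) ,
\end{equation*}
and for $\delta x = (a, \varphi) \in S_x^-$ the deformed pair $x + \delta x$ still preserves $E_1$, so it is an extension of $(E_2, \phi_2)$ by $(E_1, \phi_1)$. This is the simplest instance of the filtrations described in Lemma \ref{lem:classify-neg-slice} below.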
To prove Lemma \ref{lem:slice-theorem} and Proposition \ref{prop:filtered-slice-theorem} below, one needs to first define the slice on the $L_1^2$ completion of the space of Higgs bundles with the action of the $L_2^2$ completion of the gauge group. The following lemma shows that if the critical point $x$ is $C^\infty$ then the elements in the slice $S_x$ are also $C^\infty$.
\begin{lemma}\label{lem:slice-smooth}
Let $x = (\bar{\partial}_A, \phi)$ be a critical point of $\mathop{\rm YMH}\nolimits$ in the space of $C^\infty$ Higgs bundles, let $S_x$ be the set of solutions to the slice equations in the $L_1^2$ completion of the space of Higgs bundles and let $\delta x = (a, \varphi) \in S_x$. Then $\delta x$ is $C^\infty$.
\end{lemma}
\begin{proof}
The slice equations are
\begin{align*}
\bar{\partial}_A \varphi + [a, \phi] + [a, \varphi] & = 0 \\
\bar{\partial}_A^* a - *[\phi^*, *\varphi] & = 0
\end{align*}
Since $(a, \varphi) \in L_1^2$ and $(\bar{\partial}_A, \phi)$ is $C^\infty$, then the second equation above implies that $\bar{\partial}_A^* a \in L_1^2$ and so $a \in L_2^2$ by elliptic regularity. After applying Sobolev multiplication $L_2^2 \times L_1^2 \rightarrow L^4$, then $[a, \varphi] \in L^4$ and so the first equation above implies that $\bar{\partial}_A \varphi \in L^4$, hence $\varphi \in L_1^4$. Repeating this again shows that $\varphi \in L_2^2$, and then one can repeat the process inductively to show that $\delta x = (a, \varphi)$ is $C^\infty$.
\end{proof}
The following result gives a local description of the space of Higgs bundles in terms of the slice. The infinitesimal action of $\mathcal{G}^\mathbb C$ at $x \in \mathcal{B}$ is denoted by $\rho_x : \mathop{\rm Lie}\nolimits(\mathcal{G}^\mathbb C) \cong \Omega^0(\mathop{\rm End}\nolimits(E)) \rightarrow \Omega^{0,1}(\mathop{\rm End}\nolimits(E)) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E))$. Explicitly, for $x = (\bar{\partial}_A, \phi)$ and $u \in \Omega^0(\mathop{\rm End}\nolimits(E))$, we have $\rho_x(u) = -(\bar{\partial}_A u, [\phi, u])$. The $L^2$-orthogonal complement of $\ker \rho_x \subseteq \Omega^0(\mathop{\rm End}\nolimits(E))$ is denoted $(\ker \rho_x)^\perp$.
\begin{lemma}\label{lem:slice-theorem}
Fix $x \in \mathcal{B}$. Then the map $\psi : (\ker \rho_x)^\perp \times S_x \rightarrow \mathcal{B}$ given by $\psi(u, \delta x) = \exp(u) \cdot (x + \delta x)$ is a local homeomorphism.
\end{lemma}
\begin{proof}
The result of \cite[Prop. 4.12]{Wilkin08} shows that the statement is true for the $L_1^2$ completion of the space of Higgs bundles and the $L_2^2$ completion of the gauge group, and so it only remains to show that it remains true on restricting to the space of $C^\infty$ Higgs bundles with the action of the group of $C^\infty$ gauge transformations. The proof of this statement follows from elliptic regularity using the same method as \cite[Cor. 4.17]{Wilkin08}.
\end{proof}
Now let $x = (\bar{\partial}_A, \phi)$ be a critical point and let $\beta = \mu(x) := *(F_A + [\phi, \phi^*])$. The Lie algebra $\mathop{\rm Lie}\nolimits(\mathcal{G}^\mathbb C) \cong \Omega^0(\mathop{\rm End}\nolimits(E))$ decomposes into eigenbundles for the adjoint action of $i \beta$. We denote the positive, zero and negative eigenspaces respectively by $\Omega^0(\mathop{\rm End}\nolimits(E)_+)$, $\Omega^0(\mathop{\rm End}\nolimits(E)_0)$ and $\Omega^0(\mathop{\rm End}\nolimits(E)_-)$. The positive and negative eigenspaces are nilpotent Lie algebras with associated unipotent groups $\mathcal{G}_+^\mathbb C$ and $\mathcal{G}_-^\mathbb C$. The subgroups of $\mathcal{G}$ and $\mathcal{G}^\mathbb C$ consisting of elements commuting with $e^{i \beta}$ are denoted $\mathcal{G}_\beta$ and $\mathcal{G}_\beta^\mathbb C$ respectively. Since $\Omega^0(\mathop{\rm End}\nolimits(E)_0) \oplus \Omega^0(\mathop{\rm End}\nolimits(E)_+)$ is also a Lie algebra, there is a corresponding subgroup denoted $\mathcal{G}_*^\mathbb C$.
Let $\mathcal{G}_x$ and $\mathcal{G}_x^\mathbb C$ denote the respective isotropy groups of $x$ in $\mathcal{G}$ and $\mathcal{G}^\mathbb C$. There is an inclusion $(\mathcal{G}_x)^\mathbb C \subseteq \mathcal{G}_x^\mathbb C$, however at a non-minimal critical point the two groups may not be equal (in the context of reductive group actions on finite-dimensional affine spaces, this question has been studied by Sjamaar in \cite[Prop. 1.6]{Sjamaar95}). At a general critical point, the Higgs bundle $(E, \phi)$ splits into polystable Higgs sub-bundles $(E_1, \phi_1) \oplus \cdots \oplus (E_n, \phi_n)$, where we order by increasing slope. Then a homomorphism $u \in \mathop{\rm Hom}\nolimits(E_j, E_k)$ satisfying $u \phi_j = \phi_k u$ will be zero if $j > k$ since $(E_j, \phi_j)$ and $(E_k, \phi_k)$ are polystable and $\slope(E_j) > \slope(E_k)$, however if $j < k$ then the homomorphisms do not necessarily vanish in which case $(\mathcal{G}_x)^\mathbb C \subsetneq \mathcal{G}_x^\mathbb C$. Therefore $\ker \rho_x = \mathop{\rm Lie}\nolimits(\mathcal{G}_x^\mathbb C) \subset \Omega^0(\mathop{\rm End}\nolimits(E)_+) \oplus \Omega^0(\mathop{\rm End}\nolimits(E)_0)$, and so $\mathcal{G}_x^\mathbb C \subset \mathcal{G}_*^\mathbb C$.
The result of \cite[Thm. 2.16]{Daskal92} shows that the $L_2^2$ completion of the gauge group satisfies $\mathcal{G}^\mathbb C \cong \mathcal{G}_*^\mathbb C \times_{\mathcal{G}_\beta} \mathcal{G}$. We will use $(\ker \rho_x)_*^\perp$ to denote $(\ker \rho_x)^\perp \cap \left( \Omega^0(\mathop{\rm End}\nolimits(E)_+) \oplus \Omega^0(\mathop{\rm End}\nolimits(E)_0) \right)$. At a critical point $x$, the above argument shows that the isotropy group $\mathcal{G}_x^\mathbb C$ is contained in $\mathcal{G}_*^\mathbb C$, and so we have the following refinement of Lemma \ref{lem:slice-theorem}.
\begin{proposition}\label{prop:filtered-slice-theorem}
Let $x \in \mathcal{B}$ be a critical point of $\mathop{\rm YMH}\nolimits$. Then there exists a $\mathcal{G}$-invariant neighbourhood $U$ of $x$ and a neighbourhood $U'$ of $[\id, 0, 0]$ in $\mathcal{G} \times_{\mathcal{G}_\beta} \left( (\ker \rho_x)_*^\perp \times S_x \right)$ such that $\psi : U' \rightarrow U$ is a $\mathcal{G}$-equivariant homeomorphism.
\end{proposition}
The results of Section \ref{sec:local-analysis} show that the negative slice $S_x^-$ is complex gauge-equivalent to the unstable set $W_x^-$ of a critical point. The next lemma gives a complete classification of the isomorphism classes in $S_x^-$. Together with the results of Section \ref{sec:local-analysis}, this is used in Section \ref{sec:hecke} to classify critical points connected by flow lines.
\begin{lemma}\label{lem:classify-neg-slice}
Let $x = (E_1, \phi_1) \oplus \cdots \oplus (E_n, \phi_n)$ be a critical point of $\mathop{\rm YMH}\nolimits$ with curvature as in \eqref{eqn:critical-curvature} with the Higgs polystable subbundles ordered so that $\slope(E_j) < \slope(E_k)$ iff $j < k$. If $\delta x \in S_x^- \cap U$ then $x + \delta x$ has a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)})$ by Higgs subbundles such that the successive quotients are $(E^{(k)}, \phi^{(k)}) / (E^{(k-1)}, \phi^{(k-1)}) = (E_k, \phi_k)$. Conversely, there exists a neighbourhood $U$ of $x$ such that if a Higgs bundle $y = (E, \phi) \in U$ admits such a filtration then it is gauge equivalent to $x + \delta x$ for some $\delta x \in S_x^-$.
\end{lemma}
\begin{proof}
The first statement follows directly from the definition of the negative slice in \eqref{eqn:neg-slice-def}.
Let $\mathop{\rm End}\nolimits(E)_-$ be the subbundle of $\mathop{\rm End}\nolimits(E)$ corresponding to the negative eigenspaces of $i \beta$ and let $\rho_x^- : \Omega^0(\mathop{\rm End}\nolimits(E)_-) \rightarrow \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-)$ be the restriction of the infinitesimal action to the negative eigenspaces. Then
\begin{equation*}
\im \rho_x^- = \im \rho_x \cap \left( \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) \right)
\end{equation*}
and
\begin{equation}\label{eqn:negative-orthogonal}
\ker (\rho_x^-)^* \supseteq \ker \rho_x^* \cap \left( \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) \right)
\end{equation}
Since $\im \rho_x \oplus \ker \rho_x^* \cong \Omega^{0,1}(\mathop{\rm End}\nolimits(E)) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E))$ by \cite[Lem. 4.9]{Wilkin08} then
\begin{align*}
\Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) & = \left( \im \rho_x \oplus \ker \rho_x^* \right) \cap \left( \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) \right) \\
& \subseteq \left( \im \rho_x^- \oplus \ker (\rho_x^-)^* \right) \subseteq \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-)
\end{align*}
and so \eqref{eqn:negative-orthogonal} must be an equality, therefore $\Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) \cong \im \rho_x^- \oplus \ker (\rho_x^-)^*$. Therefore the function
\begin{align*}
\psi^- : (\ker \rho_x^-)^\perp \times \ker (\rho_x^-)^* & \rightarrow \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) \\
(u, \delta x) & \mapsto e^u \cdot (x + \delta x)
\end{align*}
is a local diffeomorphism at $0$. If $\delta x \in S_x^-$ then $x + \delta x \in \mathcal{B}$, and so $e^u \cdot (x + \delta x) \in \mathcal{B}$, since the complex gauge group preserves the space of Higgs bundles. Conversely, if $e^u \cdot (x + \delta x) \in \mathcal{B}$ then $x + \delta x \in \mathcal{B}$ and so $\delta x \in S_x^-$. Therefore $\psi^-$ restricts to a local homeomorphism $(\ker \rho_x^-)^\perp \times S_x^- \rightarrow \mathcal{B} \cap \left( \Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-) \right)$.
\end{proof}
The next two results concern a sequence of points $g_t \cdot z$ in a $\mathcal{G}^\mathbb C$ orbit which approach a critical point $x$ in the $L_k^2$ norm and for which $\mathop{\rm YMH}\nolimits(z) < \mathop{\rm YMH}\nolimits(x)$. Since $x$ is critical and $\mathop{\rm YMH}\nolimits(z) < \mathop{\rm YMH}\nolimits(x)$, we have $x \in \overline{\mathcal{G}^\mathbb C \cdot z} \setminus \mathcal{G}^\mathbb C \cdot z$, and therefore $\| g_t \|_{L_{k+1}^2} \rightarrow \infty$. The result below shows that the $C^0$ norm of the function $\sigma(h_t) = \mathop{\rm Tr}\nolimits(h_t) + \mathop{\rm Tr}\nolimits(h_t^{-1}) - 2 \rank (E)$ must also blow up.
\begin{lemma}\label{lem:GIT-C0-norm-blows-up}
Let $x \in \mathcal{B}$ be a critical point of $\mathop{\rm YMH}\nolimits$ and let $z \in \mathcal{B}$ be any point such that $\mathop{\rm YMH}\nolimits(z) < \mathop{\rm YMH}\nolimits(x)$. Suppose that there exists a sequence of gauge transformations $g_t \in \mathcal{G}^\mathbb C$ such that $g_t \cdot z \rightarrow x$ in $L_k^2$. Then the change of metric $h_t = g_t^* g_t$ satisfies $\sup_X \sigma(h_t) \rightarrow \infty$.
\end{lemma}
\begin{proof}
Let $U$ be the neighbourhood of $x$ from Lemma \ref{lem:slice-theorem}. Since $g_t \cdot z \rightarrow x$, then there exists $T$ such that $g_t \cdot z \in U$ for all $t \geq T$. Therefore there exists $f_t$ in a neighbourhood of the identity in $\mathcal{G}^\mathbb C$ such that $f_t \cdot g_t \cdot z \in S_x$. The uniqueness of the decomposition from the slice theorem shows that if $t > T$, then $f_{t} \cdot g_{t} \cdot z = f_{t, T} \cdot f_{T} \cdot g_{T} \cdot z$ with $f_{t,T} \in \mathcal{G}_x^\mathbb C$. Therefore $t \rightarrow \infty$ implies that $f_{t,T}$ diverges in $\mathcal{G}_x^\mathbb C$. Fix a point $p$ on the surface $X$, and let $\mathcal{G}_0^\mathbb C$ be the normal subgroup of complex gauge transformations that are the identity at $p$, as in \cite[Sec. 13]{AtiyahBott83}. We have the following short exact sequence of groups
\begin{equation*}
1 \rightarrow \mathcal{G}_0^\mathbb C \rightarrow \mathcal{G}^\mathbb C \rightarrow GL(n, \mathbb C) \rightarrow 1 .
\end{equation*}
Since $\mathcal{G}_0^\mathbb C$ acts freely on the space of connections (and hence on $\mathcal{B}$), restriction to the fibre over $p$ defines a bijective correspondence between $\mathcal{G}_x^\mathbb C \subset \mathcal{G}^\mathbb C$ and a subgroup of $GL(n, \mathbb C)$ via the exact sequence above. Therefore the fact that $f_{t,T}$ diverges in $\mathcal{G}_x^\mathbb C$ implies that the restriction of $f_{t,T}$ to the fibre over $p$ diverges in $GL(n, \mathbb C)$, and so the $C^0$ norm of $f_{t,T}$ diverges to $\infty$, and hence the same is true for $g_t = f_t^{-1} \cdot f_{t, T} \cdot f_T \cdot g_T$ since $g_T$ is fixed and both $f_t$ and $f_T$ are contained in a fixed neighbourhood of the identity in $\mathcal{G}^\mathbb C$. Therefore $\sup_X \sigma(h_t) \rightarrow \infty$.
\end{proof}
\begin{corollary}\label{cor:bounded-metric-away-from-critical}
Let $x$ be a critical point of $\mathop{\rm YMH}\nolimits$. Then for each neighbourhood $V$ of $x$ in the $L_k^2$ topology on $\mathcal{B}$ and each constant $C > 0$, there exists a neighbourhood $U$ of $x$ such that if $z \notin V$ and $\mathop{\rm YMH}\nolimits(z) < \mathop{\rm YMH}\nolimits(x)$, then $y = g \cdot z$ with $h = g^* g$ satisfying $\sup_X \sigma(h) \leq C$ implies that $y \notin U$.
\end{corollary}
\begin{proof}
If no such neighbourhood $U$ exists, then we can construct a sequence $y_t = g_t \cdot z$ converging to $x$ in $L_k^2$ such that $h_t = g_t^* g_t$ satisfies $\sup_X \sigma(h_t) \leq C$ for all $t$, however this contradicts the previous lemma.
\end{proof}
\subsection{Modifying the $\mathop{\rm YMH}\nolimits$ flow in a neighbourhood of a critical point}
Let $x$ be a critical point, let $\beta = \mu(x) = *(F_A + [\phi, \phi^*])$, and let $\mathcal{G}_*^\mathbb C$ be the subgroup defined in the previous section. In this section we explain how to modify the $\mathop{\rm YMH}\nolimits$ flow near $x$ so that the gauge transformation generating the flow is contained in $\mathcal{G}_*^\mathbb C$. The reason for modifying the flow is so that we can apply the distance-decreasing formula of Lemma \ref{lem:modified-distance-decreasing}, which is used for the convergence result of Section \ref{subsec:inverse-construction}.
Let $U$ be a $\mathcal{G}$-invariant neighbourhood of $x$ such that $U$ is homeomorphic to a neighbourhood of $[\id, 0, 0]$ in $\mathcal{G} \times_{\mathcal{G}_\beta} \left( (\ker \rho_x)_*^\perp \times S_x \right)$ by Proposition \ref{prop:filtered-slice-theorem}. Let $V \subset U$ be the image of $(\ker \rho_x)_*^\perp \times S_x$ under the homeomorphism from Proposition \ref{prop:filtered-slice-theorem}. For each $y \in V$, let $\gamma_-(y)$ be the component of $\mu(y)$ in $\Omega^0(\mathop{\rm End}\nolimits(E)_-)$. Since $\mu$ is $\mathcal{G}$-equivariant, we can extend $\gamma_-$ equivariantly from $V$ to all of $U$ using the action of $\mathcal{G}$. Define the map $\gamma : U \rightarrow \mathop{\rm Lie}\nolimits(\mathcal{G})$ by
\begin{equation}\label{eqn:def-gamma}
\gamma(y) = \gamma_-(y) - \gamma_-(y)^*
\end{equation}
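Note that $\gamma(y)^* = -\gamma(y)$ by construction, so $\gamma(y)$ does indeed take values in $\Omega^0(\mathop{\rm ad}\nolimits(E)) \cong \mathop{\rm Lie}\nolimits(\mathcal{G})$; this is what allows the correction term in \eqref{eqn:unitary-modification} below to be generated by a path of unitary gauge transformations.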
\begin{definition}\label{def:modified-flow}
The \emph{modified flow} with initial condition $y_0 \in U$ is the solution to
\begin{equation}\label{eqn:modified-flow}
\frac{dy}{dt} = - I \rho_y(\mu(y)) + \rho_y(\gamma(y)) .
\end{equation}
More explicitly, on the space of Higgs bundles $y = (\bar{\partial}_A, \phi)$ satisfies
\begin{align*}
\frac{\partial A}{\partial t} & = i \bar{\partial}_A * (F_A + [\phi, \phi^*]) - \bar{\partial}_A \gamma(\bar{\partial}_A, \phi) \\
\frac{\partial \phi}{\partial t} & = i [\phi, *(F_A + [\phi, \phi^*])] - [\phi, \gamma(\bar{\partial}_A, \phi)]
\end{align*}
\end{definition}
In analogy with \eqref{eqn:gauge-flow}, the modified flow is generated by the action of the gauge group $y_t = g_t \cdot y_0$, where $g_t$ satisfies the equation
\begin{equation}\label{eqn:modified-gauge-flow}
\frac{\partial g_t}{\partial t} g_t^{-1} = -i \mu(g_t \cdot y_0) + \gamma(g_t \cdot y_0), \quad g_0 = \id .
\end{equation}
As before, let $V \subset U$ be the image of $(\ker \rho_x)_*^\perp \times S_x$ under the homeomorphism from the slice theorem (Proposition \ref{prop:filtered-slice-theorem}). Note that if $y_0 \in V$ then $\frac{\partial g_t}{\partial t} g_t^{-1} \in \mathop{\rm Lie}\nolimits(\mathcal{G}_*^\mathbb C)$, so $g_t \in \mathcal{G}_*^\mathbb C$ and the solution to the modified flow remains in $V$ for as long as it remains in the neighbourhood $U$.
\begin{lemma}\label{lem:relate-flows}
Let $y_t = g_t \cdot y_0$ be the solution to the $\mathop{\rm YMH}\nolimits$ flow \eqref{eqn:gauge-flow} with initial condition $y_0$. Then there exists $s_t \in \mathcal{G}$ solving the equation
\begin{equation}\label{eqn:unitary-modification}
\frac{ds}{dt} s_t^{-1} = \gamma(s_t \cdot y_t), \quad s_0 = \id
\end{equation}
such that $\tilde{y}_t = s_t \cdot y_t$ is a solution to the modified flow equation \eqref{eqn:modified-flow} with initial condition $y_0$.
\end{lemma}
\begin{proof}
Since $\gamma$ is $\mathcal{G}$-equivariant then \eqref{eqn:unitary-modification} reduces to
\begin{equation*}
\frac{ds}{dt} s_t^{-1} = \mathop{\rm Ad}\nolimits_{s_t} \gamma(y_t) .
\end{equation*}
Since $\gamma(y_t) \in \mathop{\rm Lie}\nolimits(\mathcal{G})$ is already defined by the gradient flow $y_t$, then this equation reduces to solving an ODE on the fibres of the bundle, and therefore existence of solutions follows from ODE existence theory. Let $\tilde{g}_t = s_t \cdot g_t$. A calculation shows that
\begin{align*}
\frac{d \tilde{g}_t}{dt} \tilde{g}_t^{-1} & = \frac{ds}{dt} s_t^{-1} + \mathop{\rm Ad}\nolimits_{s_t} \left( \frac{dg}{dt} g_t^{-1} \right) \\
& = \gamma(s_t \cdot y_t) - i \mathop{\rm Ad}\nolimits_{s_t} \mu(y_t) \\
& = \gamma(\tilde{y}_t) - i \mu(\tilde{y}_t) \\
& = \gamma(\tilde{g}_t \cdot y_0) - i \mu(\tilde{g}_t \cdot y_0) ,
\end{align*}
and so $\tilde{y}_t = \tilde{g}_t \cdot y_0 = s_t \cdot y_t$ is a solution to the modified flow \eqref{eqn:modified-flow} with initial condition $y_0$.
\end{proof}
As a corollary, we see that the change of metric is the same for the YMH flow \eqref{eqn:gauge-flow} and the modified flow \eqref{eqn:modified-gauge-flow}: by Lemma \ref{lem:relate-flows} we have $\tilde{g}_t = s_t g_t$ with $s_t \in \mathcal{G}$ unitary, and so $\tilde{g}_t^* \tilde{g}_t = g_t^* s_t^* s_t g_t = g_t^* g_t$.
\begin{corollary}\label{cor:metrics-same}
Let $y_t = g_t \cdot y_0$ be a solution to the Yang-Mills-Higgs flow equation \eqref{eqn:gauge-flow} and $\tilde{y}_t = \tilde{g}_t \cdot y_0$ be a solution to the modified flow equation \eqref{eqn:modified-gauge-flow}. Then $h_t = g_t^* g_t = \tilde{g}_t^* \tilde{g}_t$.
\end{corollary}
Finally, we prove that convergence for the upwards $\mathop{\rm YMH}\nolimits$ flow implies convergence for the modified flow.
\begin{lemma}\label{lem:unstable-sets-same}
Let $x$ be a critical point and let $y_0 \in W_x^-$. Then the modified flow with initial condition $y_0$ exists for all $t \in (-\infty, 0]$ and converges in the $C^\infty$ topology to a point in $\mathcal{G} \cdot x$.
\end{lemma}
\begin{proof}
Let $y_t$ be the $\mathop{\rm YMH}\nolimits$ flow with initial condition $y_0$ and $\tilde{y}_t = s_t \cdot y_t$ the modified flow. By the definition of $W_x^-$ the $\mathop{\rm YMH}\nolimits$ flow exists for all $t \in (-\infty, 0]$ and $y_t \rightarrow x$ in the $C^\infty$ topology. Existence of the modified flow then follows from Lemma \ref{lem:relate-flows}. Proposition \ref{prop:exponential-convergence} shows that $y_t \rightarrow x$ exponentially in $L_k^2$ for all $k$, and so the same is true for $\gamma(y_t)$. Therefore the length of the modified flow line satisfies
\begin{align*}
\int_{-\infty}^0 \| I \rho_{\tilde{y}_t}(\mu(\tilde{y}_t)) - \rho_{\tilde{y}_t}(\gamma(\tilde{y}_t)) \|_{L_k^2} \, dt & = \int_{-\infty}^0 \| I \rho_{y_t}(\mu(y_t)) - \rho_{y_t}(\gamma(y_t)) \|_{L_k^2} \, dt \\
& \leq \int_{-\infty}^0 \| \rho_{y_t}(\mu(y_t)) \|_{L_k^2} \, dt + \int_{-\infty}^0 \| \rho_{y_t}(\gamma(y_t)) \|_{L_k^2} \, dt
\end{align*}
which is finite since the length $\int_{-\infty}^0 \| \rho_{y_t}(\mu(y_t)) \|_{L_k^2} \, dt$ of the $\mathop{\rm YMH}\nolimits$ flow line is finite, $y_t$ is bounded and $\gamma(y_t) \rightarrow 0$ exponentially. This is true for all $k$, and so the modified flow converges in the $C^\infty$ topology.
\end{proof}
\subsection{Preliminary estimates for the $\mathop{\rm YMH}\nolimits$ flow in a neighbourhood of a critical point}
Given eigenvalues for $i \beta$ labelled by $\lambda_1 \leq \cdots \leq \lambda_k < 0 \leq \lambda_{k+1} \leq \cdots$, for any $y \in S_x^-$ and any norm, we have the Lipschitz bounds
\begin{equation}\label{eqn:lipschitz-slice}
e^{\lambda_1 t} \| y - x \| \leq \| e^{i \beta t} \cdot y - x \| \leq e^{\lambda_k t} \| y - x \| .
\end{equation}
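For the reader's convenience, here is the elementary reason for \eqref{eqn:lipschitz-slice}, under the assumption (satisfied by the $L^2$-based norms used in this paper) that the chosen norm makes the eigenbundle decomposition orthogonal. Writing $y - x = \sum_{j \leq k} v_j$ with $v_j$ in the $\lambda_j$-eigenspace for the action of $i\beta$ (only the negative eigenvalues appear since $y \in S_x^-$), the linearised flow acts diagonally,
\begin{equation*}
e^{i \beta t} \cdot y - x = \sum_{j \leq k} e^{\lambda_j t} v_j ,
\end{equation*}
and since $\lambda_1 \leq \lambda_j \leq \lambda_k$ for every term, comparing coefficients gives \eqref{eqn:lipschitz-slice} for $t \geq 0$ (the inequalities reverse for $t \leq 0$).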
\begin{lemma}\label{lem:moment-map-quadratic}
For any critical point $x$ there exists $C>0$ such that for any $y \in S_{x}^-$, we have $\| \mu(y) - \beta \|_{C^0} \leq C \| y - x \|_{C^0}^2$.
\end{lemma}
\begin{proof}
Let $y \in S_x^-$ and define $\delta y := y-x \in V \cong T_x V$. Then the defining equation for the moment map shows that for all $v \in \mathfrak{k}$, we have
\begin{equation*}
d \mu_x (\delta y) \cdot v = \omega(\rho_x(v), \delta y) = \left< I \rho_x(v), \delta y \right>
\end{equation*}
By the definition of the slice, each $\delta y \in S_x^-$ is orthogonal to the infinitesimal action of $\mathcal{G}^\mathbb C$ at $x$, and so $\left< I \rho_x(v), \delta y \right>=0$ for all $v \in \mathfrak{k}$. Therefore $d \mu_x(\delta y) = 0$. Since the moment map $\mu(\bar{\partial}_A, \phi) = F_A + [\phi, \phi^*]$ is quadratic, then we have
\begin{equation*}
\| \mu(y) - \mu(x) \|_{C^0} \leq \| d\mu_x(\delta y) \|_{C^0} + C \| \delta y \|_{C^0}^2 = C \| \delta y \|_{C^0}^2 .
\end{equation*}
Since the moment map is $\mathcal{G}$-equivariant and the norms above are all $\mathcal{G}$-invariant, then the constant $C$ is independent of the choice of critical point in the orbit $\mathcal{G} \cdot x$.
\end{proof}
Given $g \in \mathcal{G}^\mathbb C$, let $g^*$ denote the adjoint with respect to the Hermitian metric on $E$ and let $\mathcal{G}$ act on $\mathcal{G}^\mathbb C$ by left multiplication. In every equivalence class of the space of metrics $\mathcal{G}^\mathbb C/ \mathcal{G}$ there is a unique positive definite self-adjoint section $h$, which we use from now on to represent elements of $\mathcal{G}^\mathbb C/ \mathcal{G}$. Given $h = g^* g \in \mathcal{G}^\mathbb C/ \mathcal{G}$, define $\mu_h : \mathcal{B} \rightarrow \Omega^0(\mathop{\rm End}\nolimits(E)) \cong \mathop{\rm Lie}\nolimits (\mathcal{G}^\mathbb C)$ by
\begin{equation}\label{eqn:def-muh}
\mu_h(y) = \mathop{\rm Ad}\nolimits_{g^{-1}} \left( \mu(g\cdot y) \right) .
\end{equation}
Since the moment map is $\mathcal{G}$-equivariant, then for any $k \in \mathcal{G}$ we have
\begin{equation*}
\mathop{\rm Ad}\nolimits_{g^{-1}} \mathop{\rm Ad}\nolimits_{k^{-1}} \left( \mu(k \cdot g \cdot y) \right) = \mathop{\rm Ad}\nolimits_{g^{-1}} \left( \mu(g\cdot y) \right)
\end{equation*}
and so $\mu_h$ is well-defined on $\mathcal{G}^\mathbb C/ \mathcal{G}$. The length of a geodesic in the space of positive definite Hermitian matrices is computed in \cite[Ch. VI.1]{Kobayashi87}. Following \cite[Prop. 13]{Donaldson85} (see also \cite[Prop. 6.3]{Simpson88}), it is more convenient to define the distance function $\sigma : \mathcal{G}^\mathbb C/ \mathcal{G} \rightarrow \mathbb R$
\begin{equation}\label{eqn:def-sigma}
\sigma(h) = \mathop{\rm Tr}\nolimits h + \mathop{\rm Tr}\nolimits h^{-1} - 2 \rank (E) .
\end{equation}
As explained in \cite{Donaldson85}, the function $\sup_X \sigma$ is not a norm in the complete metric space $\mathcal{G}^\mathbb C/ \mathcal{G}$, however we do have $h_t \stackrel{C^0}{\longrightarrow} h_\infty$ in $\mathcal{G}^\mathbb C/ \mathcal{G}$ if and only if $\sup_X \sigma(h_t h_\infty^{-1}) \rightarrow 0$. Note that if $h_1 = g_1^* g_1$ and $h_2 = g_2^* g_2$, then
\begin{equation}\label{eqn:metric-difference}
\sigma(h_1 h_2^{-1}) = \sigma \left( g_1^* g_1 g_2^{-1} (g_2^*)^{-1} \right) = \sigma \left( (g_1 g_2^{-1})^* g_1 g_2^{-1} \right) .
\end{equation}
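For completeness, the second equality in \eqref{eqn:metric-difference} is simply conjugation-invariance of the trace: since
\begin{equation*}
(g_1 g_2^{-1})^* (g_1 g_2^{-1}) = (g_2^*)^{-1} \left( g_1^* g_1 g_2^{-1} (g_2^*)^{-1} \right) g_2^* ,
\end{equation*}
the two arguments of $\sigma$ are conjugate to each other, and each term in \eqref{eqn:def-sigma} is unchanged under conjugation.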
Recall from \cite{Donaldson85}, \cite{Simpson88} that we have the following distance-decreasing formula for a solution to the downwards $\mathop{\rm YMH}\nolimits$ flow. Since the change of metric is the same for the modified flow by Corollary \ref{cor:metrics-same}, then \eqref{eqn:distance-decreasing} is also valid for the modified flow.
\begin{lemma}\label{lem:distance-decreasing}
Let $y_1, y_2 \in \mathcal{B}$ and suppose that $y_1 = g_0 \cdot y_2$ for some $g_0 \in \mathcal{G}^\mathbb C$. For $j = 1,2$, define $y_j(t)$ to be the solution of the $\mathop{\rm YMH}\nolimits$ flow \eqref{eqn:YMH-flow} with initial condition $y_j$. Define $g_t$ by $y_1(t) = g_t \cdot y_2(t)$ and let $h_t = g_t^* g_t$ be the associated change of metric. Then
\begin{equation}\label{eqn:distance-decreasing}
\left( \frac{\partial}{\partial t} + \Delta \right) \sigma(h_t) \leq 0 .
\end{equation}
\end{lemma}
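In practice \eqref{eqn:distance-decreasing} is used through the following standard consequence of the parabolic maximum principle (with the sign convention for $\Delta$ of \cite{Donaldson85}):
\begin{equation*}
\sup_X \sigma(h_t) \leq \sup_X \sigma(h_s) \quad \text{for all } t \geq s ,
\end{equation*}
so that two solutions of the flow which start at bounded distance in the space of metrics remain at bounded distance for all later times.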
Since $\mathop{\rm Lie}\nolimits(\mathcal{G}_*^\mathbb C) = \Omega^0(\mathop{\rm End}\nolimits(E))_0 \oplus \Omega^0(\mathop{\rm End}\nolimits(E))_+$ and the adjoint action of $e^{-i \beta t}$ is the identity on $\Omega^0(\mathop{\rm End}\nolimits(E))_0$ and strictly contracting on $\Omega^0(\mathop{\rm End}\nolimits(E))_+$, then we have the following lemma which is used in Section \ref{subsec:inverse-construction}.
\begin{lemma}\label{lem:modified-distance-decreasing}
Given any $g_0 \in \mathcal{G}_*^\mathbb C$, let $g_t = e^{-i \beta t} g_0 e^{i \beta t}$ and $h_t = g_t^* g_t$. Then $\frac{\partial}{\partial t} \sigma(h_t) \leq 0$.
\end{lemma}
As part of the proof of the distance-decreasing formula in \cite{Donaldson85} we also have the following inequalities. This result is used in the proof of Lemma \ref{lem:uniform-bound-sigma}.
\begin{lemma}\label{lem:metric-inequalities}
For any metric $h \in \mathcal{G}^\mathbb C / \mathcal{G}$ and any $y \in \mathcal{B}$, we have
\begin{align*}
-2i \mathop{\rm Tr}\nolimits \left( (\mu_h(y) - \mu(y)) h \right) + \Delta \mathop{\rm Tr}\nolimits(h) & \leq 0 \\
2i \mathop{\rm Tr}\nolimits \left( (\mu_h(y) - \mu(y)) h^{-1} \right) + \Delta \mathop{\rm Tr}\nolimits(h) & \leq 0 .
\end{align*}
\end{lemma}
\subsection{Exponential convergence of the backwards flow}
In this section we prove that if a solution to the backwards $\mathop{\rm YM}\nolimitsH$ flow converges to a critical point, then it must do so exponentially in each Sobolev norm.
\begin{proposition}\label{prop:exponential-convergence}
Let $y_t$ be a solution to the $\mathop{\rm YMH}\nolimits$ flow \eqref{eqn:YMH-flow} such that $\lim_{t \rightarrow -\infty} y_t = x$. Then for each positive integer $k$ there exist positive constants $C_1$ and $\eta$ such that $\| y_t - x \|_{L_k^2} \leq C_1 e^{\eta t}$ for all $t \leq 0$.
\end{proposition}
The proof of the proposition reduces to the following lemmas. First recall from the slice theorem that there is a unique decomposition
\begin{equation*}
y = e^u \cdot (x + z)
\end{equation*}
for $u \in (\ker \rho_x)^\perp$ and $z \in S_x$. We can further decompose $z = z_{\geq 0} + z_-$, where $z_- \in S_x^-$ is the component of $z$ in the negative slice and $z_{\geq 0} = z - z_-$. At the critical point $x$ we have the decomposition $\mathop{\rm End}\nolimits(E) \cong \mathop{\rm End}\nolimits(E)_+ \oplus \mathop{\rm End}\nolimits(E)_0 \oplus \mathop{\rm End}\nolimits(E)_-$ according to the eigenspaces of $i \beta$ (cf. Sec. \ref{subsec:local-slice}). Then with respect to this decomposition $z_{\geq 0}$ is the component of $z$ in $\Omega^{0,1}(\mathop{\rm End}\nolimits(E)_+ \oplus \mathop{\rm End}\nolimits(E)_0) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_+ \oplus \mathop{\rm End}\nolimits(E)_0)$ and $z_-$ is the component in $\Omega^{0,1}(\mathop{\rm End}\nolimits(E)_-) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_-)$. In terms of the action of $\beta = \mu(x)$ we have $\lim_{t \rightarrow \infty} e^{i \beta t} \cdot z_- = 0$ and $\lim_{t \rightarrow \infty} e^{- i \beta t} \cdot z_{\geq 0} = z_0$, where $z_0$ is the component of $z$ in $\Omega^{0,1}(\mathop{\rm End}\nolimits(E)_0) \oplus \Omega^{1,0}(\mathop{\rm End}\nolimits(E)_0)$. Note that if $y = e^u \cdot (x + z)$ is a Higgs bundle, then $x+z$ is a Higgs bundle since $e^u \in \mathcal{G}^\mathbb C$ preserves the space of Higgs bundles, however $x+z_{\geq 0}$ may not be a Higgs bundle as the pair $(\bar{\partial}_{A_{\geq 0}}, \phi_{\geq 0})$ representing $x + z_{\geq 0}$ may not satisfy $\bar{\partial}_{A_{\geq 0}} \phi_{\geq 0} = 0$. Even though $\phi_{\geq 0}$ may not be holomorphic, we can still apply the principle that curvature decreases in subbundles and increases in quotient bundles and follow the same idea as \cite[Sec. 8 \& 10]{AtiyahBott83} to prove the following lemma.
\begin{lemma}\label{lem:non-holomorphic-extensions}
\begin{enumerate}
\item $\mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0})) \geq \mathop{\rm YMH}\nolimits(x)$.
\item $\grad \mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0}))$ is tangent to the set $\{z_- = 0\}$.
\end{enumerate}
\end{lemma}
The next lemma shows that the component in the negative slice is decreasing exponentially.
\begin{lemma}
Let $y_t = e^u \cdot (x + z_{\geq 0} + z_-)$ be a solution to the $\mathop{\rm YMH}\nolimits$ flow such that $\lim_{t \rightarrow -\infty} y_t = x$. Then there exist positive constants $K_1$ and $K_2$ such that $\| z_- \|_{L_1^2}^2 \leq K_1 e^{K_2 t}$ for all $t \leq 0$.
\end{lemma}
\begin{proof}
The proof follows the idea of \cite[Sec. 10]{Kirwan84}. The downwards gradient flow equation for $z_-$ is
\begin{equation*}
\frac{\partial z_-}{\partial t} = L z_- + N_-(u, z_{\geq 0}, z_-)
\end{equation*}
where $L$ is a linear operator and the derivative of $N_-$ vanishes at the origin. Since $z_-$ is orthogonal to the $\mathcal{G}^\mathbb C$ orbit through $x$, then the Laplacian term in $\grad \mathop{\rm YMH}\nolimits$ vanishes on $z_-$ and so the linear part satisfies $e^{Lt} z_- = e^{-i \beta t} \cdot z_-$. Since $z_-$ is in the negative slice then there exists $\lambda_{min} > 0$ such that $\left< L z_- , z_- \right>_{L_1^2} \geq \lambda_{min} \| z_- \|_{L_1^2}^2$. Now Lemma \ref{lem:non-holomorphic-extensions} shows that the $\mathop{\rm YMH}\nolimits$ flow preserves the set $\{ z_- = 0 \}$, and so $N_-(u, z_{\geq 0}, 0) = 0$. Since $N_-$ is $C^1$ with vanishing derivative at the origin then for all $\varepsilon > 0$ there exists $\delta > 0$ such that if $\| y_t - x \|_{L_1^2} < \delta$ then
\begin{equation*}
\| N_-(u, z_{\geq 0}, z_-) \|_{L_1^2} \leq \varepsilon \| z_- \|_{L_1^2}
\end{equation*}
Therefore
\begin{equation*}
\frac{1}{2} \frac{\partial}{\partial t} \| z_- \|_{L_1^2}^2 = \left< L z_-, z_- \right>_{L_1^2} + \left< N_-(u, z_{\geq 0}, z_- ), z_- \right>_{L_1^2} \geq (\lambda_{min} - \varepsilon) \| z_- \|_{L_1^2}^2 ,
\end{equation*}
and so if $\varepsilon > 0$ is small enough (e.g. $\varepsilon < \frac{1}{2} \lambda_{min}$), then integrating this differential inequality from $t$ to $0$ gives positive constants $K_1$ and $K_2$ (with $K_2 = 2(\lambda_{min} - \varepsilon)$) such that $\| z_- \|_{L_1^2}^2 \leq K_1 e^{K_2 t}$ for all $t \leq 0$.
\end{proof}
The next lemma shows that the difference $\mathop{\rm YMH}\nolimits(x) - \mathop{\rm YMH}\nolimits(y_t)$ is decreasing exponentially.
\begin{lemma}\label{lem:f-exponential}
Let $y_t = e^u \cdot (x+z_{\geq 0} + z_-)$ be a solution to the $\mathop{\rm YMH}\nolimits$ flow such that $\lim_{t \rightarrow -\infty} y_t = x$. Then there exist positive constants $K_1'$ and $K_2'$ such that
\begin{equation*}
\mathop{\rm YMH}\nolimits(x) - \mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0} + z_-)) \leq K_1' e^{K_2' t}
\end{equation*}
for all $t \leq 0$.
\end{lemma}
\begin{proof}
Recall that the Morse-Kirwan condition from Lemma \ref{lem:non-holomorphic-extensions} implies
\begin{equation*}
\mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0})) - \mathop{\rm YMH}\nolimits(x) \geq 0
\end{equation*}
Since $x$ is a critical point of $\mathop{\rm YMH}\nolimits$, then for all $\varepsilon > 0$ there exists $\delta > 0$ such that if $\| y_t - x \|_{L_1^2} < \delta$ we have
\begin{equation*}
\mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0} + z_-)) - \mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0})) \geq -\varepsilon \| z_- \|_{L_1^2} .
\end{equation*}
Therefore
\begin{align*}
\mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0} + z_-)) - \mathop{\rm YMH}\nolimits(x) & = \mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0} + z_-)) - \mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0})) \\
& \quad \quad + \mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0})) - \mathop{\rm YMH}\nolimits(x) \\
& \geq - \varepsilon \|z_- \|_{L_1^2} \geq - \varepsilon \sqrt{K_1} e^{\frac{1}{2} K_2 t}
\end{align*}
Since $\mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0} + z_-))$ is monotone decreasing with $t$ and $\lim_{t \rightarrow -\infty} \mathop{\rm YMH}\nolimits(e^u \cdot (x+z_{\geq 0} + z_-)) = \mathop{\rm YMH}\nolimits(x)$, then $\mathop{\rm YMH}\nolimits(e^u \cdot (x + z_{\geq 0} + z_-)) \leq \mathop{\rm YMH}\nolimits(x)$, and so the above equation implies that
\begin{equation*}
\left| \mathop{\rm YMH}\nolimits(y_t) - \mathop{\rm YMH}\nolimits(x) \right| \leq K_1' e^{K_2' t}
\end{equation*}
for positive constants $K_1' = \varepsilon \sqrt{K_1}$ and $K_2' = \frac{1}{2} K_2$.
\end{proof}
\begin{lemma}\label{lem:interior-bound}
Let $y_t$ be a solution to the $\mathop{\rm YMH}\nolimits$ flow such that $y_t \rightarrow x$ as $t \rightarrow -\infty$. Then for each positive integer $k$ there exists a constant $C$ and a constant $\tau_0 \in \mathbb R$ such that
\begin{equation*}
\| y_\tau - x \|_{L_k^2} \leq C \int_{-\infty}^\tau \| \grad \mathop{\rm YMH}\nolimits(y_s) \|_{L^2} \, ds
\end{equation*}
for all $\tau \leq \tau_0$.
\end{lemma}
\begin{proof}
Recall the interior estimate from \cite[Lem. 7.3]{Rade92}, \cite[Prop. 3.6]{Wilkin08} which says that for all positive integers $k$ there exists a neighbourhood $U$ of $x$ in the $L_k^2$ topology and a constant $C$ such that if $y_t \in U$ for all $t \in [0, T]$ then
\begin{equation*}
\int_1^T \| \grad \mathop{\rm YMH}\nolimits(y_t) \|_{L_k^2} \, dt \leq C \int_0^T \| \grad \mathop{\rm YMH}\nolimits(y_t) \|_{L^2} \, dt .
\end{equation*}
The constant $C$ is uniform as long as the initial condition satisfies a uniform bound on the derivatives of the curvature of the underlying holomorphic bundle and the flow line $y_t$ remains in the fixed neighbourhood $U$ of the critical point $x$ (cf. \cite[Prop. A]{Rade92}). In particular, the estimates of \cite[Lem. 3.14, Cor 3.16]{Wilkin08} show that this bound on the curvature is satisfied for any initial condition along a given flow line $y_t$. \emph{A priori} the constant depends on $T$, however it can be made uniform in $T$ using the following argument. Let $C$ be the constant for $T = 2$. For any $T \geq 2$, let $N$ be an integer greater than $T$ such that $y_t \in U$ for all $t \in [0, N]$. We then have
\begin{align*}
\int_1^T \| \grad \mathop{\rm YMH}\nolimits(y_t) \|_{L_k^2} \, dt & \leq \sum_{n=1}^{N-1} \int_n^{n+1} \| \grad \mathop{\rm YMH}\nolimits(y_t) \|_{L_k^2} \, dt \\
& \leq C \sum_{n=1}^{N-1} \int_{n-1}^{n+1} \| \grad \mathop{\rm YMH}\nolimits(y_t) \|_{L^2} \, dt \\
& \leq 2 C \int_0^N \| \grad \mathop{\rm YMH}\nolimits(y_t) \|_{L^2} \, dt
\end{align*}
Since $\lim_{t \rightarrow - \infty} y_t = x$ in the $C^\infty$ topology, then for any $\varepsilon > 0$ there exists $\tau_0$ such that $\tau \leq \tau_0$ implies that $\| y_t - x \|_{L_k^2} < \varepsilon$ for all $t \leq \tau$ and therefore by choosing $\varepsilon$ small we can apply the above interior estimate on any interval $[t, \tau]$ for $\tau \leq \tau_0$. Therefore we have the bound
\begin{equation*}
\int_t^\tau \| \grad \mathop{\rm YMH}\nolimits(y_s) \|_{L_k^2} \, ds \leq 2C \int_{-\infty}^\tau \| \grad \mathop{\rm YMH}\nolimits(y_s) \|_{L^2} \, ds
\end{equation*}
For fixed $\tau$ the right-hand side of the above inequality is constant, and so
\begin{equation*}
\| y_\tau - x \|_{L_k^2} \leq \int_{-\infty}^\tau \| \grad \mathop{\rm YMH}\nolimits(y_s) \|_{L_k^2} \, ds \leq 2C \int_{-\infty}^\tau \| \grad \mathop{\rm YMH}\nolimits(y_s) \|_{L^2} \, ds
\end{equation*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:exponential-convergence}]
After possibly shrinking the neighbourhood $U$ from the previous lemma, we can apply the Lojasiewicz inequality (cf. \cite[Prop. 3.5]{Wilkin08}) which implies that
\begin{equation*}
\int_{-\infty}^\tau \| \grad \mathop{\rm YMH}\nolimits(y_s) \|_{L^2} \, ds \leq \frac{1}{C\theta} \left( f(x) - f(y_\tau) \right)^\theta
\end{equation*}
for constants $C > 0$ and $\theta \in (0, \frac{1}{2}]$, where $f$ denotes the functional $\mathop{\rm YMH}\nolimits$. Lemma \ref{lem:f-exponential} shows that
\begin{equation*}
\left( f(x) - f(y_\tau) \right)^\theta \leq (K_1')^\theta e^{\theta K_2' t}
\end{equation*}
for all $t \leq 0$. These two estimates together with the result of Lemma \ref{lem:interior-bound} show that
\begin{equation*}
\| y_t - x \|_{L_k^2} \leq C_1 e^{\eta t}
\end{equation*}
for some positive constants $C_1, \eta$ and all $t \leq 0$.
\end{proof}
\section{The isomorphism classes in the unstable set}\label{sec:local-analysis}
Given a critical point $x \in \mathcal{B}$, in this section we show that for each $y \in S_x^-$ there exists a smooth gauge transformation $g \in \mathcal{G}^\mathbb C$ such that $g \cdot y \in W_x^-$ (Proposition \ref{prop:convergence-group-action}), and conversely for each $y \in W_x^-$ there exists $g \in \mathcal{G}^\mathbb C$ such that $g \cdot y \in S_x^-$ (Proposition \ref{prop:unstable-maps-to-slice}). As a consequence, the isomorphism classes in the unstable set are in bijective correspondence with the isomorphism classes in the negative slice, and so we have a complete description of these isomorphism classes by Lemma \ref{lem:classify-neg-slice}. This leads to Theorem \ref{thm:algebraic-flow-line} which gives an algebraic criterion for two points to be connected by a flow line.
\subsection{Convergence of the scattering construction}\label{sec:scattering-convergence}
The goal of this section is to prove Proposition \ref{prop:convergence-group-action}, which shows that every point in the negative slice $S_x^-$ is complex gauge equivalent to a point in the unstable set $W_x^-$.
The construction involves flowing up towards the critical point on the slice using the linearisation of the $\mathop{\rm YM}\nolimitsH$ flow and then flowing down using the $\mathop{\rm YM}\nolimitsH$ flow. A similar idea is used by Hubbard in \cite{Hubbard05} for analytic flows around a critical point in $\mathbb C^n$, where the flow on the slice is defined by projecting the flow from the ambient space. Hubbard's construction uses the fact that the ambient space is a manifold to (a) define this projection to the negative slice, and (b) define local coordinates in which the nonlinear part of the gradient flow satisfies certain estimates in terms of the eigenvalues for the linearised flow \cite[Prop. 4]{Hubbard05}, which is necessary to prove convergence. This idea originated in the study of the existence of scattering states in classical and quantum mechanics. In the context of this paper, one can think of the linearised flow and the YMH flow as two dynamical systems and the goal is to compare their behaviour as $t \rightarrow - \infty$ (see \cite[Ch. XI.1]{ReedSimonVol3} for an overview). As noted in \cite{Hubbard05}, \cite{Nelson69} and \cite{ReedSimonVol3}, the eigenvalues of the linearised flow play an important role in comparing the two flows.
The method of this section circumvents the need for a local manifold structure by defining the flow on the slice using the linearised flow and then using the distance-decreasing property of the flow on the space of metrics from \cite{Donaldson85}, \cite{Simpson88} (cf. Lemma \ref{lem:distance-decreasing}) in place of the estimate of \cite[Prop. 4]{Hubbard05} on the nonlinear part of the flow. The entire construction is done in terms of the complex gauge group, and so it is valid on any subset preserved by $\mathcal{G}^\mathbb C$, thus avoiding any problems associated with the singularities in the space of Higgs bundles. Moreover, using this method it follows naturally from the Lojasiewicz inequality and the smoothing properties of the heat equation that the backwards $\mathop{\rm YM}\nolimitsH$ flow with initial condition in the unstable set converges in the $C^\infty$ topology.
\subsubsection{A $C^0$ bound on the metric}
First we derive an \emph{a priori} estimate on the change of metric along the flow. Fix an initial condition $y_0 \in S_x^-$ and let $\beta = \mu(x) = \Lambda(F_A + [\phi, \phi^*]) \in \Omega^0(\mathop{\rm ad}\nolimits(E)) \cong \mathop{\rm Lie}\nolimits(\mathcal{G})$. In this section we also use the function $\mu_h(y) = \mathop{\rm Ad}\nolimits_{g^{-1}} \left( \mu(g\cdot y) \right)$ from \eqref{eqn:def-muh}. The linearised flow with initial condition $y_0$ has the form $e^{- i \beta t} \cdot y_0$, and the $\mathop{\rm YM}\nolimitsH$ flow \eqref{eqn:gauge-flow} has the form $g_t \cdot y_0$. Let $f_t = g_t \cdot e^{i \beta t}$ and define $h_t = f_t^* f_t \in \mathcal{G}^\mathbb C / \mathcal{G}$. This is summarised in the diagram below.
\begin{figure}
\caption{Comparison of the gradient flow and the linearised flow.}
\end{figure}
\begin{lemma}\label{lem:derivative-difference}
For any initial condition $y_0 \in S_x^-$, the induced flow on $\mathcal{G}^\mathbb C / \mathcal{G}$ satisfies
\begin{equation*}
\frac{dh_t}{dt} = -2 i h_t \, \mu_h(e^{-i \beta t} \cdot y_0) + i \beta h_t + h_t (i\beta)
\end{equation*}
\end{lemma}
\begin{proof}
First compute
\begin{equation}\label{eqn:group-scattering}
\frac{df}{dt} f_t^{-1} = \frac{dg}{dt} g_t^{-1} + g_t (i\beta) e^{i\beta t} f_t^{-1} = -i \mu(g_t \cdot y_0) + f_t (i\beta) f_t^{-1}
\end{equation}
Then
\begin{align*}
\frac{dh}{dt} & = \frac{df^*}{dt} f_t + f_t^* \frac{df}{dt} \\
& = f_t^* \left( \frac{df}{dt} f_t^{-1} \right)^* f_t + f_t^* \left( \frac{df}{dt} f_t^{-1} \right) f_t \\
& = - f_t^* i \mu(g_t \cdot y_0) f_t + i \beta h_t - f_t^* i \mu(g_t \cdot y_0) f_t + h_t (i\beta) \\
& = -2 f_t^* i \mu(g_t \cdot y_0) f_t + i \beta h_t + h_t (i\beta) \\
& = -2 i h_t \mathop{\rm Ad}\nolimits_{f_t^{-1}} \left( \mu(g_t \cdot y_0) \right) + i \beta h_t + h_t (i\beta) \\
& = -2 i h_t \, \mu_h(e^{-i \beta t} \cdot y_0) + i \beta h_t + h_t (i\beta)
\end{align*}
where the last step follows from the definition of $\mu_h$ in \eqref{eqn:def-muh} and the fact that $e^{-i \beta t} = f_t^{-1} \cdot g_t$.
\end{proof}
The next estimate gives a bound for $\sup_X \sigma(h_t)$ in terms of $\| y_0 - x \|_{C^0}$.
\begin{lemma}\label{lem:uniform-bound-sigma}
For every $\varepsilon > 0$ there exists a constant $C > 0$ such that for any initial condition $y_0 \in S_{x}^-$ with $\| e^{-i \beta T} \cdot y_0 - x \|_{C^0} < \varepsilon$ we have the estimate $\sup_X \sigma(h_t) \leq C \| e^{-i \beta T} \cdot y_0 - x \|_{C^0}^2$ for all $0 \leq t \leq T$.
\end{lemma}
\begin{proof}
Taking the trace of the result of Lemma \ref{lem:derivative-difference} gives us
\begin{align*}
\frac{d}{dt} \mathop{\rm Tr}\nolimits h_t = \mathop{\rm Tr}\nolimits \left( \frac{dh}{dt} \right) & = - 2i \mathop{\rm Tr}\nolimits \left( (\mu_h(e^{-i \beta t} \cdot y_0) - \beta) h_t \right) \\
\frac{d}{dt} \mathop{\rm Tr}\nolimits h_t^{-1} = - \mathop{\rm Tr}\nolimits \left( h_t^{-1} \frac{dh}{dt} h_t^{-1} \right) & = 2i \mathop{\rm Tr}\nolimits \left( h_t^{-1} (\mu_h(e^{-i \beta t} \cdot y_0) - \beta) \right)
\end{align*}
Therefore
\begin{equation*}
\frac{d}{dt} \mathop{\rm Tr}\nolimits ( h_t ) = -2i \mathop{\rm Tr}\nolimits \left( (\mu_h(e^{-i \beta t} \cdot y_0) - \mu(e^{-i\beta t} \cdot y_0) ) h_t \right) - 2i \mathop{\rm Tr}\nolimits \left( (\mu(e^{-i \beta t} \cdot y_0) - \beta) h_t \right)
\end{equation*}
Lemma \ref{lem:metric-inequalities} together with the fact that $h_t$ is positive definite then shows that
\begin{align*}
\left( \frac{\partial}{\partial t} + \Delta \right) \mathop{\rm Tr}\nolimits(h_t) & \leq -2i \mathop{\rm Tr}\nolimits \left( (\mu(e^{-i \beta t} \cdot y_0) - \beta) h_t \right) \\
& \leq C_1 \| \mu(e^{-i \beta t} \cdot y_0) - \beta \|_{C^0} \mathop{\rm Tr}\nolimits (h_t) \\
& \leq C_1 \| e^{-i \beta t} \cdot y_0 - x \|_{C^0}^2 \mathop{\rm Tr}\nolimits (h_t) \quad \text{(by Lemma \ref{lem:moment-map-quadratic})}
\end{align*}
A similar calculation shows that
\begin{equation*}
\left( \frac{\partial}{\partial t} + \Delta \right) \mathop{\rm Tr}\nolimits(h_t^{-1}) \leq C_1 \| e^{-i \beta t} \cdot y_0 - x \|_{C^0}^2 \mathop{\rm Tr}\nolimits (h_t^{-1})
\end{equation*}
If we label the eigenvalues of $i \beta$ as $\lambda_1 \leq \cdots \leq \lambda_k < 0 \leq \lambda_{k+1} \leq \cdots \leq \lambda_n$, then the estimate $\| e^{i \beta s} \cdot (y_0 - x) \|_{C^0}^2 \leq e^{2\lambda_k s} \| y_0 - x \|_{C^0}^2$ from \eqref{eqn:lipschitz-slice} gives us
\begin{align}\label{eqn:sigma-sub-estimate}
\begin{split}
\left( \frac{\partial}{\partial t} + \Delta \right) \sigma(h_t) & = \left( \frac{\partial}{\partial t} + \Delta \right) \left( \mathop{\rm Tr}\nolimits (h_t) + \mathop{\rm Tr}\nolimits (h_t^{-1}) \right) \\
& \leq C_1 \| e^{-i \beta t} \cdot (y_0 - x) \|_{C^0}^2 \left( \mathop{\rm Tr}\nolimits (h_t) + \mathop{\rm Tr}\nolimits (h_t^{-1}) \right) \\
& = C_1 \| e^{i \beta (T-t)} \cdot e^{-i \beta T} \cdot (y_0 - x) \|_{C^0}^2 \left( \mathop{\rm Tr}\nolimits (h_t) + \mathop{\rm Tr}\nolimits (h_t^{-1}) \right) \\
& \leq C_1 e^{2\lambda_k (T-t)} \| e^{-i \beta T} \cdot (y_0 - x) \|_{C^0}^2 \sigma(h_t) + C_1 e^{2 \lambda_k (T-t)} \| e^{-i \beta T} \cdot (y_0 - x) \|_{C^0}^2 \rank(E)
\end{split}
\end{align}
Let $K_1 = C_1 \| e^{-i \beta T} \cdot y_0 - x \|_{C^0}^2$ and $K_2 = C_1 \| e^{-i \beta T} \cdot y_0 - x \|_{C^0}^2 \rank(E)$. Define
\begin{equation*}
\nu_t = \sigma(h_t) \exp\left( \frac{K_1}{2\lambda_k} e^{2 \lambda_k (T-t)} \right) - \int_0^t K_2 e^{2 \lambda_k (T-s)} \exp \left( \frac{K_1}{2\lambda_k} e^{2 \lambda_k (T-s)} \right) \, ds
\end{equation*}
Note that $\nu_0 = 0$ since $h_0 = \id$. A calculation using \eqref{eqn:sigma-sub-estimate} then shows that
\begin{equation*}
\left( \frac{\partial}{\partial t} + \Delta \right) \nu_t \leq 0
\end{equation*}
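Indeed, writing $\chi(t) = \exp \left( \frac{K_1}{2\lambda_k} e^{2 \lambda_k (T-t)} \right)$ for the integrating factor appearing in the definition of $\nu_t$ (both $\chi(t)$ and the integral term are independent of the point of $X$), so that $\frac{d\chi}{dt} = - K_1 e^{2 \lambda_k (T-t)} \chi(t)$, a direct computation gives
\begin{align*}
\left( \frac{\partial}{\partial t} + \Delta \right) \nu_t & = \chi(t) \left( \frac{\partial}{\partial t} + \Delta \right) \sigma(h_t) - K_1 e^{2 \lambda_k (T-t)} \chi(t) \, \sigma(h_t) - K_2 e^{2 \lambda_k (T-t)} \chi(t) \\
& \leq \chi(t) \left( K_1 e^{2 \lambda_k (T-t)} \sigma(h_t) + K_2 e^{2 \lambda_k (T-t)} \right) - K_1 e^{2 \lambda_k (T-t)} \chi(t) \, \sigma(h_t) - K_2 e^{2 \lambda_k (T-t)} \chi(t) = 0 ,
\end{align*}
where the inequality is \eqref{eqn:sigma-sub-estimate}.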
Hence $\sup_X \nu_t \leq \sup_X \nu_0 = 0$ by the maximum principle. Therefore
\begin{align*}
\sup_X \sigma(h_t) & \leq \exp \left(-\frac{K_1}{2\lambda_k} e^{2 \lambda_k (T-t)} \right) \int_0^t K_2 e^{2 \lambda_k (T-s)} \exp \left(\frac{K_1}{2\lambda_k} e^{2 \lambda_k (T-s)} \right) \, ds \\
& \leq \exp \left( - \frac{K_1}{2\lambda_k} \right) \int_0^t K_2 e^{2\lambda_k (T-s)} \, ds \leq C \| e^{-i \beta T} \cdot y_0 - x \|_{C^0}^2
\end{align*}
for some constant $C$, since $\lambda_k < 0$, $0 \leq s \leq t < T$, $K_1$ is bounded since $\| e^{-i \beta T} \cdot y_0 - x \|_{C^0} < \varepsilon$ by assumption and $K_2$ is proportional to $\| e^{-i \beta T} \cdot y_0 - x \|_{C^0}^2$.
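For concreteness, the last integral can be evaluated: since $\lambda_k < 0$ and $0 \leq t \leq T$,
\begin{equation*}
\int_0^t K_2 e^{2 \lambda_k (T-s)} \, ds = \frac{K_2}{2 |\lambda_k|} \left( e^{2 \lambda_k (T-t)} - e^{2 \lambda_k T} \right) \leq \frac{K_2}{2 |\lambda_k|} ,
\end{equation*}
so the lemma holds with, for example, $C = \frac{C_1 \rank(E)}{2 |\lambda_k|} \exp \left( \frac{C_1 \varepsilon^2}{2 |\lambda_k|} \right)$, using $K_1 \leq C_1 \varepsilon^2$ and the definition of $K_2$.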
\end{proof}
\subsubsection{$C^\infty$ convergence in the space of metrics}\label{subsec:metric-convergence}
Now consider the case of a fixed $y_0 \in S_x^-$ and define $y_t = e^{i \beta t} \cdot y_0$. Define $g_s(y_t) \in \mathcal{G}^\mathbb C$ to be the unique solution of \eqref{eqn:gauge-flow} such that $g_s(y_t) \cdot y_t$ is the solution to the $\mathop{\rm YM}\nolimitsH$ flow at time $s$ with initial condition $y_t$. Let $f_s(y_t) = g_s(y_t) \cdot e^{i \beta s} \in \mathcal{G}^\mathbb C$, and define $h_s(y_t) = f_s(y_t)^* f_s(y_t)$ to be the associated change of metric. The estimate from the previous lemma now becomes
\begin{equation}\label{eqn:sigma-C0-estimate}
\sup_X \sigma(h_s(y_t)) \leq C \| y_t - x \|_{C^0}^2 = C \| e^{i \beta t} \cdot (y_0 - x) \|_{C^0}^2 \leq C e^{2 \lambda_k t} \| y_0 - x \|_{C^0}^2
\end{equation}
This is summarised in the diagram below.
\begin{figure}
\caption{Comparison of $f_{t_1}$ and $f_{t_2}$.}
\label{fig:flow-up-flow-down}
\end{figure}
\begin{proposition}\label{prop:metrics-converge}
$h_t(y_0) \stackrel{C^0}{\longrightarrow} h_\infty(y_0) \in \mathcal{G}^\mathbb C / \mathcal{G}$ as $t \rightarrow \infty$. The limit depends continuously on the initial condition $y_0$. The rate of convergence is given by
\begin{equation}\label{eqn:metric-convergence-rate}
\sup_X \sigma(h_t(y_0) (h_\infty(y_0))^{-1}) \leq C_2 e^{2 \lambda_k t} \| y_0 - x \|_{C^0}^2
\end{equation}
where $C_2 > 0$ is a constant depending only on the orbit $\mathcal{G} \cdot x$.
\end{proposition}
\begin{proof}
Let $t_1 > t_2 \geq T$. The estimate \eqref{eqn:sigma-C0-estimate} shows that
\begin{equation*}
\sup_X \sigma(h_{t_1-t_2}(y_{t_2})) \leq C_2 \| y_{t_2} - x \|_{C^0}^2 \leq C e^{2 \lambda_k t_2} \| y_0 - x \|_{C^0}^2 \leq C e^{2 \lambda_k T} \| y_0 - x \|_{C^0}^2 .
\end{equation*}
Recall from \eqref{eqn:metric-difference} that
\begin{equation*}
\sigma(h_{t_1}(y_0) h_{t_2}(y_0)^{-1}) = \sigma\left( ( f_{t_1}(y_0) f_{t_2}(y_0)^{-1} )^* f_{t_1}(y_0) f_{t_2}(y_0)^{-1} \right) .
\end{equation*}
The distance-decreasing formula of Lemma \ref{lem:distance-decreasing} shows that
\begin{equation*}
\sup_X \sigma\left( ( f_{t_1}(y_0) f_{t_2}(y_0)^{-1} )^* f_{t_1}(y_0) f_{t_2}(y_0)^{-1} \right) \leq \sup_X \sigma( h_{t_1-t_2}(y_{t_2}) ) .
\end{equation*}
Therefore the distance (measured by $\sigma$) between the two metrics $h_{t_1}(y_0)$ and $h_{t_2}(y_0)$ satisfies the following bound
\begin{align*}
\sup_X \sigma(h_{t_1}(y_0) h_{t_2}(y_0)^{-1}) & = \sup_X \sigma\left( ( f_{t_1}(y_0) f_{t_2}(y_0)^{-1} )^* f_{t_1}(y_0) f_{t_2}(y_0)^{-1} \right) \\
& \leq \sup_X \sigma( h_{t_1-t_2}(y_{t_2}) ) \leq C_2 e^{2 \lambda_k T} \| y_0 - x \|_{C^0}^2
\end{align*}
and so $h_t(y_0)$ is Cauchy in $C^0$ as $t \rightarrow \infty$, with a unique limit $h_\infty \in \mathcal{G}^\mathbb C / \mathcal{G}$. The above equation shows that the rate of convergence is given by \eqref{eqn:metric-convergence-rate}.
Since the finite-time Yang-Mills-Higgs flow and linearised flow both depend continuously on the initial condition, then $h_t(y_0)$ depends continuously on $y_0$ for each $t > 0$. Continuous dependence of the limit then follows from the estimate \eqref{eqn:metric-convergence-rate}.
\end{proof}
Now we can improve on the previous estimates to show that $h_t(y_0)$ converges in the smooth topology along a subsequence, and therefore the limit $h_\infty$ is $C^\infty$. Define $z_t = f_t(y_0) \cdot y_0$, where $y_0 \in S_x^-$ and $f_t(y_0) \in \mathcal{G}^\mathbb C$ are as defined in the previous proposition. Given a Higgs bundle $z_t = (\bar{\partial}_A, \phi)$, let $\nabla_A$ denote the covariant derivative with respect to the metric connection associated to $\bar{\partial}_A$.
\begin{lemma}
For each initial condition $y_0 \in S_x^-$, there is a uniform bound on $\sup_X | \nabla_A^\ell \mu(z_t) |$ and $\sup_X |\nabla_A^\ell \phi|$ for each $\ell \geq 0$.
\end{lemma}
\begin{proof}
Since $\{ e^{i \beta t} \cdot y_0 \, : \, t \in [0, \infty] \}$ is a compact curve in the space of $C^\infty$ Higgs bundles connecting two $C^\infty$ Higgs bundles $y_0$ and $x$, then $\sup_X \left| \mu(e^{i \beta t} \cdot y_0) \right|$ and $\sup_X \left| \nabla_A \phi \right|$ are both uniformly bounded along the sequence $e^{i \beta t} \cdot y_0$. By construction, $z_t$ is the time $t$ $\mathop{\rm YM}\nolimitsH$ flow with initial condition $e^{i \beta t} \cdot y_0$. Along the $\mathop{\rm YM}\nolimitsH$ flow, for each $\ell$ the quantities $\sup_X \left| \nabla_A^\ell \mu \right|$ and $\sup_X \left| \nabla_A^\ell \phi \right|$ are both uniformly bounded by a constant depending on the value of $\sup_X \left| \mu \right|$ and $\sup_X \left| \nabla_A \phi \right|$ at the initial condition (cf. \cite[Sec. 3.2]{Wilkin08}). Since these quantities are uniformly bounded for the initial conditions, then the result follows.
\end{proof}
\begin{corollary}
There is a subsequence $t_n$ such that $h_{t_n} \rightarrow h_\infty$ in the $C^\infty$ topology. Therefore $h_\infty$ is $C^\infty$.
\end{corollary}
\begin{proof}
Since $z_t$ is contained in the complex gauge orbit of $y_0$ for all $t$, then \cite[Lem. 3.14]{Wilkin08} shows that the uniform bound on $\left| \nabla_A^\ell \mu(z_t) \right|$ from the previous lemma implies a uniform bound on $\left| \nabla_A^\ell F_A \right|$ for all $\ell$. Therefore, since Proposition \ref{prop:metrics-converge} shows that $h_t$ converges in $C^0$, then the estimates of \cite[Lem. 19 \& 20]{Donaldson85} show that $h_t$ is bounded in $C^\ell$ for all $\ell$, and so there is a subsequence $h_{t_n}$ converging in the $C^\infty$ topology.
\end{proof}
\subsubsection{$C^\infty$ convergence in the space of Higgs bundles}\label{sec:convergence-in-B}
In this section we show that the scattering construction converges in the $C^\infty$ topology on the space of Higgs bundles. As a consequence of the methods, we obtain an estimate that shows the solution to the reverse heat flow constructed in Section \ref{subsec:construct-reverse-solution} converges to the critical point $x$ in the smooth topology.
This section uses a slightly modified version of the flow from the previous section, defined as follows. Given $y_0 \in S_x^-$ and $t > 0$, let $x_s = g_s \cdot e^{i \beta t} \cdot y_0$ be the time $s$ solution to the $\mathop{\rm YM}\nolimitsH$ flow \eqref{eqn:gauge-flow} with initial condition $e^{i \beta t} \cdot y_0$, let $s(t)$ be the unique point in time such that $\mathop{\rm YM}\nolimitsH(x_{s(t)}) = \mathop{\rm YM}\nolimitsH(y_0)$ and define $t' = \min \{ t, s(t) \}$. Since the critical values of $\mathop{\rm YM}\nolimitsH$ are discrete, then $t'$ is well-defined for small values of $\mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(y_0)$.
\begin{center}
\begin{pspicture}(0,-0.5)(8,5)
\psline(4,5)(4,0)
\psline(4,4)(4,1)
\pscurve[arrowsize=5pt]{->}(4,4)(4.4,2.3)(5,1.2)(5.2,1)
\psline[linestyle=dashed](0,1)(8,1)
\psdots[dotsize=3pt](4,5)(4,4)(4,1)(5.2,1)
\uput{4pt}[180](4,5){\small{$x$}}
\uput{4pt}[180](4,4){\small{$e^{i \beta t} \cdot y_0$}}
\uput{4pt}[180](4,0.7){\small{$y_0$}}
\uput{4pt}[0](4.5,0.7){\small{$z_t = g_{t'} \cdot e^{i \beta t} \cdot y_0$}}
\uput{2pt}[180](4,0){$S_x^-$}
\uput{3pt}[30](4.4,2.3){\small{$g_{t'}$}}
\uput{3pt}[180](4,2.3){\small{$e^{i \beta t}$}}
\uput{3pt}[90](1,1){\small{$\mathop{\rm YM}\nolimitsH^{-1}(\mathop{\rm YM}\nolimitsH(y_0))$}}
\end{pspicture}
\end{center}
Now define $z_t = g_{t'} \cdot e^{i \beta t} \cdot y_0$ and $y_t = e^{i \beta (t-t')} \cdot y_0$. Note that $z_t = g_{t'} \cdot e^{i \beta t'} \cdot y_t$ and so the results of the previous section show that the $C^0$ norm of the change of metric connecting $y_t$ and $z_t$ is bounded. Therefore Corollary \ref{cor:bounded-metric-away-from-critical} shows that $y_t$ and $z_t$ are both uniformly bounded away from $x$.
\begin{lemma}\label{lem:bounded-away-from-critical}
There exists $T > 0$ such that $t-t' \leq T$ for all $t$.
\end{lemma}
\begin{proof}
If $s(t) \geq t$ then $t' = t$ and the desired inequality holds. Therefore the only non-trivial case is $s(t) < t$. Since $\mathop{\rm YM}\nolimitsH(z_t) = \mathop{\rm YM}\nolimitsH(y_0)$ and $\mathop{\rm YM}\nolimitsH$ is continuous in the $L_1^2$ norm on $\mathcal{B}$, then there exists a neighbourhood $V$ of $x$ such that $z_t \notin V$ for all $t$. We also have $z_t = f_{t'} \cdot y_t$ with $f_{t'} = g_{t'} e^{i \beta t'}$ such that $h_t = f_{t'}^* f_{t'}$ satisfies $\sup_X {\Sigma}_{i}gma(h_t) \leq C \| y_t - x \|_{C^0}^2 \leq C \| y_0 - x \|_{C^0}^2$ by Lemma \ref{lem:uniform-bound-sigma}, and so Corollary \ref{cor:bounded-metric-away-from-critical} shows that there exists a neighbourhood $U$ of $x$ in the $L_1^2$ topology on $\mathcal{B}$ such that $y_t \notin U$. Therefore there exists $\eta > 0$ such that $\| y_t - x \|_{L_1^2} \geq \eta$ and $\| z_t - x \|_{L_1^2} \geq \eta$.
Since $y_t = e^{i \beta (t-t')} \cdot y_0$ and $e^{i \beta s} \cdot y_0$ converges to $x$ as $s \rightarrow \infty$, then there exists $T$ such that $t-t' \leq T$ for all $t$, since otherwise $\| y_t - x \|_{L_1^2} < \eta$ for some $t$ which contradicts the inequality from the previous paragraph.
\end{proof}
Next we use the Lojasiewicz inequality to derive a uniform bound on $\| z_t - x \|_{L_1^2}$.
\begin{lemma}\label{lem:L12-bound}
Given $\varepsilon > 0$ there exists $\delta > 0$ such that for each $y_0 \in S_x^-$ with $\| y_0 - x \|_{L_1^2} < \delta$ there exists a neighbourhood $U$ of $x$ in the $L_1^2$ topology such that $\| z_t - x \|_{L_1^2} < \varepsilon$ for all $t$ such that $e^{i \beta t} \cdot y_0 \in U$.
\end{lemma}
\begin{proof}
Recall from \cite[Prop. 3.5]{Wilkin08} that there exists $\varepsilon_1 > 0$ and constants $C > 0$ and $\theta \in \left( 0, \frac{1}{2} \right)$ such that the Lojasiewicz inequality
\begin{equation}\label{eqn:lojasiewicz}
\| \grad \mathop{\rm YM}\nolimitsH(z) \|_{L^2} \geq C \left| \mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(z) \right|^{1-\theta}
\end{equation}
holds for all $z$ such that $\| z - x \|_{L_1^2} < \varepsilon_1$. Recall also the interior estimate \cite[Prop. 3.6]{Wilkin08}, which says that for any positive integer $k$ there exist $\varepsilon_2 > 0$ and a constant $C_k'$ such that if the solution $x_s = g_s \cdot e^{i \beta t} \cdot y_0$ to the $\mathop{\rm YM}\nolimitsH$ flow with initial condition $e^{i \beta t} \cdot y_0$ satisfies $\| x_s - x \|_{L_k^2} < \varepsilon_2$ for all $0 \leq s \leq S$, then we have
\begin{equation}\label{eqn:interior-estimate}
\int_1^S \| \grad \mathop{\rm YM}\nolimitsH(x_s) \|_{L_k^2} \, ds \leq C_k' \int_0^S \| \grad \mathop{\rm YM}\nolimitsH(x_s) \|_{L^2} \, ds ,
\end{equation}
where the constant $C_k'$ is uniform over all initial conditions in a given $\mathcal{G}^\mathbb C$ orbit and for all $S$ such that $\| x_s - x \|_{L_k^2} < \varepsilon_2$ for all $s \in [0, S]$ (cf. Lemma \ref{lem:interior-bound}). Define $\varepsilon' = \min \{ \varepsilon, \varepsilon_1, \varepsilon_2 \}$. A calculation using \eqref{eqn:lojasiewicz} (cf. \cite{Simon83}) shows that any flow line $x_s$ which satisfies $\| x_s - x \|_{L_1^2} < \varepsilon'$ for all $s \in [0,t']$ also satisfies the gradient estimate
\begin{equation*}
C \theta \| \grad \mathop{\rm YM}\nolimitsH(x_s) \|_{L^2} \leq \frac{\partial}{\partial s} \left| \mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(x_s) \right|^\theta
\end{equation*}
and so if $\| x_s - x \|_{L_1^2} < \varepsilon'$ for all $s < t'$ then
\begin{align}\label{eqn:flow-length-estimate}
\begin{split}
\int_0^{t'} \| \grad \mathop{\rm YM}\nolimitsH(x_s) \|_{L^2} \, ds & \leq \frac{1}{C\theta} \left( |\mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(x_{t'}) |^\theta - |\mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(x_0)|^\theta \right) \\
& \leq \frac{1}{C\theta} \left| \mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(x_{t'}) \right|^\theta
\end{split}
\end{align}
Let $k=1$ in \eqref{eqn:interior-estimate} and choose $\delta > 0$ so that $\| y_0 - x \|_{L_1^2} < \delta$ implies that $\frac{1}{C \theta} |\mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(y_0)|^\theta \leq \frac{\varepsilon'}{3C_1'}$, where $C$ and $\theta$ are the constants from the Lojasiewicz inequality \eqref{eqn:lojasiewicz} and $C_1'$ is the constant from \eqref{eqn:interior-estimate} for $k=1$. Therefore, since $\mathop{\rm YM}\nolimitsH(y_0) = \mathop{\rm YM}\nolimitsH(x_{t'}) < \mathop{\rm YM}\nolimitsH(x_\tau) \leq \mathop{\rm YM}\nolimitsH(x)$ for all $\tau < t'$, then
\begin{equation}\label{eqn:energy-bound}
\frac{1}{C\theta} \left| \mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(x_\tau) \right|^\theta \leq \frac{\varepsilon'}{3C_1'} \quad \text{for all $\tau < t'$}.
\end{equation}
Since the finite-time $\mathop{\rm YM}\nolimitsH$ flow depends continuously on the initial condition in the $L_1^2$ norm by \cite[Prop. 3.4]{Wilkin08}, then there exists a neighbourhood $U$ of $x$ such that $x_0 \in U$ implies that $\| x_1 - x \|_{L_1^2} < \frac{1}{3} \varepsilon'$. Choose $t$ large so that $e^{i \beta t} \cdot y_0 = e^{i \beta t'} \cdot y_t \in U$ and let $x_s = g_s \cdot e^{i \beta t} \cdot y_0$ be the solution to the $\mathop{\rm YM}\nolimitsH$ flow at time $s$ with initial condition $x_0 = e^{i \beta t} \cdot y_0$. Note that $x_{t'} = z_t$. Define
\begin{equation*}
\tau = \sup \{ s \mid \| x_r - x \|_{L_1^2} < \varepsilon' \, \, \text{for all $r \leq s$} \}
\end{equation*}
and note that $\tau > 0$. By definition of $\tau$, the Lojasiewicz inequality \eqref{eqn:lojasiewicz} and the interior estimate \eqref{eqn:flow-length-estimate} are valid for the flow line $x_s$ on the interval $[0,\tau]$. If $\tau < t'$, then \eqref{eqn:flow-length-estimate} and \eqref{eqn:energy-bound} imply that
\begin{align*}
\| x_\tau - x \|_{L_1^2} & \leq \| x_1 - x \|_{L_1^2} + \| x_\tau - x_1 \|_{L_1^2} \\
& < \frac{1}{3} \varepsilon' + \int_1^\tau \| \grad \mathop{\rm YM}\nolimitsH(x_s) \|_{L_1^2} \, ds \\
& \leq \frac{1}{3} \varepsilon' + C_1' \int_0^\tau \| \grad \mathop{\rm YM}\nolimitsH(x_s) \|_{L^2} \, ds \\
& \leq \frac{1}{3} \varepsilon' + \frac{C_1'}{C \theta} \left| \mathop{\rm YM}\nolimitsH(x) - \mathop{\rm YM}\nolimitsH(x_\tau) \right|^\theta \leq \frac{1}{3} \varepsilon' + \frac{1}{3} \varepsilon'
\end{align*}
contradicting the definition of $\tau$ as the supremum. Therefore $t' \leq \tau$ and the same argument as above shows that $\| x_{t'} - x \|_{L_1^2} < \frac{2}{3} \varepsilon'$, so we conclude that $z_t = x_{t'}$ satisfies $\| z_t - x \|_{L_1^2} < \frac{2}{3} \varepsilon' < \varepsilon$ for all $t$ such that $e^{i \beta t} \cdot y_0 \in U$.
\end{proof}
Now that we have a uniform $L_1^2$ bound on $z_t - x$, we can apply the same idea, using the interior estimate \eqref{eqn:interior-estimate} together with continuous dependence on the initial condition in the $L_k^2$ norm from \cite[Prop. 3.4]{Wilkin08}, to prove the following uniform $L_k^2$ bound on $z_t - x$.
\begin{lemma}\label{lem:Lk2-length}
Given $\varepsilon > 0$ and a positive integer $k$ there exists $\delta > 0$ such that if $\| y_0 - x \|_{L_1^2} < \delta$ then there exists a neighbourhood $U$ of $x$ in the $L_k^2$ topology such that $\| z_t - x \|_{L_k^2} < \varepsilon$ for all $t$ such that $e^{i \beta t} \cdot y_0 \in U$.
\end{lemma}
Now we can prove that there is a limit $z_\infty$ in the space of $C^\infty$ Higgs bundles. In Section \ref{subsec:construct-reverse-solution} we will show that $z_\infty \in W_x^-$.
\begin{proposition}\label{prop:strong-convergence}
For each $y_0 \in S_x^-$, let $z_t$ be the sequence defined above. Then there exists $z_\infty \in \mathcal{B}$ such that for each positive integer $k$ there exists a subsequence of $z_t$ converging to $z_\infty$ strongly in $L_k^2$.
\end{proposition}
\begin{proof}
The previous estimate with $k=2$ shows that $\| z_t - x \|_{L_2^2}$ is bounded. Compactness of the embedding $L_{k+1}^2 \hookrightarrow L_k^2$ shows that there is a subsequence $\{ z_{t_n} \}$ converging strongly to a limit $z_\infty$ in $L_1^2$.
For any $k > 1$, the same argument applied to the subsequence $\{ z_{t_n} \}$ from the previous paragraph shows that there exists a further subsequence, which we denote by $\{ z_{t_{n_j}} \}$, which converges strongly in $L_k^2$. Since $z_{t_{n_j}} \stackrel{L_1^2}{\longrightarrow} z_\infty$ then the limit in $L_k^2$ of $z_{t_{n_j}}$ must be $z_\infty$ also. Therefore $z_\infty$ is a $C^\infty$ Higgs pair.
\end{proof}
Finally, we can prove that $z_\infty$ is gauge-equivalent to $y_0$. Recall the constant $T$ from Lemma \ref{lem:bounded-away-from-critical} and let $\varphi(z_t,s)$ denote the time $s$ downwards $\mathop{\rm YM}\nolimitsH$ flow \eqref{eqn:YMH-flow} with initial condition $z_t$. The gauge transformation $f_t(y_0) \in \mathcal{G}^\mathbb C$ from Proposition \ref{prop:metrics-converge} satisfies $f_t(y_0) \cdot y_0 = \varphi(z_t, t-t')$.
For any $k$, let $z_{t_n}$ be a subsequence converging strongly to $z_\infty$ in $L_k^2$. Such a subsequence exists by Proposition \ref{prop:strong-convergence}. Since $0 \leq t_n - t_n' \leq T$ for all $n$ then there exists $s \in [0, T]$ and a subsequence $\{ t_{n_\ell} \}$ such that $t_{n_\ell} - t_{n_\ell}' \rightarrow s$. Since the finite-time $\mathop{\rm YM}\nolimitsH$ flow depends continuously on the initial condition in $L_k^2$, then $f_{t_{n_\ell}} (y_0) \cdot y_0 = \varphi(z_{t_{n_\ell}}, t_{n_\ell} - t_{n_\ell}')$ converges to $z_\infty^0 := \varphi(z_\infty, s)$ strongly in $L_k^2$. After taking a further subsequence if necessary, the method of Section \ref{subsec:metric-convergence} shows that the change of metric associated to $f_{t_{n_\ell}}(y_0)$ converges strongly in $L_{k+1}^2$. Therefore, since the action of the Sobolev completion $\mathcal{G}_{L_{k+1}^2}^\mathbb C$ on $\mathcal{B}_{L_k^2}$ is continuous, then $\varphi(z_\infty, s)$ (and hence $z_\infty$) is related to $y_0$ by a gauge transformation in $\mathcal{G}_{L_{k+1}^2}^\mathbb C$. Since $y_0$ and $z_\infty$ are both smooth Higgs pairs then an elliptic regularity argument shows that this gauge transformation is smooth. Therefore we have proved the following result.
\begin{proposition}\label{prop:limit-in-group-orbit}
Given any $y_0 \in S_x^-$, let $z_\infty$ be the limit from Proposition \ref{prop:strong-convergence}. Then there exists a smooth gauge transformation $g \in \mathcal{G}^\mathbb C$ such that $z_\infty = g \cdot y_0$.
\end{proposition}
Since $z_\infty^0 = \varphi(z_\infty, s)$ is related to $z_\infty$ by the finite-time flow and $s$ is bounded, then we have the following estimate for $\| z_\infty^0 - x \|_{L_k^2}$. Note that this requires a bound on $\| y_0 - x \|_{L_1^2}$ for the estimates of this section to work, and a bound on $\| y_0 - x \|_{C^0}$ for the estimates of Lemma \ref{lem:uniform-bound-sigma} to work.
\begin{corollary}\label{cor:flow-bound}
For all $\varepsilon > 0$ there exists $\delta > 0$ such that $\| y_0 - x \|_{L_1^2} + \| y_0 - x \|_{C^0} < \delta$ implies $\| z_\infty^0 - x \|_{L_k^2} < \varepsilon$.
\end{corollary}
\begin{remark}
The previous proof uses the fact that the \emph{finite-time} flow depends continuously on the initial condition. The limit of the downwards $\mathop{\rm YM}\nolimitsH$ flow as $t \rightarrow \infty$ depends continuously on initial conditions within the same Morse stratum (cf. \cite[Thm. 3.1]{Wilkin08}). It is essential that the constant $T$ from Lemma \ref{lem:bounded-away-from-critical} is finite (which follows from Corollary \ref{cor:bounded-metric-away-from-critical}) in order to guarantee that $z_\infty$ and $\varphi(z_\infty, s)$ are gauge equivalent. Without a bound on $T$, it is possible that $z_\infty$ may be in a different Morse stratum to $\lim_{t \rightarrow \infty} \varphi(z_t, t-t')$.
\end{remark}
\subsubsection{Constructing a convergent solution to the backwards $\mathop{\rm YM}\nolimitsH$ flow}\label{subsec:construct-reverse-solution}
In this section we show that the limit $z_\infty$ is in the unstable set $W_x^-$.
\begin{proposition}\label{prop:convergence-group-action}
For each $y_0 \in S_x^-$ there exists $g \in \mathcal{G}^\mathbb C$ such that $g \cdot y_0 \in W_x^-$.
\end{proposition}
\begin{proof}
In what follows, fix any positive integer $k$. Given $y_0 \in S_x^-$, let $z_t^0 = f_t(y_0) \cdot y_0$, where $f_t$ is the complex gauge transformation from Proposition \ref{prop:metrics-converge}. Then Proposition \ref{prop:limit-in-group-orbit} shows that there exists $z_\infty^0 := \varphi(z_\infty, s)$ and a subsequence $\{ z_{t_n}^0 \}$ such that $z_{t_n}^0 \rightarrow z_\infty^0$ strongly in $L_k^2$.
For any $s > 0$, let $y_s = e^{i \beta s} \cdot y_0$ and define $z_t^{-s} = f_t(y_s) \cdot y_s$. By definition, $z_t^0$ is the downwards $\mathop{\rm YM}\nolimitsH$ flow for time $s$ with initial condition $z_t^{-s}$. Applying Proposition \ref{prop:strong-convergence} to the subsequence $z_{t_n}^{-s}$ shows that there is a subsequence $z_{t_{n_j}}^{-s}$ converging in $L_k^2$ to some $z_\infty^{-s}$. Since the $\mathop{\rm YM}\nolimitsH$ flow for finite time $s$ depends continuously on the initial condition (cf. \cite[Prop. 3.4]{Wilkin08}) then $z_{t_{n_j}}^{-s} \rightarrow z_\infty^{-s}$ and $z_{t_{n_j}}^0 \rightarrow z_\infty^0$ implies that $z_\infty^0$ is the time $s$ flow with initial condition $z_\infty^{-s}$. Therefore, for any $s > 0$ we have constructed a solution to the $\mathop{\rm YM}\nolimitsH$ flow on $[-s, 0]$ connecting $z_\infty^0$ and $z_\infty^{-s}$. Proposition \ref{prop:backwards-uniqueness} shows that this solution must be unique for each $s$, and therefore there is a well-defined solution on the time interval $(-\infty, 0]$.
Moreover, we also have the uniform bound from Corollary \ref{cor:flow-bound} which shows that for all $\varepsilon > 0$ there exists $\delta > 0$ such that $\| z_\infty^{-s} - x \|_{L_k^2} \leq \varepsilon$ for all $y_0$ such that $\| y_0 - x \|_{L_1^2} < \delta$. Therefore as $s \rightarrow \infty$, the sequence $z_\infty^{-s}$ converges strongly to $x$ in the $L_k^2$ norm for any $k$, and so $z_\infty^0 = g \cdot y_0 \in W_x^-$. Proposition \ref{prop:exponential-convergence} then shows that the convergence is exponential in each Sobolev norm.
\end{proof}
\subsection{Convergence of the inverse process}\label{subsec:inverse-construction}
In this section we consider the inverse procedure to that of the previous section and prove that each point in the unstable set $W_x^-$ is gauge equivalent to a point in the negative slice $S_x^-$. The idea is similar to that of the previous section, except here we use the modified flow.
\subsubsection{A $C^0$ bound in the space of metrics}
Given $y_0 \in W_x^-$, let $y_t = g_t \cdot y_0$ be the solution to the modified flow \eqref{eqn:modified-flow} with initial condition $y_0$. Define $f_t = g_t \cdot e^{i\beta t}$ and let $h_t = f_t^* f_t$. This is summarised in the diagram below.
Using a similar calculation as the previous section, we have the same expression for the change of metric as in Lemma \ref{lem:derivative-difference}.
\begin{lemma}\label{lem:reverse-derivative-difference}
For any initial condition $y_0 \in W_x^-$, the induced flow on $\mathcal{G}^\mathbb C / \mathcal{G}$ satisfies
\begin{equation}\label{eqn:metric-derivative}
\frac{dh_t}{dt} = -2i h_t \mu_h(e^{-i \beta t} \cdot y_0) + i \beta h_t + i h_t \beta .
\end{equation}
\end{lemma}
\begin{proof}
A similar calculation as in the proof of Lemma \ref{lem:derivative-difference} (this time using the modified flow \eqref{eqn:modified-flow}) shows that
\begin{equation*}
\frac{df_t}{dt} f_t^{-1} = -i\mu(g_t \cdot y_0) + \gamma(g_t \cdot y_0) + f_t (i \beta) f_t^{-1} .
\end{equation*}
Then
\begin{align*}
\frac{dh_t}{dt} & = f_t^* \left( \frac{df}{dt} f_t^{-1} \right)^* f_t + f_t^* \left( \frac{df}{dt} f_t^{-1} \right) f_t \\
& = f_t^* \left( -i \mu(g_t \cdot y_0) - \gamma(g_t \cdot y_0) + (f_t^*)^{-1} (i \beta) f_t^* - i \mu(g_t \cdot y_0) + \gamma(g_t \cdot y_0) + f_t (i \beta) f_t^{-1} \right) f_t \\
& = -2i h_t f_t^{-1} \mu(g_t \cdot y_0) f_t + i \beta h_t + i h_t \beta \\
& = -2i h_t \mu_h(e^{-i \beta t} \cdot y_0) + i \beta h_t + i h_t \beta . \qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lem:reverse-uniform-bound-sigma}
For every $\varepsilon > 0$ there exists a constant $C > 0$ such that for any initial condition $y_0 \in W_{x}^-$ with $\| e^{-i \beta T} \cdot y_0 - x \|_{C^1} + \| g_T \cdot y_0 - x \|_{C^1} < \varepsilon$ we have the estimate
\begin{equation*}
\sup_X \sigma(h_t) \leq C \left( \| e^{-i \beta T} \cdot y_0 - x \|_{C^1} + \| g_T \cdot y_0 - x \|_{C^1} \right)
\end{equation*}
for all $0 \leq t \leq T$.
\end{lemma}
\begin{proof}
In contrast to the proof of Lemma \ref{lem:uniform-bound-sigma}, $e^{-i \beta t} \cdot y_0$ is not in the slice $S_x$ and so it satisfies the inequality $\| \mu(e^{-i \beta t} \cdot y_0) - \beta \|_{C^0} \leq C' \| e^{-i \beta t} \cdot y_0 - x \|_{C^1}$ instead of the quadratic bound of Lemma \ref{lem:moment-map-quadratic}. Using this inequality, the same idea as in the proof of Lemma \ref{lem:uniform-bound-sigma} leads to the bound
\begin{align}\label{eqn:heat-operator-bounded}
\begin{split}
\left( \frac{\partial}{\partial t} + \Delta \right) \sigma(h_t) & \leq C_1 \| e^{-i \beta t} \cdot (y_0-x) \|_{C^1} \left( \mathop{\rm Tr}\nolimits(h_t) + \mathop{\rm Tr}\nolimits(h_t^{-1}) \right) \\
& = C_1 \| e^{-i \beta t} \cdot (y_0 - x) \|_{C^1} \sigma(h_t) + 2 C_1 \| e^{-i \beta t} \cdot (y_0 - x) \|_{C^1} \rank(E)
\end{split}
\end{align}
In general, suppose that a function $f$ satisfies
\begin{equation*}
\left( \frac{\partial}{\partial t} + \Delta \right) f(p,t) \leq C(t) f(p,t) + D(t), \quad p \in X, \ t \in [0, \infty)
\end{equation*}
for some nonnegative functions $C(t)$ and $D(t)$ independent of $p \in X$. Then $f(p,t)$ satisfies the bound
\begin{equation}\label{eqn:general-heat-bound}
f(p,t) \leq \exp \left( \int_0^t C(s) \, ds \right) \left( \sup_X f(\cdot, 0) + \int_0^t D(s) \, ds \right)
\end{equation}
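The bound \eqref{eqn:general-heat-bound} is a standard Gr\"{o}nwall-type consequence of the maximum principle (with the convention, implicit in the maximum principle argument of the previous section, that $\Delta$ denotes the nonnegative Laplacian): at a point where $f(\cdot,t)$ attains its spatial maximum we have $\Delta f \geq 0$, so $M(t) := \sup_X f(\cdot,t)$ satisfies $M'(t) \leq C(t) M(t) + D(t)$ in the sense of Dini derivatives, and integrating gives
\begin{equation*}
M(t) \leq e^{\int_0^t C(r) \, dr} M(0) + \int_0^t D(s) e^{\int_s^t C(r) \, dr} \, ds \leq \exp \left( \int_0^t C(s) \, ds \right) \left( M(0) + \int_0^t D(s) \, ds \right)
\end{equation*}
since $C \geq 0$. In the application below $f = \sigma(h_t)$ and $\sigma(h_0) = 0$ since $h_0 = \id$ (with the normalisation $\sigma(\id) = 0$ used above), so only the $D(t)$ term contributes.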
Therefore \eqref{eqn:heat-operator-bounded} implies that the problem reduces to finding a bound for $\int_0^t \| e^{-i \beta s} \cdot (y_0 - x) \|_{C^1} \, ds$. Proposition \ref{prop:exponential-convergence} shows that the backwards flow with initial condition in $W_x^-$ converges exponentially to $x$ in every Sobolev norm. Therefore there exists a neighbourhood $U$ of $x$ such that if $g_T \cdot y_0 \in U$ then there exist positive constants $C_1$ and $\eta$ such that the following estimate holds
\begin{equation*}
\| y_0 - x \|_{C^1} \leq C_1 e^{-\eta T} \| g_T \cdot y_0 - x \|_{C^1} .
\end{equation*}
Recall the eigenbundles $\mathop{\rm End}\nolimits(E)_-$, $\mathop{\rm End}\nolimits(E)_0$ and $\mathop{\rm End}\nolimits(E)_+$ from Section \ref{sec:preliminaries}. The above estimate shows that each component of $y_0 - x$ in $\mathop{\rm End}\nolimits(E)_-$, $\mathop{\rm End}\nolimits(E)_0$ and $\mathop{\rm End}\nolimits(E)_+$ is bounded by $C_1 e^{-\eta T} \| g_T \cdot y_0 - x \|_{C^1}$. Since the component of $e^{-i \beta t} \cdot (y_0 - x)$ in $\mathop{\rm End}\nolimits(E)_+$ is exponentially decreasing with $t$ then
\begin{equation*}
\int_0^T \| (e^{-i \beta t} \cdot y_0 - x)_{\mathop{\rm End}\nolimits(E)_+} \|_{C^1} dt \leq C_1' \| y_0 - x \|_{C^1} \leq C_1 e^{-\eta T} \| g_T \cdot y_0 - x \|_{C^1} .
\end{equation*}
The component of $e^{-i \beta t} \cdot (y_0 - x)$ in $\mathop{\rm End}\nolimits(E)_0$ is constant with respect to $t$, and so
\begin{equation*}
\int_0^T \| (e^{-i \beta t} \cdot y_0 - x)_{\mathop{\rm End}\nolimits(E)_0} \|_{C^1} dt \leq C_2' T \| y_0 - x \|_{C^1} \leq C_2 T e^{-\eta T} \| g_T \cdot y_0 - x \|_{C^1} .
\end{equation*}
Finally, the component of $e^{-i \beta t} \cdot (y_0 - x)$ in $\mathop{\rm End}\nolimits(E)_-$ is exponentially increasing, and so we have the bound
\begin{equation*}
\int_0^T \| (e^{-i \beta t} \cdot y_0 - x)_{\mathop{\rm End}\nolimits(E)_-} \|_{C^1} dt \leq C_3 \| e^{-i \beta T} \cdot (y_0 - x) \|_{C^1} .
\end{equation*}
Combining the estimates for the three components shows that the integral
\begin{equation*}
I(t) = \int_0^t \| e^{-i \beta s} \cdot (y_0-x) \|_{C^1} \, ds
\end{equation*}
is bounded by
\begin{equation*}
I(t) \leq C_1 e^{-\eta T} \| g_T \cdot y_0 - x \|_{C^1} + C_2 T e^{-\eta T} \| g_T \cdot y_0 - x \|_{C^1} + C_3 \| e^{-i \beta T} \cdot y_0 - x \|_{C^1}
\end{equation*}
The inequality \eqref{eqn:general-heat-bound} together with the assumption $\| g_T \cdot y_0 - x \|_{C^1} + \| e^{-i \beta T} \cdot y_0 - x \|_{C^1} < \varepsilon$ shows that there exists a constant $C$ such that
\begin{equation*}
\sup_X \sigma(h_t) \leq C \left( \| e^{-i \beta T} \cdot y_0 - x \|_{C^1} + \| g_T \cdot y_0 - x \|_{C^1} \right) \qedhere
\end{equation*}
\end{proof}
\subsubsection{Convergence in the space of Higgs bundles}
In this section we use a method analogous to that of Section \ref{sec:convergence-in-B} to show that the sequence converges in the space of Higgs bundles and that the limit is gauge equivalent to $y_0$. Given $y_0 \in W_x^-$ and $t \in (-\infty, 0]$, define $s < 0$ by $\| e^{i \beta s} \cdot g_t(y_0) \cdot y_0 - x \|_{L_k^2} = \| y_0 - x \|_{L_k^2}$. Note that this is well-defined for small values of $\| y_0 - x \|_{L_k^2}$ since Lemma \ref{lem:unstable-sets-same} shows that $g_t(y_0) \cdot y_0 \rightarrow x$ in the $C^\infty$ topology as $t \rightarrow - \infty$ and for $s < 0$ the action of $e^{i \beta s}$ exponentially increases the $C^0$ norm of the component of $g_t(y_0) \cdot y_0$ in $\mathop{\rm End}\nolimits(E)_-$. Now define $t' := \max \{ t, s \} < 0$, let $f_t(y_0) = e^{i \beta t'} \cdot g_{t'}(g_{t-t'}(y_0) \cdot y_0) $ and $z_t := e^{i \beta t'} \cdot g_t(y_0) \cdot y_0 = f_t(y_0) \cdot g_{t-t'}(y_0) \cdot y_0$. Let $h_t = f_t^* f_t$ be the associated change of metric.
\begin{center}
\begin{pspicture}(0,-0.5)(8,5.5)
\psline[arrowsize=5pt]{->}(4,4)(4,1)
\pscurve(3.9,5)(4,4)(4.4,2.3)(5,1.2)(5.2,1)(6,0.4)
\psline[linestyle=dashed](0,1)(8,1)
\psdots[dotsize=3pt](3.9,5)(4,4)(4,1)(5.2,1)
\uput{4pt}[180](3.9,5){\small{$x$}}
\uput{4pt}[180](4,4){\small{$g_t(y_0) \cdot y_0$}}
\uput{5pt}[270](3.8,1){\small{$z_t = f_t \cdot y_0$}}
\uput{5pt}[270](5.2,1){\small{$y_0$}}
\uput{2pt}[270](6,0.4){$W_x^-$}
\uput{3pt}[30](4.4,2.3){\small{$g_{t}$}}
\uput{3pt}[180](4,2.3){\small{$e^{i \beta t'}$}}
\uput{3pt}[90](8,1){\small{$\| y_0 - x \|_{L_k^2} = \text{constant}$}}
\end{pspicture}
\end{center}
Lemma \ref{lem:reverse-uniform-bound-sigma} then shows that $\sup_X \sigma(h_t) \leq C \left( \| z_t - x \|_{C^1} + \| g_{t-t'}(y_0) \cdot y_0 - x \|_{C^1} \right)$. Since either $\| z_t - x \|_{L_k^2} = \| y_0 - x \|_{L_k^2}$ (when $t < t'$) or $g_{t-t'}(y_0) \cdot y_0 = y_0$ (when $t'=t$), then Corollary \ref{cor:bounded-metric-away-from-critical} shows that $g_{t-t'}(y_0) \cdot y_0$ and $z_t$ are both bounded away from $x$ in the $L_k^2$ norm. As a consequence, $|t-t'|$ is uniformly bounded in the same way as Lemma \ref{lem:bounded-away-from-critical}. Therefore
\begin{equation}\label{eqn:bounded-linear-flow}
\| e^{i \beta t} \cdot g_t(y_0) \cdot y_0 - x \|_{L_k^2} = \| e^{i \beta (t-t')} \cdot z_t - x \|_{L_k^2} \leq C' \| z_t - x \|_{L_k^2} = C' \| y_0 - x \|_{L_k^2}
\end{equation}
for some constant $C'$, which implies that there is a subsequence of $e^{i \beta t} \cdot g_t(y_0) \cdot y_0$ converging strongly to a limit $z_\infty^0$ in $L_{k-1}^2$. Since this is true for all $k$, then $z_\infty^0$ is a $C^\infty$ Higgs pair.
A special case of \eqref{eqn:bounded-linear-flow} is
\begin{equation}\label{eqn:bounded-linear-flow-C1}
\| e^{i \beta t} \cdot g_t(y_0) \cdot y_0 - x \|_{C^1} \leq C \| e^{i \beta t} \cdot g_t(y_0) \cdot y_0 - x \|_{L_k^2} \leq C' \| y_0 - x \|_{L_k^2}
\end{equation}
for any $k$ such that $L_k^2 \hookrightarrow C^1$ is a continuous embedding (any $k \geq 3$ suffices, since $X$ has real dimension two).
By modifying the method of Proposition \ref{prop:metrics-converge} we can now show that the change of metric converges in $C^0$. For $t \in (-\infty, 0]$, define $f_t(y_0) = e^{i \beta t} \cdot g_{t}(y_0)$ and let $t_1 \leq t_2 \leq T < 0$. This is summarised in the diagram below.
\begin{proposition}\label{prop:reverse-metrics-converge}
$h_t(y_0)$ converges in the $C^0$ norm to a unique limit $h_\infty(y_0) \in \mathcal{G}^\mathbb C / \mathcal{G}$ as $t \rightarrow -\infty$. The limit depends continuously on the initial condition $y_0 \in W_x^-$. The rate of convergence is given by
\begin{equation}\label{eqn:reverse-metric-convergence-rate}
\sup_X \sigma(h_t(y_0) (h_\infty(y_0))^{-1}) \leq C_2 e^{2 \eta t} \| y_0 - x \|_{L_k^2}
\end{equation}
where $C_2 > 0$ is a constant depending only on the orbit $\mathcal{G} \cdot x$, the constant $\eta$ is from Proposition \ref{prop:exponential-convergence} and $k$ is a positive integer chosen so that $L_k^2 \hookrightarrow C^1$ is a continuous embedding.
\end{proposition}
\begin{proof}
The result follows from the same procedure as the proof of Proposition \ref{prop:metrics-converge}, except now we use the estimate from Lemma \ref{lem:reverse-uniform-bound-sigma} instead of the estimate from Lemma \ref{lem:uniform-bound-sigma} and the distance-decreasing formula for the modified flow from Lemma \ref{lem:modified-distance-decreasing}.
Let $h_{t_1-t_2}(y_{t_2})$ be the change of metric connecting $y_{t_2} = g_{t_2}(y_0) \cdot y_0$ and $e^{i \beta (t_1-t_2)} \cdot y_{t_1}$. Lemma \ref{lem:reverse-uniform-bound-sigma} and the estimate \eqref{eqn:bounded-linear-flow-C1} above show that $h_{t_1-t_2}(y_{t_2})$ satisfies
\begin{align*}
\sup_X \sigma(h_{t_1-t_2}(y_{t_2})) & \leq C \left( \| e^{i \beta (t_1-t_2)} \cdot y_{t_1} - x \|_{C^1} + \| y_{t_2} - x \|_{C^1} \right) \\
& \leq C C' \| y_{t_2} - x \|_{L_k^2} + C \| y_{t_2} - x \|_{C^1} \\
& \leq C'' \| y_T - x \|_{L_k^2} \leq C_2 e^{2 \eta T} \| y_0 - x \|_{L_k^2}
\end{align*}
By the construction of the modified flow, the gauge transformation connecting $y_{t_2}$ and $e^{i \beta (t_1-t_2)} \cdot y_{t_1}$ is in $\mathcal{G}_*^\mathbb C$. The distance-decreasing formula for the action of $e^{i \beta (t_1 - t_2)}$ from Lemma \ref{lem:modified-distance-decreasing} then implies that
\begin{equation*}
\sigma(h_{t_1}(y_0) h_{t_2}(y_0)^{-1}) \leq \sigma(h_{t_1-t_2}(y_{t_2}))
\end{equation*}
and so the sequence $h_t(y_0)$ is Cauchy in the $C^0$ norm, by the same proof as Proposition \ref{prop:metrics-converge}.
\end{proof}
Therefore $y_0$ is connected to $z_\infty^0$ by a $C^0$ gauge transformation. Elliptic regularity together with the fact that $z_\infty^0$ is a $C^\infty$ Higgs pair then shows that $y_0$ is gauge equivalent to $z_\infty^0$ by a $C^\infty$ gauge transformation.
The same method as the proof of Proposition \ref{prop:convergence-group-action} then allows us to explicitly construct a solution of the linearised flow $z_\infty^{-s} = e^{i\beta s} \cdot z_\infty^0$ converging to $x$ as $s \rightarrow +\infty$. Lemma \ref{lem:classify-neg-slice} then shows that $z_\infty^0$ is $\mathcal{G}^\mathbb C$ equivalent to a point in $S_x^-$, which is smooth by Lemma \ref{lem:slice-smooth}.
Therefore $y_0$ is $\mathcal{G}^\mathbb C$ equivalent to a point in $S_x^-$, and so we have proved the following converse to Proposition \ref{prop:convergence-group-action}.
\begin{proposition}\label{prop:unstable-maps-to-slice}
For each $y_0 \in W_x^-$ there exists a $C^\infty$ gauge transformation $g \in \mathcal{G}^\mathbb C$ such that $g \cdot y_0 \in S_x^-$.
\end{proposition}
\subsection{An algebraic criterion for the existence of flow lines}\label{sec:filtration-criterion}
The results of the previous two sections combine to give the following theorem.
\begin{theorem}\label{thm:algebraic-flow-line}
Let $E$ be a complex vector bundle over a compact Riemann surface $X$, and let $(\bar{\partial}_A, \phi)$ be a Higgs bundle on $E$. Suppose that $E$ admits a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)}) = (E, \phi)$ by Higgs subbundles such that the quotients $(Q_k, \phi_k) := (E^{(k)}, \phi^{(k)}) / (E^{(k-1)}, \phi^{(k-1)})$ are Higgs polystable and $\slope(Q_k) < \slope(Q_j)$ for all $k < j$. Then there exists $g \in \mathcal{G}^\mathbb C$ and a solution to the reverse Yang-Mills-Higgs heat flow equation with initial condition $g \cdot (\bar{\partial}_A, \phi)$ which converges to a critical point isomorphic to $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$.
Conversely, if there exists a solution of the reverse heat flow from the initial condition $(\bar{\partial}_A, \phi)$ converging to a critical point $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$ then $(\bar{\partial}_A, \phi)$ admits a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)}) = (E, \phi)$ whose graded object is isomorphic to $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$.
\end{theorem}
\begin{proof}
Suppose first that $(\bar{\partial}_A, \phi)$ admits a filtration $(E^{(1)}, \phi^{(1)}) \subset \cdots \subset (E^{(n)}, \phi^{(n)}) = (E, \phi)$ by Higgs subbundles such that the quotients $(Q_k, \phi_k) := (E^{(k)}, \phi^{(k)}) / (E^{(k-1)}, \phi^{(k-1)})$ are Higgs polystable and $\slope(Q_k) < \slope(Q_j)$ for all $k < j$. Let $x$ be a critical point isomorphic to $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$, and let $U$ be the neighbourhood of $x$ from Lemma \ref{lem:classify-neg-slice}. Then by applying the isomorphism $x \cong (Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$ and scaling the extension classes there exists a complex gauge transformation $g$ such that $g \cdot (\bar{\partial}_A, \phi)$ is in $U$. Applying Lemma \ref{lem:classify-neg-slice} shows that $(\bar{\partial}_A, \phi)$ is isomorphic to a point in $S_x^-$, and therefore Proposition \ref{prop:convergence-group-action} shows that $(\bar{\partial}_A, \phi)$ is isomorphic to a point in $W_x^-$.
Conversely, if $x = (Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$ is a critical point and $(\bar{\partial}_A, \phi) \in W_x^-$, then Proposition \ref{prop:unstable-maps-to-slice} shows that there exists $g \in \mathcal{G}^\mathbb C$ such that $g \cdot (\bar{\partial}_A, \phi) \in S_x^-$. Therefore Lemma \ref{lem:classify-neg-slice} shows that $(\bar{\partial}_A, \phi)$ admits a filtration whose graded object is isomorphic to $(Q_1, \phi_1) \oplus \cdots \oplus (Q_n, \phi_n)$.
\end{proof}
\section{The Hecke correspondence via Yang-Mills-Higgs flow lines}\label{sec:hecke}
Let $(E, \phi)$ be a polystable Higgs bundle of rank $r$ and degree $d$, and let $(L_u, \phi_u)$ be a Higgs line bundle with $\deg L_u < \slope E$. Let $F$ be a smooth complex vector bundle $C^\infty$ isomorphic to $E \oplus L_u$ and choose a metric on $F$ such that the Higgs structure on $(E, \phi) \oplus (L_u,\phi_u)$ is a Yang-Mills-Higgs critical point in the space $\mathcal{B}(F)$ of Higgs bundles on $F$. The goal of this section is to show that Hecke modifications of the Higgs bundle $(E, \phi)$ correspond to Yang-Mills-Higgs flow lines in $\mathcal{B}(F)$ connecting the critical point $(E, \phi) \oplus (L_u, \phi_u)$ to lower critical points.
In Section \ref{sec:Higgs-hecke-review} we review Hecke modifications of Higgs bundles. Section \ref{sec:canonical-map} describes how the space of Hecke modifications relates to the geometry of the negative slice and Section \ref{sec:YMH-flow-hecke} contains the proof of Theorem \ref{thm:flow-hecke} which shows that Hecke modifications correspond to $\mathop{\rm YM}\nolimitsH$ flow lines. In Section \ref{sec:secant-criterion} we give a geometric criterion for points to be connected by unbroken flow lines in terms of the secant varieties of the space of Hecke modifications inside the negative slice. In particular, this gives a complete classification of the $\mathop{\rm YM}\nolimitsH$ flow lines for rank $2$ (cf. Corollary \ref{cor:rank-2-classification}). Throughout this section the notation $\mathcal{E}$ is used to denote the sheaf of holomorphic sections of the bundle $E$.
\subsection{Hecke modifications of Higgs bundles}\label{sec:Higgs-hecke-review}
The purpose of this section is to derive some basic results for Hecke modifications of Higgs bundles which will be used in Section \ref{sec:YMH-flow-hecke} to prove Theorem \ref{thm:flow-hecke}. In Section \ref{sec:secant-criterion} we extend these results to study unbroken YMH flow lines.
First recall that a Hecke modification of a holomorphic bundle $E$ over a Riemann surface $X$ is determined by points $p_1, \ldots, p_n \in X$ (not necessarily distinct) and nonzero elements $v_j \in E_{p_j}^*$ for $j = 1, \ldots, n$. This data determines a sheaf homomorphism $\mathcal{E} \rightarrow \oplus_{j=1}^n \mathbb C_{p_j}$ to the skyscraper sheaf supported at $p_1, \ldots, p_n$ with kernel a locally free sheaf $\mathcal{E}'$. This determines a holomorphic bundle $E' \rightarrow X$ which we call the \emph{Hecke modification of $E$ determined by $v = (v_1, \ldots, v_n)$}.
\begin{equation*}
0 \rightarrow \mathcal{E}' \rightarrow \mathcal{E} \stackrel{v}{\longrightarrow} \bigoplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0
\end{equation*}
Since the kernel sheaf $\mathcal{E}'$ only depends on the equivalence class of each $v_j$ in $\mathbb{P} E_{p_j}^*$ then from now on we abuse the notation slightly and also use $v_j \in \mathbb{P} E_{p_j}^*$ to denote the equivalence class of $v_j \in E_{p_j}^*$.
As explained in \cite[Sec. 4.5]{witten-hecke}, if $(E, \phi)$ is a Higgs bundle, then a Hecke modification of $(E, \phi)$ may introduce poles into the Higgs field and so there are restrictions on the allowable modifications which preserve holomorphicity of the Higgs field.
\begin{definition}
Let $(E, \phi)$ be a Higgs bundle. A Hecke modification $E'$ of $E$ is \emph{compatible} with $\phi$ if the induced Higgs field on $E'$ is holomorphic.
\end{definition}
The next result describes a basic condition for the modification to be compatible with the Higgs field.
\begin{lemma}
Let $(E, \phi)$ be a Higgs bundle, and $0 \rightarrow \mathcal{E}' \rightarrow \mathcal{E} \stackrel{v}{\longrightarrow} \mathbb{C}_p \rightarrow 0$ a Hecke modification of $E$ induced by $v \in E_p^*$. Then the induced Higgs field $\phi'$ on $E'$ is holomorphic if and only if there exists an eigenvalue $\mu$ of $\phi(p)$ such that the composition $\mathcal{E} \otimes K^{-1} \xrightarrow{\phi - \mu \cdot \id} \mathcal{E} \stackrel{v}{\rightarrow} \mathbb C_p$ is zero.
\end{lemma}
\begin{proof}
Let $\phi \in H^0(\mathop{\rm End}\nolimits(E) \otimes K)$. Then $\phi$ pulls back to a holomorphic Higgs field $\phi' \in H^0(\mathop{\rm End}\nolimits(E') \otimes K)$ if and only if for any open set $U \subset X$ and any section $s \in \mathcal{E}(U)$, the condition $s \in \ker (\mathcal{E}(U) \stackrel{v}{\rightarrow} \mathbb C_p(U))$ implies that $\phi(s) \in \ker ((\mathcal{E} \otimes K)(U) \stackrel{v}{\rightarrow} \mathbb C_p(U))$. After choosing a trivialisation of $K$ in a neighbourhood of $p$, we can decompose the Higgs field $\phi(p)$ on the fibre $E_p$ as follows
\begin{equation}\label{eqn:fibre-extension}
\xymatrix{
0 \ar[r] & \ker v \ar[r] \ar[d]^{\left. \phi(p) \right|_{\ker v}} & E_p \ar[r] \ar[d]^{\phi(p)} & \mathbb{C}_p \ar[r] \ar[d]^\mu & 0 \\
0 \ar[r] & \ker v \ar[r] & E_p \ar[r] & \mathbb{C}_p \ar[r] & 0
}
\end{equation}
where scalar multiplication by $\mu$ is induced from the action of $\phi(p)$ on the quotient $\mathbb{C}_p = E_p / \ker v$. Therefore the endomorphism $\left( \phi(p) - \mu \cdot \id \right)$ maps $E_p$ into the subspace $\ker v$ and so $v \in E_p^*$ descends to a well-defined homomorphism $v' : \mathop{\rm coker}\nolimits \left( \phi(p) - \mu \cdot \id \right) \rightarrow \mathbb C$.
Conversely, given an eigenvalue $\mu$ of $\phi(p)$ and an element $v' \in \mathop{\rm coker}\nolimits(\phi(p) - \mu \cdot \id)^*$, one can choose a basis of $E_p$ and extend $v'$ to an element $v \in E_p^*$ such that $\im (\phi(p) - \mu \cdot \id) \subset \ker v$. Equivalently, $\phi(p)$ preserves $\ker v$ and so $v \in E_p^*$ defines a Hecke modification $E'$ of $E$ such that the induced Higgs field on $E'$ is holomorphic.
\end{proof}
\begin{corollary}\label{cor:Higgs-compatible}
Let $(E, \phi)$ be a Higgs bundle and let $0 \rightarrow \mathcal{E}' \rightarrow \mathcal{E} \stackrel{v}{\rightarrow} \mathbb C_p \rightarrow 0$ be a Hecke modification of $E$ induced by $v \in \mathbb{P} E_p^*$. The following conditions are equivalent
\begin{enumerate}
\item The induced Higgs field $\phi'$ on $E'$ is holomorphic.
\item There exists an eigenvalue $\mu$ of $\phi(p)$ such that $v(\phi(s)) = \mu v(s)$ for all sections $s$ of $E$.
\item There exists an eigenvalue $\mu$ of $\phi(p)$ such that $v$ descends to a well-defined $v' \in (\mathop{\rm coker}\nolimits (\phi(p) - \mu \cdot \id))^*$.
\end{enumerate}
\end{corollary}
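As a concrete illustration (a rank two example included only to make the criterion explicit, not needed in the sequel): suppose that in some frame for $E_p$ the Higgs field satisfies $\phi(p) = \left( \begin{matrix} \mu & 0 \\ c & \nu \end{matrix} \right)$ and $v \in E_p^*$ is projection onto the first coordinate. Then $v(\phi(p) w) = \mu \, v(w)$ for all $w \in E_p$, so condition (2) holds and the Hecke modification determined by $v$ is compatible with $\phi$. On the other hand, if $\phi(p) = \left( \begin{matrix} \mu & b \\ 0 & \nu \end{matrix} \right)$ with $b \neq 0$ and $\mu \neq \nu$, then $v \circ (\phi(p) - \mu \cdot \id)$ and $v \circ (\phi(p) - \nu \cdot \id)$ are both nonzero, so this $v$ does not define a compatible modification.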
\begin{lemma}\label{lem:resolve-higgs-subsheaf}
Let $(E, \phi)$ be a Higgs bundle and $(G, \varphi)$ a Higgs subsheaf. Then there exists a Higgs subbundle $(G', \varphi') \subset (E, \phi)$ such that $\rank(G) = \rank (G')$ and $(G, \varphi)$ is a Higgs subsheaf of $(G', \varphi')$.
\end{lemma}
\begin{proof}
Since $\dim_\mathbb C X = 1$ then a standard procedure shows that there is a holomorphic subbundle $G' \subset E$ with $\rank(G) = \rank (G')$ and $G$ is a subsheaf of $G'$, and so it only remains to show that this is a Higgs subbundle. The reverse of the construction above shows that the Higgs field $\varphi$ preserving $G$ extends to a meromorphic Higgs field $\varphi'$ preserving $G'$, and since this is the restriction of a holomorphic Higgs field $\phi$ on $E$ to the holomorphic subbundle $G'$, then $\varphi'$ must be holomorphic on $G'$. Therefore $G'$ is $\phi$-invariant.
\end{proof}
\begin{definition}\label{def:m-n-stable}
A Higgs bundle $(E, \phi)$ is \emph{$(m,n)$-stable (resp. $(m,n)$-semistable)} if for every proper $\phi$-invariant holomorphic subbundle $F \subset E$ we have
\begin{equation*}
\frac{\deg F + m}{\rank F} < \frac{\deg E - n}{\rank E} \quad \text{(resp. $\leq$)} .
\end{equation*}
\end{definition}
If $(E, \phi)$ is $(0,n)$-semistable then any Hecke modification $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}, \phi) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ is semistable.
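This can be seen as follows (a sketch, using Lemma \ref{lem:resolve-higgs-subsheaf}): any $\phi'$-invariant subbundle $F' \subseteq E'$ is, via the inclusion $\mathcal{E}' \subset \mathcal{E}$, a $\phi$-invariant subsheaf of $E$, and its saturation $F \subseteq E$ is a $\phi$-invariant subbundle with $\rank F = \rank F'$ and $\deg F \geq \deg F'$. Since $\rank E' = \rank E$ and $\deg E' = \deg E - n$, the $(0,n)$-semistability of $(E, \phi)$ gives
\begin{equation*}
\frac{\deg F'}{\rank F'} \leq \frac{\deg F}{\rank F} \leq \frac{\deg E - n}{\rank E} = \frac{\deg E'}{\rank E'} ,
\end{equation*}
and so $(E', \phi')$ is semistable.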
\begin{definition}
The \emph{space of admissible Hecke modifications} is the subset $\mathcal{N}_{\phi} \subset \mathbb{P} E^*$ corresponding to the Hecke modifications which are compatible with the Higgs field.
\end{definition}
\begin{remark}\label{rem:hecke-explicit}
\begin{enumerate}
\item If $\phi = 0$ then $\mathcal{N}_{0} = \mathbb{P} E^*$. If $E$ is $(0,1)$-stable then there is a well-defined map $\mathbb{P} E^* \rightarrow \mathbb{P} H^1(E^*)$. The construction of the next section generalises this to a map $\mathcal{N}_{\phi} \rightarrow \mathbb{P} \mathcal{H}^1(E^*)$ (cf. Remark \ref{rem:canonical-map}).
\item Note that the construction above is the reverse of that described in \cite{witten-hecke}, which begins with $E'$ and modifies the bundle to produce a bundle $E$ with $\deg E = \deg E' + 1$. Here we begin with $E$ and construct $E'$ via a modification $0 \rightarrow \mathcal{E}' \rightarrow \mathcal{E} \rightarrow \mathbb C_p \rightarrow 0$ since we want to interpret the compatible modifications in terms of the geometry of the negative slice (see Section \ref{sec:canonical-map}) in order to draw a connection with the results on gradient flow lines for the Yang-Mills-Higgs flow functional from Section \ref{sec:filtration-criterion}.
\item One can also see the above construction more explicitly in local coordinates as in \cite{witten-hecke} by choosing a local frame $\{ s_1, \ldots, s_n \}$ for $E$ in a neighbourhood $U$ of $p$ with local coordinate $z$ centred at $p$ and for which the evaluation map $\mathcal{E} \stackrel{v}{\rightarrow} \mathbb C_p$ satisfies $v(s_1) = s_1(0)$ and $v(s_j) = 0$ for all $j=2, \ldots, n$. Then over $U \setminus \{ p \}$, the functions $\{ \frac{1}{z} s_1(z), s_2(z), \ldots, s_n(z) \}$ form a local frame for $E'$. Equivalently, the transition function $g = \left( \begin{matrix} \frac{1}{z} & 0 \\ 0 & \id \end{matrix} \right)$ maps the trivialisation for $E$ to a trivialisation for $E'$ (note that this is the inverse of the transition function from \cite[Sec. 4.5.2]{witten-hecke} for the reason explained in the previous paragraph). In this local frame for $E$ we write $\phi(z) = \left( \begin{matrix} A(z) & B(z) \\ C(z) & D(z) \end{matrix} \right)$. The action on the Higgs field is then
\begin{equation*}
g \left( \begin{matrix} A(z) & B(z) \\ C(z) & D(z) \end{matrix} \right) g^{-1} = \left( \begin{matrix} A(z) & \frac{1}{z} B(z) \\ z C(z) & D(z) \end{matrix} \right)
\end{equation*}
Therefore the induced Higgs field on $E'$ will have a pole at $p$ unless $B(0) = 0$. The scalar $A(0)$ in this local picture is the same as the scalar $\mu$ from \eqref{eqn:fibre-extension}, and we see that
\begin{equation*}
\phi(p) - \mu \cdot \id = \left( \begin{matrix} 0 & 0 \\ C(0) & D(0) - \mu \cdot \id \end{matrix} \right)
\end{equation*}
With respect to the basis of $E_p$ given by the choice of local frame, $v(\phi(p) - \mu \cdot \id) = 0$. Moreover, via this local frame $\mathop{\rm coker}\nolimits ( \phi(p) - \mu \cdot \id)$ is identified with a subspace of $E_p$ which contains the linear span of $s_1(0)$. Therefore we see in the local coordinate picture that $v \in E_p^*$ descends to an element of $(\mathop{\rm coker}\nolimits (\phi(p) - \mu \cdot \id))^*$.
\end{enumerate}
\end{remark}
The next result shows that the admissible Hecke modifications have an interpretation in terms of the spectral curve associated to the Higgs field. This extends the results of \cite{witten-hecke} to include the possibility that $p$ is a branch point of the spectral cover.
First recall Hitchin's construction of the spectral curve from \cite{Hitchin87-2}. Let $(E, \phi)$ be a Higgs pair. Then there is a projection map $\pi : K \rightarrow X$ and a bundle $\pi^* E$ over the total space of the canonical bundle together with a tautological section $\lambda$ of $\pi^* K$. The zero set of the characteristic polynomial of $\pi^* \phi$ defines a subvariety $S$ inside the total space of $K$. The projection $\pi$ restricts to a map $\pi : S \rightarrow X$, where for each $p \in X$ the fibre $\pi^{-1}(p)$ consists of the eigenvalues of the Higgs field $\phi(p)$. As explained in \cite{Hitchin87-2}, generically the discriminant of the Higgs field has simple zeros and in this case $S$ is a smooth curve called the \emph{spectral curve}. The induced projection $\pi : S \rightarrow X$ is then a ramified covering map with ramification divisor denoted $\mathcal{R} \subset S$.
The pullback of the Higgs field to the spectral curve is a bundle homomorphism $\pi^* E \rightarrow \pi^*(E \otimes K)$, and the eigenspaces correspond to $\ker (\pi^* \phi - \lambda \cdot \id)$, where $\lambda$ is the tautological section defined above. When the discriminant of the Higgs field has simple zeros then Hitchin shows in \cite{Hitchin87-2} that the eigenspaces form a line bundle $\mathcal{N} \rightarrow S$ and that the original bundle $E$ can be reconstructed as $\pi_* \mathcal{L}$, where the line bundle $\mathcal{L} \rightarrow S$ is formed by modifying $\mathcal{N}$ at the ramification points $0 \rightarrow \mathcal{N} \rightarrow \mathcal{L} \rightarrow \bigoplus_{p \in \mathcal{R}} \mathbb C_p \rightarrow 0$. One can reconstruct the Higgs field $\phi$ by pushing forward the endomorphism defined by the tautological section $\lambda : \mathcal{L} \rightarrow \mathcal{L} \otimes \pi^* K$.
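Concretely, if the characteristic polynomial of $\phi$ is $\det(\lambda \cdot \id - \phi) = \lambda^r + a_1 \lambda^{r-1} + \dotsb + a_r$ with coefficients $a_i \in H^0(X, K^i)$, then
\begin{equation*}
S = \left\{ \lambda \in K \, : \, \lambda^r + (\pi^* a_1) \lambda^{r-1} + \dotsb + \pi^* a_r = 0 \right\} ,
\end{equation*}
where $K$ is identified with its total space, and the discriminant of this polynomial is a section of $K^{r(r-1)}$ whose zero set consists of the branch points of $\pi : S \rightarrow X$.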
\begin{lemma}
If the discriminant of $\phi$ has simple zeros then an admissible Hecke modification of $(E, \phi)$ corresponds to a Hecke modification of the line bundle $\mathcal{L}$ over the spectral curve.
\end{lemma}
\begin{proof}
Consider the pullback bundle $\pi^* E \rightarrow S$. The pullback of the Higgs field induces a sheaf homomorphism $(\pi^* \phi - \lambda \cdot \id) : \pi^* \mathcal{E} \otimes (\pi^* K)^{-1} \rightarrow \pi^* \mathcal{E}$. As explained in \cite[Sec. 2.6]{witten-hecke}, when the discriminant of $\phi$ has simple zeros then the cokernel of this homomorphism is the line bundle $\mathcal{L} \rightarrow S$ such that $\mathcal{E} \cong \pi_* \mathcal{L}$.
For $\mu \in S$ such that $p = \pi(\mu)$, there is an isomorphism of the stalks of the skyscraper sheaves $\mathbb C_p \cong \pi_* (\mathbb C_\mu)$. Then a Hecke modification $\mathcal{L} \stackrel{v'}{\rightarrow} \mathbb C_\mu$ given by nonzero $v' \in \mathcal{L}_\mu^*$ induces a Hecke modification $v = v' \circ q \circ \pi^* : \mathcal{E} \rightarrow \mathbb C_p$, defined by the commutative diagram below.
\begin{equation*}
\xymatrix{
\pi^* \mathcal{E} \otimes (\pi^* K)^{-1} \ar[rr]^(0.6){\pi^*\phi - \lambda \cdot \id} & & \pi^* \mathcal{E} \ar[r]^(0.3)q & \mathop{\rm coker}\nolimits(\pi^* \phi - \lambda \cdot \id) \ar[dr]^(0.6){v'} \ar[r] & 0 \\
& & \mathcal{E} \ar[u]^{\pi^*} \ar[rr]^v & & \mathbb C_p \ar[r] & 0
}
\end{equation*}
The definition of $v$ implies that for any open set $U \subset X$ with a trivialisation of $K$ in a neighbourhood of $p$, and all $s \in \mathcal{E}(U)$ we have
\begin{equation*}
v(\phi s) = v' \circ q(\pi^* (\phi s)) = v' \circ q(\mu \, \pi^* (s)) = \mu \, v' \circ q \circ \pi^* (s) = \mu \, v(s)
\end{equation*}
and so $v$ is compatible with the Higgs field by Corollary \ref{cor:Higgs-compatible}.
Conversely, let $v \in E_p^*$ be compatible with the Higgs field $\phi$. Corollary \ref{cor:Higgs-compatible} shows that this induces a well-defined element of $\mathop{\rm coker}\nolimits(\phi - \mu \cdot \id)^*$. Consider the endomorphisms $\phi(p) - \mu \cdot \id$ on the fibre of $E$ over $p \in X$ and $\pi^* \phi(\mu) - \mu \cdot \id$ on the fibre of $\pi^* E$ over $\mu \in S$.
\begin{equation*}
\xymatrix{
(\pi^* E \otimes \pi^* K^{-1})_\mu \ar[rr]^(0.6){\pi^* \phi - \mu \cdot \id} & & (\pi^* E)_\mu \ar[r] & \mathop{\rm coker}\nolimits (\pi^* \phi - \mu \cdot \id)_\mu \ar[r] & 0\\
(E \otimes K^{-1})_p \ar[rr]^(0.6){\phi - \mu \cdot \id} \ar[u] & & E_p \ar[r] \ar[u] & \mathop{\rm coker}\nolimits(\phi - \mu \cdot \id)_p \ar[r] \ar@{-->}[u] & 0
}
\end{equation*}
The universal property of the cokernel defines a map $\mathop{\rm coker}\nolimits(\phi - \mu \cdot \id)_p \rightarrow \mathop{\rm coker}\nolimits(\pi^* \phi - \mu \cdot \id)_\mu$. Since the discriminant of the Higgs field has simple zeros, both of these fibres are one-dimensional and so this map is an isomorphism. Therefore $v$ induces a well-defined homomorphism on the fibre $\mathop{\rm coker}\nolimits(\pi^* \phi - \mu \cdot \id)_\mu \rightarrow \mathbb C$, and hence a Hecke modification of $\mathcal{L}$ at $\mu \in S$.
\end{proof}
\begin{remark}
When $p \in X$ is not a branch point of $\pi : S \rightarrow X$ then this result is contained in \cite{witten-hecke}.
\end{remark}
\begin{corollary}
If the discriminant of $\phi$ has simple zeros then the space of admissible Hecke modifications is $\mathcal{N}_{\phi} = S$.
\end{corollary}
\subsection{Secant varieties associated to the space of Hecke modifications}\label{sec:canonical-map}
The purpose of this section is to connect the geometry of the space of Hecke modifications with the geometry of the negative slice at a critical point in order to prepare for the proof of Theorem \ref{thm:flow-hecke} in the next section.
Let $(E_1, \phi_1)$ and $(E_2, \phi_2)$ be Higgs bundles and let $\bar{\partial}_A$ denote the induced holomorphic structure on $E_1^* E_2$. Then there is an elliptic complex
\begin{equation*}
\Omega^0(E_1^* E_2) \stackrel{L_1}{\longrightarrow} \Omega^{0,1}(E_1^* E_2) \oplus \Omega^{1,0}(E_1^* E_2) \stackrel{L_2}{\longrightarrow} \Omega^{1,1}(E_1^* E_2) ,
\end{equation*}
where $L_1(u) = (\bar{\partial}_A u, \phi_2 u - u \phi_1)$ and $L_2(a, \varphi) = (\bar{\partial}_A \varphi + [a, \phi])$. Let $\mathcal{H}^0 = \ker L_1$, $\mathcal{H}^1 = \ker L_1^* \cap \ker L_2$ and $\mathcal{H}^2 = \ker L_2^*$ denote the spaces of harmonic forms. Recall that if $(E_1, \phi_1)$ and $(E_2, \phi_2)$ are both Higgs stable and $\slope(E_2) < \slope(E_1)$ then $\mathcal{H}^0(E_1^* E_2) = 0$.
Now consider the special case where $(E_1, \phi_1)$ is $(0, n)$-stable and $(E_2, \phi_2)$ is a Higgs line bundle. Let $\mathcal{B}$ denote the space of Higgs bundles on the smooth bundle $E_1 \oplus E_2$ and choose a metric such that $(E_1, \phi_1) \oplus (E_2, \phi_2)$ is a critical point of $\mathop{\rm YMH}\nolimits : \mathcal{B} \rightarrow \mathbb R$. Definition \ref{def:slice} shows that $\mathcal{H}^1(E_1^* E_2) \cong S_x^-$ is the negative slice at this critical point.
Let $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}_1, \phi_1) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification defined by $v_1, \ldots, v_n \in \mathbb{P} E_1^*$. Applying the functor $\mathop{\rm Hom}\nolimits(\cdot, \mathcal{E}_2)$ to the short exact sequence $0 \rightarrow \mathcal{E}' \rightarrow \mathcal{E}_1 \rightarrow \oplus_j \mathbb C_{p_j} \rightarrow 0$ gives us an exact sequence of sheaves $0 \rightarrow \mathop{\rm Hom}\nolimits(\mathcal{E}_1, \mathcal{E}_2) \rightarrow \mathop{\rm Hom}\nolimits(\mathcal{E}', \mathcal{E}_2) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j}^* \rightarrow 0$, where the final term comes from the isomorphism $\mathop{\rm Ext}\nolimits^1(\oplus_j \mathbb C_{p_j}, \mathcal{E}_2) \cong \mathop{\rm Hom}\nolimits(\mathcal{E}_2, \oplus_j \mathbb C_{p_j} \otimes K)^* \cong \oplus_j \mathbb C_{p_j}^*$. Note that this depends on a choice of trivialisations of $E_2$ and $K$, however the kernel of the map $\mathop{\rm Hom}\nolimits(\mathcal{E}', \mathcal{E}_2) \rightarrow \oplus_j \mathbb C_{p_j}$ is independent of these choices. This gives us the following short exact sequence of Higgs sheaves
\begin{equation}\label{eqn:dual-short-exact}
0 \rightarrow \mathcal{E}_1^* \mathcal{E}_2 \rightarrow (\mathcal{E}')^* \mathcal{E}_2 \rightarrow \bigoplus_{j=1}^n \mathbb C_{p_j}^* \rightarrow 0
\end{equation}
There is an induced map $\Omega^0((E')^* E_2) \rightarrow \Omega^{1,0}((E')^* E_2)$ given by $s \mapsto \phi_2 s - s \phi'$. Recall from Corollary \ref{cor:Higgs-compatible} that there exists an eigenvalue $\mu_j$ for $\phi_1(p_j)$ such that $v_j(\phi_1(p_j) - \mu_j \cdot \id) = 0$ for each $j=1, \ldots, n$. From the above exact sequence there is an induced homomorphism $\Omega^{1,0}((E')^* E_2) \stackrel{ev^1}{\longrightarrow} \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$. The component of $ev^1(\phi_2 s - s \phi')$ in $\mathbb C_{p_j}$ is $(\phi_2(p_j) - \mu_j ) s$. In particular, $\phi_2 s - s \phi' \in \ker(ev^1)$ if and only if $\phi_2(p_j) = \mu_j$ for all $j=1, \ldots, n$.
\begin{definition}\label{def:Hecke-compatible}
Let $(E_1, \phi_1)$ be a Higgs bundle, and $(E_2, \phi_2)$ a Higgs line bundle. The \emph{space of Hecke modifications compatible with $\phi_1$ and $\phi_2$}, denoted $\mathcal{N}_{\phi_1, \phi_2} \subset \mathcal{N}_{\phi_1}$, is the set of Hecke modifications compatible with $\phi_1$ such that $ev^1(\phi_2 s - s \phi') = 0$ for all $s \in \Omega^0((E')^* E_2)$.
\end{definition}
\begin{remark}\label{rem:miniscule-compatible}
Note that if $n = 1$ and $v \in \mathbb{P} E_1^*$ is a Hecke modification compatible with $\phi_1$, then the requirement that $v \in \mathcal{N}_{\phi_1, \phi_2}$ reduces to $\phi_2(p) = \mu$, where $\mu$ is the eigenvalue of $\phi_1(p)$ from Corollary \ref{cor:Higgs-compatible}. Such a $\phi_2 \in H^0(\mathop{\rm End}\nolimits(E_2) \otimes K) = H^0(K)$ always exists since the canonical linear system is basepoint free and therefore $\bigcup_{\phi_2 \in H^0(K)} \mathcal{N}_{\phi_1, \phi_2} = \mathcal{N}_{\phi_1}$. If $n > 1$ then $\phi_2$ with these properties may not exist for some choices of $\phi_1 \in H^0(\mathop{\rm End}\nolimits(E_1) \otimes K)$ and $v_1, \ldots, v_n \in \mathbb{P} E_1^*$ (the existence of $\phi_2$ depends on the complex structure of the surface $X$). If $\phi_1 = 0$, then we can choose $\phi_2 = 0$ and in this case $\mathcal{N}_{\phi_1, \phi_2} = \mathcal{N}_{\phi_1} = \mathbb{P} E_1^*$ (this corresponds to the case of the Yang-Mills flow in Theorem \ref{thm:flow-hecke}).
\end{remark}
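For instance, when $n = 1$ the condition $\phi_2(p) = \mu$ is a single linear condition on $\phi_2 \in H^0(K)$: since the canonical linear system is basepoint free, the evaluation map $H^0(K) \rightarrow K_p$ is surjective, and so the admissible Higgs fields $\phi_2$ form an affine subspace of $H^0(K)$ of dimension $g-1$.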
\begin{lemma}
Let $(E_1, \phi_1)$ be Higgs polystable and $(E_2, \phi_2)$ be a Higgs line bundle. Let $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}_1, \phi_1) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification defined by distinct $v_1, \ldots, v_n \in \mathcal{N}_{\phi_1, \phi_2}$.
Then there is an exact sequence
\begin{equation}\label{eqn:hyper-exact-sequence}
0 \rightarrow \mathcal{H}^0(E_1^* E_2) \rightarrow \mathcal{H}^0((E')^* E_2) \rightarrow \mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2)
\end{equation}
\end{lemma}
\begin{proof}
The short exact sequence \eqref{eqn:dual-short-exact} leads to the following commutative diagram of spaces of smooth sections
\begin{equation*}
\xymatrix@C=1.6em{
0 \ar[r] & \Omega^0(E_1^* E_2) \ar[r]^{i^*} \ar[d]^{L_1} & \Omega^0((E')^* E_2) \ar[r]^{ev^0} \ar[d]^{L_1} & \bigoplus_{j=1}^n \mathbb C_{p_j} \ar[r] & 0 \\
0 \ar[r] & \Omega^{0,1}(E_1^* E_2) \oplus \Omega^{1,0}(E_1^* E_2) \ar[r]^(0.47){i^*} & \Omega^{0,1}((E')^* E_2) \oplus \Omega^{1,0}((E')^* E_2) \ar[r]^(0.62){ev^1} & \bigoplus_{j=1}^n \mathbb C_{p_j} \oplus \mathbb C_{p_j} \ar[r] & 0
}
\end{equation*}
Since $\bar{\partial}_A s$ depends on the germ of a section around a point, then there is no well-defined map $\bigoplus_{j=1}^n \mathbb C_{p_j} \rightarrow \bigoplus_{j=1}^n \mathbb C_{p_j} \oplus \mathbb C_{p_j}$ making the diagram commute, so the exact sequence \eqref{eqn:hyper-exact-sequence} does not follow immediately from the standard construction, and therefore we give an explicit construction below.
First construct a map $\mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2)$ as follows. Given $z \in \mathbb C^n$, choose a smooth section $s' \in \Omega^0((E')^* E_2)$ such that $ev^0(s') = z$ and $ev^1(\bar{\partial}_A s') = 0$. Since $\phi_2(p_j) = \mu_j$, then $ev^1(\phi_2 s' - s' \phi') = 0$ and so $ev^1(L_1 s') = 0$. Therefore $(\bar{\partial}_A s', \phi_2 s' - s' \phi') = i^*(a, \varphi)$ for some $(a, \varphi) \in \Omega^{0,1}(E_1^* E_2) \oplus \Omega^{1,0}(E_1^* E_2)$. Let $[(a, \varphi)] \in \mathcal{H}^1(E_1^* E_2)$ denote the harmonic representative of $(a, \varphi)$. Define the map $\mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2)$ by $z \mapsto [(a, \varphi)]$.
To see that this is well-defined independent of the choice of $s' \in \Omega^0((E')^* E_2)$, note that if $s'' \in \Omega^0((E')^* E_2)$ is another section such that $ev^0(s'') = z$ and $ev^1(\bar{\partial}_A s'') = 0$, then $ev^0(s'' - s') = 0$, and so $s'' - s' = i^*(s)$ for some $s \in \Omega^0(E_1^* E_2)$. Therefore $L_1(s'' - s') = i^* L_1(s)$ with $[L_1(s)] = 0 \in \mathcal{H}^1(E_1^* E_2)$, and so $s'$ and $s''$ determine the same harmonic representative in $\mathcal{H}^1(E_1^* E_2)$.
To check exactness of \eqref{eqn:hyper-exact-sequence} at the term $\mathbb C^n$, note that if $z = ev^0(s')$ for some harmonic $s' \in \mathcal{H}^0((E')^* E_2)$, then $L_1(s') = 0 = i^* (0,0)$, and so $z \in \mathbb C^n$ maps to $0 \in \mathcal{H}^1(E_1^* E_2)$. Moreover, if $z$ maps to $0 \in \mathcal{H}^1(E_1^*E_2)$, then there exists $s' \in \Omega^0((E')^* E_2)$ such that $L_1(s') = i^*(a, \varphi)$ where $(a, \varphi) \in \Omega^{0,1}(E_1^* E_2) \oplus \Omega^{1,0}(E_1^* E_2)$ and $(a, \varphi) = L_1(s)$ for some $s \in \Omega^0(E_1^* E_2)$. Therefore $s'$ and $i^* s$ differ by a harmonic section of $\mathcal{H}^0((E')^* E_2)$. Since $ev^0(i^* s) = 0$, $z$ is the image of this harmonic section under the map $\mathcal{H}^0((E')^* E_2) \rightarrow \mathbb C^n$.
To check exactness at $\mathcal{H}^1(E_1^* E_2)$, given $z \in \mathbb C^n$ construct $(a, \varphi)$ as above and note that $i^*(a, \varphi) = L_1 s'$ for some $s' \in \Omega^0((E')^* E_2)$. Therefore $i^*[(a, \varphi)] = 0 \in \mathcal{H}^1((E')^* E_2)$ and so the image of $\mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2)$ is contained in the kernel of $\mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2)$. Now suppose that the image of $[(a, \varphi)]$ is zero in $\mathcal{H}^1((E')^* E_2)$, i.e. $i^*(a, \varphi) = L_1 s'$ for some $s' \in \Omega^0((E')^* E_2)$. Let $z = ev^0(s')$. Note that $z = 0$ implies that $s' = i^* s$ for some $s \in \Omega^0(E_1^* E_2)$, and so $[(a, \varphi)] = 0$. If $z \neq 0$ then there exists $s'' \in \Omega^0((E')^* E_2)$ such that $ev^1(L_1(s'')) = 0$ and $ev^0(s'') = z$. Then $L_1(s'') = i^*(a'', \varphi'')$ for some $(a'', \varphi'') \in \Omega^{0,1}(E_1^* E_2) \oplus \Omega^{1,0}(E_1^* E_2)$. Moreover, $ev^0(s'' - s') = 0$, so $s'' - s' = i^* s$ for some $s \in \Omega^0(E_1^* E_2)$. Commutativity implies that $L_1 s = (a'', \varphi'') - (a, \varphi)$, and so the harmonic representatives $[(a, \varphi)]$ and $[(a'', \varphi'')]$ are equal. Therefore $[(a, \varphi)]$ is the image of $z$ by the map $\mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2)$, which completes the proof of exactness at $\mathcal{H}^1(E_1^* E_2)$.
Exactness at the rest of the terms in the sequence \eqref{eqn:hyper-exact-sequence} then follows from standard methods.
\end{proof}
For any stable Higgs bundle $(E, \phi)$ with $d = \deg E$ and $r = \rank E$, define the \emph{generalised Segre invariant} by
\begin{equation*}
s_k(E, \phi) := k d - r \left( \max_{F \subset E, \rank F = k} \deg F \right) ,
\end{equation*}
where the maximum is taken over all $\phi$-invariant holomorphic subbundles of rank $k$. Note that $s_k(E, \phi) \geq s_k(E, 0) =: s_k(E)$ and
\begin{equation*}
\frac{1}{rk} s_k(E, \phi) = \min_{F \subset E, \rank F = k} \left( \slope(E) - \slope(F) \right)
\end{equation*}
Note that any Hecke modification $(E', \phi') \hookrightarrow (E, \phi)$ with $\deg E - \deg E' = n$ has Segre invariant $s_k(E', \phi') \geq s_k(E, \phi) - nk$. As a special case, $(E', \phi')$ is stable if $n < \frac{1}{k} s_k(E, \phi)$ for all $k = 1, \ldots, r-1$.
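To see the first claim, note that if $F' \subset E'$ is a $\phi'$-invariant subbundle of rank $k$, then (after resolving as in Lemma \ref{lem:resolve-higgs-subsheaf}) it is a subsheaf of a $\phi$-invariant subbundle $F \subset E$ of rank $k$ with $\deg F' \leq \deg F$, and therefore
\begin{equation*}
k \deg E' - r \deg F' = k(\deg E - n) - r \deg F' \geq k \deg E - r \deg F - kn \geq s_k(E, \phi) - nk .
\end{equation*}
Taking the minimum over all such $F'$ gives $s_k(E', \phi') \geq s_k(E, \phi) - nk$.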
A theorem of Lange \cite[Satz 2.2]{Lange83} shows that a general stable holomorphic bundle $E$ satisfies $s_k(E) \geq k(r - k)(g-1)$ for all $k = 1, \ldots, r - 1$. Since there is a dense open subset of stable Higgs bundles whose underlying holomorphic bundle is stable, Lange's theorem also gives the same lower bound on the Segre invariant for a general stable Higgs bundle.
\begin{lemma}\label{lem:segre-bound}
Let $0 \rightarrow (E', \phi') \rightarrow (E, \phi) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification defined by distinct points $v_1, \ldots, v_n \in \mathbb{P} E^*$ such that $n < \frac{1}{k} s_k(E, \phi)$ for all $k = 1, \ldots, r-1$. Then $\slope(G) < \slope (E')$ for any proper non-zero Higgs subbundle $(G, \phi_G) \subset (E, \phi)$. In particular, this condition is satisfied if $(E, \phi)$ is a general stable Higgs bundle and $n < g-1$.
\end{lemma}
\begin{proof}
Let $k = \rank G$ and $h = \deg G$. Then the lower bound on the Segre invariant implies that
\begin{align*}
\slope(E') - \slope(G) = \frac{d - n}{r} - \frac{h}{k} & = \frac{1}{rk} \left(kd - kn - rh \right) \\
& \geq \frac{1}{rk} \left( s_k(E, \phi) - kn \right)
\end{align*}
Therefore if $n < \frac{1}{k} s_k(E, \phi)$ then $\slope(E') - \slope(G) > 0$ for any Higgs subbundle of rank $k$. If $n < g-1$ then \cite[Satz 2.2]{Lange83} shows that this condition is satisfied for general stable Higgs bundles.
\end{proof}
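For example, if $\rank E = 2$ then the only relevant invariant is $s_1(E, \phi)$, and for a general stable Higgs bundle Lange's bound gives $s_1(E, \phi) \geq s_1(E) \geq g-1$. Hence for such a bundle any Hecke modification at $n \leq g-2$ distinct points satisfies $\slope(G) < \slope(E')$ for every $\phi$-invariant line subbundle $G \subset E$.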
\begin{corollary}\label{cor:n-dim-kernel}
Let $(E_1, \phi_1)$ be a stable Higgs bundle, let $n < \frac{1}{k} s_k(E_1, \phi_1)$ for all $k=1, \ldots, \rank(E_1)-1$ and let $(E_2, \phi_2)$ be a Higgs line bundle such that $\deg E_2 < \frac{\deg E_1 - n}{\rank E_1}$. Then given any set of $n$ distinct points $\{ v_1, \ldots, v_n \} \subset \mathcal{N}_{\phi_1, \phi_2}$ there is a well-defined $n$-dimensional subspace $\ker (\mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2))$.
\end{corollary}
\begin{proof}
Let $(E', \phi')$ be the Hecke modification of $(E_1, \phi_1)$ determined by $\{ v_1, \ldots, v_n \} \subset \mathbb{P} E_1^*$. The lower bound on the Segre invariant implies that $(E', \phi')$ is Higgs stable, and therefore $\mathcal{H}^0((E')^* E_2) = 0$ since $\slope(E_2) < \slope(E') = \frac{\deg E_1 - n}{\rank E_1}$. The exact sequence \eqref{eqn:hyper-exact-sequence} then reduces to
\begin{equation*}
0 \rightarrow \mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2)
\end{equation*}
and so $\ker (\mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2))$ is a well-defined $n$-dimensional subspace of $\mathcal{H}^1(E_1^* E_2)$ associated to $\{ v_1, \ldots, v_n \}$.
\end{proof}
\begin{remark}\label{rem:canonical-map}
As noted above, the maps $\mathbb C^n \rightarrow \mathcal{H}^1(E_1^* E_2)$ depend on choosing trivialisations, but different choices lead to the same map up to a change of basis of $\mathbb C^n$, and so the subspace $\ker (\mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2))$ is independent of these choices.
In the special case where $n=1$, then this construction gives a well-defined map $\mathcal{N}_{\phi_1, \phi_2} \rightarrow \mathbb{P} \mathcal{H}^1(E_1^* E_2)$. When $n < \frac{1}{k} s_k(E_1, \phi_1)$ for all $k$, then Corollary \ref{cor:n-dim-kernel} shows that any $n$ distinct points $v_1, \ldots, v_n$ span a nondegenerate copy of $\mathbb{P}^{n-1}$ in $\mathbb{P} \mathcal{H}^1(E_1^* E_2)$.
In the special case where $\phi_1 = \phi_2 = 0$ and $E_2$ is trivial, $\mathcal{N}_{\phi_1, \phi_2} = \mathbb{P} E_1^*$ and $\mathcal{H}^1(E_1^*) \cong H^{0,1}(E_1^*) \oplus H^{1,0}(E_1^*)$. Then the map $\mathbb{P} E_1^* \rightarrow \mathcal{H}^1(E_1^*) \rightarrow H^{0,1}(E_1^*) \cong H^0(E_1 \otimes K)^*$ is the usual map defined for holomorphic bundles (cf. \cite[p804]{HwangRamanan04}).
\end{remark}
\begin{definition}\label{def:secant-variety}
The \emph{$n^{th}$ secant variety}, denoted $\Sec^n(\mathcal{N}_{\phi_1, \phi_2}) \subset \mathbb{P} \mathcal{H}^1(E_1^* E_2)$, is the union of the subspaces $\vecspan \{ v_1, \ldots, v_n \} \subset \mathbb{P} \mathcal{H}^1(E_1^* E_2)$ taken over all $n$-tuples of distinct points $v_1, \ldots, v_n \in \mathcal{N}_{\phi_1, \phi_2}$.
\end{definition}
The next lemma is a Higgs bundle version of \cite[Lemma 3.1]{NarasimhanRamanan69}. Since the proof is similar to that in \cite{NarasimhanRamanan69}, it is omitted.
\begin{lemma}\label{lem:nr-higgs}
Let $0 \rightarrow (E_2, \phi_2) \rightarrow (F, \tilde{\phi}) \rightarrow (E_1, \phi_1) \rightarrow 0$ be an extension of Higgs bundles defined by the extension class $[(a, \varphi)] \in \mathcal{H}^1(E_1^* E_2)$. Let $(E', \phi') \stackrel{i}{\longrightarrow} (E_1, \phi_1)$ be a Higgs subsheaf such that $i^*[(a, \varphi)] = 0 \in \mathcal{H}^1((E')^* E_2)$. Then $(E', \phi')$ is a Higgs subsheaf of $(F, \tilde{\phi})$.
\end{lemma}
\begin{equation*}
\xymatrix{
& & & (E', \phi') \ar[d]^i \ar@{-->}[dl] \\
0 \ar[r] & (E_2, \phi_2) \ar[r] & (F, \tilde{\phi}) \ar[r] & (E_1, \phi_1) \ar[r] & 0 \\
}\qedhere
\end{equation*}
\begin{corollary}\label{cor:linear-span}
Let $(E_1, \phi_1)$ be stable, let $n < \frac{1}{k} s_k(E_1, \phi_1)$ for all $k=1, \ldots, \rank(E_1)-1$, let $(E_2, \phi_2)$ be a Higgs line bundle and suppose that $\deg E_2 < \frac{\deg E_1 - n}{\rank E_1}$. Let $0 \rightarrow (E_2, \phi_2) \rightarrow (F, \tilde{\phi}) \rightarrow (E_1, \phi_1) \rightarrow 0$ be an extension of Higgs bundles with extension class $[(a, \varphi)] \in \mathcal{H}^1(E_1^* E_2)$. Let $0 \rightarrow (E', \phi') \stackrel{i}{\hookrightarrow} (E_1, \phi_1) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification determined by distinct points $\{v_1, \ldots, v_n\} \subset \mathcal{N}_{\phi_1, \phi_2}$.
Then $(E', \phi')$ is a subsheaf of $(F, \tilde{\phi})$ if $[(a, \varphi)] \in \vecspan\{ v_1, \ldots, v_n \} \subset \mathcal{H}^1(E_1^* E_2)$.
\end{corollary}
\begin{proof}
If $[(a, \varphi)] \in \vecspan\{ v_1, \ldots, v_n \}$ then $[(a, \varphi)] \in \ker (\mathcal{H}^1(E_1^* E_2) \rightarrow \mathcal{H}^1((E')^* E_2))$ by Corollary \ref{cor:n-dim-kernel}, and therefore $(E', \phi')$ is a subsheaf of $(F, \tilde{\phi})$ by Lemma \ref{lem:nr-higgs}.
\end{proof}
The next lemma gives a condition on the extension class $[(a, \varphi)] \in \mathcal{H}^1(E_1^* E_2)$ for $(E', \phi')$ to be the subsheaf of largest degree which lifts to a subsheaf of $(F, \tilde{\phi})$. This is used to study unbroken flow lines in Section \ref{sec:secant-criterion}.
\begin{lemma}\label{lem:nondegenerate-maximal}
Let $(E_1, \phi_1)$ be a stable Higgs bundle, choose $n$ such that $2n-1 < \frac{1}{k} s_k(E_1, \phi_1)$ for all $k=1, \ldots, \rank(E_1)-1$, let $(E_2, \phi_2)$ be a Higgs line bundle and suppose that $\deg E_2 < \frac{\deg E_1 - (2n-1)}{\rank E_1}$. Let $0 \rightarrow (E_2, \phi_2) \rightarrow (F, \tilde{\phi}) \rightarrow (E_1, \phi_1) \rightarrow 0$ be an extension of Higgs bundles with extension class $[(a, \varphi)] \in \Sec^n(\mathcal{N}_{\phi_1, \phi_2}) \setminus \Sec^{n-1}(\mathcal{N}_{\phi_1, \phi_2}) \subset \mathbb{P} \mathcal{H}^1(E_1^* E_2)$ and let $0 \rightarrow (E', \phi') \stackrel{i}{\hookrightarrow} (E_1, \phi_1) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification determined by distinct points $v_1, \ldots, v_n \in \mathcal{N}_{\phi_1, \phi_2}$ such that $i^* [(a, \varphi)] = 0$.
Let $(\mathcal{E}'', \phi'') \stackrel{i''}{\hookrightarrow} (\mathcal{E}_1, \phi_1)$ be a subsheaf such that $(i'')^* [(a, \varphi)] = 0 \in \mathcal{H}^1((E'')^* E_2)$ and $\rank E'' = \rank E_1$. Then $\deg(E'') \leq \deg(E')$.
\end{lemma}
\begin{proof}
Let $\{ v_1'', \ldots, v_m'' \} \subset \mathcal{N}_{\phi_1, \phi_2}$ be the set of distinct points defining the Hecke modification $(\mathcal{E}'', \phi'') \stackrel{i''}{\hookrightarrow} (\mathcal{E}_1, \phi_1)$. Then $i^* [(a, \varphi)] = 0$ and $(i'')^*[(a, \varphi)] = 0$ together imply that $[(a, \varphi)] \in \vecspan\{ v_1, \ldots, v_n \} \cap \vecspan \{ v_1'', \ldots, v_m''\}$. Either $m + n > 2n-1$ (and so $\deg E'' \leq \deg E'$) or $m + n \leq 2n-1$ in which case Corollary \ref{cor:n-dim-kernel} together with the lower bound $2n-1 < \frac{1}{k} s_k(E_1, \phi_1)$ implies that $\vecspan \{ v_1, \ldots, v_n \} \cap \vecspan \{v_1'', \ldots, v_m'' \}$ is the linear span of $\{ v_1, \ldots, v_n \} \cap \{ v_1'', \ldots, v_m'' \}$. Since $m+n \leq 2n-1$ then $\{ v_1, \ldots, v_n \} \cap \{ v_1'', \ldots, v_m'' \}$ is a strict subset of $\{ v_1, \ldots, v_n\}$, which is not possible since $[(a, \varphi)] \notin \Sec^{n-1}(\mathcal{N}_{\phi_1, \phi_2})$. Therefore $\deg E'' \leq \deg E'$.
\end{proof}
\subsection{Constructing Hecke modifications of Higgs bundles via the Yang-Mills-Higgs flow.}\label{sec:YMH-flow-hecke}
Let $(E, \phi)$ be a stable Higgs bundle and $L_u$ a line bundle with $\deg L_u < \frac{\deg E - 1}{\rank E}$, and let $E'$ be a Hecke modification of $E$ which is compatible with the Higgs field
\begin{equation*}
0 \rightarrow (\mathcal{E}', \phi') \stackrel{i}{\hookrightarrow} (\mathcal{E}, \phi) \stackrel{v}{\rightarrow} \mathbb C_p \rightarrow 0 .
\end{equation*}
The goal of this section is to construct critical points $x_u = (L_u, \phi_u) \oplus (E, \phi)$ and $x_\ell = (L_\ell, \phi_\ell) \oplus (E', \phi')$ together with a broken flow line connecting $x_u$ and $x_\ell$. The result of Theorem \ref{thm:algebraic-flow-line} shows that this amounts to constructing a Higgs field $\phi_u \in H^0(K)$, a Higgs pair $(F, \tilde{\phi})$ in the unstable set of $x_u$ and a complex gauge transformation $g \in \mathcal{G}^\mathbb C$ such that $(E', \phi')$ is a Higgs subbundle of $g \cdot (F, \tilde{\phi})$.
\begin{lemma}\label{lem:construct-Higgs-extension}
Let $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}, \phi) \stackrel{v}{\rightarrow} \mathbb C_p \rightarrow 0$ be a Hecke modification such that $(E, \phi)$ and $(E', \phi')$ are both Higgs semistable, and let $L_u$ be a line bundle with $\deg L_u < \slope(E') < \slope(E)$. Then there exists a Higgs field $\phi_u \in H^0(K)$ and a non-trivial Higgs extension $(F, \tilde{\phi})$ of $(L_u, \phi_u)$ by $(E, \phi)$ such that $(E', \phi')$ is a Higgs subsheaf of $(F, \tilde{\phi})$.
\end{lemma}
\begin{proof}
By Remark \ref{rem:miniscule-compatible}, there exists $\phi_u \in H^0(K)$ such that $v \in \mathcal{N}_{\phi, \phi_u}$. Since $(E', \phi')$ is semistable with $\slope(E') > \slope(L_u)$ then $\mathcal{H}^0((E')^* L_u) = 0$ and so the exact sequence \eqref{eqn:hyper-exact-sequence} shows that the Hecke modification $v \in \mathbb{P} E^*$ determines a one-dimensional subspace of $\mathcal{H}^1(E^* L_u)$, and that any non-trivial extension class in this subspace is in the kernel of the map $\mathcal{H}^1(E^* L_u) \rightarrow \mathcal{H}^1((E')^* L_u)$. Let $0 \rightarrow (L_u, \phi_u) \rightarrow (F, \tilde{\phi}) \rightarrow (E, \phi) \rightarrow 0$ be such an extension. Then Lemma \ref{lem:nr-higgs} shows that $(E', \phi')$ is a Higgs subsheaf of $(F, \tilde{\phi})$.
\end{proof}
We can now use this result to relate Hecke modifications at a single point with $\mathop{\rm YMH}\nolimits$ flow lines.
\begin{theorem}\label{thm:flow-hecke}
\begin{enumerate}
\item Let $0 \rightarrow (E', \phi') \rightarrow (E, \phi) \stackrel{v}{\rightarrow} \mathbb C_p \rightarrow 0$ be a Hecke modification such that $(E, \phi)$ is stable and $(E', \phi')$ is semistable, and let $L_u$ be a line bundle with $\deg L_u + 1 < \slope(E') < \slope(E)$. Then there exist sections $\phi_u, \phi_\ell \in H^0(K)$, a line bundle $L_\ell$ with $\deg L_\ell = \deg L_u + 1$ and a metric on $E \oplus L_u$ such that $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E_{gr}', \phi_{gr}') \oplus (L_\ell, \phi_\ell)$ are critical points connected by a $\mathop{\rm YMH}\nolimits$ flow line, where $(E_{gr}', \phi_{gr}')$ is isomorphic to the graded object of the Seshadri filtration of $(E', \phi')$.
\item Let $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E', \phi') \oplus (L_\ell, \phi_\ell)$ be critical points connected by a $\mathop{\rm YMH}\nolimits$ flow line such that $L_u, L_\ell$ are line bundles with $\deg L_\ell = \deg L_u + 1$, $(E, \phi)$ is stable and $(E', \phi')$ is polystable with $\deg L_u + 1 < \slope(E') < \slope(E)$. If $(E', \phi')$ is Higgs stable then it is a Hecke modification of $(E, \phi)$. If $(E', \phi')$ is Higgs polystable then it is the graded object of the Seshadri filtration of a Hecke modification of $(E, \phi)$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:flow-hecke}]
Given a Hecke modification $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}, \phi) \rightarrow \mathbb C_p \rightarrow 0$ as in Lemma \ref{lem:construct-Higgs-extension}, choose $\phi_u \in H^0(K)$ such that $v \in \mathcal{N}_{\phi, \phi_u}$ and apply a gauge transformation to $E \oplus L_u$ such that $x_u = (E, \phi) \oplus (L_u, \phi_u)$ is a critical point of $\mathop{\rm YMH}\nolimits$. The harmonic representative of the extension class $[(a, \varphi)] \in \mathcal{H}^1(E^* L_u)$ from Lemma \ref{lem:construct-Higgs-extension} defines an extension $0 \rightarrow (L_u, \phi_u) \rightarrow (F, \tilde{\phi}) \rightarrow (E, \phi) \rightarrow 0$ such that $y = (F, \tilde{\phi})$ is in the negative slice of $x_u$, and therefore flows down to a limit isomorphic to the graded object of the Harder-Narasimhan-Seshadri filtration of $(F, \tilde{\phi})$.
Lemma \ref{lem:construct-Higgs-extension} also shows that $(E', \phi')$ is a Higgs subsheaf of $(F, \tilde{\phi})$. Lemma \ref{lem:resolve-higgs-subsheaf} shows that this has a resolution as a Higgs subbundle of $(F, \tilde{\phi})$, however since the Harder-Narasimhan type of $(F, \tilde{\phi})$ is strictly less than that of $(E, \phi) \oplus (L_u, \phi_u)$, $\rank(E') = \rank(F) - 1$ and $\deg E' = \deg E - 1$, then $(E', \phi')$ already has the maximal possible slope for a semistable Higgs subbundle of $(F, \tilde{\phi})$, and therefore $(E', \phi')$ must be the maximal semistable Higgs subbundle. Since $\rank(E') = \rank(F) - 1$, then the graded object of the Harder-Narasimhan-Seshadri filtration of $(F, \tilde{\phi})$ is $(E_{gr}', \phi_{gr}') \oplus (L_\ell, \phi_\ell)$, where $(L_\ell, \phi_\ell) = (F, \tilde{\phi}) / (E', \phi')$. Theorem \ref{thm:algebraic-flow-line} then shows that $(E, \phi) \oplus (L_u, \phi_u)$ and $(E_{gr}', \phi_{gr}') \oplus (L_\ell, \phi_\ell)$ are connected by a flow line.
Conversely, if $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E', \phi') \oplus (L_\ell, \phi_\ell)$ are critical points connected by a flow line, then Theorem \ref{thm:algebraic-flow-line} shows that there exists a Higgs pair $(F, \tilde{\phi})$ in the negative slice of $x_u$ such that $(E', \phi')$ is the graded object of the Seshadri filtration of the maximal semistable Higgs subbundle of $(F, \tilde{\phi})$. If $(E', \phi')$ is Higgs stable, then since $\slope(E') > \slope (L_u)$ we see that $(E', \phi')$ is a Higgs subsheaf of $(E, \phi)$ with $\rank(E) = \rank(E')$ and $\deg(E') = \deg(E) - 1$. Therefore $(E', \phi')$ is a Hecke modification of $(E, \phi)$. If $(E', \phi')$ is Higgs polystable then the same argument shows that $(E', \phi')$ is the graded object of the Seshadri filtration of a Hecke modification of $(E, \phi)$.
\end{proof}
In general, for any flow one can define the space $\mathcal{F}_{\ell, u}$ of flow lines connecting upper and lower critical sets $C_u$ and $C_\ell$, and the space $\mathcal{P}_{\ell, u} \subset C_u \times C_\ell$ of pairs of critical points connected by a flow line. These spaces are equipped with projection maps to the critical sets defined by the canonical projection taking a flow line to its endpoints.
\begin{equation}
\xymatrix{
& \mathcal{F}_{\ell, u} \ar[d] \ar@/_/[ddl] \ar@/^/[ddr] & \\
& \mathcal{P}_{\ell, u} \ar[dl] \ar[dr] & \\
C_\ell & & C_u
}
\end{equation}
For the Yang-Mills-Higgs flow, given critical sets $C_u$ and $C_\ell$ of respective Harder-Narasimhan types $(\frac{d}{r}, \deg L_u)$ and $(\frac{d-1}{r}, \deg L_u + 1)$ as in Theorem \ref{thm:flow-hecke} above, there are natural projection maps to the moduli space $C_u \rightarrow \mathcal{M}_{ss}^{Higgs}(r, d)$ and $C_\ell \rightarrow \mathcal{M}_{ss}^{Higgs}(r, d-1)$. Since the flow is $\mathcal{G}$-equivariant, there is an induced correspondence variety $\mathcal{M}_{\ell, u} \subset \mathcal{M}_{ss}^{Higgs}(r, d-1) \times \mathcal{M}_{ss}^{Higgs}(r, d)$.
\begin{equation}\label{eqn:flow-hecke-diagram}
\xymatrix{
& \mathcal{P}_{\ell, u} \ar[dl] \ar[dr] \ar[d] & \\
C_\ell \ar[d] & \mathcal{M}_{\ell,u} \ar[dl] \ar[dr] & C_u \ar[d] \\
\mathcal{M}_{ss}^{Higgs}(r, d-1) & & \mathcal{M}_{ss}^{Higgs}(r,d)
}
\end{equation}
Theorem \ref{thm:flow-hecke} shows that $\left( (E', \phi'), (E, \phi) \right) \in \mathcal{M}_{\ell, u}$ if and only if $(E', \phi')$ is a Hecke modification of $(E, \phi)$ and both Higgs pairs are semistable. If $r$ and $d$ are coprime then $\mathcal{M}_{ss}^{Higgs}(r,d)$ consists of stable Higgs pairs and so every Hecke modification of $(E, \phi)$ is semistable. Therefore we have proved
\begin{corollary}
$\mathcal{M}_{\ell,u}$ is the Hecke correspondence.
\end{corollary}
For Hecke modifications defined at multiple points (non-minuscule Hecke modifications in the terminology of \cite{witten-hecke}), we immediately have the following result.
\begin{corollary}\label{cor:broken-hecke}
Let $(E, \phi)$ be a $(0,n)$-stable Higgs bundle and consider a Hecke modification $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}, \phi) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ defined by $n > 1$ distinct points $\{ v_1, \ldots, v_n \} \subset \mathbb{P} E^*$. If there exists $\phi_u \in H^0(K)$ such that $v_1, \ldots, v_n \in \mathcal{N}_{\phi, \phi_u}$, then there is a broken flow line connecting $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E_{gr}', \phi_{gr}') \oplus (L_\ell, \phi_\ell)$, where $(E_{gr}', \phi_{gr}')$ is the graded object of the Seshadri filtration of the semistable Higgs bundle $(E', \phi')$.
\end{corollary}
\begin{proof}
Inductively apply Theorem \ref{thm:flow-hecke}.
\end{proof}
\subsection{A geometric criterion for unbroken $\mathop{\rm YMH}\nolimits$ flow lines}\label{sec:secant-criterion}
Corollary \ref{cor:broken-hecke} gives a criterion for two $\mathop{\rm YMH}\nolimits$ critical points $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E', \phi') \oplus (L_\ell, \phi_\ell)$ to be connected by a broken flow line. It is natural to ask whether they are also connected by an \emph{unbroken} flow line. The goal of this section is to answer this question by giving a geometric construction for points in the negative slice of $x_u$ which correspond to unbroken flow lines connecting $x_u$ and $x_\ell$ in terms of the secant varieties $\Sec^n(\mathcal{N}_{\phi, \phi_u})$. For holomorphic bundles, the connection between secant varieties and Hecke modifications has been studied in \cite{LangeNarasimhan83}, \cite{ChoeHitching10} and \cite{Hitching13}.
Given a $\mathop{\rm YMH}\nolimits$ critical point $x_u = (E, \phi) \oplus (L_u, \phi_u)$ with $(E, \phi)$ stable and $\rank L_u =1$, consider an extension $0 \rightarrow (L_u, \phi_u) \rightarrow (F, \tilde{\phi}) \rightarrow (E, \phi) \rightarrow 0$ with extension class $[(a, \varphi)] \in \mathcal{H}^1(E^* L_u) = S_{x_u}^-$. Let $0 \rightarrow (E', \phi') \rightarrow (E, \phi) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification of $(E, \phi)$ as in the previous lemma, such that $\deg L_u < \slope(E')$.
\begin{lemma}\label{lem:subbundle-slope-bound}
If $(G, \phi_G)$ is a semistable Higgs subbundle of $(F, \tilde{\phi})$ with $\slope(G) > \deg L_u$ and $\rank(G) < \rank (E)$, then there is a Higgs subbundle $(G', \phi_G') \subset (E, \phi)$ with $\slope(G') \geq \slope(G)$ and $\rank(G) = \rank(G')$.
\end{lemma}
\begin{proof}
If $(G, \phi_G)$ is a semistable Higgs subbundle of $(F, \tilde{\phi})$ with $\slope(G) > \deg L_u$, then $\mathcal{H}^0(G^* L_u) = 0$, and so $(G, \phi_G)$ is a Higgs subsheaf of $(E, \phi)$.
\begin{equation*}
\xymatrix{
& & & (G, \phi_G) \ar[dl] \ar@{-->}[d] \\
0 \ar[r] & (L_u, \phi_u) \ar[r] & (F, \tilde{\phi}) \ar[r] & (E, \phi) \ar[r] & 0
}
\end{equation*}
Lemma \ref{lem:resolve-higgs-subsheaf} shows that the subsheaf $(G, \phi_G)$ can be resolved to form a Higgs subbundle $(G', \phi_G')$ of $(E, \phi)$ with $\slope(G') \geq \slope(G)$.
\end{proof}
\begin{theorem}\label{thm:unbroken-criterion}
Let $(E, \phi)$ be a stable Higgs bundle with Segre invariant $s_k(E, \phi)$ and choose $n$ such that $0 < 2n-1 < \min_{1 \leq k \leq r-1} \left( \frac{1}{k} s_k(E, \phi) \right)$. Let $0 \rightarrow (\mathcal{E}', \phi') \rightarrow (\mathcal{E}, \phi) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0$ be a Hecke modification of $(E, \phi)$ defined by distinct points $v_1, \ldots, v_n \in \mathbb{P} E^*$, and let $(L_u, \phi_u)$ be a Higgs line bundle such that $v_1, \ldots, v_n \in \mathcal{N}_{\phi, \phi_u}$. Choose a metric such that $x_u = (E, \phi) \oplus (L_u, \phi_u)$ is a $\mathop{\rm YMH}\nolimits$ critical point.
Then any extension class $[(a, \varphi)] \in \vecspan \{ v_1, \ldots, v_n \} \cap \left( \Sec^n(\mathcal{N}_{\phi, \phi_u}) \setminus \Sec^{n-1}(\mathcal{N}_{\phi, \phi_u}) \right) \subset \mathbb{P} \mathcal{H}^1(E^* L_u)$ corresponds to an unbroken flow line connecting $x_u = (E, \phi) \oplus (L_u, \phi_u)$ and $x_\ell = (E', \phi') \oplus (L_\ell, \phi_\ell)$.
\end{theorem}
\begin{proof}
Let $(F, \tilde{\phi})$ be a Higgs bundle determined by the extension class $[(a, \varphi)] \in \mathbb{P} \mathcal{H}^1(E^* L_u)$. The choice of bundle is not unique, but the isomorphism class of $(F, \tilde{\phi})$ is unique. The proof reduces to showing that $(E', \phi')$ is the maximal semistable Higgs subbundle of $(F, \tilde{\phi})$.
Since $[(a, \varphi)] \notin \Sec^{n-1}(\mathcal{N}_{\phi, \phi_u})$, then Lemma \ref{lem:nondegenerate-maximal} shows that $(E', \phi')$ is the subsheaf of $(E, \phi)$ with maximal degree among those that lift to a subsheaf of $(F, \tilde{\phi})$. Any semistable Higgs subbundle $(E'', \phi'')$ of $(F, \tilde{\phi})$ with $\rank(E'') = \rank(E)$ either has $\slope(E'') \leq \deg L_u < \slope(E')$, or it is a subsheaf of $(E, \phi)$ and so must have $\slope(E'') \leq \slope(E')$.
The previous lemma shows that if $(G, \phi_G)$ is any semistable Higgs subbundle of $(F, \tilde{\phi})$ with $\slope(G) > \deg L_u$ and $\rank(G) < \rank(E)$, then there is a Higgs subbundle $(G', \phi_G')$ of $(E, \phi)$ with $\slope(G') \geq \slope(G)$. The upper bound on $n = \deg E - \deg E'$ in terms of the Segre invariant then implies that $\slope(E') > \slope(G') \geq \slope(G)$ by Lemma \ref{lem:segre-bound}.
Therefore the subbundle $(\tilde{E}', \tilde{\phi}')$ resolving the subsheaf $(E', \phi') \subset (F, \tilde{\phi})$ is the maximal semistable Higgs subbundle of $(F, \tilde{\phi})$. Since $(\tilde{E}', \tilde{\phi}')$ is semistable and $\slope(\tilde{E}') \geq \slope(E') > \deg L_u$, then $\mathcal{H}^0((\tilde{E}')^* L_u) = 0$, and so $(\tilde{E}', \tilde{\phi}')$ is a Higgs subsheaf of $(E, \phi)$ that lifts to a subbundle of $(F, \tilde{\phi})$. Since $\deg E'$ is maximal among all such subsheaves, then we must have $(E', \phi') = (\tilde{E}', \tilde{\phi}')$ and so $(E', \phi')$ is the maximal semistable subbundle of $(F, \tilde{\phi})$. Therefore Theorem \ref{thm:algebraic-flow-line} shows that $x_u$ and $x_\ell$ are connected by an unbroken flow line.
\end{proof}
If $\rank(F) = 2$ (so that $E$ is a line bundle), then the condition on the Segre invariant $s_k(E, \phi)$ becomes vacuous. Moreover, $\mathbb{P} E^* \cong X$ and so Hecke modifications of $E$ are determined by a subset $\{ v_1, \ldots, v_n \} \subset X$. Therefore in the case $\rank(F) = 2$, we have a complete classification of the $\mathop{\rm YMH}\nolimits$ flow lines on the space of Higgs bundles $\mathcal{B}(F)$.
\begin{corollary}\label{cor:rank-2-classification}
Let $F \rightarrow X$ be a $C^\infty$ Hermitian vector bundle with $\rank(F) = 2$. Let $x_u = (L_1^u, \phi_1^u) \oplus (L_2^u, \phi_2^u)$ and $x_\ell = (L_1^\ell, \phi_1^\ell) \oplus (L_2^\ell, \phi_2^\ell)$ be non-minimal critical points with $\mathop{\rm YMH}\nolimits(x_u) > \mathop{\rm YMH}\nolimits(x_\ell)$. Suppose without loss of generality that $\deg L_1^u > \deg L_1^\ell > \deg L_2^\ell > \deg L_2^u$. Let $n = \deg L_1^u - \deg L_1^\ell$.
Then $x_u$ and $x_\ell$ are connected by a broken flow line if and only if there exists $\{ v_1, \ldots, v_n \} \subset \mathcal{N}_{\phi_1^u, \phi_2^u}$ such that
\begin{align*}
0 \rightarrow (L_1^\ell, \phi_1^\ell) \rightarrow (L_1^u, \phi_1^u) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0 \\
0 \rightarrow (L_2^u, \phi_2^u) \rightarrow (L_2^\ell, \phi_2^\ell) \rightarrow \oplus_{j=1}^n \mathbb C_{p_j} \rightarrow 0
\end{align*}
are both Hecke modifications determined by $\{ v_1, \ldots, v_n \}$. They are connected by an unbroken flow line if the previous condition holds and $\vecspan \{ v_1, \ldots, v_n \}$ contains a point of $\Sec^n(\mathcal{N}_{\phi_1^u, \phi_2^u}) \setminus \Sec^{n-1}(\mathcal{N}_{\phi_1^u, \phi_2^u})$.
\end{corollary}
\appendix
\section{Uniqueness for the reverse Yang-Mills-Higgs flow}\label{sec:uniqueness}
The methods of Donaldson \cite{Donaldson85} and Simpson \cite{Simpson88} show that the Yang-Mills-Higgs flow resembles a nonlinear heat equation, and therefore the backwards flow is ill-posed. In Section \ref{sec:scattering-convergence} we prove existence of solutions to the backwards heat flow that converge to a critical point. To show that these solutions are well-defined we prove in this section that if a solution to the reverse $\mathop{\rm YMH}\nolimits$ flow exists then it must be unique.
Using the Hermitian metric, let $d_A$ be the Chern connection associated to $\bar{\partial}_A$ and let $\psi = \phi + \phi^* \in \Omega^1(i \mathop{\rm ad}\nolimits(E))$. The holomorphicity condition $\bar{\partial}_A \phi = 0$ becomes the pair of equations $d_A \psi = 0$, $d_A^* \psi = 0$ which also imply that $[F_A, \psi] = d_A^2 \psi = 0$, and the Yang-Mills-Higgs functional is $\| F_A + \psi \wedge \psi \|_{L^2}^2$.
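In this notation the relationship with the Higgs field is the following: since $\phi \in \Omega^{1,0}(\mathop{\rm End}\nolimits(E))$ and $\phi^* \in \Omega^{0,1}(\mathop{\rm End}\nolimits(E))$, on a Riemann surface the $(2,0)$ and $(0,2)$ components $\phi \wedge \phi$ and $\phi^* \wedge \phi^*$ vanish, and so
\begin{equation*}
\psi \wedge \psi = \phi \wedge \phi^* + \phi^* \wedge \phi = [\phi, \phi^*] ,
\end{equation*}
recovering the usual expression $\| F_A + [\phi, \phi^*] \|_{L^2}^2$ for the Yang-Mills-Higgs functional.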
\begin{proposition}\label{prop:backwards-uniqueness}
Let $(d_{A_1}, \psi_1)(t)$, $(d_{A_2}, \psi_2)(t)$ be two solutions of the Yang-Mills-Higgs flow \eqref{eqn:YMH-flow-general} on a compact Riemann surface with respective initial conditions $(d_{A_1}, \psi_1)(0)$ and $(d_{A_2}, \psi_2)(0)$. If there exists a finite $T > 0$ such that $(d_{A_1}, \psi_1)(T) = (d_{A_2}, \psi_2)(T)$ then $(d_{A_1}, \psi_1)(t) = (d_{A_2}, \psi_2)(t)$ for all $t \in [0, T]$.
\end{proposition}
The result of Proposition \ref{prop:backwards-uniqueness} is valid when the base manifold is a compact Riemann surface, since we use the estimates of \cite[Sec. 3.2]{Wilkin08} to prove that the constant $C$ in Lemma \ref{lem:heat-inequalities} is uniform. In the case of the Yang-Mills flow on a compact K\"ahler manifold the estimates of Donaldson in \cite{Donaldson85} show that we can make this constant uniform on a finite time interval $[0,T]$ and so the result also applies in this setting. The setup described in the previous paragraph consisting of Higgs pairs $(d_A, \psi)$ satisfying $d_A \psi = 0$, $d_A^* \psi = 0$ is valid on any Riemannian manifold, and so the result of Proposition \ref{prop:backwards-uniqueness} will also apply to any class of solutions for which one can prove that the connection, Higgs field, the curvature and all of their derivatives are uniformly bounded on the given finite time interval $[0, T]$.
Let $\nabla_A$ denote the covariant derivative associated to the connection $d_A$. The complex connection associated to the pair $(d_A, \psi)$ is $D_{(A, \psi)} \eta = d_A \eta + [\psi, \eta]$ and the Laplacian is $\Delta_{(A, \psi)} \eta = D_{(A, \psi)}^* D_{(A, \psi)} \eta + D_{(A, \psi)} D_{(A, \psi)}^* \eta$ for any form $\eta \in \Omega^p(\mathop{\rm End}\nolimits(E))$. The equation $d_A \psi = 0$ implies that the curvature of the complex connection is $D_{(A, \psi)} D_{(A, \psi)} \eta = [F_A + \psi \wedge \psi, \eta]$.
We have the following identities which will be useful in what follows. The notation $a \times b$ is used to denote various bilinear expressions with constant coefficients.
\begin{align}
0 & = d_A(F_A + \psi \wedge \psi) , \quad 0 = [\psi, F_A + \psi \wedge \psi] \label{eqn:Higgs-Bianchi} \\
\Delta_{(A, \psi)} \eta & = \nabla_A^* \nabla_A \eta + (F_A + \psi \wedge \psi) \times \eta + R_M \times \eta + \psi \times \psi \times \eta + \nabla_A \psi \times \psi \times \eta \quad \label{eqn:Higgs-Weitzenbock} \\
0 & = D_{(A, \psi)}^* D_{(A, \psi)}^* (F_A + \psi \wedge \psi) \label{eqn:compose-adjoint}
\end{align}
The first identity follows from the Bianchi identity and the equation $d_A \psi = 0$. Equation \eqref{eqn:Higgs-Weitzenbock} is the Weitzenb\"{o}ck identity for a Higgs pair, which follows from the usual identity for $\nabla_A$ (see for example \cite{BourguignonLawson81}) together with the fact that $(\psi \wedge \psi) \times \eta$ and the remaining terms in the Laplacian are of the form $\psi \times \psi \times \eta + \nabla_A \psi \times \psi \times \eta$. To see the identity \eqref{eqn:compose-adjoint}, take the inner product of the right hand side with an arbitrary $\eta \in \Omega^0(\mathop{\rm End}\nolimits(E))$. We have (cf. \cite[(2.2)]{Rade92} for the case $\psi = 0$)
\begin{align*}
\left< D_{(A, \psi)}^* D_{(A, \psi)}^* (F_A + \psi \wedge \psi), \eta \right> & = \left< F_A + \psi \wedge \psi, D_{(A, \psi)} D_{(A, \psi)} \eta \right> \\
& = \left< F_A + \psi \wedge \psi, [F_A + \psi \wedge \psi, \eta] \right> = 0
\end{align*}
Consider the Yang-Mills-Higgs flow equations
\begin{equation}\label{eqn:YMH-flow-general}
\frac{\partial A}{\partial t} = - d_A^* (F_A + \psi \wedge \psi), \quad \frac{\partial \psi}{\partial t} = * [\psi, *(F_A + \psi \wedge \psi)]
\end{equation}
After using the metric to decompose $\Omega^1(\mathop{\rm End}\nolimits(E)) \cong \Omega^1(\mathop{\rm ad}\nolimits(E)) \oplus \Omega^1(i \mathop{\rm ad}\nolimits(E))$, the flow equation can be written more compactly as
\begin{equation*}
\frac{\partial}{\partial t} (d_A + \psi) = - D_{(A, \psi)}^* (F_A + \psi \wedge \psi)
\end{equation*}
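To see that this compact form reproduces \eqref{eqn:YMH-flow-general}, note that with the conventions used here the formal adjoint of $D_{(A, \psi)} = d_A + [\psi, \cdot]$ on $2$-forms is $D_{(A, \psi)}^* \xi = d_A^* \xi - *[\psi, *\xi]$, and $F_A + \psi \wedge \psi \in \Omega^2(\mathop{\rm ad}\nolimits(E))$, so the $\Omega^1(\mathop{\rm ad}\nolimits(E))$ and $\Omega^1(i \mathop{\rm ad}\nolimits(E))$ components of $- D_{(A, \psi)}^* (F_A + \psi \wedge \psi)$ are
\begin{equation*}
\frac{\partial A}{\partial t} = - d_A^* (F_A + \psi \wedge \psi) \quad \text{and} \quad \frac{\partial \psi}{\partial t} = * [\psi, *(F_A + \psi \wedge \psi)] .
\end{equation*}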
We then have
\begin{align*}
\frac{\partial}{\partial t} (F_A + \psi \wedge \psi) & = d_A \left( \frac{\partial A}{\partial t} \right) + \frac{\partial \psi}{\partial t} \wedge \psi + \psi \wedge \frac{\partial \psi}{\partial t} \\
& = - d_A d_A^* (F_A + \psi \wedge \psi) + \left[ \psi, *[\psi, *(F_A + \psi \wedge \psi)] \right] \\
& = - \Delta_{(A, \psi)} (F_A + \psi \wedge \psi) - d_A*[\psi, *(F_A + \psi \wedge \psi)] + [\psi, d_A^*(F_A + \psi \wedge \psi)]
\end{align*}
where in the last step we use the Bianchi identity \eqref{eqn:Higgs-Bianchi}. We also have
\begin{align*}
\frac{\partial}{\partial t} \left( d_A^* (F_A + \psi \wedge \psi) \right) & = -*\left[ \frac{\partial A}{\partial t}, *(F_A + \psi \wedge \psi) \right] + d_A^* \left( \frac{\partial}{\partial t} (F_A + \psi \wedge \psi) \right) \\
& = * \left[ d_A^*(F_A + \psi \wedge \psi), F_A + \psi \wedge \psi \right] - d_A^* d_A d_A^* (F_A + \psi \wedge \psi) \\
& \quad \quad + d_A^* [\psi, *[\psi, *(F_A + \psi \wedge \psi)]]
\end{align*}
and
\begin{align*}
\frac{\partial}{\partial t} \left( -*[\psi, *(F_A + \psi \wedge \psi)] \right) & = -* \left[ \frac{\partial \psi}{\partial t}, *(F_A + \psi \wedge \psi) \right] - *\left[ \psi, \frac{\partial}{\partial t} *(F_A + \psi \wedge \psi) \right] \\
& = * \left[ - *[\psi, *(F_A + \psi \wedge \psi)], *(F_A + \psi \wedge \psi) \right] + * \left[ \psi, * d_A d_A^* (F_A + \psi \wedge \psi) \right] \\
& \quad \quad - * \left[ \psi, *[\psi, *[\psi, *(F_A + \psi \wedge \psi)]] \right]
\end{align*}
Adding these two results gives us
\begin{align*}
\frac{\partial}{\partial t} \left( D_{(A, \psi)}^* (F_A + \psi \wedge \psi) \right) & = * \left[ D_{(A, \psi)}^* (F_A + \psi \wedge \psi), F_A + \psi \wedge \psi \right] - D_{(A, \psi)}^* D_{(A, \psi)} D_{(A, \psi)}^* (F_A + \psi \wedge \psi) \\
& = * \left[ D_{(A, \psi)}^* (F_A + \psi \wedge \psi), F_A + \psi \wedge \psi \right] - \Delta_{(A, \psi)} D_{(A, \psi)}^* (F_A + \psi \wedge \psi)
\end{align*}
where the last step uses \eqref{eqn:compose-adjoint}. Let $\mu_{(A, \psi)} = F_A + \psi \wedge \psi$ and $\nu_{(A, \psi)} = D_{(A, \psi)}^* (F_A + \psi \wedge \psi)$. The above equations become
\begin{align}
\left( \frac{\partial}{\partial t} + \Delta_{(A, \psi)} \right) \mu_{(A, \psi)} & = -d_A*[\psi, *(F_A + \psi \wedge \psi)] + [\psi, d_A^*(F_A + \psi \wedge \psi)] \label{eqn:mu-evolution} \\
\left( \frac{\partial}{\partial t} + \Delta_{(A, \psi)} \right) \nu_{(A, \psi)} & = *[\nu_{(A, \psi)}, *\mu_{(A, \psi)}] \label{eqn:nu-evolution}
\end{align}
Now consider two solutions $(A_1, \psi_1)(t)$ and $(A_2, \psi_2)(t)$ to the Yang-Mills-Higgs flow equations \eqref{eqn:YMH-flow-general} on the time interval $[0,T]$ such that $(A_1, \psi_1)(T) = (A_2, \psi_2)(T)$. We will show below that this implies $(A_1, \psi_1)(0) = (A_2, \psi_2)(0)$.
Define $(a_t, \varphi_t) = (A_2, \psi_2)(t) - (A_1, \psi_1)(t)$, $m_t = \mu_{(A_2, \psi_2)} - \mu_{(A_1, \psi_1)}$ and $n_t = \nu_{(A_2, \psi_2)} - \nu_{(A_1, \psi_1)}$. In terms of $(a_t, \varphi_t)$ we can write
\begin{align*}
m_t = \mu_{(A_2, \psi_2)} - \mu_{(A_1, \psi_1)} = d_{A_1} a_t + a_t \wedge a_t + [\psi_1, \varphi_t] + \varphi_t \wedge \varphi_t
\end{align*}
and for any $\eta \in \Omega^p(\mathop{\rm End}\nolimits(E))$ the difference of the associated Laplacians has the form
\begin{equation}\label{eqn:laplacian-difference}
\left(\Delta_{(A_2, \psi_2)} - \Delta_{(A_1, \psi_1)} \right) \eta = \nabla_A a \times \eta + a \times \nabla_A \eta + a \times a \times \eta + \psi \times \varphi \times \eta + \varphi \times \varphi \times \eta
\end{equation}
where again $\omega_1 \times \omega_2$ is used to denote a bilinear expression in $\omega_1$ and $\omega_2$ with constant coefficients. By definition of $\nu_{(A, \psi)}$ as the gradient of the Yang-Mills-Higgs functional at $(d_A, \psi)$ we immediately have
\begin{equation*}
\frac{\partial}{\partial t} (a_t + \varphi_t) = n_t , \quad \text{and} \quad \frac{\partial}{\partial t} (\nabla_A a_t + \nabla_A \varphi_t) = \left( \frac{\partial A}{\partial t} \times a_t, \frac{\partial A}{\partial t} \times \varphi_t \right) + \nabla_A n_t
\end{equation*}
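Note that the expression for $m_t$ above follows from expanding the curvature and the quadratic term in $\psi$: writing $A_2 = A_1 + a_t$ and $\psi_2 = \psi_1 + \varphi_t$,
\begin{equation*}
F_{A_1 + a_t} = F_{A_1} + d_{A_1} a_t + a_t \wedge a_t , \qquad \psi_2 \wedge \psi_2 - \psi_1 \wedge \psi_1 = [\psi_1, \varphi_t] + \varphi_t \wedge \varphi_t ,
\end{equation*}
where $[\psi_1, \varphi_t] = \psi_1 \wedge \varphi_t + \varphi_t \wedge \psi_1$ is the graded bracket of $\mathop{\rm End}\nolimits(E)$-valued $1$-forms.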
Equation \eqref{eqn:mu-evolution} then becomes
\begin{align*}
\left( \frac{\partial}{\partial t} + \Delta_{(A_1, \psi_1)} \right) m_t & = - \left( \Delta_{(A_2, \psi_2)} - \Delta_{(A_1, \psi_1)} \right) \mu_{(A_2, \psi_2)} \\
& \quad \quad + a_t \times \psi_1 \times (F_{A_1} + \psi_1 \wedge \psi_1) + \nabla_{A_1} \varphi_t \times (F_{A_1} + \psi_1 \wedge \psi_1) \\
& \quad \quad + \nabla_{A_1} \psi_1 \times m_t + \psi_1 \times n_t
\end{align*}
and equation \eqref{eqn:nu-evolution} becomes
\begin{align*}
\left( \frac{\partial}{\partial t} + \Delta_{(A_1, \psi_1)} \right) n_t & = *[\nu_{(A_2, \psi_2)}, * \mu_{(A_2, \psi_2)}] - *[\nu_{(A_1, \psi_1)}, *\mu_{(A_1, \psi_1)}] - \left( \Delta_{(A_2, \psi_2)} - \Delta_{(A_1, \psi_1)} \right) \nu_{(A_2, \psi_2)} \\
& = *[n_t, *\mu_{(A_2, \psi_2)}] +*[\nu_{(A_1, \psi_1)}, *m_t] - \left( \Delta_{(A_2, \psi_2)} - \Delta_{(A_1, \psi_1)} \right) \nu_{(A_2, \psi_2)}
\end{align*}
Using \eqref{eqn:laplacian-difference} and the Weitzenb\"{o}ck formula \eqref{eqn:Higgs-Weitzenbock}, we then have the following inequalities. In the case where $X$ is a compact Riemann surface, the estimates of \cite[Sec. 2.2]{Wilkin08} show that all of the derivatives of the connection, the Higgs field and the curvature $F_A$ are uniformly bounded along the flow and so the constant can be chosen uniformly on the interval $[0,T]$.
\begin{lemma}\label{lem:heat-inequalities}
For any pair of solutions $(d_{A_1}, \psi_1)(t)$ and $(d_{A_2}, \psi_2)(t)$ to the Yang-Mills-Higgs flow \eqref{eqn:YMH-flow-general} there exists a positive constant $C$ (possibly depending on $t$) such that the following inequalities hold
\begin{align}
\left| \left( \frac{\partial}{\partial t} + \nabla_{A_1}^* \nabla_{A_1} \right) m_t \right| & \leq C \left( |a_t| + |\varphi_t| + | \nabla_{A_1} a_t| +| \nabla_{A_1} \varphi_t |+ | m_t | + |n_t| \right) \label{eqn:m-evolution} \\
\left| \left( \frac{\partial}{\partial t} + \nabla_{A_1}^* \nabla_{A_1} \right) n_t \right| & \leq C \left( |a_t| + |\varphi_t| + | \nabla_{A_1} a_t| +| \nabla_{A_1} \varphi_t |+ | m_t | + |n_t| \right) \label{eqn:n-evolution} \\
\left| \frac{\partial}{\partial t} (a_t + \varphi_t) \right| & = | n_t | \label{eqn:a-evolution} \\
\left| \frac{\partial}{\partial t} (\nabla_A a_t + \nabla_A \varphi_t) \right| & \leq C \left( |a_t| + |\varphi_t| + | \nabla_A n_t | \right) \label{eqn:nabla-a-evolution}
\end{align}
Moreover, if $X$ is a compact Riemann surface then the constant $C$ can be chosen uniformly on any finite time interval $[0, T]$.
\end{lemma}
For simplicity of notation, in the following we use $\nabla := \nabla_{A_1}$ and $\square := \nabla_{A_1}^* \nabla_{A_1}$. Let $X := (m_t, n_t)$ and $Y := (a_t, \varphi_t, \nabla a_t, \nabla \varphi_t)$. The previous lemma implies that there exists a positive constant $C$ such that the following inequalities hold
\begin{align}\label{eqn:coupled-system}
\begin{split}
\left| \frac{\partial X}{\partial t} + \square X \right| & \leq C \left( | X | + | \nabla X| + | Y | \right) \\
\left| \frac{\partial Y}{\partial t} \right| & \leq C \left( |X| + |\nabla X| + |Y| \right)
\end{split}
\end{align}
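For instance, the second estimate in \eqref{eqn:coupled-system} is simply the bookkeeping of \eqref{eqn:a-evolution} and \eqref{eqn:nabla-a-evolution}: reading $Y$ as the pair $(a_t + \varphi_t, \nabla a_t + \nabla \varphi_t)$ as in Lemma \ref{lem:heat-inequalities}, and enlarging $C$ if necessary,
\begin{equation*}
\left| \frac{\partial Y}{\partial t} \right| \leq \left| \frac{\partial}{\partial t} (a_t + \varphi_t) \right| + \left| \frac{\partial}{\partial t} (\nabla a_t + \nabla \varphi_t) \right| \leq |n_t| + C \left( |a_t| + |\varphi_t| + |\nabla n_t| \right) \leq C \left( |X| + |\nabla X| + |Y| \right),
\end{equation*}
and the first estimate follows in the same way from \eqref{eqn:m-evolution} and \eqref{eqn:n-evolution}.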
A general result of Kotschwar in \cite[Thm 3]{kotschwar-uniqueness} shows that any system satisfying \eqref{eqn:coupled-system} on the time interval $[0,T]$ for which $X(T) = 0$, $Y(T)=0$, must also satisfy $X(t) = 0$, $Y(t) = 0$ for all $t \in [0, T]$. In the context of the Yang-Mills-Higgs flow \eqref{eqn:YMH-flow-general}, this gives us the proof of Proposition \ref{prop:backwards-uniqueness}.
\end{document} |
\begin{document}
\title{Nonexistence of tight spherical design of harmonic index $4$}
\author{Takayuki Okuda and Wei-Hsuan Yu}
\subjclass[2010]{Primary 52C35; Secondary 14N20, 90C22, 90C05}
\keywords{spherical design, equiangular lines}
\address{
Department of Mathematics, Graduate School of Science, Hiroshima University
1-3-1 Kagamiyama, Higashi-Hiroshima, 739-8526 Japan}
\email{[email protected]}
\address{Department of Mathematics, Michigan State University,
619 Red Cedar Road, East Lansing, MI 48824}
\email{[email protected]}
\thanks{The first author is supported by
Grant-in-Aid for JSPS Fellow No.25-6095 and the second author is supported in part by
NSF grants CCF1217894, DMS1101697}
\date{}
\maketitle
\begin{abstract}
We give a new upper bound of the cardinality of a set of equiangular lines in $\mathbb{R}^n$
with a fixed angle $\theta$
for each $(n,\theta)$ satisfying certain conditions.
Our techniques are based on semi-definite programming methods for spherical codes introduced by Bachoc--Vallentin [J.~Amer.~Math.~Soc.~2008].
As a corollary to our bound,
we show the nonexistence of spherical tight designs
of harmonic index $4$ on $S^{n-1}$ with $n \geq 3$.
\end{abstract}
\section{Introduction}
The purpose of this paper is to give a new upper bound of the cardinality of a set of equiangular lines with certain angles (see Theorem \ref{thm:rel}).
As a corollary to our bound, we show the nonexistence of tight designs of harmonic index $4$ on $S^{n-1}$ with $n \geq 3$ (see Theorem \ref{thm:nonex-tight}).
Throughout this paper,
$S^{n-1} := \{ x \in \mathbb{R}^n \mid \| x \| = 1 \}$
denotes the unit sphere in $\mathbb{R}^{n}$.
By Bannai--Okuda--Tagami \cite{ban13},
a finite subset $X$ of $S^{n-1}$ is called
\emph{a spherical design of harmonic index $t$ on $S^{n-1}$}
(or, for short, \emph{a harmonic index $t$-design on $S^{n-1}$})
if $\sum_{{\bold x} \in X} f({\bold x}) = 0$
for any harmonic polynomial function $f$ on $\mathbb{R}^{n}$ of degree $t$.
Our concern in this paper is in
tight harmonic index $4$-designs.
A harmonic index $t$-design $X$ is said to be \emph{tight} if $X$
attains the lower bound given by \cite[Theorem 1.2]{ban13}.
In particular,
for $t = 4$,
a harmonic index $4$-design on $S^{n-1}$ is tight
if its cardinality is $(n+1)(n+2)/6$.
For the cases where $n=2$, we can construct tight harmonic index
$4$-designs as two points ${\bold x}$ and ${\bold y}$ on $S^{1}$ with the
inner-product $\langle {\bold x},{\bold y} \rangle_{\mathbb{R}^2} = \pm \sqrt{1/2}$.
The paper \cite[Theorem 4.2]{ban13} showed that
if tight harmonic index $4$-designs on $S^{n-1}$ exist,
then $n$ must be $2$ or $3(2k-1)^2 -4$ for some integers $k \geq 3$.
As a main result of this paper,
we show that the latter cases do not occur.
That is, the following theorem holds:
\begin{thm}\label{thm:nonex-tight}
For each $n \geq 3$,
no spherical tight design of harmonic index $4$ on $S^{n-1}$ exists.
\end{thm}
A set of lines in $\mathbb{R}^n$ is called \emph{an equiangular line system} if
the angle between each pair of lines is constant.
By definition, an equiangular line system can be considered as
a spherical two-distance set with the inner product set $\{ \pm \cos \theta \}$
for some constant $\theta$.
Such a constant $\theta$ is called \emph{the common angle} of the equiangular line system.
The recent development of this topic can be found in \cite{barg14, grea14}.
By \cite[Proposition 4.2]{ban13},
any tight harmonic index $4$-design on $S^{n-1}$
can be considered as an equiangular line system with the common angle $\arccos \sqrt{3/(n+4)}$.
The proof of Theorem \ref{thm:nonex-tight}
will be reduced to a new relative upper bound (see Theorem \ref{thm:rel})
for the cardinalities of equiangular line systems with a fixed common angle.
Note that in some cases, our relative bound is better
than the Lemmens--Seidel relative bound
(see Section \ref{sec:main} for more details).
The paper is organized as follows:
In Section \ref{sec:main},
as a main theorem of this paper,
we give a new relative bound for the cardinalities of equiangular line systems with a fixed common angle satisfying certain conditions.
Theorem \ref{thm:nonex-tight} follows as a corollary to our relative bound.
In Section \ref{sec:proof}, our relative bound is proved based on the method by Bachoc--Vallentin \cite{bac08a}.
\section{Main results}\label{sec:main}
In this paper, we denote by $M(n)$ and $M_{\cos \theta}(n)$ the maximum number of equiangular lines in $\mathbb{R}^n$ and that with the fixed common angle $\theta$, respectively.
By definition, \[
M(n) = \sup_{0 \leq \alpha < 1} M_\alpha(n).
\]
The important problems for equiangular lines are to give upper and lower estimates for $M(n)$ or for $M_\alpha(n)$ with fixed $\alpha$.
One can find a summary of recent progress of this topic in \cite{barg14, grea14}.
Let us fix $0 \leq \alpha < 1$.
Then for a finite subset $X$ of $S^{n-1}$ with $I(X) \subset \{ \pm \alpha \}$,
we can easily find an equiangular line system with the common angle $\arccos \alpha$
and the cardinality $|X|$,
where \[
I(X) := \{ \langle {\bold x},{\bold y} \rangle_{\mathbb{R}^n} \mid {\bold x},{\bold y} \in X \text{ with } {\bold x} \neq {\bold y} \}
\]
is the set of inner-product values of distinct vectors in $X \subset S^{n-1} \subset \mathbb{R}^n$.
The converse is also true.
In particular, we have
\begin{align*}
M_{\alpha}(n) = \max \{ |X| \mid X \subset S^{n-1} \text{ with } I(X) \subset \{ \pm \alpha \} \},
\end{align*}
and therefore, our problem can be considered as a problem in special kinds of spherical two-distance sets.
In this paper, we are interested in upper estimates of $M_\alpha(n)$.
According to \cite{lem73},
Gerzon gave the upper bound on $M(n)$ as
$M(n) \leq n(n+1)/2$ and therefore,
we have \[
M_\alpha(n) \leq \frac{n(n+1)}{2} \quad \text{ for any } \alpha.
\]
This upper bound is called the Gerzon absolute bound.
Lemmens and Seidel \cite{lem73} showed that
\begin{equation*}
M_\alpha(n) \le \frac {n(1-\alpha^2)}{1-n\alpha^2} \quad \text{ in the cases where } 1-n\alpha^2 > 0.
\end{equation*}
This inequality is sometimes called the Lemmens--Seidel
relative bound as opposed to the Gerzon absolute bound.
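As an illustration (this numerical check is ours): for $n = 7$ and $\alpha = 1/3$ the relative bound gives
\[
M_{1/3}(7) \leq \frac{7\,(1-1/9)}{1-7/9} = \frac{7 \cdot 8/9}{2/9} = 28,
\]
which is attained by the classical configuration of $28$ equiangular lines in $\mathbb{R}^7$ with common angle $\arccos(1/3)$.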
As a main theorem of this paper,
we give other upper estimates of $M_\alpha(n)$ as follows:
\begin{thm}\label{thm:rel}
Let us take $n \geq 3$ and $\alpha \in (0,1)$ with
\[
2-\frac{6\alpha-3}{\alpha^2} < n < 2 + \frac{6\alpha+3}{\alpha^2}.
\]
Then
\[
M_\alpha(n) \leq 2 + \frac{(n-2)}{\alpha} \max \left\{ \frac{(1-\alpha)^3}{(n-2)\alpha^2 +6\alpha-3}, \frac{(1+\alpha)^3}{-(n-2)\alpha^2 + 6\alpha+3} \right\}.
\]
In particular, for an integer $l \geq 2$, if
\[
3l^2-6l+2 < n < 3l^2 + 6l+2
\]
then
\[
M_{1/l}(n) \leq 2 + (n-2) \max \left\{ \frac{(l-1)^3}{-3l^2 + 6l + (n-2)}, \frac{(l+1)^3}{3l^2 + 6l -(n-2)} \right\}.
\]
\end{thm}
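For the reader's convenience, we record how the second bound follows from the first by direct substitution: putting $\alpha = 1/l$,
\begin{align*}
\frac{(n-2)}{\alpha}\cdot \frac{(1-\alpha)^3}{(n-2)\alpha^2+6\alpha-3} &= \frac{(n-2)(l-1)^3/l^2}{((n-2)+6l-3l^2)/l^2} = \frac{(n-2)(l-1)^3}{-3l^2+6l+(n-2)},\\
\frac{(n-2)}{\alpha}\cdot \frac{(1+\alpha)^3}{-(n-2)\alpha^2+6\alpha+3} &= \frac{(n-2)(l+1)^3}{3l^2+6l-(n-2)},
\end{align*}
and the condition $2-\frac{6\alpha-3}{\alpha^2} < n < 2+\frac{6\alpha+3}{\alpha^2}$ becomes $3l^2-6l+2 < n < 3l^2+6l+2$.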
Recall that by \cite[Proposition 4.2, Theorem 4.2]{ban13} for $n \geq 3$,
if there exists a tight harmonic index $4$-design $X$ on $S^{n-1}$,
then $n = 3(2k-1)^2-4$ for some $k \geq 3$
and
\begin{align*}
M_{\sqrt{3/(n+4)}}(n) = (n+1)(n+2)/6.
\end{align*}
However, as a corollary to Theorem \ref{thm:rel},
we have the following upper bound of $M_{\sqrt{3/(n+4)}}(n)$
and obtain Theorem \ref{thm:nonex-tight}.
\begin{cor}\label{cor:tight4-eq}
Let us put
$n_k := 3(2k-1)^2-4$ and $\alpha_k := \sqrt{3/(n_k+4)} = 1/(2k-1)$.
Then for each integer $k \geq 2$,
\begin{align*}
M_{\alpha_k}(n_k)
&\leq 2 (k-1) (4k^3-k-1) (< (n_k+1)(n_k+2)/6).
\end{align*}
\end{cor}
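Let us record the elementary computation behind Corollary \ref{cor:tight4-eq}. Write $l = 2k-1$, so that $n_k = 3l^2-4$, $\alpha_k = 1/l$, and the hypothesis $3l^2-6l+2 < n_k < 3l^2+6l+2$ in the second statement of Theorem \ref{thm:rel} holds for every $k \geq 2$. Since $n_k - 2 = 3l^2-6$,
\[
\frac{(l-1)^3}{-3l^2+6l+(n_k-2)} = \frac{(l-1)^2}{6}, \qquad \frac{(l+1)^3}{3l^2+6l-(n_k-2)} = \frac{(l+1)^2}{6},
\]
so the maximum in Theorem \ref{thm:rel} equals $(l+1)^2/6$ and
\[
M_{\alpha_k}(n_k) \leq 2 + (3l^2-6)\cdot\frac{(l+1)^2}{6} = 2 + \frac{(l^2-2)(l+1)^2}{2} = 2 + 2k^2(4k^2-4k-1) = 2(k-1)(4k^3-k-1).
\]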
It should be remarked that in the setting of Corollary \ref{cor:tight4-eq},
the Lemmens--Seidel relative bound does not work since \[
1-n_k \alpha_k^2 = -2(4 k^2-4k-1)/(2k-1)^2 < 0
\]
and our bound is better than the Gerzon absolute bound.
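(For completeness, the arithmetic behind the displayed identity: $n_k\alpha_k^2 = (3(2k-1)^2-4)/(2k-1)^2 = 3 - 4/(2k-1)^2$, hence $1-n_k\alpha_k^2 = -2 + 4/(2k-1)^2 = -2(4k^2-4k-1)/(2k-1)^2$, which is indeed negative for $k \geq 2$.)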
The proof of Theorem \ref{thm:rel} is given in Section \ref{sec:proof}
based on Bachoc--Vallentin's SDP method for spherical codes \cite{bac08a}.
The origins of applications of the linear programming method in coding theory can be traced back to the work of Delsarte \cite{del73}.
Applications of semidefinite programming (SDP) method in coding theory and distance geometry gained momentum after the pioneering work of Schrijver \cite{Schrijver05code}
that derived SDP bounds on codes in the Hamming and Johnson spaces.
Schrijver's approach was based on the so-called Terwilliger algebra
of the association scheme.
A similar idea for spherical codes was formulated by Bachoc and Vallentin \cite{bac08a}
in connection with the kissing number problem.
Barg and Yu \cite{barg13} modified it to determine the maximum size of spherical two-distance sets in $\mathbb{R}^n$ for most values of $n \leq 93$.
In our proof, we restrict the method to obtain upper bounds for sets of equiangular lines.
We can see in \cite{GST06upperbounds,Schrijver05code} and \cite{bac08a, bac09opti, bac09sdp, Musin08bounds}
that the SDP method works well
for studying binary codes and spherical codes, respectively.
Especially, for equiangular lines, Barg and Yu \cite{barg14} give the best known upper bounds of $M(n)$ for some $n$ with $n \leq 136$ by the SDP method.
Our bounds in Corollary \ref{cor:tight4-eq} agree with those of \cite{barg14} in low dimensions.
However, in some cases that approach requires software to complete the SDP computations.
It should be emphasized that our theorem offers an upper bound on $M_{\alpha_k}(n_k)$ for arbitrarily large $n_k$,
and the proof can be carried out by hand calculations without using any convex optimization software.
\section{Proof of our relative bound}\label{sec:proof}
To prove Theorem \ref{thm:rel},
we apply Bachoc--Vallentin's SDP method for spherical codes in \cite{bac08a} to spherical two-distance sets.
The explicit statement of it was given by Barg--Yu \cite{barg13}.
We use symbols $P^{n}_l(u)$ and $S^n_l(u,v,t)$
in the sense of \cite{bac08a}.
It should be noted that
the definition of $S^{n}_l(u,v,t)$ is different from
that of \cite{bac09opti} and \cite{barg13}
(see also \cite[Remark 3.4]{bac08a} for these differences).
In order to state it,
we define
\begin{align*}
W(x)&:= \begin{pmatrix}1&0\\0&0\end{pmatrix} +
\begin{pmatrix}0&1\\1&1\end{pmatrix} (x_1+x_2)/3 +
\begin{pmatrix}
0&0\\0&1
\end{pmatrix} (x_3+x_4+x_5+x_6), \\
S^{n}_l(x;\alpha,\beta) &:= S^{n}_{l}(1,1,1)+S^{n}_l(\alpha,\alpha,1)x_1 + S^{n}_l(\beta,\beta,1) x_2 + S^{n}_l(\alpha,\alpha,\alpha) x_3 \\
& \quad \quad + S^{n}_l(\alpha,\alpha,\beta) x_4 + S^{n}_l(\alpha,\beta,\beta) x_5 + S^{n}_l(\beta,\beta,\beta) x_6
\end{align*}
for each $x = (x_1,x_2,x_3,x_4,x_5,x_6) \in \mathbb{R}^6$ and $\alpha, \beta \in [-1,1)$.
We remark that $W(x)$ is a symmetric matrix of size $2$ and
$S^{n}_{l}(x;\alpha,\beta)$ is a symmetric matrix of infinite size
indexed by $\{ (i,j) \mid i,j = 0,1,2,\dots, \}$.
\begin{fact}[Bachoc--Vallentin \cite{bac08a} and Barg--Yu \cite{barg13}]\label{fact:SDP-problem}
Let us fix $\alpha, \beta \in [-1,1)$.
Then
any finite subset $X$ of $S^{n-1}$ with $I(X) \subset \{ \alpha,\beta \}$
satisfies
\[
|X| \leq \max \{ 1 + (x_1 + x_2)/3 \mid x = (x_1,\dots,x_6) \in \Omega^{n}_{\alpha,\beta} \}
\]
where the subset $\Omega^{n}_{\alpha,\beta}$ of $\mathbb{R}^{6}$ is defined by
\[
\Omega_{\alpha,\beta}^{n} := \{\, x = (x_1,\dots,x_6) \in \mathbb{R}^{6} \mid \text{ $x$ satisfies the following four conditions } \,\}.
\]
\begin{enumerate}
\item $x_i \geq 0$ for each $i = 1,\dots,6$.
\item $W(x)$ is positive semi-definite.
\item $3 + P^{n}_l(\alpha) x_1 + P^{n}_l (\beta) x_2 \geq 0$ for each $l=1,2,\dots.$
\item Any finite principal minor of $S^{n}_l(x;\alpha,\beta)$ is positive semi-definite for each $l = 0,1,2,\dots.$
\end{enumerate}
\end{fact}
To prove Theorem \ref{thm:rel},
we use the following ``linear version'' of Fact \ref{fact:SDP-problem}:
\begin{cor}\label{cor:triangleLP}
In the same setting of Fact $\ref{fact:SDP-problem}$,
\[
|X| \leq \max \{ 1 + (x_1 + x_2)/3 \mid x = (x_1,\dots,x_6) \in \widetilde{\Omega}^{n}_{\alpha,\beta} \}
\]
where the subset $\widetilde{\Omega}^{n}_{\alpha,\beta}$ of $\mathbb{R}^{6}$ is defined by
\[
\widetilde{\Omega}_{\alpha,\beta}^{n} := \{\, x = (x_1,\dots,x_6) \in \mathbb{R}^{6} \mid \text{ $x$ satisfies the following three conditions } \,\}.
\]
\begin{enumerate}
\item $x_i \geq 0$ for each $i = 1,\dots,6$.
\item $\det W(x) \geq 0$.
\item $(S^{n}_l)_{i,i}(x;\alpha,\beta) \geq 0$ for each $l,i = 0,1,2,\dots$,
where $(S^{n}_l)_{i,i}(x;\alpha,\beta)$ is the $(i,i)$-entry of the matrix $S^{n}_l(x;\alpha,\beta)$.
\end{enumerate}
\end{cor}
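Indeed, every diagonal entry of a positive semi-definite matrix is non-negative, and so is the determinant of a positive semi-definite $2\times 2$ matrix; hence $\Omega^{n}_{\alpha,\beta} \subset \widetilde{\Omega}^{n}_{\alpha,\beta}$ (the constraints involving $P^{n}_l$ are simply dropped), and enlarging the feasible region can only increase the maximum.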
By Corollary \ref{cor:triangleLP},
the proof of Theorem \ref{thm:rel} can be reduced to show the following proposition:
\begin{prop}\label{prop:SDPrel}
Let $n \geq 3$ and $0 < \alpha <1$.
Then the following holds:
\begin{enumerate}
\item
\begin{align*}
\max \{ 1+(x_1+x_2)/3 \mid x \in \widetilde{\Omega}^{n}_{\alpha,-\alpha} \} \leq 2 + (n-2)\frac{(1-\alpha)^3}{\alpha((n-2)\alpha^2 +6\alpha-3)}
\end{align*}
if
$(1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq
(1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq 0$.
\item
\begin{align*}
\max \{ 1+(x_1+x_2)/3 \mid x \in \widetilde{\Omega}^{n}_{\alpha,-\alpha} \} \leq 2 + (n-2) \frac{(1+\alpha)^3}{\alpha(-(n-2)\alpha^2 + 6\alpha+3)}
\end{align*}
if
$(1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq (1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq 0$.
\end{enumerate}
\end{prop}
For the proof of Proposition \ref{prop:SDPrel}, we need the following
explicit formulas for $(S^{n}_3)_{1,1}$, which are obtained by direct
computation:
\begin{lem}\label{lem:S3explicite}
For each $-1 < \alpha < 1$,
\begin{align*}
(S^{n}_3)_{1,1}(1,1,1) &= 0 \\
(S^{n}_3)_{1,1}(\alpha,\alpha,1) &= \frac{n(n+2)(n+4)(n+6)}{3(n-1)(n+1)(n+3)}
\alpha^2 (1-\alpha^2)^3 \\
(S^{n}_3)_{1,1}(\alpha,\alpha,\alpha) &=
-\frac{n(n+2)(n+4)(n+6)}{(n-2)(n-1)(n+1)(n+3)}
(\alpha-1)^3 \alpha^3 ((n-2)\alpha^2-6\alpha-3) \\
(S^{n}_3)_{1,1}(\alpha,\alpha,-\alpha) &=
-\frac{n(n+2)(n+4)(n+6)}{(n-2)(n-1)(n+1)(n+3)}
\alpha^3 (\alpha+1)^3 ((n-2)\alpha^2 +6\alpha-3).
\end{align*}
\end{lem}
\begin{proof}[Proof of Proposition $\ref{prop:SDPrel}$]
Fix $\alpha$ with $0 < \alpha < 1$ and
take any $x \in \widetilde{\Omega}^{n}_{\alpha,-\alpha}$.
For simplicity we put $X = (x_1+x_2)/3$, $Y = x_3+x_5$ and $Z = x_4+x_6$.
By computing $\det W(x)$,
we have
\begin{align}
-X(X-1)+Y+Z \geq 0. \label{eq:Wrel}
\end{align}
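Explicitly, by the definition of $W(x)$,
\[
W(x) = \begin{pmatrix} 1 & X \\ X & X + x_3+x_4+x_5+x_6 \end{pmatrix},
\qquad
\det W(x) = X + Y + Z - X^2 = -X(X-1)+Y+Z,
\]
so $\det W(x) \geq 0$ is exactly \eqref{eq:Wrel}.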
Furthermore,
we have $(S^{n}_3)_{1,1}(x;\alpha,-\alpha) \geq 0$,
and hence, by Lemma \ref{lem:S3explicite},
\begin{multline*}
(n-2) \frac{(1-\alpha^2)^3}{\alpha} X
- (1-\alpha)^3 (-(n-2)\alpha^2 + 6\alpha+3) Y \\
- (1+\alpha)^3 ((n-2)\alpha^2 +6\alpha-3) Z \geq 0.
\end{multline*}
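(To obtain this display, expand $(S^{n}_3)_{1,1}(x;\alpha,-\alpha)$ according to the definition of $S^{n}_l(x;\alpha,\beta)$, evaluate each entry by Lemma \ref{lem:S3explicite}, group the terms into $X$, $Y$ and $Z$, and divide by the positive constant $\frac{n(n+2)(n+4)(n+6)}{(n-2)(n-1)(n+1)(n+3)}\,\alpha^3$.)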
Therefore,
in the cases where
\[
(1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq
(1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq 0,
\]
we obtain
\begin{align*}
(n-2) \frac{(1-\alpha^2)^3}{\alpha} X - (1+\alpha)^3 ((n-2)\alpha^2 +6\alpha-3) (Y+Z) \geq 0.
\end{align*}
By combining this with \eqref{eq:Wrel},
\begin{align*}
(n-2) \frac{(1-\alpha^2)^3}{\alpha} X - (1+\alpha)^3 ((n-2)\alpha^2 +6\alpha-3) X(X-1) \geq 0.
\end{align*}
Thus, dividing by $(1+\alpha)^3((n-2)\alpha^2 +6\alpha-3)X$ and rearranging (we may assume $X>0$, the case $X=0$ being trivial, and $(n-2)\alpha^2+6\alpha-3>0$, as otherwise the asserted bound is not defined), we have
\[
2 + (n-2) \frac{(1-\alpha)^3}{\alpha((n-2)\alpha^2 +6\alpha-3)} \geq X+1 = 1 + (x_1+x_2)/3.
\]
By similar arguments,
in the cases where
\[
(1+\alpha)^3((n-2)\alpha^2 +6\alpha-3) \geq
(1-\alpha)^3(-(n-2)\alpha^2 + 6\alpha+3) \geq 0,
\]
we have
\[
2 + (n-2) \frac{(1+\alpha)^3}{\alpha(-(n-2)\alpha^2 +6\alpha+3)} \geq X+1 = 1 + (x_1+x_2)/3.
\]
\end{proof}
\begin{rem}
Harmonic index $4$-designs are defined by using
the functional space $\mathop{\mathrm{Harm}}\nolimits_{4}(S^{n-1})$.
Therefore, it seems to be natural to consider $\mathop{\mathrm{Harm}}\nolimits_4(S^{n-1})$
in Bachoc--Vallentin's SDP method.
In our proof, the functional space
\[
H^{n-1}_{3,4} \subset \bigoplus_{m=0}^{4} H^{n-1}_{m,4} = \mathop{\mathrm{Harm}}\nolimits_{4}(S^{n-1})
\]
$($see \cite{bac08a} for the notation of $H^{n-1}_{m,l}$$)$
plays an important role to show the nonexistence of tight designs of harmonic index $4$
since $(S^n_3)_{1,1}$ comes from $H^{n-1}_{3,4}$.
We checked that if we consider
$H^{n-1}_{0,4} \oplus H^{n-1}_{1,4} \oplus H^{n-1}_{2,4} \oplus
H^{n-1}_{4,4}$ instead of $H^{n-1}_{3,4}$,
our upper bound cannot be obtained for small $k$.
However, we cannot find any conceptual reason for the importance of $H^{n-1}_{3,4}$.
\end{rem}
\section*{Acknowledgements.}
The authors would like to give heartfelt thanks to Eiichi Bannai, Alexander Barg and Makoto Tagami whose suggestions and comments were of inestimable value for this paper.
The authors would also like to thank
Akihiro Munemasa, Hajime Tanaka and Ferenc Sz{\"o}ll{\H o}si
for their valuable comments.
\providecommand{\bysame}{\leavevmode\hbox
to3em{\hrulefill}\thinspace} \providecommand{\href}[2]{#2}
\begin{thebibliography}{A}
\bibitem{bac08a}
C.~Bachoc and F.~Vallentin, \emph{New upper bounds for kissing
numbers from
semidefinite programming}, J. Amer. Math. Soc. \textbf{21} (2008), \href{http://www.ams.org/journals/jams/2008-21-03/S0894-0347-07-00589-9/home.html}{909--924}.
\bibitem{bac09opti}
C.~Bachoc and F.~Vallentin,
\emph{Optimality and uniqueness of the {$(4,10,1/6)$} spherical code},
J. Combin. Theory Ser. A \textbf{116} (2009), \href{http://www.sciencedirect.com/science/article/pii/S0097316508000733}{195--204}.
\bibitem{bac09sdp}
C.~Bachoc and F.~Vallentin,
\emph{Semidefinite programming, multivariate orthogonal polynomials, and codes in spherical caps},
European J. Combin. \textbf{30} (2009), \href{http://www.sciencedirect.com/science/article/pii/S0195669808001522}{625--637}.
\bibitem{barg13}
A. Barg and W.-H. Yu, \emph{New bounds for spherical two-distance
sets}, Experimental Mathematics \textbf{22} (2013),
\href{http://www.tandfonline.com/doi/abs/10.1080/10586458.2013.767725#.VCJb9_l_txE}{187--194}.
\bibitem{barg14}
A. Barg and W.-H. Yu, \emph{New bounds for equiangular lines},
Discrete Geometry and Algebraic Combinatorics, A. Barg and O. Musin, Editors, AMS Series: Contemporary Mathematics, \href{http://www.ams.org/books/conm/625/}{vol. 625}, 2014, pp. 111--121.
\bibitem{ban13}
E. Bannai, T. Okuda, and M. Tagami, \emph{Spherical designs of
harmonic index $t$}, J.~Approx.~Theory, \href{http://www.sciencedirect.com/science/article/pii/S0021904514001324}{in press}.
\bibitem{Caen00}
D. de Caen, \emph{Large equiangular sets of lines in Euclidean
space}, Electron. J. Combin. \textbf{7} (2000), Research Paper 55, \href{http://www.combinatorics.org/ojs/index.php/eljc/article/view/v7i1r55}{3pp}.
\bibitem{del73}
P. Delsarte, \emph{An algebraic approach to the association schemes of coding theory}, Philips Research Repts Suppl.
\textbf{10} (1973), 1--97.
\bibitem{GST06upperbounds}
D.~Gijswijt, A.~Schrijver and H.~Tanaka,
\emph{New upper bounds for nonbinary codes based on the {T}erwilliger algebra and semidefinite programming},
J. Combin. Theory Ser. A \textbf{113} (2006), \href{http://www.sciencedirect.com/science/article/pii/S0097316506000598}{1719--1731}.
\bibitem{grea14}
G. Greaves, J. H. Koolen, A. Munemasa, and F. Sz{\"o}ll{\H o}si,
\emph{Equiangular lines in {E}uclidean spaces}, preprint,
available at
\href{http://arxiv.org/abs/1403.2155}{arXiv:1403.2155}.
\bibitem{lem73}
P.~W.~H. Lemmens and J.~J. Seidel, \emph{Equiangular lines},
Journal of Algebra \textbf{24} (1973), \href{http://www.sciencedirect.com/science/article/pii/0021869373901233}{494--512}.
\bibitem{Musin08bounds}
O.~R.~Musin,
\emph{Bounds for codes by semidefinite programming},
Tr. Mat. Inst. Steklova \textbf{263} (2008).
\bibitem{Schrijver05code}
A.~Schrijver,
\emph{New code upper bounds from the {T}erwilliger algebra and semidefinite programming},
IEEE Trans. Inform. Theory \textbf{51} (2005), \href{http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1468304}{2859--2866}.
\end{thebibliography}
\end{document}
\begin{document}
\title{The Manin-Mumford conjecture and the Tate-Voloch conjecture for a product of Siegel moduli spaces}
\maketitle
\tableofcontents
\begin{abstract}
We use perfectoid spaces associated to abelian varieties and Siegel moduli spaces to study torsion points and ordinary CM points. We reprove the Manin-Mumford conjecture, i.e. Raynaud's theorem. We also prove the Tate-Voloch conjecture for a product of Siegel moduli spaces, namely ordinary CM points outside a closed subvariety can not be $p$-adically too close to it.
\end {abstract}
\section{Introduction}
We use the theory of perfectoid spaces to study torsion points in abelian varieties and ordinary CM points in Siegel moduli spaces.
The use of perfectoid spaces is inspired by Xie's recent work \cite{Xie}.
\subsection{Tate-Voloch conjecture}
Our main new result is about ordinary CM points.
Let $p$ be a prime number, $L$ the complete maximal unramified extension of $\BQ_p$.
Let $X $ be a product of Siegel moduli spaces over $ L$ with arbitrary level structures.
\begin{thm} \label{TVcor1}
Let $Z$ be a closed subvariety
of $X_ {\bar L}$. There exists a constant $c>0$ such that for every ordinary CM point $x\in X(\bar L)$, if the distance $d(x,Z)$ from $x$ to $Z$ satisfies
$d(x,Z)\leq c$, then $x\in Z$.
\end{thm}
The distance $d(x,Z)$ is defined as follows.
Let $\|\cdot\|$ be a $p$-adic norm on $\bar L$.
Let $\fX$ be an integral model of $X$ over $\CO_{\bar L}$.
Let $\{U_1,...,U_n\}$ be a finite open cover of $ \fX $ by affine schemes flat over $\CO_{\bar L}$.
Define $d(x,Z)$ to be the supremum of $\|f(x)\|$'s
where $ U_i$ contains $x$ and $ f\in \CO_\fX(U_i)$ vanishing on $Z\bigcap U_i$.
The definition of $d(x,Z)$ depends on the choices of the integral model and the cover. However, the truth of Theorem \ref{TVcor1} does not depend on these choices, see \ref{The distance function}. Moreover, we show that Theorem \ref{TVcor1} holds for formal subschemes of $\fX$ (with maximal level at $p$), see Theorem \ref{TVcor1aff}.
For CM points which are canonical liftings, we prove an ``almost effective'' version, see Theorem \ref{samel}.
It is clear that the same
statement as in Theorem \ref{TVcor1} remains true if $X_{\bar L}$ is replaced by a closed subvariety. In particular,
Theorem \ref{TVcor1} is in fact equivalent to the same statement for $X$ a single Siegel moduli space, by embedding a product of Siegel moduli spaces into a larger Siegel moduli space.
\begin{rmk}
(1)
For a power of the modular curve without level structure, Theorem \ref{TVcor1} was proved by Habegger \cite{Hab} by a different method. However, Habegger's proof relies on a result of Pila \cite{Pil} (see also \cite[Theorem 8]{Hab}) concerning Zariski closure of a Hecke orbit.
As far as we know, it is not available for Siegel moduli spaces yet.
Moreover, Habegger's method seems not applicable to formal schemes.
(2) Habegger \cite{Hab} also showed that the ordinary condition is necessary.
(3) The original Tate-Voloch conjecture \cite{TV} states that in a semi-abelian variety, torsion points outside a closed subvariety can not be $p$-adically too close to it.
This conjecture was proved by Scanlon \cite{Sca0} \cite{Sca} when the semi-abelian variety is defined over $\bar \BQ_p$.
Xie \cite{Xie} proved dynamical analogues of the Tate-Voloch conjecture for projective spaces.
\end{rmk}
\subsubsection{Idea of the proof of Theorem \ref{TVcor1}}
It is not hard to reduce Theorem \ref{TVcor1} to the case that $X$ has maximal level at $p$, see Lemma \ref{T'T}. We sketch the proof of Theorem \ref{TVcor1} in this case. Relative to the canonical lifting of an ordinary point $x$ in the reduction of $X$, ordinary CM points in $X$ with reduction $x$ are
like $p$-primary roots of unity relative to 1 in the open unit disc around 1 (see Proposition \ref{dJN3.2}). This is the Serre-Tate theory. If we only consider one such disc, Theorem \ref{TVcor1} follows from a result of Serban \cite{Ser}.
In general,
we need to study all infinitely
many Serre-Tate deformation spaces together. In characteristic $p$, this can be achieved by
Chai's global Serre-Tate theory \cite{Cha} (see \ref{GST}).
To prove Theorem \ref{TVcor1}, we first prove a Tate-Voloch type result in a family in characteristic $p$ (see \ref{pf{TVcor1}}).
Then we use the ordinary perfectoid Siegel space associated to $ X$ and the perfectoid universal covers of
Serre-Tate deformation spaces
to translate this result to the desired Theorem \ref{TVcor1}.
\subsubsection{Possible generalizations}
For Shimura varieties of Hodge type, the ordinary locus in the usual sense could be empty.
In this case, we consider the notion of $\mu$-ordinariness (see \cite{Wed}).
Then following our strategy, we need three ingredients.
First, a theory of
Serre-Tate coordinates for $\mu$-ordinary CM points.
For Shimura varieties of Hodge type, see \cite{Hon} and \cite{SZ}.
Secondly, a
global theory of Serre-Tate coordinates in characteristic $p$. For Shimura varieties of PEL type, such results should be known to experts.
Thirdly, $\mu$-ordinary perfectoid Shimura varieties. Following \cite{Sch13}, certain perfectoid Shimura varieties of abelian type are constructed in \cite{She}.
For
universal abelian varieties over Shimura varieties of PEL type,
we expect a Tate-Voloch type result for torsion points in fibers over $\mu$-ordinary CM points.
Still, we need analogs of the above three ingredients.
\subsection{Manin-Mumford conjecture}
For torsion points in abelian varieties, we reprove Raynaud's theorem \cite{Ray83}, which is also known as the Manin-Mumford conjecture.
\begin{thm} [Raynaud \cite{Ray83}] \label{MM} Let $F$ be a number field.
Let $A$ be an abelian variety over $F$ and $V $ a closed subvariety of $A$. If
$V$ contains a dense subset of torsion points of $A$, then $V$ is the translate of an abelian subvariety of $A$ by a torsion point.
\end{thm}
\subsubsection{Idea of the proof of Theorem \ref{MM}}\label{1.1}
We simply consider the case when $V$ does not contain any translate of a nontrivial abelian subvariety.
Suppose that $A$ has good reduction at a place of $F$ unramified over a prime number $p$.
Let $[p]:A\to A$ be the morphism multiplication by $p$. Let $\Lambda _n$ be a suitable set of reductions of torsions in $[p^n]^{-1}(V)$, and $\Lambda _n^{\mathrm{Zar}}$
its Zariski closure in the base change to $\bar\BF_p$ of the reduction of $A$.
Use the $p$-adic perfectoid universal cover of $A$ to lift $\Lambda _n^{\mathrm{Zar}}$ to $A $.
A variant of Scholze's approximation lemma \cite{Sch12} shows that as $n$ gets larger, the liftings are closer to $V$ (see Proposition \ref{techprop2}). A result of Scanlon \cite{Sca0} on the Tate-Voloch conjecture for prime-to-$p$ torsions implies that the prime-to-$p$ torsions of these points are in $V$ for $n$ large enough
(see Proposition \ref{techprop}).
Assume that $\Lambda _n$ is infinite and we deduce a contradiction as follows.
A result of Poonen \cite{Poo} (see Theorem \ref{BT}) shows that the size of the set of prime-to-$p$ torsions in
$\Lambda _n^{\mathrm{Zar}}$ is not small.
Then the liftings give a lower bound on the size of the set of prime-to-$p$ torsions in $V$ (see Proposition \ref{mainprop}).
Now consider the $l$-adic perfectoid space associated to $A$. By the same approach, we can repeatedly improve
such lower bounds. Finally
we get a contradiction, as $A$ is finite-dimensional.
\begin{rmk}
The proofs of Poonen's result and Scanlon's result are independent of Theorem \ref{MM}.
\end{rmk}
\subsection{Organization of the Paper}
The preliminaries on adic spaces and perfectoid spaces are given in Section \ref{adic spaces and perfectiod spaces}.
We introduce the perfectoid universal cover of an abelian scheme in \ref{subThe Perfectoid universal cover of a formal abelian scheme}. The reader may skip these materials and only come back for references.
We set up notations for
the proof of Theorem \ref{MM} in \ref{Tilting and reduction}, then prove Theorem \ref{MM}
in Section \ref{proof}.
We introduce the ordinary perfectoid Siegel space and set up notations
for
the proof of Theorem \ref{TVcor1} in Section \ref{The ordinary perfectoid Siegel space and Serre-Tate theory}. Then we prove Theorem \ref {TVcor1} in Section \ref{pf{TVcor}}.
\section{Adic spaces and perfectoid spaces}\label{adic spaces and perfectiod spaces}
We briefly recall the theory of adic spaces due to Huber \cite{Hub}\cite{Hub93}\cite{Hub94}\cite{Hub96}, and the generalization by Scholze-Weinstein \cite{SW}. Then we define tube neighborhoods in adic spaces and distance functions. Finally we recall the theory of perfectoid spaces of Scholze \cite{Sch12} and an approximation lemma due to Scholze.
Let $K$ be a
non-archimedean field, i.e. a complete nondiscrete topological field whose topology is induced by a non-archimedean norm $\|\cdot\|_K$ ($\|\cdot\|$ for short). Define
$K^\circ=\{x\in K:\|x\|\leq 1\}$, $K^{\circ\circ}=\{x\in K:\|x\|< 1\}$.
Let $\varpi\in K^{\circ\circ}-\{0\}$.
\subsection{Adic generic fibers of certain formal schemes }\label{ADIC SPACES ASSOCIATED TO SCHEMES}
\subsubsection{Adic spaces}
Let $R$ be a complete Tate $K$-algebra, i.e. a complete topological $K$-algebra with a subring $R_0\subset R$ such that $\{aR_0:a\in K^\times\}$ forms a basis of open neighborhoods of 0. A subset of $R$ is called bounded if it is contained in a certain $aR_0$.
Let $ R^\circ$ be the subring of power bounded elements, i.e. $x\in R^\circ$ if and only if the set of all powers of $x$ form a bounded subset of $R$.
Let $R^+\subset R^\circ$ be an open integrally closed subring. Such a pair $ (R, R^+)$ is called an affinoid $K$-algebra.
Let $\Spa(R, R^+)$
be the topological space whose underlying set is the set of equivalence classes of continuous valuations $|\cdot(x)|$ on $R$ such that $|f(x)|\leq 1$ for every
$f\in R^+$, and whose
topology is generated by the
subsets of the form $$U(\frac{f_1,...,f_n}{g}):=\{x\in \Spa(R, R^+): \forall i, |f_i(x)|\leq |g(x)|\}$$ such that $(f_1,...,f_n)=R$.
There is a natural presheaf on $\Spa(R, R^+)$ (see \cite[p 519]{Hub94}).
If this presheaf is a sheaf, then the affinoid $K$-algebra $(R,R^+)$ is called sheafy, and $\Spa(R,R^+)$ is called an affinoid adic space over $K$.
\begin{asmp}\label{norm} If $K^\circ\subset R^+$, for every $x\in \Spa(R,R^+)$,
we always choose a representative $|\cdot(x)|$ in the equivalence class of $x$ such that
$|f(x)|=\|f\|_K$ for every $f\in K$.
\end{asmp}
Define a category $(V)$ as in \cite[Definition 2.7]{Sch12}. Objects in $(V)$ are triples
$ (\CX, \CO_\CX , \{|\cdot(x)|:x\in \CX\}) $ where $(\CX,\CO_\CX)$ is a locally ringed
topological space whose structure sheaf is a sheaf of complete topological $K$-algebras, and $|\cdot(x)|$ is an equivalence class of continuous valuations on the stalk of $\CO_\CX$ at $x$. Morphisms in $(V)$ are morphisms of locally ringed topological spaces which are continuous $K$-algebra morphisms on the structure sheaves, and compatible with the valuations on the stalks in the obvious sense.
\begin{defn} \label{adicdef}
An adic space $\CX$ over $K$ is an
object in $(V)$ which is locally on $\CX$ an affinoid adic space over $K$.
An adic space over $\Spa(K,K^\circ)$ is
an adic space over $K$ with a morphism to $\Spa(K,K^\circ)$.
A morphism between two adic spaces over $\Spa(K,K^\circ)$ is a morphism in $(V)$ compatible with the morphisms to $\Spa(K,K^\circ)$.
The set of morphisms $\Spa(K, K^\circ)\to \CX$ is denoted by $\CX(K,K^\circ)$.
\end{defn}
There is a natural inclusion
$\CX(K,K^\circ)\hookrightarrow \CX$
by mapping a morphism $\Spa(K, K^\circ)\to \CX$ to its image.
We always identify $\CX(K,K^\circ)$ as a subset of $ \CX$ by this inclusion.
\subsubsection{Adic generic fibers of certain formal schemes}
A Tate $K$-algebra $R$ is called of topologically finite type (tft for short) if $R$ is a quotient of $K \langle T_1, T_2 ,..., T_n \rangle $. In particular, it is equipped with the $\varpi$-adic topology. Similarly define $K^\circ$-algebras of tft.
By \cite[5.2.6.Theorem 1]{BGR} and \cite[Theorem 2.5]{Hub94}, if $R$ is of tft, then an affinoid $K$-algebra $ (R, R^+)$ is sheafy.
Similar to the rigid analytic generic fibers of formal schemes over $K^\circ$ \cite[7.4]{Bos}, we naturally have a functor from the category of formal schemes over $K^\circ$ locally of tft
to adic spaces over $\Spa(K,K^\circ)$ such that the image of $ \Spf A$ is $ \Spa(A [\frac{1}{\varpi}],A ^c)$ where $A^c$ is the integral closure of $A$ in $A[\frac{1}{\varpi}]$.
The image of a formal scheme under this functor is called its adic generic fiber.
We are interested in certain infinite covers of abelian schemes and Siegel moduli spaces. They are not of tft. We need to
generalize the adic generic fiber functor. In \cite{SW}, the category of adic spaces over $\Spa(K,K^\circ)$ is enlarged in a sheaf-theoretical way. Moreover, the adic generic fiber functor extends to
the category of formal schemes over $K^\circ$ locally admitting a finitely generated ideal of definition.
For our purpose, we only need the following special case. Let $\fX$ be a formal $K^\circ$-scheme which is covered
by affine open formal subschemes $ \{\Spf A_i:i\in I\} $, where $I$ is an index set, such that each affinoid $K$-algebra
$ (A_i[\frac{1}{\varpi}],A_i^c)$ is sheafy.
Then the adic generic fiber $\CX$ of $\fX$ is an adic space over $\Spa(K,K^\circ)$ in the sense of Definition \ref{adicdef}.
Indeed, $\CX$ is obtained by gluing the affinoid adic spaces $\Spa (A_i[\frac{1}{\varpi}],A_i^c)$ in the obvious way.
We have an easy consequence.
\begin{lem}\label{algpoints} Let $\CX$ be the adic generic fiber of $\fX$. Then there is
a natural bijection $\fX (K^\circ)\simeq \CX(K, K^\circ).$
\end{lem}
\subsection{Tube neighborhoods and distance functions}\label{tube neighborhoods in the adic generic fiber}
\subsubsection{Tube neighborhoods}
Let $\fX=\Spf B$, where $B$ is a flat $K^\circ$-algebra of tft.
Let $\fZ$ be a closed formal subscheme defined by a closed ideal $ I$.
Let $\CX$ be the adic generic fiber of $\fX $. Then $\CX=\Spa(R,R^+)$ where $R= B [\frac{1}{\varpi}]$ and $R^+ $ is the integral closure of $B $ in $R$.
\begin{defn}\label{defnnbhd}
For $\ep\in K^\times ,$ the $\ep$-neighborhood of $\fZ$ in $\CX$ is defined to be the subset
$$\CZ_\ep: =\{x\in \CX: |f (x)|\leq |\ep(x)| \mbox{ for every } f\in I \}.$$ \end{defn}
\begin{rmk} Note that $\CZ_\ep$ may not be open in $\CX$.
If $I$ is generated by $\{f_1,...,f_n\}$, then $\CZ_\ep=U(\frac{f_1,...,f_n,\ep}{\ep})$ is naturally an open adic subspace of $\CX$. In fact, for our applications, we only use this case.
\end{rmk}
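For orientation, here is the simplest example (ours, included only as an illustration): take $\fX = \Spf K^\circ\langle T\rangle$ and let $\fZ$ be defined by $I = (T)$, so that $\CX = \Spa(K\langle T\rangle, K^\circ\langle T\rangle)$ is the closed adic unit disc over $K$. Since every $f \in I$ is of the form $gT$ with $|g(x)| \leq 1$, we get
\[
\CZ_\ep = \{ x \in \CX : |T(x)| \leq |\ep(x)| \} = U\Big(\frac{T,\ep}{\ep}\Big),
\]
the closed sub-disc of radius $\|\ep\|$ around the origin.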
Definition \ref{defnnbhd} immediately implies the following lemmas.
\begin{lem} \label{intersect} Let $\fZ= \bigcap\limits_{i=1}^m \fZ_i$, where each $\fZ_i$ is a closed formal subscheme of $\fX$.
For $\ep\in K^\times$, let
$\CZ_{i,\ep}$ be the
$\ep $-neighborhood of $\fZ_i$.
Then $\CZ_\ep= \bigcap\limits _{i=1}^m\CZ_{i,\ep}.$
\end{lem}
\begin{lem}\label{union}
Let $\fZ= \fZ_1\bigcup \fZ_2$, where $\fZ_1,\fZ_2$ are closed formal subschemes of $\fX$.
(1) Then $\CZ_{1,\ep}\subset \CZ_{ \ep}.$
(2)
Suppose that there exists $\delta\in K^{\circ\circ}-\{0\}$ which vanishes on $\fZ_2$.
Then
$\CZ_\ep\subset \CZ_{1,\ep/\delta}.$
\end{lem}
Let $X$ be a $K^\circ$-scheme locally of finite type, and $\fX$ the $\varpi$-adic formal completion of $ X $. Let $\CX$ be the adic generic fiber of $\fX$. We also call $\CX$ the adic generic fiber of $X $.
Let $Z$ be a closed subscheme of $X_K$. We define tube neighborhoods of $Z$ in $\CX$ as follows
(see also \cite[Proposition 8.7]{Sch12}).
Suppose that $X $ is affine. Let
$\fZ\subset \fX$ be the closed formal subscheme associated to the schematic closure of $Z$. \begin{defn}
For $\ep\in K^\times ,$ the $\ep$-neighborhood of $Z$ in $\CX$ is defined to be the $\ep$-neighborhood of $\fZ$ in $\CX$. \end{defn}
\begin{rmk} If the schematic closure of $Z$
has empty special fiber,
then $\CZ_\ep$ is empty.
\end{rmk}
To define tube neighborhoods in general, we need to glue affinoid pieces.
We consider the following relative situation.
Let $ Y $ be another affine $K^\circ$-scheme of finite type, and $\Phi:Y\to X$ a $K^\circ$-morphism. Let $W$ be the preimage of $Z$ which is a closed subscheme of $Y_K$, and $\CW_\ep$ its $\ep$-neighborhood.
By the functoriality of formal completion and taking adic generic fibers, we have an induced morphism $\Psi:\CY\to \CX$.
From the fact that schematic image is compatible with flat base change (see \cite[2.5, Proposition 2]{BLR}), we easily deduce the following lemma.
\begin{lem}\label{pullback1}If $\Phi:Y\to X$ is flat, then $\Psi^{-1}(\CZ_{\ep})=\CW_{ \ep}$.
In particular, if $Y\subset X$ is an open $K^\circ$-subscheme, $\CW_\ep=\CZ_\ep \bigcap \CY$ under the natural inclusion $\CY\hookrightarrow \CX$.
\end{lem}
Now we turn to the general case.
Let $X$ be a $K^\circ$-scheme locally of finite type. For an open subscheme $U\subset X$, let $Z_U$ be the restriction of $Z$ to $U$.
Let $S=\{U_i:i\in I\}$ be an affine open cover of $X$, where $I$ is an index set and each $U_i $ is of finite type over $K^\circ$.
Let $\CZ_{U_i,\ep}$ be the $\ep$-neighborhood of $Z_{U_i}$ in the adic generic fiber $\CU_i$ of $U_i$. Note that each $\CU_i$ is naturally an open adic subspace of $\CX$.
\begin{defn} \label{globalnbhd}Define the $\ep$-neighborhood of $Z$ in $\CX$ by
$\CZ_\ep:=\bigcup\limits_{U\in S}\CZ_{U,\ep}.$
\end{defn}
As a corollary of Lemma \ref{pullback1}, this definition is independent of the choice of the cover $S$.
\subsubsection{Distance functions} \label{The distance function}
Let $U$ be an affine open subset of $X$ which is flat over $K^\circ$. Let $I $ be an ideal of the coordinate ring of $U$.
For $x\in U(K) $,
define $d_U(x,I):=\sup\{\|f(x)\|:f\in I\}.$
Let $\CI$ be the ideal sheaf of the schematic closure of $Z$ in $X$.
Assume that $X$ is of finite type over $K^\circ$.
Let $\CU:=\{U_1,...,U_n\}$ be a finite affine open cover of $ X $ such that each $U_i$ is flat over $K^\circ$.
For $x\in X(K)$, define $d^\CU(x,Z)$ to be the maximum of $ d_{U_i}(x,I)$ over all $i$'s such that $x\in U_i$.
Let
$x^\circ\in X(K^\circ)$ and $x$ the generic point of $x^\circ$. Regard $x$ as a point in $\CX(K,K^\circ)$ via Lemma \ref{algpoints}.
Let $U$ be an affine open subset of $X$ flat over $K^\circ$ such that $x^\circ\in U(K^\circ)$. We have a tautological relation between the distance function and tube neighborhoods.
\begin{lem} \label{distep}
Let $\ep\in K^\times$.
Then $x\in \CZ_{U,\ep}$ if and only if $d_U (x,\CI(U))\leq \|\ep\|$.
\end{lem}
By Lemma \ref{pullback1}, the number $d_U (x,\CI(U))$ does not depend on the choice of $U$.
Define $$d(x,Z):= d_U (x,\CI(U))$$
Then $d(x,Z) = d^\CU(x,Z) $ for every finite affine open cover $\CU $ of $X$.
Our distance function coincides with the one at the end of \cite[Section 1]{Sca0}, which is defined globally.
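As a toy example (ours, only for illustration): let $X = \Spec K^\circ[T]$, $Z = \{T = 0\} \subset X_K$ and $U = X$, so that $\CI(U) = (T)$. A point $x^\circ \in X(K^\circ)$ is an element $a \in K^\circ$, and every $f = gT \in (T)$ satisfies $\|f(a)\| = \|g(a)\|\,\|a\| \leq \|a\|$, with equality for $f = T$; hence $d(x,Z) = \|a\|$, and $x \in \CZ_\ep$ if and only if $\|a\| \leq \|\ep\|$, in accordance with Lemma \ref{distep}.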
A finite extension of $K$ has a natural structure of a non-archimedean field (see \cite{BGR}). Let $\bar K$ be an algebraic closure of $K$.
The above discussion is naturally generalized to $x \in X(\bar K)$ and $Z\subset X_{\bar K}$.
\subsubsection{Tate-Voloch type sets}\label{TVset} Let $X$ be of finite type over $K^\circ$.
\begin{defn}\label{TVtype}Fix an arbitrary finite affine open cover $\CU$ of $ X $ by subschemes flat over $K^\circ$.
A set
$T\subset X(\bar K)$ is of Tate-Voloch type if for every closed subscheme $Z$ of $X_{\bar K}$, there exists a constant $c>0$ such that for every
$x\in T$, if $d^\CU(x,Z)\leq c$, then $x\in Z(\bar K)$.
\end{defn}
\begin{rmk} Is there always a set of Tate-Voloch type? Let $C\subset X$ be irreducible and flat over $K^\circ$ of relative dimension 1. Choose one point in each residue disk in $C$.
It is easy to check that this set of points of $X$ is of Tate-Voloch type. Moreover, we can choose points in residue disks in $C$ whose degrees are unbounded.
The following questions are more meaningful. Is there always a Tate-Voloch type set which is Zariski dense in $X$?
Can the points in this set have unbounded degrees over $K$?
Indeed, the Tate-Voloch type sets in Theorem \ref{TVcor1} and in the results of Habegger, Scanlon and
Xie give positive answers to these two questions.
\end{rmk}
Let $Y$ be a $K^\circ$-scheme of finite type,
and $\pi:Y\to X$ a finite schematically dominant morphism.
\begin{lem}\label{T'T}Let $T\subset X(\bar K)$ be of Tate-Voloch type and $T'=\pi^{-1}(T)\subset Y(\bar K)$.
Then $T'$ is of Tate-Voloch type.
\end{lem}
\begin{proof}We may assume that $Y=\Spec B$ and $X=\Spec A$ where $A$ is a subring of $B$.
Let $L$ be a finite extension of $K$.
Let $Z'$ be a closed subscheme of $Y_{L}$. We need to show that $d(x', Z')$ has a positive lower bound for $x'\in T'-Z'({\bar K})$.
Define the dimension of $Z'$ to be the maximal dimension of the irreducible components of $Z'$. We allow $Z'$ to be empty, in which case we
define its dimension to be $-1$. We do induction on the dimension of $Z'$.
Then the dimension $-1$ case is trivial. Now we consider the general case with the hypothesis that the lemma holds for all lower dimensions.
Suppose that such a lower bound does not exist; then there exists a sequence of $x_n' \in T'- Z'({\bar K})$
such that $d(x_n',Z
')\to 0$ as $n\to \infty$. We will find a contradiction.
Let $Z$ be the schematic image of $Z'$ by $\pi$, $x_n=\pi(x_n')$.
Let the schematic closure
of $Z$ in $X_{L^\circ }$(resp. $Z'$ in $Y_{L^\circ }$) be defined by an ideal $J\subset A\otimes{L^\circ }$ (resp. $I\subset B\otimes{L^\circ }$).
Then
$I\otimes L \supset
J B\otimes L$.
Since $J B\otimes{L^\circ }$ is finitely generated,
there exists a positive integer $r$ such that $I \supset
\varpi^rJ B\otimes{L^\circ }$. Thus $d(x_n,Z
)\to 0$ as $n\to \infty$.
Since $T$ is of Tate-Voloch type, $ x_n\in Z(\bar K)$ for $n$ large enough. We may assume that every $ x_n\in Z(\bar K)$.
Since $x_n'\not\in Z'$, we may write $\pi^{-1}(Z)=Z'\bigcup Z_1$ where $Z_1$ is a closed subscheme of $Y_L$ not containing $Z'$ but containing all
$x_n'$. Claim: $d(x_n', Z'\bigcap Z_1)\to 0$ as $n\to \infty$.
This contradicts the induction hypothesis. Thus $d(x', Z')$ has a positive lower bound for $x'\in T'-Z'(\bar K)$.
Now we prove the claim.
Let the schematic closure of
$Z_1$ in $Y_{L^\circ}$ be defined by an ideal $I_1\subset B \otimes{L^\circ }$.
Then the schematic closure of
$Z'\bigcap Z_1$ is defined by the following ideal of $ B \otimes{L^\circ }$: $$I_2:=(I_1\otimes{L }+I\otimes{L }) \bigcap B \otimes{L^\circ }=(I_1 +I)\otimes{L } \bigcap B \otimes{L^\circ },$$ which is finitely generated.
Thus there exists a positive integer $s$ such that $(I_1+I) \supset \varpi ^s I_2$.
Now the claim follows from the fact that $d(x_n',Z
')\to 0$ and $x_n'\in Z_1$.
\end{proof}
\subsection{Perfectoid spaces} \subsubsection{Two perfectoid fields}\label{perftheory}
Instead of recalling the definition of perfectoid fields (see \cite[Definition 3.1]{Sch12}),
we consider two examples and use them through out this paper.
Let $k=\bar \BF_p$, $W=W(k)$ the ring of Witt vectors, and $L=W[\frac{1}{p}]$. For each integer $n\geq 0$, let $\mu_{p^n}$ be a primitive $p^n$-th root of unity in $\bar L$ such that $\mu_{p^{n+1}}^p=\mu_{p^n}$.
Let $$L^{\mathrm{cycl}}:=\bigcup\limits_{n=1}^\infty L(\mu_{p^n}).$$
Let $\varpi=\mu_p-1$,
and $K$
the $\varpi$-adic completion of $L^{\mathrm{cycl}}$.
Then $K$ is a perfectoid field in the sense that
$$K^\circ/\varpi\to K^\circ/\varpi ,\ x\mapsto x^p$$ is surjective (see \cite[Definition 3.1]{Sch12}). Let $$K^\flat= k((t^{1/p^\infty})) $$ be the $t$-adic completion of $ \bigcup\limits_{n=1}^\infty k((t))(t^{1/p^n})$.
Then $ K^\flat$ is a perfectoid field.
Let $\varpi^\flat=t^{1/p}$. Equip $ {K^\flat}$ with the non-archimedean norm $\|\cdot\|_{K^\flat}$ such that $\|\varpi^\flat\|_{K^\flat}=\|\varpi\|_{K}$.
Consider the morphism
\begin{equation}K^\circ/\varpi\to K^\fcc/\varpi^\flat,\ \mu_{p^n}-1\mapsto t^{1/p^n}\label{2.3.2eq}.\end{equation}
This morphism is well-defined since $$(\mu_{p^n}-1)^{p^m}\equiv \mu_{p^{n-m}}-1 \pmod{\varpi}$$ for $m<n$; indeed $p \in \varpi K^{\circ}$, so $(a-1)^p \equiv a^p-1 \pmod{\varpi}$ for every $a \in K^\circ$.
It is easy to check that this morphism is an isomorphism. We call $K^\flat$ the tilt of $K$.
\subsubsection{Perfectoid spaces}
The most important property of a perfectoid $K$-algebra $R$ is that $$R^\circ/\varpi\to R^\circ/\varpi ,\ x\mapsto x^p$$ is surjective (see
\cite[Definition 5.1]{Sch12}).
An affinoid $K$-algebra $(R,R^+)$ is called perfectoid if $R$ is perfectoid.
By \cite[Theorem 6.3]{Sch12}, a perfectoid affinoid $K$-algebra $(R,R^+)$ is sheafy.
Define a perfectoid space over $K$ to be an adic space over $K$
locally isomorphic to $\Spa(R,R^+)$, where $(R,R^+)$
is a perfectoid affinoid $K$-algebra.
By \cite[Theorem 5.2]{Sch12}, there is an equivalence between the categories of perfectoid $K$-algebras and perfectoid $K^\flat$-algebras.
By \cite[Lemma 6.2]{Sch12} and \cite[Proposition 6.17]{Sch12}, this category equivalence induces an equivalence
between the categories of perfectoid affinoid $K$-algebras and perfectoid affinoid $K^\flat$-algebras, as well as an equivalence
between the categories of perfectoid spaces over $K$ and perfectoid spaces over $K^\flat$.
The image of an object or a morphism in the category of perfectoid $K$-algebras, perfectoid affinoid $K$-algebras, or
perfectoid spaces over $K$ is called its tilt.
\subsubsection{Two important maps $\sharp$ and $\rho$}\label{srho}
Let $R$ be a perfectoid $K$-algebra and $R^{\flat}$ its tilt.
By \cite[Proposition 5.17]{Sch12}, there is a multiplicative homeomorphism
$R^{\flat}\simeq \vpl\limits_{x\mapsto x^p} R .$
Denote the projection to the first component by $$R^\flat\to R, \ f\mapsto f^\sharp.$$
Let $(R,R^+)$ be a perfectoid affinoid $K$-algebra and $(R^\flat,R^\fpl)$ its tilt.
For $x\in \Spa(R,R^+), $ let $\rho(x)\in \Spa(R^\flat,R^\fpl)$ be the valuation
$|f(\rho(x))|=|f^\sharp(x)|$ for $f\in R^\flat$. This defines a map between sets
$$\rho: \Spa(R,R^+)\to \Spa(R^\flat,R^\fpl).$$
Note that $ \Spa(R^\flat,R^\fpl)$ is the tilt of $ \Spa(R,R^+) $. The definition of $\rho$ glues and we have a
map $$\rho_\CX:|\CX|\simeq|\CX^\flat|$$ between the underlying sets of a perfectoid space $\CX$ over $K$ and its tilt $\CX^\flat$.
\begin{lem}\label{tiltmor}
(1) Let $\phi:R\to S$ be a morphism between perfectoid $K$-algebras, and
$\phi^\flat:R^\flat\to S^\flat$ its tilt. Then for every $f\in R^\flat$, we have $\phi^\flat(f)^\sharp=\phi(f^\sharp).$
(2) Let $\Phi:\CX\to\CY$ be a morphism between perfectoid spaces over $K$ and $\Phi^\flat$ its tilt. Then as maps between topological spaces, we have $$\rho_\CY\circ \Phi=\Phi^\flat\circ \rho_{\CX}$$
\end{lem}
\begin{proof} (1) follows from the definition of the $\sharp$-map and \cite[Theorem 5.2]{Sch12}. (2) follows from (1). \end{proof}
By (2), the restriction of $\rho_\CX $ to $\CX(K,K^\circ)$
gives the functorial bijection $\CX(K,K^\circ)\simeq \CX^\flat(K^\flat,K^{\flat\circ})$,
which we also denote by $ \rho_\CX$.
In the next two paragraphs, we compute $ \rho_\CX$ in two cases.
\subsubsection{Tilting and reduction}\label{subTilting and reduction}
Let $(R,R^+)$ be a perfectoid affinoid $K $-algebra and $(R^\flat,R^\fpl)$
its tilt. Suppose there exists a flat $W$-algebra $S$ such that
\begin{itemize}
\item[1] $R^+ $ is the $\varpi$-adic completion of $S\otimes_W K^\circ$,
\item[2] $R^\fpl $ is the $\varpi^\flat$-adic completion of $S_k \otimes_k K^{\flat\circ}$.
\end{itemize}
Let $\phi:S\to W$ be a $W$-algebra morphism, $\phi_k:S_ k\to k$ be its base change. Then $\phi $ induces a map $\psi:R^+ \to K^\circ$ which further induces a point $x$ of $\Spa(R,R^+)$. Similarly,
$\phi_k $ induces a map $\psi':R^\fpl \to K^\fcc$ which further induces a point $x'$ of $\Spa(R^\flat,R^\fpl)$.
Then $\psi/\varpi
= \psi'/\varpi^\flat$ under the isomorphism $R^+/\varpi\simeq R^\fpl/\varpi^\flat.$ By \cite[Theorem 5.2]{Sch12}, $\psi'$ is the tilt of $\psi$
and thus we have the following lemma. \begin{lem}\label{tiltred2}
We have $\rho_{\Spa(R,R^+)}(x)=x'$.
\end{lem}
\subsubsection{An example: the perfectoid closed unit disc}\label{CGperf}
Let $R=K\pair{T^{1/p^\infty},T^{-1/p^\infty}}$, the $\varpi$-adic completion of $\bigcup\limits_{r\in\BZ_{\geq 0}} K [ T^{1/p^r}, T^{-1/p^r}].$ Then $R$ is perfectoid. The tilt $R^\flat$ of $R$ is $K^\flat\pair{T^{1/p^\infty},T^{-1/p^\infty}}$.
Let $ \CG^\perf:=\Spa (R,R^\circ) $. Then $\CG^\perf$ is a perfectoid space over $\Spa(K,K^\circ)$, and
$ \CG^{\perf,\flat}:=\Spa(R^\flat, R^\fcc)$ is its tilt.
Let $c \in \BZ_p$, and $m\in \BZ_{\geq 0}$.
The $K^\circ$-morphism $R ^\circ\to K^\circ$ defined by $$T ^{1/p^n}\to \mu_{p^{m+n}}^c$$
gives a point $x\in \CG^\perf(K,K^\circ)$.
The $K^\fcc$-morphism $R^{\flat\circ} \to K^\fcc$ defined by $$T ^{1/p^n}\to (1+t^{1/p^{m+n}})^c$$
gives a point $x'\in \CG^{\perf,\flat}(K^\flat,K^\fcc)$.
The following lemma follows from \eqref{2.3.2eq} and \cite[Theorem 5.2]{Sch12}.
\begin{lem} \label{2316} We have $\rho_{\CG^\perf}(x)=x'$.
\end{lem}
Similar result holds for
$\CG^{l,\perf}=\Spa(R,R^\circ)$ where $$R=K\pair{T_1^{1/p^\infty},T_1^{-1/p^\infty},...,T_l^{1/p^\infty},T_l^{-1/p^\infty}},$$
and its tilt $\CG^{l,\perf\flat}=\Spa(R^\flat,R^\fcc)$ where $$R^\flat=K^\flat\pair{T_1^{1/p^\infty},T_1^{-1/p^\infty},...,T_l^{1/p^\infty},T_l^{-1/p^\infty}}.$$
\subsection{A variant of Scholze's approximation lemma}\label{some estimates on perfectoid spaces}
The perfectoid fields $K$, $K^\flat$ and related notations are as in \ref{perftheory}.
Let $(R,R^+)$ be a perfectoid affinoid $(K,K^\circ)$-algebra with tilt $(R^\flat,R^\fpl)$.
Let $\CX=\Spa(R,R^+)$ with tilt
$\CX^\flat=\Spa(R^\flat,R^\fpl)$.
For $f,g\in R$, define $|f(x)-g (x)| $ to be $|(f-g)(x)|$.
The following approximation lemma plays an important role in Scholze's work \cite{Sch12}.
\begin{lem}[{\cite[Corollary 6.7 (1)]{Sch12}}]\label{Corollary 6.7. (1)} Let $f\in R^+$.
Then for every $ c\geq 0$, there exists
$g\in R^\fpl$ such that for every $x\in \CX$, we have
\begin{equation}|f(x)-g^\sharp(x)| \leq\|\varpi\|^{\frac{1}{p}} \max\{|f(x)|,\|\varpi\|^c\} =\|\varpi\|^{\frac{1}{p}} \max\{|g^\sharp(x)|,\|\varpi\|^c\}.\label{2.1}\end{equation} \end{lem}
Here the map $\sharp$ is as in \ref{srho} (i.e. $|g(\rho(x))|=|g^\sharp(x)|$), and we use $\|\cdot\|$ to denote $\|\cdot\|_K$.
Recall that $k=\bar \BF_p$.
Assume that there exists
a $k$-algebra $S$, such that $ R^{\flat+}$ is the $\varpi^\flat$-adic completion of $S \otimes K^{\flat\circ}$.
Then we have natural maps\begin{equation*}\Hom_k(S,k)\hookrightarrow \Hom_ {K^{\flat\circ}}( S \otimes K^{\flat\circ},K^{\flat\circ}) \simeq \CX^\flat(K^\flat,K^{\flat\circ}) .\end{equation*}
Thus we regard $ (\Spec S)(k) $ as a subset of $ \CX^\flat$.
\begin{lem}\label{improveCorollary 6.7. (1)} Continue to use the notations in Lemma \ref{Corollary 6.7. (1)}. Assume that
$ c\in \BZ[\frac{1}{p}] $. There exists
a finite sum $$g_c= \sum_{\substack{i\in \BZ[\frac{1}{p}]_{\geq 0},\\i< \frac{1}{p}+c}}g_{c,i} \cdot (\varpi^\flat)^i $$ with $g_{c,i}\in S$ and only finitely many $g_{c,i}\neq 0$,
such that \begin{equation}g-g_c\in (\varpi^\flat)^{\frac{1}{p}+c}R^{\flat+}.\label{modi}
\end{equation}
\end{lem}
\begin{proof} There exists a \textit{finite sum}
$g'=\sum s_j a_j \in S \otimes K^{\flat\circ}$, where $s_j\in S$ and $a_j\in K^\fcc$, such that
$g-g'\in (\varpi^\flat)^{\frac{1}{p}+c}R^{\flat+}.$
Claim: let $a\in K^{\flat\circ}$, then there exists a positive integer $N$ such that
$$a-\sum_{\substack{h\in (\frac{\BZ}{ p^N})_{\geq 0},\\h< \frac{1}{p}+c }} \alpha_h \cdot (\varpi^\flat)^h \in (\varpi^\flat)^{\frac{1}{p}+c}K^{\fcc}$$ for certain $\alpha_h\in k$. Indeed, the claim follows from the fact that $K^{\fcc}$ is the $\varpi^\flat$-adic completion of $ \bigcup\limits_{n=1}^\infty k[[t]][ (\varpi^\flat)^{1/p^n}]$.
Note that $\{h\in (\frac{\BZ}{ p^N})_{\geq 0}, h< \frac{1}{p}+c \}$ is a finite set.
So there exists a finite sum $$g_c=\sum_{\substack{i\in \BZ[\frac{1}{p}]_{\geq 0},\\i< \frac{1}{p}+c}}g_{c,i} \cdot (\varpi^\flat)^i $$ with $g_{c,i}\in S$
such that
$g'-g_c\in (\varpi^\flat)^{\frac{1}{p}+c}R^{\flat+}.$
Then
$g-g_c\in (\varpi^\flat)^{\frac{1}{p}+c}R^{\flat+}.$ \end{proof}
\begin{lem} \label{g=0}Let $g_c$ be as in Lemma \ref{improveCorollary 6.7. (1)} and $x\in (\Spec S)(k)$.
Regard $x$ as an element of $\CX^\flat(K^\flat,K^\fcc)$ via the inclusion above. If $|g_c(x)| \leq \|\varpi\|^{\frac{1}{p}+c}$, then $g_{c,i}(x)=0$ for all $i$.
\end{lem}
\begin{proof} Since $x\in (\Spec S)(k)$, if $g_{c,i}(x)\neq 0$, then $|g_{c,i}(x)|=1$. Let $i_0< \frac{1}{p}+c$ be the minimal $i$
such that $|g_{c,i}(x)|=1$.
Then $ |g_c(x)|=\|\varpi^\flat\|_{K^\flat}^{i_0}> \|\varpi\|^{\frac{1}{p}+c}$, a contradiction.
\end{proof}
\subsubsection{Profinite setting} Impose the following assumption.
\begin{asmp} \label{asmp4}
There are $k$-algebras
$S_0\subset S_1\subset...$ such that $S=\bigcup S_n$.
\end{asmp}
Let $\CX_n$ be the adic generic fiber of $\Spec S_n\otimes K^\fcc$. Then we have a natural morphism
$$\pi_n:\CX^\flat\to \CX_n.$$ We also use $\pi_n$ to denote the morphism $(\Spec S)(k)\to (\Spec S_n)(k)$. We have natural maps
$$ (\Spec S_n)(k) \hookrightarrow \Hom_{K^\fcc}(S_n\otimes K^\fcc, K^{\flat\circ})\simeq \CX_n (K^\flat,K^{\flat\circ}) $$ by which
we regard $(\Spec S_n)(k)$ as a subset of $\CX_n$.
For each $n$, let $\Lambda_n\subset (\Spec S_n)(k) $ be a set of $k$-points, and $\Lambda^{\mathrm{Zar}}_n$ the Zariski closure of $\Lambda _n$ in $\Spec S_n$.
We have the following maps and inclusions between sets:
$$|\CX|\xrightarrow{\rho}|\CX^\flat|\xrightarrow{\pi_n}|\CX_n| \supset \Lambda _n^{\mathrm{Zar}} (k) \supset \Lambda_n. $$
where $\rho$ is as in \ref{srho}.
Let $f\in R^+$, and $\Xi:=\{x\in \CX :|f(x )|=0\}$. We have the following variant of Lemma \ref {Corollary 6.7. (1)}.
\begin{prop}\label{techprop2} Assume that $ \Lambda_n\subset \pi_n ( \rho( \Xi))$ for each $n$.
Then for each $\ep\in K^\times$, there exists a positive integer $n$ such that
$|f (x)|\leq \|\ep\|_K$ for every $x\in ( \pi_n\circ \rho)^{-1}\left(\Lambda _n^{\mathrm{Zar}}(k) \right)$.
\end{prop}
\begin{proof}
Choose $c\in \BZ _{\geq 0}$ large enough such that $ \|\varpi\|_K^{\frac{1}{p}+c}\leq \|\ep\|_K $, choose $g$ as in Lemma \ref{Corollary 6.7. (1)} and choose a finite sum $$ g_c=\sum\limits_{\substack{i\in \BZ[\frac{1}{p}]_{\geq 0},\\i< \frac{1}{p}+c}}g_{c,i} \cdot (\varpi^\flat)^i $$
as in Lemma \ref{improveCorollary 6.7. (1)} where $g_{c,i}\in S$ for all $i$.
There exists a positive integer $n(c)$ such that $g_{c,i}\in S_{n(c)}$ for all $i$ by the finiteness of the sum. By the assumption, every element $x\in \Lambda_{n(c)}$ can be written as $\pi_{n(c)} \circ \rho (y)$ where $y\in \Xi$. By \eqref{2.1} and \eqref{modi}, $|g_c(\rho(y) )|\leq \|\varpi\|^{\frac{1}{p}+c}$.
Then by Lemma \ref{g=0} and that $\rho(y)\in (\Spec S)(k)$, $g_{c,i}(\rho(y))=0$. Since
$g_{c,i}\in S_{n(c)}$, $g_{c,i}(x)=0$.
Thus $g_{c,i}$ lies in the ideal defining $\Lambda_{n(c)}^{\mathrm{Zar}}$.
So $g_{c,i}(x)=0$, and thus $g_c(x)=0$, for every $x\in \Lambda _{n(c)}^{\mathrm{Zar}}(k) $. By \eqref{2.1} and \eqref{modi}, for every $x\in \Lambda _{n(c)}^{\mathrm{Zar}}(k) $, we have $$|f\left(\rho^{-1}\left(\pi_{n(c)} ^{-1}(x)\right)\right)|\leq \|\varpi\|^{\frac{1}{p}+c}\leq\| \ep\|.$$ \end{proof}
\section{Perfectoid universal cover of an abelian scheme}\label{The perfectoid universal cover of a formal abelian scheme}
Let $K$ be the perfectoid field in \ref{perftheory} and $K^\flat$ its tilt. Let $\fA$ be a formal abelian scheme over $K^\circ$.
We first recall the perfectoid universal cover of $\fA$ and its tilt constructed in \cite[Lemme A.16]{PB}.
Then we study the relation between tilting and reduction.
\subsection{Perfectoid universal cover of an abelian scheme}\label{subThe Perfectoid universal cover of a formal abelian scheme}
Let $\fA'$ be a formal abelian scheme over $\Spf K^{\flat\circ} $. Assume that there is an isomorphism \begin{equation}\fA\otimes K^\circ/\varpi\simeq \fA'\otimes K^{\flat\circ}/\varpi^\flat\label{modpicond}\end{equation} of abelian schemes over $K^\circ/\varpi\simeq K^{\flat\circ}/\varpi^\flat$.
Let
\begin{equation*}\tilde\fA:=\vpl\limits_{[p]} \fA,\ \tilde\fA':=\vpl\limits_{[p]} \fA'.\end{equation*} Here the transition maps $[p]$ are the multiplication-by-$p$ morphisms, and the
inverse limits
exist in the categories of $\varpi$-adic and $\varpi^\flat$-adic formal schemes (see \cite[Lemme A.15]{PB}). Index the inverse systems by $\BZ_{\geq 0}$. Let $ \Spf R_0^+\subset \fA$ be an affine open formal subscheme.
Let $R_i^+$ be the coordinate ring of $([p]^{i})^{-1} \Spf R_0^+ $, in other words, $ \Spf R_i^+=([p]^{i})^{-1} \Spf R_0^+ $.
Let $R_i=R_i^+[\frac{1}{\varpi}]$,
then $R_i^+ $ is integrally closed in $R_i$.
Let
$R^+ $ be the $\varpi$-adic completion of $\bigcup\limits_{i=0}^\infty R_i^+$,
$ R =R^+[\frac{1}{\varpi}] .$
Let $ \Spf R_0'^+\subset \fA'$ be an affine open formal subscheme such that the restriction of \eqref{modpicond} to $ \Spf R_0^+\otimes K^\circ/\varpi$ is an isomorphism to $ \Spf R_0'^+ \otimes K^{\flat\circ}/\varpi^\flat$.
We similarly define $R_i'^+$, $R'^+$ and $R'$.
\begin{lem} [{\cite[Lemme A.16]{PB}}]\label{A.16}
The affinoid $ K^\flat $-algebra $(R',R'^+)$ is perfectoid. So is $(R,R^+)$.
Moreover, $(R',R'^+)$ is the tilt of $(R,R^+).$
\end{lem}
Thus
the adic generic fiber $\CA^\perf$ (resp. $\CA'^{\perf}$) of $\tilde\fA$ (resp. $\tilde\fA'$) is a perfectoid space.
Moreover, $ \CA'^{\perf}$ is the tilt of $ \CA^\perf$. Thus we use $ \CA^{\perf\flat}$ to denote $ \CA'^{\perf}$.
We call $\CA^\perf$ (resp. $\CA^{\perf\flat}$) the perfectoid universal cover
of $ \fA$ (resp. $ \fA'$).
By Lemma \ref{algpoints}, there are natural bijections
$$\tilde \fA(K^\circ)\simeq \CA^\perf(K,K^\circ),\ \tilde \fA'(K^\fcc)\simeq \CA^{\perf\flat}(K^\flat,K^\fcc).$$
Let $\CA$ (resp. $\CA' $) be the adic generic fiber of $\fA$ (resp. $\fA'$).
By Lemma \ref{algpoints}, we have natural bijections $$\fA (K^\circ)\simeq \CA(K,K^\circ),\ \fA' (K^\fcc)\simeq \CA'(K^\flat,K^\fcc).$$
\begin{defn} \label{adicgroup}
The group structures on $\CA(K, K^\circ)$, $\CA^\perf(K, K^\circ)$, $\CA'(K^\flat,K^\fcc)$, and $\CA^{\perf\flat}(K^\flat,K^\fcc)$ are defined to be the ones
induced from the natural bijections above.
\end{defn}
By the functoriality of taking adic generic fibers, we have morphisms $$\pi_n:\CA^\perf \to \CA,\ \pi_n':\CA^{\perf\flat} \to \CA' $$
for $n\in \BZ_{\geq 0},$ and
morphisms
$$[p]:\CA\to \CA, \ [p]:\CA'\to \CA'.$$
Consider
the following commutative diagram
\begin{equation}
\xymatrix{
\tilde\fA(K^\circ) \ar[r]^{\simeq } \ar[d]^{\simeq} &\vpl\limits_{[p]} \fA (K^{ \circ}) \ar[d]^{\simeq} \\
\CA^\perf(K,K^\circ) \ar[r]^{ } & \vpl\limits_{[p]} \CA(K,K^\circ)}\label{faca}
\end{equation}
where the bottom map is given by $\pi_n$'s. We immediately have the following lemma.
\begin{lem}\label{faca'}
The bottom map in \eqref{faca} is a group isomorphism.
\end{lem}
\begin{rmk} Indeed,
$\CA^\perf$ serves as certain ``limit" of the inverse system $\vpl \CA$
in the sense of \cite[Definition 2.4.1]{SW} by \cite[Proposition 2.4.2]{SW}. Then Lemma \ref{faca'} also follows from \cite[Proposition 2.4.5]{SW}.
\end{rmk}
Now we study torsion points in the inverse limit.
We set up some group-theoretic conventions once and for all.
Let $G$ be an abelian group. We denote by $G[n]$ the subgroup of elements of order dividing $n$ and
by $G_{\tor}$ the subgroup of torsion elements. For a prime $p$,
we use $G[p^\infty]$ to denote the subgroup of $p$-primary torsion points, and
$G_{p'-\tor}$ to denote the subgroup of prime-to-$p$ torsion points. If $H$ is a \textit{subset} of $G$, we use $H_\tor$ and $H_{p'-\tor}$ to denote the subsets $H\bigcap G_{\tor}$
and $H\bigcap G_{p'-\tor}$ when both $H$ and $G$ are clear from the context.
The following lemma is elementary.
\begin{lem}\label{gpth} Let $G$ be an abelian group, then
$$ ( \vpl\limits_{[p]} G) _{p'-\tor}\simeq\vpl\limits_{[p]} G _{p'-\tor}.$$
\end{lem}
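As an illustration (not needed later), take $G=\BQ/\BZ$. Then $G_{p'-\tor}=\bigoplus_{l\neq p}\BQ_l/\BZ_l$, on which multiplication by $p$ is bijective, so both sides of the isomorphism in Lemma \ref{gpth} are identified with $G_{p'-\tor}$ via the projection to the $0$-th term of the inverse system.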
\begin{lem} \label{4isoms} There are group isomorphisms $$ \CA^\perf(K,K^\circ)_{p'-\tor} \simeq \vpl\limits_{[p]} \CA(K,K^\circ)_{p'-\tor} \simeq \CA(K,K^\circ)_{p'-\tor} $$
where the second isomorphism is the restriction of $\pi_n$.
Similar result holds for $\CA'$ and $\CA^{\perf\flat}$.
\end{lem}
\begin{proof} The first isomorphism is from Lemma \ref{faca'} and \ref{gpth}.
Since $\CA(K,K^\circ)[n]\simeq \fA(K^\circ)[n]$ is a finite group,
$[p]$ is an isomorphism on $\CA(K,K^\circ)[n]$ for every natural number $n$ coprime to $p$. The second isomorphism follows.
\end{proof}
\begin{prop} \label{perfgroup}
The functorial bijection $$\rho=\rho_{ \CA^\perf}: \CA^\perf(K,K^\circ)\simeq \CA^{\perf\flat}(K^{ \flat },K^{ \flat\circ})$$ (see \ref{srho}) is a group isomorphism.
\end{prop}
\begin{proof}
We only show the
compatibility of $\rho$ with the multiplication maps, i.e. we show that the following diagram is commutative:
$$
\xymatrix{
\CA^\perf(K,K^\circ)\times \CA^\perf(K,K^\circ) \ar[r]^{\rho \times \rho } \ar[d]^{} & \CA^{\perf\flat} (K^\flat,K^\fcc)\times \CA^{\perf\flat} (K^\flat,K^\fcc) \ar[d]^{} \\
\CA^\perf(K,K^\circ)\ar[r]^{ \rho } & \CA^{\perf\flat} (K^\flat,K^\fcc) .}
$$
Here the vertical maps are the multiplication maps on corresponding groups.
Consider the formal abelian schemes $\fB=\fA\times \fA$ and $\fB'=\fA'\times \fA'$. We do the same construction to get their perfectoid universal covers $\CB^\perf$ and $\CB^{\perf\flat}$.
The multiplication morphism $ \fB\to \fA$ induces $m:\CB^\perf\to \CA^\perf$.
The multiplication morphism $ \fB'\to \fA'$ induces $m':\CB^{\perf\flat}\to \CA^{\perf\flat}$. By \eqref{modpicond} and \cite[Theorem 5.2]{Sch12}, $m'=m^\flat$.
By functoriality, we have a
commutative diagram
$$
\xymatrix{
\CB^\perf(K,K^\circ) \ar[r]^{\rho_{ \CB^\perf} } \ar[d]^{m} & \CB^{\perf\flat} (K^\flat,K^\fcc) \ar[d]^{m^\flat} \\
\CA^\perf(K,K^\circ)\ar[r]^{ \rho_{ \CA^\perf} } & \CA^{\perf\flat} (K^\flat,K^\fcc) .}
$$
We only need to show that this diagram can be identified with the diagram we want. For example we show that the top horizontal maps in the two diagrams coincide, i.e. a commutative diagram
$$
\xymatrix{
\CB^\perf(K,K^\circ) \ar[r]^{ \rho_{ \CB^\perf} } \ar[d]^{\simeq} &\CB^{\perf\flat} (K^\flat,K^\fcc) \ar[d]^{\simeq} \\
\CA^\perf(K,K^\circ)\times \CA^\perf(K,K^\circ)\ar[r]^{\rho \times \rho } & \CA^{\perf\flat} (K^\flat,K^\fcc)\times \CA^{\perf\flat} (K^\flat,K^\fcc) .}
$$
The projection $ \fB=\fA\times \fA\to \fA$ to the $i$-th component, $i=1,2$, induces
$p_i:\CB^\perf \to \CA^\perf $. It is easy to check that
$$p_1\times p_2:\CB^\perf(K,K^\circ)\to \CA^\perf(K,K^\circ)\times\CA^\perf(K,K^\circ)$$ is a group isomorphism by passing to formal schemes. Similarly we have an isomorphism $$p_1'\times p_2':\CB^{\perf\flat} (K^\flat,K^\fcc)\to \CA^{\perf\flat}(K^\flat,K^\fcc)\times \CA^{\perf\flat}(K^\flat,K^\fcc).$$
The commutativity follows from the fact that
$p_i'=p_i ^\flat$, which is from \eqref{modpicond} and \cite[Theorem 5.2]{Sch12}. \end{proof}
\subsection{Tilting and reduction}\label{Tilting and reduction}
Let $k=\bar \BF_p$ and let $W=W(k)$ be the ring of Witt vectors.
Let $A$ be an abelian scheme over $W$, $A_{K^\circ}$ be its base change to $K^\circ$, $\CA$ be the adic generic fiber of $A_{K^\circ}$.
Let $A_k$ be the special fiber of $A$, and
$ A'$ be the base change $A_k \otimes K^{\flat\circ}$ with adic generic fiber $\CA' $.
Since $$A_{K^\circ}\otimes (K^\circ/\varpi)\simeq A\otimes_Wk\otimes_k(K^\circ/\varpi)\simeq A'\otimes_{K^\fcc} (K^\fcc/\varpi^\flat),$$
we can apply the construction in Lemma \ref{A.16} to the formal completions of $A_{K^\circ}$ and $A'$.
Then we have the
perfectoid universal cover $\CA^\perf$ of the $\varpi$-adic formal completion of $A_{K^\circ}$, the
perfectoid universal cover $\CA^{\perf\flat}$ of the $\varpi^\flat$-adic formal completion of $A_{K^\fcc}$,
and the morphisms $\pi_n:\CA^\perf\to \CA$, $\pi_n':\CA^{\perf\flat}\to\CA'$ for each $n\in \BZ_{\geq0}$.
The following well-known results can be deduced from \cite{ST}.
\begin{lem}\label{NOS} (1) The inclusion
$A(W)\hookrightarrow A ( {K^\circ})$ gives an isomorphism
$A(W)_{p'-\tor}\simeq A ( {K^\circ})_{p'-\tor}.$
(2) The reduction map gives an isomorphism
$$\red: A(W)_{p'-\tor}\simeq A( k)_{p'-\tor}.$$
(3) The natural inclusion $A( k)\hookrightarrow A_{K^\fcc}(K^\fcc)$ gives an isomorphism $A( k)_{p'-\tor} \simeq A_{K^\fcc}(K^\fcc)_{p'-\tor} .$
\end{lem}
Now we relate reduction and tilting.
\begin{lem}\label{comdiag}
Let the unindexed maps in the following diagram be the natural ones:
$$
\xymatrix{
\CA ( K,K^\circ)_{p'-\tor} & \ar[l]_{ \pi_n } \CA^\perf (K, {K^\circ})_{p'-\tor}\ar[r]^{ \rho }
& \CA^{\perf\flat} ( K^\flat, K^\fcc )_{p'-\tor}\ar[r]^{\pi_n' } & \CA' ( K^\flat, K^\fcc )_{p'-\tor} \\
\ar[u] A_{K^\circ}( {K^\circ})_{p'-\tor} & \ar[l]_{ } A(W)_{p'-\tor}\ar[r]^{ \red }
&A( k)_{p'-\tor}\ar[r]^{ } & A_{K^\fcc}(K^\fcc)_{p'-\tor}\ar[u]
.}
$$
Then each map is a group isomorphism, and the diagram is commutative (up to inverting the arrows).
\end{lem}
\begin{proof}We may assume $n=0$.
Definition \ref{adicgroup}, Lemma \ref{4isoms}, Proposition \ref{perfgroup} and Lemma \ref{NOS} give the isomorphisms.
We only need to check the commutativity.
It suffices to check that the two maps from $A(W)_{p'-\tor}$ to $ \CA^{\perf\flat} ( K^\flat, K^\fcc )_{p'-\tor} $ coincide.
This follows from Lemma \ref{tiltred2}. \end{proof}
Similarly, we have the following commutative diagram:
\begin{equation}
\xymatrix{
\CA^\perf (K, {K^\circ}) \ar[r]^{ \rho } & \CA^{\perf\flat} ( K^\flat, K^\fcc ) \ar[rd]^{ \simeq} &
\\
\ar@{^{(}->}[u] ^{\iota} \vpl\limits_{[p]} A(W) \ar[r]^{\red } \ar[d] ^{ \pi_0 } &
\vpl\limits_{[p]} \ar[d] ^{\pi'_n } A( k) \ar@{^{(}->}[r] & \ar[d] ^{\pi_n'} \vpl\limits_{[p]} \CA' ( K^\flat, K^\fcc ) \\
\ar[d] ^{} \bigcap\limits_{i=0}^{\infty}p^iA(W) &A( k) \ar[d]^{[p^n]}\ar@{^{(}->}[r] &\ar[d]^{[p^n]} \CA' ( K^\flat, K^\fcc )\\
A(W) \ar[r]^{\red }&A( k) \ar@{^{(}->}[r] & \CA' ( K^\flat, K^\fcc )
.}
\label{commallpts1} \end{equation}
Here $\iota$ is induced from the inclusion $A(W)\hookrightarrow \CA^\perf(K,K^\circ) $
and the isomorphism $\CA^\perf(K,K^\circ) \simeq\vpl\limits_{[p]} \CA(K,K^\circ)$ (see Lemma \ref{faca'}).
Here and from now on we regard $\vpl\limits_{[p]} A(W)$ as a subset of $\CA^\perf (K, {K^\circ}) $ via $\iota$,
$A(k) $ as a subset of $ \CA' ( K^\flat, K^\fcc ) $, and
$\vpl\limits_{[p]} A(k) $ as a subset of $ \CA^{\perf \flat}( K^\flat, K^\fcc )$.
\section{Proof of Theorem \ref{MM}}\label{proof}
In this section, we first prove a lower bound on prime-to-$p$ torsion points in a subvariety, and then prove Theorem \ref{MM}. Let $k=\bar \BF_p$, $W=W(k)$ the ring of Witt vectors, and $L=W[\frac{1}{p}]$.
\subsection{Results of Poonen, Raynaud and Scanlon }\label{estimates}
\begin{thm}[Poonen \cite{Poo}]\label{BT}
Let $B$ be an abelian variety defined over $k$, and $V$ an irreducible closed subvariety of $B$. Let $S$ be a finite set of primes. Suppose that $V$ generates $B$.
Then the composition $$V(k)\hookrightarrow B(k)\xrightarrow{\bigoplus_{l\in S}\pr_{l}} \bigoplus_{l\in S}B[l^\infty]$$ is surjective, where $\pr_l$ is the projection to the $l$-primary component.
\end{thm}
Let $A$ be an abelian scheme over $W$.
Let $T=\bigcap\limits_{n=0}^{\infty}p^n(A(L)[p^\infty]),$ the maximal divisible subgroup of $A(L)[p^\infty]$.
Though not needed, as an illustration, we note that by \cite[Exemples 5.2.3]{Ray83b}, $T=0$ if the $p$-rank of $A_k$ is 0 or if $A$ is a ``general ordinary abelian variety", and $T=A(L)[p^\infty]\simeq (\BQ_p/\BZ_p)^{\dim A_L}$ if $A$ is the canonical lifting in Serre-Tate theory, see \ref{Classical Serre-Tate theory}.
\begin{lem} [{Raynaud \cite[Lemma 5.2.1]{Ray83b}}] \label{T''}
(1) Let $T_o$ be the subgroup of $A(\bar{L})[p^\infty]$ coming from the connected component of the $p$-divisible group of $A $,
then $T_o\bigcap T=0$.
(2) As a subgroup of $A( \bar L)[p^\infty]$,
$T$ is a $\Gal(\bar{L}/L)$-direct summand.
\end{lem}
Note that \begin{equation}\bigcap\limits_{n=0}^{\infty}p^n(A(W)_\tor)= A(W)_{p'-\tor}\bigoplus \bigcap\limits_{n=0}^{\infty}p^n(A(W)[p^\infty]);\label{shabi}\end{equation} indeed, $A(W)_\tor=A(W)_{p'-\tor}\bigoplus A(W)[p^\infty]$, and multiplication by $p$ is bijective on $A(W)_{p'-\tor}$, so that $p^n A(W)_{p'-\tor}=A(W)_{p'-\tor}$ for every $n$.
\begin{cor} \label{T'''}
The following reduction map is injective $$\red: \bigcap\limits_{n=0}^{\infty}p^n(A(W)_\tor) \to A(k).$$
\end{cor}
Let $Z \subset A_L$ be a closed subvariety.
\begin{lem} [{Raynaud \cite[8.2]{Ray83}}]\label{finalestimate} Let $T'$ be a $\Gal(\bar{L}/L)$-direct summand such that as $\Gal(\bar{L}/L)$-modules
$$ A(\bar{L})_{ \tor}= A(\bar{L})_{p'-\tor}\bigoplus T\bigoplus T' .$$
If $Z$ does not contain any translate of a nontrivial abelian subvariety of $A_L$,
there exists a positive integer $N$ such that
the order of the $T'$-component of every element in $Z(\bar{L})_\tor$ divides $p^N$.
\end{lem}
\begin{rmk}Lemma \ref{finalestimate} is used by Raynaud \cite{Ray83} to reduce the Manin-Mumford conjecture to a theorem (see \cite[Theorem 3.5.1]{Ray83}) obtained
by studying $p$-adic rigid analytic properties of universal vector extension
of an abelian variety.
\end{rmk}
Let $K$ and $K^\flat$ be the perfectoid fields in \ref{perftheory}.
Let $\CA$ be the adic generic fiber of $A_{K^\circ}$. Let $Z^{\mathrm{Zar}}$ be the Zariski closure of
$Z$ in $A$, and $\CZ$ the adic generic fiber of $Z^{\mathrm{Zar}}_{K^\circ}$.
For $\ep\in K^\times, $ let $\CZ_\ep$ be the $\ep$-neighborhood of $Z_K$ in $\CA$ as in Definition \ref {globalnbhd}.
By Lemma \ref{distep}, a result of Scanlon \cite{Sca0} on the Tate-Voloch conjecture implies the following lemma.
\begin{lem}[Scanlon \cite{Sca0}] \label{Scalem}There exists $\ep\in K^\times, $ such that $\CA(K,K^\circ)_{p'-\tor}\bigcap \CZ_\ep\subset \CZ.$
\end{lem}
\begin{rmk}
The proofs of Poonen's result and Scanlon's result are independent of Theorem \ref{MM}.
\end{rmk}
\subsection{A lower bound }\label{the proof of Proposition}
Define \begin{equation} \Lambda :=Z^{\mathrm{Zar}}(W)\bigcap \bigcap\limits_{n=0}^{\infty}p^n(A(W) _\tor),\
\Lambda_\infty :=\iota\left( \pi_0^{-1}(\Lambda)\right) ,\label{deflambda}\end{equation}
where $\pi_0$ and $\iota$ are as in the left column of diagram \eqref{commallpts1}.
Then $\rho( \Lambda_\infty)$ is contained in (the image of) $ \vpl\limits_{[p]} A( k) $ by diagram \eqref{commallpts1}. Now let $ \Lambda_n :=\pi_n'(\rho( \Lambda_\infty)) $. Then $\Lambda_n$ is contained in (the image of) $A( k)$.
Let $\Lambda^{\mathrm{Zar}}_n$ be the Zariski closure of $\Lambda _n$
in $A_{k}$.
\begin{prop} \label{techprop} There exists a positive integer $n$ such that
$$\pi_0\left(\rho^{-1}\left({\pi_n'}^{-1}\left(\Lambda _n^{\mathrm{Zar}} (k) _{p'-\tor} \right)\right )\right)\bigcap \CA ( K,K^\circ)_{p'-\tor} \subset \CZ .$$
\end{prop}
\begin{proof}
Let $\CU$ be a finite affine open cover of $A$ by affine open subschemes flat over $W$. Let $U\in \CU$. The restriction of $\CA^\perf$ over the adic generic fiber of $U_{K^\circ} $ is a perfectoid space $\CX=\Spa(R,R^+)$ whose tilt satisfies Assumption \ref{asmp4} (see Lemma \ref{A.16} and the discussion above it).
Let $\CI$ be the ideal sheaf of $Z^{\mathrm{Zar}}$.
Let $f\in \CI(U)$. Regard $f$ as in $R$.
By definition of $\Lambda _n$, we can apply
Proposition \ref{techprop2} to $f$ and $\Lambda _n$.
Varying $U$ in $\CU$ and varying $f$ in a finite set of generators of $\CI(U)$,
Proposition \ref{techprop2} implies that for every $\ep\in K^\times, $ there exists a positive integer $n$ such that
$$\pi_0\left(\rho^{-1}\left({\pi_n'}^{-1}\left(\Lambda _n^{\mathrm{Zar}} (k) \right)\right )\right)\subset \CZ_\ep .$$
Then Proposition \ref{techprop} follows from Lemma \ref{Scalem}.
\end{proof}
Our lower bound on the size of the set of prime-to-$p$ torsion points in $Z$ is as follows.
\begin{prop}\label{mainprop} Let $p>2$. Assume that $Z$ contains the unit $0\in A_L$.
(1) Assume $\Lambda$ is infinite. For every prime number $l\neq p$, the image of the composition of
\begin{equation} Z^{\mathrm{Zar}}(W) _{p'-\tor}\hookrightarrow A(W)_{p'-\tor}\xrightarrow{\pr_l} A(W)[l^\infty]
\label{intro3}
\end{equation}
contains a translate of a free $\BQ_l/\BZ_l$-submodule of $A(W)[l^\infty]
$ of rank at least $2$.
Here the map $\pr_l$ is the projection to the $l$-primary component.
(2) Assume that the image of the composition of \begin{equation*} \Lambda\hookrightarrow A(W)_{\tor}\xrightarrow{\pr_p} A(W)[p^\infty] \end{equation*}
contains a translate of a free $\BQ_p/\BZ_p$-submodule of rank $r$. For every prime number $l\neq p$, the image of the composition of \eqref{intro3}
contains a translate of a free $\BQ_l/\BZ_l$-submodule of $A(W)[l^\infty]
$ of rank $2r$.
\end{prop}
\begin{proof}
Fix a large $n$ such that \begin{equation} \pi_0\left(\rho^{-1}\left({\pi_n'}^{-1}\left( \Lambda _n^{\mathrm{Zar}} (k)_{p'-\tor} \right)\right )\right)\bigcap \CA ( K,K^\circ)_{p'-\tor} \subset \CZ ( K,K^\circ)_{p'-\tor}
\label{chang}
\end{equation} as in Proposition \ref{techprop}.
Let $ X $ be the image of the left hand side of \eqref{chang} via the
composition of \begin{equation*} \CA ( K,K^\circ)_{p'-\tor}
\simeq A(W)_{p'-\tor} \xrightarrow{\pr_l} A(W)[l^\infty].
\end{equation*} Then $X $ is contained in the image of the composition of \eqref{intro3}.
To prove (1), we only need to prove the following claim: $X$ contains a translate of a free $\BQ_l/\BZ_l$-submodule of $A(W)[l^\infty]
$ of rank at least $2$ for every $l$.
By diagram \eqref{commallpts1},
we have $\Lambda _0= \red ( \Lambda) .$
Since $p>2$, by Corollary \ref{T'''}, $\Lambda_0$ is infinite. Since $\Lambda_0=[p]^n(\Lambda_n)$,
$\Lambda_n$ is infinite. There exists $a\in A(k)$ such that an irreducible component of $ \Lambda _n^{\mathrm{Zar}} +a$
is contained in and
generates a nontrivial abelian subvariety $A'$ of $A_k$. Since $Z$ contains the unit $0\in A_L$, $ \Lambda _n^{\mathrm{Zar}} $ contains the unit $0\in A_k$ and $A'$ contains $a$.
Let $a_p$ be the $p$-primary part of $a$ and $a_{p'}=a-a_p$.
By Theorem \ref{BT} (for $ \Lambda _n^{\mathrm{Zar}} +a\subset A'$ and $S=\{p,l\}$),
the image of \begin{equation} \Lambda _n^{\mathrm{Zar}} (k) +a\xrightarrow{\pr_l\bigoplus \pr_p}A (k)[l^\infty]\bigoplus A (k)[p^\infty]\label{zhejuhua}\end{equation}
contains $M\bigoplus \{a_p\}$, where
$M$ is
a free $\BQ_l/\BZ_l$-submodule of $A(k)[l^\infty]$ of rank at least $2$.
Thus $$\left(\pr_l\bigoplus \pr_p\right)\left ( \Lambda _n^{\mathrm{Zar}} (k) +a_{p'}\right)\supset M\bigoplus\{ 0\}.$$
We claim: $$ \pr_l\left (\left ( \Lambda _n^{\mathrm{Zar}} (k) +a _{p'}\right)_{p'-\tor}\right)\supset M .$$
Indeed, write $b\in \Lambda _n^{\mathrm{Zar}} (k) +a_{p'}$ as the sum $b_p+b_{p'}$ of $p$-primary part and prime-to-$p$ part. Then
$\left(\pr_l\bigoplus \pr_p\right) b=\pr_l(b_{p'})+b_p$. If this is $x+0\in M\bigoplus\{ 0\}$, then $b_p=0$, and $b=b_{p'}$.
Thus $ \pr_l(b)=x\in M$. The claim is proved.
By the claim, $$ M-\pr_l (a_{p'})\subset Y := \pr_l\left( \Lambda _n^{\mathrm{Zar}} (k) _{p'-\tor}\right) . $$
By Lemma \ref{comdiag},
$X $
contains the preimage of $[p]^n(Y )$ under the isomorphism $\red: A(W)_{p'-\tor}\simeq A( k)_{p'-\tor}.$
Thus we proved the claim above.
To prove (2), we only need to prove the following claim: $X $ contains a translate of a free $\BQ_l/\BZ_l$-submodule of $A(W)[l^\infty]
$ of rank at least $2r$ for every $l$.
By diagram \eqref{commallpts1},
we have $\Lambda _0= \red ( \Lambda) .$ Since $p>2$, by Corollary \ref{T'''}
and the assumption on $\Lambda$,
$\pr_p(\Lambda_0)$ contains a translate of a free $\BQ_p/\BZ_p$-submodule of rank $r$.
Let $V_1,...,V_m$ be the irreducible components of
$\Lambda_0^{\mathrm{Zar}}$.
Let $A_i$ be the minimal abelian subvariety of $A_k$ such that a certain translate of $A_i$ contains $V_i$.
Since
the $p$-rank of $A_i$ is at most its dimension, at least one $A_i$ is
of dimension at least $r$.
Since $\Lambda_0=[p]^n(\Lambda_n)$,
there exists $a\in A(k)$ such that an irreducible component of $ \Lambda _n^{\mathrm{Zar}} +a$ generates an abelian subvariety of $A_k$ of dimension at least $r$.
Then we prove (2) by copying the proof of (1) above, starting from the sentence containing \eqref{zhejuhua}.
The only modification needed is that the rank of $M$ should be at least $2r$.
\end{proof}
\subsection{The proof of Theorem \ref{MM}}\label{proofcurve}
Now we prove Theorem \ref{MM}. By the argument in \cite{PZ}, we only need to prove the following weaker theorem.
We reserve the symbol $A $ for the proof.
\begin{thm}\label{curveMM1} Let $F$ be a number field. Let
$B$ be an abelian variety over $F$ and $V $ a closed subvariety of $B$.
If
$V$ does not contain any translate of an abelian subvariety of $B$ of positive dimension, then $V$ contains only finitely many torsion points of $B$.
\end{thm}
\begin{proof}
We only need to prove the theorem up to replacing $V$ by a multiple.
Let $v$ be a place of $F$ unramified over a prime number $p>2$
such that $B$ has good reduction at $v$. Let $A$ be the base change to $W $ of the integral smooth model of $B$ over $\CO_{F_v}$.
Let $Z=V_L\subset A_L$.
By \eqref{shabi} and Lemma \ref{finalestimate}, up to replacing $V$ by $[p^N]V $ for $N$ large enough, we may assume that $Z^{\mathrm{Zar}}(W)_\tor\subset \bigcap\limits_{n=0}^{\infty}p^n(A(W)_{\tor}).$ Thus $\Lambda=Z^{\mathrm{Zar}}(W)_\tor$, where $\Lambda$ is defined as in \eqref{deflambda}. Suppose that $V$ contains infinitely many torsion points. Then $\Lambda$ is infinite. Up to replacing $V$ by $[p^N]V $, we may assume that $Z$ contains the unit $0\in A_L$.
Now we want to find a contradiction.
By Proposition \ref{mainprop} (1),
for every prime number $l\neq p,$ the composition of \begin{equation*} Z(L)_{p'-\tor}\hookrightarrow A(L)_{p'-\tor}\xrightarrow{\pr_l} A(L)[l^\infty]
\end{equation*}
contains a translate of a free $\BQ_l/\BZ_l$-submodule of $A(L)[l^\infty]
$ of rank $2$.
Let $u$ be another place of $F$, unramified over an odd prime number $ l\neq p$,
such that $B$ has good reduction at $u$. Let $B_o$ be the reduction. Let $M$ be the completion of the maximal unramified extension of $F_u$ and $\bar M$ its algebraic closure.
Then the composition \begin{equation*} V(\bar M)_\tor \hookrightarrow B(\bar M)_\tor \xrightarrow{\pr_l} B(\bar M)[l^\infty]
\end{equation*}
contains a translate $G$ of a free $\BQ_l/\BZ_l$-submodule of $B(\bar M)[l^\infty]
$ of rank $2$.
Let $T=\bigcap\limits_{n=0}^{\infty}l^n(B(M)[l^\infty]).$
By Lemma \ref{finalestimate} (applied to $l$, $M$ instead of $p$, $L$), up to replacing $V$ by $[l^N]V $ for $N$ large enough,
$G $ is contained in $T$.
By \eqref{shabi} (applied to $l$, $M^\circ$ instead of $p$, $W$), the image of the composition of
\begin{equation*} V( M) \bigcap \bigcap\limits_{n=0}^{\infty}l^n(B(M)_\tor) \hookrightarrow B(M) \xrightarrow{\pr_l} B(M)[l^\infty]
\end{equation*} contains $G$.
By Proposition \ref{mainprop} (2) (applied to $l$, $M^\circ$ instead of $p$, $W$),
for every prime number $q\neq l$, the composition \begin{equation*} V( M)_{l'-\tor}\hookrightarrow B( M)_{l'-\tor}\xrightarrow{\pr_q} B( M)[q^\infty]
\end{equation*}
contains a translate of a free $\BQ_q/\BZ_q$-submodule of rank $4$. Repeating this process (use more places or only work at $v$ and $u$), we get a contradiction as $A$ is of finite dimension.
\end{proof}
\section{Ordinary perfectoid Siegel space and Serre-Tate theory}
\label{The ordinary perfectoid Siegel space and Serre-Tate theory}
Let $\BA_f$ be the ring of finite adeles of $\BQ$, $U^p\subset \GSp_{2g}(\BA_f^p)$ an open compact subgroup contained in the congruence subgroup of level-$N$ for some $N\geq 3$ prime to $p$.
Let $X=X_{g,U^p} $ over $\BZ_p$ be
the
Siegel moduli space of principally polarized $g$-dimensional abelian
varieties over $\BZ_p$-schemes with level-$U^p$ structure. Let $X_o$ be the special fiber of $X$.
We will use the perfectoid fields defined in \ref{perftheory}. We briefly recall some notation. Let $k=\bar \BF_p$, $W=W(k)$ the ring of Witt vectors, $L$ the fraction field of $W$, and $L^{\mathrm{cycl}}$ the field extension of $L$ obtained by adjoining all $p$-power roots of unity. Let $K$ be
the $p$-adic completion of $L^{\mathrm{cycl}}$, which is a perfectoid field. Then $K^\flat= k((t^{1/p^\infty})) $ is the tilt of $K$.
Fix a primitive $p^n$-th root of unity $\mu_{p^n}$ for every positive integer $n$
such that $\mu_{p^{n+1}}^p=\mu_{p^n}$.
\subsection{Ordinary perfectoid Siegel space}
\label{Ordinary perfectoid Siegel space}
Let $X_o(0) \subset X_o$ be the ordinary locus. Let $\fX(0)$ over $\BZ_p$ be
the open formal subscheme of the formal completion of $X$ along $X_o$ defined by the condition that every local lifting of the Hasse invariant is invertible (see \cite[Definition 3.2.12, Lemma 3.2.13]{Sch13}).
Then $\fX(0)/p= X_o(0)$ (see \cite[Lemma 3.2.5]{Sch13}).
Let $ \widehat{X_o(0)_{K^\fcc}}$ be the $\varpi^\flat$-adic formal completion of $X_o(0)_{K^\fcc}$.
Let $\CX(0) $ and $\CX'(0) $ be the adic generic fibers of $\fX(0)_{K^\circ}$
and $\widehat{X_o(0)_{K^\fcc}}$ respectively.
Let $\Fr:X_o(0)\to X_o(0)$ be the (relative)
Frobenius morphism (note that $X_o(0)$ is defined over $\BF_p$).
Let $ \Fr^\can:\fX(0)\to\fX(0)$ be given by the functor sending an abelian scheme $A$ to its quotient by the connected subgroup scheme of $A[p]$.
Then $ \Fr^\can/p=\Fr$. We also use $ \Fr^\can$ and $\Fr$ to denote their base changes to $K^\circ$ and $K^\fcc$ respectively.
Let $$ \tilde \fX(0):=\vpl_{\Fr^\can}\fX(0)_{K^\circ},\ \tilde \fX'(0):=\vpl_{\Fr} \widehat{X_o(0)_{K^\fcc}},$$
where the inverse limits are taken in the categories of $\varpi$-adic and $\varpi^\flat$-adic formal schemes respectively.
Here
$\varpi=\mu_p-1$ and $\varpi^\flat=t^{1/p}$.
By \cite[Corollary 3.2.19]{Sch13}, the corresponding adic generic fibers $\CX(0)^\perf$ and $\CX'(0)^\perf$ of $ \tilde \fX(0)$ and $ \tilde \fX'(0)$ are perfectoid spaces. Moreover, $\CX'(0)^\perf=\CX(0)^{\perf,\flat}$, the tilt of $\CX(0)^\perf$.
Then we have the natural projections
\begin{equation}
\pi:\CX(0)^\perf\to \CX(0),\ \pi':\CX(0)^{\perf,\flat}\to \CX'(0)\label{natpi}.\end{equation}
We also have a natural map between the underlying sets defined in \ref{srho}\begin{equation} \rho_{\CX(0)^\perf}:|\CX(0)^\perf|\to|\CX(0)^{\perf,\flat}|.\label{natrho}.\end{equation}
(The map $\rho_{\CX(0)^\perf}$ is in fact a homeomorphism, but we do not need this fact.)
\subsection{Classical Serre-Tate theory} \label{Classical Serre-Tate theory}
We use the adjective ``classical" to indicate the Serre-Tate theory \cite{Kat} discussed in this subsection, compared with Chai's global Serre-Tate theory to be discussed in \ref{GST}.
Let $R$ be an Artinian local ring with maximal ideal $\fm$ and residue field $k$.
Let $ A/\Spec R$ be an abelian scheme with ordinary special fiber $A_k$. Let $A_k^\vee$ be the dual abelian variety of $A_k$.
There is a $\BZ_p$-module
morphism from the tensor product of Tate modules $ T_p A_{k}\otimes T_p A_k^\vee$ to $1+\fm$
constructed by Katz \cite{Kat}. We call this morphism the classical Serre-Tate coordinate system for $A/\Spec R$.
If
$ A/\Spec R$ is moreover a principally polarized abelian scheme,
the Serre-Tate coordinate system for $A/\Spec R$ is a $\BZ_p$-module
morphism
\begin{equation} q_{A/\Spec R} : \Sym^2(T_p A_{k}) \to 1+\fm.\label{60}\end{equation}
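As an illustration, suppose $g=1$ and let $\xi$ be a $\BZ_p$-basis of $T_p A_{k}$. Then $\Sym^2(T_p A_{k})$ is free of rank one with basis $\xi\otimes\xi$, so $q_{A/\Spec R}$ is determined by the single unit $q_{A/\Spec R}(\xi\otimes\xi)\in 1+\fm$; this is the classical Serre-Tate parameter of the ordinary elliptic curve $A/\Spec R$.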
Let $x\in X_o(0)(k)$, and let $A_x $ be the corresponding principally polarized abelian variety.
Let $\fM_x$ be the formal completion of $X$ at $x$, and $\fA/\fM_x$ the formal universal deformation of $A_x$.
Then as part of the construction of $q_{A/\Spec R}$, there is an isomorphism of formal schemes over $W$:
\begin{equation}\fM_x\simeq \Hom_{\BZ_p}( \Sym^2(T_p A_{x}),\hat\BG_m)\label{stiso},\end{equation}
where $\hat\BG_m$ is the formal completion of the multiplicative group scheme over $W$ along the unit section.
In particular,
$\fM_x$ has a formal torus structure.
Moreover, if $A_k\simeq A_x$ in \eqref{60}, then \eqref{60} is the value of \eqref{stiso} at the morphism $\Spec R\to \fM_x$ induced by $A$. Let $\CO(\fM_x)$ be the coordinate ring of $\fM_x$, and let $\fm_x$ be the maximal ideal of $\CO(\fM_x)$.
From \eqref{stiso}, we have a morphism of $\BZ_p$-modules: $$ q=q_{ \fA/\fM_x}:\Sym^2(T_p A_{x}) \to 1+\fm_x.$$
Fix a basis $\xi_1,...,\xi_{g(g+1)/2}$ of $\Sym^2(T_p A_{x})$.
\begin{prop}[{\cite[3.2]{dJN}}]\label{dJN3.2} Let $F$ be a finite extension of $L$ with ring of integers $F^\circ$.
Let $y^\circ\in X(F^\circ)$ with generic fiber $y$.
Suppose that $y^\circ\in \fM_x(F^\circ)$.
Then $y$ is a CM point if and only if $q(\xi_i)(y^\circ)$ is a $p$-primary root of unity for $i=1,...,g(g+1)/2$.
\end{prop}
Thus every ordinary CM point is contained in $X(L^{\mathrm{cycl}})$. For an ordinary CM point $y\in X(L^{\mathrm{cycl}})$,
there is a unique $y^\circ\in X (K^\circ)$ whose generic fiber is $y_K\in X(K)$. We regard $y^\circ$ as a point in $\fX(0)(K^\circ)$ and $y_K$ as a point in $\CX(K,K^\circ)$ via Lemma \ref{algpoints}.
\begin{defn}\label{generator}
Let $a =(a ^{(1)},...,a^{(g(g+1)/2)})\in\BZ_{\geq 0}^{g(g+1)/2}$.
(1) An ordinary CM point $y\in X(L^{\mathrm{cycl}})$ with reduction $x$
is called of order $p^a$ w.r.t. the basis $\xi_1,...,\xi_{g(g+1)/2}$
if
$q(\xi_i)(y^\circ) $ is a primitive $p^{a^{(i)}}$-th root of unity for each $i=1,...,g(g+1)/2$.
If moreover $q(\xi_i)(y^\circ)=\mu_{p^{a^{(i)}}}$, $y$
is called a $\mu$-generator w.r.t. the basis $\xi_1,...,\xi_{g(g+1)/2}$.
(2) Assume that $a$ is non-increasing so that $q(\xi_{i+1})(y^\circ)$ is an $r^{(i)}$-th power of $q(\xi_{i})(y^\circ)$ for
some (non-unique) $r^{(i)}\in \BZ_p$, $i=1,\dots,g(g+1)/2-1$. We call $(r^{(1)},\dots,r^{( {g(g+1)/2-1})})\in \BZ_p^{g(g+1)/2-1}$ a ratio of $y$ w.r.t. the basis $\xi_1,...,\xi_{g(g+1)/2}$.
\end{defn}
It is clear that if $a$ is non-increasing, then the usual $p$-adic absolute value $|r^{(i)}|_p=p^{a ^{(i+1)}-a ^{(i)}}$.
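As a numerical illustration, if $a^{(1)}=3$ and $a^{(2)}=1$, then $q(\xi_1)(y^\circ)$ is a primitive $p^3$-th root of unity and $q(\xi_2)(y^\circ)$ is a primitive $p$-th root of unity, so any $r^{(1)}\in\BZ_p$ with $q(\xi_2)(y^\circ)=q(\xi_1)(y^\circ)^{r^{(1)}}$ lies in $p^2\BZ_p^\times$; hence $|r^{(1)}|_p=p^{-2}=p^{a^{(2)}-a^{(1)}}$.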
Let $T_i =q(\xi_i)-1\in \fm_x.$ Then we have an isomorphism \begin{equation}\CO(\fM_x)\simeq W[[T_1,...,T_{g(g+1)/2}]]\label{6.1}.\end{equation}
Let $ \widehat {X_o(0)}_{/x}$ be the formal completion of $X_o(0)$ at $x$.
Restricted to $ \widehat {X_o(0)}_{/x}$, \eqref{6.1} gives
an isomorphism \begin{equation}\CO(\widehat {X_o(0)}_{/x})\simeq k[[T_1,...,T_{g(g+1)/2}]].\label{6.3}\end{equation}
Let $U\to X_o(0)$ be an \'etale morphism, $z\in U(k)$ with image $x$.
Then
\eqref{6.3} gives an isomorphism \begin{equation}\CO(\widehat {U}_{/z})\simeq k[[T_1,...,T_{g(g+1)/2}]]. \label{63}\end{equation}
Let $A_z$ be the pullback of $A_x$ at $z$. Then we naturally have $T_p A_z\simeq T_p A_x$. Thus we also regard $\xi_1,...,\xi_{g(g+1)/2}$ as a basis of $\Sym^2(T_p A_{z})$.
\begin{defn}\label{realization'}
We call \eqref{63} the realization of the classical Serre-Tate coordinate system of
$ \widehat {U}_{/z}$ at the basis $\xi_1,...,\xi_{g(g+1)/2}$ of $\Sym^2(T_p A_{z})$.
\end{defn}
We have another description of \eqref{63}. Let $I_n$ be a descending sequence of open ideals of $\CO(\widehat {U}_{/z})$ defining the topology of $\CO(\widehat {U}_{/z})$. Let $R_n:=\CO(\widehat {U}_{/z})/I_n$,
let $A_n $ be the pullback of the formal universal principally polarized abelian scheme over $\fM_x$ to $\Spec R_n$
with special fiber $A_z$. Let $$q_{A _n/\Spec R_n}: \Sym^2(T_p A^\univ_x)\to R_n^\times $$ be the classical Serre-Tate coordinate system of $A _n/R_n$.
Then $ q_{A_n/\Spec R_n}(\xi_i)-1=T_i(\mod I_n)$.
Thus the sequence $\{ q_{A_n/\Spec R_n}(\xi_i)-1\}_n $ gives an element in $ \CO(\widehat {U}_{/z})\simeq \vpl_n R_n,$
which equals $T_i$.
\subsection{Tilts of ordinary CM points}\label{Tilting canonical liftings}
Let $ \widehat {X_o(0)_{K^{\fcc}}}_{/x}$ be the formal completion of ${X_o(0)_{K^{\fcc}}}$ at $x$.
By \eqref{6.3}, we have
\begin{equation}\CO\left (\widehat {X_o(0)_{K^{\fcc}}}_{/x}\right)\simeq K^{\fcc}[[T_1,...,T_{g(g+1)/2}]].\label{cDx}\end{equation}
Let $\cD_x$ be the adic generic fiber of $\widehat {X_o(0)_{K^{\fcc}}}_{/x}$.
Then $\cD_x$ is an adic subspace of $\CX'(0)$ in the sense of Definition \ref{adicdef}.
Moreover, \eqref{cDx} and Lemma \ref {algpoints} imply an isomorphism \begin{equation}\cD_x(K^\flat, K^\fcc)\simeq K^{\flat{\circ\circ},g(g+1)/2}. \label{cdreal}\end{equation}
\begin{lem}\label{6.2.3}
Let $y\in X(L ^{\mathrm{cycl}})$ be an ordinary CM point with reduction $x$.
(1) For every $\tilde y\in \pi^{-1}(y_K)\subset \CX(0)^\perf$, we have $$\pi'\circ \rho_{\CX(0)^\perf}(\tilde y)\in \cD_x .$$
(2) Let $a =(a ^{(1)},...,a^{(g(g+1)/2)})\in\BZ_{\geq 0}^{g(g+1)/2}$ and $I\subset \{1,2,...,g(g+1)/2\}$ the subset of $i$'s such that $a^{(i)}=0$. Let $y$ be a $\mu$-generator of order $p^a$ w.r.t. the basis $\xi_1,...,\xi_{g(g+1)/2}$ (see Definition \ref{generator}). There exists $\tilde y\in \pi^{-1}(y_K)$ such that via the isomorphism \eqref{cdreal},
the $i$-th coordinate of $\pi'\circ \rho_{\CX(0)^\perf}(\tilde y) $ is 0 for $i\in I$ and is $ t^{1/p^{a^{(i)}}}$ for $i\not\in I$.
\end{lem}
\begin{proof}
We recall the effect of $\Fr^\can$ on $\fM_x$ (see \cite[4.1]{Kat}).
Denote $\fM_{x}$ by $\fM_{A_x}$.
Let $\sigma\in \Aut(k)$ be the Frobenius. Let $A_x^{(\sigma)}:= A_x \otimes_{k,\sigma} k$ be the base change by $\sigma$.
Then $\Fr^\can $, restricted to $\fM_{A_x}$ gives a morphism $\Fr^\can: \fM_{A_x}\to \fM_{A_x^{(\sigma)}}$ over $W$ \cite[p 171]{Kat}.
Let $\sigma(\xi_1),...,\sigma(\xi_{g(g+1)/2})$ be the induced basis
of $\Sym^2(T_p A_{x}^{(\sigma)}[p^\infty])$.
Then
\cite[Lemma 4.1.2]{Kat} implies that \begin{equation} \Fr^{\can,*}(q (\sigma(\xi_i)))=q (\xi_i)^p.\label{Qp}\end{equation}
We associate a perfectoid space to $\fM_x$.
Let
\begin{equation*}\tilde\fM_x:=\vpl\limits_{\Fr^\can} \fM_{ A_x^{(\sigma^{-n})}} .\end{equation*}
By a similar (and easier) proof as the one for \cite[Corollary 3.2.19]{Sch13}, the adic generic fiber $\CM_x^\perf$ of $\tilde\fM_{x,K^\circ}$
is a perfectoid space.
Moreover, let $\CM_x'^\perf$ be the adic generic fiber of $\vpl\limits_{\Fr } \widehat {X_o(0)_{K^{\fcc}}}_{/x}$.
Then $\CM_x'^\perf$ is the tilt of $ \CM_x^\perf$.
By Lemma \ref{tiltmor}, the tilting process commutes with restriction to an open subspace. Thus to prove Lemma \ref{6.2.3}, we only need to consider the tilting between $ \CM_x^\perf$ and $\CM_x'^\perf$.
Then Lemma \ref{6.2.3} follows from the cases $c=0$ and $c=1$ of Lemma \ref{2316} (which deals with closed unit discs while here we are dealing with open unit discs, so that we apply \ref{tiltmor} again).
\end{proof}
\subsection{Global Serre-Tate theory}\label{GST}
\subsubsection{The algebraic and geometric formulations}
Now we review Chai's globalization of Serre-Tate coordinate system in characteristic $p$ \cite{Cha}.
Let $U$ be a $\BF_p$-scheme.
Let $A/U$ be an abelian scheme whose relative dimensions on connected components of $U$ are the same.
Define
$$\nu_U=\vpl _{n}\Coker([p^n]:\BG_m\to \BG_m),$$
which is a $\BZ_p$-sheaf on $U_{\mathrm{et}}$.
\begin{eg}\label{GSTeg}
(1) Let $m\geq n$ be positive integers, and $U_0=\Spec k[T]/T^{p^n}$. Then the $p^m$-th power of an element in $(k[T]/T^{p^n})^\times$ with constant term $b$ is $b^{p^m} $, since $(b+Th)^{p^m}=b^{p^m}+T^{p^m}h^{p^m}=b^{p^m}$ in characteristic $p$.
Thus
\begin{equation}\nu_{U_0}(U_0)=(k[T]/T^{p^n})^\times/k^\times\simeq 1+T(k[T]/T^{p^n}).\label{phix}\end{equation}
(2) Let $B$ be an $\BF_p$-algebra, $U=\Spec B $ and
$U'=\Spec B[T]/T^{p^n}$.
For $m\geq n$, consider the map $$B^\times/(B^\times)^{p^m}\bigoplus
(1+TB[T]/T^{p^n})\to (B[T]/T^{p^n})^\times/((B[T]/T^{p^n})^\times)^{p^m}$$
defined by
$(a,f)\mapsto af.$
It is easy to check that this is a group isomorphism.
In particular, \begin {equation}\nu_{U'}(U')\simeq\nu_{U}(U)\bigoplus
(1+TB[T]/T^{p^n}) .\label{phixB}\end{equation}
(3) For every $z\in U(k) $, $\{z\}\times _{U}U' \simeq U_0$.
Then the restriction of the isomorphism \eqref{phixB} at $z$ is the isomorphism \eqref{phix}.
\end{eg}
Suppose $A/U$ is ordinary.
Let $T_p A[p^\infty]^{\mathrm{et}}$ be the Tate module attached to the maximal \'etale quotient of the $p$-divisible group $A[p^\infty]$.
The global Serre-Tate coordinate system for $A/U$ is a homomorphism of $\BZ_p$-sheaves
$$ q_{A/U}:T_p A[p^\infty]^{\mathrm{et}}\otimes T_p A^\vee[p^\infty]^{\mathrm{et}}\to \nu_U$$
constructed by Chai \cite[2.5]{Cha}.
Let $U_0=\Spec k[T]/T^{p^n}$.
Let $A/U_0$ be an ordinary abelian scheme, and $A_k$ the special fiber of $A$.
Then
\begin{equation} T_p A[p^\infty]^{\mathrm{et}}\otimes T_p A^\vee[p^\infty]^{\mathrm{et}} \simeq T_p A_k[p^\infty]\otimes T_p A_k^\vee[p^\infty],\label{612}\end{equation}
where the right hand side is regarded as a constant sheaf.
\begin{lem}[{\cite[(2.5.1)]{Cha}}]\label{GSTeg'} The morphism of $\BZ_p$-modules $$T_p A_k[p^\infty]\otimes T_p A_k^\vee[p^\infty]\to \nu_U(U)\simeq 1+T(k[T]/T^{p^n})$$ induced from $q_{A/U_0} $ via \eqref{612} coincides with the classical Serre-Tate coordinate system (see \eqref{60}).
\end{lem}
The geometric formulation of global Serre-Tate coordinate system is as follows.
Let $A^{\univ}$ be the universal principally polarized abelian scheme over $X_o(0)$, and $\hat A^{\univ}$ the formal completion of $A^{\univ}$ along the zero section, which is a formal torus over $X_o(0)$.
Then the sheaf of polarization-preserving $\BZ_p$-homomorphisms
between $T_p A^{\univ}[p^\infty]^{\mathrm{et}}$ and $\hat A^{\univ}$ is a formal torus over $ X_o(0)$ of dimension $\frac{g(g+1)}{2}$. Let us call it $\fT_1$. Let $\Delta$ be the diagonal embedding of $X_o(0)$
into $X_o(0)\times X_o(0)$, and let
$\fT_2$ be the formal completion of $X_o(0)\times X_o(0)$ along this embedding.
\begin{prop}[{\cite[Proposition 5.4]{Cha}}] \label{geogl}
There is a canonical isomorphism $\fT_1\simeq \fT_2.$
In particular, $\fT_2$ has a formal torus structure over the first $X_o(0) $.
\end{prop}
\subsubsection{Igusa tower}
In order to have sections of the \'etale $\BZ_p$-sheaf $T_p A[p^\infty]^{\mathrm{et}}$ over $U$,
or equivalently to trivialize the formal torus, we need to pass to the Igusa tower, defined as follows.
For $n=0,1,...,\infty$, let $\fI_n$ be the functor assigning to every $k$-algebra $R$ the set of isomorphism classes of pairs
$$\{(A,\vep):A\in X_o(0)(R),\ \vep:A[p^n]\simeq \hat\BG_{m,R}^g[p^n]\}.$$
By \cite[8.1.1]{Hid}, for $n<\infty$ (resp. $n=\infty$)
the functor $\fI_n$ is represented by a $k$-scheme (which we still denote by $\fI_n$) finite (resp. profinite) Galois over
$X_o(0)$ with Galois group $\GL_g(\BZ/p^n\BZ)$ (resp. $\GL_g(\BZ_p)$).
And $\fI_n$ is known as the Igusa scheme of level $n$.
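For instance, for $g=1$ and $n\geq 1$, the scheme $\fI_n$ is finite Galois over $X_o(0)$ with Galois group $\GL_1(\BZ/p^n\BZ)=(\BZ/p^n\BZ)^\times$, hence of degree $p^{n-1}(p-1)$.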
\subsubsection{Realization of the global Serre-Tate coordinate system at a basis}\label{GSTapp}
Let $U_0 $ be an affine open subscheme of $X_o(0)$. Let $U=\Spec B:=\fI_\infty|_{U_0}$ . Let $\Delta$ be the diagonal of $U\times U$.
We have two projection maps $\pr_1,\pr_2$ from $\widehat{U\times U}_{/\Delta }$ to the first and second $ U$.
For $z\in U(k)$, the restriction of $\pr_2$ induces \begin{equation}\pr_1^{-1}(\{z\})\overset{\pr_2}\simeq \widehat{ U}_{/z} \label{633}.
\end{equation}
Let $\CO\left(\widehat{U\times U}_{/\Delta }\right)$ be the coordinate ring of $\widehat{U\times U}_{/\Delta }$.
We endow $\CO\left(\widehat{U\times U}_{/\Delta }\right)$ with a $B$-algebra structure via $\pr_1$.
By Proposition \ref{geogl},
we have a (non-unique) $B$-algebra isomorphism \begin{equation}\CO\left(\widehat{U\times U}_{/\Delta }\right)\simeq B[[T_1,...,T_{ g(g+1)/2}]].\label{BTT}\end{equation}
Let
$ A/\widehat{U\times U}_{/\Delta }$ be the pullback of $A^{\univ}|_{U_0}$.
Assume that
$\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$ is a free $\BZ_p$-module of rank $ g(g+1)/2$. Let $ \xi_1 ,..., \xi_{g(g+1)/2} $ be a basis of $\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$ (whose existence follows from the definition of $\fI_\infty$ and the polarization).
The realization of the global Serre-Tate coordinate system of $ A/\widehat{U\times U}_{/\Delta }$ at the basis $ \xi_1 ,..., \xi_{g(g+1)/2} $ is a construction of an isomorphism \eqref{BTT} as follows.
For simplicity of notation, let us assume $g=1$; the general case can be dealt with in the same way.
Let $\xi=\xi_1$ and $T=T_1$.
Let $$U'=\widehat{U\times U}_{/\Delta } /T^{p^n} \simeq \Spec B[[T]]/T^{p^n} $$ and let $A_n $ be the restriction of $A$ to $U'$. The global Serre-Tate coordinate system of $A_n/U' $ is a homomorphism of $\BZ_p$-sheaves over $U'_{\mathrm{et}}$
$$q_{A_n/U'}: \Sym^2(T_p A_n [p^\infty]^{\mathrm{et}}) \to \nu_{U'} .$$
Note that
$\xi$ gives a basis
$\xi_n$ of
$\Sym^2(T_p A_n [p^\infty]^{\mathrm{et}}) (U')$.
Then we have
$$q_{A_n/U'}(\xi_n)\in \nu_{U'}(U')\simeq\nu_{U}(U)\bigoplus
(1+TB[T]/T^{p^n}) $$
where the second isomorphism is \eqref{phixB}.
Consider the morphism $$\phi_n: \nu_{U'}(U')\to(1+TB[T]/T^{p^n})\hookrightarrow B[T]/T^{p^n}$$
where the first map is the projection and the second map is the natural inclusion.
Let $T_n^{\mathrm{ST}}\in B[T]/T^{p^n}$ be $ \phi_n(q_{A_n/U'}(\xi_n))-1 $.
As $n$ varies, the $T_n^{\mathrm{ST}} $'s give an element $$T^{\mathrm{ST}}\in \CO\left(\widehat{U\times U}_{/\Delta }\right)\simeq \vpl_n B [T]/T^{p^n}.$$
We compare the above construction with the realization of the classical Serre-Tate coordinate system.
Let $z\in U(k)$.
The restriction of $A$ to $\pr_1^{-1}(\{z\})$ is the pullback $A^\univ|_{\widehat{ U}_{/z }}$ of $A^{\univ}|_{U_0}$ to $\widehat{ U}_{/z }$ via \eqref{633}.
(Thus we may regard $A$ as the family $\{A^\univ|_{\widehat{ U}_{/z }}:z\in U(k)\}.$)
The realization of the classical Serre-Tate coordinate system of $\widehat{ U}_{/z }$
at $\xi_z$ (the restriction of $\xi$ at $z$) gives an element $T_z^{c{\mathrm{ST}}}\in \CO(\widehat{ U}_{/z })$ and an isomorphism
$ \widehat{ U}_{/z }\simeq \Spf k[[T_z^{c{\mathrm{ST}}}]]$
(see Definition \ref{realization'}). Here and below, the superscript $c$ indicates ``classical".
\begin{lem}\label{6.3.3}
The restriction of $T^{\mathrm{ST}}$ to $\pr_1^{-1}(\{z\}) \simeq\widehat{ U}_{/z} $ is $T_z^{c{\mathrm{ST}}}$. In particular, $$\CO\left(\widehat{U\times U}_{/\Delta }\right)= B[[T^{\mathrm{ST}}]].$$
\end{lem}
\begin{proof}
The restriction of \eqref{BTT} to $\pr_1^{-1}(\{z\}) \simeq\widehat{ U}_{/z} $ via \eqref{633} gives an isomorphism $\CO(\widehat{ U}_{/z} )\simeq k[[T]]. $
Let $$q^c_n=q_{A^\univ|_{\widehat{ U}_{/z } }/T^{p^n}}^c: \Sym^2(T_p A^\univ_z[p^\infty])\to 1+T k[[T]] /T^{p^n}$$ be the classical Serre-Tate coordinate system of $A^\univ|_{\widehat{ U}_{/z }} /T^{p^n}$ (see \eqref{60}). Then the image of $T_z^{c{\mathrm{ST}}}$ in $ k[[T]] /T^{p^n}$ is $q^c_n (\xi_z)-1.$
By Example \ref{GSTeg} (3) and Lemma \ref{GSTeg'}, $q^c_n(\xi_z)$ equals the restriction of $\phi_n(q_{A_n/U'}(\xi_n))$ at $z$. Thus the first statement follows. The second statement follows from the first one.
\end{proof}
\section{Proof of Theorem \ref{TVcor1} }\label{pf{TVcor}}
In this section, we first prove a Tate-Voloch type result in a family in characteristic $p$.
Combined with the results in Section \ref{The ordinary perfectoid Siegel space and Serre-Tate theory}, we prove Theorem \ref{TVcor1}.
We continue to use the notations in Section \ref{The ordinary perfectoid Siegel space and Serre-Tate theory}.
\subsection{Tate-Voloch type result in a family in characteristic $p$}\label{pf{TVcor1}}
Recall that $k=\bar \BF_p$ and $K^\flat= k((t^{1/p^\infty})) $.
In the proof of Lemma \ref{g=0}, we used the following simple fact: let
$S$ be a $k$-algebra, $g\in S$ and $x\in (\Spec S)(k)$; then $g (x)= 0$ or $|g (x)|_k=1$, where the valuation $|\cdot|_k$ on $k$ takes value 0 on $0\in k$ and 1 on $k^\times$. This fact can be naively regarded as an analog of the Tate-Voloch conjecture over $k$. We want to consider this analog in a family. We need some notation.
Let $l$ be a positive integer.
For $d=(d^{(1)},...,d^{(l)}) \in (p^{\BZ_{< 0}})^l$, define $t^d:=(t^{d^{(1)}},...,t^{d^{(l)}})\in (K^\fcc)^l$.
For $c=(c^{(1)},\dots ,c^{(l)}) \in ( {\BZ_{p}}^\times)^l$, define $$(1+t^d)^c-1:=\left((1+t^{d^{(1)}})^{c^{(1)}}-1,\dots,(1+t^{d^{(l)}})^{c^{(l)}}-1\right)\in (K^\fcc)^l.$$
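Let us record, as a sanity check, a norm computation concerning these elements. For $c\in \BZ_p$ and $d\in p^{\BZ_{<0}}$, the element $(1+t^{d})^{c}$ (defined by continuity from integer exponents) is given by the $t$-adically convergent binomial series $\sum_{j\geq 0}\binom{c}{j}t^{jd}$, since $\binom{c}{j}\in \BZ_p$ and $\|t^{jd}\|\to 0$ as $j\to\infty$. If moreover $c\in \BZ_p^\times$, then the term $ct^{d}$ strictly dominates the terms with $j\geq 2$, so that $\|(1+t^{d})^{c}-1\|=\|t^{d}\|$. This computation underlies the norm estimates below.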
Fix a sequence $\{d_n\}_{n=1}^\infty$ of elements in $(p^{\BZ_{< 0}})^l$ and a sequence $\{c_n\}_{n=1}^\infty$ of elements in $(\BZ_p^\times)^l$.
Let
\begin{equation} \label{equation}y_n= (1+t^{d_n})^{c_n}-1 \in (K^\fcc)^l\subset \Spec K^\fcc[[ T_1,...,T_l]].
\end{equation}
Let $\BN=\{1,2,\dots \}$ be the set of positive integers. For $\delta\in (0,1)$ and the given sequence $\{d_n\}_{n=1}^\infty$, let $$\BN(\delta)=\{n\in \BN:
d_n^{(i)}/d_n^{(i+1)}<\delta \text{ for all } i=1,\dots,l-1\}.$$ If $l=1$, we understand $\BN(\delta)$ as $\BN$.
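For instance, if $l=2$ and $d_n=(p^{-2n},p^{-n})$, then $d_n^{(1)}/d_n^{(2)}=p^{-n}$, so $\BN(\delta)$ consists of all $n$ with $p^{-n}<\delta$; for such $n$ one has $\|t^{d_n^{(2)}}\|=\|t\|^{p^{-n}}<\|t\|^{p^{-2n}/\delta}=\|t^{d_n^{(1)}}\|^{1/\delta}$, which is the kind of inequality exploited in the proof of Proposition \ref{7.2.1} below.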
\begin{prop} \label{7.2.1} Let $ A$ be a reduced $k$-algebra and $V=\Spec A$.
Let $ \{z_n\}_{n=1}^\infty $ be a sequence of (not necessarily distinct) points in $V(k)$.
Let $f\in A[[ T_1,...,T_l]]$ and
let $f_{z_n}\in k[[ T_1,...,T_l]]$ be the restriction of $f$ at $z_n$.
Assume that \begin{itemize}
\item[$(\star)$] for every
infinite subset $\BN'\subset \BN$,
the set
$\{z_n: n\in \BN'\} $ is Zariski dense in $V$.
\end{itemize}
If $f\neq 0$,
then there exists $ D_0\in \BR_{>0} $ and $\delta_0\in (0,1)$ such that for every $D\geq D_0$ and $\delta\leq \delta_0$,
the following set is finite
\begin{equation} \{ n\in \BN(\delta) : \|f_{z_{n}}(y_{n})\| < \|T_l (y_n)\|^D\} \label{fDn}.\end{equation}
Here $T_l (y_n)$ is, by definition, the $l$-th coordinate of $y_n$.
\end{prop}
\begin{proof}
We do induction on $l$.
The case $l=1$ is proved as follows.
Let $f=\sum\limits_{m\geq 0} a_mT^m$ where $ a_m \in A$. Regard $a_m$ as a function on $V$ so that $a_m(z_n)\in k$.
Claim: there exists some $m$ such that
$a_{m}({z_{n}})\neq 0$ for $n$ large enough. Let $m_0
$ be the smallest such $m$.
Then $$\|f_{z_{n}}(y_{n})\|\geq\|t^{d_{n}m_0}\|=\|T(y_n)\|^{m_0} $$ for $n$ large enough: the nonzero terms $a_m(z_n)y_n^m$ have pairwise distinct norms $\|t^{d_nm}\|$, and the smallest $m$ with $a_m(z_n)\neq 0$ is at most $m_0$.
Let $D_0=m_0$ and we are done.
Now we prove the claim by contradiction. Assume that for every $m$, $a_{m}(z_n)= 0$ for infinitely many $n$. By assumption $(\star)$ and the reducedness of $A$, $a_m=0$.
Thus $f=0$. This is a contradiction.
Now we do the induction. Let $l>1$.
We prepare some notations.
Let $d'_n,y'_n$ be the first $l-1$ components of $d_n,y_n$ respectively. For $\delta\in (0,1)$, we have a subset $\BN(\delta)'\subset \BN$ defined using the sequence $\{d'_n\} _{n=1}^\infty$.
Then $\BN(\delta)'\supset \BN(\delta)$.
Assume that $f\neq 0$. Write $f=T_l^{m_1}(g_1+f_1)$ where $g_1\in A[[T_1,\dots,T_{l-1}]]\backslash \{0\}$ and $f_1\in T_l A[[T_1,\dots,T_{l}]]$.
Below, to lighten notation, we abbreviate the subscript $z_n$.
Then for $n$ in the set \eqref{fDn}, with $D$ and $\delta$ to be determined, we have \begin{equation*}\|g_1(y_n)+f_1(y_n)\|= \|T_l (y_n)\| ^{-m_1}\|f(y_n)\|< \|T_l (y_n)\| ^{D-m_1} . \end{equation*}
If $D\geq m_1+1$, then
$$\|g_1(y_n) \|\leq \|g_1(y_n)+f_1(y_n) \|+\|f_1(y_n)\|\leq \|T_l (y_n)\|.$$
Since $\|T_{l} (y_n)\|< \|T_{l-1} (y'_n)\|^{1/\delta}$ and $\|g_1(y'_n)\|=\|g_1(y_n)\|$, we have
\begin{equation}
\|g_1(y'_n)\|< \|T_{l-1} (y'_n)\|^{1/\delta}.
\label{woll}\end{equation}
By the induction hypothesis, there exists $D'>0$ and $\delta'_0\in(0,1)$ such that if $ \delta\leq 1/D'$ and $\delta\leq \delta'_0$, $\{n\in \BN(\delta)':\eqref{woll}\text{ holds}\}$ is finite. Then \eqref{fDn} is finite by choosing $D_0=m_1+1$ and $\delta_0=\min\{1/D', \delta'_0\}$. \end{proof}
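Note that assumption $(\star)$ cannot be dropped: for instance, take $l=1$, $A=k[s]$, $f=s$ and $z_n=0$ for every $n$; then $f\neq 0$ but $f_{z_n}=0$, so $\|f_{z_{n}}(y_{n})\|=0<\|T_1(y_n)\|^D$ for every $D$, and the set \eqref{fDn} is all of $\BN$.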
\begin{rmk}(1) $ D_0$ and $\delta_0 $ are uniform for all choices of $\{c_n\}_{n=1}^\infty$. We do not need this fact later.
(2) The proposition is inspired by \cite[Lemma 2.10]{Ser}. In the proof of \cite[Lemma 2.10]{Ser}, there is a minor imprecision.
The following modification is suggested by Serban.
Define $T_\delta$ in \cite[Lemma 2.10]{Ser} to be the first set in the intersection
but not the entire intersection, so that the statement (2) in loc. cit. is about $T_\delta\cap S_{\phi}(q^{-1-c})$.
The 3rd displayed formula in the proof of \cite[Lemma 2.10]{Ser} should be removed.
Then one can still get the 5th displayed formula in that proof with slightly more effort.
\end{rmk}
\subsubsection{Closure and limit}
We show that assumption $(\star)$ in Proposition \ref{7.2.1} holds in some situations.
\begin{lem}\label{1lem}
Let $\{B_i\}_{i=0}^\infty$ be a direct system of rings and $B=\varinjlim_i B_i$. Let $ f_i:\Spec B\to \Spec B_i$ be the natural morphism.
Let $ \Lambda\subset \Spec B $ be a subset and $ \Lambda_i= f_i(\Lambda)\subset \Spec B_i $. We have the following relation between Zariski closures: \begin{equation}\Lambda ^{\mathrm{Zar}} =\bigcap_{i=0}^\infty f_i^{-1}\left( \Lambda_i^{\mathrm{Zar}}\right) .\label{bydef}\end{equation}
\end{lem}
\begin{proof}
The ideal $I\subset B$ defining $\Lambda ^{\mathrm{Zar}}$, with reduced induced structure as a closed subscheme, is generated by the union of the images $ I_i$ in $B$, where $I_i\subset B_i$ is the ideal of elements whose image in $B$ vanishes on $\Lambda ^{\mathrm{Zar}}$. By the definition of $\Lambda_i$, $I_i$ is the ideal defining $\Lambda_i^{\mathrm{Zar}}$. Then \eqref{bydef} follows.
\end{proof}
Let $f:U\to U_0$ be a surjective morphism of schemes. Let $\Lambda_0\subset U_0 $ be a subset with Zariski closure $\Lambda_0^{\mathrm{Zar}}$ in $U_0$.
For $s\in \Lambda_0$, choose $z_s\in f^{-1}(s)$. Let $\Lambda=\{z_s:s\in \Lambda_0\}$ with Zariski closure $ \Lambda^{\mathrm{Zar}}$ in $U$.
\begin{lem}
Assume that $f$ is closed.
(1)
The image of $\Lambda^{\mathrm{Zar}}$ in $U_0$ is $\Lambda_0^{\mathrm{Zar}}$.
(2) Assume that $\Lambda_0^{\mathrm{Zar}}$ is irreducible and $U$ is noetherian.
There exists a choice of $\Lambda$ such that $\Lambda^{\mathrm{Zar}}$ is irreducible.
(3) In (2), further assume that $f$ is finite and the Zariski closure of every infinite subset of $\Lambda_0$ is $\Lambda_0^{\mathrm{Zar}}$. Then the Zariski closure of every infinite subset of $\Lambda$ is $\Lambda^{\mathrm{Zar}}$.
\end{lem}
\begin{proof} (1) is easy and the proof is omitted.
(2) For every member of the finitely many irreducible (so closed) components of $f^{-1}(\Lambda_0^{\mathrm{Zar}})$, its image in
$\Lambda_0^{\mathrm{Zar}}$ is a closed subscheme. By the irreducibility of $\Lambda_0^{\mathrm{Zar}}$, some irreducible component of $f^{-1}(\Lambda_0^{\mathrm{Zar}})$ is surjective to $\Lambda_0^{\mathrm{Zar}}$.
We choose all $z_s$'s in this component.
(3) Note that a finite surjective morphism preserves dimension, and a proper closed subscheme of a noetherian irreducible scheme has a strictly smaller dimension. Then (3) follows from (1) and counting dimensions.
\end{proof}
The last two lemmas imply the following corollary.
\begin{cor} \label{profirred} Let
$B,B_i$'s be as in Lemma \ref{1lem}. Let $ U=\Spec B$ (not necessarily noetherian), $U_0=\Spec B_0$ and $f= f_0$.
Assume that each $B_i$ is noetherian and the transition morphisms $\Spec B_j\to \Spec B_i$ are finite surjective.
Assume that the Zariski closure of every infinite subset of $\Lambda_0$ is $\Lambda_0^{\mathrm{Zar}}$.
There exists a choice of $\Lambda$ such that the Zariski closure of every infinite subset of $\Lambda$ is $\Lambda^{\mathrm{Zar}}$.
\end{cor}
To fulfill the second assumption of the corollary, we use the following lemma.
\begin{lem} \label{profirredlem} Let $U_0$ be a noetherian scheme. For every infinite subset $Y\subset U_0 $, there is an infinite subset $\Lambda_0\subset Y $ such that the Zariski closure of every infinite subset of $\Lambda_0$ is $\Lambda_0^{\mathrm{Zar}}$.
\end{lem}
\begin{proof} Since $U_0$ is noetherian, there is a closed subscheme $V$ of $U_0$ that contains infinitely many elements of $Y$ and is minimal with this property. Let $\Lambda_0=Y\bigcap V$, which is infinite. By the minimality of $V$, every closed subscheme of $V$ containing infinitely many elements of $\Lambda_0$ equals $V$. In particular, the Zariski closure of every infinite subset of $\Lambda_0$ is $V=\Lambda_0^{\mathrm{Zar}}$. \end{proof}
\subsection{Proof of Theorem \ref{TVcor1}}
Let $X$ be a product of Siegel moduli spaces over $\BZ_p$ with certain level structures away from $p$.
By Lemma \ref{T'T}, Theorem \ref{TVcor1} follows from
the following theorem.
\begin{thm} \label{TVcor1'} Let $Z$ be a closed subvariety
of $X_{\bar L} $. There exists a constant $c>0$ such that for every ordinary CM point $x \in X(L^\cyc)$, if $d(x,Z)\leq c$, then $x\in Z$.\end{thm}
Here the distance function $d(x,Z)$ is defined as in \ref{The distance function} using the integral model $X$.
\begin{proof}
We prove Theorem \ref{TVcor1'} when $X$ is a single Siegel moduli space. The general case is proved in the same way or by embedding a product of Siegel moduli spaces into a bigger one.
We continue to use the notations in Section \ref{The ordinary perfectoid Siegel space and Serre-Tate theory}. In particular, the
fields $L, L^\cyc$, $K$ and $K^\flat$ below are as in the beginning of Section \ref{The ordinary perfectoid Siegel space and Serre-Tate theory};
the formal scheme $\fX(0)$, the adic locus $\CX(0)$, the perfectoid spaces $\CX(0)^\perf,\CX(0)^{\perf\flat}$ and Frobenius morphism $\Fr^\can$
below are as in \ref{Ordinary perfectoid Siegel space}.
For an ordinary CM point $x \in X(L^\cyc)$, we use the same notation $x$ to denote
its base change in $X(K)$.
Let $x^\circ$ be the unique $K^\circ$-point in $ X$ whose generic fiber is $x$.
Suppose that $Z$ is defined over a finite Galois extension $F $ of $L$. Let $\CI$ be the ideal sheaf of the schematic closure of $Z$ in $X_{ F^\circ}$.
Let $\CU$ be an affine open subscheme of $X_{W}$, of finite type over $W$. (This is the only use of a calligraphic font not representing an adic space in this paper.)
We only need to find a constant $c$ such that, if an
ordinary CM point $x\in X(L^\cyc)$ satisfies $x^\circ\in \CU (K^\circ)$ and $d_{\CU_{K^\circ}}(x_{K},I)<c$, then $x\in Z$.
Here the distance function is as in \ref{The distance function}.
We first make the following simplification concerning $F$. Let $K'=FK$. Suppose $\CI (\CU_{F^\circ})$ is generated by $f_i$, $i=1,...,n$.
For $\sigma\in G:=\Gal(K'/K) $,
$f_i^\sigma$ is in the coordinate ring of $\CU_{K'^\circ}$ and
$\|f_i(x_{K'} )\| =\|f_i^\sigma(x_{K'} )\| $.
Let $I$ be the ideal of the coordinate ring of $\CU_{K^\circ}$ generated by $\prod_{\sigma\in G} f_i^\sigma$, $i=1,...,n$. Then $$d_{\CU_{K^\circ}}(x_{K},I)= d_{\CU_{K'^\circ}}(x_{K'},\CI_{K'^\circ}(\CU_{K'^\circ}))^{|G|}.$$ Thus we may assume that $F\subset K$. Equivalently, $F\subset L^{\mathrm{cycl}}$.
Now we reduce Theorem \ref{TVcor1'} to Theorem \ref{TVcor1aff} below, which is formulated with affine formal schemes.
For an ordinary CM point $x\in X(K)$, we also use $x$ to denote the corresponding point in $ \CX(0)(K,K^\circ)$.
Let $\fU$ be the restriction of the $\varpi$-adic formal completion of $\CU$ to $\fX(0)$.
By Lemma \ref{distep}, Theorem \ref{TVcor1'} is deduced from Theorem \ref{TVcor1aff}. \end{proof}
\begin{thm} \label{TVcor1aff} Let $\fZ$ be an irreducible closed formal subscheme of $\fU_{F^\circ}$. For a sequence $\{x_n\}_{n=1}^{\infty}$ of ordinary CM points such that $x_n$ is in the $\ep_n$-neighborhood of $\fZ$ and
with $ \|\ep_n\|\to 0$, we have $x_n\in \CZ$ for infinitely many $n$'s.
\end{thm}
The proof of Theorem \ref{TVcor1aff} consists of two parts: one involves
perfectoid spaces and one does not. The perfectoid part is more technical and proves results to be used in the second part. The non-perfectoid part concludes Theorem \ref{TVcor1aff}. We will present the non-perfectoid part first, in \ref{Global Serre-Tate coordinate} and \ref{Proof of Theorem {TVcor1aff}}.
A canonical lifting is an ordinary CM point of order $1$ w.r.t. one (equivalently, every) basis, see Definition \ref{generator} (1). The following lemma will be proved in Theorem \ref{samel} using perfectoid spaces.
\begin{lem}\label{samellem}
Theorem \ref{TVcor1aff} holds if we replace ``ordinary CM points" by ``canonical liftings".
\end{lem}
\subsubsection{Global Serre-Tate coordinate} \label{Global Serre-Tate coordinate}
Before we proceed to the proof of Theorem \ref{TVcor1aff}, let us recall the realization of the global Serre-Tate coordinate system in \ref{GSTapp}.
Let $U_0$ be the special fiber of $\fU $.
Let $U=\Spec B$ be
the profinite Galois cover of $U_0$ defined in \ref{GSTapp} (and coming from the infinite level Igusa scheme)
such that
$\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$ is a free $\BZ_p$-module of rank $ g(g+1)/2$. Let $\Delta$ be the diagonal of $U\times U$.
Then by Lemma \ref{6.3.3}, for a basis \begin{equation} \xi_1 ,..., \xi_{g(g+1)/2} \label{base0}\end{equation} of $\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$, we have the realization of the global Serre-Tate coordinate system
$$\CO\left(\widehat{U\times U}_{/\Delta }\right)= B[[T_1^{\mathrm{ST}},...,T_{g(g+1)/2}^{\mathrm{ST}}]],$$
which has the following property.
For every $z\in U(k)$, we have an isomorphism \begin{equation}\pr_1^{-1}(\{z\})\overset{\pr_2}\simeq \widehat{ U}_{/z} \label{633'}
\end{equation} as in \eqref{633}, and the corresponding isomorphism
\begin{equation} \widehat{ U}_{/z}\simeq \Spf k[[T_{1,z}^{\mathrm{ST}},...,T_{g(g+1)/2,z}^{\mathrm{ST}}]],\label{7.2}\end{equation}
where $T_{i,z}^{\mathrm{ST}}$ is the restriction of $T_i^{\mathrm{ST}}$ to $ \pr_1^{-1}(\{z\})\simeq \widehat{ U}_{/z}$.
Let \begin{equation}\xi_{z,1},...,\xi_{z,g(g+1)/2} \label{base}\end{equation} be the restriction of $ \xi_1 ,..., \xi_{g(g+1)/2} $.
Then \eqref{7.2}
coincides with the realization of the classical Serre-Tate coordinate system of
$ \widehat{ U}_{/z } $ at $\xi_{z,1},...,$ $\xi_{z,g(g+1)/2} $, see Definition \ref{realization'}.
\subsubsection{Proof of Theorem \ref{TVcor1aff}} \label{Proof of Theorem {TVcor1aff}}
After passing to an infinite subsequence, we may assume that $\{\red( x_n)\}_{n=1}^\infty$ consists either of a single point (repeated) or of pairwise distinct points.
Let $z_n\in U(k)$ be over $\red( x_n)\in U_0(k)$.
By Corollary \ref{profirred} and
Lemma \ref{profirredlem}, after passing to an infinite subsequence, we may assume the following.
\begin{asmp}\label{star}
For every
infinite subset $\BN'\subset \BN$,
the Zariski closure of the set
$\{z_n: n\in \BN'\} $ in $U$ is the Zariski closure of the set
$\{z_n: n\in \BN\} $. \end{asmp}
We regard the basis \eqref{base} for $z=z_n$ as a basis of $\Sym^2(T_p A_{x_n})$ in the natural way.
Let $x_n$ be of order $p^{a_n}$ w.r.t. \eqref{base} (see Definition \ref{generator} (1))
where $a_n=(a_n^{(1)},...,a_n^{(g(g+1)/2)})\in\BZ_{\geq 0}^{g(g+1)/2}$.
After passing to an infinite subsequence and permuting the basis \eqref{base0} of $\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$, we may assume that every $a_n$ is non-increasing (see Definition \ref{generator} (2)). Let $l\leq g(g+1)/2$ be a non-negative integer such that for every $n$, if $i>l$, then $a_n^{(i)}=0$.
For example, if $l=g(g+1)/2$, the assumption automatically holds; if $l=0$, we are in the situation of Lemma \ref{samel1}.
We will reduce Theorem \ref{TVcor1aff} to the case $l=0$ by using Lemma \ref{samel1} below.
We need the ``upper triangular change of variables'' argument following \cite{Ser}.
By ``upper triangular change of variables'', we mean changing the first $l$ elements of the basis \eqref{base0} of $\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$ via an upper triangular matrix as follows.
For $C\in \GL_l(\BZ_p)$, $(\xi_1 ,..., \xi_{l})C$ combined with $(\xi_{l+1},...,\xi_{g(g+1)/2})$ gives a new basis of $\Sym^2(T_p A^{\univ}[p^\infty]^{\mathrm{et}}) (U)$. Thus by restriction as in \eqref{base}, we have a new basis of $\Sym^2(T_p A_{x_n})$ for every $n$.
Let $x_n$ be of order $p^{a_n(C)}$ w.r.t. this new basis, where $a_n(C)\in\BZ_{\geq 0}^{g(g+1)/2}$.
Then for $C$ upper triangular, $a_n(C)$ is still non-increasing.
\begin{lem}\label{samel1}
Assume Assumption \ref{star}.
Assume that for every upper triangular matrix $C\in \GL_l(\BZ_p)$, the $l$-th component (so the $i$-th component for $i=1,...,l$ as well) of $a_n(C)$ goes to $\infty$ as $n\to \infty$.
Then $ x_n\in \CZ$ for all $n\in \BN$.
\end{lem}
We postpone the proof of Lemma \ref{samel1}.
We finish the proof of Theorem \ref{TVcor1aff} by induction on
the dimension of $\fZ$.
If $\fZ$ is empty,
define its dimension to be $-1$. When $\fZ$ is of dimension $-1$, the theorem is trivial.
The induction hypothesis is that the theorem holds for lower dimensions, and it will only be used in the proof of Lemma \ref{claim1} (2) below.
By Lemma \ref{samel1} and passing to an infinite subsequence, we may assume that for some upper triangular matrix $C\in \GL_l(\BZ_p)$, the $l$-th component of $a_n(C)$ is bounded. Replacing the basis \eqref{base0} by the new basis given by
$(\xi_1 ,..., \xi_{l})C$ combined with $(\xi_{l+1},...,\xi_{g(g+1)/2})$, we may assume that there is a non-negative integer $m$ such that for every $n$, $a_n^{(l)}\leq p^m$. The fact that $a_n^{(i)}=0$ for $i>l$ is not affected.
\begin{lem} \label{claim1}Let $m$ be a non-negative integer.
Then the following hold.
(1) The adic generic fiber of $(\Fr^\can)^m (x_n^\circ)$ is in the $\ep_n$-neighborhood of the scheme theoretic image $(\Fr^\can)^m(\fZ)$ (see \cite[2.3]{Kap}).
(2) If $(\Fr^\can)^m (x_n^\circ) \in (\Fr^\can)^m(\fZ(K^\circ))$ for infinitely many $n$'s, then $x_n^\circ \in\fZ(K^\circ) $ for infinitely many $n$'s.
\end{lem}
\begin{proof}
To lighten the notations, assume that $m=1$.
Consider the closed formal subscheme $(\Fr^\can)^{-1}(\Fr^\can(\fZ))$ of $\fU$ which contains $\fZ$.
Then $x_n^\circ$ is contained in the $\ep_n$-neighborhood of $(\Fr^\can)^{-1}(\Fr^\can(\fZ))$
by Lemma \ref{union} (1).
Then (1) follows from the analog of Lemma \ref{pullback1} for formal schemes (which directly follows from Definition \ref{defnnbhd}).
We prove (2) by contradiction. Suppose that (2) fails. Then there is an infinite subsequence $\{n_i\}\subset \BN$ such that $\Fr^\can(x_{n_i}^\circ) \in \Fr^\can(\fZ(K^\circ))$
and
$x_{n_i}^\circ\not\in \fZ(K^\circ)$ for every $i$.
In particular, $(\Fr^\can)^{-1}(\Fr^\can(\fZ))\neq\fZ $. Thus by \cite[Proposition 2.10]{Kap}, it is not hard to show that
$$(\Fr^\can)^{-1}(\Fr^\can(\fZ))=\fZ\bigcup \fZ'$$ such that $\fZ' $ does not contain $\fZ$ and $x_{n_i}^\circ \in \fZ'(K^\circ) $.
By Lemma \ref{intersect}, every $x_{n_i} $ is contained in the $\ep_{n_i}$-neighborhood of $\fZ\bigcap \fZ'$.
Let $\fZ_1$ be the union of irreducible components of $\fZ\bigcap \fZ'$ which dominate
$\Spf F^\circ$.
By Lemma \ref{union} (2), there exists $\delta\in K^{\circ}-\{0\}$ such that every $x_{n_i} $ is contained
in the $\ep_{n_i}/\delta$-neighborhood of $\fZ_1$.
Since every irreducible component of $\fZ_1$ has dimension less than the dimension of $\fZ$,
by the induction hypothesis, we have $x_{n_i} \in \fZ_1(K^\circ)\subset \fZ(K^\circ)$. This is a contradiction.
\end{proof}
By \eqref{Qp} and Lemma \ref{claim1}, after passing to an infinite subsequence, we may assume that for every $n$, if $i\geq l$, then
$a_n^{(i)}=0$, i.e. we may replace $l$ by $l-1$. Continuing this process, we may assume that $l=0$, i.e. $a_n^{(i)}=0$ for every $n$ and $i$.
Now Theorem \ref{TVcor1aff} follows from Lemma \ref{samel1}.
\subsubsection{Canonical liftings and perfectoid strategy}\label{Canonical liftings and perfectoid strategy}
Now our remaining tasks are: proof of Lemma \ref{samellem} and proof of Lemma \ref{samel1}.
For Lemma \ref{samellem}, we prove an ``almost effective" version of Theorem \ref{TVcor1aff} for canonical liftings. In the proof, we use the ordinary perfectoid Siegel space and Scholze's approximation lemma, following a strategy of Xie \cite{Xie}. Our later proof of Lemma \ref{samel1} involves a more complicated version of this proof (which in particular uses the global Serre-Tate coordinate).
Let $\CX $ be the restriction of $\CX(0)^\perf$ to the adic generic fiber of $\fU_{K^\circ}$.
Then $\CX=\Spa(R,R^+)$ where $(R,R^+)$ is a perfectoid affinoid $(K,K^\circ)$-algebra (there is no need to specify $R$ though it is easy to do so).
The restriction of $\CX(0)^{\perf,\flat}$ to the adic generic fiber
of $U_0\otimes K^\fcc$ is $\CX^\flat=\Spa(R^\flat,R^\fpl)$, the tilt of $\CX$. More concretely, it is given as follows:
let $S_m$ be the coordinate ring of
$(\Fr^m)^{-1}(U_0)$ with the natural inclusion $S_{m-1}\hookrightarrow S_m$, and $S=\bigcup_m S_m$, then $ R^{\flat+}$ is the $\varpi^\flat$-adic completion of $S \otimes K^{\flat\circ}$. Let $\CX_m $ be the adic generic fiber of $\Spec S_m\otimes K^\fcc$, and $$\pi_m:\CX^\flat\to \CX_m$$ the natural projection. Recall $\pi$ and $\pi'$ as defined in \eqref{natpi}.
Then $\pi_0=\pi'|_{\CX^\flat}$ (which has image in $\CX_0$). We abbreviate $\pi|_{\CX}$ as $\pi$ (which has image in the adic generic fiber of $\fU_{K^\circ}$).
Let $\rho$ be the restriction of $\rho_{\CX(0)^\perf}$
(see
\eqref{natrho}) to $\CX$.
For $f\in \CO(\fU)$ in the defining ideal of $\fZ$, regard $f$ as an element of $ R^+$ by the inclusion $\CO(\fU) \subset R^+$.
For $c\in \BZ _{> 0}$, choose $g$ as in Lemma \ref{Corollary 6.7. (1)} (w.r.t. $f$) and choose a finite sum $$ g_c=\sum\limits_{\substack{i\in \BZ[\frac{1}{p}]_{\geq 0},\\i< \frac{1}{p}+c}}g_{c,i} \cdot (\varpi^\flat)^i $$ as in Lemma \ref{improveCorollary 6.7. (1)} where $g_{c,i}\in S$ for all $i$.
There exists a positive integer $m(c)$ such that $g_{c,i}\in S_{m(c)}$ for all $i$ by the finiteness of the sum. Let $G_c:=g_c^{p^{m(c)}} $.
Then we have the finite sum \begin{equation}G_c=\sum\limits_{\substack{i\in \BZ[\frac{1}{p}]_{\geq 0},\\i< \frac{1}{p}+c}}G_{c,i} \cdot (\varpi^\flat)^{p^{m(c)}i},\label{icmicm}
\end{equation}where $G_{c,i}=g_{c,i}^{p^{m(c)}}$.
By the construction of the $S_m$'s, we have $G_{c,i}\in S_0$.
Let $I_c$ be the ideal of $S_0$ generated by $\{G_{c,i}:i\in \BZ[\frac{1}{p}]_{\geq 0},i< \frac{1}{p}+c\} $.
By the noetherianness of $S_0$, there exists a positive integer $M$ such that \begin{equation}\sum_{c=1}^\infty I_c=\sum_{c=1}^MI_c.\label{icm'}\end{equation}
For $y \in \fX(0)$ and $\tilde y \in \pi^{-1}(y )\subset \CX $, $|f(\tilde y)|=\|f( y)\|$.
If $\|f( y)\|\leq \|\varpi\|^{\frac{1}{p}+M}$, by \eqref{2.1} and \eqref{modi}, we have $|g_c( \pi_{m(c)} (\rho(\tilde y)))|\leq \|\varpi\|^{\frac{1}{p}+c}$ for $c=1,...,M$.
So for $c=1,...,M$, we have \begin{equation}|G_c( \pi_{0} (\rho(\tilde y)))|=|G_c( \pi_{m(c)} (\rho(\tilde y)))|\leq \|\varpi\|^{(\frac{1}{p}+c)p^{m(c)}}\label{icm''}.
\end{equation}
\begin{thm}\label{samel}
Assume that $\{f_1,...,f_r\}\subset \CO(\fU)$ generates the ideal defining $\fZ$. For each $f_j$, let $M_j$ be the $M$ as in \eqref{icm'} with $f$ replaced by $f_j$. Let $\BM=\max \{M_j:j=1,...,r\}$.
Let
$ y$ be a canonical lifting in the $\varpi ^{\frac{1}{p}+\BM}$-neighborhood of $\fZ$. Then $y\in \CZ$.
\end{thm}
\begin{proof} Apply Lemma \ref{6.2.3} (2) to $y$ with $a=1$. Choose $\tilde y\in \pi^{-1}(y) $ to be as in Lemma \ref{6.2.3} (2).
Then $\pi_{0} (\rho(\tilde y))=\red(y)\in U_0(k)$, where we understand $U_0(k)$ as a subset of $\fX(K^\flat,K^{\flat\circ})$ naturally.
Let $f=f_j$ and $M=M_j$ for some $j$. Then
$ |f(\tilde y)|\leq \|\varpi\|^{\frac{1}{p}+M}$ and thus we have \eqref{icm''}.
As in Lemma \ref{g=0}, by \eqref{icm''} and \eqref{icmicm}, we have $G_{c,i}( \pi_{0} (\rho(\tilde y)))=0$ for every $c=1,...,M$ and corresponding $i$'s.
By \eqref{icm'}, $I_c( \pi_{0} (\rho(\tilde y)))=\{0\}$ for every $c\in \BZ _{>0}$.
So $$G_c( \pi_{m(c)} (\rho(\tilde y)))=G_c( \pi_{0} (\rho(\tilde y)))=0$$ for every $c\in \BZ _{>0}$.
Thus $g_c( \pi_{m(c)} (\rho(\tilde y)))=0$.
By \eqref{2.1} and \eqref{modi}, $ |f(\tilde y)|\leq \|\varpi\|^{\frac{1}{p}+c}$ for every $c\in \BZ _{>0}$. Thus $ |f(\tilde y)|=0$.
\end{proof}
\begin{rmk}The effectivity of $\BM$ is essentially determined by whether the approximating function $g$ in Lemma \ref{Corollary 6.7. (1)} can be determined effectively. However, Scholze's proof of Lemma \ref{Corollary 6.7. (1)} uses ``almost ring theory'' and is not effective. It is meaningful to ask if Lemma \ref{Corollary 6.7. (1)} can be made effective.
\end{rmk}
\subsubsection{Toward the proof of Lemma \ref{samel1}} \label{Toward the proof}
This paragraph closely mimics the proof of Theorem \ref{samel}.
Let notations be as above Theorem \ref{samel} and let $y=x_n$.
For every $c=1,...,M$ and a corresponding $i$,
we want to show that $G_{c,i}( \pi_{0} (\rho(\tilde x_{n})))=0$.
Then by \eqref{icm'}, $I_c( \pi_{0} (\rho(\tilde x_{n})))=\{0\}$ for every $c\in \BZ _{>0}$.
So $G_c( \pi_{m(c)} (\rho(\tilde x_{n})))=G_c( \pi_{0} (\rho(\tilde x_{n})))=0$ for every $c\in \BZ _{>0}$.
Thus $g_c( \pi_{m(c)} (\rho(\tilde x_{n})))=0$.
By \eqref{2.1} and \eqref{modi}, $ |f(\tilde x_{n})|\leq \|\varpi\|^{\frac{1}{p}+c}$ for every $c\in \BZ _{>0}$. Thus $ |f(\tilde x_{n})|=0$.
Letting $f$ run over a finite set of generators of the defining ideal of $\fZ$ and choosing infinite subsequences successively, we obtain $ x_n\in \CZ$ for infinitely many $n$'s.
\subsubsection{Spaces}
For $x\in U_0(k)$ (resp. $ U(k)$), let $\cD_x$
be the adic generic fiber of the formal completion of $ U_0 \otimes K^\fcc$ (resp. $ U \otimes K^\fcc$) at $x$. (This coincides with the definition in \ref
{Tilting canonical liftings}.) Equivalently, $\cD_x$ is the adic generic fiber of the formal completion of $ \widehat {U_0}_{/x} \otimes K^\fcc$ (resp. $ \widehat {U}_{/x} \otimes K^\fcc$).
The following two diagrams summarize the adic spaces/k-schemes and morphisms between them that we use:
\begin{equation}
\xymatrix{
\CX \ar[r]^{\rho } \ar[d]^{\pi} & \CX^\flat \ar[d]^{\pi_0} & & & \\
\CX(0) & \CX_0& \ar[l]_{(1) \ \ \ \ \ } \coprod_{x\in U_0(k)}\cD_x& \ar[l] _{\ \ (2)}\coprod_{z\in U(k)}\cD_z & \\
& U_0& \ar[l]_{(1') \ \ \ \ \ } \coprod_{x\in U_0(k)}\widehat{U_0}_{/x}& \ar[l] _{\ \ (2')}\coprod_{z\in U(k)}\widehat{U}_{/z} &\ar[l] _{\eqref{633'}\ \ \ }\coprod_{z\in U(k)}\pr_1^{-1}(\{z\})
\ar[r]^{\ \ \ \ \ \ (3)} & \widehat{U\times U}_{/\Delta } \\
} \label{faca1}
\end{equation}
Here the morphisms (1), (1') and (3) are the natural inclusions. The morphism (2), when restricted to $\cD_z$, $z\in U(k)$, is the natural isomorphism $\cD_{z}\simeq\cD_x$, where
$x\in U_0(k)$ is the image of $z$. We have the parallel statement for (2').
\subsubsection{Functions}
Let $H_{c,i}$ be the image of $G_{c,i}$ in $B$ under the morphism $S_0=\CO(U_0)\to B =\CO(U)$, and $H_{c,i,z_{n}}\in \CO(\widehat{ U}_{/z_n}) $ the image of $H_{c,i}$ under the morphism $B=\CO(U)\to \CO(\widehat{ U}_{/z_n}) $.
For $\tilde x_n \in \pi^{-1}(x_{n})\subset \CX$, by Lemma \ref{6.2.3} (1), $\pi_{0} (\rho(\tilde x_n))\in \cD_{\red( x_n)}$. Let $y_n$ be the preimage of $\pi_{0} (\rho(\tilde x_n))$ in $\cD_{z_n}$ via the natural isomorphism $\cD_{z_n}\simeq\cD_{\red(x_n)}$.
Then as elements in $K^{\fcc}$, we have
$$H_{c,i,z_{n}}(y_n)=H_{c,i}(y_n)= G_{c,i}(\pi_{0} (\rho(\tilde x_{n}))) .$$
\begin{lem}\label{G<1} There is a constant $h_{c,i}<1$ such that $\|H_{c,i,z_{n}}(y_{n})\|<h_{c,i}$ for every $n$.
\end{lem}
\begin{proof}If the lemma is not true, let $i_0$ be the smallest $i$ appearing in the finite sum \eqref{icmicm} such that $\|H_{c,i,z_{n_m}}(y_{n_m})\|\to 1$
for a subsequence $\{n_m\}_{m=1}^\infty\subset \BN$. Then \eqref{icmicm} implies that
$\|G_{c}( \pi_{0} (\rho(\tilde x_{n_m})))\| \to \|\varpi\|^{i_0 p^{m(c)}},$ which contradicts \eqref{icm''}.
\end{proof}
Let $\phi$ be the composition of
\begin{equation*}\phi: B=\CO(U)\to B\otimes B\to \CO\left(\widehat{U\times U}_{/\Delta }\right)= B[[T_1^{\mathrm{ST}},...,T_{g(g+1)/2}^{\mathrm{ST}}]] \end{equation*}
where the first morphism is $b\mapsto 1\otimes b$.
That is, $\phi$ corresponds to the projection $\pr_2$ from $\widehat{U\times U}_{/\Delta }$ to the second factor $U$.
Tracking the second diagram of \eqref{faca1}, we have the following lemma.
\begin{lem}\label{restriction }
The restriction of $ \phi(H_{c,i})$ to $ \pr_1^{-1}(\{z_n\})\overset{\pr_2}\simeq \widehat{ U}_{/z_n}$ in \eqref{633'} is $H_{c,i,z_{n}}\in \CO(\widehat{ U}_{/z_n}) $.
\end{lem}
\subsubsection{Proof of Lemma \ref{samel1}}
We need some notations.
For an open subset $O\subset \BZ_p^{l-1}$,
let $\BN(O)\subset \BN$ be the set of those $n$ such that the tuple of the first $l-1$ components of a ratio of $x_n$ w.r.t. this basis (see Definition \ref{generator}) lies in $O$. If $l=1$, we understand $\BN(O)$ as the whole $ \BN$ (and we will not need the case $l=0$).
For $r \in \BZ_p^{l-1}$ and $\delta\in (0,1)$, let $\BN(r,\delta)=\BN(O(r,\delta))$ where $O(r,\delta)$ is the $p$-adic closed disc centered at $r$ of radius $\delta$.
Now we start to prove Lemma \ref{samel1}.
By the discussion in \ref{Toward the proof}, we only need to
prove that for every $n\in \BN$, $G_{c,i}(\pi_{0} (\rho(\tilde x_{n}))) =0$. Let $\Spec A\subset \Spec B$ be the Zariski closure of the set
$\{z_n: n\in \BN\} $.
Let $f$ be the image of $\phi(H_{c,i})$ under $B[[T_1^{\mathrm{ST}},...,T_{g(g+1)/2}^{\mathrm{ST}}]]\to A[[T_1^{\mathrm{ST}},...,T_{g(g+1)/2}^{\mathrm{ST}}]]$.
By Lemma \ref{restriction }, we have
\begin{equation}\label{GH0}G_{c,i}(\pi_{0} (\rho(\tilde x_{n}))) =H_{c,i,z_{n}}(y_{n})=f(y_n).\end{equation}
We prove the stronger result $f=0$ by contradiction.
Assume that $f\neq 0$. We want to apply Proposition \ref{7.2.1} to $f$ and $y_n$'s.
We check the conditions in Proposition \ref{7.2.1}. First, by the compatibility between the
global and classical Serre-Tate coordinates as in the end of \ref
{Global Serre-Tate coordinate}, we use Lemma \ref{6.2.3} (2) to conclude that $y_n$'s are as in \eqref{equation} above Proposition \ref{7.2.1}.
Second, the assumption $(\star)$ in Proposition \ref{7.2.1} holds by Assumption \ref{star}.
By the assumption that $a_n$ goes to $\infty$ as $n\to \infty$ in Lemma \ref{samel1}, Lemma \ref{G<1} and the second ``=" of \eqref{GH0}, for $n$ large enough, $n$ satisfies the inequality in \eqref{fDn} of Proposition \ref{7.2.1} (for every $D$).
Then by Proposition \ref{7.2.1},
there exists $\delta_0\in (0,1)$ such that $\BN(0,\delta_0) $ is finite.
For a general $r \in \BZ_p^{l-1}$, by \cite[Lemma 2.7]{Ser}, after an ``upper triangular change of variables" (as defined above Lemma \ref{samel1}), we may use the same proof for $r=0$ to conclude that there exists $\delta_r\in (0,1)$ such that $\BN(r,\delta_r) $ is finite.
By its compactness, $ \BZ_p^{l-1}$ is the union of
$p$-adic closed discs centered at $r$ of radius $\delta_r$ for finitely many $r$'s.
Then the infinite set $\BN$ is the union of the finite sets $\BN(r,\delta_r) $'s for these finitely many $r$'s. This is a contradiction.
\end{document} |
\begin{document}
\author{Michael Coons}
\address{University of Waterloo, Dept.~of Pure Math., Waterloo, ON, N2L 3G1, Canada}
\email{[email protected]}
\thanks{Research supported by a Fields--Ontario Fellowship.}
\subjclass[2010]{Primary 11B37; Secondary 11B83}
\keywords{Stern sequence, functional equations, binary expansion}
\date{\today}
\begin{abstract}
In a recent paper, Roland Bacher conjectured three identities concerning Stern's sequence and its twist. In this paper we prove Bacher's conjectures. Possibly of independent interest, we also give a way to compute the Stern value (or twisted Stern value) of a number based solely on its binary expansion.
\end{abstract}
\title{On some conjectures concerning Stern's sequence and its twist}
\section{Introduction}
We define the {\em Stern sequence} (also known as {\em Stern's diatomic sequence}) $\{s(n)\}_{n\geqslant 0}$ by $s(0)=0$, $s(1)=1$, and for all $n\geqslant 1$ by $$s(2n)=s(n),\qquad s(2n+1)=s(n)+s(n+1);$$ this is sequence A002487 in Sloane's list. We denote by $S(z)$ the generating function of the Stern sequence; that is, $$S(z):=\sum_{n\geqslant 0} s(n)z^n.$$ Stern's sequence has been well studied and has many interesting properties (see e.g., \cite{Dil1, Dil2, Leh1, Lin1} for details). One of the most interesting properties is that the sequence $\{s(n+1)/s(n)\}_{n\geqslant 1}$ is an enumeration of the positive reduced rationals without repeats.
Similarly Bacher \cite{B} introduced the {\em twisted Stern sequence} $\{t(n)\}_{n\geqslant 0}$ given by the recurrences $t(0)=0$, $t(1)=1$, and for $n\geqslant 1$ by $$t(2n)=-t(n),\qquad t(2n+1)=-t(n)-t(n+1).$$ We denote by $T(z)$ the generating function of the twisted Stern sequence; that is, $$T(z):=\sum_{n\geqslant 0} t(n)z^n.$$ Towards describing the relationship between the Stern sequence and its twist, Bacher \cite{B} gave many results, and two conjectures. As the main theorems of this article, we prove these conjectures, so we will state them as theorems (note that we have modified some of the notation).
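Both sequences are immediate to generate from these recurrences, which is convenient for checking the identities below numerically. The following Python sketch is ours and is not part of \cite{B}; it simply implements the two definitions with memoisation.
\begin{verbatim}
# Minimal sketch: the Stern sequence s(n) and its twist t(n), computed
# directly from the defining recurrences.
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    if n < 2:
        return n                  # s(0) = 0, s(1) = 1
    q, r = divmod(n, 2)
    return s(q) if r == 0 else s(q) + s(q + 1)

@lru_cache(maxsize=None)
def t(n):
    if n < 2:
        return n                  # t(0) = 0, t(1) = 1
    q, r = divmod(n, 2)
    return -t(q) if r == 0 else -t(q) - t(q + 1)

print([s(n) for n in range(12)])  # [0, 1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5]
print([t(n) for n in range(12)])  # [0, 1, -1, 0, 1, 1, 0, -1, -1, -2, -1, -1]
\end{verbatim}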
\begin{theorem}\label{Bacherconj1} There exists an integral sequence $\{u(n)\}_{n\geqslant 0}$ such that for all $e\geqslant 0$ we have $$\sum_{n\geqslant 0} t(3\cdot 2^e+n)z^n=(-1)^eS(z)\sum_{n\geqslant 0}u(n)z^{n\cdot 2^e}.$$
\end{theorem}
Note that in this theorem (as in the original conjecture), it is implicit that the sequence $\{u(n)\}_{n\geqslant 0}$ is defined by the relationship $$U(z):=\sum_{n\geqslant 0}u(n)z^n =\frac{\sum_{n\geqslant 0}t(3+n)z^n}{S(z)}.$$
\begin{theorem}\label{Bacherconj2} (i) The series $$G(z):=\frac{\sum_{n\geqslant 0}(s(2+n)-s(1+n))z^n}{S(z)}$$ satisfies $$\sum_{n\geqslant 0}(s(2^{e+1}+n)-s(2^e+n))z^n=G(z^{2^e})S(z)$$ for all $e\in\mathbb{N}$.
Similarly, (ii) the series $$H(z):=-\frac{\sum_{n\geqslant 0}(t(2+n)+t(1+n))z^n}{S(z)}$$ satisfies $$(-1)^{e+1}\sum_{n\geqslant 0}(t(2^{e+1}+n)+t(2^e+n))z^n=H(z^{2^e})S(z)$$ for all $e\in\mathbb{N}$.
\end{theorem}
These theorems were originally stated as Conjectures 1.3 and 3.2 in \cite{B}.
\section{Untwisting Bacher's First Conjecture}
In this section, we will prove Theorem \ref{Bacherconj1}, but first we note the following lemma which is a direct consequence of the definitions of the Stern sequence and its twist.
\begin{lemma}\label{AT} The generating series $S(z)=\sum_{n\geqslant 0}s(n)z^n$ and $T(z)=\sum_{n\geqslant 0}t(n)z^n$ satisfy the functional equations $$S(z^2)=\left(\frac{z}{1+z+z^2}\right)S(z)$$ and $$T(z^2)=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^2}\right),$$ respectively.
\end{lemma}
We prove here only the functional equation for $T(z)$. The functional equation for the generating series of the Stern sequence is well--known; for details see, e.g., \cite{CoonsS1, Dil1}.
\begin{proof}[Proof of Lemma \ref{AT}] This is a straightforward calculation using the definition of $t(n)$. Note that \begin{align*} T(z)&= \sum_{n\geq 0}t(2n)z^{2n}+\sum_{n\geq 0}t(2n+1)z^{2n+1}\\
&=-\sum_{n\geq 0}t(n)z^{2n}+t(1)z+\sum_{n\geq 1}t(2n+1)z^{2n+1}\\
&=-T(z^2)+z-\sum_{n\geq 1}t(n)z^{2n+1}-\sum_{n\geq 1}t(n+1)z^{2n+1}\\
&=-T(z^2)+z-zT(z^2)-z^{-1}\sum_{n\geq 1}t(n+1)z^{2(n+1)}\\
&=-T(z^2)+2z-zT(z^2)-z^{-1}\sum_{n\geq 0}t(n+1)z^{2(n+1)}\\
&=-T(z^2)+2z-zT(z^2)-z^{-1}T(z^2).
\end{align*} Solving for $T(z^2)$ gives $$T(z^2)=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^2}\right),$$ which is the desired result.
\end{proof}
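As a quick sanity check (ours, not from \cite{B}), the functional equation for $T(z)$ can be verified coefficientwise to any fixed degree, in the equivalent polynomial form $(1+z+z^2)\,T(z^2)=-z\,T(z)+2z^2$:
\begin{verbatim}
# Minimal sketch: verify (1 + z + z^2) * T(z^2) = -z*T(z) + 2z^2
# coefficientwise up to degree N, with t(n) from the recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def t(n):
    if n < 2:
        return n
    q, r = divmod(n, 2)
    return -t(q) if r == 0 else -t(q) - t(q + 1)

N = 200
lhs = [0] * (N + 1)
for k in range(N // 2 + 1):                  # t(k) z^{2k} times 1 + z + z^2
    for j in range(3):
        if 2 * k + j <= N:
            lhs[2 * k + j] += t(k)
rhs = [0] * (N + 1)
for n in range(N):                           # -z * T(z)
    rhs[n + 1] -= t(n)
rhs[2] += 2                                  # + 2 z^2
assert lhs == rhs
\end{verbatim}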
Since the proof of Theorem \ref{Bacherconj1} is easiest for the case $e=1$, and this case is indicative of the proof for the general case, we present it here separately.
\begin{proof}[Proof of Theorem \ref{Bacherconj1} for $e=1$] Recall that the sequence $\{u(n)\}_{n\geqslant 0}$ is defined by the relationship $$U(z):=\sum_{n\geqslant 0}u(n)z^n =\frac{\sum_{n\geqslant 0}t(3+n)z^n}{S(z)}.$$ Since $$\sum_{n\geqslant 0}t(3+n)z^n=\frac{1}{z^3}\left(T(z)+z^2-z\right),$$ we have that \begin{equation}\label{bUA}U(z)=\frac{T(z)+z^2-z}{z^3S(z)}=\frac{1}{z^3}\cdot\frac{T(z)}{S(z)}+\frac{z^2-z}{z^3}\cdot\frac{1}{S(z)}.\end{equation} Note that we are interested in a statement about the function $U(z^{2})$. We will use the functional equations for $S(z)$ and $T(z)$ to examine this quantity via \eqref{bUA}. Note that equation \eqref{bUA} gives, sending $z\mapsto z^{2}$ and applying Lemma \ref{AT}, that $$U(z^2)=\frac{1}{z^6}\cdot\frac{T(z^2)}{S(z^2)}+\frac{z^4-z^2}{z^6}\cdot\frac{1}{S(z^2)}=\frac{1}{z^6S(z)}\left(2z-T(z)+(z^3-z)(1+z+z^2)\right).$$ Thus we have that \begin{align*} (-1)^1 S(z)U(z^2)&=\frac{-1}{z^6}\left(2z-T(z)-z-z^2-z^3+z^3+z^4+z^5\right)\\
&=\frac{1}{z^6}\left(T(z)-z+z^2-z^4-z^5\right)\\
&=\frac{1}{z^6}\sum_{n\geq 6}t(n)z^n\\
&=\sum_{n\geq 0}t(3\cdot 2+n)z^n,
\end{align*} which is exactly what we wanted to show.
\end{proof}
For the general case, complications arise in a few different places. The first concerns $T(z^{2^e})$. We will build up the result with a sequence of lemmas to avoid a long and calculation--heavy proof of Theorem \ref{Bacherconj1}.
\begin{lemma}\label{T2e} For all $e\geq 1$ we have $$T(z^{2^e})=T(z)\prod_{i=0}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right)-2\sum_{j=0}^{e-1}z^{2^j}\prod_{i=j}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right).$$
\end{lemma}
\begin{proof} We give a proof by induction. Note that for $e=1$, the right--hand side of the desired equality is $$T(z)\left(\frac{-z}{1+z+z^{2}}\right)-2z\left(\frac{-z}{1+z+z^{2}}\right)=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^{2}}\right)=T(z^2)$$ where the last equality follows from Lemma \ref{AT}.
Now suppose the identity holds for $e-1$. Then, again using Lemma \ref{AT}, we have
\begin{align*} T(z^{2^e}) = T((z^2)^{2^{e-1}})&=T(z^2)\prod_{i=0}^{e-2}\left(\frac{-z^{2^{i+1}}}{1+z^{2^{i+1}}+z^{2^{i+2}}}\right)-2\sum_{j=0}^{e-2}z^{2^{j+1}}\prod_{i=j}^{e-2}\left(\frac{-z^{2^{i+1}}}{1+z^{2^{i+1}}+z^{2^{i+2}}}\right)\\
&=\left(T(z)-2z\right)\left(\frac{-z}{1+z+z^{2}}\right)\prod_{i=1}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)\\
&\qquad\qquad-2\sum_{j=1}^{e-1}z^{2^{j}}\prod_{i=j}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)\\
&=\left(T(z)-2z\right)\prod_{i=0}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)-2\sum_{j=1}^{e-1}z^{2^{j}}\prod_{i=j}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)\\
&=T(z)\prod_{i=0}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right)-2\sum_{j=0}^{e-1}z^{2^{j}}\prod_{i=j}^{e-1}\left(\frac{-z^{2^{i}}}{1+z^{2^{i}}+z^{2^{i+1}}}\right).
\end{align*} Hence, by induction, the identity is true for all $e\geq 1$.
\end{proof}
We will need the following result for our next lemma.
\begin{theorem}[Bacher \cite{B}]\label{B1.4} For all $e\geqslant 1$, we have $$\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)=\frac{(-1)^e}{z(1+z^{2^e})}\sum_{n=0}^{3\cdot 2^e}t(3\cdot 2^e+n)z^n.$$
\end{theorem}
The following lemma is similar to the comment made in Remark 1.5 of \cite{B}.
\begin{lemma} For all $e\geq 1$, we have that $$\sum_{n=0}^{3\cdot 2^e}t(n)z^n=z-z^2+\sum_{k=0}^{e-1}(-1)^kz^{3\cdot 2^k+1}(z^{2^k}+1)\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}}).$$
\end{lemma}
\begin{proof} If $e\geq 1$, then we have \begin{align*} \sum_{n=0}^{3\cdot 2^e}t(n)z^n &= z-z^2+\sum_{k=0}^{e-1}\sum_{n=3\cdot 2^k}^{3\cdot 2^{k+1}}t(n)z^n\\
&= z-z^2+\sum_{k=0}^{e-1}\sum_{n=0}^{3\cdot 2^{k}}t(3\cdot 2^{k}+n)z^{n+3\cdot 2^{k}}\\
&= z-z^2+\sum_{k=0}^{e-1}z^{3\cdot 2^k}\sum_{n=0}^{3\cdot 2^{k}}t(3\cdot 2^{k}+n)z^{n}.
\end{align*} Applying Theorem \ref{B1.4}, we have that $$\sum_{n=0}^{3\cdot 2^e}t(n)z^n=z-z^2+\sum_{k=0}^{e-1}z^{3\cdot 2^k}(-1)^kz(z^{2^k}+1)\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}}),$$ which after some trivial term arrangement gives the result.
\end{proof}
\begin{lemma}\label{tspp} For all $e\geq 1$, we have $$\sum_{n=0}^{3\cdot 2^e} t(n)z^n=2z\sum_{j=0}^{e-1}(-1)^j\prod_{i=0}^{j-1}(1+z^{2^i}+z^{2^{i+1}})-(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}).$$
\end{lemma}
\begin{proof} This lemma is again proved by induction, using the result of the previous lemma. Note that in view of the previous lemma, by subtracting the first term on the right--hand side of the desired equality, it is enough to show that for all $e\geq 1$, we have \begin{multline}\label{lrhs}z-z^2+\sum_{k=0}^{e-1}(-1)^k\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\ =-(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}).\end{multline}
If $e=1$, then the left--hand side of \eqref{lrhs} is $$z-z^2+(z^4+z^3-2)z=-z-z^2+z^4+z^5,$$ and the right--hand side of \eqref{lrhs} is $$-(-1)z(z^2-1)(1+z+z^2)=-z-z^2+z^4+z^5,$$ so that \eqref{lrhs} holds for $e=1$.
Now suppose that \eqref{lrhs} holds for $e-1$. Then \begin{align*} z-z^2+\sum_{k=0}^{e-1}(-1)^k&\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\
&= (-1)^{e-1}\left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-2\right)z\prod_{i=0}^{e-2}(1+z^{2^i}+z^{2^{i+1}})\\
&\qquad\qquad+z-z^2+\sum_{k=0}^{e-2}(-1)^k\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\
&= (-1)^{e-1}\left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-2\right)z\prod_{i=0}^{e-2}(1+z^{2^i}+z^{2^{i+1}})\\
&\qquad\qquad-(-1)^{e-1}z(z^{2^{e-1}}-1)\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\end{align*} Factoring out the product we thus have that
\begin{align*}
z-z^2+\sum_{k=0}^{e-1}(-1)^k&\left(z^{4\cdot 2^k}+z^{3\cdot 2^k}-2\right)z\prod_{i=0}^{k-1}(1+z^{2^i}+z^{2^{i+1}})\\
&=(-1)^e\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\cdot \left(-\left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-2\right)z+z(z^{2^{e-1}}-1)\right)\\
&=-(-1)^ez\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\cdot \left(z^{4\cdot 2^{e-1}}+z^{3\cdot 2^{e-1}}-z^{2^{e-1}}-1\right)\\
&=-(-1)^ez\prod_{i=0}^{e-2} (1+z^{2^i}+z^{2^{i+1}})\cdot (z^{2^e}-1)(1+z^{2^{e-1}}+z^{2^{e}})\\
&=-(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}),
\end{align*} so that by induction, \eqref{lrhs} holds for all $e\geq 1$.
\end{proof}
With these lemmas in place we are in position to prove Theorem \ref{Bacherconj1}.
\begin{proof}[Proof of Theorem \ref{Bacherconj1}] We start by restating \eqref{bUA}; that is $$U(z)=\frac{T(z)+z^2-z}{z^3S(z)}=\frac{1}{z^3}\cdot\frac{T(z)}{S(z)}+\frac{z^2-z}{z^3}\cdot\frac{1}{S(z)}.$$ Sending $z\mapsto z^{2^e},$ we have that $$U(z^{2^e})=\frac{1}{z^{3\cdot 2^e}}\cdot\frac{T(z^{2^e})}{S(z^{2^e})}+\frac{z^{2^{e+1}}-z^{2^e}}{z^{3\cdot 2^e}}\cdot\frac{1}{S(z^{2^e})}=\frac{1}{z^{3\cdot 2^e}}\cdot\frac{T(z^{2^e})}{S(z^{2^e})}+\frac{z^{2^{e+1}}-z^{2^e}}{z^{3\cdot 2^e}z^{2^e-1}S(z)}\cdot\prod_{i=0}^{e-1} (1+z^{2^i}+z^{2^{i+1}}),$$ where we have used the functional equation for $S(z)$ to give the last equality. Using Lemma \ref{T2e} and the functional equation for $S(z)$, we have that \begin{align}\nonumber\frac{T(z^{2^e})}{S(z^{2^e})}&=\frac{T(z)\prod_{i=0}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right)-2\sum_{j=0}^{e-1}z^{2^j}\prod_{i=j}^{e-1}\left(\frac{-z^{2^i}}{1+z^{2^i}+z^{2^{i+1}}}\right)}{S(z)}\cdot\prod_{i=0}^{e-1} \left(\frac{1+z^{2^i}+z^{2^{i+1}}}{z^{2^i}}\right)\\
\label{ToverS}&=(-1)^e\frac{T(z)}{S(z)}-(-1)^e\frac{2z}{S(z)}\sum_{j=0}^{e-1}(-1)^{j}\prod_{i=0}^{j-1}\left({1+z^{2^i}+z^{2^{i+1}}}\right).\end{align} Applying this to the expression for $U(z^{2^e})$ we have, multiplying by $(-1)^eS(z)$, that \begin{multline*}(-1)^eS(z)U(z^{2^e})=\frac{1}{z^{3\cdot 2^e}}\left(T(z)-2z\sum_{j=0}^{e-1}(-1)^{j}\prod_{i=0}^{j-1}\left({1+z^{2^i}+z^{2^{i+1}}}\right)\right.\\ \left.+(-1)^ez(z^{2^e}-1)\prod_{i=0}^{e-1}\left({1+z^{2^i}+z^{2^{i+1}}}\right)\right).\end{multline*} Now by Lemma \ref{tspp}, this reduces to $$(-1)^eS(z)U(z^{2^e})=\frac{1}{z^{3\cdot 2^e}}\left(T(z)-\sum_{n=0}^{3\cdot 2^e} t(n)z^n\right)=\sum_{n\geq 0}t(3\cdot 2^e+n)z^n,$$ which proves the theorem.
\end{proof}
\section{Untwisting Bacher's Second Conjecture}
In this section, we will prove Theorem \ref{Bacherconj2}. For ease of reading we have separated the proofs of the two parts of Theorem \ref{Bacherconj2}.
To prove Theorem \ref{Bacherconj2}(i) we will need the following lemma.
\begin{lemma}\label{pss} For all $k\geq 0$ we have that $$z\prod_{i=0}^{k-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)=\sum_{n=1}^{2^k}s(n)z^n+\sum_{n=1}^{2^k-1}s(2^k-n)z^{n+2^k}.$$
\end{lemma}
\begin{proof} Again, we prove by induction. Note that for $k=0$, the product and the right--most sum are both empty, thus they are equal to $1$ and $0$, respectively. Since $$z=s(1)z=\sum_{n=1}^{2^0}s(n)z^n$$ the lemma is true for $k=0$. To use some nonempty terms, we consider the case $k=1$. Then we have $$z\prod_{i=0}^{1-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)=z+z^2+z^3=\sum_{n=1}^{2^1}s(n)z^n+\sum_{n=1}^{2^1-1}s(2^1-n)z^{n+2^1},$$ so the lemma holds for $k=1$.
Now suppose the theorem holds for $k-1$. Then \begin{align*} z\prod_{i=0}^{k-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)&=\left(1+z^{2^{k-1}}+z^{2^{k}}\right)\cdot z\prod_{i=0}^{k-2}\left(1+z^{2^i}+z^{2^{i+1}}\right)\\
&=\left(1+z^{2^{k-1}}+z^{2^{k}}\right)\left(\sum_{n=1}^{2^{k-1}}s(n)z^n+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k-1}}\right)\\
&=\left(\sum_{n=1}^{2^{k-1}}s(n)z^n+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k-1}}+\sum_{n=1}^{2^{k-1}}s(n)z^{n+2^{k-1}}\right)\\
&\qquad +\left(\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k}}+\sum_{n=1}^{2^{k-1}}s(n)z^{n+2^{k}}\right.\\
&\qquad\qquad\left.+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+3\cdot 2^{k-1}}\right)\\
&=\Sigma_1+\Sigma_2,
\end{align*} where $\Sigma_1$ and $\Sigma_2$ represent the triplets of sums from the previous line (we have grouped the last sums in triplets since we will deal with them that way). Note that we have \begin{multline*}\Sigma_1=\sum_{n=1}^{2^{k-1}}s(n)z^n+s(2^{k-1})z^{2^k}+\sum_{n=1}^{2^{k-1}-1}\left(s(n)+s(2^{k-1}-n)\right)z^{n+2^{k-1}}\\ =\sum_{n=1}^{2^{k-1}}s(n)z^n+s(2^{k})z^{2^k}+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}+n)z^{n+2^{k-1}}=\sum_{n=1}^{2^{k}}s(n)z^n,\end{multline*} where we have used the fact that $s(2n)=s(n)$ and for $n\in[0,2^j]$ the identity $s(2^j+n)=s(2^j-n)+s(n)$ holds (see, e.g., \cite[Theorem 1.2(i)]{B} for details). Similarly, since $2^{k-1}-n=2^k-(n+2^{k-1})$ and $$s(2^{k-1}-n)+s(n)=s(2^{k-1}+n)=s(2^k+n)-s(n)=s(2^k-n)$$ (see Proposition 3.1(i) and Theorem 1.2(i) of \cite{B}), we have that \begin{align*}\Sigma_2&=\sum_{n=1}^{2^{k-1}-1}\left(s(2^{k-1}-n)+s(n)\right)z^{n+2^k}+s(2^{k-1})z^{3\cdot 2^{k-1}}+\sum_{n=1}^{2^{k-1}-1}s(2^{k-1}-n)z^{n+2^{k-1}+2^k}\\ &= \sum_{n=1}^{2^{k-1}-1}s(2^{k}-n)z^{n+2^k}+s(2^{k-1})z^{3\cdot 2^{k-1}}+\sum_{n=2^{k-1}+1}^{2^{k}-1}s(2^{k}-n)z^{n+2^k}\\ &=\sum_{n=1}^{2^k-1}s(2^k-n)z^{n+2^k}.\end{align*} Thus $$\Sigma_1+\Sigma_2=\sum_{n=1}^{2^{k}}s(n)z^n+\sum_{n=1}^{2^k-1}s(2^k-n)z^{n+2^k},$$ and by induction the lemma is proved.
\end{proof}
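The identity of Lemma \ref{pss} is likewise easy to confirm by machine; the following sketch (ours, for illustration only) compares the two sides as coefficient lists for small $k$, with $s(n)$ computed from the recurrence.
\begin{verbatim}
# Minimal sketch: compare the two sides of the lemma as coefficient
# lists for k = 0,...,6, with s(n) computed from the recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    if n < 2:
        return n
    q, r = divmod(n, 2)
    return s(q) if r == 0 else s(q) + s(q + 1)

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

for k in range(7):
    lhs = [0, 1]                             # the polynomial z
    for i in range(k):                       # times 1 + z^{2^i} + z^{2^{i+1}}
        f = [0] * (2 ** (i + 1) + 1)
        f[0] = f[2 ** i] = f[2 ** (i + 1)] = 1
        lhs = poly_mul(lhs, f)
    rhs = [0] * 2 ** (k + 1)
    for n in range(1, 2 ** k + 1):
        rhs[n] += s(n)
    for n in range(1, 2 ** k):
        rhs[n + 2 ** k] += s(2 ** k - n)
    assert lhs == rhs
\end{verbatim}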
\begin{proof}[Proof of Theorem \ref{Bacherconj2}(i)] We denote as before the generating series of the Stern sequence by $S(z)$. Splitting up the sum in the definition of $G(z)$ we see that $$G(z)=\frac{1}{z^{2}}(1-z)-\frac{1}{z S(z)},$$ so that using the functional equation for $S(z)$ we have \begin{align*}G(z^{2^e}) = \frac{1}{z^{2^{e+1}}}(1-z^{2^e})-\frac{1}{z^{2^e} S(z^{2^e})}= \frac{1}{z^{2^{e+1}}}(1-z^{2^e})-\frac{1}{z^{2^{e}}}\cdot\frac{\prod_{i=0}^{e-1}(1+z^{2^i}+z^{2^{i+1}})}{z^{2^{e}-1}S(z)}.
\end{align*} This gives \begin{equation}\label{32rhsi} G(z^{2^e})S(z)=\frac{1}{z^{2^{e+1}}}\left((1-z^{2^e})S(z)-z\prod_{i=0}^{e-1}(1+z^{2^i}+z^{2^{i+1}})\right).\end{equation} We use the previous lemma to deal with the right--hand side of \eqref{32rhsi}; that is, the previous lemma gives that
\begin{align*}
(1-z^{2^e})S(z)-z\prod_{i=0}^{e-1}(1+z^{2^i}+z^{2^{i+1}})&= \sum_{n\geq 1}s(n)z^n-\sum_{n=1}^{2^e}s(n)z^n\\
&\qquad\qquad-\sum_{n\geq 1}s(n)z^{n+2^e}-\sum_{n=1}^{2^e-1}s(2^e-n)z^{n+2^e}\\
&=\sum_{n\geq 2^e+1} s(n)z^n-\sum_{n\geq 2^e}s(n)z^{n+2^e}\\
&\qquad\qquad-\sum_{n=1}^{2^e-1}(s(n)+s(2^e-n))z^{n+2^e}\\
&=\sum_{n\geq 1} s(2^e+n)z^{n+2^e}-\sum_{n\geq 2^e}s(n)z^{n+2^e}\\
&\qquad\qquad-\sum_{n=1}^{2^e-1}s(2^e+n)z^{n+2^e}\\
&=\sum_{n\geq 0} s(2^{e+1}+n)z^{n+2^{e+1}}-\sum_{n\geq 0}s(2^e+n)z^{n+2^{e+1}}.
\end{align*} Dividing the last line by $z^{2^{e+1}}$ gives the desired result. This proves the theorem.
\end{proof}
The proof of the second part of the theorem follows similarly. We will use the following lemma.
\begin{lemma}[Bacher \cite{B}] For $n$ satisfying $1\leq n\leq 2^e$ we have that
\begin{enumerate}
\item[(i)] $t(2^{e+1}+n)+t(2^{e}+n)=(-1)^{e+1}s(n),$
\item[(ii)] $t(2^{e}+n)=(-1)^e(s(2^e-n)-s(n))$,
\item[(iii)] $t(2^{e+1}+n)=(-1)^{e+1}s(2^{e}-n)$.
\end{enumerate}
\end{lemma}
\begin{proof} Parts (i) and (ii) are given in Proposition 3.1 and Theorem 1.2 of \cite{B}, respectively. Part (iii) follows easily from (i) and (ii).
Note that (i) gives that $$t(2^{e+1}+n)=(-1)^{e+1}s(n)-t(2^{e}+n),$$ which by (ii) becomes \begin{equation*}t(2^{e+1}+n)=(-1)^{e+1}s(n)+(-1)^{e+1}s(2^e-n)-(-1)^{e+1}s(n)=(-1)^{e+1}s(2^e-n).\qedhere\end{equation*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{Bacherconj2} (ii)] We denote as before the generating series of the Stern sequence by $S(z)$. Splitting up the sum in the definition of $H(z)$ we see that $$H(z)=\frac{1}{z S(z)}-\frac{1+z}{z^{2}}\cdot\frac{T(z)}{S(z)}.$$ Since we will need to consider $H(z^{2^e})$, we will need to compute $\frac{T(z^{2^e})}{S(z^{2^e})}.$ Fortunately we have done this in the proof of Theorem \ref{Bacherconj1}, in \eqref{ToverS}, and so we use this expression here. Thus, applying the functional equation for $S(z)$, we have that \begin{multline*} H(z^{2^e})=\frac{\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)}{z^{2^{e+1}-1}S(z)}-(-1)^e\left(\frac{1+z^{2^e}}{z^{2^{e+1}}}\right)\frac{T(z)}{S(z)}\\ +(-1)^e\left(\frac{1+z^{2^e}}{z^{2^{e+1}}}\right)\frac{2z}{S(z)}\sum_{j=0}^{e-1}(-1)^j\prod_{i=0}^{j-1}\left(1+z^{2^i}+z^{2^{i+1}}\right),\end{multline*} so that \begin{multline*} (-1)^{e+1}H(z^{2^e})S(z)=\frac{1}{z^{2^{e+1}}}\left((1+z^{2^e})T(z)-(-1)^ez\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right.\\ \left.-2z(1+z^{2^e})\sum_{j=0}^{e-1}(-1)^j\prod_{i=0}^{j-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right).\end{multline*} An application of Lemma \ref{tspp} gives \begin{align*} (-1)^{e+1}H(z^{2^e})S(z)&=\frac{1}{z^{2^{e+1}}}\left((1+z^{2^e})T(z)-(-1)^ez\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right.\\
&\ \ \left.-(1+z^{2^e})\sum_{n=0}^{3\cdot 2^e}t(n)z^n-(-1)^ez(z^{2^e}-1)(1+z^{2^e})\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\right)\\
&=\frac{1}{z^{2^{e+1}}}\left(\mathfrak{S}_1+\mathfrak{S}_2+\mathfrak{S}_3+\mathfrak{S}_4\right),\end{align*} where we have used the $\mathfrak{S}_i$ to indicate the terms in the previous line.
Note that \begin{align*}\mathfrak{S}_1+\mathfrak{S}_3&=(1+z^{2^e})\left(T(z)-\sum_{n=0}^{3\cdot 2^e}t(n)z^n\right)\\
&=(1+z^{2^e})\sum_{n\geq 1}t(3\cdot 2^e+n)z^{n+3\cdot2^e}\\
&=\sum_{n\geq 2^e+1}t(2^{e+1}+n)z^{n+2^{e+1}}+\sum_{n\geq 2^{e+1}+1}t(2^{e}+n)z^{n+2^{e+1}},\end{align*} so that $$\frac{\mathfrak{S}_1+\mathfrak{S}_3}{z^{2^{e+1}}}=\sum_{n\geq 2^e+1}t(2^{e+1}+n)z^{n}+\sum_{n\geq 2^{e+1}+1}t(2^{e}+n)z^{n}.$$ Using Lemma \ref{pss}, we have \begin{align*} \frac{\mathfrak{S}_2+\mathfrak{S}_4}{z^{2^{e+1}}}&=\frac{1}{z^{2^{e+1}}}\left(-(-1)^e-(-1)^e(z^{2^{e+1}}-1)\right)z\prod_{i=0}^{e-1}\left(1+z^{2^i}+z^{2^{i+1}}\right)\\
&=\frac{(-1)^{e+1}}{z^{2^{e+1}}}\cdot z^{2^{e+1}}\left(\sum_{n=1}^{2^e}s(n)z^n+\sum_{n=1}^{2^e-1}s(2^e-n)z^{n+2^e}\right)\\
&=(-1)^{e+1}\left(\sum_{n=1}^{2^e}s(n)z^n+\sum_{n=1}^{2^e-1}s(2^e-n)z^{n+2^e}\right).
\end{align*} Using the preceding lemma and the fact that $t(2^{e+1})+t(2^e)=0$ so that we can add in a zero term, we have that \begin{align*} \frac{\mathfrak{S}_2+\mathfrak{S}_4}{z^{2^{e+1}}}&=\sum_{n=1}^{2^e}(t(2^{e+1}+n)+t(2^e+n))z^n+\sum_{n=1}^{2^e-1}t(2^{e+1}+n)z^{n+2^e}\\
&=\sum_{n=1}^{2^e}(t(2^{e+1}+n)+t(2^e+n))z^n+\sum_{n=2^e+1}^{2^{e+1}-1}t(2^{e}+n)z^{n}\\
&=\sum_{n=0}^{2^e}(t(2^{e+1}+n)+t(2^e+n))z^n+\sum_{n=2^e+1}^{2^{e+1}-1}t(2^{e}+n)z^{n}\\
&=\sum_{n=0}^{2^e}t(2^{e+1}+n)z^n+\sum_{n=0}^{2^{e+1}-1}t(2^{e}+n)z^{n}.
\end{align*} Putting together these results gives \begin{align*}(-1)^{e+1}H(z^{2^e})S(z)&=\frac{1}{z^{2^{e+1}}}\left(\mathfrak{S}_1+\mathfrak{S}_2+\mathfrak{S}_3+\mathfrak{S}_4\right)\\ &=\sum_{n\geq 0}t(2^{e+1}+n)z^n+\sum_{n\geq 0}t(2^{e}+n)z^{n},\end{align*} which proves the theorem.
\end{proof}
\section{Computing with binary expansions}
To gain intuition regarding Bacher's conjectures mentioned in the first section, we found it very useful to understand what happens to the Stern sequence and its twist at sums of powers of $2$. Thus, in this section we prove the following theorem which removes the need to use the recurrences to give the values of the Stern sequence.
\begin{theorem} Let $n\geqslant 4$ and write $n=\sum_{i=0}^m 2^ib_i$, the binary expansion of $n$. Then $$s(n)=\left[\begin{matrix}1 & 1\end{matrix}\right]\left(\prod_{i=1}^{m-1}\left[\begin{matrix}1 & 1-b_i\\ b_i & 1 \end{matrix}\right]\right)\left[\begin{matrix}1\\ b_0 \end{matrix}\right]$$
\end{theorem}
\begin{proof} Note that from the definition of the Stern sequence we easily have that $$s(2a+x)=s(a)+x\cdot s(a+1)\qquad (x\in\{0,1\}).$$ It follows that for each $k\in\{0,1,\ldots,m-1\}$ we have that both $$s\left(\sum_{i=k}^m 2^{i-k}b_i\right)=s\left(\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right)+b_k\cdot s\left(1+\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right)$$ and $$s\left(1+\sum_{i=k}^m 2^{i-k}b_i\right)=(1-b_k)\cdot s\left(\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right)+s\left(1+\sum_{i=k+1}^m 2^{i-(k+1)}b_i\right).$$ Starting with $k=0$ and applying the above equalities, and using the fact that $b_m=1$ so that $s(b_m)=s(b_m+1)=1$ gives the result.
\end{proof}
We have a similar result for the twisted Stern sequence whose proof is only trivially different from the above and so we have omitted it.
\begin{theorem} Let $n\geqslant 4$ and write $n=\sum_{i=0}^m 2^ib_i$, the binary expansion of $n$. Then $$t(n)=(-1)^m\left[\begin{matrix}1 & -1\end{matrix}\right]\left(\prod_{i=1}^{m-1}\left[\begin{matrix}1 & 1-b_i\\ b_i & 1 \end{matrix}\right]\right)\left[\begin{matrix}1\\ b_0 \end{matrix}\right].$$
\end{theorem}
Indeed, both $s(n)$ and $t(n)$ are $2$--regular, and so the fact that $s(n)$ and $t(n)$ satisfy theorems like the two above is provided by Lemma 4.1 of \cite{AS} (note that while the existence is proven, the matrices are not explicitly given there).
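The two matrix formulas are convenient to check mechanically. In the sketch below (ours, not from \cite{AS} or \cite{B}) we read the product with the factor of larger index $i$ standing on the left, which is the order produced by iterating the recurrences from $k=0$ in the proof of the first theorem, and compare the results against the recursive definitions.
\begin{verbatim}
# Minimal sketch: check both matrix-product formulas against the
# recurrences for 4 <= n < 2**12.  The product is taken with the
# factor of larger index i on the left.
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    if n < 2:
        return n
    q, r = divmod(n, 2)
    return s(q) if r == 0 else s(q) + s(q + 1)

@lru_cache(maxsize=None)
def t(n):
    if n < 2:
        return n
    q, r = divmod(n, 2)
    return -t(q) if r == 0 else -t(q) - t(q + 1)

def row_times_factors(v, b, m):
    # multiply the row vector v on the right by [[1, 1-b_i], [b_i, 1]],
    # for i = m-1 down to 1
    for i in range(m - 1, 0, -1):
        v = [v[0] + v[1] * b[i], v[0] * (1 - b[i]) + v[1]]
    return v

def by_binary(n):
    b = [int(d) for d in bin(n)[2:]][::-1]     # b[i] is the bit of 2**i
    m = len(b) - 1
    v = row_times_factors([1, 1], b, m)
    w = row_times_factors([1, -1], b, m)
    return v[0] + v[1] * b[0], (-1) ** m * (w[0] + w[1] * b[0])

assert all(by_binary(n) == (s(n), t(n)) for n in range(4, 2 ** 12))
\end{verbatim}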
\end{document} |
\begin{document}
\title{Orthomodular Lattices Induced by the Concurrency Relation}
\begin{abstract}
We apply to locally finite
partially ordered sets a construction which associates
a complete lattice to a given poset;
the elements of the lattice are
the closed subsets of a closure
operator, defined starting from the \emph{concurrency} relation.
We show that, if the partially ordered set satisfies
a property of local density, i.e.: N-density,
then the associated lattice is also orthomodular.
We then consider occurrence
nets, introduced by C.A. Petri as models of concurrent computations, and define
a family of subsets of the elements of an occurrence net;
we call those subsets \emph{causally closed} because they
can be seen as subprocesses of the whole net which are,
intuitively, closed with respect to the forward and backward local state changes.
We show that, when the net is K-dense, the causally
closed sets coincide with the closed sets induced by the
closure operator defined starting from the
concurrency relation. K-density is a property of partially ordered sets introduced by Petri,
on the basis of former axiomatizations of special relativity theory.
\end{abstract}
\section{Introduction}
We consider models of concurrent behaviours based on partial orders, and in particular
occurrence nets.
Occurrence nets were introduced by C.A. Petri (\cite{P77}) as a model of
non-sequential processes which are physically realizable.
They are a special kind of Petri nets,
where occurrences of local states (also called conditions) and of events are partially
ordered.
The partial order is interpreted as a kind of causal dependence relation
(for background on partial orders and occurrence nets, see~\cite{BF88}).
Any partially ordered set (or \emph{poset} for short) induces a \emph{concurrency} relation, defined as the complement of
the partial order. This relation is symmetric but in general non transitive.
By applying known techniques of lattice theory, one can derive, from such a relation,
a complete, orthocomplemented lattice of subsets of the underlying set.
In a recent paper (\cite{BPR09}) we showed that this technique, applied to an
occurrence net, always gives an orthomodular lattice.
In the present paper, we consider N-density and K-density, properties defined by Petri,
inspired by former axiomatizations of special relativity theory
(see, for example, \cite{C58}).
A partial order is K-dense if any \emph{line} (namely, a maximal subset of pairwise
ordered elements) intersects any \emph{cut} (a maximal subset of pairwise
incomparable elements).
This corresponds to the intuitive idea that in a given global state (represented by a
cut) any sequential subprocess is in some state, given by a point along the
subprocess.
N-density is a sort of local, and weaker, form of K-density.
We show that N-density, together with two local finiteness properties, of a poset is
sufficient to produce an orthomodular lattice.
This generalizes one of the main results of~\cite{BPR09}.
We then restrict attention to degree-finite and interval-finite occurrence nets.
On these nets we introduce the notion of \emph{causally closed set},
which corresponds to a set of elements of the net
which identifies a sort of \emph{causally closed} subprocess,
i.e.: a subprocess uniquely constructible starting from a set of concurrent conditions.
We show that closed sets, as defined in~\cite{BPR09}, are causally closed and prove that,
in the case of K-dense occurrence nets, closed sets and causally closed sets coincide.
The next section collects some definitions and results to be used
later. In Section~\ref{s:chiusico}, we show that N-density and
interval-finiteness suffice to grant the orthomodularity
of the lattice of closed sets generated starting from the
concurrency relation. Section~\ref{s:subproc} introduces the notion
of causally closed set, and shows that in K-dense occurrence nets,
closed sets and causally closed sets coincide.
Proofs are omitted, but can be found in~\cite{bpr09dcmex}.
\section{Preliminary Definitions}\label{s:preldef}
\subsection{Orthomodular Posets and Lattices}
In this section we recall the basic definitions
related to orthomodular posets and lattices.
\begin{definition}
An \emph{orthocomplemented poset}
${\cal P}=\pang{P, \leq, 0, 1,(\,.\,)'}$
is a partially ordered set $(P, \leq)$, equipped with a
minimum and a maximum element, respectively denoted by 0 and
1, and with a map $(\,.\,)':P \rightarrow P$,
such that the following conditions are verified (where
$\lor$ and $\land$ denote, respectively, the least
upper bound and the greatest lower bound with respect to
$ \leq $, when they exist):
$ \forall x, y \in P $
\begin{align*}
&\emph{(i)} \quad (x')' = x;\\
&\emph{(ii)} \quad x \leq y \Rightarrow y' \leq x';\\
&\emph{(iii)} \quad x \land x' = 0 \ \emph{and} \ x \lor x' = 1.
\end{align*}
\end{definition}
The map $(\,.\,)':P \rightarrow P$ is
called an \emph{orthocomplementation} in $P$.
In an orthocomplemented poset, $\land$ and $\lor$, when they exist,
are not independent:
in fact, the so-called De Morgan laws hold:
$(x \lor y)'= x' \land y'$, $(x \land y)'= x' \lor y'.$
In the following, we will sometimes use \emph{meet} and
\emph{join} to denote, respectively, $\land$ and $\lor$.
Meet and join can be extended to families of elements in the obvious way,
denoted by $\bigwedge$ and $\bigvee$.
\begin{figure}
\caption{A finite orthomodular lattice.}
\label{f:ortomodulare}
\end{figure}
A poset $(P,\leq)$ is called \emph{orthocomplete} when it is orthocomplemented and
every countable subset of pairwise orthogonal elements of
$P$ has a least upper bound (two elements $x,y$ are \emph{orthogonal} when $x \leq y'$).
A lattice $\mathcal{L}$ is a poset
in which for any pair of elements meet and join always exist.
A lattice $\mathcal{L}$ is \emph{complete} when the meet and the join
of any subset of $\mathcal{L}$ always exist.
\begin{definition}\cite{BC81}
An \emph{orthomodular poset} ${\cal P}=\pang{P, \leq, 0, 1,(\,.\,)'}$
is an orthocomplete poset which satisfies the condition:
\begin{displaymath}
x \leq y \Rightarrow y = x \lor (y \land x')
\end{displaymath}
\end{definition}
This condition is usually referred to as the \emph{orthomodular} law.
The orthomodular law is weaker than the distributive law.
A lattice $\mathcal{L}$ is called \emph{distributive} if and only if $\forall x,y,z \in \mathcal{L}$
the equalities
$x \land (y \lor z) = (x \land y) \lor (x \land z)$,
$x \lor(y \land z) = (x \lor y) \land (x \lor z)$
hold.
Orthocomplemented distributive lattices are called Boolean algebras.
Orthomodular posets and lattices can therefore be considered as a generalization of Boolean
algebras and have been studied as algebraic models for quantum logic \cite{PP91}.
Any orthomodular lattice can be seen as a family of
partially overlapping Boolean algebras.
Figure~\ref{f:ortomodulare} shows a finite orthomodular lattice.
\subsection{Closure Operators}\label{s:closop}
\begin{definition}
Let $X$ be a set and $\mathbb{P}(X)$ the powerset of $X$.
A map $\mathcal{C}: \mathbb{P}(X) \rightarrow \mathbb{P}(X)$
is a \emph{closure operator} on $X$ if, for all $A,B \subseteq X$,
\begin{align*}
&\emph{(i)} \quad A \subseteq \mathcal{C}(A),\\
&\emph{(ii)} \quad A \subseteq B \Rightarrow \mathcal{C}(A) \subseteq \mathcal{C}(B),\\
&\emph{(iii)} \quad \mathcal{C}(\mathcal{C}(A))=\mathcal{C}(A).
\end{align*}
\end{definition}
A subset $A$ of $X$ is called \emph{closed} with respect to $\mathcal{C}$
if $\mathcal{C}(A)=A$.
If $\mathcal{C}$ is a closure operator on a set $X$,
the family
$\{A \subseteq X \ | \ \mathcal{C}(A)=A\}$
of closed subsets of $X$ forms a complete lattice,
when ordered by inclusion, in which
\begin{displaymath}
\bigwedge \{A_i: i \in I\}= \bigcap_{i \in I} A_i, \quad
\bigvee \{A_i: i \in I\}= \mathcal{C}(\bigcup_{i \in I} A_i).
\end{displaymath}
The proof of this statement can be found in \cite{B79}.
We now describe a well-known construction from binary relations
to closure operators (see, for example, \cite{B79}).
Let $X$ be a set,
and $\alpha \subseteq X \times X$ be a symmetric relation.
Define an operator $(.)^\perp$ on the powerset of $X$:
given $A \subseteq X$
\begin{displaymath}
A^\perp=\{x \in X \ | \ \forall y \in A: (x,y) \in \alpha\}.
\end{displaymath}
By applying twice the operator $(\,.\,)^\perp$,
we get a new operator $C(\,.\,)=(\,.\,)^{\perp \perp}$.
The map $C$ on the powerset of $X$ is a closure operator on $X$.
A subset $A$ of $X$ is called \emph{closed}
with respect to $(\,.\,)^{\perp \perp}$
if $A=A^{\perp \perp}$.
The family $L(X)$
of all closed sets of $X$,
ordered by set inclusion, is a complete lattice.
When $\alpha$ is also irreflexive,
the operator $(\,.\,)^\perp$ applied to elements of $L(X)$
is an orthocomplementation; the structure
$\mathcal{L}(X)=\pang{L(X), \subseteq, \emptyset, X, (\,.\,)^\perp}$
then forms an orthocomplemented complete lattice.
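For a finite set and relation these operators can be computed directly. The following Python sketch is ours and is included only as an illustration; it enumerates the closed sets as orthocomplements of subsets, using the standard fact that every set of the form $B^\perp$ is closed.
\begin{verbatim}
# Minimal sketch: orthocomplement and closure operator induced by a
# symmetric relation alpha (a set of pairs) on a finite set X.
from itertools import combinations

def perp(A, X, alpha):
    # elements of X related by alpha to every element of A
    return frozenset(x for x in X if all((x, y) in alpha for y in A))

def closure(A, X, alpha):
    return perp(perp(A, X, alpha), X, alpha)

def closed_sets(X, alpha):
    # every closed set equals B^perp for some B, so all perps suffice
    X = frozenset(X)
    return {perp(frozenset(B), X, alpha)
            for k in range(len(X) + 1) for B in combinations(X, k)}
\end{verbatim}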
\subsection{Occurrence Nets}
\begin{definition}
A \emph{net} is a triple $ N = (B, E, F) $, where
$B$ and $E$ are countable sets,
$ F \subseteq ( B \times E ) \cup ( E \times B) $, and
\begin{itemize}
\item [\emph{(i)}] $ B \cap E = \emptyset $
\item [\emph{(ii)}] $ \forall e \in E \ \ \exists x, y \in B : (x,e) \in F $ and $(e,y) \in F$.
\end{itemize}
\end{definition}
The elements of $B$ are called \emph{local states} or \emph{conditions},
the elements of $E$ \emph{local changes of state} or \emph{events},
and $F$ is called the \emph{flow relation}.
Note that we allow isolated conditions but not isolated events.
Local states correspond to properties which can be true or false
in a given global state of the system;
potential global states of the system
modeled by $N$ are subsets of local states.
For each $x \in B \cup E$, define
$ \preco{x} = \{ y \in B \cup E \ | \ (y, x) \in F \} $,
$ \postc{x} = \{ y \in B \cup E \ | \ (x, y) \in F \} $.
For $e \in E$, an element $b \in B$ is a \emph{precondition} of
$e$ if $b \in \preco e$; it is a \emph{postcondition} of $e$
if $b \in \postc e$.
Occurrences of events are in accord with the following \emph{firing rule}:
an event may occur if its preconditions are true and
its postconditions are false;
when the event occurs its preconditions become false, while its postconditions become true.
In this way the occurrence of an event changes the current global state
by only changing the \emph{local} states directly connected to the event itself.
Occurrence nets are a special class of nets
used to model non-sequential processes (\cite{P77}, \cite{BF88}) by recording the partial order of the validity of conditions and of the event occurrences in the evolution of the system behaviour.
\begin{definition}\label{reteoccorrenze}
A net $N=(B,E,F)$ is an \emph{occurrence net} iff
\begin{itemize}
\item [\emph{(i)}] $\forall b \in B: |\preco b| \le 1 \ \land \ |\postc b| \le 1$ and
\item [\emph{(ii)}]$\forall x,y \in B \cup E:(x,y) \in F^+ \Rightarrow (y,x) \notin F^+$,
\end{itemize}
where $F^+$ denotes the transitive closure of $F$.
\end{definition}
Definition \ref{reteoccorrenze}(i) means that
an occurrence net does not contain non-deterministic choices,
the idea being that all conflicts are resolved at the behavioural level.
Definition \ref{reteoccorrenze}(ii) means that
an occurrence net contains no cycles,
the idea being that all loops are unfolded at the behavioural level.
Because of Definition \ref{reteoccorrenze}(ii),
the structure $(X,\sqsubseteq)$ derived from an occurrence net $N$
by putting $X=B \cup E$ and $\sqsubseteq=F^*$
($F^*$ denotes the reflexive and transitive closure of $F$)
is a partially ordered set (shortly \emph{poset}).
We will use $\sqsubset$ to denote the associated
strict partial order.
Given a partial order relation $\leq$ on a set P,
we can derive the relations $li= \ \leq \cup \geq$,
and $co=(P \times P)\setminus li$.
We will be interested in such relations derived from
$(X, \sqsubseteq)$. In such case,
intuitively, $x \ li \ y$ means that $x$ and $y$
are connected by a causal relation,
and $x \ co \ y$ means that $x$ and $y$
are causally independent.
The relations $li$ and $co$ are symmetric and not transitive.
Note that
$li$ is a reflexive relation, while $co$ is irreflexive.
Given an element $x \in X$ and a set $S \subseteq X$,
we write $ \ x \ co \ S$
if $\forall y \in S: x \ co \ y$.
Moreover, given two sets $S_1 \subseteq X$ and $S_2 \subseteq X$,
we write $S_1 \ co \ S_2$ if $\forall x \in S_1, \forall y \in S_2: x \ co \ y$.
In the following we will use $x\, co\, y$ or $(x,y) \in co$
indifferently, and similarly for $li$.
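For a finite occurrence net, the relations above can be computed mechanically. The following sketch is ours and assumes a finite net given by its set of elements $X=B\cup E$ and its flow relation $F$ as a set of pairs; it derives $\sqsubseteq\,=F^*$ by a naive fixed-point iteration and then reads off $li$ and $co$.
\begin{verbatim}
# Minimal sketch: from the elements X = B u E and the flow relation F
# (a set of pairs) of a finite occurrence net, compute F*, li and co.
def reach(X, F):
    succ = {x: {y for (a, y) in F if a == x} for x in X}
    r = {x: {x} for x in X}                  # start with the reflexive part
    changed = True
    while changed:                           # naive fixed-point iteration
        changed = False
        for x in X:
            new = set(r[x])
            for y in list(r[x]):
                new |= succ[y]
            if new != r[x]:
                r[x], changed = new, True
    return r

def li_co(X, F):
    r = reach(X, F)
    li = {(a, b) for a in X for b in X if b in r[a] or a in r[b]}
    co = {(a, b) for a in X for b in X if (a, b) not in li}
    return li, co
\end{verbatim}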
On the basis of the flow relation $F$ and its transitive closure $F^+$,
to each $x \in X$ we associate the elements in its \emph{past} and in its \emph{future}, denoted by:
\begin{align*}
&F^-(x) = \{ y \in X \ | \ y \sqsubset x \} \ \textrm{and}\\
&F^+(x) = \{ z \in X \ | \ x \sqsubset z \}
\end{align*}
respectively.
By generalizing to subsets $S$ of $X$,
we denote the \emph{past} and the \emph{future} of $S$ by
\begin{align*}
&F^-(S)=\{x \in X \ |
\ x \notin S, \exists y \in S: x \in F^-(y)\} \ \textrm{and} \\
&F^+(S)=\{x \in X \ |
\ x \notin S, \exists y \in S: x \in F^+(y)\}.
\end{align*}
From the definition, it follows that an element $x$
belongs neither to its future nor to its past.
A \emph{clique} of a binary relation is a set of pairwise related elements.
From the $co$ and $li$ relations one can define \emph{cuts} and
\emph{lines} of a poset $\mathcal{P} = (P, \leq)$ as
maximal cliques of $co$ and $li$, respectively:
\begin{align*}
&\mathrm{Cuts}(\mathcal{P})=
\{c \subseteq P \ | \ c \ \textrm{is a maximal clique of } co \cup id_P \};\\
&\mathrm{Lines}(\mathcal{P})=
\{\ l \subseteq P \ | \ l \ \textrm{is a maximal clique of } li\}.
\end{align*}
Given an occurrence net $N = (B, E, F)$, we will denote by
$\mathrm{Cuts}(N)$ and $\mathrm{Lines}(N)$, respectively,
the set of cuts and the set of lines of the poset associated
to $N$.
We will always assume the Axiom of Choice, so that any clique
of $co$ and of $li$ can be extended to a maximal clique.
In particular, we will denote cliques and maximal cliques of $co$
which contain only conditions by $B$-cosets and $B$-cuts, respectively.
C.A. Petri formalized some properties
which intuitively should hold for posets corresponding to
non-sequential processes which are actually feasible \cite{P80}, see also \cite{BF88}.
In particular, we will consider interval-finiteness,
degree-finiteness, a sort of local density called N-density,
and K-density.
\begin{definition}
$\mathcal{P} = (P, \leq)$ is \emph{interval-finite}
$\Leftrightarrow \forall x,y \in P: |[x,y]|< \infty$,
where $[x,y]=\{z \in P \ | \ x \leq z \leq y\}$.
\end{definition}
For $x, y \in P$, we write $x \lessdot y$ if $x < y$ and,
for all $z \in P$, $x < z \leq y \Rightarrow z = y$.
Let $\preco{x} = \{\, y\,|\, y \lessdot x\,\}$,
and $\postc{x} = \{\, y\,|\, x \lessdot y\,\}$.
\begin{definition}
$\mathcal{P} = (P, \leq)$ is \emph{degree-finite}
$\Leftrightarrow \forall x \in P:
|\preco x| < \infty \ \mathrm{and}\ |\postc x| < \infty$.
\end{definition}
\begin{definition}
$\mathcal{P} = (P, \leq)$ is \emph{N-dense}
$\Leftrightarrow \forall \ x,y,v,w \in P$:
$(y < v$ and $y < x$ and $w < v$ and
$(y \ co \ w \ co \ x \ co \ v)) \Rightarrow \exists z \in P: (y < z < v$
and
$(w \ co \ z \ co \ x))$.
\end{definition}
For a graphical representation of N-density condition see Figure \ref{f:Ndense}.
\begin{figure}
\caption{Illustration of N-density.}
\label{f:Ndense}
\end{figure}
\begin{proposition}\label{p:Netocc_Ndense} \cite{BF88}
Let $(X,\sqsubseteq)$ be the poset associated to an occurrence net $N=(B,E,F),
\ X=(B \cup E)$.
Then $(X,\sqsubseteq)$ is N-dense.
\end{proposition}
K-density is based on the idea of interpreting
cuts as (global) states and lines as sequential subprocesses.
K-density postulates that every occurrence of a subprocess
must be in some state.
\begin{definition}
$\mathcal{P} = (P, \leq) \ \textrm{is} \ \emph{K-dense} \Leftrightarrow
\forall c \in \mathrm{Cuts}(\mathcal{P}),
\forall l \in \mathrm{Lines}(\mathcal{P}):
c \cap l \neq \emptyset$.
\end{definition}
Obviously, in general $|c \cap l| \leq 1$.
An occurrence net $N$ is K-dense if its associated poset
is K-dense.
\section{Closed Sets Induced by the Concurrency Relation}\label{s:chiusico}
In this section we apply the construction recalled at the
end of Section~\ref{s:closop}
to the $co$ relation in partially ordered sets,
and study the resulting algebraic structure of closed sets.
Let $\mathcal{P} = (P, \le)$ be a poset. We can define
an operator on subsets of $P$,
which corresponds to an orthocomplementation,
since $co$ is irreflexive, and by this operator we define closed sets.
\begin{definition}\label{d:coclosed}
Let $S \subseteq P$, then
\begin{align*}
&(i) \quad S^\perp=\{x \in P \ | \ \forall y \in S: x \ co \ y\} \
\textrm{is the \emph{orthocomplement} of} \ S; \\
&(ii) \quad \textrm{if } S = (S^\perp)^\perp,
\textrm{then $S$ is a \emph{closed set} of} \ P.
\end{align*}
\end{definition}
The set $S^\perp$
contains the elements of $P$
which are not in causal relation with any element of $S$.
Obviously, $S \cap S^{\perp} = \emptyset$ for any $S \subseteq P$.
In the following, we sometimes denote
$(S^\perp)^\perp$ by $S^{\perp \perp}$.
Note that
$\forall c \in \mathrm{Cuts}(\mathcal{P})$: $c^\perp= \emptyset$ and $c^{\perp\perp}=P$.
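On a finite poset, the operator $(\,.\,)^\perp$ is straightforward to compute directly
from the order relation. The following Python sketch is only an illustration of
Definition~\ref{d:coclosed} (the poset is assumed to be given by its set of elements
and by the set of pairs $(x,y)$ with $x \le y$):
\begin{verbatim}
# S^perp and its closure S^perp-perp for a finite poset.
# elements: a set; leq: a set of pairs (x, y) meaning x <= y.
def co(x, y, leq):
    # x co y  iff  x and y are distinct and incomparable
    return x != y and (x, y) not in leq and (y, x) not in leq

def perp(S, elements, leq):
    # elements concurrent with every element of S
    return {x for x in elements if all(co(x, y, leq) for y in S)}

def closure(S, elements, leq):
    return perp(perp(S, elements, leq), elements, leq)
\end{verbatim}
A subset $S$ is closed precisely when \texttt{closure} returns $S$ itself.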
\begin{example}
Figure \ref{f:chiuso} shows a closed set $S$ and
its orthocomplement $S^\perp $ on the poset associated
to an occurrence net.
\end{example}
\begin{figure}
\caption{A closed set $S$ and its orthocomplement $S^\perp $.}
\label{f:chiuso}
\end{figure}
A closed set $S$ and its orthocomplement $S^\perp$ share the past
and the future.
\begin{proposition} \cite{BPR09}\label{p:ortocomplementi}
Let $S=S^{\perp \perp}$. Then $F^-(S) = F^-(S^\perp)$ and $F^+(S) = F^+(S^\perp)$.
\end{proposition}
Now we study the algebraic structure induced by
the closure operator defined above.
We call $L(P)$ the collection of closed sets of $\mathcal{P}=(P,\le)$.
By the results on closure operators recalled in Section~\ref{s:closop},
we know that
\[
\mathcal{L}(P)= \pang{L(P), \subseteq, \emptyset, P, (\,.\,)^\perp}
\]
is an orthocomplemented complete lattice, in which the meet is just
set intersection, while the join of a family of elements
is given by set union followed by closure.
In general, the structure $\mathcal{L}(P)$ is not orthomodular, and
a fortiori not distributive, as the following example shows.
\begin{example}
Let us consider the poset $\mathcal{P}=(P, \le)$ shown
in the left side of Fig.~\ref{f:Ndense}
and the closed sets $\{w\}$ and $\{v,w\}$;
note that $\{w\}^\perp=\{x,y\}$.
The orthomodular law is not valid since
$\{w\}\subset\{v,w\}$ and $\{w\}^\perp \land \{v,w\} = \{x,y\} \cap \{v,w\}=\emptyset$,
and hence $\{w\} \lor (\{w\}^\perp \land \{v,w\}) = \{w\} \lor \emptyset = \{w\}$
is not equal to $\{v,w\}$.
\end{example}
The poset considered in the previous example is not N-dense.
It is natural to investigate if there is a relation between
N-density of a poset $P$ and the orthomodularity of the associated
structure of closed sets $\mathcal{L}(P)$.
It turns out that N-density of an interval-finite poset
is a sufficient, though not necessary, condition for orthomodularity.
\begin{theorem}\label{t:Ndense_Orth}
Let $\mathcal{P}=(P,\le)$ be an N-dense, interval-finite poset.
Then $\mathcal{L}(P)$ is an orthomodular lattice.
\end{theorem}
The converse implication does not hold, as the following example shows.
\begin{example}
Figure \ref{f:noNdense_Orth} shows a poset $\mathcal{P}=(P,\le)$ which is not N-dense.
The family of closed sets $L(P)$ forms a Boolean,
hence orthomodular, lattice, whose atoms are
$\{z\}, \{x\},\{w\},\{u\}$.
\end{example}
\begin{figure}
\caption{A poset which is not N-dense, but whose closed sets form a Boolean,
hence orthomodular, lattice.}
\label{f:noNdense_Orth}
\end{figure}
From Theorem \ref{t:Ndense_Orth} and Proposition \ref{p:Netocc_Ndense} it follows
that the family $L(N)$ of closed sets of the poset $(X, \sqsubseteq)$ associated to
an interval-finite occurrence net $N=(B,E,F)$, with $X=(B \cup E)$,
forms an orthomodular lattice
$\mathcal{L}(N)=\pang{L(N), \subseteq, \emptyset, X, (\,.\,)^\perp}$.
This result generalizes, and gives a different proof of, a theorem in \cite{BPR09},
where interval-finite and degree-finite occurrence nets were
considered and where closed sets were called ``causally closed sets''.
\section{Causally Closed Sets in Occurrence Nets}
\label{s:subproc}
In this section we consider only interval-finite and degree-finite occurrence nets.
We introduce particular subsets of elements of such an occurrence net $N=(B,E,F)$,
which we call \virg{causally closed sets}, since they can be interpreted as subprocesses
which are uniquely obtained from a particular $B$-coset.
We show that, for K-dense occurrence nets, \virg{causally closed sets} coincide with
closed sets of the poset associated to the occurrence net, as defined in the previous section.
\begin{definition} \label{d:sottoinsiemi_causalmente_chiusi}
Let $N=(B,E,F)$ be an occurrence net.
$C \subseteq B \cup E$ is a \emph{causally closed set} iff
\begin{itemize}
\item [\emph{(i)}] $\forall e \in E$, $\preco e \subseteq C \Rightarrow
e \in C$,
\item [\emph{(ii)}] $\forall e \in E$, $\postc e \subseteq C \Rightarrow
e \in C$,
\item [\emph{(iii)}] $\forall e \in E$, $e \in C \Rightarrow
\preco e \cup \postc e \subseteq C$,
\item [\emph{(iv)}] $\forall x,y \in C$, $x \ li \ y \Rightarrow
[x,y] \subseteq C$.
\end{itemize}
\end{definition}
A causally closed set is therefore a convex set which, intuitively, is closed with respect to the firing rule:
if it contains an event, then it also contains all of its preconditions and postconditions;
moreover, if it contains all the preconditions or all the postconditions of an event,
then it contains the event itself.
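On a finite occurrence net, conditions (i)--(iv) can be checked mechanically. The
following Python sketch is only an illustration (the net is assumed to be given by its
set of elements, its events, the maps of pre- and postconditions of each event, and the
set of pairs of the induced partial order):
\begin{verbatim}
# Test whether C is a causally closed set of a finite occurrence net.
# X: all elements; E: events; pre[e], post[e]: sets of conditions;
# leq: set of pairs (x, y) with x <= y in the associated poset.
def causally_closed(C, X, E, pre, post, leq):
    def interval(x, y):
        return {z for z in X if (x, z) in leq and (z, y) in leq}
    for e in E:
        if pre[e].issubset(C) and e not in C:                   # (i)
            return False
        if post[e].issubset(C) and e not in C:                  # (ii)
            return False
        if e in C and not (pre[e] | post[e]).issubset(C):       # (iii)
            return False
    for x in C:
        for y in C:
            if (x, y) in leq and not interval(x, y).issubset(C):    # (iv)
                return False
    return True
\end{verbatim}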
\begin{example}
The set $S$ in Figure~\ref{f:chiuso} is causally closed.
The set of grayed elements on the left side of Figure~\ref{f:griglia}
is causally closed.
On the contrary, the set of grayed elements on the right side of Figure~\ref{f:griglia}
is not causally closed because, for example, it contains the preconditions of an event
but it does not contain the event itself.
\end{example}
We call $CC(N)$ the family of causally closed sets of $N$.
It is easy to prove that $CC(N)$ is closed under intersection, and that
$\emptyset \in CC(N)$ and $B \cup E \in CC(N)$.
Hence the family $CC(N)$ forms a complete lattice,
where meet is given by set intersection.
In general this lattice is not orthocomplemented.
\begin{definition}\label{d:costruzione_sottoinsiemi_causalmente_chiusi}
Let $N=(B,E,F)$ be an occurrence net, $X = B \cup E$, and $\mathbb{P}(X)$ be the powerset of $X$.
Define $\phi: \mathbb{P}(X) \rightarrow \mathbb{P}(X)$ as follows:
$\forall A \subseteq X$,
$\phi(A)=\bigcap \{C_i \ | \ C_i \in CC(N)$ and $A \subseteq C_i\}$.
\end{definition}
\noindent
Note that $\phi$ is a closure operator:
$A \subseteq \phi(A)$ for every $A \subseteq X$, and the frontier of
$\phi(A)$ is a subset of conditions,
where the frontier of a subset $A$ of $X$ is the set
$\{ x \in A \ | \ \exists y \in X \setminus A \
\textrm{such that } x F y \ \textrm{or} \ y F x \}$.
Closed sets, as defined in Section \ref{s:chiusico},
are causally closed sets, and this directly follows
from a characterization of closed sets given in \cite{BPR09}.
However, in general a causally closed set is not a closed set;
indeed, the following example shows two cases in which $\phi(A) \neq A^{\perp \perp}$.
\begin{example} Figure \ref{f:griglia} shows a non-K-dense occurrence net.
Remember that
$\forall c \in \mathrm{Cuts}(N)$: $c^\perp= \emptyset$ and $c^{\perp\perp}=X$.
Consider now the B-cut $c_1$ formed by the gray conditions
on the left side of the figure;
$\phi(c_1)=c_1$.
For the B-cut $c_2$ on the right side of the figure,
$c_2 \subset \phi(c_2) \subset X$.
\end{example}
\begin{figure}
\caption{A non-K-dense occurrence net; the B-cuts $c_1$ (left) and $c_2$ (right)
are formed by the gray conditions.}
\label{f:griglia}
\end{figure}
When $A$ is a B-coset of $N=(B,E,F)$,
$\phi(A)$ can be inductively constructed,
as shown in the following.
\begin{definition} \label{d:costruzione_Ai}
Given $A_i \subseteq B \cup E$,
define $A_{i+1}= A_i \cup \{Int(e) \ |
\ e \in E, \preco e \subseteq A_i \lor \postc e \subseteq A_i\}$,
where for $e \in E, Int(e)=\{e\} \cup \preco e \cup \postc e$.
\end{definition}
Note that $A_i \subseteq A_{i+1}$ for every $i$.
\begin{proposition} \label{p:costruzione_chiuso}
Let $A$ be a $B$-coset of $N$.
Then $\bigcup_{i \in \mathbb{N}} A_i = \phi(A)$, where $A_0=A$.
\end{proposition}
The inductive construction of a causally closed set from
a B-coset is shown in Figure~\ref{f:ind_const}.
This motivates the name of
\emph{causally closed} sets and suggests an interpretation of these as
\emph{non-sequential subprocesses}, which are \emph{causally closed}
in the sense that they are uniquely constructed starting from a B-coset of the whole process.
The construction, in fact, simulates the forward and backward run of the system starting from a B-coset:
it adds all the events such that either all their preconditions
or all their postconditions belong to the starting B-coset,
then it proceeds by adding to the set all the pre- and postconditions of the added events,
and so on, until no further event can be added.
\begin{figure}
\caption{Inductive construction of a causally closed set.}
\label{f:ind_const}
\end{figure}
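On a finite occurrence net, the construction of Definition~\ref{d:costruzione_Ai} is a
plain fixed-point computation. The following Python sketch is only an illustration,
using the same representation of the net as in the previous sketch:
\begin{verbatim}
# Inductive construction of phi(A) from a B-coset A: at each step, add
# Int(e) = {e} U pre(e) U post(e) for every event e all of whose
# preconditions, or all of whose postconditions, lie in the current set.
def phi(A, E, pre, post):
    current, previous = set(A), None
    while current != previous:
        previous = set(current)
        for e in E:
            if pre[e].issubset(previous) or post[e].issubset(previous):
                current |= {e} | pre[e] | post[e]
    return current
\end{verbatim}
For a finite net the loop terminates, and by Proposition~\ref{p:costruzione_chiuso} the
result is exactly $\phi(A)$.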
Now we show that for K-dense occurrence nets, $CC(N)$
coincides with the collection $L(N)$ of
closed sets of $N$ generated by the $co$ relation.
Let $N=(B,E,F)$ be a K-dense, interval-finite and degree-finite occurrence net.
\begin{theorem}
Let $Y \subseteq B \cup E$.
Then $Y \in CC(N) \iff Y \in L(N)$.
\end{theorem}
The theorem is an immediate consequence of the following
propositions.
\begin{proposition}\label{p:tagli_Kdensita}
Let $A$ be a B-coset of $N$.
Then $\phi(A)=A^{\perp\perp}$.
\end{proposition}
\begin{proposition}
Let $Y \in CC(N)$, and $A \subseteq Y$ be a B-cut of the subposet induced by $Y$.
Then $\phi(A)=Y$.
\end{proposition}
Therefore, in the case of K-dense, interval-finite
and degree-finite occurrence nets, the families of
causally closed sets and of closed sets coincide.
In particular, any closed set can be uniquely obtained
starting from a B-coset by applying to it the inductive construction as in
Definition~\ref{d:costruzione_Ai}.
\section{Conclusion}
The main contribution of this paper is twofold.
On one hand, we have applied to locally finite
partially ordered sets a construction which associates
a complete lattice to a given poset;
the elements of the lattice are certain subsets of
the poset, precisely the closed subsets of a closure
operator, defined starting from the \emph{co} relation.
We have shown that, if the partially ordered set satisfies
a property of local density, namely N-density,
then the associated lattice is also orthomodular.
Orthomodular posets are studied in the framework of
quantum logic (see, for example, \cite{ql_dcg}).
This suggests interpreting closed sets as propositions in a logical language.
On the other hand, we have focused attention on occurrence
nets as models of concurrent computations, and defined
a family of subsets of the elements of an occurrence net;
we call those subsets \emph{causally closed} because they
can be seen as subprocesses of the whole net which are,
intuitively, closed with respect to the (forward and backward) firing rule
of the net.
We have shown that, when the net is K-dense, the causally
closed sets coincide with the closed sets induced by the
closure operator defined starting from the
\emph{co} relation.
Starting from these first results, we intend to pursue the investigation
of lattices of subprocesses in different directions.
On one hand, we will study further properties of such lattices,
and their relations with domain theory.
On the other hand, we will extend the construction to cyclic
Petri nets, in which a sensible concurrency relation can
be defined even in the absence of a global partial order.
\section*{Acknowledgments}
Work partially supported by MIUR.
\end{document} |
\begin{document}
\title[GRADED ANNIHILATORS AND TIGHT CLOSURE]{GRADED ANNIHILATORS OF MODULES OVER
THE FROBENIUS SKEW POLYNOMIAL RING, AND TIGHT CLOSURE}
\author{RODNEY Y. SHARP}
\address{Department of Pure Mathematics,
University of Sheffield, Hicks Building, Sheffield S3 7RH, United Kingdom\\
{\it Fax number}: 0044-114-222-3769}
\email{[email protected]}
\thanks{The author was partially supported by the
Engineering and Physical Sciences Research Council of the United
Kingdom (Overseas Travel Grant Number EP/C538803/1).}
\subjclass[2000]{Primary 13A35, 16S36, 13D45, 13E05, 13E10;
Secondary 13H10}
\date{\today}
\keywords{Commutative Noetherian ring, prime characteristic,
Frobenius homomorphism, tight closure, (weak) test element, (weak)
parameter test element, skew polynomial ring; local cohomology;
Cohen--Macaulay local ring.}
\begin{abstract}
This paper is concerned with the tight closure of an ideal $\mathfrak{a}$
in a commutative Noetherian local ring $R$ of prime characteristic
$p$. Several authors, including R. Fedder, K.-i. Watanabe, K. E.
Smith, N. Hara and F. Enescu, have used the natural Frobenius
action on the top local cohomology module of such an $R$ to good
effect in the study of tight closure, and this paper uses that
device. The main part of the paper develops a theory of what are
here called `special annihilator submodules' of a left module over
the Frobenius skew polynomial ring associated to $R$; this theory
is then applied in the later sections of the paper to the top
local cohomology module of $R$ and used to show that, if $R$ is
Cohen--Macaulay, then it must have a weak parameter test element,
even if it is not excellent.
\end{abstract}
\maketitle
\setcounter{section}{-1}
\section{\sc Introduction}
\label{in}
Throughout the paper, $R$ will denote a commutative
Noetherian ring of prime characteristic $p$.
We shall always
denote by $f:R\longrightarrow R$ the Frobenius homomorphism, for which $f(r)
= r^p$ for all $r \in R$. Let $\mathfrak{a}$ be an ideal of $R$. The {\em
$n$-th Frobenius power\/} $\mathfrak{a}^{[p^n]}$ of $\mathfrak{a}$ is the ideal of
$R$ generated by all $p^n$-th powers of elements of $\mathfrak{a}$.
We use $R^{\circ}$ to denote the complement in $R$ of the union of
the minimal prime ideals of $R$. An element $r \in R$ belongs to
the {\em tight closure $\mathfrak{a}^*$ of $\mathfrak{a}$\/} if and only if there
exists $c \in R^{\circ}$ such that $cr^{p^n} \in \mathfrak{a}^{[p^n]}$ for
all $n \gg 0$. We say that $\mathfrak{a}$ is {\em tightly closed\/}
precisely when $\mathfrak{a}^* = \mathfrak{a}$. The theory of tight closure was
invented by M. Hochster and C. Huneke \cite{HocHun90}, and many
applications have been found for the theory: see \cite{Hunek96}
and \cite{Hunek98}, for example.
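Two standard observations may help to fix ideas. First, if $\mathfrak{a}$ is generated by
$a_1,\ldots,a_s$, then, since $(a+b)^{p^n} = a^{p^n} + b^{p^n}$ for all $a,b \in R$,
$$
\mathfrak{a}^{[p^n]} = (a_1^{p^n},\ldots,a_s^{p^n}),
$$
an ideal contained in, and in general much smaller than, the ordinary power
$\mathfrak{a}^{p^n}$. Secondly, on taking $c = 1 \in R^{\circ}$ in the definition, one sees
that always $\mathfrak{a} \subseteq \mathfrak{a}^*$; at the other extreme, every ideal of a
regular ring of prime characteristic is tightly closed.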
In the case when $R$ is local, several authors have used, as an
aid to the study of tight closure, the natural Frobenius action on
the top local cohomology module of $R$: see, for example, R.
Fedder \cite{F87}, Fedder and K.-i. Watanabe \cite{FW87}, K. E.
Smith \cite{S94}, N. Hara and Watanabe \cite{HW96} and F. Enescu
\cite{Enesc03}. This device is employed in this paper. The natural
Frobenius action provides the top local cohomology module of $R$
with a natural structure as a left module over the skew polynomial
ring $R[x,f]$ associated to $R$ and $f$. Sections \ref{nt} and
\ref{ga} develop a theory of what are here called `special
annihilator submodules' of a left $R[x,f]$-module $H$. To explain
this concept, we need the definition of the {\em graded
annihilator\/} $\grann_{R[x,f]}H$ of $H$. Now $R[x,f]$ has a
natural structure as a graded ring, and $\grann_{R[x,f]}H$ is
defined to be the largest graded two-sided ideal of $R[x,f]$ that
annihilates $H$. On the other hand, for a graded two-sided ideal
$\mathfrak{B}$ of $R[x,f]$, the {\em annihilator of $\mathfrak{B}$ in $H$\/} is
defined as
$$ \ann_H\mathfrak{B} := \{ h \in H : \theta h = 0 \mbox{~for all~}\theta
\in \mathfrak{B}\}. $$ I say that an $R[x,f]$-submodule of $H$ is a {\em
special annihilator submodule\/} of $H$ if it has the form
$\ann_H\mathfrak{B}$ for some graded two-sided ideal $\mathfrak{B}$ of $R[x,f]$.
There is a natural bijective inclusion-reversing correspondence
between the set of all special annihilator submodules of $H$ and
the set of all graded annihilators of submodules of $H$. A large
part of this paper is concerned with exploration and exploitation
of this correspondence. It is particularly satisfactory in the
case where the left $R[x,f]$-module $H$ is $x$-torsion-free, for
then it turns out that the set of all graded annihilators of
submodules of $H$ is in bijective correspondence with a certain
set of radical ideals of $R$, and one of the main results of \S
\ref{ga} is that this set is finite in the case where $H$ is
Artinian as an $R$-module. The theory that emerges has some
uncanny similarities to tight closure theory. Use is made of the
Hartshorne--Speiser--Lyubeznik Theorem (see R. Hartshorne and R.
Speiser \cite[Proposition 1.11]{HarSpe77}, G. Lyubeznik
\cite[Proposition 4.4]{Lyube97}, and M. Katzman and R. Y. Sharp
\cite[1.4 and 1.5]{KS}) to pass between a general left
$R[x,f]$-module that is Artinian over $R$ and one that is
$x$-torsion-free.
In \S \ref{tc}, this theory of special annihilator submodules is
applied to prove an existence theorem for weak parameter test elements
in a Cohen--Macaulay local ring of characteristic $p$. To explain this,
I now review some definitions concerning weak test elements.
A {\em $p^{w_0}$-weak test element\/} for $R$ (where $w_0$ is a
non-negative integer) is an element $c' \in R^{\circ}$ such that,
for every ideal $\mathfrak{b}$ of $R$ and for $r \in R$, it is the case
that $r \in \mathfrak{b}^*$ if and only if $c'r^{p^n} \in \mathfrak{b}^{[p^n]}$ for
all $n \geq w_0$. A $p^0$-weak test element is called a {\em test
element\/}.
A proper ideal $\mathfrak{a}$ in $R$ is said to be a {\em parameter ideal\/}
precisely when it can be generated by $\height \mathfrak{a}$ elements. Parameter ideals
play an important r\^ole in tight closure theory, and Hochster and Huneke
introduced the concept of parameter test element for $R$.
A {\em $p^{w_0}$-weak parameter test element\/} for $R$
is an element $c' \in R^{\circ}$ such that,
for every parameter ideal $\mathfrak{b}$ of $R$ and for $r \in R$, it is the case
that $r \in \mathfrak{b}^*$ if and only if $c'r^{p^n} \in \mathfrak{b}^{[p^n]}$ for
all $n \geq w_0$. A $p^0$-weak parameter test element is called a {\em
parameter test
element\/}.
It is a result of Hochster and Huneke \cite[Theorem
(6.1)(b)]{HocHun94} that an algebra of finite type over an
excellent local ring of characteristic $p$ has a $p^{w_0}$-weak
test element for some non-negative integer $w_0$; furthermore,
such an algebra which is reduced actually has a test element. Of
course, a (weak) test element is a (weak) parameter test element.
One of the main results of this paper is Theorem \ref{tc.2}, which
shows that every Cohen--Macaulay local ring of characteristic $p$,
even if it is not excellent, has a $p^{w_0}$-weak parameter test
element for some non-negative integer $w_0$.
Lastly, the final \S \ref{en} establishes some connections between
the theory developed in this paper and the $F$-stable primes of F. Enescu
\cite{Enesc03}.
\section{\sc Graded annihilators and related concepts}
\label{nt}
\begin{ntn}
\label{nt.1} Throughout, $R$ will denote a commutative Noetherian
ring of prime characteristic $p$. We shall work with the
skew polynomial ring $R[x,f]$ associated to $R$ and $f$ in the
indeterminate $x$ over $R$. Recall that $R[x,f]$ is, as a left
$R$-module, freely generated by $(x^i)_{i \in \mathbb{N}_0}$ (I use $\mathbb{N}$
and $\mathbb{N}_0$ to denote the set of positive integers and the set of
non-negative integers, respectively),
and so consists
of all polynomials $\sum_{i = 0}^n r_i x^i$, where $n \in \mathbb{N}_0$
and $r_0,\ldots,r_n \in R$; however, its multiplication is subject to the
rule
$$
xr = f(r)x = r^px \quad \mbox{~for all~} r \in R\/.
$$
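For instance, for $r_0,r_1,s_0,s_1 \in R$ this rule gives
$$
(r_0 + r_1x)(s_0 + s_1x) = r_0s_0 + (r_0s_1 + r_1s_0^p)x + r_1s_1^px^2,
$$
and iteration of the rule yields $x^mr = r^{p^m}x^m$ for all $r \in R$ and
$m \in \mathbb{N}_0$, an identity that will be used repeatedly below.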
Note that $R[x,f]$ can be considered as a positively-graded ring
$R[x,f] = \bigoplus_{n=0}^{\infty} R[x,f]_n$, with $R[x,f]_n =
Rx^n$ for all $n \in \mathbb{N}_0$. The ring $R[x,f]$ will be referred to as the {\em
Frobenius skew polynomial ring over $R$.}
Throughout, we shall let $G$ and $H$ denote left $R[x,f]$-modules.
The {\em annihilator of $H$\/} will be denoted by $\ann_{R[x,f]}H$
or $\ann_{R[x,f]}(H)$. Thus
$$
\ann_{R[x,f]}(H) = \{ \theta \in R[x,f] : \theta h = 0 \mbox{~for all~} h \in H\},
$$
and this is a (two-sided) ideal of $R[x,f]$. For a two-sided ideal
$\mathfrak{B}$ of $R[x,f]$, we shall use $\ann_H\mathfrak{B}$ or $\ann_H(\mathfrak{B})$ to
denote the {\em annihilator of $\mathfrak{B}$ in $H$}. Thus
$$
\ann_H\mathfrak{B} = \ann_H(\mathfrak{B}) = \{ h \in
H : \theta h = 0 \mbox{~for all~}\theta \in \mathfrak{B}\},
$$
and this is an $R[x,f]$-submodule of $H$.
\end{ntn}
\begin{defrmks}
\label{nt.2}
We say that the left $R[x,f]$-module $H$ is {\em $x$-torsion-free\/} if $xh = 0$, for
$h \in H$, only when $h = 0$.
The set $\Gamma_x(H) := \left\{ h \in H : x^jh = 0
\mbox{~for some~} j \in \mathbb N \right\}$ is an $R[x,f]$-submodule of
$H$, called the\/ {\em $x$-torsion submodule} of $H$. The
$R[x,f]$-module $H/\Gamma_x(H)$ is $x$-torsion-free.
\end{defrmks}
\begin{rmk}
\label{nt.2b} Let $\mathfrak{B}$ be a subset of $R[x,f]$. It is easy to
see that $\mathfrak{B}$ is a graded two-sided ideal of $R[x,f]$ if and only if
there is an ascending chain $(\mathfrak{b}_n)_{n \in \mathbb{N}_0}$ of ideals of $R$ (which must,
of course, be eventually stationary) such that
$\mathfrak{B} = \bigoplus_{n\in\mathbb{N}_0}\mathfrak{b}_n x^n$. We shall sometimes denote the
ultimate constant value of the ascending sequence $(\mathfrak{b}_n)_{n \in \mathbb{N}_0}$ by
$\lim_{n \rightarrow \infty}\mathfrak{b}_n$.
Note that, in particular, if $\mathfrak{b}$ is an
ideal of $R$, then $\mathfrak{b} R[x,f] = \bigoplus_{n \in \mathbb{N}_0} \mathfrak{b} x^n$ is
a graded two-sided ideal of $R[x,f]$. It was noted in \ref{nt.1}
that the annihilator of a left $R[x,f]$-module is a two-sided
ideal.
\end{rmk}
\begin{lem} [Y. Yoshino {\cite[Corollary (2.7)]{Yoshi94}}]
\label{nt.2c} The ring $R[x,f]$ satisfies the ascending chain condition on
graded two-sided ideals.
\end{lem}
\begin{proof} This can be proved by the argument in Yoshino's proof of
\cite[Corollary (2.7)]{Yoshi94}.
\end{proof}
\begin{defs}
\label{nt.2d} We define the {\em graded annihilator\/} $\grann_{R[x,f]}H$ of
the left $R[x,f]$-module $H$ by
$$
\grann_{R[x,f]}H = \left\{ \sum_{i=0}^n r_ix^i \in R[x,f] : n \in
\mathbb{N}_0, \mbox{~and~} r_i \in R,\, r_ix^i \in \ann_{R[x,f]}H \mbox{~for
all~} i = 0, \ldots, n\right\}.
$$
Thus $\grann_{R[x,f]}H$ is the largest graded two-sided ideal of $R[x,f]$
contained in $\ann_{R[x,f]}H$; also, if we write $\grann_{R[x,f]}H =
\bigoplus_{n\in\mathbb{N}_0}\mathfrak{b}_n x^n$ for a suitable
ascending chain $(\mathfrak{b}_n)_{n \in \mathbb{N}_0}$ of ideals of $R$, then $\mathfrak{b}_0 = (0:_RH)$,
the annihilator of $H$ as an $R$-module.
We say that an $R[x,f]$-submodule of $H$ is a {\em special
annihilator submodule of $H$\/} if it has the form $\ann_H(\mathfrak{B})$
for some {\em graded\/} two-sided ideal $\mathfrak{B}$ of $R[x,f]$. We
shall use $\mathcal{A}(H)$ to denote the set of special
annihilator submodules of $H$.
\end{defs}
\begin{defrmks}
\label{nt.05d} There are some circumstances in which $\grann_{R[x,f]}H =
$\ann_{R[x,f]}H$: for example, this would be the case if $H$ was a $\mathbb Z$-graded
left $R[x,f]$-module. Work of Y. Yoshino in \cite[\S 2]{Yoshi94} provides us
with further examples.
Following Yoshino \cite[Definition (2.1)]{Yoshi94},
we say that $R$ {\em has sufficiently many units\/} precisely when,
for each $n \in \mathbb N$, there exists $r_n \in R$ such that all $n$ elements
$(r_n)^{p^i} - r_n~(1 \leq i \leq n)$ are units of $R$.
Yoshino proved in \cite[Lemma (2.2)]{Yoshi94} that if
either $R$ contains an infinite field,
or $R$ is local and has infinite residue field,
then $R$ has sufficiently many units. He went on to show in
\cite[Theorem (2.6)]{Yoshi94} that, if
$R$ has sufficiently many units,
then each two-sided ideal
of $R[x,f]$ is graded.
Thus if $R$ has sufficiently many units, then $\grann_{R[x,f]}H =
$\ann_{R[x,f]}H$, even if $H$ is not graded.
\end{defrmks}
\begin{lem}
\label{nt.3} Let $\mathfrak{B}$ and $\mathfrak{B}'$ be
graded two-sided ideals of $R[x,f]$ and let $N$ and
$N'$ be $R[x,f]$-submodules of the left $R[x,f]$-module $H$.
\begin{enumerate}
\item If $\mathfrak{B} \subseteq \mathfrak{B}'$, then $\ann_H(\mathfrak{B}) \supseteq \ann_H(\mathfrak{B}')$.
\item If $N \subseteq N'$, then $\grann_{R[x,f]}N \supseteq \grann_{R[x,f]}N'$.
\item We have
$\mathfrak{B} \subseteq \grann_{R[x,f]}\left(\ann_H(\mathfrak{B})\right)$.
\item We have $N \subseteq \ann_H\!\left(\grann_{R[x,f]}N\right).$
\item There
is an order-reversing bijection, $\Gamma\/,$ from the set
$\mathcal{A}(H)$ of special annihilator submo\-d\-ules of $H$ to
the set of graded annihilators of submodules of $H$ given by
$$
\Gamma : N \longmapsto \grann_{R[x,f]}N.
$$
The inverse bijection, $\Gamma^{-1},$ also order-reversing, is
given by
$$
\Gamma^{-1} : \mathfrak{B} \longmapsto \ann_H(\mathfrak{B}).
$$
\end{enumerate} \end{lem}
\begin{proof} Parts (i), (ii), (iii) and (iv) are obvious.
(v) Application of part (i) to the inclusion in part (iii) yields that
$$
\ann_H(\mathfrak{B}) \supseteq \ann_H\!\left(
\grann_{R[x,f]}\left(\ann_H(\mathfrak{B})\right) \right)\mbox{;}
$$
however, part (iv) applied to the $R[x,f]$-submodule
$\ann_H(\mathfrak{B})$ of $H$ yields that
$$
\ann_H(\mathfrak{B}) \subseteq \ann_H\!\left(
\grann_{R[x,f]}\left(\ann_H(\mathfrak{B})\right) \right)\mbox{;}
$$
hence $ \ann_H(\mathfrak{B}) = \ann_H\!\left(
\grann_{R[x,f]}\left(\ann_H(\mathfrak{B})\right) \right). $ Similar
considerations show that
$$
\grann_{R[x,f]}N =
\grann_{R[x,f]}\left(\ann_H\!\left(\grann_{R[x,f]}N\right)\right).
$$
\end{proof}
\begin{rmk}
\label{nt.4} It follows from Lemma \ref{nt.3} that, if $N$ is a special annihilator
submodule of $H$, then it is the annihilator (in $H$) of its own
graded annihilator. Likewise,
a graded two-sided ideal $\mathfrak{B}$ of $R[x,f]$ which is the
graded annihilator of some $R[x,f]$-submodule
of $H$ must be the graded annihilator of $\ann_H(\mathfrak{B})$.
\end{rmk}
Much use will be made of the
following lemma.
\begin{lem}
\label{nt.5} Assume that the left $R[x,f]$-module $G$ is
$x$-torsion-free.
Then there is a radical ideal $\mathfrak{b}$ of $R$ such that
$\grann_{R[x,f]}G = \mathfrak{b} R[x,f] = \bigoplus _{n\in\mathbb{N}_0} \mathfrak{b} x^n$.
\end{lem}
\begin{proof} There is a family
$(\mathfrak{b}_n)_{n\in\mathbb{N}_0}$ of ideals of $R$ such that $\mathfrak{b}_n \subseteq
\mathfrak{b}_{n+1}$ for all $n \in \mathbb{N}_0$ and $\grann_{R[x,f]}G = \bigoplus
_{n\in\mathbb{N}_0} \mathfrak{b}_n x^n$. There exists $n_0 \in \mathbb{N}_0$ such that $\mathfrak{b}_n = \mathfrak{b}_{n_0}$
for all $n \geq n_0$. Set $\mathfrak{b} := \mathfrak{b}_{n_0}$. It is enough for us to show that,
if $r \in R$ and $e \in \mathbb{N}_0$ are such that $r^{p^e} \in \mathfrak{b}$, then $r \in \mathfrak{b}_0$.
To this end, let $h \in \mathbb N$ be such that $h \geq \max \{e,n_0\}$. Then, for all
$g \in G$, we have
$x^hrg = r^{p^h}x^hg = 0$, since $r^{p^h} \in \mathfrak{b} = \mathfrak{b}_h$. Since $G$ is
$x$-torsion-free, it follows that $rG = 0$, so that $r \in \mathfrak{b}_0$.
\end{proof}
\begin{defi}
\label{nt.6} Assume that the left $R[x,f]$-module $G$ is
$x$-torsion-free. An ideal $\mathfrak{b}$ of $R$ is called a {\em
$G$-special $R$-ideal} if there is an $R[x,f]$-submodule
$N$ of $G$ such that $\grann_{R[x,f]}N = \mathfrak{b} R[x,f] = \bigoplus
_{n\in\mathbb{N}_0} \mathfrak{b} x^n$. It is worth noting that, then, the ideal
$\mathfrak{b}$ is just $(0:_RN)$.
We shall denote the set of $G$-special $R$-ideals by
$\mathcal{I}(G)$. Note that, by Lemma \ref{nt.5}, all the ideals
in $\mathcal{I}(G)$ are radical.
\end{defi}
We can now combine together the results of Lemmas \ref{nt.3}(v)
and \ref{nt.5} to obtain the following result, which is
fundamental for the work in this paper.
\begin{prop}
\label{nt.7} Assume that
the left $R[x,f]$-module $G$ is $x$-torsion-free.
There is an order-reversing bijection, $\Delta : \mathcal{A}(G)
\longrightarrow \mathcal{I}(G),$ from the set $\mathcal{A}(G)$ of special
annihilator submodules of $G$ to the set $\mathcal{I}(G)$ of
$G$-special $R$-ideals given by
$$
\Delta : N \longmapsto \left(\grann_{R[x,f]}N\right)\cap R =
(0:_RN).
$$
The inverse bijection, $\Delta^{-1} : \mathcal{I}(G) \longrightarrow
\mathcal{A}(G),$ also order-reversing, is given by
$$
\Delta^{-1} : \mathfrak{b} \longmapsto \ann_G\left(\mathfrak{b} R[x,f]\right).
$$
When $N \in \mathcal{A}(G)$ and $\mathfrak{b} \in \mathcal{I}(G)$ are such
that $\Delta(N) = \mathfrak{b}$, we shall say simply that `{\em $N$ and
$\mathfrak{b}$ correspond}'.
\end{prop}
\begin{cor}
\label{nt.9} Assume that
the left $R[x,f]$-module $G$ is $x$-torsion-free.
Then both the sets $\mathcal{A}(G)$ and $\mathcal{I}(G)$ are
closed under taking arbitrary intersections.
\end{cor}
\begin{proof} Let $\left(N_{\lambda}\right)_{\lambda \in \Lambda}$ be an
arbitrary family of special annihilator submodules of $G$. For
each $\lambda \in \Lambda$, let $\mathfrak{b}_{\lambda}$ be the
$G$-special $R$-ideal corresponding to $N_{\lambda}$. In
view of Proposition \ref{nt.7}, it is sufficient for us to show
that $\bigcap_{\lambda \in \Lambda}N_{\lambda} \in \mathcal{A}(G)$
and $\mathfrak{b} := \bigcap_{\lambda \in \Lambda}\mathfrak{b}_{\lambda} \in
\mathcal{I}(G)$.
To prove these, simply note that
$$
{\textstyle \bigcap_{\lambda \in \Lambda}N_{\lambda} =
\bigcap_{\lambda \in \Lambda}\ann_G\left(\mathfrak{b}_{\lambda}
R[x,f]\right) = \ann_G\left(\left(\sum_{\lambda \in
\Lambda}\mathfrak{b}_{\lambda}\right)\! R[x,f]\right)}
$$
and that $\sum_{\lambda \in \Lambda}N_{\lambda}$ is an
$R[x,f]$-submodule of $G$ such that
$$
{\textstyle \grann_{R[x,f]}\left(\sum_{\lambda \in
\Lambda}N_{\lambda}\right) = \bigcap_{\lambda \in
\Lambda}\grann_{R[x,f]}N_{\lambda} = \bigcap_{\lambda \in
\Lambda}\left(\mathfrak{b}_{\lambda}R[x,f] \right) = \mathfrak{b} R[x,f].}
$$
\end{proof}
\begin{rmk}
\label{nt.8} Suppose
that the left $R[x,f]$-module $G$ is $x$-torsion-free.
It is worth pointing out now that, since $R$ is Noetherian, so
that the set $\mathcal{I}(G)$ of $G$-special $R$-ideals
satisfies the ascending chain condition, it is a consequence of
Proposition \ref{nt.7} that the set $\mathcal{A}(G)$ of special
annihilator submodules of $G$, partially ordered by inclusion,
satisfies the descending chain condition. This is the case even if
$G$ is not finitely generated. Note that (by \cite[Theorem
(1.3)]{Yoshi94}), the (noncommutative) ring $R[x,f]$ is neither
left nor right Noetherian if $\dim R > 0$.
\end{rmk}
\section{\sc Examples relevant to the theory of tight closure}
\label{ex}
The purpose of this section is to present some motivating examples, from the
theory of tight closure, of some of the concepts introduced in \S \ref{nt}.
Throughout this section, we shall again employ the notation of
\ref{nt.1}, and $\mathfrak{a}$ will always denote an ideal of $R$.
Recall that the {\em Frobenius closure $\mathfrak{a}^F$ of $\mathfrak{a}$} is the ideal of $R$
defined by
$$ \mathfrak{a}^F := \big\{ r
\in R : \mbox{there exists~} n \in \mathbb{N}_0 \mbox{~such that~} r^{p^n}
\in \mathfrak{a}^{[p^n]}\big\}.
$$
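Note that $\mathfrak{a} \subseteq \mathfrak{a}^F \subseteq \mathfrak{a}^*$: if
$r^{p^{n_0}} \in \mathfrak{a}^{[p^{n_0}]}$ for some $n_0 \in \mathbb{N}_0$, then
$r^{p^{n}} \in \mathfrak{a}^{[p^{n}]}$ for all $n \geq n_0$, so that one may take
$c = 1 \in R^{\circ}$ in the definition of tight closure.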
\begin{rmk}
\label{ex.0} Let $(\mathfrak{b}_n)_{n\in\mathbb{N}_0}$ be a family of ideals of $R$
such that $\mathfrak{b}_n \subseteq f^{-1}(\mathfrak{b}_{n+1})$ for all $n \in \mathbb{N}_0$.
Then $\bigoplus_{n\in\mathbb{N}_0}\mathfrak{b}_nx^n$ is a graded left ideal of
$R[x,f]$, and so we may form the graded left $R[x,f]$-module $
R[x,f]/\bigoplus_{n\in\mathbb{N}_0}\mathfrak{b}_nx^n$. This may be viewed as
$\bigoplus_{n\in\mathbb{N}_0}R/\mathfrak{b}_n$, where, for $r \in R$ and $n \in
\mathbb{N}_0$, the result of multiplying the element $r + \mathfrak{b}_n$ of the
$n$-th component by $x$ is the element $r^{p} + \mathfrak{b}_{n+1}$ of the
$(n+1)$-th component.
Note that the left $R[x,f]$-module
$R[x,f]/\bigoplus_{n\in\mathbb{N}_0}\mathfrak{b}_nx^n$ is $x$-torsion-free if and
only if $\mathfrak{b}_n = f^{-1}(\mathfrak{b}_{n+1})$ for all $n \in \mathbb{N}_0$, that is,
if and only if $(\mathfrak{b}_n)_{n\in\mathbb{N}_0}$ is an $f$-sequence in the sense of
\cite[Definition 4.1(ii)]{SN}.
\end{rmk}
\begin{ntn}
\label{ex.0a} Since $R[x,f]\mathfrak{a} =
\bigoplus_{n\in\mathbb{N}_0}\mathfrak{a}^{[p^n]}x^n$, we can view the graded left
$R[x,f]$-module $$R[x,f]/R[x,f]\mathfrak{a}$$ as
$\bigoplus_{n\in\mathbb{N}_0}R/\mathfrak{a}^{[p^n]}$ in the manner described in
\ref{ex.0}. We shall denote the graded left $R[x,f]$-module
$\bigoplus_{n\in\mathbb{N}_0}R/\mathfrak{a}^{[p^n]}$ by $H(\mathfrak{a})$.
Recall from \cite[4.1(iii)]{SN} that
$\left((\mathfrak{a}^{[p^n]})^F\right)_{n \in \mathbb{N}_0}$ is the {\em canonical
$f$-sequence associated to $\mathfrak{a}$}. We shall denote
$\bigoplus_{n\in\mathbb{N}_0}R/(\mathfrak{a}^{[p^n]})^F$, considered as a graded
left $R[x,f]$-module in the manner described in \ref{ex.0}, by
$G(\mathfrak{a})$. Note that $G(\mathfrak{a})$ is $x$-torsion-free.
\end{ntn}
\begin{lem}
\label{ex.0b} With the notation of\/ {\rm \ref{ex.0a}}, we have
$\Gamma_x(H(\mathfrak{a})) = \bigoplus_{n\in\mathbb{N}_0}(\mathfrak{a}^{[p^n]})^F/\mathfrak{a}^{[p^n]}$,
so that there is an isomorphism of graded left $R[x,f]$-modules
$$
H(\mathfrak{a})/ \Gamma_x(H(\mathfrak{a})) \cong G(\mathfrak{a}).
$$
\end{lem}
\begin{proof} Let $n \in \mathbb{N}_0$ and $r \in R$.
Then the element $r + \mathfrak{a}^{[p^n]}$ of the $n$-th
component of $H(\mathfrak{a})$ belongs to
$\Gamma_x(H(\mathfrak{a}))$
if and only if there exists $m \in \mathbb{N}_0$ such
that $x^m(r + \mathfrak{a}^{[p^n]})= r^{p^m} + (\mathfrak{a}^{[p^{n}]})^{[p^{m}]} = 0$, that is,
if and only if $r \in (\mathfrak{a}^{[p^n]})^F$.
\end{proof}
\begin{prop}
\label{ex.1} We use the notation of\/ {\rm \ref{ex.0a}}.
Suppose that there exists a $p^{w_0}$-weak test element $c$ for
$R$, for some $w_0 \in \mathbb{N}_0$. Then
\begin{enumerate}
\item $ \ann_{H(\mathfrak{a})}\left(
\bigoplus_{n\geq w_0} Rcx^n\right) =
\bigoplus_{n\in\mathbb{N}_0}(\mathfrak{a}^{[p^n]})^*/\mathfrak{a}^{[p^n]}\mbox{;} $
\item $
\ann_{G(\mathfrak{a})}\left(\bigoplus_{n\in
\mathbb{N}_0} Rcx^n\right) =
\bigoplus_{n\in\mathbb{N}_0}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F. $
\end{enumerate}
\end{prop}
\begin{proof} (i) Let $j \in \mathbb{N}_0$ and $r \in R$.
Then the element $r + \mathfrak{a}^{[p^j]}$ of the $j$-th
component of $H(\mathfrak{a})$ belongs to
$\ann_{H(\mathfrak{a})}\left(\bigoplus_{n\geq w_0} Rcx^n\right)$ if and only if $cr^{p^n} \in
(\mathfrak{a}^{[p^j]})^{[p^n]}$ for all $n \geq w_0$, that is, if and only if $r \in
(\mathfrak{a}^{[p^j]})^*$.
(ii) By part (i),
$$
{\textstyle \bigoplus_{n\in \mathbb{N}_0}}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F
\subseteq \ann_{G(\mathfrak{a})}\left({\textstyle \bigoplus_{n\geq w_0}}
Rcx^n\right).
$$
Note that $\ann_{G(\mathfrak{a})}\left(\bigoplus_{n\geq w_0} Rcx^n\right)$
is a graded $R[x,f]$-submodule
of $G(\mathfrak{a})$. Let $j \in \mathbb{N}_0$ and $r \in R$ be such that
$r + (\mathfrak{a}^{[p^j]})^F$ belongs to the $j$-th
component of $\ann_{G(\mathfrak{a})}\left(\bigoplus_{n\geq w_0} Rcx^n\right)$. Then,
for all $n \geq w_0$, we have $cr^{p^n} \in (\mathfrak{a}^{[p^{j+n}]})^F =
((\mathfrak{a}^{[p^{j}]})^{[p^{n}]})^F$. Therefore, by \cite[Lemma 0.1]{KS}, we
have $r \in (\mathfrak{a}^{[p^{j}]})^*$.
It follows from this that
$$
{\textstyle \bigoplus_{n\in \mathbb{N}_0}}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F
= \ann_{G(\mathfrak{a})}\left({\textstyle \bigoplus_{n\geq w_0}} Rcx^n\right),
$$
and so $\bigoplus_{n\in \mathbb{N}_0}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F$ is a special annihilator
submodule of the $x$-torsion-free graded left $R[x,f]$-module $G(\mathfrak{a})$.
Let $\mathfrak{b}$ be the $G(\mathfrak{a})$-special $R$-ideal corresponding to
this member of $\mathcal{A}(G(\mathfrak{a}))$. The above-displayed equation shows that
$Rc \subseteq \mathfrak{b}$. Hence, by Proposition \ref{nt.7},
\begin{align*}
{\textstyle \bigoplus_{n\in \mathbb{N}_0}}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F &
= \ann_{G(\mathfrak{a})}\left({\textstyle \bigoplus_{n\geq w_0}} Rcx^n\right)
\supseteq
\ann_{G(\mathfrak{a})}\left({\textstyle \bigoplus_{n\in \mathbb{N}_0}} Rcx^n\right) \\
& \supseteq \ann_{G(\mathfrak{a})}\left({\textstyle \bigoplus_{n\in \mathbb{N}_0}} \mathfrak{b}
x^n\right) = {\textstyle \bigoplus_{n\in
\mathbb{N}_0}}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F.
\end{align*}
\end{proof}
\begin{defi}
\label{ex.1t} The {\em weak test ideal $\tau'(R)$
of $R$\/} is defined to be
the ideal generated by $0$ and all weak test elements for $R$. (By a
`weak test element' for $R$ we mean a $p^{w_0}$-weak
test element for $R$ for some $w_0 \in \mathbb{N}_0$.)
It is easy to see that each element of $\tau'(R) \cap R^{\circ}$
is a weak test element for $R$.
\end{defi}
\begin{thm}
\label{ex.2} We use the notation of\/ {\rm \ref{ex.0a}}.
Suppose that there exists a $p^{w_0}$-weak test element $c$ for
$R$, for some $w_0 \in \mathbb{N}_0$.
Let $H$ be the positively-graded left $R[x,f]$-module given by
$$
H := \bigoplus_{\mathfrak{a} \textit{~is an ideal of~} R}H(\mathfrak{a}) =
\bigoplus_{\mathfrak{a} \textit{~is an ideal of~}
R}\Big(\textstyle{\bigoplus_{n\in\mathbb{N}_0}} R/\mathfrak{a}^{[p^n]}\Big).
$$
Set $T := {\displaystyle \bigoplus_{\mathfrak{a} \textit{~is an ideal of~}
R}} \Big(\bigoplus_{n\in\mathbb{N}_0}(\mathfrak{a}^{[p^n]})^*/\mathfrak{a}^{[p^n]}\Big)$.
\begin{enumerate}
\item Then
$
T
= \ann_{H}\left(
\bigoplus_{n\geq w_0} Rcx^n\right),
$
and so is a special annihilator submodule of $H$.
\item Write $\grann_{R[x,f]}T =
\bigoplus_{n \in \mathbb{N}_0} \mathfrak{c}_nx^n$ for a suitable ascending
chain $(\mathfrak{c}_n)_{n\in\mathbb{N}_0}$ of ideals of $R$. Then $\lim_{n \rightarrow \infty}\mathfrak{c}_n
= \tau'(R)$, the weak test ideal for $R$.
\item Furthermore, $T$ contains every special annihilator
submodule $T'$ of $H$ for which the graded annihilator
$\grann_{R[x,f]}T' = \bigoplus_{n \in \mathbb{N}_0} \mathfrak{b}_nx^n$ has\/
$\height (\lim_{n \rightarrow \infty}\mathfrak{b}_n) \geq 1$. (The height
of the improper ideal $R$ is considered to be $\infty$.)
\end{enumerate}
\end{thm}
\begin{proof} (i) This is immediate from Proposition \ref{ex.1}(i).
(ii) Write $\mathfrak{c} := \lim_{n \rightarrow \infty}\mathfrak{c}_n$.
Since there exists a weak test element for $R$, the ideal $\tau'(R)$
can be generated by finitely many weak test elements for $R$, say by
$c_i~(i = 1, \ldots, h)$, where $c_i$ is a $p^{w_i}$-weak test element for
$R$ (for $i= 1, \ldots, h$). Set $\widetilde{w} = \max\{w_1, \ldots, w_h\}$.
It is immediate from part (i) that $\bigoplus_{n \geq \widetilde{w}}\tau'(R)x^n
\subseteq \grann_{R[x,f]}T$, and so $\tau'(R) \subseteq \mathfrak{c}$. Therefore
$\height \mathfrak{c} \geq 1$, so that $\mathfrak{c} \cap R^{\circ} \neq \emptyset$ by
prime avoidance, and $\mathfrak{c}$ can be generated by its elements in $R^{\circ}$.
There exists $m_0 \in \mathbb{N}_0$ such that $\mathfrak{c}_n = \mathfrak{c}$ for all $n \geq m_0$. Let
$c' \in \mathfrak{c} \cap R^{\circ}$. Thus $T$ is annihilated by $c'x^n$ for
all $n \geq m_0$; therefore, for each ideal $\mathfrak{a}$ of $R$, and for all $r \in \mathfrak{a}^*$,
we have $c'r^{p^n} \in \mathfrak{a}^{[p^n]}$ for all $n \geq m_0$, so that
$c'$ is a $p^{m_0}$-weak test element for $R$. Therefore $c' \in \tau'(R)$.
Since $\mathfrak{c}$ can be generated by elements in $\mathfrak{c} \cap R^{\circ}$, it
follows that $\mathfrak{c} \subseteq \tau'(R)$.
(iii) Since $T' = \ann_{H}\Big(\bigoplus_{n \in \mathbb{N}_0} \mathfrak{b}_nx^n\Big)$, it follows that
$$
T' = \bigoplus_{\mathfrak{a} \text{~is an ideal of~} R} \Big({\textstyle
\bigoplus_{n\in\mathbb{N}_0}}\mathfrak{a}_n/\mathfrak{a}^{[p^n]}\Big),
$$
where, for each ideal $\mathfrak{a}$ of $R$ and each $n \in \mathbb{N}_0$, the ideal $\mathfrak{a}_n$ of
$R$ contains $\mathfrak{a}^{[p^n]}$. Suppose that $\lim_{n \rightarrow \infty}\mathfrak{b}_n = \mathfrak{b}$
and that $v_0 \in\mathbb{N}_0$ is such
that $\mathfrak{b}_n = \mathfrak{b}$ for all $n \geq v_0$. Since $\height \mathfrak{b} \geq 1$, there
exists $\overline{c} \in \mathfrak{b}\cap R^{\circ}$, by prime avoidance.
Let $\mathfrak{a}$ be an ideal of $R$
and let $n \in \mathbb{N}_0$.
Then, for each $r \in \mathfrak{a}_n$, the element $r + \mathfrak{a}^{[p^n]}$ of the
$n$-th component of $H(\mathfrak{a})$ is annihilated by $\overline{c}x^j$ for all $j \geq
v_0$. This means that $\overline{c}r^{p^j} \in (\mathfrak{a}^{[p^n]})^{[p^j]}$ for all
$j \geq v_0$, so that $r \in (\mathfrak{a}^{[p^n]})^*$. Therefore $T' \subseteq T$.
\end{proof}
\begin{thm}
\label{ex.3} We use the notation of\/ {\rm \ref{ex.0a}}.
Suppose that there exists a $p^{w_0}$-weak test element $c$ for
$R$, for some $w_0 \in \mathbb{N}_0$.
Let $G$ be the positively-graded $x$-torsion-free left $R[x,f]$-module given by
$$
G := \bigoplus_{\mathfrak{a} \textit{~is an ideal of~} R}G(\mathfrak{a}) =
\bigoplus_{\mathfrak{a} \textit{~is an ideal of~} R}\Big({\textstyle
\bigoplus_{n\in\mathbb{N}_0}} R/(\mathfrak{a}^{[p^n]})^F\Big).
$$
Set $U := {\displaystyle \bigoplus_{\mathfrak{a} \textit{~is an ideal of~}
R}} \Big(\bigoplus_{n\in\mathbb{N}_0}(\mathfrak{a}^{[p^n]})^*/(\mathfrak{a}^{[p^n]})^F\Big)$.
\begin{enumerate}
\item Then $ U = \ann_{G}\left( \bigoplus_{n\in\mathbb{N}_0} Rcx^n\right),
$ and so is a special annihilator submodule of $G$. \item Let
$\mathfrak{b}$ be the $G$-special $R$-ideal corresponding to $U$. Then
$\mathfrak{b}$ is the smallest member of $\mathcal{I}(G)$ of positive
height.
\end{enumerate}
\end{thm}
\begin{proof} (i) This is immediate from Proposition \ref{ex.1}(ii).
(ii) Note that $Rc \subseteq \mathfrak{b}$, by part (i); therefore $\height
\mathfrak{b} \geq 1$. To complete the proof, we show that, if $\mathfrak{b}' \in
\mathcal{I}(G)$ has $\height \mathfrak{b}' \geq 1$, then $\mathfrak{b} \subseteq
\mathfrak{b}'$. By prime avoidance, there exists $\widetilde{c} \in \mathfrak{b}'
\cap R^{\circ}$. Let $U' \in \mathcal{A}(G)$ correspond to $\mathfrak{b}'$
(in the correspondence of Proposition \ref{nt.7}). Since $U' =
\ann_G\mathfrak{b}' R[x,f]$, it follows that
$$
U' = \bigoplus_{\mathfrak{a} \text{~is an ideal of~} R} \Big({\textstyle
\bigoplus_{n\in\mathbb{N}_0}}\mathfrak{a}_n/(\mathfrak{a}^{[p^n]})^F\Big),
$$
where, for each ideal $\mathfrak{a}$ of $R$ and each $n \in \mathbb{N}_0$, the ideal
$\mathfrak{a}_n$ of $R$ contains $(\mathfrak{a}^{[p^n]})^F$. Let $\mathfrak{a}$ be an ideal
of $R$ and let $n \in \mathbb{N}_0$. Then, for each $r \in \mathfrak{a}_n$, the
element $r + (\mathfrak{a}^{[p^n]})^F$ of the $n$-th component of $G(\mathfrak{a})$
is annihilated by $\widetilde{c}x^j$ for all $j \geq 0$. This
means that $\widetilde{c}r^{p^j} \in ((\mathfrak{a}^{[p^n]})^{[p^j]})^F$
for all $j \geq 0$, so that $r \in (\mathfrak{a}^{[p^n]})^*$ by \cite[Lemma
0.1(i)]{KS}. Therefore $U' \subseteq U$, so that $\mathfrak{b}' \supseteq
\mathfrak{b}$.
\end{proof}
\section{\sc Properties of
special annihilator submodules in the $x$-torsion-free case}
\label{ga}
Throughout this section, we shall employ the notation of
\ref{nt.1}. The aim is to develop the theory of special
annihilator submodules of an $x$-torsion-free left
$R[x,f]$-module.
\begin{lem}
\label{ga.1} Suppose that
$G$ is $x$-torsion-free. Let $N$ be a special annihilator submodule of
$G$. Then the left $R[x,f]$-module $G/N$ is also
$x$-torsion-free.
\end{lem}
\begin{proof} By Lemma \ref{nt.5} and
Proposition \ref{nt.7}, there is a radical ideal $\mathfrak{b}$ of $R$ such
that $N = \ann_G\left(\mathfrak{b} R[x,f]\right).$ Let $g \in G$ be such
that $xg \in N$. Therefore, for all $r \in \mathfrak{b}$ and all $j \in
\mathbb{N}_0$, we have $rx^j(xg) = 0$, that is $rx^{j+1}g = 0$. Also, for
$r \in \mathfrak{b}$, since $r(xg) = 0$, we have $x(rg) = r^pxg = 0$, and
so $rg = 0$ because $G$ is $x$-torsion-free. Thus $g \in
\ann_G\left(\bigoplus_{n\in\mathbb{N}_0}\mathfrak{b} x^n\right) = N$. It follows
that $G/N$ is $x$-torsion-free.
\end{proof}
\begin{lem}
\label{ga.1a} Suppose that $G$ is $x$-torsion-free. Let $\mathfrak{a}$ be
an ideal of $R$, and set $L := \ann_G\left(\mathfrak{a} R[x,f]\right) \in
\mathcal{A}(G)$. Then $L =
\ann_G\left(\sqrt{\mathfrak{a}} R[x,f]\right)$.
\end{lem}
\begin{proof} Let $\mathfrak{d} \in \mathcal{I}(G)$ correspond to $L$.
Note that $\mathfrak{d}$ is radical, by Lemma \ref{nt.5}; also, $\mathfrak{a}
\subseteq \mathfrak{d}$. Hence $ \mathfrak{a} \subseteq \sqrt{\mathfrak{a}} \subseteq
\sqrt{\mathfrak{d}} = \mathfrak{d}$. Since $\ann_G\left(\mathfrak{a} R[x,f]\right) =
\ann_G\left(\mathfrak{d} R[x,f]\right)$, we must have $\ann_G\left(\mathfrak{a}
R[x,f]\right) = \ann_G\left(\sqrt{\mathfrak{a}} R[x,f]\right).$
\end{proof}
\begin{prop}
\label{ga.3} Suppose that $G$ is $x$-torsion-free. Let $\mathfrak{a}$ be an
ideal of $R$, and set $$L := \ann_G\left(\mathfrak{a} R[x,f]\right) \in
\mathcal{A}(G).$$ Note that $G/L$ is $x$-torsion-free, by Lemma\/
{\rm \ref{ga.1}}. Let $N$ be an $R$-submodule of $G$ such that $L
\subseteq N \subseteq G$.
\begin{enumerate}
\item If $N = \ann_G\left(\mathfrak{b} R[x,f]\right) \in \mathcal{A}(G)$,
where $\mathfrak{b}$ is an ideal of $R$ contained in $\mathfrak{a}$, then
$$N/L = \ann_{G/L}\left((\mathfrak{b}:\mathfrak{a})R[x,f]\right) \in \mathcal{A}(G/L).$$
Furthermore, if the ideal in $\mathcal{I}(G)$ corresponding to $N$
is $\mathfrak{b}$, then $(\mathfrak{b}:\mathfrak{a})$ is the ideal in $\mathcal{I}(G/L)$
corresponding to $N/L$.
\item If $N/L = \ann_{G/L}\left(\mathfrak{c} R[x,f]\right) \in
\mathcal{A}(G/L)$, where $\mathfrak{c}$ is an ideal of $R$, then
$$N = \ann_{G}\left(\mathfrak{a}\mathfrak{c} R[x,f]\right)
= \ann_{G}\left((\mathfrak{a}\cap\mathfrak{c}) R[x,f]\right) \in \mathcal{A}(G).$$
Furthermore, if $\mathfrak{a}$ is the ideal in $\mathcal{I}(G)$
corresponding to $L$ and $\mathfrak{c}$ is the ideal in $\mathcal{I}(G/L)$
corresponding to $N/L$, then $\mathfrak{a}\cap\mathfrak{c}$ is the ideal in
$\mathcal{I}(G)$ corresponding to $N$.
\item There is an order-preserving bijection from $\{ N \in
\mathcal{A}(G) : N \supseteq L\}$ to $\mathcal{A}(G/L)$ given by
$N \mapsto N/L$.
\end{enumerate}
\end{prop}
\begin{proof} By Lemma \ref{ga.1}, the
left $R[x,f]$-module $G/L$ is $x$-torsion-free.
(i) Let $g \in N$. Let $i,j \in \mathbb{N}_0$ and $r \in (\mathfrak{b}:\mathfrak{a})$, $u \in
\mathfrak{a}$. Then $ux^i(rx^jg) = ur^{p^i}x^{i+j}g = 0$ because $ ur^{p^i}
\in \mathfrak{b}$ and $\mathfrak{b} x^{i+j}$ annihilates $N$. This is true for all
$i \in \mathbb{N}_0$ and $u \in \mathfrak{a}$. Therefore $ rx^jg \in \ann_G\left(\mathfrak{a}
R[x,f]\right) = L. $ Since this is true for all $j \in \mathbb{N}_0$ and $r
\in (\mathfrak{b}:\mathfrak{a})$, we see that $N/L \subseteq
\ann_{G/L}\left((\mathfrak{b}:\mathfrak{a}) R[x,f]\right)$.
Now suppose that $g \in G$ is such that $g + L \in
\ann_{G/L}\left((\mathfrak{b}:\mathfrak{a}) R[x,f]\right)$. Let $r \in \mathfrak{b}$ and $i
\in \mathbb{N}_0$. Then $r \in (\mathfrak{b}:\mathfrak{a})$ and so $rx^{i+1}$ annihilates $g
+ L \in G/L$. Hence $rx^{i+1}g \in L$. Since $\mathfrak{b} \subseteq \mathfrak{a}$,
we see that $r^{p-1}rx^{i+1}g = 0$, so that $xrx^ig = 0$. As $G$
is $x$-torsion-free, it follows that $rx^ig = 0$. As this is true
for all $r \in \mathfrak{b}$ and $i \in \mathbb{N}_0$, we see that $ g \in
\ann_G\left(\mathfrak{b} R[x,f]\right) = N. $ Hence $N/L =
\ann_{G/L}\left((\mathfrak{b}:\mathfrak{a}) R[x,f]\right)$.
To prove the final claim, we have to show that $\grann_{R[x,f]}(N/L)
= \bigoplus _{n\in\mathbb{N}_0} (\mathfrak{b}:\mathfrak{a}) x^n$, given that $\grann_{R[x,f]}N
= \bigoplus _{n\in\mathbb{N}_0} \mathfrak{b} x^n$. In view of the preceding
paragraph, it remains only to show that $$\grann_{R[x,f]}(N/L) \subseteq
(\mathfrak{b}:\mathfrak{a})R[x,f].$$
Let $r \in R$ be such that $rx^i \in \grann_{R[x,f]}(N/L)$ for all
$i \in \mathbb{N}_0$. Let $g \in N$. Then $rx^ig \in L$ for all $i \in
\mathbb{N}_0$, and so $\mathfrak{a} rx^ig = 0$ for all $i \in \mathbb{N}_0$. As this is true
for all $g \in N$ and for all $i \in \mathbb{N}_0$, it follows that $\mathfrak{a} r
\subseteq \left(\grann_{R[x,f]}N\right) \cap R = \mathfrak{b}$. Hence $r
\in (\mathfrak{b}:\mathfrak{a})$.
(ii) Let $g \in N$. Then $ux^ig \in L$ for all $u \in \mathfrak{c}$ and $i
\in \mathbb{N}_0$, and so $rux^ig = 0$ for all $r \in \mathfrak{a}$, $u \in \mathfrak{c}$ and
$i \in \mathbb{N}_0$. Hence $N \subseteq
\ann_G\left(\bigoplus_{n\in\mathbb{N}_0}\mathfrak{a}\mathfrak{c} x^n\right) =
\ann_G\left(\mathfrak{a}\mathfrak{c} R[x,f]\right)$.
Now let $g \in \ann_G\left(\mathfrak{a}\mathfrak{c} R[x,f]\right)$. Then, for all $r
\in \mathfrak{a}$, $u \in \mathfrak{c}$ and $i,j \in \mathbb{N}_0$, we have $rx^i(ux^jg) =
ru^{p^i}x^{i+j}g = 0$, and so $ux^jg \in L$ for all $u \in \mathfrak{c}$
and $j \in \mathbb{N}_0$. Hence $g + L \in
\ann_{G/L}\left(\bigoplus_{n\in\mathbb{N}_0}\mathfrak{c} x^n\right) = N/L$, and $g
\in N$. It follows that $ N = \ann_G\left(\mathfrak{a}\mathfrak{c} R[x,f]\right). $
Also, by Lemma \ref{ga.1a}, we have
$$\ann_G\left(\mathfrak{a}\mathfrak{c} R[x,f]\right) = \ann_G\left((\mathfrak{a}\cap\mathfrak{c}) R[x,f]\right),$$
because $\mathfrak{a} \mathfrak{c}$ and $\mathfrak{a} \cap \mathfrak{c}$ have the same radical.
To prove the final claim, we have to show that $\grann_{R[x,f]}N =
(\mathfrak{a}\cap\mathfrak{c})R[x,f]$, given that
$$\grann_{R[x,f]}(N/L) = \mathfrak{c} R[x,f] \quad \mbox{~and~} \quad
\grann_{R[x,f]}(L) = \mathfrak{a} R[x,f].$$ In view of the
preceding paragraph, it remains only to show that $\grann_{R[x,f]}N
\subseteq (\mathfrak{a}\cap\mathfrak{c})R[x,f].$ However, this is clear, because
$
\grann_{R[x,f]}N \subseteq \grann_{R[x,f]}L \cap \grann_{R[x,f]}(N/L).
$
(iii) This is now immediate from parts (i) and (ii).
\end{proof}
\begin{rmk}
\label{ga.3r} It follows from Proposition \ref{ga.3}(ii) (and with
the hypotheses and notation thereof) that, if $\mathfrak{a}$ is an ideal of
$R$ and $L := \operatorname{ann}_G\left(\mathfrak{a} R[x,f]\right)$, then $
\operatorname{ann}_{G/L}\left(\mathfrak{a} R[x,f]\right) = 0$.
\end{rmk}
Because the special $R$-ideals introduced in Definition \ref{nt.6} are
radical, the following lemma will be very useful.
\begin{lem}
\label{ga.2} Let $\mathfrak{a}$ and $\mathfrak{b}$ be proper radical ideals of $R$, and let
their (unique) minimal primary decompositions be
$$
\mathfrak{a} = \mathfrak{r}_1 \cap \ldots \cap \mathfrak{r}_k \cap \mathfrak{p}_1 \cap \ldots \cap \mathfrak{p}_t \cap
\mathfrak{p}_1' \cap \ldots \cap \mathfrak{p}_u'
$$
and
$$
\mathfrak{b} = \mathfrak{r}_1 \cap \ldots \cap \mathfrak{r}_k \cap \mathfrak{q}_1 \cap \ldots \cap \mathfrak{q}_v \cap
\mathfrak{q}_1' \cap \ldots \cap \mathfrak{q}_w',
$$
where the notation is such that
$$
\left\{\mathfrak{p}_1, \ldots, \mathfrak{p}_t,
\mathfrak{p}_1', \ldots, \mathfrak{p}_u'\right\} \cap \left\{\mathfrak{q}_1, \ldots, \mathfrak{q}_v,
\mathfrak{q}_1', \ldots, \mathfrak{q}_w'\right\} = \emptyset,
$$
and such that none of $\mathfrak{p}_1, \ldots, \mathfrak{p}_t$ contains an
associated prime of $\mathfrak{b}$, each of $\mathfrak{p}_1', \ldots, \mathfrak{p}_u'$
contains an associated prime of $\mathfrak{b}$, none of $\mathfrak{q}_1, \ldots,
\mathfrak{q}_v$ contains an associated prime of $\mathfrak{a}$, and each of $\mathfrak{q}_1',
\ldots, \mathfrak{q}_w'$ contains an associated prime of $\mathfrak{a}$. (Note that
some, but not all, of the integers $k$, $t$ and $u$ might be zero;
a similar comment applies to the primary decomposition of $\mathfrak{b}$.)
Then
\begin{enumerate}
\item $\mathfrak{a} \cap \mathfrak{b} = \mathfrak{r}_1 \cap \ldots \cap \mathfrak{r}_k \cap \mathfrak{p}_1 \cap
\ldots \cap \mathfrak{p}_t \cap \mathfrak{q}_1 \cap \ldots \cap \mathfrak{q}_v$ is the
minimal primary decomposition; \item if $\mathfrak{a} \not\subseteq \mathfrak{b}$,
the equation
$ (\mathfrak{b} : \mathfrak{a}) = \mathfrak{q}_1 \cap \ldots \cap \mathfrak{q}_v$
gives the minimal primary decomposition.
\end{enumerate}
\end{lem}
\begin{proof} (i) Each of ${\mathfrak{p}}_1', \ldots, {\mathfrak{p}}_u'$ must contain one of
${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$; likewise, each of ${\mathfrak{q}}_1', \ldots, {\mathfrak{q}}_w'$
must contain one of ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_t$ . The claim then
follows easily.
(ii) Since $ (\mathfrak{b} : \mathfrak{a}) = (\mathfrak{b} \cap \mathfrak{a} : \mathfrak{a})$, it is clear from part (i) that
$$
\mathfrak{q}_1 \cap \ldots \cap \mathfrak{q}_v \subseteq (\mathfrak{b} : \mathfrak{a}).
$$
Now let $r \in (\mathfrak{b} : \mathfrak{a}) = (\mathfrak{b} \cap \mathfrak{a} : \mathfrak{a})$.
Then, for each $i = 1, \ldots, v$,
we have $r\mathfrak{a} \subseteq \mathfrak{q}_i$, whereas $\mathfrak{a} \not\subseteq \mathfrak{q}_i$;
hence $r \in \mathfrak{q}_i$
because $\mathfrak{q}_i$ is prime.
\end{proof}
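The following example is included only to illustrate the notation of
Lemma \ref{ga.2}; it is not needed in the sequel. Take for $R$ the
polynomial ring $K[X,Y]$ over a field $K$ of characteristic $p$, and
consider the radical ideals $\mathfrak{a} = (X)\cap(Y) = (XY)$ and
$\mathfrak{b} = (X)\cap(X-1)$. Here $\mathfrak{r}_1 = (X)$, $\mathfrak{p}_1 = (Y)$ and
$\mathfrak{q}_1 = (X-1)$, and there are no primed components. Part (i) gives the
minimal primary decomposition $\mathfrak{a}\cap\mathfrak{b} = (X)\cap(Y)\cap(X-1)$, and
part (ii) gives $(\mathfrak{b}:\mathfrak{a}) = (X-1)$; the latter can also be checked
directly, since $rXY \in (X)\cap(X-1)$ if and only if $X-1$ divides $r$.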
\begin{thm}
\label{ga.4} Suppose that $G$ is $x$-torsion-free. Let $N :=
\operatorname{ann}_G\left(\mathfrak{b} R[x,f]\right) \in \mathcal{A}(G)$, where the ideal
$\mathfrak{b} \in \mathcal{I}(G)$ corresponds to $N$. Assume that $N \neq
0$, and let $\mathfrak{b} = \mathfrak{p}_1 \cap \ldots \cap \mathfrak{p}_t$ be the minimal
primary decomposition of the (radical) ideal $\mathfrak{b}$.

Suppose that $t > 1$, and consider any partition $\{1,\ldots,t\} =
U \cup V$, where $U$ and $V$ are two non-empty disjoint sets. Set
$ \mathfrak{a} = \bigcap_{i\in U} \mathfrak{p}_i$ and $\mathfrak{c} = \bigcap_{i\in V}
\mathfrak{p}_i$. Let $L := \operatorname{ann}_G\left(\mathfrak{a} R[x,f]\right) \in
\mathcal{A}(G)$. Then
\begin{enumerate}
\item $0 \subset L \subset N$ (the symbol `$\subset$' is reserved
to denote strict inclusion); \item $N/L = \operatorname{ann}_{G/L}\left(\mathfrak{c}
R[x,f]\right) \in \mathcal{A}(G/L)$ with corresponding ideal $\mathfrak{c}
\in \mathcal{I}(G/L)$; and \item $\grann_{R[x,f]}L = \mathfrak{a} R[x,f]$,
so that $\mathfrak{a} \in \mathcal{I}(G)$ corresponds to $L$.
\end{enumerate}
\end{thm}
\begin{proof} (i) It is clear that $ L \subseteq N$.
Suppose that $L = 0$ and seek a contradiction. Let $g \in N$. Let
$i,j \in \mathbb{N}_0$ and $r \in \mathfrak{c}$, $u \in \mathfrak{a}$. Then $ux^i(rx^jg) =
ur^{p^i}x^{i+j}g = 0$ because $ ur^{p^i} \in \mathfrak{b}$ and $\mathfrak{b}
x^{i+j}$ annihilates $N$. This is true for all $i \in \mathbb{N}_0$ and $u
\in \mathfrak{a}$. Therefore $ rx^jg \in \operatorname{ann}_G\left(\bigoplus_{n\in\mathbb{N}_0}\mathfrak{a}
x^n\right) = L = 0. $ It follows that $\bigoplus_{n\in\mathbb{N}_0}\mathfrak{c} x^n
\subseteq \grann_{R[x,f]}N = \bigoplus _{n\in\mathbb{N}_0} \mathfrak{b} x^n$, so that
$\mathfrak{c} \subseteq \mathfrak{b}$. But $\mathfrak{b} \subseteq \mathfrak{c}$, and so $\mathfrak{c} = \mathfrak{b}$.
However, this contradicts the fact that $\mathfrak{b} = \mathfrak{p}_1 \cap \ldots
\cap \mathfrak{p}_t$ is the unique minimal primary decomposition of $\mathfrak{b}$.
Therefore $L \neq 0$.

Now suppose that $L = N$ and again seek a contradiction. Then
$\bigoplus_{n\in\mathbb{N}_0}\mathfrak{a} x^n \subseteq
\grann_{R[x,f]}N = \bigoplus _{n\in\mathbb{N}_0} \mathfrak{b} x^n$, so that $\mathfrak{a} \subseteq \mathfrak{b}$. But
$\mathfrak{b} \subseteq \mathfrak{a}$, and so $\mathfrak{a} = \mathfrak{b}$, and this again leads to a contradiction.
Therefore $L \neq N$.

(ii) Since $\mathfrak{b} \subseteq \mathfrak{a}$, it is immediate from Proposition
\ref{ga.3}(i) that $N/L = \operatorname{ann}_{G/L}\left((\mathfrak{b}:\mathfrak{a})R[x,f]\right)
\in \mathcal{A}(G/L)$ and that the ideal $(\mathfrak{b}:\mathfrak{a}) \in
\mathcal{I}(G/L)$ corresponds to $N/L$. However, it follows from
Lemma \ref{ga.2}(ii) that $(\mathfrak{b}:\mathfrak{a}) = \bigcap_{i\in V} \mathfrak{p}_i =
\mathfrak{c}$.

(iii) Let $\mathfrak{d} \in \mathcal{I}(G)$ correspond to $L$. Note that
$\mathfrak{a} = \bigcap_{i\in U} \mathfrak{p}_i \subseteq \mathfrak{d}$. By Proposition
\ref{ga.3}(i), the ideal in $\mathcal{I}(G/L)$ corresponding to
$N/L$ is $(\mathfrak{b}:\mathfrak{d})$. Therefore, by part (ii), we have $(\mathfrak{b}:\mathfrak{d})
= \mathfrak{c}$. But, by Proposition \ref{ga.3}(ii), the ideal in
$\mathcal{I}(G)$ corresponding to $N$ is $\mathfrak{d}\cap\mathfrak{c}$. Therefore
$\mathfrak{b} = \mathfrak{d} \cap \mathfrak{c}$, and so $\mathfrak{d} \neq R$.

Now $\mathfrak{d}$ is a radical ideal of $R$. By Lemma \ref{ga.2}(i), each
$\mathfrak{p}_j$, for $j \in U$, is an associated prime of $\mathfrak{d}$. Hence
$\mathfrak{d} \subseteq \bigcap_{j\in U} \mathfrak{p}_j = \mathfrak{a}$. But we already know
that $\mathfrak{a} \subseteq \mathfrak{d}$, and so $\mathfrak{d} = \mathfrak{a}$.
\end{proof}
\begin{cor}
\label{ga.10} Suppose
that $G$ is $x$-torsion-free. Then the set of $G$-special
$R$-ideals is precisely the set of all finite intersections of
prime $G$-special $R$-ideals (provided one includes the
empty intersection, $R$, which corresponds to the zero special annihilator
submodule of $G$). In symbols,
$$
\mathcal{I}(G) = \left\{ \mathfrak{p}_1 \cap \ldots \cap \mathfrak{p}_t : t \in \mathbb{N}_0
\mbox{~and~} \mathfrak{p}_1, \ldots, \mathfrak{p}_t \in
\mathcal{I}(G)\cap\Spec(R)\right\}.
$$
\end{cor}
\begin{proof} By Corollary \ref{nt.9}, the set $\mathcal{I}(G)$ is
closed under taking intersections. A proper ideal $\mathfrak{a} \in
\mathcal{I}(G)$ is radical and it follows from Theorem \ref{ga.4}
that each (necessarily prime) primary component of $\mathfrak{a}$ also
belongs to $\mathcal{I}(G)$. This is enough to complete the proof.
\end{proof}
\begin{lem}
\label{ga.11} Suppose that $G$ is $x$-torsion-free. Let ${\mathfrak{p}}$ be a
maximal member of $\mathcal{I}(G) \setminus \{R\}$ with respect to
inclusion, and let $L \in \mathcal{A}(G)$ be the corresponding
special annihilator submodule of $G$. Thus $L$ is a minimal member
of the set of non-zero special annihilator submodules of $G$.
Then ${\mathfrak{p}}$ is prime, and any non-zero $g\in L$ satisfies
$\grann_{R[x,f]}R[x,f]g = {\mathfrak{p}} R[x,f]$.
\end{lem}
\begin{proof} It follows from Corollary \ref{ga.10} that ${\mathfrak{p}}$ is prime.
Since $R[x,f]g$ is a non-zero $R[x,f]$-submodule of $L$, there is
a proper radical ideal $\mathfrak{a} \in \mathcal{I}(G)$ such that
$$
\mathfrak{a} R[x,f] = \grann_{R[x,f]}R[x,f]g \supseteq \grann_{R[x,f]}L = \mathfrak{p}
R[x,f].
$$
Since $\mathfrak{p}$ is a maximal member of $\mathcal{I}(G) \setminus
\{R\}$, we must have $\mathfrak{a} = \mathfrak{p}$.
\end{proof}
Our next major aim is to show that, in the situation of Corollary
\ref{ga.10}, the set $\mathcal{I}(G)$ is finite if $G$ has the
property that, for each special annihilator submodule $L$ of $G$
(including $0 = \operatorname{ann}_GR[x,f]$), the $x$-torsion-free residue class
module $G/L$ (see Lemma \ref{ga.1}) does not contain, as an
$R[x,f]$-submodule, an infinite direct sum of non-zero special
annihilator submodules of $G/L$. This may seem rather a
complicated hypothesis, and so we point out now that it is
satisfied if $G$ is a Noetherian or Artinian left $R[x,f]$-module,
and therefore if $G$ is a Noetherian or Artinian $R$-module. These
ideas will be applied, later in the paper, to an example in which
$G$ is Artinian as an $R$-module.
The following lemma will be helpful in an inductive argument in
the proof of Theorem \ref{ga.12}.
\begin{lem}
\label{ga.12p} Suppose that $G$ is $x$-torsion-free, and that the
set $\mathcal{I}(G) \setminus \{R\}$ is non-empty and has finitely
many maximal members: suppose that there are $n$ of these and
denote them by $\mathfrak{p}_1, \ldots, \mathfrak{p}_n$. (The ideals $\mathfrak{p}_1, \ldots,
\mathfrak{p}_n$ are prime, by \/ {\rm \ref{ga.11}}.) Let $L :=
\operatorname{ann}_G\left(\left(\mathfrak{p}_1 \cap \cdots \cap \mathfrak{p}_n\right)R[x,f]\right)$. Then the
left $R[x,f]$-module $G/L$ is $x$-torsion-free, and
$$
\mathcal{I}(G/L) \cap \Spec (R) = \mathcal{I}(G) \cap \Spec (R)
\setminus \{\mathfrak{p}_1, \ldots, \mathfrak{p}_n\}.
$$
\end{lem}
\begin{proof} Note
that $\bigcap_{i=1}^n {\mathfrak{p}}_i \in {\mathfrak{m}}athcal{I}(G)$, by Corollary
\ref{nt.9}. Therefore $\grann_{R[x,f]}L = \left(\bigcap_{i=1}^n
{\mathfrak{p}}_i\right)R[x,f]$ and $L$ corresponds to $\bigcap_{i=1}^n
{\mathfrak{p}}_i$.
That $G/L$ is $x$-torsion-free follows from Lemma \ref{ga.1}. By
Proposition \ref{ga.3}(iii),
$$
{\mathfrak{m}}athcal{A}(G/L) = \left\{ N/L : N \in {\mathfrak{m}}athcal{A}(G) {\mathfrak{m}}box{~and~}
L \subseteq N \right\}.
$$
Let $N \in {\mathfrak{m}}athcal{A}(G)$ with $L \subset N$, and let ${{\mathfrak{m}}athfrak{b}} \in
{\mathfrak{m}}athcal{I}(G)$ correspond to $N$. Note that ${{\mathfrak{m}}athfrak{b}} \subset
\bigcap_{i=1}^n {\mathfrak{p}}_i$, and that no associated prime of ${{\mathfrak{m}}athfrak{b}}$ can
contain properly any of ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$. Therefore the
minimal primary decomposition of the radical ideal ${{\mathfrak{m}}athfrak{b}}$ will have
the form
$$
{{\mathfrak{m}}athfrak{b}} = \left( {\textstyle \bigcap_{i\in I}} {\mathfrak{p}}_i \right) \cap
{\mathfrak{q}}_1 \cap \ldots \cap {\mathfrak{q}}_v,
$$
where $I$ is some (possibly empty) subset of $\{1, \ldots, n\}$
and none of ${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$ contains any of ${\mathfrak{p}}_1, \ldots,
{\mathfrak{p}}_n$. Note that ${\mathfrak{q}}_1, \ldots, {\mathfrak{q}}_v$ must all belong to
${\mathfrak{m}}athcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1, \ldots,
{\mathfrak{p}}_n\}$. Proposition \ref{ga.3}(i), this time used in conjunction
with Lemma \ref{ga.2}(ii), now shows that $N/L \in
{\mathfrak{m}}athcal{A}(G/L)$ and the ideal of ${\mathfrak{m}}athcal{I}(G/L)$
corresponding to $N/L$ is
$$
\left({{\mathfrak{m}}athfrak{b}} : {\mathfrak{p}}_1 \cap \cdots \cap {\mathfrak{p}}_n\right) = {\mathfrak{q}}_1 \cap
\ldots \cap {\mathfrak{q}}_v.
$$
Note also that, if ${\mathfrak{q}} \in {\mathfrak{m}}athcal{I}(G) \cap \Spec (R)
\setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}$ and
$$
J := \left\{ j \in \{1, \ldots, n\} : {\mathfrak{p}}_j {\mathfrak{n}}ot\supset {\mathfrak{q}}
\right\},
$$
then ${{\mathfrak{m}}athfrak{c}} := \left(\bigcap_{j\in J} {\mathfrak{p}}_j \right) \cap {\mathfrak{q}} \in
{\mathfrak{m}}athcal{I}(G)$ and ${{\mathfrak{m}}athfrak{c}} \subset \bigcap_{i=1}^n{\mathfrak{p}}_i$. It now
follows from Corollary \ref{ga.10} that
$$
{\mathfrak{m}}athcal{I}(G/L) \cap \Spec (R) = {\mathfrak{m}}athcal{I}(G) \cap \Spec (R)
\setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\},
$$
as required.
\end{proof}
\begin{thm}
\label{ga.12} Suppose
that $G$ is $x$-torsion-free. Assume that $G$ has the property
that, for each special annihilator submodule $L$ of $G$ (including $0 =
\operatorname{ann}_GR[x,f]$), the $x$-torsion-free residue class module $G/L$
does not contain, as an $R[x,f]$-submodule, an infinite direct sum
of non-zero special annihilator submodules of $G/L$.
Then the set $\mathcal{I}(G)$ of $G$-special $R$-ideals is
finite.
\end{thm}
\begin{proof} By Corollary \ref{ga.10}, it is enough for us to show that
the set ${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$ is finite; we may suppose
that the latter set is not empty, so that it has maximal members
with respect to inclusion. In the first part of the proof, we show
that ${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$ has only finitely many such
maximal members.
Let $\left({\mathfrak{p}}_{\lambda}\right)_{\lambda \in \Lambda}$ be a
labelling of the set of maximal members of ${\mathfrak{m}}athcal{I}(G)\cap
\Spec (R)$, arranged so that ${\mathfrak{p}}_{\lambda} {\mathfrak{n}}eq {\mathfrak{p}}_{{\mathfrak{m}}u}$
whenever $\lambda$ and ${\mathfrak{m}}u$ are different elements of $\Lambda$.
For each $\lambda \in \Lambda$, let $S_{\lambda}$ be the member of
${\mathfrak{m}}athcal{A}(G)$ corresponding to ${\mathfrak{p}}_{\lambda}$.
Consider $\lambda, {\mathfrak{m}}u \in \Lambda$ with $\lambda {\mathfrak{n}}eq {\mathfrak{m}}u$. By
Lemma \ref{ga.11}, a non-zero $g \in S_{\lambda} \cap S_{{\mathfrak{m}}u}$
would have to satisfy $\grann_{R[x,f]}R[x,f]g = {\mathfrak{p}}_{\lambda} R[x,f]
= {\mathfrak{p}}_{{\mathfrak{m}}u} R[x,f]$. Since ${\mathfrak{p}}_{\lambda} {\mathfrak{n}}eq {\mathfrak{p}}_{{\mathfrak{m}}u}$, this is
impossible. Therefore $S_{\lambda} \cap S_{{\mathfrak{m}}u} = 0$ and the sum
$S_{\lambda} + S_{{\mathfrak{m}}u}$ is direct.
Suppose, inductively, that $n \in {\mathfrak{m}}athbb N$ and we have shown that,
whenever $\lambda_1, \ldots, \lambda_n$ are $n$ distinct members
of $\Lambda$, then the sum $\sum_{i=1}^nS_{\lambda_i}$ is direct.
We can now use Lemma \ref{ga.11} to see that, if $g_i \in
S_{\lambda_i}$ for $i = 1, \ldots, n$, then
$$
\grann_{R[x,f]}R[x,f](g_1 + \cdots + g_n) =
\bigcap_{\stackrel{\scriptstyle i=1}{g_i {\mathfrak{n}}eq
0}}^n{\mathfrak{p}}_{\lambda_i}R[x,f],
$$
and then to deduce that, for $\lambda_{n+1}
\in \Lambda \setminus\{\lambda_1, \ldots,
\lambda_n\}$, we must have
$
\left( \bigoplus_{i=1}^n S_{\lambda_i} \right) \bigcap S_{\lambda_{n+1}} = 0,
$
so that the sum $S_{\lambda_1} + \cdots + S_{\lambda_n} + S_{\lambda_{n+1}}$
is direct.
It follows that the sum $\sum_{\lambda \in \Lambda}S_{\lambda}$ is
direct; since each $S_{\lambda}$ is non-zero, the hypothesis about
$G/0$ (that is, about $G$) ensures that $\Lambda$ is finite.
We have thus shown that ${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$ has only
finitely many maximal members. Note that ${\mathfrak{m}}ax\{ \height {\mathfrak{p}} : {\mathfrak{p}}
{\mathfrak{m}}box{~is a maximal member of~} {\mathfrak{m}}athcal{I}(G)\cap \Spec (R) \}$
is an upper bound for the lengths of chains
$$
{\mathfrak{p}}_0 \subset {\mathfrak{p}}_1 \subset \cdots \subset {\mathfrak{p}}_w
$$
of prime ideals in ${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$. We argue by
induction on the maximum $t$ of these lengths. When $t = 0$, all
members of ${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$ are maximal members of
that set, and so, by the first part of this proof,
${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$ is finite. Now suppose that $t >
0$, and that it has been proved that ${\mathfrak{m}}athcal{I}(G)\cap \Spec
(R)$ is finite for smaller values of $t$.
We know that there are only finitely many maximal members of
${\mathfrak{m}}athcal{I}(G)\cap \Spec (R)$; suppose that there are $n$ of
these and denote them by ${\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n$. Let $L :=
{{\mathfrak{m}}athfrak{a}}nn_G\left({\mathfrak{p}}_1 \cap \cdots \cap {\mathfrak{p}}_n\right)R[x,f]$. We can now
use Lemma \ref{ga.12p} to deduce that the left $R[x,f]$-module
$G/L$ is $x$-torsion-free and
$$
{\mathfrak{m}}athcal{I}(G/L) \cap \Spec (R) = {\mathfrak{m}}athcal{I}(G) \cap \Spec (R)
\setminus \{{\mathfrak{p}}_1, \ldots, {\mathfrak{p}}_n\}.
$$
It follows from this and Proposition \ref{ga.3}(ii) that the
inductive hypothesis can be applied to $G/L$, and so we can deduce
that the set $${\mathfrak{m}}athcal{I}(G) \cap \Spec (R) \setminus \{{\mathfrak{p}}_1,
\ldots, {\mathfrak{p}}_n\}$$ is finite. Hence ${\mathfrak{m}}athcal{I}(G) \cap \Spec (R)$
is a finite set and the inductive step is complete.
\end{proof}
\begin{cor}
\label{ga.12c} Suppose that the left $R[x,f]$-module $G$ is
$x$-torsion-free and either Artinian or Noetherian as an
$R$-module. Then the set $\mathcal{I}(G)$ of $G$-special
$R$-ideals is finite.
\end{cor}
\begin{thm}
\label{ga.13} Suppose that $G$ is $x$-torsion-free and that the
set $\mathcal{I}(G)$ of $G$-special $R$-ideals is finite.
Then there exists a (uniquely determined) ideal $\mathfrak{b} \in
\mathcal{I}(G)$ with the properties that $\height \mathfrak{b} \geq 1$ (the
improper ideal $R$ is considered to have infinite height) and $\mathfrak{b}
\subset \mathfrak{c}$ for every other ideal $\mathfrak{c} \in \mathcal{I}(G)$ with
$\height \mathfrak{c} \geq 1$. Furthermore, for $g \in G$, the following
statements are equivalent:
\begin{enumerate}
\item $g$ is annihilated by $\mathfrak{b} R[x,f] = \bigoplus_{n\in\mathbb{N}_0}\mathfrak{b} x^n$;
\item there exists $c \in R^{\circ}\cap \mathfrak{b}$ such that
$cx^ng = 0$ for all $n \gg 0$;
\item there exists $c \in R^{\circ}$ such that $cx^ng = 0$ for all $n \gg 0$.
\end{enumerate}
\end{thm}
\begin{proof} By Corollary \ref{ga.10}, we have
$$
\mathcal{I}(G) = \left\{ \mathfrak{p}_1 \cap \ldots \cap \mathfrak{p}_t : t \in \mathbb{N}_0
\mbox{~and~} \mathfrak{p}_1, \ldots, \mathfrak{p}_t \in
\mathcal{I}(G)\cap\Spec(R)\right\}.
$$
Since $\mathcal{I}(G)$ is finite, it is immediate that
$$
\mathfrak{b} := \bigcap_{\stackrel{\scriptstyle \mathfrak{p} \in
\mathcal{I}(G)\cap\Spec(R)}{\height \mathfrak{p} \geq 1}}\mathfrak{p}
$$
is the smallest ideal in $\mathcal{I}(G)$ of height greater than
$0$. Since $\height \mathfrak{b} \geq 1$, so that there exists $c \in \mathfrak{b}
\cap R^{\circ}$ by prime avoidance, it is clear that (i)
$\Rightarrow$ (ii) and (ii) $\Rightarrow$ (iii).

(iii) $\Rightarrow$ (i) Let $n_0 \in \mathbb{N}_0$ and $c \in R^{\circ}$ be
such that $cx^ng = 0$ for all $n \geq n_0$. Then, for all $j \in \mathbb{N}_0$,
we have $x^{n_0}cx^jg = c^{p^{n_0}}x^{n_0 + j}g = 0$, so that $cx^jg = 0$
because $G$ is $x$-torsion-free.
Therefore $g \in \operatorname{ann}_G(RcR[x,f])$. Now $\operatorname{ann}_G(RcR[x,f])\in
\mathcal{A}(G)$: let $\mathfrak{a} \in \mathcal{I}(G)$ be the corresponding
$G$-special $R$-ideal. Since $c \in \mathfrak{a}$, we must have
$\height \mathfrak{a} \geq 1$. Therefore $\mathfrak{b} \subseteq \mathfrak{a}$, by definition
of $\mathfrak{b}$, and so
$$
g \in \operatorname{ann}_G(RcR[x,f]) = \operatorname{ann}_G(\mathfrak{a} R[x,f]) \subseteq \operatorname{ann}_G(\mathfrak{b}
R[x,f]).
$$
\end{proof}
Corollary \ref{ga.12c} and Theorem \ref{ga.13} give hints about
how this work will be exploited, in Section \ref{tc} below, to
obtain results in the theory of tight closure. The aim is to apply
Corollary \ref{ga.12c} and Theorem \ref{ga.13} to
$H^d_{{\mathfrak{m}}}(R)/\Gamma_x(H^d_{{\mathfrak{m}}}(R))$, where $(R,{\mathfrak{m}})$ is a local
ring of dimension $d > 0$; the local cohomology module
$H^d_{{\mathfrak{m}}}(R)$, which is well known to be Artinian as an
$R$-module, carries a natural structure as a left $R[x,f]$-module.
The passage between $H^d_{{\mathfrak{m}}}(R)$ and its $x$-torsion-free
residue class $R[x,f]$-module
$H^d_{{\mathfrak{m}}}(R)/\Gamma_x(H^d_{{\mathfrak{m}}}(R))$ is facilitated by the
following extension, due to G. Lyubeznik, of a result of R.
Hartshorne and R. Speiser. It shows that, when $R$ is local, an
$x$-torsion left $R[x,f]$-module which is Artinian (that is,
`cofinite' in the terminology of Hartshorne and Speiser) as an
$R$-module exhibits a certain uniformity of behaviour.
\begin{thm} [G. Lyubeznik {\cite[Proposition 4.4]{Lyube97}}]
\label{hs.4} {\rm (Compare Hartshorne--Speiser \cite[Proposition
1.11]{HarSpe77}.)} Suppose that $(R,{\mathfrak{m}})$ is local, and let $H$ be
a left $R[x,f]$-module which is Artinian as an $R$-module. Then
there exists $e \in \mathbb{N}_0$ such that $x^e\Gamma_x(H) = 0$.
\end{thm}
Hartshorne and Speiser first proved this result in the
case where $R$ is local and contains its residue field which is perfect.
Lyubeznik applied his theory of $F$-modules to obtain the result
without restriction on the local ring $R$ of characteristic $p$.
\begin{defi}
\label{hslno} Suppose that $(R,{\mathfrak{m}})$ is local, and let $H$ be a
left $R[x,f]$-module which is Artinian as an $R$-module. By the
Hartshorne--Speiser--Lyubeznik Theorem \ref{hs.4}, there exists $e
\in \mathbb{N}_0$ such that $x^e\Gamma_x(H) = 0$: we call the smallest such
$e$ the {\em Hartshorne--Speiser--Lyubeznik number\/}, or {\em
HSL-number\/} for short, of $H$.
\end{defi}
It will be helpful to have available an extension of this idea.
\begin{defi}
\label{ga.14} We say that the left $R[x,f]$-module $H$ {\em admits
an HSL-number\/} if there exists $e \in \mathbb{N}_0$ such that $x^e\Gamma_x(H) = 0$;
then we call the smallest such $e$ the {\em HSL-number\/} of $H$.
\end{defi}
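Note, as a trivial example, that if $H$ is $x$-torsion-free, then
$\Gamma_x(H) = 0$ and $H$ admits HSL-number $0$; this is the situation
that will arise in Section \ref{tc} when the local ring under
consideration is $F$-injective.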
We have seen above in \ref{hs.4} and \ref{hslno} that if $H$ is Artinian
as an $R$-module, then it admits an HSL-number. Note also that
if $H$ is Noetherian
as an $R$-module, then it admits an HSL-number, because $\Gamma_x(H)$ is an
$R[x,f]$-submodule of $H$, and so is an $R$-submodule and therefore
finitely generated; since each member of a finite generating set of
$\Gamma_x(H)$ is annihilated by some power of $x$, a single power of
$x$ then annihilates all of $\Gamma_x(H)$.
\begin{cor}
\label{ga.15} Suppose that the left $R[x,f]$-module $H$ admits an
HSL-number $m_0$, and that the $x$-torsion-free left
$R[x,f]$-module $G := H/\Gamma_x(H)$ has only finitely many
$G$-special $R$-ideals. Let $\mathfrak{b}$ be the smallest ideal in
$\mathcal{I}(G)$ of positive height (see\/ {\rm \ref{ga.13}}). For
$h \in H$, the following statements are equivalent:
\begin{enumerate}
\item $h$ is annihilated by $\bigoplus_{n\geq m_0}\mathfrak{b}^{[p^{m_0}]} x^n$;
\item there exists $c \in R^{\circ}\cap \mathfrak{b}$ such that
$cx^nh = 0$ for all $n \geq m_0$;
\item there exists $c \in R^{\circ}\cap \mathfrak{b}$ such that
$cx^nh = 0$ for all $n \gg 0$;
\item there exists $c \in R^{\circ}$ such that $cx^nh = 0$ for all $n \gg 0$.
\end{enumerate}
\end{cor}
\begin{proof} Since $\mathfrak{b}
\cap R^{\circ} \neq 0$ by prime avoidance, it is clear that (i)
$\Rightarrow$ (ii), (ii)
$\Rightarrow$ (iii) and (iii) $\Rightarrow$ (iv).

(iv) $\Rightarrow$ (i) Since $cx^n(h + \Gamma_x(H)) = 0$ in $G$
for all $n \gg 0$, it follows from Theorem \ref{ga.13} that $h + \Gamma_x(H)$
is annihilated by $\mathfrak{b} R[x,f]$. Therefore, for all $r \in \mathfrak{b}$ and $j \in \mathbb{N}_0$,
we have $rx^j(h + \Gamma_x(H)) = 0$, so that $rx^jh \in \Gamma_x(H)$ and
$r^{p^{m_0}}x^{m_0+j}h = x^{m_0}rx^jh = 0$. Therefore $h \in \operatorname{ann}_H
\left(\bigoplus_{n\geq m_0}\mathfrak{b}^{[p^{m_0}]} x^n\right)$.
\end{proof}
\section{\sc Applications to tight closure}
\label{tc}
The aim of this section is to apply results from Section \ref{ga}
to the theory of tight closure in the local ring $(R,{\mathfrak{m}})$ of
dimension $d > 0$. As was mentioned in Section \ref{ga}, we shall
be concerned with the top local cohomology module $H^d_{{\mathfrak{m}}}(R)$,
which has a natural structure as a left $R[x,f]$-module, and its
$x$-torsion-free residue class module
$H^d_{{\mathfrak{m}}}(R)/\Gamma_x(H^d_{{\mathfrak{m}}}(R))$. The (well-known) left
$R[x,f]$-module structure carried by $H^d_{{\mathfrak{m}}}(R)$ is described
in detail in \cite[2.1 and 2.3]{KS}.
\begin{rmd}
\label{tc.1} Suppose that $(R,{\mathfrak{m}})$ is a local ring of dimension $d > 0$.
The above-mentioned natural left $R[x,f]$-module structure carried
by $H^d_{{\mathfrak{m}}}(R)$ is independent of any choice of a system of parameters for
$R$. However, if one does choose a system of parameters $a_1, \ldots, a_d$ for
$R$, then one can obtain a quite concrete
representation of the local cohomology module $H^d_{{\mathfrak{m}}}(R)$ and, through this,
an explicit formula for the effect of multiplication by the indeterminate
$x \in R[x,f]$ on an element of $H^d_{{\mathfrak{m}}}(R)$.
Denote by $a_1, \ldots, a_d$ a system of parameters for $R$.
\begin{enumerate}
\item Represent
$H^d_{{\mathfrak{m}}}(R)$ as the $d$-th cohomology module of the \u{C}ech
complex of $R$ with respect to $a_1, \ldots, a_d$, that is, as the
residue class module of $R_{a_1 \ldots a_d}$ modulo the image,
under the \u{C}ech complex `differentiation' map, of
$\bigoplus_{i=1}^dR_{a_1 \ldots a_{i-1}a_{i+1}\ldots a_d}$. See
\cite[\S 5.1]{LC}. We use `$\left[{\mathfrak{p}}hantom{=} \right]$' to denote natural
images of elements of $R_{a_1\ldots a_d}$ in this residue class
module. Note that, for $i \in \{1, \ldots, d\}$, we have
$$
\left[\frac{a_i^k}{(a_1 \ldots a_d)^k}\right] = 0 \quad \mbox{~for
all~} k \in \mathbb{N}_0.
$$
Denote the product $a_1 \ldots a_d$ by $a$. A typical element of
$H^d_{\mathfrak{m}}(R)$ can be represented as $ \left[r/a^j\right]$ for
some $r \in R$ and $j \in \mathbb{N}_0$; moreover, for $r, r_1 \in R$ and
$j, j_1 \in \mathbb{N}_0$, we have $ \left[r/a^j\right] =
\left[r_1/a^{j_1}\right] $ if and only if there exists $k \in \mathbb{N}_0$
such that $k \geq \max\{j,j_1\}$ and $ a^{k-j}r - a^{k-j_1}r_1 \in
(a_1^k, \ldots, a_d^k)R. $ In particular, if $a_1, \ldots, a_d$
form an $R$-sequence (that is, if $R$ is Cohen--Macaulay), then $
\left[r/a^j\right] = 0$ if and only if $r \in (a_1^j, \ldots,
a_d^j)R$, by \cite[Theorem 3.2]{O'Car83}, for example.
\item The left $R[x,f]$-module structure on $H^d_{{\mathfrak{m}}}(R)$ is such
that
$$
x\left[\frac{r}{(a_1\ldots a_d)^j}\right] =
\left[\frac{r^p}{(a_1\ldots a_d)^{jp}}\right] \quad \mbox{~for all~} r \in R
\mbox{~and~} j \in \mathbb{N}_0.
$$
The reader might like to consult \cite[2.3]{KS} for more details, and should
in any case note that this left $R[x,f]$-module structure does not depend on the
choice of system of parameters $a_1, \ldots, a_d$.
\end{enumerate}
\end{rmd}
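As a simple illustration of \ref{tc.1} (included only for orientation),
let $K$ be a field of characteristic $p$ and take $(R,\mathfrak{m}) =
(K[[t]],(t))$, with the system of parameters consisting of the single
element $a_1 = t$. Then $H^1_{\mathfrak{m}}(R)$ is represented as $R_t/R$; a
typical element is $\left[r/t^j\right]$ with $r \in R$ and $j \in
\mathbb{N}_0$; we have $\left[r/t^j\right] = 0$ if and only if $r \in (t^j)$,
because $R$ is Cohen--Macaulay; and the left $R[x,f]$-module structure
satisfies $x\left[r/t^j\right] = \left[r^p/t^{jp}\right]$.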
\begin{rmk}
\label{tc.1r} Let the situation and notation be as in \ref{tc.1}.
Here we relate the left $R[x,f]$-module structure on $H :=
H^d_{{\mathfrak{m}}}(R)$ described in \ref{tc.1} to the tight closure in $H$
of its zero submodule. See \cite[Definition (8.2)]{HocHun90} for
the definition of the tight closure in an $R$-module of one of its
submodules. Let $n \in \mathbb{N}_0$.
\begin{enumerate}
\item The $n$-th component $Rx^n$ of $R[x,f]$ is isomorphic, as an
$(R,R)$-bimodule, to $R$ considered as a left $R$-module in the
natural way and as a right $R$-module via $f^n$, the $n$-th power
of the Frobenius ring homomorphism. Let $L$ be a submodule of the
$R$-module $M$. It follows that an element $m \in M$ belongs to
$L^*_M$, the {\em tight closure of $L$ in $M$\/}, if and only if
there exists $c \in R^\circ$ such that $cx^n \otimes m$ belongs,
for all $n \gg 0$, to the image of $R[x,f]\otimes_R L$ in
$R[x,f]\otimes_R M$ under the map induced by inclusion.
\item Let $S$ be a multiplicatively closed subset of $R$. It is straightforward
to check that there is an isomorphism of $R$-modules
$$\gamma_n: Rx^n \otimes_R S^{-1}R \stackrel{\cong}{\longrightarrow} S^{-1}R$$
for which $\gamma_n ( bx^n \otimes (r/s) ) = br^{p^n}/s^{p^n}$ for
all $b,r \in R$ and $s \in S$; the inverse of $\gamma_n$ satisfies
$(\gamma_n)^{-1} (r/s) = rs^{p^n-1}x^n \otimes (1/s)$ for all $r \in
R$ and $s \in S$.
\item Now represent $H := H^d_{{\mathfrak{m}}}(R)$ as the $d$-th cohomology
module of the \u{C}ech complex of $R$ with respect to the system
of parameters $a_1, \ldots, a_d$, as in \ref{tc.1}(i). We can use
isomorphisms like that described in part (ii), together with the
right exactness of tensor product, to see that (when we think of
$H$ simply as an $R$-module) there is an isomorphism of
$R$-modules $\delta_n: Rx^n \otimes_R H \stackrel{\cong}{\longrightarrow} H$
for which
$$
\delta_n \left( bx^n \otimes \left[\frac{r}{(a_1\ldots
a_d)^j}\right] \right) = \left[\frac{br^{p^n}}{(a_1\ldots
a_d)^{jp^n}}\right] \quad \mbox{~for all~} b,r \in R \mbox{~and~}
j \in \mathbb{N}_0.
$$
Thus, in terms of the natural left $R[x,f]$-module structure on
$H$, we have $\delta_n \left( bx^n \otimes h\right) = bx^nh$ for
all $b \in R$ and $h \in H$.
\item It thus follows that, for $h \in H$, we have $h
\in 0^*_{H}$ if and only if there exists $c \in R^{\circ}$ such
that $cx^nh = 0$ for all $n \gg 0$.
\item Observe that $\Gamma_x(H)
\subseteq 0^*_H$.
\item Suppose that $(R,{\mathfrak{m}})$ is Cohen--Macaulay, and use the
notation of part (iii) again; write $a := a_1\ldots a_d$. Let $r
\in R$, $j \in \mathbb{N}_0$, and let $h := \left[r/a^j\right]$ in $H$. It
follows from \ref{tc.1}(i) that $r \in ((a_1^j, \ldots,
a_d^j)R)^*$ if and only if there exists $c \in R^{\circ}$ such
that $cx^nh = 0$ for all $n \gg 0$. Thus, by part (iv) above, $r
\in ((a_1^j, \ldots, a_d^j)R)^*$ if and only if $h \in 0^*_{H}$.
\item Let the situation and notation be as in part (vi) above.
Then the $R$-homomorphism $\nu_j : R/(a_1^j, \ldots, a_d^j)R \longrightarrow
H$ for which $\nu_j(r' + (a_1^j, \ldots, a_d^j)R) =
\left[r'/a^j\right]$ for all $r'\in R$ is a monomorphism (by
\ref{tc.1}(i)). Furthermore, the induced homogeneous
$R[x,f]$-homomorphism
$$
R[x,f]\otimes_R\nu_j : R[x,f]\otimes_R\left(R/(a_1^j, \ldots,
a_d^j)R\right) \longrightarrow R[x,f]\otimes_RH
$$
of graded left $R[x,f]$-modules is also a monomorphism: this is
because a homogeneous element of $\Ker
\left(R[x,f]\otimes_R\nu_j\right)$ must have the form
$r'x^k\otimes(1 + (a_1^j, \ldots, a_d^j)R)$ for some $r' \in R$
and $k \in \mathbb{N}_0$; since $r'x^k\otimes\left[1/a^j\right] = 0$, it
follows from \ref{tc.1r}(iii) and \ref{tc.1}(i) that $r' \in
(a_1^{jp^k}, \ldots, a_d^{jp^k})R$, so that $r'x^k\otimes(1 +
(a_1^j, \ldots, a_d^j)R) = 0$.
\end{enumerate}
\end{rmk}
\begin{lem}
\label{tc.1s} Suppose that $(R,{\mathfrak{m}})$ is a local ring of dimension $d > 0$;
set $H := H^d_{{\mathfrak{m}}}(R)$ and $G := H/\Gamma_x(H)$. Let $h \in H$. Then the
following statements are equivalent:
\begin{enumerate}
\item $h \in 0^*_{H}$;
\item $h + \Gamma_x(H) \in 0^*_{G}$;
\item there exists $c \in R^{\circ}$ such
that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \gg 0$;
\item there exists $c \in R^{\circ}$ such
that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \geq 0$.
\end{enumerate}
\end{lem}
\begin{proof} Let
$m_0$ denote the HSL-number of $H$ (see\/ {\rm \ref{hslno}}).
(i) $\Rightarrow$ (ii) This is immediate from the fact that
$0^*_{H} \subseteq (\Gamma_x(H))^*_H$ once it is recalled from
\cite[Remark (8.4)]{HocHun90} that $h + \Gamma_x(H) \in 0^*_G$ if
and only if $h \in (\Gamma_x(H))^*_H$.
(ii) $\Rightarrow$ (iii) Suppose that $h + \Gamma_x(H) \in
0^*_{G}$, so that $h \in (\Gamma_x(H))^*_H$. Under the isomorphism
$\delta_n: Rx^n \otimes_R H \stackrel{\cong}{\longrightarrow}H$ of
\ref{tc.1r}(iii) (where $n \in \mathbb{N}_0$), the image of $Rx^n \otimes_R
\Gamma_x(H)$ is mapped into $\Gamma_x(H)$. Therefore there exists
$c \in R^{\circ}$ such that $cx^nh \in\Gamma_x(H)$ for all $n \gg
0$, that is, such that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n
\gg 0$.

(iii) $\Rightarrow$ (iv) Suppose that there exist
$c \in R^{\circ}$ and $n_0 \in \mathbb{N}_0$ such
that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \geq n_0$. Then,
for all $j \in \mathbb{N}_0$, we have
$$
x^{n_0}cx^j(h+\Gamma_x(H)) = c^{p^{n_0}}x^{n_0 + j}(h+\Gamma_x(H)) = 0 \quad
\mbox{~in~} G.
$$
Since $G$ is $x$-torsion-free, we see that $cx^j(h+\Gamma_x(H)) = 0$ for all
$j \geq 0$.
(iv) $\Rightarrow$ (i) Suppose that there exists $c \in R^{\circ}$ such
that $cx^n(h+\Gamma_x(H)) = 0$ in $G$ for all $n \geq 0$. Then $cx^nh \in
\Gamma_x(H)$ for all $n \geq 0$, so that
$x^{m_0}cx^nh = 0$ for all $n \geq 0$. This implies that
$c^{p^{m_0}}x^{m_0 + n}h = 0$ for all $n \geq 0$, so that $h \in 0^*_{H}$ by
\ref{tc.1r}(iv).
\end{proof}
\begin{defi}
\label{tc.1t} The {\em weak parameter test ideal $\sigma'(R)$
of $R$\/} is defined to be
the ideal generated by $0$ and all weak parameter test elements for $R$. (By a
`weak parameter test element' for $R$ we mean a $p^{w_0}$-weak
parameter test element for $R$ for some $w_0 \in \mathbb{N}_0$.)
It is easy to see that each element of $\sigma'(R) \cap R^{\circ}$
is a weak parameter test element for $R$.
\end{defi}
The next theorem is one of the main results of this paper.
\begin{thm}
\label{tc.2} Let $(R,{\mathfrak{m}})$ (as in\/ {\rm \ref{nt.1}})
be a Cohen--Macaulay local ring of
dimension $d > 0$.
Set $H := H^d_{{\mathfrak{m}}}(R)$, a left $R[x,f]$-module which
is Artinian as an $R$-module; let $m_0$ be its HSL-number (see\/ {\rm \ref{hslno}}),
and let $q_0 := p^{m_0}$.
Set $G := H/\Gamma_x(H)$, an $x$-torsion-free left
$R[x,f]$-module. By\/ {\rm \ref{ga.12c}} and\/ {\rm \ref{ga.13}},
there exists a (uniquely determined) smallest ideal $\mathfrak{b}$ of
height at least $1$ in the set $\mathcal{I}(G)$ of
$G$-special $R$-ideals.
Let $c$ be any element of $\mathfrak{b} \cap R^{\circ}$. Then $c^{q_0}$ is a $q_0$-weak
parameter test element for $R$. In particular, $R$ has a $q_0$-weak
parameter test element. In fact, the weak parameter test ideal $\sigma'(R)$ of
$R$ satisfies $\mathfrak{b}^{[q_0]} \subseteq \sigma'(R) \subseteq \mathfrak{b}$.
\end{thm}
\begin{note} It should be noted that, in Theorem \ref{tc.2}, it is
not assumed that $R$ is excellent. There are examples of Gorenstein
local rings of characteristic $p$
which are not excellent: see \cite[p.\ 260]{HMold}.
\end{note}
\begin{proof} We have to show that, for an arbitrary parameter ideal ${{\mathfrak{m}}athfrak{a}}$
of $R$ and $r \in {{\mathfrak{m}}athfrak{a}}^*$, we have $c^{q_0}r^{p^n} \in {{\mathfrak{m}}athfrak{a}}^{[p^n]}$ for all
$n \geq m_0$. In the first part of the proof, we establish this in the case where
${{\mathfrak{m}}athfrak{a}}$ is an ideal ${\mathfrak{q}}$ generated by a full system of parameters $a_1, \ldots, a_d$
for $R$.
Let $r \in {\mathfrak{q}}^*$, so that there exists $\widetilde{c} \in R^{\circ}$ such
that $\widetilde{c}r^{p^n} \in {\mathfrak{q}}^{[p^n]}$ for all $n \gg 0$. Use $a_1,
\ldots, a_d$ in the notation of \ref{tc.1}(i) for $H^d_{{\mathfrak{m}}}(R) =
H$, and write $a := a_1\ldots a_d$. We have $ \widetilde{c}x^n\left[r/a\right]
= \left[\widetilde{c}r^{p^n}\!/a^{p^n}\right] = 0 $ in $H$ for all $n \gg 0$.
Set $h := \left[r/a\right] \in H$. Thus $\widetilde{c}x^nh = 0$ for all $n \gg 0$.
It therefore follows from Corollary \ref{ga.15} that $h$ is annihilated by $
\bigoplus_{n\geq m_0}{{\mathfrak{m}}athfrak{b}}^{[p^{m_0}]} x^n$, so that, in particular,
$c^{p^{m_0}}x^nh = 0$ for all $n \geq m_0$. Hence, in $H$,
$$
\left[{\mathfrak{r}}ac{c^{q_0}r^{p^{n}}}{(a_1 \ldots a_d)^{p^{
n}}}\right] = c^{p^{m_0}}x^{n}\left[{\mathfrak{r}}ac{r}{a_1 \ldots
a_d}\right] = c^{p^{m_0}}x^n h = 0 {\mathfrak{q}}uad {\mathfrak{m}}box{~for all~} n \geq m_0.
$$
Since $R$ is Cohen--Macaulay, we can now deduce from \ref{tc.1}(i) that
$c^{q_0}r^{p^n} \in {\mathfrak{q}}^{[p^n]}$ for all $n \geq m_0$, as required (for ${\mathfrak{q}}$).
Now let ${{\mathfrak{m}}athfrak{a}}$ be an arbitrary parameter ideal of $R$. A proper
ideal in a Cohen--Macaulay local ring is a parameter ideal if and
only if it can be generated by part of a system of parameters. In
view of the first part of this proof, we can, and do, assume that
$\height {{\mathfrak{m}}athfrak{a}} < d$. There exist a system of parameters $a_1,
\ldots, a_d$ for $R$ and an integer $i \in \{0, \ldots, d-1\}$
such that ${{\mathfrak{m}}athfrak{a}} = (a_1,\ldots, a_i)R$. Let $r \in {{\mathfrak{m}}athfrak{a}}^*$. Then, for
each $v \in {\mathfrak{m}}athbb N$, we have $r \in ((a_1,\ldots, a_i,a_{i+1}^v,
\ldots, a_d^v)R)^*$, and, since $a_1,\ldots, a_i,a_{i+1}^v,
\ldots, a_d^v$ is a system of parameters for $R$, it follows from
the first part of this proof that
$$
c^{q_0}r^{p^n} \in ((a_1,\ldots,
a_i,a_{i+1}^v, \ldots, a_d^v)R)^{[p^n]} = \left(a_1^{p^n},\ldots,
a_i^{p^n},a_{i+1}^{vp^n}, \ldots, a_d^{vp^n}\right)\!R
{\mathfrak{q}}uad {\mathfrak{m}}box{~for all~} n \geq m_0.
$$
Therefore, for all $n \geq m_0$,
$$
c^{q_0}r^{p^n} \in \bigcap_{v \in {\mathfrak{m}}athbb N} \left(a_1^{p^n},\ldots,
a_i^{p^n},a_{i+1}^{vp^n}, \ldots, a_d^{vp^n}\right)\!R \subseteq
\bigcap_{v \in {\mathfrak{m}}athbb N} \left({{\mathfrak{m}}athfrak{a}}^{[p^{n}]} + {\mathfrak{m}}^{vp^{n}} \right) =
{{\mathfrak{m}}athfrak{a}}^{[p^{n}]}
$$
by Krull's Intersection Theorem. This shows that
$c^{q_0}$ is a $q_0$-weak
parameter test element for $R$, so that, since ${{\mathfrak{m}}athfrak{b}}$ can be generated by
elements in ${{\mathfrak{m}}athfrak{b}} \cap R^{\circ}$, it follows that ${{\mathfrak{m}}athfrak{b}}^{[q_0]} \subseteq \sigma'(R)$.
Now let $c \in \sigma'(R)\cap R^{\circ}$; we suppose that $c {\mathfrak{n}}ot\in {{\mathfrak{m}}athfrak{b}}$ and
seek a contradiction. Thus ${{\mathfrak{m}}athfrak{b}} \subset {{\mathfrak{m}}athfrak{b}} + Rc$. Let $L : =
{{\mathfrak{m}}athfrak{a}}nn_G({{\mathfrak{m}}athfrak{b}} R[x,f])$ and $L' := {{\mathfrak{m}}athfrak{a}}nn_G(({{\mathfrak{m}}athfrak{b}} + Rc)R[x,f])$, two
special annihilator submodules of the $x$-torsion-free left
$R[x,f]$-module $G$. Since $L$ corresponds to the
$G$-special $R$-ideal ${{\mathfrak{m}}athfrak{b}}$, we must have $L' \subset L$,
since otherwise we would have
$$
({{\mathfrak{m}}athfrak{b}} + Rc)R[x,f] \subseteq \grann_{R[x,f]}L' = \grann_{R[x,f]}L =
{{\mathfrak{m}}athfrak{b}} R[x,f].
$$
Therefore there exists $h \in H$ such that $h + \Gamma_x(H)$ is
annihilated by ${{\mathfrak{m}}athfrak{b}} R[x,f]$ but not by $({{\mathfrak{m}}athfrak{b}} + Rc) R[x,f]$. Since
$\height {{\mathfrak{m}}athfrak{b}} \geq 1$, it follows from Lemma \ref{tc.1s} that $h
\in 0^*_H$.
Choose a system of parameters $a_1, \ldots, a_d$ for $R$; use
$a_1, \ldots, a_d$ in the notation of \ref{tc.1}(i) for
$H^d_{{\mathfrak{m}}}(R) = H$, and write $a := a_1\ldots a_d$. There exist $r
\in R$ and $j \in {\mathfrak{n}}n$ such that $h = [r/a^j]$. By
\ref{tc.1r}(vi), we have $r \in ((a_1^j, \ldots, a_d^j)R)^*$.
Since $c$ is a weak parameter test element for $R$, we see that
$cr^{p^n} \in (a_1^{jp^n}, \ldots, a_d^{jp^n})R$ for all $n \gg
0$, so that $cx^nh = 0$ for all $n \gg 0$. Thus there is some $n_0
\in {\mathfrak{n}}n$ such that $cx^n(h + \Gamma_x(H)) = 0$ in $G$ for all $n
\geq n_0$. Therefore, for all $j \in {\mathfrak{n}}n$, we have
$$
x^{n_0}cx^j(h + \Gamma_x(H)) = c^{p^{n_0}}x^{n_0 + j}(h + \Gamma_x(H)) = 0,
$$
so that $cx^j(h + \Gamma_x(H)) = 0$ because
$G$ is $x$-torsion-free. Thus $h + \Gamma_x(H)$ is annihilated by
$RcR[x,f]$ as well as by ${{\mathfrak{m}}athfrak{b}} R[x,f]$. This is a contradiction.
Therefore ${{\mathfrak{m}}athfrak{b}}^{[q_0]} \subseteq \sigma'(R) \subseteq {{\mathfrak{m}}athfrak{b}}$, since $\sigma'(R)$
can be generated by its elements that lie in $R^{\circ}$.
\end{proof}
Use $R'$ to
denote $R$ (as in\/ {\rm \ref{nt.1}}) regarded as an $R$-module by means of $f$.
With this notation, $f : R \longrightarrow R'$ becomes a homomorphism
of $R$-modules.
Recall that, when $(R,{\mathfrak{m}})$ is a local ring of
dimension $d > 0$, we say that $R$ is
{\em $F$-injective\/} precisely when the induced homomorphisms
$ H^i_{{\mathfrak{m}}}(f) : H^i_{{\mathfrak{m}}}(R) \longrightarrow H^i_{{\mathfrak{m}}}(R')$ are injective for all
$i = 0, \ldots, d$. See R. Fedder and K-i. Watanabe \cite[Definition 1.7]{FW87}
and the ensuing discussion.
\begin{cor}
\label{tc.3} Let $(R,{\mathfrak{m}})$ be an $F$-injective Cohen--Macaulay
local ring of dimension $d > 0$. The left $R[x,f]$-module $H :=
H^d_{{\mathfrak{m}}}(R)$ is $x$-torsion-free. By Theorem\/ {\rm \ref{ga.13}},
there exists a (uniquely determined) smallest ideal $\mathfrak{b}$ of
height at least $1$ in the set $\mathcal{I}(H)$ of
$H$-special $R$-ideals.
Let $c$ be any element of $\mathfrak{b} \cap R^{\circ}$. Then $c$ is a
parameter test element for $R$. In fact, $\mathfrak{b}$ is the parameter
test ideal of $R$ (see\/ {\rm \cite[Definition 4.3]{Smith95}}).
\end{cor}
\begin{note} It should be noted that, in Corollary \ref{tc.3}, it is
not assumed that $R$ is excellent.
\end{note}
\begin{proof} With the
notation of Theorem \ref{tc.2}, the HSL-number $m_0$ of $H$ is $0$
when $R$ is $F$-injective, and so $q_0 = 1$ and $G \cong H$ in
this case. By Theorem \ref{tc.2}, each element $c \in \mathfrak{b} \cap
R^{\circ}$ is a $q_0$-weak parameter test element for $R$, that
is, a parameter test element for $R$. Since $R$ has a parameter
test element, its parameter test ideal $\sigma(R)$ is equal to the
ideal of $R$ generated by all parameter test elements. By Theorem
\ref{tc.2}, we therefore have
$
\mathfrak{b} = \mathfrak{b}^{[q_0]} \subseteq \sigma(R) \subseteq \sigma'(R) \subseteq \mathfrak{b}.
$
\end{proof}
\begin{cor}
\label{tc.4} Let $(R,{\mathfrak{m}})$ be an $F$-injective Gorenstein local
ring of dimension $d > 0$. The left $R[x,f]$-module $H :=
H^d_{{\mathfrak{m}}}(R)$ is $x$-torsion-free. By Theorem\/ {\rm \ref{ga.13}},
there exists a (uniquely determined) smallest ideal $\mathfrak{b}$ of
height at least $1$ in the set $\mathcal{I}(H)$ of
$H$-special $R$-ideals.
Let $c$ be any element of $\mathfrak{b} \cap R^{\circ}$. Then $c$ is a test
element for $R$. In fact, $\mathfrak{b}$ is the test ideal of $R$.
\end{cor}
\begin{note} It should be noted that, in Corollary \ref{tc.4}, it is
not assumed that $R$ is excellent.
\end{note}
\begin{proof} This follows immediately from Corollary \ref{tc.3} once
it is recalled that an $F$-injective Cohen--Macaulay local ring is reduced and
that a parameter test element for a reduced Gorenstein local ring $R$
of characteristic $p$ is automatically a test element for $R$: see
the proof of \cite[Proposition 4.1]{Hunek98}.
\end{proof}
\section{\sc Special $R$-ideals and Enescu's $F$-stable primes}
\label{en}
The purpose of this section is to establish connections between
the work in \S \ref{ga} and \S \ref{tc} above and F. Enescu's
$F$-stable primes of an $F$-injective Cohen--Macaulay local ring
$(R,{\mathfrak{m}})$, defined in \cite[\S 2]{Enesc03}.
\begin{ntn}
\label{en.1} Throughout this section, $(R,{\mathfrak{m}})$ will be assumed to
be a Cohen--Macaulay local ring of dimension $d > 0$, and we shall
let $a_1, \ldots, a_d$ denote a fixed system of parameters for
$R$, and set ${\mathfrak{q}} := (a_1, \ldots, a_d)R$. We shall use $a_1,
\ldots, a_d$ in the notation of \ref{tc.1}(i) for $H :=
H^d_{{\mathfrak{m}}}(R)$, and write $a := a_1\ldots a_d$.
For each $b \in R$, we define (following Enescu \cite[Definition
1.1]{Enesc03}) the ideal ${\mathfrak{q}}(b)$ by
$$
{\mathfrak{q}}(b) := \left\{ c \in R : cb^{p^n} \in {\mathfrak{q}}^{[p^n]} \mbox{~for
all~} n \gg 0 \right\}.
$$
(Actually, Enescu only made this definition when $b {\mathfrak{n}}ot\in {\mathfrak{q}}$;
however, the right-hand side of the above display is equal to $R$
when $b \in {\mathfrak{q}}$, and there is no harm in our defining ${\mathfrak{q}}(b)$ to
be $R$ in this case.) In view of \ref{tc.1}(i), the ideal ${\mathfrak{q}}(b)$
is equal to the ultimate constant value of the ascending chain
$(\mathfrak{b}_n)_{n \in \mathbb{N}_0}$ of ideals of $R$ for which
$\bigoplus_{n\in\mathbb{N}_0}\mathfrak{b}_n x^n = \grann_{R[x,f]}R[x,f][b/a]$, the
graded annihilator of the $R[x,f]$-submodule of $H$ generated by
$[b/a]$.
Now consider the special case in which $R$ is (also)
$F$-injective. Then the left $R[x,f]$-module $H$ is
$x$-torsion-free, and so it follows from Lemma \ref{nt.5} that,
for each $b \in R$, the ideal ${\mathfrak{q}}(b)$ is radical and
$\grann_{R[x,f]}R[x,f][b/a] = {\mathfrak{q}}(b)R[x,f]$; thus ${\mathfrak{q}}(b)$ is an
$H$-special $R$-ideal. We again follow Enescu and set
$$
Z_{{\mathfrak{q}},R} := \{ {\mathfrak{q}}(b) : b \in R \setminus {\mathfrak{q}} \}.
$$
\end{ntn}
Enescu proved, in \cite[Theorem 2.1]{Enesc03}, that (when
$(R,{\mathfrak{m}})$ is Cohen--Macaulay and $F$-injective) the set of maximal
members of $Z_{{\mathfrak{q}},R}$ is independent of the choice of ${\mathfrak{q}}$, is
finite, and consists of prime ideals. The next theorem shows that
the set of maximal members of $Z_{{\mathfrak{q}},R}$ is actually equal to the
set of maximal members of $\mathcal{I}(H) \setminus \{R\}$: we saw
in Lemma \ref{ga.11} that this set consists of prime ideals, and
in Corollary \ref{ga.12c} that it is finite.
\begin{thm}
\label{en.2} Let the situation and notation be as in\/ {\rm
\ref{en.1}}, and suppose that the Cohen--Macaulay local ring
$(R,{\mathfrak{m}})$ is $F$-injective. Then the set of maximal members of
$Z_{{\mathfrak{q}},R}$ is equal to the set of maximal members of
$\mathcal{I}(H) \setminus \{R\}$.
\end{thm}
\begin{proof} The comments in \ref{en.1}
show that $Z_{{\mathfrak{q}},R} \subseteq \mathcal{I}(H)$; clearly, no member
of $Z_{{\mathfrak{q}},R}$ can be equal to $R$. It is therefore sufficient for
us to show that a maximal member ${\mathfrak{p}}$ of $\mathcal{I}(H)
\setminus \{R\}$ must belong to $Z_{{\mathfrak{q}},R}$.

Let $L \in \mathcal{A}(H)$ be the special annihilator submodule of
$H$ corresponding to ${\mathfrak{p}}$. Now $H$ is an Artinian $R$-module: let
$h$ be a non-zero element of the socle of $L$. By Lemma
\ref{ga.11}, we have $\grann_{R[x,f]}R[x,f]h = {\mathfrak{p}} R[x,f]$.
However, for each $j \in \mathbb{N}$, we have $R[1/a^j] \cong R/(a_1^j,
\ldots, a_d^j)R$, by \ref{tc.1r}(vii), so that
$$
\Hom_R(R/{\mathfrak{m}},R[1/a^j]) \cong \Hom_R(R/{\mathfrak{m}},R/(a_1^j, \ldots, a_d^j)R) \cong
\Ext^d_R(R/{\mathfrak{m}},R)
$$
by \cite[Lemma 1.2.4]{BH}. It follows that it is possible to write $h$ in
the form $h = [r/a]$ for some $r \in R$, and therefore ${\mathfrak{p}} =
{\mathfrak{q}}(r) \in Z_{{\mathfrak{q}},R}$.
\end{proof}
\end{document} |
\begin{document}
\title[A class of nonlocal elliptic
problems in the half space with a hole]
{Existence of positive solution for a class of nonlocal elliptic
problems in the half space with a hole}
\author{Xing Yi}
\thanks{*College of Mathematics and Computer Science, Key Laboratory of High Performance Computing
and Stochastic Information Processing (Ministry of Education of China),
Hunan Normal University, Changsha, Hunan 410081, P. R. China
([email protected])}
\maketitle
\vskip 0.3in
{\bf Abstract}\quad This work is concerned with the existence of solutions for the following class of nonlocal elliptic problems
\begin{eqnarray}\label{eq:0.1}
&&\left\{\begin{array}{l}
(-\Delta)^{s} u+u=|u|^{p-2} u \text { in } \Omega_{r} \\
u \geq 0 \quad \text { in }\Omega_{r} \text { and } u \neq 0 \\
u=0 \quad \text { in } \mathbb{R}^{N} \backslash \Omega_{r}
\end{array}\right.,
\end{eqnarray}
involving the fractional Laplacian operator $(-\Delta)^{s}$, where $s \in(0,1)$, $N>2 s$, $\Omega_{r}$ is the half space with a hole in $\mathbb{R}^N$ and $p \in\left(2,2_{s}^{*}\right)$. The main technical approach is based on variational and topological methods.
\vskip 0.1in
\noindent{\it Keywords:}\quad Nonlocal elliptic problems, Positive high energy solution, Half space with a hole
\noindent {\bf AMS} classification: 58J05, 35J60. \vspace{3mm}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\Section*{1. Introduction}
\setcounter{section}{1}\setcounter{equation}{0}
Let $\mathbb{R}_{+}^{N}=\left\{\left(x^{\prime}, x_{N}\right) \in \mathbb{R}^{N-1} \times \mathbb{R} \mid 0<x_{N}<\infty\right\}$ be the upper half space. $\Omega_{r}$ is an unbounded smooth domain such that
\[\overline{\Omega_{r}}\subset \mathbb{R}_{+}^{N},\]
and \[ \mathbb{R}_{+}^{N}\setminus\overline{\Omega_{r}}\subset B_{\rho}(a_{r})\subset \mathbb{R}_{+}^{N}\ \] with $a_{r}=(a,r)\in \mathbb{R}_{+}^{N}$.
Indeed, $\Omega_r$ is the upper half space with a hole.
We consider the following fractional elliptic problem:
\begin{eqnarray}\label{eq:1.1}
&&\left\{\begin{array}{l}
(-\Delta)^{s} u+u=|u|^{p-2} u \text { in } \Omega \\
u \geq 0 \quad \text { in }\Omega_{r} \text { and } u \neq 0 \\
u=0 \quad \text { in } \mathbb{R}^{N} \backslash \Omega
\end{array}\right.,
\end{eqnarray}
where $ \Omega=\Omega_r,\ s \in(0,1), N>2 s$, $p \in\left(2,2_{s}^{*}\right),$ where $2_{s}^{*}=\frac{2 N}{N-2 s}$ is the fractional critical Sobolev exponent and $(-\Delta)^{s}$ is the classical fractional Laplace operator.
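For the reader's convenience we recall one standard way of writing this operator: up to a positive normalizing constant $C(N,s)$, whose exact value plays no role in what follows, one has
$$
(-\Delta)^{s} u(x)=C(N, s)\,\mathrm{P.V.} \int_{\mathbb{R}^{N}} \frac{u(x)-u(y)}{|x-y|^{N+2 s}}\, d y, \qquad x \in \mathbb{R}^{N},
$$
for sufficiently smooth $u$; see, for instance, \cite{q16}.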
When $s \nearrow 1^{-}$, problem (\ref{eq:1.1}) is related to the following elliptic problem:
\begin{eqnarray}\label{eq:1.2}
-\triangle u+u=|u|^{p-1}u,\quad x\in \Omega, \quad u\in H_{0}^{1}(\Omega).
\end{eqnarray}
When $\Omega$ is a bounded domain, the embedding $H_{0}^{1}(\Omega)\hookrightarrow L^{p}(\Omega)$, $1< p<\frac{2N}{N-2}$, is compact, and this yields a positive solution of (\ref{eq:1.2}). If $\Omega$ is an unbounded domain, we cannot obtain a solution of problem (\ref{eq:1.2}) by using the Mountain Pass Theorem directly,
because the embedding $H_{0}^{1}(\Omega)\hookrightarrow L^{p}(\Omega)$, $1< p<\frac{2N}{N-2}$, is no longer compact. However, if $\Omega=\mathbb{R}^{N}$, Berestycki and Lions \cite{21}
proved that equation (\ref{eq:1.2}) has a radial positive solution,
by applying the compactness of the embedding $H_{r}^{1}(\mathbb{R}^{N})\hookrightarrow L^{p}(\mathbb{R}^{N})$, $2<p<\frac{2N}{N-2}$, where $H_r^{1}(\mathbb{R}^{N})$ denotes the space of radially symmetric functions in $H^{1}(\mathbb{R}^{N})$.
An alternative approach is P. L. Lions's concentration-compactness principle \cite{7}, which also produces a positive solution of problem (\ref{eq:1.2}) in $\mathbb{R}^{N}$.
By the moving plane method, Gidas, Ni and Nirenberg \cite{3} also proved that every positive solution of the equation
\begin{eqnarray}\label{eq:1.3}
-\triangle u+u=|u|^{p-1}u,\quad x\in \mathbb{R}^{N}, \quad u\in H^{1}(\mathbb{R}^{N})
\end{eqnarray}
is radially symmetric with respect to some point in $\mathbb{R}^{N}$ and satisfies
\begin{eqnarray}
u(r)\,r^{\frac{N-1}{2}}e^{r}=\gamma+o(1)\quad \text{ as } r \rightarrow \infty
\end{eqnarray}
for some constant $\gamma>0$.
Kwong \cite{188} proved that the positive solution of (\ref{eq:1.3}) is unique up to translations.
In fact, Esteban and Lions \cite{1} proved that there
is no nontrivial solution of equation (\ref{eq:1.2}) when $\Omega$ is an Esteban-Lions domain (for example $\mathbb{R}_+^3$). Thus, one may try to change the topological properties of the domain $\Omega$ in order to look for a solution of problem (\ref{eq:1.2}). Wang \cite{4} proved
that if $\rho$ is sufficiently small and $z_{0N}$ is sufficiently large, then equation (\ref{eq:1.2}) admits a positive higher energy
solution in $\mathbb{R}_{+}^{N} \backslash \overline{B_{\rho}\left(z_{0}^{\prime}, z_{0 N}\right)}$. Such problems have been extensively studied in recent
years; see, for instance, \cite{9,55} and the references therein. These results suggest
that the existence of solutions of equation (\ref{eq:1.2}) is affected by the topological properties of the domain $\Omega$.
Recently, the case $s \in(0,1)$ has received special attention, because it involves the fractional Laplacian operator $(-\Delta)^{s}$, which arises in a quite natural way in many different contexts, such as, among others, the thin obstacle problem, optimization, finance, phase transitions, stratified materials, anomalous diffusion, crystal dislocation, soft thin films, semipermeable membranes, flame propagation, conservation laws, ultra-relativistic limits of quantum mechanics, quasi-geostrophic flows, multiple scattering, minimal surfaces, materials science and water waves; for more details see \cite{x1,e16,x2,D21,x4}.
When $\Omega \subset \mathbb{R}^{N}$ is an exterior domain, i.e.\ an unbounded domain with smooth boundary $\partial \Omega \neq \emptyset$ such that $\mathbb{R}^{N} \backslash \Omega$ is bounded, with $s \in(0,1)$, $N>2 s$ and $p \in\left(2,2_{s}^{*}\right)$, the above problem was studied by O. Alves, Giovanni Molica Bisci and César E. Torres Ledesma in \cite{21}, who proved that (\ref{eq:1.1}) does not have a ground state solution. This fact represents a serious difficulty when dealing with this kind of nonlinear fractional elliptic phenomena. More precisely, the authors analyzed the behavior of Palais-Smale sequences and gave a precise estimate of the energy levels where the Palais-Smale condition fails, which made it possible to show that problem (\ref{eq:1.1}) has at least one positive solution provided $\mathbb{R}^{N} \backslash \Omega$ is small enough. A key point in the approach explored in \cite{16,9} is the existence and uniqueness, up to translation, of a positive solution $Q$ of the limit problem associated with (\ref{eq:1.1}), given by
\begin{eqnarray}\label{eq:1.6}
(-\Delta)^{s} u+u=|u|^{p-2} u \text { in } \mathbb{R}^{N},
\end{eqnarray}
for every $p \in\left(2,2_{s}^{*}\right)$. Moreover, $Q$ is radially symmetric about the origin and monotonically decreasing in $|x|$. In contrast with the classical elliptic case, the exponential decay at infinity is not used in order to prove the existence of a nonnegative solution for (\ref{eq:1.1}).
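Here and in what follows, $2_{s}^{*}$ denotes the fractional critical Sobolev exponent,
$$
2_{s}^{*}:=\frac{2 N}{N-2 s}\qquad (N>2s),
$$
which formally reduces to the classical critical exponent $\frac{2N}{N-2}$ as $s\to1$.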
When $\Omega = \mathbb{R}_+^N$, Wenxiong Chen, Yan Li and Pei Ma \cite{11d6} (Theorem 6.8.3, p.~123) proved, by the moving plane method, that there is no nontrivial solution of problem (\ref{eq:1.1}).
It is therefore interesting to consider the existence of higher energy solutions of problem (\ref{eq:1.1})
in a half space with a hole in $\mathbb{R}^{N}$.
\begin{theorem}
There are $\rho_{0}>0$ and $r_{0}>0$ such that if $0<\rho\leq\rho_{0}$ and $r \geq r_{0}$,
then there is a positive solution of equation (\ref{eq:0.1}).
\end{theorem}
The paper is organized as follows. In Section 2, we give some preliminary results. A compactness lemma is given in Section 3. Finally, we give the proof of Theorem 1.1.
\setcounter{equation}{0}\Section{Some preliminary results }
For $s \in(0,1)$ and $N>2 s,$ the fractional Sobolev space of order $s$ on $\mathbb{R}^{N}$ is defined by
$$
H^{s}\left(\mathbb{R}^{N}\right):=\left\{u \in L^{2}\left(\mathbb{R}^{N}\right): \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x<\infty\right\}
$$
endowed with the norm
$$
\|u\|_{s}:=\left(\int_{\mathbb{R}^{N}}|u(x)|^{2} d x+\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x\right)^{1 / 2}
.$$
We recall the fractional version of the Sobolev embeddings (see \cite{q16}).
\begin{theorem}
Let $s \in(0,1),$ then there exists a positive constant $C=C(N, s)>0$ such that
$$
\|u\|_{L^{2_{s}^{*}}\left(\mathbb{R}^{N}\right)}^{2} \leq C \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x
$$
and then $H^{s}\left(\mathbb{R}^{N}\right) \hookrightarrow L^{q}\left(\mathbb{R}^{N}\right)$ is continuous for all $q \in\left[2,2_{s}^{*}\right] .$ Moreover, if $\Theta \subset \mathbb{R}^{N}$ is a
bounded domain, we have that the embedding $H^{s}\left(\mathbb{R}^{N}\right) \hookrightarrow L^{q}(\Theta)$ is compact for any $q \in\left[2,2_{s}^{*}\right) .$
\end{theorem}
Hereafter, we denote by $X_{0}^{s} \subset H^{s}\left(\mathbb{R}^{N}\right)$ the subspace defined by
$$
X_{0}^{s}:=\left\{u \in H^{s}\left(\mathbb{R}^{N}\right): u=0 \text { a.e. in } \mathbb{R}^{N} \backslash \Omega\right\}.
$$
We endow $X_{0}^{s}$ with the norm $\|\cdot\|_{s}$. Moreover we introduce the following norm
$$
\|u\|:=\left(\int_{\Omega_{r}}|u(x)|^{2} d x+\iint_{\mathcal{Q}} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x\right)^{\frac{1}{2}}
$$
where $\mathcal{Q}:=\mathbb{R}^{2 N} \backslash\left(\Omega_{r}^{c} \times \Omega_{r}^{c}\right)$. We point out that $\|u\|_{s}=\|u\|$ for any $u \in X_{0}^{s}$; indeed, for such $u$ one has $u(x)-u(z)=0$ for a.e.\ $(x,z)\in\Omega_{r}^{c}\times\Omega_{r}^{c}$, so the Gagliardo integrals over $\mathbb{R}^{2N}$ and over $\mathcal{Q}$ coincide, and similarly the $L^{2}$ integrals over $\mathbb{R}^{N}$ and over $\Omega_{r}$ coincide. Since $\partial \Omega$ is bounded and smooth, by [\cite{D21}, Theorem 2.6], we have the following result.
\begin{theorem}
The space $C_{0}^{\infty}(\Omega)$ is dense in $\left(X_{0}^{s},\|\cdot\|\right) .$
\end{theorem}
In what follows, we denote by $H^{s}(\Omega)$ the usual fractional Sobolev space endowed with the norm
$$
\|u\|_{H^{s}}:=\left(\int_{\Omega}|u(x)|^{2} d x+\int_{\Omega} \int_{\Omega} \frac{|u(x)-u(z)|^{2}}{|x-z|^{N+2 s}} d z d x\right)^{\frac{1}{2}}.
$$
Related to these fractional spaces, we have the following properties.
{\bf Proposition 2.3.} The following assertions hold true:
(i) If $v \in X_{0}^{s}$, we have that $v \in H^{s}(\Omega)$ and
$$
\|v\|_{H^{s}} \leq\|v\|_{s}=\|v\| .
$$
(ii) Let $\Theta$ be an open set with continuous boundary. Then, there exists a positive constant $\mathfrak{C}=\mathfrak{C}(N, s)$ such that
$$
\|v\|_{L^{2_{s}^{*}}(\Theta)}^{2}\leq\|v\|_{L^{2_{s}^{*}}\left(\mathbb{R}^{N}\right)}^{2} \leq \mathfrak{C} \iint_{\mathbb{R}^{2 N}} \frac{|v(x)-v(z)|^{2}}{|x-z|^{N+2 s}} d z d x
$$
for every $v \in X_{0}^{s} ;$ see [\cite{e16}, Theorem 6.5$]$.
From now on, $M_{\infty}$ denotes the following constant
\begin{eqnarray}\label{eq:1.s6}
M_{\infty}:=\inf \left\{\|u\|_{s}^{2}: u \in H^{s}\left(\mathbb{R}^{N}\right) \text { and } \int_{\mathbb{R}^{N}}|u(x)|^{p} d x=1\right\},
\end{eqnarray}
which is positive by Theorem 2.1. Furthermore, for any $v \in H^{s}\left(\mathbb{R}^{N}\right)$ and $z \in \mathbb{R}^{N},$ we set the function
$$
v^{z}(x):=v(x+z).
$$
Then, by doing the change of variable $\tilde{x}=x+z$ and $\tilde{y}=y+z,$ it is easily seen that
$$\left\|v^{z}\right\|_{s}^{2}=\|v\|_{s}^{2} \text { as well as }\left\|v^{z}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}=\|v\|_{L^{p}\left(\mathbb{R}^{N}\right)} .$$
Arguing as in \cite{b16} the following result holds true.
\begin{theorem}
Let $\left\{u_{n}\right\} \subset H^{s}\left(\mathbb{R}^{N}\right)$ be a minimizing sequence such that
$$
\left\|u_{n}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1 \text { and }\left\|u_{n}\right\|_{s}^{2} \rightarrow M_{\infty} \text { as } n \rightarrow+\infty.
$$
Then, there is a sequence $\left\{y_{n}\right\} \subset \mathbb{R}^{N}$ such that $\left\{u_{n}^{y_{n}}\right\}$ has a convergent subsequence, and so, $M_{\infty}$ is attained.
\end{theorem}
As a byproduct of the above result the next corollary is obtained.
{\bf Corollary 1.} There is $v \in H^{s}\left(\mathbb{R}^{N}\right)$ such that $\|v\|_{s}^{2}=M_{\infty}$ and $\|v\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1 .$
Let $\varphi$ be a minimizer of $(\ref{eq:1.s6}),$ that is
$$
\varphi \in H^{s}\left(\mathbb{R}^{N}\right), \quad \int_{\mathbb{R}^{N}}|\varphi|^{p} d x=1 \text { and } M_{\infty}=\|\varphi\|_{s}^{2}.
$$
Take \begin{eqnarray}
&&\xi \in C^{\infty}(\mathbb{R}^{+},\mathbb{R}),\eta\in C^{\infty}(\mathbb{R},\mathbb{R}),
\end{eqnarray}
such that
\[\xi(t)=\left\{
\begin{array}{ll}
0, & 0\leq t\leq \rho,\\[2mm]
1, & t\geq 2\rho,
\end{array}
\right.\]
\[ \eta(t)=\left\{
\begin{array}{ll}
0, & t\leq 0,\\[2mm]
1, & t\geq 1,
\end{array}
\right.\]
and \[0\leq \xi\leq 1,\qquad 0\leq \eta\leq 1.
\]
Now, we define
\[f_{y}(x)=\xi(|x-a_{r}|)\eta(x_{N})\varphi(x-y),\]
and
\[\Psi_{y}(x)=\frac{f_{y}(x)}{\left\|f_{y}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}}=c_{y} f_{y}(x) \text { where } c_{y}=\frac{1}{\left\|f_{y}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}} .\]
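By construction, $\xi(|x-a_{r}|)$ vanishes for $|x-a_{r}|\leq\rho$ and $\eta(x_{N})$ vanishes for $x_{N}\leq 0$, so $f_{y}$ vanishes on the closed ball $\overline{B_{\rho}(a_{r})}$ and on the half space $\{x_{N}\leq 0\}$. In particular, whenever $\mathbb{R}^{N}\setminus\Omega_{r}$ is contained in $\overline{B_{\rho}(a_{r})}\cup\{x_{N}\leq 0\}$ (as in the geometry of a half space with a hole considered here), $f_{y}$, and hence $\Psi_{y}$, belongs to $X_{0}^{s}$ and can be used as a competitor in the minimization problems below.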
Throughout this section we endow $X_{0}^{s}$ with the norm
$$
\|u\|:=\left(\iint_{\mathcal{Q}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x+\int_{\Omega_{r}}|u|^{2} d x\right)^{1 / 2}
$$
and denote by $M>0$ the number
\begin{eqnarray}\label{eq:1w.sq6}
M:=\inf \left\{\|u\|^{2}: u \in X_{0}^{s}, \int_{\Omega_{r}}|u(x)|^{p} d x=1\right\} .
\end{eqnarray}
\begin{lemma}\label{lm:2.4}
Let $y=(y^{\prime},y_{N})$. Then we have\\
(1) $\|f_{y}-\varphi(\cdot-y)\|_{L^{p}(\mathbb{R}^{N})}=o(1)$ as $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, or as $y_{N} \rightarrow +\infty$ and $\rho \rightarrow 0$;\\
(2) $\|f_{y}-\varphi(\cdot-y)\|=o(1)$ as $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, or as $y_{N} \rightarrow +\infty$ and $\rho \rightarrow 0$.\label{11}
\end{lemma}
\begin{proof}
Similarly to \cite{9,4}, we argue as follows.
(i) After the change of variables $z=x-y,$ one has
\[
\begin{array}{ll}
\ \|f_{y}-\varphi(\cdot-y)\|^{p}_{L^{p}(\mathbb{R}^{N})}
&=\int_{\mathbb{R}^{N}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm]
&=\int_{\mathbb{R}^{N}}|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p}\ dz.
\end{array}
\]
Let $g_{y}(z)=|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p}.$ Since $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, it follows that
$$
g_{y}(z) \rightarrow 0 \text { a.e. in } \mathbb{R}^{N}
$$
Now, taking into account that
$$
g_{y}(z)=|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1|^{p}|\varphi(z)|^{p} \leq 2^{p}|\varphi(z)|^{p} \in L^{1}\left(\mathbb{R}^{N}\right),
$$
Lebesgue's dominated convergence theorem yields
$$
\int_{\mathbb{R}^{N}} g_{y}(z) d z \rightarrow 0 \text { as } |y-a_{r}|\rightarrow \infty, \ y_{N}\rightarrow +\infty.
$$
Therefore
$$
\|f_{y}-\varphi(\cdot-y)\|_{L^{p}(\mathbb{R}^{N})}=o(1)\quad\text{as } |y-a_{r}|\rightarrow \infty\ \text{and}\ y_{N}\rightarrow +\infty.
$$
For the second case, note that $\xi(|x-a_{r}|)\eta(x_{N})=1$ outside $B_{2\rho}(a_{r})\cup\{x_{N}\leq 1\}$, so that
\[
\begin{array}{ll}
\ \|f_{y}-\varphi(\cdot-y)\|^{p}_{L^{p}(\mathbb{R}^{N})}
&=\int_{B_{2\rho}(a_{r})\cup\{x_{N}\leq 1\}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm]
&\leq\int_{B_{2\rho}(a_{r})}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx\\[2mm]
&+\int_{\{x_{N}\leq 1\}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx
\end{array}
\]
and
\[
\begin{array}{ll}
\ \int_{B_{2\rho}(a_{r})}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx
&\leq C \operatorname{mes}\!\big(B_{2\rho}(a_{r})\big)\max_{x\in\mathbb{R}^{N}}|\varphi(x)|^{p}\rightarrow 0\ \text{as}\ \rho \rightarrow 0,
\end{array}
\]
\[
\begin{array}{ll}
\ \int_{\{x_{N}\leq 1\}}|\xi(|x-a_{r}|)\eta(x_{N})-1|^{p}|\varphi(x-y)|^{p}\ dx
&\leq\int_{\{x_{N}\leq 1\}}|\varphi(x-y)|^{p}\ dx\\[2mm]
&=\int_{\{z_{N}\leq 1-y_{N}\}}|\varphi(z)|^{p}\ dz\\[2mm]
& \rightarrow 0 \quad\text{as}\ y_{N}\rightarrow +\infty.
\end{array}
\]
Therefore
$$
\|f_{y}-\varphi(\cdot-y)\|_{L^{p}(\mathbb{R}^{N})}=o(1)\quad \text{as}\ y_{N} \rightarrow +\infty\ \text{and}\ \rho \rightarrow 0.
$$
(ii)
Now, we claim that
$$
\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|(\xi(|x-a_{r}|)\eta(x_{N})-1) \varphi\left(x-y\right)-(\xi(|z-a_{r}|)\eta(z_{N})-1) \varphi\left(z-y\right)\right|^{2}}{|x-z|^{N+2 s}} d z d x=o(1)\quad\text{as } |y-a_{r}|\rightarrow\infty \text{ and } y_{N}\rightarrow+\infty.
$$
Indeed, let
$$
\Upsilon_{u}(x, z):=\frac{u(x)-u(z)}{|x-z|^{\frac{N}{2}+s}}.
$$
Then, after the change of variables $\tilde{x}=x-y$ and $\tilde{z}=z-y,$ one has
\[\begin{array}{ll}
\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|(\xi(|x-a_{r}|)\eta(x_{N})-1) \varphi\left(x-y\right)-(\xi(|z-a_{r}|)\eta(z_{N})-1) \varphi\left(z-y\right)\right|^{2}}{|x-z|^{N+2 s}} d z d x
& =\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}}\left|\Phi_{y}(x, z)\right|^{2} d z d x,
\end{array}
\]
where
$$
\Phi_{y}(x, z):=\frac{\left(\xi(|x+y-a_{r}|)\eta(x_{N}+y_{N})-1\right) \varphi(x)-\left(\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1\right) \varphi(z)}{|x-z|^{\frac{N}{2}+s}}.
$$
Recalling that $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, we also have
$$
\Phi_{y}(x, z) \rightarrow 0 \text { a.e. in } \mathbb{R}^{N} \times \mathbb{R}^{N}
$$
On the other hand, a direct application of the mean value theorem yields
\begin{eqnarray}\label{ezq:0b.1}
\begin{aligned}
\left|\Phi_{y}(x, z)\right| & \leq\left|\xi(|x+y-a_{r}|)\eta(x_{N}+y_{N})-1\right|\left|\Upsilon_{\varphi}(x, z)\right|+|\varphi(z)|\left|\Upsilon_{1-\xi\eta}\left(x+y, z+y\right)\right| \\
& \leq\left|\Upsilon_{\varphi}(x, z)\right|+\frac{C|\varphi(z)|}{|x-z|^{\frac{N}{2}+s-1}} \chi_{B(z, 1)}(x)+\frac{2|\varphi(z)|}{|x-z|^{\frac{N}{2}+s}} \chi_{B^{c}(z, 1)}(x),
\end{aligned}
\end{eqnarray}
for almost every $(x, z) \in \mathbb{R}^{N} \times \mathbb{R}^{N}.$ Now, it is easily seen that the right hand side in (\ref{ezq:0b.1}) is $L^{2}$-integrable. Moreover, arguing as in (i) (with $p$ replaced by $2$), we have $\|f_{y}-\varphi(\cdot-y)\|_{L^{2}(\mathbb{R}^{N})}=o(1)$ as $|y-a_{r}|\rightarrow \infty$ and $y_{N}\rightarrow +\infty$, or as $y_{N}\rightarrow+\infty$ and $\rho\rightarrow 0$. Thus, by Lebesgue's dominated convergence theorem and the above, it follows that
$$\|f_{y}-\varphi(\cdot-y)\|=o(1)\quad\text{as } |y-a_{r}|\rightarrow \infty\ \text{and}\ y_{N}\rightarrow +\infty.$$
For the second case ($y_{N}\rightarrow+\infty$ and $\rho\rightarrow 0$), write
\[\begin{array}{ll}
\ \|f_{y}-\varphi(x-y)\|^{2}
&= \int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|\left(f_{y}-\varphi(\cdot-y)\right)(x)-\left(f_{y}-\varphi(\cdot-y)\right)(z)\right|^{2}}{|x-z|^{N+2 s}} d z d x \\[2mm]
&+\int_{\mathbb{R}^{N}}\left|f_{y}(x)-\varphi(x-y)\right|^{2} d x .
\end{array}
\]
Setting
$$
I_{1}:=\iint_{\mathbb{R}^{2 N}} \frac{\left|\xi(|x-a_{r}|)\eta(x_{N})-\xi(|z-a_{r}|)\eta(z_{N})\right|^{2}\,|\varphi(x-y)|^{2}}{|x-z|^{N+2 s}} d z d x
$$
and
$$
I_{2}:=\iint_{\mathbb{R}^{2 N}} \frac{\left|\xi(|z-a_{r}|)\eta(z_{N})-1\right|^{2}\,\left|\varphi(x-y)-\varphi(z-y)\right|^{2}}{|x-z|^{N+2 s}} d z d x,
$$
the following inequality holds
$$
\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{\left|\left(f_{y}-\varphi(\cdot-y)\right)(x)-\left(f_{y}-\varphi(\cdot-y)\right)(z)\right|^{2}}{|x-z|^{N+2 s}} d z d x \leq 2\left(I_{1}+I_{2}\right).
$$
Moreover, by the definition of $\xi$ and $\eta$ (after the change of variables $\tilde{x}=x-y$, $\tilde{z}=z-y$ as before), we also have
$$
\left|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1\right|^{2} \frac{|\varphi(x)-\varphi(z)|^{2}}{|x-z|^{N+2 s}} \leq 4 \frac{|\varphi(x)-\varphi(z)|^{2}}{|x-z|^{N+2 s}} \in L^{1}\left(\mathbb{R}^{N} \times \mathbb{R}^{N}\right)
$$
and
$$\left|\xi(|z+y-a_{r}|)\eta(z_{N}+y_{N})-1\right|^{2} \frac{|\varphi(x)-\varphi(z)|^{2}}{|x-z|^{N+2 s}} \rightarrow 0 \text { a.e. in } \mathbb{R}^{N} \times \mathbb{R}^{N}$$
as $y_{N} \rightarrow \infty$ and $\rho \rightarrow 0$.
Hence, Lebesgue's dominated convergence theorem ensures that
$$
I_{2}\rightarrow 0 \text { as } y_{N}\rightarrow+\infty \text{ and } \rho \rightarrow 0.
$$
Now, by [\cite{m21}, Lemma 2.3], for every $y \in \mathbb{R}^{N}$ one has
$$
I_{1}=\iint_{\mathbb{R}^{2 N}} \frac{\left|\xi(|x-a_{r}|)\eta(x_{N})-\xi(|z-a_{r}|)\eta(z_{N})\right|^{2}\,|\varphi(x-y)|^{2}}{|x-z|^{N+2 s}} d z d x \rightarrow 0 \text { as } \rho \rightarrow 0.
$$
Therefore
$$
\|f_{y}-\varphi(\cdot-y)\|=o(1)\quad \text{as } y_{N} \rightarrow +\infty \text{ and } \rho \rightarrow 0.
$$
\end{proof}
\begin{lemma}
The equality $M_{\infty}=M$ holds true. Hence, there is no $u \in X_{0}^{s}$ such that $\|u\|^{2}=M$ and $\|u\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1$, and so, the minimization problem (\ref{eq:1w.sq6}) does not have solution.
\end{lemma}
\begin{proof}
The proof is similar to \cite{4}, and we only give a sketch here. By Proposition 2.3 - part (i) it follows that
$$
M_{\infty} \leq M
$$
Take a sequence $\{y^{n}\}$ in $\Omega_{r}$ such that
\[|y^{n}-a_{r}|\rightarrow \infty\ \text{and}\ y_{N}^{n} \rightarrow +\infty\ \text{as}\ n\rightarrow \infty.\]
Then by Lemma \ref{lm:2.4}, we have
\[\|f_{y^{n}}-\varphi(\cdot-y^{n})\|_{L^{p}(\mathbb{R}^{N})}=o(1)\quad\text{as } |y^{n}-a_{r}|\rightarrow \infty\ \text{and}\ y^{n}_{N}\rightarrow +\infty,\]
\[\|f_{y^{n}}-\varphi(\cdot-y^{n})\|_{s}=o(1)\quad\text{as } |y^{n}-a_{r}|\rightarrow \infty\ \text{and}\ y^{n}_{N}\rightarrow +\infty,\]
\[c_{y^{n}}=\frac{1}{\left\|f_{y^{n}}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}} \rightarrow 1\quad\text{as } |y^{n}-a_{r}|\rightarrow \infty\ \text{and}\ y^{n}_{N}\rightarrow +\infty.\]
Now, since $\varphi$ is a minimizer of $(\ref{eq:1.s6}),$ one has
$$
\left\|f_{y^{n}}\right\|_{s}^{2}=\left\|\varphi\left(\cdot-y^{n}\right)\right\|_{s}^{2}+o_{n}(1)=\|\varphi\|_{s}^{2}+o_{n}(1)=M_{\infty}+o_{n}(1).
$$
Similar arguments ensure that
$$
\left\|\Psi_{y^{n}}\right\|_{s}^{2}=\left\|\Psi_{y^{n}}\right\|^{2}=M_{\infty}+o_{n}(1)
$$
and
$$\left\|\Psi_{y^{n}}\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}=1.$$
Hence $M \leq M_{\infty}$.
We then conclude that $M= M_{\infty}$.
Now, suppose by contradiction that there is $v_{0} \in X_{0}^{s}$ satisfying
$$
\left\|v_{0}\right\|^{2}=M \text { and }\left\|v_{0}\right\|_{L^{p}(\Omega_{r})}=1.
$$
Without loss of generality, we can assume that $v_{0} \geq 0$ in $\Omega_{r}$. Note that, since $M= M_{\infty}$, $v_{0} \in H^{s}\left(\mathbb{R}^{N}\right)$ and $\left\|v_{0}\right\|=\left\|v_{0}\right\|_{s},$ it follows that $v_{0}$ is a minimizer for $(\ref{eq:1.s6}),$ and so, a solution of the problem
\begin{eqnarray}\label{eq:0.v1}
\left\{\begin{aligned}
(-\Delta)^{s} u+u &=M_{\infty} u^{p-1} \text { in } \mathbb{R}^{N} \\
u & \in H^{s}\left(\mathbb{R}^{N}\right) .
\end{aligned}\right.
\end{eqnarray}
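With the normalization of $(-\Delta)^{s}$ implicitly used in (\ref{eq:0.v1}) (so that its quadratic form is the Gagliardo seminorm part of $\|\cdot\|_{s}$), the value of the Lagrange multiplier can be checked directly: testing the equation $(-\Delta)^{s}u+u=\lambda u^{p-1}$ with $u=v_{0}$ gives $\|v_{0}\|_{s}^{2}=\lambda\int_{\mathbb{R}^{N}}|v_{0}|^{p}\,dx=\lambda$, so that $\lambda=\|v_{0}\|_{s}^{2}=M_{\infty}$, consistently with (\ref{eq:0.v1}).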
Therefore, by the maximum principle we get that $v_{0}>0$ in $\mathbb{R}^{N}$, which is impossible, because $v_{0}=0$ in $\mathbb{R}^{N} \backslash \Omega_{r}$. This completes the proof.
\end{proof}
\setcounter{equation}{0}\Section{A Compactness lemma }
In this section we prove a compactness result involving the energy functional $I: X_{0}^{s} \rightarrow \mathbb{R}$ associated to the main problem (\ref{eq:0.1}) and given by
$$
I(u):=\frac{1}{2}\left(\iint_{\mathcal{Q}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x+\int_{\Omega_{r}}|u|^{2} d x\right)-\frac{1}{p} \int_{\Omega_{r}}|u|^{p} d x
$$
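For $u, \phi \in X_{0}^{s}$, the G\^ateaux derivative of $I$ is given by
$$
I^{\prime}(u)\phi=\iint_{\mathcal{Q}} \frac{(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N+2 s}} d y d x+\int_{\Omega_{r}} u \phi\, d x-\int_{\Omega_{r}}|u|^{p-2} u \phi\, d x,
$$
so that, as usual, critical points of $I$ correspond to weak solutions of the main problem (\ref{eq:0.1}).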
In order to do this, we consider the problem
\begin{eqnarray}\label{eq:0.x1}
\left\{\begin{aligned}
(-\Delta)^{s} u+u &=|u|^{p-2} u \text { in } \mathbb{R}^{N} \\
u & \in H^{s}\left(\mathbb{R}^{N}\right)
\end{aligned}\right.
\end{eqnarray}
whose energy functional $I_{\infty}: H^{s}\left(\mathbb{R}^{N}\right) \rightarrow \mathbb{R}$ has the form
$$
I_{\infty}(u):=\frac{1}{2}\left(\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}} \frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2 s}} d y d x+\int_{\mathbb{R}^{N}}|u|^{2} d x\right)-\frac{1}{p} \int_{\mathbb{R}^{N}}|u|^{p} d x.
$$
With the above notations we are able to prove the following compactness result.
\begin{lemma}\label{eq:0cb.1}
Let $\left\{u_{n}\right\} \subset X_{0}^{s}$ be a sequence such that
\begin{eqnarray}\label{eq:0bx.1}
I\left(u_{n}\right) \rightarrow c \text { and } I^{\prime}\left(u_{n}\right) \rightarrow 0 \text { as } n \rightarrow \infty.
\end{eqnarray}
Then there are a nonnegative integer $k$, $k$ sequences $\left\{y_{n}^{i}\right\}$ of points of the form $\left(x_{n}^{\prime}, m_{n}+1 / 2\right)$ for integers $m_{n}$, $i=1,2, \cdots, k$, a function $u^{0} \in X_{0}^{s}$ solving equation (\ref{eq:0.1}), and nontrivial functions $u^{1}, \cdots, u^{k}$ in $H^{s}\left(\mathbb{R}^{N}\right)$ solving equation (\ref{eq:0.x1}). Moreover, there is a subsequence of $\left\{u_{n}\right\}$, still denoted by $\left\{u_{n}\right\}$, satisfying
$$
\begin{array}{l}
\text { (1) } u_{n}(x)=u^{0}(x)+u^{1}\left(x-x_{n}^{1}\right)+\cdots+u^{k}\left(x-x_{n}^{k}\right)+o(1) \text { strongly, where }\\
x_{n}^{i}=y_{n}^{1}+\cdots+y_{n}^{i},\quad \left|x_{n}^{i}\right| \rightarrow \infty,\quad i=1,2, \cdots, k, \\
\text { (2) }\left\|u_{n}\right\|^{2}=\left\|u^{0}\right\|_{\Omega_{r}}^{2}+\left\|u^{1}\right\|^{2}+\cdots+\left\|u^{k}\right\|^{2}+o(1) \\
\text { (3) } I\left(u_{n}\right)=I\left(u^{0}\right)+I_{\infty}\left(u^{1}\right)+\cdots+I_{\infty}\left(u^{k}\right)+o(1)
\end{array}
$$
If $u_{n} \geqslant 0$ for $n=1,2, \cdots,$ then $u^{1}, \cdots, u^{k}$ can be chosen to be positive solutions, and $u^{0} \geqslant 0$.
\end{lemma}
\begin{proof}
See \cite{21,4}.
\end{proof}
{\bf Corollary 2.} Let $\left\{u_{n}\right\} \subset M_{\Omega_{r}}$ satisfy $\|u_{n}\|_{\Omega_{r}}^{2}=c+o(1)$ and
$M<c<2^{(p-2) / p} M.$ Then $\left\{u_{n}\right\}$ contains a strongly convergent subsequence.
\begin{proof}
See \cite{21,4}.
\end{proof}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}\Section{Proof of Theorem 1.1}
Set
$$
\chi(t)=\left\{\begin{array}{ll}
1 & \text { if } 0 \leqslant t \leqslant 1 \\
\frac{1}{t} & \text { if } 1 \leqslant t<\infty
\end{array}\right.
$$
and define $\beta: H^{s}\left(\mathbb{R}^{N}\right) \rightarrow \mathbb{R}^{N}$\cite{4} by
$$
\beta(u)=\int_{\mathbb{R}^{N}} u^{2}(x) \chi(|x|) x d x.
$$
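Note that $\chi(t)\,t \leq 1$ for all $t \geqslant 0$, so $|\beta(u)| \leq \int_{\mathbb{R}^{N}} u^{2}(x)\, d x$; in particular, $\beta$ is well defined on all of $H^{s}\left(\mathbb{R}^{N}\right)$, even though $x \mapsto u^{2}(x)\, x$ need not be integrable.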
For $r \geqslant r_{1}$, let
$$
\begin{array}{l}V_{r}=\left\{u \in X_{0}^{s} : \int_{\Omega_{r}}|u|^{p}\,dx=1,\ \beta(u)=a_{r}\right\}, \\ c_{r}=\inf _{u \in V_{r}}\|u\|_{\Omega_{r}}^{2}.\end{array}
$$
\begin{lemma}
$c_{r}>M$.
\end{lemma}
\begin{proof} It is easy to see that $c_{r} \geqslant M.$ Suppose, by contradiction, that $c_{r}=M$. Take a sequence $\left\{v_{m}\right\} \subset X_{0}^{s}$ such that
$$
\begin{array}{l}
\left\|v_{m}\right\|_{L^{p} \left(\Omega_{r}\right)}=1, \quad \beta\left(v_{m}\right)=a_{r} \quad \text { for } \quad m=1,2, \cdots, \\
\left\|v_{m}\right\|^{2}=M+o(1).
\end{array}
$$
Let $u_{m}=M^{1 /(p-2)} v_{m}$ for $m=1,2, \cdots$. Then
$$I^{\prime}\left(u_{m}\right)=o_{m}(1) \text { in }\left(X_{0}^{s}\right)^{*}$$
and
$$I\left(u_{m}\right)=\left(\frac{1}{2}-\frac{1}{p}\right) M^{\frac{p}{p-2}}+o_{m}(1).$$
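The second identity can be checked directly: since $\left\|v_{m}\right\|^{2}=M+o(1)$ and $\left\|v_{m}\right\|_{L^{p}\left(\Omega_{r}\right)}=1$, one has
$$
\left\|u_{m}\right\|^{2}=M^{\frac{2}{p-2}}\left\|v_{m}\right\|^{2}=M^{\frac{p}{p-2}}+o(1), \qquad \int_{\Omega_{r}}\left|u_{m}\right|^{p} d x=M^{\frac{p}{p-2}},
$$
so that $I\left(u_{m}\right)=\frac{1}{2}\left\|u_{m}\right\|^{2}-\frac{1}{p} \int_{\Omega_{r}}\left|u_{m}\right|^{p} d x=\left(\frac{1}{2}-\frac{1}{p}\right) M^{\frac{p}{p-2}}+o(1)$.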
Since, by the preceding lemma, the minimization problem (\ref{eq:1w.sq6}) has no solution, $\left\{u_{m}\right\}$ does not contain any strongly convergent subsequence. By Lemma \ref{eq:0cb.1}, there is a sequence $\left\{x_{m}\right\}$ of points of the form $\left(x_{m}^{\prime}, m+\frac{1}{2}\right)$ for integers $m$ such that
$$
\begin{array}{c}
\left|x_{m}\right| \longrightarrow \infty \\
u_{m}(x)=\varphi\left(x-x_{m}\right)+o(1) \text { strongly. }
\end{array}
$$
Since $\varphi$ is radially symmetric, we may take $m$ to be positive.
Next, we consider the following sets
$$
\mathbb{R}_{+}^{N}:=\left\{x \in \mathbb{R}^{N}:\left\langle x, x_{m}\right\rangle>0\right\} \text { and }\mathbb{R}_{-}^{N}:=\mathbb{R}^{N} \backslash\mathbb{R}_{+}^{N}.
$$
We may assume that $\left|x_{m}\right| \geqslant 4$ for $m=1,2, \cdots$. Now
$$
\begin{aligned}
\qquad\left\langle\beta\left(\varphi\left(x-x_{m}\right)\right), x_{m}\right\rangle=& \int_{\mathbb{R}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\
=& \int_{\mathbb{R}_{+}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\
&+\int_{\left(\mathbb{R}_{-}^{N}\right)} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\
\geqslant & \int_{B_{1}\left(x_{m}\right)} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x \\
&+\int_{\mathbb{R}_{-}^{N}} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x.
\end{aligned}
$$
Note that there are $c_{1}>0, c_{2}>0$ such that for $x \in B_{1}\left(x_{m}\right),$ we have
$$
\begin{array}{l}
\varphi^{2}\left(x-x_{m}\right) \geqslant c_{1} \\
\quad\left\langle x, x_{m}\right\rangle \geqslant c_{2}|x|\left|x_{m}\right| \text { for } m=1,2, \cdots .
\end{array}
$$
Thus
$$
\begin{aligned}
\int_{B_{1}\left(x_{m}\right)} \varphi^{2}\left(x-x_{m}\right) \chi(|x|)\left\langle x, x_{m}\right\rangle d x & \geqslant c_{1} c_{2} \int_{B_{1}\left(x_{m}\right)} \chi(|x|)|x|\left|x_{m}\right| d x \\
& \geqslant c_{3}\left|x_{m}\right|^{N+1}, \quad c_{3}>0 \quad \text { a constant. }
\end{aligned}
$$
Recalling that for each $x\in\mathbb{R}_{-}^{N}$,
$$
\left|x-x_{m}\right| \geq|x|,
$$
it follows that
$$
\left|\varphi\left(x-x_{m}\right)\right|^{2} \chi(|x|)|x| \leq R|\varphi(|x|)|^{2} \in L^{1}\left(\mathbb{R}^{N}\right)\quad(R>0)
$$
(see \cite{21}, Lemma 4.3).
This fact, combined with the limit
$$
\varphi\left(x-x_{m}\right) \rightarrow 0 \text { as }\left|x_{m}\right| \rightarrow+\infty
$$
implies that
$$
\int_{\mathbb{R}_{-}^{N}}\left|\varphi\left(x-x_{m}\right)\right|^{2} \chi(|x|)|x| d x=o(1).
$$
We conclude that
$$
\begin{aligned}
M^{1 /(p-2)}\left|a_{r}\right| & \geqslant\left\langle\beta\left(u_{m}\right), \frac{x_{m}}{\left|x_{m}\right|}\right\rangle \\
&=\left\langle\beta\left(\varphi\left(x-x_{m}\right)\right), \frac{x_{m}}{\left|x_{m}\right|}\right\rangle+o(1) \\
& \geqslant c_{3}\left|x_{m}\right|^{N}+o(1)
\end{aligned}
$$
a contradiction. Thus $c_{r}>M$.
\end{proof}
{\bf REMARK 1} By Lemma \ref{lm:2.4}(1), there is $r_{1}>0$ such that
$$
\frac{1}{2} \leqslant\left\|f_{y}\right\|_{L^{p}\left(\Omega_{r}\right)} \leqslant \frac{3}{2}
$$
where $r \geqslant r_{1}$ and $\left|y-a_{r}\right| \geqslant r / 2$ and $y_{N} \geqslant r / 2$.
{\bf REMARK 2}. By Lemma \ref{lm:2.4}(2), there is $r_{2} \geqslant r_{1}$ such that
$M<\left\|\Psi_{y}\right\|^{2}<\frac{c_{r}+M}{2}$
where $r \geqslant r_{2}$ and $\left|y-a_{r}\right| \geqslant r / 2$ and $y_{N} \geqslant r / 2$.
\begin{lemma}\label{eq:1.f3}
There is $r_{3} \geqslant r_{2}$ such that if $r \geqslant r_{3},$ then
$$
\left\langle\beta\left(\Psi_{y}\right), y\right\rangle>0 \quad \text { for } \quad y \in \partial\left(B_{r / 2}\left(a_{r}\right)\right).
$$
\end{lemma}
\begin{proof} By lemma \ref{lm:2.4}, we have $ 2 / 3 \leqslant c_{y} \leqslant 2 .$ For $r \geqslant r_{2},$ let
$$
\begin{aligned}
A\left((3 / 8) r,(5 / 8) r\right) &=\left\{x \in \mathbb{R}^{N}: \frac{3}{8} r \leqslant\left|x-a_{r}\right| \leqslant \frac{5}{8} r\right\}, \\
\mathbb{R}_{+}^{N}(y) &=\left\{x \in \mathbb{R}^{N} \mid\langle x, y\rangle>0\right\} \\
\mathbb{R}_{-}^{N}(y) &=\left\{x \in \mathbb{R}^{N} \mid\langle x, y\rangle<0\right\}
\end{aligned}
$$
$$\begin{aligned}
\left\langle\beta\left(\Psi_{y}\right), y\right\rangle=c_{y} &\left[\int_{\mathbb{R}_{+}^{N}(y)} \xi^{2}\left(\left|x-a_{r}\right|\right) \eta^{2}\left(x_{N}\right) \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right.\\
&\left.+\int_{\mathbb{R}_{-}^{N}(y)} \xi^{2}\left(\left|x-a_{r}\right|\right) \eta^{2}\left(x_{N}\right) \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right] \\
\geqslant \frac{2}{3} &\left[\int_{A((3 / 8) r,(5 / 8) r)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right.\\
&\left.+\int_{\mathbb{R}_{-}^{N}(y)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x\right].
\end{aligned}$$
$$\begin{aligned}
\int_{A((3 / 8) r,(5 / 8) r)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x & \geqslant c_{6} \int_{A((3 / 8) r,(5 / 8) r)} \chi(|x|)|x||y| d x \text { for } c_{6}>0 \\
& \geqslant c_{6}|y|\left[\left(\frac{5}{8} r\right)^{N}-\left(\frac{3}{8} r\right)^{N}\right] \\
& \geqslant c_{7} r^{N+1} \text { for } c_{7}>0 . \\
\int_{\mathbb{R}_{-}^{N}(y)} \varphi^{2}(x-y) \chi(|x|)\langle x, y\rangle d x &=o(1)\quad \text{as } r \rightarrow \infty.
\end{aligned}$$
Therefore, there is $r_{3} \geqslant r_{2}$ such that if $r \geqslant r_{3}$ and $\left|y-a_{r}\right|=r / 2$, then
$$
\left\langle\beta\left(\Psi_{y}\right), y\right\rangle \geqslant c_{7} r^{N+1}-o(1)>0.
$$
This completes the proof.
\end{proof}
By Lemma \ref{lm:2.4} and Lemma \ref{eq:1.f3}, fix $\rho_{0}>0$ and $r_{0} \geqslant r_{3}$ such that if $0<\rho \leqslant \rho_{0}$ and $r \geqslant r_{0}$, then $\left\|\Psi_{y}\right\|_{\Omega_{r}}^{2}<2^{(p-2) / p} M$ for $y \in \overline{B_{r / 2}\left(a_{r}\right)} .$ From now on, fix such $\rho_{0}, r_{0}$ and let $r \geqslant r_{0} .$ Let
$$
\begin{array}{l}
B=\left\{\Psi_{y} : \left| y-a_{r}\right| \leqslant \frac{r}{2}\right\}, \\
\Gamma=\left\{h \in C\left(V_{r}, V_{r}\right) \mid h(u)=u \quad \text { if } \quad\|u\|^{2}<\frac{c_{r}+M}{2}\right\}.
\end{array}
$$
\begin{lemma}
$$h(B) \cap V_{r} \neq \emptyset \text { for each } h \in \Gamma.$$
\end{lemma}
\begin{proof} Let $h \in \Gamma$ and $H(x)=\beta \circ h \circ \Psi_{x}: \mathbb{R}^{N} \rightarrow \mathbb{R}^{N}$. Consider the homotopy,
for $0 \leqslant t \leqslant 1$
$$
F(t, x)=(1-t) H(x)+t I(x) \quad \text { for } \quad x \in \mathbb{R}^{N}.
$$
If $x \in \partial\left(B_{r / 2}\left(a_{r}\right)\right),$ then, by Remark 2 and Lemma \ref{eq:1.f3},
$$
\begin{array}{c}
\left\langle\beta\left(\Psi_{x}\right), x\right\rangle>0 \\
M<\left\|\Psi_{x}\right\|^{2}<\frac{c_{r}+M}{2}.
\end{array}
$$
Then
$$
\begin{aligned}
\langle F(t, x), x\rangle &=\langle(1-t) H(x), x\rangle+\langle t x, x\rangle \\
&=(1-t)\left\langle\beta\left(\Psi_{x}\right), x\right\rangle+t\langle x, x\rangle \\
&>0.
\end{aligned}
$$
Thus $F(t, x) \neq 0$ for $x \in \partial\left(B_{r / 2}\left(a_{r}\right)\right) .$ By the homotopy invariance of the degree
$$
d\left(H(x), B_{r / 2}\left(a_{r}\right), a_{r}\right)=d\left(I, B_{r / 2}\left(a_{r}\right), a_{r}\right)=1.
$$
There is $x \in B_{r / 2}\left(a_{r}\right)$ such that
$$
a_{r}=H(x)=\beta\left(h \circ \Psi_{x}\right).
$$
Thus $h(B) \cap V_{r} \neq \emptyset$ for each $h \in \Gamma$.
\end{proof}
Now we are in a position to prove Theorem 1.1. Consider the class of mappings
$$
F=\left\{h \in C\left(\overline{B_{r / 2}\left(a_{r}\right)}, H^{s}\left(\mathbb{R}^{N}\right)\right): h(y)=\Psi_{y} \text { for } y\in\partial B_{r / 2}\left(a_{r}\right)\right\}
$$
and set
$$
c=\inf _{h \in F} \sup _{y \in B_{r / 2}\left(a_{r}\right)}\|h(y)\|_{\Omega_{r}}^{2}.
$$
It follows from the above Lemmas, with the appropriate choice of $r$ that
$$
M<c_{r}=\inf _{u \in V_{r}}\|u\|_{\Omega_{r}}^{2} \leqslant c<2^{(p-2) / p} M
$$
and
$$
\max _{\partial B_{r / 2}\left(a_{r}\right)}\|h(y)\|_{\Omega_{r}}^{2}<\max _{B_{r / 2}\left(a_{r}\right)}\|h(y)\|_{\Omega_{r}}^{2}.
$$
Theorem 1.1 then follows by applying the version of the mountain pass theorem from Brezis-Nirenberg \cite{d6}.
\vskip 0.3in
\end{document} |
\begin{document}
\title*{Entropy of nonautonomous dynamical systems}
\author{Christoph Kawan}
\institute{Fakult\"{a}t f\"{u}r Informatik und Mathematik, Universit\"{a}t Passau, Innstra{\ss}e 33, 94032 Passau, Germany; e-mail: [email protected]}
\maketitle
\abstract{Different notions of entropy play a fundamental role in the classical theory of dynamical systems. Unlike many other concepts used to analyze autonomous dynamics, both measure-theoretic and topological entropy can be extended quite naturally to discrete-time nonautonomous dynamical systems given in the process formulation. This paper provides an overview of the author's work on this subject. Also an example is presented that has not appeared before in the literature.\keywords{Nonautonomous dynamical systems; topological entropy; measure-theoretic entropy; variational principle}
}
\section{Introduction}
In the 1950s, Kolmogorov and Sinai established the concept of measure-theoretic (or metric) entropy, based on Shannon entropy from information theory, as an invariant for measure-preserving maps on probability spaces. This invariant was used, e.g., by Ornstein \cite{Orn} to classify Bernoulli shifts. Some years later, Adler, Konheim and McAndrew \cite{AKM} defined in strict analogy a notion of entropy for continuous maps on compact spaces. They already conjectured that both entropy notions are related to each other in the sense of a variational principle, i.e., the topological entropy equals the supremum over all measure-theoretic entropies (supremizing over all invariant Borel probability measures). This was proved not much later by Goodman, Goodwyn and Dinaburg \cite{Go1,Gow,Din}.
In the theory of dynamical systems, developed in the ensuing decades, both notions of entropy play a fundamental role as it turned out that they are related to many other dynamical characteristics such as Lyapunov exponents, dimensions of invariant measures and invariant sets and growth rates of periodic orbits, but also to the existence of horseshoes. Moreover, entropy has become a central concept in a branch of the topological theory of dynamical systems dedicated to the question of how well a dynamical system can be `digitalized', i.e., modeled by a symbolic dynamical system \cite{Dow}.
Motivated by the study of triangular maps, Kolyada and Snoha \cite{KSn} extended the notion of topological entropy to nonautonomous systems given by a sequence of continuous maps on a compact metric space. Together with Misiurewicz, they generalized this concept to sequences of maps between possibly different metric spaces in \cite{KMS} and proved analogues of the Misiurewicz-Szlenk formula for the entropy of piecewise monotone interval maps. Further work on topological entropy of nonautonomous systems has been done in \cite{PMa,Mou,OW1,OW2,ZCh,ZZH,ZLX} by several researchers with different motivations and partially independently of \cite{KSn,KMS}. An essential difference to the classical theory that should be mentioned is that the nonautonomous version of topological entropy is \emph{not} a purely topological quantity. In fact, it depends on the sequence of metrics imposed on the time-varying state space.
Concepts of measure-theoretic entropy for sequences of maps were first introduced in the papers \cite{ZLX,Can,Ka1}. While \cite{ZLX,Can} require that all maps in the sequence preserve the same measure, a very restrictive condition, the approach in \cite{Ka1} is completely general. The invariant measure now becomes a sequence $(\mu_n)_{n\in\mathbb{Z}_+}$ of measures so that $(f_n)_*\mu_n = \mu_{n+1}$ for the given sequence of maps $f_n$. To introduce a reasonable notion of entropy in this general context, an additional structure (called an \emph{admissible class}) needs to be imposed on the system, consisting in a family of sequences of measurable partitions. This family has to satisfy certain axioms in order to obtain structural results such as a power rule and invariance under a reasonably general class of transformations.
In the topological framework, a relation between the topological and the measure-theoretic entropy can be established through the definition of a suitable admissible class adapted to the metric space structure. We call this class the \emph{Misiurewicz class}, since it allows for an easy adaptation of Misiurewicz's proof of the variational principle \cite{Mis} to show that the measure-theoretic entropy is bounded above by the topological entropy. In the classical case of a single map, the entropy computed with respect to the Misiurewicz class reduces again to the Kolmogorov-Sinai measure-theoretic entropy.
It is still unclear whether a full variational principle holds in this context. One obstruction to a proof, amongst others, is that the Misiurewicz class might not contain elements of arbitrarily small diameter, in general. Some sufficient conditions for the existence of such sequences of small-diameter partitions have been identified in \cite{KLa}, but a general approach to this problem is still missing.
The paper is organized as follows. In Section \ref{sec_motiv}, we motivate the entropy theory for nonautonomous dynamical systems by applications in networked control. Section \ref{sec_entth} explains the entropy theory developed in \cite{KSn,KMS} and \cite{Ka1,Ka2,KLa}, including the nonautonomous versions of topological and measure-theoretic entropy and their relation. Finally, an example for a system satisfying a full variational principle is presented in Section \ref{sec_ex}.
\section{Motivation from networked control}\label{sec_motiv}
\begin{figure}
\caption{The simplest model of an NCS}
\label{fig:0}
\end{figure}
The author's central motivation for the development of a nonautonomous entropy theory comes from problems arising in networked control. Networked control systems (NCS) are spatially distributed systems whose components (sensors, controllers and actuators) share a common digital communication network. Examples can be found in vehicle tracking, underwater communications for remotely controlled surveillance and rescue submarines, remote surgery, space exploration and aircraft design. Another large field of applications can be found in modern industrial systems, where industrial production is combined with information and communication technology (`Industry 4.0'). A fundamental problem in this field is to determine the minimal requirements on the communication network so that a specified control objective can be achieved.
The simplest model of an NCS consists of a single feedback loop containing a finite-capacity channel which transmits state information acquired by a sensor from a coder to the controller (see Fig.~\ref{fig:0}). The first task of the controller, before deciding on the control action, often consists in the computation of a state estimate. If the system is autonomous, it has been shown in \cite{MPo} that the smallest channel capacity above which a state estimation of arbitrary precision can be achieved is given by the topological entropy of the system. If the problem setup is slightly changed, time-dependencies of many different sorts can appear. Here are some examples:
\begin{itemize}
\item Non-invariance of the region of relevant initial states leads to a time-dependent state space.
\item The requirement of an exponential improvement of the estimate over time leads to a time-dependent metric on the state space.
\item In a stochastic formulation of the problem, non-invariance of the distribution of $x_0$ (the initial state) leads to a time-dependent probability measure.
\item Time-varying coding policies lead to time-dependent partitions of the state space (with respect to which entropy needs to be computed).
\end{itemize}
The entropy theory described in this paper is sufficiently general to handle all of these time-dependencies. A first application to a state estimation problem can be found in \cite{Ka3}.
\section{Entropy theory for nonautonomous systems}\label{sec_entth}
{\bf Notation:} We write $\mathbb{N} = \{1,2,3,\ldots\}$ and $\mathbb{Z}_+ = \{0,1,2,\ldots\}$. By $\delta_x$ we denote the Dirac measure concentrated at a point $x$. The cardinality of a finite set $S$ is denoted by $\# S$. If $A$ is a subset of a metric space $(X,d)$, we write $\mathrm{diam} A = \sup \{ d(x,y) : x,y\in A\}$. If $\mathcal{A}$ is a collection of sets $A\subset X$, we write $\mathrm{diam}\mathcal{A} = \sup\{\mathrm{diam} A: A\in\mathcal{A}\}$. All logarithms are taken to the base $2$.
A nonautonomous dynamical system, or briefly an NDS, is a pair $(X_{\infty},f_{\infty})$, where $X_{\infty} = (X_n)_{n\in\mathbb{Z}_+}$ is a sequence of sets and $f_{\infty} = (f_n)_{n\in\mathbb{Z}_+}$ a sequence of maps $f_n:X_n \rightarrow X_{n+1}$. For all $i\in\mathbb{Z}_+$ and $n\in\mathbb{N}$, we define
\begin{equation*}
f_i^0 := \mathrm{id}_{X_i},\quad f_i^n := f_{i+n-1} \circ \cdots \circ f_{i+1} \circ f_i,\quad f_i^{-n} := (f_i^n)^{-1}.
\end{equation*}
We do not assume that the maps $f_i$ are invertible, so $f_i^{-n}$ is only applied to sets. We speak of a \emph{topological NDS} if each $X_n$ is a compact metric space $(X_n,d_n)$ and the sequence $f_{\infty}$ is equicontinuous, i.e., for every $\varepsilon>0$ there is $\delta>0$ such that for all $n\in\mathbb{Z}_+$ and all $x,y\in X_n$ with $d_n(x,y) < \delta$ one has $d_{n+1}(f_n(x),f_n(y)) < \varepsilon$.
\subsection{Topological entropy}\label{subsec_topent}
To define the topological entropy of a dynamical system, one needs to specify a \emph{resolution} on the state space. Usually, this resolution is given by a number $\varepsilon>0$ or by an open cover. In the case of an NDS $(X_{\infty},f_{\infty})$, we have to consider a sequence of open covers instead. Hence, let $\mathcal{U}_{\infty} = (\mathcal{U}_n)_{n\in\mathbb{Z}_+}$ be a sequence so that $\mathcal{U}_n$ is an open cover of $X_n$ for every $n$. For all $i\in\mathbb{Z}_+$ and $n\in\mathbb{N}$ define
\begin{equation*}
\mathcal{U}_i^n := \bigvee_{j=0}^{n-1}f_i^{-j}\mathcal{U}_{i+j},
\end{equation*}
which is the common refinement of the open covers $f_i^{-j}\mathcal{U}_{i+j}$ of $X_i$, i.e., the open cover whose elements are of the form
\begin{equation*}
U_{j_i} \cap f_i^{-1}(U_{j_{i+1}}) \cap \ldots \cap f_i^{-n+1}(U_{j_{i+n-1}}),\quad U_{j_l} \in \mathcal{U}_l.
\end{equation*}
Then the entropy of $f_{\infty}$ w.r.t.~$\mathcal{U}_{\infty}$ is defined by
\begin{equation}\label{eq_defent_ocs}
h(f_{\infty};\mathcal{U}_{\infty}) := \limsup_{n\rightarrow\infty}\frac{1}{n}\log N(\mathcal{U}_0^n),
\end{equation}
where $N(\cdot)$ denotes the minimal cardinality of a finite subcover. Here, unlike in the autonomous case, the $\limsup$ in general is not a limit (see \cite{KSn} for a counter-example).
To define a notion of topological entropy, independent of a given resolution, one usually takes the supremum over all resolutions. However, taking the supremum of $h(f_{\infty};\mathcal{U}_{\infty})$ over all sequences $\mathcal{U}_{\infty}$ would result in a quantity that is usually $+\infty$, because a sequence of open covers whose diameters exponentially converge to zero generates an increase of information that is not due to the dynamics of the system. Hence, such sequences have to be excluded. An elegant way to do this is to consider only sequences with Lebesgue numbers bounded away from zero (recall that $\delta>0$ is a Lebesgue number of an open cover if every subset of diameter at most $\delta$ is contained in some member of the cover). We thus let $\mathcal{L}(X_{\infty})$ denote the family of all such sequences and define the \emph{topological entropy of $(X_{\infty},f_{\infty})$} as
\begin{equation*}
h_{\mathrm{top}}(f_{\infty}) := \sup_{\mathcal{U}_{\infty}\in\mathcal{L}(X_{\infty})}h(f_{\infty};\mathcal{U}_{\infty}).
\end{equation*}
This definition was first given in \cite{KMS}. Some properties of $h_{\mathrm{top}}$ are the following:
\begin{itemize}
\item Alternative characterizations in terms of $(n,\varepsilon)$-spanning or $(n,\varepsilon)$-separated sets can be given. For instance, a set $E \subset X_0$ is $(n,\varepsilon;f_{\infty})$-spanning if for every $x\in X_0$ there exists $y\in E$ such that $d_i(f_0^i(x),f_0^i(y)) < \varepsilon$ for $0 \leq i < n$. Letting $r(n,\varepsilon;f_{\infty})$ denote the minimal cardinality of an $(n,\varepsilon;f_{\infty})$-spanning set,
\begin{equation}\label{eq_char_spansets}
h_{\mathrm{top}}(f_{\infty}) = \lim_{\varepsilon\downarrow0}\limsup_{n\rightarrow\infty}\frac{1}{n}\log r(n,\varepsilon;f_{\infty}).
\end{equation}
\item In the case where $X_{\infty}$, $d_{\infty}$ and $f_{\infty}$ are constant, $h_{\mathrm{top}}(f_{\infty})$ reduces to the usual notion of topological entropy for maps, which immediately follows from \eqref{eq_char_spansets}.
\item The topological entropy $h_{\mathrm{top}}(f_{\infty})$ also generalizes several other notions of entropy studied before, as for instance \emph{topological sequence entropy} \cite{Go2} and \emph{topological entropy for uniformly continuous maps on non-compact metric spaces} \cite{Bow}.
\item Fundamental properties of topological entropy for maps carry over to its nonautonomous generalization, as for instance the power rule, which can be formulated as follows. For $m\in\mathbb{N}$ define the $m$th power system $(X_{\infty}^{[m]},f_{\infty}^{[m]})$ by $X_n^{[m]} := X_{nm}$ and $f_n^{[m]} := f_{nm}^m$. Then the following power rule holds:
\begin{equation*}
h_{\mathrm{top}}(f_{\infty}^{[m]}) = m \cdot h_{\mathrm{top}}(f_{\infty}).
\end{equation*}
Here the equicontinuity of $f_{\infty}$ is essential, see \cite{KSn} for a counter-example in the case when $f_{\infty}$ is not equicontinuous.
\end{itemize}
\subsection{Measure-theoretic entropy}\label{subsec_mesent}
To define measure-theoretic entropy, we consider systems given by measurable maps $f_n:X_n \rightarrow X_{n+1}$ between probability spaces $(X_n,\mathcal{F}_n,\mu_n)$, preserving the measures $\mu_n$ in the sense that $(f_n)_*\mu_n = \mu_{n+1}$ for all $n\in\mathbb{Z}_+$. In this case, we also call the sequence $\mu_{\infty} = (\mu_n)_{n\in\mathbb{Z}_+}$ an \emph{invariant measure sequence}, or briefly an \emph{IMS} for the given NDS $(X_{\infty},f_{\infty})$, and we speak of a \emph{measure-theoretic NDS}. Analogously to the topological framework, we define the entropy of $f_{\infty}$ w.r.t.~a sequence of finite measurable partitions $\mathcal{P}_n$ of $X_n$ by
\begin{equation*}
h(f_{\infty};\mathcal{P}_{\infty}) = h_{\mu_{\infty}}(f_{\infty};\mathcal{P}_{\infty}) := \limsup_{n\rightarrow\infty}\frac{1}{n}H_{\mu_0}(\mathcal{P}_0^n),
\end{equation*}
where $\mathcal{P}_0^n$ denotes the partition $\bigvee_{i=0}^{n-1}f_0^{-i}\mathcal{P}_i$ and $H_{\mu_0}(\cdot)$ is the Shannon entropy of a partition computed w.r.t.~the measure $\mu_0$.
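Here, for a finite measurable partition $\mathcal{P}$ of a probability space $(X,\mathcal{F},\mu)$, the Shannon entropy is the standard quantity
\begin{equation*}
H_{\mu}(\mathcal{P}) = -\sum_{P\in\mathcal{P}}\mu(P)\log\mu(P),
\end{equation*}
with the usual convention $0\log 0 := 0$ (and, as stated above, logarithms taken to the base $2$).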
To define measure-theoretic entropy independently of a sequence of partitions, we have to follow a similar strategy as in the topological case. However, the concept of Lebesgue numbers is not helpful here, and a similar construction of a family $\mathcal{L}(X_{\infty})$, using the measures $\mu_n$, does not lead to satisfying results. Looking at the topological theory, one sees that results for topological entropy such as the power rule rely on the equicontinuity of the sequence $f_{\infty}$, and not on the mere continuity of each $f_n$. However, in the measure-theoretic setting considered here we do not require a similar property.
One way to overcome these obstructions is to study the essential properties of the family $\mathcal{L}(X_{\infty})$, defined in the topological framework, and to enforce these properties in the measure-theoretic framework by an axiomatic definition. As it turns out, the following definition leads to satisfactory results.
\begin{definition}
A nonempty family $\mathcal{E}$ of sequences of finite measurable partitions for $X_{\infty}$ is called an \emph{admissible class} if it satisfies the following axioms:
\begin{enumerate}
\item[(A)] For each $\mathcal{P}_{\infty} = (\mathcal{P}_n)_{n\in\mathbb{Z}_+}\in\mathcal{E}$ there is a bound $N\in\mathbb{N}$ on the cardinality $\#\mathcal{P}_n$, i.e., $\#\mathcal{P}_n \leq N$ for all $n\in\mathbb{Z}_+$.
\item[(B)] If $\mathcal{P}_{\infty} = (\mathcal{P}_n)_{n\in\mathbb{Z}_+} \in \mathcal{E}$ and $\mathcal{Q}_{\infty} = (\mathcal{Q}_n)_{n\in\mathbb{Z}_+}$ is another sequence of finite measurable partitions for $X_{\infty}$ such that each $\mathcal{Q}_n$ is coarser than $\mathcal{P}_n$, then $\mathcal{Q}_{\infty}\in\mathcal{E}$.
\item[(C)] If $\mathcal{P}_{\infty} = (\mathcal{P}_n)_{n\in\mathbb{Z}_+} \in \mathcal{E}$ and $m\in\mathbb{N}$, then also the sequence $\mathcal{P}_{\infty}^{\langle m \rangle}$, defined as follows, is an element of $\mathcal{E}$:
\begin{equation*}
\mathcal{P}_n^{\langle m \rangle} := \bigvee_{i=0}^{m-1}f_n^{-i}\mathcal{P}_{i+n},\quad n\in\mathbb{Z}_+.
\end{equation*}
\end{enumerate}
\end{definition}
Given an admissible class $\mathcal{E}$, we can define the measure-theoretic entropy of $f_{\infty}$ w.r.t.~this class as
\begin{equation*}
h_{\mathcal{E}}(f_{\infty}) = h_{\mathcal{E}}(f_{\infty};\mu_{\infty}) := \sup_{\mathcal{P}_{\infty}\in\mathcal{E}}h_{\mu_{\infty}}(f_{\infty};\mathcal{P}_{\infty}).
\end{equation*}
Some elementary properties of admissible classes and their entropy are summarized in the following proposition, cf.~\cite{Ka1}.
\begin{proposition}
Given a measure-theoretic NDS, the following statements hold:
\begin{enumerate}
\item[(i)] There exists a maximal admissible class $\mathcal{E}_{\max}$ defined as the family of all sequences $\mathcal{P}_{\infty}$ satisfying Axiom (A).
\item[(ii)] Unions and nonempty intersections of admissible classes are admissible classes.
\item[(iii)] For each $\emptyset \neq \mathcal{F} \subset \mathcal{E}_{\max}$ there exists a smallest admissible class $\mathcal{E}(\mathcal{F})$ containing $\mathcal{F}$, and its entropy satisfies
\begin{equation*}
h_{\mathcal{E}(\mathcal{F})}(f_{\infty}) = \sup_{\mathcal{P}_{\infty}\in\mathcal{F}}h(f_{\infty};\mathcal{P}_{\infty}).
\end{equation*}
\end{enumerate}
\end{proposition}
One might be tempted to regard the maximal admissible class $\mathcal{E}_{\max}$ as a canonical admissible class for the definition of entropy. However, this class is usually useless, because it contains too many elements. In \cite[Ex.~18]{Ka1} it has been shown that $h_{\mathcal{E}_{\max}}(f_{\infty}) = \infty$ whenever the maps $f_n$ are bi-measurable and the probability spaces $X_n$ are non-atomic.
As in the classical theory, we can describe the dependence of $h(f_{\infty};\mathcal{P}_{\infty})$ on $\mathcal{P}_{\infty} \in \mathcal{E}_{\max}$, using a metric on $\mathcal{E}_{\max}$, defined as
\begin{equation*}
D(\mathcal{P}_{\infty},\mathcal{Q}_{\infty}) := \sup_{n\in\mathbb{Z}_+}\left(H_{\mu_n}(\mathcal{P}_n|\mathcal{Q}_n) + H_{\mu_n}(\mathcal{Q}_n|\mathcal{P}_n)\right),
\end{equation*}
with the conditional entropy $H(\cdot|\cdot)$. In the classical case, $D(\cdot,\cdot)$ reduces to the well-known \emph{Rokhlin metric}. Just as in this case, the map $\mathcal{P}_{\infty} \mapsto h(f_{\infty};\mathcal{P}_{\infty})$ is Lipschitz continuous w.r.t.~$D$ with Lipschitz constant $1$.
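Here $H_{\mu}(\mathcal{P}|\mathcal{Q})$ denotes the usual conditional entropy of finite partitions,
\begin{equation*}
H_{\mu}(\mathcal{P}|\mathcal{Q}) = -\sum_{Q\in\mathcal{Q}}\mu(Q)\sum_{P\in\mathcal{P}}\frac{\mu(P\cap Q)}{\mu(Q)}\log\frac{\mu(P\cap Q)}{\mu(Q)},
\end{equation*}
with the same conventions as above and the terms with $\mu(Q)=0$ omitted.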
One particularly useful property of the measure-theoretic entropy w.r.t.~an admissible class is the following power rule, cf.~\cite[Prop.~25]{Ka1}.
\begin{proposition}
Given a measure-theoretic NDS $(X_{\infty},f_{\infty})$ and $m\in\mathbb{N}$, consider the $m$th power system $(X^{[m]}_{\infty},f^{[m]}_{\infty})$. If $\mathcal{E}$ is an admissible class for $(X_{\infty},f_{\infty})$, we denote by $\mathcal{E}^{[m]}$ the class of all sequences of partitions for $X^{[m]}_{\infty}$ which are defined by restricting the sequences in $\mathcal{E}$ to the spaces in $X^{[m]}_{\infty}$, i.e., $\mathcal{P}_{\infty} = \{\mathcal{P}_n\}_{n\in\mathbb{Z}_+}\in\mathcal{E}$ iff
\begin{equation*}
\mathcal{P}^{[m]}_{\infty} := \{\mathcal{P}_{nm}\}_{n\in\mathbb{Z}_+} \in \mathcal{E}^{[m]}.
\end{equation*}
Then $\mathcal{E}^{[m]}$ is an admissible class for $(X^{[m]}_{\infty},f^{[m]}_{\infty})$ and
\begin{equation*}
h_{\mathcal{E}^{[m]}}\left(f^{[m]}_{\infty}\right) = m\cdot h_{\mathcal{E}}\left(f_{\infty}\right).
\end{equation*}
\end{proposition}
\subsection{Measure-theoretic entropy for topological NDS}\label{subsec_mestopent}
The concept of measure-theoretic entropy described in the preceding subsection appears to be too general and abstract for interesting applications. In this section, we explain how measure-theoretic and topological entropy interact through the definition of a specific admissible class adapted to the metric space structure of a topological NDS.
In the following, let $(X_{\infty},f_{\infty})$ be a topological NDS and $\mu_{\infty}$ an associated IMS.
\begin{definition}
The \emph{Misiurewicz class} $\mathcal{E}_{\mathrm{M}}$ associated with $(X_{\infty},f_{\infty})$ and $\mu_{\infty}$ is defined as follows. A sequence $\mathcal{P}_{\infty} = (\mathcal{P}_n)_{n\in\mathbb{Z}_+}$ of finite Borel partitions, $\mathcal{P}_n = \{P_{n,1},\ldots,P_{n,k_n}\}$, belongs to $\mathcal{E}_{\mathrm{M}}$ if for every $\varepsilon>0$ there are $\delta>0$ and compact sets $K_{n,i} \subset P_{n,i}$ for $n\in\mathbb{Z}_+$, $1\leq i \leq k_n$, such that the following holds for all $n\in\mathbb{Z}_+$:
\begin{enumerate}
\item[(a)] $\mu_n(P_{n,i}\backslash K_{n,i}) \leq \varepsilon$ for $1\leq i\leq k_n$.
\item[(b)] If $x\in K_{n,i}$, $y\in K_{n,j}$, $i\neq j$, then $d_n(x,y) \geq \delta$.
\end{enumerate}
\end{definition}
As it turns out, this definition in fact yields an admissible class that is well-adapted to the metric space structure, as expressed by the following theorem.
\begin{theorem}
$\mathcal{E}_{\mathrm{M}}$ is an admissible class with the following properties:
\begin{enumerate}
\item[(i)] $\mathcal{E}_{\mathrm{M}}$ and the associated entropy $h_{\mathcal{E}_{\mathrm{M}}}(f_{\infty};\mu_{\infty})$ are preserved by equi-conjugacies, i.e., equicontinuous changes of coordinates.
\item[(ii)] In the autonomous case, i.e., when $X_{\infty},d_{\infty},f_{\infty}$ and $\mu_{\infty}$ are constant, $h_{\mathcal{E}_{\mathrm{M}}}(f_{\infty};\mu_{\infty})$ reduces to the usual Kolmogorov-Sinai measure-theoretic entropy.
\item[(iii)] The inequality
\begin{equation*}
h_{\mathcal{E}_{\mathrm{M}}}(f_{\infty};\mu_{\infty}) \leq h_{\mathrm{top}}(f_{\infty})
\end{equation*}
holds (establishing one part of the variational principle).
\end{enumerate}
\end{theorem}
The proofs of (i) and (iii) can be found in \cite[Prop.~26, Prop.~27, Thm.~28]{Ka1} and the proof of (ii) in \cite[Cor.~3.1]{KLa}.
Since the definition of $\mathcal{E}_{\mathrm{M}}$ is tailored to the (first half of the) proof of the variational principle due to Misiurewicz \cite{Mis}, proving (iii) is an easy task. However, it is not as easy as it might seem to prove that $h_{\mathcal{E}_{\mathrm{M}}}$ in fact generalizes the classical notion of measure-theoretic entropy, since even if $X_{\infty}$, $d_{\infty}$, $f_{\infty}$ and $\mu_{\infty}$ are assumed to be constant, we still have to deal with non-constant sequences of partitions. The proof is accomplished through the following result, cf.~\cite[Thm.~3.1]{KLa}.
\begin{theorem}\label{thm_finescaleseq}
Assume that there exists a sequence $(\mathcal{C}_{\infty}^k)_{k\in\mathbb{Z}_+}$ in $\mathcal{E}_{\mathrm{M}}$ with
\begin{equation*}
\lim_{k\rightarrow\infty}\sup_{n\in\mathbb{Z}_+}\sup_{C \in \mathcal{C}^k_n}\mathrm{diam}\, C = 0.
\end{equation*}
Then the measure-theoretic entropy satisfies
\begin{equation*}
h_{\mathcal{E}_{\mathrm{M}}}(f_{\infty};\mu_{\infty}) = \lim_{k\rightarrow\infty}h(f_{\infty};\mathcal{C}^k_{\infty}) = \sup_{k\in\mathbb{Z}_+}h(f_{\infty};\mathcal{C}^k_{\infty}).
\end{equation*}
\end{theorem}
In the autonomous case, it is clear that every constant sequence of partitions is contained in $\mathcal{E}_{\mathrm{M}}$, hence any refining sequence of partitions defines a sequence $(\mathcal{C}_{\infty}^k)_{k\in\mathbb{Z}_+}$, as required in the theorem. Consequently, the theorem says that the entropy is already determined on the constant sequences of partitions, so the classical definition of Kolmogorov-Sinai entropy is retained.
In general, it is unclear whether the Misiurewicz class contains sequences as required in Theorem \ref{thm_finescaleseq}. The following result, proved in \cite{KLa}, yields several sufficient conditions in the case when the state space is time-invariant, cf.~\cite[Thm.~3.2]{KLa}.
\begin{theorem}\label{thm_finescaleconds}
Assume that $(X_n,d_n) \equiv (X,d)$ for some compact metric space $(X,d)$. Then each of the following conditions guarantees that $\mathcal{E}_{\mathrm{M}}$ contains elements of arbitrarily (uniformly) small diameter:
\begin{enumerate}
\item[(i)] $\{\mu_n : n\in\mathbb{Z}_+\}$ is relatively compact in the strong topology on the space of measures.
\item[(ii)] For every $\alpha>0$ there is a finite measurable partition $\mathcal{A}$ of $X$ with $\mathrm{diam}\mathcal{A}<\alpha$ such that $\nu(\partial\mathcal{A})=0$ for all weak$^*$-limits $\nu$ of $\mu_{\infty}$. (This holds, in particular, if there are only countably many non-equivalent weak$^*$-limits.)
\item[(iii)] $X = [0,1]$ or $X = \mathrm{S}^1$ and there exists a dense set $D\subset X$ such that every $x\in D$ satisfies $\nu(\{x\})=0$ for all weak$^*$-limits $\nu$ of $\mu_{\infty}$.
\item[(iv)] $X$ has topological dimension zero.
\end{enumerate}
In each case, the sequences of partitions can in fact be chosen constant.
\end{theorem}
The following theorem provides an example, where both topological and measure-theoretic entropy can be computed, cf.~\cite[Thm.~5.4 and Thm.~5.5]{Ka2}.
\begin{theorem}
Let $M$ be a compact Riemannian manifold and $f_{\infty} = (f_n)_{n\in\mathbb{Z}_+}$ a sequence of $C^2$-expanding maps $f_n:M \rightarrow M$ with expansion factors uniformly bounded away from one, and $C^2$-norms uniformly bounded. Then
\begin{equation*}
h_{\mathrm{top}}(f_{\infty}) = \limsup_{n\rightarrow\infty}\frac{1}{n}\log \int_M |\det\mathrm{D} f_0^n(x)| \mathrm{d} \mathrm{vol},
\end{equation*}
and for any smooth initial measure $\mu_0$, with $\mu_{\infty} = (f_0^n\mu_0)_{n\in\mathbb{Z}_+}$,
\begin{equation*}
h_{\mathcal{E}_{\mathrm{M}}}(f_{\infty};\mu_{\infty}) = \limsup_{n\rightarrow\infty}\frac{1}{n}\int_M \log|\det\mathrm{D} f_0^n(x)| \mathrm{d} \mathrm{vol}.
\end{equation*}
\end{theorem}
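As a consistency check (this special case is not taken from \cite{Ka2}), note that in the autonomous case $f_n \equiv f$ the first formula recovers the well-known identity $h_{\mathrm{top}}(f) = \log\deg(f)$ for expanding maps: since $f^n$ is a covering map of degree $\deg(f)^n$, the change-of-variables formula gives
\begin{equation*}
\int_M |\det\mathrm{D} f^n(x)|\, \mathrm{d}\mathrm{vol}(x) = \deg(f)^n\, \mathrm{vol}(M),
\end{equation*}
so that $\limsup_{n\rightarrow\infty}\frac{1}{n}\log\int_M |\det\mathrm{D} f^n|\,\mathrm{d}\mathrm{vol} = \log\deg(f)$.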
The question under which conditions an NDS satisfies a full variational principle, i.e.,
\begin{equation*}
h_{\mathrm{top}}(f_{\infty}) = \sup_{\mu_{\infty}}h_{\mathcal{E}_{\mathrm{M}}}(f_{\infty};\mu_{\infty})
\end{equation*}
is completely open. Only some examples are known which do not allow for a broad generalization.
\section{An example}\label{sec_ex}
In this section, we apply the theory explained above to an NDS which has been introduced in \cite{BOp} by Balibrea and Oprocha. We will need the following proposition whose proof is completely analogous to the autonomous case, and hence is omitted.
\begin{figure}
\caption{The maps $f$ and $g$}
\label{fig:1}
\end{figure}
\begin{proposition}\label{prop_lipest}
Let $(X_{\infty},f_{\infty})$ be a topological NDS such that $f_n$ is (globally) Lipschitz-continuous with Lipschitz-constant $L_n$ for each $n$ and $X_0$ has finite upper capacitive dimension $\overline{\dim}_C(X_0)$. Then
\begin{equation*}
h_{\mathrm{top}}(f_{\infty}) \leq \overline{\dim}_C(X_0) \cdot \limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\max\{0,\log L_i\}.
\end{equation*}
\end{proposition}
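Here $\overline{\dim}_C$ denotes the upper capacitive (box-counting) dimension: if $N(X,\varepsilon)$ is the minimal number of $\varepsilon$-balls needed to cover the compact metric space $X$, then
\begin{equation*}
\overline{\dim}_C(X) = \limsup_{\varepsilon\downarrow 0}\frac{\log N(X,\varepsilon)}{\log(1/\varepsilon)}.
\end{equation*}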
Now consider the NDS from \cite[Thm.~4]{BOp}, which is constructed from the two piecewise affine maps depicted in Fig.~\ref{fig:1}. More precisely, let $m_0 := 1$ and $m_n := 2^{n^2}$ for all $n\in\mathbb{N}$. Consider the maps $f,g:[0,1]\rightarrow[0,1]$ in Fig.~\ref{fig:1}, and the NDS $f_{\infty} = (f_n)_{n\in\mathbb{Z}_+}$ defined by
\begin{equation*}
f_i := \left\{\begin{array}{rl}
f & \mbox{ if } i = m_n \mbox{ for some } n\\
g & \mbox{ otherwise}
\end{array}\right..
\end{equation*}
For the Lebesgue measure $\lambda$ on $[0,1]$ we have weak$^*$ convergence $\mu_n = f_0^n\lambda \rightarrow \delta_0$, since every trajectory with initial value in $[0,1)$ converges to zero. Indeed, this implies $\varphi \circ f_0^n(x) \rightarrow \varphi(0)$ for every $x\in [0,1)$ and every continuous function $\varphi:[0,1]\rightarrow\mathbb{R}$. Hence, $\int \varphi \,\mathrm{d}\mu_n = \int \varphi \circ f_0^n \,\mathrm{d}\lambda \rightarrow \int \varphi(0) \,\mathrm{d} \lambda = \varphi(0)$ by the theorem of dominated convergence. Consequently, by Theorem \ref{thm_finescaleconds}(ii), the admissible class $\mathcal{E}_M(\mu_{\infty})$ contains all constant sequences of partitions with $\delta_0$-zero boundaries, in particular all constant sequences $\mathcal{P}_n \equiv \mathcal{P}$, where $\mathcal{P}$ consists of nontrivial subintervals of $[0,1]$.
Let $\mathcal{P}$ be a partition of $[0,1]$ into intervals of length $1/(3k)$ for some $k\in\mathbb{N}$. Then each interval in $\mathcal{P}$ is completely contained in $J^- := [0,1/3]$, $J := [1/3,2/3]$ or $J^+ := [2/3,1]$. Then
\begin{equation*}
H_{\lambda}\left(\bigvee_{i=0}^{m_n}f_0^{-i}\mathcal{P}\right) = H_{\lambda}\left(\bigvee_{i=0}^{m_{n-1}}f_0^{-i}\mathcal{P} \vee \bigvee_{i=m_{n-1} + 1}^{m_n}f_0^{-i}\mathcal{P}\right)
\geq H_{\lambda}\left(\bigvee_{i=m_{n-1}+1}^{m_n}f_0^{-i}\mathcal{P}\right).
\end{equation*}
Note that for $m_{n-1}+1 \leq i \leq m_n$ we have
\begin{equation*}
f_0^{-i} = \left(g^{i - m_{n-1} - 1} \circ f_{m_{n-1}} \circ \cdots \circ f_1 \circ f_0 \right)^{-1} = f_0^{-(m_{n-1}+1)} \circ g^{-(i-m_{n-1}-1)},
\end{equation*}
and hence, writing $l_n := m_n - m_{n-1} - 1$,
\begin{equation*}
H_{\lambda}\left(\bigvee_{i=0}^{m_n}f_0^{-i}\mathcal{P}\right) \geq H_{\lambda}\left(f_0^{-(m_{n-1}+1)}\bigvee_{i=0}^{l_n} g^{-i}\mathcal{P}\right).
\end{equation*}
Now we look only at those members of $\bigvee_{i=0}^{l_n}g^{-i}\mathcal{P}$ that come from intervals $P \in \mathcal{P}$ with $P \subset J$. Let us write $\mathcal{P}^J$ for the set of all elements in $\mathcal{P}$ contained in $J$. Then the above can be estimated by
\begin{align*}
&\geq H_{\lambda}\left(f_0^{-(m_{n-1}+1)}\bigvee_{i=0}^{l_n} g^{-i} \mathcal{P}^J \right)\\
&= - \sum_{P \in \bigvee_{i=0}^{l_n}g^{-i}\mathcal{P}^J} \lambda(f_0^{-(m_{n-1}+1)}P) \log \lambda(f_0^{-(m_{n-1}+1)}P).
\end{align*}
Now we use that $J$ is $g$-invariant and $f_0^{-(m_{n-1}+1)}(A) = f^{-n}(A)$ for any $A \subset J$ and $n\geq1$. Moreover, we use that $f^{-1}(x) = (1/2)(x - (1/3)) + (2/3)$ on $J$. Together with the fact that $g^{-1}$ is trivial on $J^+$, this gives
\begin{align*}
H_{\lambda}\left(\bigvee_{i=0}^{m_n}f_0^{-i}\mathcal{P}\right) &\geq - \left(\# \bigvee_{i=0}^{l_n}g^{-i}\mathcal{P}^J\right) \frac{1}{3^{l_n}2^n 3k} \log \frac{1}{3^{l_n}2^n 3k}\allowdisplaybreaks\\
&= \log \left(3^{l_n}2^n 3k\right) = l_n\log (3) + n\log(2) + \log(3k).
\end{align*}
Dividing by $m_n$ and sending $n$ to infinity gives $\log(3)$, since
\begin{equation*}
\frac{m_n - m_{n-1} - 1}{m_n} = 1 - 2^{-(2n-1)} - \frac{1}{2^{n^2}} \rightarrow 1,
\end{equation*}
and $n/m_n \rightarrow 0$. Writing $\lambda_{\infty}$ for the sequence $\lambda_n := f_0^n\lambda$, we obtain
\begin{equation*}
h_{\mathcal{E}_M}(f_{\infty};\lambda_{\infty}) \geq \log(3).
\end{equation*}
Since $L=3$ is a Lipschitz constant for both $f$ and $g$, Proposition \ref{prop_lipest} yields
\begin{equation*}
\log(3) \leq h_{\mathcal{E}_M}(f_{\infty};\lambda_{\infty}) \leq h_{\mathrm{top}}(f_{\infty}) \leq \log(3),
\end{equation*}
implying that for $f_{\infty}$ a full variational principle is satisfied with $\lambda_{\infty}$ being an IMS of maximal entropy.
\begin{remark}
It is easy to see that every trajectory $\{f_0^n(x)\}_{n\in\mathbb{Z}_+}$ with $x \neq 1$ converges to $0$. Hence, the example shows that both the measure-theoretic and the topological entropy can capture transient chaotic behavior, which is not seen in the asymptotic behavior of trajectories.
\end{remark}
\end{document} |
\begin{document}
\begin{abstract}
A semilinear parabolic equation with constraint modeling the dynamics of a microelectromechanical system (MEMS) is studied. In contrast to the commonly used MEMS model, the well-known pull-in phenomenon occurring above a critical potential threshold is not accompanied by a break-down of the model, but is recovered by the saturation of the constraint for pulled-in states. It is shown that a maximal stationary solution exists and that saturation only occurs for large potential values.
In addition, the existence, uniqueness, and large time behavior of solutions to the evolution equation are studied.
\end{abstract}
\keywords{Parabolic variational inequality, obstacle problem, MEMS, well-posedness, large time behavior}
\subjclass[2010]{35M86,35K57,35J87,35B40}
\maketitle
\section{Introduction}
We investigate the well-posedness and qualitative behavior of solutions to the following equation
\begin{subequations}\label{EvEq001x}
\begin{align}
\partial_t u - \Delta u + \partial\mathbb{I}_{[-1,\infty)}(u) & \owns - \frac{\lambda}{2(1+u+W(x))^2} \,,\qquad t>0\,,\quad x\in D\, , \label{EvEq001ax} \\
u & = 0 \,,\qquad t>0\,,\quad x\in \partial D\, ,\label{EvEq001cx} \\
u(0) & = u_0 \,,\qquad x\in D\, , \label{EvEq001dx}
\end{align}
\end{subequations}
arising from the modeling of idealized electrostatically actuated microelectromechanical systems (MEMS) with varying dielectric properties. Here, $D$ is the shape at rest of a membrane coated with a thin dielectric layer, which is held fixed on its boundary and suspended above a rigid horizontal ground plate with the same shape $D$. Holding the ground plate at potential zero and applying a positive potential to the membrane induce a Coulomb force across the device and thereby a deformation of the membrane. After a suitable rescaling, the ground plate is located at vertical position $z=-1$ while the membrane at rest is located at $z=0$, and its vertical deflection $u(t,x)$ at time $t\ge 0$ and position $x\in D$ solves \eqref{EvEq001x}. The parameter $\lambda$ in \eqref{EvEq001ax} is proportional to the applied voltage, while $W$ is non-negative, depends on the spatial position $x$, and accounts for the possible dielectric heterogeneity of the membrane. We point out that inertia and bending effects are neglected in \eqref{EvEq001x}. This model is derived in \cite{LW17}, where we revisit the derivation of MEMS models with varying dielectric properties, and differs from the commonly used model describing the dynamics of MEMS, which reads \cite{Pe02}
\begin{subequations}\label{Pe}
\begin{align}
\partial_t u - \Delta u & = - \frac{\lambda f(x)}{2(1+u)^2} \,,\qquad t>0\,,\quad x\in D\, , \label{Pea} \\
u & = 0 \,,\qquad t>0\,,\quad x\in \partial D\, ,\label{Pec} \\
u(0) & = u_0 \,,\qquad x\in D\, , \label{Ped}
\end{align}
\end{subequations}
where $u$ and $\lambda$ have the same meaning as above, but the dielectric properties of the membrane are accounted for by the function $f$ which is non-negative and depends on the spatial position $x\in D$. The difference in the reaction terms in \eqref{EvEq001ax} and \eqref{Pea} stems from different approaches to compute the electrostatic force exerted on the membrane in the modeling, and we refer to \cite{LW17} and \cite{Pe02} for the complete derivations. Also, the thickness of the membrane with heterogeneous dielectric properties is retained when deriving \eqref{EvEq001ax}.
From a physical point of view, a ubiquitous feature of MEMS devices is that, when the applied potential exceeds a certain threshold value, the restoring elastic forces no longer balance the electrostatic forces, and the membrane touches down on the ground plate, a phenomenon known as pull-in instability \cite{PeB03}. From the mathematical point of view this means that, when $\lambda$ is larger than a certain threshold value $\lambda_*$, the diffusion term no longer overcomes the reaction term and there is a time $T_*>0$ such that $\min u(T_*)=-1$. When this occurs, the two models respond in a completely different way. Indeed, in \eqref{Pe} the reaction term becomes singular and the solution ceases to exist at this time (such a behavior is also referred to as {\it quenching} in the literature). In contrast, the constraint term $ \partial\mathbb{I}_{[-1,\infty)}(u)$ accounts for the fact that the membrane cannot penetrate the ground plate upon touching down but rather lies directly on it. The notation $\partial\mathbb{I}_{[-1,\infty)}(u)$ stands for the subdifferential of the indicator function $\mathbb{I}_{[-1,\infty)}$ of the closed convex set $[-1,\infty)$, the indicator function taking the value zero on $[-1,\infty)$ and the value $\infty$ on its complement. Since $\partial\mathbb{I}_{[-1,\infty)}$ is a set-valued operator, see \eqref{mm} below, the evolution equation \eqref{EvEq001ax} is actually a differential inclusion, which could also be written as a parabolic variational inequality, see for instance \cite{Ba10, Br73}. Owing to this constraint, the evolution equation \eqref{EvEq001ax} features no singularity, not even in the coincidence region where $u=-1$, at least if $W>0$ in $D$. Therefore, one expects to have global solutions for this model. As we shall prove below, this is indeed true and, in fact, $W$ may even vanish, but only at isolated points and not too rapidly, the latter being measured by some integrability assumption on $1/W$, see \eqref{W1} below. We shall not explore the influence of a non-empty zero set of $W$ in great detail herein. Since $W$ is proportional to $1/\sigma$, where $\sigma$ denotes the dielectric permittivity of the membrane (see \cite{LW17}), the assumption $W>0$ corresponds to a membrane with no perfectly conducting part.
Another striking difference between the two models is that there is no stationary solution to \eqref{Pe} when $\lambda$ exceeds the critical value $\lambda_*$ while there is always at least one stationary solution to \eqref{EvEq001x} for all values of $\lambda$. Nevertheless, as we shall see, there is still a critical value $\Lambda_z>0$ for $\lambda$ which separates stationary solutions into {\it unzipped states} (for $\lambda<\Lambda_z$) and {\it zipped states} (for $\lambda>\Lambda_z$), defined as:
\begin{definition*}
A measurable function $h:D\rightarrow [-1,\infty)$ is a {\it zipped state} if the {\it coincidence set} $$\mathcal{C}(h):=\{ x\in D\,;\, h(x)=-1\}$$ has a positive Lebesgue measure and an {\it unzipped state} otherwise.
\end{definition*}
Thus, the issue of non-existence of stationary solutions to \eqref{Pe} is replaced in \eqref{EvEq001x} with the existence of zipped states. Since the pioneering works \cite{GPW05,Pe02,FMPS06,BGP00}, a lot of research has been devoted to \eqref{Pe}, providing a wealth of information on the structure of stationary solutions, the occurrence of touchdown in finite time, and the dynamical properties of solutions. We refer to \cite{EGG10} and \cite{LWBible} for a more detailed description and references.
Returning to \eqref{EvEq001x}, which is the focus of this paper, let us mention that an equation with a similar constraint is considered in \cite{GB01} in a (fourth-order stationary) MEMS model with a dielectric layer placed on top of the ground plate. We also refer to \cite{LLG14,LLG15}, where a regularizing term is added in \eqref{Pe} in order to describe the behavior of a MEMS after initial contact of the membrane and the ground plate.
The purpose of this paper is to provide various results for \eqref{EvEq001x}, including a description of the stationary solutions, the well-posedness of the evolution problem, as well as qualitative properties of the solutions. These results are presented in the next section.
\section{Main Results}\label{SecMR}
We assume throughout this paper that $D$ is a bounded domain in $\mathbb{R}^d$, $d\ge 1$, with smooth boundary $\partial D$ and that
\begin{equation}\label{W1}
\text{$W$ is a non-negative measurable function on $D$ such that $1/W\in L_2(D)$\,.}
\end{equation}
Further assumptions on $W$ will be explicitly stated later on whenever needed. Let us point out that \eqref{W1} is the minimal assumption to ensure that the right-hand side of \eqref{EvEq001ax} belongs to $L_1(D)$.
\subsection{Stationary Problem}
We shall first present our main results with respect to stationary solutions to \eqref{EvEq001x}. To have a more compact notation for the right-hand side of \eqref{EvEq001x} in the following, we introduce
\begin{equation}\label{g}
g_W(v)(x):= \frac{1}{2 \left(1+v(x)+W(x)\right)^{2}} \,,\quad x\in D\,,
\end{equation}
for a given function $v:D\rightarrow [-1,\infty)$.
We let $-\Delta_1$ be the $L_1$-realization of the Laplace-Dirichlet operator, that is,
$$
-\Delta_1 u:=-\Delta u\,,\qquad u\in D(\Delta_1):=\{w\in W_1^1(D)\,;\, \Delta w\in L_1(D)\,,\ w=0 \text{ on } \partial D\}\,,
$$
where $w=0 \text{ on } \partial D$ is to be understood in the sense of traces. Recall that $D(\Delta_1)$ embeds continuously in $W_q^1(D)$ for $1\le q < d/(d-1)$. Let us also recall that $\partial \mathbb{I}_{[-1,\infty)}$ is the maximal monotone graph in $\mathbb{R}\times\mathbb{R}$ given by
\begin{equation}\label{mm}
\partial \mathbb{I}_{[-1,\infty)}(r)=\left\{ \begin{array}{cl}
\emptyset \,, & r<-1\,,\\
(-\infty,0]\,, & r=-1\,,\\
\{0\}\,, & r>-1\,.
\end{array}\right.
\end{equation}
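For later use it is worth recording how this graph acts in time discretizations: for any $h>0$ the resolvent $(\mathrm{id}+h\,\partial\mathbb{I}_{[-1,\infty)})^{-1}$ is simply the pointwise projection onto $[-1,\infty)$. A minimal illustration (ours, stated here only for orientation):
\begin{verbatim}
def resolvent(x, h):
    # Solve y + h*zeta = x with zeta in the subdifferential of the
    # indicator function of [-1, infinity) at y.  The solution is
    # y = max(x, -1): zeta = 0 if x >= -1, and zeta = (x+1)/h <= 0 otherwise.
    assert h > 0
    return max(x, -1.0)
\end{verbatim}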
The following definition gives a precise notion of a stationary solution.
\begin{definition}\label{SSDef001}
A {\it stationary solution} to \eqref{EvEq001x} is a function $u\in D(\Delta_1)$ such that $g_W(u)\in L_1(D)$ and
$$
- \Delta u +\partial \mathbb{I}_{[-1,\infty)}(u)\owns -\lambda g_W(u) \quad \text{in } D\,.
$$
Equivalently, for a.e. $x\in D$,
$$
\big(\Delta u(x)-\lambda g_W(u)(x)\big) \big(r-u(x)\big)\le 0\,,\quad r\ge -1\,.
$$
\end{definition}
Owing to the integrability of $\Delta u$ and $g_W(u)$, the differential inclusion in Definition~\ref{SSDef001} is to be understood in $L_1(D)$, that is, for a.e. $x\in D$. Throughout the paper we shall omit ``a.e.'' when no confusion seems likely.
The main result regarding stationary solutions is the following.
\begin{thm}[{\bf Maximal Stationary Solutions}]\label{statsol}
Suppose \eqref{W1}. Given $\lambda>0$, there is a maximal stationary solution $U_\lambda \in \mathring{H}^1(D)\cap D(\Delta_1)$ to \eqref{EvEq001x} with $-1\le U_\lambda\le 0$ in $D$, and there is $\Lambda_z\in (0,\infty)$ such that $U_\lambda$ is unzipped for $\lambda<\Lambda_z$ and zipped for $\lambda>\Lambda_z$.
Moreover, $U_\lambda$ is decreasing with respect to $\lambda$ in the sense that if $\lambda_1<\lambda_2$, then $U_{\lambda_1}\ge U_{\lambda_2}$ in $D$.
Finally, if $1/W\in L_{2p}(D)$ for some $p\in (1,\infty)$, then $U_\lambda\in W_p^2(D)$.
\end{thm}
Interestingly, Theorem~\ref{statsol} guarantees the existence of at least one stationary solution to \eqref{EvEq001x} for {\it any} value of $\lambda$. As already mentioned, this markedly contrasts with the commonly used vanishing aspect ratio model \eqref{Pe} for which no stationary solution exists for large values of $\lambda$, see \cite{BGP00, GPW05, Pe02}. Nevertheless, the role of the critical value of $\lambda$ is played by $\Lambda_z$, which separates the structural behavior of stationary solutions.
Theorem~\ref{statsol} is proven in Section~\ref{Sec2}, to which we also refer for a precise definition of a maximal stationary solution (see Proposition~\ref{P1}) and for additional information on stationary solutions in general. The existence result is obtained by a rather classical monotone iterative scheme similar to the one used in \cite{EGG10,GhG06} to construct stationary solutions to \eqref{Pe}. However, due to the constraint in \eqref{EvEq001ax}, the proof is based on the analysis of \cite{BS73} on semilinear second-order equations featuring maximal monotone graphs in $L_1$.
To complement the investigation of stationary solutions, we consider in Section~\ref{Sec3} the particular case when $d=1$ and $W\equiv const$. For this situation we present in Theorem~\ref{T44} a complete characterization of all stationary solutions. Even in this simplified setting, the structure of stationary solutions turns out to be quite sensitive with respect to the value of $\lambda$. In particular, it is shown that if $W$ is small, then there is an interval for $\lambda$ for which there is coexistence of unzipped and zipped states, a feature for which numerical evidence is provided in \cite{GB01} for a related model.
\subsection{Evolution Problem}
We next consider the evolution equation as stated in \eqref{EvEq001x}. Interestingly, it can be seen as the gradient flow in $L_2(D)$ associated with the total energy
\begin{equation}\label{E}
\mathcal{E}_W(u):= \frac{1}{2}\int_D\vert\nabla u\vert^2\,\mathrm{d} x+\int_D \mathbb{I}_{[-1,\infty)}(u)\,\mathrm{d} x - \frac{\lambda}{2} \int_D\frac{\mathrm{d} x}{1+u+W}\,.
\end{equation}
However, our analysis relies only partially on this structure since the functional setting we work with is $L_1(D)$ due to the integrability assumption \eqref{W1} on $1/W$. We use the notation
\begin{equation}
\mathcal{A} := \left\{ v \in \mathring{H}^1(D)\ ;\ v \ge -1 \;\text{ a.e. in }\; D \right\} \label{EvEq002}
\end{equation}
for the domain of the convex part of the energy $\mathcal{E}_W$,
where
$$
\mathring{H}^1(D):=\{v\in H^1(D)\,;\, v=0 \text{ on } \partial D\}\,.
$$
For our purpose, the framework of weak solutions turns out to be not sufficient. Thus, we introduce the stronger notion of an \textit{energy solution}.
\begin{definition}\label{EvDef001}
Let $u_0\in \mathcal{A}$. An {\it energy solution} to \eqref{EvEq001x} is a function $u$ such that, for all $t>0$,
$$
u\in W_2^1(0,t;L_2(D))\cap L_\infty(0,t;\mathring{H}^1(D))\cap L_1((0,t), D(\Delta_1))\,,\quad u(t)\in\mathcal{A}\,,
$$
which satisfies the energy estimate
\begin{equation}
\frac{1}{2} \int_0^t \|\partial_t u(s)\|_2^2\ \mathrm{d}s + \mathcal{E}_W(u(t)) \le \mathcal{E}_W(u_0) \label{EvEq010}
\end{equation}
and the weak formulation of \eqref{EvEq001ax}
\begin{equation}
\int_D (u(t)-u_0) \vartheta\ \mathrm{d}x = - \int_0^t \int_D \left[ \nabla u \cdot \nabla\vartheta + \zeta_u \vartheta +\lambda\vartheta g_W(u) \right]\ \mathrm{d}x\mathrm{d}s \label{EvEq011}
\varepsilonnd{equation}
for all $\vartheta\in \mathring{H}^1(D)\cap L_\infty(D)$, where
$$
\zeta_u:= \Delta u-\lambda g_W(u)-\partial_t u\in L_1((0,t)\times D)
$$
satisfies $\zeta_u\in \partial\mathbb{I}_{[-1,\infty)}(u)$ a.e. in $(0,t)\times D$.
\end{definition}
The existence of energy solutions is guaranteed by the next theorem.
\begin{thm}[{\bf Existence}]\label{EvThm002}
Suppose \eqref{W1} and let $\lambda>0$. Given $u_0\in \mathcal{A}\cap L_\infty(D)$, there exists at least one energy solution $u$
to \eqref{EvEq001x} satisfying also
\begin{equation}
u(t,x) \le \| (u_0)_+\|_\infty\ , \qquad (t,x)\in (0,\infty)\times D\ . \label{EvEq003}
\end{equation}
In addition, if there are $\kappa\in (0,1)$ and $T>0$ such that $u\ge \kappa-1$ in $(0,T)\times D$, then
$$
u\in C^1([0,T);L_p(D))\cap C((0,T);W_p^2(D))
$$
for all $p\in (1,\infty)$.
\end{thm}
The proof of Theorem~\ref{EvThm002} is performed in Section~\ref{Sec5} and relies partially on the gradient flow structure in $L_2(D)$ of \eqref{EvEq001x}. Indeed, we exploit this structure under the additional assumption $1/W\in L_4(D)$. In that case, we use the direct method of calculus of variations to construct a solution in $\mathcal{A}\cap H^2(D)$ to the time implicit Euler scheme associated with \eqref{EvEq001x}. We then use a compactness argument along with \cite{BS73} to solve the same implicit Euler scheme but with $1/W\in L_2(D)$, thereby obtaining a less regular solution in $\mathcal{A}\cap D(\Delta_1)$. We next pass to the limit as the discretization parameter tends to zero, using a combination of energy arguments and Dunford-Pettis' theorem.
We supplement Theorem~\ref{EvThm002} with a uniqueness result which is valid when $1/W$ enjoys better integrability property.
\begin{thm}[{\bf Uniqueness and Comparison Principle}] \label{EvThm005}
Suppose \eqref{W1} and, in addition, that
$1/W\in L_r(D)$ for some $r>3d/2$. Let $\lambda>0$. Given $u_0\in \mathcal{A}\cap L_\infty(D)$, there exists a unique energy solution $u$
to \eqref{EvEq001x} satisfying also~\eqref{EvEq003}.
Furthermore, if $v_0\in \mathcal{A}\cap L_\infty(D)$ is such that $u_0\le v_0$ in $D$ and if $v$
denotes the corresponding energy solution to \eqref{EvEq001x}, then $u(t)\le v(t)$ in $D$ for all $t\ge 0$.
\end{thm}
It is well-known that the comparison principle is available for parabolic variational inequalities \cite[Proposition~II.7]{Br72}. The proof of Theorem~\ref{EvThm005} is given in Section~\ref{Sec5}.
We next turn to the large time dynamics of energy solutions to \eqref{EvEq001x} and combine the information on the maximal stationary solutions provided by Theorem~\ref{statsol} along with the energy inequality \eqref{EvEq010} to describe the structure of the $\omega$-limit set $\omega(u_0)$, defined for an energy solution $u$ to \eqref{EvEq001x} as the set of all $v\in \mathcal{A}$ for which there is a sequence $(t_k)_{k\ge 1}$ of positive real numbers such that
\begin{equation*}
\lim_{k\to\infty} t_k = \infty \;\text{ and }\; \lim_{k\to\infty} \|u(t_k)-v\|_2 = 0\ .
\end{equation*}
Owing to the energy structure, the $\omega$-limit set consists only of stationary solutions.
\begin{thm}\label{EvThm004}
Suppose \eqref{W1}. Let $\lambda>0$ and $u_0\in \mathcal{A}\cap L_\infty(D)$ and consider an energy solution $u$ to \eqref{EvEq001x} satisfying also \eqref{EvEq003}.
Then the set $\omega(u_0)$ is non-empty and bounded in $\mathring{H}^1(D)$ and contains only stationary solutions to \eqref{EvEq001x}. Furthermore, if $1/W\in L_r(D)$ for some $r>3d/2$ and $u_0\ge U_\lambda$ in $D$, then $\omega(u_0)=\{U_\lambda\}$ and
$$
\lim_{t\to\infty} \| u(t) - U_\lambda\|_2 = 0\ .
$$
\end{thm}
The proof of Theorem~\ref{EvThm004} relies on the energy inequality \eqref{EvEq010} and is carried out in Section~\ref{Sec6} in the spirit of the proof of LaSalle's invariance principle. A numerical illustration is given in Figure~\ref{MassSpring}.
\begin{figure}[ht]
\centering\includegraphics[scale=.45]{ConstrainedParabolic_ZipUnzip2.jpg}
\caption{\small Zipped and unzipped states: one-dimensional simulation of the solution to \eqref{EvEq001x} at increasing time instants ($W\equiv 1$, $\lambda=4$, $u_0= 0$ and constraint approximated by $K\min(1+u,0)$ with $K$ large).}\label{MassSpring}
\end{figure}
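The simulation shown in Figure~\ref{MassSpring} replaces the constraint by the penalization $K\min(1+u,0)$ and discretizes in space. The following explicit-in-time sketch (ours; the domain $D=(-1,1)$, the final time, the grid size, the time step, and the value of $K$ are illustrative choices and are not taken from the original computation) indicates one way to set up such a penalized scheme.
\begin{verbatim}
import numpy as np

# penalized explicit finite-difference scheme for
#   u_t = u_xx - lam/(2*(1+u+W)^2) + K*min(1+u, 0),   u = 0 on the boundary
lam, W, K = 4.0, 1.0, 5.0e2      # lambda and W as in the caption, K "large"
N, T = 201, 2.0                  # grid points on D = (-1,1) and final time
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.2 * min(dx**2, 1.0 / K)   # crude stability restriction for the explicit scheme
u = np.zeros(N)                  # u_0 = 0

for _ in range(int(T / dt)):
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (lap - lam / (2.0 * (1.0 + u + W) ** 2) + K * np.minimum(1.0 + u, 0.0))
    u[0] = u[-1] = 0.0           # Dirichlet boundary condition

print(u.min())                   # minimal deflection at time T
\end{verbatim}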
We finally provide some additional information on the dynamics when the evolution starts from rest, that is, when $u_0=0$.
\begin{thm}\label{obelix888}
Suppose that $1/W\in L_r(D)$ for some $r>3d/2$ and let $u$ be the solution to \eqref{EvEq001x} with $u_0=0$. Then:
\begin{itemize}
\item[(i)] For $t_1<t_2$ there holds $u(t_1)\ge u(t_2)$ in $D$ and $\mathcal{C}(u(t_1))\subset \mathcal{C}(u(t_2))$.
\item[(ii)] If $\lambda<\Lambda_z$, then $u(t)$ is unzipped for all $t\ge 0$.
\item[(iii)] There is $\Lambda^*\ge \Lambda_z$ such that if $\lambda >\Lambda^*$, then there is $T_z=T_z(\lambda, W)>0$ such that $u(t)$ is zipped for $t>T_z$.
\item[(iv)] In addition, $\Lambda^*= \Lambda_z$ if $W\in L_\infty(D)$.
\end{itemize}
\end{thm}
The proof of Theorem~\ref{obelix888} is performed in Section~\ref{Sec6}. The time monotonicity in statement (i) is actually a classical feature of parabolic equations when the initial value is a supersolution, and it turns out that the constraint does not alter this property.
\section{Stationary Solutions}\label{Sec2}
In this section we prove Theorem~\ref{statsol}. Thus we investigate stationary solutions to \eqref{EvEq001x} in the sense of Definition~\ref{SSDef001}, that is, solutions to
\begin{subequations}\label{idefix}
\begin{align}
- \Delta u +\partial \mathbb{I}_{[-1,\infty)}(u)&\owns -\lambda g_W(u) \,,&& x\in D\,, \label{idefix1}\\
u&=0\,,&& x\in\partial D\,, \label{idefix3}
\end{align}
\end{subequations}
with $\lambda>0$ and $g_W$ given in \eqref{g}. Recalling that we always assume \eqref{W1} to hold, the right-hand side of \eqref{idefix1} belongs to $L_1(D)$ and the analysis of this section is based on the nice properties of the maximal monotone operator $-\Delta+\partial\mathbb{I}_{[-1,\infty)}$ in $L_1(D)$ thoroughly studied in \cite[$\S$1]{BS73}. In particular, we recall the basic result on existence and uniqueness.
\begin{thm}\cite[Theorem 1]{BS73}\label{Flea}
Given $f\in L_1(D)$, there is a unique $v\in D(\Delta_1)$ such that
$$
- \Delta v (x) +\partial \mathbb{I}_{[-1,\infty)}(v(x))\owns f(x)\,,\quad x\in D\,.
$$
\end{thm}
We now establish the existence of $L_1$-solutions to \eqref{idefix} with the help of a classical monotone scheme. To this end, we introduce the notion of {\it subsolution} and {\it supersolution} to \eqref{idefix}.
\begin{definition}\label{DD}
(a) A subsolution to \eqref{idefix} is a function $\sigma\in D(\Delta_1)$ with $g_W(\sigma)\in L_1(D)$ for which there is $F_\sigma\in L_1(D)$ such that $F_\sigma\le -\lambda g_W(\sigma)$ in $D$, and $\sigma$ is the unique solution to
\begin{equation*}
-\Delta \sigma +\partial\mathbb{I}_{[-1,\infty)}(\sigma)\owns F_\sigma\ \text{ in } D\,.
\end{equation*}
(b) A supersolution to \eqref{idefix} is a function $\sigma\in D(\Delta_1)$ with $g_W(\sigma)\in L_1(D)$ for which there is $F_\sigma\in L_1(D)$ such that $F_\sigma\ge -\lambda g_W(\sigma)$ in $D$, and $\sigma$ is the unique solution to
\begin{equation*}
-\Delta \sigma +\partial\mathbb{I}_{[-1,\infty)}(\sigma)\owns F_\sigma\ \text{ in } D\,.
\end{equation*}
\end{definition}
We first observe that, for any subsolution $\sigma$ to \eqref{idefix}, we have
\begin{equation}\label{b}
-1\le \sigma\le 0\quad\text{in}\ D\,,
\end{equation}
where the first inequality stems from Definition~\ref{DD}~(a) and the second one is due to \cite[Proposition 5]{BS73} by comparison with the zero solution since $F_\sigma\le -\lambda g_W(\sigma)\le 0$ in $D$.
\begin{prop}[{\bf Stationary Solutions}]
\label{P1}
Let $\lambda>0$. Then there is a solution $U_\lambda\in D(\Delta_1)\cap \mathring{H}^1(D)$ to \eqref{idefix} with $-1\le U_\lambda\le 0$ a.e.
Moreover, this solution is maximal in the sense that $U_\lambda\ge \sigma$ in $D$ for any subsolution $\sigma$ in the sense of Definition~\ref{DD}.
Finally, if $1/W\in L_{2p}(D)$ for some $p\in (1,\infty)$, then $U_\lambda\in W_p^2(D)$.
\end{prop}
\begin{proof}
Let us first observe that there is at least one subsolution to \eqref{idefix}. Indeed, since $1/W^2\in L_1(D)$, it follows from Theorem~\ref{Flea} that there exists a unique solution $\sigma_0\in D(\Delta_1)$ to
$$
-\Delta \sigma_0 +\partial\mathbb{I}_{[-1,\infty)}(\sigma_0) \owns -\frac{\lambda}{2W^2}\quad \text{in } \ D\,.
$$
As $\sigma_0\ge -1$ in $D$, one has that $-\lambda/2W^2\le -\lambda g_W(\sigma_0)$ in $D$, so that $\sigma_0$ is a subsolution to \eqref{idefix} in the sense of Definition~\ref{DD}~(a).
Fix now an arbitrary subsolution $\sigma$ with corresponding $F_\sigma$ and set $u^0:=0$ in $D$. Since $\sigma\le u^0$ in $D$ by \eqref{b}, we have
$$
F_\sigma\le -\lambda g_W(\sigma)\le -\lambda g_W(u^0)\le 0\quad \text{in }\ D\,.
$$
Hence, if $u^1\in D(\Delta_1)$ denotes the unique solution to
$$
-\Delta u^1 +\partial\mathbb{I}_{[-1,\infty)}(u^1) \owns -\lambda g_W(u^0)\quad\text{in } \ D
$$
given by Theorem~\ref{Flea}, then \cite[Proposition 5]{BS73} implies that $-1\le \sigma\le u^1\le u^0=0$ in $D$ and
$$
\zeta_\sigma:=F_\sigma+\Delta \sigma\le \zeta^1:=-\lambda g_W(u^0)+\Delta u^1\le \zeta^0:=0\quad \text{in } \ D\,.
$$
Arguing by induction yields for each $n\in \mathbb{N}$ the unique solution $u^{n+1}\in D(\Delta_1)$ to
$$
-\Delta u^{n+1} +\partial\mathbb{I}_{[-1,\infty)}(u^{n+1}) \owns -\lambda g_W(u^{n})\quad \text{in }\ D
$$
for which
\begin{align}
-1&\le \sigma\le u^{n+1}\le u^n\le 0 \quad \text{in }\ D \,,\label{est1}\\
\zeta_\sigma &\le \zeta^{n+1}\le \zeta^n\le 0\quad \text{in }\ D \,,\label{est2}
\end{align}
where
\begin{equation}\label{hakim}
\zeta^n:=-\lambda g_W(u^{n-1})+\Delta u^{n}\,,\quad n\ge 1\,.
\end{equation}
Since $\sigma$ and $\zeta_\sigma$ both belong to $L_1(D)$, the ordering properties \eqref{est1} and \eqref{est2} allow us to apply the monotone convergence theorem and obtain that
\begin{equation}\label{sting}
(u^n,\zeta^n)\longrightarrow (U_\lambda,\zeta)\quad \text{in }\ L_1(D,\mathbb{R}^2)\,,
\end{equation}
where, for $x\in D$,
$$
U_\lambda (x):=\inf_{n\ge 0} u^n(x)\qquad\text{and}\qquad \zeta (x):=\inf_{n\ge 0} \zeta^n(x)\,.
$$
Furthermore, by \eqref{est1},
$$
0\le g_W(u^n)\le g_W(u^{n+1})\le \frac{1}{2W^2}\quad \text{in } \ D\,,
$$
and we use once more the monotone convergence theorem to deduce that there is $G\in L_1(D)$ such that
\begin{equation}\label{copeland}
g_W(u^n)\longrightarrow G\quad \text{in } L_1(D)\,.
\end{equation}
Since there is a subsequence $(n_k)_{k\in\mathbb{N}}$ such that
$$
\big(u^{n_k},g_W(u^{n_k})\big)\longrightarrow (U_\lambda, G)\quad \text{a.e. in }\ D
$$
according to \eqref{sting} and \eqref{copeland}, the continuity of $g_W$ (with respect to $u$) entails that
\begin{equation}\label{summers}
G=g_W(U_\lambda)\,.
\end{equation}
On the one hand, we pass to the limit as $n\rightarrow\infty$ in \eqref{hakim} by using \eqref{sting}-\eqref{summers} to obtain
\begin{equation}\label{incognito}
-\Delta U_\lambda +\zeta=-\lambda g_W(U_\lambda)\quad \text{in } \ D\,.
\end{equation}
On the other hand, let $v\in D(\Delta_1)$ be the unique solution to
$$
-\Delta v+ \partial\mathbb{I}_{[-1,\infty)}(v) \owns -\lambda g_W(U_\lambda)\quad\text{in }\ D\,.
$$
Due to \cite[Proposition 5]{BS73}, we have
$$
\|\Delta u^{n+1}-\Delta u^n\|_1\le 2\lambda \|g_W(u^n)-g_W(U_\lambda)\|_1\,.
$$
Thanks to \eqref{sting}-\eqref{summers}, we may pass to the limit as $n\rightarrow\infty$ in the previous inequality and conclude that
$\Delta U_\lambda = \Delta v\in L_1(D)$. This implies $U_\lambda =v\in D(\Delta_1)$ and we derive from \eqref{incognito} that $\zeta\in \partial\mathbb{I}_{[-1,\infty)}(U_\lambda)$. Consequently, $U_\lambda$ is a solution to \eqref{idefix}. Moreover, $U_\lambda$ is independent of the previously fixed subsolution $\sigma$ (since the sequence $(u^n)$ is) and hence $U_\lambda$ lies above any subsolution.
Finally, if $1/W\in L_{2p}(D)$ for some $p\in (1,\infty)$, then
$-\lambda g_W(U_\lambda)\in L_p(D)$ so that \cite[Theorem 1, Corollary 8]{BS73} readily imply that $U_\lambda\in W^2_p(D)$.
This completes the proof.
\end{proof}
We now draw several consequences from Proposition~\ref{P1} and begin with the monotonicity of $U_\lambda$ with respect to $\lambda$.
\begin{cor}\label{C1}
If $\lambda_1<\lambda_2$, then $U_{\lambda_1}\ge U_{\lambda_2}$. In particular, if $U_\lambda$ is a zipped state, then $U_{\lambda'}$ is also zipped for any $\lambda'>\lambda$.
\end{cor}
\begin{proof}
This follows from Proposition~\ref{P1} by observing that
$$
-\Delta U_{\lambda_2}+\zeta_{\lambda_2} =-\lambda_2 g_W(U_{\lambda_2})\le -\lambda_1 g_W(U_{\lambda_2})\,.
$$
\end{proof}
A monotonicity property with respect to $W$ is also available.
\begin{cor}\label{C1a}
For $j\in\{1,2\}$ let $W_j$ be a non-negative measurable function such that $1/W_j\in L_2(D)$ with $W_1\le W_2$ in $D$ and let $\lambda>0$. If $U_{\lambda,j}$ denotes the maximal solution to \eqref{idefix} corresponding to $W_j$, then $U_{\lambda,1}\le U_{\lambda,2}$.
\end{cor}
\begin{proof}
The assumptions imply that
$$
-\lambda g_{W_1}(U_{\lambda,1})\le -\lambda g_{W_2}(U_{\lambda,1}) \quad \text{ in }\ D
$$
and the assertion thus follows from Proposition~\ref{P1}.
\end{proof}
We next turn to the structure of the set of stationary solutions and introduce
$$
\Lambda_z:=\inf\{\lambda>0\,;\, U_\lambda\ \text{is zipped}\}\in [0,\infty]\,.
$$
\begin{cor}\label{C1b}
If $\lambda>\Lambda_z$, then any solution to \eqref{idefix} is zipped.
\end{cor}
\begin{proof}
This follows immediately from Corollary~\ref{C1} and the maximality of $U_\lambda$ when $\Lambda_z$ is finite.
\end{proof}
We now investigate in more detail the touchdown behavior of solutions. As we shall see in the next result, zipped states do exist for large values of $\lambda$.
\begin{prop}[\textbf{Zipped Solutions}]\label{P3}
The threshold value $\Lambda_z$ is finite.
\end{prop}
\begin{proof}
We argue along the lines of \cite{GPW05, Ka63, Pe02}. Let $\varphi_1\in H^2(D)\cap \mathring{H}^1(D)$ be the positive eigenfunction of $-\Delta_1$ associated with the positive first eigenvalue $\mu_1$ and satisfying $\|\varphi_1\|_1=1$. Consider $\lambda>0$ and let $u$ be a solution to \eqref{idefix}. Multiplying \eqref{idefix1} by $\varphi_1$ and integrating over $D$ entail
\begin{equation*}
\begin{split}
\mu_1\int_D\varphi_1 u\,\mathrm{d} x +\int_D \zeta \varphi_1\,\mathrm{d} x =-\lambda\int_D \varphi_1 g_W(u)\,\mathrm{d} x\,.
\end{split}
\end{equation*}
Owing to \eqref{b}, we have
\begin{equation*}
\begin{split}
\lambda\int_D \varphi_1 g_W(u)\,\mathrm{d} x \ge \lambda\int_D \varphi_1 g_W(0)\,\mathrm{d} x
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\int_D\varphi_1 u\,\mathrm{d} x \ge -\int_D \varphi_1 \,\mathrm{d} x =-1\,.
\end{split}
\end{equation*}
Therefore
$$
\int_D \zeta \varphi_1\,\mathrm{d} x \le - \lambda\int_D\varphi_1 g_W(0)\,\mathrm{d} x +\mu_1 < 0\,,
$$
as soon as
$$
\lambda > \lambda^*:=\frac{\mu_1}{\|\varphi_1 g_W(0)\|_1}\,.
$$
Since $\varphi_1>0$ in $D$, we have thus shown that $\zeta\not\equiv 0$, so that $u$ is a zipped state. In particular, $U_\lambda$ is zipped for $\lambda>\lambda^*$, hence $\Lambda_z\le \lambda^*$.
\end{proof}
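For orientation, and although this explicit evaluation is not needed in the sequel, note that in the one-dimensional setting of Section~\ref{Sec3}, that is, $D=(-1,1)$ and $W\equiv const$, one has $\mu_1=\pi^2/4$, $\varphi_1(x)=\frac{\pi}{4}\cos(\pi x/2)$, and $g_W(0)=1/(2(1+W)^2)$, so that the bound obtained above reads
\begin{equation*}
\Lambda_z\le \lambda^* = \frac{\pi^2 (1+W)^2}{2}\,,
\end{equation*}
which may be compared with the threshold $\Lambda_*(W)$ for the existence of zipped states derived in Section~\ref{Sec3}.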
Now we show that the maximal solution is unzipped for small values of $\lambda$.
\begin{prop}[{\bf Unzipped Solutions}]
\label{P2}
The threshold value $\Lambda_z$ is positive. Furthermore, if $1/W\in L_{2p}(D)$ with $p>d/2$, there is $\lambda_*:=\lambda_*(W)>0$ such that for $\lambda\in (0,\lambda_*)$ there is $\omega_\lambda>0$ such that \emph{any} solution $u$ to \eqref{idefix} satisfies
$$
u\ge -1 +\omega_\lambda \ \text{ in $D$}.
$$
In particular, all states are unzipped for $\lambda\in (0,\lambda_*)$.
\end{prop}
\begin{proof}
Firstly, it follows from \cite{Pe02,GhG06} that there is $\lambda_0>0$ such that the boundary value problem
$$
-\Delta V_\lambda=-\lambda g_0(V_\lambda)\quad\text{in $D$}\,,\qquad V_\lambda=0\quad\text{on $\partial D$}
$$
has a solution $V_\lambda\in H^2(D)\cap \mathring{H}^1(D)$ satisfying
$V_\lambda>-1$ in $D$ for all $\lambda\in (0,\lambda_0)$. Since $g_0(V_\lambda)\ge g_W(V_\lambda)$ in $D$, we deduce that
$V_\lambda$ is a subsolution to \eqref{idefix} so that Proposition~\ref{P1} implies that $U_\lambda\ge V_\lambda$ in $D$. Therefore, $\Lambda_z\ge \lambda_0$.
Secondly, if $1/W\in L_{2p}(D)$ with $p>d/2$, then there exists a unique solution $v_\lambda\in W_p^2(D)\hookrightarrow C(\bar D)$ to
$$
- \Delta v_\lambda = -\frac{\lambda}{2W^2} \quad\text{in $D$}\,,\qquad
v_\lambda =0\quad\text{on $\partial D$}
$$
satisfying
$$
v_\lambda = \lambda v_1 \ge -\lambda \|v_{1}\|_\infty>-1
$$
in $D$ provided that $\lambda\in (0,\lambda_*)$ for some $\lambda_*$ sufficiently small. It then remains to note that
$$
-g_W(u)\ge -\frac{1}{2W^2} \quad\text{a.e. in $D$}
$$
for any solution $u$ to \eqref{idefix} so that $u\ge v_\lambda$ a.e. in $D$ according to \cite[Proposition~5]{BS73}.
\end{proof}
Note that the maximal solution $U_\lambda$ is unzipped for $\lambda\in (0,\Lambda_z)$ while it is zipped for $\lambda\in (\Lambda_z,\infty)$ according to Proposition~\ref{P3} and Proposition~\ref{P2}. Theorem~\ref{statsol} is thus a consequence of the preceding observations.
A key issue is whether or not unzipped and zipped states may coexist for a given value of $\lambda$. Introducing
$$
\Lambda_u:=\inf\{\lambda>0\,;\, \text{there is a zipped state to \eqref{idefix}}\}\in [0,\Lambda_z]\,,
$$
it follows from Proposition~\ref{P2} that $\Lambda_u>0$ at least if $1/W\in L_{2p}(D)$ for $p>d/2$. Coexistence of zipped and unzipped states could only take place in the intermediate range $ [\Lambda_u, \Lambda_z]$ provided this interval is non-empty. This issue will be addressed in the next section when $d=1$ and $W\equiv const$. In the related model studied in \cite{GB01} this phenomenon seems indeed to occur according to the simulations performed therein.
\section{A Complete Characterization of Stationary Solutions when $d=1$ and $W=const$}\label{Sec3}
We now derive a complete characterization of the solutions to \eqref{idefix} in dimension $d=1$ when \mbox{$W\equiv const >0$}. Without loss of generality we let $D=(-1,1)$. We first state a simple characterization of zipped states.
\begin{lem}\label{L11}
If $u$ is a zipped state to \eqref{idefix} with $D=(-1,1)$, then there are $-1<b<a<1$ such that $\mathcal{C}(u)=[b,a]$, $u'(b)=u'(a)=0$, and $u(x)>-1$ for $x\in (-1,b)\cup (a,1)$.
\end{lem}
\begin{proof}
According to \cite[II.~Theorem~7.1]{KS}, any solution $u$ to \eqref{idefix} belongs to $C^1([-1,1])$. Let $u$ be a zipped state. Owing to $u(-1)=0$ there is a $b\in (-1,1)$ such that $u(x)>-1$ for $x\in [-1,b)$ and $u(b)=-1$. Moreover, $u'(b)=0$ since $u'$ is continuous and $b$ is a minimum point of $u$. Assume now for contradiction that there are $b_n\searrow b$ such that $u(b_n)>-1$. Then $u''(b_n)= \lambda g_W(u)(b_n)>0$ so that $u'(b_n)>0$. Hence
$$
u'(x)=u'(b_n)+\int_{b_n}^x \lambda g_W(u)(y)\,\mathrm{d} y \ge u'(b_n) >0\,,\quad x \in [b_n,1)\,,
$$
implying that $b$ is the only point at which $u$ takes the value $-1$, contradicting that $u$ is a zipped state. Consequently, $u(x)=-1$ for $x$ in a right-neighborhood of $b$. Setting
$$
a:=\sup\{x\in (b,1)\,;\, u(y)=-1\text{ for } y\in [b,x]\} <1\,,
$$
there exist $a_n\searrow a$ such that $u(a_n)>-1$. The same argument as above shows that $u'(x)>0$ for $x\in (a_n,1)$. This implies the statement.
\end{proof}
\begin{rem}
Lemma~\ref{L11} holds true for any continuous non-negative function $W$.
\end{rem}
Introducing
\begin{equation*}
\varphi(r):=\sqrt{r(1-r)}+(1-r)^{3/2}\log(1+\sqrt{r})-\frac{1}{2} (1-r)^{3/2}\log (1-r)\,,\qquad r\in (0,1)\,,
\end{equation*}
and
\begin{equation*}
\Lambda_*(r):= (1+r)^3 \varphi\left(\frac{1}{1+r}\right)^2\,,\quad r\ge 0\,,
\end{equation*}
we can give a complete characterization of the solutions to \eqref{idefix} when $d=1$ and $W=const>0$ in the form of a case-by-case analysis.
\begin{thm}\label{T44}
Let $d=1$ and $W=const>0$. There is a unique $r_0\in (0,1)$ such that $\varphi'(r_0)=0$, and the solutions to \varepsilonqref{idefix} are characterized as follows.
{\bf (I)} For $1/(1+W) \le r_0$, the following possibilities arise:
\begin{itemize}
\item[(i)] If $\lambda>\Lambda_*(W)$, then there is no unzipped state and a unique zipped state.
\item[(ii)] If $\lambda=\Lambda_*(W)$, then there is a unique unzipped state that touches down on $-1$ at exactly one point, but no zipped state.
\item[(iii)] If $\lambda<\Lambda_*(W)$, then there is a unique unzipped state, but no zipped state.
\end{itemize}
{\bf (II)} For $1/(1+W) > r_0$, the following possibilities arise:
\begin{itemize}
\item[(i)] If $\lambda>(1+W)^3\varphi(r_0)^2$, then there is no unzipped state, but a unique zipped state.
\item[(ii)] If $\lambda=(1+W)^3\varphi(r_0)^2$, then there are a unique unzipped and a unique zipped state.
\item[(iii)] If $\lambda\in \left(\Lambda_*(W),(1+W)^3\varphi(r_0)^2\right)$, then there are two unzipped states and a unique zipped state.
\item[(iv)] If $\lambda=\Lambda_*(W)$, then there are two unzipped states, one touching down exactly at one point, but no zipped state.
\item[(v)] If $\lambda<\Lambda_*(W)$, then there are two unzipped states, but no zipped state.
\end{itemize}
\end{thm}
The value of $r_0$ is approximately $0.388346$. Theorem~\ref{T44} shows that the structure of stationary solutions is very sensitive with respect to the value of $\lambda$. In case (I) (corresponding to large values of $W$), there is no coexistence of zipped and unzipped states. However, in case (II) (corresponding to small values of $W$) there is an interval for $\lambda$ for which there is coexistence. This is in accordance with the numerical findings of \cite{GB01} for a related problem (see Figure~3 therein). That the structure of stationary solutions is very sensitive with respect to the value of $\lambda$ has also been observed in related MEMS models without constraint but including a quasilinear diffusion given by the mean curvature, see \cite{BP12,PX15,CHW13}.
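As a purely numerical illustration of this case distinction (the script below is ours, and the value $W=1/2$ is an arbitrary sample falling into case (II)), the maximizer $r_0$ of $\varphi$ and the two relevant threshold values of $\lambda$ can be approximated as follows.
\begin{verbatim}
import numpy as np

def phi(r):
    return (np.sqrt(r * (1.0 - r))
            + (1.0 - r) ** 1.5 * np.log(1.0 + np.sqrt(r))
            - 0.5 * (1.0 - r) ** 1.5 * np.log(1.0 - r))

# approximate the maximizer r_0 of phi on (0,1) on a fine grid
r = np.linspace(1e-8, 1.0 - 1e-8, 1_000_001)
r0 = r[np.argmax(phi(r))]
print(r0)                                   # approximately 0.388346

W = 0.5                                     # sample value with 1/(1+W) > r_0, i.e. case (II)
Lambda_star = (1.0 + W) ** 3 * phi(1.0 / (1.0 + W)) ** 2  # zipped states exist iff lambda > Lambda_star
lam_upper = (1.0 + W) ** 3 * phi(r0) ** 2                 # no unzipped state for lambda above this value
print(Lambda_star, lam_upper)

lam = 2.0 * Lambda_star
a = 1.0 - np.sqrt(Lambda_star / lam)        # half-width of the coincidence set [-a,a], cf. (c44)
print(a)
\end{verbatim}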
To prove Theorem~\ref{T44} we first characterize the zipped states of \eqref{idefix}. To this end we investigate the shooting problem of finding $a\in (-1,1)$ such that there is a solution $u$ to
\begin{subequations}\label{idefixb}
\begin{align}
- u''&= -\lambda g_W(u) \,,\qquad x\in (a,1)\,, \label{idefix1b}\\
u(a)=-1&\,,\qquad u'(a)=0\,,\qquad u(1)=0\,,&& \label{idefix3b}
\end{align}
\end{subequations}
where $g_W$ is given by \eqref{g}. The next lemma discusses its solvability completely.
\begin{lem}\label{31}
The shooting problem~\eqref{idefixb} with $a\in (-1,1)$ has a solution if and only if the constraint
\begin{equation}\label{c4}
0<\frac{\Lambda_*(W)}{4}<\lambda
\end{equation}
is satisfied. In that case, $a$ is uniquely given by
\begin{equation}\label{c44}
a=1-\sqrt{\frac{\Lambda_*(W)}{\lambda}}\,.
\end{equation}
\end{lem}
\begin{proof}
Consider $a\in (-1,1)$ such that \eqref{idefixb} has a solution $u$ on $[a,1]$. Multiplying \eqref{idefix1b} by $2 u'$ yields
$$
\frac{\mathrm{d}}{\mathrm{d} x} (u')^2=\frac{\lambda u'}{(1+u+W)^2}=-\lambda \frac{\mathrm{d}}{\mathrm{d} x} \left(\frac{1}{1+u+W}\right) \,.
$$
Integrating this equality and using $u'\ge 0$ due to the convexity of $u$ we derive
$$
\left(\frac{1+u+W}{1+u}\right)^{1/2} u'=\left(\frac{\lambda}{W}\right)^{1/2}\,.
$$
Integrating then this relation from $a$ to $1$ implies the constraint
\begin{equation}\label{c}
\left(\frac{W}{\lambda}\right)^{1/2}\int_{-1}^0 \left(\frac{1+z+W}{1+z}\right)^{1/2}\,\mathrm{d} z=1-a \in (0,2)\,.
\end{equation}
Observe that the substitution $y=\sqrt{1+z}$ gives
\begin{equation*}
\begin{split}
\int_{-1}^0 \left(\frac{1+z+W}{1+z}\right)^{1/2}\,\mathrm{d} z&=2\int_0^1 \left(y^2+W\right)^{1/2}\,\mathrm{d} y\\
&= \left( y\sqrt{y^2+W}+W\log\left(y+\sqrt{y^2+W}\right)\right)\Big\vert_{y=0}^{y=1}
\\
&=\sqrt{1+W}+W\log\left(1+\sqrt{1+W}\right)-\frac{1}{2} W\log W\,.
\end{split}
\end{equation*}
Combining this identity with \eqref{c}, we deduce that the shooting problem \eqref{idefixb} has a solution provided that
\begin{equation*}
\lambda^{1/2} (1-a)= \sqrt{W(1+W)}+W^{3/2}\log\left(1+\sqrt{1+W}\right)-\frac{1}{2} W^{3/2}\log W =\sqrt{\Lambda_*(W)}\,.
\end{equation*}
This shows that \eqref{c4} is a necessary condition for the solvability of \eqref{idefixb} and that $a$ is given by \eqref{c44}. If \eqref{c4} is satisfied, then we may define $a\in (-1,1)$ by \eqref{c44} and thereby obtain a solution to \eqref{idefixb}.
\end{proof}
\begin{cor}\label{C100}
Any solution to \eqref{idefix} is even on $(-1,1)$. Moreover, \eqref{idefix} admits a zipped state if and only if the constraint
\begin{equation}\label{c4x}
0<\Lambda_*(W)<\lambda
\end{equation}
is satisfied. In that case, the zipped state is unique and its coincidence set is $[-a,a]$ with $a$ given by \eqref{c44}.
\end{cor}
\begin{proof}
If $u$ is any unzipped solution to \eqref{idefix}, then $u$ is even: Indeed, if $x_0\in (-1,1)$ is the (unique) point of minimum of the strictly convex function $u$, then $x\mapsto u(x_0- x)$ and $x\mapsto u(x_0+x)$ coincide as they solve the same ordinary differential equation with identical initial values $(u(x_0),0)$ at $x=0$. Since $u(-1)=u(1)=0$, this readily implies $x_0=0$ and that $u$ is even. If $u$ is any zipped solution to \eqref{idefix}, then $u=-1$ on the coincidence set $[b,a]$ according to Lemma~\ref{L11}. Hence, $u$ solves \eqref{idefixb} on $[a,1]$ while $x\mapsto u(-x)$ solves \eqref{idefixb} on $[-b,1]$. By Lemma~\ref{31} the constraint \eqref{c4} is satisfied, and it follows from \eqref{c44} that $a=-b$. Therefore, since \eqref{idefixb} has a unique solution, this implies that $u(x)=u(-x)$ for $x\in [a,1]$. As $u=-1$ on $[-a,a]$, this shows that $u$ is even. Finally, since $a\in (0,1)$ is uniquely determined by \eqref{c44}, the assertion follows from Lemma~\ref{31}.
\end{proof}
Note that by Corollary~\ref{C100}, any unzipped state of \eqref{idefix} reaches its minimum value at $x=0$. Therefore, to characterize all unzipped states of \eqref{idefix} it suffices to investigate the following shooting problem of finding $m\in (0,1]$ such that there is a solution to
\begin{subequations}\label{obelix}
\begin{align}
- u''&= -\lambda g_W(u) \,,\qquad x\in (0,1)\,, \label{obelix1}\\
u(0)=-m&\,,\qquad u'(0)=0\,,\qquad u(1)=0\,. && \label{obelix2}
\end{align}
\end{subequations}
As for its solvability we have:
\begin{lem}\label{miraculix}
The shooting problem \eqref{obelix} with $m\in (0,1]$ has a solution if and only if the constraint
\begin{equation}\label{c5a}
\frac{\sqrt{\lambda}}{(1+W)^{3/2}}\in \varphi\left(\left(0,\frac{1}{1+W}\right]\right)
\end{equation}
is met. In that case, $m$ satisfies
\begin{equation}\label{c5}
\lambda= (1+W)^3\varphi\left(\frac{m}{1+W}\right)^2\,.
\end{equation}
\end{lem}
\begin{proof}
Let $m\in (0,1)$ be such that \eqref{obelix} has a solution $u$. Proceeding as in the proof of Lemma~\ref{31}, the relation corresponding to \eqref{c} reads
\begin{equation*}
\int_{-m}^0 \left(\frac{1+z+W}{m+z}\right)^{1/2}\,\mathrm{d} z=\left(\frac{\lambda}{1-m+W}\right)^{1/2}\,.
\end{equation*}
Computing then the left-hand side with the substitution $y=\sqrt{m+z}$, we obtain
\begin{equation*}
\begin{split}
\sqrt{\lambda}= & \left[m(1+W)(1+W-m)\right]^{1/2}+(1+W-m)^{3/2}\log\left(\sqrt{m}+\sqrt{1+W}\right)\\
&-\frac{1}{2} (1+W-m)^{3/2}\log(1+W-m)\,,
\end{split}
\end{equation*}
which is equivalent to \eqref{c5}. Conversely, if \eqref{c5a} is met, then we may choose $m\in (0,1)$ such that \eqref{c5} holds and the assertion follows.
\end{proof}
To analyze \eqref{c4x} and \eqref{c5a}, we derive more information on the function $\varphi$.
\begin{lem}\label{L99}
The function $\varphi$ is positive on $(0,1)$ with $\varphi(0)=0=\varphi(1)$, and there is a unique $r_0\in (0,1)$ such that $\varphi'(r_0)=0$.
\end{lem}
\begin{proof}
For $r\in (0,1)$ the derivative of $\varphi$ is of the form
$$
\varphi'(r)=(1-r)^{1/2}\,\psi(r)\,,
$$
where
$$
\psi(r):=\frac{1-2r}{2\sqrt{r}(1-r)}-\frac{3}{2}\log(1+\sqrt{r})+\frac{1-r}{2(\sqrt{r}+r)}+\frac{3}{4}\log(1-r)+\frac{1}{2}\,.
$$
For the derivative of $\psi$ we obtain
$$
\psi'(r)=\frac{-2r^2-1+r}{4r^{3/2} (1-r)^2}-\frac{5}{4(r+\sqrt{r})}-\frac{(1-r)\left(1+2\sqrt{r}\right)}{4\sqrt{r}(\sqrt{r}+r)^2}-\frac{3}{4 (1-r)}\,.
$$
Noticing that $-2r^2-1+r<0$ we conclude that $\psi'(r)<0$ for each $r\in (0,1)$. Thus, since $\psi(0)=+\infty$ and $\psi(1)=-\infty$, $\psi$ has a unique zero in $(0,1)$. This implies the claim.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T44}]
We discuss the different cases listed in the statement and use, to this end, the properties of $\varphi$ derived in Lemma~\ref{L99}.
\noindent {\bf (I)} Suppose that $1/(1+W) \le r_0$.
(i) If $\lambda>\Lambda_*(W)$, then it readily follows from Corollary~\ref{C100} and Lemma~\ref{miraculix} that
there is a unique zipped state but no unzipped state.
(ii) If $\lambda=\Lambda_*(W)$, then Lemma~\ref{miraculix} implies that there is exactly one unzipped state, touching down on~$-1$ exactly at $x=0$, while there is no zipped state according to Corollary~\ref{C100}.
(iii) If $\lambda<\Lambda_*(W)$, then there is a unique $m\in (0,1)$ such that \eqref{c5} holds. Hence, there is a unique unzipped state due to Lemma~\ref{miraculix} but no zipped state due to Corollary~\ref{C100}.
\noindent {\bf (II)} Now suppose that $1/(1+W) > r_0$.
(i) If $\lambda>(1+W)^3\varphi(r_0)^2$, then in particular $\lambda>\Lambda_*(W)$ so that there is a unique zipped state by Corollary~\ref{C100} but no unzipped state due to Lemma~\ref{miraculix}.
(ii) If $\lambda=(1+W)^3\varphi(r_0)^2$, then in particular $\lambda>\Lambda_*(W)$ so that there is a unique zipped state due to Corollary~\ref{C100}. Moreover, there is exactly one $m\in (0,1)$, given by $m=r_0(1+W)$ such that \eqref{c5} holds true and so there is a unique unzipped state according to Lemma~\ref{miraculix}.
(iii) If $\lambda\in \left(\Lambda_*(W),(1+W)^3\varphi(r_0)^2\right)$, then there are $0<m_1< m_2<1$ such that \eqref{c5} is satisfied by $m_1$ and $m_2$. Hence, there are two unzipped states due to Lemma~\ref{miraculix} and a unique zipped state by Corollary~\ref{C100}.
(iv) If $\lambda=\Lambda_*(W)$, then we also find $m\in (0,1)$ such that \eqref{c5} holds true. Hence, by Lemma~\ref{miraculix}
there are two unzipped states, one touching down on $-1$ at exactly $x=0$. Due to Corollary~\ref{C100} there is no zipped state.
(v) If $\lambda<\Lambda_*(W)$, then there are $0<m_1< m_2<1$ such that \eqref{c5} is satisfied by $m_1$ and $m_2$. Hence, there are two unzipped states due to Lemma~\ref{miraculix}, but there is no zipped state according to Corollary~\ref{C100}.
Since all cases as listed in the statement are covered, Theorem~\ref{T44} follows.
\end{proof}
\section{The evolution problem: existence and uniqueness}\label{Sec5}
We now turn to the evolution equation \eqref{EvEq001x} which can be equivalently written in the form
\begin{subequations}\label{EvEq001}
\begin{align}
\partial_t u - \Delta u + \zeta & = - \lambda g_W(u) \;\text{ in }\; (0,\infty)\times D\ , \label{EvEq001a} \\
\zeta & \in \partial\mathbb{I}_{[-1,\infty)}(u) \;\text{ in }\; (0,\infty)\times D\ , \label{EvEq001b} \\
u & = 0 \;\text{ on }\; (0,\infty)\times \partial D\ ,\label{EvEq001c} \\
u(0) & = u_0 \;\text{ in }\; D\ , \label{EvEq001d}
\end{align}
\end{subequations}
where $\lambda>0$ is fixed. Recall that \eqref{W1} is assumed throughout and that the energy $\mathcal{E}_W$ and the set $\mathcal{A}$ are defined in \eqref{E} and \eqref{EvEq002}, respectively.
The first step towards the proof of Theorem~\ref{EvThm002} is the solvability of the time implicit Euler scheme associated with \eqref{EvEq001} in $\mathcal{A}\cap H^2(D)$ when $1/W \in L_4(D)$.
\begin{lem}\label{EvLem012}
Assume that $1/W\in L_4(D)$. Given $h\in (0,1)$ and $f\in \mathcal{A}$, there exists $(u,\zeta)\in \mathcal{A}\times L_2(D)$ with $u\in H^2(D)$ solving
\begin{subequations}\label{EvEq013}
\begin{align}
\frac{u-f}{h} - \Delta u + \zeta & = - \lambda g_W(u) \;\text{ in }\; D\ , \label{EvEq013a} \\
\zeta & \in \partial\mathbb{I}_{[-1,\infty)}(u) \;\text{ in }\; D\ , \label{EvEq013b} \\
u & = 0 \;\text{ on }\; \partial D\ , \label{EvEq013c}
\end{align}
and satisfying
\begin{equation}
\frac{1}{2h} \|u-f\|_2^2 + \mathcal{E}_W(u) \le \mathcal{E}_W(f)\ . \label{EvEq014}
\end{equation}
\end{subequations}
In addition, if $f\le M$ in $D$ for some number $M\ge 0$, then $u\le M$ in $D$.
\end{lem}
\begin{proof}
The proof relies on the direct method of calculus of variations. For $v\in\mathcal{A}$, we define
$$
\mathcal{F}(v) := \frac{1}{2h} \|v-f\|_2^2 + \mathcal{E}_W(v)\ .
$$
Since
\begin{equation}
\mathcal{E}_W(v) \ge \frac{\|\nabla v\|_2^2}{2} - \frac{\lambda}{2} \left\| \frac{1}{W} \right\|_1\ , \qquad v\in \mathcal{A}\ , \label{EvEq015}
\end{equation}
the functional $\mathcal{F}$ is bounded from below on $\mathcal{A}$. Therefore there is a minimizing sequence $(v_j)_{j\ge 1}$ in $\mathcal{A}$ satisfying
\begin{equation}
\mu := \inf_{v\in\mathcal{A}}\{\mathcal{F}(v)\} \le \mathcal{F}(v_j) \le \mu + \frac{1}{j}\ , \qquad j\ge 1\ . \label{EvEq016}
\end{equation}
Owing to \eqref{EvEq015} and \eqref{EvEq016},
$$
\|\nabla v_j\|_2^2 \le 2 \mathcal{F}(v_j) + \left\| \frac{\lambda}{W} \right\|_1 \le 2(1+\mu) + \left\| \frac{\lambda}{W} \right\|_1\ , \qquad j\ge 1\, ,
$$
and we infer from the compactness of the embedding of $\mathring{H}^1(D)$ in $L_2(D)$ that there are $u\in \mathring{H}^1(D)$ and a subsequence of $(v_j)_{j\ge 1}$ (not relabeled) such that
\begin{align}
v_j & \rightharpoonup u \;\text{ in }\; \mathring{H}^1(D)\ , \label{EvEq017} \\
v_j & \longrightarrow u \;\text{ in }\; L_2(D) \;\text{ and a.e. in }\; D\ . \label{EvEq018}
\end{align}
It readily follows from \eqref{EvEq017} and \eqref{EvEq018} that $u\in \mathcal{A}$ while \eqref{EvEq018}, the integrability properties of $1/W$, and Lebesgue's dominated convergence theorem entail that
$$
\lim_{j\to\infty} \int_D \frac{\mathrm{d}x}{1+v_j+W} = \int_D \frac{\mathrm{d}x}{1+u+W}\ .
$$
Since the convex part of $\mathcal{E}_W$ is weakly lower semicontinuous in $L_2(D)$, classical arguments imply that $u$ is a minimizer of $\mathcal{F}$ in $\mathcal{A}$.
To derive the corresponding Euler-Lagrange equation for $u$, we pick $v\in\mathcal{A}$, $\tau\in (0,1)$, and observe that $\tau u + (1-\tau) v$ belongs to $\mathcal{A}$. The minimizing property of $u$ reads
$$
\mathcal{F}(u) \le \mathcal{F}(\tau u +(1-\tau) v)\ ,
$$
from which we deduce that
\begin{align*}
0 & \le \frac{1}{2h} \int_D (v-u) [(1+\tau) u + (1-\tau) v -2f]\ \mathrm{d}x \\
& \quad + \frac{1}{2} \int_D \nabla(v-u)\cdot \nabla[(1+\tau) u + (1-\tau) v]\ \mathrm{d}x \\
& \quad + \frac{\lambda}{2} \int_D \frac{v-u}{(1+u+W)[1+\tau u + (1-\tau) v + W]}\ \mathrm{d}x\ .
\end{align*}
Since
$$
\left| \frac{v-u}{(1+u+W)[1+\tau u + (1-\tau) v + W]} \right| \le \frac{|v-u|}{W^2} \in L_1(D)\ ,
$$
we may pass to the limit as $\tau\to 1$ in the previous inequality and conclude that
\begin{align*}
0 & \le \frac{1}{h} \int_D (v-u)(u-f)\ \mathrm{d}x + \int_D \nabla(v-u)\cdot \nabla u\ \mathrm{d}x + \lambda\int_D (v-u) g_W(u)\ \mathrm{d}x
\end{align*}
for all $v\in\mathcal{A}$. By \cite[Proposition~2.8]{Ba10} this implies that $u\in H^2(D)$ and
$$
- \left[ \frac{u-f}{h} + \lambda g_W(u) \right] + \Delta u \in \partial\mathbb{I}_{[-1,\infty)}(u) \;\text{ in }\; D\ .
$$
In other words, there is $\zeta\in L_2(D)$ such that $(u,\zeta)$ solves \varepsilonqref{EvEq013a}, \varepsilonqref{EvEq013b}, and \varepsilonqref{EvEq013c}.
Next, using once more the minimizing property of $u$ entails that $\mathcal{F}(u)\le \mathcal{F}(f)$, hence \eqref{EvEq014}.
Finally, if $f\le M$ in $D$, then
$$
u - h \Delta u + h \zeta = f- \lambda h g_W(u) \le M \;\text{ in }\ D\ ,
$$
and the upper bound $u\le M$ readily follows from \cite[Proposition~4]{BS73} (applied with the convex function $\Phi(r) = (r-M)_+$).
\end{proof}
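To make the preceding construction concrete, the following sketch (ours; a one-dimensional finite-difference discretization with illustrative parameters, not a statement about the scheme used in this paper) approximates a single implicit Euler step, that is, a minimizer of $\mathcal{F}$ on $\mathcal{A}$, by projected gradient descent; the projection onto the discrete analogue of $\mathcal{A}$ is the pointwise truncation at $-1$.
\begin{verbatim}
import numpy as np

def euler_step(f, h, lam, W, dx, n_iter=20000):
    # approximately minimize
    #   F(v) = ||v-f||^2/(2h) + ||v'||^2/2 - (lam/2) * int 1/(1+v+W)
    # over v >= -1 with v = 0 on the boundary (projected gradient descent)
    v = f.copy()
    tau = 0.2 / (1.0 / h + 4.0 / dx**2)      # crude step-size bound
    for _ in range(n_iter):
        lap = np.zeros_like(v)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        grad = (v - f) / h - lap + lam / (2.0 * (1.0 + v + W) ** 2)
        v = np.maximum(v - tau * grad, -1.0) # projection onto the constraint
        v[0] = v[-1] = 0.0                   # Dirichlet boundary condition
    return v

# example: one step of size h starting from rest on D = (-1,1), W constant
N = 201
x = np.linspace(-1.0, 1.0, N)
u1 = euler_step(np.zeros(N), h=0.1, lam=4.0, W=1.0, dx=x[1] - x[0])
print(u1.min())
\end{verbatim}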
We next derive a monotonicity property of the iterative scheme, leading eventually to the time monotonicity of the solution to the evolution equation~\eqref{EvEq001x}.
\begin{lem}\label{saturnino}
Assume that $1/W\in L_\infty(D)$ and let $f$ be a supersolution to \eqref{idefix} in the sense of Definition~\ref{DD}. Further let $u$ be the solution to \eqref{EvEq013} constructed in Lemma~\ref{EvLem012} corresponding to $1/h\ge \lambda\|1/W\|_\infty$. Then $u\le f$ in $D$, and $u$ is a supersolution to \eqref{idefix}.
\end{lem}
\begin{proof}
Let $\zeta_f:=F_f+\Delta f$ with $F_f\ge -\lambda g_W(f)$ in $D$ according to Definition~\ref{DD}~(b). Then, by \eqref{EvEq013a}
$$
\frac{u-f}{h}-\Delta (u-f) +\zeta-\zeta_f=-\lambda g_W(u)-F_f\le \lambda \big(g_W(f)-g_W(u)\big)\quad \text{in }\ D\,.
$$
Since
$$
-\int_D (u-f) \Delta (u-f) \,\mathrm{d} x\ge 0\qquad\text{and}\qquad \int_D(\zeta-\zeta_f) (u-f)\,\mathrm{d} x\ge 0
$$
due to \cite[Lemma~2]{BS73} and the monotonicity of $\partial\mathbb{I}_{[-1,\infty)}$, it follows from the above inequality that
$$
\frac{1}{h}\| (u-f)_+\|_2^2\le \frac{\lambda}{2}\int_D (u-f)_+^2 \frac{1+u+W+1+f+W}{(1+u+W)^2 (1+f+W)^2}\, \mathrm{d} x \le \lambda \left\|\frac{1}{W}\right\|_\infty^3 \|(u-f)_+\|_2^2\,.
$$
Owing to the assumption on $h$, this readily implies that $(u-f)_+$ vanishes identically, that is, $u\le f$ in $D$. Finally, using \eqref{EvEq013} again, we realize that
$$
-\Delta u+\zeta=-\lambda g_W(u)+\frac{f-u}{h}\ge -\lambda g_W(u)\quad \text{in }\ D\,,
$$
so that $u$ is a supersolution to \eqref{idefix}.
\end{proof}
\begin{rem}\label{waters}
The statement of Lemma~\ref{saturnino} remains true if one only assumes that $1/W\in L_r(D)$ for some $r>3d/2$ provided $h$ is sufficiently small with respect to the norm of $1/W$ in $L_r(D)$. The proof is slightly more involved as it uses the Gagliardo-Nirenberg inequality, see the proof of Theorem~\ref{EvThm005} below for a similar argument.
\end{rem}
We next establish a version of Lemma~\ref{EvLem012} under the only assumption that $1/W\in L_2(D)$. In that case, the right-hand side of \eqref{EvEq013a} only belongs to $L_1(D)$ and the natural functional setting to work with is $L_1(D)$, as in \cite{BS73}.
\begin{lem}\label{EvLem019}
Given $h\in (0,1)$ and $f\in \mathcal{A}\cap L_\infty(D)$, there exists $(u,\zeta)\in \mathcal{A}\times L_1(D)$ such that $u\in D(\Delta_1)$ and $(u,\zeta)$ satisfies \eqref{EvEq013}. Furthermore, there are a superlinear, non-negative, even, and convex function $\Phi\in C^2([0,\infty))$ and a positive constant $C_0>0$ depending only on $W$ such that
\begin{equation}
\int_D \Phi(\zeta)\ \mathrm{d}x \le \frac{\Phi''(0)}{h} \frac{\|u-f\|_2^2}{h} + C_0\ . \label{EvEq020}
\end{equation}
In addition, $u\le \|f_+\|_\infty$ in $D$.
\end{lem}
\begin{proof}
For $j\ge 1$ define $W_j := W + 1/j$. Then $1/W_j$ belongs to $L_\infty(D)$ and we infer from Lemma~\ref{EvLem012} that there is $(u_j,\zeta_j)\in \mathcal{A}\times L_2(D)$ such that $u_j\in H^2(D)$ and $(u_j,\zeta_j)$ satisfies \eqref{EvEq013} with $W_j$ instead of $W$. Since $W_j\ge W$, we deduce in particular from \eqref{EvEq014} and \eqref{EvEq015} that
\begin{equation}
\|\nabla u_j\|_2^2 \le \left\| \frac{\lambda}{W_j} \right\|_1 + 2 \mathcal{E}_{W_j}(u_j) \le \left\| \frac{\lambda}{W} \right\|_1 + 2 \mathcal{E}_{W_j}(f) \le \left\| \frac{\lambda}{W} \right\|_1 + \|\nabla f\|_2^2\ . \label{EvEq021}
\end{equation}
Also, since $f \le \|f_+\|_\infty$ in $D$, a further consequence of Lemma~\ref{EvLem012} is that
\begin{equation}
- 1 \le u_j \le \|f_+\|_\infty \;\text{ in }\; D\ . \label{EvEq027}
\end{equation}
Next, since $1/W^2\in L_1(D)$, a refined version of the de la Vall\'ee-Poussin theorem \cite{Le77} (see also \cite[Theorem~8]{La15}) guarantees that there exists a convex even function $\Phi\in C^2(\mathbb{R})$ such that $\Phi(0)=\Phi'(0)=0$, $\Phi'$ is a concave and positive function on $(0,\infty)$, and
\begin{equation}
\lim_{r\to\infty} \frac{\Phi(r)}{r} = \infty \;\;\text{ and }\;\; I_W := \int_D \Phi\left( \frac{\lambda}{W^2} \right)\ \mathrm{d}x < \infty\ . \label{EvEq022}
\end{equation}
Since $1+u_j+W_j \ge W_j \ge W$ and $\Phi$ is increasing on $[0,\infty)$, we realize that
\begin{align}
\int_D \Phi\left( - \frac{\lambda}{(1+u_j+W_j)^2} \right)\ \mathrm{d}x & = \int_D \Phi\left( \frac{\lambda}{(1+u_j+W_j)^2} \right)\ \mathrm{d}x \nonumber \\
& \le \int_D \Phi\left( \frac{\lambda}{W^2} \right)\ \mathrm{d}x = I_W\ . \label{EvEq023}
\end{align}
Owing to \cite[Proposition~4]{BS73} and the convexity and symmetry of $\Phi$, it follows from \eqref{EvEq013a}--\eqref{EvEq013c} and \eqref{EvEq023} that
\begin{align*}
\int_D \Phi(\zeta_j)\ \mathrm{d}x & \le \int_D \Phi\left( - \frac{u_j-f}{h} - \frac{\lambda}{2(1+u_j+W_j)^2} \right)\ \mathrm{d}x \\
& = \int_D \Phi\left( \frac{u_j-f}{h} + \frac{\lambda}{2(1+u_j+W_j)^2} \right)\ \mathrm{d}x \\
& \le \frac{1}{2} \int_D \Phi\left( 2 \frac{u_j-f}{h} \right)\ \mathrm{d}x + \frac{1}{2} \int_D \Phi\left( \frac{\lambda}{(1+u_j+W_j)^2} \right)\ \mathrm{d}x \\
& \le \frac{1}{2} \int_D \Phi\left( 2 \frac{|u_j-f|}{h} \right)\ \mathrm{d}x + \frac{I_W}{2}\ .
\end{align*}
Since the concavity of $\Phi'$ implies that $\Phi(r) \le \Phi''(0) r^2/2$ for $r\ge 0$, we end up with
\begin{equation}
\int_D \Phi(\zeta_j)\ \mathrm{d}x \le \frac{\Phi''(0)}{h} \frac{\|u_j-f\|_2^2}{h} + \frac{I_W}{2}\ . \label{EvEq024}
\end{equation}
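For completeness, the inequality $\Phi(r)\le \Phi''(0) r^2/2$ invoked above follows from the concavity of $\Phi'$ and $\Phi'(0)=0$: indeed,
$$
\Phi'(r) \le \Phi'(0) + \Phi''(0)\, r = \Phi''(0)\, r\ , \qquad r\ge 0\ ,
$$
and it remains to integrate this inequality over $(0,r)$.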
Combining \eqref{EvEq014} and \eqref{EvEq024} gives
\begin{align*}
\int_D \Phi(\zeta_j)\ \mathrm{d}x & \le \frac{2\Phi''(0)}{h} \left[ \mathcal{E}_{W_j}(f) - \mathcal{E}_{W_j}(u_j) \right] + \frac{I_W}{2} \\
& \le \frac{2\Phi''(0)}{h} \left[ \frac{\|\nabla f\|_2^2}{2} + \frac{\lambda}{2} \int_D \frac{\mathrm{d}x}{1+u_j+W_j} \right] + \frac{I_W}{2}\ ,
\end{align*}
hence
\begin{equation}
\int_D \Phi(\zeta_j)\ \mathrm{d}x \le \frac{\Phi''(0)}{h} \left[ \|\nabla f\|_2^2 + \left\| \frac{\lambda}{W} \right\|_1 \right] + \frac{I_W}{2}\ . \label{EvEq025}
\end{equation}
According to \eqref{EvEq022}, the function $\Phi$ is superlinear at infinity and we infer from \eqref{EvEq025} and Dunford-Pettis' theorem that $(\zeta_j)_{j\ge 1}$ is weakly compact in $L_1(D)$. Combining this property with \eqref{EvEq021} and the compactness of the embedding of $\mathring{H}^1(D)$ in $L_2(D)$ gives a subsequence of $(u_j,\zeta_j)_{j\ge 1}$ (not relabeled) and $(u,\zeta)\in \mathring{H}^1(D)\times L_1(D)$ such that
\begin{subequations}\label{EvEq026}
\begin{align}
u_j & \rightharpoonup u \;\text{ in }\; \mathring{H}^1(D)\ , \label{EvEq026a} \\
u_j & \longrightarrow u \;\text{ in }\; L_2(D) \;\text{ and a.e. in }\; D\ , \label{EvEq026b} \\
\zeta_j & \rightharpoonup \zeta \;\text{ in }\; L_1(D)\ . \label{EvEq026c}
\end{align}
\end{subequations}
It follows in particular from \eqref{EvEq026b}, \eqref{EvEq027}, the square integrability of $1/W$, and Lebesgue's dominated convergence theorem that
\begin{equation*}
- 1 \le u \le \|f_+\|_\infty \;\text{ in }\; D\ ,
\end{equation*}
and
\begin{equation}
\lim_{j\to\infty} \left\| \frac{1}{(1+u_j+W_j)^m} - \frac{1}{(1+u+W)^m} \right\|_1 = 0\ , \qquad m\in\{1,2\}\ . \label{EvEq029}
\end{equation}
Consequently, $u\in\mathcal{A} \cap L_\infty(D)$ and we may pass to the limit as $j\to\infty$ in \eqref{EvEq013a} for $u_j$ to deduce that
\begin{equation}
\frac{u-f}{h} - \Delta u + \zeta = - \lambda g_W(u) \;\;\text{ in }\;\; \mathcal{D'}(D)\ . \label{EvEq030}
\end{equation}
Next, \eqref{EvEq024}, \eqref{EvEq026c}, \eqref{EvEq029} (with $m=2$), and \eqref{EvEq030} imply that $\Delta u\in L_1(D)$, so that $(u,\zeta)$ solves \eqref{EvEq013a} in $L_1(D)$. In the same vein, we may use \eqref{EvEq026a}, \eqref{EvEq026b}, and \eqref{EvEq029} (with $m=1$) to pass to the limit as $j\to\infty$ in \eqref{EvEq014} for $u_j$ and deduce that $u$ satisfies \eqref{EvEq014}. Finally, owing to the weak convergence \eqref{EvEq026c} of $(\zeta_j)_{j\ge 1}$ in $L_1(D)$ and \eqref{EvEq026b}, a weak lower semicontinuity argument applied to \eqref{EvEq024}, based on the convexity of $\Phi$, leads to \eqref{EvEq020} with $C_0:=I_W/2$.
We are left with identifying the relation between $u$ and $\zeta$. To this end, we consider $v\in L_\infty(D)$ with $v\ge -1$ in $D$ and first observe that the weak convergence \eqref{EvEq026c} of $(\zeta_j)_{j\ge 1}$ in $L_1(D)$ and the boundedness \eqref{EvEq027} of $(v-u_j)_{j\ge 1}$ as well as its a.e. convergence \eqref{EvEq026b} allow us to apply \cite[Proposition~2.61]{FL07} and conclude that
$$
\lim_{j\to\infty} \int_D \zeta_j (v-u_j)\ \mathrm{d}x = \int_D \zeta (v-u)\ \mathrm{d}x\ .
$$
The left-hand side of the previous identity being non-positive due to \eqref{EvEq013b} for $u_j$, we realize that
$$
\int_D \zeta (v-u)\ \mathrm{d}x \le 0
$$
for all $v\in L_\infty(D)$ with $v\ge -1$ in $D$.
In particular, taking $v:=(u-1)/2\in L_\infty(D)$ which satisfies $-1<v<u$ in the set $\{x\in D\,;\, u(x)>-1\}$, we derive that $\zeta=0$ in this set. Since anyway $\zeta\le 0$ in $D$, we conclude that $\zeta\in\partial\mathbb{I}_{[-1,\infty)}(u)$ in $D$, and the proof of Lemma~\ref{EvLem019} is complete.
\end{proof}
We also improve Lemma~\ref{saturnino} to the framework of Lemma~\ref{EvLem019}.
\begin{lem}\label{saturnino2}
Assume that $1/W\in L_r(D)$ for some $r>3d/2$ and let $f$ be a supersolution to \eqref{idefix} in the sense of Definition~\ref{DD}. Further let $u$ be the solution to \eqref{EvEq013} constructed in Lemma~\ref{EvLem019}, with $h$ sufficiently small with respect to the norm of $1/W$ in $L_r(D)$, as in Remark~\ref{waters}. Then $u\le f$ in $D$, and $u$ is a supersolution to \eqref{idefix}.
\end{lem}
\begin{proof}
We keep the notation of the proof of Lemma~\ref{EvLem019}. Since $W_j\ge W$ in $D$ for all $j\ge 1$, it readily follows from Lemma~\ref{saturnino} and Remark~\ref{waters} that $u_j\le f$ in $D$ and
$$
F_{u_j}:=-\Delta u_j+\zeta_j\ge -\lambda g_W(u_j)\quad \text{in }\ D\,.
$$
Owing to the convergences stated in \eqref{EvEq026} and \eqref{EvEq029} we may let $j\rightarrow \infty$ in the previous two inequalities to conclude that $u\le f$ in $D$, and that $u$ is a supersolution to \eqref{idefix}.
\end{proof}
We are now in a position to prove Theorem~\ref{EvThm002}.
\begin{proof}[Proof of Theorem~\ref{EvThm002}]
Set $M:= \|(u_0)_+\|_\infty\ge 0$ and consider $h\in (0,1)$. Defining $(u_0^h, \zeta_0^h):=(u_0,0)$ we use Lemma~\ref{EvLem019} to construct by induction a sequence $(u_n^h,\zeta_n^h)_{n\ge 0}$ in $\mathcal{A}\times L_1(D)$ such that, for all $n\ge 0$, $u_{n+1}^h\in D(\Delta_1)$ and $(u_{n+1}^h,\zeta_{n+1}^h)$ solves
\begin{subequations}\label{EvEq031}
\begin{align}
\frac{u_{n+1}^h-u_n^h}{h} - \Delta u_{n+1}^h + \zeta_{n+1}^h & = - \lambda g_W(u_{n+1}^h) \;\text{ in }\; D\ , \label{EvEq031a} \\
\zeta_{n+1}^h & \in \partial\mathbb{I}_{[-1,\infty)}(u_{n+1}^h) \;\text{ in }\; D\ , \label{EvEq031b} \\
u_{n+1}^h & = 0 \;\text{ on }\; \partial D\ . \label{EvEq031c}
\end{align}
\end{subequations}
In addition, for all $n\ge 0$,
\begin{align}
& - 1 \le u_{n+1}^h \le M\ , \qquad x\in D\ , \label{EvEq032} \\
& \frac{1}{2h} \left\| u_{n+1}^h-u_n^h \right\|_2^2 + \mathcal{E}_W(u_{n+1}^h) \le \mathcal{E}_W(u_n^h)\ , \label{EvEq033} \\
& \int_D \Phi(\zeta_{n+1}^h)\ \mathrm{d}x \le \frac{\Phi''(0)}{h} \frac{\left\| u_{n+1}^h-u_n^h \right\|_2^2}{h} + C_0\ , \label{EvEq034}
\end{align}
the function $\Phi$ and the constant $C_0$ being defined in Lemma~\ref{EvLem019}. Introducing the time-dependent piecewise constant functions
\begin{subequations}\label{EvEq035}
\begin{align}
u^h(t,x) := \sum_{n\ge 0} u_n^h(x) \mathbf{1}_{[nh,(n+1)h)}(t)\ , \qquad (t,x)\in [0,\infty)\times D\ , \label{EvEq035a} \\
\zeta^h(t,x) := \sum_{n\ge 0} \zeta_n^h(x) \mathbf{1}_{[nh,(n+1)h)}(t)\ , \qquad (t,x)\in [0,\infty)\times D\ ,\label{EvEq035b}
\end{align}
\end{subequations}
we infer from \eqref{EvEq033} that, for $n\ge 0$,
\begin{equation}
\frac{1}{2h} \sum_{m=0}^n \left\| u_{m+1}^h-u_m^h \right\|_2^2 + \mathcal{E}_W(u_{n+1}^h) \le \mathcal{E}_W(u_0)\ . \label{EvEq036}
\end{equation}
In turn, \eqref{EvEq036} gives
\begin{equation}
\frac{1}{h} \sum_{m=0}^n \left\| u_{m+1}^h-u_m^h \right\|_2^2 + \left\| \nabla u_{n+1}^h \right\|_2^2 \le C_1 := 2\mathcal{E}_W(u_0) + \left\| \frac{\lambda}{W} \right\|_1\ , \qquad n\ge 0\ . \label{EvEq037}
\end{equation}
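For the reader's convenience, we indicate how \eqref{EvEq037} follows from \eqref{EvEq036}: recalling the expression of the energy used in \eqref{EvEq021}, namely $\mathcal{E}_W(v) = \frac{1}{2}\|\nabla v\|_2^2 - \frac{\lambda}{2}\int_D (1+v+W)^{-1}\,\mathrm{d}x$ (we restate it here for convenience), the constraint $u_{n+1}^h\ge -1$ entails
$$
\left\| \nabla u_{n+1}^h \right\|_2^2 = 2\mathcal{E}_W(u_{n+1}^h) + \lambda \int_D \frac{\mathrm{d}x}{1+u_{n+1}^h+W} \le 2\mathcal{E}_W(u_{n+1}^h) + \left\| \frac{\lambda}{W} \right\|_1\ ,
$$
and it suffices to combine this bound with \eqref{EvEq036} multiplied by $2$.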
Now, combining \eqref{EvEq034} and \eqref{EvEq037} leads us to
$$
\sum_{m=0}^{n+1} \int_D \Phi(\zeta_m^h)\ \mathrm{d}x = \sum_{m=0}^{n} \int_D \Phi(\zeta_{m+1}^h)\ \mathrm{d}x \le C_1 \frac{\Phi''(0)}{h} + (n+1) C_0\ ,
$$
hence
\begin{equation}
\sum_{m=0}^{n+1} h \int_D \Phi(\zeta_m^h)\ \mathrm{d}x \le C_1 \Phi''(0) + C_0 (n+1)h\ , \qquad n\ge 0\ . \label{EvEq038}
\end{equation}
Let us now fix $t>0$ and translate the above derived estimates in terms of $u^h$ and $\zeta^h$. Since $t\in [(n+1)h,(n+2)h)$ for some $n\ge -1$, it follows from \eqref{EvEq032} and \eqref{EvEq037} for $n\ge 0$ and from the definition of $u^h$ for $n=-1$ that
\begin{equation}
- 1 \le u^h(t) \le M \;\text{ in }\; D\ , \qquad \|\nabla u^h(t)\|_2^2 \le C_1\ . \label{EvEq039}
\end{equation}
Furthermore, by \eqref{EvEq038} for $n\ge 0$ and the definition of $\zeta^h$ for $n=-1$,
\begin{align}
\int_0^t \int_D \Phi(\zeta^h(\tau,x))\ \mathrm{d}x\mathrm{d}\tau & \le \int_0^{(n+2)h} \int_D \Phi(\zeta^h(\tau,x))\ \mathrm{d}x\mathrm{d}\tau \nonumber \\
& = \sum_{m=0}^{n+1} \int_{mh}^{(m+1)h} \int_D \Phi(\zeta^h(\tau,x))\ \mathrm{d}x\mathrm{d}\tau \nonumber \\
& = \sum_{m=0}^{n+1} h \int_D \Phi(\zeta_m^h(x))\ \mathrm{d}x \nonumber \\
& \le C_1 \Phi''(0) + C_0 t\ . \label{EvEq040}
\end{align}
We finally deduce from \eqref{EvEq035a}, \eqref{EvEq037}, and Cauchy-Schwarz' inequality that
\begin{equation}
\|u^h(t)-u^h(s)\|_2 \le \sqrt{C_1} \sqrt{t-s+h}\ , \qquad s\in [0,t]\ . \label{EvEq041}
\end{equation}
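Let us sketch the derivation of \eqref{EvEq041}: if $s\in [mh,(m+1)h)$ and $t\in [(n+1)h,(n+2)h)$ with $m\le n+1$, then, by \eqref{EvEq035a}, the Cauchy-Schwarz inequality, and \eqref{EvEq037},
$$
\|u^h(t)-u^h(s)\|_2 \le \sum_{j=m}^{n} \left\| u_{j+1}^h - u_j^h \right\|_2 \le \sqrt{n+1-m}\, \left( \sum_{j=m}^{n} \left\| u_{j+1}^h - u_j^h \right\|_2^2 \right)^{1/2} \le \sqrt{(n+1-m)\,h\, C_1}\ ,
$$
while $(n+1-m)h \le t-s+h$.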
Owing to the compactness of the embedding of $\mathring{H}^1(D)$ in $L_2(D)$, the estimates \eqref{EvEq039} and \eqref{EvEq041} allow us to apply the variant of the Arzel\`a-Ascoli theorem stated in \cite[Proposition~3.3.1]{AGS08} to obtain the existence of $u\in C([0,\infty);L_2(D))$ and a sequence $(h_k)_{k\ge 1}$ of positive real numbers such that
\begin{equation}
\lim_{k\to\infty} h_k = 0\ , \qquad \lim_{k\to\infty} \|u^{h_k}(t) - u(t)\|_2 = 0 \;\text{ for all }\; t\ge 0\ . \label{EvEq042}
\end{equation}
In addition, for all $T>0$, the superlinearity \eqref{EvEq022} of $\Phi$, the bound \eqref{EvEq040}, and Dunford-Pettis' theorem guarantee that $(\zeta^h)_h$ is relatively weakly sequentially compact in $L_1((0,T)\times D)$ while $(\nabla u^h)_h$ is obviously relatively weakly compact in $L_2((0,T)\times D;\mathbb{R}^d)$ according to \eqref{EvEq039}. We may thus further assume that there is $\zeta\in L_1((0,T)\times D)$ such that
\begin{align}
\zeta^{h_k} & \rightharpoonup \zeta \;\;\text{ in }\;\; L_1((0,T)\times D)\ , \label{EvEq043} \\
\nabla u^{h_k} & \rightharpoonup \nabla u \;\;\text{ in }\;\; L_2((0,T)\times D;\mathbb{R}^d)\ , \label{EvEq044} \\
u^{h_k} & \longrightarrow u \;\;\text{ a.e. in }\;\; (0,T)\times D\ . \label{EvEq045}
\end{align}
A first consequence of \eqref{EvEq039}, \eqref{EvEq044}, and \eqref{EvEq045} is that $u(t)\in \mathcal{A}$ and satisfies \eqref{EvEq003} for all $t\ge 0$. Next, \eqref{EvEq042}, \eqref{EvEq045}, the square integrability of $1/W$, and Lebesgue's dominated convergence theorem entail that
\begin{equation}
\lim_{k\to\infty} \int_0^t \int_D \frac{\mathrm{d}x\mathrm{d}\tau}{(1+u^{h_k}(\tau,x) + W(x))^2} = \int_0^t \int_D \frac{\mathrm{d}x\mathrm{d}\tau}{(1+u(\tau,x)+ W(x))^2}\ , \qquad t\ge 0\ . \label{EvEq046}
\end{equation}
Also, we argue as in the proof of Lemma~\ref{EvLem019} to deduce from \eqref{EvEq039}, \eqref{EvEq043}, and \eqref{EvEq045} that $u$ and $\zeta$ are related according to Definition~\ref{EvDef001}~(c).
Let us now identify the equation solved by $(u,\zeta)$. To this end, consider $\vartheta\in \mathring{H}^1(D)\cap L_\infty(D)$ and $t>0$. For $k$ large enough, there is $n_k\ge 0$ such that $t\in [(n_k+1)h_k,(n_k+2)h_k)$ and a classical computation relying on \eqref{EvEq031a}, \eqref{EvEq031b}, and \eqref{EvEq035} gives (recalling that $(u_0^{h_k},\zeta_0^{h_k})=(u_0,0)$)
\begin{align*}
\int_D (u^{h_k}(t)-u_0) \vartheta\ \mathrm{d}x & = - \int_0^t \int_D \left[ \nabla u^{h_k}\cdot \nabla\vartheta + \zeta^{h_k} \vartheta + \frac{\lambda \vartheta}{2(1+u^{h_k}+W)^2} \right]\ \mathrm{d}x\mathrm{d}\tau \\
& \qquad + \int_0^{h_k} \int_D \left[ \nabla u_0\cdot \nabla\vartheta + \frac{\lambda \vartheta}{2(1+u_0+W)^2} \right]\ \mathrm{d}x\mathrm{d}\tau \\
& \qquad + \int_t^{(n_k+2)h_k} \int_D \left[ \nabla u^{h_k}\cdot \nabla\vartheta + \zeta^{h_k} \vartheta + \frac{\lambda \vartheta}{2(1+u^{h_k}+W)^2} \right]\ \mathrm{d}x\mathrm{d}\tau\ .
\end{align*}
Thanks to the convergences \eqref{EvEq042}, \eqref{EvEq043}, \eqref{EvEq044}, and \eqref{EvEq046}, we may pass to the limit as $k\to\infty$ in the previous identity and deduce that $(u,\zeta)$ solves \eqref{EvEq001a}, \eqref{EvEq001b} in the weak sense \eqref{EvEq011}.
We are left with passing to the limit in the discrete energy inequality \eqref{EvEq036}. For $t>0$ and $k$ large enough, there is $n_k\ge 0$ such that $t \in [(n_k+1)h_k,(n_k+2)h_k)$. Then $u^{h_k}(t) = u_{n_k+1}^{h_k}$ and the discrete energy inequality \eqref{EvEq036} reads
\begin{equation}
\frac{1}{2h_k} \sum_{m=0}^{n_k} \left\| u_{m+1}^{h_k} - u_m^{h_k} \right\|_2^2 + \mathcal{E}_W(u^{h_k}(t)) \le \mathcal{E}_W(u_0)\ . \label{EvEq047}
\end{equation}
On the one hand, we infer from \eqref{EvEq039} that $(u^{h_k}(t))_{k\ge 1}$ is bounded in $\mathring{H}^1(D)\cap L_\infty(D)$ and converges towards $u(t)$ in $L_2(D)$. We may thus extract a subsequence of $(u^{h_k}(t))_{k\ge 1}$ (possibly depending on $t$) which converges weakly towards $u(t)$ in $\mathring{H}^1(D)$ as well as a.e. in $D$. These properties along with the integrability of $1/W$ and Lebesgue's dominated convergence theorem readily imply that
\begin{equation}
\mathcal{E}_W(u(t)) \le \liminf_{k\to\infty} \mathcal{E}_W(u^{h_k}(t))\ . \label{EvEq048}
\end{equation}
Also, for $\delta>h_k$,
\begin{align}
\frac{1}{2h_k} \sum_{m=0}^{ n_k} \left\| u_{m+1}^{h_k} - u_m^{h_k} \right\|_2^2 & = \frac{1}{2h_k^2} \sum_{m=0}^{ n_k} \int_{m h_k}^{(m+1) h_k} \left\| u_{m+1}^{h_k} - u_m^{h_k} \right\|_2^2\ \mathrm{d}\tau \nonumber \\
& = \frac{1}{2} \int_0^{(n_k+1) h_k} \left\| \frac{u^{h_k}(\tau+h_k) - u^{h_k}(\tau)}{h_k} \right\|_2^2 \ \mathrm{d}\tau \nonumber \\
& \ge \frac{1}{2} \int_0^{t-\delta} \left\| \frac{u^{h_k}(\tau+ h_k) - u^{h_k}(\tau)}{h_k} \right\|_2^2 \ \mathrm{d}\tau\ . \label{EvEq049}
\end{align}
Let $\delta\in (0,t/2)$. Since
$$
(\tau,x) \longmapsto \frac{u^{h_k}(\tau+h_k) - u^{h_k}(\tau)}{h_k}
$$
converges to $\partial_t u$ in $\mathcal{D}'((0,\infty)\times D)$ by \eqref{EvEq042} and is bounded in $L_2((0,t-\delta)\times D)$ due to \eqref{EvEq037} and \eqref{EvEq049}, we realize that $\partial_t u$ belongs to $L_2((0,t-\delta)\times D)$ and satisfies
$$
\frac{1}{2}\int_0^{t-\delta} \|\partial_t u(\tau)\|_2^2\ \mathrm{d}\tau \le \liminf_{k\to \infty} \frac{1}{2h_k} \sum_{m=0}^{n_k} \left\| u_{m+1}^{h_k} - u_m^{h_k} \right\|_2^2\ .
$$
Since $\delta\in (0,t/2)$ was arbitrarily chosen, the previous inequality is also valid for $\delta=0$ and we combine it with \eqref{EvEq047} and \eqref{EvEq048} to conclude that $u$ satisfies the energy inequality \eqref{EvEq010}.
Finally, if there are $\kappa\in (0,1)$ and $T>0$ such that $u\ge \kappa-1$ in $(0,T)\times D$, then $g_W(u) \le 1/(2\kappa^2)$ and thus belongs to $L_\infty((0,T)\times D)$. Classical parabolic regularity results then complete the proof of Theorem~\ref{EvThm002}.
\end{proof}
We immediately derive the time monotonicity of the just constructed solution when the initial value is a supersolution to \eqref{idefix}.
\begin{prop}\label{jovanotti}
Let $1/W\in L_r(D)$ for some $r>3d/2$ and let $u_0\in \mathcal{A}\cap L_\infty(D)$ be a supersolution to \eqref{idefix}. If $u$ denotes the corresponding solution to \eqref{EvEq001} constructed in Theorem~\ref{EvThm002}, then for a.a. $x\in D$ the function $t\mapsto u(t,x)$ is non-increasing.
\end{prop}
\begin{proof}
We keep the notation of the proof of Theorem~\ref{EvThm002}. Thanks to Lemma~\ref{saturnino2} and the assumption on the initial value $u_0$, an induction argument entails that $u_n^h\ge u_{n+1}^h$ in $D$ and $u_{n+1}^h$ is a supersolution to \eqref{idefix} for all $n\in \mathbb{N}$ provided that $h$ is small enough. Therefore, the function $t\mapsto u^h(t,x)$ is non-increasing for a.a. $x\in D$ and the assertion follows from \eqref{EvEq042}.
\end{proof}
We next focus on the uniqueness of energy solutions when $1/W$ enjoys suitable integrability properties and actually prove a comparison principle.
\begin{proof}[Proof of Theorem~\ref{EvThm005}: Uniqueness and comparison principle]
Let $1/W\in L_r(D)$ with $r>3d/2$. Setting $p:=2r/(r-3)$ when $r>3$ and $p:=\infty$ when $r\le 3$ (the latter being possible only in one space dimension $d=1$), the constraint $r>3d/2$ guarantees that $p\in (2,2^*)$ when it is finite, the Sobolev exponent $2^*$ being given by $2^*:=2d/(d-2)$ for $d\ge 3$ and $2^*:=\infty$ for $d\in\{1,2\}$. This choice of $p$ implies the validity of the Gagliardo-Nirenberg inequality
$$
\|w\|_p \le C \|\nabla w\|_2^\theta \|w\|_2^{1-\theta}\ , \qquad w\in \mathring{H}^1(D)\ ,
$$
where $\theta := d(p-2)/2p \in (0,1)$ for $p$ finite and $\theta:=1/2$ for $p=\infty$ and $C$ depends only on $D$, $d$, and $p$.
Now, let $u_0, v_0\in\mathcal{A}\cap L_\infty(D)$ and consider two energy solutions $u$ and $v$ to \eqref{EvEq001} in the sense of Definition~\ref{EvDef001} with initial values $u_0$ and $v_0$, respectively. Owing to \eqref{EvEq001b}, the boundedness of $v-u$, and the integrability of $\zeta_v-\zeta_u$, there holds
$$
\int_D (\zeta_v-\zeta_u)(v-u)_+\ \mathrm{d}x\ge 0\ ,
$$
and we infer from \eqref{EvEq001} that
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|(v-u)_+\|_2^2 & \le - \|\nabla(v-u)_+\|_2^2 + \frac{\lambda}{2} \int_D \frac{(1+u+W+1+v+W)(v-u)_+^2}{(1+u+W)^2(1+v+W)^2}\ \mathrm{d}x \\
& \le - \|\nabla(v-u)_+\|_2^2 + \lambda \int_D \frac{(v-u)_+^2}{W^3}\ \mathrm{d}x\ .
\end{align*}
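The second inequality above relies on the elementary bound (valid since $u\ge -1$ and $v\ge -1$), which we record for the reader's convenience:
$$
\frac{(1+u+W)+(1+v+W)}{(1+u+W)^2(1+v+W)^2} = \frac{1}{(1+u+W)(1+v+W)^2} + \frac{1}{(1+u+W)^2(1+v+W)} \le \frac{2}{W^3}\ .
$$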
We next use H\"older's inequality along with the previously recalled Gagliardo-Nirenberg inequality to obtain
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|(v-u)_+\|_2^2 & \le - \|\nabla(v-u)_+\|_2^2 + \lambda \|(v-u)_+\|_p^2 \|W^{-1}\|_r^3 \\
& \le - \|\nabla(v-u)_+\|_2^2 + \lambda C^2 \|W^{-1}\|_r^3 \|\nabla(v-u)_+\|_2^{2\theta} \|(v-u)_+\|_2^{2(1-\theta)}\ .
\end{align*}
We finally deduce from Young's inequality that
\begin{align*}
\frac{1}{2} \frac{d}{dt} \|(v-u)_+\|_2^2 & \le (\theta-1) \|\nabla(v-u)_+\|_2^2 + (1-\theta) \left[ \lambda C^2 \|W^{-1}\|_r^3 \right]^{1/(1-\theta)} \|(v-u)_+\|_2^2 \\
& \le \left[ \lambda C^2 \|W^{-1}\|_r^3 \right]^{1/(1-\theta)} \|(v-u)_+\|_2^2\ .
\end{align*}
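For the reader's convenience, the Young's inequality step above is the standard inequality $xy\le \theta\, x^{1/\theta} + (1-\theta)\, y^{1/(1-\theta)}$ for $x,y\ge 0$, applied with $x := \|\nabla(v-u)_+\|_2^{2\theta}$ and $y := \lambda C^2 \|W^{-1}\|_r^3\, \|(v-u)_+\|_2^{2(1-\theta)}$, that is,
$$
\lambda C^2 \|W^{-1}\|_r^3\, \|\nabla(v-u)_+\|_2^{2\theta} \|(v-u)_+\|_2^{2(1-\theta)} \le \theta \|\nabla(v-u)_+\|_2^{2} + (1-\theta) \left[ \lambda C^2 \|W^{-1}\|_r^3 \right]^{1/(1-\theta)} \|(v-u)_+\|_2^{2}\ .
$$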
Integrating the previous differential inequality gives
\begin{equation}
\|(v-u)_+(t)\|_2^2 \le e^{C_2 t} \|(v_0-u_0)_+\|_2^2 \ , \qquad t\ge 0\ , \qquad C_2 := \left[ \lambda C^2 \|W^{-1}\|_r^3 \right]^{1/(1-\theta)}\ . \label{EvEq050}
\end{equation}
On the one hand, it readily follows from \eqref{EvEq050} that, if $u_0\le v_0$ a.e. in $D$, then $u(t)\le v(t)$ a.e. in $D$ for all $t\ge 0$. On the other hand, using again \eqref{EvEq050}, we realize that
\begin{align*}
\|(v-u)(t)\|_2^2 & = \|(v-u)_+(t)\|_2^2 + \|(u-v)_+(t)\|_2^2 \\
& \le e^{C_2 t} \left( \|(v_0-u_0)_+\|_2^2 + \|(u_0-v_0)_+\|_2^2 \right) = e^{C_2 t} \|v_0-u_0\|_2^2\ ,
\end{align*}
hence the claimed uniqueness.
\end{proof}
\section{The evolution problem: large time dynamics}\label{Sec6}
We now investigate the large time behavior of energy solutions by characterizing the $\omega$-limit sets as stated in Theorem~\ref{EvThm004}.
\begin{proof}[Proof of Theorem~\ref{EvThm004}]
Fix $u_0\in \mathcal{A}\cap L_\infty(D)$ and consider an energy solution $(u,\zeta)$ satisfying \eqref{EvEq003} as provided by Theorem~\ref{EvThm002}. The energy inequality \eqref{EvEq010} and the square integrability of $1/W$ imply then that
$$
\int_0^t \|\partial_t u(\tau)\|_2^2\ \mathrm{d}\tau + \|\nabla u(t)\|_2^2 \le C_1 = 2 \mathcal{E}_W(u_0) + \left\| \frac{\lambda}{W} \right\|_1\ , \qquad t\ge 0\ ,
$$
and thus
\begin{equation}
\int_0^\infty \|\partial_t u(\tau)\|_2^2\ \mathrm{d}\tau + \sup_{t\ge 0}\left\{ \|\nabla u(t)\|_2^2 \right\} \le C_1\ . \label{LDEq001}
\end{equation}
By \eqref{LDEq001}, $(u(t))_{t\ge 0}$ is bounded in $\mathring{H}^1(D)$ and thus relatively compact in $L_2(D)$. Consequently, there are a sequence $(t_k)_{k\ge 1}$ of positive real numbers in $(1,\infty)$ and $v\in L_2(D)$ such that
\begin{equation}
\lim_{k\to\infty} t_k = \infty\ , \qquad \lim_{k\to\infty} \|u(t_k)-v\|_2=0\ . \label{LDEq002}
\end{equation}
We now define the sequences $(V_k,\xi_k)_{k\ge 1}$ by
$$
(V_k,\xi_k)(s,x) := (u,\zeta)(s+t_k,x)\ , \qquad (s,x)\in [-1,1]\times D\,,
$$
where
$$
\zeta:=-\lambda g_W(u)+\Delta u-\partial_t u\,.
$$
On the one hand, it readily follows from \eqref{LDEq001} that
$$
(V_k)_{k\ge 1} \text{ is bounded in } L_\infty(-1,1;\mathring{H}^1(D)) \text{ and in } W_2^1(-1,1;L_2(D))\ .
$$
Owing to the compactness of the embedding of $\mathring{H}^1(D)$ in $L_2(D)$, we infer from \cite[Corollary~4]{Si87} that there are $V\in C([-1,1];L_2(D))$ and a subsequence of $(V_k)_{k\ge 1}$ (not relabeled) such that
\begin{align}
V_k & \longrightarrow V \;\text{ in }\; C([-1,1];L_2(D)) \;\text{ and a.e. in }\; (-1,1)\times D\ , \label{LDEq003} \\
\nabla V_k & \rightharpoonup \nabla V \;\text{ in }\; L_2((-1,1)\times D;\mathbb{R}^d)\ . \label{LDEq004}
\end{align}
A first consequence of \eqref{LDEq001} and \eqref{LDEq003} is that, for all $s\in [-1,1]$,
\begin{align*}
\|V(s) - v\|_2 & = \lim_{k\to\infty} \|V_k(s) - V_k(0)\|_2 \le \lim_{k\to\infty} \left| \int_{t_k}^{s+t_k} \|\partial_t u(\tau)\|_2\ \mathrm{d}\tau \right| \\
& \le \sqrt{2} \lim_{k\to\infty} \left( \int_{-1+t_k}^{1+t_k} \|\partial_t u(\tau)\|_2^2\ \mathrm{d}\tau \right)^{1/2} = 0\ ,
\end{align*}
so that
\begin{equation}
V(s) \equiv v\ , \qquad s\in [-1,1]\ . \label{LDEq005}
\end{equation}
Another consequence of \eqref{LDEq003}, the square integrability of $1/W$, and Lebesgue's dominated convergence theorem is that
\begin{equation}
\lim_{k\to\infty} \int_{-1}^1 \int_D g_W(V_k) \mathrm{d}x \mathrm{d}s = \int_{-1}^1 \int_D g_W(V)\mathrm{d}x \mathrm{d}s\ . \label{LDEq006}
\end{equation}
On the other hand, we introduce
$$
G_k := - \lambda g_W(V_k) - \partial_s V_k\ , \qquad k\ge 1\ .
$$
Since $(\partial_s V_k)_{k\ge 1}$ is bounded in $L_2((-1,1)\times D)$ by \eqref{LDEq001} and
$$
\left| \lambda g_W(V_k) \right| \le \frac{\lambda}{2W^2} \in L_1((-1,1)\times D)\ ,
$$
the sequence $(G_k)_{k\ge 1}$ is the sum of two sequences which are relatively weakly sequentially compact in $L_1((-1,1)\times D)$ and is thus also relatively weakly sequentially compact in $L_1((-1,1)\times D)$. Using once more the de la Vall\'ee-Poussin theorem \cite{Le77, La15}, there is a non-negative and even convex function $\Phi\in C^2(\mathbb{R})$ such that
\begin{equation}
\lim_{r\to \infty} \frac{\Phi(r)}{r} = \infty\ , \qquad C_2 := \sup_{k\ge 1} \left\{ \int_{-1}^1 \int_D \Phi(|G_k|)\ \mathrm{d}x\mathrm{d}s \right\} < \infty\ . \label{LDEq008}
\end{equation}
Furthermore, the regularity of $(u,\zeta)$ ensures that, for almost every $s\in [-1,1]$, $V_k(s)\in \mathring{H}^1(D)$, $\Delta V_k(s)\in L_1(D)$, $\xi_k(s)\in L_1(D)$, and $G_k(s) \in L_1(D)$. Together with the weak formulation \eqref{EvEq011} of \eqref{EvEq001} and \cite[Theorem~1]{BS73}, these properties imply that $V_k(s)$ is the unique solution to
\begin{subequations}\label{LDEq007}
\begin{align}
-\Delta V_k(s) + \xi_k(s) & = G_k(s) \;\text{ in }\; D\ , \label{LDEq007a} \\
\xi_k(s) & \in \partial\mathbb{I}_{[-1,\infty)}(V_k(s)) \;\text{ in }\; D\ , \label{LDEq007b} \\
V_k(s) & = 0 \;\text{ on }\; \partial D\ . \label{LDEq007c}
\end{align}
\end{subequations}
We then apply \cite[Proposition~4]{BS73} to deduce from \eqref{LDEq007} that, for all $k\ge 1$,
$$
\int_D \Phi(\xi_k(s,x))\ \mathrm{d}x \le \int_D \Phi(G_k(s,x))\ \mathrm{d}x \;\text{ for almost every }\; s\in (-1,1)\ ,
$$
hence, thanks to \eqref{LDEq008},
$$
\int_{-1}^1 \int_D \Phi(\xi_k(s,x))\ \mathrm{d}x\mathrm{d}s \le C_2\ .
$$
The superlinearity \eqref{LDEq008} of $\Phi$ along with the previous bound and Dunford-Pettis' theorem entail that $(\xi_k)_{k\ge 1}$ is relatively weakly sequentially compact in $L_1((-1,1)\times D)$. Consequently, there are $\xi\in L_1((-1,1)\times D)$ and a subsequence of $(\xi_k)_{k\ge 1}$ (not relabeled) such that
\begin{equation}
\xi_k \rightharpoonup \xi \;\text{ in }\; L_1((-1,1)\times D)\ . \label{LDEq009}
\end{equation}
Now, to identify the equation solved by $(V,\xi)$, we infer from the weak formulation \eqref{EvEq011} of \eqref{EvEq001} that, for $\vartheta\in\mathcal{A}\cap L_\infty(D)$ and $k\ge 1$,
\begin{align*}
& \int_{-1}^1 \int_D \left[ \nabla V_k \cdot \nabla\vartheta + \xi_k \vartheta + \lambda\vartheta g_W(V_k) \right]\ \mathrm{d}x\mathrm{d}s \\
& = \int_{-1+t_k}^{1+t_k} \int_D \left[ \nabla u \cdot \nabla\vartheta + \zeta \vartheta + \lambda\vartheta g_W(u) \right]\ \mathrm{d}x\mathrm{d}\tau \\
& = \int_D [u(-1+t_k)-u(1+t_k)] \vartheta\ \mathrm{d}x = - \int_{-1+t_k}^{1+t_k} \int_D \vartheta \partial_t u\ \mathrm{d}x\mathrm{d}\tau\ .
\end{align*}
By Cauchy-Schwarz' inequality,
\begin{align*}
\left| \int_{-1+t_k}^{1+t_k} \int_D \vartheta \partial_t u\ \mathrm{d}x\mathrm{d}\tau \right| \le \|\vartheta\|_2 \int_{-1+t_k}^{1+t_k} \|\partial_t u\|_2\ \mathrm{d}\tau \le \sqrt{2} \|\vartheta\|_2 \left( \int_{-1+t_k}^{1+t_k} \|\partial_t u\|_2^2\ \mathrm{d}\tau \right)^{1/2}\ ,
\end{align*}
and the right-hand side of the above inequality converges to zero as $k\to\infty$ by \eqref{LDEq001}. Consequently,
\begin{equation}
\lim_{k\to\infty} \int_{-1}^1 \int_D \left[ \nabla V_k \cdot \nabla\vartheta + \xi_k \vartheta + \lambda\vartheta g_W(V_k) \right]\ \mathrm{d}x\mathrm{d}s = 0\ . \label{LDEq010}
\end{equation}
Gathering \eqref{LDEq004}, \eqref{LDEq005}, \eqref{LDEq006}, \eqref{LDEq009}, and \eqref{LDEq010}, we end up with
$$
\int_{-1}^1 \int_D \left[ \nabla v \cdot \nabla\vartheta + \xi \vartheta + \lambda\vartheta g_W(v) \right]\ \mathrm{d}x\mathrm{d}s = 0\ ,
$$
which entails, in particular, that $\xi$ does not depend on time and that $\Delta v = \xi + \lambda g_W(v)$ belongs to $L_1(D)$. We finally check that $\xi\in \partial\mathbb{I}_{[-1,\infty)}(v)$ a.e. in $D$ as in the proof of Lemma~\ref{EvLem012}, recalling that $-1 \le v \le \|(u_0)_+\|_\infty$ as a consequence of \eqref{EvEq003} and \eqref{LDEq002}. Also, $v$ belongs to $\omega(u_0)$ by \eqref{LDEq002}. Thus, $\omega(u_0)$ is non-empty and obviously bounded in $\mathring{H}^1(D)$ by \eqref{LDEq001} and Poincar\'e's inequality.
To finish off the proof, let us assume that $u_0\ge U_\lambda$. The comparison principle in Theorem~\ref{EvThm005} implies that $u(t)\ge U_\lambda$ for all $t\ge 0$. Consequently, if $v\in \omega(u_0)$, then $v\ge U_\lambda$ and the maximality of $U_\lambda$ stated in Proposition~\ref{P1} entails that $v=U_\lambda$ as claimed.
\end{proof}
Combining Proposition~\ref{jovanotti} and Theorem~\ref{EvThm004} gives several properties of the solution to \eqref{EvEq001} starting from the rest state $u_0=0$ as summarized in Theorem~\ref{obelix888}.
\begin{proof}[Proof of Theorem~\ref{obelix888}] Let $u_0=0$ and let $u$ be the corresponding solution to \eqref{EvEq001} given by Theorem~\ref{EvThm002}
and Theorem~\ref{EvThm005}. Clearly, $u_0$ is a supersolution to \eqref{idefix} and satisfies $u_0\ge U_\lambda$ in $D$. It then readily follows from Proposition~\ref{jovanotti} and Theorem~\ref{EvThm004} that $u(t_1)\ge u(t_2)\ge U_\lambda$ in $D$ for $t_1<t_2$. On the one hand, this ordering property obviously implies that
\begin{equation}\label{gilmour}
\mathcal{C}(u(t_1))\subset \mathcal{C}(u(t_2))\subset \mathcal{C}(U_\lambda)\,.
\end{equation}
Since the measure of $\mathcal{C}(U_\lambda)$ equals 0 if $\lambda <\Lambda_z$, statements~(i) and~(ii) follow.
As for statement~(iii), let $\varphi_1$ be the positive eigenfunction of $-\Delta_1$ associated with the first eigenvalue $\mu_1$ and normalized so that $\|\varphi_1\|_1=1$. It then follows from \eqref{EvEq011} that
\begin{equation*}
\int_0^t \int_D \zeta_u \varphi_1 \, \mathrm{d}x\mathrm{d}s =
-\int_D u(t) \varphi_1\ \mathrm{d}x -\mu_1 \int_0^t \int_D u\varphi_1 \ \mathrm{d}x\mathrm{d}s - \lambda \int_0^t\int_D \varphi_1 g_W(u)\ \mathrm{d}x\mathrm{d}s\,.
\end{equation*}
Since $-1\le u(s)\le 0$ in $D$ for all $s\ge 0$, we further obtain that
\begin{equation*}
\int_0^t \int_D \zeta_u \varphi_1 \, \mathrm{d}x\mathrm{d}s \le 1 -\left(\lambda \int_D \varphi_1 g_W(0)\,\mathrm{d} x-\mu_1\right) t\,.
\end{equation*}
Introducing
$$
\Lambda^*:=\frac{\mu_1}{\int_D \varphi_1 g_W(0)\,\mathrm{d} x}
$$
and setting
$$
T_z:=\frac{\Lambda^*}{\mu_1(\lambda-\Lambda^*)}\,,
$$
we realize that
\begin{equation}\label{neville}
\int_0^t \int_D \zeta_u \varphi_1 \, \mathrm{d}x\mathrm{d}s <0
\end{equation}
for all $t>T_z$. Consequently, given $t>T_z$ there is $s(t)\in (0,t)$ such that $\zeta_{u}(s(t))\not\equiv 0$ and thus $\vert\mathcal{C}(u(s(t)))\vert>0$. The time monotonicity \eqref{gilmour} of the coincidence set then implies that $\vert\mathcal{C}(u(t))\vert>0$, which proves statement~(iii).
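For completeness, let us record the elementary computation behind \eqref{neville}: for $\lambda>\Lambda^*$ (as implicitly required for $T_z$ to be positive), the definition of $\Lambda^*$ gives
$$
1 - \left( \lambda \int_D \varphi_1 g_W(0)\,\mathrm{d}x - \mu_1 \right) t = 1 - \mu_1\, \frac{\lambda-\Lambda^*}{\Lambda^*}\, t < 0 \;\text{ for }\; t>T_z\ ,
$$
so that \eqref{neville} follows from the previous estimate.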
To prove statement (iv) we proceed along the lines of \cite{BCMR,GhG08a} and construct a subsolution to \eqref{EvEq001} for $\lambda>\Lambda_z$ which is well-separated from $-1$. This will eventually imply that the corresponding maximal stationary solution is unzipped, contradicting the definition of $\Lambda_z$. More specifically, set $M:= \|W\|_\infty$ and consider $\lambda>\Lambda_z$. Let $u$ be the solution to \eqref{EvEq001} with initial value $u_0=0$ and assume for contradiction that $\zeta_u(t)\equiv 0$ for all $t>0$. For $\varepsilon\in (0,1)$, we define the function
\begin{equation}
\Psi_\varepsilon(r) := -1-M + \left[ \frac{(1+r+M)^3 - \varepsilon^3 (1+M)^3}{1-\varepsilon^3} \right]^{1/3}\ , \qquad r\in (r_\varepsilon, \infty)\ , \label{acdc1}
\end{equation}
with $r_\varepsilon := -(1+M)(1-\varepsilon)<0$. Observe that
\begin{equation}
\Psi_\varepsilon'(r) = \frac{(1+r+M)^2}{(1-\varepsilon^3)^{1/3} \left[ (1+r+M)^3 - \varepsilon^3 (1+M)^3 \right]^{2/3}} > 0\ , \qquad r\in (r_\varepsilon,\infty)\ , \label{acdc2}
\end{equation}
and
$$
\Psi_\varepsilon''(r) = - \frac{2 \varepsilon^3 (1+M)^3}{(1-\varepsilon^3)^{1/3}} \frac{(1+r+M)}{\left[ (1+r+M)^3 - \varepsilon^3 (1+M)^3 \right]^{5/3}} < 0\ , \qquad r\in (r_\varepsilon,\infty)\ ,
$$
so that $\Psi_\varepsilon$ is an increasing concave function from $(r_\varepsilon,\infty)$ onto $(-(1+M),\infty)$. We next define $v_\varepsilon := \Psi_\varepsilon^{-1}(u)$ in $(0,\infty)\times D$. Since $\Psi_\varepsilon$ is increasing and $u$ ranges in $[-1,0]$, we obtain that
\begin{equation}
\varrho_\varepsilon := \left[ \varepsilon^3 (1+M)^3 + (1-\varepsilon^3) M^3 \right]^{1/3} - (1+M) \le v_\varepsilon \le 0 \;\;\text{ in }\;\; (0,\infty)\times D\ . \label{acdc3}
\end{equation}
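Let us point out that the lower bound $\varrho_\varepsilon$ in \eqref{acdc3} is nothing but $\Psi_\varepsilon^{-1}(-1)$; indeed, by \eqref{acdc1},
$$
\Psi_\varepsilon(\varrho_\varepsilon) = -1-M + \left[ \frac{(1+\varrho_\varepsilon+M)^3 - \varepsilon^3 (1+M)^3}{1-\varepsilon^3} \right]^{1/3} = -1-M + \left[ \frac{(1-\varepsilon^3) M^3}{1-\varepsilon^3} \right]^{1/3} = -1\ ,
$$
while $\Psi_\varepsilon(0)=0$.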
Observe that the convexity of $r\mapsto r^3$ ensures that
\begin{equation}\label{notadam}
\varrho_\varepsilon\ge \varepsilon^3-1>-1\,.
\end{equation}
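More precisely, the convexity of $r\mapsto r^3$, applied with the weights $\varepsilon^3$ and $1-\varepsilon^3$, ensures that
$$
\left[ \varepsilon^3 (1+M)^3 + (1-\varepsilon^3) M^3 \right]^{1/3} \ge \varepsilon^3 (1+M) + (1-\varepsilon^3) M = M + \varepsilon^3\ ,
$$
whence $\varrho_\varepsilon \ge M+\varepsilon^3-(1+M) = \varepsilon^3-1$.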
We next infer from \eqref{EvEq001a} that $v_\varepsilon$ solves
\begin{equation}
\partial_t v_\varepsilon - \Delta v_\varepsilon = \frac{\Psi_\varepsilon''(v_\varepsilon)}{\Psi_\varepsilon'(v_\varepsilon)} |\nabla v_\varepsilon|^2 - \lambda (1-\varepsilon^3) g_W(v_\varepsilon) + \frac{\lambda}{2} \frac{S_\varepsilon(v_\varepsilon)}{H_\varepsilon(v_\varepsilon)} \;\text{ in }\; (0,\infty)\times D\ , \label{acdc4}
\end{equation}
where
$$
S_\varepsilon (r,x) := (1-\varepsilon^3) \Psi_\varepsilon'(r) \left[ 1+ \Psi_\varepsilon(r) + W(x) \right]^2 - \left[ 1+ r + W(x) \right]^2\ , \qquad (r,x)\in [\varrho_\varepsilon,0] \times D\,,
$$
and
$$
H_\varepsilon (r,x) :=\Psi_\varepsilon'(r) \left[ 1+ \Psi_\varepsilon(r) + W(x) \right]^2 \left[ 1+ r + W(x) \right]^2 >0\ , \qquad (r,x)\in [\varrho_\varepsilon,0] \times D\,.
$$
It follows from the definition \eqref{acdc1} of $\Psi_\varepsilon$ that, for $r\in [\varrho_\varepsilon,0]$,
\begin{equation}
S_\varepsilon (r,x) = (M-W(x)) \left[ 1 - R_\varepsilon(r) \right] \left[ 2 + 2r + M + W(x) - (M-W(x)) R_\varepsilon(r) \right]\ , \label{acdc5}
\end{equation}
where
$$
R_\varepsilon(r) := \frac{(1-\varepsilon^3)^{1/3} (1+r+M)}{\left[ (1+r+M)^3 - \varepsilon^3 (1+M)^3 \right]^{1/3}}\ .
$$
Since
$$
R_\varepsilon'(r) = - \frac{\varepsilon^3 (1-\varepsilon^3)^{1/3} (1+M)^3}{\left[ (1+r+M)^3 - \varepsilon^3 (1+M)^3 \right]^{4/3}} \le 0\ , \qquad r\in [\varrho_\varepsilon,0]\ ,
$$
there holds $R_\varepsilon(0)\le R_\varepsilon(r) \le R_\varepsilon(\varrho_\varepsilon)$ for $r\in [\varrho_\varepsilon,0]$, hence
\begin{equation}
1 \le R_\varepsilon(r) \le \frac{\varrho_\varepsilon + 1+ M}{M}\ , \qquad r\in [\varrho_\varepsilon,0]\ . \label{acdc6}
\end{equation}
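The bounds in \eqref{acdc6} follow from the monotonicity of $R_\varepsilon$ together with its values at the endpoints, which we record for convenience:
$$
R_\varepsilon(0) = \frac{(1-\varepsilon^3)^{1/3}(1+M)}{\left[ (1-\varepsilon^3)(1+M)^3 \right]^{1/3}} = 1\ , \qquad R_\varepsilon(\varrho_\varepsilon) = \frac{(1-\varepsilon^3)^{1/3}(1+\varrho_\varepsilon+M)}{\left[ (1-\varepsilon^3) M^3 \right]^{1/3}} = \frac{\varrho_\varepsilon+1+M}{M}\ .
$$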
Consequently, for $r\in [\varrho_\varepsilon,0]$ and $x\in D$, it follows from \eqref{acdc6} and the definition of $M$ that
\begin{align}
& 2 + 2r + M + W(x) - (M-W(x)) R_\varepsilon(r) \nonumber \\
& \qquad \ge 2 + 2 \varrho_\varepsilon + M + W(x) - \frac{M-W(x)}{M} (\varrho_\varepsilon + 1+ M) \nonumber \\
& \qquad = \frac{M+W(x)}{M} (1+\varrho_\varepsilon) + 2 W(x) \ge 0\ . \label{acdc7}
\end{align}
We then infer from \eqref{acdc5}, \eqref{acdc6}, \eqref{acdc7}, and the definition of $M$ that $S_\varepsilon(r,x)\le 0$ for $(r,x)\in [\varrho_\varepsilon,0]\times D$. Along with \eqref{acdc4} and the monotonicity and concavity of $\Psi_\varepsilon$, this readily implies that
\begin{subequations}\label{acdc8}
\begin{equation}
\partial_t v_\varepsilon - \Delta v_\varepsilon \le - \lambda (1-\varepsilon^3) g_W(v_\varepsilon) \;\text{ in }\; (0,\infty)\times D\,, \label{acdc8a}
\end{equation}
while \eqref{acdc3} and \eqref{notadam} entail that $g_W(v_\varepsilon)$ belongs to $L_\infty((0,\infty)\times D)$.
In addition, since $\Psi_\varepsilon(0)=0$, it follows from \eqref{EvEq001c} and \eqref{EvEq001d} that
\begin{align}
v_\varepsilon & = 0 \;\text{ on }\; (0,\infty)\times \partial D\ ,\label{acdc8b} \\
v_\varepsilon(0) & = 0 \;\text{ in }\; D\ . \label{acdc8c}
\end{align}
\end{subequations}
Thanks to \eqref{acdc3}, \eqref{notadam}, and \eqref{acdc8}, we can construct by a classical Perron method a solution
$$
u_\varepsilon\in C^1([0,\infty),L_2(D))\cap C([0,\infty),W_2^2(D))
$$
to the initial boundary value problem
\begin{align*}
\partial_t u_\varepsilon - \Delta u_\varepsilon & = - \lambda (1-\varepsilon^3) g_W(u_\varepsilon) \;\text{ in }\; (0,\infty)\times D\ , \\%\label{acdc9a} \\
u_\varepsilon & = 0 \;\text{ on }\; (0,\infty)\times \partial D\ , \\%\label{acdc9b} \\
u_\varepsilon(0) & = 0 \;\text{ in }\; D\ ,
\end{align*}
which satisfies $v_\varepsilon\le u_\varepsilon \le 0$ and $g_W(u_\varepsilon)\le g_W(v_\varepsilon)$ in $(0,\infty)\times D$. It follows from \eqref{acdc3} and \eqref{notadam} that $u_\varepsilon\ge \varepsilon^3-1$ in $(0,\infty)\times D$ so that $u_\varepsilon$ is actually the solution to \eqref{EvEq001} with $\lambda (1-\varepsilon^3)$ instead of $\lambda$, the uniqueness being guaranteed by Theorem~\ref{EvThm005}. We then infer from Theorem~\ref{EvThm004} and Proposition~\ref{jovanotti} that
$$
U_{\lambda (1-\varepsilon^3)} = \inf_{t\ge 0} u_\varepsilon(t) \ge \varepsilon^3 - 1 \;\text{ in }\; D\ .
$$
However, $\lambda(1-\varepsilon^3)>\Lambda_z$ for $\varepsilon$ small enough and the just obtained lower bound contradicts the definition of $\Lambda_z$. Therefore, there is $T>0$ such that $\zeta_u(T)\not\equiv 0$ and thus $|\mathcal{C}(u(T))|>0$. Owing to the time monotonicity \eqref{gilmour} of the coincidence set, we have shown that $u(t)$ is zipped for $t\ge T$ and the proof is complete.
\end{proof}
\begin{rem}
Inequality \eqref{neville} is actually valid for any energy solution to \eqref{EvEq001} with initial value $u_0\in\mathcal{A}\cap L_\infty(D)$ for $t$ large enough, thereby guaranteeing that $\zeta_u \not\equiv 0$ in $(0,t)\times D$ for such $t$. However, it is not clear whether this implies that $\zeta_{u}(t)\not\equiv 0$ for all $t$ sufficiently large.
\end{rem}
\section*{Acknowledgments}
Part of this work was done while PhL enjoyed the hospitality and support of the Institut f\"ur Angewandte Mathematik, Leibniz Universit\"at Hannover.
\begin{thebibliography}{11}
\bibitem{AGS08} L.~Ambrosio, N.~Gigli, and G.~Savar\'e. \textsl{Gradient flows in metric spaces and in the space of probability measures. Second edition.} Lectures in Mathematics ETH Z\"urich. Birkh\"auser Verlag, Basel, 2008.
\bibitem{Ba10}
V.~Barbu.
{\it Nonlinear differential equations of monotone types in Banach spaces.}
Springer Monographs in Mathematics, 2010, Springer New York, Heidelberg.
\bibitem{BGP00}
D.H.~Bernstein, P.~Guidotti, and J.~A. Pelesko.
\textsl{Analytical and numerical analysis of electrostatically actuated MEMS devices}.
Proceedings of Modeling and Simulation of Microsystems 2000, San Diego, CA, (2000), pp.~489--492.
\bibitem{Br72} H.~Br\'ezis. \textsl{Probl\`emes unilat\'eraux.} J. Math. Pures Appl. (9) \textbf{51} (1972), 1--168.
\bibitem{Br73} H.~Br\'ezis.
\textsl{Op\'erateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert.} North-Holland Mathematics Studies, No. 5., North-Holland Publishing Co., 1973.
\bibitem{BS73}
H. Brezis and W.A. Strauss.
\textsl{Semi-linear second-order elliptic equations in $L^1$}.
J. Math. Soc. Japan \textbf{25} (1973), 565--590.
\bibitem{BCMR}
H. Brezis, T. Cazenave, Y. Martel, and A. Ramiandrisoa.
\textsl{Blow up for $u_t - \Delta u=g(u)$ revisited.} Adv. Differential Equations {\bf 1} (1996), no. 1, 73--90.
\bibitem{BP12}
N. Brubaker and J. A. Pelesko.
\textsl{Analysis of a one-dimensional prescribed mean curvature equation with singular nonlinearity.}
Nonlinear Anal. {\bf 75} (2012), 5086--5102.
\bibitem{CHW13}
Y.-H. Cheng, K.-C. Hung, and S.-H. Wang.
\textsl{Global bifurcation diagrams and exact multiplicity of positive solutions for a one-dimensional prescribed mean curvature problem arising in MEMS.} Nonlinear Anal. {\bf 89} (2013), 284--298.
\bibitem{EGG10}
P.~Esposito, N.~Ghoussoub, and Y.~Guo
\textsl{Mathematical
analysis of partial differential equations modeling electrostatic {MEMS}},
vol.~20 of Courant Lecture Notes in Mathematics, Courant Institute of
Mathematical Sciences, New York; American Mathematical Society, Providence,
RI, 2010.
\bibitem{FMPS06}
G.~Flores, G.~Mercado, J.~A. Pelesko, and N.~Smyth.
\textsl{Analysis of the
dynamics and touchdown in a model of electrostatic {MEMS}}, SIAM J. Appl.
Math., 67 (2006/07), pp.~434--446 (electronic).
\bibitem{FL07} I.~Fonseca and G.~Leoni, \textsl{Modern methods in the calculus of variations: $L^p$ spaces.} Springer Monographs in Mathematics. Springer, New York, 2007.
\bibitem{GhG06}
N.~Ghoussoub and Y.~Guo.
\textsl{On the partial differential equations of
electrostatic {MEMS} devices: stationary case}, SIAM J. Math. Anal., 38
(2006/07), pp.~1423--1449 (electronic).
\bibitem{GhG08a}
N.~Ghoussoub and Y.~Guo.
\textsl{Estimates for the
quenching time of a parabolic equation modeling electrostatic {MEMS}},
Methods Appl. Anal., 15 (2008), pp.~361--376
\bibitem{GB01}
P.~Guidotti and D.~Bernstein.
\textsl{Modeling and analysis of hysteresis phenomena in electrostatic zipper actuators}.
Proceedings of Modeling and Simulation of Microsystems 2001, Hilton Head Island, SC, 306--309.
\bibitem{GPW05}
Y.~Guo, Z.~Pan, and M.J.~Ward.
\textsl{Touchdown and pull-in voltage behavior of a MEMS device with varying dielectric properties}.
SIAM J. Appl. Math. \textbf{66} (2005), 309--338.
\bibitem{Ka63} S.~Kaplan. \textit{On the growth of solutions of quasi-linear parabolic equations}. Comm. Pure Appl. Math \textbf{XVI} (1963), 305--330.
\bibitem{KS}
D. Kinderlehrer and G. Stampacchia. \textsl{An introduction to variational inequalities and their applications.} Reprint of the 1980 original. Classics in Applied Mathematics, 31. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000.
\bibitem{La15} Ph.~Lauren\c cot. \textsl{Weak compactness techniques and coagulation equations}, in ``Evolutionary Equations with Applications in Natural Sciences'', J.~Banasiak \& M.~Mokhtar-Kharroubi (eds.), Lecture Notes Math. \textbf{2126}, Springer, 2015, pp.~199--253.
\bibitem{LW17}
Ph.~Lauren{\c{c}}ot and Ch.~Walker.
\textsl{Heterogeneous dielectric properties in MEMS Models}. Preprint (2017) submitted for publication.
\bibitem{LWBible}
Ph.~Lauren\c{c}ot and Ch.~Walker.
\textsl{Some singular equations modeling MEMS}.
Bull. Amer. Math. Soc. {\bf 54} (2017), 437-479.
\bibitem{Le77} C.-H.~L\^e, \textsl{Etude de la classe des op\'erateurs m-accr\'etifs de $L^1(\Omega)$ et accr\'etifs dans $L^\infty(\Omega)$}. Th\`ese de 3\`eme cycle (Universit\'e de Paris VI, Paris, 1977).
\bibitem{LLG14}
A.E.~Lindsay, J.~Lega, and K.G.~Glasner.
\textsl{Regularized model of post-touchdown configurations in electrostatic MEMS: Equilibrium analysis}.
Phys.~D \textbf{280-281} (2014), 95--108.
\bibitem{LLG15}
A.E.~Lindsay, J.~Lega, and K.G.~Glasner.
\textsl{Regularized model of post-touchdown configurations in electrostatic MEMS: Interface dynamics}.
IMA J. Appl. Math. \textbf{80} (2015), 1635--1663.
\bibitem{PX15}
H. Pan and R. Xing.
\textsl{On the existence of positive solutions for some nonlinear boundary value problems and applications to MEMS models.} Discrete Contin. Dyn. Syst. {\bf 35} (2015), no. 8, 3627--3682.
\bibitem{Pe02}
J.A.~Pelesko.
\textsl{Mathematical modeling of electrostatic MEMS with tailored dielectric properties}.
SIAM J. Appl. Math. \textbf{62} (2002), 888--908.
\bibitem{PeB03}
J.A.~Pelesko and D.H.~Bernstein.
{\it Modeling MEMS and NEMS}.
Chapman \& Hall/CRC, Boca Raton, FL, 2003.
\bibitem{Si87} J.~Simon. \textsl{Compact sets in the space $L^p(0,T;B)$.} Ann. Mat. Pura Appl. (4) \textbf{146} (1987), 65--96.
\end{thebibliography}
\end{document}
\begin{document}
\title{Nonradial solutions of weighted elliptic superlinear problems in bounded symmetric domains}
\author{Hugo Adu\'en\footnote{Departamento de Matem\'aticas y Estad\'\i stica, Universidad de C\'ordoba, Monter\'\i a, Colombia. E-mail address: [email protected]}, Sigifredo Herr\'on\footnote{ Escuela de Matem\'aticas, Universidad Nacional de Colombia Sede Medell\'\i n, Medell\'\i n, Colombia. E-mail address: [email protected]}}
\maketitle
\begin{abstract}
The present work has two objectives. First, we prove that a weight\-ed superlinear elliptic problem has infinitely many nonradial solutions in the unit ball. Second, we obtain the same conclusion in annuli for a more general nonlinearity which also involves a weight. We use a lower estimate of the energy level of radial solutions with $k-1$ zeros in the interior of the domain and a simple counting. Uniqueness results due to Tanaka \cite[2008]{Tanaka1} and \cite[2007]{Tanaka2} are very useful in our approach.
\end{abstract}
\textbf{Keywords:} Nonradial solutions, critical level, uniqueness, nodal solution
\\
\textbf{ MSC2010:} 35A02, 35A24, 35J60, 35J61
\section{Introduction and statement of results}
We consider
\begin{equation}\label{theproblem}
\begin{cases}
\Delta u +K(\Vert x\Vert)\vert u\vert^{p-1}u=0, \ \text{ for } \ x\in\Omega,\\
u=0, \text{ for } x\in\partial\Omega,
\end{cases}
\end{equation}
and
\begin{equation}\label{theproblem2}
\begin{cases}
\Delta u +K(\Vert x\Vert)g(u)=0, \ \text{ for } \ x\in\Omega,\\
u=0, \text{ for } x\in\partial\Omega,
\end{cases}
\end{equation}
We are interested in nonradial solutions, assuming that $K\in C^2(\overline{\Omega})$ is positive, that $\Omega$ is the unit ball in the case \eqref{theproblem} and an annulus $\Omega=\{x\in {\ensuremath{\mathbb{R}}}^N\colon a\leq \Vert x\Vert \leq b\}$ in the case \eqref{theproblem2}, and that $p$ is subcritical, namely $1<p<(N+2)/(N-2)$ with $N\geq 3$.
It is well known that some solutions of problems \eqref{theproblem} and \eqref{theproblem2} can be obtained as critical points of the functional $J\colon H^1_0(\Omega)\to\mathbb{R}$ defined by
\begin{equation}\label{J}
J(u)=\int_\Omega \left( \frac{1}{2}\Vert \nabla u\Vert^2-\frac{1}{p+1}K(\Vert x\Vert)\vert u\vert ^{p+1}\right){\,\mathrm{d}x},
\end{equation}
and
\begin{equation}\label{J_g}
J(u)=\int_\Omega \left( \frac{1}{2}\Vert \nabla u\Vert^2-K(\Vert x\Vert)G(u)\right){\,\mathrm{d}x},
\end{equation}
respectively, where $G(s)=\int_0^s g(t)\,dt.$ For simplicity, we are using the same letter $J$ in both cases. When we are looking for radial solutions to \eqref{theproblem} and \eqref{theproblem2}, the corresponding problem to be considered takes the form
\begin{equation}\label{rbvp0}
\begin{cases}
u''(r)+\frac{N-1}{r}u'(r)+K(r)\vert u(r)\vert ^{p-1}u(r)=0, \quad \text{for } r\in (0,1)\\
\hspace{4.34cm} u'(0)=u(1)=0,
\end{cases}
\end{equation}
and
\begin{equation}\label{rbvp0:g}
\begin{cases}
u''(r)+\frac{N-1}{r}u'(r)+K(r)g(u(r))=0, \quad \text{for } r\in (a,b)\\
\hspace{3.46cm} u(a)= u(b)=0,
\end{cases}
\end{equation}
respectively.
From \eqref{J}, a radial solution $u$ for \eqref{theproblem} satisfies
\begin{equation}\label{J-radial}
J(u)=\left( \frac{1}{2}-\frac{1}{p+1}\right) \omega_N\int_{0}^{1}r^{N-1}K(r)\vert v(r)\vert^{p+1}{\,\mathrm{d}r},
\end{equation}
where $\omega_N$ is the measure of the unit sphere in $\mathbb{R}^N$ and $v(r) = u(x)$ with $\|x\|= r$.
In a similar fashion, if $u$ is a radial solution for \eqref{theproblem2} in the annulus $\Omega=\{x\in {\ensuremath{\mathbb{R}}}^N\colon a\leq \Vert x\Vert \leq b\}$ then,
\begin{equation}\label{J-radial:con:g}
J(u)= \omega_N\int_{a}^{b}r^{N-1}K(r)\left( \frac{g(v(r))v(r)}{2}-G(v(r))\right) {\,\mathrm{d}r}.
\end{equation}
From now on, all throughout the paper, $c, c_1,C, C_0,\ C_1,\ C_2,\overline{C},\ldots$ will denote generic positive constants, independent of $u$, which may change from line to line.
\\
In this work we prove that problems \eqref{theproblem} and \eqref{theproblem2} have infinitely many nonradial solutions in the unit ball of $\mathbb{R}^N$ and in annuli, respectively. For problems \eqref{theproblem} and \eqref{theproblem2}, Ramos \emph{et al} \cite{Ramos} proved the existence of a sequence $u_k$ of sign-changing solutions whose energy levels are of order $k^\sigma$, where $\sigma=2(p+1)/(N(p-1))$, namely $J(u_k)\sim k^\sigma$. By using radial techniques, we are able to prove a lower estimate for the critical levels of radial solutions $u_k$ with $k-1$ zeros, establishing that $J(u_k)\geq C(k-1)^{N\sigma}$. Then, taking into account the uniqueness results due to S. Tanaka (\cite{Tanaka1, Tanaka2}) and the fact that the critical levels of radial solutions are more widely spaced, we get, by a counting argument, that most of the sign-changing solutions obtained by Ramos \emph{et al} are nonradial.
Very little is known about obtaining infinitely many nonradial solutions by means of radial techniques and, we emphasize, an upper estimate of the critical levels is not needed in our approach. We take advantage of one result in \cite[theorem 1]{Ramos} and we complement a couple of Tanaka's theorems by proving that \eqref{theproblem} and \eqref{theproblem2} have infinitely many nonradial solutions. More precisely, we prove that, among the sign-changing solutions obtained by Ramos \emph{et al} in \cite[theorem 1]{Ramos} for the nonlinearities $ g(x, s) =K(\|x\|)\vert s\vert^{p-1}s$ and $ g(x, s) = K(\|x\|)g(s)$, there is an infinite number of nonradial ones.
\\
In \cite{ACC}, an important ingredient for getting nonradial solutions was a uniqueness result in a superlinear context. Papers where uniqueness results have been obtained for other kinds of problems are, for example, \cite{Tanaka3, AH}; for these, our approach does not apply. To the best of our knowledge, an estimate of critical levels as in \cite{Ramos} is not known for sublinear problems.
That is why we will use the uniqueness results due to S. Tanaka \cite{Tanaka1, Tanaka2}. Precisely, he obtained for the problem
\begin{equation}\label{rbvp}
\left\{ \begin{aligned}
u''(r) + \frac{N-1}{r}u' (r) + K(r)|u(r)|^{p-1}u(r)&=0, \quad 0<r<1,\\
u'(0)=u(1) = 0, \ u(0) & >0,\\
&\hspace*{-6.7cm} u\ \mbox{ has exactly}\quad k-1 \ \mbox{ zeros in } (0, 1),
\end{aligned}
\right.
\end{equation}
the following result.
\begin{theorem}
Under the conditions $K\in C^2[0,1], K>0$ and
\begin{equation}\label{conditionmain}
[V(r)-p(N-2)-N+4][V(r)-p(N-2)+N]-2rV^\prime(r)<0,
\end{equation}
where $V(r)=rK'(r)/K(r)$, the solution of problem \eqref{rbvp} exists and is unique.
\end{theorem}
In \cite[Corollary 2.2]{Tanaka2}, S. Tanaka proved the following consequence.
\begin{theorem} Suppose $K\in C^2[a,b]$ and $K>0$.
Assuming that:
\begin{enumerate}[\rm(H1)]
\item $-2(N-1)\leq V(r)\leq -2$ and $V'(r)\geq 0$.
\item The function $g$ is odd, $g\in C^1({\ensuremath{\mathbb{R}}})$ and $g(s)>0$ for $s>0$.
\item $\left( g(s)/s\right)'>0$ for $s>0$.
\end{enumerate}
Then, problem \eqref{theproblem2} has at most one radial solution $u$ with exactly $k-1$ zeros in $(a,b)$ and $u'(a)>0.$
\end{theorem}
We complement these results by proving the existence of infinitely many nonradial solutions. Our main theorems read as follows.
\begin{theorem}\label{main}
Assuming that $K\in C^2[0,1]$, $K>0$ and \eqref{conditionmain}, the problem \eqref{theproblem} has infinitely many nonradial solutions.
\end{theorem}
\begin{theorem}\label{main2}
If $1<p<(N+2)/(N-2), \ K\in C^2[a,b], K>0$, \textup{(H1)-(H3)} hold and,
\begin{enumerate}
\item[\rm(H4)] There exists $C>0$ such that, for every $s>0, \ g(s) \leq C\,s^p$.
\item [\rm(H5)]\footnote{This is the well known Ambrosetti - Rabinowitz superlinear condition.}There exists $\theta >2$ such that, for every $s>0,\ sg(s)\geq \theta\, G(s)$.
\end{enumerate}
Then, the problem \eqref{theproblem2} has infinitely many nonradial solutions.
\end{theorem}
\begin{rem}
As an example, the function $g(s)=\vert s\vert ^{p-1}s$ verifies \textup{(H4)} and \textup{(H5)}.
\end{rem}
In section 2 we present some preliminaries and in section 3 we prove lower estimates of critical levels of radial solutions, which will be very important in order to prove our theorems in section 4.
\section{Some preliminaries}
From \eqref{J-radial:con:g} and \textup{(H5)}, we observe that
\begin{equation}\label{J-radial:con:g2}
J(u)\geq C\int_{a}^{b}r^{N-1}K(r)\, g(v(r))v(r) {\,\mathrm{d}r},
\end{equation}
for every radial solution $u$ for \eqref{theproblem2}.
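For the reader's convenience, we indicate how \eqref{J-radial:con:g2} follows from \eqref{J-radial:con:g}: since $g$ is odd by \textup{(H2)}, the primitive $G$ is even, and \textup{(H5)} then gives $\theta\, G(s)\le sg(s)$ for every $s\in\mathbb{R}$, whence
\[
\frac{g(s)s}{2}-G(s)\geq \left( \frac{1}{2}-\frac{1}{\theta}\right) sg(s),\qquad s\in\mathbb{R},
\]
so that \eqref{J-radial:con:g2} holds with $C=\omega_N(\theta-2)/(2\theta)>0$.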
\begin{rem}\label{remark!}
Conditions \textup{(H2)-(H5)} imply:
\begin{enumerate}[\rm (a)]
\item Due to \textup{(H2)}, the function $g$ holds $sg(s)>0$ for $s\neq 0$. In addition, by using \textup{(H4)} it follows that $g(s)/s \leq C \left( sg(s)\right)^{(p-1)/(p+1)}$ for some positive constant $C$: let us denote $\delta=(p-1)/(p+1)$. Assumption \textup{(H4)} implies
\[
\left( \frac{g(s)}{s^p}\right) ^{1-\delta}=\left( \frac{g(s)}{s^p}\right) ^{2/(p+1)}\leq C,
\]
and thus, multiplying by $\left( \frac{g(s)}{s^p}\right) ^{\delta}$, we get
\[
\frac{g(s)}{s^p}\leq C\left( \frac{g(s)}{s^p}\right) ^{\delta}.
\]
Since $\delta(p+1)=p-1$, it follows that
\[
\frac{g(s)}{s}= s^{p-1}\,\frac{g(s)}{s^p}\leq C\,\big(s^{p+1}\big)^{\delta}\left( \frac{g(s)}{s^p}\right) ^{\delta}=C\left( sg(s)\right)^{\delta},
\]
which is the assertion.
\item From \textup{(H3)} and \textup{(H2)} it follows that $g'(s)>g(s)/s>0$ for $s\neq 0$.
\item If $g(x,s):=K(\Vert x\Vert)g(s)$, assumption \textup{(H4)} implies $g(x,s)/s\to 0$ as $s\to 0$, uniformly in $x$.
\item Again, \textup{(H4)} implies
\[
0\leq g(x,s)s=K(\Vert x\Vert)g(s)s\leq C\Vert K\Vert\vert s\vert^{p+1}\leq C_1(\vert s\vert^{p+1}+1).
\]
\item Because of \textup{(H5)},
\[
g(x,s)s\geq \theta\, G(x,s)\geq \theta\, G(x,s)-C,
\]
where $G(x,s)=K(\Vert x\Vert)G(s)$ and $C>0$ is a constant.
\item Hypothesis \textup{(H5)} implies that $g$ is superlinear. More exactly, we have $\Big(s^{-\theta}G(s)\Big )'=s^{-\theta-1}\big(sg(s)-\theta G(s)\big)\geq 0$ for $s>0$ and thus, for $s>1$, we obtain $G(s) \geq G(1)s^\theta=G(1)\vert s\vert^\theta$. From this, \textup{(H2)} and \textup{(H5)} we get
\[
\lim\limits_{\vert s\vert \to\infty}\frac{g(s)}{s}=+\infty.
\]
\end{enumerate}
\end{rem}
In order to prove theorems \ref{main} and \ref{main2}, we shall apply theorem 1 due to Ramos \emph{et al} in \cite{Ramos} in special cases. In such a theorem, for problems \eqref{theproblem} and \eqref{theproblem2}, the authors proved the existence of a sequence $u_k$ of sign-changing solutions whose energy levels are of order $k^\sigma$, where $\sigma=2(p+1)/(N(p-1))$. To prove our first theorem, $\Omega$ will be the unit ball, $g(x,s)=K(\|x\|)\,|s|^{p-1}s, f(x,s)\equiv 0, \mu=p+1$ and we choose any number
\[
\nu\in\left( 0,\frac{N+2-p(N-2)}{2}\right),
\]
in order to obtain condition (1.4) in \cite{Ramos}. In this context, such a theorem is established as follows.
\begin{theorem}\label{ramos}
Assuming that $N\geq3, \ 1 <p< (N+2)/(N-2)$,
the problem
\[
\Delta u +K(\Vert x\Vert)\vert u\vert^{p-1}u=0; \, u\in H_0^1(\Omega),
\]
admits a sequence of sign-changing solutions $(u_k)_{k\in\mathbb{N}}$ whose energy levels $J(u_k)$ satisfy
\begin{equation}\label{ineq1ramos}
c_1k^{\sigma}\leq J(u_k)\leq c_2k^{\sigma},
\end{equation}
for some $c_1, c_2>0$ with $\sigma=\frac{2(p+1)}{N(p-1)}$.
\end{theorem}
To prove our second theorem, we will take $\Omega$ as an annulus, $g(x,s)=K(\|x\|)\,g(s)$, $f(x,s)\equiv 0, \mu=\theta$ and we choose
\[
\nu\in\left( 0,\frac{\theta(N+2-p(N-2))}{2(p+1)}\right),
\]
so that condition (1.4) in \cite{Ramos} holds; furthermore, the above remarks imply that all conditions of \cite[Theorem 1]{Ramos} are satisfied, and hence its conclusion gives us a sequence of sign-changing solutions $(u_k)_{k\in\mathbb{N}}$ whose energy levels $J(u_k)$ satisfy \eqref{ineq1ramos}.
\section{Lower estimates of critical levels}
In this section we obtain estimates of the critical levels corresponding to a radial solution $u_k$ with $k-1$ zeros for the problems \eqref{theproblem} and \eqref{theproblem2}. More precisely, in order to prove our first main result we establish an estimate from below for $J(u_k)$, where $u_k$ is a radial solution of \eqref{rbvp} with $k-1$ zeros in $(0,1)$. Then the same estimate will be obtained for a radial solution $u_k$ with $k-1$ zeros of the problem \eqref{theproblem2}.
\begin{theorem}\label{boundbelow}
Let $\delta\colon=(p-1)/(p+1)$. There exists a constant $C>0$ such that, for every solution $u_k\equiv u$ of \eqref{rbvp} with $k-1$ zeros in $(0,1)$, we have
\begin{equation}\label{ineq2Kaji}
J(u)\ge C(k-1)^{N\sigma}.
\end{equation}
\end{theorem}
\begin{theorem}\label{boundbelow2}
There exists a constant $C>0$ such that, for every solution $u_k\equiv u$ of \eqref{rbvp0:g} with $k-1$ zeros, we have
\begin{equation}\label{ineq2Kaji2}
J(u)\ge C(k-1)^{N\sigma}.
\end{equation}
\end{theorem}
\section{Proof of theorems \ref{main} and \ref{main2}}
By using the theorems of Section 3 and a counting argument, we prove our main results.
\end{document} |
\begin{document}
\begin{abstract}
We survey some tools and techniques for determining geometric properties of a link complement from a link diagram.
In particular, we survey the tools used to estimate geometric invariants in terms of basic diagrammatic link invariants.
We focus on determining when a link is hyperbolic, estimating its volume, and bounding its cusp shape and cusp area. We give sample applications and state some open questions and conjectures.
\end{abstract}
\title{A survey of hyperbolic knot theory}
\section{Introduction}\label{Sec:Intro}
Every link $L\subset S^3$ defines a compact, orientable 3-manifold with boundary consisting of tori; namely, the link exterior $X(L)=S^3\smallsetminus N(L)$, where $N(L)$ denotes an open regular neighborhood. The interior of $X(L)$ is homeomorphic to the link complement $S^3\smallsetminus L$. Around 1980, Thurston proved that link complements decompose into pieces that admit locally homogeneous geometric structures. In the most common and most interesting scenario, the entire link complement has a hyperbolic structure, that is, a metric of constant curvature $-1$. By Mostow--Prasad rigidity, this hyperbolic structure is unique up to isometry, hence geometric invariants of $S^3\smallsetminus L$ give topological invariants of $L$ that provide a wealth of information about $L$ to aid in its classification.
An important and difficult problem is to determine the geometry of a link complement directly from link diagrams, and to estimate geometric invariants such as volume and the lengths of geodesics in terms of basic diagrammatic invariants of $L$. This problem often goes by the names \emph{WYSIWYG topology}{\footnote{\emph{WYSIWYG} stands for ``what you see is what you get''.}}
or \emph{effective geometrization} \cite{Johnson:WYSIWYG}. Our purpose in this paper is to survey some results that effectively predict geometry in terms of diagrams, and to state some open questions. In the process, we also summarize some of the most commonly used tools and techniques that have been employed to study this problem.
\subsection{Scope and aims}
This survey is primarily devoted to three main topics: determining when a knot or link is hyperbolic, bounding its volume, and estimating its cusp geometry. Our main goal is to focus
on the methods, techniques, and tools of the field, in the hopes that this paper will lead to more research, rather than strictly listing previous results.
This focus overlaps significantly with the list of topics in Adams' survey article \emph{Hyperbolic knots}~\cite{Adams:survey}. That survey, written in 2003 and published in 2005, came out just as the pursuit of effective geometrization was starting to mature. Thus, although the topics are quite similar, both the results and the underlying techniques have advanced to a considerable extent. This is especially visible in efforts to predict hyperbolic volume (\refsec{Volume}), where only a handful of the results that we list were known by 2003. The same pattern asserts itself throughout.
As with all survey articles, the list of results and open problems that we can address is necessarily incomplete.
We are not addressing the very interesting questions on the geometry of embedded surfaces, lengths and isotopy classes of geodesics, exceptional Dehn fillings, or geometric properties of other knot and link invariants.
Some of the results and techniques we have been unable to cover will appear in a forthcoming book in preparation by Purcell \cite{Purcell:book}.
\subsection{Originality, or lack thereof}
With one exception, all of the results presented in this survey have appeared elsewhere in the literature. For all of these results, we point to references rather than giving rigorous proofs. However, we often include quick sketches of arguments to convey a sense of the methods that have been employed.
The one exception to this rule is \refthm{SymmetricKnotVolume}, which has not previously appeared in writing. Even this result cannot be described as truly original, since the proof works by assembling a number of published theorems. We include the proof to indicate how to assemble the ingredients.
\subsection{Organization}
We organize this survey as follows: \refsec{Definitions} introduces terminology and background that we will use throughout. \refsec{Hyperbolic} is concerned with the problem of determining whether a given link is hyperbolic. We summarize some of the most commonly used methods used for this problem, and provide examples. In \refsec{Volume} and \refsec{Cusps}, we address the problem of estimating important geometric invariants of hyperbolic link complements in terms of diagrammatic quantities. In \refsec{Volume}, we discuss methods for obtaining two sided combinatorial bounds on the hyperbolic volume of link complements. In \refsec{Cusps}, we address the analogous questions for cusp shapes and for lengths of curves on cusp tori.
\section{Definitions}\label{Sec:Definitions}
In this section, we gather many of the key definitions that will be used throughout the paper. Most of these definitions can be found (and are better motivated) in standard textbooks on knots and links, and on $3$--manifolds and hyperbolic geometry. We list them briefly for ease of reference.
\subsection{Diagrams of knots and links}
Some of the initial study of knots and links, such as the work of Tait in the late 1800s, was a study of \emph{diagrams}: projections of a knot or link onto a plane ${\mathbb{R}}^2 \subset {\mathbb{R}}^3$, which can be compactified to $S^2\subset S^3$. We call the surface of projection the \emph{plane of projection} for the diagram. We may assume that a link has a diagram that is a 4-valent graph on $S^2$, with over-under crossing information at each vertex. When studying a knot via diagrams, there are obvious moves that one can make to the diagram that do not affect the equivalence class of the knot; for example, these include \emph{flypes} studied by Tait, and Reidemeister moves studied in the 1930s. Without going into details on these moves, we do want our diagrams to be ``sufficiently reduced,'' in ways that are indicated by the following definitions.
\begin{definition}\label{Def:Prime}
A diagram of a link is \emph{prime} if for any simple closed curve $\gamma \subset S^2$, intersecting the diagram transversely exactly twice in edges, $\gamma$ bounds a disk $D^2 \subset S^2$ that intersects the diagram in a single edge with no crossings.
\end{definition}
Two non-prime diagrams are shown in \reffig{Prime}, left. The first diagram can be simplified by removing a crossing. The second diagram cannot be reduced in the same way, because the knot is composite; it can be thought of as composed of two simpler prime diagrams by joining them along unknotted arcs. Prime diagrams are seen as building blocks of all knots and links, and so we restrict to them.
\begin{figure}
\caption{Left: two diagrams that are not prime. Right: a twist reduced diagram.}
\label{Fig:Prime}
\end{figure}
\begin{definition}\label{Def:CrossingNumber}
Suppose $K$ is a knot or link with diagram $D$. The \emph{crossing number} of the diagram, denoted $c(D)$, is the number of crossings in $D$.
The \emph{crossing number} of $K$, denoted $c(K)$, is defined to be the minimal number of crossings in any diagram of $K$.
\end{definition}
Removing a crossing as on the left of \reffig{Prime} gives a diagram that is more reduced. The following definition gives another way to reduce diagrams.
\begin{definition}\label{Def:TwistReduced}
Let $K$ be a knot or link with diagram $D$. The diagram is said to be \emph{twist reduced} if whenever $\gamma$ is a simple closed curve in the plane of projection meeting the diagram exactly twice in two crossings, running directly through each crossing, then $\gamma$ bounds a disk containing only a string of alternating bigon regions in the diagram. See \reffig{Prime}, right.
\end{definition}
Any diagram can be modified to be twist reduced by performing a sequence of flypes and removing unnecessary crossings.
\begin{definition}\label{Def:TwistNumber}
A \emph{twist region} in a diagram is a portion of the diagram consisting of a maximal string of bigons arranged end-to-end, where maximal means there are no other bigons adjacent to the ends. Additionally, we require twist regions to be alternating (otherwise, remove crossings).
The number of twist regions in a prime, twist reduced diagram is the \emph{twist number} of the diagram, and is denoted $t(D)$. The minimum of $t(D)$ over all diagrams of $K$ is denoted $t(K)$.
\end{definition}
\subsection{The link complement}
Rather than study knots exclusively via diagrams and graphs, we typically consider the \emph{knot complement}, namely the $3$--manifold $S^3\smallsetminus K$. This is homeomorphic to the interior of the compact manifold $X(K):=S^3\smallsetminus N(K)$, called the \emph{knot exterior}, where $N(K)$ is a regular neighborhood of $K$. When we consider knot complements and knot exteriors, we are able to apply results in $3$--manifold topology, and consider curves and surfaces embedded in them. The following definitions apply to such surfaces.
\begin{definition}\label{Def:Incompressible}
An orientable surface $S$ properly embedded in a compact orientable $3$--manifold $\overline{M}$ is \emph{incompressible} if whenever $E\subset \overline{M}$ is a disk with $\partial E\subset S$, there exists a disk $E'\subset S$ with $\partial E' = \partial E$.
$S$ is \emph{$\partial$-incompressible} if whenever $E\subset \overline{M}$ is a disk whose boundary is made up of an arc $\alpha$ on $S$ and an arc on $\partial \overline{M}$, there exists a disk $E'\subset S$ whose boundary is made up of the arc $\alpha$ on $S$ and an arc on $\partial S$.
\end{definition}
\begin{definition}\label{Def:Essential}
Let $\overline{M}$ be a compact orientable $3$--manifold.
Consider a (possibly non-orientable) properly embedded surface $S \subset \overline{M}$. Let $\widetilde{S}$ be the boundary of a regular neighborhood $N(S) \subset \overline{M}$. If $S \neq S^2$, it is said to be \emph{essential} if $\widetilde{S}$ is incompressible and $\partial$-incompressible. A $2$--sphere $S \subset \overline{M}$ is called \emph{essential} if it does not bound a $3$--ball.
We will say that $\overline{M}$ is \emph{Haken} if it is irreducible and contains an essential surface $S$. In this case, we also say the interior $M$ is Haken.
\end{definition}
Finally, we will sometimes consider knot complements that are fibered, in the sense of the following definition.
\begin{definition}\label{Def:Fibered}
A $3$--manifold $M$ is said to be \emph{fibered} if it can be written as a fiber bundle over $S^1$, with fiber a surface. Equivalently, $M$ is the mapping torus of a self-homeomorphism $f$ of a (possibly punctured) surface $S$. That is, there exists $f\colon S\to S$ such that
\[ M = S\times I / (x,0) \sim (f(x),1). \]
The map $f$ is called the \emph{monodromy} of the fibration.
\end{definition}
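A standard example (see e.g.~\cite{thurston:notes}) is the figure--8 knot: its complement fibers over $S^1$ with fiber a once-punctured torus, and the monodromy acts on the first homology of the fiber by the Anosov matrix $\left(\begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix}\right)$. We return to this example in \refsec{Hyperbolic}.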
\subsection{Hyperbolic geometry notions}
The knot and link complements that we address in this article also admit geometric structures, as in the following definition.
\begin{definition}\label{Def:HyperbolicKnot}
A knot or link $K$ is said to be \emph{hyperbolic} if its complement admits a complete metric of constant curvature $-1$. Equivalently, it is hyperbolic
if $S^3 \smallsetminus K = {\mathbb{H}}^3 / \Gamma$, where ${\mathbb{H}}^3$ is hyperbolic $3$--space and $\Gamma$ is a discrete, torsion-free group of isometries, isomorphic to $\pi_1(S^3 \smallsetminus K)$.
\end{definition}
Thurston showed that a prime knot in $S^3$ is either hyperbolic, or it is a \emph{torus knot} (can be embedded on an unknotted torus in $S^3$), or it is a \emph{satellite knot} (can be embedded in the regular neighborhood of a non-trivial knot) \cite{thurston:bulletin}. This article is concerned with hyperbolic knots and links.
\begin{definition}\label{Def:CuspedManifold}
Suppose $\overline{M}$ is a compact orientable $3$--manifold with $\partial \overline{M}$ a collection of tori, and suppose the interior $M \subset \overline{M}$ admits a complete hyperbolic structure. We say $M$ is a \emph{cusped manifold}.
Moreover, $M$ has ends of the form $T^2\times [1,\infty)$. Under the covering map $\rho\colon {\mathbb{H}}^3\to M$, each end is geometrically realized as the image of a horoball $H_i\subset {\mathbb{H}}^3$. The preimage $\rho^{-1}(\rho(H_i))$ is a collection of horoballs. By shrinking $H_i$ if necessary, we can ensure that these horoballs have disjoint interiors in ${\mathbb{H}}^3$. For such a choice of $H_i$, $\rho(H_i) = C_i$ is said to be a \emph{horoball neighborhood} of the \emph{cusp} $C_i$, or \emph{horocusp} in $M$.
\end{definition}
\begin{definition}\label{Def:CuspShape}
The boundary of a horocusp inherits a Euclidean structure from the hyperbolic structure on $M$. This Euclidean structure is well defined up to similarity. The similarity class is called the \emph{cusp shape}.
\end{definition}
\begin{definition}\label{Def:MaximalCusp}
For each cusp of $M$ there is a $1$--parameter family of horoball neighborhoods obtained by expanding the horoball $H_i$ while keeping the same limiting point on the sphere at infinity. In the preimage, expanding $H_i$ expands all horoballs in the collection $\rho^{-1}(C_i)$. Expand each cusp until the collection of horoballs $\rho^{-1}(\cup C_i)$ becomes tangent, and cannot be expanded further while keeping their interiors disjoint. This is a choice of \emph{maximal cusps}. The choice depends on the order of expansion of cusps $C_1, \dots, C_n$. If $M$ has a single end $C_1$ then there is a unique choice of expansion, giving a unique maximal cusp referred to as \emph{the maximal cusp} of $M$.
\end{definition}
\begin{definition}\label{Def:Slope}
For a fixed set of embedded horoball neighborhoods $C_1, \dots, C_n$ of the cusps of a cusped hyperbolic $3$--manifold $M$, we have noted that the torus $\partial C_i$ inherits a Euclidean metric. Any isotopy class of simple closed curves on the torus is called a \emph{slope}. The \emph{length of a slope} $s$, denoted $\ell(s)$, is defined to be the length of a geodesic representative of $s$ on the Euclidean torus $\partial C_i$.
\end{definition}
\section{Determining hyperbolicity}\label{Sec:Hyperbolic}
Given a combinatorial description of a knot or link, such as a diagram or braid presentation, one of the first things we would often like to ascertain is whether the link complement admits a hyperbolic structure at all. In this section, we describe the currently available tools to check this and give examples of knots to which they apply.
There are three main tools used to prove a link or family of links is hyperbolic. The first is direct calculation, for example using gluing and completeness equations, often with the help of a computer. The second is Thurston's geometrization theorem for Haken manifolds, which says that the only obstruction to $X(K)$ being hyperbolic consists of surfaces with non-negative Euler characteristic. The third is to perform Dehn filling along sufficiently long slopes on a manifold that is already known to be hyperbolic, for instance by one of the previous two methods.
\subsection{Computing hyperbolicity directly}
From Riemannian geometry, a manifold $M$ admits a hyperbolic structure if and only if $M = {\mathbb{H}}^3 / \Gamma$, where $\Gamma \cong \pi_1(M)$ is a discrete subgroup of $\mathrm{Isom}^+({\mathbb{H}}^3) = \operatorname{PSL}(2,{\mathbb{C}})$. See \refdef{HyperbolicKnot}.
Therefore one way to find a hyperbolic structure on a link complement is to find a discrete faithful representation of its fundamental group into $\operatorname{PSL}(2,{\mathbb{C}})$. This is usually impractical to do directly. However, note that if a manifold $M$ can be decomposed into simply connected pieces, for example a triangulation by tetrahedra, then these lift to the universal cover. If this cover is isometric to $ {\mathbb{H}}^3$, then the lifted tetrahedra will be well-behaved in hyperbolic 3--space. Conversely, if the lifted tetrahedra fit together coherently in ${\mathbb{H}}^3$, in a group--equivariant fashion, one can glue the metrics on those tetrahedra to obtain a hyperbolic metric on $M$. This gives a condition for determining hyperbolicity, which is often implemented in practice.
\subsubsection{Gluing and completeness equations for triangulations}
The first method for finding a hyperbolic structure is direct, and is used most frequently by computer, such as in the software SnapPy that computes hyperbolic structures directly from diagrams \cite{SnapPy}. The method is to first decompose the knot or link complement into ideal tetrahedra, as in \refdef{IdealTetrahedron}, and then to solve a system of equations on the tetrahedra to obtain a hyperbolic structure. See \refthm{GluingCompleteness}.
This method is most useful for a single example, or for a finite collection of examples. For example, it was used by Hoste, Thistlethwaite, and Weeks to classify all prime knots with up to 16 crossings \cite{HTW}. Of the $1,701,936$ distinct prime knots with at most 16 crossings, all but 32 are hyperbolic.
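For a single example, this kind of verification takes only a few lines in SnapPy's Python interface. The sketch below is illustrative only and assumes a standard SnapPy installation; the census name \texttt{4\_1} denotes the figure--8 knot complement.
\begin{verbatim}
# A minimal SnapPy session checking hyperbolicity of one knot complement.
import snappy

M = snappy.Manifold('4_1')     # the figure-8 knot complement
print(M.num_tetrahedra())      # size of the triangulation SnapPy uses
print(M.solution_type())       # 'all tetrahedra positively oriented'
print(M.volume())              # approximately 2.029883...
\end{verbatim}
A numerical solution with all tetrahedra positively oriented is strong evidence of hyperbolicity; recent versions of SnapPy can also certify such computations rigorously.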
We will give a brief description of the method. For further details, there are several good references, including notes of Thurston \cite{thurston:notes} where these ideas first appeared, and papers by Neumann and Zagier~\cite{NeumannZagier}, and Futer and Gu{\'e}ritaud~\cite{fg:angled-survey}. Purcell is developing a book with details and examples~\cite{Purcell:book}.
\begin{definition}\label{Def:IdealTetrahedron}
An \emph{ideal tetrahedron} is a tetrahedron whose vertices have been removed. When a knot or link complement is decomposed into ideal tetrahedra, all ideal vertices lie on the link, hence have been removed.
\end{definition}
There are algorithms for decomposing knot and link complements into ideal tetrahedra. For example, Thurston decomposes the figure--8 knot complement into two ideal tetrahedra \cite{thurston:notes}. Menasco generalizes this, describing how to decompose a link complement into two ideal polyhedra, which can then be subdivided into tetrahedra \cite{Menasco:Polyhedra}. Weeks uses a different algorithm in his computer software SnapPea \cite{Weeks:Algorithm}.
Assuming we have a decomposition of a knot or link complement into ideal tetrahedra, we now describe how to turn this into a complete hyperbolic structure. The idea is to associate a complex number to each ideal edge of each tetrahedron encoding the hyperbolic structure of the ideal tetrahedron. The triangulation gives a complete hyperbolic structure if and only if these complex numbers satisfy certain equations: the \emph{edge gluing} and \emph{completeness} equations.
Consider ${\mathbb{H}}^3$ in the upper half space model, ${\mathbb{H}}^3 \cong {\mathbb{C}}\times (0, \infty)$. An ideal tetrahedron $\Delta \subset {\mathbb{H}}^3$ can be moved by isometry so that three of its vertices are placed at $0$, $1$, and $\infty$ in $\partial{\mathbb{H}}^3 \cong {\mathbb{C}}\cup \{\infty\}$. The fourth vertex lies at a point $z\in {\mathbb{C}} \smallsetminus \{0, 1 \}$. The edges between these vertices are hyperbolic geodesics.
\begin{definition}
The parameter $z\in{\mathbb{C}}$ described above is called the \emph{edge parameter} associated with the edge from $0$ to $\infty$. It determines $\Delta$ up to isometry.
\end{definition}
Notice that if $z$ is real, then the ideal tetrahedron is flat, with no volume. We will prefer to work with $z$ with positive imaginary part. Such a tetrahedron $\Delta$ is said to be \emph{geometric}, or positively oriented. If $z$ has negative imaginary part, the tetrahedron $\Delta$ is negatively oriented.
Given a hyperbolic ideal tetrahedron embedded in ${\mathbb{H}}^3$ as above, we can apply (orientation--preserving) isometries of ${\mathbb{H}}^3$ taking different vertices to $0$, $1$, $\infty$.
By taking each edge to the geodesic from $0$ to $\infty$, we assign edge parameters to all six edges of the ideal tetrahedron. This leads to the following relations between edge parameters:
\begin{lemma}
Suppose $\Delta$ is a hyperbolic ideal tetrahedron with vertices at $0$, $1$, $\infty$, and $z$. Then the edge parameters of the six edges of $\Delta$ are as follows:
\begin{itemize}
\item Edges $[0, \infty]$ and $[1 , z]$ have edge parameter $z$.
\item Edges $[1 , \infty]$ and $[0 , z]$ have edge parameter $1/(1-z)$.
\item Edges $[z , \infty]$ and $[0 , 1]$ have edge parameter $(z-1)/z$.
\end{itemize}
In particular, opposite edges in the tetrahedron have the same edge parameter.
\end{lemma}
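As a quick illustration of the lemma, the three parameters $z$, $1/(1-z)$, and $(z-1)/z$ always have product $-1$, and for a geometric tetrahedron their arguments are the three dihedral angles at an ideal vertex, which sum to $\pi$. The snippet below (not needed for anything that follows) checks this for one choice of shape.
\begin{verbatim}
# Relations among the edge parameters z, 1/(1-z), (z-1)/z.
import cmath

z = 0.6 + 0.9j                         # any shape with positive imaginary part
params = [z, 1 / (1 - z), (z - 1) / z]

product = params[0] * params[1] * params[2]
angle_sum = sum(cmath.phase(w) for w in params)

print(product)     # expect -1, up to rounding
print(angle_sum)   # expect pi: the dihedral angles at a vertex sum to pi
\end{verbatim}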
Suppose an ideal tetrahedron $\Delta$ with vertices at $0$, $1$, $\infty$ and $z$ is glued along the triangle face with vertices at $0$, $\infty$, and $z$ to another tetrahedron $\Delta'$. Then $\Delta'$ will have vertices at $0$, $\infty$, $z$ and at the point $zw$, where $w$ is the edge parameter of $\Delta'$ along the edge $[0, \infty]$. When we glue all tetrahedra in ${\mathbb{H}}^3$ around an ideal edge of the triangulation, if the result is hyperbolic then the product of all edge parameters must be $1$ with arguments summing to $2\pi$. More precisely, the sum of the logs of the edge parameters must be $0+2\pi\,i$.
\begin{definition}[Gluing equations]
Let $e$ be an ideal edge of a triangulation of a $3$--manifold $M$, for example a knot or link complement. Let $z_1, \dots, z_k$ be the edge parameters of the edge of the tetrahedra identified to $e$. The \emph{gluing equation} associated with the edge $e$ is:
\begin{equation}\label{Eqn:Gluing}
\prod_{i=1}^k z_i = 1 \quad \mbox{and} \quad \sum_{i=1}^k \arg(z_i)=2\pi.
\end{equation}
Writing this in terms of logarithms, \refeqn{Gluing} is equivalent to:
\begin{equation}\label{Eqn:LogGluing}
\sum_{i=1}^k \log(z_i) = 2\pi\,i.
\end{equation}
\end{definition}
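As a concrete example, in the standard two-tetrahedron triangulation of the figure--8 knot complement (described in \cite{thurston:notes}), each of the two edges has six tetrahedron edges identified to it, and at the complete structure both tetrahedra are regular, so every edge parameter equals $e^{i\pi/3}$. The short check below verifies that this assignment satisfies \refeqn{Gluing} and \refeqn{LogGluing}.
\begin{verbatim}
# Gluing equation check for the figure-8 knot: six regular shape
# parameters around each of the two edges.
import cmath

z = cmath.exp(1j * cmath.pi / 3)   # edge parameter of a regular ideal tetrahedron
params = [z] * 6                   # six tetrahedron edges around each edge

product = 1
for w in params:
    product *= w
log_sum = sum(cmath.log(w) for w in params)

print(product)    # expect 1, up to rounding
print(log_sum)    # expect 2*pi*i
\end{verbatim}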
A triangulation may satisfy all gluing equations at all its edges, and yet fail to give a complete hyperbolic structure. To ensure the structure is complete, an additional condition must be satisfied for each torus boundary component.
\begin{definition}[Completeness equations]
Let $T$ be a torus boundary component of a $3$--manifold $M$ whose interior admits an ideal triangulation.
Truncate the tips of all tetrahedra to obtain a triangulation of $T$. Let $\mu$ be an oriented simple closed curve on $T$, isotoped to meet edges of the triangulation transversely, and to avoid vertices. Each segment of $\mu$ in a triangle cuts off a single vertex of the triangle, which comes from an edge of the ideal triangulation and so has an associated edge parameter $z_i$. If the vertex lies to the right of $\mu$, let $\epsilon_i=+1$; otherwise let $\epsilon_i=-1$. The \emph{completeness equation} associated to $\mu$ is:
\begin{equation}
\sum_i \epsilon_i\,\log(z_i) = 0,
\quad \mbox{ which implies } \quad
\prod_i z_i^{\epsilon_i} =1.
\end{equation}
\end{definition}
With these definitions, we may state the main theorem.
\begin{theorem}\label{Thm:GluingCompleteness}
Suppose $\overline{M}$ is a $3$-manifold with torus boundary, equipped with an ideal triangulation. Suppose for some choice of positively oriented edge parameters $\{ z_1, \dots, z_n\}$, the gluing equations are satisfied for each edge, and the completeness equations are satisfied for homology generators $\mu$, $\lambda$ on each component of $\partial {\overline{M}}$. Then the interior of
$\overline{M}$, denoted by $M$, admits a complete hyperbolic structure. Furthermore, the unique hyperbolic metric on $M$ is given by the geometric tetrahedra determined by the edge parameters.
\end{theorem}
In fact, the hypotheses of \refthm{GluingCompleteness} are stronger than necessary. If $\overline{M}$ has $k$ torus boundary components, then only $n-k$ of the $n$ gluing equations are necessary (see \cite{NeumannZagier} or \cite{Choi}). In addition, only one of $\mu$ or $\lambda$ is required from each boundary component \cite{Choi}.
Some classes of $3$--manifolds that can be shown to be hyperbolic using \refthm{GluingCompleteness} include the classes of once-punctured torus bundles, 4-punctured sphere bundles, and 2--bridge link complements \cite{GueritaudFuter:2bridge}. (In each class, some low-complexity examples must be excluded to ensure hyperbolicity.) These manifolds have natural ideal triangulations guided by combinatorics. In the case of $2$--bridge knot and link complements, the triangulation is also naturally adapted to a planar diagram of the link \cite{SakumaWeeks}. Once certain low-complexity cases (such as ($2,q$) torus links) have been excluded, one can show that the gluing equations for these triangulations have a solution. This gives a direct proof that the manifolds are hyperbolic.
\subsubsection{Circle packings and right angled polyhedra}\label{Sec:CirclePack}
Certain link complements have very special geometric properties that allow us to compute their hyperbolic structure directly, but with less work than solving nonlinear gluing and completeness equations as above. These include the Whitehead link, whose complement can be obtained from a regular ideal octahedron with face-identifications \cite{thurston:notes}. They also include an important and fairly general family of link complements called \emph{fully augmented links}, which we now describe.
Starting with any knot or link diagram, identify \emph{twist regions}, as in \refdef{TwistNumber}. The left of \reffig{Augmented} shows a knot diagram with two twist regions. Now, to each twist region, add a simple unknotted closed curve encircling the two strands of the twist region, as shown in the middle of \reffig{Augmented}. This is called a \emph{crossing circle}. Because each crossing circle is an unknot, we may perform a full twist along a disk bounded by that unknot without changing the homeomorphism type of the link complement.
This allows us to remove as many pairs of crossings as possible from twist regions. An example is shown on the right of \reffig{Augmented}. The result is the diagram of a fully augmented link.
\begin{figure}
\caption{Left: a diagram of a knot $K$. Center: adding a crossing circle for each twist region of $K$ produces a link $J$. Right: removing full twists produces a \emph{fully augmented link}.}
\label{Fig:Augmented}
\end{figure}
Provided the original link diagram before adding crossing circles is sufficiently reduced (prime and twist reduced; see Definitions~\ref{Def:Prime} and~\ref{Def:TwistReduced}), the resulting fully augmented link will be hyperbolic, and its hyperbolic structure can be completely determined by a circle packing. The procedure is as follows.
Replace the diagram of the fully augmented link with a trivalent graph by replacing each neighborhood of a crossing circle (with or without a bounded crossing) by a single edge running between the two knot strands it encircles. See \reffig{Decomp}, left. Now take the dual of this trivalent graph; this is a triangulation of $S^2$. Provided the original diagram was reduced, there will be a circle packing whose nerve is this triangulation of $S^2$. The circle packing and its orthogonal circles cut out a right angled ideal polyhedron in ${\mathbb{H}}^3$. The hyperbolic structure on the complement of the fully augmented link is obtained by gluing two copies of this right angled ideal polyhedron. More details are in \cite{FuterPurcell, purcell:augmented}.
\begin{figure}
\caption{Left: Obtain a 3-valent graph by replacing crossing circles with edges. Middle: The dual is a triangulation of $S^2$. Right: The nerve of the triangulation defines a circle packing that cuts out a polyhedron in ${\mathbb{H}}^3$.}
\label{Fig:Decomp}
\end{figure}
\subsection{Geometrization of Haken manifolds}
The methods of the previous section have several drawbacks. While solving gluing and completeness equations works well for examples, it is difficult to use these methods to find hyperbolic structures for infinite classes of examples. The method that has been most useful for showing that infinite classes of knots and links are hyperbolic is to apply Thurston's geometrization theorem for Haken manifolds, which takes the following form for manifolds with torus boundary components.
\begin{theorem}[Geometrization of Haken manifolds]\label{Thm:Haken}
Let $M$ be the interior of a compact manifold $\overline{M}$, such that $\partial \overline{M}$ is a non-empty union of tori. Then exactly one of the following holds:
\begin{itemize}
\item $\overline{M}$ admits an essential torus, annulus, sphere, or disk, or
\item $M$ admits a complete hyperbolic metric.
\end{itemize}
\end{theorem}
Thus the method to prove $M$ is hyperbolic following \refthm{Haken} is to show
$\overline{M}$ cannot admit embedded essential surfaces of nonnegative Euler characteristic. Arguments ruling out such surfaces are typically topological or combinatorial in nature.
Some sample applications of this method are as follows. Menasco used the method to prove any alternating knot or link, aside from a $(2,q)$-torus link, is hyperbolic \cite{Menasco:Alternating}. Adams and his students generalized Menasco's argument to show that almost alternating and toroidally alternating links are hyperbolic \cite{Adams:ToroidallyAlt, Adams:AlmostAlt}. There are many other generalizations, e.g.\ \cite{fkp:hyp}.
Menasco's idea was to subdivide an alternating link complement into two balls, above and below the plane of projection, and crossing balls lying in a small neighborhood of each crossing, with equator along the plane of projection. An essential surface can be shown to intersect the balls above and below the plane of projection in disks only, and to intersect crossing balls in what are called \emph{saddles}. These saddles act as fat vertices on the surface, and can be used to obtain a bound on the Euler characteristic of an embedded essential surface. Combinatorial arguments, using properties of alternating diagrams, then rule out surfaces with non-negative Euler characteristic.
More generally, classes of knots and links can be subdivided into simpler pieces, whose intersection with essential surfaces is then examined. Typically, surfaces with nonnegative Euler characteristic can be restricted to lie in just one or two pieces, and then eliminated.
Thurston's \refthm{Haken} can also be used to show that manifolds with certain properties are hyperbolic. For example, consider again the gluing equations, which form a complicated nonlinear system. If we consider only the imaginary part of the logarithmic gluing equation \refeqn{LogGluing}, the system becomes linear: the sum of the dihedral angles around each edge must be $2\pi$. It is much easier to solve such a system of equations.
\begin{definition}\label{Def:AngleStruct}
Suppose $M$ is the interior of a compact manifold with torus boundary, with an ideal triangulation. A solution to the imaginary part of the (logarithmic) gluing equations \refeqn{LogGluing} for the triangulation is called a \emph{generalized angle structure} on $M$. If all angles lie strictly between $0$ and $\pi$, the solution is called an \emph{angle structure}. See \cite{fg:angled-survey, luo-tillmann:generalized-angles} for background on (generalized) angle structures.
\end{definition}
\begin{theorem}[Angle structures and hyperbolicity]\label{Thm:AngleStruct}
If $M$ admits an angle structure, then $M$ also admits a hyperbolic metric.
\end{theorem}
The proof has been attributed to Casson, and appears in Lackenby \cite{Lackenby:Word}. The idea is to consider how essential surfaces intersect each tetrahedron of the triangulation. These surfaces can be isotoped into \emph{normal form}. A surface without boundary in normal form intersects tetrahedra only in triangles and in quads. The angle structure on $M$ can be used to define a combinatorial area on a normal surface. An adaptation of the Gauss--Bonnet theorem implies that the Euler characteristic is a negative multiple of the combinatorial area. Then one shows that the combinatorial area of an essential surface must always be strictly positive, hence Euler characteristic is strictly negative. Then \refthm{Haken} gives the result.
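To make the combinatorial area concrete, one common normalization assigns to a normal disk $D$ meeting the edges of the angled tetrahedra in interior angles $\theta_1,\dots,\theta_m$ the quantity
\[
a(D) \;=\; \sum_{i=1}^m (\pi - \theta_i) \;-\; 2\pi.
\]
With this convention a normal triangle, whose three angles sum to $\pi$, has $a(D)=0$, while a normal quadrilateral separating the pair of opposite edges carrying angle $\theta$ has $a(D)=2\theta>0$. Summing over the disks of a closed normal surface $S$, and using the fact that the angles around each ideal edge sum to $2\pi$, gives $a(S)=-2\pi\chi(S)$, which is the combinatorial Gauss--Bonnet statement referred to above.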
Knots and links that can be shown to be hyperbolic using the tools of \refthm{AngleStruct} include arborescent links, apart from three enumerated families of non-hyperbolic exceptions. This can be shown by constructing an ideal triangulation (or a slightly more general ideal decomposition) of the complement of an arborescent link, and endowing it with an angle structure \cite{FG:Arborescent}.
Conversely, every hyperbolic knot or link complement in $S^3$ admits \emph{some} ideal triangulation with an angle structure \cite{HRS:AnglesHomology}. However, this triangulation is not explicitly constructed, and need not have any relation to the combinatorics of a diagram.
\subsection{Hyperbolic Dehn filling}
Another method for proving that classes of knots or links are hyperbolic is to use Dehn filling. Thurston showed that all but finitely many Dehn fillings on a hyperbolic manifold with a single cusp yield a closed hyperbolic $3$--manifold \cite{thurston:notes}.
More effective versions of Thurston's theorem have been exploited to show hyperbolicity of all but a bounded number of Dehn fillings. Results in this vein include the $2\pi$--theorem, which yields negatively curved metrics \cite{BleilerHodgson:2pi}, and geometric deformation theorems of Hodgson and Kerckhoff \cite{HK:univ}. The sharpest result along these lines is the $6$--Theorem, due independently to Agol \cite{Agol:Bounds} and Lackenby \cite{Lackenby:Word}. (The statement below assumes the geometrization conjecture, proved by Perelman shortly after the papers \cite{Agol:Bounds, Lackenby:Word} were published.)
\begin{theorem}[6--Theorem]\label{Thm:6Theorem}
Suppose $M$ is a hyperbolic $3$--manifold homeomorphic to the interior of a compact manifold $\overline{M}$ with torus boundary components $T_1, \dots, T_k$. Suppose $s_1, \dots, s_k$ are slopes, with $s_i\subset T_i$. Suppose there exists a choice of horoball neighborhoods of the cusps of $M$ such that in the induced Euclidean metric on $T_i$, the slope $s_i$ has length strictly greater than $6$, for all $i$. Then the manifold obtained by Dehn filling along $s_1, \dots, s_k$, denoted $M(s_1, \dots, s_k)$, is hyperbolic.
\end{theorem}
\refthm{6Theorem} can be used to prove that a knot or link is hyperbolic, as follows. First, show the knot complement $S^3\smallsetminus K$ is obtained by Dehn filling a manifold $Y$ that is known to be hyperbolic. Then, prove that the slopes used to obtain $S^3\smallsetminus K$ from $Y$ have length greater than $6$ on a horoball neighborhood of the cusps of $Y$. See also \refsec{Cusps} for ways to prove that slopes are long.
Some examples of links to which this theorem has been applied include \emph{highly twisted links}, which have diagrams with $6$ or more crossings in every twist region. (See \refdef{TwistNumber}.)
These links can be obtained by surgery, as follows. Start with a fully augmented link as described above, for instance the example shown in \reffig{Augmented}. Performing a Dehn filling along the slope $1/n$ on a crossing circle adds $2n$ crossings to the twist region encircled by that crossing circle, and removes the crossing circle from the diagram.
When $|n| \geq 3$, the result of such Dehn filling on each crossing circle is highly twisted.
Using the explicit geometry of fully augmented links obtained from the circle packing, we may bound lengths of the slopes $1/n_i$ on crossing circles. Then \refthm{6Theorem} shows that the resulting knots and links must be hyperbolic \cite{FuterPurcell}.
Other examples can also be obtained in this manner. For example, Baker showed that infinite families of Berge knots are hyperbolic by showing they are Dehn fillings of minimally twisted chain link complements, which are known to be hyperbolic, along sequences of slopes that are known to grow in length \cite{Baker:HypBerge}.
The $6$--Theorem is sharp. This was shown by Agol \cite{Agol:Bounds}, and by Adams and his students for a knot complement \cite{AdamsEtAl:sharp}. The pretzel knot $P(n,n,n)$, which has $3$ twist regions, and the same number of crossings in each twist region, has a toroidal Dehn filling along a slope with length exactly $6$.
\subsection{Fibered knots and high distance knots}
We finish this section with a few remarks about other ways to prove manifolds are hyperbolic, and give references for further information. However, these methods seem less directly applicable to knots in $S^3$ than those discussed above, and the full details are beyond the scope of this paper.
Recall \refdef{Fibered} of a fibered knot. When the monodromy is pseudo-Anosov, the knot complement is known to be hyperbolic \cite{Thurston:Fiber}. The figure--8 knot complement can be shown to be hyperbolic in this way; see for example~\cite[page~70]{thurston:notes}. Certain links formed by a closed braid together with its braid axis have also been shown to be hyperbolic using these methods \cite{HironakaKin}. It seems difficult to apply these methods directly to knots, however.
Another method is to consider bridge surfaces of a knot. Briefly, there is a notion of distance that measures the complexity of the bridge splitting of a knot. Bachman and Schleimer proved that any knot whose bridge distance is at least 3 must be hyperbolic \cite{BachmanSchleimer}. It seems difficult to bound bridge distance for classes of examples directly from a knot diagram. Recent work of Johnson and Moriah is the first that we know to obtain such bounds \cite{JohnsonMoriah}.
\section{Volumes}\label{Sec:Volume}
As mentioned in the introduction, the goal of effective geometrization is
to determine or estimate geometric invariants directly from a diagram. As volume is the first and most natural invariant of a hyperbolic manifold, the problem of estimating volume from a diagram has received considerable attention. In this section, we survey some of the results and techniques on both upper and lower bounds on volume.
\subsection{Upper bounds on volume}
Many bounds in this section involve constants with geometric meaning. In particular, we define
\[
{v_{\rm tet}} = \text{volume of a regular ideal tetrahedron in } {\mathbb{H}}^3 = 1.0149 \ldots
\]
and
\[
{v_{\rm oct}} = \text{volume of a regular ideal octahedron in } {\mathbb{H}}^3 = 3.6638 \ldots
\]
These constants are useful in combinatorial upper bounds on volume because every geodesic tetrahedron in ${\mathbb{H}}^3$ has volume at most ${v_{\rm tet}}$, and every geodesic octahedron has volume at most ${v_{\rm oct}}$. See e.g.\ Benedetti and Petronio \cite{BenedettiPetronio}.
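Both constants can be expressed in terms of the Lobachevsky function $\Lambda(\theta)=-\int_0^\theta \log\abs{2\sin t}\,dt$: one has ${v_{\rm tet}} = 3\Lambda(\pi/3)$ and ${v_{\rm oct}} = 8\Lambda(\pi/4)$. The short computation below, a numerical illustration only, recovers the decimal values quoted above using the standard series $\Lambda(\theta)=\tfrac{1}{2}\sum_{n\geq 1} \sin(2n\theta)/n^2$.
\begin{verbatim}
# Numerical values of v_tet = 3*Lob(pi/3) and v_oct = 8*Lob(pi/4).
import numpy as np

def lobachevsky(theta, terms=200000):
    n = np.arange(1, terms + 1)
    return 0.5 * np.sum(np.sin(2 * n * theta) / n**2)

print(3 * lobachevsky(np.pi / 3))   # approximately 1.0149
print(8 * lobachevsky(np.pi / 4))   # approximately 3.6638
\end{verbatim}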
\subsubsection{Bounds in terms of crossing number}
The first volume bounds for hyperbolic knots are due to Adams \cite{Adams:thesis}.
He showed that, if $D = D(K)$ is a diagram of a hyperbolic knot or link with $c(D) \geq 5$ crossings, then
\begin{equation}\label{Eqn:AdamsFirst}
\operatorname{vol}(S^3 \smallsetminus K) \leq 4 (c(D) - 4) {v_{\rm tet}}.
\end{equation}
Adams' method of proof was to use the knot diagram to divide $S^3 \smallsetminus K$ into tetrahedra with a mix of ideal and material vertices, and to count the tetrahedra. Since the subdivision contains at most $4(c(D) - 4)$ tetrahedra, and each tetrahedron has volume at most ${v_{\rm tet}}$, the bound follows.
In a more recent paper \cite{Adams:triple-crossing-number}, Adams improved the upper bound of \refeqn{AdamsFirst}:
\begin{theorem}\label{Thm:AdamsV8}
Let $D = D(K)$ be a diagram of a hyperbolic link $K$, with at least $5$ crossings. Then
\[
\operatorname{vol}(S^3 \smallsetminus K) \leq (c(D) - 5) {v_{\rm oct}} + 4 {v_{\rm tet}}.
\]
\end{theorem}
Again, the method is to divide the link complement into a mixture of tetrahedra and octahedra, and to bound the volume of each polyhedron by ${v_{\rm tet}}$ or ${v_{\rm oct}}$ respectively. The subdivision into octahedra was originally described by D.\ Thurston.
\begin{figure}
\caption{Every twist knot $K_n$ has two twist regions, consisting of $2$ and $n$ crossings. Every $K_n$ can be obtained by Dehn filling the red component of the Whitehead link $L$, depicted on the right.}
\label{Fig:TwistKnot}
\end{figure}
The upper bound of \refthm{AdamsV8} is known to be asymptotically sharp, in the sense that there exist diagrams of knots and links $K_n$ with $\operatorname{vol}(S^3\smallsetminus K_n)/c(K_n) \to {v_{\rm oct}}$ as $n\to \infty$; see \cite{CKP:gmax}. On the other hand, this upper bound can be arbitrarily far from sharp. A useful example is the sequence of twist knots $K_n$ depicted in \reffig{TwistKnot}. Since the number of crossings is $n+2$, the upper bound of \refthm{AdamsV8} is linear in $n$. However, the volumes of $K_n$ are universally bounded, increasing to an asymptotic limit:
\[
\operatorname{vol}(S^3 \smallsetminus K_n) < {v_{\rm oct}}, \qquad \lim_{n \to \infty} \operatorname{vol}(S^3 \smallsetminus K_n) = {v_{\rm oct}}.
\]
This holds as a consequence of the following theorem of Gromov and Thurston \cite[Theorem 6.5.6]{thurston:notes}.
\begin{theorem}\label{Thm:VolumeDecreases}
Let $M$ be a finite volume hyperbolic manifold with cusps. Let $N = M(s_1, \ldots, s_n)$ be a Dehn filling of some cusps of $M$. Then $\operatorname{vol}(N) < \operatorname{vol}(M)$.
\end{theorem}
Returning to the case of twist knots, every $K_n$ can be obtained by Dehn filling on one component of the Whitehead link $L$, shown in \reffig{TwistKnot}, right. \refthm{VolumeDecreases} implies
$\operatorname{vol}(S^3 \smallsetminus K_n) < \operatorname{vol}(S^3 \smallsetminus L) = {v_{\rm oct}}$.
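This is easy to observe experimentally. In the sketch below (illustrative only), we assume that the SnapPy census manifold \texttt{m129} is the Whitehead link complement, which can be checked with \texttt{identify()}; filling one cusp along the slope $1/n$ then produces twist knot complements whose volumes increase towards ${v_{\rm oct}}$.
\begin{verbatim}
# Volumes of twist knots obtained by Dehn filling the Whitehead link.
import snappy

W = snappy.Manifold('m129')       # Whitehead link complement (an assumption;
print(W.volume())                 #   check with W.identify()); volume ~ 3.6638

for n in range(2, 7):
    M = snappy.Manifold('m129')
    M.dehn_fill((1, n), 0)        # fill the first cusp along the slope 1/n
    print(n, M.volume())          # volumes increase towards v_oct
\end{verbatim}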
\subsubsection{Bounds in terms of twist number}
Following the example of twist knots in \reffig{TwistKnot}, it makes sense to seek upper bounds on volume in terms of the twist number $t(K)$ of a knot $K$ (see \refdef{TwistNumber}), rather than the crossing number alone.
The following result combines the work of Lackenby \cite{lackenby:alt-volume} with an improvement by Agol and D.\ Thurston \cite[Appendix]{lackenby:alt-volume}.
\begin{theorem}\label{Thm:LATUpperBound}
Let $D(K)$ be a diagram of a hyperbolic link $K$. Then
\[
\operatorname{vol}(S^3 \smallsetminus K) \leq 10 (t(D) - 1) {v_{\rm tet}}.
\]
Furthermore, this bound is asymptotically sharp, in the sense that there exist knot diagrams $D_n = D(K_n)$ with $\operatorname{vol}(S^3 \smallsetminus K_n)/t(D_n) \to 10 {v_{\rm tet}}$.
\end{theorem}
The method of proof is as follows. First, one constructs a \emph{fully augmented link} $L$, by adding an extra component for each twist region of $D(K)$ (see \reffig{Augmented}). As described in \refsec{CirclePack}, the link complement $S^3 \smallsetminus L$ has simple and explicit combinatorics, making it relatively easy to bound $\operatorname{vol}(S^3 \smallsetminus L)$ by counting tetrahedra. Then,
\refthm{VolumeDecreases} implies that the same upper bound on volume applies to $S^3 \smallsetminus K$.
As a counterpart to the asymptotic sharpness of \refthm{LATUpperBound}, there exist sequences of knots where $t(K_n) \to \infty$ but $\operatorname{vol}(S^3 \smallsetminus K_n)$ is universally bounded. One family of such examples is the \emph{double coil knots} studied by the authors \cite{fkp:coils}.
Subsequent refinements or interpolations between \refthm{AdamsV8} and \refthm{LATUpperBound} have been found by Dasbach and Tsvietkova \cite{DasbachTsvietkova, DasbachTsvietkova:simplicial} and Adams \cite{Adams:bipyramids}. These refinements produce a smaller upper bound compared to that of \refthm{LATUpperBound} when the diagram $D(K)$ has both twist regions with many crossings and with few crossings. However, the worst case scenario for the multiplicative constant does not improve due to the asymptotic sharpness of Theorems~\ref{Thm:AdamsV8} and~\ref{Thm:LATUpperBound}.
\subsection{Lower bounds on volume}
By results of Jorgensen and Thurston \cite{thurston:notes}, the volumes of hyperbolic $3$--manifolds are well-ordered. It follows that every family of hyperbolic $3$--manifolds (e.g.\ link complements; fibered knot complements, knot complements of genus 3, etc.) contains finitely many members realizing the lowest volume. Gabai, Meyerhoff, and Milley \cite{gmm:smallest-cusped} showed that the three knot complements of lowest volume are the figure-8 knot, the $5_2$ knot, and the $(-2,3,7)$ pretzel, whose volumes are
\begin{equation}\label{Eqn:LowestKnots}
\operatorname{vol}(4_1) = 2 {v_{\rm tet}} = 2.0298\ldots, \qquad
\operatorname{vol}( 5_2) = \operatorname{vol}( P(-2,3,7)) = 2.8281 \ldots.
\end{equation}
Agol \cite{agol:multicusp} showed that the two multi-component links of lowest volume are the Whitehead link and the $(-2,3,8)$ pretzel link, both of which have volume ${v_{\rm oct}} = 3.6638 \ldots$.
Yoshida \cite{Yoshida:Smallest4Cusped} has identified the smallest volume link of $4$ components, with volume $2 {v_{\rm oct}}$.
Beyond these entries, lower bounds applicable to \emph{all} knots (or \emph{all} links) become scarce. Not even the lowest volume link of $3$ components is known to date.
Nevertheless, there are several practical methods of obtaining diagrammatic lower bounds on the volume of a knot or link, each of which applies to an infinite family of links, and each of which produces \emph{scalable} lower bounds that become larger as the complexity of a diagram becomes larger.
We survey these methods below.
\subsubsection{Angle structures} Suppose that $S^3 \smallsetminus K$ has an ideal triangulation $\tau$ supporting an angle structure $\theta$.
(Recall \refdef{AngleStruct}.) Every ideal tetrahedron of $\tau$, supplied with angles via $\theta$, has an associated volume. As a consequence, one may naturally define a volume $\operatorname{vol}(\theta)$ by summing the volumes of the individual tetrahedra.
\begin{conjecture}[Casson]\label{Conj:Casson}
Let $\tau$ be an ideal triangulation of a hyperbolic manifold $M$, which supports an angle structure $\theta$. Then
\[
\operatorname{vol}(\theta) \leq \operatorname{vol}(M),
\]
with equality if and only if $\theta$ solves the gluing equations and gives the complete hyperbolic structure.
\end{conjecture}
While \refconj{Casson} is open in general, it is known to hold if the triangulation $\tau$ is \emph{geometric}, meaning that some (possibly different) angle structure $\theta'$ solves the gluing equations on $\tau$. In this case, a theorem of Casson and Rivin \cite{fg:angled-survey, rivin:volume} says that $\theta'$ uniquely maximizes volume over all angle structures on $\tau$, implying in particular that $\operatorname{vol}(\theta) \leq \operatorname{vol}(\theta') = \operatorname{vol}(M)$.
In particular, the known case of \refconj{Casson} has been applied to the family of $2$--bridge links. In this case, the link complement has a natural angled triangulation whose combinatorics is closely governed by the link diagram \cite[Appendix]{GueritaudFuter:2bridge}. It follows that, for a sufficiently reduced diagram $D$ of a $2$--bridge link $K$,
\begin{equation}\label{Eqn:2BridgeVolume}
2 {v_{\rm tet}} t(D) - 2.7066 \leq \operatorname{vol}(S^3 \smallsetminus K) \leq 2{v_{\rm oct}} (t(D) - 1),
\end{equation}
which both sharpens the upper bound of \refthm{LATUpperBound} and proves a comparable lower bound.
There are rather few other families where this method has been successfully applied. One is the weaving knots studied by Champanerkar, Kofman, and Purcell \cite{CKP:Weaving}.
In the spirit of open problems, we mention the family of fibered knots and links. Agol showed that these link complements admit combinatorially natural \emph{veering triangulations} \cite{Agol:veering}, which have angle structures with nice properties \cite{HRST:veering, FG:veering}. A proof of \refconj{Casson}, even for this special family, would drastically expand the list of link complements for which we have practical, combinatorial volume estimates. See Worden \cite{Worden:experiment} for more on this problem.
\subsubsection{Guts} One powerful method of estimating the volume of a Haken $3$--manifold was developed by Agol, Storm, and Thurston \cite{AST:guts}, building on previous work of Agol \cite{Agol:guts}.
\begin{definition}
Let $M$ be a Haken hyperbolic $3$--manifold and $S \subset M$ a properly embedded essential surface. We use the symbol $M {\backslash \backslash} S$ to denote the complement in $M$ of a collar neighborhood of $S$. Following the work of Jaco, Shalen, and Johannson \cite{jaco-shalen, johannson}, there is a canonical way to decompose $M {\backslash \backslash} S$ along essential annuli into three types of pieces:
\begin{itemize}
\item $I$--bundles over a subsurface $\Sigma \subset S$,
\item Seifert fibered pieces, which are necessarily solid tori when $M$ is hyperbolic,
\item all remaining pieces, which are denoted $\operatorname{guts}(M,S)$.
\end{itemize}
Thurston's hyperbolization theorem (a variant of \refthm{Haken}) implies that $\operatorname{guts}(M,S)$ admits a hyperbolic metric with totally geodesic boundary. By Miyamoto's theorem \cite{miyamoto}, this metric with geodesic boundary has volume at least ${v_{\rm oct}} \abs{ \chi(\operatorname{guts}(M,S)) }$, where $\chi$ denotes Euler characteristic.
\end{definition}
Agol, Storm, and Thurston showed \cite{AST:guts}:
\begin{theorem}\label{Thm:ASTGuts}
Let $M$ be a Haken hyperbolic $3$--manifold and $S \subset M$ a properly embedded essential surface. Then
\[
\operatorname{vol}(M) \geq {v_{\rm oct}} \abs{ \chi(\operatorname{guts}(M,S)) }.
\]
\end{theorem}
The proof of \refthm{ASTGuts} relies on geometric estimates due to Perelman. Agol, Storm, and Thurston double $M {\backslash \backslash} S$ along its boundary and apply Ricci flow with surgery.
They show that the metric on $\operatorname{guts}(M,S)$ converges to the one with totally geodesic boundary, while volume decreases, and while the metric on the remaining pieces shrinks away to volume $0$.
\refthm{ASTGuts} has been applied to several large families of knots. For alternating knots and links, Lackenby computed the guts of checkerboard surfaces in an alternating diagram \cite{lackenby:alt-volume}. Combined with
\refthm{LATUpperBound} and \refthm{ASTGuts}, this implies:
\begin{theorem}\label{Thm:LackenbyVolAlt}
Let $D$ be a prime alternating diagram of a hyperbolic link $K$ in $S^3$. Then
\begin{equation*}
\frac{{v_{\rm oct}}}{2} (t(D)-2) \leq \operatorname{vol}(S^3 \smallsetminus K) \leq 10{v_{\rm tet}} (t(D) - 1).
\end{equation*}
\end{theorem}
Thus, for alternating knots, the combinatorics of a diagram determines $\operatorname{vol}(S^3 \smallsetminus K)$ up to a factor less than $6$. Compare \refeqn{2BridgeVolume} in the 2--bridge case.
The authors of this survey have extended the method to the larger family of \emph{semi-adequate} links, and the even larger family of \emph{homogeneously adequate} links. Refer to \cite{fkp:guts} and \cite{fkp:PolandSurvey} for definitions of these families and precise theorem statements. The method gives particularly straightforward estimates in the same vein as \refthm{LackenbyVolAlt} for positive braids \cite{fkp:guts, Giambrone} and for Montesinos links \cite{fkp:guts, FinlinsonPurcell}.
\begin{question}\label{Ques:GutsComputable}
Does every knot $K \subset S^3$ admit an essential spanning surface $S$ such that the Euler characteristic $\chi(\operatorname{guts}(S^3 \smallsetminus K, \, S))$ can be computed directly from diagrammatic data?
\end{question}
The answer to \refques{GutsComputable} is ``yes'' whenever $K$ admits a \emph{homogeneously adequate} diagram in the terminology of \cite{fkp:guts}. However, it is not known whether $K$ always admits such a diagram. This is closely related to \cite[Question 10.10]{fkp:guts}.
\subsubsection{Dehn filling bounds}
A powerful method for proving lower bounds on the volume of $N = S^3 \smallsetminus K$ involves two steps: first, prove a lower bound on $\operatorname{vol}(M)$ for some surgery parent $M$ of $N$, using one of the above methods; and second, control the change in volume as we Dehn fill $M$ to recover $N$.
The following theorem, proved in \cite{fkp:filling}, provides an estimate that has proved useful for lower bounds on the volume of knot complements.
\begin{theorem}\label{Thm:VolChange}
Let $M$ be a cusped hyperbolic $3$--manifold, containing embedded horocusps $C_1, \ldots, C_k$ (plus possibly others). On each torus $T_i = \partial C_i$, choose a slope $s_i$, such that the shortest length of any of the $s_i$ is ${\ell_{\rm min}} > 2\pi$.
Then the manifold $M(s_1, \dots, s_k)$
obtained by Dehn filling along $s_1, \dots, s_k$ is hyperbolic, and its volume satisfies
\[
\operatorname{vol}(M(s_1, \dots, s_k)) \geq
\left(1-\left(\frac{2\pi}{{\ell_{\rm min}}}\right)^2\right)^{3/2} \operatorname{vol}(M).
\]
\end{theorem}
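The correction factor in \refthm{VolChange} approaches $1$ quickly as ${\ell_{\rm min}}$ grows, so long filling slopes lose very little volume. The short computation below (illustrative only) evaluates the factor $\bigl(1-(2\pi/{\ell_{\rm min}})^2\bigr)^{3/2}$ for a few slope lengths.
\begin{verbatim}
# Dehn filling volume correction factor from the theorem above.
import math

def filling_factor(l_min):
    # only meaningful for l_min > 2*pi
    return (1 - (2 * math.pi / l_min) ** 2) ** 1.5

for l_min in [6.5, 7.0, 8.0, 10.0, 20.0]:
    print(l_min, round(filling_factor(l_min), 4))
\end{verbatim}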
Earlier results in the same vein include an asymptotic estimate by Neumann and Zagier \cite{NeumannZagier}, as well as a cone-deformation estimate by Hodgson and Kerckhoff \cite{HK:univ}.
The idea of the proof of \refthm{VolChange} is as follows. Building on the proof of the Gromov--Thurston $2\pi$-Theorem, construct explicit negatively curved metrics on the solid tori added during Dehn filling. This yields a negatively curved metric on $M(s_1, \dots, s_k)$ whose volume is bounded below in terms of $\operatorname{vol}(M)$. Then, results of
Besson, Courtois, and Gallot \cite{BCG, BCS} can be used to compare the volume of the negatively curved metric on $M(s_1, \dots, s_k)$ with the true hyperbolic volume.
Theorem \ref{Thm:VolChange}
leads to diagrammatic volume bounds for several classes of hyperbolic links.
For example, the following theorem from \cite{fkp:filling} gives a double-sided volume bound similar to \refthm{LackenbyVolAlt}.
\begin{theorem} \label{htwisted}
Let $K \subset S^3$ be a link with a prime, twist--reduced diagram
$D(K)$.
Assume that $D(K)$ has $t(D) \geq 2$ twist regions, and
that each region contains at least $7$ crossings. Then $K$ is a
hyperbolic link satisfying
$$0.70735 \; (t(D) - 1) \; < \; \operatorname{vol}(S^3 \smallsetminus K) \; < \; 10\,
{v_{\rm tet}} \, (t(D) - 1).$$
\end{theorem}
The strategy of the proof of Theorem \ref{htwisted} is to view $S^3 \smallsetminus K$
as a Dehn filling on the complement of an augmented link obtained from the highly twisted diagram
$D(K)$. The volume of augmented links can be bounded below in terms of $t(D)$ using Miyamoto's theorem \cite{miyamoto}. The hypothesis that each region contains at least $7$ crossings ensures that the filling slopes are strictly longer than $2\pi$, hence \refthm{VolChange} gives the result.
Similar arguments using \refthm{VolChange} have been applied to links obtained by adding alternating tangles \cite{fkp:symmetric}, closed 3--braids \cite{fkp:farey} and weaving links \cite{CKP:Weaving}.
\subsubsection{Knots with symmetry groups}
We close this section with a result about the volumes of symmetric knots. Suppose $K \subset S^3$ is a hyperbolic knot, and $G$ is a group of symmetries of $K$. That is, $G$ acts on $S^3$ by orientation--preserving homeomorphisms that send $K$ to itself. It is a well-known consequence of Mostow rigidity that $G$ is finite and acts on $M = S^3 \smallsetminus K$ by isometries \cite[Corollary 5.7.4]{thurston:notes}. Furthermore, $G$ is cyclic or dihedral \cite{HTW}.
Define $n=n(G)$ to be the smallest order of a subgroup $\mathrm{Stab}_G(x)$ stabilizing a point $x \in S^3 \smallsetminus K$, or else $n = |G|$ if the group acts freely. While this definition depends on how $G$ acts, it is always the case that $n(G)$ is at least as large as the smallest prime factor of $|G|$.
The following result follows by combining several statements in the literature. Since it has not previously been recorded, we include a proof.
\begin{theorem}\label{Thm:SymmetricKnotVolume}
Let $K \subset S^3$ be a hyperbolic knot. Let $G$ be a group of orientation--preserving symmetries of $S^3$ that send $K$ to itself. Define $n=n(G)$ as above.
Then
\begin{equation*}
\operatorname{vol}(S^3 \smallsetminus K) \: \geq \: |G| \cdot x_n,
\end{equation*}
where $x_n = 2.848$ if $n>10$ and $n\neq 13, 18, 19$ and $x_n$ takes the following values otherwise.
\begin{center}
\begin{tabular}{| l c | c |l l |}
\hline
${v_{\rm oct}} / 12 = 0.30532 \ldots$ & $n =2$ & \hspace{.2in}
& $2.16958$ & $n =7,8$ \\
${v_{\rm tet}} / 2 = 0.50747 \ldots$ & $n =3$ & &
$2.47542$ & $n =9$ \\
$0.69524$ & $n =4$ & &
$2.76740$ & $n = 10$ \\
$1.45034$ & $n =5$ & &
$\operatorname{vol}({\tt m011}) = 2.7818\dots$ & $n = 13$ \\
$2.00606$ & $n =6$ & &
$\operatorname{vol}({\tt m016}) = 2.8281\dots$ & $n = 18,19$ \\
\hline
\end{tabular}
\end{center}
\end{theorem}
\begin{proof}
First, suppose that $G$ acts on $M = S^3 \smallsetminus K$ with fixed points. Then the quotient $\mathcal{O} = M / G$ is a non-compact, orientable hyperbolic $3$--orbifold whose torsion orders are bounded below by $n$. We need to check that $\operatorname{vol}(\mathcal{O}) \geq x_n$. If $n = 2$, this result is due to Adams \cite[Corollary 8.2]{adams:limit-volume-orbifolds}. If $n = 3$, the result is essentially due to Adams and Meyerhoff; see \cite[Lemma 2.2]{atkinson-futer:link-orbifolds} and \cite[Lemma 2.3]{atkinson-futer:high-torsion}. If $n \geq 4$, the result is due to Atkinson and Futer \cite[Theorem 3.8]{atkinson-futer:high-torsion}. In all cases, it follows that $\operatorname{vol}(M) \geq |G| \cdot x_n$.
Next, suppose that $G$ acts freely on $M = S^3 \smallsetminus K$. Then the quotient $N = M / G$ is a non-compact, orientable hyperbolic $3$--manifold. If $\operatorname{vol}(N) \geq 2.848$, then the theorem holds automatically because $x_n \leq 2.848$ for all $n$. If $\operatorname{vol}(N) < 2.848$, then Gabai, Meyerhoff, and Milley showed that $N$ is one of 10 enumerated $3$--manifolds \cite[Theorem 1.2]{gmm:smallest-cusped}. In SnapPy notation, these are $\tt m003$,
$\tt m004$, $\tt m006$, $\tt m007$, $\tt m009$, $\tt m010$, $\tt m011$, $\tt m015$, $\tt m016$, and $\tt m017$.
We restrict attention to these manifolds.
Since $G$ acts freely on $M$, the solution to the Smith conjecture implies that $G$ also acts freely on $S^3$.
By a theorem of Milnor \cite[page 624]{Milnor:free-action}, $G$ contains at most one element of order $2$, which implies that it must be cyclic. Thus $P = S^3/G$ is a lens space obtained by a Dehn filling on $N$.
An enumeration of the lens space fillings of the 10 possible manifolds $N$ appears in the table on \cite[page 243]{fkp:symmetric}. This enumeration can be used to show that all possibilities satisfy the statement of the theorem.
Suppose that a lens space $L(p,q)$ is a Dehn filling of $N$. If $N$ actually occurs as a quotient of $M = S^3 \smallsetminus K$, then $M$ must be a cyclic $p$--fold cover of $N$. We may rigorously enumerate all cyclic $p$--fold covers using SnapPy \cite{SnapPy}. In almost all cases, homological reasons show that these covers are not knot complements. For instance, $N = \mathtt{m003}$ has two lens space fillings: $L(5,1)$ and $L(10,3)$. This manifold has six $5$--fold and six $10$--fold cyclic covers, none of which has first homology ${\mathbb{Z}}$. Thus $\mathtt{m003}$ is not a quotient of a knot complement. The same technique applies to 8 of the 10 manifolds $N$.
The two remaining exceptions determine several values of $x_n$.
The manifold $ {\tt m011}$ has $9$--fold and $13$--fold cyclic covers that are knot complements in $S^3$. The value of $x_9$ is already smaller than $\operatorname{vol}({\tt m011})$, but the value of $x_{13}$ is determined by this example. Similarly, the manifold ${\tt m016}$, which is the $(-2,3,7)$ pretzel knot complement, has $18$--fold and $19$--fold cyclic covers that are knot complements, determining the values of $x_{18}$ and $x_{19}$.
\end{proof}
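To illustrate how \refthm{SymmetricKnotVolume} is applied (sample values, not a statement from the literature): if a hyperbolic knot $K$ admits a cyclic group $G$ of order $6$ acting freely on $S^3 \smallsetminus K$, then $n = 6$ and the theorem gives $\operatorname{vol}(S^3 \smallsetminus K) \geq 6 \cdot 2.00606 \approx 12.04$. If instead some point has stabilizer of order $2$, then $n = 2$ and only the weaker bound $\operatorname{vol}(S^3 \smallsetminus K) \geq 6 \cdot {v_{\rm oct}}/12 = {v_{\rm oct}}/2 \approx 1.83$ is guaranteed.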
\section{Cusp shapes and cusp areas}\label{Sec:Cusps}
Several results discussed above, such as \refthm{6Theorem} and \refthm{VolChange}, require the slopes used in Dehn filling along knot or link complements to be long. To bound lengths of slopes, we consider an additional invariant of hyperbolic knots and links, namely their cusp shapes and cusp areas.
\begin{definition}\label{Def:CuspArea}
Let $C_1, \dots, C_n$ be a fixed choice of maximal cusps for a link complement $M$, as in \refdef{MaximalCusp}. The \emph{cusp area} of a component $C_i$, denoted by $\operatorname{area}(\partial C_i)$, is the Euclidean area of $\partial C_i$. The \emph{cusp volume}, denoted by $\operatorname{vol}(C_i)$, is the volume of $C_i$. Note that $\operatorname{area}(\partial C_i) = 2 \operatorname{vol}(C_i)$. When $M$ has multiple cusps, the cusp area and cusp volume depend on the choice of maximal cusp.
\end{definition}
This section surveys some methods for estimating the area of a maximal cusp and the length of slopes on it, and poses some open questions.
\subsection{Direct computation}
Similar to the techniques in \refsec{Hyperbolic}, if we can explicitly determine a geometric triangulation of a hyperbolic $3$--manifold, then we can determine its cusp shape and cusp area. This is implemented in SnapPy \cite{SnapPy}.
For fully augmented links, whose geometry is completely determined by a circle packing, the cusp shape is also determined by the circle packing. The cusp area can be computed by finding an explicit collection of disjoint horoballs in the fully augmented link, as in \cite{FuterPurcell}.
Under very strong hypotheses, it is possible to apply the cone deformation techniques of Hodgson and Kerckhoff \cite{HK:univ} to bound the change in cusp shape under Dehn filling.
Purcell carried this out in \cite{Purcell:Cusps}, starting from a fully augmented link. However, the results only apply to knots with at least two twist regions and at least 116 crossings per twist region.
To obtain more general bounds for larger classes of knots and links, additional tools are needed. The main tools are pleated surfaces and packing techniques.
\subsection{Upper bounds and pleated surfaces}
If $M$ is a hyperbolic link complement, then for any choice of maximal cusp, there is a collection of slopes whose Dehn filling gives $S^3$. These are the \emph{meridians} of $M$. Because $S^3$ is not hyperbolic, the $6$--Theorem implies that in any choice of maximal cusp for $M$, one or more of these slopes must have length at most $6$.
Indeed, the $6$--Theorem is proved by considering punctured surfaces immersed in $M$ and using area arguments to bound the length of a slope.
\begin{definition}\label{Def:Pleated}
Let $M$ be a hyperbolic $3$--manifold with a collection of cusps $C$, and let $S$ be a hyperbolic surface. A \emph{pleated surface} is a piecewise geodesic, proper immersion $f\colon S \to M$. Properness means that any cusps of $S$ are mapped into cusps of $M$.
The surface $S$ is cut into ideal triangles, each of which is mapped isometrically into $M$. In $M$, there may be bending along the sides of the triangles. See \reffig{Pleating}.
\end{definition}
\begin{figure}
\caption{The lift of a pleated surface to the universal cover ${\mathbb{H}}^3$.}
\label{Fig:Pleating}
\end{figure}
An essential surface $S$ in a hyperbolic $3$--manifold $M$ can always be homotoped into a pleated form.
The idea is to start with an ideal triangulation of $S$, then homotope the images of the edges in $M$ to be ideal geodesics in $M$. Similarly, homotope the ideal triangles to be totally geodesic, with sides the geodesic edges in $M$. This gives $S$ a pleating. See
\cite[Theorem 5.3.6]{CanaryEpsteinGreen:Notes} or \cite[Lemma 2.2]{Lackenby:Word} for proofs.
The main result on slope lengths and pleated surfaces is the following, which is a special case of \cite[Theorem~5.1]{Agol:Bounds} and \cite[Lemma~3.3]{Lackenby:Word}.
The result is used in the proof of the $6$--Theorem.
\begin{theorem}\label{Thm:Pleated}
Let $M=S^3\smallsetminus K$ be a hyperbolic knot complement with a maximal cusp $C$. Suppose that $f\colon S \to M$ is a pleated surface, and let $\ell_C(S)$ denote the total length of the intersection curves in $f(S) \cap \partial C$. Then
\[ \ell_C(S)\ \leq \ 6 | \chi(S)|.\]
\end{theorem}
The idea of the proof of \refthm{Pleated} is to find disjoint horocusp neighborhoods $H=\cup H_i$ in $S$ such that $f(H_i)\subset C$, and such that $\ell(\partial H_i)$ is at least as big as the length of $f(\partial H_i)$ measured on $C$. This allows us to compute as follows:
\[ \ell_C(S)\leq \sum_{i=1}^{s}\ell(\partial H_i) = \sum_{i=1}^{s} \operatorname{area}(H_i) \leq \frac{6}{2\pi} \operatorname{area}(S)=\frac{6}{2\pi} \cdot 2\pi | \chi(S)|. \]
Here, the first inequality is by construction. The second equality is a general fact about hyperbolic surfaces, proved by a calculation in ${\mathbb{H}}^2$. The third inequality is a packing theorem due to B\"or\"oczky \cite{boroczky}. The final equality is the Gauss--Bonnet theorem.
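For example, if $S$ is an essential once--punctured torus or $3$--punctured sphere, then $|\chi(S)| = 1$, and \refthm{Pleated} bounds the total length of $f(S) \cap \partial C$ by $6$; bounds of exactly this type are what produce the constant $6$ in \refthm{6Theorem}.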
\subsubsection{Sample applications}
As noted above, the 6--Theorem implies that the length of a meridian is at most $6$.
\refthm{Pleated} has also been used to estimate the lengths of other slopes. For example, a \emph{$\lambda$--curve} is defined to be a curve that intersects the meridian $\mu$ exactly once. The knot-theoretic longitude, which is null-homologous in $S^3 \smallsetminus K$, is one example of a $\lambda$--curve, and need not be the shortest $\lambda$--curve.
There may be one or two shortest $\lambda$--curves. For any $\lambda$--curve $\lambda$, note that $\ell(\mu)\ell(\lambda)$ gives an upper bound on cusp area.
By applying \refthm{Pleated} to a singular spanning surface in a knot complement, the authors of \cite{AdamsEtAl:CuspSize} obtain the following upper bounds on meridian, $\lambda$--curve, and cusp area.
\begin{theorem}\label{Thm:AdamsCuspBound}
Let $K$ be a hyperbolic knot in $S^3$ with crossing number $c=c(K)$. Let $C$ denote the maximal cusp of $S^3\smallsetminus K$. Then, for the meridian $\mu$ and for the shortest $\lambda$--curve,
\[ \ell(\mu) \leq 6 - \frac{7}{c}, \quad \ell(\lambda) \leq 5c-6, \quad
\mbox{ and } \quad \operatorname{area}(\partial C) \leq {9c} \left(1 - \dfrac{1}{c}\right)^2.
\]
\end{theorem}
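For concreteness (an illustrative computation with a hypothetical crossing number): for a hyperbolic knot with $c = 7$, \refthm{AdamsCuspBound} gives
\[
\ell(\mu) \leq 6 - \tfrac{7}{7} = 5, \quad \ell(\lambda) \leq 5\cdot 7 - 6 = 29, \quad
\operatorname{area}(\partial C) \leq 9\cdot 7\left(1 - \tfrac{1}{7}\right)^2 = 63\cdot\tfrac{36}{49} \approx 46.3.
\]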
Another instance where \refthm{Pleated} applies is to knots with a pair of essential spanning surfaces $S_1$ and $S_2$; in this case the surface $S$ is taken to be the disjoint union of the two spanning surfaces. The following appears in \cite{BurtonKalf}.
\begin{theorem}\label{Thm:GeneralMeridianBound}
Let $K$ be a hyperbolic knot with maximal cusp $C$. Suppose that $S_1$ and $S_2$ are essential spanning surfaces in $M=S^3 \smallsetminus K$ and let $i(\partial S_1, \partial S_2)\neq 0$ denote the minimal intersection number of $\partial S_1, \partial S_2$ in $\partial C$. Finally, let $\chi = |\chi(S_1)|+|\chi(S_2)|$.
Then, for the meridian $\mu$ and the shortest $\lambda$--curve,
\[ \ell(\mu) \leq \frac{6\chi}{i(\partial S_1, \partial S_2)}, \quad
\ell(\lambda) \leq 3 \chi, \quad \mbox{ and } \quad
\operatorname{area}(\partial C) \leq \frac{18 \chi^2}{i(\partial S_1, \partial S_2)}.
\]
\end{theorem}
\refthm{GeneralMeridianBound} is useful because the checkerboard surfaces of many knot diagrams are known to be essential.
For instance, the checkerboard surfaces of alternating diagrams are essential. Indeed, in \cite{AdamsEtAl:CuspSize} the authors use pleated checkerboard surfaces to prove that the meridian of an alternating knot satisfies $\ell(\mu)<3-6/c$. Other knots with essential spanning surfaces include \emph{adequate knots}, which arose in the study of Jones-type invariants. Ozawa first proved that the two associated surfaces of such links are essential \cite{ozawa:adequate}; see also \cite{fkp:guts}. More generally, \refthm{GeneralMeridianBound} applies to knots that admit alternating projections onto surfaces, in such a way that the associated checkerboard surfaces are essential. These have been studied by Ozawa \cite{Ozawa} and Howie \cite{Howie:Thesis}.
All the results above indicate that meridian lengths should be strictly less than $6$.
For knots in $S^3$, no examples are known with meridian length greater than $4$.
\begin{question}\label{Ques:LengthMeridian}
Do all hyperbolic knots in $S^3$ satisfy $\ell(\mu)\leq 4$?
\end{question}
For links in $S^3$, Goerner showed there exists a link in $S^3$ with 64 components, and a choice of cusps for which each meridian length is $\sqrt{21}\approx 4.5826$ \cite{goerner}.
\begin{question}\label{Ques:LengthLinkMeridians}
Given a hyperbolic link $L \subset S^3$, consider the shortest meridian among the components of $L$. What is the largest possible value of the shortest meridian?
Is it $\sqrt{21}$?
\end{question}
The $6$--Theorem gives a bound on the length of any slope along which Dehn filling yields a non-hyperbolic manifold. By geometrization, non-hyperbolic manifolds are either \emph{reducible} (meaning they contain an essential $2$--sphere), \emph{toroidal} (meaning they contain an essential torus), or small Seifert fibered. The $6$--Theorem is only known to be sharp for toroidal fillings. Thus one may ask about the maximal possible length for the other types of fillings.
See \cite{HoffmanPurcell} for related questions and results.
\subsubsection{Upper bounds on area via cusp density}
The \emph{cusp density} of a cusped $3$--manifold $M$ is the volume of a maximal cusp divided by the volume of $M$. B\"or\"oczky \cite{boroczky} showed that cusp density is universally bounded by $\sqrt{3}/(2{v_{\rm tet}})$, with the figure--8 knot complement realizing this bound. Recall from \refthm{LATUpperBound} that every hyperbolic knot $K \subset S^3$ satisfies $\operatorname{vol}(S^3\smallsetminus K)\leq 10\, {v_{\rm tet}}(t-1)$, where $t = t(D)$ is the twist number of any diagram. Combining this with B\"or\"oczky's theorem shows that a maximal cusp $C \subset S^3 \smallsetminus K$ satisfies
\[ \operatorname{area}(\partial C)\leq 10 \sqrt 3 \cdot (t-1) \approx 17.32\cdot (t-1) .\]
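The displayed inequality follows by combining the facts above (a short verification): since $\operatorname{area}(\partial C) = 2\operatorname{vol}(C)$ and the cusp density is at most $\sqrt{3}/(2{v_{\rm tet}})$,
\[
\operatorname{area}(\partial C) \;=\; 2\operatorname{vol}(C) \;\leq\; \frac{\sqrt{3}}{{v_{\rm tet}}}\,\operatorname{vol}(S^3 \smallsetminus K) \;\leq\; \frac{\sqrt{3}}{{v_{\rm tet}}}\cdot 10\,{v_{\rm tet}}\,(t-1) \;=\; 10\sqrt{3}\,(t-1).
\]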
We note that this bound can be arbitrarily far from sharp. This is already true for \refthm{LATUpperBound}. In addition, Eudave-Mu{\~n}oz and Luecke \cite{eudave-munoz-luecke} showed that the cusp density of a hyperbolic knot complement can be arbitrarily close to $0$.
\subsection{Lower bounds via horoball packing}
Theorems~\ref{Thm:Pleated} and~\ref{Thm:AdamsCuspBound} give methods for bounding cusp area from above. To give lower bounds on slope lengths, for example to apply the 6--Theorem, we must bound cusp area or cusp volume from below. The main tool for this is to use \emph{packing arguments}: find a disjoint collection of horoballs with Euclidean diameters bounded from below in a fundamental region of the cusp. Take their shadows on the cusp torus. The area of the cusp torus must be bounded below by the areas of the shadows.
One sample result is the following, from \cite{LP:AltCusps}.
\begin{lemma}\label{Lem:LPShortarcsVolume}
Suppose that a one-cusped hyperbolic $3$--manifold $M$ contains at least $p$ homotopically distinct essential arcs, each with length at most $L$ measured with respect to the maximal cusp $H$ of $M$. Then the cusp area $\operatorname{area}(\partial H)$ is at least $p\,\sqrt{3}\,e^{-2L}$.
\end{lemma}
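As a sample computation (with hypothetical values of $p$ and $L$): if $M$ contains $p = 10$ homotopically distinct essential arcs of length at most $L = 1$, then \reflem{LPShortarcsVolume} gives
\[
\operatorname{area}(\partial H) \;\geq\; 10\sqrt{3}\, e^{-2} \;\approx\; 2.34.
\]
The exponential dependence on $L$ means the bound is only effective when the arcs are short.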
Similar techniques were also used to bound cusp areas in \cite{fkp:farey} and in \cite{futer-schleimer}.
The idea of the proof is that an arc from the cusp to itself of length $L$ lifts to an arc in the universal cover between two horoballs. We may identify the universal cover of $M$ with
the upper half-space model of ${\mathbb{H}}^3$, so that the boundary of one cusp in $M$ lifts to a horosphere at Euclidean height $1$. The Euclidean metric on this horosphere coincides with the hyperbolic metric. Arcs of bounded length lead to horoballs whose diameter is not too small, and whose shadows have a definite area.
At this writing there is no general lower bound on cusp area that holds for all hyperbolic knots. However, for alternating knots, Lackenby and Purcell found a collection of homotopically distinct essential arcs of bounded length, then applied \reflem{LPShortarcsVolume} to show the following \cite{LP:AltCusps}.
\begin{theorem} \label{Thm:AltCusps}
Let $D$ be a prime, twist reduced alternating diagram of some hyperbolic knot $K$ and let $t = t(D)$ be the twist number of $D$. Let $C$ be the maximal cusp of $M=S^3\smallsetminus K$. Then
\[ A (t-2) \leq \operatorname{area}(\partial C) \leq 10 \sqrt{3} (t-1), \]
where $A$ is at least $2.278 \times 10^{-9}$.
\end{theorem}
For 2--bridge knots there is a much sharper lower bound \cite{fkp:farey}:
\[ \frac{8\sqrt{3}}{147} (t-1) \leq \operatorname{area}(\partial C) \leq \frac{ \sqrt{3} {v_{\rm oct}}}{{v_{\rm tet}}}(t-1). \]
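Numerically, the constants in the $2$--bridge bound above are approximately $8\sqrt{3}/147 \approx 0.094$ and $\sqrt{3}\,{v_{\rm oct}}/{v_{\rm tet}} \approx 6.25$ (using ${v_{\rm oct}} \approx 3.6639$ and ${v_{\rm tet}} \approx 1.0149$).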
Note that \refthm{AltCusps}, combined with \refthm{LackenbyVolAlt}, implies that the cusp density of alternating knots is universally bounded below. This is not true for non-alternating knots~\cite{eudave-munoz-luecke}. It would be interesting to study the extent to which \refthm{AltCusps} can be generalized.
In general, we would like to know how to obtain many homotopically distinct arcs that can be used in \reflem{LPShortarcsVolume}. The arcs used in the proof of \refthm{AltCusps} lie on complicated immersed essential surfaces, described in \cite{LP:Twisted}. It is conjectured that much simpler \emph{crossing arcs} should play this role.
\begin{definition}\label{Def:CrossingArc}
Let $K$ be a knot with diagram $D(K)$. A \emph{crossing arc} is an embedded arc $\alpha$ in $S^3$ with $\partial \alpha \subset K$, such that in $D(K)$, $\alpha$ projects to an unknotted embedded arc running from an overstrand to an understrand in a crossing.
\end{definition}
The following conjecture is due to Sakuma and Weeks \cite{SakumaWeeks}.
\begin{conjecture}\label{Conj:Arcs}
In a reduced alternating diagram of a hyperbolic alternating link, every crossing arc is isotopic to a geodesic.
\end{conjecture}
\refconj{Arcs} is known for 2--bridge knots \cite[Appendix]{GueritaudFuter:2bridge} and certain closed alternating braids~\cite{Tsvietkova}.
Computer experiments performed by Thistlethwaite and Tsvietkova \cite{thistle-tsviet} also support the following conjecture, which would give more information on the lengths of crossing arcs, hence more information on cusp areas.
\begin{conjecture}\label{Conj:CrossingArcLength}
Crossing arcs in alternating knots have length universally bounded above by $\log 8$.
\end{conjecture}
\end{document} |
\begin{document}
\title{Harbourne constants and arrangements of lines on smooth hypersurfaces in $\mathbb{P}^3_{\mathbb{C}}$}
\author{Piotr Pokora}
\date{\today}
\maketitle
\thispagestyle{empty}
\begin{abstract}
In this note we find a bound for the so-called linear Harbourne constants for smooth hypersurfaces in $\mathbb{P}^{3}_{\mathbb{C}}$.
\keywords{line configurations, Miyaoka inequality, blow-ups, negative curves, the Bounded Negativity Conjecture}
\subclass{14C20, 14J70}
\end{abstract}
\section{Introduction}
In this short note we find a global estimate for Harbourne constants which were introduced in \cite{BdRHHLPSz} in order to capture and measure
the bounded negativity on various birational models of an algebraic surface.
\begin{definition}
Let $X$ be a smooth projective surface. We say that $X$ \emph{has bounded negativity}
if there exists an integer $b(X)$ such that for every \emph{reduced} curve $C \subset X$ one has the bound
$$C^{2} \geq -b(X).$$
\end{definition}
The bounded negativity conjecture (BNC for short) is one of the most intriguing problems in the theory of projective surfaces and
currently attracts a lot of attention; see \cite{Duke, BdRHHLPSz, Harbourne1, XR}. It can be formulated as follows.
\begin{conjecture}[BNC]
An arbitrary smooth \emph{complex} projective surface has bounded negativity.
\end{conjecture}
Some surfaces are known to have bounded negativity
(see \cite{Duke, Harbourne1}). For example, surfaces with $\mathbb{Q}$-effective anticanonical divisor such as Del Pezzo surfaces, K3 surfaces and Enriques surfaces have bounded negativity. However, when we replace these surfaces by their blow ups, we do not know if bounded negativity is preserved. Specifically, it is not known whether the blow up of $\mathbb{P}^{2}$ at ten general points has bounded negativity or not.
Recently, in \cite{BdRHHLPSz}, the authors showed the following theorem.
\begin{theorem}(\cite[Theorem~3.3]{BdRHHLPSz})
Let $\mathcal{L}$ be a line configuration on $\mathbb{P}^{2}_{\mathbb{C}}$. Let $f: X_{s} \rightarrow \mathbb{P}^{2}_{\mathbb{C}}$ be the blow-up of $\mathbb{P}^{2}_{\mathbb{C}}$ at $s$ distinct points and let $\widetilde{\call}$ be the strict transform of $\mathcal{L}$. Then we have $\widetilde{\call}^{2} \geq -4\cdot s$.
\end{theorem}
In this note, we generalize this result to the case of line configurations on smooth hypersurfaces $S_{n}$ of degree $n \geq 3$ in $\mathbb{P}^{3}_{\mathbb{C}}$.
A classical result tells us that every smooth hypersurface of degree $n=3$ contains $27$ lines. For smooth hypersurfaces of degree $n=4$ we know that the upper bound of the number of lines on quartic surfaces is 64 (claimed by Segre \cite{Segre} and correctly proved by Sch\"utt and Rams \cite{SR}). In general, for degree $n \geq 3$ hypersurfaces $S_{n}$ Boissi\'ere and Sarti (see \cite[Proposition~6.2]{BS}) showed that the number of lines on $S_{n}$ is less than or equal to $n(7n-12)$.
Using techniques similar to those introduced in \cite{BdRHHLPSz}, we prove the following result.
\begin{theoremA} Let $S_{n}$ be a smooth hypersurface of degree $n \geq 4$ in $\mathbb{P}^{3}_{\mathbb{C}}$. Let $\mathcal{L} \subset S_{n}$ be a line configuration, with the singular locus ${\rm Sing}(\mathcal{L})$ consisting of $s$ distinct points. Let $f : X_{s} \rightarrow S_{n} $ be the blowing up at ${\rm Sing}(\mathcal{L})$ and denote by $\widetilde{\call}$ the strict transform of $\mathcal{L}$. Then we have $$\widetilde{\call}^2 > -4s -2n(n-1)^2.$$
\end{theoremA}
In the last part we study some line configurations on smooth complex cubics and quartics in detail. Similar systematic studies on line configurations on the projective plane were initiated in \cite{Szpond}.
\section{Bounded Negativity viewed by Harbourne Constants}
We start with introducing the Harbourne constants \cite{BdRHHLPSz}.
\begin{definition}\label{def:H-constants}
Let $X$ be a smooth projective surface and let
$\mathcal{P}=\{ P_{1},\ldots,P_{s} \}$ be a set of mutually
distinct $s \geq 1$ points in $X$. Then the \emph{local Harbourne constant of $X$ at $\mathcal{P}$}
is defined as
\begin{equation}\label{eq:H-const for calp}
H(X;\mathcal{P}):= \inf_{C} \frac{\left(f^*C-\sum_{i=1}^s \mult_{P_i}C\cdot E_i\right)^2}{s},
\end{equation}
where $f: Y \to X$ is the blow-up of $X$ at the set $\mathcal{P}$
with exceptional divisors $E_{1},\ldots,E_s$ and the infimum is taken
over all \emph{reduced} curves $C\subset X$.\\
Similarly, we define the \emph{$s$--tuple Harbourne constant of $X$}
as
$$H(X;s):=\inf_{\mathcal{P}}H(X;\mathcal{P}),$$
where the infimum now is taken over all $s$--tuples of mutually
distinct points in $X$. \\
Finally, we define the \emph{global Harbourne constant of $X$} as
$$H(X):=\inf_{s \geq 1}H(X;s).$$
\end{definition}
The relation between Harbourne constants and the BNC can be expressed in the following way.
Suppose that $H(X)$ is a finite real number.
Then for any $s \geq 1$ and any reduced curve $D$ on the blow-up of $X$
at $s$ points, we have
$$D^2 \geq sH(X).$$
Hence the BNC holds on every blow-up $Y$ of $X$ at $s$ mutually distinct points, with the constant $b(Y) = -sH(X)$.
On the other hand, even if $H(X)=-\infty$, the BNC might still be true.
It is very hard to compute Harbourne constants in general. Moreover, it is quite tricky to find these numbers even for the simplest types of reduced curves on a well-understood surface.
\section{Proof of the main result}
Given a configuration of lines on $S_{n}$ we denote by $t_{r}$ the number of its $r$-fold points, i.e., points at which exactly $r$ lines of the configuration meet. In the sequel we will repeatedly use two elementary equalities, namely $\sum_{i} {\rm mult}_{P_{i}}(C) = \sum_{k \geq 2}kt_{k}$ and $\sum_{k\geq 2} t_{k} = s$. In this section we study \emph{linear Harbourne constants} $H_{L}$. We define only the local linear Harbourne constant for $S_{n}$ containing a line configuration $\mathcal{L}$, since this is the only difference compared with Definition \ref{def:H-constants}.
\begin{definition}
Let $S_{n}$ be a smooth hypersurface of degree $n \geq 2$ in $\mathbb{P}^{3}_{\mathbb{C}}$ containing at least one line and let
$\mathcal{P}=\{ P_{1},\ldots,P_{s} \}$ be a set of mutually
distinct $s$ points in $S_{n}$. Then the \emph{local linear Harbourne constant of $S_{n}$ at $\mathcal{P}$}
is defined as
\begin{equation}
H_{L}(S_{n}; \mathcal{P}):= \inf_{\mathcal{L}} \frac{\widetilde{\call}^{2}}{s},
\end{equation}
where $\widetilde{\call}$ is the strict transform of $\mathcal{L}$ with respect to the blow up $f : X_{s} \rightarrow S_{n} $ at $\mathcal{P}$
and the infimum is taken over all \emph{reduced} line configurations $\mathcal{L} \subset S_{n}$.
\end{definition}
Our proof is based on the following result due to Miyaoka \cite[Section~2.4]{Miyaoka}.
\begin{theorem}
Let $S_{n}$ be a smooth hypersurface in $\mathbb{P}^{3}_{\mathbb{C}}$ of degree $n \geq 4$ containing a configuration of $d$ lines. Then one has
$$nd -t_{2} + \sum_{k \geq 3}(k-4)t_{k} \leq 2n(n-1)^2.$$
\end{theorem}
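As a quick sanity check of this inequality (an illustration added here, using the data for the $64$ lines on the Schur quartic discussed later in this note, where $n=4$, $d=64$, $t_{2}=336$, $t_{3}=64$ and $t_{4}=8$): the left-hand side equals $4\cdot 64 - 336 + (3-4)\cdot 64 + (4-4)\cdot 8 = -144$, which is indeed at most $2\cdot 4\cdot(4-1)^{2} = 72$.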
Now we are ready to give a proof of the Main Theorem.
\begin{proof}
Pick a number $n \geq 4$. Recall that using the adjunction formula one can compute the self-intersection number of a line $l$ on $S_{n}$, which is equal to
$$l^{2} = -2 - K_{S_{n}}.l = -2 - \mathcal{O}(n-4).l = 2-n.$$
Observe that the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ has the following form
\begin{equation}
\label{Hconst}
H_{L}(S_{n}; {\rm Sing}(\mathcal{L})) = \frac{ (2-n)d + I_{d} - \sum_{k\geq 2}k^{2}t_{k}}{\sum_{k\geq 2}t_{k}},
\end{equation}
where $I_{d} = 2 \sum_{i < j} l_{i}l_{j}$ denotes the number of incidences of $d$ lines $l_{1}, ..., l_{d}$.
It is easy to see that we have the combinatorial equality
$$I_{d} = \sum_{k \geq 2}(k^{2} - k)t_{k},$$
hence we obtain
$$I_{d} -\sum_{k \geq 2}k^{2}t_{k} = -\sum_{k \geq 2}kt_{k}.$$
Applying this to (\ref{Hconst}) we get
$$H_{L}(S_{n}; {\rm Sing}(\mathcal{L})) = \frac{ (2-n)d - \sum_{k \geq 2} kt_{k}}{\sum_{k \geq 2}t_{k}}.$$
Simple manipulations on the Miyaoka inequality lead to
$$nd + t_{2} -4\sum_{k \geq 2}t_{k} - 2n(n-1)^2 \leq -\sum_{k\geq 2} kt_{k},$$
and finally we obtain
$$H_{L}(S_{n}; {\rm Sing}(\mathcal{L})) \geq -4 + \frac{ 2d + t_{2} -2n(n-1)^2}{s}.$$
Multiplying by $s$ and using $2d + t_{2} > 0$ we obtain $\widetilde{\call}^{2} \geq -4s + 2d + t_{2} - 2n(n-1)^{2} > -4s - 2n(n-1)^{2}$, which completes the proof.
\end{proof}
It is an interesting question how the linear Harbourne constant behaves when the degree $n$ of a hypersurface grows. We present two extreme examples.
\begin{example}
Let us consider the Fermat hypersurface of degree $n \geq 3$ in $\mathbb{P}^{3}_{\mathbb{C}}$, which is given by the equation
$$F_{n} \,\, : \,\, x^{n} + y^{n} + z^{n} + w^{n} = 0.$$
It is a classical result that $F_{n}$ contains a line configuration $\mathcal{L}_{n}$ consisting of $3n^{2}$ lines, which delivers $3n^{3}$ double points and $6n$ points of multiplicity $n$. It is easy to check that
$${\rm lim}_{n \rightarrow \infty} H_{L}(F_{n}; {\rm Sing}(\mathcal{L}_{n})) = {\rm lim}_{n \rightarrow \infty} \frac{ 3 n^{2} \cdot (2-n) + 12n^{3} - 6n^{2} - 4 \cdot 3n^3 - n^{2} \cdot 6n} {3 n^3 + 6n} = -3.$$
On the other hand, the Main Theorem gives
$$ {\rm lim}_{n \rightarrow \infty} H_{L}(F_{n}; {\rm Sing}(\mathcal{L}_{n})) \geq -4 + {\rm lim}_{n \rightarrow \infty} \frac{ 6n^{2} + 3n^3 - 2n(n-1)^2}{3n^3 + 6n} = -3 \frac{2}{3},$$
which shows that the estimate given there is reasonably sharp.
\end{example}
\begin{example}
\label{Rams}
This construction comes from \cite{R}. Let us consider the Rams hypersurface in $\mathbb{P}^{3}_{\mathbb{C}}$ of degree $n \geq 6$ given by the equation
$$R_{n} \, : \, x^{n-1}\cdot y + y^{n-1} \cdot z + z^{n-1} \cdot w + w^{n-1}\cdot x = 0.$$
On $R_{n}$ there exists a configuration $\mathcal{L}_{n}$ of $n(n-2)+4$ lines, which delivers exactly $2n^2 - 4n + 4$ double points -- this configuration is a grid of $n(n-2)+2$ pairwise disjoint vertical lines intersected by two disjoint horizontal lines. The local linear Harbourne constant at ${\rm Sing}(\mathcal{L}_{n})$ is equal to
$$H_{L}(R_{n}; {\rm Sing}(\mathcal{L}_{n})) = \frac{ (n^2-2n+4)\cdot(2-n) + 4n^2 - 8n + 8 - 4\cdot(2n^2 - 4n + 4)}{2n^2 - 4n + 4} = \frac{-n^3}{2n^2 -4n + 4}.$$
Then ${\rm lim}_{n \rightarrow \infty} H_{L}(R_{n}; {\rm Sing}(\mathcal{L}_{n})) = - \infty.$
\end{example}
Example \ref{Rams} presents a quite interesting phenomenon: one can obtain very negative linear Harbourne constants even though all singular points have the minimal possible multiplicity -- the effect is driven entirely by the large number of (pairwise disjoint) lines.
\section{Smooth cubics and quartics}
We start with the case $n = 3$. As we mentioned in the first section, every smooth cubic surface contains $27$ lines, and the configuration of these lines has only double and triple points. The triple points are called \emph{Eckardt points}. Now we find a lower bound for the linear Harbourne constant for such hypersurfaces.
\begin{proposition}
Under the above notation one has
$$H_{L}(S_{3}; {\rm Sing}(\mathcal{L})) \geq -2 \frac{5}{11}.$$
\end{proposition}
\begin{proof}
Recall that the combinatorial equality \cite[Example II.20.]{Urzua} for cubic surfaces has the form
$$135 = t_{2} + 3t_{3}.$$
Moreover, another classical result asserts that the maximal number of \emph{Eckardt} points is equal to $18$, and this number is attained on the Fermat cubic. In order to get a sharp lower bound for $H_{L}$ we need to consider the case when the number of Eckardt points is the largest. To see this we show that the linear Harbourne constant for $t$ triple points is greater than for $t+1$ triple points. Simple computations show that
$$H_{L}(S_{3};t) = \frac{-297+3t}{135-2t},$$
$$H_{L}(S_{3};t+1) = \frac{-294+3t}{133-2t},$$
and $H_{L}(S_{3};t+1) < H_{L}(S_{3};t)$ iff $(-297+3t)\cdot(133-2t) - (-294+3t)\cdot(135-2t) > 0$ for all $t \in \{0, ..., 18\}$; the left-hand side is identically equal to $189 > 0$, so this is obvious to check.
Having this information in hand we can calculate that for $18$ triple points and $81$ double points the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ is equal to
$$H_{L}(F_{3};{\rm Sing}(\mathcal{L})) = \frac{27\cdot(-1) + 270 - 4\cdot81 - 9 \cdot 18}{99} = -2 \frac{5}{11},$$
which ends the proof.
\end{proof}
\begin{example}
Now we consider the case $n=4$, and we start with the configuration of $64$ lines on the Schur quartic $Sch$. It is well known that every line from this configuration intersects exactly $18$ other lines -- see for instance \cite[Proposition 7.1]{SR}. One can check that these $64$ lines deliver $8$ quadruple points, $64$ triple points and $336$ double points (in \cite{Urzua} the number of double points is stated to be $192$, which is incorrect). Then the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ is equal to
$$H_{L}(Sch; {\rm Sing}(\mathcal{L})) = \frac{(-2)\cdot64 + 1152 -16\cdot8 - 9\cdot 64 - 4\cdot 336}{336 + 64 + 8} = -2.509.$$
\end{example}
Now we present an example of a line configuration on a smooth quartic which delivers the most negative (to our knowledge) local linear Harbourne constant for this kind of surface.
\begin{example}[Bauer configuration of lines]
Let us consider the Fermat quartic $F_{4}$. It is well known that $F_{4}$ contains a configuration of $48$ lines. From this configuration one can extract a subconfiguration of $16$ lines which has only $8$ quadruple points.
Then the local linear Harbourne constant at ${\rm Sing}(\mathcal{L})$ is equal to
$$H_{L}(F_{4}; {\rm Sing}(\mathcal{L})) = \frac{ 16\cdot(-2) + 16\cdot6 - 16 \cdot8}{8} = -8.$$
Using the Main Theorem we get $H_{L}(F_{4}; {\rm Sing}(\mathcal{L})) \geq -9$, which again shows that our bound is reasonably sharp.
\end{example}
\paragraph*{\emph{Acknowledgement.}}
The author would like to express his gratitude to Thomas Bauer for sharing Example 4.3, to S\l awomir Rams for pointing out his construction in \cite{R}, and to Tomasz Szemberg and Halszka Tutaj-Gasi\'nska for useful remarks. Finally, the author would like to thank the anonymous referee for many useful comments which helped to improve the exposition of this note. The author is partially supported by National Science Centre Poland Grant 2014/15/N/ST1/02102.
Piotr Pokora,
Instytut Matematyki,
Pedagogical University of Cracow,
Podchor\c a\.zych 2,
PL-30-084 Krak\'ow, Poland.
\nopagebreak
\textit{E-mail address:} \texttt{[email protected]}
\end{document} |
\begin{document}
\twocolumn[
\icmltitle{Self-Interpretable Time Series Prediction with Counterfactual Explanations}
\begin{icmlauthorlist}
\icmlauthor{Jingquan Yan}{ru}
\icmlauthor{Hao Wang}{ru}
\end{icmlauthorlist}
\icmlaffiliation{ru}{Department of Computer Science, Rutgers University}
\icmlcorrespondingauthor{Jingquan Yan}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving. Most existing methods focus on interpreting predictions by assigning importance scores to segments of time series. In this paper, we take a different and more challenging route and aim at developing a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions. Specifically, we formalize the problem of time series counterfactual explanations, establish associated evaluation protocols, and propose a variational Bayesian deep learning model equipped with counterfactual inference capability of time series abduction, action, and prediction. Compared with state-of-the-art baselines, our self-interpretable model can generate better counterfactual explanations while maintaining comparable prediction accuracy. Code will be available at \href{https://github.com/Wang-ML-Lab/self-interpretable-time-series}{https://github.com/Wang-ML-Lab/self-interpretable-time-series}.
\end{abstract}
\section{Introduction}
\label{introduction}
Deep learning (DL) has become increasingly prevalent, and there is naturally a growing need for understanding DL predictions in many decision-making areas, such as healthcare diagnosis and public policy-making. The high-stakes nature of these areas means that these DL predictions are considered trustworthy only when they can be well explained. Meanwhile, time-series data has been frequently used in these areas~\citep{zhao2021assessment,jin2022domain,yang2022artificial}, but it is always challenging to explain a time-series prediction due to the nature of temporal dependency and varying patterns over time. Moreover, time-series data often comes with confounding variables that affect both the input and output, making it even harder to explain predictions from DL models.
On the other hand, many existing explanation methods are based on assigning importance scores for different parts of the input to explain model predictions~\citepp{LIME, SHAP, FeatureSelector, BiasAttribution, LearningPrior, Regularizing}.
However, understanding the contribution of different input parts is usually not sufficiently informative for decision making: people often want to know what changes made to the input could have led to a specific (desirable) prediction~\citepp{WachterCF,VisualCF,CounteRGAN}. We call such changed inputs, which could have shifted the prediction to a specific target, \emph{actionable counterfactual explanations}. Below we provide an example in the context of time series.
\begin{example}[\textbf{Actionable Counterfactual Explanation}]\label{exa:targeted}
Suppose there is a model that takes as input a time series of breathing signal ${\bf x}\in\mathbb{R}^{T}$ from a subject of age $u=60$ to predict the corresponding sleep stage as $y^{pred}=\mbox{`Awake'}\in\{\mbox{`Awake'},\mbox{`Light Sleep'},\mbox{`Deep Sleep'}\}$.
Typical methods assign importance scores to each entry of ${\bf x}$ to explain the prediction. However, they do not provide \emph{actionable} counterfactual explanations on how to modify ${\bf x}$ to ${\bf x}^{cf}$ such that the prediction can change to $y^{cf}=\mbox{`Deep Sleep'}$. An ideal method with such capability could provide more information on why the model makes specific predictions.
\end{example}
Actionable counterfactual explanations help people understand how to achieve a counterfactual (target) output by modifying the current model input. However, such explanations may not be sufficiently informative in practice, especially under the causal effect of confounding variables which are often immutable. Specifically, some variables can hardly be changed once their values have been determined, and suggesting changes to such variables is both meaningless and infeasible (e.g., a patient's age and gender when modeling medical time series). This leads to a stronger requirement: a good explanation should make as few changes as possible to immutable variables; we call such explanations \emph{feasible counterfactual explanations}. Below we provide an example in the context of time series.
\begin{example}[\textbf{Feasible Counterfactual Explanation}]\label{exa:feasible}
In~\exaref{exa:targeted},
age $u$ is a confounder that affects both ${\bf x}$ and $y$ since elderly people (i.e., larger $u$) are more likely to have irregular breathing ${\bf x}$ and more `Awake' time (i.e., $y=\mbox{`Awake'}$) at night.
To generate a counterfactual explanation to change $y^{pred}$ to $\mbox{`Deep Sleep'}$, typical methods tend to suggest decreasing the age $u$ from $60$ to $50$, which is \emph{infeasible} (since age cannot be changed in practice). An ideal method would first infer the age $u$ and search for a \emph{feasible} counterfactual explanation ${\bf x}^{cf}$ that could change $y^{pred}$ to $\mbox{`Deep Sleep'}$ while keeping $u$ unchanged.
\end{example}
In this paper, we propose a self-interpretable time series prediction model, dubbed Counterfactual Time Series (CounTS), which can both (1) perform time series predictions and (2) provide actionable and feasible counterfactual explanations for its predictions. Under common causal structure assumptions, our method is guaranteed to identify the causal effect between the input and output in the presence of exogenous (confounding) variables, thereby improving the generated counterfactual explanations' feasibility. Our contributions are summarized as follows:
\begin{itemize}
\item We identify the actionability and feasibility requirements for generating counterfactual explanations for time series models and develop the first general self-interpretable method, dubbed CounTS, that satisfies such requirements.
\item We provide theoretical guarantees that CounTS can identify the causal effect between the time series input and output in the presence of exogenous (confounding) variables, thereby improving feasibility in the generated explanations.
\item Experiments on both synthetic and real-world datasets show that compared to state-of-the-art methods, CounTS significantly improves performance for generating counterfactual explanations while still maintaining comparable prediction accuracy.
\end{itemize}
\section{Related Work}
\label{relatedwork}
\textbf{Interpretation Methods for Neural Networks.}
Various attribution-based interpretation methods have been proposed in recent years. Some methods focus on local interpretation~\citepp{LIME, SHAP, MAPLE, FeatureSelector, BiasAttribution} while others are designed for global interpretation~\citepp{ACE, MAME}. The main idea is to assign attribution, or importance scores, to the input features in terms of their impact on the prediction (output). For example, such importance scores can be computed using gradients of the prediction with respect to the input~\citepp{GradCAM, GradSHAP, DEEPLIFT, IntegratedGradient}.
Some interpretation methods are specialized for time series data; these include perturbation-based~\citepp{SeriesSaliency}, rule-based~\citepp{LIMREF}, and attention-based methods~\citepp{UncertaintyAware,TFT}. One typical method, Feature Importance in Time (FIT), evaluates the importance of the input data based on the temporal distribution shift and unexplained distribution shift~\citep{FIT}. However, these methods can only produce importance scores of the input features for the current prediction and therefore cannot generate counterfactual explanations (see~\secref{sec:exp} and~\appref{sec:app_add} for empirical results).
\textbf{Counterfactual Explanations for Time Series Models.}
There are also works that generate counterfactual explanations for time series models.
\citep{CAP} proposed an association-rule algorithm to explain time series prediction by finding the frequent pairs of timestamps and generating counterfactual examples.
\citep{CounteRGAN} proposed a general explanation framework that generates counterfactual examples using residual generative adversarial networks (RGAN); it can be adapted for time series models. However, these works either fail to generate realistic counterfactual explanations (due to discretization error) or fail to generate feasible counterfactual explanations for time series models. In contrast, our CounTS as a principled variational causal method~\citep{ICL,GenInt,gupta2021correcting} can naturally generate realistic and feasible counterfactual explanations. Such advantages are empirically verified in~\secref{sec:exp}.
\textbf{Bayesian Deep Learning and Variational Autoencoders.}
Our work is also related to the broad categories of variational autoencoders (VAEs)~\citep{VAE} (which use inference networks to approximate posterior distributions) and Bayesian deep learning (BDL)~\citep{CDL,BDL,BDLThesis,RPPU,BIN,BDLSurvey,ZESRec} models (which use a deep component to process high-dimensional signals and a task-specific/graphical component to handle conditional/causal dependencies). \citep{OrphicX} proposed the first VAE-based model for generating causal explanations for graph neural networks. \citep{CEVAE,DeepSCM} proposed the first VAE-based models for performing causal inference and estimating treatment effect. However, none of them addressed the problem of counterfactual explanation, which involves solving an inverse problem to obtain the optimal counterfactual input. In contrast, our CounTS is the first VAE-based model to address this challenge, with theoretical guarantees and promising empirical results. From the perspective of BDL~\citep{BDL,BDLSurvey}, CounTS uses deep neural networks to process high-dimensional signals (i.e., the deep component in~\citep{BDL}) and uses a Bayesian network to handle the conditional/causal dependencies among variables (i.e., the task-specific or graphical component in~\citep{BDL}). Therefore, CounTS is also the first BDL model for generating counterfactual explanations.
\section{Preliminaries}
\label{preliminaries}
\textbf{Causal Model.}
Following the definition in \citep{Causality}, a causal model is described by a 3-tuple $M = \left\langle U, V, F\right\rangle$. $U$ is a set of exogenous variables $\{u_1,\dots,u_m\}$ that is not determined by any other variables in this causal model. $V$ is a set of endogenous variables $\{v_1,\dots,v_n\}$ that are determined by variables in $U \cup V$. We assume the causal model can be factorized according to a directed graph where each node represents one variable. $F$ is a set of functions $\{f_1,\dots,f_n\}$ describing the generative process of $V$:
\begin{equation*}
v_i = f_i(pa_i, u_i), \quad i=1,\dots,n,
\end{equation*}
where $pa_i$ denotes the direct parent nodes of $v_i$.
\textbf{Counterfactual Inference.}
Counterfactual inference is interested in questions like ``observing that $X=x$ and $Y=y$, what would be the probability that $Y=y^{cf}$ if the input $X$ had been $x^{cf}$?''. Formally, given a causal model $\left\langle U, V, F\right\rangle$ where $Y, X \in V$, counterfactual inference proceeds in three steps~\citep{Causality}:
\begin{compactenum}
\item \textbf{Abduction.} Calculate the posterior distribution of $u$ given the observation $X=x$ and $Y=y$, i.e., $P(u|X=x,Y=y)$.
\item \textbf{Action.} Perform a causal intervention on the variable $X$, i.e., $do(X=x^{cf})$.
\item \textbf{Prediction.} Calculate the counterfactual probability $P(Y_{X=x^{cf}}(u)=y^{cf})$ with respect to the posterior distribution $P(u|X=x, Y=y)$.
\end{compactenum}
Putting the three steps together, we have
\begin{align*}
&P(Y_{X=x^{cf}}=y^{cf}|X=x,Y=y) \\
= &\sum\nolimits_u P(Y_{X=x^{cf}}(u)=y^{cf})P(u|x,y),
\end{align*}
where $x^{cf}$ can be a counterfactual explanation describing what would have shifted the outcome $Y$ from $y$ to $y^{cf}$.
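To make the three steps concrete, consider the sleep-staging example from the introduction (an informal illustration in the notation above): $u$ is the subject's age, $X$ the breathing signal, and $Y$ the sleep stage. Abduction computes the posterior $P(u\,|\,X=x, Y=\mbox{`Awake'})$ over the age; action intervenes on the breathing signal via $do(X=x^{cf})$; and prediction evaluates
\begin{align*}
P(Y_{X=x^{cf}}=\mbox{`Deep Sleep'}\,|\,X=x,Y=\mbox{`Awake'}) = \sum\nolimits_u P(Y_{X=x^{cf}}(u)=\mbox{`Deep Sleep'})\,P(u\,|\,x,\mbox{`Awake'}),
\end{align*}
i.e., the probability of the target stage under the inferred (and unchanged) age.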
\section{Method}
In this section, we formalize the problem of counterfactual explanations for time series prediction and describe our proposed method for the problem.
\textbf{Problem Setting.}
We focus on generating counterfactual explanations for predictions from time series models. We assume the model takes as input a multivariate time series ${\bf x}_i \in \mathbb{R}^{D \times T}$ and predicts the corresponding label ${\bf y}_i$, which can be a categorical label, a real value, or a time series ${\bf y}_i\in\mathbb{R}^{T}$. Given a specific input ${\bf x}_i$ and the model's prediction ${\bf y}_i^{pred}$, our goal is to explain the model by finding a counterfactual time series ${\bf x}_i^{cf}\neq {\bf x}_i$ that could have led the model to an alternative (counterfactual) prediction ${\bf y}_i^{cf}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{img/causal_graph_finalized.pdf}
\caption{\label{fig:causal_graph}\textbf{Left:} The causal graph and generative model for CounTS. \textbf{Right:} Inference model for CounTS.}
\end{figure}
\subsection{Learning of CounTS}
\textbf{Causal Graph and Generative Model for CounTS.}
Our self-interpretable model CounTS is based on the causal graph in~\figref{fig:causal_graph}(left), where ${\bf x}\in\mathbb{R}^{D\times T_{in}}$ is the input time series, ${\bf y}$ is the label, ${\bf z}\in\mathbb{R}^{H_z\times T_{mid}}$ is the representation of ${\bf x}$, ${\bf u}_l\in\mathbb{R}^{H_l\times T_{mid}}$ is the local exogenous variable, and ${\bf u}_g\in\mathbb{R}^{H_g}$ is the global exogenous variable. Both ${\bf u}_l$ and ${\bf u}_g$ are confounder variables; ${\bf u}_l$ can take different values in different time steps, while ${\bf u}_g$ is shared across all time steps.
This causal graph assumes the following factorization of the generative model $p_\theta({\bf y},{\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$:
\begingroup\makeatletter\def\f@size{7.5}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}
\begin{align}
p_\theta({\bf y},{\bf u}_l,{\bf u}_g,{\bf z}|{\bf x}) = p_\theta({\bf u}_l,{\bf u}_g) p_\theta({\bf z} | {\bf u}_l,{\bf u}_g, {\bf x})p_\theta({\bf y}| {\bf u}_l,{\bf u}_g, {\bf z}),\label{eq:p_factor}
\end{align}
\endgroup
where $\theta$ denotes the collection of parameters for the generative model, and $p_\theta({\bf u}_l,{\bf u}_g)=p_\theta({\bf u}_l)p_\theta({\bf u}_g)$. Here $p_\theta({\bf z} | {\bf u}_l,{\bf u}_g, {\bf x})$ and $p_\theta({\bf y}| {\bf u}_l,{\bf u}_g, {\bf z})$ are the encoder and the predictor, respectively. Specifically, we have
\begin{align*}
p_\theta({\bf u}_l) &= {\mathcal N}({\bf 0},{\bf I}),~~~~p_\theta({\bf u}_g) = {\mathcal N}({\bf 0},{\bf I}), \\
p_\theta({\bf z} | {\bf u}_l,{\bf u}_g, {\bf x}) &= {\mathcal N}(\mu_z({\bf u}_l,{\bf u}_g, {\bf x}; \theta), \sigma_z({\bf u}_l,{\bf u}_g,{\bf x}; \theta)), \\
p_\theta({\bf y} | {\bf u}_l,{\bf u}_g, {\bf z}) &= {\mathcal N}(\mu_y({\bf u}_l,{\bf u}_g, {\bf z}; \theta), \sigma_y({\bf u}_l,{\bf u}_g,{\bf z}; \theta)),
\end{align*}
where $\mu_z(\cdot; \cdot)$, $\sigma_z(\cdot; \cdot)$, $\mu_y(\cdot; \cdot)$, $\sigma_y(\cdot; \cdot)$ are neural networks parameterized by $\theta$. Note that for classification models the predictor $p_\theta({\bf y} | {\bf u}_l,{\bf u}_g, {\bf z})$ becomes a categorical distribution $Cat(f_y({\bf u}_l,{\bf u}_g, {\bf z}; \theta))$, where $f_y(\cdot;\cdot)$ is a neural network.
\textbf{Inference Model for CounTS.}
We use an inference model $q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$ to approximate the posterior distribution of the latent variables, i.e., $p_\theta({\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$. As shown in~\figref{fig:causal_graph}(right), we factorize $q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$ as
\begingroup\makeatletter\def\f@size{9.5}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}
\begin{align}
q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x}) &= q_\phi({\bf y}|{\bf x})q_\phi({\bf u}_l,{\bf u}_g| {\bf x}, {\bf y})q_\phi({\bf z} | {\bf x}, {\bf y}),\label{eq:variational}\\
q_\phi({\bf u}_l,{\bf u}_g| {\bf x}, {\bf y})&=q_{\phi}({\bf u}_l|{\bf x}, {\bf y})q_{\phi}({\bf u}_g|{\bf x}, {\bf y}),
\end{align}
\endgroup
where $\phi$ is the collection of the inference model's parameters. We parameterize each factor in~\eqnref{eq:variational} as
\begin{align*}
q_{\phi}({\bf y}|{\bf x}) &={\mathcal N}(\mu_y({\bf x}; \phi), \sigma_y^2({\bf x}; \phi)), \\
q_{\phi}({\bf u}_l|{\bf x}, {\bf y}) &= {\mathcal N}(\mu_{u_l}({\bf x}, {\bf y}; \phi), \sigma_{u_l}^2({\bf x}, {\bf y}; \phi)), \\
q_{\phi}({\bf u}_g|{\bf x}, {\bf y}) &= {\mathcal N}(\mu_{u_g}({\bf x}, {\bf y}; \phi), \sigma_{u_g}^2({\bf x}, {\bf y}; \phi)), \\
q_{\phi}({\bf z}|{\bf x}, {\bf y}) &= {\mathcal N}(\mu_z({\bf x}, {\bf y}; \phi), \sigma_z^2({\bf x}, {\bf y}; \phi)),
\end{align*}
where $\mu_{\cdot}(\cdot;\cdot)$ and $\sigma_{\cdot}(\cdot;\cdot)$ denote neural networks with $\phi$ as their parameters.
\textbf{Evidence Lower Bound.}
Our CounTS uses the evidence lower bound (ELBO) $\mathcal{L}_{ELBO}$ of the log likelihood $\log p({\bf y}|{\bf x})$ as an objective to learn the generative and inference models.
Maximizing the ELBO is equivalent to learning the optimal variational distribution $q_\phi({\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})=\int q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x}) d{\bf y}$ that best approximates the posterior distribution of the label and latent variables $p_\theta({\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$. Specifically, we have
\begin{align}
\mathcal{L}_{ELBO} = &~\mathbb{E}_{q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})}[p_{\theta}({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})] \nonumber\\
&- \mathbb{E}_{q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})}[q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})].\label{eq:elbo_simple}
\end{align}
Note that, different from typical ELBOs, we explicitly involve ${\bf y}$ and use $q_\phi({\bf y}, {\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$ rather than $q_\phi({\bf u}_l,{\bf u}_g,{\bf z}|{\bf x})$ (see~\appref{sec:app_elbo} for details); this is to expose the factor $q_\phi({\bf y}|{\bf x})$ in~\eqnref{eq:variational} to allow for additional supervision on ${\bf y}$ (more details below). With the factorization in~\eqnref{eq:p_factor} and~\eqnref{eq:variational}, we can decompose \eqnref{eq:elbo_simple} as:
\begingroup\makeatletter\def\f@size{9.2}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}
\begin{align}
\mathcal{L}&_{ELBO}=~\mathbb{E}_{q_\phi({\bf y}|{\bf x})}\mathbb{E}_{q_\phi({\bf u}_l,{\bf u}_g, {\bf z}| {\bf x},{\bf y})}[p_\theta({\bf y}| {\bf u}_l,{\bf u}_g, {\bf z})]\label{eq:elbo_y_}\\
&-\mathbb{E}_{q_\phi({\bf y},{\bf u}_l,{\bf u}_g|{\bf x})}\big[KL[q_{\phi}({\bf z}|{\bf x},{\bf y})||p_\theta({\bf z} | {\bf u}_l,{\bf u}_g, {\bf x})]\big]\label{eq:elbo_z}\\
&- \mathbb{E}_{q_\phi({\bf y}|{\bf x})}\big[KL[q_\phi({\bf u}_l,{\bf u}_g| {\bf x}, {\bf y})||p_\theta({\bf u}_l,{\bf u}_g)]\big]\label{eq:elbo_u}\\
&- \mathbb{E}_{q_{\phi}({\bf y}|{\bf x})}[q_{\phi}({\bf y}|{\bf x})],\label{eq:elbo_y}
\end{align}
\endgroup
where each term is computed with our neural network parameterization; \figref{fig:arch}(left) shows the network structure. Below we briefly discuss the intuition of each term.
\begin{compactenum}
\item[(1)] \textbf{\eqnref{eq:elbo_y_} predicts} the label ${\bf y}$ using ${\bf u}_l$, ${\bf u}_g$, and ${\bf z}$ inferred from ${\bf x}$ (${\bf y}$ is marginalized out).
\item[(2)] \textbf{\eqnref{eq:elbo_z} regularizes} the inference model $q_{\phi}({\bf z}|{\bf x},{\bf y})$ to get closer to the generative model $p_\theta({\bf z} | {\bf u}_l,{\bf u}_g, {\bf x})$.
\item[(3)] \textbf{\eqnref{eq:elbo_u} regularizes} $q_{\phi}({\bf u}_l,{\bf u}_g|{\bf x},{\bf y})$ using the prior distribution $p({\bf u}_l,{\bf u}_g)$.
\item[(4)] \textbf{\eqnref{eq:elbo_y} regularizes} $q_{\phi}({\bf y}|{\bf x})$ by maximizing its entropy, preventing it from collapsing to deterministic solutions.
\end{compactenum}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\textwidth]{img/corrected_path.pdf}
\vspace{-10pt}
\caption{\label{fig:arch}Network structure. We omit subscripts of $p_\theta$ and $q_\phi$ for clarity. \textbf{Left:} CounTS makes \emph{predictions} on ${\bf y}$ using the yellow branch. \textbf{Right:} Given the current ${\bf x}$ and ${\bf y}^{pred}$, CounTS generates a \emph{counterfactual explanation} ${\bf x}^{cf}$ for any target label ${\bf y}^{cf}$ using the blue branch.}
\vspace{-11pt}
\end{figure*}
\textbf{Final Objective Function.}
Inspired by~\citep{CEVAE}, we include an additional term ($N$ is the training set size)
\begin{align}
\mathcal{L}_y = \sum\nolimits_{i=1}^N\log q({\bf y}_i|{\bf x}_i)
\end{align}
to supervise $q({\bf y}|{\bf x})$ using label ${\bf y}$ during training; this helps produce more accurate estimation for ${\bf u}_l$, ${\bf u}_g$, ${\bf z}$ using $q_\phi({\bf u}_l,{\bf u}_g|{\bf x},{\bf y})$ and $q_\phi({\bf z}|{\bf x},{\bf y})$.
The final objective function then becomes
\begin{align}
\mathcal{L}_{CounTS} = \mathcal{L}_{ELBO} + \lambda \mathcal{L}_y,\label{eq:final}
\end{align}
where $\lambda$ is a hyperparameter balancing both terms.
{\bf s}ubsection{Inference Using CounTS}
After learning both the generative model ({\bf e}qnref{eq:p_factor}) and the inference model ({\bf e}qnref{eq:variational}) by maximizing ${\bf L}M_{CounTS}$ in~{\bf e}qnref{eq:final}, our {\bf t}extbf{self-interpretable} CounTS can perform inference to (1) make {\bf t}extbf{predictions} on ${\bf y}$ using the yellow branch in~{\bf f}igref{fig:arch}(left), and (2) generate {\bf t}extbf{counterfactual explanation} ${\bf x}^{cf}$ for any target label ${\bf y}^{cf}$ using the blue branch in~{\bf f}igref{fig:arch}(right).
\subsubsection{Prediction}
We use the yellow branch in~\figref{fig:arch}~(left) to \emph{predict} $\mathbf{y}$. Specifically, given an input time series $\mathbf{x}$, the encoder first infers the \emph{initial label} $\mathbf{y}$ using $q_\phi(\mathbf{y}|\mathbf{x})$ and then infers $(\mathbf{u}_l,\mathbf{u}_g)$ and $\mathbf{z}$ from $\mathbf{x}$ and $\mathbf{y}$ using $q_\phi(\mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x},\mathbf{y})$. $(\mathbf{u}_l,\mathbf{u}_g)$ and $\mathbf{z}$ are then fed into the predictor $p_\theta(\mathbf{y}|\mathbf{u}_l,\mathbf{u}_g,\mathbf{z})$ to infer the \emph{final} label $\mathbf{y}$. Formally, CounTS predicts the label as
\begin{align}
\mathbf{y}^{pred} = \mathbb{E}_{q_\phi(\mathbf{y}|\mathbf{x})}\mathbb{E}_{q_\phi(\mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x},\mathbf{y})}[p_\theta(\mathbf{y}|\mathbf{u}_l,\mathbf{u}_g,\mathbf{z})].
\end{align}
Empirically, we find that directly using the means of $q_\phi(\mathbf{y}|\mathbf{x})$ and $q_\phi(\mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x},\mathbf{y})$ as input to $p_\theta(\mathbf{y}|\mathbf{u}_l,\mathbf{u}_g,\mathbf{z})$ already achieves satisfactory accuracy.
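A minimal sketch of this mean-based shortcut is given below; it reuses the hypothetical \texttt{ToyCounTS} modules from the previous sketch and simply replaces sampling with distribution means.
\begin{verbatim}
import torch

@torch.no_grad()
def predict(model, x):
    """Mean-based prediction; `model` is the hypothetical ToyCounTS above."""
    y_init = model.enc_y(x).softmax(-1)                              # mean of q(y|x)
    u_mu, _ = model.enc_u(torch.cat([x, y_init], -1)).chunk(2, -1)   # mean of q(u|x,y)
    z_mu, _ = model.enc_z(torch.cat([x, y_init], -1)).chunk(2, -1)   # mean of q(z|x,y)
    return model.dec_y(torch.cat([u_mu, z_mu], -1)).argmax(-1)       # argmax of p(y|u,z)
\end{verbatim}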
\subsubsection{Generating Counterfactual Explanations}
We use the blue branch in~\figref{fig:arch}~(right) to generate counterfactual explanations via counterfactual inference. Our goal is to find the optimal counterfactual explanation $\mathbf{x}^{cf}$ defined below.
\begin{definition}[\textbf{Optimal Counterfactual Explanation}]\label{def:counter}
Given a factual observation $\mathbf{x}$ and prediction $\mathbf{y}^{pred}$, the optimal counterfactual explanation $\mathbf{x}^{cf}$ for the counterfactual outcome $\mathbf{y}^{cf}$ is
\begin{align*}
\mathbf{x}^{cf} = \mathop{\mathrm{argmax}}\nolimits_{\mathbf{x}'}~ p(Y_{\mathbf{x}=\mathbf{x}'}(\mathbf{u})=\mathbf{y}^{cf}),
\end{align*}
where $\mathbf{u}=(\mathbf{u}_l,\mathbf{u}_g)$ and the counterfactual likelihood is defined as
\begingroup\makeatletter\def\f@size{9.5}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}
\begin{align}
&p(Y_{\mathbf{x}=\mathbf{x}^{\prime}}(\mathbf{u})=\mathbf{y}^{cf}) \label{eq:cf} \\
&= \sum_{\mathbf{u}} p\left(\mathbf{y}=\mathbf{y}^{cf} \,|\, do\left(\mathbf{x}=\mathbf{x}^{\prime}\right), \mathbf{u}\right) p(\mathbf{u} \,|\, \mathbf{x}=\mathbf{x}, \mathbf{y}=\mathbf{y}^{pred}). \nonumber
\end{align}
\endgroup
\end{definition}
In words, we search for the optimal $\mathbf{x}^{cf}$ that would have shifted the model prediction from $\mathbf{y}^{pred}$ to $\mathbf{y}^{cf}$ while keeping $(\mathbf{u}_l,\mathbf{u}_g)$ unchanged. Since the definition of counterfactual explanations in~\defref{def:counter} involves causal inference with an intervention on $\mathbf{x}$, we first need to \emph{identify} the causal probability $p(\mathbf{y}=\mathbf{y}^{cf} | do(\mathbf{x}=\mathbf{x}^{\prime}), \mathbf{u})$ using observational probabilities, i.e., remove the `do' operator from the equation. The theorem below shows that this is achievable.
\begin{theorem}[\textbf{Identifiability}]\label{thm:do}
Given the posterior distribution of the exogenous variables $p(\mathbf{u}_l,\mathbf{u}_g|\mathbf{x},\mathbf{y})$, the effect of action $p(\mathbf{y}=\mathbf{y}^{cf} | do(\mathbf{x}=\mathbf{x}^{\prime}), \mathbf{u}_l,\mathbf{u}_g)$ can be identified as $\mathbb{E}_{p(\mathbf{z} | \mathbf{x}^{\prime}, \mathbf{u}_l,\mathbf{u}_g)} [p\left(\mathbf{y}^{cf} | \mathbf{z}, \mathbf{u}_l,\mathbf{u}_g\right)]$.
\end{theorem}
See \appref{sec:proof} for the proof. With
\thmref{thm:do}, we can rewrite \eqnref{eq:cf} as
\begin{align}
\mathcal{L}_{cf} = \mathbb{E}_{p(\mathbf{u} | \mathbf{x}=\mathbf{x}, \mathbf{y}=\mathbf{y}^{pred})}\mathbb{E}_{p(\mathbf{z} | \mathbf{x}^{\prime}, \mathbf{u})} [p(\mathbf{y}^{cf} | \mathbf{z}, \mathbf{u}) ],\label{eq:cf_obj}
\end{align}
where $\mathbf{u}=(\mathbf{u}_l,\mathbf{u}_g)$ and $p(\mathbf{u} | \mathbf{x}=\mathbf{x}, \mathbf{y}=\mathbf{y}^{pred})$ is approximated by $q_\phi(\mathbf{u} | \mathbf{x}=\mathbf{x}, \mathbf{y}=\mathbf{y}^{pred})$. We use Monte Carlo estimates to compute the expectations in \eqnref{eq:cf} and \eqnref{eq:cf_obj}, iteratively compute the gradient $\frac{\partial\mathcal{L}_{cf}}{\partial\mathbf{x}'}$ (via back-propagation) to search for the optimal $\mathbf{x}'$ in a way similar to~\citep{BIN,ReverseAttack}, and use it as $\mathbf{x}^{cf}$ (see the complete algorithm in \appref{sec:app_algo}).
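The sketch below illustrates this gradient-based search, again with the hypothetical \texttt{ToyCounTS} modules: $\mathbf{u}$ is inferred once from the factual pair $(\mathbf{x},\mathbf{y}^{pred})$ and then held fixed while $\mathbf{x}'$ is optimized; the cross-entropy to the target label serves as a surrogate for $-\mathcal{L}_{cf}$, and the step count and learning rate are arbitrary choices, not the values used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def counterfactual_explanation(model, x, y_cf, steps=200, lr=0.05):
    """Gradient search for x^cf; `model` is the hypothetical ToyCounTS above and
    `y_cf` holds integer target labels. u is inferred once from the factual pair
    (x, y_pred) and kept fixed while x' is optimized."""
    with torch.no_grad():
        y_pred = model.enc_y(x).softmax(-1)
        u_mu, _ = model.enc_u(torch.cat([x, y_pred], -1)).chunk(2, -1)
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)      # only x_cf is optimized, not the model
    for _ in range(steps):
        z_mu, _ = model.prior_z(torch.cat([x_cf, u_mu], -1)).chunk(2, -1)  # p(z|x', u)
        logits = model.dec_y(torch.cat([u_mu, z_mu], -1))                  # p(y|z, u)
        nll = F.cross_entropy(logits, y_cf)    # minimizing this raises p(y^cf | x', u)
        opt.zero_grad(); nll.backward(); opt.step()
    return x_cf.detach()
\end{verbatim}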
\section{Experiments}\label{sec:exp}
In this section, we evaluate our CounTS and existing methods on two \emph{synthetic} and three \emph{real-world} datasets. For each dataset, we evaluate the different methods in terms of three metrics: (1) \textbf{prediction accuracy}, (2) \textbf{counterfactual accuracy}, and (3) \textbf{counterfactual change ratio}, with the last one being the most important metric. These metrics take different forms for different datasets (see details in~\secref{sec:toy}-\ref{sec:real}).
\subsection{Baselines and Implementations}
We compare our CounTS with state-of-the-art methods for generating explanations for deep learning models, including Regularized Gradient Descent (\textbf{RGD})~\citep{RGD}, Gradient-weighted Class Activation Mapping (\textbf{GradCAM})~\citep{GradCAM},
Gradient SHapley Additive exPlanations (\textbf{GradSHAP})~\citep{GradSHAP}, Local Interpretable Model-agnostic Explanations (\textbf{LIME})~\citep{LIME}, Feature Importance in Time (\textbf{FIT})~\citep{FIT}, Case-crossover APriori (\textbf{CAP})~\citep{CAP}, and Counterfactual Residual Generative Adversarial Network (\textbf{CounteRGAN})~\citep{CounteRGAN} (see~\appref{sec:app_baseline} for more details). Note that among these baselines, only RGD and CounteRGAN can generate actionable explanations. The other baselines, including FIT (which is designed for time series models), only provide importance scores as explanations; some evaluation metrics are therefore not applicable to them (shown as `-' in the tables).
All methods above are implemented in PyTorch~\citep{PyTorch}. For a fair comparison, the prediction model in all the baseline explanation methods has the same neural network architecture as the inference module in our CounTS. See the Appendix for more details on the architecture, training, and inference.
\begin{table}[t]
\vskip -0.25cm
\caption{\textbf{Results on the toy dataset.} We mark the best result in \textbf{bold face} and the second best result with an \underline{underline}.}
\label{tab:toy_result}
\setlength{\tabcolsep}{2pt}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{cccccccccc}
\toprule
& CounteRGAN & RGD & GradCAM & GradSHAP & LIME & FIT & CAP & CounTS (Ours) \\ \midrule
Pred. Accuracy (\%) $\uparrow$ & \multicolumn{7}{c}{-------------------------- \quad\textbf{85.19}\quad --------------------------} & \underline{83.26} \\
CCR $\uparrow$ & \underline{1.25} & 1.21 & 1.09 & 1.13 & 1.2 & 1.15 & 0.97 & \textbf{1.33} \\
Counterf. Accuracy (\%) $\uparrow$ & \textbf{78.75} & \underline{77.96} & - & - & - & 45.62 & - & 70.28 \\ \bottomrule
\end{tabular}
}
\end{table}
\subsection{Toy Dataset}\label{sec:toy}
\textbf{Dataset Description.}
We designed a toy dataset where the label is affected by only part of the input. A good counterfactual explanation should modify only this part of the input while keeping the other part unchanged.
Following the causal graph in \figref{fig:causal_graph}~(left), we have input $\mathbf{x}\in\mathbb{R}^{12}$ with each entry independently sampled from $\mathcal{N}(\mu_x, \sigma^2_x)$ and an exogenous variable $u \in \mathbb{R}$ (the confounder) sampled from $\mathcal{N}(\mu_u, \sigma^2_u)$. To indicate which part of $\mathbf{x}$ affects the label, we introduce a mask vector $\mathbf{m} \in \mathbb{R}^{12}$ with the first $6$ entries $\mathbf{m}_{1:6}$ set to $1$ and the last $6$ entries $\mathbf{m}_{7:12}$ set to $0$. We then generate $\mathbf{z}\in\mathbb{R}^{12}$ as $\mathbf{z} = u \cdot(\mathbf{m} \odot \mathbf{x})$ and the label $y \sim \mathrm{Bern}(\sigma(\mathbf{z}^\top\mathbf{1} + u))$, where $\sigma$ is the sigmoid function. Here $\mathbf{x}$'s first $6$ entries $\mathbf{x}_{1:6}$ are label-related and the last $6$ entries $\mathbf{x}_{7:12}$ are label-agnostic.
\textbf{Evaluation Metrics.}
We use three evaluation metrics:
\begin{compactitem}
\item \textbf{Prediction Accuracy.}
This is the percentage of time series in the test set that are correctly predicted ($y^{pred} = y$).
\item \textbf{Counterfactual Accuracy.}
For a prediction model $f$, a generated counterfactual explanation $\mathbf{x}^{cf}$, and a target label $y^{cf}$, counterfactual accuracy is the percentage of time series for which $\mathbf{x}^{cf}$ successfully changes the model's prediction to $y^{cf}$ (i.e., $f(\mathbf{x}^{cf}) = y^{cf}$).
\item \textbf{Counterfactual Change Ratio (CCR).}
This measures how well the counterfactual explanation $\mathbf{x}^{cf}$ changes the label-related input $\mathbf{x}_{1:6}$ while keeping the label-agnostic input $\mathbf{x}_{7:12}$ unchanged. Formally, we use the average ratio $\frac{\|\mathbf{x}_{1:6}^{cf} - \mathbf{x}_{1:6}\|_1}{\|\mathbf{x}_{7:12}^{cf} - \mathbf{x}_{7:12}\|_1}$ across the test set (a minimal sketch follows after this list).
\end{compactitem}
Note that CCR is the most important metric among the three since our main focus is to generate actionable and feasible counterfactual explanations.
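As a concrete illustration (with 0-based indexing and the first six columns as the label-related part), CCR on the toy dataset can be computed as follows; this is our own minimal sketch, not the evaluation code.
\begin{verbatim}
import torch

def toy_ccr(x, x_cf):
    """Toy-dataset CCR: L1 change on the label-related half x[:, :6] divided by
    the L1 change on the label-agnostic half x[:, 6:], averaged over the test set."""
    num = (x_cf[:, :6] - x[:, :6]).abs().sum(dim=1)
    den = (x_cf[:, 6:] - x[:, 6:]).abs().sum(dim=1)
    return (num / den).mean()
\end{verbatim}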
\textbf{Quantitative Results.}
\tabref{tab:toy_result} compares our CounTS with the baselines in terms of the three metrics.
CounTS outperforms all the baselines in terms of CCR with minimal loss in prediction accuracy. This shows that our CounTS successfully identifies and fixes the exogenous variables $u$ and $\mathbf{m}$ to generate actionable and feasible counterfactual explanations while still achieving prediction accuracy comparable to the baselines.
Note that the actionable methods (i.e., CounTS, RGD, and CounteRGAN) outperform the importance-score methods (i.e., FIT, LIME, and GradSHAP). This is expected because importance-score-based explanations can only explain the original prediction and therefore fail to infer what change to $\mathbf{x}$ will shift the prediction from $y^{pred}$ to $y^{cf}$.
Our counterfactual accuracy is lower than that of CounteRGAN and RGD. This is reasonable since these baselines do not infer and fix the posterior distribution $p(\mathbf{u}|\mathbf{x},\mathbf{y})$, which gives the generator (or back-propagation) more flexibility to modify the input and push $y^{pred}$ closer to the target $y^{cf}$.
However, such flexibility comes at the cost of low feasibility, reflected in their poor CCR performance.
\begin{table}[t]
\vskip -0.25cm
\caption{\textbf{Results on the \emph{Spike} dataset.} We mark the best result in \textbf{bold face} and the second best result with an \underline{underline}.}
\setlength{\tabcolsep}{2pt}
\label{tab:spike}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{cccccccccc}
\toprule
& & CounteRGAN & RGD & GradCAM & GradSHAP & LIME & FIT & CAP & CounTS (Ours) \\ \midrule
Pred. MSE $\downarrow$ & & \multicolumn{7}{c}{-------------------------- \quad\underline{0.128}\quad --------------------------} & \textbf{0.117} \\
\multirow{2}{*}{CCR $\uparrow$} & 2 Active & 2.206 & \underline{2.217} & 2.043 & 1.989 & 1.872 & 1.863 & 1.614 & \textbf{2.322} \\
& 1 Active & \underline{0.705} & 0.683 & 0.654 & 0.615 & 0.473 & 0.55 & 0.497 & \textbf{0.730} \\
Counterf. MSE $\downarrow$ & & \textbf{0.074} & \textbf{0.074} & - & - & - & 0.394 & - & \underline{0.103} \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{figure*}[th]
\centering
\includegraphics[width=1\textwidth]{img/synthetic_finalized.pdf}
\vspace{-25pt}
\caption{\label{fig:syntheticcase}Qualitative results on the \emph{Spike} dataset. \textbf{Column 1} shows the original input $\mathbf{x}$ for CounTS, RGD, and CounteRGAN, respectively. \textbf{Column 2} shows the predictions $\mathbf{y}^{pred}$ and counterfactual (target) labels $\mathbf{y}^{cf}$. The predictions for RGD and CounteRGAN are identical since they use the same prediction model. \textbf{Column 3} shows the counterfactual input $\mathbf{x}^{cf}$. \textbf{Column 4} shows the changes to the input, i.e., $\mathbf{x}^{cf} - \mathbf{x}$. FIT \emph{cannot} provide actionable explanations (see the importance scores generated by FIT in \appref{sec:app_add}).}
\vspace{-15pt}
\end{figure*}
\subsection{Spike Dataset}\label{sec:synthetic}
\textbf{Dataset Description.}
Inspired by~\citep{FIT}, we construct the \emph{Spike} synthetic dataset. In this dataset, each time series contains $3$ channels, i.e., $\mathbf{x}\in\mathbb{R}^{3\times T}$. Each channel is an independent non-linear auto-regressive moving average (NARMA) sequence with randomly distributed spikes. The label sequence $\mathbf{y}\in\mathbb{R}^{T}$ starts at $0$; it \emph{may} switch to $1$ as soon as a spike is observed in any of the $3$ channels and then remains $1$ until the last timestamp. A $3$-dimensional exogenous variable $\mathbf{u}\in\{0,1\}^{3}$
determines whether a spike in a channel can affect the final output; if so, we say the channel is \emph{active}. The three entries of $\mathbf{u}$ are sampled independently from three Bernoulli distributions with parameters $0.8$, $0.4$, and $0$, respectively
(see more details on the dataset in~\appref{sec:app_dataset}).
\textbf{Evaluation Metrics.}
We use three evaluation metrics:
\begin{compactitem}
\item \textbf{Prediction MSE.}
We use the mean squared error (MSE) $\frac{1}{N}\sum_i^{N}\|\mathbf{y}_i^{pred} - \mathbf{y}_i\|_2^2$ to measure the prediction error on a test set with $N$ time series.
\item \textbf{Counterfactual MSE.}
Similar to~\secref{sec:toy}, for a prediction model $f$, a generated counterfactual explanation $\mathbf{x}^{cf}$, and a target label $\mathbf{y}^{cf}$, counterfactual MSE is defined as $\frac{1}{N}\sum_i^{N} \|f(\mathbf{x}_i^{cf}) - \mathbf{y}_i^{cf}\|_2^2$; it measures how successfully $\mathbf{x}^{cf}$ changes the model's prediction to $\mathbf{y}^{cf}$.
\item \textbf{CCR.}
For a given input $\mathbf{x}_i$, we set the target counterfactual label $\mathbf{y}_i^{cf}$ by shifting $\mathbf{y}^{pred}$ by $20$ timestamps to the right.
If at timestamp $t$ there is a spike in an active channel of $\mathbf{x}_i$ triggering the output $\mathbf{y}_i^{pred}$ to switch from $0$ to $1$, an ideal counterfactual explanation $\mathbf{x}_i^{cf}$ should (1) suppress all spikes in $[t, t+20)$, (2) create a new spike at timestamp $t+20$ in all active channels of the original input $\mathbf{x}_i$, and (3) keep all inactive channels unchanged. Therefore the counterfactual change ratio can be defined as (with $N$ time series; a minimal sketch follows after this list):
$
CCR = \frac{1}{N} \sum_{i=1}^N \frac{\|\mathbf{m}_i\odot (\mathbf{x}_i - \mathbf{x}_i^{cf})\|_1}{\|(\mathbf{1}-\mathbf{m}_i)\odot (\mathbf{x}_i - \mathbf{x}_i^{cf})\|_1},
$
where $\mathbf{m}_i=[\mathbf{u},\mathbf{u},\dots,\mathbf{u}]\in\mathbb{R}^{3\times T}$ repeats $\mathbf{u}$ at each time step to mask inactive channels.
\end{compactitem}
Since the scale of CCR depends on the number of active channels, we report results separately for time series with $1$ and $2$ active channels. Higher CCR indicates better performance.
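A minimal sketch of this masked CCR, assuming \texttt{x} and \texttt{x\_cf} of shape \texttt{(N, 3, T)} and a binary active-channel indicator \texttt{u} of shape \texttt{(N, 3)}:
\begin{verbatim}
import torch

def spike_ccr(x, x_cf, u):
    """Spike-dataset CCR. x, x_cf: (N, 3, T); u: (N, 3) binary active-channel
    indicators. The mask m repeats u over time, as in the formula above."""
    m = u.unsqueeze(-1).expand_as(x).float()
    num = (m * (x - x_cf)).abs().flatten(1).sum(dim=1)        # change in active channels
    den = ((1 - m) * (x - x_cf)).abs().flatten(1).sum(dim=1)  # change in inactive channels
    return (num / den).mean()
\end{verbatim}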
\textbf{Quantitative Results.}
\tabref{tab:spike} shows the results of the different methods on the \emph{Spike} dataset. As on the toy dataset, our CounTS outperforms all baselines in terms of CCR, and the actionable methods (i.e., CounTS, RGD, and CounteRGAN) outperform the importance-score methods (i.e., FIT, LIME, and GradSHAP) thanks to the former's capability of modifying the input to shift the prediction towards the counterfactual target label.
Interestingly, besides its promising CCR, our CounTS also improves prediction performance, achieving lower prediction MSE. This is potentially due to CounTS's ability to model the exogenous variable $\mathbf{u}$, which decides whether a spike in a specific channel affects the label $\mathbf{y}$. As on the toy dataset, we notice that both CounteRGAN and RGD have lower counterfactual MSE (both at $0.074$) than CounTS because they do not need to infer and fix the exogenous variable $\mathbf{u}$ (i.e., the mask) and therefore have more flexibility to modify the input $\mathbf{x}$.
Since both CounteRGAN and RGD are unaware of the mask, they suffer from lower CCR and produce more unwanted modifications in inactive channels (more details below).
\textbf{Qualitative Results.}
\figref{fig:syntheticcase} shows an example time series as a case study. In this example, only channel $1$ (blue) is active, and thus the spike in channel $1$ at timestamp $t=16$ flips the output $\mathbf{y}$ from $0$ to $1$. Both CounTS's and RGD's predictions $\mathbf{y}^{pred}$ (blue in Column 1) are very close to the ground truth.
Following the evaluation protocol above, we set the counterfactual (target) label $\mathbf{y}^{cf}$ by shifting $\mathbf{y}^{pred}$ by $20$ timestamps to the right and padding with $0$s on the left (yellow).
An ideal counterfactual explanation should therefore have no spike in $t=[16, 36)$ and a new spike at $t=36$ in channel 1. Since channels 2 and 3 are inactive, they should remain unchanged. We can see that all actionable methods (i.e., CounTS, CounteRGAN, and RGD) try to reduce the spikes in $t=[16,36)$. However, CounteRGAN fails to remove the spike at $t=29$, and both RGD and CounteRGAN make undesirable changes in the inactive channels 2 and 3. Compared to the baselines, our CounTS shows a significant advantage in inferring the active channel and suppressing changes in inactive channels. This demonstrates that CounTS is more actionable and feasible than existing time-series explanation methods.
Note that FIT is \emph{not} an actionable explanation method and therefore can only provide importance scores to explain the current prediction $\mathbf{y}^{pred}$ (see \figref{fig:fitcase} in \appref{sec:app_add} for details).
\begin{table}[t]
\vspace{-3pt}
\setlength{\tabcolsep}{3.0pt}
\caption{{\textbf{Average CCR for both intra-dataset and cross-dataset settings.} The average CCR is calculated over 6 counterfactual action settings (A$\rightarrow$D, L$\rightarrow$D, R$\rightarrow$D, R$\rightarrow$A, D$\rightarrow$A, and L$\rightarrow$A) for every model and dataset setting. The \emph{last row} shows the average CCR over all 6 dataset settings for every method.}}
\label{tab:realccr}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{@{}ccccccccc@{}}
\toprule
& CounteRGAN & RGD & GradCAM & GradSHAP & LIME & FIT & CAP & CounTS (Ours) \\ \midrule
$\rightarrow$SOF & \underline{1.153} & 1.135 & 1.126 & 0.908 & 0.932 & 1.072 & 1.034 & \textbf{1.313} \\
$\rightarrow$SHHS & 1.231 & \underline{1.241} & 1.056 & 0.975 & 1.020 & 1.193 & 0.929 & \textbf{1.390} \\
$\rightarrow$MESA & \underline{1.285} & 1.188 & 1.031 & 1.025 & 1.007 & 1.142 & 1.031 & \textbf{1.377} \\ \midrule \midrule
SOF & \underline{1.126} & 1.088 & 0.887 & 0.996 & 1.078 & 1.053 & 0.963 & \textbf{1.281} \\
SHHS & \underline{1.133} & 1.111 & 0.890 & 1.068 & 1.059 & 1.067 & 1.000 & \textbf{1.333} \\
MESA & 1.163 & \underline{1.166} & 0.883 & 1.042 & 0.987 & 1.089 & 0.962 & \textbf{1.329} \\ \midrule
Average & \underline{1.182} & 1.155 & 0.979 & 1.002 & 1.014 & 1.103 & 0.987 & \textbf{1.337} \\ \bottomrule
\end{tabular}
}
\end{table}
\subsection{Real-World Datasets}\label{sec:real}
\textbf{Dataset Description.}
We evaluate our model on three real-world medical datasets: the Sleep Heart Health Study (\emph{SHHS})~\citep{SHHS}, the Multi-Ethnic Study of Atherosclerosis (\emph{MESA})~\citep{MESA}, and the Study of Osteoporotic Fractures (\emph{SOF})~\citep{SOF}, each containing subjects' full-night breathing signals. The breathing signals are divided into $30$-second segments; each segment has one of the sleep stages (`Awake', `Light Sleep', `Deep Sleep', `Rapid Eye Movement (REM)') as its label. In total, there are $2{,}651$, $2{,}055$, and $453$ patients in SHHS, MESA, and SOF, respectively, with an average of $1{,}043$ breathing signal segments (approximately $8.7$ hours) per patient.
\begin{table}[t]
\vspace{-3pt}
\caption{\textbf{CCR for each counterfactual action setting in \emph{SHHS}.}}
\vskip 0.1pt
\label{tab:shhs}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
& A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average \\ \midrule
CounteRGAN & 1.119 & 1.144 & 1.055 & 1.298 & 1.125 & 1.059 & \underline{1.133} \\
RGD & 1.006 & 1.130 & 0.985 & 1.180 & \underline{1.155} & 1.201 & 1.111 \\
GradCAM & 0.882 & 0.790 & 0.859 & 0.954 & 0.882 & 0.977 & 0.890 \\
GradSHAP & 1.100 & 1.105 & 0.903 & 1.171 & 1.100 & 1.027 & 1.068 \\
LIME & 0.855 & \textbf{1.178} & 1.067 & 1.125 & 0.855 & \underline{1.273} & 1.059 \\
FIT & 0.948 & 1.012 & 1.215 & \underline{1.405} & 0.948 & 0.877 & 1.067 \\
CAP & 0.885 & 0.829 & \underline{1.236} & 0.967 & 0.885 & 1.199 & 1.000 \\
CounTS (Ours) & \textbf{1.244} & \underline{1.159} & \textbf{1.349} & \textbf{1.515} & \textbf{1.324} & \textbf{1.407} & \textbf{1.333}\\ \bottomrule
\end{tabular}
}
\end{table}
Corresponding to our causal graph in \figref{fig:causal_graph}~(left), $\mathbf{x}$ is the original breathing signal, $\mathbf{z}$ is the signal pattern (representation) learned by the encoder, $\mathbf{u}$ can be `gender', `age', or even `dataset index' (`dataset index' is used during cross-dataset prediction; more details later), and $\mathbf{y}$ is the sleep stage label.
Intuitively, `gender', `age', and `dataset index' (reflecting different instruments and signal measurement procedures) can have a causal effect on both the breathing signal patterns (subjects of different ages can have different breathing frequencies and magnitudes) and the sleep stages (elderly subjects have less `Deep Sleep' at night).
\textbf{Evaluation Metrics.}
For brevity, we use the abbreviations `A', `R', `L', and `D' to represent the four sleep stages `Awake', `Rapid Eye Movement (REM)', `Light Sleep', and `Deep Sleep', respectively.
We use three evaluation metrics:
\begin{compactitem}
\item \textbf{Prediction Metric} and \textbf{Counterfactual Metric} are similar to those in the toy classification dataset.
\item \textbf{CCR.} Since ground-truth confounders are not available in real-world datasets, we propose to compute CCR on two consecutive 30-second segments with different sleep stages. For example,
given two segments $[\mathbf{x}_{i,1},\mathbf{x}_{i,2}]$ with the corresponding predicted sleep stages $[\mathbf{y}_{i,1}^{pred},\mathbf{y}_{i,2}^{pred}] = [A, D]$, we set the counterfactual labels $[\mathbf{y}_{i,1}^{cf},\mathbf{y}_{i,2}^{cf}] = [A, A]$; we refer to this setting as D$\rightarrow$A. An ideal counterfactual explanation $[\mathbf{x}_{i,1}^{cf},\mathbf{x}_{i,2}^{cf}]$ should shift the model prediction from $[A,D]$ to $[A,A]$ while keeping the first segment unchanged (i.e., a smaller $\|\mathbf{x}_{i,1}-\mathbf{x}_{i,1}^{cf}\|_1$). CCR can therefore be computed as (with $N$ segment pairs in the test set):
$
CCR = \frac{1}{N} \sum_{i=1}^N \frac{\|\mathbf{x}_{i,2} - \mathbf{x}_{i,2}^{cf}\|_1}{\|\mathbf{x}_{i,1} - \mathbf{x}_{i,1}^{cf}\|_1}.
$
\end{compactitem}
\vspace{-5pt}
In the experiments, we focus on cases where the target label $\mathbf{y}_{i,2}^{cf}$ is either `Awake' or `Deep Sleep' (the two extremes of sleep stages), and evaluate $6$ \emph{counterfactual action settings}, namely A$\rightarrow$D, L$\rightarrow$D, R$\rightarrow$D, R$\rightarrow$A, D$\rightarrow$A, and L$\rightarrow$A.
\textbf{Cross-Dataset and Intra-Dataset Settings.}
Since the three datasets were collected with different devices and procedures, we consider the dataset index as an additional exogenous variable in $\mathbf{u}$, which has a causal effect on both the learned representation $\mathbf{z}$ and the sleep stage $\mathbf{y}$. This leads to two different settings: the intra-dataset setting (without the dataset index confounder) and the cross-dataset setting (with the dataset index confounder).
\begin{compactitem}
\item \textbf{Intra-Dataset Setting.} Training and test sets are from the same dataset, and we do not include the dataset index as part of the exogenous variable (confounder) $\mathbf{u}$.
\item \textbf{Cross-Dataset Setting.} We choose two of the datasets as the source datasets, with the remaining one as the target dataset (e.g., \emph{MESA}+\emph{SHHS}$\rightarrow$\emph{SOF}). We use all source datasets (e.g., \emph{SHHS} and \emph{MESA}) and $10\%$ of the target dataset (e.g., \emph{SOF}) as the training set, and use the remaining $90\%$ of the target dataset as the test set. We treat the dataset index as part of $\mathbf{u}$, which is predicted during both training and inference.
\end{compactitem}
\textbf{Quantitative Results.}
{Table~\ref{tab:realccr} shows the average CCR of the different methods for both intra-dataset and cross-dataset settings. Our CounTS outperforms the baselines in all dataset settings in terms of average CCR. This demonstrates CounTS's capability of generating feasible and actionable explanations on complex real-world datasets. \tabref{tab:shhs} shows the detailed CCR for each counterfactual action setting in \emph{SHHS}, where CounTS leads in almost all counterfactual action settings. Detailed results for the other datasets can be found in \tabref{tab:sof}$\sim$\ref{tab:tosof} of \appref{sec:app_add}.}
\begin{table}[t]
\vspace{-5pt}
\caption{\textbf{Prediction accuracy for both intra-dataset and cross-dataset settings.} All baselines (e.g., RGD, CounteRGAN, and FIT) explain the same prediction model and therefore share the same prediction accuracy. `$\rightarrow$\emph{SHHS}' means `\emph{MESA}+\emph{SOF}$\rightarrow$\emph{SHHS}', and similarly for `$\rightarrow$\emph{MESA}' and `$\rightarrow$\emph{SOF}'.}
\label{tab:realaccu}
\vskip 0.1cm
\setlength{\tabcolsep}{2pt}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{ccccccc}
\toprule
Method & $\rightarrow$\emph{SOF} & $\rightarrow$\emph{SHHS} & $\rightarrow$\emph{MESA} & \emph{SOF} & \emph{SHHS} & \emph{MESA} \\ \midrule
All Baselines & 0.702 & 0.747 & \textbf{0.768} & 0.729 & \textbf{0.801} & \textbf{0.813} \\
CounTS (Ours) & \textbf{0.719} & \textbf{0.753} & 0.741 & \textbf{0.732} & 0.789 & 0.799\\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\vspace{-5pt}
\caption{\textbf{Average counterfactual accuracy for both intra-dataset and cross-dataset settings.} The average is calculated over all 6 counterfactual action settings.}
\label{tab:real_cf_accu}
\vskip 0.1cm
\setlength{\tabcolsep}{1pt}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{@{}cccccccc@{}}
\toprule
& $\rightarrow$SOF & $\rightarrow$SHHS & $\rightarrow$MESA & SOF & SHHS & MESA & Average \\ \midrule
RGD & \underline{0.913} & \textbf{0.887} & \textbf{0.907} & \textbf{0.862} & \textbf{0.907} & \textbf{0.892} & \textbf{0.895} \\
CounteRGAN & 0.897 & \underline{0.868} & 0.880 & 0.844 & 0.888 & 0.868 & 0.874 \\
FIT & 0.331 & 0.327 & 0.346 & 0.248 & 0.322 & 0.294 & 0.311 \\
CounTS (Ours) & \textbf{0.916} & \textbf{0.887} & \underline{0.889} & \underline{0.852} & \underline{0.903} & \underline{0.887} & \underline{0.889} \\ \bottomrule
\end{tabular}
}
\end{table}
\tabref{tab:realaccu} shows the prediction accuracy for both intra-dataset and cross-dataset settings. As in \tabref{tab:toy_result} and \tabref{tab:spike}, all baselines (e.g., RGD, CounteRGAN, and FIT) explain the same prediction model and therefore share the same prediction accuracy. The results show that CounTS achieves prediction accuracy comparable to the original prediction model. \tabref{tab:real_cf_accu} shows the average counterfactual accuracy for the different dataset settings over the $6$ counterfactual action settings. As on the toy and \emph{Spike} datasets, RGD achieves higher counterfactual accuracy because it does not need to infer and fix the exogenous variable $\mathbf{u}$ and hence enjoys more flexibility when modifying the input $\mathbf{x}$; this often leads to undesirable changes (e.g., to breathing frequency and amplitude, which are related to the subject's age), less feasible counterfactual explanations, and therefore worse CCR performance. In contrast, breathing frequency and amplitude are potentially captured by $\mathbf{u}_l$ and $\mathbf{u}_g$ in CounTS and kept unchanged (see the qualitative results below). {Detailed counterfactual accuracy for every counterfactual action setting and every dataset setting can be found in \appref{sec:app_add}.}
\begin{figure}[t]
\vskip -0.3cm
\centering
\hspace*{-7pt}
\includegraphics[width=0.49\textwidth]{img/comparison_with_annotation_final.pdf}
\vspace{-25pt}
\caption{\label{fig:realcase}Example from the \emph{MESA} intra-dataset experiment. For this example, $\mathbf{y}^{pred}$ is [D,A] and the target $\mathbf{y}^{cf}$ is [D,D], since we assume `Deep Sleep' and `Awake' have the most different patterns. FIT \emph{cannot} provide actionable explanations (see the importance scores generated by FIT in \appref{sec:app_add}).}
\vskip 0.2cm
\end{figure}
\textbf{Qualitative Analysis: Problem Setting.}
As background on sleep staging, note that a patient's breathing signal is much more periodic during `Deep Sleep' than during `Awake'. \figref{fig:realcase} shows an example time series from the MESA dataset as a case study. It contains two consecutive segments of breathing signals ($60$ seconds in total). The first segment ($0\sim 30$ seconds) is regular and periodic, and therefore both CounTS and the baselines correctly predict its label as `Deep Sleep' (`D'); the second segment ($30\sim 60$ seconds) is irregular, and therefore both CounTS and the baselines correctly predict its label as `Awake' (`A'). The goal is to generate a counterfactual explanation such that the model predicts the \textit{second} segment as `Deep Sleep'.
\textbf{Qualitative Analysis: Ideal Counterfactual Explanation.}
Since a `Deep Sleep' segment should be more periodic, an \textit{ideal} counterfactual explanation should
keep the first segment ($0\sim 30$ seconds) \textit{unchanged} and maintain its \textit{periodicity},
make the second segment ($30\sim 60$ seconds) \textit{more periodic} (the pattern for `Deep Sleep'), and
keep the breathing frequency (number of breathing cycles) unchanged throughout both segments (this relates to ``feasibility'', since a patient usually keeps the same breathing frequency during `Deep Sleep').
\textbf{Qualitative Analysis: Detailed Results.}
\figref{fig:realcase} compares the counterfactual explanations generated by CounTS, RGD, and CounteRGAN; our CounTS's explanation is the closest to the ideal case. Specifically:
\begin{compactenum}
\item \textbf{The First 30-Second Segment:} In the original breathing signal, the first 30-second segment ($0\sim 30$ seconds) is periodic and contains 7 breathing cycles (i.e., 7 periods). Our CounTS keeps the first 30-second segment mostly unchanged, maintaining its periodicity and 7 breathing cycles. In contrast, RGD changes the first segment to the point that its periodicity is mostly lost. CounteRGAN maintains the first segment's periodicity; however, it increases the number of breathing cycles from 7 to 8, making it \textit{infeasible}.
\item \textbf{The Second 30-Second Segment:} In the original breathing signal, the second 30-second segment ($30\sim 60$ seconds) is irregular since the patient is in the `Awake' sleep stage. CounTS makes the `Awake' (i.e., `Active') signal much more periodic, successfully steering the prediction to `Deep Sleep'. In contrast, RGD and CounteRGAN fail to make the second 30-second segment ($30\sim 60$ seconds) more periodic.
\item \textbf{Capturing and Maintaining Global Temporal Patterns:} Meanwhile, CounTS's explanation for the second 30-second segment ($30\sim 60$ seconds) has a breathing frequency (measured by the number of breathing cycles per minute) much more similar to that of the first 30-second segment ($0\sim 30$ seconds), compared to RGD and CounteRGAN. This shows that CounTS is capable of capturing global temporal patterns (the breathing frequency of the individual) and keeping such exogenous variables unchanged in its explanation, thanks to the global exogenous variable $\mathbf{u}_g$ in our model.
\end{compactenum}
\section{Conclusion}
In this paper, we identified the actionability and feasibility requirements for counterfactual explanations of time series models and proposed the first self-interpretable time series prediction model, CounTS, that satisfies both requirements. Our theoretical analysis shows that CounTS is guaranteed to identify the causal effect between time series input and output in the presence of confounding variables,
thereby generating counterfactual explanations in the causal sense.
Empirical results show that our method achieves competitive prediction accuracy on time series data and is capable of generating more actionable and feasible counterfactual explanations. Interesting future work includes further reducing the computational complexity of counterfactual inference, handling uncertainty in the explanation~\citep{TFUncertainty}, and extending our method to multimodal data.
\section*{Acknowledgement}
We would like to thank the reviewers/AC for the constructive comments that improved the paper. JY and
HW are partially supported by NSF Grant IIS-2127918. We also thank the National Sleep Research Resource (NSRR) data repository and the researchers for sharing SHHS~\citep{SHHS1,SHHS2}, MESA~\citep{SHHS1, MESA1}, and SOF~\citep{SHHS1, SOF1}.
\nocite{langley00}
\bibliography{paper}
\bibliographystyle{icml2023}
\newpage
\appendix
\onecolumn
\begin{center}
\LARGE{\textbf{Supplementary Material}}
\end{center}
\section{Evidence Lower Bound}\label{sec:app_elbo}
We can derive the evidence lower bound (ELBO) in~\eqnref{eq:elbo_simple} as follows:
\begin{align*}
&\log p(\mathbf{y}|\mathbf{x}) \\
&=\log \int_{\mathbf{z}}\int_{\mathbf{u}} p(\mathbf{y}, \mathbf{z},\mathbf{u} | \mathbf{x}) \, d\mathbf{u} \, d\mathbf{z} \\
&\geq \mathbb{E}_{q_\phi (\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})} \left[ \log\frac{p_\theta(\mathbf{y}, \mathbf{u}, \mathbf{z} | \mathbf{x})}{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})}\right] \\
&=\mathbb{E}_{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})} \left[\log\frac{p(\mathbf{u}) p_\theta(\mathbf{z} | \mathbf{u}, \mathbf{x})p_\theta(\mathbf{y}| \mathbf{u}, \mathbf{z})}{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})} \right] \\
&=\mathbb{E}_{q_\phi(\mathbf{y}, \mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{y}, \mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x})] - \mathbb{E}_{q_\phi(\mathbf{y}, \mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x})}[\log q_\phi(\mathbf{y}, \mathbf{u}_l,\mathbf{u}_g,\mathbf{z}|\mathbf{x})]\\
&=\mathbb{E}_{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})}[\log p(\mathbf{u})]
+\mathbb{E}_{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})}[\log p_\theta(\mathbf{z} | \mathbf{u}, \mathbf{x})]
+\mathbb{E}_{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})}[\log p_\theta(\mathbf{y} | \mathbf{u}, \mathbf{z})]
-\mathbb{E}_{q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})}[\log q_\phi(\mathbf{u}, \mathbf{z}, \mathbf{y}| \mathbf{x})] \label{eq:marginal}
\end{align*}
\section{More Details on Counterfactual Inference}
\subsection{Proof of Identifiability} \label{sec:proof}
\begin{theorem}[\textbf{Identifiability}]
Given the posterior distribution of the exogenous variables $p(\mathbf{u}_l,\mathbf{u}_g|\mathbf{x},\mathbf{y})$, the effect of action $p(\mathbf{y}=\mathbf{y}^{cf} | do(\mathbf{x}=\mathbf{x}^{\prime}), \mathbf{u}_l,\mathbf{u}_g)$ can be identified as $\mathbb{E}_{p(\mathbf{z} | \mathbf{x}^{\prime}, \mathbf{u}_l,\mathbf{u}_g)} [p\left(\mathbf{y}^{cf} | \mathbf{z}, \mathbf{u}_l,\mathbf{u}_g\right)]$.
\end{theorem}
\begin{proof}
With $\mathbf{u}=(\mathbf{u}_l,\mathbf{u}_g)$ and applying Rules 2 and 3 of do-calculus~\citep{Causality}, we have
\begin{align}
&p\left(\mathbf{y}^{cf} | do\left(\mathbf{x}=\mathbf{x}^{\prime}\right), \mathbf{u}\right) \nonumber\\
&\ \ =\int_{\mathbf{z}} p\left(\mathbf{y}^{cf} | \mathbf{z}, do\left(\mathbf{x}^{\prime}\right), \mathbf{u}\right) p\left(\mathbf{z} | do\left(\mathbf{x}^{\prime}\right), \mathbf{u}\right) d\mathbf{z} \nonumber\\
&\ \ =\mathbb{E}_{p\left(\mathbf{z} | do\left(\mathbf{x}^{\prime}\right), \mathbf{u}\right)} \left[ p\left(\mathbf{y}^{cf} | \mathbf{z}, do\left(\mathbf{x}^{\prime}\right), \mathbf{u}\right) \right]\nonumber\\
&\overset{\mathrm{Rule~2}}{=}\mathbb{E}_{p\left(\mathbf{z} | \mathbf{x}^{\prime}, \mathbf{u}\right)} \left[p\left(\mathbf{y}^{cf} | do(\mathbf{z}), do\left(\mathbf{x}^{\prime}\right), \mathbf{u}\right) \right]\nonumber\\
&\overset{\mathrm{Rule~3}}{=}\mathbb{E}_{p\left(\mathbf{z} | \mathbf{x}^{\prime}, \mathbf{u}\right)} \left[p\left(\mathbf{y}^{cf} | do(\mathbf{z}), \mathbf{u}\right) \right]\nonumber\\
&\overset{\mathrm{Rule~2}}{=}\mathbb{E}_{p\left(\mathbf{z} | \mathbf{x}^{\prime}, \mathbf{u}\right)} \left[p\left(\mathbf{y}^{cf} | \mathbf{z}, \mathbf{u}\right) \right],
\end{align}
concluding the proof.
\end{proof}
\subsection{Counterfactual Explanation Algorithm} \label{sec:app_algo}
The pseudo-code for generating counterfactual explanations is shown in \algref{alg:cf}.
\begin{algorithm}[h]
\caption{Generating Counterfactual Explanations}
\label{alg:cf}
\begin{algorithmic}
\STATE {\bfseries Input:} data $\mathbf{x}$, threshold $\epsilon$, numbers of samples $n$ and $m$, step size $\lambda$.
\STATE Sample $\mathbf{y}^{pred}$ from $q_\phi(\mathbf{y}|\mathbf{x})$.
\STATE Set a target counterfactual output $\mathbf{y}^{cf} \neq \mathbf{y}^{pred}$.
\STATE Set $\mathbf{x}^{cf} = \mathbf{x}$.
\WHILE{$|\hat{\mathbf{y}}-\mathbf{y}^{cf}| \geq \epsilon$}
\FOR{$i=1$ {\bfseries to} $m$}
\STATE Sample $\mathbf{u}_i$ from $q_\phi(\mathbf{u}|\mathbf{x},\mathbf{y}^{pred})$
\FOR{$j=1$ {\bfseries to} $n$}
\STATE Sample $\mathbf{z}_{ij} \sim p_\theta(\mathbf{z}|\mathbf{x}^{cf}, \mathbf{u}_i)$
\STATE Sample $\mathbf{y}_{ij} \sim p_\theta(\mathbf{y}|\mathbf{z}_{ij}, \mathbf{u}_i)$
\ENDFOR
\ENDFOR
\STATE Calculate the Monte Carlo prediction $\hat{\mathbf{y}} = \frac{\sum_i \sum_j \mathbf{y}_{ij}}{n\times m}$
\STATE Update $\mathbf{x}^{cf} = \mathbf{x}^{cf} - \lambda \frac{\partial \mid \hat{\mathbf{y}}-\mathbf{y}^{cf} \mid}{\partial \mathbf{x}^{cf}}$
\ENDWHILE
\STATE{\bfseries return} $\mathbf{x}^{cf}$
\end{algorithmic}
\end{algorithm}
\section{More Details on Experiments}
\subsection{Details on Datasets}\label{sec:app_dataset}
\textbf{\emph{Spike} Dataset.} The generation process of the \emph{Spike} dataset is summarized below.
We generate $D=3$ independent channels of non-linear auto-regressive moving average (NARMA) time series data using the following formula:
\begin{equation}
x_{d,t+1}=0.5x_{d,t}+0.5x_{d,t}\sum_{i=0}^{l-1}x_{d,t-i}+1.5u(t-(l-1))u(t)+0.5+\alpha_{d}t \label{eq:spike_narma}
\end{equation}
for $t=[1,\dots,80]$, order $l=2$, $u \sim \mathcal{N}(0, 0.03)$, and $\alpha_d$ set differently for each channel ($\alpha_1 = 0.1$, $\alpha_2 = 0.065$, and $\alpha_3 = 0.003$). We use $d$ to index the $3$ channels.
\begin{equation}
\begin{aligned}
&\boldsymbol{\theta} = [0.8, 0.4, 0]; \\
&n_{d} \sim \operatorname{Bernoulli}(\boldsymbol{\theta}_{d}); \\
&m_{d} \sim \operatorname{Bernoulli}(\boldsymbol{\theta}_{d}); \\
&\eta_{d}= \begin{cases}\operatorname{Poisson}(\lambda=2) & \text{if } \left(n_{d}==1\right) \\
0 & \text{otherwise}\end{cases} \\
&\mathbf{g}_{d} = \operatorname{Sample}\left([T], \eta_{d}\right)\\
&x_{d, t}=x_{d, t}+(\kappa_{d,t}+ \theta_{d}) \quad \text{where } \kappa_{d,t}\sim \mathcal{N}(1, 0.3) \quad \forall t \in \mathbf{g}_{d} \label{synthetic:2} \\
& y_{t}=\left\{\begin{array}{ll}
0 & t \leqslant \min \left(\mathbf{g}_d\right) \text{ where } m_d=1 \\
1 & \text{otherwise}
\end{array}\right.
\end{aligned}
\end{equation}
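For illustration, the sketch below implements the spike-injection and labeling steps above on a given NARMA base series; the boundary handling (e.g., when no active channel receives a spike) is our own simplification and not necessarily the exact procedure used to build the released dataset.
\begin{verbatim}
import numpy as np

def add_spikes_and_label(x, theta=(0.8, 0.4, 0.0), seed=0):
    """Spike injection and labeling for a (3, T) NARMA base series x, following the
    equations above; edge-case handling is our simplification."""
    rng = np.random.default_rng(seed)
    n_channels, T = x.shape
    n = rng.binomial(1, theta)            # n_d: does channel d receive spikes?
    m = rng.binomial(1, theta)            # m_d: is channel d active for the label?
    y = np.zeros(T)
    first_active = T                      # earliest spike time over active channels
    for d in range(n_channels):
        eta = rng.poisson(2) if n[d] == 1 else 0            # eta_d
        g = rng.choice(T, size=min(eta, T), replace=False)  # spike times g_d
        x[d, g] += rng.normal(1.0, 0.3, size=len(g)) + theta[d]
        if m[d] == 1 and len(g) > 0:
            first_active = min(first_active, int(g.min()))
    y[first_active + 1:] = 1.0            # y_t = 0 for t <= min(g_d) with m_d = 1
    return x, y, m
\end{verbatim}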
\textbf{Real-World Datasets.} The \emph{real-world} datasets also include additional patient information such as age, gender, and race. The age range is $[44, 90]$ for SHHS, $[54, 95]$ for MESA, and $[71, 90]$ for SOF. In total, there are $2{,}651$, $2{,}055$, and $453$ patients in SHHS, MESA, and SOF, respectively, with an average of $1{,}043$ breathing signal segments (approximately $8.7$ hours) per patient.
\subsection{Details on Baselines}\label{sec:app_baseline}
We use the following five types of state-of-the-art baselines:
\begin{itemize}
\item Gradient-based methods. Regularized Gradient Descent (\textbf{RGD})~\citep{RGD} directly models $p(y|x)$ and provides explanations by modifying the input with gradients under an L1 regularization; it is therefore an actionable explanation method. Gradient-weighted Class Activation Mapping (\textbf{GradCAM})~\citep{GradCAM} is originally designed for models with convolutional layers; we added convolutional layers to our model to adapt GradCAM to our time series data. Gradient SHapley Additive exPlanations (\textbf{GradSHAP})~\citep{GradSHAP} is a game-theoretic approach that uses the expectation of gradients to approximate the SHAP values.
\item Perturbation-based method. Local Interpretable Model-agnostic Explanations (\textbf{LIME})~\citep{LIME} is a local interpretation approach based on a local linearity assumption; it provides explanations by fitting the output of the model on locally perturbed inputs.
\item Distribution shift method. Feature Importance in Time (\textbf{FIT})~\citep{FIT} is a time-series black-box model explanation method that evaluates the importance of the input data based on the temporal distribution shift and the unexplained distribution shift.
\item Association rule method. Case-crossover APriori (\textbf{CAP})~\citep{CAP} applies the association rule mining algorithm Apriori to explore causal relationships in time-series data.
\item Generative method. Counterfactual Residual Generative Adversarial Network (\textbf{CounteRGAN})~\citep{CounteRGAN} combines a Residual GAN (RGAN) and a classifier to generate counterfactual outputs. The generated residual output is considered the result of the $do$ operation. CounteRGAN is an actionable explanation method.
\end{itemize}
\section{Additional Results}\label{sec:app_add}
\textbf{Results on the \emph{Spike} Dataset.}
Note that methods such as FIT are \emph{not} actionable explanation methods. They cannot provide a counterfactual explanation $\mathbf{x}^{cf}$ that could shift the model prediction from $\mathbf{y}^{pred}$ to $\mathbf{y}^{cf}$; they can only provide importance scores to explain the current prediction $\mathbf{y}^{pred}$. \figref{fig:fitcase} shows the importance scores produced by FIT to explain the same model as in~\figref{fig:syntheticcase} on the \emph{Spike} dataset.
\begin{figure}[!t]
\vspace{10pt}
\centering
\includegraphics[width=1\textwidth]{img/synthetic_fit_appendix.pdf}
\vspace{-25pt}
\caption{\label{fig:fitcase}Qualitative results from FIT on the \emph{Spike} dataset.}
\vspace{-10pt}
\end{figure}
\begin{figure}[!t]
\vspace{10pt}
\centering
\includegraphics[width=1\textwidth]{img/real_fit_rescale.pdf}
\vspace{-25pt}
\caption{\label{fig:fitcase_real}Qualitative results from FIT in the \emph{MESA} intra-dataset setting.}
\vspace{10pt}
\end{figure}
\textbf{Results on Real-World Datasets.} As FIT cannot provide actionable explanations, the importance scores produced by FIT to explain the same model as in~\figref{fig:realcase} on the \emph{real-world datasets} are shown separately in \figref{fig:fitcase_real}.
\tabref{tab:sof}$\sim$\ref{tab:tosof} show more detailed CCR results for the different counterfactual action settings and dataset settings. \tabref{tab:realcfaccu} shows the detailed counterfactual accuracy for each dataset setting and each counterfactual action setting.
\begin{table}[h]
\centering
\caption{CCR for different counterfactual action settings (e.g., A$\rightarrow$D means `Awake'$\rightarrow$`Deep Sleep') in \emph{SOF}.}
\vspace{3pt}
\label{tab:sof}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
& A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average \\ \midrule
CounteRGAN & \textbf{1.356} & \underline{1.138} & 1.165 & 1.127 & 0.908 & 1.060 & \underline{1.126} \\
RGD & 0.996 & 1.103 & \underline{1.174} & 0.951 & 1.108 & \underline{1.194} & 1.088 \\
GradCAM & 1.012 & 0.926 & 0.644 & 0.727 & 1.012 & 1.003 & 0.887 \\
GradSHAP & 0.843 & 1.177 & \underline{1.219} & 0.913 & 0.843 & 0.981 & 0.996 \\
LIME & \underline{1.220} & 0.857 & 1.095 & 0.936 & \underline{1.220} & 1.139 & 1.078 \\
FIT & 1.064 & 1.150 & 1.169 & \underline{1.202} & 1.064 & 0.667 & 1.053 \\
CAP & 0.989 & 0.952 & 1.019 & 1.139 & 0.989 & 0.689 & 0.963 \\
CounTS (Ours) & 1.179 & \textbf{1.161} & \textbf{1.240} & \textbf{1.475} & \textbf{1.245} & \textbf{1.387} & \textbf{1.281} \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table}[h]
\centering
\caption{CCR for different counterfactual action settings in \emph{SHHS+SOF$\rightarrow$MESA}.}
\label{tab:tomesa}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
& A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average \\ \midrule
CounteRGAN & \underline{1.381} & \textbf{1.379} & 1.068 & 1.373 & \underline{1.355} & 1.158 & \underline{1.285} \\
RGD & 1.226 & 1.130 & 1.089 & 1.215 & 1.177 & \underline{1.293} & 1.188 \\
GradCAM & 1.056 & 1.014 & 0.743 & 1.218 & 1.056 & 1.101 & 1.031 \\
GradSHAP & 1.174 & 0.822 & 1.046 & 0.948 & 1.174 & 0.988 & 1.025 \\
LIME & 0.904 & 1.246 & 1.138 & 1.017 & 0.904 & 0.836 & 1.007 \\
FIT & 1.143 & 1.228 & \underline{1.193} & 1.206 & 1.143 & 0.938 & 1.142 \\
CAP & 0.969 & 1.179 & 0.947 & \underline{1.418} & 0.969 & 0.703 & 1.031 \\
CounTS (Ours) & \textbf{1.391} & \underline{1.331} & \textbf{1.253} & \textbf{1.470} & \textbf{1.465} & \textbf{1.355} & \textbf{1.377}\\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table}[h]
\vspace{0pt}
\centering
\caption{CCR for different counterfactual action settings in \emph{MESA}.}
\label{tab:mesa}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
& A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average \\ \midrule
CounteRGAN & \underline{1.183} & \textbf{1.345} & 1.160 & 1.117 & 0.974 & 1.200 & 1.163 \\
RGD & 1.159 & 1.088 & 1.149 & 1.204 & 1.190 & \underline{1.207} & \underline{1.166} \\
GradCAM & 1.078 & 0.790 & 0.419 & 0.954 & 1.078 & 0.977 & 0.883 \\
GradSHAP & 0.885 & 1.027 & 1.039 & \underline{1.273} & 0.885 & 1.145 & 1.042 \\
LIME & 0.903 & 1.003 & 0.930 & 1.233 & 0.903 & 0.952 & 0.987 \\
FIT & 1.210 & 1.118 & 1.024 & 1.082 & \underline{1.210} & 0.887 & 1.089 \\
CAP & 0.786 & 1.001 & \underline{1.176} & 1.243 & 0.786 & 0.783 & 0.962 \\
CounTS (Ours) & \textbf{1.213} & \underline{1.309} & \textbf{1.294} & \textbf{1.526} & \textbf{1.350} & \textbf{1.285} & \textbf{1.329}\\ \bottomrule
\end{tabular}
}
\vspace{0pt}
\end{table}
\begin{table}[h]
\vspace{10pt}
\centering
\caption{CCR for different counterfactual action settings in \emph{MESA+SOF$\rightarrow$SHHS}.}
\label{tab:toshhs}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
& A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average \\ \midrule
CounteRGAN & \textbf{1.317} & 1.157 & \textbf{1.281} & 1.300 & 1.270 & 1.060 & 1.231 \\
RGD & 1.193 & \underline{1.266} & 1.115 & \underline{1.313} & \underline{1.304} & \underline{1.255} & \underline{1.241} \\
GradCAM & 1.131 & 1.093 & 0.725 & 1.306 & 1.131 & 0.953 & 1.056 \\
GradSHAP & 1.025 & 0.910 & 1.092 & 0.886 & 1.025 & 0.910 & 0.975 \\
LIME & 1.018 & 1.092 & 1.118 & 0.944 & 1.018 & 0.930 & 1.020 \\
FIT & 1.270 & 1.259 & 1.105 & 1.263 & 1.270 & 0.991 & 1.193 \\
CAP & 0.994 & 0.819 & 1.033 & 0.953 & 0.994 & 0.777 & 0.929 \\
CounTS (Ours) & \underline{1.301} & \textbf{1.452} & \underline{1.167} & \textbf{1.560} & \textbf{1.417} & \textbf{1.443} & \textbf{1.390}\\ \bottomrule
\end{tabular}
}
\end{table}
\begin{table}[h]
\centering
\vspace{0pt}
\caption{CCR for different counterfactual action settings in \emph{SHHS+MESA$\rightarrow$SOF}.}
\label{tab:tosof}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
& A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average \\ \midrule
CounteRGAN & \textbf{1.210} & 1.050 & 1.034 & \textbf{1.403} & 1.065 & 1.248 & \underline{1.153} \\
RGD & 1.183 & 0.993 & 1.015 & 1.260 & \underline{1.155} & 1.201 & 1.135 \\
GradCAM & 1.108 & 1.018 & 1.020 & 1.215 & 1.108 & \underline{1.286} & 1.126 \\
GradSHAP & 0.983 & 0.929 & 0.780 & 1.044 & 0.983 & 0.728 & 0.908 \\
LIME & 0.738 & 1.051 & \underline{1.199} & 1.155 & 0.738 & 0.708 & 0.932 \\
FIT & 1.110 & \underline{1.175} & 0.954 & 1.206 & 1.110 & 0.876 & 1.072 \\
CAP & 1.106 & 0.951 & 0.833 & 1.145 & 1.106 & 1.060 & 1.034 \\
CounTS (Ours) & \underline{1.203} & \textbf{1.266} & \textbf{1.202} & \underline{1.395} & \textbf{1.472} & \textbf{1.341} & \textbf{1.313} \\ \bottomrule
\end{tabular}
}
\vspace{10pt}
\end{table}
\begin{table}[t]
\vspace{-9cm}
\centering
\caption{Counterfactual accuracy for each dataset setting and each counterfactual action setting.}
\label{tab:realcfaccu}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{ccccccccc}
\toprule
& Method & A$\rightarrow$D & L$\rightarrow$D & R$\rightarrow$D & R$\rightarrow$A & D$\rightarrow$A & L$\rightarrow$A & Average\\ \midrule\midrule
\multirow{4}{*}{\tabincell{c}{SHHS+MESA\\$\downarrow$\\SOF}}
& RGD & \underline{0.867} & \underline{0.903} & \underline{0.921} & \textbf{0.932} & \textbf{0.931} & \textbf{0.922} & \underline{0.913}\\
& CounteRGAN & 0.848 & \textbf{0.917} & 0.903 & 0.917 & 0.896 & 0.900 & 0.897\\
& FIT & 0.285 & 0.296 & 0.328 & 0.384 & 0.285 & 0.348 & 0.331\\
& CounTS (Ours) & \textbf{0.868} & 0.893 & \textbf{0.929} & \underline{0.918} & \underline{0.920} & \underline{0.913} & \textbf{0.916} \\\midrule
\multirow{4}{*}{\tabincell{c}{MESA+SOF\\$\downarrow$\\SHHS}}
& RGD & \underline{0.828} & \textbf{0.882} & 0.916 & \textbf{0.910} & \textbf{0.903} & \textbf{0.890} & \textbf{0.887}\\
& CounteRGAN & \underline{0.828} & 0.869 & \underline{0.920} & 0.867 & 0.880 & \underline{0.875} & \underline{0.868}\\
& FIT & 0.334 & 0.263 & 0.374 & 0.354 & 0.334 & 0.348 & 0.327\\
& CounTS (Ours) & \textbf{0.837} & \underline{0.878} & \textbf{0.923} & \underline{0.899} & \underline{0.899} & \textbf{0.890} & \textbf{0.887}\\ \midrule
\multirow{4}{*}{\tabincell{c}{SHHS+SOF\\$\downarrow$\\MESA}}
& RGD & \textbf{0.882} & \textbf{0.898} & \underline{0.933} & \textbf{0.910} & \textbf{0.938} & \underline{0.888} & \textbf{0.907}\\
& CounteRGAN & \underline{0.872} & 0.878 & 0.926 & 0.905 & 0.901 & \textbf{0.897} & 0.880\\
& FIT & 0.335 & 0.269 & 0.301 & 0.357 & 0.335 & 0.397 & 0.346\\
& CounTS (Ours) & \textbf{0.882} & \underline{0.888} & \textbf{0.936} & \textbf{0.910} & \underline{0.930} & 0.881 & \underline{0.889}\\
\midrule\midrule
\multirow{4}{*}{SOF}
& RGD & \underline{0.823} & \underline{0.820} & \textbf{0.919} & \underline{0.856} & \underline{0.861} & \textbf{0.891} &\textbf{0.862}\\
& CounteRGAN & \textbf{0.831} & 0.800 & 0.899 & \textbf{0.867} & 0.832 & 0.875 & 0.844\\
& FIT & 0.257 & 0.328 & 0.311 & 0.328 & 0.257 & 0.310 & 0.248\\
& CounTS (Ours) & 0.813 & \textbf{0.831} & \textbf{0.907} & 0.843 & \textbf{0.864} & \underline{0.881} & \underline{0.852} \\ \midrule
\multirow{4}{*}{SHHS}
& RGD & \textbf{0.880} & \textbf{0.892} & \textbf{0.940} & \textbf{0.944} & \underline{0.887} & \textbf{0.900} & \textbf{0.907}\\
& CounteRGAN & 0.841 & 0.854 & 0.916 & 0.913 & \textbf{0.888} & 0.875 & 0.888\\
& FIT & 0.335 & 0.272 & 0.353 & 0.337 & 0.335 & 0.348 & 0.322\\
& CounTS (Ours) & \underline{0.868} & \underline{0.880} & \underline{0.930} & \textbf{0.931} & 0.875 & \underline{0.889} & \underline{0.903} \\\midrule
\multirow{4}{*}{MESA}
& RGD & \textbf{0.846} & \textbf{0.888} & \underline{0.912} & \textbf{0.907} & \textbf{0.909} & \underline{0.888} & \textbf{0.892}\\
& CounteRGAN & 0.828 & 0.869 & \textbf{0.920} & 0.867 & 0.880 & 0.875 & 0.868\\
& FIT & 0.290 & 0.316 & 0.320 & 0.307 & 0.290 & 0.301 & 0.294\\
& CounTS (Ours) & \underline{0.837} & \underline{0.878} & 0.903 & \underline{0.899} & \underline{0.899} & \textbf{0.890} & \underline{0.887} \\
\bottomrule
\end{tabular}
}
\end{table}
\end{document}
\begin{document}
\author{Dariusz Kurzyk}
\affiliation{Institute of
Theoretical and Applied Informatics, Polish Academy of Sciences, Ba{\l}tycka 5,
44-100 Gliwice, Poland}
\author{{\L}ukasz Pawela\footnote{Corresponding author, E-mail:
[email protected]}}
\affiliation{Institute of
Theoretical and Applied Informatics, Polish Academy of Sciences, Ba{\l}tycka 5,
44-100 Gliwice, Poland}
\author{Zbigniew Pucha{\l}a}
\affiliation{Institute of Theoretical and Applied Informatics, Polish Academy
of Sciences, Ba{\l}tycka 5, 44-100 Gliwice, Poland}
\affiliation{Institute of Physics, Jagiellonian University, ul. Stanis\l{}awa
\L{}ojasiewicza 11, 30-348 Krak\'o{}w, Poland}
\title{Unconditional Security of a $K$-State Quantum Key Distribution Protocol}
\date{February 14, 2018}
\begin{abstract}
Quantum key distribution protocols constitute an important part of quantum
cryptography, where the security of sensitive information arises from the laws
of physics. In this paper we introduce a new family of key distribution
protocols and we compare its key generation rate to the well-known protocols
such as BB84, PBC00 and R04. We also present a security analysis of these
protocols based on the entanglement distillation and CSS code techniques.
\keywords{Quantum key distribution, Quantum cryptography}
\end{abstract}
\maketitle
\section{Introduction}
Quantum cryptography is a blooming field of scientific research, where quantum
phenomena are applied to securing sensitive information. Usually, cryptographic
systems are based on key distribution mechanisms, and the security of such systems
depends on computational complexity. The security of quantum cryptography arises
instead from the laws of quantum physics. Thus, quantum cryptography does not impose
any limitation on the eavesdropper's technology, which is constrained only by physical
laws that no one can ever overcome. Quantum key distribution (QKD) protocols are
based on the assumption that a secret key is shared by Alice and Bob, and only a
small amount of information can be leaked to an eavesdropper Eve. The first QKD
protocol, BB84~\cite{bennett1984update}, became a motivation for expanding
research in this area. As a consequence, Mayers in \cite{mayers1996quantum}
proved the unconditional security of this protocol on a noisy channel against a
general attack. Quantum entanglement and the violation of Bell's theorem were
introduced into the BB84 scheme by Ekert \cite{ekert1991quantum}. Next,
Bennett proposed a simple protocol B92 \cite{bennett1992quantum} based on two
nonorthogonal states. Unconditional security analysis of this protocol was
performed by Tamaki \emph{et al.} in
\cite{tamaki2003unconditionally,tamaki2004unconditional} and by Quan \emph{et
al.} in \cite{quan2002simple}. A generalization of the BB84
protocol using conjugate bases, i.e.\ six states, was discussed by Bru{\ss}~\cite{bruss1998optimal}.
Subsequently, Phoenix \emph{et al.} \cite{phoenix2000three} introduced the PBC00
protocol and they showed that key bits can be generated more efficiently by the
usage of three mutually non-orthogonal states. Renes developed the key creation
protocol R04 \cite{renes2004spherical} for two qubit-based spherical codes,
which is a modified version of the PBC00 protocol. The R04 protocol allows one
to use all conclusive events for key extraction. In
\cite{boileau2005unconditional}, Boileau \emph{et al.} proved the unconditional
security of the trine spherical code QKD protocol, which also covers the PBC00
and R04 protocols. Experimental realizations of the PBC00 and R04 protocols were
proposed in \cite{senekane2015six} and \cite{schiavon2016experimental}. New
results on the asymptotic analysis of the three-state protocol can be found in
\cite{krawec2016asymptotic}. Scarani \emph{et al.} \cite{scarani2004quantum}
introduced a protocol SARG04, which is resistant to photon number splitting attacks.
QKD protocols have become a subject of profound research and various
unconditional security analyses of QKD protocols can be found in
\cite{mayers2001unconditional,koashi2003secure,renner2005information,biham2006proof,inamori2007unconditional}, while
for a review see \cite{scarani2009security}.
In this paper we propose a new class of QKD protocols with a security analysis
performed using techniques similar to those in
\cite{tamaki2003unconditionally,boileau2005unconditional}. This means that the
proposed protocol is treated as an entanglement distillation protocol (EDP)
\cite{bennett1996mixed,lo1999unconditional}. Subsequently, similarly to the case
of BB84 \cite{shor2000simple}, CSS codes
\cite{calderbank1996good,steane1996multiple} are used in the security proof.
\section{$K$-state protocol}
Let us introduce a class of QKD protocols which can be reduced to the
well-known PBC00 protocol~\cite{phoenix2000three}. Assume that Alice and Bob
would like to share $N$ secret bits $b_i$. Then, the protocol is as follows.
\subsection*{\textbf{Protocol 1 (P1)}}
\begin{enumerate}
\item Alice and Bob share $N$ pairs of maximally entangled two-qubit states
$\rho_{AB}=\ket{\phi^-}\bra{\phi^-}$, where $\ket{\phi^-} =
\frac{1}{\sqrt{2}}(\ket{01}-\ket{10})$.
\item Alice chooses $K$ states $\ket{\psi_k}=\cos(a+\theta_k)\ket{0} +
\sin(a+\theta_k)\ket{1}$, where $a\in[0,2\pi)$ is a constant and
$\theta_k=\frac{2 k \pi}{K}$ for $k\in\{0,...,K-1\}$. The states $\ket{\psi_k}$
are grouped into pairs $S_k=\{\ket{\psi_k},\ket{\psi_{k+1\mod K}}\}$. \item
Subsequently, Alice measures her parts of the states $\rho_{AB}$ using the POVM
$\{\frac2K\ketbra{\psi_k^\perp}{\psi_k^\perp}\}_k$, where $\ket{\psi_k^\perp}$
is orthogonal to $\ket{\psi_k}$. Detection of the state $\ket{\psi_k^\perp}$
after measurement is equivalent to sending a state $\ket{\psi_k}$ to Bob.
\item For each bit $b_i$ to be encoded, Alice chooses at random $r_i\in\{0,\ldots,K-1\}$.
The choice of $r_i$ determines the encoding base $S_{r_i}$. Thus, the bit value
is equal to $b_i=k-r_i\mod K$.
\item Alice publicly announces when all of her measurements are done.
\item Bob prepares measurements described by the POVM
$\{\frac2K\ketbra{\psi_k^\perp}{\psi_k^\perp}\}_k$ and announces when the measurements are done. Next, Alice
sends the sequence of values $r_i$ to Bob. If Bob's $i$-th measurement
outcome is $\ket{\psi_k^\perp}$, then Bob decodes $b_i=0$ (if $k=r_i$) or
$b_i=1$ (if $k=r_i-1 \mod K$). In other cases, the events are regarded as
inconclusive. These results are discarded.
\item Half of the conclusive events, chosen at random, are used in the estimation
of the bit error rate. If the bit error rate is too high, then they abort the protocol.
\item In the end, they use classical error correction and privacy
amplification protocols.
\end{enumerate}
Notice that for $K=3$ and an appropriate choice of $a$, the above scenario is
equivalent to the PBC00 protocol \cite{phoenix2000three}. It can also be shown
that protocols of this class achieve the highest key rate for $K=3$.
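For a quick numerical illustration of step 3 (a sketch of our own, not part of the protocol; the function and variable names below are ours), one can check that the operators $\frac2K\ketbra{\psi_k^\perp}{\psi_k^\perp}$ indeed form a POVM, i.e.\ that they sum to the identity (each element is manifestly positive semi-definite), for any $K\geq 3$ and any offset $a$:
\begin{verbatim}
import numpy as np

def povm_elements(K, a=0.3):
    """The K operators (2/K)|psi_k^perp><psi_k^perp| used in Protocol 1."""
    ops = []
    for k in range(K):
        t = a + 2 * np.pi * k / K
        # |psi_k> = cos(t)|0> + sin(t)|1>, so |psi_k^perp> = -sin(t)|0> + cos(t)|1>
        perp = np.array([-np.sin(t), np.cos(t)])
        ops.append(2.0 / K * np.outer(perp, perp))
    return ops

for K in (3, 4, 5, 7):
    assert np.allclose(sum(povm_elements(K)), np.eye(2))
print("the POVM elements sum to the identity for K = 3, 4, 5, 7")
\end{verbatim}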
\section{POVM enhancement}
Now we consider a modification of the above protocol. Steps 1--4 are the same
as in the previous protocol.
\subsection*{\textbf{Protocol 2 (P2)}}
\begin{enumerate}
\item[5$^\prime$.] Alice publicly announces when all her measurements are
done and she sends sequences of values $r_i$ to Bob.
\item[6$^\prime$.] For each $r_i$, Bob prepares an unambiguous measurement
described by the POVM
\begin{equation}\label{eq:new_povm}
\begin{split}
\{&\Pi_{r_i-1},\Pi_{r_i},\Pi_{fill}\}=\\
\Big\{
&
\frac{1}{\lambda}\ketbra{{\psi}^\perp_{r_i}}{{\psi}^\perp_{r_i}},\frac{1}{\lambda}\ketbra{{\psi}^\perp_{r_i+1\!\!\!\!\mod
K}}{{\psi}^\perp_{r_i+1\!\!\!\!\mod K}},\\
&\1_2 - \frac{1}{\lambda} \big(\ketbra{{\psi}^\perp_{{r_i}}}{{\psi}^\perp_{{r_i}}}+\ketbra{{\psi}^\perp_{{r_i}+1\!\!\!\!\mod
K}}{{\psi}^\perp_{{r_i}+1\!\!\!\!\mod
K}}\big)\Big\}
\end{split}
\end{equation}
where $\lambda = \frac{\sqrt{2}}{2} \sqrt{\cos\frac{4\pi}{K}+1} + 1 = 1
+|\cos(\frac{2\pi}{K})|$. The value $\lambda$ is determined as the maximal
eigenvalue of
\begin{equation}
\ketbra{\tilde{\psi}_{r_i}}{\tilde{\psi}_{r_i}} +
\ketbra{\tilde{\psi}_{r_i+1\!\!\!\!\mod K}}{\tilde{\psi}_{r_i+1\!\!\!\!\mod K}}.
\end{equation}
If Bob's $i$-th measurement outcome is $\ket{\psi_k^\perp}$, then Bob
decodes $b_i=0$ (if $k=r_i$) or $b_i=1$ (if $k=r_i-1 \mod K$). In other cases,
the events are regarded as inconclusive. Finally, Alice and Bob discard the
inconclusive measurement results.
\end{enumerate}
Steps 7. and 8. are again the same as in the previous protocol. For $K=3$,
the characteristics of the protocols P1 and P2 are the same and are equivalent
to the characteristics of the PBC00 protocol.
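The normalization $\lambda$ in Eq.~(\ref{eq:new_povm}) can also be obtained by an elementary observation, which we record here for convenience: for two unit vectors $\ket{a},\ket{b}$ with real overlap, the operator $\ketbra{a}{a}+\ketbra{b}{b}$ has eigenvalues $1\pm|\braket{a}{b}|$ with eigenvectors $\ket{a}\pm\ket{b}$, and for two neighbouring states $\braket{\tilde{\psi}_{r_i}}{\tilde{\psi}_{r_i+1\!\!\!\!\mod K}}=\cos\frac{2\pi}{K}$. Hence
\begin{equation}
\lambda = 1+\Big|\cos\frac{2\pi}{K}\Big| = 1 + \frac{\sqrt{2}}{2}\sqrt{\cos\frac{4\pi}{K}+1},
\end{equation}
where the second equality follows from the double-angle identity $\cos\frac{4\pi}{K}+1=2\cos^2\frac{2\pi}{K}$.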
\section{Security analysis}
Similarly to
\cite{tamaki2003unconditionally,boileau2005unconditional,shor2000simple}, we
consider an entanglement distillation protocol (EDP), which can be reduced to a
QKD protocol equivalent to the above scheme. Firstly, we transform the vectors
$\ket{\psi_i}$ by the rotation operator $R_y(-\eta)$, where
$R_y(\theta)=\left(\begin{matrix} \cos \theta & -\sin \theta\\ \sin\theta &
\cos\theta \end{matrix}\right)$ and $\eta=\arccos(\braket{\psi_0}{\psi_1})/2 +
\arctan\left(\frac{\braket{1}{\psi_0}}{\braket{0}{\psi_0}}\right)$. After this
transformation, we get states $\ket{\tilde{\psi_i}}=R_y(-\eta)\ket{{\psi_i}}$, where
$\ket{\tilde{\psi_0}}=\cos(\frac{\pi}{K})\ket{0} + \sin(\frac{\pi}{K})\ket{1}$
and $\ket{\tilde{\psi_1}}=\cos(-\frac{\pi}{K})\ket{0} +
\sin(-\frac{\pi}{K})\ket{1}$. This transformation has no impact on the
protocol, but is important in the security analysis. Assume that Alice prepares
many pairs of qubits in the entangled state
$\ket{\psi}=\frac{1}{\sqrt{2}}(\ket{+}\ket{\tilde{\psi}_0}+\ket{-}\ket{\tilde{\psi}_1})$,
where $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$ and the basis $\{\ket{+},\ket{-}\}$
will be denoted by $\pm$-basis. Next, she
randomly chooses a string of values $r_i\in\{0,\ldots,K-1\}$ and applies $R_y(\theta_{r_i})$
on the second qubit of every pair. After that, she sends qubits to Bob through
a quantum channel. Alice announces the values of $r_i$. Next, Bob performs
\textit{local filtering operations}
\cite{gisin1996hidden,bennett1996concentrating,horodecki1997inseparable}
$F=\frac{1}{\sqrt{2\lambda}}\Big((\ket{\tilde{\psi}^\perp_0} +
\ket{\tilde{\psi}^\perp_1})\bra{0} + (\ket{\tilde{\psi}^\perp_0} -
\ket{\tilde{\psi}^\perp_1})\bra{1} \Big)$ and operation $R_y(-\theta_{r_i})$ on the received qubits. Next, half of the states are
used to determine the number of bit errors after application of $\pm$-basis
measurements by Alice and Bob. If the number of errors is too high, then the
protocol is aborted. The remaining qubits are used to distill Bell states by
an EDP based on CSS codes. Alice and Bob perform $\pm$-basis measurements
on Bell states to obtain a secret key.
Notice that $R_y(\theta_{r_i})\ket{\tilde{\psi}_j}=\ket{\tilde{\psi}_{r_i+j
\mod K}}$ and Alice's operation related to measurement
$\{\frac{1}{\lambda}\ketbra{\tilde{\psi}_i^\perp}{\tilde{\psi}_i^\perp}\}_i$ on her state
is equivalent to a $\pm$-basis measurement on the state $\big(\1_2\kr
R_y(\theta_{r_i})\big)\ket{\psi}$. The filtering operation $F$, the rotation
$R_y(-\theta_{r_i})$ and the $\pm$-basis measurement performed by
Bob can be described by the following POVM
\begin{equation}\label{eq:edp_povm}
\begin{split}
&\quad\quad \{R_y(\theta_{r_i}) F^\dagger \ketbra{+}{+} F
R_y(\theta_{r_i})^\dagger,\\
&\quad\quad R_y(\theta_{r_i}) F^\dagger \ketbra{-}{-} F
R_y(\theta_{r_i})^\dagger,\\
&\quad\quad R_y(\theta_{r_i}) (\1_2 - F^\dagger F) R_y(\theta_{r_i})^\dagger\}.\\
\end{split}
\end{equation}
This measurement is equivalent to the POVM
\begin{equation}
\begin{split}
\{&\tilde{\Pi}_{r_i-1},\tilde{\Pi}_{r_i},\tilde{\Pi}_{fill}\}=\\
\Big\{
&
\frac{1}{\lambda}\ketbra{\tilde{\psi}^\perp_{r_i}}{\tilde{\psi}^\perp_{r_i}},\frac{1}{\lambda}\ketbra{\tilde{\psi}^\perp_{r_i+1\!\!\!\!\mod
K}}{\tilde{\psi}^\perp_{r_i+1\!\!\!\!\mod K}},\\
&\1_2 - \frac{1}{\lambda} (\ketbra{\tilde{\psi}^\perp_{{r_i}}}{\tilde{\psi}^\perp_{{r_i}}}+\ketbra{\tilde{\psi}^\perp_{{r_i}+1\!\!\!\!\mod
K}}{\tilde{\psi}^\perp_{{r_i}+1\!\!\!\!\mod
K}})\Big\}.
\end{split}
\end{equation}
In \cite{shor2000simple}, Shor and Preskill have shown that if the failure
probability of the bit and phase error estimation decreases exponentially as $N$
increases, then Eve's information on the secret key is exponentially small. This
approach was used to prove the unconditional security of the Bennett 1992 protocol,
by Tamaki \emph{et al.} \cite{tamaki2003unconditionally}, and of the PBC00 and R04
protocols, by Boileau \emph{et al.} \cite{boileau2005unconditional}. These proofs
were based on a reduction to an entanglement distillation protocol initiated by a
local filtering process. Subsequently, we will prove the security of the above
entanglement distillation protocol in the same manner as in
\cite{tamaki2003unconditionally,boileau2005unconditional,shor2000simple}.
Assume that $\{p_{b}^{(i)}\}_{i=1}^N$ and $\{p_p^{(i)}\}_{i=1}^N$ are sets of
probabilities of a bit error and a phase error respectively on the $i$-th pair
after Alice and Bob have performed measurements on the $i-1$ previous pairs.
Thus $p_b^{(i)}$ and $p_p^{(i)}$ depend on previous results. Moreover, we
introduce $e_b$ and $e_p$ as the rates of bit error and phase error in all
conclusive results, respectively. Estimation of the bit and phase error rates will
be performed by the use of Azuma's inequality \cite{azuma1967weighted} as in
\cite{boileau2005unconditional}.
\begin{theorem}[\cite{azuma1967weighted}]
Let $\{X_i:i=0,1,\dots\}$ be a martingale sequence and for each $k$ it holds
that $|X_k-X_{k-1}|\leq c_k$. Then for all integers $N\geq 0$ and real numbers
$\gamma\geq0$
\begin{equation}
P(|X_N-X_{0}| \geq \gamma) \leq 2 e^{-\frac{\gamma^2}{2 \sum_{k=1}^N c_k^2}}.
\end{equation}\label{azuma}
\end{theorem}
Notice that for $c_k=1$ we get
\begin{equation}
P(|X_N-X_{0}| \geq \gamma) \leq 2 e^{-\frac{\gamma^2}{2 N}}.
\end{equation}
As a result of Azuma's inequality, $Ce_b$ is exponentially close to $e_p$
for a particular constant $C$, provided that $Cp^{(i)}_b=p_p^{(i)}$ is satisfied
for all $i$. Assume that Eve can perform any coherent attack $E^{(i)}$ on the
qubits sent by Alice such that $\sum_i E^{(i)\dagger} E^{(i)}\leq \1$. The
state of the $i$\textsuperscript{th} pair can then be described by the
mixed state
\begin{equation}
\rho^{(i)} = \frac1K\sum_{k=0}^{K-1}
\ketbra{\phi_k^{(i)}}{\phi_k^{(i)}},
\end{equation}
where
\begin{equation}
\ket{\phi_k^{(i)}}=\1_A\kr\big(FR(-\theta_k)E^{(i)}R(\theta_k)\big)\ket{\psi}.
\end{equation}
Let us introduce the notation
$\ket{\Phi^\pm}=\frac{1}{\sqrt{2}}(\ket{+}\ket{+}\pm\ket{-}\ket{-})$ and
$\ket{\Psi^\pm}=\frac{1}{\sqrt{2}}(\ket{+}\ket{-}\pm\ket{-}\ket{+})$.
Identifying a bit error with a projection onto the subspace spanned by
$\ket{\Psi^+}$ and $\ket{\Psi^-}$, and a phase error with a projection onto the
subspace spanned by $\ket{\Phi^-}$ and $\ket{\Psi^-}$, the probabilities of a bit
error $p_{b}^{(i)}$ and of a phase error $p_{p}^{(i)}$ on the $i$-th pair,
conditioned on a conclusive result, are
\begin{equation}
\begin{split}
p_{b}^{(i)}&=\frac{1}{Z^{(i)}}\big(\bra{\Psi^+} \rho^{(i)}\ket{\Psi^+} +
\bra{\Psi^-} \rho^{(i)}\ket{\Psi^-} \big)\\
p_{p}^{(i)}&=\frac{1}{Z^{(i)}}\big(\bra{\Phi^-} \rho^{(i)}\ket{\Phi^-} +
\bra{\Psi^-} \rho^{(i)}\ket{\Psi^-} \big),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
Z^{(i)}&=\big(\bra{\Psi^+} \rho^{(i)}\ket{\Psi^+} + \bra{\Psi^-}
\rho^{(i)}\ket{\Psi^-}\\ &+ \bra{\Phi^+} \rho^{(i)}\ket{\Phi^+} +
\bra{\Phi^-} \rho^{(i)}\ket{\Phi^-} \big).
\end{split}
\end{equation}
It can be checked that
\begin{equation}\label{eq:C}
C = \frac{p_{p}^{(i)}}{p_{b}^{(i)}} = 1+|\braket{\psi_0}{\psi_1}|^2 = 1 +
\cos^2\Big(\frac{2\pi}{K}\Big).
\end{equation}
Similarly to \cite{boileau2005unconditional}, we calculate the key rate $S$
from the following formula
\begin{equation}\label{eq:key_rate}
S=p_{c}(e_{b})\big(1-h(e_{b})-h(e_{p})\big),
\end{equation}
where $h(x)=-x\log_2 x -(1-x)\log_2(1-x)$ and $p_c(e_{b})$ is the probability of
a conclusive result. Since $Ce_b=e_p$, we get
\begin{equation}\label{eq:key}
S=p_{c}(e_{b})\big(1-h(e_{b})-h(Ce_{b})\big).
\end{equation}
Notice that for a bit value $b=0$ we get outcome probabilities
$\{0,\frac1\lambda
|\braket{\tilde{\psi}_{r_i}}{\tilde{\psi}^\perp_{r_i+1\!\!\!\! \mod K}}|^2,
1-\frac{1}{\lambda}|\braket{\tilde{\psi}_{r_i}}{\tilde{\psi}^\perp_{r_i+1\!\!\!\!
\mod K}}|^2\}$
and for $b=1$ we get
$\{\frac1\lambda
|\braket{\tilde{\psi}_{r_i+1\!\!\!\! \mod K}}{\tilde{\psi}^\perp_{r_i}}|^2, 0,
1-\frac{1}{\lambda}|\braket{\tilde{\psi}_{r_i+1\!\!\!\! \mod
K}}{\tilde{\psi}^\perp_{r_i}}|^2\}$. Hence, it can be checked that
$|\braket{\tilde{\psi}_{r_i}}{\tilde{\psi}^\perp_{r_i+1\!\!\!\! \mod
K}}|^2=|\braket{\tilde{\psi}_{r_i+1\!\!\!\! \mod
K}}{\tilde{\psi}^\perp_{r_i}}|^2=\sin^2(\frac{2\pi}{K})$. Thus, the probability
of a conclusive result, with the assumption that $e_b=0$, is equal to
\begin{equation}\label{eq:init_pc}
p_c(0)=\frac{\sin^2(\frac{2\pi}{K})}{1+|\cos(\frac{2\pi}{K})|},
\end{equation}
which can be simplified to $p_c(0)=2\sin^2(\frac{\pi}{K})$ for $K>3$.
Generally, $p_c$ can be expressed as
\begin{equation}\label{eq:pc}
p_c(e_{b})=
\frac{\sin^2(\frac{2 \pi}{K})}{\lambda (1-2e_b \cos^2(\frac{2\pi}{K}))},
\end{equation}
which is derived in Appendix~\ref{app:pc_new}. Notice that for $K=3$,
Eq.~(\ref{eq:pc}) reduces to $p_c(e_{b})=\frac{1}{2-e_b}$, which corresponds to the
probability of conclusive results in the PBC00 protocol.
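Indeed, substituting $K=3$ into Eq.~(\ref{eq:pc}), with $\sin^2(\frac{2\pi}{3})=\frac34$, $\cos^2(\frac{2\pi}{3})=\frac14$ and $\lambda=\frac32$, gives
\begin{equation}
p_c(e_b)=\frac{3/4}{\frac32\left(1-\frac{e_b}{2}\right)}=\frac{3/4}{\frac34\,(2-e_b)}=\frac{1}{2-e_b}.
\end{equation}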
In the case of the BB84 protocol, the bit error rate is equal to the phase error
rate. Thus $C=1$ and $p_c(e_{b})=\frac12$. In the case of PBC00, $C=\frac54$ and
$p_c(e_{b})=\frac{1}{2-e_b}$. From Eq.~(\ref{eq:key}) we get that the key rate
vanishes at $e_b\approx 11.0\%$ for the BB84 protocol and at $e_b\approx9.81\%$ for
the PBC00 protocol. It can be checked that an interesting case is $K=5$, where
$C=\frac18(11-\sqrt{5})$ and $e_b\approx10.5\%$. A comparison of the proposed
protocol with the BB84 and PBC00 protocols is shown in Fig.~\ref{fig:comparison}.
As we can see, the best key rate is obtained for $K=5$.
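The threshold error rates quoted above can be reproduced numerically from Eq.~(\ref{eq:key}) by locating the value of $e_b$ at which the bracket $1-h(e_b)-h(Ce_b)$ vanishes. The following sketch is only an illustration of ours (the helper names are not part of the protocol) and solves this equation by bisection:
\begin{verbatim}
import numpy as np

def h(x):
    """Binary entropy, with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def threshold(C, lo=1e-6, hi=0.5, iters=100):
    """Bit error rate at which 1 - h(e_b) - h(C*e_b) vanishes (bisection)."""
    f = lambda e: 1.0 - h(e) - h(min(C * e, 1.0))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return lo

print(threshold(1.0))                             # BB84:  ~0.110
print(threshold(5.0 / 4.0))                       # PBC00: ~0.0981
print(threshold(1.0 + np.cos(2 * np.pi / 5)**2))  # K=5:   ~0.105
\end{verbatim}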
\begin{figure}[!b]
\centering
\includegraphics[scale=0.87]{key_rate_new}
\caption{Comparison of key rates depending on $e_{b}$ for different setups of
the P2 protocol. Notice that for $K=5$ we get the best key rates. For $K=7$,
these drop below the values obtained for $K=3$. We also show the key rates
of the BB84 protocol for comparison.}\label{fig:comparison}
\end{figure}
\section{Conclusion} In this paper we have introduced a new class of quantum key
distribution protocols. We have also provided an unconditional security analysis of
these protocols. We have shown that there exists a $5$-state protocol with a
reasonably high key rate for small bit-flip error rates.
\appendix
\section*{Appendix}
\section{Probability of conclusive events}\label{app:pc_new} Notice that if Alice's
measurement outcome is $\ket{\psi^\perp_i}$, where $\ket{\psi_i}\in S_r$ (that is,
$i=r+b\mod K$ for the bit value $b$), and Bob's outcome is also $\ket{\psi_i^\perp}$,
then this corresponds to an error. In the case when Bob obtains the outcome
corresponding to the other state of $S_r$, which is not orthogonal to Alice's state,
Bob correctly concludes the state $\ket{\psi_i}$.
Let $n_g, n_e, n_i$ denote the numbers of good conclusive, error conclusive
and inconclusive events, respectively. Besides that, let $n_t = n_g+n_e+n_i$,
and thus $1=\frac{n_g}{n_t}+\frac{n_e}{n_t}+\frac{n_i}{n_t}$. Assume that after
Alice has sent $r$ to Bob, Bob performs the measurement
described by the POVM
\begin{equation}
\begin{split}
\{&\Pi_{r},\Pi_{r+1},\Pi_{fill}\}=\\
\Big\{
&
\frac{1}{\lambda}\ketbra{{\psi}^\perp_r}{{\psi}^\perp_r},\frac{1}{\lambda}\ketbra{{\psi}^\perp_{r+1\!\!\!\!\mod
K}}{{\psi}^\perp_{r+1\!\!\!\!\mod K}},\\
&\1_2 - \frac{1}{\lambda} (\ketbra{{\psi}^\perp_{r}}{{\psi}^\perp_{r}}+\ketbra{{\psi}^\perp_{r+1\!\!\!\!\mod
K}}{{\psi}^\perp_{r+1\!\!\!\!\mod
K}})\Big\}.
\end{split}
\end{equation}
Now, we suppose that $b=0$ and Eve simulates a noisy channel, where the state
$\ketbra{{\psi}_r}{{\psi}_r}$ evolves as
$\rho_B=(1-p)\ketbra{{\psi}_r}{{\psi}_r}+\frac{p}{2}\1_2$. Next, Bob performs his
measurement and obtains the outcomes with probabilities $\{\tr \Pi_r \rho_B =
\frac{p}{2\lambda}, \tr
\Pi_{r+1} \rho_B = \frac{p}{2\lambda} +
\frac{1-p}{\lambda}\sin^2(\frac{2\pi}{K}), \tr \Pi_{fill} \rho_B = 1-
\frac{p}{\lambda}-\frac{1-p}{\lambda}\sin^2(\frac{2\pi}{K})\}$.
A bit error rate $e_b$ is defined as the rate of error in conclusive results. Hence
\begin{equation}
e_b = \frac{n_e}{n_e+n_g} \quad \text{and} \quad n_e=\frac{e_b}{1-e_b} n_g.
\end{equation}
Notice that the error rate $e_b$ can be estimated as
\begin{equation}
e_b = \frac{\tr \Pi_{r} \rho_B}{\tr \Pi_{r} \rho_B+ \tr \Pi_{r+1}
\rho_B}=\frac{p}{ 2(1-p)\sin^2(\frac{2\pi}{K}) + 2p}.
\end{equation}
Now, let us determine a ratio
\begin{equation}
\begin{split}
D &= \frac{\tr \Pi_{r+1} \rho_B}{\tr \Pi_{fill} \rho_B} =
\frac{2(1-p)\sin^2(\frac{2\pi}{K})+p}{2\lambda -
2(1-p)\sin^2(\frac{2\pi}{K})-2p}\\
&=\frac{2(1-e_b) \sin^2(\frac{2 \pi}{K})}{2\lambda
(1-2e_b\cos^2(\frac{2\pi}{K}))-2\sin^2(\frac{2\pi}{K})}.
\end{split}
\end{equation}
From the central limit theorem and the above calculation we get
\begin{equation}
\frac{n_g}{n_t}=\frac{D n_i}{n_t}+O(\epsilon).
\end{equation}
Continuing we obtain
\begin{equation}
\begin{split}\label{eq:pc_calc_new}
1&=\frac{n_g}{n_t}+\frac{n_e}{n_t}+\frac{n_i}{n_t}=\frac{n_g}{n_t}+\frac{e_b}{1-e_b}\frac{n_g}{n_t}+\frac{n_i}{n_t}\\
&\approx\frac{Dn_i}{n_t}+\frac{e_b}{1-e_b}\frac{Dn_i}{n_t}+\frac{n_i}{n_t}\\
&\approx\frac{D+1-e_b}{1-e_b} \frac{n_i}{n_t}
\end{split}
\end{equation}
and
\begin{equation}\label{eq:pc_calc_2}
p_c=1-\frac{n_i}{n_t} = \frac{D}{D+1-e_b}=
\frac{\sin^2(\frac{2 \pi}{K})}{\lambda (1-2e_b \cos^2(\frac{2\pi}{K}))}.
\end{equation}
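As an additional numerical sanity check (an illustration of ours, not part of the derivation), Eq.~(\ref{eq:pc_calc_2}) can be compared with the conclusive-outcome probability computed directly from the depolarized state $\rho_B$, i.e.\ with $\tr\Pi_r\rho_B+\tr\Pi_{r+1}\rho_B$:
\begin{verbatim}
import numpy as np

def check(K, p):
    s = np.sin(2 * np.pi / K) ** 2
    c = np.cos(2 * np.pi / K) ** 2
    lam = 1 + abs(np.cos(2 * np.pi / K))
    # outcome probabilities for the depolarized state rho_B
    p_err = p / (2 * lam)                        # tr(Pi_r rho_B)
    p_good = p / (2 * lam) + (1 - p) * s / lam   # tr(Pi_{r+1} rho_B)
    e_b = p_err / (p_err + p_good)
    pc_direct = p_err + p_good
    pc_formula = s / (lam * (1 - 2 * e_b * c))   # Eq. (pc)
    assert np.isclose(pc_direct, pc_formula)

for K in (3, 4, 5, 7):
    for p in (0.0, 0.05, 0.2):
        check(K, p)
print("Eq. (pc) matches the direct calculation")
\end{verbatim}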
\end{document}
\begin{document}
\title{Affine elliptic surfaces with type-A singularities and orbi-conifolds}
\mathfrak{aut}hor[Lau]{Siu-Cheong Lau}
\address{Department of Mathematics and Statistics\\ Boston University}
\email{[email protected]}
\begin{abstract}
Following the work of Castano-Bernard and Matessi on conifold transition in the Gross-Siebert program, we construct orbi-conifold transitions of the Schoen's Calabi-Yau threefold and their mirrors. The construction glues together the local models for orbi-conifold transitions in the previous work with Kanazawa.
\end{abstract}
\maketitle
\section{Introduction}
Conifold transition is an important topic in the study of Calabi-Yau geometries and string theory. By the discoveries of \cite{Reid,GH,Wang}, the moduli spaces of three-dimensional CY complete intersections in a product of projective spaces are connected by conifold transitions. It was found by Friedman \cite{Friedman} and Smith-Thomas-Yau \cite{STY} that there exist non-trivial topological obstructions to conifold transitions. In the direction of mirror symmetry, HMS for the local resolved and deformed conifolds was proved by Chan-Pomerleano-Ueda \cite{CPU}. The mirror of the Atiyah flop was constructed in a joint work of the author with Fan-Hong-Yau \cite{FHLY} using a noncommutative mirror of the conifold.
More generally geometric transitions involve singularities of deeper levels than conifolds. Singularities worse than conifolds may occur when
several Lagrangian spheres vanish simultaneously.
For instance, generalized conifolds and orbifolded conifolds are two natural generalizations of the conifold. In \cite{AKLM}, Aganagic-Karch-L\"ust-Miemiec deduced that these two classes of singularities are mirror to each other by gauge-theoretical methods. In a joint work of the author with Kanazawa \cite{KL} mirror symmetry for these local singularities was realized using the SYZ program. SYZ for general local Gorenstein singularities was derived in a previous work of the author \cite{L13} using the Lagrangian fibrations constructed by Gross \cite{Gross-eg} and Goldstein \cite{Goldstein}.
A natural question is how to realize these geometric transitions and mirror symmetry in the compact setting.
For compact manifolds, Castano-Bernard and Matessi \cite{CM2} made a beautiful construction of conifold transitions and their mirrors using a symplectic version of the Gross-Siebert program that they developed earlier \cite{CM1}. They studied the local affine geometries of the base of Lagrangian fibrations for local conifold transitions, and glued the local models of Lagrangian fibrations to a global geometry.
In this paper we construct compact generalized and orbifolded conifolds by extending the method of \cite{CM2}. We construct the local affine geometries modeling the base of the SYZ fibrations on generalized and orbifolded singularities (and also their smoothings and resolutions). Then we formulate global geometries that contain both of these singularities. We call them orbi-conifolds.
The discriminant locus of the global structure naturally contains orbifolded edges and orbifolded positive or negative vertices. It gives an orbifold generalization of simple and positive tropical manifolds in the Gross-Siebert program.
The Schoen's Calabi-Yau threefold \cite{Schoen} provides an excellent source of examples for orbi-conifolds. The threefold is a resolution of the fiber product of two rational elliptic surfaces over the base $\mathbb{P}^1$. \cite{CM2} constructed compact conifolds which are degenerations of the Schoen's CY and their mirrors by using fan polytopes of toric blow-ups of $\mathbb{P}^2$. In this paper we treat all the reflexive polygons uniformly and construct compact orbi-conifolds and their mirrors. The result is the following.
\begin{theorem}[Orbi-conifold degeneration of Schoen's CY] \label{thm:main}
Each pair of reflexive polygons $(P_1,P_2)$ (\emph{where $P_1$ and $P_2$ are not necessarily dual to each other}) corresponds to an orbifolded conifold degeneration $O^{(P_1,P_2)}$ of a Schoen's Calabi-Yau threefold, and also corresponds to a generalized conifold degeneration $G^{(P_1,P_2)}$ of a mirror Schoen's Calabi-Yau. A resolution of $O^{(P_1,P_2)}$ is mirror to a smoothing of $G^{(\check{P}_1,\check{P}_2)}$ in the sense of Gross-Siebert, and vice versa. ($\check{P}$ denotes the dual polygon of $P$.)
\end{theorem}
The connections between Calabi-Yau geometries and affine geometries play a key role. Gross \cite{Gross-topo} found a beautiful topological realization of mirror symmetry using affine geometries with polyhedral decompositions.
Independently, Haase-Zharkov \cite{HZ1,HZ2,HZ3} found a brilliant construction of affine structures on spheres by using a pair of dual reflexive polytopes. They are useful to study the geometries of Calabi-Yau complete intersections.
Based on these pioneering works, Gross-Siebert \cite{GS1,GS2,GS07} developed their celebrated program of toric degenerations and formulated an algebraic reconstruction of mirror pairs. A symplectic version of the reconstruction was found by Castano-Bernard and Matessi \cite{CM1}. The construction in this paper is a natural generalization of the work of \cite{CM2}.
To construct orbi-conifold degenerations of the Schoen's CY, first we study degenerations of rational elliptic surfaces via affine geometry.
The connection between affine surfaces and symplectic four-folds is very well-understood and dates back to the early works of Symington \cite{Symington} and Leung-Symington \cite{LS}. They defined the notion of an almost toric fourfold, which is a symplectic fourfold with a Lagrangian fibration under certain topological constraints. They classified almost toric fourfolds by using the affine base of Lagrangian fibrations.
Rational elliptic surfaces form a subclass of almost toric fourfolds and hence are well understood by \cite{Symington,LS}. Here we are concerned with singular surfaces with $A_n$-orbifold singularities, which will be important for the construction of orbi-conifolds. We construct the following.
\begin{prop}[Rational elliptic surfaces with singularities] \label{thm:ell-intro}
Each reflexive polygon $P$ corresponds to two rational elliptic surfaces $S_P$ and $S_{P}'$ with type $A$ singularities, where the configurations of singularities depend on the integral properties of $P$. $S_P - D_{S_P}$ and $S_{\check{P}}' - D_{S_{\check{P}}'}$ are mirror to each other in the sense that they are discrete Legendre dual to each other, where $D$ denotes an anti-canonical divisor.
\end{prop}
The topology of elliptic surfaces is closely related to the `12 Property' for a reflexive polygon $P$. Namely the sum of affine lengths of edges and orders of vertices of $P$ must be $12$. They correspond to the number of singular fibers (counted with multiplicities) of a rational elliptic surface.
Indeed the `12 Property' holds for more general objects called legal loops \cite{PR}. We find that they correspond to topological (which may not be Lagrangian) torus fibrations $M$ over $\mathbb{S}^2$ which have been extensively studied by \cite{Matsumoto,Iwase}. The multiple of $12$ corresponds to $-3\sigma/2 = - p_1 /2$ where $\sigma$ is the signature and $p_1$ is the first Pontryagin number of the real four-fold $M$.
The organization is as follows. We review the SYZ construction for the related local singularities in Section \ref{sec:SYZ}. Then we construct the local affine geometries modeling the singularities in Section \ref{sec:loc-aff}. In Section \ref{sec:surf} we focus on the relation between rational elliptic surfaces and the `12' Property for polygons. In Section \ref{sec:Schoen} we formulate the notion of orbi-conifold and construct orbi-conifold transitions of the Schoen's CY.
\subsection*{Acknowledgment}
The author is grateful to Ricardo Casta\~no-Bernard for introducing his beautiful joint work with Diego Matessi on Schoen's Calabi-Yau threefold. He expresses his gratitude to Atsushi Kanazawa for useful discussions and comments. He also thanks Naichung Conan Leung and Shing-Tung Yau for their continuous encouragement.
\section{A quick review on the SYZ mirrors for local orbifolded conifolds} \label{sec:SYZ}
In \cite{KL}, we studied SYZ mirror symmetry for the local generalized conifold $G_{k,l}$ and orbifolded conifold $O_{k,l}$, following the construction of \cite{auroux07,CLL,AAK}. For $k=l=1$, it reduces to mirror symmetry for the local conifold (which is self-mirror). In this section we recall the geometries of $G_{k,l}$ and $O_{k,l}$ and their SYZ mirror symmetry.
\subsection{Lagrangian fibrations and SYZ mirrors}
\begin{defn}
A local generalized conifold is given by the equation
$$xy=(1+z)^k(1+w)^l$$
in $\mathbb{C}^4$.
\end{defn}
It is a toric Gorenstein singularity whose fan is the three-dimensional cone spanned by the primitive vectors $(0,0,1),(k,0,1),(0,1,1),(l,1,1)$. In other words it is a cone over the trapezoid with vertices $(0,0,1),(k,0,1),(0,1,1),(l,1,1)$ contained in the affine plane in height 1.
When $k,l \geq 2$, the set of singularities is the union of $\{x=y=0,z=-1\}$ and $\{x=y=0,w=-1\}$; when $k=1$ and $l\geq 2$, the set of singularities is $\{x=y=0,w=-1\}$; when $k=l=1$, the point $x=y=1+z=1+w=0$ is the only singularity.
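For the first case one can verify the claim symbolically; the following short sketch is our own illustration, for the sample values $k=l=2$, and checks that the defining polynomial and all of its partial derivatives vanish along $\{x=y=0,\, z=-1\}$:
\begin{verbatim}
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
k, l = 2, 2
f = x * y - (1 + z) ** k * (1 + w) ** l

# restrict to the claimed singular locus {x = y = 0, z = -1}, w arbitrary
locus = {x: 0, y: 0, z: -1}
values = [f.subs(locus)] + [sp.diff(f, v).subs(locus) for v in (x, y, z, w)]
print([sp.simplify(v) for v in values])   # all entries are 0 for every w
\end{verbatim}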
A smoothing is given by changing the polynomial $(1+z)^k(1+w)^l$ (while keeping the highest order) such that its zero set contains no critical point. A toric crepant resolution is given by subdividing the trapezoid into standard lattice triangles whose areas achieve the smallest value $1/2$. The resolution is small in the sense that the exceptional locus is a rational curve.
For the purpose of SYZ mirror symmetry, we remove the anti-canonical divisor $\{zw=0\}$ and denote the resulting space as $G_{k,l}$. Its smoothing and resolution are denoted as $\tilde{G}_{k,l}$ and $\hat{G}_{k,l}$ respectively.
\begin{defn}
A local orbifolded conifold is given by the equations
$$u_1v_1=(1+z)^k, \, u_2v_2=(1+z)^l$$
in $\mathbb{C}^5$.
\end{defn}
It is another toric Gorenstein singularity whose fan is the three-dimensional cone spanned by the primitive vectors $(0,0,1),(k,0,1),(0,l,1),(k,l,1)$. In other words it is a cone over the corresponding rectangle in the affine plane in height 1.
When $k,l \geq 2$, the set of singularities is the union of $\{u_1=v_1=0,z=-1\}$ and $\{u_2=v_2=0,z=-1\}$; when $k=1$ and $l\geq 2$, the set of singularities is $\{u_2=v_2=0,z=-1\}$; when $k=l=1$, the point $u_1=v_1=u_2=v_2=1+z=0$ is the only singularity.
A smoothing is given by changing the polynomials $(1+z)^k$ and $(1+z)^l$ such that they do not have multiple roots. A toric crepant resolution is given by subdividing the rectangle into standard lattice triangles. We remove the anti-canonical divisor $\{z=0\}$ and denote the resulting space by $O_{k,l}$. Its smoothing and resolution are denoted as $\tilde{O}_{k,l}$ and $\hat{O}_{k,l}$ respectively.
In \cite{KL} we showed the following.
\begin{theorem}[\cite{KL}] \label{localSYZ}
The local generalized conifold $G_{k,l}$ is SYZ mirror to the orbifolded conifold $O_{k,l}$. Namely, the deformed generalized conifold $\tilde{G}_{k,l}$ is SYZ mirror to the resolved orbifolded conifold $\hat{O}_{k,l}$; the resolved generalized conifold $\hat{G}_{k,l}$ is SYZ mirror to the deformed orbifolded conifold $\tilde{O}_{k,l}$. It is summarized by the following diagram.
$$
\xymatrix{
\tilde{G}_{k,l}\ar@{<->}[d]_{SYZ} &\ar@{~>}[l] G_{k,l} \ar@{<->}[d]^{SYZ} & \ar@{->}[l] \hat{G}_{k,l} \ar@{<->}[d]^{SYZ} \\
\hat{O}_{k,l} \ar@{->}[r] & O_{k,l}\ar@{~>}[r] & \tilde{O}_{k,l}.
}
$$
\end{theorem}
The SYZ program realizes a mirror pair as dual Lagrangian torus fibrations. On $O_{k,l}$, we consider the Hamiltonian $T^2$-action given by $u_i \mapsto \lambda_i u_i$, $v_i \mapsto \lambda_i^{-1} v_i$ for $i=1,2$, leaving $z$ unchanged. We also have the corresponding action on its resolution and smoothing, and let's denote the moment map to $\mathbb{R}^2$ by $\nu_{\mathbb{T}^2}$ (which is simply given by $(|u_1|^2 - |v_1|^2,|u_2|^2 - |v_2|^2)$ on $O_{k,l}$). Then one can verify that
\begin{equation} \label{eq:fib_O}
(\log|z|,\nu_{\mathbb{T}^2})
\end{equation}
gives a Lagrangian fibration on $O_{k,l}$,$\hat{O}_{k,l}$ and $\tilde{O}_{k,l}$.
On $G_{k,l}$, we have the Hamiltonian $\mathbb{S}^1$-action given by $x \mapsto \lambda x$, $y \mapsto \lambda^{-1} y$, leaving $z,w$ unchanged. We also have the corresponding action on its resolution and smoothing, and let's denote the moment map to $\mathbb{R}$ by $\mu_{\mathbb{S}^1}$. ($\mu_{\mathbb{S}^1}$ is simply given by $|x|^2 - |y|^2$ on $G_{k,l}$.) Then we have the torus fibration
$$ (\mu_{\mathbb{S}^1},\log|z|,\log|w|) $$
on $G_{k,l}$,$\hat{G}_{k,l}$ and $\tilde{G}_{k,l}$.
The torus fibration is Lagrangian for $\hat{G}_{k,l}$, but not for $\tilde{G}_{k,l}$.
By the result of \cite{AAK} using the Moser argument, the torus fibration can be modified by an isotopy to a Lagrangian fibration (where the fibration map is piecewise smooth near the discriminant locus) with the first coordinate $\mu_{\mathbb{S}^1}$ remaining unchanged. The issue is that the reduced symplectic space of $\tilde{G}_{k,l}$ by $\mathbb{S}^1$ is just isotopic but not exactly equal to $(\mathbb{C}^2,\omega_{\mathrm{std}})$, and so one needs the Moser argument to connect these two by symplectomorphisms.
From the Lagrangian fibrations on $\hat{G}_{k,l}$ and $\tilde{G}_{k,l}$, we constructed $\tilde{O}_{k,l}$ and $\hat{O}_{k,l}$ as their SYZ mirrors respectively; the reverse direction is also true. The key ingredient in the construction is the holomorphic discs emanating from singular Lagrangian fibers (which are of Maslov index zero). It is summarized as follows.
\begin{lemma}[Discriminant locus] \label{lem:disc-locus}
For the Lagrangian fibration on $G_{k,l}$ or the Lagrangian fibration on $O_{k,l}$, the discriminant locus is given by $\left(\{0\} \times \mathbb{R} \times \{0\}\right) \cup \left(\{0\}\times \{0\} \times \mathbb{R}\right)$. On $\hat{G}_{k,l}$ or $\tilde{O}_{k,l}$, it becomes
$$\left(\bigcup_{i=1}^k \left(\{s_i\} \times \mathbb{R} \times \{0\}\right)\right) \cup \left(\bigcup_{i=1}^l \left(\{t_i\} \times \{0\} \times \mathbb{R} \right)\right)$$
where $s_i$ and $t_i$ are certain constants related to the symplectic sizes of spheres in the exceptional curve of $\hat{G}_{k,l}$.
For $\tilde{G}_{k,l}$ or $\hat{O}_{k,l}$, the discriminant locus is contained in the plane $\{0\} \times \mathbb{R} \times \mathbb{R}$ and is homotopic to the dual graph of the triangulation of the rectangle with vertices $(0,0),(k,0),(0,l),(k,l)$ in the definition of the resolution $\hat{O}_{k,l}$.
\end{lemma}
Denote by $B_0$ the complement of the discriminant locus in the base $B$ of a Lagrangian fibration with a Lagrangian section. $B_0$ has an induced tropical affine structure (namely it has an atlas with transition maps being elements of $\mathrm{GL}(n,\mathbb{Z}) \ltimes \mathbb{R}^n$). Denote by $\Lambda \subset TB_0$ the corresponding local system, and $\Lambda^* \subset T^*B_0$ the dual. Then the inverse image of $B_0$ is symplectomorphic to $T^*B_0 / \Lambda^*$. The complex manifold $TB_0 / \Lambda$ is called the semi-flat mirror which serves as the first-order approximation. Then the generating functions of holomorphic discs of Maslov index zero give corrections to the complex structure of the semi-flat mirror.
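For instance, if $B_0$ is an open subset of $\mathbb{R}^n$ with its standard affine structure, then $\Lambda\cong\mathbb{Z}^n$, and the semi-flat mirror $TB_0/\Lambda$ is the tube domain over $B_0$ inside $(\mathbb{C}^\times)^n$, with holomorphic coordinates $\exp(x_j+\mathbf{i}\,y_j)$ (up to normalization).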
\begin{prop}[Wall]
Given a Lagrangian fibration, let $H \subset B$ (called the wall) be the set of points whose regular Lagrangian torus fibers bound non-constant holomorphic discs of Maslov index zero. For the Lagrangian fibration on $G_{k,l}$ or $\hat{G}_{k,l}$, $H$ is given by $\left(\mathbb{R} \times \mathbb{R} \times \{0\}\right) \cup \left(\mathbb{R} \times \{0\} \times \mathbb{R}\right)$. On $\tilde{G}_{k,l}$, $H$ is given by $\mathbb{R} \times \Delta$ where $\Delta$ denotes the discriminant locus contained in the horizontal plane.
For the Lagrangian fibration on $O_{k,l}$ or $\hat{O}_{k,l}$, $H$ is given by $\{0\} \times \mathbb{R}^2$. On $\tilde{O}_{k,l}$, $H$ becomes $\{s_1,\ldots,s_k,t_1,\ldots,t_l\} \times \mathbb{R}^2$ where $s_i$ and $t_i$ are the constants appearing in Lemma \ref{lem:disc-locus}.
\end{prop}
The discriminant loci and walls are shown in Figure \ref{fig:local-walls-gencfd}.
\begin{theorem}[Slab function]
Each component of $H$ is attached with a function defined by wall-crossing of the open Gromov-Witten potential. For $\hat{G}_{k,l}$, the function attached to $\mathbb{R} \times \mathbb{R} \times \{0\}$ is given by $(1+Z)(1+q_1Z)\ldots(1+q_1\ldots q_{k-1}Z)$, and that attached to $\mathbb{R} \times \{0\} \times \mathbb{R}$ is given by $(1+cZ)(1+q_1'cZ)\ldots(1+q_1'\ldots q_{l-1}'cZ)$, where $q_1,\ldots,q_{k-1},q_1',\ldots,q_{l-1}',c$ are given in the form $e^{- \int_C \omega}$ for certain rational curves $C$, and $Z$ is the semi-flat complex coordinate corresponding to the third coordinate $\mu_{\mathbb{S}^1}$ of the base of the Lagrangian fibration. For $\tilde{G}_{k,l}$ the function is $1+Z$.
For $\tilde{O}_{k,l}$, the slab function attached to each $\{s_i\} \times \mathbb{R}^2$ is $(1+Z)$, and that attached to each $\{t_i\} \times \mathbb{R}^2$ is $(1+W)$, where $Z,W$ are the semi-flat complex coordinates corresponding to the base direction $\nu_{T^2}$ of the Lagrangian fibration. For $\hat{O}_{k,l}$, the slab function is of the form
$$\sum_{i=0}^k \sum_{j=0}^l (1+\delta_{ij}(q)) q^{C_{ij}} Z^i W^j$$
where for each $i,j$, $\delta_{ij}(q)$ is a certain generating function of open Gromov-Witten invariants, and $C_{ij}$ is a certain rational curve in $\hat{O}_{k,l}$.
\end{theorem}
$\delta_{ij}(q)$ can be explicitly computed from the mirror map \cite{CCLT13}. The discriminant loci, walls and slab functions are shown in Figure \ref{fig:local-walls-gencfd}.
\begin{figure}
\caption{The discriminant loci, walls and slab functions for the local generalized and orbifolded conifold transitions.}
\label{fig:local-walls-gencfd}
\end{figure}
By gluing local pieces of semi-flat mirrors using the slab functions, we obtain the SYZ mirrors given in Theorem \ref{localSYZ}. We refer to \cite{CLL,KL} for detail.
In Section \ref{sec:loc-aff}, we shall follow \cite{GS07,CM1,CM2} and use tropical geometry to encode the data. The Gross-Siebert program has the important advantage that it can handle global geometries by combinatorics. The walls and generating functions of the local geometry given above serve as initial input data to the Gross-Siebert program.
\begin{remark}
In an ongoing work we shall construct the quiver mirror of $\tilde{O}_{k,l}$ by using the construction of \cite{CHL2}, see Figure \ref{fig:ncgencfd}, which is useful in studying stability conditions and flop along the line of \cite{FHLY}. It is a noncommutative resolution of $G_{k,l}$ (which was extensively studied by \cite{Nagao,MN}, and can be derived from the construction of Bocklandt \cite{Bocklandt}).
\end{remark}
\begin{figure}
\caption{A noncommutative resolution of the generalized conifold $G_{k,l}$.}
\label{fig:ncgencfd}
\end{figure}
In summary, we have SYZ mirror symmetry for the resolutions and smoothings of the local generalized and orbifolded conifold transitions near large volume limits. There is also a noncommutative mirror construction near the conifold limit. It is schematically depicted by Figure \ref{fig:gen-con-MS-mod}.
\begin{figure}
\caption{The moduli spaces. Around a large complex structure limit we have the SYZ construction using Lagrangian torus fibrations. Around a conifold limit we have the noncommutative mirror construction in \cite{CHL2}.}
\label{fig:gen-con-MS-mod}
\end{figure}
\subsection{Monodromy computation for local generalized and orbifolded conifolds}
We have reviewed Lagrangian fibrations and SYZ for the local generalized conifold $G_{k,l}$ and orbifolded conifold $O_{k,l}$. Now we compute the monodromies of the fibrations for later purpose.
Recall that $B_0$ denotes the complement of discriminant locus in the base $B$ of the Lagrangian fibration. For $G_{k,l}=\{xy=(1+z)^k(1+w)^l\}$, fix two contractible open sets $U_+$ and $U_-$ covering $B_0$, where $U_+ = B_0 - \mathbb{R}_{\leq 0} \times \Delta$ and $U_- = B_0 - \mathbb{R}_{\geq 0} \times \Delta$. The torus bundles over $B_0$ trivialize over $U_+$ and $U_-$. Then fix a basis of the fundamental group of each fiber at $(a,b,c) \in U_+$ as follows.
\begin{align*}
\gamma_1(t) &\textrm{ defined by } z=e^b,w=e^c,x= r e^{2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+;\\
\gamma_2(t) &\textrm{ defined by } z=e^b e^{2\pi\mathbf{i}\, t},w=e^c,x \in \mathbb{R}_+;\\
\gamma_3(t) &\textrm{ defined by } z=e^b,w=e^c e^{2\pi\mathbf{i}\, t},x \in \mathbb{R}_+
\end{align*}
under the constraints $|x|^2-|y|^2=a$ and $xy=(1+z)^k(1+w)^l$. (Note that $x\not=0$ over $U_+$.)
$[\gamma_i]$ for $i=1,2,3$ defines a basis of the fundamental group. Similarly we fix a basis over $U_-$ (where $y\not=0$) by taking
\begin{align*}
\tilde{\gamma}_1(t) &\textrm{ defined by } z=e^b,w=e^c,y= r e^{-2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+;\\
\tilde{\gamma}_2(t) &\textrm{ defined by } z=e^b e^{2\pi\mathbf{i}\, t},w=e^c,y \in \mathbb{R}_+;\\
\tilde{\gamma}_3(t) &\textrm{ defined by } z=e^b,w=e^c e^{2\pi\mathbf{i}\, t},y \in \mathbb{R}_+.
\end{align*}
In the pre-image of $U_-$, $y \not=0$ and so the above is well-defined.
\begin{prop}[Monodromy for $G_{k,l}$] \label{prop:mon-gencfd}
For the Lagrangian fibration on $G_{k,l}$, the monodromy around $\{0\} \times \mathbb{R} \times \{0\}$ is $[\gamma_1] \mapsto [\gamma_1]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3] + l [\gamma_1]$; the monodromy around $\{0\} \times \{0\} \times \mathbb{R}$ is $[\gamma_1] \mapsto [\gamma_1]$, $[\gamma_2] \mapsto [\gamma_2]- k [\gamma_1]$ and $[\gamma_3] \mapsto [\gamma_3]$.
\end{prop}
\begin{proof}
First we consider $G_{k,l}$. $U_+ \cap U_-$ consists of four connected components which we call chambers. We have the following by considering the winding numbers of the variables $x,y,z,w$ with the constraint $xy=(1+z)^k(1+w)^l$. For a base point in a chamber,
\begin{align*}
[\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2],[\gamma_3]=[\tilde{\gamma}_3] \textrm{ in } \mathbb{R} \times \mathbb{R}_- \times \mathbb{R}_-;\\
[\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2]-k[\tilde{\gamma}_1],[\gamma_3]=[\tilde{\gamma}_3] \textrm{ in } \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_-;\\
[\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2],[\gamma_3]=[\tilde{\gamma}_3]-l[\tilde{\gamma}_1] \textrm{ in } \mathbb{R} \times \mathbb{R}_- \times \mathbb{R}_+;\\
[\gamma_1]=[\tilde{\gamma}_1], [\gamma_2]=[\tilde{\gamma}_2]-k[\tilde{\gamma}_1],[\gamma_3]=[\tilde{\gamma}_3]-l[\tilde{\gamma}_1] \textrm{ in } \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_+.
\end{align*}
Now take a loop around $\{0\} \times \mathbb{R} \times \{0\}$. It goes from the chamber $\mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_+$ to $\mathbb{R} \times \mathbb{R}_+ \times \mathbb{R}_-$ in $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, and then goes back to the original chamber in $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$. In $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$ we use the basis $[\gamma_i]$; in $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$ we use the basis $[\tilde{\gamma_i}]$ for $i=1,2,3$. Then the monodromy around the loop is $[\gamma_1] \mapsto [\gamma_1]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3] + l [\gamma_1]$. The computation for the monodromy around $\{0\} \times \{0\} \times \mathbb{R}$ is similar.
\end{proof}
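For later reference, we record the monodromies of Proposition \ref{prop:mon-gencfd} in matrix form. Writing the action on column vectors in the basis $([\gamma_1],[\gamma_2],[\gamma_3])$ (so that, for instance, the image $[\gamma_3]+l[\gamma_1]$ of $[\gamma_3]$ is the third column of the first matrix), the monodromies around $\{0\} \times \mathbb{R} \times \{0\}$ and around $\{0\} \times \{0\} \times \mathbb{R}$ are
$$\left(\begin{array}{ccc} 1 & 0 & l \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \quad \textrm{and} \quad \left(\begin{array}{ccc} 1 & -k & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$$
respectively.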
For $O_{k,l}=\{u_1v_1=(1+z)^k,u_2v_2=(1+z)^l\}$, we take four contractible open subsets $U_{\pm\pm}$ covering $B_0$, where
\begin{align*}
U_{++} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{>0} \times \mathbb{R}_{>0});\\
U_{+-} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{>0} \times \mathbb{R}_{<0});\\
U_{-+} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{<0} \times \mathbb{R}_{>0});\\
U_{--} =& ((\mathbb{R}-\{0\})\times \mathbb{R}\times\mathbb{R}) \cup (\mathbb{R} \times \mathbb{R}_{<0} \times \mathbb{R}_{<0}).
\end{align*}
We take a basis over $(a,b,c) \in U_{++}$ to be:
\begin{align*}
\gamma_1(t)=\gamma^{++}_1(t) &\textrm{ defined by } z=e^a e^{2\pi\mathbf{i}\, t},u_1, u_2 \in \mathbb{R}_+;\\
\gamma_2(t)=\gamma^{++}_2(t) &\textrm{ defined by } z=e^a,u_1=r e^{2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+,u_2 \in \mathbb{R}_+;\\
\gamma_3(t)=\gamma^{++}_3(t) &\textrm{ defined by } z=e^a,u_1\in \mathbb{R}_+,u_2=r e^{2\pi\mathbf{i}\, t} \textrm{ for } r \in \mathbb{R}_+
\end{align*}
under the constraints $|u_1|^2-|v_1|^2=b$ and $|u_2|^2-|v_2|^2=c$.
It is well-defined since $u_1,u_2 \not= 0$ over $U_{++}$.
We replace $(u_1,u_2)$ by $(u_1,v_2^{-1})$ for $U_{+-}$, by $(v_1^{-1},u_2)$ for $U_{-+}$, and by $(v_1^{-1},v_2^{-1})$ for $U_{--}$. Then we have a basis $\{\gamma^{\pm\pm}_i:i=1,2,3\}$ over each open set.
\begin{prop}[Monodromy for $O_{k,l}$] \label{prop:mon-orbcfd}
For the Lagrangian fibration on $O_{k,l}$, the monodromy around $\{0\} \times \mathbb{R} \times \{0\}$ is
$[\gamma_1] \mapsto [\gamma_1] - l [\gamma_3]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3]$;
the monodromy around $\{0\} \times \{0\} \times \mathbb{R}$ is $[\gamma_1] \mapsto [\gamma_1] + k [\gamma_2]$, $[\gamma_2] \mapsto [\gamma_2]$ and $[\gamma_3] \mapsto [\gamma_3]$.
\end{prop}
\begin{proof}
The intersection of all the open sets is the union of the two disjoint open subsets $\mathbb{R}_\pm \times \mathbb{R} \times \mathbb{R}$. In the chamber $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$, for each $i$ the classes $[\gamma_i^{\pm\pm}]$ are all the same. In the chamber $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, for $i=2,3$ the classes $[\gamma_i^{\pm\pm}]$ are all the same; for $i=1$, $[\gamma_1^{++}]=[\gamma_1^{-+}]-k[\gamma_2^{-+}]=[\gamma_1^{+-}]-l[\gamma_3^{+-}]=[\gamma_1^{--}]-k[\gamma_2^{--}]-l[\gamma_3^{--}]$. Now consider the monodromy around $\{0\} \times \mathbb{R} \times \{0\}$. We start with the basis $[\gamma_i^{++}]$ in the chamber $\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, change to the basis $[\gamma_i^{+-}]$, move to the chamber $\mathbb{R}_- \times \mathbb{R} \times \mathbb{R}$ in $U_{+-}$, change back to the original basis, and move back to the original chamber in $U_{++}$. Obviously $[\gamma_2^{++}],[\gamma_3^{++}]$ remain the same. We have $[\gamma_1^{++}]\mapsto [\gamma_1^{++}] - l [\gamma_3^{++}]$. The monodromy computation around $\{0\} \times \{0\} \times \mathbb{R}$ is similar.
\end{proof}
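In the same matrix notation as above, the monodromies of Proposition \ref{prop:mon-orbcfd} around $\{0\} \times \mathbb{R} \times \{0\}$ and around $\{0\} \times \{0\} \times \mathbb{R}$ are
$$\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -l & 0 & 1 \end{array}\right) \quad \textrm{and} \quad \left(\begin{array}{ccc} 1 & 0 & 0 \\ k & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)$$
respectively. Note that these are the inverse transposes of the matrices recorded after Proposition \ref{prop:mon-gencfd}; this is consistent with Theorem \ref{localSYZ}, since dual torus fibrations have inverse-transpose monodromy representations (under a suitable identification of the bases of the two fibrations).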
\section{Local affine geometries} \label{sec:loc-aff}
In this section, we formulate the local singularities, their resolutions and smoothings in the language of affine geometry. Such a formulation has the great advantage that it can be easily globalized, thanks to the groundbreaking works of Gross-Siebert \cite{GS1,GS2,GS07}.
The notion of a tropical manifold is central in the Gross-Siebert program. We briefly recall it below.
\begin{defn}
A polarized tropical manifold is a triple $(B,\mathcal{P},\phi)$, where $B$ is an integral affine manifold with singularities, $\mathcal{P}$ is a toric polyhedral decomposition of $B$ (where the singularities occur in facets of polyhedrons in $\mathcal{P}$),
and $\phi$ is a strictly convex multivalued piecewise linear function on $B$.
\end{defn}
In their reconstruction program they also assume $(B,\mathcal{P},\phi)$ is positive and simple. In dimension three this means that $\Delta$ is a trivalent graph and the vertices of $\Delta$ are of either positive or negative type (with certain simple monodromies of the affine connection around each edge); this is the case $d=1$ in Section \ref{sec:orb_vert}.
Gross-Siebert defined discrete Legendre transform which associates a polarized tropical manifold $(B,\mathcal{P},\phi)$ to another one $(\check{B},\check{\mathcal{P}},\check{\phi})$, with the property that the discrete Legendre transform of $(\check{B},\check{\mathcal{P}},\check{\phi})$ goes back to $(B,\mathcal{P},\phi)$. Their groundbreaking work constructs a toric degeneration of a Calabi-Yau manifold $X$ from each compact positive and simple polarized tropical manifold. The Calabi-Yau manifolds associated to a polarized tropical manifold and its discrete Legendre transform form a mirror pair.
By gluing local models of Lagrangian fibrations with prescribed discriminant loci, Castano-Bernard and Matessi \cite{CM1} constructed a symplectic manifold (equipped with a Lagrangian fibration and a Lagrangian section) which contains $T^* B_0 / \Lambda^*$ as an open subset. Here $B_0 = B - \Delta$ and $\Lambda^* \subset T^* B_0$ is a fiberwise lattice given by the affine structure. In \cite{CM2}, they performed conifold transitions for the symplectic manifolds constructed from polarized tropical manifolds.
In this paper we follow the method of Castano-Bernard and Matessi \cite{CM1,CM2} to construct orbi-conifold transitions from an affine manifold with singularities. First we construct the affine structures for the local generalized conifolds, orbifolded conifolds, and their resolutions and smoothings. Then in Section \ref{sec:Schoen} we consider global affine structures with generalized or orbifolded conifold points. The discriminant locus has four-valent vertices, and we also relax the simplicity condition (which allows orbifolded singularities).
Throughout this section $\{e_i:i=1,\ldots,n\}$ denotes the standard basis of $\mathbb{R}^n$.
\subsection{Local affine $A_{k-1}$ singularity in dimension two}
Let's begin with the local $A_{k-1}$ singularity in dimension two, where $k>1$.
\begin{defn}[Affine $A_{k-1}$ singularity]
A singular point in an oriented affine surface is called an affine $A_{k-1}$ singularity (for $k>1$) if in a certain oriented basis, it has monodromy $\left(\begin{array}{cc}
1 & k \\
0 & 1
\end{array}\right)$.
A singular point with such monodromy (even when $k\leq 1$) is said to have multiplicity $k$. When $k=\pm 1$ it is called simple.
\end{defn}
We can cook up two tropical manifolds with an affine $A_{k-1}$ singularity. They are related to each other by discrete Legendre transform. This reflects the fact that the $A_{k-1}$ singularity is self-mirror.
Define the tropical manifold $(B,\mathcal{P},\phi)$ as follows. Take the lattice triangle
$$\mathbb{C}onv\{(0,0),(0,k),(-1,k)\}$$
and the rectangle
$$\mathbb{C}onv\{(0,0),(0,k),(1,0),(1,k)\}.$$
Glue them along the edge $\mathbb{C}onv\{(0,0),(0,k)\}$. Then we have a manifold $B$ with corners and a polyhedral decomposition $\mathcal{P}$. See the top middle of Figure \ref{fig:affA_n}. The fan structures at vertices are given as follows. Denote the standard basis of $\mathbb{R}^2$ by $\{e_1,e_2\}$. We have the fan generated by the primitive vectors $e_1,e_2,-e_1$. At the vertex $(0,0)$, the tangent vectors $(1,0),(0,1),(-1,k)$ of the polytopes are mapped to the primitive generators $e_1,e_2,-e_1$ respectively. At the vertex $(0,k)$, the tangent vectors $(-1,0),(0,-1),(1,0)$ are mapped to the primitive generators $e_1,e_2,-e_1$ respectively. The multivalued piecewise linear function $\phi$ is defined by
\begin{equation} \label{eq:phi1}
\phi(x,y) = \left\{ \begin{array}{ll}
x & x \geq 0 \\
0 & x \leq 0
\end{array}\right.
\end{equation}
in the fan generated by $\{e_1,e_2,-e_1\}$ at each of the two vertices.
\begin{figure}
\caption{Affine structures of the $A_{k-1}$ singularity, its resolution and smoothing.}
\label{fig:affA_n}
\end{figure}
The discriminant locus is a point $\Delta=\{p\}$ in the edge $\mathbb{C}onv\{(0,0),(0,k)\}$. In other words $B-\Delta$ has an affine structure induced from the fans at the vertices. The monodromy around $p$ is given as follows.
\begin{prop}
For the tropical manifold $(B,\mathcal{P},\phi)$ given above, the monodromy matrix around $p$ in the standard basis $\{e_1,e_2\}$ at $(0,0)$ is
$\left(\begin{array}{cc}
1 & 0 \\
k & 1
\end{array}\right).$
\end{prop}
\begin{proof}
Consider a loop starting from the vertex $(0,0)$, going to the vertex $(0,k)$ in the rectangle, and going back to the vertex $(0,0)$ in the triangle. It is easy to see that the vector $e_2$ of the fan at the vertex $(0,0)$ is monodromy invariant. Consider the vector $e_1$. Transporting it to the vertex $(0,k)$ in the rectangle, it is identified with $-e_1$ in the fan at the vertex $(0,k)$. Transporting it back to the vertex $(0,0)$ in the triangle, it is identified with $e_1+ke_2$ in the fan at the vertex $(0,0)$ (since $(1,0) = -(-1,k)+k(0,1)$). Thus $e_1$ is sent to $e_1+ke_2$ under monodromy.
\end{proof}
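As a concrete illustration (not needed in the sequel), take $k=2$: transporting $e_1$ through the rectangle identifies it with $-e_1$ in the fan at the vertex $(0,2)$, and transporting it back through the triangle identifies it with $e_1+2e_2$ at $(0,0)$, since $(1,0)=2\,(0,1)-(-1,2)$; the resulting monodromy matrix is $\left(\begin{array}{cc} 1 & 0 \\ 2 & 1 \end{array}\right)$.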
When $k=1$ the singularity is removable, and the affine manifold is simple in the sense of Definition 1.60 of \cite{GS1}. However the monodromy is no longer simple for $k>1$.
Next, we define $(\check{B},\check{\mathcal{P}})$ by gluing two squares
$$\mathbb{C}onv\{(0,0),(1,0),(0,1),(1,1)\} \textrm{ and } \mathbb{C}onv\{(0,0),(1,0),(0,-1),(1,-1)\}$$
along the edge $\mathbb{C}onv\{(0,0),(1,0)\}$. See the bottom middle of Figure \ref{fig:affA_n}. The fan at the vertex $(0,0)$ is given by mapping the tangent vectors $(0,-1),(1,0),(0,1)$ to $-ke_1-e_2, e_1, e_2$ respectively. The fan at the vertex $(1,0)$ is given by mapping the tangent vectors $(0,1),(-1,0),(0,-1)$ to $e_2,-e_1,-e_2$ respectively. The discriminant locus is a point $\Delta=\{p\}$ in the edge $\mathbb{C}onv\{(0,0),(1,0)\}$. One can similarly check that the monodromy is given as follows and the proof is omitted.
\begin{prop}
For $(\check{B},\check{\mathcal{P}})$, the monodromy matrix around $p$ in the standard basis $\{e_1,e_2\}$ at $(0,0)$ is
$\left(\begin{array}{cc}
1 & -k \\
0 & 1
\end{array}\right).$
\end{prop}
\begin{remark}
This is equivalent to the affine structure of $(B,\mathcal{P})$ after changing to the basis $(-e_2,e_1)$.
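Explicitly, writing $f_1=-e_2$ and $f_2=e_1$, the monodromy computed above gives $f_1=-e_2\mapsto -e_2=f_1$ and $f_2=e_1\mapsto e_1+ke_2=f_2-kf_1$, which is exactly the matrix displayed in the proposition.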
\end{remark}
Define the multivalued piecewise linear function $\check{\phi}$ by
$$
\check{\phi}(x,y) = \left\{ \begin{array}{ll}
0 & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,-k e_1 - e_2\} \\
ky & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,e_2\} \\
\end{array}\right.
$$
on the fan at the vertex $(0,0)$, and
$$
\check{\phi}(x,y) = \left\{ \begin{array}{ll}
0 & \textrm{ in } \mathbb{R}_{\geq 0}\{-e_1,-e_2\} \\
ky & \textrm{ in } \mathbb{R}_{\geq 0}\{-e_1,e_2\} \\
\end{array}\right.
$$
on the fan at the vertex $(1,0)$. (The differentials, i.e.\ the so-called discrete Legendre transform, give the corners of the polytopes of $(B,\mathcal{P})$.)
It is easy to check the following.
\begin{prop}
$(B,\mathcal{P},\phi)$ and $(\check{B},\check{\mathcal{P}},\check{\phi})$ given above are discrete Legendre dual to each other.
\end{prop}
\begin{proof}
One can directly check that the polytopes in $(\check{B},\check{\mathcal{P}})$ are Legendre dual polytopes of the piecewise linear function $\phi$ around vertices of $(B,\mathcal{P})$ (by taking the differential of $\phi$ restricted on each cone), and vice versa. Moreover the fan structure at each vertex of $(\check{B},\check{\mathcal{P}})$ is given by the normal fan of the corresponding polytope in $(B,\mathcal{P})$, and vice versa.
\end{proof}
\begin{remark} \label{rem:edge}
By taking a product of the affine $A_{k-1}$ singularity $(B,\mathcal{P})$ (or $(\check{B},\check{\mathcal{P}})$) with the affine line $\mathbb{R}$, one obtains a tropical threefold whose discriminant locus is a line. The multiplicity of such a discriminant locus is defined to be $k$.
(As in \cite{CM1,CM2}, one can perturb the discriminant locus to be a curve.)
\end{remark}
\subsection{Smoothing and resolution of a local affine $A_{k-1}$ singularity}
The tropical manifolds corresponding to the resolution and smoothing of an $A_{k-1}$ singularity are given by the left and right sides of Figure \ref{fig:affA_n}. The singularity in the affine base (which has multiplicity $k$) separates into $k$ simple singularities. In a smoothing the simple singularities lie in the same monodromy-invariant affine hyperplane (which is formed by some edges of the polytopes in the decomposition). A resolution is Legendre dual to a smoothing.
The reader can easily read off the polyhedral decompositions and fan structures from the figures, so we omit the detailed descriptions. The monodromy around each critical point is simple, namely it equals
$\left(\begin{array}{cc}
1 & \pm 1 \\
0 & 1
\end{array}\right)$
up to conjugation.
For the fan generated by $\{e_1,e_2,-e_1\}$, the restriction of the multivalued piecewise linear function is given by Equation \eqref{eq:phi1}; for that generated by $\{e_1,e_2,-e_1,-e_2\}$ in the top left of Figure \ref{fig:affA_n}, it is given by
$$
\phi(x,y) = \left\{ \begin{array}{ll}
x+y & x, y \geq 0 \\
x & x \geq 0 \textrm{ and } y \leq 0 \\
y & y \geq 0 \textrm{ and } x \leq 0 \\
0 & x, y \leq 0;
\end{array}\right.
$$
for that generated by $\{e_1,e_2,-e_1,-j e_1 - e_2\}$ in the bottom right of Figure \ref{fig:affA_n}, it is given by
$$
\phi(x,y) = \left\{ \begin{array}{ll}
0 & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,-j e_1 - e_2\} \\
\frac{(j+k)(j+k-1)y}{2} & \textrm{ in } \mathbb{R}_{\geq 0}\{e_1,e_2\} \\
-x+\frac{(j+k)(j+k-1)y}{2} & \textrm{ in } \mathbb{R}_{\geq 0}\{e_2,-e_1\} \\
-x+j y & \textrm{ in } \mathbb{R}_{\geq 0}\{-e_1,-j e_1 - e_2\} \\
\end{array}\right.
$$
The restrictions of $\phi$ to the other strata are similar.
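As a quick consistency check, across each wall of this fan the two adjacent expressions for $\phi$ differ by a linear function vanishing on that wall: for instance, across the ray generated by $e_2$ the expressions $\frac{(j+k)(j+k-1)y}{2}$ and $-x+\frac{(j+k)(j+k-1)y}{2}$ differ by $x$, and across the ray generated by $-je_1-e_2$ the expressions $0$ and $-x+jy$ differ by $-x+jy$, which vanishes on that ray since at $(x,y)=t(-j,-1)$ one has $-(-tj)+j(-t)=0$. Hence $\phi$ is well defined as a multivalued piecewise linear function.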
\begin{remark} \label{rem:subdivide}
At the vertex $(-1,k)$ of $(B,\mathcal{P})$ for the local $A_{k-1}$ singularity, we have an orbifolded fan structure (see the top middle of Figure \ref{fig:affA_n}). The same happens for its local $A_{k-1}$ resolution (top right of Figure \ref{fig:affA_n}). It can be easily resolved by a subdivision of the polyhedral decomposition. See Figure \ref{fig:affA_nres}. The fan structures at the additional vertices are taken to be trivial.
\end{remark}
\begin{figure}
\caption{A subdivision to resolve the orbifold fan. It shows the case for $k=2$, and it is similar for general $k$.}
\label{fig:affA_nres}
\end{figure}
\begin{remark} \label{rem:edge-sm}
Again by taking a product with $\mathbb{R}$, one obtains a smoothing or a resolution of the singularity given in Remark \ref{rem:edge}.
\end{remark}
\subsection{Lagrangian fibration on local $A_{k-1}$ singularity}
The $A_{k-1}$ surface singularity is toric. Its fan is the cone in $\mathbb{R}^2$ generated by $(0,1)$ and $(k,1)$. Let $X$ be the corresponding toric variety. Fixing a toric K\"ahler form, one has the moment map to $\mathbb{R}^2$, whose image is a (non-compact) polyhedral set. Let $\mu_1$ be the horizontal component of the moment map. Then
\begin{equation} \label{eq:fib_An}
(\mu_1,\log|\nu-1|)
\end{equation}
gives a Lagrangian fibration on $X - \{\nu=1\}$ where $\nu$ is the toric holomorphic function corresponding to the $(0,1)$ lattice point. The affine base of the fibration is isomorphic to the top middle of Figure \ref{fig:affA_n}. See \cite{LLW} for more detail.
Alternatively we can take its mirror variety $\{(u,v,z)\in \mathbb{C}^2 \times \mathbb{C}^\times:uv=(1+z)^k\}$, which again has an $A_{k-1}$ singularity at $z=-1$, $u=v=0$. Then
\begin{equation} \label{eq:fib_An'}
(|u|^2-|v|^2,\log|z|)
\end{equation}
gives a Lagrangian fibration whose affine base is isomorphic to the bottom middle of Figure \ref{fig:affA_n}.
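Note that this mirror indeed has an $A_{k-1}$ singularity: substituting $w=1+z$ turns the defining equation into $uv=w^k$, the standard equation of the $A_{k-1}$ surface singularity at $u=v=w=0$.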
The affine smoothings and resolutions in Figure \ref{fig:affA_n} are the bases of Lagrangian fibrations on the resolution and the smoothing of the $A_{k-1}$ singularity. We have the toric resolution whose fan is generated by $(j,1)$ for $j=0,\ldots,k$. A smoothing is given by deforming the right hand side of the equation $uv=(1+z)^k$ such that all roots are simple. Their Lagrangian fibrations are given by the same equations as above.
By taking the product with $\mathbb{C}^\times$, we have Lagrangian fibrations whose bases are isomorphic to the affine threefolds in Remarks \ref{rem:edge} and \ref{rem:edge-sm}. These Lagrangian fibrations serve as local models which can be glued to give more interesting geometries.
\subsection{Affine models for local generalized and orbifolded conifolds} \label{sec:aff-loc}
Now we construct tropical manifolds modeling the base of the Lagrangian fibrations in Section \ref{sec:SYZ}. They will have the same discriminant loci and monodromies as in the last subsection. The singularities of the local generalized and the orbifolded conifolds are said to be negative and positive respectively, according to the Euler characteristics of the corresponding singular Lagrangian fibers. For $k=l=1$, this is the local affine geometry studied by \cite{CM2}.
First we consider the local generalized conifold. Take the triangular prisms
$$\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k),(0,-1,k),(l,-1,k)\}$$
and
$$\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k),(0,1,0),(0,1,k)\}$$
and glue them together along the rectangle $\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k)\}$. See the left of Figure \ref{fig:loc-orb-cfd}.
Take the fan given by the product of $\mathbb{R}_{\geq 0}\{e_1\}$ with the fan generated by $\{e_2,e_3,-e_2\}$ in the plane. Then the fan at the vertex $(0,0,0)$ is given by mapping the tangent vectors $(1,0,0),(0,1,k),(0,0,1),(0,-1,0)$ to the generators $e_1,e_2,e_3,-e_2$ respectively. The fans at the other three vertices $(l,0,0),(0,0,k),(l,0,k)$ are similar and the reader can understand from the left of Figure \ref{fig:loc-orb-cfd}.
\begin{figure}
\caption{Affine structures of the local generalized conifold and orbifolded conifold.}
\label{fig:loc-orb-cfd}
\end{figure}
The discriminant locus $\Delta$ is a union of the lines
$$\left([0,l] \times \{0\} \times \{k/2\}\right) \cup \left(\{l/2\} \times \{0\} \times [0,k] \right)$$
in the rectangle $\mathrm{Conv}\{(0,0,0),(l,0,0),(0,0,k),(l,0,k)\}$. This gives an affine manifold with singularities and polyhedral decomposition $(B,\mathcal{P})$. The monodromy is given as follows.
\begin{prop} \label{prop:mono-match}
For the tropical manifold $(B,\mathcal{P})$ given above, the monodromy matrix around the component $[0,l] \times \{0\} \times \{k/2\}$ of the discriminant locus in the standard basis $\{e_1,e_2,e_3\}$ at $(0,0,0)$ equals
$$\left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & k & 1\\
\end{array}\right)$$
and the monodromy matrix around the component $\{l/2\} \times \{0\} \times [0,k]$ equals
$$\left(\begin{array}{ccc}
1 & -l & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{array}\right).$$
This matches the monodromy in Proposition \ref{prop:mon-orbcfd}.
\end{prop}
\begin{proof}
$e_1$ and $e_3$ are obviously monodromy invariant. Let's consider the monodromy of $e_2$ around $\{l/2\} \times \{0\} \times [0,k]$, and the other case is similar. First we transport $e_2$ from the vertex $(0,0,0)$ to $(l,0,0)$ in the prism on the right, which is identified with $l e_1-e_2$ in the fan at $(l,0,0)$ (since $(0,1,0)=l(-1,0,0)-(-l,-1,0)$). It is identified with the vector $(-l,0,0)+(0,1,k)$ in the prism on the left. Transporting it back to $(0,0,0)$, it is identified with $-l e_1 + e_2$ in the fan at $(0,0,0)$. Thus the monodromy maps $e_2$ to $-l e_1 + e_2$.
\end{proof}
The multivalued piecewise linear function $\phi$ is defined by
\begin{equation} \label{eq:phi-gc}
\phi(x,y,z) = \left\{ \begin{array}{ll}
y & y \geq 0 \\
0 & y \leq 0 \\
\end{array}\right.
\end{equation}
on the fan generated by $e_1,e_2,e_3,-e_2$. The triple $(B,\mathcal{P},\phi)$ defines a tropical manifold.
The discrete Legendre transform of $(B,\mathcal{P},\phi)$ is a union of four rectangles as shown in the right of Figure \ref{fig:loc-orb-cfd}. The reader can work out the multivalued piecewise linear function $\check{\phi}$ from the figure, and check that the monodromy matches that in Proposition \ref{prop:mon-gencfd}.
\begin{remark}
Proposition 6.4 of \cite{CM2} extends to this case by the same proof using action coordinates. Namely, the affine structure induced on the base of the Lagrangian fibration \eqref{eq:fib_O} on $O_{k,l}$ is isomorphic to the affine manifold $(\check{B},\check{\mathcal{P}})$.
The monodromy of $(B,\mathcal{P})$ matches that of the Lagrangian fibration on the orbifolded conifold, while the monodromy of $(\check{B},\check{\mathcal{P}})$ matches that of the Lagrangian fibration on the generalized conifold. It may look confusing that the roles of $(B,\mathcal{P})$ and $(\check{B},\check{\mathcal{P}})$ get switched. This is due to the fact that $T^*B_0 \cong T\check{B}_0$ and $T^*\check{B}_0 \cong TB_0$. (Away from singular fibers a Lagrangian fibration is modeled by $T^*B_0 / \Lambda^*$.) $TB_0/\Lambda$ and $T^*B_0 / \Lambda^*$ are related by discrete Legendre transform.
\end{remark}
\subsection{Smoothings and resolutions of generalized and orbifolded conifolds}
The affine generalized conifolds are not simple in the sense of \cite[Definition 1.60]{GS1}. They can be smoothed or resolved as shown on the left and right of Figure \ref{fig:loc-orb-con-res} (for the case $k=l=2$). Smoothing and resolution of the orbifolded conifold are given by the Legendre transform. There are different choices of smoothing corresponding to different ways of refining the rectangle $[0,k]\times [0,l]$ into standard triangles; the discriminant locus is given by taking the dual graph of the triangulation. (There are further choices of extending the triangulation to a refinement of the polyhedral decomposition, but it does not affect the affine structure.) Similarly there are different choices of resolution corresponding to the different orders of horizontal and vertical line components of the discriminant locus. In the description below we have fixed a particular choice.
\begin{figure}
\caption{Affine base of the smoothing and resolution of the generalized conifold. In the figure $k=l=2$.}
\label{fig:loc-orb-con-res}
\end{figure}
For the smoothed generalized conifold, the polyhedral decomposition is shown on the left of Figure \ref{fig:loc-orb-con-res}. The non-trivial affine structure comes from the assignment of a fan structure at each lattice point of the rectangle spanned by the directions $(1,0,0),(0,0,1)$. Let's take the coordinate system such that these lattice points are given by $(j,0,i)$ for $i=0,\ldots,k$ and $j=0,\ldots,l$. The key point is that, at the lattice point $(j,0,i)$, the vector $e_2$ in the local chart induced by the fan is identified with $(0,-1,k-i)$ in the polyhedral decomposition, and the vector $-e_2$ is identified with $(l-j,1,0)$. Other directions are trivially identified.
The triangulation of the rectangle $[0,k]\times [0,l]$ is associated with a dual graph. To fix the positions of the vertices in the dual graph, we fix an integral piecewise linear function supported on the triangulation. Then we define $\phi$ to be the sum of this function and that given by Equation \eqref{eq:phi-gc}. This completes the definition of the tropical manifold $(B,\mathcal{P},\phi)$ corresponding to a deformed generalized conifold. Its Legendre dual corresponds to a resolved orbifolded conifold.
The resolved affine generalized conifold is shown in the right figure. The discriminant locus consists of vertical lines contained in the planes $y=1,\ldots,l$ and horizontal lines contained in the planes $y=0,\ldots,-k+1$. Let's fix the coordinates such that the polyhedral decomposition contains the vertices $((j-1)j/2,j,0)$, $((j-1)j/2,j,k(k+1)/2)$ for $j=1,\ldots,l$, and $(0,-i+1,(i-1)i/2)$ or $(l(l+1)/2,-i+1,(i-1)i/2)$ for $i=1,\ldots,k$. At the vertex $((j-1)j/2,j,0)$ or $((j-1)j/2,j,k(k+1)/2)$, $e_2$ is identified with $(j,1,0)$ and $-e_2$ is identified with $(-j+1,-1,0)$. At the vertex $(0,-i+1,(i-1)i/2)$ or $(l(l+1)/2,-i+1,(i-1)i/2)$, $e_2$ is identified with $(0,1,-i+1)$ and $-e_2$ is identified with $(0,-1,i)$.
The multivalued piecewise linear function $\phi$ is defined by requiring that, in the fan at each vertex, $\phi(e_2)=1$ and $\phi(-e_2)=\phi(\pm e_3)=\phi(\pm e_1)=0$. This defines the tropical manifold $(B,\mathcal{P},\phi)$ corresponding to a resolved generalized conifold, whose Legendre dual corresponds to a deformed orbifolded conifold.
One can directly verify the following.
\begin{prop}
The tropical manifolds given above are positive and simple in the sense of \cite{GS1}.
\end{prop}
\subsection{Orbifolded trivalent vertex} \label{sec:orb_vert}
In this subsection we introduce an orbifolded version of the trivalent vertex in the Gross-Siebert program. This will be used in Definition \ref{def:orbi-cfd}. See \cite[Example 2.2 and 2.3]{CM2} for the usual trivalent vertex.
Take a lattice triangle $T \subset \mathbb{R}^2$ with vertices $v_0, v_1, v_2 \in \mathbb{Z}^2$ (labeled clockwise, and the indices are taken in $\mathbb{Z}/3\mathbb{Z}$). We may simply take $v_0=0$. It gives orbifolded positive and negative vertices as follows. The negative vertex consists of the prism $T \times [0,1] \subset \mathbb{R}^3$ and the simplex
$$\mathrm{Conv}\{(v_0,0),(v_1,0),(v_2,0),(v_0,-1)\} \subset \mathbb{R}^3.$$
They are glued along the face $T \times \{0\}$ as shown in Figure \ref{fig:orb_vert}. The fan structure at $(v_0,0)$ is generated by $(u_1,0),(-u_3,0),e_3,-e_3$ (which can be orbifolded) where $u_i$ are the primitive vectors in the directions of $v_i-v_{i-1}$, and they are mapped to the tangent vectors of the polytopes at $(v_0,0)$ trivially. The fan structure at $(v_1,0)$ is generated by $(u_2,0),(-u_1,0),e_3,-e_3$ where $(u_2,0),(-u_1,0)$ are mapped to the tangent vectors of the polytopes trivially, while $e_3$ is mapped to $(0,0,1)$ and $-e_3$ is mapped to $(v_0-v_1,-1)$. Similarly the fan structure at $(v_2,0)$ is generated by $(u_3,0),(-u_2,0),e_3,-e_3$ where $(u_3,0),(-u_2,0)$ are mapped to the tangent vectors of the polytopes trivially, $e_3$ is mapped to $(0,0,1)$ and $-e_3$ is mapped to $(v_0-v_2,-1)$.
\begin{figure}
\caption{The negative (the left) and positive (the right) orbifolded trivalent vertices.}
\label{fig:orb_vert}
\end{figure}
The discriminant locus is a Y-shape which is the dual graph of $T$ embedded in $T\times\{0\}$. It is easy to verify the following.
\begin{prop}
$\mathbb{R}^2\times\{0\}$ is monodromy invariant. The monodromy around the leg dual to the edge $\{v_{i-1},v_i\}$ of $T$ sends $e_3$ to $e_3 + (v_i-v_{i-1})$. ($v_3:=v_0$.) In particular the multiplicity of this leg equals the affine length of $v_i-v_{i-1}$.
\end{prop}
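For instance, take the lattice triangle $T=\mathrm{Conv}\{(0,0),(0,1),(2,0)\}$ with $v_0=(0,0)$, $v_1=(0,1)$, $v_2=(2,0)$. The legs dual to the edges $\{v_0,v_1\}$ and $\{v_1,v_2\}$ are simple, since these edges are primitive, while the leg dual to $\{v_2,v_0\}$ has multiplicity $2$, the monodromy sending $e_3$ to $e_3+(-2,0)$.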
The piecewise linear function $\phi$ is given by Equation \eqref{eq:phi-gc}. This finishes the definition of an orbifolded negative vertex as a tropical manifold.
The orbifolded positive vertex associated to $T$ is the following. Let
$$R = \left(
\begin{array}{ll} 0 & -1\\
1 & 0
\end{array}
\right).$$
$-R\cdot (v_i-v_{i-1})$ are outward normal vectors of the triangle $T$. The polyhedral decomposition consists of the three polyhedral sets $\left(\mathbb{R}_{\geq 0} \cdot \{-R\cdot (v_i-v_{i-1}),-R\cdot (v_{i+1}-v_i)\}\right) \times [0,1]$ for $i=1,2,3$, and they are glued as shown on the right of Figure \ref{fig:orb_vert}. The fan at $(0,1)$ is generated by the vectors $-R \cdot u_1,-R \cdot u_2, -R \cdot u_3, -e_3$. The fan at $(0,0)$ is generated by the directions $-R \cdot u_1,-R \cdot u_3, -R \cdot (v_2-v_1) - d e_3, e_3$ for $d = \det (v_1, v_2)$ (two times the area of $T$), where $-R \cdot u_1,-R \cdot u_3, e_3$ are mapped trivially to the tangent vectors of the polytopes and $-R \cdot (v_2-v_1) - d e_3$ is mapped to $-R \cdot (v_2-v_1)$. The discriminant locus is the same Y-shape, which is the union of the intersections of the facets of the polytopes with $\mathbb{R}^2 \times \{0\}$.
The multivalued piecewise linear function restricts to the fan at each of the two vertices as $v_0,v_1,v_2$ (regarded as linear functions on the dual vector space) on the three maximal cones of the fan respectively. This gives an orbifolded positive vertex as a tropical manifold.
\begin{prop}
Take the leg in the direction $-R \cdot u_i$. The plane $\mathbb{R} \cdot \{(-R \cdot (v_i-v_{i-1}),0), (0,1)\}$ is monodromy invariant. Let $w \in \mathbb{Z}^2\times\{0\}$ be such that $\det (u_i,w) =1$. Then the monodromy sends $-R \cdot w$ to $-R \cdot w - h e_3$ where $h$ is the affine length of $v_i-v_{i-1}$. In particular the multiplicity is equal to the affine length of $v_i-v_{i-1}$.
The orbifolded positive and negative vertices defined above form a Legendre dual pair.
\end{prop}
\begin{proof}
It is obvious that the plane $\mathbb{R} \cdot \{(-R \cdot (v_i-v_{i-1}),0), (0,1)\}$ is monodromy invariant, and the monodromy sends $(-R \cdot (v_{i+1}-v_i),0)$ to $(-R \cdot (v_{i+1}-v_i),-d)$. Write $v_{i+1}-v_i = -a u_i + b w$ for some integers $a,b$. Also $v_i-v_{i-1}= h u_i$. We have $d = \det (v_{i+1}-v_i,-(v_i-v_{i-1})) = bh$. Then the monodromy sends $-R \cdot w = -R\cdot (v_{i+1}-v_i)/b - a R \cdot u_i / b$ to $-R\cdot (v_{i+1}-v_i)/b - d e_3/b - a R \cdot u_i / b = -R \cdot w - he_3$. The remaining statements are easy to check.
\end{proof}
The orbifolded positive vertex corresponds to the toric CY orbifold $\mathbb{C}^3/G$ for a finite group $G$, whose fan is the cone over $T \times \{1\} \subset \mathbb{R}^3$. Its Lagrangian fibration is again given by Equation \eqref{eq:fib_An}, where in this case $\mu_1$ is given by the first two components of the moment map (with respect to a fixed toric K\"ahler form). The orbifolded negative vertex corresponds to the mirror variety $\{(u,v,z_1,z_2) \in \mathbb{C}^2 \times (\mathbb{C}^\times)^2: uv=\sum_{i=1}^3 z_1^{a^{(i)}_1}z_2^{a^{(i)}_2}\}$ where $v_i=(a^{(i)}_1,a^{(i)}_2)$ are the vertices of the triangle $T$. One can cook up a piecewise smooth Lagrangian fibration using the construction of \cite{AAK} via a Moser argument.
\subsection{Local Gorenstein singularity} \label{sec:Gor}
A natural further generalization is given by the tropical manifolds corresponding to a toric Gorenstein singularity and its mirror. Orbifolded trivalent vertices and conifolds are included as special cases. SYZ mirror symmetry for geometric transitions associated to toric Gorenstein singularities was studied in \cite{L13}.
The construction is very similar to that of the last subsection, and so we will not go into detail. The lattice triangle $T$ in the last subsection is replaced by a lattice polygon $P \subset \mathbb{R}^2$ with vertices $v_0,\ldots,v_{m-1}$. Then the discriminant locus consists of legs in the directions $-R\cdot (v_i-v_{i-1})$ emanating from a single vertex. Here the integer $d$ is defined to be two times the area of $P$. We obtain the Gorenstein positive and negative vertices. See the example in Figure \ref{fig:Gor-sing}.
\begin{figure}
\caption{An example of affine structure for toric Gorenstein singularity. The right hand side (positive vertex) is the toric Gorenstein singularity and the left hand side is its mirror.}
\label{fig:Gor-sing}
\end{figure}
An interesting class of geometries is the hyperconifolds studied by physicists \cite{Davies1,Davies2}. They give instances in which a conifold transition (at several nodes simultaneously) is mirror not to a reverse conifold transition but to a hyperconifold transition. In this case $P$ is taken to be a parallelogram spanned by two vectors $v,w$. This includes orbifolded conifolds as special cases.
\begin{remark}
A Minkowski decomposition of the polytope $P$ gives a smoothing of the corresponding toric Gorenstein singularity \cite{altmann} (or a resolution of its mirror). However such a decomposition may not exist in general. (In contrast, a triangulation of $P$ into standard triangles, which gives a resolution of the toric Gorenstein singularity, always exists.)
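For instance, the unit square $\mathrm{Conv}\{(0,0),(1,0),(0,1),(1,1)\}$, whose associated toric Gorenstein singularity is the conifold $\{uv=zw\}$, decomposes as the Minkowski sum of the two unit segments $\mathrm{Conv}\{(0,0),(1,0)\}$ and $\mathrm{Conv}\{(0,0),(0,1)\}$, recovering the standard smoothing of the conifold.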
\end{remark}
\section{Rational elliptic surfaces and the `12' Property} \label{sec:surf}
In this section we study tropical surfaces corresponding to rational elliptic surfaces with $A_n$ singularities. A large part of this section is well-known to experts. It is a reformulation of the works of \cite{Symington,LS} using the terminologies of \cite{GS07,CM1}.
First we introduce an easy generalization of the well-known `12' Property for non-convex polygons. It is a special case of the `legal loops' in \cite{PR}. Then we construct the corresponding tropical surfaces and obtain mirror pairs of symplectic rational elliptic surfaces with singularities from dual reflexive polygons. We also generalize the construction to legal loops and relate it to elliptic surfaces. Using a classical result of Matsumoto \cite{Matsumoto}, this gives a topological proof of the generalized `12' Property.
The significant work of Gross-Hacking-Keel \cite{GHK} developed mirror symmetry for log Calabi-Yau surfaces whose anti-canonical divisor is a nodal cycle of holomorphic spheres. For rational elliptic surfaces the corresponding anti-canonical divisor is a smooth elliptic curve.
\subsection{The `12' Property}
The following is a special case of the `12' Property for legal loops in \cite[Section 9.1]{PR} when the winding number is $1$ and there is no clockwise move in the legal loop. (See Theorem \ref{thm:12-legal}.)
\begin{prop} \label{prop:12}
Let $v_1, \ldots, v_m \in \mathbb{Z}^2-\{0\}$ be distinct primitive vectors, labeled in the counterclockwise manner, such that $\mathbb{R}_{\geq 0}\cdot\{v_1,\ldots,v_m\}=\mathbb{R}^2$, and the simplex $\{0,v_i, v_{i+1}\}$ does not contain any interior lattice point for every $i \in \mathbb{Z}/m\mathbb{Z}$. Let $P$ be the union of all these simplices.
Then
$$ 2 \, \mathrm{Area}(P) + \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det (u_{i-1},u_{i}) = 12 $$
where $u_i$ is the primitive vector in the direction of $v_{i+1}-v_i$.
\end{prop}
\begin{proof}
First we consider the simplest case where $v_1=(1,0)$, $v_2=(0,1)$ and $v_3=(-1,-1)$. It is easy to check the equality directly. To get a better geometric understanding, we consider the complete fan generated by these vectors, which gives the toric manifold $\mathbb{P}^2$. Then $2 \, \mathrm{Area}(P) = \sum_{i=1}^m\det(v_i,v_{i+1})=m=3$, the number of irreducible toric divisors. Moreover $\det (u_{i-1},u_{i}) = c_1 \cdot D_i$ where $c_1 = \sum_{i=1}^m D_i$ and $D_i$ denotes the irreducible toric divisor corresponding to $v_i$. We have $c_1 \cdot D_i = 2+D_i\cdot D_i$ where $2$ comes from the Euler characteristic of each irreducible toric divisor (which is topologically a sphere). Hence the LHS of the above equals
$$ m + \sum_{i=1}^m (2+D_i^2) = 3m + \sum_{i=1}^m D_i^2 $$
which is equal to $12$ for $\mathbb{P}^2$.
In general, by further subdividing $P$, we can always assume that $\det (v_i, v_{i+1}) = 1$. Then the LHS always equals $3m + \sum_{i=1}^m D_i^2$ of the corresponding toric manifold. We want to prove that it is always $12$. Since every toric manifold can be obtained from $\mathbb{P}^2$ by successively blowing up and down at toric points, it suffices to prove that $3m + \sum_{i=1}^m D_i^2$ is invariant under blow-up. Blowing up at a toric point increases $m$ by $1$, adds a new exceptional curve of self-intersection $-1$, and decreases the self-intersection numbers of the two adjacent divisors by $1$. Thus the total effect is $+3-1-1-1=0$, and hence $3m + \sum_{i=1}^m D_i^2$ is always $12$.
\end{proof}
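For a concrete check of the formula, take $v_1=(1,0)$, $v_2=(0,1)$, $v_3=(-1,0)$, $v_4=(0,-1)$, so that $P$ is the reflexive square and the corresponding toric surface is $\mathbb{P}^1\times\mathbb{P}^1$. Then $2\,\mathrm{Area}(P)=4$ and each of the four determinants $\det(u_{i-1},u_i)$ equals $2$, giving $4+4\cdot 2=12$.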
\begin{defn}
Assume the notations in Proposition \ref{prop:12}. The integer $\det (u_{i-1},u_{i})$ is called the order of the vertex $v_i$.
\end{defn}
Suppose $P$ in Proposition \ref{prop:12} is a convex polygon. Regarding $P$ as the moment polygon of a toric orbifold, $\det (u_{i-1},u_{i})$ is the order of the isotropy group of the orbifold point corresponding to $v_i$.
\subsection{Affine rational elliptic surfaces with singularities}
\begin{prop}[Affine rational elliptic surfaces] \label{prop:not-conv}
Define $P$ (which is not necessarily convex) as in Proposition \ref{prop:12}. There exist two tropical surfaces with singularities $(\mathcal{A},\mathcal{P})$ and $(\mathcal{A}',\mathcal{P}')$ associated to $P$ having the following properties.
\begin{enumerate}
\item Both $\mathcal{A}$ and $\mathcal{A}'$ are discs whose boundaries are affine circles, with the affine length being the number of lattice points contained in the boundary of $P$. (An affine circle is a topological circle in $\mathcal{A}$ whose intersection with $\mathcal{A}-\Delta$ is an affine submanifold.)
\item The interior of $(\mathcal{A},\mathcal{P})$ contains an affine circle $C$ formed by some edges in $\mathcal{P}$. The interior of $(\mathcal{A}',\mathcal{P}')$ contains the polygon $P$ whose edges are affine line segments which form a part of $\mathcal{P}'$.
\item The affine circle $C \subset \mathcal{A}$ contains singular points (called `outer') which are in one-to-one correspondence with the edges of $P$. The boundary of the polygon $P \subset \mathcal{A}'$ contains singular points (called `inner') which are in one-to-one correspondence with the edges of $P$. The multiplicity of such a singular point is equal to the affine length of the corresponding edge of $P$.
\item The open disc bounded by $C \subset \mathcal{A}$ contains singular points (called `inner') which are in one-to-one correspondence with the corners of $P$. The complement of the polygon $P \subset \mathcal{A}'$ contains singular points (called `outer') which are in one-to-one correspondence with the corners of $P$. The multiplicity of such a singular point equals the determinant of the two primitive tangent vectors of $P$ at the corresponding corners (which is negative when the corresponding corner is non-convex).
\item The total number of singular points (counted with multiplicities) in $\mathcal{A}$ or $\mathcal{A}'$ equals $12$.
\end{enumerate}
\end{prop}
\begin{proof}
$(\mathcal{A},\mathcal{P})$ is defined as follows. Take the triangles with corners $0,v_i,v_{i+1}$, and the parallelograms with corners $0,v_i,v_{i-1}-v_i,v_{i-1}$. The triangles are glued to give the polygon $P$. For the parallelograms the sides $\{0,v_i\}$ are glued to $\{v_{i}-v_{i+1},v_{i}\}$.
The sides $\{v_i,v_{i-1}\}$ of the triangles are glued to the sides $\{0,v_{i-1}-v_i\}$ of the parallelograms. See the left of Figure \ref{fig:ell-non-Fano-eg} for an example.
The fan structure at the vertex $0$ of a triangle is trivial. At the vertex $v_i$ of a triangle, the fan is generated by $e_1,-e_1,e_2,-e_2$ and they are mapped to the primitive vector in the direction $v_{i-1}-v_i$, the primitive vector in the direction $v_{i+1}-v_i$, $v_i$ and $-v_i$ respectively. At the vertex $v_i$ of a parallelogram, the fan is generated by $e_1,-e_1,-e_2$ and they are mapped to the primitive vector in the direction $v_{i-1}-v_i$, the primitive vector in the direction $v_{i+1}-v_i$ and $-v_i$ respectively. This gives $(\mathcal{A},\mathcal{P})$. Each edge of a triangle contains a singular point. The singular points in the edges $\{0,v_i\}$ are called inner and those in $\{v_i,v_{i+1}\}$ are called outer. It is a direct computation that the multiplicities are as stated in (3) and (4).
The sides $\{v_i,v_{i-1}\}$ of the parallelograms form the boundary of $\mathcal{A}$ which is an affine circle. The affine length is the sum of affine lengths of $v_i-v_{i-1}$, which is equal to the number of lattice points in the boundary of $P$. This gives (1). For (2), $C$ is given by the union of the sides $\{v_i,v_{i+1}\}$ of the triangles. (5) follows from Proposition \ref{prop:12}.
$(\mathcal{A}',\mathcal{P}')$ is defined as follows. Take the polygon $P$ with corners $v_i$, and the parallelograms with corners $0,v_i,v_{i-1}-v_i,v_{i-1}$. For the parallelograms the sides $\{0,v_i\}$ are glued to $\{v_{i}-v_{i+1},v_{i}\}$.
The sides $\{v_i,v_{i-1}\}$ of $P$ are glued to the sides $\{0,v_{i-1}-v_i\}$ of the parallelograms. See the right of Figure \ref{fig:ell-non-Fano-eg} for an example.
The fan at the vertex $v_i$ of $P$ is generated by the three vectors: $v_i$, the primitive vector along $v_{i+1}-v_i$ and that along $v_{i-1}-v_i$. The fan at the vertex $v_i$ of a parallelogram is generated by $e_1,-e_1,-e_2$, and they are mapped to the primitive vector in the direction $v_{i-1}-v_i$, the primitive vector in the direction $v_{i+1}-v_i$ and $-v_i$ respectively. Each edge of $P$ contains a singular point which is called inner. Each edge $\{0,v_i\}$ of a parallelogram contains a singular point which is called outer. Similarly one can verify the properties (1)-(5) for $(\mathcal{A}',\mathcal{P}')$.
\end{proof}
$(\mathcal{A},\mathcal{P})$ and $(\mathcal{A}',\mathcal{P}')$ are affine manifolds with toric polyhedral decompositions. We call them affine rational elliptic surfaces. They violate the positivity condition in Definition 1.54 of \cite{GS1} if $P$ is not convex.
Indeed $\mathcal{A}$ and $\mathcal{A}'$ are related by inversion; the inner singular points of $\mathcal{A}$ correspond to the outer singular points of $\mathcal{A}'$, and vice versa.
Figure \ref{fig:ell-non-Fano-eg} shows two examples for Proposition \ref{prop:not-conv} where $P$ is not convex. One can glue local Lagrangian fibrations with monodromy $\left(\begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}\right)$ and $\left(\begin{array}{cc}
1 & -1 \\
0 & 1
\end{array}\right)$ to obtain a Lagrangian fibration on a symplectic $4$-manifold. Note that negative multiplicities can only occur for a Lagrangian fibration but not for a holomorphic fibration.
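For instance, taking $v_1=(1,0)$, $v_2=(0,1)$, $v_3=(-1,3)$ and $v_4=(0,-1)$ (the rays of a Hirzebruch surface), the polygon $P$ is non-convex at $v_2$: the orders of the vertices are $2,-1,2,5$, while $2\,\mathrm{Area}(P)=4$, so the total $4+(2-1+2+5)=12$ is unchanged. The vertex of order $-1$ gives a singular fiber glued in with monodromy
$\left(\begin{array}{cc}
1 & -1 \\
0 & 1
\end{array}\right)$
up to conjugation.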
\begin{figure}
\caption{Two examples of non-convex polygons. (A) is constructed from the fan polytope of a Hirzebruch surface.}
\label{fig:ell-non-Fano-eg}
\end{figure}
Note that the dual of a non-convex polygon in Proposition \ref{prop:12} is not a polygon. It is a legal loop which is discussed in Section \ref{sec:legal}. For this reason let's go back to the important case that $P$ is convex. By definition $P$ is a reflexive polygon which has exactly one interior lattice point. Its dual is again a reflexive polygon $\check{P}$. Without loss of generality we assume that $u_i$ for $i=1,\ldots,m$ are the vertices of $P$. Figure \ref{fig:ell-tor} lists all the affine surfaces constructed in this way. (For simplicity we only list $\mathcal{A}$ but not $\mathcal{A}'$.)
Each polygon in $(\mathcal{A}'_{\check{P}},\mathcal{P}'_{\check{P}})$ gives a piecewise linear function (unique up to addition of a linear function) supported on the dual fan at the corresponding vertex in $(\mathcal{A},\mathcal{P})$, and vice versa. This gives a multivalued piecewise linear function $\phi$ on $(\mathcal{A},\mathcal{P})$. Thus for a reflexive polygon we have a tropical manifold $(\mathcal{A},\mathcal{P},\phi)$.
\begin{figure}
\caption{Affine rational elliptic surfaces with $A_{k-1}$ singularities associated to reflexive polygons.}
\label{fig:ell-tor}
\end{figure}
By gluing in the Lagrangian fibrations on $A_{k-1}$ singularities, one obtains the following, which is a more precise version of Proposition \ref{thm:ell-intro}.
\begin{prop}[Symplectic rational elliptic surfaces] \label{thm:ell}
Each reflexive polygon $P$ corresponds to two symplectic rational elliptic surfaces with $A_k$ singularities $S$ and $S'$ together with Lagrangian fibrations. Both $S$ and $S'$ satisfy the following properties.
\begin{enumerate}
\item The base of each fibration is topologically a closed disc. The inverse image of the boundary is a symplectic torus.
\item The total number of interior singular fibers (counted with multiplicities) equals $12$.
\item The $A_k$ singularities can be divided into two groups, which are in one-to-one correspondence with non-standard corners of $P$ and non-standard simplices formed by $\{v_i,v_{i+1}\}$ respectively. One has $k = A-1$ where $A$ is the area of the parallelogram spanned by primitive tangent vectors at the corner in the first case, or the area of the parallelogram spanned by $v_i,v_{i+1}$ in the second case.
\item The singular fibers of the Lagrangian fibration can be divided into two groups, which are in one-to-one correspondence with corners of $P$ and simplices formed by $\{v_i,v_{i+1}\}$ respectively. The multiplicity equals $A$ defined above.
\end{enumerate}
For a pair of dual reflexive polygons $(P,\check{P})$, we have the mirror pairs $(S_P-D_{S_P},S_{\check{P}}'-D_{S_{\check{P}}'})$ and $(S_P'-D_{S_P'},S_{\check{P}}-D_{S_{\check{P}}})$ in the sense that the corresponding tropical geometries are Legendre dual to each other, where $D_{S_P},D_{S_{\check{P}}},D_{S_P'},D_{S_{\check{P}}'}$ denote the symplectic tori defined in (1).
\end{prop}
\begin{proof}
$T^*(\mathcal{A}-\partial \mathcal{A}-\Delta)/\Lambda^*$ gives a symplectic manifold, where $\Delta \subset \mathcal{A}$ is the collection of singular points and $\Lambda\subset T(\mathcal{A}-\partial \mathcal{A}-\Delta)$ is the local system induced from the integral affine structure. The Lagrangian fibration given in \eqref{eq:fib_An} has monodromy $\left(\begin{array}{cc}
1 & k \\
0 & 1
\end{array}\right)$
around the singular point $(0,0)$ in the base. Thus its base is affine isomorphic to an affine $A_{k-1}$ singularity. By using the action-angle coordinates, the Lagrangian fibration over a punctured neighborhood of $(0,0)$ is isomorphic to the fibration on $T^*(\mathcal{A}-\Delta)/\Lambda^*$ around an affine $A_{k-1}$ singularity in $\Delta$. Thus the local model can be glued in to give a Lagrangian fibration over $\mathcal{A}-\partial \mathcal{A}$.
$\partial \mathcal{A}$ is an affine circle with length $L$. We have the standard Lagrangian fibration $\pi_0:(\mathbb{C}^\times / L\mathbb{Z}) \times \mathbb{C} \to (\mathbb{R}/L\mathbb{Z}) \times \mathbb{R}_{\geq 0}$ (with the standard symplectic form). A neighborhood of $(\mathbb{R}/L\mathbb{Z}) \times \{0\}$ in the base of $\pi_0$ is affine isomorphic to a neighborhood of $\partial \mathcal{A} \subset \mathcal{A}$. Thus we can glue in a neighborhood of $\mathbb{S}^1 \times (\mathbb{R}/L\mathbb{Z})$ in $\pi_0$ and get a Lagrangian fibration over $\mathcal{A}$. This gives $S$. The construction for $S'$ is similar. The boundary divisor is the symplectic torus $\mathbb{S}^1 \times (\mathbb{R}/L\mathbb{Z})$. Property (2)-(4) follow easily from Proposition \ref{prop:not-conv}.
It is easy to see that $\mathcal{A}_P-\partial\mathcal{A}_P$ and $\mathcal{A}_{\check{P}}'-\partial\mathcal{A}_{\check{P}}'$ are Legendre dual to each other. Hence they form a mirror pair by the Gross-Siebert reconstruction program.
\end{proof}
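For instance, for $P=\mathrm{Conv}\{(1,0),(0,1),(-1,-1)\}$ (the fan polytope of $\mathbb{P}^2$), each of the three corners has $A=3$, contributing an $A_2$ singularity and a fiber of multiplicity $3$, while each of the three simplices $\mathrm{Conv}\{0,v_i,v_{i+1}\}$ is standard and contributes a simple fiber; the total count is $3\cdot 3+3\cdot 1=12$, in accordance with (2).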
\begin{remark}
We can take two such affine rational elliptic surfaces whose boundaries are affine circles of certain lengths, and glue them together (rescaling if necessary to match the boundary lengths) to obtain an affine sphere with singularities. By the construction of \cite{CM1} it gives a symplectic K3 surface with $A_k$ singularities. A pair of reflexive polygons $(P_1,P_2)$ and the dual $(\check{P}_1,\check{P}_2)$ produces a mirror pair of K3 surfaces with $A_k$ singularities.
In \cite{HZ1} a pair of dual reflexive polygons is used to construct an affine manifold corresponding to a Calabi-Yau manifold. In the above construction $P_1$ and $P_2$ are not necessarily dual to each other.
Such a splitting is related to the conjecture of Doran-Harder-Thompson \cite{DHT} on mirror construction by gluing the Landau-Ginzburg mirrors of the components in a Tyurin degeneration. See \cite[Section 7]{Kanazawa}.
\end{remark}
In Section \ref{sec:Schoen} we glue the product of an interval with an affine rational elliptic surface with another one to obtain a symplectic Calabi-Yau threefold with orbi-conifold singularities.
\subsection{Resolution and smoothing of rational elliptic surfaces with $A_k$ singularities} \label{sec:A_k-res}
Now we construct resolutions and smoothings of rational elliptic surfaces with $A_k$ singularities. We shall stick with the example of a mirror pair given on the left of Figure \ref{fig:ell-tor-smoothing}. All other rational elliptic surfaces associated to reflexive polygons have a similar construction.
\begin{figure}
\caption{An example of a mirror pair of elliptic surfaces with $A_n$ singularities. The top left and bottom left figures show the mirror pair (in this case the reflexive polygon is self-dual). The top right and bottom right figures are their smoothings respectively. The mirror of a smoothing of the top left figure is given by a resolution of the bottom left figure, which can be deduced by taking the Legendre dual.
}
\label{fig:ell-tor-smoothing}
\end{figure}
Consider the second graph of the bottom row of Figure \ref{fig:ell-tor}, and compare with the bottom left of Figure \ref{fig:ell-tor-smoothing}. In this example there are four inner and four outer singular points; in each group the multiplicities are $2,2,1,1$.
First we take a smoothing of all the inner singular points, meaning that we separate each inner singular point with multiplicity $k>1$ into $k$ points with multiplicity $1$, and the $k$ points lie in the original monodromy-invariant affine line. This is done by a certain refinement of the polyhedral decomposition and taking a suitable fan structure at each new vertex to match the monodromy. There are several choices, and we have fixed one choice in the bottom left of Figure \ref{fig:ell-tor-smoothing}.
Then we take the Legendre dual, which gives a resolution of all the inner singular points for the dual polygon. (In the example the polygon is self-dual.) Note that each inner singular point with multiplicity $k>1$ is separated into $k$ points with multiplicity $1$ lying on distinct parallel monodromy-invariant affine lines. See the top left of Figure \ref{fig:ell-tor-smoothing}.
Now we either resolve or smooth out all the outer singular points. Since resolutions can be obtained by taking Legendre dual of smoothings, we just show the smoothings here. For the bottom left of Figure \ref{fig:ell-tor-smoothing}, we refine the reflexive polygon by taking cone over the lattice points lying in the relative interior of each boundary edge, and take a suitable fan structure at all the boundary lattice points to obtain the bottom right of Figure \ref{fig:ell-tor-smoothing}. For the mirror shown in the top left of Figure \ref{fig:ell-tor-smoothing}, we refine the outer polygons if necessary and modify the fan structures at the vertices of the outer polygons to obtain the top right of Figure \ref{fig:ell-tor-smoothing}.
All other affine rational elliptic surfaces with $A_k$ singularities can be treated similarly. We summarize by the following proposition.
\begin{prop}
All rational elliptic surfaces with $A_k$ singularities $S$ and $S'$ associated with reflexive polygons can be resolved and smoothed out symplectically. For each pair of dual reflexive polygons $P$ and $\check{P}$, a resolution of the rational elliptic surface $S_P$ (or $S_P'$) is mirror to a smoothing of $S'_{\check{P}}$ (or $S_{\check{P}}$) in the sense of Gross-Siebert.
\end{prop}
\subsection{Legal loops} \label{sec:legal}
Note that the dual of a non-convex polygon defined in Proposition \ref{prop:12} may not be a polygon. A natural notion which is closed under duality is the legal loop defined in \cite{PR}. The `12' Property holds for a general legal loop; the proof given in \cite{PR} uses modular forms.
\begin{defn}
A legal loop is a finite sequence of primitive vectors $\{v_1,\ldots,v_m\} \subset \mathbb{Z}^2$ with $v_i \not= v_{i+1}$ such that each triangle $\mathrm{Conv}\{0,v_i,v_{i+1}\}$, for $i=1,\ldots,m$ where $v_{m+1}=v_1$, contains no interior lattice point.
A legal loop is said to be directed if $\det (v_i,v_{i+1})$ has the same sign for all $i$.
\end{defn}
The loop is formed by the union of edges connecting consecutive $v_i$. Figures \ref{fig:legal-loop} and \ref{fig:legal-loop-2} show some examples. For simplicity let's assume $(v_{i+1}-v_i) \nparallel (v_i - v_{i-1})$ for all $i$. It is easy to see the following.
\begin{prop}
The dual (formed by taking the outward primitive normal vectors of the facets $\mathrm{Conv}\{v_i,v_{i+1}\}$ of the triangles $\mathrm{Conv}\{0,v_i,v_{i+1}\}$) of a legal loop is again a legal loop.
\end{prop}
\begin{figure}
\caption{A dual pair of legal loops. The bottom figure shows a stitched affine manifold which is dual to the affine manifold given in the left of Figure \ref{fig:ell-non-Fano-eg1}.}
\label{fig:legal-loop}
\end{figure}
\begin{figure}
\caption{An example of a legal loop which is directed and has winding number $2$. It is associated with an integral affine manifold (with the origin removed).}
\label{fig:legal-loop-2}
\end{figure}
\begin{remark}
As shown in Figure \ref{fig:legal-loop}, the dual of a directed legal loop could be non-directed.
\end{remark}
The `12' Property for legal loops involves winding numbers. It is stated as follows.
\begin{theorem}[\cite{PR}] \label{thm:12-legal}
For a legal loop with winding number $w \in \mathbb{Z}$ around the origin,
$$ \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det(v_i,v_{i+1}) + \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det (u_{i-1},u_{i}) = 12 w $$
where $u_i$ is the outward primitive normal vector of the triangle $\{0,v_i,v_{i+1}\}$.
\end{theorem}
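For instance, traversing the loop $v_1=(1,0)$, $v_2=(0,1)$, $v_3=(-1,0)$, $v_4=(0,-1)$ twice (so that $m=8$ and the winding number is $w=2$) gives $\sum_{i}\det(v_i,v_{i+1})=8$ and $\sum_{i}\det(u_{i-1},u_i)=16$, so the total is $24=12\cdot 2$.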
Given a legal loop, we can associate it with a `folded' affine space.
\begin{defn}
A folded integral affine space is a smooth manifold $B_0$ (of dimension $n$) equipped with the following:
\begin{enumerate}
\item A codimension-1 submanifold $S \subset B_0$ (which may not be connected);
\item An integral affine structure on the closure of each connected component of $B_0-S$ (which is a manifold with boundary);
\item An isomorphism between the two integral affine structures along $S$ (which locally divides $B_0$ into two half-spaces).
\end{enumerate}
\end{defn}
Note that each of the integral affine structures at a point of $S$ in the above definition is the intersection of a lattice with a half-space, and the isomorphisms of lattices are required to preserve the half-spaces (and hence `fold' the half-spaces to each other).
To construct $B_0$, we do a similar construction as in the proof of Proposition \ref{prop:not-conv}: take the triangles with corners $0,v_i,v_{i+1}$, and the parallelograms with corners $0,v_i,v_{i-1}-v_i,v_{i-1}$. Take the \emph{disjoint union} of these polygons, and then identify the sides $\{0,v_i\}$ among the triangles, identify the sides $\{0,v_i\}$ and $\{v_{i}-v_{i+1},v_{i}\}$ of the parallelograms, and identify the sides $\{v_i,v_{i-1}\}$ of the triangles and the sides $\{0,v_{i-1}-v_i\}$ of the parallelograms. Finally we remove the points $0$ of the triangles and the mid-points of the sides $\{0,v_i\}$ and $\{0,v_{i-1}-v_i\}$ of the parallelograms. This gives $B_0$ as an oriented surface.
$S$ is defined to be the union of those sides $\{0,v_i\}$ of the triangles and $\{0,v_i\}$ of the parallelograms where $\det(v_{i-1},v_i)$ and $\det(v_i,v_{i+1})$ have different signs. In particular if the legal loop is directed, $S = \emptyset$ and we do not have any folding.
As in Proposition \ref{prop:not-conv}, we have a similar local structure at each vertex, namely edges incident to the vertex are identified with rays in $\mathbb{R}^2$, and faces incident to the vertex are identified with cones in $\mathbb{R}^2$. However they do not form a fan in general, namely some cones can intersect in their interiors.
The rays at the vertex $v_i$ of the triangles are generated by the four vectors: $v_i$, $-v_i$, the primitive vector along $v_{i+1}-v_i$ and that along $v_{i-1}-v_i$. At the vertex $v_i$ of a parallelogram, the vector $-v_i$ is mapped to $-e_2$; the primitive vector in the direction $v_{i-1}-v_i$ and that in the direction $v_{i+1}-v_i$ are mapped to $\mathrm{sgn}(\det(v_{i-1},v_i)) e_1$ and $-\mathrm{sgn}(\det(v_{i-1},v_i)) e_1$ respectively. This gives the structure of a folded integral affine space. For directed legal loops this gives usual integral affine surfaces. The proof of the following is similar to Proposition \ref{prop:not-conv} and hence omitted.
\begin{lemma}
The monodromy around $0$ is trivial. The multiplicities of the singular points on the sides $\{0,v_i\}$ and $\{0,v_{i-1}-v_i\}$ of the parallelograms are given by $\det(u_{i-1},u_i)$ and $\det(v_{i-1},v_i)$ respectively, where $u_i$ is the outward primitive normal vector of the triangle $\{0,v_i,v_{i+1}\}$.
\end{lemma}
\begin{prop}[Torus fibration associated with legal loop]
Each legal loop is associated with a smooth torus fibration over $\mathbb{S}^2$ with simple nodal singular fibers (with signs $\pm 1$). Each vertex $v_i$ corresponds to $|\det(u_{i-1},u_i)|$ singular fibers of $\mathrm{sign}=\mathrm{sgn}(\det(u_{i-1},u_i))$ where $u_i$ is the outward primitive normal vector of the triangle $\{0,v_i,v_{i+1}\}$. Each edge $\{v_i,v_{i+1}\}$ corresponds to $|\det(v_i,v_{i+1})|$ singular fibers of $\mathrm{sign}=\mathrm{sgn}(\det(v_i,v_{i+1}))$. These are all the singular fibers.
\end{prop}
\begin{proof}
As in Section \ref{sec:A_k-res}, we can resolve or smooth out the $A_k$ singularities such that all of them become simple. Denote the corresponding folded integral affine space by $(\tilde{B}_0,\tilde{S})$. Take the torus bundle $T^*U/\Lambda^*$ where $U$ is a component of $\tilde{B}_0-\tilde{S}$ and $\Lambda$ denotes the integral structure in the tangent bundle. The torus bundles for various $U$ are glued together according to the isomorphism along $\tilde{S}$. This gives a torus bundle over $\tilde{B}_0$. Then at each singular point with positive (or negative) sign, we glue in the fibration given by Equation \eqref{eq:fib_An'} (or the one obtained by multiplying the first component of Equation \eqref{eq:fib_An'} by $-1$). This matches the monodromies of the singular points. Now we have a torus fibration over an annulus. Finally since the monodromy around $0$ and the outer boundary is trivial, we can glue in a trivial fibration around $0$ and $\infty$ to obtain a fibration over $\mathbb{S}^2$. This gives the fibration with the specified properties.
\end{proof}
\begin{proof}[A topological proof of the `12' Property (Theorem \ref{thm:12-legal})]
By a classical result of Matsumoto \cite[Theorem 1.1']{Matsumoto}, the total space of a good torus fibration is topologically either $V_k \# l (\mathbb{S}^2 \times \mathbb{S}^2)$ or $\bar{V}_k \# l (\mathbb{S}^2 \times \mathbb{S}^2)$, where $V_k$ is an elliptic surface with Euler characteristic $12k$. The signature is $8k$ and $-8k$ respectively. On the other hand, the signature is equal to $(-2/3) (k_+ - k_-)$ where $k_+$ (or $k_-$) is the total number of positive (or negative) simple singular fibers. Thus we see that $\sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det(v_i,v_{i+1}) + \sum_{i\in\mathbb{Z}/m\mathbb{Z}} \det (u_{i-1},u_{i}) = 12 k$.
To prove that $k$ equals the winding number $w$, it suffices to see that $k=0$ when $w=0$ (since we can concatenate legal loops). When $w=0$, the torus fibration can be identified with the pullback of a fibration over a contractible space, and hence is homotopic to a trivial bundle. It follows that the signature is zero.
\end{proof}
\section{Orbi-conifold transitions of Schoen's Calabi-Yau mirror pairs} \label{sec:Schoen}
Schoen's Calabi-Yau threefold \cite{Schoen} is (a resolution of) a fiber product of two rational elliptic surfaces over $\mathbb{P}^1$.
In Section 9 of \cite{CM2}, Castano-Bernard and Matessi constructed conifold transitions of Schoen's Calabi-Yau from the fan polytopes of toric blow-ups of $\mathbb{P}^2$ (whose dual polytopes have standard vertices).
In this section we give a generalization of their construction. Namely we construct orbi-conifold transitions of Schoen's Calabi-Yau threefold and its mirror. (Orbi-conifold points naturally occur when the vertices of a polygon are not standard.) This gives compact examples of mirror pairs for orbi-conifolds.
Conceptually Schoen's Calabi-Yau degenerates to orbifolded conifolds as follows. Take two elliptic surfaces with $A$-type singularities, and take their fiber product over $\mathbb{P}^1$. Suppose there is a point in $\mathbb{P}^1$ such that both elliptic surfaces have singularities over that point. Correspondingly the fiber product has an orbifolded conifold singularity. In the following we use affine geometry to understand it more clearly. The Gross-Siebert mirror (obtained by taking discrete Legendre transform) has generalized conifold singularities.
\subsection{General formulation}
We have explained local generalized and orbifolded conifolds in Section \ref{sec:SYZ}. Below we define a notion which includes both singularities in a global geometry.
\begin{defn}
A complex (or symplectic) orbi-conifold is a topological space $X$ together with a discrete subset $S \subset X$ such that $X-S$ is a complex (or symplectic) orbifold, and for each $p \in S$ we have a homeomorphism of a neighborhood of $p$ to a neighborhood of the singular point in a local orbifolded or generalized conifold given in Section \ref{sec:SYZ}, and the homeomorphism is orbi-complex (or orbi-symplectomorphic) away from $p$. A topological orbi-conifold is defined similarly without the complex (or symplectic) structure.
\end{defn}
\begin{defn}
A symplectic orbi-conifold transition of a symplectic manifold $X$ is another symplectic manifold $Y$ which is a resolution of a topological orbi-conifold $X_0$, where $X_0$ is a degeneration of $X$. (The reverse process is also called an orbi-conifold transition.)
\end{defn}
\begin{remark}
A more natural definition should require $X_0$ to be a symplectic orbi-conifold. Below we only construct $X_0$ as a topological orbi-conifold.
To make it symplectic, we should use the Moser-argument technique of \cite{AAK} to construct a Lagrangian fibration on the local generalized conifold.
\end{remark}
Now we consider the affine setting. The following definition is an orbifold generalization of \cite[Definition 6.3]{CM2}.
\begin{defn} \label{def:orbi-cfd}
An affine orbi-conifold is a polarized tropical threefold $(B,\mathcal{P},\phi)$ with the following properties. 
\begin{enumerate}
\item The discriminant locus $\Delta$ is a graph with vertices of valency 3 or 4.
\item Each edge of $\Delta$ has monodromy given by the matrix $\left(\begin{array}{ccc}
1 & 0 & 0 \\
k & 1 & 0 \\
0 & 0 & 1 \\
\end{array}\right)$
in a suitable basis.
\item Every trivalent vertex of $\Delta$ has a neighborhood which is integral affine isomorphic to the orbifolded positive or negative vertex given in Section \ref{sec:orb_vert}.
\item Every 4-valent vertex of $\Delta$ has a neighborhood which is integral affine isomorphic to the affine generalized or orbifolded conifolds given in Section \ref{sec:aff-loc}.
\end{enumerate}
\end{defn}
Since the affine generalized conifold point and the affine orbifolded conifold point are locally dual to each other (see Figure \ref{fig:loc-orb-cfd}), we immediately have the following.
\begin{prop} \label{prop:gen-dual-orb}
Let $(B,\mathcal{P},\phi)$ be an affine orbi-conifold, and $(\check{B},\check{\mathcal{P}},\check{\phi})$ its Legendre dual. Then the affine generalized conifold points of $(B,\mathcal{P},\phi)$ are in one-to-one correspondence with the affine orbifolded conifold points of $(\check{B},\check{\mathcal{P}},\check{\phi})$, and vice versa.
\end{prop}
By gluing the local models of fibrations given in Sections \ref{sec:SYZ} and \ref{sec:loc-aff}, we obtain a global orbi-conifold. Combining with Proposition \ref{prop:gen-dual-orb}, we have the following.
\begin{prop}[Mirror symmetry of singularities] \label{prop:top-orb-cfd}
Given an affine orbi-conifold $(B,\mathcal{P},\phi)$, there exists a topological orbi-conifold $X$ and a torus fibration over $B$ with a section, whose discriminant locus and monodromies agree with those of $B$. Let $\check{X}$ be the topological orbi-conifold corresponding to the Legendre dual $(\check{B},\check{\mathcal{P}},\check{\phi})$. Then the generalized conifold points of $X$ are in one-to-one correspondence with the orbifolded conifold points of $\check{X}$, and vice versa.
\end{prop}
\begin{defn}
Given an affine orbi-conifold $(B,\mathcal{P},\phi)$, a smoothing of $(B,\mathcal{P},\phi)$ is a simple and positive polarized tropical threefold $(\tilde{B},\tilde{\mathcal{P}},\tilde{\phi})$ such that the corresponding manifold $\tilde{X}$ is a (topological) smoothing of the orbi-conifold $X$.
A resolution of $(B,\mathcal{P},\phi)$ is a simple and positive polarized tropical threefold such that its Legendre dual is a smoothing of the Legendre dual $(\check{B},\check{\mathcal{P}},\check{\phi})$.
\end{defn}
Given a simple and positive polarized tropical threefold $(B,\mathcal{P},\phi)$, \cite{CM2} constructed a symplectic manifold with a Lagrangian fibration to $B$. In particular an affine smoothing or resolution of an affine orbi-conifold corresponds to a symplectic manifold. This gives a way to produce orbi-conifold transitions via affine geometry.
\begin{remark}
Similar to Definition \ref{def:orbi-cfd}, we can allow vertices with higher valencies and define the notion of an affine variety with Gorenstein singularities by using the local models given in Section \ref{sec:Gor}. The same technique can be applied to construct more general geometric transitions.
\end{remark}
\subsection{Examples of compact CY orbi-conifolds}
The following is a more precise formulation of the first part of Theorem \ref{thm:main}.
\begin{theorem}
Each pair of reflexive polygons $(P_1,P_2)$ corresponds to a compact orbifolded conifold $O^{(P_1,P_2)}$ and a compact generalized conifold $G^{(P_1,P_2)}$. They have the following properties.
\begin{enumerate}
\item $O^{(P_1,P_2)}$ and $G^{(P_1,P_2)}$ contain orbifolded loci which are locally modeled by (an open subset of) $\mathbb{C}^2/\mathbb{Z}_k \times \mathbb{C}^\times \supset \{0\} \times \mathbb{C}^\times$.
\item The components of the orbifolded loci of $O^{(P_1,P_2)}$ (or $G^{(P_1,P_2)}$) are in one-to-one correspondence with the non-standard corners of $P_i$ and edges of $P_i$ with affine length greater than one ($i=1,2$). The multiplicity $k$ equals the index of the non-standard corner in the first case and equals the affine length of the edge in the second case.
\item $O^{(P_1,P_2)}$ is a degeneration of a Schoen's CY. $G^{(P_1,P_2)}$ is a blow-down of a mirror Schoen's CY.
\end{enumerate}
\end{theorem}
Note that $O^{(P_1,P_2)}$ is mirror to $G^{(\check{P}_1,\check{P}_2)}$ (rather than $G^{(P_1,P_2)}$) where $\check{P}$ denotes the dual polygon of $P$.
The construction goes as follows. The two reflexive polygons $P_1$ and $P_2$ are associated with two affine rational elliptic surfaces $\mathcal{A}_r'$, $r=1,2$, with $A_k$ singularities (see Proposition \ref{prop:not-conv} for the notations). Let $L_r$ be the affine length of the boundary of $\mathcal{A}_r'$ (equivalently, the number of boundary lattice points of $P_r$). We shall glue the affine threefolds $\mathcal{A}_1' \times (\mathbb{R}/L_2\mathbb{Z})$ and $(\mathbb{R}/L_1\mathbb{Z})\times \mathcal{A}_2'$ along their boundaries $\partial \mathcal{A}_1' \times (\mathbb{R}/L_2\mathbb{Z}) \cong (\mathbb{R}/L_1\mathbb{Z})\times \partial\mathcal{A}_2' \cong (\mathbb{R}/L_1\mathbb{Z}) \times (\mathbb{R}/L_2\mathbb{Z})$. Then we get an affine $\mathbb{S}^3$ with singularities, which corresponds to an orbifolded conifold degeneration of Schoen's Calabi-Yau threefold.
\begin{remark}
If we glue them in the other way, namely $\partial \mathcal{A}_1' \times (\mathbb{R}/\mathbb{Z}) \cong \partial\mathcal{A}_2'\times (\mathbb{R}/\mathbb{Z})$ with rescaling if necessary, then we get an affine $\mathbb{S}^2 \times \mathbb{S}^1$ which corresponds to $(K3 \textrm{ with } A_k \textrm{ singularities}) \times \mathbb{T}^{2}$.
\end{remark}
For instance, take $P_1$ to be the polygon given in the second graph of the last row of Figure \ref{fig:ell-tor}, and $P_2$ the second graph of the first row of Figure \ref{fig:ell-tor}. The gluing is given in Figure \ref{fig:SchoenCY-whole}.
\begin{figure}
\caption{An orbifolded conifold degeneration of Schoen's Calabi-Yau. Each thick dot represents an orbifolded conifold singularity.}
\label{fig:SchoenCY-whole}
\end{figure}
The polyhedral decomposition consists of the following polytopes. Let $P_r$ ($r=1,2$) consist of $m_r$ edges with affine lengths $l_k^{(r)}$ for $k=1,\ldots,m_r$ (the labeling of edges is in counterclockwise order). Then $\sum_{k=1}^{m_r} l_k^{(r)} = L_r$. We have the polytopes
\begin{equation} \label{eq:hatP1}
\hat{P}_j^{(1)}:=[0,l_j^{(2)}] \times P_1
\end{equation}
for $j=1,\ldots,m_2$,
\begin{equation} \label{eq:hatP2}
\hat{P}_i^{(2)}:=[0,l_i^{(1)}] \times P_2
\end{equation}
for $i=1,\ldots,m_1$, and the cubes
\begin{equation} \label{eq:C}
C_{ij}:=[0,l_i^{(1)}] \times [0,l_j^{(2)}] \times [0,1]
\end{equation}
for $i=1,\ldots,m_1,j=1,\ldots,m_2$.
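For instance (as a quick illustration of the cell count, not needed in the sequel), if both $P_1$ and $P_2$ are taken to be the reflexive triangle with vertices $(1,0),(0,1),(-1,-1)$ (the fan polytope of $\mathbb{P}^2$), so that $m_1=m_2=3$ and every edge has affine length $1$ (hence $L_1=L_2=3$), then the decomposition above consists of
\begin{equation*}
m_2=3 \ \text{ prisms } \hat{P}_j^{(1)}=[0,1]\times P_1, \qquad m_1=3 \ \text{ prisms } \hat{P}_i^{(2)}=[0,1]\times P_2, \qquad m_1m_2=9 \ \text{ unit cubes } C_{ij},
\end{equation*}
that is, fifteen three-dimensional cells in total.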
The facet $\{0\} \times P_r$ (for $r=1,2$) of $\hat{P}_k^{(r)}$ is glued with the facet $\{l_{k-1}^{(r)}\} \times P_r$ of $\hat{P}_{k-1}^{(r)}$ for $k\in\mathbb{Z}/m_r\mathbb{Z}$. The facet $\{0\} \times [0,l_j^{(2)}] \times [0,1]$ of $C_{ij}$ is glued with the facet $\{l_{i-1}^{(1)}\} \times [0,l_j^{(2)}] \times [0,1]$ of $C_{i-1,j}$ for $i\in\mathbb{Z}/m_1\mathbb{Z}$; the facet $[0,l_i^{(1)}] \times \{0\} \times [0,1]$ of $C_{ij}$ is glued with the facet $[0,l_i^{(1)}] \times \{l_{j-1}^{(2)}\} \times [0,1]$ of $C_{i,j-1}$ for $j\in\mathbb{Z}/m_2\mathbb{Z}$. The facet $[0,l_i^{(1)}] \times [0,l_j^{(2)}] \times \{0\}$ of $C_{ij}$ is glued with the facet $[0,l_j^{(2)}] \times \partial_i P_1$ of $\hat{P}_j^{(1)}$ (where $\partial_i P_1$ denotes the $i$-th edge of $P_1$), and the facet $[0,l_i^{(1)}] \times [0,l_j^{(2)}] \times \{1\}$ of $C_{ij}$ is glued with the facet $[0,l_i^{(1)}] \times \partial_j P_2$ of $\hat{P}_i^{(2)}$. The resulting total space is $\mathbb{S}^3$ topologically.
Now we describe the fan structure at each vertex. Every vertex of the polyhedral decomposition is either a vertex of $\hat{P}_j^{(1)}$ (for $j=1,\ldots,m_2$) or a vertex of $\hat{P}_i^{(2)}$ (for $i=1,\ldots,m_1$). Locally, each vertex looks like a product of the fan of $\mathbb{P}^1$ and the fan of a weighted projective plane. The fan structure at the vertex is then taken to be the product of the fan of $\mathbb{P}^1$ and the fan at the corresponding vertex of the affine rational elliptic surface $\mathcal{A}_r$.
The discriminant locus $\Delta$ is taken to be the union of the lines
\begin{equation} \label{eq:Delta_1}
[0,l_j^{(2)}]\times\{p\} \subset \partial \hat{P}_j^{(1)}
\end{equation}
for $j=1,\ldots,m_2$ and $p$ being the mid-point of an edge of $P_1$,
\begin{equation} \label{eq:Delta_2}
[0,l_i^{(1)}]\times\{p\} \subset \partial \hat{P}_i^{(2)}
\end{equation}
for $i=1,\ldots,m_1$ and $p$ being the mid-point of an edge of $P_2$,
\begin{equation} \label{eq:Delta_3}
\{0\} \times [0,l_j^{(2)}] \times \{1/2\} \subset \partial C_{ij}
\end{equation}
and
\begin{equation} \label{eq:Delta_4}
[0,l_i^{(1)}] \times \{0\} \times \{1/2\} \subset \partial C_{ij}
\end{equation}
for $i=1,\ldots,m_1$ and $j=1,\ldots,m_2$.
This gives $(B,\mathcal{P})$. One can verify the following directly; we omit the standard computations.
\begin{lemma}
The polyhedral decomposition and the fan structures at vertices define an affine structure on $B-\Delta$. The multiplicities of the discriminant loci defined by Equation \eqref{eq:Delta_1},\eqref{eq:Delta_2} are the lengths of the edges of $P_2$ and $P_1$ respectively; the multiplicities for Equation \eqref{eq:Delta_3}, \eqref{eq:Delta_4} are the orders of the $i$-th vertex of $P_1$ and the $j$-th vertex of $P_2$ respectively. (The $i$-th vertex is adjacent to the $(i-1)$-th and $i$-th edges.)
\end{lemma}
The discriminant loci defined by Equation \eqref{eq:Delta_1} and \eqref{eq:Delta_2} correspond to the inner singular points of the affine surfaces $\mathcal{A}_r'$. (One can smooth out the inner singular points so that all of them are simple (having multiplicity $1$), see the top left of Figure \ref{fig:ell-tor-smoothing}. Here we simply leave them as orbifolded singularities.)
The discriminant loci defined by Equation \eqref{eq:Delta_3} and \eqref{eq:Delta_4} correspond to the outer singular points of $\mathcal{A}_r'$. They intersect at $m_1\cdot m_2$ points, and these are the orbifolded conifold singularities. For instance in Figure \ref{fig:SchoenCY-whole} there are $12$ orbifolded conifold points, $6$ of which are not the usual conifold points.
By Proposition \ref{prop:top-orb-cfd}, we have a topological orbifolded conifold $O^{(P_1,P_2)}$ corresponding to $(B,\mathcal{P})$.
We have the following compact affine generalized conifold $(\check{B},\check{\mathcal{P}})$. It is glued from $\mathcal{A}^{\check{P}_1} \times (\mathbb{R}/\check{L}_2\mathbb{Z})$ and $(\mathbb{R}/\check{L}_1\mathbb{Z}) \times \mathcal{A}^{\check{P}_2}$, where $\check{P}_r$ is the dual polytope of $P_r$ and $\check{L}_r$ is the affine length of its boundary. See Figure \ref{fig:SchoenCYmir-whole-4sides}. (In this example $\check{P}_1$ is the same as $P_1$, and $\check{P}_2$ is given by the first graph of the first row of Figure \ref{fig:ell-tor}.)
\begin{figure}
\caption{The mirror obtained by taking Legendre dual. Each thick dot represents a generalized conifold singularity.}
\label{fig:SchoenCYmir-whole-4sides}
\end{figure}
$(\check{B},\check{\mathcal{P}})$ is obtained by gluing the prisms
\begin{equation} \label{eq:hatT1}
\hat{T}^{(1)}_{ij}:=[0,\check{l}_j^{(2)}] \times T_i^{(1)}
\end{equation}
and
\begin{equation} \label{eq:hatT2}
\hat{T}^{(2)}_{ij}:=[0,\check{l}_i^{(1)}] \times T_j^{(2)}
\end{equation}
along their boundaries, where $T_i^{(r)}$ are the triangles formed by the adjacent corners of $\check{P}_r$ for $r=1,2$. The facet $\{\check{l}_j^{(2)}\} \times T_i^{(1)}$ of $\hat{T}^{(1)}_{ij}$ is glued with the facet $\{0\} \times T_i^{(1)}$ of $\hat{T}^{(1)}_{i,j+1}$; the facet $\{\check{l}_i^{(1)}\} \times T_j^{(2)}$ of $\hat{T}^{(2)}_{ij}$ is glued with the facet $\{0\} \times T_j^{(2)}$ of $\hat{T}^{(2)}_{i+1,j}$. Denote the edges of the triangle $T_i^{(r)}$ by $\partial_0 T_i^{(r)}, \partial_1 T_i^{(r)},\partial_2 T_i^{(r)}$ corresponding to the $i$-th side of $\check{P}_r$ and its adjacent vertices (in counterclockwise order) respectively. The facet $[0,\check{l}_j^{(2)}] \times \partial_0 T_i^{(1)}$ of $\hat{T}^{(1)}_{ij}$ is glued with the facet $[0,\check{l}_i^{(1)}] \times \partial_0 T_j^{(2)}$ of $\hat{T}^{(2)}_{ij}$ by $\partial_0 T_i^{(1)} \cong [0,\check{l}_i^{(1)}]$ and $\partial_0 T_j^{(2)} \cong [0,\check{l}_j^{(2)}]$; the facet $[0,\check{l}_j^{(2)}] \times \partial_2 T_i^{(1)}$ of $\hat{T}^{(1)}_{ij}$ is glued with the facet $[0,\check{l}_j^{(2)}] \times \partial_1 T_{i+1}^{(1)}$ of $\hat{T}^{(1)}_{i+1,j}$; the facet $[0,\check{l}_i^{(1)}] \times \partial_2 T_j^{(2)}$ of $\hat{T}^{(2)}_{ij}$ is glued with the facet $[0,\check{l}_i^{(1)}] \times \partial_1 T_{j+1}^{(2)}$ of $\hat{T}^{(2)}_{i,j+1}$. The resulting space is again $\mathbb{S}^3$ topologically.
The dual discriminant locus $\check{\Delta}$ is given by the union of the lines
\begin{align}
[0,\check{l}_j^{(2)}]\times\{p\} \subset \partial \hat{T}_{ij}^{(1)}, &\textrm{ $p$ is the mid-point of $\partial_1T_i^{(1)}$ or $\partial_2T_i^{(1)}$}, \label{eq:Deltacheck_1} \\
[0,\check{l}_i^{(1)}]\times \{p\} \subset \partial \hat{T}_{ij}^{(2)}, &\textrm{ $p$ is the mid-point of $\partial_1T_j^{(2)}$ or $\partial_2T_j^{(2)}$}, \label{eq:Deltacheck_2} \\
[0,\check{l}_j^{(2)}]\times\{p\} \subset \partial \hat{T}_{ij}^{(1)}, &\textrm{ $p$ is the mid-point of $\partial_0T_i^{(1)}$}, \label{eq:Deltacheck_3} \\
[0,\check{l}_i^{(1)}]\times \{p\} \subset \partial \hat{T}_{ij}^{(2)}, &\textrm{ $p$ is the mid-point of $\partial_0T_j^{(2)}$}. \label{eq:Deltacheck_4}
\end{align}
The discriminant loci defined by Equation \eqref{eq:Deltacheck_1} and \eqref{eq:Deltacheck_2} correspond to the inner singular points of the affine surfaces $\mathcal{A}^{\check{P}_r}$.
The discriminant loci defined by Equation \eqref{eq:Deltacheck_3} and \eqref{eq:Deltacheck_4} correspond to the outer singular points of $\mathcal{A}^{\check{P}_r}$. They intersect at $m_1\cdot m_2$ points, and these are the generalized conifold singularities.
By Proposition \ref{prop:top-orb-cfd}, we have a corresponding topological generalized conifold $G^{(\check{P}_1,\check{P}_2)}$. From the definition of $(\check{B},\check{\mathcal{P}})$, one can write down a multivalued piecewise-linear function $\phi$ on $B$ such that $(\check{B},\check{\mathcal{P}})$ is the Legendre dual of $(B,\mathcal{P},\phi)$.
\begin{remark}
If we take the polytope $P$ to be the moment map polytope of a toric blow-up of $\mathbb{P}^2$ (or equivalently $\check{P}$ to be the fan polytope of a toric blow-up of $\mathbb{P}^2$), then $B$ is a usual affine conifold. This was constructed in \cite[Section 9.4]{CM2}.
\end{remark}
\subsection{Orbi-conifold transitions and mirror symmetry}
From the last subsection, we have a compact orbifolded conifold $O^{(P_1,P_2)}$ and a compact generalized conifold $G^{(P_1,P_2)}$ for each pair of reflexive polygons $(P_1,P_2)$. In this subsection we construct their smoothings and resolutions in affine geometry. By the Gross-Siebert reconstruction, they give a mirror pair of toric degenerations. By the discrete Legendre transform, a resolution of $O^{(P_1,P_2)}$ is mirror to a smoothing of $G^{(\check{P}_1,\check{P}_2)}$, and vice versa. This completes the proof of Theorem \ref{thm:main}.
Since a resolution of the affine orbifolded conifold corresponding to $(\check{P}_1,\check{P}_2)$ is Legendre dual to a smoothing of the affine generalized conifold corresponding to $(P_1,P_2)$, and vice versa, it suffices to construct smoothings for $O^{(P_1,P_2)}$ and $G^{(P_1,P_2)}$.
\subsubsection{Smoothing of $O^{(P_1,P_2)}$.} Recall that we have constructed a smoothing $\tilde{\mathcal{A}}_{P_r}'$ for the affine elliptic surfaces $\mathcal{A}_{P_r}'$, see the top right of Figure \ref{fig:ell-tor-smoothing}. A smoothing of $O^{(P_1,P_2)}$ is given by gluing the two products of an affine circle with $\tilde{\mathcal{A}}_{P_r}'$ for $r=1,2$. See Figure \ref{fig:SchoenCY-whole-4sides-smoothing}.
\begin{figure}
\caption{A smoothing of an orbifolded conifold, which gives a Schoen's CY. Its Legendre transform gives a resolution of the mirror generalized conifold (which is an orbi-conifold transition of the mirror Schoen's CY shown in Figure \ref{fig:SchoenCYmir-whole-4sides-smoothing}).}
\label{fig:SchoenCY-whole-4sides-smoothing}
\end{figure}
Let $K_r$, $r=1,2$, be the maximal multiplicity of the outer singular points of $\mathcal{A}_{P_r}'$. The polyhedral decomposition consists of $\hat{P}_j^{(1)}$ for $j=1,\ldots,m_2$ defined by Equation \eqref{eq:hatP1}, $\hat{P}_i^{(2)}$ for $i=1,\ldots,m_1$ defined by Equation \eqref{eq:hatP2}, and $(K_1+K_2)$ copies of $C_{ij}$ for $i=1,\ldots,m_1,j=1,\ldots,m_2$ defined by Equation \eqref{eq:C}. They are glued as shown in Figure \ref{fig:SchoenCY-whole-4sides-smoothing}. The fan structure at each vertex is taken to be the product of the fan of $\mathbb{P}^1$ and the fan at the corresponding vertex of $\tilde{\mathcal{A}}_{P_r}'$. The discriminant locus is shown in the figure and we omit the detailed descriptions. This gives a smoothing of $O^{(P_1,P_2)}$. Its Legendre dual gives a resolution of $G^{(P_1,P_2)}$.
\subsubsection{Smoothing of $G^{(P_1,P_2)}$.} First we glue the products of an affine circle with a smoothing $\tilde{\mathcal{A}}_{P_r}$ of $\mathcal{A}_{P_r}$ (recall the bottom right of Figure \ref{fig:ell-tor-smoothing}) for $r=1,2$. See Figure \ref{fig:SchoenCYmir-whole-4sides-smoothing}. It consists of prisms $[0,1] \times \tilde{T}_i^{(1)}$ and $[0,1] \times \tilde{T}_j^{(2)}$, where $\tilde{T}_i^{(r)}$ for $r=1,2$ are the standard triangles formed by (the origin and) the adjacent boundary lattice points of $\check{P}_r$.
The resulting space is an affine conifold (with negative nodes). It can be smoothed out by subdividing the rectangular facets containing the conifold singularities, and refining the polyhedral decomposition correspondingly. Since all these negative nodes are contained in an affine plane, they can be simultaneously smoothed out by \cite[Theorem 8.7]{CM2}. This gives a smoothing of $G^{(P_1,P_2)}$. Its Legendre dual gives a resolution of $O^{(P_1,P_2)}$.
As a result, we obtain orbi-conifold transitions of the Schoen's CY and its mirror.
\begin{figure}
\caption{A smoothing of the mirror. The figure has (negative) conifold singularities which can be easily smoothed out by subdividing the $18$ rectangles containing the conifold singularities. Its Legendre transform gives an orbi-conifold transition of Schoen's CY shown in Figure \ref{fig:SchoenCY-whole-4sides-smoothing}.}
\label{fig:SchoenCYmir-whole-4sides-smoothing}
\end{figure}
\begin{remark}
For convenience, in the above we first smooth out to a conifold, and then take a further smoothing of the conifold.
There are other choices of subdivision which do not go through a conifold. For instance, see the left of Figure \ref{fig:loc-orb-con-res}. In general orbi-conifold degenerations do not factor through ordinary conifold degenerations.
\end{remark}
\end{document} |
\begin{document}
\subjclass[2010]{17D99}
\keywords{Leibniz, triangular, nilradical, classification, Lie}
\doublespacing
\title{Solvable Leibniz Algebras with Triangular Nilradical}
\begin{abstract}
A classification exists for Lie algebras whose nilradical is the triangular Lie algebra $T(n)$. We extend this result to a classification of all solvable Leibniz algebras with nilradical $T(n)$. As an example we show the complete classification of all Leibniz algebras whose nilradical is $T(4)$.
\end{abstract}
\section{Introduction}\label{intro}
Leibniz algebras were defined by Loday in 1993 \cite{loday, loday2}. In recent years it has been a common theme to extend various results from Lie algebras to Leibniz algebras \cite{ao, ayupov, omirov}. Several authors have proven results on nilpotency and related concepts which can be used to help extend properties of Lie algebras to Leibniz algebras.
Specifically, variations of Engel's theorem for Leibniz algebras have been proven by different authors \cite{barnesengel, jacobsonleib} and Barnes has proven Levi's theorem for Leibniz algebras \cite{barneslevi}. Additionally, Barnes has shown that left-multiplication by any minimal ideal of a Leibniz algebra is either zero or anticommutative \cite{barnesleib}.
In an effort to classify Lie algebras, many authors place various restrictions on the nilradical \cite{cs, nw, rw, wld}. In \cite{tw}, Tremblay and Winternitz study solvable Lie algebras with triangular nilradical. It is the goal of this paper to extend these results to the Leibniz setting.
Recent work has been done on classification of certain classes of Leibniz algebras \cite{aor, chelsie-allison, heisen, clok, clok2}. In \cite{heisen}, a subset of the authors of this work found a complete classification of all Leibniz algebras whose nilradical is Heisenberg. In particular, this includes a classification of all Leibniz algebras whose nilradical is the triangular Lie algebra $T(3)$, since $T(3)$ is the three-dimensional Heisenberg algebra. For this reason our primary example will be Leibniz algebras whose nilradical is the triangular algebra $T(4)$.
\section{Preliminaries}
A Leibniz algebra, $L$, is a vector space over a field (which we will take to be $\mathbb{C}$ or $\mathbb{R}$) with a bilinear operation (which we will call multiplication) defined by $[x,y]$ which satisfies the Leibniz identity
\begin{equation}\label{Jacobi}
[x,[y,z]] = [[x,y],z] + [y,[x,z]]
\end{equation}
for all $x,y,z \in L$. In other words $L_x$, left-multiplication by $x$, is a derivation. Some authors choose to impose this property on $R_x$, right-multiplication by $x$, instead. Such an algebra is called a ``right'' Leibniz algebra, but we will consider only ``left'' Leibniz algebras (which satisfy \eqref{Jacobi}). $L$ is a Lie algebra if additionally $[x,y]=-[y,x]$.
The derived series of a Leibniz (Lie) algebra $L$ is defined by $L^{(1)}=[L,L]$, $L^{(n+1)}=[L^{(n)},L^{(n)}]$ for $n\ge 1$. $L$ is called solvable if $L^{(n)}=0$ for some $n$. The lower-central series of $L$ is defined by $L^2 = [L,L]$, $L^{n+1}=[L,L^n]$ for $n>1$. $L$ is called nilpotent if $L^n=0$ for some $n$. It should be noted that if $L$ is nilpotent, then $L$ must be solvable.
The nilradical of $L$ is defined to be the (unique) maximal nilpotent ideal of $L$, denoted by $\nr(L)$. It is a classical result that if $L$ is solvable, then $L^2 = [L,L] \subseteq \nr(L)$. From \cite{mubar}, we have that
\begin{equation}\label{dimension}
\dim (\nr(L)) \geq \frac{1}{2} \dim (L).
\end{equation}
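For instance, when $\nr(L)$ is the triangular algebra $T(n)$ introduced below, the bound \eqref{dimension} alone already gives
\begin{equation*}
\dim(L) \leq 2\dim(\nr(L)) = n(n-1),
\end{equation*}
so, e.g., $\dim(L)\leq 12$ when $\nr(L)=T(4)$; a sharper bound on the number of appended generators will follow from the nilindependence argument below.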
The triangular algebra $T(n)$ is the $\frac{1}{2}n(n-1)$-dimensional Lie algebra whose basis $\{N_{ik} \vert 1 \leq i < k \leq n \}$ consists of the strictly upper-triangular elementary matrices, with multiplication given by
\begin{equation}\label{tri}
[N_{ik},N_{ab}]=\delta_{ka}N_{ib} - \delta_{bi}N_{ak}.
\end{equation}
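For instance, in $T(4)$ one checks directly from \eqref{tri} that
\begin{equation*}
[N_{12},N_{23}]=N_{13},\qquad [N_{23},N_{34}]=N_{24},\qquad [N_{12},N_{24}]=[N_{13},N_{34}]=N_{14},
\end{equation*}
while every bracket involving $N_{14}$ vanishes. Hence $T(4)^2=\operatorname{span}\{N_{13},N_{24},N_{14}\}$, $T(4)^3=\operatorname{span}\{N_{14}\}$, and $T(4)^4=0$, so $T(4)$ is nilpotent (of class $3$).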
The left-annihilator of a Leibniz algebra $L$ is the ideal $\Ann_\ell(L) = \left\{x\in L\mid [x,y]=0\ \forall y\in L\right\}$. Note that the elements $[x,x]$ and $[x,y] + [y,x]$ are in $\Ann_\ell(L)$, for all $x,y\in L$, because of \eqref{Jacobi}.
An element $x$ in a Leibniz algebra $L$ is nilpotent if both $(L_x)^n = (R_x)^n = 0$ for some $n$. In other words, for all $y$ in $L$
\begin{equation*}
[x,\cdots[x,[x,y]]] = 0 = [[[y,x],x]\cdots,x].
\end{equation*}
A set of matrices $\{X^\alpha\}$ is called linearly nilindependent if no non-zero linear combination of them is nilpotent. In other words, if
\begin{equation*}
X = \displaystyle\sum_{\alpha=1}^f c_\alpha X^\alpha,
\end{equation*}
then $X^n=0$ for some $n$ implies that $c_\alpha=0$ for all $\alpha$. A set of elements of a Leibniz algebra $L$ is called linearly nilindependent if no non-zero linear combination of them is a nilpotent element of $L$.
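For example, the diagonal matrices $\operatorname{diag}(1,0)$ and $\operatorname{diag}(0,1)$ are linearly nilindependent, since
\begin{equation*}
c_1\operatorname{diag}(1,0)+c_2\operatorname{diag}(0,1)=\operatorname{diag}(c_1,c_2)
\end{equation*}
is nilpotent only when $c_1=c_2=0$. By contrast, the pair $\operatorname{diag}(1,0)$ and $E_{12}$ (the elementary matrix with a single $1$ in position $(1,2)$) is not linearly nilindependent, because the non-zero combination $E_{12}$ itself is nilpotent.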
\section{Classification}
Let $T(n)$ be the $\frac{1}{2}n(n-1)$-dimensional triangular (Lie) algebra over the field $F$ ($\mathbb{C}$ or $\mathbb{R}$) with basis $\{N_{ik} \vert 1 \leq i < k \leq n \}$ and products given by \eqref{tri}. We will extend $T(n)$ to a solvable Leibniz algebra $L$ of dimension $\frac{1}{2}n(n-1) + f$ by appending linearly nilindependent elements $\{X^1, \ldots, X^f\}$. In doing so, we will construct an indecomposable Leibniz algebra whose nilradical is $T(n)$.
We construct the vector $N = (N_{12} N_{23} \cdots N_{(n-1)n} N_{13} \cdots N_{(n-2)n} \cdots N_{1n})^T$ whose components are the basis elements of the nilradical ordered along consecutive off-diagonals ($N_{i(i+1)}$ in order, then $N_{i(i+2)}$ in order, \ldots). Then since $[L,L] \subseteq \nr(L)$, the brackets of $L$ are given by \eqref{tri} and
\begin{align*}
[X^\alpha,N_{ik}] &= A^\alpha_{ik,pq} N_{pq}\\
[N_{ik},X^\alpha] &= B^\alpha_{ik,pq} N_{pq}\\
[X^\alpha,X^\beta] &= \sigma^{\alpha\beta}_{pq} N_{pq},
\end{align*}
using Einstein summation notation over repeated indices (from here onward), where $1\leq \alpha, \beta \leq f$ and $A^\alpha_{ik,pq}, B^\alpha_{ik,pq}, \sigma^{\alpha \beta}_{pq} \in F$. Note that $A^\alpha, B^\alpha \in F^{r \times r}$, $N \in T(n)^{r \times 1}$ where $r = \frac{1}{2}n(n-1)$.
To classify Leibniz algebras $L(n,f)$ we must classify the matrices $A^\alpha$ and $B^\alpha$ and the constants $\sigma^{\alpha \beta}_{pq}$. The Jacobi identities for the triples $\{X^\alpha, N_{ik}, N_{ab}\}$, $\{N_{ik}, N_{ab}, X^\alpha\}$, $\{N_{ik}, X^\alpha, N_{ab}\}$ with $1 \leq \alpha \leq f$, $1 \leq i < k \leq n$, $1 \leq a < b \leq n$ give us respectively
\begin{align}
\tag{4a}\label{2.14}\delta_{ka} A^\alpha_{ib,pq} N_{pq} - \delta_{bi} A^\alpha_{ak,pq} N_{pq} + A^\alpha_{ik,bq} N_{aq} - A^\alpha_{ik,pa} N_{pb} - A^\alpha_{ab,kq} N_{iq} + A^\alpha_{ab,pi} N_{pk} &= 0\\
\tag{4b}\label{un2.14}\delta_{ka} B^\alpha_{ib,pq} N_{pq} - \delta_{bi} B^\alpha_{ak,pq} N_{pq} + B^\alpha_{ik,bq} N_{aq} - B^\alpha_{ik,pa} N_{pb} - B^\alpha_{ab,kq} N_{iq} + B^\alpha_{ab,pi} N_{pk} &= 0\\
\tag{4c}\label{2.14twist}\delta_{ka} A^\alpha_{ib,pq} N_{pq} - \delta_{bi} A^\alpha_{ak,pq} N_{pq} + A^\alpha_{ik,bq} N_{aq} - A^\alpha_{ik,pa} N_{pb} + B^\alpha_{ab,kq} N_{iq} - B^\alpha_{ab,pi} N_{pk} &= 0.
\end{align}
\addtocounter{equation}{1}
As a consequence of \eqref{2.14} and \eqref{2.14twist}, we also have that
$$A^\alpha_{ab,pi} N_{pk} - A^\alpha_{ab,kq} N_{iq} = - (B^\alpha_{ab,pi} N_{pk} - B^\alpha_{ab,kq} N_{iq}).$$
Thus $A^\alpha_{ab,pi} = - B^\alpha_{ab,pi}$ if $p<i$ and $A^\alpha_{ab,kq} = - B^\alpha_{ab,kq}$ if $k<q$. Therefore
\begin{equation}\label{2.14cor}
A^\alpha_{ab,ik} = - B^\alpha_{ab,ik} \quad\forall ab,ik \text{ except } ik=1n.
\end{equation}
Similarly the Jacobi identities for the triples $\{X^\alpha, X^\beta, N_{ab}\}$, $\{X^\alpha, N_{ik}, X^\beta\}$, $\{N_{ik}, X^\alpha, X^\beta\}$ with $1 \leq \alpha, \beta \leq f$ and $1 \leq i < k \leq n$ give us respectively
\begin{align}
\tag{6a}\label{2.15}[A^\alpha, A^\beta]_{ik,pq} N_{pq} =& \phantom{-(}\sigma^{\alpha \beta}_{kq} N_{iq} - \sigma^{\alpha \beta}_{pi} N_{pk}\\
\tag{6b}\label{2.15twist}[A^\alpha, B^\beta]_{ik,pq} N_{pq} =& - (\sigma^{\alpha \beta}_{kq} N_{iq} - \sigma^{\alpha \beta}_{pi} N_{pk})\\
\tag{6c}\label{un2.15}(B^\beta A^\alpha+B^\alpha B^\beta)_{ik,pq} N_{pq} =& \phantom{-(}\sigma^{\alpha \beta}_{kq} N_{iq} - \sigma^{\alpha \beta}_{pi} N_{pk}.
\end{align}
\addtocounter{equation}{1}
Unlike the Lie case, these give nontrivial relations even for $f=1$, or for $\alpha=\beta$ when $f>1$.
The Jacobi identity for the triple $\{X^\alpha, X^\beta, X^\gamma\}$ with $1 \leq \alpha, \beta, \gamma \leq f$ gives us
\begin{equation}\label{2.16}
\sigma^{\beta \gamma}_{pq} A^\alpha_{pq,ik} - \sigma^{\alpha \beta}_{pq} B^\gamma_{pq,ik} - \sigma^{\alpha \gamma}_{pq} A^\beta_{pq,ik} = 0.
\end{equation}
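For instance, setting $\alpha=\beta=\gamma$ in \eqref{2.16}, the first and third terms cancel and we are left with
\begin{equation*}
\sigma^{\alpha \alpha}_{pq}\, B^\alpha_{pq,ik} = 0,
\end{equation*}
which is already a nontrivial constraint when $f=1$.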
Again, we do not require $\alpha, \beta, \gamma$ to be distinct, which in particular gives nontrivial relations even when $f = 1$.
In order to simplify the matrices $A^\alpha$ and $B^\alpha$ and the constants $\sigma^{\alpha \beta}_{pq}$ we will make use of several transformations which leave the commutation relations \eqref{tri} invariant. Namely
\begin{itemize}
\item Redefining the elements of the extension:
\begin{equation}\label{2.17}
\begin{split}
&\hspace{23pt}X^\alpha \longrightarrow X^\alpha + \mu^\alpha_{pq} N_{pq}, \quad \mu^\alpha_{pq} \in F \\
\Rightarrow
&\begin{cases}
A^\alpha_{ik,ab} \longrightarrow A^\alpha_{ik,ab} + \delta_{kb}\mu^\alpha_{ai} - \delta_{ia}\mu^\alpha_{kb}\\
B^\alpha_{ik,ab} \longrightarrow B^\alpha_{ik,ab} - \delta_{kb}\mu^\alpha_{ai} + \delta_{ia}\mu^\alpha_{kb}.
\end{cases}
\end{split}
\end{equation}
\item Changing the basis of $\nr(L)$:
\begin{equation}\label{2.18}
\begin{split}
&\hspace{17pt}N \longrightarrow GN, \quad G \in GL(r,F) \\
\Rightarrow
&\begin{cases}
A^\alpha \longrightarrow GA^\alpha G^{-1}\\
B^\alpha \longrightarrow GB^\alpha G^{-1}.
\end{cases}
\end{split}
\end{equation}
\item Taking a linear combination of the elements $X^\alpha$.
\end{itemize}
The matrix $G$ must satisfy certain restrictions discussed later in order to preserve the commutation relations \eqref{tri} of $\nr(L)$.
Note that $N_{1n}$ is not used in \eqref{2.17} since it commutes with all the elements in $\nr(L)$. Since \eqref{2.16} gives relations between the matrices $A^\alpha$, $B^\alpha$ and the constants $\sigma^{\alpha \beta}_{pq}$, the unused constant $\mu^\alpha_{1n}$ can be used to scale the constants $\sigma^{\alpha \beta}_{pq}$ when $f \geq 2$:
\begin{equation}\label{2.19}
\begin{split}
X^\alpha &\longrightarrow X^\alpha + \mu^\alpha_{1n} N_{1n}, \quad \mu^\alpha_{1n} \in F \\
\Rightarrow \sigma^{\alpha \beta}_{pq} &\longrightarrow \sigma^{\alpha \beta}_{pq} + \mu^\beta_{1n}A^\alpha_{1n,pq} + \mu^\alpha_{1n}B^\beta_{1n,pq}.
\end{split}
\end{equation}
In this transformation $A^\alpha$ is invariant, so we will be able to simplify some constants $\sigma^{\alpha \beta}_{pq}$.
\section{Extensions of $T(4)$}
In this paper we will focus on triangular algebras $T(n)$ with $n\geq 4$ because:
\begin{itemize}
\item $T(2)$ is a one-dimensional algebra (hence by \eqref{dimension} $L$ has dimension at most 2) and the Jacobi identity gives that the only family of solvable non-Lie Leibniz algebras with one-dimensional nilradical is given by $L(c)= \langle a,b \rangle$ with $[a,a]=[a,b]=0$, $[b,a]=ca$, $[b,b]=a$ where $c \neq 0 \in F$.
\item $T(3)$ is a Heisenberg Lie algebra, and Leibniz algebras with Heisenberg nilradical were classified in \cite{heisen}.
\end{itemize}
Now we will consider the case when $n = 4$. In particular, $N = (N_{12} N_{23} N_{34} N_{13} N_{24} N_{14})^T$ and $r = 6$.
We can proceed by considering the relations in \eqref{2.14} and \eqref{un2.14} for $1 \leq i < k \leq 4$, $1 \leq a < b \leq 4$, $k \neq a$, $b \neq i$
\begin{align}
\tag{11a}\label{3.3}A^\alpha_{ik,bq} N_{aq} - A^\alpha_{ik,pa} N_{pb} - A^\alpha_{ab,kq} N_{iq} + A^\alpha_{ab,pi} N_{pk} &= 0\\
\tag{11b}\label{un3.3}B^\alpha_{ik,bq} N_{aq} - B^\alpha_{ik,pa} N_{pb} - B^\alpha_{ab,kq} N_{iq} + B^\alpha_{ab,pi} N_{pk} &= 0.
\end{align}
\addtocounter{equation}{1}
Similarly for $1 \leq i < k = a < b \leq 4$, we obtain
\begin{align}
\tag{12a}\label{3.2}A^\alpha_{ib,pq} N_{pq} + A^\alpha_{ik,bq} N_{kq} - A^\alpha_{ik,pk} N_{pb} - A^\alpha_{kb,kq} N_{iq} + A^\alpha_{kb,pi} N_{pk} &= 0\\
\tag{12b}\label{un3.2}B^\alpha_{ib,pq} N_{pq} + B^\alpha_{ik,bq} N_{kq} - B^\alpha_{ik,pk} N_{pb} - B^\alpha_{kb,kq} N_{iq} + B^\alpha_{kb,pi} N_{pk} &= 0.
\end{align}
\addtocounter{equation}{1}
Using the linear independence of the $N_{ik}$ with equation \eqref{3.3}, we can obtain relationships among the entries of the matrices $A^\alpha$, summarized in the matrix below. For example, letting $ik = 12$ and $ab = 34$, the coefficient of $N_{14}$ gives $A^\alpha_{12,13} + A^\alpha_{34,24} = 0$, the coefficient of $N_{13}$ gives $A^\alpha_{34,23} = 0$, and the coefficient of $N_{24}$ gives $A^\alpha_{12,23} = 0$.
Using equation \eqref{un3.3}, we obtain the same relationships among the entries of $B^\alpha$.
\begin{align*}
A^\alpha =&
\begin{pmatrix}
*&0&A^\alpha_{12,34}&A^\alpha_{12,13}&*&*\\
0&*&0&*&*&*\\
A^\alpha_{34,12}&0&*&*&-(A^\alpha_{12,13})&*\\
0&0&0&*&(A^\alpha_{12,34})&*\\
0&0&0&(A^\alpha_{34,12})&*&*\\
0&0&0&0&0&*
\end{pmatrix}
\end{align*}
Applying \eqref{3.2} and \eqref{un3.2} in the same way, $A^\alpha$ becomes:
\begin{align*}
A^\alpha =&
\begin{pmatrix}
A^\alpha_{12,12}&0&0&A^\alpha_{12,13}&*&*\\
&A^\alpha_{23,23}&0&A^\alpha_{23,13}&A^\alpha_{23,24}&*\\
&&A^\alpha_{34,34}&*&-(A^\alpha_{12,13})&*\\
&&&A^\alpha_{12,12} + A^\alpha_{23,23}&0&(A^\alpha_{23,24})\\
&&&&A^\alpha_{23,23} + A^\alpha_{34,34}&(A^\alpha_{23,13})\\
&&&&&A^\alpha_{12,12} + A^\alpha_{23, 23} + A^\alpha_{34,34}
\end{pmatrix}
\end{align*}
As before, $B^\alpha$ has the same form as above. This with \eqref{2.14cor} implies that $B^\alpha_{13,14} = B^\alpha_{23,24} = -A^\alpha_{23,24} = -A^\alpha_{13,14}$. Similarly, $B^\alpha_{24,14} = -A^\alpha_{24,14}$ and $B^\alpha_{14,14} = -A^\alpha_{14,14}$.
We can further simplify matrix $A^\alpha$ by performing the transformation specified in \eqref{2.17}. Choosing $\mu^\alpha_{12} = -A^\alpha_{23,13}$, $\mu^\alpha_{23} = A^\alpha_{12,13}$, $\mu^\alpha_{34} = A^\alpha_{23,24}$, $\mu^\alpha_{13} = -A^\alpha_{34,14}$, and $\mu^\alpha_{24} = -A^\alpha_{12,14}$ leads to
$$A^\alpha_{23,13} = A^\alpha_{12,13} = A^\alpha_{23,24} = A^\alpha_{34,14} = A^\alpha_{12,14} = 0.$$
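For instance, for the entry $A^\alpha_{12,13}$ (that is, $ik=12$ and $ab=13$), transformation \eqref{2.17} has $\delta_{kb}=\delta_{23}=0$ and $\delta_{ia}=\delta_{11}=1$, hence
\begin{equation*}
A^\alpha_{12,13} \longrightarrow A^\alpha_{12,13} - \mu^\alpha_{23},
\end{equation*}
so the choice $\mu^\alpha_{23}=A^\alpha_{12,13}$ indeed eliminates this entry; the remaining four entries are eliminated in the same way.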
This gives us the matrices
\begin{align*}
A^\alpha =&
\begin{pmatrix}
A^\alpha_{12,12}&0&0&0&A^\alpha_{12,24}&0\\
&A^\alpha_{23,23}&0&0&0&A^\alpha_{23,14}\\
&&A^\alpha_{34,34}&A^\alpha_{34,13}&0&0\\
&&&A^\alpha_{13,13}&0&0\\
&&&&A^\alpha_{24,24}&0\\
\phantom{-A^\alpha_{12,12}}&\phantom{-A^\alpha_{23,23}}&\phantom{-A^\alpha_{34,34}}&\phantom{-A^\alpha_{13,13}}&\phantom{-A^\alpha_{24,24}}&\,\,A^\alpha_{14,14}\,\,
\end{pmatrix}\\
B^\alpha =&
\begin{pmatrix}
-A^\alpha_{12,12}&0&0&0&-A^\alpha_{12,24}&B^\alpha_{12,14}\\
&-A^\alpha_{23,23}&0&0&0&B^\alpha_{23,14}\\
&&-A^\alpha_{34,34}&-A^\alpha_{34,13}&0&B^\alpha_{34,14}\\
&&&-A^\alpha_{13,13}&0&0\\
&&&&-A^\alpha_{24,24}&0\\
&&&&&-A^\alpha_{14,14}
\end{pmatrix}
\end{align*}
where $$A^\alpha_{ik,ik} = \sum^{k-1}_{p = i}A^\alpha_{p(p+1),p(p+1)}.$$
Note that $A^\alpha_{12,12}$, $A^\alpha_{23,23}$, and $A^\alpha_{34,34}$ cannot simultaneously equal 0, otherwise the nilradical would no longer be $T(4)$. The nilindependence among the $A^\alpha$ implies that $T(4)$ can have at most a three-dimensional extension, since there are three parameters on the diagonal.
The form of the matrices $A^\alpha$ implies that the only nonzero elements of $[A^\alpha, A^\beta]$ are
$$[A^\alpha, A^\beta]_{12,24},\quad [A^\alpha, A^\beta]_{23,14},\quad [A^\alpha, A^\beta]_{34,13}.$$
The linear independence of the $N_{ik}$ with equation \eqref{2.15} yields
\begin{align}
\label{Acommute}[A^\alpha,A^\beta] =& \ 0\\
\label{3.12}[X^\alpha,X^\beta] =& \ \sigma^{\alpha \beta}_{14} N_{14}.
\end{align}
For example, letting $ik = 12$ in equation \eqref{2.15}, the coefficient of $N_{24}$ implies that $[A^\alpha,A^\beta]_{12,24}=0$ and the coefficient of $N_{14}$ implies that $\sigma^{\alpha \beta}_{24} = [A^\alpha,A^\beta]_{12,14} = 0$. Henceforth we will abbreviate $\sigma^{\alpha \beta}_{14} = \sigma^{\alpha \beta}$ as all other $\sigma^{\alpha \beta}_{pq} = 0$. Since the $A^\alpha$ commute by \eqref{Acommute}, \eqref{2.15} and \eqref{2.15twist} imply
\begin{equation}\label{ABcommute}
[A^\alpha,B^\beta]=0.
\end{equation}
Considering \eqref{ABcommute} componentwise, we find that $B^\beta_{12,14}(A^\alpha_{14,14}-A^\alpha_{12,12}) = B^\beta_{34,14}(A^\alpha_{14,14}-A^\alpha_{34,34}) = (A^\beta_{23,14} + B^\beta_{23,14})(A^\alpha_{14,14}-A^\alpha_{23,23}) = 0$.
Furthermore, by \eqref{ABcommute} and \eqref{un2.15}, $0=B^\alpha A^\beta+B^\beta B^\alpha=(A^\beta+B^\beta)B^\alpha$. Componentwise, this tells us that
\begin{equation}\label{LindseyLemma}
0 = B^\beta_{12,14}A^\alpha_{14,14} = B^\beta_{34,14}A^\alpha_{14,14} = (A^\beta_{23,14} + B^\beta_{23,14})A^\alpha_{14,14}.
\end{equation}
In particular, if $B^\beta$ has a nontrivial off-diagonal entry, then
\begin{equation}\label{offdiag}
\begin{cases}
B^\beta_{12,14}\ne 0 &\Rightarrow A^\alpha_{12,12}=A^\alpha_{14,14}=0,\ \forall\alpha \\
B^\beta_{34,14}\ne 0 &\Rightarrow A^\alpha_{34,34}=A^\alpha_{14,14}=0,\ \forall\alpha \\
B^\beta_{23,14}\ne -A^\beta_{23,14} &\Rightarrow A^\alpha_{23,23}=A^\alpha_{14,14}=0,\ \forall\alpha.
\end{cases}
\end{equation}
The form of the matrices $A^\alpha$ and $B^\alpha$ imply that \eqref{2.16} becomes
\begin{equation}\label{3.13}
\sigma^{\alpha \beta}A^\gamma_{14,14}-\sigma^{\alpha \gamma}A^\beta_{14,14}+\sigma^{\beta \gamma}A^\alpha_{14,14}=0.
\end{equation}
By adding equations of form \eqref{3.13}, we get
\begin{equation}\label{Lieish}
(\sigma^{\beta \gamma} + \sigma^{\gamma \beta})A^\alpha_{14,14} = 0.
\end{equation}
For example $(\sigma^{12}+\sigma^{21})A^\alpha_{14,14}=0$ is obtained by adding \eqref{3.13} with $\beta=1, \gamma=2$ to \eqref{3.13} with $\beta=2, \gamma=1$.
On a related note, since $[X^\beta,X^\beta] \in \Ann_\ell(L)$, we have $0=[[X^\beta,X^\beta],X^\alpha]=[\sigma^{\beta \beta}N_{14},X^\alpha]= - \sigma^{\beta \beta} A^\alpha_{14,14} N_{14}$. Thus,
\begin{equation}\label{Lieish2}
\sigma^{\beta \beta} A^\alpha_{14,14} = 0.
\end{equation}
As a consequence of \eqref{2.14cor}, \eqref{3.12}, \eqref{LindseyLemma}, \eqref{Lieish}, and \eqref{Lieish2}, we have the following result.
\begin{lemma}\label{megalem}
If $A^\alpha_{14,14}\ne0$ for some $\alpha=1,\ldots,f$, then the Leibniz algebra is a Lie algebra.
\end{lemma}
The form of the matrices $A^\alpha$ and $B^\alpha$ imply that the transformation \eqref{2.19} becomes
\begin{equation}\label{3.14}
\sigma^{\alpha \beta} \longrightarrow \sigma^{\alpha \beta} + \mu^\beta_{14}A^\alpha_{14,14} - \mu^\alpha_{14}A^\beta_{14,14}.
\end{equation}
For $f=2$, suppose $A^\alpha_{14,14} \neq 0$ for some $\alpha$, and without loss of generality assume that $\alpha=1$. Then choosing $\mu^1_{14}=0$ and $\mu^2_{14}=-\frac{\sigma^{12}}{A^1_{14,14}}$, transformation \eqref{3.14} makes $\sigma^{12}=0$. For $f=3$, suppose $A^\alpha_{14,14} \neq 0$ for some $\alpha$, and without loss of generality assume that $\alpha=1$. Then choosing $\mu^1_{14}=0$ and $\mu^\beta_{14}=-\frac{\sigma^{1\beta}}{A^1_{14,14}}$ for $\beta=2,3$, transformation \eqref{3.14} makes $\sigma^{1\beta}=0$. By \eqref{3.13}, we also have $\sigma^{23}=0$. Combining these results for $f=2,3$ and employing \eqref{Lieish2} for $f=1$, we have:
\begin{equation}\label{3.15}
[X^\alpha,X^\beta] =
\begin{cases}
\sigma^{\alpha \beta}N_{14} & \text{if }A^1_{14,14}=\cdots=A^f_{14,14}=0 \\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Now we will utilize matrices $G$ to simplify the structure of matrices $A^\alpha$ and $B^\alpha$. Perform transformation \eqref{2.18}, given by $N\longrightarrow G_1 N$, with
$$G_1 = \begin{pmatrix}
1&0&0&0&g_1&g_0\\
&1&0&0&0&g_2 \\
& &1&g_3&0&g_4\\
& & &1&0&0\\
& & & &1&0\\
& & & & &1
\end{pmatrix}.$$
Observe that $G_1$ acts invariantly on the commutation relations \eqref{tri}. It does, however, transform matrices $A^\alpha$ and $B^\alpha$ by $A^\alpha\longrightarrow G_1 A^\alpha G_1^{-1}$ and $B^\alpha\longrightarrow G_1 B^\alpha G_1^{-1}$, respectively. In particular, $G_1$ transforms the following components
\begin{eqnarray*}
\begin{cases}
A^\alpha_{12,24} \longrightarrow A^\alpha_{12,24} + g_1(A^\alpha_{24,24}-A^\alpha_{12,12}) \\
A^\alpha_{23,14} \longrightarrow A^\alpha_{23,14} + g_2(A^\alpha_{14,14}-A^\alpha_{23,23}) \\
A^\alpha_{34,13} \longrightarrow A^\alpha_{34,13} + g_3(A^\alpha_{13,13}-A^\alpha_{34,34})
\end{cases}\\
\begin{cases}
B^\alpha_{12,14} \longrightarrow B^\alpha_{12,14} - g_0(A^\alpha_{14,14}-A^\alpha_{12,12}) \\
B^\alpha_{23,14} \longrightarrow B^\alpha_{23,14} - g_2(A^\alpha_{14,14}-A^\alpha_{23,23}) \\
B^\alpha_{34,14} \longrightarrow B^\alpha_{34,14} - g_4(A^\alpha_{14,14}-A^\alpha_{34,34}).
\end{cases}
\end{eqnarray*}
We use the matrix $G_1$ to eliminate some entries in $A^\alpha$ and $B^\alpha$. However, if $B^\alpha_{12,14}$ or $B^\alpha_{34,14}$ is not zero, then by \eqref{offdiag}, $G_1$ leaves that entry of $B^\alpha$ invariant. Hence, we can use $g_1$, $g_2$, and $g_3$ to eliminate at most 3 off-diagonal elements.
Note: In this step, we consider only transformations of the form $G_1=\begin{pmatrix} I&*\\0&I\end{pmatrix}$. Any other transformation which leaves \eqref{tri} and the form of $A^\alpha$ invariant, but eliminates $B^\alpha_{12,14}$ or $B^\alpha_{34,14}$,
would provide an isomorphism from
a non-Lie Leibniz algebra to a Lie algebra.
It is, however, possible to scale such entries, which we will consider in the next case.
Let the diagonal matrix $G_2$ be
$$G_2=\begin{pmatrix}
g_{12}&&&&&\\
&g_{23}&&&&\\
&&g_{34}&&&\\
&&&g_{12}g_{23}&&\\
&&&&g_{23}g_{34}&\\
&&&&&g_{12}g_{23}g_{34}
\end{pmatrix},\quad g_{ik}\in F\backslash\{0\}.$$
Note that $G_2$ preserves commutation relations \eqref{tri}. Our transformation of $\nr(L)$ will be defined by $G=G_2 G_1$. Observe that $G_2$ transforms $A^\alpha$ and $B^\alpha$ by $A^\alpha_{ik,ab}\longrightarrow \dfrac{g_{ik}}{g_{ab}}A^\alpha_{ik,ab}$ and $B^\alpha_{ik,ab}\longrightarrow \dfrac{g_{ik}}{g_{ab}}B^\alpha_{ik,ab}$, respectively, where $g_{ik}=(G_2)_{ik}=\prod\limits^{k-1}_{j=i}g_{j(j+1)}$. Hence, we can scale up to three nonzero off-diagonal elements to 1. In the case of Lie algebras, it may be necessary to scale to $\pm 1$ over $F=\mathbb{R}$. This issue does not arise in Leibniz algebras of non-Lie type, because we have greater restrictions on the number of nonzero entries.
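For instance, if $A^\alpha_{12,24}\ne 0$ for some fixed $\alpha$, then since $g_{24}=g_{23}g_{34}$ the transformation above gives
\begin{equation*}
A^\alpha_{12,24} \longrightarrow \frac{g_{12}}{g_{23}\,g_{34}}\,A^\alpha_{12,24},
\end{equation*}
so choosing $g_{23}=g_{34}=1$ and $g_{12}=1/A^\alpha_{12,24}$ rescales this single entry to $1$; when several off-diagonal entries must be normalized simultaneously, the remaining freedom may only allow $-1$ over $F=\mathbb{R}$, as noted above.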
\subsection{Leibniz algebras $L(4,1)$}
The Lie cases for $\nr(L)=T(4)$, with $f=1$, have been previously classified in \cite{tw}, so we will focus on the Leibniz algebras of non-Lie type. We know that all such algebras will have $A^1_{14,14}=0$ by Lemma \ref{megalem}, where $A=A^1$ will be of the form found in \cite{tw}. Since $A$ is not nilpotent and $A_{14,14}=0$, we know that there is at most 1 nonzero off-diagonal entry in $A$. Altogether, there are 10 classes of Leibniz algebras of non-Lie type. Of these, there are 2 two-dimensional families, and 8 one-dimensional families. The matrices $A$ and $B$ for these can be found in Table \ref{L41} in the appendix.
\subsection{Leibniz algebras $L(4,2)$}
Again, all Lie cases for $\nr(L)=T(4)$, with $f=2$, were classified in \cite{tw}. So, focusing on Leibniz algebras of non-Lie type, we again require that $A^1_{14,14}=A^2_{14,14}=0$ by Lemma \ref{megalem}. As a result, if $B^\alpha_{ik,14}\ne-A^\alpha_{ik,14}$, for any $\alpha$ or pair $ik$, then $A^\beta_{ik,ik}=A^\beta_{14,14}=0$ $\forall \beta$, which makes it impossible to have two linearly nilindependent matrices $A^1$ and $A^2$. Therefore, there is 1 four-dimensional family of Leibniz algebras of non-Lie type, and their matrices $A^1=-B^1$ and $A^2=-B^2$ can be found in Table \ref{L42} in the appendix.
\subsection{Leibniz algebras $L(4,3)$}
There is only one Lie algebra that is a three-dimensional extension of $T(4)$, again given in \cite{tw}. Since we cannot have three linearly nilindependent matrices of form $A^\alpha$ with $A^1_{14,14} = A^2_{14,14} = A^3_{14,14} = 0$, it is impossible to have a three-dimensional extension of $T(4)$ that is of non-Lie type, by Lemma \ref{megalem}.
\begin{theorem}\label{L4f}
Every Leibniz algebra $L(4,f)$ is either of Lie type, or is isomorphic to precisely one algebra represented in Table \ref{L41} or Table \ref{L42}.
\end{theorem}
\section{Solvable Leibniz algebras $L(n,f)$ for $n\ge4$}
We are now going to consider Leibniz algebras $L$ with $\nr(L)=T(n)$. Recall from \eqref{2.14cor}, $A^\alpha_{ik,ab} = -B^\alpha_{ik,ab}$ for all $ab\ne 1n$. We have the following result:
\begin{lemma}\label{Astructure}
Matrices $A^\alpha=(A^\alpha_{ik,ab})$ and $B^\alpha=(B^\alpha_{ik,ab})$, $1\le i<k\le n$, $1\le a < b\le n$ have the following properties.
\begin{enumerate}
\item[i.] $A^\alpha$ and $B^\alpha$ are upper-triangular.
\item[ii.] The only off-diagonal elements of $A^\alpha$ and $B^\alpha$ which may not be eliminated by an appropriate transformation on $X^\alpha$ are:
\begin{align*}\label{lem1.2A}
A^\alpha_{12,2n},\quad A^\alpha_{j(j+1),1n} \ (2\le j\le n-2),\quad A^\alpha_{(n-1)n,1(n-1)}, \\
B^\alpha_{12,2n},\quad B^\alpha_{j(j+1),1n} \ (1\le j\le n-1),\quad B^\alpha_{(n-1)n,1(n-1)}.
\end{align*}
\item[iii.] The diagonal elements $A^\alpha_{i(i+1),i(i+1)}$ and $B^\alpha_{i(i+1),i(i+1)}$, $1\le i\le n-1$, are free. The remaining diagonal elements of $A^\alpha$ and $B^\alpha$ satisfy
\begin{equation*}\label{lem1.3}
A^\alpha_{ik,ik} = \sum\limits^{k-1}_{j=i}A^\alpha_{j(j+1),j(j+1)},\quad B^\alpha_{ik,ik} = \sum\limits^{k-1}_{j=i}B^\alpha_{j(j+1),j(j+1)},\quad k>i+1.
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
The form of the matrices $A^\alpha$ given in Lemma \ref{Astructure} follows from \eqref{2.14} by induction on $n$, as shown in \cite{tw}. Similarly, properties $i.$ and $iii.$ follow for $B^\alpha$ from \eqref{un2.14}. Property $ii.$ for matrices $B^\alpha$ follows from \eqref{2.14cor}.
\end{proof}
As a consequence of property $iii.$ and \eqref{2.14cor}, we have that $A^\alpha_{1n,1n}=-B^\alpha_{1n,1n}$.
Lemma \ref{Astructure} asserts that $A^\alpha$ has $n-1$ free entries on the diagonal and nonzero off-diagonal entries possible in only $n-1$ locations, represented by $*$ in the matrix below. The form of $B^\alpha$ is the same, save for nonzero off-diagonal entries possible in two additional locations, represented by $b_1$ and $b_2$ in the matrix below.
$
\left(
\begin{array}{ccccc|ccccc}
* &&&& & && &*&b_1 \\
& * &&& & && &&* \\
&& \ddots && & && &&\vdots \\
&&& * & & && && * \\
&&&& * & && *&&b_2 \\
\hline
&&&& & *&&&&\\
&&&& & & \ddots &&& \\
&&&& & &&*&&\\
&&&& & &&&*&\phantom{\ddots}\\
\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&\phantom{\ddots}&*
\end{array}
\right)
$
\begin{lemma}
The maximum degree of an extension of $T(n)$ is $f=n-1$.
\end{lemma}
\begin{proof}
The proof follows from the fact that the $A^\alpha$ are nilindependent and that we have, at most, $n-1$ parameters along the diagonal.
\end{proof}
The form of the matrices $A^\alpha$ implies that the only nonzero elements of $[A^\alpha, A^\beta]$ are
$$[A^\alpha,A^\beta]_{12,2n},\quad [A^\alpha,A^\beta]_{j(j+1),1n} \ (2\le j\le n-2),\quad [A^\alpha,A^\beta]_{(n-1)n,1(n-1)}.$$
As before, the linear independence of the $N_{ik}$ with equations \eqref{2.15} and \eqref{2.15twist} yields
\begin{align}
\label{Acommute2}[A^\alpha,A^\beta] =& \ 0\\
\label{ABcommute2}[A^\alpha,B^\beta]=& \ 0\\
\nonumber [X^\alpha,X^\beta] =& \ \sigma^{\alpha \beta}_{1n} N_{1n}.
\end{align}
From Lemma \ref{Astructure}, \eqref{2.16} becomes
\begin{equation*}\label{3.13b}
\sigma^{\alpha \beta}A^\gamma_{1n,1n}-\sigma^{\alpha \gamma}A^\beta_{1n,1n}+\sigma^{\beta \gamma}A^\alpha_{1n,1n}=0.
\end{equation*}
\begin{lemma}\label{commutelem}
Matrices $A^\alpha$ and $B^\alpha$ can be transformed to a canonical form satisfying
\begin{align*}
[A^\alpha, A^\beta] &= 0 \\
[A^\alpha, B^\beta] &= 0 \\
[X^\alpha,X^\beta] &=
\begin{cases}
\sigma^{\alpha \beta}N_{1n} & \text{if }A^1_{1n,1n}=\cdots=A^f_{1n,1n}=0 \\
0 & \text{otherwise.}
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
The first two identities of this lemma have already been shown. The argument for the third identity is the same as \eqref{3.15}, using $(\sigma^{\beta \gamma} + \sigma^{\gamma \beta})A^\alpha_{1n,1n} = 0$ and $\sigma^{\beta \beta} A^\alpha_{1n,1n} = 0$.
\end{proof}
\subsection{Change of basis in $\nr(L(n,f))$}
As before, we perform the transformation \eqref{2.18} on $N$ by use of the matrix $G_1$, with $G_1$ defined to be all zeroes except $(G_1)_{ik,ik} = 1$, and $(G_1)_{12,1n}, (G_1)_{(n-1)n,1n}, (G_1)_{ab,ik} \in F$ where $\{ab,ik\}=\{12,2n\}, \{(n-1)n,1(n-1)\}, \{j(j+1),1n\}$ for $2 \leq j \leq n-2$. Therefore, the zero entries of $G_1$ are the off-diagonal entries of $B^\alpha$ that are guaranteed to be zero. Note that the transformation given by $G_1$ preserves commutation relation \eqref{tri}. Matrices $A^\alpha$ and $B^\alpha$ are transformed by conjugation with $G_1$, leaving the diagonal elements invariant and giving
\begin{eqnarray*}
\begin{split}
A^\alpha_{ik,ab} &\longrightarrow A^\alpha_{ik,ab} + (G_1)_{ik,ab}(A^\alpha_{ab,ab}-A^\alpha_{ik,ik}) \\
B^\alpha_{ik,ab} &\longrightarrow B^\alpha_{ik,ab} - (G_1)_{ik,ab}(A^\alpha_{ab,ab}-A^\alpha_{ik,ik}).
\end{split}
\end{eqnarray*}
By \eqref{ABcommute2}, we have that $0=[A^\alpha,B^\beta]_{12,1n} = B^\beta_{12,1n}(A^\alpha_{12,12}-A^\alpha_{1n,1n})$ and $0=[A^\alpha,B^\beta]_{(n-1)n,1n} = B^\beta_{(n-1)n,1n}(A^\alpha_{(n-1)n,(n-1)n}-A^\alpha_{1n,1n})$. Consequently, $G_1$ cannot eliminate the entries $B^\beta_{12,1n}$ and $B^\beta_{(n-1)n,1n}$.
\begin{lemma}\label{lemma4}
Matrices $A^\alpha$ and $B^\alpha$ will have a nonzero off-diagonal entry $A^\alpha_{ik,ab}$ or $B^\alpha_{ik,ab}$, respectively, only if
$$A^\beta_{ik,ik}=A^\beta_{ab,ab},\quad \forall\beta=1,\ldots,f.$$
\end{lemma}
\begin{proof}
For $A^\alpha$, the proof of Lemma \ref{lemma4} follows from \eqref{Acommute2}, as shown in \cite{tw}. Similarly for $B^\alpha$, the proof of Lemma \ref{lemma4} follows from considering \eqref{ABcommute2} componentwise. Namely, we find that $0=[A^\beta,B^\alpha]_{12,1n}=B^\alpha_{12,1n}(A^\beta_{1n,1n}-A^\beta_{12,12})$,
and similar identities for $B^\alpha_{12,2n}$ and $B^\alpha_{(n-1)n,1(n-1)}$.
The commutation relations for $[A^\alpha,B^\beta]_{j(j+1),1n}$ gives that $B^\alpha_{j(j+1),1n}(A^\beta_{1n,1n}-A^\beta_{j(j+1),j(j+1)})=-A^\beta_{j(j+1),1n}(A^\alpha_{1n,1n}-A^\alpha_{j(j+1),j(j+1)})=0$.
\end{proof}
Now consider a second transformation $G_2$ given by $N \longrightarrow G_2 N$, where $G_2$ is the diagonal matrix $(G_2)_{ik,ik} = g_{ik}$ and $g_{ik}=\prod\limits_{j=i}^{k-1} g_{j(j+1)}$. The matrices $A^\alpha$ and $B^\alpha$ are transformed by conjugation by $G_2$. Thus $A^\alpha_{ik,ab} \longrightarrow \dfrac{g_{ik}}{g_{ab}} A^\alpha_{ik,ab}$ and $B^\alpha_{ik,ab} \longrightarrow \dfrac{g_{ik}}{g_{ab}} B^\alpha_{ik,ab}$. This transformation can be used to scale up to $n-1$ nonzero off-diagonal elements to 1. For Lie algebras over the field $F=\mathbb{R}$ it may be that some entries have to be scaled to $-1$.
Since the only non-Lie cases occur when $A^\beta_{1n,1n}=0$ for all $\beta$, then $B^\alpha_{ik,1n} \ne -A^\alpha_{ik,1n}$ implies $A^\beta_{ik,ik}=0$ for all $\beta$ by Lemma \ref{lemma4}. Since the extensions $X^\alpha$ are required to be nilindependent, this imposes restrictions on the degree $f$ of non-Lie extensions of $T(n)$. In particular, this implies that in the maximal case $f=n-1$, $L(n,n-1)$ must be Lie. Such algebras have been classified in \cite{tw}, and in fact there is a unique algebra $L(n,n-1)$ where all $A^\alpha$ are diagonal and the $X^\alpha$ commute.
\begin{theorem}
Every solvable Leibniz algebra $L(n,f)$ with triangular nilradical $T(n)$ has dimension $d=\frac{1}{2}n(n-1) + f$ with $1 \leq f \leq n-1$. It can be written in a basis $\{X^\alpha, N_{ik}\}$ with $\alpha= 1, \ldots, f$, $1 \leq i < k \leq n$ satisfying
$$[N_{ik},N_{ab}]=\delta_{ka}N_{ib} - \delta_{bi}N_{ak}$$
$$[X^\alpha,N_{ik}]=A^\alpha_{ik,pq}N_{pq}$$
$$[N_{ik},X^\alpha]=B^\alpha_{ik,pq}N_{pq}$$
$$[X^\alpha,X^\beta]=\sigma^{\alpha \beta} N_{1n}.$$
Furthermore, the matrices $A^\alpha$ and $B^\alpha$ and the constants $\sigma^{\alpha \beta}$ satisfy:
\begin{enumerate}
\item[i.] The matrices $A^\alpha$ are linearly nilindependent and $A^\alpha$ and $B^\alpha$ have the form specified in Lemma \ref{Astructure}. $A^\alpha$ commutes with all these matrices, i.e. $[A^\alpha,A^\beta]=[A^\alpha,B^\beta]=0$.
\item[ii.] $B^\alpha_{ik,ab}=-A^\alpha_{ik,ab}$ for $ab \neq 1n$, and $B^\alpha_{1n,1n}=-A^\alpha_{1n,1n}$.
\item[iii.] $L$ is Lie and all $\sigma^{\alpha \beta}=0$, unless $A^\gamma_{1n,1n}=0$ for $\gamma=1, \ldots, f$.
\item[iv.] The remaining off-diagonal elements $A^\alpha_{ik,ab}$ and $B^\alpha_{ik,ab}$ are zero, unless $A^\beta_{ik,ik}=A^\beta_{ab,ab}$ for $\beta=1, \ldots, f$.
\item[v.] In the maximal case $f=n-1$, there is only one algebra, which is isomorphic to the Lie algebra with all $A^\alpha$ diagonal where all $X^\alpha$ commute.
\end{enumerate}
\end{theorem}
\noindent{\bf Acknowledgements.}
The authors gratefully acknowledge the support of the Departments of Mathematics at Spring Hill College, the University of Texas at Tyler, and West Virginia University: Institute of Technology.
\begin{table}[b]
\tiny
\caption{The Leibniz algebras $L(4,1)$ of non-Lie type}
\begin{tabular}{lllc}
\hline
No. & $A$ & $B$ & $\sigma$, parameters \\
\hline\hline
(1)
& $
\left(\begin{array}{cccccc}
\ 1 & & &&& \\
& a & &&& \\
& & -1-a &&&\\
& & & 1+a && \\
& & & &-1& \\
& & & && 0\\
\end{array}\right)$ & $B=-A$ &
$\begin{array}{l}\sigma^{11} \neq 0 \in F\\ a\ne-1 \in F\end{array}$ \\
\hline
(2)
& $
\left(\begin{array}{cccccc}
\ 1 & & &&& \\
& 0 & \ &&& \\
& & -1\ &&&\\
& & & 1 && \\
& & &&-1& \\
& & & && 0\\
\end{array}\right)$ &
$
\left(\begin{array}{cccccc}
\ -1 & & &&& \\
& 0 & \ &&& 1\\
& & 1\ &&&\\
& & & -1 && \\
& & & &1& \\
& & & && 0\\
\end{array}\right)$ &
$\sigma^{11}\in F$ \\
\hline
(3)
& $
\left(\begin{array}{cccccc}
\ 1 & & &&& \\
& -1 & &&& \\
& & 0 &&&\\
& & & 0 && \\
& & &&-1& \\
& & & && 0\\
\end{array}\right)$ &
$
\left(\begin{array}{cccccc}
\ -1 & & &&& \\
& 1& &&& \\
& & 0 &&&1\\
& & & 0&& \\
& & & &1& \\
& & & && 0\\
\end{array}\right)$ &
$\sigma^{11} \in F$ \\
\hline
(4) &
$
\left(\begin{array}{cccccc}
0 & & & & & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & & 0 & \\
& & & & & 0 \\
\end{array}\right)$ &
$
\left(\begin{array}{cccccc}
0 & & & & &1 \\
& -1 & & & & \\
& & 1 & & & \\
& & & -1 & & \\
& & & & 0 & \\
& & & & & 0 \\
\end{array}\right)$ & $\sigma^{11} \in F$ \\
\hline
(5) &
$
\left(\begin{array}{cccccc}
0 & & & & & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & &0 & \\
& & & & & 0 \\
\end{array}\right)$ & $B = -A$ & $\sigma^{11} \neq 0 \in F$ \\
\hline
(6) &
$
\left(\begin{array}{cccccc}
0 & & & & 1 & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & & 0 & \\
& & & & & 0 \\
\end{array}\right)$ & $B = -A$ & $\sigma^{11} \neq 0 \in F$ \\
\hline
(7) &
$
\left(\begin{array}{cccccc}
0 & & & & 1 & \\
& 1 & & & & \\
& & -1 & & & \\
& & & 1 & & \\
& & & &0 & \\
& & & & & 0 \\
\end{array}\right)$ &
$
\left(\begin{array}{cccccc}
0 & & & & -1 & 1 \\
& -1 & & & & \\
& & 1 & & & \\
& & & -1 & & \\
& & & & 0 & \\
& & & & & 0 \\
\end{array}\right)$ & $\sigma^{11} \in F$ \\
\hline
(8) &
$
\left(\begin{array}{cccccc}
1 & & & & & \\
& 0 & & & & 1 \\
& & -1 & & & \\
& & & 1 & & \\
& & & & -1 & \\
& & & & & 0 \\
\end{array}\right)$ &
$
\left(\begin{array}{cccccc}
-1 & & & & & \\
& 0 & & & & b \\
& & 1 & & & \\
& & & -1 & & \\
& & & & 1 & \\
& & & & & 0 \\
\end{array}\right)$ &
$\begin{array}{c}
\sigma^{11}, b \in F\\
\\
\sigma^{11}, b+1 \text{ not both zero}
\end{array}
$ \\
\hline
(9) &
$
\left(\begin{array}{cccccc}
1 & & & & & \\
& -1 & & & & \\
& & 0 & 1 & & \\
& & & 0 & & \\
& & & & -1 & \\
& & & & & 0 \\
\end{array}\right)$ & $B= -A$ & $\sigma^{11} \neq 0 \in F$ \\
\hline
(10) &
$
\left(\begin{array}{cccccc}
1 & & & & & \\
& -1 & & & & \\
& & 0 & 1 & & \\
& & & 0 & & \\
& & & & -1 & \\
& & & & & 0 \\
\end{array}\right)$ &
$
\left(\begin{array}{cccccc}
-1 & & & & & \\
& 1 & & & & \\
& & 0 & -1 & & 1 \\
& & & 0 & & \\
& & & & 1 & \\
& & & & & 0 \\
\end{array}\right)$ & $\sigma^{11} \in F$ \\
\hline\hline
\end{tabular}
\label{L41}
\end{table}
\begin{table}[th]
\tiny
\caption{The Leibniz algebras $L(4,2)$ of non-Lie type}
\begin{tabular}{lllc}
\hline
No. &
$A^1=-B^1$ & $A^2=-B^2$ & $\sigma$ \\
\hline\hline
(11) &
$\left(\begin{array}{cccccc}
1 & & & & & \\
& 0 & & & & \\
& & -1& & & \\
& & & 1 & & \\
& & & & -1& \\
& & & & & 0 \\
\end{array}\right)$
&
$\left(\begin{array}{cccccc}
0 & & & & & \\
& 1 & & & & \\
& & -1& & & \\
& & & 1 & & \\
& & & & 0 & \\
& & & & & 0 \\
\end{array}\right)$
&
$\begin{array}{c}
\sigma^{11}, \sigma^{22}, \sigma^{12}, \sigma^{21} \in F\\
\\
\sigma^{11}, \sigma^{22}, \sigma^{12}+\sigma^{21}\text{ not all zero}
\end{array}
$ \\
\hline\hline
\end{tabular}
\label{L42}
\end{table}
\begin{thebibliography}{ABCDE}
\bibitem{ao} Albeverio, Sh., A. Ayupov, B. Omirov. On nilpotent and simple Leibniz algebras, \emph{Comm. Algebra}. \textbf{33} (2005), no.1, 159-172.
\bibitem{aor} Albeverio, S., B. A. Omirov, I. S. Rakhimov. Classification of 4 dimensional nilpotent complex Leibniz algebras, \emph{Extracta Math.} \textbf{21} (2006), no. 3, 197-210.
\bibitem{ayupov} Ayupov, Sh. A., B. A. Omirov. On Leibniz algebras, \emph{Algebra and Operator Theory}. Proceedings of the Colloquium in Tashkent, Kluwer, (1998).
\bibitem{barneslevi} Barnes, D. On Levi's theorem for Leibniz algebras, \emph{Bull. Austral. Math. Soc.} \textbf{86} (2012), no. 2, 184 - 185.
\bibitem{barnesleib} Barnes, D. Some theorems on Leibniz algebras, \emph{Comm. Algebra}. \textbf{ 39} (2011), no. 7, 2463-2472.
\bibitem{barnesengel} Barnes, D. On Engel's Theorem for Leibniz Algebras, \emph{Comm. Algebra}. \textbf{40} (2012), no. 4, 1388-1389.
\bibitem{chelsie-allison} Batten-Ray, C., A. Hedges, E. Stitzinger. Classifying several classes of Leibniz algebras, \emph{Algebr. Represent. Theory}. To appear, arXiv:1301.6123.
\bibitem{heisen} Bosko, L., J. Dunbar, J.T. Hird, K. Stagg. Solvable Leibniz algebras with Heisenberg nilradical, \emph{Comm. Algebra}. To appear, arXiv:1307.8447.
\bibitem{jacobsonleib} Bosko, L., A. Hedges, J.T. Hird, N. Schwartz, K. Stagg. Jacobson's refinement of Engel's theorem for Leibniz algebras, \emph{Involve}. \textbf{4} (2011), no. 3, 293-296.
\bibitem{cs}Campoamor-Stursberg R. Solvable Lie algebras with an $\mathbb{N}$-graded nilradical of maximum nilpotency degree and their invariants, \emph{J. Phys A.} \textbf{43} (2010), no. 14, 145202, 18 pp.
\bibitem{clok} Casas J. M., M. Ladra, B. A. Omirov, I. A. Karimjanov. Classification of solvable Leibniz algebras with null-filiform nilradical, \emph{Linear Multilinear Algebra}. \textbf{61} (2013), no. 6, 758-774.
\bibitem{clok2} Casas J. M., M. Ladra, B. A. Omirov, I. A. Karimjanov. Classification of solvable Leibniz algebras with naturally graded filiform nil-radical, \emph{Linear Algebra Appl.} \textbf{438} (2013), no. 7, 2973-3000.
\bibitem{loday} Loday, J. Une version non commutative des algebres de Lie: les algebres de Leibniz, \emph{Enseign. Math. (2)} \textbf{39} (1993), no. 3-4, 269-293.
\bibitem{loday2} Loday, J., T. Pirashvili. Universal enveloping algebras of Leibniz algebras and (co)homology, \emph{Math. Ann.} \textbf{296} (1993), no. 1, 139-158.
\bibitem{mubar} Mubarakzianov, G. M. \emph{Some problems about solvable Lie algebras}. Izv. Vusov: Ser. Math. (1966), no. 1, 95-98 (Russian).
\bibitem{nw} Ndogmo, J. C., P. Winternitz. Solvable Lie algebras with abelian nilradicals, \emph{J. Phys. A.} \textbf{27} (1994), no. 8, 405-423.
\bibitem{omirov} Omirov, B. Conjugacy of Cartan subalgebras of complex finite dimensional Leibniz algebras, \emph{J. Algebra}. \textbf{302} (2006), no. 2, 887-896.
\bibitem{rw} Rubin, J. L., P. Winternitz. Solvable Lie algebras with Heisenberg ideals, \emph{J. Phys. A: Math. Gen.} \textbf{26} (1993), no. 5, 1123-1138.
\bibitem{tw} Tremblay P., P. Winternitz. Solvable Lie algebras with triangular nilradicals, \emph{J. Phys. A.} \textbf{31} (1998), no. 2, 789-806.
\bibitem{wld} Wang, Y., J. Lin, S. Deng. Solvable Lie algebras with quasifiliform nilradicals, \emph{Comm. Algebra}. \textbf{36} (2008), no. 11, 4052-4067.
\end{thebibliography}
\end{document} |
\begin{document}
\myref
\title{Multiobjective fractional variational calculus\\
in terms of a combined Caputo derivative\thanks{This work
was partially supported by the \emph{Portuguese Foundation for
Science and Technology} through the R\&D unit
\emph{Center for Research and Development in Mathematics and Applications}.}}
\author{Agnieszka B. Malinowska\thanks{Faculty of Computer Science,
Bia{\l}ystok University of Technology, 15-351 Bia\l ystok, Poland
({\tt [email protected]}).
Partially supported by BUT Grant S/WI/2/2011.}
\and Delfim F. M. Torres\thanks{Department of
Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal
({\tt [email protected]}). Partially supported by FCT (Portugal)
through the project UTAustin/MAT/0057/2008.}}
\maketitle
\begin{abstract}
The study of fractional variational problems in terms
of a combined fractional Caputo derivative is introduced.
Necessary optimality conditions of Euler--Lagrange type
for the basic, isoperimetric, and Lagrange variational problems
are proved, as well as transversality and sufficient optimality conditions.
This allows us to obtain necessary and sufficient Pareto optimality conditions
for multiobjective fractional variational problems.
\end{abstract}
\begin{keywords}
Variational analysis,
multiobjective optimization,
fractional Euler--Lagrange equations,
fractional derivatives in the sense of Caputo,
Pareto minimizers.
\end{keywords}
\begin{AMS}
49K05, 26A33.
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{A. B. MALINOWSKA AND D. F. M. TORRES}{MULTIOBJECTIVE FRACTIONAL VARIATIONAL CALCULUS}
\section{Introduction}
There is an increasing interest in the study of dynamic systems of fractional
(where ``fractional'' actually means ``non-integer'') order.
Extending derivatives and integrals from integer to non-integer order
has a firm and longstanding theoretical foundation.
Leibniz mentioned this concept in a letter to L'Hopital
over three hundred years ago. Following L'Hopital's and Leibniz's
first inquiries, fractional calculus was primarily a study reserved for
the best minds in mathematics. Fourier, Euler, and Laplace are among the
many that contributed to the development of fractional calculus.
Throughout history, many found, using their own notation and methodology,
definitions that fit the concept of a non-integer order integral or
derivative. The most famous of these definitions among mathematicians,
which have been popularized in the literature of fractional calculus, are those of
Riemann--Liouville and Gr\"{u}nwald--Letnikov. On the other hand,
the most intriguing and useful applications of fractional
derivatives and integrals in engineering and science have been found
in the past one hundred years.
In some cases, the mathematical notions evolved in order
to better meet the requirements of physical reality.
The best example of this is the Caputo fractional derivative,
nowadays the most popular fractional operator among engineers and applied scientists,
obtained by reformulating the ``classical'' definition of
Riemann--Liouville fractional derivative in order to make it possible
to solve fractional initial value problems
with standard initial conditions \cite{Dot:Porto}.
Particularly in the last decade of the XX century,
numerous applications and physical manifestations
of fractional calculus have been found.
Fractional differentiation is nowadays recognized as a good tool
in various different fields: physics, signal processing, fluid mechanics,
viscoelasticity, mathematical biology, electrochemistry, chemistry,
economics, engineering, and control theory (see, \textrm{e.g.},
\cite{debnath,Diethelm,ferreira,hilfer,hilfer2,kulish,magin,metzler,Oustaloup,TM,Zas}).
The fractional calculus of variations was born
in 1996 with the work of Riewe \cite{rie,rie97},
and is nowadays a subject under strong current research (see
\cite{Almeida2,MyID:154,MyID:152,MyID:179,El-Nabulsi:Torres07,
R:T:08,Frederico:Torres07,Frederico:Torres08,G:D:10,MyID:181} and
references therein). The fractional calculus of variations extends
the classical variational calculus by considering fractional
derivatives into the variational integrals to be extremized. This
occurs naturally in many problems of physics and mechanics, in order
to provide more accurate models of physical phenomena (see,
\textrm{e.g.}, \cite{R:A:D:10,Atanackovic}). The aims of this paper
are twofold. Firstly, we extend the notion of Caputo fractional
derivative to the fractional derivative ${^CD}^{\alpha,\beta}_{\gamma}$, which is a convex
combination of the left Caputo fractional derivative of order
$\alpha$ and the right Caputo fractional derivative of order
$\beta$. This idea goes back at least as far as \cite{klimek}, where
based on the Riemann--Liouville fractional derivatives, the
symmetric fractional derivative was introduced. Klimek's approach
\cite{klimek} is obtained in our framework as a particular case, by
choosing parameter $\gamma$ to be $1/2$. Although the symmetric
fractional derivative of Riemann--Liouville introduced by Klimek is
a useful tool in the description of some nonconservative models,
this type of differentiation does not seem suitable for all kinds
of variational problems. Indeed, the hypothesis that admissible
trajectories $y$ have continuous symmetric fractional derivatives
implies that $y(a)=y(b)=0$ (\textrm{cf.} \cite{Ross}). Therefore,
the advantage of the fractional Caputo-type derivative ${^CD}^{\alpha,\beta}_{\gamma}$ here
introduced lies in the fact that using this derivative we can
describe a more general class of variational problems. It is also
worth pointing out that the fractional derivative ${^CD}^{\alpha,\beta}_{\gamma}$ allows one to
generalize the results presented in \cite{AlmeidaTorres}. Our second
aim is to introduce the subject of multiobjective fractional
variational problems. This seems to be a completely open area of
research, never considered before in the literature. Knowing the
importance and relevance of multiobjective problems of the calculus
of variations in physics, engineering, and economics (see
\cite{Engwerda,BasiaDelfim,MyID:093,BorisBookII,Sophos,Wang} and the
references given there), and the usefulness of fractional
variational problems, we trust that the results now obtained will
open interesting possibilities for future research. The main results of
the paper provide methods for identifying Pareto optimal
solutions. Necessary and sufficient Pareto optimality conditions are
obtained by converting a multiobjective fractional variational
problem into a single or a family of single fractional variational
problems with an auxiliary scalar functional, possibly depending on
some parameters.
The paper is organized as follows. Section~\ref{sec2} presents some
preliminaries on fractional calculus, essentially to fix notations.
In Section~\ref{main:1} we introduce the fractional
derivative ${^CD}^{\alpha,\beta}_{\gamma}$ and provide the necessary concepts and results
needed in the sequel. Our main results are stated and proved in
Section~\ref{ssec:pro} and Section~\ref{main:2}. The fractional
variational problems under our consideration are formulated in terms
of the fractional derivative ${^CD}^{\alpha,\beta}_{\gamma}$. We discuss the fundamental
concepts of a variational calculus such as the Euler--Lagrange
equations for the elementary (Subsection~\ref{ssec:EL}),
isoperimetric (Subsection~\ref{sec:iso}), and Lagrange
(Subsection~\ref{sec:lagr}) problems, as well as sufficient optimality
(Subsection~\ref{ssec:suf}) and transversality
(Subsection~\ref{sec:tran}) conditions. Section~\ref{main:2} deals with
the multiobjective fractional variational calculus. We present Pareto
optimality conditions (Subsection~\ref{sec:par:op}) and examples
illustrating our results (Subsection~\ref{sec:ex}).
\section{Fractional calculus}
\label{sec2}
In this section we review the necessary definitions and facts from
fractional calculus. For more on the subject we refer the reader to
the books \cite{kilbas,Oldham,Podlubny,samko}.
Let $f\in L_1([a,b])$ and $0<\alpha<1$. We begin by defining the left and
the right Riemann--Liouville Fractional Integrals (RLFI) of order
$\alpha$ of a function $f$. The left RLFI is given by
\begin{equation}
\label{RLFI1}
{_aI_x^\alpha}f(x):=\frac{1}{\Gamma(\alpha)}\int_a^x
(x-t)^{\alpha-1}f(t)dt,\quad x\in[a,b],
\end{equation}
and the right RLFI by
\begin{equation}
\label{RLFI2}
{_xI_b^\alpha}f(x):=\frac{1}{\Gamma(\alpha)}\int_x^b(t-x)^{\alpha-1}
f(t)dt,\quad x\in[a,b],
\end{equation}
where $\Gamma(\cdot)$ represents the Gamma function, \textrm{i.e.},
$$
\Gamma(z):=\int_0^\infty t^{z-1} \mathrm{e}^{-t}\, dt.
$$
Moreover, ${_aI_x^0}f={_xI_b^0}f=f$ if $f$ is a continuous function.
The left and the right Riemann--Liouville derivatives are defined
with the help of the respective fractional integrals.
The left Riemann--Liouville Fractional Derivative (RLFD) is given by
\begin{equation}
\label{RLFD1}
{_aD_x^\alpha}f(x):=\frac{1}{\Gamma(1-\alpha)}\frac{d}{dx}\int_a^x
(x-t)^{-\alpha}f(t)dt=\frac{d}{dx}{_aI_x^{1-\alpha}}f(x),\quad
x\in[a,b],
\end{equation}
and the right RLFD by
\begin{equation}
\label{RLFD2}
{_xD_b^\alpha}f(x):=\frac{-1}{\Gamma(1-\alpha)}\frac{d}{dx}\int_x^b
(t-x)^{-\alpha}
f(t)dt=\left(-\frac{d}{dx}\right){_xI_b^{1-\alpha}}f(x),\quad
x\in[a,b].
\end{equation}
Let $f\in AC([a,b])$, where $AC([a,b])$ represents the space of
absolutely continuous functions on $[a,b]$. Then the Caputo fractional
derivatives are defined as follows:
the left Caputo Fractional Derivative (CFD) by
\begin{equation}
\label{CFD1}
{^C_aD_x^\alpha}f(x):=\frac{1}{\Gamma(1-\alpha)}\int_a^x
(x-t)^{-\alpha}\frac{d}{dt}f(t)dt={_aI_x^{1-\alpha}}\frac{d}{dx}f(x),
\quad x\in[a,b],
\end{equation}
and the right CFD by
\begin{equation}
\label{CFD2}
{^C_xD_b^\alpha}f(x):=\frac{-1}{\Gamma(1-\alpha)}\int_x^b
(t-x)^{-\alpha}
\frac{d}{dt}f(t)dt={_xI_b^{1-\alpha}}\left(-\frac{d}{dx}\right)f(x),
\quad x\in[a,b],
\end{equation}
where $\alpha$ is the order of the derivative.
The operators \eqref{RLFI1}--\eqref{CFD2} are obviously linear. We
now present the rule of fractional integration by parts for RLFI
(see, \textrm{e.g.}, \cite{int:partsRef}). Let $0<\alpha<1$, $p\geq1$,
$q \geq 1$, and $1/p+1/q\leq1+\alpha$. If $g\in L_p([a,b])$ and
$f\in L_q([a,b])$, then
\begin{equation}
\label{ipi} \int_{a}^{b} g(x){_aI_x^\alpha}f(x)dx =\int_a^b f(x){_x
I_b^\alpha} g(x)dx.
\end{equation}
In the discussion to follow, we will also need the following
formulae for fractional integrations by parts:
\begin{equation}
\label{ip}
\begin{split}
\int_{a}^{b} g(x) \, {^C_aD_x^\alpha}f(x)dx &=\left.f(x){_x
I_b^{1-\alpha}} g(x)\right|^{x=b}_{x=a}+\int_a^b f(x){_x D_b^\alpha}
g(x)dx,\\
\int_{a}^{b} g(x) \, {^C_xD_b^\alpha}f(x)dx &=\left.-f(x){_a
I_x^{1-\alpha}} g(x)\right|^{x=b}_{x=a}+\int_a^b f(x){_a D_x^\alpha}
g(x)dx.
\end{split}
\end{equation}
They can be derived using equations \eqref{RLFD1}--\eqref{CFD2},
the identity \eqref{ipi}, and performing integration by parts.
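For the reader's convenience, here is a formal computation for the first formula in \eqref{ip}: by \eqref{CFD1}, the identity \eqref{ipi} applied with order $1-\alpha$, and the classical integration by parts,
\begin{equation*}
\int_{a}^{b} g(x)\, {^C_aD_x^\alpha}f(x)dx
=\int_a^b g(x)\,{_aI_x^{1-\alpha}}\frac{d}{dx}f(x)dx
=\int_a^b \frac{d}{dx}f(x)\,{_xI_b^{1-\alpha}}g(x)dx
=\left.f(x){_xI_b^{1-\alpha}} g(x)\right|^{x=b}_{x=a}
+\int_a^b f(x){_xD_b^\alpha}g(x)dx,
\end{equation*}
where the last step uses \eqref{RLFD2}. The second formula in \eqref{ip} is obtained analogously.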
\section{The fractional operator $\mathbf{{^CD}^{\alpha,\beta}_{\gamma}}$}
\label{main:1}
Let $\alpha, \beta \in(0,1)$ and $\gamma\in [0,1]$. We define the
fractional derivative operator ${^CD}^{\alpha,\beta}_{\gamma}$ by
\begin{equation}
\label{op}
{^CD}^{\alpha,\beta}_{\gamma} :=\gamma \, {^C_aD_x^\alpha} + (1-\gamma) \, {^C_xD_b^\beta} \, ,
\end{equation}
which acts on $f\in AC([a,b])$ in the expected way:
\begin{equation*}
{^CD}^{\alpha,\beta}_{\gamma} f(x)=\gamma {^C_aD_x^\alpha} f(x) + (1-\gamma) {^C_xD_b^\beta} f(x).
\end{equation*}
Note that ${^CD^{\alpha,\beta}_{0}} f(x)={^C_xD_b^\beta} f(x)$
and ${^CD^{\alpha,\beta}_{1}} f(x)={^C_aD_x^\alpha} f(x)$.
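As a simple illustration, let $[a,b]=[0,1]$ and $f(x)=x$. Then ${^C_0D_x^\alpha}f(x)=\frac{x^{1-\alpha}}{\Gamma(2-\alpha)}$ and ${^C_xD_1^\beta}f(x)=-\frac{(1-x)^{1-\beta}}{\Gamma(2-\beta)}$, so that
\begin{equation*}
{^CD}^{\alpha,\beta}_{\gamma} f(x)
=\gamma\,\frac{x^{1-\alpha}}{\Gamma(2-\alpha)}
-(1-\gamma)\,\frac{(1-x)^{1-\beta}}{\Gamma(2-\beta)}.
\end{equation*}
In particular, for $\gamma=1$ and $\alpha\rightarrow 1$ one recovers $f'(x)=1$, while for $\gamma=0$ and $\beta\rightarrow 1$ one recovers $-f'(x)=-1$.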
The operator \eqref{op} is obviously linear. Using equations
\eqref{ip} it is easy to derive the following rule of fractional
integration by parts for ${^CD}^{\alpha,\beta}_{\gamma}$:
\begin{multline}
\label{byparts}
\int_{a}^{b} g(x) \, {^CD}^{\alpha,\beta}_{\gamma} f(x)dx
=\gamma\left[f(x){_xI_b^{1-\alpha}} g(x)\right]^{x=b}_{x=a}\\
+(1-\gamma)\left[-f(x){_aI_x^{1-\beta}} g(x)\right]^{x=b}_{x=a}
+\int_a^b f(x)\,D^{\beta,\alpha}_{\gamma} g(x)dx,
\end{multline}
where $D^{\beta,\alpha}_{\gamma}:=(1-\gamma){_aD_x^\beta}+\gamma {_xD_b^\alpha}$.
Let $N\in\mathbb{N}$ and $\mathbf{f}=[f_1,\ldots,f_N]:[a,b]\rightarrow
\mathbb{R}^{N}$ with $f_i\in AC([a,b])$, $i=1,\ldots,N$; $\alpha, \beta,
\gamma\in \mathbb{R}^N$ with $\alpha_i, \beta_i \in(0,1)$ and
$\gamma_i\in [0,1]$, $i=1,\ldots,N$. Then,
$$
{^CD}^{\alpha,\beta}_{\gamma} \mathbf{f}(x):= \left[{^CD}^{\alpha_1,\beta_1}_{\gamma_1} f_1(x),\ldots,{^CD}^{\alpha_N,\beta_N}_{\gamma_N} f_N(x)\right].
$$
Let $\mathbf{D}$ denote the set of all functions
$\mathbf{y}:[a,b]\rightarrow \mathbb{R}^{N}$ such that ${^CD}^{\alpha,\beta}_{\gamma} \mathbf{y}$
exists and is continuous on the interval $[a,b]$. We endow
$\mathbf{D}$ with the following norm:
\begin{equation*}
\|\mathbf{y}\|_{1,\infty}:=\max_{a\leq x \leq
b}\|\mathbf{y}(x)\|+\max_{a\leq x \leq
b}\|{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x)\|,
\end{equation*}
where $\|\cdot\|$ is a norm in $\mathbb{R}^N$.
Along the work we denote by $\partial_i K$, $i=1,\ldots,M$ ($M\in
\mathbb{N}$), the partial derivative of function $K:\mathbb{R}^M\rightarrow \mathbb{R}$ with
respect to its $i$th argument.
Let $\mathbf{\lambda} \in \mathbb{R}^r$. For simplicity of notation we
introduce the operators $[\mathbf{y}]^{\alpha,\beta}_{\gamma}$
and $_{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_{\gamma}$ by
\begin{equation*}
\begin{split}
[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) &:= \left(x,\mathbf{y}(x),{^CD}^{\alpha,\beta}_{\gamma}
\mathbf{y}(x)\right) \, , \\
_{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x) &:= \left(x,\mathbf{y}(x),{^CD}^{\alpha,\beta}_{\gamma}
\mathbf{y}(x),\lambda_1,\ldots,\lambda_r\right) \, .
\end{split}
\end{equation*}
\section{Calculus of variations via $\mathbf{{^CD}^{\alpha,\beta}_{\gamma}}$}
\label{ssec:pro}
We are concerned with the problem of finding the minimum of a functional
$\mathcal{J}: \mathcal{D}\rightarrow \mathbb{R}$, where $ \mathcal{D}$ is a
subset of $\mathbf{D}$. The formulation of a problem of the calculus
of variations requires two steps: the specification of a performance
criterion, and the statement of physical constraints
that should be satisfied. The performance criterion $\mathcal{J}$,
also called cost functional (or objective), must be specified
for evaluating quantitatively the performance of the system under study.
We consider the following cost:
\begin{equation*}
\mathcal{J}(\mathbf{y})=\int_a^b L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx,
\end{equation*}
where $x\in [a,b]$ is the independent variable;
$\mathbf{y}(x)\in \mathbb{R}^N$ is a real vector variable, the functions
$\mathbf{y}$ are generally called trajectories or curves; ${^CD}^{\alpha,\beta}_{\gamma}
\mathbf{y}(x)\in \mathbb{R}^N$ stands for the fractional derivative of
$\mathbf{y}(x)$; and $L\in C^1([a,b]\times\mathbb{R}^{2N};
\mathbb{R})$ is the Lagrangian.
Enforcing constraints in the optimization problem reduces the set of
candidate functions and leads to the following definition.
\begin{definition}
A trajectory $\mathbf{y}\in \mathbf{D}$ is said to be an admissible
trajectory, provided it satisfies all the constraints of the problem along
the interval $[a,b]$. The set of admissible trajectories is defined as
$\mathcal{D}:=\{\mathbf{y}\in \mathbf{D}:\mathbf{y} \mbox{ is admissible}\}$.
\end{definition}
We now define what is meant by a minimum of $\mathcal{J}$ on
$\mathcal{D}$.
\begin{definition}
A trajectory $\bar{\mathbf{y}}\in \mathcal{D}$ is said to be a local
minimizer for $\mathcal{J}$ on $\mathcal{D}$, if there exists
$\delta>0$ such that $\mathcal{J}(\bar{\mathbf{y}})\leq
\mathcal{J}(\mathbf{y})$ for all $\mathbf{y}\in \mathcal{D}$ with
$\|\mathbf{y}-\bar{\mathbf{y}}\|_{1,\infty}<\delta$.
\end{definition}
The concept of variation of a functional is central to the solution
of problems of the calculus of variations.
\begin{definition}
The first variation of $\mathcal{J}$ at $\mathbf{y}\in \mathbf{D}$
in the direction $\mathbf{h}\in \mathbf{D}$ is defined as
\begin{equation*}
\delta\mathcal{J}(\mathbf{y};\mathbf{h})
:=\lim_{\varepsilon\rightarrow 0}\frac{\mathcal{J}(\mathbf{y}
+\varepsilon\mathbf{h})-\mathcal{J}(\mathbf{y})}{\varepsilon}
=\left.\frac{\partial}{\partial\varepsilon}\mathcal{J}(\mathbf{y}
+\varepsilon\mathbf{h})\right|_{\varepsilon=0},
\end{equation*}
provided the limit exists.
\end{definition}
\begin{definition}
A direction $\mathbf{h}\in \mathbf{D}$, $\mathbf{h}\neq 0$, is said
to be an admissible variation for $\mathcal{J}$ at $\mathbf{y}\in \mathcal{D}$ if
\begin{itemize}
\item[(i)] $\delta\mathcal{J}(\mathbf{y};\mathbf{h})$ exists; and
\item[(ii)] $\mathbf{y}+\varepsilon\mathbf{h}\in \mathcal{D}$ for
all sufficiently small $\varepsilon$.
\end{itemize}
\end{definition}
The following well known result offers a necessary optimality
condition for the problems of the calculus of variations,
based on the concept of variations.
\begin{theorem}[see, \textrm{e.g.}, Proposition~5.5 of \cite{Trout}]
\label{nesse_con}
Let $\mathcal{J}$ be a functional defined on $\mathcal{D}$.
Suppose that $\mathbf{y}$ is a local minimizer for
$\mathcal{J}$ on $\mathcal{D}$. Then,
$\delta\mathcal{J}(\mathbf{y};\mathbf{h})=0$ for each admissible
variation $\mathbf{h}$ at $\mathbf{y}$.
\end{theorem}
\subsection{Elementary problem of the $\mathbf{{^CD}^{\alpha,\beta}_{\gamma}}$ fractional calculus of variations}
\label{ssec:EL}
Let us begin with the following fundamental problem:
\begin{equation}
\label{Funct1} \mathcal{J}(\mathbf{y})
=\int_a^b L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx \longrightarrow \min\\
\end{equation}
over all $\mathbf{y}\in \mathbf{D}$ satisfying the boundary
conditions
\begin{equation}
\label{boun2}
\mathbf{y}(a)=\mathbf{y}^{a}, \quad
\mathbf{y}(b)=\mathbf{y}^{b},
\end{equation}
where $\mathbf{y}^{a},\mathbf{y}^{b}\in \mathbb{R}^N$ are given.
The next theorem gives the fractional Euler--Lagrange equation for the
problem \eqref{Funct1}--\eqref{boun2}.
\begin{theorem}
\label{Theo E-L1}
Let $\mathbf{y}=(y_1,\ldots,y_N)$ be a local
minimizer to problem \eqref{Funct1}--\eqref{boun2}. Then,
$\mathbf{y}$ satisfies the following system of $N$ fractional Euler--Lagrange
equations:
\begin{equation}
\label{E-L1}
\partial_i L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}} \partial_{N+i} L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)=0,
\quad i=2,\ldots,N+1,
\end{equation}
for all $x\in[a,b]$.
\end{theorem}
\begin{proof}
Suppose that $\mathbf{y}$ is a local minimizer for $\mathcal{J}$.
Let $\mathbf{h}$ be an arbitrary admissible variation for problem
\eqref{Funct1}--\eqref{boun2}, \textrm{i.e.}, $h_i(a)=h_i(b)=0$,
$i=1,\ldots,N$. Based on the differentiability properties of $L$ and
Theorem~\ref{nesse_con}, a necessary condition for $\mathbf{y}$ to
be a local minimizer is given by
$$
\left.\frac{\partial}{\partial\varepsilon}\mathcal{J}(\mathbf{y}
+\varepsilon\mathbf{h})\right|_{\varepsilon=0} = 0 \, ,
$$
that is,
\begin{equation}
\label{eq:FT}
\int_a^b\Biggl[\sum_{i=2}^{N+1}\partial_i L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) h_{i-1}(x)
+\sum_{i=2}^{N+1}\partial_{N+i}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\,{^CD}^{\alpha_{i-1},\beta_{i-1}}_{\gamma_{i-1}} h_{i-1}(x)\Biggr]dx=0.
\end{equation}
Using formulae \eqref{byparts} of integration by parts in the
second term of the integrand function, we get
\begin{multline}
\label{eq:aft:IP}
\int_a^b\left[\sum_{i=2}^{N+1}\partial_i
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}}\partial_{N+i}
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\right]h_{i-1}(x)dx\\
+\gamma\left.\left[\sum_{i=2}^{N+1}h_{i-1}(x)\left({_xI_b^{1-\alpha_{i-1}}}
\partial_{N+i}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\right)\right]\right|^{x=b}_{x=a}\\
-(1-\gamma)\left.\left[\sum_{i=2}^{N+1}h_{i-1}(x)\left({_aI_x^{1-\beta_{i-1}}}
\partial_{N+i}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\right)\right]\right|^{x=b}_{x=a}=0.
\end{multline}
Since $h_i(a)=h_i(b)=0$, $i=1,\ldots,N$, by the fundamental lemma of
the calculus of variations we deduce that
\begin{equation*}
\partial_i L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}}\partial_{N+i}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)=0,
\quad i=2,\ldots,N+1,
\end{equation*}
for all $x\in[a,b]$.
\end{proof}
Observe that if $\alpha$ and $\beta$ go to $1$, then ${^C_aD_x^\alpha}$ can be
replaced with $\frac{d}{dx}$ and ${^C_xD_b^\beta}$ with $-\frac{d}{dx}$ (see
\cite{Podlubny}). Thus, if $\gamma=1$ or $\gamma=0$, then for
$\alpha,\beta \rightarrow 1$ we obtain a corresponding result in the
classical context of the calculus of variations (see, \textrm{e.g.},
\cite[Proposition~6.1]{Trout}).
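To illustrate Theorem~\ref{Theo E-L1}, consider the scalar case $N=1$ with $\gamma=1$ and the Lagrangian $L(x,y,v)=\frac{1}{2}\left(v-f(x)\right)^2$, where $f$ is a fixed continuous function. Since $\partial_2 L=0$, $\partial_3 L[y]^{\alpha,\beta}_{\gamma}(x)={^C_aD_x^\alpha}y(x)-f(x)$, and $D^{\beta,\alpha}_{1}={_xD_b^\alpha}$, the Euler--Lagrange equation \eqref{E-L1} reduces to
\begin{equation*}
{_xD_b^\alpha}\left({^C_aD_x^\alpha}y(x)-f(x)\right)=0, \quad x\in[a,b],
\end{equation*}
which is precisely the type of equation that appears in Example~\ref{ex:2} below.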
\subsection{Fractional transversality conditions}
\label{sec:tran}
Let $l\in \{1,\ldots,N\}$. Assume that $\mathbf{y}(a)=\mathbf{y}^a$,
$y_i(b)=y_i^b$, $i=1,\ldots,N$, $i\neq l$, but $y_l(b)$ is free.
Then, $h_l(b)$ is free and by equations \eqref{E-L1} and
\eqref{eq:aft:IP} we obtain
\begin{equation*}
\Bigl[\gamma {_xI_b^{1-\alpha_l}}
\partial_{N+l+1}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
\left.-(1-\gamma) {_aI_x^{1-\beta_l}}\partial_{N+1+l}
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\Bigr]\right|_{x=b}=0,
\end{equation*}
where $\alpha, \beta, \gamma \in \mathbb{R}$. Let us consider now
the case when $\mathbf{y}(a)=\mathbf{y}^a$, $y_i(b)=y_i^b$,
$i=1,\ldots,N$, $i\neq l$, and $y_l(b)$ is free but restricted by
a terminal condition $y_l(b)\leq y^{b}_l$. Then, in the optimal
solution $\mathbf{y}$, we have two possible types of outcome:
$y_l(b)< y^{b}_l$ or $y_l(b)= y^{b}_l$. If $y_l(b)< y^{b}_l$, then
there are admissible neighboring paths with terminal value both
above and below $y_l(b)$, so that $h_l(b)$ can take either sign.
Therefore, the transversality condition is
\begin{equation}
\label{tran:1}
\Bigl[\gamma {_xI_b^{1-\alpha_l}}
\partial_{N+l+1}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
\left.-(1-\gamma) {_aI_x^{1-\beta_l}}\partial_{N+1+l}
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\Bigr]\right|_{x=b}=0
\end{equation}
for $y_l(b)< y_l^{b}$. The other outcome $y_l(b)= y^{b}_l$ only
admits the neighboring paths with terminal value $\tilde{y}_l(b)\leq
y_l(b)$. Assuming, without loss of generality, that $h_l(b)\geq 0$,
this means that $\varepsilon \leq 0$. Hence, the transversality
condition, which has its root in the first order condition
\eqref{eq:FT}, must be changed to an inequality. For a minimization
problem, the $\leq$ type of inequality is called for, and we obtain
\begin{equation}
\label{tran:2}
\Bigl[\gamma {_xI_b^{1-\alpha_l}}
\partial_{N+l+1}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
\left.-(1-\gamma) {_aI_x^{1-\beta_l}}\partial_{N+1+l}
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\Bigr]\right|_{x=b} \leq 0
\end{equation}
for $y_l(b)= y^{b}_l$. Combining \eqref{tran:1} and \eqref{tran:2},
we may write the following transversality condition for a
minimization problem:
\begin{multline*}
\Bigl[\gamma {_xI_b^{1-\alpha_l}}
\partial_{N+l+1}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
\left.-(1-\gamma) {_aI_x^{1-\beta_l}}\partial_{N+1+l}
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\Bigr]\right|_{x=b}
\leq 0, \quad y_l(b)\leq y^{b}_l,\\
(y_l(b)-y^{b}_l)
\Bigl[\gamma {_xI_b^{1-\alpha_l}}
\partial_{N+l+1}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
\left.-(1-\gamma) {_aI_x^{1-\beta_l}}\partial_{N+1+l}
L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\Bigr]\right|_{x=b}=0.
\end{multline*}
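Formally, letting $\alpha_l,\beta_l\rightarrow 1$ and using ${_aI_x^{0}}g={_xI_b^{0}}g=g$, condition \eqref{tran:1} becomes $(2\gamma-1)\,\partial_{N+l+1}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\big|_{x=b}=0$; in particular, for $\gamma=1$ or $\gamma=0$ one recovers the classical natural boundary condition $\partial_{N+l+1}L\big|_{x=b}=0$ at the free endpoint.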
\subsection{The $\mathbf{{^CD}^{\alpha,\beta}_{\gamma}}$ fractional isoperimetric problem}
\label{sec:iso}
Let us consider now the isoperimetric problem that consists of
minimizing \eqref{Funct1} over all $\mathbf{y}\in \mathbf{D}$
satisfying $r$ isoperimetric constraints
\begin{equation}
\label{cons2:iso} \mathcal{G}^j(\mathbf{y})=\int_{a}^{b}
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)dx=l_j, \quad j=1,\ldots,r,
\end{equation}
where $G^j\in C^1([a,b]\times\mathbb{R}^{2N}; \mathbb{R})$,
$j=1,\ldots,r$, and boundary conditions \eqref{boun2}.
Necessary optimality conditions for isoperimetric problems can be
obtained by the following general theorem.
\begin{theorem}[see, \textrm{e.g.}, Theorem~2 of \cite{G:H} on p.~91]
\label{nes:iso}
Let $\mathcal{J},\mathcal{G}^{1},\ldots,\mathcal{G}^r$ be functionals
defined in a neighborhood of $\mathbf{y}$ and having continuous
first variations in this neighborhood. Suppose that $\mathbf{y}$ is
a local minimizer of \eqref{Funct1} subject to the boundary conditions
\eqref{boun2} and the isoperimetric constrains \eqref{cons2:iso}.
Assume that there are functions $\mathbf{h}^{1},
\ldots,\mathbf{h}^{r}\in \mathbf{D}$ such that the matrix
$A=(a_{kl})$, $a_{kl}:=\delta\mathcal{G}^{k}(\mathbf{y};\mathbf{h}^l)$,
has maximal rank $r$. Then there exist constants
$\lambda_{1},\ldots,\lambda_{r}\in \mathbb{R}$ such that the functional
\begin{equation*}
\mathcal{F}:=\mathcal{J}-\sum_{j=1}^{r}\lambda _{j}\mathcal{G}^{j}
\end{equation*}
satisfies
\begin{equation}\label{var}
\delta\mathcal{F}(\mathbf{y};\mathbf{h})=0
\end{equation}
for all $\mathbf{h}\in \mathbf{D}$.
\end{theorem}
Suppose now that assumptions of Theorem~\ref{nes:iso} hold. Then,
equation \eqref{var} is fulfilled for every $\mathbf{h} \in
\mathbf{D}$. Let us consider function $\mathbf{h}$ such that
$\mathbf{h}(a)=\mathbf{h}(b)=0$. Then, we have
\begin{multline*}
0=\delta \mathcal{F}(\mathbf{y};\mathbf{h})
=\frac{\partial}{\partial \varepsilon}\mathcal{F}(\mathbf{y}
+\varepsilon\mathbf{h})|_{\varepsilon=0}\\
=\int_a^b\Biggl[\sum_{i=2}^{N+1}\partial_i
F _{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x)h_{i-1}(x)
+\sum_{i=2}^{N+1}\partial_{N+i} F _{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x)
{^CD}^{\alpha_{i-1},\beta_{i-1}}_{\gamma_{i-1}} h_{i-1}(x)\Biggr]dx,
\end{multline*}
where the function $F:[a,b]\times \mathbb{R}^{2N}\times \mathbb{R}^r \rightarrow \mathbb{R}$
is defined by
$$
F _{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x):=L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
-\sum_{j=1}^{r}\lambda_{j} G^{j}[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x).
$$
On account of the above, and in a spirit similar to the proof of
Theorem~\ref{Theo E-L1}, we obtain
\begin{equation}
\label{ele}
\partial_{i} F _{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}} \partial_{N+i} F _{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x)=0,
\quad i=2,\ldots,N+1.
\end{equation}
Therefore, we have the following necessary optimality condition for
the fractional isoperimetric problems:
\begin{theorem}
\label{Th:B:EL-CV} Let assumptions of Theorem~\ref{nes:iso} hold. If
$\mathbf{y}$ is a local minimizer to the isoperimetric problem given
by \eqref{Funct1},\eqref{boun2} and \eqref{cons2:iso}, then
$\mathbf{y}$ satisfies the system of $N$ fractional Euler--Lagrange
equations \eqref{ele} for all $x\in[a,b]$.
\end{theorem}
Suppose now that constraints \eqref{cons2:iso} are characterized by
inequalities
\begin{equation*}
\mathcal{G}^j(\mathbf{y})
=\int_{a}^{b} G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)dx
\leq l_j, \quad j=1,\ldots,r.
\end{equation*}
In this case we can set
\begin{equation*}
\int_{a}^{b}\left(G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
-\frac{l_j}{b-a}\right)dx
+\int_{a}^{b}(\phi_j(x))^2dx=0,
\end{equation*}
$j=1,\ldots,r$, where $\phi_j$ have the same continuity properties
as $y_i$. Therefore, we obtain the following problem:
\begin{equation*}
\hat{\mathcal{J}}(y)=\int_a^b \hat{L}(x,\mathbf{y}(x),
{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x),\phi_{1}(x),\ldots,\phi_{r}(x))
\, dx \longrightarrow \min
\end{equation*}
subject to $r$ isoperimetric constraints
\begin{equation*}
\int_{a}^{b}\left[G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
-\frac{l_j}{b-a}+(\phi_j(x))^2\right]dx=0, \quad j=1,\ldots,r,
\end{equation*}
and boundary conditions \eqref{boun2}. Assuming that assumptions of
Theorem~\ref{Th:B:EL-CV} are satisfied, we conclude that there exist
constants $\lambda_{j}\in \mathbb{R}$, $j=1,\ldots,r$, for which the system
of equations
\begin{multline}
\label{iso:1:L:EL}
D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}} \partial_{N+i}\hat{F}(x,\mathbf{y}(x),
{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x),\lambda_1,
\ldots,\lambda_r,\phi_{1}(x),\ldots,\phi_{r}(x))\\
+\partial_i\hat{F}(x,\mathbf{y}(x),
{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x),\lambda_1,\ldots,\lambda_r,\phi_{1}(x),\ldots,\phi_{r}(x))=0,
\end{multline}
$i=2,\ldots, N+1$, where
$\hat{F}=\hat{L}+\sum_{j=1}^r\lambda_j(G^j-\frac{l_j}{b-a}+\phi_j^2)$, and
\begin{equation}
\label{iso:2:L:EL}
\lambda_j\phi_j(x)=0, \quad j=1,\ldots,r,
\end{equation}
hold for all $x\in[a,b]$. Note that it is enough to assume that the
regularity condition holds for the constraints which are active at
the local minimizer $\mathbf{y}$ (constraint $\mathcal{G}^k$ is active
at $\mathbf{y}$ if $\mathcal{G}^k(\mathbf{y})=l_k$). Indeed,
suppose that $l<r$ constraints, say
$\mathcal{G}^1,\ldots,\mathcal{G}^l$ for simplicity, are active at
the local minimizer $\mathbf{y}$, and there are functions
$\mathbf{h}^{1},\ldots,\mathbf{h}^{l}\in \mathbf{D}$ such that the
matrix
\begin{equation*}
B=(b_{kj}),\quad
b_{kj}:=\delta\mathcal{G}^{k}(\mathbf{y};\mathbf{h}^j),\quad
k,j=1,\ldots,l<r
\end{equation*}
has maximal rank $l$. Since the inequality constraints
$\mathcal{G}^{l+1},\ldots,\mathcal{G}^r$ are inactive, the condition
\eqref{iso:2:L:EL} is trivially satisfied by taking
$\lambda_{l+1}=\cdots=\lambda_{r}=0$. On the other hand, since the
inequality constraints $\mathcal{G}^1,\ldots,\mathcal{G}^l$ are
active and satisfy a regularity condition at $\mathbf{y}$, the
conclusion that there exist constants $\lambda_{j}\in \mathbb{R}$,
$j=1,\ldots,r$, such that \eqref{iso:1:L:EL} holds follows from
Theorem~\ref{Th:B:EL-CV}. Moreover, \eqref{iso:2:L:EL} is trivially
satisfied for $j=1,\ldots,l$.
\subsection{The $\mathbf{{^CD}^{\alpha,\beta}_{\gamma}}$ fractional Lagrange problem}
\label{sec:lagr}
Let us consider the following Lagrange problem, which consists of
minimizing \eqref{Funct1} over all $\mathbf{y}\in \mathbf{D}$
satisfying $r$ independent constraints ($r<N$)
\begin{equation}\label{cons2}
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)=0, \quad j=1,\ldots,r,
\end{equation}
and boundary conditions \eqref{boun2}. In mechanics, constraints of
type \eqref{cons2} are called nonholonomic. By the independence of
the $r$ constraints $G^j\in C^1([a,b]\times\mathbb{R}^{2N};
\mathbb{R})$ it is meant that there should exist a nonvanishing
Jacobian determinant of order $r$, such as
$\left|\frac{\partial(G^1,\ldots,G^r)}{\partial(p_{N+2},\ldots,p_{N+r+1})}\right|\neq
0$. Of course, any $r$ of $p_j$, $j=N+2,\ldots,2N+1$, can be used, not
necessarily the first $r$.
\begin{theorem}
\label{lagrange} A function $\mathbf{y}$ which is a solution to
problem \eqref{Funct1},\eqref{boun2} subject to $r$ independent
constraints ($r<N$) \eqref{cons2} satisfies, for suitably chosen
functions $\lambda_j$, $j=1,\ldots,r$, the system of $N$ fractional
Euler--Lagrange equations
\begin{equation*}\label{L:EL}
\partial_i F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}} \partial_{N+i}F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)=0,
\quad x\in[a,b], \quad i=2,\ldots, N+1,
\end{equation*}
where
$F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
=L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
+\sum_{j=1}^r\lambda_j(x)G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)$.
\end{theorem}
\begin{proof}
Suppose that $\mathbf{y}=(y_1,\ldots, y_N)$ is a solution to the
problem defined by \eqref{Funct1},\eqref{boun2}, and \eqref{cons2}.
Let $\mathbf{h}=(h_1,\ldots,h_N)$ be an arbitrary admissible
variation, \textrm{i.e.}, $h_i(a)=h_i(b)=0$, $i=1,\ldots,N$, and
$G^j[\mathbf{y}+\varepsilon \mathbf{h}]^{\alpha,\beta}_{\gamma}(x)=0$,
$j=1,\ldots,r,$ where $\varepsilon\in\mathbb{R}$ is a small parameter.
Because $\mathbf{y}=(y_1,\ldots, y_N)$ is a solution to the problem defined by
\eqref{Funct1},\eqref{boun2}, and \eqref{cons2}, it follows that
$$
\left.\frac{\partial}{\partial\varepsilon}\mathcal{J}(\mathbf{y}
+\varepsilon\mathbf{h})\right|_{\varepsilon=0} = 0 \, ,
$$
that is,
\begin{equation}
\label{eq:FTL}
\int_a^b\Biggl[\sum_{i=2}^{N+1}\partial_i L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
h_{i-1}(x) +\sum_{i=2}^{N+1}\partial_{N+i}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
{^CD}^{\alpha_{i-1},\beta_{i-1}}_{\gamma_{i-1}} h_{i-1}(x)\Biggr]dx=0,
\end{equation}
and, for $j=1,\ldots,r$,
\begin{equation}
\label{nes:2}
\sum_{i=2}^{N+1}\partial_i G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)h_{i-1}(x)
+\sum_{i=2}^{N+1}\partial_{N+i}G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
{^CD}^{\alpha_{i-1},\beta_{i-1}}_{\gamma_{i-1}} h_{i-1}(x)=0.
\end{equation}
Multiplying the $j$th equation of the system \eqref{nes:2} by the
unspecified function $\lambda_j(x)$, for all $j=1,\ldots,r$,
integrating with respect to $x$, and adding the left-hand sides
(all equal to zero for any choice of the $\lambda_j$) to the
integrand of \eqref{eq:FTL}, we obtain
\begin{equation*}
\begin{split}
&\int_a^b \left[\sum_{i=2}^{N+1}\left(\partial_iL[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
+\sum_{j=1}^r\lambda_j(x)\partial_iG^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\right)h_{i-1}(x)\right.\\
& \left. \quad +\sum_{i=2}^{N+1}\left(\partial_{N+i}L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)
+\sum_{j=1}^r\lambda_j(x)\partial_{N+i}G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\right)({^CD}^{\alpha_{i-1},\beta_{i-1}}_{\gamma_{i-1}}
h_{i-1}(x))\right]dx\\
&=\int_a^b\left[\sum_{i=2}^{N+1}\partial_i F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
h_{i-1}(x) +\sum_{i=2}^{N+1}\partial_{N+i}F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
({^CD}^{\alpha_{i-1},\beta_{i-1}}_{\gamma_{i-1}} h_{i-1}(x))\right]dx\\
&=0,
\end{split}
\end{equation*}
where
$F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
=L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) +\sum_{j=1}^r\lambda_j(x)
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)$.
Integrating by parts,
\begin{equation}
\label{nes:3}
\int_a^b\left[\sum_{i=2}^{N+1}\partial_i F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}} \partial_{N+i} F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)\right] h_{i-1}(x)dx=0.
\end{equation}
Because of \eqref{nes:2}, we cannot regard the $N$ functions
$h_1,\ldots,h_N$ as being free for arbitrary choice. There is a
subset of $r$ of these functions whose assignment is restricted by
the assignment of the remaining $(N-r)$. We can assume, without loss
of generality, that $h_1,\ldots,h_r$ are the functions of the set
whose dependence upon the choice of the arbitrary
$h_{r+1},\ldots,h_N$ is governed by \eqref{nes:2}. We now assign the
functions $\lambda_1,\ldots,\lambda_r$ to be the set of $r$
functions that make vanish (for all $x$ between $a$ and $b$) the
coefficients of $h_1,\ldots,h_r$ in the integrand of \eqref{nes:3}.
That is, $\lambda_1,\ldots,\lambda_r$ are chosen so as to satisfy
\begin{equation}
\label{nes:4}
\partial_i F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}}\partial_{N+i} F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)=0,
\quad i=2,\ldots,r+1, \quad x\in[a,b].
\end{equation}
With this choice \eqref{nes:3} gives
\begin{equation*}
\int_a^b\left[\sum_{i=r+2}^{N+1}
\partial_i F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}}\partial_{N+i}
F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)\right] h_{i-1}(x)dx=0.
\end{equation*}
Since the functions $h_{r+1},\ldots,h_N$ are arbitrary, we may
employ the fundamental lemma of the calculus of variations to conclude that
\begin{equation}
\label{nes:6}
\partial_i F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}}\partial_{N+i}F[\mathbf{y},\mathbf{\lambda}]^{\alpha,\beta}_{\gamma}(x)=0,
\end{equation}
$i=r+2,\ldots,N+1$, for all $x\in[a,b]$.
\end{proof}
\begin{remark}
In order to determine the $(N+r)$ unknown functions
$y_1,\ldots,y_N$, $\lambda_1,\ldots,\lambda_r$, we must consider
the system of $(N+r)$ equations, consisting of \eqref{cons2},
\eqref{nes:4}, and \eqref{nes:6}, together with the $2N$ boundary
conditions \eqref{boun2}.
\end{remark}
Assume now that the constraints, instead of \eqref{cons2},
are characterized by inequalities:
\begin{equation*}
\label{cons:ineq}
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\leq 0, \quad j=1,\ldots,r.
\end{equation*}
In this case we can set
\begin{equation*}
\label{cons:ineq:chan}
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)+(\phi_j(x))^2=0, \quad j=1,\ldots,r,
\end{equation*}
where $\phi_j$ have the same continuity properties as $y_i$.
Therefore, we obtain the following problem:
\begin{equation}
\label{Funct2:ineq}
\hat{\mathcal{J}}(y)=\int_a^b \hat{L}(x,\mathbf{y}(x),
{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x), \phi_1(x),\ldots,\phi_r(x)) \, dx
\longrightarrow \min
\end{equation}
subject to $r$ independent constraints ($r<N$)
\begin{equation}
\label{cons2:ineq}
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)+(\phi_j(x))^2=0, \quad j=1,\ldots,r,
\end{equation}
and boundary conditions \eqref{boun2}. Applying
Theorem~\ref{lagrange} we get the following result.
\begin{theorem}
\label{lagrange:ineq}
A set of functions $y_1,\ldots, y_N$, $\phi_1,\ldots,\phi_r$, which
is a solution to problem \eqref{Funct2:ineq}--\eqref{cons2:ineq},
satisfies, for suitably chosen $\lambda_j$, $j=1,\ldots,r$, the
following system of equations:
\begin{multline*}
D^{\beta_{i-1},\alpha_{i-1}}_{\gamma_{i-1}} \partial_{N+i}\hat{F}(x,\mathbf{y}(x),{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x),\lambda_1(x),
\ldots,\lambda_r(x),\phi_1(x),\ldots,\phi_r(x))\\
+ \partial_i\hat{F}(x,\mathbf{y}(x),{^CD}^{\alpha,\beta}_{\gamma}\mathbf{y}(x),\lambda_1(x),
\ldots,\lambda_r(x),\phi_1(x),\ldots,\phi_r(x)) =0,
\end{multline*}
$i=2,\ldots, N+1$, where $\hat{F}=\hat{L}+\sum_{j=1}^r\lambda_j(G^j+\phi_j^2)$, and
$\lambda_j(x)\phi_j(x)=0$, $j=1,\ldots,r$, hold for all $x\in[a,b]$.
\end{theorem}
\subsection{Sufficient condition of optimality}
\label{ssec:suf}
In this section we provide sufficient optimality conditions for the
elementary and the isoperimetric problem of the ${^CD}^{\alpha,\beta}_{\gamma}$ fractional
calculus of variations. Similarly to what happens in the classical
calculus of variations, some conditions of convexity are in order.
\begin{definition}
Given a function $f\in C^1([a,b]\times\mathbb{R}^{2N}; \mathbb{R})$,
we say that $f(\underline x,\mathbf{y},\mathbf{v})$ is jointly
convex in $(\mathbf{y},\mathbf{v})$, if
\begin{equation*}
f(x,\mathbf{y}+\mathbf{y}^0,\mathbf{v}+\mathbf{v}^0)-f(x,\mathbf{y},\mathbf{v})
\geq \sum_{i=2}^{N+1}\partial_i
f(x,\mathbf{y},\mathbf{v})y_{i-1}^0
+\sum_{i=2}^{N+1}\partial_{N+i}f(x,\mathbf{y},\mathbf{v})v_{i-1}^0
\end{equation*}
for all
$(x,\mathbf{y},\mathbf{v})$,$(x,\mathbf{y}+\mathbf{y}^0,\mathbf{v}
+\mathbf{v}^0)\in [a,b]\times\mathbb{R}^{2N}$.
\end{definition}
\begin{theorem}
\label{suff}
Let $L(\underline x,\mathbf{y},\mathbf{v})$ be jointly convex in
$(\mathbf{y},\mathbf{v})$. If $\mathbf{y}$ satisfies the system of
$N$ fractional Euler--Lagrange equations \eqref{E-L1}, then
$\mathbf{y}$ is a global minimizer to problem
\eqref{Funct1}--\eqref{boun2}.
\end{theorem}
\begin{proof}
The proof is similar to the proof of Theorem~3.3 in \cite{MalTor}.
\end{proof}
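For instance, the Lagrangian $L(x,y,v)=\frac{1}{2}\left(v-f(x)\right)^2$, with $f$ a fixed continuous function, is jointly convex in $(y,v)$: indeed,
\begin{equation*}
L(x,y+y^0,v+v^0)-L(x,y,v)=\left(v-f(x)\right)v^0+\frac{1}{2}(v^0)^2
\geq \partial_3 L(x,y,v)\,v^0,
\end{equation*}
while $\partial_2 L=0$. Hence, by Theorem~\ref{suff}, any trajectory satisfying \eqref{E-L1} for this Lagrangian together with the boundary conditions \eqref{boun2} is a global minimizer; the same reasoning applies to the weighted Lagrangian considered in Example~\ref{ex:2}.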
\begin{theorem}
\label{suff:iso}
Let $F(\underline
x,\mathbf{y},\mathbf{v},\bar{\mathbf{\lambda}})=L(\underline
x,\mathbf{y},\mathbf{v})-\sum_{j=1}^{r}\bar{\lambda}_{j}G^{j}(\underline
x,\mathbf{y},\mathbf{v})$ be jointly convex in
$(\mathbf{y},\mathbf{v})$, for some constants
$\bar{\lambda}_{j}\in\mathbb{R}$, $j=1,\ldots,r$. If $\mathbf{y}^0$
satisfies the system of $N$ fractional Euler--Lagrange equations
\eqref{ele}, then $\mathbf{y}^0$ is a minimizer to the isoperimetric problem
defined by \eqref{Funct1},\eqref{boun2} and \eqref{cons2:iso}.
\end{theorem}
\begin{proof}
By Theorem~\ref{suff}, $\mathbf{y}^0$ minimizes
$\int_a^b F _{\bar{\lambda}}\{\mathbf{y}\}^{\alpha,\beta}_{\gamma}(x) \, dx$. That is, for all
functions satisfying condition \eqref{boun2} we have
\begin{multline*}
\int_a^b L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\,
dx-\sum_{j=1}^{r}\bar{\lambda}_{j}\int_a^bG^{j}[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\, dx\\
\geq \int_a^b L[\mathbf{y}^0]^{\alpha,\beta}_{\gamma}(x)\,
dx-\sum_{j=1}^{r}\bar{\lambda}_{j}\int_a^bG^{j}[\mathbf{y}^0]^{\alpha,\beta}_{\gamma}(x)\, dx.
\end{multline*}
Restricting to the isoperimetric constraints \eqref{cons2:iso}, we obtain that
\begin{equation*}
\int_a^b L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\, dx
-\sum_{j=1}^{r}\bar{\lambda}_{j}l_{j}
\geq \int_a^b L[\mathbf{y}^0]^{\alpha,\beta}_{\gamma}(x)\, dx
-\sum_{j=1}^{r}\bar{\lambda}_{j}l_{j}.
\end{equation*}
Therefore,
\begin{equation*}
\int_a^b L[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\, dx
\geq \int_a^b L[\mathbf{y}^0]^{\alpha,\beta}_{\gamma}(x)\, dx
\end{equation*}
as desired.
\end{proof}
Choosing $r=1$ in Theorem~\ref{suff:iso} one can easily obtain
\cite[Theorem~3.10]{AlmeidaTorres}.
\section{Multiobjective fractional optimization}
\label{main:2}
Multiobjective optimization is a natural extension of the
traditional optimization of a single-objective function. If the
objective functions are commensurate, minimizing one objective
function minimizes all criteria and the problem can be solved using
traditional optimization techniques. However, if the objective
functions are incommensurate, or competing, then the minimization of
one objective function requires a compromise in another objective.
Here we consider multiobjective fractional
variational problems with a finite number
$d\geq 1$ of objective (cost) functionals
\begin{equation}
\label{Funct:mul}
\left(\mathcal{J}^1(\mathbf{y}),\ldots,\mathcal{J}^d(\mathbf{y})\right)
=\left(\int_a^b L^1[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx ,
\ldots, \int_a^b L^d[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx \right)\longrightarrow \min
\end{equation}
subject to the boundary conditions
\begin{equation}
\label{boun1:mul}
\mathbf{y}(a)=\mathbf{y}^{a},
\quad \mathbf{y}(b)=\mathbf{y}^{b},
\end{equation}
$\mathbf{y}^{a},\mathbf{y}^{b}\in \mathbb{R}^N$,
and $r$ ($r<N$) independent constraints
\begin{equation}
\label{cons:mul}
G^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x)\leq 0, \quad j=1,\ldots,r,
\end{equation}
where $L^i,G^j\in C^1([a,b]\times\mathbb{R}^{2N}; \mathbb{R})$,
$i=1,\ldots,d$, $j=1,\ldots,r$. We would like to find a function
$\mathbf{y}\in \mathbf{D}$, satisfying constraints \eqref{boun1:mul}
and \eqref{cons:mul}, that renders the minimum value to each
functional $\mathcal{J}^i$, $i=1,\ldots, d$, simultaneously. The
competition between objectives gives rise to the need
to distinguish multiobjective optimization
from traditional single-objective optimization.
Competition causes the lack of a complete order
for multiobjective optimization problems. The concept of
Pareto optimality is therefore used to characterize a solution
to the multiobjective optimization problem. For the usefulness
of variational analysis and Pareto optimal allocations
in welfare economics, we refer the reader to \cite{Boris2005}.
We define
\begin{equation*}
\mathcal{E}:=\{\mathbf{y}\in \mathbf{D}:\mathbf{y} \mbox{ satisfies
conditions \eqref{boun1:mul} and \eqref{cons:mul}}\}.
\end{equation*}
\begin{definition}
\label{def:paret}
A function $\bar{\mathbf{y}}\in \mathcal{E}$ is called a Pareto
optimal solution to problem \eqref{Funct:mul}--\eqref{cons:mul}
if there does not exist $\mathbf{y}\in \mathcal{E}$ with
\begin{equation*}
\forall
i\in\{1,\ldots,d\}:\mathcal{J}^i(\mathbf{y})\leq\mathcal{J}^i(\bar{\mathbf{y}})
\quad \wedge \quad \exists
i\in\{1,\ldots,d\}:\mathcal{J}^i(\mathbf{y})<\mathcal{J}^i(\bar{\mathbf{y}}).
\end{equation*}
\end{definition}
Definition~\ref{def:paret} introduces the notion of
\emph{global Pareto optimality}.
Another important concept is the one of \emph{local Pareto optimality}.
\begin{definition}
\label{def:locparet}
A function $\bar{\mathbf{y}}\in \mathcal{E}$ is called a local
Pareto optimal solution to problem
\eqref{Funct:mul}--\eqref{cons:mul} if there exists $\delta >0$
for which there does not exist $\mathbf{y}\in \mathcal{E}$ with
$\|\mathbf{y}-\bar{\mathbf{y}}\|_{1,\infty}<\delta$ and
\begin{equation*}
\forall
i\in\{1,\ldots,d\}:\mathcal{J}^i(\mathbf{y})\leq\mathcal{J}^i(\bar{\mathbf{y}})\quad
\wedge \quad \exists
i\in\{1,\ldots,d\}:\mathcal{J}^i(\mathbf{y})<\mathcal{J}^i(\bar{\mathbf{y}}).
\end{equation*}
\end{definition}
Naturally, any global Pareto optimal solution is locally Pareto optimal.
For enhanced notions of Pareto optimality
of constrained multiobjective problems,
the reader is referred to \cite{Boris2010}.
\subsection{Fractional Pareto optimality conditions}
\label{sec:par:op}
We obtain a sufficient condition for Pareto optimality by modifying the
original multiobjective fractional problem
\eqref{Funct:mul}--\eqref{cons:mul} into the following weighting
problem:
\begin{equation}
\label{wei:prob} \sum_{i=1}^{d}w_{i}\int_a^b L^i[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \,
dx \longrightarrow \min
\end{equation}
subject to $\mathbf{y}\in\mathcal{E}$, where $w_{i}\geq 0$ for all
$i=1,\ldots,d$, and $\sum_{i=1}^{d}w_{i}=1$.
\begin{theorem}
\label{pareto:suf}
The solution of the weighting problem \eqref{wei:prob} is Pareto
optimal if the weighting coefficients are positive, that is,
$w_{i}>0$ for all $i=1,\ldots,d$. Moreover, the unique solution of
the weighting problem \eqref{wei:prob} is Pareto optimal.
\end{theorem}
\begin{proof}
Let $\bar{\mathbf{y}}\in \mathcal{E}$ be a solution to problem
\eqref{wei:prob} with $w_{i}>0$ for all $i=1,\ldots,d$. Suppose that
$\bar{\mathbf{y}}$ is not Pareto optimal. Then, there exists
$\mathbf{y}$ such that
$\mathcal{J}^i(\mathbf{y})\leq\mathcal{J}^i(\bar{\mathbf{y}})$ for
all $i=1,\ldots,d$ and
$\mathcal{J}^j(\mathbf{y})<\mathcal{J}^j(\bar{\mathbf{y}})$ for at
least one $j$. Since $w_{i}>0$ for all $i=1,\ldots,d$, we have
$\sum_{i=1}^{d}w_i\mathcal{J}^i(\mathbf{y})<\sum_{i=1}^{d}w_i\mathcal{J}^i(\bar{\mathbf{y}})$.
This contradicts the minimality of $\bar{\mathbf{y}}$. Now, let
$\bar{\mathbf{y}}$ be the unique solution to \eqref{wei:prob}. If
$\bar{\mathbf{y}}$ is not Pareto optimal, then, as above, there exists
$\mathbf{y}\in\mathcal{E}$, $\mathbf{y}\neq\bar{\mathbf{y}}$, such that
$\sum_{i=1}^{d}w_i\mathcal{J}^i(\mathbf{y})\leq\sum_{i=1}^{d}w_i\mathcal{J}^i(\bar{\mathbf{y}})$.
Hence $\mathbf{y}$ is also a solution to \eqref{wei:prob}, which contradicts the uniqueness of $\bar{\mathbf{y}}$.
\end{proof}
Therefore, by varying the weights over the unit simplex
$\{w=(w_1,\ldots,w_d): w_i\geq 0, \sum_{i=1}^{d}w_{i}=1\}$ one
obtains, in principle, different Pareto optimal solutions. The next
theorem provides a necessary and sufficient condition for Pareto
optimality. The result is analogous to that valid
for the finite dimensional case (see, \textrm{e.g.},
Chapter~3.1 and Chapter~3.3 of \cite{Miet}).
\begin{theorem}
\label{pareto:nes:suf}
A function $\bar{\mathbf{y}}\in \mathcal{E}$ is Pareto optimal
to problem \eqref{Funct:mul}--\eqref{cons:mul} if and only
if it is a solution to the scalar fractional variational problem
\begin{equation*}
\int_a^b L^i[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx \longrightarrow \min
\end{equation*}
subject to $\mathbf{y}\in \mathcal{E}$ and
\begin{equation*}
\int_a^b L^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx
\leq \int_a^b L^j[\bar{\mathbf{y}}]^{\alpha,\beta}_{\gamma}(x) \, dx,
\quad j=1,\ldots,d,\quad j\neq i,
\end{equation*}
for each $i=1,\ldots,d$.
\end{theorem}
\begin{proof}
Suppose that $\bar{\mathbf{y}}$ is Pareto optimal. Then
$\bar{\mathbf{y}}\in \mathcal{C}_k=\{\mathbf{y}\in \mathcal{E}:
\mathcal{J}^j(\mathbf{y})\leq\mathcal{J}^j(\bar{\mathbf{y}}),j=1,\ldots,d,j\neq
k\}$ for all $k$, so $\mathcal{C}_k\neq \emptyset$. If
$\bar{\mathbf{y}}$ does not minimize $\mathcal{J}^k(\mathbf{y})$ on
the constrained set $\mathcal{C}_k$ for some $k$, then there exists
$\mathbf{y}\in \mathcal{E}$ such that
$\mathcal{J}^k(\mathbf{y})<\mathcal{J}^k(\bar{\mathbf{y}})$ and
$\mathcal{J}^j(\mathbf{y})\leq\mathcal{J}^j(\bar{\mathbf{y}})$ for
all $j\neq k$. This contradicts the Pareto optimality of
$\bar{\mathbf{y}}$. Now, suppose that $\bar{\mathbf{y}}$ minimizes
each $\mathcal{J}^k(\mathbf{y})$ on the constrained set
$\mathcal{C}_k$. If $\bar{\mathbf{y}}$ is not Pareto optimal, then
there exists $\mathbf{y}$ such that
$\mathcal{J}^i(\mathbf{y})\leq\mathcal{J}^i(\bar{\mathbf{y}})$ for
all $i=1,\ldots,d$ and
$\mathcal{J}^j(\mathbf{y})<\mathcal{J}^j(\bar{\mathbf{y}})$ for at
least one $j$. Since such $\mathbf{y}$ belongs to $\mathcal{C}_j$, this contradicts
the minimality of $\bar{\mathbf{y}}$ for $\mathcal{J}^j$ on $\mathcal{C}_j$.
\end{proof}
\begin{remark}
\label{pareto:nes}
For a function $\mathbf{y}\in \mathcal{E}$ to be Pareto optimal
to problem \eqref{Funct:mul}--\eqref{cons:mul}, it is
necessary to be a solution to the fractional isoperimetric problems
\begin{equation*}
\int_a^b L^i[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx \longrightarrow \min
\end{equation*}
subject to $\mathbf{y}\in \mathcal{E}$ and
\begin{equation*}
\int_a^b L^j[\mathbf{y}]^{\alpha,\beta}_{\gamma}(x) \, dx
= \int_a^b L^j[\bar{\mathbf{y}}]^{\alpha,\beta}_{\gamma}(x) \, dx,
\quad j=1,\ldots,d,\quad j\neq i,
\end{equation*}
for all $i=1,\ldots,d$. Therefore, necessary optimality conditions
for the fractional isoperimetric problems (see Theorem~\ref{Th:B:EL-CV})
are also necessary for fractional Pareto optimality.
\end{remark}
\subsection{Examples}
\label{sec:ex}
We illustrate our results with two multiobjective
fractional variational problems.
\begin{example}
Let $\bar{y}(x)=E_{\alpha}(x^{\alpha})$, $x\in [0,1]$, where
$E_{\alpha}$ is the Mittag--Leffler function:
$$
E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\alpha k+1)},
\quad z\in\mathbb{R}, \quad \alpha >0.
$$
When $\alpha =1$, the Mittag--Leffler function
is simply the exponential function: $E_1(x)=\mathrm{e}^x$.
We note that the left Caputo fractional derivative of
$\bar{y}$ is $\bar{y}$ (\textrm{cf.} \cite{kilbas}, p.~98):
$$
{^C_0D_x^\alpha}\bar{y}(x)=\bar{y}(x).
$$
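This identity can be checked, at least formally, term by term: for $k\geq 1$ one has ${^C_0D_x^\alpha}\frac{x^{\alpha k}}{\Gamma(\alpha k+1)}=\frac{x^{\alpha(k-1)}}{\Gamma(\alpha(k-1)+1)}$, while the constant term $k=0$ is annihilated by the Caputo derivative, so that differentiating the series defining $\bar{y}(x)=E_{\alpha}(x^{\alpha})$ reproduces the same series.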
Consider the following multiobjective fractional variational problem
($N=1$, $\gamma=1$, and $d=2$):
\begin{equation}
\label{ex:1:pr}
\left(\mathcal{J}^1(y),\mathcal{J}^2(y)\right)
=\left(\int_0^1 ({^C_0D_x^\alpha}y(x))^2 \, dx ,
\int_0^1 \bar{y}(x){^C_0D_x^\alpha}y(x) \, dx \right)\longrightarrow \min
\end{equation}
subject to
\begin{equation}
\label{ex:1:pr:bc}
y(0)=0,\quad y(1)=E_{\alpha}(1).
\end{equation}
Observe that $\bar{y}$ satisfies the necessary Pareto optimality
conditions (see Remark~\ref{pareto:nes}). Indeed, as shown in
\cite[Example~1]{AlmeidaTorres}, $\bar{y}$ is a solution to the
isoperimetric problem
\begin{equation*}
\mathcal{J}^1(y)=\int_0^1 ({^C_0D_x^\alpha}y(x))^2 \, dx \longrightarrow \min
\end{equation*}
subject to
\begin{equation*}
\int_0^1 \bar{y}(x){^C_0D_x^\alpha}y(x) \, dx =\int_0^1
(\bar{y}(x))^2 \, dx.
\end{equation*}
Consider now the following fractional isoperimetric problem:
\begin{equation*}
\mathcal{J}^2(y)=\int_0^1 \bar{y}(x){^C_0D_x^\alpha}y(x)\, dx \longrightarrow \min
\end{equation*}
subject to
\begin{equation*}
\int_0^1 ({^C_0D_x^\alpha}y(x))^2 \, dx =\int_0^1 ({^C_0D_x^\alpha}
\bar{y}(x))^2 \, dx.
\end{equation*}
Let us apply Theorem~\ref{nes:iso}. The equality ${_xD_1^\alpha}y(x)=0$
holds if and only if $y(x)=d(1-x)^{\alpha-1}$ with $d\in \mathbb{R}$ (see
\cite[Corollary~2.1]{kilbas}). Hence, $\bar{y}$ does not satisfy
equation ${_xD_1^\alpha}({^C_0D_x^\alpha}y)=0$.
The augmented function is
\begin{equation*}
F _{\lambda}\{\mathbf{y}\}^{\alpha,\beta}_\gamma(x)
=\bar{y}(x){^C_0D_x^\alpha}y(x)
-\lambda ({^C_0D_x^\alpha}y(x))^2,
\end{equation*}
and the corresponding fractional Euler--Lagrange equation gives
\begin{equation*}
{_xD_1^\alpha}(\bar{y}(x)-2\lambda{^C_0D_x^\alpha}y(x))=0.
\end{equation*}
A solution to this equation is $\lambda=\frac{1}{2}$ and
$y=\bar{y}$. Therefore, by Remark~\ref{pareto:nes}, $y=\bar{y}$ is a
candidate Pareto optimal solution to problem
\eqref{ex:1:pr}--\eqref{ex:1:pr:bc}.
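Indeed, for this choice the expression inside the brackets in the Euler--Lagrange equation vanishes identically: $\bar{y}(x)-2\cdot\frac{1}{2}\,{^C_0D_x^\alpha}\bar{y}(x)=\bar{y}(x)-\bar{y}(x)=0$.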
\end{example}
\begin{example}
\label{ex:2}
Consider the following multiobjective fractional variational problem:
\begin{equation}
\label{ex:2:pr} \left(\mathcal{J}^1(y),\mathcal{J}^2(y)\right)
=\left(\int_0^1 \frac{1}{2}({^C_0D_x^\alpha}y(x)-f(x))^2 \, dx ,
\int_0^1 \frac{1}{2}({^C_0D_x^\alpha}y(x))^2 \, dx \right)\longrightarrow \min
\end{equation}
subject to
\begin{equation}
\label{ex:2:pr:bc}
y(0)=0,\quad y(1)=\chi,\quad \chi \in \mathbb{R},
\end{equation}
where $f$ is a fixed function. In this case we have
$N=1$, $\gamma=1$, and $d=2$. By Theorem~\ref{pareto:suf}, Pareto
optimal solutions to problem \eqref{ex:2:pr}--\eqref{ex:2:pr:bc}
can be found by considering the family of problems
\begin{equation}
\label{ex:2:fp}
w\int_0^1 \frac{1}{2}({^C_0D_x^\alpha}y(x)-f(x))^2 \, dx
+ (1-w)\int_0^1 \frac{1}{2}({^C_0D_x^\alpha}y(x))^2 \, dx\longrightarrow \min
\end{equation}
subject to
\begin{equation}
\label{ex:2:fp:bc}
y(0)=0,\quad y(1)=\chi,\quad \chi \in \mathbb{R},
\end{equation}
where $w\in[0,1]$. Let us now fix $w$. By Theorem~\ref{Theo E-L1},
a solution to problem \eqref{ex:2:fp}--\eqref{ex:2:fp:bc} satisfies
the fractional Euler--Lagrange equation
\begin{equation}
\label{el:ex2}
{_xD_1^\alpha}({^C_0D_x^\alpha}y(x)-wf(x))=0.
\end{equation}
Moreover, by Theorem~\ref{suff}, a solution to \eqref{el:ex2} is a
global minimizer to problem \eqref{ex:2:fp}--\eqref{ex:2:fp:bc}.
Therefore, solving equation \eqref{el:ex2} for $w\in[0,1]$, we are
able to obtain Pareto optimal solutions to problem
\eqref{ex:2:pr}--\eqref{ex:2:pr:bc}. In order to solve equation
\eqref{el:ex2}, firstly we use Corollary~2.1 of \cite{kilbas} to get the
following equation:
\begin{equation}
\label{1storder:FDELe}
{^C_0D_x^\alpha}y(x)-wf(x)=d(1-x)^{\alpha-1},\quad d\in \mathbb{R}.
\end{equation}
\begin{figure}
\caption{Numerical solutions of \eqref{1storder:FDELe} for $\alpha = 1/2$, $f(x) = \mathrm{e}^x$, and different values of the weight $w$.}
\label{fig}
\end{figure}
Equation~\eqref{1storder:FDELe} needs to be solved numerically. We did
numerical simulations using the MATLAB solver \texttt{fode} for
linear Fractional-Order Differential Equations (FODE) with constant
coefficients, developed by Farshad Merrikh Bayat \cite{matlab}. The
results for $\alpha = 1/2$, $f(x) = \mathrm{e}^x$, and different values of
the parameter $w$ can be seen in Figure~\ref{fig}. Numerical results for
different values of $\alpha$ show that when $\alpha\rightarrow 1$
the fractional solution converges to the solution of the classical
problem of the calculus of variations.
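As an illustration only (this is not the code behind Figure~\ref{fig}, and the test function and grid below are arbitrary choices), the following short Python sketch shows one standard way of discretizing the left Caputo derivative appearing in \eqref{1storder:FDELe}, namely the L1 finite-difference scheme:
\begin{verbatim}
import numpy as np
from math import gamma

def caputo_l1(y, h, alpha):
    # L1 approximation of the left Caputo derivative of order alpha
    # (0 < alpha < 1) of the samples y on the uniform grid x_n = n*h.
    n = len(y)
    d = np.zeros(n)
    c = h ** (-alpha) / gamma(2.0 - alpha)
    dy = np.diff(y)                      # y_{j+1} - y_j
    for m in range(1, n):
        j = np.arange(m)
        w = (m - j) ** (1.0 - alpha) - (m - j - 1.0) ** (1.0 - alpha)
        d[m] = c * np.dot(w, dy[:m])
    return d

# Sanity check on y(x) = x, whose left Caputo derivative is
# x^(1-alpha)/Gamma(2-alpha); the L1 scheme is exact here (up to
# roundoff) because y is piecewise linear.
alpha = 0.5
x = np.linspace(0.0, 1.0, 513)
h = x[1] - x[0]
err = caputo_l1(x, h, alpha) - x ** (1.0 - alpha) / gamma(2.0 - alpha)
print(np.max(np.abs(err)))
\end{verbatim}
Once ${^C_0D_x^\alpha}$ is discretized in this way, \eqref{1storder:FDELe} becomes a lower-triangular linear system in the grid values of $y$, which can be solved by forward substitution.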
\end{example}
\section*{Acknowledgments}
The authors would like to express their gratitude to Ivo Petras,
for having called their attention to \cite{matlab}
as well as for helpful discussions on numerical aspects and available
software packages for fractional differential equations.
\end{document} |
\begin{document}
\begin{abstract}
\noindent We prove a local well-posedness result for the modified Korteweg-de Vries equation in a critical space designed so that it contains self-similar solutions. As a consequence, we can study the flow of this equation around self-similar solutions: in particular, we give an asymptotic description of small solutions as $t \to +\infty$ and construct solutions with a prescribed blow up behavior as $t \to 0$.
\end{abstract}
\title{Self-similar dynamics for the modified Korteweg-de Vries equation}
\section{Introduction}
In this paper, we are interested in the dynamics near self-similar solutions for the modified Korteweg-de Vries equation:
\begin{align} \label{mkdv} \tag{mKdV}
\partial_t u + \partial_{xxx} u + \epsilon \partial_x (u^3) =0, \quad u: \m R_t \times \m R_x \to \m R.
\end{align}
The sign $\epsilon \in \{\pm 1 \}$ indicates whether the equation is focusing or defocusing. In our framework, $\epsilon$ will play no major role.
The \eqref{mkdv} equation enjoys a natural scaling:
if $u$ is a solution then
\[ u_\lambda(t,x) := \lambda^{1/3} u(\lambda t, \lambda^{1/3} x) \]
is also a solution to \eqref{mkdv}. As a consequence, the self-similar solutions, which preserve their shape under scaling
\[ S(t,x) = t^{-1/3} V(t^{-1/3} x), \]
are therefore of special interest.
Self-similar solutions play an important role indeed for the \eqref{mkdv} flow: they exhibit an explicit blow up behavior, and are also related with the long time description of solutions. Even for small and smooth initial data, solutions display a modified scattering where self-similar solutions naturally appear: we refer to Hayashi and Naumkin \cite{HN99,HN01}, which was revisited by Germain, Pusateri and Rousset \cite{GPR16} and Harrop-Griffiths \cite{HaGr16}.
Another example where self-similar solutions of the \eqref{mkdv} equation are relevant is in the long time asymptotics of the so-called Intermediate Long Wave (ILW) equation. This equation occurs in the propagation of waves in a one-dimensional stratified fluid in two limiting cases. In the shallow water limit, the propagation reduces to the KdV equation, while in the deep water limit, it reduces to the so-called Benjamin-Ono equation. In a recent work, Bernal-Vilchis and Naumkin \cite{B_VN19}
study the large-time behavior of small solutions of the (modified) ILW, and they prove that in the so-called self-similar region the solutions tend at infinity to a self-similar solution of \eqref{mkdv}.
Self-similar solutions and the \varepsilonqref{mkdv} flow are also related to some other simplified models in fluid dynamics. More precisely, Goldstein and Petrich \cite{GP92} find a formal connection between the evolution of the boundary of a vortex patch in the plane under Euler equations and a hierarchy of completely integrable dispersive equations. The first element of this hierarchy is:
\[ \partial_t z = - \partial_{sss} z +\partial_s \bar z (\partial_{ss} z)^2, \quad |\partial_s z|^2=1, \]
where $z=z(t,s)$ is complex valued and parametrize by its arctlength $s$ a plane curve which evolves in time $t$. A direct computation shows that its curvature solves the focusing \varepsilonqref{mkdv} (with $\varepsilonpsilon=1$), and self-similar solutions with initial data
\begin{equation}gin{gather} \leqslantftabel{def:CI}
U(t) \rightightharpoonup c \delta_{0} + \alpha \mathop{\mathrm{v.p.}} \leqslantfteft( \frac{1}{x} \rightight) \quad \text{as } t \to 0^+, \quad \alpha, c \in \m R, \varepsilonnd{gather}
correspond to logarithmic spirals making a corner, see \cite{PV07}.
Finally \varepsilonqref{mkdv} is a member of a two parameter family of geometric flows that appears as a model for the evolution of vortex filaments. In this case, the filaments are curves that propagate in 3d, and their curvature and torsion determined a complex valued function that satisfies a non-linear dispersive equation. This equation, that depends on the two free parameters, is a combination of a cubic non-linear Schrödinger equation (NLS) and a complex modified Korteweg-de Vries equation.
The particular case of cubic (NLS) has received plenty of attention. The corresponding geometric flow is known as either the binormal curvature flow or the Localized Induction Approximation, name that is more widely used in the literature in fluid dynamics. In this setting, the relevant role played by the self-similar solutions, including also logarithmic spirals, has been largely studied. We refer the reader to the recent paper by Banica and Vega \cite{BV18} and the references there in. Among other things, in this article the authors prove that the self-similar solutions have finite energy, when the latter is properly defined. Moreover, they give a well-posedness result in an appropriately chosen space of distributions that contains the self-similar solutions.
Our goal in this paper is to continue the work initiated in \cite{CCV19} and to study the \eqref{mkdv} flow in spaces in which self-similar solutions naturally live. As we will see, the number of technical problems increases dramatically with respect to the case of (NLS). This is due to the higher dispersion, which makes the algebra rather more involved, and to the presence of a derivative in the nonlinear term.
\section{Main results}
\subsection{Notations and functional setting}
We start with some notation. $\hat u$ denotes the Fourier transform of a function $u$ (in its space variable $x$ only, if $u$ is a space-time function), and we will often denote by $p$ the variable dual to $x$ on the Fourier side. We denote by $\mathcal{G}(t)$ the linear KdV group:
\[ \widehat{\mathcal{G}(t) v }(p) = e^{i t p^3} \hat v(p), \]
for any $v \in \cal S'(\m R)$. Given a (space-time) function $u$, we denote by $\tilde u$ the \textit{profile} of $u$, that is, the function defined by
\begin{align} \label{def:u_tilde}
\tilde{u}(t, p):= \widehat{\mathcal{G}(-t)u(t)}(p) = e^{- i t p^3} \hat u(t,p).
\end{align}
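As a quick sanity check (not needed in the sequel, but consistent with the definitions above, and with the convention $\widehat{\partial_x u} = ip\, \hat u$ implicit in the definition of $\mathcal{G}(t)$), the profile filters out the linear flow: since $\widehat{\partial_{xxx} u} = -ip^3 \hat u$,
\[ \partial_t \tilde u(t,p) = e^{-itp^3}\left( \partial_t \hat u(t,p) - i p^3 \hat u(t,p) \right) = e^{-itp^3}\, \widehat{(\partial_t + \partial_{xxx}) u}(t,p), \]
which vanishes identically when $u$ solves the linear (Airy) equation $\partial_t u + \partial_{xxx} u = 0$, so that in that case $\tilde u$ is constant in time.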
In all the following, $C$ denotes various constants, which may change from one line to the next but do not depend on the other variables which appear. As usual, we use the conventions $a \lesssim b$ and $a = O(b)$ to abbreviate $a \le C b$.
We will also use the Landau notation $a = o_n(b)$ when $a$ and $b$ are two complex quantities (depending in particular on $n$) such that $a/b \to 0$ as $n \to +\infty$; and \emph{mutatis mutandis} $a = o_\varepsilon(b)$ when $a/b \to 0$ as $\varepsilon \to 0$.
We will often use the Japanese bracket $\jap y = \sqrt{1+|y|^2}$, and the (complex-valued) Airy-Fock function
\begin{equation} \label{def:Ai}
\Ai(z) := \frac{1}{\pi} \int_0^{+\infty} e^{i p z + i p^3} dp.
\end{equation}
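For orientation (this identity is not used below, and assumes the standard normalizations of the classical Airy function $\mathrm{Ai}$ and of the Scorer function $\mathrm{Gi}$), the substitution $p = 3^{-1/3} q$ relates $\Ai$ to classical special functions:
\[ \Ai(z) = \frac{3^{-1/3}}{\pi} \int_0^{+\infty} e^{i\left(3^{-1/3} z\, q + q^3/3\right)} dq = 3^{-1/3} \left( \mathrm{Ai}\big(3^{-1/3} z\big) + i\, \mathrm{Gi}\big(3^{-1/3} z\big) \right), \]
which in particular makes the bounds $|\Ai(z)| \lesssim \jap{z}^{-1/4}$ and $|\Ai'(z)| \lesssim \jap{z}^{1/4}$ used later transparent.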
For $v \in \cal S'(\m R)$ such that $\hat v \in L^\infty \cap \dot H^1$, and for $t >0$, we define the norm (depending on $t$) with which we will mostly work:
\begin{gather} \label{def:E(t)}
\| v \|_{\q E(t)} := \| \widehat{\mathcal{G}(-t) v} \|_{L^\infty(\m R)} + t^{-1/6} \| \partial_{p} \widehat{\mathcal{G}(-t) v} \|_{L^2((0,+\infty))}.
\end{gather}
Let us remark that we will only consider real-valued functions $u$, so that $\hat u(t,-p) = \overline{\hat u(t,p)}$. As a consequence, the knowledge of the frequencies $p >0$ is enough to determine $u(t)$ completely, and in the above definition the purpose of considering $L^2((0,+\infty))$ is to allow a jump at $0$. This is necessary because self-similar solutions with $\alpha \ne 0$ (in \eqref{def:CI}) do exhibit such a jump. Indeed, we recall the main result of \cite{CCV19}.
\begin{thm*}
Given $c, \alpha \in \m R$ small enough, there exist unique $a \in \m R$, $A,B \in \m C$ and a self-similar solution $S(t,x) = t^{-1/3} V(t^{-1/3} x)$, where $V$ satisfies
\begin{align}
\text{for } p \geqslant 2, \quad e^{-it p^3} \hat V(p) & = A e^{i a \ln |p|} + B \frac{e^{3ia \ln |p| -i \frac{8}{9} p^3}}{p^3} + z(p), \label{eq:ss_hf} \\
\text{for } |p| \le 1, \quad e^{-it p^3} \hat V(p) & = c + \frac{3 i \alpha}{2\pi} \sgn(p) + z(p), \label{eq:ss_lf}
\end{align}
where $z \in W^{1,\infty}(\m R)$, $z(0)=0$ and, for any $k < \frac{4}{7} $, $|z(p)| + |p z'(p)| = O(|p|^{-k})$ as $|p| \to +\infty$.
\end{thm*}
Let us emphasize that the $\q E(t)$ norm is scaling invariant, in the following sense:
\[ \forall \lambda >0, \quad \| u_\lambda(t) \|_{\q E(t)} = \| u(\lambda t) \|_{\q E(\lambda t)}. \]
In particular, self-similar solutions have constant $\q E(t)$ norm for $t \in (0,+\infty)$.
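For the reader's convenience, here is the elementary verification, assuming the standard mKdV scaling $u_\lambda(t,x) := \lambda^{1/3} u(\lambda t, \lambda^{1/3} x)$ (the scaling that leaves \eqref{mkdv} invariant): on the Fourier side $\hat u_\lambda(t,p) = \hat u(\lambda t, \lambda^{-1/3} p)$, hence $\tilde u_\lambda(t,p) = \tilde u(\lambda t, \lambda^{-1/3} p)$, and therefore
\[ \| \tilde u_\lambda(t) \|_{L^\infty} = \| \tilde u(\lambda t) \|_{L^\infty}, \qquad t^{-1/6} \| \partial_p \tilde u_\lambda(t) \|_{L^2((0,+\infty))} = (\lambda t)^{-1/6} \| \partial_p \tilde u(\lambda t) \|_{L^2((0,+\infty))}, \]
where the second identity uses $\partial_p \tilde u_\lambda(t,p) = \lambda^{-1/3} (\partial_p \tilde u)(\lambda t, \lambda^{-1/3} p)$ together with the change of variables $q = \lambda^{-1/3} p$. Summing both terms gives the claimed identity.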
If $u$ is a space-time function defined on a time interval $I \subset (0,+\infty)$, we extend the above definition and denote
\[ \| u \|_{\q E(I)} := \sup_{t \in I} \| u(t) \|_{\q E(t)} = \sup_{t \in I} \left( \| \tilde u(t) \|_{L^\infty(\m R)} + t^{-1/6} \| \partial_p \tilde u(t) \|_{L^2((0,+\infty))} \right). \]
In the same spirit, we define the functional space
\[ \q E(1): = \{ u \in \cal S'(\m R) : \| u \|_{\q E(1)} < +\infty \}, \]
and for $I \subset (0,+\infty)$,
\begin{align} \label{def_E}
\q E (I) = \{ u: I \to \cal S'(\m R): \tilde u \in \q C(I, \q C_b((0,+\infty))), \partial_p \tilde u \in L^\infty(I,L^2((0,+\infty))) \},
\end{align}
endowed with the norm $\| \cdot \|_{\q E(I)}$.
\subsection{Main results}
We can now state our results. The main one is a local well-posedness result in the space $\q E(I)$, for initial data $u_1 \in \q E(1)$ prescribed at time $t=1$.
\begin{thm} \label{th1}
Let $u_1 \in \q E(1)$. Then there exist $T>1$ and a solution $u \in \q E([1/T,T])$ to \eqref{mkdv} such that $u(1) = u_1$.
Furthermore, one has forward uniqueness. More precisely, let $0 < t_1<t_2$, and let $u$ and $v$ be two solutions to \eqref{mkdv} such that $u, v \in \q E([t_1,t_2])$. If $u(t_1) = v(t_1)$, then for all $t \in [t_1,t_2]$, $u(t) = v(t)$.
\end{thm}
For small data in $\q E(1)$, the solution is actually defined for large times, and one can describe the asymptotic behavior. This is the content of our second result.
\begin{thm} \label{th2}
There exists $\delta >0$ small enough such that the following holds.
If $\| u_1 \|_{\q E(1)} \le \delta$, the corresponding solution satisfies $u \in \q E([1,+\infty))$. Furthermore, let $S$ be the self-similar solution such that
\[ \hat {S}(1, 0^+) = \hat u_1(0^+) \in \m C. \]
Then $\| u(t) - S(t) \|_{L^\infty} \lesssim \| u_1 \|_{\q E(1)} t^{-5/6^-}$, and there exists a profile $U_\infty \in \q C_b(\m R \setminus \{ 0 \}, \m C)$ such that $|U_\infty(0^+)| = \lim_{p \to +\infty} |\hat {S}(1,p)|$ (this limit being well defined), and
\[ \left| \tilde u(t,p) - U_\infty(p)\exp\left(-\frac{i\epsilon}{4\pi}|U_\infty(p)|^2\log t\right) \right| \lesssim \frac{\delta}{\jap{p^3t}^{\frac{1}{12}}} \| u_1 \|_{\q E(1)}. \]
\end{thm}
As a consequence, one obtains the asymptotics in physical space.
\begin{cor} \label{cor_th2}
We use the notation of Theorem \ref{th2}, and let
\[ y= \begin{cases}
\sqrt{-x/3t},& \text{if } x<0, \\
0,& \text{if } x>0.
\end{cases} \]
One has, for all $t \geqslant 1$ and $x \in \m R$,
\begin{align}
\left|u(t,x)-\frac{1}{t^{1/3}}\mathrm{Re}\, \Ai\left(\frac{x}{t^{1/3}}\right)U_\infty\left(y\right)\exp\left(-\frac{i\epsilon}{6}|U_\infty(y)|^2\log t\right)\right| \lesssim \frac{\delta}{t^{1/3}\jap{x/t^{1/3}}^{3/10}}.
\end{align}
\end{cor}
\subsection{Outline of the proofs, comments and complementary results}
In proving Theorems \ref{th1} and \ref{th2}, we use a framework derived from the work of Hayashi and Naumkin \cite{HN01}, improved so that only critically invariant quantities are involved (see Section 3). In particular, we use very similar multiplier identities and vector field estimates. An important new difficulty, though, is that in order to perform such energy-type inequalities, the precise algebraic structure of the problem has to be respected (for example, in integrations by parts): it seems that one cannot use a perturbative argument such as a fixed point, as the method truly requires nonlinear solutions. On the other hand, the rigorous derivation of such inequalities at our level of regularity is quite nontrivial.
This problem does not appear in \cite{HN01}, as the authors work in a (weighted) subspace of $H^1$, for which a nice local (and global) well-posedness result holds (\eqref{mkdv} is actually well-posed in $H^s$ for $s \geqslant 1/4$, see \cite{KPV93}). However, no nontrivial self-similar solution belongs to these spaces, as can be seen from the lack of decay for large $p$ in \eqref{eq:ss_hf}. Let us also mention the work by Grünrock and Vega \cite{GV09}, where local well-posedness is proved in
\[ \widehat{H}^s_r = \{ u \in \cal S'(\m R) : \| \jap{p}^s \hat u \|_{L^{r'}} < +\infty \} \quad \text{for } 1 < r \le 2, \ s \geqslant \frac 1 2 - \frac{1}{2r}. \]
This framework is not suitable for our purpose: self-similar solutions belong to $\widehat{H}^0_1$ but to no better space.
In looking for a remedy, let us emphasize again that, due to the jump at frequency $0$ for self-similar solutions displayed in \eqref{eq:ss_lf}, one must take extra care in the choice of the functional setting. In particular, smooth functions \emph{are not dense} in $\q E$ spaces (and they cannot approximate self-similar solutions).
In a nutshell, we face antagonistic problems coming from low and high frequencies, and we were fortunate enough to be able to handle both simultaneously.
An important part of this paper is devoted to first solving an amenable approximate problem (in Section 4), for which we then derive uniform estimates in the spirit of \cite{HN01}. This approximate problem is actually a variant of the Friedrichs scheme, where we filter out high frequencies via a cut-off function $\chi_n$ (in Fourier space). We solve it via a fixed point argument: the cut-off takes care of the lack of decay for large frequencies, but again, smooth functions are not dense in the space $X_n$ where the fixed point is found ($X_n$ is a version of $\q E$ where high frequencies are tamed, but the jump at frequency $0$ remains).
In order to obtain uniform estimates, due to the absence of decay for large frequencies of self-similar solutions, boundary terms cannot be neglected -- unless the cut-off function $\chi_n$ is chosen in a very particular way.
At this point, we pass to the limit in $n$ (Section 5), and a delicate but standard compactness argument allows us to prove the existence part of Theorem \ref{th1} and Theorem \ref{th2}. The description for large time (the second part of Theorem \ref{th2} and Corollary \ref{cor_th2}) is then a byproduct of the above analysis.
The forward uniqueness result given in Theorem \ref{th1} requires a different argument. We consider the variation of a localized $L^2$ norm of the difference $w$ of two solutions. Our solutions do not belong to $L^2$, but we make use of an improved decay of functions in $\q E(I)$ on the right (for $x >0$): in this region, one has a decay of order $\jap{x}^{-3/4}$ and therefore they belong to $L^2([0,+\infty))$. The use of a cut-off $\varphi$ which vanishes for $x \ll -1$ allows us to make sense of the $L^2$ quantity.
When computing the derivative of this quantity, one bad term cannot be controlled a priori. Fortunately, if $\varphi$ is furthermore chosen to be nondecreasing, this bad term has a sign, and can be discarded as long as one works forward in time (which explains the one-sided result). This is related to a monotonicity property first observed and used by Kato \cite{Kato83}, and a key feature in the study of the dynamics of solitons by Martel and Merle \cite{MM00}. We can then conclude the uniqueness property via a Gronwall-type argument.
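To illustrate the mechanism (a formal computation, ignoring here the justification of the integrations by parts at our level of regularity), if $w = u - v$ solves $\partial_t w + \partial_{xxx} w + \epsilon\, \partial_x(u^3 - v^3) = 0$, then for a smooth weight $\varphi$,
\begin{align*}
\frac{d}{dt} \frac12 \int w^2 \varphi \, dx = - \frac{3}{2} \int w_x^2\, \varphi' \, dx + \frac12 \int w^2 \varphi''' \, dx + \epsilon \int (u^3 - v^3)\, \partial_x( w \varphi) \, dx.
\end{align*}
The first term on the right-hand side is the bad one (it involves $w_x$, which is not controlled), but it is nonpositive as soon as $\varphi' \geqslant 0$, and can therefore be dropped when estimating from above.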
Using the forward uniqueness property, we can improve the continuity properties of the solution $u$: the derivative of its Fourier transform is right-continuous in $L^2$; see Proposition \ref{prop:cont_u} for the details.
Backward uniqueness for solutions in $\q E$ remains an open problem. One can recover it under some extra decay assumption, namely that $u_1 \in L^2(\m R)$ (of course this is no longer a critical space). This is the content of our next result, proved in Section \ref{sec:6}.
\begin{prop} \label{prop:L2}
Let $u_1 \in \q E(1) \cap L^2(\m R)$. Then the solution $u \in \q E([1/T,T])$ to \eqref{mkdv} given by Theorem \ref{th1} is unique and, furthermore, there is persistence of regularity: $u \in \q C([1/T,T],L^2(\m R))$.
\end{prop}
The stability of self-similar solutions at the blow-up time $t=0$, or more generally the behavior near $t=0$ of solutions with initial data in $\q E(1)$, is a challenging question. In this direction, let us present two results which follow from the tools developed for Theorem \ref{th1}.
The first one, which we prove in Section \ref{sec:7}, is that we can construct solutions to \eqref{mkdv} with a prescribed self-similar profile as $t \to 0^+$.
\begin{prop}[Blow-up solutions with a given profile]\label{prop:blowupgivenprofile}
For $\delta$ sufficiently small, given $g_0\in \cal S'(\m R)$ with $\|g_0\|_{\q E(1)} < \delta$, there exists a solution $u \in \q E((0,+\infty))$ of \eqref{mkdv} such that
\begin{equation}\label{eq:taxablowup}
\forall t >0, \quad \left\|t^{1/3}u(t,t^{1/3}x) -\widehat{g_0}(x)\right\|_{H^{-1}(\m R)} \lesssim \delta t^{1/3}.
\end{equation}
\end{prop}
The second result is concerned with the stability of the self-similar blow up. Even the description of the effect of small and smooth perturbations of self-similar solutions \emph{for small times} is not trivial. For example, consider the toy problem of the linearized equation
\[ \partial_t v + \partial_{xxx} v + \epsilon\,\partial_x (K^2 v) =0 \]
near the fundamental solution $K(t,x) = t^{-1/3} \Ai(t^{-1/3} x)$ of the linear Korteweg-de Vries equation
(which is, in some sense, the self-similar solution of the linear problem). The most natural move is to use the estimates of Kenig, Ponce and Vega \cite{KPV00}, which allow one to recover the loss of a derivative:
\[ \| v \|_{L^\infty_t L^2_x} \lesssim \| v(0) \|_{L^2_x} + \| K^2 v \|_{L^1_x L^2_t}. \]
Now one can essentially only use the Hölder estimate
\[ \| K^2 v \|_{L^1_x L^2_t} \le \| K \|_{L^4_x L^\infty_t}^2 \| v \|_{L^2_{x,t}}, \]
but, due to the slow decay for $x \ll -1$, $K (t) \notin L^4_x$ for any $t$, and the argument cannot be closed.
We can however prove a stability result for self-similar solutions up to the blow-up time, for low-frequency perturbations. Given $\alpha>0$ and a sequence $(a_k)_{k\in\m N_0} \subset \m R^+$ satisfying
\[ a_0 = a_1=1,\quad \text{and for all } k \geqslant 0, \quad a_{k}\le \alpha a_{2k+1}, \]
let us define the remainder space
\begin{equation} \label{def:R_a}
\q R_\alpha=\left\{ w \in \q C^\infty(\m R): \sup_{k\geqslant 0} a_k\|\partial_x^k w\|_{L^2}^2 < \infty \right\}
\end{equation}
endowed with the norm
\begin{equation}
\| w \|_{\q R_\alpha} = \left( \sup_{k\geqslant 0} a_k\|\partial_x^k w \|_{L^2}^2 \right)^{1/2}.
\end{equation}
\begin{prop}[Stability of the self-similar blow-up under $\q R_\alpha$-perturbations] \label{prop:stab}
There exists $\delta>0$ sufficiently small such that, if $w_1\in \q R_\alpha$ and $S$ is a self-similar solution with
\[ \|w_1\|_{\q R_\alpha}^2 + \alpha\|S(1)\|_{\q E(1)}^2 <\delta, \]
then the solution $u$ of \eqref{mkdv} with initial data $u_1=S(1)+w_1$ is defined on $(0,1]$ and
\[ \sup_{t\in (0,1)} \|u(t)-S(t)\|_{\q R_\alpha}^2 < 2\delta. \]
\end{prop}
Obviously, we have shrunk the critical space considerably by taking smooth perturbations of self-similar solutions, but the above result still shows some kind of stability of the self-similar blow up; observe in particular that the blow-up time is \emph{not} affected by the perturbation. The study of $\q R_\alpha$ perturbations is carried out in Section \ref{sec:9}.
\section{Preliminary estimates} \label{sec:3}
Throughout this section, $I \subset (0,+\infty)$ is an interval.
\begin{lem}[Decay estimates]\label{lema:decayE}
Let $u \in \q C(I,\cal S')$ be such that $\| u \|_{\q E(I)} <+\infty$. Then $u \in \q C(I, L^\infty_{\mathrm{loc}}(\m R))$ and, more precisely, for $t \in I$ and $x \in \m R$, one has
\begin{align}\label{eq:est1lem1}
|u(t,x)| & \lesssim \frac{1}{t^{1/3}\jap{|x|/t^{1/3}}^{1/4}} \| u(t) \|_{\q E(t)} \\
\label{eq:est2lem1}
|\partial_x u(t,x)| &\lesssim \frac{1}{t^{2/3}} \jap{|x|/t^{1/3}}^{1/4} \| u(t) \|_{\q E(t)}.
\end{align}
Consequently,
\begin{align} \label{eq:est4lem1}
\|u(t)\|_{L^6}^3 &\lesssim t^{-\frac{5}{6}} \| u(t) \|_{\q E(t)}^3 \\
\label{eq:est5lem1}
\|u(t) \partial_x u(t)\|_{L^\infty} &\lesssim t^{-1} \| u(t) \|_{\q E(t)}^2.
\end{align}
Moreover, for $x>t^{1/3}$,
\begin{align} \label{eq:est3lem1}
|u(t,x)| & \lesssim \frac{1}{t^{1/3}\jap{x/t^{1/3}}^{3/4}} \| u(t) \|_{\q E(t)}, \\
|\partial_x u(t,x)| & \lesssim \frac{1}{t^{2/3}\jap{x/t^{1/3}}^{1/4}} \| u(t) \|_{\q E(t)},
\end{align}
and for $x<-t^{1/3}$,
\begin{equation}\label{eq:est6lem1}
\left| u(t,x)- \frac{1}{t^{1/3}}\mathrm{Re}\, \Ai \left(\frac{x}{t^{1/3}}\right) \tilde{u} \left( t,\sqrt{\frac{|x|}{3t}} \right) \right|\lesssim \frac{1}{t^{1/3}\jap{|x|/t^{1/3}}^{3/10}} \| u(t) \|_{\q E(t)}.
\end{equation}
\end{lem}
\begin{proof}
The statement and proof are very similar to Lemma 2.1 in \cite{HN01}; notice, however, that the norm $\|\cdot\|_{\textbf{X}}$ therein is stronger than ours, so that we in fact need to systematically improve their bounds. For the convenience of the reader, we provide a complete proof.
We recall that $\tilde u(t)$ is not continuous at $0$, and may (and will) have a jump, because we only control $\| \partial_p \tilde u \|_{L^2((0,+\infty))}$; in the following computations $\tilde u(t,0)$ will mean the limit $\tilde u(t,0^+)$. Setting
\[ z = \frac{x}{\sqrt[3]{t}},\quad y=\sqrt{-\frac{x}{3t}}\ \text{ for }x\le 0,\quad y=0\ \text{ for }x>0, \]
we have the identity
\begin{equation}gin{align}
u(t,x) & =\frac{1}{\pi}\mathrm{Re} \int_0^\infty e^{ipx+ip^3t} \tilde u (t,p)dp, \quad q = p \sqrt[3]t \nonumber \\
& = \frac{1}{\pi\sqrt[3]{t}}\mathrm{Re} \int_0^\infty e^{iqz + iq^3}\leqslantfteft( \tilde u(t,y)+\leqslantfteft( \tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u (t,y)\rightight)\rightight) dq \nonumber \\
& = \frac{1}{\sqrt[3]{t}}\mathrm{Re} \Ai \leqslantfteft(\frac{x}{\sqrt[3]{t}}\rightight) \tilde u(t,y) + R(t,x). \leqslantftabel{eq:u_R}
\varepsilonnd{align}
In the case $x\geqslant 0$, we integrate by parts in the remainder $R$:
\begin{equation}gin{align*}
R(t,x) & = \frac{1}{\pi\sqrt[3]{t}} \mathrm{Re} \int_0^\infty \partial_q\leqslantfteft(qe^{iqz+iq^3}\rightight)\frac{1}{1+iq(3q^2+z)}\leqslantfteft(\tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u(t,y)\rightight) dq \\
& = \frac{1}{\pi\sqrt[3]{t}} \mathrm{Re} \int_0^\infty \frac{e^{iq z + iq^3}}{1+iq(3q^2+ z )} \\
& \qquad \times \Bigg(\frac{iq(6q^2+ z )}{1+iq(3q^2+ z )}\leqslantfteft(\tilde u\leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u(t,y)\rightight)- \frac{q}{\sqrt[3]{t}} \partial_p \tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)\Bigg)dq.
\varepsilonnd{align*}
Since
\[ |\tilde u(t,p)-\tilde u(t,0)|\leqslantfte \leqslantfteft|\int_0^p \partial_p \tilde u(t,q)dq\rightight| \leqslantfte \sqrt{p}\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}, \]
we can estimate the remainder in the following way:
\begin{equation}gin{align*}
|R(t,x)| & \leqslantftesssim \frac{1}{\sqrt[3]{t}} \int_0^\infty \frac{1}{1+q(3q^2+ z )} \leqslantfteft( \leqslantfteft| \tilde u \leqslantfteft( t,\frac{q}{\sqrt[3]{t}}\rightight) - \tilde u(t,0) \rightight| + \frac{q}{\sqrt[3]{t}} \leqslantfteft| \partial_p \tilde u \leqslantfteft( t,\frac{q}{\sqrt[3]{t}} \rightight) \rightight| \rightight) dq \\
& \leqslantftesssim \frac{1}{\sqrt{t}} \| \partial_p \tilde u(t) \|_{L^2} \int_0^\infty\frac{\sqrt{q}dq}{1+q(3q^2+ z )} \\
& \qquad + \frac{1}{\sqrt[3]{t^2}}\leqslantfteft(\int_0^\infty \leqslantfteft| \partial_p \tilde u\leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)\rightight|^2 dq\rightight)^{\frac{1}{2}} \leqslantfteft(\int_0^\infty \frac{q^2 dq}{(1+q(3q^2+ z ))^2}\rightight)^{\frac{1}{2}} \\
&\leqslantftesssim \frac{1}{\sqrt{t}} \leqslantfteft(1+\frac{|x|}{t^{1/3}}\rightight)^{-1/4} \|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}.
\varepsilonnd{align*}
In the case $x < 0$, we denote $r=\sqrt{- z /3}$. Integrating by parts, we get
\begin{equation}gin{align*}
R(t,x) & = \frac{1}{\pi\sqrt[3]{t}} \mathrm{Re} \int_0^\infty \partial_q\leqslantfteft((q-r)e^{iq z +iq^3}\rightight)\frac{1}{1+3i(q-r)^2(q+r)} \leqslantfteft( \tilde u\leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight) - \tilde u(t,y)\rightight) dq \\
& = -\frac{1}{\pi\sqrt[3]{t}}\mathrm{Re} \int_0^\infty \frac{e^{iq z +iq^3}}{1+3i(q-r)^2(q+r)} \Bigg( \frac{3i(q-r)^2(3q+r)}{1+3i(q-r)^2(q+r)}\leqslantfteft(\tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u(t,y)\rightight) \\
& \qquad \qquad + \frac{q-r}{\sqrt[3]{t}}\partial_p \tilde u\leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)\Bigg)dq - \frac{r}{\pi\sqrt[3]{t}}\mathrm{Re} \frac{\tilde u(t,0)-\tilde u(t,y)}{1+3ir^3}.
\varepsilonnd{align*}
Then we can estimate
\begin{equation}gin{align*}
|R(t,x)| & \leqslantftesssim \frac{1}{\sqrt[3]{t}}\int_0^\infty \frac{1}{1+(q-r)^2(q+r)}\leqslantfteft(\leqslantfteft| \tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u(t,y) \rightight| + \frac{|q-r|}{\sqrt[3]{t}}\leqslantfteft|\partial_p \tilde u\leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)\rightight|\rightight)dq \\
& \qquad\qquad + \frac{1}{\sqrt{t}\jap{r}}\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))} \\
&\leqslantftesssim \frac{1}{\sqrt{t}}\leqslantfteft( \int_0^\infty \frac{\sqrt{|q-r|}dq}{1+(q-r)^2(q+r)} + \leqslantfteft(\int_0^\infty \frac{(q-r)^2dq}{(1+(q-r)^2(q+r))^2} \rightight)^{1/2}\rightight) \\
& \qquad \qquad \times \| \partial_p \tilde u(t)\|_{L^2((0,+\infty))} + \frac{1}{\sqrt{t}\jap{r}}\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))} \\
& \leqslantftesssim \frac{1}{\sqrt{t}\sqrt{\jap{r}}} \|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}.
\varepsilonnd{align*}
It now follows from the decay of the Airy-Fock function $|\text{Ai}( z )|\leqslantftesssim \jap{ z }^{-\frac{1}{4}}$ that
\begin{equation}gin{align*}
|u(t,x)| & \leqslantftesssim t^{-1/3} \jap{\frac{x}{t^{1/3}}}^{-1/4} \leqslantfteft(\|\tilde u(t)\|_{L^\infty} + t^{-\frac{1}{6}}\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))} \rightight) \\
& \leqslantftesssim \frac{1}{\sqrt[3]{t} \jap{x/t^{1/3}}^{1/4}} \| u(t) \|_{\q E(t)}.
\varepsilonnd{align*}
This concludes the proof of \varepsilonqref{eq:est1lem1}. For \varepsilonqref{eq:est2lem1}, we split once again between the cases $x\geqslant 0$ and $x < 0$. In the second case, we have as in \varepsilonqref{eq:u_R}
\begin{equation}gin{gather*}
\partial_x u(t,x) = \frac{1}{\sqrt[3]{t^2}} \mathop{\mathrm{Re}} \Ai' \leqslantfteft( \frac{x}{\sqrt[3]t} \rightight) \tilde u(t,y) + \tilde R(t,x), \quad \text{with} \\
\tilde R(t,x) := \frac{1}{\pi\sqrt[3]{t^2}}\mathrm{Re} \int_0^\infty iq e^{iq z + iq^3} \leqslantfteft( \tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u (t,y)\rightight) dq .
\varepsilonnd{gather*}
Analogous computations done for $R$ yield
\begin{equation}gin{align*}
|\tilde R(t,x)|&\leqslantftesssim \frac{1}{\sqrt[3]{t^2}}\int_0^\infty \frac{q}{1+(q-r)^2(q+r)}\leqslantfteft(\leqslantfteft| \tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)-\tilde u(t,y) \rightight| + \frac{|q-r|}{\sqrt[3]{t}}\leqslantfteft|\partial_p \tilde u\leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)\rightight|\rightight) dq \\
& \leqslantftesssim \frac{\|\tilde u(t)\|_{L^\infty}}{\sqrt[3]{t^2}}\int_0^\infty \frac{qdq}{1+3(q-r)^2(q+r)} \\
& \qquad + \frac{\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}}{\sqrt[6]{t^5}} \leqslantfteft(\int_0^\infty \frac{q^2(q-r)^2dq}{(1+3(q-r)^2(q-r))^2}\rightight)^{\frac{1}{2}} \\
&\leqslantftesssim \frac{1}{t^{2/3}}\leqslantfteft(1+\frac{|x|}{t^{1/3}}\rightight)^{1/4}\leqslantfteft(\|\tilde u(t)\|_{L^\infty} + t^{-\frac{1}{6}}\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))} \rightight) \\
& \leqslantftesssim \frac{1}{t^{2/3}} \jap{\frac{x}{t^{1/3}}}^{1/4} \|u(t) \|_{\q E(t)},
\varepsilonnd{align*}
and the bound for $\partial_x u$ follows from the bound on the Airy-Fock function $|\Ai '( z )| \leqslantftesssim \jap{ z }^{\frac{1}{4}}$.
For $x\geqslant 0$, we write
\begin{equation}gin{align*}
\partial_x u(t,x) & = \frac{1}{\pi\sqrt[3]{t^2}}\mathrm{Re} \int_0^\infty ie^{iq z + iq^3}q\tilde u \leqslantfteft(t,\frac{q}{\sqrt[3]{t}}\rightight)dq \\
& = \frac{1}{\pi\sqrt[3]{t^2}} \mathrm{Re} \int_0^\infty ie^{iq z + iq^3} q \leqslantfteft(\tilde u(t,0) + \int_0^q \partial_p \tilde u\leqslantfteft(t,\frac{r}{\sqrt[3]{t}}\rightight)\frac{dr}{\sqrt[3]{t}} \rightight) dq\\
& = \frac{1}{\pi t}\mathrm{Re} \int_0^\infty \leqslantfteft(\int_r^\infty ie^{iq z + iq^3}q dq\rightight)\partial_p \tilde u\leqslantfteft(t,\frac{r}{\sqrt[3]{t}}\rightight)dr.
\varepsilonnd{align*}
Applying Cauchy-Schwarz, and as for $z,r >0$, we have
\[ \int_r^\infty e^{iq z + iq^3}q dq \leqslantftesssim \frac{1}{z+r^2}, \]
we obtain
\begin{equation}gin{align*}
|\partial_x u(t,x)| & \leqslantftesssim \frac{1}{\sqrt[6]{t^5}}\|\partial_p \tilde u(t)\|_{L^2((0,+\infty))} \leqslantfteft\|\int_r^\infty e^{iq z + iq^3}q dq \rightight\|_{L^2((0,\infty),dr)} \\
& \leqslantftesssim \frac{1}{\sqrt[3]{t^2}}\jap{ z }^{-\frac{1}{4}} \|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}.
\varepsilonnd{align*}
Hence \varepsilonqref{eq:est2lem1} follows. The estimate for $\partial_x u$ in \varepsilonqref{eq:est3lem1} is also a consequence of the above estimate.
Now we prove the first estimate in \varepsilonqref{eq:est3lem1}. To that end, we integrate by parts the expression for $u$:
\begin{equation}gin{align*}
u(t,x) & = \frac{1}{\pi}\int_0^\infty e^{ipx+ip^3t}\tilde u(t,p)dp = \frac{\tilde u(t,0)}{\pi x}-\frac{1}{\pi}\int_0^\infty e^{ipx+ip^3t}\partial_p\leqslantfteft(\frac{\tilde u(t,p)}{x+3p^2t}\rightight)dp \\ & = \frac{\tilde u(t,0)}{\pi x}-\frac{1}{\pi}\int_0^\infty e^{ipx+ip^3t}\frac{\partial_p \tilde u(t,p)}{x+3p^2t}dp + \int_0^\infty e^{ipx+ip^3t}\frac{6pt \tilde u(t,p)}{(x+3p^2t)^2}dp.
\varepsilonnd{align*}
The first and third terms are bounded directly, while the second term is bounded using Cauchy-Schwarz:
\begin{equation}gin{align*}
\MoveEqLeft \leqslantfteft|\int_0^\infty e^{ipx+ip^3t}\frac{\partial_p \tilde u(t,p)}{x+3p^2t}dp \rightight| \leqslantftesssim \|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}\leqslantfteft(\int_0^\infty \frac{dp}{(x+3p^2t)^2} \rightight)^{1/2}\\
&\leqslantftesssim t^{1/6}\frac{1}{t^{1/4}x^{3/4}} \|\partial_p \tilde u(t)\|_{L^2((0,+\infty))} \leqslantftesssim \frac{1}{t^{1/3}\jap{x/t^{1/3}}} \|\partial_p \tilde u(t)\|_{L^2((0,+\infty))}.
\varepsilonnd{align*}
Finally, estimate \varepsilonqref{eq:est6lem1} follows from \cite[Lemma 2.9]{GPR16}. For completeness, we present the proof: define $\varepsilonll_0\in \mathbb{Z}$ so that
\[ 2^{\varepsilonll_0}\sim t^{-1/3}(|x|/t^{1/3})^{-1/5}. \]
We split the estimate for
\[ R(t,x)= \frac{1}{\pi}\text{Re }\int_0^{\infty} e^{it\Phi(p)}(\tilde u(t,p)-\tilde u(t,y))dp, \quad \Phi(p)=\frac{x}{t}p + p^3, \]
in three regions, using appropriate cut-off functions $\chi_A+\chi_B+\chi_C=1$:
\varepsilonmph{Region A: $|p-y|\geqslant y/2$}. Over this region, $\partial_p\Phi(p)\gtrsim \max\{ y, p\}$. Then an integration by parts yields
\begin{equation}gin{align*}
&\leqslantfteft| \int_0^\infty e^{it\Phi(p)}(\tilde u(t,p)-\tilde u(t,y))\chi_A(p) dp\rightight|
\leqslantftesssim \frac{1}{t^{1/3}(|x|/t^{1/3})^{3/4}} \| u(t) \|_{\q E(t)}.
\varepsilonnd{align*}
\varepsilonmph{Region B: $|p-y|\geqslant 2^{\varepsilonll_0}$}. If $|p-y|\sim 2^l$, with $l\geqslant \varepsilonll_0$, then $|\partial_p\Phi(p)|\gtrsim 2^ly$ and the same integration by parts gives
\begin{equation}gin{align*}
\leqslantfteft(\text{contribution of }|p-y|\sim 2^l\rightight) \leqslantftesssim \leqslantfteft(\frac{1}{t^{5/6}2^{l/2}y} + \frac{1}{t2^{l}y}\rightight) \| u(t) \|_{\q E(t)}.
\varepsilonnd{align*}
Summing in $l\geqslant \varepsilonll_0$,
\begin{equation}gin{align*}
\leqslantfteft|\int_0^\infty e^{it\Phi(p)}(\tilde u(t,p)-\tilde u(t,y))\chi_B(p) dp\rightight|
& \leqslantftesssim \leqslantfteft(\frac{1}{t^{5/6}2^{\varepsilonll_0/2}y} + \frac{1}{t2^{\varepsilonll_0}y}\rightight) \| u(t) \|_{\q E(t)} \\
& \leqslantftesssim\frac{1}{t^{1/3}(|x|/t^{1/3})^{3/10}} \| u(t) \|_{\q E(t)}.
\varepsilonnd{align*}
\varepsilonmph{Region C: $|p-y|\leqslantfte 2^{\varepsilonll_0}$}. We decompose the integral as
\begin{equation}gin{align*}
\MoveEqLeft \int_0^{\infty} e^{it\Phi(p)}(\tilde u(t,p)-\tilde u(t,y))\chi_C(p)dp \\
& = \int_0^\infty e^{it\Phi(p)-it(\Phi(y) + 3y(p-y)^2)}(\tilde u(t,p)-\tilde u(t,y))\chi_C(p)dp \\
& \quad + e^{it\Phi(y)}\int_0^\infty e^{3ity(p-y)^2}(\tilde u(t,p)-\tilde u(t,y))\chi_C(p)dp \\
& = I_1+ I_2.
\varepsilonnd{align*}
Since $|\tilde u(t,p)-\tilde u(t,y)|\lesssim t^{1/6}|p-y|^{1/2} \| u(t) \|_{\q E(t)}$, one easily bounds these integrals:
\begin{align*}
| I_1 | & \lesssim t2^{4\ell_0}\| u(t) \|_{\q E(t)} \lesssim t^{-1/3}(|x|/t^{1/3})^{-4/5} \| u(t) \|_{\q E(t)} \\
| I_2 | &\lesssim t^{1/6}2^{3\ell_0/2} \| u(t) \|_{\q E(t)} \lesssim t^{-1/3}(|x|/t^{1/3})^{-3/10} \| u(t) \|_{\q E(t)}. \qedhere
\end{align*}
\end{proof}
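As an illustration (an immediate consequence, not used as such below): since self-similar solutions have constant $\q E(t)$ norm, \eqref{eq:est1lem1} applied to $u=S$ gives, for all $t>0$ and $x \in \m R$,
\[ |S(t,x)| \lesssim \frac{1}{t^{1/3}\jap{|x|/t^{1/3}}^{1/4}} \| S(1) \|_{\q E(1)}, \]
which is the expected self-similar decay, uniform down to the blow-up time $t = 0^+$.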
Let $u \in \q C(I,\cal S')$ be a solution to \eqref{mkdv} in the distributional sense. Taking the Fourier transform of
\[ (\partial_t + \partial_{xxx}) u = -\epsilon\,\partial_x (u^3) \]
and using $\tilde{u}(t,p)=e^{-itp^3}\hat{u}(t,p)$, one obtains
\begin{equation}
\partial_t \tilde{u}(t,p) + \frac{i\epsilon p}{4\pi^2}\iint_{p_1+p_2+p_3=p} e^{-it(p^3-p_1^3-p_2^3-p_3^3)}\tilde{u}(t,p_1)\tilde{u}(t,p_2)\tilde{u}(t,p_3)dp_1dp_2 = 0.
\end{equation}
This leads us to define (with the change of variables $p_i = pq_i$)
\begin{equation}
\cal N[u](t,p)= ip^3\iint_{q_1+q_2+q_3=1} e^{-itp^3(1-q_1^3-q_2^3-q_3^3)}\tilde{u}(t,pq_1)\tilde{u}(t,pq_2)\tilde{u}(t,pq_3)dq_1dq_2,
\end{equation}
so that
\begin{align}\label{eq:tildeu}
\partial_t \tilde{u}(t,p)&=-\frac{\epsilon}{4\pi^2}\cal N[u](t,p).
\end{align}
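Filling in the elementary computation behind this change of variables (for $p>0$): on the hyperplane $p_1+p_2+p_3=p$, parametrized by $(p_1,p_2)$, the substitution $p_i = p q_i$ gives $dp_1\, dp_2 = p^2\, dq_1\, dq_2$ and $p^3 - p_1^3 - p_2^3 - p_3^3 = p^3(1 - q_1^3 - q_2^3 - q_3^3)$, so that
\[ \frac{i\epsilon p}{4\pi^2}\iint_{p_1+p_2+p_3=p} e^{-it(p^3-p_1^3-p_2^3-p_3^3)}\,\tilde{u}(t,p_1)\tilde{u}(t,p_2)\tilde{u}(t,p_3)\,dp_1dp_2 = \frac{\epsilon}{4\pi^2}\,\cal N[u](t,p), \]
which is the form used in \eqref{eq:tildeu}.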
The following result is a stationary phase lemma for $\cal N[u]$. Similar statements may be found in \cite[Lemma 2.4]{HN01} and \cite{GPR16}.
\begin{lem}[Asymptotics of the nonlinearity on the Fourier side]\label{lem:desenvolveoscilatorio}
Let $u \in \q C(I,\cal S')$ be such that $\| u \|_{\q E(I)} <+\infty$. One has the following asymptotic expansion for $\cal N[u]$: for all $t \in I$ and $p >0$,
\begin{align} \label{eq:N_expansion}
\cal N[u](t,p)&=\frac{\pi p^3}{\jap{p^3t}}\left(i|\tilde{u}(t,p)|^2\tilde{u}(t,p) - \frac{1}{\sqrt{3}}e^{-\frac{8itp^3}{9}}\tilde{u}^3\left(t,\frac{p}{3}\right) \right)+ R[u](t,p),
\end{align}
where the remainder $R$ satisfies the bound
\begin{align} \label{eq:N_remainder}
|R[u](t,p)| \lesssim \frac{p^3\|u(t) \|^3_{\q E(t)}}{(p^3t)^{5/6}\jap{p^3t}^{1/4}}.
\end{align}
\end{lem}
\begin{proof}
This relies essentially on a stationary phase type argument. We must however emphasize that the computations and the estimates of the errors have to be performed very carefully, because our setting allows few integrations by parts and the functions have limited spatial decay. We postpone the proof to Appendix A.
\end{proof}
\begin{lem}\label{lem:desenvassimpt}
Let $I \subset (0,+\infty)$ be an interval and $t_1 \in I$. Let $u \in \q C(I,\cal S')$ be a solution to \eqref{mkdv} in the distributional sense such that $\| u \|_{\q E(I)} <+\infty$.
Then, for some universal constant $C$ (independent of $I$), and for all $t \in I$,
\begin{align*}
\|\tilde{u}(t)\|_{L^\infty} & \le\|\tilde{u}(t_1)\|_{L^\infty} + C(\|u\|_{\q E(I)}^3 + \|u\|_{\q E(I)}^5), \\
\| \partial_t \tilde u(t) \|_{L^\infty} & \lesssim \frac{1}{t} \| u(t) \|_{\q E(t)}^3.
\end{align*}
Furthermore, if we denote
\begin{align} \label{def:E_u}
E_u(t,p) :=\exp\left(-i\epsilon\int_{t_1}^t\frac{p^3}{4\pi \jap{p^3s}}|\tilde{u}(s,p)|^2ds\right),
\end{align}
one has, for all $t, \tau \in I$,
\begin{equation}\label{eq:decayuniformt}
|(\tilde{u}E_u)(t,p)-(\tilde{u}E_u)(\tau,p)| \lesssim (\|u\|_{\q E(I)}^3 + \|u\|_{\q E(I)}^5)\min\left( \frac{|t-\tau|}{\jap{p^3}},\frac{1}{\jap{\tau p^3}^{\frac{1}{12}}} \right).
\end{equation}
\end{lem}
\begin{equation}gin{proof}
Since $\tilde{u}(t,-p)=\overline{\tilde{u}(t,p)}$, it suffices to consider $p>0$. Using Lemma \rightef{lem:desenvolveoscilatorio},
\begin{equation}gin{align}
\partial_t \tilde{u} (s,p) & = -\frac{\varepsilonpsilon}{4\pi}\frac{p^3} {\jap{p^3s}}\leqslantfteft(\frac{1}{\sqrt{3}}e^{-\frac{8isp^3}{9}}\tilde{u}^3(s,p/3) - i|\tilde{u}(s,p)|^2\tilde{u}(s,p)\rightight)\leqslantftabel{eq:quaseperfil} \\
&+ O\leqslantfteft(\frac{p^3\|u\|_{\q E(t)}^3}{(p^3t)^{5/6}\jap{p^3t}^{1/4}}\rightight).
\varepsilonnd{align}
Denote $v(t,p)=\tilde{u}(t,p)E_u(t,p)$. Then the integration in time of \varepsilonqref{eq:quaseperfil} on $[\tau,t] \subset I$ yields
\begin{equation}gin{align}\leqslantftabel{eq:perfil}
v(t,p) & = v(\tau,p)-\varepsilonpsilon\int_{\tau}^t\frac{p^3}{4\pi\sqrt{3} \jap{p^3s}}e^{-\frac{8isp^3}{9}}E_u(s,p)\tilde{u}^3\leqslantfteft(s,\frac{p}{3}\rightight)ds \\
& \qquad+ O\leqslantfteft(p^3\|u\|_{\q E(I)}^3\int_{\tau}^t \frac{ds}{(p^3s)^{5/6}\jap{p^3s}^{1/4}}\rightight).
\varepsilonnd{align}
We claim that
\begin{equation}gin{equation}\leqslantftabel{eq:claim}
\leqslantfteft|\int_{\tau}^t\frac{p^3}{ \jap{p^3s}}e^{-\frac{8isp^3}{9}}E_u(s,p)\tilde{u}^3\leqslantfteft(s,\frac{p}{3}\rightight)ds \rightight| \leqslantftesssim (\|u\|_{\q E(I)}^3+\|u\|_{\q E(I)}^5) \min \leqslantfteft( \frac{1}{\jap{p^3\tau}}, \frac{|t-\tau|}{\jap{p^3}} \rightight)
\varepsilonnd{equation}
Indeed, integrating by parts,
\begin{equation}gin{align*}
\MoveEqLeft \int_\tau^t\frac{p^3}{ \jap{p^3s}}e^{-\frac{8isp^3}{9}}E_u(s,p)\tilde{u}^3\leqslantfteft(s,\frac{p}{3}\rightight)ds \\
& = \int_\tau^t \partial_s \leqslantfteft(s e^{-\frac{8isp^3}{9}}\rightight)\frac{1}{1-\frac{8is p^3}{9}}\frac{p^3}{ \jap{p^3s}}E_u(s,p)\tilde{u}^3\leqslantfteft(s,\frac{p}{3}\rightight) ds\\
& =-\int_{\tau}^{t} E_u(s,p)\frac{e^{-\frac{8isp^3}{9}}}{1-\frac{8is p^3}{9}}\Bigg(\tilde{u}^3\leqslantfteft(s, p/3\rightight) \frac{ O(p^6 s) }{\leqslantfteft(1-\frac{8isp^3}{9}\rightight) \jap{p^3s}} \\
& \qquad + \frac{3sp^3}{ \jap{p^3s}}\tilde{u}^2(s,p/3)\tilde{u}_s(s,p/3) + \frac{ip^3sp^3}{\jap{p^3s}^2}\tilde{u}^3(s,p/3)|\tilde{u}(s,p)|^2 \Bigg)ds \\
& \qquad + \leqslantfteft[\frac{E_u(s,p)e^{-\frac{8isp^3}{9}}\tilde{u}^3(s,p/3)}{1-\frac{8isp^3}{9}}\rightight]_{s=\tau}^{s=t}.
\varepsilonnd{align*}
From \varepsilonqref{eq:quaseperfil}, we have
\[ |\partial_t \tilde{u}(s,p)|\leqslantftesssim s^{-1} \| \tilde u (s) \|_{L^\infty}^3 \leqslantftesssim s^{-1} \|u\|_{\q E(s)}^3. \]
Taking absolute values in the above expression,
\begin{equation}gin{align*}
\leqslantfteft| \int_\tau^t\frac{p^3}{ \jap{p^3s}}e^{-\frac{8isp^3}{9}}E_u(s,p)\tilde{u}^3\leqslantfteft(s,\frac{p}{3}\rightight)ds\rightight| & \leqslantftesssim (\|u\|_{\q E([\tau,t])}^3+\|u\|_{\q E([\tau,t])}^5)\int_{\tau}^{t}\frac{p^3ds}{\jap{p^3s}^{2}} \\
& \leqslantftesssim (\|u\|_{\q E(I)}^3+\|u\|_{\q E(I)}^5) \min\leqslantfteft( \frac{1}{\jap{p^3\tau}}, \frac{|t-\tau|}{\jap{p^3}}\rightight)
\varepsilonnd{align*}
as claimed. We plug this estimate with $\tau = t_1$ in \varepsilonqref{eq:perfil},
\begin{equation}gin{align*}
\|\tilde{u}(t)\|_{L^\infty} = \| v(t)\|_{\infty} \leqslantfte \| v(t_1)\|_{L^\infty} + C(\|u\|_{\q E(I)} ^3 + \|u\|_{\q E(I)}^5).
\varepsilonnd{align*}
Estimate \varepsilonqref{eq:decayuniformt} follows from \varepsilonqref{eq:claim}:
\begin{equation}gin{align*}
|v(t,p) - v(\tau,p)| & \leqslantftesssim \leqslantfteft| \int_\tau^t\frac{p^3}{4\pi\sqrt{3} \jap{p^3s}}e^{-\frac{8isp^3}{9}}E_u(s,p)\tilde{u}^3\leqslantfteft(s,\frac{p}{3}\rightight)ds\rightight| \\
& \qquad + O\leqslantfteft(p^3\|u\|_{\q E(I)}^3\int_\tau^t \frac{ds}{(p^3s)^{5/6}\jap{p^3s}^{1/4}}\rightight)\\
& \leqslantftesssim (\|u\|_{\q E(I)}^3 + \|u\|_{\q E(I)}^5)\min\leqslantfteft(\frac{|t-\tau|}{\jap{p^3}},\frac{1}{\jap{\tau p^3}^{\frac{1}{12}}} \rightight). \qedhere
\varepsilonnd{align*}
\varepsilonnd{proof}
\section{Construction of an approximating sequence} \label{sec:4}
Let $(\chi_n)_{n \in\mathbb{N}}\subset \mathcal{S}(\m R)$ be a sequence of even decreasing functions such that
\begin{itemize}
\item for all $n\in \m N$, $0<\chi_n \le 1$ and $\chi_n^{1/2}\in \mathcal{S}(\m R)$,
\item for all $p \in \m R$, $\chi_n(p) \to 1$ as $n \to +\infty$,
\item $\displaystyle \sup_{p\in \m R} |p(\chi_n^{1/2})'(p)| \to 0$ as $n \to +\infty$.
\end{itemize}
The existence of such a sequence is not completely obvious; let us sketch how to construct one.
\begin{claim}
There exists a sequence $(\chi_n)_{n \in\mathbb{N}}$ satisfying the above conditions.
\end{claim}
\begin{proof}
Define the function $\varphi_n$ as follows: $\varphi_n$ is even and
\[ \varphi_n(p) = \begin{cases}
1 & \text{if } |p| \le n \\
1 - \frac{1}{n} \ln (p/n) & \text{if } n \le p \le \alpha_n \\
e^{-p} & \text{if } p \geqslant \alpha_n,
\end{cases}
\]
where $\alpha_n>0$ is chosen so that $\varphi_n$ is continuous, that is $\displaystyle 1 - \frac{1}{n} \ln \left( \frac{\alpha_n}{n} \right) = e^{-\alpha_n}$. One can check that $\alpha_n \in [n e^n-1, n e^n]$.
It follows that $0 < \varphi_n \le 1$, $\varphi_n$ is nonincreasing on $[0,+\infty)$, and $\sup_{p \in \m R} |p \varphi_n' (p)| = O(1/n)$. Then let $\psi \in \q D(\m R)$ be nonnegative, even and with $\| \psi \|_{L^1} =1$. One can check that $\chi_n := (\varphi_n * \psi)^2$ has all the required properties.
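Indeed (spelling out the elementary computation), $\varphi_n'(p) = -\frac{1}{np}$ on the intermediate region, so
\[ \sup_{n < p < \alpha_n} |p\, \varphi_n'(p)| = \frac{1}{n}, \qquad \sup_{p > \alpha_n} |p\, \varphi_n'(p)| = \sup_{p > \alpha_n} p\, e^{-p} = \alpha_n e^{-\alpha_n} \le n e^{n}\, e^{-(n e^n - 1)}, \]
while $\varphi_n' \equiv 0$ for $|p|<n$; this smallness survives the mollification by $\psi$, since $\psi$ is compactly supported and $\|\varphi_n'\|_{L^\infty} = O(1/n^2)$.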
\end{proof}
Define, for any $u\in \mathcal{S}'(\m R)$,
\[
\omegaidehat{\Pi_n u}(p) = \chi_n(p)\hat{u}(p).
\]
Throughout this section, we shall study the properties of the solutions of
\begin{equation}gin{equation}\tag{$\Pi_n$-mKdV}\leqslantftabel{pimkdv}
\begin{equation}gin{cases}
\partial_t u + \partial_{xxx} u + \varepsilonpsilon\Pi_n \partial_x (u^3) =0, \\
u(1)=\Pi_n u_1,
\varepsilonnd{cases}
\varepsilonnd{equation}
where $u_1 \in \q E(1)$ is given. Equivalently, we consider the equation
\begin{equation}gin{equation}
\partial_t \tilde{u} =-\frac{\varepsilonpsilon\chi_n}{4\pi^2}\cal N[u],\quad \tilde{u}(1)=\chi_n \tilde{u}_1.
\varepsilonnd{equation}
(with the slight abuse of notation $\tilde u_1 = \omegaidehat{ \mathcal{G}(-1) u_1}$). Define
\begin{equation}gin{equation}
\|u\|_{X_n(t)} :=\| \omegaidehat{\mathcal{G}(-t) u} \chi_n^{-1}\|_{L^\infty} + \leqslantfteft\| \partial_p ( \omegaidehat{\mathcal{G}(-t) u}) \chi_n^{-1/2} \rightight\|_{L^2((0,+\infty))}
\varepsilonnd{equation}
and the space
\[ X_n(t) := \leqslantfteft\{ u\in \mathcal{S}'(\m R): \| u \|_{X_n(t)} < \infty \rightight\}. \]
Similarly, if $I \subset (0,+\infty)$ is an interval and $u$ a space-time function, we denote
\[ \| u \|_{X_n(I)} := \sup_{t \in I} \| u(t) \|_{X_n(t)} = \sup_{t \in I} \| \tilde u(t) \chi_n^{-1}\|_{L^\infty} + \| \partial_p \tilde u(t) \chi_n^{-1/2} \|_{L^2((0,+\infty))}, \]
and
\[ X_n(I) := \leqslantfteft\{ u\in \q C(I,\mathcal{S}'(\m R)): \tilde u \chi_n^{-1} \in \q C(I, \q C_b((0,+\infty))),\ \partial_p \tilde u \chi_n^{-1/2} \in \q C(I, L^2((0,+\infty))) \rightight\}. \]
Observe that if $u \in \q E(1)$, then
\[ \|\Pi_n u_1\|_{X_n(1)} \leqslantfte \|u_1\|_{\q E(1)}. \]
\begin{prop} \label{prop:u_n}
Given any $u_1\in \q E(1)$, there exist $T_{-,n} <1$, $T_{+,n}>1$ and a unique maximal solution
$u_n \in X_n((T_{-,n},T_{+,n})) $ of \eqref{pimkdv}. Moreover, if $T_{+,n}<\infty$, then
\[ \lim_{t\to T_{+,n}} \|u_n(t)\|_{X_n(t)} = +\infty. \]
(A similar statement holds at $T_{-,n}$.)
In particular, $u_n \in \q E((T_{-,n},T_{+,n}))$.
\end{prop}
\begin{proof}
This is a standard fixed-point argument (in the estimates below, the implicit constants are allowed to depend on $n$). We work for times larger than $1$; the other case is similar. For $T>1$, $M>0$, let
\begin{equation}
B_{n}(T,M)=\left\{ u \in X_n([1,T]): \|u\|_{X_n([1,T])} \le M \right\}
\end{equation}
endowed with the natural distance
\[
d(u,v)= \|u-v\|_{X_n([1,T])},
\]
and
\[
(\Psi(u))(t,p)= \chi_n (p) \tilde{u}_1(p) -\frac{\varepsilonpsilon\chi_n(p)}{4\pi^2}\int_1^t \cal N[u](s,p)ds.
\]
Using the strong decay on the Fourier side, that is, for any $u \in B_{n}(T,M)$,
\[
\forall t \in [1,T], \forall p \in \m R, \quad |\tilde{u}(t,p)|\le M\chi_n(p), \]
one may easily obtain the necessary bounds on $\Psi$. Indeed, we estimate
\begin{align}\label{eq:boundNporchiN}
|\cal N[u](t,p)|&\lesssim |p| \iint_{q_1+q_2+q_3=p} \chi_n(q_1) \chi_n(q_2) \chi_n(q_3) dq_1dq_2\; \|u\|_{X_n}^3 \\
& \lesssim |p| \sup_{|q_3| \geqslant |p/3|} \chi_{n} (q_3) \| \chi_n \|_{L^1}^2\|u\|_{X_n} ^3 \lesssim \|u\|_{X_n}^3, \nonumber
\end{align}
where we used the fact that at least one of the variables $q_1$, $q_2$ and $q_3$ has modulus at least $|p/3|$. Hence, for $t \in [1,T]$,
\begin{equation}gin{align*}
\|\hat u(t) \chi_n^{-1}\|_{L^\infty} & = \|\tilde{u}(t) \chi_n^{-1}\|_{L^\infty} \leqslantftesssim \|\chi_n \tilde{u}_1 \chi_n^{-1}\|_{L^\infty} + \leqslantfteft\| \int_1^t \cal N[u](s)ds \rightight\|_{L^\infty} \\
& \leqslantftesssim \| \tilde{u}_1 \|_{L^\infty} + (T-1)\sup_{t\in[1,T]} \leqslantfteft\| \cal N[u](t) \rightight\|_{L^\infty}\\
&\leqslantftesssim \|\tilde{u}_1 \|_{L^{\infty}} + (T-1)M^3.
\varepsilonnd{align*}
Similar to estimate \varepsilonqref{eq:boundNporchiN}, we have
\begin{equation}gin{align*}
|\partial_p\cal N[u](t,p)|&\leqslantftesssim t|p|\iint_{q_1+q_2+q_3=p}(|p|^2+|q_3|^2)\chi_n(q_1) \chi_n(q_2) \chi_n(q_3) dq_1dq_2 \|u\|_{X_n}^3\\&\quad+ |p|\iint_{q_1+q_2+q_3=p} \chi_n(q_1) \chi_n(q_2) |\partial_{p}u(q_3)| dq_1dq_2\|u\|_{X_n}^2\\&\leqslantftesssim |p| \sup_{|q_3| \geqslant |p/3|} (p^2+q_3^2)\chi_{n} (q_3) \| \chi_n \|_{L^1}^2\|u\|_{X_n} ^3 \\&\quad+ |p|\leqslantfteft(\iint_{q_1+q_2+q_3=p}\chi_n(q_1)^2 \chi_n(q_2) \chi_n(q_3) dq_1dq_2 \rightight)^{1/2}\\&\quad\quad\times\leqslantfteft(\iint \chi_{n}(q_2)|\partial_{p}u(q_3)|^2\chi_{n}^{-1}(q_3)dq_2dq_3\rightight)^{1/2}\|u\|_{X_n}^2\leqslantftesssim \|u\|_{X_n}^3.
\varepsilonnd{align*}
This implies the direct bound
\begin{equation}gin{align*}
\MoveEqLeft \| \partial_p\tilde{u}(t) \chi_n^{-1/2} \|_{L^2((0,+\infty))} \\
& \leqslantftesssim \| \chi_n \partial_p\tilde{u}_1 \chi_n^{-1/2}\|_{L^2((0,+\infty))} + (T-1) \| \chi_n^{1/2} \|_{H^1}\sup_{t \in [1,T]} \| \cal N[u](t)\|_{W^{1,\infty}} \\
& \leqslantftesssim \| \partial_p\tilde{u}_1 \|_{L^2((0,+\infty))} + (T-1)\|u\|_{X_n}^3.
\varepsilonnd{align*}
Thus
\[ \|\Psi(u) \|_{X_n([1,T])} \le C\left(\| u_1 \|_{\q E(1)} + (T-1)M^3\right). \]
Analogous computations yield
\[ d(\Psi(u),\Psi(v))\le C (T-1)M^2 d(u,v) \]
(since $\cal N$ is a trilinear operator). Choosing $M$ and $T$ such that
\[ C \left(\|u_1\|_{\q E(1)} + (T-1)M^3\right) \le M \]
and $C (T-1)M^2 \le 1/2$, we see that $\Psi: B_{n}(T,M) \to B_{n}(T,M) $ is a contraction. The result now follows from Banach's fixed point theorem.
\end{proof}
To conclude the construction of a solution, we need the time interval on which the approximating sequence is defined to remain of a size independent of $n$. To that end, we need some \emph{a priori} bounds.
\begin{lem}[$L^\infty$ bound for \eqref{pimkdv}]\label{lem:desenvassimptaprox}
Given $u_1\in \q E(1)$, denote by $u_n$ the corresponding solution of \eqref{pimkdv}, given by Proposition \ref{prop:u_n} and defined on $(T_{-,n},T_{+,n})$. Let $I \subset (T_{-,n},T_{+,n})$. Then
\begin{equation}\label{eq:Linftyaprox}
\forall t \in I, \quad \| \tilde{u}_n(t) \chi_n^{-1} \|_{L^\infty} \le \| \tilde{u}_1 \chi_n^{-1} \|_{L^\infty} + C(\|u_n\|_{\q E(I)} ^3 + \|u_n\|_{\q E(I)}^5)
\end{equation}
and
\[
\|\partial_t \tilde{u}_n \chi_n^{-1} \|_{L^\infty}\lesssim \frac{1}{t}\|u_n\|_{\q E(I)} ^3.
\]
Moreover, if one defines
\[
E^n(t,p)=\exp\left(- i\epsilon\int_1^t\frac{p^3\chi_n(p)}{4\pi \jap{p^3s}}|\tilde{u}_n(s,p)|^2ds\right),
\]
then for all $t, \tau \in I$,
\[
|(\tilde{u}_nE^n)(t,p) - (\tilde{u}_nE^n)(\tau,p)|\chi_{n}^{-1}(p)\le C(\|u_n\|_{\q E(I)}^3 + \|u_n\|_{\q E(I)}^5) \min \left( \frac{|t-\tau|}{\jap{p^3}},\frac{1}{\jap{\tau p^3}^{\frac{1}{12}}} \right).
\]
\end{lem}
\begin{proof}
The proof follows the lines of Lemma \ref{lem:desenvassimpt}; we leave the details to the reader.
\end{proof}
Now we look for an \varepsilonmph{a priori} bound for $\partial_p \tilde{u}_n$. Define the operator
\[
\omegaidehat{\mathcal{I}u}(t,p)= i\partial_p \hat{u}(t,p) - \frac{3it}{p}\partial_t\hat{u}(t,p) = ie^{itp^3}\leqslantfteft(\partial_p\tilde{u} - \frac{3t}{p}\partial_t\tilde{u}\rightight),
\]
which corresponds to the formal operator
\[
x+3t\int_{-\infty}^x \partial_t dx'.
\]
Using the definition, one may check that, if
\[
\omegaidehat{\Pi_n'u}:= \chi_n'\hat{u}
\]
then
\[
\mathcal{I}\leqslantfteft(\Pi_n u \rightight)= \Pi_n \mathcal{I}u + i\Pi_n'u.
\]
Moreover, if we let $L = \partial_t + \partial_{xxx}$,
\[
\widehat{L\mathcal{I}u}=\widehat{\mathcal{I}Lu} + \frac{3i}{p}\widehat{Lu},\quad \mathcal{I}(u^3)_x = 3u^2 \left(\mathcal{I}u\right)_x - 3u^3.
\]
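As a quick formal check of the last identity (using only the formal expression $x + 3t\int_{-\infty}^x \partial_t\, dx'$, the chain rule, and decay at $-\infty$):
\begin{align*}
\mathcal{I}\big((u^3)_x\big) &= x\,(u^3)_x + 3t \int_{-\infty}^x \partial_t (u^3)_x \, dx' = x\,(u^3)_x + 9t\, u^2 u_t, \\
3u^2 (\mathcal{I}u)_x - 3u^3 &= 3u^2\big( u + x u_x + 3t\, u_t \big) - 3u^3 = x\,(u^3)_x + 9t\, u^2 u_t,
\end{align*}
so both sides agree.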
\begin{equation}gin{lem}[$\dot{H}^1$ bound for \varepsilonqref{pimkdv}]\leqslantftabel{lem:H1aprox}
Given $u_1\in \q E(1)$, the corresponding solution $u_n$ of \varepsilonqref{pimkdv} satisfies
\[
\omegaidehat{\mathcal{I}u_n}\in \q C^1((T_{-,n},T_{+,n}), L^2((0,+\infty), \chi_n^{-1}dp)).
\]
There exists a universal constant $\kappa >0$, such that, for $1<t<T_{+,n}$,
\begin{equation}gin{align}
\forall t \in [1,T_{+,n}) \quad \leqslantfteft(\int_0^\infty |\omegaidehat{\mathcal{I}u_n}(t,p)|^2\chi_n^{-1}dp \rightight)^{1/2} & \leqslantfte \leqslantfteft(\int_0^\infty |\omegaidehat{\mathcal{I}u_n}(1,p)| \chi_n^{-1} dp \rightight)^{1/2}t^{\kappa \| u_n \|_{\q E([1,t])}^2} \nonumber\\
&\qquad + o_n(1) \| u_n \|_{\q E([1,t])}^3 t^{1/6}\leqslantftabel{eq:H1aprox_tmaior}, \\
\forall t \in (T_{-,n},1], \quad \leqslantfteft(\int_0^\infty |\omegaidehat{\mathcal{I}u_n}(t,p)|^2\chi_n^{-1}dp \rightight)^{1/2} &\leqslantfte \leqslantfteft(\int_0^\infty |\omegaidehat{\mathcal{I}u_n}(1,p)| \chi_n^{-1} dp \rightight)^{1/2}t^{-\kappa\| u_n \|_{\q E([t,1])}^2} \nonumber\\&\qquad+ o_n(1) \| u_n \|_{\q E([t,1])}^3 t^{1/6}\leqslantftabel{eq:H1aprox_tmenor}.
\varepsilonnd{align}
\varepsilonnd{lem}
\begin{equation}gin{proof}
Fix $\varepsilon>0$. First of all, notice that, by Lemma \rightef{lem:desenvassimptaprox}, we have
\[
\omegaidehat{\mathcal{I}u_n} = ie^{-itp^3}\leqslantfteft(\partial_p\tilde{u}_n - \frac{3t}{p}\partial_t\tilde{u}_n \rightight) \in L^2([\varepsilon,+\infty), \chi_n^{-1}dp),
\]
which justifies the finiteness of all the following integrations.
On the other hand,
\begin{equation}gin{align*}
\partial_t\omegaidehat{\mathcal{I}u_n} - ip^3\omegaidehat{\mathcal{I}u_n} & = \omegaidehat{L\mathcal{I}u_n} = \omegaidehat{\mathcal{I}Lu_n} - 3 \varepsilonpsilon \omegaidehat{\Pi_n (u_n^3)} = -\varepsilonpsilon\omegaidehat{\mathcal{I}(\Pi_n(u_n^3)_x)} -3 \varepsilonpsilon \chi_n\omegaidehat{u_n^3}\\
& = -\varepsilonpsilon\chi_n\omegaidehat{\mathcal{I}(u_n^3)_x} - i\varepsilonpsilon\chi_n' \omegaidehat{(u_n^3)_x} - 3 \varepsilonpsilon \chi_n \omegaidehat{u_n^3} = -\varepsilonpsilon\leqslantfteft(3\chi_n \omegaidehat{u^2(\mathcal{I}u_n)_x} +ip\chi_n' \omegaidehat{u_n^3}\rightight).
\varepsilonnd{align*}
Multiplying by $\overline{\omegaidehat{\mathcal{I}u_n}} \chi_n^{-1}$, integrating on $\m R\setminus (-\varepsilon,\varepsilon)$ and taking the real part,
\begin{equation}gin{align*}
\frac{1}{2}\frac{d}{dt}\int_{\m R\setminus (-\varepsilon,\varepsilon)} |\omegaidehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp &=-\varepsilonpsilon \mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)}\int_{p_1+p_2=p} \omegaidehat{u_n^2}(p_1)p_2\omegaidehat{\mathcal{I}u_n}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp \\
&\ \ \ -\varepsilonpsilon\mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} p\chi_n'(p)\chi_n^{-1/2}(p)\omegaidehat{u_n^3}(p)\overline{\omegaidehat{\mathcal{I}u_n}}(p)\chi_n^{-1/2}(p)dp \\
& =I_1 + I_2.
\varepsilonnd{align*}
For $I_1$, we split the integral in $p_2$:
\begin{equation}gin{align*}
\leqslantfteft| \int_{\m R\setminus (-\varepsilon,\varepsilon)}\int_{-\varepsilon}^{\varepsilon} \overline{\omegaidehat{\mathcal{I}u_n}}(p) \omegaidehat{\mathcal{I}u_n}(p_2)p_2 \omegaidehat{u_n^2}(p_1)dp_2dp \rightight| & \leqslantftesssim \varepsilon\|\omegaidehat{\mathcal{I}u_n}\|_{L^2(\m R\setminus(-\varepsilon,\varepsilon))}\|\omegaidehat{\mathcal{I}u_n}\|_{L^2((0,+\infty))}\|\omegaidehat{u_n^2}\|_{L^1} \\
& \to 0 \quad \text{as } \varepsilon \to 0.
\varepsilonnd{align*}
(Indeed $\|\omegaidehat{u_n^2}\|_{L^1} \leqslantftesssim \| \tilde u_n \|_{L^1}^2<+\infty$).
Then observe that
\begin{equation}gin{align*}
\MoveEqLeft \mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} \int_{\m R\setminus (-\varepsilon,\varepsilon)}\omegaidehat{u^2_n}(p_1)p_2\omegaidehat{\mathcal{I}u_n}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp \\
& = \mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} \int_{\m R\setminus (-\varepsilon,\varepsilon)}\omegaidehat{u^2_n}(p_1)(p-p_1)\omegaidehat{\mathcal{I}u_n}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp \\
&= -\mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} \int_{\m R\setminus (-\varepsilon,\varepsilon)}p_1\omegaidehat{u^2_n}(p_1)\omegaidehat{\mathcal{I}u_n}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp \\
&\quad- \mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} \int_{\m R\setminus (-\varepsilon,\varepsilon)}\omegaidehat{u^2_n}(p_1)p_2\omegaidehat{\mathcal{I}u_n}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp.
\varepsilonnd{align*}
Hence, if we define the operator
\[ \omegaidehat{\m 1_\varepsilon v}=\m 1_{\m R\setminus(-\varepsilon,\varepsilon)}\hat{v}, \]
then
\begin{equation}gin{align*}
\MoveEqLeft \leqslantfteft|\mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} \int_{\m R\setminus (-\varepsilon,\varepsilon)}\omegaidehat{u_n^2}(p_1)p_2\omegaidehat{\mathcal{I}u_n}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp \rightight| \\
&= \frac{1}{2}\leqslantfteft|\mathrm{Re} \int_{\m R\setminus (-\varepsilon,\varepsilon)} \int_{\m R\setminus (-\varepsilon,\varepsilon)}p_1\omegaidehat{u_n^2}(p_1)\omegaidehat{\mathcal{I}u}(p_2)\overline{\omegaidehat{\mathcal{I}u_n}}(p)dp_2dp\rightight| \\
& \leqslantftesssim \leqslantfteft|\iint p_1\omegaidehat{u_n^2}(p_1)\omegaidehat{\m 1_\varepsilon \mathcal{I}u}(p_2)\overline{\omegaidehat{\m 1_\varepsilon \mathcal{I}u_n}}(p)dp_2dp\rightight|\leqslantftesssim \leqslantfteft|\int (u_n^2)_x(x)|\m 1_\varepsilon(\mathcal{I}u_n)(x)|^2dx\rightight| \\
&\leqslantftesssim \|u_n \partial_x u_n\|_{L^\infty}\|\m 1_\varepsilon(\mathcal{I}u_n)\|_{L^2}^2 \leqslantftesssim \|u_n \partial_x u_n\|_{L^\infty}\int_{\m R\setminus (-\varepsilon,\varepsilon)} |\omegaidehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp.
\varepsilonnd{align*}
Now, to estimate $I_2$, we use Cauchy-Schwarz:
\begin{align*}
|I_2| &\lesssim \left(\int_{\m R\setminus (-\varepsilon,\varepsilon)} |\widehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp\right)^{1/2}\left(\int|p\chi_n'(p)\chi_n^{-1/2}|^2|\widehat{u_n^3}|^2dp \right)^{1/2}\\
&\lesssim \left(\int_{\m R\setminus (-\varepsilon,\varepsilon)} |\widehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp\right)^{1/2} \| u_n(t) \|_{L^6}^3 \sup_{p\in\m R}|p(\chi_n^{1/2})'(p)|.
\end{align*}
Here we crucially use the third condition on $\chi_n$.
Putting together these estimates, using Lemma \rightef{lema:decayE} and the symmetry $\hat u_n(t,-p)=\overline{\hat u_n(t,p)}$,
\begin{equation}gin{align}
\MoveEqLeft \leqslantfteft|\frac{d}{dt}\leqslantfteft(\int_\varepsilon^\infty |\omegaidehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp\rightight)^{1/2}\rightight|\nonumber \\
&\leqslantftabel{eq:estH1aprox} \leqslantftesssim \| u \partial_x u \|_{L^\infty}\leqslantfteft(\int_\varepsilon^\infty |\omegaidehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp\rightight)^{1/2} + o_\varepsilon(1) + o_n(1) \| u_n (t) \|_{L^6}^3 \\
& \leqslantftesssim \frac{\| u(t) \|_{\q E(t)} ^2}{t}\leqslantfteft(\int_\varepsilon^\infty |\omegaidehat{\mathcal{I}u_n}(p)|^2\chi_n^{-1}(p)dp\rightight)^{1/2} + o_\varepsilon(1) + o_n(1)\frac{\| u_n(t) \|_{\q E(t)}^3}{t^{5/6}}\nonumber
\varepsilonnd{align}
It follows that, for $t\geqslant 1$ and some universal constant $\kappa>0$,
\begin{align*}
\left(\int_\varepsilon^\infty |\widehat{\mathcal{I}u_n}(t,p)|^2\chi_n^{-1}(p)dp\right)^{1/2} & \le \left(\int_\varepsilon^\infty |\widehat{\mathcal{I}u_n}(1,p)|^2\chi_n^{-1}(p) dp \right)^{1/2}t^{\kappa \| u_n \|_{\q E([1,t])}^2} \\
&\quad + o_\varepsilon(t) + \| u_n(t) \|_{\q E(t)}^3 o_n(t^{1/6}).
\end{align*}
Taking $\varepsilon\to 0$, the result follows. An analogous computation yields the inequality for $t<1$.
\varepsilonnd{proof}
\begin{equation}gin{prop}[Global existence]\leqslantftabel{prop:boundapprox}
Given $u_1\in \q E(1)$ small, let $u_n$ be the unique maximal solution of \varepsilonqref{pimkdv} given by Proposition \rightef{prop:u_n}. Then there exists $T=T(\|u_1\|_{\q E(1)})<1$ such that, if $n$ is large enough, $u_n$ is defined on $[T,+\infty)$ and
\begin{equation}gin{equation}
\| u_n \|_{\q E([T,+\infty))} \leqslantfte C \|u_1\|_{\q E(1)}.
\varepsilonnd{equation}
\varepsilonnd{prop}
\begin{equation}gin{proof}
Fix $\delta_0>\|u_1\|_{\q E(1)}$. Define
\begin{equation}
f_n(t)= \| \tilde{u}_n(t) \chi_n^{-1}\|_{L^\infty} + t^{-1/6}\|\partial_p \tilde{u}_n(t) \chi_n^{-1/2}\|_{L^2((0,+\infty))}
\end{equation}
and let $J_n$ be the maximal connected interval containing $t=1$ such that
\[
f_n(t) \le 4 C \delta_0,\quad t\in J_n.
\]
For $\delta_0$ sufficiently small and some $T<1$ close to $1$, it follows from Lemma \ref{lem:H1aprox} that, given $t\in J_n$, $t>T$,
\[
\left(\int_0^\infty |\widehat{\mathcal{I}u_n}(t,p)|^2\chi_n^{-1}dp \right)^{1/2}\le 2\left(\left(\int_0^\infty |\widehat{\mathcal{I}u_n}(1,p)|^2 \chi_n^{-1}dp\right)^{1/2} + o_n(1) f_n(t)^3\right)t^{1/6}.
\]
Recalling that
\[
\partial_p\tilde{u}_n = -ie^{itp^3}\omegaidehat{\mathcal{I}u_n} + \frac{3t}{p}\partial_t\tilde{u}_n = -ie^{itp^3}\omegaidehat{\mathcal{I}u_n} + 3t\chi_n e^{-itp^3}\omegaidehat{u^3_n},
\]
we derive the bound for $\partial_p\tilde{u}_n$:
\begin{equation}gin{align*}
\| \partial_p\tilde{u}_n(t) \chi_n^{-1/2}\|_{L^2((0,+\infty))} & \leqslantfte \| \omegaidehat{\mathcal{I}u_n}(t) \chi_n^{-1/2} \|_{L^2((0,+\infty))} + 3t\| \omegaidehat{u_n^3}(t) \chi_n^{1/2} \|_{L^2} \\
& \leqslantftesssim t^{1/6}\leqslantfteft(\| \omegaidehat{\mathcal{I}u_n}(1) \chi_n^{-1/2}\|_{L^2((0,+\infty))} + o_n(1)f_n(t)\rightight) + 3t \|u_n \|_{L^6}^3 \\
& \leqslantftesssim t^{1/6}\leqslantfteft(\| \omegaidehat{\mathcal{I}u_n}(1)\chi_n^{-1/2} \|_{L^2((0,+\infty))} + o_n(1)f_n(t) + f_n (t)^3\rightight).
\varepsilonnd{align*}
Together with the $L^\infty$ bound \eqref{eq:Linftyaprox}, we infer
\begin{align*}
f_n(t) & \le C\left(\|\chi_n^{-1}\tilde{u}_n(1)\|_{L^\infty} + \|\chi_n^{-1/2}\partial_p\tilde{u}_n(1)\|_{L^2((0,+\infty))} + o_n(1) f_n(t) + f_n(t)^3\right) \\
& \le C\left(\|u_1\|_{\q E(1)} + o_n(1)f_n(t) + f_n(t)^3\right).
\end{align*}
If $n$ is large enough and $4C\|u_1\|_{\q E(1)}<\delta_0$, then a continuity argument implies that
\begin{equation}
\forall t \in J_n,\ t>T, \quad f_n(t) \le 2C\|u_1\|_{\q E(1)} < \frac{\delta_0}{2}.
\end{equation}
Hence $J_n$ must be equal to $[T,T_{+,n})$. By the definition of $J_n$ and the blow-up alternative, $T_{+,n} = +\infty$.
\end{proof}
\section{Well-posedness on the critical space} \leqslantftabel{sec:5}
\begin{equation}gin{prop}[Existence for small data]\leqslantftabel{prop:exist}
There exists $\delta>0$ such that, given $u_1\in \q E(1)$ with $\|u_1\|_{\q E(1)}<\delta$, there exist $T=T(\|u_1\|_{\q E(1)})<1$ and a unique solution $u\in \q{E}([T,\infty))$ of \eqref{mkdv} in the distributional sense such that $u(1)=u_1$. Moreover, there exists a universal constant $C>1$ such that
\begin{equation}gin{equation}\leqslantftabel{eq:boundfinal}
\|u\|_{\q{E}([T,\infty))}\leqslantfte C \| u_1 \|_{\q E(1)}.
\varepsilonnd{equation}
\varepsilonnd{prop}
\begin{equation}gin{proof}
\varepsilonmph{Step 1. Approximate solutions and a priori bounds.} For each $n\in \mathbb{N}$, define $u_n$ as the unique solution of \varepsilonqref{pimkdv}. By Proposition \rightef{prop:boundapprox}, for $n$ large enough, $u_n$ is defined on $[T,+\infty)$ and
\[
\| u_n \|_{\q E([T,+\infty))}\leqslantfte C\delta.
\]
\varepsilonmph{Step 2. Convergence on the profile space.} Since the sequence $(\tilde{u}_n)_{n\in\mathbb{N}}$ is uniformly bounded in $\q C([T,\infty), \dot{H}^1((0,+\infty))\cap L^\infty(\m R))$,
the Sobolev embedding implies that
\[
|\tilde{u}_n(t,p)-\tilde{u}_n(t,q)|\leqslantftesssim |p-q|^{1/2},\quad p,q>0.
\]
Moreover, by Lemma \ref{lem:desenvassimptaprox}, $\tilde{u}_n$ is uniformly bounded in $W^{1,\infty}([T,\infty),L^\infty(\m R))$. Hence $(\tilde{u}_n)_{n\in\mathbb{N}}$ is equicontinuous on $[T,\infty)\times [0,R]$, for any $R>0$. By the Ascoli-Arzelà theorem, we conclude that there exists $\tilde{u}\in C_b([T,\infty)\times (0,+\infty))$ such that, up to a subsequence,
\[
\tilde{u}_n \to \tilde{u}\quad \text{uniformly in }[T,T']\times [0,R], \ T'>T,\ R>0.
\]
Given $p<0$, we set
\[
\tilde{u}(t,p)=\overline{\tilde{u}(t,-p)}.
\]
\varepsilonmph{Step 3. $\tilde{u}\in C_b([T,\infty)\times \m R)\cap L^\infty((T,\infty), \dot{H}^1((0,+\infty)))$}. Define
\begin{equation}gin{align*}
E_u(t,p) & = \varepsilonxp\leqslantfteft(-i\varepsilonpsilon\int_1^t\frac{p^3}{4\pi \jap{p^3s}}|\tilde{u}(s,p)|^2ds\rightight), \\
E^n(t,p) & = \varepsilonxp\leqslantfteft(-i\varepsilonpsilon\int_1^t\frac{\chi_n(p)p^3}{4\pi \jap{p^3s}}|\tilde{u}_n(s,p)|^2ds\rightight).
\varepsilonnd{align*}
By Lemma \rightef{lem:desenvassimptaprox},
\[
|(\tilde{u}_nE^n)(t,p)-(\tilde{u}_nE^n)(s,p)|\lesssim \frac{|t-s|}{\jap{p^3}}.
\]
Taking $n\to\infty$, we get
\[
|(\tilde{u}E_{u})(t,p)-(\tilde{u}E_{u})(s,p)|\lesssim \frac{|t-s|}{\jap{p^3}},
\]
which means that $\tilde{u}E_{u}\in \q C([T,\infty), L^\infty(\m R))$. On the other hand,
\[
|E_u(t,p)-E_u(s,p)|\leqslantftesssim \int_s^t \frac{p^3}{ \jap{p^3s}}|\tilde{u}(s,p)|^2ds \leqslantftesssim |t-s|.
\]
Hence, when $t\to s$,
\[
\|\tilde{u}(t)-\tilde{u}(s)\|_{L^\infty(\m R)}\leqslantfte \|\tilde{u}(t)E_u(t)-\tilde{u}(s)E_u(s)\|_{L^\infty(\m R)} + \|\tilde{u}(s)\leqslantfteft(E_u(t)-E_u(s) \rightight)\|_{L^\infty(\m R)} \to 0
\]
and so $\tilde{u}\in \q C([T,\infty), L^\infty(\m R))$.
Fix $t\in [T,\infty)$. Since $(\tilde{u}_n(t))_{n\in\mathbb{N}}$ is bounded in $\dot{H}^1((0,+\infty))$, up to a subsequence, there exists $g(t)\in L^2((0,+\infty))$ such that
\[
\partial_p\tilde{u}_n(t)\rightightharpoonup g(t),\quad \|g(t)\|_{L^2((0,+\infty))}\leqslantfte \leqslantftiminf \|\partial_p\tilde{u}_n(t)\|_{L^2((0,+\infty))} \leqslantftesssim \delta.
\]
Since $\tilde{u}_n(t)\to \tilde{u}(t)$ in $L^\infty_{loc}(\m R)$, we have $\tilde{u}(t)\in \dot{H}^1((0,+\infty))$ and $\partial_p\tilde{u}(t)=g(t)$. Moreover, the uniform bound on $g(t)$ implies
\[
\tilde{u}\in C_b([T,\infty) \times \m R)\cap L^\infty((T,\infty), \dot{H}^1((0,+\infty))).
\]
\varepsilonmph{Step 4. Convergence on the physical space.}
We already know that
\[
|\tilde{u}_n(t,p)-\tilde{u}_n(t,0^+)|, |\tilde{u}(t,p)-\tilde{u}(t,0^+)|\leqslantftesssim |p|^{1/2}.
\]
Hence, $\tilde{u}_n(t)\to \tilde{u}(t)$ in $\mathcal{S}'(\m R)$. Therefore $u(t)=(e^{itp^3}\tilde{u}(t))^{\vee}$ is well-defined and we have
\[
u_n\to u \text{ in }\mathcal{D}'((T,\infty)\times \m R).
\]
We now claim that $u$ is in $L^\infty((T,\infty)\times\m R)$ and that $u_n(t)\to u(t)$ in $L^\infty(\m R)$, for any $t\in [T,\infty)$. Since $(u_n)_{n\in\mathbb{N}}$ is
bounded in $\q E([T,\infty))$, Lemma \rightef{lema:decayE} implies that, for any $K\subset \m R $ compact and $T'>T$,
\begin{equation}gin{gather*}
\forall t \in [T,T'], \quad \|u_n(t)\|_{L^\infty(K)}, \|(u_n)_x(t)\|_{L^\infty(K)} \leqslantftesssim_{T',K} \delta, \\
\text{and} \quad \forall x\in \m R, \ \forall t \in [T,T'], \quad |u_n(t,x)|\leqslantftesssim C(T') \jap{x}^{-\frac{1}{4}}.
\varepsilonnd{gather*}
Again by Ascoli-Arzelà, there exists $h(t)\in \q C(\m R)$ such that
\[
u_{n_k}(t)\to h(t) \quad \text{uniformly in } K, \ K\subset \m R\text{ compact.}
\]
and
\[
|h(t,x)|\leqslantftesssim C(T') \jap{x}^{-\frac{1}{4}},\quad x\in\m R.
\]
This implies that $h$ is, in fact, bounded over $(T,T')\times \m R$. Since $u_n\to u$ in the distribution sense, $h=u$. Hence the limit $h(t)$ is unique and we conclude that the whole sequence $(u_n(t))_{n\in\mathbb{N}}$ must converge to $h(t)$:
\[
u_n(t)\to h(t)=u(t) \quad \text{uniformly in } K, \ K\subset \m R\text{ compact.}
\]
Finally, the uniform decay of $u_n$ and $u$ implies that this convergence holds over $\m R$,
\[
u_n(t)\to u(t) \text{ in }L^\infty(\m R).
\]
The claim is proven.
\varepsilonmph{Step 5. $u$ is a solution of \varepsilonqref{mkdv} in $\q E([T,+\infty))$}. Since $u_n(t)\to u(t) \text{ in }L^\infty(\m R)$, one has
\[ (u_n)^3\to u^3 \text{ in } \mathcal{D}'((T, \infty)\times\m R). \]
Recalling that $(\partial_t + \partial_{xxx})u_n=-\varepsilonpsilon\Pi_n((u_n)^3)_x$, one may now pass to the limit in the distributional sense and
\[
(\partial_t + \partial_{xxx})u=-\varepsilonpsilon(u^3)_x.
\]
By Step 3, $u\in \q E([T,+\infty))$ and the bound \varepsilonqref{eq:boundfinal} follows from the corresponding bound for $u_n$. The proof is complete.
\varepsilonnd{proof}
\begin{equation}gin{nb}\leqslantftabel{rmk:boundI}
As a consequence of the above proof and Lemma \rightef{lem:H1aprox}, one may easily see that, if $\|u_1\|_{\q E(1)}<\delta<\delta_0$, then
\begin{equation}gin{align}
\forall t \geqslant 1, \quad \|\omegaidehat{\mathcal{I}u}(t)\|_{L^2((0,+\infty))} & \leqslantfte \|\omegaidehat{\mathcal{I}u}(1)\|_{L^2((0,+\infty))} t^{\kappa \delta^2},\quad \text{and} \\
\leqslantftabel{eq:estIparatras}
\forall t \leqslantfte 1, \quad \|\omegaidehat{\mathcal{I}u}(t)\|_{L^2((0,+\infty))} & \leqslantfte \|\omegaidehat{\mathcal{I}u}(1)\|_{L^2((0,+\infty))} t^{- \kappa\delta^2}.
\varepsilonnd{align}
\varepsilonnd{nb}
We now consider the large data case. Here, the only delicate point is to prove that the lifespan $[T_{-,n},T_{+,n}]$ does not become trivial as $n$ tends to $\infty$. Afterwards, the arguments of the previous proof may be applied \textit{mutatis mutandis}.
\begin{equation}gin{lem}[Uniform local existence for large data]\leqslantftabel{lem:uniformtime}
Given $u_1\in \q E(1)$, there exist $T_-(u_1)<1$, $T_+(u_1)>1$ and $C = C(\| u_1 \|_{\q E(1)})$ such that, for large $n$, the corresponding solution $u_n$ of \eqref{pimkdv} is defined on $[T_-(u_1), T_+(u_1)]$ and
\[
\|u_n\|_{\q E([T_-(u_1), T_+(u_1)])}\leqslantfte C \|u_1\|_{\q E(1)}.
\]
\varepsilonnd{lem}
\begin{equation}gin{proof}
Due to the critical nature of the space $\q E$, we are unable to obtain a uniform time-continuity estimate for the solutions $u_n$. Instead, we argue by contradiction. We focus on showing $T_+(u_1)>1$, the other case being completely analogous.
Let $C_1>0$ be a large constant to be chosen later. For each $n$, let $t_n>1$ be the first time satisfying
\[
\| u_n(t_n) \|_{\q E(t_n)}= C_1 \|u_1\|_{\q E(1)}.
\]
Suppose, for the sake of contradiction, that $t_n\to 1$, in particular $t_n \leqslantfte 2$. The uniform bound of $\tilde{u}_n(t_n)$ in $L^\infty\cap \dot{H}^1((0,+\infty))$ implies the existence of $v$ such that
\[
\partial_{p}\tilde{u}_n(t_n)\rightharpoonup \partial_{p}v \mbox{ in }L^2((0,+\infty)), \quad \tilde{u}_n(t_n)\to v\mbox{ in }L^\infty(K),\ K\subset \m R\mbox{ compact}.
\]
On the other hand, by Lemma \rightef{lem:desenvassimptaprox},
\begin{equation}gin{align}
|(\tilde{u}_nE^n)(t_n,p) - \tilde{u}_1(p)|&\leqslant |(\tilde{u}_nE^n)(t_n,p) - (\tilde{u}_nE^n)(1,p)|+|\tilde{u}_n(1,p) - \tilde{u}_1(p)| \nonumber\\
&\lesssim (\| u_n(t_n) \|_{\q E(t_n)}^3 + \| u_n(t_n) \|_{\q E(t_n)}^5) |t_n-1|+|\Pi_n(p) - 1| \to 0\label{eq:contradLinfty}
\varepsilonnd{align}
which means that $v=\tilde{u}_1$. Moreover, the decay estimates of Lemma \rightef{lema:decayE} imply that
\[
u_n(t_n)\to u_1\text{ in } L^6(\m R).
\]
Due to \varepsilonqref{eq:H1aprox_tmaior} from Lemma \rightef{lem:H1aprox}, we have
\begin{equation}gin{align*}
\|(\mathcal{I}u_n)(t_n)\|_{L^2((0,+\infty))}& \leqslantftesssim \leqslantfteft(\int_0^\infty |\omegaidehat{\mathcal{I}u_n}(1)|^2\chi_n^{-1}dp\rightight)^{1/2} t_n^{C_1^2 \kappa \| u_1 \|_{\q E(1)}^2} + o_n(1) t_n^{1/6} \| u_n(1) \|_{\q E(1)}. \\
\varepsilonnd{align*}
For $n$ large, $t_n^{C_1^2 \kappa \| u_1 \|_{\q E(1)}^2} \leqslantfte 2$. Using once more the formula
\begin{equation}gin{align} \leqslantftabel{eq:Iu_n}
\partial_p\tilde{u}_n = -ie^{itp^3}\omegaidehat{\mathcal{I}u_n} + 3t \chi_n e^{-itp^3}\omegaidehat{u^3_n},
\varepsilonnd{align}
we get that for large $n$,
\begin{equation}gin{align*}
\| \mathcal{I}u_n(1) \chi_n^{-1/2} \|_{L^2((0,+\infty))} & \leqslantftesssim \| \partial_p\tilde{u}_n(1) \chi_n^{-1/2} \|_{L^2((0,+\infty))} + \| \omegaidehat{ u_n^3(1)} \chi_n^{1/2} \|_{L^2((0,+\infty))} \\
& \leqslantftesssim \| u_1 \|_{\q E(1)} + \| u_1 \|_{L^6}^3 \leqslantftesssim \| u_1 \|_{\q E(1)} (1 + \| u_1 \|_{\q E(1)}^2).
\varepsilonnd{align*}
As a consequence, we get
\[ \|(\mathcal{I}u_n)(t_n)\|_{L^2((0,+\infty))} \leqslantftesssim \| u_1 \|_{\q E(1)} (1 + \| u_1 \|_{\q E(1)}^2). \]
In the above estimate, we emphasize that the implied constant does not depend on $C_1$.
Using \varepsilonqref{eq:Iu_n} yet another time gives
\begin{equation}gin{align*}
\| \partial_p\tilde{u}_n (t_n) \|_{L^2((0,+\infty))} & \leqslantfte \|(\mathcal{I}u_n)(t_n)\|_{L^2((0,+\infty))} + 3t_n \| \omegaidehat{ u_n^3(1)} \|_{L^2((0,+\infty))} \\
& \leqslantftesssim \| u_1 \|_{\q E(1)} (1 + \| u_1 \|_{\q E(1)}^2).
\varepsilonnd{align*}
Moreover, it follows from \varepsilonqref{eq:contradLinfty} that $\| \tilde u_n(t_n) \|_{L^\infty} \leqslantfte 2 \| u_1 \|_{\q E(1)}$ for large $n$. In other words, for some absolute constant $C_0$ and large $n$,
\[ \| u_n(t_n) \|_{\q E(t_n)} \leqslantfte C_0 \| u_1 \|_{\q E(1)} (1 + \| u_1 \|_{\q E(1)}^2). \]
Choose now $C_1 = 2 C_0 (1 + \| u_1 \|_{\q E(1)}^2)$: this contradicts the definition of $t_n$. Hence $t_n \not\to 1$, and the proof is complete.
\varepsilonnd{proof}
We can now follow the same arguments as in the proof of Proposition \ref{prop:exist} and obtain the analogous result for large data given below.
\begin{equation}gin{prop}[Existence for large data]\leqslantftabel{prop:exist_u_E}
Let $u_1\in \q E(1)$. There exist $T_{-}(u_1)<1$, $T_+(u_1) >1$ and a solution $u\in \q{E}([T_-(u_1),T_+(u_1)])$ of \eqref{mkdv} in the distributional sense such that $u(1)=u_1$.
\varepsilonnd{prop}
We now turn to the forward uniqueness result. It relies on completely different arguments, related to a monotonicity formula.
\begin{equation}gin{prop}[Forward uniqueness]\leqslantftabel{prop:uniq}
If $u,v\in \q E([t_1,t_2])$ are two solutions of \varepsilonqref{mkdv} and $u(t_1)=v(t_1)$, then $u\varepsilonquiv v$.
\varepsilonnd{prop}
\begin{equation}gin{proof}
\varepsilonmph{Step 1}. The difference $w=u-v$ satisfies $(\partial_t+\partial_{xxx})w=((w+v)^3 - v^3)_x$, $w(t_1)=0$. For any $x_1<x_2$, take $\phi \in \q C^\infty(\m R)$ increasing such that $\phi(x)= 0$ for $x<x_1$ and $\phi(x)=1$ for $x>x_2$. A formal computation yields
\begin{equation}gin{equation}\leqslantftabel{eq:diferenca}
\frac{1}{2}\int w^2\phi dx = \int_1^t\int \leqslantfteft(-\frac{3}{2} w_x^2\phi_x - w_xw\phi_{xx} -\varepsilonpsilon ((w+v)^3-v^3)_x w\phi\rightight) dxds
\varepsilonnd{equation}
This can be rigorously justified by a regularization process: for any $\delta>0$, take $\psi\in C_c^\infty(\m R)$ such that $\text{supp } \psi\subset [t_1,t_2]$ and $\psi\varepsilonquiv 1$ over $[t_1+\delta, t_2-\delta]$. Then $\psi w$ solves
\[
(\partial_t+\partial_{xxx})(\psi w) = \psi' w -\varepsilonpsilon \psi \leqslantfteft((w+v)^3 - v^3\rightight)_x\quad \text{ in }\mathcal{D}'(\m R\times\m R).
\]
Taking a sequence of mollifiers (in both space and time) $(\rightho_\varepsilon)_{\varepsilon>0}$, one has
\[
(\partial_t+\partial_{xxx})(\rightho_\varepsilon \ast (\psi w)) = \rightho_\varepsilon \ast ( \psi' w ) - \varepsilonpsilon \rightho_\varepsilon \ast \leqslantfteft(\psi ((w+v)^3 - v^3)_x\rightight), \quad t\in [t_1,t_2].
\]
Writing $w_\varepsilon = \rightho_\varepsilon * (\psi w)$, one now multiplies the above equation by $w_\varepsilon \phi$ and integrates over $[t_1+\delta, t] \times \m R$:
\begin{align}\label{eq:regularizada}
\MoveEqLeft \int w_\varepsilon(t)^2 \phi dx - \int w_\varepsilon (t_1+\delta)^2 \phi dx = \int_{t_1+\delta}^t\int \left(-\frac{3}{2} (w_\varepsilon)_x^2\phi_x - (\partial_x w_\varepsilon) w_\varepsilon \partial_{xx} \phi \right) dxds\\
& \qquad +\int_{t_1+\delta}^t\int \rho_\varepsilon \ast (\psi' w-\varepsilon\psi ((w+v)^3-v^3)_x) w_\varepsilon\phi dxds
\end{align}
Using the decay properties of $w$ and $v$ (cf. Lemma \rightef{lema:decayE}), one may show that
\begin{equation}gin{gather*}
w\in L^2(\phi dx), \quad \partial_x w \in L^6(\phi dx),\quad \partial_x w\in L^2(\phi_x dx), \\
w \partial_x w \in \q C(supp \partial_{xx} \phi),\quad \text{uniformly in }t\in [t_1,t_2].
\varepsilonnd{gather*}
Furthermore, since $w\in \q C([t_1,t_2], L^\infty_{loc}(\m R))$ and $|w(t,x)|\leqslantftesssim C\|w\| x^{-1}$ for $x\geqslant 1$, it is trivial to check, using the dominated convergence theorem, that $w^2\phi \in \q C([t_1,t_2], L^1(\m R))$.
These bounds are sufficient to show that, when $\varepsilon \to 0$ in \varepsilonqref{eq:regularizada}, one obtains
\begin{equation}gin{align}\leqslantftabel{eq:diferencaquase}
\MoveEqLeft \frac{1}{2}\int w^2(t)\phi dx - \frac{1}{2}\int w^2(t_1+\delta)\phi dx \\
= & \int_{t_1+\delta}^t\int \leqslantfteft(-\frac{3}{2} w_x^2\phi_x - w_xw\phi_{xx} - \varepsilonpsilon ((w+v)^3-v^3)_x w\phi\rightight) dxds.\nonumber
\varepsilonnd{align}
Finally, using once again the continuity of $\|w^2(t)\phi\|_{L^1}$, the limit $\delta\to 0$ yields \varepsilonqref{eq:diferenca}.
\emph{Step 2.} Fix $\phi_0 \in \q C^\infty(\m R)$ increasing with $\phi_0(x)=0$ for $x<0$ and $\phi_0(x)=1$ for $x>1$. Define the sequence $\phi_n(x)=\phi_0(1+x/n)$, which satisfies
\[ \phi_n(x)\to 1 \text{ as } n\to \infty,\quad \|(\phi_n)_{xx}\|_1 = \frac{\|(\phi_0)_{xx}\|_1}{n}. \]
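For completeness, the second identity is a direct change of variables (a short check, not part of the original argument): since $(\phi_n)_{xx}(x)=n^{-2}(\phi_0)_{xx}(1+x/n)$,
\[
\|(\phi_n)_{xx}\|_1 = \frac{1}{n^2}\int \left|(\phi_0)_{xx}\left(1+\frac{x}{n}\right)\right| dx = \frac{1}{n}\int |(\phi_0)_{xx}(y)|\, dy = \frac{\|(\phi_0)_{xx}\|_1}{n}.
\]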
Applying \varepsilonqref{eq:diferenca} to $\phi_n$,
\begin{equation}gin{align*}
\frac{1}{2}\int w(t)^2\phi_ndx \leqslant \int_{t_1}^t \left(\|w_xw\|_{L^\infty} \|(\phi_n)_{xx}\|_1 -\varepsilon\int ((w+v)^3-v^3)_x w\phi_ndx\right) ds.
\varepsilonnd{align*}
We simplify the last term:
\begin{equation}gin{gather*}
\int (w^3)_x w \phi_n = \int 3ww_x w^2\phi_n,\quad \int (vw^2)_x w \phi_n = \int (v_xw + 2w_x v)w^2\phi_n \\
\int (v^2w)_x w\phi_n = \int 2 v_xv w^2\phi_n + \frac{1}{2}\int v^2 (w^2)_x \phi_n = \int v_x v w^2\phi_n - \frac{1}{2}\int v^2 w^2 (\phi_n)_x.
\varepsilonnd{gather*}
Recall that, from Lemma \rightef{lema:decayE}, given $u,v\in E$, $\|uv_x\|_{L^\infty}\leqslantftesssim \|u\|\|v\|/t$. Hence
\begin{equation}gin{align*}
\frac{1}{2}\int w(t)^2\phi_ndx \lesssim \|w\|^2 \|(\phi_n)_{xx}\|_1\log t + \int_{t_1}^t\frac{\|u_1\|^2 + \|v\|^2}{s}\int w(s)^2\phi_n\, dx\, ds.
\varepsilonnd{align*}
Applying Gronwall's inequality, for some $C>0$ and all $t\in [t_1, t_2]$, there holds
\[
\int w(t)^2\phi_n dx \leqslantftesssim \|w\|^2 \| \partial_{xx} \phi_n \|_1(\leqslantftog t) \varepsilonxp ( 2C(\|u_1\|^2 + \|v\|^2)\leqslantftog t).
\]
Taking the limit $n\to \infty$ and using Fatou's lemma,
\begin{equation}gin{align*}
\int w^2(t) dx &\leqslantfte \leqslantftiminf \int w^2(t)\phi_n \\&\leqslantftesssim \leqslantftiminf \|w\|^2 \|(\phi_n)_{xx}\|_1(\leqslantftog t) \varepsilonxp ( 2C(\|u_1\|^2 + \|v\|^2)\leqslantftog t)= 0.
\varepsilonnd{align*}
Thus $w\varepsilonquiv 0$ and $u \varepsilonquiv v$.
\varepsilonnd{proof}
Using the forward uniqueness property, we are able to obtain some further continuity information on the solution $u$ we constructed.
\begin{equation}gin{prop}[Continuity properties on $\partial_p \tilde u$] \leqslantftabel{prop:cont_u}
Let $u \in \q E([T_-(u_1),T_+(u_1)])$ be the solution constructed in Proposition \rightef{prop:exist_u_E}.
The map $t \mapsto \partial_p \tilde u(t)$ is continuous in weak-$L^2$ on $[T_-(u_1),T_+(u_1)]$, continuous to the right in $L^2$, and it is continuous at $t=1$.
\varepsilonnd{prop}
\begin{equation}gin{proof}
\varepsilonmph{Step 1. Continuity at $t=1$ in $L^2$.}
Given a sequence $t_k \to 1^+$, since $\tilde{u}\in L^\infty((T,\infty), \dot{H}^1((0,+\infty)))$, one may extract a subsequence so that
\[
\partial_p\tilde{u}(t_k)\rightightharpoonup z, \quad z\in L^2((0,+\infty)).
\]
The continuity of $t\mapsto \tilde{u}(t)\in L^\infty(\m R)$ implies that $z=\partial_p\tilde{u}(1)$. Since $u\in \q{E}([T,\infty))$, given any compact set $K\subset \m R$, the estimates of Lemma \ref{lema:decayE} imply that $(u(t_k))_{k\in\mathbb{N}}$ is bounded in $W^{1,\infty}(K)$. Hence, by the Ascoli-Arzel\`a theorem, up to a subsequence, there exists $v\in \q C(\m R)$ such that
\[
u(t_k) \to v\quad \text{ in }L^\infty(K),\ K\text{ compact subset of }\m R.
\]
Moreover, due to the uniform decay of $u(t_k)$, one sees that this convergence is valid over $L^\infty(\m R)$. Since, by Step 3, $\tilde{u}(t_k)\to \tilde{u}(1)$ in $L^\infty(\m R)$, one must have $v=u(1)$. Together with the decay estimate \varepsilonqref{eq:est1lem1}, the Dominated Convergence theorem implies that
\[
\tilde{u}(t_k)\to \tilde{u}(1) \quad \text{in }L^6(\m R).
\]
Hence
\begin{equation}gin{align*}
e^{-it_kp^3}\omegaidehat{\mathcal{I}u}(t_k)&=i\leqslantfteft(\partial_p\tilde{u}(t_k) - \frac{3t_k}{4\pi}e^{it_kp^3}\omegaidehat{u^3}(t_k)\rightight) \rightightharpoonup e^{-ip^3}\omegaidehat{\mathcal{I}u}(1)\quad \text{ in }L^2((0,+\infty)).
\varepsilonnd{align*}
On the other hand, using \varepsilonqref{eq:H1aprox_tmaior} from Lemma \rightef{lem:H1aprox},
\begin{equation}gin{align*}
\|(\mathcal{I}u)(t_k)\|_{L^2((0,+\infty))}&\lesssim \lim_n \left(\int_0^\infty |\widehat{\mathcal{I}u}(t_k)|^2\chi_n^{-1}dp\right)^{1/2} \\ & \lesssim \lim_n \left(\int_0^\infty |\widehat{\mathcal{I}u}(1)|^2\chi_n^{-1}dp + o_n(1)\delta^3 \right)^{1/2} t_k^{1/6} \\&= \|(\mathcal{I}u)(1)\|_{L^2((0,+\infty))}t_k^{1/6}.
\varepsilonnd{align*}
Therefore, combining the weak convergence above with this bound on the norms (which forces convergence of the $L^2$ norms as $t_k\to 1$), we get $e^{-it_kp^3}\widehat{\mathcal{I}u}(t_k)\to e^{-ip^3}\widehat{\mathcal{I}u}(1)$ in $L^2((0,+\infty))$ and
\begin{equation}gin{equation}
\partial_p\tilde{u}(t_k) = -ie^{-it_kp^3}\omegaidehat{\mathcal{I}u}(t_k) + \frac{3t_k}{4\pi}e^{it_kp^3}\omegaidehat{u^3}(t_k) \to \partial_p\tilde{u}(1) \quad \text{in } L^2((0,+\infty)).
\varepsilonnd{equation}
The continuity to the left of $t=1$ follows from the same arguments and \varepsilonqref{eq:H1aprox_tmenor}.
\varepsilonmph{Step 2. Continuity to the right in $L^2$.}
We now show that $\tilde{u}$ is continuous to the right with values in $\dot{H}^1((0,+\infty))$.
Given $t_0\in [T_-(u_1),T_+(u_1))$, Proposition \ref{prop:exist_u_E} shows that one may build $v$ satisfying $v\in \q E([t_0-\varepsilon,t_0+\varepsilon])$, for some $\varepsilon >0$, with $v$ a solution to \eqref{mkdv} in $\q D'((t_0-\varepsilon, t_0+\varepsilon) \times \m R)$ and $v(t_0)=u(t_0)$.
Step 1 shows that we can furthermore assume continuity at $t_0$:
\[ \partial_p \tilde{v}(t) \to \partial_p \tilde{v}(t_0) \quad \text{in }L^2((0,+\infty)) \quad \text{as} \quad t\to t_0. \]
By forward uniqueness, $v\varepsilonquiv u$ on $[t_0,t_0+\varepsilon]$, which means that $\partial_p \tilde{u}$ is continuous to the right at $t_0$.
\varepsilonmph{Step 3. Continuity in weak-$L^2$.
}
Let $(t_n)_{n\in \mathbb{N}}$ be a sequence of times in $[T_-(u_1),T_+(u_1)]$ such that $t_n \to t_*$. We already saw that $\tilde u(t_n) \to \tilde u(t_*)$ in $L^\infty$. Since $\partial_{p}\tilde{u}(t_n)$ is bounded in $L^2((0,+\infty))$, any subsequence admits a sub-subsequence converging weakly in $L^2$, to a limit which can only be $\partial_p\tilde u(t_*)$. This proves that the full sequence converges: $\partial_p \tilde u(t_n) \rightharpoonup \partial_p \tilde u(t_*)$ weakly in $L^2$.
\varepsilonnd{proof}
\begin{equation}gin{nb}
If backward uniqueness holds, then the same proof shows full continuity: $\partial_p \tilde u \in \q C([T_-(u_1),T_+(u_1)],L^2)$.
\varepsilonnd{nb}
\begin{equation}gin{prop}[Forward uniqueness implies a backward blow-up alternative]\leqslantftabel{prop:blowupaltern}
Let $u\in \q E((T_-, T_+))$ be a maximal solution of (mKdV). If $T_->0$, then
\[
\leqslantftimsup_{t\to T_-} \|u(t)\|_{\q E(t)}=\infty.
\]
\varepsilonnd{prop}
\begin{equation}gin{proof}
By contradiction, suppose that
\[
\forall t \in (T_-,1), \quad \|u(t)\|_{\q E(t)}<M.
\]
It follows from Step 3 of the proof of Proposition \ref{prop:exist} that $\tilde{u}$ is Lipschitz in time with values in $L^\infty(\m R)$. Moreover, since $T_->0$, $\tilde{u}$ is bounded in $\dot{H}^1((0,+\infty))$. This means that $u$ may be extended up to $t=T_-$: $u\in \q E([T_-, 1])$.
Now consider the solution $v\in \q E([T_- - \varepsilon, T_- + \varepsilon])$ of (mKdV) with $v(T_-)=u(T_-)$. By forward uniqueness, $u\equiv v$ for $t>T_-$. This means that $u$ is not maximal, a contradiction.
\varepsilonnd{proof}
\section{Well-posedness on $L^2(\m R)\cap\q E$} \leqslantftabel{sec:6}
In this section, we prove Proposition \ref{prop:L2}, that is, once we restrict the critical space to $L^2$-integrable functions, the local well-posedness theory works in either direction of time. We split it into two statements: one for uniqueness and another for persistence of $L^2$ integrability.
\begin{equation}gin{prop}[Backward uniqueness]\leqslantftabel{prop:backuniqL2}
If $u, v \in \q E([t_1,t_2])$ are two solutions of \eqref{mkdv} with $u, v \in \q C([t_1,t_2], L^2(\m R))$ and $u (t_2)= v(t_2)$, then $u\equiv v$.
\varepsilonnd{prop}
\begin{equation}gin{proof}
Set $w=u - v$. Then $(\partial_t+\partial_{xxx})w=- \varepsilonpsilon (u^3 - v^3)_x$. The right-hand side of the equation is in $L^\infty((t_1,t_2),L^2(\m R))$:
\[
\|(u^3)_x(t)\|_{L^2}\leqslantftesssim \|u_x u(t)\|_{L^\infty}\|u(t)\|_{L^2} \leqslantftesssim \|u\|_{\q E(t_1,t_2)}\|u\|_{L^\infty((t_1,t_2),L^2)}.
\]
Multiplying the equation by $w$ and integrating in space, we see that
\begin{align*}
\left|\frac{1}{2}\frac{d}{dt}\|w(t)\|_2^2\right|& \lesssim (\|u_x u(t)\|_{L^\infty} + \| v_x v (t)\|_{L^\infty}) \|w(t)\|_{L^2}^2\\&
\lesssim (\|u\|_{\q E(t_1,t_2)} + \|v\|_{\q E(t_1,t_2)})\|w(t)\|_{L^2}^2.
\end{align*}
By Gronwall's lemma, we obtain $w\varepsilonquiv 0$.
\varepsilonnd{proof}
We now prove existence of solutions in $L^2(\m R)\cap\q E$, which can be translated into a persistence result:
\begin{equation}gin{prop}[Persistence of $L^2$ integrability]\leqslantftabel{prop:persistL2}
Given $u_1\in \q E(1)\cap L^2(\m R)$, consider the corresponding solution $u\in \q E(I)$ of \varepsilonqref{mkdv} given by Proposition \rightef{prop:exist}. Then $u\in \q C(I, L^2(\m R))$.
\varepsilonnd{prop}
\begin{equation}gin{proof}
Consider the approximate solutions $u_n$ of \varepsilonqref{pimkdv}. Since $u_n\chi_{n}^{-1}\in \q C(I, L^{\infty}(\m R))$, we have $\tilde{u}_n\chi_{n}^{-1/2} \in \q C(I,L^2(\m R))$.
It then follows by direct integration that
\begin{equation}gin{align*}
\leqslantfteft|\frac{1}{2}\frac{d}{dt}\int |\tilde{u}_n(t,p)|^2\chi_{n}^{-1}dp\rightight| &= \leqslantfteft|\int \omegaidehat{(u_n^3)_x}(t,p)\hat{u}_n(t,p)dp\rightight|= \leqslantfteft|\frac{1}{2}\int (u_n^2)_xu_n^2dx\rightight|=0
\varepsilonnd{align*}
and so $\|\tilde{u}_n(t)\chi_n^{-1/2}\|_{L^2(\m R)}^2$ is conserved.
In the limit $n\to \infty$, we obtain $u\in L^\infty(I, L^2(\m R))$. We then infer $L^2$-conservation by direct integration of the equation for $u$:
\[ \|u(t)\|_{L^2}=\|u_1\|_{L^2}. \]
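Formally, this conservation follows by multiplying the equation by $u$ and integrating by parts (this is only a sketch for smooth, decaying functions; the rigorous version goes through the approximate solutions $u_n$ as above):
\[
\frac{1}{2}\frac{d}{dt}\int u^2\,dx = \int u\,\partial_t u\,dx = -\int u\left(\partial_{xxx}u + \varepsilon (u^3)_x\right)dx = 0,
\]
since $\int u\,\partial_{xxx}u\,dx=-\tfrac12\int (u_x^2)_x\,dx=0$ and $\int u\,(u^3)_x\,dx=\tfrac34\int (u^4)_x\,dx=0$.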
Together with the weak $L^2$-continuity, we conclude that $u\in \q C(I, L^2(\m R))$.
\varepsilonnd{proof}
\section{Construction of blow-up solutions} \leqslantftabel{sec:7}
\begin{equation}gin{proof}[Proof of Proposition \rightef{prop:blowupgivenprofile}]
\textit{Step 1. Construction of the approximating sequence.} The scaling invariance of the equation and the criticality of the space $\q E$ imply that Proposition \ref{prop:exist} holds for any initial time $t_0>0$, with $\delta$ independent of $t_0$.
Given $t_n\to 0$, define
\[
g_n(p)=g_0(t_n^{1/3}p).
\]
Since
\[
\|g_n\|_{\q E(t_n)}=\|g_0\|_{\q E(1)}<\delta,
\]
one may build $u_n\in \q E([t_n,+\infty))$, a solution of (mKdV) with initial condition $\tilde{u}_n(t_n)=g_n$, and
\[ \|u_n\|_{\q E([t_n, +\infty))}\leqslantfte C\delta. \]
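The norm identity used above is a direct consequence of scaling. Assuming, as in the definition of $f_n$ earlier, that the $\q E(t)$ norm is $\|\tilde u(t)\|_{L^\infty} + t^{-1/6}\|\partial_p\tilde u(t)\|_{L^2((0,+\infty))}$ (a sketch under that assumption), one has $\|g_n\|_{L^\infty}=\|g_0\|_{L^\infty}$ and
\[
\|\partial_p g_n\|_{L^2((0,+\infty))}^2 = t_n^{2/3}\int_0^\infty |g_0'(t_n^{1/3}p)|^2\,dp = t_n^{1/3}\|g_0'\|_{L^2((0,+\infty))}^2,
\]
so that $t_n^{-1/6}\|\partial_p g_n\|_{L^2((0,+\infty))} = \|g_0'\|_{L^2((0,+\infty))}$.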
\textit{Step 2. Convergence.} Proceeding as in Steps 2-5 of the proof of Proposition \rightef{prop:exist}, we obtain a solution $u\in \q E((0,+\infty))$ of (mKdV) which is the pointwise limit of $(u_n)_{n\in \m N}$ both in the physical and frequency spaces.
Given $t_0>0$, one may construct a solution $v\in \q E([T_-(v(t_0)),+\infty))$ of \varepsilonqref{mkdv} with $v(t_0)=u(t_0)$. By uniqueness, $u\varepsilonquiv v$ for all $t\geqslant t_0$. Since $t_0$ is arbitrary, this implies $u\in \q E((0,+\infty))$.
\textit{Step 3. Behavior at $t=0$.} Define
\[
g_n(t,p)=\leqslantfteft\{\begin{equation}gin{array}{lc}
\tilde{u}_n(t,t^{-1/3}p)& 1\geqslant t>t_n\\
g_0(p)&t_n\geqslant t\geqslant 0
\varepsilonnd{array}\rightight..
\]
Hence, for $t>t_n$,
\[
\partial_tg_n(t,p) = (\partial_t\tilde{u}_n)(t,t^{-1/3}p) -\frac{p}{3t^{4/3}}(\partial_p\tilde{u}_n)(t,t^{-1/3}p)=-\frac{3p}{t}\omegaidehat{\mathcal{I}u_n}(t,t^{-1/3}p).
\]
Remark \rightef{rmk:boundI} implies
\[
\leqslantfteft\|\frac{1}{p}\partial_tg_n(t)\rightight\|_{L^2((0,+\infty))}\leqslantftesssim \frac{1}{t}\|\omegaidehat{\mathcal{I}u_n}(t,t^{-1/3}p)\|_{L^2((0,+\infty))}\leqslantftesssim t^{-5/6}\|\omegaidehat{\mathcal{I}u_n}(t)\|_{L^2((0,+\infty))} \leqslantftesssim \delta t^{-2/3}.
\]
As a consequence, the sequence $(g_n)_{n\in\m N}$ is uniformly continuous in time: for any $0\leqslantfte t,s\leqslantfte 1$,
\[
\left\| \frac{1}{\jap{p}}(g_n(t,p)-g_n(s,p))\right\|_{L^2((0,+\infty))}\leqslant \int_s^t \left\|\frac{1}{p}\partial_t g_n(s') \right\|_{L^2((0,+\infty))} ds' \lesssim \delta|t^{1/3}-s^{1/3}|.
\]
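Here the last inequality simply uses the previous bound and $\int_s^t (s')^{-2/3}\,ds' = 3\,(t^{1/3}-s^{1/3})$ for $s\leqslant t$.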
Since
\[
\leqslantfteft\| \frac{1}{\jap{p}}g_n(t,p)\rightight\|_{L^2(\m R)}\leqslantftesssim \|g_n(t)\|_{L^\infty(\m R)}\leqslantftesssim \delta,
\]
by the Ascoli-Arzelà theorem, there exists $g \in \q C([0,1], L^2(\jap{p}^{-1}dp))$ such that
\[
g_n \to g \mbox{ in }\q C([0,1], L^2(\jap{p}^{-1}dp)),\quad g(0,p)=g_0(p)
\]
and
\[
\left\| \frac{1}{\jap{p}}(g(t,p)-g_0(p))\right\|_{L^2(\m R)} \lesssim \delta t^{1/3},\quad t>0.
\]
On the other hand, for each fixed $t>0$,
\[
g_n(t,p)=\tilde{u}_n(t,t^{-1/3}p) \to \tilde{u}(t,t^{-1/3}p) \quad \text{in } L^\infty(\m R) \text{ as } n\to \infty.
\]
Thus $\tilde{u}(t,t^{-1/3}p)=g(t,p)$ and \varepsilonqref{eq:taxablowup} follows.
\varepsilonnd{proof}
\section{Asymptotic behaviour as $t\to+\infty$} \leqslantftabel{sec:8}
Given $\nu\in (0,1/2)$, consider the norm
\[
\|u\|_{\mathcal{Y}_t^\nu}= t^{\frac{\nu}{3}-\frac{1}{6}}\|\partial_p\tilde{u}\|_{L^2} + \sup_{p\in\m R}\leqslantfteft\{ |p|^{-\nu}\jap{p^3t}^{\frac{\nu}{3}-\frac{1}{6}}|\tilde{u}(p)| \rightight\}
\]
and define
\[
\mathcal{Y}_t^\nu=\leqslantfteft\{ u\in \mathcal{S}'(\m R;\m R)\ : \tilde{u}(0)=0, \ \|u\|_{\mathcal{Y}_t^\nu}<\infty \rightight\}.
\]
For the sake of completeness, we recall the following technical lemma.
\begin{equation}gin{lem}[{\cite[Lemma 2.2]{HN01}}]\leqslantftabel{lem:media0}
Given $\nu\in (9/20, 1/2)$, let $u,v, w\in \q E([1,T])$ be such that $\tilde{w}(t,0)=0$.
Then there exists a universal constant $C>0$ such that
\begin{equation}gin{equation}\leqslantftabel{eq:trilinear}
\forall t \in [1,T], \quad \|u(t)v(t)w(t)\|_{L^2}\leqslantfte C t^{-\frac{5}{6}-\frac{\nu}{3}}\|u(t) \|_{\q E(t)} \|v(t)\|_{\q E(t)} \|w(t)\|_{\mathcal{Y}_t^\nu}.
\varepsilonnd{equation}
Furthermore,
\begin{equation}gin{equation}\leqslantftabel{eq:asympw}
\leqslantfteft|w(t,x)-\frac{1}{t^{1/3}}\mathrm{Re} \Ai\leqslantfteft(\frac{x}{t^{1/3}}\rightight)\tilde{w}(t,y)\rightight|\leqslantftesssim t^{-1/3-\nu/3}\jap{|x|/t^{1/3}}^{-1/4}\|w(t)\|_{\mathcal{Y}_t^\nu},
\varepsilonnd{equation}
where
\begin{equation}gin{equation}\leqslantftabel{eq:p0}
y=\leqslantfteft\{\begin{equation}gin{array}{ll}
\sqrt{-x/3t},& x<0\\
0,& x>0
\varepsilonnd{array}\rightight..
\varepsilonnd{equation}
\varepsilonnd{lem}
\begin{equation}gin{prop}[Asymptotics on the Fourier space]\leqslantftabel{prop:asympfourier}
Given $u\in \q E([1,\infty))$ a solution of $(\partial_t+\partial_{xxx})u=(u^3)_x$ with $\|u\|_{\q E([1,+\infty))}\leqslant \delta$, let $S\in \q E([1,\infty))$ be a self-similar solution with
\[
\omegaidehat{S(1)}(0^+)=\omegaidehat{u(1)}(0^+).
\]
Then, for any $\nu\in (9/20,1/2)$,
\begin{equation}gin{equation}\leqslantftabel{eq:restoemYt}
\|u(t)-S(t)\|_{\mathcal{Y}_t^\nu}<30\delta,\quad t\geqslant 1.
\varepsilonnd{equation}
On the other hand, there exists $U_\infty\in C_b(\m R\setminus\{0\})$ such that
\begin{equation}gin{equation}\leqslantftabel{eq:modificadofourier}
\leqslantfteft| \tilde{u}(t,p) - U_\infty(p)\varepsilonxp\leqslantfteft(-\frac{i\varepsilonpsilon}{4\pi}|U_\infty(p)|^2\leqslantftog t\rightight) \rightight|\leqslantftesssim \frac{\delta}{\jap{p^3t}^{\frac{1}{12}}}
\varepsilonnd{equation}
Finally, one has
\begin{equation}gin{equation}\leqslantftabel{eq:relacaoSU}
\leqslantftim_{p\to 0} |U_\infty(p)| = \leqslantftim_{tp^3\to \infty} |\mathcal{S}(t,p)|.
\varepsilonnd{equation}
\varepsilonnd{prop}
\begin{equation}gin{nb}
As a direct consequence, we see that, if $S$ and $S'$ are two self-similar solutions with $\|S\|_{\q E(1)}, \|S'\|_{\q E(1)}\leqslantfte \delta$ and
\[ \omegaidehat{S(1)}(0)=\omegaidehat{S'(1)}(0), \]
then $S=S'$.
\varepsilonnd{nb}
\begin{equation}gin{proof}
\varepsilonmph{Proof of }\varepsilonqref{eq:restoemYt}.
Set $w(t)=u(t)-S(t)$. Observe that
\[
\|w(1)\|_{\mathcal{Y}_1^\nu}\leqslant 2\delta.
\]
Suppose that there exists $T_1>1$ such that
\[
\|w(t)\|_{\mathcal{Y}_t^\nu}<30\delta,\quad 1\leqslantfte t\leqslantfte T_1,\text{ and }\|w(T_1)\|_{\mathcal{Y}_{T_1}^\nu}=30\delta.
\]
Due to the self-similar structure of $S$, $\mathcal{I}S=0$ and so
\[
\partial_p\omegaidetilde{S}=3te^{itp^3}\omegaidehat{S^3}.
\]
Since $(\partial_t+\partial_{xxx})w=-\varepsilonpsilon(w^3-3uw^2+3u^2w)_x$,
\begin{equation}gin{align*}
\|\tilde{w}_p(t)\|_{L^2((0,+\infty))}&\leqslantfte \|\omegaidehat{\mathcal{I}w}(t)\|_{L^2((0,+\infty))} + 3t\|(w^3-3uw^2+3u^2w)(t)\|_{L^2} \\&\leqslantfte \|\omegaidehat{\mathcal{I}u}(t)\|_{L^2((0,+\infty))} + 30\delta^3 t^{\frac{1}{6}-\frac{\nu}{3}} \leqslantfte 2\delta t^{\frac{1}{6}-\frac{\nu}{3}}.
\varepsilonnd{align*}
Moreover, since $\tilde{w}(0)=0$,
\begin{align*}
|\tilde{w}(t,p)|\leqslant |p|^{1/2}\|\tilde{w}_p(t)\|_{L^2} \leqslant 2\delta |p|^{1/2} t^{\frac{1}{6}-\frac{\nu}{3}} \leqslant 2\delta |p|^{\nu}\jap{p^3t}^{\frac{1}{6}-\frac{\nu}{3}},
\end{align*}
where we used $\frac{1}{2}-\nu=3\left(\frac{1}{6}-\frac{\nu}{3}\right)$ and $p^3t\leqslant \jap{p^3t}$.
Thus
\[
30\delta=\|w(T_1)\|_{\mathcal{Y}_{T_1}^\nu}\leqslantfte 4\delta,
\]
which is a contradiction. Hence
\[
\|w(t)\|_{\mathcal{Y}_t^\nu}<30\delta,\quad t\geqslant 1
\]
and \varepsilonqref{eq:restoemYt} follows.
\varepsilonmph{Proof of }\varepsilonqref{eq:modificadofourier}. From \varepsilonqref{eq:decayuniformt}, we have
\[
|(\tilde{u}E_u)(t,p)-(\tilde{u}E_u)(\tau,p)|\leqslantftesssim \frac{\delta}{\jap{p^3\tau}^{\frac{1}{12}}},\quad t\geqslant \tau
\]
where we recall that, as in Step 3 of the proof of Proposition \ref{prop:exist},
\[
E_u(t,p)=\varepsilonxp\leqslantfteft(-i\varepsilonpsilon\int_1^t\frac{p^3}{4\pi \jap{p^3s}}|\tilde{u}(s,p)|^2ds\rightight).
\]
Thus there exists $U\in L^\infty(\m R)$, $U\in \q C(\m R\setminus\{0\})$, such that
\begin{equation}gin{equation}\leqslantftabel{eq:perfilfinal}
|(\tilde{u}E_u)(t,p)- U(p)|\leqslantftesssim \frac{\delta}{\jap{p^3t}^{\frac{1}{12}}}.
\varepsilonnd{equation}
Writing
\[
\psi(t,p)=\int_1^t\frac{p^3}{4\pi \jap{p^3s}}(|\tilde{u}(s,p)|^2 - |\tilde{u}(t,p)|^2)ds,
\]
we have
\begin{equation}gin{align*}
\psi(t)-\psi(\tau)&=\int_\tau^t \frac{p^3}{4\pi \jap{p^3s}}(|\tilde{u}(s,p)|^2 - |\tilde{u}(t,p)|^2)ds \\
& \qquad + (|\tilde{u}(\tau,p)|^2 - |\tilde{u}(t,p)|^2)\int_1^\tau \frac{p^3}{4\pi \jap{p^3s}}ds
\varepsilonnd{align*}
which implies that
\[
|\psi(t,p)-\psi(\tau,p)|\leqslantftesssim \frac{\delta}{\jap{p^3\tau}^{\frac{1}{12}}}
\]
Therefore there exists $\Psi\in L^\infty(\m R)$, $\Psi\in \q C(\m R\setminus\{0\})$ such that
\begin{equation}gin{equation}\leqslantftabel{eq:fasefinal}
|\psi(t,p)-\Psi(p)|\leqslantftesssim \frac{\delta}{\jap{p^3t}^{\frac{1}{12}}}
\varepsilonnd{equation}
We decompose
\begin{align*}
\int_{1}^t\frac{p^3ds}{\jap{p^3s}} &= \int_{p^3}^{p^3t}\frac{ds}{\jap{s}} = \int_{p^3}^{p^3t}\frac{ds}{s} +\int_{p^3}^{p^3t}\left(\frac{1}{\jap{s}}-\frac{1}{s}\right)ds \\&= \log t + \int_{p^3}^\infty \frac{s-\jap{s}}{s\jap{s}}\,ds - \int_{tp^3}^\infty \frac{s-\jap{s}}{s\jap{s}}\,ds =:\log t + \mathcal{R}(p^3) - \mathcal{R}(tp^3).
\end{align*}
Since
\[
|\jap{s}-s|=s\leqslantfteft(\leqslantfteft(1+\frac{1}{s^2}\rightight)^{1/2}-1\rightight)\leqslantftesssim s^{-1},
\]
we have
\[
|\mathcal{R}(y)|\leqslantftesssim \frac{1}{y^2}.
\]
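Indeed, this follows directly from the previous bound on $|\jap{s}-s|$ (a one-line computation added for completeness):
\[
|\mathcal{R}(y)|\leqslant \int_y^\infty \frac{|\jap{s}-s|}{s\jap{s}}\,ds \lesssim \int_y^\infty \frac{ds}{s^3} = \frac{1}{2y^2}.
\]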
Thus,
\[
\left|\int_1^t \frac{p^3 |\tilde{u}(s,p)|^2}{4\pi \jap{p^3s}}\,ds - \Psi(p)-\frac{1}{4\pi}|U(p)|^2\mathcal{R}(p^3) - \frac{1}{4\pi}|U(p)|^2\log t\right| \lesssim \frac{\delta}{(p^3t)^{\frac{1}{12}}}.
\]
Defining
\[
U_\infty(p)=U(p)\varepsilonxp\leqslantfteft(-i\varepsilonpsilon\Psi(p) - \frac{i\varepsilonpsilon}{4\pi}|U(p)|^2\mathcal{R}(p^3)\rightight),
\]
\varepsilonqref{eq:perfilfinal} and \varepsilonqref{eq:fasefinal} imply
\begin{equation}gin{align*}
\leqslantfteft|\tilde{u}(t,p) - U_\infty(p)\varepsilonxp\leqslantfteft(- \frac{i\varepsilonpsilon}{4\pi}|U_\infty(p)|^2\leqslantftog t\rightight) \rightight|\leqslantftesssim \frac{\delta}{\jap{p^3t}^{\frac{1}{12}}}.
\varepsilonnd{align*}
\varepsilonmph{Proof of }\varepsilonqref{eq:relacaoSU}: Using \varepsilonqref{eq:restoemYt} and \varepsilonqref{eq:modificadofourier},
\[
\leqslantfteft| |S(t,p)| - |U_\infty(p)| \rightight|\leqslantftesssim |p|^{\nu}\jap{tp^3}^{\frac{1}{6}-\frac{\nu}{3}} + \frac{1}{\jap{p^3t}^{\frac{1}{12}}}
\]
Take, at the same time, $tp^3\to \infty$ and $p\to 0$ so that the right-hand side goes to zero. Then
\[
\leqslantftim_{p\to 0} |U_\infty(p)| = \leqslantftim_{tp^3\to \infty} |S(t,p)|.
\]
\varepsilonnd{proof}
\begin{equation}gin{nb}
If $u$ is an $L^2$ solution of (mKdV), then \varepsilonqref{eq:modificadofourier} implies that $U_\infty\in L^p(\m R)$, for $p$ large. On the other hand, if one takes $u$ as a self-similar solution, then $|U_\infty|$ is a constant.
\varepsilonnd{nb}
\begin{equation}gin{cor}[Asymptotics in the physical space]
Given $u\in \q E([1,\infty))$ a solution of $(\partial_t + \partial_{xxx})u=(u^3)_x$ with $\|u\|_{\q E([1,+\infty))}\leqslant \delta$, let $S\in \q E([1,\infty))$ be the self-similar solution with
\[
\omegaidehat{S(1)}(0^+)=\omegaidehat{u(1)}(0^+).
\]
Then, for any $\nu\in(9/20,1/2)$,
\begin{equation}gin{equation}\leqslantftabel{eq:asympu}
\leqslantfteft\|u(t)-S(t)\rightight\|_{L^\infty}\leqslantftesssim \frac{\delta}{t^{1/3+\nu/3}}.
\varepsilonnd{equation}
On the other hand,
\begin{equation}gin{equation}\leqslantftabel{eq:asympumodified}
\leqslantfteft|u(t,x)-\frac{1}{t^{1/3}}\mathrm{Re} \Ai\leqslantfteft(\frac{x}{t^{1/3}}\rightight)U_\infty\leqslantfteft(y\rightight)\varepsilonxp\leqslantfteft(-\frac{i\varepsilonpsilon}{6}|U_\infty(y)|^2\leqslantftog t\rightight)\rightight|\leqslantfte \frac{\delta}{t^{1/3}\jap{x/t^{1/3}}^{3/10}},
\varepsilonnd{equation}
where $y$ is defined by \varepsilonqref{eq:p0}.
\varepsilonnd{cor}
\begin{equation}gin{proof}
Define $w(t,x):=u(t,x)-S(t,x)$. Then, by Proposition \ref{prop:asympfourier},
\[
\|w(t)\|_{\mathcal{Y}_t^\nu}<30\delta.
\]
The definition of the norm of $\mathcal{Y}_t^\nu$ and the decay estimate for the Airy-Fock function imply
\[
\leqslantfteft|\frac{1}{t^{1/3}}\mathrm{Re} \Ai\leqslantfteft(\frac{x}{t^{1/3}}\rightight)\tilde{w}\leqslantfteft(t,\sqrt{\frac{|x|}{3t}}\rightight)\rightight|\leqslantftesssim \delta t^{-1/3}\jap{x/t^{1/3}}^{-1/4}\leqslantfteft|x/t\rightight|^{\nu/2}\jap{x^{3/2}/t^{1/2}}^{\frac{1}{6}-\frac{\nu}{3}}\leqslantftesssim \frac{\delta}{t^{1/3+\nu/3}}.
\]
Together with \varepsilonqref{eq:asympw}, we obtain \varepsilonqref{eq:asympu}. Finally, \varepsilonqref{eq:asympumodified} follows from \varepsilonqref{eq:est6lem1} and \varepsilonqref{eq:modificadofourier}.
\varepsilonnd{proof}
\section{Well-posedness for perturbations of self-similar solutions} \leqslantftabel{sec:9}
We remark that the results of section \rightef{sec:6} exclude nontrivial self-similar solutions. On the other hand, the lack of backwards uniqueness in $\q E$ is especially problematic for the study of the dynamics around self-similar solutions at time $t=0$. To overcome this, we consider the space $\q R_\alpha$ defined in \varepsilonqref{def:R_a} in Section 2.
First let us observe that $\q R_\alpha$ is not empty. If one considers $\alpha=1$ and $a_k=1$, $k\geqslant 1$, then any $L^2$-function $f$ with $\operatorname{supp} \hat{f} \subset B_{R}(0)$, $R<1$, is in $\q R_\alpha$:
\[
\|\partial_x^k f\|_{L^2}^2 = \|p^k \hat{f}\|_{L^2}^2\leqslant R^{2k} \| \hat{f}\|_{L^2}^2 \leqslant \| \hat f \|_{L^2}^2.
\]
This means that low-frequency perturbations of self-similar solutions are acceptable.
\begin{equation}gin{prop}[Backward uniqueness]
If $u_1,u_2\in \q{E}(t_1,t_2)$ are two solutions of \varepsilonqref{mkdv} with $u_1-u_2\in L^\infty((t_1,t_2),\q R_\alpha)$ and $u_1(t_2)=u_2(t_2)$, then $u_1\varepsilonquiv u_2$.
\varepsilonnd{prop}
\begin{equation}gin{proof}
Having the \textit{a priori} knowledge that $u_1-u_2\in L^\infty((t_1,t_2),H^1(\m R))$ allows us to proceed as in Proposition \rightef{prop:uniq}.
\varepsilonnd{proof}
Now fix a self-similar profile $\mathcal{S}\in \q E(1)$ and $\tilde{S}(t,p)=\mathcal{S}(t^{1/3}p)$. Define
\begin{equation}gin{equation}
\q E_S(t) = \leqslantfteft\{ u\in \q E(t): \hat{u}(t)-\tilde{S}(t)\in \q R_\alpha \rightight\}
\varepsilonnd{equation}
and, for any time interval $I\subset(0,\infty)$,
\begin{equation}gin{equation}
\q E_S(I)=\leqslantfteft\{ u\in \q E(I): \hat{u}-\tilde{S}\in \q C(I,\q R_\alpha) \rightight\}
\varepsilonnd{equation}
endowed with the norm
\[
\|u\|_{\q E_S(I)}=\sup_{t \in I} \left(\|u(t)\|_{\q E(t)} + \|\hat{u}(t)-\tilde{S}(t)\|_{\q R_\alpha}\right).
\]
The remainder of this section concerns the existence of solutions in $\q E_S$, which follows from a persistence result analogous to Proposition \ref{prop:persistL2}.
\begin{equation}gin{lem}\leqslantftabel{lem:uniqS}
Let $v$ be a solution of \varepsilonqref{mkdv} given by Proposition \rightef{prop:exist} with initial condition $\tilde{v}(1)=\mathcal{S}$. Then $v\varepsilonquiv S$.
\varepsilonnd{lem}
\begin{equation}gin{proof}
By forward uniqueness, $v\varepsilonquiv S$ for $t>1$. Since $S$ is a self-similar solution, it satisfies
\[
\|\omegaidehat{\mathcal{I}v}(t)\|_{L^2((0,+\infty))}= \|\omegaidehat{\mathcal{I}S}(t)\|_{L^2((0,+\infty))}=0, \ t\geqslant 1.
\]
The inequality \varepsilonqref{eq:estIparatras} then implies that $\omegaidehat{\mathcal{I}v}(t,p)\varepsilonquiv 0$, $t<1, p\neq0$. Setting
\[
\mathcal{V}(t,p)=\tilde{v}(t,t^{-1/3}p)
\]
a simple computation yields
\[
\partial_t\mathcal{V}(t,p)=-\frac{3p}{t}\omegaidehat{\mathcal{I}v}(t, t^{-1/3}p)\varepsilonquiv 0.
\]
Hence $\mathcal{V}(t,p)=\mathcal{V}(1,p)=\mathcal{S}(p)$ and so $v\varepsilonquiv S$.
\varepsilonnd{proof}
For any given $n\in\m N$, we define $S_n$ as the solution of \varepsilonqref{pimkdv} with $\tilde{u}_1=\mathcal{S}$. For large $n$, Lemma \rightef{lem:uniformtime} implies that $S_n$ is defined on $[T_-,T_+]$ and Lemma \rightef{lem:uniqS} ensures that the limit of $S_n$ is the self-similar solution $S$.
\begin{equation}gin{prop}[Persistence of $\q E_S$]
Given $u_1\in \q E_S(1)$, the corresponding solution $u\in \q E(I)$ of \varepsilonqref{mkdv} given by Proposition \rightef{prop:exist} satisfies $u\in \q E_S(I)$.
\varepsilonnd{prop}
\begin{equation}gin{proof}
It suffices to consider $I$ bounded. We take the solutions $u_n$ of the approximate equation \varepsilonqref{pimkdv}. Define $w_n=u_n-S_n$. It follows directly that $\partial_x^kw_n\in \q C(I,\q L^2)$, for any $k$. Then, integrating the equation for $w$,
\begin{equation}gin{align*}
\leqslantfteft|\frac{1}{2}\frac{d}{dt}\|\partial_x^kw_n\|_{L^2}^2 \rightight|&\leqslantftesssim \leqslantfteft|\int \partial_x^k\leqslantfteft(\Pi_n((u_n^3-S_n^3)_x)\rightight)\partial_x^k w_n dx \rightight|\\&\leqslantftesssim (\|(u_n)^2\|_{W^{1,\infty}} + \|(S_n)^2\|_{W^{1,\infty}} )\|w_n\|_{H^1}\|\Pi_n\partial_x^{2k} w_n\|_{L^2} \\&\leqslantftesssim (\|u\|_{\q E(I)}^2 + \|S\|_{\q E(I)}^2)\|w_n\|_{\q R_\alpha}\|\partial_x^{2k} w_n\|_{L^2}.
\varepsilonnd{align*}
Hence
\begin{equation}gin{align*}
\MoveEqLeft a_k \|\partial_x^k w_n(t)\|_{L^2}^2 \leqslantfte a_k\|\partial_x^k w_n(1)\|_{L^2}^2 + C\int_1^t(\|u\|_{\q E(I)}^2 + \|S\|_{\q E(I)}^2)\|w_n(s)\|_{\q R_\alpha}a_k\|\partial_x^{2k} w_n(s)\|_{L^2}ds \\
&\leqslantfte a_k\|\partial_x^k w_n(1)\|_{L^2}^2 + C\alpha\int_1^t(\|u\|_{\q E(I)}^2 + \|S\|_{\q E(I)}^2)\|w_n(s)\|_{\q R_\alpha}a_{2k}\|\partial_x^{2k} w_n(s)\|_{L^2}ds.
\varepsilonnd{align*}
Taking the supremum in $k$ and applying Gronwall's lemma,
\[
\|w_n(t)\|_{\q R_\alpha}^2\leqslantfte \|w_n(1)\|_{\q R_\alpha}^2\varepsilonxp\leqslantfteft(C\alpha(\|u\|_{\q E(I)}^2 + \|S\|_{\q E(I)}^2)|t-1|\rightight)
\]
uniformly in $n$. Taking the limit $n\to \infty$, we obtain $u-S\in L^\infty(I, \q R_\alpha)$. The uniqueness in $\q E_S$ reduces the continuity at any time to the continuity at $t=1$, which follows from the above inequality. Indeed,
\begin{equation}gin{align*}
\leqslantftim_{t\to 1}\|w(t)\|_{\q R_\alpha}^2 &\leqslantfte \leqslantftim_{t\to 1}\|w_n(t)\|_{\q R_\alpha}^2 \leqslantfte \leqslantftim_{t\to 1}\|w_n(1)\|_{\q R_\alpha}^2\varepsilonxp\leqslantfteft(C\alpha(\|u\|_{\q E(I)}^2 + \|S\|_{\q E(I)}^2)|t-1|\rightight) \\
&= \|w_n(1)\|_{\q R_\alpha}^2,
\varepsilonnd{align*}
and, taking the limit on the right-hand side,
\[
\leqslantftim_{t\to 1}\|w(t)\|_{\q R_\alpha}^2\leqslantfte \|w(1)\|_{\q R_\alpha}^2.
\]
Since $w(t)\rightightharpoonup w(1)$ in $\q R_\alpha$, we obtain strong convergence in $\q R_\alpha$.
\varepsilonnd{proof}
We are now in a position to prove Proposition \rightef{prop:stab} stated in Section 2.
\begin{equation}gin{proof}[Proof of Proposition \rightef{prop:stab}]
Suppose that $u=S+w$, with $S$ self-similar and $w\in \q R_\alpha$, is defined on an interval $I\subset (0,1]$. Then, setting $z:=u^2+uS + S^2$, $(\partial_t + \partial_{xxx})w= -\varepsilonpsilon (zw)_x$ and, by Sobolev embedding,
\[
\|z\|_{L^\infty}\leqslantftesssim \|w\|_{L^\infty}^2 + \|S\|_{L^\infty}^2 \leqslantftesssim \|w\|_{\q R_\alpha}^2 + t^{-2/3}\|S\|_{\q E(1)}^2.
\]
The following formal computations can be made rigorous by approximating with solutions of \varepsilonqref{pimkdv}. Integrating directly the equation for $w$,
\begin{equation}gin{align*}
\leqslantfteft|\frac{d}{dt}\|\partial_x^k w\|_{L^2}^2\rightight|&\leqslantftesssim \leqslantfteft|\int \partial_x^{k+1}(zw)\partial_x^kwdx\rightight|\leqslantftesssim \|z\|_{L^\infty}\|w\|_{L^2}\|\partial_x^{2k+1} w\|_{L^2}\\
&\leqslantftesssim (\|w\|_{\q R_\alpha(I)}^2 + t^{-2/3}\|S\|_{\q E(1)}^2)\|w\|_{L^2}\|\partial_x^{2k+1} w\|_{L^2}
\varepsilonnd{align*}
and so, for any $t\in I$,
\begin{equation}gin{align*}
a_k\|\partial_x^k w(t)\|_{L^2}^2&\leqslantftesssim \|w(1)\|_{\q R_\alpha}^2 + \frac{a_k}{a_{2k+1}}(\|w\|_{\q R_\alpha(I)}^2 + \|S\|_{\q E(1)}^2)\|w\|_{\q R_\alpha(I)}^2\\&\leqslantftesssim \|w(1)\|_{\q R_\alpha}^2 + \alpha(\|w\|_{\q R_\alpha(I)}^2 + \|S\|_{\q E(1)}^2)\|w\|_{\q R_\alpha(I)}^2.
\varepsilonnd{align*}
These estimates imply
\begin{equation}gin{align*}
\|w\|_{\q R_\alpha(I)}^2\leqslantftesssim \|w(1)\|_{\q R_\alpha}^2 + \alpha(\|w\|_{\q R_\alpha(I)}^2 + \|S\|_{\q E(1)}^2)\|w\|_{\q R_\alpha(I)}^2.
\varepsilonnd{align*}
Therefore, if $\|w(1)\|_{\q R_\alpha}^2, \alpha\|S\|_{\q E(1)}^2< \delta$ sufficiently small,
\[
\| w \|_{\q R_\alpha(I)}^2\leqslantftesssim \delta.
\]
On the other hand, since $\mathcal{I}w=\mathcal{I}u$ and $(\partial_t + \partial_{xxx})\mathcal{I}u = -3 \varepsilonpsilon u^2(\mathcal{I}u)_x$,
\[
\leqslantfteft|\frac{1}{2}\frac{d}{dt}\|\mathcal{I}w\|_{L^2}^2\rightight|\leqslantftesssim \leqslantfteft| \int u^2(\mathcal{I}w)_x\mathcal{I}w \rightight| \leqslantftesssim \|uu_x\|_{L^\infty}\|\mathcal{I}w\|_{L^2}^2 \leqslantftesssim (\|w\|_{\q R_\alpha(I)}^2 + \frac{1}{t}\|S\|_{\q E(I)}^2)\|\mathcal{I}w\|_{L^2}^2.
\]
The integration of this inequality implies that $ \|\mathcal{I}u\|_{L^2}^2$ cannot blow up in $I$. Finally, since
\[ \partial_p \tilde{w} = \omegaidehat{\mathcal{I}w} + 3t \omegaidehat{zw}, \]
we estimate
\begin{equation}gin{align*}
\|\tilde{w}\|_{L^\infty}&\leqslantftesssim \|\tilde{w}\|_{L^2}^{1/2}\|\partial_{p}\tilde{w}\|_{L^2}^{1/2} \leqslantftesssim \|w\|_{\q R_\alpha(I)}^{1/2}\leqslantfteft(\|\mathcal{I}w\|_{L^2} + t\|z\|_\infty\|w\|_{L^2}\rightight)^{1/2}\\&\leqslantftesssim \|w\|_{\q R_\alpha(I)}^{1/2}\leqslantfteft(\|\mathcal{I}w\|_{L^2} + t(\|w\|_{\q R_\alpha}^2 + t^{-2/3}\|S\|_{\q E(1)}^2)\|w\|_{L^2}\rightight)^{1/2}.
\varepsilonnd{align*}
By Proposition \ref{prop:blowupaltern}, $u$ cannot blow up at any $t>0$ and the result follows.
\varepsilonnd{proof}
\begin{equation}gin{nb}
As shown in Lemma \rightef{lema:decayE}, the space $\q E$ controls $\|w^2\|_{W^{1,\infty}}$. The main difficulty when one studies the limit $t\to 0^+$ is that the corresponding estimate includes terms behaving as $O(t^{-1})$. The key argument in the proof of stability is the control of $\|w^2\|_{W^{1,\infty}}$ using the $\q R_\alpha$ norm, which does not depend on time.
\varepsilonnd{nb}
\begin{equation}gin{nb}
In Proposition \rightef{prop:blowupgivenprofile}, we built solutions defined on $(0,1]$ which blow up at $t=0$. One may then ask if these solutions are also stable under $\q R_\alpha$-perturbations. However, in the above proof, the nullity of $(\mathcal{I}S)_x$ is essential in order to close the estimate for $\mathcal{I}w$.
\varepsilonnd{nb}
\appendix
\section{Asymptotic development of the nonlinearity}
\begin{equation}gin{proof}[Proof of Lemma \rightef{lem:desenvolveoscilatorio}]
The goal is to obtain the right asymptotics for
\begin{equation}gin{equation}
\cal N[u](t,p)=p^3\iint_{q_1+q_2+q_3=1} e^{-itp^3(1-q_1^3-q_2^3-q_3^3)}f(t,pq_1)g(t,pq_2)h(t,pq_3)dq_1dq_2
\varepsilonnd{equation}
assuming that $f$ (and \varepsilonmph{mutatis mutandis} $g,h$) satisfies
\[
f(t,p)=\overline{f(t,-p)},\quad \|f\|_{L^\infty} + t^{-\frac{1}{6}}\|\partial_p f\|_{L^2((0,+\infty))} = \|f\|<\infty.
\]
The usual stationary phase arguments either use high regularity assumptions or that all the functions involved have enough spatial decay (specifically $L^2$, in order to apply Parseval's identity), which fail in our setting: as mentioned earlier, the way the computations are performed is critical in order to close the argument with suitable bounds. Before we proceed, let us first explain the ideas of the computations, and start with the main order term. The phase
\[
Q:= -(1-q_1^3-q_2^3-q_3^3)
\] has four stationary points:
\[
\leqslantfteft(\frac{1}{3},\frac{1}{3},\frac{1}{3}\rightight),\quad (1,1,-1),\quad (-1,1,1),\quad (1,-1,1).
\]
The last three are connected through the symmetry between $q_1,q_2$ and $q_3$. Then we split the domain of integration using three cutoff functions $\phi_j$ with $\phi_1+\phi_2+\phi_3=1$ such that the support of $\phi_j$ does not include the stationary points with $q_k=-1$, $k\neq j$. For example, one may choose
\[
\phi_3\varepsilonquiv 1\text{ if }q_2>1/6 \text{ and }q_3<1/2,\quad \phi_3\varepsilonquiv 0\text{ if }q_2<1/12\text{ or }q_3>2/3
\]
and define $\phi_1$, $\phi_2$ in an analogous way. Therefore, without loss of generality,
in order to study the asymptotics for $\cal N$, it suffices to consider
\[
p^3\iint_{q_1+q_2+q_3=1} e^{itp^3Q}f(t,pq_1)g(t,pq_2)h(t,pq_3)\phi(q_1,q_2)dq_1dq_2,
\]
where $\phi:=\phi_3$ and the relevant stationary points are
\[
\leqslantfteft(\frac{1}{3},\frac{1}{3},\frac{1}{3}\rightight),\quad (1,1,-1).
\]
If the general stationary phase argument were applicable, then
\begin{equation}gin{align*}
\cal N(t,p)&=k_1(t,p)e^{-\frac{8itp^3}{9}}f\leqslantfteft(t,\frac{p}{3}\rightight)g\leqslantfteft(t,\frac{p}{3}\rightight)h\leqslantfteft(t,\frac{p}{3}\rightight) + k_2(t,p) f(t,p)g(t,p)\overline{h(t,p)} \\&\qquad+ \text{remainder.}
\varepsilonnd{align*}
If one takes smooth cutoff functions around the stationary points $\psi_{1/3}$ and $\psi_1$, then the stationary phase argument for smooth functions (see, for example, \cite{F89}) implies that, up to a small remainder,
\begin{equation}gin{align*}
\MoveEqLeft \frac{1}{3}k_1(t,p)e^{-\frac{8itp^3}{9}}f\leqslantfteft(t,\frac{p}{3}\rightight)g\leqslantfteft(t,\frac{p}{3}\rightight)h\leqslantfteft(t,\frac{p}{3}\rightight) \\
& = f\leqslantfteft(t,\frac{p}{3}\rightight)g\leqslantfteft(t,\frac{p}{3}\rightight)h\leqslantfteft(t,\frac{p}{3}\rightight) \leqslantfteft(p^3\iint_{q_1+q_2+q_3=1}e^{itp^3Q} \psi_{1/3}(q_1,q_2)\phi(q_1,q_2) dq_1dq_2\rightight),
\varepsilonnd{align*}
and
\begin{equation}gin{align*}
\MoveEqLeft k_2(t,p)f(t,p)g(t,p)\overline{h(t,p)} \\
& = f(t,p)g(t,p)\overline{h(t,p)} \leqslantfteft(p^3\iint_{q_1+q_2+q_3=1}e^{itp^3Q} \psi_{1}(q_1,q_2)\phi(q_1,q_2) dq_1dq_2\rightight).
\varepsilonnd{align*}
This implies that the remainder in the stationary phase argument is given by
\begin{equation}gin{equation}\leqslantftabel{eq:resto}
p^3\iint_{q_1+q_2+q_3=1} e^{itp^3Q}\Phi dq_1dq_2
\varepsilonnd{equation}
where
\begin{equation}gin{align*}
\Phi &=f(t,pq_1)g(t,pq_2)h(t,pq_3) - f(t,p)g(t,p)\overline{h(t,p)}\psi_{1}(q_1,q_2) \\&\qquad- f\leqslantfteft(t,\frac{p}{3}\rightight)g\leqslantfteft(t,\frac{p}{3}\rightight)h\leqslantfteft(t,\frac{p}{3}\rightight)\psi_{1/3}(q_1,q_2).
\varepsilonnd{align*}
We now explain the main ideas behind the computations for the remainder.
The function $\Phi$, due to the fact that $f,g$ and $h$ are Hölder continuous of degree $1/2$, satisfies
\[ |\Phi|\leqslantftesssim p^{1/2}t^{1/6}(\sqrt{|q_1-1/3|} + \sqrt{|q_2-1/3|}) \]
and
\[ |\Phi|\leqslantftesssim p^{1/2}t^{1/6}(\sqrt{|q_1-1|} + \sqrt{|q_2-1|}). \]
There are three regions of integration:
\begin{equation}gin{itemize}
\item the inner region: $q_3>1/6$,
\item the middle region: $-2<q_3<1/5$,
\item the outer region: $q_3<-3/2$.
\varepsilonnd{itemize}
In the first and the second regions, we are close to a stationary point and we shall use the Hölder estimate. In the third region, $q_1$ and $q_2$ are large, meaning that we are far from any possible singularity coming from integration by parts. The splitting of the integral into these three regions can be accomplished by using appropriate cut-off functions; however, to simplify the exposition of the proof, we omit these terms.
Instead of applying the usual relation $itp^3(\partial Q)e^{itp^3Q}=\partial(e^{itp^3Q})$, we use
\begin{equation}gin{equation}
Ke^{itp^3Q}=\frac{1}{1+itp^3K\partial Q}\partial(Ke^{itp^3Q}),\quad \partial K\varepsilonquiv 1,\quad K=0\text{ at the stationary point}.
\varepsilonnd{equation}
The introduction of $K$ leads to some simplifications: firstly, there is no singularity appearing in the integration by parts; second, the $K$ in the numerator will add some degeneracy.
The required decay has to come from two integration by parts (one integration eliminates the $p^3$ factor but does not show decay). This has to be done carefully, since $f,g$ and $h$ cannot be differentiated more than once. The key fact is that one may differentiate, for example,
\[ f_p(pq_1)g(pq_2)h(pq_3) \]
in the $q_2$ (or $q_3$) direction. Therefore, the two required integration by parts are made in different directions, so that no second derivatives of $f$ appear.
Even though one could perform all the computations in the $q_1,q_2$ coordinates, we introduce some linear change of variables so that it becomes clearer in which direction we integrate by parts and which terms are irrelevant in each region. For example, we shall say that $q_1$ is irrelevant on the middle region and throw it away when taking absolute values in the integrand.
We now bound the remainder terms in detail. Throughout this proof, $\tau=tp^3$ and $p>0$.
Consider the change of variables
\[
1-q_1=\leqslantftambda- \mu ,\quad 1-q_2= \leqslantftambda + \mu ,\quad 1-q_3=2(1-\leqslantftambda).
\]
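As a consistency check (a direct substitution, not part of the original argument), the two relevant stationary points correspond in the new coordinates to
\[
(q_1,q_2,q_3)=\left(\tfrac13,\tfrac13,\tfrac13\right) \iff (\lambda,\mu)=\left(\tfrac23,0\right), \qquad (q_1,q_2,q_3)=(1,1,-1) \iff (\lambda,\mu)=(0,0),
\]
consistent with the two choices $\lambda_0\in\{0,2/3\}$ used below.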
Notice that both stationary points satisfy $ \mu =0$. We now use the relation
\[
e^{i\tau Q}=\frac{1}{1+4i\tau \mu ^2(1-\leqslantftambda)}\partial_ \mu ( \mu e^{i\tau Q})
\]
and integrate by parts \varepsilonqref{eq:resto}:
\begin{equation}gin{align*}
\iint e^{i\tau Q}\Phi dq_1dq_2 &= \iint e^{i\tau Q}\Phi \frac{8i\tau \mu ^2(1-\leqslantftambda)}{(1+4i\tau \mu ^2(1-\leqslantftambda))^2}dq_1dq_2 \\
& \quad + \iint e^{i\tau Q}\Phi_{q_1}\frac{ \mu }{1+4i\tau \mu ^2(1-\leqslantftambda)}dq_1dq_2 \\
& \quad - \iint e^{i\tau Q}\Phi_{q_2}\frac{ \mu }{1+4i\tau \mu ^2(1-\leqslantftambda)}dq_1dq_2 \\
& = M_1 + M_2 - M_3.
\varepsilonnd{align*}
The estimate for $M_3$ follows from similar computations as those for $M_2$.
We will bound $M_1$ and $M_2$ separately, depending on whether $\tau$ is smaller or greater than $1$, in the four claims below.
Let us focus first on $M_1$. We take $\eta= \mu \sqrt{1-\lambda}$ and use, for a fixed $\lambda_0\in\{0,2/3\}$, the relation
\[
e^{i\tau Q}=\frac{1}{1 + 2i\tau (\leqslantftambda-\leqslantftambda_0)\leqslantftambda(2-3\leqslantftambda)}\partial_\leqslantftambda((\leqslantftambda-\leqslantftambda_0)e^{i\tau Q}).
\]
Hence
\begin{equation}gin{align*}
M_1 & = \iint e^{i\tau Q}\Phi \frac{8i\tau \mu ^2(1-\leqslantftambda)}{(1+4i\tau \mu ^2(1-\leqslantftambda))^2}dq_1dq_2= \iint e^{i\tau Q} \Phi \frac{8i\tau \varepsilonta^2}{(1+4i\tau \varepsilonta^2)^2} \frac{d\varepsilonta d\leqslantftambda}{\sqrt{1-\leqslantftambda}} \\
& = \iint e^{i\tau Q}\leqslantfteft[ \Phi_{q_1}\leqslantfteft(1+\frac{ \mu }{2(1-\leqslantftambda)}\rightight) + \Phi_{q_2}\leqslantfteft(1-\frac{ \mu }{2(1-\leqslantftambda)}\rightight) -2\Phi_{q_3}\rightight] \\&\qquad\qquad\times\frac{8i\tau (\leqslantftambda-\leqslantftambda_0) \varepsilonta^2d\leqslantftambda d\varepsilonta}{(1+4i\tau\varepsilonta^2)^2(1-\leqslantftambda)^{1/2}(1 + 2i\tau (\leqslantftambda-\leqslantftambda_0)\leqslantftambda(2-3\leqslantftambda))} \\
& \quad + \iint e^{i\tau Q}\Phi \leqslantfteft(-\frac{1}{2\sqrt{(1-\leqslantftambda)}} + \frac{2i\tau\leqslantfteft( \leqslantftambda(2-3\leqslantftambda) + (\leqslantftambda-\leqslantftambda_0)(2-3\leqslantftambda) - 3(\leqslantftambda-\leqslantftambda_0)\leqslantftambda \rightight)}{1+3i\tau(\leqslantftambda-\leqslantftambda_0)\leqslantftambda(2-3\leqslantftambda)}\rightight) \\
& \qquad \times\frac{8i\tau (\leqslantftambda-\leqslantftambda_0) \varepsilonta^2d\leqslantftambda d\varepsilonta}{(1+4i\tau\varepsilonta^2)^2(1-\leqslantftambda)^{1/2}(1 + 2i\tau (\leqslantftambda-\leqslantftambda_0)\leqslantftambda(2-3\leqslantftambda))}.
\varepsilonnd{align*}
\begin{equation}gin{claim}
For $\tau \geqslant 1$, we have the bound on $M_1$:
\[ |M_1| \leqslantftesssim \tau^{-13/12}. \]
\varepsilonnd{claim}
\begin{equation}gin{proof}
Bounds in the inner region: here we choose $\leqslantftambda_0=0$. Since
\[ 2-3\leqslantftambda, \ 1-\leqslantftambda, \ 1\pm \frac{ \mu }{2(1-\leqslantftambda)}\text{ are irrelevant,} \]
a direct bound on $M_1$ yields
\begin{equation}gin{align*}
|M_1| & \leqslantftesssim \iint \frac{|\tau \varepsilonta^2 \leqslantftambda|}{|1+i\tau \varepsilonta^2|^2|1+i\tau \leqslantftambda^2|}(|\Phi_{q_1}| + |\Phi_{q_2}| + |\Phi_{q_3}|)d\leqslantftambda d\varepsilonta \\
& \quad + \iint |\Phi|\leqslantfteft(\frac{|\tau \leqslantftambda\varepsilonta^2|}{|1+i\tau \leqslantftambda^2||1+i\tau \varepsilonta^2|^2} + \frac{|\tau \leqslantftambda^2||\tau\varepsilonta^2|}{|1+i\tau \leqslantftambda^2|^2|1+i\tau \varepsilonta^2|^2}\rightight)d\leqslantftambda d\varepsilonta.
\varepsilonnd{align*}
The first term is bounded by Cauchy-Schwarz:
\begin{align*}
\MoveEqLeft \iint \frac{|\tau \eta^2 \lambda|}{|1+i\tau \eta^2|^2|1+i\tau \lambda^2|}(|\Phi_{q_1}| + |\Phi_{q_2}| + |\Phi_{q_3}|)d\lambda d\eta \\
&\lesssim \tau^{1/6} \int \frac{\tau\eta^2d\eta}{|1+i\tau\eta^2|^2}\left(\int \frac{\lambda^2d\lambda}{|1+i\tau \lambda^2|^2}\right)^{1/2} \\
& \lesssim \tau^{-13/12}.
\end{align*}
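In the estimates of this proof we repeatedly use the elementary rescaling bounds obtained by substituting $\eta=\tau^{-1/2}u$ (respectively $\lambda=\tau^{-1/2}u$) and enlarging the domain of integration to $\mathbb{R}$:
\[
\int \frac{\tau\eta^2\,d\eta}{|1+i\tau\eta^2|^2}\lesssim \tau^{-1/2}\int_{\mathbb{R}}\frac{u^2\,du}{1+u^4}\lesssim \tau^{-1/2},
\qquad
\int \frac{\lambda^2\,d\lambda}{|1+i\tau\lambda^2|^2}\lesssim \tau^{-3/2}\int_{\mathbb{R}}\frac{u^2\,du}{1+u^4}\lesssim \tau^{-3/2}.
\]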
For the second and the third term, we use the Hölder estimate for $|\Phi|$:
\[ |\Phi|\lesssim \tau^{1/6}(\sqrt{|\lambda|} + \sqrt{|\eta|}) \]
and obtain
\begin{align*}
\iint |\Phi|\frac{|\tau \lambda\eta^2| d\lambda d\eta}{|1+i\tau \lambda^2||1+i\tau \eta^2|^2} \lesssim \tau^{1/6}\iint \frac{|\lambda|^{3/2}|\tau\eta^2| + |\tau \lambda||\eta|^{5/2}}{|1+i\tau \lambda^2||1+i\tau\eta^2|^2}d\lambda d\eta \lesssim \tau^{-13/12}
\end{align*}
and
\begin{align*}
\iint |\Phi|\frac{|\tau \lambda^2||\tau\eta^2| d\lambda d\eta}{|1+3i\tau \lambda^2|^2|1+i\tau \eta^2|^2} \lesssim \tau^{1/6}\iint \frac{|\tau|^2(|\lambda|^{5/2}|\eta|^2 + |\lambda|^2|\eta|^{5/2})}{|1+i\tau \lambda^2|^2|1+i\tau \eta^2|^2}d\lambda d\eta \lesssim {\tau^{-13/12}}.
\end{align*}
Bounds in the middle region: here we take $\lambda_0=2/3$. Since
\[ \lambda, \ 1-\lambda, \ 1\pm \frac{\mu}{2(1-\lambda)}\text{ are irrelevant,} \]
a direct bound on $M_1$ yields
\begin{align*}
|M_1|&\lesssim \iint \frac{|\tau \eta^2 (\lambda-2/3)|}{|1+i\tau \eta^2|^2|1+i\tau (\lambda-2/3)^2|}(|\Phi_{q_1}| + |\Phi_{q_2}| + |\Phi_{q_3}|)d\lambda d\eta \\&+ \iint |\Phi|\left(\frac{|\tau (\lambda-2/3)\eta^2|}{|1+i\tau (\lambda-2/3)^2||1+i\tau \eta^2|^2} + \frac{|\tau (\lambda-2/3)^2||\tau\eta^2|}{|1+i\tau (\lambda-2/3)^2|^2|1+i\tau \eta^2|^2}\right)d\lambda d\eta,
\end{align*}
and the estimate follows as in the inner region.
Bounds in the outer region: we consider $\lambda_0=0$ and use
\[
1\pm \frac{\mu}{2(1-\lambda)}\sim 1\pm \frac{\eta}{|\lambda|^{3/2}},\quad 1+3\lambda \sim \lambda,\quad 1+3i\tau \lambda^2\sim \tau \lambda^2
\]
to obtain
\begin{align*}
|M_1|\lesssim \iint \frac{|\lambda \eta^2|}{|1+i\tau \eta^2|^2|\lambda|^{7/2}}\left(\left(1+\frac{|\eta|}{|\lambda|^{3/2}}\right)(|\Phi_{q_1}| + |\Phi_{q_2}|) + |\Phi_{q_3}| + \frac{1}{|\lambda|^{1/2}}|\Phi|\right)\lesssim \tau^{-13/12},
\end{align*}
where the terms involving $|\eta|/|\lambda|^{3/2}$ are handled by Cauchy-Schwarz in the $\eta$-variable.
\end{proof}
\begin{claim} \label{claim:M1_tau<1}
For $\tau <1$, we have the bound on $M_1$:
\[ |M_1| \lesssim \tau^{-5/6}. \]
\end{claim}
\begin{proof}
In the outer region, we take $\lambda_0=0$ and estimate
\begin{align*}
|M_1| & \lesssim \iint \frac{|\lambda \eta^2|}{|1+i\tau \eta^2|^2|\lambda|^{1/2}|1+i\tau \lambda^3|}\left(\left(1+\frac{|\eta|}{|\lambda|^{3/2}}\right)(|\Phi_{q_1}| + |\Phi_{q_2}|) + |\Phi_{q_3}| + \frac{1}{|\lambda|^{1/2}}|\Phi|\right) \\
& \lesssim \tau^{-5/6}.
\end{align*}
In the inner and middle regions, we simply use the fact that $\Phi$ is bounded and conclude with a better bound than needed:
\[ \left|\iint e^{itp^3Q}\Phi dq_1dq_2\right| \lesssim 1. \qedhere \]
\end{proof}
\begin{claim} \label{claim:M2_tau>1}
For $\tau \geqslant 1$, we have the bound on $M_2$:
\[ |M_2| \lesssim \tau^{-13/12}. \]
\end{claim}
\begin{proof}
We consider the change of variables
\[ \mu=\frac{3\zeta+\xi-2}{2},\quad \lambda=\frac{2-\zeta+\xi}{2}. \]
One may obtain this transformation by going back to the $\xi$ variables, switching $q_1$ with $q_3$ and then redoing the $\lambda,\mu$ transformation. In this way, $q_1$ depends on a single variable $\zeta$.
In this coordinate system, the stationary points are
\[ (\xi,\zeta)=(0,2/3) \text{ and } (-1,1). \]
In the inner region, we use the relation
\[ e^{i\tau Q}=\frac{1}{1+4i\tau \xi^2(1-\zeta)}\partial_\xi (\xi e^{i\tau Q}). \]
Define
\[ A=1+4i\tau \xi^2(1-\zeta),\quad B=1+4i\tau \mu^2(1-\lambda)=1+i\tau(3\zeta+\xi-2)^2(\zeta-\xi)/2. \]
We now integrate by parts:
\begin{align*}
M_2 & = \iint e^{i\tau Q}\frac{\mu \Phi_{q_1}}{B}d\xi d\zeta \\&= \iint \xi e^{i\tau Q}\left[\Phi_{q_1q_2} \frac{\mu}{AB} + \frac{\Phi_{q_1}}{AB} + \mu \Phi_{q_1}\left(\frac{A_\xi}{A^2B} + \frac{B_\xi}{AB^2}\right)\right]d\xi d\zeta.
\end{align*}
Since, in this region,
\[ 1\geqslant \zeta, \quad 1-\zeta\sim 1,\quad \left|\frac{\xi A_\xi}{A}\right|\lesssim 1,\quad B\sim 1+i\tau \mu^2, \quad B_\xi \sim \tau \mu, \]
we can now estimate $M_2$ as follows:
\begin{align*}
|M_2| & \lesssim \iint \frac{|\xi \mu||\Phi_{q_1q_2}| + |\xi||\Phi_{q_1}|}{|1+i\tau \xi^2||1+i\tau \mu^2|} \\
& \quad + |\mu||\Phi_{q_1}|\left(\frac{1}{|1+i\tau \xi^2||1+i\tau \mu^2|} + \frac{|\tau \mu \xi|}{|1+i\tau \xi^2||1+i\tau \mu^2|^2}\right)d\xi d\mu.
\end{align*}
We apply Cauchy-Schwarz to all four terms:
\begin{align*}
\iint \frac{|\xi \mu||\Phi_{q_1q_2}|}{|1+i\tau \xi^2||1+i\tau \mu^2|}d\xi d\mu & \lesssim \tau^{1/3}\left(\int \frac{|\mu|^2d\mu}{|1+i\tau \mu^2|^2}\right)^{\frac{1}{2}}\left(\int \frac{|\xi|^2d\xi}{|1+i\tau \xi^2|^2}\right)^{\frac{1}{2}} \lesssim \tau^{-7/6}, \\
\iint \frac{|\xi||\Phi_{q_1}|}{|1+i\tau \xi^2||1+i\tau \mu^2|}d\xi d\mu & \lesssim \tau^{1/6}\int \frac{d\mu}{|1+i\tau \mu^2|}\left(\int \frac{|\xi|^2d\xi}{|1+i\tau \xi^2|^2}\right)^{\frac{1}{2}} \lesssim \tau^{-13/12}, \\
\iint \frac{|\mu||\Phi_{q_1}|}{|1+i\tau \xi^2||1+i\tau \mu^2|}d\xi d\mu & \lesssim \tau^{1/6}\left(\int\frac{|\mu|^2d\mu}{|1+i\tau \mu^2|^2}\right)^{\frac{1}{2}}\int \frac{d\xi}{|1+i\tau\xi^2|}\lesssim \tau^{-13/12}, \\
\iint \frac{|\tau \mu^2\xi||\Phi_{q_1}|}{|1+i\tau \xi^2||1+i\tau \mu^2|^2}d\xi d\mu & \lesssim \tau^{1/6}\left(\int \frac{|\xi|^2d\xi}{|1+i\tau\xi^2|^2}\right)^{\frac{1}{2}}\int \frac{|\tau \mu^2|d\mu}{|1+i\tau \mu^2|^2} \lesssim \tau^{-13/12},
\end{align*}
which gives suitable bounds.
In the middle region, we use the following relation to capture the stationary point:
\[ e^{i\tau Q}=\frac{1}{1+2i\tau \xi(\xi+1)(1-\zeta)}\partial_\xi ((\xi+1) e^{i\tau Q}). \]
After an integration by parts, the computations are similar to those of the inner region and are left to the reader.
In the outer region, we consider the case $|1-\zeta|<1/100$ first. Then
\[ \mu \sim \xi,\quad 1-\lambda \sim \xi,\quad |\xi|\gtrsim 1 \]
and so
\begin{align*}
|M_2| &\lesssim \iint \frac{|\xi|}{|1+i\tau\xi^2\zeta||1+i\tau \xi^3|}\left[|\xi\Phi_{q_1q_2}| + |\Phi_{q_1}| + |\xi \Phi_{q_1}|\left(\frac{|\tau\xi\zeta|}{|1+i\tau\xi^2\zeta|} + \frac{|\tau \xi^3|}{|1+i\tau \xi^3|}\right)\right]d\xi d\zeta.
\end{align*}
We now estimate each term separately.
\begin{align*}
\iint \frac{|\xi^2\Phi_{q_1q_2}|}{|1+i\tau\xi^2\zeta||1+i\tau \xi^3|}d\xi d\zeta & \lesssim \tau^{1/3}\left( \iint \frac{\xi^4d\xi d\zeta}{|1+i\tau \xi^2 \zeta|^2|1+i\tau \xi^3|^2} \right)^{\frac{1}{2}}\\
& \lesssim \tau^{-2/3}\left(\int_{|\xi|\gtrsim \tau^{1/3}} \frac{\xi^2d\xi}{|1+i\xi^3|^2} \int\frac{d\zeta}{|1+i\zeta|^2} \right)^{\frac{1}{2}} \lesssim \tau^{-7/6}, \\
\iint \frac{|\xi||\Phi_{q_1}|}{|1+i\tau\xi^2\zeta||1+i\tau \xi^3|}d\xi d\zeta & \lesssim \tau^{1/6}\int \frac{d\xi}{|1+i\tau \xi^3|}\left(\int \frac{\xi^2d\zeta}{|1+i\tau \xi^2\zeta|^2}\right)^{\frac{1}{2}} \lesssim \tau^{-7/6}, \\
\iint \frac{\xi^2|\Phi_{q_1}|}{|1+i\tau\xi^2\zeta||1+i\tau \xi^3|}d\xi d\zeta & \lesssim\tau^{1/6} \int \frac{|\xi|d\xi}{|1+i\tau \xi^3|}\left(\int \frac{\xi^2d\zeta}{|1+i\tau \xi^2\zeta|^2}\right)^{\frac{1}{2}}\lesssim \tau^{-7/6}.
\end{align*}
In the remaining case where $|1-\zeta|>1/100$, we further split into the cases $|\mu|<1/100$ and $|\mu|>1/100$. In the former case,
\[ \zeta \sim -\xi,\quad 1-\lambda\sim -\xi, \quad |\tau \xi^2 \zeta| \geqslant 1, \]
meaning that
\[ |M_2| \lesssim \iint \frac{|\xi|}{|\tau \xi^3||1+i\tau \mu^2\xi|}\left[|\mu \Phi_{q_1q_2}| + |\Phi_{q_1}| + |\mu \Phi_{q_1}|\left(1+\frac{|\tau \mu \xi|}{|1+i\tau \mu^2\xi|}\right)\right]d\xi d\mu. \]
Again using the Cauchy-Schwarz inequality,
\begin{align*}
\iint \frac{|\xi|}{|\tau \xi^3||1+i\tau \mu^2\xi|}|\mu \Phi_{q_1q_2}|d\xi d\mu & \lesssim \tau^{-2/3}\left(\iint \frac{\mu^2d\mu d\xi}{\xi^4|1+i\tau \mu^2\xi|^2}\right)^{\frac{1}{2}} \lesssim \tau^{-17/12}, \\
\iint \frac{|\xi|}{|\tau \xi^3||1+i\tau \mu^2\xi|}|\Phi_{q_1}|d\xi d\mu & \lesssim \tau^{-5/6}\int \frac{d\xi}{\xi^2}\left(\int \frac{d\mu}{|1+i\tau \mu^2\xi|}\right)^{\frac{1}{2}}\lesssim \tau^{-13/12}, \\
\iint \frac{|\mu \xi\Phi_{q_1}|}{|\tau \xi^3||1+i\tau \mu^2\xi|}\left(1+\frac{|\tau \mu \xi|}{|1+i\tau \mu^2\xi|}\right)d\xi d\mu & \lesssim \iint \frac{|\xi| |\Phi_{q_1}|}{|\tau \xi^3||1+i\tau \mu^2\xi|}d\xi d\mu \lesssim\tau^{-13/12}.
\end{align*}
Finally, in the latter case where $|\mu|>1/100$, one has
\[ |1+i\tau \xi^2\zeta|\gtrsim |\tau\xi^2 \zeta|,\quad |1+i\tau \mu^2(1-\lambda)|\gtrsim|\tau \mu^2(1-\lambda)|. \]
This implies directly the even better bound $|M_2| \lesssim \tau^{-5/3}$.
\end{proof}
\begin{claim}
For $\tau <1$, we have the bound on $M_2$:
\[ |M_2| \lesssim \tau^{-2/3}. \]
\end{claim}
\begin{proof}
In the outer region, we further split depending on the sizes of $\zeta$ and $\mu$. Assume first that $|\zeta|<1/100$. Proceeding as in Claim \ref{claim:M2_tau>1}, we obtain
\[ |M_2| \lesssim \tau^{-2/3}. \]
If $|\zeta|>1/100$ and $|\mu|<1/100$, then
\[ \zeta\sim -\xi,\quad 1-\lambda \sim -\xi. \]
This yields the estimates
\begin{align*}
\iint \frac{|\mu \xi|}{|1+i\tau\xi^3||1+i\tau \mu^2\xi|}|\Phi_{q_1q_2}|d\xi d\mu & \lesssim \tau^{1/3}\left(\iint \frac{\xi^2 \mu^2 d\xi d\mu}{|1+i\tau \xi^3|^2|1+i\tau \mu^2\xi|^2}\right)^{\frac{1}{2}} \lesssim \tau^{-2/3}, \\
\iint \frac{|\xi|}{|1+i\tau\xi^3||1+i\tau \mu^2\xi|}|\Phi_{q_1}|d\xi d\mu & \lesssim \tau^{1/6}\int \frac{d\xi}{|1+i\tau\xi^3|}\left(\int \frac{\xi^2 d\mu}{|1+i\tau \mu^2\xi|^2}\right)^{\frac{1}{2}} \lesssim \tau^{-2/3}.
\end{align*}
Finally, if $|\mu|>1/100$, then
\[ \mu \sim \zeta + \xi,\quad 1 - \lambda \sim \zeta - \xi, \]
and so
\begin{align*}
\MoveEqLeft \iint \frac{(|\zeta|+|\xi|)|\xi|}{|1+i\tau \xi^2\zeta||1+i\tau (\zeta+\xi)^2(\zeta-\xi)|}|\Phi_{q_1q_2}|d\zeta d\xi \\
& \lesssim \tau^{1/3}\left( \iint \frac{(|\zeta|^2+|\xi|^2)|\xi|^2}{|1+i\tau \xi^2\zeta|^2|1+i\tau (\zeta+\xi)^2(\zeta-\xi)|^2}d\zeta d\xi\right)^{\frac{1}{2}}\lesssim \tau^{-2/3}, \\
\MoveEqLeft \iint \frac{|\xi|}{|1+i\tau \xi^2\zeta||1+i\tau (\zeta+\xi)^2(\zeta-\xi)|}|\Phi_{q_1}|d\zeta d\xi \\
& \lesssim \tau^{1/6}\int d\xi \left(\int\frac{|\xi|^2d\zeta}{|1+i\tau \xi^2\zeta|^2|1+i\tau (\zeta+\xi)^2(\zeta-\xi)|^2} \right)^{\frac{1}{2}}\lesssim \tau^{-2/3}.
\end{align*}
Finally, in the inner and middle regions, we proceed as in Claim \ref{claim:M1_tau<1}: the fact that $\Phi$ is bounded yields the better bound
\[
\left|\iint e^{itp^3Q}\Phi dq_1dq_2\right| \lesssim 1. \qedhere
\]
\end{proof}
Gathering all these bounds, and taking into account the main order term, gives the expansion \eqref{eq:N_expansion} together with the bound \eqref{eq:N_remainder} on the remainder. The proof is complete.
\end{proof}
\nocite{HM80}
\nocite{HN01}
\nocite{HN99}
\nocite{CV08}
\nocite{DZ95}
\nocite{GPR16}
\nocite{HaGr16}
\nocite{PV07}
\nocite{FIKN06}
\nocite{FA83}
\normalsize
\begin{center}
{\scshape Simão Correia}\\
{\footnotesize
CMAF-CIO, Universidade de Lisboa\\
Edif\'{\i}cio C6, Campo Grande\\
1749-016 Lisboa, Portugal\\
\varepsilonmail{[email protected]}
}
{\scshape Raphaël Côte}\\
{\footnotesize
Université de Strasbourg\\
CNRS, IRMA UMR 7501\\
F-67000 Strasbourg, France\\
\varepsilonmail{[email protected]}
}
{\scshape Luis Vega}\\
{\footnotesize
Departamento de Matemáticas\\
Universidad del País Vasco UPV/EHU\\
Apartado 644, 48080, Bilbao, Spain\\
\ \\
Basque Center for Applied Mathematics BCAM\\
Alameda de Mazarredo 14, 48009 Bilbao, Spain\\
\varepsilonmail{[email protected]}
}
\end{center}
\end{document}
\begin{document}
\title{\LARGE \bf Monotone flows with dense periodic orbits}
\author{\Large Morris W.\ Hirsch \thanks{I thank {\sc Eric Bach,
Bas Lemmens} and {\sc Janusz Mierczy\'nski} for helpful
discussions.} \\University of Wisconsin}
\maketitle
\begin{abstract}
Give $\R n$ the (partial) order $\succeq$ determined by a closed
convex cone $K$ having nonempty interior: $y\succeq x \ensuremath{\:\Longleftrightarrow\:} y-x \in K$.
Let $X\subset\R n$ be a connected open set and $\varphi$ a flow on
$X$ that is monotone for this order: if $y\succeq x$ and $t\ge 0$,
then $\varphi^t y\succeq \varphi^t x$.
{\em Theorem.} If periodic points of $\varphi$ are dense in $X$, then
$\varphi^t$ is the identity map of $X$ for some $t > 0$.
\end{abstract}
\tableofcontents
\section{Introduction} \mylabel{sec:intro}
Many dynamical systems, especially those used as models in applied
fields, are {\em monotone}: the state space has an order relation
that is preserved in positive time. A recurrent theme is that bounded
orbits tend toward periodic orbits. For a sampling of the large
literature on monotone dynamics, consult the following works and the
references therein:
{\small \cite{AngeliHirschSontag, Anguelov12, BenaimHirsch99a,
BenaimHirsch99b, DeLeenheer17, Dirr15, Elaydi17, EncisoSontag06,
Grossberg78, Hess-Polacik93, HS05, Kamke32, LajmanovichYorke,
Landsberg96, LeonardMay75, Matano86, Mier94, Potsche15,
RedhefWalter86, Selgrade80, Smale76, Smith95, Smith17, Volkmann72,
Wang17, Walcher01}.}
Another common dynamical property is {\em dense periodicity}: periodic
points are dense in the state space. Often considered typical of
chaotic dynamics, this condition is closely connected to many other
important dynamical topics, such as structural stability, ergodic
theory, Hamiltonian mechanics, smoothness of flows and
diffeomorphisms.
The main result in this article, Theorem \ref{th:main}, is that a large class of
monotone, densely periodic flows $\varphi:=\{\varphi^t\}_{t\in\ensuremath{{\mathbb R}}}$ on open
subsets of $\R n$ are
(globally) {\em periodic}: There exists $t>0$ such that $\varphi^t$ is the
identity map.
\subsection{Terminology}
Throughout this paper $X$ and $Y$ denote (topological) spaces.
The closure of a subset of a space is $\ov S$.
The group of homeomorphisms of $X$ is $\mcal H (X)$.
$\ensuremath{{\mathbb Z}}$ denotes the integers, $\ensuremath{{\mathbb N}}$ the nonnegative integers, and $\ensuremath{{\mathbb N}_{+}}$
the positive integers. $\ensuremath{{\mathbb R}}$ denotes the reals, $\ensuremath{{\mathbb Q}}$ the rationals,
and $\ensuremath{{\mathbb Q}_+}$ the positive rationals.
$\R n$ is Euclidean $n$-space.
Every manifold $M$ is assumed metrizable with empty boundary $\ensuremath{\partial}
M$, unless otherwise indicated.
Every map is assumed continuous unless otherwise described. $f\colon\thinspace
X\approx Y$ means $f$ maps $X$ homeomorphically onto $Y$. The
identity map of $X$ is $id_X$.
If $f$ and $g$ denote maps, the composition of $g$ following $f$ is the map
$g\circ f\colon\thinspace x\mapsto g(f(x))$ (which may be empty).
We set $f^0:=id_X$, and recursively define the
$k$'th iterate $f^k$ of $f$ as $f^k:=f\circ f^{k-1}, \ (k \in \ensuremath{{\mathbb N}_{+}})$.
The {\em orbit} of $p$ under $f$ is the set
\[\mcal O(p):=\big\{f^k (p)\colon\thinspace k\in \ensuremath{{\mathbb N}}\big\},
\]
denoted as $\mcal O (p,f)$ to record $f$.
When $f (X) \subset X$, the orbit of $P\subset X$ is defined as
\[
\mcal O (P)= \mcal O (P,f) := \bigcup_{p\in P}\mcal O (p).
\]
A set $A\subset X$ is {\em invariant} under $f$ if $f(A)=A =f^{-1}
(A)$. The {\em fixed set} and {\em periodic set} of $f$ are the
respective invariant sets
\[
{\mcal F}(f) :=\big\{x\colon\thinspace f(x)=x\big\}, \ensuremath{\mathfrak q}uad {\mcal P}(f):= \bigcup_{n\in \ensuremath{{\mathbb N}_{+}}}
{\mcal F}(f^n).
\]
\ensuremath{\partial}aragraph{Flows}
A {\em flow} $\ensuremath{\partial}si:=\{\ensuremath{\partial}si^t\}_{t\in \ensuremath{{\mathbb R}}}$ on $Y$, denoted formally by
$(\ensuremath{\partial}si, Y)$, is a continuous action of the group $\ensuremath{{\mathbb R}}$ on $Y$: a
family of homeomorphisms $\ensuremath{\partial}si^t\colon\thinspace Y\approx Y$ such that the map
\[
\ensuremath{{\mathbb R}} \to \mcal H (Y), \quad t\mapsto \ensuremath{\partial}si^t
\]
is a homomorphism, and the {\em evaluation map}
\begin{equation}\label{eq:ev}
\msf {ev}_\ensuremath{\partial}si\colon\thinspace\ensuremath{{\mathbb R}} \times Y \to Y, \ensuremath{\mathfrak q}uad (t,x) \mapsto \ensuremath{\partial}si^tx
\end{equation}
is continuous.
A set $A\subset Y$ is {\em invariant under $\ensuremath{\partial}si$} if it is invariant under
every $\ensuremath{\partial}si^t$. When this holds, the {\em restricted flow}
$(\ensuremath{\partial}si \big |A)$ on $A$ has the evaluation map
$\msf {ev}_\ensuremath{\partial}si\big|\,\ensuremath{{\mathbb R}} \times A$.
The {\em orbit} of $y\in Y$ under $\ensuremath{\partial}si$ is the invariant set
\[
\mcal O (y):=\big \{\ensuremath{\partial}si^t y\colon\thinspace t\in \ensuremath{{\mathbb R}} \big\},
\]
denoted formally by $\mcal O (y, \ensuremath{\partial}si)$.
Orbits of distinct points
either coincide or are disjoint.
The
{\em periodic} and {\em equilibrium} sets of $\ensuremath{\partial}si$ are the respective
invariant
sets
\[
{\mcal P}(\ensuremath{\partial}si):=\bigcup_{t\in\ensuremath{{\mathbb R}}}{\mcal F}(\ensuremath{\partial}si^t), \ensuremath{\mathfrak q}uad
{\mcal E}(\ensuremath{\partial}si):= \bigcap_{t\in\ensuremath{{\mathbb R}}} {\mcal F}(\ensuremath{\partial}si^t),
\]
denoted by ${\mcal P}$ and ${\mcal E}$ if $\ensuremath{\partial}si$ is clear from the context.
Points in ${\mcal E}$ are called {\em stationary}. The orbit of a
nonstationary periodic point is a {\em cycle}.
The {\em period} of $p\in Y$ is $r=\msf{per}(p)>0$, provided
\[p\in {\mcal P}\sm {\mcal E}, \quad r=\min\big\{t>0\colon\thinspace \psi^tp=p\big\}.
\]
The set of {\em $r$-periodic} points is
\[{\mcal P}^r (\psi) ={\mcal P}^r:= \big\{p\in {\mcal P}\colon\thinspace\msf{per} (p)=r\big\}.\]
When $p$ is $r$-periodic, the restricted flow $\ensuremath{\partial}si \big|O(p)$ is
topologically conjugate to the rotational flow on the topological
circle $\ensuremath{{\mathbb R}}/r\ensuremath{{\mathbb Z}}$, covered by the translational flow on $\ensuremath{{\mathbb R}}$
whose evaluation map is $(t,x) \mapsto t+x$.
The flow $(\ensuremath{\partial}si, Y)$ is:
\begin{itemize}
\item {\em trivial} if ${\mcal E} =Y$,
\item {\em densely periodic} if ${\mcal P} $ is dense in $Y$,
\item {\em pointwise periodic} if ${\mcal P} = Y$,
\item {\em periodic} if there exists $l > 0$ such that $\ensuremath{\partial}si^l =id_Y$.
Equivalently: the homomorphism $\ensuremath{\partial}si \colon\thinspace \ensuremath{{\mathbb R}} \to \mcal H (Y)$
factors through a homomorphism of the circle group $\ensuremath{{\mathbb R}}/l\ensuremath{{\mathbb Z}}$.
\end{itemize}
\ensuremath{\partial}aragraph{Order} Recall that a (partial) {\em order} on $Y$ is a
binary relation $\ensuremath{\partial}receq$ on $Y$ that is reflexive, transitive and
antisymmetric.
In this paper all orders are {\em closed}:
the set $\{(u,v)\in Y\times Y\colon\thinspace u\ensuremath{\partial}receq v\}$ is closed.
The {\em trivial order} is: $x\ensuremath{\partial}receq y \ensuremath{\:\Longleftrightarrow\:} x=y$.
The pair
$(Y,\ensuremath{\partial}receq)$ is an {\em ordered space}, denoted by $Y$ if
the order is clear from the context.
We write $y\succeq x$ to mean $x\ensuremath{\partial}receq y$. If $x\ensuremath{\partial}receq y$ and $x\ne
y$, we write $x\ensuremath{\partial}rec y$ and $ y \succ x$.
For sets $A, B\subset Y$, the condition $A\ensuremath{\partial}receq B$ means $a\ensuremath{\partial}receq
b$ for all $a\in A, \,b\in B$, and similarly for the relations
$\ensuremath{\partial}rec,\, \succeq, \,\succ$.
The {\em closed order interval} spanned by $a, b \in Y$ is the closed
set
\[ [a,b]:=\{y\in Y\colon\thinspace a \ensuremath{\partial}receq y\ensuremath{\partial}receq b\},\]
and its interior is the {\em open order interval} $[[a,b]]$. We write
$a\ll b$ to indicate $[[a,b]]\ne\varphiarnothing$.
An {\em order cone}
$K\subset \R n$ is a closed convex cone that is
pointed (contains no affine line) and solid (has nonempty
interior). $K$ is {\em polyhedral} if is the intersection of finitely
many closed linear halfspaces.
The {\em $K$-order}, denoted by $\ensuremath{\partial}receq_K$ for clarity, is
the order defined on every subset of $\R n$ by
\[
x \ensuremath{\partial}receq_K y \ensuremath{\:\Longleftrightarrow\:} y-x\in K.
\]
For the $K$-order on
$\R n$, every order interval is a convex $n$-cell.
A map $f\colon\thinspace X \to Y$ between ordered spaces is {\em monotone} when
\[x\ensuremath{\partial}receq_X x' \implies f(x) \ensuremath{\partial}receq_Y f (x').
\]
When $Y$ is ordered and $g\colon\thinspace Y'\to Y$ is injective, $Y'$ has a unique
{\em induced order} making $g$ monotone.
Every subspace $S\subset Y$ is given the order induced by the
inclusion map $S\hookrightarrow Y$. When this order is trivial, $S$
is {\em unordered}.
A flow $\ensuremath{\partial}si$ on an ordered space is {\em monotone} iff the maps
$\ensuremath{\partial}si^t$ are monotone.
It is easy to see that every cycle for a monotone flow is unordered.
This is the chief result:
\begin{theorem}\mylabel{th:main}
Assume:
\begin{description}
\item[(H1)] $K\subset \R n$ is an order cone.
\item[(H2)] $X\subset \R n$ is open and connected.
\item[(H3)] $X$ is $K$-ordered.
\item[(H4)] The flow $(\varphi, X)$ is monotone and densely periodic.
\end{description}
Then $\varphi$ is periodic.
\end{theorem}
\noindent The proof will be given after results about various types of
flows.
\section{Resonance} \mylabel{sec:resonance}
In this section:
\begin{itemize}
\item $Y$ is an ordered space,
\item $(\ensuremath{\partial}si,Y)$ is a monotone flow,
\item
$ \mcal O (y)$ is the orbit of $y$ under $\ensuremath{\partial}si$,
\item $\mcal O (y, \ensuremath{\partial}si^t)$ is the orbit of $y$ under $\ensuremath{\partial}si^t$.
\end{itemize}
\begin{lemma}\mylabel{th:colimit}
Assume:
\begin{description}
\item[(i)] \mbox{$\big\{p_k\big\}$ and $ \big\{q_k\big\}$ are sequences in $Y$ converging to
$y\in Y$},
\item[(ii)] $\mcal O (p_k) \prec \mcal O(q_k), \quad (k \ge 1).$
\end{description}
Then $y\in {\mcal E}$.
\end{lemma}
\begin{proof} By (i) and continuity of $\psi$ we have
\begin{equation}\label{eq:limvt}
\lim_{k\to \infty}\psi^t p_k = \lim_{k\to \infty}\psi^t q_k =\psi^t y, \quad (t\in \ensuremath{{\mathbb R}})
\end{equation}
while (ii) and monotonicity of $\psi$ imply
\begin{equation}\label{eq:pktqk}
k\in \ensuremath{{\mathbb N}}, \ t\in \ensuremath{{\mathbb R}} \implies p_k \preceq \psi^t q_k.
\end{equation}
From (i), (\ref{eq:limvt}), (\ref{eq:pktqk}) and closedness of the order we infer:
\[ y\preceq \psi^t y,\quad (t\in \ensuremath{{\mathbb R}}). \]
Applying the monotone map $\psi^t$ to this relation with $-t$ in place of $t$ gives $\psi^t y\preceq y$; by antisymmetry, $\psi^ty=y$ for all $t\in \ensuremath{{\mathbb R}}$.
\end{proof}
\ensuremath{\partial}aragraph{Rationality} Rational numbers play a surprising role in monotone flows:
\begin{theorem} \mylabel{th:poa}
Assume $r,s >0$. If
\[ p \in \mcal P^r, \quad q \in {\mcal P}^s, \quad p \ensuremath{\partial}rec q,
\quad \text{and} \quad \mcal O (p) \not \ensuremath{\partial}rec\mcal O (q),
\]
then $r/s$ is rational.
\end{theorem}
\begin{proof}
We have
\begin{equation} \label{eq:gama}
\mcal O (p) \cap \mcal O (q)=\varnothing
\end{equation}
because $p \prec q$
and every cycle in a monotone flow is unordered.
Note that the restriction of $\psi^s$ to the circle $\mcal O(p)$
is conjugate to the rotation $\msf R_{2\pi s/r}$ of the unit circle $ \msf
C \subset\R 2$ through $2\pi s/r$ radians.
Arguing by contradiction, we provisionally assume $r/s$ is irrational.
Then the orbit of $\msf R_{2\pi s/r}$ is dense in $\msf
C$,\footnote{This was discovered--- in the 14th century!--- by {\sc Nicole
Oresme}. See {\sc Grant} \cite{Grant71}, {\sc
Kar} \cite {Kar03}. A short proof based on the pigeon-hole
principle is due to {\sc Speyer} \cite{Speyer17}. Stronger density
theorems are given in {\sc Bohr} \cite{Bohr23}, {\sc Kronecker}
\cite{Kronecker}, and {\sc Weyl} \cite{Weyl10, Weyl16}.}
and the conjugacy described above implies
\begin{equation}\label{eq:oppr}
\mcal O(p)=\ov{\mcal O (p,\psi^s)}.
\end{equation}
Since $p\prec q$,
monotonicity implies
\[\big(\psi^s\big)^k p \preceq \big (\psi^s\big)^k q= q, \quad (k\in \ensuremath{{\mathbb N}_{+}}),\]
whence
\[\mcal O (p,\psi^s) \preceq \{q\},
\]
and (\ref{eq:oppr}), together with closedness of the order and (\ref{eq:gama}), implies
\begin{equation}\label{eq:opq}
\mcal O (p) \prec \{q\}.
\end{equation}
Therefore $\mcal O (p) \prec \mcal O(q)$ by (\ref{eq:gama}) and monotonicity. But this contradicts the hypothesis.
\end{proof}
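For completeness, we sketch the pigeonhole argument behind the density statement used above. Suppose $R_\theta$ is the rotation of $\msf C$ through an angle $\theta$ that is an irrational multiple of $2\pi$, and fix $x\in \msf C$ and $N\in \ensuremath{{\mathbb N}_{+}}$. The $N+1$ points $x, R_\theta x,\dotsc, R_\theta^N x$ are pairwise distinct, so two of them, say $R_\theta^j x$ and $R_\theta^k x$ with $0\le j<k\le N$, lie within arc length $2\pi/N$ of each other. Hence $R_\theta^{k-j}$ is a rotation through a nonzero angle whose circle distance to the identity is at most $2\pi/N$, and its orbit therefore meets every arc of length $2\pi/N$. Letting $N\to\infty$ shows that the orbit of $x$ under $R_\theta$ is dense in $\msf C$.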
\begin{definition}\mylabel{th:resdef}
A set $S$ is {\em resonant} for the flow $(\ensuremath{\partial}si,Y)$ if $S\subset Y$ and
\[
u,v\in S\cap({\mcal P} \sm {\mcal E}) \implies
\mathsf{fr}\,ac{\msf{per}(u)}{\msf{per}(v)}\in \ensuremath{{\mathbb Q}_+}.
\]
\end{definition}
It is easy to prove:
\begin{lemma}\mylabel{th:resorbit}
\mbox{If $S$ is resonant, so is its orbit. \qed}
\end{lemma}
\begin{proposition}\mylabel{th:resprop}
$S$ is resonant provided there exists
\[q \in S\cap ({\mcal P} \sm {\mcal E} )
\]
such that
\begin{equation}\label{eq:zsp}
z\in S\cap ({\mcal P} \sm {\mcal E}) \implies
\mathsf{fr}\,ac{\msf{per} (z)}{\msf{per} (q)} \in \ensuremath{{\mathbb Q}_+}.
\end{equation}
\end{proposition}
\begin{proof} If
$ u,v\in S\cap{\mcal P} (\ensuremath{\partial}si)\sm{\mcal E}$, then
\[
\mathsf{fr}\,ac{\msf{per}(u)}{\msf{per}(v)} =
\mathsf{fr}\,ac{\msf{per}(u)}{\msf{per}(q)}\cdot \mathsf{fr}\,ac{\msf{per}(q)}{\msf{per}(v)},
\]
which lies in $\ensuremath{{\mathbb Q}_+}$ by (\ref{eq:zsp}).
\end{proof}
\begin{theorem} \mylabel{th:resglobal}
Assume
\begin{equation}\label{eq:pqP}
p, q \in {\mcal P} \sm {\mcal E},
\ensuremath{\mathfrak q}uad p\ensuremath{\partial}rec q, \ensuremath{\mathfrak q}uad \mcal O(p) \not \ensuremath{\partial}rec \mcal O(q).
\end{equation}
Then $[p,q]$ is resonant.
\end{theorem}
\begin{proof} Note that
\begin{equation}\label{eq:pzq}
p\ensuremath{\partial}receq z \ensuremath{\partial}rec q \implies \mcal O(z)\not \ensuremath{\partial}rec \mcal O(q).
\end{equation}
For if this is false, there exists $z \in [p,q]$ such that
\[ \mcal O(z) \ensuremath{\partial}rec \mcal O(q),\]
whence monotonicity implies
\[\big(\forall\, w\in\mcal O(p)\big) \ \big(\exists\, w'\in \mcal O(z)\big)\colon\thinspace\quad
w\ensuremath{\partial}receq w'\ensuremath{\partial}rec \mcal O (q).
\]
It follows that $\mcal O(p)\ensuremath{\partial}rec \mcal O(q)$, contrary to hypothesis.
Resonance of $[p,q]$ now follows from Equation (\ref{eq:pzq}), and
Theorem \ref{th:poa} with the parameters $a:=p, \ b:=q$.
\end{proof}
The next result will be used to derive the Main Theorem from the
analogous result for homeomorphisms, Theorem \ref{th:lemmens}.
\begin{theorem}\mylabel{th:rescor}
Assume $Y$ is resonant, $s >0$ and ${\mcal P}^s \ne\varphiarnothing$. Then $ {\mcal P} (\ensuremath{\partial}si)= {\mcal P} (\ensuremath{\partial}si^s)$.
\end{theorem}
\begin{proof}
We fix $p\in {\mcal P}^s$ and show that every
$q\in {\mcal P} (\psi)$ lies in ${\mcal P} (\psi^s)$.
If $q$ is stationary then $q\in {\mcal P} (\psi^s)$.
If $q\in {\mcal P}^r$ then $r >0$ and resonance implies
$ sk = rl$ with $ k, l \in \ensuremath{{\mathbb N}_{+}}.$
Since $\psi^r q =q$, we have
$ \big(\psi^s\big)^kq= \big(\psi^r\big)^lq =q$, and again
$q\in {\mcal P} (\psi^s)$.
\end{proof}
\section{Proof of Theorem \ref{th:main} } \mylabel{sec:proofmain}
Recall the statement of the Theorem:
\noindent
{\em Assume:
\begin{description}
\item[(H1)] $K\subset \R n$ is an order cone,
\item[(H2)] $X\subset \R n$ is open and connected,
\item[(H3)] $X$ is $K$-ordered,
\item[(H4)] the flow $(\varphi, X)$ is monotone and densely periodic.
\end{description}
Then $\varphi$ is periodic.
}
The proof relies on two sufficient conditions for periodicity of
monotone homeomorphisms, Theorems \ref{th:lemmens} and
\ref{th:monty} below.
\begin{definitions*}
A homeomorphism $T\colon\thinspace W \approx W$ is:
\begin{itemize}
\item {\em densely periodic}\, if ${\mcal P}(T)$ is dense in $W$,
\item {\em pointwise periodic}\, if ${\mcal P}(T)= W$,
\item {\em periodic}\, if $T^k=id_W$ for some $k \in \ensuremath{{\mathbb N}_{+}}$.
\end{itemize}
\end{definitions*}
The key to Theorem \ref{th:main} is this striking result:
\begin{theorem}[{\sc B. Lemmens} {\em et al.}, \cite {Lemmens17}] \mylabel{th:lemmens}
Assume {\em (H1), (H2), (H3)}. Then a monotone homeomorphism $T\colon\thinspace
X\approx X$ is periodic provided it is densely
periodic.\footnote{Conjectured, and proved for polyhedral cones, in
{\sc M. Hirsch} \cite{Hirsch17}.}
\end{theorem}
We also use an elegant result from the early days of transformation groups:
\begin{theorem}[{\sc D. Montgomery}, \cite {Monty37, Monty55}] \mylabel{th:monty}
A homeomorphism of a connected topological manifold is
periodic provided it is pointwise
periodic.\footnote{There are analogs of
Montgomery's Theorem for countable transformation groups; see
{\sc Kaul} \cite{Kaul71},
{\sc Roberts} \cite {Roberts75},
{\sc Yang} \cite {Yang71}. Pointwise periodic homeomorphisms
on compact metric spaces are investigated in {\sc Hall \& Schweigert} \cite
{Hall82}.}
\end{theorem}
\begin{lemma}\mylabel{th:P}
$\varphi$ is pointwise periodic.
\end{lemma}
\begin{proof}
Since ${\mcal E}\subset {\mcal P}(\varphi)$, it suffices to prove that the restriction of
$\varphi$ to each component of $X\sm{\mcal E}$ is pointwise periodic.
Therefore we assume ${\mcal E}=\varnothing$.
Let $x \in X$ be arbitrary. As ${\mcal P}(\varphi)$ is dense, Lemma
\ref{th:colimit} and Theorem \ref{th:resglobal} show there exist $p, q\in
{\mcal P}(\varphi)$ such that the open set $[[p,q]]$ is connected and resonant, and
contains $x$. Therefore the open set $Y:=\mcal O([[p,q]])$, which is
invariant and connected, is resonant by Lemma \ref{th:resorbit}.
By Theorem \ref{th:rescor} there exists $s >0$ such that
${\mcal P} (\varphi^s) = {\mcal P} (\varphi)\cap Y$. Consequently $\varphi^s$ is densely
periodic, hence periodic by Theorem \ref{th:lemmens}. Therefore $x\in
{\mcal P}(\varphi)$. \end{proof}
Theorem \ref{th:main} follows from Lemma \ref{th:P} and
Theorem \ref{th:monty}.
\end{document} |
\begin{document}
\title{Congruences between modular forms and related modules }
\author{Miriam Ciavarella}
\date{ }
\maketitle
\begin{abstract}
We fix a prime $\ell$ and let $M$ be an integer such that $\ell\not|M$; let $f\in S_2(\Gamma_1(M\ell^2))$ be a newform which is supercuspidal at $\ell$ of a fixed type related to the nebentypus, and special at a finite set of primes. Let ${\bf T}^\psi$ be the local quaternionic Hecke algebra associated to $f$. The algebra ${\bf T}^\psi$ acts on a module $\mathcal M^\psi_f$ coming from the cohomology of a Shimura curve. Applying the Taylor-Wiles criterion and a recent theorem of Savitt, ${\bf T}^\psi$ is the universal deformation ring of a global Galois deformation problem associated to $\overline\rho_f$. Moreover $\mathcal M^\psi_f$ is free of rank 2 over ${\bf T}^\psi$. If $f$ occurs at minimal level, then by a generalization of a result of Conrad, Diamond and Taylor and by the classical Ihara lemma, we prove a level-raising theorem and a result about congruence ideals. The extension of these results to the non-minimal case is an open problem.
\end{abstract}
Keywords: modular form, deformation ring, Hecke algebra, quaternion algebra, congruences. \\
2000 AMS Mathematics Subject Classification: 11F80
\section*{Introduction}
The principal aim of this article is to establish some results about isomorphisms of complete intersection between a universal deformation ring and a local Hecke algebra, and about cohomological modules which are free over a Hecke algebra. From these results, it is possible to deduce that there is an isomorphism between the quaternionic cohomological congruence module for a modular form and the classical congruence module.\\
Our work takes place in a line of research which has its origin in the works of Wiles and Taylor-Wiles on the Shimura-Taniyama-Weil conjecture. Recall that the problem addressed by Wiles in \cite{wi} is to prove that a certain ring homomorphism $\phi:\mathcal R_\mathcal D\to{\bf T}_\mathcal D$ is an isomorphism (of complete intersection), where $\mathcal R_\mathcal D$ is the universal deformation ring for a mod $\ell$ Galois representation arising from a modular form and ${\bf T}_\mathcal D$ is a certain Hecke algebra. \\
To extend our results to a more general class of representations, allowing ramification at a finite set of primes, we need a small generalization of a result of Conrad, Diamond and Taylor. We will describe it using a recent theorem of Savitt, and we deduce from it two interesting results about congruences.\\
Our first result extends a work of Terracini \cite{Lea} to a more general class of types and allows us to work with modular forms having a non trivial nebentypus.
Our arguments are largely identical to Terracini's in many places; the debt to Terracini's work will be clear throughout the paper. One important change is that since we will work with Galois representations which are not semistable at $\ell$ but only potentially semistable,
we use a recent Savitt's theorem \cite{savitt}, that prove a conjecture of Conrad, Diamond and Taylor (\cite{CDT}, conjecture 1.2.2 and conjecture 1.2.3), on the size of certain deformation rings parametrizing potentially Barsotti-Tate Galois representations, extending results of Breuil and M\'{e}zard (conjecture 2.3.1.1 of \cite{BM}) (classifying Galois lattices in semistable representations in terms of \lq\lq strongly divisible modules \rq\rq) to the potentially crystalline case in Hodge-Tate weights $(0,1)$.
Given a prime $\ell$, we fix a newform $f\in S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ with nebentypus $\psi$ of order prime to $\ell$, special at primes dividing $\Delta'$ and such that its local representation $\pi_{f,\ell}$ of $GL_2({\bf Q}_\ell)$ is associated to a fixed regular character $\chi$ of ${\bf F}_{\ell^2}^\times$ satisfying $\chi|_{{\bf Z}_\ell^\times}=\psi_\ell|_{{\bf Z}_\ell^\times}$. We consider the residual Galois representation $\overline\rho$ associated to $f$. We denote by ${\bf T}^\psi$ the ${\bf Z}_\ell$-subalgebra of $\prod_{h\in\mathcal B}\mathcal O_{h,\lambda}$ where $\mathcal B$ is the set of normalized newforms in $S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ which are supercuspidal of type $\tau=\chi^\sigma\oplus\chi$ at $\ell$ and whose associated representation is a deformation of $\overline\rho$. By the Jacquet-Langlands correspondence and the Matsushima-Murakami-Shimura isomorphism, one can see such forms in a local component ${\mathcal M^\psi}$ of the $\ell$-adic cohomology of a Shimura curve. By imposing suitable conditions on the type $\tau$, we describe, for each prime $p$ dividing the level, a local deformation condition of $\overline\rho_p$ and, applying the Taylor-Wiles criterion in the version of Diamond \cite{D} and Fujiwara, we prove that the algebra ${\bf T}^\psi$ is characterized as the universal deformation ring ${\mathcal R^\psi}$ of our global Galois deformation problem.
We point out that in order to prove the existence of a family of sets realizing simultaneously the conditions of a Taylor-Wiles system, we make extensive use of Savitt's theorem \cite{savitt}: assuming the existence of a newform $f$ as above, the tangent space of the deformation functor has dimension one over the residue field. Our first result is the following:
\begin{theorem}
\begin{itemize}
\item[a)] $\Phi:{\mathcal R^\psi}\to{\bf T}^\psi$ is an isomorphism of complete intersection;
\item[b)] ${\mathcal M^\psi}$ is a free ${\bf T}^\psi$-module of rank 2.
\end{itemize}
\end{theorem}
\noindent We observe that in \cite{CDT} the authors assume that the type $\tau$ is strongly acceptable for $\overline\rho_\ell$. In this way they ensure the existence of a modular form under their hypotheses. Since we are interested in studying the quaternionic cohomological module associated to the modular form $f$, as a general hypothesis we suppose that there exists a modular form $f$ satisfying our conditions; in other words, we are assuming that our cohomological module is nonzero.
\noindent Keeping the definitions as in \cite{CDT}, Savitt's result allows one to suppress the assumption of acceptability in the definition of strong acceptability; thus it is possible to extend Conrad, Diamond and Taylor's result \cite{CDT}, relaxing the hypotheses on the residual representation.\\
\noindent Under the hypothesis that $f$ occurs with minimal level (i.e. the ramification at primes $p$ dividing the Artin conductor of the Galois representation $\rho_f$ is equal to the ramification of $\overline\rho_f$ at $p$)
the module ${\mathcal M^\psi}$, used to construct the Taylor-Wiles system, can be also seen as a part of a module $\mathcal M^{\rm mod}$ coming from the cohomology of a modular curve, as described in \cite{CDT} \S5.3. Applying the extended Conrad, Diamond and Taylor's methods and by the Ihara's lemma for the cohomology of modular curves \cite{CDT}, the first part of our result can be extended by allowing the ramification on a set of primes $S$ disjoint from $N\Delta'\ell$. In this way it is possible to obtain results of the form:
\begin{itemize}
\item $\Phi_S:\mathcal R^\psi_S\to{\bf T}^\psi_S$ is an isomorphism of complete intersections,
\end{itemize}
where $\mathcal R^\psi_S$ is an universal deformation ring letting free the ramification at primes in $S$, ${\bf T}^\psi_S$ is a local Hecke algebra.
We observe that, since there is no analogue of the Ihara lemma for the cohomology of the Shimura curve, we do not have any information about the corresponding module $\mathcal M^\psi_S$ coming from the cohomology of a Shimura curve. In particular, in the general case $S\not=\emptyset$, it is not possible to show that $\mathcal M^\psi_S$ is free over ${\bf T}^\psi_S$.
We observe that, as a consequence of the generalization of Conrad, Diamond and Taylor's result, two results about raising the level of modular forms and about congruence ideals follow.
Let $S_1,S_2$ be two subsets of $\Delta_2$, we slightly modify the deformation problem, by imposing the condition sp at primes $p$ in $S_2$ and by allowing ramification at primes in $S_1$. Let we denote by $\eta_{S_1,S_2}$ the congruence ideal of a modular form relatively to the set $\mathcal B_{S_1,S_2}$ of the newforms of weight 2, Nebentypus $\psi$, level dividing $N\Delta_1\Delta_2\ell$ which are supercuspidal of type $\tau$ at $\ell$, special at primes in $\Delta_1S_2$. We prove that there is an isomorphism of complete intersections between the universal deformation ring $\mathcal R^\psi_{S_1,S_2}$ and the Hecke algebra ${\bf T}^\psi_{S_1,S_2}$ acting on the space $\mathcal B_{S_1,S_2}$ and $$\eta_{\Delta_2,\emptyset}=C\eta_{S_1,S_2}$$ where $C$ is a constant depending of the modular form. In particular we prove the following result:
\begin{theorem}
Let $f=\sum a_nq^n$ be a normalized newform in $S_2(\Gamma_0(M\ell^2),\psi)$, supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$ and special at the primes in a finite set $\Delta'$, and let $q$ be a prime such that $(q,M\ell^2)=1$ and $q\not\equiv-1\ {\rm mod}\ \ell$. Then there exists $g\in S_2(\Gamma_0(qM\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$ and special at every prime $p|\Delta'$, such that $f\equiv g\ {\rm mod}\ \lambda$ if and only if $$a_q^2\equiv\psi(q)(1+q)^2\ {\rm mod}\ \lambda.$$
\end{theorem}
We observe that our results concerning the cohomological modules hold only at the minimal level, since a quaternionic analogue of the Ihara lemma is not available in this case. Let $S$ be a finite set of primes not dividing $M\ell$; we fix $f\in S_2(\Gamma_0(N\Delta'\ell^2 S),\psi)$ supercuspidal of type $\tau$ at $\ell$, special at primes $p|\Delta'$. If we modify our Galois deformation problem allowing ramification at primes in $S$, we obtain a new universal deformation ring $\mathcal R_S$ and a new Hecke algebra ${\bf T}^\psi_S$ acting on the newforms giving rise to such representations. We make the following conjecture:
\begin{conjecture}\label{conj1}
\begin{itemize}
\item $\mathcal R_S\to{\bf T}^\psi_S$ is an isomorphism of complete intersection;
\item let $\mathcal M^\psi_S$ be the module $H^1({\bf X}_1(NS),\mathcal O)_{\mathfrak m_S}^{\widehat\psi}$ coming from the cohomology of the Shimura curve ${\bf X}_1(NS)$ associated to the open compact subgroup of $B_{\bf A}^{\times,\infty}$, $V_1(NS)=\prod_{p\not|NS\ell}R_p^\times\prod_{p|NS}K_p^1(N)\times(1+u_\ell R_\ell)$ where $K_p^1(N)$ is defined in section \ref{shi}, and $u_\ell$ is a uniformizer of $B_\ell^\times$. $\mathcal M^\psi_S$ is a free ${\bf T}^\psi_S$-module of rank 2.
\end{itemize}
\end{conjecture}
\noindent Conjecture \ref{conj1} easily follows from the following conjecture:
\begin{conjecture}\label{noscon}
Let $q$ be a prime number such that $q\not|N\Delta'\ell^2$. We fix a maximal non-Eisenstein ideal of the Hecke algebra ${\bf T}_0^{\widehat\psi}(N)$ acting on the group $H^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}$. Let ${\bf X}_1(N)$ be the Shimura curve $${\bf X}_1(N)=B^\times\setminus B_{\bf A}^\times/K_\infty^+V_1(N)$$ where $$V_1(N)=\prod_{p\not|N\ell}R_p^\times\prod_{p|N}K_p^1(N)\times(1+u_\ell R_\ell)$$ where $K_p^1(N)$ is defined in section \ref{shi}, and $u_\ell$ is a uniformizer of $B_\ell^\times$. The map $$\alpha_\mathfrak m:H^1({\bf X}_1(N),\mathcal O)_\mathfrak m^{\widehat\psi}\times H^1({\bf X}_1(N),\mathcal O)_\mathfrak m^{\widehat\psi}\to H^1({\bf X}_1(Nq),\mathcal O)_{\mathfrak m^q}^{\widehat\psi}$$ is such that $\alpha_\mathfrak m\otimes_\mathcal O k$ is injective, where $\mathfrak m^q$ is the inverse image of the ideal $\mathfrak m$ under the natural map ${\bf T}_0^{\widehat\psi}(Nq)\to{\bf T}_0^{\widehat\psi}(N)$ and $k=\mathcal O/\lambda$.
\end{conjecture}
\noindent This conjecture would provide an analogue of Ihara's Lemma for Shimura curves in the case $\ell|\Delta$. In \cite{DT} and in \cite{DTi}, Diamond and Taylor show that if $\ell$ does not divide the discriminant of the indefinite quaternion algebra, then the analogue of Conjecture \ref{noscon} holds.
\section{Notations}
\noindent For a rational prime $p$, ${\bf Z}_p$ and ${\bf Q}_p$ denote the ring of $p$-adic integers and the field of $p$-adic numbers, respectively. If $A$ is a ring, then $A^\times$ denotes the group of invertible elements of $A$. We will denote by ${\bf A}$ the ring of rational ad\'eles, and by ${\bf A}^\infty$ the finite ad\'eles.\\
Let $B$ be a quaternion algebra on ${\bf Q}$, we will denote by $B_{\bf A}$ the adelization of $B$, by $B_{\bf A}^\times$ the topological group of invertible elements in $B_{\bf A}$ and $B_{\bf A}^{\times,\infty}$ the subgroup of finite ad\'eles.\\
Let $R$ be a maximal order in $B$. For a rational place $v$ of ${\bf Q}$ we put $B_v=B\otimes_{\bf Q}{\bf Q}_v$; if $p$ is a finite place we put $R_p=R\otimes_{\bf Z}{\bf Z}_p$. \\
If $p$ is a prime not dividing the discriminant of $B$, included $p=\infty$, we fix an isomorphism $i_p:B_p\to M_2({\bf Q}_p)$ such that if $p\not=\infty$ we have $i_p(R_p)=M_2({\bf Z}_p)$.\\
We write $GL_2^+({\bf R})=\{g\in GL_2({\bf R})|\ det\ g>0\}$ and $K_\infty={\bf R}^\times O_2({\bf R}),$ $K_\infty^+={\bf R}^\times SO_2({\bf R}).$
If $K$ is a field, let $\overline K$ denote an algebraic closure of $K$; we put $G_K={\rm Gal}(\overline K/K)$. For a local field $K$, $K^{unr}$ denotes the maximal unramified extension of $K$ in $\overline K$; we put $I_K={\rm Gal}(\overline K/K^{unr})$, the inertia subgroup of $G_K$. For a prime $p$ we put $G_p=G_{{\bf Q}_p}$, $I_p=I_{{\bf Q}_p}$. If $\rho$ is a representation of $G_{\bf Q}$, we write $\rho_p$ for the restriction of $\rho$ to a decomposition group at $p$.
\section{The local Hecke algebra ${\bf T}^\psi$}\label{de}
We fix a prime $\ell>2$. Let ${\bf Z}_{\ell^2}$ denote the integer ring of ${\bf Q}_{\ell^2}$, the unramified quadratic extension of ${\bf Q}_\ell$. Let $M\not=1$ be a square-free integer not divisible by $\ell$.
We fix $f$ an eigenform in $S_2(\Gamma_1(M\ell^2))$, then $f\in S_2(\Gamma_0(M\ell^2),\psi)$ for some Dirichlet character $\psi:({\bf Z}/M\ell^2{\bf Z})^\times\to\overline{\bf Q}^\times.$\\
By abuse of notation, we also denote by $\psi$ the adelisation of the Dirichlet character $\psi$, and we denote by $\psi_p$ the composition of $\psi$ with the inclusion ${\bf Q}_p^\times\to {\bf A}^\times$.\\
We fix a regular character $\chi:{\bf Z}_{\ell^2}^\times\to \overline{\bf Q}^\times$ of conductor $\ell$ such that $\chi|_{{\bf Z}_\ell^\times}=\psi_\ell|_{{\bf Z}_\ell^\times}$
and we extend $\chi$ to ${\bf Q}_{\ell^2}^\times$ by putting $\chi(\ell)=-\psi_\ell(\ell)$. We observe that $\chi$ is not uniquely determined by $\psi$ and, if we fix an embedding of $\overline{\bf Q}$ in $\overline{\bf Q}_\ell$, we can regard the values of $\chi$ as lying in this field.\\
By local class field theory, $\chi$ can be regarded as a character of $I_\ell$, and
we can consider the type $\tau=\chi\oplus\chi^\sigma:I_\ell\to GL_2(\overline{\bf Q}_\ell)$, where $\sigma$ denotes the complex conjugation.\\
We fix a decomposition $M=N\Delta'$ where $\Delta'$ is a product of an odd number of primes. If we choose $f\in S_2(\Gamma_1(M\ell^2))$ such that the automorphic representation $\pi_f=\otimes_v\pi_{f,v}$ of $GL_2({\bf A})$ associated to $f$ is supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$ and special at every prime $p|\Delta'$, then $\pi_{f,\ell}=\pi_\ell(\chi)$, where $\pi_\ell(\chi)$ is the representation of $GL_2({\bf Q}_\ell)$ associated to $\chi$, with central character $\psi_\ell$ and conductor $\ell^2$ (see \cite{Ge}, \S 2.8). Moreover, under our hypotheses, the nebentypus $\psi$ factors through $({\bf Z}/N\ell{\bf Z})^\times$. As a general hypothesis, we assume that $\psi$ has order prime to $\ell$.\\
Let $WD(\pi_\ell(\chi))$ be the $2$-dimensional representation of the Weil-Deligne group at $\ell$ associated to $\pi_\ell(\chi)$ by local Langlands correspondence. Since by local classfield theory, we can identify ${\bf Q}_{\ell^2}^\times$ with $W_{{\bf Q}_{\ell^2}}^{ab}$, we can see $\chi$ as a character of $W_{{\bf Q}_{\ell^2}}$; by (\cite{Ca} \S 11.3), we have
\begin{equation}\label{ind}
WD(\pi_\ell(\chi))=Ind^{W_{{\bf Q}_{\ell}}}_{W_{{\bf Q}_{\ell^2}}}(\chi)\otimes|\ |_\ell^{-1/2}.
\end{equation}
\noindent Let $\rho_f:G_{\bf Q}\to GL_2(\overline{\bf Q}_\ell)$ be the Galois representation associated to $f$ and $\overline\rho:G_{\bf Q}\to GL_2(\overline{\bf F}_\ell)$ be its reduction modulo $\ell$.\\
As in \cite{Lea}, we impose the following conditions on $\overline\rho$:
\begin{equation}\label{con1}
\overline\rho\ {\rm is\ absolutely\ irreducible};
\end{equation}
\begin{equation}\label{cond2}
{\rm if}\ p|N\ {\rm then}\ \overline\rho(I_p)\not=1;
\end{equation}
\begin{equation} \label{rara}
{\rm if}\ p|\Delta'\ {\rm and}\ p^2\equiv 1 {\rm mod}\ \ell\ {\rm then}\ \overline\rho(I_p)\not=1;
\end{equation}
\begin{equation}\label{end}
{\rm End}_{\overline{\bf F}_\ell[G_\ell]}(\overline\rho_\ell)=\overline{\bf F}_\ell.
\end{equation}
\begin{equation}\label{c3}
{\rm if}\ \ell=3,\ \overline\rho\ \ {\rm is\ not\ induced\ from\ a\ character\ of\ }\ {\bf Q}(\sqrt{-3}).
\end{equation}
\noindent Let $K=K(f)$ be a finite extension of ${\bf Q}_\ell$ containing ${\bf Q}_{\ell^2}$, ${\rm Im}(\psi)$ and the eigenvalues for $f$ of all Hecke operators. Let $\mathcal O$ be the ring of integers of $K$, $\lambda$ be a uniformizer of $\mathcal O$, $k=\mathcal O/(\lambda)$ be the residue field.
\noindent Let $\mathcal B$ denote the set of normalized newforms $h$ in $S_2(\Gamma_0(M\ell^2),\psi)$ which are supercuspidal of type $\chi$ at $\ell$, special at primes dividing $\Delta'$ and whose associated representation $\rho_h$ is a deformation of $\overline\rho$. For $h\in\mathcal B$, let $h=\timesum_{n=1}^\infty a_n(h)q^n$ be the $q$-expansion of $h$ and let $\mathcal O_h$ be the $\mathcal O$-algebra generated in ${\bf Q}_\ell$ by the Fourier coefficients of $h$. Let ${\bf T}^\psi$ denote the sub-$\mathcal O$-algebra of $\prod_{h\in\mathcal B}\mathcal O_h$ generated by the elements $\widetilde T_p=(a_p(h))_{h\in\mathcal B}$ for $p\not|M\ell$.\\
\section{Deformation problem}
Our next goal is to state a global Galois deformation condition of $\overline\rho$ which is a good candidate for having ${\bf T}^\psi$ as an universal deformation ring.
\subsection{The global deformation condition of type $(\rm{sp},\tau,\psi)_Q$}\label{univ}
First of all we observe that our local Galois representation $\rho_{f,\ell}=\rho_\ell$ is of type $\tau$ (\cite{CDT}).\\
We let $\Delta_1$ be the product of primes $p|\Delta'$ such that $\overline\rho(I_p)\not=1$, and $\Delta_2$ be the product of primes $p|\Delta'$ such that $\overline\rho(I_p)=1$.\\
We denote by $\mathcal C_\mathcal O$ the category of local complete noetherian $\mathcal O$-algebras with residue field $k$. Let $\epsilon:G_p\to{\bf Z}_\ell^\times$ be the cyclotomic character and $\omega:G_p\to{\bf F}_\ell^\times$ be its reduction mod $\ell$.
By analogy with \cite{Lea}, we define the global deformation condition of type $(\rm{sp},\tau,\psi)_Q$:
\begin{definition}\label{def}
Let $Q$ be a square-free integer, prime to $M\ell$. We consider the functor $\mathcal F_Q$ from $\mathcal C_\mathcal O$ to the category of sets which associate to an object $A\in\mathcal C_\mathcal O$ the set of strict equivalence classes of continuous homomorphisms $\rho:{G_{\bf Q}}\to GL_2(A)$ lifting $\overline\rho$ and satisfying the following conditions:
\begin{itemize}
\item[a$_Q$)] $\rho$ is unramified outside $MQ\ell$;
\item[b)] if $p|\Delta_1N$ then $\rho(I_p)\timesimeq\overline\rho(I_p)$ ;
\item[c)] if $p|\Delta_2$ then $\rho_p$ satisfies the sp-condition, that is ${\rm tr}(\rho(F))^2=(p\mu(p)+\mu(p))^2=\psi_p(p)(p+1)^2$ for a lift $F$ of ${\rm Frob}_p$ in $G_p$;
\item[d)] $\rho_\ell$ is weakly of type $\tau$;
\item[e)] ${\rm det}(\rho)=\epsilon\psi$, where $\epsilon:G_{\bf Q}\to{\bf Z}_\ell^\times$ is the cyclotomic character.
\end{itemize}
\end{definition}
\noindent It is easy to prove that the functor $\mathcal F_Q$ is representable.
\noindent Let $\mathcal R^\psi_Q$ be the universal ring associated to the functor $\mathcal F_Q$. We put $\mathcal F=\mathcal F_0$, ${\mathcal R^\psi}=\mathcal R^\psi_0$.\\
We observe that if $\overline\rho(I_p)=1$, by the Ramanujan-Petersson conjecture proved by Deligne, the sp-condition rules out those deformations of $\overline\rho$ arising from modular forms which are not special at $p$. This space includes the restrictions to $G_p$ of representations coming from forms in $S_2(\Gamma_0(N\Delta'\ell^2),\psi)$ which are special at $p$, but it does not contain those coming from principal forms in $S_2(\Gamma_0(N\Delta'\ell^2),\psi)$.
\section{Cohomological modules coming from the Shimura curves}\label{shi}
We fix a prime $\ell>2$. Let $\Delta'$ be a product of an odd number of primes, different from $\ell$. We put $\Delta=\ell\Delta'$. Let $B$ be the indefinite quaternion algebra over ${\bf Q}$ of discriminant $\Delta$. Let $R$ be a maximal order in $B$. Let $N$ be an integer prime to $\Delta$. We put $$K_p^0(N)=i_p^{-1}\left\{\left(
\begin{array}
[c]{cc}
a & b\\
c & d
\end{array}
\right)\in GL_2({\bf Z}_p)\ |\ c\equiv 0\ {\rm mod}\ N\right\}$$
$$K_p^1(N)=i_p^{-1}\left\{\left(
\begin{array}
[c]{cc}
a & b\\
c & d
\end{array}
\right)\in \ GL_2({\bf Z}_p)\ |\ c\equiv 0\ {\rm mod}\ N,\ a\equiv 1\ {\rm mod}\ N\right\}.$$
Let $s$ be a prime $s\not|N\Delta$. We define $$V_0(N,s)=\prod_{p\not|Ns}R^\times_p\times \prod_{p|N}K_p^0(N)\times K_s^1(s^2)$$ and $$V_1(N,s)=\prod_{p\not|N\ell s}R^\times_p\times\prod_{p|N}K_p^1(N)\times K_s^1(s^2)\times (1+u_\ell R_\ell).$$
We observe that there is an isomorphism $$V_0(N,s)/V_1(N,s)\timesimeq({\bf Z}/N{\bf Z})^\times\times{\bf F}_{\ell^2}^\times.$$ We will consider the character $\widehat\psi$ of $V_0(N,s)$ with kernel $V_1(N,s)$ defined as follow: $$\widehat\psi:=\prod_{p|N}\psi_p\times\chi:({\bf Z}/N{\bf Z})^\times\times{\bf F}_{\ell^2}^\times\to{\bf C}^\times$$
and we shall consider the space $S_2(V_0(N,s),\widehat\psi)$ of quaternionic modular forms with nebentypus $\widehat\psi$.\\
For $i=0,1$ let $\Phi_i(N,s)=(GL_2^+({\bf R})\times V_i(N,s))\cap B_{\bf Q}^\times$
and we consider the Shimura curves: $${\bf X}_i(N,s)=B_{\bf Q}^\times\setminus B^\times_{\bf A}/K_\infty^+\times V_i(N,s).$$
The finite commutative group $\Omega=({\bf Z}/N{\bf Z})^\times\times{\bf F}_{\ell^2}^\times$ naturally acts on the $\mathcal O$-module $H^*({\bf X}_1(N,s),\mathcal O)$ via its action on ${\bf X}_1(N,s)$. Since there is an injection of $H^*({\bf X}_1(N,s),\mathcal O)$ in $H^*({\bf X}_1(N,s),K)$, by (\cite{H2}, \S 7) the cohomology group $H^1({\bf X}_1(N,s),\mathcal O)$ is also equipped with the action of Hecke operator $T_p$, for $p\not=\ell$ and diamond operators $\langle n\rangle$ for $n\in({\bf Z}/N{\bf Z})^\times$. The Hecke action commutes with the action of $\Omega$, since we do not have a $T_\ell$ operator. The two actions are $\mathcal O$-linear.\\
We can write $\Omega=\Omega_1\times\Omega_2$ where $\Omega_1$ is the $\ell$-Sylow subgroup of $\Omega$ and $\Omega_2$ is the subgroup of $\Omega$ with order prime to $\ell$. Since $\Omega_2\subseteq\Omega$, $\Omega_2$ acts on $H^*({\bf X}_1(N,s),\mathcal O)$ and so $H^*({\bf X}_1(N,s),\mathcal O)=\bigoplus_\varphi H^*({\bf X}_1(N,s),\mathcal O)^\varphi$ where $\varphi$ runs over the characters of $\Omega_2$ and $H^*({\bf X}_1(N,s),\mathcal O)^\varphi$ is the sub-Hecke-module of $H^*({\bf X}_1(N,s),\mathcal O)$ on which $\Omega_2$ acts by the character $\varphi$. Since, by hypothesis, $\psi$ has order prime to $\ell$, $H^*({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}=H^*({\bf X}_1(N,s),\mathcal O)^\varphi$ for some character $\varphi$ of $\Omega_2$. So $H^*({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}$ is a direct summand of $H^*({\bf X}_1(N,s),\mathcal O)$.
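Let us spell out why only the prime-to-$\ell$ part $\Omega_2$ is used here: since $|\Omega_2|$ is prime to $\ell$, it is invertible in $\mathcal O$, so, after enlarging $K$ if necessary so that $\mathcal O$ contains the values of the characters of $\Omega_2$, the orthogonal idempotents $e_\varphi=|\Omega_2|^{-1}\sum_{g\in\Omega_2}\varphi^{-1}(g)\,g\in\mathcal O[\Omega_2]$ are defined, and the decomposition above is the one induced by $1=\sum_\varphi e_\varphi$.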
\noindent It follows easily from the Hochschild-Serre spectral sequence that $$H^*({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}\simeq H^*({\bf X}_0(N,s),\mathcal O({\widehat\psi}))$$ where $\mathcal O({\widehat\psi})$ is the sheaf $B^\times\setminus B_{\bf A}^\times\times\mathcal O/K_\infty^+\times V_0(N,s)$, where $B^\times$ acts on $B^\times_{\bf A}\times\mathcal O$ on the left by $\alpha\cdot(g,m)=(\alpha g,m)$ and $K_\infty^+\times V_0(N,s)$ acts on the right by $(g,m)\cdot v=(g,m)\cdot(v_\infty,v^\infty)=(gv,{\widehat\psi}(v^\infty)m)$, where $v_\infty$ and $v^\infty$ are respectively the infinite and finite part of $v$. By translating to the cohomology of groups we obtain (see \cite{sl}, Appendix) $H^1({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}\simeq H^1(\Phi_0(N,s),\mathcal O(\widetilde\psi)),$ where $\widetilde\psi$ is the restriction of ${\widehat\psi}$ to $\Phi_0(N,s)/\Phi_1(N,s)$ and $\mathcal O(\widetilde\psi)$ is $\mathcal O$ with the action of $\Phi_0(N,s)$ given by $\gamma\colon a\mapsto\widetilde\psi^{-1}(\gamma)a$.
\noindent The Hecke action on $H^1(\Phi_0(N,s),\mathcal O(\widetilde\psi))$ and the structure of $H^1({\bf X}_1(N,s),K)^{\widehat\psi}$ as a module over the Hecke algebra are well known. Let ${\bf T}^{\widehat\psi}_0(N,s)$ be the $\mathcal O$-algebra generated by the Hecke operators $T_p$, $p\not=\ell$, and the diamond operators, acting on $H^1({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}$.
\begin{proposition}\label{ra}
$H^1({\bf X}_1(N,s),K)^{\widehat\psi}$ is free of rank 2 over $${\bf T}^{\widehat\psi}_0(N,s)\otimes K={\bf T}^{\widehat\psi}_0(N,s)_K.$$
\end{proposition}
\noindent The proof of proposition \ref{ra} follows easily from the following lemmas:
\begin{lemma}
Let $L\supseteq K$ be fields with $L/K$ a Galois extension, let $V$ be a vector space over $K$ and let $T$ be a $K$-algebra acting on $V$. If $V\otimes L$ is free of rank $n$ over $T\otimes L$, then $V$ is free of rank $n$ over $T$.
\end{lemma}
\begin{proof}
Let $G={\rm Gal}(L/K)$. By Galois descent, since $V\otimes L$ is free of rank $n$ over $T\otimes L$, we have
$V\simeq(V\otimes L)^G\simeq((T\otimes L)^n)^G\simeq((T\otimes L)^G)^n\simeq T^n.$ \end{proof}
\begin{lemma}
Let ${\bf T}_0^{\widehat\psi}(N,s)_{\bf C}$ denote the algebra generated over ${\bf C}$ by the operators $T_p$ for $p\not=\ell$
acting on $H^1({\bf X}_1(N,s),{\bf C})^{\widehat\psi}$. Then $H^1({\bf X}_1(N,s),{\bf C})^{\widehat\psi}$ is free of rank 2 over ${\bf T}^{\widehat\psi}_0(N,s)_{\bf C}$.
\end{lemma}
\noindent We observe that the proof of this lemma follows by the same analysis explained in (\cite{Lea}, Proof of proposition 1.2), defining a homomorphism $$JL:S_2(V_1(N,s))\to S_2(\Gamma_0(s^2\Delta')\cap\Gamma_1(N\ell^2))$$ which is injective when restricted to the space $S_2(V_0(N,s),\widehat\psi)$ and equivariant for the Hecke operators.
\section{The $\mathcal O$-module ${\mathcal M^\psi}$}\label{tw}
Throughout this section, we largely mirror section 3 of \cite{Lea}, and we formulate a result that generalizes a result of Terracini to the case of nontrivial nebentypus.\\
We set $\Delta=\Delta'\ell$; let $B$ be the indefinite quaternion algebra over ${\bf Q}$ of discriminant $\Delta$. Let $R$ be a maximal order in $B$.\\
It is convenient to choose an auxiliary prime $s\not|M\ell$, $s>3$, such that no lift of $\overline\rho$ can be ramified at $s$; such a prime exists by \cite{DT}, Lemma 2. We consider the group $\Phi_0=\Phi_0(N,s)$; it is easy to verify that the group $\Phi_0$ has no elliptic elements (\cite{Lea}).
\noindent There exists an eigenform $\widetilde f\in S_2(\Gamma_0(Ms^2\ell^2),\psi)$ such that $\rho_f=\rho_{\widetilde f}$ and $T_s\widetilde f=0$. By the Jacquet-Langlands correspondence, the form $\widetilde f$ determines a character ${\bf T}^{\widehat\psi}_0(N,s)\to k$
sending the operator $t$ to the class ${\rm mod}\ \lambda$ of the eigenvalue of $t$ on $\widetilde f$. The kernel of this character is a maximal ideal $\mathfrak m$ in ${\bf T}^{\widehat\psi}_0(N,s)$. We define ${\mathcal M^\psi}=H^1({\bf X}_1(N,s),\mathcal O)^{\widehat\psi}_\mathfrak m.$
By combining Proposition 4.7 of \cite{DDT} with the Jacquet-Langlands correspondence we see that there is a natural isomorphism ${\bf T}^\psi\simeq{\bf T}_0^{\widehat\psi}(N,s)_\mathfrak m.$ Therefore, by Proposition \ref{ra}, ${\mathcal M^\psi}\otimes_\mathcal O K$ is free of rank 2 over ${\bf T}^\psi\otimes_\mathcal O K$.
\noindent Let $\mathcal B$ denote the set of newforms $h$ of weight two, nebentypus $\psi$, level dividing $M\ell^2s^2$, special at primes $p$ dividing $\Delta'$, supercuspidal of type $\tau$ at $\ell$ and such that $\overline\rho_h\sim\overline\rho$. For a newform $h\in\mathcal B$, we let $K_h$ denote the field generated over ${\bf Q}_\ell$ by its coefficients $a_n(h)$, $\mathcal O_h$ denote the ring of integers of $K_h$ and let $\lambda$ be a uniformizer of $\mathcal O_h$. We let $A_h$ denote the subring of $\mathcal O_{h}$ consisting of those elements whose reduction mod $\lambda$ is in $k$. We know that with respect to some basis, we have
$\rho_h:G_{\bf Q}\to GL_2(A_h)$ a deformation of $\overline\rho$ satisfying our global deformation problem.\\
The universal property of ${\mathcal R^\psi}$ furnishes a unique homomorphism $\pi_h:{\mathcal R^\psi}\to A_h$ such that the composite $G_{\bf Q}\to GL_2({\mathcal R^\psi})\to GL_2(A_h)$ is equivalent to $\rho_h$. Since ${\mathcal R^\psi}$ is topologically generated by the traces of $\rho^{\rm univ}({\rm Frob}_p)$ for $p\not=\ell$, (see \cite{Ma}, \S1.8), we conclude that the map
${\mathcal R^\psi}\to\prod_{h\in\mathcal B}\mathcal O_h$ such that $r\mapsto(\pi_h(r))_{h\in\mathcal B}$
has image ${\bf T}^\psi$. Thus there is a surjective homomorphism of $\mathcal O$-algebras $\Phi:{\mathcal R^\psi}\to{\bf T}^\psi.$ Our goal is to prove the following
\begin{theorem}\label{goal}
\begin{itemize}
\item[a)] ${\mathcal R^\psi}$ is a complete intersection of dimension 1;
\item[b)] $\Phi:{\mathcal R^\psi}\to{\bf T}^\psi$ is an isomorphism;
\item[c)] ${\mathcal M^\psi}$ is a free ${\bf T}^\psi$-module of rank 2.
\end{itemize}
\end{theorem}
\subsection{Proof of theorem \ref{goal}}
\noindent In order to prove theorem \ref{goal}, we shall apply the Taylor-Wiles criterion in the version of Diamond and Fujiwara and we continue to follow section 3 of \cite{Lea} closely.\\
\noindent We shall prove the existence of a family $\mathcal Q$ of finite sets $Q$ of prime numbers, not dividing $M\ell$ and of a ${\mathcal R^\psi}_Q$-module ${\mathcal M^\psi}_Q$ for each $Q\in\mathcal Q$ such that the system $({\mathcal R^\psi}_Q,{\mathcal M^\psi}_Q)_{Q\in\mathcal Q}$ satisfies the conditions (TWS1), (TWS2), (TWS3), (TWS4), (TWS5) and (TWS6) of \cite{Lea}.
\noindent If these conditions are satisfied, the family $({\mathcal R^\psi}_Q,{\mathcal M^\psi}_Q)_{Q\in\mathcal Q}$ will be called a {\bf Taylor-Wiles system} for $({\mathcal R^\psi},{\mathcal M^\psi})$. Then theorem \ref{goal} will follow from the isomorphism criterion (\cite{D}, theorem 2.1) developed by Wiles and Taylor-Wiles.\\
As in section 3.1 of \cite{Lea}, let $Q$ be a finite set of prime numbers not dividing $N\Delta$ and such that
\begin{itemize}
\item[(A)] $q\equiv 1\ {\rm mod}\ \ell,\ \ \forall q\in Q$;
\item[(B)] if $q\in Q$, $\overline\rho({\rm Frob}_q)$ has distinct eigenvalues $\alpha_{1,q}$ and $\alpha_{2,q}$ contained in $k$.
\end{itemize}
We will define the modules $\mathcal M_Q$.
If $q\in Q$ we put $$K'_q=\left\{\alpha\in R_q^\times\ |\ i_q(\alpha)\in \left(
\begin{array}
[c]{cc}
H_q & *\\
q{\bf Z}_q & *
\end{array}
\right)\right\}$$ where $H_q$ is the subgroup of $({\bf Z}/q{\bf Z})^\times$ consisting of elements of order prime to $\ell$. By analogy with the definition of $V_Q$ in section 3.3 of \cite{Lea}, we define $$V_Q'(N,s)=\prod_{p\not|NQs}R_p^\times\times\prod_{p|N}K_p^0(N)\times K_s^1(s^2)\times\prod_{q|Q}K'_q$$
$$V_Q(N,s)=\prod_{p\not|NQs}R_p^\times\times\prod_{p|NQ}K_p^0(NQ)\times K_s^1(s^2)$$
$$\Phi_Q=(GL_2^+({\bf R})\times V_Q(N,s))\cap B^\times,\ \ \Phi_Q'=(GL_2^+({\bf R})\times V_Q'(N,s))\cap B^\times.$$
\noindent Then $\Phi_Q/\Phi_Q'\simeq\Delta_Q$ acts on $H^1(\Phi_Q',\mathcal O(\widetilde\psi))$. Let ${\bf T}_Q^{'\widehat\psi}(N,s)$ (resp. ${\bf T}_Q^{\widehat\psi}(N,s)$) be the Hecke $\mathcal O$-algebra generated by the Hecke operators $T_p$, $p\not=\ell$, and the diamond operators (that is, the Hecke operators coming from $\Delta_Q$), acting on $H^1({\bf X}'_Q(N,s),\mathcal O)^{\widehat\psi}$ (resp. $H^1({\bf X}_Q(N,s),\mathcal O)^{\widehat\psi}$), where ${\bf X}'_Q(N,s)$ (resp. ${\bf X}_Q(N,s)$) is the Shimura curve associated to $V'_Q(N,s)$ (resp. $V_Q(N,s)$).\\
There is a natural surjection $\sigma_Q:{\bf T}_Q^{'\widehat\psi}(N,s)\to {\bf T}_Q^{\widehat\psi}(N,s)$. Since the diamond operator $\langle n\rangle$ depends only on the image of $n$ in $\Delta_Q$, ${\bf T}_Q^{'\widehat\psi}(N,s)$ is naturally an $\mathcal O[\Delta_Q]$-algebra.
Let $\widetilde\alpha_{i,q}$ for $i=1,2$ be the two roots in $\mathcal O$ of the polynomial $X^2-a_q(f)X+q$ reducing to $\alpha_{i,q}$ for $i=1,2$.
There is a unique eigenform $\widetilde f_Q\in S_2(\Gamma_0(MQs^2\ell^2),\widehat\psi)$ such that $\rho_{\widetilde f_Q}=\rho_f$, $a_s(\widetilde f_Q)=0$, $a_q(\widetilde f_Q)=\widetilde\alpha_{2,q}$ for $q|Q$, where $\widetilde\alpha_{2,q}$ is the lift of $\alpha_{2,q}$.\\
By the Jacquet-Langlands correspondence, the form $\widetilde f_Q$ determines a character $\theta_Q:{\bf T}_Q^{\widehat\psi}(N,s)\to k$ sending $T_p$ to $a_p(\widetilde f_Q)\ {\rm mod}\ \lambda$ and the diamond operators to 1. We define $\widetilde\mathfrak m_Q={\rm ker}\,\theta_Q,$ $\mathfrak m_Q=\sigma^{-1}_Q(\widetilde\mathfrak m_Q)$, and $\mathcal M_Q=H^1(\Phi'_Q,\mathcal O(\widetilde\psi))_{\mathfrak m_Q}.$
Then the map $\sigma_Q$ induces a surjective homomorphism ${\bf T}_Q^{'\widehat\psi}(N,s)_{\mathfrak m_Q}\to {\bf T}_Q^{\widehat\psi}(N,s)_{\widetilde\mathfrak m_Q}$ whose kernel contains $I_Q({\bf T}_Q^{'\widehat\psi}(N,s))_{\mathfrak m_Q}$. \\
If $\mathcal Q$ is a family of finite sets $Q$ of primes satisfying conditions (A) and (B), then conditions (TWS1) and (TWS2) hold, as proved in \cite{Lea}, proposition 3.2; by the same methods as in \S 6 of \cite{DDT1} and in \S 4, \S 5 of \cite{Des}, it is easy to prove that our system $({\mathcal R^\psi}_Q,\mathcal M_Q)_{Q\in\mathcal Q}$ satisfies conditions (TWS3), (TWS4), (TWS5) simultaneously. \\
We put $\delta_q=\left(
\begin{array}
[c]{cc}
q & 0\\
0 & 1
\end{array}
\right).$
Let $\eta_q$ be the id\`{e}le in $B_{\bf A}^\times$ defined by $\eta_{q,v}=1$ if $v\not|q$ and $\eta_{q,q}=i^{-1}_q\left(
\begin{array}
[c]{cc}
q & 0\\
0 & 1
\end{array}
\right).$ By strong approximation, write $\eta_q=\delta_qg_\infty u$ with $\delta_q\in B^\times$, $g_\infty\in GL_2^+({\bf R})$, $u\in V_{Q'}(N,s)$. We define a map
\begin{eqnarray}
H^1(\Phi_Q,\mathcal O(\widetilde\psi))&\to& H^1(\Phi_{Q'},\mathcal O(\widetilde\psi))\\
x&\mapsto& x|_{\eta_q}
\end{eqnarray}
as follows: let $\xi$ be a cocycle representing the cohomology class $x$ in $H^1(\Phi_{Q},\mathcal O(\widetilde\psi))$; then $x|_{\eta_q}
$ is represented by the cocycle $$\xi|_{\eta_q}(\gamma)=\widehat\psi(\delta_q)\cdot\xi(\delta_q\gamma\delta_q^{-1}).$$
We observe that if $\mathcal Q$ is a family of finite sets $Q$ of primes satisfying conditions (A) and (B), then condition (TWS6) holds for the system $({\mathcal R^\psi}_Q,\mathcal M_Q)_{Q\in\mathcal Q}$. The proof is essentially the same as in \cite{Lea}, using the following lemma:
\begin{lemma}
$$T_p(x|_{\eta_q})=(T_p(x)|_{\eta_q})\ \ \ {\rm if}\ \ p\not|MQ'\ell,$$
\begin{equation}\label{r}
T_q(x|_{\eta_q})=q\widehat\psi(q)res_{\Phi_Q/\Phi_{Q'}}x,
\end{equation}
$$T_q(res_{\Phi_Q/\Phi_{Q'}}x)=res_{\Phi_Q/\Phi_{Q'}}(T_q(x))-x|_{\eta_q}.$$
\end{lemma}
\proof We prove (\ref{r}). We put $\widetilde\delta_q=\left(
\begin{array}
[c]{cc}
1 & 0\\
0 & q
\end{array}
\right),$ and we decompose the double coset $\Phi_{Q'}\widetilde\delta_q\Phi_{Q'}=\coprod_{i=1}^q\Phi_{Q'}\widetilde\delta_qh_i$ with $h_i\in\Phi_{Q'}$; if $\gamma\in\Phi_{Q'}$, then:
\begin{eqnarray}
T_q(\xi|_{\eta_q})(\gamma)&=&\sum_{i=1}^q\widehat\psi(h_i\widetilde\delta_q)\xi|_{\eta_q}(\widetilde\delta_qh_i\gamma h_{j(i)}^{-1}\widetilde\delta^{-1}_q)\nonumber\\
&=&\sum_{i=1}^q\widehat\psi(h_i)\widehat\psi(\widetilde\delta_q)\widehat\psi(\delta_q)\xi(\delta_q\widetilde\delta_qh_i\gamma h_{j(i)}^{-1}\widetilde\delta^{-1}_q\delta^{-1}_q)\nonumber\\
&=&\widehat\psi(q)\sum_{i=1}^q\widehat\psi(h_i)\xi(h_i\gamma h_{j(i)}^{-1})
\end{eqnarray}
where the last equality holds since $\delta_q\widetilde\delta_q=\left(
\begin{array}
[c]{cc}
q & 0\\
0 & q
\end{array}
\right)$. From the cocycle relations we have $T_q(x|_{\eta_q})=q\widehat\psi(q)res_{\Phi_Q/\Phi_{Q'}}x.$\qed
\noindent The conditions defining the functor $\mathcal F_Q$ characterize a global Galois deformation problem with fixed determinant (\cite{M}, \S 26). We let $ad^0\overline\rho$ denote the subrepresentation of the adjoint representation of $\overline\rho$ on the space of trace-zero endomorphisms and we let $ad^0\overline\rho(1)={\rm Hom}(ad^0\overline\rho,\mu_p)\simeq Symm^2(\overline\rho)$, with the action of $G_p$ given by $(g\varphi)(v)=g\varphi(g^{-1}v).$ Local deformation conditions a$_Q$), b), c), d) allow one to define, for each place $v$ of ${\bf Q}$, a subgroup $L_v$ of $H^1(G_v, ad^0\overline\rho)$, the tangent space of the deformation functor (see \cite{M}). We will describe the computation of the local terms of the dimension formula coming from the Poitou-Tate sequence:
\begin{itemize}
\item ${\rm dim}_kH^0(G_{\bf Q},ad^0\overline\rho)={\rm dim}_kH^0(G_{\bf Q},ad^0\overline\rho(1))=0$, by the same argument as in \cite{Des}, p.~441.
\item ${\rm dim}_kL_\ell=1$. In fact let ${\bf R}^D_{\mathcal O,\ell}$ be the local universal deformation ring associated to the local deformation problem of being weakly of type $\tau$ \cite{CDT}. Since, in dimension 2, potentially Barsotti-Tate is equivalent to potentially crystalline (hence potentially semistable) of Hodge-Tate weight $(0,1)$ (see \cite{FM}, theorem C2), this allows us to apply Savitt's result (\cite{savitt}, theorem 6.22). Since under our hypothesis ${\bf R}^D_{\mathcal O,\ell}\not=0$, we deduce that there is an isomorphism $\mathcal O[[X]]\simeq{\bf R}^D_{\mathcal O,\ell}$.
\item ${\rm dim}_kH^0(G_\ell,ad^0\overline\rho)=0$, because of hypothesis (\ref{end}).
\item ${\rm dim}_kH^1(G_p/I_p,(ad^0\overline\rho)^{I_p})-{\rm dim}_kH^0(G_p,ad^0\overline\rho)=0$ for $p|N\Delta_1$\\
If we let $W=ad^0\overline\rho$, this follows from the exact sequence
\begin{displaymath}
0\to H^0(G_p,W)\to H^0(I_p,W)\stackrel{\scriptstyle{\rm Frob}_p-1}{\to}H^0(I_p,W)\to H^1(G_p/I_p,W^{I_p})\to 0.
\end{displaymath}
\item ${\rm dim}_kL_p=1$ for $p|\Delta_2$, in fact the following lemma holds:
\begin{lemma}\label{versal}
The versal deformation ring of the local deformation problem of satisfying the sp-condition is $\mathcal O[[X,Y]]/(X,XY)=\mathcal O[[Y]].$
\end{lemma}
\item ${\rm dim}_kH^0(G_p,ad^0\overline\rho)=1$, since the eigenvalues of $\overline\rho({\rm Frob}_p)$ are distinct.
\item ${\rm dim}_kH^1(G_q,ad^0\overline\rho)=2$ if $q|Q$, in fact $H^1(G_q/I_q,W)=W/({\rm Frob}_q-1)W$ has dimension 1, because in the basis $\left\{\left(
\begin{array}
[c]{cc}
1 & 0\\
0 & -1
\end{array}
\right), \left(
\begin{array}
[c]{cc}
0 & 1\\
0 & 0
\end{array}
\right), \left(
\begin{array}
[c]{cc}
0 & 0\\
1 & 0
\end{array}
\right)\right\}$ of $W$, ${\rm Frob}_q$ acts via the matrix
$$\left(
\begin{array}
[c]{ccc}
1 & 0 & 0\\
0 & \alpha_{1,q}\alpha_{2,q}^{-1} & 0\\
0 & 0 & \alpha_{1,q}^{-1}\alpha_{2,q}
\end{array}
\right)$$
and $\alpha_{1,q}\not=\alpha_{2,q}$ by hypothesis. We observe that:
\begin{eqnarray}
H^1(I_q,W)^{G_q/I_q}&=&\{\alpha\in {\rm Hom}({\bf Z}_q^\times,W)\ |\ ({\rm Frob}_q-1)\alpha=0\}\nonumber\\
&\simeq& W[{\rm Frob}_q-1]\nonumber
\end{eqnarray}
is again one-dimensional, and $H^2(G_q/I_q,W)=0$ since $G_q/I_q\simeq\widehat{{\bf Z}}.$ The desired result follows from the inflation-restriction exact sequence.
\item ${\rm dim}_kH^0(G_q,ad^0\overline\rho)=1$ if $q|Q$, since the eigenvalues of ${\rm Frob}_q$ are $1, \alpha_{1,q}^2, \alpha_{2,q}^2$ and $\alpha_{1,q}^2, \alpha_{2,q}^2\not=1$ by hypothesis.
\item ${\rm dim}_kH^1(G_\infty,ad^0\overline\rho)=0$, since $|G_\infty|=2\not=\ell$.
\item ${\rm dim}_kH^0(G_\infty,ad^0\overline\rho)=1$, since the eigenvalues of complex conjugation on $ad^0\overline\rho$ are $\{1,-1,-1\}.$
\end{itemize}
\proof[Proof of lemma \ref{versal}]
\noindent We first observe that it is possible to characterize the deformations of $\overline\rho_p$ in the unramified case, if $p^2\not\equiv 1\ {\rm mod}\ \ell$, and, as in Lemma 2.1 in \cite{Lea}, it is easy to prove that the versal deformation ring ${\mathcal R^\psi}'_p$ is generated by two elements $X,Y$ such that $XY=0$. \\
It is immediate to see that the sp-condition is equivalent to requiring that every homomorphism $\varphi:{\mathcal R^\psi}'_p\to A$ associated to a deformation $\rho$ of $\overline\rho|_{G_p}$ over an $\mathcal O$-algebra $A\in\mathcal C_\mathcal O$ satisfies $\varphi(X)=0$.\qed
\noindent The dimension formula allows us to obtain the following identity:
\begin{eqnarray}\label{f}
{\rm dim}_kSel_Q(ad^0\overline\rho)-{\rm dim}_kSel^*_Q(ad^0\overline\rho(1))=|Q|
\end{eqnarray}
and, since the minimal number of topological generators of ${\mathcal R^\psi}_Q$ is equal to ${\rm dim}_kSel_Q(ad^0\overline\rho)$,
we obtain that the $\mathcal O$-algebra ${\mathcal R^\psi}_Q$ can be generated topologically by $|Q|+{\rm dim}_kSel^*_Q(ad^0\overline\rho(1))$ elements.
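\noindent For the reader's convenience, we sketch how the local computations above yield (\ref{f}). By the Greenberg-Wiles formula,
\begin{eqnarray}
{\rm dim}_kSel_Q(ad^0\overline\rho)-{\rm dim}_kSel^*_Q(ad^0\overline\rho(1))&=&{\rm dim}_kH^0(G_{\bf Q},ad^0\overline\rho)-{\rm dim}_kH^0(G_{\bf Q},ad^0\overline\rho(1))\nonumber\\
&&+\sum_v\left({\rm dim}_kL_v-{\rm dim}_kH^0(G_v,ad^0\overline\rho)\right)\nonumber
\end{eqnarray}
where, as the list above indicates, $L_q=H^1(G_q,ad^0\overline\rho)$ for $q\in Q$ and $L_p$ consists of the unramified classes $H^1(G_p/I_p,(ad^0\overline\rho)^{I_p})$ for $p|N\Delta_1$. The local terms computed above contribute $1-0=1$ at $v=\ell$, $0$ at the places dividing $N\Delta_1$ and $\Delta_2$, $2-1=1$ at each $q\in Q$ and $0-1=-1$ at $v=\infty$, while the two global terms vanish; summing, the right-hand side equals $|Q|$.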
Applying the same arguments as in \cite{DDT}, the proof of theorem \ref{goal} follows.
\section{Quaternionic congruence module}
In this section we keep the notation of the previous sections.\\
As remarked in section \ref{shi}, there is an injective map $$JL:S_2(V_0(N),\widehat\psi)\to S_2(\Gamma_0(\Delta')\cap\Gamma_1(N\ell^2))$$ where $V_0(N)=\prod_{p\not|N}R_p^\times\times\prod_{p|N}K_p^0(N)$ is an open compact subgroup of $B_{\bf A}^{\times,\infty}$. We put $V_{\widehat\psi}=JL(S_2(V_0(N),\widehat\psi))$, the subspace of $S_2(\Gamma_0(\Delta')\cap\Gamma_1(N\ell^2))$ spanned by those new eigenforms with nebentypus $\psi$ which are supercuspidal of type $\tau$ at $\ell$ and special at primes dividing $\Delta'$. Let $K\subset\overline{\bf Q}_\ell$ be a finite extension containing ${\bf Q}_{\ell^2}$; we consider a newform $f\in S_2(\Gamma_0(N\Delta'\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$ and special at primes dividing $\Delta'$, and let $X$ be the subspace of $V_{\widehat\psi}(K)$ spanned by $f$. We remark that there is an isomorphism between the $K$-algebra ${\bf T}_0^{\widehat\psi}(K)={\bf T}_0^{\widehat\psi}(N)\otimes_\mathcal O K$ generated over $K$ by the operators $T_p$, $p\not=\ell$, acting on $H^1({\bf X}_1(N),K)^{\widehat\psi}$ and the Hecke algebra generated by all the Hecke operators acting on $V_{\widehat\psi}(K)$. Thus $V_{\widehat\psi}=X\oplus Y$ where $Y$ is the orthogonal complement of $X$ with respect to the Petersson product on $V_{\widehat\psi}$ and there is an isomorphism $${\bf T}_0^{\widehat\psi}(K)\simeq{\bf T}_{0}^{\widehat\psi}(K)_X\oplus{\bf T}_{0}^{\widehat\psi}(K)_Y$$ where ${\bf T}_{0}^{\widehat\psi}(K)_X={\bf T}_0^{\widehat\psi}(N)_X\otimes K$ and ${\bf T}_{0}^{\widehat\psi}(K)_Y={\bf T}_0^{\widehat\psi}(N)_Y\otimes K$ are the $K$-algebras generated by the Hecke operators acting on $X$ and $Y$ respectively.\\
As in the classical case, it is possible to define the quaternionic congruence module for $f$ and, by the Jacquet-Langlands correspondence, it is easy to prove that $\widetilde M^{\rm quat}=\mathcal O/(\lambda^n)$, where $n$ is the smallest integer such that $\lambda^ne_f\in{\bf T}_0^{\widehat\psi}(N)$ and $e_f$ is the projector onto the coordinate corresponding to $f$. There are isomorphisms
$$\widetilde M^{\rm quat}\simeq\frac{{\bf T}_0^{\widehat\psi}(N)_X\oplus{\bf T}_0^{\widehat\psi}(N)_Y}{{\bf T}_0^{\widehat\psi}(N)}\simeq\frac{e_f{\bf T}_0^{\widehat\psi}(N)}{e_f{\bf T}_0^{\widehat\psi}(N)\cap{\bf T}_0^{\widehat\psi}(N)}$$
where the first one is an isomorphism of $\mathcal O$-modules and the second one is obtained considering the projection map of ${\bf T}_0^{\widehat\psi}(N)_X\oplus{\bf T}_0^{\widehat\psi}(N)_Y$ onto the first component.\\
Now let $\mathfrak m$ be the maximal ideal of ${\bf T}_0^{\widehat\psi}(N)$ defined in section \ref{tw}. Since $e_f{\bf T}_0^{\widehat\psi}(N)_\mathfrak m=e_f{\bf T}_0^{\widehat\psi}(N)$, by the results of the previous sections $$\widetilde M^{\rm quat}=\mathcal O/(\lambda^n)=\frac{e_f{\bf T}^\psi}{e_f{\bf T}^\psi\cap{\bf T}^\psi}=(\widetilde L^{\rm quat})^2$$ where $\widetilde L^{\rm quat}$ is the quaternionic cohomological congruence module for $f$: $$\widetilde L^{\rm quat}=\frac{e_fH^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}}{e_fH^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}\cap H^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}}.$$
\section{A generalization of the result of Conrad, Diamond and Taylor using Savitt's theorem}\label{gen}
In \cite{CDT}, Conrad, Diamond and Taylor assume that the type $\tau$ is strongly acceptable for $\overline\rho|_{G_\ell}$ and they consider the global Galois deformation problem of being of {\bf type $(S,\tau)$}, where $S$ is a set of rational primes which does not contain $\ell$. Savitt's theorem allows one to suppress the assumption of acceptability in the definition of strong acceptability, and their result, theorem 5.4.2, still follows.
They first suppose that $S=\emptyset$ and prove their result using the improvement on the method of Taylor and Wiles \cite{taywi} found by Diamond \cite{D} and Fujiwara \cite{Fu}; then they prove it for an arbitrary $S$ by induction on the cardinality of $S$, using Ihara's Lemma.\\
In particular, if $S$ is a set of rational primes not dividing $N\Delta'\ell$, we consider a newform $f$ of weight 2, level dividing $SM\ell^2$, with nebentypus $\psi$ (not trivial), supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$ and such that $\overline\rho_f=\overline\rho$ satisfies the conditions (\ref{con1}), (\ref{cond2}), (\ref{end}) and (\ref{c3}) of section \ref{de}. As a general hypothesis, we assume that $f$ occurs with type $\tau$ and minimal level, that is, $\overline\rho_f$ is ramified at every prime in $\Delta'$.\\
We consider deformations of type $(S,\tau)$ of $\overline\rho_f$ unramified outside the level of $f$ and such that ${\rm det}(\rho)=\epsilon\psi$; we will call this deformation problem of type $(S,\tau,\psi)$. Then Savitt's theorem assures that the tangent space of the deformation functor at $\ell$ is still one-dimensional, and so it is possible to go on with the same construction as in \cite{CDT}.
Let $\mathcal R_S^{\rm mod,\psi}$ be the classical type $(S,\tau,\psi)$ universal deformation ring, which pa\-ra\-me\-trizes representations of type $(S,\tau,\psi)$ with residual representation $\overline\rho$, and let ${\bf T}_S^{\rm mod,\psi}$ be the classical Hecke algebra acting on the space of modular forms of type $(S,\tau,\psi)$. If we denote by $\mathcal M_S^{\rm mod}$ the cohomological module defined in \S 5.3 of \cite{CDT}, which is essentially the \lq\lq$\tau$-part\rq\rq of the first cohomology group of a modular curve of level depending on $S$, let $\mathcal M^{\rm mod,\psi}_S$ be the $\psi$-part of $\mathcal M_S^{\rm mod}$. Then the following proposition holds:
\begin{proposition}\label{gcdt}
The map $$\Phi_S^{\rm mod,\psi}:\mathcal R_S^{\rm mod,\psi}\to{\bf T}_S^{\rm mod,\psi}$$ is an isomorphism of complete intersections and $\mathcal M^{\rm mod,\psi}_S$ is a free ${\bf T}_S^{\rm mod,\psi}$-module of rank 2.
\end{proposition}
\noindent In particular we observe that:
\begin{lemma}
There is an isomorphism of ${\bf T}^\psi$-modules between $H^1({\bf X}_1(N),\mathcal O)^{\widehat\psi}_\mathfrak m$ and $\mathcal M^{{\rm mod},\psi}_\emptyset$, where ${\bf X}_1(N)$ is the Shimura curve associated to $V_1(N)=\prod_{p\not|N\ell}R_p^\times\times\prod_{p|N}K_p^1(N)\times(1+u_\ell R_\ell)$.
\end{lemma}
\proof
We observe that if $f$ occurs with type $\tau$ and minimal level, then ${\mathcal R^\psi}\simeq\mathcal R^{{\rm mod},\psi}_\emptyset$. By theorem \ref{goal} and proposition \ref{gcdt}, there is an isomorphism between the Hecke algebras ${\bf T}^\psi\simeq{\bf T}^{{\rm mod},\psi}_\emptyset$, thus ${\mathcal M^\psi}\simeq \mathcal M^{{\rm mod},\psi}_\emptyset$ as ${\bf T}^\psi$-modules.\qed
\noindent We will describe some consequences of this result.
\section{Congruence ideals}\label{con}
In this section we generalize the results about congruence ideals of Terracini \cite{Lea}, considering modular forms with nontrivial nebentypus.\\
Let $\Delta_1$ be a set of primes not containing $\ell$. By an abuse of notation, we shall sometimes denote by $\Delta_1$ also the product of the primes in this set.\\
As before, let $f$ be a newform in $S_2(\Gamma_0(N\Delta_1\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$; as a general hypothesis we assume that the residual representation $\overline\rho$ associated to $f$ occurs with type $\tau$ and minimal level. \\
We observe that if $\overline\rho_\ell$ has the form given at p.~525 of \cite{CDT}, then $f$ satisfies the above hypothesis.\\
We assume that the character $\psi$ satisfies the conditions of section \ref{de}, that $\overline\rho$ is absolutely irreducible and that $\overline\rho_\ell$ has trivial centralizer.\\
Let $\Delta_2$ be a finite set of primes $p$, not dividing $\Delta_1\ell$, such that $p^2\not\equiv 1\ {\rm mod}\ \ell$ and ${\rm tr}(\overline\rho({\rm Frob}_p))^2\equiv\psi(p)(p+1)^2\ {\rm mod}\ \ell$. We let $\mathcal B_{\Delta_2}$ denote the set of newforms $h$ of weight 2, character $\psi$ and level dividing $N\Delta_1\Delta_2\ell$ which are special at $\Delta_1$, supercuspidal of type $\chi$ at $\ell$ and such that $\overline\rho_h=\overline\rho$. We choose an $\ell$-adic ring $\mathcal O$ with residue field $k$, sufficiently large, so that every representation $\rho_h$ for $h\in\mathcal B_{\Delta_2}$ is defined over $\mathcal O$ and $Im(\psi)\subseteq\mathcal O$. For every pair of disjoint subsets $S_1, S_2$ of $\Delta_2$ we denote by $\mathcal R^\psi_{S_1,S_2}$ the universal solution over $\mathcal O$ for the deformation problem of $\overline\rho$ consisting of the deformations $\rho$ satisfying:
\begin{itemize}
\item[a)] $\rho$ is unramified outside $N\Delta_1 S_1S_2\ell$;
\item[b)] if $p|\Delta_1N$ then $\rho(I_p)=\overline\rho(I_p)$;
\item[c)] if $p|S_2$ then $\rho_p$ satisfies the sp-condition;
\item[d)] $\rho_\ell$ is weakly of type $\tau$;
\item[e)] ${\rm det}(\rho)=\epsilon\psi$ where $\epsilon:G_{\bf Q}\to{\bf Z}^\times_\ell$ is the cyclotomic character.
\end{itemize}
Let $\mathcal B_{S_1,S_2}$ be the set of newforms in $\mathcal B_{\Delta_2}$ of level dividing $N\Delta_1S_1S_2\ell$ which are special at $S_2$ and let ${\bf T}^\psi_{S_1,S_2}$ be the sub-$\mathcal O$-algebra of $\prod_{h\in\mathcal B_{S_1,S_2}}\mathcal O$ generated by the elements $\widetilde T_p=(a_p(h))_{h\in\mathcal B_{S_1,S_2}}$ for $p$ not in $\Delta_1\cup S_1\cup S_2\cup\{\ell\}$. Since $\mathcal R^\psi_{S_1,S_2}$ is generated by traces, we know that there exists a surjective homomorphism of $\mathcal O$-algebras $\mathcal R^\psi_{S_1,S_2}\to{\bf T}^\psi_{S_1,S_2}$. Moreover, by the results obtained in section \ref{gen}, we have that $\mathcal R^\psi_{S_1,\emptyset}\to{\bf T}^\psi_{S_1,\emptyset}$ is an isomorphism of complete intersections, for any subset $S_1$ of $\Delta_2$.\\
If $\Delta_1\not=1$ then each ${\bf T}^\psi_{\emptyset,S_2}$ acts on a local component of the cohomology of a suitable Shimura curve, obtained by taking an indefinite quaternion algebra of discriminant $S_2\ell$ or $S_2\ell p$ for a prime $p$ in $\Delta_1$. Therefore, theorem \ref{goal} gives the following:
\begin{corollary}
Suppose that $\Delta_1\not=1$ and that $\mathcal B_{\emptyset,S_2}\not=\emptyset;$ then the map $$\mathcal R^\psi_{\emptyset,S_2}\to{\bf T}^\psi_{\emptyset,S_2}$$ is an isomorphism of complete intersections.
\end{corollary}
If $p\in S_2$ there is a commutative diagram:
\begin{displaymath}
\begin{array}{ccc}
\mathcal R^\psi_{S_1p,S_2/p} & \to & \mathcal R^\psi_{S_1,S_2}\\
\downarrow & & \downarrow\\
{\bf T}^\psi_{S_1p,S_2/p} & \to & {\bf T}^\psi_{S_1,S_2}
\end{array}
\end{displaymath}
where all the arrows are surjections.\\
For every $p|\Delta_2$ the deformation over $\mathcal R^\psi_{\Delta_2,\emptyset}$ restricted to $G_p$ gives maps $$\mathcal R^{\psi}_p=\mathcal O[[X,Y]]/(XY)\to\mathcal R^\psi_{\Delta_2,\emptyset}.$$ The image $x_p$ of $X$ and the ideal $(y_p)$ generated by the image $y_p$ of $Y$ in $\mathcal R^\psi_{\Delta_2,\emptyset}$ do not depend on the choice of the map. By an abuse of notation, we shall call $x_p,y_p$ also the image of $x_p,y_p$ in every quotient of $\mathcal R^\psi_{\Delta_2,\emptyset}$. If $h$ is a form in $\mathcal B_{\Delta_2,\emptyset}$, we denote by $x_p(h),y_p(h)\in\mathcal O$ the images of $x_p,y_p$ by the map $\mathcal R^\psi_{\Delta_2,\emptyset}\to\mathcal O$ corresponding to $\rho_h$.\\
\begin{lemma}
If $h\in\mathcal B_{\Delta_2}$ and $p|\Delta_2$, then:
\begin{itemize}
\item[a)] $x_p(h)=0$ if and only if $h$ is special at $p$;
\item[b)] if $h$ is unramified at $p$ then $(x_p(h))=(a_p(h)^2-\psi(p)(p+1)^2)$;
\item[c)] $y_p(h)=0$ if and only if $h$ is unramified at $p$;
\item[d)] if $h$ is special at $p$, the order at $(\lambda)$ of $y_p(h)$ is the greatest positive integer $n$ such that $\rho_h(I_p)\equiv\sqrt{\psi(p)}\otimes 1\ {\rm mod}\ \lambda^n.$
\end{itemize}
\end{lemma}
\begin{proof}
It is an immediate consequence of the definition of the sp-condition (proof of lemma \ref{versal}). Statement b) follows from the fact that $$a_p(h)={\rm tr}(\rho_h({\rm Frob}_p))$$ and $$\rho_h({\rm Frob}_p)=\left(
\begin{array}
[c]{cc}
\pm p\sqrt{\psi(p)}+x_p(h) & 0\\
0 & p\psi(p)/(\pm p\sqrt{\psi(p)}+x_p(h))
\end{array}
\right).$$ \end{proof}
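\noindent For completeness, here is a sketch of the computation behind statement b), assuming $\ell$ odd and writing $u=\pm p\sqrt{\psi(p)}$ and $x=x_p(h)$ (which lies in the maximal ideal of $\mathcal O_h$, the map from the versal ring being local). From the matrix above, $a_p(h)=(u+x)+p\psi(p)/(u+x)$, hence
$$a_p(h)\mp\sqrt{\psi(p)}(p+1)=x+\frac{p\psi(p)}{u+x}-\frac{p\psi(p)}{u}=x\left(1-\frac{p\psi(p)}{u(u+x)}\right),$$
the sign agreeing with the one chosen in $u$. The factor in parentheses is a unit, being congruent to $1-p\psi(p)/u^2=1-1/p$ modulo the maximal ideal, with $p\not\equiv 1\ {\rm mod}\ \ell$; the other factor $a_p(h)\pm\sqrt{\psi(p)}(p+1)$ is congruent to $\pm 2\sqrt{\psi(p)}(p+1)$, hence is a unit since $p\not\equiv -1\ {\rm mod}\ \ell$. Multiplying the two factors gives $(a_p(h)^2-\psi(p)(p+1)^2)=(x_p(h))$.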
\noindent In particular $(y_p)$ is the kernel of the map $\mathcal R^\psi_{S_1,S_2}\to\mathcal R^\psi_{S_1/p,S_2}$.
\noindent If $h\in\mathcal B_{S_1,S_2}$ let $\theta_{h,S_1,S_2}:{\bf T}^\psi_{S_1,S_2}\to\mathcal O$ be the character corresponding to $h$.
\noindent We consider the congruence ideal of $h$ relatively to $\mathcal B_{S_1,S_2}$: $$\eta_{h,S_1,S_2}=\theta_{h,S_1,S_2}(Ann_{{\bf T}^\psi_{S_1,S_2}}({\rm ker}\ \theta_{h,S_1,S_2})).$$
\noindent It is known that $\eta_{h,S_1,S_2}$ controls congruences between $h$ and linear combinations of forms different from $h$ in $\mathcal B_{S_1,S_2}$.
\begin{theorem}\label{cong}
Suppose $\Delta_1\not=1$ and $\Delta_2$ as above. Then
\begin{itemize}
\item[a)] $\mathcal B_{\emptyset,\Delta_2}\not=\emptyset$;
\item[b)] for every subset $S\subseteq\Delta_2$, the map $\mathcal R^\psi_{S,\Delta_2/S}\to{\bf T}^\psi_{S,\Delta_2/S}$ is an isomorphism of complete intersections;
\item[c)] for every $h\in\mathcal B_{\emptyset,\Delta_2}$, $\eta_{h,S,\Delta_2/S}=(\prod_{p|S}y_p(h))\eta_{h,\emptyset,\Delta_2}.$
\end{itemize}
\end{theorem}
\noindent The proof of this theorem is essentially the same as in \cite{Lea}.\\
If we combine point c) of theorem \ref{cong} with the results in Section 5.5 of \cite{CDT}, we obtain:
\begin{corollary}
If $h\in\mathcal B_{S_1,S_2},$ then
\begin{displaymath}
\eta_{h,\Delta_2,\emptyset}=\prod_{p|\frac{\Delta_2}{S_1S_2}}x_p(h)\prod_{p|S_2}y_p(h)\eta_{h,S_1,S_2}.
\end{displaymath}
\end{corollary}
In particular, from this corollary we deduce the following theorem:
\begin{theorem}
Let $f=\sum a_nq^n$ be a normalized newform in $S_2(\Gamma_0(M\ell^2),\psi)$, supercuspidal of type $\tau=\chi\oplus\chi^\sigma$ at $\ell$, with minimal level and special at the primes in a finite set $\Delta'$, and let $q$ be a prime such that $(q,M\ell^2)=1$ and $q\not\equiv-1\ {\rm mod}\ \ell$. Then there exists $g\in S_2(\Gamma_0(qM\ell^2),\psi)$, supercuspidal of type $\tau$ at $\ell$ and special at every prime $p|\Delta'$, such that $f\equiv g\ {\rm mod}\ \lambda$ if and only if $$a_q^2\equiv\psi(q)(1+q)^2\ {\rm mod}\ \lambda.$$
\end{theorem}
\noindent The problem of removing the hypothesis of minimal level is still open; it could be solved by proving conjecture \ref{noscon}.
\section{Problem: extension of the results to the non-minimal case}
Let $\ell\geq 2$ be a prime number. Let $\Delta'$ be a product of an odd number of primes, different from $\ell$. We put $\Delta=\Delta'\ell$. Let $B$ be the indefinite quaternion algebra over ${\bf Q}$ of discriminant $\Delta$. Let $R$ be a maximal order in $B$.\\
Let $N$ be a positive integer; we observe that in our deformation problem, in section \ref{de}, we have assumed that the representation $\overline\rho$ associated to $f\in S_2(\Gamma_0(N\Delta\ell),\psi)$ occurs with type $\tau$ and minimal level at $N$ (not necessarily at $\Delta'$). Let now $S$ be a finite set of rational primes not dividing $M\ell$, where $M=N\Delta'$; we fix a modular newform $f\in S_2(\Gamma_0(N\Delta\ell S),\psi)$ of weight 2, level $N\Delta\ell S$, supercuspidal of type $\tau$ at $\ell$, special at primes dividing $\Delta'$ and with nebentypus $\psi$. Let $\rho$ be the Galois representation associated to $f$ and let $\overline\rho$ be its reduction modulo $\ell$; we suppose that conditions (\ref{con1}), (\ref{cond2}), (\ref{rara}), (\ref{end}) and (\ref{c3}) hold.
\noindent We denote by $\Delta_1$ the product of the primes $p|\Delta'$ such that $\overline\rho(I_p)\not=1$ and by $\Delta_2$ the product of the primes $p|\Delta'$ such that $\overline\rho(I_p)=1$. As usual we assume that $p^2\not\equiv 1\ {\rm mod}\ \ell$ if $p|\Delta_2$. We say that the representation $\rho$ is of {\bf type $(S,{\rm sp},\tau,\psi)$} if conditions b), c), d), e) of definition \ref{def} hold and
\begin{itemize}
\item[a)] $\rho$ is unramified outside $M\ell S$.
\end{itemize}
\noindent This is a deformation condition. Let $\mathcal R^\psi_S$ be the universal deformation ring which parametrizes representations of type $(S,{\rm sp},\tau,\psi)$ with residual representation $\overline\rho$ and let ${\bf T}^\psi_S$ be the Hecke algebra acting on the space of modular forms of type $(S,{\rm sp},\tau,\psi)$.\\
Since the dimensions of the Selmer groups do not satisfy the control conditions, it is not possible to construct a Taylor-Wiles system by considering a deformation problem of type $(S,{\rm sp},\tau,\psi)$ with $S\not=\emptyset$, as in section \ref{tw}; a still open problem is to prove theorem \ref{goal} for $\mathcal R^\psi_S$, ${\bf T}^\psi_S$ and ${\mathcal M^\psi}_S$. If $S=\emptyset$ we recover our theorem \ref{goal}. A possible approach to this problem is to use, as in the classical case, the results of De Smit, Rubin, Schoof \cite{DRS} and Diamond \cite{Dia}, by induction on the cardinality of $S$. If we make the inductive hypothesis that the result holds for $S$, we have to verify that it holds for $S'=S\cup\{q\}$, where $q$ is a prime number not dividing $M\ell S$. Following the literature, to prove the inductive step the principal ingredients are:
\begin{itemize}
\item a duality result about $\mathcal M^\psi_S$ (to appear), showing the existence of a perfect pairing $\mathcal M^\psi_S\times \mathcal M^\psi_S\to\mathcal O$ which induces an isomorphism $\mathcal M^\psi_S\to{\rm Hom}_\mathcal O(\mathcal M^\psi_S,\mathcal O)$ of ${\bf T}^\psi_S$-modules;
\item conjecture \ref{noscon}, saying that the natural ${\bf T}^\psi_{S'}$-linear injection $\left(\mathcal M^\psi_S\right)^2\hookrightarrow \mathcal M^\psi_{S'}$ remains injective when tensored with $k$.
\end{itemize}
For simplicity, we shall assume that $q^2\not\equiv 1\ {\rm mod}\ \ell$ for every $q\in S$.
\noindent Then, as observed in the previous section, there is an isomorphism: $$\frac{\mathcal R^\psi_{S'}}{(y_q)}\ \widetilde\to\ \mathcal R^\psi_{S}$$ that induces an isomorphism on the Hecke algebras: $$\frac{{\bf T}^\psi_{S'}}{(y_q)_{{\bf T}^\psi_{S'}}}\ \widetilde\to\ {\bf T}^\psi_{S}$$
where $(y_q)_{{\bf T}^\psi_{S'}}$ is the image of $(y_q)$ in ${\bf T}^\psi_{S'}$; but, for a general $S$, we don't have any information about $\mathcal M^\psi_S=H^1({\bf X}_1(NS),\mathcal O)_{\mathfrak m_S}^{\widehat\psi}$ as a ${\bf T}_S^{\widehat\psi}$-module, where $\mathfrak m_S$ is the inverse image of $\mathfrak m$ with respect to the natural map ${\bf T}_S^{\widehat\psi}\to {\bf T}_\emptyset^{\widehat\psi}$.
\end{document} |
\begin{document}
\title{Dynamical Entanglement}
\author{Gilad Gour}
\email{[email protected]}
\affiliation{Department of Mathematics and Statistics, University of Calgary, AB,
Canada T2N 1N4}
\affiliation{Institute for Quantum Science and Technology, University of Calgary,
AB, Canada T2N 1N4}
\author{Carlo Maria Scandolo}
\email{[email protected]}
\affiliation{Department of Mathematics and Statistics, University of Calgary, AB,
Canada T2N 1N4}
\affiliation{Institute for Quantum Science and Technology, University of Calgary,
AB, Canada T2N 1N4}
\begin{abstract}
Unlike the entanglement of quantum states, very little is known about
the entanglement of bipartite channels, called dynamical entanglement.
Here we work with the partial transpose of a superchannel, and use
it to define computable measures of dynamical entanglement, such as
the negativity. We show that a version of it, the max-logarithmic
negativity, represents the exact asymptotic dynamical entanglement
cost. We discover a family of dynamical entanglement measures that
provide necessary and sufficient conditions for bipartite channel
simulation under local operations and classical communication and
under operations with positive partial transpose.
\end{abstract}
\maketitle
\paragraph{Introduction.}
Quantum entanglement \citep{Plenio-review,Review-entanglement} is
universally regarded as the most important quantum phenomenon, signaling
the definitive departure from classical physics \citep{Schrodinger}.
Its importance ranges across different areas of physics, from quantum
thermodynamics \citep{Bocchieri,Seth-Lloyd,Lubkin,Gemmer-Otte-Mahler,Canonical-typicality,Popescu-Short-Winter,Mahler-book,Huber1,Huber2,delRio,Zurek},
to quantum field theory \citep{Ryu1,Ryu2,Witten} and condensed matter
\citep{Entanglement-many-body,Area-law,Condensed-matter}. In quantum
information it is a resource in many protocols that cannot be implemented
in classical theory, such as quantum teleportation \citep{Teleportation},
dense coding \citep{Dense-coding}, and quantum key distribution \citep{Ekert}.
An even more crucial aspect of physics is that all systems evolve.
This is described by quantum channels \citep{Nielsen2010,Wilde,Watrous}.
Given the importance of entanglement, a natural question is how physical
evolution interacts with it. For example, one can wonder how much
entanglement a given evolution creates or consumes.
To this end, in this letter, which is a concise presentation of the
most significant results of our previous work \citep{Gour-Scandolo},
we push entanglement out of its boundaries to the next level: from
quantum states (static entanglement) to quantum channels (dynamical
entanglement), filling an important gap in the literature (an independent
work in this respect is Ref.~\citep{Wilde-entanglement}). Preliminary
work was done in Refs.~\citep{Bennett-bipartite,Berta-cost,Bennett-converse,Pirandola-LOCC,Wilde-cost,Wilde-PPT-Das1,WW18},
but here we study the topic in utmost generality, using \emph{resource
theories} \citep{Quantum-resource-1,Quantum-resource-2,Resource-knowledge,Resource-currencies,Gour-single-shot,Regula2017,Multiresource,Gour-review,Adesso-resource,Single-shot-new}.
With them, the idea of entanglement as a resource can be made precise.
Resource theories have recently attracted considerable attention \citep{Gour-review},
producing plenty of important results in quantum information \citep{Plenio-review,Review-entanglement,delRio,Lostaglio-thermo,Review-coherence,Veitch_2014,Magic-states}.
Resource theories are particularly meaningful whenever there is a
restriction on the set of quantum operations that can be performed,
usually coming from the physical constraints of a task an agent is
trying to do \citep{Gour-review}.
Looking closely at the entanglement protocols mentioned above \citep{Teleportation,Dense-coding},
one notices that a state is converted into a particular channel \citep{Devetak-Winter,Resource-calculus}.
Thus, the need of a framework that goes beyond the conversion between
static entangled resources is built in the very notion of entanglement
as a resource. In other terms, we want to treat static and dynamical
resources on the same grounds. We do so by phrasing entanglement theory
as a resource theory of\emph{ }quantum processes \citep{Gour-review,Resource-channels-1,Resource-channels-2,Gour-Winter}.
In this setting, the generic resource is a bipartite channel \citep{Shannon-bipartite,Bipartite},
instead of a bipartite state.
In this letter, we start from the simulation of bipartite channels
with local operations and classical communication (LOCC) \citep{LOCC1,LOCC2,Lo-Popescu},
and we derive a family of convex dynamical entanglement measures that
provide necessary and sufficient conditions for the LOCC-simulation
of channels.
The key tool for the remainder of the letter is a generalization of
partial transpose \citep{PPT-Peres,PPT-Horodecki}. This allows us
to define superchannels with positive partial transpose (PPT) \citep{Leung},
which constitute the largest set of superoperations to manipulate
dynamical entanglement, also encompassing the standard entanglement
manipulations involving LOCC. In this setting, we define measures
of dynamical entanglement that can be computed efficiently with semidefinite
programs (SDPs). Specifically, one of them, the max-logarithmic negativity,
quantifies the amount of static entanglement needed to simulate a
channel using PPT superchannels.
Finally, with the same generalization of the partial transpose, we
discover bound dynamical entanglement, whereby it is not possible
to produce entanglement out of a class of channels---PPT channels
\citep{PPT1,PPT2}---that generalize PPT states \citep{PPT-Peres,PPT-Horodecki}.
\begin{notation*}
Physical systems are denoted by capital letters (e.g., $A$) with
$AB$ meaning $A\otimes B$. Working on quantum channels, it is convenient
to associate two subsystems $A_{0}$ and $A_{1}$ with every system
$A$, referring, respectively, to the input and output of the resource.
In the case of static resources, we take $A_{0}$ to be one-dimensional.
A channel from $A_{0}$ to $A_{1}$ is indicated with a calligraphic
letter $\mathcal{N}_{A}:=\mathcal{N}_{A_{0}\rightarrow A_{1}}$. Superchannels
are denoted by capital Greek letters (e.g., $\Theta$), and the action
of superchannels on channels by square brackets. Thus $\Theta_{A\rightarrow B}\left[\mathcal{N}_{A}\right]$
indicates the action of the superchannel $\Theta$ on the channel
$\mathcal{N}_{A}$.
\end{notation*}
\paragraph{LOCC simulation of bipartite channels.}
To manipulate dynamical resources, one needs quantum superchannels
\citep{Chiribella2008,Gour2018}, which are linear maps sending quantum
channels to quantum channels in a complete sense, i.e., even when
tensored with the identity superchannel. This means that if $\mathcal{N}_{RA}$
is a quantum channel, $\Theta_{A\rightarrow B}\left[\mathcal{N}_{RA}\right]$
is still a quantum channel, for any $R$. Superchannels can be all
realized concretely with a preprocessing channel and a postprocessing
channel, connected by a memory system \citep{Chiribella2008,Gour2018}.
Specifically, an LOCC superchannel, used in LOCC simulation, consists
of LOCC pre- and post-processing, and is represented in Fig.~\ref{fig:LOCC-1}.
\begin{figure}
\caption{\label{fig:LOCC-1}An LOCC superchannel, consisting of LOCC pre- and postprocessing of the input channel.}
\end{figure}
These superchannels are relevant when one is concerned with channel
simulation in bipartite communication-type scenarios where only classical
communication is allowed between the parties \citep{LOCC1,LOCC2}
(e.g., in teleportation \citep{Teleportation}).
Recall that with one maximally entangled state of two qubits (also known as
an ebit), thanks to quantum teleportation \citep{Teleportation},
we can simulate a qubit noiseless channel from Alice to Bob using
an LOCC scheme, and vice versa \citep{Devetak-Winter,Resource-calculus}.
Therefore one ebit---a static resource---is equivalent to a dynamical
one: a qubit channel. With a pair of such channels at hand, from Alice
to Bob and vice versa, we can LOCC-implement all bipartite channels
between the two parties when they both have qubit systems. This means
that such a pair of channels is the maximal resource. This is illustrated
in Fig.~\ref{fig:swap}(a).
\begin{figure}
\caption{\label{fig:swap}(a) A pair of noiseless qubit channels, one from Alice to Bob and one from Bob to Alice, is a maximal resource, allowing the LOCC implementation of every bipartite channel between qubit systems. (b) The swap operation is another maximal resource, equivalent to 2 ebits.}
\end{figure}
In Fig.~\ref{fig:swap}(b) we show that in the same situation the
swap operation is another maximal resource, equivalent to 2 ebits.
In entanglement theory, a function $f$ is a \emph{measure of dynamical
entanglement} if $f\left(\Theta\left[\mathcal{N}_{AB}\right]\right)\leq f\left(\mathcal{N}_{AB}\right)$,
for every LOCC superchannel $\Theta$. It is conventional to assume
that $f$ vanishes on all separable channels \citep{SEP,Rains-SEP,PPT1},
which are regarded as the free resources in the theory of dynamical
entanglement; but this is not essential.
The very definition of a measure of dynamical entanglement indicates
that $f$ gives us a necessary condition for the simulation of channel
$\mathcal{M}$ starting from channel $\mathcal{N}$ and using an LOCC
superchannel $\Theta$. Indeed, if such a superchannel exists, namely
$\mathcal{M}=\Theta\left[\mathcal{N}\right]$, then $f\left(\mathcal{M}\right)\leq f\left(\mathcal{N}\right)$.
However, here we construct a family of convex measures of dynamical
entanglement that also give us a sufficient condition for LOCC simulation.
For any bipartite channels $\mathcal{P}$ and $\mathcal{N}$, define
\begin{equation}
E_{\mathcal{P}}\left(\mathcal{N}\right)=\sup_{\Theta\textrm{ LOCC}}\mathrm{Tr}\left[J^{\mathcal{P}}J^{\Theta\left[\mathcal{N}\right]}\right],\label{eq:complete family}
\end{equation}
where $J$ denotes the Choi matrix of the channel in the superscript,
and $\Theta$ is a generic LOCC superchannel. Note that these functions
need not vanish on separable channels. It is possible to show that
each function $E_{\mathcal{P}}$, with $\mathcal{P}$ ranging over
all bipartite channels, can be computed using a conic linear program
\citep[subsection 3 C]{Gour-Scandolo}.
\begin{thm}
In the theory of dynamical entanglement, a channel $\mathcal{N}$
can be LOCC-converted into a channel $\mathcal{M}$ if and only if
$E_{\mathcal{P}}\left(\mathcal{N}\right)\geq E_{\mathcal{P}}\left(\mathcal{M}\right)$
for \emph{every} bipartite channel $\mathcal{P}$.
\end{thm}
The proof is in Ref.~\citep[subsection 3 C]{Gour-Scandolo}. Since
we need to consider \emph{all} bipartite channels $\mathcal{P}$,
this family of measures of dynamical entanglement is not so practical
to work with. Unfortunately, one cannot expect to find a finite family
of such monotones, as shown in Ref.~\citep{Gour-infinite}.
Given two channels $\mathcal{N}$ and $\mathcal{M}$, to determine
if the former can be LOCC-converted into the latter, we can alternatively
compute their conversion distance, defined following similar ideas
to Ref.~\citep{Tomamichel}:
\begin{equation}
d_{\mathrm{LOCC}}\left(\mathcal{N}\to\mathcal{M}\right)=\frac{1}{2}\inf_{\Theta\textrm{ LOCC}}\left\Vert \Theta\left[\mathcal{N}\right]-\mathcal{M}\right\Vert _{\diamond}.\label{eq:conversion distance}
\end{equation}
If this distance is zero, we can convert $\mathcal{N}$ into $\mathcal{M}$
using a superchannel in the topological closure of LOCC superchannels
\citep{Chitambar2014}. Again, this distance can be calculated using
a conic linear program \citep[subsection 3 D]{Gour-Scandolo}, thanks
to the results in Refs.~\citep{Watrous-diamond,Gour-Winter}.
\paragraph{PPT superchannels.}
In entanglement theory, one of the most practical tools to determine
whether a state is entangled is the partial transpose \citep{PPT-Peres,PPT-Horodecki}.
One defines PPT states as the bipartite states $\rho_{AB}$ such that
$\mathcal{T}_{B}\left(\rho_{AB}\right)=\rho_{AB}^{T_{B}}$ is still
a valid state, where $\mathcal{T}$ denotes the transpose map \citep{PPT-Peres,PPT-Horodecki}.
Recall, however, that the set of PPT states is larger than the set
of separable states, due to the existence of bound entangled states
\citep{Bound-entanglement}. One then defines a PPT channel to be
a bipartite channel $\mathcal{N}_{AB}$ such that, applying the transpose
map on Bob's input and output, we get another valid channel $\mathcal{N}_{AB}^{\Gamma}:=\mathcal{T}_{B_{1}}\circ\mathcal{N}_{AB}\circ\mathcal{T}_{B_{0}}$
\citep{PPT1,PPT2}. Note that the set of PPT channels is larger than
the set of LOCC channels. It is not hard to show that PPT channels
are also completely PPT preserving, for they preserve PPT states even
when tensored with the identity channel \citep{PPT1,PPT2}. Finally,
the Choi matrix of a PPT channel $\mathcal{N}_{AB}$ is such that
$\left(J_{AB}^{\mathcal{N}}\right)^{T_{B}}\geq0$. PPT states and
operations can be regarded as free; therefore, anything that is \emph{not}
PPT---which we call NPT---will be a resource. For this reason, this
resource theory is called the resource theory of NPT entanglement.
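These partial-transpose criteria are easy to check numerically. The following
minimal Python sketch (our own illustration, using only NumPy; the function
names are ours) applies a partial transpose to a matrix on a tensor product
of subsystems and tests positivity: applied to a density matrix it detects
NPT states, and applied to the Choi matrix $J_{AB}^{\mathcal{N}}$ of a
bipartite channel, transposing the $B_{0}$ and $B_{1}$ factors, it tests
whether the channel is PPT.
\begin{verbatim}
import numpy as np

def partial_transpose(M, dims, sys):
    """Partial transpose of a matrix M acting on a tensor product of
    subsystems with dimensions `dims` (a list), transposing the
    subsystems whose positions are listed in `sys`."""
    n = len(dims)
    T = M.reshape(list(dims) + list(dims))  # axes: row index of each factor, then column index
    perm = list(range(2 * n))
    for s in sys:                           # swap row and column axis of factor s
        perm[s], perm[n + s] = perm[n + s], perm[s]
    d = int(np.prod(dims))
    return T.transpose(perm).reshape(d, d)

def is_ppt(M, dims, sys, tol=1e-9):
    """True if the partial transpose of M (a state or a Choi matrix)
    is positive semidefinite."""
    return np.linalg.eigvalsh(partial_transpose(M, dims, sys)).min() >= -tol

# A two-qubit maximally entangled state is NPT:
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())
print(is_ppt(rho, [2, 2], sys=[1]))  # False: the smallest eigenvalue is -1/2
\end{verbatim}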
As it often happens in resource theories, one considers a larger set
of operations to get upper and lower bounds for the relevant figures
of merit, especially when the interesting resource theory is mathematically
hard to study, such as the LOCC theory \citep{NP-hard1,NP-hard2}.
This is precisely why we study NPT entanglement.
Here we generalize partial transpose, defining the transpose supermap
$\Upsilon$ as $\Upsilon\left[\mathcal{N}_{A}\right]=\mathcal{T}_{A_{1}}\circ\mathcal{N}_{A}\circ\mathcal{T}_{A_{0}}$.
Note that the Choi matrix of $\Upsilon\left[\mathcal{N}_{A}\right]$
is the transpose of the Choi matrix of $\mathcal{N}_{A}$. In this
way, PPT channels can be characterized in a similar way to PPT states:
PPT channels are those bipartite channels such that $\Upsilon_{B}\left[\mathcal{N}_{AB}\right]$
is still a valid channel. Now we iterate the previous construction
to define PPT superchannels.
\begin{defn}
\label{def:PPT}A superchannel $\Theta_{AB\rightarrow A'B'}$ is \emph{PPT}
if $\Theta_{AB\rightarrow A'B'}^{\Gamma}:=\Upsilon_{B'}\circ\Theta_{AB\rightarrow A'B'}\circ\Upsilon_{B}$
is still a valid superchannel.
\end{defn}
These superchannels enjoy some remarkable properties.
\begin{lem}
The following are equivalent:
\begin{enumerate}
\item $\Theta_{AB\rightarrow A'B'}$ is a PPT superchannel.
\item $\Theta_{AB\rightarrow A'B'}$ is completely PPT preserving.\label{enu:complete}
\item $\left(\mathbf{J}_{ABA'B'}^{\Theta}\right)^{T_{BB'}}\geq0$, where
$\mathbf{J}_{ABA'B'}^{\Theta}$ is the Choi matrix of the superchannel
$\Theta$ \citep{Circuit-architecture,Hierarchy-combs,Perinotti1,Perinotti2,Gour2018}.\label{enu:PPT Choi}
\end{enumerate}
\end{lem}
A proof of this result can be found in Ref.~\citep[subsection 5 A]{Gour-Scandolo}.
Property~\ref{enu:complete} means that PPT superchannels preserve
PPT channels in a complete sense. Property~\ref{enu:PPT Choi} tells
us that PPT superchannels are the same objects that appeared in Ref.~\citep{Leung}.
Despite the fairly simple condition defining PPT superchannels at
the level of Choi matrices, we do not know if all of them can be realized
with PPT pre- and postprocessing. When this happens, we call them
\emph{restricted} PPT superchannels. It is not hard to show that restricted
PPT superchannels are indeed PPT superchannels in the sense of definition~\ref{def:PPT}.
Instead, we conjecture that the converse is not true, so we are really
considering a larger set of superchannels. This is one of the main
differences from a related work by Wang and Wilde \citep{WW18}: there
the authors study only restricted PPT superchannels, and they do not
consider bipartite channels, but only one-way channels from Alice
to Bob (or vice versa).
Our approach brings a lot of mathematical simplifications. For instance,
if we replace LOCC with PPT in Eqs.~\eqref{eq:complete family} and
\eqref{eq:conversion distance}, the NPT entanglement measures and
the conversion distance become computable efficiently with SDPs (see
Ref.~\citep[subsections 5 B and 5 C]{Gour-Scandolo}). However, note
that this family of NPT entanglement monotones will not provide a
sufficient condition for the convertibility under \emph{LOCC} superchannels.
\paragraph{New measures of dynamical entanglement.}
Since PPT channels contain LOCC channels, PPT superchannels contain
LOCC ones. Thus, measures of NPT dynamical entanglement (i.e., monotonic
under PPT superchannels) are also measures of LOCC dynamical entanglement
(i.e., monotonic under LOCC superchannels). As seen above, working
with PPT superchannels is mathematically simpler. For this reason,
focusing on the PPT-simulation of channels we obtain measures of LOCC
dynamical entanglement that are easily computable.
The first example in this respect is the \emph{negativity} \citep{VW2002},
defined for states as $N\left(\rho_{AB}\right)=\frac{\left\Vert \mathcal{T}_{B}\left(\rho_{AB}\right)\right\Vert _{1}-1}{2}$.
The generalization to bipartite channels is straightforward: replace
the trace norm with the diamond norm, and the transpose map $\mathcal{T}_{B}$
with the transpose supermap $\Upsilon_{B}$.
\begin{equation}
N\left(\mathcal{N}_{AB}\right)=\frac{\left\Vert \Upsilon_{B}\left[\mathcal{N}_{AB}\right]\right\Vert _{\diamond}-1}{2}.
\end{equation}
Contextually, the logarithmic negativity is defined as
\begin{equation}
LN\left(\mathcal{N}_{AB}\right)=\log_{2}\left\Vert \Upsilon_{B}\left[\mathcal{N}_{AB}\right]\right\Vert _{\diamond}.
\end{equation}
We prove that these are measures of dynamical entanglement that can
be computed efficiently with an SDP (cf.\ Ref.~\citep[subsection 5 C]{Gour-Scandolo}).
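To illustrate what such a computation looks like in practice, here is a
minimal sketch (our own, not the code of Ref.~\citep{Gour-Scandolo}) based
on the standard semidefinite-programming characterization of the diamond
norm due to Watrous, written with NumPy and CVXPY; we assume a CVXPY version
providing the \texttt{partial\_trace} atom, and the function name
\texttt{diamond\_norm} is ours. Feeding it the partially transposed Choi
matrix of a bipartite channel, with the Choi matrix ordered as input systems
tensor output systems, returns
$\left\Vert \Upsilon_{B}\left[\mathcal{N}_{AB}\right]\right\Vert _{\diamond}$,
from which $N$ and $LN$ follow.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def diamond_norm(J, d_in, d_out):
    """Diamond norm of a Hermiticity-preserving map with Choi matrix J,
    ordered as (input x output), via Watrous's SDP:
        minimize  (||Tr_out Y0||_inf + ||Tr_out Y1||_inf) / 2
        subject to [[Y0, -J], [-J^dag, Y1]] >= 0."""
    n = d_in * d_out
    Y0 = cp.Variable((n, n), hermitian=True)
    Y1 = cp.Variable((n, n), hermitian=True)
    block = cp.bmat([[Y0, -J], [-J.conj().T, Y1]])
    tr_out = lambda Y: cp.partial_trace(Y, [d_in, d_out], axis=1)
    obj = 0.5 * (cp.sigma_max(tr_out(Y0)) + cp.sigma_max(tr_out(Y1)))
    return cp.Problem(cp.Minimize(obj), [block >> 0]).solve()

# For a bipartite channel N_{AB}: take its Choi matrix on (A0 B0) x (A1 B1),
# partially transpose the B0 and B1 factors (e.g. with the partial_transpose
# helper above), and then
#   LN(N_{AB}) = log2(diamond_norm(J_pt, dA0 * dB0, dA1 * dB1)).
\end{verbatim}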
Now we introduce a new measure of NPT dynamical entanglement, called
max-logarithmic negativity (MLN) (cf.\ \citep{WW19}). It is a generalization
of the notion of $\kappa$-entanglement introduced in \citep{WW18}.
The MLN is defined as
\begin{align}
& LN_{\max}\left(\mathcal{N}_{AB}\right)\nonumber \\
& :=\log_{2}\inf_{P_{AB}}\left\{ \max\left\{ \left\Vert P_{A_{0}B_{0}}\right\Vert _{\infty},\left\Vert P_{A_{0}B_{0}}^{T_{B_{0}}}\right\Vert _{\infty}\right\} \right\} ,\label{eq:MLN}
\end{align}
where $P_{AB}$ is a matrix subject to the constraints $-P_{AB}^{T_{B}}\leq\left(J_{AB}^{\mathcal{N}}\right)^{T_{B}}\leq P_{AB}^{T_{B}}$
and $P_{AB}\geq0$. Here $P_{A_{0}B_{0}}$ denotes $\mathrm{Tr}_{A_{1}B_{1}}\left[P_{AB}\right]$.
We can show that the MLN is an additive measure of dynamical entanglement,
computable with an SDP (see Ref.~\citep[subsection 5 C]{Gour-Scandolo}).
Despite its rather complicated definition, the MLN has a nice operational
interpretation, which generalizes the results in Refs.~\citep{PPTcost1,WW18}.
Consider the task of simulating $n$ parallel copies of the bipartite
channel $\mathcal{N}_{AB}$ out of the maximally entangled state $\left|\phi_{m}^{+}\right\rangle _{A_{0}B_{0}}$
of Schmidt rank $m$ using PPT superchannels (which, in this case,
take the form of PPT channels). Recall that $\left|\phi_{m}^{+}\right\rangle _{A_{0}B_{0}}$
is, up to a scaling factor 2, the maximal resource in the theory of
entanglement for bipartite channels, as we noted above. We require
that the conversion of $\left|\phi_{m}^{+}\right\rangle _{A_{0}B_{0}}$
into $\mathcal{N}_{AB}^{\otimes n}$ be exact for every $n$. We want to study
the asymptotic entanglement cost of preparing $\mathcal{N}_{AB}$
according to this PPT protocol, viz.\ the minimum number of ebits (the
logarithm of the Schmidt rank of the maximally entangled states) consumed per copy of $\mathcal{N}_{AB}$
produced when $n\rightarrow+\infty$. Remarkably, we show that this
cost is given precisely by the MLN. Clearly, the use of PPT superchannels
is not so physically motivated, but it provides a simple lower bound
to the more meaningful calculation of the entanglement cost under
LOCC superchannels \citep{PPTcost1,WW18}.
\begin{thm}
\label{thm:The-exact-asymptotic}The exact asymptotic NPT cost of
a bipartite channel $\mathcal{N}_{AB}$ is \textup{$LN_{\max}\left(\mathcal{N}_{AB}\right)$.}
\end{thm}
A proof of this result can be found in Ref.~\citep[subsection 5 D]{Gour-Scandolo}.
We can prove that the MLN is an upper bound for another entanglement
measure, the NPT entanglement generation power $E_{g}^{\mathrm{PPT}}$
\citep{Bennett-bipartite,Resource-channels-1,Resource-channels-2,Gour-Winter}
(cf.\ Appendix~\ref{sec:Bound-between-NPT}):
\[
E_{g}^{\mathrm{PPT}}\left(\mathcal{N}_{AB}\right)\leq LN_{\max}\left(\mathcal{N}_{AB}\right).
\]
\paragraph{Bound entanglement for bipartite channels.}
Dual to the calculation of the cost of a bipartite channel, we have
the distillation of ebits out of a dynamical resource. It is known
that for some entangled static resources this is not possible: it
is the phenomenon of bound entanglement \citep{Bound-entanglement},
which occurs whenever we have a PPT entangled state.
Is it possible to distill ebits out of $n$ copies of a PPT channel $\mathcal{N}_{AB}$?
Now, when we have $n$ copies of a channel, the \emph{timing} in which
they are available becomes relevant: dynamical resources have a natural
temporal ordering between input and output. Indeed, unlike states,
they can also be composed in nonparallel ways, e.g., in sequence.
Therefore, when manipulating dynamical resources, we also need to
specify \emph{when} and \emph{how} they can be used (see also Refs.~\citep{Resource-channels-2,Gour-Scandolo}).
This opens up the possibility of using adaptive schemes \citep{Pirandola-LOCC,Kaur2017,Wilde-cost,Wilde-entanglement}:
if we have $n$ resources $\mathcal{N}_{1},\dots,\mathcal{N}_{n}$
that are available, respectively, at times $t_{1}\leq t_{2}\leq\dots\leq t_{n}$,
the most general channel that can be simulated with these resources
is given by a free $n$-comb \citep{Gutoski,Circuit-architecture,Hierarchy-combs,Gutoski2,Resource-theories,Gutoski3,Resource-channels-1,Resource-channels-2},
depicted in Fig.~\ref{fig:A-PPT--comb} in the case of a PPT comb.
\begin{figure}
\caption{A PPT $n$-comb.}
\label{fig:A-PPT--comb}
\end{figure}
Specializing this idea to the case of dynamical entanglement, this
amounts to considering an LOCC $n$-comb, where all the $n+1$ channels
$\mathcal{E}_{1}$, \ldots , $\mathcal{E}_{n+1}$ in Fig.~\ref{fig:A-PPT--comb}
are LOCC. Then we plug the $n$ copies of $\mathcal{N}_{AB}$ into
its $n$ slots.
Instead of LOCC combs, we consider PPT combs, which are defined as
the combs for which the composition of channels $\mathcal{E}_{n+1}\circ\dots\circ\mathcal{E}_{1}$
in Fig.~\ref{fig:A-PPT--comb} is a PPT channel. This is equivalent
to requiring that the Choi matrix of the $n$-comb \citep{Circuit-architecture,Hierarchy-combs,Gour-Scandolo}
is the Choi matrix of a PPT channel. PPT combs will give us an upper
bound on the number of ebits generated in an LOCC procedure. However, as before, we do not know whether this requires each channel $\mathcal{E}_{1}$, \ldots , $\mathcal{E}_{n+1}$ to be PPT; we conjecture that it does not.
By the mathematical properties of PPT combs and PPT channels, we can
show that no ebits can be distilled out of PPT channels even with
the most general adaptive PPT scheme (see Ref.~\citep[section 7]{Gour-Scandolo}).
Since this is an upper bound for LOCC adaptive schemes, we conclude
that no entanglement distillation from PPT channels is possible under
LOCC protocols either.
\begin{thm}
It is impossible to distill ebits from PPT channels under any adaptive scheme in any resource theory of dynamical entanglement.
\end{thm}
As a result, we find an example of a bound entangled POVM.
\begin{example}
Recall that a POVM can be viewed as a quantum-to-classical channel.
Let $\beta_{A_{0}B_{0}}$ be any PPT bound entangled state of a bipartite
system $A_{0}B_{0}$, and consider the binary POVM $\left\{ \beta_{A_{0}B_{0}},I_{A_{0}B_{0}}-\beta_{A_{0}B_{0}}\right\} $.
Since both $\beta_{A_{0}B_{0}}$ and $I_{A_{0}B_{0}}-\beta_{A_{0}B_{0}}$
have positive partial transpose, it follows that this POVM is a PPT
channel. As such, it cannot produce distillable entanglement. This
means it is a bound entangled POVM.
\end{example}
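For readers who want to check such a statement numerically, the following is a small illustration (our own addition): it takes the well-known $3\otimes3$ ``tiles'' state as an assumed concrete instance of a PPT bound entangled state $\beta_{A_{0}B_{0}}$ and verifies that both POVM elements $\beta_{A_{0}B_{0}}$ and $I_{A_{0}B_{0}}-\beta_{A_{0}B_{0}}$ have positive partial transpose.
\begin{verbatim}
import numpy as np

def ket(i, d=3):
    v = np.zeros(d)
    v[i] = 1.0
    return v

s = 1 / np.sqrt(2)
# "Tiles" unextendible product basis on C^3 (x) C^3 (assumed example).
upb = [np.kron(ket(0), s * (ket(0) - ket(1))),
       np.kron(ket(2), s * (ket(1) - ket(2))),
       np.kron(s * (ket(0) - ket(1)), ket(2)),
       np.kron(s * (ket(1) - ket(2)), ket(0)),
       np.kron(ket(0) + ket(1) + ket(2), ket(0) + ket(1) + ket(2)) / 3.0]

proj = sum(np.outer(v, v) for v in upb)     # projector onto the UPB
beta = (np.eye(9) - proj) / 4.0             # PPT bound entangled state

def pt_B(M, dA=3, dB=3):
    # Partial transpose on the second (B) factor.
    return M.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA*dB, dA*dB)

for E in (beta, np.eye(9) - beta):          # the two POVM elements
    print(np.linalg.eigvalsh(E).min() >= -1e-9,        # a valid effect
          np.linalg.eigvalsh(pt_B(E)).min() >= -1e-9)  # positive partial transpose
\end{verbatim}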
\paragraph{Conclusions and outlook.}
In this letter, we addressed dynamical entanglement as a resource
theory of quantum processes. This is a major step in understanding
the role of entanglement in quantum theory, for it allows us to treat
static and dynamical entanglement on the same grounds \citep{Devetak-Winter,Resource-calculus},
which is something that had been missing since the inception of the
very first quantum information protocols \citep{Teleportation,Dense-coding}.
We found a set of measures of dynamical entanglement yielding necessary
and sufficient conditions for LOCC channel simulation. Then we generalized
the key tool of partial transpose, defining PPT superchannels. Working
with them, we obtained measures of dynamical entanglement that can
be computed with SDPs. This remarkable fact, which did not appear
in previous works on PPT superchannels (e.g., Ref.~\citep{WW18}),
is a consequence of our more relaxed definition of PPT superchannels
(definition~\ref{def:PPT}). This is not the only novelty with respect
to Ref.~\citep{WW18}: we were able to generalize their notion of
$\kappa$-entanglement with the max-logarithmic negativity (Eq.~\eqref{eq:MLN}).
Finally, we showed that we can distill no ebits under any adaptive
strategies out of PPT channels. This extends the known result for
PPT states \citep{Bound-entanglement}, and led us to the discovery
of bound entangled POVMs.
Clearly, our work just scratches the surface of a whole unexplored
world, opening the way for a thorough study of the new area of dynamical
entanglement. On a grand scale, our findings lead naturally to several
directions that can be explored anew. Think, e.g., of multipartite
entanglement \citep{Review-entanglement}, or of the whole zoo of
entanglement measures \citep{Plenio-review,Review-entanglement},
to be extended to channels. Moreover, our results for LOCC superchannels
can be translated to local operations and shared randomness (LOSR)
superchannels \citep{Rosset-distributed,Wolfe2020,Schmid2020,LOSR-nonlocality},
which are a strict subset of LOCC ones. LOSR superchannels were proved
essential for the formulation of resource theories for non-locality
\citep{LOSR-nonlocality}: they define the relevant notion of dynamical
entanglement in Bell and common-cause scenarios. This intriguing research
direction deserves a comprehensive study in the future.
Finally, providing us with a more general angle, research findings
in the resource theory of dynamical entanglement can also help us
gain new insights into one of the major open problems of quantum information
theory: the existence of NPT bound entangled states \citep{NPPT1,NPPT2,NPPT3}.
\begin{acknowledgments}
G.\ G.\ would like to thank Francesco Buscemi, Eric Chitambar, Mark
Wilde, and Andreas Winter for many useful discussions related to the
topic of this paper. The authors acknowledge support from the Natural
Sciences and Engineering Research Council of Canada (NSERC) through
grant RGPIN-2020-03938, from the Pacific Institute for the Mathematical
Sciences (PIMS), and from a Faculty of Science Grand Challenge award
at the University of Calgary.
\end{acknowledgments}
\onecolumngrid
\appendix
\section{\label{sec:Bound-between-NPT}Bound between NPT entanglement generation
power and max-logarithmic negativity}
Define the \emph{NPT entanglement generation power} \citep{Bennett-bipartite,Resource-channels-1,Resource-channels-2,Gour-Winter}
as the maximum amount of NPT entanglement produced out of PPT states,
namely as
\begin{equation}
E_{g}^{\mathrm{PPT}}\left(\mathcal{N}_{AB}\right)=\max_{\rho_{A_{0}'B_{0}'A_{0}B_{0}}\textrm{ PPT}}E\left(\mathcal{N}_{AB}\left(\rho_{A_{0}'B_{0}'A_{0}B_{0}}\right)\right),\label{eq:e-generation power}
\end{equation}
where $E$ is a measure of NPT static entanglement.
In~\citep{Gour-Scandolo}, we defined the \emph{exact} NPT entanglement
single-shot distillation out of a bipartite channel $\mathcal{N}_{AB}$
as
\[
\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}^{\left(1\right)}\left(\mathcal{N}_{AB}\right)=\log_{2}\max\left\{ m:\mathcal{N}_{AB}\overset{\mathrm{PPT}}{\longrightarrow}\phi_{m}^{+}\right\} ,
\]
where $\phi_{m}^{+}$ is the maximally entangled state with Schmidt
rank $m$. The distillable NPT entanglement in the asymptotic limit
is then
\[
\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}\left(\mathcal{N}_{AB}\right)=\limsup_{n}\frac{1}{n}\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}^{\left(1\right)}\left(\mathcal{N}_{AB}^{\otimes n}\right),
\]
where we are using a parallel scheme for distillation. If $E$ in
Eq.~\eqref{eq:e-generation power} is taken to be the exact asymptotic
PPT distillation of static entanglement \citep{Resource-channels-1},
we can link the NPT entanglement generation power to the MLN.
\begin{lem}
For any bipartite channel $\mathcal{N}_{AB}$, we have
\[
E_{g}^{\mathrm{PPT}}\left(\mathcal{N}_{AB}\right)\leq\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}\left(\mathcal{N}_{AB}\right),
\]
where $E_{g}^{\mathrm{PPT}}$ is defined using the exact asymptotic
PPT distillation of static entanglement.
\end{lem}
\begin{proof}
The proof follows similar lines to the proof of theorem~4 in~\citep{Resource-channels-1}.
Let $R=E_{g}^{\mathrm{PPT}}\left(\mathcal{N}_{AB}\right)$, and let
$\omega_{A_{0}'B_{0}'A_{0}B_{0}}$ be the optimal PPT state achieving
$R$, i.e., $E\left(\mathcal{N}_{AB}\left(\omega_{A_{0}'B_{0}'A_{0}B_{0}}\right)\right)=R$.
Now let us construct a distillation protocol for $\phi_{m}^{+}$.
To this end, consider the channel preparing $\omega_{A_{0}'B_{0}'A_{0}B_{0}}^{\otimes n}$
out of nothing (namely out of the 1-dimensional system). This is a
PPT channel, which we will use as a pre-processing to construct a
(restricted) PPT superchannel. Defining $\sigma_{A_{0}'B_{0}'A_{1}B_{1}}:=\mathcal{N}_{AB}\left(\omega_{A_{0}'B_{0}'A_{0}B_{0}}\right)$,
then $\mathcal{N}_{AB}^{\otimes n}\left(\omega_{A_{0}'B_{0}'A_{0}B_{0}}^{\otimes n}\right)=\sigma_{A_{0}'B_{0}'A_{1}B_{1}}^{\otimes n}.$
Since $E_{g}^{\mathrm{PPT}}$ is defined as the exact asymptotic distillation
rate, we know that there exists a PPT post-processing such that $\limsup_{n}\frac{1}{n}\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}^{\left(1\right)}\left(\sigma_{A_{0}'B_{0}'A_{1}B_{1}}^{\otimes n}\right)=R$,
where $\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}^{\left(1\right)}$
is the analogous definition for states rather than channels. With
this in mind, we can use this post-processing to obtain some $\phi_{m}^{+}$;
thus we prove our statement.
\end{proof}
By a Carnot-like argument \citep{Thermodynamic-entanglement}, one
can prove that $\mathrm{DISTILL}_{\mathrm{PPT},\mathrm{exact}}\left(\mathcal{N}_{AB}\right)\leq\mathrm{COST}_{\mathrm{PPT},\mathrm{exact}}\left(\mathcal{N}_{AB}\right)$,
where $\mathrm{COST}_{\mathrm{PPT},\mathrm{exact}}\left(\mathcal{N}_{AB}\right)$
denotes the exact asymptotic NPT cost of $\mathcal{N}_{AB}$. By theorem~4
in the main letter, the exact asymptotic NPT cost of $\mathcal{N}_{AB}$
is the MLN, whence we conclude that
\[
E_{g}^{\mathrm{PPT}}\left(\mathcal{N}_{AB}\right)\leq LN_{\max}\left(\mathcal{N}_{AB}\right).
\]
\end{document}
\begin{document}
\title{A Note on the Hausdorff Number of Compact Topological Spaces }
\begin{abstract}
The notion of Hausdorff number of a topological space is first introduced in \cite{bonan}, with
the main objective of using this notion to obtain generalizations of some known bounds for cardinality of topological spaces.
Here we consider this notion from a topological point of view and examine interrelations of the Hausdorff number with compactness.
\\
\\
2000 Math.Subj.Classification: 54D10, 54D30
\\
\\
Keywords: n-Hausdorff spaces, compact spaces, topologies on finite spaces
\end{abstract}
\section{Introduction}
\begin{defn} \cite{bonan}
Let $X$ be a topological space.
Let $
H(X)=\min\{\tau:\forall A\subset X: |A|\geq \tau, \forall a\in A, \textrm{there exist open neighborhoods }
U_a\ni a \textrm{ such that } \bigcap_{a\in A} U_a=\emptyset\}.
$
This number (finite or infinite) is called the \emph{Hausdorff number} of $X$.
\end{defn}
It is clear that if $X$ contains at least two points then $X$ is Hausdorff if and only if $H(X)=2$.
\begin{defn}
If $n\in\mathbb{N}$, $n\geq2$, then $X$ is called \emph{$n$-Hausdorff} if $H(X)\leqslant n$.
\end{defn}
In \cite{bonan} it was pointed out that the property of being $n$-Hausdorff is independent from the $T_1$ property.
Also, it is clear that if $X$ is $n$-Hausdorff then it is $(n+1)$-Hausdorff for any $n\in\mathbb{N}, \ n\geq2$, hence all Hausdorff spaces are trivially $n$-Hausdorff for any $n\geq 2$.
For every finite $n$, examples of $(n+1)$-Hausdorff spaces that are not $n$-Hausdorff are also considered in \cite{bonan}.
Since the Hausdorff property nicely correlates with some basic covering properties, such as compactness and Lindel\"ofness \cite{Eng}, here we consider a natural question arising in this setting.
It is well-known that every compact Hausdorff space is regular and normal, i.e.\ the combination of compactness with the Hausdorff separation axiom leads to stronger separation properties.
It is also well known that any Hausdorff regular Lindel\"of space is normal \cite{Eng}.
For any $n\geq 2$, we shall give examples of $(n+1)$-Hausdorff spaces that are not $n$-Hausdorff and in addition are $T_1$, compact and first countable.
We shall also give an example of a compact $T_1$ first countable $\omega$-Hausdorff space which is not $n$-Hausdorff for any $n\geq2$.
Our examples will be neither regular nor normal, thus showing that even in ``nice'' topological spaces,
\hdf{n}ness is something much weaker than the Hausdorff property.
Finally, some open questions in the above vein will be posed.
The examples that we present are different from those given in \cite{bonan} and, in our point of view, much more transparent.
Also, all the examples in \cite{bonan} are countable, while most of the examples we give here are uncountable.
\subsection{Notation and Assumptions}
Throughout this exposition, a space $X$ will be considered compact if from any open cover of $X$ one can choose a finite subcover.
\begin{notation}
We use:
\begin{itemize}
\item $\langle a, b \rangle$ to denote the ordered pair,
\item $(a,b)$ to denote the open interval on the real line,
\item $k, l, m, n$ to denote positive integers.
\end{itemize}
\end{notation}
\section{The Finite Case}
Most mathematicians might believe that study of topologies on finite sets is a trivial pursuit, but this is far from true.
Questions concerning topologies on spaces containing finitely many points can be very difficult, and have been considered as early as 1966, \cite{Stong}.
There are still many open questions, such as how many (non-homeomorphic) topologies there are on a set of $n$ elements, and how many $T_1$ topologies there are on an $n$-point set.
So far, the best partial answers to these questions give approximations, for example \cite{benoumhani}, \cite{Tenner}, \cite{Dorsett} and \cite{Berrone}.
We consider the following examples and statements in view of the fact that all Hausdorff topologies on a finite set are discrete, \cite{workbook}.
\begin{xmpl}
A simple example of a compact 3-Hausdorff space which is not Hausdorff.
\end{xmpl}
On the underlying set $X=\{x,y,z\}$, we introduce the topology
$$
\tau=\{\emptyset, \{x\},\{y,z\},\{x,y,z\}\}.
$$
It is easy to check that this is indeed a topology.
It is also \hdf{3}, because the only subset of cardinality $3$ is $\{x,y,z\}$, and $\{x\},\ \{y,z\}$ form a disjoint open cover.
It is also compact, since it is finite.
Hence, in compact, and even finite, spaces the property of being \hdf{3} does not coincide with Hausdorffness.
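For readers who want to experiment with such finite examples, the following brute-force check (our own addition, not part of the original argument) verifies that $\tau$ is a topology, that $y$ and $z$ cannot be separated, and that the three points can:
\begin{verbatim}
from itertools import product

X = frozenset({'x', 'y', 'z'})
tau = {frozenset(), frozenset({'x'}), frozenset({'y', 'z'}), X}

def is_topology(X, tau):
    # For a finite space it suffices to check binary unions and intersections.
    if frozenset() not in tau or X not in tau:
        return False
    return all(U | V in tau and U & V in tau for U in tau for V in tau)

def separable(points, tau):
    # Can we pick one open neighbourhood per point with empty intersection?
    nbhds = [[U for U in tau if p in U] for p in points]
    for choice in product(*nbhds):
        inter = X
        for U in choice:
            inter = inter & U
        if not inter:
            return True
    return False

print(is_topology(X, tau))              # True
print(separable(['y', 'z'], tau))       # False: the space is not Hausdorff
print(separable(['x', 'y', 'z'], tau))  # True: the space is 3-Hausdorff
\end{verbatim}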
This leads us to the following interesting theorem:
\begin{thm}
For each $n\in\mathbb{N}$, $n\geq 3$,
there exists an \hdf{n} topology on the $n$-point set which is not discrete.
\end{thm}
\begin{proof}
Let $X$ be a set with $n$ points.
Pick $x_0\in X$, and take the topology $\{\emptyset, \{x_0\}, X\setminus\{x_0\}, X \}$.
\end{proof}
\begin{xmpl}\label{4point}
There is a \hdf{3} space of cardinality $4$, which is not discrete.
\end{xmpl}
Let $X=\{w,x,y,z\}$, and let $F$ be the filter generated by $\{y\}$.
Then the topology
$$
\tau=F\cup\{\emptyset, \{x\},\{z\}, \{x,z\}\}
$$
is \hdf{3} and not discrete.
On the basis of this example we can establish the following more general result:
\begin{thm}\label{3hdf}
For any $n\geq 3$, there is a \hdf{3} topology on an $n$-point set $X$, which is not discrete.
\end{thm}
\begin{proof}
Let $X$ be a space with $n$ points, and choose two `special' points, $x_0, x_1 \in X$.
Then the basis
$$
\mathcal{B}=\big\{\{x\}:x\in X\setminus \{x_0\}\big\}\cup \big\{\{x_0,x_1\}\big\}
$$
is a basis for a non-discrete \hdf{3} topology on $X$.
Indeed, any 3-element subset of $X$ can be separated via basic open sets.
Let $A\subset X, |A|=3$.
Then either $x_0\in A$ or $x_0 \notin A$.
If $x_0\in A$, the family $\big\{\{x\}:x\in A\setminus\{x_0\}\big\}\cup \big\{\{x_0,x_1\}\big\}$ is the required family of basic open sets with empty intersection.
If $x_0 \notin A$, the family $\big\{\{x\}:x\in A\big\}$ will separate the points of $A$.
\end{proof}
Some recent results on the number of topologies on a finite set with $n$ elements have been obtained in \cite{Berrone}, \cite{Tenner}, \cite{benoumhani}, and \cite{Dorsett}.
In the archive (\url{http://arxiv.org}), one can find even more recent papers (from 2012) that also treat these questions.
Theorem \ref{3hdf} justifies asking similar questions for \hdf{n} topologies.
The following seem interesting questions, though some might end up rather easy:
\begin{oq}
In \cite{markowsky}, several formulae are given which relate the number of $T_0$ topologies on an $n$-point set with the number of topologies on an $n$-point set.
Can similar formulae be found for the number of \hdf{k} topologies on an $n$-point set, where $k\leqslant n$?
\end{oq}
Or, in general:
\begin{oq}
How many distinct \hdf{k} topologies are there on a finite set $X$ of cardinality $n \in \mathbb{N}$, for $k\leqslant n$?
\end{oq}
Moreover:
\begin{oq}
For $k,l<n$, is there a formula relating the number of \hdf{k} topologies on a space of cardinality $n$ to the number of \hdf{l} topologies on the same space?
\end{oq}
\comments{
\begin{oq}
If a space $X$ has finite cardinality $n$, can we find a `limit number' $k<n$ such that, for all $l\leqslant k$, every \hdf{l} topology is discrete? We have a natural `upper bound' in $n$ and a natural `lower bound' in $2$ (since every Hausdorff topology on a finite space is discrete), but Example \ref{4point} shows that we can have a non-discrete \hdf{k} topology for $2<k\lneqq n$.
\end{oq}
}
\section{Main Examples}
Let us now consider the uncountable cases.
The following examples were inspired by the question, discussed in the introduction, of whether any compact \hdf{3} space is Hausdorff (and therefore regular and normal).
\begin{xmpl}\label{xmpl1}
There is a $T_1$, uncountable, \hdf{3} space that is compact first countable but not Hausdorff, regular, or normal.
\end{xmpl}
Define $X=([0,1]\times\{0\})\cup\{\langle\n{2},\n{2}\rangle\}$.
Topologize $X$ as follows:
\begin{itemize}
\item all points on $[0,1]\times\{0\}$ have the Euclidean neighborhoods;
\item the neighborhoods of $\{\langle\n{2},\n{2}\rangle\}$ consist of
$$
U_n\left(\left\langle\n{2},\n{2}\right\rangle\right)=
\left\{\left\langle\n{2},\n{2}\right\rangle\right\}\cup
\left(\left(\left(\n{2}-\n{n},\n{2}+\n{n}\right)\setminus\left\{\n{2}\right\}\right)
\times\left\{0\right\}\right).
$$
\end{itemize}
It is clear that $X$ is compact first countable (since this is a compact space with one additional point added and all points have countable neighbourhood basis).
It is also clear that $X$ is $T_1$, since $[0,1]$ is $T_1$ (even Hausdorff, regular and normal), and
$$
\bigcap_{n\in\mathbb{N}} U_n\left(\left\langle\n{2},\n{2}\right\rangle\right)=
\left\{\left\langle\n{2},\n{2}\right\rangle\right\}.
$$
It is also clear that this space is not Hausdorff, since $\left\langle\n{2},\n{2}\right\rangle$ and $\left\langle\n{2},0\right\rangle$ do not have disjoint neighborhoods.
It is \hdf{3} because if we have any three different points in $X$,
at least two would be in $[0,1]\times\{0\}$, and since the latter is Hausdorff in the Euclidean topology, those two points would have disjoint neighborhoods.
Since $X$ is $T_1$, we again have that $\left\langle\n{2},\n{2}\right\rangle$ and $\left\langle\n{2},0\right\rangle$, being closed, witness the fact that $X$ is neither regular nor normal.
If we modify Example \ref{xmpl1} by defining
$$
U_n\left(\left\langle\n{2},\n{2}\right\rangle\right)=
\left\{\left\langle\n{2},\n{2}\right\rangle\right\}
\cup
\left(
\left(\n{2}-\n{n},\n{2}+\n{n}\right)\times\{0\}
\right),
$$
then this new space has the same properties as before, except it fails to be $T_1$.
Hence even the \hdf{3} property does not entail $T_1$, and we have:
\begin{xmpl}\label{xmpl2}
There exists a \hdf{3} compact uncountable first countable space that is not $T_1$.
\end{xmpl}
The above idea can easily be generalized for any finite $n\geq 2$:
\begin{xmpl}\label{xmpl3}
There is a $T_1$ uncountable compact first countable \hdf{(n+1)} space which is not \hdf{n} (hence not regular or normal).
\end{xmpl}
As an underlying set, we take $X=([0,1]\times\{0\})\cup\left\{\left\langle\n{2},\n{m}\right\rangle:m<n\right\}$, and
topologize $X$ as follows:
\begin{itemize}
\item all points on $[0,1]\times\{0\}$ again have the Euclidean neighborhoods;
\item the neighborhoods of $\{\langle\n{2},\n{m}\rangle\}$ consist of
$$
U_k\left(\left\langle\n{2},\n{m}\right\rangle\right)=
\left\{\left\langle\n{2},\n{m}\right\rangle\right\}\cup
\left(\left(\left(\n{2}-\n{k},\n{2}+\n{k}\right)\setminus \left\{\n{2}\right\}\right)
\times\left\{0\right\}\right).
$$
\end{itemize}
\begin{xmpl}\label{xmpl4}
There is an \hdf{\omega_1} uncountable compact first countable $T_1$ space $X$ which is not $n$-Hausdorff for any finite $n\geq2$, and is not \hdf{\omega}.
\end{xmpl}
We take $X=([0,1]\times\{0\})\cup\left\{\left\langle\n{2},\n{m}\right\rangle:m\in\omega\right\}$, and
topologize $X$ as follows:
\begin{itemize}
\item all points on $[0,1]\times\{0\}$ have neighborhoods which are traces in $X$ of their respective Euclidean neighborhoods in $\mathbb{R}^2$;
\item the neighborhoods of $\{\langle\n{2},\n{m}\rangle\}$ consist of
$$
U_k\left(\left\langle\n{2},\n{m}\right\rangle\right)=
\left\{\left\langle\n{2},\n{m}\right\rangle\right\}\cup
\left(\left(\left(\n{2}-\n{k},\n{2}+\n{k}\right)\setminus \left\{\n{2}\right\}\right)
\times\left\{0\right\}\right).
$$
\end{itemize}
This space is compact since it consists of the union of two compact sets:
of a sequence converging in our topology, and of the closed unit interval with the Euclidean topology,
which is also compact in our topology.
In fact, this space has a separation property which is much stronger than \hdf{\omega_1},
since the only countable subset of it which does not have neighborhoods with empty intersections is the vertical sequence
$\left\{\left\langle\n{2},\n{m}\right\rangle:m\in\omega\right\}$.
Otherwise, if the countable set contains at least one point in $([0,1]\setminus\{\n{2}\})\times \{0\}$,
then it obviously has neighborhoods with empty intersection (since $[0,1]\times\{0\}$ is Hausdorff).
\section{Inheritance of Hausdorff number by subspaces}
It is well-known that the Hausdorff property is inherited by any subspace of a topological space.
It is clear that the same is true for the Hausdorff number.
But some subspaces of \hdf{n} spaces can have a Hausdorff number less than $n$.
If a subspace is a two-point set, it will of course be Hausdorff.
So, it is interesting to make the following observations about some uncountable subspaces of our examples:
\begin{itemize}
\item Example \ref{xmpl1} has an uncountable open compact subspace which is Hausdorff, namely $[0,1]\times\{0\}$;
\item for each $n\in\omega$, Example \ref{xmpl4} has an uncountable open compact subspace which is \hdf{(n+1)} but not \hdf{n}:
for a fixed $n\in\omega$, take the subspace $Y_n=([0,1]\times\{0\})\cup\left\{\left\langle\n{2},\n{m}\right\rangle:m<n\right\}$.
Then $Y_n$ is not \hdf{n}, but it is \hdf{(n+1)} (and is also compact and open).
\end{itemize}
Hence the Hausdorff number of a subspace is less than or equal to that of the space.
This leads us to the following natural question:
\begin{oq}
Given an uncountable \hdf{\tau} space $X$, do we have that for each $\kappa<\tau$, $X$ has an uncountable \hdf{\kappa} subspace?
\end{oq}
\begin{ackn}
The author is grateful to the referee for carefully reading the paper, and for giving helpful suggestions for examples \ref{xmpl3}, \ref{xmpl4}, and for improving the general exposition.
\end{ackn}
\noindent
contact email: [email protected] \\
Mailing Address:\\
Department of Mathematics\\
University of Leicester\\
LE1 7RH\\
Leicester\\
United Kingdom
\end{document}
\begin{document}
\title{Quantum Error Correction using Hypergraph States}
\author{Shashanka Balakuntala$^*$ and Goutam Paul$^\#$}
\affiliation{$^*$Department of Applied Mathematics and Computational Sciences,\\
PSG College of Technology, Coimbatore 641 004, India,\\
Email: [email protected]\\
$^\#$Cryptology and Security Research Unit,\\
R. C. Bose Centre for Cryptology and Security,\\
Indian Statistical Institute, Kolkata 700 108, India,\\
Email: [email protected]
}
\begin{abstract}
Graph states have been used for quantum error correction by Schlingemann et al. [Physical Review A 65.1 (2001): 012308]. Hypergraph states [Physical Review A 87.2 (2013): 022311] are generalizations of graph states and they have been used in quantum algorithms. We demonstrate for the first time how hypergraph states can be used for quantum error correction. We also point out that they are more efficient than graph states in the sense that, to correct an equal number of errors on the same graph topology, suitably defined hypergraph states require fewer gate operations than the corresponding graph states.
\end{abstract}
\maketitle
\newcommand{{\bf Proof : }}{{\bf Proof : }}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{remark}{Remark}
\newtheorem{corollary}{Corollary}
\noindent{\bf Keywords:} Quantum Error Correction, Hypergraph States, Condition for Error Correction, Advantages, Encoding.
\section{\label{sec:intro}Introduction}
Error correction plays an important role in quantum information theory. Controlling operational errors and decoherence is a major challenge faced in the field of quantum computation. Quantum error correction was first proposed by P. W. Shor~\cite{shor1}. Since then there have been a lot of improvements in the field~\cite{robcal,knill,raylaf,dg1}. It is also to be noted that if one uses $n$ qubits to encode 1 qubit of information, one can detect no more than $n/2 - 1$ errors due to no cloning theorem~\cite{wootters}. Quantum error correction is essential, if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements~\cite{shor2,steane1,dg2}.
Schlingemann et al. gave a scheme for the construction of error correcting codes based on a graph~\cite{werner}. The entanglement properties and applications of graph states can be found in~\cite{hein}. Verification of an error correction code is easier through graph states than with the earlier existing methods. Other applications of graph states include secret sharing~\cite{markham}. Hypergraph states were first discussed in~\cite{rossi1,wang1}. In~\cite{rossi2}, the amount of entanglement in the initial state of Grover's algorithm is computed using hypergraph states. In recent years hypergraph states have been attracting a lot of attention: the local unitary symmetries of hypergraphs are discussed in~\cite{lyon}, and their local Pauli stabilizers in~\cite{peters}. In~\cite{wang2}, the relationship between LME states and hypergraphs under local unitary transformations is discussed. Multipartite entanglement in hypergraphs is discussed in~\cite{wang3}. We, for the first time, use hypergraph states for error correction and show their efficiency over graph states. The codes that we discuss here are instances of stabilizer codes.
Consider a scenario where we have a network of CCTV cameras or satellites. This can be easily modelled as a graph, more precisely as a hypergraph. The relationship defined can be as follows: if two (or more) cameras cover the same area (from different angles) then we draw a hyper-edge between them. So now say the cameras' qubits are entangled. If we are using these qubits to transmit a message and, because of a fault in the qubits, there is an error in the transmitted message, this would mean that some of the cameras may be faulty and we need to repair them as soon as possible for smooth transmission of the feed. As the topology of the network follows a hypergraph, we try to give a condition through which we can easily identify the faulty qubits and correct them. By finding the faulty qubits, we get to know which cameras' qubits are faulty, and we can check the corresponding cameras.
The paper is organized as follows. First we discuss the basics of hypergraph states, then we give the general condition for error correction. Next, we talk about the basic construction of the graphs which is required for the error correction. After that, we give the condition for error correction for a hypergraph state and a function to correct the error, followed by a short proof. Then we give the methods of encoding. Finally, we show what advantages hypergraphs have over graph states when using them for error correction.
\section{\label{sec:hg_intro}Hypergraph States}
Here we briefly review the concepts related to a hypergraph. As mentioned in~\cite{rossi1}, we first talk about $k$-uniform hypergraphs. A $k$-uniform hypergraph $\Gamma^k = \{V,E\}$ consists of a non-empty finite set $V$ of vertices and a set $E$ of edges, where each $e \in E$ connects exactly $k$ vertices.
Given a $k$-uniform hypergraph, one can find the $k$-uniform hypergraph state corresponding to the given hypergraph as follows. Assign $\ket{+}$ to each vertex and, whenever vertices $z_1, z_2, \ldots, z_k$ are connected by an edge, perform a multi-qubit controlled-$Z$ operation between the connected qubits. So the final quantum state corresponding to the given $k$-uniform hypergraph is as follows:
\begin{equation*}
\ket{\Gamma^{k}} = \prod_{\{z_1, z_2, \ldots, z_k\} \in E} C^{k}Z_{z_1, z_2, \ldots, z_k} \ket{+}^{\otimes |V|}.
\end{equation*}
where,
\begin{equation*}
C^kZ = \textrm{diag}(\underbrace{1,1,\ldots,1}_{2^k-2},1,-1)
\end{equation*}
Next we discuss general hypergraph states. A hypergraph $\Gamma^{\le n} = \{V,E\}$ is a non-empty set $V$ of $n$ vertices and a set $E$ of edges, where the edges can connect any number of vertices from 1 to $n$. Given a hypergraph, the corresponding state can be written as follows:
\begin{equation*}
\ket{\Gamma^{\le n}} = \prod_{k = 1}^{n}\prod_{\{z_1, z_2, \ldots, z_k\} \in E} C^{k}Z_{z_1, z_2, \ldots, z_k} \ket{+}^{\otimes |V|}.
\end{equation*}
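As a small numerical illustration (our own addition; the three-vertex edge set below is an assumed toy example), a hypergraph state can be built by flipping the sign of every computational basis state whose bits are all $1$ on some hyperedge:
\begin{verbatim}
import numpy as np
from itertools import product

def hypergraph_state(n, edges):
    # |Gamma> = prod_e C^{|e|}Z |+>^{(x) n}, as a length-2^n vector.
    psi = np.ones(2 ** n) / np.sqrt(2 ** n)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        for e in edges:
            if all(bits[v] == 1 for v in e):
                psi[idx] *= -1
    return psi

# Toy example: vertices {0,1,2}, one 3-edge and one ordinary 2-edge.
psi = hypergraph_state(3, [(0, 1, 2), (0, 1)])
print(np.round(psi * np.sqrt(8)))   # the pattern of +1 / -1 amplitudes
\end{verbatim}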
\section{\label{sec:err_corr}Error Correction}
General characteristics of quantum error correction were discussed by E. Knill and R. Laflamme in \cite{knill}. In quantum information, an error correcting code is defined to be an isometry\footnote{An isometry is a transformation that preserves distances.} $v : \mathcal{H} \rightarrow \mathcal{K}$, where $\mathcal{H}$ is the input Hilbert space and $\mathcal{K}$ is the output Hilbert space with $\dim(\mathcal{K}) > \dim(\mathcal{H})$, so that the input density operator $\rho$ is transformed by this coding into $v\rho v^\ast$, a density operator on $\mathcal{K}$. The code's output is then sent through the channel. The channel noise is described by a certain class of errors belonging to a linear subspace $E$ of operators on $\mathcal{K}$. Thus the channel can be represented as a completely positive linear map as follows:
\begin{equation*}
T(\sigma) = \sum_{\alpha}F_\alpha \sigma F_\alpha^\ast ,
\end{equation*}
where $\sigma \in \mathcal{K}$, $F_\alpha \in E$. The isometry $v$ is an error correcting code for $E$ if there is a recovery operator $R$ such that:
\begin{equation}
\label{cond_g}
R(T(v\rho v^\ast)) = \rho , \quad\qquad \forall \rho \in \mathcal{H}.
\end{equation}
Here the recovery operation has to undo the action of the error map $T$ and, after doing so, recover the density operator $\rho$ that was initially sent.
Next we will discuss the basic construction of hypergraph states in order to prepare them for error correction and we will arrive at a condition for error correction using the hypergraph states. This construction is similar to that of~\cite{werner}.
\section{\label{sec:basic_construction}Error Correction Using Hypergraph States}
The following determines the codes that we construct:
\begin{description}
\addtolength{\itemindent}{1.5cm}
\item[$\bullet$] An undirected hypergraph $\Gamma$ with two kinds of vertices: the set $X$ of input vertices and the set $Y$ of output vertices. If the vertices $z_1,z_2,\ldots,z_k \in X \cup Y$ are related then $\Gamma(z_1,z_2,\ldots,z_k) = 1$.
The notation $\Gamma^k$ is for a $k$-uniform hypergraph and $\Gamma^{\le n}$ is for a general hypergraph.
\item[$\bullet$] A finite \textit{abelian group} $G$ with a non-degenerate symmetric \textit{bicharacter}.
\end{description}
In~\cite{werner}, the bicharacter function was $G \times G \rightarrow \mathbb{C}$, here we extend it to $k$-uniform bicharacter function.
\theoremstyle{definition}
\begin{definition}{[$k$-uniform bicharacter function]}
A $k$-uniform bicharacter is a function $\chi^k: G \times G \times \cdots \times G \rightarrow \mathbb{C}$, which takes $k$ input arguments and maps them to a complex number such that $\chi^k(g_1+h,g_2,\ldots,g_k) = \chi^k(g_1,g_2,\ldots,g_k)\chi^k(h,g_2,\ldots,g_k), \forall h$. The non-degeneracy is in the sense that:
\begin{equation*}
\sum_{g = g_1,g_2,\ldots,g_{k-1}}\chi^k(g,g^\prime) = |G|\delta(g^\prime),
\end{equation*}
\end{definition}
where,
\begin{equation*}
\delta(g^\prime) =
\begin{cases}
1, & \mbox{if }g^\prime = 0, \\
0, & \mbox{if }g^\prime \ne 0.
\end{cases}
\end{equation*}
The combined input system is described by the $|X|$-fold tensor product space\footnote{$L^2(G)$ denotes the Hilbert space consisting of all functions $\psi : G \rightarrow \mathbb{C}$, with inner product $\langle\phi,\psi\rangle = \int \overline{\phi}(g) \psi(g) dg$, where $\overline{z}$ is the complex conjugate of $z$.} $\mathcal{H}^{\otimes X} = L^2(G^X)$. A vector in this space is denoted by the notation $g_z, \forall z \in X$. The entire collection of these $g_z$ is denoted as $g^X$. The same holds for the output system too. Now we define the isometry for $k$-uniform hypergraphs.
\begin{equation*}
v_\Gamma^k : L^2(G^X) \rightarrow L^2(G^Y).
\end{equation*}
The scalar product in this space is given by,
\begin{equation*}
(v_\Gamma^k \psi)(g^Y) = \int v_\Gamma^k [g^{X\cup Y}]\psi(g^X) dg^X.
\end{equation*}
The $v_\Gamma$ under the integral denotes the kernel of the operator $v_\Gamma$, which depends on both input and output vertices. Explicit expressions for this kernel were given in~\cite{werner}; for the $k$-uniform hypergraph it is given by:
\begin{equation*}
v_\Gamma^k [g^{X\cup Y}] = |G|^{-1}\prod_{z_1,z_2,\dots,z_k} \chi^k(g_{z_1},g_{z_2},\ldots,g_{z_k})^{\Gamma^k(z_1,z_2,\dots,z_k)}.
\end{equation*}
This concludes the construction of the isometry. We will give the condition for error correction after presenting its more general form. Let us see how these functions generalize to a general hypergraph.
The isometry is defined by:
\begin{equation*}
v_\Gamma^{\le n} : L^2(G^X) \rightarrow L^2(G^Y).
\end{equation*}
Similarly the scalar product here is given by,
\begin{equation*}
(v_\Gamma^{\le n} \psi)(g^Y) = \int v_\Gamma^{\le n} [g^{X\cup Y}]\psi(g^X) dg^X.
\end{equation*}
And the kernel's expression is defined as follows:
\begin{equation*}
v_\Gamma^{\le n} [g^{X\cup Y}] = \prod_{k=1}^{n}\prod_{z_1,z_2,\dots,z_k} \chi^k(g_{z_1},g_{z_2},\ldots,g_{z_k})^{\Gamma^{\le n}(z_1,z_2,\dots,z_k)},
\end{equation*}
where it is normalized by $|G|^{-1}$.
We say a vertex $z \in \mathcal{N}(x)$ if there is an edge which connects $z$ and $x$, which is the same as saying $\Gamma^{\le n}(z,x,\ldots,z^\prime) = 1$.\\
We define another indicator function $f$ as follows:
\begin{equation*}
f(x,y) =
\begin{cases}
1, & \mbox{if }x \in \mathcal{N}(y), \\
0, & \mbox{if }x \notin \mathcal{N}(y).
\end{cases}
\end{equation*}
Now we define a homomorphism which will help us to define the condition for error correction easily. This existed for graph states and now we try to define it for the hypergraphs.
A homomorphism between two subsets $K$ and $L$ of $X \cup Y$ denoted by $H: G^L \rightarrow G^K$ is defined as follows:
\begin{equation*}
H_L^K(g^L) = \left(\sum_{l \in L} g_lf(l,k)\right)_{k \in K} .
\end{equation*}
Using this homomorphism, the necessary and sufficient condition for the error correction can be derived.
\begin{theorem}
Given a finite abelian group $G$ and an undirected hypergraph $\Gamma^{\le n}$, an error configuration $E \subset Y$ is detected by the quantum code $v_\Gamma$ if and only if the system of equations:
\begin{equation}
\label{suf1}
H_{X \cup E}^I g^{X\cup E} = 0 \qquad \textrm{with } I = Y\setminus E
\end{equation}
implies:
\begin{equation}
\label{suf2}
g^X = 0 \quad \textrm{and} \quad H_E^X g^E = 0.
\end{equation}
\end{theorem}
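Before turning to the proof, here is a small brute-force sketch (our own addition; the hypergraph, the error set, and the choice $G=\mathbb{Z}_2$ are illustrative assumptions): it represents $H_L^K$ as a $0/1$ matrix built from $f$ and tests the detection condition by enumerating all of $G^{X\cup E}$.
\begin{verbatim}
import numpy as np
from itertools import product

# Toy hypergraph: X = {0} (input), Y = {1,...,5} (outputs), assumed edges.
edges = [(0, 1, 2), (0, 3, 4), (2, 3, 5)]
X, Y = [0], [1, 2, 3, 4, 5]

def f(u, v):
    # 1 iff u and v are distinct and share a hyperedge.
    return 0 if u == v else int(any(u in e and v in e for e in edges))

def H(rows, cols):
    # Matrix of the homomorphism H_{cols}^{rows}.
    return np.array([[f(c, r) for c in cols] for r in rows])

def detected(E, q=2):
    I = [y for y in Y if y not in E]
    A = H(I, X + E)                       # H_{X u E}^{I}
    B = H(X, E)                           # H_{E}^{X}
    for g in product(range(q), repeat=len(X) + len(E)):
        g = np.array(g)
        if np.all(A @ g % q == 0):        # hypothesis: H_{X u E}^{I} g = 0
            gX, gE = g[:len(X)], g[len(X):]
            if gX.any() or (B @ gE % q).any():
                return False              # conclusion g^X = 0, H_E^X g^E = 0 fails
    return True

print(detected([1, 3]))                   # True for this toy example
\end{verbatim}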
\begin{proof}
To prove this claim, we first see the equivalence of \autoref{cond_g} and the general condition mentioned by \cite{knill}. This was given by Schlingemann et al. in~\cite{werner} which reduces to the following:
\begin{equation}
\label{red}
\begin{aligned}[b]
v_\Gamma^\ast F v_\Gamma[g^X, h^X]
= \int dg^E dg^I dh^E \quad \overline{v_\Gamma[g^X, g^E, g^I]} \\ \times F[g^E,h^E] v_\Gamma[g^X, h^E, g^I]
\end{aligned}
\end{equation}
by choosing $ F = \ket{g^E}\bra{h^E}$, which can be further factorized as:
\begin{equation}
\label{nsc}
w_{[\Gamma, E]}[g^{X \cup E}, h^{X \cup E}] = C(g^E, h^E) \delta(g^X - h^X).
\end{equation}
We follow the proof structure as in~\cite{werner}. We define another function for any two $K$ and $K^\prime$ of $X \cup Y$,
\begin{equation*}
\chi^{\Gamma^{\le n}}(g^K, g^{k^\prime})= \prod_{k = 1}^n\prod_{z = z_1,z_2,\ldots,z_{k-1},z_k} \chi^k(z)^{\Gamma^k(z)},
\end{equation*}
where $\{z_1,z_2,\ldots,z_{k-1}\} \in K$ and $z_k \in K^{\prime}$ and the product is over all possible $k-1$ elementary subsets in $K$ with its combination of 1 element in $K^\prime$. So the expression inside the integral of \autoref{red} can be expressed as:
\begin{equation}
\label{red1}
|G|^X \quad \frac{\chi^{\Gamma^{\le n}}(h^{X \cup E}, h^{X \cup E})}{\chi^{\Gamma^{\le n}}(g^{X \cup E}, g^{X \cup E})} \frac{\chi^{\Gamma^{\le n}}(h^{X \cup E}, g^{I})}{\chi^{\Gamma^{\le n}}(g^{X \cup E}, g^{I})},
\end{equation}
where $I = Y \setminus E$. To carry out the integral with respect to one variable $g_i, i \in I$, the $g_i$ dependent part of $\chi^{\Gamma^{\le n}}(h^{X \cup E}, g^{I})$ is as follows:
\begin{equation*}
\prod_{k = 1}^{n}\prod_{\{z, i\}, z \in X\cup E} \chi^k(h_z,g_i)^{\Gamma^{\le n}(z,i)},
\end{equation*}
where $z=\{z_1,z_2,\ldots,z_{k-1}\}$ and $h_z=\{h_{z_1},\ldots,h_{z_{k-1}}\}$. Similarly the same for the factor $\chi^{\Gamma^k}(g^{X \cup E}, g^{I})$. Thus the $g_i$ dependent part of \autoref{red1} is:
\begin{equation*}
\prod_{k = 1}^{n}\prod_{\{z, i\}, z \in X\cup E} \chi^k(g_z,g_i)^{-\Gamma^{\le n}(z,i)}\chi^k(h_z,g_i)^{\Gamma^{\le n}(z,i)}.
\end{equation*}
Using the characteristics of the function $\chi^k$ we can reduce the equation above to a single factor of the form $\chi^k(g^\prime,g_i)$, where
\begin{equation*}
\begin{aligned}
g^\prime = \sum_{\{j_1,\ldots,j_{k-1}\} \in X \cup E} \Gamma^k_{i,j_1,\ldots,j_{k-1}}(h_{j_1} - g_{j_1}),\ldots,\\\sum_{\{j_1,\ldots,j_{k-1}\} \in X \cup E} \Gamma^k_{i,j_1,\ldots,j_{k-1}}(h_{j_{k-1}} - g_{j_{k-1}}).
\end{aligned}
\end{equation*}
where the sum is taken over all possible $k-1$ elementary sets in $X \cup E$, and
\begin{equation*}
\Gamma^k_{i_1,i_2,\ldots,i_k} =
\begin{cases}
1 & \textrm{if } i_1,i_2,\ldots,i_k \textrm{ are related},\\
0 & \textrm{otherwise}.
\end{cases}
\end{equation*}
Now the integral over $g_i$ gives us $\delta(g^\prime)$, which is same as:
\begin{equation}
\label{red2}
|G|^X \quad \frac{\chi^{\Gamma^{\le n}}(h^{X \cup E}, h^{X \cup E})}{\chi^{\Gamma^{\le n}}(g^{X \cup E}, g^{X \cup E})} \delta(H_{X \cup E}^I(h^{X \cup E} - g^{X \cup E})).
\end{equation}
The above equation can be reduced to the error correcting condition for an isometric function given in~\autoref{nsc}. One factor of the expression is independent of the input vertices $X$, while the other factor depends on them. Note also that the expression must vanish whenever $g^X \ne h^X$, which forces the $\delta$-function to vanish for such arguments. So we can conclude that $d^X \ne 0$ must imply $H_{X \cup E}^I\left(d^{X\cup E}\right) \ne 0$.
Now assume that $H_{X \cup E}^I\left(d^{X\cup E}\right) = 0$ implies $d^X = 0$. The $\delta$-function can then be reduced as:
\begin{equation*}
\begin{aligned}
\delta(H_{X \cup E}^I(h^{X \cup E} - g^{X \cup E}))
= \delta(H_{E}^I(h^{E} - g^{E}))\delta(h^{X} - g^{X}),
\end{aligned}
\end{equation*}
as two expressions are equal when $h^{X} = g^{X}$, and for $h^{X} \ne g^{X}$ both vanishes. Now for the bicharacter quotient in \autoref{red2}, we can easily simplify as follows:
\begin{equation*}
\chi^{\Gamma^{\le n}}(h^{X}, h^{X})\chi^{\Gamma^{\le n}}(h^{E}, h^{X})\chi^{\Gamma^{\le n}}(h^{E}, h^{E}).
\end{equation*}
Similar decomposition holds for the denominator too. So the $(X,X)$ and $(E,E)$ factors cancel in numerator and denominator. So the remaining $(X,E)$ factors can be expressed as follows:
\begin{equation*}
\frac{\chi^{\Gamma^{\le n}}(h^{E}, h^{X})}{\chi^{\Gamma^{\le n}}(g^{E}, h^{X})} = \prod_{k = 1}^{n}\prod_{j \in X} \chi^k(h_{j^\prime},h_j),
\end{equation*}
where,
\begin{equation*}
\begin{aligned}
h_{j^\prime} = \{\sum_{\{j_1,\ldots,j_{k-1}\} \in X \cup E} \Gamma^k_{j_1,\ldots,j_{k-1},j}(h_{j_1} - g_{j_1}),\ldots,\\\sum_{\{j_1,\ldots,j_{k-1}\} \in X \cup E} \Gamma^k_{j_1,\ldots,j_{k-1}}(h_{j_{k-1},j} - g_{j_{k-1}})\}.
\end{aligned}
\end{equation*}
For the above equation to be in the desired form $C(g^E,h^E)$, this must be independent of all the $X$ variables. But this is satisfied, and the equation becomes independent of $h_j$ if and only if $j^\prime = 0$. Hence we must have $H_E^I(h^E - g^E) = 0$ implies $H_E^X(h^E - g^E) = 0$.
This concludes the proof.
\end{proof}
\subsection{Physical Encoding using Hypergraph States}
In this section we give two different ways to encode the error correcting code based on hypergraph states. The first one follows the secret sharing scheme using graph states as per~\cite{markham}. The encoding is given by:
\begin{equation*}
\ket{0} \rightarrow \ket{0}_D\otimes \Gamma^{\le n}, \qquad \quad \ket{1} \rightarrow \ket{1}_D\otimes \overline{X}\Gamma^{\le n},
\end{equation*}
where $\overline{X} = X_1 \otimes X_2 \otimes \cdots \otimes X_{n}$. The subscript $D$ is to convey that the encoding is not using the physical bit in the process. Note that here one qubit is encoded into $n+1$ qubits.
The second way to encode the error correcting code uses the fact that a set of stabilizers can be defined on hypergraph states, as follows:
\begin{equation*}
K_i = X_i \otimes \prod_{k=1}^{n} \prod_{\{v_1,\ldots,v_{k-1}\} \in N(i)} C^{k-1}Z_{v_1,\ldots,v_{k-1}}.
\end{equation*}
Based on the above definition, one can follow~\cite{cleve} for encoding and decoding using stabilizer formalism.
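As a quick sanity check (ours, with an assumed three-vertex hypergraph), one can verify numerically that each $K_i$ indeed leaves the corresponding hypergraph state invariant:
\begin{verbatim}
import numpy as np
from itertools import product

n, edges = 3, [(0, 1, 2), (0, 1)]        # assumed toy hypergraph

def phase(edge):
    # Diagonal of the multi-controlled Z acting on `edge`, over all n qubits.
    return np.array([-1 if all(b[v] for v in edge) else 1
                     for b in product([0, 1], repeat=n)])

psi = np.ones(2 ** n) / np.sqrt(2 ** n)  # |+>^{(x) n}
for e in edges:
    psi = phase(e) * psi                 # the hypergraph state

Xg, I2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)
for i in range(n):
    K = np.eye(1)
    for q in range(n):
        K = np.kron(K, Xg if q == i else I2)      # the X_i factor
    for e in edges:
        if i in e:                                # C^{|e|-1}Z on e \ {i}
            K = K @ np.diag(phase(tuple(v for v in e if v != i)))
    print(i, np.allclose(K @ psi, psi))           # True for every i
\end{verbatim}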
\section{Advantages Over Graph States}
We will now argue that the number of gate operations required to prepare a hypergraph state is less than that required to prepare the corresponding graph state (for correcting an equal number of errors) on the same topology.
First we see how many standard Controlled-Z gates are required to implement one multi-qubit Controlled-Z gate. As per~\cite{marklov}, an $n$-qubit Controlled-Z gate requires $2n$ standard Controlled-Z gates.
To replicate any effects of the hypergraph states in the graph state, we need to form a complete graph with the vertices that form a hyper-edge in the corresponding hypergraph. So for a hyper-edge with $n$ nodes, we need a complete subgraph on $n$ nodes requiring $n(n-1)/2$ edges.
We can easily see that for $n \geq 6$, the number of Controlled-Z gates required for a complete graph is greater than the number of Controlled-Z gates required to implement the multi-qubit Controlled-Z for the corresponding hypergraph state. For example, for $n = 6$ the complete graph requires 15 Controlled-Z gates, whereas a $C^6Z$ can be implemented using just 12 Controlled-Z gates.
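The crossover is immediate to tabulate (a two-line check of our own, assuming the $2n$ count quoted above from~\cite{marklov}):
\begin{verbatim}
# n, CZ gates for the complete graph on n vertices, CZ gates for one C^nZ
for n in range(3, 10):
    print(n, n * (n - 1) // 2, 2 * n)
\end{verbatim}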
\subsection{A Detailed Example Using 6-uniform Hypergraph}
\begin{figure}
\caption{Hypergraph state for 15-fold way correcting 4 arbitrary errors}
\label{fig:example}
\end{figure}
Now, we will give an example using hypergraphs. The hypergraph state for the code can be seen in~\autoref{fig:example}.
For this graph,
\begin{equation*}
X = \{0\}, \qquad \qquad Y = \{1,2,\ldots,15\}.
\end{equation*}
$\Gamma$ for this hypergraph is a 6-index array (an adjacency tensor), with zero entries everywhere except the following.
\begin{equation*}
\begin{aligned}
\Gamma(1,2,3,4,5,6) = 1,\\
\Gamma(4,5,6,7,8,9) = 1,\\
\Gamma(7,8,9,10,11,12) = 1,\\
\Gamma(10,11,12,13,14,15) = 1,\\
\Gamma(1,2,3,13,14,15) = 1.
\end{aligned}
\end{equation*}
It is also to be noted that the adjacency tensor of the hypergraph $\Gamma$ is symmetric under permutations of its indices.
The corresponding hypergraph state is given by:
\begin{equation*}
\ket{\Gamma^6} = \prod_{\{z_1,z_2,z_3,z_4,z_5,z_6\}\in E} C^6Z \ket{+}^{\otimes15}.
\end{equation*}
Now we show that, given a finite abelian group $G$, the hypergraph can detect up to four errors. There are six different error configurations, and we discuss them one by one. The vertices coloured red are the errors.
\begin{figure}
\caption{All errors occurring in same edge.}
\label{fig:e1}
\end{figure}
The first error configuration can be seen in \autoref{fig:e1} and the corresponding set of equations becomes as follows.
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Vertex $x$ & Equations \\
\hline
5 and 6 & $d_0 + d_1 + d_2 + d_3 + d_4 = 0$ \\
7, 8 and 9 & $d_0 + d_4 = 0$ \\
10, 11 and 12 & $d_0 = 0$\\
13, 14 and 15 & $d_0 + d_1 + d_2 + d_3 = 0$\\
\hline
\end{tabular}
\end{center}
Here $d_0 = 0$ which implies $d_1 + d_2 + d_3 + d_4 = 0$. Thus the condition in \autoref{suf2} is satisfied. So we can say this error configuration is detected.
\begin{figure}
\caption{Errors occurring alternatively.\label{fig:e2}
\label{fig:e2}
\end{figure}
For the error configuration \autoref{fig:e2}, the set of equations becomes:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Vertex $x$ & Equations \\
\hline
2 & $d_0 + d_1 + d_3 + d_5 = 0$ \\
4 and 6 & $d_0 + d_1 + d_3 + d_5 + d_7 = 0$ \\
8 and 9 & $d_0 + d_5 + d_7 = 0$\\
10, 11 and 12 & $d_0 + d_7 = 0$\\
13, 14 and 15 & $d_0 + d_1 + d_3 = 0$\\
\hline
\end{tabular}
\end{center}
This set of equations clearly implies that $d_0 = d_5 = d_7 = 0$ and $d_1 + d_3 =0$. Thus the error configuration is detected.
\begin{figure}
\caption{Two errors in same hyper-edge but not exactly opposite to each other.\label{fig:e3}
\label{fig:e3}
\end{figure}
For the error configuration \autoref{fig:e3}, the set of equations becomes:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Vertex $x$ & Equations \\
\hline
3,4, 5 and 6 & $d_0 + d_1 + d_2 = 0$ \\
7, 8, 9 and 12 & $d_0 + d_{10} + d_{11} = 0$ \\
13, 14 and 15 & $d_0 + d_1 + d_2 + d_{10} + d_{11} = 0$\\
\hline
\end{tabular}
\end{center}
Here $d_0 = 0$ and $d_1 + d_2 + d_{10} + d_{11} = 0$. Thus the condition in \autoref{suf2} is satisfied. So this error configuration is detected.
\begin{figure}
\caption{Two errors in same hyper-edge.\label{fig:e4}
\label{fig:e4}
\end{figure}
For the error configuration \autoref{fig:e4} the set of equations becomes:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Vertex $x$ & Equations \\
\hline
3 & $d_0 + d_1 + d_2 = 0$ \\
4, 5 and 6 & $d_0 + d_1 + d_2 + d_9 = 0$ \\
7, 8, 11 and 12 & $d_0 + d_9 + d_{10} = 0$\\
13, 14 and 15 & $d_0 + d_1 + d_2 + d_{10} = 0$\\
\hline
\end{tabular}
\end{center}
Here this set of equations implies that $d_0 = d_9 = d_{10} = 0$ and $d_1 + d_2 = 0$. Thus the error configuration is detected.
\begin{figure}
\caption{Three errors in different edges.\label{fig:e5}
\label{fig:e5}
\end{figure}
For the error configuration \autoref{fig:e5} the set of equations becomes:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Vertex $x$ & Equations \\
\hline
2, 3, 13, 14 and 15 & $d_0 + d_1 = 0$ \\
4, 5 and 6 & $d_0 + d_1 + d_7 + d_8 + d_9 = 0$ \\
10, 11 and 12 & $d_0 + d_7 + d_8 + d_9 = 0$\\
\hline
\end{tabular}
\end{center}
For the above set of equations, $d_0 = d_1 = 0$ and $d_7 + d_8 + d_9 = 0$. So this error configuration is also detected.
\begin{figure}
\caption{Errors occurring alternatively with distance 2 vertices apart.\label{fig:e6}
\label{fig:e6}
\end{figure}
For the error configuration \autoref{fig:e6}, the set of equations becomes:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Vertex $x$ & Equations \\
\hline
1 and 3 & $d_0 + d_2 + d_5 = 0$ \\
4 and 6 & $d_0 + d_2 + d_5 + d_8 = 0$ \\
7 and 9 & $d_0 + d_5 + d_8 + d_{11} = 0$\\
10 and 12 & $d_0 + d_8 + d_{11} = 0$\\
13, 14 and 15 & $d_0 + d_2 + d_{11} = 0$\\
\hline
\end{tabular}
\end{center}
This set of equations clearly implies that $d_0 = d_2 = d_5 = d_8 = d_{11} = 0$. Thus the error configuration is detected. We therefore conclude that the code detects four arbitrary errors.
\section{Conclusion}
In this paper we have discussed the error correcting properties of hypergraphs and arrived at a condition for error correction using hypergraph states. We have also shown that, to correct an equal number of errors on the same graph topology, the number of gate operations required to prepare the hypergraph states is smaller than that for the graph states whenever a hyper-edge has degree greater than or equal to 6.
Error correction using hypergraph states begins a new direction in the field of fault tolerant quantum computation. We believe further research in this area will reveal more insights and new interesting results.
\vspace*{5mm}
\end{document}
\begin{document}
\title{Study of the transcendence of a family of generalized
continued fractions}
\begin{abstract}
We study a family of generalized continued fractions, which are
defined by a pair of substitution sequences in a finite
alphabet. We prove that they are {\em stammering} sequences, in
the sense of Adamczewski and Bugeaud. We also prove that this
family consists of transcendental numbers which are not
Liouvillian.
We explore the partial quotients of their regular continued
fraction expansions, arriving at no conclusion concerning their
boundedness.
\end{abstract}
\keywords{Continued fractions; Transcendence; Stammering
Sequences.}
\section{Introduction}
The problem of characterizing continued fractions of numbers
beyond rational and quadratic has received consistent attention
over the years.
One direction points to an attempt to understand properties of
algebraic numbers of degree at least three, but at times even
this line ends up in the realm of transcendental numbers.
Some investigations \cite{bruno,garrity,schwe} on algebraic
numbers depart from generalizations of continued fractions. This
line of investigation has been tried since Euler, see
\cite{schwe,bruno} and references therein, with a view to
generalize Lagrange's theorem on quadratic numbers, in search
of proving a relationship between algebraic numbers and
periodicity of multidimensional maps yielding a
sequence of approximations to an irrational number.
This theory has been further developed for instance to the
study of ergodicity of the triangle map \cite{meno}. In fact, a
considerable variety of algorithms may be called
generalizations of continued fractions: for instance
\cite{keane}, Jacobi-Perron's and Poincaré's algorithms in
\cite{schwe}.
We report on a study of generalized continued fractions of
the form:
\begin{equation}
\label{thab}
\theta(a,b)\stackrel{{\rm def}}{=}
a_0+\frac{b_0}{a_1+\frac{b_1}{a_2+\frac{b_2}{a_3+\frac{b_3}{
a_4+\frac { b_4 } { \ddots}}}}}
\ ,
\end{equation}
(where $a_0\geq 0$, $a_n\in \mathbb N$, for $n\in \mathbb N$ and $b_n \in
\mathbb N$, for $n\geq 0$), investigating a class with a
regularity close, in a
sense, to periodicity. This family of generalized continued
fractions converges when $(a_n)$ and $(b_n)$ are
finite valued sequences. They were considered {\em formally} in
\cite{list}, in the context exemplified in Section \ref{exam1}.
The {\em stammering} sequences
\cite{acta}, and sequences generated by morphisms
\cite{abd,aldavquef,allsha}, constitute a natural step away from
periodic ones. Similarly to the results in
\cite{abd,aldavquef}, on regular continued fractions, we prove
that the family of numbers considered are transcendental.
\begin{teo}
\label{teo1}
Suppose that $(a_n)$ and $(b_n)$, $n\geq 0$, are fixed points of
primitive substitutions. Then the number $\theta(a,b)$ is
transcendental.
\end{teo}
\def\mathsf S{\mathsf S}
We also consider Mahler's classification within this family of
transcendental numbers, proving that they cannot be
Liouvillian.
Let us call the numbers of the family of generalized continued
fractions $\theta(a,b)$, with ${\mathbf a}$ and ${\mathbf b}$
stammering sequences coming from primitive substitutions, numbers
of {\em type} $\mathsf S_3$.
\begin{teo}
Type $\mathsf S_3$ numbers are either $S$-numbers or
$T$-numbers in Mahler's classification.
\label{teo2}
\end{teo}
The paper is organized as follows. In Section \ref{conv}, we
prove the convergence of the generalized continued fraction
expansions for type $\mathsf S_3$ numbers. In Section \ref{transc}, we
prove the transcendence of type $\mathsf S_3$ numbers. In
Section \ref{liou}, we use Baker's Theorem to prove that they
are either $S$-numbers or $T$-numbers. In Section \ref{exam1},
we show some inconclusive calculations on the partial quotients
of the regular continued fraction of a specific type $\mathsf S_3$
number.
\section{Convergence}
\label{conv}
We start from the analytic
theory of continued fractions \cite{wall} to prove the
convergence of \eqref{thab} when $(a_n)$ and $(b_n)$, $n\geq
0$, are sequences in a finite alphabet.
Let $\overline{\R}^+=[0,\infty]$ denote the extended positive real
axis,
with the understanding that $a+\infty=\infty$, for any
$a\in \overline{\R}^+$; $a\cdot \infty=\infty$, if $a>0$,
$0\cdot \infty = 0$ and $a/\infty=0$, if $a\in \mathbb R$. We do not
need to define $\infty/\infty$.
Given the sequences $(a_k)$ and $(b_k)$ of non-negative
(positive) integers, consider the Möbius transforms
$t_k:\overline{\R}^+\to \overline{\R}^+$
\[ t_k(w)= a_k+\frac{b_k}{w} \ ,\; k\in \mathbb N \ ,\]
and their compositions
\[ t_it_j(w)= t_i(a_j+b_j/w)=a_i+b_i/(a_j+b_j/w) \ .\]
This set of Möbius transforms is closed under compositions and
form a
semigroup.
It is useful to consider the natural correspondence between Möbius
transformations and $2\times 2$ matrices:
\[
M_k=\begin{pmatrix} a_k & b_k \\ 1 & 0 \end{pmatrix} \ .\]
Taking the
positive real cone ${\cal C}_2=\{(x,y) \ |\
x\geq 0\ , \ y\geq 0\ , x+y>0\}$ with the equivalence $(x,y) \sim
\lambda (x,y)$ for every $\lambda>0$, we
have a homomorphism between the semigroup of Möbius transforms,
under composition, acting on $\overline{\R}^+$ and the algebra of matrices above
(which are all invertible) acting on ${\cal C}_2/\sim$.
Assume the limit
\[ \lim_{n\to \infty} t_0t_1t_2\cdots t_n(0) \]
exists as a real positive number, then it is given once we
know the sequences $(a_n)$
and $(b_n)$, $n\geq 0$.
In this case, it is equal to $\lim_{n\to \infty}
t_0t_1t_2\cdots t_{n-1}(\infty)$ as well, so that the initial
point may
be taken as $0$ or $\infty$ in the extended positive real axis.
In terms of matrix multiplication, we have
\[ M_0M_1M_2\cdots M_n \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\sim
M_0M_1M_2\cdots M_{n-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
in ${\cal C}_2$.
Define $p_{-1}=1$, $q_{-1}=0$, $p_0=a_0$, $q_0=1$ and
\[ \begin{pmatrix}
p_n & b_np_{n-1} \\
q_n & b_nq_{n-1}
\end{pmatrix} \stackrel{{\rm def}}{=} M_0M_1M_2 \cdots M_n \
,\ n\geq 0 \ . \]
We have the following second order recursive formulas for
$(p_n,q_n)$:
\begin{align}
\label{recur}
& p_{n+1}=a_{n+1}p_n+b_np_{n-1} \\
& q_{n+1}=a_{n+1}q_n+b_nq_{n-1} \nonumber
\end{align}
and the determinant formula
\begin{equation}
\label{detAn}
p_nq_{n-1}-p_{n-1}q_n = (-1)^{n-1} b_0\cdots b_{n-1}
\end{equation}
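For concreteness, here is a short numerical sketch (our own; the Thue--Morse driven choice of $(a_n)$ and $(b_n)$ is merely an assumed example of sequences taking finitely many values) that builds $p_n$ and $q_n$ from the recursion \eqref{recur}, checks the determinant formula \eqref{detAn}, and displays the resulting approximations to $\theta(a,b)$:
\begin{verbatim}
# Convergents of a generalized continued fraction with a_n, b_n read off the
# Thue-Morse fixed point (our own illustrative choice, remapped to {1,2}).
def thue_morse(N):
    s = [0]
    while len(s) < N:
        s += [1 - x for x in s]
    return s[:N]

N = 40
t = thue_morse(N)
a = [x + 1 for x in t]                    # a_n in {1, 2}
b = [2 - x for x in t]                    # b_n in {1, 2}

p = [1, a[0]]                             # p_{-1}, p_0
q = [0, 1]                                # q_{-1}, q_0
for n in range(N - 1):
    p.append(a[n + 1] * p[-1] + b[n] * p[-2])
    q.append(a[n + 1] * q[-1] + b[n] * q[-2])

ok, prod_b = True, 1
for n in range(1, N):
    prod_b *= b[n - 1]
    ok = ok and (p[n + 1] * q[n] - p[n] * q[n + 1] == (-1) ** (n - 1) * prod_b)
print(ok)                                 # determinant formula holds
print(p[-2] / q[-2], p[-1] / q[-1])       # successive convergents agree closely
\end{verbatim}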
We recall the series associated with a continued fraction
\cite{wall}:
\begin{lema}
Let $(q_n)$ denote the sequence of denominators given in
\eqref{recur} for the continued fraction $\theta(a,b)$.
\if 0
\begin{equation}
\label{basecf}
\frac{1}{1+\frac{b_1}{a_2+\frac{b_2}{a_3+\frac{b_3}{\ddots}}}}
\
.
\end{equation}
\fi
Let
\begin{equation}
\rho_k=-\frac{b_k q_{k-1}}{q_{k+1}} \ ,\; k
\in \mathbb N \ .\label{assure_conv}
\end{equation}
Then
\[ a_0+\frac{b_0}{a_1}\left(1+\sum_{k=1}^{n-1} \rho_1\rho_2\ldots
\rho_k \right) = \frac{p_n}{q_n} \ , n\geq 1 \
. \]
\end{lema}
\begin{proof}
For $n=1$, the sum is empty, $p_1=a_1a_0+b_0$ and $q_1=a_1$; the
equality holds.
Consider the telescopic sum, for $n\geq 1$,
\[ \frac{p_1}{q_1}+\sum_{k=1}^{n-1} \left( \frac{p_{k+1}}{q_{k+1}} -
\frac{p_k}{q_k}\right) = \frac{p_n}{q_n} \ . \]
From \eqref{detAn},
\[ \left( \frac{p_{k+1}}{q_{k+1}} -
\frac{p_k}{q_k}\right) = (-1)^k \frac{b_0b_1\cdots
b_k}{q_{k+1}q_k}\ .\]
Now $\frac{b_0}{a_1}\rho_1=-\frac{b_0}{q_1}\frac{b_1q_0}{q_2}
=-\frac{b_0b_1}{q_1q_2}=\frac{p_2}{q_2}-\frac{p_1}{q_1}$. Moreover
\[ \frac{p_3}{q_3}-\frac{p_2}{q_2}= \frac{b_0b_1b_2}{q_3q_2} =
\frac{b_0}{q_1} \left(-\frac{b_1q_0}{q_2}\right)\left(-\frac{b_2q_1}{
q_3 }
\right) = \frac{b_0}{a_1} \rho_1\rho_2 \ . \]
The same multiplicative cancellation works in general: since $q_0=1$,
\[ \frac{b_0}{a_1}\,\rho_1\rho_2\ldots \rho_k
= \frac{b_0}{q_1}\prod_{j=1}^{k}\left(-\frac{b_jq_{j-1}}{q_{j+1}}\right)
=(-1)^{k}\,\frac{b_0b_1\cdots b_k}{q_kq_{k+1}}
= \frac{p_{k+1}}{q_{k+1}}-\frac{p_k}{q_k} \ ,\]
and inserting this in the telescopic sum above finishes the proof.
\end{proof}
Even though $b_n>1$ may occur in \eqref{thab}, note
that we still have $q_{n+1}\geq 2^{(n-1)/2}$.
Indeed $q_0=1$, $q_1=a_1$ and $q_2=a_2q_1+b_1q_0>1$. By induction,
since $a_n\geq 1$ and $b_n\geq 1$ for all $n\in \mathbb N$,
\[ q_{n+1}=a_{n+1}q_n+b_nq_{n-1} \geq q_n+q_{n-1}\geq
2^{n/2-1}+\frac{2^{(n-1)/2}}2> 2^{(n-1)/2} \ .\]
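For instance (a simple illustration), on the constant alphabet
$a_n=b_n=1$ the recursion \eqref{recur} produces the Fibonacci numbers
$q_0=1$, $q_1=1$, $q_2=2$, $q_3=3$, $q_4=5$, $q_5=8,\ldots$, and indeed
$q_5=8\geq 2^{3/2}\approx 2.83$, as predicted by the bound with $n=4$.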
\begin{lema}
If $(a_n)$ and $(b_n)$ are sequences on a finite alphabet
$\mathscr A\subset [\alpha,\beta] \subset [1,\infty)$, then
the generalized continued fraction \eqref{thab} converges.
\label{dois}
\end{lema}
\begin{proof}
It follows from \eqref{recur} that $(q_n)$, $n\geq 1$, is
increasing, and
\[ |\rho_k| = \left| \frac{b_kq_{k-1}}{q_{k+1}}\right| =
\left(1+ \frac{a_{k+1}q_k}{b_kq_{k-1}}\right)^{-1}<
(1+\alpha/\beta)^{-1} \ . \]
Thus the series with general term $\rho_1\cdots \rho_k$ is
bounded by a convergent geometric series.
\end{proof}
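For instance, for the two-letter alphabet $\{1,3\}$ used in
Section \ref{exam1} (so $\alpha=1$ and $\beta=3$), the bound above gives
$|\rho_k|<(1+1/3)^{-1}=3/4$ for every $k$, and the series is dominated
by the geometric series $\sum_k (3/4)^k$.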
\begin{lema}
Let $(q_n)$ denote the sequence of denominators given in
\eqref{recur} for a generalized continued fraction, with
$(a_n)$ and $(b_n)$ sequences in a finite alphabet $\mathscr A \subset
[\alpha,\beta]\subset [1,\infty)$. Then
$q_n^{1/n}$ is bounded.
\label{logq_nlim}
\end{lema}
\begin{proof}
From \eqref{recur},
$q_1=a_1$, and $q_{n+1}\leq(a_{n+1}+b_n)q_n \leq (2\beta)q_n$. Hence
$q_n^{1/n}\leq 2\beta \sqrt[n]{a_1}\leq 2\beta^{3/2}$, where
the last inequality uses $a_1\leq\beta$ and $n\geq 2$; for $n=1$ simply $q_1=a_1\leq\beta$.
\end{proof}
\section{Transcendence}
\label{transc}
We now specialize the study of generalized continued fractions
for sequences $(a_n)$ and $(b_n)$ which are generated by
primitive substitutions. These sequences provide a wealth of examples
of {\em stammering sequences}, defined below, following \cite{acta}.
Let us introduce some notation. The set
${\cal A}$ is called alphabet.
A word $w$ on ${\cal A}$ is a finite or infinite sequence of
letters in ${\cal A}$. For finite $w$, $|w|$ denotes the number
of letters
composing $w$. Given a natural number $k$, $w^k$ is the word
obtained by $k$ concatenated repetitions of $w$. Given a
rational number $r>0$, which is not an integer, $w^r$ is the word
$w^{\lfloor r\rfloor}
w'$, where $\lfloor r\rfloor$ denotes the integer part of $r$
and $w'$ is a prefix of $w$ of length $\lceil (r-\lfloor
r\rfloor)|w|\rceil$, where $\lceil q\rceil=\lfloor q\rfloor +1$ is
the upper integer part of $q$.
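For example, for $w=\alpha\beta\alpha$ and $r=3/2$ one has
$\lfloor r\rfloor=1$ and $\lceil (r-\lfloor r\rfloor)|w|\rceil=\lceil 3/2\rceil=2$,
so that $w^{3/2}=\alpha\beta\alpha\,\alpha\beta$.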
Note that if $(a_n)$ and $(b_n)$ are sequences on ${\cal A}$, then
$(a_n,b_n)$ is a sequence in ${\cal A}\times {\cal A}$, which is also
an alphabet. A sequence ${\bf a}=(a_n)$ has the {\em stammering
property} if it is not a periodic sequence and there exist a number $r>1$ and
a sequence of finite words $(w_n)_{n\in \mathbb N}$ such that
\begin{itemize}
\item[a)] for every $n\in \mathbb N$, $w_n^r$ is a prefix of ${\bf a}$;
\item[b)] $(|w_n|)$ is increasing.
\end{itemize}
We say, more briefly, that $(a_n)$ is a stammering
sequence with exponent $r$. It is clear that if $(a_n)$
and $(b_n)$ are both stammering with exponents $r$ and $s$
respectively, then $(a_n,b_n)$ is also stammering with
exponent $\min\{r,s\}$.
\begin{lema}
If $u$ is a substitution sequence on a finite alphabet $\mathscr A$,
then $u$ is stammering.
\label{fact1}
\end{lema}
\begin{proof}
Denote the substitution map by $\xi:\mathscr A\to \mathscr A^+$.
Since $\mathscr A$ is a finite set, there exist $k\geq 1$ and $\alpha
\in \mathscr A$ such that $\alpha$ is a prefix of $\xi^k(\alpha)$,
and then $u=\lim_{n\to \infty} \xi^{kn}(\alpha)$. Moreover, there is a
least finite $j$ such that $\alpha$ occurs a second time in
$\xi^{jk}(\alpha)$. Therefore $u$ is stammering with exponent $r\geq
1+\frac{1}{|\xi^{jk}(\alpha)|-1}$.
\end{proof}
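For instance, for the period doubling substitution
$\xi(\alpha)=\alpha\beta$, $\xi(\beta)=\alpha\alpha$ of
Section \ref{exam1}, one may take $k=1$, since $\alpha$ is a prefix of
$\xi(\alpha)=\alpha\beta$, and $j=2$, since $\alpha$ reappears in
$\xi^{2}(\alpha)=\alpha\beta\alpha\alpha$; the argument above then
yields the exponent bound $r\geq 1+\frac1{|\xi^{2}(\alpha)|-1}=\frac43$.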
\begin{proof}[Proof of Theorem \ref{teo1}]
From Lemma \ref{fact1}, $(a_n)$ and $(b_n)$ are stammering
sequences with some exponent $r>1$. Hence
$\theta(a,b)$ has infinitely many {\em good}
quadratic approximations. Let $(w_n)\in ({\cal A}\times {\cal
A})^*$ be a sequence of words of increasing length
characterizing $(a_n,b_n)$ as a stammering sequence
with exponent $r>1$. Consider $\psi_k(a,b)$ given
by
\[ \psi_k(a,b)=
c_0+\frac{d_0}{c_1+\frac{d_1}{c_2+\frac{d_2}{c_3+\frac{d_3}{\ddots}}}}
\ ,
\]
where $c_j=a_j$, $d_j=b_j$, for $0\leq j<k$, and
$c_{j}=c_{j\bmod k}$
and $d_{j}=d_{j\bmod k}$, for $j\geq k$. The number $\psi_k$ is a root of
the quadratic equation
\[ q_{k-1}x^2+(q_k-p_{k-1})x-p_k=0 \ , \]
which might not be in lowest terms.
Arguing as in Theorem 1 from \cite{acta}, we choose $k$ from the
subsequence of natural numbers given by $|w_n^r|$. Lemma
\ref{logq_nlim} allows us to conclude that the generalized continued
fraction $\theta(a,b)$ is transcendental if both $(a_n)$ and $(b_n)$
are stammering sequences with exponent $r>1$.
\end{proof}
\section{Quest on Liouville numbers}
\label{liou}
We address the question of Mahler's classification for
type $\mathsf S_3$ numbers. The
statement of Baker's Theorem that we quote uses a measure of
transcendence introduced by Koksma, which is equivalent to
Mahler's and which we explain briefly, following
\cite{plms}, Section 2.
Let $d\geq 1$ and let $\xi$ be a real number. Denote by $P(X)$ an
arbitrary polynomial with integer coefficients, and by
$H(P)=\max_{0\leq k\leq j} \{|a_k|\ :\
P(X)=a_0+a_1X+\cdots+a_jX^j\}$ its height. Let $w_d(\xi)$ be the
supremum of the real numbers $w$ such
that the inequality
\[ 0< |P(\xi)|\leq H(P)^{-w} \]
is true for infinitely many polynomials $P(X)$ with integer
coefficients and degree at most $d$. Koksma introduced
$w_d^*(\xi)$ as the supremum of the real numbers $w^*$ such that
\[ 0< |\xi-\alpha| \leq H(\alpha)^{-w^*-1} \]
holds for infinitely many algebraic numbers $\alpha$ of
degree at most $d$, where $H(\alpha)$ is the
height of the minimal polynomial with integer
coefficients which vanishes at $\alpha$.
Let $w(\xi) = \lim_{d\to \infty} \frac{w_d(\xi)}{d}$, then
$\xi$ is called
\begin{itemize}
\item an $A$-number if $w(\xi)=0$;
\item an $S$-number if $0<w(\xi)<\infty$;
\item a $T$-number if $w(\xi)=\infty$, but $w_d(\xi)<\infty$
for every integer $d\geq 1$;
\item a $U$-number if $w(\xi)=\infty$ and $w_d(\xi)=\infty$
for some $d\geq 1$.
\end{itemize}
It was shown by Koksma that $w^*_d$ and $w^*$ provide the same
classification of numbers. Liouville numbers are precisely
those for which $w_1(\xi)=\infty$; they are $U$-numbers of type
1.
\begin{teob}[Baker]
\label{baker}
Let $\xi$ be a real number and $\epsilon >0$. Assume there is
an infinite sequence of irreducible rational numbers
$(p_n/q_n)_{n\in \mathbb N}$, $(p_n,q_n)=1$, ordered such that $2\leq
q_1< q_2\leq \cdots $ satisfying
\[ \left| \xi - \frac{p_n}{q_n} \right| < \frac
1{q_n^{2+\epsilon} } \ . \]
Additionally, suppose that
\[ \limsup_{n\to \infty} \frac{\log q_{n+1}}{\log q_n}<\infty \
, \]
then there is a real number $c$, depending only on $\xi$
and
$\epsilon$ such that
\[ w_d^* (\xi) \leq \exp\exp ( cd^2) \]
for every $d\in \mathbb N$. Consequently, $\xi$ is either an
$S$-number or a $T$-number.
\end{teob}
\begin{proof}[Proof of Theorem \ref{teo2}]
We note that the hypothesis of irreducibility is lacking for
type $\mathsf S_3$ numbers. Let us write $d_n=(p_n,q_n)$. By
eq. \eqref{detAn}, $d_n=b_0\ldots b_{n-1}$. Recall that, for
primitive substitutions in ${\cal A}=\{\alpha,\beta\}\subset
\mathbb N$, the letter $\beta$ occurs in the sequence $({\mathbf b})$
with a uniform frequency $\nu$ \cite{queffe}. Thus $d_n \approx
\beta^{\nu n} \alpha^{(1-\nu)n}$ for large $n$. If, for every $n\in
\mathbb N$, there is a number $\theta$ such that $d_n <
\left(\frac{q_n}{d_n}\right)^\theta$, then
\[ 0< \left| \xi -\frac{p_n/d_n}{q_n/d_n}\right| < \frac
1{q_n^{2+\epsilon}}= \frac 1{d_n^{2+\epsilon}
(q_n/d_n)^{2+\epsilon}} \ .\]
In this case, the estimates of $q_n$ and $d_n$ give
\[ \limsup_{n\to \infty}
\frac{\log(q_{n+1}/d_{n+1})}{\log(q_n/d_n)} <\infty \ .\]
We would conclude from
Theorem \ref{baker} that type $\mathsf S_3$ contains either
$S$-numbers or $T$-numbers and no Liouville numbers.
From the analysis of Section \ref{conv}, keeping its notations,
\[ (-1)^{n-1} \frac{d_n}{q_nq_{n-1}} = \frac{b_0}{a_1} \rho_1
\ldots \rho_{n-1} \ . \]
Now
$|\rho_k|=\left(1+\frac{a_{k+1}q_k}{b_kq_{k-1}}\right)^{-1}$,
and since $q_k\leq 2\beta q_{k-1}$, we conclude that
\[ |\rho_k| > \left( 1+ \frac{2\beta^2}{\alpha}\right)^{-1} \ .
\]
Therefore, recalling that $q_{n-1}\geq 2^{(n-3)/2}$
\[ \frac{d_n}{q_n} > q_{n-1} (1+2\beta^2/\alpha)^{-n+1}
\frac{b_0}{a_1} \quad \Rightarrow\quad \frac{q_n}{d_n}<
(1+2\beta^2/\alpha)^{n-1} 2^{-(n-3)/2}\frac{\beta}{\alpha} \ .
\]
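For instance, for the two-letter alphabet $\{1,3\}$ of
Section \ref{exam1} (so $\alpha=1$ and $\beta=3$), the last bound
specialises to $q_n/d_n< 19^{\,n-1}\, 2^{-(n-3)/2}\cdot 3$.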
Therefore, we want to determine whether there exists $\theta$ solving the inequality
\[ d_n < \left(\frac{q_n}{d_n}\right)^\theta \ . \]
Considering that $d_n \approx \beta^{\nu n} \alpha^{(1-\nu)n}$,
we obtain the inequality
\[ \beta^{\nu n}
\alpha^{(1-\nu)n}
< (1+2\beta^2/\alpha)^{\theta(n-1)}2^{-\theta(n-3)/2}
\frac{\beta}{\alpha} \ .\]
For large $n$, it is sufficient to solve
\[ \beta^{\nu}\alpha^{1-\nu} <
\frac
1{2^{\theta/2}}\left(1+\frac{2\beta^2}{\alpha}\right)^\theta \
,\]
which has the solution $\theta=1$: since $\alpha<\beta$ and
$0<\nu<1$ we have $\beta^\nu\alpha^{1-\nu}<\beta$, while for $\theta=1$
the right hand side equals
$2^{-1/2}(1+2\beta^2/\alpha)\geq \sqrt 2\,\beta^2/\alpha\geq \sqrt 2\,\beta>\beta$. We conclude that type
$\mathsf S_3$
consists
only of $S$-numbers or $T$-numbers.
\end{proof}
\section{Example: partial quotients of a corresponding regular
continued fraction}
\label{exam1}
We now examine one specific example: a generalized continued fraction
associated with the period doubling sequence. The
period doubling sequence, which we denote by $\omega$,
is the fixed point of the substitution
$\xi(\alpha)=\alpha\beta$ and
$\xi(\beta)=\alpha\alpha$ on the two-letter alphabet
$\{\alpha,\beta\}$.
It is
also the limit of a sequence of {\em foldings}, and called a
{\em folded
sequence} \cite{allsha}.
We make some observations, and pose one question, about
the partial
quotients of the
regular continued fraction of the
real number represented by the generalized continued
fraction \eqref{thab} when both sequences
$({\mathbf a})$ and $({\mathbf b})$ are given by the period
doubling sequence: $a_n=b_n=\omega_n$.
We choose to view the period doubling sequence as the
limit of folding operations. The algebra of matrices with fixed
determinant will play a role. A
folding is a mapping
\begin{align*}
{\cal F}_p: &{\cal A}^*\to {\cal A}^*\\
& w\mapsto wp\tilde{w}
\end{align*}
where $\tilde{w}$ equals the word $w$ reversed: if $w=a_1\ldots
a_n$, $a_i\in {\cal A}$, then $\tilde{w}=a_n\ldots a_1$, and
$p\in {\cal A}^*$.
It is clear that
\[ \omega= \lim_{n\to \infty} ({\cal F}_a\circ {\cal F}_b)^n
(a) \ ,\]
see also \cite{allsha},
where the limit is understood in the product topology
(of the discrete topology) in
${\cal A}^\mathbb N\cup {\cal A}^*$.
Let $\theta$ denote the number whose generalized continued
fraction is obtained from the substitution of the letters
$\alpha$ and $\beta$ by
\[ A=\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \ , \quad B=
\begin{pmatrix} 3 & 3 \\
1 & 0
\end{pmatrix} \]
respectively. It corresponds to the choice $\{1,3\}$ for the
alphabet where the sequences $({\mathbf a})$ and
$({\mathbf b})$ take values.
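As a small numerical check (not needed in what follows), the
recursions \eqref{recur} with $a_n=b_n$ given by the first letters
$1,3,1,1,1,3,\ldots$ of the period doubling sequence in this alphabet
yield the convergents
\[ \frac{p_0}{q_0}=\frac{1}{1}\ ,\quad \frac{p_1}{q_1}=\frac{4}{3}\ ,\quad
\frac{p_2}{q_2}=\frac{7}{6}\ ,\quad \frac{p_3}{q_3}=\frac{11}{9}\ ,\quad
\frac{p_4}{q_4}=\frac{18}{15}\ ,\quad \frac{p_5}{q_5}=\frac{65}{54}\ ,\ \ldots \]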
Now we use Raney transducers \cite{raney} to describe
the computation of some partial quotients of the regular
continued fraction converging to $\theta$. A
transducer ${\mathscr T}=(Q,\Sigma,\delta,\lambda)$, or
two-tape machine, is defined by a set of states $Q$, an
alphabet $\Sigma$, a transition function $\delta: Q\times
\sigma \to Q$, and an output function $\lambda: Q\times \sigma
\to \Sigma^*$, where $\sigma\subset \Sigma$ (a more general
definition is possible \cite{allsha}, but this is sufficient
for our purposes).
The states of Raney's transducers are column and
row (or doubly) balanced matrices over the non-negative
integers
with a fixed determinant. A matrix $\begin{pmatrix} a & b \\ c
&
d\end{pmatrix}$ is column balanced if $(a-b)(c-d)<0$
\cite{raney}.
Figure \ref{trans1} shows the Raney transducer for doubly balanced
matrices of determinant 3. In the text, we use the abbreviations:
$\beta_1=\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$,
$\beta_2=\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}$ and
$\beta_3=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then
$Q=\{\beta_1,\beta_2,\beta_3\}$, $\Sigma=\{L,R\}$, where
\[ R=\begin{pmatrix}
1 & 1 \\ 0 & 1
\end{pmatrix} \ , \quad L=\begin{pmatrix} 1 & 0 \\ 1 & 1
\end{pmatrix} \ ,\]
the
transition function $\delta$ and the output function are
indicated in the graph.
For instance, if $RL^2R$ is the input word on state $\beta_2$,
the output word is $L^2R^4$ and the final state is $\beta_1$.
Any infinite word in $\Sigma^{\mathbb N}$ can be read by ${\mathscr
T}$, but not every finite word can be fully read by ${\mathscr
T}$, for
instance, $L^{11}$ in state $\beta_1$ will produce $L^3$, but
$L^2$ will stay in the reading queue in state $\beta_1$.
Algebraically, these two examples are written as
\begin{align*}
\beta_2 RL^2R & = L \beta_3 LR = L LR \beta_1R = L^2 RR^3
\beta_1 = L^2R^4 \beta_1 \\
\beta_1 L^{11} &= L^3 \beta_1 L^2
\end{align*}
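As a direct check of the first of these identities by matrix
multiplication,
\[ \beta_2\, RL^2R=\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}=
\begin{pmatrix} 3 & 4 \\ 6 & 9 \end{pmatrix}=
\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}= L^2R^4\beta_1 \ .\]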
As explained in \cite{vdP}, Theorem 1 in \cite{list} or even
Theorem 5.1 in \cite{raney}, one may use the transducer
${\mathscr T}$ to commute the matrices $A$ and $B$ to get an
approximation to the continued fraction of $\theta$.
\begin{figure}
\caption{Transducer $\mathscr T$.}
\label{trans1}
\end{figure}
Introducing the matrix $J=\begin{pmatrix} 0 & 1 \\ 1 & 0
\end{pmatrix}$, we note the following relations:
$A=RJ$, $B=\beta_1 RJ$, $\beta_2 J=J\beta_1$.
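It is also useful to record the elementary identities $J^2=I$,
$JRJ=L$ and $JLJ=R$; they justify replacing the factors $(JRJ)$ by $L$
in the computation below.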
The homomorphism between ${\cal A}^*$ and the semigroup of
matrices generated by $\{A,B\}$, acting as Möbius transforms, is
the basis for discovering some curious properties of the
regular continued
fraction of $\theta$.
The sequence of matrices
\[ ABA,\ ABAAABA,\ ABAAABABABAAABA \ ,\]
which corresponds to ${\cal F}_b(a)$, ${\cal F}_a\circ{\cal F}_b(a)$,
${\cal F}_b\circ {\cal F}_a\circ {\cal F}_b(a)$, yields the
beginning of the regular continued fraction expansion of
$\theta$.
A step by step calculation shows the basic features in the use
of ${\mathscr T}$:
\begin{align*}
ABAAABA &= RJ(\beta_1 RJ) RJRJRJ(\beta_1 RJ)RJ \\
&= R \beta_2 (JRJ)R(JRJ)R\beta_2 (JRJ)RJ \\
&= R \beta_2 LRLR\beta_2 LRJ \\
& = R L^3 \beta_2 RLR L^3 \beta_2 RJ \\
& =RL^3 L \beta_3 RL^3 \beta_2 RJ \\
& = RL^4 RL \beta_2 L^3 \beta_2 RJ \\
&= RL^4 RL L^9 \beta_2^2 RJ = RL^4 RL^{10} \beta_2^2 RJ \ .
\end{align*}
This means that the continued fraction of $\theta$ begins as
$[1;4,1,k,\cdots]$, with $k\geq 10$.
Writing $T=ABAAABA$, upon the next folding
\begin{align*}
TBT &= RL^4RL^{10} \beta_2^2 RJ (\beta_1RJ) RL^4RL^{10}
\beta_2^2 RJ \\
&= RL^4 RL^{10} \beta_2^2 R\beta_2 LRL^4RL^{10} \beta_2^2 RJ \\
&= RL^4RL^{10} \beta_2^2RL^3L\beta_3 L^3
RL^{10} \beta_2^2 RJ \\
& = RL^4R L^{10} \beta_2^2 RL^4LR \beta_1 L^2R
L^{10} \beta_2^2 RJ \\
&= RL^4R L^{10} \beta_2^2 RL^5 RRL^2 \beta_2 L^{10}
\beta_2^2 RJ \\
&=RL^4RL^{10} \beta_2 L \beta_3L^4 R^2 L^{32} \beta_2^3 RJ \\
&= RL^4 R L^{13} \beta_2 LR \beta_1 L^3 R^2L^{32} \beta_2^3 RJ
\\
& = RL^4 RL^{16} \beta_2 R L \beta_1 R^2 L^{32} \beta_2^3 RJ \\
&= RL^4 RL^{17} \beta_3 R^6L^{10} \beta_1 L^2 \beta_2^3 RJ \\
&= RL^4 RL^{17} RL \beta_2 R^5L^{10} \beta_1 L^2\beta_2^3 RJ \\
&= RL^4 RL^{17} RL R LR^2 \beta_1 L^9 \beta_1 L^2 \beta_2^3 RJ
\\
&= RL^4 RL^{17} RLRLR^2L^3 \beta_1^2 L^2 \beta_2^3 RJ
\end{align*}
We now know that the regular
continued fraction expansion of $\theta$ begins as
$\theta=[1;4,1,17,1,1,1,1,2,a,\cdots]$, with $a\geq 3$.
Instructions for the transducer remain at the right end of this
factorization, to be read at the next folding. Note that only the
states $\beta_1$ and $\beta_2$ can remain there, since a transition
from $\beta_3$ is always possible given any finite word in
$\Sigma^*$.
An inductive prediction as to whether the high power in the
beginning, $L^{17}$, will consistently increase upon
(sufficient) repetitions of foldings
is out of reach. This observation poses the question: are the
partial quotients of the regular continued fraction of $\theta$
bounded? Similar calculations have been done with a simpler
choice of the alphabet, that is, $\{1,2\}$, where the
transducer has only two states (doubly balanced matrices with
determinant 2).
\end{document}
\begin{document}
\title{Stabilized finite element methods for nonsymmetric,
noncoercive and ill-posed problems. Part II: hyperbolic equations}
\begin{abstract}
In this paper we consider stabilized finite element methods for hyperbolic transport equations without
coercivity. Abstract conditions for the convergence of the methods are
introduced and these conditions are shown to hold for three different
stabilized methods: the Galerkin least squares method, the continuous
interior penalty method and the discontinuous Galerkin method.
We consider both the standard stabilization methods and the
optimisation based method introduced in \cite{part1}.
The main idea of the latter is to write the stabilized method in an optimisation
framework and select the discrete function for which a certain cost
functional, in our case the stabilization term, is minimised.
Some numerical examples illustrate the theoretical investigations.
\end{abstract}
\section{Introduction}
Several finite element methods have been proposed for the computation
of hyperbolic problems, such as the SUPG method \cite{BH82,JNP84}, the
discontinuous Galerkin method \cite{RH73, LR74, JP86} and several
different weakly consistent, symmetric stabilization methods for
continuous approximation spaces, \cite{Gue99, Cod00, BH04, BB04}. In
most of these cases however the analysis relies on the satisfaction of a
coercivity condition. Indeed if a scalar hyperbolic transport equation
\begin{equation}\label{model_problem}
\beta \cdot \nabla u + \sigma u = f
\end{equation}
is considered, with data given on the inflow boundary, it is typically
assumed that there exists $\sigma_0 \in \mathbb{R}^+$ such that
\begin{equation}\label{coercivity}
\sigma_0 \leq \inf_{x \in \Omega} \left(\sigma - \frac12 \nabla \cdot \beta\right).
\end{equation}
In \cite{JNP84,JP86,AM09}, for instance, the degenerate case
$\sigma_0=0$ is allowed using special exponentially weighted test
functions, which we will also exploit in this paper.
In practice this condition is quite restrictive and rules out many
important flow regimes such as exothermic reactions, compressible
flow fields or data assimilation problems with data given on the
outflow boundary. Our objective in the present paper is to propose an
analysis of stabilized finite element methods in the noncoercive case. Indeed
similarly to the elliptic case \cite{Schatz74}, the discrete
solutions of standard stabilized finite element methods are shown
to exist and have optimal convergence under a condition on the mesh
size.
Unlike the elliptic case there appears to be no equivalent
result, even suboptimal, for the standard Galerkin method. This part
uses tools similar to those of \cite{JNP84,JP86,AM09}. Then we
show how the method
introduced in \cite{part1} can be applied to hyperbolic problems
beyond the coercive regime of condition \eqref{coercivity}. The advantage of
this latter method is
that the mesh conditions under which the analysis holds are much less
restrictive and boundary conditions may be imposed on the outflow
boundary just as easily as on the inflow boundary, without modifying
the parameters of the method.
For a full motivation of
the method and analysis in the elliptic case we refer the reader to
\cite{part1}.
We will
consider the problem \eqref{model_problem} with smooth coefficients,
$\beta \in [W^{2,\infty}(\Omega)]^d$ and $\sigma \in
W^{1,\infty}(\Omega)$. Boundary data will be given either on the inflow or on the
outflow boundary, corresponding to solving either the standard transport problem or
a model data assimilation problem. For such smooth physical parameters both cases can easily
be solved using the method of characteristics, provided that for each
$x \in \Omega$ there exists a streamline leading, in finite time, to the boundary where
data is imposed and $|\beta(x)|\ne 0$ for all $x \in \Omega$. In the
following we always assume that $\beta$ satisfies these
assumptions, unless otherwise stated, and that the stationary problem admits a unique, sufficiently
smooth solution.
Problems in conservation form, with advection term $\nabla \cdot (\beta u)$, are cast in the form \eqref{model_problem} by using the
product rule and including the low order term with coefficient $\nabla
\cdot \beta$ in $\sigma$. The present paper has the following structure. In section
\ref{sec:abstract} we propose an abstract analysis under certain
assumptions on the discrete bilinear form. Then in section
\ref{sec:stabilization} we give a detailed description of how three
different stabilization methods, the Galerkin least squares method (GLS),
the continuous interior penalty (CIP) method and the discontinuous Galerkin
method (DG) satisfy the assumptions of the abstract theory for the
case of the advection--reaction equation. In all cases we prove that
the classical quasi optimal estimate
for stabilized methods holds
\[
\|u - u_h\|_{L^2(\Omega)} + \|h^{\frac12} \beta \cdot \nabla (u -
u_h)\|_{L^2(\Omega)} \leq C h^{k+\frac12} |u|_{H^{k+1}(\Omega)}.
\]
We also show how to include a model
problem for data assimilation in the analysis. Finally in section
\ref{numerics} we illustrate the theory with some numerical examples.
\section{Abstract formulation}\label{sec:abstract}
Let $\Omega$ be a polygonal/polyhedral subset of $\mathbb{R}^d$. The
boundary of $\Omega$ will be denoted by $\partial \Omega$ and its
outward pointing normal by $n$.
We let $V,W$ denote two Hilbert spaces with norms $\|\cdot\|_V$ and $\|\cdot\|_W$.
The abstract weak formulation of the continuous problem takes
the form: find $u \in V$ such that
\begin{equation}\label{forward}
a(u,v) = (f,v), \quad \forall v \in W
\end{equation}
with formal adjoint: find $z \in W$ such that
\begin{equation}\label{adjoint}
a(w,z) = (g, w) , \quad \forall w \in V.
\end{equation}
The bilinear form $a(\cdot,\cdot):V\times W
\rightarrow \mathbb{R}$ and the data $f$ are assumed to satisfy the assumptions of
Babuska's theorem \cite{Bab70} so that the problems \eqref{forward} and
\eqref{adjoint} are well posed. (See \cite{EG06} for an analysis of
\eqref{model_problem} in the coercive regime.) We denote the forward problem on strong form
$\mathcal{L} u = f$ and the adjoint problem on strong form
$\mathcal{L}^* z = g$.
\begin{remark}
The analysis below never uses the full power of Babuska's theorem. We
only need to assume that \eqref{forward} admits a unique solution for
the given data and that certain discrete stability conditions are
satisfied by $a(\cdot,\cdot)$ as specified below. For the problems
considered herein the solution of \eqref{adjoint} will always be
$z=0$.
\end{remark}
\subsection{Finite element discretisation}
Let $\{\mathcal{T}_h\}_h$ denote a family of quasi uniform, shape
regular triangulations $\mathcal{T}_h:=\{K\}$, indexed by the maximum
triangle radius $h:=\max_{K \in \mathcal{T}_h} h_K$. The set of faces of the
triangulation will be denoted by $\mathcal{F}$ and
$\mathcal{F}_{int}$ denotes the subset of interior faces. Let $X_h^k$ denote the finite element space of piecewise
polynomial functions on
$\mathcal{T}_h$,
$$
X_h^k := \{v_h \in L^2(\Omega): v_h\vert_{K} \in \mathbb{P}_k(K),\quad \forall K
\in \mathcal{T}_h\}.
$$
Here $\mathbb{P}_k(K)$ denotes the space of polynomials of degree
less than or equal to $k$ on a triangle $K$. The $L^2$-scalar product over some {measurable} $X \subset \mathbb{R}^d$ is denoted $(\cdot,\cdot)_X$ and the associated
norm $\|\cdot\|_X$, the subscript is dropped whenever $X = \Omega$. We
will also use $\left<\cdot,\cdot\right>_Y$ to denote the $L^2$-scalar
product over $Y \subset \mathbb{R}^{d-1}$. For the element wise
$L^2$-scalar product and norm over $\Omega$ we will use the notation
$(\cdot,\cdot)_h:= \sum_{K \in \mathcal{T}_h} (\cdot,\cdot)_K$,
$\|\cdot\|_h:= (\cdot,\cdot)_h^{\frac12}$.
In the estimates of the paper capital constants are
generic, whereas lower case constants are specific to the estimate.
Sometimes capital constants will be given subscripts to point
to the main dependencies on parameters. We will also use $a \sim b$ to
stress an important dependence of $a$ on some parameter $b$,
i.e.\ $a = C b$, with $C$ assumed to be moderate.
We let $\pi_L$ denote the standard $L^2$-projection onto $X_h^k$ and
$i_h:C^0(\bar \Omega) \mapsto X_h^k$ the standard Lagrange interpolant. Recall that
for any function $u \in (V \cup W)\cap H^{k+1}(\Omega)$ there holds
\begin{equation}\label{approx}
\|u - i_h u\| + h \|\nabla(u - i_h u)\|+ h^2 \|D^2(u - i_h u)\|_h \leq c_i h^{k+1} |u|_{H^{k+1}(\Omega)},
\end{equation}
where $D^2$ denotes the Hessian matrix and the matrix norm used is the
Frobenius norm. A similar result holds for $\pi_L$. If $\pi_L$ projects onto $X_h^k \cap
C^0(\bar \Omega)$ the same result holds under the assumption of local
quasi regularity of the mesh. The following discrete commutator
property follows by straightforward modifications of the result in
\cite{Berto99} and holds for $i_h$, the element-wise $L^2$-projection
onto $X_h^k$ and, under our assumptions
on the mesh, for the $L^2$-projection onto continuous finite element
functions. Here $\varphi \in W^{2,\infty}(\Omega)$, $0\leq n \leq 2$,
\begin{equation}\label{discrete_commutator}
\sum_{K\in \mathcal{T}_h} |\varphi u_h - i_h(\varphi u_h) |^2_{H^n(K)} \leq c^2_{dc,n,\varphi}
h^{-2n+2} \|u_h\|^2_{L^2(\Omega)}.
\end{equation}
We also note that the following inverse inequalities hold, $\exists
c_T, c_I \in \mathbb{R}^+$ such that
\begin{equation}\label{inverse_eq}
\begin{array}{l}
\|u\|_{\partial K} \leq c_T (h^{-\frac12} \|u\|_{K} + h^{\frac12}
\|\nabla u\|_K), \quad \forall u \in H^1(K)\\[3mm]
h_K^{\frac12} \|u_h\|_{\partial K} + h_K \|\nabla u_h\|_K \leq c_I
\|u_h\|_{K}, \quad \forall u_h \in \mathbb{P}_k(K).
\end{array}
\end{equation}
Let $V_h$ and $W_h$ denote two finite element spaces such that
$\mbox{dim} ~ V_h= \mbox{dim} ~ W_h$ (in practice $V_h = W_h$ herein). Now we introduce a discrete
bilinear form $a_h(\cdot,\cdot):V_h \times W_h \mapsto \mathbb{R}$
associated to $a(\cdot,\cdot)$
and a stabilization operator $s_p(\cdot,\cdot):V_h \times W_h \mapsto \mathbb{R}$. The standard stabilized finite element
formulation for the problem \eqref{forward} takes the form, find $u_h
\in V_h$ such that
\begin{equation}\label{standardFEM}
a_h(u_h,v_h) + s_p(u_h,v_h) = (f,v_h) + s_p(u,v_h) \quad \forall v_h
\in W_h.
\end{equation}
Observe that since $s_p(u,v_h)$ appears in the right hand side,
we can only use stabilization operators such that this quantity is known.
As we shall see below, the noncoercivity of the form
$a_h(\cdot,\cdot)$ leads to problem dependent mesh conditions for the
well-posedness of \eqref{standardFEM}. To alleviate the conditions on the mesh we propose the following finite element method for the approximation
of \eqref{forward}, find $(u_h,z_h) \in V_h \times W_h$ such that
\begin{equation}\label{FEM}
\begin{array}{rcl}
a_h(u_h,w_h) + s_a(z_h,w_h) &=&(f,w_h) \\[3mm]
a_h(v_h,z_h) - s_p(u_h,v_h) &=& -s_p(u,v_h),
\end{array}
\end{equation}
for all $(v_h,w_h) \in V_h \times W_h$. Here $s_a(\cdot,\cdot)$ is a
stabilization term related to the adjoint equation that will be
discussed below. Observe that we here solve
simultaneously \eqref{forward} and \eqref{adjoint}, with $g=0$ in the
latter equation.
We will consider either continuous approximation spaces,
$V_h:=X^k_h\cap H^1(\Omega)$ or discontinuous approximation $V_h := X^k_h$.
The bilinear form
$a_h(\cdot,\cdot)$ is a discrete realisation of $a(\cdot,\cdot)$,
typically modified to account for the effect of nonconformity,
since in general $V_h \not \subset V$ and $W_h \not \subset W$. Weakly
imposed boundary conditions may be set in the form $a_h(\cdot,\cdot)$,
but below we have chosen to
impose them using $s_p(\cdot,\cdot)$ and $s_a(\cdot,\cdot)$ to obtain
a more unified analysis. In
\eqref{FEM} stabilization can also be added in $a_h(\cdot,\cdot)$.
Our numerical experiments did not show any advantages of the addition
and this approach will not be pursued herein.
The bilinear forms
$s_a(\cdot,\cdot)$, $s_p(\cdot,\cdot)$ in \eqref{FEM} are symmetric, positive semi-definite, stabilization
operators, defined on $[V_h \cup W_h]^2$. For
simplicity we will always assume that $u$ is
sufficiently regular so that strong consistency holds,
i.e. $s_p(u,v_h)$ is well defined. Note also that for the method to
make sense $s_p(u,v_h)$ must be known, either to be zero, or depending
only on known data. This will be the case below. The modifications of the
analysis to the case of weakly consistent stabilization are
straightforward and not considered herein. The semi-norm on $V_h\cup W_h$ associated to the stabilization is defined by
\[
|x_h|_{S_y} := s_y(x_h,x_h)^{\frac12}, \quad y = a,p.
\]
We will
assume that the following strong consistency property holds. If $u$
is the solution of \eqref{forward} then
\begin{equation}\label{consist1}
a_h(u,\varphi) =(\mathcal{L }u,\varphi) = (f,\varphi) \mbox{ for all }\varphi \in W_h.
\end{equation}
Then the solution $u$ of \eqref{forward} solves \eqref{standardFEM},
and the pair formed by the solution $u$ of \eqref{forward} and $z \equiv 0$ solves the
system \eqref{FEM}.
We also assume that there are interpolation operators $\pi_V:V
\rightarrow V_h $ and $\pi_W : W \rightarrow W_h$, satisfying
\eqref{approx}. We introduce the (semi-)norm $\|\cdot\|_+$ and assume
that the following approximation estimates are satisfied
\begin{equation}\label{approx1}
\|v - \pi_V v\|_V + \|v - \pi_V v\|_+ + |v- \pi_V v|_{S_p} \leq c_{a\gamma} h^r
|v|_{H^{k+1}(\Omega)}, \quad \forall v \in V \cap H^{k+1}(\Omega),
\end{equation}
where $r>0$, depends {on the approximation properties of the finite
element space and the definition of the norms in the left hand
side}. From
the standard error estimates for stabilized methods we expect
$r=k+\frac12$ for smooth exact solutions. The
constant $c_{a\gamma}$ depends on the form $a(\cdot,\cdot)$ and
stabilization parameter(s) of the method included in
$s_p(\cdot,\cdot)$ and $s_a(\cdot,\cdot)$, here denoted $\gamma$.
\subsection{Abstract assumptions on the formulation
\eqref{standardFEM}}
The assumptions made below constitute sufficient conditions for the
method \eqref{standardFEM} to converge. Here we assume that
$\|\cdot\|_V \equiv \|\cdot \|_W$. As usual the conditions are
consistency, stability and continuity of the forms.
First, consistency: Galerkin orthogonality for \eqref{standardFEM} is a consequence of the consistency \eqref{consist1}
\begin{equation}\label{galortho1}
a_h(u - u_h,w_h) + s_p(u-u_h,w_h) = 0, \quad \forall w_h \in W_h.
\end{equation}
We assume that there exists $c_s, c_\eta \in \mathbb{R}^+$ such that
for all $h>0$ and $u_h \in V_h$ there exists $v_a \in W_h$ satisfying
\begin{equation}\label{strong_bound_va}
c_s (\|u_h\|^2_V + |u_h|^2_{S_p}) \leq a_h(u_h,v_a(u_h)) + s_p(u_h,v_a(u_h)) + \epsilon(h) (\|u_h\|_V^2 + |u_h|^2_{S_p}),
\end{equation}
where $\epsilon(h)$ is a continuous function such that $\epsilon(0)=0$,
and
\begin{equation}\label{stab_v_bound3}
\|v_{a}(u_h)\|_V + |v_a(u_h)|_{S_p}
\leq c_\eta (\|u_h\|_V + |u_h|_{S_p}).
\end{equation}
These assumptions ensure that the stabilized formulation satisfies a
discrete inf-sup condition for $\epsilon(h)$ small enough. We also assume the following continuity.
\begin{equation}\label{cont2}
a_h(v - \pi_V v, x_h) \leq \|v - \pi_V v\|_+ c_{a}
(|x_h|_{S_p}+\|x_h\|_V),\, \forall v \in V, \, x_h \in W_h.
\end{equation}
\subsection{Abstract assumptions on the formulation \eqref{FEM}}
Observe that the following partial coercivity is obtained
by taking $v_h = u_h$ and $w_h=z_h$ in \eqref{FEM} and subtracting the second equation from the first,
\begin{equation}\label{partial_coerc}
|z_h|_{S_a}^2 + |u_h|_{S_p}^2= (f, z_h) {+ s_p(u,u_h)}.
\end{equation}
The following Galerkin orthogonality holds for \eqref{FEM} by \eqref{consist1},
\begin{equation}\label{galortho2}
\begin{array}{c}
a_h(u - u_h,w_h) = s_a(z_h,w_h)\quad\forall w_h \in W_h \\
{a_h(v_h, z_h) =
s_p(u_h - u,v_h)} \quad \forall v_h \in V_h.
\end{array}
\end{equation}
Let $\tilde \epsilon(h)$ and $\breve \epsilon(h)$ denote continuous,
monotonically increasing functions such that
$\tilde \epsilon(0)=0$ and $0 \leq \breve \epsilon(h)$.
We assume that the following discrete stability holds for all
$u_h\in V_h$, $z_h \in W_h$. For some
$\tilde c_s, \tilde c_\eta \in \mathbb{R}^+$, for all $u_h \in V_h$, there exists $v_{a}(u_h) \in W_h$ such that
\begin{equation}\label{forward_stability}
\tilde c_s \|u_h\|^2_V \leq a_h(u_h,v_{a}(u_h)) + \tilde
\epsilon(h)\|u_h\|^2_V + \tilde c_\eta|u_h|^2_{S_p}
\end{equation}
and similarly, for all $z_h \in W_h$ there exists $v_{a*}(z_h) \in V_h$ such that
\begin{equation}\label{adjoint_stability}
\tilde c_s \|z_h\|^2_W \leq a_h(v_{a*}(z_h),z_h) + \tilde
\epsilon(h)\|z_h\|^2_W + \tilde c_\eta |z_h|^2_{S_a}.
\end{equation}
Moreover assume that the functions $v_a$ and $v_{a*}$ satisfy the bounds
\begin{equation}\label{stab_v_bound2}
\|v_{a}(u_h)\|_W \leq \tilde c_\eta \|u_h\|_V, \quad |v_{a}(u_h)|_{S_a}
\leq \breve \epsilon(h) \|u_h\|_V + \tilde c_\eta |u_h|_{S_p},
\end{equation}
\begin{equation}\label{stab_v_bound1}
\|v_{a*}(z_h)\|_V \leq \tilde c_\eta \|z_h\|_W, \quad |v_{a*}(z_h)|_{S_p}
\leq\breve \epsilon(h) \|z_h\|_W + \tilde c_{\eta} |z_h|_{S_a}.
\end{equation}
Since we are interested in problems that are ill-conditioned, we here
assume $\tilde c_s<\tilde c_\eta$ without loss of generality.
We finally assume that the following continuity relation
holds
\begin{equation}\label{cont1}
a_h(v - \pi_V v, x_h) \leq \|v - \pi_V v\|_+ c_{a}
(|x_h|_{S_a}+ \|x_h\|_W),\, \forall v \in V, \, x_h \in W_h.
\end{equation}
\subsection{Convergence analysis for the abstract methods}
We will first prove a convergence result for the standard stabilized
finite element method \eqref{standardFEM}. Then we will consider
\eqref{FEM}.
\begin{proposition}\label{standard_conv}
Assume that the solution of \eqref{forward} is smooth and that the forms of
\eqref{standardFEM} and the operators $\pi_V$, $\pi_W$
are such that \eqref{galortho1}--\eqref{cont2} are satisfied.
Also assume that $\epsilon(h)$ satisfies the bound,
\begin{equation}\label{h_cond_strong}
\epsilon(h) \leq \frac{c_s}{2}.
\end{equation}
Then \eqref{standardFEM} admits a unique solution $u_h$ for which there holds
\[
\|u - u_h\|_V + |u - u_h|_{S_p} \leq c_{as\gamma}
h^r
|u|_{H^{k+1}(\Omega)},
\]
where $c_{as\gamma}\sim (c_a+1) \frac{c_\eta}{c_s}$.
\end{proposition}
\begin{proof}
Since the spaces $W_h$ and $V_h$ have the same dimension, the matrix
is square and it is sufficient to prove uniqueness.
Assume $(f,v_h)+s_p(u,v_h) = 0$ for all $v_h \in W_h$. Under the
condition \eqref{h_cond_strong} there holds
\[
\frac12 c_s (\|u_h\|^2_V + |u_h|^2_{S_p}) \leq a_h(u_h,v_a(u_h)) +
s_p(u_h,v_a(u_h)) = 0
\]
hence $u_h = 0$, and existence and uniqueness follow.
Let $\xi_h := \pi_V u - u_h$. By the stability assumption
\eqref{strong_bound_va} we have
\begin{equation*}
c_s( \|\xi_h\|_V^2 + |\xi_h|^2_{S_p})\leq
a_h(\xi_h,v_a(\xi_h))+s_p(\xi_h,v_a(\xi_h))+ \epsilon(h) ( \|\xi_h\|_V^2 + |\xi_h|^2_{S_p}).
\end{equation*}
It follows that under the condition \eqref{h_cond_strong} there holds
\[
\frac12 c_s( \|\xi_h\|_V^2 + |\xi_h|^2_{S_p}) \leq
a_h(\xi_h,v_a(\xi_h))+s_p(\xi_h,v_a(\xi_h))
\]
and by Galerkin orthogonality \eqref{galortho1}, the continuity \eqref{cont2} and the
stability \eqref{stab_v_bound3}
\begin{multline*}
\frac12 c_s( \|\xi_h\|_V^2 + |\xi_h|^2_{S_p}) \leq
a_h(\pi_V u - u,v_a(\xi_h))+s_p(\pi_V u - u,v_a(\xi_h))\\
\leq c_a \|\pi_V u - u\|_+(|v_a(\xi_h)|_{S_p} + \|v_a(\xi_h)\|_V) +
|\pi_V u - u|_{S_p} |v_a(\xi_h)|_{S_p}\\
\leq (c_a+1)( \|\pi_V u - u\|_+ + |\pi_V u - u|_{S_p}) c_\eta( \|\xi_h\|_V + |\xi_h|_{S_p}).
\end{multline*}
We conclude by noting that $\|u - u_h\|_V \leq \|u-\pi_V u\|_V +
\|\xi_h\|_V$ and applying the approximation \eqref{approx1}.
\end{proof}\\
We now turn to the analysis of \eqref{FEM}. In this case the analysis is based on a combination of coercivity of the
stabilization operators \eqref{partial_coerc} and an inf-sup argument
using \eqref{forward_stability} and \eqref{adjoint_stability}. This
allows us to exploit the strong stability property \eqref{partial_coerc} enjoyed by
the stabilization terms and thereby improve the robustness of the method.
\begin{theorem}\label{stab_conv}
Assume that
the solution of \eqref{forward} is smooth, that the forms of
\eqref{FEM} and the operators $\pi_V$, $\pi_W$
are such that \eqref{approx1}, \eqref{galortho2}--\eqref{cont1} are satisfied and that
\begin{equation}\label{h_cond}
\tilde \epsilon(h) \leq \frac{\tilde c_s}{2}.
\end{equation}
Then \eqref{FEM} admits a unique solution $u_h,z_h$ for which there holds
\[
\|u - u_h\|_V + \|z_h\|_W + |u - u_h|_{S_p} + |z_h|_{S_a}\leq \tilde c_{as\gamma}
h^r
|u|_{H^{k+1}(\Omega)}.
\]
The constant in the above estimate is given by
$$
\tilde c_{as\gamma} \sim (c_a + 1)\frac{\tilde c_{\eta}}{\tilde c_s} \left(1 +
\frac{\breve \epsilon(h)^2}{\tilde c_{\eta} \tilde c_s} \right).
$$
Similarly, if $s_p(u,w_h)=0$, there holds
\[
|u_h|_{S_p} + |z_h|_{S_a} \leq \tilde c_{as\gamma} h^r
|u|_{H^{k+1}(\Omega)}.
\]
\end{theorem}
\begin{proof}
For the first inequality, let $\xi_h = \pi_V u - u_h$. As in the
previous case it is enough to prove the claim for $\xi_h$. By the
definition \eqref{FEM} there holds
\[
|\xi_h|_{S_p}^2+|z_h|_{S_a}^2 = s_p(\xi_h,\xi_h)+s_a(z_h,z_h) = a_h(\xi_h,z_h) +
s_a(z_h,z_h) - a_h(\xi_h,z_h) + s_p(\xi_h,\xi_h).
\]
By the stabilities \eqref{forward_stability}--\eqref{stab_v_bound2} there exists $v_a(\xi_h)$ and
$v_{a*}(z_h)$ such that
\begin{multline*}
\tilde c_s (\|\xi_h\|^2_V +\|z_h\|_W^2) \leq a_h(\xi_h,v_a(\xi_h)) +
s_a(v_a(\xi_h),z_h) \\+ a_h(v_{a*}(z_h),z_h) -
s_p(\xi_h, v_{a*}(z_h))
+
\tilde \epsilon(h)\|\xi_h\|^2_V + \tilde c_\eta
|\xi_h|^2_{S_p}\\
+|z_h|_{S_a} (\breve \epsilon(h) \|\xi_h\|_V + \tilde c_{\eta} |\xi_h|_{S_p})
+
\tilde \epsilon(h)\|z_h\|^2_W + \tilde c_\eta
|z_h|_{S_a}^2\\
+|\xi_h|_{S_p}(\breve \epsilon(h)\|z_h\|_W + \tilde c_{\eta} |z_h|_{S_a})
.
\end{multline*}
It follows that for all $\mu_V,\mu_S>0$ we may write
\begin{multline*}
\tilde c_s \mu_V (\|\xi_h\|^2_V +\|z_h\|_W^2)
+\mu_S(|\xi_h|_{S_p}^2+|z_h|_{S_a}^2) \leq a_h(\xi_h, \mu_S z_h + \mu_V
v_a(\xi_h)) \\ +
s_a(\mu_S z_h + \mu_V v_a(\xi_h),z_h) - a_h(\mu_S \xi_h - \mu_V
v_{a*}(z_h),z_h) + s_p(\xi_h ,\mu_S \xi_h - \mu_V v_{a*}(z_h))
\\
+ \mu_V \tilde \epsilon(h)(\|\xi_h\|^2_V + \|z_h\|^2_W)+ \mu_V \tilde c_\eta ( |\xi_h|^2_{S_p}+|z_h|^2_{S_a})\\
+\mu_V |z_h|_{S_a} (\breve \epsilon(h) \|\xi_h\|_V + \tilde c_{\eta} |\xi_h|_{S_p})
+\mu_V |\xi_h|_{S_p}(\breve \epsilon(h)\|z_h\|_W + \tilde c_{\eta} |z_h|_{S_a}) .
\end{multline*}
By arithmetic-geometric inequalities in the right hand side
\begin{multline*}
\mu_V \tilde \epsilon(h)(\|\xi_h\|^2_V + \|z_h\|^2_W)+ \mu_V \tilde c_\eta ( |\xi_h|^2_{S_p}+|z_h|^2_{S_a})\\
+\mu_V |z_h|_{S_a} (\breve \epsilon(h) \|\xi_h\|_V + \tilde c_{\eta} |\xi_h|_{S_p})
+\mu_V |\xi_h|_{S_p}(\breve \epsilon(h)\|z_h\|_W + \tilde c_{\eta} |z_h|_{S_a}) \\
\leq \mu_V \left(\tilde \epsilon(h) + \frac14 \tilde c_s\right) (\|\xi_h\|_V^2+\|z_h\|_W^2)
+ \mu_V \left(2 \tilde c_\eta + \frac{\breve \epsilon(h)^2}{ \tilde c_s}\right) \ (|\xi_h|_{S_p}^2 + |z_h|^2_{S_a}).
\end{multline*}
Therefore under
the condition \eqref{h_cond} there holds
\begin{multline*}
\frac14 \tilde c_s\mu_V (\|\xi_h\|^2_V +\|z_h\|_W^2)
+ \left(\mu_S - \mu_V \left(2 \tilde c_\eta + \frac{\breve \epsilon(h)^2}{ \tilde c_s}\right) \right)(
|\xi_h|_{S_p}^2+|z_h|_{S_a}^2) \\
\leq a_h(\xi_h,\mu_S z_h + \mu_V
v_a(\xi_h)) +
s_a(\mu_S z_h + \mu_V v_a(\xi_h),z_h) \\ - a_h(\mu_S\xi_h - \mu_V
v_{a*}(z_h),z_h) + s_p(\xi_h,\mu_S\xi_h - \mu_V v_{a*}(z_h)).
\end{multline*}
Then, by choosing $\mu_V = \tfrac{4}{\tilde c_s}$, $\mu_S =
\tfrac{9 \tilde c_\eta}{\tilde c_s} + \tfrac{4 \breve \epsilon(h)^2}{ \tilde c_s^2}$
and applying the Galerkin orthogonality of equation
\eqref{galortho2}, we have, since by assumption $\tilde c_s < \tilde c_\eta$
\begin{multline*}
\|\xi_h\|^2_V +\|z_h\|_W^2 +
|\xi_h|_{S_p}^2+|z_h|_{S_a}^2\\ \leq a_h(\pi_V u -
u,\mu_S z_h + \mu_V
v_a(\xi_h)) + s_p(\pi_V u-u, \mu_S \xi_h - \mu_V v_{a*}(z_h)).
\end{multline*}
We proceed by applying the continuity \eqref{cont1} in the first term
of the right hand side and the Cauchy-Schwarz inequality in the
stabilization term,
\begin{multline*}
\|\xi_h\|^2_V
+\|z_h\|_W^2+|\xi_h|_{S_p}^2+|z_h|_{S_a}^2\\ \leq \|u - \pi_V u\|_+
c_a(|\mu_S z_h + \mu_V
v_a(\xi_h)|_{S_a} + \|\mu_S z_h + \mu_V
v_a(\xi_h)\|_W)\\+ |u-\pi_V u|_{S_p}|\mu_S \xi_h - \mu_V v_{a*}(z_h)|_{S_p}.
\end{multline*}
Using a triangle inequality followed by the stability of $v_a$
\eqref{stab_v_bound2} and $v_{a*}$ \eqref{stab_v_bound1} and the bound
$\mu_V (\tilde c_\eta + \breve \epsilon(h)) < \mu_S$, that holds under
the assumption $\tilde c_s < \tilde c_{\eta}$,
we may conclude that
\begin{multline*}
\|\xi_h\|^2_V
+\|z_h\|_W^2+|\xi_h|_{S_p}^2+|z_h|_{S_a}^2 \leq
(\|u - \pi_V u\|_+ + |u-\pi_V u|_{S_p})\\ \times
(c_a + 1) \mu_S ( \|\xi_h\|_V
+\|z_h\|_W+|\xi_h|_{S_p} + |z_h|_{S_a} ).
\end{multline*}
We conclude from this expression and \eqref{approx1} that the first
claim holds.
The second result is an immediate consequence of $s_p(u,w_h)=0$ and
the symmetry of $s_p(\cdot,\cdot)$. Uniqueness of the discrete
solution follows by taking $f=0$ in \eqref{forward} and observing that
since then $u=\pi_V u=0$ we have $u_h=z_h=0$ by which uniqueness
follows using the same a priori estimates.
\end{proof}
\section{Stabilization methods}\label{sec:stabilization}
We let $\mathcal{L}$ denote the first order hyperbolic operator on non-conservation form,
\begin{equation}\label{hyperbolic_oper}
\mathcal{L} u := \beta \cdot \nabla u + \sigma u.
\end{equation}
Here $\beta \in [W^{2,\infty}(\Omega)]^d$ is a non-solenoidal velocity
vector field and $\sigma \in W^{1,\infty}(\Omega)$.
We also assume that boundary conditions are set on the inflow boundary
$\partial \Omega^-$,
$$
u\vert_{\partial \Omega^-} = g_{in}, \quad \partial \Omega^\pm := \{x \in \partial \Omega : \pm \beta(x) \cdot n > 0 \}.
$$
The adjoint operator
takes the form
\begin{equation}\label{hyperbolic_adjoint}
\mathcal{L}^* u := -\nabla \cdot(\beta u) + \sigma u.
\end{equation}
We have assumed below that the reaction is moderately stiff
so that the relevant time scale of the flow is given by $h
|\beta|^{-1}$. In particular we will not track the influence of the size
of $\sigma$ in the error bounds below, assuming
$h^{\frac12}(\|\sigma\|_{L^\infty(\Omega)} + \|\nabla \cdot \beta\|_{L^\infty(\Omega)})$ moderate.
We will consider three
different stabilized finite element methods below and show that they
all satisfy the assumptions of the abstract theory. The bilinear form
$a_h(\cdot,\cdot)$ of \eqref{standardFEM} and \eqref{FEM}
is defined as
\begin{equation}\label{discrete_bilin}
a_h(u_h,v_h) := (\mathcal{L} u_h,v_h)_h - \frac12 \sum_{K \in \mathcal{T}_h}
\int_{\partial K \setminus \partial \Omega} \beta \cdot n_{\partial K}[u_h] \{v_h\} ~\mbox{d}s
\end{equation}
where $\{v_h\}$ denotes the average of $v_h$ taken from the two elements
sharing the face,
\[
\{u_h\}(x)\vert_{\partial K} := \frac12 \lim_{\varepsilon \rightarrow 0^+} (u_h(x-\varepsilon
n_{\partial K}) + u_h(x+\varepsilon
n_{\partial K})),
\]
the jump of $u_h$ is defined as
\[
[u_h](x)\vert_{\partial K} := \lim_{\varepsilon \rightarrow 0^+} (u_h(x-\varepsilon
n_{\partial K}) - u_h(x+\varepsilon
n_{\partial K})).
\]
As usual the jump terms on $u_h$ may be omitted when a
continuous function is considered in the formulation.
First we will prove a general stability result on $a_h(u_h,v_h)$.
\begin{lemma}\label{basic_stability}
For the bilinear form \eqref{discrete_bilin} there holds $\forall \eta
\in W^{1,\infty}(\Omega)$, $\forall u_h,z_h \in X_h^k$,
\[
a_h(u_h,e^{\pm \eta} u_h) = \frac12 \int_{\partial \Omega} (\beta \cdot
n) u_h^2 e^{\pm\eta} ~\mbox{d}s + \int_{\Omega} u_h^2 \left(\mp \frac12 \beta
\cdot \nabla \eta - \frac12 \nabla \cdot \beta + \sigma\right) e^{\pm\eta}~\mbox{d}x,
\]
\[
a_h( e^{\pm \eta}z_h, z_h) = \frac12 \int_{\partial \Omega} (\beta \cdot
n) z_h^2 e^{\pm \eta} ~\mbox{d}s + \int_{\Omega} z_h^2 \left(\pm \frac12 \beta
\cdot \nabla \eta - \frac12 \nabla \cdot \beta + \sigma\right) e^{\pm \eta}~\mbox{d}x.
\]
\end{lemma}
\begin{proof}
Consider the first identity with the negative sign in the exponent.
By definition we have
\begin{multline}\label{a_defstart}
a_h(u_h,e^{-\eta} u_h) = (\beta \cdot \nabla u_h+\sigma u_h,
e^{-\eta} u_h)_h - \frac12 \sum_{K \in \mathcal{T}_h}
\int_{\partial K \setminus \partial \Omega} \beta \cdot n_{\partial K}
[u_h] \{e^{-\eta} u_h\} ~\mbox{d}s
\end{multline}
and note that an integration by parts in the advective term yields
\begin{multline*}
(\beta \cdot \nabla u_h, e^{-\eta} u_h)_h - \frac12 \sum_{K \in \mathcal{T}_h}
\int_{\partial K \setminus \partial \Omega} \beta \cdot n_{\partial K}
[u_h] \{e^{-\eta} u_h\} ~\mbox{d}s = (u_h,e^{-\eta} (\beta \cdot
\nabla \eta - \nabla \cdot \beta) u_h)\\
- (u_h,
e^{-\eta} \beta \cdot \nabla u_h)_h + \frac12 \sum_{K \in \mathcal{T}_h}
\int_{\partial K \setminus \partial \Omega} \beta \cdot n_{\partial K}
[u_h] \{e^{-\eta} u_h\} ~\mbox{d}s \\
+ \int_{\partial \Omega} (\beta \cdot
n) u_h^2 e^{-\eta} ~\mbox{d}s.
\end{multline*}
This equality implies the following well-known relation,
\begin{multline}\label{eq:dg_cons}
(\beta \cdot \nabla u_h, e^{-\eta} u_h)_h - \frac12 \sum_{K \in \mathcal{T}_h}
\int_{\partial K \setminus \partial \Omega} \beta \cdot n_{\partial K}
[u_h] \{e^{-\eta} u_h\} ~\mbox{d}s \\
= \frac12 \left( (u_h,e^{-\eta} (\beta \cdot
\nabla \eta - \nabla \cdot \beta) u_h) + \int_{\partial \Omega} (\beta \cdot
n) u_h^2 e^{-\eta} ~\mbox{d}s\right).
\end{multline}
The first stability result is obtained by applying this equality in
\eqref{a_defstart}.
The identity for the adjoint case is proven similarly by observing that after
an integration by parts in the bilinear form
\begin{multline}\label{z_defstart}
a_h(e^{-\eta} z_h, z_h) = -(e^{-\eta} z_h, \beta \cdot \nabla z_h +
(\nabla \cdot \beta - \sigma) z_h) \\
+ \frac12\sum_{K \in \mathcal{T}_h}
\int_{\partial K \setminus \partial \Omega} \beta \cdot n_{\partial K}
[z_h] \{e^{-\eta} z_h\} ~\mbox{d}s
+ \int_{\partial \Omega} (\beta \cdot
n) z_h^2 e^{-\eta} ~\mbox{d}s
\end{multline}
and then applying \eqref{eq:dg_cons}. The case where the power is
positive follows similarly, observing that the change of sign only has
an effect in the inner derivative $\beta \cdot \nabla \eta$.
\end{proof}\\
The importance of this Lemma is a consequence of the existence of a
particular function $\eta$ that is given in the following result.
\begin{lemma}
Under the assumptions on $\beta$ there exists $\eta_0
\in W^{2,\infty}(\Omega)$ such that $\beta \cdot \nabla \eta_0 \ge 1$ in
$\Omega$.
\end{lemma}
For the proof of this result see \cite[Appendix A]{AM09}.
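For instance (a trivial illustration, not needed in what follows), if
$\beta$ is a constant field, say $\beta\equiv (1,0,\dots,0)^T$, one may
simply take $\eta_0(x)=x_1$, for which $\beta\cdot\nabla\eta_0\equiv 1$
and $\eta_0\in W^{2,\infty}(\Omega)$.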
It follows that the second terms of the right hand sides in the identities of Lemma
\ref{basic_stability} are {non-negative} for
\begin{equation}\label{eq:stab_func}
\eta := (1+ \|2 \sigma-\nabla \cdot \beta
\|_{L^\infty(\Omega)}) \, \eta_0.
\end{equation}
Below we
always assume that $\eta$ is of this form.
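For instance, for the first identity of Lemma \ref{basic_stability}
with the weight $e^{-\eta}$ (the combination used for $v_a$ below),
the non-negativity is immediate: since
$\beta\cdot\nabla\eta\geq 1+\|2\sigma-\nabla\cdot\beta\|_{L^\infty(\Omega)}$,
\[
\tfrac12\, \beta\cdot\nabla\eta-\tfrac12\nabla\cdot\beta+\sigma \ \geq\
\tfrac12\bigl(1+\|2\sigma-\nabla\cdot\beta\|_{L^\infty(\Omega)}\bigr)+\tfrac12\,(2\sigma-\nabla\cdot\beta)\ \geq\ \tfrac12 .
\]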
In general $e^{-\eta} u_h \not \in V_h$ and hence Lemma
\ref{basic_stability} is insufficient to prove
\eqref{forward_stability} and \eqref{adjoint_stability}. The trick is
to choose $v_a$ to be some suitable approximation of $e^{-\eta} u_h$ in
$V_h$, $\pi e^{-\eta} u_h$, and control the approximation error using the
stabilization. Since we are often required to estimate this error we
introduce the notation $\delta(e^{-\eta} u_h) := e^{-\eta} u_h - \pi e^{-\eta} u_h$. Similarly $v_{a*}$ is chosen as an approximation of $-
e^{-\eta} z_h$.
The stabilization terms may now be chosen as
one of the following, where the first two assume $H^1$-conforming
approximation and the last a discontinuous approximation. In all three
cases we have $W_h \equiv V_h$. Below $\gamma_X \in \mathbb{R}^+$, $X
= GLS, \, CIP,\, DG$,
denotes a stabilization parameter associated to the method $X$ and
$\gamma_{bc} \in \mathbb{R}^+$ a stabilization parameter associated to
the weakly imposed boundary condition.
\begin{itemize}
\item The Galerkin least squares method.\\
In this case continuous finite element spaces are used, $V_h = W_h := X_h^k \cap
H^1(\Omega)$ and the stabilization operators take the form
\begin{equation}\label{GLS_prim}
s_{p,GLS}(u_h,w_h) := (\gamma_{GLS} |\beta|^{-1} h \mathcal{L} u_h, \mathcal{L} w_h) ,
\end{equation}
\begin{equation}\label{GLS_adjoint}
s_{a,GLS}(z_h,v_h) := (\gamma_{GLS} |\beta|^{-1} h \mathcal{L^*} z_h, \mathcal{L^*} v_h).
\end{equation}
Note that $s_{p,GLS}(u,w_h) =
(f,\gamma_{GLS} |\beta|^{-1} h \mathcal{L} w_h)$ showing that
$s_p(u,\cdot)$ can indeed be expressed using data.
\item Continuous interior penalty stabilization.\\
Here as well continuous finite element spaces are used, $V_h = W_h := X_h^k \cap
H^1(\Omega)$ and the stabilization is given by
\begin{equation}\label{jumpstab1}
s_{CIP}(u_h,w_h) := \sum_{F \in\mathcal{F}_{int}} \int_F h^2_F
\gamma_{CIP} \|\beta_h \cdot n_F\|_{L^\infty(F)}
\jump{\nabla u_h} \cdot \jump{\nabla w_h} ~\mbox{d}s
\end{equation}
for both the primal and the adjoint equations, where
$\jump{\nabla u_h}\vert_F$ denotes the
jump of the gradient over the
face $F$.
\item The discontinuous Galerkin method.\\
In this case we do not impose any continuity constraints in the
finite element space
$V_h := X_h^k $. The method is stabilized by penalising the jump of the solution
over element faces for both the primal and the adjoint equations.
\begin{equation}\label{DGstab}
s_{DG}(u_h,w_h) := \sum_{F \in\mathcal{F}_{int}} \int_F
\gamma_{DG} |\beta \cdot n_F|
[u_h] [w_h] ~\mbox{d}s
\end{equation}
where
$[u_h]\vert_F$ denotes the
jump of the solution over the
face $F$. The choice $\gamma_{DG} = \tfrac12$ is known to lead to the
classical upwind formulation for the method \eqref{standardFEM}.
\end{itemize}
To
account for boundary conditions the above stabilizations are modified as
follows
\begin{equation}\label{stab_ops_bound}
\begin{aligned}
s_p(u_h,w_h)& := s_{p,X}(u_h,w_h) + s_{bc,-}(u_h,w_h),\\
s_a(z_h,v_h) &:= s_{a,X}(z_h,v_h) + s_{bc,+}(z_h,v_h)+s_{bc,-}(z_h,v_h),
\end{aligned}
\end{equation}
with $X =
GLS,\,CIP,\, DG$ and
$
s_{bc,\pm} := \int_{\partial \Omega}\gamma_{bc} |(\beta \cdot n)_{\pm}|
u_h v_h ~\mbox{d}s.
$
Note that the value of $z_h$ is penalized on the whole boundary. This
is necessary to obtain robustness if no boundary conditions are set
in $a_h(\cdot,\cdot)$ and allows for the simple choice of test functions used in the
analysis below.
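To fix ideas, we spell out one instance (nothing beyond the
definitions above is used): with the GLS operators \eqref{GLS_prim}
and \eqref{stab_ops_bound}, and a continuous approximation space so
that the jump terms in \eqref{discrete_bilin} vanish, the method
\eqref{standardFEM} reads: find $u_h \in V_h$ such that for all
$v_h \in V_h$
\begin{multline*}
(\mathcal{L} u_h,v_h)_h + (\gamma_{GLS} |\beta|^{-1} h \mathcal{L} u_h, \mathcal{L} v_h)
+ \int_{\partial \Omega}\gamma_{bc} |(\beta \cdot n)_{-}|\, u_h v_h ~\mbox{d}s \\
= (f,v_h) + (\gamma_{GLS} |\beta|^{-1} h f, \mathcal{L} v_h)
+ \int_{\partial \Omega}\gamma_{bc} |(\beta \cdot n)_{-}|\, g_{in} v_h ~\mbox{d}s \ ,
\end{multline*}
where $\mathcal{L}u=f$ and $u\vert_{\partial \Omega^-}=g_{in}$ have been
used to express the right hand side through data.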
It should be noted that for problems where the adjoint solution satisfies $z=0$ the
stabilization in the bulk or on the boundary can be changed to any form satisfying the
assumptions \eqref{adjoint_stability}-\eqref{cont1}. The
consistency requirements are much weaker, since the exact solution is
trivial. The variant where $z_h$ is penalized only on the outflow
boundary can also be shown to be stable using the arguments below
provided that weak boundary
conditions are included also in $a_h(\cdot,\cdot)$. In this case
different weight functions must be used for $u_h$ and $z_h$. The present choice
was motivated mainly by the use of a single exponential weight in all
estimates and that it makes integration
of data assimilation problems straightforward, by changing the
boundary contribution in $s_p(\cdot,\cdot)$.
Below we will consider the methods \eqref{standardFEM} and \eqref{FEM}
one by one, in each case showing that the
assumptions \eqref{galortho1}-\eqref{cont2} are satisfied
for method \eqref{standardFEM} as well as
\eqref{galortho2}--\eqref{cont1} for method \eqref{FEM}. Clearly some arguments are
very similar between the different methods and full details are given
only for the GLS-method. The conclusion is that all three schemes
satisfy the assumptions necessary for the abstract analysis to
hold. The dependence of the $\epsilon(h)$, $\tilde \epsilon(h)$,
$\breve \epsilon(h)$ and
$c_\eta$ and $\tilde c_\eta$ on the physical parameters
and on $h$ is specified in each case in the proofs. The natural norm for the
analysis is
\[
\|x\|_W = \|x\|_V := \|x \| + \|h^{\frac12} \beta
\cdot \nabla x \|_h + \| |\beta
\cdot n|^{\frac12} x \|_{\partial \Omega},
\]
but to keep down the technical detail we will first prove the results
in the reduced norm,
\begin{equation}\label{red_norm}
\|x\|_W = \|x\|_V := \|x \| + \| |\beta
\cdot n|^{\frac12} x \|_{\partial \Omega}
\end{equation}
and then show how the control of the streamline derivative can be
recovered separately. We also define the continuity norm for all three
methods as
\begin{equation}\label{dualnorm}
\|v\|_+:= \|(|\beta|^{\frac12} h^{-\frac12} + |\sigma_\beta|) v\|+
\||\beta \cdot n|^{\frac12} v\|_{\mathcal{F}},
\end{equation}
where $\sigma_\beta = - \nabla \cdot \beta + \sigma$.
It is straightforward to show that in all cases the approximation
estimate \eqref{approx1} holds with $r = k+{\frac12}$ for any
interpolant in $X_h^k$
with optimal approximation properties.
The error estimate that results from the abstract analysis for the
transport equation may be
written in all cases, for both \eqref{standardFEM} and \eqref{FEM},
\[
\|u - u_h\|_V + \|h^{\frac12} \beta \cdot \nabla (u - u_h)\| +
|u-u_h|_{S_p} \leq C h^{k+\frac12} |u|_{H^{k+1}(\Omega)}.
\]
However the condition \eqref{h_cond_strong} leads to a stronger constraint on the mesh for
the formulation \eqref{standardFEM} than \eqref{h_cond}. We first prove a Lemma, similar to the superapproximation
result of \cite{JP86}, useful in all three cases.
\begin{lemma}\label{interp_lem}
Let $\pi$ be an interpolation operator that satisfies \eqref{approx} and \eqref{discrete_commutator}.
Then there holds
\[
\|e^{-\eta} u_h - \pi e^{-\eta} u_h\|_{V} + \|e^{-\eta} u_h - \pi
e^{-\eta} u_h\|_+ + |e^{-\eta} u_h - \pi e^{-\eta} u_h|_{S_x} \leq
\Pi(h) \|u_h\|_V,
\]
where $x=a,p$ and $ \Pi(h) = C_{\gamma\beta\sigma}
c_{dc,e^{-\eta}} \,h^{\frac12}$. Here $c_{dc,e^{-\eta}}$ refers to the maximum
constant of \eqref{discrete_commutator} for $n=0,1,2$. The result
holds for all the three methods presented above.
\end{lemma}
\begin{proof}
First observe that by inequality \eqref{discrete_commutator} we have
\begin{equation}\label{bas_eu_est}
h^{-\frac12} \|\delta(e^{-\eta} u_h)\| + h^{\frac12}
\|\nabla\delta(e^{-\eta} u_h)\| \leq C \max_{n \in
\{0,1\}}c_{dc,n,e^{-\eta}} h^{\frac12} \|u_h\|,
\end{equation}
recalling that $\delta(e^{-\eta} u_h) := e^{-\eta} u_h - \pi e^{-\eta} u_h$.
Similarly using \eqref{inverse_eq} followed by
\eqref{discrete_commutator} gives
\begin{multline*}
\sum_{K \in \mathcal{T}_h}\|\delta(e^{-\eta} u_h)\|^2_{\partial K}
\leq \sum_{K \in \mathcal{T}_h} c_T^2(
h^{-\frac12} \|\delta(e^{-\eta} u_h)\|_{K} + h^{\frac12}
\|\nabla\delta(e^{-\eta} u_h)\|_{ K} )^2\\ \leq
C^2 \max_{n \in
\{0,1\}}c^2_{dc,n,e^{-\eta}} h \|u_h\|^2.
\end{multline*}
Using these results in the definitions \eqref{red_norm} and \eqref{dualnorm} we obtain
\begin{multline*}
\|\delta(e^{-\eta} u_h)\|_+ + \|\delta(e^{-\eta} u_h)\|_V
\leq
C (\|\beta\|_{L^\infty}^{\frac12} + h^{\frac12} \|\sigma_\beta\|
_{L^\infty}^{\frac12} + h^{\frac12} ) \|h^{-\frac12} \delta(e^{-\eta} u_h)\|\\+\| |\beta \cdot n|^{\frac12}\delta(e^{-\eta} u_h)\|_{\mathcal{F}} \leq C_{\beta\sigma} \max_{n \in
\{0,1\}} c_{dc,n,e^{-\eta}} h^{\frac12} \|u_h\|.
\end{multline*}
For the stabilization norm we first consider the boundary term and
the three methods separately.
For the boundary terms we observe that
\[
s_{bc,\pm}(\delta(e^{-\eta} u_h),\delta(e^{-\eta} u_h))^{\frac12} \leq
\gamma_{bc}^{\frac12}\|\delta(e^{-\eta} u_h)\|_V \leq C_{\gamma\beta\sigma} \max_{n \in
\{0,1\}} c_{dc,n,e^{-\eta}} h^{\frac12} \|u_h\|.
\]
Then note that for the GLS method
\begin{multline*}
s_{p,GLS}(\delta(e^{-\eta} u_h),\delta(e^{-\eta} u_h))^{\frac12} \leq \gamma_{GLS}^{\frac12} h^{\frac12} (
\|\beta\|^{\frac12}_{L^\infty}\|\nabla \delta(e^{-\eta} u_h)\|_h + \|\sigma\|_{L^\infty} \|\delta(e^{-\eta} u_h)\| )\\
\leq C_{\gamma\beta\sigma} \max_{n \in
\{0,1\}} c_{dc,n,e^{-\eta}} h^{\frac12} \|u_h\|
\end{multline*}
and similarly
$
s_{a,GLS}(\delta(e^{-\eta} u_h),\delta(e^{-\eta} u_h))^{\frac12} \leq C_{\gamma\beta\sigma_{\beta}} \max_{n \in
\{0,1\}} c_{dc,n,e^{-\eta}} h^{\frac12} \|u_h\|.
$
For the CIP-method we use element-wise trace inequalities followed by
\eqref{discrete_commutator}, with $n=1$ and $n=2$,
\begin{multline*}
s_{CIP}(\delta(e^{-\eta} u_h),\delta(e^{-\eta} u_h))^{\frac12} \\
\leq \gamma_{CIP}^{\frac12} c_T h^{\frac12}
\|\beta\|_{L^\infty} (\sum_{K \in \mathcal{T}_h} (\|\nabla
\delta(e^{-\eta} u_h)\|_K^2 + h^2 \|D^2 \delta(e^{-\eta} u_h)\|_K^2
))^{\frac12} \\
\leq \gamma_{CIP}^{\frac12} c_T h^{\frac12}
\|\beta\|_{L^\infty} (c_{dc,1,e^{-\eta}} + c_{dc,2,e^{-\eta}}) \|u_h\|.
\end{multline*}
Finally for the DG-method, we simply observe that
\[
s_{DG}(\delta(e^{-\eta} u_h),\delta(e^{-\eta} u_h))^{\frac12} \leq C \gamma_{DG}\|\delta(e^{-\eta} u_h)\|_+.
\]
\end{proof}
\subsection{Galerkin-least-squares stabilization}\label{subsec:GLS}
We assume that $$V_h = X_h^k \cap H^1(\Omega),\quad W_h=V_h.$$
Let $\pi_V,\, \pi_W$ be defined by the Lagrange
interpolator $i_h$. It follows from the construction of the
stabilization operator and \eqref{consist1} that \eqref{galortho1}
and \eqref{galortho2} hold (recalling that $z\equiv 0$). It is also
straightforward to show that \eqref{approx1} holds with $r=k+\frac12$.
We collect the proof of the remaining assumptions of Proposition \ref{standard_conv} and Theorem
\ref{stab_conv} in two propositions.
\begin{proposition}(Satisfaction of assumptions for \eqref{standardFEM} with GLS)\label{assumpGLS}
Let the bilinear forms of \eqref{standardFEM} be defined by
\eqref{discrete_bilin} and \eqref{GLS_prim} with $\gamma_{bc}\ge 1$. Then \eqref{strong_bound_va}--
\eqref{cont2} are
satisfied, with $\epsilon(h) = C_{\gamma\beta\sigma\eta} h^{\frac12}$.
\end{proposition}
\begin{proof}
To show \eqref{strong_bound_va} we take $v_a := \pi_V (e^{-\eta} u_h)$ with
$\eta$ defined by \eqref{eq:stab_func} and use the first inequality of Lemma \ref{basic_stability} to obtain
\begin{multline}\label{bas_stab_GLS}
a_h(u_h,\pi_V (e^{-\eta} u_h)) = a_h(u_h, e^{-\eta} u_h) - a_h(u_h,\delta (e^{-\eta} u_h))
\\ \ge -\gamma_{GLS}^{-\frac12} |u_h|_{S_p}
\|\delta (e^{-\eta} u_h)\|_+
\\+ \frac12 \int_{\partial \Omega} (\beta \cdot n) u_h^2
e^{-\eta} ~\mbox{d}s+
\frac12 \|u_h e^{-\frac{\eta}{2}} \|^2.
\end{multline}
Using Lemma \ref{interp_lem} we have
\begin{multline}\label{a_stab_part}
\frac12 \|u_h e^{-{\frac{\eta}{2}}} \|^2 + \frac12 \int_{\partial \Omega} (\beta \cdot n)_+ u_h^2
e^{-\eta} ~\mbox{d}s \leq a_h(u_h,\pi_V (e^{-\eta} u_h)) \\
- \frac12 \int_{\partial \Omega} (\beta \cdot n)_- u_h^2 e^{-\eta}
+\frac12 \gamma_{GLS}^{-\frac12} \Pi(h) (|u_h|^2_{S_p} + \|u_h\|^2_V).
\end{multline}
We need a similar bound for the stabilization operator using the function
$v_a(u_h)$. This is straightforward observing that
\begin{multline*}
s_{p}(u_h,v_a(u_h)) = (\mathcal{L} u_h
,\gamma_{GLS} |\beta|^{-1} h \mathcal{L} (u_h e^{-\eta})) + s_{bc,-}(u_h, e^{-\eta} u_h) \\-
(\mathcal{L} u_h , \gamma_{GLS} |\beta|^{-1} h (\mathcal{L} \delta (e^{-\eta} u_h))
- s_{bc,-}(u_h, \delta (e^{-\eta} u_h)))
\\
\ge \| (\gamma_{GLS} h |\beta|^{-1})^{\frac12} \mathcal{L} u_h
e^{-\frac{\eta}{2}}\|^2+ \gamma_{bc} \|| (\beta \cdot n)_-|^{\frac12} u_h
e^{-\frac{\eta}{2}}\|^2_{\partial \Omega} \\ - |u_h|_{S_p}\left(| \delta (e^{-\eta} u_h)|_{S_p}+ \| (\gamma_{GLS} h |\beta|^{-1})^{\frac12}
(\mathcal{L} e^{-\frac{\eta}{2}} )u_h\|\right).
\end{multline*}
Combining this result with \eqref{a_stab_part} and using
Lemma \ref{interp_lem}, it follows that for $\gamma_{bc}$ large enough
\begin{multline}\label{a_s_tot}
\frac12 \inf_{x \in \Omega} e^{-\eta} (\|u_h\|_V^2 + |u_h|^2_{S_p}) \leq a_h(u_h,\pi_V
(e^{-\eta} u_h))
+ s_{p,GLS}(u_h,\pi_V (e^{-\eta} u_h)) \\
+(C_{\gamma}\Pi(h) + (\gamma_{GLS} h |\beta|^{-1})^{\frac12}
\sup_{x \in \Omega} |\mathcal{L} e^{-\frac{\eta}{2}} |) (|u_h|^2_{S_p} + \|u_h\|^2_V).
\end{multline}
We conclude that \eqref{strong_bound_va} holds with $c_s = \tfrac12
\inf_{x \in \Omega} e^{-\eta}$
and
\begin{equation*}
\epsilon(h)= (C_{\gamma}\Pi(h) + (\gamma_{GLS} h |\beta|^{-1})^{\frac12}
\sup_{x \in \Omega} |\mathcal{L} e^{-\frac{\eta}{2}} |) \sim
C_{\gamma\beta\sigma\eta} h^{\frac12}.
\end{equation*}
Considering now \eqref{stab_v_bound3} we have
\begin{equation}\label{1st_arg_stab_v}
\|v_{a}(u_h)\|_V \leq \|e^{-\eta} u_h\|_V + \|\delta(e^{-\eta} u_h) \|_V \leq (\sup_{x \in \Omega} e^{-\eta} + \Pi(h) ) \|u_h\|_V
\end{equation}
and for the stabilization part,
\begin{multline}\label{1st_arg_stab_s}
|v_{a}(u_h)|_{S_p} \leq \sup_{x \in \Omega} |\mathcal{L} e^{-\frac{\eta}{2}} | h^{\frac12} C_{\gamma}
\|u_h\|_V +\sup_{x \in \Omega} e^{-\eta}|u_h|_{S_p} + |\delta(e^{-\eta} u_h)|_{S_p} \\
\leq (\sup_{x \in \Omega} e^{-\eta} h^{\frac12}
C_{\gamma\beta\sigma\eta}+\Pi(h))\|u_h\|_V + \sup_{x \in \Omega} e^{-\eta} |u_h|_{S_p}.
\end{multline}
It follows that \eqref{stab_v_bound3} holds for any $$c_\eta \ge \max(\sup_{x \in \Omega} |\mathcal{L} e^{-\frac{\eta}{2}} | h^{\frac12}
C_{\gamma\beta\sigma}, \sup_{x \in \Omega} e^{-\eta} +\Pi(h))
\sim C_{\gamma\beta\sigma\eta} h^{\frac12} + \sup_{x \in \Omega} e^{-\eta}.$$
For the continuity \eqref{cont2} we first use integration by parts
and the Cauchy--Schwarz inequality to obtain
\begin{multline}\label{bas_continuity}
a_h(v - \pi_V v, x_h) = (v - \pi_V v, \mathcal{L}^* x_h)+
\int_{\partial \Omega} (\beta \cdot n)( v - \pi_V v) x_h ~\mbox{d}s\\
\leq
\|v - \pi_V v\|_+ (\|(|\beta|^{-1} h)^{\frac12}
\mathcal{L}^* x_h\| + \|x_h\|_V).
\end{multline}
To conclude we need to bound the norm of the adjoint operator in
the right-hand side by the stabilization of the primal
operator. Observe that for all $x_h \in V_h$ there holds
\begin{equation}\label{LstaraboundbySp}
\|(|\beta|^{-1} h)^{\frac12}
\mathcal{L}^* x_h\|\leq |x_h|_{S_p} + C_{\gamma\beta}( h^{\frac12}
(2 \|\sigma\|_{L^\infty(\Omega)} + \|\nabla \cdot \beta
\|_{L^\infty(\Omega)})) \|x_h\|_V.
\end{equation}
Collecting the results of \eqref{bas_continuity} and
\eqref{LstaraboundbySp} we see that $c_a \ge 1+ C_{\gamma\beta\sigma} h^{\frac12}$.
\end{proof}
\begin{proposition}(Satisfaction of the assumptions for \eqref{FEM}
with GLS)\label{assumpGLS_new}
Let the bilinear forms of \eqref{FEM} be defined by
\eqref{discrete_bilin}, \eqref{GLS_prim} and
\eqref{GLS_adjoint}. Then the
inequalities \eqref{galortho2}--\eqref{cont1} hold with $\tilde \epsilon(h) = 0
$.
\end{proposition}
\begin{proof}
Starting from the inequality \eqref{bas_stab_GLS} with $v_a(u_h) := \pi_W(e^{-\eta} u_h)$ we immediately get,
\begin{multline*}
\frac12 \inf_{x\in \Omega} e^{-\eta} \|u_h\|^2_V \leq a_h(u_h,\pi_W (e^{-\eta} u_h)) +
\gamma_{GLS}^{-\frac12} \Pi(h)
|u_h|_{S_p} \|u_h\|_V \\ + \sup_{x\in \Omega} e^{-\eta} \gamma_{bc}^{-1}|u_h|_{S_p}^2
\end{multline*}
from which we deduce, using $(\inf_{x\in \Omega} e^{-\eta} )^{-1} = \sup_{x\in \Omega} e^{\eta} $,
\begin{equation*}
\frac14 \inf_{x\in \Omega} e^{-\eta} \|u_h\|^2_V \leq a_h(u_h,\pi_W (e^{-\eta} u_h)) +(\sup_{x\in \Omega} e^{\eta} \gamma_{GLS}^{-1} \Pi(h)^2+\sup_{x\in \Omega} e^{-\eta}\gamma_{bc}^{-1} )|u_h|_{S_p}^2
\end{equation*}
which is the required inequality with $\tilde
\epsilon(h)=0$, $\tilde c_s = \tfrac14 \inf_{x\in \Omega} e^{-\eta}$ and $$\tilde c_\eta \ge \sup_{x\in \Omega} e^{\eta}
\gamma_{GLS}^{-1} \Pi(h)^2+\sup_{x\in \Omega} e^{-\eta}\gamma_{bc}^{-1}.$$
In a similar fashion we may show that
\eqref{adjoint_stability}
holds, also with the weight $e^{-\eta}$, and corresponding test
function $v_{a*}(z_h) = -\pi_V (e^{-\eta} z_h)$. First observe that in
this case using Lemma \ref{basic_stability} (second equation),
\begin{multline*}
\frac12 \inf_{x\in \Omega} e^{-\eta} \|z_h\|^2 - \frac12 \int_{\partial \Omega}
(\beta \cdot n) z_h^2 e^{-\eta} ~\mbox{d}s \leq -a_h(e^{-\eta}
z_h,z_h) \\
= a_h(-\pi_V (e^{-\eta} z_h),z_h) - a_h(\delta (e^{-\eta} z_h),z_h).
\end{multline*}
For the second term in the right hand side we have after integration
by parts and application of Lemma \ref{interp_lem}
\begin{multline*}
a_h(\delta (e^{-\eta} z_h),z_h) = \int_{\partial \Omega} (\beta \cdot
n) \delta (e^{-\eta} z_h) z_h ~\mbox{d}s + (\delta (e^{-\eta} z_h),
\mathcal{L^*} z_h) \\
\leq C \|\delta (e^{-\eta} z_h)\|_+ |z_h|_{S_a} \leq C \Pi(h)
\|z_h\|_V |z_h|_{S_a}.
\end{multline*}
Here we used that the boundary penalty on $z_h$ is active on the
whole boundary. We may then conclude as before that
\[
\frac14 \inf_{x\in \Omega} e^{-\eta} \|z_h\|^2_W \leq a_h(-\pi_V (e^{-\eta} z_h),z_h)
+(\sup_{x\in \Omega} e^{\eta} C_\gamma \Pi(h)^2+\sup_{x\in \Omega} e^{-\eta} \gamma_{bc}^{-1}) |z_h|_{S_a}^2
\]
with similar constants as before.
The inequalities of \eqref{stab_v_bound2} and \eqref{stab_v_bound1} follow by arguments similar to
\eqref{1st_arg_stab_v} and \eqref{1st_arg_stab_s}. The only differences occur in the right inequalities:
\begin{multline*}
|v_{a}(u_h)|_{S_a} \leq h^{\frac12} \gamma_{GLS}^{\frac12} \sup_{x \in \Omega} \mathcal{L^*} e^{-\eta}
\|u_h\|_V + \sup_{x \in \Omega} e^{-\eta} |u_h|_{S_a} + |\delta(e^{-\eta} u_h)|_{S_a} \\
\leq (C_{\gamma\beta\sigma\eta} h^{\frac12} \sup_{x \in \Omega}
e^{-\eta} + \Pi(h)) \|u_h\|_V +\sup_{x \in \Omega}
e^{-\eta}
|u_h|_{S_a}.
\end{multline*}
We then use an inequality similar to \eqref{LstaraboundbySp}, but this time
adding
the boundary penalty term that is included in the stabilization of
formulation \eqref{FEM} (see \eqref{stab_ops_bound}):
\begin{equation}\label{SaboundbySp}
|u_h|_{S_a} \leq |u_h|_{S_p} + C_{\beta\gamma}h^{\frac12}
(2 \|\sigma\|_{L^\infty(\Omega)} + \|\nabla \cdot \beta
\|_{L^\infty(\Omega)})\|u_h\|_V + \gamma_{bc}^{\frac12} \||\beta
\cdot n|^{\frac12} u_h\|_{\partial \Omega}.
\end{equation}
Note that the boundary contribution can not be controlled by
$|u_h|_{S_p}$ as one would like, but must be controlled using the
$V$-norm. This
adds an $O(\gamma_{bc}^{\frac12})$ contribution to the constant in front of $\|u_h\|_V$.
\begin{equation}\label{SaboundbySpfinal}
|u_h|_{S_a} \leq |u_h|_{S_p} + \left(\gamma_{bc}^{\frac12} + C_{\beta\gamma}h^{\frac12}
(2 \|\sigma\|_{L^\infty(\Omega)} + \|\nabla \cdot \beta
\|_{L^\infty(\Omega)})\right)\|u_h\|_V.
\end{equation}
The proof of \eqref{stab_v_bound1} is similar, but here the stronger
adjoint boundary penalty can control the boundary term, leading to
\[
|z_h|_{S_p} \leq |z_h|_{S_a} + C_{\beta\gamma}h^{\frac12}
(2 \|\sigma\|_{L^\infty(\Omega)} + \|\nabla \cdot \beta
\|_{L^\infty(\Omega)})\|z_h\|_W.
\]
We conclude that the inequalities \eqref{stab_v_bound2} and
\eqref{stab_v_bound1} hold with
$$\tilde c_\eta \ge \sup_{x \in \Omega}
e^{-\eta} + \Pi(h) \mbox{ and }\breve \epsilon(h) \ge C_{\beta\sigma\gamma\eta}
h^{\frac12} + \sup_{x \in \Omega}
e^{-\eta}\gamma_{bc}^{\frac12}.
$$
The continuity \eqref{cont1} is
immediate by
integration by parts and the Cauchy--Schwarz inequality,
\begin{align*}
a_h(v - \pi_V v, x_h) &= (v - \pi_V v, \mathcal{L}^* x_h)+
\int_{\partial \Omega} (\beta \cdot n)( v - \pi_V v) x_h ~\mbox{d}s\\
&\leq C_\gamma
\|v - \pi_V v\|_+ (|x_h|_{S_a} + \|x_h\|_W).
\end{align*}
\end{proof}
\begin{remark}
Note that for the GLS-method $\tilde \epsilon(h)=0$ in
\eqref{forward_stability} and \eqref{adjoint_stability}, indicating
that the scheme is unconditionally stable. This follows from the fact that the
whole residual is considered in the stabilization term. This nice
feature however only holds under exact quadrature. When the integrals are
approximated, the quadrature error once again gives rise to
oscillation terms from data that introduce a non-zero
contribution to $\tilde \epsilon(h)$.
\end{remark}
\subsection{Continuous interior penalty}\label{subsec:CIP}
In this case also $W_h = V_h:= X_h^k \cap H^1(\Omega)$, but the stabilization added to the standard Galerkin
formulation is a penalty on the jump of the gradient over element
faces \cite{DD76,BH04}.
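The precise form of the penalty is given by \eqref{jumpstab1} and is not repeated
here. Purely to make the idea concrete, the following one-dimensional sketch (our
own simplification, not the operator used in the computations below) penalizes the
jumps of the discrete derivative at interior nodes with the illustrative scaling
$\gamma\,|\beta|\,h_F^2\,[\partial_x u_h]^2$:
\begin{verbatim}
import numpy as np

def cip_penalty_1d(x, u, beta, gamma=0.01):
    # Sketch of a gradient-jump penalty for piecewise affine u_h on a 1D
    # mesh with nodes x: sum over interior nodes of
    #   gamma * |beta| * h_F^2 * [u_h']^2   (illustrative scaling only).
    h = np.diff(x)                    # element lengths
    grad = np.diff(u) / h             # element-wise constant derivatives
    jumps = np.diff(grad)             # derivative jumps at interior nodes
    h_f = 0.5 * (h[:-1] + h[1:])      # local length scale at each node
    return gamma * abs(beta) * np.sum(h_f**2 * jumps**2)

x = np.linspace(0.0, 1.0, 9)
print(cip_penalty_1d(x, 2.0 * x + 1.0, beta=1.0))      # ~0 for a globally affine u_h
print(cip_penalty_1d(x, np.sin(np.pi * x), beta=1.0))  # > 0 once the gradient jumps
\end{verbatim}
In particular the penalty vanishes whenever the discrete derivative has no jumps,
which is what makes this type of stabilization consistent for smooth exact
solutions.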
The key observation is that the following discrete approximation
result holds for $\gamma_{CIP}$ large enough (see \cite{Bu05, BFH06})
\begin{equation}\label{oswald_int}
\|h^{\frac12} |\beta_h|^{-\frac12}(\beta_h \cdot \nabla u_h - I_{os} \beta_h \cdot \nabla
u_h)\|^2 \leq s_{CIP}(u_h,u_h).
\end{equation}
Here $\beta_h$ is some piecewise affine interpolant of the velocity
vector field $\beta$ and $I_{os}$ is the quasi-interpolation operator defined at each
node of the mesh as the arithmetic average of the function values from
the elements sharing that node,
\[
(I_{os} \beta_h \cdot \nabla u_h)(x_i) = N_i^{-1} \sum_{\{K : x_i \in K\}} (\beta_h \cdot \nabla u_h)(x_i)\vert_K,
\]
with $N_i := \mbox{card} \{K : x_i \in K\}$.
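The nodal averaging defining $I_{os}$ is straightforward to implement. The sketch
below is a simplified version that assumes one value of the averaged quantity per
element (as for $\nabla u_h$ with piecewise affine $u_h$ and an element-wise
constant $\beta_h$); it is only meant to make the definition concrete:
\begin{verbatim}
import numpy as np

def oswald_average(elem_nodes, elem_values):
    # (I_os v)(x_i) = (1/N_i) * sum over {K : x_i in K} of v|_K(x_i),
    # with N_i = card{K : x_i in K}; here v|_K is element-wise constant.
    n_nodes = int(elem_nodes.max()) + 1
    sums = np.zeros(n_nodes)
    counts = np.zeros(n_nodes)
    for K, nodes in enumerate(elem_nodes):
        for i in nodes:
            sums[i] += elem_values[K]
            counts[i] += 1.0
    return sums / counts

# four triangles sharing node 0 on a toy mesh
elems = np.array([[0, 1, 2], [0, 2, 3], [0, 3, 4], [0, 4, 1]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(oswald_average(elems, vals))   # node 0 receives (1+2+3+4)/4 = 2.5
\end{verbatim}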
Stability is then a
consequence of the following lemma:
\begin{lemma}\label{stab_cont}
The following inequalities hold.
\begin{equation}\label{discrete_interp}
\inf_{v_h \in V_h} \|h^{\frac12}(\mathcal{L} u_h - v_h)\| \leq C_{\gamma\beta}
s_{CIP}(u_h,u_h)^{\frac12} + \epsilon_{CIP}(h) \|u_h\|
\end{equation}
and
\begin{equation}\label{disc_adjoint}
\inf_{w_h \in W_h} \|h^{\frac12}(\mathcal{L} ^*z_h - w_h)\| \leq C_{\gamma\beta}
s_{CIP}(z_h,z_h)^{\frac12} + \epsilon_{CIP}(h) \|z_h\|,
\end{equation}
with
$
\epsilon_{CIP}(h) \sim h^{\frac32} (\|\beta\|_{W^{2,\infty}(\Omega)} + c_{dc,0,\sigma}).
$
\end{lemma}
\begin{proof}
Since the proofs of the two results are similar we only detail the
arguments for \eqref{discrete_interp}.
First note that
\begin{multline*}
\inf_{v_h \in V_h} \|h^{\frac12} (\mathcal{L} u_h - v_h)\| \leq
\|h^{\frac12} (i_h \beta \cdot
\nabla u_h - I_{os}( i_h \beta \cdot
\nabla u_h)) \| \\
+ h^\frac12 \|\beta -i_h\beta\|_{L^\infty(\Omega)} \|\nabla
u_h\| + h^\frac12\|\sigma u_h - i_h (\sigma u_h)\|.
\end{multline*}
Using \eqref{oswald_int}, interpolation in $L^\infty$, an inverse inequality and the
discrete commutator property \eqref{discrete_commutator} we conclude
\begin{multline*}
\inf_{v_h \in V_h} \|h^{\frac12} (\mathcal{L} u_h - v_h)\| \leq
C_{\beta\gamma} s_{CIP}(u_h,u_h)^{\frac12}\\
+ h^{\frac32} (\|\beta\|_{W^{2,\infty}(\Omega)} + c_{dc,0,\sigma}) \|u_h\|.
\end{multline*}
\end{proof}
For the CIP-method we choose $\pi_V$ and $\pi_W$ as the $L^2$-projection in order to exploit
orthogonality to ``filter'' the element residual. Observe that if
$u\in H^{\frac32+\varepsilon}(\Omega)$, $\varepsilon>0$, then
$s_{CIP}(u,\cdot)=0$. The consistencies \eqref{galortho1}
and \eqref{galortho2} follow from the consistency of
\eqref{discrete_bilin}.
The approximation result \eqref{approx1}, with $r=k+\tfrac12$, is a consequence of
standard results for the CIP-method (see for instance \cite{BFH06}).
We now prove that the remaining assumptions for Proposition \ref{standard_conv} and Theorem
\ref{stab_conv} hold.
\begin{proposition}(Satisfaction of assumptions for
\eqref{standardFEM} with CIP)\label{assumpCIP}
Let the bilinear forms of \eqref{standardFEM} be defined by
\eqref{discrete_bilin} and \eqref{jumpstab1}. Let $\gamma_{bc}\ge1$. Then \eqref{galortho1}--
\eqref{cont2} are
satisfied, with $\epsilon(h) \sim h^{\frac12}$.
\end{proposition}
\begin{proof}
To prove the stability \eqref{strong_bound_va}, take $v_a =
\pi_V(e^{-\eta} u_h)$ and use Lemma \ref{basic_stability}, the
orthogonality of the $L^2$-projection and Lemma
\ref{stab_cont} to obtain
\begin{multline}\label{stand_stab_CIP}
a_h(u_h,\pi_V (e^{-\eta} u_h)) = a_h(u_h, e^{-\eta} u_h)
-(\mathcal{L} u_h - w_h,\delta (e^{-\eta} u_h)) \\ \ge -C_{\gamma} |u_h|_{S_p} \|\delta (e^{-\eta} u_h)\|_+- \epsilon_{CIP}(h) \|u_h\| \|h^{-\frac12}\delta (e^{-\eta} u_h)\|\\+ \frac12 \int_{\partial \Omega} (\beta \cdot n) u_h^2
e^{-\eta} ~\mbox{d}s+
\frac12 \|u_h e^{-\frac{\eta}{2}} \|^2.
\end{multline}
We also observe that for the stabilization
\begin{equation}\label{s_stab_CIP}
s_p(u_h,\pi_V (e^{-\eta} u_h)) \ge s_p(u_h,u_h e^{-\eta} ) -
|u_h|_{S_p} |\delta (e^{-\eta} u_h) |_{S_p}.
\end{equation}
Now observe that, since the jump of $\nabla e^{-\eta}$ is zero we
have, using \eqref{stand_stab_CIP} and \eqref{s_stab_CIP}
\begin{multline*}
\frac12 \inf_{x \in \Omega} e^{-\eta} (\|u_h\|_V^2 + |u_h|^2_{S_p})
\leq \frac12 \|u_h e^{-\frac{\eta}{2}} \|^2 + \frac12 \int_{\partial
\Omega} (\beta \cdot n) u_h^2 e^{-\eta} ~\mbox{d}s+ s_p(u_h,u_h
e^{-\eta} )\\
\leq a_h(u_h,\pi_V (e^{-\eta} u_h)) + s_p(u_h,\pi_V (e^{-\eta} u_h))
\\ + |u_h|_{S_p} (C_{\gamma} \|\delta
(e^{-\eta} u_h)\|_++|\delta (e^{-\eta} u_h) |_{S_p})+ \epsilon_{CIP}(h) \|u_h\| \|h^{-\frac12}\delta
(e^{-\eta} u_h)\|.
\end{multline*}
Using Lemma \ref{interp_lem} we deduce that
\eqref{strong_bound_va} holds with
$$c_s = \frac12 \inf_{x \in \Omega}
e^{-\eta} \mbox{ and }
\epsilon(h) \ge \Pi(h) (C_{\gamma} + \epsilon_{CIP}(h)).
$$
For \eqref{stab_v_bound3} only the stabilization part differs from the
GLS case. Since the
jump of $\nabla e^{-\eta}$ is zero we immediately get
\[
|v_{a}(u_h)|_{S_p} \leq \sup_{x \in \Omega} e^{-\eta} |u_h|_{S_p} +
|\delta(e^{-\eta} u_h) |_{S_p} \leq \sup_{x \in \Omega} e^{-\eta}
|u_h|_{S_p} + \Pi(h) \|u_h\|_V
\]
and hence $c_\eta \ge \sup_{x \in \Omega} e^{-\eta}+\Pi(h)$.
The continuity \eqref{cont2} follows by observing that by
\eqref{stab_cont} there holds
\begin{multline} \label{CIPcont}
a_h(v- \pi_V v, x_h) =\inf_{w_h \in V_h} (v - \pi_V v, \mathcal{L}^* x_h - w_h) +
\int_{\partial \Omega} (\beta \cdot n) (v- \pi_V v) x_h ~\mbox{d}s
\\
\leq \|v - \pi_V v\|_+(C_\gamma |x_h|_{S_p} + (C_\beta h^{\frac12}
\epsilon_{CIP}(h) + 1)\|x_h\|_V ).
\end{multline}
Here we observe that the boundary part must be controlled using the norm
$\|\cdot\|_V$.
\end{proof}
\begin{proposition}(Satisfaction of assumptions for
\eqref{FEM} with CIP)\label{assumpCIP_new}
Let the bilinear forms of \eqref{FEM} be defined by
\eqref{discrete_bilin} and \eqref{jumpstab1} for both
$s_p(\cdot,\cdot)$ and $s_a(\cdot,\cdot)$, together with the respective
boundary penalty terms of \eqref{stab_ops_bound}. Then the inequalities \eqref{galortho2}--\eqref{cont1} hold with $\tilde \epsilon(h)
\sim h^2$.
\end{proposition}
\begin{proof}
Starting from \eqref{stand_stab_CIP} with $v_a(u_h) := \pi_W(e^{-\eta} u_h)$ we have using Lemma \ref{interp_lem},
\begin{multline}\label{new_stab_CIP}
\frac12 \|u_h e^{-\frac{\eta}{2}} \|^2 + \frac12 \int_{\partial \Omega} |\beta \cdot n| u_h^2
e^{-\eta} ~\mbox{d}s \leq a_h(u_h,\pi_W (e^{-\eta} u_h)) - \int_{\partial \Omega} (\beta \cdot n)_- u_h^2
e^{-\eta} ~\mbox{d}s \\
+ (C_{\gamma} |u_h|_{S_p}+ \epsilon_{CIP}(h) \|u_h\| ) \Pi(h) \|u_h\|
\\
\leq a_h(u_h,\pi_W (e^{-\eta} u_h)) + (\gamma_{bc}^{-\frac12} \sup_{x
\in \Omega} e^{-\eta} + C^2_{\gamma} \sup_{x
\in \Omega} e^{\eta}\Pi(h)^2) |u_h|_{S_p} \\+ \left(\frac14 \inf_{x \in \Omega}
e^{-\eta} + \epsilon_{CIP}(h) \Pi(h)\right) \|u_h\|^2.
\end{multline}
The last inequality is due to an arithmetic-geometric inequality.
Hence we see that \eqref{forward_stability} holds with
$\tilde \epsilon(h)= \epsilon_{CIP}(h) \Pi(h) \sim h^{2}$ and $$\tilde
c_s = \frac14 \inf_{x \in \Omega} e^{-\eta},\quad \tilde c_\eta \ge C^2_\gamma \sup_{x
\in \Omega} e^{\eta} \Pi(h)^2 + \gamma_{bc}^{-1} \sup_{x
\in \Omega} e^{-\eta}.
$$
The inequality \eqref{adjoint_stability} is proved
similarly as in the GLS case, taking this time $v_{a*}(z_h) := -\pi_V
(e^{-\eta} z_h)$, with $\pi_V$ the $L^2$-projection and
using the second inequality of Lemma \ref{basic_stability} and Lemma
\ref{interp_lem} after
integration by parts.
\begin{multline*}
a_h(\delta (e^{-\eta} z_h),z_h) = \int_{\partial \Omega} (\beta \cdot
n) \delta (e^{-\eta} z_h) z_h ~\mbox{d}s + \inf_{w_h \in W_h} (\delta (e^{-\eta} z_h),
\mathcal{L^*} z_h - w_h) \\
\leq C \|\delta (e^{-\eta} z_h)\|_+ (|z_h|_{S_a} + \epsilon_{CIP}(h)
\|z_h\|) \leq C \Pi(h)
\|z_h\| (|z_h|_{S_a} + \epsilon_{CIP}(h) \|z_h\|).
\end{multline*}
Then we conclude as before.
For the stabilities \eqref{stab_v_bound2} and \eqref{stab_v_bound1} we
proceed as in Proposition \ref{assumpGLS} and we only detail
the second inequality of \eqref{stab_v_bound2}. When using the CIP-method the primal and adjoint stabilization terms only
differ in the boundary contributions; therefore, by symmetry, the second
inequality of \eqref{stab_v_bound1} follows identically.
Since the jump of $\nabla e^{-\eta}$ is zero we get
\begin{equation}
|v_{a}(u_h)|_{S_a} \leq \sup_{x \in \Omega} e^{-\eta} |u_h|_{S_p} +
|\delta(e^{-\eta} u_h)|_{S_p}+
\gamma_{bc}^{\frac12}\| |\beta \cdot n|^{\frac12} v_{a}(u_h)\|_{\partial \Omega}.
\end{equation}
The boundary penalty term is bounded by adding and
subtracting $e^{-\eta} u_h$, using a triangle inequality and the arguments of Lemma \ref{interp_lem}, to obtain
\begin{multline}\label{critical_step}
\gamma_{bc}^{\frac12}\| |\beta \cdot n|^{\frac12} v_{a}(u_h)\|_{\partial \Omega} \leq \Pi(h) \|u_h\|_V + \gamma^{\frac12}_{bc}\||\beta|_+^{\frac12} u_h
e^{-{\eta}}\|_{\partial \Omega} \\
\leq (\Pi(h) + \gamma_{bc}^{\frac12}\sup_{x \in
\Omega} e^{-\eta}) \|u_h\|_V.
\end{multline}
Therefore \eqref{stab_v_bound2} and \eqref{stab_v_bound1} hold with
\begin{equation}\label{constants}
\tilde c_\eta \ge \sup_{x \in \Omega} e^{-\eta} + \Pi(h)\mbox{ and }\breve
\epsilon(h) \ge
\gamma_{bc}^{\frac12} \sup_{x \in \Omega}
e^{-\eta} + 2 \Pi(h).
\end{equation}
The proof of continuity \eqref{cont1} follows as in \eqref{CIPcont}.
\end{proof}
\subsection{The discontinuous Galerkin method}\label{subsec:DG}
In the case where discontinuous elements are used, i.e. $V_h = W_h :=
X_h^k$, the analysis is simplified by the fact that $\beta_h
\cdot \nabla u_h\in V_h$. Here we let $\pi_V$ and $\pi_W$ denote the element-wise
$L^2$-projection onto $X_h^k$. The analysis is essentially the same as
for the CIP-method and when appropriate we will refer to the
previous analysis. Thanks to the local
character of the DG method the results hold without assuming any
quasi-regularity of the meshes.
The consistency results \eqref{galortho1}
and \eqref{galortho2} are standard as well as the approximation result
\eqref{approx1}, with $r=k+\tfrac12$ (see \cite{EG06}). As before we collect the proofs of the remaining assumptions
in a proposition.
\begin{proposition}(Satisfaction of assumptions for
\eqref{standardFEM} with DG)\label{DGassump}
Let the bilinear forms of \eqref{standardFEM} be defined by
\eqref{discrete_bilin} and \eqref{DGstab}. Then \eqref{galortho1}--\eqref{cont2} are
satisfied with $\epsilon(h) \sim h^{\frac12}$.
\end{proposition}
\begin{proof}
Let $i_h \beta \in X_h^1$ be the Lagrange interpolant of $\beta$ and $\pi_0 \sigma \in
X_h^0$ the projection of $\sigma$ onto piecewise constant functions.
For \eqref{strong_bound_va} take $v_a := \pi_V (
e^{-\eta} u_h)$, use $L^2$-orthogonality and apply Lemma
\ref{basic_stability} to obtain for $\gamma_{bc}$ large enough,
\begin{multline}\label{bas_stab_DG}
a_h(u_h,\pi_V (e^{-\eta} u_h)) + s_p(u_h,\pi_V (e^{-\eta} u_h))= a_h(u_h, e^{-\eta} u_h) + s_p(u_h,e^{-\eta} u_h)\\- a_h(u_h,\delta (e^{-\eta} u_h))- s_p(u_h,\delta(e^{-\eta} u_h))
\ge ((i_h \beta - \beta) \cdot \nabla u_h + (\pi_0 \sigma - \sigma)
u_h, \delta (e^{-\eta} u_h))\\
-2 \sum_{K \in \mathcal{T}_h} \left<|\beta \cdot n| |[u_h]|, (1 +
\gamma_{DG})|\delta (e^{-\eta} u_h)|\right>_{\partial K \setminus \partial \Omega}
+ \frac12 \inf_{x \in \Omega}
e^{-\eta} (\|u_h\|^2_V + |u_h|^2_{S_p})\\
\ge -|u_h|_{S_p} C_{\gamma} \|\delta (e^{-\eta} u_h)\|_+ - \epsilon_{DG}(h) \|u_h\| \|h^{-\frac12}\delta (e^{-\eta} u_h)\|\\
+ \frac12 \inf_{x \in \Omega}
e^{-\eta} (\|u_h\|^2_V + |u_h|^2_{S_p})
\end{multline}
where
$$\epsilon_{DG}(h) = h^{-\frac12} \|i_h \beta -
\beta\|_{L^\infty(\Omega)} + h^{\frac12} \|\pi_0 \sigma -
\sigma\|_{L^\infty(\Omega)} \sim (\|\beta\|_{W^{2,\infty}(\Omega)} +
\|\sigma\|_{W^{1,\infty}(\Omega)}) h^{\frac32}.$$
It follows that \eqref{strong_bound_va} holds with
$c_s = \tfrac12 \inf_{x \in \Omega}
e^{-\eta}$ and $\epsilon(h) \ge (C_\gamma + \epsilon_{DG}(h))
\Pi(h)$. The proof of \eqref{stab_v_bound3} is
analogous with the CIP-case with similar constants.
Considering finally the continuity \eqref{cont2} we have after an
integration by parts
\begin{multline}\label{cont_DG}
a_h(v - \pi_V v, x_h) = (v - \pi_V v, \mathcal{L}^* x_h) + \frac12
\sum_K \left<(\beta \cdot n) \{v - \pi_V v \}, [x_h] \right>_{\partial
K \setminus \partial \Omega}\\
+ \left<(\beta \cdot n) (v - \pi_V v ),x_h \right>_{\partial
\Omega}\\
= (v - \pi_V v, (i_h \beta - \beta) \nabla x_h + (\sigma - \pi_0
\sigma) x_h) + \frac12
\sum_K \left<(\beta \cdot n) \{v - \pi_V v \}, [x_h] \right>_{\partial
K \setminus \partial \Omega}\\
+ \left<(\beta \cdot n) (v - \pi_V v ),x_h \right>_{\partial
\Omega}\\
\leq\|v - \pi_V v\|_+ (C_{\gamma} |x_h|_{S_a} + (C_{\beta} h^{\frac12}
\epsilon_{DG}(h) + 1)\|x_h\|_V).
\end{multline}
\end{proof}
\begin{proposition}
(Satisfaction of assumptions for
\eqref{FEM} with DG)\label{DGassump_new}
Let the bilinear forms of \eqref{FEM} be defined by
\eqref{discrete_bilin} and \eqref{DGstab} for both
$s_p(\cdot,\cdot)$ and $s_a(\cdot,\cdot)$ together with the respective
boundary penalty terms of \eqref{stab_ops_bound}. Then the
inequalities \eqref{galortho2}--\eqref{cont1} hold with $\tilde \epsilon(h) \sim h^2$.
\end{proposition}
\begin{proof}
The stability estimates \eqref{forward_stability} and \eqref{adjoint_stability}
follow by taking $v_a := \pi_W (
e^{-\eta} u_h)$ and $v_{a*} := -\pi_V (
e^{-\eta} z_h)$, using \eqref{bas_stab_DG} and the manipulations of Proposition
\ref{assumpCIP_new}. The proofs of the inequalities \eqref{stab_v_bound2} and
\eqref{stab_v_bound1} use the same techniques as the corresponding
results for the CIP-method and result in similar constants. Finally
\eqref{cont1} follows from \eqref{cont_DG}.
\end{proof}
\subsection{Convergence of the error in the streamline derivative}
As already mentioned the natural norm for the above analysis would
include the $L^2$-norm of the $h^\frac12$-weighted streamline
derivative. Given the results of the previous section it is
straightforward to prove optimal convergence of the streamline
derivative both for \eqref{standardFEM} and \eqref{FEM}.
We only give the result for the
method \eqref{FEM} below. The proof of the result for \eqref{standardFEM} is identical.
\begin{proposition}
Let $u_h,z_h$ be the solution of \eqref{FEM} with bilinear form
\eqref{discrete_bilin} stabilized with one of the methods presented in
Sections \ref{subsec:GLS} -- \ref{subsec:DG}. Assume that the
conditions of Theorem \ref{stab_conv} are satisfied. Then there holds
\[
\|\beta \cdot \nabla (u - u_h)\|_h \leq C_{\eta\beta\sigma\gamma}
h^k |u|_{H^{k+1}(\Omega)}.
\]
\end{proposition}
\begin{proof}
First consider the GLS-method. Add and subtract $\sigma(u - u_h)$ inside
the streamline derivative norm and use a triangle inequality to
obtain, using the previously obtained error estimates,
\[
\|\beta \cdot \nabla (u - u_h)\| \leq C_\gamma \|\beta\|_{L^\infty}^{-\frac12} h^{-\frac12} (|u-u_h|_{S_p} +
\|\sigma\|_{L^\infty(\Omega)} h^{\frac12} \|u-u_h\|) \leq C_{\gamma\beta\sigma} h^k |u|_{H^{k+1}(\Omega)}.
\]
For the CIP method we may write $\xi_h := \pi_V u - u_h$, where
$\pi_V$ is any interpolation operator with optimal approximation properties, and note that
by Galerkin orthogonality, interpolation in $L^\infty$, and inverse inequalities, we have
\begin{multline*}
\|\beta \cdot \nabla (u - u_h)\|^2 = (\beta \cdot \nabla (u -
u_h),\beta_h \cdot \nabla \xi_h -
I_{os} \beta_h \cdot \nabla \xi_h ) -
(\sigma (u - u_h) , I_{os} \beta_h \cdot \nabla \xi_h) \\ - s_a(z_h, I_{os} \beta_h \cdot \nabla \xi_h)\\+ (\beta \cdot
\nabla (u - u_h), (\beta -\beta_h) \cdot \nabla \xi_h) - (\beta \cdot
\nabla (u - u_h), \beta \cdot \nabla (\pi_V u - u))\\
\leq C_\gamma\|\beta \cdot \nabla (u - u_h)\| (h^{-\frac12} |\xi_h|_{S_p}
+ \|\beta\|_{W^{1,\infty}(\Omega)} \|\xi_h\| + \|\beta \cdot \nabla
(u - \pi_V u)\|) \\
+ (C_{\gamma\beta\sigma} h^{-\frac12} |z_h|_{S_a} + \|\sigma\|_{L^\infty(\Omega)} \|u-u_h\|)\| \beta_h \cdot \nabla \xi_h\|.
\end{multline*}
Here we have used the $L^2$-stability of the interpolation operator
$I_{os}$ and the inequality
\[
|s_a(z_h, I_{os} \beta_h \cdot \nabla \xi_h)| \leq |z_h|_{S_a}
C_{\gamma\beta\sigma} h^{-\frac12} \|\beta_h \cdot \nabla \xi_h\|.
\]
Observing that
\[
\| \beta_h \cdot \nabla \xi_h\| \leq C \|\beta\|_{W^{1,\infty}(\Omega)}
\|\xi_h\| + \|\beta \cdot \nabla (u - u_h)\| + \|\beta \cdot \nabla
(u - \pi_V u)\|
\]
and using suitable arithmetic-geometric inequalities to absorb
the factors $\|\beta \cdot \nabla (u - u_h)\|$ into the left-hand side, we
conclude that
\begin{multline*}
\|\beta \cdot \nabla (u - u_h)\|^2 \leq C_{\gamma\beta\sigma} \Bigl(h^{-1} |\xi_h|^2_{S_p}
+ \|\xi_h\|^2 + \|u - u_h\|^2\\+ h^{-1} |z_h|^2_{S_a}+ \|\beta \cdot \nabla
(u - \pi_V u)\|^2 \Bigr)\leq C_{\gamma\beta\sigma} h^{2k} |u|^2_{H^{k+1}(\Omega)}.
\end{multline*}
The last inequality is a consequence of the estimate
\[
\|u - u_h\|_V + |u-u_h|_{S_p} + |z_h|_{S_a} \leq C_{\gamma\beta\sigma} h^{k+\frac12} |u|_{H^{k+1}(\Omega)}
\]
of Theorem \ref{stab_conv} and standard approximation results on $\|u-
\pi_V u\|$ and $\|\beta \cdot \nabla
(u - \pi_V u)\|$.
The proof for the discontinuous Galerkin method is similar and left to
the reader.
\end{proof}
\subsection{The data assimilation case}
The aim of the methods presented in \cite{part1} is to introduce a
framework where ill-posed problems, such as those arising in
inverse problems or data assimilation, can
be included without modifying the method. We will therefore in this section discuss the case where
{\emph{data is given on the outflow boundary}} in equation
\eqref{model_problem} as a model case of data assimilation. By the reversibility of the transport equation
under our assumptions on $\beta$, this problem is not ill-posed on the
continuous level.
However, on the discrete level, methods based on upwinding are
likely to experience difficulties. Since our framework relies
neither on upwinding nor on coercivity, this case can be included with only minor modifications
of the formulations and without any loss of
stability. Consider the problem \eqref{model_problem} with the
boundary condition $u = g$ on $\partial \Omega_+$. Let the
formulation \eqref{FEM} be defined by the bilinear form \eqref{discrete_bilin}
and the stabilization term $s_p(\cdot,\cdot)$ for $X=GLS, CIP, DG$,
\begin{equation}\label{assim_stab}
s_p(u_h,v_h) := s_{p,X}(u_h,v_h) + s_{bc,+}(u_h,v_h).
\end{equation}
The term $s_a(\cdot,\cdot)$ is unchanged. The data assimilation problem then typically consists in finding $u|_{\partial
\Omega_{-}}$, which amounts to solving the backward transport equation.
Observe that the boundary penalty for the primal equation now acts on
the outflow boundary.
The stabilization may then be chosen as any of the three methods
considered in Section \ref{subsec:GLS}--\ref{subsec:DG} and Theorem
\ref{stab_conv} holds under the same conditions as before, but the
stability will be given by a different weight function. Once the functions $v_{a}$ and
$v_{a*}$ have been identified the rest of the analysis is identical to
that of Sections \ref{subsec:GLS}--\ref{subsec:DG}. We recall the
following inequalities from Lemma \ref{basic_stability}.
\begin{lemma}\label{daat_assim_stability}
For the bilinear form \eqref{discrete_bilin} there holds, for all
$\eta \in W^{1,\infty}(\Omega)$
\[
a_h(u_h,-e^{\eta} u_h) = -\frac12 \int_{\partial \Omega} (\beta \cdot
n) u_h^2 e^{\eta} ~\mbox{d}s + \int_{\Omega} u_h^2 \left(\frac12 \beta
\cdot \nabla \eta + \frac12 \nabla \cdot \beta - \sigma\right) e^{\eta}~\mbox{d}x,
\]
\[
a_h( e^{\eta} z_h, z_h) = \frac12 \int_{\partial \Omega} (\beta \cdot
n) z_h^2 e^{\eta} ~\mbox{d}s + \int_{\Omega} z_h^2 \left(\frac12 \beta
\cdot \nabla \eta + \frac12 \nabla \cdot \beta - \sigma\right) e^{\eta}~\mbox{d}x.
\]
\end{lemma}
{
It follows that apart from the form of the exponential dependencies in the
constants nothing changes for the method \eqref{FEM}. The situation is
different for method \eqref{standardFEM}, since here the same test
function must be used in the forms $a_h(\cdot,\cdot)$ and
$s_p(\cdot,\cdot)$. We see that the choice $v_{a}(u_h):=-\pi_V (e^{\eta}
u_h)$ is necessary in $a_h(\cdot,\cdot)$, however due to the least
squares character of $s_p(\cdot,\cdot)$, the term can never have a
stabilizing effect for positive stabilization parameter when this
weight function is used. If instead
the stabilization parameters in \eqref{standardFEM} are chosen
\emph{negative} it is straightforward to show that the assumptions for
Proposition \ref{standard_conv} hold. This corresponds to using
downwind fluxes instead of upwind fluxes. For more general problems,
however, data are provided at some points along the
characteristics and it is therefore not possible for any given point
in the domain to decide whether the data will arrive from the upwind
or the downwind side, unless the characteristic equations are solved for each
given datum. Therefore the strategy of changing the sign of the
stabilization parameter inside the domain to match the location of
given data is not so attractive. In contrast the method \eqref{FEM}
does not use the flow direction for stability and can therefore be
applied in a much wider context, without tuning the stabilization parameters.}
\section{Numerical examples}\label{numerics}
Here we will give some simple numerical examples illustrating the
above theory. All computations were made using Freefem++ \cite{freefem}. We will only consider the CIP-method and compare
the results obtained by \eqref{standardFEM} with those of \eqref{FEM}
and in some cases with the
standard Galerkin method. We use an exact solution from \cite{CD11} adapted for
the case of vanishing viscosity with some different velocity
fields. We consider pure transport in conservation form and with a
non-solenoidal velocity field,
\begin{equation}\label{comptransp}
\nabla \cdot (\beta u) = f \quad \mbox{ on } \Omega.
\end{equation}
Three different velocity fields will be used,
\begin{equation}\label{beta1}
\beta_1:= \left(\begin{array}{c} -(x+1)^4+y \\ -8(y-x) \end{array} \right),
\end{equation}
\begin{equation}\label{beta2}
\beta_2:= -100 \left(\begin{array}{c} x+y \\ y-x \end{array} \right)
\end{equation}
or
\begin{equation}\label{badbeta}
\beta_3 = \left( \begin{array}{c}
10 \,\mbox{arctan}(\frac{y-\frac12}{\varepsilon}) - \frac{x^2}{\varepsilon} \\[3mm]
\sin(x/\varepsilon) + \sin(y/\varepsilon)\end{array}\right).
\end{equation}
We will consider two different exact solutions, one smooth given by
\begin{equation}\label{exact_sol}
u(x,y) = 30x(1-x)y(1-y),
\end{equation}
obtained by choosing a suitable right-hand side $f$, and one non-smooth obtained by setting $f=0$, but introducing a
discontinuous function for the boundary data.
The smooth solution \eqref{exact_sol} satisfies homogeneous Dirichlet boundary conditions both
on the inflow and the outflow boundary and
has $\|u\| = 1$.
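The normalization $\|u\|=1$, the manufactured right-hand side and the divergence
bound for \eqref{beta1} used below can be checked symbolically. The following
sketch is ours and assumes $\Omega=(0,1)^2$, consistent with the exact solution
and the mesh sizes reported below:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
u = 30 * x * (1 - x) * y * (1 - y)          # the smooth solution (exact_sol)

# L2-norm over the unit square: the integral of u^2 equals 1, so ||u|| = 1
print(sp.integrate(u**2, (x, 0, 1), (y, 0, 1)))           # -> 1

# manufactured right-hand side f = div(beta_1 u) for the velocity field (beta1)
beta1 = sp.Matrix([-(x + 1)**4 + y, -8 * (y - x)])
f = sp.simplify(sp.diff(beta1[0] * u, x) + sp.diff(beta1[1] * u, y))

# divergence of beta_1 is decreasing in x, so its infimum on (0,1)^2 is at x = 1
div_beta1 = sp.diff(beta1[0], x) + sp.diff(beta1[1], y)   # -4*(x + 1)**3 - 8
print(div_beta1.subs(x, 1))                               # -> -40, hence sigma_0 = -20
\end{verbatim}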
Unless otherwise stated, we use the stabilization parameters $\gamma_{CIP} =
0.01$ for piecewise affine approximation and $\gamma_{CIP}=0.001$ for
piecewise quadratic approximation. The boundary penalty term is taken
as $\gamma_{bc} = 0.5$ for \eqref{FEM} and $\gamma_{bc} = 1.0$ for
\eqref{standardFEM}.
We have first
considered the velocity field \eqref{beta1} and the solution \eqref{exact_sol}. Note that $\inf_{x \in \Omega} \nabla
\cdot \beta_1 = -40$, making the problem strongly
noncoercive, since then $\sigma_0 = \tfrac12 \inf_{x \in \Omega} \nabla
\cdot \beta_1 = -20$. In our experience the standard Galerkin method performs relatively
well for the coercive case when approximating smooth solutions in two
space dimensions.
As can be seen in figure \ref{smooth_plot}, this is not
the case here. Three contour plots are presented representing
computations using the standard Galerkin method, the method
\eqref{standardFEM} and \eqref{FEM} on a $64\times64$ unstructured
mesh. Note the oscillations that persist in the standard
Galerkin solution, in spite of the smoothness of the solution. On all the meshes we considered, up to $256 \times
256$, these oscillations remained, although their amplitude
decreased. This highlights the increased need of stabilization for
noncoercive problems.
\begin{figure}
\caption{Contour plots of approximations of the smooth solution \eqref{exact_sol},
computed with the standard Galerkin method, method \eqref{standardFEM} and method
\eqref{FEM} on a $64\times 64$ unstructured mesh.}
\label{smooth_plot}
\end{figure}
In table \ref{SmoothP1} we present the errors in both
the $L^2$-norm and the streamline derivative norm,
\begin{equation}\label{SDdef}
\|h^{\frac12} |\beta|^{-\frac12} \beta \cdot \nabla (u - u_h)\|,
\end{equation}
on $6$
consecutive unstructured meshes with $2^N$, $N=3,...,8$, elements on
each side and piecewise affine approximation. We note that the stabilized methods both have (and
sometimes exceed) the expected
convergence orders. Indeed the $L^2$-error converges as $O(h^{k+1})$
and the error in the streamline derivative \eqref{SDdef} as $O(h^{k+\frac12})$. As expected
the convergence of the standard Galerkin method is very uneven. It is
unclear if
the error in the streamline derivative converges at all. In table \ref{SmoothP2} the
same sequence of computations is reported using piecewise quadratic
elements. The stability of the standard Galerkin method is noticeably
improved. Nevertheless the errors of the stabilized methods are two
orders of magnitude smaller. The errors of formulation
\eqref{FEM} are slightly smaller than those of \eqref{standardFEM},
but on the other hand the former method uses twice as many degrees of
freedom as the latter.
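As a sanity check, the reported orders can be recovered directly from the tabulated
errors. A minimal computation of the observed order, using the $L^2$-column of
\eqref{FEM} in Table \ref{SmoothP1} (the mesh size is halved between consecutive
rows), reads:
\begin{verbatim}
import math

# L2-errors of method (FEM) from Table SmoothP1, meshes N = 3, ..., 8
errors = [0.028, 6.5e-3, 1.5e-3, 4.0e-4, 1.0e-4, 2.4e-5]

# observed order log2(e_N / e_{N+1}), since h is halved from one row to the next
rates = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print([round(r, 2) for r in rates])   # approximately 2 = k + 1 for affine elements
\end{verbatim}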
Both methods \eqref{standardFEM} and \eqref{FEM} control
spurious oscillations in non-smooth exact solutions, as can be seen in
figure \ref{rough_plot}, which shows contour plots of a computation with a
non-smooth exact solution created by using the velocity field
\eqref{beta2} in \eqref{comptransp}, setting $f=0$ and taking the boundary
data equal to one wherever $x>0.8$ and $y<0.5$ and zero elsewhere.
\begin{figure}
\caption{Discontinuous solution, $64\times 64$ mesh, affine approximation. From left
to right: standard Galerkin, method \eqref{standardFEM} and method \eqref{FEM}.}
\label{rough_plot}
\end{figure}
To show the increased
robustness of the formulation \eqref{FEM}, we propose to study the
problem \eqref{comptransp}, with the velocity field \eqref{badbeta}.
This velocity field is strictly speaking not covered by the analysis,
since for some values of $\varepsilon$ there may be points in the
domain where $\beta_3$ vanishes. Nevertheless the right hand side is chosen such that
the exact solution is given by \eqref{exact_sol}. We consider a fixed
$64\times 64$ unstructured mesh and vary $\varepsilon$, creating a series of
increasingly ill-posed problems where the divergence and the maximum
derivatives of $\beta$ behave as
$-\tfrac{1}{\varepsilon}$. The error in the
streamline derivative \eqref{SDdef} for varying $\varepsilon$ is
plotted in the left graphic of figure
\ref{stab_study}. It is fair to say that the method \eqref{FEM} (circle markers) outperforms
\eqref{standardFEM} (square markers). As $\varepsilon$ becomes small the error for the
approximations computed using \eqref{FEM} exhibits moderate growth
of order $O(\varepsilon^{-\frac13})$, but remains below $0.06$, whereas
over half of the approximations computed using \eqref{standardFEM} have an
error larger than $0.5$ and none has an error below $0.1$. For $\varepsilon =
0.05$, the error is $120$ and the computed solution bears no
resemblance to the exact one. { In the right plot of figure
\ref{stab_study} we study how the error
depends on the choice of the stabilization parameter $\gamma_{CIP}$.
We plot the error defined by \eqref{SDdef}, this time varying the
parameter $\gamma_{CIP}$ for three different
$\varepsilon$. Even when accounting for the increased number of
degrees of freedom in method \eqref{FEM}, the error of
\eqref{standardFEM} is more than 50\% larger in all the computations,
and where \eqref{standardFEM} fails it is more than a factor 1000 larger.}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
N & SG, $L^2$ & SG, SD & \eqref{standardFEM}, $L^2$ &
\eqref{standardFEM}, SD & \eqref{FEM}, $L^2$ & \eqref{FEM}, SD\\
\hline
3 & 0.041 & 1.0 & 0.029 & 0.58 & 0.028 & 0.58 \\ \hline
4 & 0.025 & 0.88 &7.2E-3 & 0.20& 6.5E-3& 0.20 \\ \hline
5 & 0.010 & 0.48 & 1.7E-3 & 0.071 & 1.5E-3 & 0.069 \\ \hline
6 & 0.015 & 1.1 & 4.5E-4& 0.026 &4.0E-4 & 0.025 \\ \hline
7& 7.8E-3 & 0.76 & 1.1E-4& 9.1E-3 & 1.0E-4 & 8.7E-3 \\ \hline
8 & 1.9E-3 & 1.1 &2.5E-5 & 3.0E-3 & 2.4E-5 & 3.0E-3 \\ \hline
\end{tabular}
\caption{Errors of estimated quantities for the smooth
solution approximated using piecewise affine elements. SG means
standard Galerkin and equations refer to methods used. ``L2''
denotes the error in $L^2$-norm, ``SD'' denotes the error in the
streamline derivative norm defined in equation \eqref{SDdef}.}\label{SmoothP1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
N & SG, $L^2$ & SG, SD & \eqref{standardFEM}, $L^2$ &
\eqref{standardFEM}, SD & \eqref{FEM}, $L^2$ & \eqref{FEM}, SD\\ \hline
3 & 0.028 & 0.58 & 9.3E-4 & 0.060 & 7.5E-4 & 0.045 \\ \hline
4 & 4.6E-3 & 0.25 & 1.7E-4 & 0.014& 1.1E-4 & 8.7E-3\\ \hline
5 & 1.9E-3 & 0.17 & 2.7E-5 & 3.1E-3 & 1.4E-5 & 1.7E-3 \\ \hline
6 & 3.0E-4 & 0.042 &3.3E-6 & 5.1E-4& 1.7E-6 & 2.7E-4\\ \hline
7 & 3.3E-5 & 6.1E-3 & 4.4E-7 & 9.2E-5& 2.1E-7 & 4.7E-5\\ \hline
\end{tabular}
\caption{Errors of estimated quantities for the smooth
solution approximated using piecewise quadratic elements. SG means
standard Galerkin and equations refer to methods used. ``L2'' denotes the error in $L^2$-norm, ``SD'' denotes the error in the streamline derivative.}\label{SmoothP2}
\end{center}
\end{table}
\begin{figure}
\caption{Study of the error in the SD-norm \eqref{SDdef}. Left: varying
$\varepsilon$; right: varying the stabilization parameter $\gamma_{CIP}$.}
\label{stab_study}
\end{figure}
\subsection{A data assimilation example}
Finally we consider a model problem for data assimilation where the
boundary conditions of the problem \eqref{comptransp} are imposed on
the outflow boundary instead of the inflow boundary. The method
\eqref{FEM} with the bilinear form \eqref{discrete_bilin} and the stabilizing term
\eqref{assim_stab}, with $X=CIP$ was applied. We consider the
test case with smooth solution \eqref{exact_sol} and the velocity field
\eqref{beta1}. In table \ref{dataassim} we give the computational
errors in the $L^2$-norm and the streamline norm \eqref{SDdef}, using either piecewise affine or piecewise
quadratic elements. Recalling the results in tables
\ref{SmoothP1} and \ref{SmoothP2} we see that the errors are
comparable. This is not surprising since the use of the adjoint
equation makes the two cases similar. Attempts to use \eqref{standardFEM} with weakly
imposed boundary conditions on the outflow
and $\gamma_{CIP}>0$ were not fruitful. This is
expected since the stabilized methods of the form
\eqref{standardFEM} are all based on upwinding, which is unphysical in
this setting.
{
Indeed the standard unstabilized Galerkin method
performs better than the standard stabilized method for this smooth
solution. When the stabilization parameter is chosen negative we
recover the expected behavior of the stabilized method. We give the
results of \eqref{standardFEM} using $\gamma_{bc}=-1.0$ and $\gamma_{CIP}=0.001$, $\gamma_{CIP}=0$, $\gamma_{CIP}=-0.01$ in table
\ref{standard_data_assim}.}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
N & $\mathbb{P}_1$, $L^2$ &
$\mathbb{P}_1$, SD & $\mathbb{P}_2$, $L^2$ &
$\mathbb{P}_2$, SD \\
\hline
3 & 0.033 & 0.75 & 1.1E-3 & 0.052 \\ \hline
4 &7.1E-3 & 0.23& 1.5E-4 & 9.6E-3 \\ \hline
5 &1.6E-3 & 0.075 & 1.8E-5 & 1.8E-3 \\ \hline
6 & 4.1E-4& 0.026 & 2.0E-6 & 2.8E-4 \\ \hline
7& 1.0E-4& 8.9E-3 & 2.4E-7 & 4.8E-5 \\ \hline
8 & 2.4E-5 & 3.0E-3 & -- & -- \\ \hline
\end{tabular}
\caption{Data assimilation using \eqref{FEM}. Errors of estimated quantities for the smooth
solution \eqref{exact_sol} computed with data given on the outflow
boundary. Approximation using piecewise affine ($\mathbb{P}_1$) and
quadratic ($\mathbb{P}_2$) elements. ``L2'' denotes the error in $L^2$-norm, ``SD'' denotes the error in the streamline derivative.}\label{dataassim}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
N & $\gamma_{1}$, $L^2$ &
$\gamma_{1}$, SD & $\gamma_{2}$, $L^2$ &
$\gamma_{2}$, SD & $\gamma_{3}$, $L^2$ &
$\gamma_{3}$, SD \\
\hline
3 & 0.044 & 3.48 & 0.034 & 2.8 &0.029 & 2.25\\ \hline
4 & 0.027 & 2.96& 0.01 & 1.2 & 6.7E-3& 0.74 \\ \hline
5 &0.27 & 31.0 & 2.7E-3 & 0.44 &1.6E-3 & 0.26 \\ \hline
6 & 2.74 & 455 & 1.1E-3 & 0.26 & 4.2E-4 & 0.094 \\ \hline
7& 6170 & 1.8E6 & 3.7E-4 & 0.11 &1.1E-4& 0.033 \\ \hline
8 & 67471 & 3.4E7 & 9.9E-5 & 0.041 & 2.5E-3& 0.011 \\ \hline
\end{tabular}
\caption{Data assimilation using the method \eqref{standardFEM} with
the forms \eqref{discrete_bilin} and \eqref{assim_stab}, piecewise
affine elements, $\gamma_{bc}=-1$
and three different choices of $\gamma_{CIP}$ denoted by $\gamma_1$,
$\gamma_2$ and $\gamma_3$. The CIP-stabilization parameters are
assigned the values $\gamma_1=10^{-3}$, $\gamma_{2}=0$ and $\gamma_{3}=-10^{-2}$. Errors of estimated quantities for the smooth
solution \eqref{exact_sol} computed with data given on the outflow
boundary. ``L2'' denotes the error in $L^2$-norm, ``SD'' denotes the error in the streamline derivative.}\label{standard_data_assim}
\end{center}
\end{table}
\section{Concluding remarks}
We have extended the methods proposed in \cite{part1} to include
hyperbolic equations and shown how three stabilization methods known from the
literature can be used to obtain stable and (quasi) optimally convergent approximations. Compared to
the standard stabilized method we show that the new method yields
existence of discrete solutions and (quasi) optimal error estimates under
much weaker assumptions on the mesh parameter (``$h^2$ small enough'' compared to
``$h^{\frac12}$ small enough''). We would like to stress that the method
proposed herein will not necessarily yield a more accurate solution than the
standard stabilized methods in cases where both methods work. The new
method however has increased robustness for noncoercive
problems. It also makes it easier to
incorporate other data than classical inflow boundary data. The idea of recasting the problem in an
optimisation framework opens interesting perspectives for optimal
control, inverse problems and data assimilation using observers.
\section*{Acknowledgment} Partial funding for this research was
provided by EPSRC
(Award number EP/J002313/1).
\end{document}
\begin{document}
\title[Moduli spaces of weighted pointed
stable rational curves]{Moduli spaces of weighted pointed stable
rational curves via GIT}
\date{March 2010}
\author{Young-Hoon Kiem}
\address{Department of Mathematics and Research Institute
of Mathematics, Seoul National University, Seoul 151-747, Korea}
\email{[email protected]}
\author{Han-Bom Moon}
\address{Department of Mathematics, Seoul National University, Seoul 151-747, Korea}
\email{[email protected]}
\thanks{Partially supported by an NRF grant}
\begin{abstract}
We construct the Mumford-Knudsen space $\overline{M}_{0,n} $ of $n$-pointed
stable rational curves by a sequence of explicit blow-ups from the
GIT quotient $(\mathbb{P}^1)^n/\!/ SL(2)$ with respect to the symmetric
linearization $\mathcal{O} (1,\cdots,1)$. The intermediate blown-up spaces
turn out to be the moduli spaces of weighted pointed stable curves
$\overline{M}_{0,n\cdot \epsilon} $ for suitable ranges of $\epsilon$. As an application, we
provide a new unconditional proof of M. Simpson's Theorem about
the log canonical models of $\overline{M}_{0,n} $. We also give a basis of the
Picard group of $\overline{M}_{0,n\cdot \epsilon} $.
\end{abstract}
\maketitle
\section{Introduction}\label{sec1}
Recently there has been a tremendous amount of interest in the
birational geometry of moduli spaces of stable curves. See for
instance \cite{AlexSwin, FedoSmyt,Hassett,Kapranov,Keel,Li,
Mustata, Simpson} for the genus 0 case only. Most prominently, it
has been proved in \cite{AlexSwin, FedoSmyt,Simpson} that the log
canonical models for $(\overline{M}_{0,n} , K_{\overline{M}_{0,n} }+\alpha D)$, where $D$ is the
boundary divisor and $\alpha$ is a rational number, give us
Hassett's moduli spaces $\overline{M}_{0,n\cdot \epsilon} $ of weighted pointed stable curves
with \emph{symmetric} weights $n\cdot \epsilon =(\epsilon,\cdots,
\epsilon)$. See \S\ref{sec2.1} for the definition of $\overline{M}_{0,n\cdot \epsilon} $ and
Theorem \ref{thm1.2} below for a precise statement. The purpose of
this paper is to prove that actually all the moduli spaces $\overline{M}_{0,n\cdot \epsilon} $
can be constructed by explicit blow-ups from the GIT quotient
$(\mathbb{P}^1)^n/\!/ SL(2)$ with respect to the symmetric linearization
$\mathcal{O} (1,\cdots,1)$ where $SL(2)$ acts on $(\mathbb{P}^1)^n$ diagonally.
More precisely, we prove the following.
\begin{theorem}\label{thm1.1} There is a sequence of blow-ups
\begin{equation}\label{eq1}
\overline{M}_{0,n} =\overline{M}_{0,n\cdot \epsilon_{m-2}} \to \overline{M}_{0,n\cdot \epsilon_{m-3}} \to \cdots \to \overline{M}_{0,n\cdot \epsilon_2} \to \overline{M}_{0,n\cdot \epsilon_1} \to
(\mathbb{P}^1)^n/\!/ SL(2) \end{equation} where
$m=\lfloor\frac{n}{2}\rfloor$ and $\frac1{m+1-k}<\epsilon_k\le
\frac1{m-k}$. Except for the last arrow when $n$ is even, the
center for each blow-up is a union of transversal smooth
subvarieties of same dimension. When $n$ is even, the last arrow
is the blow-up along the singular locus which consists of
$\frac12\binom{n}{m}$ points in $(\mathbb{P}^1)^n/\!/ SL(2)$, i.e.
$\overline{M}_{0,n\cdot \epsilon_1} $ is Kirwan's partial desingularization (see \cite{Kirwan})
of the GIT quotient $(\mathbb{P}^1)^{2m}/\!/ SL(2)$.\end{theorem} If the
center of a blow-up is the transversal union of smooth
subvarieties in a nonsingular variety, the result of the blow-up
is isomorphic to that of the sequence of smooth blow-ups along the
irreducible components of the center in any order (see
\S\ref{sec2.3}). So each of the above arrows can be decomposed
into the composition of smooth blow-ups along the irreducible
components.
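For orientation, we spell out a small instance of Theorem \ref{thm1.1}; the numbers
below are simply read off from the statement. For $n=9$ one has $m=4$ and $k=1,2$,
so \eqref{eq1} becomes
\[
\overline{M}_{0,9}=\overline{M}_{0,9\cdot \epsilon_2} \to \overline{M}_{0,9\cdot \epsilon_1} \to
(\mathbb{P}^1)^9/\!/ SL(2), \qquad \tfrac14<\epsilon_1\le \tfrac13,\quad \tfrac13<\epsilon_2\le \tfrac12,
\]
while for $n=8$ (so $m=4$) the last arrow is the blow-up of the
$\frac12\binom{8}{4}=35$ singular points of $(\mathbb{P}^1)^8/\!/ SL(2)$.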
As an application of Theorem \ref{thm1.1}, we give a new proof of
the following theorem of M. Simpson (\cite{Simpson}) without
relying on Fulton's conjecture.
\begin{theorem}\label{thm1.2}
Let $\alpha$ be a rational number satisfying $\frac{2}{n-1} < \alpha
\le 1$ and let $D=\overline{M}_{0,n} -M_{0,n}$ denote the boundary divisor. Then
the log canonical model $$\overline{M}_{0,n} (\alpha) = \mathrm{Proj}\;\left(\bigoplus_{l \ge
0} H^0(\overline{M}_{0,n} , \mathcal{O} (\lfloor l (K_{\overline{M}_{0,n} } +\alpha D) \rfloor))\right)
$$
satisfies the following:
\begin{enumerate} \item If $\frac{2}{m-k+2} < \alpha \le \frac{2}{m-k+1}$ for $1\le k\le
m-2$, then $\overline{M}_{0,n} (\alpha) \cong \overline{M}_{0,n\cdot \epsilon_k}$.\item If $\frac{2}{n-1} <
\alpha \le \frac{2}{m+1}$, then $\overline{M}_{0,n} (\alpha) \cong (\mathbb{P}^1)^n /\!/
G$ where the quotient is taken with respect to the symmetric
linearization $\mathcal{O} (1,\cdots,1)$.
\end{enumerate}
\end{theorem}
There are already two different \emph{unconditional} proofs of
Theorem \ref{thm1.2} by Alexeev-Swinarski \cite{AlexSwin} and by
Fedorchuk-Smyth \cite{FedoSmyt}. See Remark \ref{rem-compproofs}
for a brief outline of the two proofs. In this paper we obtain the
ampleness of some crucial divisors directly from Theorem
\ref{thm1.1}. As another application, we give an explicit basis of
the Picard group of $\overline{M}_{0,n\cdot \epsilon_k}$ for each $k$.
It is often the case in moduli theory that adding an extra structure
makes a problem easier. Let $0\le k< n$. A pointed nodal curve
$(C,p_1,\cdots, p_n)$ of genus $0$ together with a morphism $f:C\to
\mathbb{P}^1$ of degree $1$ is called \emph{$k$-stable} if
\begin{enumerate}
\item[i.] all marked points $p_i$ are smooth points of $C$;
\item[ii.] no more than $n-k$ of the marked points $p_i$ can coincide;
\item[iii.] any ending irreducible component $C'$ of $C$ which is
contracted by $f$ contains more than $n-k$ marked points;
\item[iv.] the group of automorphisms of $C$ preserving $f$ and $p_i$
is finite.
\end{enumerate}
A. Mustata and M. Mustata prove the following in \cite{Mustata}.
\begin{theorem} \cite[\S1]{Mustata}
There is a fine moduli space $F_k$ of $k$-stable
pointed parameterized curves $(C,p_1,\cdots,p_n,f)$.
Furthermore, the moduli spaces $F_k$ fit into a
sequence of blow-ups
\begin{equation}\label{2010eq1}
\xymatrix{ \mathbb{P}^1[n]\ar@{=}[r] & F_{n-2}\ar[r]^{\psi_{n-2}} &
F_{n-3}\ar[r]^{\psi_{n-3}} & \cdots \ar[r]^{\psi_2} &
F_1\ar[r]^{\psi_1} & F_0\ar@{=}[r] & (\mathbb{P}^1)^n }
\end{equation}
whose centers are transversal unions of smooth subvarieties.
\end{theorem}
The first term $\mathbb{P}^1[n]$ is the Fulton-MacPherson
compactification of the configuration space of $n$ points in
$\mathbb{P}^1$ constructed in \cite{FM}. The blow-up centers are
transversal unions of smooth subvarieties and hence we can further
decompose each arrow into the composition of smooth blow-ups along
the irreducible components in any order. This blow-up sequence is
actually a special case of L. Li's inductive construction of
a \emph{wonderful compactification} of the configuration space and
transversality of various subvarieties is a corollary of Li's
result \cite[Proposition 2.8]{Li}. (See \S\ref{sec2.3}.) The
images of the blow-up centers are invariant under the diagonal
action of $SL(2)$ on $(\mathbb{P}^1)^n$ and so this action lifts to $F_k$
for all $k$. The aim of this paper is to show that the GIT
quotient of the sequence \eqref{2010eq1} by $SL(2)$ gives us
\eqref{eq1}.
To make sense of GIT quotients, we need to specify a linearization
of the action of $G=SL(2)$ on $F_k$. For $F_0=(\mathbb{P}^1)^n$, we
choose the symmetric linearization $L_0=\mathcal{O} (1,\cdots, 1)$.
Inductively, we choose $L_k=\psi^*_kL_{k-1}\otimes
\mathcal{O} (-\delta_kE_{k})$ where $E_k$ is the exceptional divisor of
$\psi_k$ and $0<\delta_k<\!<\delta_{k-1}<\!< \cdots
<\!<\delta_1<\!<1$. Let $F_k^{ss}$ (resp. $F_k^s$) be the
semistable (resp. stable) part of $F_k$ with respect to $L_k$.
Then by \cite[\S3]{Kirwan} or \cite[Theorem 3.11]{Hu}, we have
\begin{equation}\label{2010eq2}
\psi_k^{-1}(F^s_{k-1})\subset F_k^s\subset F_k^{ss}\subset
\psi_k^{-1}(F_{k-1}^{ss}).
\end{equation}
In particular, we obtain a sequence of morphisms
$$\bar\psi_k:F_k/\!/ G\to F_{k-1}/\!/ G.$$ It is well known that a
point $(x_1,\cdots,x_n)$ in $F_0=(\mathbb{P}^1)^n$ is stable (resp.
semistable) if more than $\frac{n}2$ (resp. at least $\frac{n}2$)
of the points do not coincide (\cite{MFK,K1}).
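To make the stability condition concrete, consider the small instance $n=6$, $m=3$
(this worked example is ours): a configuration is semistable when no point of
$\mathbb{P}^1$ carries more than three of the $x_i$, and stable when no point
carries three or more. The closed orbits in the strictly semistable locus are the
configurations
\[
x_i=p \ \text{ for } i\in S, \qquad x_i=q \ \text{ for } i\in S^c, \qquad p\ne q,\ |S|=3,
\]
and the unordered partitions $\{S,S^c\}$ account for the
$\frac12\binom{6}{3}=10$ points appearing in Theorem \ref{thm1.1}.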
Let us first consider the case where $n$ is odd. In this case,
$F_0^s=F_0^{ss}$ because $\frac{n}2$ is not an integer. Hence
$F_k^s=F_k^{ss}$ for any $k$ by \eqref{2010eq2}. Since the blow-up
centers of $\psi_k$ for $k\le m+1$ lie in the unstable part, we
have $F_k^s=F_0^s$ for $k\le m+1$. Furthermore, the stabilizer
group of every point in $F_k^s$ is $\{\pm 1\}$, i.e. $\bar
G=PGL(2)$ acts freely on $F_k^s$ for $0\le k\le n-2$ and thus
$F_k/\!/ G=F_k^s/G$ is nonsingular. By the stability conditions,
forgetting the degree 1 morphism $f:C\to \mathbb{P}^1$ gives us an
invariant morphism $F_{n-m+k}^s\to \overline{M}_{0,n\cdot \epsilon_k}$ which induces a morphism
$$\phi_k:F_{n-m+k}/\!/ G\to \overline{M}_{0,n\cdot \epsilon_k} \quad \text{for } k=0,\cdots, m-2.$$
Since both varieties are nonsingular, we can conclude that
$\phi_k$ is an isomorphism by showing that the Picard numbers are
identical. Since $\bar G$ acts freely on $F_{n-m+k}^s$, the
quotient of the blow-up center of $\psi_{n-m+k+1}$ is again a
transversal union of $\binom{n}{m-k}$ smooth varieties
$\Sigma^S_{n-m+k}/\!/ G$ for a subset $S$ of $\{1,\cdots, n\}$
with $|S|=m-k$, which are isomorphic to the moduli space
$\overline{M}_{0,(1,\epsilon_k,\cdots,\epsilon_k)}$ of weighted
pointed stable curves with $n-m+k+1$ marked points (Remark
\ref{2010rem1}). Finally we conclude that
$$\varphi_k:\overline{M}_{0,n\cdot \epsilon_k}\cong F_{n-m+k}/\!/ G\mapright{\bar\psi_{n-m+k}}
F_{n-m+k-1}/\!/ G\cong \overline{M}_{0,n\cdot \epsilon_{k-1}}$$ is a blow-up by using a lemma in
\cite{Kirwan} which tells us that quotient and blow-up commute.
(See \S\ref{sec2.2}.) It is straightforward to check that this
morphism $\varphi_k$ is identical to Hassett's natural morphisms
(\S\ref{sec2.1}). Note that the isomorphism
$$\phi_{m-2}:\mathbb{P}^1[n]/\!/ G\mapright{\cong} \overline{M}_{0,n} $$ was obtained
by Hu and Keel (\cite{HuKeel}) when $n$ is odd because $L_0$ is a
\emph{typical} linearization in the sense that $F_0^{ss}=F_0^s$.
The above proof of the fact that $\phi_k$ is an isomorphism in the
odd $n$ case is essentially the same as Hu-Keel's. However their
method does not apply to the even degree case.
The case where $n$ is even is more complicated because
$F_k^{ss}\ne F_k^s$ for all $k$. Indeed, $F_m/\!/ G=\cdots
=F_0/\!/ G= (\mathbb{P}^1)^n/\!/ G$ is singular with exactly
$\frac12\binom{n}{m}$ singular points. But for $k\ge 1$, the GIT
quotient of $F_{n-m+k}$ by $G$ turns out to be nonsingular, which we
see by applying Kirwan's partial desingularization construction to
the GIT quotient $F_{n-m+k}/\!/ G$ (\cite{Kirwan}). For $k\ge 1$, the locus
$Y_{n-m+k}$ of closed orbits in $F_{n-m+k}^{ss}-F_{n-m+k}^s$ is
the disjoint union of the transversal intersections of smooth
divisors $\Sigma^S_{n-m+k}$ and $\Sigma^{S^c}_{n-m+k}$ where
$S\sqcup S^c=\{1,\cdots,n\}$ is a partition with $|S|=m$. In
particular, $Y_{n-m+k}$ is of codimension $2$ and the stabilizers
of points in $Y_{n-m+k}$ are all conjugates of $\mathbb{C}^*$. The
weights of the action of the stabilizer $\mathbb{C}^*$ on the normal
space to $Y_{n-m+k}$ are $2,-2$. By Luna's slice theorem
(\cite[Appendix 1.D]{MFK}), it follows that $F_{n-m+k}/\!/ G$ is
smooth along the divisor $Y_{n-m+k}/\!/ G$. If we let $\tilde{F}_{n-m+k} \to
F_{n-m+k} ^{ss}$ be the blow-up of $F_{n-m+k} ^{ss}$ along $Y_{n-m+k} $,
$\tilde{F}_{n-m+k} ^{ss}=\tilde{F}_{n-m+k} ^s$ and $\tilde{F}_{n-m+k} /\!/ G=\tilde{F}_{n-m+k} ^s/G$ is nonsingular. Since
blow-up and quotient commute (\S\ref{sec2.2}), the induced map
$$\tilde{F}_{n-m+k} /\!/ G\to F_{n-m+k} /\!/ G$$ is a blow-up along $Y_{n-m+k} /\!/ G$ which has
to be an isomorphism because the blow-up center is already a smooth
divisor. So we can use $\tilde{F}_{n-m+k} ^s$ instead of $F_{n-m+k} ^{ss}$ and apply the
same line of arguments as in the odd degree case. In this way, we
can establish Theorem \ref{thm1.1}.
To deduce Theorem \ref{thm1.2} from Theorem \ref{thm1.1}, we note
that by \cite[Corollary 3.5]{Simpson}, it suffices to prove that
$K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is ample for $\frac{2}{m-k+2}<\alpha\le
\frac{2}{m-k+1}$ where $D_k=\overline{M}_{0,n\cdot \epsilon_k}-M_{0,n}$ is the boundary divisor
of $\overline{M}_{0,n\cdot \epsilon_k}$ (Proposition \ref{prop-amplerange}). By the
intersection number calculations of Alexeev and Swinarski
(\cite[\S3]{AlexSwin}), we obtain the nefness of $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha
D_k$ for $\alpha= \frac{2}{m-k+1}+s$ for some (sufficiently small)
positive number $s$. Because any positive linear combination of an
ample divisor and a nef divisor is ample, it suffices to show that
$K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is ample for $\alpha=\frac{2}{m-k+2}+t$ for
\emph{any} sufficiently small $t>0$. We use induction on $k$. By
calculating the canonical divisor explicitly, it is easy to show
when $k=0$. Because $\varphi_k$ is a blow-up with exceptional
divisor $D^{m-k+1}_k$, $\varphi_k^*(K_{\overline{M}_{0,n\cdot \epsilon_{k-1}}}+\alpha
D_{k-1})-\delta D^{m-k+1}_k$ is ample for small $\delta>0$ if
$K_{\overline{M}_{0,n\cdot \epsilon_{k-1}}}+\alpha D_{k-1}$ is ample. By a direct calculation, we
find that these ample divisors give us $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ with
$\alpha=\frac{2}{m-k+2}+t$ for any sufficiently small $t>0$. So we
obtain a proof of Theorem \ref{thm1.2}.
For the moduli spaces of \emph{unordered} weighted pointed stable
curves \[\widetilde{M}_{0,n\cdot\epsilon_k}=\overline{M}_{0,n\cdot \epsilon_k}/S_n\] we can
simply take the $S_n$ quotient of our sequence \eqref{eq1} and
thus $\widetilde{M}_{0,n\cdot\epsilon_k}$ can be constructed by a
sequence of \emph{weighted blow-ups} from $\mathbb{P}^n/\!/
G=\left((\mathbb{P}^1)^n/\!/ G\right)/S_n$. In particular,
$\widetilde{M}_{0,n\cdot\epsilon_1}$ is a weighted blow-up of
$\mathbb{P}^n/\!/ G$ at its singular point when $n$ is even.
Here is an outline of this paper. In \S2, we recall necessary
materials about the moduli spaces $\overline{M}_{0,n\cdot \epsilon_k}$ of weighted pointed stable
curves, partial desingularization and blow-up along transversal
center. In \S3, we recall the blow-up construction of the moduli
space $F_k$ of weighted pointed parameterized stable curves. In \S4,
we prove Theorem \ref{thm1.1}. In \S5, we prove Theorem
\ref{thm1.2}. In \S6, we give a basis of the Picard group of $\overline{M}_{0,n\cdot \epsilon_k}$
as an application of Theorem \ref{thm1.1}.
\textbf{Acknowledgement.} This paper grew out of our effort to
prove a conjecture of Brendan Hassett (passed to us by
Donghoon Hyeon): When $n$ is even,
$\widetilde{M}_{0,n\cdot\epsilon_1}$ is the (weighted) blow-up of
$\mathbb{P}^n/\!/ G$ at the singular point. It is our pleasure to thank
Donghoon Hyeon for useful discussions. We are also grateful to
David Smyth who kindly pointed out an error in a previous draft.
\section{Preliminaries}\label{sec2}
\subsection{Moduli of weighted pointed stable curves}\label{sec2.1}
We recall the definitions and basic facts on Hassett's moduli
spaces of weighted pointed stable curves from \cite{Hassett}.
A family of nodal curves of genus $g$ with $n$ marked points over
base scheme $B$ consists of
\begin{enumerate}
\item a flat proper
morphism $\pi : C \to B$ whose geometric fibers are nodal
connected curves of arithmetic genus $g$ and
\item sections $s_1, s_2, \cdots, s_n$ of $\pi$.
\end{enumerate}
An $n$-tuple
$\mathcal{A} =(a_1, a_2, \cdots, a_n) \in \mathbb{Q}^n$ with $0 <
a_i \le 1$ assigns a weight $a_i$ to the $i$-th marked point.
Suppose that $2g-2+a_1+a_2+\cdots+a_n > 0$.
\begin{definition}\cite[\S 2]{Hassett}
A family of nodal curves of genus $g$ with $n$ marked points $(C,
s_1, \cdots, s_n) \stackrel{\pi}{\to} B$ is stable of type $(g,
\mathcal{A})$ if
\begin{enumerate}
\item the sections $s_1, \cdots, s_n$ lie in the smooth locus of $\pi$;
\item for any subset
$\{s_{i_1}, \cdots, s_{i_r}\}$ of sections with nonempty intersection, $a_{i_1}
+ \cdots + a_{i_r} \le 1$;
\item $K_{\pi} + a_1s_1 + a_2s_2+\cdots+a_ns_n$ is $\pi$-relatively ample.
\end{enumerate}
\end{definition}
\begin{theorem}\cite[Theorem 2.1]{Hassett}
There exists a connected Deligne-Mumford stack
$\overline{\mathcal{M}}_{g, \mathcal{A}}$, smooth and proper over
$\mathbb{Z}$, representing the moduli functor of weighted pointed stable
curves of type $(g, \mathcal{A})$. The corresponding coarse moduli
scheme $\overline{M}_{g, \mathcal{A}}$ is projective over $\mathbb{Z}$.
\end{theorem}
When $g = 0$, there is no nontrivial automorphism for any weighted
pointed stable curve and hence $\overline{M}_{0, \mathcal{A}}$ is
a projective \emph{smooth variety} for any $\mathcal{A} $.
There are natural morphisms between moduli spaces with different
weight data. Let $\mathcal{A}=\{a_1, \cdots, a_n\}$,
$\mathcal{B}=\{b_1, \cdots, b_n\}$ be two weight data and suppose
$a_i \ge b_i$ for all $1 \le i \le n$. Then there exists a
birational \emph{reduction} morphism
\[
\varphi_{\mathcal{A}, \mathcal{B}}
: \overline{\mathcal{M}}_{g, \mathcal{A}} \to
\overline{\mathcal{M}}_{g, \mathcal{B}}.
\]
For $(C, s_1, \cdots, s_n) \in \overline{\mathcal{M}}_{g,
\mathcal{A}}$, $\varphi_{\mathcal{A}, \mathcal{B}}(C, s_1, \cdots,
s_n)$ is obtained by collapsing components of $C$ on which
$\omega_C + b_1s_1+ \cdots + b_ns_n$ fails to be ample. These
morphisms between moduli stacks induce corresponding morphisms
between coarse moduli schemes.
The exceptional locus of the reduction morphism
$\varphi_{\mathcal{A}, \mathcal{B}}$ consists of boundary divisors
$D_{I, I^c}$ where $I = \{i_1, \cdots, i_r\}$ and $I^c=\{j_1,
\cdots, j_{n-r}\}$ form a partition of $\{1, \cdots, n\}$
satisfying $r > 2$, $$a_{i_1} + \cdots + a_{i_r} > 1 \quad
\text{and}\quad b_{i_1} + \cdots + b_{i_r} \le 1.$$ Here $D_{I,
I^c}$ denotes the closure of the locus of $(C, s_1, \cdots, s_n)$
where $C$ has two irreducible components $C_1, C_2$ with $p_a(C_1)
= 0$, $p_a(C_2) = g$, $r$ sections $s_{i_1}, \cdots s_{i_r}$ lying
on $C_1$, and the other $n-r$ sections lying on $C_2$.
\begin{proposition}\cite[Proposition 4.5]{Hassett}\label{reduction}
The boundary divisor $D_{I,I^c}$ is isomorphic to $
\overline{M}_{0, \mathcal{A}'_I} \times \overline{M}_{g,
\mathcal{A}'_{I^c}},$ with $ \mathcal{A}'_I = (a_{i_1}, \cdots,
a_{i_r}, 1) $ and $\mathcal{A}'_{I^c}= (a_{j_1}, \cdots,
a_{j_{n-r}}, 1).$ Furthermore, $
\varphi_{\mathcal{A}, \mathcal{B}}(D_{I, I^c})
\cong \overline{M}_{g, \mathcal{B}'_{I^c}}$ with $ \mathcal{B}'_{I^c} = (b_{j_1}, \cdots, b_{j_{n-r}},
\sum_{k=1}^r b_{i_k}).$
\end{proposition}
From now on, we focus on the $g=0$ case. Let $$m = \lfloor
\frac{n}{2}\rfloor,\quad \frac{1}{m-k+1} < \epsilon_k \le
\frac{1}{m-k}\quad \text{and}\quad n\cdot \epsilon_k = (\epsilon_k,
\cdots, \epsilon_k).$$ Consider the reduction morphism
\[
\varphi_{n\cdot \epsilon_{k}, n \cdot \epsilon_{k-1}}
: \overline{M}_{0, n \cdot \epsilon_{k}} \to
\overline{M}_{0, n \cdot \epsilon_{k-1}}.
\]
Then $D_{I, I^c}$ is contracted by $\varphi_{n\cdot \epsilon_{k},
n \cdot \epsilon_{k-1}}$ if and only if $|I| = m - k + 1$.
Certainly, there are $\binom{n}{m-k+1}$ such partitions
$I\sqcup I^c$ of $\{1,\cdots,n\}$.
For two subsets $I, J \subset \{1, \cdots, n\}$ such that $|I| =
|J| = m - k + 1$, $D_{I, I^c} \cap D_{J, J^c}$ has codimension at
least two in $\overline{M}_{0,n\cdot \epsilon_k}$. So if we denote the complement of the
intersections of the divisors by
\[
\overline{M}_{0,n\cdot \epsilon_k}' = \overline{M}_{0,n\cdot \epsilon_k} - \bigcup_{|I| = |J| = m-k+1, I \ne J} D_{I, I^c} \cap D_{J,
J^c},
\]
we have $\mathrm{Pic}(\overline{M}_{0,n\cdot \epsilon_k}') = \mathrm{Pic}(\overline{M}_{0,n\cdot \epsilon_k})$. The
restriction of $\varphi_{n \cdot \epsilon_{k}, n \cdot
\epsilon_{k-1}}$ to $\overline{M}_{0,n\cdot \epsilon_k}'$ is a contraction of
$\binom{n}{m-k+1}$ \emph{disjoint} divisors and its image is an open subset
whose complement has codimension at least two. Therefore we obtain
the following equality of Picard numbers:
\begin{equation}\label{eq-eqPicNumber}
\rho(\overline{M}_{0, n \cdot \epsilon_{k}}) =
\rho(\overline{M}_{0, n \cdot \epsilon_{k-1}}) + {n \choose {m-k+1}}.
\end{equation}
It is well known that the Picard number of $\overline{M}_{0,n} $ is
\begin{equation}\label{eq-eqPic2} \rho(\overline{M}_{0,n} )=\rho(\overline{M}_{0, n \cdot
\epsilon_{m-2}})=2^{n-1}-\binom{n}{2}-1.
\end{equation}
Hence we obtain the following lemma from \eqref{eq-eqPicNumber}
and \eqref{eq-eqPic2}.
\begin{lemma}\label{lem-PicNumMze}\begin{enumerate}\item
If $n$ is odd, $\rho(\overline{M}_{0,n\cdot \epsilon_k})= n+\sum_{i=1}^k\binom{n}{m-i+1}$.\item
If $n$ is even,
$\rho(\overline{M}_{0,n\cdot \epsilon_k})=n+\frac12\binom{n}{m}+\sum_{i=2}^k\binom{n}{m-i+1}$.\end{enumerate}
\end{lemma}
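As a quick illustration, the lemma is consistent with \eqref{eq-eqPic2} at $k=m-2$: for $n=7$ and $n=8$ one finds
\[
7+\binom{7}{3}=42=2^{6}-\binom{7}{2}-1, \qquad
8+\tfrac12\binom{8}{4}+\binom{8}{3}=99=2^{7}-\binom{8}{2}-1.
\]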
\subsection{Partial desingularization}\label{sec2.2}
We recall a few results from \cite{Kirwan, Hu} on change of
stability in a blow-up.
Let $G$ be a complex reductive group acting on a projective
nonsingular variety $X$. Let $L$ be a $G$-linearized ample line
bundle on $X$. Let $Y$ be a $G$-invariant closed subvariety of
$X$, and let $\pi : \widetilde{X} \to X$ be the blow-up of $X$
along $Y$, with exceptional divisor $E$. Then for sufficiently
large $d$, $L_d = \pi^*L^d \otimes \mathcal{O} (-E)$ becomes very ample,
and there is a natural lifting of the $G$-action to $L_d$ (
\cite[\S3]{Kirwan}).
Let $X^{ss}$ (resp. $X^s$) denote the semistable (resp. stable)
part of $X$. With respect to the polarizations $L$ and $L_d$, the
following hold (\cite[\S3]{Kirwan} or \cite[Theorem 3.11]{Hu})
:\begin{equation}\label{eq-StabBlowup} \widetilde{X}^{ss} \subset
\pi^{-1}(X^{ss}), \qquad \widetilde{X}^{s} \supset
\pi^{-1}(X^{s}).\end{equation} In particular, if $X^{ss} = X^s$,
then $\widetilde{X}^{ss} = \widetilde{X}^s = \pi^{-1}(X^s)$.
For the next lemma, let us suppose $Y^{ss}=Y\cap X^{ss}$ is
nonsingular. We can compare the GIT quotient of $\widetilde{X}$ by
$G$ with respect to $L_d$ with the quotient of $X$ by $G$ with
respect to $L$.
\begin{lemma}\cite[Lemma 3.11]{Kirwan} \label{blowupGIT}
For sufficiently large $d$, $\widetilde{X}/\!/ G$ is the blow-up
of $X/\!/ G$ along the image $Y /\!/ G$ of $Y^{ss}$.
\end{lemma}
Let $\mathcal{I}$ be the ideal sheaf of $Y$. In the statement of Lemma
\ref{blowupGIT}, the blow-up is defined by the ideal sheaf
$(\mathcal{I}^m)_G$ which is the $G$-invariant part of $\mathcal{I}^m$,
for some $m$. (See the proof of \cite[Lemma
3.11]{Kirwan}.) In the cases considered in this paper, the
blow-ups always take place along \emph{reduced} ideals, i.e.
$\widetilde{X}/\!/ G$ is the blow-up of $X/\!/ G$ along the
subvariety $Y/\!/ G$ because of the following.
\begin{lemma}\label{lem-specialcasebl}
Let $G=SL(2)$ and $\mathbb{C}^*$ be the maximal torus of $G$. Suppose
$Y^{ss}$ is smooth. The blow-up $\widetilde{X}/\!/ G\to X/\!/ G$
is the blow-up of the reduced ideal of $Y/\!/ G$ if any of the
following holds:
\begin{enumerate} \item The stabilizers of points in $X^{ss}$ are
all equal to the center $\{\pm 1\}$, i.e. $\bar G=SL(2)/\{\pm 1\}$
acts on $X^{ss}$ freely.
\item If we denote the $\mathbb{C}^*$-fixed locus in $X^{ss}$ by
$Z^{ss}_{\mathbb{C}^*}$, $Y^{ss}=Y\cap X^{ss}=GZ^{ss}_{\mathbb{C}^*}$ and the
stabilizers of points in $X^{ss}-Y^{ss}$ are all $\{\pm 1\}$.
Furthermore suppose that the weights of the action of $\mathbb{C}^*$ on
the normal space of $Y^{ss}$ at any $y\in Z^{ss}_{\mathbb{C}^*}$ are $\pm
l$ for some $l\ge 1$.
\item There exists a smooth divisor $W$ of $X^{ss}$ which
intersects transversely with $Y^{ss}$ such that the stabilizers of
points in $X^{ss}-W$ are all $\mathbb{Z}_2=\{\pm 1\}$ and the
stabilizers of points in $W$ are all isomorphic to $\mathbb{Z}_4$.
\end{enumerate}
In the cases (1) and (3), $Y/\!/ G=Y^s/G$ and $X/\!/ G=X^s/G$ are
nonsingular and the morphism $\widetilde{X}/\!/ G\to X/\!/ G$ is
the smooth blow-up along the smooth subvariety $Y/\!/ G$.
\end{lemma}
\begin{proof} Let us consider the first case. Let $\bar
G=PGL(2)$. By Luna's \'etale slice theorem \cite[Appendix 1.D]{MFK},
\'etale locally near a point in $Y^{ss}$, $X^{ss}$ is
$\bar{G}\times S$ and $Y^{ss}$ is $\bar{G}\times S^Y$ for some
nonsingular locally closed subvariety $S$ and $S^Y=S\cap Y$. Then
\'etale locally $\widetilde{X}^{ss}$ is $\bar{G}\times
\mathrm{bl}_{S^Y}S$ where $\mathrm{bl}_{S^Y}S$ denotes the blow-up
of $S$ along the nonsingular variety $S^Y$. Thus the quotients
$X/\!/ G$, $Y/\!/ G$ and $\widetilde{X}/\!/ G$ are \'etale locally
$S$, $S^Y$ and $\mathrm{bl}_{S^Y}S$ respectively. This implies
that the blow-up $\widetilde{X}/\!/ G\to X/\!/ G$ is the smooth
blow-up along the reduced ideal of $Y/\!/ G$.
For the second case, note that the orbits in $Y^{ss}$ are closed
in $X^{ss}$ because the stabilizers are maximal. So we can again
use Luna's slice theorem to see that \'etale locally near a point
$y$ in $Y^{ss}$, the varieties $X^{ss}$, $Y^{ss}$ and
$\widetilde{X}$ are respectively $G\times_{\mathbb{C}^*}S$,
$G\times_{\mathbb{C}^*}S^0$ and $G\times_{\mathbb{C}^*}\mathrm{bl}_{S^0}S$ for
some nonsingular locally closed $\mathbb{C}^*$-equivariant subvariety $S$
and its $\mathbb{C}^*$-fixed locus $S^0$. Therefore the quotients $X/\!/
G$, $Y/\!/ G$ and $\widetilde{X}/\!/ G$ are \'etale locally $S/\!/
\mathbb{C}^*$, $S^0$ and $(\mathrm{bl}_{S^0}S)/\!/ \mathbb{C}^*$. Thus it
suffices to show
$$(\mathrm{bl}_{S^0}S)/\!/ \mathbb{C}^*\cong
\mathrm{bl}_{S^0}(S/\!/ \mathbb{C}^*).$$ Since $X$ is smooth, \'etale
locally we can choose our $S$ to be the normal space to the orbit
of $y$ and $S$ is decomposed into the weight spaces $S^0\oplus
S^+\oplus S^-$. As the action of $\mathbb{C}^*$ extends to $SL(2)$, the
nonzero weights are $\pm l$ by assumption. If we choose
coordinates $x_1,\cdots, x_r$ for $S^+$ and $y_1,\cdots, y_s$ for
$S^-$, the invariants are polynomials of $x_iy_j$ and thus
$(I^{2m})_{\mathbb{C}^*}=(I_{\mathbb{C}^*})^m$ for $m\ge 1$ where $I=\langle
x_1,\cdots,x_r,y_1,\cdots,y_s \rangle$ is the ideal of $S^0$. By
\cite[II Exe. 7.11]{Hartshorne}, we have
$$\mathrm{bl}_{S^0}S=\mathrm{Proj}_S(\oplus_m I^m)\cong
\mathrm{Proj}_S(\oplus_m I^{2m})$$ and thus
$$(\mathrm{bl}_{S^0}S)/\!/ \mathbb{C}^*
=\mathrm{Proj}_{S/\!/ \mathbb{C}^*}(\oplus_m I^{2m})_{\mathbb{C}^*}
=\mathrm{Proj}_{S/\!/ \mathbb{C}^*}\left(\oplus_m
(I_{\mathbb{C}^*})^{m}\right)=\mathrm{bl}_{I_{\mathbb{C}^*}}(S/\!/ \mathbb{C}^*).$$
Since $S$ is factorial and $I$ is reduced, $I_{\mathbb{C}^*}$ is reduced.
(If $f^m\in I_{\mathbb{C}^*}$, then $f\in I$ and $(g\cdot f)^m=f^m$ for
$g\in \mathbb{C}^*$. By factoriality, $g\cdot f$ may differ from $f$ only
by a constant multiple, which must be an $m$-th root of unity.
Because $\mathbb{C}^*$ is connected, the constant must be $1$ and hence
$f\in I_{\mathbb{C}^*}$.) Therefore $I_{\mathbb{C}^*}$ is the reduced ideal of
$S^0$ on $S/\!/ \mathbb{C}^*$ and hence $(\mathrm{bl}_{S^0}S)/\!/
\mathbb{C}^*\cong \mathrm{bl}_{S^0}(S/\!/ \mathbb{C}^*)$ as desired.
The last case is similar to the first case. Near a point in $W$,
$X^{ss}$ is \'etale locally $\bar G\times_{\mathbb{Z}_2}S$ where
$S=S_W\times\mathbb{C}$ for some smooth variety $S_W$. $\mathbb{Z}_2$ acts
trivially on $S_W$ and by $\pm 1$ on $\mathbb{C}$. \'Etale locally, $Y^{ss}$
is $\bar G\times_{\mathbb{Z}_2}S_Y$ where $S_Y=(S_W\cap Y)\times \mathbb{C}$.
The quotients $X/\!/ G$, $Y/\!/ G$ and $\widetilde{X}/\!/ G$ are
\'etale locally $S_W\times \mathbb{C}$, $(S_W\cap Y)\times \mathbb{C}$ and
$\mathrm{bl}_{S_W\cap Y}S_W\times \mathbb{C}$. This proves our lemma.
\end{proof}
\begin{corollary}\label{cor-blcomquot}
Suppose that (1) of Lemma \ref{lem-specialcasebl} holds. If
$Y^{ss}=Y_1^{ss}\cup\cdots\cup Y_r^{ss}$ is a transversal union of
smooth subvarieties of $X^{ss}$ and if $\widetilde{X}$ is the
blow-up of $X^{ss}$ along $Y^{ss}$, then $\widetilde{X}/\!/ G$ is
the blow-up of $X/\!/ G$ along the reduced ideal of $Y/\!/ G$
which is again a transversal union of smooth varieties $Y_i/\!/
G$. The same holds under the condition (3) of Lemma
\ref{lem-specialcasebl} if furthermore $Y_i$ are transversal to
$W$.
\end{corollary}
\begin{proof}
Because of the assumption (1), $X^{ss}=X^s.$ If
$Y^{ss}=Y_1^{ss}\cup\cdots\cup Y_r^{ss}$ is a transversal union of
smooth subvarieties of $X^{ss}$ and if $\pi:\widetilde{X}\to
X^{ss}$ is the blow-up along $Y^{ss}$, then
$\widetilde{X}^s=\widetilde{X}^{ss}=\pi^{-1}(X^s)$ is the
composition of smooth blow-ups along (the proper transforms of)
the irreducible components $Y_i^{ss}$ by Proposition
\ref{prop-bltrcen} below. For each of the smooth blow-ups, the
quotient of the blown-up space is the blow-up of the quotient
along the reduced ideal of the quotient of the center by Lemma
\ref{lem-specialcasebl}. Hence $\widetilde{X}/\!/ G\to X/\!/ G$ is
the composition of smooth blow-ups along irreducible smooth
subvarieties which are proper transforms of $Y_i/\!/ G$. Hence
$\widetilde{X}/\!/ G$ is the blow-up along the union $Y/\!/ G$ of
$Y_i/\!/ G$ by Proposition \ref{prop-bltrcen} again.
The case (3) of Lemma \ref{lem-specialcasebl} is similar and we
omit the detail.
\end{proof}
Finally we recall Kirwan's partial desingularization construction
of GIT quotients. Suppose $X^{ss} \ne X^s$ and $X^s$ is nonempty.
Kirwan in \cite{Kirwan} introduced a systematic way of blowing up
$X^{ss}$ along a sequence of nonsingular subvarieties to obtain a
variety $\widetilde{X}$ with linearized $G$ action such that
$\widetilde{X}^{ss} = \widetilde{X}^s$ and $\widetilde{X}/\!/ G$
has at worst finite quotient singularities only, as
follows:\begin{enumerate}
\item Find a maximal dimensional connected reductive subgroup $R$
such that the $R$-fixed locus $Z_R^{ss}$ in $X^{ss}$ is nonempty.
Then $$GZ_R^{ss}\cong G\times_{N^R}Z_R^{ss}$$ is a nonsingular
closed subvariety of $X^{ss}$ where $N^R$ denotes the normalizer
of $R$ in $G$.
\item Blow up $X^{ss}$ along $GZ_R^{ss}$ and find the semistable
part $X_1^{ss}$. Go back to step 1 and repeat this precess until
there are no more strictly semistable points.
\end{enumerate}
Kirwan proves that this process stops in finite steps and
$\widetilde{X}/\!/ G$ is called the \emph{partial
desingularization} of $X /\!/ G$. We will drop ``partial'' if it is
nonsingular.
\subsection{Blow-up along transversal center}\label{sec2.3} We show that the
blow-up along a center whose irreducible components are
transversal smooth varieties is isomorphic to the result of smooth
blow-ups along the irreducible components in any order. This fact
can be directly proved but instead we will see that it is an easy
special case of beautiful results of L. Li in \cite{Li}.
\begin{definition} \cite[\S1]{Li}
(1) For a nonsingular algebraic variety $X$, an \emph{arrangement}
of subvarieties $S$ is a finite collection of nonsingular
subvarieties such that all nonempty scheme-theoretic intersections
of subvarieties in $S$ are again in $S$.
(2) For an arrangement $S$, a subset $B\subset S$ is called a
\emph{building set} of $S$ if for any $s \in S- B$, the minimal
elements in $\{b \in B : b \supset s\}$ intersect transversally
and the intersection is $s$.
(3) A set of subvarieties $B$ is called a \emph{building set} if
all the possible intersections of subvarieties in $B$ form an
arrangement $S$ (called the induced arrangement of $B$) and $B$ is
a building set of $S$.
\end{definition}
The \emph{wonderful compactification} $X_B$ of $X^0=X-\cup_{b\in
B} b$ is defined as the closure of $X^0$ in $\prod_{b\in
B}\mathrm{bl}_bX$. Li then proves the following.
\begin{theorem}\cite[Theorem 1.3]{Li} \label{thm-Li1}
Let $X$ be a nonsingular variety and $B = \{b_1, \cdots,b_n\}$ be
a nonempty building set of subvarieties of $X$. Let $I_i$ be the
ideal sheaf of $b_i \in B$. \begin{enumerate}\item The wonderful
compactification $X_B$ is isomorphic to the blow-up of $X$ along
the ideal sheaf $I_1I_2\cdots I_n$. \item If we arrange $B =
\{b_1, \cdots,b_n\}$ in such an order that the first $i$ terms
$b_1,\cdots,b_i$ form a building set for any $1\le i\le n$, then
$X_B = \mathrm{bl}_{\tilde{b}_n} \cdots \mathrm{bl}_{\tilde{b}_2}
\mathrm{bl}_{b_1} X$, where each blow-up is along a nonsingular
subvariety $\tilde{b}_i$.\end{enumerate}
\end{theorem}
Here $\tilde{b}_i$ is the \emph{dominant transform} of $b_i$: at each
blow-up one takes the proper transform if it does not lie in the
blow-up center, and the inverse image if it does.
(See \cite[Definition 2.7]{Li}.)
Let $X$ be a smooth variety and let $Y_1, \cdots, Y_n$ be
transversally intersecting smooth closed subvarieties. Here,
\emph{transversal intersection} means that for any nonempty $S
\subset \{1, \cdots, n\}$ the intersection $Y_S:=\cap_{i \in
S}Y_i$ is smooth and the normal bundle $N_{Y_S/X}$ in $X$ of $Y_S$
is the direct sum of the restrictions of the normal bundles
$N_{Y_i/X}$ in $X$ of $Y_i$, i.e.
$$N_{Y_S/X} = \bigoplus_{i\in S} N_{Y_i/X}|_{Y_S}.$$
If we denote the ideal of $Y_i$ by $I_i$, the ideal of the union
$\cup_{i=1}^n Y_i$ is the product $I_1I_2\cdots I_n$. Moreover for
any permutation $\tau\in S_n$ and $1\le i\le n$,
$B=\{Y_{\tau(1)},\cdots,Y_{\tau(i)}\}$ is clearly a building set.
By Theorem \ref{thm-Li1} we obtain the following.
\begin{proposition}\label{prop-bltrcen}
Let $Y=Y_1\cup\cdots \cup Y_n$ be a union of transversally
intersecting smooth subvarieties of a smooth variety $X$. Then the
blow-up of $X$ along $Y$ is isomorphic to
\[
\mathrm{bl}_{\tilde Y_{\tau(n)}}\cdots \mathrm{bl}_{\tilde
Y_{\tau(2)}}\mathrm{bl}_{Y_{\tau(1)}} X
\]
for any permutation $\tau\in S_n$ where $\tilde{Y}_i$ denotes the
proper transform of $Y_i$.
\end{proposition}
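For example, the two planes $Y_1=\{x_1=x_2=0\}$ and $Y_2=\{x_3=x_4=0\}$ in $X=\mathbb{C}^4$ intersect transversally in the above sense: $Y_1\cap Y_2$ is the origin and
\[
N_{Y_1\cap Y_2/X}=T_0\mathbb{C}^4=N_{Y_1/X}|_0\oplus N_{Y_2/X}|_0,
\]
so the blow-up of $\mathbb{C}^4$ along $Y_1\cup Y_2$ may be computed by blowing up $Y_1$ and then the proper transform of $Y_2$, or in the opposite order. On the other hand, two lines through the origin in $\mathbb{C}^3$ are not transversal in this sense, since the two normal bundles have total rank $4>3$.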
\subsection{Log canonical model}
Let $X$ be a normal projective variety and $D = \sum a_i D_i$ be a
rational linear combination of prime divisors of $X$ with $0 < a_i
\le 1$. A \emph{log resolution} of $(X, D)$ is a birational
morphism $\pi : Y \to X$ from a smooth projective variety $Y$ to
$X$ such that $\pi^{-1}(D_i)$ and the exceptional divisors $E_i$
of $\pi$ are simple normal crossing divisors on $Y$. Then the
discrepancy formula
\[
K_Y + \pi^{-1}_* (D) \equiv
\pi^*(K_X + D) + \sum_{E_i : \mbox{exceptional}} a(E_i, X, D)E_i,
\]
defines the \emph{discrepancy} of $(X, D)$ by
\[
\mathrm{discrep}(X,D) := \inf \{ a(E, X, D) : E : \mbox{exceptional}\}.
\]
Let $(X, D)$ be a pair where $X$ is a normal projective variety
and $D = \sum a_i D_i$ be a rational linear combination of prime
divisors with $0 < a_i \le 1$. Suppose that $K_X + D$ is
$\mathbb{Q}$-Cartier. A pair $(X, D)$ is \emph{log canonical (abbrev.
lc)} if $\mathrm{discrep}(X,D) \ge -1$ and \emph{Kawamata log
terminal (abbrev. klt)} if $\mathrm{discrep}(X,D) > -1$ and
$\lfloor D \rfloor \le 0$.
When $X$ is smooth and $D$ is a normal crossing effective divisor,
$(X, D)$ is always lc and is klt if all $a_i < 1$.
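As a simple illustration of the discrepancy (not needed later), let $X$ be a smooth surface, $C\subset X$ a smooth curve, $0<a\le 1$, and let $\pi:Y\to X$ be the blow-up of a point $p\in C$ with exceptional divisor $E$. Then $K_Y=\pi^*K_X+E$ and $\pi^{-1}_*(aC)=a\,\pi^*C-aE$, so
\[
K_Y+\pi^{-1}_*(aC)=\pi^*(K_X+aC)+(1-a)E,
\]
that is, $a(E,X,aC)=1-a\ge 0>-1$, in agreement with the statement above: $(X,aC)$ is lc, and klt when $a<1$.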
\begin{definition}
For an lc pair $(X, D)$, the \emph{canonical ring} is
\[
R(X, K_X + D) := \oplus_{l \ge 0} H^0(X, \mathcal{O} _X(\lfloor l (K_X + D) \rfloor))
\]
and the \emph{log canonical model} is
\[
\mathrm{Proj}\; R(X, K_X + D).
\]
\end{definition}
In \cite{BCHM}, Birkar, Cascini, Hacon and McKernan proved that
for any klt pair $(X, D)$, the canonical ring is finitely
generated, so the log canonical model always exists.
\section{Moduli of weighted parameterized stable
curves}\label{sec3}
Let $X$ be a smooth projective variety. In this section, we
decompose the map $$X[n]\to X^n$$ defined by Fulton and MacPherson
(\cite{FM}) into a \emph{symmetric} sequence of blow-ups along
transversal centers. A. Mustata and M. Mustata already considered
this problem in their search for intermediate moduli spaces for
the stable map spaces in \cite[\S1]{Mustata}. Let us recall their
construction.
\noindent \textbf{Stage 0}: Let $F_0=X^n$ and $\Gamma_0=X^n\times
X$. For a subset $S$ of $\{1,2,\cdots,n\}$, we let
\[
\Sigma^S_0=\{(x_1,\cdots,x_n)\in X^n\,|\, x_i=x_j \text{ if }
i,j\in S\}, \quad \Sigma^k_0=\cup_{|S|=k}\Sigma_0^S
\]
and let $\sigma^i_0\subset \Gamma_0$ be the graph of the $i$-th
projection $X^n\to X$. Then $\Sigma_0^n\cong X$ is a smooth
subvariety of $F_0$. For each $S$, fix any $i_S\in S$.
\noindent \textbf{Stage $1$}: Let $F_1$ be the blow-up of $F_0$
along $\Sigma_0^n$. Let $\Sigma_1^n$ be the exceptional divisor
and $\Sigma_1^S$ be the proper transform of $\Sigma_0^S$ for
$|S|\ne n$. Let us define $\Gamma_1$ as the blow-up of
$F_1\times_{F_0}\Gamma_0$ along $\Sigma^n_1\times_{F_0}\sigma^1_0$
so that we have a flat family
\[
\Gamma_1\to F_1\times_{F_0}\Gamma_0 \to F_1
\]
of varieties over $F_1$. Let $\sigma_1^i$ be the proper transform
of $\sigma_0^i$ in $\Gamma_1$. Note that $\Sigma^S_{1}$ for
$|S|=n-1$ are all disjoint smooth varieties of the same dimension.
\noindent \textbf{Stage $2$}: Let $F_2$ be the blow-up of $F_1$
along $\Sigma_1^{n-1}=\bigcup_{|S|=n-1}\Sigma_1^S$. Let $\Sigma_2^S$
be the exceptional divisor lying over $\Sigma_1^S$ if $|S|=n-1$
and $\Sigma_2^S$ be the proper transform of $\Sigma_1^S$ for
$|S|\ne n-1$. Let us define $\Gamma_2$ as the blow-up of
$F_2\times_{F_1}\Gamma_1$ along the disjoint union of
$\Sigma^S_2\times_{F_1}\sigma^{i_S}_1$ for all $S$ with $|S|=n-1$
so that we have a flat family
\[
\Gamma_2\to F_2\times_{F_1}\Gamma_1 \to F_2
\]
of varieties over $F_2$. Let $\sigma_2^i$ be the proper transform
of $\sigma_1^i$ in $\Gamma_2$. Note that $\Sigma^S_{2}$ for
$|S|=n-2$ in $F_2$ are all transversal smooth varieties of the same
dimension. Hence the blow-up of $F_2$ along their union is smooth
by \S\ref{sec2.3}.
We can continue this way until we reach the last stage.
\noindent \textbf{Stage $n-1$}: Let $F_{n-1}$ be the blow-up of
$F_{n-2}$ along $\Sigma_{n-2}^2=\bigcup_{|S|=2}\Sigma_{n-2}^S$. Let
$\Sigma_{n-1}^S$ be the exceptional divisor lying over
$\Sigma_{n-2}^S$ if $|S|=2$ and $\Sigma_{n-1}^S$ be the proper
transform of $\Sigma_{n-2}^S$ for $|S|\ne 2$. Let us define
$\Gamma_{n-1}$ as the blow-up of
$F_{n-1}\times_{F_{n-2}}\Gamma_{n-2}$ along the disjoint union of
$\Sigma^S_{n-1}\times_{F_{n-2}}\sigma^{i_S}_{n-2}$ for all $S$
with $|S|=2$ so that we have a flat family
\[
\Gamma_{n-1}\to F_{n-1}\times_{F_{n-2}}\Gamma_{n-2} \to F_{n-1}
\]
of varieties over $F_{n-1}$. Let $\sigma_{n-1}^i$ be the proper
transform of $\sigma_{n-2}^i$ in $\Gamma_{n-1}$.
Nonsingularity of the blown-up spaces $F_k$ is guaranteed by the
following.
\begin{lemma}\label{lem3-1}
$\Sigma^S_{k}$ for $|S|\ge n-k$ are transversal in $F_{k}$ i.e.
the normal bundle in $F_{k}$ of the intersection
$\cap_i\Sigma^{S_i}_k$ for distinct $S_i$ with $|S_i|\ge n-k$ is
the direct sum of the restriction of the normal bundles in $F_k$
of $\Sigma^{S_i}_k$.
\end{lemma}
\begin{proof}
This is a special case of the inductive construction of the
wonderful compactification in \cite{Li}. (See \S \ref{sec2.3}.) In
our situation, the building set is the set of all diagonals $B_0 =
\{\Sigma_0^S | S \subset \{1, 2, \cdots, n\}\}$. By
\cite[Proposition 2.8]{Li}, $B_k=\{\Sigma_k^S\}$ is a building set
of an arrangement in $F_k$ and hence the desired transversality
follows.
\end{proof}
By construction, $F_k$ are all smooth and $\Gamma_k\to F_k$ are
equipped with $n$ sections $\sigma_k^i$. When $\dim X=1$,
$\Sigma^2_{n-2}$ is a divisor and thus $F_{n-1}=F_{n-2}$. In
\cite[Proposition 1.8]{Mustata}, Mustata and Mustata prove that
the varieties $F_k$ are fine moduli spaces for some moduli
functors as follows.
\begin{definition}\label{def3.1} \cite[Definition 1.7]{Mustata}
A family of \emph{$k$-stable parameterized rational curves} over
$S$ consists of a flat family of curves $\pi:C\to S$, a morphism
$\phi:C\to S\times \mathbb{P}^1$ of degree 1 over each geometric fiber
$C_s$ of $\pi$ and $n$ marked sections $\sigma^1,\cdots, \sigma^n$
of $\pi$ such that for all $s\in S$, \begin{enumerate}
\item no more than $n-k$ of the marked points $\sigma^i(s)$ in $C_s$ coincide;
\item any ending irreducible curve in $C_s$, except the parameterized one, contains
more than $n-k$ marked points;
\item all the marked points are smooth points of the curve $C_s$;
\item $C_s$ has finitely many automorphisms preserving the marked
points and the map to $\mathbb{P}^1$.
\end{enumerate}
\end{definition}
\begin{proposition}\label{prop3.2} \cite[Proposition 1.8]{Mustata}
Let $X=\mathbb{P}^1$. The smooth variety $F_k$ finely represents the
functor of isomorphism classes of families of $k$-stable
parameterized rational curves. In particular, $F_{n-2}=F_{n-1}$ is
the Fulton-MacPherson space $\mathbb{P}^1[n]$.
\end{proposition}
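For instance, when $n=3$ the only nontrivial blow-up in the sequence is the one along the small diagonal $\Sigma^3_0\cong \mathbb{P}^1$ (the pairwise diagonals are already divisors in $(\mathbb{P}^1)^3$), so
\[
\mathbb{P}^1[3]=F_1=\mathrm{bl}_{\Sigma^3_0}(\mathbb{P}^1)^3.
\]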
\section{Blow-up construction of moduli of pointed stable
curves}\label{sec4}
In the previous section, we decomposed the natural map
$\mathbb{P}^1[n]\to (\mathbb{P}^1)^n$ of the Fulton-MacPherson space into a
sequence
\begin{equation}\label{eq4-1}\xymatrix{
\mathbb{P}^1[n]\ar@{=}[r] & F_{n-2}\ar[r]^{\psi_{n-2}} &
F_{n-3}\ar[r]^{\psi_{n-3}} & \cdots \ar[r]^{\psi_2} &
F_1\ar[r]^{\psi_1} & F_0\ar@{=}[r] & (\mathbb{P}^1)^n }\end{equation} of
blow-ups along transversal centers. By construction the morphisms
above are all equivariant with respect to the action of $G=SL(2)$.
For GIT stability, we use the \emph{symmetric} linearization
$L_0=\mathcal{O} (1,\cdots,1)$ for $F_0$. For $F_k$ we use the
linearization $L_k$ inductively defined by
$L_k=\psi_k^*L_{k-1}\otimes \mathcal{O} (-\delta_kE_k)$ where $E_k$ is the
exceptional divisor of $\psi_k$ and $\{\delta_k\}$ is a decreasing
sequence of sufficiently small positive numbers. Let
$m=\lfloor\frac{n}{2}\rfloor$. In this section, we prove the
following.
\begin{theorem}\label{thm4-1} (i) The GIT quotient
$F_{n-m+k}/\!/ G$ for $1\le k\le m-2$ is
isomorphic to Hassett's moduli space of weighted pointed stable
rational curves $\overline{M}_{0,n\cdot \epsilon_k}$ with weights $n\cdot
\epsilon_k=(\epsilon_k,\cdots,\epsilon_k)$ where
$\frac1{m+1-k}<\epsilon_k\le \frac1{m-k}$. The induced maps on
quotients \[ \overline{M}_{0,n\cdot \epsilon_k} = F_{n-m+k}/\!/ G \to F_{n-m+k-1}/\!/ G
=\overline{M}_{0,n\cdot \epsilon_{k-1}}\] are blow-ups along transversal centers for $k=2,\cdots,
m-2$.
(ii) If $n$ is odd, $$F_{m+1}/\!/ G=\cdots =F_0/\!/
G=(\mathbb{P}^1)^n/\!/ G=\overline{M}_{0,n\cdot \epsilon_0} $$ and we have a sequence of blow-ups
\[
\overline{M}_{0,n} =\overline{M}_{0,n\cdot \epsilon_{m-2}} \to \overline{M}_{0,n\cdot \epsilon_{m-3}} \to \cdots \to \overline{M}_{0,n\cdot \epsilon_1} \to \overline{M}_{0,n\cdot \epsilon_0} =
(\mathbb{P}^1)^n/\!/ G
\]
whose centers are transversal unions of equidimensional smooth
varieties.
(iii) If $n$ is even, $\overline{M}_{0,n\cdot \epsilon_1} $ is a desingularization of
$$(\mathbb{P}^1)^n/\!/ G=F_0/\!/ G=\cdots =F_m/\!/ G,$$
obtained by blowing up $\frac12\binom{n}{m}$
singular points so that we have a sequence of blow-ups
\[
\overline{M}_{0,n} =\overline{M}_{0,n\cdot \epsilon_{m-2}} \to \overline{M}_{0,n\cdot \epsilon_{m-3}} \to \cdots \to \overline{M}_{0,n\cdot \epsilon_1} \to (\mathbb{P}^1)^n/\!/ G.
\]
\end{theorem}
\begin{remark}
(1) When $n$ is even, $\overline{M}_{0,n\cdot \epsilon_0} $ is not defined because the sum of
weights does not exceed $2$.
(2) When $n$ is even, $\overline{M}_{0,n\cdot \epsilon_1} $ is Kirwan's (partial)
desingularization of the GIT quotient $(\mathbb{P}^1)^n/\!/ G$ with
respect to the symmetric linearization $L_0=\mathcal{O} (1,\cdots,1)$.
\end{remark}
Let $F_k^{ss}$ (resp. $F_k^s$) denote the semistable (resp. stable)
part of $F_k$. By \eqref{eq-StabBlowup}, we have
\begin{equation}\label{eq4-2}
\psi_k(F_k^{ss})\subset F_{k-1}^{ss},\qquad
\psi_k^{-1}(F_{k-1}^s)\subset F_k^s.
\end{equation}
Also recall from \cite{K1} that $x=(x_1,\cdots,x_n)\in (\mathbb{P}^1)^n$
is semistable (resp. stable) if at most $\frac{n}2$ (resp. fewer than
$\frac{n}2$) of the $x_i$ coincide. In particular,
when $n$ is odd, $\psi_k^{-1}(F_{k-1}^s)=F_k^s=F_k^{ss}$ for all
$k$ and
\begin{equation}\label{eq4-3}
F_{m+1}^s=F_{m}^s=\cdots =F_0^s,
\end{equation}
because the blow-up centers lie in the unstable part. Therefore we
have
\begin{equation}\label{eq4-4}
F_{m+1}/\!/ G = \cdots =F_0/\!/ G= (\mathbb{P}^1)^n/\!/ G.
\end{equation}
When $n$ is even, $\psi_k$ induces a morphism $F_k^{ss}\to
F_{k-1}^{ss}$ and we have
\begin{equation}\label{eq4-5}
F_m^{ss}=F_{m-1}^{ss}=\cdots =F_0^{ss} \quad \text{and}\quad F_m/\!/
G=\cdots =F_0/\!/ G=(\mathbb{P}^1)^n/\!/ G.
\end{equation}
Let us consider the case where $n$ is odd first. By forgetting the
parameterization of the parameterized component of each member of the
family $(\Gamma_{m+k+1}\to F_{m+k+1},\sigma_{m+k+1}^i)$, we get a
rational map $F_{m+k+1}\dashrightarrow \overline{M}_{0,n\cdot \epsilon_k}$ for $k=0,1,\cdots,
m-2$. By the definition of the stability in \S\ref{sec2.1}, a
fiber over $\xi\in F_{m+k+1}$ is not stable with respect to
$n\cdot \epsilon_k=(\epsilon_k,\cdots,\epsilon_k)$ if and only if,
in some irreducible component of the curve, the number $a$ of
nodes and the number $b$ of marked points satisfy
$b\epsilon_k+a\le 2$. Obviously this cannot happen on the (GIT)
stable part $F_{m+k+1}^s$. Therefore we obtain a morphism
$F_{m+k+1}^s\to \overline{M}_{0,n\cdot \epsilon_k}$. By construction this morphism is
$G$-invariant and thus induces a morphism
$$\phi_k:F_{m+k+1}/\!/ G\to \overline{M}_{0,n\cdot \epsilon_k}.$$ Since the stabilizer groups
in $G$ of points in $F_0^s$ are all $\{ \pm 1\}$, the quotient
$$\bar{\psi}_{m+k+1}:F_{m+k+1}/\!/ G\to F_{m+k}/\!/ G$$ of
$\psi_{m+k+1}$ is also a blow-up along a center which consists of
transversal smooth varieties by Corollary \ref{cor-blcomquot}.
Since the blow-up center has codimension $\ge 2$, the Picard
number increases by $\binom{n}{m-k+1}$ for $k=1, \cdots, m-2$.
Since the character group of $SL(2)$ has no free part, by the
descent result in \cite{DN}, the Picard number of $F_{m+1}/\!/
G=F_0^s/G$ is the same as the Picard number of $F_0^s$ which
equals the Picard number of $F_0$. Therefore $\rho(F_{m+1}/\!/
G)=n$ and the Picard number of $F_{m+k+1}/\!/ G$ is
\[
n+\sum_{i=1}^{k}\binom{n}{m-i+1}
\]
which equals the Picard number of $\overline{M}_{0,n\cdot \epsilon_k}$ by Lemma
\ref{lem-PicNumMze}. Since $\overline{M}_{0,n\cdot \epsilon_k}$ and $F_{m+k+1}/\!/ G$ are
smooth and their Picard numbers coincide, we conclude that
$\phi_k$ is an isomorphism, as desired. This proves Theorem
\ref{thm4-1} for odd $n$.
Now let us suppose $n$ is even. For ease of understanding, we
divide our proof into several steps.
\noindent \underline{Step 1:} For $k\ge 1$, $F_{m+k}/\!/ G$ are
nonsingular and isomorphic to the partial desingularizations
$\tilde{F}_{m+k}/\!/ G$.
The GIT quotients $F_{m+k}/\!/ G$ may be singular because there
are $\mathbb{C}^*$-fixed points in the semistable part $F_{m+k}^{ss}$. So
we use Kirwan's partial desingularization of the GIT quotients
$F_{m+k}/\!/ G$ (\S\ref{sec2.2}). The following lemma says that
the partial desingularization process has no effect on the
quotient $F_{m+k}/\!/ G$ for $k\ge 1$.
\begin{lemma}\label{lem4-3}
Let $F$ be a smooth projective variety with linearized $G=SL(2)$
action and let $F^{ss}$ be the semistable part. Fix a maximal
torus $\mathbb{C}^*$ in $G$. Let $Z$ be the set of $\mathbb{C}^*$-fixed points
in $F^{ss}$. Suppose the stabilizers of all points in the stable
part $F^{s}$ are $\{\pm 1\}$ and $Y=GZ$ is the union of all closed
orbits in $F^{ss}-F^s$. Suppose that the stabilizers of points in
$Z$ are precisely $\mathbb{C}^*$. Suppose further that $Y=GZ$ is of
codimension $2$. Let $\tilde{F}\to F^{ss}$ be the blow-up of
$F^{ss}$ along $Y$ and let $\tilde{F}^s$ be the stable part in
$\tilde{F}$ with respect to a linearization as in \S\ref{sec2.2}.
Finally suppose that for each $y\in Z$, the weights of the $\mathbb{C}^*$
action on the normal space to $Y$ are $\pm l$ for some $l>0$. Then
$\tilde{F}/\!/ G=\tilde{F}^s/G\cong F/\!/ G$ and $F/\!/ G$ is
nonsingular.
\end{lemma}
\begin{proof}
Since $\bar G=G/\{\pm 1\}$ acts freely on $F^s$, $F^s/G$ is
smooth. By assumption, $Y$ is the union of all closed orbits in
$F^{ss}-F^s$ and hence $F/\!/ G-F^s/G=Y/G$. By Lemma
\ref{lem-specialcasebl} (2), $\tilde{F}^s/G$ is the blow-up of
$F/\!/ G$ along the reduced ideal of $Y/G$. By our assumption, $Z$
is of codimension $4$ and
$$Y/G=GZ/G\cong G\times _{N^{\mathbb{C}^*}}Z/G\cong Z/\mathbb{Z}_2$$
where $N^{\mathbb{C}^*}$ is the normalizer of $\mathbb{C}^*$ in $G$. Since the
dimension of $F/\!/ G$ is $\dim F-3$, the blow-up center $Y/G$ is
nonsingular of codimension $1$. By Luna's slice theorem
(\cite[Appendix 1.D]{MFK}), the singularity of $F/\!/ G$ at any
point $[Gy]\in Y/G$ is $\mathbb{C}^2/\!/ \mathbb{C}^*$ where the weights are
$\pm l$. Obviously this is smooth and hence $F/\!/ G$ is smooth
along $Y/G$. Since the blow-up center is a smooth divisor, the
blow-up map $\tilde{F}^s/G\to F/\!/ G$ has to be an isomorphism.
\end{proof}
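The smoothness assertion at the end of the proof amounts to the following elementary computation: if $\mathbb{C}^*$ acts on $\mathbb{C}^2$ with coordinates $x,y$ of weights $l$ and $-l$, then the invariant ring is $\mathbb{C}[x,y]^{\mathbb{C}^*}=\mathbb{C}[xy]$ and hence
\[
\mathbb{C}^2/\!/ \mathbb{C}^*=\mathrm{Spec}\,\mathbb{C}[xy]\cong \mathbb{C},
\]
which is smooth.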
Let $Z_{m+k}$ be the $\mathbb{C}^*$-fixed locus in $F_{m+k}^{ss}$ and let
$Y_{m+k}=GZ_{m+k}$. Then $Y_{m+k}$ is the disjoint union of
$$\Sigma_{m+k}^{S,S^c}:=\Sigma_{m+k}^S\cap \Sigma_{m+k}^{S^c}\cap
F_{m+k}^{ss} \quad \text{for }|S|=m, S^c=\{1,\cdots,n\}-S $$ which
are nonsingular of codimension $2$ for $k\ge 1$ by Lemma
\ref{lem3-1}. For a point $$(C,p_1,\cdots,p_n,f:C\to\mathbb{P}^1)\in
\Sigma^{S,S^c}_{m+k},$$ the parameterized component of $C$ (i.e.
the unique component which is not contracted by $f$) has two nodes
and no marked points. The normal space $\mathbb{C}^2$ to
$\Sigma^{S,S^c}_{m+k}$ is given by the smoothing deformations of
the two nodes and hence the stabilizer $\mathbb{C}^*$ acts with weights
$2$ and $-2$.
The blow-up $\tilde{F}_{m+k}$ of $F_{m+k}^{ss}$ along $Y_{m+k}$
has no strictly semistable points by \cite[\S6]{Kirwan}. In fact,
the unstable locus in $\tilde{F}_{m+k}$ is the proper transform of
$\Sigma^S_{m+k}\cup \Sigma^{S^c}_{m+k}$ and the stabilizers of
points in $\tilde{F}^s_{m+k}$ are either $\mathbb{Z}_2=\{\pm 1\}$ (for
points not in the exceptional divisor of $\tilde{F}^s_{m+k}\to
F^{ss}_{m+k}$) or $\mathbb{Z}_4=\{\pm 1,\pm i\}$ (for points in the
exceptional divisor). Therefore, by Lemma \ref{lem4-3} and Lemma
\ref{lem-specialcasebl} (3), we have isomorphisms
\begin{equation}\label{eq4-10}
\tilde{F}_{m+k}^s/G\cong F_{m+k}/\!/ G \end{equation} and
$F_{m+k}/\!/ G$ are nonsingular for $k\ge 1$.
\noindent \underline{Step 2:} The partial desingularization
$\tilde{F}_m/\!/ G$ is a nonsingular variety obtained by blowing
up the $\frac12\binom{n}{m}$ singular points of $F_m/\!/
G=(\mathbb{P}^1)^n/\!/ G$.
Note that $Y_m$ in $F_m^{ss}$ is the disjoint union of
$\frac12\binom{n}{m}$ orbits $\Sigma_m^{S,S^c}$ for $|S|=m$. By
Lemma \ref{lem-specialcasebl} (2), the morphism
$\tilde{F}_m^s/G\to F_m/\!/ G$ is the blow-up at the
$\frac12\binom{n}{m}$ points given by the orbits of the blow-up
center. A point in $\Sigma^{S,S^c}_{m}$ is represented by
$(\mathbb{P}^1,p_1,\cdots,p_n,\mathrm{id})$ with $p_i=p_j$ if $i,j\in S$
or $i,j\in S^c$. Without loss of generality, we may let
$S=\{1,\cdots, m\}$. The normal space to an orbit
$\Sigma^{S,S^c}_{m}$ is given by
\[
(T_{p_1}\mathbb{P}^1)^{m-1}\times
(T_{p_{m+1}}\mathbb{P}^1)^{m-1}=\mathbb{C}^{m-1}\times\mathbb{C}^{m-1}
\]
and $\mathbb{C}^*$ acts with weights $2$ and $-2$ respectively on the two
factors. By Luna's slice theorem, \'etale locally near
$\Sigma^{S,S^c}_{m}$, $F_m^{ss}$ is
$G\times_{\mathbb{C}^*}(\mathbb{C}^{m-1}\times\mathbb{C}^{m-1})$ and $\tilde{F}_m$ is
$G\times_{\mathbb{C}^*}\mathrm{bl}_{0}(\mathbb{C}^{m-1}\times\mathbb{C}^{m-1})$ while
$\tilde{F}_m^s$ is
$G\times_{\mathbb{C}^*}\left[\mathrm{bl}_{0}(\mathbb{C}^{m-1}\times\mathbb{C}^{m-1})
-\mathrm{bl}_{0}\mathbb{C}^{m-1}\sqcup \mathrm{bl}_{0}\mathbb{C}^{m-1}\right]$.
By an explicit local calculation, the stabilizers of points on the
exceptional divisor of $\widetilde{F}_m$ are $\mathbb{Z}_4=\{\pm 1,\pm
i\}$ and the stabilizers of points over $F_m^s$ are $\mathbb{Z}_2=\{\pm
1\}$. Since the locus of nontrivial stabilizers for the action of
$\bar G$ on $\tilde{F}^s_m$ is a smooth divisor with stabilizer
$\mathbb{Z}_2$, $\tilde{F}_m/\!/ G=\tilde{F}^s_m/G$ is smooth and hence
$\tilde{F}_m^s/G$ is the desingularization of $F_m/\!/ G$ obtained
by blowing up its $\frac12\binom{n}{m}$ singular points.
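For illustration, when $n=6$ the quotient $F_3/\!/ G=(\mathbb{P}^1)^6/\!/ G$ (with the symmetric linearization) is classically the Segre cubic threefold; its $\frac12\binom{6}{3}=10$ nodes are the images of the closed $3+3$ orbits, and $\tilde{F}_3/\!/ G$ is obtained by blowing up these ten nodes.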
\noindent \underline{Step 3:} The morphism
$\bar\psi_{m+k+1}:F_{m+k+1}/\!/ G\to F_{m+k}/\!/ G$ is the blow-up
along the union of transversal smooth subvarieties for $k\ge 1$.
For $k=0$, we have $\tilde{F}^s_{m+1}=\tilde{F}^s_m$ and thus
$$F_{m+1}/\!/ G\cong
\tilde{F}^s_{m+1}/G=\tilde{F}^s_m/G=\tilde{F}_m/\!/ G$$ is the
blow-up of $F_m/\!/ G$ at its $\frac12\binom{n}{m}$ singular points.
From Lemma \ref{lem3-1}, we know $\Sigma^S_{m+k}$ for $|S|\ge m-k$
are transversal in $F_{m+k}$. In particular,
$$\bigcup_{|S|=m}\Sigma_{m+k}^S\cap \Sigma_{m+k}^{S^c}$$ intersects transversely with the
blow-up center $$\bigcup_{|S'|=m-k} \Sigma^{S'}_{m+k}$$ for
$\psi_{m+k+1}:F_{m+k+1}\to F_{m+k}$. Hence, by Proposition
\ref{prop-bltrcen} we have a commutative diagram
\begin{equation}\label{eq4-12}
\xymatrix{ \tilde{F}_{m+k+1}\ar[r]\ar[d] & \tilde{F}_{m+k}\ar[d]\\
F_{m+k+1}^{ss}\ar[r] & F_{m+k}^{ss} }
\end{equation}
for $k\ge 1$ where the top horizontal arrow is the blow-up along
the proper transforms $\tilde{\Sigma}^{S'}_{m+k}$ of
$\Sigma^{S'}_{m+k}$, $|S'|=m-k$. By Corollary \ref{cor-blcomquot},
we deduce that for $k\ge 1$, $\bar\psi_{m+k+1}$ is the blow-up
along the transversal union of smooth subvarieties
$\tilde{\Sigma}^{S'}_{m+k}/\!/ G\cong \Sigma^{S'}_{m+k}/\!/ G$.
For $k=0$, the morphism $\tilde{F}_{m+1}\to \tilde{F}_m$ is the
blow-up along the proper transforms of $\Sigma_m^S$ and
$\Sigma_m^{S^c}$ for $|S|=m$. But these are unstable in
$\tilde{F}_m$ and hence the morphism $\tilde{F}_{m+1}^s\to
\tilde{F}_m^s$ on the stable part is the identity map. So we
obtain $\tilde{F}_{m+1}^s= \tilde{F}_m^s$ and
$\tilde{F}_{m+1}^s/G\cong \tilde{F}_m^s/G$.
\noindent \underline{Step 4:} Calculation of Picard numbers.
The Picard number of $F_m^{ss}=F_0^{ss}\subset F_0=(\mathbb{P}^1)^n$ is
$n$ and so the Picard number of $\tilde{F}_m$ is
$n+\frac12\binom{n}{m}$. By the descent lemma of \cite{DN} as in
the odd degree case, the Picard number of
$$F_{m+1}/\!/ G\cong \tilde{F}_{m+1}^s/G=\tilde{F}_m^s/G$$ equals the
Picard number $n+\frac12\binom{n}{m}$ of $\tilde{F}_m^s$. Since
the blow-up center of $\tilde{F}_{m+k}/\!/ G\to
\tilde{F}_{m+k-1}/\!/ G$ has $\binom{n}{m-k+1}$ irreducible
components, the Picard number of $\tilde{F}_{m+k}/\!/ G\cong
F_{m+k}/\!/ G$ is
\begin{equation}\label{eq4-13}
n+\frac12\binom{n}{m}+\sum_{i=2}^{k}\binom{n}{m-i+1}
\end{equation}
for $k\ge 2$.
\noindent \underline{Step 5:} Completion of the proof.
As in the odd degree case, for $k\ge 1$ the universal
family $\pi_k:\Gamma_{m+k}\to F_{m+k}$ gives rise to a family of
pointed curves by considering the linear system
$K_{\pi_k}+\epsilon_k\sum_i\sigma^i_{m+k}$. Over the semistable
part $F_{m+k}^{ss}$ it is straightforward to check that this gives
us a family of $n\cdot \epsilon_k$-stable pointed curves.
Therefore we obtain an invariant morphism $$F_{m+k}^{ss}\to
\overline{M}_{0,n\cdot \epsilon_k}$$ which induces a morphism $$F_{m+k}/\!/ G\to \overline{M}_{0,n\cdot \epsilon_k}.$$ By
Lemma \ref{lem-PicNumMze}, the Picard number of $\overline{M}_{0,n\cdot \epsilon_k}$ coincides
with that of $F_{m+k}/\!/ G$ given in \eqref{eq4-13}. Hence the
morphism $F_{m+k}/\!/ G\to \overline{M}_{0,n\cdot \epsilon_k}$ is an isomorphism as desired.
This completes our proof of Theorem \ref{thm4-1}.
\begin{remark}\label{2010rem1}
Let $S \subset \{1, 2, \cdots, n\}$ with $|S| = m-k$. On
$\overline{M}_{0,n\cdot \epsilon_{k}}$, the blow-up center for
$\overline{M}_{0,n\cdot \epsilon_{k+1}}\to\overline{M}_{0,n\cdot \epsilon_k}$ is the union of
$\binom{n}{m-k}$ smooth subvarieties $\Sigma_{n-m+k}^S/\!/ G$.
Each $\Sigma_{n-m+k}^S /\!/ G$ parameterizes weighted pointed
stable curves with $m-k$ colliding marked points $s_{i_1},
s_{i_2}, \cdots, s_{i_{m-k}}$ for $i_j \in S$. On the other hand,
for any member of $\overline{M}_{0,n\cdot \epsilon_k}$, no $m-k+1$ marked points can collide.
So we can replace $m-k$ marked points $s_{i_j}$ with $i_j \in S$
by a single marked point which cannot collide with any other
marked points. Therefore, an irreducible component
$\Sigma_{n-m+k}^S /\!/ G$ of the blow-up center is isomorphic to
the moduli space of weighted pointed rational curves
$\overline{M}_{0, (1, \epsilon_k, \cdots, \epsilon_k)}$ with
$n-m+k+1$ marked points as discovered by Hassett. (See Proposition
\ref{reduction}.)
\end{remark}
\begin{remark}
For the moduli space of \emph{unordered} weighted pointed stable
curves $\overline{M}_{0,n\cdot \epsilon_k}/S_n$, we can simply take quotients by the $S_n$
action of the blow-up process in Theorem \ref{thm4-1}. In
particular, $\overline{M}_{0,n} /S_n$ is obtained by a sequence of weighted
blow-ups from $\left((\mathbb{P}^1)^n/\!/ G\right)/ S_n=\mathbb{P}^n/\!/ G.$
\end{remark}
\section{Log canonical models of $\overline{M}_{0,n} $}\label{sec6}
In this section, we give a relatively elementary and
straightforward proof of the following theorem of M. Simpson by
using Theorem \ref{thm4-1}. Let $M_{0,n}$ be the moduli space of
$n$ \emph{distinct} points in $\mathbb{P}^1$ up to $\mathrm{Aut}(\mathbb{P}^1)$.
\begin{theorem} \label{thm6.1}\emph{(M. Simpson \cite{Simpson})}
Let $\alpha$ be a rational number satisfying $\frac{2}{n-1} <
\alpha \le 1$ and let $D=\overline{M}_{0,n} -M_{0,n}$ denote the boundary
divisor. Then the log canonical model
\[
\overline{M}_{0,n} (\alpha) =
\mathrm{Proj}\;\left(\bigoplus_{l \ge 0} H^0(\overline{M}_{0,n} , \mathcal{O} (\lfloor l (K_{\overline{M}_{0,n} }
+\alpha D) \rfloor))\right)
\]
satisfies the following:
\begin{enumerate}
\item If $\frac{2}{m-k+2} < \alpha \le \frac{2}{m-k+1}$ for $1\le k\le
m-2$, then $\overline{M}_{0,n} (\alpha) \cong \overline{M}_{0,n\cdot \epsilon_k}$.\item If $\frac{2}{n-1} <
\alpha \le \frac{2}{m+1}$, then $\overline{M}_{0,n} (\alpha) \cong (\mathbb{P}^1)^n /\!/
G$ where the quotient is taken with respect to the symmetric
linearization $\mathcal{O} (1,\cdots,1)$.
\end{enumerate}
\end{theorem}
\begin{remark} Keel and McKernan prove (\cite[Lemma 3.6]{KeelMcKer}) that
$K_{\overline{M}_{0,n} } + D$ is ample. Because $$\overline{M}_{0,n\cdot \epsilon_{m-2}} \cong\overline{M}_{0,n\cdot \epsilon_{m-1}} = \overline{M}_{0,n} $$ by
definition, we find that (1) above holds for $k=m-1$ as
well.\end{remark}
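For instance, when $n=8$ (so $m=4$), Theorem \ref{thm6.1} together with the remark above says that, as $\alpha$ decreases from $1$ to $\frac27$, the log canonical model $\overline{M}_{0,8}(\alpha)$ is successively
\[
\overline{M}_{0,8}\ \left(\tfrac23<\alpha\le 1\right),\quad
\overline{M}_{0,8\cdot\epsilon_2}\ \left(\tfrac12<\alpha\le\tfrac23\right),\quad
\overline{M}_{0,8\cdot\epsilon_1}\ \left(\tfrac25<\alpha\le\tfrac12\right),\quad
(\mathbb{P}^1)^8/\!/ G\ \left(\tfrac27<\alpha\le\tfrac25\right).
\]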
For notational convenience, we denote $(\mathbb{P}^1)^n/\!/ G$ by $\overline{M}_{0,n\cdot \epsilon_0} $
for even $n$ as well. Let $\Sigma_k^S$ denote the subvarieties of
$F_k$ defined in \S\ref{sec3} for $S\subset \{1,\cdots,n\}$,
$|S|\le m$. Let
$$D_k^S=\Sigma_{n-m+k}^S/\!/ G\subset F_{n-m+k}/\!/ G\cong \overline{M}_{0,n\cdot \epsilon_k}.$$
Then $D_k^S$ is a divisor of $\overline{M}_{0,n\cdot \epsilon_k}$ for $|S|=2$ or $m-k<|S|\le
m$. Let $D_k^j=(\cup_{|S|=j}\Sigma_{n-m+k}^S)/\!/ G$ and
$D_k=D_k^2+\sum_{j>m-k}D^j_k$. Then $D_k$ is the boundary divisor
of $\overline{M}_{0,n\cdot \epsilon_k}$, i.e. $\overline{M}_{0,n\cdot \epsilon_k}-M_{0,n}=D_k$. When $k = m-2$ so $\overline{M}_{0,n\cdot \epsilon_k} \cong
\overline{M}_{0,n} $, sometimes we will drop the subscript $k$. Note that if $n$ is even and
$|S|=m$, $D_k^S=D_k^{S^c}=\Sigma^{S,S^c}_{n-m+k}/\!/ G$.
By Theorem \ref{thm4-1}, there is a sequence of blow-ups
\begin{equation}\label{eq6.1}
\overline{M}_{0,n} \cong \overline{M}_{0,n\cdot \epsilon_{m-2}} \mapright{\varphi_{m-2}}
\overline{M}_{0,n\cdot \epsilon_{m-3}}\mapright{\varphi_{m-3}}\cdots
\mapright{\varphi_{2}}\overline{M}_{0,n\cdot \epsilon_1} \mapright{\varphi_{1}} \overline{M}_{0,n\cdot \epsilon_0}
\end{equation}
whose centers are transversal unions of smooth subvarieties,
except for $\varphi_1$ when $n$ is even. Note that the irreducible
components of the blow-up center of $\varphi_k$ furthermore
intersect transversely with $D^j_{k-1}$ for $j>m-k+1$ by Lemma
\ref{lem3-1} and by taking quotients.
\begin{lemma}\label{computepushandpull} Let $1\le k\le m-2$.
\begin{enumerate}
\item $\varphi_k^* (D^j_{k-1}) = D^j_k$ for $j> m-k+1$.
\item $\varphi_k^* (D^2_{k-1}) = D^2_k + {m-k+1 \choose 2} D^{m-k+1}_k$.
\item $\varphi_{k *} (D^j_k) = D^j_{k-1}$ for $j > m-k+1$ or $j=2$.
\item $\varphi_{k *} (D^j_k) =0$ for $j=m-k+1$.\end{enumerate}
\end{lemma}
\begin{proof} The push-forward formulas (3) and (4) are obvious.
Recall from \S\ref{sec4} that $\varphi_k=\bar\psi_{n-m+k}$ is the
quotient of $\psi_{n-m+k}:F_{n-m+k}^{ss}\to F_{n-m+k-1}^{ss}$.
Suppose that $n$ is odd or $k\ne 1$. Since $D^S_k$ for
$|S|>2$ does not contain any component of the blow-up center,
$\varphi_k^*(D^S_{k-1}) = D^S_k$. If $|S|=2$, $D^S_{k-1}$ contains
a component $D^{S'}_{k-1}$ of the blow-up center if and only if
$S' \supset S$. Therefore we have
\[
\varphi_k^*(D^S_{k-1}) = D^S_k +
\sum_{S' \supset S, |S'| = m-k+1} D^{S'}_k.
\]
By adding them up for all $S$ such that $|S|=2$, we obtain (2).
When $n$ is even and $k=1$, we calculate the pull-back before
quotient. Let $\pi:\tilde{F}_{m}^s\to F_m^{ss}$ be the map
obtained by blowing up $\cup_{|S|=m}\Sigma^{S,S^c}_m$ and removing
unstable points. Recall that $\tilde{F}_m^s/G\cong F_{m+1}/\!/
G\cong \overline{M}_{0,n\cdot \epsilon_1} $ and the quotient of $\pi$ is $\varphi_1$. Then a
direct calculation similar to the above gives us
$\pi^*\Sigma^2_m=\tilde{\Sigma}_m^2+2\binom{m}{2}\tilde{\Sigma}^m_m$
where $\Sigma^2_m=\cup_{|S|=2}\Sigma^S_m$ and $\tilde{\Sigma}_m^2$
is the proper transform of $\Sigma^2_m$ while $\tilde{\Sigma}_m^m$
denotes the exceptional divisor. Note that by the descent lemma
(\cite{DN}), the divisor $\Sigma_m^2$ and $\tilde{\Sigma}^2_m$
descend to $D^2_0$ and $D_{1}^2$. However $\tilde{\Sigma}_m^m$
does not descend because the stabilizer group $\mathbb{Z}_2$ in $\bar
G=PGL(2)$ of points in $\tilde{\Sigma}_m^m$ acts nontrivially on
the normal spaces. But by the descent lemma again,
$2\tilde{\Sigma}^m_m$ descends to $D^m_1$. Thus we obtain (2).
\end{proof}
Next we calculate the canonical divisors of $\overline{M}_{0,n\cdot \epsilon_k}$.
\begin{proposition}\label{canonicaldiv1} \cite[Proposition
1]{Pand} The canonical divisor of $\overline{M}_{0,n} $ is
\[
K_{\overline{M}_{0,n} } \cong -\frac{2}{n-1} D^2 + \sum_{j=3}^m
\left(-\frac{2}{n-1}{j \choose 2} +(j-2)\right) D^j.
\]
\end{proposition}
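For instance, for $n=5$ (so $m=2$) the sum over $j\ge 3$ is empty and the formula reads
\[
K_{\overline{M}_{0,5}}\cong -\tfrac12\, D^2,
\]
where $D^2$ is the sum of the ten boundary divisors; this is consistent with the classical description of $\overline{M}_{0,5}$ as a degree five del Pezzo surface whose ten $(-1)$-curves are exactly the boundary divisors.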
\begin{lemma}\label{canonicaldiv2}
(1) The canonical divisor of $(\mathbb{P}^1)^n /\!/ G$ is
\[
K_{(\mathbb{P}^1)^n /\!/ G} \cong -\frac{2}{n-1}D_0^2.
\]
(2) For $1\le k\le m-2$, the canonical divisor of $\overline{M}_{0,n\cdot \epsilon} k$ is
\[
K_{\overline{M}_{0,n\cdot \epsilon_k}} \cong -\frac{2}{n-1}D_k^2 + \sum_{j= m-k+1}^m
\left(-\frac{2}{n-1}{j \choose 2} + (j-2) \right)D_k^j.
\]
\end{lemma}
\begin{proof}
It is well known by the descent lemma (\cite{DN}) that
$\mathrm{Pic}((\mathbb{P}^1)^n /\!/ G)$ is a free abelian group of rank
$n$ (see \S6). The symmetric group $S_n$ acts on $(\mathbb{P}^1)^n /\!/ G$ in the
obvious manner, and there is an induced action on its Picard
group. Certainly the canonical bundle $K_{(\mathbb{P}^1)^n /\!/ G}$ and
$D^2_0$ are $S_n$-invariant. On the other hand, the
$S_n$-invariant part of the rational Picard group is a one
dimensional vector space generated by the quotient $D_0^2$ of
$\mathcal{O} _{(\mathbb{P}^1)^n}(n-1,\cdots,n-1)$ and hence we have $K_{(\mathbb{P}^1)^n
/\!/ G} \cong c D_0^2$ for some $c\in \mathbb{Q}$.
Suppose $n$ is odd. The contraction morphisms $\varphi_k$ are all
compositions of smooth blow-ups for $k \ge 1$. From the blow-up
formula of canonical divisors (\cite[II Exe. 8.5]{Hartshorne}) and
Lemma \ref{computepushandpull}, we deduce that
\[
K_{\overline{M}_{0,n\cdot \epsilon_k}} = cD^2_k + \sum_{j= m-k+1}^m
\left(c{j \choose 2} + (j-2)\right)D^j_k.
\]
Since $\overline{M}_{0,n} \cong \overline{M}_{0,n\cdot \epsilon_{m-2}} $, we get $c = -\frac{2}{n-1}$ from
Proposition \ref{canonicaldiv1}.
When $n$ is even, $\varphi_1^*(K_{(\mathbb{P}^1)^n /\!/ G}) = cD_1^2 +
c{m \choose 2} D^m_1$ by Lemma \ref{computepushandpull}. We write
$K_{\overline{M}_{0,n\cdot \epsilon_1} } = cD_1^2 + (c{m \choose 2} + a)D_1^m$. By the blow-up
formula of canonical divisors (\cite[II Exe. 8.5]{Hartshorne})
again, we deduce that
\[
K_{\overline{M}_{0,n\cdot \epsilon_k}} = cD_k^2 + \sum_{j = m-k+1}^{m-1}
\left( c{j \choose 2} + (j-2)\right)D_k^j + (c{m \choose 2} + a)D_k^m.
\]
From Proposition \ref{canonicaldiv1} again, we get $c =
-\frac{2}{n-1}$ and $a = m-2$.
\end{proof}
We are now ready to prove Theorem \ref{thm6.1}. By \cite[Corollary
3.5]{Simpson}, the theorem is a direct consequence of the
following proposition.
\begin{proposition}\label{prop-amplerange}
(1) $K_{\overline{M}_{0,n\cdot \epsilon_0} }+\alpha D_0$ is ample if $\frac{2}{n-1} < \alpha \le
\frac{2}{m+1}$.\\
(2) For $1\le k\le m-2$, $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is ample if
$\frac{2}{m-k+2} < \alpha \le \frac{2}{m-k+1}$.
\end{proposition}
Since any positive linear combination of an ample divisor and a
nef divisor is ample \cite[Corollary 1.4.10]{Larz1}, it suffices
to show the following:
\begin{enumerate}\item[(a)] Nefness of $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ for
$\alpha =\frac{2}{m-k+1}+s$ where $s$ is some (small) positive
number;
\item[(b)] Ampleness of $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ for
$\alpha=\frac{2}{m-k+2}+t$ where $t$ is \emph{any} sufficiently
small positive number. \end{enumerate} We will use Alexeev and
Swinarski's intersection number calculation in \cite{AlexSwin} to
achieve (a) (see Lemma \ref{lem-otherextreme}), and then (b) will
immediately follow from our Theorem \ref{thm4-1}.
\begin{definition} (\cite{Simpson}) Let $\varphi=\varphi_{n\cdot\epsilon_{m-2},
n\cdot\epsilon_k} : \overline{M}_{0,n} \to \overline{M}_{0,n\cdot \epsilon_k}$ be the natural contraction map
(\S\ref{sec2.1}). For $k = 0,1,\cdots,m-2$ and $\alpha > 0$,
define $A(k, \alpha)$ by
\begin{eqnarray*}
A(k, \alpha) &:=& \varphi^*(K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k)\\
&=& \sum_{j = 2}^{m-k} {j \choose 2}\left(\alpha - \frac{2}{n-1}\right)D^j
+ \sum_{j = m-k+1}^m \left(\alpha - \frac{2}{n-1}{j \choose 2}
+ j-2\right)D^j.
\end{eqnarray*}
\end{definition}
Notice that the last equality is an easy consequence of Lemma
\ref{computepushandpull}.
By \cite{Kapranov}, there is a birational morphism $\pi_{\vec{x}}
: \overline{M}_{0,n} \to (\mathbb{P}^1)^n /\!/ _{\vec{x}} G$ for any linearization
$\vec{x} = (x_1, \cdots, x_n) \in \mathbb{Q}_+^n$. Since the induced
line bundle $\mathcal{O} _{(\mathbb{P}^1)^n }(x_1, \cdots, x_n)/\!/ G$ over
$(\mathbb{P}^1)^n /\!/ _{\vec{x}} G$ is ample, its pull-back $L_{\vec{x}}$
by $\pi_{\vec{x}}$ is nef.
\begin{definition}\cite[Definition 2.3]{AlexSwin}\label{def-symnefdiv}
Let $x$ be a rational number such that $\frac{1}{n-1} \le x \le \frac{2}{n}$.
Set $\vec{x} = (x, \cdots, x,\ 2-(n-1)x)$.
Define
\[
V(x, n) := \frac{1}{(n-1)!}\bigotimes_{\tau \in S_n} L_{\tau
\vec{x}}.
\]
Obviously the symmetric group $S_n$ acts on $\vec{x}$ by permuting
the components of $\vec{x}$.
\end{definition}
Notice that $V(x,n)$ is nef because it is a positive linear
combination of nef line bundles.
\begin{definition}\cite[Definition 3.5]{AlexSwin}
Let $C_{a,b,c,d}$ be \emph{any} vital curve class corresponding to
a partition $S_a \sqcup S_b \sqcup S_c \sqcup S_d$ of $\{1, 2,
\cdots, n\}$
such that $|S_a| = a, \cdots, |S_d|=d$.\\
(1) Suppose $n=2m+1$ is odd. Let $C_i = C_{1, 1, m-i, m+i-1}$,
for $i = 1, 2, \cdots, m-1$.\\
(2) Suppose $n=2m$ is even. Let $C_i = C_{1,1,m-i, m+i-2}$ for $i
= 1, 2, \cdots, m-1$.
\end{definition}
By \cite[Corollary 4.4]{KeelMcKer}, the following computation is straightforward.
\begin{lemma}\label{lem-intAkalpha}
The intersection numbers $C_i \cdot A(k, \alpha)$ are
\[
C_i \cdot A(k, \alpha) = \left\{\begin{array}{ll}
\alpha&\mbox{if } i < k\\
\left(2-{m-k \choose 2}\right)\alpha + m-k-2&\mbox{if } i = k\\
\left({m-k+1 \choose 2}-1\right)\alpha-m+k+1
&\mbox{if } i = k+1\\
0&\mbox{if } i > k+1.
\end{array}\right.
\]
\end{lemma}
This lemma is in fact a slight generalization of \cite[Lemma
3.7]{AlexSwin}, where the intersection numbers are calculated only
for $\alpha=\frac{2}{m-k+1}$.
The $S_n$-invariant subspace of the Neron-Severi vector space of
$\overline{M}_{0,n} $ is generated by $D^j$ for $j=2,3, \cdots, m$ (\cite[Theorem
1.3]{KeelMcKer}). Therefore, in order to determine the linear
dependency of $S_n$-invariant divisors, we find $m-1$ linearly
independent curve classes and calculate the intersection numbers
of divisors with these curve classes. Let $U$ be an $(m-1) \times
(m-1)$ matrix with entries $U_{ij} = (C_i \cdot V(\frac{1}{m+j},
n))$ for $1 \le i,j \le m-1$. Since $V(\frac{1}{m+j}, n)$'s are
all nef, all entries of $U$ are nonnegative.
\begin{lemma}\cite[\S3.2, \S3.3]{AlexSwin}\label{lem-nefbdl}
(1) The intersection matrix $U$ is upper triangular and
if $i \le j$, then $U_{ij} > 0$. In particular, $U$ is invertible.\\
(2) Let $\vec{a} = ((C_1 \cdot A(k, \frac{2}{m-k+1})), \cdots,
(C_{m-1} \cdot A(k,\frac{2}{m-k+1})))^t$ be the column vector
of intersection numbers.
Let $\vec{c} = (c_1, c_2, \cdots, c_{m-1})^t$ be the
unique solution of the system of linear equations
$U \vec{c} = \vec{a}$.
Then $c_i > 0$ for $i \le k+1$ and $c_i = 0$ for $i \ge k+2$.
\end{lemma}
This lemma implies that $A(k,\frac{2}{m-k+1})$ is a positive
linear combination of $V(\frac{1}{m+j}, n)$ for $j = 1, 2, \cdots,
k+1$. Note that $A(k,\frac{2}{m-k+2}) =
A(k-1,\frac{2}{m-(k-1)+1})$ and that for $\frac{2}{m-k+2} \le
\alpha \le \frac{2}{m-k+1}$, $A(k,\alpha)$ is a nonnegative linear
combination of $A(k,\frac{2}{m-k+2})$ and $A(k,\frac{2}{m-k+1})$.
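Indeed, the identity $A(k,\frac{2}{m-k+2}) = A(k-1,\frac{2}{m-k+2})$ can be checked directly: the two expressions differ only in the coefficient of $D^{m-k+1}$, and these coefficients agree exactly when $\alpha + (m-k-1) = {m-k+1 \choose 2}\alpha$, that is, when $\alpha = \frac{2}{m-k+2}$.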
Hence by the numerical result in Lemma \ref{lem-nefbdl} and the
convexity of the nef cone, $A(k,\alpha)$ is nef for
$\frac{2}{m-k+2} \le \alpha \le \frac{2}{m-k+1}$. Actually we can
slightly improve this result by using continuity.
\begin{lemma}\label{lem-otherextreme}
For each $k = 0,1,\cdots,m-2$, there exists $s > 0$ such that
$A(k,\alpha)$ is nef for $\frac{2}{m-k+2} \le \alpha \le
\frac{2}{m-k+1}+s$. Therefore, $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is nef for
$\frac{2}{m-k+2} \le \alpha \le \frac{2}{m-k+1}+s$.
\end{lemma}
\begin{proof}
Let $\vec{a}^\alpha = ((C_1 \cdot A(k,\alpha)), \cdots, (C_{m-1}
\cdot A(k,\alpha)))^t$ and let $\vec{c}^\alpha = (c^\alpha_1,
\cdots, c^\alpha_{m-1})^t$ be the unique solution of equation $U
\vec{c}^\alpha = \vec{a}^\alpha$. Then by continuity, the
components $c^\alpha_1, c^\alpha_2, \cdots, c^\alpha_{k+1}$ remain
positive when $\alpha$ is slightly increased. By Lemma
\ref{lem-intAkalpha} and the upper triangularity of $U$,
$c^\alpha_i$ for $i > k+1$ are all zero. Hence $A(k, \alpha)$ is
still nef for $\alpha = \frac{2}{m-k+1}+s$ with sufficiently small
$s > 0$.
\end{proof}
With this nefness result, the proof of Proposition
\ref{prop-amplerange} is obtained as a quick application of
Theorem \ref{thm4-1}.
\begin{proof}[Proof of Proposition \ref{prop-amplerange}]
We prove that in fact $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is ample for
$\frac{2}{m-k+2} < \alpha < \frac{2}{m-k+1}+s$ where $s$ is the
small positive rational number in Lemma \ref{lem-otherextreme}.
Since a positive linear combination of an ample divisor and a nef
divisor is ample by \cite[Corollary 1.4.10]{Larz1}, it suffices to
show that $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is ample when $\alpha=
\frac{2}{m-k+2}+t$ for any sufficiently small $t>0$ by Lemma
\ref{lem-otherextreme}.
We use induction on $k$. It is certainly true when $k=0$ by Lemma
\ref{canonicaldiv2} because $D^2_0$ is ample as the quotient of
$\mathcal{O} (n-1,\cdots,n-1)$. Suppose $K_{\overline{M}_{0,n\cdot \epsilon_{k-1}}}+\alpha D_{k-1}$ is ample
for $\frac{2}{m-k+3} < \alpha < \frac{2}{m-k+2}+s'$ where
$s'$ is the small positive number in Lemma \ref{lem-otherextreme}
for $k-1$.
Since $\varphi_k$ is a blow-up with exceptional divisor $D_k^{m-k+1}$,
\[
\varphi_k^*(K_{\overline{M}_{0,n\cdot \epsilon_{k-1}}}+\alpha D_{k-1})-\delta D_k^{m-k+1}
\]
is ample for any sufficiently small $\delta>0$ by \cite[II
7.10]{Hartshorne}. A direct computation with Lemmas
\ref{computepushandpull} and \ref{canonicaldiv2} provides us with
\[
\varphi_k^*(K_{\overline{M}_{0,n\cdot \epsilon_{k-1}}}+\alpha D_{k-1})-\delta D_k^{m-k+1}\] \[
=K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_{k}+\left( \binom{m-k+1}{2}\alpha
-\alpha-(m-k-1)-\delta \right)D_k^{m-k+1}.
\]
If $\alpha=\frac{2}{m-k+2}$, $\binom{m-k+1}{2}\alpha
-\alpha-(m-k-1)=0$ and thus we can find $\alpha>\frac{2}{m-k+2}$
satisfying $\binom{m-k+1}{2}\alpha -\alpha-(m-k-1)-\delta =0$. If
$\delta$ decreases to $0$, the solution $\alpha$ decreases to
$\frac{2}{m-k+2}$. Hence $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is ample when
$\alpha= \frac{2}{m-k+2}+t$ for any sufficiently small $t>0$ as
desired.
\end{proof}
\begin{remark}\label{rem-compproofs}
There are already two different proofs of M. Simpson's theorem
(Theorem \ref{thm6.1}) given by Fedorchuk--Smyth \cite{FedoSmyt},
and by Alexeev--Swinarski \cite{AlexSwin} without relying on
Fulton's conjecture. Here we give a brief outline of the two
proofs.
In \cite[Corollary 3.5]{Simpson}, Simpson proves that Theorem
\ref{thm6.1} is an immediate consequence of the ampleness of
$K_{\overline{M}_{0,n\cdot \epsilon_k}} + \alpha D_k$ for $\frac{2}{m-k+2} < \alpha \le
\frac{2}{m-k+1}$ (Proposition \ref{prop-amplerange}). The differences in
the proofs of Theorem \ref{thm6.1} reside solely in different ways
of proving Proposition \ref{prop-amplerange}.
The ampleness of $K_{\overline{M}_{0,n\cdot \epsilon_k}} + \alpha D_k$ follows if the divisor
$A(k, \alpha) = \varphi^*(K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k)$ is nef and its
linear system contracts only $\varphi$-exceptional curves. Here,
$\varphi : \overline{M}_{0,n} \to \overline{M}_{0,n\cdot \epsilon_k}$ is the natural contraction map
(\S\ref{sec2.1}). Alexeev and Swinarski prove Proposition
\ref{prop-amplerange} in two stages: first the nefness of $A(k,
\alpha)$ for suitable ranges is proved, and next they show that the
divisors are the pull-backs of ample line bundles on $\overline{M}_{0,n\cdot \epsilon_k}$.
Lemma \ref{lem-otherextreme} above is only a negligible
improvement of the nefness result in \cite[\S3]{AlexSwin}. In
\cite[Theorem 4.1]{AlexSwin}, they give a criterion for a line
bundle to be the pull-back of an ample line bundle on $\overline{M}_{0,n\cdot \epsilon_k}$.
After some rather sophisticated combinatorial computations, they
prove in \cite[Proposition 4.2]{AlexSwin} that $A(k, \alpha)$
satisfies the desired properties.
On the other hand, Fedorchuk and Smyth show that $K_{\overline{M}_{0,n\cdot \epsilon_k}} +
\alpha D_k$ is ample as follows. Firstly, by applying the
Grothendieck-Riemann-Roch theorem, they represent
$K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ as a linear combination of boundary
divisors and tautological $\psi$-classes. Secondly, for such a
linear combination of divisor classes and for a complete curve in
$\overline{M}_{0,n\cdot \epsilon_k}$ parameterizing a family of curves with smooth general
member, they perform brilliant computations and get several
inequalities satisfied by their intersection numbers
(\cite[Proposition 3.2]{FedoSmyt}). Combining these inequalities,
they prove in particular that $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ has positive
intersection with any complete curve on $\overline{M}_{0,n\cdot \epsilon_k}$ with smooth
general member (\cite[Theorem 4.3]{FedoSmyt}). Thirdly, they prove
that if the divisor class intersects positively with any curve
with smooth general member, then it intersects positively with all
curves by an induction argument on the dimension. Thus they
establish the fact that $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ has positive
intersection with all curves. Lastly, they prove that the same
property holds even if $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha D_k$ is perturbed by any
small linear combination of boundary divisors. Since the boundary
divisors generate the Neron-Severi vector space, $K_{\overline{M}_{0,n\cdot \epsilon_k}}+\alpha
D_k$ lies in the interior of the nef cone and the desired
ampleness follows.
\end{remark}
\section{The Picard groups of $\overline{M}_{0,n\cdot \epsilon_k}$}\label{sec5}
As a byproduct of our GIT construction of the moduli spaces of
weighted pointed curves, we give a basis of the \emph{integral}
Picard group of $\overline{M}_{0,n\cdot \epsilon_k}$ for $0 \le k \le m-2$.
Let $e_i$ be the $i$-th standard basis vector of $\mathbb{Z}^n$. For
notational convenience, set $e_{n+1} = e_1$. For $S\subset
\{1,2,\cdots,n\}$, let $D_k^S=\Sigma_{n-m+k}^S/\!/ G\subset
F_{n-m+k}/\!/ G\cong \overline{M}_{0,n\cdot \epsilon_k}$. Note that if $m-k<|S|\le m$ or
$|S|=2$, $D_k^S$ is a divisor of $\overline{M}_{0,n\cdot \epsilon_k}$.
\begin{theorem}\label{picardgroup}
(1) If $n$ is odd, then the Picard group of $\overline{M}_{0,n\cdot \epsilon_k}$ is
\[
\mathrm{Pic} (\overline{M}_{0,n\cdot \epsilon_k}) \cong \bigoplus_{m-k < |S| \le m} \mathbb{Z} D^S_k \oplus
\bigoplus_{i=1}^n \mathbb{Z} D_k^{\{i, i+1\}}
\]
for $0 \le k \le m-2$.\\
(2) If $n$ is even, then the Picard group of $\overline{M}_{0,n\cdot \epsilon_k}$ is
\[
\mathrm{Pic} (\overline{M}_{0,n\cdot \epsilon_k}) \cong \bigoplus_{m-k < |S| < m} \mathbb{Z} D^S_k \oplus
\bigoplus_{1 \in S, |S| = m} \mathbb{Z} D^S_k \oplus
\bigoplus_{i=1}^{n-1} \mathbb{Z} D_k^{\{i, i+1\}}
\oplus \mathbb{Z} D_k^{\{1, n-1\}}
\]
for $1 \le k \le m-2$.
\end{theorem}
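For instance, when $n = 7$ (so that $m = 3$) and $k = m - 2 = 1$, the basis in (1) consists of the $\binom{7}{3} = 35$ divisors $D_1^S$ with $|S| = 3$ together with the seven divisors $D_1^{\{i,i+1\}}$, so that the rank is $42 = 2^{6} - \binom{7}{2} - 1$, in agreement with the known rank of $\mathrm{Pic}(\overline{M}_{0,7})$.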
\begin{proof}
Since the codimensions of unstable strata in $(\mathbb{P}^1)^n$ are greater
than 1, $$\mathrm{Pic} (((\mathbb{P}^1)^n)^{ss}) = \mathrm{Pic} ((\mathbb{P}^1)^n)
\cong \oplus_{1 \le i \le n} \mathbb{Z} \mathcal{O} (e_i).$$ For all $x \in
((\mathbb{P}^1)^n)^s$, $G_x \cong \{\pm 1\}$. If $n$ is even and $x$ is a
strictly semistable point with closed orbit, then $G_x \cong \mathbb{C}^*$.
Since $G$ is connected, $G$ acts on the discrete set
$\mathrm{Pic}((\mathbb{P}^1)^n)$ trivially. By Kempf's descent lemma
(\cite[theorem 2.3]{DN}) and by checking the actions of the
stabilizers on the fibers of line bundles, we deduce that $\mathcal{O} (a_1,
a_2, \cdots, a_n)$ descends to $((\mathbb{P}^1)^n)^{ss} /\!/ G$ if and only
if $2$ divides $\sum a_i$.
Consider the case when $n$ is odd first. It is elementary to check
that the subgroup $\{(a_1, \cdots, a_n) \in \mathbb{Z}^n | \sum a_i \in
2\mathbb{Z} \}$ is free abelian of rank $n$ and $\{e_i + e_{i+1}\}$ for
$1 \le i \le n$ form a basis of this group. Furthermore, for
$S=\{i,j\}$ with $i\ne j$, the big diagonal $(\Sigma_{m+1}^S)^s =
(\Sigma_0^S)^s$ satisfies $\mathcal{O} ((\Sigma_0^S)^s) \cong
\mathcal{O} _{F_0^s}(e_i+e_j)$. Hence in $F_{m+1}/\!/ G = F_0 /\!/ G$,
$\mathcal{O} (\Sigma_0^S /\!/ G) \cong \mathcal{O} (e_i + e_j)$. Therefore we have
\[
\mathrm{Pic} (\overline{M}_{0,n\cdot \epsilon_0} ) = \mathrm{Pic} (F_{m+1} /\!/ G)
= \bigoplus_{i = 1}^n \mathbb{Z} D_0^{\{i, i+1\}}.
\]
By Theorem \ref{thm4-1}, the contraction morphism $\varphi_k :
\overline{M}_{0,n\cdot \epsilon_k} \to \overline{M}_{0,n\cdot \epsilon_{k-1}}$ is the blow-up along the union of transversally
intersecting smooth subvarieties. By \S2.3, this is a composition
of smooth blow-ups. In $\overline{M}_{0,n\cdot \epsilon_k}$, the exceptional divisors are
$D^S_k$ for $|S| = {m-k+1}$. So the Picard group of $\overline{M}_{0,n\cdot \epsilon_k}$ is
\[
\mathrm{Pic} (\overline{M}_{0,n\cdot \epsilon_k}) \cong \varphi_k^* \mathrm{Pic} (\overline{M}_{0,n\cdot \epsilon_{k-1}}) \oplus
\bigoplus_{|S| = {m-k+1}}\mathbb{Z} D^S_k.
\]
by \cite[II Exe. 8.5]{Hartshorne}. For any $S$ with $|S| = 2$,
$D_{k-1}^S$ contains the blow-up center $D_{k-1}^{S'}$ if $S
\subset {S'}$. So $\varphi_k^* (D_{k-1}^S)$ is the sum of $D_k^S$
and a linear combination of $D_k^{S'}$ for $S' \supset S, |S'| =
m-k+1$. If $|S| > 2$, then $\varphi_k^* D^S_{k-1} = D^S_k$ since
it does not contain any blow-up centers. After an obvious change of
basis, we get the desired result by induction.
Now suppose that $n$ is even. Still the group $\{ (a_1, \cdots,
a_n) \in \mathbb{Z}^n | \sum a_i \in 2\mathbb{Z}\}$ is free abelian of rank $n$
and $\{e_i + e_{i+1}\}_{1 \le i \le n-1} \cup \{e_1 + e_{n-1}\}$
form a basis. In $F_m/\!/ G = F_0 /\!/ G$, $\mathcal{O} (\Sigma_m^S/\!/ G)
\cong \mathcal{O} (e_i + e_j)$ when $S = \{i, j\}$ with $i\ne j$. Hence
\[
\mathrm{Pic} (F_m /\!/ G) = \bigoplus_{i=1}^{n-1}
\mathbb{Z} D_0^{\{i, i+1\}} \oplus \mathbb{Z} D_0^{\{1, n-1\}}.
\]
In $\tilde{F}_m$, the unstable loci have codimension two.
Therefore we have
\[
\mathrm{Pic} (\tilde{F}_m^s) = \mathrm{Pic}(\tilde{F}_m)
= \pi_m^* \mathrm{Pic} (F_m^{ss}) \oplus \bigoplus_{1 \in S, |S| = m}
\mathbb{Z} \tilde{\Sigma}_m^S,
\]
where $\pi_m : \tilde{F}_m \to F_m^{ss}$ is the blow-up morphism,
and $\tilde{\Sigma}_m^S = \pi_m^{-1}(\Sigma_m^S \cap \Sigma_m^{S^c})$
for $|S|=m$.
By Kempf's descent lemma, $\mathrm{Pic}(F_{m+k}/\!/ G)$ is a
subgroup of $\mathrm{Pic}(F_{m+k}^{ss})$ and
$\mathrm{Pic}(\tilde{F}_{m+k}^s)$ for $0 \le k \le m-2$. From our
blow-up description, all arrows except possibly
$\bar{\psi}_{m+1}^*$ in the following commutative diagram
\[
\xymatrix{\mathrm{Pic}(\tilde{F}_{m+1}^s)&
\mathrm{Pic}(\tilde{F}_m^s)\ar@{=}[l]_{\tilde{\psi}_{m+1}^*}\\
\mathrm{Pic}(F_{m+1}^{ss})\ar[u]_{\pi_{m+1}^*}&
\mathrm{Pic}(F_m^{ss})\ar[l]_{\psi_{m+1}^*}\ar[u]_{\pi_m^*}\\
\mathrm{Pic}(F_{m+1}/\!/ G)\ar[u]&
\mathrm{Pic}(F_m /\!/ G)\ar[l]_{\bar{\psi}_{m+1}^*}\ar[u]}
\] are
injective, and thus the bottom arrow $\bar{\psi}_{m+1}^*$ is also
injective. Hence $\mathrm{Pic}(F_{m+1}/\!/ G)$ contains the
pull-back of $\mathrm{Pic}(F_m /\!/ G)$ as a subgroup. Also, for
the quotient map $p : \tilde{F}_{m+1}^s \to F_{m+1}/\!/ G$, $p^*
D_1^S = \tilde{\Sigma}_{m+1}^S$ for $|S| = m$. Let $H$ be the
subgroup of $\mathrm{Pic}(\tilde{F}_{m+1}^s)$ generated by the
images of $\bar{\psi}_{m+1}^* \mathrm{Pic}(F_m /\!/ G)$ and the
divisors $D_1^S$ with $|S| = m$. By definition, the image of
$\mathrm{Pic}(F_{m+1}/\!/ G)$ contains $H$. Now by checking the
action of stabilizers on the fibers of line bundles, it is easy to
see that no line bundles in $\mathrm{Pic}(\tilde{F}_{m+1}^s)- H$
descend to $F_{m+1}/\!/ G$. Hence we have
\begin{equation}\label{mzeo1}
\mathrm{Pic}(\overline{M}_{0,n\cdot \epsilon_1} ) = \mathrm{Pic}(F_{m+1} /\!/ G)
= \bar{\psi}_{m+1}^* \mathrm{Pic}(F_m /\!/ G) \oplus
\bigoplus_{1 \in S, |S| = m}\mathbb{Z} D_1^S.
\end{equation}
For an $S$ with $|S|=2$, $\Sigma_m^S/\!/ G$ contains the blow-up
center $\Sigma_m^{S'} \cap \Sigma_m^{{S'}^c}/\!/ G$ if $S \subset
S'$ or $S \subset {S'}^c$. So $\bar{\psi}_{m+1}^* (D_0^S)$ is the
sum of $D_{1}^S$ and a linear combination of divisors $D_1^{S'}$
for $S' \supset S$ or ${S'}^c\supset S$ with $|S'| = m$. From this
and (\ref{mzeo1}), we get the following by an obvious basis
change:
\[
\mathrm{Pic}(\overline{M}_{0,n\cdot \epsilon_1} ) = \bigoplus_{1 \in S, |S| = m}\mathbb{Z} D_1^S
\oplus \bigoplus_{i=1}^{n-1} \mathbb{Z} D_1^{\{i,i+1\}}
\oplus \mathbb{Z} D_1^{\{1,n-1\}}.
\]
The rest of the proof is identical to the odd $n$ case and so we
omit it.
\end{proof}
\end{document} |
\begin{document}
\title{On the Generalization of the Gap Principle}
\author{Anton Mosunov}
\affil{University of Waterloo}
\date{}
\maketitle
\begin{abstract}
Let $\alpha$ be a real algebraic number of degree $d \geq 3$ and let $\beta \in \mathbb Q(\alpha)$ be irrational. Let $\mu$ be a real number such that $(d/2) + 1 < \mu < d$ and let $C_0$ be a positive real number. We prove that there exist positive real numbers $C_1$ and $C_2$, which depend only on $\alpha$, $\beta$, $\mu$ and $C_0$, with the following property. If $x_1/y_1$ and $x_2/y_2$ are rational numbers in lowest terms such that
$$
H(x_2, y_2) \geq H(x_1, y_1) \geq C_{1}
$$
and
$$
\left|\alpha - \frac{x_1}{y_1}\right| < \frac{C_0}{H(x_1, y_1)^\mu}, \quad \left|\beta - \frac{x_2}{y_2}\right| < \frac{C_0}{H(x_2, y_2)^\mu},
$$
then either $H(x_2, y_2) > C_{2}^{-1} H(x_1, y_1)^{\mu - d/2}$, or there exist integers $s, t, u, v$, with $sv - tu \neq 0$, such that
$$
\beta = \frac{s\alpha + t}{u\alpha + v} \quad \text{and} \quad \frac{x_2}{y_2} = \frac{sx_1 + ty_1}{ux_1 + vy_1},
$$
or both. Here $H(x, y) = \max(|x|, |y|)$ is the height of $x/y$. Since $\mu - d/2$ exceeds one, our result demonstrates that, unless $\alpha$ and $\beta$ are connected by means of a linear fractional transformation with integer coefficients, the heights of $x_1/y_1$ and $x_2/y_2$ have to be exponentially far apart from each other. An analogous result is established in the case when $\alpha$ and $\beta$ are $p$-adic algebraic numbers.
\end{abstract}
\section{Introduction}
The theory of Diophantine approximation concerns the question of how well real numbers can be approximated by rationals, together with many variations of this question. If $\alpha$ is a real number and $x/y$ is a rational number, with $x, y \in \mathbb Z$ and $y \geq 1$, then the quality of approximation of $\alpha$ by $x/y$ can be measured by means of a quantity $\mu$ such that the inequality
\begin{equation} \label{eq:diophantine_inequality}
\left|\alpha - \frac{x}{y}\right| < \frac{1}{y^\mu}
\end{equation}
is satisfied. The larger $\mu$ is, the better $x/y$ approximates $\alpha$. It was observed by Dirichlet that for $\mu = 2$ the inequality above is satisfied by infinitely many integers $x$ and $y$, as long as $\alpha$ is real and irrational. On the other hand, Liouville pointed out that, if $\alpha$ is an irrational algebraic number of degree $d$ and $\mu > d$, then (\ref{eq:diophantine_inequality}) has only finitely many solutions in integers $x$ and $y$ with $y \geq 1$. In other words, algebraic numbers cannot be approximated by rationals too well.
It is not difficult to count the distinct rationals $x/y$ satisfying (\ref{eq:diophantine_inequality}) with $y$ lying in a fixed range. Indeed, if $x_1/y_1 \neq x_2/y_2$ are two such rationals with $C_1 \leq y_1 < y_2 \leq C_2$, then
$$
\frac{1}{y_1y_2} \leq \left|\frac{x_1}{y_1} - \frac{x_2}{y_2}\right| \leq \left|\alpha - \frac{x_1}{y_1}\right| + \left|\alpha - \frac{x_2}{y_2}\right| < \frac{1}{y_1^\mu} + \frac{1}{y_2^\mu} < \frac{2}{y_1^\mu},
$$
resulting in the inequality
$$
2y_2 > y_1^{\mu - 1},
$$
which is known as the \emph{gap principle}. For $\mu > 2$ this inequality states that, if two distinct rationals satisfy (\ref{eq:diophantine_inequality}), then their denominators must be exponentially far apart from each other.
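As a quick numerical illustration of this principle (a sketch in Python; the specific choices $\alpha = \sqrt[3]{2}$, $\mu = 2.4$ and the bound $10^5$ on the denominators are ours and play no role elsewhere), one can list the rationals in lowest terms satisfying (\ref{eq:diophantine_inequality}) and verify the gap between consecutive denominators:
\begin{verbatim}
# List the rationals x/y in lowest terms with y <= 10^5 satisfying
# |alpha - x/y| < 1/y^mu for alpha = 2^(1/3), mu = 2.4, and check the
# gap principle 2*y2 > y1^(mu - 1) for consecutive good approximations.
from math import gcd

alpha, mu = 2 ** (1 / 3), 2.4
good = []
for y in range(1, 100001):
    x = round(alpha * y)
    if abs(alpha - x / y) < y ** (-mu):
        g = gcd(x, y)
        if (x // g, y // g) not in good:
            good.append((x // g, y // g))
for (x1, y1), (x2, y2) in zip(good, good[1:]):
    print((x1, y1), (x2, y2), 2 * y2 > y1 ** (mu - 1))
\end{verbatim}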
Unfortunately, as the quantity $C_2$ can be arbitrarily large, the gap principle itself does not allow us to count the number of distinct solutions to (\ref{eq:diophantine_inequality}). However, it was established by Thue \cite{thue} that, when $\alpha$ is an irrational algebraic number of degree $d \geq 3$ and $(d/2) + 1 < \mu < d$, there exist computable positive real numbers $C_1$ and $\eta > 1$, which depend only on $\alpha$ and $\mu$, such that if $x_1/y_1, \ldots, x_\ell/y_\ell$ are distinct solutions with $C_1 \leq y_1 < \ldots < y_\ell$, then $y_i < y_1^\eta$ for every $i$. This phenomenon is known as the \emph{Thue-Siegel principle} and it was vastly generalized by Bombieri and Mueller \cite{bombieri-mueller}. When combined with the gap principle, the Thue-Siegel principle enables us to bound the number of solutions $x/y$ to (\ref{eq:diophantine_inequality}) such that $y \geq C_1$.
For a rational number $x/y$ in lowest terms, let $H(x, y) = \max(|x|, |y|)$ denote the \emph{height} of $x/y$. In this article, we generalize the gap principle as follows. Notice that the positive real numbers $C_1, C_2, \ldots$ occurring throughout the article are all computable.
\begin{thm} \label{thm:archimedean_gap_principle}
\emph{(A generalized Archimedean gap principle)} Let $\alpha$ be a real algebraic number of degree $d \geq 3$ over $\mathbb Q$ and let $\beta$ be irrational and in $\mathbb Q(\alpha)$. Let $\mu$ be a real number such that $(d/2) + 1 < \mu < d$ and let $C_0$ be a positive real number. There exist positive real numbers $C_1$ and $C_2$, which depend only on $\alpha$, $\beta$, $\mu$ and $C_0$, with the following property. If $x_1/y_1$ and $x_2/y_2$ are rational numbers in lowest terms such that $H(x_2, y_2) \geq H(x_1, y_1) \geq C_{1}$ and
$$
\left|\alpha - \frac{x_1}{y_1}\right| < \frac{C_0}{H(x_1, y_1)^\mu}, \quad \left|\beta - \frac{x_2}{y_2}\right| < \frac{C_0}{H(x_2, y_2)^\mu},
$$
then at least one of the following holds:
\begin{itemize}
\item $H(x_2, y_2) > C_{2}^{-1} H(x_1, y_1)^{\mu - d/2}$;
\item There exist integers $s, t, u, v$, with $sv - tu \neq 0$, such that
$$
\beta = \frac{s\alpha + t}{u\alpha + v} \quad \text{and} \quad \frac{x_2}{y_2} = \frac{sx_1 + ty_1}{ux_1 + vy_1}.
$$
\end{itemize}
\end{thm}
Since the exponent $\mu - d/2$ exceeds one, our result demonstrates that, unless $\alpha$ and $\beta$ are connected by means of a linear fractional transformation with integer coefficients, the heights of $x_1/y_1$ and $x_2/y_2$ have to be exponentially far apart from each other.
Next, let $|\quad|_p$ denote the $p$-adic absolute value on the field of $p$-adic numbers $\mathbb Q_p$, normalized so that $|p|_p = p^{-1}$. An analogous result for $p$-adic algebraic numbers is as follows.
\begin{thm} \label{thm:non-archimedean_gap_principle}
\emph{(A generalized non-Archimedean gap principle)} Let $p$ be a rational prime. Let $\alpha \in \mathbb Q_p$ be a $p$-adic algebraic number of degree $d \geq 3$ over $\mathbb Q$ and let $\beta$ be irrational and in $\mathbb Q(\alpha)$. Let $\mu$ be a real number such that $(d/2) + 1 < \mu < d$ and let $C_0$ be a positive real number. There exist positive real numbers $C_{3}$ and $C_{4}$, which depend only on $\alpha$, $\beta$, $\mu$ and $C_0$, with the following property. If $x_1/y_1$ and $x_2/y_2$ are rational numbers in lowest terms such that $H(x_2, y_2) \geq H(x_1, y_1) \geq C_{3}$ and
$$
\left|y_1\alpha - x_1\right|_p < \frac{C_0}{H(x_1, y_1)^\mu}, \quad \left|y_2\beta - x_2\right|_p < \frac{C_0}{H(x_2, y_2)^\mu},
$$
then at least one of the following holds:
\begin{itemize}
\item $H(x_2, y_2) > C_{4}^{-1} H(x_1, y_1)^{\mu - d/2}$;
\item There exist integers $s, t, u, v$, with $sv - tu \neq 0$, such that
$$
\beta = \frac{s\alpha + t}{u\alpha + v} \quad \text{and} \quad \frac{x_2}{y_2} = \frac{sx_1 + ty_1}{ux_1 + vy_1}.
$$
\end{itemize}
\end{thm}
We apply our result to establish an absolute bound on the number of large primitive solutions of certain Thue inequalities. A \emph{Thue inequality} is an inequality of the form
\begin{equation} \label{eq:thue-inequality}
0 < |F(x, y)| \leq m,
\end{equation}
where $m$ is a positive integer and $F \in \mathbb Z[x, y]$ is an irreducible binary form of degree $d \geq 3$. A solution $(x, y) \in \mathbb Z^2$ to the above inequality is called \emph{primitive} when $x$ and $y$ are coprime. In \cite[Theorem 1]{gyory}, Gy\H{o}ry proved that there exists a positive real number $Y_0$, which depends only on $m$ and $F$, such that the number of primitive solutions $(x, y)$ to (\ref{eq:thue-inequality}) with $H(x, y) \geq Y_0$ does not exceed $25d$ (here the solutions $(x, y)$ and $(-x, -y)$ are regarded as the same). Using Theorem \ref{thm:archimedean_gap_principle}, we can improve Gy\H{o}ry's result in the case when the field extension $\mathbb Q(\alpha)/\mathbb Q$ is Galois, where $\alpha$ is a root of $F(x, 1)$.
To state our result, we need to introduce the notion of \emph{enhanced automorphism group} of $F$. For a $2 \times 2$ matrix $M = \left(\begin{smallmatrix}s & u\\t & v\end{smallmatrix}\right)$, with complex entries, define the binary form $F_M$ by
$$
F_M(x, y) = F(sx + uy, tx + vy).
$$
Let $\overline{\mathbb Q}$ denote the algebraic closure of the rationals and let $K$ be a field containing $\mathbb Q$. We say that a matrix $M = \left(\begin{smallmatrix}s & u\\t & v\end{smallmatrix}\right) \in \operatorname{M}_2(K)$ is a \emph{$K$-automorphism of $F$} (resp., $|F|$) if $F_M = F$ (resp., $F_M = \pm F$). The set of all $K$-automorphisms of $F$ (resp., $|F|$) is denoted by $\Aut_K F$ (resp., $\operatorname{Aut}_K |F|$). We define
\begin{equation} \label{eq:G}
\Aut' |F| = \left\{\frac{1}{\sqrt{|sv - tu|}}\begin{pmatrix}s & u\\t & v\end{pmatrix} \colon s, t, u, v \in \mathbb Z\right\} \cap \Aut_{\overline{\mathbb Q}} |F|
\end{equation}
and refer to it as the \emph{enhanced automorphism group} of $F$. One can verify that $\operatorname{Aut}' |F|$ is a group. In Section \ref{sec:automorphisms} we will show that, under the conditions on $F$ specified above, it is finite and contains at most $24$ elements. We prove the following.
\begin{thm} \label{thm:thue_large}
Let $F \in \mathbb Z[x, y]$ be an irreducible binary form of degree \mbox{$d \geq 3$}. Let $\alpha$ be a root of $F(x, 1)$ and assume that the field extension $\mathbb Q(\alpha)/\mathbb Q$ is Galois. For a positive integer $m$ consider the Thue inequality (\ref{eq:thue-inequality}). Let $\mu$ be a real number such that $(d/2) + 1 < \mu < d$. There exists a positive real number $C_5$, which depends only on $m$, $F$ and $\mu$, such that the number of primitive solutions $(x, y)$ to (\ref{eq:thue-inequality}) with $H(x, y) \geq C_5$ does not exceed
$$
\#\Aut'|F|\cdot \left\lfloor 1 + \frac{11.51 + 1.5\log d + \log \mu}{\log(\mu - d/2)}\right\rfloor.
$$
Here the solutions $(x, y)$ and $(-x, -y)$ are regarded as the same.
\end{thm}
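Note that the matrix $-I$ always belongs to $\Aut'|F|$, since $F_{-I}(x, y) = F(-x, -y) = (-1)^dF(x, y)$; in particular, $\#\Aut'|F| \geq 2$.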
Let $\mu = (3d + 2)/4$. Then the function
$$
f(d) = 1 + \frac{11.51+ 1.5\log d + \log((3d + 2)/4)}{\log((d + 2)/4)}
$$
is monotonically decreasing on the interval $[3, \infty)$. To see that this is the case, it is sufficient to prove that $g(x) = \frac{\log x}{\log((x + 2)/4)}$ and $h(x) = \frac{\log((3x + 2)/4)}{\log((x + 2)/4)}$ are monotonically decreasing on the specified interval, since the remaining term $11.51/\log((x + 2)/4)$ is clearly decreasing there. We leave it as an exercise to the reader to prove that the derivatives of $g(x)$ and $h(x)$ take negative values when evaluated at any $x_0 \geq 3$. Since $f(3) \approx 64.5$, we can use the upper bound $\#\Aut'|F| \leq 24$ established in Lemma \ref{lem:G_is_finite} as well as Theorem \ref{thm:thue_large} to conclude that the number of primitive solutions $(x, y)$ to (\ref{eq:thue-inequality}) such that $H(x, y) \geq C_5$ does not exceed $24 \cdot \lfloor f(3)\rfloor = 1536$ when $d \geq 3$. Furthermore, since $f(10^{14}) < 4$ and $\lim\limits_{d \rightarrow \infty} f(d) = 3.5$, we can also conclude that it does not exceed $24 \cdot \lfloor f(10^{14})\rfloor = 72$ when $d \geq 10^{14}$. While it is an interesting task to compare the value of $C_5$ to the quantity $Y_L$ in \cite{mueller-schmidt} (see equation 2.9) or to the quantity $Y_0$ in \mbox{\cite[Theorem 1]{gyory}}, it lies outside the scope of this article.
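The numerical values quoted above are easy to reproduce; the following short computation (a sketch in Python, included only as a convenience) evaluates $f$ at the relevant points and samples its monotonicity:
\begin{verbatim}
# Evaluate f(d) = 1 + (11.51 + 1.5 log d + log((3d+2)/4)) / log((d+2)/4)
# at d = 3 and d = 10^14, and sample the claim that f is decreasing.
from math import log, floor

def f(d):
    return 1 + (11.51 + 1.5 * log(d) + log((3 * d + 2) / 4)) / log((d + 2) / 4)

print(f(3))                  # approximately 64.5
print(24 * floor(f(3)))      # 1536
print(f(1e14))               # slightly below 4
print(24 * floor(f(1e14)))   # 72
print(all(f(d) > f(d + 1) for d in range(3, 1000)))  # True
\end{verbatim}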
The article is structured as follows. In Section \ref{sec:theory} we outline a number of auxiliary results, which are used in the later sections. We recommend the reader to skip this section and use it as a reference. In Section \ref{sec:minimal_pairs} we introduce the notion of a \emph{minimal pair} $P, Q \in \mathbb Z[x]$ for a tuple of algebraic numbers $(\alpha, \beta) \in \overline{\mathbb Q} \times (\mathbb Q(\alpha) \setminus \mathbb Q)$, and summarize the properties of minimal pairs. Minimal pairs enable us to construct a nonzero polynomial $R(x, y) = P(x) + yQ(x)$, which vanishes at the point $(\alpha, \beta)$. When $R(x, y)$ does not vanish at the rational point $\left(x_1/y_1,\ x_2/y_2\right)$, establishing the gap principle is a rather easy task. In \mbox{Section \ref{sec:gap-principle-with-vanishing}}, we prove that \emph{despite the vanishing} of $R(x, y)$ at $\left(x_1/y_1,\ x_2/y_2\right)$, it is still possible to prove that the heights of $(x_1, y_1)$ and $(x_2, y_2)$ are exponentially far apart, provided that $\alpha$ and $\beta$ are not connected by means of a linear fractional transformation with integer coefficients. In Sections \ref{sec:generalized_archimedean_gap_principle} and \ref{sec:generalized_non-archimedean_gap_principle} we prove Theorems \ref{thm:archimedean_gap_principle} and \ref{thm:non-archimedean_gap_principle}, respectively. In Section \ref{sec:automorphisms} we investigate the properties of the enhanced automorphism group $\Aut' |F|$. Finally, in Section \ref{sec:thue-inequality} we prove \mbox{Theorem \ref{thm:thue_large}}.
\section{Auxiliary Results} \label{sec:theory}
This section contains several definitions and results, which we utilize in the remaining part of the article. We recommend that the reader skip this section at first and refer to it when reading the proofs outlined in the sections that follow.
Let us begin with a number of definitions. For an arbitrary polynomial $R \in \mathbb Z[x_1, x_2, \ldots, x_n]$, we let $H(R)$ denote the maximum of the Archimedean absolute values of its coefficients, and refer to this quantity as the \emph{height} of $R$. For an algebraic number $\alpha$ with minimal polynomial $f$, we write $H(\alpha) = H(f)$. For a point $(x_1, x_2, \ldots, x_n) \in \mathbb C^n$, we define
$$
H(x_1, x_2, \ldots, x_n) = \max\limits_{i = 1, 2, \ldots, n}\left\{|x_i|\right\}
$$
and refer to this quantity as the \emph{height} of $(x_1, x_2, \ldots, x_n)$.
In this section, as well as all the subsequent ones, we write
$$
D_{i,j} = \frac{1}{i!j!}\frac{\partial^{i + j}}{\partial X^i\partial Y^j} \quad \text{and} \quad D_i = \frac{1}{i!}\frac{d^i}{dX^i}.
$$
\begin{lem} \label{lem:liouville}
\emph{(Liouville's Theorem)} Let $\alpha \in \mathbb C$ be an algebraic number of degree $d$ over $\mathbb Q$. There exists a positive number $C_6$, which depends only on $\alpha$, such that for all integers $x$ and $y$, with $y \neq 0$ and $x \neq y\alpha$, the inequality
\begin{equation} \label{eq:liouville_inequality}
\left|\alpha - \frac{x}{y}\right| \geq \frac{C_6}{H(x, y)^d}
\end{equation}
holds.
\end{lem}
\begin{proof}
See \cite[Theorem 1E]{schmidt3}.
\end{proof}
\begin{lem} \label{lem:p-adic_liouville}
\emph{($p$-adic Liouville's Theorem)} Let $p$ be a rational prime and $\alpha \in \mathbb Q_p$ a $p$-adic algebraic number of degree $d$ over $\mathbb Q$. There exists a positive number $C_7$, which depends only on $\alpha$, such that for all integers $x$ and $y$, with $x \neq y\alpha$, the inequality
\begin{equation} \label{eq:p-adic_liouville_inequality}
|y\alpha - x|_p \geq \frac{C_7}{H(x, y)^d}
\end{equation}
holds.
\end{lem}
\begin{proof}
Let
$$
f(x) = c_dx^d + \cdots + c_1x + c_0
$$
be the minimal polynomial of $\alpha$ and let $F(x, y) = y^df(x/y)$ be its associated binary form. Since $f(\alpha) = 0$, it follows from Taylor's Theorem that
\begin{align*}
F(x ,y)
& = (x - \alpha y)\sum\limits_{i = 1}^dD_if(\alpha)(x - \alpha y)^{i - 1}y^{d - i}\\
& = (x - \alpha y)\sum\limits_{i = 1}^d\frac{D_if(\alpha)}{c_d^{i - 1}}(c_dx - c_d\alpha y)^{i - 1}y^{d - i}.
\end{align*}
Since $c_d\alpha$ and $c_d^{d - i}D_if(\alpha)$ are algebraic integers, their $p$-adic absolute values do not exceed one, so
\begin{align*}
\left|F(x, y)\right|_p
& \leq |y\alpha - x|_p\cdot \max\limits_{i = 1, \ldots, d}\left\{\left|\frac{D_if(\alpha)}{c_d^{i - 1}}\right|_p\right\}\\
& = |y\alpha - x|_p\cdot \max\limits_{i = 1, \ldots, d}\left\{\left|\frac{c_d^{d - i}D_if(\alpha)}{c_d^{d - 1}}\right|_p\right\}\\
& \leq |y\alpha - x|_p \cdot |c_d|_p^{-(d-1)}.
\end{align*}
Since $x \neq y\alpha$, it must be the case that $F(x, y) \neq 0$. By the product formula, the following trivial lower bound holds:
$$
\left|F(x, y)\right|_p \geq \frac{1}{\left|F(x, y)\right|} \geq \frac{1}{(d + 1)H(\alpha)H(x, y)^d}.
$$
The result follows once we combine the upper and lower bounds on $|F(x, y)|_p$ and use the inequality $|c_d|_p \geq c_d^{-1}$.
\end{proof}
\begin{lem} \label{lem:siegels_lemma}
\emph{(Siegel's lemma, \cite{bombieri-vaaler})} Let $N$ and $M$ be positive integers with \mbox{$N > M$}. Let $a_{i, j}$ be integers of absolute value at most $A \geq 1$ for $i = 1, \ldots, N$ and $j = 1, \ldots, M$. Then there exist integers $t_1, \ldots, t_N$, not all zero, such that
$$
|t_i| \leq (NA)^{\frac{M}{N - M}}, \quad\,\, \sum\limits_{i = 1}^N a_{i, j}t_i = 0, \quad\,\, j = 1, \ldots, M.
$$
\end{lem}
\begin{proof}
See, for example, \cite[Lemma 2.7]{zannier}.
\end{proof}
\begin{lem} \label{lem:bound_on_coefficients_2}
Let $\alpha$ be an algebraic number of degree $d$ over $\mathbb Q$. Then for every non-negative integer $r$ there exist rational numbers $a_{r, i}$ such that
$$
\alpha^r = a_{r, d-1}\alpha^{d-1} + \cdots + a_{r, 1}\alpha + a_{r, 0}.
$$
Furthermore, if we denote the leading coefficient of the minimal polynomial of $\alpha$ by $c_\alpha$ and put
$$
C_8 = 1 + \max\limits_{0 \leq i \leq d - 1}\left\{|a_{d, i}|\right\},
$$
then $c_\alpha^{\max\{0, r - d + 1\}}a_{r, i} \in \mathbb Z$ and $|a_{r, i}| \leq C_8^{\max\{0, r - d + 1\}}$ for all $i$ such that $0 \leq i \leq d - 1$.
\end{lem}
\begin{proof}
See \cite[Proposition 2.6]{zannier}.
\end{proof}
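For example (with $d = 2$), take $\alpha = \sqrt{2}$, so that $c_\alpha = 1$: here $\alpha^2 = 2$, hence $a_{2,1} = 0$, $a_{2,0} = 2$ and $C_8 = 3$, while $\alpha^3 = 2\alpha$ and $\alpha^4 = 4$, whose coefficients are indeed bounded by $C_8^{2} = 9$ and $C_8^{3} = 27$, respectively.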
Let $P \in \mathbb C[x]$ be a polynomial of degree $d \geq 1$. The \emph{house} of $P$, denoted $\house{P}$, is defined to be
$$
\house{P} = \max\left\{|\alpha_1|, \ldots, |\alpha_d|\right\},
$$
where $\alpha_1, \ldots, \alpha_d \in \mathbb C$ are the roots of $P$. For an algebraic number $\alpha$, we define the \emph{house} of $\alpha$ as $\house{\alpha} = \house{f}$, where $f$ is the minimal polynomial of $\alpha$.
Let $\alpha$ be an algebraic number and $\mathcal O$ the ring of integers of $\mathbb Q(\alpha)$. Let $c_\alpha$ denote the leading coefficient of the minimal polynomial of $\alpha$. We define
\begin{equation} \label{eq:theta}
\theta_\alpha = \left[\mathcal O \colon \mathbb Z[c_\alpha\alpha]\right].
\end{equation}
That is, $\theta_\alpha$ is equal to the index of $\mathbb Z[c_\alpha\alpha]$ in the additive group $\mathcal O$.
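For instance, if $\alpha = \sqrt{5}/2$, then its minimal polynomial is $4x^2 - 5$, so $c_\alpha\alpha = 2\sqrt{5}$, and since $\mathcal O = \mathbb Z[(1 + \sqrt{5})/2]$ we get $\theta_\alpha = [\mathcal O \colon \mathbb Z[2\sqrt{5}]] = 4$; on the other hand, $\theta_\alpha = 1$ whenever $\mathbb Z[c_\alpha\alpha] = \mathcal O$, as happens, for example, for $\alpha = 1/\sqrt{2}$.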
\begin{lem} \label{lem:alpha_beta_bound}
Let $\alpha$ be an algebraic number of degree $d$ over $\mathbb Q$ and
$$
\beta = b_{d - 1}\alpha^{d - 1} + \cdots + b_1\alpha + b_0,
$$
where $b_0, b_1, \ldots, b_{d - 1} \in \mathbb Q$. There exists a positive number $C_9$, which depends only on $\alpha$ and $\beta$, such that
$$
\max\limits_{0 \leq i \leq d - 1}\{|b_i|\} \leq C_9.
$$
Furthermore,
$$
\theta_\alpha c_\beta\beta \in \mathbb Z[c_\alpha\alpha],
$$
where $\theta_\alpha$ is defined in (\ref{eq:theta}). In particular, $\theta_\alpha c_\beta b_i \in \mathbb Z$ for all \mbox{$i = 0, 1, \ldots, d - 1$}.
\end{lem}
\begin{proof}
Let $\alpha = \alpha_1, \ldots, \alpha_d$ denote the conjugates of $\alpha$. For $j = 1, \ldots, d$, let
$$
\beta_j = \sum\limits_{i = 0}^{d - 1}b_i\alpha_j^i.
$$
Then each $\beta_j$ is a conjugate of $\beta = \beta_1$. Further,
\begin{equation} \label{eq:vandermonde}
\begin{pmatrix}
\beta_1\\
\beta_2\\
\vdots\\
\beta_d
\end{pmatrix}
=
\begin{pmatrix}
1 & \alpha_1 & \alpha_1^2 & \ldots & \alpha_1^{d - 1}\\
1 & \alpha_2 & \alpha_2^2 & \ldots & \alpha_2^{d - 1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & \alpha_d & \alpha_d^2 & \ldots & \alpha_d^{d - 1}
\end{pmatrix}
\cdot
\begin{pmatrix}
b_0\\
b_1\\
\vdots\\
b_{d-1}
\end{pmatrix}.
\end{equation}
Let us denote the Vandermonde matrix on the right-hand side of the above expression \mbox{by $V$.} Then it follows from the inequality (4.1) in \cite{gautschi} that
$$
\|V^{-1}\|_\infty \leq \max\limits_{1 \leq j \leq d}\prod\limits_{\substack{1 \leq i \leq d\\ i \neq j}}\frac{1 + |\alpha_i|}{|\alpha_i - \alpha_j|},
$$
where $\|\,\cdot\,\|_\infty$ denotes the matrix infinity norm.
Now, let $V^{-1} = (v_{ij})$. Then it follows from (\ref{eq:vandermonde}) that $b_i = \sum_{j = 1}^d v_{ij}\beta_j$, so
$$
|b_i| \leq \sum\limits_{j = 1}^d|v_{ij}|\cdot |\beta_j| \leq d\cdot \house{\beta}\cdot \max\limits_{1 \leq j \leq d}\prod\limits_{\substack{1 \leq i \leq d\\ i \neq j}}\frac{1 + |\alpha_i|}{|\alpha_i - \alpha_j|}.
$$
Next, let $c_\alpha$ and $c_\beta$ denote the leading coefficients of the minimal polynomials of $\alpha$ and $\beta$, respectively. Note that $\theta_\alpha c_\beta \beta = (\#\mathcal O/\mathbb Z[c_\alpha\alpha])c_\beta \beta \in \mathbb Z[c_\alpha \alpha]$ due to the fact that $c_\beta\beta \in \mathcal O$. Finally, observe that
$$
\theta_\alpha c_\beta \beta = \sum\limits_{i = 0}^{d-1}\theta_\alpha c_\beta b_i\alpha^i
\in \mathbb Z[c_\alpha\alpha].
$$
Since $\mathbb Z[c_\alpha\alpha]$ is spanned over $\mathbb Z$ by $1, c_\alpha\alpha, \ldots, (c_\alpha\alpha)^{d - 1}$, and $1, \alpha, \ldots, \alpha^{d - 1}$ are linearly independent over $\mathbb Q$, it must be the case that each coefficient $\theta_\alpha c_\beta b_i$ is an integer.
\end{proof}
Let $P \in \mathbb C[x]$ be a polynomial that is not identically equal to zero, with leading coefficient $c_P$. The \emph{Mahler measure} of $P$, denoted $M(P)$, is defined to be $M(P) = |c_P|$ if $P$ is constant, and
$$
M(P) = |c_P|\prod\limits_{i = 1}^d\max\{1, |\alpha_i|\}
$$
otherwise, where $\alpha_1, \ldots, \alpha_d \in \mathbb C$ are the roots of $P$. For a binary form $Q \in \mathbb C[x, y]$, we define the Mahler measure of $Q$ as $M(Q) = M(Q(x, 1))$. Finally, for an algebraic number $\alpha$, we define the Mahler measure of $\alpha$ to be $M(\alpha) = M(f)$, where $f$ is the minimal polynomial of $\alpha$.
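For instance, $M(x^2 - x - 1) = \frac{1 + \sqrt{5}}{2}$, since exactly one of the two roots of this polynomial exceeds $1$ in absolute value.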
The following lemma is a reformulation of a well-known result of Lewis and Mahler \cite{lewis-mahler}.
\begin{lem} \label{lem:lewis-mahler}
Let
$$
F(x, y) = c_dx^d + c_{d-1}x^{d-1}y + \cdots + c_0y^d
$$
be a binary form of degree $d \geq 2$ with integer coefficients such that $c_0c_d \neq 0$. Let $x_1$ and $y_1$ be nonzero integers. There exists a root $\alpha$ of $F(x, 1)$ such that
$$
\min\left\{\left|\alpha - \frac{x_1}{y_1}\right|, \left|\alpha^{-1} - \frac{y_1}{x_1}\right|\right\} \leq \frac{C_{10}|F(x_1, y_1)|}{H(x_1, y_1)^d},
$$
where
$$
C_{10} = \frac{2^{d-1}d^{(d-1)/2}M(F)^{d-2}}{|D(F)|^{1/2}}.
$$
\end{lem}
\begin{proof}
Let $\alpha$ be a root of $F(x, 1)$ that minimizes $|\alpha - x_1/y_1|$. By \cite[Lemma 3]{stewart2},
$$
\left|\alpha - \frac{x_1}{y_1}\right| \leq \frac{C_{10}|F(x_1, y_1)|}{|y_1|^d}.
$$
If $|y_1| \geq |x_1|$, then $H(x_1, y_1) = |y_1|$, and so the result holds. Otherwise, since $c_0c_d \neq 0$, we see that the roots of $F(x, 1)$ and $F(1, x)$ are nonzero, meaning that all roots of $F(1, x)$ are of the form $\alpha^{-1}$, where $\alpha$ is a root of $F(x, 1)$. If we let $\beta^{-1}$ be a root of $F(1, x)$ that minimizes $|\beta^{-1} - y_1/x_1|$, then it follows from \mbox{\cite[Lemma 3]{stewart2}} that
$$
\left|\beta^{-1} - \frac{y_1}{x_1}\right| \leq \frac{C_{10}|F(x_1, y_1)|}{|x_1|^d}.
$$
Since $|x_1| > |y_1|$, the result follows.
\end{proof}
\begin{lem} \label{lem:unique}
Let $K = \mathbb R$ or $\mathbb Q_p$, where $p$ is a rational prime, and let $\overline K$ denote the algebraic closure of $K$. Denote the standard absolute value on $K$ by $|\quad|$. Let $\alpha$ and $\beta$ be distinct numbers in $\overline{K}$. Let $\mu$ and $C_0$ be positive real numbers.
If $x_1/y_1$ is a rational number such that $H(x_1, y_1) \geq (2C_0/|\alpha - \beta|)^{1/\mu}$ and
$$
\left|\alpha - \frac{x_1}{y_1}\right| < \frac{C_0}{H(x_1, y_1)^\mu},
$$
then
$$
\left|\beta - \frac{x_1}{y_1}\right| \geq \frac{C_0}{H(x_1, y_1)^\mu}.
$$
\end{lem}
\begin{proof}
Suppose that the statement is false. Then it follows from the triangle inequality that
$$
|\alpha - \beta| \leq \left|\alpha - \frac{x_1}{y_1}\right| + \left|\beta - \frac{x_1}{y_1}\right| < \frac{2C_0}{H(x_1, y_1)^\mu},
$$
and so $H(x_1, y_1) < (2C_0/|\alpha - \beta|)^{1/\mu}$, leading us to a contradiction.
\end{proof}
\begin{cor} \label{cor:unique}
Let $K = \mathbb R$ or $\mathbb Q_p$, where $p$ is a rational prime, and let $\overline K$ denote the algebraic closure of $K$. Denote the standard absolute value on $K$ by $|\quad|$. Let $f(x) \in \mathbb Z[x]$ be an irreducible polynomial of degree $d \geq 2$ with roots $\alpha_1, \ldots, \alpha_d \in \overline{K}$. Let $\mu$ and $C_0$ be positive real numbers. There exists a positive number $C_{11}$, which depends only on $f$, $\mu$ and $C_0$, with the following property. If $x_1/y_1$ is a rational number such that $H(x_1, y_1) \geq C_{11}$ and
$$
\left|\alpha_i - \frac{x_1}{y_1}\right| < \frac{C_0}{H(x_1, y_1)^\mu}
$$
for some $i \in \{1, \ldots, d\}$, then
$$
\left|\alpha_j - \frac{x_1}{y_1}\right| \geq \frac{C_0}{H(x_1, y_1)^\mu}
$$
for all $j \neq i$.
\end{cor}
\begin{lem} \label{lem:thue-siegel}
\emph{(The Thue-Siegel Principle \cite{bombieri-mueller})} Let $K = \mathbb R$ or $\mathbb Q_p$, where $p$ is a rational prime, and denote the standard absolute value on $K$ by $|\quad|$. Let $\alpha_1 \in K$ be an algebraic number of degree $d \geq 3$ over $\mathbb Q$ and let $\alpha_2 \in \mathbb Q(\alpha_1)$ have degree $d$. Let $t$ and $\tau$ be such that
\begin{equation} \label{eq:thue-siegel_intervals}
\frac{2 + \sqrt{2d^3 + 2d^2 - 4d}}{d(d + 1)} < t < \sqrt{\frac{2}{d}}, \quad \sqrt{2 - dt^2} < \tau < t - \frac{2}{d},
\end{equation}
and put $\lambda = 2/(t - \tau)$, so that $\lambda < d$. Define
$$
A_i = \frac{t^2}{2 - dt^2}\left(\log M(\alpha_i) + \frac{d}{2}\right) \quad \text{for $i = 1, 2$}.
$$
Let $x_1/y_1$ and $x_2/y_2$ be rational numbers in lowest terms that satisfy the inequalities
$$
\left|\alpha_i - \frac{x_i}{y_i}\right| < \frac{1}{\left(4e^{A_i}H(x_i, y_i)\right)^\lambda} \quad \text{for $i = 1, 2$.}
$$
Then
$$
\log(4e^{A_2}) + \log H(x_2, y_2) \leq \delta^{-1}\left(\log(4e^{A_1}) + \log H(x_1, y_1)\right),
$$
where
$$
\delta = \frac{dt^2 + \tau^2 - 2}{d - 1}.
$$
\end{lem}
\begin{proof}
Since $d \geq 3$, the intervals in (\ref{eq:thue-siegel_intervals}) are guaranteed to be non-empty, so the statement is not vacuously true. Since
$$
\left|\alpha_i - \frac{x_i}{y_i}\right| < \frac{1}{\left(4e^{A_i}H(x_i, y_i)\right)^\lambda} < \frac{1}{\left(3e^{A_i}H(x_i, y_i)\right)^\lambda} \quad \text{for $i = 1, 2$},
$$
the case when $|\alpha_1| \leq 1$ and $|\alpha_2| \leq 1$ follows directly from \mbox{\cite[Section II]{bombieri-mueller}}. More precisely, the comment on \mbox{p. 184} of \cite{bombieri-mueller} states that the triple $(A_1, A_2, \tau)$ is admissible for the data $(\alpha_1, \alpha_2, x_1/y_1, x_2/y_2, t, \vartheta, \delta)$, where we take $\vartheta = 1$. Note also that the comments on p. 74 of \cite{bombieri-schmidt} apply in our situation:
\begin{enumerate}[(i)]
\item the hypothesis $K_{\tilde v} = k_v$ is not used in the proof and therefore may be omitted;
\item $c(\vartheta t) \leq \log 3$;
\item the chosen value for $A_i$ implies a fortiori $\left|\alpha_i - \frac{x_i}{y_i}\right| < \vartheta(t - \tau)$ for $i = 1, 2$;
\item $h(x_i/y_i) = H(x_i, y_i)$ for $i = 1, 2$;
\item the exponent in (5A), p. 179 of \cite{bombieri-mueller} should be $2\vartheta/(t - \tau)$, not $2\vartheta^{-1}/(t - \tau)$.
\end{enumerate}
Next, we consider the case when $|\alpha_i| > 1$ for some $i \in \{1, 2\}$. If $K = \mathbb R$, then $|x_i/y_i| > 1$, and so
$$
\left|\alpha_i^{-1} - \frac{y_i}{x_i}\right| < |\alpha_i|^{-1}|y_i/x_i| \left(4e^{A_i}H(x_i, y_i)\right)^{-\lambda} < \left(3e^{A_i}H(x_i, y_i)\right)^{-\lambda}.
$$
If $K = \mathbb Q_p$, then $|y_i| \leq 1$ and we claim that $|x_i| = 1$. Suppose, to the contrary, that $|x_i| < 1$. Since $x_i/y_i$ is in lowest terms, it must be the case that $|y_i| = 1$, so
$$
|x_i\alpha_i^{-1}| = |x_i| \cdot |\alpha_i^{-1}| < 1 = |y_i|.
$$
Since $|x_i\alpha_i^{-1}| \neq |y_i|$, it follows from the strong triangle inequality that
$$
|x_i\alpha_i^{-1} - y_i| = \max\left\{|x_i\alpha_i^{-1}|,\ |y_i|\right\} = \max\left\{|x_i\alpha_i^{-1}|,\ 1\right\} \geq 1.
$$
Thus,
$$
1 \leq |x_i\alpha_i^{-1} - y_i| = |y_i| |\alpha_i^{-1}| \left|\alpha_i - \frac{x_i}{y_i}\right| < \left(4e^{A_i}H(x_i, y_i)\right)^{-\lambda},
$$
which is impossible. Hence $|x_i| = 1$, so
$$
\left|\alpha_i^{-1} - \frac{y_i}{x_i}\right| < |\alpha_i|^{-1}|y_i/x_i| \left(4e^{A_i}H(x_i, y_i)\right)^{-\lambda} < \left(3e^{A_i}H(x_i, y_i)\right)^{-\lambda}.
$$
We conclude that, as long as $\left|\alpha_i - \frac{x_i}{y_i}\right| < \left(4e^{A_i}H(x_i, y_i)\right)^{-\lambda}$ for $i = 1, 2$, the inequalities
$$
\left|\alpha_i - \frac{x_i}{y_i}\right| < \frac{1}{\left(3e^{A_i}H(x_i, y_i)\right)^{\lambda}} \quad \text{and} \quad \left|\alpha_i^{-1} - \frac{y_i}{x_i}\right| < \frac{1}{\left(3e^{A_i}H(x_i, y_i)\right)^{\lambda}}
$$
hold whenever $|\alpha_i| > 1$ for some $i \in \{1, 2\}$. Consequently, we can always choose $r, s \in \{-1, 1\}$ so that $|\alpha_1^r| \leq 1$, $|\alpha_2^s| \leq 1$ and $(A_1, A_2, \tau)$ is admissible for the data $\left(\alpha_1^r, \alpha_2^s, (x_1/y_1)^r, (x_2/y_2)^s, t, \vartheta, \delta\right)$. The result now follows from \mbox{\cite[Section II]{bombieri-mueller}}.
\end{proof}
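To get a feeling for the admissible ranges in (\ref{eq:thue-siegel_intervals}), the following short computation (a sketch in Python, for illustration only) picks the midpoints of the two intervals for $d = 3$ and evaluates the resulting $\lambda$ and $\delta$:
\begin{verbatim}
# Admissible parameters in the Thue-Siegel principle for d = 3:
# choose t and tau as the midpoints of their respective intervals.
from math import sqrt

d = 3
t_lo = (2 + sqrt(2 * d**3 + 2 * d**2 - 4 * d)) / (d * (d + 1))
t_hi = sqrt(2 / d)
t = (t_lo + t_hi) / 2
tau_lo, tau_hi = sqrt(2 - d * t**2), t - 2 / d
tau = (tau_lo + tau_hi) / 2
lam = 2 / (t - tau)
delta = (d * t**2 + tau**2 - 2) / (d - 1)
print(t_lo, t_hi)      # roughly 0.8122 and 0.8165
print(tau_lo, tau_hi)  # a narrow positive interval
print(lam, lam < d)    # lambda is slightly below d = 3
print(delta)           # a small positive number
\end{verbatim}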
\section{Minimal Pairs} \label{sec:minimal_pairs}
Let $\alpha$ be an algebraic number of degree $d$ over $\mathbb Q$ and let $\beta \in \mathbb Q(\alpha)$ be irrational. With the pair $(\alpha, \beta)$ we associate two polynomials $P, Q \in \mathbb Z[x]$, which possess certain minimal properties listed in Definition \ref{defin:minimal_pair}. The properties of minimal pairs summarized in Proposition \ref{prop:properties_of_minimal_pairs} will play a crucial role in the proofs of the Archimedean and non-Archimedean gap principles, which are outlined in Sections \ref{sec:generalized_archimedean_gap_principle} and \ref{sec:generalized_non-archimedean_gap_principle}, respectively.
\begin{defin} \label{defin:minimal_pair}
Let $\alpha$ be an algebraic number of degree $d$ and let $\beta \in \mathbb Q(\alpha)$ be irrational. We say that two univariate polynomials, $P$ and $Q$, not both identically equal to zero, form a \emph{minimal pair} for $(\alpha, \beta)$, if they satisfy the following four properties:
\begin{enumerate}[(1)]
\item $P, Q \in \mathbb Z[x]$.
\item $P(\alpha) + \beta Q(\alpha) = 0$.
\item The quantity $\max\{\deg P, \deg Q\}$ is minimal among all polynomials satisfying properties (1) and (2).
\item The quantity $\max\{H(P), H(Q)\}$ is minimal among all polynomials satisfying properties (1), (2) and (3).
\end{enumerate}
If $P, Q$ is a minimal pair for $(\alpha, \beta)$, we write
$$
r(\alpha, \beta) = \max\{\deg P, \deg Q\}.
$$
\end{defin}
If $P, Q$ is a minimal pair for $(\alpha, \beta)$ then $-P, -Q$ is also a minimal pair for $(\alpha, \beta)$. This already demonstrates that minimal pairs are not unique. Furthermore, the uniqueness is not guaranteed even if we impose an additional condition that the leading coefficient of $Q$ is equal to one. Indeed, let
$$
\alpha = 2\cos\left(\frac{2\pi}{15}\right) \quad \text{and} \quad \beta = 2\cos\left(\frac{4\pi}{15}\right).
$$
Then both
$$
P_1(x) = -x^2 + 2, \quad Q_1(x) = 1
$$
and
$$
P_2(x) = -x^2 + 2x - 1, \quad Q_2(x) = x^2 - x - 1
$$
are minimal pairs for $(\alpha, \beta)$.
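One can verify property (2) of Definition \ref{defin:minimal_pair} directly: here $\beta = \alpha^2 - 2$, so $P_1(\alpha) + \beta Q_1(\alpha) = 0$ is immediate, while $P_2(\alpha) + \beta Q_2(\alpha) = (-\alpha^2 + 2\alpha - 1) + (\alpha^2 - 2)(\alpha^2 - \alpha - 1) = \alpha^4 - \alpha^3 - 4\alpha^2 + 4\alpha + 1 = 0$, since $x^4 - x^3 - 4x^2 + 4x + 1$ is the minimal polynomial of $\alpha = 2\cos(2\pi/15)$.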
If $P, Q$ is a minimal pair for $(\alpha, \beta)$, then we can define a polynomial
$$
R(x, y) = P(x) + yQ(x).
$$
Polynomials of this form were used by Thue \cite{thue} for the purpose of establishing the first instance of the Thue-Siegel principle \cite{bombieri-mueller}. More precisely, they were constructed so as to achieve high vanishing at the point $(\alpha, \alpha)$, i.e., \mbox{$D_iR(\alpha, \alpha) = 0$} for \mbox{$i = 0, 1, \ldots, \ell$} for some large $\ell$ (see the exposition of Thue's method in \mbox{\cite[Chapter 2]{zannier}}). In turn, we construct $R(x, y)$ so as to achieve $R(\alpha, \beta) = 0$ for arbitrary irrational $\beta \in \mathbb Q(\alpha)$, for the purpose of obtaining a generalized gap principle. The following proposition summarizes various properties of minimal pairs.
\begin{prop} \label{prop:properties_of_minimal_pairs}
Let $\alpha$ be an algebraic number of degree $d$ over $\mathbb Q$ and let \mbox{$\beta \in \mathbb Q(\alpha)$} be irrational. Let $P, Q$ be a minimal pair for $(\alpha, \beta)$ and put $r = r(\alpha, \beta)$. Then the polynomials $P$, $Q$, and their Wronskian $W = PQ' - QP'$ possess the following properties.
\begin{enumerate}
\item
\begin{equation} \label{eq:r_bound}
1 \leq r \leq \left\lfloor d/2\right\rfloor.
\end{equation}
\item $P$ and $Q$ are coprime.
\item If $\hat P, \hat Q \in \mathbb Z[x]$ satisfy $\hat P(\alpha) + \beta \hat Q(\alpha) = 0$ and $\max\{\deg \hat P, \deg \hat Q\} \leq d - 1 - r$, then $\hat P = GP$, $\hat Q = GQ$ for some $G \in \mathbb Z[x]$.
\item There exists a positive number $C_{12}$, which depends only on $\alpha$ and $\beta$, such that
\begin{equation} \label{eq:height_PQ}
\max\{H(P), H(Q)\} \leq C_{12}.
\end{equation}
\item If $\alpha \in \mathbb C$, there exists a positive real number $C_{13}$, which depends only on $\alpha$ and $\beta$, such that
\begin{equation} \label{eq:W_alpha_archimedean_lower_bound}
|W(\alpha)| \geq C_{13}.
\end{equation}
Similarly, if $\alpha \in \mathbb Q_p$ for some rational prime $p$, there exists a positive real number $C_{14}$, which depends only on $\alpha$ and $\beta$, such that
\begin{equation} \label{eq:W_alpha_non-archimedean_lower_bound}
|W(\alpha)|_p \geq C_{14}.
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
Let us prove each of the above statements.
\begin{enumerate}
\item First, we prove that $r \geq 1$. If not, then $r = \max\{\deg P, \deg Q\} = 0$, which means that $P = p$ and $Q = q$ for some integers $p$ and $q$, not both equal to zero. If $q \neq 0$, then $P(\alpha) + \beta Q(\alpha) = 0$ implies $\beta = -p/q$, which contradicts the fact that $\beta$ is irrational. If $q = 0$, then $p = 0$, which is impossible, since we assumed that both $p$ and $q$ cannot be equal to zero. Thus, $r \geq 1$.
Next, we prove that $r \leq s$, where $s = \lfloor d/2\rfloor$. Write
\begin{equation} \label{eq:P_hat_Q_hat}
\hat P(x) = \sum\limits_{i = 0}^s a_ix^i \quad \text{and} \quad \hat Q(x) = \sum\limits_{i = 0}^s a_{s + 1 + i}x^i.
\end{equation}
We view the $2s + 2$ integer coefficients $a_0, \ldots, a_{2s + 1}$ as variables. Since $\alpha$ is algebraic of degree $d$ over $\mathbb Q$ and $\beta \in \mathbb Q(\alpha)$, the equation $\hat P(\alpha) + \beta \hat Q(\alpha) = 0$ amounts to a system of $d$ linear equations over $\mathbb Q$, which we will write down explicitly in the proof of Part 4. Since $2s + 2 > d$, the existence of a non-trivial integer solution to this system of $d$ linear equations over $\mathbb Q$ in $2s + 2$ variables is guaranteed by Lemma \ref{lem:siegels_lemma}. Therefore, there exist polynomials $\hat P, \hat Q$, not both zero, such that $\hat P(\alpha) + \beta \hat Q(\alpha) = 0$ and $\max\{\deg \hat P, \deg \hat Q\} \leq s$. Consequently, the polynomials $P, Q$ with $\max\{\deg P, \deg Q\}$ minimal satisfy
$$
\max\{\deg P, \deg Q\} \leq \max\{\deg \hat P, \deg \hat Q\} \leq s.
$$
\item Let $G = \gcd(P, Q)$ and suppose that $\deg G \geq 1$. Then certainly $G(\alpha) \neq 0$, because $\alpha$ has degree $d$ and $\deg G \leq \deg P < d$. Put $\hat P = P/G$ and $\hat Q = Q/G$. Then
$$
\hat P(\alpha) + \beta\hat Q(\alpha) = 0
$$
and
$$
\max\{\deg \hat P, \deg \hat Q\} < \max\{\deg P, \deg Q\},
$$
in contradiction to our assumption that $\max\{\deg P, \deg Q\}$ is minimal. This means that $\deg G = 0$, and so $P$ and $Q$ are coprime.
\item Since
$$
P(\alpha) + \beta Q(\alpha) = \hat P(\alpha) + \beta \hat Q(\alpha) = 0,
$$
we have
$$
P(\alpha)\hat Q(\alpha) - Q(\alpha)\hat P(\alpha) = 0.
$$
Since $\alpha$ has degree $d$ and
\begin{align*}
\deg\left(P\hat Q - Q\hat P\right)
& \leq \max\{\deg P, \deg Q\} + \max\{\deg \hat P, \deg \hat Q\}\\
& \leq r + (d - 1 - r)\\
& < d,
\end{align*}
we conclude that $P\hat Q - Q\hat P$ is identically equal to zero. If $\hat Q = 0$, then $\hat P = 0$, and so $G = 0$. Otherwise $P/Q = \hat P/\hat Q$. If we put $G = \gcd(\hat P, \hat Q)$, then it becomes clear that $\hat P = GP$, $\hat Q = GQ$.
\item Define $b_i, c_{k, i} \in \mathbb Q$ as follows:
$$
\alpha^k = c_{k, d - 1}\alpha^{d - 1} + \cdots + c_{k, 1}\alpha + c_{k, 0},
$$
$$
\beta = b_{d - 1}\alpha^{d - 1} + \cdots + b_1\alpha + b_0.
$$
Let $\hat P, \hat Q$ be as in (\ref{eq:P_hat_Q_hat}). Then,
\begin{align*}
\hat P(\alpha) + \beta\hat Q(\alpha)
& = \sum\limits_{i = 0}^s a_i\alpha^i + \left(\sum\limits_{i = 0}^{d - 1}b_i\alpha^i\right)\cdot\left(\sum\limits_{j = 0}^s a_{s + 1 + j}\alpha^j\right)\\
& = \sum\limits_{i = 0}^sa_i\alpha^i + \sum\limits_{j = 0}^sa_{s + 1 + j}\sum\limits_{i = 0}^{d - 1}b_i\alpha^{i + j}\\
& = \sum\limits_{i = 0}^sa_i\alpha^i + \sum\limits_{j = 0}^sa_{s + 1 + j}\left(\sum\limits_{i = j}^{d - 1}b_{i-j}\alpha^i + \sum\limits_{k = d}^{d - 1 + j}b_{k - j}\alpha^k\right)\\
& = \sum\limits_{i = 0}^sa_i\alpha^i + \sum\limits_{j = 0}^sa_{s + 1 + j}\sum\limits_{i = j}^{d - 1}b_{i-j}\alpha^i + \sum\limits_{j = 0}^sa_{s + 1 + j}\sum\limits_{k = d}^{d - 1 + j}b_{k - j}\sum\limits_{i = 0}^{d - 1}c_{k, i}\alpha^i\\
& = \sum\limits_{i = 0}^s\left(a_i + \sum\limits_{j = 0}^ib_{i - j}a_{s + 1 + j} + \sum\limits_{j = 0}^s\sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}a_{s + 1 + j}\right)\alpha^i +\\
& + \sum\limits_{i = s + 1}^{d - 1}\sum\limits_{j = 0}^s\left(b_{i-j} + \sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}\right)a_{s + 1 + j}\alpha^i\\
& = \sum\limits_{i = 0}^{d - 1}L_i(\vec a)\alpha^i,
\end{align*}
where $\vec a = (a_0, a_1, \ldots, a_{2s + 1})$ and
\small
$$
L_i(\vec a) =
\begin{cases}
a_i + \sum\limits_{j = 0}^i\left(b_{i - j} + \sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}\right)a_{s + 1 + j} + \sum\limits_{j = i + 1}^s\left(\sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}\right)a_{s + 1 + j} & \text{if $i \leq s$,}\\
\sum\limits_{j = 0}^s\left(b_{i-j} + \sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}\right)a_{s + 1 + j} & \text{if $i \geq s + 1$.}
\end{cases}
$$
\normalsize
We conclude that the equation $\hat P(\alpha) + \beta \hat Q(\alpha) = 0$ is equivalent to the system of $d$ linear equations $L_0(\vec a) = \ldots = L_{d - 1}(\vec a) = 0$ over $\mathbb Q$.
Put
$$
B = \max\limits_{0 \leq i \leq d - 1}\{|b_i|\} \quad \text{and} \quad C = \max\limits_{\substack{0 \leq k \leq d - 1 + s\\0 \leq i \leq d - 1}}\{|c_{k, i}|\}.
$$
Then we can bound the (rational) coefficients of $L_i(\vec a)$ from above by $B(1 + sC)$:
$$
\left|b_{i - j} + \sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}\right| \leq B + jBC \leq B(1 + sC),
$$
$$
\left|\sum\limits_{k = d}^{d - 1 + j}b_{k - j}c_{k, i}\right| \leq jBC \leq sBC < B(1 + sC).
$$
By Lemma \ref{lem:alpha_beta_bound} we have $\theta_\alpha c_\beta b_i \in \mathbb Z$ for all $i$ and $B \leq C_9$. Further, by Lemma \ref{lem:bound_on_coefficients_2} we have $c_\alpha^{\max\{0, k - d + 1\}}c_{k, i} \in \mathbb Z$ for all $i, k$ and $C \leq C_8^s$.
Hence the linear forms
$$
\hat L_i(\vec a) = \theta_\alpha c_\beta c_\alpha^sL_i(\vec a)
$$
have integer coefficients and the size of these coefficients is at most
$$
A = \theta_\alpha c_\beta c_\alpha^sC_9(1 + sC_8^s).
$$
By Lemma \ref{lem:siegels_lemma},
\small
\begin{align*}
\max\{H(\hat P), H(\hat Q)\}
& = \max\limits_{0 \leq i \leq 2s + 1}\{|a_i|\}\\
& \leq \left((2s + 2)A\right)^{d/(2s + 2 - d)}\\
& \leq \left((2s + 2)\theta_\alpha c_\beta c_\alpha^sC_9(1 + sC_8^s)\right)^{d/(2s + 2 - d)}.
\end{align*}
\normalsize
Now that we know an upper bound on $\max\{H(\hat P), H(\hat Q)\}$, we can determine an upper bound on $\max\{H(P), H(Q)\}$ by considering the following two cases.
\begin{itemize}
\item[\textbf{Case 1.}] Suppose that $\max\{\deg \hat P, \deg \hat Q\} > d - 1 - r$. Then it follows from Part 1 and the inequality $\max\{\deg \hat P, \deg \hat Q\} \leq \lfloor d/2\rfloor$ that
$$
d \leq r + \max\{\deg \hat P, \deg \hat Q\} \leq 2\lfloor d/2\rfloor.
$$
Thus, $d$ is even and $\max\{\deg \hat P, \deg \hat Q\} = r = d/2$. Therefore the pair $\hat P, \hat Q$ satisfies Properties (1), (2), (3) in Definition \ref{defin:minimal_pair}. By Property (4), the polynomials $P$ and $Q$ satisfy
$$
\max\{H(P), H(Q)\} \leq \max\{H(\hat P), H(\hat Q)\},
$$
and so the result follows.
\item[\textbf{Case 2.}] Suppose that $\max\{\deg \hat P, \deg \hat Q\} \leq d - 1 - r$. Then we can use \mbox{Part 3} to conclude that $\hat P = GP$, $\hat Q = GQ$ for some $G \in \mathbb Z[x]$. Since either $\hat P$ or $\hat Q$ is nonzero, we have $H(G) \geq 1$. By Gelfond's Lemma \mbox{\cite[Lemma 1.6.11]{bombieri-gubler}},
$$
H(P) \leq H(G)H(P) \leq 2^{\deg(GP)}H(GP) \leq 2^{d/2}H(\hat P).
$$
An analogous estimate for $H(Q)$ yields the result.
\end{itemize}
\item Since $P$ and $Q$ are coprime and $r \geq 1$, they are linearly independent \mbox{over $\mathbb Q$}, so the Wronskian $W = PQ' - QP'$ is not identically equal to zero. Since $\alpha$ has degree $d$ and
\begin{align*}
\deg W
& = \deg(PQ' - QP')\\
& \leq \max\{\deg P, \deg Q\} + \max\{\deg P', \deg Q'\}\\
& \leq d/2 + (d/2 - 1)\\
& < d,
\end{align*}
we conclude that $W(\alpha) \neq 0$.
With the basic properties of heights listed in \cite[Section 2.4.1]{zannier}, we find the following upper bound on $H(W)$:
\begin{align*}
H(W)
& \leq H(PQ') + H(QP')\\
& \leq r(H(P)H(Q') + H(Q)H(P'))\\
& \leq 2r^2H(P)H(Q)\\
& \leq 2r^2\max\{H(P), H(Q)\}^2\\
& \leq 2(d/2)^2C_{12}^2.
\end{align*}
Suppose that $\alpha \in \mathbb C$. Then $c_\alpha^{\deg W}W(\alpha)$ is a nonzero algebraic integer, so
$$
N_{\mathbb Q(\alpha)/\mathbb Q}\left(c_\alpha^{\deg W}W(\alpha)\right) = c_\alpha^{d\deg W}\prod_{i = 1}^dW(\alpha_i)
$$
is a nonzero rational integer. Thus,
\begin{align*}
|W(\alpha)|^{-1}
& \leq c_\alpha^{d\deg W}\prod\limits_{i = 2}^d|W(\alpha_i)|\\
& \leq c_\alpha^{d\deg W}\prod\limits_{i = 2}^d(\deg W + 1)H(W)\max\{1, |\alpha_i|\}^{\deg W}\\
& \leq (2rH(W))^{d - 1}\left(\frac{c_\alpha^{d - 1}M(\alpha)}{\max\{1, |\alpha|\}}\right)^{2r - 1}\\
& \leq \left((d^3/2)C_{12}^2\right)^{d - 1}\left(\frac{c_\alpha^{d - 1}M(\alpha)}{\max\{1, |\alpha|\}}\right)^{d - 1}.
\end{align*}
\end{enumerate}
Suppose that $\alpha \in \mathbb Q_p$. Let $f(x)$ denote the minimal polynomial of $\alpha$ with the leading coefficient $c_\alpha$. By \cite[Theorem 1.3.2]{prasolov}, there exist polynomials $\varphi, \psi \in \mathbb Z[x]$ such that $\deg \varphi < \deg W$, $\deg \psi < d$, and
$$
\varphi(x)f(x) + \psi(x)W(x) = \mathbb Res(f, W).
$$
Here $\mathbb Res(f, W)$ denotes the resultant of $f$ and $W$. Since $\mathbb Res(f, W) \neq 0$ and $\alpha$ is a root of $f(x)$, we see that $\psi(\alpha)W(\alpha) = \mathbb Res(f, W)$. Since $c_\alpha^{d - 1}\psi(\alpha)$ is an algebraic integer, its $p$-adic absolute value does not exceed one, so
$$
|W(\alpha)|_p \geq |c_\alpha^{d-1}\psi(\alpha)W(\alpha)|_p = |c_\alpha^{d-1}\mathbb Res(f, W)|_p.
$$
Further, it follows from Hadamard's inequality, as well as the upper bound on $H(W)$ established previously, that
\begin{align*}
|\mathbb Res(f, W)|
& \leq (\deg f + 1)^{\deg W/2}(\deg W + 1)^{\deg f/2}H(\alpha)^{\deg W}H(W)^{\deg f}\\
& \leq (d + 1)^{(2r - 1)/2}(2r)^{d/2}H(\alpha)^{2r - 1}(2(d/2)^2C_{12}^2)^{d}\\
& \leq (d + 1)^{(d - 1)/2}d^{d/2}H(\alpha)^{d - 1}((d^2/2)C_{12}^2)^{d}.
\end{align*}
Combining the lower bound on $|W(\alpha)|_p$ with an upper bound on $|\mathbb Res(f, W)|$ yields the result:
\begin{align*}
|W(\alpha)|_p
& \geq |c_\alpha^{d-1}\mathbb Res(f, W)|_p\\
& \geq |c_\alpha^{d-1}\mathbb Res(f, W)|^{-1}\\
& \geq H(\alpha)^{-(d - 1)}|\mathbb Res(f, W)|^{-1}\\
& \geq (d + 1)^{-(d - 1)/2}d^{-d/2}H(\alpha)^{-2d+2}((d^2/2)C_{12}^2)^{-d}.
\end{align*}
\end{proof}
\section{A Gap Principle in the Presence of Vanishing} \label{sec:gap-principle-with-vanishing}
Let $\alpha$ be an algebraic number over $\mathbb Q$ of degree $d \geq 2$ and let $\beta \in \mathbb Q(\alpha)$ be irrational. Let $P, Q \in \mathbb Z[x]$ be polynomials such that
$$
P(\alpha) + \beta Q(\alpha) = 0.
$$
In this section we prove Proposition \ref{prop:alternative_gap_principle}, which states that \emph{despite the vanishing} of $P(x) + yQ(x)$ at the point $\left(\frac{x_1}{y_1}, \frac{x_2}{y_2}\right) \in \mathbb Q^2$, it is still possible to produce a gap principle, provided that the quantity $r = \max\{\deg P, \deg Q\}$ is at least one.
\begin{prop} \label{prop:alternative_gap_principle}
Let $P, Q \in \mathbb Z[x]$ be coprime and such that
$$
r = \max\{\deg P, \deg Q\} \geq 1.
$$
Let $x_1/y_1, x_2/y_2$ be rational numbers in lowest terms such that $H(x_2, y_2) \geq H(x_1, y_1)$ and
\begin{equation} \label{eq:R_vanishes_at_rational_point}
P\left(\frac{x_1}{y_1}\right) + \frac{x_2}{y_2}Q\left(\frac{x_1}{y_1}\right) = 0.
\end{equation}
Then
$$
H(x_2, y_2) \geq \frac{H(x_1, y_1)^r}{C_{15}\max\{H(P), H(Q)\}^{2r^2 + 3r}},
$$
where
$$
C_{15} = 2^{r^2} (r + 1)^{(3r^2 + 2r)/2}.
$$
\end{prop}
The proof of Proposition \ref{prop:alternative_gap_principle} is given at the end of the section, and it follows directly from the results established below.
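For orientation, when $r = 2$ the constant in Proposition \ref{prop:alternative_gap_principle} is $C_{15} = 2^4 \cdot 3^8 = 104976$, so the conclusion reads
$$
H(x_2, y_2) \geq \frac{H(x_1, y_1)^2}{104976\max\{H(P), H(Q)\}^{14}};
$$
thus, once $H(x_1, y_1)$ is large compared with the heights of $P$ and $Q$, the height of $x_2/y_2$ grows essentially quadratically in the height of $x_1/y_1$.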
\begin{lem} \label{lem:resultant}
Let $P, Q \in \mathbb Z[x]$ be coprime polynomials of degrees $r$ and $s$, respectively, such that $r \geq \max\{1, s\}$. Let $c_P$ be the leading coefficient of $P$,
$$
P(x, y) = y^rP(x/y) \quad \text{and} \quad Q(x, y) = y^rQ(x/y).
$$
Then for all coprime integers $a$ and $b$ the number $g = \gcd\left(P(a, b), Q(a, b)\right)$ divides
$$
\varrho = \left|c_P^{r-s}\mathbb Res(P, Q)\right|,
$$
where $\mathbb Res(P, Q)$ denotes the resultant of $P$ and $Q$. Furthermore,
$$
1 \leq \varrho \leq (r + 1)^r\max\{H(P), H(Q)\}^{2r}.
$$
\end{lem}
\begin{proof}
Let $a$ and $b$ be coprime integers and suppose that a prime power $p^n$ exactly divides $g = \gcd\left(P(a, b), Q(a, b)\right)$. Since $a$ and $b$ are coprime, either $a$ or $b$ is not divisible by $p$. Suppose that $p$ does not divide $b$. By \cite[Theorem 1.3.2]{prasolov}, there exist polynomials $\varphi, \psi \in \mathbb Z[x]$ such that
$$
\varphi(x)P(x) + \psi(x)Q(x) = \mathbb Res(P, Q).
$$
Let $t = \max\{\deg \varphi, \deg \psi\}$. We evaluate the polynomial on the left-hand side at $x = a/b$ and multiply both sides of the above equality by $b^{r + t}$:
$$
b^t\varphi(a/b)P(a, b) + b^t\psi(a/b)Q(a, b) = \mathbb Res(P, Q)b^{r+t}.
$$
By definition of $t$, the numbers $b^t\varphi(a/b)$ and $b^t\psi(a/b)$ are integers. Since $p$ does not divide $b$ and $p^n$ divides both $P(a, b)$ and $Q(a, b)$, we conclude that $p^n$ divides $\mathbb Res(P, Q)$.
Suppose that $p$ divides $b$. Then $p$ does not divide $a$, and so by analogy with the previous case we see that $p^n$ divides $\mathbb Res(P(1, x), Q(1, x))$. Let $\mathcal R(f) = x^{\deg f}f(1/x)$ denote the reciprocal of a polynomial $f(x)$. Then
$$
P(1, x) = \mathcal R(P) \quad \text{and} \quad Q(1, x) = x^{r - s}\mathcal R(Q),
$$
so
\begin{align*}
\mathbb Res(P(1, x), Q(1, x))
& = \mathbb Res\left(\mathcal R(P), x^{r - s}\mathcal R(Q)\right)\\
& = \mathbb Res(\mathcal R(P), x)^{r - s}\mathbb Res(\mathcal R(P), \mathcal R(Q))\\
& = \left((-1)^rc_P\right)^{r - s}(-1)^{rs}\mathbb Res(P, Q)\\
& = (-1)^rc_P^{r - s}\mathbb Res(P, Q).
\end{align*}
Therefore, $p^n$ divides $\left|c_P^{r - s}\mathbb Res(P, Q)\right|$, and the result follows.
Finally, since $P(x)$ and $Q(x)$ are coprime and $r \geq 1$, we have $\mathbb Res(P, Q) \neq 0$, so $\varrho \geq 1$. Applying Hadamard's inequality and $r \geq s$, we obtain
\begin{align*}
|c_P^{r - s}\mathbb Res(P, Q)|
& \leq |c_P|^{r - s}(r + 1)^{s/2}(s + 1)^{r/2}H(P)^sH(Q)^r\\
& \leq (r + 1)^r\max\{H(P), H(Q)\}^{2r}.
\end{align*}
\end{proof}
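The divisibility statement in Lemma \ref{lem:resultant} is easy to test numerically. The following short script is only a sanity check: it assumes the SymPy library is available, and the polynomials $P$, $Q$ and the pairs $(a, b)$ are illustrative choices that play no role elsewhere in the paper.
\begin{verbatim}
# Sanity check of the divisibility claim in the lemma; illustrative data only.
from math import gcd
import sympy as sp

x, y = sp.symbols('x y')
P = sp.Poly(x**3 + 2*x + 1, x)     # r = 3, leading coefficient c_P = 1
Q = sp.Poly(x**2 - 3, x)           # s = 2, coprime to P
r, s = P.degree(), Q.degree()
rho = abs(P.LC()**(r - s) * sp.resultant(P.as_expr(), Q.as_expr(), x))

Pf = sp.expand(y**r * P.as_expr().subs(x, x/y))   # P(x, y) = y^r P(x/y)
Qf = sp.expand(y**r * Q.as_expr().subs(x, x/y))   # Q(x, y) = y^r Q(x/y)

for a, b in [(5, 3), (7, -2), (11, 4)]:           # coprime integer pairs
    g = gcd(abs(int(Pf.subs({x: a, y: b}))), abs(int(Qf.subs({x: a, y: b}))))
    assert rho % g == 0                           # g divides rho
print("every gcd divides rho =", rho)
\end{verbatim}
For this choice one finds $\varrho = 74$, while, for instance, $\gcd(P(5, 3), Q(5, 3)) = \gcd(242, -6) = 2$, in agreement with the lemma.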
\begin{lem} \label{lem:two_forms_bound}
Let
$$
P(x, y) = \prod\limits_{i = 1}^r(\alpha_ix + \beta_iy) \quad \text{and} \quad Q(x, y) = \prod\limits_{j = 1}^r(\gamma_jx + \delta_jy)
$$
be binary forms of degree $r \geq 1$, with complex coefficients. Let
\begin{equation} \label{eq:two_forms_bound_C}
C = C(P, Q) = \frac{\min_{i, j}\{|\alpha_i\delta_j -\beta_i\gamma_j|\}}{\max_{i, j}\left\{\max\{|\alpha_i| + |\gamma_j|, |\beta_i| + |\delta_j|\}\right\}}.
\end{equation}
Suppose that $P$ and $Q$ do not have a linear factor in common, so that $C > 0$. Then for all pairs $(a, b) \in \mathbb C^2$ we have
$$
\max\{|P(a, b)|, |Q(a, b)|\} \geq C^rH(a, b)^r.
$$
\end{lem}
\begin{proof}
We claim that either
$$
\min\limits_{i = 1, \ldots, r}\{|\alpha_i a + \beta_i b|\} \geq C|b| \quad \text{or} \quad \min\limits_{j = 1, \ldots, r}\{|\gamma_ja + \delta_jb|\} \geq C|b|.
$$
For suppose not. Then for all $i, j$ we have
\begin{align*}
\left|(\alpha_i\delta_j - \beta_i\gamma_j)b\right|
& = |\alpha_i(\gamma_ja + \delta_jb) - \gamma_j(\alpha_ia + \beta_ib)|\\
& \leq (|\alpha_i| + |\gamma_j|)\max\left\{|\alpha_i a + \beta_i b|, |\gamma_j a + \delta_j b|\right\}\\
& < (|\alpha_i| + |\gamma_j|)C|b|\\
& \leq \min\{|\alpha_i\delta_j - \beta_i\gamma_j|\}|b|,
\end{align*}
so we reach a contradiction. Without loss of generality, suppose that $\min\{|\alpha_ia + \beta_i b|\}\geq C|b|$. Then
$$
|P(a, b)| = \prod\limits_{i = 1}^r|\alpha_ia + \beta_i b| \geq \min\left\{|\alpha_ia + \beta_ib|\right\}^r \geq C^r|b|^r.
$$
Analogously, either
$$
\min\limits_{i = 1, \ldots, r}\{|\alpha_i a + \beta_i b|\} \geq C|a| \quad \text{or} \quad \min\limits_{j = 1, \ldots, r}\{|\gamma_ja + \delta_jb|\} \geq C|a|.
$$
In the first case we can immediately conclude that $|P(a, b)| \geq C^rH(a, b)^r$ and the result follows. Otherwise we have $|Q(a, b)| \geq C^r|a|^r$. Combining this inequality with $|P(a, b)| \geq C^r|b|^r$ yields the result.
\end{proof}
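As a concrete illustration (it is not needed in the sequel), take $P(x, y) = (x - \sqrt 2\,y)(x + \sqrt 2\,y) = x^2 - 2y^2$ and $Q(x, y) = (x - iy)(x + iy) = x^2 + y^2$. Every quantity $|\alpha_i\delta_j - \beta_i\gamma_j|$ equals $|\pm i \pm \sqrt 2| = \sqrt 3$, while the denominator in (\ref{eq:two_forms_bound_C}) equals $1 + \sqrt 2$, so $C = \sqrt 3/(1 + \sqrt 2)$ and Lemma \ref{lem:two_forms_bound} gives
$$
\max\{|a^2 - 2b^2|, |a^2 + b^2|\} \geq \frac{3}{(1 + \sqrt 2)^2}\,H(a, b)^2
$$
for all $(a, b) \in \mathbb C^2$: when $a/b$ is close to $\sqrt 2$ the first form is small, but then the second one has size about $3|b|^2$.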
For the proof of the following result, recall the definition of the \emph{Mahler measure} and the \emph{house} of a polynomial introduced in Section \ref{sec:theory}.
\begin{cor} \label{cor:two_forms_bound}
Let $P, Q \in \mathbb Z[x]$ be coprime polynomials of degrees $r$ and $s$, respectively, such that $r \geq \max\{1, s\}$. Define
$$
P(x, y) = y^rP(x/y) \quad \text{and} \quad Q(x, y) = y^rQ(x/y).
$$
Then for all pairs $(a, b) \in \mathbb C^2$ we have
\small
$$
\max\{|P(a, b)|, |Q(a, b)|\} \geq \frac{H(a, b)^r}{2^{r^2} (r + 1)^{3r^2/2} \max\{H(P), H(Q)\}^{2r^2+r}}.
$$
\normalsize
\end{cor}
\begin{proof}
Let $c_P$ and $c_Q$ be the leading coefficients of $P$ and $Q$, respectively. Then
$$
|c_P| \cdot \max\{1, \house{P}\} \leq M(P) \quad \text{and} \quad |c_Q| \cdot \max\{1, \house{Q}\} \leq M(Q),
$$
and so it follows from \cite[Lemma 1.6.7]{bombieri-gubler} that
\small
\begin{equation} \label{eq:house-bounds}
|c_P| \cdot \max\{1, \house{P}\} \leq (r + 1)^{1/2}H(P) \quad \text{and} \quad |c_Q| \cdot \max\{1, \house{Q}\} \leq (s + 1)^{1/2}H(Q).
\end{equation}
\normalsize
Let $\mu_1, \ldots, \mu_r$ be the roots of $P(x)$ and write
$$
P(x, y) = c_P\prod\limits_{i = 1}^r(x - \mu_i y) = \prod\limits_{i = 1}^r(\alpha_ix + \beta_iy),
$$
where $\alpha_i = c_P^{1/r}$, $\beta_i = - c_P^{1/r}\mu_i$. We consider the following two cases.
\textbf{Case 1.} Suppose that $s = 0$, i.e., $Q(x) = c_Q$. Then
$$
Q(x, y) = c_Qy^r = \prod\limits_{j = 1}^r(\gamma_jx + \delta_jy),
$$
where $\gamma_j = 0$ and $\delta_j = c_Q^{1/r}$. Using (\ref{eq:house-bounds}), the constant $C$ in (\ref{eq:two_forms_bound_C}) can be estimated from below as follows:
\begin{align*}
C
& = \frac{|c_Pc_Q|^{1/r}}{\max_i\left\{\max\{|c_P|^{1/r}, |c_P|^{1/r}|\mu_i| + |c_Q|^{1/r}\}\right\}}\\
& = \frac{|c_Q|^{1/r}}{\max_i\{\ \max\{1, |\mu_i| + \left|\frac{c_Q}{c_P}\right|^{1/r}\}\ \}}\\
& = \frac{1}{\max_i\{\ \max\{\left|\frac{1}{c_Q}\right|^{1/r}, \frac{|\mu_i|}{|c_Q|^{1/r}} + \left|\frac{1}{c_P}\right|^{1/r}\}\ \}}\\
& = \frac{1}{\max\{\ \left|\frac{1}{c_Q}\right|^{1/r}, \frac{\house{P}}{|c_Q|^{1/r}} + \left|\frac{1}{c_P}\right|^{1/r} \ \}}\\
& \geq \frac{1}{|c_P|\cdot \max\{1, \house{P}\} + |c_Q| \cdot \max\{1, \house{Q}\}}\\
& \geq \frac{1}{2(r + 1)^{1/2}\max\{H(P), H(Q)\}}.
\end{align*}
\textbf{Case 2.} Suppose that $s \geq 1$. Let $\nu_1, \ldots, \nu_s$ be the roots of $Q(x)$ and write
$$
Q(x, y) = c_Qy^{r - s}\prod\limits_{j = 1}^s(x - \nu_jy) = \prod\limits_{j = 1}^r(\gamma_jx + \delta_jy),
$$
where
$$
\gamma_j =
\begin{cases}
c_Q^{1/r}, & \text{if $1 \leq j \leq s$,}\\
0, & \text{if $s + 1 \leq j \leq r$,}
\end{cases}
\quad
\text{and}
\quad
\delta_j =
\begin{cases}
-c_Q^{1/r}\nu_j, & \text{if $1 \leq j \leq s$,}\\
c_Q^{1/r}, & \text{if $s + 1 \leq j \leq r$.}
\end{cases}
$$
Using (\ref{eq:house-bounds}), the constant $C$ in (\ref{eq:two_forms_bound_C}) can be estimated from below as follows:
\begin{align*}
C
& = \frac{\min_{i, j}\{|\alpha_i\delta_j -\beta_i\gamma_j|\}}{\max_{i, j}\left\{\max\{|\alpha_i| + |\gamma_j|, |\beta_i| + |\delta_j|\}\right\}}\\
& \geq \frac{|c_Pc_Q|^{1/r}\min\{1, \min_{i, j}\{|\mu_i - \nu_j|\}\}}{|c_P|^{1/r}\cdot \max\{1, \house{P}\} + |c_Q|^{1/r}\cdot \max\{1, \house{Q}\}}\\
& \geq \frac{\min\{1, \min_{i, j}\{|\mu_i - \nu_j|\}\}}{2(r + 1)^{1/2}\max\{H(P), H(Q)\}}.
\end{align*}
By \cite[Theorem A]{bugeaud-mignotte},
$$
\min\limits_{\substack{1 \leq i \leq r\\ 1 \leq j \leq s}}\{|\mu_i - \nu_j|\} \geq 2^{1 - r}(r + 1)^{(1 - 3r)/2}\max\{H(P), H(Q)\}^{-2r}.
$$
Since $P, Q \in \mathbb Z[x]$ and $r \geq 1$, we have $\max\{H(P), H(Q)\} \geq 1$, so the quantity on the right-hand side of the above inequality does not exceed one. Combining the lower bound on $\min_{i, j}\{|\mu_i - \nu_j|\}$ with the lower bound on $C$ established above, we obtain
$$
C \geq 2^{-r} (r + 1)^{-3r/2} \max\{H(P), H(Q)\}^{-2r - 1}.
$$
The result now follows from Lemma \ref{lem:two_forms_bound}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:alternative_gap_principle}]
From equation (\ref{eq:R_vanishes_at_rational_point}) it follows that $Q(x_1/y_1) \neq 0$, for otherwise $P(x_1/y_1) = 0$, which means that $P$ and $Q$ are not coprime. Let
$$
P(x, y) = y^rP(x/y) \quad \text{and} \quad Q(x, y) = y^rQ(x/y).
$$
Since $|y_1| \geq 1$, it must be the case that $Q(x_1, y_1) = y_1^rQ(x_1/y_1) \neq 0$, so
$$
\frac{x_2}{y_2} = -\frac{P(x_1/y_1)}{Q(x_1/y_1)} = -\frac{P(x_1, y_1)}{Q(x_1, y_1)}.
$$
Since $x_2$ and $y_2$ are coprime, and $P(x_1, y_1)$ and $Q(x_1, y_1)$ are integers, we see that
$$
|x_2| = \frac{|P(x_1, y_1)|}{g} \quad \text{and} \quad |y_2| = \frac{|Q(x_1, y_1)|}{g},
$$
where $g = \gcd\left(P(x_1, y_1), Q(x_1, y_1)\right)$. By Lemma \ref{lem:resultant}, $g \leq (r + 1)^r\max\{H(P), H(Q)\}^{2r}$. Thus,
$$
H(x_2, y_2) = \frac{\max\{|P(x_1, y_1)|, |Q(x_1, y_1)|\}}{g} \geq \frac{\max\{|P(x_1, y_1)|, |Q(x_1, y_1)|\}}{(r + 1)^r\max\{H(P), H(Q)\}^{2r}}.
$$
Finally, since $P$ and $Q$ are coprime, Corollary \ref{cor:two_forms_bound} applies:
\begin{align*}
H(x_2, y_2)
& \geq \frac{\max\{|P(x_1, y_1)|, |Q(x_1, y_1)|\}}{(r + 1)^r\max\{H(P), H(Q)\}^{2r}}\\
& \geq \frac{H(x_1, y_1)^r}{2^{r^2} (r + 1)^{(3r^2 + 2r)/2} \max\{H(P), H(Q)\}^{2r^2 + 3r}}.
\end{align*}
\end{proof}
\section{A Generalized Archimedean Gap Principle} \label{sec:generalized_archimedean_gap_principle}
In this section we prove Theorem \ref{thm:archimedean_gap_principle}. Recall that, for any $h \in \mathbb Z[x]$ and $i$ such that $0 \leq i \leq \deg h$, the inequality
\begin{equation} \label{eq:Dih_bound}
|D_ih(\alpha)| \leq H(h)\binom{\deg h + 1}{i + 1}\max\{1, |\alpha|\}^{\deg h - i}
\end{equation}
holds.
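For instance, if $h(x) = x^2 + x + 1$ and $i = 1$, then $|D_1h(\alpha)| = |2\alpha + 1| \leq 2|\alpha| + 1 \leq 3\max\{1, |\alpha|\} = H(h)\binom{3}{2}\max\{1, |\alpha|\}^{\deg h - 1}$, in agreement with (\ref{eq:Dih_bound}).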
Let $P, Q$ be a minimal pair for $(\alpha, \beta)$, and define $R(x, y) = P(x) + yQ(x)$, so that $R(\alpha, \beta) = 0$. Choose $C_1$ so that
$$
C_1 \geq C_0^{1/\mu}.
$$
Then, since $H(x_2, y_2) \geq H(x_1, y_1) \geq C_1$ and $C_1^\mu \geq C_0$,
$$
\left|\alpha - \frac{x_1}{y_1}\right| < 1 \quad \text{and} \quad \left|\beta - \frac{x_2}{y_2}\right| < 1.
$$
If $R(x_1/y_1, x_2/y_2) \neq 0$, then it follows from the triangle inequality, (\ref{eq:Dih_bound}), and the two inequalities established above that
\begin{align*}
\frac{1}{H(x_1, y_1)^r H(x_2, y_2)}
& \leq \left|R\left(\frac{x_1}{y_1}, \frac{x_2}{y_2}\right)\right|\\
& \leq \sum\limits_{i = 0}^r\sum\limits_{j = 0}^1\left|D_{i,j}R(\alpha, \beta)\right|\left|\alpha - \frac{x_1}{y_1}\right|^i\left|\beta - \frac{x_2}{y_2}\right|^j\\
& < \frac{C_0}{H(x_1, y_1)^{\mu}}\sum\limits_{i = 0}^r\left(\left|D_{i, 0}R(\alpha, \beta)\right| + \left|D_{i, 1}R(\alpha, \beta)\right|\right)\\
& \leq \frac{C_0}{H(x_1, y_1)^{\mu}}\sum\limits_{i = 0}^r\left(|D_iP(\alpha)| + (1 + |\beta|)\cdot |D_iQ(\alpha)|\right)\\
& \leq \frac{C_0}{H(x_1, y_1)^{\mu}}\sum\limits_{i = 0}^r\left(\binom{r + 1}{i + 1}H(P) + (1 + |\beta|)\binom{r + 1}{i + 1}H(Q)\right)\max\{1, |\alpha|\}^{r - i}\\
& \leq \frac{C_0}{H(x_1, y_1)^{\mu}}(2 + |\beta|)\max\{H(P), H(Q)\}\max\{1, |\alpha|\}^r\sum\limits_{i = 0}^r\binom{r + 1}{i + 1}\\
& < \frac{C_0}{H(x_1, y_1)^{\mu}}2^{r + 1}(2 + |\beta|)C_{12}\max\{1, |\alpha|\}^r\\
& \leq \frac{C_0}{H(x_1, y_1)^{\mu}}2^{(d/2) + 1}(2 + |\beta|)C_{12}\max\{1, |\alpha|\}^{d/2}\\
& = \frac{C_2}{H(x_1, y_1)^{\mu}},
\end{align*}
where the second-to-last inequality follows from (\ref{eq:height_PQ}). By (\ref{eq:r_bound}),
$$
H(x_2,y_2) > C_2^{-1}H(x_1, y_1)^{\mu - r} \geq C_2^{-1}H(x_1, y_1)^{\mu - d/2},
$$
which means that case 1 holds.
Suppose that $R(x_1/y_1, x_2/y_2) = 0$. If $r = 1$, then by definition $R(x, y) = (sx + t) - y(ux + v)$ for some integers $s$, $t$, $u$ and $v$. Note that \mbox{$sv - tu \neq 0$}, for otherwise the number $\beta$ would have to be rational. Since \mbox{$R(\alpha, \beta) = 0$} and \mbox{$R(x_1/y_1,\ x_2/y_2) = 0$}, case 2 holds.
It remains to consider the case when $R(x_1/y_1, x_2/y_2) = 0$ and $r \geq 2$. We will prove that $H(x_1, y_1) < C$ for some positive real number $C$, which depends only on $\alpha$, $\beta$, $\mu$ and $C_0$. By choosing $C_1$ so that $C_1 \geq C$, we then arrive at a contradiction. Note that
\begin{equation} \label{eq:beta_minus_x2y2}
\left|\beta - \frac{x_2}{y_2}\right| = \left|\frac{P(\alpha)}{Q(\alpha)} - \frac{P(x_1/y_1)}{Q(x_1/y_1)}\right| = \frac{|P(\alpha)Q(x_1/y_1) - Q(\alpha)P(x_1/y_1)|}{|Q(\alpha)Q(x_1/y_1)|}.
\end{equation}
Further,
\begin{equation} \label{eq:Q_alpha_upper_bound}
|Q(\alpha)| \leq (r + 1)\max\{H(P), H(Q)\}\max\{1, |\alpha|\}^r \leq (r + 1)C_{12}\max\{1, |\alpha|\}^r,
\end{equation}
\begin{align} \label{eq:Q_x1y1_upper_bound}
\left|Q\left(\frac{x_1}{y_1}\right)\right|
& \leq \sum\limits_{i = 0}^r|D_iQ(\alpha)|\cdot\left|\alpha - \frac{x_1}{y_1}\right|^i\\\notag
& < \sum\limits_{i = 0}^r|D_iQ(\alpha)|\\\notag
& \leq H(Q)\sum\limits_{i = 0}^r\binom{r + 1}{i + 1}\max\{1, |\alpha|\}^{r - i}\\\notag
& < 2^{r + 1}C_{12}\max\{1, |\alpha|\}^r.
\end{align}
It remains to estimate $|P(\alpha)Q(x_1/y_1) - Q(\alpha)P(x_1/y_1)|$ from below. Let $W = PQ' - QP'$ denote the Wronskian of $P$ and $Q$. By Taylor's Theorem, (\ref{eq:Dih_bound}) and (\ref{eq:W_alpha_archimedean_lower_bound}),
$$
\begin{array}{l}
\left|P(\alpha)Q\left(\frac{x_1}{y_1}\right) -Q(\alpha)P\left(\frac{x_1}{y_1}\right)\right|\\
= \left|P(\alpha)\sum\limits_{i = 0}^rD_iQ(\alpha)\left(\frac{x_1}{y_1} - \alpha\right)^i - Q(\alpha)\sum\limits_{i = 0}^rD_iP(\alpha)\left(\frac{x_1}{y_1} - \alpha\right)^i\right|\\
= \left|\alpha - \frac{x_1}{y_1}\right|\cdot \left|\sum\limits_{i = 0}^{r-1}\left(P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)\right)\left(\frac{x_1}{y_1} - \alpha\right)^i\right|\\
= \left|\alpha - \frac{x_1}{y_1}\right|\cdot \left|W(\alpha) + \left(\frac{x_1}{y_1} - \alpha\right)\sum\limits_{i = 1}^{r-1}\left(P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)\right)\left(\frac{x_1}{y_1} - \alpha\right)^{i-1}\right|\\
> \left|\alpha - \frac{x_1}{y_1}\right|\left(|W(\alpha)| - \frac{C_0}{H(x_1, y_1)^\mu}\sum\limits_{i = 1}^{r-1}\left|P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)\right|\right)\\
\geq \left|\alpha - \frac{x_1}{y_1}\right|\left(|W(\alpha)| - \frac{C_0}{H(x_1, y_1)^\mu}2(r + 1)\max\{H(P), H(Q)\}^2\sum\limits_{i = 1}^{r - 1}\binom{r + 1}{i + 2}\max\{1, |\alpha|\}^{2r - i - 1}\right)\\
\geq \left|\alpha - \frac{x_1}{y_1}\right|\left(C_{13} - C_1^{-\mu}C_02^{r + 2}(r + 1)C_{12}^2\max\{1, |\alpha|\}^{2r}\right),
\end{array}
$$
where the last inequality follows from $H(x_1, y_1) \geq C_1$, (\ref{eq:height_PQ}) and (\ref{eq:W_alpha_archimedean_lower_bound}). Thus, if we choose $C_1$ so that
$$
C_1^\mu \geq 2^{(d/2) + 3}((d/2) + 1)C_0C_{12}^2C_{13}^{-1}\max\{1, |\alpha|\}^d,
$$
then it follows from $r \leq d/2$ that
\begin{align*}
\left|P(\alpha)Q\left(\frac{x_1}{y_1}\right) -Q(\alpha)P\left(\frac{x_1}{y_1}\right)\right|
& \geq \left|\alpha - \frac{x_1}{y_1}\right|\left(C_{13} - C_1^{-\mu}C_02^{r + 2}(r + 1)C_{12}^2\max\{1, |\alpha|\}^{2r}\right)\\
& \geq \left|\alpha - \frac{x_1}{y_1}\right|\left(C_{13} - C_1^{-\mu}C_02^{(d/2) + 2}((d/2) + 1)C_{12}^2\max\{1, |\alpha|\}^d\right)\\
& \geq \frac{C_{13}}{2}\left|\alpha - \frac{x_1}{y_1}\right|.
\end{align*}
Combining the above result with (\ref{eq:beta_minus_x2y2}), (\ref{eq:Q_alpha_upper_bound}) and (\ref{eq:Q_x1y1_upper_bound}) yields
$$
\left|\beta - \frac{x_2}{y_2}\right|
= \frac{|P(\alpha)Q(x_1/y_1) - Q(\alpha)P(x_1/y_1)|}{|Q(\alpha)Q(x_1/y_1)|}\\
> \frac{C_{13}}{2^{r + 2}(r + 1)C_{12}^2\max\{1, |\alpha|\}^{2r}}\left|\alpha - \frac{x_1}{y_1}\right|.
$$
By Proposition \ref{prop:alternative_gap_principle},
$$
H(x_2, y_2) \geq \frac{H(x_1, y_1)^r}{C_{15}\max\{H(P), H(Q)\}^{2r^2 + 3r}} \geq \frac{H(x_1, y_1)^r}{C_{15}C_{12}^{2r^2 + 3r}},
$$
where
\begin{equation} \label{eq:C15-upper-bound}
C_{15} = 2^{r^2}(r + 1)^{(3r^2 + 2r)/2} \leq 2^{d^2/4} ((d/2) + 1)^{(3d^2+4d)/8}.
\end{equation}
Consequently,
\begin{align*}
\left|\alpha - \frac{x_1}{y_1}\right|
& < \frac{2^{r + 2}(r + 1)C_{12}^2\max\{1, |\alpha|\}^{2r}}{C_{13}}\left|\beta - \frac{x_2}{y_2}\right|\\
& < \frac{2^{r + 2}(r + 1)C_{12}^2\max\{1, |\alpha|\}^{2r}}{C_{13}} \cdot \frac{C_0}{H(x_2, y_2)^\mu}\\
& \leq \frac{2^{r + 2}(r + 1)C_0 C_{12}^{(2r^2 + 3r)\mu + 2} C_{15}^\mu\max\{1, |\alpha|\}^{2r}}{C_{13}H(x_1, y_1)^{r\mu}}.
\end{align*}
Thus, we obtain an upper bound on $|\alpha - x_1/y_1|$. On the other hand, by \mbox{Lemma \ref{lem:liouville}}, the lower bound (\ref{eq:liouville_inequality}) holds. Combining upper and lower bounds,
$$
\frac{C_6}{H(x_1, y_1)^d} \leq \left|\alpha - \frac{x_1}{y_1}\right| < \frac{2^{r + 2}(r + 1)C_0 C_{12}^{(2r^2 + 3r)\mu + 2} C_{15}^\mu\max\{1, |\alpha|\}^{2r}}{C_{13}H(x_1, y_1)^{r\mu}}.
$$
Since $\mu > (d/2) + 1$ and $r \geq 2$, we see that $r\mu - d > 0$, so
\begin{align*}
H(x_1, y_1)
& < \left(2^{r + 2}(r + 1)C_0(C_6C_{13})^{-1}C_{12}^{(2r^2 + 3r)\mu + 2} C_{15}^\mu \max\{1, |\alpha|\}^{2r}\right)^{1/(r\mu - d)}\\
& \leq \left(2^{d^2\mu/4} ((d/2) + 1)^{(3d^2+4d)\mu/8}C_0(C_6C_{13})^{-1}C_{12}^{(d^2 + 3d)\mu/2 + 2}\max\{1, |\alpha|\}^d\right)^{1/(2\mu - d)}\\
& = C.
\end{align*}
In the second-to-last inequality we used the upper bound on $C_{15}$ given in (\ref{eq:C15-upper-bound}). Thus, if we choose $C_1$ so that $C_1 \geq C$, then $H(x_1, y_1) \geq C_1 \geq C$, and so we arrive at a contradiction.
\section{A Generalized Non-Archimedean Gap Principle} \label{sec:generalized_non-archimedean_gap_principle}
In this section we prove Theorem \ref{thm:non-archimedean_gap_principle}. Let $P, Q$ be a minimal pair for $(\alpha, \beta)$, and define $R(x, y) = P(x) + yQ(x)$, so that $R(\alpha, \beta) = 0$. Suppose that \mbox{$R(x_1/y_1, x_2/y_2) \neq 0$.} Then the following trivial lower bound holds:
\begin{align*}
\left|y_1^ry_2R\left(\frac{x_1}{y_1}, \frac{x_2}{y_2}\right)\right|_p
& \geq \frac{1}{|y_1^ry_2R(x_1/y_1, x_2/y_2)|}\\
& \geq \frac{1}{2(r + 1)\max\{H(P), H(Q)\}H(x_1, y_1)^rH(x_2, y_2)}\\
& \geq \frac{1}{2((d/2) + 1)C_{12}H(x_1, y_1)^{d/2}H(x_2, y_2)}.
\end{align*}
Let $c_\alpha$ and $c_\beta$ denote the leading coefficients of the minimal polynomials of $\alpha$ and $\beta$, respectively. Note that for each $(i, j) \in \{0, \ldots, r\} \times \{0, 1\}$ the $p$-adic number $c_\alpha^{r - i}c_\beta^{1 - j}D_{ij}R(\alpha, \beta)$ is an algebraic integer. Thus, its $p$-adic absolute value does not exceed one. Via the application of Taylor's Theorem we obtain the following upper bound:
\begin{align*}
\left|y_1^ry_2R\left(\frac{x_1}{y_1}, \frac{x_2}{y_2}\right)\right|_p
& \leq \max\limits_{(i, j) \neq (0, 0)}\left\{\left|D_{ij}R(\alpha, \beta)\right|_p \cdot |y_1\alpha - x_1|_p^i\cdot |y_2\beta -x_2|_p^j\right\}\\
& = \max\limits_{(i, j) \neq (0, 0)}\left\{\left|\frac{c_\alpha^{r - i}c_\beta^{1 - j}D_{ij}R(\alpha, \beta)}{c_\alpha^{r - i}c_\beta^{1 - j}}\right|_p \cdot |y_1\alpha - x_1|_p^i\cdot |y_2\beta -x_2|_p^j\right\}\\
& \leq |c_\alpha^rc_\beta|_p^{-1} \max\limits_{(i, j) \neq (0, 0)}\left\{|y_1c_\alpha\alpha - c_\alpha x_1|_p^i\cdot |y_2c_\beta\beta -c_\beta x_2|_p^j\right\}\\
& \leq c_\alpha^rc_\beta\max\{|y_1c_\alpha\alpha - c_\alpha x_1|_p, |y_2c_\beta\beta - c_\beta x_2|_p\}\\
& \leq c_\alpha^rc_\beta\max\{|c_\alpha|_p, |c_\beta|_p\}\max\{|y_1\alpha - x_1|_p, |y_2\beta - x_2|_p\}\\
& < \frac{C_0c_\alpha^rc_\beta}{H(x_1, y_1)^\mu}.
\end{align*}
Upon combining the upper and lower bounds, we obtain
$$
\frac{1}{2((d/2) + 1)C_{12}H(x_1, y_1)^{d/2}H(x_2, y_2)} < \frac{C_0c_\alpha^r c_\beta}{H(x_1, y_1)^\mu}.
$$
If we now set $C_4 = (d + 2)C_0C_{12}c_\alpha^{d/2}c_\beta$, then
$$
H(x_2, y_2) > C_4^{-1}H(x_1, y_1)^{\mu - d/2},
$$
which means that case 1 holds.
Suppose that $R(x_1/y_1, x_2/y_2) = 0$. If $r = 1$, then by definition $R(x, y) = (sx + t) - y(ux + v)$ for some integers $s, t, u, v$. Note that $sv - tu \neq 0$, for otherwise the number $\beta$ would have to be rational. Since \mbox{$R(\alpha, \beta) = 0$} and \mbox{$R(x_1/y_1, x_2/y_2) = 0$}, case 2 holds.
It remains to consider the case when $R(x_1/y_1, x_2/y_2) = 0$ and $r \geq 2$. We will prove that $H(x_1, y_1) < C$ for some positive number $C$, which depends only on $\alpha$, $\beta$, $\mu$ and $C_0$. By choosing $C_3$ so that $C_3 \geq C$, we then arrive at a contradiction.
First, we claim that by choosing $C_3$ so that
$$
C_3 \geq C_0^{1/\mu}
$$
we can ensure that $|y_1|_p \geq c_\alpha^{-1}$. This inequality clearly holds when $p$ does not divide $y_1$, so we assume that $p \mid y_1$. Since $x_1$ and $y_1$ are coprime, it must be the case that $p$ does not divide $x_1$. Suppose that $|y_1\alpha|_p \neq |x_1|_p$. Then it follows from the strong triangle inequality that
$$
|y_1\alpha - x_1|_p = \max\left\{|y_1\alpha|_p,\ |x_1|_p\right\} = \max\left\{|y_1\alpha|_p,\ 1\right\} \geq 1.
$$
Since $|y_1\alpha - x_1|_p < C_0H(x_1, y_1)^{-\mu}$, we find that $H(x_1, y_1) < C_0^{1/\mu} \leq C_3$, so we reach a contradiction. Thus, $|y_1\alpha|_p = |x_1|_p = 1$. Since $c_\alpha \alpha$ is an algebraic integer, it must be the case that $|c_\alpha \alpha|_p \leq 1$, so
$$
|y_1|_p = \left|\alpha^{-1}\right|_p \geq \left|c_\alpha\right|_p \geq c_\alpha^{-1},
$$
as claimed.
Now that we have chosen $C_3$ so that $|y_1|_p \geq c_\alpha^{-1}$, we turn our attention to the equation $R(x_1/y_1, x_2/y_2) = 0$, which implies that
$$
|x_2| = \frac{|P(x_1, y_1)|}{g} \quad \text{and} \quad |y_2| = \frac{|Q(x_1, y_1)|}{g},
$$
where $g = \gcd(P(x_1, y_1), Q(x_1, y_1))$. Consequently,
\begin{equation} \label{eq:non-archimedean_beta_minus_x2y2}
|y_2\beta - x_2|_p = \frac{|P(\alpha)Q(x_1, y_1) - Q(\alpha)P(x_1, y_1)|_p}{|gQ(\alpha)|_p}.
\end{equation}
Since $g$ is an integer, we have
\begin{equation} \label{eq:g_is_an_integer}
|g|_p \leq 1.
\end{equation}
Since $c_\alpha^r Q(\alpha)$ is an algebraic integer,
\begin{equation} \label{eq:non-archimedean_Q_alpha_bound}
|Q(\alpha)|_p \leq |c_\alpha|_p^{-r} \leq c_\alpha^r \leq c_\alpha^{d/2}.
\end{equation}
It remains to estimate $|P(\alpha)Q(x_1, y_1) - Q(\alpha)P(x_1, y_1)|_p$ from below. Before we proceed, note that for any $i$ the number
$$
c_\alpha^{2r-i-1}\left(P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)\right)
$$
is an algebraic integer, so its $p$-adic absolute value does not exceed one. Consequently,
\begin{align*}
\left|\frac{P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)}{c_\alpha^i}\right|_p
& = \left|\frac{c_\alpha^{2r-1-i}\left(P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)\right)}{c_\alpha^{2r-1}}\right|_p\\
& \leq |c_{\alpha}|_p^{-(2r-1)}\\
& \leq c_\alpha^{2r - 1}\\
& \leq c_\alpha^{d - 1}.
\end{align*}
Now, let $W = PQ' - QP'$ denote the Wronskian of $P$ and $Q$. By Taylor's Theorem and (\ref{eq:W_alpha_non-archimedean_lower_bound}),
\small
$$
\begin{array}{l}
\left|P(\alpha)Q(x_1, y_1) - Q(\alpha)P(x_1, y_1)\right|_p\\
= \left|P(\alpha)\sum\limits_{i = 0}^rD_iQ(\alpha)\left(x_1 - \alpha y_1\right)^iy_1^{r-i} - Q(\alpha)\sum\limits_{i = 0}^rD_iP(\alpha)\left(x_1 - \alpha y_1\right)^iy_1^{r-i}\right|_p\\
= \left|y_1\alpha - x_1\right|_p\left|\sum\limits_{i = 0}^{r-1}\left(P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)\right)\left(x_1 - \alpha y_1\right)^iy_1^{r-1-i}\right|_p\\
\geq \left|y_1\alpha - x_1\right|_p\left(|W(\alpha)y_1^{r - 1}|_p - |y_1\alpha - x_1|_p\max\limits_{i = 0, \ldots, r - 1}\left\{\left|\frac{P(\alpha)D_{i+1}Q(\alpha) - Q(\alpha)D_{i+1}P(\alpha)}{c_\alpha^i}\right|_p\right\}\right)\\
> \left|y_1\alpha - x_1\right|_p\left(|W(\alpha)y_1^{r - 1}|_p - \frac{C_0}{H(x_1, y_1)^\mu}c_\alpha^{2r - 1}\right)\\
\geq \left|y_1\alpha - x_1\right|_p\left(C_{14}c_\alpha^{-(r - 1)} - C_3^{-\mu}C_0c_\alpha^{2r - 1}\right),
\end{array}
\normalsize
$$
where the last inequality follows from $H(x_1, y_1) \geq C_3$, (\ref{eq:W_alpha_non-archimedean_lower_bound}) and $|y_1|_p \geq c_\alpha^{-1}$. Thus, if we choose $C_3$ so that
$$
C_3^\mu \geq 2C_0C_{14}^{-1}c_\alpha^{(3d-4)/2},
$$
then
$$
\left|P(\alpha)Q(x_1, y_1) - Q(\alpha)P(x_1, y_1)\right|_p > \left|y_1\alpha - x_1\right|_p\left(C_{14}c_\alpha^{-(r - 1)} - C_3^{-\mu}C_0c_\alpha^{2r - 1}\right) \geq \frac{C_{14}}{2c_\alpha^{(d-2)/2}}\left|y_1\alpha - x_1\right|_p.
$$
Combining this observation with (\ref{eq:non-archimedean_beta_minus_x2y2}), (\ref{eq:g_is_an_integer}) and (\ref{eq:non-archimedean_Q_alpha_bound}),
$$
\left|y_2\beta - x_2\right|_p = \frac{|P(\alpha)Q(x_1, y_1) - Q(\alpha)P(x_1, y_1)|_p}{|gQ(\alpha)|_p} > \frac{C_{14}}{2c_\alpha^{d - 1}}|y_1\alpha - x_1|_p.
$$
By Proposition \ref{prop:alternative_gap_principle},
$$
H(x_2, y_2) \geq \frac{H(x_1, y_1)^r}{C_{15}\max\{H(P), H(Q)\}^{2r^2 + 3r}} \geq \frac{H(x_1, y_1)^r}{C_{15}C_{12}^{2r^2 + 3r}},
$$
where $C_{15}$ satisfies the upper bound (\ref{eq:C15-upper-bound}). Consequently,
$$
\left|y_1\alpha - x_1\right|_p
< \frac{2c_\alpha^{d - 1}}{C_{14}}\left|y_2\beta - x_2\right|_p
< \frac{2c_\alpha^{d - 1}C_0}{C_{14}H(x_2, y_2)^\mu}
\leq \frac{2c_\alpha^{d - 1}C_0C_{12}^{(2r^2 + 3r)\mu}C_{15}^\mu}{C_{14}H(x_1, y_1)^{r\mu}}.
$$
Thus, we obtain an upper bound on $|y_1\alpha - x_1|_p$. On the other hand, by Lemma \ref{lem:p-adic_liouville} we have the lower bound (\ref{eq:p-adic_liouville_inequality}). Combining upper and lower bounds,
$$
\frac{C_7}{H(x_1, y_1)^d} \leq \left|y_1\alpha - x_1\right|_p < \frac{2c_\alpha^{d-1}C_0C_{12}^{(2r^2 + 3r)\mu}C_{15}^\mu}{C_{14}H(x_1, y_1)^{r\mu}}.
$$
Since $\mu > (d/2) + 1$ and $r \geq 2$, we have
\begin{align*}
H(x_1, y_1)
& < \left(2c_\alpha^{d-1}C_0C_7^{-1}C_{12}^{(2r^2 + 3r)\mu} C_{14}^{-1} C_{15}^\mu\right)^{1/(r\mu - d)}\\
& \leq \left(2^{d^2\mu/4} ((d/2) + 1)^{(3d^2+4d)\mu/8}c_\alpha^{d-1}C_0C_7^{-1}C_{12}^{(d^2+3d)\mu/2}C_{14}^{-1}\right)^{1/(2\mu - d)}\\
& = C.
\end{align*}
In the second-to-last inequality we used the upper bound on $C_{15}$ given in (\ref{eq:C15-upper-bound}). Thus, if we choose $C_3$ so that $C_3 \geq C$, then $H(x_1, y_1) \geq C_3 \geq C$, and so we arrive at a contradiction.
\section{The Enhanced Automorphism Group} \label{sec:automorphisms}
In this section we establish several results about the enhanced automorphism group $\Aut' |F|$ of a binary form $F$. At the end we prove Proposition \ref{prop:automorphisms}, where we explain the relation between automorphisms of $F$ and the roots of $F(x, 1)$.
\begin{lem} \label{lem:Aut_QF_is_finite}
Let $F \in \mathbb Z[x, y]$ be a binary form of degree $d \geq 3$ and nonzero discriminant $D(F)$. Then $\Aut_{\mathbb Q} |F|$ is $\GL_2(\mathbb Q)$-conjugate to one of the groups from \mbox{Table \ref{tab:finite_subgroups}}.
\end{lem}
\begin{proof}
See \cite{newman}.
\end{proof}
\begin{table}[t]
\centering
\begin{tabular}{| l | l | l | l |}
\hline
Group & Generators & Group & Generators\\
\hline\rule{0pt}{4ex}
$\bm C_1$ &
$\begin{pmatrix}1 & 0\\0 & 1\end{pmatrix}$ &
$\bm D_1$ &
$\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}$\\\rule{0pt}{5ex}
$\bm C_2$ &
$\begin{pmatrix}-1 & 0\\0 & -1\end{pmatrix}$ &
$\bm D_2$ &
$\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}, \begin{pmatrix}-1 & 0\\0 & -1\end{pmatrix}$\\\rule{0pt}{5ex}
$\bm C_3$ &
$\begin{pmatrix}0 & 1\\-1 & -1\end{pmatrix}$ &
$\bm D_3$ &
$\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}, \begin{pmatrix}0 & 1\\-1 & -1\end{pmatrix}$\\\rule{0pt}{5ex}
$\bm C_4$ &
$\begin{pmatrix}0 & 1\\-1 & 0\end{pmatrix}$ &
$\bm D_4$ &
$\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}, \begin{pmatrix}0 & 1\\-1 & 0\end{pmatrix}$\\\rule{0pt}{5ex}
$\bm C_6$ &
$\begin{pmatrix}0 & -1\\1 & 1\end{pmatrix}$ &
$\bm D_6$ &
$\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}, \begin{pmatrix}0 & 1\\-1 & 1\end{pmatrix}$\\
\hline
\end{tabular}
\caption{Representatives of equivalence classes of finite subgroups of $\GL_2(\mathbb Q)$ under $\GL_2(\mathbb Q)$-conjugation.}
\label{tab:finite_subgroups}
\end{table}
\begin{lem} \label{lem:G_is_finite}
Let $F \in \mathbb Z[x, y]$ be a binary form of degree $d \geq 3$ and nonzero discriminant $D(F)$. Let $\Aut' |F|$ be as in (\ref{eq:G}). Then $\Aut' |F| \cong \bm C_n$ or $\Aut'|F| \cong \bm D_n$, where $n \in \{1, 2, 3, 4, 6, 8, 12\}$.
\end{lem}
\begin{proof}
Note that $\Aut_{\mathbb Q} |F|$ is a subgroup of $\Aut' |F|$. Furthermore, for any $M \in \Aut' |F|$ we have $M^2 \in \Aut_{\mathbb Q}|F|$. By Lemma \ref{lem:Aut_QF_is_finite}, $\Aut_{\mathbb Q}|F|$ is finite, and so any $M \in \Aut' |F|$ has finite order. In fact, since the orders of elements in $\Aut_{\mathbb Q}|F|$ are $\{1, 2, 3, 4, 6\}$, the only possible orders of elements in $\Aut'|F|$ are $\{1, 2, 3, 4, 6, 8, 12\}$.
Next, recall a classical result that any finite subgroup of $\GL_2(\mathbb R)$ is $\GL_2(\mathbb R)$-conjugate to a finite subgroup of the orthogonal group $O_2(\mathbb R)$.
Since finite subgroups of $O_2(\mathbb R)$ correspond to rotations and reflections on a plane, we conclude that each finite subgroup of $\GL_2(\mathbb R)$, including $\Aut' |F|$, is isomorphic to either a cyclic group $\bm C_n$ of order $n$ or a dihedral group $\bm D_n$ of order $2n$.
Now suppose that $\Aut' |F|$ contains at least $25$ distinct elements $M_1, \ldots, M_{25}$. By Schur's Theorem \cite{curtis-reiner}, any finitely generated torsion subgroup of $\GL_n(\mathbb C)$ is finite. Hence $\langle M_1, \ldots, M_{25}\rangle$ is a finite subgroup of $\GL_2(\mathbb R)$, so it is isomorphic to either $\bm C_n$ or $\bm D_n$ for some $n$. In the former case we see that $n \geq 25$, while in the latter case $n \geq 13$. In both cases we obtain a contradiction, since the largest order that an element of $\Aut' |F|$ can have is $12$. Therefore $\Aut' |F|$ contains at most $24$ elements.
\end{proof}
Let us give an example of a group of the form (\ref{eq:G}) that is not a subgroup of $\GL_2(\mathbb Q)$. Consider
$$
G = \left\langle
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix},
\begin{pmatrix}
1/\sqrt 3 & 1/\sqrt 3\\
-1/\sqrt 3 & 2/\sqrt 3
\end{pmatrix}
\right\rangle.
$$
Then $G \cong \bm D_{12}$. If we choose coprime integers $a$ and $b$ so that $a \equiv 3b \pmod{10}$, then any (reciprocal) binary form
\begin{align*}
F(x, y)
& = a(x^{12} + y^{12}) -6axy(x^{10} + y^{10})\\
& + \frac{231a + 2b}{5}x^2y^2(x^8 + y^8) - (176a + 2b)x^3y^3(x^6 + y^6)\\
& + \frac{495a + 5b}{2}x^4y^4(x^4 + y^4) + 2bx^5y^5(x^2 + y^2)\\
& - \frac{1122a+29b}{5}x^6y^6
\end{align*}
will have integer coefficients and satisfy $F_M = F$ for any $M \in G$. Consequently, if $(x, y)$ is a solution to the Thue equation $F(x, y) = m$, then so are $(y, -x + y)$, $(-x + y, -x)$, $(-x, -y)$, \mbox{$(-y, x - y)$,} $(x - y, x), (y, x), (-x + y, y), (-x, -x + y), (-y, -x), (x-y, -y), (x, x-y)$. This phenomenon was observed by Stewart in \mbox{\cite[Section 6]{stewart2}} with respect to binary forms invariant under $\bm D_6$, which is a subgroup of $G$. In addition to these $12$ solutions, we have $F(x', y') = 729m$ for any $(x', y') \in \left\{(x + y, -x + 2y)\right.$, $(-x + 2y, -2x + y)$, $(-2x + y, -x - y)$, $(-x - y, x - 2y)$, $(x - 2y, 2x - y)$, $(2x - y, x + y)$, $(-x + 2y, x + y)$, $(-2x + y, -x + 2y)$, $(-x - y, -2x + y)$, $(x - 2y, -x - y)$, $(2x - y, x - 2y)$, $\left.(x + y, 2x - y)\right\}$.
\begin{prop} \label{prop:automorphisms}
Let $F(x, y) = c_dx^d + c_{d-1}x^{d-1}y + \cdots + c_0y^d \in \mathbb Z[x, y]$ be an irreducible binary form of degree $d \geq 3$. Let $\alpha_1, \ldots, \alpha_d$ be the roots of $F(x, 1)$. There exists an index $j \in \{1, \ldots, d\}$ such that
$$
\alpha_j = \frac{v\alpha_1 - u}{-t\alpha_1 + s}
$$
for some integers $s$, $t$, $u$ and $v$ if and only if the matrix
$$
M = \frac{1}{\sqrt{|sv - tu|}}
\begin{pmatrix}
s & u\\
t & v
\end{pmatrix}
$$
is an element of $\Aut'|F|$. Furthermore, if $M \in \Aut'|F|$, then $|sv - tu| = \left|\frac{F(s, t)}{c_d}\right|^{2/d}$.
\end{prop}
\begin{proof}
Suppose that there exists an index $j \in \{1, \ldots, d\}$ such that $\alpha_j = \frac{v\alpha_1 - u}{-t\alpha_1 + s}$ for some integers $s$, $t$, $u$ and $v$. Since $F(x, 1)$ is irreducible, its Galois group acts transitively on the roots $\alpha_1, \alpha_2, \ldots, \alpha_d$. Therefore,
$$
\frac{v\alpha_1 - u}{-t\alpha_1 + s}, \quad \frac{v\alpha_2 - u}{-t\alpha_2 + s}, \quad \ldots, \quad \frac{v\alpha_d - u}{-t\alpha_d + s}
$$
is a permutation of $\alpha_1, \ldots, \alpha_d$. Thus,
\begin{align*}
F(x, y)
& = c_d\prod\limits_{i = 1}^d\left(x - \frac{v\alpha_i - u}{-t\alpha_i + s}y\right)\\
& = \frac{c_d}{\prod_{i = 1}^d(s - t\alpha_i)}\prod\limits_{i = 1}^d\left((-t\alpha_i + s)x - (v\alpha_i - u)y\right)\\
& = \frac{c_d}{F(s, t)}F(sx + uy,\ tx + vy)\\
& = \pm \eta^dF(sx + uy,\ tx + vy)\\
& = \pm F_M(x, y),
\end{align*}
where $\eta = |c_d/F(s, t)|^{1/d}$ and $M = \eta\left(\begin{smallmatrix}s & u\\t & v\end{smallmatrix}\right)$. Since
$$
D(F_M) = (\det M)^{d(d - 1)}D(F)
$$
and $F_M = \pm F$, we see that $|D(F_M)| = |D(F)|$, so $|\det M| = 1$. Hence $|\eta|^2\cdot |sv - tu| = 1$, which leads us to the conclusion that $\eta = |\eta| = |sv - tu|^{-1/2}$ and $M \in \operatorname{Aut}'|F|$.
Conversely, suppose that $M = |sv - tu|^{-1/2}\left(\begin{smallmatrix}s & u\\t & v\end{smallmatrix}\right)$ is in $\Aut' |F|$. Then
\begin{align*}
\pm F(x, y) & = F_M(x, y)\\
& = \frac{c_d}{|sv - tu|^{d/2}}\prod\limits_{i = 1}^d(sx + uy - \alpha_i(tx + vy))\\
& = \frac{F(s, t)}{|sv - tu|^{d/2}}\prod\limits_{i = 1}^d\left(x - \frac{v\alpha_i - u}{-t\alpha_i + s}y\right).
\end{align*}
We see that the polynomial $F_M(x, 1)$ vanishes at $\frac{v\alpha_i - u}{-t\alpha_i + s}$ for $i = 1, \ldots, d$. Since $F_M = \pm F$, the polynomials $F_M(x, 1)$ and $F(x, 1)$ have the same roots, so there exists some index $j$ such that $\alpha_j = \frac{v\alpha_1 - u}{-t\alpha_1 + s}$. Furthermore, the leading coefficients of $F(x, 1)$ and $F_M(x, 1)$ must be equal up to sign, i.e., $|c_d| = \frac{|F(s, t)|}{|sv - tu|^{d/2}}$. Of course, this is the same as $|sv - tu| = \left|\frac{F(s, t)}{c_d}\right|^{2/d}$.
\end{proof}
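For example, if $F$ is reciprocal up to sign, that is, $F(y, x) = \pm F(x, y)$, then taking $(s, u, t, v) = (0, 1, 1, 0)$ gives $F_M = \pm F$ for $M = \left(\begin{smallmatrix}0 & 1\\1 & 0\end{smallmatrix}\right)$, so $M \in \Aut'|F|$; the corresponding relation between roots is $\alpha_j = 1/\alpha_1$ for some $j$, and indeed $|sv - tu| = 1 = \left|\frac{F(0, 1)}{c_d}\right|^{2/d}$ because $|c_0| = |c_d|$ for such forms.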
\section{Counting Primitive Solutions of Large Height to Certain Thue Inequalities} \label{sec:thue-inequality}
In this section we prove Theorem \ref{thm:thue_large}. It follows from a more general result stated in Theorem \ref{thm:count_large_approximations}, where we count approximations $x/y$ of large height to distinct algebraic numbers $\alpha_1, \ldots, \alpha_n$ such that $\mathbb Q(\alpha_i) = \mathbb Q(\alpha_1)$ for all $i = 1, 2, \ldots, n$. In order to state the main result of this section we need to introduce the notion of an \emph{orbit}. For an irrational number $\alpha$, the \emph{orbit} of $\alpha$ is the set
$$
\orb(\alpha) = \left\{\frac{v\alpha - u}{-t\alpha + s}\ \colon\ s,t,u,v \in \mathbb Z,\ sv-tu \neq 0\right\}.
$$
\begin{thm} \label{thm:count_large_approximations}
Let $K = \mathbb C$ or $\mathbb Q_p$, where $p$ is a rational prime, and denote the standard absolute value on $K$ by $|\quad|$. Let $\alpha_1 \in K$ be an algebraic number of degree $d \geq 3$ over $\mathbb Q$ and $\alpha_2, \alpha_3, \ldots, \alpha_n$ be distinct elements of $\mathbb Q(\alpha_1)$, different from $\alpha_1$, each of \mbox{degree $d$.} Let $\mu$ be such that $(d/2) + 1 < \mu < d$. Let $C_0$ be a real number such that $C_0 > (4e^A)^{-1}$, where
\begin{equation} \label{eq:A}
A = 500^2\left(\log \max\limits_{i = 1, \ldots, n}\{M(\alpha_i)\} + \frac{d}{2}\right).
\end{equation}
There exists a positive real number $C_{16}$, which depends on $\alpha_1, \alpha_2, \ldots, \alpha_n$, $\mu$ and $C_0$, with the following property. The total number of rationals $x/y$ in lowest terms that satisfy \mbox{$H(x, y) \geq C_{16}$} and
\begin{equation} \label{eq:roths_inequality}
\left|\alpha_j - \frac{x}{y}\right| < \frac{C_0}{H(x, y)^\mu}
\end{equation}
for some $j \in \{1, 2, \ldots, n\}$ is less than
$$
\gamma\left\lfloor1 + \frac{11.51 + 1.5\log d + \log \mu}{\log(\mu - d/2)}\right\rfloor,
$$
where
\begin{equation} \label{eq:gamma}
\gamma = \max\{\gamma_1, \ldots, \gamma_n\}, \quad \gamma_i = \#\{j \colon \alpha_j \in \orb(\alpha_i)\}.
\end{equation}
\end{thm}
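To get a sense of the size of this bound, take for instance $d = 3$ and $\mu = 2.6$, so that $(d/2) + 1 = 2.5 < \mu < d$. Then
$$
\left\lfloor 1 + \frac{11.51 + 1.5\log 3 + \log 2.6}{\log(2.6 - 1.5)}\right\rfloor = \left\lfloor 1 + \frac{14.11\ldots}{0.0953\ldots}\right\rfloor = 149,
$$
so in this case the number of approximations of large height counted by the theorem is less than $149\gamma$.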
Let us see why Theorem \ref{thm:thue_large} follows from Theorem \ref{thm:count_large_approximations}.
\begin{proof}[Proof of Theorem \ref{thm:thue_large}]
Let $\alpha_1, \alpha_2, \ldots, \alpha_d$ be the roots of $F(x, 1)$. Notice that, since $F(x, y)$ is irreducible, the roots of $F(1, x)$ are given by $\alpha_1^{-1}, \ldots, \alpha_d^{-1}$. Furthermore, since the field extension $\mathbb Q(\alpha)/\mathbb Q$ is Galois, we have $\mathbb Q(\alpha_i) = \mathbb Q(\alpha_1)$ for all $i = 1, \ldots, d$.
Choose $C_5$ so that
$$
C_5^{d - \mu} > \frac{2^{d-1}d^{(d-1)/2}M(F)^{d-2}m}{|D(F)|^{1/2}}.
$$
Let $(x, y)$ be a primitive solution to (\ref{eq:thue-inequality}) such that $H(x, y) \geq C_5$. Then it follows from the result of Lewis and Mahler stated in Lemma \ref{lem:lewis-mahler} that there exists an index $j \in \{1, 2, \ldots, d\}$ such that
$$
\min\left\{\left|\alpha_j - \frac{x}{y}\right|, \left|\alpha_j^{-1} - \frac{y}{x}\right|\right\} \leq \frac{2^{d-1}d^{(d-1)/2}M(F)^{d-2}m}{|D(F)|^{1/2}H(x, y)^d} < \frac{1}{H(x, y)^\mu}.
$$
Next, adjust the choice of $C_5$ so that Theorem \ref{thm:count_large_approximations} applies:
$$
C_5 \geq \max\left\{C_{16}(\alpha_1, \ldots, \alpha_d, \mu, C_0),\ C_{16}(\alpha_1^{-1}, \ldots, \alpha_d^{-1}, \mu, C_0)\right\},
$$
where $C_0 = 1$. If we let $\gamma$ be as in (\ref{eq:gamma}), then it follows from Theorem \ref{thm:count_large_approximations} that $x/y$ is one of at most
$$
2\gamma\left\lfloor1 + \frac{11.51 + 1.5\log d + \log \mu}{\log(\mu - d/2)}\right\rfloor
$$
rationals in lowest terms that satisfy either of the two inequalities
$$
\left|\alpha_j - \frac{x}{y}\right| < \frac{C_0}{H(x, y)^\mu}, \quad \left|\alpha_j^{-1} - \frac{y}{x}\right| < \frac{C_0}{H(x, y)^\mu}.
$$
It now follows from Proposition \ref{prop:automorphisms} that $\gamma \leq \frac{\#\Aut' |F|}{2}$. The division by $2$ appears due to the presence of the matrix $\left(\begin{smallmatrix}-1 & 0\\0 & -1\end{smallmatrix}\right)$ in $\Aut' |F|$, which maps $(x, y)$ to $(-x, -y)$.
\end{proof}
We conclude this section with the proof of Theorem \ref{thm:count_large_approximations}.
\begin{proof}[Proof of Theorem \ref{thm:count_large_approximations}]
Throughout the proof we will be adjusting our choice of $C_{16}$ four times. First, let $C_{16} \geq C_{11}$, where the positive real number $C_{11}$ is defined in Corollary \ref{cor:unique}. Then it follows from Corollary \ref{cor:unique} that for each $x/y$ satisfying (\ref{eq:roths_inequality}) the index $j \in \{1, 2, \ldots, n\}$ is unique.
Let $x_1/y_1, x_2/y_2, \ldots, x_\ell/y_\ell$ be the list of rational numbers in lowest terms that satisfy the following conditions.
\begin{enumerate}
\item $C_{16} \leq H(x_1, y_1) \leq H(x_2, y_2) \leq \ldots \leq H(x_\ell, y_\ell)$.
\item $\gcd(x_j, y_j) = 1$ for all $j = 1, 2, \ldots, \ell$.
\item For each $j \in \{1, 2, \ldots, \ell\}$, there exists an index $i_j \in \{1, 2, \ldots, n\}$ such that
$$
\left|\alpha_{i_j} - \frac{x_j}{y_j}\right| < \frac{C_0}{H(x_j, y_j)^\mu}.
$$
By the discussion above, this index is unique.
\item For every $j, k \in \{1, 2, \ldots, \ell\}$, if $\alpha_{i_k} \in \orb(\alpha_{i_j})$, i.e.,
$$
\alpha_{i_k} = \frac{s\alpha_{i_j} + t}{u\alpha_{i_j} + v}
$$
for some integers $s$, $t$, $u$ and $v$, then
$$
\frac{x_k}{y_k} \neq \frac{sx_j + ty_j}{ux_j + vy_j}.
$$
\end{enumerate}
Due to the fourth condition this list need not be uniquely defined. This fact, however, does not affect our estimates. The fourth property requires additional clarification: to each rational approximation in the list
$$
\frac{x_1}{y_1}, \quad \frac{x_2}{y_2}, \quad \ldots, \quad \frac{x_\ell}{y_\ell}
$$
correspond several rational approximations, which we call \emph{derived}. To be more precise, from $x_j/y_j$ one can naturally construct a (possibly bad) rational approximation to arbitrary $\alpha \in \orb(\alpha_{i_j})$ as follows. Let
$$
\alpha = \frac{s\alpha_{i_j} + t}{u\alpha_{i_j} + v} \quad \textrm{and} \quad \frac{x_j'}{y_j'} = \frac{sx_j + ty_j}{ux_j + vy_j}
$$
for some integers $s$, $t$, $u$ and $v$. Then
$$
\alpha - \frac{x_j'}{y_j'} = \frac{sv - tu}{(u\alpha_{i_j} + v)(u(x_j/y_j) + v)}\left(\alpha_{i_j} - \frac{x_j}{y_j}\right),
$$
so rational approximations to $\alpha$ and $\alpha_{i_j}$ are connected. Thus, by imposing condition (4), we insist that $x_j'/y_j'$ does not appear in the list $x_1/y_1, x_2/y_2, \ldots, x_\ell/y_\ell$.
In order to account for the presence of derived rational approximations, we introduce the value $\gamma_i$ defined in (\ref{eq:gamma}). Note that the value $\gamma_{i_j}$ is equal to the number of rational approximations derived from $x_j/y_j$, including $x_j/y_j$ itself. Consequently, if we let $N$ denote the total number of rationals satisfying the conditions specified in the hypothesis, then $N$ does not exceed $\sum_{j = 1}^\ell \gamma_{i_j}$. Therefore,
$$
N \leq \sum\limits_{j = 1}^\ell\gamma_{i_j} \leq \gamma\ell,
$$
where $\gamma$ is defined in (\ref{eq:gamma}). Thus, it remains to estimate $\ell$.
To derive an upper bound on $\ell$, we begin by applying a generalized gap principle to the ordered pair $(\alpha_{i_k}, \alpha_{i_{k+1}})$. Choose $C_{16}$ and define $C$ as follows:
$$
C_{16} \geq \max\limits_{j, k}\{C_1(\alpha_j, \alpha_k, \mu, C_0),\ C_3(\alpha_j, \alpha_k, \mu, C_0)\},
$$
$$
C = \max\limits_{j, k}\{C_2(\alpha_j, \alpha_k, \mu, C_0),\ C_4(\alpha_j, \alpha_k, \mu, C_0)\},
$$
where the positive real numbers $C_1$, $C_2$, $C_3$ and $C_4$ are taken from Theorems \ref{thm:archimedean_gap_principle} and \ref{thm:non-archimedean_gap_principle}, respectively. Note that if $K = \mathbb Q_p$, then $|y_k| \leq 1$, and so
$$
|y_k\alpha_{i_k} - x_k| = |y_k|\cdot \left|\alpha_{i_k} - \frac{x_k}{y_k}\right| < \frac{C_0}{H(x_k, y_k)^\mu}.
$$
Analogously,
$$
|y_{k + 1}\alpha_{i_{k + 1}} - x_{k + 1}| < \frac{C_0}{H(x_{k + 1}, y_{k + 1})^\mu}.
$$
It follows from Theorems \ref{thm:archimedean_gap_principle} and \ref{thm:non-archimedean_gap_principle} that, for every $k \in \{1, 2, \ldots, \ell - 1\}$,
$$
H(x_{k + 1}, y_{k + 1}) > C^{-1}H(x_k, y_k)^E,
$$
where
\begin{equation}
E = \mu - d/2.
\end{equation}
Notice that case 2 in the aforementioned theorems is impossible due to the fact that the list $x_1/y_1, \ldots, x_\ell/y_\ell$ does not contain derived rational approximations. Consequently,
\begin{align*}
\log H(x_\ell, y_\ell)
& > E\log H(x_{\ell - 1}, y_{\ell - 1}) - \log C\\
& > E^2\log H(x_{\ell - 2}, y_{\ell - 2}) - (1 + E)\log C\\
& > \cdots\\
& > E^{\ell - 1}\log H(x_1, y_1) - (1 + E + \cdots + E^{\ell - 2})\log C.
\end{align*}
Thus, we obtain the following lower bound on $\log H(x_\ell, y_\ell)$:
\begin{equation} \label{eq:qell_bound}
\log H(x_\ell, y_\ell) > E^{\ell - 1}\log H(x_1, y_1) - \frac{E^{\ell - 1} - 1}{E - 1}\log C.
\end{equation}
Next, we apply the Thue-Siegel principle from Lemma \ref{lem:thue-siegel} to the pair $(\alpha, \beta) = (\alpha_{i_1}, \alpha_{i_\ell})$. Observe that, since all $\alpha_i$'s have degree $d$, we have $\mathbb Q(\alpha_{i_1}) = \mathbb Q(\alpha_{i_\ell})$, so $\alpha_{i_{\ell}} \in \mathbb Q(\alpha_{i_1})$. For $a = 1/500$, set
$$
t = \sqrt{\frac{2}{d + a^2}}, \quad \tau = 2at.
$$
Then
$$
\lambda = \frac{2}{t - \tau} = \frac{2}{(1 - 2a)t} < 1.42 \sqrt d.
$$
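Indeed, $\lambda = \frac{\sqrt{2(d + a^2)}}{1 - 2a} = \frac{\sqrt 2}{0.996}\sqrt d\,\sqrt{1 + \frac{a^2}{d}} < 1.4199\left(1 + \frac{a^2}{2d}\right)\sqrt d < 1.42\sqrt d$, since $a = 1/500$ and $d \geq 3$.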
Further,
$$
\frac{t^2}{2 - dt^2} = \frac{1}{a^2} = 500^2,
$$
$$
A_1 = 500^2\left(\log M(\alpha_{i_1}) + \frac{d}{2}\right), \quad A_\ell = 500^2\left(\log M(\alpha_{i_\ell}) + \frac{d}{2}\right),
$$
$$
\delta = \frac{dt^2 + \tau^2 - 2}{d - 1} = \frac{6a^2}{(d + a^2)(d - 1)}.
$$
Note that
\begin{equation} \label{eq:delta_inverse_inequality}
\delta^{-1} < 41667d^2.
\end{equation}
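This follows from the identity $\delta^{-1} = \frac{(d + a^2)(d - 1)}{6a^2} = \frac{500^2}{6}\left(d + 500^{-2}\right)(d - 1) < \frac{500^2}{6}d^2 < 41667d^2$.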
We further adjust our definition of $C_{16}$ by choosing it so that
\begin{equation} \label{eq:large_C1_third_adjustment}
C_{16} \geq C_0^{\frac{1}{\mu - 1.42\sqrt d}}\left(4e^A\right)^{\frac{1.42\sqrt d}{\mu - 1.42\sqrt d}},
\end{equation}
where $A$ is defined in (\ref{eq:A}). Now with the help of inequalities $\lambda < 1.42\sqrt d$ and $H(x_j, y_j) \geq C_{16}$ we obtain
$$
\left|\alpha_{i_j} - \frac{x_j}{y_j}\right| < \frac{C_0}{H(x_j, y_j)^\mu} \leq \frac{1}{\left(4e^AH(x_j, y_j)\right)^{1.42 \sqrt d}} < \frac{1}{(4e^A H(x_j, y_j))^\lambda},
$$
so that the hypothesis of Lemma \ref{lem:thue-siegel} is satisfied. Thus, we arrive at the conclusion that
\begin{align*}
\log H(x_\ell, y_\ell)
& \leq \delta^{-1}\left(\log(4e^{A_1}) + \log H(x_1, y_1)\right) - \log(4e^{A_\ell})\\
& < 41667d^2\left(\log(4e^{A_1}) + \log H(x_1, y_1)\right),
\end{align*}
where the last inequality follows from (\ref{eq:delta_inverse_inequality}).
We combine the above upper bound on $\log H(x_\ell, y_\ell)$ with the lower bound given in (\ref{eq:qell_bound}):
$$
E^{\ell - 1}\log H(x_1, y_1) - \frac{E^{\ell - 1} - 1}{E - 1}\log C < 41667d^2\left(\log\left(4e^{A_1}\right) + \log H(x_1, y_1)\right).
$$
Reordering the terms yields
\begin{equation} \label{eq:simplifying}
\left(E^{\ell - 1} - 41667d^2\right)\log H(x_1, y_1) - \frac{E^{\ell - 1} - 1}{E - 1}\log C < 41667d^2\log\left(4e^{A_1}\right).
\end{equation}
Let us assume that
$$
\ell \geq 1 + \frac{\log(41667d^2)}{\log(\mu - d/2)},
$$
for otherwise the statement of our theorem holds. Then $E^{\ell - 1} \geq 41667d^2$, so we may use the inequality $H(x_1, y_1) \geq C_{16}$ to replace $H(x_1, y_1)$ with $C_{16}$ in (\ref{eq:simplifying}):
$$
\left(E^{\ell - 1} - 41667d^2\right)\log C_{16} - \frac{E^{\ell - 1} - 1}{E - 1}\log C < 41667d^2\log\left(4e^{A_1}\right).
$$
Since $E = \mu - d/2$,
$$
(\mu - d/2)^{\ell - 1}\left(\log C_{16} - \frac{\log C}{E - 1}\right) < 41667d^2\log C_{16} + 41667d^2\log\left(4e^{A_1}\right) + \frac{\log C}{E - 1}.
$$
We make a final adjustment to $C_{16}$ by choosing it so that
\begin{equation} \label{eq:large_C1_fourth_adjustment}
C_{16} \geq C^{2/(E - 1)}.
\end{equation}
Then
$$
(\mu - 0.5d)^{\ell - 1}\frac{\log C_{16}}{2} < \left(41667d^2 + \frac{1}{2}\right)\log C_{16} + 41667d^2\log\left(4e^{A_1}\right),
$$
leading us to a conclusion
\begin{equation} \label{eq:will-continue}
(\mu - 0.5d)^{\ell - 1} < 1 + 83334d^2\left(1 + \frac{\log\left(4e^{A_1}\right)}{\log C_{16}}\right).
\end{equation}
By our choice of $C_{16}$,
$$
\log C_{16} \geq \frac{1}{\mu - 1.42\sqrt d}\log C_0 + \frac{1.42\sqrt d}{\mu - 1.42\sqrt d}\log(4e^A),
$$
which means that
$$
\frac{\log(4e^A)}{\log C_{16}} \leq \frac{\mu - 1.42\sqrt d}{1.42\sqrt d + \log C_0/\log(4e^A)} < \frac{\mu - 1.42\sqrt d}{1.42\sqrt d - 1},
$$
where the last inequality follows from the fact that $C_0 > (4e^A)^{-1}$. Since $A_1 \leq A$, plugging the above inequality into (\ref{eq:will-continue}) yields
$$
(\mu - 0.5d)^{\ell - 1} < 1 + 83334d^2\left(1 + \frac{\mu - 1.42\sqrt d}{1.42\sqrt d - 1}\right) = 1 + 83334d^2\frac{\mu - 1}{1.42\sqrt d - 1} \leq 1 + 98896d^{3/2}\mu,
$$
where the last inequality follows from $d \geq 3$. We conclude that
$$
\ell < 1 + \frac{\log(98897d^{3/2}\mu)}{\log(\mu - d/2)} < 1 + \frac{11.51 + 1.5\log d + \log \mu}{\log(\mu - d/2)}.
$$
The result follows once we multiply the right-hand side by the constant $\gamma$ defined in (\ref{eq:gamma}).
\end{proof}
\end{document} |
\begin{document}
\title{Cohesive dynamics and brittle fracture}
\begin{abstract}
We formulate a nonlocal cohesive model for calculating the deformation state inside a cracking body. In this model a more complete set of physical properties including elastic and softening behavior is assigned to each point in the medium. We work within the small deformation setting and use the peridynamic formulation. Here strains are calculated as difference quotients. The constitutive relation is given by a nonlocal cohesive law relating force to strain. At each instant of the evolution we identify a process zone where strains lie above a threshold value. Perturbation analysis shows that jump discontinuities within the process zone can become unstable and grow. We derive an explicit inequality that shows that the size of the process zone is controlled by the ratio given by the length scale of nonlocal interaction divided by the characteristic dimension of the sample. The process zone is shown to concentrate on a set of zero volume in the limit where the length scale of nonlocal interaction vanishes with respect to the size of the domain. In this limit the dynamic evolution is seen to have bounded linear elastic energy and Griffith surface energy. The limit dynamics corresponds to the simultaneous evolution of linear elastic displacement and the fracture set across which the displacement is discontinuous. We conclude by illustrating how the approach developed here can be applied to limits of dynamics associated with other energies that $\Gamma$-converge to the Griffith fracture energy.
\end{abstract}
\begin{flushleft}
{\bf Keywords:} \,\,peridynamics, dynamic brittle fracture, fracture toughness, process zone, $\Gamma$-convergence
\end{flushleft}
\begin{flushleft}
{\bf Mathematics Subject Classification}: 34A34, 74H55, 74R10
\end{flushleft}
\pagestyle{myheadings}
\markboth{R. LIPTON}{Cohesive Dynamics and Brittle Fracture}
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{Introduction}
\label{introduction}
Dynamic brittle fracture is a multiscale phenomenon operating across a wide range of length and time scales. Contemporary approaches to brittle fracture modeling can be broadly characterized as bottom-up and top-down. Bottom-up approaches take into account the discreteness of fracture at the smallest length scales and are expressed through lattice models. This approach has provided insight into the dynamics of the fracture process \cite{6,16,17,21}. Complementary to the bottom-up approaches are top-down computational approaches using cohesive surface elements \cite{Cox}, \cite{14}, \cite{22}, \cite{Remmers}. In this formulation the details of the process zone are collapsed onto an interfacial element with a force traction law given by the cohesive zone model \cite{Barenblatt}, \cite{Dougdale}. Cohesive surfaces have been applied within the extended finite element method \cite{5}, \cite{Duarte}, \cite{18} to minimize the effects of mesh dependence on free crack paths. Higher order multi-scale cohesive surface models involving excess properties and differential momentum balance are developed in \cite{WaltonSendova}. Comparisons between different cohesive surface models are given in \cite{Falk}. More recently variational approaches to brittle fracture based on quasi-static evolutions of global minimizers of Griffith's fracture energy have been developed \cite{FrancfortMarigo}, \cite{BourdinFrancfortMarigo}, and \cite{FrancfortLarsen}. Phase field approaches have also been developed to model brittle fracture evolution from a continuum perspective \cite{BourdinFrancfortMarigo}, \cite{BourdinLarsenRichardson}, \cite{Miehe}, \cite{Hughes}, \cite{Ortiz}, \cite{Wheeler}. In the phase field approach a second field is introduced to interpolate between cracked and undamaged elastic material. The evolution of the phase field is used to capture the trajectory of the crack. A concurrent development is the emergence of the peridynamic formulation introduced in \cite{Silling1} and \cite{States}. Peridynamics is a nonlocal formulation of continuum mechanics expressed in terms of displacement differences as opposed to spatial derivatives of the displacement field. These features provide the ability to simultaneously simulate kinematics involving both smooth displacements and defect evolution. Numerical simulations based on peridynamic modeling exhibit the formation and evolution of sharp interfaces associated with defects and fracture \cite{Bobaru1}, \cite{BhattacharyaDyal}, \cite{Foster}, \cite{Bobaru2}, \cite{SillingBobaru}, \cite{SillingAscari2}, \cite{WecknerAbeyaratne}. In an independent development nonlocal formulations have been introduced for modeling the passage from discrete to continuum limits of energies for quasistatic fracture models \cite{Alicandro}, \cite{Buttazzo}, \cite{Braides}, \cite{BraidesGelli}, for smeared crack models \cite{Negri} and for image processing \cite{Gobbino1} and \cite{Gobbino3}. A complete review of contemporary methods is beyond the scope of this paper; however, the reader is referred to \cite{Agwai}, \cite{Bazant}, \cite{BelitchReview}, \cite{Bouchbinder}, \cite{BourdinFrancfortMarigo}, \cite{Braides1}, \cite{Braides2} for a more complete guide to the literature.
In this paper we formulate a nonlocal, multi-scale, cohesive continuum model for assessing the deformation state inside a cracking body. This model is expressed using the peridynamic formulation introduced in \chi^\nuite{Silling1}, \chi^\nuite{States}. Here strains are calculated as difference quotients of displacements between two points $x$ and $y$. In this approach the force between two points $x$ and $y$ is related to the strain through a nonlinear cohesive law that depends upon the magnitude and direction of the strain. The forces are initially elastic for small strains and soften beyond a critical strain. We introduce the dimensionless length scale ${\varepsilon}psilon$ given by the ratio of the length scale of nonlocal interaction to the characteristic length of the material sample $D$. Working in the new rescaled coordinates the nonlocal interactions between $x$ and its neighbors $y$ occur within a horizon of radius ${\varepsilon}psilon$ about $x$ and the characteristic length of $D$ is taken to be unity. This neighborhood of $x$ is denoted by $\mathcal{H}_{\varepsilon}psilon(x)$.
To define the potential energy we first assume the deformation $z$ is given by $z(x)=u(x)+x$ where $u$ is the displacement field. The strain between two points $x$ and $y$ inside $D$ is given by
\begin{eqnarray}
\mathcal{S}=\frac{|z(y)-z(x)|-|y-x|}{|y-x|}.
\label{bigdefstrain}
\end{eqnarray}
In this treatment we assume small deformation kinematics and the displacements are small (infinitesimal) relative to the size of the body $D$. Under this hypothesis \eqref{bigdefstrain} is linearized and the strain is given by $$\mathcal{S}=\frac{u(y)-u(x)}{|y-x|}\cdot e,$$ where $e=\frac{y-x}{|y-x|}$. Both two and three dimensional problems will be considered and the dimension is denoted by $d=2,3$.
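For the reader's convenience we record the standard linearization step behind this formula. Writing $z(y)-z(x)=(y-x)+(u(y)-u(x))$ and expanding to first order in the displacement gives
\begin{eqnarray*}
|z(y)-z(x)|=|y-x|\sqrt{1+2\frac{(u(y)-u(x))\cdot e}{|y-x|}+\frac{|u(y)-u(x)|^2}{|y-x|^2}}\approx |y-x|+(u(y)-u(x))\cdot e,
\end{eqnarray*}
so that \eqref{bigdefstrain} reduces to $\mathcal{S}\approx\big(u(y)-u(x)\big)\cdot e/|y-x|$ upon discarding terms that are quadratic in the displacement.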
The cohesive model is characterized through a nonlocal potential $$W^\epsilon(\mathcal{S},y-x),$$ associated with points $x$ and $y$. The associated energy density is obtained on integrating over $y$ for $x$ fixed and is given by
\begin{eqnarray}
{\bf{W}}^\epsilon(\mathcal{S},x)=\frac{1}{V_d}\int_{\mathcal{H}_\epsilon(x)}\,W^\epsilon(\mathcal{S},y-x)\,dy
\label{densityy}
\end{eqnarray}
where $V_d=\epsilon^d\omega_d$ and $\omega_d$ is the area ($d=2$) or volume ($d=3$) of the unit ball. The potential energy of the body is given by
\begin{eqnarray}
PD^\epsilon(u)=\int_{D}\,{\bf{W}}^\epsilon(\mathcal{S},x)\,dx
\label{the peridynamicenergy}
\end{eqnarray}
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1,yscale=1]
\draw [<->,thick] (0,2.75) -- (0,0) -- (2.5,0);
\draw [->,thick] (0,0) -- (-2.5,0);
\draw [-] (-2.5,2.15) -- (2.5,2.15);
\draw [-,thick] (0,0) to [out=0,in=-175] (2.5,2);
\draw [-,thick] (-2.5,2) to [out=-5,in=180] (0,0);
\draw (1,-0.2) -- (1, 0.2);
\draw (-1,-0.2) -- (-1, 0.2);
\node [below] at (1,-0.2) {$\frac{\overline{r}}{\sqrt{|y-x|}}$};
\node [below] at (-1,-0.2) {$-\frac{\overline{r}}{\sqrt{|y-x|}}$};
\node [right] at (2.5,0) {$\mathcal{S}$};
\node [right] at (0,2.75) {$\mathcal{W}^\epsilon(\mathcal{S},y-x)$};
\end{tikzpicture}
\caption{\bf Cohesive potential as a function of $\mathcal{S}$ for $x$ and $y$ fixed.}
\label{ConvexConcave}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1,yscale=1]
\draw [<->,thick] (0,2) -- (0,-2);
\draw [<->,thick] (-3.5,0) -- (3.5,0);
\draw [-,thick] (-3,-0.25) to [out=-25,in=180] (-1.5,-1.5) to [out=0,in=180] (1.5,1.5)
to [out=0,in=165] (3,0.25);
\draw (1.5,-0.2) -- (1.5, 0.2);
\draw (-1.5,-0.2) -- (-1.5, 0.2);
\node [below] at (1.5,-0.2) {$\frac{\overline{r}}{\sqrt{|y-x|}}$};
\node [below] at (-1.5,-0.2) {$-\frac{\overline{r}}{\sqrt{|y-x|}}$};
\node [right] at (3.5,0) {$\mathcal{S}$};
\node [right] at (0,2.0) {$\partial_{\mathcal{S}}\mathcal{W}^\epsilon(\mathcal{S},y-x)$};
\end{tikzpicture}
\caption{{\bf Cohesive relation between force and strain for $x$ and $y$ fixed.}}
\label{SofteningBond}
\end{figure}
We introduce the class of potentials associated with a cohesive force that is initially elastic and then softens after a critical strain. These potentials are of the generic form given by
\begin{eqnarray}
W^\epsilon(\mathcal{S},y-x)=|y-x|\mathcal{W}^\epsilon(\mathcal{S},y-x),
\label{potentialdensity1a}
\end{eqnarray}
where $\mathcal{W}^\epsilon(\mathcal{S},y-x)$ is the peridynamic potential per unit length associated with $x$ and $y$ given by
\begin{eqnarray}
\mathcal{W}^\epsilon(\mathcal{S},y-x)=\frac{1}{\epsilon}J^\epsilon\left(|y-x|\right)\left(\frac{1}{|y-x|}f\left(|y-x|\mathcal{S}^2\right)\right).
\label{potentialdensity2a}
\end{eqnarray}
These potentials are of a general form and are associated with potential functions $f:[0,\infty)\rightarrow\mathbb{R}$ that are positive, smooth and concave with the properties
\begin{eqnarray}
\lim_{r\rightarrow 0^+}\frac{f(r)}{r}=f'(0)>0,&&\lim_{r\rightarrow\infty}f(r)=f_\infty <\infty.
\label{properties}
\end{eqnarray}
The composition of $f$ with $|y-x|\mathcal{S}^2$ given by \eqref{potentialdensity2a} delivers the convex-concave dependence of $\mathcal{W}^\epsilon(\mathcal{S},y-x)$ on $\mathcal{S}$ for fixed values of $x$ and $y$, see Figure \ref{ConvexConcave}. Here $J^\epsilon(|y-x|)$ is used to prescribe the influence of the separation length $|y-x|$ on the force between $x$ and $y$, with $0\leq J^\epsilon(|y-x|)<M$ for $0\leq |y-x|< \epsilon$ and $J^\epsilon(|y-x|)=0$ for $\epsilon\leq |y-x|$. For fixed $x$ and $y$ the inflection point of the potential energy \eqref{potentialdensity2a} with respect to the strain $\mathcal{S}$ is given by $\overline{r}/\sqrt{|y-x|}$, where $\overline{r}$ is the inflection point of the function $r\mapsto f(r^2)$, see Figure \ref{ConvexConcave}. This choice of potential delivers an initially elastic and then softening constitutive law for the force per unit length along the direction $e$ given by
\begin{eqnarray}
\hbox{\rm force per unit length}=\partial_\mathcal{S} \mathcal{W}^\epsilon(\mathcal{S},y-x)=\frac{2}{\epsilon}\left(J^\epsilon(|y-x|)f'\left(|y-x|\mathcal{S}^2\right)\mathcal{S}\right).
\label{forcestate}
\end{eqnarray}
The force between points $y$ and $x$ begins to drop when the strain $\mathcal{S}$ exceeds the critical strain
\begin{eqnarray}
|\mathcal{S}|>\frac{\overline{r}}{\sqrt{|y-x|}}=\mathcal{S}_c,
\label{sqrtsingularity}
\end{eqnarray}
see Figure \ref{SofteningBond}. This is the same singularity strength associated with a strain concentration in the vicinity of a crack tip as in the classic theory of brittle fracture \cite{Freund}. A future goal will be to inform the cohesive model introduced here by resolving atomistic or molecular dynamics across smaller length scales.
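To make the constitutive law concrete, the following minimal Python sketch evaluates the force--strain relation \eqref{forcestate} and the critical strain \eqref{sqrtsingularity} for one purely illustrative choice of potential, $f(r)=f_\infty\big(1-e^{-f'(0)r/f_\infty}\big)$, which is smooth, concave and satisfies \eqref{properties}; the influence function, horizon radius and all numerical values are placeholders and are not quantities derived in this paper.
\begin{verbatim}
import numpy as np

# Hypothetical potential satisfying the assumptions on f:
# f(0) = 0, f'(0) > 0, f increasing and concave with f(r) -> f_inf.
f_prime_0, f_inf = 1.0, 0.5
f  = lambda r: f_inf * (1.0 - np.exp(-f_prime_0 * r / f_inf))
df = lambda r: f_prime_0 * np.exp(-f_prime_0 * r / f_inf)

eps = 0.1                                  # horizon radius (placeholder)
J = lambda r: np.where(r < eps, 1.0, 0.0)  # simple influence function

def force_per_unit_length(S, y_minus_x):
    # cohesive force-strain law between x and y for the strain S
    r = np.linalg.norm(y_minus_x)
    return (2.0 / eps) * J(r) * df(r * S**2) * S

# For this particular f the inflection point of r -> f(r^2) is available in
# closed form: r_bar = sqrt(f_inf / (2 f'(0))), so S_c = r_bar / sqrt(|y - x|).
r_bar = np.sqrt(f_inf / (2.0 * f_prime_0))
y_minus_x = np.array([0.05, 0.0])
S_c = r_bar / np.sqrt(np.linalg.norm(y_minus_x))
for S in (0.5 * S_c, S_c, 2.0 * S_c):
    print(S, force_per_unit_length(S, y_minus_x))
\end{verbatim}
For this choice of $f$ the printed values illustrate that the force grows with the strain up to $\mathcal{S}_c$ and decays beyond it, in accordance with Figure \ref{SofteningBond}.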
We apply the principle of least action to recover the cohesive equation of motion describing the state of displacement inside the body $D\subset\mathbb{R}^d$ given by
\begin{eqnarray}
\rho\partial_{tt}^2 u(t,x)=2\frac{1}{V_d}\int_{\mathcal{H}_\epsilon(x)}\,\partial_\mathcal{S} \mathcal{W}^\epsilon(\mathcal{S},y-x)\frac{y-x}{|y-x|}\,dy+b(t,x)
\label{eqofmotion}
\end{eqnarray}
where $\rho$ is the density and $b(t,x)$ is the body force. This is a well posed formulation in that existence and uniqueness (within a suitable class of evolutions) can be shown, see Section \ref{sec2} and Theorem \ref{existenceuniqueness} of Section \ref{EE}.
In this model a more complete set of physical properties including elastic and softening behavior are assigned to each point in the medium. Here each point in the domain is connected to its neighbors by a cohesive law, see Figure \ref{SofteningBond}. We define the {\em process zone} to be the collection of points $x$ inside the body $D$ associated with peridynamic neighborhoods $\mathcal{H}_\epsilon(x)$ for which the strain $\mathcal{S}$ between $x$ and $y$ exceeds a threshold value for a sufficiently large proportion of points $y$ inside $\mathcal{H}_\epsilon(x)$. Here the force vs. strain law departs from linear behavior when the strain exceeds the threshold value. The mathematically precise definition of the process zone is given in Section \ref{sec4}, see Definition \ref{processZone}. In this model the {\em fracture set} is associated with peridynamic neighborhoods $\mathcal{H}_\epsilon(x)$ with strains $|\mathcal{S}|>\mathcal{S}_c$ for which the force vs. strain law begins to soften and is defined in Section \ref{sec4}, see Definition \ref{Fractureset}.
The nonlinear elastic--softening behavior put forth in this paper is similar to the ones used in cohesive zone models \cite{Dougdale}, \cite{Barenblatt}. However, for this model the dynamics selects whether a material point lies inside or outside the process zone. The principal feature of the cohesive dynamics model introduced here is that the evolution of the process zone together with the fracture set is governed by one equation consistent with Newton's second law given by \eqref{eqofmotion}. This is a characteristic feature of peridynamic models \cite{Silling1}, \cite{States} and lattice formulations for fracture evolution \cite{6,16,17,21}.
In this paper the goal is to characterize the size of the process zone for cohesive dynamics as a function of domain size and the length scale of the nonlocal forces. The second focus is to identify properties of the distinguished limit evolution for these models in the limit of vanishing non-locality as characterized by the $\epsilon\rightarrow 0$ limit. For the model introduced here the parameter that controls the size of the process zone is given by the radius of the horizon $\epsilon$. We derive an explicit inequality showing that the size of the process zone is controlled by the horizon radius. Perturbation analysis shows that jump discontinuities within the process zone can become unstable and grow.
This analysis shows that {\em the horizon size $\epsilon$ for cohesive dynamics is a modeling parameter} that can be calibrated according to the size of the process zone obtained from experimental measurements.
Further calculation shows that the volume of the process zone goes to zero with $\epsilon$ in the limit of vanishing non-locality, $\epsilon\rightarrow 0$. Distinguished $\epsilon\rightarrow 0$ limits of cohesive evolutions are identified and are found to have both bounded linear elastic energy and Griffith surface energy. Here the limit dynamics corresponds to the simultaneous evolution of linear elastic displacement and a fracture set across which the displacement is discontinuous. Under suitable hypotheses it is found that, for points in space-time away from the fracture set, the displacement field evolves according to the linear elastic wave equation. Here the linear wave equation provides a dynamic coupling between elastic waves and the evolving fracture path inside the media. The elastic moduli, wave speed and energy release rate for the evolution are explicitly recovered from moments of the peridynamic potential energy. These features are consistent with the asymptotic behavior seen in the convergence of solutions of the Barenblatt model to the Griffith model when cohesive forces confined to a surface act over a sufficiently short range \cite{MarigoTruskinovsky},
\cite{Willis}.
Earlier work has shown that linear peridynamic formulations recover the classic linear elastic wave equation in the limit of vanishing non-locality, see \cite{EmmrichWeckner}, \cite{SillingLehoucq}. The convergence of linear peridynamics to Navier's equation in the sense of solution operators is demonstrated in \cite{MengeshaDu}. Recent work shows that analogous results can be found for dynamic problems and fully nonlinear peridynamics \cite{LiptonJElast2014} in the context of antiplane shear. There distinguished $\epsilon\rightarrow 0$ limits of cohesive evolutions are identified and are found to have both bounded linear elastic energy and Griffith surface energy. It is shown that the limiting displacement evolves according to the linear elastic wave equation away from the crack set, see \cite{LiptonJElast2014}. For large deformations, the connection between hyperelastic energies and the small horizon limits of nonlinear peridynamic energies is recently established in \cite{Bellido}. In the current paper both two and three dimensional problems involving multi-mode fracture are addressed. For these problems new methods are required to identify the existence of a limit dynamics as the length scale of nonlocal interaction $\epsilon$ goes to zero. A crucial step is to establish a suitable notion of compactness for sequences of cohesive evolutions. The approach taken here employs nonlocal Korn inequalities introduced in \cite{DuGunzbergerlehoucqmengesha}. This method is presented in Section \ref{CC}. We conclude by noting that the cohesive dynamics model introduced here does not have an irreversibility constraint and that the constitutive law \eqref{forcestate} applies at all times in the fracture evolution. However, with this caveat in mind, the nonlocal cohesive model offers new computational and analytical opportunities for understanding the effect of the process zone on fracture patterns.
In the next section we write down the Lagrangian formulation for the cohesive dynamics and apply the principle of least action to recover the equation of motion. In that section it is shown that the nonlinear-nonlocal cohesive evolution is a well posed initial boundary value problem. It is also shown that energy balance is satisfied by the cohesive dynamics. A formal stability analysis is carried out in Section \ref{sec3} showing that jump discontinuities within the process zone can become unstable and grow, see Proposition \ref{Nuccriteria}. In Section \ref{sec4} we provide a mathematically rigorous inequality explicitly showing how the volume of the process zone for the cohesive evolutions is controlled by the length scale of nonlocal interaction $\epsilon$, see Theorem \ref{epsiloncontropprocesszone}. It is shown that the process zone concentrates on a set of zero volume in the limit $\epsilon\rightarrow 0$, see Theorem \ref{bondunstable}. In Sections \ref{sec5} and \ref{sec6} we introduce suitable technical hypotheses and identify the distinguished limit of the cohesive evolutions as $\epsilon\rightarrow 0$, see Theorem \ref{LimitflowThm}. It is shown that the dynamics can be expressed in terms of displacements that satisfy the linear elastic wave equation away from the crack set, see Theorem \ref{waveequation}. These displacements are shown to have bounded bulk elastic and surface energy in the sense of Linear Elastic Fracture Mechanics (LEFM), see Theorem \ref{LEFMMThm}. In Section \ref{EightIntro} we provide the mathematical underpinnings and proofs of the theorems. In Section \ref{Ninth} we apply the approach developed here to examine limits of dynamics associated with other energies that $\Gamma$-converge to the Griffith fracture energy. As an illustrative example we examine the Ambrosio-Tortorelli \cite{AT} approximation as applied to the dynamic problem in \cite{BourdinLarsenRichardson} and \cite{LarsenOrtnerSuli}.
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{Cohesive dynamics}
\label{sec2}
We formulate the initial boundary value problem for the cohesive evolution. Since the problem is nonlocal the domain $D$ is split into a boundary layer called the constraint set $D_c$, and the interior $D_s$. To fix ideas the thickness of the boundary layer is denoted by $\alpha$ and $2\epsilon<\alpha$, where $2\epsilon$ is the diameter of nonlocal interaction, see Figure \ref{Domains}. The boundary condition for the displacement $u$ is given by $u(t,x)=0$ for $x$ in $D_c$. To incorporate nonlocal boundary conditions we introduce the space $L^2_0(D;\mathbb{R}^d)$, of displacements that are square integrable over $D$ and zero on $D_c$.
The initial conditions for the cohesive dynamics belong to $L^2_0(D;\mathbb{R}^d)$ and are given by
\begin{eqnarray}
u(x,0)=u_0(x),\hbox{ and }u_t(x,0)=v_0(x).
\label{initialconditions}
\end{eqnarray}
We will investigate the evolution of the deforming domain for general initial conditions. These can include an initially un-cracked body or one with a preexisting system of cracks. For two dimensional problems the cracks are given by a system of curves of finite total length, while for three dimensional problems the crack set is given by a system of surfaces of finite total surface area. Depending on the dimension of the problem the displacement suffers a finite jump discontinuity across each curve or surface. The initial condition is specified by a crack set $K$ and displacement $u_0$. The strain $\mathcal{E}u_0=(\nabla u_0+\nabla u_0^T)/2$ is defined off the crack set and the displacement $u_0$ can suffer jumps across $K$. Griffith's theory of fracture
asserts that the energy necessary to produce a crack $K$ is proportional to the crack length (or surface area). For Linear Elastic Fracture Mechanics (LEFM) the total energy associated with bulk elastic and surface energy is given by
\begin{eqnarray}
LEFM(u_0)=\int_{D}\left(2\mu|\mathcal{E}u_0|^2+\lambda|{\rm div}\,u_0|^2\right)\,dx+\mathcal{G}_c |K|,
\label{Gcrackenergy}
\end{eqnarray}
where $\mu$, $\lambda$ are the shear and Lam\'e moduli and $\mathcal{G}_c$ is the critical energy release rate for the material. Here $|K|$ denotes the length or surface area of the crack.
In what follows we will assume that the bulk elastic energy and surface energy of the initial displacement are bounded, as well as the initial velocity and displacement.
For future reference we describe initial data $u_0$ and $v_0$ that satisfy these conditions as {\em LEFM initial data} and we have the inequality between the peridynamic energy and the energy of Linear Elastic Fracture Mechanics given by
\begin{eqnarray}
PD^\epsilon(u_0)\leq LEFM(u_0),
\label{basicinequality}
\end{eqnarray}
when $\mu$, $\lambda$, and $\mathcal{G}_c$ are related to the nonlocal potentials according to \eqref{calibrate1} and \eqref{calibrate2}.
This inequality is established in Section \ref{CC}, see \eqref{upperboundperi}.
In what follows we write $u(t,x)$ as $u(t)$ to expedite the presentation.
The cohesive dynamics is described by the Lagrangian
\begin{eqnarray}
L^\epsilon(u(t),\partial_t u(t),t)=K(\partial_t u(t))-PD^\epsilon(u(t))+U(u(t)),
\label{Lagrangian}
\end{eqnarray}
with
\begin{eqnarray}
K(\partial_t u(t))&=&\frac{1}{2}\int_{D}\rho|\partial_t u(t,x)|^2\,dx, \hbox{ and }\nonumber\\
U(u(t))&=&\int_{D}b(t,x) u(t,x)\,dx,
\label{Components}
\end{eqnarray}
where $\rho$ is the mass density of the material and $b(t,x)$ is the body force density.
The initial conditions $u^\epsilon(0,x)=u_0(x)$ and $u^\epsilon_t(0,x)=v_0(x)$ are prescribed and the action integral for the peridynamic evolution is
\begin{eqnarray}
I^\epsilon(u)=\int_0^TL^\epsilon(u(t),\partial_t u(t),t)\,dt.
\label{Action}
\end{eqnarray}
The Euler Lagrange Equation for this system delivers the cohesive dynamics described by
\begin{eqnarray}
\rho u^\epsilon_{tt}&=&-\nabla PD^\epsilon(u^\epsilon)+b
\label{stationary}
\end{eqnarray}
where
\begin{eqnarray}
\nabla PD^\epsilon(u^\epsilon)=-\frac{2}{V_d}\int_{\mathcal{H}_\epsilon(x)}\partial_\mathcal{S} \mathcal{W}^\epsilon(\mathcal{S},y-x)\frac{y-x}{|y-x|}\,dy,
\label{GradPD}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{S}=\frac{u^\epsilon(y)-u^\epsilon(x)}{|y-x|}\cdot e.
\label{sepsilon}
\end{eqnarray}
The displacement $u^\epsilon(t,x)$ is twice differentiable in time taking values in $L^2_0(D;\mathbb{R}^d)$. The space of such functions is denoted by $C^2([0,T];L^2_0(D;\mathbb{R}^d))$. The initial value problem for the peridynamic evolution \eqref{stationary}
is seen to have a unique solution in this space, see Theorem \ref{existenceuniqueness} of Section \ref{EE}. The cohesive evolution $u^\epsilon(x,t)$ is uniformly bounded in the mean square norm over bounded time intervals $0<t<T$, i.e.,
\begin{eqnarray}
\max_{0<t<T}\left\{\Vert u^\epsilon(x,t)\Vert_{L^2(D;\mathbb{R}^d)}^2\right\}<C.
\label{bounds}
\end{eqnarray}
Here $\Vert u^\epsilon(x,t)\Vert_{L^2(D;\mathbb{R}^d)}=\left(\int_D|u^\epsilon(x,t)|^2\,dx\right)^{1/2}$ and the upper bound $C$ is independent of $\epsilon$ and depends only on the initial conditions and body force applied up to time $T$, see Section \ref{GKP}.
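Although no numerical scheme is developed in this paper, the structure of the initial value problem \eqref{stationary} can be illustrated with a short and purely schematic one dimensional Python sketch: nodes on a regular grid interact through the cohesive force within the horizon, the constraint layer $D_c$ is held at zero displacement, and time integration is performed with an explicit velocity--Verlet step. All material data, the potential and the loading are placeholders rather than values used elsewhere in the paper.
\begin{verbatim}
import numpy as np

# Schematic 1D analogue of the cohesive equation of motion (placeholder data).
rho, eps, h, dt, alpha = 1.0, 0.05, 0.01, 1e-4, 0.15
x = np.arange(0.0, 1.0 + h / 2, h)            # nodes of D = [0, 1]
n = x.size
interior = (x > alpha) & (x < 1.0 - alpha)    # D_s; the layer D_c is constrained
f_prime_0, f_inf = 1.0, 0.5
df = lambda r: f_prime_0 * np.exp(-f_prime_0 * r / f_inf)

pairs = [(i, j) for i in range(n) for j in range(n)
         if 0.0 < abs(x[j] - x[i]) <= eps]    # bonds within the horizon

def acceleration(u, b):
    a = np.zeros(n)
    V = 2.0 * eps                             # length of the 1D horizon
    for i, j in pairs:
        r = abs(x[j] - x[i])
        e = np.sign(x[j] - x[i])
        S = (u[j] - u[i]) / r * e             # strain of the bond
        a[i] += (2.0 / V) * (2.0 / eps) * df(r * S**2) * S * e * h
    return (a + b) / rho

u = 0.001 * np.sin(np.pi * x) * interior      # small initial displacement
v = np.zeros(n)
b = np.zeros(n)
for step in range(200):                       # explicit velocity-Verlet stepping
    a0 = acceleration(u, b)
    u += dt * v + 0.5 * dt**2 * a0
    u[~interior] = 0.0                        # enforce u = 0 on the constraint set
    v += 0.5 * dt * (a0 + acceleration(u, b))
    v[~interior] = 0.0
print("max |u| after 200 steps:", np.abs(u).max())
\end{verbatim}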
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1,yscale=1]
\draw [<->,thick] (-1.01,2.2) -- (-1.051,2.75);
\draw [-,thick] (-1.25,2.75) to [out=0,in=120] (1.5,0) to [out=300,in=10] (1.0,-2.0) to [out=200,in=180](-1.25, 2.75);
\draw [-,thick] (-1.0,2.2) to [out=0,in=120] (1.2,0) to [out=300,in=10] (0.8,-1.6) to [out=200,in=180](-1.0, 2.2);
\node [right] at (1.5,0) {$D$};
\node [right] at (0,0) {$D_s$};
\node [right] at (-1.0,2.4) {$D_c$};
\node [left] at (-1.10,2.4) {$\alpha$};
\end{tikzpicture}
\caption{{\bf Domain $D=D_c\cup D_s$.}}
\label{Domains}
\end{figure}
The cohesive evolution has the following properties that are established in Section \ref{GKP}. The evolution has uniformly bounded kinetic and elastic potential energy
\begin{theorem}
\label{Gronwall}
{\rm \bf Bounds on kinetic and potential energy for cohesive dynamics}\\
There exists a positive constant $C$ depending only on $T$ and independent of $\epsilon$ for which
\begin{eqnarray}
\sup_{0\leq t\leq T}\left\{PD^{\epsilon}(u^{\epsilon}(t))+\frac{\rho}{2}\Vert u_t^{\epsilon}(t)\Vert^2_{L^2(D;\mathbb{R}^d)}\right\}\leq C.
\label{boundenergy}
\end{eqnarray}
\end{theorem}
The evolution is uniformly continuous in time as measured by the mean square norm.
\begin{theorem}{\rm \bf Continuous cohesive evolution in mean square norm}\\
There is a positive constant $K$ independent of $t_2 < t_1$ in $[0,T]$ and index $\epsilon$ for which
\begin{eqnarray}
\Vert u^{\epsilon}(t_1)-u^{\epsilon}(t_2)\Vert_{L^2(D;\mathbb{R}^d)}\leq K |t_1-t_2|.
\label{holderest}
\end{eqnarray}
\label{holdercont}
\end{theorem}
The evolution satisfies energy balance. The total energy of the cohesive evolution at time $t$ is given by
\begin{eqnarray}
\mathcal{EPD}^\epsilon(t,u^\epsilon(t))=\frac{\rho}{2}\Vert u^\epsilon_t(t)\Vert^2_{L^2(D;\mathbb{R}^d)}+PD^\epsilon(u^\epsilon(t))-\int_{D}b(t)\cdot u^\epsilon(t)\,dx
\label{energyt}
\end{eqnarray}
and the total energy of the system at time $t=0$ is
\begin{eqnarray}
\mathcal{EPD}^\epsilon(0,u^\epsilon(0))=\frac{\rho}{2}\Vert v_0\Vert^2_{L^2(D;\mathbb{R}^d)}+PD^\epsilon(u_0)-\int_{D}b(0)\cdot u_0\,dx.
\label{energy0}
\end{eqnarray}
The cohesive dynamics is seen to satisfy energy balance at every instant of the evolution.
\begin{theorem}{\rm \bf Energy balance for cohesive dynamics}\\
\label{Ebalance}
\begin{eqnarray}
\mathcal{EPD}^\epsilon(t,u^\epsilon(t))=\mathcal{EPD}^\epsilon(0,u^\epsilon(0))-\int_0^t\int_{D} b_t(\tau)\cdot u^\epsilon(\tau)\,dx\,d\tau.\label{BalanceEnergy}
\end{eqnarray}
\end{theorem}
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{Dynamic instability and fracture nucleation}
\label{sec3}
In this section we present a fracture nucleation condition that arises from the unstable force law \eqref{forcestate}. This condition is manifested as a dynamic instability. In the following companion section we investigate the localization of dynamic instability as $\epsilon_k\rightarrow 0$ and define the notion of process zone for the cohesive evolution.
Fracture nucleation conditions can be viewed as instabilities and have been identified for peridynamic evolutions in \cite{SillingWecknerBobaru}. Fracture nucleation criteria formulated as instabilities for one dimensional peridynamic bars are developed in \cite{WecknerAbeyaratne}. In this treatment we define a source for crack nucleation as a jump discontinuity in the displacement field that can become unstable and grow in time. Here we establish a direct link between the growth of jump discontinuities and the appearance of strain concentrations inside the deforming body.
We proceed with a formal perturbation analysis and consider a time independent body force density $b$ and a smooth equilibrium solution $u$ of \eqref{stationary}. Now perturb $u$ in the neighborhood of a point $x$ by adding a piecewise smooth discontinuity denoted by the vector field $\delta$. The perturbation takes the value zero on one side of a plane with normal vector $\nu$ passing through $x$ and on the other side of the plane takes the value $\delta=\overline{u}s(t)$. Here $s(t)$ is a scalar function of time and $\overline{u}$ is a constant vector. Consider the neighborhood $\mathcal{H}_\epsilon(x)$, then $\delta(y)=0$ for $(y-x)\cdot\nu<0$ and $\delta(y)=\overline{u}s(t)$ for $(y-x)\cdot\nu\geq 0$, see Figure \ref{plane}. The half space on the side of the plane for which $(y-x)\cdot\nu<0$ is denoted by $E^-_{\nu}$.
Write $u^p=u+\delta$ and assume
\begin{eqnarray}
\rho u^p_{tt}&=&-\nabla PD^\epsilon(u^p)+b.\label{stationarydiff1}
\end{eqnarray}
We regard $s(t)$ as a small perturbation and expand the integrand of $\nabla PD^\epsilon(u^p)$ in a Taylor series to recover the linearized evolution equation for the jump $s=s(t)$. The evolution equation is given by
\begin{eqnarray}
\rho s_{tt}\overline{u}=\mathcal{A}_{\nu}(x)\overline{u}s,
\label{pertevolution}
\end{eqnarray}
where the stability matrix $\mathcal{A}_{\nu}(x)$ is a $d\times d$ symmetric matrix with real eigenvalues and is defined by
\begin{eqnarray}
\mathcal{A}_{\nu}(x)&=&-\frac{2}{\epsilon V_d}\left\{\int_{\mathcal{H}_\epsilon(x)\cap E^-_{\nu}}\frac{1}{|y-x|}\partial^2_{\mathcal{S}}\mathcal{W}^\epsilon(\mathcal{S},y-x)\frac{y-x}{|y-x|}\otimes\frac{y-x}{|y-x|}dy\right\},
\label{instabilitymatrix}
\end{eqnarray}
and $$\mathcal{S}=\mathcal{S}(y,x)=\left(\frac{u(y)-u(x)}{|y-x|}\right)\cdot\frac{y-x}{|y-x|}.$$
Calculation shows that
\begin{eqnarray}
\partial^2_{\mathcal{S}}\mathcal{W}^\epsilon(\mathcal{S},y-x)=\frac{2}{\epsilon}J^\epsilon(|y-x|)\left(f'\left(|y-x|\mathcal{S}^2\right)+2f''\left(|y-x|\mathcal{S}^2\right)|y-x|\mathcal{S}^2\right),\label{expand}
\end{eqnarray}
where $f'(|y-x|\mathcal{S}^2)>0$ and $f''(|y-x|\mathcal{S}^2)<0$. On writing
\begin{eqnarray}
\mathcal{S}_c=\frac{\overline{r}}{\sqrt{|y-x|}}
\label{critstrain}
\end{eqnarray}
we have that
\begin{eqnarray}
\partial_\mathcal{S}^2\mathcal{W}^\epsilon(\mathcal{S},y)>0\hbox{ for }|\mathcal{S}(y,x)|<\mathcal{S}_c,
\label{loss1}
\end{eqnarray}
and
\begin{eqnarray}
\partial_\mathcal{S}^2\mathcal{W}^\epsilon(\mathcal{S},y)<0\hbox{ for }|\mathcal{S}(y,x)|>\mathcal{S}_c.
\label{loss2}
\end{eqnarray}
Here $\overline{r}$ is the inflection point of the function $r\mapsto f(r^2)$ and is the root of the equation
\begin{eqnarray}
f'(r)+2{r}f''({r})=0.
\label{rootroverline}
\end{eqnarray}
Note that the critical strain $\mathcal{S}_c$ for which the cohesive force between a pair of points $y$ and $x$ begins to soften is akin to the square root singularity seen at the crack tip in classical brittle fracture mechanics.
For eigenvectors $\overline{u}$ in the eigenspace associated with positive eigenvalues $\lambda$ of $\mathcal{A}_\nu(x)$ one has
\begin{eqnarray}
\rho\partial^2_{tt}s(t)=\lambda s(t)
\label{stabeq}
\end{eqnarray}
and the perturbation $s(t)$ can grow exponentially. Observe from \eqref{loss2} that the quadratic form
\begin{eqnarray}
\mathcal{A}_{\nu}(x)\overline{w}\cdot\overline{w}=-\frac{2}{\epsilon V_d}\left\{\int_{\mathcal{H}_\epsilon(x)\cap E^-_{\nu}}\frac{1}{|y-x|}\partial^2_{\mathcal{S}}\mathcal{W}^\epsilon(\mathcal{S},y-x)(\frac{y-x}{|y-x|}\cdot\overline{w})^2dy\right\}
\label{quadformunstable}
\end{eqnarray}
will have at least one positive eigenvalue provided a sufficiently large proportion of bonds $y-x$ inside the horizon have strains satisfying
\begin{eqnarray}
|\mathcal{S}(x,y)|>\mathcal{S}_c
\label{exceed}
\end{eqnarray}
for which the cohesive force is in the unstable phase. For this case we see that the jump can grow exponentially. The key feature here is that dynamic instability is explicitly linked to strain concentrations in this cohesive model as is seen from \eqref{loss2} together with \eqref{quadformunstable}. Collecting results we have the following proposition.
\begin{proposition}{\em \bf Fracture nucleation condition for cohesive dynamics}\\
\label{Nuccriteria}
A condition for crack nucleation at a point $x$ is that there is at least one direction $\nu$ for which $\mathcal{A}_{\nu}(x)$ has at least one positive eigenvalue. This occurs if there is a square root strain concentration $|\mathcal{S}(y,x)|>\mathcal{S}_c$ over a sufficiently large proportion of cohesive bonds inside the peridynamic horizon.
\end{proposition}
\noindent Proposition \ref{Nuccriteria} together with (\ref{loss2}) provides the explicit link between dynamic instability and the critical strain where the cohesive law begins to soften.
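As an illustration of Proposition \ref{Nuccriteria}, the following Python sketch assembles the matrix $\mathcal{A}_{\nu}(x)$ of \eqref{instabilitymatrix} by midpoint quadrature over the half disk $\mathcal{H}_\epsilon(x)\cap E^-_{\nu}$ in two dimensions, for a hypothetical uniform strain field $\mathcal{S}(y,x)=\mathcal{S}_0$ and the same illustrative potential used in the earlier sketch; for this example sub-critical strains produce negative eigenvalues while super-critical strains produce positive ones.
\begin{verbatim}
import numpy as np

# Placeholder material data (same illustrative potential as before).
f_prime_0, f_inf, eps = 1.0, 0.5, 0.1
df  = lambda r: f_prime_0 * np.exp(-f_prime_0 * r / f_inf)
d2f = lambda r: -(f_prime_0**2 / f_inf) * np.exp(-f_prime_0 * r / f_inf)

def stability_matrix(S0, nu, m=200):
    # midpoint quadrature over H_eps(x) \cap E^-_nu for d = 2
    V = np.pi * eps**2                         # V_d for d = 2
    h = 2.0 * eps / m
    pts = np.arange(-eps + h / 2, eps, h)
    A = np.zeros((2, 2))
    for px in pts:
        for py in pts:
            xi = np.array([px, py])
            r = np.linalg.norm(xi)
            if r == 0.0 or r > eps or np.dot(xi, nu) >= 0.0:
                continue                       # keep only the half disk E^-_nu
            e = xi / r
            arg = r * S0**2
            d2W = (2.0 / eps) * (df(arg) + 2.0 * d2f(arg) * arg)  # J = 1 here
            A += -(2.0 / (eps * V)) * (d2W / r) * np.outer(e, e) * h * h
    return A

nu = np.array([1.0, 0.0])
for S0 in (0.5, 5.0):                          # sub- and super-critical strains
    print("S0 =", S0, "eigenvalues:", np.linalg.eigvalsh(stability_matrix(S0, nu)))
\end{verbatim}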
\begin{figure}
\centering
\begin{tikzpicture}[xscale=1,yscale=1]
\draw [-,thick] (0,2.25) -- (0,-2.25);
\draw [->,thick] (0,0) -- (1,0);
\draw [->,thick] (0,0) -- (1.5,-1.0);
\draw [ultra thick] (0,0) circle [radius=2.25];
\node [left] at (0,0) {$x$};
\node [above] at (1,0.0) {$\nu$};
\node [above] at (1.65,-0.80) {$y-x$};
\node [right] at (0.2,0.9) {$\delta=\overline{u}s(t)$};
\node [left] at (-0.4,0.9) {$\delta=0$};
\node [right] at (-0.95,1.7) {$E^-_{\nu}$};
\end{tikzpicture}
\caption{{\bf Jump discontinuity.}}
\label{plane}
\end{figure}
More generally we may postulate a condition for the direction along which the opposite faces of a nucleating fissure are oriented and the direction of the displacement jump across it. Recall that two symmetric matrices $A$ and $B$ satisfy $A\geq B$ in the sense of quadratic forms if $A\overline{w}\cdot\overline{w}\geq B\overline{w}\cdot\overline{w}$ for all $\overline{w}$ in $\mathbb{R}^d$. We say that a matrix $A$ is the maximum of a collection of symmetric matrices if $A\geq B$ for all matrices $B$ in the collection.
We postulate that the faces of the nucleating fissure are perpendicular to the direction $\nu^*$ associated with the matrix $\mathcal{A}_{\nu^*}(x)$ for which
\begin{eqnarray}
\mathcal{A}_{\nu^*}(x)=\max\left\{\mathcal{A}_{\nu}(x);\, \hbox{over all directions } \nu\hbox{ such that } \mathcal{A}_{\nu}(x) \hbox{ has a positive eigenvalue}\right\},
\label{bestdirctionforgrowth}
\end{eqnarray}
and that the orientation of the jump in displacement across opposite sides of the fissure lies in the eigenspace associated with the largest positive eigenvalue of $\mathcal{A}_{\nu^*}$, {\em i.e., the fissure is oriented along the most unstable orientation and the displacement jump across the nucleating fissure is along the most unstable direction.}
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{The process zone for cohesive dynamics and its localization in the small horizon limit}
\label{sec4}
In this section it is shown that the collection of centroids of peridynamic neighborhoods with strain exceeding a prescribed threshold concentrates on sets with zero volume in the limit of vanishing non-locality. In what follows we probe the dynamics to obtain mathematically rigorous and explicit estimates on the size of the process zone in terms of the radius of the peridynamic horizon $0<\epsilon<1$.
The continuity of the displacement inside the neighborhood $\mathcal{H}_\epsilon(x)$ is measured quantitatively by
\begin{eqnarray}
|u(y)-u(x)|\leq\underline{k}\,|y-x|^\alpha, \hbox{ for $y\in \mathcal{H}_\epsilon(x)$},
\label{moduluscont}
\end{eqnarray}
with $0<\underline{k}$ and $0\leq\alpha\leq 1$. In what follows we focus on the reduction of continuity measured quantitatively by
\begin{eqnarray}
|u(y)-u(x)|>\underline{k}\,|y-x|^\alpha \hbox{ for $y\in \mathcal{H}_\epsilon(x)$}.
\label{modulusdiscont}
\end{eqnarray}
Observe that when \eqref{modulusdiscont} holds with $\alpha=1/2$ and $\underline{k}=\overline{r}$ then $|\mathcal{S}(y,x)|>\mathcal{S}_c$ and there is softening in the cohesive force-strain behavior given by \eqref{forcestate}.
We now consider solutions $u^\epsilon$ of \eqref{stationary} and define a mathematical notion of process zone based on the strain exceeding threshold values associated with \eqref{modulusdiscont}.
The process zone is best described in terms of the basic unit of peridynamic interaction: the peridynamic neighborhoods $\mathcal{H}_\epsilon(x)$ of radius $\epsilon>0$ with centers $x\in D$. We fix a choice of $\underline{k}$ and $\alpha$ belonging to the intervals $0< \underline{k}\leq \overline{r}$ and $1/2\leq \alpha<1$. The strain between $x$ and a point $y$ inside the neighborhood is denoted by $\mathcal{S}^\epsilon(y,x)$.
The collection of points $y$ inside $\mathcal{H}_\epsilon(x)$ for which the strain $|\mathcal{S}^\epsilon(y,x)|$ exceeds the threshold function $\underline{k}\,|y-x|^{\alpha-1}$ is denoted by
$\{y\hbox{ $\in$ }\mathcal{H}_\epsilon(x):\, |\mathcal{S}^\epsilon(y,x)|>\underline{k}\,|y-x|^{\alpha-1}\}$. Note for $0<\underline{k}<\overline{r}$ and $1/2<\alpha<1$ that
\begin{eqnarray}
&&\left\{y\hbox{ $\in$ }\mathcal{H}_\epsilon(x):\, |\mathcal{S}^\epsilon(y,x)|>\mathcal{S}_c\right\}\subset\{y\hbox{ $\in$ }\mathcal{H}_\epsilon(x):\, |\mathcal{S}^\epsilon(y,x)|>\frac{\underline{k}}{|y-x|^{1-\alpha}}\}.\label{nonlipshitz}
\end{eqnarray}
The fraction of points inside the neighborhood $\mathcal{H}_\epsilon(x)$ with strains exceeding the threshold is written
\begin{eqnarray}
P\left(\{y\hbox{ in }\mathcal{H}_\epsilon(x):\, |\mathcal{S}^\epsilon(y,x)|>\underline{k}\,|y-x|^{\alpha-1}\}\right),\label{weight}
\end{eqnarray}
where the weighted volume fraction for any subset $B$ of $\mathcal{H}_\epsilon(x)$ is defined as
\begin{eqnarray}
P(B)=\frac{1}{\epsilon^d m}\int_{B}\,(|y-x|/\epsilon)J(\vert y-x\vert/\epsilon)\,dy,
\label{weightdefined}
\end{eqnarray}
with normalization constant
\begin{eqnarray}
m=\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)\,d\xi
\label{normalize}
\end{eqnarray}
chosen so that $P(\mathcal{H}_{\varepsilon}psilon(x))=1$.
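The weighted volume fraction \eqref{weightdefined} is straightforward to approximate. The Python sketch below uses Monte Carlo sampling of the neighborhood, with an illustrative influence function and a hypothetical scalar displacement that jumps across the plane $x_1=0$, to estimate the weighted fraction of points whose displacement difference exceeds the threshold in \eqref{modulusdiscont}; this is the kind of quantity entering the definition of the process zone given below. All numerical values are placeholders.
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of the weighted volume fraction P (placeholder data, d = 2).
rng = np.random.default_rng(0)
eps = 0.1
J = lambda s: np.where(s < 1.0, 1.0 - s, 0.0)   # example influence function on [0,1)

def P(indicator, x, n=200000):
    # weighted fraction of points y in H_eps(x) where indicator(y, |y-x|) holds
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = eps * np.sqrt(rng.uniform(0.0, 1.0, n)) # uniform sampling of the disk
    y = x + np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    w = (r / eps) * J(r / eps)                  # weight (|y-x|/eps) J(|y-x|/eps)
    return np.sum(w * indicator(y, r)) / np.sum(w)   # normalization by m cancels

jump, k_bar, alpha = 0.01, 0.05, 0.5
u = lambda y: np.where(y[:, 0] > 0.0, jump, 0.0)     # displacement with a jump
x = np.zeros(2)                                      # here u(x) = 0
threshold = lambda y, r: np.abs(u(y) - 0.0) > k_bar * r**alpha
print("weighted fraction above threshold:", P(threshold, x))
\end{verbatim}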
\begin{definition}{\bf Process Zone.}
\label{processZone}
Fix a volume fraction $0<\overline{\theta}\leq 1$, $0<\underline{k}\leq\overline{r}$, and $1/2\leq\alpha<1$ and at each time $t$ in the interval $0\leq t\leq T$, define the process zone $PZ^\epsilon(\underline{k},\alpha,\overline{\theta},t)$ to be the collection of centers of peridynamic neighborhoods for which the portion of points $y$ with strain $\mathcal{S}^\epsilon(t,y,x)$ exceeding the threshold $\underline{k}\,|y-x|^{\alpha-1}$ is greater than $\overline{\theta}$, i.e., $P\left(\{y\hbox{ in }\mathcal{H}_\epsilon(x):\, |\mathcal{S}^\epsilon(t,y,x)|>\underline{k}\,|y-x|^{\alpha-1}\}\right)>\overline{\theta}$.
\end{definition}
The fracture set is defined to be the process zone for which strains exceed the threshold $\mathcal{S}_c$ and the force vs. strain curve begins to soften.
\begin{definition}{\bf Fracture Set.}
\label{Fractureset}
The fracture set is defined to be the process zone associated with the values $\overline{\theta}=1/2$, $\underline{k}=\overline{r}$, and $\alpha=1/2$ at each time $t$ in the interval $0\leq t\leq T$, and is given by $PZ^\epsilon(\overline{r},1/2,1/2,t)$: the collection of centers of peridynamic neighborhoods for which the portion of points $y$ with strain $\mathcal{S}^\epsilon(t,y,x)$ exceeding the threshold $\mathcal{S}_c$ is greater than $1/2$, i.e., $P\left(\{y\hbox{ in }\mathcal{H}_\epsilon(x):\, |\mathcal{S}^\epsilon(t,y,x)|>\mathcal{S}_c\}\right)>1/2$.
\end{definition}
\noindent It is clear from the definition that the fracture set defined for this model contains the set of jump discontinuities for the displacement. The definition of the fracture set given here is different from the usual one, which collapses material damage onto a surface across which the displacement jumps.
It follows from Proposition \ref{Nuccriteria} that the process zone contains peridynamic neighborhoods associated with softening cohesive forces. Within this zone pre-existing jump discontinuities in the displacement field can grow.
\begin{remark}
Here we have described a range of process zones depending upon the choice of $\alpha$, $\underline{k}$ and $\overline{\theta}$. In what follows we show that for any choice of $\alpha$ in $1/2\leq \alpha <1$, $\underline{k}$ in $0<\underline{k}\leq\overline{r}$, and $0<\overline{\theta}\leq 1$ the volume of the process zone is explicitly controlled by the radius of the peridynamic horizon $\epsilon$.
\end{remark}
We consider problem formulations in two and three dimensions and the volume or area of a set is given by the $d$ dimensional Lebesgue measure denoted by $\mathcal{L}^d$, for $d=2,3$.
We let
\begin{eqnarray}
\label{upbdconst}
C(t)= \left( (2LEFM(u_0)+{\rho}\Vert v_0\Vert_{L^2(D;\mathbb{R}^d)}+1)^{1/2}+\sqrt{\rho^{-1}}\int_0^t\Vert b(\tau)\Vert_{L^2(D;\mathbb{R}^d)}\,d\tau\right)^2-1,
\end{eqnarray}
and note that $C(t)\leq C(T)$ for $t<T$.
We now give the following bound on the size of the process zone.
\begin{theorem} {\bf Dependence of the process zone on the radius of the peridynamic horizon}
\label{epsiloncontropprocesszone}
\begin{eqnarray}
\mathcal{L}^d\left(PZ^\epsilon(\underline{k},\alpha,\overline{\theta},t) \right)\leq \frac{\epsilon^{1-\beta}}{\overline{\theta}\underline{k}^2(f'(0)+o(\epsilon^\beta))}\times \frac{C(t)}{2m},
\label{controlbyepsilon}
\end{eqnarray}
where $0\leq\beta<1$ and $\beta=2\alpha-1$ and $0\leq t\leq T$.
\end{theorem}
\noindent Theorem \ref{epsiloncontropprocesszone} explicitly shows that the size of the process zone is controlled by the radius $\epsilon$ of the peridynamic horizon, uniformly in time. This theorem is proved in Section \ref{proofbondunstable}.
\begin{remark}
\label{ModelParameter}
This analysis shows that {\em the horizon size $\epsilon$ for cohesive dynamics is a modeling parameter} that may be calibrated according to the size of the process zone obtained from experimental measurements.
\end{remark}
Next we show how the process zone localizes and concentrates on sets with zero volume in the small horizon limit. To proceed choose $\delta>0$ and consider the sequence of solutions $u^{\epsilon_k}(t,x)$ to the cohesive dynamics for a family of radii $\epsilon_k=\frac{1}{2^k}$, $k=1,\ldots$. The set of centers $x$ of neighborhoods $\mathcal{H}_{\epsilon_k}(x)$ that belong to at least one of the process zones $PZ^{\epsilon_k}(\underline{k},\alpha,\overline{\theta},t)$ for some $\epsilon_k<\delta$ at time $t$ is denoted by $CPZ^\delta(\underline{k},\alpha,\overline{\theta},t)$. Let $CPZ^0(\underline{k},\alpha,\overline{\theta},t)=\cap_{0<\delta}CPZ^\delta(\underline{k},\alpha,\overline{\theta},t)$ be the collection of centers of neighborhoods such that for every $\delta>0$ they belong to a process zone $PZ^{\epsilon_k}(\underline{k},\alpha,\overline{\theta},t)$ for some $\epsilon_k<\delta$. The localization and concentration of the process zone is formulated in the following theorem.
\begin{theorem}{\rm\bf Localization of process zone in the small horizon limit.}\\
\label{bondunstable}
The collection of process zones $CPZ^\delta(\underline{k},\alpha,\overline{\theta},t)$ is decreasing with $\delta\rightarrow 0$ and there is a positive constant $K$ independent of $t$ and $\delta$ for which
\begin{eqnarray}
&& \mathcal{L}^d\left(CPZ^\delta(\underline{k},\alpha,\overline{\theta},t)\right)\leq K{\delta^{1-\beta}}, \hbox{ for, } 0\leq t\leq T,\,\,0<\beta=2\alpha-1\leq 1, \hbox{ with }\nonumber\\
&& \mathcal{L}^d\left(CPZ^0(\underline{k},\alpha,\overline{\theta},t)\right)=\lim_{\delta\rightarrow 0}\mathcal{L}^d\left(CPZ^\delta(\underline{k},\alpha,\overline{\theta},t)\right)=0.
\label{limdelta}
\end{eqnarray}
For any choice of $0<\overline{\theta}\leq1$ the collection of centers of neighborhoods for which there exists a positive $\delta$ such that \begin{eqnarray}
P\left(\{y\hbox{ in }\mathcal{H}_{\epsilon_k}(x):\, |\mathcal{S}^{\epsilon_k}(t,y,x)|\leq\underline{k}\,|y-x|^{\alpha-1}\}\right)\geq1-\overline{\theta},
\label{contolledstrain}
\end{eqnarray}
for all $\epsilon_k<\delta$ is a set of full measure for every choice of $0<\underline{k}\leq\overline{r}$ and $1/2\leq\alpha<1$, i.e., $\mathcal{L}^d(D)=\mathcal{L}^d(D\setminus CPZ^0(\underline{k},\alpha,\overline{\theta},t))$.
\end{theorem}
\begin{remark}
\label{nearlipschitz}
The theorem shows that the process zone concentrates on a set of zero volume in the limit of vanishing peridynamic horizon. Note that \eqref{contolledstrain} holds for any $0<\overline{\theta}\leq 1$. On choosing $\overline{\theta}\approx 0$ and $\alpha\approx 1$ it is evident that the modulus of continuity for the displacement field is close to Lipschitz outside of the process zone in the limit of vanishing nonlocality, $\epsilon_k\rightarrow 0$. The concentration of the process zone is inevitable for the cohesive model and is directly linked to the constraint on the energy budget associated with the cohesive dynamics as described by Theorem \ref{Gronwall}. This bound forces the localization of the process zone as shown in Section \ref{proofbondunstable}.
\end{remark}
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{The small horizon limit of cohesive dynamics}
\label{sec5}
In this section we identify the distinguished small horizon $\epsilon\rightarrow 0$ limit for cohesive dynamics. It is shown here that the limit dynamics has bounded bulk linear elastic energy and Griffith surface energy characterized by the shear modulus $\mu$, Lam\'e modulus $\lambda$, and energy release rate $\mathcal{G}_c$ respectively. In order to make the connection between the limit dynamics and cohesive dynamics we will identify the relationship between the potentials $W^\epsilon(\mathcal{S},y-x)$ and the triple $\mu$, $\lambda$, $\mathcal{G}_c$.
To reveal this connection consider a family of cohesive evolutions $u^{\epsilon_k}$, each associated with a fixed potential $W^{\epsilon_k}$ and horizon length $\epsilon_k$, with $k=1,2,\ldots$ and $\epsilon_k\rightarrow 0$. Each $u^{\epsilon_k}(x,t)$ can be thought of as being the result of a perfectly accurate numerical simulation of a cohesive evolution associated with the potential $W^{\epsilon_k}$. It is shown in this section that the cohesive dynamics $u^{\epsilon_k}(x,t)$ converges to a limit evolution $u^0(x,t)$ in the limit $\epsilon_k\rightarrow 0$. The limit evolution describes the dynamics of the cracked body when the scale of nonlocality is infinitesimally small with respect to the material specimen. Here the limiting free crack evolution is mediated through the triple $\mu$, $\lambda$, and $\mathcal{G}_c$ that are described by explicit formulas associated with the sequence of potentials $W^{\epsilon_k}$, see \eqref{calibrate1}, \eqref{calibrate2} and Theorem \ref{LEFMMThm} below.
\noindent {\em It is of fundamental importance to emphasize that we do not impose a-priori relations between $W^{\epsilon_k}$ and the triple $\mu$, $\lambda$, and $\mathcal{G}_c$; we show instead that the cohesive dynamics $u^{\epsilon_k}(x,t)$ approaches the limit dynamics $u^0(x,t)$ characterized by $\mu$, $\lambda$, and $\mathcal{G}_c$ given by the formulas \eqref{calibrate1} and \eqref{calibrate2} in the limit when $\epsilon_k\rightarrow 0$}.
\noindent In what follows the sequence of cohesive dynamics described by $u^{\epsilon_k}$ is seen to converge to the limiting free crack evolution $u^0(x,t)$ in mean square, uniformly in time, see Theorem \ref{LimitflowThm}.
The limit evolution is shown to have the following properties:
\begin{itemize}
\item It has uniformly bounded energy in the sense of linear elastic fracture mechanics for $0\leq t \leq T$.
\item It satisfies an energy inequality involving the kinetic energy of the motion together with the bulk elastic and surface energy associated with linear elastic fracture mechanics for $0\leq t\leq T$.
\end{itemize}
\noindent We provide explicit conditions under which these properties are realized for the limit dynamics.
\begin{hypothesis}
\label{remarkone}
We suppose that the magnitude of the displacements $u^{\epsilon_k}$ for cohesive dynamics is bounded for $0\leq t\leq T$ uniformly in $\epsilon_k$, i.e., $\sup_{\epsilon_k}\sup_{0\leq t\leq T}\Vert u^{\epsilon_k}(t)\Vert_{L^\infty(D;\mathbb{R}^d)}<\infty$.
\end{hypothesis}
The convergence of cohesive dynamics is given by the following theorem.
\begin{theorem}
\label{LimitflowThm}
{\rm\bf Convergence of cohesive dynamics}\\
For each $\epsilon_k$ we prescribe identical LEFM initial data $u_0(x)$ and $v_0(x)$ and the solution to the cohesive dynamics initial value problem is denoted by $u^{\epsilon_k}$. Now consider a sequence of solutions $u^{\epsilon_k}$ associated with a vanishing peridynamic horizon $\epsilon_k\rightarrow 0$ and
suppose Hypothesis \ref{remarkone} holds true. Then, on passing to a subsequence if necessary, the cohesive evolutions $u^{\epsilon_k}$ converge in mean square uniformly in time to a limit evolution $u^0$ with the same LEFM initial data, i.e.,
\begin{eqnarray}
\lim_{\epsilon_k\rightarrow 0}\max_{0\leq t\leq T}\left\{\Vert u^{\epsilon_k}(t)-u^0(t)\Vert_{L^2(D;\mathbb{R}^d)}\right\}=0
\label{unifconvg}
\end{eqnarray}
and $u^0(x,0)=u_0(x)$ and $\partial_t u^0(x,0)=v_0(x)$.
\end{theorem}
To appropriately characterize the LEFM energy for the limit dynamics with freely propagating cracks one needs a generalization of the strain tensor. The appropriate notion of displacement and strain useful for problems involving discontinuities is provided by the functions of bounded deformation (BD) introduced in \cite{Matthies}, \cite{Suquet}. The subspace of BD given by the special functions of bounded deformation (SBD) introduced in \cite{Bellettini} is appropriate for describing discontinuities associated with linear elastic fracture. Functions $u$ in SBD belong to $L^1(D;\mathbb{R}^d)$ and are approximately continuous, i.e., have Lebesgue limits for almost every $x\in D$ given by
\begin{eqnarray}
\lim_{r\searrow 0}\frac{1}{\pi r^d}\int_{B(x,r)}\,|u(y)-u(x)|\,dy=0, \hbox{ $d=2,3$}
\label{approx}
\end{eqnarray}
where $B(x,r)$ is the ball of radius $r$ centered at $x$.
The jump set $J_{u}$ for elements of $SBD$ is defined to be the set of points of discontinuity which have two different one sided Lebesgue limits. One sided Lebesgue limits of $u$ with respect to a direction $\nu_u(x)$ are denoted by $u^-(x)$, $u^+(x)$ and are given by
\begin{eqnarray}
\lim_{r\searrow 0}\frac{1}{\pi r^d}\int_{B^-(x,r)}\,|u(y)-u^-(x)|\,dy=0,\,\,\, \lim_{r\searrow 0}\frac{1}{\pi r^d}\int_{B^+(x,r)}\,|u(y)-u^+(x)|\,dy=0, \hbox{$d=2,3$},
\label{approxjump}
\end{eqnarray}
where $B^-(x,r)$ and $B^+(x,r)$ are given by the intersection of $B(x,r)$ with the half spaces $(y-x)\cdot \nu_u(x)<0$ and $(y-x)\cdot \nu_u(x)>0$ respectively. SBD functions have jump sets $J_u$, described by a countable number of components $K_1,K_2,\ldots$, contained within smooth manifolds, with the exception of a set $K_0$ that has zero $d-1$ dimensional Hausdorff measure \cite{AmbrosioCosicaDalmaso}. Here the notion of arc length (or surface area) is the $d-1$ dimensional Hausdorff measure of $J_{u}$ and $\mathcal{H}^{d-1}(J_{u})=\sum_i\mathcal{H}^{d-1}(K_i)$.
The strain \cite{AmbrosioCosicaDalmaso} of a displacement $u$ belonging to SBD, written as $\mathcal{E}u$, is a generalization of the classic strain tensor and satisfies the property
\begin{eqnarray}
\lim_{r\searrow 0}\frac{1}{\pi r^d}\int_{B(x,r)}\,\frac{|(u(t,y)-u(t,x)-\mathcal{E}u(t,x)(y-x))\cdot(y-x)|}{|y-x|^2}\,dy=0, \hbox{ $d=2,3$}
\label{appgrad}
\end{eqnarray}
for almost every $x\in D$, with respect to $d$-dimensional Lebesgue measure $\mathcal{L}^d$.
The symmetric part of the distributional derivative of $u$, $E u=1/2(\nabla u+\nabla u^T)$, for $SBD$ functions is a $d\times d$ matrix valued Radon measure with absolutely continuous part described by the density $\mathcal{E}u$ and singular part described by the jump set \cite{AmbrosioCosicaDalmaso}, \cite{Bellettini} and
\begin{eqnarray}
\langle E u,\Phi\rangle=\int_D\,\sum_{i,j=1}^d\mathcal{E}u_{ij}\Phi_{ij}\,dx+\int_{J_{u}}\,\sum_{i,j=1}^d(u^+_i - u^-_i)\nu_j\Phi_{ij}\,d\mathcal{H}^{d-1},
\label{distderiv}
\end{eqnarray}
for every continuous, symmetric matrix valued test function $\Phi$.
A description of $BD$ functions including their fine properties and structure, together with the characterization of $SBD$ functions on slices, is developed in \cite{AmbrosioCosicaDalmaso} and \cite{Bellettini}.
The energy of linear elastic fracture mechanics extended to the class of $SBD$ functions is given by:
\begin{eqnarray}
LEFM(u,D)=\int_{D}\left(2\mu |\mathcal{E} u|^2+\lambda |{\rm div}\,u|^2\right)\,dx+\mathcal{G}_c\mathcal{H}^{d-1}(J_{u}), \hbox{ $d=2,3$,}
\label{LEFMSBVDefinition}
\end{eqnarray}
for $u$ belonging to $SBD$.
We now describe the elastic energy for the limit dynamics.
\begin{theorem}
{\rm\bf The limit dynamics has bounded LEFM energy}\\
The limit evolution $u^0$ belongs to SBD for every $t\in[0,T]$. Furthermore there exists a constant $C$ depending only on $T$ bounding the LEFM energy, i.e.,
\begin{eqnarray}
\int_{D}\,2\mu |\mathcal{E} u^0(t)|^2+\lambda |{\rm div}\,u^0(t)|^2\,dx+\mathcal{G}_c\mathcal{H}^{d-1}(J_{u^0(t)})\leq C, \hbox{ $d=2,3$,}
\label{LEFMbound}
\end{eqnarray}
for $0\leq t\leq T$. Here $\mu$, $\lambda$, and $\mathcal{G}_c$ are given by the explicit formulas
\begin{eqnarray}
\mu=\lambda=\frac{1}{4} f'(0)\int_{0}^1r^2J(r)dr, \hbox{ $d=2$}&\hbox{ and }& \mu=\lambda=\frac{1}{5} f'(0)\int_{0}^1r^3J(r)dr, \hbox{ $d=3$}\label{calibrate1}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{G}_c=\frac{2\omega_{d-1}}{\omega_d}\, f_\infty \int_{0}^1r^dJ(r)dr, \hbox{ for $d=2,3$}
\label{calibrate2}
\end{eqnarray}
where $f_\infty$ is defined by \eqref{properties} and $\omega_{n}$ is the volume of the $n$ dimensional unit ball, $\omega_1=2,\omega_2=\pi,\omega_3=4\pi/3$.
The potential $f$ and influence function $J$ can always be chosen to satisfy \eqref{calibrate1} and \eqref{calibrate2} for any $\mu=\lambda>0$, corresponding to the Poisson ratio $\nu=1/3$ for $d=2$ and $\nu=1/4$ for $d=3$, and any $\mathcal{G}_c>0$.
\label{LEFMMThm}
\end{theorem}
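The calibration formulas \eqref{calibrate1} and \eqref{calibrate2} are straightforward to evaluate numerically. The Python sketch below does so for the illustrative values of $f'(0)$, $f_\infty$ and the influence function used in the earlier sketches; these numbers are placeholders and are not calibrated to any particular material.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Effective moduli and energy release rate from moments of J (placeholder data).
f_prime_0, f_inf = 1.0, 0.5
J = lambda r: 1.0 - r                  # illustrative influence function on [0, 1)
omega = {1: 2.0, 2: np.pi, 3: 4.0 * np.pi / 3.0}

def calibrate(d):
    if d == 2:
        mu = 0.25 * f_prime_0 * quad(lambda r: r**2 * J(r), 0.0, 1.0)[0]
    else:
        mu = 0.2 * f_prime_0 * quad(lambda r: r**3 * J(r), 0.0, 1.0)[0]
    Gc = (2.0 * omega[d - 1] / omega[d]) * f_inf \
         * quad(lambda r: r**d * J(r), 0.0, 1.0)[0]
    return mu, Gc                      # lambda = mu for this central-force model

for d in (2, 3):
    mu, Gc = calibrate(d)
    print("d =", d, " mu = lambda =", round(mu, 5), " G_c =", round(Gc, 5))
\end{verbatim}
Conversely, since $f'(0)$ and $f_\infty$ enter \eqref{calibrate1} and \eqref{calibrate2} linearly, prescribed target values of $\mu=\lambda$ and $\mathcal{G}_c$ can be matched by rescaling them.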
\begin{remark}
\label{boundedhausdorffmeasure}
The absolutely continuous part of the strain $\mathcal{E}u^0$ is defined for points away from the jump set $J_{u^0}$ and in this sense the process zone for the limit evolution can be viewed as being confined to the jump set $J_{u^0}$.
Theorem \ref{LEFMMThm} shows that the jump set $J_{u^0}$ for the limit evolution $u^0(t,x)$ is confined to a set of finite $d-1$ dimensional Hausdorff measure.
\end{remark}
We now present an energy inequality for the limit evolution. The sum of energy and work for the displacement $u^0$ at time $t$ is written
\begin{eqnarray}
\mathcal{GF}(u^0(t),D)=\frac{\rho}{2}\Vert u_t^0(t)\Vert^2_{L^2(D;\mathbb{R}^d)}+LEFM(u^0(t),D)-\int_{D}b(t)\cdot u^0(t)\,dx.
\label{sumtt}
\end{eqnarray}
The sum of energy and work for the initial data $u_0,v_0$ is written
\begin{eqnarray}
\mathcal{GF}(u_0,D)=\frac{\rho}{2}\Vert v_0\Vert^2_{L^2(D;\mathbb{R}^d)}+LEFM(u_0,D)-\int_{D}b(0)\cdot u_0\,dx.
\label{sumt0}
\end{eqnarray}
The energy inequality for the limit evolution $u^0$ is given by,
\begin{theorem} {\rm \bf Energy Inequality}\\
\label{energyinequality}
For almost every $t$ in $[0, T]$,
\begin{eqnarray}
\mathcal{GF}(u^0(t),D)\leq\mathcal{GF}(u_0,D)-\int_0^t\int_{D} b_t(\tau) \cdot u^0(\tau)\,dx\,d\tau.
\label{enegineq}
\end{eqnarray}
\end{theorem}
\begin{remark}
\label{remarkfinal}
The equality $\lambda=\mu$ appearing in Theorem \ref{LEFMMThm} is a consequence of the central force nature of the local cohesive interaction mediated by \eqref{forcestate}. More general non-central interactions are proposed in Section 15 of \cite{Silling1} and in the state based peridynamic formulation \cite{States}. The non-central formulations deliver a larger class of energy-volume-shape change relations for homogeneous deformations.
Future work will address state based formulations that deliver general anisotropic elastic response for the bulk energy associated with the limiting dynamics.
\end{remark}
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{Free crack propagation in the small horizon limit}
\label{sec6}
We recall that the process zone concentrates on a set of zero volume (Lebesgue measure) in the small horizon limit and identify conditions for which the limit dynamics $u^0$ solves the wave equation away from the evolving crack set. To begin we make a technical hypothesis on the regularity of the jump set of the limit dynamics $u^0(x,t)$.
\begin{hypothesis}
\label{remarkthree}
We suppose that the crack set given by $J_{u^0(t)}$ is a closed set for $0\leq t\leq T$.
\end{hypothesis}
\noindent The next hypothesis applies to the concentration of the process zones as $\epsilon\rightarrow 0$ and their relation to the crack set for the limit dynamics.
\begin{hypothesis}
\label{remark555}
\noindent Theorem \ref{bondunstable} shows that the fracture sets defined as process zones with strains above $\mathcal{S}_c$, see Definition \ref{Fractureset}, concentrate on the set $CPZ^0(\overline{r},1/2,1/2,t)$. Here we assume that $J_{u^0(t)}=CPZ^0(\overline{r},1/2,1/2,t)$ for $0\leq t\leq T$.
\end{hypothesis}
\noindent The next hypothesis applies to neighborhoods $\mathcal{H}_{\epsilon_k}(x)$ for which the strain is subcritical, i.e., $|\mathcal{S}^\epsilon|<\overline{r}/\sqrt{|y-x|}$ for $y$ in $\mathcal{H}_{\epsilon_k}(x)$. These neighborhoods will be referred to as neutrally stable.
\begin{hypothesis}
\label{remark556}
We suppose that $\epsilon_k=\frac{1}{2^k}<\delta$ and $0\leq t\leq T$. Consider the collection of centers of peridynamic neighborhoods in $CPZ^\delta(\overline{r},1/2,1/2,t)$. We fatten out $CPZ^\delta(\overline{r},1/2,1/2,t)$ and consider $C\widetilde{PZ}^\delta(\overline{r},1/2,1/2,t)=\{x\in D:\, dist(x,CPZ^\delta(\overline{r},1/2,1/2,t))<\delta\}$. We suppose that all neighborhoods $\mathcal{H}_{\epsilon_k}(x)$ that do not intersect the set $C\widetilde{PZ}^\delta(\overline{r},1/2,1/2,t)$ are neutrally stable.
\end{hypothesis}
\noindent With these conditions satisfied the limit evolution $u^0$ is identified as a solution of the linear elastic wave equation.
\begin{theorem}
\label{waveequation}
Suppose Hypotheses \ref{remarkthree}, \ref{remark555} and \ref{remark556} hold true. Then the limit evolution $u^0(t,x)$ is a solution of the following wave equation (the first law of motion of Cauchy), in the sense of distributions on the domain $[0,T]\times D$:
\begin{eqnarray}
\rho u^0_{tt}= {\rm div}\,\sigma+b, \hbox{ on $[0,T]\times D$},
\label{waveequationn}
\end{eqnarray}
where the stress tensor $\sigma$ is given by
\begin{eqnarray}
\sigma =\lambda I_d Tr(\mathcal{E}\,u^0)+2\mu \mathcal{E}u^0,
\label{stress}
\end{eqnarray}
where $I_d$ is the identity on $\mathbb{R}^d$ and $Tr(\mathcal{E}\,u^0)$ is the trace of the strain.
Here the second derivative $u_{tt}^0$ is the time derivative in the sense of distributions of $u^0_t$ and ${\rm div}\,\sigma$ is the divergence of the stress tensor $\sigma$ in the distributional sense.
\end{theorem}
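\noindent For concreteness, the stress--strain law \eqref{stress} can be evaluated directly; the following minimal sketch computes $\sigma$ for a sample strain in $d=2$. The numerical values of $\lambda=\mu$ and of the strain tensor are assumptions chosen only for illustration (cf.\ \eqref{calibrate1}).
\begin{verbatim}
# Minimal sketch of the limiting stress-strain law (stress):
# sigma = lambda * tr(E) * I_d + 2 * mu * E, here with d = 2 and
# assumed values lam = mu (any positive values would do).
import numpy as np

lam = mu = 1.0/12.0
E = np.array([[1.0e-3, 2.0e-4],      # a sample symmetric strain tensor
              [2.0e-4, -5.0e-4]])
sigma = lam*np.trace(E)*np.eye(2) + 2.0*mu*E
print(sigma)
\end{verbatim}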
\begin{remark}
\label{Disjointsets}
For completeness we recall that the strain $\mathcal{E} u^0(x,t)$ and jump set $J_{u^0(t)}$ are defined over disjoint sets in $[0,T]\times D$.
\end{remark}
\begin{remark}
\label{displacementcrack}
The limit of the cohesive dynamics model is given by the displacement--crack set pair $u^0(t,x)$, $J_{u^0(t)}$.
The wave equation provides the dynamic coupling between elastic waves and the evolving fracture path inside the media.
\end{remark}
\begin{remark}
\label{remarknearlyfinal}
Hypotheses \ref{remarkthree}, \ref{remark555}, and \ref{remark556} are applied exclusively to establish Lemma \ref{twolimitsB}, which identifies the absolutely continuous part of the limit strain
$\mathcal{E}_{ij}u^0e_ie_j$, $e=\xi/|\xi|$, with the weak $L^2 (D\times\mathcal{H}_1 (0))$ limit of the
strain $\mathcal{S}^\epsilon$ restricted to pairs $(x,\xi)\in D\times\mathcal{H}_1 (0)$ for which $|\mathcal{S}^\epsilon|\leq\mathcal{S}_c$.
\end{remark}
\begin{remark}
\label{remarkfinal1}
We point out that the cohesive model addressed in this work does not have an irreversibility constraint and the constitutive law \eqref{forcestate} applies at all times in the peridynamic evolution. Because of this the crack set at each time is given by $J_{u^0(t)}$. For rapid monotonic loading we anticipate that crack growth is increasing for this model, i.e., $J_{u^0(t')}\subset J_{u^0(t)}$ for $t'<t$. For cyclic loading this is clearly not the case and the effects of irreversibility (damage) must be incorporated into the cohesive model.
\end{remark}
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{Mathematical underpinnings and analysis}
\label{EightIntro}
In this section we provide the proofs of the theorems stated in Sections \ref{sec2}, \ref{sec4}, \ref{sec5} and \ref{sec6}. The first subsection asserts the Lipschitz continuity of $\nabla PD^{\epsilon_k}(u)$ for $u$ in $L^2_0(D;\mathbb{R}^d)$ and applies the theory of ODE to deduce existence of the cohesive dynamics, see Section \ref{EE}. A Gronwall inequality is used to bound the cohesive potential energy and kinetic energy uniformly in time, see Section \ref{GKP}. Uniformly bounded sequences $\{u^{\epsilon_k}\}_{k=1}^\infty$ of cohesive dynamics are shown to be compact in $C([0,T]; L^2_0(D;\mathbb{R}^d))$, see Section \ref{CC}. Any limit point $u^0$ of the sequence $u^{\epsilon_k}$ is shown to belong to SBD for every $0\leq t\leq T$, see Section \ref{CC}. The limit evolutions $u^0$ are shown to have uniformly bounded elastic energy in the sense of linear elastic fracture mechanics for $0\leq t\leq T$, see Section \ref{CC}. In Section \ref{EI} we pass to the limit in the energy balance equation for cohesive dynamics \eqref{BalanceEnergy} to recover an energy inequality for the limit flow. The wave equation satisfied by the limit flow is obtained on identifying the weak $L^2$ limit of the sequence $\{\nabla PD^{\epsilon_k}(u^{\epsilon_k})\}_{k=1}^\infty$ and passing to the limit in the weak formulation of \eqref{stationary}, see Section \ref{SC}. We conclude with the proofs of Theorems \ref{epsiloncontropprocesszone} and \ref{bondunstable}.
\subsection{Existence of a cohesive evolution}
\label{EE}
The peridynamic equation \eqref{eqofmotion} for cohesive dynamics is written as an equivalent first order system. We set $y^{\epsilon_k}=(y^{\epsilon_k}_1,y^{\epsilon_k}_2)^T$ where $y^{\epsilon_k}_1=u^{\epsilon_k}$ and $y_2^{\epsilon_k}=u_t^{\epsilon_k}$. Set $F^{\epsilon_k}(y^{\epsilon_k},t)=(F^{\epsilon_k}_1(y^{\epsilon_k},t),F^{\epsilon_k}_2(y^{\epsilon_k},t))^T$ where
\begin{eqnarray}
F^{\epsilon_k}_1(y^{\epsilon_k},t)&=&y_2^{\epsilon_k}\nonumber\\
F^{\epsilon_k}_2(y^{\epsilon_k},t)&=&\nabla PD^{\epsilon_k}(y_1^{\epsilon_k})+b(t).\nonumber
\end{eqnarray}
The initial value problem for $y^{\epsilon_k}$ given by the first order system is
\begin{eqnarray}
\frac{d}{dt} y^{\epsilon_k}=F^{\epsilon_k}(y^{\epsilon_k},t)\label{firstordersystem}
\end{eqnarray}
with initial conditions $y^{\epsilon_k}(0)=(u_0,v_0)^T$ satisfying LEFM initial conditions. In what follows we consider the more general class of initial data
$(u_0,v_0)$ belonging to $L^2_0(D;\mathbb{R}^d)\times L^2_0(D;\mathbb{R}^d)$.
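\noindent The first order form \eqref{firstordersystem} is also the natural starting point for numerical experiments. The sketch below advances $y=(y_1,y_2)$ with a semi-implicit Euler step; the operator \texttt{grad\_PD} is a hypothetical linear nonlocal placeholder on a periodic one dimensional grid, standing in for $\nabla PD^{\epsilon_k}$ (which is not reproduced here), and the grid, horizon, and time step are assumptions made only for illustration.
\begin{verbatim}
# Sketch of time stepping for d/dt (y1, y2) = (y2, grad_PD(y1) + b(t)).
# grad_PD is a hypothetical linear nonlocal placeholder; it is NOT the
# cohesive force of this paper.
import numpy as np

n, eps, dt = 200, 0.1, 1.0e-3
x = np.linspace(0.0, 1.0, n, endpoint=False)
h = x[1] - x[0]
m = int(eps/h)                      # number of neighbors within the horizon

def grad_PD(u):
    # average of scaled second differences over the horizon (placeholder)
    force = np.zeros_like(u)
    for k in range(1, m + 1):
        force += (np.roll(u, -k) - 2.0*u + np.roll(u, k)) / (eps*k*h)
    return force / m

def b(t):
    return np.zeros(n)              # zero body force in this sketch

y1 = np.exp(-100.0*(x - 0.5)**2)    # initial displacement u_0
y2 = np.zeros(n)                    # initial velocity v_0
for step in range(1000):
    y2 = y2 + dt*(grad_PD(y1) + b(step*dt))   # semi-implicit Euler step
    y1 = y1 + dt*y2
\end{verbatim}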
\begin{theorem}
For $0\leq t\leq T$ there exists a unique solution in $C^1([0,T];L^2_0(D;\mathbb{R}^d))$ of the mesoscopic dynamics described by \eqref{firstordersystem} with initial data in $L^2_0(D;\mathbb{R}^d)\times L^2_0(D;\mathbb{R}^d)$ and body force $b(t,x)$ in $C^1([0,T];L^2_0(D;\mathbb{R}^d))$.
\label{existenceuniqueness}
\end{theorem}
\noindent It now follows that for LEFM initial data one has a unique solution $u^{\epsilon_k}$ of \eqref{stationary} in Section \ref{sec2} belonging to $C^2([0,T];L^2_0(D;\mathbb{R}^d))$.
{\bf Proof of Theorem \ref{existenceuniqueness}.}
A straightforward calculation shows that for a generic positive constant $C$ independent of $\mathcal{S}$, $y-x$, and $\epsilon_k$,
\begin{eqnarray}
\sup_{\mathcal{S}}|\partial_{\mathcal{S}}^2 W^{\epsilon_k}(\mathcal{S},y-x)|\leq \frac{C}{\epsilon_k|y-x|} \times J(|y-x|/\epsilon_k).
\label{secondderv}
\end{eqnarray}
From this it follows easily from the H\"older and Minkowski inequalities that $\nabla PD^{\epsilon_k}$ is a Lipschitz continuous map from $L^2_0(D;\mathbb{R}^d)$ into $L^2_0(D;\mathbb{R}^d)$, and there is a positive constant $C$ independent of $0\leq t\leq T$ such that for any pair of vectors $y=(y_1,y_2)^T$, $z=(z_1,z_2)^T$ in $L^2_0(D;\mathbb{R}^d)\times L^2_0(D;\mathbb{R}^d)$
\begin{eqnarray}
\Vert F^{\epsilon_k}(y,t)-F^{\epsilon_k}(z,t)\Vert_{L^2(D;\mathbb{R}^d)^2}\leq \frac{C}{\epsilon_k}\Vert y-z\Vert_{L^2(D;\mathbb{R}^d)^2} \hbox{ for $0\leq t\leq T$}.
\label{lipschitz}
\end{eqnarray}
Here for any element $w=(w_1,w_2)$ of $L^2_0(D;\mathbb{R}^d)\times L^2_0(D;\mathbb{R}^d)$, $\Vert w \Vert^2_{L^2(D;\mathbb{R}^d)^2}=\Vert w_1\Vert_{L^2(D;\mathbb{R}^d)}^2+\Vert w_2\Vert_{L^2(D;\mathbb{R}^d)}^2$.
Since \eqref{lipschitz} holds, the theory of ODE in Banach space \cite{Driver} shows that there exists a unique solution to the initial value problem \eqref{firstordersystem} with $y^{\epsilon_k}$ and $\partial_t y^{\epsilon_k}$ belonging to $C([0,T]; L^2_0(D;\mathbb{R}^d))$, and Theorem
\ref{existenceuniqueness} is proved.
In this context we point out the recent work of \cite{EmmrichPhulst}, where an existence theory is presented for peridynamic evolutions with general pairwise force functions that are Lipschitz continuous with respect to the peridynamic deformation state.
\subsection{Bounds on kinetic and potential energy for solutions of PD}
\label{GKP}
In this section we apply Gronwall's inequality to obtain bounds on the kinetic and elastic energy for peridynamic flows described by Theorem \ref{Gronwall}. The bounds are used to show that the solutions of the PD initial value problem are Lipschitz continuous in time.
We now prove Theorem \ref{Gronwall}. Multiplying both sides of \eqref{stationary} by $u_t^{\epsilon_k}(t)$ and integrating over $D$, a straightforward calculation gives
\begin{eqnarray}
&&\frac{1}{2}\frac{d}{dt}\left\{2PD^{\epsilon_k}(u^{\epsilon_k}(t))+{\rho}\Vert u_t^{\epsilon_k}(t)\Vert^2_{L^2(D;\mathbb{R}^d)}\right\}\nonumber\\
&&=\int_{D}(\nabla PD^{\epsilon_k}(u^{\epsilon_k}(t))+\rho u_{tt}^{\epsilon_k}(t))\cdot u_t^{\epsilon_k}(t)\,dx\nonumber\\
&&=\int_{D} u_t^{\epsilon_k}(t)\cdot b(t)\,dx\,\leq \, \Vert u_t^{\epsilon_k}\Vert_{L^2(D;\mathbb{R}^d)}\Vert b(t)\Vert_{L^2(D;\mathbb{R}^d)}.\label{esttime1}
\end{eqnarray}
Set
\begin{eqnarray}
&& W(t)=2PD^{\epsilon_k}(u^{\epsilon_k}(t))+{\rho}\Vert u_t^{\epsilon_k}(t)\Vert^2_{L^2(D;\mathbb{R}^d)}+1;
\label{wt}
\end{eqnarray}
then applying \eqref{esttime1} gives
\begin{eqnarray}
&&\frac{1}{2}W'(t) \leq \, \Vert u_t^{\epsilon_k}\Vert_{L^2(D;\mathbb{R}^d)}\Vert b(t)\Vert_{L^2(D;\mathbb{R}^d)}\leq\frac{1}{\sqrt{\rho}}\sqrt{W(t)}\Vert b(t)\Vert_{L^2(D;\mathbb{R}^d)}\label{esttime2}
\end{eqnarray}
and
\begin{eqnarray}
\frac{1}{2}\int_0^t\frac{W'(\tau)}{\sqrt{W(\tau)}}\,d\tau\leq\frac{1}{\sqrt{\rho}}\int_0^t\Vert b(\tau)\Vert_{L^2(D;\mathbb{R}^d)}\,d\tau.
\label{estime3}
\end{eqnarray}
Hence
\begin{eqnarray}
\sqrt{W(t)}-\sqrt{W(0)}\leq\frac{1}{\sqrt{\rho}}\int_0^t\Vert b(\tau)\Vert_{L^2(D;\mathbb{R}^d)}\,d\tau
\label{estime4}
\end{eqnarray}
and
\begin{eqnarray}
&& 2PD^{\epsilon_k}(u^{\epsilon_k}(t))+{\rho}\Vert u_t^{\epsilon_k}(t)\Vert^2_{L^2(D;\mathbb{R}^d)} \leq \left(\frac{1}{\sqrt{\rho}}\int_0^t\Vert b(\tau)\Vert_{L^2(D;\mathbb{R}^d)}\,d\tau +\sqrt{W(0)}\right )^2-1.
\label{gineq}
\end{eqnarray}
For now we postpone the proof of \eqref{basicinequality} to Section \ref{CC} (see the discussion preceding \eqref{upperboundperi}) and apply \eqref{basicinequality} to get the upper bound
\begin{eqnarray}
PD^{\epsilon_k}(u_0)\leq LEFM (u_0,D)\hbox{ for every $\epsilon_k$, \,\,$k=1,2,\ldots$},
\label{upperbound}
\end{eqnarray}
where $LEFM(u_0,D)$ is the elastic potential energy for linear elastic fracture mechanics given by \eqref{Gcrackenergy} or equivalently \eqref{LEFMSBVDefinition}. Theorem \ref{Gronwall} now follows from \eqref{gineq} and \eqref{upperbound}.
Theorem \ref{Gronwall} implies that PD solutions are Lipschitz continuous in time; this is stated explicitly in Theorem \ref{holdercont} of Section \ref{sec2}. To prove Theorem \ref{holdercont} we write
\begin{eqnarray}
&&\Vert u^{\epsilon_k}(t_1)-u^{\epsilon_k}(t_2)\Vert_{L^2(D;\mathbb{R}^d)}=\left (\int_{D}\left|\int_{t_2}^{t_1} u^{\epsilon_k}_\tau(\tau)\,d\tau \right|^2\,dx\right )^{\frac{1}{2}}\nonumber\\
&&=\left (\int_{D}|t_1-t_2|^{2}\left|\frac{1}{|t_1-t_2|}\int_{t_2}^{t_1} u^{\epsilon_k}_\tau(\tau)\,d\tau \right|^2\,dx\right )^{\frac{1}{2}}\nonumber\\
&&\leq\left (\int_{D}|t_1-t_2|\int_{t_2}^{t_1} |u^{\epsilon_k}_\tau(\tau)|^2\,d\tau\,dx\right )^{\frac{1}{2}}\nonumber\\
&&\leq\left(|t_1-t_2|\int_{t_2}^{t_1}\Vert u_\tau^{\epsilon_k}(\tau)\Vert_{L^2(D;\mathbb{R}^d)}^2\,d\tau\right)^{1/2}\nonumber\\
&&\leq K|t_1-t_2|,
\label{lip}
\end{eqnarray}
where the third to last line follows from Jensen's inequality, the second to last line from Fubini's theorem, and
the last inequality follows from the upper bound for $\Vert u_t^{\epsilon_k}(t)\Vert^2_{L^2(D;\mathbb{R}^d)}$ given by Theorem \ref{Gronwall}.
\subsection{Compactness and convergence}
\label{CC}
In this section we prove Theorems \ref{LimitflowThm} and \ref{LEFMMThm}. We start by establishing the inequality \eqref{basicinequality} between the elastic energies $PD^{\epsilon_k}(u)$
and $LEFM(u,D)$. This is established for any $u$ in $L^2_0(D;\mathbb{R}^d)\cap L^\infty(D;\mathbb{R}^d)$ and $LEFM(u,D)$ given by \eqref{LEFMSBVDefinition}. Here \eqref{LEFMSBVDefinition} reduces to \eqref{Gcrackenergy} when $u$ is piecewise smooth and the crack $K$ consists of a finite number of smooth components.
To obtain the upper bound we can directly apply the slicing technique of \cite{Gobbino3} to reduce to the one dimensional case, obtain an upper bound on one dimensional sections, and then apply integral-geometric arguments to conclude. Here the slicing theorem and integral-geometric measure appropriate for this approach in the context of SBD are given by Theorems 4.5 and 4.10 of \cite{AmbrosioCosicaDalmaso}.
These arguments deliver the following inequality
\begin{eqnarray}
&&PD^{\epsilon_k}(u)\leq LEFM(u,D), \hbox{ for every $u$ in $L^2_0(D;\mathbb{R}^d)$, and $\epsilon_k>0$}. \label{upperboundperi}
\end{eqnarray}
To proceed with the proof of Theorem \ref{LimitflowThm} we require the compactness theorem.
\begin{theorem} {\rm \bf Compactness.}\\
Given a sequence of functions $u^{\epsilon_k}\in L^2_0(D;\mathbb{R}^d)$, $\epsilon_k=1/k$, $k=1,2,\ldots$, such that
\begin{eqnarray}
\sup_{\epsilon_k}\left(PD^{\epsilon_k}(u^{\epsilon_k})+\Vert u^{\epsilon_k}\Vert_{L^\infty(D;\mathbb{R}^d)}\right)<\infty,
\label{unifbound}
\end{eqnarray}
there exists a subsequence $u^{\epsilon_{k}'}$ and a limit point $u$ in $L_0^2(D;\mathbb{R}^d)$ for which
\begin{eqnarray}
u^{\epsilon_k'}\rightarrow u \hbox{ in $L^2(D;\mathbb{R}^d)$ as } \epsilon_k'\rightarrow 0.
\label{compactness}
\end{eqnarray}
\label{L2compact}
\end{theorem}
In what follows it is convenient to change variables $y=x+\delta\xi$ for $|\xi|<1$ and $0<\delta<\alpha/2<1$; here the peridynamic neighborhood $\mathcal{H}_\delta(x)$ transforms to $\mathcal{H}_1(0)=\{\xi\in\mathbb{R}^d;\,|\xi|<1\}$. The unit vector $\xi/|\xi|$ is denoted by $e$.
To prove Theorem \ref{L2compact} we need the upper bound given by the following theorem.
\begin{theorem}{\rm \bf Upper bound}\\
\label{coercivityth}
For any $0<\delta<\alpha/2$ there exist positive constants $\tilde{K}_1$ and $\tilde{K}_2$ independent of $u\in L^2_0(D;\mathbb{R}^d)\cap L^\infty(D;\mathbb{R}^d)$ such that
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}\int_D|u(x+\delta\xi)-u(x)|^2dx\,J(|\xi|)d\xi\leq \delta(\tilde{K}_1+\tilde{K}_2\Vert u\Vert_{L^\infty(D;\mathbb{R}^d)}^2) PD^\delta(u).
\label{coercivity}
\end{eqnarray}
\end{theorem}
We establish the upper bound in two steps.
\begin{lemma}{\rm \bf Coercivity}\\
\label{coercivitya}
There exists a positive constant $C$ independent of $u\in L^2_0(D;\mathbb{R}^d)$
for which
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}\int_D|u(x+\delta\xi)-u(x)|^2dx\,J(|\xi|)d\xi\leq C \int_{\mathcal{H}_1(0)}\int_D|(u(x+\delta\xi)-u(x))\cdot e|^2dx\,J(|\xi|)d\xi.
\label{coercivea}
\end{eqnarray}
\end{lemma}
{\bf Proof of Lemma \ref{coercivitya}.} The proof is by contradiction. Suppose for every positive integer $N>0$ there is an element $u^N\in L_0^2(D;\mathbb{R}^d)$ for which
\begin{eqnarray}
&&N\int_{\mathcal{H}_1(0)}\int_D|(u^N(x+\delta\xi)-u^N(x))\cdot e |^2dx\,J(|\xi|)d\xi\nonumber\\
&&\leq\int_{\mathcal{H}_1(0)}\int_D|u^N(x+\delta\xi)-u^N(x) |^2dx\,J(|\xi|)d\xi.
\label{contra}
\end{eqnarray}
The Cauchy--Schwarz inequality together with the triangle inequality delivers a constant $\overline{K}>0$ for which
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}\int_D|u(x+\delta\xi)-u(x)|^2dx\,J(|\xi|)d\xi\leq\overline{K}\Vert u\Vert^2_{L^2(D;\mathbb{R}^d)}.
\label{upineq}
\end{eqnarray}
An application of the nonlocal Korn inequality, Lemma 5.5 of \cite{DuGunzbergerlehoucqmengesha}, gives the existence of a constant $\underline{K}>0$ independent of $u$ in $L_0^2(D;\mathbb{R}^d)$ for which
\begin{eqnarray}
\underline{K} \Vert u\Vert^2_{L^2(D;\mathbb{R}^d)}\leq\int_{\mathcal{H}_1(0)}\int_D|(u(x+\delta\xi)-u(x))\cdot e|^2dx\,J(|\xi|)d\xi.
\label{korn}
\end{eqnarray}
Applying the inequalities \eqref{contra}, \eqref{upineq}, and \eqref{korn} we discover that $\overline{K}/N\geq\underline{K}$ for all integers $N>0$, and so $\underline{K}=0$, which is a contradiction; Lemma \ref{coercivitya} is proved.
Theorem \ref{coercivityth} now follows from Lemma \ref{coercivitya} and the upper bound given by the next lemma.
\begin{lemma} {\rm \bf Upper bound}\\
\label{coercivityb}
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}\int_D|(u(x+\delta\xi)-u(x))\cdot e|^2dx\,J(|\xi|)d\xi\leq \delta(\tilde{K}_1+\tilde{K}_2\Vert u\Vert_{L^\infty(D;\mathbb{R}^d)}^2) PD^\delta(u).
\label{coerciveb}
\end{eqnarray}
\end{lemma}
{\rm \bf Proof of Lemma \ref{coercivityb}.} Consider the concave potential function $f$ described in the introduction; recall $f(0)=0$, and given $M>0$ set $H_M=f(M)/M$. For $0<r<M$ one has $r<H_M^{-1}f(r)$. Set
\begin{eqnarray}
A_{\delta\xi}=\{x\in D;\ |(u(x+\delta\xi)-u(x))\cdot e|^2>\delta|\xi|M\},
\label{Aset}
\end{eqnarray}
so
\begin{eqnarray}
&&\int_{D\setminus A_{\delta\xi}}\,|(u(x+\delta\xi)-u(x))\cdot e|^2\,dx=
\delta|\xi|\int_{D\setminus A_{\delta\xi}}\delta|\xi||\mathcal{S}|^2\,dx\nonumber\\
&&\leq\frac{\delta|\xi|}{H_M}\int_{D\setminus A_{\delta\xi}}\frac{1}{\delta|\xi|}f\left(\delta|\xi||\mathcal{S}|^2\right)\,dx.
\label{starter}
\end{eqnarray}
Now $f(r)>f(M)$ for $r>M$ gives
\begin{eqnarray}
\frac{1}{\delta|\xi|}f(M)\mathcal{L}^d(A_{\delta\xi})\leq
\int_{A_{\delta\xi}}\frac{1}{\delta|\xi|}f\left(\delta|\xi||\mathcal{S}|^2\right)\,dx
\label{vol1}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{L}^d(A_{\delta\xi})\leq
\frac{\delta|\xi|}{f(M)}\int_{D}\frac{1}{\delta|\xi|}f\left(\delta|\xi||\mathcal{S}|^2\right)\,dx.
\label{ineqonA}
\label{volumebound}
\end{eqnarray}
Noting that
\begin{eqnarray}
\int_{A_{\delta\xi}}\,|(u(x+\delta\xi)-u(x))\cdot e|^2\,dx\leq 2\Vert u\Vert_{L^\infty(D;\mathbb{R}^d)}^2\mathcal{L}^d(A_{\delta\xi})
\label{basiccc}
\end{eqnarray}
and collecting results one has
\begin{eqnarray}
\label{almostestimate}
\int_{D}\,|(u(x+\delta\xi)-u(x))\cdot e|^2\,dx\leq\delta|\xi|\left(\frac{1}{H_M}+\frac{2\Vert u\Vert_{L^\infty(D;\mathbb{R}^d)}^2}{f(M)}\right)
\int_{D}\frac{1}{\delta|\xi|}f\left(\delta|\xi||\mathcal{S}|^2\right)\,dx.
\end{eqnarray}
Lemma \ref{coercivityb} follows on multiplying both sides of \eqref{almostestimate} by $J(|\xi|)$ and integrating over $\mathcal{H}_1(0)$.
Theorem \ref{coercivityth} follows from Lemmas \ref{coercivitya} and \ref{coercivityb}.
Arguing as in \cite{Gobbino3} we have the monotonicity given by
\begin{lemma}{\rm\bf Monotonicity}\\
For any integer $M$, $\eta>0$ and $u\in L^\infty(D;\mathbb{R}^d)$ one has
\begin{eqnarray}
PD^{M\eta}(u)\leq PD^\eta(u).
\label{monotone}
\end{eqnarray}
\label{monotonicity}
\end{lemma}
Now choose the subsequence $\epsilon_k=1/2^k$, $k=1,2,\ldots$; from Theorem \ref{coercivityth} and Lemma \ref{monotonicity} we have, for any $0<K<k$ with $\delta=2^{-K}$ and $\epsilon_k=2^{-k}$,
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}\int_D|u^{\epsilon_k}(x+\delta\xi)-u^{\epsilon_k}(x)|^2dx\,J(|\xi|)d\xi\leq \delta(\tilde{K}_1+\tilde{K}_2\Vert u^{\epsilon_k}\Vert_{L^\infty(D;\mathbb{R}^d)}^2) PD^{\epsilon_k}(u^{\epsilon_k}).\label{precompactness}
\end{eqnarray}
Applying the hypothesis \eqref{unifbound} to inequality \eqref{precompactness} gives a finite constant $B$ independent of $\epsilon_k$ and $\delta$ for which
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}\int_D|u^{\epsilon_k}(x+\delta\xi)-u^{\epsilon_k}(x)|^2dx\,J(|\xi|)d\xi\leq \delta B,\label{precompactnesses}
\end{eqnarray}
for all $\epsilon_k<\delta$. One can then apply \eqref{precompactnesses} as in \cite{Gobbino3} (or alternatively apply \eqref{precompactnesses} and arguments similar to the proof of the Kolmogorov--Riesz compactness theorem \cite{OlsenHolden}) to show that the sequence $\{u^{\epsilon_k}\}_{k=1}^\infty$ is a totally bounded subset of $L^2_0(D;\mathbb{R}^d)$, and Theorem \ref{L2compact} is proved.
Now it is shown that the family of mesoscopic dynamics $\{u^{\epsilon_k}\}_{k=1}^\infty$ is relatively compact in\\
$C([0,T];L_0^2(D;\mathbb{R}^d))$.
For each $t$ in $[0,T]$ we apply Theorem \ref{Gronwall} and Hypothesis \ref{remarkone} to obtain the bound
\begin{eqnarray}
PD^{\epsilon_k}(u^{\epsilon_k}(t))+\Vert u^{\epsilon_k}(t)\Vert_{L^\infty(D)}<C,
\label{cpactbnd}
\end{eqnarray}
where $C<\infty$ is independent of $\epsilon_k$, $k=1,2,\ldots$, and $0\leq t\leq T$.
With this bound we apply Theorem \ref{L2compact} to assert that for each $t$ the sequence $\{u^{\epsilon_k}(t)\}_{k=1}^\infty$
is relatively compact in $L^2(D;\mathbb{R}^d)$.
From Theorem \ref{holdercont} the sequence $\{u^{\epsilon_k}\}_{k=1}^\infty$ is seen to be uniformly equi-continuous in $t$ with respect to the $L^2(D;\mathbb{R}^d)$ norm,
and we immediately conclude from the Ascoli theorem that $\{u^{\epsilon_k}\}_{k=1}^\infty$ is relatively compact in $C([0,T];L^2(D;\mathbb{R}^d))$.
Therefore we can pass to a subsequence, also denoted by $\{u^{\epsilon_{k}}(t)\}_{k=1}^\infty$, to assert the existence of a limit evolution $u^0(t)$ in $C([0,T];L^2(D;\mathbb{R}^d))$ for which
\begin{eqnarray}
\lim_{k\rightarrow\infty}\left\{\sup_{t\in[0,T]}\Vert u^{\epsilon_{k}}(t)-u^0(t)\Vert_{\scriptscriptstyle{{L^2(D;\mathbb{R}^d)}}}\right\}=0,
\label{unfconvergence}
\end{eqnarray}
and Theorem \ref{LimitflowThm} is proved.
We now prove Theorem \ref{LEFMMThm}. One has that limit points of sequences satisfying \eqref{unifbound} enjoy higher regularity.
\begin{theorem}{\rm\bf Higher regularity}\\
\label{higherreg}
Every limit point of a sequence $\{u^{\epsilon_k}\}_{k=1}^\infty$ in $L_0^2(D;\mathbb{R}^d)$
satisfying \eqref{unifbound} belongs to $SBD$.
\end{theorem}
{\bf Proof.} To recover higher regularity one can directly apply the slicing technique of \cite{Gobbino3} to reduce to the one dimensional case and construct sequences of functions converging in SBV to the limit point along one dimensional sections. One then applies Theorem 4.7 of \cite{AmbrosioCosicaDalmaso} to conclude that the limit point belongs to $SBD$.
It now follows from Theorem \ref{higherreg} that the limit evolution $u^0(t)$ belongs to $SBD$ for $0\leq t\leq T$.
Next we recall the properties of $\Gamma$-convergence and apply them to finish the proof of Theorem \ref{LEFMMThm}.
Consider a sequence of functions $\{F_j\}$ defined on a metric
space $\mathbb{M}$ with values in $\overline{\mathbb{R}}$ together with a function $F$ also defined on $\mathbb{M}$ with values in $\overline{\mathbb{R}}$.
\begin{definition}
\label{Gammaconvergence}
We say that $F$ is the $\Gamma$-limit of the sequence $\{F_j\}$ in $\mathbb{M}$ if the following two
properties hold:
\begin{enumerate}
\item for every $x$ in $\mathbb{M}$ and every sequence $\{x_j\}$ converging to $x$, we have that
\begin{eqnarray}
F(x)\leq \liminf_{j\rightarrow\infty} F_j(x_j),\label{lowerbound}
\end{eqnarray}
\item for every $x$ in $\mathbb{M}$ there exists a recovery sequence $\{x_j\}$ converging to $x$, for which
\begin{eqnarray}
F(x)=\lim_{j\rightarrow\infty} F_j(x_j).\label{recovery}
\end{eqnarray}
\end{enumerate}
\end{definition}
For $u$ in $L^2_0(D;\mathbb{R}^d)$ define $PD^0:\,L^2_0(D;\mathbb{R}^d)\rightarrow [0,+\infty]$ by
\begin{equation}
PD^0(u)=\left\{ \begin{array}{ll}
LEFM(u,D)&\hbox{if $u$ belongs to $SBD$}\\
+\infty&\hbox{otherwise.}
\end{array} \right.
\label{Gammalimit}
\end{equation}
A straightforward argument following Theorem 4.3 $(ii)$ and $(iii)$ of \cite{Gobbino3} and invoking Theorems 4.5, 4.7, and 4.10 of \cite{AmbrosioCosicaDalmaso} as appropriate delivers
\begin{theorem}{\bf $\Gamma$-convergence and pointwise convergence of peridynamic energies for cohesive dynamics.}
\begin{eqnarray}
&&PD^0 \hbox{ is the $\Gamma$-limit of $\{PD^{\epsilon_k}\}$ in $L^2_0(D;\mathbb{R}^d)$}, \hbox{ and }
\label{gammaconvpd}\\
&&\lim_{k\rightarrow\infty}PD^{\epsilon_k}(u)=PD^0(u), \hbox{ for every $u$ in $L^2_0(D;\mathbb{R}^d)$}.\label{pointwise}
\end{eqnarray}
\label{Gammaandpointwise}
\end{theorem}
Observe that since the sequence of peridynamic energies $\{PD^{\epsilon_k}\}$ $\Gamma$-converges to $PD^0$ in $L^2(D;\mathbb{R}^d)$ we can apply
the lower bound property \eqref{lowerbound} of $\Gamma$-convergence to conclude that the limit has bounded elastic energy in the
sense of fracture mechanics, i.e.,
\begin{eqnarray}
LEFM(u^0(t),D)=PD^0(u^0(t))\leq\liminf_{k\rightarrow\infty}PD^{\epsilon_{k}}(u^{\epsilon_{k}}(t))<C.
\label{GSBV}
\end{eqnarray}
This concludes the proof of Theorem \ref{LEFMMThm}.
\subsection{Energy inequality for the limit flow}
\label{EI}
In this section we prove Theorem \ref{energyinequality}. We begin by showing that the limit evolution $u^0(t,x)$ has a weak derivative $u_t^0(t,x)$ belonging to $L^2([0,T]\times D;\mathbb{R}^d)$. This is summarized in the following theorem.
\begin{theorem}
\label{weaktimederiviative}
On passage to subsequences if necessary, the sequence $u_t^{\epsilon_k}$ converges weakly in $L^2([0,T]\times D;\mathbb{R}^d)$ to $u^0_t$, where
\begin{eqnarray}
-\int_0^T\int_D\partial_t\psi \cdot u^0\, dxdt=\int_0^T\int_D\psi \cdot u^0_t\, dxdt,
\label{weakl2time}
\end{eqnarray}
for all compactly supported smooth test functions $\psi$ on $[0,T]\times D$.
\end{theorem}
{\bf Proof.} The bound on the kinetic energy given in Theorem \ref{Gronwall}
implies
\begin{eqnarray}
\sup_{\epsilon_k>0}\left(\sup_{0\leq t\leq T}\Vert u^{\epsilon_k}_t\Vert_{L^2(D;\mathbb{R}^d)}\right)< \infty.
\label{bddd}
\end{eqnarray}
Therefore the sequence $u^{\epsilon_k}_t$ is bounded in $L^2([0,T]\times D;\mathbb{R}^d)$ and, passing to
a subsequence if necessary, we conclude that there is a limit function
$\tilde{u}^0$ for which $u_t^{\epsilon_k}\rightharpoonup\tilde{u}^0$ weakly in $L^2([0,T]\times D;\mathbb{R}^d)$. Observe also that the uniform convergence \eqref{unfconvergence} implies that $u^{\epsilon_k}\rightarrow u^0$ in $L^2([0,T]\times D;\mathbb{R}^d)$.
Writing the identity
\begin{eqnarray}
-\int_0^T\int_D\partial_t\psi\cdot u^{\epsilon_k}\, dxdt=\int_0^T\int_D\psi \cdot u^{\epsilon_k}_t\, dxdt,
\label{weakidentity}
\end{eqnarray}
applying our observations, and passing to the limit, it is seen that
$\tilde{u}^0=u_t^0$ and the theorem follows.
To establish Theorem \ref{energyinequality} we require the following inequality.
\begin{lemma}
For almost every $t$ in $[0,T]$ we have
\label{weakinequality}
\begin{eqnarray}
\Vert u^0_t(t)\Vert_{L^2(D;\mathbb{R}^d)}\leq \liminf_{\epsilon_k\rightarrow 0}\Vert u^{\epsilon_k}_t(t)\Vert_{L^2(D;\mathbb{R}^d)}.
\label{limitweakineq}
\end{eqnarray}
\end{lemma}
{\bf Proof.}
We start with the identity
\begin{eqnarray}
\Vert u^{\epsilon_k}_t\Vert_{L^2(D;\mathbb{R}^d)}^2-2\int_D u^{\epsilon_k}_t\cdot u^0_t \,dx+\Vert u^0_t\Vert_{L^2(D;\mathbb{R}^d)}^2
= \Vert u^{\epsilon_k}_t-u^0_t\Vert_{L^2(D;\mathbb{R}^d)}^2 \geq 0,
\label{locpositive}
\end{eqnarray}
and for every non-negative bounded measurable function of time $\psi(t)$ defined on $[0,T]$ we have
\begin{eqnarray}
\int_0^T\psi \Vert u^{\epsilon_k}_t-u^0_t\Vert_{L^2(D;\mathbb{R}^d)}^2\,dt\geq 0.
\label{positive}
\end{eqnarray}
Together with the weak convergence given in Theorem \ref{weaktimederiviative} one easily sees that
\begin{eqnarray}
\liminf_{\epsilon_k\rightarrow 0}\int_0^T\psi\Vert u^{\epsilon_k}_t\Vert_{L^2(D;\mathbb{R}^d)}^2\,dt-\int_0^T\psi\Vert u^0_t\Vert_{L^2(D;\mathbb{R}^d)}^2\,dt\geq 0.
\label{diff}
\end{eqnarray}
\noindent Applying \eqref{bddd} and invoking the Lebesgue dominated convergence theorem we conclude
\begin{eqnarray}
\liminf_{\epsilon_k\rightarrow 0}\int_0^T\psi\Vert u^{\epsilon_k}_t\Vert_{L^2(D;\mathbb{R}^d)}^2\,dt=\int_0^T\psi\liminf_{\epsilon_k\rightarrow 0}\Vert u^{\epsilon_k}_t\Vert_{L^2(D;\mathbb{R}^d)}^2\,dt
\label{equalitybalance}
\end{eqnarray}
to recover the inequality given by
\begin{eqnarray}
\int_0^T\psi\left(\liminf_{\epsilon_k\rightarrow 0}\Vert u^{\epsilon_k}_t\Vert_{L^2(D;\mathbb{R}^d)}^2-\Vert u^0_t\Vert_{L^2(D;\mathbb{R}^d)}^2\right)\,dt\geq 0.
\label{diffinal}
\end{eqnarray}
The lemma follows noting that \eqref{diffinal} holds for every non-negative test function $\psi$.
Theorem \ref{energyinequality} now follows immediately on taking the $\epsilon_k\rightarrow 0$ limit in the peridynamic energy balance equation \eqref{BalanceEnergy} of Theorem \ref{Ebalance} and applying \eqref{pointwise}, \eqref{GSBV}, and \eqref{limitweakineq} of Lemma \ref{weakinequality}.
\subsection{Stationarity conditions for the limit flow}
\label{SC}
In this section we prove Theorem \ref{waveequation}. The first subsection establishes Theorem \ref{waveequation} using Theorem \ref{convgofelastic}. Theorem \ref{convgofelastic} is proved in the second subsection.
\subsubsection{Proof of Theorem \ref{waveequation}}
To proceed we make the change of variables $y=x+\epsilon\xi$, where $\xi$ belongs to the unit ball $\mathcal{H}_1(0)$ centered at the origin
and the local strain $\mathcal{S}$ is of the form
\begin{eqnarray}
\mathcal{S}=\left(\frac{u(x+\epsilon\xi)-u(x)}{\epsilon|\xi|}\right)\cdot e.
\label{strainrescaled}
\end{eqnarray}
It is convenient for calculation to express the strain through the directional difference operator $D_{e}^{\epsilon|\xi|}u$ defined by
\begin{eqnarray}
D_{e}^{\epsilon|\xi|}u(x)=\frac{u(x+\epsilon\xi)-u(x)}{\epsilon|\xi|}\hbox{ and }\mathcal{S}=D_{e}^{\epsilon|\xi|} u\cdot e,
\label{strainrescaledderiv}
\end{eqnarray}
with $e=\xi/|\xi|$. One also has
\begin{eqnarray}
D_{-e}^{\epsilon|\xi|}u(x)=\frac{u(x-\epsilon\xi)-u(x)}{\epsilon|\xi|},
\label{strainrescaledderivopposite}
\end{eqnarray}
and the integration by parts formula for functions $u$ in $L_0^2(D;\mathbb{R}^d)$, densities $\phi$ in $L_0^2(D;\mathbb{R})$ and $\psi$ continuous on $\mathcal{H}_1(0)$ given by
\begin{eqnarray}
\int_D\int_{\mathcal{H}_1(0)}(D_e^{\epsilon|\xi|}u\cdot e)\phi(x)\psi(\xi)\,d\xi\,dx=\int_D\int_{\mathcal{H}_1(0)}(u\cdot e) (D_{-e}^{\epsilon|\xi|}\phi)\psi(\xi)\,d\xi\,dx.
\label{intbypts}
\end{eqnarray}
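\noindent The identity \eqref{intbypts} is elementary (it essentially amounts to the change of variables $x\mapsto x-\epsilon\xi$ once $u$ and $\phi$ are extended by zero outside $D$), and it is easy to check numerically. The following one dimensional scalar sketch compares the two sides on a grid; the grid, the shift, and the test functions are assumptions made only for this check.
\begin{verbatim}
# 1D scalar check of the nonlocal integration-by-parts identity (intbypts):
# sum_x (D^s u)(x) phi(x)  ==  sum_x u(x) (D^{-s} phi)(x)
# for functions vanishing near the ends of the interval (extension by zero).
import numpy as np

n, shift = 400, 7                  # shift = s/h, an integer number of cells
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
s = shift*h

bump = lambda c, w: np.exp(-((x - c)/w)**2) * (np.abs(x - c) < 4*w)
u, phi = bump(0.45, 0.05), bump(0.55, 0.05)   # supported well inside (0,1)

Du = (np.roll(u, -shift) - u) / s             # forward difference D_e^s u
Dphi = (np.roll(phi, shift) - phi) / s        # backward difference D_{-e}^s phi

lhs = np.sum(Du*phi)*h
rhs = np.sum(u*Dphi)*h
print(abs(lhs - rhs))                         # agrees to machine precision
\end{verbatim}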
Note further that for $v$ in $C^\infty_0(D;\mathbb{R}^d)$ and $\phi$ in $C^\infty_0(D;\mathbb{R})$ one has
\begin{eqnarray}
\lim_{\epsilon_k\rightarrow 0}D_e^{\epsilon_k|\xi|} v\cdot e=\mathcal{E} v \,e\cdot e &\hbox{ and }& \lim_{\epsilon_k\rightarrow 0}D_e^{\epsilon_k|\xi|} \phi= e\cdot \nabla\phi,\label{grad}
\end{eqnarray}
where the convergence is uniform in $D$.
Taking the first variation of the action integral \eqref{Action} gives the Euler equation in weak form
\begin{eqnarray}
&&\rho\int_0^T\int_{D} u_t^{\epsilon_k}\cdot\delta_t\,dx \,dt\nonumber\\
&&-\frac{1}{\omega_d}\int_0^T\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)f'\left(\epsilon_k|\xi||D_e^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|^2\right)2(D_e^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e) (D_{e}^{\epsilon_k|\xi|}\delta\cdot e)\,d\xi\,dx\,dt\nonumber\\
&&+\int_0^T\int_{D} b\cdot\delta\,dx\,dt=0,\label{stationtxweaker}
\end{eqnarray}
where the test function $\delta=\delta(x,t)=\psi(t)\phi(x)$ is smooth and has compact support in $[0,T]\times D$.
Next we make the change of function $F_s (\mathcal{S})=\frac{1}{s}f(s\mathcal{S}^2)$, so that $F'_s(\mathcal{S})=2\mathcal{S}f'(s\mathcal{S}^2)$; with $s=\epsilon_k|\xi|$ we transform \eqref{stationtxweaker} into
\begin{eqnarray}
&&\rho\int_0^T\int_{D} u_t^{\epsilon_k}\cdot\delta_t\,dx \,dt\nonumber\\
&&-\frac{1}{\omega_d}\int_0^T\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)D_{e}^{\epsilon_k}\delta\cdot e\,d\xi\,dx\,dt\nonumber\\
&&+\int_0^T\int_{D} b\cdot\delta\,dx\,dt=0,\label{stationtxweakerlimitform}
\end{eqnarray}
where
\begin{eqnarray}
F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)=f'\left(\epsilon_k |\xi||D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e|^2\right)2D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e.
\label{deriv}
\end{eqnarray}
For future reference observe that $F_s(r)$ is convex-concave in $r$ with inflection point $\overline{r}_s=\overline{r}/\sqrt{s}$, where $\overline{r}$ is the inflection point of $f(r^2)=F_1(r)$. One also has the estimates
\begin{eqnarray}
&&F_s(r)\geq\frac{1}{s}F_1(\overline{r})\hbox{ for $r\geq\overline{r}_s$, and }\label{lowerestforF}\\
&&\sup_{0\leq r<\infty}|F'_s(r)|\leq\frac{2f'(\overline{r}^2){\overline{r}}}{\sqrt{s}},\label{boundderiv}
\label{estforFprime}
\end{eqnarray}
where the second estimate follows since $F'_s$ attains its maximum at the inflection point $\overline{r}_s$.
We send $\epsilon_k\rightarrow 0$ in \eqref{stationtxweakerlimitform}, applying the weak convergence of Theorem \ref{weaktimederiviative} to the first term, to obtain
\begin{eqnarray}
&&\rho\int_0^T\int_{D} u_t^{0}\cdot\delta_t\,dx \,dt-\lim_{\epsilon_k\rightarrow 0}\frac{1}{\omega_d}\left(\int_0^T\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)D_{e}^{\epsilon_k}\delta\cdot e\,d\xi\,dx\,dt\right)\nonumber\\
&&+\int_0^T\int_{D} b\cdot\delta\,dx\,dt=0.\label{stationtxweakerlimit}
\end{eqnarray}
Theorem \ref{waveequation} follows once we identify the limit of the second term in \eqref{stationtxweakerlimit} for smooth test functions $\phi(x)$ with support contained in $D$.
We state the following convergence theorem.
\begin{theorem}
\label{convgofelastic}
Given any infinitely differentiable test function $\phi$ with compact support in $D$,
\begin{eqnarray}
\lim_{\epsilon_k\rightarrow 0}\frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)(D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx=\int_{D}\mathbb{C}\mathcal{E} u^0:\mathcal{E}\phi\,dx,
\label{limitdelta}
\end{eqnarray}
where $\mathbb{C}\mathcal{E} u^0:\mathcal{E}\phi=\sum_{ijkl=1}^d\mathbb{C}_{ijkl}\mathcal{E}u^0_{ij}\mathcal{E}\phi_{kl}$, $\mathbb{C}\mathcal{E}u^0=\lambda I_d Tr(\mathcal{E}u^0)+2\mu \mathcal{E}u^0$, and $\lambda$ and $\mu$ are given by \eqref{calibrate1}.
\end{theorem}
\noindent Theorem \ref{convgofelastic} is proved in Section \ref{prtheorem44}.
The sequence of integrals on the left hand side of \eqref{limitdelta} is uniformly bounded in time, i.e.,
\begin{eqnarray}
\sup_{\epsilon_k>0}\left\{\sup_{0\leq t\leq T}\left\vert\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)D_{e}^{\epsilon_k}\phi\cdot e\,d\xi\,dx\right\vert\right\}<\infty;
\label{uniboundt}
\end{eqnarray}
this is demonstrated in \eqref{Fprimesecond} of Lemma \ref{estimates} in Section \ref{prtheorem44}. Applying the Lebesgue bounded convergence theorem together with Theorem \ref{convgofelastic}
with $\delta(t,x)=\psi(t)\phi(x)$ delivers the desired result
\begin{eqnarray}
&&\lim_{\epsilon_k\rightarrow 0}\frac{1}{\omega_d}\left(\int_0^T\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)\psi D_{e}^{\epsilon_k}\phi\cdot e\,d\xi\,dx\,dt\right)\nonumber\\
&&=\int_0^T\int_{D}\mathbb{C}\mathcal{E} u^0:\mathcal{E}\phi\,dx\,dt,
\label{limitidentity}
\end{eqnarray}
and we recover the identity
\begin{eqnarray}
&&\rho\int_0^T\int_{D} u_t^{0}(t,x)\cdot\psi_t(t)\phi(x)\,dx \,dt-\int_0^T\int_{D}\psi(t)\mathbb{C}\mathcal{E} u^0(t,x):\mathcal{E}\phi(x)\,dx\,dt
\nonumber\\
&&+\int_0^T\int_{D} b(t,x)\cdot\psi(t)\phi(x)\,dx\,dt=0,
\label{finalweakidentity}
\end{eqnarray}
from which Theorem \ref{waveequation} follows.
\subsubsection{Proof of Theorem \ref{convgofelastic}}
\label{prtheorem44}
We decompose the difference $D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e$ as
\begin{eqnarray}
D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e=(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^- +(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^+
\label{decompose}
\end{eqnarray}
where
\begin{equation}
(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}(x)\cdot e)^-=\left\{\begin{array}{ll}D_e^{\epsilon_k |\xi|}u^{\epsilon_k}(x)\cdot e,&\hbox{if $|D_e^{\epsilon_k |\xi|}u^{\epsilon_k}(x)\cdot e|<\frac{\overline{r}}{\sqrt{\epsilon_k|\xi|}}$}\\
0,& \hbox{otherwise,}
\end{array}\right.
\label{decomposedetails}
\end{equation}
where $\overline{r}$ is the inflection point of the function $F_1(r)=f(r^2)$. Here $(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^+$ is defined so that \eqref{decompose} holds.
We prove Theorem \ref{convgofelastic} using the two identities
described in the Lemmas below.
\begin{lemma}
\label{twolimitsA}
For any $\phi$ in $C^\infty_0(D;\mathbb{R}^d)$
\begin{eqnarray}
&&\lim_{\epsilon_k\rightarrow 0} \frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)D_{e}^{\epsilon_k|\xi|}\phi\cdot e\,d\xi\,dx\nonumber\\
&&-2\lim_{\epsilon_k\rightarrow 0}\frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)f'(0)(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-D_e^{\epsilon_k |\xi|}\phi\cdot e\,d\xi\,dx=0.
\label{functiotolimit}
\end{eqnarray}
\end{lemma}
\begin{lemma}
\label{twolimitsB}
Assume that Hypotheses \ref{remarkthree}, \ref{remark555} and \ref{remark556} hold true and
define the weighted Lebesgue measure $\nu$ by $\nu(S)=\int_S|\xi|J(|\xi|)\,d\xi\,dx$ for any Lebesgue measurable set $S\subset D\times\mathcal{H}_1(0)$.
Passing to subsequences if necessary, $\{(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-\}_{k=1}^\infty$ converges weakly in $L^2(D\times\mathcal{H}_1(0);\nu)$ to $\mathcal{E} u^0 e\cdot e$, i.e.,
\begin{eqnarray}
&&\lim_{\epsilon_k\rightarrow 0}\frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^- \psi\,d\nu\nonumber\\
&&=\frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}(\mathcal{E} u^0 e \cdot e)\psi\,d\nu,
\label{functiotofunction}
\end{eqnarray}
for any test function $\psi(x,\xi)$ in $L^2(D\times\mathcal{H}_1(0);\nu)$.
\end{lemma}
We now apply the Lemmas. Observing that $D_e^{\epsilon_k |\xi|}\phi\cdot e $ converges strongly in $L^2(D\times\mathcal{H}_1(0);\nu)$ to $\mathcal{E}\phi\, e\cdot e$ for test functions $\phi$ in $C^\infty_0(D;\mathbb{R}^d)$, and using the weak $L^2(D\times\mathcal{H}_1(0);\nu)$ convergence of $(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-$, we deduce that
\begin{eqnarray}
&&\lim_{\epsilon_k\rightarrow 0}\frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}f'(0)(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-(D_e^{\epsilon_k |\xi|}\phi\cdot e)\,d\nu\nonumber\\
&&=\frac{1}{\omega_d}\int_{D}\int_{\mathcal{H}_1(0)}f'(0)\,(\mathcal{E} u^0 e\cdot e)(\mathcal{E}\phi e\cdot e)\,d\nu\nonumber\\
&&=\frac{f'(0)}{\omega_d}\sum_{ijkl=1}^d\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)\,e_i e_j e_k e_l\,d\xi\int_{D}\mathcal{E}u^0_{ij}\mathcal{E}\phi_{kl}\,dx.\label{limitproduct}
\end{eqnarray}
Now we show that
\begin{eqnarray}
\frac{f'(0)}{\omega_d}\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)\,e_i e_j e_k e_l\,d\xi=\mathbb{C}_{ijkl}=2\mu\left(\frac{\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}}{2}\right)+\lambda \delta_{ij}\delta_{kl},\label{shear}
\end{eqnarray}
where $\mu$ and $\lambda$ are given by \eqref{calibrate1}. To see this we write
\begin{eqnarray}
\Gamma_{ijkl}(e)=e_ie_je_ke_l,
\label{tensorId}
\end{eqnarray}
to observe that $\Gamma(e)$ is a totally symmetric tensor valued function defined for $e\in S^{d-1}$ with the property
\begin{eqnarray}
\Gamma_{ijkl}(Qe)=Q_{im}e_mQ_{jn}e_nQ_{ko}e_oQ_{lp}e_p=Q_{im}Q_{jn}Q_{ko}Q_{lp}\Gamma_{mnop}(e)
\label{rot}
\end{eqnarray}
for every rotation $Q$ in $SO^d$. Here repeated indices indicate summation.
We write
\begin{eqnarray}
\int_{\mathcal{H}_1(0)}|\xi|J(|\xi|)\,e_i e_j e_k e_l\,d\xi=\int_0^1|\xi|^{d}J(|\xi|)\,d|\xi|\int_{S^{d-1}}\Gamma_{ijkl}(e)\,de
\label{groupavgpre}
\end{eqnarray}
to see that for every $Q$ in $SO^d$
\begin{eqnarray}
Q_{im}Q_{jn}Q_{ko}Q_{lp}\int_{S^{d-1}}\Gamma_{ijkl}(e)\,de=\int_{S^{d-1}}\Gamma_{mnop}(Qe)\,de=\int_{S^{d-1}}\Gamma_{mnop}(e)\,de.
\label{groupavg}
\end{eqnarray}
Therefore we conclude that $\int_{S^{d-1}}\Gamma_{ijkl}(e)\,de$ is an isotropic symmetric $4^{th}$ order tensor and of the form
\begin{eqnarray}
\int_{S^{d-1}}\Gamma_{ijkl}(e)\,de=a\left(\frac{\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}}{2}\right)+b \delta_{ij}\delta_{kl}.
\label{groupavgaft}
\end{eqnarray}
Here we evaluate $a$ by contracting both sides of \eqref{groupavgaft} with a trace free matrix and $b$ by contracting both sides with the $d\times d$ identity, and calculation delivers \eqref{shear}.
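\noindent For the reader's convenience we record one way to carry out this calculation. Contracting \eqref{groupavgaft} with $\delta_{ij}\delta_{kl}$ and with $M_{ij}M_{kl}$, where $M=e_1\otimes e_2+e_2\otimes e_1$ is a convenient trace free choice, gives
\begin{eqnarray}
d\,\omega_d=\int_{S^{d-1}}|e|^4\,de=a\,d+b\,d^2\quad\hbox{ and }\quad 4\int_{S^{d-1}}e_1^2e_2^2\,de=a|M|^2=2a.\nonumber
\end{eqnarray}
Using $\int_{S^{d-1}}e_1^2e_2^2\,de=\omega_d/(d+2)$ (equal to $\pi/4$ for $d=2$ and $4\pi/15$ for $d=3$) we find $a=2\omega_d/(d+2)$ and $b=\omega_d/(d+2)$. Substituting into \eqref{groupavgpre} and \eqref{shear} then gives
\begin{eqnarray}
\mu=\lambda=\frac{f'(0)}{d+2}\int_0^1 r^{d}J(r)\,dr,\nonumber
\end{eqnarray}
which is \eqref{calibrate1} for $d=2,3$.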
Theorem \ref{convgofelastic} now follows immediately from \eqref{limitproduct} and \eqref{functiotolimit}.
To establish Lemmas \ref{twolimitsA} and \ref{twolimitsB} we develop the following estimates for the sequences\\ $(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-$ and $(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^+$. We define the set $K^{+,\epsilon_k}$ by
\begin{eqnarray}
K^{+,\epsilon_k}=\{(x,\xi)\in D\times\mathcal{H}_1(0)\,: (D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^+\not=0\}.
\label{suppset}
\end{eqnarray}
We have the following string of estimates.
\begin{lemma}
We introduce a generic positive constant $0<C<\infty$ independent of $0<\epsilon_k<1$ and $0\leq t\leq T$ and state the following inequalities, which hold for all $0<\epsilon_k<1$, $0\leq t\leq T$, and $C^\infty(D)$ test functions $\phi$ with compact support in $D$.
\label{estimates}
\begin{eqnarray}
&&\int_{K^{+,\epsilon_k}} |\xi|J(|\xi|)\,d\xi\,dx<C\epsilon_k,\label{suppsetupper}\\
&&\left\vert\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)F_{\epsilon_k|\xi|}'((D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^+)(D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx\right\vert<C\sqrt{\epsilon_k}\Vert\mathcal{E}\phi\Vert_{\scriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}},\label{Fprimefirst}\\
&&\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)|(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^-|^2\,d\xi\,dx<C,\label{L2bound}\\
&&\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)|D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|\,d\xi\,dx<C,\hbox{ and}\label{L1bound}\\
&&\left\vert\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e) (D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx\right\vert<C\Vert\mathcal{E} \phi\Vert_{\scriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}}.\label{Fprimesecond}
\end{eqnarray}
\end{lemma}
{\bf Proof.}
For $(x,\xi)\in K^{+,\epsilon_k}$ we apply \eqref{lowerestforF} to get
\begin{eqnarray}
J(|\xi|)\frac{1}{\epsilon_k}F_{1}(\overline{r})=|\xi|J(|\xi|)\frac{1}{\epsilon_k|\xi|}F_{1}(\overline{r})
\leq|\xi|J(|\xi|)F_{\epsilon_k|\xi|}(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)\label{first}
\end{eqnarray}
and in addition, since $|\xi|\leq 1$, we have
\begin{eqnarray}
&&\frac{1}{\epsilon_k}F_{1}(\overline{r}) \int_{K^{+,\epsilon_k}} |\xi|J(|\xi|)\,d\xi\,dx\leq\frac{1}{\epsilon_k}F_{1}(\overline{r}) \int_{K^{+,\epsilon_k}} J(|\xi|)\,d\xi\,dx\nonumber\\
&&\leq\int_{K^{+,\epsilon_k}} |\xi|J(|\xi|)F_{\epsilon_k|\xi|}(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)\,d\xi\,dx\leq\sup_{t\in [0,T]}\sup_{\epsilon_k}PD^{\epsilon_k}(u^{\epsilon_k}),
\label{firstb}
\end{eqnarray}
where Theorem \ref{Gronwall} implies that the rightmost element of the string of inequalities is bounded; \eqref{suppsetupper} follows on noting that the inequality \eqref{firstb} is equivalent to \eqref{suppsetupper}. More generally, since $|\xi|\leq 1$, we may argue as above to conclude that
\begin{eqnarray}
\int_{K^{+,\epsilon_k}} |\xi|^pJ(|\xi|)\,d\xi\,dx<C\epsilon_k
\label{power}
\end{eqnarray}
for $0\leq p$.
We apply \eqref{estforFprime} and \eqref{power} to find
\begin{eqnarray}
&&\left\vert\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)F_{\epsilon_k|\xi|}'((D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^+)(D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx\right\vert\nonumber\\
&&\leq C\frac{2f'(\overline{r}^2)\overline{r}}{\sqrt{\epsilon_k}}\int_{K^{+,\epsilon_k}}\sqrt{|\xi|}J(|\xi|)\,d\xi\,dx \,\Vert\mathcal{E}\phi\Vert_{\scriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}}
\leq\sqrt{\epsilon_k}C\Vert\mathcal{E}\phi\Vert_{\scriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}},
\label{second}
\end{eqnarray}
and \eqref{Fprimefirst} follows.
A basic calculation shows there exists a positive constant $C$ independent of $r$ and $s$ for which
\begin{eqnarray}
r^2\leq C F_s(r), \hbox{ for $r<\frac{\overline{r}}{\sqrt{s}}$},
\label{rsquaredd}
\end{eqnarray}
so
\begin{eqnarray}
|D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e|^2\leq C F_{\epsilon_k|\xi|}(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e), \hbox{ for $|D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|<\frac{\overline{r}}{\sqrt{\epsilon_k |\xi|}}$},
\label{rsquared}
\end{eqnarray}
and
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)|(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^-|^2\,d\xi\,dx
=\int_{D\times\mathcal{H}_1(0)\setminus K^{+,\epsilon_k}}|\xi| J(|\xi|)|D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|^2\,d\xi\,dx\nonumber\\
&&\leq C\int_{D\times\mathcal{H}_1(0)\setminus K^{+,\epsilon_k}}|\xi| J(|\xi|)F_{\epsilon_k |\xi|}(D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)\,d\xi\,dx\leq C\sup_{t\in [0,T]}\sup_{\epsilon_k}PD^{\epsilon_k}(u^{\epsilon_k}),
\label{third}
\end{eqnarray}
where Theorem \ref{Gronwall} implies that the rightmost element of the string of inequalities is bounded, and \eqref{L2bound} follows.
To establish \eqref{L1bound} we apply H\"older's inequality to find that
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)|D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|\,d\xi\,dx\nonumber\\
&&=\int_{K^{+,\epsilon_k}}|\xi| J(|\xi|)|D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|\,d\xi\,dx+\int_{D\times\mathcal{H}_1(0)\setminus K^{+,\epsilon_k}}|\xi| J(|\xi|)|D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|\,d\xi\,dx\nonumber\\
&&\leq \frac{2\Vert u^{\epsilon_k}\Vert_{L^\infty(D;\mathbb{R}^d)}}{\epsilon_k}\int_{K^{+,\epsilon_k}}|\xi|J(|\xi|)\,d\xi\,dx+\nonumber\\
&&+\nu(D\times\mathcal{H}_1(0))^{\frac{1}{2}}\left (\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)|(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^-|^2\,d\xi\,dx\right)^{\frac{1}{2}},
\label{twoterms}
\end{eqnarray}
and \eqref{L1bound} follows from \eqref{suppsetupper} and \eqref{L2bound}.
We establish \eqref{Fprimesecond}. This bound follows from the basic features of the potential function $f$. We recall for subsequent use that $f$ is smooth, positive, and concave, and $f'$ is a decreasing function of its argument. So for $A$ fixed and $0\leq h\leq A^2\overline{r}^2$ we have
\begin{eqnarray}
|f'(h)-f'(0)|\leq |f'(A^2\overline{r}^2)- f'(0)|<2|f'(0)|.
\label{ffact}
\end{eqnarray}
The bound \eqref{Fprimesecond} is now shown to be a consequence of the following upper bound, valid for the parameter $0<A<1$:
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)}|\xi|J(|\xi|)|f'(\epsilon_k |\xi||(D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-|^2)-f'(0)|^2\, d\xi\,dx\nonumber\\
&&\leq \nu(D\times\mathcal{H}_1(0))\times|f'(A^2\overline{r}^2)-f'(0)|^2+C\epsilon_k\frac{4|f'(0)|^2}{A^2}.
\label{usefulbound}
\end{eqnarray}
We postpone the proof of \eqref{usefulbound} until after it is used to establish \eqref{Fprimesecond}. Set $h_{\epsilon_k}=(D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-$ to note
\begin{eqnarray}
F_{\epsilon_k |\xi|}'(h_{\epsilon_k})-2f'(0)h_{\epsilon_k}=(f'(\epsilon_k |\xi| h^2_{\epsilon_k})-f'(0))2h_{\epsilon_k}.
\label{diffeq}
\end{eqnarray}
Applying H\"older's inequality, \eqref{Fprimefirst}, \eqref{L2bound}, \eqref{usefulbound}, and \eqref{diffeq} gives
\begin{eqnarray}
&&\left\vert\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)F_{\epsilon_k|\xi|}'(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e) (D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx
\right\vert\nonumber\\
&&\leq\left\vert\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)F_{\epsilon_k|\xi|}'((D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^+) (D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx
\right\vert\nonumber\\
&&+\left\vert\int_{D\times\mathcal{H}_1(0)}|\xi| J(|\xi|)F_{\epsilon_k|\xi|}'((D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^-) (D_{e}^{\epsilon_k}\phi\cdot e)\,d\xi\,dx
\right\vert\nonumber\\
&&\leq C\sqrt{\epsilon_k}\Vert\mathcal{E}\phi\Vert_{\scriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}}+2\int_{D\times \mathcal{H}_1(0)}|\xi|J(|\xi|)f'(0)(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-(D_e^{\epsilon_k |\xi|}\phi\cdot e)\,d\xi\,dx\nonumber\\
&&+\int_{D\times\mathcal{H}_1(0)}|\xi|J(|\xi|)\left (F_{\epsilon_k|\xi|}'((D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)^-) -2f'(0)(D_e^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-\right )(D_e^{\epsilon_k |\xi|}\phi\cdot e)\,d\xi\,dx\nonumber\\
&&\leq C\left(f'(0)+\sqrt{\epsilon_k}+\left(\nu(D\times\mathcal{H}_1(0))\times|f'(A^2\overline{r}^2)-f'(0)|^2+\epsilon_k\frac{4|f'(0)|^2}{A^2}\right)^{1/2}\right )\Vert\mathcal{E} \phi\Vert_{\scriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}},\nonumber\\
\label{five}
\end{eqnarray}
and \eqref{Fprimesecond} follows.
We establish the inequality \eqref{usefulbound}. Set $h_{\epsilon_k}=(D_{e}^{\epsilon_k |\xi|}u^{\epsilon_k}\cdot e)^-$ and for $0<A<1$ introduce the set
\begin{eqnarray}
K^{+,\epsilon_k}_A=\{(x,\xi)\in D\times\mathcal{H}_1(0)\,: A^2\overline{r}^2\leq \epsilon_k|\xi||h_{\epsilon_k}|^2\}.
\label{suppAset}
\end{eqnarray}
To summarize, $(x,\xi)\in K^{+,\epsilon_k}_A$ implies $A^2\overline{r}^2\leq\epsilon_k|\xi||h_{\epsilon_k}|^2\leq\overline{r}^2$, while $(x,\xi)\not\in K^{+,\epsilon_k}_A$ implies $\epsilon_k|\xi||h_{\epsilon_k}|^2<A^2\overline{r}^2$ and $|f'(\epsilon_k|\xi||h_{\epsilon_k}|^2)-f'(0)|\leq|f'(A^2\overline{r}^2)-f'(0)|$. Inequality \eqref{L2bound} implies
\begin{eqnarray}
&&C>\int_{K^{+,\epsilon_k}_A} |\xi|J(|\xi|) h_{\epsilon_k}^2\,d\xi\,dx\geq\frac{A^2\overline{r}^2}{\epsilon_k}\int_{K^{+,\epsilon_k}_A} J(|\xi|) \,d\xi\,dx\nonumber\\
&&\geq\frac{A^2\overline{r}^2}{\epsilon_k}\int_{K^{+,\epsilon_k}_A} |\xi|J(|\xi|) \,d\xi\,dx,
\label{chebyA}
\end{eqnarray}
the last inequality following since $1\geq|\xi|>0$. Hence
\begin{eqnarray}
\int_{K^{+,\epsilon_k}_A} |\xi|J(|\xi|) \,d\xi\,dx\leq C\frac{\epsilon_k}{A^2\overline{r}^2},
\label{chebyAUpper}
\end{eqnarray}
and it follows that
\begin{eqnarray}
&&\int_{K^{+,\epsilon_k}_A} |\xi|J(|\xi|)|f'(\epsilon_k|\xi||h_{\epsilon_k}|^2)-f'(0)|^2 \,d\xi\,dx\nonumber\\
&&\leq 4|f'(0)|^2\int_{K^{+,\epsilon_k}_A} |\xi|J(|\xi|) \,d\xi\,dx\leq C\epsilon_k\frac{4|f'(0)|^2}{A^2\overline{r}^2}.
\label{kepsplus}
{\varepsilon}nd{eqnarray}
Collecting observations gives
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)\muetminus K^{+,{\varepsilon}psilon_k}_A}|{\bf x}i|J(|{\bf x}i|)|f'({\varepsilon}psilon_k |{\bf x}i||(D_{e}^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^-|^2)-f'(0)|^2\, d{\bf x}i\,dx\nonumber\\
&&\leq \nu(D\times\mathcal{H}_1(0))\times |f'(A^2\overline{r}^2)-f'(0)|^2,
\label{2ndbd}
{\varepsilon}nd{eqnarray}
and {\varepsilon}qref{usefulbound} follows.
We now prove Lemma \ref{twolimitsA}. Write
\begin{eqnarray}
F_{{\varepsilon}psilon_k|{\bf x}i|}'(D_{e}^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)=F_{{\varepsilon}psilon_k|{\bf x}i|}'((D_{e}^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^+)
+F_{{\varepsilon}psilon_k|{\bf x}i|}'((D_{e}^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^-),
\label{plusminus}
{\varepsilon}nd{eqnarray}
and from {\varepsilon}qref{Fprimefirst} it follows that
\begin{eqnarray}
&&\lim_{{\varepsilon}psilon_k\rightarrow 0} \int_{D}\int_{\mathcal{H}_1(0)}|{\bf x}i|J(|{\bf x}i|)F_{{\varepsilon}psilon_k|{\bf x}i|}'(D_{e}^{{{\varepsilon}psilon_k}|{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)D_{e}^{{\varepsilon}psilon_k|{\bf x}i|}\hat{\varphi}hi\chi^\nudot e\,d{\bf x}i\,dx\nonumber\\
&&=\lim_{{\varepsilon}psilon_k\rightarrow 0} \int_{D}\int_{\mathcal{H}_1(0)}|{\bf x}i|J(|{\bf x}i|)F_{{\varepsilon}psilon_k|{\bf x}i|}'((D_{e}^{{{\varepsilon}psilon_k}|{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^-)(D_{e}^{{\varepsilon}psilon_k|{\bf x}i|}\hat{\varphi}hi\chi^\nudot e)\,d{\bf x}i\,dx.
\label{functiotolimitminus}
{\varepsilon}nd{eqnarray}
To finish the proof we identify the limit of the right hand side of {\varepsilon}qref{functiotolimitminus}.
Set $h_{{\varepsilon}psilon_k}=(D_{e}^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^-$ and apply H\'older's inequality to find
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)}|{\bf x}i|J(|{\bf x}i|)\left(F_{{\varepsilon}psilon_k|{\bf x}i|}'(h_{{\varepsilon}psilon_k}) -2f'(0)h_{{\varepsilon}psilon_k}\right)(D_e^{{\varepsilon}psilon_k|{\bf x}i|}\hat{\varphi}hi\chi^\nudot e)\,d{\bf x}i\,dx\nonumber\\
&&\leq C\int_{D\times\mathcal{H}_1(0)}|{\bf x}i|J(|{\bf x}i|)\left|F_{{\varepsilon}psilon_k|{\bf x}i|}'(h_{{\varepsilon}psilon_k}) -2f'(0)h_{{\varepsilon}psilon_k}\right|\,d{\bf x}i\,dx\Vert\mathcal{E}\hat{\varphi}hi\Vert_{\mucriptscriptstyle{L^\infty(D;\mathbb{R}^{d\times d})}}
\label{firstestimate}
{\varepsilon}nd{eqnarray}
We estimate the first factor in {\varepsilon}qref{firstestimate} and
apply {\varepsilon}qref{diffeq}, H\"older's inequality, {\varepsilon}qref{L2bound}, and {\varepsilon}qref{usefulbound} to obtain
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)}|{\bf x}i|J(|{\bf x}i|)\left |F_{{\varepsilon}psilon_k|{\bf x}i|}'(h_{{\varepsilon}psilon_k}) -2f'(0)h_{{\varepsilon}psilon_k}\right |\,d{\bf x}i\,dx\nonumber\\
&&\leq\int_{D\times\mathcal{H}_1(0)}|{\bf x}i|J(|{\bf x}i|)\left |f'({\varepsilon}psilon_k |{\bf x}i||h_{{\varepsilon}psilon_k}|^2) -2f'(0)\right |\left | h_{{\varepsilon}psilon_k}\right |\,d{\bf x}i\,dx
\nonumber\\
&&\leq C\left(\nu(D\times\mathcal{H}_1(0))\times |f'(A^2\overline{r}^2)-f'(0)|^2+{\varepsilon}psilon_k\frac{4|f'(0)|^2}{A^2\overline{r}^2}\right)^{1/2}.
\label{usefulLemmaA}
{\varepsilon}nd{eqnarray}
Lemma \ref{twolimitsA} follows on applying the bound {\varepsilon}qref{usefulLemmaA} to {\varepsilon}qref {firstestimate} and passing to the ${\varepsilon}psilon_k$ zero limit and noting that the choice of $0<A<1$ is arbitrary.
We now prove Lemma \ref{twolimitsB}.
For $\tau>0$ sufficiently small define $K^\tau\muubset D$ by $K^{\tau}=\{x\in D:\,dist(x,J_{u^0(t)})<\tau\}$. From Hypothesis \ref{remark555} the collection of centroids in $CPZ^\delta(\overline{r},1/2,1/2,t)$ lie inside $K^\tau$ for $\delta$ sufficiently small. (Otherwise the components of the collection $CPZ^\delta(\overline{r},1/2,1/2,t)$ would concentrate about a component of $CPZ^0(\overline{r},1/2,1/2,t)$ outside $K^\tau$; contradicting the hypothesis that $J_{u^0(t)}=CPZ^0(\overline{r},1/2,1/2,t)$). The collection of all points belonging to unstable neighborhoods associated with centroids in is easily seen to be contained in the slightly larger set $K^{\tau,\delta}=\{x\in \,D; dist(x,K^\tau)<\delta\}$. From Hypothesis \ref{remark556} we may choose test functions $\hat{\varphi}hi\in C_0^1(D\muetminus K^{\tau,2\delta})$ such that for ${\varepsilon}psilon_k$ sufficiently small
\begin{eqnarray}
(D_e^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^-\hat{\varphi}hi=(D_e^{{\varepsilon}psilon_k|{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)\hat{\varphi}hi.
\label{identical}
{\varepsilon}nd{eqnarray}
We form the test functions $\hat{\varphi}hi(x)\hat{\varphi}si({\bf x}i)$, with $\hat{\varphi}hi\in C_0^1(D\muetminus K^{\tau,2\delta})$ and
$\hat{\varphi}si\in C(\mathcal{H}_1(0))$. From {\varepsilon}qref{L2bound} we may pass to a subsequence to find that $(D_e^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}\chi^\nudot e)^-$ weakly converges to the limit
$g(x,{\bf x}i)$ in $L^2(D\times\mathcal{H}_1(0);\nu)$.
With this in mind we write
\begin{eqnarray}
&&\int_{D\times\mathcal{H}_1(0)} g(x,{\bf x}i)\hat{\varphi}hi(x)\hat{\varphi}si({\bf x}i)\,d\nu=\int_{D\times\mathcal{H}_1(0)} g(x,{\bf x}i)\hat{\varphi}hi(x)\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx\nonumber\\
&&=\lim_{{\varepsilon}psilon_k\rightarrow 0}\int_{D\times\mathcal{H}_1(0)}(D_e^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}(x)\chi^\nudot e)^- \hat{\varphi}hi(x)\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx\nonumber\\
&&=\lim_{{\varepsilon}psilon_k\rightarrow 0}\int_{D\times\mathcal{H}_1(0)}(D_e^{{\varepsilon}psilon_k |{\bf x}i|}u^{{\varepsilon}psilon_k}(x)\chi^\nudot e) \hat{\varphi}hi(x)\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx\nonumber\\
&&=\lim_{{\varepsilon}psilon_k\rightarrow 0}\int_{D\times\mathcal{H}_1(0)}(u^{{\varepsilon}psilon_k}(x)\chi^\nudot e)(D_{-e}^{{\varepsilon}psilon_k |{\bf x}i|}\hat{\varphi}hi(x))\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx,\label{middlepart}
{\varepsilon}nd{eqnarray}
where we have integrated by parts using {\varepsilon}qref{intbypts} in the last line of {\varepsilon}qref{middlepart}.
Noting that $D_{-e}^{{\varepsilon}psilon_k |{\bf x}i|}\hat{\varphi}hi(x)$ converges uniformly to $-e\chi^\nudot\nabla\hat{\varphi}hi(x)$ and from the strong convergence of $u^{{\varepsilon}psilon_k}$ to $u^0$ in $L^2$ we obtain
\begin{eqnarray}
&&=\lim_{{\varepsilon}psilon_k\rightarrow 0}\int_{D\times\mathcal{H}_1(0)}(u^{{\varepsilon}psilon_k}(x)\chi^\nudot e)(D_{-e}^{{\varepsilon}psilon_k |{\bf x}i|}\hat{\varphi}hi(x))\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx\nonumber\\
&&=-\int_{D\times\mathcal{H}_1(0)}(u^{0}(x)\chi^\nudot e) (e\chi^\nudot\nabla\hat{\varphi}hi(x))\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx\nonumber\\
&&=-\muum_{j,k=1}^d\int_{D}u^0_j(x)\,\hat{\varphi}artial_{x_k}\left(\hat{\varphi}hi(x) \int_{\mathcal{H}_1(0)} e_j e_k\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\right)\,dx\nonumber\\
&&=\int_{D}\mathcal{E} u^0_{jk}(x)\left(\hat{\varphi}hi(x) \int_{\mathcal{H}_1(0)} e_j e_k\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\right)\,dx\nonumber\\
&&=\int_{D\times{\mathcal{H}_1(0)} }(\mathcal{E} u^0(x) e\chi^\nudot e)\hat{\varphi}hi(x)\hat{\varphi}si({\bf x}i)|{\bf x}i|J(|{\bf x}i|)\,d{\bf x}i\,dx,
\label{identifyweakl2}
{\varepsilon}nd{eqnarray}
where we have made use of $E u^0\lfloor D\muetminus K^{\tau,2\delta}=\mathcal{E} u^0\,dx$ on the third line of {\varepsilon}qref{identifyweakl2}.
From the density of the span of the test functions we conclude that $g(x,{\bf x}i)=\mathcal{E} u^0(x)e\chi^\nudot e$ almost everywhere on $D\muetminus K^{\tau,2\delta}\times\mathcal{H}_1(0)$. Since $K^{\tau,2\delta}$ can be chosen to have arbitrarily small measure with vanishing $\tau$ and $\delta$ we conclude that $g(x,{\bf x}i)=\mathcal{E} u^0(x) e\chi^\nudot e$ on $D\times\mathcal{H}_1(0)$ a.e. and Lemma \ref{twolimitsB} is proved.
\subsubsection{Proof of Theorems \ref{epsiloncontropprocesszone} and \ref{bondunstable}}
\label{proofbondunstable}
We begin with the proof of the upper bound on the size of the process zone given by Theorem \ref{epsiloncontropprocesszone}.
The set $K^{+,\epsilon_k}_\alpha$ is defined by
\begin{eqnarray}
K^{+,\epsilon_k}_\alpha=\{(x,\xi)\in D\times\mathcal{H}_1(0);\,|D_e^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e|>\underline{k}|\epsilon_k\xi|^{\alpha-1}\},
\label{equivdescralpha}
\end{eqnarray}
where $0<\underline{k}\leq\overline{r}$ and $1/2\leq \alpha<1$. We set $\beta=2\alpha-1$ to see
for $(x,\xi)\in K^{+,\epsilon_k}_\alpha$ that
\begin{eqnarray}
\epsilon_k|\xi||D_e^{\epsilon_k|\xi|}u^{\epsilon_k}(x)\cdot e|^2>\underline{k}^2\epsilon_k^\beta|\xi|^\beta,
\label{inequalbasic}
\end{eqnarray}
and recall that the potential function $f(r)=F_1(r)$ is increasing to get
\begin{eqnarray}
J(|\xi|)\frac{1}{\epsilon_k}F_{1}(\underline{k}^2\epsilon_k^\beta|\xi|^\beta)=|\xi|J(|\xi|)\frac{1}{\epsilon_k|\xi|}F_{1}(\underline{k}^2\epsilon_k^\beta|\xi|^\beta)
\leq|\xi|J(|\xi|)F_{\epsilon_k|\xi|}(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e),\label{firstalpha}
\end{eqnarray}
and in addition, since $|\xi|^{1-\beta}\leq 1$, we have
\begin{eqnarray}
&& \int_{K^{+,\epsilon_k}_\alpha}\frac{1}{\epsilon_k}F_{1}(\underline{k}^2\epsilon_k^\beta|\xi|^\beta)|\xi|^{1-\beta}J(|\xi|)\,d\xi\,dx\leq \int_{K^{+,\epsilon_k}_\alpha}\frac{1}{\epsilon_k}F_{1}(\underline{k}^2\epsilon_k^\beta|\xi|^\beta)J(|\xi|)\,d\xi\,dx\nonumber\\
&&\leq\int_{K^{+,\epsilon_k}_\alpha} |\xi|J(|\xi|)F_{\epsilon_k|\xi|}(D_{e}^{\epsilon_k|\xi|}u^{\epsilon_k}\cdot e)\,d\xi\,dx\leq\sup_{t\in [0,T]}\sup_{\epsilon_k}PD^{\epsilon_k}(u^{\epsilon_k}).
\label{firstbalpha}
\end{eqnarray}
Taylor approximation gives
\begin{eqnarray}
\frac{1}{\epsilon_k}F_1(\underline{k}^2\epsilon_k^\beta|\xi|^\beta)=\epsilon_k^{\beta-1}\underline{k}^2(f'(0)+o(\epsilon_k^\beta))|\xi|^\beta.
\label{taylor}
\end{eqnarray}
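For clarity we record the elementary expansion behind \eqref{taylor}: recalling that the cohesive potential satisfies $f(0)=0$ and is smooth near the origin, we have $f(h)=f'(0)h+O(h^2)$ for small $h$; applying this with $h=\underline{k}^2\epsilon_k^\beta|\xi|^\beta\leq\underline{k}^2\epsilon_k^\beta$ and dividing by $\epsilon_k$ yields \eqref{taylor}, uniformly in $\xi\in\mathcal{H}_1(0)$.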
Substitution of \eqref{taylor} into the left hand side of \eqref{firstbalpha} gives
\begin{eqnarray}
\int_{K^{+,\epsilon_k}_\alpha} |\xi|J(|\xi|)\,d\xi\,dx\leq
\frac{\epsilon_k^{1-\beta}}{\underline{k}^2(f'(0)+o(\epsilon_k^\beta))}\left(\sup_{t\in [0,T]}\sup_{\epsilon_k}PD^{\epsilon_k}(u^{\epsilon_k})\right).
\label{semifinal}
\end{eqnarray}
Introduce the characteristic function $\chi_\alpha^{+,\epsilon_k}(x,\xi)$ defined on $D\times\mathcal{H}_1(0)$, taking the value $1$ for $(x,\xi)$ in $K_\alpha^{+,\epsilon_k}$ and zero otherwise.
Observe that
\begin{eqnarray}
\frac{1}{m}\int_{\mathcal{H}_1(0)}\chi_\alpha^{+,\epsilon_k}(x,\xi)|\xi|J(|\xi|)\,d\xi=P(\{y\in\mathcal{H}_{\epsilon_k}(x);\,|S^{\epsilon_k}(y,x)|>\underline{k}|y-x|^{\alpha-1}\}),
\label{equivlence}
\end{eqnarray}
so
\begin{eqnarray}
&&\int_DP(\{y\in\mathcal{H}_{\epsilon_k}(x);\,|S^{\epsilon_k}(y,x)|>\underline{k}|y-x|^{\alpha-1}\})\,dx\nonumber\\
&&\leq \frac{\epsilon_k^{1-\beta}}{m\times \underline{k}^2(f'(0)+o(\epsilon_k^\beta))}\left(\sup_{t\in [0,T]}\sup_{\epsilon_k}PD^{\epsilon_k}(u^{\epsilon_k})\right).
\label{semifinaler}
\end{eqnarray}
For $0<\overline{\theta}\leq 1$, Tchebyshev's inequality delivers
\begin{eqnarray}
&& \mathcal{L}^d\left(\{x\in D;\, P\left(\{y\in\mathcal{H}_{\epsilon_k}(x);\,|S^{\epsilon_k}(y,x)|>\underline{k}|y-x|^{\alpha-1}\}\right)>\overline{\theta}\}\right)\nonumber\\
&&\leq \frac{\epsilon_k^{1-\beta}}{m\times \overline{\theta}\,\underline{k}^2(f'(0)+o(\epsilon_k^\beta))}\left(\sup_{t\in [0,T]}\sup_{\epsilon_k}PD^{\epsilon_k}(u^{\epsilon_k})\right),
\label{probnewvariableboundchebyshcv}
\end{eqnarray}
and Theorem \ref{epsiloncontropprocesszone} follows on applying \eqref{gineq} and \eqref{upperbound}.
Choose $\epsilon_k=\frac{1}{2^k}$; then Theorem \ref{epsiloncontropprocesszone} implies $\mathcal{L}^d(PZ^{\epsilon_k}(\underline{k},\alpha,\overline{\theta},t))<C(\frac{1}{2^k})^{1-\beta}$, where $C$ is independent of $t\in [0,T]$.
The collection of process zones for $\epsilon_k<\delta$ is written as
\begin{eqnarray}
CPZ^{\delta}(\underline{k},\alpha,\overline{\theta},t)=\bigcup_{\epsilon_k<\delta}PZ^{\epsilon_k}(\underline{k},\alpha,\overline{\theta},t),
\label{unstablecollection}
\end{eqnarray}
and from the geometric series we find
\begin{eqnarray}
\mathcal{L}^d\left(CPZ^{\delta}(\underline{k},\alpha,\overline{\theta},t)\right)<C{\delta}^{1-\beta}.
\label{boundcdelta}
\end{eqnarray}
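In more detail, by countable subadditivity of Lebesgue measure and the bound of the previous paragraph,
\begin{eqnarray*}
\mathcal{L}^d\left(CPZ^{\delta}(\underline{k},\alpha,\overline{\theta},t)\right)\leq\sum_{k:\,2^{-k}<\delta}\mathcal{L}^d\left(PZ^{\epsilon_k}(\underline{k},\alpha,\overline{\theta},t)\right)\leq C\sum_{k:\,2^{-k}<\delta}\left(2^{-k}\right)^{1-\beta}\leq\frac{C}{1-2^{-(1-\beta)}}\,\delta^{1-\beta},
\end{eqnarray*}
where the geometric series converges since $1-\beta>0$.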
Theorem \ref{bondunstable} follows on noting further that $CPZ^{0}(\underline{k},\alpha,\overline{\theta},t)\subset CPZ^{\delta}(\underline{k},\alpha,\overline{\theta},t)\subset CPZ^{\delta'}(\underline{k},\alpha,\overline{\theta},t)$ for $0<\delta<\delta'$.
\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}\setcounter{proposition}{0}\setcounter{remark}{0}
\setcounter{definition}{0}\setcounter{hypothesis}{0}
\section{Dynamics and limits of energies that $\Gamma$-converge to Griffith fracture energies}
\label{Ninth}
In this final section we collect ideas and illustrate how the approach presented in the earlier sections can be used to examine limits of dynamics associated with other energies that $\Gamma$-converge to the Griffith fracture energy. As an example we consider the phase field approach based on the Ambrosio--Tortorelli approximation for dynamic brittle fracture calculations \cite{BourdinLarsenRichardson}. This model is well posed in the sense that existence of solutions can be shown \cite{LarsenOrtnerSuli}.
To illustrate the ideas we focus on anti-plane shear. The model is described by an out of plane elastic displacement $u^\epsilon(x,t)$ and a phase field $0\leq v^\epsilon(x,t)\leq 1$ defined for points $x$ belonging to the domain $D\subset \mathbb{R}^2$. The potential energy associated with the cracking body is given by the Ambrosio--Tortorelli potential
\begin{eqnarray}
P^\epsilon(u^\epsilon(t),v^\epsilon(t))=E^\epsilon(u^\epsilon(t),v^\epsilon(t))+H^\epsilon(v^\epsilon(t)),
\label{ab}
\end{eqnarray}
with
\begin{eqnarray}
E^\epsilon(u^\epsilon(t),v^\epsilon(t))=\frac{\mu}{2}\int_D a^\epsilon(t)|\nabla u^\epsilon(t)|^2\,dx
\label{detailsforab}
\end{eqnarray}
and
\begin{eqnarray}
H^\epsilon(v^\epsilon(t))=\frac{\mathcal{G}_c}{2}\int_D \frac{(1-v^\epsilon(t))^2}{2\epsilon}+2\epsilon|\nabla v^\epsilon(t)|^2\,dx.
\label{moredetailsforab}
\end{eqnarray}
Here $a^\epsilon(t)=a^\epsilon(x,t)=(v^\epsilon(x,t))^2+\eta^\epsilon$, with $0<\eta^\epsilon\ll\epsilon$. In this model the phase field $v^\epsilon$ provides an approximate description of a freely propagating crack, taking the value $1$ for points $(x,t)$ away from the crack and zero on the crack.
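As a purely illustrative aside (not part of the analysis), the potential energy \eqref{ab}--\eqref{moredetailsforab} is straightforward to evaluate numerically. The following Python sketch approximates $P^\epsilon(u^\epsilon,v^\epsilon)$ on a uniform grid by finite differences and a simple Riemann sum; the grid functions \texttt{u}, \texttt{v} and the parameter values are hypothetical and are not taken from this paper.
\begin{verbatim}
import numpy as np

def at_energy(u, v, h, mu=1.0, Gc=1.0, eps=0.05, eta=1e-6):
    # Discrete Ambrosio-Tortorelli energy P^eps(u, v) on a uniform grid
    # with spacing h; u and v are 2D arrays sampling the displacement
    # and the phase field (illustrative sketch only).
    ux, uy = np.gradient(u, h)      # finite-difference gradient of u
    vx, vy = np.gradient(v, h)      # finite-difference gradient of v
    a = v**2 + eta                  # degraded stiffness a^eps = (v^eps)^2 + eta^eps
    elastic = 0.5 * mu * np.sum(a * (ux**2 + uy**2)) * h**2
    surface = 0.5 * Gc * np.sum((1.0 - v)**2 / (2.0 * eps)
                                + 2.0 * eps * (vx**2 + vy**2)) * h**2
    return elastic + surface
\end{verbatim}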
To formulate the problem we introduce the space $H^1_0(D)$, defined to be the displacements $u$ in $H^1(D)$ with zero Dirichlet data on $\partial D$, and the set of functions $H^1_1(D)$, defined to be the functions $v$ in $H^1(D)$ for which $v=1$ on $\partial D$.
The total energy is given by
\begin{eqnarray}
\mathcal{F}(t;u^\epsilon,\partial_t u^\epsilon,v^\epsilon)=\frac{1}{2}\int_D|\partial_t u^\epsilon|^2\,dx+P^\epsilon(u^\epsilon,v^\epsilon)-\int_D f(t)u^\epsilon\,dx.
\label{eergytotala}
\end{eqnarray}
The body force $f(x,t)$ is prescribed and the displacement--phase field pair $(u^\epsilon,v^\epsilon)$ is a solution of the initial boundary value problem \cite{LarsenOrtnerSuli} given by
\begin{eqnarray}
\partial_{tt}^2 u^\epsilon-{\rm{div}}\left(a^\epsilon(t)\nabla(u^\epsilon-\partial_t u^\epsilon)\right)=f(t),&& \hbox{ in $D$},\nonumber\\
u^\epsilon=0 \hbox{ and } v^\epsilon=1, &&\hbox{ on $\partial D$},\label{ibvp}
\end{eqnarray}
for $t\in (0,T]$, with initial conditions $u^\epsilon(0)=u^\epsilon_0$ and $\partial_t u^\epsilon(0)=u^\epsilon_1\in H^1_0(D)$, satisfying the crack stability condition
\begin{eqnarray}
P^\epsilon(u^\epsilon(t),v^\epsilon(t))\leq\inf\left\{P^\epsilon(u^\epsilon(t),v):\, v\in H^1_1(D),\,\,v\leq v^\epsilon(t)\right\}
\label{stability}
\end{eqnarray}
and energy balance
\begin{eqnarray}
\mathcal{F}(t;u^\epsilon,\partial_t u^\epsilon,v^\epsilon)=\mathcal{F}(0;u^\epsilon_0,u^\epsilon_1,v^\epsilon_0)-\int_0^t\int_D\,a^\epsilon|\nabla\partial_\tau u^\epsilon|^2\,dx\,d\tau-\int_0^t\int_D\,\partial_\tau f\, u^\epsilon\,dx\,d\tau,
\label{eergytotalbalance}
\end{eqnarray}
for every time $0\leq t\leq T$. Finally, the initial condition for the phase field is chosen such that $v^\epsilon(0)=v^\epsilon_0$, where $v^\epsilon_0\in H^1_1(D)$ satisfies the unilateral minimality condition \eqref{stability}. In this formulation the pair $u^\epsilon(t)$, $v^\epsilon(t)$ provides a regularized model for free crack propagation. Here the phase field tracks the evolution of the crack, with $v^\epsilon=1$ away from the crack and $v^\epsilon=0$ in the crack set.
For a body force $f(x,t)$ in $C^1([0,T];L^2(D))$ it is shown in \cite{LarsenOrtnerSuli} that there exists at least one trajectory
$(u^\epsilon,v^\epsilon)\in \left(H^2((0,T);L^2(D))\cap W^{1,\infty}((0,T); H^1_0(D))\right)\times W^{1,\infty}((0,T); H^1_1(D))$ satisfying \eqref{ibvp} in the weak sense, i.e.,
\begin{eqnarray}
\int_D\partial_{tt}^2 u^\epsilon\,\varphi\,dx+\int_D a^\epsilon(t)\nabla(u^\epsilon-\partial_t u^\epsilon)\cdot\nabla\varphi\,dx=\int_D f(t)\varphi\,dx,
\label{ibvpweak}
\end{eqnarray}
for all $\varphi$ in $H^1_0(D)$ and almost every $t$ in $(0,T]$, with $u^\epsilon(0)=u^\epsilon_0$, $\partial_t u^\epsilon(0)=u^\epsilon_1$, $v^\epsilon(0)=v^\epsilon_0$,
and such that \eqref{stability} and \eqref{eergytotalbalance} are satisfied for all times $0<t\leq T$. We have formulated the problem in a simplified setting to illustrate the ideas and note that this type of evolution is shown to exist for more general boundary conditions and for displacements in two and three dimensions, see \cite{LarsenOrtnerSuli}. For future reference we call the pair $(u^\epsilon(t),v^\epsilon(t))$ a phase field fracture evolution.
In what follows we pass to the $\epsilon\rightarrow 0$ limit in the phase field evolutions to show existence of a limiting evolution with bounded linear elastic fracture energy. Here the limit evolution $u^0(t)$ is shown to take values in the space of special functions of bounded variation $SBV(D)$. This space is well known and can be thought of as the specialization of the space $SBD$, introduced earlier in Section \ref{sec5}, appropriate for the scalar problem treated here. For a treatment of $SBV$ and its relation to fracture the reader is referred to \cite{AmbrosioBrades}.
Applying the techniques developed in previous sections, it is possible to state and prove the following theorem on the dynamics associated with the phase field evolutions $(u^\epsilon,v^\epsilon)$ in the limit as $\epsilon\rightarrow 0$.
\begin{theorem}{\bf Sharp interface limit of phase field fracture evolutions.}\\
\label{epszerolimitofLOS}
Suppose for every $\epsilon>0$ that: (a) the potential energy of the initial data $(u_0^\epsilon,v_0^\epsilon)$ is uniformly bounded,
i.e., $\sup_{\epsilon>0}\left\{P^\epsilon(u^\epsilon_0,v_0^\epsilon)\right\}<\infty$, and (b) $\Vert u^\epsilon(t)\Vert_{L^\infty(D)}<C$ for all $\epsilon>0$ and $0\leq t\leq T$. Then, on passing to a subsequence if necessary in the phase field fracture evolutions $(u^\epsilon,v^\epsilon)$, there exists an anti-plane displacement field $u^0(x,t)$ in $SBV(D)$ for all $t\in [0,T]$ such that $u^0\in C([0,T];L^2(D))$ and
\begin{eqnarray}
\label{limsbv}
\lim_{\epsilon\rightarrow 0}\max_{0\leq t\leq T}\{\Vert u^\epsilon(t)-u^0(t)\Vert_{L^2(D)}\}=0,
\end{eqnarray}
with
\begin{eqnarray}
\label{lefm}
GF(u^0(t))=\frac{\mu}{2}\int_D|\nabla u^0(t)|^2\,dx+\mathcal{G}_c\,\mathcal{H}^1(J_{u^0(t)})<C
\end{eqnarray}
for $0\leq t\leq T$.
\end{theorem}
For anti-plane shear deformations the energy $GF$ is a special form of the energy $LEFM$ introduced in Section \ref{sec5}.
The strategy we will use for proving Theorem \ref{epszerolimitofLOS} is the same as the one developed in the proofs of Theorems \ref{LimitflowThm} and \ref{LEFMMThm}. This strategy can be made systematic and applied to evolutions associated with potential energies that $\Gamma$-converge to the Griffith fracture energy. It consists of four component parts:
\begin{enumerate}
\item Constructing upper bounds on the kinetic and potential energy of the evolutions that hold uniformly for $0\leq t\leq T$ and $0<\epsilon$.
\item Showing compactness of the evolution $u^\epsilon(t)$ in $L^2(D)$ for each time $0\leq t\leq T$.
\item Showing that limit points of the sequence $\{u^\epsilon(t)\}$ belong to $SBD(D)$ (or $SBV(D)$ as appropriate) for each time $0\leq t\leq T$.
\item $\Gamma$-convergence of the potential energies to the Griffith energy $LEFM$ (or $GF$ as appropriate).
\end{enumerate}
Assume first that Parts 1 through 4 hold for the phase field fracture evolution with potential energies $P^\epsilon$ given by \eqref{ab}. These are used as follows to prove Theorem \ref{epszerolimitofLOS}.
Part 1 is applied as in \eqref{lip} to show that the sequence of evolutions $u^\epsilon(t)$ is uniformly Lipschitz continuous in time with respect to the $L^2(D)$ norm, i.e.,
\begin{eqnarray}
\label{lipschitzat}
\Vert u^\epsilon(t_1)-u^\epsilon(t_2)\Vert_{L^2(D)}\leq K |t_1-t_2|
\end{eqnarray}
for $K$ independent of $\epsilon$ and for any $0\leq t_1<t_2\leq T$.
Part 2, together with \eqref{lipschitzat} and the Ascoli theorem, implies the existence of a subsequence and a limit $u^0(x,t)\in C([0,T];L^2(D))$ such that the convergence \eqref{limsbv} holds.
Part 3
shows that $u^0(x,t)$ belongs to $SBV(D)$ for every time in $[0,T]$. Part 4 together with Part 1 and the lower bound property of $\Gamma$-convergence shows that \eqref{lefm} holds, and Theorem \ref{epszerolimitofLOS} follows.
We now establish Parts 1 through 4 for the dynamic phase field fracture evolution introduced in \cite{LarsenOrtnerSuli}. To obtain a uniform bound on the kinetic and potential energy, differentiate both sides of the energy balance \eqref{eergytotalbalance} with respect to time to get
\begin{eqnarray}
\label{gronagain}
&&\frac{d}{dt}\left(\frac{1}{2}\int_D|\partial_t u^\epsilon(t)|^2\,dx+P^\epsilon(u^\epsilon(t),v^\epsilon(t))\right)-\frac{d}{dt}\int_D\,f(t) u^\epsilon\,dx\\
&&=-\int_D a^\epsilon|\nabla \partial_t u^\epsilon|^2\,dx-\int_D\partial_t f\, u^\epsilon\,dx.\nonumber
\end{eqnarray}
Manipulation and application of the identity $f\partial_t u^\epsilon=\partial_t(f u^\epsilon)-\partial_t f\, u^\epsilon$ to \eqref{gronagain} delivers the inequality
\begin{eqnarray}
\label{gronagainagain}
&&\frac{d}{dt}\left(\frac{1}{2}\int_D|\partial_t u^\epsilon(t)|^2\,dx+P^\epsilon(u^\epsilon(t),v^\epsilon(t))\right)\\
&&\leq \int_D f \partial_t u^\epsilon\,dx.\nonumber
\end{eqnarray}
Now set
\begin{eqnarray}
W^\epsilon(t)=\left(\frac{1}{2}\int_D|\partial_t u^\epsilon(t)|^2\,dx+P^\epsilon(u^\epsilon(t),v^\epsilon(t))\right)+1
\label{westat}
\end{eqnarray}
and proceed as in Section \ref{GKP} to get
\begin{eqnarray}
\left(\frac{1}{2}\int_D|\partial_t u^\epsilon(t)|^2\,dx+P^\epsilon(u^\epsilon(t),v^\epsilon(t))\right) \leq \left(\int_0^t\Vert f(\tau)\Vert_{L^2(D)}\,d\tau +\sqrt{W^\epsilon(0)}\right )^2-1.
\label{gineqat}
\end{eqnarray}
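For the reader's convenience we sketch the Gronwall-type argument behind \eqref{gineqat}. Since $P^\epsilon\geq 0$ we have $W^\epsilon(t)\geq 1$, and \eqref{gronagainagain} together with the Cauchy--Schwarz inequality gives
\begin{eqnarray*}
\frac{d}{dt}W^\epsilon(t)\leq\int_D f\,\partial_t u^\epsilon\,dx\leq\Vert f(t)\Vert_{L^2(D)}\Vert \partial_t u^\epsilon(t)\Vert_{L^2(D)}\leq \Vert f(t)\Vert_{L^2(D)}\sqrt{2W^\epsilon(t)}\leq 2\Vert f(t)\Vert_{L^2(D)}\sqrt{W^\epsilon(t)},
\end{eqnarray*}
so that $\frac{d}{dt}\sqrt{W^\epsilon(t)}\leq \Vert f(t)\Vert_{L^2(D)}$. Integrating in time yields $\sqrt{W^\epsilon(t)}\leq \sqrt{W^\epsilon(0)}+\int_0^t\Vert f(\tau)\Vert_{L^2(D)}\,d\tau$, and squaring and subtracting $1$ gives \eqref{gineqat}.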
Part 1 easily follows from \eqref{gineqat} noting that $\sup_{\epsilon>0}\{W^{\epsilon}(0)\}<\infty$ is a consequence of hypothesis (a) of Theorem \ref{epszerolimitofLOS}. For this example Parts 2 and 3 follow from the uniform bound of Part 1, hypothesis (b) of Theorem \ref{epszerolimitofLOS}, and the well known compactness result for the Ambrosio--Tortorelli functional, see for example the remark following Theorem 2.3 of \cite{Giacomini}. Part 4 is given by the Ambrosio--Tortorelli result \cite{AT} as expressed in Theorem 2.3 of \cite{Giacomini}.
\section{Conclusions}
The cohesive model for dynamic brittle fracture evolution presented in this paper does not require extra constitutive laws such as a kinetic relation between crack driving force and crack velocity or a crack branching condition. Instead the evolution of the process zone together with the fracture set is governed by one equation consistent with Newton's second law, given by \eqref{eqofmotion}. This is a characteristic feature of peridynamic models \cite{Silling1}, \cite{States}. This evolution is intrinsic to the formulation and encoded into the nonlocal cohesive constitutive law. Crack nucleation criteria, although not necessary to the formulation, follow from the dynamics and are recovered here by viewing nucleation as a dynamic instability; this is similar in spirit to \cite{SillingWecknerBobaru} and the work of \cite{BhattacharyaDyal} for phase transitions. Theorem \ref{epsiloncontropprocesszone} explicitly shows how the size of the process zone is controlled by the radius of the horizon. This analysis shows that {\em the horizon size $\epsilon$ for cohesive dynamics is a modeling parameter} that can be calibrated according to the size of the process zone obtained from experimental measurements. The process zone is seen to concentrate on a set of zero volume as the length scale of non-locality characterized by the radius of the horizon $\epsilon$ goes to zero, see Theorem \ref{bondunstable}. In this limit the dynamics is shown (under suitable hypotheses) to coincide with the simultaneous evolution of a displacement--crack set pair. Here the displacement satisfies the elastic wave equation for points in space-time away from the crack set. The shear and Lam\'e moduli together with the energy release rate are described in terms of moments of the nonlocal potentials.
\section{Acknowledgements}
\label{Acknowlegements}
The author would like to thank Stewart Silling, Richard Lehoucq, and Florin Bobaru for stimulating and fruitful discussions.
This research is supported by NSF grant DMS-1211066, AFOSR grant FA9550-05-0008, and NSF EPSCOR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
\begin{thebibliography}{99}
\bibitem{Agwai}
A. Agwai, I Guven, and E. Madenci. {{\varepsilon}m Predicting crack propagation with peridynamics: a comparative study.} Int. J. Fract. {\bf 171} (2011) 65--78.
\bibitem{Alicandro}
R. Alicandro, M. Focardi, M. S. Gelli. {{\varepsilon}m Finite-difference approximation of energies in fracture mechanics}. Annali della Scuola Normale Superiore di Pisa {\bf 23} (2000) 671--709.
\bibitem{AmbrosioCosicaDalmaso}
L. Ambrosio, A. Coscia, and G. Dal Maso. {{\varepsilon}m Fine properties of functions with bounded deformation.} Arch. Rational Mech. Anal. {\bf 139} (1997) 201--238.
\bibitem{AmbrosioBrades}
L. Ambrosio and A. Brades. {{\varepsilon}m Energies in SBV and variational models in fracture mechanics.} Homogenization and Applications to Materials Science {\bf 9}. D. Cioranescu, A. Damlamian, P. Donato eds, Gakuto, Gakkotosho, Tokyo, Japan (1997) 1--22.
\bibitem{AT}
L. Ambrosio and V.M. Tortorelli. {{\varepsilon}m Approximation of functionals depending on jumps by elliptic functionals via $\Gamma$-Convergence.}
Communications on Pure and Applied Mathematics, XLIII (1990) 999--1036.
\bibitem{Bellido}
J. C. Bellido, C. Morra-Corral, and P. Pedregal. {{\varepsilon}m Hyperelasticity as a $\Gamma$-limit of peridynamics when the horizon goes to zero}
Calc. Var., (2015) DOI 10.1007/s00526-015-0839-9.
\bibitem{Buttazzo}
H. Attouch, G. Buttazzo, and G. Michaille. {{\varepsilon}m Variational Analysis in Sobolev and BV Spaces: applications to PDEs and optimization.} MPS-SIAM series on optimization. SIAM, Philadelphia, PA, 2006.
\bibitem{Barenblatt}
G.I. Barenblatt. {{\varepsilon}m The mathematical theory of equilibrium cracks in brittle fracture.} Adv. Appl. Mech. {\bf 7} (1962) 55--129.
\bibitem{Bellettini}
G. Bellettini, A. Coscia, G. Dal Maso. {{\varepsilon}m Compactness and lower semicontinuity properties in SBD($\Omega$)}, Math. Z. {\bf 228} (1998) 337--351.
\bibitem{Bazant}
Z. P. Ba\v zant and J. Planas. {{\varepsilon}m Fracture and Size Effect in Concrete and Other Quasibrittle Materials}. CRC Press, Boca Raton, FL, 1998.
\bibitem {BelitchReview}
T. Belytschko, R. Gracie, and G. Ventura. {{\varepsilon}m A review of the Extended/Generalized Finite Element Methods for material modelling.} Modelling and Simulation in Materials Science and Engineering. {\bf 17}:043001, 2009.
\bibitem{5}
T. Belytschko and T. Black. {{\varepsilon}m Elastic crack growth in finite elements with minimal remeshing.} Int. J. Numer. Meth. Eng. {\bf 45} (1999) 601--620.
\bibitem{Bobaru1}
F. Bobaru and W. Hu. {\em The meaning, selection, and use of the peridynamic horizon and its relation to crack branching in brittle materials.} Int. J. Fract. {\bf 176} (2012) 215--222.
\bibitem{Hughes}
M. Borden, C. Verhoosel, M. Scott, T. Hughes, and C. Landis. {{\varepsilon}m A phase-field description of dynamic brittle fracture.} Computer Methods in Applied Mechanics and Engineering {\bf 217-220} (2012) 77-95.
\bibitem{BourdinFrancfortMarigo}
B. Bourdin, G. Francfort, and J.-J. Marigo. {{\varepsilon}m The variational approach to fracture}. J. Elasticity {\bf 91} (2008) 5--148.
\bibitem{BourdinLarsenRichardson}
B. Bourdin, C. Larsen, C. Richardson. {{\varepsilon}m A time-discrete model for dynamic fracture based on crack regularization.} Int. J. Fract. {\bf 168} (2011) 133--143.
\bibitem{Bouchbinder}
E. Bouchbinder, J. Fineberg and M. Marder. {{\varepsilon}m Dynamics of simple cracks}. Annual Rev. Condens. Matter Phys. {\bf 1} (2010) 371--395.
\bibitem{6}
M.J. Buehler, F.F. Abraham, and H. Gao. {{\varepsilon}m Hyperelasticity governs dynamic fracture at a critical length scale.} Nature {\bf 426} (2003) 141--146.
\bibitem{Braides1}
A. Braides. {{\varepsilon}m Approximation of Free Discontinuity Problems.} Lecture Notes in Mathematics. No. 1694. Springer, Berlin, 1998.
\bibitem{Braides2}
A. Braides. {{\varepsilon}m Local Minimization, Variational Evolution and $\Gamma$-Convergence.}
Lecture Notes in Mathematics. No. 2094. Springer, Berlin, 2014.
\bibitem{Braides}
A. Braides. {{\varepsilon}m Discrete approximation of functionals with jumps and creases.} Homogenization, 2001 (Naples), Gakuto Internat. Ser. Math. Sci. Appl. {\bf 18} Gakkotosho, Tokyo, 2003, 147--153.
\bibitem{BraidesGelli}
A. Braides and M.S. Gelli. {{\varepsilon}m Limits of discrete systems with long-range interactions}.
J. Convex Anal. {\bf 9} (2002) 363--399.
\bibitem{Cox}
B. N. Cox and Q.D. Yang, {{\varepsilon}m In quest of virtual tests for structural composites.} Science {\bf 314} (2006) 1102--1107.
\bibitem{Duarte}
C.A. Duarte, O.N. Hamzeh, T.J. Liszka, and W.W. Tworzydlo. {{\varepsilon}m A generalized finite element method for the simulation of three-dimensional dynamic crack propagation.} Computer Methods in Applied Mechanics and Engineering {\bf 190} (2001) 2227--2262.
\bibitem{Dougdale}
D. Dugdale. {{\varepsilon}m Yielding of steel sheets containing slits.} J. Mech. Phys. Solids {\bf 8} 100-104 (1960).
\bibitem{Driver}
B. Driver. {{\varepsilon}m Analysis Tools With Applications} E-book, Springer, Berlin, 2003.
\bibitem{BhattacharyaDyal}
K. Dayal and K. Bhattacharya. {\em Kinetics of phase transformations in the peridynamic formulation of continuum mechanics.} J. Mech. Phys. Solids {\bf 54} (2006) 1811--1842.
\bibitem{DuGunzbergerlehoucqmengesha}
Q. Du, M. Gunzburger, R. Lehoucq, and K. Zhou. {{\varepsilon}m Analysis of the volume-constrained peridynamic Navier equation of linear elasticity.} Journal of Elasticity {\bf 113} (2013) 193--217.
\bibitem{EmmrichWeckner}
E. Emmrich and O. Weckner. {{\varepsilon}m On the well-posedness of the linear peridynamic model and its convergence towards the Navier equation of linear elasticity.} Communications in Mathematical Sciences {\bf 5} (2007) 851--864.
\bibitem{EmmrichPhulst}
E. Emmrich and D. Puhst. {\em Well-posedness of the peridynamic model with Lipschitz continuous pairwise force function}. Commun. Math. Sci. {\bf 11} (2013) 1039--1049.
\bibitem {Falk}
M. Falk, A. Needleman, J.R. Rice. {{\varepsilon}m A critical evaluation of cohesive zone models of dynamic fracture.} Journal de Physique IV, Proceedings, pp. Pr-5-43 -- Pr-5-50, 2001.
\bibitem{Foster}
J. Foster, S.A. Silling, and W. Chen. {{\varepsilon}m an energy based failure criterion for use with peridynamic states.} International Journal for Multiscale Computational Engineering. {\bf 9} (2011) 675--688.
\bibitem{Federer}
H. Federer. {{\varepsilon}m Geometric Measure Theory}. Springer-Verlag, Berlin 1969.
\bibitem{FrancfortMarigo}
G. Francfort and J.-J. Marigo. {{\varepsilon}m Revisiting brittle fracture as an energy minimization problem.} J. Mech. Phys. Solids {\bf 46} (1998) 1319--1342.
\bibitem{FrancfortLarsen}
G. Francfort and C. Larsen. {{\varepsilon}m Existence and convergence for quasi-static evolution in brittle fracture.} Commun. Pur. Appl. Math. {\bf 56} (2003) 1465--1500.
\bibitem{Freund}
L.B. Freund. {{\varepsilon}m Dynamic Fracture Mechanics.} Cambridge Monographs on Mechanics and Applied Mathematics. Cambridge University Press, Cambridge, UK, 1998.
\bibitem{Evans}
L. C. Evans. {{\varepsilon}m Partial Differential Equations.} Graduate Studies in Mathematics Vol. 19, American Mathematical Society, Providence, RI, 2010.
\bibitem{14}
A. Hillerborg, M. Modeer, and P.E. Petersson. {{\varepsilon}m Analysis of crack formation and crack growth by means of fracture mechanics and finite elements}. Cem. Concr. Res. {\bf 6} (1976) 731--781.
\bibitem{SillingAscari3}
W. Gerstle, N. Sau, and S. Silling. {{\varepsilon}m Peridynamic Modeling of Concrete Structures.} Nuclear Engineering and Design {\bf 237} (2007) 1250--1258.
\bibitem{Giacomini}
A. Giacomini. {{\varepsilon}m Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fractures.} Calc. Var. Partial Differ. Equ. {\bf 22} (2005) 129--172.
\bibitem{Gobbino1}
M. Gobbino. {{\varepsilon}m Finite difference approximation of the Mumford-Shah Functional.} Comm. Pure Appl. Math. {\bf 51} (1998) 197--228.
\bibitem{Gobbino3}
M. Gobbino and M.G. Mora. {{\varepsilon}m Finite difference approximation of free discontinuity problems.} Royal Soc. Edinburgh Proceedings A. {\bf 131} (2001) 567--595.
\bibitem{Bobaru2}
Y.D. Ha and F. Bobaru. {{\varepsilon}m Studies of dynamic crack propagation and crack branching with peridynamics.} International Journal of Fracture. {\bf 162} (2010) 229--244.
\bibitem{LarsenOrtnerSuli}
C.J. Larsen, C. Ortner, and E. Suli. {{\varepsilon}m Existence of solutions to a regularized model of dynamic fracture.} Mathematical Models and Methods in Applied Sciences, {\bf 20} (2010) 1021--1048.
\bibitem{LiptonJElast2014}
R. Lipton. {{\varepsilon}m Dynamic Brittle Fracture as a Small Horizon Limit of Peridynamics,} Journal of Elasticity, January 3, 2014, DOI 10.1007/s 10659-013-9463-0.
\bibitem{Negri}
L. Lussardi and M. Negri. {{\varepsilon}m Convergence of nonlocal finite element energies for fracture mechanics}. Numerical Functional Analysis and Optimization. {\bf 28} (2007) 83--109.
\bibitem{MarigoTruskinovsky}
J.-J. Marigo and L. Truskinovsky. {{\varepsilon}m Initiation and propagation of fracture in the models of Griffith and Barenblatt}. Continuum Mech. Thermodyn, {\bf{16}} (2004) 391--409.
\bibitem{16}
M. Marder. {{\varepsilon}m Supersonic rupture of rubber.} J. Mech. Phys. Solids {\bf 54} (2006) 491--532.
\bibitem{17}
M. Marder and S. Gross. {{\varepsilon}m Origin of crack tip instabilities.} J. Mech. Phys. Solids, {\bf 43} (1995) 1--48.
\bibitem{Matthies}
H. Matthies, G. Strang, and E. Christiansen. {{\varepsilon}m The Saddle Point of a Differential Program}. Energy Methods in Finite Element Analysis, Wiley, New York, 1979.
\bibitem{MengeshaDu}
T. Mengesha and Q. Du. {{\varepsilon}m Nonlocal constrained value problems for a linear peridynamic Navier equation}. Journal of Elasticity. Online First August 2013, DOI 10.1007/s 10659-013-9456-z.
\bibitem{Miehe}
C. Miehe, M. Hofacker, and F. Welschinger. {{\varepsilon}m A phase field model for rate-independent crack propagation: Robust algorithmic implementation based on operator splits.} Computer Methods in Applied Mechanics and Engineering, {\bf 199} (2010) 2765--2778.
\bibitem{18}
N. M\"oes, J. Delbow, and T. Belytschko. {{\varepsilon}m A finite element method for crack growth without remeshing.} Int. J. Numer. Meth. Eng. {\bf 46} (1999) 131--150.
\bibitem{Morgan}
F. Morgan. {{\varepsilon}m Geometric Measure Theory, A Beginner's Guide}. Academic Press, San Diego, 1995.
\bibitem{MumfordShah}
D. Mumford and J. Shah. {{\varepsilon}m Optimal approximation by piecewise smooth functions and associated variational problems.} Comm. Pure Appl. Math. {\bf 17} (1989) 577--685.
\bibitem{WaltonSendova}
E.S. Oh, J.R. Walton, and J.C. Slattery. {\em A theory of fracture based upon an extension of continuum mechanics to the nanoscale.} J. Appl. Mech. {\bf 73} (2006) 792--798.
\bibitem{OlsenHolden}
B. Hanche-Olsen and Helge Holden. {{\varepsilon}m The Kolomogorov-Riesz compactness theorem.} Expositiones Mathematicae {\bf 28} (2010) 385--394.
\bibitem {Remmers}
J.J.C. Remmers, R. de Borst and A. Needleman. {{\varepsilon}m The Simulation of Dynamic Crack Propagation using the Cohesive Segments Method.} Journal of the Mechanics and Physics of Solids {\bf 56} (2008) 70--92.
\bibitem{Ortiz}
B. Schmidt, F. Fraternali, and M. Ortiz. {{\varepsilon}m Eigenfracture: an eigendeformation approach to variational fracture.} Multiscale Model. Simul. {\bf 7} (2009) 1237--1266.
\bibitem{Silling1}
S.A. Silling. {\em Reformulation of Elasticity Theory for Discontinuities and Long-Range Forces.} J. Mech. Phys. Solids {\bf 48} (2000) 175--209.
\bibitem{SillingAscari2}
S.A. Silling and E. Askari. {{\varepsilon}m A meshfree method based on the peridynamic model of solid mechanics.} Computers and Structures {\bf 83} (2005) 1526--1535.
\bibitem{SillingBobaru}
S.A. Silling and F. Bobaru. {{\varepsilon}m Peridynamic Modeling of Membranes and Fibers.} International Journal of Non-Linear Mechanics {\bf 40} (2005) 395--409.
\bibitem{SillingWecknerBobaru}
S. Silling, O. Weckner, E. Askari, and F. Bobaru. {{\varepsilon}m Crack nucleation in a peridynamic solid.} International Journal of Fracture {\bf 162} (2010) 219--227.
\bibitem{SillingLehoucq}
S.A. Silling and R. Lehoucq. {{\varepsilon}m Convergence of peridynamics to classical elasticity theory.} Journal of Elasticity {\bf 93} (2008) 13--37.
\bibitem{States}
S.A. Silling, M. Epton, O. Weckner, J. Xu, and E. Askari. {{\varepsilon}m Peridynamic states and constitutive
modeling.} J. Elasticity, {\bf 88} (2007) 151--184.
\bibitem{21}
L.I. Slepyan. {{\varepsilon}m Models and Phenomena in Fracture Mechanics.} Springer, Berlin, Germany, 2002.
\bibitem{Suquet}
P.M. Suquet. {{\varepsilon}m Un espace fonctionnel pour les \'equations de la plasticit\'e.} Ann. Fac. Sci. Toulouse {\bf 1} (1979) 77--87.
\bibitem{WecknerAbeyaratne}
O. Weckner and R. Abeyaratne. {{\varepsilon}m The effect of long-range forces on the dynamics of a bar.} Journal of the Mechanics and Physics of Solids {\bf 53} (2005) 705--728.
\bibitem{Wheeler}
M.F. Wheeler, T. Wick, W. Wollner. {{\varepsilon}m An augmented-lagrangian method for the phase-field approach for pressurized fractures},
Comp. Meth. Appl. Mech. Engrg. {\bf 271} (2014), pp. 69--85.
\bibitem{Willis}
J.R. Willis. {{\varepsilon}m A comparison of the fracture criteria of Griffith and Barenblatt}. J. Mech. Phys. Solids. {\bf 15} (1967) 152--162.
\bibitem{22}
X.P. Xu and A. Needleman. {{\varepsilon}m Numerical simulations of fast crack growth in brittle solids.} J. Mech. Phys. Solids {\bf 42} (1994) 1397--1434.
{\varepsilon}nd{thebibliography}
\end{document}
\begin{document}
\title{Colorful Subhypergraphs in Uniform Hypergraphs}
\begin{abstract}
There are several topological results ensuring the existence of a large complete bipartite subgraph in any properly colored graph satisfying some special topological regularity conditions. In view of the $\mathbb{Z}_p$-Tucker lemma,
Alishahi and Hajiabolhassan [{\it On the chromatic number of general Kneser hypergraphs, Journal of Combinatorial Theory, Series B, 2015}] introduced a lower bound for the chromatic number of
Kneser hypergraphs ${\rm KG}^r({\mathcal H})$.
Next, Meunier [{\it Colorful subhypergraphs in
Kneser hypergraphs, The Electronic Journal of Combinatorics, 2014}] improved their result by proving that any properly colored general Kneser hypergraph ${\rm KG}^r({\mathcal H})$ contains a large colorful $r$-partite subhypergraph provided that $r$ is prime.
In this paper, we give some new generalizations of the $\mathbb{Z}_p$-Tucker lemma, thereby improving Meunier's result in some aspects. Some new lower bounds for the chromatic number and the local chromatic number of uniform hypergraphs are presented as well. \\
\noindent{\it Keywords:} chromatic number of hypergraphs, $\mathbb{Z}_p$-Tucker-Ky~Fan lemma, colorful complete hypergraph, $\mathbb{Z}_p$-box-complex,
$\mathbb{Z}_p$-hom-complex
\end{abstract}
\section{\bf Introduction}
\subsection{{\bf Background and Motivations}}
In 1955, Kneser~\cite{MR0068536} posed a conjecture about the chromatic number
of Kneser graphs. In 1978, Lov{\'a}sz~\cite{MR514625} proved this conjecture by
using algebraic topology.
Lov{\'a}sz's proof marked the beginning of the history of topological combinatorics.
Nowadays, it is an active stream of research to study the coloring properties of graphs
by using algebraic topology. There are several lower bounds for the chromatic number
of graphs related to the indices of some topological spaces defined based on the
structure of graphs.
However, for hypergraphs, there are a few such lower bounds, see~\cite{2013arXiv1302.5394A,MR953021,Iriye20131333,MR1081939,MR2279672}.
A {\it hypergraph} ${\mathcal H}$ is a pair $(V({\mathcal H}),E({\mathcal H}))$, where $V({\mathcal H})$ is a finite set, called the vertex set
of ${\mathcal H}$, and $E({\mathcal H})$ is a family of nonempty subsets of $V({\mathcal H})$, called the edge set of ${\mathcal H}$.
Throughout the paper, by a nonempty hypergraph, we mean a hypergraph
with at least one edge.
If any edge $e\in E({\mathcal H})$ has the cardinality $r$, then the hypergraph
${\mathcal H}$ is called {\it $r$-uniform.} For a set $U\subseteq V({\mathcal H})$, the {\it induced subhypergraph on $U$,} denoted ${\mathcal H}[U]$, is a hypergraph with the vertex set $U$ and the edge set
$\{e\in E({\mathcal H}):\; e\subseteq U\}$.
Throughout the paper, by a {\it graph,} we mean a $2$-uniform hypergraph.
Let $r\geq 2$ be a positive integer and $q\geq r$ be an integer.
An $r$-uniform hypergraph ${\mathcal H}$ is called {\it $q$-partite} with parts $V_1,\ldots,V_q$ if
\begin{itemize}
\item $V({\mathcal H})=\displaystyle\bigcup_{i=1}^q V_i$ and
\item each edge of ${\mathcal H}$ intersects each part $V_i$ in at most one vertex.
\end{itemize}
If ${\mathcal H}$ contains all possible edges, then we call it a
{\it complete $r$-uniform $q$-partite hypergraph.}
Also, we say the hypergraph ${\mathcal H}$ is {\it balanced} if the values of $|V_j|$ for
$j =1,\ldots,q$ differ by at most one, i.e., $\big||V_i|-|V_j|\big|\leq 1$ for each $i,j\in[q]$.
Let ${\mathcal H}$ be an $r$-uniform hypergraph and $U_1,\ldots, U_q$ be $q$ pairwise disjoint
subsets of $V({\mathcal H})$.
The hypergraph ${\mathcal H}[U_1,\ldots, U_q]$ is a subhypergraph of ${\mathcal H}$ with the vertex set $\displaystyle\bigcup_{i=1}^q U_i$ and the edge set $$E({\mathcal H}[U_1,\ldots, U_q])=\left\{e\in E({\mathcal H}):\; e\subseteq \displaystyle\bigcup_{i=1}^q U_i\mbox{ and } |e\cap U_i|\leq 1\mbox{ for each } i\in[q]\right\}.$$
Note that ${\mathcal H}[U_1,\ldots, U_q]$ is an $r$-uniform $q$-partite hypergraph with parts $U_1,\ldots,U_q$.
By the symbol ${[n]\choose r}$, we mean the family of all $r$-subsets of the set $[n]$.
The hypergraph $K^r_n=\left([n],{[n]\choose r}\right)$ is called the complete $r$-uniform hypergraph with $n$ vertices. For $r=2$, we write $K_n$ instead of $K_n^2$.
The largest possible integer $n$ such that ${\mathcal H}$
contains $K^r_n$ as a subhypergraph
is called the {\it clique number of ${\mathcal H}$}, denoted $\omega({\mathcal H})$.
A {\it proper $t$-coloring} of a hypergraph ${\mathcal H}$ is a map $c:V({\mathcal H})\longrightarrow [t]$ such that there is no monochromatic edge. The minimum such $t$ is called {\it the chromatic number of ${\mathcal H}$}, denoted $\chi({\mathcal H})$. If no such $t$ exists, we define the chromatic number to be infinite.
Let $c$ be a proper coloring of ${\mathcal H}$ and $U_1,\ldots, U_q$ be
$q$ pairwise disjoint subsets of $V({\mathcal H})$.
The hypergraph ${\mathcal H}[U_1,\ldots, U_q]$ is said to be {\it colorful } if
for each $j\in[q]$, the vertices of $U_j$ get pairwise distinct colors.
For a properly colored graph $G$, a subgraph is called {\it multicolored} if its vertices get pairwise distinct colors.
For a hypergraph ${\mathcal H}$, {\it the Kneser hypergraph} ${\rm KG}^r({\mathcal H})$ is an $r$-uniform hypergraph with the vertex set $E({\mathcal H})$ and whose edges are formed by $r$ pairwise vertex-disjoint edges of ${\mathcal H}$, i.e.,
$$E({\rm KG}^r({\mathcal H}))=\left\{ \{e_1,\ldots,e_r\}:\; e_i\cap e_j=\varnothing\mbox{ for each } i\neq j\in[r] \right\}.$$
For any graph $G$,
it is known that there are several hypergraphs ${\mathcal H}$ such that ${\rm KG}^2({\mathcal H})$ and $G$ are isomorphic.
The Kneser hypergraph ${\rm KG}^r\left(K_n^k\right)$ is called the ``usual'' Kneser hypergraph and is denoted by ${\rm KG}^r(n,k)$.
Coloring properties of Kneser hypergraphs have been studied extensively
in the literature. Lov\'asz~\cite{MR514625} (for $r=2$) and Alon, Frankl and Lov\'asz~\cite{MR857448} determined the chromatic number of ${\rm KG}^r(n,k)$. For an integer $r\geq 2$, they proved that
$$\chi\left({\rm KG}^r(n,k)\right)= \left\lceil{n-r(k-1)\over r-1}\right\rceil.$$
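For instance, for $r=2$ this formula gives $\chi\left({\rm KG}^2(n,k)\right)=n-2k+2$; in particular, the Petersen graph ${\rm KG}^2(5,2)$ has chromatic number $3$.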
For a hypergraph ${\mathcal H}$, the $r$-colorability defect of ${\mathcal H}$, denoted ${\rm cd}_r({\mathcal H})$, is the
minimum number of vertices which should be removed such that the induced hypergraph on the remaining vertices is $r$-colorable, i.e.,
$${\rm cd}_r({\mathcal H})=\min\left\{|U|:\; {\mathcal H}[V({\mathcal H})\setminus U]\mbox{ is $r$-colorable}\right\}.$$
For a hypergraph ${\mathcal H}$,
Dol'nikov~\cite{MR953021}~(for $r=2$) and K{\v{r}}{\'{\i}}{\v{z}}~\cite{MR1081939} proved that
$$\chi({\rm KG}^r({\mathcal H}))\geq \left\lceil{{\rm cd}_r({\mathcal H})\over r-1}\right\rceil,$$
which is a generalization of the results by Lov\'asz~\cite{MR514625} and Alon, Frankl and Lov\'asz~\cite{MR857448}.
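As an illustration, note that a hypergraph is $r$-colorable precisely when its vertex set can be partitioned into $r$ sets each containing no edge. For ${\mathcal H}=K_n^k$ such a set has at most $k-1$ vertices, so ${\rm cd}_r(K_n^k)=\max\{0,n-r(k-1)\}$, and the Dol'nikov--K{\v{r}}{\'{\i}}{\v{z}} bound applied to $K_n^k$ recovers the Alon--Frankl--Lov\'asz formula stated above.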
For a positive integer $r$, let $\mathbb{Z}_r=\{\omega,\omega^2,\ldots,\omega^r\}$
be a cyclic group of order $r$ with generator $\omega$.
Consider a vector $X=(x_1,x_2,\ldots,x_n)\in(\mathbb{Z}_r\cup\{0\})^n$.
An alternating
subsequence of $X$ is a sequence $x_{i_1},x_{i_2},\ldots,x_{i_m}$
of nonzero terms of $X$ such that $i_1<\cdots<i_m$ and $x_{i_j}\neq x_{i_{j+1}}$ for each $j\in [m-1]$.
We denote by ${\rm alt}(X)$ the maximum possible length of an
alternating subsequence of $X$.
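For example, for $r=3$ and $X=(\omega,0,\omega^2,\omega,0,\omega^2,\omega^2)$ we have ${\rm alt}(X)=4$, realized by the subsequence $\omega,\omega^2,\omega,\omega^2$. Purely as an illustration of the definition (and not as part of any proof), ${\rm alt}(X)$ can be computed by scanning the nonzero entries and counting changes of value, as in the following Python sketch, where the entries of \texttt{X} are $0$ or hashable representatives of the elements of $\mathbb{Z}_r$:
\begin{verbatim}
def alt(X):
    # Maximum length of an alternating subsequence of X: collapse
    # consecutive equal nonzero entries and count what remains.
    nonzero = [x for x in X if x != 0]
    if not nonzero:
        return 0
    length = 1
    for prev, cur in zip(nonzero, nonzero[1:]):
        if cur != prev:
            length += 1
    return length

# Example: alt([1, 0, 2, 1, 0, 2, 2]) == 4
\end{verbatim}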
For a vector $X=(x_1,x_2,\ldots,x_n)\in (\mathbb{Z}_r\cup\{0\})^n$ and for an $\epsilon\in\mathbb{Z}_r$, set
$X^\epsilon=\{i\in[n]:\; x_i=\epsilon\}$.
Note that, by abuse of notation, we can write $X=(X^\epsilon)_{\epsilon\in \mathbb{Z}_r}$.
For two vectors $X,Y\in (\mathbb{Z}_r\cup\{0\})^n$, by $X\subseteq Y$, we mean
$X^\epsilon\subseteq Y^\epsilon$ for each $\epsilon\in\mathbb{Z}_r$.
For a hypergraph ${\mathcal H}$ and a bijection $\sigma:[n]\longrightarrow V({\mathcal H})$, define
$${\rm alt}_r({\mathcal H},\sigma)=\displaystyle\max\left\{{\rm alt}(X):\; X\in (\mathbb{Z}_r\cup\{0\})^n\mbox{ such that } E({\mathcal H}[\sigma(X^\epsilon)])=\varnothing\mbox{ for each } \epsilon\in\mathbb{Z}_r \right\}.$$
Also, let
$${\rm alt}_r({\mathcal H})=\displaystyle\min_{\sigma} {\rm alt}_r({\mathcal H},\sigma),$$
where the minimum is taken over all bijections $\sigma:[n]\longrightarrow V({\mathcal H})$.
One can readily check that for any hypergraph ${\mathcal H}$, $|V({\mathcal H})|-{\rm alt}_r({\mathcal H})\geq {\rm cd}_r({\mathcal H})$ and the inequality is often strict, see~\cite{2013arXiv1302.5394A}.
Alishahi and Hajiabolhassan~\cite{2013arXiv1302.5394A} improved Dol'nikov-K{\v{r}}{\'{\i}}{\v{z}} result by proving that
for any hypergraph ${\mathcal H}$ and for any integer $r\geq 2$, the quantity $\left\lceil{ |V({\mathcal H})|-{\rm alt}_r({\mathcal H})\over r-1}\right\rceil$ is a lower bound for the chromatic number of
${\rm KG}^r({\mathcal H})$.
Using this lower bound, the chromatic number of several families of graphs and hypergraphs has been computed; see~\cite{2014arXiv1401.0138A,2014arXiv1403.4404A,2014arXiv1407.8035A,2015arXiv150708456A,2013arXiv1302.5394A,HaMe16}.
There are some other lower bounds for the chromatic number of graphs which are
better than the former discussed lower bounds.
They are based on some topological indices of some topological spaces
connected to the structure of graphs.
Although these lower bounds are better, they are not combinatorial and are often difficult to compute.
The existence of large colorful bipartite subgraphs in a properly colored graph has
been extensively studied in the
literature~\cite{2013arXiv1302.5394A,MR2971704,MR2763055,SiTaZs13,MR2279672,MR2351519}.
To be more specific, there are several theorems ensuring the existence of a colorful bipartite subgraph in any properly colored graph such that the bipartite subgraph has a specific number of vertices related to some topological parameters connected to the graph.
Simonyi and Tardos~\cite{MR2351519} improved Dol'nikov's lower bound and proved that
in any proper coloring of a Kneser graph ${\rm KG}^2({\mathcal H})$, there is a multicolored complete bipartite graph
$K_{\left\lceil{{\rm cd}_2({\mathcal H})\over 2}\right\rceil,\left\lfloor{{\rm cd}_2({\mathcal H})\over 2}\right\rfloor}$
such that the ${\rm cd}_2({\mathcal H})$ different colors appear alternately on the
two parts of the bipartite graph with respect to their natural order.
By a combinatorial proof, Alishahi and Hajiabolhassan~\cite{2013arXiv1302.5394A} improved this result. They proved that the result
remains true if we replace ${\rm cd}_2({\mathcal H})$ by $|V({\mathcal H})|-{\rm alt}_2({\mathcal H})$. Also,
a stronger result is proved by Simonyi, Tardif, and Zsb{\'{a}}n~\cite{SiTaZs13}.
\begin{alphtheorem}{\rm (Zig-zag Theorem~\cite{SiTaZs13}).}\label{zigzag}
Let $G$ be a nonempty graph
which is properly colored with an arbitrary number of colors.
Then $G$ contains a multicolored complete bipartite subgraph $K_{\lceil{t\over2}\rceil,\lfloor{t\over2}\rfloor}$,
where $t={\rm Xind}({\rm Hom}(K_2,G))+2$. Moreover, colors appear alternately on the two sides of the bipartite subgraph with respect to their natural ordering.
\end{alphtheorem}
The quantity ${\rm Xind}({\rm Hom}(K_2,G))$ is the cross-index of the
hom-complex ${\rm Hom}(K_2,G)$, which will be defined in
Subsection~\ref{Boxdefin}. We should mention that there are some other weaker similar
results in terms of some other topological parameters, see~\cite{MR2279672,MR2351519}.
Note that the previously mentioned results concern the existence of colorful bipartite subgraphs
in properly colored graphs ($2$-uniform hypergraphs).
In 2014, Meunier~\cite{Meunier14}
found the first colorful-type result for uniform hypergraphs.
He proved that for any prime number $p$, any properly colored Kneser
hypergraph $\mathsf{K}G^p({\mathcal H})$ must contain a colorful
balanced complete $p$-uniform $p$-partite subhypergraph with a specific number
of vertices, see Theorem~\ref{colorfulhyper}.
\subsection{\bf Main Results}
For a given graph $G$,
there are several complexes defined based on the structure of $G$.
Examples include the box-complex of $G$, denoted ${\rm B}_0(G)$, and the hom-complex of $G$, denoted ${\rm Hom}(K_2,G)$; see~\cite{MR1988723,SiTaZs13,MR2279672}.
Also, there are some lower bounds for the chromatic number of graphs
related to some indices of these complexes~\cite{SiTaZs13,MR2279672}.
In this paper, we naturally generalize the definitions of box-complex and hom-complex
of graphs to uniform hypergraphs.
Also, the definition of $\mathbb{Z}_p$-cross-index of $\mathbb{Z}_p$-posets will be introduced.
Using these complexes,
as a first main result of this paper, we generalize Meunier's result~\cite{Meunier14} (Theorem~\ref{colorfulhyper})
to the following theorem.
\begin{theorem}\label{maincolorfulindex}
Let $r\geq 2$ be a positive integer and $p\geq r$ be a prime number.
Assume that ${\mathcal H}$ is an $r$-uniform hypergraph and $c:V({\mathcal H})\longrightarrow[C]$ is a proper coloring of
${\mathcal H}$ {\rm (}$C$ arbitrary{\rm )}. Then we have the
following assertions.
\begin{itemize}
\item[{\rm (i)}] There is some colorful balanced complete $r$-uniform $p$-partite
subhypergraph in ${\mathcal H}$
with ${\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))+1$ vertices.
In particular,
$$\chi({\mathcal H})\geq {{\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))+1\over r-1}.$$
\item[{\rm (ii)}] If $p\leq \omega({\mathcal H})$, then there is some colorful balanced complete $r$-uniform $p$-partite subhypergraph in ${\mathcal H}$ with ${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p$ vertices.
In particular,
$$\chi({\mathcal H})\geq {{\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p\over r-1}.$$
\end{itemize}
\end{theorem}
Quantities ${\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))$ and ${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))$ appearing in the statement of Theorem~\ref{maincolorfulindex} are respectively
the $\mathbb{Z}_p$-index and $\mathbb{Z}_p$-cross-index
of the $\mathbb{Z}_p$-box-complex $B_0({\mathcal H},\mathbb{Z}_p)$
and $\mathbb{Z}_p$-hom-complex ${\rm Hom}(K^r_p,{\mathcal H})$ which will be defined in Subsection~\ref{Boxdefin}.
Using these complexes, we introduce some new lower bounds for the chromatic number of uniform hypergraphs.
In view of Theorem~\ref{maincolorfulindex},
the next theorem provides a hierarchy of lower bounds for the chromatic number of $r$-uniform hypergraphs.
\begin{theorem}\label{inequalities}
Let $r\geq 2$ be a positive integer and $p\geq r$ be a prime number.
For any $r$-uniform hypergraph ${\mathcal H}$, we have the following inequalities.\\
{\rm (i)} If $p\leq \omega({\mathcal H})$, then
$${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p \geq {\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))+1.$$
{\rm (ii)} If ${\mathcal H}={\rm KG}^r({\mathcal F})$ for some hypergraph ${\mathcal F}$, then
$$
{\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))+1
\geq |V({\mathcal F})|-{\rm alt}_p({\mathcal F})
\geq {\rm cd}_p({\mathcal F}).
$$
\end{theorem}
In view of Theorem~\ref{inequalities}, Theorem~\ref{maincolorfulindex} is a common
extension of Theorem~\ref{zigzag} and Theorem~\ref{colorfulhyper}. Furthermore, for $r=2$, Theorem~\ref{maincolorfulindex} implies the next corollary which also is a generalization of Theorem~\ref{zigzag}.
\begin{corollary}\label{cor1}
Let $p$ be a prime number and let $G$ be a nonempty graph which is properly colored with arbitrary number of colors. Then there is a multicolored
complete $p$-partite subgraph $K_{n_1,n_2,\ldots,n_p}$ of $G$ such that
\begin{itemize}
\item $\displaystyle\sum_{i=1}^pn_i={\rm ind}_{\mathbb{Z}_p}(B_0(G,\mathbb{Z}_p))+1$,
\item $|n_i-n_j|\leq 1$ for each $i,j\in[p]$.
\end{itemize}
Moreover, if $p\leq \omega(G)$, then ${\rm ind}_{\mathbb{Z}_p}(B_0(G,\mathbb{Z}_p))+1$
can be replaced with ${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K_p,G))+p$.
\end{corollary}
In view of the previously mentioned results, the following question naturally arises.
\begin{question} Do Theorem~\ref{maincolorfulindex} and Theorem~\ref{inequalities} remain true for non-prime $p$?
\end{question}
\subsection{\bf Applications to Local Chromatic Number of Uniform Hypergraphs}
For a graph $G$ and a vertex $v\in V(G)$, the {\it closed neighborhood of $v$,} denoted $N[v]$, is the set $\{v\}\cup\{u:\; uv\in E(G)\}$. The {\it local chromatic number} of $G$, denoted
$\chi_l(G)$, is defined in~\cite{ERDOS198621} as follows:
$$\chi_l(G)=\displaystyle\min_c\max\{|c(N[v])|:\; v\in V(G)\}$$
where the minimum is taken over all proper colorings $c$ of $G$.
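For a small example, consider the $5$-cycle $C_5$ with vertices $v_1,\ldots,v_5$ in cyclic order. If a proper coloring $c$ satisfied $|c(N[v_i])|\leq 2$ for every $i$, then the two neighbors of each vertex would receive the same color, and hence
$$c(v_1)=c(v_3)=c(v_5)=c(v_2)=c(v_4),$$
contradicting properness; since clearly $\chi_l(G)\leq\chi(G)$ for every graph $G$, it follows that $\chi_l(C_5)=\chi(C_5)=3$.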
Note that Theorem~\ref{zigzag} gives the following lower bound for the local chromatic
number of a nonempty graph $G$:
\begin{equation}\label{localloerzigzag}
\chi_l(G)\geq \left\lceil{{\rm Xind}({\rm Hom}(K_2,G))+2\over 2}\right\rceil+1.
\end{equation}
Note that for a Kneser hypergraph $\mathsf{K}G^2({\mathcal H})$,
by using the colorful result of Simonyi and Tardos~\cite{MR2351519} or the extension given by
Alishahi and Hajiabolhassan~\cite{2013arXiv1302.5394A}, one obtains two similar lower
bounds for $\chi_l(\mathsf{K}G^2({\mathcal H}))$ which use $\operatorname{cd}_2({\mathcal H})$ and
$|V({\mathcal H})|-\operatorname{alt}_2({\mathcal H})$, respectively, instead of ${\rm Xind}({\rm Hom}(K_2,\mathsf{K}G^2({\mathcal H})))+2$.
However, as stated in Theorem~\ref{inequalities}, the lower bound in terms
of ${\rm Xind}({\rm Hom}(K_2,\mathsf{K}G^2({\mathcal H})))+2$ is better than these two lower bounds.
Using Corollary~\ref{cor1}, we have the following lower bound for the local
chromatic number of graphs.
\begin{corollary}\label{locallowerp}
Let $G$ be a nonempty graph and $p$ be a prime number.
Then $$\chi_l(G)\geq t-\left\lfloor{t\over p}\right\rfloor+1,$$ where
$t={\rm ind}_{\mathbb{Z}_p}(B_0(G,\mathbb{Z}_p))+1$.
Moreover, if $p\leq \omega(G)$, then ${\rm ind}_{\mathbb{Z}_p}(B_0(G,\mathbb{Z}_p))+1$
can be replaced with ${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K_p,G))+p$.
\end{corollary}
Note that if we set $p=2$, then the previous corollary implies Simonyi
and Tardos's lower bound for the local chromatic number. Note also that, in general,
the new lower bound might be better than Simonyi
and Tardos's lower bound. To see this, let $k\geq 2$ be a fixed integer.
Consider the Kneser graph $\mathsf{K}G^2(n,k)$ and let $p=p(n)$ be a prime number such that
$p=\Theta(\ln n)$. By Theorem~\ref{inequalities}, for $n\geq pk$, we have
$${\rm ind}_{\mathbb{Z}_p}(B_0(\mathsf{K}G^2(n,k),\mathbb{Z}_p))+1\geq \operatorname{cd}_p(K_n^k)=n-p(k-1).$$
Note that
the lower bound for $\chi_l(\mathsf{K}G^2(n,k))$ coming from Inequality~\ref{localloerzigzag}
is \begin{equation}\label{equation2}
1+\left\lceil{n-2k+2\over 2}\right\rceil={n\over 2}+O(1),
\end{equation}
while,
in view of Corollary~\ref{locallowerp}, we have
$$\chi_l(\mathsf{K}G^2(n,k))\geq n-p(k-1)-\left\lfloor{n-p(k-1)\over p}\right\rfloor+1=n-o(n),$$
which is better than the quantity in Equation~\ref{equation2} if $n$ is sufficiently large.
However,
since the induced subgraph on the neighbors of any vertex of $\mathsf{K}G(n,k)$ is isomorphic to
$\mathsf{K}G(n-k,k)$, we have
$$\chi_l(\mathsf{K}G(n,k))\geq n-3(k-1).$$
\begin{corollary}
Let $\mathcal{F}$ be a hypergraph and $\alpha(\mathcal{F})$ be its independence number. Then for any prime number $p$, we have
$$\chi_l(\mathsf{K}G^2(\mathcal{F}))\geq \left\lceil{(p-1)|V(\mathcal{F})|\over p}\right\rceil-(p-1)\cdot\alpha(\mathcal{F})+1.$$
\end{corollary}
\begin{proof}
In view of Theorem~\ref{inequalities}, we have
$${\rm ind}_{\mathbb{Z}_p}(B_0(\mathsf{K}G^2(\mathcal{F}),\mathbb{Z}_p))+1\geq \operatorname{cd}_p(\mathcal{F})\geq |V(\mathcal{F})|-p\cdot\alpha(\mathcal{F}).$$
Now, Corollary~\ref{locallowerp} implies the assertion.
\end{proof}
Meunier~\cite{Meunier14} naturally generalized the definition of
local chromatic number of graphs
to uniform hypergraphs as follows. Let ${\mathcal H}$ be a uniform hypergraph. For a
set $X\subseteq V({\mathcal H})$, the closed neighborhood of $X$, denoted ${\mathcal N}[X]$, is the set $X\cup {\mathcal N}(X),$
where
$${\mathcal N}(X)=\{v\in V({\mathcal H}):\; \exists\; e\in E({\mathcal H})\mbox{ such that } e\setminus X=\{v\}\}.$$
For a uniform hypergraph ${\mathcal H}$, the local chromatic number of ${\mathcal H}$ is defined as follows:
$$\chi_l({\mathcal H})=\displaystyle\min_c\max\{|c({\mathcal N}[e\setminus \{v\}])|:\; e\in E({\mathcal H})\mbox{ and } v\in e\},$$
where the minimum is taken over all proper colorings $c$ of ${\mathcal H}$.
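Note that for $r=2$ this definition is consistent with the local chromatic number of graphs defined above: if $e=\{u,v\}$ is an edge of a graph $G$, then $e\setminus\{v\}=\{u\}$ and
$${\mathcal N}(\{u\})=\{w\in V(G):\; uw\in E(G)\},$$
so ${\mathcal N}[e\setminus\{v\}]=N[u]$, and for graphs without isolated vertices the two definitions coincide.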
Meunier~\cite{Meunier14}, by using his colorful theorem (Theorem~\ref{colorfulhyper}),
generalized Simonyi and Tardos's lower bound~\cite{MR2351519} for the local chromatic
number of Kneser graphs to the local chromatic number of Kneser hypergraphs. He proved:
$$
\chi_l(\mathsf{K}G^p({\mathcal H}))\geq \min\left(\left\lceil{|V({\mathcal H})|-\operatorname{alt}_p({\mathcal H})\over p}\right\rceil+1, \left\lceil{|V({\mathcal H})|-\operatorname{alt}_p({\mathcal H})\over p-1}\right\rceil \right)$$ for any hypergraph ${\mathcal H}$ and any prime number $p$.
In what follows, we generalize this result.
\begin{theorem}\label{localhyper}
Let ${\mathcal H}$ be an $r$-uniform hypergraph with at least one edge and $p$ be a prime number,
where $r\leq p\leq \omega({\mathcal H})$. Let $t={\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p$.
If $t=ap+b$, where $a$ and $b$ are nonnegative integers and
$0\leq b\leq p-1$, then
$$\chi_l({\mathcal H})\geq \min\left(\left\lceil{(p-r+1)a+\min\{p-r+1,b\}\over r-1}\right\rceil+1, \left\lceil{t\over r-1}\right\rceil \right).$$
\end{theorem}
\begin{proof}
Let $c$ be an arbitrary proper coloring of ${\mathcal H}$ and
let ${\mathcal H}[U_1,\ldots,U_p]$ be the colorful balanced complete $r$-uniform $p$-partite
subhypergraph of ${\mathcal H}$ whose existence is ensured by Theorem~\ref{maincolorfulindex}.
Note that exactly $b$ of the $U_i$'s, say $U_1,\ldots,U_b$, have cardinality
$\lceil{t\over p}\rceil$, while the others
have cardinality $\lfloor{t\over p}\rfloor\geq 1$.
Consider $U_1,\ldots,U_{p-r+1}$.
Two different cases will be distinguished.
\begin{itemize}
\item[{\bf Case 1.}] If $\left|\displaystyle\bigcup_{i=1}^{p-r+1} c(U_i)\right|<\left\lceil{t\over r-1}\right\rceil$,
then there is a vertex $v\in \displaystyle\bigcup_{i=p-r+2}^p U_i$ whose color is~not in $\displaystyle\bigcup_{i=1}^{p-r+1} c(U_i)$. Consider an edge $e$ of ${\mathcal H}[U_1,\ldots,U_p]$ containing $v$
and such that $|e\cap U_{p-r+1}|=1$ and $e\cap U_i=\varnothing$ for $i=1,\ldots, p-r$.
Let $e\cap U_{p-r+1}=\{u\}$.
One can check that $$\{c(v)\}\cup\displaystyle\left(\bigcup_{i=1}^{p-r+1} c(U_i)\right)\subseteq c({\mathcal N}(e\setminus \{u\})).$$ Therefore,
since any color appears in at most $r-1$ of the $U_i$'s, we have
$$\left|\bigcup_{i=1}^{p-r+1} c(U_i)\right|\geq \displaystyle\left\lceil{\sum_{i=1}^{p-r+1}|U_i|\over r-1}\right\rceil,$$
and consequently,
$$|c({\mathcal N}(e\setminus \{u\}))|\geq 1+\displaystyle\left\lceil{\sum_{i=1}^{p-r+1}|U_i|\over r-1}\right\rceil=1+\left\lceil{(p-r+1)a+\min\{p-r+1,b\}\over r-1}\right\rceil,$$
which completes the proof in Case~1.
\item[{\bf Case 2.}] If $\left|\displaystyle\bigcup_{i=1}^{p-r+1} c(U_i)\right|\geq\left\lceil{t\over r-1}\right\rceil$,
then consider an edge $e$ of ${\mathcal H}[U_1,\ldots,U_p]$ such that
$|e\cap U_{p-r+1}|=1$ and $e\cap U_i=\varnothing$ for $i=1,\ldots, p-r$.
Let $e\cap U_{p-r+1}=\{u\}$.
One can see that $$\displaystyle\bigcup_{i=1}^{p-r+1} c(U_i)\subseteq c({\mathcal N}(e\setminus \{u\})),$$
which completes the proof in Case~2.
\end{itemize}
\end{proof}
\begin{corollary}
Let ${\mathcal H}$ be a $p$-uniform hypergraph with at least one edge, where $p$ is a prime number.
Then
$$\chi_l({\mathcal H})\geq \min\left(\left\lceil{{\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^p_p,{\mathcal H}))+p\over p}\right\rceil+1, \left\lceil{{\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^p_p,{\mathcal H}))+p\over p-1}\right\rceil \right).$$
\end{corollary}
\begin{proof}
Since ${\mathcal H}$ has at least one edge, we have $\omega({\mathcal H})\geq p$. Therefore, in view of Theorem~\ref{localhyper}, we have the assertion.
\end{proof}
Note that if ${\mathcal H}=\mathsf{K}G^p(\mathcal{F})$, then, in view of Theorem~\ref{inequalities},
we have
$${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^p_p,{\mathcal H}))+p\geq |V(\mathcal{F})|-\operatorname{alt}_p(\mathcal{F}).$$
This implies that the previous corollary is a generalization of Meunier's lower bound for the
local chromatic number of $\mathsf{K}G^p(\mathcal{F})$.
\subsection{\bf Plan}
Section~\ref{intro} contains some background and essential definitions used
throughout the paper.
In Section~\ref{definition}, we present some new topological
tools which help us in the proofs of the main results.
Section~\ref{sec:proofs} is devoted to the proofs of
Theorem~\ref{maincolorfulindex} and Theorem~\ref{inequalities}.
\section{\bf Preliminaries}\label{intro}
\subsection{{\bf Topological Indices and Lower Bound for Chromatic Number}}
We assume basic knowledge in combinatorial algebraic topology.
Here, we give a brief review of some essential notation and definitions which will be needed throughout the paper. For more details, one can see the book by Matou{\v{s}}ek~\cite{MR1988723}. Also, the definitions of box-complex, hom-complex, and cross-index
will be generalized to $\mathbb{Z}_p$-box-complex, $\mathbb{Z}_p$-hom-complex, and $\mathbb{Z}_p$-cross-index, respectively.
Let $\mathbb{G}$ be a finite nontrivial group which acts on a topological space $X$.
We call $X$ a {\it topological $\mathbb{G}$-space} if for each $g\in \mathbb{G}$, the map $g:X\longrightarrow X$ mapping $x\mapsto g\cdot x$ is continuous. A {\it free topological $\mathbb{G}$-space $X$} is a topological $\mathbb{G}$-space such that $\mathbb{G}$ acts on it freely, i.e., for each $g\in \mathbb{G}\setminus\{e\}$, the map $g:X\longrightarrow X$ has no fixed point. For two topological $\mathbb{G}$-spaces $X$ and $Y$, a continuous map
$f:X\longrightarrow Y$ is called a $\mathbb{G}$-map if $f(g\cdot x)=g\cdot f(x)$ for each $g\in \mathbb{G}$ and $x\in X$. We write $X\stackrel{\mathbb{G}}{\longrightarrow} Y$ to indicate that there is a $\mathbb{G}$-map from $X$ to $Y$. A map $f:X\longrightarrow Y$ is called a $\mathbb{G}$-equivariant map if $f(g\cdot x)=g\cdot f(x)$ for each $g\in \mathbb{G}$ and $x\in X$.
Simplicial complexes provide
a bridge between combinatorics and topology. A simplicial complex can be viewed
as a combinatorial object, called abstract simplicial complex, or as a topological
space, called geometric simplicial complex. Here, we just remind the definition of an
abstract simplicial complex. However, we assume that the reader is familiar with the concept of how an abstract simplicial complex and its geometric realization are connected to each other.
A {\it simplicial complex} is a pair $(V,K)$, where $V$ is a finite set and $K$ is a family of subsets of $V$ such that if $F\in K$ and $F'\subseteq F$, then $F'\in K$.
Any set in $K$ is called a simplex.
Since we may assume that $V=\bigcup_{F\in K}F$,
we can write $K$ instead of $(V, K)$.
The {\it dimension of $K$} is defined as follows:
$${\rm dim}(K)=\max\{|F|-1:\; F\in K\}.$$
The geometric realization of $K$ is denoted by $||K||$.
For two simplicial complexes $C$ and $K$, by a {\it simplicial map $f:C\longrightarrow K$,}
we mean a map from $V(C)$ to $V(K)$ such that the image of any
simplex of $C$ is a simplex of $K$.
For a nontrivial finite group $\mathbb{G}$, a {\it simplicial $\mathbb{G}$-complex} $K$
is a simplicial complex with a $\mathbb{G}$-action on its vertices such that
each $g\in \mathbb{G}$ induces a simplicial map from $K$ to $K$, that is
the map which maps $v$ to $g\cdot v$ for each $v\in V(K)$.
If for each $g\in \mathbb{G}\setminus\{e\}$, there is no fixed simplex under the
simplicial map made by $g$, then $K$ is called a
{\it free simplicial $\mathbb{G}$-complex.} For a simplicial $\mathbb{G}$-complex $K$,
if we take the affine extension, then $K$ is free if and only if $||K||$ is free.
For two simplicial $\mathbb{G}$-complexes $C$ and $K$, a simplicial map
$f:C\longrightarrow K$ is called a simplicial $\mathbb{G}$-map if $f(g\cdot v)=g\cdot f(v)$
for each $g\in \mathbb{G}$ and $v\in V(C)$. We write
$C\stackrel{\mathbb{G}} {\longrightarrow} K$, if there is a simplicial $\mathbb{G}$-map
from $C$ to $K$. Note that if $C\stackrel{\mathbb{G}}{\longrightarrow} K$,
then $||C||\stackrel{\mathbb{G}}{\longrightarrow} ||K||$. A map $f:C\longrightarrow K$
is called a $\mathbb{G}$-equivariant map, if $f(g\cdot v)=g\cdot f(v)$
for each $g\in \mathbb{G}$ and $v\in V(C)$.
For an integer $n\geq 0$ and a nontrivial finite group $\mathbb{G}$,
an {\it $E_n \mathbb{G}$ space} is a
free $(n-1)$-connected $n$-dimensional simplicial $\mathbb{G}$-complex.
A concrete example of an $E_n \mathbb{G}$ space is the $(n+1)$-fold
join $\mathbb{G}^{*(n+1)}$;
as a topological space, $\mathbb{G}^{*(n+1)}$ is the $(n+1)$-fold join of
a $|\mathbb{G}|$-point discrete space.
It is known that for any two $E_n \mathbb{G}$ spaces $X$ and $Y$,
there is a $\mathbb{G}$-map from $X$ to $Y$.
For a $\mathbb{G}$-space $X$, define
$${\rm ind}_{\mathbb{G}}(X)=\min\{n:\; X\stackrel{\mathbb{G}}{\longrightarrow}
E_n\mathbb{G}\}.$$
Note that here $E_n \mathbb{G}$ can be any $E_n \mathbb{G}$ space, since there is a
$\mathbb{G}$-map between any two $E_n \mathbb{G}$ spaces, see~\cite{MR1988723}. Also, for a simplicial
complex $K$, by ${\rm ind}_{\mathbb{G}}(K)$ we mean
${\rm ind}_{\mathbb{G}}(||K||)$. Throughout the paper,
for $\mathbb{G}=\mathbb{Z}_2$, we write ${\rm ind}(-)$
instead of ${\rm ind}_{\mathbb{Z}_2}(-)$.
\noindent{\bf Properties of the $\mathbb{G}$-index.} \cite{MR1988723} Let $\mathbb{G}$ be a finite nontrivial group.
\begin{itemize}
\item[{\rm (i)}] ${\rm ind}_{\mathbb{G}}(X)>{\rm ind}_{\mathbb{G}}(Y)$ implies $X\stackrel{\mathbb{G}}{\centernot\longrightarrow} Y$.
\item[{\rm (ii)}] ${\rm ind}_{\mathbb{G}}(E_n \mathbb{G})=n$ for any $E_n \mathbb{G}$ space.
\item[{\rm (iii)}] ${\rm ind}_{\mathbb{G}}(X*Y)\leq {\rm ind}_{\mathbb{G}}(X)+{\rm ind}_{\mathbb{G}}(Y)+1$.
\item[{\rm (iv)}] If $X$ is $(n-1)$-connected, then ${\rm ind}_{\mathbb{G}}(X)\geq n$.
\item[{\rm (v)}] If $K$ is a free simplicial $\mathbb{G}$-complex of dimension $n$, then ${\rm ind}_{\mathbb{G}}(K)\leq n$.
\end{itemize}
\subsection{{\bf $\mathbb{Z}_p$-Box-Complex, $\mathbb{Z}_p$-Poset, and $\mathbb{Z}_p$-Hom-Complex}}\label{Boxdefin}
In this subsection, for any $r$-uniform hypergraph ${\mathcal H}$, we define two objects: the $\mathbb{Z}_p$-box-complex of ${\mathcal H}$, which is a
simplicial $\mathbb{Z}_p$-complex, and the $\mathbb{Z}_p$-hom-complex of ${\mathcal H}$, which is a $\mathbb{Z}_p$-poset.
Moreover, to any $\mathbb{Z}_p$-poset $P$, we assign a combinatorial index called the $\mathbb{Z}_p$-cross-index of $P$.
\\
\noindent{\bf $\mathbb{Z}_p$-Box-Complex.}
Let $r\geq 2$ be a positive integer and $p\geq r$ be a prime number.
For an $r$-uniform hypergraph ${\mathcal H}$, define the {\it $\mathbb{Z}_p$-box-complex of ${\mathcal H}$,} denoted ${\rm B}_0({\mathcal H},{\mathbb{Z}_p})$,
to be a simplicial complex with the vertex set $\displaystyle\biguplus_{i=1}^pV({\mathcal H})=\mathbb{Z}_p\times V({\mathcal H})$ and the simplex set consisting of all
$\{\omega^1\}\times U_1\cup\cdots\cup \{\omega^p\}\times U_p$,
where
\begin{itemize}
\item $U_1,\ldots,U_p$ are pairwise disjoint subsets of $V({\mathcal H})$,
\item $\displaystyle\bigcup_{i=1}^p U_i\neq\varnothing$, and
\item the hypergraph ${\mathcal H}[U_1,U_2,\ldots,U_p]$
is a complete $r$-uniform $p$-partite hypergraph.
\end{itemize}
Note that some of $U_i$'s might be empty.
In fact, if $U_1,\ldots,U_p$ are pairwise disjoint subsets of $V({\mathcal H})$ and the number of nonempty $U_i$'s is less than $r$, then ${\mathcal H}[U_1,U_2,\ldots,U_p]$ is a complete $r$-uniform $p$-partite hypergraph and thus
$\{\omega^1\}\times U_1\cup\cdots\cup \{\omega^p\}\times U_p\in {\rm B}_0({\mathcal H},{\mathbb{Z}_p})$. For each $\epsilon\in{\mathbb{Z}_p}$
and each $(\epsilon',v)\in V({\rm B}_0({\mathcal H},{\mathbb{Z}_p}))$, define $\epsilon\cdot(\epsilon',v)=(\epsilon\cdot\epsilon',v)$. One can see that this action makes
${\rm B}_0({\mathcal H},{\mathbb{Z}_p})$ a free simplicial $\mathbb{Z}_p$-complex.
It should be mentioned that the $\mathbb{Z}_2$-box-complex ${\rm B}_0({\mathcal H},\mathbb{Z}_2)$ is extensively
studied in the literature, see~\cite{MR2279672,MR2351519}.
In the literature, for a graph $G$, the simplicial complex
${\rm B}_0(G,{\mathbb{Z}_2})$ is denoted by $B_0(G)$.
This simplicial complex is used to introduce some lower bounds for the chromatic
number of a given graph $G$, see~\cite{MR2279672}.
In particular, we have the following inequalities:
$$\chi(G)\geq {\rm ind}(B_0(G))+1\geq {\rm coind}(B_0(G))+1\geq |V({\mathcal F})|-{\rm alt}_2({\mathcal F})\geq{\rm cd}_2({\mathcal F}),$$
where ${\mathcal F}$ is any hypergraph such that ${\rm KG}^2({\mathcal F})$ and $G$ are
isomorphic, see~\cite{2014arXiv1403.4404A,2013arXiv1302.5394A,MR2279672}.\\
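For a minimal example, take $p=r=2$ and the graph $K_2$ with vertex set $\{1,2\}$. The maximal simplices of ${\rm B}_0(K_2,{\mathbb{Z}_2})$ are the four edges
$$\{(\omega^1,1),(\omega^1,2)\},\quad \{(\omega^2,1),(\omega^2,2)\},\quad \{(\omega^1,1),(\omega^2,2)\},\quad \{(\omega^1,2),(\omega^2,1)\},$$
so ${\rm B}_0(K_2,{\mathbb{Z}_2})$ is a $4$-cycle whose geometric realization is $\mathbb{Z}_2$-homeomorphic to the circle with the antipodal action; hence ${\rm ind}(B_0(K_2))=1$ and the first inequality above is tight, since $\chi(K_2)=2={\rm ind}(B_0(K_2))+1$.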
\noindent{\bf $\mathbb{Z}_p$-Poset.}
A partially ordered set, or simply a {\it poset}, is defined as an ordered pair $P=(V(P),\preceq)$, where $V(P)$ is a set called the ground set of $P$ and $\preceq$ is a partial order on $V(P)$.
For two posets $P$ and $Q$,
by an order-preserving map $\phi:P\longrightarrow Q$, we mean a map $\phi$
from $V(P)$ to $V(Q)$ such that for each $u,v\in V(P)$, if $u\preceq v$, then $\phi(u)\preceq \phi(v)$.
A poset $P$ is called a {\it $\mathbb{Z}_p$-poset}, if $\mathbb{Z}_p$ acts on $V(P)$
and furthermore,
for each $\epsilon\in \mathbb{Z}_p$, the map $\epsilon:V(P)\longrightarrow V(P)$ mapping $v\mapsto \epsilon\cdot v$ is an automorphism of $P$ (an order-preserving bijection).
If for each $\epsilon\in \mathbb{Z}_p\setminus\{e\}$, this map has no fixed point, then $P$ is called a {\it free $\mathbb{Z}_p$-poset}.
For two $\mathbb{Z}_p$-posets $P$ and $Q$,
by an order-preserving $\mathbb{Z}_p$-map $\phi:P\longrightarrow Q$, we mean
an order-preserving map from $V(P)$ to $V(Q)$ such that for each $v\in V(P)$ and $\epsilon\in \mathbb{Z}_p$, we have $\phi(\epsilon\cdot v)=\epsilon\cdot\phi(v)$.
If there exists such a map, we write $P\stackrel{\mathbb{Z}_p}{\longrightarrow} Q$.
For a nonnegative integer $n$ and a prime number $p$, let $Q_{n,p}$ be a free
$\mathbb{Z}_p$-poset with ground set $\mathbb{Z}_p\times[n+1]$ such that for any
two members $(\epsilon,i),(\epsilon',j)\in Q_{n,p}$, $(\epsilon,i)<_{Q_{n,p}}(\epsilon',j)$ if
$i<j$. Clearly, $Q_{n,p}$ is a free $\mathbb{Z}_p$-poset with the action $\epsilon\cdot(\epsilon',j)=(\epsilon\cdot\epsilon',j)$ for each $\epsilon\in\mathbb{Z}_p$ and $(\epsilon',j)\in Q_{n,p}$.
For a $\mathbb{Z}_p$-poset $P$, the {\it $\mathbb{Z}_p$-cross-index} of $P$, denoted
${\rm Xind}_{\mathbb{Z}_p}(P)$, is the least integer $n$ such that there is a
$\mathbb{Z}_p$-map from $P$ to $Q_{n,p}$. Throughout the paper, for $p=2$,
we write ${\rm Xind}(-)$ rather than ${\rm Xind}_{\mathbb{Z}_2}(-)$.
It should be mentioned that ${\rm Xind}(-)$ was first defined in~\cite{SiTaZs13}.
Let $P$ be a poset. Its {\it order complex} $\Delta P$ is the simplicial complex whose vertex set is
the ground set of $P$ and whose simplices are the chains of $P$.
One can see that if $P$ is a free $\mathbb{Z}_p$-poset, then $\Delta P$ is a free
simplicial $\mathbb{Z}_p$-complex.
Moreover, any order-preserving $\mathbb{Z}_p$-map $\phi:P\longrightarrow Q$ can be lifted
to a simplicial
$\mathbb{Z}_p$-map from $\Delta P$ to $\Delta Q$. Clearly, there is a simplicial
$\mathbb{Z}_p$-map from $\Delta Q_{n,p}$ to $\mathbb{Z}_p^{*(n+1)}$ (identity map).
Therefore,
if ${\rm Xind}_{\mathbb{Z}_p}(P)=n$, then
we have a simplicial $\mathbb{Z}_p$-map from $\Delta P$ to $\mathbb{Z}_p^{*(n+1)}$. This
implies that
${\rm Xind}_{\mathbb{Z}_p}(P)\geq {\rm ind}_{\mathbb{Z}_p}(\Delta P)$.
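For instance, for $n=1$ and $p=2$, the poset $Q_{1,2}$ has ground set $\mathbb{Z}_2\times[2]$ with $(\epsilon,1)<_{Q_{1,2}}(\epsilon',2)$ for all $\epsilon,\epsilon'\in\mathbb{Z}_2$; its two-element chains are exactly the four pairs $\{(\epsilon,1),(\epsilon',2)\}$, so $\Delta Q_{1,2}$ is a $4$-cycle, which coincides with $\mathbb{Z}_2^{*2}$ under the identification of $(\epsilon,j)$ with the copy of $\epsilon$ in the $j$-th join factor.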
Throughout the paper, for each $(\epsilon, j)\in Q_{n,p}$, when we speak about the sign of
$(\epsilon, j)$ and the absolute value of $(\epsilon, j)$, we mean $\epsilon$ and
$j$, respectively.
\begin{alphtheorem}{\rm \cite{AliHajiMeu2016}}\label{altercrossindex}
Let $P$ be a free ${\mathbb Z}_2$-poset and $\phi:P\longrightarrow Q_{s,2}$ be an order-preserving ${\mathbb Z}_2$-map. Then $P$ contains a chain $p_1\prec_P\cdots\prec_P p_{k}$ such that $k= {\rm Xind}(P)+1$ and
the signs of $\phi(p_i)$ and $\phi(p_{i+1})$ differ
for each $i\in[k-1]$. Moreover, if $s= {\rm Xind}(P)$, then for any $(s+1)$-tuple
$(\epsilon_1,\ldots,\epsilon_{s+1})\in\mathbb{Z}_2^{s+1}$, there is at least one chain
$p_1\prec_P\cdots\prec_P p_{s+1}$ such that $\phi(p_i)=(\epsilon_i,i)$ for each $i\in[s+1]$.
\end{alphtheorem}
\noindent{\bf $\mathbb{Z}_p$-Hom-Complex.}
Let ${\mathcal H}$ be an $r$-uniform hypergraph. Also, let $p\geq r$ be a prime number.
The {\it $\mathbb{Z}_p$-hom-complex} ${\rm Hom}(K^r_p,{\mathcal H})$ is a free
$\mathbb{Z}_p$-poset with the ground set consisting of all ordered $p$-tuples
$(U_1,\ldots,U_p)$, where the $U_i$'s are nonempty pairwise disjoint subsets of $V({\mathcal H})$ and
${\mathcal H}[U_1,\ldots,U_p]$ is a complete $r$-uniform $p$-partite hypergraph.
For two $p$-tuples $(U_1,\ldots,U_p)$ and $(U'_1,\ldots,U'_p)$ in
${\rm Hom}(K^r_p,{\mathcal H})$, we define $(U_1,\ldots,U_p)\preceq(U'_1,\ldots,U'_p)$
if $U_i\subseteq U'_i$ for each $i\in[p]$.
Also, for each $\omega^i\in \mathbb{Z}_p=\{\omega^1,\ldots,\omega^p\}$, let
$\omega^i\cdot (U_1,\ldots,U_p)=(U_{1+i},\ldots,U_{p+i})$, where $U_j=U_{j-p}$ for $j>p$.
Clearly, this action is a free $\mathbb{Z}_p$-action on ${\rm Hom}(K^r_p,{\mathcal H})$.
Consequently, ${\rm Hom}(K^r_p,{\mathcal H})$ is a free $\mathbb{Z}_p$-poset with
this $\mathbb{Z}_p$-action.
For a nonempty graph $G$ and for $p=2$, it is proved~\cite{2014arXiv1403.4404A,2013arXiv1302.5394A,SiTaZs13,MR2279672} that
\begin{equation}\label{equation}
\begin{array}{lll}
\chi(G) &\geq & {\rm Xind}({\rm Hom}(K_2,G))+2 \geq {\rm ind}(\Delta {\rm Hom}(K_2,G))+2 \geq {\rm ind}(B_0(G))+1\\
&\geq & {\rm coind}(B_0(G))+1\geq |V({\mathcal F})|-{\rm alt}_2({\mathcal F})
\geq {\rm cd}_2({\mathcal F}),
\end{array}
\end{equation}
where ${\mathcal F}$ is any hypergraph such that ${\rm KG}^2({\mathcal F})$ and $G$ are isomorphic.\\
\section{\bf Notations and Tools}\label{definition}
For a simplicial complex $K$, by $\operatorname{sd} K$, we mean the first barycentric subdivision of $K$.
It is the simplicial complex whose vertex set is the set of nonempty simplices of $K$
and whose simplices are the collections of simplices of $K$ which are pairwise
comparable by inclusion. Throughout the paper, by $\sigma_{t-1}^{r-1}$, we mean the
$(t-1)$-dimensional simplicial complex with vertex set $\mathbb{Z}_r$
containing all $t$-subsets
of $\mathbb{Z}_r$ as its maximal simplices.
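For instance, $\sigma^{p-1}_{p-2}$, which appears repeatedly below, is the boundary complex of the $(p-1)$-dimensional simplex on the vertex set $\mathbb{Z}_p$; in particular, for $p=3$ it is the $3$-cycle on $\mathbb{Z}_3$.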
The join of two simplicial complexes $C$ and $K$, denoted $C*K$, is a simplicial complex with the vertex set $V(C)\biguplus V(K)$ and such that the set of its simplices is
$\{F_1\biguplus F_2:\; F_1\in C\mbox{ and } F_2\in K\}$.
Clearly, we can see $\mathbb{Z}_r$ as a $0$-dimensional simplicial complex.
Note that the vertex set of simplicial complex $\operatorname{sd}\mathbb{Z}_r^{*\alpha}$ can be identified with $(\mathbb{Z}_r\cup\{0\})^\alpha\setminus\{\boldsymbol{0}\}$ and the vertex set of $(\sigma^{r-1}_{t-1})^{*n}$ is the set of all pairs $(\epsilon,i)$, where $\epsilon\in \mathbb{Z}_r$ and $i\in [n]$.
\subsection{{\bf $\mathbb{Z}_p$-Tucker-Ky Fan lemma}}
The famous Borsuk-Ulam theorem has many generalizations which have been extensively used in investigating graph coloring properties. Some of these interesting generalizations are the Tucker lemma~\cite{MR0020254}, the $\mathbb{Z}_p$-Tucker lemma~\cite{MR1893009}, and the Tucker-Ky~Fan lemma~\cite{MR0051506}. For more details about the Borsuk-Ulam theorem and its generalizations, we refer the reader to \cite{MR1988723}.
Actually, the Tucker lemma is a combinatorial counterpart of the Borsuk-Ulam theorem. There are several interesting and surprising applications of the Tucker lemma in combinatorics, including a combinatorial proof of the Lov{\'a}sz-Kneser
theorem by Matou{\v{s}}ek~\cite{MR2057690}.
\begin{alphlemma}\label{tuckeroriginal}
{\rm(Tucker lemma \cite{MR0020254}).}
Let $m$ and $n$ be positive integers and
$\lambda:\{-1,0,+1\}^n\setminus \{(0,\ldots,0)\} \longrightarrow \{\pm 1, \pm 2,\ldots ,\pm m\}$
be a map satisfying the following properties:
\begin{itemize}
\item for any $X\in \{-1,0,+1\}^n\setminus \{\boldsymbol{0}\}$, we have $\lambda(-X)=-\lambda(X)$ {\rm (}a
$\mathbb{Z}_2$-equivariant map{\rm ),}
\item no two signed vectors $X$ and $Y$ are such that
$X\subseteq Y$ and $\lambda(X) =-\lambda(Y)$.
\end{itemize}
Then, we have $m \geq n$.
\end{alphlemma}
Another interesting generalization of the Borsuk-Ulam theorem is
Ky~Fan's lemma~\cite{MR0051506}. This generalization ensures that with the same assumptions as in Lemma~\ref{tuckeroriginal}, there is an odd number of chains
$X_1\subseteq X_2\subseteq \cdots \subseteq X_n$ such that $$\{\lambda(X_1),\ldots,\lambda(X_n)\}=\{+c_1,-c_2,\ldots ,
(- 1)^{n-1}c_n\},$$ where $1\leq c_1 < \cdots < c_n \leq m$.
Ky~Fan's lemma has been used in several articles to
study some coloring properties of graphs, see \cite{AliHajiMeu2016,MR2763055,MR2837625}.
There are also some other generalizations of
Tucker Lemma.
A $\mathbb{Z}_p$ version of the Tucker lemma, called the $\mathbb{Z}_p$-Tucker lemma, was proved by Ziegler~\cite{MR1893009} and extended by Meunier~\cite{MR2793613}.
In the next subsection, we present a $\mathbb{Z}_p$ version of Ky~Fan's lemma,
which we call the $\mathbb{Z}_p$-Tucker-Ky Fan lemma.
\subsection{{\bf New Generalizations of Tucker Lemma}}
Before presenting our results, we need to introduce
some functions having key roles in the paper.
Throughout the paper, we are going to use these functions repeatedly.
Let $m$ be a positive integer.
Recall that $(\sigma^{p-1}_{p-2})^{*m}$ is a free simplicial $\mathbb{Z}_p$-complex with vertex set $\mathbb{Z}_p\times [m]$. \\
\noindent{\bf The value function $l(-)$.}
Let $\tau\in (\sigma^{p-1}_{p-2})^{*m}$ be a simplex.
For each $\epsilon\in \mathbb{Z}_p$, define
$\tau^\epsilon=\left\{(\epsilon,j):\; (\epsilon,j)\in \tau\right\}.$
Moreover, define
$$l(\tau)=\max\left\{\displaystyle|\bigcup_{\epsilon\in\mathbb{Z}_p} B^\epsilon|:\; \forall\epsilon\in\mathbb{Z}_p ,\; B^\epsilon\subseteq \tau^\epsilon\mbox{ and } \forall \epsilon_1,\epsilon_2\in\mathbb{Z}_p,\; |\;|B^{\epsilon_1}|-|B^{\epsilon_2}|\;|\leq 1 \right\}.$$
Note that if we set $h(\tau)=\displaystyle\min_{\epsilon\in \mathbb{Z}_p}|\tau^\epsilon|$, then
$$l(\tau)=p\cdot h(\tau)+|\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|>h(\tau)\}|.$$\\
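For instance, for $p=3$ and a simplex $\tau$ with $|\tau^{\omega^1}|=3$, $|\tau^{\omega^2}|=2$, and $|\tau^{\omega^3}|=1$, we have $h(\tau)=1$ and
$$l(\tau)=3\cdot 1+\left|\{\epsilon:\; |\tau^\epsilon|>1\}\right|=3+2=5,$$
which agrees with the definition: a largest admissible choice takes $B^{\omega^1},B^{\omega^2},B^{\omega^3}$ of sizes $2,2,1$.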
\noindent{\bf The sign functions $s(-)$ and $s_0(-)$.}
For an $a\in[m]$,
let $W_a$ be the set of all simplices $\tau\in (\sigma_{p-2}^{p-1})^{*m}$ such that $|\tau^\epsilon|\in\{0,a\}$ for each $\epsilon\in\mathbb{Z}_p$. Let $W=\displaystyle\bigcup_{a=1}^{m}W_a$.
Choose an arbitrary $\mathbb{Z}_p$-equivariant map $s:W\longrightarrow \mathbb{Z}_p$.
Also, consider a $\mathbb{Z}_p$-equivariant map $s_0:\sigma_{p-2}^{p-1}\longrightarrow \mathbb{Z}_p$.
Note that since $\mathbb{Z}_p$ acts freely on both $\sigma_{p-2}^{p-1}$ and $W$,
these maps can be easily built by choosing one representative in each orbit. It should be mentioned that both functions $s(-)$ and $s_0(-)$ are first introduced in~\cite{Meunier14}.
Now, we are in a position to generalize the Tucker-Ky Fan lemma to the $\mathbb{Z}_p$-Tucker-Ky Fan lemma.
\begin{lemma}{\rm ($\mathbb{Z}_p$-Tucker-Ky Fan lemma).}\label{Z_pfanlemma}
Let $m,n,p$ and $\alpha$ be nonnegative integers, where
$m,n\geq 1$, $m\geq \alpha\geq 1$, and $p$ is prime. Let
$$
\begin{array}{crcl}
\lambda: & (\mathbb{Z}_p\cup\{0\})^n\setminus\{\boldsymbol{0}\} &\longrightarrow & \mathbb{Z}_p\times[m]\\
& X &\longmapsto & (\lambda_1(X),\lambda_2(X))
\end{array}$$ be a $\mathbb{Z}_p$-equivariant map
satisfying the following conditions.
\begin{itemize}
\item For $X_1\subseteq X_2\in \left(\mathbb{Z}_p\cup\{0\}\right)^n\setminus\{\boldsymbol{0}\}$,
if $\lambda_2(X_1)=\lambda_2(X_2)\leq \alpha$, then $\lambda_1(X_1)=\lambda_1(X_2)$.
\item For $X_1\subseteq X_2\subseteq\cdots \subseteq X_p\in \left(\mathbb{Z}_p\cup\{0\}\right)^n\setminus\{\boldsymbol{0}\}$,
if $\lambda_2(X_1)=\lambda_2(X_2)=\operatorname{cd}ots=\lambda_2(X_p)\geq\alpha+1$, then
$$\left|\left\{\lambda_1(X_1),\lambda_1(X_2),\ldots,\lambda_1(X_p)\right\}\right|<p.$$
\end{itemize}
Then there is a chain $$Z_1\subset Z_2\subset\cdots\subset Z_{n-\alpha}\in \left(\mathbb{Z}_p\cup\{0\}\right)^n\setminus\{\boldsymbol{0}\}$$
such that
\begin{enumerate}
\item for each $i\in [n-\alpha]$, $\lambda_2(Z_i)\geq \alpha+1$,
\item for each $i\neq j\in [n-\alpha]$, $\lambda(Z_i)\neq \lambda(Z_j)$, and
\item\label{condition3} for each
$\epsilon\in\mathbb{Z}_p$,
$$\left\lfloor{n-\alpha\over p}\right\rfloor\leq \left|\left\{j:\; \lambda_1(Z_j)=\epsilon\right\}\right|\leq \left\lceil{n-\alpha\over p}\right\rceil.$$
\end{enumerate}
In particular, $n-\alpha\leq (p-1)(m-\alpha)$.
\end{lemma}
\begin{proof}
Note that the map $\lambda$ can be considered as a simplicial $\mathbb{Z}_p$-map from $\operatorname{sd} \mathbb{Z}_p^{*n}$ to
$(\mathbb{Z}_p^{*\alpha})*((\sigma_{p-2}^{p-1})^{*(m-\alpha)}).$
Let $K={\rm Im}(\lambda)$.
Note that each simplex in $K$ can be represented in a unique form $\sigma\cup\tau$ such that
$\sigma\in \mathbb{Z}_p^{*\alpha}$ and $\tau \in (\sigma_{p-2}^{p-1})^{*(m-\alpha)}.$
In view of the definition of the function $l(-)$ and the properties that $\lambda$ satisfies, to prove the assertion it suffices to show that there is a simplex $\sigma\cup\tau\in K$ such that $l(\tau)\geq n-\alpha$. For a contradiction, suppose that for each $\sigma\cup\tau\in K$, we have $l(\tau)\leq n-\alpha-1$.
Define the map
$$\Gamma: \operatorname{sd} K\longrightarrow \mathbb{Z}_p^{*(n-1)}$$
such that for each vertex $\sigma\cup\tau\in V(\operatorname{sd} K)$,
\begin{itemize}
\item if $\tau=\varnothing$, then $\Gamma(\sigma\cup\tau)=(\epsilon, j)$,
where $j$ is the maximum possible value such that $(\epsilon, j)\in\sigma$.
Note that since $\sigma\in \mathbb{Z}_p^{*\alpha}$,
there is only one $\epsilon\in\mathbb{Z}_p$ for which
the maximum is attained. Therefore, in this case, the function $\Gamma$ is well-defined.
\item if $\tau\neq\varnothing$, then define $h(\tau)=\displaystyle\min_{\epsilon\in \mathbb{Z}_p}|\tau^\epsilon|.$
\begin{enumerate}[label={\rm (\roman*)}]
\item If $h(\tau)=0$, then define $\bar{\tau}=\{\epsilon\in \mathbb{Z}_p:\;
\tau^\epsilon= \varnothing\}\in \sigma^{p-1}_{p-2}$ and
$$\Gamma(\sigma\cup\tau)=\left(s_0(\bar\tau), \alpha+l(\tau)\right).$$
\item If $h(\tau)> 0$, then define $\bar{\tau}=\displaystyle
\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|=h(\tau)\}} \tau^\epsilon\in W$
and $$\Gamma(\sigma\cup\tau)=\left(s(\bar\tau), \alpha+l(\tau)\right).$$
\end{enumerate}
\end{itemize}
Now, we claim that $\Gamma$ is a simplicial $\mathbb{Z}_p$-map from $\operatorname{sd} K$ to $\mathbb{Z}_p^{*(n-1)}$. It is clear that $\Gamma$ is a $\mathbb{Z}_p$-equivariant map.
For a contradiction, suppose that there are $\sigma\cup\tau,\sigma'\cup\tau' \in \operatorname{sd} K$
such that $\sigma\subseteq \sigma'$, $\tau\subseteq\tau'$, $\Gamma(\sigma\cup\tau)=(\epsilon,\beta)$, and $\Gamma(\sigma'\cup\tau')=(\epsilon',\beta)$, where $\epsilon\neq \epsilon'$.
First note that in view of the definition of $\Gamma$ and the assumption $\Gamma(\sigma\cup\tau)=(\epsilon,\beta)$ and $\Gamma(\sigma'\cup\tau')=(\epsilon',\beta)$, the case $\tau=\varnothing$ and $\tau'\neq\varnothing$ is not possible.
If $\tau'=\varnothing$, then
$\tau=\tau'=\varnothing$ and we should have
$(\epsilon,\beta),(\epsilon',\beta)\in\sigma'\in \mathbb{Z}_p^{*\alpha}$ which implies that $\epsilon=\epsilon'$, a contradiction.
If $\varnothing\neq \tau\subseteq \tau'$, then in view of definition of $\Gamma$, we should have $l(\tau)=l(\tau')$.
We consider three different cases.\\
\begin{enumerate}[label={\rm (\roman*)}]
\item If $h(\tau)=h(\tau')=0$, then $$\epsilon=s_0(\{\epsilon\in \mathbb{Z}_p:\;
\tau^\epsilon= \varnothing\})\neq s_0(\{\epsilon\in \mathbb{Z}_p:\;
{\tau'}^\epsilon= \varnothing\})=\epsilon'.$$
Therefore, $ \{\epsilon\in \mathbb{Z}_p:\;
{\tau'}^\epsilon= \varnothing\}\subsetneq\{\epsilon\in \mathbb{Z}_p:\;
\tau^\epsilon= \varnothing\}$. This implies that
$$l(\tau')=p-|\{\epsilon\in \mathbb{Z}_p:\;
{\tau'}^\epsilon= \varnothing\}|>p-|\{\epsilon\in \mathbb{Z}_p:\;
\tau^\epsilon= \varnothing\}|=l(\tau),$$
a contradiction.\\
\item If $h(\tau)=0$ and $h(\tau')>0$, then we have
$l(\tau)\leq p-1$ and $l(\tau')\geq p$, which contradicts the fact that $l(\tau)=l(\tau')$.\\
\item If $h(\tau)>0$ and $h(\tau')>0$, then
note that
$$ l(\tau)=p\cdot h(\tau)+|\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|>h(\tau)\}|
\mbox{ and } l(\tau')=p\cdot h(\tau')+|\{\epsilon\in\mathbb{Z}_p:\;
|{\tau'}^\epsilon|>h(\tau')\}|.$$
For this case, two different sub-cases will be distinguished.
\begin{itemize}
\item[(a)] If $h(\tau)=h(\tau')=h$, then
$$\epsilon=s(\displaystyle\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|=h\}} \tau^\epsilon)\neq s(\displaystyle\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |{\tau'}^\epsilon|=h\}} {\tau'}^\epsilon)=\epsilon'.$$
Clearly, it implies that $$\displaystyle\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|=h\}} \tau^\epsilon\neq \displaystyle\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |{\tau'}^\epsilon|=h\}} {\tau'}^\epsilon.$$
Note that $\tau\subseteq \tau'$ and $h=\displaystyle\min_{\epsilon\in \mathbb{Z}_p}|\tau^\epsilon|=\displaystyle\min_{\epsilon\in \mathbb{Z}_p}|{\tau'}^\epsilon|.$ Therefore, we should have
$$
\{\epsilon\in\mathbb{Z}_p:\; |{\tau'}^\epsilon|=h\} \subsetneq \{\epsilon\in\mathbb{Z}_p:\; |{\tau}^\epsilon|=h\}$$
and consequently $l(\tau)<l(\tau')$ which is a contradiction.
\item[(b)] If $h(\tau)<h(\tau')$, then
$$l(\tau)\leq p\cdot h(\tau)+p-1< p\cdot (h(\tau)+1)\leq l(\tau'),$$
a contradiction.
\end{itemize}
\end{enumerate}
Therefore, $\Gamma$ is a simplicial $\mathbb{Z}_p$-map from $\operatorname{sd} K$ to $\mathbb{Z}_p^{*(n-1)}$. Naturally, $\lambda$ can be lifted to a simplicial $\mathbb{Z}_p$-map $\bar\lambda:\operatorname{sd}^2 \mathbb{Z}_p^{*n}\longrightarrow \operatorname{sd} K$.
Thus $\Gamma\circ\bar\lambda$ is a simplicial $\mathbb{Z}_p$-map from $\operatorname{sd}^2 \mathbb{Z}_p^{*n}$ to
$\mathbb{Z}_p^{*(n-1)}$.
In view of Dold's theorem~\cite{MR711043,MR1988723},
the dimension of $\mathbb{Z}_p^{*(n-1)}$ should be strictly larger than the connectivity of $\operatorname{sd}^2 \mathbb{Z}_p^{*n}$, that is
$n-2>n-2$,
which is not possible.
\end{proof}
Lemma~\ref{Z_pfanlemma} provides a short and simple proof of Meunier's colorful result for Kneser hypergraphs (the next theorem) as follows.
\begin{alphtheorem}{\rm \cite{Meunier14}}\label{colorfulhyper}
Let ${\mathcal H}$ be a hypergraph and let $p$ be a prime number. Then, for any proper coloring $c:V({\rm KG}^p({\mathcal H}))\longrightarrow [C]$ {\rm(}$C$ arbitrary{\rm)}, the Kneser hypergraph ${\rm KG}^p({\mathcal H})$ contains a colorful
balanced complete $p$-uniform $p$-partite subhypergraph with
$|V({\mathcal H})|-{\rm alt}_p({\mathcal H})$ vertices.
\end{alphtheorem}
\begin{proof}
Consider a bijection $\pi:[n]\longrightarrow V({\mathcal H})$ such that
${\rm alt}_p({\mathcal H},\pi)={\rm alt}_p({\mathcal H}).$
We are going to define a map $$\begin{array}{cccc}
\lambda: & (\mathbb{Z}_p\cup\{0\})^n\setminus\{\boldsymbol{0}\} &\longrightarrow & \mathbb{Z}_p\times[m]\\
& X &\longmapsto & (\lambda_1(X),\lambda_2(X))
\end{array}$$ satisfying the conditions of Lemma~\ref{Z_pfanlemma}
and with parameters $n= |V({\mathcal H})|$, $m={\rm alt}_p({\mathcal H})+C$,
and $\alpha={\rm alt}_p({\mathcal H})$.
Assume that $2^{[n]}$ is equipped with a total ordering $\preceq$.
For each $X\in(\mathbb{Z}_p\cup\{0\})^n\setminus\{\boldsymbol{0}\}$, define $\lambda(X)$ as follows.
\begin{itemize}
\item If ${\rm alt}(X)\leq {\rm alt}_p({\mathcal H},\pi)$, then let $\lambda_1(X)$ be the first nonzero coordinate of $X$ and $\lambda_2(X)={\rm alt}(X)$.
\item If ${\rm alt}(X)\geq {\rm alt}_p({\mathcal H},\pi)+1$, then in view of the definition of
${\rm alt}_p({\mathcal H},\pi)$, there is some $\epsilon\in\mathbb{Z}_p$
such that $E({\mathcal H}[\pi(X^\epsilon)])\neq \varnothing$.
Define
$$c(X)=\max\left\{c(e):\; \exists\epsilon\in\mathbb{Z}_p\mbox { such that }
e\subseteq \pi(X^\epsilon)\right\}$$
and $\lambda_2(X)={\rm alt}_p({\mathcal H},\pi)+c(X)$.
Choose $\epsilon\in\mathbb{Z}_p$ such that
there is at least one edge $e\subseteq\pi (X^\epsilon)$ with $c(X)=c(e)$ and such that
$X^\epsilon$ is the maximum one having this property. By the maximum, we mean
the maximum according to the total ordering $\preceq$.
It is clear that $\epsilon$ is defined uniquely. Now, let $\lambda_1(X)=\epsilon$.
\end{itemize}
One can check that $\lambda$ satisfies the conditions of Lemma~\ref{Z_pfanlemma}.
Consider the chain $Z_1\subset Z_2\subset\cdots\subset Z_{n-{\rm alt}_p({\mathcal H},\pi)}$ whose existence is ensured by Lemma~\ref{Z_pfanlemma}.
Note that for each $i\in[n-{\rm alt}_p({\mathcal H},\pi)]$, we have $\lambda_2(Z_i)>{\rm alt}_p({\mathcal H},\pi)$. Consequently, $\lambda_2(Z_i)={\rm alt}_p({\mathcal H},\pi)+c(Z_i)$. Let $\lambda(Z_i)=(\epsilon_i,j_i)$.
Note that for each $i$, there is at least one edge
$e_{i,\epsilon_i}\subseteq \pi(Z_i^{\epsilon_i})\subseteq \pi(Z_{n-{\rm alt}_p({\mathcal H},\pi)}^{\epsilon_i})$ such that
$c(e_{i,\epsilon_i})=j_i-{\rm alt}_p({\mathcal H},\pi)$.
For each $\epsilon\in\mathbb{Z}_p$, define
$U_\epsilon=\{e_{i,\epsilon_i}:\; \epsilon_i=\epsilon\}.$
We have the following three properties for $U_\epsilon$'s.
\begin{itemize}
\item Since the chain $Z_1\subset Z_2\subset\cdots\subset
Z_{n-{\rm alt}_p({\mathcal H},\pi)}$ satisfies Condition~\ref{condition3} of
Lemma~\ref{Z_pfanlemma}, we have
$\left\lfloor{n-{\rm alt}_p({\mathcal H},\pi)\over p}\right\rfloor\leq
|U_\epsilon|\leq \left\lceil{n-{\rm alt}_p({\mathcal H},\pi)\over p}\right\rceil.$
\item The edges in $U_\epsilon$ get distinct colors.
If there are two edges $e_{i,\epsilon}$ and $e_{i',\epsilon}$ in $U_\epsilon$ such that
$c(e_{i,\epsilon})=c(e_{i',\epsilon})$, then $\lambda(Z_i)=\lambda(Z_{i'})$
which is not possible.
\item If $\epsilon\neq \epsilon'$, then for each $e\in U_\epsilon$ and $f\in U_{\epsilon'}$,
we have $e\cap f=\varnothing$. It is clear because
$e\subseteq\pi(Z_{n-{\rm alt}_p({\mathcal H},\pi)}^\epsilon)$,
$f\subseteq\pi(Z_{n-{\rm alt}_p({\mathcal H},\pi)}^{\epsilon'})$,
and
$$\pi(Z_{n-{\rm alt}_p({\mathcal H},\pi)}^\epsilon)\cap
\pi(Z_{n-{\rm alt}_p({\mathcal H},\pi)}^{\epsilon'})=\varnothing.$$
\end{itemize}
Now, it is clear that the subhypergraph ${\rm KG}^p({\mathcal H})[U_{\omega^1},\ldots,U_{\omega^p}]$ is the desired subhypergraph.
\end{proof}
The proof of the next lemma is similar to that of Lemma~\ref{Z_pfanlemma}.
\begin{lemma}\label{genfanlemma}
Let $C$ be a free simplicial $\mathbb{Z}_p$-complex
such that ${\rm ind}_{\mathbb{Z}_p}(C)\geq t$
and let $\lambda:C\longrightarrow (\sigma^{p-1}_{p-2})^{*m}$ be a simplicial $\mathbb{Z}_p$-map.
Then there is at least one $t$-dimensional simplex $\sigma\in C$ such that $\tau=\lambda(\sigma)$ is a $t$-dimensional simplex and for each $\epsilon\in \mathbb{Z}_p$, we have
$\lfloor{t+1\over p}\rfloor\leq |\tau^\epsilon|\leq\lceil{t+1\over p}\rceil.$
\end{lemma}
\begin{proof}
For simplicity of notation, let $K={\rm Im}(\lambda)$.
Clearly, to prove the assertion, it is enough to show that there is a simplex $\tau\in K$
such that $l(\tau)\geq t+1$.
Suppose, contrary to the assertion, that there is no such simplex.
Therefore, for each simplex $\tau$ of $K$, we have $l(\tau)\leq t$.
For each vertex $\tau\in V(\operatorname{sd} K)$, set $h(\tau)=\displaystyle\min_{\epsilon\in \mathbb{Z}_p}|\tau^\epsilon|$.
Let $\Gamma:\operatorname{sd} K\longrightarrow \mathbb{Z}_p^{*t}$ be a map such that for each vertex $\tau$ of $\operatorname{sd} K$, $\Gamma(\tau)$ is defined as follows.
\begin{enumerate}[label={\rm (\roman*)}]
\item If $h(\tau)=0$, then define $\bar{\tau}=\{\epsilon\in \mathbb{Z}_p:\;
\tau^\epsilon= \varnothing\}\in \sigma^{p-1}_{p-2}$ and
$$\Gamma(\tau)=\left(s_0(\bar\tau), l(\tau)\right).$$
\item If $h(\tau)> 0$, then define $\bar{\tau}=\displaystyle
\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|=h(\tau)\}} \tau^\epsilon\in W$
and $$\Gamma(\tau)=\left(s(\bar\tau), l(\tau)\right).$$
\end{enumerate}
Similar to the proof of Lemma~\ref{Z_pfanlemma},
$\Gamma\circ\bar{\lambda}:\operatorname{sd} C\longrightarrow \mathbb{Z}_p^{*t}$ is a simplicial $\mathbb{Z}_p$-map. This implies that ${\rm ind}_{\mathbb{Z}_p}(C)\leq t-1$ which is~not possible.
\end{proof}
The next proposition is an extension of Theorem~\ref{altercrossindex}; however, we lose some properties in this extension.
\begin{proposition}\label{Xindposet}
Let $P$ be a free ${\mathbb Z}_p$-poset and
$$\begin{array}{rll}
\psi: P & \longrightarrow & Q_{s,p}\\
p &\longmapsto & (\psi_1(p),\psi_2(p))
\end{array}$$
be an order-preserving ${\mathbb Z}_p$-map. Then $P$ contains a chain $p_1\prec_P\cdots\prec_P p_{k}$ such that
\begin{itemize}
\item $k= {\rm ind}_{\mathbb{Z}_p}(\Delta P)+1$,
\item for each $i\in[k-1]$, $\psi_2(p_i)< \psi_2(p_{i+1})$, and
\item for each $\epsilon\in\mathbb{Z}_p$,
$$\left\lfloor{k\over p}\right\rfloor\leq \left|\left\{j:\; \psi_1(p_j)=\epsilon\right\}\right|\leq \left\lceil{k\over p}\right\rceil.$$
\end{itemize}
\end{proposition}
\begin{proof}
Note that $\psi$ can be considered as a simplicial ${\mathbb Z}_p$-map from $\Delta P$ to $\mathbb{Z}_p^{*(s+1)}\subseteq (\sigma_{p-2}^{p-1})^{*(s+1)}$. Now, in view of Lemma~\ref{genfanlemma},
we have the assertion.
\end{proof}
Note that, for $p=2$,
since ${\rm Xind}(P)\geq {\rm ind}(\Delta P)$,
Theorem~\ref{altercrossindex} is better than Proposition~\ref{Xindposet}.
However, we cannot prove that Proposition~\ref{Xindposet}
is valid if we replace ${\rm ind}(\Delta P)$ by ${\rm Xind}(P)$.
In an unpublished paper, Meunier~\cite{unpublishedMeunier} introduced a
generalization of the Tucker-Ky~Fan lemma. He presented a version of
the $\mathbb{Z}_q$-Fan lemma which is valid for each odd integer
$q\geq 3$. To be more specific, he proved that if $q$ is an odd positive integer and
$\lambda:V(T)\longrightarrow \mathbb{Z}_q\times[m]$ is a $\mathbb{Z}_q$-equivariant labeling
of a $\mathbb{Z}_q$-equivariant
triangulation $T$ of an $(n-1)$-connected free $\mathbb{Z}_q$-space, then
there is at least one simplex in
$T$ whose vertices are labelled with labels
$(\epsilon_0,j_0),(\epsilon_1,j_1),\ldots,(\epsilon_n,j_n)$,
where $\epsilon_i\neq \epsilon_{i+1}$ and $j_i<j_{i+1}$ for
all $i\in\{0,1,\ldots,n-1\}$. Also, he asked the question if the result is true for
even value of $q$. This question received a positive answer owing
to the work of B.~Hanke et~al.~\cite{Hanke2009404}.
In both of the mentioned works, the proofs of the $\mathbb{Z}_q$-Fan lemma
rely on involved constructions. Since we use similar techniques in this paper, we take the opportunity to
propose the following generalization of this result with a short and simple
proof.
\begin{lemma}{\rm($\mathbb{G}$-Fan lemma).}\label{Gtucker}
Let $\mathbb{G}$ be a nontrivial finite group and
let $T$ be a free $\mathbb{G}$-simplicial complex such that ${\rm ind}_{\mathbb{G}}(T)= n$.
Assume that $\lambda:V(T)\longrightarrow \mathbb{G}\times[m]$ is a $\mathbb{G}$-equivariant labeling such that there is no edge in $T$ whose vertices are labelled with $(g,j)$ and $(g',j)$ with $g\neq g'$ and $j\in[m]$. Then there is at least one simplex in $T$ whose vertices are labelled with labels $(g_0,j_0),(g_1,j_1),\ldots,(g_n,j_n)$, where $g_i\neq g_{i+1}$ and $j_i<j_{i+1}$ for all $i\in\{0,1,\ldots,n-1\}$. In particular, $m\geq n+1$.
\end{lemma}
\begin{proof}
Clearly, the map $\lambda$ can be considered as a $\mathbb{G}$-simplicial map from $T$ to $\mathbb{G}^{*m}$. Naturally,
each nonempty simplex $\sigma\in \mathbb{G}^{*m}$ can be identified with a vector $X=(x_1,x_2,\ldots,x_m)\in (\mathbb{G}\cup\{0\})^m\setminus\{\boldsymbol{0}\}$. To prove the assertion, it is enough to show that there is a simplex $\sigma\in T$ such that ${\rm alt}(\lambda(\sigma))\geq n+1$.
For a contradiction, suppose that, for each simplex $\sigma\in T$, we have ${\rm alt}(\lambda(\sigma))\leq n$.
Define
$$\begin{array}{lrll}
\Gamma:&V(\operatorname{sd} T) &\longrightarrow & \mathbb{G}\times[n]\\
&\sigma&\longmapsto & \left(g,{\rm alt}(\lambda(\sigma))\right),
\end{array}$$
where $g$ is the first nonzero coordinate of the vector $\lambda(\sigma)\in (\mathbb{G}\cup\{0\})^m\setminus\{\boldsymbol{0}\}.$
One can check that $\Gamma$ is a simplicial $\mathbb{G}$-map from $\operatorname{sd} T$ to $\mathbb{G}^{*n}$.
Note $\mathbb{G}^{*n}$ is an $E_{n-1} \mathbb{G}$ space.
Consequently, ${\rm ind}_{\mathbb{G}}(\mathbb{G}^{*n})= n-1$.
This implies that ${\rm ind}_{\mathbb{G}}(T)\leq n-1$ which is a contradiction.
\end{proof}
\subsection{\bf Hierarchy of Indices}
The aim of this subsection is to introduce some tools for the proof of Theorem~\ref{inequalities}.
Let $n,\alpha$, and $p$ be integers where $n\geq 1$, $n\geq\alpha\geq 0$, and $p$ is prime.
Define
$$\displaystyle\Sigma_p(n,\alpha)=\Delta\left\{X\in(\mathbb{Z}_p\cup\{0\})^n:\; {\rm alt}(X)\geq \alpha+1\right\},$$
that is, the order complex of the set of such vectors partially ordered by $\subseteq$.
Note that $\displaystyle\Sigma_p(n,\alpha)$ is a free simplicial $\mathbb{Z}_p$-complex
with the vertex set $$\left\{X\in(\mathbb{Z}_p\cup\{0\})^n:\; {\rm alt}(X)\geq \alpha+1\right\}.$$
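For instance, $\Sigma_2(2,1)$ has exactly two vertices, namely $(\omega,\omega^2)$ and $(\omega^2,\omega)$, and these are incomparable; hence $\Sigma_2(2,1)$ consists of two points interchanged by the $\mathbb{Z}_2$-action, and ${\rm ind}_{\mathbb{Z}_2}(\Sigma_2(2,1))=0=n-\alpha-1$, so the lower bound in the next lemma is attained in this case.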
\begin{lemma}\label{indsigma}
Let $n,\alpha$, and $p$ be integers where $n\geq 1$, $n\geq\alpha\geq 0$, and $p$ is prime. Then
$${\rm ind}_{\mathbb{Z}_p}(\displaystyle\Sigma_p(n,\alpha))\geq n-\alpha-1.$$
\end{lemma}
\begin{proof}
Define
$$
\begin{array}{crcl}
\lambda: & \operatorname{sd} \mathbb{Z}_p^{*n} & \longrightarrow &
(\mathbb{Z}_p^{*\alpha})*\left(\displaystyle\Sigma_p(n,\alpha)\right)\\
& X & \longmapsto &
\left
\{\begin{array}{cl}
(\epsilon,{\rm alt}(X)) & \mbox{ if ${\rm alt}(X)\leq \alpha$}\\
X & \mbox{ if ${\rm alt}(X)\geq \alpha+1$},
\end{array}
\right.
\end{array}$$
where $\epsilon$ is the first nonzero term of $X$.
Clearly, the map $\lambda$ is a simplicial $\mathbb{Z}_p$-map.
Therefore,
$$
\begin{array}{lll}
n-1={\rm ind}_{\mathbb{Z}_p}( \operatorname{sd} \mathbb{Z}_p^{*n}) & \leq & {\rm ind}_{\mathbb{Z}_p}\left(\mathbb{Z}_p^{*\alpha}*\displaystyle\Sigma_p(n,\alpha)\right)\\
& \leq & {\rm ind}_{\mathbb{Z}_p}(\mathbb{Z}_p^{*\alpha})+{\rm ind}_{\mathbb{Z}_p}(\displaystyle\Sigma_p(n,\alpha))+1\\
&\leq &\alpha+{\rm ind}_{\mathbb{Z}_p}(\displaystyle\Sigma_p(n,\alpha))
\end{array}
$$ which completes the proof.
\end{proof}
\begin{proposition}\label{inequalityI}
Let ${\mathcal H}$ be a hypergraph. For any integer $r\geq 2$ and any prime number $p\geq r$, we have
$${\rm ind}_{\mathbb{Z}_p}({\rm B}_0({\rm KG}^r({\mathcal H}),\mathbb{Z}_p))+1\geq |V({\mathcal H})|-{\rm alt}_p({\mathcal H}).$$
\end{proposition}
\begin{proof}
For convenience, let $|V({\mathcal H})|=n$ and $\alpha={\rm alt}_p({\mathcal H})$.
Let $\pi:[n]\longrightarrow V({\mathcal H})$ be a bijection such that
${\rm alt}_p({\mathcal H},\pi)={\rm alt}_p({\mathcal H})$.
Define
$$
\begin{array}{lrll}
\lambda:& \Sigma_p(n,\alpha)& \longrightarrow & \operatorname{sd}{\rm B}_0({\rm KG}^r({\mathcal H}),\mathbb{Z}_p)\\
& X&\longmapsto & \{\omega^1\}\times U_1\cup\cdots\cup \{\omega^p\}\times U_p,
\end{array}
$$ where $U_i=\{e\in E({\mathcal H}):\; e\subseteq \pi(X^{\omega^i})\}.$
One can see that $\lambda$ is a simplicial $\mathbb{Z}_p$-map.
Consequently,
$${\rm ind}_{\mathbb{Z}_p}({\rm B}_0({\rm KG}^r({\mathcal H}),\mathbb{Z}_p))\geq
{\rm ind}_{\mathbb{Z}_p}(\Sigma_p(n,\alpha))\geq n-{\rm alt}_p({\mathcal H})-1.
$$
\end{proof}
\begin{proposition}\label{inequalityII}
Let ${\mathcal H}$ be an $r$-uniform hypergraph and $p\geq r$ be a prime number.
Then
$$ {\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p\geq {\rm ind}_{\mathbb{Z}_p}(\Delta{\rm Hom}(K^r_p,{\mathcal H}))+p\geq {\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))+1.$$
\end{proposition}
\begin{proof}
Since we already know that
${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))\geq {\rm ind}_{\mathbb{Z}_p}(\Delta{\rm Hom}(K^r_p,{\mathcal H}))$,
to prove the assertion, it is enough to show that
${\rm ind}_{\mathbb{Z}_p}(\Delta{\rm Hom}(K^r_p,{\mathcal H}))+p\geq {\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))+1.$
To this end, define
$$\begin{array}{llll}
\lambda: & \operatorname{sd} B_0({\mathcal H},\mathbb{Z}_p) & \longrightarrow & \left(\operatorname{sd}\sigma_{p-2}^{p-1}\right)*\displaystyle\left(\Delta{\rm Hom}(K^r_p,{\mathcal H})\right)\\
\end{array}$$
such that for each vertex
$\tau=\displaystyle\bigcup_{i=1}^p\left(\{\omega^i\}\times U_i\right)$ of $\operatorname{sd} B_0({\mathcal H},\mathbb{Z}_p)$, $\lambda(\tau)$ is defined as follows.
\begin{itemize}
\item If $U_i\neq\varnothing$ for each $i\in[p]$, then $\lambda(\tau)=\tau.$
\item If $U_i=\varnothing$ for some $i\in[p]$, then
$$\lambda(\tau)=\{\omega^i\in\mathbb{Z}_p:\; U_i=\varnothing\}.$$
\end{itemize}
One can check that the map $\lambda$ is a simplicial $\mathbb{Z}_p$-map. Also,
since $\sigma_{p-2}^{p-1}$ is a free simplicial $\mathbb{Z}_p$-complex of
dimension $p-2$, we have ${\rm ind}_{\mathbb{Z}_p}(\sigma_{p-2}^{p-1})\leq p-2$
(see properties of the $\mathbb{G}$-index in Section~\ref{intro}).
This implies that
$$
\begin{array}{lll}
{\rm ind}_{\mathbb{Z}_p}(B_0({\mathcal H},\mathbb{Z}_p))& \leq & {\rm ind}_{\mathbb{Z}_p}\left(\left(\operatorname{sd}\sigma_{p-2}^{p-1}\right)*
\left(\Delta{\rm Hom}(K^r_p,{\mathcal H})\right)\right)\\
& \leq &{\rm ind}_{\mathbb{Z}_p}(\sigma_{p-2}^{p-1})+ {\rm ind}_{\mathbb{Z}_p}(\Delta{\rm Hom}(K^r_p,{\mathcal H}))+1\\
&\leq & p-1+{\rm ind}_{\mathbb{Z}_p}(\Delta{\rm Hom}(K^r_p,{\mathcal H}))
\end{array}
$$ which completes the proof.
\end{proof}
\section{\bf Proofs of Theorem~\ref{maincolorfulindex} and Theorem~\ref{inequalities}}\label{sec:proofs}
Now, we are ready to prove Theorem~\ref{maincolorfulindex} and Theorem~\ref{inequalities}.\\
\noindent{\bf Proof of Theorem~\ref{maincolorfulindex}: Part (i).}
For convenience, let ${\rm ind}_{\mathbb{Z}_p}({\rm B}_0({\mathcal H},{\mathbb{Z}_p}))=t$.
Note that
$$
\begin{array}{crcl}
\Gamma: &\mathbb{Z}_p\times V({\mathcal H}) & \longrightarrow & \mathbb{Z}_p\times [C]\\
& (\epsilon,v) & \longmapsto & (\epsilon,c(v))
\end{array}$$ is a simplicial $\mathbb{Z}_p$-map from ${\rm B}_0({\mathcal H},{\mathbb{Z}_p})$ to $(\sigma^{p-1}_{r-2})^{*C}$. Therefore, in view of Lemma~\ref{genfanlemma}, there is a $t$-dimensional simplex $\tau\in{\rm im}(\Gamma)$ such that, for each $\epsilon\in \mathbb{Z}_p$, we have
$\lfloor{t+1\over p}\rfloor\leq |\tau^\epsilon|\leq\lceil{t+1\over p}\rceil.$
Let $\displaystyle\bigcup_{i=1}^p(\{\omega^i\} \times U_i)$ be the minimal simplex in $\Gamma^{-1}(\tau)$.
One can see that ${\mathcal H}[U_1,\ldots,U_p]$ is the desired subhypergraph.
Moreover,
since every color can appear in at most $r-1$ of the $U_i$'s, we have
$$C\geq {{\rm ind}_{\mathbb{Z}_p}({\rm B}_0({\mathcal H},{\mathbb{Z}_p}))+1\over r-1}.$$
\noindent{\bf Part (ii).}
For convenience, let ${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))=t$.
Define the map $$\lambda:{\rm Hom}(K^r_p,{\mathcal H})\longrightarrow \operatorname{sd}(\sigma_{r-2}^{p-1})^{*C}$$
such that for each $(U_1,\cdots,U_p)\in {\rm Hom}(K^r_p,{\mathcal H})$,
$$\lambda(U_1,\cdots,U_p)=\{\omega^1\}\times c(U_1) \cup\cdots\cup \{\omega^p\}\times c(U_p).$$
{\bf Claim.} There is a $p$-tuple $(U_1,\cdots,U_p)\in {\rm Hom}(K^r_p,{\mathcal H})$
such that for $\tau=\lambda(U_1,\cdots,U_p)$, we have $l(\tau)\geq {\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p$.
\noindent{\bf Proof of Claim.} Suppose, contrary to the claim, that for each $\tau\in {\rm Im}(\lambda)$, we have $l(\tau)\leq t+p-1$.
Note that $\operatorname{sd}(\sigma_{r-2}^{p-1})^{*C}$ can be considered as a free $\mathbb{Z}_p$-poset ordered by inclusion.
One can readily check that $\lambda$ is an order-preserving $\mathbb{Z}_p$-map.
Clearly, for each $\tau\in {\rm Im}(\lambda)$, we have $h(\tau)=\displaystyle\min_{\epsilon\in\mathbb{Z}_p}|\tau^\epsilon|\geq 1$ and consequently, $l(\tau)\geq p$. Now, define $$\bar{\tau}=\displaystyle
\bigcup_{\{\epsilon\in\mathbb{Z}_p:\; |\tau^\epsilon|=h(\tau)\}} \tau^\epsilon\in W\quad {\rm and }\quad \Gamma(\tau)=\left(s(\bar\tau), l(\tau)-p+1\right).$$
One can see that the map $\Gamma:{\rm im}(\lambda)\longrightarrow Q_{t-1,p}$ is an order-preserving $\mathbb{Z}_p$-map. Therefore,
$$\Gamma\circ\lambda:{\rm Hom}(K^r_p,{\mathcal H})\longrightarrow Q_{t-1,p}$$ is an
order-preserving $\mathbb{Z}_p$-map, which contradicts the fact that ${\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))=t$.
$\square$
Now, let $(U_1,\cdots,U_p)$ be a minimal $p$-tuple in ${\rm Hom}(K^r_p,{\mathcal H})$
such that for $$\tau=\lambda(U_1,\cdots,U_p)=\{\omega^1\}\times c(U_1) \cup\cdots\cup \{\omega^p\}\times c(U_p),$$ we have $l(\tau)= t+p$.
One can check that ${\mathcal H}[U_1,\cdots,U_p]$ is the desired complete
$r$-uniform $p$-partite subhypergraph. As in the proof of Part (i),
since every color can appear in at most $r-1$ of the $U_i$'s, we have
$$C\geq {{\rm Xind}_{\mathbb{Z}_p}({\rm Hom}(K^r_p,{\mathcal H}))+p\over r-1}.$$
$\square$
\noindent{\bf Proof of Theorem~\ref{inequalities}.}
It is simple to prove that
$|V({\mathcal F})|-{\rm alt}_p({\mathcal F})
\geq {\rm cd}_p({\mathcal F})$ for any hypergraph ${\mathcal F}$.
Therefore, the proof follows by Proposition~\ref{inequalityI} and Proposition~\ref{inequalityII}.
$\square$\\
\noindent{\bf Acknowledgements.}
I would like to acknowledge Professor Fr\'ed\'eric~Meunier for interesting discussions about the paper and his invaluable comments.
Also, I would like to thank Professor Hossein~Hajiabolhasan and Mrs~Roya~Abyazi~Sani for their useful comments.
\end{document} |
\begin{document}
\title{On pattern-avoiding permutons
\thanks{
JH: Research supported by Czech Science Foundation Project 21-21762X. FG, KP: This work has received funding from the MUNI Award in Science and Humanities (MUNI/I/1677/2018) of the Grant Agency of Masaryk University. GK: The work on the project leading to this application has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 741420), from the \'UNKP-20-5 New National Excellence Program of the Ministry of Innovation and Technology from the source of the National Research, Development and Innovation Fund, from Lend\"ulet grant no. 2022-58 and from the J\'anos Bolyai Scholarship of the Hungarian Academy of Sciences.
}
\\
}
\author{Frederik Garbe\thanks{Faculty of Informatics, Masaryk University, Brno, Czech Republic, E-mail: {\tt [email protected]}} \and Jan Hladk\'y\thanks{Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic, E-mail: {\tt [email protected]}}
\and Gábor Kun\thanks{Alfr\'ed R\'enyi Institute of Mathematics, Budapest, Hungary and Institute of Mathematics, E\"otv\"os L\'or\'and University, Budapest, Hungary, E-mail: {\tt [email protected]}} \and
Kristýna Pekárková\thanks{Faculty of Informatics, Masaryk University, Brno, Czech Republic, E-mail: {\tt [email protected]}}}
\date{}
\maketitle
\begin{abstract}
The theory of limits of permutations leads to limit objects called permutons, which are certain Borel measures on the unit square.
We prove that permutons avoiding a given permutation of order $k$ have a particularly simple structure. Namely, almost every fiber of the disintegration of the permuton (say, along the x-axis) consists of at most $k-1$ atoms, and this bound is best possible.
\end{abstract}
\section{Introduction}\label{sec:Intro}
The main result of this paper concerns limit properties of pattern avoiding permutations.
Here, by a \emph{permutation} we mean a bijection $\pi:[n]\to [n]$ for some $n\in\mathbb{N}$. We write $\mathbb{S}(n)$ for the set of all permutations on $[n]$. The fact that $[n]$ is equipped with the natural order allows us to define the notion of a pattern. Namely, for $k\in [n]$ and for a $k$-element set $K\in\binom{[n]}{k}$ we say that \emph{$K$ induces a pattern $A\in\mathbb{S}(k)$ in $\pi$} if for all $i,j\in [k]$ we have that $A(i)<A(j)$ if and only if the image of the $i$th smallest element of $K$ under $\pi$ is smaller than the image of the $j$th smallest element of $K$ under $\pi$. In particular, the \emph{density of $A$ in $\pi$}, denoted by $t(A,\pi)$, is the proportion of $k$-tuples $K$ that induce $A$. We say that $\pi$ is \emph{$A$-avoiding} if $t(A,\pi)=0$. Pattern avoidance is one of the most vivid parts of the combinatorics of permutations, and indeed its treatment spans most of the standard books on permutations~\cite{MR2919720,MR3012380}.
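
For readers who prefer a computational description, the following short Python sketch (an illustration only, with naming of our own choosing; it is not used anywhere in the paper) computes $t(A,\pi)$ for finite permutations directly from the definition above, using $0$-indexed tuples.
\begin{verbatim}
from itertools import combinations
from math import comb

def induced_pattern(values):
    # pattern induced by a sequence of distinct values, written 0-indexed
    ranks = sorted(range(len(values)), key=lambda i: values[i])
    pattern = [0] * len(values)
    for rank, idx in enumerate(ranks):
        pattern[idx] = rank
    return tuple(pattern)

def density(A, pi):
    # proportion of k-element index sets of pi inducing the pattern A
    k, n = len(A), len(pi)
    hits = sum(1 for K in combinations(range(n), k)
               if induced_pattern([pi[i] for i in K]) == tuple(A))
    return hits / comb(n, k)

# the decreasing permutation avoids the increasing pattern of order 2
print(density((0, 1), (3, 2, 1, 0)))   # 0.0
print(density((1, 0), (3, 2, 1, 0)))   # 1.0
\end{verbatim}
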
A classical result that can be understood in this way is that of Erd\H os-Szekeres~\cite{esz} on monotone patterns. Most of the questions in the area of pattern avoidance concern a setting in which the pattern is fixed and $n$ is large. A very broad question is how restrictive the property of avoiding a specific pattern is. The most famous result in this direction is due to Marcus and Tardos~\cite{MaTa04}, formerly known as the Stanley--Wilf conjecture, and says that for any fixed pattern $A$ there exists a constant $c_{A}$ so that the number of $A$-avoiding permutations of any order
$n$ is at most $c_{A}^{n}$. The monographs of B\'ona~\cite{MR2919720} and Kitaev~\cite{MR3012380} contain many further results regarding the number and structure of pattern avoiding permutations.
Following the success of the theory of limits of dense graphs, Hoppen,
Kohayakawa, Moreira, Ráth, and Sampaio~\cite{HoKo13} introduced a limit theory for finite permutations; the name and the following measure theoretic view on the limit object were introduced by Kr\'al' and Pikhurko~\cite{KrPi13}. A \emph{permuton} $\Gamma$ is a Borel probability measure on $[0,1]^{2}$ with uniform marginals, that is, $\Gamma(Z\times[0,1])=\Gamma([0,1]\times Z)$ is equal to the Lebesgue measure of $Z$ for any Borel set $Z\subset[0,1]$. The main feature of the theory of permutons is that they generalise permutations and form a compact metric space with the so called \emph{rectangular distance}. Furthermore, pattern densities are continuous with respect to this metric.
We now formally define pattern densities for permutons. Recall that for $A\in \mathbb{S}(k)$ and $\pi\in\mathbb{S}(n)$ with $k\le n$ we defined the density of $A$ in $\pi$ as the proportion of $k$-tuples of $[n]$ which induce the pattern $A$. To define pattern avoidance for permutons we will work with geometric representations of a pattern $A\in \mathbb{S}(k)$. Let $S_A\subset ([0,1]^2)^k$ be the collection of all sets of $k$ points $(x_1,y_1),(x_2,y_2),\ldots,(x_k,y_k)$ such that for all $1\le i<j\le k$ we have $x_i<x_j$ and further that $y_i<y_j$ if and only if $A(i)<A(j)$. We call each element of $S_A$ a \emph{geometric representation} of $A$. Vice versa each collection $(x_1,y_1),(x_2,y_2),\ldots,(x_k,y_k)$ of $k$ points such that for every $i< j$ we have $x_i< x_j$ and $y_i\neq y_j$ \emph{induces} a unique permutation $A\in \mathbb{S}(k)$ via $A(i)\coloneqq
|\{j\in[k]\mid y_j\leq y_i\}|$. Note that each element of $S_A$ induces $A$. We can now extend the notion of pattern density to permutons. The \emph{density of $A$ in a permuton $\Gamma$} is defined as $t(A,\Gamma)\coloneqq k!\cdot \Gamma^{\otimes k}(S_A)$. A probabilistic interpretation of $t(A,\Gamma)$ is as follows. Suppose
that we sample points $\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),\ldots,\left(x_{k},y_{k}\right)$
independently at random from the measure $\Gamma$. Then $t(A,\Gamma)$
is the probability that when reading these points from left to right,
their vertical positions are consistent with $A$. In particular, we say that $\Gamma$ is \emph{$A$-avoiding} if $t(A,\Gamma)=0$.
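
The probabilistic interpretation also suggests a simple way to estimate $t(A,\Gamma)$ numerically. The sketch below (ours, purely as an illustration; it assumes we are handed a routine sampling a random point from $\Gamma$) draws $k$ points, orders them by the first coordinate and checks whether the second coordinates are ordered consistently with $A$.
\begin{verbatim}
import random

def estimate_density(A, sample_point, trials=100000):
    # A is a 1-indexed pattern such as (1, 3, 2); sample_point() returns
    # one random point (x, y) distributed according to the permuton
    k, hits = len(A), 0
    for _ in range(trials):
        pts = sorted(sample_point() for _ in range(k))   # sort by x
        ys = [y for _, y in pts]
        if all((A[i] < A[j]) == (ys[i] < ys[j])
               for i in range(k) for j in range(i + 1, k)):
            hits += 1
    return hits / trials

def diagonal():            # sampler for the permuton on the diagonal
    t = random.random()
    return (t, t)

print(estimate_density((1, 2), diagonal))   # close to 1
print(estimate_density((2, 1), diagonal))   # 0.0
\end{verbatim}
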
Our theorem below says that the support of the measure of any avoiding permuton has a particularly simple one-dimensional structure. Let $\mathfrak{M}$ be the space of all finite Borel measures on $[0,1]$.
For a given $\gamma\in\mathfrak{M}$ and $\ell\in\mathbb{N}$ we say
that \emph{$\gamma$} is an \emph{$\le\ell$-molecule} if there exists
a set $M\subset[0,1]$ with $|M|\le\ell$ such that $\gamma\left([0,1]\setminus M\right)=0$. We say that $\gamma$ is an \emph{$\ell$-molecule} if it is an $\le\ell$-molecule but not an $\le(\ell-1)$-molecule.
Last, we recall the concept of disintegration of a measure. Suppose
that $\left\{ \Lambda_{x}\in\mathfrak{M}\right\} _{x\in[0,1]}$ is
a collection of measures such that the map $x\mapsto\Lambda_{x}$ is
measurable with respect to the Borel $\sigma$-algebra induced by the weak convergence topology on $\mathfrak{M}$.
Then we can define a new Borel measure $\Lambda$ on $[0,1]^{2}$
by
\[
\Lambda(B):=\int_{x}\Lambda_{x}\left(\left\{ y\in[0,1]:(x,y)\in B\right\} \right)\;\mathsf{d}\lambda(x)\quad\text{for each Borel \ensuremath{B\subset[0,1]^{2}}}\;.
\]
Then $\left\{ \Lambda_{x}\right\} _{x\in[0,1]}$ is called a \emph{disintegration
}of $\Lambda$ with respect to the Lebesgue measure $\lambda$. The
Disintegration Theorem tells us that a disintegration always exists
and is unique up to a nullset. We call the measures $\Lambda_{x}$
\emph{fibers}.
We can now state our main result.
\begin{thm}\label{thm:molecule}
Suppose that $A$ is a pattern of order $k$ and $\Gamma$ is an $A$-avoiding
permuton. Let us fix a disintegration $\left\{ \Gamma_{x}\right\} _{x\in[0,1]}$
of $\Gamma$. Then almost all fibers of $\Gamma$ are $\le\left(k-1\right)$-molecules.
This bound is optimal: for every pattern $A$ of order $k$ there exists an $A$-free permuton such that almost all of its fibers are $(k-1)$-molecules.
\end{thm}
Let us remark that Dole\v{z}al, M\'{a}th\'{e} and the second author~\cite{DoHlMa17} proved a result in a similar spirit (but using completely different tools) for graphons: every graphon avoiding a specific graph must be countably-partite.
Our construction for the second half of the theorem will be piecewise linear. One may expect that pattern-free permutons are similarly nice. However, this need not be the case as we show in Section~\ref{ssec:nondifferentiable}.
\subsection{A removal lemma as an application}\label{ssec:remlemma}
As an application, we provide a proof of the following removal lemma. This is a nice example of how a result about limit objects can be used to prove a result about finite permutations.
\begin{thm}\label{thm:Relocation}
Let $k\in\mathbb{N}$ and $A\in\mathbb{S}(k)$. For every $\varepsilon>0$ there exists $\delta>0$ such that the following holds. Suppose that $\pi\in\mathbb{S}(n)$ is a permutation with $t(A,\pi)<\delta$. Then there exists a permutation $\tilde\pi\in\mathbb{S}(n)$ which is $A$-avoiding and for which we have
\begin{equation}\label{eq:removalchange}
\sum_{i = 1}^n|\pi(i)-\tilde\pi(i)|<\varepsilon n^2\;.
\end{equation}
\end{thm}
This `permutation removal lemma' was first obtained by Klimo\v sov\'a and Kr\'al' \cite{MR3376446} and reproved by Fox and Wei~\cite{MR3627836,MR3816060}. The former proof gave an Ackermann-type dependency between $\varepsilon$ and $\delta$.\footnote{Note that~\cite{MR3376446} claims a doubly exponential dependency. This claim is wrong as was pointed out in Footnote~2 of~\cite{MR3627836}.} The main upshot of the latter proof was in giving a polynomial dependency between $\varepsilon$ and $\delta$ (with the degree of the polynomial being $2^{2^{O(k)}}$). Our proof is weaker in that it does not give any quantitative dependence between $\varepsilon$ and $\delta$. Its advantage, however, is that it is (with Theorem~\ref{thm:molecule}) almost computation-free, compared to the fairly involved proofs of Klimo\v sov\'a--Kr\'al' and Fox--Wei.
Let us also note that the metric used to quantify~\eqref{eq:removalchange} is called \emph{Spearman's footrule distance}, and is within a factor of~2 of another, perhaps more common, distance, the so-called \emph{Kendall's tau distance}.
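
For concreteness, the quantity appearing on the left-hand side of~\eqref{eq:removalchange} can be computed as follows (a trivial illustration of our own, not needed for the proofs).
\begin{verbatim}
def footrule(pi, sigma):
    # Spearman's footrule distance between two permutations of the same
    # order, given as sequences with pi[i] = image of position i
    return sum(abs(p - s) for p, s in zip(pi, sigma))

print(footrule([1, 2, 3, 4], [2, 1, 3, 4]))   # 2
\end{verbatim}
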
In the following we illustrate the proof idea for deriving Theorem~\ref{thm:Relocation} from Theorem~\ref{thm:molecule}, deferring details to Section~\ref{sec:ProofOfRemoval}. By using the compactness of the space of permutons it suffices to show for a sequence $\pi_1,\pi_2,\dots$ of $A$-avoiding permutations converging to an $A$-free permuton $\Gamma$ that for $n$ large enough we can find similar but $A$-avoiding permutations $\tilde\pi_n$. Suppose that $\pi_n$ is close to $\Gamma$ in the rectangular distance. Fix a disintegration $\{\Gamma_x\}_{x \in[0,1]}$ of $\Gamma$. We generate an $n$-tuple of numbers uniformly at random in $[0,1]$, and read them in increasing order as $x_1<x_2<\cdots<x_n$. We set $y_i \coloneqq (\pi_n(i)-0.5)/n$. The set of points $\{(x_1,y_1)$, \ldots, $(x_n,y_n)\}$ is a geometric representation of $\pi_n$, and the uniform probability measure supported on this set is close to $\Gamma$ in the rectangular distance.
The key idea is that almost any $n$-tuple of points $(\tilde{x}_1,\tilde{y}_1)$, \ldots, $(\tilde{x}_n,\tilde{y}_n)$, whose x- and y- coordinates are pairwise distinct and where all the points lie in the support of the measure $\Gamma$, is a geometric representation of an $A$-free permutation. We shall use this with the choice $\tilde{x}_1=x_1$, \ldots, $\tilde{x}_n=x_n$. We want to alter the y-coordinates of the points $(x_i,y_i)$ such that the new $\tilde{y}_i$ lies in the support of the measure $\Gamma_{x_i}$, which is essentially equivalent to the point $(x_i,\tilde{y}_i)$ lying in the support of $\Gamma$.
Here is where Theorem~\ref{thm:molecule} comes into play. We know that almost every $\Gamma_{x_i}$ is a collection of at most $k-1$ atoms. Since $\pi_n$ is close to $\Gamma$ in the rectangular distance, we can also deduce that typically $\Gamma_{x_i}$ must have positive mass around $y_i$. Therefore, there exists an atom $\tilde{y}_i$ of $\Gamma_{x_i}$ which is close to $y_i$. The points $(x_i,\tilde{y}_i)$ then induce our new permutation $\tilde{\pi}_n$. See Figure~\ref{fig:Example}.
\begin{figure}
\caption{An example described in Section~\ref{ssec:remlemma}}
\label{fig:Example}
\end{figure}
\section{Proof of Theorem~\ref{thm:molecule}}\label{sec:pfmolecule}
\subsection{Proof of the upper bound}
We use $\lambda$ to denote the Lebesgue measure on $\mathbb{R}$. For an arbitrary measure $\gamma$ on a measure space $X$, we write $\gamma^{\otimes k}$ for its $k$-th power.
Suppose for the sake of contradiction that the assertion does not hold. Then there exists a pattern $A$ of order $k$, an $A$-avoiding permuton $\Gamma$ together with a disintegration $\{\Gamma_x\}_{x\in[0,1]}$ and a set $X\subset[0,1]$ with $\lambda(X)>0$ such that for every $x\in X$ the fiber $\Gamma_x$ is not a $\le\left(k-1\right)$-molecule.
\begin{figure}
\caption{An example for the notation in Section~\ref{sec:pfmolecule}}
\label{fig:pfmolecule}
\end{figure}
Figure~\ref{fig:pfmolecule} depicts the setting we introduce below.
For $t\in\mathbb N$ and $m\in[t-1]$ we define $J_{m,t}\coloneqq[(m-1)/t,m/t)$ and $J_{t,t}\coloneqq[(t-1)/t,1]$. For $S\in\binom{[t]}{k}$ and $i\in[k]$, write $S_i$ for the $i$th smallest element of $S$. For each $S\in\binom{[t]}{k}$, set $X_{S,t}\coloneqq\{x\in X\mid\Gamma_{x}(J_{S_i,t})>0\;\forall i\in[k]\}$. Note that for $x\in X$, since $\Gamma_x$ is not $\le\left(k-1\right)$-molecule, we have that there exist $t\in\mathbb N$ and $S\in\binom{[t]}{k}$ such that $x\in X_{S,t}$. Hence $X=\bigcup_{t\in\mathbb N, S\in\binom{[t]}{k}}X_{S,t}$. Recalling that $\lambda(X)>0$, it follows from the countable subadditivity of measures that there exist $t\in\mathbb N$ and $S\in\binom{[t]}{k}$ such that $\lambda(X_{S,t})>0$. By Lebesgue's density theorem, we can find a density point $x_0$ of $X_{S,t}$ and choose $\varepsilon>0$ small enough such that $\lambda([x_0-\varepsilon,x_0+\varepsilon]\cap X_{S,t})>2\varepsilon (k-1)/k$. Then for $i\in[k]$ define $P_i\coloneqq [x_0-\varepsilon+2\varepsilon\cdot (i-1)/k,x_0-\varepsilon+2\varepsilon\cdot i/k)$ and note that $\lambda(X_{S,t}\cap P_i)>0$. Finally, define $B_{i,j}\coloneqq P_i\times J_{S_j,t}$, for $i,j\in[k]$, and observe that every $p\in\prod_{i\in[k]} B_{i,A(i)}$ induces $A$ and therefore
\[t(A,\Gamma)\geq\Gamma^{\otimes k}\left(\prod_{i\in[k]} B_{i,A(i)}\right)\geq\prod_{i\in[k]}\left(\int_{x\in P_i\cap X_{S,t}}\Gamma_x(J_{S_{A(i)},t})\,\mathsf{d}\lambda(x)\right)>0\;,\]
which is a contradiction.
\subsection{Optimality}\label{ssec:optimalitymolecules}
Given a pattern $A\in\mathbb{S}(k)$, we will construct an $A$-free permuton whose fibers with respect to a disintegration along the x-axis are almost all $(k-1)$-molecules. The roles of the two coordinates are interchangeable, and it turns out that it is notationally simpler to construct an $A^{-1}$-free
permuton whose horizontal fibers are $(k-1)$-molecules.
Set $\pi=A^{-1}$. We consider the piecewise linear function $L:[0,1]\to[0,1]$ consisting of $k-1$ pieces of slope $k-1$ or $1-k$ satisfying
for $i=1, \dots ,k-1$ that $L(\frac{i-1}{k-1})=0$ and $L(\frac{i}{k-1})=1$ if $\pi(i)>\pi(i+1)$, while $L(\frac{i-1}{k-1})=1$ and $L(\frac{i}{k-1})=0$ if $\pi(i)<\pi(i+1)$.
(At the endpoints this might not be well-defined: in this case we choose the value defined on the interval on the left.)
An example is given in Figure~\ref{fig:Piecewise}.
\begin{figure}
\caption{Construction of a piecewise linear function $L$ in Section~\ref{ssec:optimalitymolecules}}
\label{fig:Piecewise}
\end{figure}
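
The construction of $L$ is easy to implement; the following sketch (ours, included only as an illustration) evaluates $L$ for a $1$-indexed pattern $\pi=A^{-1}$, following the endpoint convention above.
\begin{verbatim}
import math

def make_L(pi):
    # piecewise linear L for a 1-indexed pattern pi of order k
    k = len(pi)
    def L(x):
        # piece containing x; at breakpoints we take the interval on the left
        i = max(1, math.ceil(x * (k - 1)))
        t = x * (k - 1) - (i - 1)        # relative position inside piece i
        if pi[i - 1] > pi[i]:            # pi(i) > pi(i+1): L rises from 0 to 1
            return t
        return 1 - t                     # pi(i) < pi(i+1): L falls from 1 to 0
    return L

# for pi = (2, 3, 1), L falls on [0, 1/2] and rises on [1/2, 1]
L = make_L((2, 3, 1))
print(L(0.0), L(0.25), L(0.75), L(1.0))   # 1.0 0.5 0.5 1.0
\end{verbatim}
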
Putting an appropriate multiple of the 1-dimensional Lebesgue measure on the graph of the function $L$ we obtain a permuton (that is, both uniform marginal properties are satisfied), which we call $\Gamma$. Firstly, almost all horizontal fibers of $\Gamma$ are $(k-1)$-molecules. So, it remains to argue that $t(\pi,\Gamma)=0$. To this end we prove the following.
\begin{unnumberedclm}
Consider $\ell\in[k]$ and $0\leq x_1 < \dots < x_{\ell} \leq 1$ such that $x_i\not\in\{\frac{j}{k-1}:j\in\mathbb{Z}\}$ for all $i\in [\ell]$. If the set of points $\{ (x_1,L(x_1)),\ldots,(x_{\ell},L(x_{\ell})) \}$ induces the pattern given by the first $\ell$ entries of $\pi$, then $x_{\ell} > \frac{\ell-1}{k-1}$.
\end{unnumberedclm}
Observe that the above claim indeed proves that $t(\pi,\Gamma)=0$, since it asserts that we cannot find geometric representations of $\pi$ in the support of $\Gamma$, except for the nullset of $k$-tuples where at least one of the coordinates $x_i$ lies in $\{\frac{j}{k-1}:j\in\mathbb{Z}\}$.
Let us now prove the claim by induction on $\ell$. The base case $\ell=1$ is obvious. Assume that the claim holds for $\ell-1$ and, in particular, for the $(\ell-1)$-tuple $(x_1,\ldots,x_{\ell-1})$. If $x_{\ell-1} > \frac{\ell-1}{k-1}$ then we are done, since $x_{\ell}>x_{\ell-1}$. It remains to handle the case $x_{\ell-1} \in (\frac{\ell-2}{k-1},\frac{\ell-1}{k-1})$. We claim that then $x_{\ell} \notin (\frac{\ell-2}{k-1},\frac{\ell-1}{k-1})$; together with $x_{\ell} >x_{\ell-1}$ this gives $x_{\ell} > \frac{\ell-1}{k-1}$. Indeed, suppose that $x_{\ell}$ also lay in this interval. If
$\pi(\ell-1)>\pi(\ell)$, then $L$ is monotone increasing on this interval, so $L(x_{\ell})>L(x_{\ell-1})$, whereas the pattern requires $L(x_{\ell})<L(x_{\ell-1})$; if $\pi(\ell-1)<\pi(\ell)$, then $L$ is monotone decreasing there and we obtain the symmetric contradiction. This completes the proof of the theorem.
Note that while a.e. vertical fiber of the construction is a $(k-1)$-molecule, the horizontal fibers are $1$-molecules, i.e., their support is a single atom of measure one. It would be interesting to see which lower bounds on the size of the support of a.e. horizontal and vertical fiber would guarantee that a given pattern has positive density.
\section{Discussion and further questions}\label{sec:discussion}
\subsection{Nondifferentiable pattern-avoiding permuton}\label{ssec:nondifferentiable}
Theorem~\ref{thm:molecule} together with Lusin's theorem tells us that we can think of the support of a pattern-avoiding permuton as a union of graphs of partial functions that are continuous on a set whose complement is of arbitrarily small positive measure. The following example shows that continuity cannot be strengthened to differentiability. More precisely, there exists a function $f: [0,1] \rightarrow [0,1]$ whose restriction to any subset of $[0,1]$ of positive measure is not differentiable, and such that the permuton supported on the graph of this function avoids the pattern $(3142)$.
Consider the quaternary expansion (with digits $0,1,2$ and $3$) of the numbers in $[0,1]$. This is well-defined for all but countably many numbers, the exceptions being the rational numbers whose denominator is a power of two. We will ignore this countable set.
Let $f$ be the function that swaps $1$ and $2$ for every coordinate. This function is injective (up to a nullset) and measure preserving, hence its graph is the support of a permuton.
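
A finite-precision version of $f$ is easy to write down explicitly; the following sketch (ours, an illustration only, truncating the expansion after a fixed number of quaternary digits) may help the reader experiment with the construction.
\begin{verbatim}
def f(x, digits=25):
    # swap the digits 1 and 2 in the quaternary expansion of x in [0, 1)
    y, scale = 0.0, 1.0
    for _ in range(digits):
        x *= 4
        d = int(x)                      # next quaternary digit
        x -= d
        d = {1: 2, 2: 1}.get(d, d)      # swap 1 <-> 2
        scale /= 4
        y += d * scale
    return y

print(f(f(0.3)))   # approximately 0.3, since f is an involution
\end{verbatim}
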
First we argue that the graph of $f$ is free of $(3142)$.
Suppose for a contradiction that there are four numbers $x_1<x_2<x_3<x_4$ such that $f(x_2) < f(x_4) < f(x_1) < f(x_3)$. Consider the first digit where $x_1$ and $x_2$ differ. Clearly this digit must be $1$ for $x_1$ and $2$ for $x_2$. The number $x_4$ cannot differ from $x_1$ and $x_2$ at an earlier digit, since $f(x_2) < f(x_4) < f(x_1)$. Hence $x_3$, lying between $x_2$ and $x_4$, cannot differ from them at an earlier digit either. We get a contradiction when checking this critical digit: $f(x_3)>f(x_1)$ and $x_3>x_2$ imply that this digit of $x_3$ must be $3$, but then, as $x_4>x_3$, the number $x_4$ also has digit $3$ here, contradicting
$f(x_1)>f(x_4)$.
It remains to show that the restriction of $f$ to any set of positive measure cannot be differentiable: if there were such a set, then on a subset of positive measure the inequality $|\frac{f(x)-f(y)}{x-y}-f'(x)|<\frac{1}{2}$ would hold for every pair of its points $x,y$ with $|x-y|$ sufficiently small.
However, given a natural number $n$, $i \in \{-3, -2, -1, 1,2,3\}$ and real numbers $x, x+i4^{-n}$ agreeing in their first $n-1$ digits, note that $\frac{f(x)-f(x+i4^{-n})}{i4^{-n}} \in \{ -2, -1, -\frac{1}{2}, \frac{1}{2}, 1, 2 \}$, and this value depends on $i$ and the $n$th digit of $x$ (but not on $n$), and for every $x$ and $n$ we get three different values (depending on $i$). By the Lebesgue density theorem any set of positive measure contains more than three quarters of a small interval, so we can find four numbers in the set agreeing in all but one digit. Hence the restriction of $f$ to any set of positive measure cannot be differentiable.
\subsection{Lower bounds on the Stanley--Wilf constant}
For a permutation $A$, let $\mathbb{S}(A;n)\subset \mathbb{S}(n)$ be the set of $A$-avoiding permutations of order $n$. The Stanley--Wilf constant is defined as $c_A:=\lim_n \sqrt[n]{|\mathbb{S}(A;n)|}$. The exact value of the Stanley--Wilf constant is only known for several permutations and permutation classes (see~\cite[Chapter~4]{MR2919720}): trivially for $A\in\mathbb{S}(2)$ we have $c_A=1$, for all $A\in\mathbb{S}(3)$ we have $c_A=4$, for each $k\in\mathbb{N}$ we have $c_{\mathsf{identity}_k}=(k-1)^2$, and then the Stanley--Wilf constant is known for several sporadic examples, including $c_{(1342)}=8$, $c_{(12453)}=9+4\sqrt{2}$.
Consider an $A$-free permuton $\Gamma$. We can use $\Gamma$ to generate many different $A$-free permutations of order $n$. Let us illustrate this with the $(1234)$-avoiding permuton $\Gamma$ from Figure~\ref{fig:Example}. Fix the numbers $x_1<\ldots<x_n$ to be equidistant in $[0,1]$. Each fiber $\Gamma_{x_i}$ has exactly two atoms. So, we choose $y_i$ to be one of these two atoms. We have $2^n$ options in total for generating a geometric configuration. It can be shown that most of these choices represent different permutations, hence we obtain $c_{(1234)}\ge 2$.
\subsection{Permuton limits of typical pattern-avoiding permutations}
Consider the sequence of independent random permutations $\pi_1,\pi_2,\ldots$, where $\pi_i$ is taken from $\mathbb{S}(i)$ uniformly at random. It is a well-known fact that such a sequence converges in the rectangular distance almost surely to the Lebesgue measure on $[0,1]^2$. What if we condition $\pi_i$ to be free of a given pattern $A$?
\begin{qu}
\label{pr:typicalfreepermuton}
Consider a pattern $A\in\mathbb{S}(k)$. Suppose that $\pi_1,\pi_2,\ldots$ is a sequence of independent random permutations, where $\pi_i$ is taken from $\mathbb{S}(A;i)$ uniformly at random. Does this sequence converge in the rectangular distance almost surely? What is the limit permuton $\mathsf{d}elta_A$?
\end{qu}
Theorem~\ref{thm:molecule} asserts that the hypothetical permuton $\mathsf{d}elta_A$ is supported on the union of the graphs of finitely many functions.
Problem~\ref{pr:typicalfreepermuton} is a weaker form of a question of Joshua Cooper~\cite{Cooper:question}, who asked for two patterns $A$ and $B$ the expected number of occurrences of $B$ in a permutation chosen uniformly at random from $\mathbb{S}(A;n)$. Our perspective is different.
In a recent work Borga, Das, Mukherjee and Winkler~\cite{BDMW22} have introduced a Gibbs permutation model that might help to approach our question. This model does not include the uniform distribution on permutations avoiding a fixed pattern $A$, but it allows to give an exponentially small probability w.r.t. the density of $A$ to a permutation.
Problem~\ref{pr:typicalfreepermuton} is trivial for $A\in\mathbb{S}(2)$. For $A=(132)$, Section~5 of \cite{MR2676667} tells us that for a permutation $\pi\in\mathbb{S}(A;n)$ we have $t(12,\pi)=o(1)$ asymptotically almost surely. This means that $\mathsf{d}elta_{(132)}$ is the antidiagonal permuton. Known bijections between $\mathbb{S}(123;n)$, $\mathbb{S}(132;n)$, $\mathbb{S}(213;n)$, $\mathbb{S}(231;n)$, $\mathbb{S}(312;n)$, and $\mathbb{S}(321;n)$ tell us that $\mathsf{d}elta_{(123)}=\mathsf{d}elta_{(213)}$ is also the antidiagonal permuton, whereas $\mathsf{d}elta_{(231)}=\mathsf{d}elta_{(312)}=\mathsf{d}elta_{(321)}$ is the diagonal permuton. For monotone increasing patterns it follows from a more general large deviations result~\cite{MP16} that $\mathsf{d}elta_{\mathsf{identity}_k}$ is the antidiagonal permuton. We do not know of other limit permutons of typical $A$-free permutations.
All the above examples lead to the diagonal or the antidiagonal limit permuton. Therefore, it is of particular interest to find limit permutons of a different shape. For the permutation $\sigma=(3142)$, suggested to us by Mikl\'os B\'ona, $\mathsf{d}elta_{\sigma}$ can neither be the diagonal nor the antidiagonal (if it exists). Indeed, suppose that $\mathsf{d}elta_{\sigma}$ is the diagonal. Then $\mathsf{d}elta_{\sigma^{-1}}$ must also be the diagonal. On the other hand, we can define the complement of a permutation $\pi\in \mathbb{S}(n)$ via $C(\pi)(i)=n+1-\pi(i)$ and observe that $\mathsf{d}elta_{C(\sigma)}$ then must be the antidiagonal contradicting that $C(\sigma)=(2413)=\sigma^{-1}$. One can argue similarly that $\mathsf{d}elta_{\sigma}$ cannot be the antidiagonal.
\section{Acknowledgments}
This work was initiated during the workshop \emph{Interfaces of the Theory of Combinatorial Limits} at the Erd\H{o}s center of the Rényi Institute. We thank the organisers for creating a very productive atmosphere. We thank Mikl\'os B\'ona, Martin Dole\v{z}al and Lara Pudwell for discussions.
\appendix
\section{Permutation removal lemma}\label{sec:ProofOfRemoval}
In this section, we prove Theorem~\ref{thm:Relocation}. For this we recall some facts from the theory of permutation limits and measure theory.
\subsection{Limits of permutations}
We recall basic concepts of the theory of permutation limits, as introduced in~\cite{HoKo13}. Earlier we already defined a \emph{permuton} as a probability measure on the Borel sigma-algebra on $[0,1]^2$ with the uniform marginal property. The relevant metric for the limit theory of permutations is the \emph{rectangular distance} which is defined for two permutons $\Gamma_1$ and $\Gamma_2$ as
\begin{equation}\label{eq:rectangular}
\textrm{d}_{\square}(\Gamma_1, \Gamma_2) = \sup_{\substack{S, T \subseteq [0, 1]\\ \textrm{intervals}}} |\Gamma_1(S \times T) - \Gamma_2(S \times T)|\;.
\end{equation}
\begin{remark}\label{rem:nonpermutons}
Note that~\eqref{eq:rectangular} gives a metric for general measures (i.e., not necessarily permutons) as well.
\end{remark}
Note that each finite permutation $\pi\in\mathbb{S}(n)$ has a \emph{permuton representation} $\Psi_\pi$ which is defined as follows. Take the union of rectangles $S\coloneqq\bigcup_{i=1}^n [\frac{i-1}{n},\frac{i}{n})\times [\frac{\pi(i)-1}{n},\frac{\pi(i)}{n})$. Now, for any Borel set $B\subset [0,1]^2$ we define $\Psi_\pi(B)\coloneqq n\cdot \lambda^2 (B\cap S)$.
Using permuton representations we can extend the rectangular distance to all finite permutations. That is, given a permuton $\Gamma$ and finite permutations $\pi_1\in\mathbb{S}(n_1)$ and $\pi_2\in\mathbb{S}(n_2)$ we write $\textrm{d}_{\square}(\pi_1,\Gamma)\coloneqq\textrm{d}_{\square}(\Psi_{\pi_1},\Gamma)$ and $\textrm{d}_{\square}(\pi_1,\pi_2)\coloneqq\textrm{d}_{\square}(\Psi_{\pi_1},\Psi_{\pi_2})$.
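
For two permutations of the same order $n$, the rectangular distance of their permuton representations can be evaluated by restricting the supremum in~\eqref{eq:rectangular} to intervals with endpoints in $\{0,1/n,\dots,1\}$; between consecutive grid points the mass of a rectangle is affine in each endpoint, so the restriction loses nothing for these step measures. The following sketch (ours, purely for illustration) implements this.
\begin{verbatim}
def rectangular_distance(pi, sigma):
    # pi, sigma: permutations of the same order n, as lists of values 1..n
    n = len(pi)
    def cumulative(perm):
        # M[i][j] = Psi_perm([0, i/n] x [0, j/n]) = #{i' <= i : perm(i') <= j}/n
        M = [[0.0] * (n + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                M[i][j] = (M[i - 1][j] + M[i][j - 1] - M[i - 1][j - 1]
                           + (1.0 / n if perm[i - 1] == j else 0.0))
        return M
    P, Q = cumulative(pi), cumulative(sigma)
    best = 0.0
    for a in range(n + 1):                 # S = [a/n, b/n], T = [c/n, d/n]
        for b in range(a, n + 1):
            for c in range(n + 1):
                for d in range(c, n + 1):
                    dP = P[b][d] - P[a][d] - P[b][c] + P[a][c]
                    dQ = Q[b][d] - Q[a][d] - Q[b][c] + Q[a][c]
                    best = max(best, abs(dP - dQ))
    return best

print(rectangular_distance([1, 2, 3], [3, 2, 1]))   # 1/3 for these two
\end{verbatim}
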
The following compactness result is the key result of the limit theory for permutations of which we will make use later.
\begin{thm}[\cite{HoKo13}]\label{thm:compact}
The space of permutons is compact with respect to the distance $\textrm{d}_{\square}$.
\end{thm}
Furthermore, we will need that convergence in rectangular distance also induces convergence in pattern densities.
\begin{thm}[\cite{HoKo13}]\label{thm:densitiescontinuous}
Suppose that $\pi_1,\pi_2,\ldots$ is a sequence of permutations of growing orders that converges to a permuton $\Gamma$ in the rectangular distance. Then for any $k\in\mathbb{N}$ and any $A\in\mathbb{S}(k)$ we have $\lim_n t(A,\pi_n)=t(A,\Gamma)$.
\end{thm}
\subsection{The Lévy-Prokhorov metric}
We will make use of the Lévy-Prokhorov metric to compare probability measures on $[0,1]$.
\begin{defi}[Lévy-Prokhorov metric]
For $\varepsilon>0$ and a set $A\subseteq[0,1]$ we define the \emph{$\varepsilon$-neighbourhood of $A$} by
$$A^{\uparrow\varepsilon}=\{p\in[0,1]\mid \exists q\in A: |p-q|<\varepsilon\}\;.$$
Let $\alpha$ and $\beta$ be two Borel probability measures on $[0,1]$. The \emph{Lévy-Prokhorov distance} between $\alpha$ and $\beta$ is defined by
$$\textrm{dist}_{LP}(\alpha, \beta) = \inf\{ \varepsilon > 0 \mid \forall A \in \mathcal{B}([0,1])\;:\; \alpha(A) \leq \beta(A^{\uparrow\varepsilon}) + \varepsilon,\; \beta(A) \leq \alpha(A^{\uparrow\varepsilon}) + \varepsilon\}\;.$$
\end{defi}
Recall that a sequence of Borel probability measures $\alpha_1,\alpha_2,\ldots$ on $[0,1]$ \emph{converges weakly} to a measure $\alpha$ if for every continuous function $g:[0,1]\to\mathbb{R}$ we have $\lim_n \int g\,\mathsf{d}\alpha_n=\int g\,\mathsf{d}\alpha$. It is well-known that the Lévy-Prokhorov metric is a metrization of weak convergence. It is also well-known that the space of Borel probability measures on $[0,1]$ equipped with the Lévy-Prokhorov metric is separable, that is, there exists a countable set $\{m_1,m_2,\ldots\}$ of Borel probability measures on $[0,1]$ so that for every Borel probability measure $\gamma$ on $[0,1]$ and every $\varepsilon>0$ there exists an index $i\in\mathbb{N}$ so that $\textrm{dist}_{LP}(m_i,\gamma)<\varepsilon$. Last, recall that for any metric space, the properties of being separable, having the property of Lindel\"of and being second-countable are all equivalent.\footnote{We do not give definitions since the only way we use them is that we feed them into another theorem.} In particular, we will use that the Lévy-Prokhorov metric gives a second-countable space.
\begin{lem}\label{lem:measure}
Let $\Gamma$ be a probability measure on $[0,1]^2$.
For all $\delta > 0$, there exists $h \in \mathbb{N}$ and a partition
$[0,1] = J_1 \cup J_2 \cup \ldots \cup J_h$ into intervals
such that for a disintegration $\{\Gamma_x\}_{x \in[0,1]}$ of $\Gamma$
there exists $X \subset [0,1]$ with $\lambda(X) < \delta$
such that $\forall i \in [h], x,y \in J_i \setminus X$: $\textrm{dist}_{LP}(\Gamma_x, \Gamma_y) < \delta$.
\end{lem}
\begin{proof}
We shall use Lusin's Theorem in the following form, see~\cite[Theorem 17.12]{MR1321597}. Suppose that $M$ is a second-countable topological space.
Suppose that $f:[0,1]\rightarrow M$ is measurable. Then for every $\delta>0$ there exists an open set $X\subset [0,1]$ with $\lambda(X)<\delta$ so that $f_{\restriction [0,1]\setminus X}$ is continuous.
We are in this setting of Lusin's Theorem as the space of all Borel probability measures on $[0,1]$ equipped with the Lévy-Prokhorov metric is second-countable, and the map $f:x\mapsto \Gamma_x$ is Borel. We use it with the same parameter as in the above statement. Since $[0,1]\setminus X$ is compact and the function $f$ is continuous on it, it is also uniformly continuous. That is, for the given $\delta$ there exists a real $\alpha>0$ so that for any two $x,y\in [0,1]\setminus X$ with $|x-y|\le \alpha$, we have $\textrm{dist}_{LP}(\Gamma_x,\Gamma_y)<\delta$.
The lemma therefore follows by choosing $(J_i)_{i\in [h]}$ to be consecutive intervals of length $\alpha$.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:weprove}}\label{ssec:proof}
We shall prove Theorem~\ref{thm:Relocation} in the following form.
\begin{thm}\label{thm:weprove}
Let $k\in\mathbb{N}$ and $A\in\mathbb{S}(k)$. Suppose that we have a sequence of permutations $(\pi_n\in\mathbb{S}(n))_n$ converging to an $A$-free permuton $\Gamma$ in the rectangular distance. Then for every $\varepsilon>0$ there exists $n_0 \in \mathbb{N}$ such that for every $n > n_0$ there exists an $A$-free permutation $\tilde{\pi}_n\in \mathbb{S}(n)$ such that
\begin{equation}\label{milujuvzorecky}
\sum_{i = 1}^{n}|\pi_n(i)-\tilde{\pi}_n(i)| < \varepsilon n^2\;.
\end{equation}
\end{thm}
\begin{proof}[Deducing Theorem~\ref{thm:Relocation} from Theorem~\ref{thm:weprove}]
Suppose that Theorem~\ref{thm:Relocation} does not hold. That is, for a fixed pattern $A$, there exists a number $\varepsilon>0$ and a sequence of permutations $\pi_n$ with $t(A,\pi_n)\to 0$ for which we cannot find similar but $A$-free permutations. By compactness of the space of permutons (Theorem~\ref{thm:compact}), there exists a subsequence $(\pi_{n_\ell})$ which converges to a certain permuton $\Gamma$ in the rectangular distance. By continuity of densities (Theorem~\ref{thm:densitiescontinuous}), $\Gamma$ is $A$-free. This setting contradicts Theorem~\ref{thm:weprove}, as was needed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:weprove}]
Given $\pi_n\overset{\textrm{d}_{\square}}{\rightarrow}\Gamma$, where $\Gamma$ is $A$-free, and $\varepsilon>0$ we fix constants $\zeta,\delta>0$ and $h,n_0\in\mathbb N$ according to the following hierarchy
\[\tfrac{1}{n_0} \ll \zeta \ll \tfrac{1}{h} \ll \delta \ll \varepsilon\;.\]
First, we apply Lemma~\ref{lem:measure} to $\Gamma$ and $\delta$ resulting in $h\in\mathbb N$, a partition $[0,1]=\bigcup_{i\in[h]}J_i$ into intervals, a set $X_1\subseteq[0,1]$ with $\lambda(X_1)<\delta$, and a disintegration $\{\Gamma_x\}_{x \in[0,1]}$ of $\Gamma$ such that for every $i\in[h]$ and $x,y\in J_i\setminus X_1$ we have
\begin{equation}\label{eq:distint}
\textrm{dist}_{LP}(\Gamma_x, \Gamma_y) < \delta\;.
\end{equation}
Furthermore, since $\Gamma$ is $A$-free, by Theorem~\ref{thm:molecule} we can fix a null set $X_2\subseteq [0,1]$ such that $\Gamma_x$ is a $\le\left(k-1\right)$-molecule for every $x\in [0,1]\setminus X_2$. We then set $X\coloneqq X_1\cup X_2$.
Now, fix a permutation $\pi_n$ with $n\geq n_0$. We want to show that there exists an $A$-free permutation $\tilde{\pi}_n\in \mathbb{S}(n)$ fulfilling~\eqref{milujuvzorecky}. Note that by the constant hierarchy we can choose $n_0$ large enough such that we may assume that
\begin{equation}\label{eq:cutdist}
\textrm{d}_{\square}(\pi_n, \Gamma) < \zeta/2\;.
\end{equation}
Set $T\coloneqq\{i\in[n]\mid \lambda(((i-1)/n,i/n)\setminus X)>0\}$. Note that $|T|>(1-\delta) n$, as $\lambda(X)<\delta$. We now generate a random $n$-tuple of points in the unit interval by choosing $x_i$ uniformly at random from $((i-1)/n,i/n)\setminus X$ for every $i\in T$. For $i\in [n]\setminus T$ we choose $x_i$ uniformly at random from $((i-1)/n,i/n)\setminus X_2$. Furthermore we set
\begin{equation}\label{eq:whatisy}
y_i\coloneqq (\pi_n(i)-0.5)/n
\end{equation}
and denote the measure which has mass $1/n$ on each point $(x_i,y_i)$ by $\mathsf{d}elta$, $\mathsf{d}elta=\frac1n\sum_{i\in[n]}\mathsf{Dirac}_{(x_i,y_i)}$. While $\mathsf{d}elta$ is not a permuton, it can be thought of as a perturbation of the permuton representation $\Psi_{\pi_n}$ in which each rectangle $[\frac{i-1}{n},\frac{i}{n})\times [\frac{\pi_n(i)-1}{n},\frac{\pi_n(i)}{n})$ is transformed to a single point of the same measure and within that rectangle. In particular $\textrm{d}_{\square}(\mathsf{d}elta,\pi_n)\le \tfrac{2}{n}$. From the triangle inequality (c.f. Remark~\ref{rem:nonpermutons}) and~\eqref{eq:cutdist} we get
\begin{equation}\label{eq:cutdistlepsi}
\textrm{d}_{\square}(\mathsf{d}elta,\Gamma)\leq\textrm{d}_{\square}(\mathsf{d}elta,\pi_n)+\textrm{d}_{\square}(\pi_n,\Gamma)\leq\tfrac{2}{n}+\zeta/2<\zeta\;.
\end{equation}
\begin{clm}\label{clm:sureprop} With probability $1$ with respect to the choice of the random points $(x_i)_{i\in [n]}$ the following holds.
\begin{enumerate}[label=(\roman*)]
\item\label{clm:Afree} If $(\tilde{y}_i\in[0,1])_{i\in[n]}$ is a collection of pairwise distinct points such that for each $i\in[n]$, $\tilde{y}_i$ is an atom of $\Gamma_{x_i}$, then the permutation induced by $((x_i,\tilde{y}_i))_{i\in[n]}$ is $A$-free.
\item\label{clm:atomsdis} If $\Gamma_{x_i}(y)>0$ for some $i\in[n]$ and $y\in[0,1]$, then $\Gamma_{x_j}(y)=0$ for every $i\neq j$.
\end{enumerate}
\end{clm}
\begin{proof}[Proof of Claim]
For proving \ref{clm:Afree} suppose for the sake of contradiction that there exists $Q\subseteq[0,1]^n$ with $\lambda^{\otimes n}(Q)>0$ such that for every $x\in Q$ there exists $\tilde{y}(x)\in[0,1]^n$ such that $\prod_{i=1}^n\Gamma_{x_i}(\tilde{y}(x)_i)>0$ and such that $(x_i,\tilde{y}(x)_i)_{i\in[n]}$ contains the pattern $A$. Further, we can assume that the map $x\mapsto \tilde{y}(x)$ is measurable. Then the probability to sample the pattern $A$ from $\Gamma$ is at least
\[\int_{Q}\prod_{i=1}^n\Gamma_{x_i}(\tilde{y}(x)_i)\mathsf{d}\lambda^{\otimes n}(x)>0\;,\]
contradicting that $\Gamma$ is $A$-free.
For proving \ref{clm:atomsdis} suppose for the sake of contradiction that there exists $y\in[0,1]$ such that for $Q_y\coloneqq\{x\in[0,1]\mid \Gamma_x(y)>0\}$ holds that $\lambda(Q_y)>0$. By continuity of the Lebesgue measure there exists $\beta>0$ and $Q_y'\subseteq Q_y$ with $\lambda(Q_y')>0$ such that for every $x\in Q_y'$ we have $\Gamma_{x}(y)>\beta$. Then for an arbitrary $\alpha>0$ we have
\[\Gamma([0,1]\times[y-\alpha,y+\alpha])\geq \beta\lambda(Q_y')\;,\]
which for $\alpha$ small enough contradicts the uniform marginal property of $\Gamma$.
\end{proof}
We fix an outcome of our random selection of $x_1<\cdots<x_n$ such that the above properties hold. Below, we abbreviate measures of intervals $[a,b]$ on fibers $\Gamma_{x_i}$ as $\Gamma_{x_i}(a,b):=\Gamma_{x_i}([a,b])$.
\begin{clm}\label{clm:posmass} For all but at most $3\sqrt{\delta} n$ many $i\in [n]$ it holds that $\Gamma_{x_i}(y_i - 2\sqrt{\delta}, y_i + 2\sqrt{\delta}) > \delta$.
\end{clm}
\begin{proof}[Proof of Claim]
We call a point $(x_i,y_i)$ {\it bad} if $\Gamma_{x_i}(y_i - 2\sqrt{\delta} , y_i + 2\sqrt{\delta}) \leq \delta$ and let $Y$ be the set of such bad points. We denote by $J(x)$ the interval $J_j$ (where $j\in[h]$) such that $x\in J_j$. Consider a system of rectangles $\mathcal{C}=\{J(x_i)\times [y_i - \sqrt{\delta} , y_i + \sqrt{\delta}] \mid (x_i,y_i) \text{ bad} \}$ covering $Y$ in $[0,1]^2$.
Choose a minimal system $\mathcal{M}\subseteq\mathcal{C}$ which covers $\bigcup\mathcal{C}$. For $j\in[h]$, let $\mathcal{M}_j\subset \mathcal{M}$ be the collection of sets of the form $J_j\times U$. By the minimality, we have
\begin{equation}\label{eq:Msmall}
|\mathcal{M}_j|\leq 2/(2\sqrt{\delta})=1/\sqrt{\delta}\;.
\end{equation}
Therefore $\bigcup\mathcal{C}$ can be written as the disjoint union of at most $h/\sqrt{\delta}$ many rectangles. By using~\eqref{eq:cutdistlepsi} we get
\begin{equation}\label{eq:milujubuchtu}
\left|\Gamma(\cup\mathcal{C})-\mathsf{d}elta(\cup\mathcal{C})\right|<\zeta h/\sqrt{\delta}\;.
\end{equation}
For every rectangle $J(x_i)\times [y_i - \sqrt{\delta} , y_i + \sqrt{\delta}]=J_j\times B=M\in\mathcal{M}_j$ we have that
\begin{align*}
\Gamma(M \setminus (X \times [0,1]))
&= \int_{J_j\setminus X}\Gamma_x(B)\mathsf{d}\lambda(x)
\overset{\eqref{eq:distint}}{\leq} \lambda(J_j)\cdot(\Gamma_{x_i}(B^{\uparrow \delta})+\delta)\;.
\end{align*}
In particular, since $(x_i,y_i)$ is bad we get
\begin{equation}
\label{eq:milujubuchty}
\Gamma(M \setminus (X \times [0,1])) \leq 2\delta\lambda(J_j)\;.
\end{equation}
Thus the triangle inequality implies that
\begin{align*}
\mathsf{d}elta(\cup\mathcal{C}) &\leq \Gamma(\cup\mathcal{C}\setminus(X\times[0,1]))+ \Gamma(X\times[0,1]) + |\Gamma(\cup\mathcal{C}) - \mathsf{d}elta(\cup\mathcal{C})|\\
&\overset{\eqref{eq:milujubuchtu}}{\leq} \sum_{j\in[h]}\sum_{M\in\mathcal{M}_j}\Gamma(M\setminus(X\times[0,1]))+\delta+\zeta h/\sqrt{\delta}\\
&\overset{\eqref{eq:Msmall},\eqref{eq:milujubuchty}}{\leq} \sum_{j\in[h]} \tfrac{1}{\sqrt{\delta}}\cdot 2\delta\lambda(J_j)+\sqrt{\delta}\\
&\leq 3\sqrt{\delta}\;,
\end{align*}
and therefore $|Y|\le\mathsf{d}elta(\cup\mathcal{C})n\leq 3\sqrt{\delta}n$.
\end{proof}
Now set $S\coloneqq\{i\in T \mid \Gamma_{x_i}(y_i - 2\sqrt{\delta}, y_i + 2\sqrt{\delta})>\delta\}$. Observe that by Claim~\ref{clm:posmass} and the fact that $|T|\geq(1-\delta)n$ we have $|S|\geq n-3\sqrt{\delta}n-\delta n \geq (1-4\sqrt{\delta})n$. For every $i\in S$, since $\Gamma_{x_i}$ is $\le\left(k-1\right)$-molecule there exists $\tilde{y}_i\in [y_i - 2\sqrt{\delta}, y_i + 2\sqrt{\delta}]$ such that $\Gamma_{x_i}(\tilde{y}_i)>0$. For every $i\in[n]\setminus S$ choose an arbitrary atom $\tilde{y}_i$ of $\Gamma_{x_i}$ which is possible, as $x_i\notin X_2$ for every $i\in[n]$. By Claim~\ref{clm:sureprop}\ref{clm:atomsdis} we have that the points $(x_i,\tilde{y}_i)_{i\in[n]}$ induce a permutation which we denote by $\tilde{\pi}_n$. Furthermore, by Claim~\ref{clm:sureprop}\ref{clm:Afree} $\tilde{\pi}_n$ is $A$-free. It remains to establish the key property~\eqref{milujuvzorecky}. We split the summands $\sum_{i = 1}^{n}|\pi_n(i)-\tilde{\pi}_n(i)|$ according to whether $i\in S$ or $i\notin S$. In the latter case, it is enough to use the trivial bound $|\pi_n(i)-\tilde{\pi}_n(i)|\le n$. So, we need to have a good bound for $|\pi_n(i)-\tilde{\pi}_n(i)|$ when $i\in S$. Recall~\eqref{eq:whatisy} which tells us that the spacing in the y-coordinates of the geometric representation of $\pi_n$ was exactly $\frac1n$. Thus, if we change each point $(y_j)_{j\in S}$ by at most $2\sqrt{\delta}$, the point $y_i$ changes its relative position\footnote{that is `bigger than/smaller than'} with at most $4\sqrt{\delta}n$ other points $(y_j)_{j\in S}$. Taking into account also elements which are not in $S$, we get $|\pi_n(i)-\tilde{\pi}_n(i)|\le 4\sqrt{\delta}n+|[n]\setminus S|$. Putting all this together,
\[
\sum_{i = 1}^{n}|\pi_n(i)-\tilde{\pi}_n(i)|\leq \sum_{i\in S}(4\sqrt{\delta}n+|[n]\setminus S|)+\sum_{i\in[n]\setminus S} n\leq (4\sqrt{\delta}+4\sqrt{\delta})n^2+4\sqrt{\delta}n^2<\varepsilon n^2\;.
\]
\end{proof}
\end{document} |
\begin{document}
\title[Unstabilized critical Heegaard surfaces]
{Examples of unstabilized critical Heegaard surfaces}
\author{Jung Hoon Lee}
\address{School of Mathematics, KIAS\\
207-43, Cheongnyangni 2-dong, Dongdaemun-gu\\
Seoul, Korea}
\email{[email protected]}
\subjclass[2000]{Primary 57M50} \keywords{critical Heegaard surface, (closed orientable surface)$\times S^1$}
\begin{abstract}
We show that the standard minimal genus Heegaard splitting of (closed orientable surface)$\times S^1$ is a critical Heegaard splitting.
\end{abstract}
\maketitle
\section{Introduction}
Let $S$ be a closed orientable connected separating surface in an irreducible $3$-manifold $M$, dividing $M$ into two manifolds $V$ and $W$. Define the {\it disk complex} $\mathcal{D}(S)$ as follows.
\begin{enumerate}
\item Vertices of $\mathcal{D}(S)$ are isotopy classes of essential simple closed curves in $S$ that bound disks in $V$ or $W$.
\item $k+1$ distinct vertices constitute a $k$-cell if they admit pairwise disjoint representatives.
\end{enumerate}
By abuse of terminology, we sometimes identify a vertex with some representative essential disk of the vertex.
Let $\mathcal{D}_V(S)$ and $\mathcal{D}_W(S)$ be the subcomplex of $\mathcal{D}(S)$ spanned by vertices that bound disks in $V$ and $W$ respectively.
In this paper, we only consider vertices and edges of $\mathcal{D}_V(S)$, $\mathcal{D}_W(S)$ and $\mathcal{D}(S)$.
$S$ is {\it strongly irreducible} if $\mathcal{D}_V(S)$ is not connected to $\mathcal{D}_W(S)$ in $\mathcal{D}(S)$. Strongly irreducible surfaces have proved to be useful for analyzing the Heegaard structure and the topology of $3$-manifolds. For example, if a minimal genus Heegaard surface is not strongly irreducible, then the manifold contains an incompressible surface \cite{Casson-Gordon}.
Incompressible surfaces and strongly irreducible surfaces can be regarded as topological analogues of index $0$ and index $1$ minimal surfaces, respectively \cite{Bachman3}. As a generalization of this idea, Bachman has defined a notion of {\it critical surface} \cite{Bachman1}, \cite{Bachman2}, which can be regarded as a topological analogue of an index $2$ minimal surface.
\begin{definition}
$S$ is critical if vertices of $\mathcal{D}(S)$ can be partitioned into two sets $C_0$ and $C_1$.
\begin{enumerate}
\item For each $i=0,1$, there is at least one pair of disks $V_i\in \mathcal{D}_V(S)\cap C_i$ and $W_i\in \mathcal{D}_W(S)\cap C_i$ such that $V_i\cap W_i=\emptyset$.
\item If $V_i\in \mathcal{D}_V(S)\cap C_i$ and $W_{1-i}\in \mathcal{D}_W(S)\cap C_{1-i}$, then $V_i\cap W_{1-i}\ne \emptyset$ for any representatives. In other words, $V_i$ and $W_{1-i}$ are not joined by an edge.
\end{enumerate}
\end{definition}
Critical surfaces have some useful properties. For example, if an irreducible manifold contains an incompressible surface and a critical surface, then the two surfaces can be isotoped so that any intersection loop is essential on both surfaces.
In \cite{Bachman1}, it is shown that if a manifold which does not contain incompressible surfaces has two distinct Heegaard splittings, then the minimal genus common stabilization of the two splittings is critical.
In this paper, we give concrete examples of critical Heegaard surfaces which are of minimal genus Heegaard splittings, hence unstabilized.
\begin{theorem}
The standard minimal genus Heegaard splitting of (closed orientable surface)$\times S^1$ is a critical Heegaard splitting.
\end{theorem}
\begin{remark}
In \cite{Bachman-Johnson}, for any $n$, Bachman and Johnson constructed examples of manifolds $M_n$ admitting an index $n$ topologically minimal genus $n+1$ Heegaard surface by taking $n$-fold coverings of certain $2$-bridge link exteriors. When $n=2$, the manifold $M_2$ seems to be a genus three manifold by appealing to Kobayashi's characterization of genus two manifolds containing incompressible tori \cite{Kobayashi}. So the existence of minimal genus critical Heegaard surfaces of genus three is already known. Theorem 1.2 gives examples of minimal genus critical Heegaard surfaces of higher genus.
\end{remark}
\begin{corollary}
For any odd $g>1$, there exists a minimal genus critical Heegaard surface of genus $g$.
\end{corollary}
In Section $2$, we give a partition of a disk complex for the standard genus three Heegaard surface of (torus)$\times S^1$. In Section $3$, we show that the partition satisfies the definition of a critical surface.
The same arguments apply for (closed orientable surface)$\times S^1$.
\section{Partition of disk complex}
Let $C$ be a cube $\{(x,y,z)\in \mathbb{R}^3\,|\,-1\le x,y,z\le 1\,\}$. If we identify the three pairs of opposite faces of $C$, we get a $3$-torus $M=$ (torus)$\times S^1$. Let $q:C\rightarrow M$ be the quotient map. The image under $q$ of a tubular neighborhood of the union of the three coordinate axes in $C$ is a genus three handlebody $V$. It is easy to see that $W=cl(M-V)$ is also a genus three handlebody. This decomposition $M=V\cup_S W$ is the standard genus three Heegaard splitting of (torus)$\times S^1$.
\begin{figure}
\caption{$D,E\in C_0$ and $D\cap E=\emptyset$}
\end{figure}
Let $D$ be a meridian disk $q(\{z=\frac{1}{2}\}\cap C)\cap V$ in $V$. Let $E$ be a meridian disk $q(\{z=0\}
\cap C)\cap W$ in $W$. See Figure 1. Now we define a partition of vertices of the disk complex $\mathcal{D}(S)=C_0\,\dot{\cup}\, C_1$ as follows.
\begin{enumerate}
\item Let $\mathcal{D}_V(S)\cap C_0$ be essential disks in $V$ that are disjoint from $E$.
\item Let $\mathcal{D}_W(S)\cap C_0$ be essential disks in $W$ that are disjoint from $D$.
\item Let $\mathcal{D}_V(S)\cap C_1$ be $\mathcal{D}_V(S)-(\mathcal{D}_V(S)\cap C_0)$.
\item Let $\mathcal{D}_W(S)\cap C_1$ be $\mathcal{D}_W(S)-(\mathcal{D}_W(S)\cap C_0)$.
\end{enumerate}
Since $V\cup_S W$ is not a reducible Heegaard splitting, the four sets $\mathcal{D}_V(S)\cap C_0$, $\mathcal{D}_W(S)\cap C_0$, $\mathcal{D}_V(S)\cap C_1$, $\mathcal{D}_W(S)\cap C_1$ are mutually disjoint.
Note that $D\in \mathcal{D}_V(S)\cap C_0$ and $E\in \mathcal{D}_W(S)\cap C_0$ and $D\cap E=\emptyset$.
A meridian disk $D'=q(\{x=\frac{1}{2}\}\cap C)\cap V$ is in $\mathcal{D}_V(S)\cap C_1$. A meridian disk $E'=q(\{x=0\}\cap C)\cap W$ is in $\mathcal{D}_W(S)\cap C_1$ and $D'\cap E'=\emptyset$. See Figure 2. So the partition satisfies the first condition of definition of a critical surface.
Since $\mathcal{D}_V(S)\cap C_0$ and $\mathcal{D}_W(S)\cap C_0$ are symmetric, we only need to show that vertices in $\mathcal{D}_V(S)\cap C_0$ and $\mathcal{D}_W(S)\cap C_1$ cannot be joined by an edge.
By definition, any representative of a disk in $\mathcal{D}_W(S)\cap C_1$ intersects $D$. In the next section, we will show that $\mathcal{D}_V(S)\cap C_0$ consists of the single vertex $D$, which completes the proof for the case of (torus)$\times S^1$.
\begin{figure}
\caption{$D',E'\in C_1$ and $D'\cap E'=\emptyset$}
\end{figure}
\section{Any essential disk in $V$ disjoint from $E$ is $D$}
\begin{lemma}
Any essential disk in $V$ disjoint from $E$ is isotopic to $D$.
\end{lemma}
\begin{proof}
Let $F$ be an essential disk in $V$ disjoint from $\partial E$ which intersects $D$ minimally. Since $V$ is irreducible, we may assume that $F\cap D$ has no circle component of intersection, so the intersection consists of arcs. Let $\gamma$ be an outermost arc of $F\cap D$ in $F$ and $\Delta$ be the corresponding outermost disk in $F$. Let $\Delta'$ be one of the two disks cut by $\gamma$ in $D$.
By slightly pushing $\Delta'$, we may assume that $\Delta\cup\Delta'$ is disjoint from $D$ and $\partial E$. If we cut $V$ along $D$, we get a genus two handlebody $V'$. $\partial E$ separates $\partial V'$ into two once-punctured tori $T_1$ and $T_2$. We can see that $V'$ is homeomorphic to $T_1\times I$. Since there can be no essential disk in $T_1\times I$ whose boundary is disjoint from $\partial T_1$, $\Delta\cup\Delta'$ is an inessential disk in $V'$. The disk that $\partial (\Delta\cup\Delta')$ bounds in $\partial V'$ may or may not contain a copy of $D$, but in any case we can reduce the number of components of $F\cap D$ by an isotopy of $\Delta$, contradicting the minimality of $F\cap D$.
So we conclude that $F\cap D=\emptyset$. By the above arguments and since $F$ is essential in $V$, $F$ is isotopic to $D$.
\end{proof}
A closed orientable surface $S$ of genus $g>1$ can be obtained from a $4g$-gon $P=a_1 b_1 a^{-1}_1 b^{-1}_1\cdots a_g b_g a^{-1}_g b^{-1}_g$ by identifying corresponding sides. Then $M=S\times S^1$ can be obtained from $C=P\times [-1,1]$ by identifying corresponding pairs of faces. Let $q:C\rightarrow M$ be the quotient map. A standard minimal genus Heegaard splitting $M=V\cup W$ can be obtained similarly as in the case of (torus)$\times S^1$ in Section $2$. Take the center point $p$ of $P\times\{0\}$. Connect $p$ to each side of $P\times\{0\}$, and also connect $p$ to the center of $P\times\{-1\}$ and $P\times\{1\}$ by vertical arcs. Let $V$ be an image by $q$ of a tubular neighborhood of the resulting graph. Let $W=cl(M-V)$.
Let $D$ be a meridian disk $q(P\times \{\frac{1}{2}\})\cap V$ in $V$. Let $E$ be a meridian disk $q(P\times \{0\})\cap W$ in $W$.
$V$ can be regarded as a ((once-punctured surface) $\times I$) $\cup$ ($1$-handle), where $E$ corresponds to the puncture and $D$ corresponds to the co-core of $1$-handle. By similar arguments as in the proof of Lemma 3.1, any essential disk in $V$ disjoint from $E$ is isotopic to $D$. Note that $W$ can also be regarded as a ((once-punctured surface) $\times I$) $\cup$ ($1$-handle), where $D$ corresponds to the puncture and $E$ corresponds to the co-core of $1$-handle. So any essential disk in $W$ disjoint from $D$ is isotopic to $E$.
Hence if we partition the disk complex of $\partial V=\partial W$ as in Section 2, then we can see that the Heegaard surface of (closed orientable surface)$\times S^1$ is a critical Heegaard surface. This completes the proof of Theorem 1.2.
\end{document} |
\begin{document}
\title{Some Identities Involving Three Kinds of Counting Numbers}
\begin{center}
L. C. Hsu
Mathematics Institute, Dalian University of Technology, Dalian 116024, China\\[5pt]
\end{center}\vskip0.5cm
\subsection*{Abstract} In this note, we present several identities involving
binomial coefficients and the two kinds of Stirling numbers.
\vskip0.5cm
{\large\bf 1. Introduction}
Adopting Knuth's notation, let us denote by
$\left[\begin{array}{c}n\\k\end{array}\right]$ and
$\left\{\begin{array}{c}n\\k\end{array}\right\}$ the unsigned
(absolute) Stirling number of the first kind and the ordinary
Stirling number of the second kind, respectively. Here in
particular,
$\left[\begin{array}{c}0\\0\end{array}\right]=\left\{\begin{array}{c}0\\0\end{array}\right\}=1$,
$\left[\begin{array}{c}n\\0\end{array}\right]=\left\{\begin{array}{c}n\\0\end{array}\right\}=0~(n>0)$,
and
$\left[\begin{array}{c}n\\k\end{array}\right]=\left\{\begin{array}{c}n\\k\end{array}\right\}=0~(0\le
n<k)$. Generally, $\left[\begin{array}{c}n\\k\end{array}\right]$,
$\left\{\begin{array}{c}n\\k\end{array}\right\}$ and the binomial
coefficients $\left(\begin{array}{c}n\\k\end{array}\right)$ may be
regarded as the most important counting numbers in combinatorics.
The object of this short note is to propose some combinatorial
identities each involving these three kinds of counting numbers,
namely the following:
$$
\sum_k\left[\begin{array}{c}k\\p\end{array}\right]\left\{\begin{array}{c}n+1\\k+1\end{array}\right\}(-1)^k=
\left(\begin{array}{c}n\\p\end{array}\right)(-1)^p \eqno(1)
$$
$$
\sum_k\left[\begin{array}{c}k+1\\p+1\end{array}\right]\left\{\begin{array}{c}n\\k\end{array}\right\}(-1)^k=
\left(\begin{array}{c}n\\p\end{array}\right)(-1)^n \eqno(2)
$$
$$
\sum_{j,k}\left[\begin{array}{c}n\\k\end{array}\right]\left\{\begin{array}{c}k\\j\end{array}\right\}
\left(\begin{array}{c}n\\j\end{array}\right)(-1)^k=(-1)^n \eqno(3)
$$
$$
\sum_{j,k}\left\{\begin{array}{c}n\\k\end{array}\right\}\left[\begin{array}{c}k\\j\end{array}\right]
\left(\begin{array}{c}n\\j\end{array}\right)(-1)^k=(-1)^n \eqno(4)
$$
$$
\sum_{j,k}\left(\begin{array}{c}n\\k\end{array}\right)\left\{\begin{array}{c}k\\j\end{array}\right\}
\left[\begin{array}{c}j+1\\p\end{array}\right](-1)^j=\left\{\begin{array}{cl}0,
&(n+1>p)\\[6pt]
(-1)^n, &(n+1=p)\end{array}\right.
\eqno(5)
$$
$$
\sum_{j,k}\left[\begin{array}{c}n\\k\end{array}\right]\left(\begin{array}{c}k\\j\end{array}\right)
\left\{\begin{array}{c}j+1\\p\end{array}\right\}(-1)^j=\left\{\begin{array}{cl}0,
&(n+1>p)\\[6pt]
(-1)^n, &(n+1=p)\end{array}\right.
\eqno(6)
$$
Here each of the summations in (1) and (2) extends over all $k$ such
that $0\le k\le n$ or $p\le k\le n$, and all the double summations
within (3)--(6) are taken over all possible integers $j$ and $k$
such that $0\le j\le k\le n$.
Note that (1) is a well-known identity that appears in
Table 6.4 of Graham-Knuth-Patashnik's book [1] (cf. formula (6.24)).
It is quite plausible that (1) and (2) are the simplest
identities connecting the three kinds of counting numbers.
\\[10pt]
{\large\bf 2. Proof of the identities}
In order to verify (2)--(6), let us recall that the orthogonality
relations
$$
\sum_{k}\left[\begin{array}{c}n\\k\end{array}\right]\left\{\begin{array}{c}k\\p\end{array}\right\}
(-1)^{n-k}=\sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}\left[\begin{array}{c}k\\p\end{array}\right]
(-1)^{n-k}=\delta_{np}=\left\{\begin{array}{ll}0, &(n\neq p)\\[8pt]
1, &(n=p)\end{array}\right.
\eqno(7)
$$
are equivalent to the inverse relations
$$
a_n=\sum_{k}\left[\begin{array}{c}n\\k\end{array}\right](-1)^{n-k}b_k
\Leftrightarrow
b_n=\sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}a_k.
\eqno(8)
$$
Also, we shall make use of two known identities displayed in
Table 6.4 of [1], viz.
$$
\left\{\begin{array}{c}n+1\\p+1\end{array}\right\}=\sum_k\left(\begin{array}{c}n\\k\end{array}\right)
\left\{\begin{array}{c}k\\p\end{array}\right\}
\eqno(9)
$$
$$
\left[\begin{array}{c}n+1\\p+1\end{array}\right]=\sum_k\left[\begin{array}{c}n\\k\end{array}\right]
\left(\begin{array}{c}k\\p\end{array}\right).
\eqno(10)
$$
Now, take
$a_n=(-1)^n\left[\begin{array}{c}n+1\\p+1\end{array}\right]$ and
$b_k=(-1)^k\left(\begin{array}{c}k\\p\end{array}\right)$, so that
(10) takes precisely the form of the first relation in (8). Inverting
(10) via (8) then yields the identity (2).
(3) and (4) are trivial consequences of (7). Indeed, rewriting (7) in
the form
$$
\sum_k\left[\begin{array}{c}n\\k\end{array}\right]\left\{\begin{array}{c}k\\j\end{array}\right\}(-1)^k=
\sum_k\left\{\begin{array}{c}n\\k\end{array}\right\}\left[\begin{array}{c}k\\j\end{array}\right](-1)^k=
(-1)^n\delta_{nj} \eqno{(7)'}
$$
and noticing that
$\sum_j\left(\begin{array}{c}n\\j\end{array}\right)\delta_{nj}=
\left(\begin{array}{c}n\\n\end{array}\right)=1$, we see that
(3)--(4) follow at once from $(7)'$.
For proving (5), let us make use of (9) and (7) with $p$ being
replaced by $j$. We find
$$
\sum_j\left\{\begin{array}{c}n+1\\j+1\end{array}\right\}\left[\begin{array}{c}j+1\\p\end{array}\right](-1)^j=
\sum_j\sum_k\left(\begin{array}{c}n\\k\end{array}\right)\left\{\begin{array}{c}k\\j\end{array}\right\}
\left[\begin{array}{c}j+1\\p\end{array}\right](-1)^j=(-1)^n\delta_{n+1,p}.
$$
Hence (5) is obtained.
Similarly, (6) is easily derived from (10) and (7).
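
As a quick numerical sanity check (this is not part of the argument above; the range and the variable names are arbitrary), the following short Python sketch recomputes both kinds of Stirling numbers from their standard recurrences and verifies the identities (1)--(6) for small parameters.
\begin{verbatim}
from math import comb

N = 8                                   # check all 0 <= p <= n+1 with n <= N
size = N + 3
c = [[0] * size for _ in range(size)]   # c[n][k] = [n over k], unsigned 1st kind
S = [[0] * size for _ in range(size)]   # S[n][k] = {n over k}, 2nd kind
c[0][0] = S[0][0] = 1
for n in range(1, size):
    for k in range(1, size):
        c[n][k] = c[n-1][k-1] + (n-1) * c[n-1][k]
        S[n][k] = S[n-1][k-1] + k * S[n-1][k]

for n in range(N + 1):
    # identities (3) and (4)
    assert sum(c[n][k] * S[k][j] * comb(n, j) * (-1)**k
               for k in range(n+1) for j in range(k+1)) == (-1)**n
    assert sum(S[n][k] * c[k][j] * comb(n, j) * (-1)**k
               for k in range(n+1) for j in range(k+1)) == (-1)**n
    for p in range(n + 2):
        # identities (1) and (2)
        assert sum(c[k][p] * S[n+1][k+1] * (-1)**k
                   for k in range(n+1)) == comb(n, p) * (-1)**p
        assert sum(c[k+1][p+1] * S[n][k] * (-1)**k
                   for k in range(n+1)) == comb(n, p) * (-1)**n
        # identities (5) and (6), stated for n+1 >= p
        rhs = (-1)**n if n + 1 == p else 0
        assert sum(comb(n, k) * S[k][j] * c[j+1][p] * (-1)**j
                   for k in range(n+1) for j in range(k+1)) == rhs
        assert sum(c[n][k] * comb(k, j) * S[j+1][p] * (-1)**j
                   for k in range(n+1) for j in range(k+1)) == rhs

print("identities (1)-(6) verified for all n <=", N)
\end{verbatim}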
\\[10pt]
{\large\bf 3. Questions}
It may be of some interest to ask whether the
identities (1)--(6) admit combinatorial
interpretations, for instance via the inclusion-exclusion principle or
the method of bijections. Also, we have not yet determined whether
(1)--(6) can be proved by the method of generating functions (cf.
[2]).
{\bf AMS Classification Numbers}: 05A10, 05A15, 05A19.
\end{document} |
\begin{document}
\title[The Prescribed Jacobian Inequality with $L^p$ Data]
{Bi-Sobolev Solutions to the Prescribed Jacobian Inequality in the Plane with $L^p$ Data}
\author{Julian Fischer and Olivier Kneuss}
\address{Max Planck Institute for Mathematics in the Sciences,
Inselstrasse 22, 04103 Leipzig, Germany, E-Mail:
[email protected]}
\address{Institute of Mathematics, Federal University of Rio de Janeiro, Cidade Universitaria, 21941909 Rio de Janeiro, Brazil, E-Mail:
[email protected]}
\begin{abstract}
We construct planar bi-Sobolev mappings whose local volume distortion is bounded from below by a given function $f\in L^p$ with $p>1$, i.\,e.\ bi-Sobolev solutions for the prescribed Jacobian inequality in the plane for right-hand sides $f\in L^p$. More precisely, for any $1<q<(p+1)/2$ we construct a solution which belongs to $W^{1,q}$ and which preserves the boundary pointwise. For bounded right-hand sides $f\in L^{\infty}$, we provide bi-Lipschitz solutions. The basic building blocks of our construction are Lipschitz maps which stretch a given compact subset of the unit square by a given factor while preserving the boundary. The construction of these stretching maps relies on a slight strengthening of the covering result of Alberti, Cs\"ornyei, and Preiss for measurable planar sets in the case of compact sets.
\end{abstract}
\maketitle
\section{Introduction}
The prescribed Jacobian equation
\begin{subequations}
\label{JacobianEqualityWithBC}
\begin{align}
\label{JacobianEquality}
\det\nabla \phi&=f &&\text{in }\Omega,
\\
\phi&=\operatorname{id} &&\text{on }\partial \Omega,
\end{align}
\end{subequations}
(here for bounded connected domains $\Omega\subset \mathbb{R}^d$, positive right-hand sides $f:\Omega\rightarrow (0,\infty)$, and maps $\phi:\overline{\Omega}\rightarrow \mathbb{R}^d$) is an important subject of studies in geometric analysis. By the change of variables formula, the equation amounts to prescribing the volume distortion of the mapping $\phi$.
Clearly, the prescribed Jacobian equation \eqref{JacobianEquality} is underdetermined, as it is invariant under composition of $\phi$ with an arbitrary volume-preserving map. Therefore, one has neither uniqueness of solutions nor a regularity theory for solutions: Even for smooth right-hand sides $f$, there exist infinitely many solutions with rather low regularity. On the other hand, the regularity of the right-hand side $f$ may obviously restrict the regularity of solutions $\phi$: For right-hand sides $f$ that fail to belong to the H\"older space $C^{k,\alpha}$, the solutions $\phi$ cannot belong to the H\"older space $C^{k+1,\alpha}$. Similarly, for right-hand sides $f$ that lack $L^p$-integrability, solutions $\phi$ cannot belong to the Sobolev space $W^{1,dp}$.
Beyond these trivial restrictions on the regularity of solutions $\phi$, a notable result of Burago and Kleiner \cite{Burago-Kleiner} and McMullen \cite{McMullen} establishes the existence of bounded right-hand sides $f\in L^\infty$ for which the prescribed Jacobian equation \eqref{JacobianEquality} does not admit bi-Lipschitz solutions. In fact, these right-hand sides $f$ may even be taken as continuous and arbitrarily close to $1$.
For H\"older-regular right-hand sides $f\in C^{k,\alpha}(\overline{\Omega})$ (with $k\in \mathbb{N}_0$, $0<\alpha<1$) and sufficiently regular connected domains $\Omega$, an existence theory with optimal regularity has been developed for the prescribed Jacobian equation with identity boundary conditions \eqref{JacobianEqualityWithBC}: Provided that the trivial necessary condition $\int_{\Omega}f\,dx=|\Omega|$ for the solvability of \eqref{JacobianEqualityWithBC} is satisfied, the existence of a solution $\phi\in C^{k+1,\alpha}(\overline{\Omega};\overline{\Omega})$ has been shown by Dacorogna and Moser \cite{Dac-Moser}. Later, alternative proofs were given by Rivi\`ere and Ye \cite{Riv-Ye}, Avinyo et.\ al.\ \cite{AvinyoEtAl}, and Carlier and Dacorogna \cite{Dac-Carlier}. Note also that Ye \cite{Ye} proved that for right-hand sides $f$ satisfying the above necessary condition and belonging to $W^{m,p}(\Omega),$ where $m\in \mathbb{N}$ and $1<p<\infty$ are such that $W^{m,p}(\Omega)$ is an algebra, there exists a solution to \eqref{JacobianEqualityWithBC} in $W^{m+1,p}(\Omega;\Omega).$ An overview of the available theory for the prescribed Jacobian equation may be found in \cite{CDK}.
While the existence theory for the prescribed Jacobian equation is well-developed in the case of H\"older spaces, in the regime of less regular right-hand sides the situation changes drastically. In the case of merely $L^p$-integrable right-hand sides $f$, it is an open question for all $p\leq \infty$ whether in general a weak solution of the prescribed Jacobian equation \eqref{JacobianEqualityWithBC} exists in some Sobolev space $W^{1,q}(\Omega;\mathbb{R}^d)$ for some $q\geq 1$. The only existence result in this regime is due to Rivi\`ere and Ye \cite{Riv-Ye} and ensures the existence of H\"older-continuous solutions $\phi\in C^{0,\alpha}$ for right-hand sides $f\in L^\infty$ or $f\in \operatorname{BMO}$. Here, ``solution'' is to be understood in the distributional sense, as defined by the change of variables formula.
In the present work, in the planar case (i.\,e.\ for $\Omega\subset \mathbb{R}^2$) we establish existence results in Sobolev spaces $W^{1,q}(\Omega;\mathbb{R}^d)$ for the prescribed Jacobian \emph{inequality}
\begin{subequations}
\label{JacobianInequalityBC}
\begin{align}
\label{JacobianInequality}
\det\nabla \phi&\geq f\quad &&\text{in }\Omega,
\\
\phi&=\operatorname{id}\quad &&\text{on }\partial \Omega,
\end{align}
\end{subequations}
assuming just appropriate integrability of the right-hand side $f$. More precisely, for nonnegative right-hand sides $f\geq 0$ satisfying $\int_\Omega f~dx<|\Omega|$, we construct solutions with the following properties:
\begin{itemize}
\item Given $f\in L^p(\Omega)$ for some $p>1$, we show in Theorem \ref{L.p} that for every $1<q<(p+1)/2$ there exists a homeomorphism $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ which solves the prescribed Jacobian inequality \eqref{JacobianInequalityBC} and for which both $\phi$ and its inverse $\phi^{-1}$ belong to the Sobolev space $W^{1,q}(\Omega;\Omega)$.
\item For bounded right-hand sides $f\in L^{\infty}(\Omega)$, the prescribed Jacobian inequality \eqref{JacobianInequalityBC} admits a bi-Lipschitz solution $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$, see Theorem \ref{theorem.det.nabla.u.geq.f}. The bi-Lipschitz regularity is in general sharp. Recall that this existence result is in strong contrast to the case of the prescribed Jacobian equation \eqref{JacobianEquality}, for which the existence of bi-Lipschitz solutions for bounded right-hand sides fails in general, as mentioned above \cite{Burago-Kleiner,McMullen}.
\end{itemize}
Note that the condition on the right-hand side $\int_{\Omega}f~dx < |\Omega|$ is indeed a necessary condition for the problem of the prescribed Jacobian inequality to be meaningful: In the case $\int_{\Omega}f~dx > |\Omega|$, the change of variables formula excludes the existence of (weak) solutions; in the equality case $\int_{\Omega}f~dx = |\Omega|$, by the change of variables formula the prescribed Jacobian inequality \eqref{JacobianInequalityBC} reduces to the more ``rigid'' case of the prescribed Jacobian equation \eqref{JacobianEquality}.
To the best of our knowledge, Theorem \ref{L.p} is the first existence result giving a solution (of an equation involving the Jacobian) whose gradient has some integrability when the right-hand side $f$ is merely in $L^p(\Omega)$.
For maps $\phi\in W^{1,q}(\Omega;\mathbb{R}^d)$ with $q\geq d$, the Jacobian determinant $\det\nabla \phi$ is naturally defined in the pointwise sense. In the case $d-1<q<d$, in the present work the Jacobian will be understood in the distributional sense (see e.\,g.\ \cite{DePhilippis,DeLellis,dacorogna}), namely
\begin{align*}
\mathcal{J}_{\phi}[\eta]:=
-\frac{1}{d}\int_\Omega \langle\operatorname{adj} \nabla \phi \cdot \phi, \nabla \eta\rangle ~dx\quad \text{for every test function }\eta \in C^\infty_{cpt}(\Omega).
\end{align*}
In the supercritical case $q>d$ and the critical case $q=d$, the pointwise notion of the Jacobian determinant coincides with the distributional determinant, i.\,e.\ it holds that
\begin{align*}
\mathcal{J}_\phi[\eta]=\int_\Omega \eta \det \nabla \phi ~dx.
\end{align*}
In the subcritical case $q<d$, the two notions differ in general: A simple example is provided by the map
\begin{align*}
\phi(x)=\frac{x}{|x|}.
\end{align*}
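For orientation, in the planar case $d=2$ one can check (by a classical computation which is not needed in the sequel) that
\begin{align*}
\nabla \phi(x)=\frac{1}{|x|}\left(\operatorname{Id}-\frac{x\otimes x}{|x|^2}\right),
\end{align*}
so that $\det\nabla\phi=0$ pointwise a.\,e.\ and $|\nabla\phi(x)|=|x|^{-1}$, whence $\phi\in W^{1,q}$ near the origin for every $q<2$; on the other hand $\operatorname{adj}\nabla\phi\cdot \phi=x/|x|^2=\nabla \log|x|$, so that on a disk centered at the origin
\begin{align*}
\mathcal{J}_{\phi}[\eta]=-\frac{1}{2}\int \left\langle \frac{x}{|x|^2},\nabla \eta\right\rangle~dx=\pi\,\eta(0)\quad \text{for every test function }\eta,
\end{align*}
i.\,e.\ the distributional Jacobian is the measure $\pi\delta_0$, while the pointwise Jacobian vanishes.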
In fact, in the subcritical case the Jacobian determinant defined in the \emph{pointwise} sense experiences a certain loss of rigidity and is no longer associated with the change of variables formula. This has in particular been exploited by Koumatos, Rindler, and Wiedemann \cite{KKR} to construct solutions to the prescribed Jacobian equation in the pointwise sense with right-hand sides $f\in L^p(\Omega)$, $p<d$, $\Omega\subset \mathbb{R}^d$, by employing convex integration techniques.
Before stating our main results, let us comment on the strategy for the proof of our results. We shall reduce both existence results for the prescribed Jacobian inequality (\ref{JacobianInequality}), Theorem~\ref{L.p} and Theorem~\ref{theorem.det.nabla.u.geq.f} (i.\,e.\ both the case of right-hand sides in $L^p$ and the case of right-hand sides in $L^\infty$), to the existence of bi-Lipschitz maps which stretch a given measurable subset of the plane. This stretching result is provided in Theorem \ref{BiLipschitzMapTheorem} and is at the heart of our paper. In the case of right-hand sides $f\in L^p$, the reduction to Theorem \ref{BiLipschitzMapTheorem} is facilitated by iterative stretching of appropriate superlevel sets of $f$; see the proof of Proposition \ref{proposition.assuming.f.L.p.small}. In the case of bounded right-hand sides $f\in L^\infty$, the reduction to Theorem \ref{BiLipschitzMapTheorem} is quite straightforward and relies on the classical existence results for the prescribed Jacobian equation in the smooth case.
In Theorem \ref{BiLipschitzMapTheorem}, we consider a bounded open set $\Omega$ in $\mathbb{R}^2$ with $C^{1,1}$ boundary, let $\tau>0$, and consider any measurable set $M\subset \Omega$ with small measure. Then we are able to construct a bi-Lipschitz mapping $\phi=\phi_{\tau,M}$ (with $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$) preserving the boundary pointwise, stretching $M$ by a factor of at least $1+\tau$, and compressing in a controlled way outside $M$ in the sense $\det\nabla \phi\geq 1+\tau$ a.\,e.\ on $M$ and $\det\nabla\phi\geq 1-C|M|^{1/2}\tau$ a.\,e.\ in $\Omega$,
together with uniform $W^{1,p}$ estimates for $1\leq p\leq \infty$.
Our proof of Theorem \ref{BiLipschitzMapTheorem} is constructive and is based on a slight strengthening of a covering result for planar measurable sets by Alberti, Cs\"ornyei, and Preiss \cite{Alberti}. As the covering result is restricted to two dimensions, the same is true for our construction (see Remark \ref{remark.no.generalization.higher.dim}).
The covering result of Alberti, Cs\"ornyei, and Preiss \cite{Alberti} states that any compact subset $K$ of the unit square may be covered (for $\delta>0$ small enough) by a collection of horizontal and vertical 1-Lipschitz strips with height (respectively width) $2\delta$, the number of strips being bounded by $2\delta^{-1}\sqrt{|K|}$.
Here, a horizontal
(respectively vertical) 1-Lipschitz strip is defined to be the graph of a
1-Lipschitz function over the horizontal (respectively vertical) axis thickened
by $\delta$ in the vertical (respectively horizontal) direction.
In fact, the strips may be chosen in such a way that the horizontal strips have pairwise disjoint interior and such that the same is true for the vertical strips (see Lemma \ref{IntersectionFreeCovering}). This has first been observed by Marchese \cite{Marchese}; the authors of the present paper have established a similar result independently in an earlier version of the present work. See the picture in the center of Figure~\ref{SketchStretching} for an example of such a covering by strips.
By stretching the horizontal strips in the vertical direction and the vertical strips in the horizontal direction (see the bottom pictures in Figure~\ref{SketchStretching}), in Proposition \ref{Prop.Bi.Lip.simpl} we obtain a mapping which stretches a given compact subset $K$ of the unit square. Here, the fact that the strips may be chosen in such a way that each point is covered by only a bounded number of strips seems to be crucial for our construction, as otherwise the uniform Lipschitz bound for our maps would be lost.
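
To convey the stretching mechanism in the simplest possible setting, the following one-dimensional caricature (purely illustrative and not the construction used in the paper; the numerical values of $\tau$, $a$, and $\ell$ are arbitrary) exhibits a piecewise-linear map of $[0,1]$ onto itself which fixes the endpoints, stretches a prescribed subinterval by the factor $1+\tau$, and compresses the complement only mildly.
\begin{verbatim}
import numpy as np

# stretch the interval (a, a+ell) by the factor 1+tau, keeping 0 and 1 fixed
tau, a, ell = 0.5, 0.3, 0.1
assert (1 + tau) * ell < 1.0                    # otherwise no room is left

slope_out = (1 - (1 + tau) * ell) / (1 - ell)   # slope outside the stretched interval

def g(x):
    if x <= a:
        return slope_out * x
    if x <= a + ell:
        return slope_out * a + (1 + tau) * (x - a)
    return slope_out * a + (1 + tau) * ell + slope_out * (x - a - ell)

assert np.isclose(g(0.0), 0.0) and np.isclose(g(1.0), 1.0)   # boundary preserved
# the "Jacobian" g' equals 1+tau on the stretched interval and
# slope_out = 1 - tau*ell/(1-ell) > 0 on the complement
\end{verbatim}
Roughly speaking, in the planar construction the role of $\ell$ is played by the total width of the covering strips, which by the covering result is of order $|M|^{1/2}$; this is the source of the factor $|M|^{1/2}\tau$ in the compression estimate of Theorem \ref{BiLipschitzMapTheorem}.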
Next, by an explicit construction based on an interpolation on a boundary layer between the identity boundary conditions and the boundary values provided by Proposition \ref{Prop.Bi.Lip.simpl}, we obtain a map that additionally preserves the boundary pointwise (see Lemma \ref{lemma.simpl.boundary.values} and, in particular, Figure~\ref{BoundaryCorrection}). Using inner regularity of the Lebesgue measure and weak continuity of the determinant, our result immediately extends to general measurable sets (see Lemma \ref{Lemma.simpl.set}). Finally we extend the result -- which by now has only been proven for the unit square -- to domains in the plane with boundary of class $C^{1,1}$; see Lemma \ref{Lemma.simpl.domain}.
{\bf Notation.}
Throughout the paper, we use the following notation:
\begin{itemize}
\item We denote the Lebesgue measure of a measurable set $M\subset
\mathbb{R}^d$ by $|M|.$
\item For $M\subset \mathbb{R}^d,$ $\chi_{M}$ denotes the usual characteristic
function of $M:$
\begin{align*}
\chi_M=\left\{\begin{array}{cl}
1&\text{in $M$}\\
0&\text{in $M^c.$}
\end{array}
\right.
\end{align*}
\item The notation $\sharp P$ refers to the number of elements
of the set $P$.
\item A function $g:\mathbb{R}\rightarrow \mathbb{R}$ is said to be
$1$-Lipschitz if it is Lipschitz with $|g'|\leq 1$ almost everywhere.
\item By $C$ we denote a generic constant whose value may change from
appearance to appearance.
\item By $\operatorname{id}$ we denote the identity map, while by
$\operatorname{Id}$ we denote the unit matrix.
\item For a matrix $A\in \mathbb{R}^{d\times d}$ we denote by $\operatorname{adj}A$ the adjugate matrix of $A$, i.\,e.\ the matrix satisfying $\operatorname{adj}A\cdot A=\det A\,\operatorname{Id}$ (see also the elementary check displayed after this list).
For example, when $d=2$ we have
\begin{align*}
\operatorname{adj}A=\begin{pmatrix}
A^2_2
&
-A^1_2
\\
-A^2_1
&
A^1_1
\end{pmatrix}\quad \text{where}\quad
A=\begin{pmatrix}
A^1_1
&
A^1_2
\\
A^2_1
&
A^2_2
\end{pmatrix}.
\end{align*}
\item Throughout the paper, we use standard notation for Sobolev and H\"older spaces. For example, by $C^{1,\alpha}(\Omega)$ we denote the space of functions on $\Omega$ whose first derivative is H\"older continuous with exponent $\alpha$; by $W^{1,p}(\Omega;\mathbb{R}^d)$, we denote the space of functions $f\in L^p(\Omega)$ whose distributional derivative satisfies $\nabla f\in L^p(\Omega;\mathbb{R}^d)$. By $\operatorname{Diff}^\infty(A;B)$ we denote the class of smooth diffeomorphisms from $A$ to $B$.
\end{itemize}
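
The following small Python/NumPy sketch (purely illustrative; the random matrix and the seed are arbitrary) checks the elementary facts about the planar adjugate used later in the paper, namely that $\operatorname{adj}A\cdot A=\det A\,\operatorname{Id}$, $A^{-1}=\operatorname{adj}A/\det A$, and $|\operatorname{adj}A|=|A|$ for $2\times 2$ matrices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)      # arbitrary seed, illustrative only
A = rng.standard_normal((2, 2))     # a generic (almost surely invertible) 2x2 matrix

# adjugate with the sign convention used above: adj(A) A = det(A) Id
adjA = np.array([[ A[1, 1], -A[0, 1]],
                 [-A[1, 0],  A[0, 0]]])
detA = np.linalg.det(A)

assert np.allclose(adjA @ A, detA * np.eye(2))              # adj(A) A = det(A) Id
assert np.allclose(adjA / detA, np.linalg.inv(A))           # A^{-1} = adj(A)/det(A)
assert np.isclose(np.linalg.norm(adjA), np.linalg.norm(A))  # |adj A| = |A| (Frobenius)
\end{verbatim}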
\section{Main results}
We now state our main results.
\begin{theorem}\label{L.p}
Let $\Omega\subset\mathbb{R}^2$ be a bounded connected domain with boundary of class $C^{1,1}$. Let $f\in L^p(\Omega)$, $p>1$, be nonnegative with
\begin{align*}
\int_\Omega f \,dx<|\Omega|.
\end{align*}
Then for every $q$ with $1<q<(p+1)/2$ there exists a homeomorphism $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ with the bi-Sobolev property $\phi,\phi^{-1}\in W^{1,q}(\Omega;\Omega)$ which solves the prescribed Jacobian inequality with identity boundary conditions
\begin{subequations}
\label{main.equation.L.p.case}
\begin{align}
\label{MainResultJacobianInequality}
\det\nabla \phi&\geq f &&\text{in }\Omega,
\\
\phi&=\operatorname{id} &&\text{on }\partial \Omega.
\end{align}
\end{subequations}
Here, the inequality \eqref{MainResultJacobianInequality} holds almost everywhere in the case $q\geq 2$; in contrast, in the case $q<2$ it is to be understood in the weak sense
\begin{align*}
\mathcal{J}_{\phi}[\eta]=
-\frac{1}{2}\int_\Omega \langle\operatorname{adj} \nabla \phi \cdot \phi, \nabla \eta\rangle ~dx\geq \int_{\Omega}f\eta ~dx\quad
\text{for every }\eta\in C^{\infty}_{cpt}(\Omega; [0,\infty)).
\end{align*}
\end{theorem}
Recall that by the change of variables formula any solution satisfies $\int_{\Omega}\det\nabla \phi~dx\leq |\Omega|$, so that the condition $\int_{\Omega}f~dx\leq |\Omega|$ is necessary for the solvability of \eqref{main.equation.L.p.case}. This assertion is also true for $q<2$ when interpreting the prescribed Jacobian inequality \eqref{MainResultJacobianInequality} in the distributional sense. Recall also that for $\int_{\Omega}f=|\Omega|$ the prescribed Jacobian inequality \eqref{main.equation.L.p.case} actually reduces to the prescribed Jacobian equation (\ref{JacobianEquality}).
\begin{remark}\label{remark.apres.L.p}
Note that the best possible regularity for a solution of \eqref{MainResultJacobianInequality} (recall that we are working in two dimensions) would be $\phi\in W^{1,2p}$ (i.e. $q=2p$). However it seems that the asymptotic ratio of $1/2$ between $q$ and $p$ in our main result cannot be improved with our approach.
\end{remark}
For $p=+\infty$, the following limiting version of the previous theorem holds.
\begin{theorem}
\label{theorem.det.nabla.u.geq.f}
Let $\Omega\subset \mathbb{R}^2$ be a bounded domain with boundary of class $C^{1,1}$. Let $f\in L^{\infty}(\Omega)$ be a nonnegative function with $\int_{\Omega}f~dx<|\Omega|$. Then there exists a bi-Lipschitz map $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ satisfying
\begin{subequations}
\label{Jacobian.inequality.theorem}
\begin{align}
\det\nabla \phi&\geq f&&\text{a.\,e.\ in }\Omega,
\\
\phi&=\operatorname{id}&&\text{on }\partial \Omega.
\end{align}
\end{subequations}
Moreover the regularity of $\phi$ is sharp in general: There exists an $f$ for which there is no $C^1$ solution $\phi$ to (\ref{Jacobian.inequality.theorem}).
\end{theorem}
\begin{remark}\label{Remark.after.det.nabla.phi.geq.f}
Recall that for $f\in C^0(\overline{\Omega})$ with $f\geq 0$ and $\int_{\Omega}f~dx=|\Omega|$, in general the prescribed Jacobian \eqref{JacobianEquality} does not admit a bi-Lipschitz solution \cite{Burago-Kleiner,McMullen}. As in the case $\int_{\Omega}f=|\Omega|$ the prescribed Jacobian inequality (\ref{Jacobian.inequality.theorem}) reduces to the case of the prescribed Jacobian equation \eqref{JacobianEquality}, we see that the assumption $\int_\Omega f<|\Omega|$ is sharp in general.
\end{remark}
All the above results strongly rely on the following result which
essentially states that for any $\tau>0$ and any measurable set $M\subset \Omega\subset \mathbb{R}^2$ with small enough measure, we can construct a bi-Lipschitz map mapping $\overline{\Omega}$ to itself which
\begin{itemize}
\item stretches the set $M$ by a factor of at least $1+\tau,$
\item compresses $\Omega\setminus M$ by no more than a factor of $1-C\sqrt{|M|}\tau,$
\item has good uniform estimates for its difference from the identity,
\item preserves the boundary pointwise.
\end{itemize}
\begin{theorem}
\label{BiLipschitzMapTheorem}
Let $\Omega\subset \mathbb{R}^2$ be a bounded domain with boundary of class
$C^{1,1}$. Then there exist constants $C,c>0$ depending only on $\Omega$ with
the following property: For any $\tau>0$ and for any measurable set $M\subset
\Omega$ with $\max\{|M|,\sqrt{|M|}\tau\}\leq c$ there exists a bi-Lipschitz mapping
$\phi=\phi_{\tau,M}:\overline{\Omega}\rightarrow \overline{\Omega}$ satisfying
\begin{align}
\phi&=\operatorname{id} &&\text{on }\partial\Omega,
\label{main.boundary}
\\
\|\phi-\operatorname{id}\|_{W^{1,p}(\Omega)}&\leq C|M|^{1/(2p)}
\tau &&\text{for every }1\leq p\leq \infty,
\label{main.W.1.p}
\\
\|\phi-\operatorname{id}\|_{L^\infty(\Omega)}&\leq C|M|^{1/2}\tau,
\label{main.L.infty}
\\
|\nabla\phi|&\leq C\det \nabla \phi &&\text{a.\,e.\ in }\Omega,
\label{main.nabla.phi.bounded.by.det}
\\
\det \nabla \phi&\geq 1+\tau\quad &&\text{a.\,e.\ on }M,
\label{main.det.on.M}
\\
\det \nabla \phi&\geq 1-C|M|^{1/2}\tau &&\text{a.\,e.\ in }\Omega.
\label{main.det.global}
\end{align}
\end{theorem}
\begin{remark}\label{remark.after.Th.Bilip.Stetching}
(i) We would like to emphasize that the constant $C$ only depends on
$\Omega$. In particular (\ref{main.W.1.p}) with $p=\infty$ gives the
uniform Lipschitz estimate
\begin{align*}
\|\phi-\operatorname{id}\|_{W^{1,\infty}(\Omega)}\leq C\tau.
\end{align*}
(ii) Since, by the change of variables formula, we always have $\int_{\Omega}\det\nabla \phi\,dx=|\Omega|$,
we deduce that a smallness assumption on $|M|$
(depending on $\tau$) is needed for (\ref{main.det.on.M}) to hold true.
\end{remark}
A consequence of the results of the present work is a certain generalization of results on functionals in nonlinear elasticity, at least in the planar case. In elasticity, a natural requirement for functionals $\mathcal{F}[u]$ that associate an elastic energy to a given deformation $u:\Omega\subset \mathbb{R}^d \rightarrow \mathbb{R}^d$ is that they should penalize compression of matter to zero volume by infinite energy. A mathematical model case of such functionals is the functional
\begin{align}
\label{FunctionalElasticity}
\mathcal{F}[u]:=\int_\Omega |\nabla u|^2 + h(\det \nabla u) \,dx,
\end{align}
where $h:(0,\infty)\rightarrow [0,\infty)$ is a convex function that blows up at $0$. In particular, for power-law blowup $h(s):=s^{-p}$, one has
\begin{align}
\label{FunctionalElasticityPowerLaw}
\mathcal{F}[u]:=\int_\Omega |\nabla u|^2 + \frac{1}{(\det \nabla u)_+^p} \,dx.
\end{align}
For minimizers of such functionals, the concept of equilibrium equations has been developed by Ball \cite{Ball} as a substitute for the Euler-Lagrange equations, as it is not known whether in general the Euler-Lagrange equations are satisfied by minimizers of such functionals. In contrast to the Euler-Lagrange equations, which for a minimizer $u$ are derived by passing to the limit $\tau\searrow 0$ for some smooth compactly supported test vector field $\xi$ in the relation
\begin{align}
\label{AnsatzEulerLagrangeEquation}
\frac{\mathcal{F}[u+\tau \xi]-\mathcal{F}[u]}{\tau} \geq 0,
\end{align}
the equilibrium equations are obtained by passing to the limit $\tau \searrow 0$ in the relations
\begin{align}
\label{AnsatzEquilibriumEquations}
\frac{\mathcal{F}[u\circ (\operatorname{id}+\tau \xi)]-\mathcal{F}[u]}{\tau} \geq 0
\quad\quad\text{or}\quad\quad
\frac{\mathcal{F}[(\operatorname{id}+\tau \xi)\circ u]-\mathcal{F}[u]}{\tau} \geq 0.
\end{align}
In the ansatz for the Euler-Lagrange equation \eqref{AnsatzEulerLagrangeEquation}, one cannot exclude that the competitor deformation $u+\tau \xi$ may have infinite energy for all $\tau>0$, thereby obstructing the derivation of the Euler-Lagrange equations: In general, one cannot exclude that $\det \nabla (u+\tau\xi)$ becomes negative on a set of positive measure for every $\tau>0$.
It turns out that for functionals with power-law blowup for $\det\nabla u \searrow 0$ like \eqref{FunctionalElasticityPowerLaw}, the direct ansatz for the derivation of the equilibrium equations \eqref{AnsatzEquilibriumEquations} is successful \cite{Ball}: In particular, one directly sees that for minimizers $u$ with finite energy $\mathcal{F}[u]$, the competitor deformations $u\circ (\operatorname{id}+\tau \xi)$ and $(\operatorname{id}+\tau \xi)\circ u$ have finite energy for $\tau>0$ small enough. However, in the case of superpolynomial blowup of $h$ for $\det \nabla u \searrow 0$ -- or if blowup already occurs when $\det \nabla u$ drops below some positive threshold $\delta>0$ --~, the direct ansatz \eqref{AnsatzEquilibriumEquations} for the derivation of the equilibrium equations fails, as one again cannot exclude that the competitor maps $u\circ (\operatorname{id}+\tau \xi)$ and $(\operatorname{id}+\tau \xi)\circ u$ may have infinite energy for all $\tau>0$.
In the planar case, the stretching maps provided by Theorem \ref{BiLipschitzMapTheorem} enable us to establish the equilibrium equations also for functionals \eqref{FunctionalElasticity} with a convex differentiable function $h(s)$ which blows up superpolynomially for $s\rightarrow 0$ or even already at a positive threshold $\delta>0$. The details will be provided in a forthcoming work.
\section{The prescribed Jacobian inequality in the $L^{p}$ case}
In this section we prove the existence of solutions to the prescribed Jacobian inequality for right-hand sides of class $L^p$ (i.\,e.\ Theorem \ref{L.p}). We shall reduce Theorem~\ref{L.p} to Proposition~\ref{proposition.assuming.f.L.p.small} below, using classical existence theorems for the prescribed Jacobian equation in the smooth case. Besides some extra estimates, the statement of Proposition~\ref{proposition.assuming.f.L.p.small} corresponds to Theorem~\ref{L.p} with the additional assumption that $\|f\|_{L^p}$ is sufficiently small.
The idea for the proof of Proposition~\ref{proposition.assuming.f.L.p.small} -- which corresponds to the main step of the proof of Theorem~\ref{L.p} -- is to inductively use Theorem \ref{BiLipschitzMapTheorem} to obtain mappings which stretch well-chosen sets (defined in terms of the superlevel sets of $f$) and to show that the (infinite) composition of these mappings converges to a solution.
\begin{proposition}\label{proposition.assuming.f.L.p.small} Let $\Omega\subset\mathbb{R}^2$ be a bounded domain with boundary of class $C^{1,1}$. Let $p>1$ and $1< q<(p+1)/2$.
Then there exist two constants $\widetilde{c}$ and $\widetilde{C}$ depending only on $p$, $q$ and $\Omega$ with the following property: For any $f\in L^p(\Omega)$ with
$$f\geq 0\quad \text{and}\quad \|f\|_{L^p}\leq \widetilde{c},$$ there exists
a homeomorphism $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ with
$\phi,\phi^{-1}\in W^{1,q}(\Omega;\Omega)$ satisfying
\begin{subequations}\label{equation.det.gen}
\begin{align}
\label{equat.det.simplifier}
\det\nabla \phi &\geq f && \text{in }\Omega,
\\
\label{equat.det.simplifier.boundary}
\phi &= \operatorname{id} && \text{on }\partial \Omega,
\end{align}
\end{subequations}
together with the estimates
\begin{equation}\label{est.det.simpl}
\det\nabla \phi\geq 1-\widetilde{C}\|f\|_{L^p}^{p/2} \quad\quad \text{in $\Omega$}
\end{equation}
and
\begin{equation}\label{est.psi.w.1.q}
\|\phi-\operatorname{id}\|_{W^{1,q}}\leq \widetilde{C}\|f\|_{L^p}^{p/(2q)}.
\end{equation}
Here, the inequalities \eqref{equat.det.simplifier} and \eqref{est.det.simpl} are understood in the pointwise a.\,e.\ sense for $q\geq 2$ and in the distributional sense for $q<2$.
\end{proposition}
\begin{proof}
\textit{Step 1 (stretching the superlevel sets of $f$).}
We choose $0<\lambda<1$ small enough so that
\begin{align}\label{relation.q.p}
2(1+\lambda)(q-1)<p-1.
\end{align}
Such a choice of $\lambda$ is possible since $q<(p+1)/2$ precisely means $2(q-1)<p-1$. Let $C$ be the constant (depending only on $\Omega$) appearing in the statement of Theorem \ref{BiLipschitzMapTheorem}.
Fix $\tau_0\geq 1$ large enough so that
\begin{align}\label{tau.big.enough}
C^{1/(q-1)}(1+C\tau_0)\leq (1+\tau_0)^{1+\lambda}.
\end{align}
Proceeding by induction, we claim that, taking $\|f\|_{L^p}\leq \widetilde{c}$ (where $\widetilde{c}$ will be chosen small enough and will only depend on $p,q$ and $\Omega$), we can construct, for every $j\geq 1$, a bi-Lipschitz map
$\phi_j:\overline{\Omega}\rightarrow \overline{\Omega}$ such that
\begin{align}
\phi_j&=\operatorname{id}\quad \text{on $\partial\Omega,$}
\\
\|\phi_j-\operatorname{id}\|_{W^{1,t}(\Omega)}&\leq C\tau_0|M_j|^{1/(2t)}\quad
\text{for every }1\leq t\leq \infty,\label{main.W.1.p.i}
\\
\|\phi_{j} -\operatorname{id}\|_{L^{\infty}}&\leq C\tau_0|M_{j}|^{1/2},\label{main.L.infty.i}\\
|\nabla\phi_j|&\leq C\det \nabla \phi_j\quad \text{a.\,e.\ in }\Omega,
\label{main.nabla.phi.bounded.by.det.i}\\
\det \nabla \phi_j&\geq 1+\tau_0\quad \text{a.\,e.\ in }M_j,\label{main.det.on.M.i}
\\
\det \nabla \phi_j&\geq 1-C\tau_0|M_j|^{1/2}\quad \text{a.\,e.\ in $\Omega,$}
\label{main.det.global.i}
\end{align}
where
\begin{align}\label{definition.M.j}
M_j:=\phi_{j-1}\circ\cdots \circ\phi_1\left(
\underbrace{
\bigcap_{0\leq k\leq j-1}
\left\{ \frac{f}{\det \nabla (\phi_{k}\circ \ldots \circ \phi_1)}\geq \frac{1}{2}\right\}}_{=:A_j}
\right)
\end{align}
with the convention that $M_1=\{ f\geq 1/2\}$ and $A_1=\{f\geq \frac{1}{2}\}$ and where $C=C(\Omega)$ is as before the constant in the statement of Theorem \ref{BiLipschitzMapTheorem}.
\textit{Step 1.1 (start of induction).}
First, since obviously
\begin{align*}
|\{f\geq 1/2\}|\rightarrow 0\quad \text{as $\|f\|_{L^p}\rightarrow 0$,}
\end{align*}
taking $\|f\|_{L^p}$ small enough,
we can apply Theorem \ref{BiLipschitzMapTheorem} with $\tau=\tau_0$ and
$M=\{ f\geq 1/2\}$
to get $\phi_1.$
\textit{Step 1.2 (induction step).}
Assume that $\phi_1,\cdots,\phi_{i-1}$ have been constructed. We now show how to obtain $\phi_i$.
Again the existence of $\phi_i$ will be a direct consequence of Theorem \ref{BiLipschitzMapTheorem} once we have shown that the measure of $M_i$ is small enough so that $\max\{|M_i|,\sqrt{|M_i|}\,\tau_0\}\leq c$ is satisfied.
To see that, let us define
\begin{align*}
I_i:=&
\int_{A_i}
\frac{ f^p}{\left(\det \nabla (\phi_{i-1} \circ \ldots \circ \phi_1)\right)^{p-1}}~dx.
\end{align*}
Hence, as by the definition of $A_i$ (see \eqref{definition.M.j}) we obviously have $A_i\subset A_{i-1}$, using (\ref{main.det.on.M.i}) with $j=i-1$ we deduce
\begin{align*}
I_i=&
\int_{A_i}
\frac{1}{(\det \nabla \phi_{i-1}(\phi_{i-2}\circ \ldots \circ \phi_1))^{p-1}}
\cdot
\frac{ f^p}{\left(\det \nabla (\phi_{i-2} \circ \ldots \circ \phi_1)\right)^{p-1}} ~dx\\
\leq&
\int_{A_{i-1}}
\frac{1}{(\det \nabla \phi_{i-1}(\phi_{i-2}\circ \ldots \circ \phi_1))^{p-1}}
\cdot
\frac{ f^p}{\left(\det \nabla (\phi_{i-2} \circ \ldots \circ \phi_1)\right)^{p-1}} ~dx
\\
\leq&
\frac{1}{(1+\tau_0)^{p-1}}
\int_{A_{i-1}}
\frac{ f^p}{\left(\det \nabla (\phi_{i-2} \circ \ldots \circ \phi_1)\right)^{p-1}} ~dx
\\
=&
\frac{1}{(1+\tau_0)^{p-1}} I_{i-1}.
\end{align*}
Thus
\begin{align*}
I_i\leq \frac{1}{(1+\tau_0)^{(p-1)(i-1)}} \int_\Omega f^p ~dx.
\end{align*}
Moreover
\begin{align*}
|M_i|=&|\phi_{i-1}\circ\cdots\circ \phi_{1}(A_i)|=2^p\int_{A_i}(1/2)^p\det\nabla(\phi_{i-1}\circ\cdots\circ \phi_{1})~dx\\
\leq&2^p\int_{A_i}\frac{f^p}{(\det \nabla (\phi_{i-1}\circ \ldots \circ \phi_1))^p}\det\nabla(\phi_{i-1}\circ\cdots\circ \phi_{1})~dx\\
=&2^pI_i
\end{align*}
and thus
\begin{align}\label{decay.M.i}
|M_i|\leq \frac{\|2f\|_{L^p}^{p}}{(1+\tau_0)^{(p-1)(i-1)}}\leq \|2f\|_{L^p}^{p}.
\end{align}
Choosing the upper bound $\tilde c$ for $\|f\|_{L^p}$ in the assumptions of the proposition small enough (independently of $i$), we can therefore apply Theorem \ref{BiLipschitzMapTheorem} with $\tau=\tau_0$ and $M=M_i$ to get $\phi_i.$
\textit{Step 1.3 (additional estimates).}
By (\ref{decay.M.i}) we deduce that
\begin{align}
\sum_{i=1}^{\infty} |M_i|^{1/2}\leq&\sum_{i=1}^{\infty}\frac{\|2f\|_{L^p}^{p/2}}{(1+\tau_0)^{(p-1)(i-1)/2}}
=\frac{\|2f\|_{L^p}^{p/2}}{1-(1+\tau_0)^{-(p-1)/2}}.\label{M.k.in.l.2}
\end{align}
Next, using (\ref{main.det.global.i}) and (\ref{M.k.in.l.2}), we have for every $m\geq i\geq 1$
\begin{align}\det\nabla (\phi_m\circ\cdots\circ \phi_i)
=&\det \nabla \phi_m (\phi_{m-1}\circ\ldots\circ \phi_i) \cdot
\ldots \cdot \det \nabla \phi_{i+1}(\phi_i) \cdot \det \nabla \phi_i\nonumber
\\
\geq&\prod_{k=i}^m (1-C\tau_0|M_k|^{1/2})\geq 1-C\tau_0\sum_{k=i}^{m}|M_k|^{1/2}\nonumber\\
\geq& 1-C\tau_0\frac{\|2f\|_{L^p}^{p/2}}{1-(1+\tau_0)^{-(p-1)/2}}.\label{geq.one.half}
\end{align}
In particular, choosing the upper bound $\tilde c$ for $\|f\|_{L^p}$ smaller if necessary, we deduce from the previous inequality that for every $m\geq i\geq 1$
\begin{align}\label{det.nabla.u.m....u.1.geg.half}
\det\nabla (\phi_m\circ\cdots\circ \phi_i)\geq 1/2\quad \text{a.\,e.\ in $\Omega$.}
\end{align}
Finally, taking again $\tilde c$ smaller if necessary (note that this smallness will only depend on $p,q$ and $\Omega$), we get using (\ref{main.det.global.i}), (\ref{decay.M.i}), and (\ref{M.k.in.l.2})
\begin{align}&\prod_{k=1}^{m}\left\|\frac{1}{\det\nabla \phi_k}\right\|_{L^{\infty}}\leq \prod_{k=1}^{m}\frac{1}{1-C\tau_0|M_k|^{1/2}}=\prod_{k=1}^{m}\left(1+\frac{C\tau_0|M_k|^{1/2}}{1-C\tau_0|M_k|^{1/2}}\right)\nonumber\\
\leq&\prod_{k=1}^{m}\left(1+2C\tau_0|M_k|^{1/2}\right)
\leq \exp\left(\sum_{k=1}^{m}2C\tau_0|M_k|^{1/2}\right)\leq 2.\label{estimate.prod.1.over.det}
\end{align}
\textit{Step 2 (properties of the limiting mapping).} In the remaining steps we will show that
\begin{align*}
\phi:=\lim_{m\rightarrow \infty}\phi_m\circ\cdots\circ\phi_1
\end{align*}
is well-defined and has all the desired properties, namely that $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ is a homeomorphism
with $\phi,\phi^{-1}\in W^{1,q}(\Omega;\Omega)$ for which (\ref{equation.det.gen}), (\ref{est.det.simpl}), and (\ref{est.psi.w.1.q}) hold.
\textit{Step 2.1.} We first prove that $\{\nabla(\phi_m\circ\cdots\circ \phi_1)\}_{m\in \mathbb{N}}$ is a Cauchy sequence in $L^{q}.$
First for every integer $n$,
\begin{align*}
&\|\nabla (\phi_{n+1} \circ \ldots \circ \phi_1)-\nabla (\phi_n \circ \ldots \circ \phi_1)\|_{L^{q}}^q
\\
&\leq \int_{\Omega}\left|\nabla \phi_{n+1}(\phi_n\circ \ldots \circ \phi_1)-\operatorname{Id}\right|^q
\left|\nabla (\phi_{n}\circ \ldots \circ \phi_1)\right|^q~dx
\\
&\leq \int_{\Omega}\left|\nabla \phi_{n+1}(\phi_n\circ \ldots \circ \phi_1)-\operatorname{Id}\right|^q
\prod_{k=1}^n\left|\nabla \phi_{k}(\phi_{k-1}\circ\ldots\circ \phi_1)\right|^q~dx
\\
&=\int_{\Omega}\left|\nabla \phi_{n+1}(\phi_n\circ \ldots \circ \phi_1)-\operatorname{Id}\right|^q
\left(\prod_{k=1}^n\frac{\left|\nabla \phi_{k}(\phi_{k-1}\circ\ldots\circ \phi_1)\right|^q}
{\det \nabla \phi_k (\phi_{k-1}\circ \ldots \circ \phi_1)}\right)
\det \nabla (\phi_{n}\circ \ldots \circ \phi_1)~dx
\\
&\overset{\eqref{main.nabla.phi.bounded.by.det.i},\eqref{main.W.1.p.i}}{\leq~~~~~~}
\int_{\Omega}\left|\nabla \phi_{n+1}
(\phi_n\circ \ldots \circ \phi_1)-\operatorname{Id}\right|^q\left(\prod_{k=1}^nC(1+C\tau_0)^{q-1}\right)
\det \nabla (\phi_{n}\circ \ldots \circ \phi_1)~dx
\\
&=\int_{\Omega}\left|\nabla \phi_{n+1}-\operatorname{Id}\right|^qC^{n} (1+C\tau_0)^{n(q-1)}~dx
\\
&\overset{\eqref{main.W.1.p.i}}{\leq\,}
(C\tau_0)^q|M_{n+1}|^{1/2}
C^{n} (1+C\tau_0)^{n(q-1)}
\\
&\overset{\eqref{tau.big.enough}}{\leq\,}
(C\tau_0)^q|M_{n+1}|^{1/2}
(1+\tau_0)^{n(1+\lambda)(q-1)}
\\
&\overset{\eqref{decay.M.i}}{\leq\,}
(C\tau_0)^q\|2f\|_{L^p}^{p/2}(1+\tau_0)^{n[(1+\lambda)(q-1)-(p-1)/2]}.
\end{align*}
Taking the $q$-th root in the previous inequality we get that, for any $m\geq n+1$,
\begin{align}
&\|\nabla (\phi_{m} \circ \ldots \circ \phi_1)-\nabla (\phi_n \circ \ldots \circ \phi_1)\|_{L^{q}}\nonumber\\
\leq &\sum_{k=n}^{m-1}\|\nabla (\phi_{k+1} \circ \ldots \circ \phi_1)-\nabla (\phi_k \circ \ldots \circ \phi_1)\|_{L^{q}}\nonumber\\
\leq & C\tau_0\|2f\|_{L^p}^{p/(2q)}\sum_{k=n}^{m-1}(1+\tau_0)^{k[(1+\lambda)(q-1)-(p-1)/2]/q}\label{est.nabla.phi.m.W.1.q}
\end{align}
which proves that
$\{\nabla(\phi_m\circ\cdots\circ \phi_1)\}_{m\in \mathbb{N}}$ is a Cauchy sequence in $L^{q}$ since by (\ref{relation.q.p}) we know that
\begin{align*}
(1+\lambda)(q-1)-(p-1)/2<0.
\end{align*}
\textit{Step 2.2.}
Since $\phi_k=\operatorname{id}$ on $\partial \Omega$ for every $k$, we deduce from the Poincar\'e inequality and from (\ref{est.nabla.phi.m.W.1.q}) that
$(\phi_m\circ\cdots\circ \phi_1)_{m\in \mathbb{N}}$ is a Cauchy sequence in $W^{1,q}(\Omega)$. Hence $\phi_m\circ\cdots\circ\phi_1$ converges to some $\phi$ in $W^{1,q}$ as $m\rightarrow \infty$.
Using (\ref{est.nabla.phi.m.W.1.q}) with $n=0$ and again the Poincar\'e inequality, we directly get (\ref{est.psi.w.1.q}).
By (\ref{main.L.infty.i}), we have for every $m\geq n+1$
\begin{align*}
\|\phi_{m} \circ \ldots \circ \phi_1-\phi_n \circ \ldots \circ \phi_1\|_{L^{\infty}}
\leq &\sum_{k=n}^{m-1}\|\phi_{k+1} -\operatorname{id}\|_{L^{\infty}}
\leq \sum_{k=n}^{m-1}C\tau_0|M_{k+1}|^{1/2}.
\end{align*}
Using (\ref{M.k.in.l.2}), this implies that $(\phi_{m} \circ \ldots \circ \phi_1)_m$ is a Cauchy sequence in $L^{\infty}$. Thus, $\phi_{m} \circ \ldots \circ \phi_1$ converges to $\phi$ uniformly and we have $\phi\in C^{0}(\overline{\Omega})$.
\textit{Step 2.3 (existence and integrability of $\phi^{-1}$).}
First, for every $m\in \mathbb{N}$, we have (recall that we are working in 2 dimensions)
\begin{align*}
&||\nabla [(\phi_m \circ \ldots \circ \phi_1)^{-1}]||_{L^q}=||(\nabla (\phi_m \circ \ldots \circ \phi_1))^{-1}((\phi_m\circ\cdots\circ \phi_1)^{-1})||_{L^q}\\
=&\left(\int_{\Omega}\left|\frac{\operatorname{adj}(\nabla (\phi_m \circ \ldots \circ \phi_1))}{\det\nabla (\phi_m \circ \ldots \circ \phi_1)}\right|^q((\phi_m\circ\cdots\circ \phi_1)^{-1})\,dx\right)^{1/q}
\\
=&\left(\int_{\Omega}\left|\frac{\nabla (\phi_m \circ \ldots \circ \phi_1)}{\det\nabla (\phi_m \circ \ldots \circ \phi_1)}\right|^q((\phi_m\circ\cdots\circ \phi_1)^{-1})\,dx\right)^{1/q}
\\
=&\left(\int_{\Omega}\frac{|\nabla (\phi_m \circ \ldots \circ \phi_1)|^q}{|\det\nabla (\phi_m \circ \ldots \circ \phi_1)|^{q-1}}\,dx\right)^{1/q}
\\
\leq& ||\nabla (\phi_m \circ \ldots \circ \phi_1)||_{L^q}\left(\prod_{i=1}^m\left\|\frac{1}{\det\nabla \phi_i}\right\|_{L^{\infty}}\right)^{(q-1)/q}.
\end{align*}
Using (\ref{estimate.prod.1.over.det}) we directly deduce from the previous inequality that the sequence $((\phi_m\circ \ldots \circ \phi_1)^{-1})_{m\in \mathbb{N}}$ is bounded in $W^{1,q}$ (uniformly in $m$) and hence converges weakly (up to a subsequence) to some $\xi\in W^{1,q}$. Moreover since trivially
\begin{align*}
\|\phi_j^{-1}-\operatorname{id}\|_{L^{\infty}}=\|\phi_j-\operatorname{id}\|_{L^{\infty}}
\end{align*}
proceeding exactly as in Step 2.2 we see that
$\xi\in C^0(\overline{\Omega})$
and
\begin{align*}
\|(\phi_{m} \circ \ldots \circ \phi_1)^{-1}-\xi\|_{L^{\infty}}\rightarrow 0\quad \text{as $m\rightarrow \infty.$}
\end{align*}
Moreover from the uniform convergence we get that, for every $x\in \overline{\Omega}$,
\begin{align*}
x=\lim_{m\rightarrow \infty}(\phi_{m} \circ \ldots \circ \phi_1)\circ(\phi_{m} \circ \ldots \circ \phi_1)^{-1}(x)=(\phi\circ \xi)(x)\end{align*}
and similarly $x=(\xi\circ\phi)(x)$. Hence $\xi=\phi^{-1}$ and $\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ is a homeomorphism.
Finally, since for every $m$ we have $\phi_{m} \circ \ldots \circ \phi_1=\operatorname{id}$ on $\partial \Omega$, passing to the limit we get
\begin{align*}
\phi=\operatorname{id}\quad \text{on $\partial \Omega.$}
\end{align*}
\textit{Step 2.4 (proving the assertions about $\det\nabla \phi$).} We now show that $\det\nabla \phi\geq f$ in $\Omega$ together with (\ref{est.det.simpl}), where, as always, the inequalities are understood in the weak sense for $q<2$ and in the pointwise a.\,e.\ sense for $q\geq 2$.
We first claim that we have a.\,e.\ in $\Omega$
\begin{align}\label{det.up.to.m}
\det\nabla(\phi_m\circ\cdots\circ \phi_1)\geq f\chi_{\Omega\setminus A_m}
\end{align}
where we recall that
\begin{align*}
A_m= \bigcap_{0\leq k\leq m-1}
\left\{ \frac{ f}{\det \nabla (\phi_{k}\circ \ldots \circ \phi_1)}\geq \frac{1}{2}\right\}
\end{align*}
with the convention that $A_1=\{ f\geq 1/2\}$ and $A_0=\Omega.$
Indeed noticing that, by the very definition of the sets $A_m$,
$$\det
\nabla (\phi_{i-1}\circ\cdots\circ \phi_1)>2 f\quad \text{on $A_{i-1}\setminus A_i$},$$
and using (\ref{det.nabla.u.m....u.1.geg.half}),
we get that, a.\,e.\ in $\Omega,$ and for every $1\leq i\leq m$,
\begin{align*}
\det\nabla(\phi_m\circ\cdots\circ \phi_1)&=\det\nabla(\phi_m\circ\cdots\circ \phi_i)(\phi_{i-1}\circ\cdots\circ \phi_1)\cdot \det\nabla (\phi_{i-1}\circ\cdots\circ\phi_1)\\
&\geq \det\nabla(\phi_m\circ\cdots\circ \phi_i)(\phi_{i-1}\circ\cdots\circ \phi_1)\cdot 2f \chi_{A_{i-1}\setminus A_i}\\
&\geq f \chi_{A_{i-1}\setminus A_i},
\end{align*}
from which (\ref{det.up.to.m}) directly follows.
Moreover, using (\ref{definition.M.j}), (\ref{decay.M.i}) and (\ref{estimate.prod.1.over.det}), we obtain
\begin{align}\label{A.i.tends.to.zero}|A_i|=|\phi_1^{-1}\circ\cdots\circ \phi_{i-1}^{-1}(M_i)|\leq 2|M_i|\rightarrow 0\quad \text{as $i\rightarrow \infty$}.
\end{align}
On one hand, recalling that $\phi_m\circ\cdots\circ\phi_1$ converges strongly to $\phi$ both in $W^{1,q}$ and $L^{\infty}$ we get that, for every $\eta\in C^{\infty}_{cpt}(\Omega)$,
\begin{align*}\int_{\Omega}\det[\nabla(\phi_m\circ\cdots\circ\phi_1)]\eta ~dx &=
-\frac{1}{2}\int_\Omega \langle\operatorname{adj} \nabla(\phi_m\circ\cdots\circ\phi_1) \cdot (\phi_m\circ\cdots\circ\phi_1), \nabla \eta\rangle ~dx
\\
&\rightarrow -\frac{1}{2}\int_\Omega \langle\operatorname{adj} \nabla\phi \cdot \phi, \nabla \eta \rangle ~dx\quad \text{as $m\rightarrow \infty$}
\end{align*} or, equivalently, using our notation,
\begin{align*}\mathcal{J}_{\phi_m\circ\cdots\circ\phi_1}[\eta]\rightarrow \mathcal{J}_{\phi}[\eta]\quad \text{as $m\rightarrow \infty$}.
\end{align*}
On the other hand, using (\ref{det.up.to.m}) and (\ref{A.i.tends.to.zero}) we obtain for every nonnegative $\eta \in C^{\infty}_{cpt}(\Omega)$,
\begin{align*}\int_{\Omega}\det[\nabla(\phi_m\circ\cdots\circ\phi_1)]\eta~dx\geq \int_{\Omega}f\chi_{\Omega\setminus A_m}\eta ~dx\rightarrow \int_{\Omega}f\eta ~dx.
\end{align*}
Combining the two previous relations we get that (in the distributional sense for $q<2$ and in the pointwise a.\,e.\ sense for $q\geq 2$)
\begin{align*}
\det\nabla \phi\geq f.
\end{align*}
Finally (\ref{est.det.simpl}) is a direct consequence of (\ref{geq.one.half}). This concludes the proof.
\end{proof}
With the help of the previous result we now establish Theorem \ref{L.p}.
\begin{proof}[Proof of Theorem \ref{L.p}]
\textit{Step 1 (preliminaries).}
Fix $\delta>0$ small enough so that (recall that by assumption $\int_\Omega f<|\Omega|$)
\begin{align}
(1+\delta)\int_{\Omega}f~dx<|\Omega|(1-\delta)\label{choice.delta}
\end{align}
and let (extending $f$ by $0$ outside $\Omega$)
\begin{align*}
f_{\epsilon}:=(1+\delta)\rho_{\epsilon}\ast f+l_{\epsilon}
\end{align*}
where $\rho_{\epsilon}$ is the usual convolution kernel and
where $l_{\epsilon}\in \mathbb{R}$ is chosen so that
\begin{align}
\int_{\Omega}f_{\epsilon}\,dx=|\Omega|\label{int.cond.f.epsilon}.
\end{align}
Note that, since $\int_{\Omega}\rho_{\epsilon}\ast f\,dx\leq \int_{\Omega}f\,dx$, by (\ref{choice.delta}) we have
\begin{align}\label{l.epsilon.geg.delta}l_{\epsilon}\geq \delta.
\end{align}
Hence, since $f_{\epsilon}\in C^{\infty}(\overline{\Omega}),$ since $f_{\epsilon}\geq l_{\epsilon}\geq \delta>0$ and since the boundary of $\Omega$ is $C^{1,1}$, by (\ref{int.cond.f.epsilon}) there exists (see Theorem 5 in \cite{Dac-Moser}) a map $\psi_{\epsilon}\in \operatorname{Diff}^{1,1}(\overline{\Omega};\overline{\Omega})$ satisfying
\begin{subequations}
\label{equation.for.psi.epsilon}
\begin{align}
\det\nabla \psi_{\epsilon}&=f_{\epsilon} &&\text{in }\Omega,
\\
\psi_{\epsilon}&=\operatorname{id} &&\text{on }\partial \Omega.
\end{align}
\end{subequations}
Define the measurable set $A_{\epsilon}\subset \Omega$ by
\begin{align*}A_{\epsilon}:=\{x\in \Omega: f_{\epsilon}(x)\leq (1+\delta)f(x)\}.
\end{align*}
Using (\ref{l.epsilon.geg.delta}) and the definition of $f_{\epsilon}$ (note that on $A_{\epsilon}$ one has $(1+\delta)(f-\rho_{\epsilon}\ast f)\geq l_{\epsilon}\geq \delta$, while $\rho_{\epsilon}\ast f\rightarrow f$ in $L^1(\Omega)$) we deduce that
\begin{align}\label{A.epsilon.goes.to.0}
|A_{\epsilon}|\rightarrow 0\quad \text{as $\epsilon\rightarrow 0$.}
\end{align}
\textit{Step 2.}
We claim that, for every $\epsilon>0$ small enough, we can find a homeomorphism $\varphi_{\epsilon}:\overline{\Omega}\rightarrow \overline{\Omega}$ with
$\varphi_{\epsilon},\varphi_{\epsilon}^{-1}\in W^{1,q}(\Omega;\Omega)$ satisfying
\begin{subequations}
\label{equation.for.varphi.epsilon}
\begin{align}
\det\nabla \varphi_{\epsilon}
&\geq
\frac{f(\psi_{\epsilon}^{-1})}{f_{\epsilon}(\psi_{\epsilon}^{-1})}\chi_{A_{\epsilon}}(\psi_{\epsilon}^{-1})&& \text{in }\Omega,
\\
\varphi_{\epsilon}&=\operatorname{id} && \text{on }\partial \Omega,
\end{align}
\end{subequations}
and
\begin{align}\label{borne.inf.det.varphi.eps}
\det\nabla \varphi_{\epsilon}&\geq \frac{1}{1+\delta} &&\text{in }\Omega,
\end{align}
where both inequalities are understood in the pointwise a.\,e.\ sense for $q\geq 2$ and in the weak sense for $q<2$.
Indeed using $\psi_{\epsilon}$ as a change of variables, recalling that $f_{\epsilon}\geq \delta$ in $\Omega$ and using (\ref{A.epsilon.goes.to.0}) we get
\begin{align*}
\int_{\Omega}\left|\frac{f(\psi_{\epsilon}^{-1})}{f_{\epsilon}(\psi_{\epsilon}^{-1})}\chi_{A_{\epsilon}}(\psi_{\epsilon}^{-1})\right|^p~dx
&=\int_{A_{\epsilon}}\frac{|f|^p}{|f_{\epsilon}|^{p-1}}~dx\\
&\leq \frac{1}{\delta^{p-1}}\|f\|_{L^p(A_{\epsilon})}^p\rightarrow 0\quad \text{as $\epsilon\rightarrow 0$.}
\end{align*}
Hence we can use Proposition \ref{proposition.assuming.f.L.p.small} (with the right-hand side equal to $\frac{f(\psi_{\epsilon}^{-1})}{f_{\epsilon}(\psi_{\epsilon}^{-1})}\chi_{A_{\epsilon}}(\psi_{\epsilon}^{-1})$) and get the claim. Note that in particular (\ref{borne.inf.det.varphi.eps}) follows directly from (\ref{est.det.simpl}) and (\ref{A.epsilon.goes.to.0}).
\textit{Step 3 (conclusion).}
We claim that, for $\epsilon$ small enough,
\begin{align*}
\phi=\varphi_{\epsilon}\circ \psi_{\epsilon}
\end{align*}
satisfies all the desired properties.
First,
$\phi:\overline{\Omega}\rightarrow \overline{\Omega}$ is a homeomorphism, coincides with the identity on $\partial \Omega$, and satisfies $\phi,\phi^{-1}\in W^{1,q}(\Omega;\Omega).$
It remains to show
\begin{align}\label{det.phi.step.1.3}
\det\nabla \phi\geq f \quad\quad\text{in $\Omega$},
\end{align}
where the inequality is required to hold pointwise almost everywhere for $q\geq 2$ and where the inequality is understood in the weak sense for $q<2$.
We first prove (\ref{det.phi.step.1.3}) when $q\geq 2$ (in order to expose the idea more clearly).
On one hand, using (\ref{equation.for.psi.epsilon}) and (\ref{equation.for.varphi.epsilon}), we get a.\,e.\ in $\Omega$
\begin{align*}
\det\nabla\phi
=\det\nabla\varphi_{\epsilon}(\psi_{\epsilon})\det\nabla\psi_{\epsilon}
\geq f\chi_{A_{\epsilon}}.
\end{align*}
On the other hand, by definition of $A_{\epsilon}$ and by (\ref{borne.inf.det.varphi.eps}) we get
a.\,e.\ in $\Omega$
\begin{align*}
\det\nabla\phi
=\det\nabla\varphi_{\epsilon}(\psi_{\epsilon})\det\nabla\psi_{\epsilon}
\geq \frac{f_{\epsilon}}{1+\delta}
\geq f\chi_{\Omega\setminus A_{\epsilon}}.
\end{align*}
The combination of the previous two inequalities gives (\ref{det.phi.step.1.3}).
We finally prove (\ref{det.phi.step.1.3}) when $q<2$, namely
\begin{align*} \mathcal{J}_{\phi}[\eta]=-\frac{1}{2}\int_{\Omega}\langle \operatorname{adj}\nabla \phi\cdot\phi,\nabla \eta\rangle~dx\geq \int_{\Omega}f\eta ~dx\quad\text{for every $\eta\in C^{\infty}_{cpt}(\Omega;[0,\infty))$}.
\end{align*}
By classical properties of $\operatorname{adj}$ and changing the variables we get
\begin{align*}
&\int_{\Omega}\langle\operatorname{adj}\nabla\phi\cdot \phi,\nabla \eta\rangle ~dx
\\
=&\int_{\Omega}\langle\operatorname{adj}\left[\nabla\varphi_{\epsilon}(\psi_{\epsilon})\cdot\nabla \psi_{\epsilon}\right]\cdot \varphi_{\epsilon}(\psi_{\epsilon}),\nabla \eta\rangle ~dx\\
=&\int_{\Omega}\frac{\langle\operatorname{adj}\left[\nabla\varphi_{\epsilon}\cdot\nabla \psi_{\epsilon}(\psi_{\epsilon}^{-1})\right]\cdot \varphi_{\epsilon},\nabla \eta(\psi_{\epsilon}^{-1})\rangle}{\det\nabla \psi_{\epsilon}(\psi_{\epsilon}^{-1})} ~dx\\
=&\int_{\Omega}\frac{\langle\operatorname{adj}\left[\nabla\psi_{\epsilon}(\psi_{\epsilon}^{-1})\right]\cdot\operatorname{adj}
\left[\nabla\varphi_{\epsilon}\right]\cdot \varphi_{\epsilon},\nabla \eta(\psi_{\epsilon}^{-1})\rangle}{\det\nabla \psi_{\epsilon}(\psi_{\epsilon}^{-1})} ~dx\\
=&\int_{\Omega}\frac{\langle\operatorname{adj}\left[\nabla\psi_{\epsilon}(\psi_{\epsilon}^{-1})\right]\cdot\operatorname{adj}
\left[\nabla\varphi_{\epsilon}\right]\cdot \varphi_{\epsilon},(\nabla\psi_{\epsilon})^t(\psi_{\epsilon}^{-1})\cdot\nabla [\eta\circ\psi_{\epsilon}^{-1}]\rangle}{\det\nabla \psi_{\epsilon}(\psi_{\epsilon}^{-1})} ~dx
\\
=&\int_{\Omega}\frac{\langle\operatorname{adj}
\left[\nabla\varphi_{\epsilon}\right]\cdot \varphi_{\epsilon},\operatorname{adj}\left[\nabla\psi_{\epsilon}(\psi_{\epsilon}^{-1})\right]^t\cdot
(\nabla\psi_{\epsilon})^t(\psi_{\epsilon}^{-1})\cdot\nabla [\eta\circ\psi_{\epsilon}^{-1}]\rangle}{\det\nabla \psi_{\epsilon}(\psi_{\epsilon}^{-1})} ~dx
\\
=&\int_{\Omega}\langle\operatorname{adj}
\left[\nabla\varphi_{\epsilon} \right]\cdot \varphi_{\epsilon}, \nabla[\eta\circ\psi_{\epsilon}^{-1}]\rangle ~dx.
\end{align*}
Then, for every $\rho\in C^{\infty}(\overline{\Omega};[0,1])$ we get, using (\ref{equation.for.varphi.epsilon}) and (\ref{borne.inf.det.varphi.eps}) in the weak sense and then changing the variables,
\begin{align*}
&-\frac{1}{2}\int_{\Omega}\langle\operatorname{adj}
\left[\nabla\varphi_{\epsilon}\right]\cdot \varphi_{\epsilon}, \nabla[\eta\circ\psi_{\epsilon}^{-1}]\rangle ~dx
\\
=&-\frac{1}{2}\int_{\Omega}\langle\operatorname{adj}
\left[\nabla\varphi_{\epsilon} \right]\cdot \varphi_{\epsilon}, \nabla[(\rho\eta)\circ\psi_{\epsilon}^{-1}]\rangle ~dx-\frac{1}{2}\int_{\Omega}\langle\operatorname{adj}
\left[\nabla\varphi_{\epsilon} \right]\cdot \varphi_{\epsilon}, \nabla[((1-\rho)\eta)\circ\psi_{\epsilon}^{-1}]\rangle ~dx
\\
\geq &\int_{\Omega} \left(\frac{f}{\det\nabla \psi_{\epsilon}}\rho\eta\chi_{A_{\epsilon}}
\right)\circ\psi_{\epsilon}^{-1}~dx+\int_{\Omega} \frac{1}{1+\delta}((1-\rho)\eta)\circ\psi_{\epsilon}^{-1}~dx
\\
=&\int_{\Omega}f\rho\eta\chi_{A_{\epsilon}}~dx+\int_{\Omega} \frac{f_{\epsilon}}{1+\delta}(1-\rho)\eta~dx.
\end{align*}
By approximation, the above inequality remains valid for $\rho=\chi_{A_{\epsilon}}$. Hence, combining the last two computations and using the definition of $A_{\epsilon}$, we get
\begin{align*}
&-\frac{1}{2}\int_{\Omega}\langle\operatorname{adj}\nabla\phi\cdot \phi,\nabla \eta\rangle ~dx
\geq \int_{\Omega}f\chi_{A_{\epsilon}}\eta~dx+\int_{\Omega} \frac{f_{\epsilon}}{1+\delta}\chi_{\Omega\setminus A_{\epsilon}}\eta~dx\\
\geq &\int_{\Omega}f\chi_{A_{\epsilon}}\eta~dx+\int_{\Omega} f\chi_{\Omega\setminus A_{\epsilon}}\eta~dx
=\int_{\Omega}f\eta ~dx,
\end{align*}
which proves (\ref{det.phi.step.1.3}) and ends the proof.
\end{proof}
\section{The prescribed Jacobian in the $L^{\infty}$ case}
In this section we prove Theorem \ref{theorem.det.nabla.u.geq.f}. The idea is the following: First, by convolution and classical
results for the prescribed Jacobian equation in the smooth case, we obtain a
smooth map $\varphi$ such that $\varphi=\operatorname{id}$ on $\partial \Omega$
and $\det\nabla \varphi> f$ outside a set $M$ of small measure.
Postcomposing $\varphi$ with the stretching map (from Theorem \ref{BiLipschitzMapTheorem}) $\phi_{\tau,\varphi(M)}$
for some well-chosen $\tau$ then yields the desired solution to the prescribed
Jacobian inequality.
\begin{proof}[Proof of Theorem \ref{theorem.det.nabla.u.geq.f}]~
\textit{Step 1 (sharp regularity).} A very simple example shows that there exists a nonnegative function $f\in L^{\infty}(\Omega)$ with $\int_{\Omega}f<|\Omega|$ such that no solution to (\ref{Jacobian.inequality.theorem}) can be of class $C^1$:
Let $M\subset \Omega$ be an open dense (in $\Omega$) set with $|M|<|\Omega|/2$
and let $f=2\chi_{M}.$ We argue by contradiction and assume that there exists a
solution $\phi\in C^1$ of (\ref{Jacobian.inequality.theorem}). Then we would have
$\det\nabla \phi\geq 2$ a.\,e.\ in $M$. By continuity of $\det\nabla \phi$ and the fact that $M$ is open and dense, the previous inequality would imply $\det\nabla \phi\geq 2$ everywhere in $\Omega$, which contradicts $\int_{\Omega}\det\nabla \phi\,dx=|\Omega|$.
Let us now turn to the existence part of our theorem.
\textit{Step 2 (preliminaries).} Define
\begin{align*}
\beta:=\frac{|\Omega|-\int_{\Omega}f\,dx}{|\Omega|}\in (0,1].
\end{align*}
Take $\tau_{0}$ large enough so that
\begin{equation} \label{tau.assez.grand}
\frac{(1+\tau_{0})\beta}{2}\geq \|f\|_{L^{\infty}}.
\end{equation}
Next choose $\epsilon_{0}>0$ small enough so that
\begin{equation} \label{epsilon.assez.petit}
\left(1-C\tau_{0}\sqrt{(\|f\|_{L^{\infty}}+1)\epsilon_{0}}\right)
\left(\frac{\beta}{2}+y\right)\geq y
\quad \text{for every $y\in [0,\|f\|_{L^{\infty}}];$}
\end{equation}
taking $\epsilon_{0}$ even smaller we can moreover assume that
\begin{equation}
\label{volume.assez.petit}
(\|f\|_{L^{\infty}}+1)\epsilon_{0},\sqrt{(\|f\|_{L^{\infty}}+1)\epsilon_{0}}\tau_0\leq c,
\end{equation}
where $C,c>0$ are the constants (depending only on $\Omega$) in the statement of
Theorem~\ref{BiLipschitzMapTheorem}.
\textit{Step 3 (approximation).}
Extending $f$ by $1$ outside of $\Omega$, mollifying the resulting function,
and adding an appropriate constant, it is elementary to construct
a sequence $f_{\nu}\in C^{\infty}(\overline{\Omega})$, $\nu\in \mathbb{N}$, such
that $\int_{\Omega}f_{\nu}\,dx=|\Omega|,$
\begin{equation}
\label{f.nu.sandwich}\beta/2\leq f_{\nu}\leq \|f\|_{L^{\infty}}+1\quad
\text{in $\Omega$},
\end{equation}
and
\begin{align*}
f_{\nu}(x)\rightarrow f(x)+\beta\quad
\text{for a.\,e.\ }x\in \Omega.
\end{align*}
Since the last formula implies convergence in measure, there exist $\nu_0$ and a
measurable set $A\subset \Omega$ such that $|A|\leq \epsilon_0$ and
\begin{equation}
\label{f.N.plus.grand.que.f}
f_{\nu_0}\geq \frac{\beta}{2}+f\quad
\text{a.\,e.\ in $\Omega\setminus A.$}
\end{equation}
\textit{Step 4 (conclusion).}
By a classical result for the prescribed Jacobian equation (see Theorem 5 in
\cite{Dac-Moser}; recall that the boundary of $\Omega$ is of class $C^{1,1}$ and that $\int_{\Omega}f_{\nu_0}\,dx=|\Omega|$) there exists
$\varphi\in \operatorname{Diff}^{1,1}(\overline{\Omega};\overline{\Omega})$
so that
\begin{align*}
\det\nabla\varphi=f_{\nu_0}\quad \text{in $\Omega$} \quad \text{and}\quad
\varphi=\operatorname{id}\quad \text{on $\partial \Omega.$}
\end{align*}
Using (\ref{f.nu.sandwich}), we have
\begin{equation}
\label{estimate.volume}
|\varphi(A)|
=\int_{A}\det\nabla \varphi \,dx
\leq (\|f\|_{L^{\infty}}+1)|A|
\leq (\|f\|_{L^{\infty}}+1)\epsilon_{0}.
\end{equation}
Hence, from the previous inequality and (\ref{volume.assez.petit}),
we can apply Theorem \ref{BiLipschitzMapTheorem} with
$M=\varphi(A)$ and $\tau=\tau_0$ and get a bi-Lipschitz mapping
$\psi:\overline{\Omega}\rightarrow \overline{\Omega}$ such that
$\psi=\operatorname{id}$ on $\partial \Omega$ and
\begin{align}
\det \nabla \psi&\geq 1+\tau_{0}\quad
\text{a.\,e.\ in $\varphi(A),$}\label{det.on.vaphi.K}
\\
\det \nabla \psi&\geq 1-C|\varphi(A)|^{1/2}\tau_{0}\quad
\text{a.\,e.\ in $\Omega.$}\label{det.varphi.global}
\end{align}
We claim that $\phi:=\psi\circ \varphi$ has all the desired properties. First we
obviously have $\phi=\operatorname{id}$ on $\partial \Omega.$ It remains to
show that
\begin{align*}
\det\nabla \phi=\det\nabla\psi(\varphi)\cdot f_{\nu_0}\geq f\quad \text{a.\,e.\ in $\Omega.$}
\end{align*}
First, using (\ref{det.on.vaphi.K}), (\ref{f.nu.sandwich}) and (\ref{tau.assez.grand}), we obtain that a.\,e.\ in $A$
\begin{align*}
\det\nabla\psi(\varphi)\cdot f_{\nu_0}\geq (1+\tau_{0})f_{\nu_0}\geq (1+\tau_{0})\frac{\beta}{2}\geq\|f\|_{L^{\infty}}\geq f.
\end{align*}
Finally, using (\ref{det.varphi.global}), (\ref{estimate.volume}),
(\ref{f.N.plus.grand.que.f}), and (\ref{epsilon.assez.petit}), we get that a.\,e.\ in $\Omega\setminus A$
\begin{align*}
\det\nabla\psi(\varphi)\cdot f_{\nu_0}
&\geq (1-C\tau_0\sqrt{|\varphi(A)|})f_{\nu_0}
\geq (1-C\tau_0\sqrt{(\|f\|_{L^{\infty}}+1)\epsilon_{0}})f_{\nu_0}\\
&\geq (1-C\tau_0\sqrt{(\|f\|_{L^{\infty}}+1)\epsilon_{0}})
\left(\frac{\beta}{2}+f\right)
\geq f,
\end{align*}
which ends the proof.
\end{proof}
\section{Bi-Lipschitz stretching of a given set with small
measure}\label{Section.Bi-Lip.Mapping}
In this section we establish Theorem \ref{BiLipschitzMapTheorem}. The proof
consists of two main parts. In the first part, contained in Sections \ref{sub.sect.Simplification.of.the.domain}, \ref{sub.sect.Simplification.of.the.set.M}, and \ref{sub.sect.Simplification.of.the.boundary.values}, we reduce the problem to the case $\Omega=(0,1)^2$ (see Lemma \ref{Lemma.simpl.domain}) with a compact subset $M\subset \Omega$ (see Lemma \ref{Lemma.simpl.set}); furthermore, we drop the requirement of identity boundary conditions, replacing it by a global preservation of the boundary (see Lemma \ref{lemma.simpl.boundary.values}). This way we will only be left with proving
the simplified version of the existence of stretching maps that is stated in Proposition \ref{Prop.Bi.Lip.simpl} below.
In the second part (see Sections \ref{SubSection.covering} and \ref{subsection.stretching}), we prove Proposition \ref{Prop.Bi.Lip.simpl}. To this end we first establish a covering result (see Lemma \ref{IntersectionFreeCovering}) based on results by Alberti, Cs\"ornyei, and Preiss \cite{Alberti}: Any compact set $K\subset (0,1)^2$ can be covered
by a finite number of strips generated by $1$-Lipschitz functions in the two axis
directions with the following two properties:
\begin{itemize}
\item The total width of the strips does not exceed $2\sqrt{|K|};$
\item The $x$-strips have pairwise disjoint interiors, and similarly for the $y$-strips.
\end{itemize}
With the help of this covering result we finally prove Proposition \ref{Prop.Bi.Lip.simpl} with an explicit formula for the stretching map.
\subsection{Simplification of the domain $\Omega$.}\label{sub.sect.Simplification.of.the.domain}
\begin{lemma}\label{Lemma.simpl.domain}
Theorem \ref{BiLipschitzMapTheorem} in the case of a general planar domain $\Omega\subset\mathbb{R}^2$ is a consequence of Theorem \ref{BiLipschitzMapTheorem} for the special case of the unit square $\Omega=(0,1)^2.$
\end{lemma}
\begin{proof}
The proof is based on two main ingredients, namely
\begin{itemize}
\item the fact that (\ref{main.boundary})-(\ref{main.det.global}) behave well under fixed $C^{1,1}$-diffeomorphisms, and
\item the fact that any $C^{1,1}$-domain $\Omega$ may be decomposed into finitely many subdomains $\Omega_i$, each of which may be mapped onto the unit square $(0,1)^2$ by a $C^{1,1}$-diffeomorphism.
\end{itemize}
So we assume that Theorem \ref{BiLipschitzMapTheorem} has been proven for
$\Omega=(0,1)^2$ and show how to deduce it for any bounded open set
$\Omega\subset \mathbb{R}^2$ with $C^{1,1}$ boundary. To this end we recall that
any bi-Lipschitz mapping $\psi$ in the plane satisfies
\begin{equation}
\label{control.area}
\frac{1}{L^2}|M|\leq |\psi(M)|\leq L^2|M|
\end{equation}
whenever
$|x-y|/L\leq |\psi(x)-\psi(y)|\leq L|x-y|$ holds for every $x,y.$
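For the reader's convenience we recall the quick argument behind (\ref{control.area}): the upper Lipschitz bound gives $|\nabla \psi|\leq L$ and hence $|\det\nabla \psi|\leq L^2$ a.\,e., so that by the area formula for injective Lipschitz maps
\begin{align*}
|\psi(M)|=\int_{M}|\det\nabla \psi|~dx\leq L^2|M|;
\end{align*}
the lower bound in (\ref{control.area}) follows by applying the same estimate to $\psi^{-1}$ and the set $\psi(M)$.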
\textit{Step 1.} We assume first that there exists a
bijection $\psi:[0,1]^2\rightarrow \overline{\Omega}$ such that $\psi\in
C^{1,1}([0,1]^2;\overline{\Omega})$ and $\psi^{-1}\in
C^{1,1}(\overline{\Omega};[0,1]^2)$ hold. We call such a set $\Omega$
\textit{$C^{1,1}$-equivalent to the unit square}.
Let $\tau>0$ and $M\subset \Omega$
with $|M|$ and $\sqrt{|M|}\tau$ small enough, which trivially implies (by
\eqref{control.area}) that $|\psi^{-1}(M)|$ and $\sqrt{|\psi^{-1}(M)|}\tau$ are
also small. Hence, by hypothesis applied to $2\tau$ in place of
$\tau$, there exists a bi-Lipschitz map $\widetilde{\phi}:[0,1]^2\rightarrow
[0,1]^2$ stretching the set $\psi^{-1}(M)\subset (0,1)^2,$ such that
\begin{align}
\widetilde{\phi}&=\operatorname{id}\quad
\text{on $\partial[0,1]^2,$}\label{simpl.domain.boundary}
\\
\|\widetilde{\phi}-\operatorname{id}\|_{W^{1,p}([0,1]^2)}&
\leq 2C\tau|\psi^{-1}(M)|^{1/(2p)}\quad
\text{for all $1\leq p\leq \infty,$}\label{simpl.domain.W.1.p}
\\
\|\widetilde{\phi}-\operatorname{id}\|_{L^\infty([0,1]^2)}&
\leq 2C\tau|\psi^{-1}(M)|^{1/2},
\label{simpl.domain.L.infty}
\\
|\nabla\widetilde{\phi}|&\leq C\det\nabla \widetilde{\phi}\quad\text{a.\,e.\ in $(0,1)^2,$}\label{simpl.domain.nabla.borne.par.det}\\
\det \nabla \widetilde{\phi}&\geq 1+2\tau\quad \text{a.\,e.\ in $\psi^{-1}(M),$}
\label{simpl.domain.det.on.M}
\\
\det \nabla \widetilde{\phi}&\geq 1-2C\tau|\psi^{-1}(M)|^{1/2}\quad
\text{a.\,e.\ in $(0,1)^2.$}
\label{simpl.domain.det.global}
\end{align}
We claim that
\begin{align*}
\phi:=\psi\circ \widetilde{\phi}\circ \psi^{-1}
\end{align*}
satisfies (\ref{main.boundary})-(\ref{main.det.global}).
Indeed, first, (\ref{main.boundary}) follows trivially from
(\ref{simpl.domain.boundary}).
In what follows $C_{\psi}$ will denote a generic constant depending only on
$\|\psi\|_{C^{1,1}}$ and $\|\psi^{-1}\|_{C^{1,1}}$ (and hence only on $\Omega$)
which may change from appearance to appearance.
Using (\ref{control.area}) and (\ref{simpl.domain.L.infty}) we get
\begin{align*}
\|\phi-\operatorname{id}\|_{L^{\infty}(\Omega)}\leq
C_{\psi}\|\widetilde{\phi}\circ \psi^{-1}-\psi^{-1}\|_{L^{\infty}(\Omega)}\leq
C_{\psi}|M|^{1/2}\tau,
\end{align*}
which proves (\ref{main.L.infty}).
Next note that
\begin{align*}
\nabla \phi-\operatorname{Id}
=\nabla (\psi \circ \widetilde{\phi} \circ \psi^{-1})-\operatorname{Id}
=&\nabla \psi (\widetilde{\phi} \circ \psi^{-1})
\cdot (\nabla \widetilde{\phi} (\psi^{-1})-\operatorname{Id})
\cdot \nabla \psi^{-1}
\\&
+(\nabla \psi (\widetilde{\phi} \circ \psi^{-1})-\nabla \psi (\psi^{-1}))
\cdot \nabla \psi^{-1}
\end{align*}
which yields, using (\ref{control.area}), (\ref{simpl.domain.W.1.p}), and
(\ref{simpl.domain.L.infty}),
\begin{align*}
\|\nabla \phi-\operatorname{Id}\|_{L^p(\Omega)}&
\leq C_{\psi}\|\nabla \widetilde{\phi}-\operatorname{Id}
\|_{L^p([0,1]^2)}
+C_{\psi}\| \widetilde{\phi}-\operatorname{id}
\|_{L^\infty([0,1]^2)}\\
&\leq C_{\psi}|M|^{1/(2p)}\tau,
\end{align*}
proving (\ref{main.W.1.p}).
Next, using (\ref{simpl.domain.nabla.borne.par.det}), we deduce, a.\,e.\ in $\Omega,$
\begin{align*}
|\nabla \phi|&\leq C_{\psi}|\nabla\widetilde{\phi}(\psi^{-1})|\leq C_{\psi}\det\nabla\widetilde{\phi}(\psi^{-1})\leq C_{\psi}\det\nabla\phi,
\end{align*}
which proves (\ref{main.nabla.phi.bounded.by.det}).
Then
\begin{align*}
&\det \nabla (\psi \circ \widetilde{\phi} \circ \psi^{-1})
=\det \nabla \psi (\widetilde{\phi} \circ \psi^{-1})
\det \nabla \widetilde{\phi} (\psi^{-1})
\det\nabla \psi^{-1}
\\
=&\left(\det \nabla \psi (\widetilde{\phi} \circ \psi^{-1})
-\det \nabla \psi (\psi^{-1})
\right)
\det \nabla \widetilde{\phi} (\psi^{-1})
\det\nabla \psi^{-1}
+\det \nabla \widetilde{\phi} (\psi^{-1})
\\
\geq& -C_\psi |\widetilde{\phi}\circ \psi^{-1}-\psi^{-1}|
+\det \nabla \widetilde{\phi} (\psi^{-1})
\\
\geq& -C_\psi |M|^{1/2} \tau
+\det \nabla \widetilde{\phi} (\psi^{-1}),
\end{align*}
where we have used $\det \nabla \psi (\psi^{-1})\det\nabla \psi^{-1}=1$ (obtained by differentiating $\psi\circ\psi^{-1}=\operatorname{id}$) for the second equality, the Lipschitz continuity of $\nabla \psi$ together with the boundedness of $\det \nabla \widetilde{\phi}$ and $\nabla \psi^{-1}$ for the first inequality, and (\ref{simpl.domain.L.infty}) for
the last inequality.
Hence using (\ref{simpl.domain.det.on.M}) and (\ref{simpl.domain.det.global})
we get from the last inequality
\begin{align*}
\det\nabla \phi\geq
\left\{\begin{array}{cl}1+\tau(2-C_{\psi}|M|^{1/2})&\text{a.\,e.\ in $M$}
\\
1-C_{\psi}|M|^{1/2}\tau&\text{a.\,e.\ in $\Omega\setminus M.$}\end{array} \right.
\end{align*}
Taking $|M|$ small enough so that $C_{\psi}|M|^{1/2}\leq 1$
gives (\ref{main.det.on.M}) and (\ref{main.det.global}).
\textit{Step 2.} We now prove the lemma in the general case.
\textit{Step 2.1.} Let $\tau>0$ and $M\subset \Omega$ with
$|M|$ and $\sqrt{|M|}\tau$ small enough.
Using Lemma \ref{lemma:decomposition.domain}
there
exist $N$ open sets $\Omega_1,\ldots,\Omega_N$ every one of which is
$C^{1,1}$-equivalent to the unit square and such that
\begin{align*}
\overline{\Omega}=\bigcup_{i=1}^{N}\overline{\Omega}_i\quad \text{and}\quad \Omega_i\cap \Omega_j=\emptyset\quad \text{for every $1\leq i<j\leq N$}.
\end{align*}
Using Step 1, for any $1\leq i\leq N$ there
exists a bi-Lipschitz mapping $\phi_{i}$ from $\overline{\Omega}_{i}$ to
$\overline{\Omega}_{i}$ satisfying
(\ref{main.boundary})-(\ref{main.det.global}) for
\begin{align*}
M_i:=M\cap \Omega_i.
\end{align*}
We extend the $\phi_{i}$ by the identity outside $\Omega_i$, which obviously
implies that (\ref{main.W.1.p})-(\ref{main.det.global}) are satisfied on
$\Omega$ (and not only on $\Omega_{i}$). Summarizing, we have for every $1\leq
i\leq N$
\begin{align}
\phi_{i}&=\operatorname{id}\quad \text{on }\overline{\Omega}\setminus \Omega_{i}
\quad\text{and}\quad \phi_i(\Omega_i)=\Omega_i,
\label{lemma.id.ouside.Omega.i}
\\
\|\phi_{i}-\operatorname{id}\|_{W^{1,p}(\Omega)}&\leq C|M_{i}|^{1/(2p)}
\tau
\quad \text{for every $1\leq p\leq \infty,$}\label{lemma.W.1.p}
\\
\|\phi_{i}-\operatorname{id}\|_{L^{\infty}(\Omega)}&\leq C|M_{i}|^{1/2}\tau,\label{lemma.L.infty}
\\
|\nabla\phi_{i}|&\leq C\det\nabla \phi_i\quad \text{a.\,e.\ in $\Omega$},\label{lemma.nabla.bounded.by.det}\\
\det \nabla \phi_{i}&\geq 1+\tau\quad \text{a.\,e.\ in $M_{i},$}\label{lemma.det.on.M.i}
\\
\det \nabla \phi_{i}&\geq 1-C|M_{i}|^{1/2}\tau\quad \text{a.\,e.\ in $\Omega.$}
\label{lemma.det.global}
\end{align}
\textit{Step 2.2 (conclusion).} We claim that
\begin{align*}
\phi:=\phi_{N}\circ\ldots\circ\phi_1
\end{align*}
satisfies all the properties stated in Theorem \ref{BiLipschitzMapTheorem}.
First we obviously have that $\phi$ is a bi-Lipschitz mapping from
$\overline{\Omega}$ to $\overline{\Omega}$
with $\phi=\operatorname{id}$ on $\partial\Omega.$
For every $1\leq i\leq N$, we have
\begin{align*}
\phi=\phi_i\quad \text{in }\Omega_i
\end{align*}
and hence for every $1\leq i\leq N$
\begin{align*}
\nabla \phi=\nabla\phi_i\quad \text{and}\quad \det\nabla \phi=\det\nabla\phi_i\quad \text{a.\,e.\ in $\Omega_i$.}
\end{align*}
Therefore, since $|M_i|\leq |M|$ and since $|\Omega\setminus \cup_{i=1}^N\Omega_i|=0$, we
directly get (\ref{main.W.1.p})-(\ref{main.det.global}) from (\ref{lemma.W.1.p})-(\ref{lemma.det.global}).
This ends the proof.
\end{proof}
\subsection{Simplification of the set $M$ on which we stretch}
\label{sub.sect.Simplification.of.the.set.M}
\begin{lemma}\label{Lemma.simpl.set}
In order to prove Theorem \ref{BiLipschitzMapTheorem} for some domain $\Omega\subset\mathbb{R}^2$, it is enough to prove it for compact subsets $M=K\subset \Omega.$
\end{lemma}
\begin{proof} The proof utilizes the inner regularity of the Lebesgue measure,
the weak lower semicontinuity of the norms $\|\cdot\|_{W^{1,p}}$, and the weak continuity of the determinant.
Let $\tau>0$ and $M\subset \Omega$ be a measurable set with $|M|$ and $\sqrt{|M|}\tau$ small enough.
We assume that Theorem \ref{BiLipschitzMapTheorem} has been proven in the case when the set $M$ that is to be stretched is a compact subset $K\subset\Omega$. By a limiting process, we show that Theorem \ref{BiLipschitzMapTheorem} then holds for any measurable set $M\subset\Omega$.
Recall that the constant $C$ appearing in (\ref{main.W.1.p}), (\ref{main.L.infty}), \eqref{main.nabla.phi.bounded.by.det}, and (\ref{main.det.global}) is independent of $K$ (and of $\tau$).
\textit{Step 1.} By inner regularity of the Lebesgue measure, we may
choose an increasing sequence of compact sets $K_{\nu}\subset M$ with
$\lim_{\nu\rightarrow \infty}|K_{\nu}|=|M|$.
For any $K_{\nu}$ we have by assumption a bi-Lipschitz map
$\phi_{\nu}=(\phi_{\nu})_{\tau,K_\nu}:\overline{\Omega}\rightarrow
\overline{\Omega}$ satisfying
\begin{align}
\phi_{\nu}&=\operatorname{id}\quad \text{on $\partial\Omega,$}
\label{lemma.set.boundary}
\\
\|\phi_{\nu}-\operatorname{id}\|_{W^{1,p}(\Omega)}&\leq C|K_{\nu}|^{1/(2p)}
\tau\quad
\text{for every $1\leq p\leq \infty,$}
\label{lemma.set.W.1.p}
\\
\|\phi_{\nu}-\operatorname{id}\|_{L^{\infty}(\Omega)}&\leq C|K_{\nu}|^{1/2}\tau,\label{lemma.set.L.infty}\\
|\nabla \phi_{\nu}|&\leq C\det\nabla\phi_{\nu}\quad
\text{a.\,e.\ in $\Omega$,}
\label{lemma.set.nabla.bounded.by.det.W.1.p}\\
\det \nabla \phi_\nu&\geq 1+\tau\quad
\text{a.\,e.\ in $K_{\nu},$}\label{lemma.set.det.on.M}
\\
\det \nabla \phi_{\nu}&\geq 1-C|K_{\nu}|^{1/2}\tau\quad
\text{a.\,e.\ in $\Omega.$}\label{lemma.set.det.global}
\end{align}
Taking the upper bound $c$ on $\sqrt{|M|}\tau$ in the assumptions of Theorem \ref{BiLipschitzMapTheorem} smaller if necessary, we deduce from
(\ref{lemma.set.det.global}) that
\begin{align*}
\det\nabla \phi_{\nu}\geq 1/2\quad \text{a.\,e.\ in $\Omega.$}
\end{align*}
\textit{Step 2.}
By (\ref{lemma.set.W.1.p}) with $p=\infty$ and
the last inequality, the maps $\phi_{\nu},\phi_{\nu}^{-1}$ are
uniformly bounded in $W^{1,\infty}(\Omega),$ namely
\begin{equation}
\label{conv.W.1.infty.phi.n}
\|\phi_{\nu}-\operatorname{id}\|_{W^{1,\infty}(\Omega)}
\leq C\tau
\end{equation}
and
\begin{align*}
\sup_{\nu}\|\phi_{\nu}^{-1}\|_{W^{1,\infty}}<\infty.
\end{align*}
Hence, up to a subsequence we know that
\begin{align*}
\phi_{\nu}\stackrel{\ast}{\rightharpoonup}\phi\quad
\text{in $W^{1,\infty}(\Omega)$ as $\nu\rightarrow \infty$}
\end{align*}
holds for some bi-Lipschitz map $\phi$ from $\overline{\Omega}$ to
$\overline{\Omega}.$
By compactness of the embedding of $W^{1,\infty}$ in $C^0$,
we see that $\phi_{\nu}$ converges to $\phi$ also in $C^0(\overline{\Omega}).$
Hence, using (\ref{lemma.set.boundary}) we get \eqref{main.boundary}, namely
\begin{align*}
\phi=\operatorname{id}\quad \text{on $\partial \Omega.$}
\end{align*}
Note also that by the weak lower semicontinuity we have
\begin{equation}
\label{weak.lower.semi.cont}
\|\phi-\operatorname{id}\|_{W^{1,p}(\Omega)}
\leq \liminf_{\nu\rightarrow \infty}
\|\phi_{\nu}-\operatorname{id}\|_{W^{1,p}(\Omega)}\quad
\text{for every $1\leq p\leq \infty.$}
\end{equation}
Hence, from (\ref{lemma.set.W.1.p}), (\ref{weak.lower.semi.cont}), and the fact
that $|K_{\nu}|\leq |M|$, we get \eqref{main.W.1.p}, namely
\begin{align*}
\|\phi-\operatorname{id}\|_{W^{1,p}(\Omega)}\leq C|M|^{1/(2p)}\tau \quad \text{for every $1\leq p\leq \infty.$}
\end{align*}
Furthermore, from
\begin{align*}
\|\phi_\nu-\operatorname{id}\|_{L^\infty(\Omega)}
\leq C\sqrt{|K_\nu|}\tau\leq C\sqrt{|M|}\tau
\end{align*}
we deduce \eqref{main.L.infty}, namely
\begin{align*}
\|\phi-\operatorname{id}\|_{L^\infty(\Omega)}
\leq C\sqrt{|M|}\tau.
\end{align*}
\textit{Step 3.} It remains to prove the three assertions involving the
determinant, namely (\ref{main.nabla.phi.bounded.by.det})-(\ref{main.det.global}). By weak continuity of the determinant, we have for any $f\in
L^{\infty}(\Omega)$
\begin{equation}\label{weak.cont.det}
\lim_{\nu\rightarrow \infty} \int_{\Omega} f\det \nabla \phi_{\nu}~dx
=\int_{\Omega} f\det \nabla \phi~dx.
\end{equation}
By sequential weak-$\ast$ lower semicontinuity of $u\mapsto \|\nabla u \|_{L^1(A)}$ for any open set $A\subset\Omega$, we deduce from (\ref{lemma.set.nabla.bounded.by.det.W.1.p}) and (\ref{weak.cont.det}) that
\begin{align*}
\int_{\Omega}|\nabla \phi|\chi_{A} ~dx
&\leq \liminf_{\nu\rightarrow \infty} ||\nabla \phi_\nu||_{L^1(A)}
=\liminf_{\nu\rightarrow \infty}\int_{\Omega}|\nabla \phi_{\nu}|\chi_{A} ~dx
\\
&\leq C\lim_{\nu\rightarrow \infty}\int_{\Omega}\det\nabla \phi_{\nu}\chi_{A} ~dx
=C\int_{\Omega}\det\nabla \phi\chi_{A} ~dx,
\end{align*}
which proves (\ref{main.nabla.phi.bounded.by.det}) since the open set $A$ is arbitrary.
Recalling that (see (\ref{lemma.set.det.on.M})) $\det \nabla \phi_{\nu}\geq 1+\tau$
a.\,e.\ on $K_{\nu}$ and that $K_\nu\subset K_{\nu+1}$, we obviously have for any
measurable subset $A\subset K_{\nu}$
\begin{align*}
\int_{\Omega} \chi_A \det\nabla \phi_{\nu+k}~dx
\geq \int_{\Omega} (1+\tau)\chi_A~dx
\quad \forall k\geq 0,
\end{align*}
which implies (by (\ref{weak.cont.det}))
\begin{align*}
\int_{\Omega} \chi_A \det\nabla \phi~dx
\geq \int_{\Omega} (1+\tau)\chi_A~dx.
\end{align*}
By arbitrariness of $A\subset K_\nu$, we obtain $\det \nabla \phi\geq 1+\tau$
a.\,e.\ on $K_\nu$ for all $\nu$. Since $|M\setminus (\cup_{\nu\geq 1}K_{\nu})|=0,$
we deduce
\begin{align*}
\det\nabla \phi\geq 1+\tau \quad \text{a.\,e.\ in $M.$}
\end{align*}
Similarly, using (\ref{lemma.set.det.global}) and
(\ref{weak.cont.det}) we get for any measurable subset $A\subset \Omega$
\begin{align*}
&\int_{\Omega} \chi_A \det \nabla \phi~dx
=\lim_{\nu\rightarrow \infty}\int_{\Omega} \chi_A \det \nabla \phi_{\nu}~dx
\\
\geq&
\lim_{\nu\rightarrow \infty}
\int_{\Omega} (1-C\sqrt{|K_{\nu}|}\tau)\chi_A~dx
= \int_{\Omega} (1-C\sqrt{|M|}\tau)\chi_A~dx,
\end{align*}
which, again by arbitrariness of $A,$ implies
\begin{align*}
\det \nabla \phi
\geq 1-C\sqrt{|M|}\tau\quad \text{a.\,e.\ in $\Omega.$}
\end{align*}
This ends the proof.
\end{proof}
\subsection{Simplification of the boundary values}\label{sub.sect.Simplification.of.the.boundary.values}
We finally notice that it is enough to prove Theorem \ref{BiLipschitzMapTheorem}
when $\Omega=(0,1)^2$ and $M\subset \Omega$ is compact, and without assuming that the
bi-Lipschitz mapping preserves the boundary pointwise.
\begin{lemma}\label{lemma.simpl.boundary.values}
Theorem \ref{BiLipschitzMapTheorem} is implied by Proposition \ref{Prop.Bi.Lip.simpl} below.
\end{lemma}
\begin{proposition}\label{Prop.Bi.Lip.simpl}
There exist universal constants $C,c>0$ with the following property: for any
$\tau>0$ and any compact set $K\subset (0,1)^2$ with
$\max\{|K|,\sqrt{|K|}\tau\}\leq c$ there exists a bi-Lipschitz mapping
$\phi=\phi_{\tau,K}:[0,1]^2\rightarrow [0,1]^2$ satisfying
\begin{align}
\|\phi-\operatorname{id}\|_{W^{1,p}([0,1]^2)}&\leq C|K|^{1/(2p)}\tau
\quad \text{for all $1\leq p\leq \infty,$}
\label{prop.W.1.p}
\\
\|\phi-\operatorname{id}\|_{L^{\infty}([0,1]^2)}&\leq C\sqrt{|K|}\tau,
\label{prop.L.infty}
\\
|\nabla\phi|&\leq C\det\nabla\phi\quad \text{a.\,e.\ in $[0,1]^2,$}
\label{prop.nabla.phi.bounded.by.det}
\\
\det \nabla \phi&\geq 1+\tau\quad \text{a.\,e.\ on $K,$}
\label{prop.det.on.M}
\\
\det \nabla \phi&\geq 1-C\sqrt{|K|}\tau\quad \text{a.\,e.\ on $[0,1]^2.$}
\label{prop.det.global}
\end{align}
Moreover the following properties concerning the boundary values hold true:
\begin{align}
\phi_1(0,s)&=\phi_2(s,0)=0\quad\text{for every $s\in [0,1],$}\label{prop.bord.pres.glob.1}
\\
\phi_1(1,s)&=\phi_2(s,1)=1\quad\text{for every $s\in [0,1],$}\label{prop.bord.pres.glob.2}
\\
\partial_x \phi_1(x,y) &\geq 1-C|K|^{1/2}\tau\quad \text{for every }(x,y)\in [0,1]\times \{0,1\},
\label{prop.derivees.phi.1}
\\
\partial_y \phi_2(x,y) &\geq 1-C|K|^{1/2}\tau \quad \text{for every }(x,y)\in \{0,1\}\times [0,1]
\label{prop.derivees.phi.2}.
\end{align}
\end{proposition}
\begin{proof}[Proof of Lemma \ref{lemma.simpl.boundary.values}.]
\begin{figure}
\caption{A sketch of the construction for the correction of the boundary values. In the inner square, the construction from the proof of Proposition~\ref{Prop.Bi.Lip.simpl} is used.}
\label{BoundaryCorrection}
\end{figure}
First by Lemmas \ref{Lemma.simpl.domain} and \ref{Lemma.simpl.set} we know that it is enough to prove Theorem \ref{BiLipschitzMapTheorem} when $\Omega=(0,1)^2$ (or equivalently when $\Omega=(-1,1)^2$ -- due to certain symmetries in our construction below, it will be convenient to work on $(-1,1)^2$) and when $M$ (the set that will be stretched) is compact.
It hence remains to prove that Proposition \ref{Prop.Bi.Lip.simpl} implies Theorem \ref{BiLipschitzMapTheorem} in the case of $\Omega=(-1,1)^2$ and compact sets $M\subset (-1,1)^2$. We basically have to show how to change the stretching map $\phi$ so that it satisfies $\phi=\operatorname{id}$ on the boundary (while preserving the other properties).
The idea of the proof goes as follows:
We consider the subsquare $(-1+|M|^{1/2},1-|M|^{1/2})^2\subset (-1,1)^2$ (i.e. we allow for a boundary layer of thickness $|M|^{1/2}$) and use Proposition \ref{Prop.Bi.Lip.simpl} to obtain a stretching map on the subsquare. On the boundary layer, we interpolate between the boundary values of our stretching map on the subsquare and the identity boundary conditions on the original square. Furthermore, we stretch the full boundary layer, which means that we have to compress the subsquare in the interior slightly. Due to the size $|M|^{1/2}$ of the boundary layer, this will not destroy the stretching property on $M$ in the subsquare.
\textit{Step 1.} We consider the subsquare $S:=(-1+|M|^{1/2},1-|M|^{1/2})^2$
and apply (the rescaled version of) Proposition
\ref{Prop.Bi.Lip.simpl} to the set $M\cap S$ with $2\tau$ in place of $\tau$.
This yields a bi-Lipschitz map $\tilde \phi:\overline{S}\rightarrow \overline{S}$ with the properties (where we abbreviate $I_M:=(-1+|M|^{1/2},1-|M|^{1/2})$ and use the fact that we may assume that $|M|^{1/2}\leq \frac{1}{2}$)
\begin{subequations}
\label{PropertiesTildePhi}
\begin{align}
\label{W1pBoundDerivative}
\|\tilde\phi-\operatorname{id}\|_{W^{1,p}(S)}&\leq C|M|^{1/(2p)}
\tau\quad
\text{for every }1\leq p\leq \infty,
\\
\label{LinftyTildePhi}
\|\tilde\phi-\operatorname{id}\|_{L^\infty(S)}&\leq C|M|^{1/2}\tau,
\\
|\nabla \tilde\phi|&\leq C\det \nabla \tilde\phi \quad \text{a.\,e.\ in }S,
\\
\det \nabla \tilde\phi&\geq 1+2\tau\quad \text{a.\,e.\ on }M\cap S,
\\
\det \nabla \tilde\phi&\geq 1-C|M|^{1/2}\tau\quad \text{a.\,e.\ in }S,
\\
\tilde\phi_2(s,-1+|M|^{1/2})&=\tilde\phi_1(-1+|M|^{1/2},s)=-1+|M|^{1/2}\quad
\text{for }s\in I_M,\label{bord.phi.tilde.x}
\\
\tilde\phi_2(s,1-|M|^{1/2})&=\tilde\phi_1(1-|M|^{1/2},s)=1-|M|^{1/2}\quad
\text{for }s\in I_M,\label{bord.phi.tilde.y}
\\
\partial_x \tilde\phi_1(x,y) &\geq 1-C|M|^{1/2}\tau \quad \text{for every }(x,y)\in I_M\times \{-1+|M|^{1/2},1-|M|^{1/2}\},
\\
\label{LowerBoundyDerivative}
\partial_y \tilde\phi_2(x,y) &\geq 1-C|M|^{1/2}\tau \quad \text{for every }(x,y)\in \{-1+|M|^{1/2},1-|M|^{1/2}\} \times I_M.
\end{align}
\end{subequations}
\textit{Step 2.} We now define our map $\phi:[-1,1]^2\rightarrow [-1,1]^2$; a sketch of our
construction is provided in Figure \ref{BoundaryCorrection}. On the subsquare
$\overline{S}$, we set
\begin{align}
\label{DefinitionPhiCentralSquare}
\phi(x,y):=
\frac{1-|M|^{1/2}(1+2\tau)}{1-|M|^{1/2}}
\tilde \phi(x,y).
\end{align}
It remains to define $\phi$ in $[-1,1]^2\setminus \overline{S}$ which we divide into four quadrilaterals $Q_l,Q_u,Q_d$ and $Q_r$ (see Figure \ref{BoundaryCorrection}).
In the left quadrilateral $Q_l$, we define $\phi$ by setting
\begin{align}\label{def.on.Q.l}
\phi(x,y):=
\begin{pmatrix}
-1+(1+2\tau)(1+x)
\\
\frac{1-|M|^{1/2}+x}{x|M|^{1/2}} y + \frac{1+x}{|M|^{1/2}}
\phi_2 \left(-1+|M|^{1/2},
\frac{-1+|M|^{1/2}}{x}y
\right)
\end{pmatrix}.
\end{align}
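As a quick consistency check (not needed for the argument), note that on the outer edge $\{x=-1\}$ the formula (\ref{def.on.Q.l}) reduces to the identity: the first component equals $-1+(1+2\tau)\cdot 0=-1$, while the second component equals
\begin{align*}
\frac{1-|M|^{1/2}-1}{-|M|^{1/2}}\,y+0\cdot\phi_2\left(-1+|M|^{1/2},\,(1-|M|^{1/2})y\right)=y.
\end{align*}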
In the other three quadrilaterals, we define $\phi$ by an analogous construction.
We claim that the $\phi$ constructed by this procedure has all the desired properties. This will be verified in the remaining steps.
\textit{Step 3.} First, we easily see that $\phi=\operatorname{id}$ on $\partial [-1,1]^2$, i.e. that \eqref{main.boundary} holds.
We claim that $\phi$ is Lipschitz in $[-1,1]^2$. Since $\tilde \phi$ is Lipschitz in $\overline{S}$, the map $\phi$ is, by its definition, Lipschitz in each of $\overline{S}$, $\overline{Q_l}$, $\overline{Q_r}$, $\overline{Q_u}$, and $\overline{Q_d}$; hence
it is enough to prove that $\phi$ is continuous on
\begin{align*}
\{(\pm(1-\theta),\pm(1-\theta)):\theta\in [0,\sqrt{|M|}]\}\cup \partial S.
\end{align*}
The continuity of $\phi$ on the first above set (i.\,e.\ the four diagonal segments) is a direct consequence of the definition of $\phi$: For example, on the lower left diagonal we have $\phi(-1+\theta,-1+\theta)=(-1+(1+2\tau)\theta,-1+(1+2\tau)\theta)$ for $\theta\in [0,|M|^{1/2}]$. To prove the continuity on $\partial S$, by symmetry of our construction it is enough to prove it on the left vertical segment of $\partial S$, i.e. on
\begin{align*}
\{-1+|M|^{1/2}\}\times I_M.
\end{align*}
First, by (\ref{bord.phi.tilde.x}) and (\ref{DefinitionPhiCentralSquare}) we have
\begin{align}\label{equ.for.phi.1}
\phi_1(-(1-|M|^{1/2}),y)=-1+|M|^{1/2}(1+2\tau)
\quad\text{for }y\in I_M.
\end{align}
Using (\ref{def.on.Q.l}), (\ref{DefinitionPhiCentralSquare}), and (\ref{equ.for.phi.1}), we have, for any $y\in I_M$,
\begin{align*}
\lim_{x\nearrow -1+|M|^{1/2}} \phi(x,y) =
\begin{pmatrix}
-1+(1+2\tau)|M|^{1/2}
\\
\phi_2(-1+|M|^{1/2},y)
\end{pmatrix}
=
\phi(-1+|M|^{1/2},y),
\end{align*}
proving the claim.
\textit{Step 4.}
In this step we verify the assertions \eqref{main.W.1.p} through \eqref{main.det.global}. It is sufficient to check these assertions separately on $S,Q_l,Q_r,Q_u$ and $Q_d$. In the central square $S$, these properties are a straightforward consequence (taking $|M|$ and $\tau |M|^{1/2}$ smaller if necessary) of the properties \eqref{PropertiesTildePhi} and our definition \eqref{DefinitionPhiCentralSquare}, since the prefactor in \eqref{DefinitionPhiCentralSquare} may be rewritten as
\begin{align*}
1-\frac{2\tau |M|^{1/2}}{1-|M|^{1/2}}.
\end{align*}
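Indeed, this rewriting is a one-line computation:
\begin{align*}
\frac{1-|M|^{1/2}(1+2\tau)}{1-|M|^{1/2}}
=\frac{(1-|M|^{1/2})-2\tau |M|^{1/2}}{1-|M|^{1/2}}
=1-\frac{2\tau |M|^{1/2}}{1-|M|^{1/2}}.
\end{align*}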
It is therefore sufficient to check \eqref{main.W.1.p} through \eqref{main.det.global} on the left quadrilateral $Q_l$, as the construction of $\phi$ in the three other quadrilaterals is analogous.
\textit{Step 4.1.} In $Q_l$, we have
\begin{align}
\label{LemmaPhiDx}
\partial_x \phi(x,y)
=
\begin{pmatrix}
1+2\tau
\\
\frac{-1+|M|^{1/2}}{x^2 |M|^{1/2}}y[1-(1+x)\partial_y \phi_2(\ldots)]
+\frac{1}{|M|^{1/2}}\phi_2(\ldots)
\end{pmatrix}
\end{align}
and
\begin{align}\label{LemmaPhiDy}
\partial_y \phi(x,y)
&=
\begin{pmatrix}
0
\\
\frac{1-|M|^{1/2}+x}{x|M|^{1/2}}+\frac{(1+x)(-1+|M|^{1/2})}{x|M|^{1/2}}
\partial_y\phi_2(\ldots)
\end{pmatrix}
\end{align}
where ``$(\ldots)$'' stands (and will stand) for
\begin{align*}
\left(-1+|M|^{1/2},
\frac{-1+|M|^{1/2}}{x}y
\right).
\end{align*}
From (\ref{LowerBoundyDerivative}) and (\ref{DefinitionPhiCentralSquare}) we deduce
\begin{align} \partial_{y}\phi_2(\ldots)\geq 1-C\tau |M|^{1/2}\label{phi.2.y.lower.bound}.
\end{align}
From \eqref{LemmaPhiDx}, \eqref{LemmaPhiDy} and \eqref{phi.2.y.lower.bound} we deduce that, for $(x,y)\in Q_l,$
\begin{align}
\nonumber
\det \nabla \phi(x,y)&=(1+2\tau) \left(
\frac{1-|M|^{1/2}+x}{x|M|^{1/2}}
+\frac{(1+x)(-1+|M|^{1/2})}{x|M|^{1/2}}\partial_y\phi_2(\ldots)
\right)\\ \nonumber
&\geq (1+2\tau) \left(
\frac{1-|M|^{1/2}+x}{x|M|^{1/2}}
+\frac{(1+x)(-1+|M|^{1/2})}{x|M|^{1/2}}(1-C\tau |M|^{1/2})
\right)
\\&
\label{LemmaPhiDet}
= (1+2\tau) \left(1-C \tau |M|^{1/2} \frac{(1+x)(-1+|M|^{1/2})}{x|M|^{1/2}} \right)
\geq 1+\tau,
\end{align}
where in the last inequality we have used the fact that we have $|1+x|\leq |M|^{1/2}$ in $Q_l$ (as well as the smallness condition on $|M|$ and $|M|^{1/2} \tau$).
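For the reader's convenience, we also record the elementary identity behind the equality in \eqref{LemmaPhiDet}:
\begin{align*}
\frac{1-|M|^{1/2}+x}{x|M|^{1/2}}
+\frac{(1+x)(-1+|M|^{1/2})}{x|M|^{1/2}}
=\frac{(1-|M|^{1/2}+x)+(1+x)(-1+|M|^{1/2})}{x|M|^{1/2}}
=\frac{x|M|^{1/2}}{x|M|^{1/2}}=1.
\end{align*}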
Therefore \eqref{main.det.on.M} and \eqref{main.det.global} are established.
\textit{Step 4.2.}
From \eqref{W1pBoundDerivative} for $p=\infty$, \eqref{LinftyTildePhi}, and \eqref{DefinitionPhiCentralSquare} we deduce (see also the beginning of Step 4) that
\begin{align}\label{equ.intermediaire}
|\partial_y\phi_2(\ldots)-1|\leq C\tau\quad \text{and}\quad \left|\phi_2(\ldots)-y\frac{-1+|M|^{1/2}}{x}\right|\leq C\tau |M|^{1/2}.
\end{align}
By \eqref{LemmaPhiDx} and \eqref{equ.intermediaire} we have for $(x,y)\in Q_l$
\begin{align}
\Big|\partial_x \phi(x,y)-
\begin{pmatrix}
1\\0
\end{pmatrix}
\Big|
\leq& C\tau + \left|\frac{-1+|M|^{1/2}}{x^2 |M|^{1/2}}y(1+x)(\partial_y \phi_2(\ldots)-1)\right|
\nonumber\\&
+\frac{1}{|M|^{1/2}}\left|\phi_2(\ldots)-y\frac{-1+|M|^{1/2}}{x}\right|
\nonumber\\
\leq&
C\tau+C\tau+\frac{1}{|M|^{1/2}} C|M|^{1/2}\tau
\nonumber\\
=&C\tau,\label{estimate.phi.dx}
\end{align}
where in the last inequality we have used the fact that $|1+x|\leq |M|^{1/2}$ holds in the left quadrilateral $Q_l$.
Similarly, we have by \eqref{LemmaPhiDy} and \eqref{equ.intermediaire} for $(x,y)\in Q_l$
\begin{align}
\Big|\partial_y \phi(x,y)-
\begin{pmatrix}
0\\1
\end{pmatrix}
\Big|
\leq \left|
\frac{(1+x)(1-|M|^{1/2})}{-x|M|^{1/2}}
(\partial_y\phi_2(\ldots)-1)\right|
\leq C\tau,\label{estimate.phi.dy}
\end{align}
where in the last inequality we have used the fact that $|1+x|\leq |M|^{1/2}$ holds in the left quadrilateral.
\textit{Step 4.3.}
As the area of the left quadrilateral $Q_l$ is bounded by $|M|^{1/2}$, \eqref{estimate.phi.dx} and \eqref{estimate.phi.dy} establish the bound \eqref{main.W.1.p}. Next from \eqref{LemmaPhiDet}, \eqref{estimate.phi.dx} and \eqref{estimate.phi.dy} we directly get \eqref{main.nabla.phi.bounded.by.det}. It remains to show \eqref{main.L.infty}. This however is an easy consequence of the bound \eqref{main.W.1.p} for $p=\infty$, the fact that on the left edge of our left quadrilateral we have $\phi=\operatorname{id}$, as well as the fact that any point in the left quadrilateral has distance of at most $|M|^{1/2}$ to the left edge of the quadrilateral.
\textit{Step 5.} Since $\phi=\operatorname{id}$ on $\partial \Omega$, since $\phi$ is Lipschitz and since from \eqref{main.det.global}, up to taking $\tau |M|^{1/2}$ smaller if necessary,
\begin{align*}
\det\nabla \phi\geq 1/2,
\end{align*}
classical degree theory results show that $\phi$ is bi-Lipschitz (see e.\,g.\ Theorem 2 in \cite{Ball2}).
This concludes the proof.
\end{proof}
\subsection{Covering by strips}\label{SubSection.covering}
The following lemma is a consequence of the (combinatorial) Erd\H{o}s-Szekeres theorem, see Theorem 2.1 in Alberti, Cs\"ornyei, and Preiss \cite{Alberti}.
\begin{lemma}
\label{CombinatorialResult}
Let $S\subset \mathbb{R}^2$ be a set containing a finite number of points. Then there exist $N$ functions $f_i: \mathbb{R} \rightarrow \mathbb{R}$ and $M$ functions $g_j: \mathbb{R} \rightarrow \mathbb{R}$ with the following properties:
\begin{itemize}
\item We have the estimates $N\leq \sqrt{\sharp S}$ and $M\leq \sqrt{\sharp S}$.
\item The functions $f_i$ and $g_j$ are Lipschitz continuous with Lipschitz constant of at most $1$.
\item The set $S$ is contained in the union of the graphs of the $f_i$ (considered as functions of $x$) and the graphs of the $g_j$ (considered as functions of $y$), i.e.\ we have
\begin{align*}
S\subset \bigcup_{1 \leq i \leq N} \{(x,f_i(x)):x\in \mathbb{R}\} \cup \bigcup_{1 \leq j \leq M} \{(g_j(y),y):y \in \mathbb{R}\}.
\end{align*}
\end{itemize}
\end{lemma}
\begin{remark}\label{remark.no.generalization.higher.dim}
The previous lemma is no longer true in higher dimensions (see Question 8.2 in
\cite{Alberti}).
\end{remark}
Lemma \ref{CombinatorialResult} has the following consequence, as observed by Alberti, Cs\"ornyei, and Preiss \cite{Alberti}.
\begin{theorem}[Alberti, Cs\"ornyei, Preiss \cite{Alberti}]
\label{CoveringOfCompactSet}
Let $K\subset [0,1]^2$ be a compact set and let $\epsilon>0$. For any $\delta>0$ small enough (depending on $\epsilon$ and $K$), there exist $N$ functions $f_i:[0,1]\rightarrow [0,1]$ as well as $M$ functions $g_j:[0,1]\rightarrow [0,1]$ such that the following holds true:
\begin{itemize}
\item We have the estimates $\delta N\leq \sqrt{|K|}+\epsilon$ and $\delta M\leq \sqrt{|K|}+\epsilon$.
\item The functions $f_i$ and $g_j$ are Lipschitz continuous with Lipschitz constant no larger than $1$.
\item Considering the horizontal strips $H_i:=\{(x,y):x\in [0,1],|y-f_i(x)|\leq \delta\}$ and the vertical strips $V_j:=\{(x,y):y\in [0,1],|x-g_j(y)|\leq \delta\}$, the set $K$ is covered by these strips, i.e.\ we have
\begin{align*}
K\subset \bigcup_{i=1}^N H_i \cup \bigcup_{j=1}^M V_j.
\end{align*}
\end{itemize}
\end{theorem}
For the reader's convenience, we also give the proof of the result.
\begin{proof} \textit{Step 1.}
Fix $\epsilon>0$. We set $\delta:=2^{-l}$ and subdivide the unit square $[0,1]^2$ into $2^l \times 2^l$ squares. By compactness of $K$, for $l\in \mathbb{N}$ large enough the total area of the subsquares that have nonempty intersection with $K$ is bounded by $|K|+\epsilon^2$ (this follows from the monotone convergence of the sequence $\chi_{K_l}$ towards $\chi_K$, where $K_l$ denotes the union of all subsquares of size $2^{-l} \times 2^{-l}$ that have nonempty intersection with $K$).
\textit{Step 2.}
Applying Lemma \ref{CombinatorialResult} to the set of centers of the subsquares contained in $K_l$ -- note that by the previous considerations, the number of such squares is bounded by $2^{2l}(|K|+\epsilon^2)$ -- , we obtain $N$ functions $f_i:\mathbb{R}\rightarrow \mathbb{R}$ and $M$ functions $g_j:\mathbb{R}\rightarrow \mathbb{R}$ whose graphs cover the centers of our squares. Furthermore, we have
$N\leq \sqrt{2^{2l}(|K|+\epsilon^2)}$ and $M\leq \sqrt{2^{2l}(|K|+\epsilon^2)}$, which implies the desired estimates $\delta N\leq \sqrt{|K|}+\epsilon$ and $\delta M\leq \sqrt{|K|}+\epsilon$. We restrict the functions $f_i$ and $g_j$ to the interval $[0,1]$ and replace them by $\min\{\max\{f_i,0\},1\}$ and $\min\{\max\{g_j,0\},1\}$. This obviously preserves the $1$-Lipschitz property; furthermore, the centers of the subsquares are still covered by the graphs of the modified $f_i$ and $g_j$.
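For completeness, the passage from these bounds to the desired estimates uses only $\delta=2^{-l}$ and the subadditivity of the square root:
\begin{align*}
\delta N\leq 2^{-l}\sqrt{2^{2l}(|K|+\epsilon^2)}=\sqrt{|K|+\epsilon^2}\leq \sqrt{|K|}+\epsilon,
\end{align*}
and similarly for $\delta M$.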
\textit{Step 3.}
It is easy to see that the union of the strips associated with the (modified) $f_i$ and $g_j$ provides a covering of $K$: A strip $H_i$ must cover a full subsquare whenever the graph of $f_i$ passes through the center of the subsquare, as the $f_i$ are $1$-Lipschitz functions and the strips $H_i$ have half-width $\delta$, which equals the side length of the subsquares. For the $V_j$ and $g_j$, the analogous assertion holds.
\end{proof}
We now provide a slightly stronger version of Theorem \ref{CoveringOfCompactSet} which states that
\begin{itemize}
\item one may actually choose the interiors of the horizontal strips $H_i$ to be pairwise disjoint, and similarly for the vertical strips $V_j$, and
\item the strips $H_i$ and $V_j$ can be chosen to be contained in the unit square.
\end{itemize}
This fact was first observed by Marchese \cite{Marchese} and later, independently, by the authors in an earlier version of the present paper.
\begin{lemma}
\label{IntersectionFreeCovering}
The following slightly strengthened version of Theorem \ref{CoveringOfCompactSet} holds: We may additionally enforce in Theorem \ref{CoveringOfCompactSet} that we have $f_{i+1}\leq f_i-2\delta$ and $g_{j+1}\leq g_j - 2\delta$ or, equivalently,
\begin{align*}
\sum_{i=1}^N \chi_{\{(x,y):x\in [0,1],|y-f_i(x)|< \delta\}}\leq 1
\quad\text{and}\quad
\sum_{j=1}^M \chi_{ \{(x,y):y\in [0,1],|x-g_j(y)|< \delta\}}\leq 1.
\end{align*}
Furthermore, we may enforce $\delta\leq f_i\leq 1-\delta$ and $\delta\leq g_j\leq 1-\delta$, or, equivalently,
$$\bigcup_{i=1}^N H_i \cup \bigcup_{j=1}^M V_j\subset [0,1]^2.$$
\end{lemma}
\begin{proof}\textit{Step 1.}
Let $\tilde f_i$, $\tilde g_j$ be the families of functions obtained by applying Theorem~\ref{CoveringOfCompactSet}. Then define $f_1$ as
\begin{align*}
f_1(x):=\min\big\{\max\{\tilde f_1(x),\ldots,\tilde f_N(x),(1+2(N-1))\delta\},1-\delta\big\}.
\end{align*}
Inductively, define for $2\leq i\leq N$
\begin{align*}
f_{i}(x):=\min\big\{\max\big\{{\max}_{i}\{\tilde f_1(x),\ldots,\tilde f_N(x)\},(1+2(N-i))\delta\big\},f_{i-1}-2\delta\big\}
\end{align*}
where $\max_{i}$ denotes the $i$-th largest element of the set (in particular $\max_1$ is the usual maximum and, e.\,g.,
$\max_2\{1,3,3\}=3$). We define the $g_j$ analogously.
We claim that the $f_i$ and $g_j$ have all the desired properties. This will be shown in the remaining
three steps.
\textit{Step 2.} First we trivially have that
$$f_{i+1}\leq f_i-2\delta, \quad \text{and}\quad f_i\leq 1-\delta$$ and similarly for the $g_j.$ Moreover a direct induction shows that $f_i\geq (1+2(N-i))\delta$ which in particular implies that
$$f_i\geq \delta$$ and similarly for $g_j.$
The Lipschitz estimate on the $\tilde f_i$ from Theorem \ref{CoveringOfCompactSet} easily carries over to the $f_i$ (and similarly for the $g_j$). It only remains to prove (see Step 3) the covering property of the modified strips, i.e.
\begin{align}\label{K.covered}
K\subset \bigcup_{i=1}^N \{(x,y):x\in [0,1],|y-f_i(x)|\leq \delta\} \cup \bigcup_{j=1}^M \{(x,y):y\in [0,1],|x-g_j(y)|\leq \delta\}.
\end{align}
\textit{Step 3.}
Since by construction (\ref{K.covered}) is fulfilled with $f_i$ and $g_j$ replaced by $\tilde f_i$ and $\tilde g_j$
and since $K\subset [0,1]^2$ it is enough to show that, for every $x\in [0,1],$
\begin{align}
\label{strips.still.covering.K}
[0,1]\cap \bigcup_{i=1}^N
[\tilde f_i(x)-\delta,\tilde f_i(x)+\delta]
\subset\bigcup_{i=1}^N
[f_i(x)-\delta,f_i(x)+\delta]
\end{align}
and the analogous inclusion for the $\tilde g_j$ and $g_j$.
We only prove the assertion for the $f_i$ (the other one being the same) and proceed in three substeps.
\textit{Step 3.1.}
Defining the family of functions $\overline f_i$ by
\begin{align*}
\overline f_{i}(x):={\max}_{i}\{\tilde f_1(x),\ldots,\tilde f_N(x)\big\},
\end{align*} we claim that
\begin{align*}\bigcup_{i=1}^N
[\tilde f_i(x)-\delta,\tilde f_i(x)+\delta]
=\bigcup_{i=1}^N
[\overline f_i(x)-\delta,\overline f_i(x)+\delta].
\end{align*}
Indeed this directly follows since the tuple $(\overline f_{1}(x),\cdots,\overline f_{N}(x))$ is just a reordering of the tuple
$(\tilde f_{1}(x),\cdots,\tilde f_{N}(x))$.
\textit{Step 3.2.} Define another family $\hat f_i$ by
\begin{align*}
\hat f_{i}(x):=\max\big\{\overline f_{i}(x),(1+2(N-i))\delta\big\}.
\end{align*}
We claim that
\begin{align*}
[0,1]\cap\bigcup_{i=1}^N
[\overline f_i(x)-\delta,\overline f_i(x)+\delta]\subset
\bigcup_{i=1}^N
[\hat f_{i}(x)-\delta,\hat f_{i}(x)+\delta].
\end{align*}
Assume that, for some $i$, $\hat f_i(x)\neq \overline f_i(x)$ (otherwise the claim is trivial), and thus, by definition,
\begin{align*}\hat f_i(x)=(1+2(N-i))\delta> \overline f_i(x).
\end{align*}
If $\hat f_{i+1}(x)\leq \overline f_i(x) $ we deduce, by definition, that
\begin{align*}(1+2(N-i-1))\delta\leq \hat f_{i+1}(x)\leq \overline f_i(x)<\hat f_i(x)=(1+2(N-i))\delta
\end{align*}
which directly implies that
\begin{align*}
[\overline f_i(x)-\delta,\overline f_i(x)+\delta]\subset[\hat f_{i}(x)-\delta,\hat f_{i}(x)+\delta]\cup [\hat f_{i+1}(x)-\delta,\hat f_{i+1}(x)+\delta]
\end{align*}
and we are done.
If $\hat f_{i+1}(x)> \overline f_i(x)$ we deduce, by definition of $\hat f_{i+1}(x)$ and
since $\overline f_i(x)\geq \overline f_{i+1}(x)$, that
\begin{align*}\hat f_{i+1}(x)=(1+2(N-i-1))\delta.
\end{align*}
Hence, proceeding by induction we end up with one of the two following cases:
\begin{itemize}
\item there exists $i<l<N$ such that
\begin{align*}(1+2(N-l-1))\delta\leq \hat f_{l+1}(x)\leq \overline f_{i}(x)\leq \hat f_{l}(x)=(1+2(N-l))\delta,
\end{align*}
and we are done as before.
\item If such an $l$ does not exist we have $\delta=\hat f_{N}(x)>\overline f_{i}(x)$, in which case we trivially have
\begin{align*}
[0,1]\cap[\overline f_i-\delta,\overline f_i+\delta]\subset[\hat f_{N}(x)-\delta,\hat f_{N}(x)+\delta],
\end{align*}
showing the claim.
\end{itemize}
\textit{Step 3.3.} Recalling that the family $f_i$ is defined as \begin{align*}
f_1(x)=\min\big\{\hat f_{1}(x),1-\delta\big\}
\end{align*}
and, for $2\leq i\leq N,$
\begin{align*}
f_{i}(x)=\min\big\{\hat f_{i}(x),f_{i-1}-2\delta\big\},
\end{align*}
it is enough, in view of Steps 3.1 and 3.2, to prove that
\begin{align*} [0,1]\cap
\bigcup_{i=1}^N[\hat f_{i}(x)-\delta,\hat f_{i}(x)+\delta]\subset
\bigcup_{i=1}^N[f_{i}(x)-\delta,f_{i}(x)+\delta],
\end{align*}
in order to show (\ref{strips.still.covering.K}) and prove the lemma.
First we trivially have
\begin{align*}
[0,1]\cap
[\hat f_{1}(x)-\delta,\hat f_{1}(x)+\delta]\subset
[f_{1}(x)-\delta,f_{1}(x)+\delta].
\end{align*}
Finally, suppose that for some $i\geq 2$ we have $f_i(x)\neq \hat f_{i}(x)$ (otherwise the claim is trivial) and thus, by definition,
\begin{align*}
\hat f_{i}(x)> f_{i}(x)=f_{i-1}-2\delta.
\end{align*}
If $\hat f_{i}(x)\in [f_{i}(x),f_{i-1}(x)]$ then the previous equality directly implies that
\begin{align*}[\hat f_{i}(x)-\delta,\hat f_{i}(x)+\delta]\subset
[f_{i}(x)-\delta,f_{i}(x)+\delta]\cup [f_{i-1}(x)-\delta,f_{i-1}(x)+\delta]
\end{align*}
and we are done.
If $\hat f_{i}(x)>f_{i-1}(x)=\min\{\hat f_{i-1}(x),f_{i-2}-2\delta\}$ we deduce that,
since $\hat f_{i-1}(x)\geq \hat f_{i}(x)$,
\begin{align*} f_{i-1}(x)=f_{i-2}-2\delta.
\end{align*}
Hence proceeding by induction we deduce that one of the cases below occur:
\begin{itemize}
\item there exists some $i>l\geq 2$ such that $\hat f_{i}(x)\in [f_{l}(x),f_{l-1}(x)]$ and
$f_{l}(x)=f_{l-1}(x)-2\delta$, and we are done as above.
\item If such an $l$ does not exist we have $\hat f_{i}(x)>f_1(x)=1-\delta$, in which case we trivially have
\begin{align*}
[0,1]\cap
[\hat f_{i}(x)-\delta,\hat f_{i}(x)+\delta]\subset
[f_{1}(x)-\delta,f_{1}(x)+\delta],
\end{align*}
proving the claim.
\end{itemize}
\end{proof}
\subsection{The simplified model case}\label{subsection.stretching}
We now prove Proposition \ref{Prop.Bi.Lip.simpl} giving an explicit formula for the stretching map $\phi.$
\begin{figure}
\caption{The above pictures illustrate the proofs of Theorem~\ref{CoveringOfCompactSet} and of Proposition~\ref{Prop.Bi.Lip.simpl}; see the detailed explanation in the text.}
\label{SketchStretching}
\end{figure}
In the accompanying figure, the strategy of the proof of the proposition is
illustrated.
\textbf{Explanation of Figure \ref{SketchStretching}.}
In the two pictures at the top, we provide an illustration of the proof of Theorem~\ref{CoveringOfCompactSet} due to Alberti, Cs\"ornyei, and Preiss, which reduces the covering assertion to the combinatorial result in Lemma \ref{CombinatorialResult}.
The unit square is divided into $2^{2l}$ squares of size $1/2^l$ (the division is sketched by gray lines). The compact set $K$ corresponds to the gray regions. The black points in these pictures represent the centers of all grid squares which have nonempty intersection with $K$. The blue and red lines in the second picture at the top represent the graphs of the $1$-Lipschitz maps $f_i$ and $g_j$ which cover the black points, as provided by Lemma \ref{CombinatorialResult}.
In the picture at the center and the two pictures at the bottom, we illustrate the construction of the stretching map of Proposition~\ref{Prop.Bi.Lip.simpl} (using a different example). In these pictures, the covering of the set $K$ by strips -- as provided by Lemma \ref{IntersectionFreeCovering} -- is indicated by the framed colored regions, each strip being assigned a separate color. In the picture in the center, the compact set $K$ is also depicted: It corresponds to the gray regions in the background of the strips. The pictures at the bottom illustrate the construction of our stretching map $\phi=(\phi_1,\phi_2)$ in Proposition \ref{Prop.Bi.Lip.simpl}: To define the second component $\phi_2$ of our map $\phi$, we stretch all the horizontal strips by a factor of $1+2\tau$ (see the penultimate picture); similarly, to define the first component $\phi_1$ we stretch all the vertical strips by a factor of $1+2\tau$ (see the last picture).
\begin{proof}[Proof of Proposition \ref{Prop.Bi.Lip.simpl}]
First note that for $|K|=0$ the identity map $\phi=\operatorname{id}$
trivially has all the properties stated in the proposition. Hence we can assume that
$|K|>0$.
\textit{Step 1 (preliminaries).}
Applying Lemma \ref{IntersectionFreeCovering} to $K$ and $\epsilon=|K|^{1/2},$ there exist
$\delta=1/2^l>0$ (for some $l$ large enough), $N,M\in \mathbb{N}$ and $1$-Lipschitz functions
$f_i,g_j:[0,1]\rightarrow [0,1],$ $1\leq i\leq N,$ $1\leq j\leq M$ such that
\begin{equation}
\label{control.N.M}
\delta N\leq 2\sqrt{|K|}\quad \text{and}\quad \delta M\leq 2\sqrt{|K|},
\end{equation}
\begin{equation}
\label{strips.covering.K}
K\subset \big(\bigcup_{i=1}^N\{|f_i(x)-y|\leq \delta\}\big)
\cup \big(\bigcup_{j=1}^M \{|g_j(y)-x|\leq \delta\}\big)\subset [0,1]^2,
\end{equation}
\begin{equation}
\label{control.intersections}
\sum_{i=1}^N\chi_{\{|f_i(x)-y|< \delta\}}\leq 1\quad
\text{and}\quad \sum_{j=1}^M\chi_{\{|g_j(y)-x|< \delta\}}\leq 1,
\end{equation}
where $\{|f_i(x)-y|\leq \delta\}$ is a short notation for
$\{(x,y):x\in [0,1],|f_i(x)-y|\leq \delta\}$ and similarly for the $g_j$.
\textit{Step 2 (definition of $\phi$).}
Define our map $\phi=(\phi_1,\phi_2)$ by
\begin{align*}\phi_2(x,y)&=(1-4\delta N\tau)y+2\tau\sum_{i=1}^{N}
\Big(\min\left(2\delta,y-f_i(x)+\delta\right)-\min\left(0,y-f_i(x)+\delta\right)
\Big)
\\
&=(1-4\delta N\tau)y+2\tau\sum_{i=1}^{N}\left\{\begin{array}{cl}
2\delta& \text{if $y\geq f_i(x)+\delta$}\\
y- f_i(x)+\delta & \text{if $f_i(x)-\delta\leq y\leq f_i(x)+\delta$}\\
0 &\text{if $y\leq f_i(x)-\delta$}
\end{array}
\right.
\end{align*}
and, symmetrically,
\begin{align*}
\phi_1(x,y)&=(1-4\delta M\tau)x+2\tau\sum_{j=1}^{M}
\Big(
\min\left(2\delta,x-g_j(y)+\delta\right)-\min\left(0,x-g_j(y)+\delta\right)
\Big)
\\
&=(1-4\delta M\tau)x+2\tau\sum_{j=1}^{M}\left\{\begin{array}{cl}
2\delta& \text{if $x\geq g_j(y)+\delta$}\\
x- g_j(y)+\delta & \text{if $g_j(y)-\delta\leq x\leq g_j(y)+\delta$}\\
0 &\text{if $x\leq g_j(y)-\delta$}.
\end{array}
\right.
\end{align*}
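To illustrate the formula (this simple special case is not needed for the proof), consider a single horizontal strip, $N=1$ with $f_1\equiv 1/2$, and no vertical strips, $M=0$: then $\phi_1(x,y)=x$ and
\begin{align*}
\phi_2(x,y)=(1-4\delta \tau)y+2\tau\left\{\begin{array}{cl}
2\delta& \text{if $y\geq 1/2+\delta$}\\
y-1/2+\delta & \text{if $1/2-\delta\leq y\leq 1/2+\delta$}\\
0 &\text{if $y\leq 1/2-\delta$,}
\end{array}
\right.
\end{align*}
so that $\phi_2(x,0)=0$, $\phi_2(x,1)=1$, the strip $\{|y-1/2|\leq \delta\}$ is stretched vertically by the factor $1-4\delta\tau+2\tau$, and its complement is slightly compressed by the factor $1-4\delta\tau$.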
\textit{Step 3 (properties of $\phi_2$).} First from \eqref{control.N.M} we get that, for every $(x,y)\in [0,1]^2$,
\begin{equation}
\label{phi.2.L.infty}
|\phi_2(x,y)-y|\leq 4\tau \delta N y+4\tau \delta N\leq 16\tau\sqrt{|K|}.
\end{equation}
Together with the corresponding estimate for $\phi_1$, this shows \eqref{prop.L.infty}.
Moreover, since all the strips are contained in $[0,1]^2$
(see the second inclusion in (\ref{strips.covering.K})) we get that
for $1\leq i\leq N$ and $x\in [0,1]$
\begin{align*}
0\leq f_i(x)-\delta\leq f_i(x)+\delta\leq 1
\end{align*}
which directly implies
\begin{equation}\label{phi.2.boundary.horiz}
\phi_2(x,0)=0\quad \text{and}\quad \phi_2(x,1)=1\quad \text{for all $x\in [0,1].$}
\end{equation}
We now turn to the properties of the derivatives of $\phi_2.$
It is elementary to see that $\phi_2$ is Lipschitz and that for a.\,e.\ $(x,y)\in [0,1]^2$ we have
\begin{align*}
\partial_{x}\phi_2(x,y)&=-2\tau \sum_{i=1}^Nf_i'(x)\chi_{\{|f_i(x)-y|<
\delta\}},
\\
\partial_{y}\phi_2(x,y)&=1-4\delta N\tau+2\tau \sum_{i=1}^N\chi_{\{|f_i(x)-y|<
\delta\}}.
\end{align*}
We first deal with $\partial_x\phi_2.$
From the fact that $|f_i'|\leq 1$ and (\ref{control.intersections}),
we immediately obtain
\begin{align*}
\partial_x\phi_2&=0 \quad \text{a.\,e.\ outside
$\bigcup_{i=1}^N\{|f_i(x)-y|< \delta\}$},\\
|\partial_x\phi_2|&\leq 2\tau \quad \text{a.\,e.\ in $\bigcup_{i=1}^N\{|f_i(x)-y|<
\delta\}$}.
\end{align*}
Since (\ref{control.N.M}) yields
\begin{align*}
|\cup_{i=1}^N\{|f_i(x)-y|< \delta\}|\leq 2\delta N\leq 4\sqrt{|K|},
\end{align*}
we directly obtain
\begin{align*}
\int_{[0,1]^2}|\partial_x\phi_2|^p \,dx
=\int_{\cup_{i=1}^N\{|f_i(x)-y|<\delta\}} |\partial_x\phi_2|^p \,dx
\leq 4\sqrt{|K|}(2\tau)^p
\end{align*}
holds, which implies for every $1\leq p\leq \infty$
\begin{equation}\label{phi.2.x.L.p}
\|\partial_x\phi_2\|_{L^p([0,1]^2)}\leq 8\tau |K|^{1/(2p)}.
\end{equation}
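Here, for $1\leq p<\infty$ we raised the previous estimate to the power $1/p$ and used $4^{1/p}\leq 4$, i.e.
\begin{align*}
\|\partial_x\phi_2\|_{L^p([0,1]^2)}\leq \big(4\sqrt{|K|}\big)^{1/p}\,2\tau\leq 8\tau |K|^{1/(2p)},
\end{align*}
while for $p=\infty$ the pointwise bound $|\partial_x\phi_2|\leq 2\tau$ already gives (\ref{phi.2.x.L.p}).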
We now deal with $\partial_y\phi_2.$ We first notice that we have
\begin{align}\label{partial.y.phi.2.outside}
&\partial_{y}\phi_2(x,y)=1-4\tau\delta N\quad \text{for }(x,y)\in [0,1]^2\setminus \cup_{i=1}^N\{(x,y)\in [0,1]^2:|f_i(x)-y|\leq \delta\}
\end{align}
and
\begin{align}\label{partial.y.phi.2.inside}
&\partial_{y}\phi_2(x,y)=1-4\tau\delta N+2\tau \quad \text{for }(x,y)\in \cup_{i=1}^N\{(x,y)\in [0,1]^2:|f_i(x)-y|\leq \delta\}.
\end{align}
This directly yields the lower bound \eqref{prop.derivees.phi.2}. The bound \eqref{prop.derivees.phi.1} is obtained by analogous estimates for $\phi_1$.
Using (\ref{partial.y.phi.2.outside}) and (\ref{partial.y.phi.2.inside})
and noticing as before that
\begin{align*}
|\{(x,y)\in [0,1]^2: |f_i(x)-y|\leq \delta \text{ for some }i\}|
\leq 2\delta N\leq 4\sqrt{|K|},
\end{align*}
we get
\begin{align*}
&\int_{[0,1]^2}|\partial_{y}\phi_2(x,y)-1|^p dx
\\&
= \int_{\{(x,y)\in [0,1]^2: |f_i(x)-y|>\delta \text{ }\forall i\}}
|\partial_{y}\phi_2(x,y)-1|^p \,dx
\\&~~~
+\int_{\{(x,y)\in [0,1]^2: |f_i(x)-y|\leq \delta \text{ for some }i\}}
|\partial_{y}\phi_2(x,y)-1|^p \,dx
\\&
\leq (C\tau)^{p}\sqrt{|K|}.
\end{align*}
Thus, for every $1\leq p\leq \infty$ we have
\begin{equation}\label{phi.2.y.L.p.line}
\|\partial_y\phi_2-1\|_{L^p([0,1]^2)}\leq C
\tau |K|^{1/(2p)}.
\end{equation}
In connection with the analogous estimates for $\phi_1$, \eqref{phi.2.x.L.p} and \eqref{phi.2.y.L.p.line} imply \eqref{prop.W.1.p}.
\textit{Step 4.} In this step we prove the three assertions involving the determinant, namely (\ref{prop.nabla.phi.bounded.by.det})-(\ref{prop.det.global}). For a.\,e.\ $(x,y)\in [0,1]^2$ we have
\begin{align*}
\nabla\phi(x,y)=
\begin{pmatrix}
1-4\tau\delta M+2\tau \sum_{j=1}^M\chi_{\{|g_j(y)-x|< \delta\}}
&
-2\tau \sum_{j=1}^Mg_j'(y)\chi_{\{|g_j(y)-x|<\delta\}}
\\
-2\tau \sum_{i=1}^Nf_i'(x)\chi_{\{|f_i(x)-y|<\delta\}}
&
1-4\tau\delta N+2\tau \sum_{i=1}^N\chi_{\{|f_i(x)-y|< \delta\}}
\end{pmatrix}.
\end{align*}
Denote by $S$ the union of the interiors of all the strips, i.e.
\begin{align*}
S:=\big(\cup_{i=1}^N\{|f_i(x)-y|< \delta\}\big)\cup \big(\cup_{j=1}^M
\{|g_j(y)-x|< \delta\}\big).
\end{align*}
Then, obviously, a.\,e.\ outside $S$ we have
\begin{align*}
\nabla\phi=
\begin{pmatrix}
1-4\delta M\tau
& 0
\\
0&
1-4\delta N\tau
\end{pmatrix}.
\end{align*}
Thus, recalling that $M\delta,N\delta\leq 2\sqrt{|K|}$ (see
(\ref{control.N.M})),
\begin{equation}
\label{phi.det.outside.A}
\det\nabla \phi= 1-4\tau(\delta M+\delta N)+16\delta^2MN\tau^2
\geq 1-16\tau \sqrt{|K|}\quad \text{a.\,e.\ in $[0,1]^2\setminus S.$}
\end{equation}
Furthermore, assuming $\tau |K|^{1/2}\leq 1/32$, we have
\begin{equation}
\label{nabla.bounded.by.det.outside.A}
|\nabla \phi|\leq 2\det\nabla\phi\quad \text{a.\,e.\ in $[0,1]^2\setminus S.$}
\end{equation}
Trivially for all $(x,y)\in S$ we have
\begin{align*}
1\leq \sum_{i=1}^N\chi_{\{|f_i(x)-y|<
\delta\}}+\sum_{j=1}^M\chi_{\{|g_j(y)-x|< \delta\}}.
\end{align*}
Thus we get for a.\,e.\ $(x,y)\in S$, recalling that $|f'_i|,|g'_j|\leq 1$ and
using (\ref{control.intersections}),
\begin{align*}
\det\nabla \phi(x,y)&=\left[1-4\tau\delta M+2\tau
\sum_{j=1}^M\chi_{\{|g_j(y)-x|< \delta\}}\right]\left[1-4\tau\delta N+2\tau
\sum_{i=1}^N\chi_{\{|f_i(x)-y|< \delta\}}\right]\\
&~~~-4\tau^2\left[\sum_{j=1}^Mg_j'(y)\chi_{\{|g_j(y)-x|<
\delta\}}\right]\left[\sum_{i=1}^Nf_i'(x)\chi_{\{|f_i(x)-y|<
\delta\}}\right]
\\
&= 1+2\tau\left[
\left(\sum_{j=1}^M\chi_{\{|g_j(y)-x|< \delta\}}
+\sum_{i=1}^N\chi_{\{|f_i(x)-y|< \delta\}}\right)-2\delta (M+N)\right]\\
&~~~
+4\tau^2
\Bigg(
\left[\sum_{j=1}^M \chi_{\{|g_j(y)-x|<
\delta\}}\right]\left[\sum_{i=1}^N \chi_{\{|f_i(x)-y|<
\delta\}}\right]
\\
&~~~~~~~~~~~~~~~
-\left[\sum_{j=1}^Mg_j'(y)\chi_{\{|g_j(y)-x|\leq
\delta\}}\right]\left[\sum_{i=1}^Nf_i'(x)\chi_{\{|f_i(x)-y|<
\delta\}}\right]\\
&~~~~~~~~~~~~~~~+4\delta M\delta N-2 \delta N
\sum_{j=1}^M\chi_{\{|g_j(y)-x|< \delta\}}
-2 \delta M
\sum_{i=1}^N\chi_{\{|f_i(x)-y|< \delta\}}
\Bigg)
\\
&\geq 1+2\tau\left[1-2\delta (M+N)\right]-8\tau^2\delta(M+N)
\\
&= 1+2\tau[1-2\delta (M+N)-8\tau\delta(M+N)].
\end{align*}
Recalling (\ref{control.N.M}), we may choose the upper bound $c$ for $|K|$ and $\sqrt{|K|}\tau$ in the assumptions of the proposition small enough so that
\begin{align*}
1+2\tau[1-2\delta (M+N)-8\tau\delta(M+N)]
\geq 1+\tau.
\end{align*}
Combining the last two estimates gives
\begin{equation}
\label{phi.det.A}
\det\nabla \phi\geq 1+\tau \quad \text{a.\,e.\ in $S.$}
\end{equation}
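Concerning the choice of $c$ here: since $\delta(M+N)\leq 4\sqrt{|K|}$ by (\ref{control.N.M}), one admissible (by no means optimal) choice is to require $|K|\leq 1/1024$ and $\sqrt{|K|}\,\tau\leq 1/128$, since then
\begin{align*}
2\delta (M+N)+8\tau\delta(M+N)\leq 8\sqrt{|K|}+32\tau\sqrt{|K|}\leq \frac{1}{4}+\frac{1}{4}=\frac{1}{2}.
\end{align*}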
Since $K\subset S$ (see (\ref{strips.covering.K})), the inequality (\ref{phi.det.A})
obviously holds true a.\,e.\ in $K.$ Combining (\ref{phi.det.outside.A}) and the
last estimate gives
\begin{align*}
\det \nabla \phi\geq 1-16\sqrt{|K|}\tau\quad
\text{a.\,e.\ on $[0,1]^2.$}
\end{align*}
Taking into account the bound $|\nabla \phi-\operatorname{Id}|\leq C\tau$ (which follows from \eqref{prop.W.1.p} with $p=\infty$), we also obtain
\begin{align*}
|\nabla \phi| \leq C\det\nabla \phi \quad \text{a.\,e.\ in }S,
\end{align*}
which together with (\ref{nabla.bounded.by.det.outside.A}) proves \eqref{prop.nabla.phi.bounded.by.det}.
\textit{Step 5.} We finally prove that $\phi$ is bi-Lipschitz from $[0,1]^2$ to $[0,1]^2.$
Taking $\tau\sqrt{|K|}\leq 1/32$, the bound $\det \nabla \phi\geq 1-16\sqrt{|K|}\tau$ implies
\begin{align*}
\det\nabla \phi\geq 1/2>0\quad \text{a.\,e.\ in $[0,1]^2.$}
\end{align*}
By (\ref{phi.2.boundary.horiz}), \eqref{prop.W.1.p} for $p=\infty$, \eqref{prop.derivees.phi.2}, and the analogous estimates for $\phi_1$, we deduce that
\begin{align*}\phi|_{\partial (0,1)^2}:\partial (0,1)^2\rightarrow \partial (0,1)^2\quad\text{is bi-Lipschitz.}
\end{align*}
Hence, classical results based on the degree theory show that $\phi:[0,1]^2\rightarrow [0,1]^2$ is also bi-Lipschitz (see e.\,g.\ Theorem 2 in the work of Ball \cite{Ball2}; note that $\phi|_{\partial (0,1)^2}$ can be easily extended to a homeomorphism from $[0,1]^2$ to $[0,1]^2$).
\end{proof}
\begin{appendix}
\section{Decomposition of $C^{1,1}$-domains into subdomains that are $C^{1,1}$-equivalent to the unit square}
The following lemma has been used in Section \ref{sub.sect.Simplification.of.the.domain}.
\begin{lemma}\label{lemma:decomposition.domain}
Let $\Omega\subset \mathbb{R}^2$ be a bounded domain with boundary of class $C^{1,1}$. Then there exists a finite number of open sets $A_i\subset \Omega$ such that:
\begin{itemize}
\item For every $A_i$ there exists a $C^{1,1}$-diffeomorphism which maps $\overline{A_i}$ to the unit square $[0,1]^2$.
\item The sets $A_i$ are pairwise disjoint.
\item We have $\overline{\Omega}=\bigcup_i \overline{A_i}$.
\end{itemize}
\end{lemma}
\begin{proof}
We first summarize the proof.
\begin{itemize}
\item First we reduce the problem to the case of a polygon: We approximate $\Omega$ from inside by a polygon whose boundary follows closely the boundary $\partial\Omega$ (in a sense to be made explicit below). The outer part may then be decomposed as shown in Figure \ref{ApproximateBoundaryByPolygon}. The outer quadrilaterals (which have one curved side) are easily mapped by a $C^{1,1}$-diffeomorphism to the unit square. It then just remains to treat the case of the (inner) polygon (which possibly contains holes).
\item Any polygon (including polygons with holes) may be decomposed into a finite number of triangles, so the problem is reduced to the case of a triangle. Since every triangle may be mapped by an affine map to the reference triangle with corners $(0,0)$, $(1,0)$, and $(0,1)$, we only need to consider the case of the reference triangle.
\item The reference triangle may be covered by three convex quadrilaterals as shown in Figure \ref{CoverTriangle}.
Applying Lemma \ref{lemma:bilinear}, each of these quadrilaterals can be mapped by a (bi-linear) $C^{\infty}$-diffeomorphism to the unit square.
\end{itemize}
It only remains to make explicit the first point. We split its proof into three steps.
\textit{Step 1.} We can find (see e.\,g.\ Theorem 1.2.6 in \cite{KrantzParks}) a function $F:\mathbb{R}^2 \rightarrow \mathbb{R}$ with the following properties:
\begin{itemize}
\item $F$ is of class $C^{1,1}$,
\item we have $\{F=0\}=\partial\Omega$ and $\{F>0\}=\Omega$,
\item $\nabla F$ is nonzero on $\partial\Omega$.
\end{itemize}
Take $\delta>0$ small enough and construct a closed polygon $P$ whose boundary is contained in the set $\{\delta/2<F<2\delta\}$, whose vertices lie on the curve $\{F=\delta\}$, and whose edges have length of order $\delta$. Note that for $\delta$ small enough, such a polygon exists. Connect every vertex of $P$ to the boundary $\partial \Omega$ with a line segment pointing in the direction $-\nabla F$ (the green line segments in Figure \ref{ApproximateBoundaryByPolygon}). Let us denote the open regions which are bounded by the polygon $P$ on one side and these line segments and $\partial\Omega$ on the other sides as $B_i$ (see Figure \ref{ApproximateBoundaryByPolygon}). We will show in the remaining two steps that every $\overline{B_i}$ is $C^{1,1}$-diffeomorphic to the unit square $[0,1]^2.$
\textit{Step 2.} Fix some $i$ and let $z_1,z_2,w_1,w_2$ be the four "vertices" of $B_i$
with $z_1,z_2\in \{F=\delta\}$ and $w_1,w_2\in \partial \Omega$ ordered such that
$$[z_1,z_2]\cup [z_1,w_1]\cup [z_2,w_2]$$ is the "non-curved" boundary of $B_i$ (see Figure \ref{ApproximateBoundaryByPolygon}).
Moreover, for $\delta>0$ small enough, the "curved" boundary of $B_i$ is (up to a rotation) the graph of some $C^{1,1}$ function.
Since $F(z_1)-F(z_2)=0$, the mean value theorem gives
$$\nabla F(z)\cdot (z_1-z_2)=0$$ for some
$z\in [z_1,z_2]$.
Hence, since $F\in C^{1,1}$, we deduce that, for $j=1,2$,
$$|\nabla F(z_j)\cdot (z_1-z_2)|=|(\nabla F(z_j)-\nabla F(z))\cdot (z_1-z_2)|\leq C(F)|z_1-z_2|^2.$$
Since $|z_1-z_2|$ is of order $\delta$ and $|\nabla F(z_j)|$ is of order $1$ (since $\nabla F\neq 0$ on $\partial \Omega$), we get that the cosine of the angle between $z_1-z_2$ and $z_1-w_1$, as well as between $z_2-z_1$ and $z_2-w_2$, is of order $\delta$.
This implies in particular that the two lines
$$l_1(s):=z_1+s(w_1-z_1)/|w_1-z_1|\quad \text{and}\quad l_2(t):=z_2+t(w_2-z_2)/|w_2-z_2|$$
can only intersect at a point at distance at least of order $1$ from the segment $[z_1,z_2]$.
Recalling that the "curved" boundary of $B_i$ is at distance of order $\delta$ from the segment $[z_1,z_2]$, we can therefore find some $\overline{s}>|w_1-z_1|$ and $\overline{t}>|w_2-z_2|$ such that the convex hull of the points
$$\{z_1,z_2,z_3:=l_2(\overline{t}),z_4:=l_1(\overline{s})\},$$ which we denote by $P_i$ (see Figure \ref{ApproximateBoundaryByPolygon}),
is a convex quadrilateral containing $B_i$ and whose boundary consists of
$$[z_1,z_2]\cup [z_2,z_3]\cup[z_3,z_4]\cup[z_4,z_1].
$$
\textit{Step 3.}
Using Lemma \ref{lemma:bilinear}, there exists a (bi-linear) $C^{\infty}$-diffeomorphism $\varphi_i$ mapping $P_i$ to $[0,1]^2$. Up to a rotation we have that $\varphi_i(\overline{B_i})$ is the region delimited by
$$[(0,0),(1,0)]\cup [(0,0),(0,h_i(0))]\cup [(1,0),(1,h_i(1))]\cup \{(x,h_i(x)):x\in [0,1]\}$$
for some $h_i\in C^{1,1}([0,1],(0,\infty))$.
Finally the map
$$(x,y)\mapsto (x,h_i(x)y)$$ is easily seen to be a $C^{1,1}$-diffeomorphism from $[0,1]^2$ to $\varphi_i(\overline{B_i})$, which concludes the proof.
\end{proof}
\begin{figure}
\caption{Reduction to the case of a polygon.}
\label{ApproximateBoundaryByPolygon}
\end{figure}
\begin{figure}
\caption{Covering of a triangle by three convex quadrilaterals.}
\label{CoverTriangle}
\end{figure}
\begin{definition} A map $\varphi:\mathbb{R}^2\rightarrow \mathbb{R}^2$ is called bi-linear if
for every $x,y\in \mathbb{R}$ the functions $\varphi(x,\cdot)$ and $\varphi(\cdot,y)$ are affine.
Equivalently a map $\varphi$ is bi-linear if there exist eight constants $a_1,a_2,a_3,a_4,b_1,b_2,b_3,b_4\in \mathbb{R}$ such that
$$\varphi(x,y)=(a_1xy+a_2x+a_3y+a_4,b_1xy+b_2x+b_3y+b_4),\quad \text{for $(x,y)\in \mathbb{R}^2$}.$$
\end{definition}
In the previous proof we used the following elementary lemma.
\begin{lemma}\label{lemma:bilinear} For any closed convex quadrilateral $P\subset \mathbb{R}^2$ there exists a bi-linear map $\varphi\in \operatorname{Diff}^{\infty}([0,1]^2;P)$.
\end{lemma}
\begin{proof} Denote by $z_1,z_2,z_3,z_4$ the vertices of $P$ ordered so that $\partial P=[z_1,z_2]\cup [z_2,z_3]\cup [z_3,z_4]\cup [z_4,z_1].$
With no loss of generality, using an affine map (which preserves the convexity; note that the composition of a bi-linear map with an affine map is still bi-linear), one can assume that
\begin{align*}
z_1=(0,0),\quad z_2=(1,0)\quad \text{and}\quad z_4=(0,1).
\end{align*}
Let us denote $z_3=(\alpha,\beta).$ Since $P$ is convex we easily deduce that
\begin{equation}\label{condition.convex}\alpha>0,\quad \beta>0\quad \text{and}\quad\alpha+\beta>1.\end{equation}
We claim that the map
\begin{align*}
\varphi(x,y)=((\alpha-1)xy+x,(\beta-1)xy+y)
\end{align*}
has all the wished properties.
First using (\ref{condition.convex}), one easily deduces that $\varphi$ is one-to-one in $[0,1]^2$ and that $\det\nabla \varphi>0$ in $[0,1]^2$. Thus $\varphi\in \operatorname{Diff}^{\infty}([0,1]^2;\varphi([0,1]^2)).$
Finally since
\begin{align*}
\varphi(0,0)=z_1,\quad \varphi(1,0)=z_2,\quad \varphi(1,1)=z_3\quad \text{and}\quad \varphi(0,1)=z_4,
\end{align*}
we have $\varphi(\partial [0,1]^2)=\partial P$ and hence by degree theory $\varphi([0,1]^2)=P$, proving the lemma.
\end{proof}
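For the reader's convenience, the determinant appearing in the above proof can be computed explicitly:
\begin{align*}
\det\nabla \varphi(x,y)&=\big((\alpha-1)y+1\big)\big((\beta-1)x+1\big)-(\alpha-1)(\beta-1)xy\\
&=1+(\beta-1)x+(\alpha-1)y,
\end{align*}
which is affine in $(x,y)$; its minimum over $[0,1]^2$ is therefore attained at a corner of the square, where it takes the values $1$, $\alpha$, $\beta$, and $\alpha+\beta-1$, all positive by \eqref{condition.convex}.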
\end{appendix}
\end{document} |
\begin{document}
\title{Sensitivity of Codispersion to Noise and Error in Ecological and Environmental Data}
\begin{abstract}
Codispersion analysis is a new statistical method developed to assess spatial covariation between two spatial processes that may not be isotropic or stationary. Its application to anisotropic ecological datasets has provided new insights into mechanisms underlying observed patterns of species distributions and the relationship between individual species and underlying environmental gradients. However, the performance of the codispersion coefficient when there is noise or measurement error ("contamination") in the data has been addressed only theoretically. Here, we use Monte Carlo simulations and real datasets to investigate the sensitivity of codispersion to four types of contamination commonly seen in many real-world environmental and ecological studies. Three of these involved examining codispersion of a spatial dataset with a contaminated version of itself. The fourth examined differences in codispersion between plants and soil conditions, where the estimates of soil characteristics were based on complete or thinned datasets. In all cases, we found that estimates of codispersion were robust when contamination, such as data thinning, was relatively low (<15\%), but were sensitive to larger percentages of contamination. We also present a useful method for imputing missing spatial data and discuss several aspects of the codispersion coefficient when applied to noisy data to gain more insight about the performance of codispersion in practice.
\end{abstract}
\noindent {\bf Keywords}: Codispersion coefficient; Codispersion map; Imputation; Kriging; Measurement error; Missing observations; Spatial Noise.
\section{Introduction}\label{sec:intro}
Spatial associations are a fundamental aspect of most ecological and environmental data. Although accounting for spatial covariation has become routine in ecological data analysis \citep{Fortin:2005}, ecologists and environmental scientists have been slower to appreciate and account for anisotropic patterns and processes \citep[but see, e.g.,][] {Ellison:2014}. Codispersion \citep{Vallejos:2015} measures lag-dependent spatial covariation in two or more spatial processes, which may be anisotropic. Codispersion recently has been used to analyze ecological data and provide new insights into potential ecological processes that underlie observed patterns in co-occurrence between pairs of species \citep{Buckley:2016a} and in relationships between attributes of individual species and the underlying environment \citep{Buckley:2016b}.
Applications of codispersion analysis that have been published to date have assumed either that there are no errors in the datasets or that any errors that are present would have no effect on the analysis. These assumptions are clearly unrealistic. The goal of this paper is to better understand how sensitive the codispersion coefficient is to different types of noise and measurement error ("contamination") in the analyzed data. We approach this goal by using Monte Carlo simulation studies to examine several classes of noise that we would expect to occur in datasets or images analyzed using codispersion.
We consider the effect on codispersion of simple observation error, in which noise is added to a fixed number of random points (or pixels) in a dataset (or image) either as white noise (spatially independent and identically distributed) or as a spatially-dependent process. We also consider images with missing values distributed either randomly throughout the sample space or in clusters (e.g., clouds obscuring a large section of a remotely-sensed image) and present different algorithms for interpolation prior to calculation of codispersion. Using bivariate data for forest tree-environment relationships, we evaluate the robustness of codispersion under different levels of error introduced into the environmental data (kriged surfaces derived from complete or "thinned" datasets mimicking those with missing values). In most cases, the effect of these sources of noise and error are tested in one of two ways: either by calculating the codispersion between the original dataset (image) and a contaminated version of itself (codispersion values are correlated to quantify the effect of the noise) or, in the case of the species-environment relationships, by calculating the codispersion between the two original datasets and then comparing the output to the codispersion calculated between the tree data and contaminated environmental data.
In Section \ref{Methods}, we describe the different types of contamination and observation error that we added to both real and simulated datasets. We also describe the method we used for imputing missing data. Results are presented in Section \ref{Results}, and discussed in Section \ref{Discussion}. Technical details on simulation and imputation algorithms are given in the Appendix.
\section{Methods}\label{Methods}
In spatial modeling and time series, several types of contamination can be specified \citep{Anselin:1995, Fox:1972}. Here, we consider types of contamination frequently observed in spatial data. Many of our examples are from data collected on forest trees. These data will be familiar to many ecologists and environmental scientists, and to date, codispersion analysis used in ecological settings has been applied primarily to forested ecosystems.
\begin{enumerate}
\item[\bf 1.] {\bf Salt-and-pepper noise on an image}: Salt-and-pepper noise has been used widely in image processing and computational statistics to represent real distortions \citep{Huang:2010} and to generate different scenarios via Monte Carlo simulation \citep{McQuarrie:2003}.
Assume that $X$ is an image and suppose that $X$ follows a zero mean normal distribution with variance $\sigma^2.$ Then we consider additive noise following a zero mean normal distribution with variance $\tau^2$ such that $\tau^2 \gg \sigma^2.$ The contamination is located randomly in space such that a small percentage of observations are corrupted with a probability $\delta.$ Specifically,
\begin{equation}\label{cont1}
X\sim (1-\delta)\mathcal{N}(0,\sigma^2)+\delta \mathcal{N}(0,\tau^2).
\end{equation}
Preliminary experiments were introduced by \cite{Vallejos:2015}, but here we present a more extensive study. The contamination scheme was generated by Monte Carlo simulation according to \eqref{cont1}. We considered $\sigma^2=1$, $\tau^2=1, 5, 10,$ and percentages of contamination $\delta=0.05, 0.1, 0.25$. We conjecture that the codispersion coefficient is robust for $\delta\leq 0.05$; a short simulation sketch of this contamination scheme is given after this list.
\begin{figure}
\caption{Aerial photograph of a forest stand in Massachusetts, USA (a), its salt-and-pepper contaminated versions for $\delta=0.05$, $0.10$, and $0.25$ (c), (e), (g), and the corresponding perspective plots of the gray intensities (b), (d), (f), (h).}
\label{fig:gray_image2}
\end{figure}
In Figure \ref{fig:gray_image2}(a) we illustrate an aerial photograph of a forest stand in Massachusetts, USA. Figures \ref{fig:gray_image2} (c),(e), and (g) are contaminated versions of the original one when $\delta=\{0.05, 0.10, 0.25\}$. The corresponding perspective plots shown in Figure \ref{fig:gray_image2}(b),(d),(f), and (h) depict the effect of contamination on the gray intensities. The greater the contamination, the greater the dispersion as is observed in the z-axis of the three dimensional scatter plots displayed in Figure \ref{fig:gray_image2}.
We compared codispersion calculated for the original image to that calculated for the contaminated images. In addition to the reference image shown in Figure \ref{fig:gray_image2}(a) we considered other aerial images. Figure \ref{fig:image_reference} displays four images with different textures related to the same forest stand in Massachusetts. The codispersion maps of these images are presented in the supplementary material for this paper.
\begin{figure}
\caption{Four aerial images with different textures of the same forest stand in Massachusetts.}
\label{fig:image_reference}
\end{figure}
\item[\bf 2.] {\bf Salt-and-pepper on Dependent Processes}:
\cite{Gneiting:2010} extended the well known Mat\'ern class of covariance functions to a multivariate random field. In particular, a bivariate spatial process $(X(\bm s), Y(\bm s))^{\top}$, where $\bm s\in D\subset \mathbb{R}^2$, has a Gaussian distribution with mean vector zero and a matrix-valued covariance function
\begin{equation}\label{eq:bivariate}
\left(
\begin{matrix}
C_{11}(\bm h) & C_{12}(\bm h)\\
C_{21}(\bm h) &C_{22}(\bm h)
\end{matrix}
\right),
\end{equation}
where $\bm h \in D$, $C_{ii}(\bm h)=\sigma_i^2M(\bm h|\nu_i,a_i)$ for $i=1,2$, $C_{12}=C_{21}=\rho_{12}\sigma_1\sigma_2M(\bm h|\nu_{12},a_{12})$,
$$M(\bm h| \nu, a)=\frac{2^{1-\nu}}{\Gamma(\nu)}(a||\bm h||)^{\nu}K_{\nu}(a||\bm h||)$$ is the Mat\'ern correlation at lag $\bm h$, where $K_{\nu}$ is a modified Bessel function of the second kind and $a>0$. The parsimonious bivariate Mat\'ern model has the restriction
$$|\rho_{12}|\leq \frac{(\nu_1\nu_2)^{1/2}}{\frac{1}{2}(\nu_1+\nu_2)}.$$ The correlation between the spatial variables $X(\cdot)$ and $Y(\cdot)$ is controlled by the parameter $\rho_{12}$, which allows one to generate bivariate Gaussian spatial processes with different levels of dependence. Without loss of generality, it can be assumed that the mean of the bivariate process is zero, but the theory works for any bivariate process with mean $(\mu_1,\mu_2)^{\top}$. Any type of contamination can be applied to the generated dependent data; here we applied salt-and-pepper noise.
We generated dependent random fields from the bivariate Mat\'ern class of covariance functions described in Equation \eqref{eq:bivariate} by Monte Carlo simulation using the R package RandomFields \citep{Schlather:2017}. We then added the salt-and-pepper noise, varying the additional parameter $\rho_{12}$, which represents the known correlation between processes $X(\cdot)$ and $Y(\cdot)$.
\begin{figure}
\caption{One realization of size $512\times 512$ from a bivariate Gaussian process with correlation 0.8 (a), (b), and versions of (b) contaminated with salt-and-pepper noise with 5\%, 15\%, and 25\% contamination (c), (d), (e).}
\label{fig:dependent1}
\end{figure}
Figure \ref{fig:dependent1} shows one realization of size $512\times 512$ from a bivariate Gaussian process with correlation equal to 0.8 (images (a) and (b)). Figure \ref{fig:dependent1} (c),(d), and (e) show versions of (b) contaminated with salt-and-pepper noise with the percentage of contamination equal to 5\%, 15\%, and 25\%, respectively. Because the Gaussian process is stationary, images (a) and (b) look very smooth and regular, and any correlation between them (if it exists) is difficult to observe in the printed images.
\item[\bf 3.] {\bf Missing Observations at Random Locations}: We used the salt-and-pepper scheme to intentionally delete $n$ observations at random. We first defined the percentage of contamination ($\delta$) and then deleted that many observations from the dataset; in practice we replaced the observations at the randomly selected locations with NA (see the sketch after this list). The main feature of these missing observations is that they are spatially independent of one another, but for the subsequent data analysis they remain fixed. The imputation algorithm described in the Appendix was not applied here because the codispersion calculations are not affected when the percentage of contamination $\delta$ is small.
\begin{figure}
\caption{Nine contaminated versions of the original image in Figure \ref{fig:gray_image2}(a) with missing observations at random locations. Columns: percentage of contamination (5\%, 15\%, 25\%); rows: block size of contaminated pixels ($15\times15$, $30\times30$, $60\times60$). Contaminated pixels are colored in white.}
\label{fig:missing}
\end{figure}
In Figure \ref{fig:missing} we illustrate the missing-at-random scheme with nine contaminated versions of the original image shown in Figure \ref{fig:gray_image2}(a). The columns show the effect of increasing the percentage of contamination (5\%, 15\%, and 25\%, respectively), and the rows depict the effect of increasing the block size of contaminated pixels ($15\times15$, $30\times30$, and $60\times60$, respectively). The contaminated pixels have been colored in white. NAs were simply ignored in the computation of the codispersion coefficients; note, however, that for large gaps of missing observations the computation of the codispersion coefficient is affected for those directions $\bm h$ such that $||\bm h||$ is less than the maximum diameter of the missing block.
\item[\bf 4.] {\bf Gaps Resulting from Clusters of Missing Observations}: Missing values may be clustered, for example, either because of local difficulties in sampling or because large sections of an image are obscured by clouds or shadows. We simulated clustered missing observations for the image shown in Figure \ref{fig:gray_image2}(a), given three different pixel sizes for the contaminated block: $200\times 200$, $400\times 400$, and $800\times 800$ (Figure \ref{fig:missing_gaps}). We used simple clustered geometries (squares) for ease of computation. The difference between the previous type of contamination and this one is that in the former, the contamination consisted of several blocks of small size. Here we introduced just one gap containing a large number of pixels which, in Figure \ref{fig:missing_gaps}, is located for illustrative purposes in the center of the image. However, in our simulations and analysis, we randomized both the size of the missing block and its location.
\begin{figure}
\caption{Large gaps of missing values introduced into the image of Figure \ref{fig:gray_image2}(a), with blocks of size $200\times 200$, $400\times 400$, and $800\times 800$ located, for illustration, in the center of the image.}
\label{fig:missing_gaps}
\end{figure}
To compute codispersion coefficients for datasets with such large blocks of missing data, we needed to fill the missing gaps (impute missing data) prior to computing the codispersion coefficient. We address this issue in two steps.
First, the image with a missing gap is represented by a first-order spatial autoregressive process. The fitting of the parameters of the models is done via least squares estimation following the guidelines given by \cite{Allende:2001}. This estimation method was studied by \cite{Ojeda:2010} and found to yield an approximated image $\widehat{Z}$ of the original one $X$ (see Algorithm \ref{alg:alg1} in the Appendix).
Second, to predict the values of the process in the locations belonging to the missing block, a reconstruction is made by applying Algorithm \ref{alg:alg1} to the four closest blocks to the missing gap, as illustrated in Figure \ref{fig:prediction}. This prediction scheme is summarized in Algorithm \ref{alg:pred} (Appendix). In simple terms, the first step represents the image intensity by an autoregressive process that assumes that the intensity of any pixel is a weighted average of the intensity of the surrounding pixels. This is a model-based alternative to the average or median commonly computed using the intensities of a moving window across the image. The second step predicts the missing values using similar autoregressive models to represent the surrounding blocks. The predicted value of a pixel belonging to the missing block is a weighted average where the weights are proportional to the distance from the missing pixel to the surrounding blocks.
\item[\bf 5.] {\bf Sampling Error}: Spatial data often are sampled from a kriged surface, which itself is generated from a set of field observations. The information in the kriged surface is a function of the number of observations and the smoothing parameter of the covariance function \citep{Minasny:2005}. For a pair of spatial point processes $X(\cdot)$ and $Y(\cdot)$, where the number of observations in $X(\cdot)$ and $Y(\cdot)$ differed by several orders of magnitude, we generated a kriged surface from $Y(\cdot)$ using either all or two random ("thinned") subsets of forest plot data (tree species locations and soil chemistry variables), containing respectively 90\% and 80\% of the original soil chemistry data \citep{Minasny:2005}. We then sampled the kriged surfaces to obtain predicted values $\hat{Y}(\cdot)$ for each of the sampled points (tree locations) in $X(\cdot)$.
To assess sampling error, we used data from plants and soils collected in the 50-ha forest dynamics plot on Barro Colorado Island, Panam\'a \citep{Condit:1998, Hubbell:1998, Hubbell:2005}. Of the 299 plant species mapped, identified, and measured every five years in this plot, we used six: \emph{Alseis blackiana, Oenocarpus mapora, Hirtella triandra, Protium tenuifolium, Poulsenia armata, and Guarea guidonia} (Figure \ref{fig:BCIVeg}). The abundances of unique single-stemmed individuals of each of these six species ranged from 993 (\emph{Poulsenia armata}) to 7928 (\emph{Alseis blackiana}), and included species that had a range of positive, negative, and weak associations with measured soil variables \citep{John:2007}. Spatial locations and diameters of individual trees of each species (excluding dead individuals and individuals with more than one stem) were taken from the seventh (2010) semi-decadal census of the plot.
\begin{figure}
\caption{Distribution and size of the six species of trees growing in the 50-hectare plot at Barro Colorado Island, Panam\'a that we analyzed to assess the effect of sampling error.}
\label{fig:BCIVeg}
\end{figure}
Soil samples were collected on a 50-m lattice in 2005 with additional samples taken at finer spatial grains at alternate sampling stations \citep{John:2007}. Soil samples were analyzed for concentrations of 11 elements; we used only data for concentrations of calcium (Ca), phosphorus (P), and aluminium (Al), which had the highest loadings on the first three principal axes of a multivariate analysis (NMDS) on the complete soil dataset \citep{John:2007}. We used ordinary kriging in the geoR package \citep{Ribeiro:2001}, version 1.7-5.2, to fit a surface to the data for each soil element and predict its concentration at the location of each tree (Figure \ref{fig:BCISoils}). Variogram models (exponential, exponential, and wave for Ca, P, and Al, respectively) needed as input for the kriging function were fit to detrended (2\textsuperscript{nd}-order polynomial) data that had been Box-Cox transformed ($\lambda$ = 0.5, 1.0, and 1.0 for Ca, P, and Al, respectively); kriging was done on back-transformed data to which the trend had been added. Nuggets were estimated empirically for Ca and P, but the nugget for Al was fixed (following visual inspection of the empirical variogram) equal to 4,000.
\begin{figure}
\caption{Kriged surfaces of the concentration (mg/kg) of aluminum (Al; top), calcium (Ca; center), and phosphorus (P; bottom) in the 50-hectare plot at Barro Colorado Island, Panam\'a. Contours were estimated for a regular grid (5-m spacing) based on data from samples taken at the individual points shown on the plots.}
\label{fig:BCISoils}
\end{figure}
\end{enumerate}
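To make the salt-and-pepper and missing-at-random schemes above concrete, the following minimal base-R sketch (for illustration only; it is not the code used for the analyses in this paper) contaminates a synthetic image according to \eqref{cont1} and thins a copy of it at random by replacing a fraction $\delta$ of the pixels with NA.
\begin{verbatim}
# Sketch of the salt-and-pepper contamination in Eq. (cont1) and of the
# missing-at-random deletion scheme; parameter values are illustrative.
set.seed(1)
n <- 128
sigma2 <- 1; tau2 <- 10; delta <- 0.10
X <- matrix(rnorm(n * n, mean = 0, sd = sqrt(sigma2)), n, n)

# Salt-and-pepper: replace a random fraction delta of the pixels by
# N(0, tau2) noise, with tau2 >> sigma2.
salt_pepper <- function(X, delta, tau2) {
  idx <- sample(length(X), size = round(delta * length(X)))
  X[idx] <- rnorm(length(idx), mean = 0, sd = sqrt(tau2))
  X
}

# Missing at random: set a random fraction delta of the pixels to NA.
thin_at_random <- function(X, delta) {
  X[sample(length(X), size = round(delta * length(X)))] <- NA
  X
}

Xc <- salt_pepper(X, delta, tau2)
Xm <- thin_at_random(X, delta)
c(sd(X), sd(Xc), mean(is.na(Xm)))  # contamination inflates the dispersion
\end{verbatim}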
\subsection{Availability of data and code}
BCI vegetation and soils data are available from
\url{http://ctfs.si.edu/webatlas/datasets/bci/}. Analyses were done using the R software system, version 3.3.1 \citep{R:2016}. The images and all the code used in this paper are available from (\url{blinded/ website})
\section{Results}\label{Results}
We used codispersion maps \citep{Buckley:2016a, Vallejos:2015} to explore the possible patterns and features caused by the introduction of noise and to evaluate the performance of the codispersion coefficient when the process was contaminated with one of the distortions described above. Recall that the generation of the noise is through statistical models that do not necessarily include a particular direction in space. The effects of specific directional contamination on codispersion were investigated by \cite{Vallejos:2015}.
The only effect observed when the forest image was contaminated with salt-and-pepper noise (Figure \ref{fig:gray_image2}) was a trend of decreasing codispersion between the original and contaminated images with an increase in the percentage of contamination (Figure \ref{fig:salt and pepper}). In the case of the dependent processes generated by a Gaussian process with covariance matrix as in \eqref{eq:bivariate}, we plotted the codispersion maps between the original and contaminated images displayed in Figure \ref{fig:dependent1}. The salt-and-pepper contamination caused a complete loss of correlation between the two images, which were originally correlated by 0.8 (Figure \ref{fig:dependent2}). A decrease in codispersion between the original and contaminated images was also observed when noise was introduced through missing observations at random locations or as the missing block size increased (Figure \ref{fig:Missing Observation}).
Figure \ref{fig:missing_gaps} illustrates how we introduced large gaps of missing values in the center of the reference image shown in Figure \ref{fig:gray_image2}(a). Before computing the codispersion map, we imputed the missing data (Algorithm \ref{alg:pred} in the Appendix). Although the performance of such algorithms depends strongly on the size of the block of missing observations, the imputation is built from the spatial information contained in the nearest neighboring blocks (Algorithm \ref{alg:alg1} in the Appendix). The spatial autoregressive lags in the AR-2D process are fixed once the order of the process is chosen; in this case, three neighbors were considered in a strongly causal set to guarantee an infinite moving average representation of the process. The images filled by the imputation algorithm are shown in Figure \ref{fig:gap missing}(d)-(f). The filled areas are smooth in terms of texture and have a smaller variance. Visually, the imputation of the largest missing block looks different from the rest of the image; for small missing blocks it is difficult to see the imputed values. From Figure \ref{fig:gap missing}(g)-(h), we observed that Algorithm \ref{alg:pred} was able to recover valuable information and that the codispersion between the original and imputed images was in all cases close to one.
Finally, the codispersion between tree species' diameters (for the six species shown in Figure \ref{fig:BCIVeg}) and the three soil elements (Figure \ref{fig:BCISoils}) sampled in the Barro Colorado Island plot at three levels of data 'thinning' (none, 10\% and 20\%) showed that codispersion was robust to this form of contamination. Only the results for the most abundant (Figure \ref{fig:specie1}) and the least abundant (Figure \ref{fig:specie5}) species are shown.
\begin{figure}
\caption{Codispersion between the original forest image of Figure \ref{fig:gray_image2}(a) and its salt-and-pepper contaminated versions for increasing percentages of contamination.}
\label{fig:salt and pepper}
\end{figure}
\begin{figure}
\caption{Codispersion maps between the original and contaminated images of the bivariate Gaussian process shown in Figure \ref{fig:dependent1}.}
\label{fig:dependent2}
\end{figure}
\begin{figure}
\caption{Codispersion between the original image and the versions with missing observations at random locations shown in Figure \ref{fig:missing}.}
\label{fig:Missing Observation}
\end{figure}
\begin{figure}
\caption{Images with large gaps of missing values filled by the imputation algorithm (d)-(f), and codispersion between the original and imputed images (g)-(h).}
\label{fig:gap missing}
\end{figure}
\begin{figure}
\caption{Codispersion between tree diameters of the most abundant species (\emph{Alseis blackiana}) and the three kriged soil variables at the three levels of data thinning (none, 10\%, and 20\%).}
\label{fig:specie1}
\end{figure}
\begin{figure}
\caption{Codispersion between tree diameters of the least abundant species (\emph{Poulsenia armata}) and the three kriged soil variables at the three levels of data thinning (none, 10\%, and 20\%).}
\label{fig:specie5}
\end{figure}
\section{Discussion}\label{Discussion}
The methods and examples developed in this paper improve our understanding of the behavior of the codispersion coefficient when data have been contaminated. The codispersion coefficient appears to be robust for small percentages of contamination (<15\%), but contamination always leads to an underestimation of the codispersion between the datasets. As the percentage of contamination increases, the codispersion decreases in all directions on the plane. We note that the types of noise considered in this paper did not affect the codispersion in any particular direction(s). Although the performance of codispersion for directional noise was explored by \cite{Vallejos:2015,Vallejos:2016}, directional noise has not yet been observed in real datasets.
When applied to environmental data, codispersion has been shown to be useful for describing scales of covariation in two or more variables across complex spatial gradients \citep[e.g.,][]{Buckley:2016a, Buckley:2016b}. Our ability to detect such spatial patterns depends on the grain of spatial variation in the data and how this compares to the lag sizes used in the codispersion analysis. For example, the complete loss of correlation between the two images in Figure \ref{fig:dependent1} under only a small degree of contamination highlights the importance of considering the spatial grain of the datasets relative to that of the noise-inducing processes. The coarser-grained spatial pattern in the forest images is retained, even under contamination, whereas the spatial dependence in the images in Figure \ref{fig:dependent1} is at a smaller grain than the extent of the image, which is relatively heavily disturbed by the salt-and-pepper noise.
The imputation algorithm described in the Appendix seems to be a promising technique for handling blocks of missing observations. Several aspects of it are worth exploring in future research. These include the success of the algorithm in recovering missing observations as a function of the block size; how to select the number of neighbors to be considered in the AR-2D process; and the similarity between the texture of the imputed observations and the texture of the reference image. For simplicity and without loss of generality, the missing blocks we illustrated were square regions located in the center of the image, but Algorithm \ref{alg:pred} can certainly be extended to other types of regions located anywhere in the image.
More general aspects of codispersion analysis are in need of further exploration and testing. First, it will be of interest to study the results of codispersion analysis of rasterized images. This is because rasterization of images is widespread and common rasterization methods rarely, if ever, preserve the original spatial correlation of each process. The development of a new rasterization method that preserves better the spatial correlation within processes could follow \cite{Goovaerts:2010}. Second, the computation of codispersion maps is computationally expensive. Thus, the development of efficient algorithms capable of creating codispersion maps for large images is still needed.
\section{Image Imputation Algorithm}
The algorithm described below is based on the fact that it is possible to represent any image by using unilateral AR-2D processes \citep{Ojeda:2010}. The generated image is called a local AR-2D approximated image by using blocks.
Let $Z=\{ Z_{r,s}: 0\leq r\leq M-1,0\leq s\leq N-1 \}$ be an original image, and let
$X$ be the original image corrected by the mean. That is, $X_{r,s}=Z_{r,s}-\overline{Z},$
for all $0\leq r\leq M-1,$ $0\leq s\leq N-1,$ and for which $\overline{Z}$ is the mean of $Z$.
Following \cite{Bustos:2009}, assume that $X$ follows a causal AR-2D process of the form
\begin{equation*}
X_{r,s}=\phi _{1}X_{r-1,s}+\phi_{2}X_{r,s-1}+\phi_{3}X_{r-1,s-1}+\varepsilon_{r,s},
\end{equation*}
where $(r,s)\in \mathbb{Z}^{2}$, $\left( \varepsilon_{r,s} \right) _{(r,s)\in \mathbb{Z}^{2}}$ is Gaussian white noise, and $\phi_1,\phi_2$, and $\phi_3$ are the autoregressive parameters.
Let $4\leq k\leq \min (M,N).$ For simplicity we consider that the images to be processed are
arranged in such a way that the number of columns minus one and the number of rows minus one are multiples of $k-1$. Then we define the $(k-1)\times (k-1)$ block $\left(i_{b},j_{b}\right) $ of the image $X$ by
\begin{equation*}
B_{X}\left( i_{b},j_{b}\right) =\{ X_{r,s}: (k-1)(i_{b}-1)+1\leq r\leq (k-1)i_{b},(k-1)(j_{b}-1)+1\leq s\leq (k-1)j_{b}\},
\end{equation*}
for all $i_{b}=1,\cdots ,\left[ (M-1)/(k-1)\right] $ and for all $ j_{b}=1,\cdots ,\left[ (N-1)/(k-1)\right]$, where $[\cdot]$ denotes the integer part. The $M^{\prime }\times N^{\prime }$ approximated image $\widehat{Z}$, where $M^{\prime } =\left[(M-1)/(k-1)\right] (k-1)+1$ and $N^{\prime }=\left[ (N-1)/(k-1)\right] (k-1)+1$ can be obtained by the following algorithm.
\begin{algorithm}[h!!]
\caption{Approximated AR-2D Image.}\label{alg:alg1}
{\bf Input:} An original image $Z$ of size $M\times N$.
{\bf Output:} An approximated image $\widehat{Z}$ of size $M^{\prime}\times N^{\prime}.$
\begin{algorithmic}[1]
\State {\bf for each block $B_{X}\left( i_{b},j_{b}\right)$ do}
\State \hspace{3mm} Compute the least square (LS) estimators of $\phi _{1}$, $\phi _{2}$ and $\phi_3$ associated with block $B_{X}\left( i_{b},j_{b}\right).$
\State \hspace{3mm} Define $\widehat{X}$ on the block $B_{X}\left( i_{b},j_{b}\right) $ by
\begin{equation*}
\widehat{X}_{r,s}=\widehat{\phi }_{1}\left( i_{b},j_{b}\right) X_{r-1,s}+
\widehat{\phi }_{2}\left( i_{b},j_{b}\right) X_{r,s-1}+\widehat{\phi }_{3}\left( i_{b},j_{b}\right) X_{r-1,s-1},
\end{equation*}
\hspace{3mm} where $(k-1)(i_{b}-1)+1\leq r\leq (k-1)i_{b}$, $(k-1)(j_{b}-1)+1\leq s\leq (k-1)j_{b},$ and $\widehat{\phi }_{1}\left( i_{b},j_{b}\right)$,
\hspace{-4mm} $\widehat{\phi }_{2}\left( i_{b},j_{b}\right)$, and $\widehat{\phi }_{3}\left( i_{b},j_{b}\right)$
are the LS estimators of $\phi_1, \phi_2$ and $\phi_3$ respectively.
\State {\bf end for}
\State \hspace{3mm} The approximated image $\widehat{Z}$ of $Z$ is:
\begin{equation*}
\widehat{Z}_{r,s}=\widehat{X}_{r,s}+\overline{Z},\text{ \ \ }0\leq
r\leq M^{\prime }-1,0\leq s\leq N^{\prime }-1.
\end{equation*}
\State {\bf Return} $\widehat{Z}$.
\end{algorithmic}
\end{algorithm}
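As an illustration of the least-squares step in Algorithm \ref{alg:alg1}, the following minimal base-R sketch (for illustration only; the full algorithm additionally loops over all blocks and adds back the image mean) estimates the three AR-2D parameters on a single mean-corrected block and returns the corresponding fitted block.
\begin{verbatim}
# Minimal sketch of the LS fit in Step 2 of Algorithm 1 for one block B.
fit_ar2d_block <- function(B) {
  M <- nrow(B); N <- ncol(B)
  r <- 2:M; s <- 2:N
  y  <- as.vector(B[r, s])            # X_{r,s}
  x1 <- as.vector(B[r - 1, s])        # X_{r-1,s}
  x2 <- as.vector(B[r, s - 1])        # X_{r,s-1}
  x3 <- as.vector(B[r - 1, s - 1])    # X_{r-1,s-1}
  phi <- coef(lm(y ~ x1 + x2 + x3 - 1))   # LS estimates, no intercept
  Bhat <- B                               # first row/column kept as observed
  Bhat[r, s] <- phi[1] * B[r - 1, s] + phi[2] * B[r, s - 1] +
                phi[3] * B[r - 1, s - 1]
  list(phi = phi, Bhat = Bhat)
}
set.seed(2)
B <- matrix(rnorm(16 * 16), 16, 16)       # synthetic block for illustration
fit_ar2d_block(B)$phi
\end{verbatim}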
Now suppose that the image $Z$ has a rectangular block of missing values. Without loss of generality, assume that the rectangular block of missing values is of size $(K-1)\times(K-1)$. Furthermore, on each border, $X^{(l)}$, $l=1,2,3,4$, is defined as a block of information of $Z$ of size $K \times K$, as shown in Figure \ref{fig:prediction}.
\begin{figure}
\caption{The four $K\times K$ blocks $X^{(l)}$, $l=1,2,3,4$, surrounding the block of missing values, used in the prediction Algorithm \ref{alg:pred}.}
\label{fig:prediction}
\end{figure}
Also assume that for all $l$, $X^{(l)}$ is represented by an AR-2D model of the form
\begin{equation*}
X^{(l)}_{r,s}=\phi^{(l)} _{1}X^{(l)}_{r-1,s}+\phi^{(l)}_{2}X^{(l)}_{r,s-1}+\phi^{(l)}_{3}X^{(l)}_{r-1,s-1}+\varepsilon^{(l)}_{r,s},\quad l=1,2,3,4.
\end{equation*}
where $\phi^{(l)}_{1}$, $\phi^{(l)}_{2}$, and $\phi^{(l)}_{3}$ are estimated using the block $X^{(l)}$ for $l=1,2,3,4$, respectively. Then the prediction model is
\begin{equation*}
\widehat{X}^{(l)}_{r+i,s+j}=\left\{\begin{array}{lcc}
\widehat{\phi}^{(l)}_{1}\widehat{X}^{(l)}_{r+i-1,s+j}+\widehat{\phi}^{(l)}_{2}\widehat{X}^{(l)}_{r+i,s+j-1}+\widehat{\phi}^{(l)}_{3}\widehat{X}^{(l)}_{r+i-1,s+j-1} &;& (r+i,s+j)\not\in A^{(l)}\\
X^{(l)}_{r+i,s+j} &;& (r+i,s+j)\in A^{(l)}\end{array}\right.,
\end{equation*}
where $A^{(l)}$ is the index set for which $X^{(l)}$ is known and $i,j=1,\dots,K$. The prediction algorithm is as follows.
\begin{algorithm}[h!!]
\caption{Prediction Algorithm.}\label{alg:pred}
{\bf Input}: An image $Z$ with a missing block, and $K$.
{\bf Output}: Image $Z$ without missing values.
\begin{algorithmic}[1]
\State Get a sub-image $X$ of $Z$ of size $3K\times3K$, so that the missing data is in the center of $X$.
\State Get $X^{(l)}$, for $l=1,2,3,4$, and reverse the order of the rows in $X^{(3)}$ and the columns in $X^{(4)}$, i.e.\ $X^{(3)}_{i,j}=X^{(3)}_{K+1-i,j}$ and $X^{(4)}_{i,j}=X^{(4)}_{i,K+1-j}$.
\State Compute $\widehat{\phi}^{(l)}_{1}$, $\widehat{\phi}^{(l)}_{2}$ and $\widehat{\phi}^{(l)}_{3}$ for $l=1,2,3,4$.
\State Let $K_2=K$.
\While{$K_2>0$}
\For {$j=1$ {\bf until} $j=K-1$ }
\State Compute: \begin{eqnarray*}
X_{K+1,K+j} &=& \widehat{\phi}^{(1)}_{1}X_{K,K+j}+\widehat{\phi}^{(1)}_{2}X_{K+1,K+j-1}+\widehat{\phi}^{(1)}_{3}X_{K,K+j-1} \\
X_{K+j,K+1} &=& \widehat{\phi}^{(2)}_{1}X_{K+j-1,K+1}+\widehat{\phi}^{(2)}_{2}X_{K+j,K}+\widehat{\phi}^{(2)}_{3}X_{K+j-1,K} \\
X_{2K-1,K+j} &=& \widehat{\phi}^{(3)}_{1}X_{2K,K+j}+\widehat{\phi}^{(3)}_{2}X_{2K-1,K+j-1}+\widehat{\phi}^{(3)}_{3}X_{2K,K+j-1} \\
X_{K+j,2K-1} &=& \widehat{\phi}^{(4)}_{1}X_{K+j-1,2K-1}+\widehat{\phi}^{(4)}_{2}X_{K+j,2K}+\widehat{\phi}^{(4)}_{3}X_{K+j-1,2K}
\end{eqnarray*}
\Comment For those points that the estimation is repeated consider the average of both estimations. These points are obtained for $j=1$ and $j=K-1$.
\EndFor
\State Put $K_2=K_2-2$ and $K=K+1$.
\EndWhile
\State Replace the NA values of $Z$ by $X$.
\State {\bf Return} $Z$.
\end{algorithmic}
\end{algorithm}
\end{document} |
\begin{document}
{\noindent{\LARGE \textbf{Supplementary materials to the paper: \\Fully Bayesian binary Markov random field\\
models:
Prior specification and posterior\\ simulation}
\\}
{\Large \textsc{Petter Arnesen}}\\{\it Department of
Mathematical Sciences, Norwegian University of Science and
Technology}
\\{\Large \textsc{H\aa kon Tjelmeland}}\\{\it Department of
Mathematical Sciences, Norwegian University of Science and
Technology}
}
\renewcommand{\baselinestretch}{1.5} \small\normalsize
\renewcommand{\thesection}{S.\arabic{section}}
\renewcommand{\thesubsection}{\thesection.\arabic{subsection}}
\section{Proof of one-to-one relation between $\phi$ and $\beta$}
\label{sec:S1}
\begin{theorem}
Consider an MRF and constrain the $\phi$ parametrisation of the potential functions as described in
Definition 1. Then there is a one-to-one relation between
$\{\beta^\lambda;\lambda\in\mathcal{L}\}$
and $\{\phi^\lambda;\lambda\in\mathcal{L}\}$.
\end{theorem}
\begin{proof}We prove the theorem by
establishing recursive equations showing how to compute the $\beta^\lambda$'s from
the $\phi^\lambda$'s and vice versa.
Setting $x=\bold 1^\lambda$ for some $\lambda\in\mathcal{L}$ into the two
representations of $U(x)$ in (2) and (4), we get
\begin{equation*}
\sum_{\Lambda\in \mathcal{L}_m}V_{\Lambda}(\bold
1^{\lambda}_{\Lambda})=\sum_{\lambda^\star\in \mathcal{L}}\beta^{\lambda^\star}
\prod_{(i,j)\in \lambda^\star} \bold 1^{\lambda}_{\{(i,j)\}}.
\end{equation*}
Using (3) and Definition 1 this gives
\begin{equation}
\sum_{\Lambda\in \mathcal{L}_m}\phi^{\lambda\cap
\Lambda}=\sum_{\lambda^\star\subseteq \lambda}\beta^{\lambda^\star}.
\label{eq:equal}
\end{equation}
Splitting the sum on the left hand side into one sum over
$\Lambda\in\mathcal{L}_m^\lambda$ and one sum over
$\Lambda\in\mathcal{L}_m\setminus\mathcal{L}_m^\lambda$,
and using that $\lambda\cap \Lambda=\lambda$ for
$\Lambda\in \mathcal{L}_m^\lambda$ we get
\begin{equation*}
|\mathcal{L}_m^{\lambda}|\phi^{\lambda}+\sum_{\Lambda\in \mathcal{L}_m\setminus \mathcal{L}_m^\lambda}\phi^{\lambda\cap
\Lambda}=\sum_{\lambda^\star\subseteq \lambda}\beta^{\lambda^\star}.
\end{equation*}
Solving for $\phi^\lambda$ gives
\begin{equation}
\phi^{\lambda}=\frac{1}{|\mathcal{L}_m^{\lambda}|}\left
[\sum_{\lambda^\star\subseteq
\lambda}\beta^{\lambda^\star}-\sum_{\Lambda\in \mathcal{L}_m\setminus
\mathcal{L}_m^\lambda}\phi^{\lambda\cap \Lambda}\right].
\label{eq:claim}
\end{equation}
Clearly $|\lambda\cap\Lambda| < | \lambda|$ when
$\Lambda\in \mathcal{L}_m\setminus \mathcal{L}_m^\lambda$,
so \eqref{eq:claim} implies that we can compute all
$\{ \phi^\lambda;\lambda\in\mathcal{L}\}$ recursively from
$\{ \beta^\lambda;\lambda\in\mathcal{L}\}$. First
we can compute $\phi^\lambda$ for $|\lambda|=0$, i.e.
$\phi^\lambda=\phi^\emptyset = \beta^\emptyset/|\mathcal{L}_m|$, then
all $\phi^\lambda$ for which $|\lambda|=1$, thereafter
all $\phi^\lambda$ for which $|\lambda|=2$ and so on until
$\phi^\lambda$ has been computed for all $\lambda\in\mathcal{L}$.
Thus, $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$ is uniquely
specified by $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$.
Solving \eqref{eq:equal} with respect to $\beta^\lambda$ we get
\begin{equation}
\beta^{\lambda}=\sum_{\Lambda\in \mathcal{L}_m}\phi^{\lambda
\cap \Lambda}-\sum_{\lambda^\star \subset \lambda}\beta^{\lambda^\star},
\label{eq:claim2}
\end{equation}
and noting that clearly $|\lambda^\star|<|\lambda|$ when
$\lambda^\star \subset \lambda$ we correspondingly get
that $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$ can be
recursively computed from $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$.
One must first compute $\beta^\lambda$ for $|\lambda|=0$, i.e.
$\beta^\emptyset$, then all $\beta^\lambda$ for which
$|\lambda|=1$, thereafter all $\beta^\lambda$ for which
$|\lambda|=2$ and so on. Thereby $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$
is also uniquely specified by $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$
and the proof is complete.
\end{proof}
\section{Proof of translational invariance for $\beta$}
\label{sec:S2}
\begin{theorem}
An MRF $x$ defined on a rectangular lattice
$S=\{ (i,j);i=0,\ldots,n-1,j=0,\ldots,m-1\}$ with
torus boundary conditions and $\mathcal{L}_m$ given
in (5) is stationary if and only
if $\beta^{\lambda}=\beta^{\lambda\oplus (t,u)}$ for all
$\lambda \in \mathcal{L}$, $(t,u)\in S$. We then say
that $\beta^\lambda$ is translational invariant.
\end{theorem}
\begin{proof}
We first prove the only if part of
the theorem by induction on $|\lambda|$. Since $\emptyset
\oplus (t,u)=\emptyset$ we clearly have
$\beta^{\emptyset}=\beta^{\emptyset \oplus (t,u)}$ and thereby
$\beta^{\lambda}=\beta^{\lambda \oplus (t,u)}$ for $|\lambda|=0$.
Now assume all interaction parameters up to order $o$
to be translational invariant, i.e. $\beta^{\lambda}=\beta^{\lambda \oplus
(t,u)}$ for $|\lambda|\leq o$. Now focusing on any $\lambda^\star\in \mathcal{L}$
with $|\lambda^\star|=o+1$, the assumed stationarity in particular gives
that we must have $p(\bold 1^{\lambda^\star})=p(\bold 1^{\lambda^\star\oplus (t,u)})$.
Using (2) and (4) it follows that
\begin{equation*}
\beta^{\lambda^\star}+\sum_{\lambda \subset \lambda^\star}
\beta^{\lambda}=\beta^{\lambda^\star\oplus(t,u)}+\sum_{\lambda \subset \lambda^\star\oplus(t,u)}\beta^{\lambda}.
\end{equation*}
Rewriting the sum on the right hand side we get
\begin{equation*}
\beta^{\lambda^\star}+\sum_{\lambda \subset \lambda^\star}\beta^{\lambda}=\beta^{\lambda^\star\oplus(t,u)}+
\sum_{\lambda \subset \lambda^\star}\beta^{\lambda\oplus(t,u)},
\end{equation*}
where the induction assumption gives that the two sums must be equal, and
thereby $\beta^{\lambda^\star}=\beta^{\lambda^\star\oplus(t,u)}$, which
completes the only if part of the proof.
To prove the if part of the theorem we need to show that if the interaction parameters
are translational invariant then $U(\bold 1^A) = U( \bold 1^{A\oplus (t,u)})$ for any
$A\subseteq S$ and $(t,u)\in S$. For $U(\bold 1^A)$ we have
\begin{eqnarray*}
U(\bold 1^A) &=& \sum_{\lambda\in\mathcal{L}} \beta^\lambda \prod_{(i,j)\in\lambda} \bold 1^A_{i,j}\\
&=& \sum_{\lambda\in \mathcal{L}} \beta^{\lambda\oplus (t,u)} \prod_{(i,j)\in\lambda\oplus (t,u)}
\bold 1_{i,j}^A\\
&=& \sum_{\lambda\in\mathcal{L}} \beta^{\lambda\oplus (t,u)} \prod_{(i,j)\in \lambda}
\bold 1_{i,j}^{A\oplus (t,u)} \\
&=& \sum_{\lambda\in\mathcal{L}} \beta^{\lambda} \prod_{(i,j)\in \lambda}
\bold 1_{i,j}^{A\oplus (t,u)} = U(\bold 1^{A\oplus (t,u)}),
\end{eqnarray*}
where the first equality follows from (4), the second equality
is true because $\{ \lambda\oplus (t,u);\lambda\in \mathcal{L}\} =
\mathcal{L}$ for any $(t,u)\in S$, the third equality follows
from the identity $\bold 1^A_{(i,j)\oplus (t,u)} =
\bold 1_{i,j}^{A\oplus (t,u)}$, and the fourth equality is using the assumed
translational invariance of the interaction parameters. Thereby the proof
is complete.
\end{proof}
\section{Proof of translational invariance for $\phi$}
\label{sec:S3}
\begin{theorem}
An MRF $x$ defined on a rectangular lattice
$S=\{ (i,j);i=0,\ldots,n-1,j=0,\ldots,m-1\}$ with torus boundary
conditions and $\mathcal{L}_m$ given in
(5) is stationary if and only if
$\phi^{\lambda}=\phi^{\lambda\oplus (t,u)}$ for all
$\lambda \in \mathcal{L}$ and $(t,u)\in S$. We then say
that $\phi^\lambda$ is translational invariant.
\end{theorem}
\begin{proof}
Given the result in Theorem 2 it is
sufficient to show that $\phi^\lambda$ is translational invariant
if and only if $\beta^\lambda$ is translational invariant. We first
assume $\beta^\lambda$ to be translational invariant for all $\lambda\in\mathcal{L}$
and need to show that then also $\phi^\lambda$ must be translational invariant for
all $\lambda\in\mathcal{L}$. Starting with
\eqref{eq:claim}, repeatedly using the specific form we are using for $\mathcal{L}_m$ and the assumed
translational invariance for $\beta^\lambda$, we get for any $\lambda\in\mathcal{L}$, $(t,u)\in S$
\begin{eqnarray*}
\phi^{\lambda\oplus (t,u)} &=& \frac{1}{\left|\mathcal{L}_m^{\lambda\oplus (t,u)}\right|}
\left[ \sum_{\lambda^\star\subseteq \lambda\oplus (t,u)}\beta^{\lambda^\star} -
\sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\oplus (t,u))\cap\Lambda}
+ \sum_{\Lambda\in\mathcal{L}_m^{\lambda\oplus (t,u)}} \phi^{(\lambda\oplus (t,u))\cap \Lambda}\right] \\
&=& \frac{1}{|\mathcal{L}_m^\lambda|}\left[ \sum_{\lambda^\star\subseteq \lambda}
\beta^{\lambda^\star\oplus (t,u)} -
\sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\oplus (t,u))\cap (\Lambda\oplus (t,u))}
+ \sum_{\Lambda\in\mathcal{L}_m^\lambda} \phi^{(\lambda\oplus (t,u))\cap (\Lambda\oplus (t,u))}\right]\\
&=& \frac{1}{|\mathcal{L}_m^\lambda|} \left[ \sum_{\lambda^\star\subseteq \lambda} \beta^{\lambda^\star}
- \sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\cap \Lambda)\oplus (t,u)} +
\sum_{\Lambda\in\mathcal{L}_m^\lambda}\phi^{(\lambda\cap\Lambda) \oplus (t,u)}\right] \\
&=& \frac{1}{|\mathcal{L}_m^\lambda|}\left[ \sum_{\lambda^\star\subseteq\lambda} \beta^{\lambda^\star} -
\sum_{\Lambda\in\mathcal{L}_m\setminus\mathcal{L}_m^\lambda} \phi^{(\lambda\cap\Lambda)\oplus (t,u)}\right].
\end{eqnarray*}
From this we can use induction on $|\lambda|$ to show that $\phi^\lambda$ is
translational invariant for all $\lambda\in\mathcal{L}$. Setting $\lambda=\emptyset$
we get $\phi^{\lambda\oplus (t,u)} = (1/|\mathcal{L}_m^\emptyset|)\beta^\emptyset$ which
is clearly not a function of $(t,u)$. Then assuming $\phi^{\lambda\oplus (t,u)}=\phi^\lambda$
for all $\lambda\in \mathcal{L}$ with $|\lambda| \leq o$, considering the above
relation for a $\lambda$ with $|\lambda|=o+1$, and observing that
$|\lambda\cap \Lambda|\leq o$ when $|\lambda|=o+1$ and $\Lambda\in \mathcal{L}_m\setminus
\mathcal{L}_m^\lambda$, we get that also $\phi^{\lambda\oplus (t,u)}=\phi^\lambda$ for
$|\lambda|=o+1$, and the induction proof is complete.
Next we assume $\phi^\lambda$ to be translational invariant for all $\lambda\in\mathcal{L}$
and need to show that then also $\beta^\lambda$ is translational invariant for all
$\lambda\in\mathcal{L}$. Starting with \eqref{eq:claim2}
and again repeatedly using the specific form of $\mathcal{L}_m$, we get
\begin{eqnarray*}
\beta^{\lambda\oplus (t,u)} &=& \sum_{\Lambda\in\mathcal{L}_m}\phi^{(\lambda\oplus (t,u))\cap \Lambda}
-\sum_{\lambda^\star\subset \lambda\oplus (t,u)}\beta^{\lambda^\star} \\
&=& \sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\oplus (t,u))\cap (\Lambda\oplus (t,u))}
- \sum_{\lambda^\star \subset \lambda} \beta^{\lambda^\star \oplus (t,u)}\\
&=&\sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\cap \Lambda)\oplus (t,u)}
- \sum_{\lambda^\star \subset \lambda} \beta^{\lambda^\star \oplus (t,u)}.
\end{eqnarray*}
Using this we can easily use induction on $|\lambda|$ to show that we must have
$\beta^{\lambda\oplus (t,u)}=\beta^\lambda$. The proof is thereby complete.
\end{proof}
\section{Details for the MCMC sampling algorithm}
\label{sec:S4}
In this section we provide details of the proposal distributions that we use when sampling from the posterior distribution
\begin{equation*}
p(z|x) \propto p(x|z) p(z),
\end{equation*}
where $p(x|z)$ and $p(z)$ are the MRF
and the prior given in the paper, respectively.
To simulate from this posterior distribution we adopt a reversible
jump Markov chain Monte Carlo (RJMCMC) algorithm with
three types of updates. The first update type uses a random walk proposal
for one of the $\varphi$ parameters, the second proposes to move one
configuration set to a new group, and the third proposes to
change the number of groups, $r$, in the partition of the
configuration sets. In the following we describe the
proposal mechanisms for each of the three update types.
The corresponding acceptance probabilities are given by standard formulas. It should be noted that only the
last type of proposal produces a change in the dimension of
the parameter space.
\subsection{\it Random walk proposal for parameter values}
The first proposal in our algorithm is simply to propose
a new value for an already existing parameter using a random walk proposal, but
correcting for the fact that the parameters should sum to zero. More precisely, we first draw a change $\varepsilon \sim \mbox{N}(0,\sigma^2)$,
where $\sigma^2$ is an algorithmic tuning parameter. Second, we
uniformly draw one element from the current state
$z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$, $(C_i,\varphi_i)$ say, and
define the potential new state as
\begin{equation*}
z^* = \left \{ \left (C_j,\varphi_j - \frac{1}{r}\varepsilon \right ),
j=1,\ldots,i-1,i+1,\ldots,r \right \} \cup \left \{
\left (C_i,\varphi_i+\varepsilon - \frac{1}{r}\varepsilon \right )
\right \}.
\end{equation*}
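A minimal base-R sketch of this update (variable and function names are ours, for illustration only) is given below; note that the proposed vector still sums to zero.
\begin{verbatim}
# Sum-to-zero corrected random walk proposal (illustrative sketch).
propose_rw <- function(varphi, sigma) {
  r   <- length(varphi)
  eps <- rnorm(1, mean = 0, sd = sigma)   # proposed change
  i   <- sample(r, 1)                     # group whose parameter is perturbed
  varphi_new    <- varphi - eps / r       # subtract eps/r from every group
  varphi_new[i] <- varphi_new[i] + eps    # add eps to the chosen group
  varphi_new
}
varphi <- c(-0.4, 0.1, 0.3)               # current values, summing to zero
sum(propose_rw(varphi, sigma = 0.1))      # numerically zero
\end{verbatim}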
\subsection{\it Proposing to change the group for one configuration set}
\label{sec:swappingProposal}
Letting the current state be $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$,
we start this proposal by drawing a pair of groups,
$C_i$ and $C_j$ say, where the first set $C_i$ is restricted to
include at least two configuration sets. We draw $C_i$ and $C_j$ so
that the difference between the
corresponding parameter values, $\varphi_i - \varphi_j$, tends to be small.
More precisely, we draw $(i,j)$ from the joint distribution
\begin{eqnarray*}
q(i,j)\propto
\begin{cases}
\exp\left(-(\varphi_i-\varphi_j)^2\right ) & \mbox{if $i\neq
j$ and group $C_i$ contains at least two configuration sets,}\\
0 & \mbox{otherwise.}
\end{cases}
\end{eqnarray*}
Thereafter
we draw uniformly at random one of the configuration sets in $C_i$,
$c$ say. Our potential new state is then obtained by moving $c$ from
$C_i$ to $C_j$. Thus, our potential
new state becomes
\begin{equation*}
z^* = \left ( z \setminus
\{ (C_i,\varphi_i),(C_j,\varphi_j)\}\right )
\cup \left \{ (C_i\setminus c,\varphi_i ), (C_j\cup
c,\varphi_j )\right \}.
\end{equation*}
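The following base-R sketch (our own illustration; the helper name is hypothetical) shows one way of drawing the pair $(i,j)$ from $q(i,j)$ under the restriction that $C_i$ contains at least two configuration sets.
\begin{verbatim}
# Draw (i, j) with probability proportional to exp(-(varphi_i - varphi_j)^2),
# with i restricted to groups holding at least two configuration sets.
draw_pair <- function(varphi, group_sizes) {
  r <- length(varphi)
  w <- outer(varphi, varphi, function(a, b) exp(-(a - b)^2))
  diag(w) <- 0                       # require i != j
  w[group_sizes < 2, ] <- 0          # C_i must contain >= 2 configuration sets
  idx <- sample(r * r, 1, prob = as.vector(w))
  c(i = (idx - 1) %% r + 1, j = (idx - 1) %/% r + 1)
}
draw_pair(varphi = c(-0.4, 0.1, 0.3), group_sizes = c(3, 1, 2))
\end{verbatim}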
\subsection{\it Trans-dimensional proposals}
\label{sec:transDimProposal}
Let again the current state be $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$. In
the following we describe how we propose a new state by either
increasing or reducing the number of groups, $r$, with one. There
will be a one-to-one transition in the proposal, meaning that the
opposite proposal, going from the new state to the old state has a non-zero
probability. We make no attempt to jump between states where the
difference between the dimensions are larger than one.
First we draw whether to increase or to decrease the number of
groups. If the number of groups is equal to the number of
configuration sets, no proposal to increase the number of groups can
be made, since empty groups have zero prior
probability. In that case we propose to decrease the number
of dimensions with probability 1. In our proposals we also make the
restriction that only groupings containing at least one group with
only one configuration set can be subject to a dimension reducing
proposal. In a case where no such group exists, a proposal to
increase the number of dimensions is made with probability 1. In a
case where both proposals are allowed we draw at random which to do with
probability 1/2 for each. Note that at least one of the two proposals
is always valid.
We now explain how to propose to increase the number of groups by one.
We start by drawing uniformly at random one of the groups with more
than one configuration set, $C_i$ say, which we want to split into two
new groups. Thereafter we draw uniformly at random one of the
configuration sets in $C_i$, $c$ say, and form a new partition of
the configuration sets by extracting $c$ from $C_i$ and adding a
new group containing only $c$. Next we need to draw a parameter value for
the new group $\{ c\}$, and the parameter values for the other groups
also need to be modified for the proposal to conform with the requirement
that the sum of the (proposed) parameters should equal zero.
We do this by
first drawing a change $\varepsilon\sim \mbox{N}(0,\sigma^2)$ in
the parameter value for $c$, where $\sigma^2$ is the same tuning
parameter as in the random walk proposal. We then define the potential new state as
\begin{eqnarray*}
z^* &=& \left \{ \left (C_j,\varphi_j - \frac{1}{r+1}(\varphi_i +
\varepsilon)\right ),
j=1,\ldots,i-1,i+1,\ldots,r\right \} \cup \\
&&\left \{
\left (C_i\setminus c,\varphi_i - \frac{1}{r+1}(\varphi_i + \varepsilon)\right),
\left(\{ c\},\varphi_i + \varepsilon - \frac{1}{r+1}(\varphi_i +
\varepsilon)\right)\right \}.
\end{eqnarray*}
Next we explain the proposal we make when the dimension is to be decreased
by one. Since we need a one-to-one transition in our proposals, we
get certain restrictions for these proposals. In particular, the fact that
only groupings containing at least one group with only one
configuration set are possible outcomes from a dimension increasing
proposal dictates that dimension decreasing proposals only can be made
from such groupings. Assume again our current model to be $z=\{
(C_i,\varphi_i),i=1,\ldots,r\}$, where at least one group contains
only one configuration set. The strategy is to propose to merge one group consisting of only one
configuration set into another group. As in Section \ref{sec:swappingProposal}, we draw the two groups
to be merged so that the difference between the corresponding
parameter values tends to be small. More precisely, we let the two
groups be $C_i$ and $C_j$ where $(i,j)$ is sampled according to the
joint distribution
\begin{eqnarray*}
q(i,j)\propto
\begin{cases}
\exp\left(-(\varphi_i-\varphi_j)^2\right ) & \mbox{if $i\neq j$ and $C_i$ consists of
only one configuration set,}\\
0 & \mbox{otherwise.}
\end{cases}
\end{eqnarray*}
Next we need to specify potential new parameter values. As
these must conform with how we generated potential new values in the split
proposal, we have no freedom left in how to do this. The potential
new state must be
\begin{equation*}
z^* = \left \{ \left (C_k,\varphi_k + \frac{1}{r-1}\varphi_i\right
), k\in \{ 1,\ldots,r\} \setminus \{ i,j\}\right \}
\cup \left \{ \left (C_j\cup C_i,\varphi_j +
\frac{1}{r-1}\varphi_i\right )\right \}.
\end{equation*}
The split and merge steps produce a change in the dimension of
the parameter space, so to calculate the acceptance probabilities
for such proposals we need corresponding Jacobi determinants.
It is straightforward to show that the Jacobi determinants for
the merge and split proposals become $\frac{r}{r-1}$ and
$\frac{r}{r+1}$, respectively.
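As a numerical sanity check of the split determinant, the following base-R snippet (our own illustration) builds the Jacobian of the split map with respect to $(\varphi_1,\ldots,\varphi_r,\varepsilon)$ and confirms that its determinant equals $r/(r+1)$.
\begin{verbatim}
# Jacobian of the split proposal (group i = 1 split off a singleton {c});
# rows: the r retained groups and the new group,
# columns: (varphi_1, ..., varphi_r, eps).
split_jacobian <- function(r, i = 1) {
  J <- matrix(0, r + 1, r + 1)
  for (j in seq_len(r)) {
    J[j, j] <- J[j, j] + 1
    J[j, i] <- J[j, i] - 1 / (r + 1)   # -d/d varphi_i of (varphi_i + eps)/(r+1)
    J[j, r + 1] <- -1 / (r + 1)        # -d/d eps   of (varphi_i + eps)/(r+1)
  }
  J[r + 1, i] <- 1 - 1 / (r + 1)       # row for the new singleton group
  J[r + 1, r + 1] <- 1 - 1 / (r + 1)
  J
}
sapply(2:6, function(r) det(split_jacobian(r)))  # equals r/(r+1)
\end{verbatim}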
\section{The independence model with check of convergence}
\label{sec:S5}
Consider a model where the variables are all independent of each other
and $p(x_{i,j})=p^{x_{i,j}}(1-p)^{1-x_{i,j}}$ for each $(i,j)\in S$ and where $p$ is the
probability of $x_{i,j}$ being equal to $1$. We get
\begin{equation}\label{eq:independence}
p(x)=\prod_{(i,j)\in S}p^{x_{i,j}}(1-p)^{1-x_{i,j}}\propto\exp\left(\alpha \sum_{(i,j)\in S}x_{i,j}\right),
\end{equation}
where
\begin{equation*}
\alpha=\ln \left ( \frac{p}{1-p}\right ).
\end{equation*}
In this section we use the independence model as an example, and in particular we fit an MRF with $2 \times 2$ maximal cliques to
data simulated from this model. Therefore it is helpful to know
how one
can represent the independence model using $2 \times 2$ maximal
cliques, and this can be done using the same strategy that was used
for the Ising model in Section 2.2 in the paper. We get $\phi^0=\eta$, $\phi^1=\frac{\alpha}{4}+\eta$, $\phi^{11}=\phi^{\confcABAB}=\phi^{\confcABBA}=
\phi^{\confcBAAB}=\frac{\alpha}{2}+\eta$, $\phi^{\confcAAAB}=\phi^{\confcAABA}=\phi^{\confcABAA}=
\phi^{\confcBAAA}=\frac{3\alpha}{4}+\eta$ and $\phi^{\confcAAAA}=\alpha+\eta$, where
$\eta$ is an arbitrary value coming from the arbitrary value for $\beta^\emptyset$.
If $p=0.5$ we see that $\alpha=0$ and all the configuration set
parameters are equal.
We generate a realisation from the independence model with $p=0.3$ on a $100\times 100$
lattice, consider this as our observed data $x$, and simulate by the
MCMC algorithm defined in Section 4 from the
resulting posterior distribution. Using the notation for the configuration
sets in a $2\times 2$ maximal clique and the results above, we ideally want our algorithm to produce realisations
with the groups $\{c^0\}$, $\{c^1\}$, $\{c^{11},c^{\confcABAB},
c^{\confcABBA},c^{\confcBAAB}\}$,
$\{c^{\confcAAAB},c^{\confcAABA},c^{\confcABAA},c^{\confcBAAA}\}$ and
$\{c^{\confcAAAA}\}$. Note that due to our
identifiability restriction in
(11) the configuration set parameters
should be close to the solution above with
$\eta=-\alpha/2$. To check convergence we investigated trace plots of
various statistics, see Figure \ref{fig:tracePlotsMarginal}, and the
conclusion was that the algorithm converges very quickly.
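For reference, the realisation used as data in this example can be generated in a few lines of base R (our own sketch):
\begin{verbatim}
# A realisation from the independence model with p = 0.3 on a 100 x 100 lattice.
set.seed(3)
p <- 0.3
x <- matrix(rbinom(100 * 100, size = 1, prob = p), nrow = 100, ncol = 100)
mean(x)                      # close to p
alpha <- log(p / (1 - p))    # natural parameter alpha = ln(p/(1-p))
\end{verbatim}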
The acceptance rate for the parameter value proposals
is $24\%$, whereas the acceptance rates for the other two types of
proposals are both around $2\%$. We run
our sampling algorithm for 20000 iterations, and estimate the
posterior distribution of the number of groups. The configuration sets
are organised into 4 (77\%), 5
(21\%) or 6 (2\%) groups, so for these data the grouping tends to be a little bit
too strong compared to the correct number of groups. This can also be seen
from the estimated posterior probability of two configuration sets being
assigned to the same group, shown in Figure \ref{fig:pairMatrixMarginal}.
This figure suggests the four groups
$\{c^0\},\{c^1\},\{c^{11},c^{\confcABAB},c^{\confcABBA},c^{\confcBAAB},
c^{\confcAABA}\}$, $\{c^{\confcAAAB},c^{\confcABAA},c^{\confcBAAA},
c^{\confcAAAA}\}$
which is also the most probable
grouping when estimated by counting the number of occurrences. In fact the
posterior probability for this grouping is as high as $55 \%$. In Figure \ref{fig:pairMatrixMarginal} we also see how the most probable
grouping differs from the correct model grouping, shown in grey. The group $\{ c^{\confcAAAB},c^{\confcAABA},c^{\confcABAA},
c^{\confcBAAA}\}$ in the correct model is split in two
in the most probable grouping, and the subsets
$\{ c^{\confcAABA}\}$ and
$\{ c^{\confcAAAB},c^{\confcABAA},c^{\confcBAAA}\}$ are inserted into the correct model groups
$\{ c^{11},c^{\confcABAB},c^{\confcABBA},c^{\confcBAAB}\}$
and $\{ c^{\confcAAAA}\}$, respectively.
As in the Ising model example in Section 5.1 we estimate the posterior distribution for the
interaction parameters, see Figure \ref{fig:marginalInteractionParameters}.
As we can
see, the true values of the interaction parameters are mostly within the
credibility intervals, but the tendency to group the configurations too
much is in this case forcing some of the true values into a tail of
the marginal posterior distributions.
As in the Ising example we compare the distribution of the same six
statistics from simulations from our $2\times 2$ model with
posterior samples of $z$, the independence model with correct parameter
value, and the independence model with parameter value obtained by posterior
sampling, see Figure \ref{fig:compareSimIndependence}.
As we can see, our model captures approximately the
correct distribution of the chosen statistics. It is interesting to
note that for some statistics the realisations from the independence
model with simulated $\alpha$ values follow our model closely,
whereas for other statistics they are close to the correct model.
Also for this data set we investigated the cases $\gamma=0$ and
$\gamma=1$. For $\gamma=0$ the configuration
sets are organised into 4 (75\%), 5
(23\%) or 6 (2\%) groups, and for $\gamma=1$ we get 4 (93\%), 5 (6\%),
6 (1\%) groups. As expected we again see the tendency
towards stronger grouping when $\gamma$ is increased.
We also did experiments where the value of $p$ was changed. If the
value of $p$ is close to 0.5 the tendency to group the
configurations too much becomes stronger. This makes perfect sense,
since the correct grouping for $p=0.5$ is to put all
configuration sets into only one group. At the other end, choosing $p$
closer to 0 or 1 gives a stronger tendency to group the configurations
according to the correct solution. This illustrates the fact that
the algorithm tries to find a good model for the data using as few
groups as possible, but as the difference between the true parameter values
of the groups becomes larger the price to pay for choosing a model with fewer
parameters increases.
\section{Red deer data with $3\times 3$ maximal cliques}
\label{sec:S6}
In this section we present some results when assuming maximal clique size of
$3\times 3$ for the red deer data set presented in Section 5.2 in the
paper. The main drawback
with our approach is computational time, which is very dependent on
the approximation parameter $\nu$. One also needs to keep in mind that even data from simple models
will need many groups in the $3 \times 3$ case to be modelled correctly. For instance, for the
independence model the 401 configuration sets would need to be separated
into 10 groups, while for the Ising model one would need 11 groups to
get the correct model grouping. Similarly, the posterior most
probable grouping found for the $2 \times 2$ case for the red deer
example would need 38 groups to be modelled in the $3\times 3$
case. Thus it is important not to assume larger maximal cliques than
needed. However for this data set it is possible to run the sampling algorithm
with $3\times 3$ cliques, even though this is computationally
expensive.
To get convergence we need a small generalisation to the
proposal distribution for the trans-dimensional sampling step
presented in Section \ref{sec:transDimProposal}. In particular we
allow for several configuration sets to be split out into a new group
in a single proposal, and correspondingly allow for the
possibility of several configuration sets being merged into another
group in a single proposal. The estimated marginal
distribution of the number of groups is 1\%, 65\%, 33\%, and 1\%
for 29, 30, 31 and 32 groups, respectively. Realisations from
the likelihood for
three randomly chosen realisations of $z$ are shown in Figure
\ref{fig:3t3}(a), and comparing with the realisations for
the $2\times 2$ case, see Figure 8 in the paper, it is hard
to see any differences in the spatial structure of the realisations.
We also investigated the distribution of three statistics for 5000
realisations from the likelihood of each of the two clique sizes, see Figure \ref{fig:3t3}(b), and there appears to be
little difference here either. These results indicate that $2\times
2$ maximal cliques might have sufficient complexity to explain this data set.
\section{Parallelisation of the sampling algorithm}
\label{sec:S7}
Most of the computing time for running our sampling algorithm is used
to evaluate the likelihood in (2). In order to reduce
the running time we adopt a scheme that does multiple updates of the Markov chain by evaluating likelihoods in parallel.
Assume we are in a state $z$ and propose to split/merge into a new
state $z_1$. Now there are two possible outcomes for this
proposal. Either we reject the proposal, which results in state $z$, or
we accept the proposal, which results in state $z_1$. Either way we
always propose a parameter update in the next step, so this
update can be proposed from both states $z$ and $z_1$ before the
acceptance probability for the split/merge step is evaluated. The
possible outcomes for these three proposals are $z$, $z_1$, $z_2$
and $z_{12}$, where $z$ is the outcome where neither the split/merge
proposal nor the following parameter proposal is accepted, $z_1$ is the
outcome where the split/merge proposal is accepted but not the following
parameter proposal, $z_2$ is the outcome where the split/merge
proposal is not accepted but the parameter proposal is, and $z_{12}$ is the
outcome where both the split/merge proposal and the following parameter proposal are accepted. If we continue the argument we can do the same to
propose updates where configurations are moved from one group to
another group, and in the red deer example we even include a
level where updates of covariates are proposed. After making all
proposals we evaluate the likelihoods for each possible state in
parallel. The result is that we need to evaluate more
likelihoods in total, but if the number of CPUs that are available is larger
than or equal to the number of likelihoods we need to evaluate, a
computational gain close to the number of levels is obtained. The
updating scheme is illustrated in Figure \ref{fig:parallellScheme}.
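A minimal sketch of this idea is given below (it is not the actual implementation; the function names, the use of \texttt{concurrent.futures} and the generic proposal functions are ours). The full tree of candidate states is built first, all required likelihoods are then evaluated in parallel, and the accept/reject decisions are finally taken sequentially using the precomputed values:
\begin{verbatim}
from concurrent.futures import ProcessPoolExecutor

def candidate_states(z, proposal_fns):
    # Level k applies proposal_fns[k] to every state generated so far;
    # two levels give the states z, z1, z2 and z12 described above.
    states = [z]
    for propose in proposal_fns:
        states = states + [propose(s) for s in states]
    return states

def evaluate_likelihoods(states, log_likelihood, max_workers=None):
    # One likelihood evaluation per candidate state, run in parallel.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(log_likelihood, states))
\end{verbatim}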
\counterwithin{figure}{section}
\setcounter{section}{5}
\setcounter{figure}{0}
\begin{figure}
\caption{Independence model example: Trace plots for the first quarter
of the posterior simulation run. Solid curves are the result from a
simulation where the initial number of groups is 1, and dashed
curves are from a run where the initial number of groups is 11 (the maximum).}
\label{fig:g1}
\label{fig:g2}
\label{fig:tracePlotsMarginal}
\end{figure}
\begin{figure}
\caption{Independence model example: Estimated posterior probability that two configuration sets are assigned to the same group; the correct model grouping is shown in grey.}
\label{fig:pairMatrixMarginal}
\end{figure}
\begin{figure}
\caption{Independence model example: Estimated marginal posterior distributions for the interaction parameters.}
\label{fig:b1}
\label{fig:b2}
\label{fig:b3}
\label{fig:b4}
\label{fig:b5}
\label{fig:b9}
\label{fig:b8}
\label{fig:b7}
\label{fig:b6}
\label{fig:b10}
\label{fig:marginalInteractionParameters}
\end{figure}
\begin{figure}
\caption{Independence model example: Distributions of the six statistics for realisations from the fitted $2\times 2$ model with posterior samples of $z$, from the independence model with the correct parameter value, and from the independence model with parameter values obtained by posterior sampling.}
\label{fig:i1}
\label{fig:i2}
\label{fig:i3}
\label{fig:i4}
\label{fig:i5}
\label{fig:i6}
\label{fig:compareSimIndependence}
\end{figure}
\counterwithin{figure}{section}
\setcounter{section}{6}
\setcounter{figure}{0}
\begin{figure}
\caption{Red deer $3\times 3$ example: Posterior results with $\gamma=0.5$.}
\label{fig:simulate_3t3}
\label{fig:gfunc_Deer_3t3}
\label{fig:3t3}
\end{figure}
\counterwithin{figure}{section}
\setcounter{section}{7}
\setcounter{figure}{0}
\begin{figure}
\caption{Proposal scheme for parallel likelihood evaluations. Starting in model $z$, proposals are made down the graph. Arrows pointing straight down represent rejection of a proposal, while arrows pointing down and left represent acceptance. Double squares represent states where a new likelihood evaluation is needed.}
\label{fig:parallellScheme}
\end{figure}
\end{document} |
\begin{document}
\title[Emden-Fowler equation]{The Emden-Fowler equation\\ on a spherical cap of $\mathbb{S}^N$}
\thanks{The second author was supported by JSPS KAKENHI Grant Number 24740100.}
\author{Atsushi Kosaka}
\address{Bukkyo University,
96, Kitahananobo-cho, Murasakino, Kita-ku, Kyoto 603-8301, Japan}
\email{[email protected]}
\author{Yasuhito Miyamoto}
\address{Graduate School of Mathematical Sciences, The University of Tokyo,
3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan}
\email{[email protected]}
\begin{abstract}
Let $\mathbb{S}^N\subset\mathbb{R}^{N+1}$, $N\ge 3$, be the unit sphere, and let $S_{\Theta}\subset\mathbb{S}^N$ be a geodesic ball with geodesic radius $\Theta\in(0,\pi)$.
We study the bifurcation diagram $\{(\Theta,\left\|U\right\|_{\infty})\}\subset\mathbb{R}^2$ of the radial solutions of the Emden-Fowler equation on $S_{\Theta}$
\[
\begin{cases}
\Delta_{\mathbb{S}^N}U+U^p=0 & \textrm{in}\ S_{\Theta},\\
U=0 & \textrm{on}\ \partial S_{\Theta},\\
U>0 & \textrm{in}\ S_{\Theta},
\end{cases}
\]
where $p>1$.
Among other things, we prove the following: For each $p>p_{\rm S}:=(N+2)/(N-2)$, there exists $\underline{\Theta}\in(0,\pi)$ such that the problem has a radial solution for $\Theta\in(\underline{\Theta},\pi)$ and has no radial solution for $\Theta\in(0,\underline{\Theta})$.
Moreover, this solution is unique in the space of radial functions if $\Theta$ is close to $\pi$.
If $p_{\rm S}<p<p_{\rm JL}$, then there exists $\Theta^*\in(\underline{\Theta},\pi)$ such that the problem has infinitely many radial solutions for $\Theta=\Theta^*$, where
\[
p_{\rm JL}=\begin{cases}
1+\frac{4}{N-4-2\sqrt{N-1}} & \textrm{if}\ N\ge 11,\\
\infty & \textrm{if}\ 2\le N\le 10.
\end{cases}
\]
Asymptotic behaviors of the bifurcation diagram as $p\to\infty$ and $p\downarrow 1$ are also studied.
\end{abstract}
\date{\today}
\subjclass[2010]{Primary: 35J60, 35B32; Secondary: 34C23, 34C10}
\keywords{Bifurcation diagram, Joseph-Lundgren exponent, Singular solution, Infinitely many turning points}
\maketitle
\section{Introduction and Main results}
Let $\mathbb{S}^N\subset\mathbb{R}^{N+1}$, $N\ge 3$, be the unit sphere, and let $S_{\Theta}\subset\mathbb{S}^N$ be the geodesic ball centered at the North Pole with geodesic radius $\Theta\in(0,\pi)$.
We call $S_{\Theta}$ the spherical cap.
In this paper we are concerned with the solution of the Emden-Fowler equation on $S_{\Theta}$
\begin{equation}\label{EFS}
\begin{cases}
\Delta_{\mathbb{S}^N}U+U^p=0 & \textrm{in}\ \ S_{\Theta},\\
U=0 & \textrm{on}\ \ \partial S_{\Theta},\\
U>0 & \textrm{in}\ \ S_{\Theta},
\end{cases}
\end{equation}
where $\Delta_{\mathbb{S}^N}$ denotes the Laplace-Beltrami operator on $\mathbb{S}^N$ and $p>1$.
In the Euclidean case it is well known that the qualitative property of the structure of the solutions of the problem
\begin{equation}\label{EFE}
\begin{cases}
\Delta U+U^p=0 & \textrm{in}\ \ B_{\Lambda},\\
U=0 & \textrm{on}\ \ \partial B_{\Lambda},\\
U>0 & \textrm{in}\ \ B_{\Lambda}
\end{cases}
\end{equation}
depends on $p$, and does not depend on $\Lambda$.
Here, $B_{\Lambda}\subset\mathbb{R}^N$ denotes the ball centered at the origin $O$ with radius $\Lambda>0$.
By the symmetry result of Gidas, {\it et al.}\cite{GNN79}, every solution of (\ref{EFE}) is radially symmetric.
The critical Sobolev exponent
\[
p_{\rm S}:=
\begin{cases}
\frac{N+2}{N-2}, & \textrm{if}\ N\ge 3,\\
\infty, & \textrm{if}\ N=1,2
\end{cases}
\]
plays an important role.
It is known that (\ref{EFE}) has a unique solution if $1<p<p_{\rm S}$, and has no solution if $p\ge p_{\rm S}$ (See Poho\'zaev~\cite{P65}).
In the hyperbolic space the moving plane method is applicable and every positive solution of a semilinear elliptic equation with general nonlinearity on a geodesic ball with radius $\Lambda>0$ is radially symmetric. See \cite{KP98,SW12} for this symmetry result.
Bonforte, {\it et al.} \cite{BGGV13} showed, among other things, that in the hyperbolic space the Emden-Fowler equation on the geodesic ball with radius $\Lambda>0$ has a unique positive solution if $1<p<p_{\rm S}$, and has no solution if $p\ge p_{\rm S}$.
Thus, the hyperbolic case is qualitatively the same as the Euclidean case.
In the spherical case Padilla~\cite{P97} and Kumaresan-Prajapat~\cite{KP98} showed that if $S_{\Theta}$ is included in a hemisphere ($0<\Theta<\frac{\pi}{2}$), then every positive solution of a semilinear elliptic equation with general nonlinearity is radially symmetric.
On the other hand, if $S_{\Theta}$ includes a hemisphere ($\frac{\pi}{2}<\Theta<\pi$), then there is a semilinear elliptic equation such that it has a nonradial positive solution.
See \cite{BW07,Mi13} for the existence of nonradial positive solutions.
As far as (\ref{EFS}) is concerned, if $0<\Theta<\pi$ and $1<p\le p_{\rm S}$, then one can easily show that the solution is radial, changing variables and applying the symmetry result of \cite{GNN79} to the equation.
When $\Theta=\frac{\pi}{2}$ and $p>1$, the radial symmetry of a solution of (\ref{EFS}) is guaranteed by \cite[Theorem~1]{SW12}.
The question whether a solution of (\ref{EFS}) is radial in the case where $\frac{\pi}{2}<\Theta<\pi$ and $p>p_{\rm S}$ seems to remain open.
In this paper we restrict ourselves to radially symmetric solutions.
This study is motivated by the result of Bandle-Peletier~\cite{BP99}.
In the case where $N=3$ and $p=p_{\rm S}(=5)$ they showed that (\ref{EFS}) has no solution if $S_{\Theta}$ is included in a hemisphere, and has a radial solution if $S_{\Theta}$ includes a hemisphere.
This indicates that the solution structure depends not only on $p$ but also on the radius $\Theta$.
Actually, we will see in Corollary~\ref{B} below that (\ref{EFS}) has a solution even in the supercritical case $p>p_{\rm S}$ if $\Theta$ is close to $\pi$.
Hence, the solution structure in the spherical case is different from the solution structures in both the Euclidean and hyperbolic cases.
The difference between the Euclidean and spherical cases was also found in the structure of the positive solutions of the Brezis-Nirenberg problem
\[
\begin{cases}
\Delta_{\mathbb{S}^3}u+\lambda u+u^5=0 & \textrm{in}\ S_{\Theta}(\subset\mathbb{S}^3),\\
u=0 & \textrm{on}\ \partial S_{\Theta}
\end{cases}
\]
which involves the critical Sobolev exponent.
See \cite{BB02,BP06} for details.
It seems that the present paper is the first attempt to study the supercritical Emden-Fowler equation on a spherical cap.
The supercritical Emden-Fowler equation on other manifolds was studied in Berchio, {\it et al.}\cite{BFG14}.
Let us explain the problem in detail.
Let $\theta$ be the geodesic distance from the North Pole of $\mathbb{S}^N$.
Let $p>1$ be fixed.
Then the solution $U$ of (\ref{EFS}) depends only on $\theta$.
The problem (\ref{EFS}) can be reduced to the ODE
\begin{equation}
\begin{cases}\label{EFODE}
U''+(N-1)\frac{\cos\theta}{\sin\theta}U'+U^p=0, & 0<\theta<\Theta,\\
U(\Theta)=0,\\
U>0, & 0\le\theta<\Theta.
\end{cases}
\end{equation}
We consider the possibly sign-changing solution of the initial value problem
\begin{equation}\label{IVP}
\begin{cases}
U''+(N-1)\frac{\cos\theta}{\sin\theta}U'+|U|^{p-1}U=0, & 0<\theta<\pi,\\
U(0)=\Gamma>0,\ U'(0)=0.\\
\end{cases}
\end{equation}
In Lemma~\ref{S3L2} we will see that the regular solution $U(\,\cdot\,)$ of (\ref{IVP}) has the first positive zero $\Theta(\Gamma)\in(0,\pi)$.
In Theorem~\ref{A} below we show that $\Theta(\Gamma)$ is a $C^1$-function defined on $0<\Gamma<\infty$.
It is clear that $U(\theta)$ $(0\le\theta\le\Theta(\Gamma))$ is decreasing.
Hence, $\|U\|_{C^0(S_{\Theta(\Gamma)})}=\Gamma$.
The set of all the regular radial solutions of (\ref{EFS}) can be represented by the bifurcation diagram $\{(\Theta(\Gamma),\Gamma)\}\subset\mathbb{R}^2$.
Thus, in this paper we mainly study the graph of the function $\Theta(\Gamma)$.
By $p_{\rm JL}$ we define the Joseph-Lundgren exponent \cite{JL73}, i.e.,
\[
p_{\rm JL}:=
\begin{cases}
1+\frac{4}{N-4-2\sqrt{N-1}}, & \textrm{if}\ N\ge 11,\\
\infty, & \textrm{if}\ 2\le N\le 10.
\end{cases}
\]
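For orientation, a brief numerical illustration: in the lowest dimension where $p_{\rm JL}$ is finite,
\[
N=11:\qquad p_{\rm JL}=1+\frac{4}{11-4-2\sqrt{10}}=1+\frac{4}{7-2\sqrt{10}}\approx 6.92 .
\]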
\begin{maintheorem}[Supercritical]\label{A}
Suppose that $N\ge 3$ and $p>p_{\rm S}$.
Let $\Theta(\Gamma)$ be the first positive zero of the solution of (\ref{IVP}).
Then the following hold:\\
(i) The function $\Theta(\Gamma)$ is of class $C^1$. For each $\Gamma>0$, $0<\Theta(\Gamma)<\pi$.\\
(ii) $\Theta(\Gamma)\rightarrow\pi$ as $\Gamma\downarrow 0$. If $\Gamma>0$ is small, then $\Theta'(\Gamma)<0$.\\
(iii) $\Theta(\Gamma)\rightarrow \Theta^*$ as $\Gamma\rightarrow\infty$, where $\Theta^*\in(0,\pi)$ is defined in Theorem~\ref{C} below.\\
(iv) If $p_{\rm S}<p<p_{\rm JL}$, then $\Theta(\Gamma)$ oscillates infinitely many times around $\Theta^*$ as $\Gamma\rightarrow\infty$.
\end{maintheorem}
\noindent
See Figure~\ref{FIG} (a) for the bifurcation diagram in the case $p_{\rm S}<p<p_{\rm JL}$.
\begin{figure}[t]
\begin{center}
\includegraphics{fig.eps}
\caption{Schematic bifurcation diagrams: (a) $p_{\rm S}<p<p_{\rm JL}$ (Theorem~\ref{A}), (b) $p\ge p_{\rm JL}$ (Conjecture~\ref{Conj1}), (c) $1<p\le p_{\rm S}$ $(N\ge 4)$ and $1<p<p_{\rm S}$ $(N=3)$ (Proposition~\ref{CriSub}), (d) $p=p_{\rm S}$ and $N=3$ (Proposition~\ref{CriSub}).}
\label{FIG}
\end{center}
\end{figure}
When $3\le N\le 10$, $p_{\rm JL}=\infty$, and hence, (iv) always holds.
An immediate consequence of Theorem~\ref{A} is the following:
\begin{maincorollary}\label{B}
Suppose that $N\ge 3$ and $p>p_{\rm S}$.
Then the following hold:\\
(i) There exists $\underline{\Theta}>0$ such that (\ref{EFODE}) has no regular solution for $\Theta\in(0,\underline{\Theta})$ and has a regular solution for $\Theta\in(\underline{\Theta},\pi)$.\\
(ii) If $p_{\rm S}<p<p_{\rm JL}$, then (\ref{EFODE}) has a regular solution for $\Theta=\underline{\Theta}$, where $\underline{\Theta}$ is given in (i).\\
(iii) If $p_{\rm S}<p<p_{\rm JL}$, then (\ref{EFODE}) has infinitely many regular solutions for $\Theta=\Theta^*$, where $\Theta^*$ is given in Theorem~\ref{C} below.\\
(iv) There exists $\overline{\Theta}\in (0,\pi)$ such that (\ref{EFODE}) has a unique regular solution for $\Theta\in (\overline{\Theta},\pi)$. This solution is nondegenerate in the space of radial functions.
\end{maincorollary}
\noindent
The problem (\ref{EFODE}) has a singular solution $U^*(\theta)$ such that $U^*(\theta)=O(\theta^{-\frac{2}{p-1}})$ $(\theta\downarrow 0)$.
\begin{maintheorem}\label{C}
Suppose that $N\ge 3$ and $p>p_{\rm S}$.
There exists $\Theta^*\in (0,\pi)$ such that (\ref{EFODE}) has a singular solution $U^*(\theta)$ for $\Theta=\Theta^*$ such that $U^*(\theta)\in C^2(0,\Theta^*]$ and
\begin{equation}\label{CE0}
U^*(\theta)=a\left(\cos\frac{\theta}{2}\right)^{-(N-2)}\left(2\tan\frac{\theta}{2}\right)^{-\mu}(1+o(1))\quad\textrm{as}\quad \theta\downarrow 0,
\varepsilonnd{equation}
where
\begin{equation}\label{Anu}
a:=\left\{\mu(N-2-\mu)\right\}^{\mu/2}\ \ \textrm{and}\ \ \mu:=\frac{2}{p-1}.
\end{equation}
\end{maintheorem}
In the next theorem we obtain the behavior of the curve $\{(\Theta(\Gamma),\Gamma)\}$ for large $p$.
\begin{maintheorem}\label{ThD}
Suppose that $N\ge 3$.
Let $\underline{\Theta}$ be given in Corollary~\ref{B}~(i), and let $\Theta^*$ be given in Theorem~\ref{C}.
Then,
\[
\underline{\Theta}\to\pi\ \textrm{as}\ p\to\infty.
\]
Since $\underline{\Theta}\le\Theta^*$, it holds that $\Theta^*\to\pi$ as $p\to\infty$.
In particular, when $N=3$, $\underline{\Theta}\ge\pi-\arcsin\frac{4}{p-1}$ for $p\ge p_{\rm S}(=5)$.
\end{maintheorem}
In Theorems~\ref{A} and \ref{ThD} detailed properties of $\Theta(\Gamma)$ in the case $p\ge p_{\rm JL}$ are not clarified.
\begin{conjecture}\label{Conj1}
Suppose that $N\ge 11$.
If $p\ge p_{\rm JL}$, then $\Theta(\Gamma)$ is strictly decreasing and (\ref{EFODE}) has no regular solution for $\Theta\in(0,\Theta^*]$.
\end{conjecture}
\noindent
Figure~\ref{FIG} (b) shows a conjectured bifurcation diagram in the case $p\ge p_{\rm JL}$.
Next, we consider the critical case $p=p_{\rm S}$ and subcritical case $1<p<p_{\rm S}$.
The following proposition follows from combining known results~\cite{BBF98,BP99,SW13} and our results.
\begin{proposition}[Critical/Subcritical]\label{CriSub}
Suppose that $N\ge 3$ and $1<p\le p_{\rm S}$.
Let $\Theta(\Gamma)$ be the first positive zero of the solution of (\ref{IVP}).\\
(i) The function $\Theta(\Gamma)$ is of class $C^1$. For each $\Gamma>0$, $0<\Theta(\Gamma)<\pi$.\\
(ii) $\Theta(\Gamma)\rightarrow\pi$ as $\Gamma\downarrow 0$.\\
(iii) $\Theta(\Gamma)$ is strictly decreasing.\\
(iv) If $N\ge 4$, then $\Theta(\Gamma)\to 0$ as $\Gamma\to\infty$.\\
(v) If $N=3$ and $p=p_{\rm S}(=5)$, then $\Theta(\Gamma)\to\frac{\pi}{2}$ as $\Gamma\to\infty$. On the other hand, if $N=3$ and $1<p<p_{\rm S}$, then $\Theta(\Gamma)\to 0$ as $\Gamma\to\infty$.
In particular, if $N=3$ and $p=p_{\rm S}(=5)$, (\ref{EFODE}) has no regular solution for $\Theta\in(0,\frac{\pi}{2}]$.
\end{proposition}
\noindent
See Figure~\ref{FIG} (c) and (d).
When $1<p<p_{\rm S}$, for each fixed $\Theta_0\in(0,\pi)$, there is a unique $\Gamma_0>0$ depending on $p$ such that $\Theta(\Gamma_0)=\Theta_0$.
Therefore, we write $\Gamma_0$ by $\Gamma(p)$.
The asymptotic shape of the branch as $p\downarrow 1$ is as follows:
\begin{maintheorem}\label{ThE}
Suppose that $N\ge 3$.
There exists $\Theta^{\dagger}\in (0,\pi)$ such that the following statements hold:\\
(i) If $0<\Theta<\Theta^{\dagger}$, then $\Gamma(p)\to\infty$ as $p\downarrow 1$.\\
(ii) If $\Theta=\Theta^{\dagger}$, then $\Gamma(p)\to\Gamma^{\dagger}$ as $p\downarrow 1$ with some constant $\Gamma^{\dagger}>0$.\\
(iii) If $\Theta^{\dagger}<\Theta<\pi$, then $\Gamma(p)\to 0$ as $p\downarrow 1$.
\end{maintheorem}
Since the solution structure changes at $p=p_{\rm S}$, it is natural to study the case where $p\downarrow p_{\rm S}$.
We are led to the following:
\begin{conjecture}
Let $\overline{\Theta}$ be given in Corollary~\ref{B} (iv), and let $\Theta^*$ be given in Theorem~\ref{C}.
If $N\ge 4$, then $\overline{\Theta}\to 0$ ($p\downarrow p_{\rm S}$) and $\Theta^*\to 0$ ($p\downarrow p_{\rm S}$).
If $N=3$, then $\overline{\Theta}\to\frac{\pi}{2}$ ($p\downarrow p_{\rm S}$) and $\Theta^*\to\frac{\pi}{2}$ ($p\downarrow p_{\rm S}$).
\end{conjecture}
Let us explain technical details.
Using the stereographic projection $v(r):=U(\theta)$ and $r:=\tan\frac{\theta}{2}$, we have
\[
v''+\frac{N-1}{r}v'-(N-2)rA(r)v'+A(r)^2v^p=0,
\]
where
\[
A(r):=\frac{2}{1+r^2}.
\]
We let $u(r):=A(r)^{\frac{N-2}{2}}v(r)$.
Then, we have the semilinear elliptic problem
\begin{equation}\label{D}
\begin{cases}
u''+\frac{N-1}{r}u'+\frac{N(N-2)}{4}A(r)^2u+\frac{1}{A(r)^q}u^{p}=0, & 0<r<R,\\
u(R)=0, & \\
u>0, & 0\le r<R,
\end{cases}
\end{equation}
where
\[
R:=\tan\frac{\Theta}{2}\ \ \textrm{and}\ \
q:=\frac{N-2}{2}(p-p_{\rm S}).
\]
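Note that $q$ has the sign of $p-p_{\rm S}$, i.e.,
\[
q>0\ \Longleftrightarrow\ p>p_{\rm S},\qquad q=0\ \Longleftrightarrow\ p=p_{\rm S},
\]
so the weight $A(r)^{-q}$ in (\ref{D}) disappears exactly in the critical case.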
Note that if $R=1$, then $S_{\Theta}$ is a hemisphere ($\Theta=\frac{\pi}{2}$).
The problem (\ref{IVP}) is equivalent to the problem
\begin{equation}\label{DD}
\begin{cases}
u''+\frac{N-1}{r}u'+\frac{N(N-2)}{4}A(r)^2u+\frac{1}{A(r)^q}|u|^{p-1}u=0, & 0<r<\infty,\\
u(0)=\gamma>0,\ u'(0)=0,
\end{cases}
\end{equation}
where $\gamma:=2^{\frac{N-2}{2}}\Gamma$.
By $R(\gamma)$ we denote the first positive zero of the solution $u(\,\cdot\,,\gamma)$ of (\ref{DD}), i.e., $R(\gamma)=\tan\frac{\Theta(\Gamma)}{2}$.
In this paper we mainly consider (\ref{DD}).
The existence of infinitely many turning points for semilinear elliptic equations on a Euclidean ball was proved by several authors.
In \cite{DF07,GW11} the Brezis-Nirenberg problem including a supercritical exponent was studied.
Dolbeault-Flores~\cite{DF07} used the geometric theory of dynamical systems.
Guo-Wei~\cite{GW11} used the Morse indices of solutions, relying on the intersection number between the regular and singular solutions.
See \cite{KW18,Mi14a,Mi14b,Mi18} for other results.
In \cite{D00,D08a,D08b,D13} Dancer studied infinitely many turning points of supercritical semilinear Dirichlet problems on a rather general domain, using the analytic property.
We show that (\ref{EFS}) has a singular solution $U^*$.
Using the intersection number of the singular solution $U^*(R)$ and a regular solution $U(R,\Gamma)$ of (\ref{IVP}) in the interval $I(\gamma)$
\[
\mathcal{Z}_{I(\gamma)}[U^*(\,\cdot\,)-U(\,\cdot\,,\Gamma)],
\]
we prove the existence of infinitely many turning points as $\Gamma\to\infty$.
Here, $I(\gamma):=(0,\min\{R(\gamma),R^*\})$, where $R(\gamma)$ and $R^*$ are the first positive zeros of $U$ and $U^*$, respectively.
This paper consists of eight sections.
In Section~2 we recall known results about the Emden-Fowler equation on $\mathbb{R}^N$.
In Section~3 we prove Theorem~\ref{A} (i).
In Section~4 we construct the singular solution (Theorem~\ref{C}).
In Sections 5, 6, and 7 we prove Theorem~\ref{A} (iii), (ii), and (iv), respectively.
The proof of Corollary~\ref{B} is in Section~7.
In Section~8 we prove Theorems~\ref{ThD} and \ref{ThE}.
Proposition~\ref{CriSub} is also proved in Section~8.
\section{Known results}
We recall known results about solutions of the equation
\[
\bar{u}''+\frac{N-1}{\rho}\bar{u}'+\bar{u}^p=0,\quad 0<\rho<\infty.
\]
See \cite{JL73,W93} for details.
This problem has the singular solution
\begin{equation}\label{S2E0}
\bar{u}^*(\rho):=a\rho^{-\mu},
\varepsilonnd{equation}
where $a$ and $\mu$ are defined by (\ref{Anu}).
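For the reader's convenience we verify this directly: with $\bar{u}^*(\rho)=a\rho^{-\mu}$,
\begin{align*}
(\bar{u}^*)''+\frac{N-1}{\rho}(\bar{u}^*)'+(\bar{u}^*)^p
&=a\mu(\mu+1)\rho^{-\mu-2}-(N-1)a\mu\rho^{-\mu-2}+a^p\rho^{-\mu p}\\
&=\left\{a^p-a\mu(N-2-\mu)\right\}\rho^{-\mu-2}=0,
\end{align*}
since $\mu p=\mu+2$ and $a^{p-1}=\mu(N-2-\mu)$ by (\ref{Anu}).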
Let $\bar{u}(\rho,\bar{\gamma})$ be the solution of
\begin{equation}\label{S2E1}
\begin{cases}
\bar{u}''+\frac{N-1}{\rho}\bar{u}'+\bar{u}^p=0, & 0<\rho<\infty,\\
\bar{u}(0)=\bar{\gamma}>0,\ \bar{u}'(0)=0.\\
\end{cases}
\end{equation}
We use Emden's transformation
\[
\bar{y}(t):=\frac{\bar{u}(\rho,\bar{\gamma})}{\bar{u}^*(\rho)}\ \ \textrm{and}\ \ t:=\frac{1}{m}\log\rho,
\]
where
\begin{equation}\label{m}
m:=a^{-\frac{p-1}{2}}.
\end{equation}
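Note that, by (\ref{Anu}) and (\ref{m}),
\[
m^2=a^{-(p-1)}=\frac{1}{\mu(N-2-\mu)},\qquad\textrm{so that}\qquad m^2\mu(N-2-\mu)=1;
\]
this identity is used again in the proof of Lemma~\ref{S5L3}.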
Then $\bar{y}(t)$ satisfies
\begin{equation}\label{S2E2}
\begin{cases}
\bar{y}''+\alpha\bar{y}'-\bar{y}+\bar{y}^p=0,& -\infty<t<\infty,\\
ae^{-m\mu t}\bar{y}(t)\rightarrow\bar{\gamma}\ \textrm{as}\ t\rightarrow-\infty,\\
e^{-mt}(e^{-m\mu t}\bar{y}(t))'\rightarrow 0\ \textrm{as}\ t\rightarrow-\infty,
\end{cases}
\end{equation}
where
\begin{equation}\label{alpha}
\alpha:=m(N-2-2\mu).
\end{equation}
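Since $\mu=\frac{2}{p-1}$, we have
\[
\alpha>0\ \Longleftrightarrow\ \mu<\frac{N-2}{2}\ \Longleftrightarrow\ p>\frac{N+2}{N-2}=p_{\rm S},
\]
so $\alpha$ is positive precisely in the supercritical case.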
Let $\bar{z}(t):=\bar{y}'(t)$.
Then, $(\bar{y},\bar{z})$ satisfies
\begin{equation}\label{S2E3}
\begin{cases}
\bar{y}'=\bar{z}\\
\bar{z}'=-\alpha\bar{z}+\bar{y}-\bar{y}^p.
\end{cases}
\end{equation}
We study the orbit $(\bar{y}(t),\bar{z}(t))$.
Let
\[
J(\bar{y},\bar{z}):=\frac{\bar{z}^2}{2}-\frac{\bar{y}^2}{2}+\frac{\bar{y}^{p+1}}{p+1}.
\]
By direct calculation we have
\[
\frac{d}{dt}J(\bar{y}(t),\bar{z}(t))=-\alpha\bar{z}(t)^2.
\]
If $p>p_{\rm S}$, then $\alpha>0$, and hence, $\frac{d}{dt}J(\bar{y}(t),\bar{z}(t))\le 0$.
Then, $J$ is a Lyapunov function of (\ref{S2E3}).
We see by the initial condition in (\ref{S2E2}) that $(\bar{y}(-\infty),\bar{z}(-\infty))=(0,0)$.
Therefore, $J(\bar{y}(t),\bar{z}(t))\le 0$ for all $t\in\mathbb{R}$.
The system (\ref{S2E3}) has the unique equilibrium $(1,0)$ in the bounded set $\{(\bar{y},\bar{z})\in\mathbb{R}^2;\ J(\bar{y},\bar{z})<0,\ \bar{y}>0\}$.
It follows from the Poincar\'e-Bendixson theorem that $(\bar{y}(t),\bar{z}(t))\rightarrow (1,0)$ as $t\rightarrow\infty$.
Next, we study the behavior of $(\bar{y}(t),\bar{z}(t))$ near $(1,0)$.
The two eigenvalues of the linearization at $(1,0)$ are the roots of $\lambda^2+\alpha\lambda+p-1=0$.
Therefore, $(1,0)$ is a stable spiral if $\alpha^2-4(p-1)<0$.
This inequality is equivalent to $(N-2-2\mu)^2-8(N-2-\mu)<0$.
Solving this inequality for $\mu$, we have
\begin{equation}\label{S2E4}
\frac{N-4-2\sqrt{N-1}}{2}<\mu<\frac{N-4+2\sqrt{N-1}}{2}.
\end{equation}
Since $1+4/(N-4+2\sqrt{N-1})<p_{\rm S}<p$, we see that $\mu<(N-4+2\sqrt{N-1})/2$.
If $N\le 10$, then $(N-4-2\sqrt{N-1})/2\le 0<\mu$, and hence (\ref{S2E4}) holds.
In the case $N\ge 11$, (\ref{S2E4}) holds if
\begin{equation}\label{S2E5}
p<1+\frac{4}{N-4-2\sqrt{N-1}}(=p_{\rm JL}).
\end{equation}
We have seen the following: The orbit $(\bar{y}(t),\bar{z}(t))$ starts from $(0,0)$ at $t=-\infty$ and converges to $(1,0)$ as $t\rightarrow\infty$.
Moreover, if (\ref{S2E5}) holds, then $(\bar{y}(t),\bar{z}(t))$ rotates clockwise around $(1,0)$.
Therefore, there is $\{t_j\}_{j=1}^{\infty}$ $(t_1<t_2<\cdots\rightarrow\infty)$ such that $\bar{z}(t_j)=0$ $(j\in\{1,2,\ldots\})$ and
\[
\bar{y}(t_2)<\bar{y}(t_4)<\cdots<\bar{y}(t_{2j})<\cdots<1<\cdots<\bar{y}(t_{2j-1})<\cdots<\bar{y}(t_3)<\bar{y}(t_1).
\]
This means that $\bar{y}(t)$ oscillates around $1$ infinitely many times.
Since $\bar{y}(t)=\frac{\bar{u}(\rho,\bar{\gamma})}{\bar{u}^*(\rho)}$, the intersection number between $\bar{u}(\rho,\bar{\gamma})$ and $\bar{u}^*(\rho)$, which we denote by $\mathcal{Z}_{(0,\infty)}[\bar{u}(\,\cdot\,,\bar{\gamma})-\bar{u}^*(\,\cdot\,)]$, is $\infty$.
\begin{proposition}\label{S2P1}
(i) Let $\bar{u}(\rho,\bar{\gamma})$ be the solution of (\ref{S2E1}).
If $p_{\rm S}<p<p_{\rm JL}$, then $\mathcal{Z}_{(0,\infty)}[\bar{u}(\,\cdot\,,\bar{\gamma})-\bar{u}^*(\,\cdot\,)]=\infty$.\\
(ii) Let $(\bar{y}(t),\bar{z}(t))$ be the solution of (\ref{S2E2}).
If $p>p_{\rm S}$, then, for each $\bar{\gamma}>0$, $(\bar{y}(t),\bar{z}(t))$ converges to $(1,0)$ as $t\rightarrow\infty$.
\end{proposition}
\section{Parameterization results}
The aim of this section is to show that the regular solutions of (\ref{D}) can be parameterized by $\gamma$.
Parametrization results for Euclidean cases were obtained by several authors.
See \cite{K97,Mi14a} for example.
The proof is similar.
However, we give the proof for readers' convenience.
\begin{lemma}\label{S3L1}
Suppose that $p>1$.
Let $(R_0,u_0(r))$ be a solution of (\ref{DD}) with $\gamma=\gamma_0$.
Then, there is a $C^1$-mapping $\gamma\mapsto (R(\gamma),u(r,\gamma))$ such that all solutions of (\ref{DD}) near $(R_0,u_0(r))$ can be described as $\{(R(\gamma),u(r,\gamma))\}_{|\gamma-\gamma_0|<\varepsilon}$ $(u(0,\gamma)=\gamma)$ and that $(R(\gamma_0),u(r,\gamma_0))=(R_0,u_0(r))$.
\end{lemma}
\begin{proof}
Since $u(r,\gamma)$ is a solution of (\ref{DD}), $u(r,\gamma)$ is a $C^1$-function of $r$ and $\gamma$.
Since $u$ satisfies the equation in (\ref{DD}), $u_r(R_0,\gamma_0)\neq 0$, otherwise $u(r,\gamma_0)\equiv 0$ $(0<r<R)$ by the uniqueness of the solution of the ODE.
Since $u(R_0,\gamma_0)=0$, we can apply the implicit function theorem to $u(r,\gamma)=0$.
Then, there is a $C^1$-function $R=R(\gamma)$ defined on $|\gamma-\gamma_0|<\varepsilon$ such that $u(R(\gamma),\gamma)=0$ and $R(\gamma_0)=R_0$.
Because of the continuity of $u(r,\gamma)$, $u(r,\gamma)>0$ in $\{(r,\gamma);\ 0<r<R(\gamma),\ |\gamma-\gamma_0|<\varepsilon\}$.
Thus, $(R(\gamma),u(r,\gamma))$ is a solution of (\ref{DD}).
The implicit function theorem also says that all solutions of (\ref{DD}) near $(R_0,u_0(r))$ are $\{(R(\gamma),u(r,\gamma))\}_{|\gamma-\gamma_0|<\varepsilon}$ and that the mapping $\gamma\mapsto (R(\gamma),u(r,\gamma))$ is of class $C^1$.
The proof is complete.
\end{proof}
\begin{lemma}\label{S3L2}
Suppose that $p>1$.
Let $U(\theta,\Gamma)$ be the solution of (\ref{IVP}).
Then $U(\,\cdot\,,\Gamma)$ has the first positive zero $\Theta(\Gamma)\in (0,\pi)$.
\end{lemma}
\begin{proof}
Let $U$ be the solution of (\ref{IVP}).
By the equation in (\ref{IVP}) we have
\begin{equation}\label{S3L2E1}
(U'\sin^{N-1}\theta)'+|U|^{p-1}U\sin^{N-1}\theta=0.
\end{equation}
Integrating (\ref{S3L2E1}) over $[0,\theta]$, we have
\begin{equation}\label{S3L2E1+}
U'(\theta)=-\frac{1}{\sin^{N-1}\theta}\int_0^{\theta}|U(\varphi)|^{p-1}U(\varphi)\sin^{N-1}\varphi d\varphi.
\end{equation}
Thus,
\begin{equation}\label{S3L2E2}
\textrm{if $U(\theta)>0$ for $\theta\in[0,\theta_0)$, then $U'(\theta)<0$ for $\theta\in(0,\theta_0]$}.
\end{equation}
By contradiction we prove the statement of the lemma.
Suppose the contrary, i.e., $U(\theta)>0$ for $\theta\in[0,\pi)$.
By (\ref{S3L2E2}) we see that $U'(\theta)<0$ for $\theta\in(0,\pi)$.
Let $\theta_1$ and $\theta_2$ be such that $0<\theta_1<\theta_2<\pi$.
We let $\theta>\theta_2$.
Integrating (\ref{S3L2E1}) over $[\theta_1,\theta]$, we have
\[
U'(\theta)=-\frac{C(\theta)}{\sin^{N-1}\theta},
\]
where
\[
C(\theta):=|U'(\theta_1)|\sin^{N-1}\theta_1+\int_{\theta_1}^{\theta}|U(\varphi)|^{p-1}U(\varphi)\sin^{N-1}\varphi d\varphi.
\]
We have
\begin{align*}
C(\theta_2)&=|U'(\theta_1)|\sin^{N-1}\theta_1+\int_{\theta_1}^{\theta_2}|U(\varphi)|^{p-1}U(\varphi)\sin^{N-1}\varphi d\varphi\\
&\ge|U'(\theta_1)|\sin^{N-1}\theta_1+U(\theta_2)^p\int_{\theta_1}^{\theta_2}\sin^{N-1}\varphi d\varphi\\
&>0.
\end{align*}
Since $\theta_2<\theta$, $C(\theta_2)<C(\theta)$.
Therefore,
\begin{equation}\label{S3L2E3}
U'(\theta)<-\frac{C(\theta_2)}{\sin^{N-1}\theta}\ \ \textrm{for}\ \ \theta>\theta_2.
\end{equation}
Integrating (\ref{S3L2E3}) over $[\theta_2,\theta]$, we have
\[
U(\theta)\le U(\theta_2)-C(\theta_2)\int_{\theta_2}^{\theta}\frac{d\varphi}{\sin^{N-1}\varphi}.
\]
Hence, $U(\theta)\to -\infty$ as $\theta\uparrow\pi$.
This contradicts the assumption.
Thus, there exists the first positive zero $\Theta(\Gamma)\in(0,\pi)$.
\end{proof}
As we see in the following lemma, the solution set of (\ref{D}) is a curve and it can be parametrized by $\gamma$.
\begin{lemma}\label{S3L3}
Suppose that $p>1$.
There is a $C^1$-mapping $\gamma\mapsto (R(\gamma),u(r,\gamma))$ defined on $(0,\infty)$ such that all regular solutions of (\ref{D}) can be described as $(R(\gamma),u(r,\gamma))$.
Specifically, for each $\gamma>0$, $R(\gamma)$ is defined and $0<R(\gamma)<\infty$.
\end{lemma}
\begin{proof}
Let $u(r,\gamma)$ be the solution of (\ref{DD}).
Because of Lemma~\ref{S3L2}, the solution $u(\,\cdot\,,\gamma)$ of (\ref{DD}) also has the first positive zero $R(\gamma)\in (0,\infty)$.
The first positive zero $R(\gamma)$ is defined for every $\gamma>0$, and $0<R(\gamma)<\infty$ for $\gamma>0$.
By Lemma~\ref{S3L1} we see that $R(\gamma)$ is of class $C^1$.
It is clear that $\{(R(\gamma),u(r,\gamma))\}_{\gamma>0}$ is the set of all regular solutions of (\ref{D}).
The proof is complete.
\end{proof}
\section{Singular solution}
In this section we show that (\ref{D}) has a singular solution $(R^*,u^*(r))$.
Let $u(r)$ be a solution of (\ref{D}).
We use the change of variables
\begin{equation}\label{y}
y(t):=2^{-\frac{q}{p-1}}\frac{u(r)}{\bar{u}^*(r)}\ \ \textrm{and}\ \ t:=\frac{1}{m}\log r.
\varepsilonnd{equation}
Here, $\bar{u}^*(r)$ is defined by (\ref{S2E0}), $m$ is defined by (\ref{m}).
Then $y$ satisfies
\begin{equation}\label{S4E1}
y''+\alpha y'-y+y^p+B_0(t)y^p+B_1(t)y=0,
\varepsilonnd{equation}
where $\alpha$ is defined by (\ref{alpha}),
\begin{equation}\label{S4E1+}
B_0(t):=\left(1+e^{2mt}\right)^q-1,\ \ \textrm{and}\ \ B_1(t):=\frac{N(N-2)e^{2mt}}{(1+e^{2mt})^2}.
\end{equation}
Note that $B_0(t)>0$ and $B_1(t)>0$.
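Moreover, expanding (\ref{S4E1+}) as $t\to-\infty$ gives
\[
B_0(t)=qe^{2mt}+O(e^{4mt}),\qquad B_1(t)=N(N-2)e^{2mt}+O(e^{4mt}),
\]
so both perturbation terms in (\ref{S4E1}) decay at the rate $e^{2mt}$; this is the rate appearing in (\ref{S4L1E1+}) below.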
We construct the singular solution near $t=-\infty$.
\begin{lemma}\label{S4L1}
Suppose that $p>p_{\rm S}$.
Assume that the problem
\begin{equation}\label{S4L1E1}
\begin{cases}
y''+\alpha y'-y+y^p+B_0(t)y^p+B_1(t)y=0,\\
y(t)\rightarrow 1\ \textrm{as}\ t\rightarrow -\infty
\end{cases}
\end{equation}
has a solution $y^*(t)$ near $t=-\infty$.
Then, $y^*(t)$ satisfies
\begin{equation}\label{S4L1E1+}
y^*(t)=1+O(e^{2mt})\ \textrm{as}\ t\rightarrow -\infty.
\end{equation}
\end{lemma}
\begin{proof}
Let $\tau:=-t$ and $\eta(\tau):=y(t)-1$.
Then $\eta(\tau)$ satisfies
\begin{equation}\label{S4L1E2}
\begin{cases}
\eta''-\alpha\eta'+(p-1)\eta=g(\tau), & \tau_0<\tau<\infty,\\
\eta(\tau)\rightarrow 0\ \textrm{as}\ \tau\rightarrow\infty,
\end{cases}
\end{equation}
where $\tau_0$ is large,
\begin{equation}\label{S4L1E3}
g(\tau):=-B_0(-\tau)(\eta+1)^p-B_1(-\tau)(\eta+1)-\varphi(\eta),
\end{equation}
\[
\varphi(\eta):=(1+\eta)^p-1-p\eta.
\]
There are three cases:
\begin{equation}\label{S4L1E31}
\textrm{(1)}\ p-1>\left(\frac{\alpha}{2}\right)^2,\quad
\textrm{(2)}\ p-1<\left(\frac{\alpha}{2}\right)^2,\quad
\textrm{(3)}\ p-1=\left(\frac{\alpha}{2}\right)^2.
\end{equation}
We consider only the case (1).
The other cases can be similarly treated.
Because the linearly independent solutions of the homogeneous equation associated with the equation of (\ref{S4L1E2}) becomes unbounded as $\tau\rightarrow\infty$, we have
\[
\eta(\tau)=\frac{e^{\frac{\alpha\tau}{2}}}{\beta}\int_{\tau}^{\infty}e^{-\frac{\alpha}{2}\sigma}\sin (\beta(\sigma-\tau))g(\sigma)d\sigma,
\]
where $\beta:=\sqrt{(p-1)-\left(\frac{\alpha}{2}\right)^2}$.
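This is the variation-of-parameters representation of the solution of (\ref{S4L1E2}) decaying as $\tau\rightarrow\infty$: in case (1) the homogeneous equation $\eta''-\alpha\eta'+(p-1)\eta=0$ has the fundamental system
\[
e^{\frac{\alpha\tau}{2}}\cos(\beta\tau),\qquad e^{\frac{\alpha\tau}{2}}\sin(\beta\tau),
\]
both of which grow as $\tau\rightarrow\infty$ because $\alpha>0$, and a direct computation shows that the integral above satisfies the equation with right-hand side $g$.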
If $|\eta|$ is small, then there are a small $\varepsilon>0$ and $\tau_{\varepsilon}$ such that
\begin{equation}\label{S4L1E4}
|\varphi(\eta)|\le |(1+\eta)^p-1-p\eta|\le\varepsilon|\eta|\ (\tau>\tau_{\varepsilon}).
\end{equation}
By (\ref{S4L1E3}) and (\ref{S4L1E4}) we have
\[
|g(\tau)|\le C_0e^{-2m\tau}+\varepsilon|\eta(\tau)|\ (\tau>\tau_{\varepsilon}).
\]
Using the same method as in the proof of Merle-Peletier~\cite[Lemma~3.1]{MP91}, we have $\eta(\tau)=O(e^{-2m\tau})$ as $\tau\to\infty$.
Therefore, (\ref{S4L1E1+}) holds.
See \cite[Lemma~6.3]{Mi14a} for details.
\end{proof}
\begin{lemma}\label{S4L2}
Suppose that $p>p_{\rm S}$.
The problem (\ref{S4L1E1}) has a unique solution near $t=-\infty$.
\end{lemma}
\begin{proof}
There are three cases (\ref{S4L1E31}) as in the proof of Lemma~\ref{S4L1}.
We consider only the case (1).
We transform (\ref{S4E1}) to the integral equation
\begin{equation}\label{S4L2E1}
\eta(\tau)=\mathcal{F}(\eta)(\tau).
\end{equation}
In the case (1) $\mathcal{F}$ becomes
\[
\mathcal{F}(\eta)(\tau)=\frac{e^{\frac{\alpha\tau}{2}}}{\beta}\int_{\tau}^{\infty}e^{-\frac{\alpha}{2}\sigma}\sin(\beta(\sigma-\tau))g(\sigma)d\sigma.
\]
By $\|\,\cdot\,\|$ we denote $\left\|\,\cdot\,\right\|_{C^0[\tau_0,\infty)}$.
We set $X:=\{\eta(\tau)\in C^0[\tau_0,\infty);\ \|\eta(\tau)\|<\infty\}$ and
$\mathcal{B}:=\{\eta(\tau)\in X;\ \|\eta\|<\delta\}$.
If $\delta>0$ is small, then we can show that $\mathcal{F}(\mathcal{B})\subset\mathcal{B}$ and $\mathcal{F}$ is a contraction mapping on $\mathcal{B}$, using Lemma~\ref{S4L1}.
By the contraction mapping theorem we see that (\ref{S4L2E1}) has a unique solution in $\mathcal{B}$.
We omit the details.
\end{proof}
Let $y^*(t)$ be the solution of (\ref{S4L1E1}) obtained in Lemma~\ref{S4L2}.
We define
\begin{equation}\label{S4E2}
u^*(r)=2^{\frac{q}{p-1}}ar^{-\mu}y^*(\frac{1}{m}\log r).
\end{equation}
\begin{corollary}
Suppose that $p>p_{\rm S}$.
Let $u^*(r)$ be defined by (\ref{S4E2}). Then
\begin{equation}\label{S4C2E1}
u^*(r)=2^{\frac{q}{p-1}}ar^{-\mu}(1+o(1))\ \textrm{as}\ r\downarrow 0.
\end{equation}
\begin{equation}\label{S4C2E2}
(u^*)'(r)=-2^{\frac{q}{p-1}}\mu ar^{-\mu-1}(1+o(1))\ \textrm{as}\ r\downarrow 0.
\end{equation}
\end{corollary}
\begin{proof}
By (\ref{S4E2}) and Lemmas~\ref{S4L1} and \ref{S4L2} we obtain (\ref{S4C2E1}).
Differentiating (\ref{S4L2E1}) in $\tau$, we have
\[
\eta'(\tau)=\frac{\alpha}{2\beta}e^{\frac{\alpha\tau}{2}}\int_{\tau}^{\infty}e^{-\frac{\alpha}{2}\sigma}\sin(\beta(\sigma-\tau))g(\sigma)d\sigma
-e^{\frac{\alpha\tau}{2}}\int_{\tau}^{\infty}e^{-\frac{\alpha}{2}\sigma}\cos(\beta(\sigma-\tau))g(\sigma)d\sigma.
\]
We have that $\eta'(\tau)=O(e^{-2m\tau})$, and hence,
\begin{equation}\label{S4C2E3-}
(y^*)'(t)=O(e^{2mt}).
\end{equation}
Differentiating (\ref{S4E2}) in $r$, we have
\begin{equation}\label{S4C2E3}
(u^*)'(r)=-2^{\frac{q}{p-1}}\mu ar^{-\mu-1}y^*(\frac{1}{m}\log r)+2^{\frac{q}{p-1}}ar^{-\mu}(y^*)'(\frac{1}{m}\log r)\frac{1}{mr}.
\end{equation}
Substituting (\ref{S4C2E3-}) and (\ref{S4L1E1+}) into (\ref{S4C2E3}), we have (\ref{S4C2E2}).
\end{proof}
\begin{corollary}\label{S4C4}
Suppose that $p>p_{\rm S}$.
Let $U^*(\theta):=A(r)^{-\frac{N-2}{2}}u^*(r)$ and $r:=\tan\frac{\theta}{2}$.
Then, (\ref{CE0}) and the following hold:
\begin{equation}\label{S4C4E0}
(U^*)'(\theta)=a\left(\cos\frac{\theta}{2}\right)^{-N}\left(2\tan\frac{\theta}{2}\right)^{-\mu-1}\left(-\mu+(N-2)\left(\sin\frac{\theta}{2}\right)^2+o(1)\right)\ \ \textrm{as}\ \ \theta\downarrow 0.
\end{equation}
\end{corollary}
\begin{proof}
By direct calculation we have (\ref{CE0}).
We have
\begin{align}
\frac{d}{d\theta}U^*(\theta)&=\frac{1}{A(r)}\frac{d}{dr}\left(A(r)^{-\frac{N-2}{2}}u^*(r)\right)\nonumber\\
&=-\frac{N-2}{2}A(r)^{\frac{N-2}{2}}A'(r)u^*(r)+A(r)^{-\frac{N}{2}}(u^*)'(r).\label{S4C4E1}
\end{align}
Substituting (\ref{S4C2E1}) and (\ref{S4C2E2}) into (\ref{S4C4E1}), we obtain (\ref{S4C4E0}).
\end{proof}
Since $u^*(r)$ satisfies the equation in (\ref{D}), $U^*(\theta)$ satisfies the equation in (\ref{EFODE}).
Then the domain of $U^*(\theta)$ can be extended.
In the following lemma we show that $U^*(\theta)$ has the first positive zero, and hence, $U^*(\theta)$ is a singular solution of (\ref{EFODE}).
\begin{lemma}\label{S4L3}
Suppose that $p>p_{\rm S}$.
Let $U^*(\theta):=A(r)^{-\frac{N-2}{2}}u^*(r)$ and $r:=\tan\frac{\theta}{2}$.
Then $U^*(\theta)$ has the first positive zero $\Theta^*\in(0,\pi)$.
Hence, $(\Theta^*,U^*(\theta))$ is the singular solution of (\ref{EFODE}).
\end{lemma}
\begin{proof}
First, we prove
\begin{equation}\label{S4L3E1}
(U^*)'(\theta)\sin^{N-1}\theta\to 0\ \ \textrm{as}\ \ \theta\downarrow 0.
\end{equation}
In fact, $(U^*)'(\theta)\sin^{N-1}\theta=O(\theta^{-\mu-1+N-1})$ and $-\mu-1+N-1=N-2-\frac{2}{p-1}>0$.
Hence, (\ref{S4L3E1}) holds.
Integrating (\ref{S3L2E1}) over $(0,\theta]$, we have (\ref{S3L2E1+}).
Hence, (\ref{S3L2E2}) holds.
The rest of the proof is the same as the proof of Lemma~\ref{S3L2}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{C}]
The singular solution $(\Theta^*,U^*(\theta))$ is established in Lemma~\ref{S4L3}, and (\ref{CE0}) is obtained in Corollary~\ref{S4C4}.
\end{proof}
\begin{remark}\label{rem1}
Let $(\Theta^*,U^*(\theta))$ be the singular solution of (\ref{EFODE}).
Let $u^*(r):=U^*(\theta)A(\tan\frac{\theta}{2})^{\frac{N-2}{2}}$, $r:=\tan\frac{\theta}{2}$, and $R^*:=\tan\frac{\Theta^*}{2}$.
Then $(R^*,u^*(r))$ is the singular solution of (\ref{D}).
\end{remark}
\section{Convergence to the singular solution as $\gamma\to\infty$}
Let $u(r,\gamma)$ be the solution of (\ref{DD}), and let $R(\gamma)$ be the first positive zero of $u(\,\cdot\,,\gamma)$.
Let $(R^*,u^*(r))$ be the singular solution of (\ref{D}) given in Remark~\ref{rem1}.
Our goal in this section is to prove the following:
\begin{lemma}\label{S5L2}
Suppose that $p>p_{\rm S}$.
Let $(R^*,u^*(r))$ be the singular solution given by Lemma~\ref{S4L3}.
As $\gamma\rightarrow\infty$,
\[
R(\gamma)\rightarrow R^*\ \ \textrm{and}\ \ u(r,\gamma)\rightarrow u^*(r)\ \textrm{in}\ C^2_{loc}(0,R^*].
\]
\end{lemma}
We postpone the proof of Lemma~\ref{S5L2}.
Let $y(t)$ be defined as (\ref{y}).
Then (\ref{DD}) is equivalent to the problem
\begin{equation}\label{S5E1}
\begin{cases}
y''+\alpha y'-y+y^p+B_0(t)y^p+B_1(t)y=0,& -\infty<t<t_{\Theta},\\
2^{\frac{q}{p-1}}ae^{-m\mu t}y(t)\rightarrow\gamma\ \textrm{as}\ t\rightarrow -\infty,\\
e^{-mt}(e^{-m\mu t}y(t))'\rightarrow 0\ \textrm{as}\ t\rightarrow -\infty,
\end{cases}
\end{equation}
where $t_{\Theta}:=\frac{1}{m}\log\tan\frac{\Theta}{2}$.
We define
\[
s:=t+\frac{\log\gamma}{m\mu}\ \ \textrm{and}\ \ \hat{y}(s):=y(t).
\]
Then (\ref{S5E1}) becomes
\begin{equation}\label{S5E2}
\begin{cases}
\hat{y}''+\alpha\hat{y}'-\hat{y}+\hat{y}^p+B_0(s-\frac{\log\gamma}{m\mu })\hat{y}^p+B_1(s-\frac{\log\gamma}{m\mu})\hat{y}=0, & -\infty<s<t_{\Theta}+\frac{\log\gamma}{m\mu},\\
2^{\frac{q}{p-1}}ae^{-m\mu s}\hat{y}(s)\rightarrow 1\ \textrm{as}\ s\rightarrow -\infty,\\
e^{-ms}(e^{-m\mu s}\hat{y}(s))'\rightarrow 0\ \textrm{as}\ s\rightarrow -\infty.
\end{cases}
\end{equation}
For each fixed $s$, as $\gamma\rightarrow\infty$, $B_0(s-\frac{\log\gamma}{m\mu })\rightarrow 0$ and $B_1(s-\frac{\log\gamma}{m\mu})\rightarrow 0$.
Therefore, we expect that $\hat{y}(s)$ converges to the solution of (\ref{S2E2}) in a certain sense.
\begin{lemma}\label{S5L3}
Suppose that $p>p_{\rm S}$.
Let $\bar{y}(s)$ be the solution of (\ref{S2E2}) with $\bar{\gamma}:=2^{-\frac{q}{p-1}}$.
For each $s_0\in\mathbb{R}$, as $\gamma\rightarrow\infty$,
\begin{align*}
\hat{y}(s)\rightarrow\bar{y}(s)\ \textrm{uniformly in}\ s\in(-\infty,s_0]\ \ \textrm{and}\ \
\hat{y}'(s)\rightarrow\bar{y}'(s)\ \textrm{uniformly in}\ s\in(-\infty,s_0].
\end{align*}
\end{lemma}
\begin{proof}
Multiplying the equation in (\ref{S5E1}) by $e^{m(N-2-\mu)t}$, we have
\[
\left\{ e^{m(N-2-\mu)t}(y'-m\mu y)\right\}'=-e^{m(N-2-\mu)t}(y^p+B_0(t)y^p+B_1(t)y)<0,
\]
where we use $m^2\mu(N-2-\mu)=1$.
Since
\begin{equation}\label{S5L3E1}
e^{m(N-2-\mu)t}(y'-m\mu y)\rightarrow 0\ \textrm{as}\ t\rightarrow -\infty,
\end{equation}
we see that $y'-m\mu y<0$. Since
\[
2^{\frac{q}{p-1}}ae^{-m\mu t}y(t)\rightarrow\gamma\ \textrm{as}\ t\rightarrow -\infty,
\]
we have
\[
y(t)<2^{-\frac{q}{p-1}}a^{-1}\gamma e^{m\mu t}=2^{-\frac{q}{p-1}}a^{-1}e^{m\mu s}.
\]
Since $\hat{y}(s)=y(t)$,
\begin{equation}\label{S5L3E2}
\hat{y}(s)<2^{-\frac{q}{p-1}}a^{-1}e^{m\mu s}.
\end{equation}
Multiplying the equation in (\ref{S5E2}) by $e^{m(N-2-\mu)s}$, we have
\begin{equation}\label{S5L3E3}
\left\{e^{m(N-2-\mu)s}(\hat{y}'-m\mu\hat{y})\right\}'=-e^{m(N-2-\mu)s}(\hat{y}^p+\widehat{B}_0(s)\hat{y}^p+\widehat{B}_1(s)\hat{y}),
\end{equation}
where $\widehat{B}_0(s):=B_0(s-\frac{\log\gamma}{m\mu})$ and $\widehat{B}_1(s):=B_1(s-\frac{\log\gamma}{m\mu})$.
Integrating (\ref{S5L3E3}) and solving it for $\hat{y}'$, we have
\[
\hat{y}'(s)=m\mu\hat{y}(s)-e^{-m(N-2-\mu)s}\int_{-\infty}^s\left(\hat{y}(\tau)^p+\widehat{B}_0(\tau)\hat{y}(\tau)^p+\widehat{B}_1(\tau)\hat{y}(\tau)\right)e^{m(N-2-\mu)\tau}d\tau,
\]
where we use (\ref{S5L3E1}).
Using (\ref{S5L3E2}), we have $|\hat{y}(\tau)^p+\widehat{B}_0(\tau)\hat{y}(\tau)^p+\widehat{B}_1(\tau)\hat{y}(\tau)|\le C_0e^{m\mu\tau}$, and there holds
\begin{align}
|\hat{y}'(s)|&\le m\mu|\hat{y}(s)|+e^{-m(N-2-\mu)s}\int_{-\infty}^sC_0e^{m\mu\tau}e^{m(N-2-\mu)\tau}d\tau\nonumber\\
&=m\mu|\hat{y}(s)|+\frac{C_0}{m(N-2)}e^{m\mu s}\label{S5L3E4}\\
&\le C_1e^{m\mu s}.\nonumber
\end{align}
Therefore, $\{\hat{y}(s)\}_{\gamma}$ is equicontinuous on $(-\infty,s_0]$.
It follows from the Arzel\`a-Ascoli theorem that for each fixed $s_1\in (-\infty,s_0]$, as $\gamma\rightarrow\infty$, $\hat{y}(s)$ uniformly converges to a certain function $\hat{y}_0(s)$ on $[s_1,s_0]$.
Because of (\ref{S5L3E2}), this convergence is uniform on $(-\infty,s_0]$.
By (\ref{S5L3E4}) we see that $\{\hat{y}'(s)\}_{\gamma}$ is bounded on $(-\infty,s_0]$.
Because of (\ref{S5E2}), $\{\hat{y}''(s)\}_{\gamma}$ is also bounded on $(-\infty,s_0]$.
By the same argument as before we see that as $\gamma\rightarrow\infty$, $\hat{y}'(s)$ converges to a certain function $\hat{y}_1(s)$ on $(-\infty,s_0]$.
Taking the limit of $\hat{y}(s)=\int_{-\infty}^s\hat{y}'(\tau)d\tau$, we see that $\hat{y}_0(s)=\int_{-\infty}^s\hat{y}_1(\tau)d\tau$, where by (\ref{S5L3E4}) we can use the dominated convergence theorem.
Hence, $\hat{y}_0(s)$ is of class $C^1$ and $\hat{y}_1=\hat{y}_0'$.
By (\ref{S5E2}) we see that $\hat{y}''(s)$ also converges to a certain function $\hat{y}_2(s)$ on $(-\infty,s_0]$.
By the same argument as before, we see that $\hat{y}_2=\hat{y}_1'(=\hat{y}_0'')$.
Taking the limit of (\ref{S5E2}), we see that $\hat{y}_0(s)$ satisfies (\ref{S2E2}) with $\bar{\gamma}=2^{-\frac{q}{p-1}}$.
Thus $\hat{y}_0=\bar{y}$.
We obtain the conclusion.
\end{proof}
Let $z(t,\gamma):=y_t(t,\gamma)$.
Then $(y,z)$ satisfies
\begin{equation}\label{S5E3}
\begin{cases}
y'=z\\
z'=-\alpha z+y-y^p-B_0(t)y^p-B_1(t)y.
\end{cases}
\end{equation}
Proposition~\ref{S2P1} (i) says that $(\bar{y}(s),\bar{z}(s))$ converges to $(1,0)$ if $p>p_{\rm S}$.
This fact and Lemma~\ref{S5L3} indicate that $(y(t,\gamma),z(t,\gamma))$ approaches to $(1,0)$ as $\gamma\rightarrow\infty$ along $t=s_0-\frac{\log\gamma}{m\mu}$ provided that $s_0$ is chosen large enough.
\begin{lemma}\label{S5L4}
Suppose that $p>p_{\rm S}$.
Let
\[
H(y,z):=\frac{z^2}{2}-\frac{y^2-1}{2}+\frac{y^{p+1}-1}{p+1},
\]
and let $\Omega_{\varepsilon}:=\{(y,z)\in\mathbb{R}^2;\ H(y,z)<\varepsilon,\ y>0\}$.
Then the following hold:\\
(i) Let $\varepsilon>0$ be fixed. For each large $t_0>0$, $(y(-t_0,\gamma),z(-t_0,\gamma))\in\Omega_{\varepsilon}$ provided that $\gamma>0$ is large.\\
(ii) If $(y(-t_0,\gamma),z(-t_0,\gamma))\in\Omega_{\varepsilon}$, then there is $T_{\varepsilon}<0$ independent of $t_0$ such that $(y(t,\gamma),z(t,\gamma))\in\Omega_{2\varepsilon}$ for $t\in[-t_0,T_{\varepsilon}]$.
\end{lemma}
\begin{proof}
Because of Lemma~\ref{S5L3}, for each $t_0$, as $\gamma\rightarrow\infty$,
\[
y(-t_0)=\hat{y}(s)\rightarrow\bar{y}(s)=\bar{y}(-t_0+\frac{\log\gamma}{m\mu}),
\]
where $s=-t_0+\frac{\log\gamma}{m\mu}$.
We similarly see that $z(-t_0)\rightarrow\bar{z}(-t_0+\frac{\log\gamma}{m\mu})$.
Since $(\bar{y},\bar{z})$ converges to $(1,0)$ and $\Omega_{\varepsilon}$ is a neighborhood of $(1,0)$, (i) holds.
We define $E(y,z,t)$ by
\begin{equation}\label{S5L4E0}
E(y,z,t):=H(y,z)+B_0(t)\frac{y^{p+1}}{p+1}+B_1(t)\frac{y^2}{2}.
\end{equation}
Let $y(t)$ be the solution of (\ref{S5E1}).
By direct calculation we have
\begin{equation}\label{S5L4E1}
\frac{d}{dt}E(y(t),z(t),t)=-\alpha y'(t)^2+B_0'(t)\frac{y(t)^{p+1}}{p+1}+B_1'(t)\frac{y(t)^2}{2}.
\end{equation}
Let $\xi:=\left(\frac{p+1}{2}\right)^{\frac{1}{p-1}}$.
Let $\varepsilon>0$ be small such that $\Omega_{2\varepsilon}\subset\{0\le y\le\xi\}$.
We can choose $T<0$ such that
\begin{equation}\label{S5L4E2}
B_0(T)\frac{\xi^{p+1}}{p+1}<\frac{\varepsilon}{8}\ \ \textrm{and}\ \ B_1(T)\frac{\xi^2}{2}<\frac{\varepsilon}{8}.
\end{equation}
We show that $(y(t),z(t))\in\Omega_{2\varepsilon}$ for $t\in[-t_0,T]$ if $(y(-t_0,\gamma),y_t(-t_0,\gamma))\in\Omega_{\varepsilon}$.
Suppose the contrary, i.e., we assume that
\begin{equation}\label{S5L4E2+}
(y(t),z(t))\in\Omega_{2\varepsilon}\ (-t_0\le t<T)\ \ \textrm{and}\ \ (y(T),z(T))\not\in\Omega_{2\varepsilon}.
\end{equation}
Integrating (\ref{S5L4E1}) over $[-t_0,T]$, we have
\begin{align}
E(y(T),z(T),T)-& E(y(-t_0),z(-t_0),-t_0)\nonumber\\
&\le\int_{-t_0}^T\left(B_0'(t)\frac{y(t)^{p+1}}{p+1}+B_1'(t)\frac{y(t)^2}{2}\right)dt\nonumber\\
&\le\frac{\xi^{p+1}}{p+1}\int_{-t_0}^TB_0'(t)dt+\frac{\xi^2}{2}\int_{-t_0}^TB_1'(t)dt\nonumber\\
&\le\frac{\xi^{p+1}}{p+1}B_0(T)+\frac{\xi^2}{2}B_1(T)\nonumber\\
&\le\frac{\varepsilon}{8}+\frac{\varepsilon}{8}=\frac{\varepsilon}{4},\label{S5L4E3}
\end{align}
where we use (\ref{S5L4E2}) and the two inequalities
\begin{align*}
B_0'(t)&=2mq(1+e^{2mt})^{q-1}e^{2mt}>0\ \ \textrm{for}\ \ t\in\mathbb{R},\ \textrm{and},\\
B_1'(t)&=\frac{2mN(N-2)(1-e^{2mt})e^{2mt}}{(1+e^{2mt})^3}>0\ \ \textrm{for}\ \ t<0.
\end{align*}
Using (\ref{S5L4E3}) and (\ref{S5L4E0}), we have
\begin{align*}
H(y(T),z(T))&\le H(y(-t_0),z(-t_0))+B_0(-t_0)\frac{y(-t_0)^{p+1}}{p+1}+B_1(-t_0)\frac{y(-t_0)^2}{2}\\
&\quad-\left(B_0(T)\frac{y(T)^{p+1}}{p+1}+B_1(T)\frac{y(T)^2}{2}\right)+\frac{\varepsilon}{4}\\
&\le \varepsilon+\frac{\varepsilon}{8}+\frac{\varepsilon}{8}+\frac{\varepsilon}{4}=\frac{3}{2}\varepsilon.
\end{align*}
Hence, $(y(T),z(T))\in\Omega_{3\varepsilon/2}\subset\Omega_{2\varepsilon}$, which contradicts (\ref{S5L4E2+}).
The proof of (ii) is complete.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{S5L2}]
Let $\{\gamma_n\}_{n=1}^{\infty}$ be a sequence diverging to $\infty$.
Let $y_n:=y(t,\gamma_n)$ be the solution of (\ref{S5E1}), and let $z_n:=y_n'$.
We fix $\varepsilon>0$.
By Lemma~\ref{S5L4} (i) we see that for arbitrarily large $t_0>0$, $(y_n(-t_0),z_n(-t_0))\in\Omega_{\varepsilon}$ provided that $n$ is large.
Because of Lemma~\ref{S5L4} (ii), there is $T<0$ such that $(y_n(t),z_n(t))\in\Omega_{2\varepsilon}$ for $t\in [-t_0,T]$.
Thus, $\{(y_n(t),z_n(t))\}$ is bounded in $(C^0[-t_0,T])^2$.
It follows from the equation in (\ref{S5E1}) that $\{y_n''(t)\}$ is bounded in $C^0[-t_0,T]$.
Differentiating the equation in (\ref{S5E1}), we see that $\{z_n''(t)\}$ is also bounded in $C^0[-t_0,T]$.
Thus by the Ascoli-Arzel\`a theorem we see that $\{(y_n,z_n)\}$ converges to some pair of functions $(y_*(t),z_*(t))$ in $(C^1[-t_0,T])^2$.
Since $(y_n,z_n)$ satisfies the equation in (\ref{S5E3}), $(y_*,z_*)$ satisfies the same equation.
Next, we prove $y_*=y^*$, where $y^*$ is the solution of (\ref{S4L1E1}).
If $y_*=y^*$, then $y_n\rightarrow y^*$ in $C^2[-t_0,T]$, and $u\rightarrow u^*$ in $C^2(I)$ for some interval $I$.
Let $r_0\in I$ be fixed.
Because of the continuous dependence of $u$ in $C^2_{loc}(0,R^*]$ with respect to $(u(r_0),u'(r_0))$, $u\rightarrow u^*$ in $C^2_{loc}(0,R^*]$.
Moreover, $R(\gamma)\rightarrow R^*$.
By Lemma~\ref{S4L2} it suffices to show that
\begin{equation}\label{S5L2E0}
y_*(t)\rightarrow 1\ \textrm{as}\ t\rightarrow -\infty.
\end{equation}
We prove (\ref{S5L2E0}) by contradiction.
Suppose the contrary, i.e., there are $\delta>0$ and a sequence $\{t_k\}$ such that $t_k\rightarrow -\infty$ and $(y_*(t_k),z_*(t_k))\not\in\Omega_{\delta}$ for all $k\ge 1$.
We choose $\varepsilon=\delta/4$.
By Lemma~\ref{S5L3} for each large $s_0>0$, if $\gamma$ is large, then $(\hat{y}(s_0,\gamma),\hat{y}_s(s_0,\gamma))\in\Omega_{\varepsilon}$.
Since $\hat{y}(s_0,\gamma_n)=y(t,\gamma_n)=y_n(s_0-\frac{\log\gamma_n}{m\mu})$ and $\hat{y}_s(s_0,\gamma_n)=y_t(t,\gamma_n)=z_n(s_0-\frac{\log\gamma_n}{m\mu})$, $(y_n(s_0-\frac{\log\gamma_n}{m\mu}),z_n(s_0-\frac{\log\gamma_n}{m\mu}))\in\Omega_{\varepsilon}$ provided that $n$ is large.
By Lemma~\ref{S5L4} (ii) we see that
\[
(y_n(t),z_n(t))\in\Omega_{2\varepsilon}\subset\Omega_{\delta}\ \ \textrm{for}\ \ t\in[s_0-\frac{\log\gamma_n}{m\mu},T_{\varepsilon}],
\]
where $T_{\varepsilon}$ is independent of $n$.
Since $s_0-\frac{\log\gamma_n}{m\mu}\to -\infty$ $(n\to\infty)$, we can choose $n$ such that $[s_0-\frac{\log\gamma_n}{m\mu},T_{\varepsilon}]$ includes an element of $\{t_k\}$.
We obtain a contradiction.
\end{proof}
\section{Uniqueness of a small solution}
Let $u(r,\gamma)$ be the solutions of (\ref{DD}), and let $R(\gamma)$ be the first positive zero of $u(\,\cdot\,,\gamma)$.
\begin{lemma}\label{S6L1}
Suppose that $p>1$. Then
\[
R(\gamma)\rightarrow\infty\ \textrm{as}\ \gamma\downarrow 0.
\]
\varepsilonnd{lemma}
\begin{proof}
Let $v(r):=A(r)^{-\frac{N-2}{2}}u(r)$.
Then, $v$ satisfies
\[
(r^{N-1}A(r)^{N-2}v')'+r^{N-1}A(r)^Nv^p=0.
\]
Integrating this equation over $[0,r]$, we have
\[
v'(r)=-\frac{1}{r^{N-1}A(r)^{N-2}}\int_0^rs^{N-1}A(s)^Nv^pds\le 0,
\]
where we use $v'(0)=0$.
Let $\delta:=2^{-\frac{N-2}{2}}\gamma$.
Since $0\le v\le\delta$,
\[
-v'(r)\le\frac{1}{r^{N-1}A(r)^{N-2}}\int_0^rs^{N-1}A(s)^N\delta^pds,
\]
and hence,
\begin{equation}\label{S6L1E1}
-\frac{v'(r)}{2\delta^p}\le\frac{1}{r^{N-1}A(r)^{N-2}}\int_0^r(sA(s))^{N-1}ds.
\end{equation}
We have
\[
\int_0^r(sA(s))^{N-1}ds\le
\begin{cases}
\frac{2^{N-1}}{N}+\int_1^r(sA(s))ds=C_0+\log (1+r^2), & 1\le r,\\
\int_0^r(2s)^{N-1}ds=\frac{2^{N-1}r^N}{N}, & 0\le r\le 1.
\end{cases}
\]
Integrating (\ref{S6L1E1}) over $[0,R(\gamma)]$, we have
\begin{align*}
-\int_0^{R(\gamma)}\frac{v'(r)}{2\delta^p}\,dr&\le\int_0^{R(\gamma)}\frac{1}{r^{N-1}A(r)^{N-2}}\int_0^r(sA(s))^{N-1}dsdr\\
&\le\int_0^1\frac{2^{N-1}r}{NA(r)^{N-2}}dr+\int_1^{R(\gamma)}\frac{C_0+\log(r^2+1)}{r^{N-1}A(r)^{N-2}}dr\ \ \textrm{for}\ \ R(\gamma)>1.
\end{align*}
The first positive zero of $v(\,\cdot\,)$ is equal to that of $u(\,\cdot\,)$, i.e., $R(\gamma)$.
Therefore, $v(R(\gamma))=0$.
Since $v(0)=\delta$ and $C_1:=\int_0^1\frac{2^{N-1}r}{NA(r)^{N-2}}dr<\infty$,
\begin{equation}\label{S6L1E3}
\frac{1}{2\delta^{p-1}}\le C_1+\int_1^{R(\gamma)}\frac{C_0+\log(1+r^2)}{r^{N-1}A(r)^{N-2}}dr\ \ \textrm{for}\ \ R(\gamma)>1.
\end{equation}
As $\delta\downarrow 0$, the left-hand side of (\ref{S6L1E3}) diverges, and hence the right-hand side must diverge as well, which forces $R(\gamma)\rightarrow\infty$.
Since $\delta\downarrow 0$ if and only if $\gamma\downarrow 0$, we conclude that $R(\gamma)\rightarrow\infty$ as $\gamma\downarrow 0$.
\end{proof}
\begin{lemma}\label{S6L2}
Suppose that $p>1$.
There is a $\gamma_0>0$ such that $R'(\gamma)<0$ for $\gamma\in(0,\gamma_0)$.
In particular, if $\gamma\in(0,\gamma_0)$, then $u(r,\gamma)$ is nondegenerate in the space of radial functions.
\end{lemma}
\begin{proof}
By $\mathcal{L}$ we denote
\[
\mathcal{L}:=\frac{d^2}{dr^2}+\frac{N-1}{r}\frac{d}{dr}+\frac{N(N-2)}{4}A(r)^2.
\]
We define $w(r):=u_{\gamma}(r,\gamma)$.
Then $w(r)$ satisfies
\begin{equation}\label{S6L2E1}
\begin{cases}
\left(\mathcal{L}+{pA(r)^{-q}u(r,\gamma)^{p-1}}\right)w=0, & 0<r<R(\gamma),\\
w(0)=1,\ w'(0)=0.\\
\end{cases}
\end{equation}
We show that
\begin{equation}\label{S6L2E2}
w(R(\gamma))<0.
\end{equation}
Let $\psi_0(r):=A(r)^{\frac{N-2}{2}}(A(r)-1)$.
Then, by direct calculation we see that $\psi_0(r)$ satisfies
\[
\begin{cases}
\left(\mathcal{L}+NA(r)^2\right)\psi_0=0, & 0<r<\infty,\\
\psi_0(0)=2^{\frac{N-2}{2}},\ \psi_0'(0)=0.\\
\end{cases}
\]
Note that $\psi_0$ has a unique zero at $r=1$ on $[0,\infty)$ and that $\psi_0$ corresponds to the second eigenfunction of $\Delta_{\mathbb{S}^N}$ on the whole sphere.
Since $U(\theta)(=A(r)^{-\frac{N-2}{2}}u(r))$ satisfies (\ref{EFODE}), $U(\theta)$ is decreasing and $|U(\theta)|\le\Gamma$ $(0\le\theta\le\Theta)$, where $\Gamma=2^{-\frac{N-2}{2}}\gamma$.
Therefore, $|u(r)|\le 2^{-\frac{N-2}{2}}\gamma A(r)^{\frac{N-2}{2}}$ for $r\in[0,R(\gamma)]$.
We have
\[
{p{A(r)^{-q}}u(r,\gamma)^{p-1}}\le 2^{-\frac{(N-2)(p-1)}{2}}p\gamma^{p-1}A(r)^2\ \ \textrm{for}\ \ r\in[0,R(\gamma)].
\]
Thus, if $\gamma>0$ is small, then ${p{A(r)^{-q}}u(r,\gamma)^{p-1}}\le NA(r)^2$ for $r\in[0,R(\gamma)]$.
Hence, by the oscillation theorem for Sturm-Liouville equations (e.g., see Ince~\cite[pp.224--225]{I44}) we see that $w(r)$ oscillates more slowly than $\psi_0(r)$.
Since $R(\gamma)$ is large, $\psi_0(r)$ has exactly one zero on $[0,R(\gamma)]$, and hence $w(r)$ has at most one zero on $[0,R(\gamma)]$.
Let $\lambda_1$ be the first eigenvalue of the eigenvalue problem
\begin{equation}\label{S6L2E3}
\begin{cases}
\left(\mathcal{L}+{p{A(r)^{-q}}u(r,\gamma)^{p-1}}\right)\phi+\lambda\phi=0, & 0<r<R(\gamma),\\
\phi(R(\gamma))=0,\\
\phi(r)>0, & 0\le r<R(\gamma),\\
\phi'(0)=0.
\end{cases}
\end{equation}
We define
\[
\mathcal{H}[\psi]:=\int_0^{R(\gamma)}\left((\psi')^2-\frac{N(N-2)}{4}A(r)^2\psi^2-\frac{pu(r,\gamma)^{p-1}}{A(r)^q}\psi^2\right)r^{N-1}dr.
\]
Multiplying $\left(\mathcal{L}+{p{A(r)^{-q}}u(r,\gamma)^{p-1}}\right)u=(p-1)A(r)^{-q}u^p$ by $ur^{N-1}$ and integrating it, we have
\begin{equation}\label{S6L2E3+}
\mathcal{H}[u]=-(p-1)\int_0^{R(\gamma)}A(r)^{-q}u(r,\gamma)^{p+1}r^{N-1}dr<0.
\end{equation}
Using a variational characterization of $\lambda_1$ and (\ref{S6L2E3+}), we have
\begin{equation}\label{S6L2E4}
\lambda_1=\inf_{\psi\in X}\frac{\mathcal{H}[\psi]}{\left\|\psi\right\|^2_{L^2}}\le\frac{\mathcal{H}[u]}{\left\|u\right\|^2_{L^2}}<0,
\end{equation}
where $\left\|\psi\right\|_{L^2}:=\left(\int_0^{R(\gamma)}\psi^2r^{N-1}dr\right)^{1/2}$ and
\[
X:=\left\{\psi(r);\ \int_0^{R(\gamma)}\left((\psi')^2+\psi^2\right)r^{N-1}dr<\infty\ \ \textrm{and}\ \ \psi(R(\gamma))=0\right\}.
\]
The first eigenfunction $\phi_1(r)$ satisfies
\[
\begin{cases}
\left(\mathcal{L}+{p{A(r)^{-q}}u(r,\gamma)^{p-1}}+\lambda_1\right)\phi_1=0, & 0<r<R(\gamma),\\
\phi_1(0)=1,\ \phi_1'(0)=0.\\
\phi_1>0, & 0\le r<R(\gamma).\\
\end{cases}
\]
Since ${p{A(r)^{-q}}u(r,\gamma)^{p-1}}+\lambda_1<{p{A(r)^{-q}}u(r,\gamma)^{p-1}}$, by the oscillation theorem we see that $w(r)$ oscillates more rapidly than $\phi_1(r)$, and hence $w(r)$ has at least one zero on $[0,R(\gamma)]$.
Thus $w(r)$ has exactly one zero on $[0,R(\gamma)]$.
If $w(R(\gamma))=0$, then $w(r)>0$ on $[0,R(\gamma))$.
Therefore, $0$ is the first eigenvalue, which contradicts (\ref{S6L2E4}).
Thus, $w(R(\gamma))\neq 0$.
Since $w(0)>0$, $w(r)$ has exactly one zero on $(0,R(\gamma))$, which indicates that $w(R(\gamma))<0$.
We obtain (\ref{S6L2E2}).
Next, we prove the statements of the lemma, using (\ref{S6L2E2}).
Differentiating $u(R(\gamma),\gamma)=0$ in $\gamma$, we have $u_r(R(\gamma),\gamma)R'(\gamma)+u_{\gamma}(R(\gamma),\gamma)=0$.
It follows from Hopf's boundary point lemma that $u_r(R(\gamma),\gamma)<0$.
Hence,
\[
R'(\gamma)=-\frac{u_{\gamma}(R(\gamma),\gamma)}{u_r(R(\gamma),\gamma)}<0.
\]
Because of (\ref{S6L2E1}), $0$ is an eigenvalue of (\ref{S6L2E3}) if and only if $w(R(\gamma))=0$.
By (\ref{S6L2E2}) we see that $0$ is not an eigenvalue which means that $u(r,\gamma)$ is nondegenerate.
\end{proof}
\begin{remark}
In the above proof we show that $w(r)$ has one zero in $(0,R(\gamma))$ and $w(R(\gamma))<0$.
This indicates that the Morse index of $u$ in the space of radial functions is one.
We do not use this fact in this paper.
\end{remark}
\section{Infinitely many turning points}
First, we show that $R(\gamma)$ oscillates around $R^*$ as $\gamma\to\infty$.
\begin{lemma}\label{S7L1}
Suppose that $p_{\rm S}<p<p_{\rm JL}$.
Let $u(r,\gamma)$ be the solution of (\ref{DD}), and let $R(\gamma)$ be the first positive zero of $u(\,\cdot\,,\gamma)$.
Let $(R^*,u^*(r))$ be the singular solution of (\ref{D}) given in Remark~\ref{rem1}.
Then the following hold:\\
(i) $\mathcal{Z}_{(0,\min\{R(\gamma),R^*\})}[u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)]\rightarrow\infty$ as $\gamma\rightarrow\infty$.\\
(ii) $R(\gamma)$ oscillates around $R^*$ infinitely many times as $\gamma\to\infty$.
\end{lemma}
\begin{proof}
We prove (i), using a blow-up argument.
We change variables
\[
\tilde{u}(\rho,\gamma):=2^{-\frac{q}{p-1}}\gamma^{-1}u(r,\gamma)\ \ \textrm{and}\ \ \rho:=\gamma^{\frac{p-1}{2}}r.
\]
Then $\tilde{u}(\rho)$ satisfies
\begin{equation}\label{S7L1E1}
\begin{cases}
\tilde{u}''+\frac{N-1}{\rho}\tilde{u}'+\tilde{u}^p+\widetilde{B}_0(\rho,\gamma)\tilde{u}^p+\widetilde{B}_1(\rho,\gamma)\tilde{u}=0, & 0<\rho<\widetilde{R}(\gamma),\\
\tilde{u}(0)=1,\ \tilde{u}'(0)=0,\\
\end{cases}
\end{equation}
where $\widetilde{R}(\gamma):=\gamma^{\frac{p-1}{2}}R(\gamma)$ which is the first positive zero of $\tilde{u}(\,\cdot\,,\gamma)$,
\[
\widetilde{B}_0(\rho,\gamma):=\left(1+\frac{\rho^2}{\gamma^{p-1}}\right)^q-1,\ \ \textrm{and}\ \ \widetilde{B}_1(\rho,\gamma):=\frac{N(N-2)\gamma^{p-1}}{(\gamma^{p-1}+\rho^2)^2}.
\]
From Lemma~\ref{S5L2} it follows that $\widetilde{R}(\gamma)\to\infty$ as $\gamma\to\infty$.
Let $\rho_0>0$ be large.
If $\gamma$ is large, then the interval $[0,\rho_0]$ is included in $[0,\widetilde{R}(\gamma)]$.
Since $\widetilde{B}_0>0$ and $\widetilde{B}_1>0$, it is clear from the equation in (\ref{S7L1E1}) that $\tilde{u}(\rho)$ is decreasing on $[0,\widetilde{R}(\gamma)]$.
Therefore, $0\le\tilde{u}(\rho)\le 1$ on $[0,\rho_0]$ provided that $\gamma$ is large.
Since $|\widetilde{B}_0(\rho,\gamma)|$ and $|\widetilde{B}_1(\rho,\gamma)|$ uniformly converge to $0$ on $[0,\rho_0]$, $|\widetilde{B}_0(\rho,\gamma)\tilde{u}(\rho)^p|+|\widetilde{B}_1(\rho,\gamma)\tilde{u}(\rho)|\to 0$ in $C^0[0,\rho_0]$.
It follows from the equation in (\ref{S7L1E1}) that as $\gamma\to\infty$,
\begin{equation}\label{S7L1E2}
\tilde{u}(\rho)\to\bar{u}(\rho)\ \ \textrm{in}\ \ C^1[0,\rho_0],
\end{equation}
where $\bar{u}(\rho)$ is the solution of (\ref{S2E1}) with $\bar{\gamma}=1$.
Next, we apply the same change of variables to the singular solution $u^*(r)$. We define $\tilde{u}^*(\rho)$ by
\[
\tilde{u}^*(\rho):=2^{-\frac{q}{p-1}}\gamma^{-1}u^*(r)\ \ \textrm{and}\ \ \rho=\gamma^{\frac{p-1}{2}}r.
\]
By (\ref{S4C2E1}) we have
\begin{equation}\label{S7L1E3}
\tilde{u}^*(\rho)=a\rho^{-\frac{2}{p-1}}(1+o(1))\ \textrm{as}\ \frac{\rho}{\gamma^{\frac{p-1}{2}}}\to 0.
\varepsilonnd{equation}
When $\rho\in[0,\rho_0]$, $\frac{\rho}{\gamma^{\frac{p-1}{2}}}$ uniformly converges to $0$, and hence $o(1)$ in (\ref{S7L1E3}) uniformly converges to $0$.
By (\ref{S7L1E3}), and since $\tilde{u}^*(\rho)$ is unbounded near $\rho=0$, we have
\[
\tilde{u}^*(\rho)\to\bar{u}^*(\rho)\ \textrm{in}\ C^0_{loc}(0,\rho_0],
\]
where $\bar{u}^*(\rho)$ is defined by (\ref{S2E0}).
Since $\tilde{u}^*(\rho)$ satisfies the ODE in (\ref{S7L1E1}), this convergence holds in $C^2_{loc}(0,\rho_0]$, i.e.,
\begin{equation}\label{S7L1E4}
\tilde{u}^*(\rho)\to\bar{u}^*(\rho)\ \textrm{in}\ C^2_{loc}(0,\rho_0].
\end{equation}
On the other hand, if $\gamma>0$ is large, then
\begin{align}
\mathcal{Z}_{(0,\min\{R(\gamma),R^*\})}[u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)]&=\mathcal{Z}_{(0,\min\{\widetilde{R}(\gamma),\gamma^{\frac{p-1}{2}}R^*\})}[\tilde{u}(\,\cdot\,,\gamma)-\tilde{u}^*(\,\cdot\,)]\nonumber\\
&\ge\mathcal{Z}_{(0,\rho_0)}[\tilde{u}(\,\cdot\,,\gamma)-\tilde{u}^*(\,\cdot\,)].\label{S7L1E5}
\end{align}
We see by (\ref{S7L1E2}) and (\ref{S7L1E4}) that if $\gamma>0$ is large, then
\[
\mathcal{Z}_{(0,\rho_0)}[\tilde{u}(\,\cdot\,,\gamma)-\tilde{u}^*(\,\cdot\,)]\ge\mathcal{Z}_{(0,\rho_0)}[\bar{u}(\,\cdot\,)-\bar{u}^*(\,\cdot\,)].
\]
Proposition~\ref{S2P1} (ii) says that $\mathcal{Z}_{(0,\rho_0)}[\bar{u}(\,\cdot\,)-\bar{u}^*(\,\cdot\,)]\to\infty$ as $\rho_0\to\infty$.
Therefore, if $\rho_0$ and $\gamma$ are large and $\rho_0\le\widetilde{R}(\gamma)$, then $\mathcal{Z}_{(0,\rho_0)}[\tilde{u}(\,\cdot\,,\gamma)-\tilde{u}^*(\,\cdot\,)]$ can be arbitrarily large.
By (\ref{S7L1E5}) we see that (i) holds.
We prove (ii).
Since $u(r,\gamma)$ and $u^*(r)$ satisfy the same equation, every zero of $u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)$ is simple.
Each zero continuously depends on $\gamma$.
The zero number of $u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)$ on a bounded interval is finite, since the zero set of $u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)$ does not have an accumulation point.
Let $I(\gamma):=(0,\min\{R(\gamma),R^*\})$.
The intersection number $\mathcal{Z}_{I(\gamma)}[u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)]$ can change only if a zero enters or leaves through $\partial I(\gamma)$.
Since $u(0,\gamma)-u^*(0)=-\infty$, a zero cannot come from $0\in\partial I(\gamma)$.
If $R(\gamma)>R^*$ for all large $\gamma$, then there is $C>0$ such that $\mathcal{Z}_{I(\gamma)}[u(\,\cdot\,,\gamma)-u^*(\,\cdot\,)]\le C$ for all large $\gamma$, which contradicts (i).
If $R(\gamma)<R^*$ for large $\gamma$, then we similarly obtain a contradiction.
Therefore, there are a positive integer $m$ and a sequence $\{\gamma_n\}_{n=m}^{\infty}$ $(\gamma_m<\gamma_{m+1}<\cdots\to\infty)$ such that $\mathcal{Z}_{I(\gamma_n)}[u(\,\cdot\,,\gamma_n)-u^*(\,\cdot\,)]=n$ and $u(\,\cdot\,,\gamma_n)-u^*(\,\cdot\,)$ has a zero at $\min\{R(\gamma_n),R^*\}$, i.e., $R(\gamma_n)=R^*$.
Since the zero set is discrete, there is a sequence $\{\hat{\gamma}_n\}_{n=m}^{\infty}$ such that $\gamma_m<\hat{\gamma}_m<\gamma_{m+1}<\hat{\gamma}_{m+1}<\cdots$ and $R(\hat{\gamma}_n)\neq R^*$.
We easily see the following:
If $\mathcal{Z}_{I(\hat{\gamma}_n)}[u(\,\cdot\,,\hat{\gamma}_n)-u^*(\,\cdot\,)]$ is even (resp. odd), then $R(\hat{\gamma}_n)<R^*$ (resp. $R^*<R(\hat{\gamma}_n)$).
Thus, (ii) holds.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{A}]
Let $\Theta(\Gamma):=2\arctan R(\gamma)$, $\Gamma:=2^{-\frac{N-2}{2}}\gamma$, and $\Theta^*:=2\arctan R^*$.
Note that the range of $\Theta$ is $(0,\pi)$.
Lemma~\ref{S3L3} says that $R(\gamma)$ is a $C^1$-function on $(0,\infty)$ and $0<R(\gamma)<\infty$ for $\gamma\in(0,\infty)$.
Hence, (i) holds.
It follows from Lemma~\ref{S6L1} that $\Theta(\Gamma)\to\pi$ as $\Gamma\downarrow 0$.
Since $\Theta'(\Gamma)=2^{\frac{N}{2}}R'(\gamma)/(1+R(\gamma)^2)$, we see by Lemma~\ref{S6L2} that $\Theta'(\Gamma)<0$ if $\Gamma>0$ is small.
Thus, (ii) holds.
By Lemma~\ref{S5L2} we see that $\Theta(\Gamma)\to\Theta^*$ as $\Gamma\to\infty$.
Thus, (iii) holds.
By Lemma~\ref{S7L1}~(ii) we see that (iv) holds.
The proof is complete.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{B}]
Let $\underline{\Theta}:=\inf\{\Theta(\Gamma);\ \Gamma>0\}$.
Since $\Theta(\Gamma)\to\Theta^*$ $(\Gamma\to\infty)$, $\Theta(\Gamma)\to\pi$ $(\Gamma\to 0)$, and $\Theta(\Gamma)$ is continuous, we see that $\underline{\Theta}>0$.
Therefore, (i) holds.
If $p_{\rm S}<p<p_{\rm JL}$, then $\Theta(\Gamma)$ oscillates around $\Theta^*$.
Hence, $\underline{\Theta}<\Theta^*$ and $\{\Gamma>0;\ \Theta(\Gamma)\le\Theta^*-\varepsilon\}$ is bounded for small $\varepsilon>0$.
The infimum is attained, and (ii) holds.
(iii) follows from Theorem~\ref{A}~(iv).
If $\Gamma_0>0$ is small, then $\Theta'(\Gamma)<0$ for $\Gamma\in(0,\Gamma_0)$, because of Theorem~\ref{A}~(ii).
On the other hand, $\Theta_0:=\sup_{\Gamma\ge\Gamma_0}\Theta(\Gamma)<\pi$, because of Theorem~\ref{A}~(iii).
We see that if $\Theta_1\in(\frac{\Theta_0+\pi}{2},\pi)$, then there exists a unique $\Gamma>0$ such that $\Theta(\Gamma)=\Theta_1$, and it satisfies $0<\Gamma<\Gamma_0$.
It is known that the solution $(\Theta(\Gamma),U(\theta))$ is nondegenerate if and only if $\Theta'(\Gamma)\neq 0$ which is equivalent to $U'(\Theta(\Gamma))\neq 0$.
The nondegeneracy holds, since $\Theta'(\Gamma)\neq 0$ for $\Gamma\in(0,\Gamma_0)$.
Thus, (iv) holds.
\end{proof}
\section{Asymptotic shapes of the branch as $p\to\infty$ and $p\downarrow 1$}
We briefly prove Proposition~\ref{CriSub} before proving Theorems~\ref{ThD} and \ref{ThE}.
\begin{proof}[Proof of Proposition~\ref{CriSub}]
Since Lemmas~\ref{S3L3} and \ref{S6L1} hold for $p=p_{\rm S}$, (i) and (ii) hold.
Shioji-Watanabe~\cite[Theorem 5]{SW13} showed that if $N\ge 3$ and $1<p\le p_{\rm S}$, then (\ref{EFODE}) has at most one solution.
Since $\Theta(\Gamma)$ is continuous and $\Theta(\Gamma)\to\pi$ $(\Gamma\downarrow 0)$, $\Theta(\Gamma)$ should be strictly decreasing; otherwise (\ref{EFODE}) would have at least two solutions for some $\Theta$, which is a contradiction.
Thus, (iii) holds.
When $N\ge 4$ and $p=p_{\rm S}$, Bandle {\it et al.}~\cite[Section~7.4]{BBF98} showed that for each $\Theta\in(0,\pi)$, (\ref{EFODE}) has a regular solution.
This result indicates that $\Theta(\Gamma)\to 0$ $(\Gamma\to\infty)$, otherwise $\Theta(\Gamma)\to c>0$ and (\ref{EFODE}) has no solution for $\Theta\in (0,c)$, which is a contradiction.
Thus, (iv) holds.
When $N=3$ and $p=p_{\rm S}$, Bandle-Peletier~\cite[Theorem 1]{BP99} showed that (\ref{EFODE}) has no regular solution for $\Theta\in(0,\frac{\pi}{2}]$ and that it has a regular solution for $\Theta\in(\frac{\pi}{2},\pi)$.
This indicates that $\Theta(\Gamma)\downarrow\frac{\pi}{2}$ as $\Gamma\to\infty$.
When $N=3$ and $1<p<p_{\rm S}$, it is easily shown that (\ref{EFS}) has a radial solution for each $\Theta\in(0,\pi)$.
This indicates that $\Theta(\Gamma)\to 0$ as $\Gamma\to\infty$.
Hence, (v) holds.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThD}]
Let $U(\theta)$ be the solution of (\ref{EFODE}).
Then, $U(\Theta)=0$ and $U(\theta)$ is a solution of (\ref{IVP}) for some $\Gamma>0$.
We use the Poho\v{z}aev identity of the following type:
\begin{multline}\label{S8P1E-1}
H(\theta):=-U'(\theta)^2\sin^{2N-2}\theta\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^{N-1}\varphi}-U(\theta)U'(\theta)\sin^{N-1}\theta\\
-\frac{2}{p+1}U(\theta)^{p+1}\sin^{2N-2}\theta\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^{N-1}\varphi}.
\end{multline}
It is clear that
\begin{equation}\label{S8P1E0}
H(\Theta)=0.
\end{equation}
By L'H\^opital's rule we have
\begin{equation}\label{S8P1E1}
\lim_{\theta\downarrow 0}\frac{\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^{N-1}\varphi}}{\frac{1}{\sin^{N-2}\theta}}
=\lim_{\theta\downarrow 0}\frac{-\sin^{-N+1}\theta}{(-N+2)\sin^{-N+1}\theta\cos\theta}
=\frac{1}{N-2}.
\end{equation}
By (\ref{S8P1E1}) we have
\begin{align}
\lim_{\theta\downarrow 0}\sin^{2N-2}\theta\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^{N-1}\varphi}&=\lim_{\theta\downarrow 0}\sin^N\theta\left(\sin^{N-2}\theta\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^{N-1}\varphi}\right)\nonumber\\
&=0.\label{S8P1E2}
\end{align}
Using (\ref{S8P1E2}), we have
\begin{equation}\label{S8P1E3}
\lim_{\theta\downarrow 0}H(\theta)=0.
\end{equation}
Differentiating $H(\theta)$ in $\theta$, we have
\[
H'(\theta)=\frac{4N-4}{p+1}U(\theta)^{p+1}\sin^{N-1}\theta\left(\frac{p+3}{4N-4}-F(\theta)\right),
\]
where
$F(\theta):=\cos\theta\sin^{N-2}\theta\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^{N-1}\varphi}$.
Hereafter, let $\Theta=\Theta_0\in(0,\pi)$ be fixed.
By (\ref{S8P1E1}) we have
\begin{equation}\label{S8P1E4}
\lim_{\theta\downarrow 0}F(\theta)=\frac{1}{N-2}.
\end{equation}
Because of (\ref{S8P1E4}) and the continuity of $F(\theta)$ on $(0,\Theta_0]$, we see that $\sup_{0<\theta\le\Theta_0}F(\theta)<\infty$.
Therefore there is a large $\bar{p}=\bar{p}(\Theta_0)>0$ such that if $p>\bar{p}$, then $H'(\theta)>0$ for $\theta\in(0,\Theta_0)$.
This contradicts (\ref{S8P1E0}) and (\ref{S8P1E3}), which require $H$ to vanish at both endpoints of $(0,\Theta_0)$.
Thus, if $p>\bar{p}$, then (\ref{EFODE}) has no solution for $\Theta=\Theta_0$.
Since the solution set $\{(\Theta(\Gamma),\Gamma)\}$ is a continuous curve including a point near $(\pi,0)$, (\ref{EFODE}) has no solution for $\Theta\in(0,\Theta_0]$.
We prove the first statement of Theorem~\ref{ThD} by contradiction.
Suppose the contrary, i.e., there exist $\varepsilon>0$ and arbitrarily large $p>1$ such that $\underline{\Theta}\in(0,\pi-\varepsilon)$, where $\underline{\Theta}$ is given in Corollary~\ref{B} (i).
Let $\Theta_1:=\pi-\frac{\varepsilon}{2}(>\underline{\Theta})$.
If $p>\bar{p}(\Theta_1)$, then (\ref{EFODE}) has no solution for $\Theta=\Theta_1$.
This is a contradiction, because the definition of $\underline{\Theta}$ says that (\ref{EFODE}) has a solution for $\Theta\in(\underline{\Theta},\pi)$.
Thus, $\underline{\Theta}\to\pi$ as $p\to\infty$.
We consider the case $N=3$.
Then,
\[
F(\theta)=\cos\theta\sin\theta\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^2\varphi}
=\frac{1}{2}-\frac{\sin(2\theta-\Theta)}{2\sin\Theta}.
\]
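The second equality can be checked directly: since $\int_{\theta}^{\Theta}\frac{d\varphi}{\sin^2\varphi}=\cot\theta-\cot\Theta$,
\[
\cos\theta\sin\theta\left(\cot\theta-\cot\Theta\right)
=\frac{1+\cos 2\theta}{2}-\frac{\sin 2\theta\cos\Theta}{2\sin\Theta}
=\frac{1}{2}-\frac{\sin 2\theta\cos\Theta-\cos 2\theta\sin\Theta}{2\sin\Theta}
=\frac{1}{2}-\frac{\sin(2\theta-\Theta)}{2\sin\Theta}.
\]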
When $N=3$, we have
\begin{align*}
\frac{p+3}{4N-4}-F(\theta)&=\frac{p-1}{8}+\frac{\sin(2\theta-\Theta)}{2\sin\Theta}\\
&>\frac{p-1}{8}-\frac{1}{2\sin\Theta}\ \ \textrm{for}\ \ \theta\in[0,\pi]\backslash\left\{\frac{\Theta}{2}+\frac{3+4n}{4}\pi;\ n\in\mathbb{Z}\right\}.
\varepsilonnd{align*}
Therefore, if $\sin\Theta\ge\frac{4}{p-1}$, then (\ref{EFODE}) has no solution.
Since this nonexistence result is valid for $p\ge 5(=p_{\rm S})$, we assume hereafter that $p\ge p_{\rm S}$.
Since the solution set is a continuous curve and it includes a point near $(\pi,0)$, (\ref{EFODE}) has no solution if $\Theta\le\pi-\arcsin\frac{4}{p-1}$.
Thus, $\underline{\Theta}\ge\pi-\arcsin\frac{4}{p-1}$ for $p\ge p_{\rm S}$.
\end{proof}
We consider the case $p=1$.
First, we investigate the following eigenvalue problem:
\begin{equation}\label{S8E1}
\begin{cases}
\phi''+(N-1)\frac{\cos\theta}{\sin\theta}\phi'+\lambda\phi=0, & 0<\theta<\Theta,\\
\phi(0)=1,\ \phi'(0)=0,\\
\phi(\Theta)=0.
\end{cases}
\end{equation}
\begin{lemma}\label{S8L1}
Let $\lambda_1(\Theta)$ be the first eigenvalue of (\ref{S8E1}).
Then, $\lambda_1(\Theta)$ is continuous and strictly decreasing, $\lambda_1(\Theta)\to 0$ as $\Theta\uparrow\pi$, and $\lambda_1(\Theta)\to\infty$ as $\Theta\downarrow 0$.
In particular, for $N=3$, $\lambda_1(\Theta)=(\frac{\pi}{\Theta})^2-1$.
\end{lemma}
\begin{proof}
First, we consider the case $N=3$.
Let $\bar{\phi}(\theta):=\phi(\theta)\sin\theta$.
Then, $\bar{\phi}$ satisfies
\[
\begin{cases}
\bar{\phi}''+(1+\lambda)\bar{\phi}=0, & 0<\theta<\Theta,\\
\bar{\phi}(0)=\bar{\phi}(\Theta)=0.
\end{cases}
\]
Thus, $\bar{\phi}(\theta)=c\sin\frac{\pi\theta}{\Theta}$ for some $c\in\mathbb{R}$ and $1+\lambda=(\frac{\pi}{\Theta})^2$.
Since $\phi(0)=1$, $c$ is equal to $\frac{\Theta}{\pi}$ and $\phi(\theta)=\frac{\Theta\sin\frac{\pi\theta}{\Theta}}{\pi\sin\theta}$.
Since
$\phi'(0)=\lim_{\theta\downarrow 0}\frac{\frac{\Theta\sin\frac{\pi\theta}{\Theta}}{\pi\sin\theta}-1}{\theta}=0$
and $\phi(\theta)>0$ on $[0,\Theta)$, $\phi$ satisfies (\ref{S8E1}) and $\phi$ is the first eigenfunction.
Therefore, $\lambda_1(\Theta)=(\frac{\pi}{\Theta})^2-1$.
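As a quick consistency check, expanding near $\theta=0$ gives
\[
\phi(\theta)=\frac{\Theta\sin\frac{\pi\theta}{\Theta}}{\pi\sin\theta}
=\frac{\theta-\frac{1}{6}\left(\frac{\pi}{\Theta}\right)^2\theta^3+O(\theta^5)}{\theta-\frac{1}{6}\theta^3+O(\theta^5)}
=1-\frac{1}{6}\left(\left(\frac{\pi}{\Theta}\right)^2-1\right)\theta^2+O(\theta^4)
=1-\frac{\lambda_1(\Theta)}{6}\,\theta^2+O(\theta^4),
\]
which confirms $\phi(0)=1$ and $\phi'(0)=0$.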
Next, we consider the case $N\ge 4$.
By a similar method as in the proof of Lemma~\ref{S3L2} we can prove that, for each $\lambda>0$, there exists $\Theta=\Theta(\lambda)\in (0,\pi)$ such that (\ref{S8E1}) has a solution.
Let $\phi(\theta,\lambda)$ be the solution of (\ref{S8E1}).
Then $\phi$ is of class $C^1$.
It follows from the uniqueness of the solution of (\ref{S8E1}) that $\phi_{\theta}(\Theta,\lambda)\neq 0$.
Applying the implicit function theorem to $\phi(\theta,\lambda)=0$, we see that $\Theta(\lambda)$, which satisfies $\phi(\Theta(\lambda),\lambda)=0$, is of class $C^1$.
On the other hand, by Theorem~III in \cite{I44}, for each $\Theta_1\in(0,\pi)$, there exists the first eigenvalue $\lambda_1>0$ such that $\Theta(\lambda_1)=\Theta_1$.
By the Sturm-Liouville comparison theorem, if $\lambda_a<\lambda_b$, then $\Theta(\lambda_b)<\Theta(\lambda_a)$, which indicates that $\Theta(\lambda)$ is strictly decreasing.
Thus, the inverse function $\lambda_1=\lambda_1(\Theta)$ exists and it is continuous and strictly decreasing.
Let $\Theta_0\in(0,\pi)$ be fixed.
Then, as $\lambda\to 0$, $\phi(\theta)$ converges to $\phi_*(\theta)$ uniformly on $[0,\Theta_0]$, where $\phi_*$ is the unique solution of the problem
\[
\begin{cases}
\phi_*''+(N-1)\frac{\cos\theta}{\sin\theta}\phi_*'=0, & 0<\theta<\Theta_0,\\
\phi_*(0)=1,\ \phi_*'(0)=0.
\end{cases}
\]
It is clear that $\phi_*(\theta)\equiv 1$.
Hence, for each $\Theta_0\in(0,\pi)$, the solution of the initial value problem in (\ref{S8E1}) satisfies $\phi(\theta)>0$ on $[0,\Theta_0]$ for small $\lambda>0$.
Since $\Theta_0$ can be chosen arbitrarily close to $\pi$, we obtain $\Theta(\lambda)\uparrow\pi$ as $\lambda\to 0$, which indicates that
\begin{equation}\label{S8L1E1}
\lambda_1(\Theta)\to 0\ \textrm{as}\ \Theta\uparrow\pi.
\end{equation}
We consider the initial value problem
\[
\begin{cases}
\phi''+(N-1)\frac{\cos\theta}{\sin\theta}\phi'+\lambda\phi=0, & 0<\theta<\pi,\\
\phi(0)=1,\ \phi'(0)=0.
\end{cases}
\]
We use the same change of variables as in Section~1.
Let $\psi(r):=A(r)^{\frac{N-2}{2}}\phi(\theta)$ and $r:=\tan\frac{\theta}{2}$.
Then $\psi(r)$ satisfies
\[
\begin{cases}
\psi''+\frac{N-1}{r}\psi'+\frac{N(N-2)}{4}A(r)^2\psi+\lambda A(r)^2\psi=0, & 0<r<\infty,\\
\psi(0)=2^{\frac{N-2}{2}},\ \psi'(0)=0.
\end{cases}
\]
Let $\tilde{\psi}(s):=\psi(r)$ and $s:=2\sqrt{\lambda}r$.
Then $\tilde{\psi}(s)$ satisfies
\[
\begin{cases}
\tilde{\psi}''+\frac{N-1}{s}\tilde{\psi}'+\frac{N(N-2)}{4\lambda}\left(\frac{4\lambda}{4\lambda+s^2}\right)^2\tilde{\psi}+\left(\frac{4\lambda}{4\lambda+s^2}\right)^2\tilde{\psi}=0, & 0<s<\infty,\\
\tilde{\psi}(0)=2^{\frac{N-2}{2}},\ \tilde{\psi}'(0)=0.
\end{cases}
\]
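For the reader's convenience, we record the computation behind this system. Recalling that $A(r)=\frac{2}{1+r^2}$ and $s=2\sqrt{\lambda}r$, we have
\[
\frac{d^2\psi}{dr^2}=4\lambda\,\tilde{\psi}'',\qquad
\frac{N-1}{r}\frac{d\psi}{dr}=\frac{4\lambda(N-1)}{s}\,\tilde{\psi}',\qquad
\frac{A(r)^2}{4\lambda}=\frac{16\lambda}{(4\lambda+s^2)^2}=\frac{1}{\lambda}\left(\frac{4\lambda}{4\lambda+s^2}\right)^2,
\]
so dividing the equation for $\psi$ by $4\lambda$ turns the potential $\left(\frac{N(N-2)}{4}+\lambda\right)A(r)^2$ into
\[
\frac{N(N-2)}{4\lambda}\left(\frac{4\lambda}{4\lambda+s^2}\right)^2+\left(\frac{4\lambda}{4\lambda+s^2}\right)^2.
\]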
Taking the limit $\lambda\to\infty$, we see that $\tilde{\psi}(s)$ converges to $\tilde{\psi}_*(s)$ uniformly on any bounded interval, where $\tilde{\psi}_*(s)$ is the solution of
\[
\begin{cases}
\tilde{\psi}''_*+\frac{N-1}{s}\tilde{\psi}'_*+\tilde{\psi}_*=0, & 0<s<\infty,\\
\tilde{\psi}_*(0)=2^{\frac{N-2}{2}},\ \tilde{\psi}'_*(0)=0.
\end{cases}
\]
Moreover, $\tilde{\psi}_*$ can be explicitly written as $\tilde{\psi}_*(s)=cs^{-\frac{N}{2}+1}J_{\frac{N}{2}-1}(s)$ for some constant $c>0$, where $J_{\frac{N}{2}-1}(s)$ represents the Bessel function of the first kind of order $\frac{N}{2}-1$.
It is known that $J_{\frac{N}{2}-1}(s)$ has the first positive zero which we denote by $j_{\frac{N}{2}-1}$.
Since $\tilde{\psi}_*$ satisfies the linear equation, the zero $j_{\frac{N}{2}-1}$ is simple.
Hence, when $\lambda$ is large, $\tilde{\psi}(s)$ also has the first positive zero, which we denote by $s_1(\lambda)$.
By the uniform convergence of $\tilde{\psi}(s)$ to $\tilde{\psi}_*(s)$ and the simplicity of $j_{\frac{N}{2}-1}$ we see that $s_1(\lambda)\to j_{\frac{N}{2}-1}$ $(\lambda\to\infty)$.
The first positive zero $r_1(\lambda)$ of $\psi(\,\cdot\,)$ satisfies that $r_1(\lambda)=\frac{s_1(\lambda)}{2\sqrt{\lambda}}$.
Therefore,
$\lim_{\lambda\to\infty}r_1(\lambda)=\lim_{\lambda\to\infty}\frac{s_1(\lambda)}{2\sqrt{\lambda}}=0$.
This indicates that $\Theta(\lambda)\to 0$ as $\lambda\to\infty$.
Hence
\begin{equation}\label{S8L1E2}
\lambda_1(\Theta)\to\infty\ \textrm{as}\ \Theta\downarrow 0.
\end{equation}
Because of (\ref{S8L1E1}) and (\ref{S8L1E2}), $\lambda_1(\Theta)$ is defined on $(0,\pi)$.
The proof is complete.
\end{proof}
We study the case where $p>1$ is close to $1$.
Let $\Theta_0\in(0,\pi)$ be fixed.
Since $1<p<p_{\rm S}$, Proposition~\ref{CriSub} says that there is a unique $\Gamma>0$ such that (\ref{EFODE}) with $U(0)=\Gamma$ has a solution for $\Theta=\Theta_0$.
Since $\Gamma$ depends on $p$, we denote $\Gamma$ by $\Gamma(p)$.
We follow the idea of Yanagida~\cite[Theorem~2.6]{YY96} to prove Theorem~\ref{ThE}.
Now we fix $\lambda_1>0$.
Then, by Lemma~\ref{S8L1}, there exists a unique $\Theta_1\in(0,\pi)$ such that (\ref{S8E1}) with $(\lambda,\Theta)=(\lambda_1,\Theta_1)$ has a positive solution.
We consider the following problem
\begin{equation}\label{S8E2}
\begin{cases}
W''+(N-1)\frac{\cos\theta}{\sin\theta}W'+\lambda_1W^p=0, & 0<\theta<\Theta_1,\\
W(\Theta_1)=0,\\
W(\theta)>0, & 0<\theta<\Theta_1,\\
W'(0)=0.
\end{cases}
\end{equation}
Since Proposition~\ref{CriSub} is valid for (\ref{S8E2}), (\ref{S8E2}) has a unique solution $W(\theta,p)$ provided that $1<p<p_{\rm S}$.
Let $\Gamma_1(p):=W(0,p)$.
We also consider the initial value problem
\begin{equation}\label{S8E3}
\begin{cases}
Z''+(N-1)\frac{\cos\theta}{\sin\theta}Z'+\lambda_1|Z|^{p-1}Z=0, & 0<\theta<\pi,\\
Z(0)=\Gamma,\ Z'(0)=0.\\
\end{cases}
\end{equation}
Then the following holds:
\begin{lemma}\label{S8L2}
There exists a unique $\Gamma^{\dagger}>0$ such that $\Gamma_1(p)\to\Gamma^{\dagger}$ as $p\downarrow 1$.
\end{lemma}
\begin{proof}
Let $\phi(\theta)$ be a solution of (\ref{S8E1}) with $(\lambda,\Theta)=(\lambda_1,\Theta_1)$.
Then, as $p\downarrow 1$, the solution $Z(\theta)$ of (\ref{S8E3}) converges to $\Gamma\phi(\theta)$ uniformly on $[0,\Theta_1]$.
Applying Green's formula for $Z$ and $\Gamma\phi$, we obtain
\begin{equation}\label{S8L2E1}
(Z'(\theta)\phi(\theta)-Z(\theta)\phi'(\theta))\sin^{N-1}\theta=-\lambda_1(p-1)F(\theta,\Gamma,p),
\end{equation}
where
\[
F(\theta,\Gamma,p):=\int_0^{\theta}\frac{|Z(\varphi)|^{p-1}-1}{p-1}Z(\varphi)\phi(\varphi)\sin^{N-1}\varphi d\varphi.
\]
Since $Z$ converges to $\Gamma\phi$ uniformly on $[0,\Theta_1]$,
\[
\lim_{p\downarrow 1}F(\Theta_1,\Gamma,p)=\Gamma\int_0^{\Theta_1}(\log\Gamma+\log\phi)\phi(\varphi)^2\sin^{N-1}\varphi d\varphi.
\]
Hence, there exists a unique $\Gamma^{\dagger}>0$ such that $\lim_{p\downarrow 1}F(\Theta_1,\Gamma,p)=0$ if and only if $\Gamma=\Gamma^{\dagger}$.
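Indeed, solving $\lim_{p\downarrow 1}F(\Theta_1,\Gamma,p)=0$ for $\Gamma$ in the displayed limit gives the explicit value
\[
\log\Gamma^{\dagger}=-\frac{\int_0^{\Theta_1}(\log\phi(\varphi))\,\phi(\varphi)^2\sin^{N-1}\varphi\, d\varphi}{\int_0^{\Theta_1}\phi(\varphi)^2\sin^{N-1}\varphi\, d\varphi}.
\]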
We prove the lemma by contradiction.
We assume that there exists some $\delta>0$ such that $\Gamma_1(p)\not\in [\Gamma^{\dagger}-\delta,\Gamma^{\dagger}+\delta]$ along a sequence of $p$ tending to $1$.
Let $\Gamma=\Gamma_1(p)$. Then, $Z(\theta)=W(\theta)$ on $[0,\Theta_1]$.
The left-hand side of (\ref{S8L2E1}) is $0$ at $\theta=\Theta_1$, since $W(\Theta_1)=\phi(\Theta_1)=0$.
On the other hand, the right-hand side of (\ref{S8L2E1}) at $\theta=\Theta_1$ is non-zero when $p$ is close to $1$ along this sequence, because $F(\Theta_1,\Gamma_1(p),p)$ stays away from zero there.
This is a contradiction, and therefore, $\Gamma_1(p)\to\Gamma^{\dagger}$ as $p\downarrow 1$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThE}]
We take the same $\lambda_1$ and $\Theta_1$ as above.
Let $W$ be the solution of (\ref{S8E2}).
Let $U(\theta):=\lambda_1^{\frac{1}{p-1}}W(\theta)$.
Then $U$ is a solution of (\ref{EFODE}) with $\Theta=\Theta_1$ and $\Gamma(p)=U(0)=\lambda_1^{\frac{1}{p-1}}\Gamma_1(p)$.
By Lemma~\ref{S8L2} we see the following:\\
(i) If $\lambda_1>1$, then $\lambda_1^{\frac{1}{p-1}}\Gamma_1(p)\to\infty$ as $p\downarrow 1$.\\
(ii) If $\lambda_1=1$, then $\lambda_1^{\frac{1}{p-1}}\Gamma_1(p)\to\Gamma^{\dagger}$ as $p\downarrow 1$.\\
(iii) If $\lambda_1<1$, then $\lambda_1^{\frac{1}{p-1}}\Gamma_1(p)\to 0$ as $p\downarrow 1$.\\
Here, by Lemma~\ref{S8L1}, there exists some $\Theta^{\dagger}\in(0,\pi)$ such that $\Theta_1>\Theta^{\dagger}$ for $\lambda_1<1$, $\Theta_1=\Theta^{\dagger}$ for $\lambda_1=1$, and $\Theta_1<\Theta^{\dagger}$ for $\lambda_1>1$.
Thus, the statement of Theorem~\ref{ThE} holds.
\end{proof}
\begin{thebibliography}{99}
\bibitem{BB02}{C.~Bandle and R.~Benguria},
{\it The Br\'ezis-Nirenberg problem on $\mathbb{S}^3$},
J. Differential Equations {\bf 178} (2002), 264--279.
\bibitem{BBF98}{C.~Bandle, A.~Brillard, and M.~Flucher},
{\it Green's function, harmonic transplantation, and best Sobolev constant in spaces of constant curvature},
Trans. Amer. Math. Soc. {\bf 350} (1998), 1103--1128.
\bibitem{BP99}{C.~Bandle and L.~Peletier},
{\it Best Sobolev constants and Emden equations for the critical exponent in $\mathbf{S}^3$},
Math. Ann. {\bf 313} (1999), 83--93.
\bibitem{BW07}{C.~Bandle and J.~Wei},
{\it Non-radial clustered spike solutions for semilinear elliptic problems on {${\bf S}^n$}},
J. Anal. Math. {\bf 102} (2007), 181--208.
\bibitem{BFG14}{E.~Berchio, A.~Ferrero, and G.~Grillo},
{\it Stability and qualitative properties of radial solutions of the Lane-Emden-Fowler equation on Riemannian models},
J. Math. Pures Appl. {\bf 102} (2014), 1--35.
\bibitem{BGGV13}{M.~Bonforte, F.~Gazzola, G.~Grillo, and J.~V\'azquez},
{\it Classification of radial solutions to the Emden-Fowler equation on the hyperbolic space},
Calc. Var. Partial Differential Equations {\bf 46} (2013), 375--401.
\bibitem{BP06}{H.~Brezis and L.~Peletier},
{\it Elliptic equations with critical exponent on spherical caps of ${\bf S}\sp 3$},
J. Anal. Math. {\bf 98} (2006), 279--316.
\bibitem{D00}{E.~Dancer},
{\it Infinitely many turning points for some supercritical problems},
Ann. Mat. Pura Appl. {\bf 178} (2000), 225--233.
\bibitem{D08a}{E.~Dancer},
{\it Finite Morse index solutions of supercritical problems},
J. Reine Angew. Math. {\bf 620} (2008), 213--233.
\bibitem{D08b}{E.~Dancer},
{\it Finite Morse index solutions of exponential problems},
Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire {\bf 25} (2008), 173--179.
\bibitem{D13}{E.~Dancer},
{\it Some bifurcation results for rapidly growing nonlinearities},
Discrete Contin. Dyn. Syst. {\bf 33} (2013), 153--161.
\bibitem{DF07}{J.~Dolbeault and I.~Flores},
{\it Geometry of phase space and solutions of semilinear elliptic equations in a ball},
Trans. Amer. Math. Soc. {\bf 359} (2007), 4073--4087.
\bibitem{GNN79}{B.~Gidas, W.-M.~Ni, and L.~Nirenberg},
{\it Symmetry and related properties via the maximum principle},
Comm. Math. Phys. {\bf 68} (1979), 209--243.
\bibitem{GW11}{Z.~Guo and J.~Wei},
{\it Global solution branch and Morse index estimates of a semilinear elliptic equation with super-critical exponent},
Trans. Amer. Math. Soc. {\bf 363} (2011), 4777--4799.
\bibitem{I44}{E.~Ince},
{\it Ordinary Differential Equations},
Dover Publications, New York, 1944.
\bibitem{JL73}{D.~Joseph and S.~Lundgren},
{\it Quasilinear Dirichlet problems driven by positive sources},
Arch. Rational Mech. Anal. {\bf 49} (1972/73), 241--269.
\bibitem{KW18}{H.~Kikuchi and J.~Wei},
{\it A bifurcation diagram of solutions to an elliptic equation with exponential nonlinearity in higher dimensions},
Proc. Roy. Soc. Edinburgh Sect. A {\bf 148} (2018), 101--122.
\bibitem{K97}{P.~Korman},
{\it Solution curves for semilinear equations on a ball},
Proc. Amer. Math. Soc. {\bf 125} (1997), 1997--2005.
\bibitem{KP98}{S.~Kumaresan and J.~Prajapat},
{\it Serrin's result for hyperbolic space and sphere},
Duke Math. J. {\bf 91} (1998), 17--28.
\bibitem{MP91}{F.~Merle and L.~Peletier},
{\it Positive solutions of elliptic equations involving supercritical growth},
Proc. Roy. Soc. Edinburgh Sect. A {\bf 118} (1991), 49--62.
\bibitem{Mi13}{Y.~Miyamoto},
{\it Symmetry breaking bifurcation from solutions concentrating on the equator of $\mathbb{S}^N$},
J. Anal. Math. {\bf 121} (2013), 353--381.
\bibitem{Mi14a}{Y.~Miyamoto},
{\it Structure of the positive solutions for supercritical elliptic equations in a ball},
J. Math. Pures Appl. {\bf 102} (2014), 672--701.
\bibitem{Mi14b}{Y.~Miyamoto},
{\it Classification of bifurcation diagrams for elliptic equations with exponential growth in a ball},
Ann. Mat. Pura Appl. {\bf 194} (2015), 931--952.
\bibitem{Mi18}{Y.~Miyamoto},
{\it A limit equation and bifurcation diagrams of semilinear elliptic equations with general supercritical growth},
J. Differential Equations {\bf 264} (2018), 2684--2707.
\bibitem{P97}{P.~Padilla},
{\it Symmetry properties of positive solutions of elliptic equations on symmetric domains},
Appl. Anal. {\bf 64} (1997), 153--169.
\bibitem{P65}{S.~Poho\v{z}aev},
{\it On the eigenfunctions of the equation $\Delta u+\lambda f(u)=0$},
Dokl. Akad. Nauk SSSR {\bf 165} (1965), 36--39.
\bibitem{SW12}{N.~Shioji and K.~Watanabe},
{\it Radial symmetry of positive solutions for semilinear elliptic equations in the unit ball via elliptic and hyperbolic geometry},
J. Differential Equations {\bf 252} (2012), 1392--1402.
\bibitem{SW13}{N.~Shioji and K.~Watanabe},
{\it A generalized Poho\v{z}aev identity and uniqueness of positive radial solutions of $\Delta u+g(r)u+h(r)u^p=0$},
J. Differential Equations {\bf 255} (2013), 4448--4475.
\bibitem{YY96}{E.~Yanagida and S.~Yotsutani},
{\it Global structure of positive solutions to equations of Matukuma type},
Arch. Rational Mech. Anal. {\bf 134} (1996), 199--226.
\bibitem{W93}{X.~Wang},
{\it On the Cauchy problem for reaction-diffusion equations},
Trans. Amer. Math. Soc. {\bf 337} (1993), 549--590.
\end{thebibliography}
\end{document}
\begin{document}
\title{The complex Brownian motion as a strong limit of processes constructed from a Poisson process}
\date{}
\author{Xavier Bardina\footnote{X. Bardina is supported by the grant MTM2012-33937 from SEIDI, Ministerio de Economia y
Competividad.}, Giulia Binotto$^\dagger$
and Carles Rovira\footnote{ G. Binotto and C. Rovira are supported by the grant MTM2012-31192 from SEIDI, Ministerio de Economia y Competividad.}}
\maketitle
$^*${\rm Departament de Matem\`atiques, Facultat de Ci\`encies,
Edifici C, Universitat Aut\`onoma de Barcelona, 08193 Bellaterra}.
{\tt [email protected]}
\newline
$\mbox{ }$\hspace{0.1cm} $^\dagger${\rm Facultat de Matem\`atiques,
Universitat de Barcelona, Gran Via 585, 08007 Barcelona}. {\tt [email protected]}, {\tt [email protected]}
\begin{abstract}
We construct a family of processes, from a single
Poisson process, that converges in law to a complex Brownian motion.
Moreover, we find realizations of these processes that converge
almost surely to the complex Brownian motion, uniformly on the unit
time interval. Finally the rate of convergence is derived.
\end{abstract}
\section{Introduction}
In 1956, in order to obtain from a Poisson process a solution of the telegraph equation
\begin{equation}\label{tele} \frac1v \frac{\partial^2 F}{\partial
t^2}=v\frac{\partial^2 F}{\partial x^2}-\frac{2a}{v}\frac{\partial
F}{\partial t},\end{equation} with $a,v>0$, Kac \cite{K} introduced the processes
$$x(t)=v\int_0^t(-1)^{N_a(r)}dr,$$
where $N_a=\{N_a(t),\,t\geq0\}$ is a Poisson process of intensity
$a$. He noticed that if in equation (\ref{tele}) the parameters $a$ and $v$ tend to
infinity with $\frac{2a}{v^2}$ constant and equal to $\frac1{D}$,
then the equation converges to the heat equation:
\begin{equation}\label{calor} \frac1{D}\frac{\partial F}{\partial t}=\frac{\partial^2
F}{\partial x^2}.\end{equation}
Let $x_\varepsilon(t)$ be the processes considered by Kac with
$a=\frac1{\varepsilon^2}$, $v=\frac1{\varepsilon}$. These values
satisfy that $\frac{2a}{v^2}$ is constant and $D=\frac12$ and we get
in (\ref{calor}) an equation whose solution is a standard Brownian
motion.
Stroock \cite{S} proved in 1982 that the processes $x_{\varepsilon}$
converge in law to a standard Brownian motion. That is, if we
consider $(P^{\varepsilon})$ the image law of the process
$x_{\varepsilon}$ in the Banach space $\mathcal C([0,T])$ of
continuous functions on $[0,T]$, then $(P^{\varepsilon})$ converges
weakly, when $\varepsilon$ tends to zero, towards the Wiener
measure.
Doing a change of variables, these processes
can be written as
$$x_{\varepsilon}=\left\{x_{\varepsilon}(t):=\varepsilon\int_0^{\frac{t}{\varepsilon^2}}(-1)^{N(u)}du,\,t\in[0,T]\right\},$$
where $\{N(t),\,t\geq0\}$ is a standard Poisson process.
In the mathematical literature we find generalizations of the Stroock
result, which can be grouped into three directions:
\begin{enumerate}[(i)]
\item modifying the processes $x_{\varepsilon}$ in order to obtain
approximations of other Gaussian processes, \item proving
convergence in a stronger sense that the convergence in law in the
space of continuous functions, \item weakening the conditions of
the approximating processes.
\end{enumerate}
In direction (i), a first generalization is also made by Stroock
\cite{S} who modified the processes $x_{\varepsilon}$ to obtain
approximations of stochastic differential equations. There are also
generalizations, among others, to the fractional Brownian motion
(fBm) \cite{LD}, to a general class of Gaussian processes (that
includes fBm) \cite{DJ}, to a fractional stochastic differential
equation \cite{BNRT}, to the stochastic heat equation driven by
Gaussian white noise \cite{BJQ} or to the Stratonovich heat
equation \cite{DJQ}.
On the other hand, there is some literature where the authors present
realizations of the processes that converge almost surely, uniformly on the unit time interval. These processes are usually called uniform
transport processes. Since the approximations always start increasing, a modification
of the processes of the form
\begin{equation*}
\tilde
x_{\varepsilon}(t)=\varepsilon(-1)^{A}\int_0^{\frac{t}{\varepsilon^2}}(-1)^{N(u)}du,\end{equation*}
has to be considered, where $A\sim
\textrm{Bernoulli}\left(\frac12\right)$ is independent of the Poisson
process $N$.
Griego, Heath and Ruiz-Moncayo \cite{art G-H-RM} showed that these
processes converge strongly and uniformly on bounded time intervals
to Brownian motion. In \cite{art G-G2} Gorostiza and Griego
extended the result to diffusions. Again Gorostiza and Griego
\cite{art G-G} and Cs\"{o}rg\H{o} and Horv\'ath \cite{CH} obtained a
rate of convergence. More precisely, in \cite{art G-G} it is proved
that there exist versions of the transport processes $\tilde
x_{\varepsilon}$ on the same probability space as a given Brownian
motion $(W(t))_{t\geq0}$ such that, for each $q > 0$,
$$P\left(\sup_{a\leq t\leq b}|W(t)-\tilde x_{\varepsilon}(t)|> {C}\varepsilon\left(\log
\frac1{\varepsilon^2}\right)^{\frac52}
\right)=o\left({{\varepsilon}^{2q}}\right),$$ as $\varepsilon\to 0$
and where $C$ is a positive constant depending on $a$, $b$ and $q$.
Garz\'on, Gorostiza and Le\'on \cite{GGL} defined a sequence of
processes that converges strongly to fractional Brownian motion
uniformly on bounded intervals, for any Hurst parameter $H\in(0,1)$
and computed the rate of convergence. In \cite{GGL2} and \cite{GGL3}
the same authors deal with subfractional Brownian motion and
fractional stochastic differential equations.
Since
$$(-1)^{N(u)}=e^{i \pi N(u)}=\cos(\pi N(u)),$$
the question arises whether the convergence also holds for
other angles.
Bardina \cite{B} showed that if we consider
\begin{equation}\label{dedede}\bar x^\theta_{\varepsilon}(t)={\varepsilon}\int_0^{\frac{2t}{\varepsilon^2}}e^{i \theta N_s}ds
\end{equation}
where $\theta\neq0,\pi,$ the laws of the processes converge weakly
towards the law of a complex Brownian motion, i.e., the laws of the real and imaginary parts
$$\bar z^\theta_{\varepsilon}(t)={\varepsilon}\int_0^{\frac{2t}{\varepsilon^2}}\cos(\theta N_s)ds$$
and
$$\bar y^\theta_{\varepsilon}(t)={\varepsilon}\int_0^{\frac{2t}{\varepsilon^2}}\sin(\theta N_s)ds,$$
converge weakly towards the law of two independent Brownian motions.
The approximating processes are functionally dependent because we
use a single Poisson process but,
in the limit, we obtain two independent processes.
Later, in \cite{BR1} it is shown that for different angles $\theta_i$ the corresponding processes
converge in law towards independent Brownian motions despite
using only one Poisson process. Finally, in \cite{BR2}, we prove that we can use a L\'evy process instead
of a Poisson process in the definition of the sequence of approximations.
In this paper we present an extension of the Kac-Stroock result in
the directions (ii) and (iii). Our aim is to define a modification
of the processes $\bar x_{\varepsilon}$ used by Stroock, similar to
(\ref{dedede}) proposed in \cite{B}. These complex processes, which
we will denote by $z_\varepsilon^\theta=x_\varepsilon^\theta+ i
y_\varepsilon^\theta$, will depend on a parameter $\theta \in
(0,\pi) \cup (\pi,2\pi)$ and will be defined from a single standard
Poisson process and a sequence of independent random variables with
common distribution Bernoulli($\frac12$). We will check that if we
consider $\theta_1,\theta_2,\dots,\theta_m$ such that for all $i\neq
j$, $1\leq i,j\leq m$, $\theta_i,\theta_j\in(0,\pi)\cup(\pi,2\pi)$,
$\theta_i+\theta_j\neq 2\pi$ and $\theta_i\neq\theta_j$, the law of
the processes
$$(x^{\theta_1}_\varepsilon,\dots,x^{\theta_m}_\varepsilon,y^{\theta_1}_\varepsilon,\dots,y^{\theta_m}_\varepsilon)$$
converges weakly in the space of the continuous functions towards the
joint law of $2m$ independent Brownian motions. Moreover, we also prove that there exist realizations of
$z_\varepsilon^\theta$ that converge almost surely to a complex Brownian
motion and we are able to obtain the rate of convergence that does not depend on $\theta$.
As a consequence, simulating a sequence of independent random variables with common distribution exponential(1) and a sequence of independent random variables with common distribution Bernoulli($\frac12$), we can get sequences of almost sure approximations of $d$ independent Brownian motions for any $d$.
For simplicity's sake, we only consider
$\theta\in(0,\pi)\cup(\pi,2\pi)$ for which there does not exist any
$m\in{\mathbb N}$ such that $\cos(m\theta)=0$ or $\sin(m\theta)=0$.
As usual, the weak convergence is proved using tightness and the identification of the law of all possible weak limits
(see, for instance \cite{B}, \cite{BR1}). The almost sure convergence is inspired by \cite{art G-H-RM}, while the computation of the rate of convergence follows the method given in
\cite{art G-G} and \cite{art G-G2}.
The paper is organized in the following way. Section 2 is devoted to defining the processes and giving the main results. In Section 3
we prove the weak convergence theorems. In Section
4 we prove the strong convergence theorem. The proof of the rate of convergence is given in Section 5. Finally, there is an Appendix with some technical results.
Throughout the paper $K, C$ will denote any positive constant, not
depending on $\varepsilon$, which may change from one expression to
another.
\section{Main results}
Let $\{M_t,t\geq0\}$ be a Poisson process of parameter 2. We define two other counting processes, $\{N_t,t\geq0\}$ and $\{N'_t,t\geq0\}$, such
that, at each jump of $M$, each of them jumps or does not jump with probability $\frac12$, independently of the jumps of the other process and of its own past.
In Proposition \ref{2poisson} (see Appendix) we prove that $N$ and $N'$ are Poisson processes of parameter 1 with independent increments on disjoint intervals.
Consider now, for $\theta\in(0,\pi)\cup(\pi,2\pi)$, the following
processes:
\begin{equation}\label{process}
\bigg\{ z_\varepsilon^\theta(t) = (-1)^G \,\varepsilon \,\int_0^{\frac{2t}{\varepsilon^2}} (-1)^{N'_r} \,e^{i\theta N_r}\,dr, \quad t\in[0,T]
\bigg\},
\end{equation}
where $N$ and $N'$ are the processes defined above and $G$ is a random variable, independent of $N$ and $N'$, with Bernoulli distribution of parameter $\frac12$.
We can write the process $z_\varepsilon^\theta(t)$ as $z_\varepsilon^\theta(t)=x_\varepsilon^\theta(t)+iy_\varepsilon^\theta(t)$, where
$$ x_\varepsilon^\theta(t) := \varepsilon\int_0^{\frac{2t}{\varepsilon^2}}(-1)^{N'_s+G} \cos(\theta N_s) \,ds , \quad y_\varepsilon^\theta(t) := \varepsilon\int_0^{\frac{2t}{\varepsilon^2}}(-1)^{N'_s+G} \sin(\theta N_s) \,ds $$
are the real part and the imaginary part, respectively.
In Figure \ref{fig1} we can see a simulation of the trajectories of
these processes for different values of $\theta$.
\begin{figure}
\caption{Simulation of the trajectories of the processes $x_{\varepsilon}^{\theta}$ and $y_{\varepsilon}^{\theta}$ for different values of $\theta$.}
\label{fig1}
\end{figure}
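Such trajectories can be simulated directly from the definition (\ref{process}). The following minimal Python sketch (the function and variable names are ours and purely illustrative; they are not part of any package or of the construction in Section 4) integrates the piecewise-constant integrand exactly between the jumps of the rate-2 Poisson process $M$, attributing each jump to $N$ and to $N'$ independently with probability $\frac12$:
\begin{verbatim}
import numpy as np

def simulate_z_eps(theta, eps, T=1.0, rng=None):
    """Simulate one trajectory of z_eps^theta = x_eps^theta + i y_eps^theta on [0, T].

    The integrand s -> (-1)^(N'_s + G) * exp(i*theta*N_s) is piecewise
    constant between the jumps of the rate-2 Poisson process M, so the
    integral is computed exactly by summing constant pieces.
    Returns the rescaled jump times in [0, T] and the complex values there.
    """
    rng = np.random.default_rng(rng)
    horizon = 2.0 * T / eps**2          # integration horizon in the s variable
    G = rng.integers(0, 2)              # Bernoulli(1/2) initial sign

    # Jump times of M (rate 2) up to the horizon.
    jumps = []
    s = rng.exponential(1.0 / 2.0)
    while s < horizon:
        jumps.append(s)
        s += rng.exponential(1.0 / 2.0)
    jumps = np.array(jumps + [horizon])

    times, values = [0.0], [0.0 + 0.0j]
    N = Nprime = 0
    z = 0.0 + 0.0j
    prev = 0.0
    for t in jumps:
        # Integrand is constant on [prev, t): add its contribution.
        z += eps * (t - prev) * (-1) ** (Nprime + G) * np.exp(1j * theta * N)
        times.append(t * eps**2 / 2.0)  # back to the original time scale
        values.append(z)
        # At each jump of M, N and N' jump independently with prob. 1/2.
        N += rng.integers(0, 2)
        Nprime += rng.integers(0, 2)
        prev = t
    return np.array(times), np.array(values)

# Example: one trajectory for theta = pi/2 and epsilon = 0.05.
t, z = simulate_z_eps(theta=np.pi / 2, eps=0.05, rng=0)
\end{verbatim}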
Our first result gives us the weak convergence to a complex Brownian motion.
\begin{teo}\label{teofeble}
Let $P_\varepsilon^\theta$ be the image law of
$z_\varepsilon^\theta$ in the Banach space $\mathcal{C}([0,T],{\mathbb C})$
of continuous functions on $[0,T]$. Then $P_\varepsilon^\theta$
converges weakly when $\varepsilon$ tends to zero to the law
$P^\theta$
on $\mathcal{C}([0,T],{\mathbb C})$ of a complex Brownian motion.
\end{teo}
\begin{proof} See Section \ref{feble}. \end{proof}
We can also get the following extension of Theorem
\ref{teofeble}, which is the analogue of the result obtained in
\cite{BR1} for our processes.
\begin{teo}\label{general} Consider $\theta_1,\theta_2,\dots,\theta_m$
such that for all $i\neq j$, $1\leq i,j\leq m$,
$\theta_i,\theta_j\in(0,\pi)\cup(\pi,2\pi)$, $\theta_i+\theta_j\neq
2\pi$ and $\theta_i\neq\theta_j$. Then the laws of the processes
$$(x^{\theta_1}_{\varepsilon},\dots,x^{\theta_m}_{\varepsilon},y^{\theta_1}_{\varepsilon},\dots,y^{\theta_m}_{\varepsilon})$$
converge weakly, in the space of the continuous functions, towards
the joint law of $2m$ independent Brownian motions.
\end{teo}
\begin{proof} See the end of Section \ref{feble}. \end{proof}
Our next result gives the strong convergence of realizations of our processes $\{z_\varepsilon^\theta(t);\,t\in[0,1]\}$
and reads as follows:
\begin{teo}\label{resultat}
There exist realizations of the process $z_{\varepsilon}^{\theta}$
on the same probability space as a complex Brownian motion
$\{z(t),t\geq0\}$ such that
$$ \lim_{\varepsilon\rightarrow 0} \max_{0\leq t\leq1} |z_\varepsilon^\theta(t)-z(t)|=0 \quad a.s. $$
\end{teo}
\begin{proof} See Section \ref{cap_realp}. \end{proof}
Notice that combining the results of Theorem \ref{general} and Theorem \ref{resultat} we get that from our two Poisson processes $N$ and $N'$ and a random variable $G$ with Bernoulli law, we are able to construct approximations to $d$ standard independent Brownian motions for $d$ as large as we want.
In our last result we give the rate of convergence of these processes.
\begin{teo}\label{thm_rate}
For all $q>0$,
$$ P\left( \max_{0\leq t\leq1} |z_\varepsilon^\theta(t)-z(t)| > \alpha^*\,\varepsilon^{\frac12}\left(\log{\frac1\varepsilon}\right)^\frac52 \right) = o(\varepsilon^q), \qquad \mbox{as} \quad \varepsilon\rightarrow0 $$
where $\alpha^*$ is a positive constant depending on $q$.
\end{teo}
\begin{proof} See Section \ref{rates}. \end{proof}
\section{Proof of weak convergence}\label{feble}
In order to prove Theorem \ref{teofeble} we have to check that the family
$P_\varepsilon^\theta$
is tight and that the law of all possible limits of $P_\varepsilon^\theta$ is the law of a complex Brownian motion. Following the same method as in \cite{B}, the proof is based on the following lemma:
\begin{lema}\label{lematecnic}For any $0\leq x_1\leq x_2$
\begin{eqnarray*}
{\mathbf E}\big[(-1)^{N'_{x_2}-N'_{x_1}} e^{i\theta(N_{x_2}-N_{x_1})}\big] = e^{-2(x_2-x_1)}.
\end{eqnarray*}
\end{lema}
\begin{proof}
From the definition of $N$ and $N'$ it follows that
\begin{eqnarray*}
&& {\mathbf E}\big[(-1)^{N'_{x_2}-N'_{x_1}} e^{i\theta(N_{x_2}-N_{x_1})}\big]\\
& =& \sum_{n=0}^\infty\sum_{m=0}^\infty (-1)^n e^{i\theta m} P\big(N'_{x_2}-N'_{x_1}=n,N_{x_2}-N_{x_1}=m\big) \\
&=&\!\!\!\!\! \sum_{n=0}^\infty\sum_{m=0}^\infty (-1)^n e^{i\theta m}\!\!\!\!\! \sum_{k=n\vee m}^\infty \!\!\! P\big(N'_{x_2}-N'_{x_1}=n,N_{x_2}-N_{x_1}=m \big| M_{x_2}-M_{x_1}=k\big) \\
&& \times\, P(M_{x_2}-M_{x_1}=k) \\
&=&\sum_{n=0}^\infty\sum_{m=0}^\infty (-1)^n e^{i\theta m} \sum_{k=n\vee m}^\infty {k\choose n}{k\choose m}\frac{1}{2^k}\frac{1}{2^k}\frac{[2(x_2-x_1)]^ke^{-2(x_2-x_1)}}{k!} \\
&=& e^{-2(x_2-x_1)} \sum_{k=0}^\infty \left(\frac{x_2-x_1}{2}\right)^k\frac{1}{k!} \sum_{n=0}^k{k\choose n}(-1)^n \sum_{m=0}^k{k\choose m}e^{i\theta
m}.
\end{eqnarray*}
Notice that $\sum_{n=0}^k{k\choose n}(-1)^n=0$ when $k\neq0$, therefore the above expression is different from zero only when $k=0$ and, as a consequence, when $n=0$ and $m=0$. Hence, as the series is absolutely convergent,
$$ {\mathbf E}\big[(-1)^{N'_{x_2}-N'_{x_1}} e^{i\theta(N_{x_2}-N_{x_1})}\big] = e^{-2(x_2-x_1)}, $$
as we wanted to prove.
\end{proof}
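Informally, the identity can also be read off as follows: conditionally on $M_{x_2}-M_{x_1}=k$, each of the $k$ jumps contributes an independent factor ${\mathbf E}\big[(-1)^{B}\big]\,{\mathbf E}\big[e^{i\theta B'}\big]$, where $B,B'\sim\textrm{Bernoulli}(\frac12)$ are the attribution indicators of that jump; since ${\mathbf E}\big[(-1)^{B}\big]=0$, only the event $\{M_{x_2}-M_{x_1}=0\}$, of probability $e^{-2(x_2-x_1)}$, contributes to the expectation.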
Using Lemma \ref{lematecnic}, we can also get a version of Lemma 3.2 in
\cite{B} well adapted to our processes.
\begin{lema}\label{lemma2}
Consider $\{\mathcal{F}_t^{\varepsilon,\theta}\}$ the natural filtration of the processes $z_\varepsilon^\theta$. Then, for any $s<t$ and for any real $\{\mathcal{F}_s^{\varepsilon,\theta}\}$-measurable and bounded random variable $Y$, we have that, for any $\theta\in(0,\pi)\cup(\pi,2\pi)$,
\begin{enumerate}[a)]
\item $\displaystyle\varepsilon^2\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}\int_{\frac{2s}{\varepsilon^2}}^{x_2} \!\!{\mathbf E}\left[(-1)^{N'_{x_2}-N'_{x_1}} e^{i\theta(N_{x_2}-N_{x_1})}\right]\! dx_1dx_2 \!=\! (t-s)+\frac{\varepsilon^2}{4}(e^{-\frac{4}{\varepsilon^2}(t-s)}-1)$
\item $\displaystyle\lim_{\varepsilon\rightarrow0} \Big| \varepsilon^2\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}\int_{\frac{2s}{\varepsilon^2}}^{x_2}{\mathbf E}\left[(-1)^{N'_{x_2}+N'_{x_1}} e^{i\theta(N_{x_2}+N_{x_1})}Y\right]\,dx_1dx_2 \Big|=0$.
\end{enumerate}
\end{lema}
\begin{proof}
Follow the same ideas as in \cite{B}, using Lemma \ref{lematecnic}.
\end{proof}
{\it Proof of Theorem \ref{teofeble}}. We will give only the skeleton of the proof.
We need to prove that the laws corresponding to
$\{x_\varepsilon^\theta(t), t\geq0\}$ and
$\{y_\varepsilon^\theta(t), t\geq0\}$ are tight. Using
Billingsley's criterion and the fact that our processes vanish at the origin,
it is sufficient to check that there exists a constant $K$ such that
for any $s<t$
\begin{eqnarray*}
&& \sup_\varepsilon\left[{\mathbf E}\left(\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{N'_x+G}\cos(\theta N_x)\,dx\right)^4 \right. \\
&& \qquad\quad \left. +{\mathbf E}\left(\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{N'_x+G}\sin(\theta N_x)\,dx\right)^4\right]\leq K(t-s)^2. \end{eqnarray*}
Following the proof of Lemma 2.1 in \cite{B} and applying Lemma \ref{lematecnic}, we obtain that
\begin{eqnarray*}
&& {\mathbf E}\left(\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{N'_x+G}\cos(\theta N_x)\,dx\right)^4+{\mathbf E}\left(\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{N'_x+G}\sin(\theta N_x)\,dx\right)^4 \\
&& \leq 12\left(\varepsilon^2\int_{\left[\frac{2s}{\varepsilon^2},\frac{2t}{\varepsilon^2}\right]^2}{\rm l}\hskip -0.21truecm 1_{\{x_1\leq x_2\}} \Big| \,{\mathbf E}\big[(-1)^{N'_{x_2}-N'_{x_1}} e^{i\theta(N_{x_2}-N_{x_1})}\big]\Big|\,dx_1\,dx_2\right)^2 \\
&& \quad + 48\,\varepsilon^4\,\int_{\mathcal{I}} \big| {\mathbf E}\big[(-1)^{N'_{x_4}-N'_{x_3}} e^{i\theta(N_{x_4}-N_{x_3})}\big] \big| \big| {\mathbf E}\big[e^{2i\theta(N_{x_3}-N_{x_2})}\big] \big| \,dx_1\dots dx_4 \\
&& \leq
12\left(\varepsilon^2\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}} \int_{\frac{2s}{\varepsilon^2}}^{x_2} e^{-2(x_2-x_1)} \,dx_1\,dx_2\right)^2 \\ & &\qquad \qquad + 48\,\varepsilon^4\,\int_{\mathcal{I}} \,e^{-2(x_4-x_3)} \,e^{-(x_3-x_2)(1-\cos(2\theta))} \,dx_1\dots dx_4
\\
&& \leq 12(t-s)^2 + \frac{48(t-s)^2}{1-\cos(2\theta)}
\end{eqnarray*}
where
$\mathcal{I}:=\{(x_1,x_2,x_3,x_4)\in\left[\frac{2s}{\varepsilon^2},\frac{2t}{\varepsilon^2}\right]^4:\,x_1\leq
x_2\leq x_3\leq x_4\}$.
So the family of laws is tight.
Now we need to identify all the
possible limit laws. Consider a subsequence, which we
will also denote by $\{P_{\varepsilon}^{\theta}\}$, weakly
convergent to some probability $P^{\theta}$. We want to prove that
the canonical process $Z^{\theta}=\{z^{\theta}(t),t\geq0\}$ is a complex Brownian motion under $P^\theta$, that is, the real part $X$ and the imaginary part $Y$ of this process are two independent Brownian motions.
Using Paul L\'evy's theorem it is sufficient to prove that, under $P^\theta$, $X$ and $Y$ are both martingales with respect to the natural filtration $\{\mathcal{F}_t\}$ with quadratic variations $<Re[Z^{\theta}],Re[Z^{\theta}]>_t=t$, $<Im[Z^{\theta}],Im[Z^{\theta}]>_t=t$ and null covariation.
To prove the martingale property with respect to the natural filtration $\{\mathcal{F}_t\}$,
following the Section 3.1 in \cite{B},
it is enough to see that for any $s_1\leq s_2\leq \cdots\leq s_n\leq s<t$ and for any bounded continuous function $\varphi:{\mathbb C}^n\rightarrow{\mathbb R}$
$$ \left| {\mathbf E}\left(\varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big)\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{G+N'_x}\,e^{i\theta N_x}\,dx\right) \right| $$
converges to zero as $\varepsilon$ tends to zero. But,
\begin{eqnarray*}
&& \left| {\mathbf E}\left(\varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big)\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{G+N'_x}\,e^{i\theta N_x}\,dx\right) \right| \\
&=&\left| {\mathbf E}\left(\varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big)(-1)^{G+N'_{2s/\varepsilon^2}}\,e^{i\theta
N_{2s/\varepsilon^2}}\right)\right.
\\&&\times\left.\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}{\mathbf E}\left((-1)^{N'_x-N'_{2s/\varepsilon^2}}\,e^{i\theta(N_x-N_{2s/\varepsilon^2})}\right)\,dx \right| \\
&\leq&K\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}} e^{-2(x-\frac{2s}{\varepsilon^2})} \,dx
=\frac{K\varepsilon}{2} \big(1-e^{-\frac{4}{\varepsilon^2}(t-s)}\big),
\end{eqnarray*}
that converges to zero as $\varepsilon$ tends to zero. Therefore, the martingale property is proved.
To prove that $<Re[Z^{\theta}],Re[Z^{\theta}]>_t=t$ and $<Im[Z^{\theta}],Im[Z^{\theta}]>_t=t$ we will check that for any $s_1\leq\dots\leq s_n\leq s<t$ and for any bounded continuous function $\varphi:{\mathbb C}^n\rightarrow{\mathbb R}$,
$$ {\mathbf E}\Big[ \varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big) \big((x_\varepsilon^\theta(t)-x_\varepsilon^\theta(s))^2-(t-s)\big) \Big] $$
and
$$ {\mathbf E}\Big[ \varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big) \big((y_\varepsilon^\theta(t)-y_\varepsilon^\theta(s))^2-(t-s)\big) \Big] $$
converge to zero as $\varepsilon$ tends to zero. Notice that, in our case,
\begin{eqnarray*}
&& {\mathbf E}\Big[ \varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big) \big(x_\varepsilon^\theta(t)-x_\varepsilon^\theta(s)\big)^2 \Big] \\
&=&{\mathbf E}\left[ \varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big) \left(\varepsilon\int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}(-1)^{G+N'_x}\cos(\theta N_x)\,dx\right)^2 \right] \\
&=&2\varepsilon^2 \int_{\frac{2s}{\varepsilon^2}}^{\frac{2t}{\varepsilon^2}}\int_{\frac{2s}{\varepsilon^2}}^{x_2} {\mathbf E}\left[ \varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big) (-1)^{N'_{x_2}+N'_{x_1}} \right.
\\ & &\quad\qquad \times
\left. \cos(\theta N_{x_1})\cos(\theta N_{x_2}) \right] \,dx_1dx_2. \\
\end{eqnarray*}
The proof of the real part follows again the structure of Section 3.2 in \cite{B}
using Lemma \ref{lemma2}. The imaginary part can be done similarly.
Finally we have to prove that $<Re[Z^{\theta}],Im[Z^{\theta}]>_t=0$.
It is sufficient to show that for any $s_1\leq\dots\leq
s_n\leq s<t$ and for any bounded continuous function
$\varphi:{\mathbb C}^n\rightarrow{\mathbb R}$,
$$ {\mathbf E}\big[ \varphi\big(z_\varepsilon^\theta(s_1),\dots,z_\varepsilon^\theta(s_n)\big) \big(x_\varepsilon^\theta(t)-x_\varepsilon^\theta(s)\big) \big(y_\varepsilon^\theta(t)-y_\varepsilon^\theta(s)\big) \big] $$
converges to zero as $\varepsilon$ tends to zero. But we obtain
this convergence using similar calculations and statement b) of Lemma \ref{lemma2}.
$\square$
{\it Proof of Theorem \ref{general}}. Taking into account the proof of Theorem \ref{teofeble} it remains
only to check that for $i\neq j$, and $\theta_i,\theta_j$ in the
conditions of Theorem \ref{general}, \linebreak {\small
$<Re[Z^{\theta_i}],Re[Z^{\theta_j}]>_t=0$,
$<Im[Z^{\theta_i}],Im[Z^{\theta_j}]>_t=0$ } and {\small
$<Im[Z^{\theta_i}],Re[Z^{\theta_j}]>_t=0$. }
But it can be proved following the proof of Theorem 2 in
\cite{BR1} and taking into account our Lemma \ref{lematecnic}.
$\square$
\section{Proof of strong convergence }\label{cap_realp}
In this section, we will prove the strong convergence when
$\varepsilon$ tends to zero of the processes
$\{z_\varepsilon^\theta(t);\,t\in[0,1]\}$ defined in Section 2.
{\it Proof of Theorem \ref{resultat}}. We will study the strong
convergence of $x_\varepsilon^\theta$,
\begin{equation}
x_\varepsilon^\theta(t)= \frac{2}{\varepsilon}(-1)^G\int_0^t(-1)^{N'_{\frac{2r}{\varepsilon^2}}}\cos{(\theta N_{\frac{2r}{\varepsilon^2}})}\,dr, \label{processreal}
\end{equation}
the real part of the processes
$z_\varepsilon^\theta$, to a standard Brownian motion
$\{x_t;\,t\in[0,1]\}$ when $\varepsilon$ tends to 0. More precisely, we will prove
that there exist realizations $\{x_\varepsilon^\theta(t),\,t\geq0\}$ of the above process on the same probability space of a Brownian motion process $\{x(t),\,t\geq0\}$ such that
\begin{equation}
\lim_{\varepsilon\rightarrow0} \max_{0\leq t\leq1} |x_\varepsilon^\theta(t)-x(t)|=0 \quad a.s. \label{fort1}
\end{equation}
The convergence
of the imaginary part of $z_\varepsilon$ to another
standard Brownian motion $\{y_t;\,t\in[0,1]\}$, independent of
$\{x_t,t\geq0\}$, follows the same proof. We will follow the method used in \cite{art G-H-RM} to prove the
strong convergence to a standard
Brownian motion.
We will divide the proof into five steps.
{\it Step 1: Definitions of the processes.} Let $(\Omega,\cal{F},\cal{P})$ be the probability space for a standard Brownian motion $\{x_t,t\geq0\}$ with $x(0)=0$ and let us define:
\begin{enumerate}
\item for each $\varepsilon>0$, $\{\epsilon_m^\varepsilon\}_{m \ge 1}$ a sequence of independent identically distributed random variables with exponential law of parameter $\frac{2}{\varepsilon}$, independent of the Brownian motion $x$,
\item $\{ \eta_m \}_{m \ge 1}$ a sequence of independent identically distributed random variables with Bernoulli($\frac12$) law, independent of $x$ and of $\{\epsilon_m^\varepsilon\}_{m \ge 1}$ for all $\varepsilon$,
\item $\{ k_m \}_{m \ge 1}$ a sequence of independent identically distributed random variables such that
$P(k_1=1)=P(k_1=-1)=\frac12$, independent of $x$, $\{ \eta_m \}_{m \ge 1}$ and $\{\epsilon_m^\varepsilon\}_{m \ge 1}$ for all $\varepsilon$.
\end{enumerate}
Using these random variables we are able to introduce the following ones:
\begin{enumerate}
\item $\{b_m \}_{m \ge 0}$ such that $b_0=0$ and
$b_m=\sum_{j=1}^{m}\eta_j$ for $m\geq1$. Clearly $b_m$ has a
Binomial distribution of parameters $(m,\frac12)$ and, for all
$n\in\{0,1,\dots,m\}$,
$P(b_{m+1}=n|b_m=n)=P(b_{m+1}=n+1|b_m=n)=\frac12$.
\item $\{\xi_m^{\varepsilon,\theta}\}_{m \ge 1}=\{|\cos{(b_{m-1}\theta)}|\epsilon_m^\varepsilon\}_{m\ge 1}$. This family of random variables is clearly independent of $x$.
\end{enumerate}
Let $\mathscr{B}$ be the $\sigma$-algebra generated by $\{b_m\}_{m\ge 1}$. The sequence of random variables $\{k_m\xi_m^{\varepsilon,\theta}\}_{m \ge 1}$ satisfies
\begin{eqnarray*}
&&{\mathbf E}(k_m\xi_m^{\varepsilon,\theta}|\mathscr{B})=0,\\
&&Var(k_m\xi_m^{\varepsilon,\theta}|\mathscr{B})={\mathbf E}[(\xi_m^{\varepsilon,\theta})^2|\mathscr{B}]=\frac{\varepsilon^2}{2}[\cos(b_{m-1}\theta)]^2.
\end{eqnarray*}
By Skorokhod's theorem (\cite{sko} page 163 or \textit{Lemma 2} in \cite{art G-G2}) for each $\varepsilon>0$ there exists a sequence $\sigma_1^{\varepsilon,\theta},\sigma_2^{\varepsilon,\theta},...$ of nonnegative random variables on $(\Omega,\cal{F},\cal{P})$ so that the sequence $x(\sigma_1^{\varepsilon,\theta}), x(\sigma_1^{\varepsilon,\theta}+\sigma_2^{\varepsilon,\theta}),...,$ has the same distribution as $k_1\xi_1^{\varepsilon,\theta},k_1\xi_1^{\varepsilon,\theta}+k_2\xi_2^{\varepsilon,\theta},...,$ and, for each $m$,
$$ {\mathbf E}(\sigma_m^{\varepsilon,\theta}|\mathscr{B})=Var(k_m\xi_m^{\varepsilon,\theta}|\mathscr{B})=\frac{\varepsilon^2}{2}[\cos(b_{m-1}\theta)]^2.$$
For each $\varepsilon$ we define $\gamma_0^{\varepsilon,\theta}\equiv0$ and for each $m$
$$ \gamma_m^{\varepsilon,\theta}=|\beta_m^{\varepsilon,\theta}|^{-1} \left| x\left(\sum_{j=0}^m\sigma_j^{\varepsilon,\theta}\right)-x\left(\sum_{j=0}^{m-1}\sigma_j^{\varepsilon,\theta}\right) \right|, $$
where $\sigma_0^{\varepsilon,\theta}\equiv0$ and $$ \beta_m^{\varepsilon,\theta}=\frac{2}{\varepsilon} \cos(b_{m-1}\theta). $$
Then, the random variables $\gamma_1^{\varepsilon,\theta},\gamma_2^{\varepsilon,\theta},\dots$ are independent with common exponential distribution of parameter $\frac{4}{\varepsilon^2}$.
Indeed, conditionally on $\mathscr{B}$, $\big|x\big(\sum_{j=0}^m\sigma_j^{\varepsilon,\theta}\big)-x\big(\sum_{j=0}^{m-1}\sigma_j^{\varepsilon,\theta}\big)\big|$ has the law of $\xi_m^{\varepsilon,\theta}=|\cos(b_{m-1}\theta)|\,\epsilon_m^\varepsilon$, so that $\gamma_m^{\varepsilon,\theta}$ has the law of $|\beta_m^{\varepsilon,\theta}|^{-1}\xi_m^{\varepsilon,\theta}=\frac{\varepsilon}{2}\,\epsilon_m^\varepsilon$, which is exponential with parameter $\frac{4}{\varepsilon^2}$.
Moreover, for any $0\leq n<m$ and $x, y \in {\mathbb R}$,
\begin{eqnarray*}
&&P(\gamma_{n+1}^{\varepsilon,\theta}\leq x,\gamma_{m+1}^{\varepsilon,\theta}\leq y)\\% &=&
&=& \sum_{j=0}^n\sum_{k=j}^{j+(m-n)} P(\gamma_{n+1}^{\varepsilon,\theta}\leq x,\,\gamma_{m+1}^{\varepsilon,\theta}\leq y\,|\,b_n=j,b_m=k) P(b_n=j,b_m=k) \\
&=& (1-e^{-\frac{4x}{\varepsilon^2}})(1-e^{-\frac{4y}{\varepsilon^2}}) \,\sum_{j=0}^n\sum_{k=j}^{j+(m-n)} P(b_n=j,b_m=k) \\
&=& P(\gamma_{n+1}^{\varepsilon,\theta}\leq x) P(\gamma_{m+1}^{\varepsilon,\theta}\leq
y),
\end{eqnarray*}
using that when $b_n$ and $b_m$ are known, $\gamma_{n+1}^{\varepsilon,\theta}$ and $\gamma_{m+1}^{\varepsilon,\theta}$ are independent.
Now, we define $x_\varepsilon^\theta(t), t \ge 0$ to be piecewise linear satisfying
\begin{equation} x_\varepsilon^\theta\left(\sum_{j=1}^{m}\gamma_j^{\varepsilon,\theta}\right)=x\left(\sum_{j=1}^{m}\sigma_j^{\varepsilon,\theta}\right), \qquad m \ge 1 \label{defreal}
\end{equation}
and $x_\varepsilon^\theta(0)\equiv0$. Observe that the process $x_\varepsilon^\theta$ has slope $\pm|\beta_m^{\varepsilon,\theta}|$ in the interval $[\sum_{j=1}^{m-1}\gamma_j^{\varepsilon,\theta},\sum_{j=1}^{m}\gamma_j^{\varepsilon,\theta}]$.
Let $\rho_m^{\varepsilon,\theta}$ be the time of the $m$th change of the absolute values of $\beta_j^{\varepsilon,\theta}$'s, i.e. the time when $\beta_j^{\varepsilon,\theta}=\frac{2}{\varepsilon}\cos[(m-1)\theta]$ and $\beta_{j+1}^{\varepsilon,\theta}=\frac{2}{\varepsilon}\cos(m\theta)$, that is when the slope of $x_\varepsilon^\theta(\cdot)$ changes from $\pm\frac{2}{\varepsilon}|\cos[(m-1)\theta]|$ to $\pm\frac{2}{\varepsilon}|\cos(m\theta)|$. Then the increments $\rho_m^{\varepsilon,\theta}-\rho_{m-1}^{\varepsilon,\theta}, m \geq 1$, with $\rho_0^{\varepsilon,\theta}\equiv0$ are independent and exponentially distributed with a parameter $\frac{2}{\varepsilon^2}$. Indeed, since
$$P\left(\beta_m^{\varepsilon,\theta}=\frac{2}{\varepsilon}\cos(n\theta)\,\big|\,\beta_{m-1}^{\varepsilon,\theta}=\frac{2}{\varepsilon}\cos[(n-1)\theta]\right)=\frac12$$ for every $n=0,1,\cdots,m$, we can write $\rho_1^{\varepsilon,\theta}=\gamma_1^{\varepsilon,\theta}+\dots+\gamma_{\widehat{N}}^{\varepsilon,\theta}$, where $P(\widehat{N}=n)=2^{-n}$ for $n\in{\mathbb N}$. Therefore $\rho_1^{\varepsilon,\theta}$ has an exponential distribution with parameter $\frac{2}{\varepsilon^2}$ (see \cite{fel}). Likewise, each increment $\rho_m^{\varepsilon,\theta}-\rho_{m-1}^{\varepsilon,\theta}$ has an exponential distribution with parameter $\frac{2}{\varepsilon^2}$ and the increments are independent since they are sum of disjoint blocks of the $\gamma_m^{\varepsilon,\theta}$'s.
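For the reader's convenience, we sketch the standard Laplace transform computation behind this classical fact (cf. \cite{fel}): if $X_1,X_2,\dots$ are independent exponential random variables of parameter $\lambda$ and $\widehat{N}$ is independent of them with $P(\widehat{N}=n)=2^{-n}$ for $n\in{\mathbb N}$, then, for every $u\geq0$,
$$ {\mathbf E}\Big(e^{-u(X_1+\cdots+X_{\widehat{N}})}\Big) =\sum_{n\geq1}\frac{1}{2^n}\left(\frac{\lambda}{\lambda+u}\right)^n =\frac{\frac{\lambda}{2(\lambda+u)}}{1-\frac{\lambda}{2(\lambda+u)}} =\frac{\lambda/2}{\lambda/2+u}, $$
so that the geometric sum is exponential with parameter $\frac{\lambda}{2}$; applied with $\lambda=\frac{4}{\varepsilon^2}$, this gives the parameter $\frac{2}{\varepsilon^2}$ above.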
On the other hand, let $\tau_m^\varepsilon$ be the time of the $m$th
change of the sign of the slopes. Following the same arguments for
the times $\rho_m^{\varepsilon,\theta}$ we get that the increments
$\tau_m^\varepsilon-\tau_{m-1}^\varepsilon$, for each $m$, with
$\tau_0^\varepsilon\equiv0$, are independent and exponentially
distributed with a parameter $\frac{2}{\varepsilon^2}$. Moreover the
increments are sum of disjoint blocks of the
$\gamma_m^{\varepsilon,\theta}$'s.
Thus $x_\varepsilon^\theta$ is a realization of the process (\ref{processreal}).
{\it Step 2 : Decomposition of the convergence.}
Let us come back to the proof of (\ref{fort1}). Recalling that $\gamma_0^{\varepsilon,\theta}\equiv\sigma_0^{\varepsilon,\theta}\equiv0$, by (\ref{defreal}) and the uniform continuity of Brownian motion on $[0,1]$, we have almost surely
\begin{eqnarray*}
\lim_{\varepsilon\rightarrow0} \,\,\max_{0\leq t\leq1} \left| x_\varepsilon^\theta(t)-x(t) \right| &=& \lim_{\varepsilon\rightarrow0} \,\,\max_{0\leq m\leq\frac{4}{\varepsilon^2}} \left| x_\varepsilon^\theta\left(\sum_{j=1}^m\gamma_j^{\varepsilon,\theta}\right)-x\left(\sum_{j=1}^m\gamma_j^{\varepsilon,\theta}\right) \right| \\
&=& \lim_{\varepsilon\rightarrow0} \,\,\max_{0\leq m\leq\frac{4}{\varepsilon^2}} \left| x\left(\sum_{j=1}^m\sigma_j^{\varepsilon,\theta}\right)-x\left(\sum_{j=1}^m\gamma_j^{\varepsilon,\theta}\right) \right|,
\end{eqnarray*}
which reduces the proof to checking that
\begin{equation}\nonumber
\lim_{\varepsilon\rightarrow0} \,\max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \gamma_1^{\varepsilon,\theta}+\dots+\gamma_m^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right|=0 \quad a.s.,
\end{equation}
and that
\begin{equation}\label{limdif} \lim_{\varepsilon\rightarrow0} \,\max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \sigma_1^{\varepsilon,\theta}+\dots+\sigma_m^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right|=0 \quad a.s. \end{equation}
The first limit can be obtained easily by the Borel-Cantelli lemma since, by Kolmogorov's inequality, for each $\alpha>0$, we have
\begin{eqnarray*}
P\left(\max_{1\leq m\leq\frac{4}{\varepsilon^2}}\left| \gamma_1^{\varepsilon,\theta}+\dots+\gamma_m^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right|\geq\alpha\right) &\leq& \frac{1}{\alpha^2}\sum_{m=1}^{\frac{4}{\varepsilon^2}} Var(\gamma_m^{\varepsilon,\theta}) \\
&=& \frac{\varepsilon^2}{4\alpha^2}.
\end{eqnarray*}
In order to deal with (\ref{limdif}) we can use the decomposition
\begin{equation}\label{exp1}
\max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \sum_{j=1}^m\sigma_j^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right| \! \leq \! \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \sum_{j=1}^m(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta}) \right| + \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \sum_{j=1}^m\alpha_j^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right|, \end{equation}
where $\alpha_j^{\varepsilon,\theta}:={\mathbf E}(\sigma_j^{\varepsilon,\theta}|\mathscr{B})=\frac{\varepsilon^2}{2}[\cos(b_{j-1}\theta)]^2$ and
\begin{eqnarray}\label{eq10}
\left| \sum_{j=1}^m\alpha_j^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right| &=& \left| \sum_{j=1}^m\frac{\varepsilon^2}{2}[\cos(b_{j-1}\theta)]^2-m\frac{\varepsilon^2}{4} \right| \nonumber\\
&=& \frac{\varepsilon^2}{4} \left| \,\sum_{j=1}^m[1+\cos(2b_{j-1}\theta)]-m\, \right|
= \frac{\varepsilon^2}{4} \left| \sum_{j=1}^m\cos(2b_{j-1}\theta) \right|.
\end{eqnarray}
Notice now that, for a fixed $m$,
\begin{eqnarray}\label{eq11}
\sum_{j=1}^m\cos(2b_{j-1}\theta) = \sum_{\begin{subarray}{l} 0\leq k\leq n\\ B_{n,m}\end{subarray}}T_k\cos(2k\theta) + Z_{n+1}\cos[2(n+1)\theta],
\end{eqnarray}
where $B_{n,m}:=\{k\in\{0,\dots,n\}\mbox{ s.t. }T_0+\cdots+T_n\leq m,\,T_0+\cdots+T_{n+1}>m\}$, $T_k$ denotes the number of indices $j$ with $b_{j-1}=k$ (the holding time of the walk $\{b_j\}$ at level $k$), so that the $T_k$ are independent identically distributed random variables with $T_k\sim\mbox{Geom}\big(\frac12\big)$ for each $k=0,1,2,\dots$, that is $P(T_k=j)=2^{-j}$ for $j\geq1$ and ${\mathbf E}(T_k)=Var(T_k)=2$, and $0\leq Z_{n+1}\leq T_{n+1}$ counts the terms of the last, possibly incomplete, block.
Hence, putting together (\ref{exp1}), (\ref{eq10}) and (\ref{eq11}), it follows that
\begin{eqnarray*}
& &\max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \sum_{j=1}^m\sigma_j^{\varepsilon,\theta}-m\frac{\varepsilon^2}{4} \right| \leq \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \left| \sum_{j=1}^m(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta}) \right|\ \\ & &+ \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \frac{\varepsilon^2}{4} \left| \sum_{\begin{subarray}{l} 0\leq k\leq n\\ B_{n,m}\end{subarray}}T_k\cos(2k\theta) \right| +\max_{\begin{subarray}{c} 1\leq m\leq\frac{4}{\varepsilon^2}\\ B_{n,m}\end{subarray}} \frac{\varepsilon^2}{4} \Big| Z_{n+1}\cos[2(n+1)\theta] \Big| \\
& & := L_1^\varepsilon + L_2^\varepsilon+L_3^\varepsilon,
\end{eqnarray*}
which reduces the proof of (\ref{limdif}) to checking that $\lim_{\varepsilon \to 0} (L_1^\varepsilon + L_2^\varepsilon+L_3^\varepsilon) = 0$ a.s.
\goodbreak
{\it Step 3: Study of $L_1^\varepsilon$.} Let $M_n:=\sum_{j=1}^n(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta})$. Let $\mathcal{B}_n$ denote the $\sigma$-algebra generated by $\{\mathscr{B},\sigma_k^{\varepsilon,\theta};\, k\leq n\}$. Clearly, $M_n$ is $\mathcal{B}_n$-measurable and, since ${\mathbf E}(\sigma_n^{\varepsilon,\theta}-\alpha_n^{\varepsilon,\theta}|\mathscr{B})=0$, ${\mathbf E}(M_n|\mathcal{B}_{n-1})=M_{n-1}$. So $|M_n|$ is a submartingale. By Doob's martingale inequality, for each $\alpha>0$
\begin{equation} P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}} |M_m|\geq\alpha \right) \leq \frac{1}{\alpha^2} {\mathbf E}\Big(\big|M_{[\frac{4}{\varepsilon^2}]}\big|^2\Big), \label{dd} \end{equation}
where $[v]$ is the integer part of $v$.
For fixed $b_j$'s, the $\sigma_j^{\varepsilon,\theta}$'s are independent, and so, for $j\neq k$,
\begin{equation}\label{aa}
{\mathbf E}[(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta})(\sigma_k^{\varepsilon,\theta}-\alpha_k^{\varepsilon,\theta})]= {\mathbf E}\big[{\mathbf E}\big(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta}\big|\mathscr{B}\big) {\mathbf E}\big(\sigma_k^{\varepsilon,\theta}-\alpha_k^{\varepsilon,\theta}\big|\mathscr{B}\big)\big]
= 0.
\end{equation}
On the other hand
\begin{equation}\label{bb}{\mathbf E}[\sigma_j^{\varepsilon,\theta}\alpha_j^{\varepsilon,\theta}] = {\mathbf E}\big[{\mathbf E}\big(\sigma_j^{\varepsilon,\theta}\alpha_j^{\varepsilon,\theta}\big|\mathscr{B}\big)\big]= {\mathbf E}\big[\alpha_j^{\varepsilon,\theta}\,{\mathbf E}\big(\sigma_j^{\varepsilon,\theta}\big|\mathscr{B}\big)\big] = {\mathbf E}[(\alpha_j^{\varepsilon,\theta})^2]. \end{equation}
Using (\ref{aa}) and (\ref{bb}), we get that
\begin{eqnarray}\label{eq_sigma1}
& & {\mathbf E}\Big(\big|M_{[\frac{4}{\varepsilon^2}]}\big|^2\Big) \nonumber \\
& &=
\sum_{j=1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta})^2]+2\sum_{j=1}^{[\frac{4}{\varepsilon^2}]-1}\sum_{k=j+1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta})(\sigma_k^{\varepsilon,\theta}-\alpha_k^{\varepsilon,\theta})]\nonumber \\
&& = \sum_{j=1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[(\sigma_j^{\varepsilon,\theta})^2] -
\sum_{j=1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[(\alpha_j^{\varepsilon,\theta})^2].
\end{eqnarray}
Recalling that by Skorokhod's theorem (\cite{sko}, page 163) there exists a positive constant $C_1$ such that
$$ {\mathbf E}[(\sigma_j^{\varepsilon,\theta})^2|\mathscr{B}] \leq C_1\,{\mathbf E}[(\xi_j^{\varepsilon,\theta})^4|\mathscr{B}] = C_1 4!\left(\frac{\varepsilon}{2}\right)^4[\cos(b_{j-1}\theta)]^4, $$
(\ref{eq_sigma1}) can be bounded by
\begin{eqnarray*}
&&{\mathbf E}\Big(\big|M_{[\frac{4}{\varepsilon^2}]}\big|^2\Big)
\leq C_1 4!\left(\frac{\varepsilon}{2}\right)^4\sum_{j=1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[\cos(b_{j-1}\theta)^4] - \left(\frac{\varepsilon^2}{2}\right)^2\sum_{j=1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[\cos(b_{j-1}\theta)^4] \\
&& =C\varepsilon^4 \,\sum_{j=1}^{[\frac{4}{\varepsilon^2}]}{\mathbf E}[\cos(b_{j-1}\theta)^4]
\leq
4C\varepsilon^2,
\end{eqnarray*}
where $C$ is a positive constant.
So, from (\ref{dd}) we obtain that for any $\alpha>0$
$$ P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}} |M_m|\geq\alpha \right) \leq \frac{4C\varepsilon^2}{\alpha^2},$$
and by the Borel-Cantelli lemma it follows that
$ \lim_{\varepsilon\rightarrow0} L_1^\varepsilon=0 \, a.s.$
{\it Step 4: Study of $L_2^\varepsilon$.} Since $n\leq m-1$, we have
\begin{eqnarray*}
L_2^\varepsilon &\leq& \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \frac{\varepsilon^2}{4} \left| \sum_{k=0}^n T_k\cos(2k\theta) \right| \\
&\leq& \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \frac{\varepsilon^2}{4} \left| \sum_{k=0}^n (T_k-2)\cos(2k\theta) \right| + \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \frac{\varepsilon^2}{2} \left| \sum_{k=0}^n \cos(2k\theta) \right|
\\ & :=& L_{21}^\varepsilon + L_{22}^\varepsilon.
\end{eqnarray*}
Let us prove that $ L_{21}^\varepsilon$ vanishes when $\varepsilon$ goes to 0. Let $\mathcal{F}^n$ denote the $\sigma$-algebra generated by $T_k$ for $k\leq n$. Define $M'_n:=\frac{\varepsilon^2}{4}\sum_{k=0}^n(T_k-2)\cos(2k\theta)$. It is easy to see that $M'_n$ is a martingale. By Doob's martingale inequality, for each $\alpha>0$
\begin{eqnarray*}
&&P\left( \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} |M'_n|>\alpha \right)\\
&\leq& \frac{1}{\alpha^2} \,{\mathbf E}\left(\big|M'_{[\frac{4}{\varepsilon^2}]-1}\big|^2\right)
= \frac{1}{\alpha^2} \,{\mathbf E}\left( \frac{\varepsilon^4}{16}\left|\sum_{k=0}^{[\frac{4}{\varepsilon^2}]-1}(T_k-2)\cos(2k\theta)\right|^2\,
\right)\\
&=& \frac{1}{\alpha^2}\,\frac{\varepsilon^4}{16} \,\sum_{k=0}^{[\frac{4}{\varepsilon^2}]-1}{\mathbf E}[(T_k-2)^2]\cos^2(2k\theta)
\leq \frac{1}{\alpha^2}\,\frac{\varepsilon^4}{16}\,\frac{8}{\varepsilon^2}
= \frac{\varepsilon^2}{2\alpha^2},
\end{eqnarray*}
where we have used that $(T_k-2)$'s are independent and centered.
Therefore, by the Borel-Cantelli lemma
$ \lim_{\varepsilon\rightarrow0} L_{21}^\varepsilon=0 $ a.s.
On the other hand, since for any $n$,
\begin{eqnarray*}
&&\sum_{k=0}^n\cos(2k\theta) \\&=& \frac12\left( \sum_{k=0}^ne^{i2k\theta}+\sum_{k=0}^ne^{-i2k\theta} \right)
= \frac12\left( \frac{1-e^{i2(n+1)\theta}}{1-e^{i2\theta}}+\frac{1-e^{-i2(n+1)\theta}}{1-e^{-i2\theta}} \right) \\
&=&
\frac12\left(1+\frac{-\cos\big(2(n+1)\theta\big)+\cos(2n\theta)}{1-\cos(2\theta)}\right)
\leq\frac12\left(1+\frac{2}{1-\cos(2\theta)}\right),
\end{eqnarray*}
it follows that
$ \lim_{\varepsilon\rightarrow0} L_{22}^\varepsilon=0. $
{\it Step 5: Study of $L_3^\varepsilon$.}
Since $n\leq m-1$ and $Z_{n+1}\leq T_{n+1}$,
\begin{equation}\label{eq13}
L_3^\varepsilon\leq \max_{\begin{subarray}{c} 1\leq m\leq\frac{4}{\varepsilon^2}\\ B_{n,m}\end{subarray}} \frac{\varepsilon^2}{4} T_{n+1} \leq \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \frac{\varepsilon^2}{4} T_{n+1}.
\end{equation}
Thus, it is sufficient to see that
$$ \lim_{n\rightarrow\infty}\,\max_{1\leq k\leq n}\,\frac{T_k}{n}=0 \qquad \mbox{a.s.} $$
For fixed $\delta>0$, we have that
\begin{eqnarray*}
P\left( \max_{1\leq k\leq n}\frac{T_k}{n}>\delta \right) &=& P\left( \max_{1\leq k\leq n}T_k>\delta n \right) = 1-\Big(P(T_k\leq\delta n)\Big)^n \\
&=& 1-\left( \sum_{j=1}^{[\delta n]}\frac{1}{2^j} \right)^n = 1-\left( 1-\frac{1}{2^{[\delta n]}}
\right)^n.
\end{eqnarray*}
Since
$ \lim_{n\rightarrow\infty} 1-\left( 1-\frac{1}{2^{[\delta n]}} \right)^n = 0 $ it follows that
$$ \max_{1\leq k\leq n}\,\frac{T_k}{n} \xrightarrow[n\rightarrow\infty]{P} 0. $$
Finally, the almost sure convergence follows from the fact that
$$ \sum_{n=1}^\infty \left[ 1-\left( 1-\frac{1}{2^{[\delta n]}} \right)^n \right]<\infty,$$
proved in Lemma \ref{serie} in the Appendix.
$\square$
\section{Proof of rate of convergence}\label{rates}
In this section we will prove the rate of convergence of the processes $z_\varepsilon^\theta(t)$. Although the proof follows the structure of part b) of Theorem 1 in \cite{art G-G}, some of the terms appearing have to be computed in a new way.
{\it Proof of Theorem \ref{thm_rate}.}
To prove the theorem it is sufficient to check that, for any $q>0$,
$$ P\left( \max_{0\leq t\leq1} |x_\varepsilon^\theta(t)-x(t)| > \alpha\,\varepsilon^{\frac12}\left(\log{\frac1\varepsilon}\right)^\frac52 \right) = o(\varepsilon^q) \qquad \mbox{as} \quad \varepsilon\rightarrow0 $$
and
$$ P\left( \max_{0\leq t\leq1} |y_\varepsilon^\theta(t)-y(t)| > \alpha'\,\varepsilon^{\frac12}\left(\log{\frac1\varepsilon}\right)^\frac52 \right) = o(\varepsilon^q) \qquad \mbox{as} \quad \varepsilon\rightarrow0 $$
where $\alpha$ and $\alpha'$ are two positive constants depending on $q$.
We will analyze the rate of convergence for the real part. The results for the imaginary part can be obtained by similar computations.
Recall that $\gamma_0^{\varepsilon,\theta}\equiv\sigma_0^{\varepsilon,\theta}\equiv0$ and define
$$ \Gamma_m^{\varepsilon,\theta}=\sum_{j=0}^m \gamma_j^{\varepsilon,\theta} \qquad \mbox{and} \qquad \Lambda_m^{\varepsilon,\theta}=\sum_{j=0}^m \sigma_j^{\varepsilon,\theta}. $$
Set
$$ J^\varepsilon \equiv \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \,\max_{0\leq r\leq\gamma_{m+1}^{\varepsilon,\theta}} \big|x_\varepsilon^\theta(\Gamma_m^{\varepsilon,\theta}+r)-x(\Gamma_m^{\varepsilon,\theta}+r)\big|. $$
Since $ x_\varepsilon^\theta$ is piecewise linear and using the definition of $\gamma_m^{\varepsilon,\theta}$, notice that
\begin{eqnarray*}
x_\varepsilon^\theta(\Gamma_m^{\varepsilon,\theta}+r)
&=& x(\Lambda_m^{\varepsilon,\theta})+\frac{x(\Lambda_{m+1}^{\varepsilon,\theta})-x(\Lambda_m^{\varepsilon,\theta})}{\gamma_{m+1}^{\varepsilon,\theta}}\,r \\
&=& x(\Lambda_m^{\varepsilon,\theta})+|\beta_{m+1}^{\varepsilon,\theta}| \cdot \operatorname{sgn}\Big(x(\Lambda_{m+1}^{\varepsilon,\theta})-x(\Lambda_m^{\varepsilon,\theta})\Big) r.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
J^\varepsilon &\leq& \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \big|x(\Lambda_m^{\varepsilon,\theta})-x(\Gamma_m^{\varepsilon,\theta})\big| + \max_{1\leq m\leq\frac{4}{\varepsilon^2}+1} |\beta_m^{\varepsilon,\theta}|\gamma_m^{\varepsilon,\theta} \\ && +\max_{0\leq m\leq\frac{4}{\varepsilon^2}} \,\max_{0\leq r\leq\gamma_{m+1}^{\varepsilon,\theta}} \big|x(\Gamma_m^{\varepsilon,\theta})-x(\Gamma_m^{\varepsilon,\theta}+r)\big| \\
&\leq& \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \Big|x(\Lambda_m^{\varepsilon,\theta})-x\Big(\frac{m\varepsilon^2}{4}\Big)\Big| + \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \Big|x(\Gamma_m^{\varepsilon,\theta})-x\Big(\frac{m\varepsilon^2}{4}\Big)\Big| \\
&& + \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \,\max_{0\leq r\leq\gamma_{m+1}^{\varepsilon,\theta}} \big|x(\Gamma_m^{\varepsilon,\theta})-x(\Gamma_m^{\varepsilon,\theta}+r)\big| + \max_{1\leq m\leq\frac{4}{\varepsilon^2}+1} |\beta_m^{\varepsilon,\theta}|\gamma_m^{\varepsilon,\theta} \\
&:=& J_1^\varepsilon+J_2^\varepsilon+J_3^\varepsilon+J_4^\varepsilon,
\end{eqnarray*}
and for any $a_\varepsilon>0$,
$$ P(J^\varepsilon>a_\varepsilon) \leq \sum_{j=1}^4P\Big(J_j^\varepsilon>\frac{a_\varepsilon}{4}\Big) := I_1^\varepsilon+I_2^\varepsilon+I_3^\varepsilon+I_4^\varepsilon.$$
We will study the four terms separately.
{\it 1. Study of the term $I_4^\varepsilon.$}
Since $\beta_m^{\varepsilon,\theta}=\frac{2}{\varepsilon}\cos(b_{m-1}\theta)$ and
$\gamma_m^{\varepsilon,\theta}$'s are independent exponentially distributed variables with parameter $\frac{4}{\varepsilon^2}$, this term can be handled as term D in Theorem 1 in \cite{art G-G}. Indeed
\begin{eqnarray*}
I_4^\varepsilon &=& P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}+1} |\beta_m^{\varepsilon,\theta}|\gamma_m^{\varepsilon,\theta}>\frac{a_\varepsilon}{4} \right)
\leq P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}+1} \gamma_m^{\varepsilon,\theta}>\frac{a_\varepsilon\varepsilon}{8} \right) \\
&=& 1-P\left( \gamma_m^{\varepsilon,\theta}\leq\frac{a_\varepsilon\varepsilon}{8} \right)^{\frac{4}{\varepsilon^2}+1}
= 1-\left( 1-e^{-\frac{a_\varepsilon}{2\varepsilon}}
\right)^{\frac{4}{\varepsilon^2}+1}.
\end{eqnarray*}
For $a_\varepsilon$ of the type $\alpha\,\varepsilon^\frac12\left(\log\frac{1}{\varepsilon}\right)^\beta$, where $\alpha$ and $\beta$ are arbitrary fixed positive constants, we can see that
\begin{equation}\label{limD}
\lim_{\varepsilon\rightarrow0} \frac{1-\left( 1-e^{-\frac{a_\varepsilon}{2\varepsilon}} \right)^{\frac{4}{\varepsilon^2}+1}}{\varepsilon^q}=0 \qquad \mbox{a.s.}
\end{equation}
and so $I_4^\varepsilon=o(\varepsilon^q)$. Thus, the result is true with
$\beta=\frac52$.
{\it 2. Study of the term $I_1^\varepsilon.$} Let $\delta_\varepsilon>0$. Using the same decomposition as in the previous section (see (\ref{eq10}) and (\ref{eq11})) we can write
\begin{eqnarray*}
I_1^\varepsilon
&\leq& P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \, \max_{|s|\leq\delta_\varepsilon} \Big|x\Big(\frac{m\varepsilon^2}{4}+s\Big)-x\Big(\frac{m\varepsilon^2}{4}\Big)\Big|>\frac{a_\varepsilon}{4} \right) \\
&&+ P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \Big|\Lambda_m^{\varepsilon,\theta}-\frac{m\varepsilon^2}{4}\Big|>\delta_\varepsilon \right)\\
&\leq& P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \, \max_{|s|\leq\delta_\varepsilon} \Big|x\Big(\frac{m\varepsilon^2}{4}+s\Big)-x\Big(\frac{m\varepsilon^2}{4}\Big)\Big|>\frac{a_\varepsilon}{4} \right) \\
&&+ P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \bigg|\sum_{j=1}^m(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta})\bigg|>\frac{\delta_\varepsilon}{2} \right) \\
&&+
P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \frac{\varepsilon^2}{4} \Bigg|\sum_{\begin{subarray}{l} 0\leq k\leq n\\ B_{n,m}\end{subarray}}T_k\cos(2k\theta)\Bigg|>\frac{\delta_\varepsilon}{4} \right) \\
&&+ P\left( \,\max_{\begin{subarray}{c} 1\leq m\leq\frac{4}{\varepsilon^2}\\ B_{n,m}\end{subarray}} \frac{\varepsilon^2}{4} \Big| Z_{n+1}\cos[2(n+1)\theta] \Big|>\frac{\delta_\varepsilon}{4} \right)\\
&:=& I_{11}^\varepsilon+ I_{12}^\varepsilon+ I_{13}^\varepsilon+ I_{14}^\varepsilon,
\end{eqnarray*}
recalling that $\alpha_j^{\varepsilon,\theta}={\mathbf E}(\sigma_j^{\varepsilon,\theta}|\mathscr{B})=\frac{\varepsilon^2}{2}[\cos(b_{j-1}\theta)]^2$ and where $B_{n,m}:=\{k\in\{0,\dots,n\}\mbox{ s.t. }T_0+\cdots+T_n\leq m,\,T_0+\cdots+T_{n+1}>m\}$, $T_k$ are independent identically distributed random variables with $T_k\sim\mbox{Geom}\big(\frac12\big)$ for each $k=0,1,2,\dots$ and $0\leq Z_{n+1}\leq T_{n+1}$. We will study again the four terms separately.
{\it 2.1. Study of the term $I_{12}^\varepsilon.$}
In Section \ref{cap_realp} we have seen that $|M_n|=\big|\sum_{j=1}^n(\sigma_j^{\varepsilon,\theta}-\alpha_j^{\varepsilon,\theta})\big|$ is a submartingale, so
\begin{eqnarray}
I_{12}^\varepsilon &=& P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}} \bigg|\sum_{j=1}^m\Big(\frac{4}{\varepsilon^2}\sigma_j^{\varepsilon,\theta}-2[\cos(b_{j-1}\theta)]^2\Big)\bigg|>\frac{2\delta_\varepsilon}{\varepsilon^2} \right) \nonumber \\
&\leq& \left(\frac{\varepsilon^2}{2\delta_\varepsilon}\right)^{2p} \, {\mathbf E}\left[\left(\sum_{m=1}^{ [4/\varepsilon^2] }\bigg(\frac{4}{\varepsilon^2}\sigma_m^{\varepsilon,\theta}-2[\cos(b_{m-1}\theta)]^2\bigg)\right)^{2p}\right],
\label{eq4}
\end{eqnarray}
for any $p\geq1$, by Doob's martingale inequality.
Set $Y_m:=\frac{4}{\varepsilon^2}\sigma_m^{\varepsilon,\theta}-2[\cos(b_{m-1}\theta)]^2$. Using H\"older's inequality, we obtain
\begin{eqnarray}
{\mathbf E}\left[\Bigg(\sum_{m=1}^{[4/\varepsilon^2]}Y_m\Bigg)^{2p}\right] &=& \sum_{\begin{subarray}{c} |u|=2p\\ u_m\neq1\,\forall m \end{subarray}}{2p\choose u} {\mathbf E}\Big(Y_1^{u_1}\cdots Y_{[4/\varepsilon^2]}^{u_{[4/\varepsilon^2]}}\Big) \label{eq5} \\
&\leq& \sum_{\begin{subarray}{c} |u|=2p\\ u_m\neq1\,\forall m \end{subarray}}{2p\choose u} \big[{\mathbf E}\big(Y_1^{2p}\big)\big]^{u_1/2p}\cdots\big[{\mathbf E}\big(Y_{[4/\varepsilon^2]}^{2p}\big)\big]^{u_{[4/\varepsilon^2]}/2p}, \nonumber
\end{eqnarray}
where $u=(u_1,\dots,u_{[4/\varepsilon^2]})$ with $|u|=u_1+\cdots+u_{[4/\varepsilon^2]}$ and
$$ {2p\choose u} = \frac{(2p)!}{u_1!\cdots u_{[4/\varepsilon^2]}!}. $$
Notice that in the first equality we have used that if $u_m=1$ for some $m$, then ${\mathbf E}\big(Y_1^{u_1}\cdots Y_{[4/\varepsilon^2]}^{u_{[4/\varepsilon^2]}}\big)=0$. Indeed, assume that $u_m=1$; then
\begin{eqnarray*}
&&{\mathbf E}\big(Y_1^{u_1}\cdots Y_{[4/\varepsilon^2]}^{u_{[4/\varepsilon^2]}}\big)= {\mathbf E}\Big[{\mathbf E}\big(Y_1^{u_1}\cdots Y_{[4/\varepsilon^2]}^{u_{[4/\varepsilon^2]}}\big|\mathscr{B}\big)\Big] \\
&=& {\mathbf E}\Big[{\mathbf E}\big(Y_1^{u_1}\big|\mathscr{B}\big)\cdots{\mathbf E}\big(Y_{m-1}^{u_{m-1}}\big|\mathscr{B}\big){\mathbf E}\big(Y_m\big|\mathscr{B}\big){\mathbf E}\big(Y_{m+1}^{u_{m+1}}\big|\mathscr{B}\big)\cdots{\mathbf E}\big(Y_{[4/\varepsilon^2]}^{u_{[4/\varepsilon^2]}}\big|\mathscr{B}\big)\Big]
\end{eqnarray*}
which is clearly zero, since for fixed $\{b_j\}$'s the $\{\sigma_j\}$'s are independent and ${\mathbf E}\big(Y_m\big|\mathscr{B}\big)=0$.
On the other hand, by Skorokhod's theorem, we have
\begin{eqnarray*}
&&{\mathbf E}\big[(\sigma_m^{\varepsilon,\theta})^{2p}\big] = {\mathbf E}\Big[{\mathbf E}\big[(\sigma_m^{\varepsilon,\theta})^{2p}\big|\mathscr{B}\big]\Big]
\leq {\mathbf E}\Big[2(2p)!\,{\mathbf E}\big[(k_m\xi_m^{\varepsilon,\theta})^{4p}\big|\mathscr{B}\big]\Big] \nonumber \\
&& \leq 2(2p)!\,{\mathbf E}\bigg[(4p)!\left(\frac{\varepsilon}{2}\right)^{4p}\big(\cos(b_{m-1}\theta)\big)^{4p}\bigg]
\leq 2(2p)!\,(4p)!\left(\frac{\varepsilon}{2}\right)^{4p}.
\end{eqnarray*}
So, using the inequality $|a+b|^{2p}\leq2^{2p}(|a|^{2p}+|b|^{2p})$, we obtain
\begin{eqnarray} \nonumber
{\mathbf E}\Big(Y_m^{2p}\Big) &\leq& 2^{2p}\left[ \bigg(\frac{4}{\varepsilon^2}\bigg)^{2p}{\mathbf E}\big[(\sigma_m^{\varepsilon,\theta})^{2p}\big]+2^{2p}\,{\mathbf E}\big[\big(\cos(b_{m-1}\theta)\big)^{4p}\big] \right] \\
& \leq & 2^{2p}\left[ \bigg(\frac{4}{\varepsilon^2}\bigg)^{2p}2(2p)!\,(4p)!\left(\frac{\varepsilon}{2}\right)^{4p}+2^{2p} \right] \nonumber \\
&\leq& 2^{2p}\left[ 2(2p)!\,(4p)!+2^{2p} \right]
\leq 2^{2p+1}\cdot2(2p)!\,(4p)! \nonumber
\\ & = & 4\cdot2^{2p}(2p)!\,(4p)!.\label{eq6}
\end{eqnarray}
Finally Lemma \ref{lemF} yields that, for $\displaystyle p\leq1+\frac{\log2}{\log\big[1+\varepsilon(1-\varepsilon^2/4)^\frac12\big]}$,
\begin{eqnarray}\label{eq7}
\sum_{\begin{subarray}{c} |u|=2p\\ u_i\neq1\,\forall i \end{subarray}}{2p\choose u} \leq 2^{2p}(2p)!\left(\frac{4}{\varepsilon^2}\right)^p.
\end{eqnarray}
Therefore, for $p$ as above, putting together (\ref{eq4}), (\ref{eq5}), (\ref{eq6}) and (\ref{eq7}) and applying Stirling's formula, $k!=\sqrt{2\pi}\,k^{k+\frac12}e^{-k}e^{\frac{a}{12k}}$, with $0<a<1$, we obtain
\begin{eqnarray*}
I_{12}^\varepsilon &\leq& 4\,(\delta_\varepsilon)^{-2p} \,\varepsilon^{2p} \,2^{4p}\big((2p)!\big)^2\,(4p)! \\ &\leq& 4\,(\delta_\varepsilon)^{-2p} \,\varepsilon^{2p} \,2^{4p} \Big[\sqrt{2\pi}(2p)^{2p+\frac12}e^{-2p}e^{\frac{a}{24p}}\Big]^2 \Big[\sqrt{2\pi}(4p)^{4p+\frac12}e^{-4p}e^{\frac{a}{48p}}\Big] \\
&=& (\delta_\varepsilon)^{-2p} \,\varepsilon^{2p} \,2^{16p+4} \,(2\pi)^{\frac32} \,e^{-8p+\frac{a}{12p}+\frac{a}{48p}} \,p^{8p+\frac32} \\
&\leq& K_1^p \,(\delta_\varepsilon)^{-2p} \,\varepsilon^{2p} \,p^{8p+3/2}
\end{eqnarray*}
where $K_1$ is a constant.
Let us impose now $ K_1^p \,(\delta_\varepsilon)^{-2p} \,\varepsilon^{2p} \,p^{8p+3/2}=\varepsilon^{2q}$ and $p=\big[\log{\frac1\varepsilon}\big]$. Observe that $p=\big[\log{\frac1\varepsilon}\big]$ fulfills the condition on $p$ of inequality (\ref{eq7}). We get
\begin{eqnarray}\label{eq9}
\delta_\varepsilon = K_2 \,\varepsilon^{1-q/[\log{1/\varepsilon}]}
\,\left[\log{\frac1\varepsilon}\right]^{4+3/(4[\log{1/\varepsilon}])},
\end{eqnarray}
where $K_2=\sqrt{K_1}$ is a constant. Clearly, with this choice of $\delta_\varepsilon$ we have $I_{12}^\varepsilon\leq\varepsilon^{2q}$, and so $I_{12}^\varepsilon=o(\varepsilon^q)$.
{\it 2.2. Study of the term $I_{13}^\varepsilon.$}
As in Section \ref{cap_realp}, since $n\leq m-1$, we can write
\begin{eqnarray*}
I_{13}^\varepsilon &\leq& P\left( \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \frac{\varepsilon^2}{4} \Bigg|\sum_{k=0}^n (T_k-2)\cos(2k\theta)\Bigg|>\frac{\delta_\varepsilon}{8} \right) \\
&&+\, {\rm l}\hskip -0.21truecm 1_{\left\{\max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \frac{\varepsilon^2}{2} \Bigg|\sum_{k=0}^n \cos(2k\theta)\Bigg|>\frac{\delta_\varepsilon}{8} \right\}}:= I_{131}^\varepsilon+ I_{132}^\varepsilon.
\end{eqnarray*}
Let us begin studying $I_{132}^\varepsilon$. By (\ref{eq9}),
\begin{eqnarray*}
I_{132}^\varepsilon &\leq& {\rm l}\hskip -0.21truecm 1_{\left\{ \frac{\varepsilon^2}{2}K>\frac{\delta_\varepsilon}{8} \right\}}= {\rm l}\hskip -0.21truecm 1_{\left\{ \frac{\varepsilon^2}{2}K>\frac{K_2}{8}\,\varepsilon^{1-q/[\log{1/\varepsilon}]} \,\left[\log{\frac1\varepsilon}\right]^{4+3/(4[\log{1/\varepsilon}])} \right\}} \\
&=& {\rm l}\hskip -0.21truecm 1_{\left\{ \varepsilon^{-1-q/[\log{1/\varepsilon}]}
\,\left[\log{\frac1\varepsilon}\right]^{4+3/(4[\log{1/\varepsilon}])}<\frac{4K}{K_2}\right\}},
\end{eqnarray*}
and clearly $I_{132}^\varepsilon=0$ for small $\varepsilon$.
On the other hand, since $M'_n:=\frac{\varepsilon^2}{4}\sum_{k=0}^n(T_k-2)\cos(2k\theta)$ is a martingale (see Section \ref{cap_realp}), by Doob's martingale inequality,
\begin{eqnarray*}
I_{131}^\varepsilon &=& P\left( \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \Bigg|\sum_{k=0}^n (T_k-2)\cos(2k\theta)\Bigg|>\frac{\delta_\varepsilon}{2\varepsilon^2} \right) \\
&\leq& \bigg(\frac{2\varepsilon^2}{\delta_\varepsilon}\bigg)^{2p} \,{\mathbf E} \left[\Bigg(\sum_{k=0}^{[4/\varepsilon^2]-1}
(T_k-2)\cos(2k\theta)\Bigg)^{2p}\right].
\end{eqnarray*}
Set $U_k:=(T_k-2)\cos(2k\theta)$. $U_k$'s are independent and centered random variables and by H\"older's inequality, we have
$$ {\mathbf E}\left[\Bigg(\sum_{k=0}^{[4/\varepsilon^2]-1}U_k\Bigg)^{2p}\right] \leq \sum_{\begin{subarray}{c} |u|=2p\\ u_i\neq1\,\forall i \end{subarray}}{2p\choose u} \big[{\mathbf E}\big(U_0^{2p}\big)\big]^{u_0/2p}\cdots\big[{\mathbf E}\big(U_{[4/\varepsilon^2]-1}^{2p}\big)\big]^{u_{[4/\varepsilon^2]-1}/2p}. $$
Let us recall that $T_k\sim\mbox{Geom}(\frac12)$. It is well-known that $T_k$ has the law of $[\widetilde{T}_k]+1$, where $\widetilde{T}_k\sim\mbox{Exp}(\log2)$; indeed, $P([\widetilde{T}_k]+1=j)=e^{-(j-1)\log2}-e^{-j\log2}=2^{-j}$ for $j\geq1$. Hence
$$
{\mathbf E}\big(T_k^{2p}\big) \leq {\mathbf E}\big[(\widetilde{T}_k+1)^{2p}\big]
\leq 2^{2p} \big[{\mathbf E}\big(\widetilde{T}_k^{2p}\big)+1\big]
\leq 2^{2p} \left[\frac{(2p)!}{(\log2)^{2p}}+1\right]
\leq 2 \,(2p)! \,(4p)!,
$$
and it follows that
\begin{eqnarray*}
&&{\mathbf E}\Big(U_k^{2p}\Big) \leq 2^{2p} \left[ {\mathbf E}\big[\big(T_k\cos(2k\theta)\big)^{2p}\big]+2^{2p}\,\big[\cos(2k\theta)\big]^{2p} \right] \\
&\leq& 2^{2p}\big[\cos(2k\theta)\big]^{2p} \left( {\mathbf E}\big(T_k^{2p}\big)+2^{2p} \right)
\leq4\cdot2^{2p}(2p)!\,(4p)! .
\end{eqnarray*}
Some of these inequalities are very crude, but they are convenient because we obtain the same bounds as in the study of $I_{12}^\varepsilon.$
Thus, with $\delta_\varepsilon$ as in (\ref{eq9}) we get that $I_{131}^\varepsilon=o(\varepsilon^q)$.
{\it 2.3. Study of the term $I_{14}^\varepsilon$.} Since $T_k$'s are independent identically distributed random variables with $T_k\sim\mbox{Geom}\big(\frac12\big)$ for each $k$, by (\ref{eq13}),
\begin{eqnarray*}
I_{14}^\varepsilon &\leq& P\left( \max_{0\leq n\leq\frac{4}{\varepsilon^2}-1} \,\frac{\varepsilon^2}{4} \,T_{n+1}>\frac{\delta_\varepsilon}{4} \right)
= 1-P\left( \max_{1\leq n\leq\frac{4}{\varepsilon^2}} T_n \leq \frac{\delta_\varepsilon}{\varepsilon^2} \right) \\
&=& 1-\left[P\Big( T_n \leq \frac{\delta_\varepsilon}{\varepsilon^2} \Big)\right]^{4/\varepsilon^2}
\!\!\!\!\!\!\!\!= 1-\Bigg( \sum_{k=1}^{[\delta_\varepsilon/\varepsilon^2]} \frac{1}{2^k} \Bigg)^{4/\varepsilon^2}
\!\!\!\!\!\!\!\!= 1-\Bigg( 1-\bigg(\frac12\bigg)^{[\delta_\varepsilon/\varepsilon^2]} \Bigg)^{4/\varepsilon^2} \\
&=& 1-\Big( 1-e^{-\log2\,[\delta_\varepsilon/\varepsilon^2]} \Big)^{4/\varepsilon^2}
\leq 1-\Big( 1-e^{-\frac{a_\varepsilon}{2\varepsilon}} \Big)^{4/\varepsilon^2 + 1}.
\end{eqnarray*}
Notice that in the last inequality we have used that $\frac{a_\varepsilon}{2 \varepsilon} \leq \log 2 [\frac{\delta_\varepsilon}{\varepsilon^2}]$.
The bound for $I_{14}^\varepsilon$ is the same as the one we obtained in the study of $I_{4}^\varepsilon$. So, by the same arguments, we can conclude that $I_{14}^\varepsilon=o(\varepsilon^q)$.
{\it 2.4. Study of the term $I_{11}^\varepsilon.$} As in Theorem 1 in \cite{art G-G}, for small $\varepsilon$ and using Doob's martingale inequality for the Brownian motion we get
\begin{eqnarray*}
I_{11}^\varepsilon &\leq& \sum_{m=0}^{[4/\varepsilon^2]} P\left(\max_{|s|\leq\delta_\varepsilon} \Big|x\Big(\frac{m\varepsilon^2}{4}+s\Big)-x\Big(\frac{m\varepsilon^2}{4}\Big)\Big|>\frac{a_\varepsilon}{4}\right)\nonumber \\
&=& \Big(\frac{4}{\varepsilon^2}+1\Big) P\left(\max_{|s|\leq\delta_\varepsilon} \big|x(s)\big|>\frac{a_\varepsilon}{4}\right)
\leq \frac{32}{\varepsilon^2} P\left(\max_{0\leq s\leq\delta_\varepsilon} x(s)>\frac{a_\varepsilon}{4}\right) \nonumber\\
&\leq& \frac{32}{\varepsilon^2} \exp\left(-\Big(\frac{a_\varepsilon}{4}\Big)^2\frac{1}{2\delta_\varepsilon}\right)
= \frac{32}{\varepsilon^2} \,\exp\left(-\frac{(a_\varepsilon)^2}{32\delta_\varepsilon}\right).
\end{eqnarray*}
Condition
$(32/\varepsilon^2)\exp\big(-(a_\varepsilon)^2/(32\delta_\varepsilon)\big) \leq 32\,\varepsilon^{2q}$ yields that
$$
a_\varepsilon \ge K_3 \,\varepsilon^{1/2} \,\varepsilon^{-q/(2[\log{1/\varepsilon}])} \,\bigg[\log{\frac1\varepsilon}\bigg]^{2+3/(8[\log{1/\varepsilon}])}
\,\bigg(\log{\frac1\varepsilon}\bigg)^{1/2},
$$
where $K_3$ is a constant depending on $q$. Notice that $$ a_\varepsilon = \alpha \,\varepsilon^{1/2} \,\bigg(\log{\frac1\varepsilon}\bigg)^{5/2}, $$
for small $\varepsilon$, where $\alpha$ is a constant that depends on $q$, satisfies such a condition. Thus,
with $\delta_\varepsilon$ as in (\ref{eq9}), it follows that $ I_{11}^\varepsilon=o(\varepsilon^q)$.
{\it 3. Study of the term $I_{2}^\varepsilon$.}
For our $\delta_\varepsilon>0$, we have
\begin{eqnarray*}
I_{2}^\varepsilon
&\leq& P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \, \max_{|s|\leq\delta_\varepsilon} \Big|x\Big(\frac{m\varepsilon^2}{4}+s\Big)-x\Big(\frac{m\varepsilon^2}{4}\Big)\Big|>\frac{a_\varepsilon}{4} \right) \\
&& + P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \Big|\Gamma_m^{\varepsilon,\theta}-\frac{m\varepsilon^2}{4}\Big|>\delta_\varepsilon \right)
:= I_{21}^\varepsilon + I_{22}^\varepsilon.
\end{eqnarray*}
On one hand, observe that $ I_{21}^\varepsilon = I_{11}^\varepsilon$, thus $ I_{21}^\varepsilon=o(\varepsilon^q)$.
On the other hand, it is easy to check that $\sum_{j=0}^m(\gamma_j^{\varepsilon,\theta}-\frac{\varepsilon^2}{4})$ is a martingale. So
applying Doob's martingale inequality
\begin{eqnarray*}
I_{22}^\varepsilon
&=& P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \Bigg|\sum_{j=0}^m\bigg(\frac{4}{\varepsilon^2}\gamma_j^{\varepsilon,\theta}-1\bigg)\Bigg|>\frac{4\delta_\varepsilon}{\varepsilon^2} \right) \\
&\leq& \left(\frac{\varepsilon^2}{4\delta_\varepsilon}\right)^{2p} \,
{\mathbf E}\left[\left(\sum_{m=1}^{4/\varepsilon^2}\bigg(\frac{4}{\varepsilon^2}\gamma_m^{\varepsilon,\theta}-1\bigg)\right)^{2p}\right].
\end{eqnarray*}
Set $V_m:=\frac{4}{\varepsilon^2}\gamma_m^{\varepsilon,\theta}-1$. Notice that $V_m$'s are independent and centered random variables with
\begin{eqnarray*}
{\mathbf E}\Big(V_m^{2p}\Big) &\leq& 2^{2p}\left( \bigg(\frac{4}{\varepsilon^2}\bigg)^{2p}{\mathbf E}\big[(\gamma_m^{\varepsilon,\theta})^{2p}\big]+1 \right) \\
&\leq& 2^{2p}\big((2p)!+1\big)
\leq 2^{2p+1}(2p)!
\leq 4\cdot2^{2p}(2p)!\,(4p)!.
\end{eqnarray*}
Then, using an inequality of the type of (\ref{eq5}) and following the same arguments as in the study
of $I_{12}^\varepsilon$, we get that $ I_{22}^\varepsilon=o(\varepsilon^q)$.
{\it 4. Study of the term $I_{3}^\varepsilon$.}
For $\delta_\varepsilon>0$ defined in (\ref{eq9}) and $a_\varepsilon$ of the type $\alpha\,\varepsilon^\frac12\left(\log\frac{1}{\varepsilon}\right)^\frac52$
\begin{eqnarray*}
I_{3}^\varepsilon&\leq& P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \,\max_{|r|\leq\delta_\varepsilon} \big|x(\Gamma_m^{\varepsilon,\theta})-x(\Gamma_m^{\varepsilon,\theta}+r)\big|>\frac{a_\varepsilon}{4} \right) \\
&& + P\left( \max_{1\leq m\leq\frac{4}{\varepsilon^2}+1} \gamma_m^{\varepsilon,\theta}>\delta_\varepsilon \right)
:= I_{31}^\varepsilon + I_{32}^\varepsilon.
\end{eqnarray*}
On one hand, $ I_{31}^\varepsilon=o(\varepsilon^q)$ is proved in the same way as $ I_{11}^\varepsilon$.
On the other hand,
\begin{eqnarray*}
I_{32}^\varepsilon&=& 1-\left[ P\big(\gamma_m^{\varepsilon,\theta}\leq\delta_\varepsilon\big) \right]^{(4/\varepsilon^2)+1}
= 1-\Big( 1-e^{-4\delta_\varepsilon/\varepsilon^2}
\Big)^{(4/\varepsilon^2)+1}.
\end{eqnarray*}
Thus $ I_{32}^\varepsilon=o(\varepsilon^q)$, similarly as we have proved for $ I_{4}^\varepsilon$.
We have now checked that all the terms in our decomposition are $o(\varepsilon^q)$. More precisely,
we have proved that for $a_\varepsilon=\alpha\varepsilon^{\frac12}\big(\log\frac{1}{\varepsilon}\big)^{\frac52}$ and
$q>0$,
$$ E := P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \,\max_{0\leq r\leq\gamma_{m+1}^{\varepsilon,\theta}} \big|x_\varepsilon^\theta(\Gamma_m^{\varepsilon,\theta}+r)-x(\Gamma_m^{\varepsilon,\theta}+r)\big| > a_\varepsilon \right) = o(\varepsilon^q). $$
Let us fix $0<v<1$. Then
\begin{eqnarray*}
&& P\left( \max_{0\leq t\leq 1-v} \,\big|x_\varepsilon^\theta(t)-x(t)\big| > a_\varepsilon \right) \\
& \leq& P\left( \max_{0\leq t\leq 1-v} \,\big|x_\varepsilon^\theta(t)-x(t)\big|>a_\varepsilon, \,\Gamma_{4/\varepsilon^2}^{\varepsilon,\theta}\geq 1-v \right) + P\left( \Gamma_{4/\varepsilon^2}^{\varepsilon,\theta} < 1-v \right) \\
&:=& E_1 + E_2.
\end{eqnarray*}
On one hand, since $E_1\leq E$, we get that $E_1=o(\varepsilon^q)$. On the other hand,
$$ E_2 \leq P\left( \max_{0\leq m\leq\frac{4}{\varepsilon^2}} \,\bigg|\Gamma_m^{\varepsilon,\theta}-\frac{m\varepsilon^2}{4}\bigg|>v \right) \leq I_{22}^\varepsilon, $$
for small $\varepsilon$. Thus, $E_2=o(\varepsilon^q)$.
We have proved the rate of convergence results in the interval $[0,1-v]$, but the argument can be extended to any compact interval. So, the proof of Theorem \ref{thm_rate} is complete.
$\square$
\section{Appendix}
We begin this appendix with some technical results that will be useful in our computations.
\begin{prop}\label{2poisson}
Let $\{M_t,t\geq0\}$ be a Poisson process of parameter 2. Let $\{N_t,t\geq0\}$ and $\{N'_t,t\geq0\}$ be two counting processes
such that, at each jump of $M$, each of them jumps or does not jump with probability $\frac12$, independently of the jumps of the other process and of its own past. Then $N$ and $N'$ are Poisson processes of parameter 1 with independent increments on disjoint intervals.
\end{prop}
\begin{proof} Let us check first that $N$ is a Poisson process of parameter 1. Clearly
$N_0=0$ and for any $0\leq s<t$ and for $k\in{\mathbb N}\cup\{0\}$,
\begin{eqnarray*}
P(N_t-N_s=k) &=& \sum_{n=k}^\infty P(N_t-N_s=k\,|\,M_t-M_s=n)P(M_t-M_s=n) \\
&=& \sum_{n=k}^\infty {n\choose k} \frac{1}{2^k} \frac{1}{2^{n-k}} \frac{[2(t-s)]^n}{n!} e^{-2(t-s)} \\
&=& e^{-2(t-s)}\,\frac{(t-s)^k}{k!}\sum_{n=k}^\infty \frac{(t-s)^{n-k}}{(n-k)!}
= e^{-(t-s)}\,\frac{(t-s)^k}{k!}.
\end{eqnarray*}
Finally, for any $n\geq0$ and for any $0\leq t_1<\cdots<t_{n+1}$, it holds that the increments $N_{t_2}-N_{t_1},\dots,N_{t_{n+1}}-N_{t_n}$ are independent random
variables. Indeed, consider $k_i\in{\mathbb N}\cup\{0\}$, using the independence of the increments of the Poisson process $M$ we
get that
\begin{eqnarray*}
&&P(N_{t_2}-N_{t_1}=k_1,\dots,N_{t_{n+1}}-N_{t_n}=k_n \,|\, M_{t_2}-M_{t_1}=m_1, \\
&& \qquad\qquad \dots,M_{t_{n+1}}-M_{t_n}=m_n) \\
&&\quad =\prod_{i=1}^n \,P(N_{t_{i+1}}-N_{t_i}=k_i \,|\,
M_{t_{i+1}}-M_{t_i}=m_i).
\end{eqnarray*}
Then
\begin{eqnarray*}
&&\!\!\!\!\! P(N_{t_2}-N_{t_1}=k_1,\dots,N_{t_{n+1}}-N_{t_n}=k_n) \\
&=&\!\!\!\!\!\!\!\!\!\!\sum_{\begin{subarray}{c} m_i=0 \\ i=1,\dots,n\end{subarray}}^\infty \!\!\!\!\! P(N_{t_2}-N_{t_1}\!\!=k_1,\dots,N_{t_{n+1}}-N_{t_n}\!\!=k_n \,|\, M_{t_2}-M_{t_1}\!\!=m_1, \\
&& \qquad\qquad\qquad \dots,M_{t_{n+1}}-M_{t_n}\!\!=m_n) \\
&&\!\!\!\!\!\times \,P(M_{t_2}-M_{t_1}=m_1,\dots,M_{t_{n+1}}-M_{t_n}=m_n) \\
&=&\!\!\!\!\! P(N_{t_2}-N_{t_1}=k_1) \cdots
P(N_{t_{n+1}}-N_{t_n}=k_n).
\end{eqnarray*}
Using similar arguments we can prove that for any $k,j\in{\mathbb N}\cup\{0\}$, and for $0\leq s<t<u<v$,
$$ P(N_t-N_s=k,N'_v-N'_u=j) =P(N_t-N_s=k)P(N'_v-N'_u=j).$$
Thus $N$ and $N'$ have independent increments on disjoint intervals.
\end{proof}
\begin{lema}\label{serie}
For any $\delta >0$
$$
\sum_{n=1}^\infty \left[ 1-\left( 1-\frac{1}{2^{[\delta n]}} \right)^n \right] <\infty.
$$
\end{lema}
\begin{proof}
We have
\begin{eqnarray*}
&& \sum_{n=1}^\infty \left[ 1-\left( 1-\frac{1}{2^{[\delta n]}} \right)^n \right]
\leq \sum_{n=1}^\infty \left[ 1-\left( 1-\frac{1}{2^{\delta n-1}} \right)^n \right]\\
&& \quad = \sum_{n=1}^\infty \frac{2^{(\delta n-1)n}-(2^{\delta n-1}-1)^n}{2^{(\delta n-1)n}} \\
&& \quad = \sum_{n=1}^\infty \frac{1}{2^{(\delta n-1)n}} \sum_{k=1}^n {n\choose k}(-1)^{k+1}\,2^{(\delta n-1)(n-k)}\\
&& \quad
= \sum_{n=1}^\infty \sum_{k=1}^n {n\choose k}(-1)^{k+1}\,2^{-k(\delta n-1)}\\
&& \quad \leq \sum_{n=1}^\infty \sum_{k=1}^n {n\choose k}2^{-k(\delta n-1)}
= \sum_{k=1}^\infty 2^{-k(\delta k-1)} \sum_{n=k}^\infty {n\choose k}2^{-\delta k(n-k)}.
\end{eqnarray*}
Using that $\displaystyle\sum_{n=k}^\infty{n\choose k}x^{n-k}=\frac{1}{(1-x)^{k+1}}$, we can bound the above expression by
$$ \sum_{k=1}^\infty \left( \frac{2^{-(\delta k-1)}}{1-2^{-\delta k}} \right)^k \cdot\frac{1}{1-2^{-\delta k}}
\leq \frac{1}{1-2^{-\delta}}\cdot\sum_{k=1}^\infty \left( \frac{2^{-(\delta k-1)}}{1-2^{-\delta k}}
\right)^k.$$
Finally, we use d'Alembert's ratio test to check the convergence of this series
$$ \lim_{k\rightarrow\infty} \frac{\left(\frac{2^{-\delta(k+1)+1}}{1-2^{-\delta(k+1)}}\right)^{k+1}}{\left(\frac{2^{-\delta k+1}}{1-2^{-\delta k}}\right)^k} = \lim_{k\rightarrow\infty} \,\frac{2^{1-\delta}\cdot2^{-\delta k}}{1-2^{-\delta}\cdot2^{-\delta k}} \left( 2^{-\delta}\cdot\frac{1-2^{-\delta k}}{1-2^{-\delta(k+1)}} \right)^k = 0 . $$
\end{proof}
Finally, for completeness we also recall a Lemma of \cite{art G-G} (page 298).
\begin{lema}\label{lemF}
Let $F(k,n)=$ number of ways of putting $k$ balls into $n$ boxes so that no box contains exactly one ball, i.e.,
$$F(k,n)=\sum_{\begin{subarray}{c} \alpha_1+\ldots+\alpha_n=k\\ \alpha_i\neq1\,\forall i \end{subarray}}
\frac{k !}{ \alpha_1! \ldots\alpha_n!}.$$
Then $F(k,n) \le 2^k k! n^{k/2}$ for $k \le 2 +\log4/\log[1+2(1/n-1/n^2)^\frac12].$
\end{lema}
\end{document} |
\begin{document}
\title{Forward-Backward-Half Forward
Algorithm
for Solving Monotone Inclusions}
\slugger{siopt}{xxxx}{xx}{x}{x--x}
\begin{abstract}
Tseng's algorithm finds a zero of the sum of a maximally monotone operator
and a monotone continuous operator by evaluating the latter
twice per iteration.
In this paper, we
modify Tseng's algorithm for finding a zero of the sum of three operators,
where we add a cocoercive operator to the inclusion. Since the sum
of a cocoercive and a monotone-Lipschitz operator is monotone and Lipschitz,
we could use Tseng's method for solving this problem, but this would require evaluating both operators
twice per iteration,
without taking advantage of the
cocoercivity property of one of them. Instead, in our approach, although the
{continuous monotone} operator must still be evaluated twice,
we exploit the cocoercivity of one operator by evaluating it only once per
iteration. Moreover, when the cocoercive or the {continuous-monotone} operator
is zero,
the method reduces to Tseng's or to the forward-backward splitting, respectively, unifying in this way
both algorithms.
In addition, we provide a {preconditioned} version of the proposed method
including
non self-adjoint linear operators in the computation of resolvents and the single-valued
operators involved. This approach allows us to {also} extend previous variable
metric
versions of Tseng's and forward-backward methods and simplify their conditions
on the underlying metrics. We also exploit the case when non self-adjoint
linear operators are triangular by blocks in the primal-dual product space for solving
primal-dual composite monotone inclusions, obtaining Gauss-Seidel type algorithms
which generalize several primal-dual methods available in the literature. Finally we
explore {applications to the obstacle problem, Empirical Risk Minimization,
distributed optimization and nonlinear programming and we illustrate the performance of
the method via some numerical simulations.}
\end{abstract}
\begin{keywords}
Convex optimization, forward-backward splitting, monotone operator theory, sequential
algorithms, Tseng's splitting.
\end{keywords}
\begin{AMS}
47H05, 65K05, 65K15, 90C25
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{Luis M. Brice\~{n}o-Arias \and Damek Davis}{Forward-Backward-Half Forward
Algorithm for Solving Monotone Inclusions}
\section{Introduction}
This paper is devoted to the numerical resolution of the following problem.
\begin{problem}
\label{prob:main}
Let $X$ be a nonempty closed convex subset of a real Hilbert space ${\mathcal{H}}$, let $A
: {\mathcal{H}} \rightarrow 2^{\mathcal{H}}$ {and $B_2 : {\mathcal{H}}
\rightarrow 2^{{\mathcal{H}}}$ be maximally monotone operators, with $B_2$ single valued in ${\mathrm{dom}\,}
B_2\supset{\mathrm{dom}\,} A\cup X$, and let $B_1 : {\mathcal{H}}
\rightarrow {\mathcal{H}}$ be $\beta$-cocoercive\footnote{An operator $C : {\mathcal{H}} \rightarrow
{\mathcal{H}}$ is $\beta$-cocoercive for some $\beta > 0$ provided that $\dotp{Cx - Cy, x-y}
\geq \beta\|Cx - Cy\|^2$.}, for
some $\beta>0$. Moreover
assume that $B_2$ is continuous on ${\mathrm{dom}\,} A\cup
X$
and that $A+B_2$ is
maximally monotone.} The
problem is to
\begin{equation}
\label{e:main}
\text{find }\quad x\in X\quad \text{such that }\quad 0\in Ax+B_1x+B_2x,
\end{equation}
under the assumption that the set of solutions to \eqref{e:main} is nonempty.
\end{problem}
The wide variety of applications of Problem~\ref{prob:main}, involving optimization
problems, variational inequalities,
partial differential equations, image processing, saddle point problems,
and game theory, among others, can be explored in
{\cite{bauschke2017convex,combetteswajs2005}} and the
references therein.
As an important application, consider the case of composite optimization problems of
the form
\begin{align}\label{eq:primal_before_PD}
\Min_{\mathrm{x} \in \mathrm{H}}\, \mathrm{f}(\mathrm{x}) +
\mathrm{g}(\mathrm{L}\mathrm{x}) + \mathrm{h}(\mathrm{x}),
\end{align}
where $\mathrm{H}$ and $\mathrm{G}$ are real Hilbert spaces, $\mathrm{L} :
\mathrm{H} \rightarrow \mathrm{G}$ is linear and bounded,
$\mathrm{f} :\mathrm{H} \rightarrow (-\infty, \infty]$ and $\mathrm{g} : \mathrm{G}
\rightarrow (-\infty, \infty]$ are
lower semicontinuous, convex, and proper, and $\mathrm{h} : \mathrm{H} \rightarrow
\mathbb{R}$ is
convex
differentiable with $\beta^{-1}$-Lipschitz gradient. Since $\mathrm{g}$ may be nonsmooth,
primal algorithms in this context
need to evaluate $\mathbf{prox}_{\mathrm{g} \circ \mathrm{L}}$ or to invert $\mathrm{L}$, which can be costly
numerically. In order to overcome this difficulty, fully split primal-dual algorithms are
proposed, e.g., { in
\cite{briceno2011monotone+,condat2013primal,vu2013splitting}}, in
which only
$\mathbf{prox}_{\mathrm{g}}$, $\mathrm{L}$, and $\mathrm{L}^\ast$ are computed. These algorithms follow
from the first order optimality conditions of \eqref{eq:primal_before_PD}, which,
under qualification conditions,
can be written as Problem~\ref{prob:main} with
\begin{align}
\label{e:primdualops}
X=\ensuremath{{\mathcal H}}=\mathrm{H}\times \mathrm{G},&&A = \partial \mathrm{f} \times
\partial \mathrm{g}^\ast, &&
B_1 = \nabla \mathrm{h} \times \{0\}, && B_2 =
\begin{bmatrix} 0 & \mathrm{L}^\ast \\ -\mathrm{L} & 0 \end{bmatrix},
\end{align}
{where we point out that $B_2$ is monotone and Lipschitz but {\em not
cocoercive},
because it is skew linear and, for every $x\in\ensuremath{{\mathcal H}}$, $\scal{x}{B_2x}=0$.}
We have that, for any solution $x=(\mathrm{x}_1^\ast, \mathrm{x}_2^\ast)\in
\zer(A+ B_1 + B_2)$, $\mathrm{x}_1^\ast$
solves~\eqref{eq:primal_before_PD}, where we denote $\zer T=\menge{x\in\ensuremath{{\mathcal H}}}{0\in Tx}$
for any set valued operator $T\colon\ensuremath{{\mathcal H}}\to 2^{\ensuremath{{\mathcal H}}}$.
A method proposed in \cite{vu2013splitting} solves \eqref{eq:primal_before_PD} in a more
general context
by using forward-backward splitting (FB) in the product space with {the} metric
$\scal{\cdot}{\cdot}_V=\scal{V\cdot}{\cdot}$ for the operators
$V^{-1}(A+B_2)$ and $V^{-1}B_1$ with a specific choice of self-adjoint strongly
monotone linear operator $V$. We recall that the forward-backward splitting
\cite{combettes2004solving,bruck1975,lions1979splitting,goldstein1964}
finds a zero of the sum of a maximally monotone and a cocoercive operator, which
is a particular case of Problem~\ref{prob:main} when $X=\ensuremath{{\mathcal H}}$ and $B_2=0$. This
method provides a sequence obtained from the fixed point iteration of the
nonexpansive operator (for some
$\gamma\in]0,2\beta[$)
$$
T_{\mathrm{FB}} := J_{\gamma A}\circ({\ensuremath{\operatorname{Id}}\,} - \gamma B_1),
$$
which converges weakly to a zero of $A+B_1$. Here ${\ensuremath{\operatorname{Id}}\,}$ stands for the
identity map in
$\ensuremath{{\mathcal H}}$ and{, for every set valued operator $M\colon\ensuremath{{\mathcal H}}\to 2^{\ensuremath{{\mathcal H}}}$,
$J_{M}=({\ensuremath{\operatorname{Id}}\,}+M)^{-1}\colon\ensuremath{{\mathcal H}}\to 2^{\ensuremath{{\mathcal H}}}$ is the
resolvent of $M$, which is single valued and
nonexpansive when $M$ is maximally monotone.}
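For intuition only, the following minimal sketch (ours, not taken from the cited references) instantiates the fixed point iteration $x_{n+1}=T_{\mathrm{FB}}x_n$ in the classical proximal-gradient case $A=\partial \mathrm{f}$, $B_1=\nabla\mathrm{h}$, with the illustrative choices $\mathrm{f}=\lambda\|\cdot\|_1$ (so that $J_{\gamma A}$ is soft-thresholding) and $\mathrm{h}=\frac12\|M\cdot-b\|^2$ (so that $B_1$ is cocoercive with $\beta=\|M\|^{-2}$); the data $M$, $b$ and the parameter $\lambda$ are arbitrary.
\begin{verbatim}
import numpy as np

# Forward-backward iteration x_{k+1} = J_{gamma A}(x_k - gamma*B1(x_k))
# for A = subdifferential of lam*||.||_1 and B1 = gradient of 0.5*||M x - b||^2.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 60))
b = rng.standard_normal(40)
lam = 0.1

def B1(x):                    # gradient of h, cocoercive with beta = 1/||M||^2
    return M.T @ (M @ x - b)

def J_gamma_A(x, gamma):      # resolvent of gamma*A, i.e. soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)

beta = 1.0 / np.linalg.norm(M, 2) ** 2
gamma = 1.5 * beta            # any step size in ]0, 2*beta[

x = np.zeros(60)
for _ in range(500):
    x = J_gamma_A(x - gamma * B1(x), gamma)
\end{verbatim}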
In the context
of \eqref{e:primdualops}, the operators $V^{-1}(A+B_2)$ and $V^{-1}B_1$ are
maximally monotone and $\beta$-cocoercive in the metric
$\scal{\cdot}{\cdot}_V=\scal{V\cdot}{\cdot}$,
respectively, which ensures the convergence of the forward-backward splitting. The
choice of $V$
permits the explicit computation of $J_{V^{-1}(A+B_2)}$, which leads to a sequential
method that generalizes the algorithm proposed in \cite{chambolle2011first}. A variant
for solving
\eqref{eq:primal_before_PD} in the case when $h=0$ is proposed in
\cite{he2012convergence}. However, previous methods need the skew linear
structure of $B_2$ in order to obtain an implementable method.
{ An example in which a non-linear continuous operator $B_2$ arises naturally is
the convex constrained optimization problem
\begin{equation}
\min_{\substack{x\in C\\g(x)\le 0}}f(x),
\end{equation}
where $f\colon\ensuremath{{\mathcal H}}\to\mathbb{R}$ is convex differentiable with $\beta^{-1}$-Lipschitz-gradient,
$C\subset\ensuremath{{\mathcal H}}$ is nonempty, closed and convex, and $g\colon\ensuremath{{\mathcal H}}\to\mathbb{R}$ is a
$\mathcal{C}^1$ and convex function. The Lagrangian function in this case takes the form
\begin{equation}
L(x,\lambda)=\iota_C(x)+f(x)+\lambda g(x)-\iota_{\mathbb{R}_+}(\lambda),
\end{equation}
whose saddle points, under standard qualification conditions, can be found by solving the monotone
inclusion (see \cite{rockafellar1970saddle})
\begin{equation}
0\in A(x,\lambda)+B_1(x,\lambda)+B_2(x,\lambda),
\end{equation}
where $A\colon (x,\lambda)\mapsto N_Cx\times N_{\mathbb{R}_+}\lambda$ is maximally
monotone, $B_1\colon (x,\lambda)\mapsto (\nabla f(x),0)$ is cocoercive, and
$B_2\colon (x,\lambda)\mapsto (\lambda\nabla g(x),-g(x))$ is monotone and continuous
\cite{rockafellar1970saddle}. Of course, the problem can be easily extended to
consider finitely many
inequality and equality constraints and allow for more general lower semicontinuous
convex functions than $\iota_C$, but we prefer the simplified version for the ease of
presentation. Note that the non-linearity of $B_2$ prevents the use of the previous
methods in this context.
}
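As a purely illustrative sketch (the concrete choices below are ours and not part of the discussion above), the three operators of this saddle-point formulation can be coded directly; here $f(x)=\frac12\|x-a\|^2$, $g(x)=\|x\|^2-1$ and $C=[-2,2]^n$, and $J_{\gamma A}$ reduces to the projections onto $C$ and onto $\mathbb{R}_+$.
\begin{verbatim}
import numpy as np

# Toy instance of the Lagrangian saddle-point operators for
# min f(x) s.t. x in C, g(x) <= 0, with f(x) = 0.5*||x-a||^2,
# g(x) = ||x||^2 - 1 and C = [-2,2]^n (illustrative choices only).
a = np.array([1.5, -0.5, 2.0])

def g(x):
    return float(x @ x - 1.0)

def J_gamma_A(x, lam, gamma):
    # resolvent of A = N_C x N_{R+}: projections onto C and onto R_+
    return np.clip(x, -2.0, 2.0), max(lam, 0.0)

def B1(x, lam):
    # cocoercive part (grad f(x), 0)
    return x - a, 0.0

def B2(x, lam):
    # monotone, continuous, non-linear part (lam*grad g(x), -g(x))
    return lam * 2.0 * x, -g(x)
\end{verbatim}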
In the {case when $B_2$ is $L$-Lipschitz for some $L>0$}, since $B:=B_1+B_2$
is monotone
and $(\beta^{-1}+L)$--Lipschitz
continuous, the forward-backward-forward splitting (FBF) proposed by Tseng in
\cite{tseng2000modified} solves Problem~\ref{prob:main}. This method
generates a sequence from the fixed point iteration of the operator
\begin{align*}
T_{\mathrm{FBF}} := P_X\circ \left[({\ensuremath{\operatorname{Id}}\,} - \gamma B) \circ J_{\gamma A} \circ
({\ensuremath{\operatorname{Id}}\,}-
\gamma B) + \gamma B\right],
\end{align*}
which converges weakly to a zero of $A+B$, provided that
$\gamma\in]0,(\beta^{-1}+L)^{-1}[$. However, this approach has two drawbacks:
\begin{enumerate}
\item FBF needs to evaluate $B=B_1+B_2$ twice per iteration, without taking
advantage of the cocoercivity property of $B_1$. In the particular case when $B_2=0$,
this method computes $B_1$ twice at each iteration, while the forward-backward
splitting needs only one computation of $B_1$ for finding a zero of $A+B_1$. Even if
we cannot ensure that FB is more efficient than FBF in this context, each iteration
of FB is cheaper than an iteration of FBF, especially when the cost of evaluating
$B_1$ is high. This is usually the case, for instance, when $A$, $B_1$, and
$B_2$
are as in \eqref{e:primdualops} and we aim at solving \eqref{eq:primal_before_PD}
representing a
variational formulation of some partial differential equation (PDE). In this case,
the computation of $\nabla h$ frequently amounts to solving a PDE, which
is computationally costly.
\item The step size $\gamma$ in FBF is bounded above by $(\beta^{-1}+L)^{-1}$;
hence, when the influence of $B_2$ in the problem is low ($B_2\approx 0$), the
step size cannot go much beyond $\beta$. In the case
$B_2=0$, the step size $\gamma$ in FB is only bounded by $2\beta$. This can affect
the performance of the method, since very small stepsizes can lead to slow
algorithms.
\end{enumerate}
{In the general case when $B_2$ is monotone and continuous, we can also
apply a version of the method in \cite{tseng2000modified} which uses line search for
choosing the step-size at each iteration. However, this approach shares the disadvantage of
computing $B_1$ twice per iteration and, moreover, during the line search $B_1$ has to be
computed several times until a sufficiently small step-size is found, which can be
computationally costly.}
In this paper we propose a splitting algorithm for solving Problem~\ref{prob:main}
which overcomes previous drawbacks. The method is derived from the
fixed point iteration of the operator $T_{\gamma}: {\mathcal{H}}
\rightarrow {\mathcal{H}}$, defined by
\begin{align}\label{eq:ouroperator}
T_\gamma := P_X\circ \left[({\ensuremath{\operatorname{Id}}\,}- \gamma B_2) \circ J_{\gamma A}
\circ ({\ensuremath{\operatorname{Id}}\,} -
\gamma (B_1+B_2)) + \gamma B_2\right],
\end{align}
for some $\gamma\in]0,\chi(\beta,L)[$, where $\chi(\beta,L)\leq\min\{2\beta,L^{-1}\}$ in the
case when $B_2$ is $L$-Lipschitz.
The algorithm thus obtained evaluates $B_1$ only once per iteration and it reduces to
FB or FBF when $X=\ensuremath{{\mathcal H}}$ and $B_2=0$, or $B_1=0$, respectively, and in these cases
we have $\chi(\beta,0)=2\beta$ and $\lim_{\beta\to+\infty}\chi(\beta,L)=L^{-1}$.
{Moreover,
in the case when $B_2$ is merely continuous,
the step-size is found
by a line search in which $B_1$ is only computed once at each backtracking step.}
These results
can be found in Theorem~\ref{t:1} in Section~\ref{sec:2}.
Moreover,
a generalization of FB for finding a point in $X\cap\zer(A+B_1)$ can be derived
when $B_2=0$. This can be useful when the solution is known to belong to a closed
convex set $X$, which is the case, for example, in convex constrained minimization.
The additional projection onto $X$ can improve the performance of the method (see,
e.g., \cite{BAKS16}).
Another contribution of this paper is to include in our method non self-adjoint
linear operators in the computation of resolvents and other operators involved. More
precisely, in Theorem~\ref{thm:asymmetric_metric} in Section~\ref{sec:asymm}, for
an
invertible linear operator
$P$ (not necessarily self-adjoint) we justify
the computation of $P^{-1}(B_1+B_2)$ and
$J_{P^{-1}A}$.
In the case when $P$ is self-adjoint and strongly monotone,
the properties that $A$, $B_1$ and $B_2$ have with the standard metric
are preserved by $P^{-1}A$, $P^{-1}B_1$, and $P^{-1}B_2$ in the metric
$\scal{\cdot}{\cdot}_P=\scal{P\cdot}{\cdot}$. In this context, variable metric versions
of FB and FBF have been developed in
\cite{combettes2012variable,vu2013variableFBF}. Of course, a similar generalization
can be done for our algorithm, but we go beyond this self-adjoint case and we
implement $P^{-1}(B_1+B_2)$ and $J_{P^{-1}A}$, where the linear
operator $P$ is strongly monotone but not necessarily self-adjoint. The key for this
implementation is the decomposition $P=S+U$, where $U$ is self-adjoint and strongly
monotone and $S$ is skew linear. Our implementation follows after coupling $S$ with
the monotone and Lipschitz component $B_2$ and using some resolvent identities
valid for
the metric $\scal{\cdot}{\cdot}_U$. One important implication of this approach is
the justification of the convergence of some Gauss-Seidel type methods in product
spaces, which are deduced from our setting for block triangular linear operators $P$.
Additionally, we
provide a modification
of the previous method{ in Theorem~\ref{cor:asymmetricnoinversion}, in which
linear
operators
$P$ may vary among iterations. }In the case when, for every iteration $k\in\ensuremath{\mathbb N}$,
$P_k$ is self-adjoint, this feature has also been implemented for FB and FBF in
\cite{combettes2012variable,vu2013variableFBF}
but with a strong dependence between $P_{k+1}$ and $P_k$ coming from
the variable metric approach. Instead, in the general case, we modify our method for
allowing variable metrics and ensuring convergence under weaker
conditions. For instance, in the case when $B_2=0$ and $P_k$ is self-adjoint and
$\rho_k$-strongly monotone for some $\rho_k>0$,
our condition on our FB variable metric version reduces to
$(2\beta-\varepsilon)\rho_k>1$ for every
$k\in\ensuremath{\mathbb N}$. In the case when $P_k={\ensuremath{\operatorname{Id}}\,}/\gamma_k$ this
condition reduces to $\gamma_k<2\beta-\varepsilon$ which is a standard assumption
for FB with variable stepsizes. Hence, our condition on operators $(P_k)_{k\in\ensuremath{\mathbb N}}$
can be interpreted as ``step-size'' bounds.
Moreover, in Section~\ref{sec:5} we use our methods in composite primal-dual
inclusions, obtaining
generalizations and new versions of several primal-dual methods
\cite{chambolle2011first,vu2013variableFBF,patrinos2016asym,combettes2012primal}.
We provide comparisons among methods and new bounds on stepsizes which improve
several bounds in the literature. Finally, for illustrating the flexibility of the proposed
methods,
in Section~\ref{sec:6} we apply them to the obstacle problem in PDE's, to empirical
risk minimization, to distributed operator splitting schemes and to nonlinear constrained
optimization.
In the first example, we take advantage of dropping the extra forward step on $B_1$,
which amounts to solving one PDE less per iteration. In the
second example, we use non self-adjoint linear operators in order to obtain a
Gauss-Seidel
structure, which can be preferable to parallel architectures {in high dimensions.
The third example illustrates how the variable metrics allowed by our proposed algorithm
can be used to develop distributed operator splitting schemes with time-varying
communication networks. The last example illustrates our backtracking line search
procedure for nonlinear constrained optimization wherein the underlying operator
$B_2$ is nonlinear and non Lipschitz. Finally, some numerical examples show the
performance of the proposed algorithms.
}
\section{Convergence theory}
\label{sec:2}
This section is devoted to the study of the conditions ensuring the convergence
of {the iterates generated recursively by} $z^{k+1}=T_{\gamma_k}z^k$ for any
starting point
$z^0\in\ensuremath{{\mathcal H}}$,
where, for every $\gamma>0$, $T_{\gamma}$ is defined in \eqref{eq:ouroperator}.
We first prove that $T_{\gamma}$ is quasi-nonexpansive for a suitable
choice of $\gamma$ and satisfies
$\Fix(T_{\gamma}) = \zer(A+B_1+B_2)\cap X$. Using these results we prove the weak
convergence of iterates $\{z^k\}_{k\in\ensuremath{\mathbb N}}$ to a solution to Problem~\ref{prob:main}.
\begin{proposition}[Properties of
$T_{\gamma}$]\label{prop:Tproperties}
\label{p:prop1}Let $\gamma>0$, assume that the hypotheses of Problem~\ref{prob:main}
hold, {and set $S_{\gamma}:=(\ensuremath{\operatorname{Id}}\, - \gamma B_2) \circ J_{\gamma A} \circ (\ensuremath{\operatorname{Id}}\,-
\gamma (B_1+B_2)) + \gamma B_2$.}
Then,
\begin{enumerate}
\item \label{prop:Tproperties:item:fixed-points}
{We have $\zer(A+B_1+B_2)\subset\Fix S_{\gamma}$ and
$\zer(A+B_1+B_2)\cap X\subset\Fix T_{\gamma}$. Moreover, if $B_2$ is
$L$-Lipschitz in ${\mathrm{dom}\,} B_2$ for some $L>0$ and $\gamma<L^{-1}$ we have
$
\Fix(S_{\gamma}) = \zer(A+B_1+B_2)
$
and
$
\Fix(T_{\gamma}) = \zer(A+B_1+B_2) \cap X.
$
\item \label{prop:Tproperties:item:quasi0}
For all $z^\ast \in
\Fix(T_{\gamma})$ and $z\in{\mathrm{dom}\,} B_2$, by denoting $x:=J_{\gamma A} (z - \gamma
(B_1+B_2)z
)$ we have, for every $\varepsilon>0$,
\begin{align*}
\|T_\gamma z - z^\ast\|^2 &\leq\|z - z^\ast\|^2 - (1-\varepsilon) \|z - x\|^2 + \gamma^2
\|B_2z -
B_2x\|^2 \\
&\hspace{.35cm} -\! \frac{\gamma}{\varepsilon}\left(2\beta\varepsilon -\!
{\gamma}\right)\!\|B_1z - B_1 z^\ast\|^2
\!-\!\varepsilon\left\|z-x-\frac{\gamma}{\varepsilon}(B_1z-B_1z^*)\right\|^2.
\numberthis\label{eq:fejer0}
\end{align*}}
\item\label{prop:Tproperties:item:quasi} Suppose that
$B_2$ is $L$-Lipschitz {in ${\mathrm{dom}\,} A\cup X$ for some $L>0$. For all $z^\ast \in
\Fix(T_{\gamma})$ and $z\in{\mathrm{dom}\,} B_2$, by denoting $x:=J_{\gamma A} (z - \gamma
(B_1+B_2)z
)$ we have
\begin{align*}
\|T_\gamma z - z^\ast\|^2 &\leq \|z - z^\ast\|^2 - L^{2}(\chi^2-
\gamma^2)\|z - x\|^2-\frac{2\beta\gamma}{\chi}\left(\chi
- \gamma\right)\|B_1z - B_1 z^\ast\|^2\nonumber\\
&-\frac{\chi}{2\beta}\left\|z-x-\frac{2\beta\gamma}{\chi}(B_1z-B_1z^*)\right\|^2,
\numberthis\label{eq:fejer}
\end{align*}
}
where
\begin{equation}
\label{e:chi}
\chi:=\frac{4\beta}{1+\sqrt{1+16\beta^2 L^2}}\leq
\min\{2\beta,L^{-1}\}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Part~\ref{prop:Tproperties:item:fixed-points}: Let $z^*\in\ensuremath{{\mathcal H}}$. We have
{
\begin{align}
\label{e:auxequiv}
z^*\in \zer(A+B_1+B_2)\quad&\Leftrightarrow\quad 0\in Az^*+B_1z^*+B_2z^*\nonumber\\
&\Leftrightarrow\quad -\gamma (B_1z^*+B_2z^*)\in \gamma Az^*\nonumber\\
&\Leftrightarrow\quad z^*=J_{\gamma A}\left(z^*-\gamma
(B_1z^*+B_2z^*){\mathrm{i}}ght).
\end{align}
}
Then, since $B_2$ is single-valued in ${\mathrm{dom}\,} A$, if {$z^*\in \zer(A+B_1+B_2)$}
we have $B_2z^*=B_2J_{\gamma A}(z^*-\gamma
(B_1z^*+B_2z^*))$ and, hence, {$S_{\gamma}z^*=z^*$ }which yields
{$\zer(A+B_1+B_2)\subset \Fix S_{\gamma}$. Hence, if $z^\ast\in
\zer(A+B_1+B_2)\cap X$ then $z^\ast\in\Fix P_X$ and $z^\ast\in\Fix S_{\gamma}$,
which yields $z^\ast\in \Fix P_X\circ S_{\gamma}=\Fix T_{\gamma}$.} Conversely,
if { $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} B_2$ and $z^*\in \Fix S_{\gamma}$ we
have }
\begin{equation*}
z^*-J_{\gamma A}(z^*-\gamma(B_1+B_2)z^*)=\gamma\left(B_2z^*-B_2J_{\gamma
A}(z^*-\gamma(B_1+B_2)z^*) \right),
\end{equation*}
which, from the Lipschitz continuity of $B_2$ yields
\begin{align*}
\|z^*-J_{\gamma A}(z^*-\gamma(B_1+B_2)z^*)\|&=\gamma\|B_2z^*-B_2J_{\gamma
A}(z^*-\gamma(B_1+B_2)z^*)\| \\
&\leq \gamma L\|z^*-J_{\gamma
A}(z^*-\gamma(B_1+B_2)z^*)\|.
\end{align*}
Therefore, if $\gamma<L^{-1}$ we deduce $z^*=J_{\gamma
A}(z^*-\gamma(B_1+B_2)z^*)$ and from \eqref{e:auxequiv}, we deduce
{$\zer(A+B_1+B_2)=\Fix S_{\gamma}$. Since $T_{\gamma}=P_XS_{\gamma}$ and
$P_X$ is
strictly quasi-nonexpansive, the result follows from
\cite[Proposition~4.49]{bauschke2017convex}.}
Part~\ref{prop:Tproperties:item:quasi0}:
Let $z^*\in \Fix T_{\gamma}$, {$z\in{\mathrm{dom}\,} B_2$} and define $B := B_1 + B_2$, $y:=
z - \gamma Bz$, $x:=
J_{\gamma A} y$, and $z^+ = T_{\gamma}z$. Note that $(x, y-x) \in
\ensuremath{\operatorname{gra}}(\gamma A)$ and, from Part~\ref{prop:Tproperties:item:fixed-points}, $(z^\ast,
-\gamma Bz^\ast)
\in \ensuremath{\operatorname{gra}}(\gamma A)$. Hence, by the monotonicity of
$A$ and $B_2$, we have $\dotp{x - z^\ast, x-y -\gamma
Bz^\ast} \leq 0$ and $\dotp{x - z^\ast, \gamma B_2 z^\ast - \gamma B_2x}\leq 0$. Thus,
\begin{align*}
\dotp{x - z^\ast, x-y - \gamma B_2x} &= \dotp{x - z^\ast, \gamma B_1z^\ast} + \dotp{x -
z^\ast, x-y -\gamma Bz^\ast} \\
&\hspace{2.8cm}+\dotp{x - z^\ast, \gamma B_2z^\ast - \gamma B_2x} \\
&\leq \dotp{x - z^\ast, \gamma B_1z^\ast}.
\end{align*}
Therefore, we have
\begin{align}
\label{e:desig1sec2}
2\gamma \dotp{x - z^\ast, B_2z - B_2x}
&= 2\dotp{ x - z^\ast,
\gamma B_2z + y - x} + 2 \dotp{ x- z^\ast, x - y - \gamma B_2x} \nonumber\\
&\leq 2 \dotp{ x - z^\ast, \gamma Bz + y - x} + 2\dotp{x - z^\ast, \gamma B_1z^\ast -
\gamma B_1z} \nonumber\\
&= 2 \dotp{ x - z^\ast, z - x} + 2\dotp{x - z^\ast, \gamma B_1z^\ast - \gamma B_1z}
\nonumber\\
&= \|z - z^\ast\|^2 \!- \!\|x - z^\ast\|^2 \!-\! \|z - x\|^2 \!+\! 2\dotp{x - z^\ast, \gamma
B_1z^\ast - \gamma B_1z} .
\end{align}
In addition, by cocoercivity of $B_1$, for all $\varepsilon > 0$, we have
\begin{align}
\label{e:desig2sec2}
2\dotp{x - z^\ast, \gamma B_1z^\ast - \gamma B_1z}
&= 2\dotp{z - z^\ast, \gamma B_1z^\ast - \gamma B_1z} + 2\dotp{x - z, \gamma B_1z^\ast -
\gamma B_1z}\nonumber \\
&\leq - 2\gamma \beta\|B_1z - B_1 z^\ast\|^2 + 2\dotp{x - z, \gamma B_1z^\ast -
\gamma B_1z}\nonumber \\
&= - 2\gamma \beta\|B_1z - B_1 z^\ast\|^2 + \varepsilon\|z-x\|^2 +
\frac{\gamma^2}{\varepsilon}\|B_1z - B_1
z^\ast\|^2\nonumber \\
&\hspace{4cm}-\varepsilon\left\|z-x-\frac{\gamma}{\varepsilon}(B_1z-B_1z^*)\right\|^2
\nonumber \\
&= \varepsilon\|z-x\|^2 - \gamma\left(2\beta -
\frac{\gamma}{\varepsilon}\right)\|B_1z - B_1
z^\ast\|^2\nonumber \\
&\hspace{4cm}-\varepsilon\left\|z-x-\frac{\gamma}{\varepsilon}(B_1z-B_1z^*)\right\|^2.
\end{align}
Hence, combining \eqref{e:desig1sec2} and \eqref{e:desig2sec2}, it follows from $z^*\in
X$, the nonexpansivity of $P_X$, and the Lipschitz property of $B_2$ in ${\mathrm{dom}\,} B_2\supset
X\cup{\mathrm{dom}\,} A$ that
{
\begin{align}
\label{e:auxprt3}
\|z^+ - z^\ast\|^2
&\leq \|x - z^\ast + \gamma B_2z - \gamma B_2x\|^2 \nonumber\\
&= \|x - z^\ast\|^2 + 2\gamma\dotp{x - z^\ast, B_2z - B_2x} + \gamma^2\| B_2z - B_2x\|^2
\nonumber\\
&\leq \|z - z^\ast\|^2 - \|z - x\|^2 + \gamma^2 \|B_2z - B_2x\|^2 \nonumber\\
&\hspace{.35cm}+ \varepsilon\|z\!- x\|^2\! -\! \gamma\left(2\beta -\!
\frac{\gamma}{\varepsilon}\right)\!\|B_1z - B_1 z^\ast\|^2
\!-\!\varepsilon\left\|z-x-\frac{\gamma}{\varepsilon}(B_1z-B_1z^*)\right\|^2,
\end{align}
and the result follows.
Part~\ref{prop:Tproperties:item:quasi}: It follows from \eqref{e:auxprt3} and the Lipschitz property of $B_2$ that
\begin{align*}
&\|z^+ - z^\ast\|^2\leq \|z - z^\ast\|^2 - L^2\left(\frac{1-
\varepsilon}{L^2}-\gamma^2\right)\|z - x\|^2 -
\frac{\gamma}{\varepsilon}\left(2\beta\varepsilon -
\gamma\right)\|B_1z - B_1 z^\ast\|^2 \\
&\hspace{.35cm}-\varepsilon\left\|z-x-\frac{\gamma}{\varepsilon}(B_1z-B_1z^*)\right\|^2.
\end{align*}}
In order to obtain the largest interval for $\gamma$ ensuring that
{the second and third terms on the right of} the above equation are
negative, we choose
the value $\varepsilon$ so that $\sqrt{1-\varepsilon}/L =
2\beta\varepsilon$, which yields
$\varepsilon=(-1+\sqrt{1+16\beta^2L^2})(8\beta^2L^2)^{-1}$. For this
choice of $\varepsilon$ we obtain $\chi=\sqrt{1-\varepsilon}/L =
2\beta\varepsilon$.
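Indeed, rationalizing, this choice gives
\begin{equation*}
2\beta\varepsilon=\frac{-1+\sqrt{1+16\beta^2L^2}}{4\beta L^2}
=\frac{16\beta^2L^2}{4\beta L^2\left(1+\sqrt{1+16\beta^2L^2}\right)}
=\frac{4\beta}{1+\sqrt{1+16\beta^2L^2}},
\end{equation*}
which is precisely the constant $\chi$ defined in \eqref{e:chi}.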
\end{proof}
{ In the case when $B_2$ is merely continuous, we need the following result,
which gives additional information to
\cite[Lemma~3.3]{tseng2000modified} and allows us to guarantee the convergence of
the algorithm under weaker assumptions than \cite[Theorem~3.4]{tseng2000modified}.
\begin{lemma}
\label{lem:Ts}
In the context of Problem~\ref{prob:main}, define, for every $z\in{\mathrm{dom}\,} B_2$ and $\gamma
>0$,
\begin{equation}
\label{e:defxphi}
x_{z}\colon \gamma\mapsto J_{\gamma
A}(z-\gamma(B_1+B_2)z)\quad \text{and}\quad \varphi_z\colon \gamma\mapsto
\frac{\|z-x_z(\gamma)\|}{\gamma}.
\end{equation}
Then, the following hold:
\begin{enumerate}
\item \label{lem:Tsi}$\varphi_z$ is nonincreasing and
\begin{equation*}
(\forall z\in{\mathrm{dom}\,} A)\quad \lim_{\gamma\downarrow
0^+}\varphi_z(\gamma)=
\|(A+B_1+B_2)^0(z)\|:=\inf_{w\in(A+B_1+B_2)z}\|w\|.
\end{equation*}
\item\label{lem:Tsii} For every $\theta\in]0,1[$ and $z\in{\mathrm{dom}\,} B_2$, there exists
$\gamma(z)>0$ such
that, for every $\gamma\in]0,\gamma(z)]$,
\begin{equation}
\label{e:armijocond}
\gamma\|B_2z-B_2x_z(\gamma)\|
\leq\theta\|z-x_z(\gamma)\|.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Part \ref{lem:Tsi}: Denote $B:=B_1+B_2$.
If $z\in\zer(A+B)$ then it follows from \eqref{e:auxequiv} that $\varphi_z\equiv 0$ and there
is nothing to prove. Hence,
assume $z\in{\mathrm{dom}\,} B_2\setminus\zer(A+B)$ which yields $\varphi_z(\gamma)>0$ for every
$\gamma>0$. From the definition of
$J_{\gamma A}$, we have $(z-x_z(\gamma))/\gamma-Bz\in A(x_z(\gamma))$ for every
$\gamma>0$ and,
from the monotonicity of $A$, we deduce that, for every strictly positive constants
$\gamma_1$ and $\gamma_2$ we have
\begin{align}
0&\le \Scal{\frac{z-x_z(\gamma_1)}{\gamma_1}-\frac{z-x_z(\gamma_2)}{\gamma_2}}
{x_z(\gamma_1)-x_z(\gamma_2)}\nonumber\\
&=-\frac{\|z-x_z(\gamma_1)\|^2}{\gamma_1}+\left(\frac{1}{\gamma_1}+\frac{1}{\gamma_2}\right)
\scal{z-x_z(\gamma_1)}{z-x_z(\gamma_2)}-\frac{\|z-x_z(\gamma_2)\|^2}{\gamma_2}.
\end{align}
Therefore
\begin{align}
\label{e:DOng}
\gamma_1\varphi_z(\gamma_1)^2+\gamma_2\varphi_z(\gamma_2)^2&\le
(\gamma_1+\gamma_2)
\Scal{\frac{z-x_z(\gamma_1)}{\gamma_1}}{\frac{z-x_z(\gamma_2)}{\gamma_2}}\nonumber\\
&\le
\frac{\gamma_1+\gamma_2}{2}(\varphi_z(\gamma_1)^2+\varphi_z(\gamma_2)^2),
\end{align}
which is equivalent to
$
(\gamma_1-\gamma_2)(\varphi_z(\gamma_1)^2-\varphi_z(\gamma_2)^2)\le 0,
$
and the monotonicity of $\varphi_z$ is obtained. The limit follows from
\cite[Lemma~3.3\&Eq~(3.5)]{tseng2000modified}.
Part \ref{lem:Tsii}: As before, if $z\in\zer(A+B)$ we have $z=x_z(\gamma)$ for every
$\gamma>0$ and,
hence, there is nothing to prove. From Part~\ref{lem:Tsi} we have that, for every
$z\in{\mathrm{dom}\,}(A)\setminus
\zer(A+B)$,
\begin{equation*}
0<\|z-x_z(1)\|\le \lim_{\gamma\downarrow
0^+}\frac{\|z-x_z(\gamma)\|}{\gamma}= \|(A+B_1+B_2)^0(z)\|.
\end{equation*}
Therefore, since $\varphi_z$ is nonincreasing, $\|z-x_z(\gamma)\|=\gamma\varphi_z(\gamma)
\le\gamma\|(A+B_1+B_2)^0(z)\|$ for every $\gamma>0$, so that $\lim_{\gamma\downarrow
0^+}x_z(\gamma)=z$ and, from the continuity of $B_2$, $\lim_{\gamma\downarrow
0^+}\|B_2z-B_2x_z(\gamma)\|=0$. This ensures the existence of $\gamma(z)>0$
such that, for every $\gamma\in]0,\gamma(z)]$, \eqref{e:armijocond} holds.
\end{proof}
\begin{remark}
Note that the previous lemma differs from \cite[Lemma~3.3]{tseng2000modified}
because we provide the additional information that $\varphi_z$ is nonincreasing. This
property is used in \cite{Nghia},
proved in \cite{Dong}, and will be crucial
for obtaining the convergence of the algorithm with line search to a solution to
Problem~\ref{prob:main} under weaker assumptions. We keep our proof for
the sake of completeness and because the inequality \eqref{e:DOng} is slightly stronger
than that obtained in \cite{Dong}.
\end{remark}
}
\begin{theorem}[Forward-backward-half forward algorithm]
\label{t:1}
{Under the assumptions of Problem~\ref{prob:main},
let $z^0 \in {\mathrm{dom}\,} A\cup X$, and consider the sequence $\{z^k\}_{k\in\ensuremath{\mathbb N}}$ recursively
defined
by $z^{k+1} := T_{\gamma_k} z^k$ or, equivalently,
\begin{equation}
\label{e:algomain1}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
x^k = J_{\gamma_k A}(z^k - \gamma_k (B_1 + B_2)z^k) \\[2mm]
z^{k+1} = P_X \big(x^k + \gamma_k B_2z^k - \gamma_k
B_2x^k\big),
\end{array}
\right.
\end{array}
\end{equation}
where $\{\gamma_k\}_{k \in \ensuremath{\mathbb N}}$ is a sequence of
stepsizes satisfying one of the following conditions:
\begin{enumerate}
\item Suppose that $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} A\cup X$. Then, for every $k\in\ensuremath{\mathbb N}$,
$\gamma_k\in[\eta,\chi-\eta]$, where $\eta\in\left]0,\chi/2\right[$ and
$\chi$ is defined in~\eqref{e:chi}.
\item Suppose $X\subset{\mathrm{dom}\,} A$ and let $\varepsilon\in\left]0,1\right[$,
$\sigma\in]0,1[$,
and $\theta\in\left]0,\sqrt{1-\varepsilon}\right[$. Then, for every $k\in\ensuremath{\mathbb N}$, $\gamma_k$ is
the largest
$\gamma\in
\{2\beta\varepsilon\sigma,2\beta\varepsilon\sigma^2,\ldots\}$ satisfying
\eqref{e:armijocond} with $z=z^k$, and at least one of the following additional conditions
holds:
\begin{enumerate}
\item $\liminf_{k\to\infty}\gamma_k=\delta>0$.
\item $B_2$ is uniformly continuous in any weakly compact subset of
$X$.
\end{enumerate}
\end{enumerate}
Then, $\{z^k\}_{k\in\ensuremath{\mathbb N}}$ converges weakly to a solution
to Problem~\ref{prob:main}.
}
\end{theorem}
\begin{proof}
{In the case when $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} A\cup X$, it} follows from
Proposition~\ref{prop:Tproperties}(\ref{prop:Tproperties:item:quasi}) that the sequence
$\{z^{k}\}_{k \in
\ensuremath{\mathbb N}}$ is Fej{\'e}r monotone with
respect to $\zer(A+B_1+B_2) \cap X$. Thus, to show that $\{z^k\}_{k \in \ensuremath{\mathbb N}}$
converges weakly to a solution to Problem~\ref{prob:main}, we only need to prove that all
of
its weak subsequential limits lie in $\zer(A+B_1+B_2) \cap X$ {\cite[Theorem
5.33]{bauschke2017convex}.} Indeed,
it follows from Proposition~\ref{prop:Tproperties} and our hypotheses on the stepsizes
that, for every $z^*\in\Fix
T_{\gamma}$,
\begin{align}
\label{e:auxfejer}
\|z^{k}-z^*\|^2-\|z^{k+1}-z^*\|^2&\ge
L^2\eta^2\|z^k-x^k\|^2+\frac{2\beta\eta^2}{\chi}\|B_1z^k-B_1z^*\|^2\nonumber\\
&\hspace{20pt}+\frac{\chi}{2\beta}
\left\|z^k-x^k-\frac{2\beta\gamma_k}{\chi}(B_1z^k-B_1z^*)\right\|^2.
\end{align}
Therefore, we deduce from \cite[Lemma~3.1]{combettes2001quasi} that
\begin{equation}
\label{e:tozero}
z^k-x^k\to 0
\end{equation}
when $L>0$ and
$0<\beta<\infty$\footnote{The case $B_1=0$ ($\beta=+\infty$)
has been studied by Tseng in \cite{tseng2000modified}. In the case when $B_2=0$
we
can also obtain convergence from
Proposition~\ref{prop:Tproperties}, since $L=0$ implies $\chi=2\beta$ and, even
though the first term on the right-hand side of \eqref{e:auxfejer} vanishes, the other two
terms yield $z^k-x^k\to0$.}. Now let $z\in\ensuremath{{\mathcal H}}$ be a weak limit point of some
subsequence of
$\{z^k\}_{k\in\ensuremath{\mathbb N}}$. Since $z^k\in X$ for every $k\ge 1$ and $X$ is weakly
sequentially closed {\cite[Theorem 3.34]{bauschke2017convex}}
we deduce $z\in X$. Moreover, {by denoting $B:=B_1+B_2$,} it follows from $x^k
= J_{\gamma_k
A}(z^k - \gamma_k Bz^k)$ that {$u^k := \gamma_k^{-1}(z^k - x^k) - Bz^k +
Bx^k\in
(A+B)x^k$. Then, \eqref{e:tozero}, $\gamma_k\ge \eta>0$ and the Lipschitz continuity of
$B$ yield $u^k
\rightarrow 0$.
Now, since $A+B_2$ is maximally monotone and $B_1$ is cocoercive with full
domain,
$A+B$ is maximally monotone and its graph is}
closed in the weak-strong topology in $\ensuremath{{\mathcal H}}\times\ensuremath{{\mathcal H}}$, which yields
{$0\in Az+Bz$} and the result follows.
{
In the second case, we deduce from
Proposition~\ref{prop:Tproperties}
(\ref{prop:Tproperties:item:fixed-points}\&\ref{prop:Tproperties:item:quasi0}) and
$\gamma_k\leq 2\beta{\mathbf{a}}repsilon\sigma$ that,
for every $z^*\in\zer(A+B_1+B_2)\cap X$ we have
\begin{align*}
\|z^{k} - z^\ast\|^2- \|z^{k+1} - z^\ast\|^2&\geq (1-\varepsilon) \|z^k - x^k\|^2
+\! \frac{\gamma_k}{\varepsilon}\left(2\beta\varepsilon -
{\gamma_k}\right)\|B_1z^k - B_1 z^\ast\|^2 \\
&\hspace{.5cm}+\varepsilon\left\|z^k-x^k-\frac{\gamma_k}{\varepsilon}(B_1z^k-B_1z^*)\right\|^2
-\gamma_k^2 \|B_2z^k - B_2x^k\|^2\\
&\geq (1-\varepsilon-\theta^2) \|z^k - x^k\|^2
+\! 2\beta\varepsilon(1-\sigma)\gamma_k\|B_1z^k - B_1 z^\ast\|^2,
\numberthis\label{eq:fejer1}
\end{align*}
where in the last inequality we use the conditions on $\{\gamma_k\}_{k\in\ensuremath{\mathbb N}}$,
whose existence is guaranteed by Lemma~\ref{lem:Ts}\eqref{lem:Tsii} because $z^k\in
X\subset {\mathrm{dom}\,} A$.
Then, we deduce from
\cite[Lemma~3.1]{combettes2001quasi} that
$z^k-x^k\to 0$.
Now let $z$ be a weak limit point of a subsequence $\{z^{k}\}_{k\in K}$, with $K\subset
\ensuremath{\mathbb N}$.
If $\liminf_{k\to\infty}\gamma_k=\delta>0$, from \eqref{e:armijocond} and $z^k-x^k\to 0$
we have $B_2x^k-B_2z^k\to 0$
and the proof is analogous to the previous case. Finally, for the last case, suppose that
there exists a subsequence of $\{\gamma_{k}\}_{k\in K}$ (called similarly) satisfying
$\lim_{k\to\infty,k\in K}\gamma_k=0$. Our choice of $\gamma_k$ yields, for every $k\in K$,
\begin{equation}\label{e:palotro}
\theta\|z^k-J_{\tilde{\gamma}_k A}(z^k-\tilde{\gamma}_k B
z^k)\|/\tilde{\gamma}_k<\|B_2z^k-B_2J_{\tilde{\gamma}_k A}(z^k-\tilde{\gamma}_k B
z^k)\|,
\end{equation}
where $\tilde{\gamma}_k=\gamma_k/\sigma>\gamma_k$ and, from
Lemma~\ref{lem:Ts}\eqref{lem:Tsi} we
have
\begin{align}
\label{e:auxLS}
\sigma\|z^k-J_{\tilde{\gamma}_k A}(z^k-\tilde{\gamma}_k B
z^k)\|/{\gamma}_k&= \|z^k-J_{\tilde{\gamma}_k A}(z^k-\tilde{\gamma}_k B
z^k)\|/\tilde{\gamma}_k\nonumber\\
&\le \|z^k-J_{{\gamma}_k A}(z^k-{\gamma}_k B
z^k)\|/{\gamma}_k,
\end{align}
which, from $z^k-x^k\to 0$, yields
$$\|z^k-J_{\tilde{\gamma}_k A}(z^k-\tilde{\gamma}_k B
z^k)\|\le \|z^k-x^k\|/\sigma\to 0$$
as $k\to\infty, k\in K$. Therefore, since $z^k\ensuremath{\:\rightharpoonup\:} z$, the sequence $\{\tilde{x}^k\}_{k\in
K}$ defined by
$$(\forall k\in K)\quad \tilde{x}^k:=J_{\tilde{\gamma}_k A}(z^k-\tilde{\gamma}_k B
z^k)$$ satisfies $\tilde{x}^k\ensuremath{\:\rightharpoonup\:} z$ as $k\to+\infty,k\in K$ and
\begin{equation}
\tilde{w}^k:=\frac{z^k-\tilde{x}^k}{\tilde{\gamma}_k}+ B \tilde{x}^k- B
z^k\in (A+B_1+B_2)\tilde{x}^k.
\end{equation}
Hence, since $\{z\}\cup\bigcup_{k\in\ensuremath{\mathbb N}}[\tilde{x}^k,z^k]$ is a weakly compact
subset
of $X$ \cite[Lemma~3.2]{Salzo}, it follows from the uniform continuity of $B_2$ that the
right hand side of
\eqref{e:palotro} goes to $0$ and, hence, ${(z^k-\tilde{x}^k)}/{\tilde{\gamma}_k}\to 0$ as
$k\to\infty,k\in K$. Moreover, since $B_1$ is uniformly continuous, $B=B_1+B_2$
is also locally uniformly continuous and $B \tilde{x}^k- B
z^k\to 0$, which yields $\tilde{w}^k\to 0$ as $k\to+\infty,k\in K$. The result is obtained as in
the first case since the graph of $A+B$ is weakly-strongly closed in the product topology.
}
\end{proof}
\begin{remark}
\begin{enumerate}
\item In \cite[Theorem~3.4]{tseng2000modified} the local boundedness of $z\mapsto
\min_{w\in(A+B)z}\|w\|$ is needed to guarantee the convergence of the method with
line
search. We drop this assumption by using the monotonicity of $\varphi_z$ in
Lemma~\ref{lem:Ts}\eqref{lem:Tsi}, which leads us to the inequality \eqref{e:auxLS}.
\item Since continuity on compact sets yields uniform continuity, in the finite
dimensional setting, the assumption on $B_2$ reduces to the mere continuity on $X$
(see \cite[Remark~3.1(v)]{Salzo}). In this case, no assumptions beyond those given in
Problem~\ref{prob:main} are needed.
\end{enumerate}
\end{remark}
\begin{remark}
The maximal monotonicity assumption on $A+B_2$ is satisfied, for
instance, if $\mathrm{cone}({\mathrm{dom}\,} A-{\mathrm{dom}\,}
B_2)=\overline{\mathrm{span}}({\mathrm{dom}\,} A-{\mathrm{dom}\,} B_2)$, where, for any set
$D\subset\ensuremath{{\mathcal H}}$,
$\mathrm{cone}(D)=\menge{\lambda d}{\lambda\in\mathbb{R}_+, d\in D}$
and $\overline{\mathrm{span}}(D)$ is the smallest closed linear
subspace of $\ensuremath{{\mathcal H}}$ containing $D$ \cite[Theorem~3.11.11]{Zalinescu}.
\end{remark}
\begin{remark}
\label{rem:chi}
{In the case when $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} A\cup X$, the} stepsize
upper
bound $\chi=
\chi(\beta,L)$ defined in \eqref{e:chi}
depends on the
cocoercivity parameter $\beta$ of $B_1$ and the Lipschitz parameter
$L$ of $B_2$. In order to fully recover Tseng's splitting algorithm or the
forward-backward algorithm in the cases when $B_1$ or $B_2$ are zero,
respectively, we study the asymptotic behaviour of $\chi(\beta,L)$
when $L\to 0$ and $\beta\to+\infty$. It is easy to verify that
\begin{align*}
\lim_{L\to 0}\chi(\beta,L)=2\beta\quad\text{and}\quad \lim_{\beta\to
+\infty}\chi(\beta,L)=\frac{1}{L},
\end{align*}
which are exactly the bounds on the stepsizes of forward-backward and
Tseng's splittings.
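Indeed, since $\sqrt{1+16\beta^2L^2}\geq\max\{1,4\beta L\}$, \eqref{e:chi} directly gives
$\chi(\beta,L)\leq\min\{2\beta,L^{-1}\}$, and dividing the numerator and denominator of
\eqref{e:chi} by $4\beta$ yields
\begin{equation*}
\chi(\beta,L)=\left(\frac{1}{4\beta}+\sqrt{\frac{1}{16\beta^2}+L^2}\right)^{-1}
\;\longrightarrow\;\frac{1}{L}\quad\text{as }\beta\to+\infty.
\end{equation*}
As a numerical illustration (the values are chosen only for the sake of the example), for
$\beta=1$ and $L=10^{-2}$ we obtain $\chi\approx 1.999$, to be compared with the FBF bound
$(\beta^{-1}+L)^{-1}\approx 0.990$.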
{ On the other hand, when $B_2$ is merely continuous, if we choose
$\varepsilon\in\left]0,1\right[$ close to 1,
$\{\gamma_k\}_{k\in\ensuremath{\mathbb N}}$ could be larger since the line search starts from
$2\beta\varepsilon\sigma$. However, $\theta<\sqrt{1-\varepsilon}$
must then be close to 0, so that condition \eqref{e:armijocond} is more restrictive
and satisfied only for
small values of $\gamma_k$. Conversely, for small values of $\varepsilon$ we restrict
the sequence $\{\gamma_k\}_{k\in\ensuremath{\mathbb N}}$ to a small interval, but \eqref{e:armijocond} is more
easily satisfied. The optimal choice of $\varepsilon$ for obtaining an optimal sequence
$\{\gamma_k\}_{k\in\ensuremath{\mathbb N}}$ depends on the properties of the operators involved.
Note that, in the particular case when $B_2\equiv 0$, \eqref{e:armijocond} is satisfied for
$\theta=0$ and we can choose $\varepsilon=1$, recovering forward-backward splitting.
On the other hand, when $B_1\equiv0$, we can take $\varepsilon=0$ and
$\theta\in\left]0,1\right[$, recovering Tseng's method with backtracking proposed in
\cite{tseng2000modified}.}
\end{remark}
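For concreteness, the following minimal sketch (in Python) illustrates iteration
\eqref{e:algomain1} with a constant stepsize on a toy instance; the choices of $A$, $B_1$,
$B_2$, and $X$ below are assumptions made only for this illustration and are not prescribed
by the theory above.
\begin{verbatim}
import numpy as np

# Illustrative toy instance:
#   A  = normal cone of the box [-1,1]^n  =>  J_{gamma A} = projection onto the box
#   B1 = x -> x - b   (gradient of 0.5*||x - b||^2, 1-cocoercive, beta = 1)
#   B2 = skew-symmetric linear map         (monotone, Lipschitz with L = ||B2||)
#   X  = R^n                               =>  P_X = identity
rng = np.random.default_rng(0)
n = 5
b = rng.standard_normal(n)
M = rng.standard_normal((n, n))
B2 = (M - M.T) / 2                       # skew-symmetric, hence monotone
L = np.linalg.norm(B2, 2)                # Lipschitz constant of B2
beta = 1.0
chi = 4 * beta / (1 + np.sqrt(1 + 16 * beta**2 * L**2))  # stepsize bound chi(beta, L)
gamma = 0.9 * chi                        # any gamma in ]0, chi[

B1 = lambda x: x - b
proj = lambda x: np.clip(x, -1.0, 1.0)   # resolvent of A (projection onto the box)

z = np.zeros(n)
for _ in range(500):
    x = proj(z - gamma * (B1(z) + B2 @ z))   # forward-backward step
    z = x + gamma * (B2 @ z - B2 @ x)        # half forward step on B2
# z now approximates a point in zer(A + B1 + B2)
\end{verbatim}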
\section{Forward-backward-half forward splitting with
non self-adjoint linear operators}
\label{sec:asymm}
In this section, we introduce modified resolvents $J_{ P^{-1} A}$, which depend on an
invertible
linear mapping $P$. In some cases, it is preferable to compute the { modified
resolvent instead of the standard resolvent $J_{A} = ({\ensuremath{\operatorname{Id}}\,}+A)^{-1}$} because
the former may be easier to compute than the latter or, when $P$ is
triangular by blocks in a product space, the former may \textit{order the component
computation}
of the resolvent, replacing a parallel computation with a
Gauss-Seidel style sequential computation. However, ${P^{-1}A}$ may
not be maximally monotone. The following result allows us to use
some non self-adjoint linear operators in the computation of the resolvent
by using specific metrics. For simplicity, we assume from here
that $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} A\cup X$, for some $L\geq 0$.
\begin{proposition}
\label{prop:new}
Let $A\colon\ensuremath{{\mathcal H}}\to 2^{\ensuremath{{\mathcal H}}}$ be a maximally monotone operator, let $P\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$
be a linear bounded operator, and let $U := (P + P^\ast)/2$ and $S := (P -
P^\ast)/2$ be the
self-adjoint and skew symmetric components
of $P$, respectively. Assume that there exists $\rho>0$ such that
\begin{equation}
(\forall x\in\ensuremath{{\mathcal H}})\quad \rho\|x\|^2\le \scal{Ux}{x}=:\|x\|^2_U.
\end{equation}
Then, we have
\begin{equation}
J_{P^{-1}A}=J_{U^{-1}(A+S)}({\ensuremath{\operatorname{Id}}\,}+U^{-1}S).
\end{equation}
In particular, $J_{P^{-1}A}\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ is single valued, everywhere defined and
satisfies
\begin{equation}
(\forall (x,y)\in\ensuremath{{\mathcal H}}^2)\quad \scal{J_{P^{-1}A}x-J_{P^{-1}A}y}{Px-Py}\ge
\|J_{P^{-1}A}x-J_{P^{-1}A}y\|^2_U
\end{equation}
and, hence, $U^{-1}P^*J_{P^{-1}A}$ is firmly nonexpansive in $(\ensuremath{{\mathcal H}},\scal{\cdot}{\cdot}_U)$,
where $\scal{\cdot}{\cdot}_U\colon (x,y)\mapsto\scal{Ux}{y}$.
\end{proposition}
\begin{proof}
Indeed, since $S$ is monotone and everywhere defined, $A+S$ is maximally monotone in
$\ensuremath{{\mathcal H}}$ \cite[Corollary~25.5]{bauschke2017convex} and, from
\cite[Lemma~3.7]{combettes2012variable}
we have that $U^{-1}(A+S)$ is maximally monotone in $\ensuremath{{\mathcal H}}$ with the metric
$\scal{\cdot}{\cdot}_U\colon(x,y)\mapsto\scal{x}{Uy}$. Hence, $J_{U^{-1}(A+S)}$ is single
valued (indeed firmly nonexpansive) and, for every $(x,z) \in {\mathcal{H}}^2$, we have
\begin{align*}
x = J_{ U^{-1}(A+S)}(z + U^{-1} Sz) &\quad \Leftrightarrow\quad z
+ U^{-1} S z - x \in
U^{-1}(A+S )x \\
&\quad\Leftrightarrow \quad (U+S)z- (U+S)x \in
Ax \\
&\quad\Leftrightarrow \quad x = J_{ P^{-1} A} z.
\end{align*}
Hence, for every
$(x,y)\in\ensuremath{{\mathcal H}}^2$, denoting by $p=J_{P^{-1}A}x=J_{U^{-1}(A+S)}(x+U^{-1} Sx)$
and $q=J_{P^{-1}A}y=J_{U^{-1}(A+S)}(y+U^{-1} Sy)$, the firm
nonexpansivity of
$J_{U^{-1}(A+S)}$ in $(\ensuremath{{\mathcal H}},\scal{\cdot}{\cdot}_U)$ yields
\begin{align*}
\scal{p-q}{Px-Py}&=
\scal{p-q}{U\left(x+U^{-1}
Sx-(y+U^{-1} Sy)\right)}\\
&=\scal{p-q}{x+U^{-1}
Sx-(y+U^{-1} Sy)}_U\\
&\ge \|p-q\|^2_U,
\end{align*}
and the result follows from $\scal{p-q}{Px-Py}=\scal{U^{-1}P^*(p-q)}{x-y}_U$.
\end{proof}
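As an elementary illustration of the component ordering mentioned at the beginning of this
section (the choices below are purely illustrative), let $\ensuremath{{\mathcal H}}=\mathbb{R}^2$, let
$A=A_1\times A_2$ with $A_1,A_2\colon\mathbb{R}\to2^{\mathbb{R}}$ maximally monotone, and let
\begin{equation*}
P=\begin{pmatrix}1&0\\-1&1\end{pmatrix},\qquad
U=\begin{pmatrix}1&-1/2\\-1/2&1\end{pmatrix},\qquad
S=\begin{pmatrix}0&1/2\\-1/2&0\end{pmatrix},
\end{equation*}
so that $\scal{Ux}{x}\geq\|x\|^2/2$. Writing $x=J_{P^{-1}A}z$ as $Pz-Px\in Ax$, as in the
proof above, and reading the two rows successively yields $x_1=J_{A_1}z_1$ and then
$x_2=J_{A_2}(z_2-z_1+x_1)$: the second component uses the freshly computed first component,
which is precisely a Gauss--Seidel sweep.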
\begin{theorem}[New Metrics and $T_\gamma$]\label{thm:asymmetric_metric}
{Under the hypotheses of Problem~\ref{prob:main} and assuming additionally
that $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} A\cup X$}, let $P : {\mathcal{H}} {\mathrm{i}}ghtarrow
{\mathcal{H}}$ be {a}
bounded linear
operator, let $U := (P + P^\ast)/2$ and $S := (P - P^\ast)/2$ be the
self-adjoint and skew symmetric components
of $P$, respectively. Suppose that there exists $\rho>0$ such
that
\begin{align}
\label{e:metricconditions}
\left(\forall x \in {\mathcal{H}} \right) \qquad \rho\|x\|^2 \leq
\dotp{Ux, x}\quad\text{and}\quad
K^2<\rho\left(\rho-\frac{1}{2\beta}\right),
\end{align}
where $K\ge0$ is the Lipschitz constant of $B_2-S$.
Let $z^0 \in{{\mathrm{dom}\,} A\cup X}$ and let $\{z^k\}_{k\in\ensuremath{\mathbb N}}$ be the sequence defined
by
the
following iteration:
\begin{equation}
\label{eq:compressedalg}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
x^k = J_{ P^{-1} A}(z^k - P^{-1}(B_1 + B_2)z^k) \\[2mm]
z^{k+1} = P_X^U(x^k + U^{-1}(B_2z^k -
B_2x^k-S(z^k-x^k))),
\end{array}
\right.
\end{array}
\end{equation}
where $P_X^U$ is the projection operator of $X$ under the inner
product $\dotp{\cdot, \cdot}_U$.
Then $\{z^k \}_{k \in\ensuremath{\mathbb N}}$ converges weakly to a solution to Problem~\ref{prob:main}.
\end{theorem}
\begin{proof}
Note that, since $U$ is invertible from \eqref{e:metricconditions}, by adding and
subtracting the skew term $S$,
Problem~\ref{prob:main} is equivalent to
\begin{align}\label{eq:modifiedmono}
\text{find $x \in X$ such that} \quad 0 \in
U^{-1}(A+S)x + U^{-1}B_1{x} +
U^{-1}(B_2-S)x.
\end{align}
Because $S$ and $-S$ are both monotone and Lipschitz,
$\mathcal{A}:=U^{-1}(A+S)$ is monotone;
$\mathcal{B}_1:=U^{-1}B_1$ is $\rho \beta$-cocoercive~\cite[Proposition
1.5]{davis2014convergenceprimaldual};
and $\mathcal{B}_2:=U^{-1} (B_2 - S)$ is
monotone and $\rho^{-1}K$-Lipschitz {in ${\mathrm{dom}\,} A\cup X$} under the inner
product
$\dotp{\cdot,
\cdot}_U=\scal{U\cdot}{\cdot}$, where $K$
is the Lipschitz constant of $C:=B_2-S$.\footnote{Note that $K\leq
L+\|S\|$, but this bound is not tight when, for instance,
$B_2=S$. } For the last assertion
note that, for every $ x,y\, {\in {\mathrm{dom}\,} A\cup X}$, $$\|\mathcal{B}_2x -
\mathcal{B}_2y\|^2_{U} = \dotp{U^{-1}(Cx - Cy), Cx - Cy}
\leq \rho^{-1}K^2\|x-y\|^2\leq
\rho^{-2}K^2\|x-y\|^2_U.$$
Moreover, the stepsize condition reduces to
\begin{equation}
\label{e:auxiliary}
\gamma=1<\frac{4\beta\rho}{1+\sqrt{1+16\beta^2
K^2}}=\frac{-\rho+\sqrt{\rho^2+16\beta^2\rho^2
K^2}}{4\beta K^2}
\end{equation}
or, equivalently,
\begin{equation}
\label{e:aux221}
(4\beta
K^2+\rho)^2<{\rho^2+16\beta^2\rho^2
K^2}\quad
\Leftrightarrow\qquad
2\beta K^2+\rho<2\beta\rho^2,
\end{equation}
which yields the second condition in \eqref{e:metricconditions}.
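Indeed, expanding the square, $(4\beta K^2+\rho)^2<\rho^2+16\beta^2\rho^2K^2$ is equivalent to
$16\beta^2K^4+8\beta K^2\rho<16\beta^2\rho^2K^2$ which, after dividing by $8\beta K^2$ (the
case $K=0$ being trivial), reads $2\beta K^2+\rho<2\beta\rho^2$, that is,
$K^2<\rho\left(\rho-\frac{1}{2\beta}\right)$.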
Therefore, since $\mathcal{A}+\mathcal{B}_2=U^{-1}(A+B_2)$ is maximally monotone
in $({\mathcal{H}},\|\cdot\|_U)$, the inclusion~\eqref{eq:modifiedmono} meets the conditions of
Theorem~\ref{t:1} under this metric. {Therefore, by considering the sequence
generated
by
$z^{k+1}=T_1z^k$ for the
quasi-nonexpansive operator
\begin{align}
\label{e:defT1}
T_{1} = P_X^U\circ \left[({\ensuremath{\operatorname{Id}}\,} -
\mathcal{B}_2) \circ J_{ \mathcal{A}}
\circ \big({\ensuremath{\operatorname{Id}}\,} -
(\mathcal{B}_1+\mathcal{B}_2)\big) + \mathcal{B}_2\right],
\end{align}
which, from Proposition~\ref{prop:new} reduces to \eqref{eq:compressedalg},}
we obtain a sequence that weakly converges to a fixed point of
$T_{1}$, and hence, to a solution of $\zer(A+B_1+B_2)\cap
X$.
\end{proof}
\begin{remark}
\begin{enumerate}
\item Note that, in the particular case when $P=\ensuremath{\operatorname{Id}}\,/\gamma$, the
algorithm
\eqref{eq:compressedalg}
{reduces to \eqref{e:algomain1} when the stepsizes are constant}. Moreover,
$U=P$, $S=0$, $K=L$, $\rho=1/\gamma$ and the second condition in
\eqref{e:metricconditions} reduces to $\gamma<\chi$ with $\chi$
defined in \eqref{e:chi} (a short verification is given after this remark). Hence, this
assumption can be seen as a kind of ``step size'' condition on $P$.
\item \label{rem:4}
As in Remark~\ref{rem:chi}, note that the second condition in
\eqref{e:metricconditions} depends
on the cocoercivity parameter $\beta$ and the Lipschitz constant
$L$. In the
case when $B_1$ is zero, we can take $\beta\to+\infty$ and this
condition reduces to $K<\rho$. On the other hand, if $B_2$ is zero
we can take $L= 0$, then $K= \|S\|$ and, hence, the condition
reduces to $\|S\|^2<\rho(\rho-1/(2\beta))$.
In this way we obtain convergent versions
of Tseng's splitting
and forward-backward algorithm with non self-adjoint linear operators by
setting $B_1=0$ or $B_2=0$ in \eqref{eq:compressedalg}, respectively.
\item When $S=0$ {and $B_1=0$ or $B_2=0$, }
from Theorem~\ref{thm:asymmetric_metric} we
recover {the versions of Tseng's
forward-backward-forward splitting
\cite[Theorem~3.1]{vu2013variableFBF}
or forward-backward \cite[Theorem~4.1]{combettes2012variable},
respectively, when the step-sizes and the
non-standard metrics involved are constant. }Of course, when
$S=0$, $U=\ensuremath{\operatorname{Id}}\,/\gamma$, and $\rho=1/\gamma$, we recover
the classical bound for step-sizes in the standard metric case for each method.
\item For a particular choice
of operators and metric, the forward-backward method {
with non-standard metric} discussed before
has been
used for solving
primal-dual composite inclusions and primal-dual optimization problems
{\cite{condat2013primal,vu2013splitting}}. This approach generalizes,
e.g., the method in \cite{chambolle2011first}.
In Section~\ref{sec:5} we compare the application of our method in the primal-dual
context
with \cite{vu2013splitting} and other methods in the literature.
\item In the particular instance when
$B_1=B_2=0$, we need $\|S\|<\rho$ and we obtain from
\eqref{eq:compressedalg} the
following version of the proximal point algorithm (we consider $X=\ensuremath{{\mathcal H}}$ for simplicity)
\begin{align*}
z^0\in\ensuremath{{\mathcal H}},\quad (\forall k\in\ensuremath{\mathbb N})\quad z^{k+1}&=J_{ P^{-1}
A}z^k+U^{-1}S(J_{ P^{-1}
A}z^k-z^k) \\
&=(\ensuremath{\operatorname{Id}}\,-U^{-1}P)z^k+U^{-1}PJ_{ P^{-1} A}z^k.\numberthis\label{e:ppavm}
\end{align*}
Moreover, in the case when $A=B_2=0$, since $U^{-1}\circ S\circ
P^{-1}=U^{-1}-P^{-1}$, we
recover from
\eqref{eq:compressedalg} the gradient-type method:
\begin{equation}
\label{e:gradvm}
z^0\in\ensuremath{{\mathcal H}},\quad (\forall k\in\ensuremath{\mathbb N})\quad z^{k+1}=z^k-
U^{-1}B_1z^k.
\end{equation}
\item In the particular case when $X=\ensuremath{{\mathcal H}}$ and $B_2$ is linear, in
\cite{patrinos2016asym} a method involving $B_2^*$ is proposed. In
the case when $B_2$ is skew linear, i.e., $B_2^*=-B_2$,
\eqref{e:metricconditions} reduces to this method in the case
$\alpha_n\equiv 1$ and $S=P$. The methods are different in general.
\end{enumerate}
\end{remark}
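To verify the claim in item 1 of the previous remark, note that with $K=L$ and $\rho=1/\gamma$
the second condition in \eqref{e:metricconditions} reads
$L^2<\frac{1}{\gamma}\left(\frac{1}{\gamma}-\frac{1}{2\beta}\right)$ or, after multiplying by
$2\beta\gamma^2>0$, $2\beta L^2\gamma^2+\gamma-2\beta<0$. When $L>0$ this holds if and only if
$\gamma$ is smaller than the positive root $\big(-1+\sqrt{1+16\beta^2L^2}\big)/(4\beta L^2)$
of the left-hand side, which equals $\chi$ by the computation at the end of the proof of
Proposition~\ref{prop:Tproperties}; when $L=0$ it reduces directly to $\gamma<2\beta=\chi$.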
\section{Allowing variable $P$ and avoiding inversion of $U$}
\label{sec:4}
In Algorithm~\eqref{eq:compressedalg}, the linear operator $U$
must be inverted. In this section, for the special case ${{\mathrm{dom}\,} B_2=}X = {\mathcal{H}}$,
we show how to replace this sometimes costly inversion with a
single multiplication by the map $P$, which, in addition, may
vary at each iteration.
This new feature is a consequence of Proposition~\ref{prop:classT} below,
which allows us to obtain from an operator of the class
$\mathfrak{T}$ in $(\ensuremath{{\mathcal H}},\|\cdot\|_U)$, another operator of the same class
in $(\ensuremath{{\mathcal H}},\|\cdot\|)$ preserving the set of fixed points. This change
to the standard metric allows us to use different linear operators at each
iteration by avoiding classical restrictive additional assumptions of
the type $U_{n+1}\preccurlyeq U_n(1+\eta_n)$ with $(\eta_n)_{n\in\ensuremath{\mathbb N}}$
in $\ell^1_+$.
We recall
that an operator $\mathcal{S}\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ belongs to the class
$\mathfrak{T}$
in $(\ensuremath{{\mathcal H}},\|\cdot\|)$ if and only if ${\mathrm{dom}\,} \mathcal{S}=\ensuremath{{\mathcal H}}$ and
$(\forall
y\in\Fix \mathcal{S})(\forall
x\in\ensuremath{{\mathcal H}})\quad
\|x-\mathcal{S}x\|^2\leq \scal{x-\mathcal{S}x}{x-y}$.
\begin{proposition}
\label{prop:classT}
Let $U\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ {be} a self-adjoint bounded linear operator such
that, for every $x\in\ensuremath{{\mathcal H}}$, $\scal{Ux}{x}\geq\rho\|x\|^2$, for some
$\rho>0$, let $0<\mu\leq \|U\|^{-1}$, and let
$\mathcal{S}\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$
be
an
operator in the class $\mathfrak{T}$ in $(\ensuremath{{\mathcal H}},\|\cdot\|_U)$. Then, the
operator $\mathcal{Q}=\ensuremath{\operatorname{Id}}\,-\mu U(\ensuremath{\operatorname{Id}}\,-\mathcal{S})$ belongs to the
class
$\mathfrak{T}$
in $(\ensuremath{{\mathcal H}},\|\cdot\|)$ and $\Fix \mathcal{S}=\Fix \mathcal{Q}$.
\end{proposition}
\begin{proof}
First note that, under the assumptions on $U$ it is invertible and,
from \cite[Lemma~2.1]{combettes2013variable}, we deduce
\begin{equation}
\label{e:ineqU}
(\forall
x\in\ensuremath{{\mathcal H}})\quad \|x\|_U^2=\scal{Ux}{x}=\scal{Ux}{U^{-1}Ux}
\geq \|U\|^{-1}\|Ux\|^2,
\end{equation}
and $\Fix \mathcal{S}=\Fix \mathcal{Q}$ thus follows from the
definition of $\mathcal{Q}$.
Now let $y\in\Fix
\mathcal{S}$ and $x\in\ensuremath{{\mathcal H}}$. We have from
\eqref{e:ineqU} that
\begin{align}
\|x-\mathcal{S}x\|_U^2\leq \scal{x-\mathcal{S}x}{x-y}_U\:
&\Leftrightarrow\: \|x-\mathcal{S}x\|_U^2\leq
\scal{U(x-\mathcal{S}x)}{x-y}\nonumber\\
&\Rightarrow\: \|U\|^{-1}\|U(x-\mathcal{S}x)\|^2\leq
\scal{U(x-\mathcal{S}x)}{x-y}\nonumber\\
&\Leftrightarrow\: \frac{\|U\|^{-1}}{\mu}\|\mu
U(x-\mathcal{S}x)\|^2\leq
\scal{\mu U(x-\mathcal{S}x)}{x-y}\nonumber\\
&\Leftrightarrow\: \frac{\|U\|^{-1}}{\mu}\|x-\mathcal{Q}x\|^2\leq
\scal{x-\mathcal{Q}x}{x-y}
\end{align}
and, hence, if $\mu\in]0,\|U\|^{-1}]$ we deduce the result.
\end{proof}
\begin{theorem}\label{cor:asymmetricnoinversion}
{Under the hypotheses of Problem~\ref{prob:main} and assuming additionally
that $B_2$ is $L$-Lipschitz in ${\mathrm{dom}\,} B_2=\ensuremath{{\mathcal H}}$}, let $\{P_k\}_{k \in
\ensuremath{\mathbb N}}$ be a
sequence of bounded,
linear maps from ${\mathcal{H}}$ to ${\mathcal{H}}$. For each $k \in \ensuremath{\mathbb N}$, let
$U_k := (P_k + P_k^\ast)/2$ and ${S}_k := (P_k -
P_k^\ast)/2$
be the self-adjoint and skew symmetric components
of $P_k$, respectively. Suppose that
$M:=\sup_{k\in\ensuremath{\mathbb N}}\|U_k\|<\infty$ and that there
exist $\varepsilon\in]0,(2M)^{-1}[$, $\rho > 0$, and $\{\rho_k\}_{k
\in \ensuremath{\mathbb N}}
\subseteq [\rho,
\infty[$
such that, for every $k\in\ensuremath{\mathbb N}$,
\begin{align}
\label{e:metricconditions2}
\left(\forall x \in {\mathcal{H}} \right) \qquad \rho_k\|x\|^2 \leq
\dotp{U_kx, x} && \text{and} &&
K_k^2\leq\frac{\rho_k}{1+\varepsilon}\left(\frac{\rho_k}
{1+\varepsilon}-\frac{1}{2\beta}\right),
\end{align}
where $K_k\ge0$ is the Lipschitz constant of $B_2-S_k$.
Let $\{\lambda_k\}_{k \in \ensuremath{\mathbb N}}$ be a sequence in $[\varepsilon,
\|U_k\|^{-1} - \varepsilon]$, let $z^0 \in\ensuremath{{\mathcal H}}$, and let $\{z^k\}_{k
\in \ensuremath{\mathbb N}}$ be a sequence of points defined by the following iteration:
\begin{equation}
\label{eq:FBF-asymmetric-no-U}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
x^k = J_{P_k^{-1} A}(z^k - P_k^{-1}(B_1 + B_2)z^k) \\[2mm]
z^{k+1} = z^k + \lambda_k\left( P_k(x^k - z^k) + B_2z^k -
B_2x^k\right).
\end{array}
\right.
\end{array}
\end{equation}
Then $\{z^k \}_{k \in\ensuremath{\mathbb N}}$ converges weakly to a solution to Problem~\ref{prob:main}.
\end{theorem}
\begin{proof}
For every invertible and bounded linear map $P : {\mathcal{H}} \rightarrow
{\mathcal{H}}$, let us denote by
$\mathcal{T}_{ P} : {\mathcal{H}} \rightarrow {\mathcal{H}}$ the
forward-backward-forward
operator of Theorem~\ref{thm:asymmetric_metric} in the case
$X={\mathcal{H}}$, which associates, to every $z\in\ensuremath{{\mathcal H}}$,
\begin{align*}
\mathcal{T}_{P}z &= x_z + U^{-1}( B_2z - B_2x_z-S( z-x_z)),
\end{align*}
where $x_z = J_{P^{-1} A}(z -P^{-1}(B_1 + B_2)z)$. Recall that, from \eqref{eq:fejer}
and the proof of Theorem~\ref{thm:asymmetric_metric}, $\mathcal{T}_P$
is
a quasi-nonexpansive mapping in $\ensuremath{{\mathcal H}}$ endowed with the scalar product
$\scal{\cdot}{\cdot}_U$.
Observe that multiplying ${\ensuremath{\operatorname{Id}}\,} - \mathcal{T}_{P}$ by $U$ on the left
yields a $U^{-1}$-free
expression:
\begin{align*}
({\ensuremath{\operatorname{Id}}\,} - \mathcal{T}_{P})(z) &= (z - x_z) +U^{-1}S(z-x_z) -U^{-1}(B_2z -
B_2x_z)\\
\Leftrightarrow\qquad U({\ensuremath{\operatorname{Id}}\,} - \mathcal{T}_{P})(z) &= (U+S)(z - x_z) +
B_2x_z - B_2z\\
&= P(z - x_z) + B_2x_z -
B_2z\numberthis\label{eq:Snotation}.
\end{align*}
Note that, since $\mathcal{T}_P$ is quasi-nonexpansive in
$(\ensuremath{{\mathcal H}},\|\cdot\|_U)$,
it follows from
\cite[Proposition~2.2]{combettes2001quasi} that
$\mathcal{S}:=(\ensuremath{\operatorname{Id}}\,+\mathcal{T}_P)/2$
belongs to the class $\mathfrak{T}$ in $(\ensuremath{{\mathcal H}},\|\cdot\|_U)$ and, from
Proposition~\ref{prop:classT} and \eqref{eq:Snotation} we obtain that the operator
\begin{equation}
\label{e:QP}
\mathcal{Q}_{P}:={\ensuremath{\operatorname{Id}}\,}-\|U\|^{-1}U({\ensuremath{\operatorname{Id}}\,}-\mathcal{S})=
{\ensuremath{\operatorname{Id}}\,}-\frac{\|U\|^{-1}}{2}U({\ensuremath{\operatorname{Id}}\,}-\mathcal{T}_P)
\end{equation}
belongs to the class $\mathfrak{T}$ in
$(\ensuremath{{\mathcal H}},\|\cdot\|)$ and $\Fix \mathcal{S}=\Fix
\mathcal{Q}_{P}=\zer(U({\ensuremath{\operatorname{Id}}\,} -
\mathcal{T}_{P}))=\Fix(\mathcal{T}_{P})=\zer(A+B_1+B_2)$.
Hence, from \eqref{eq:Snotation} and \eqref{e:QP}, the algorithm
\eqref{eq:FBF-asymmetric-no-U} can be written equivalently as
\begin{align}
z^{k+1}&=z^k-\lambda_k(P_k(z^k - x_{z^k}) + B_2x_{z^k} -
B_2z^k)\nonumber\\
&=z^k+2\lambda_k\|U_k\|(\mathcal{Q}_{P_k}z^k-z^k).
\end{align}
Hence, since $0<\liminf\lambda_k\|U_k\|\le \limsup \lambda_k\|U_k\|<1$,
it follows from
\cite[Proposition~4.2 and
Theorem~4.3]{combettes2001quasi}
that
$(\|z^k-\mathcal{Q}_{P_k}z^k\|^2)_{k\in\ensuremath{\mathbb N}}$ is a summable sequence
and
$\{z^k\}_{k\in\ensuremath{\mathbb N}}$ converges weakly in $(\ensuremath{{\mathcal H}},\scal{\cdot}{\cdot})$ to
a solution to
$\cap_{k\in\ensuremath{\mathbb N}}\Fix \mathcal{T}_{P_k}=\zer(A+B_1+B_2)$ if and only if
every
weak limit of the sequence is a solution. Note that, since \eqref{e:metricconditions2} yields
$\|U_k^{-1}\|\leq \rho_k^{-1}$, we have
\begin{align}
\label{e:tozero1}
\|z^k-\mathcal{T}_{P_k}z^k\|_{U_k}^2&=\scal{U_{k}(z^k-\mathcal{T}_{P_k}z^k)}{z^k-
\mathcal{T}_{P_k}z^k}\nonumber\\
&\le \|U_{k}(z^k-\mathcal{T}_{P_k}z^k)\|\,\|z^k-\mathcal{T}_{P_k}z^k\|\nonumber\\
&\le\|U_k^{-1}\|\,\|U_{k}(z^k-\mathcal{T}_{P_k}z^k)\|^2\nonumber\\
&\le 4\|U_k\|^2\rho_k^{-1}\,\|z^k-\mathcal{Q}_{P_k}z^k\|^2\nonumber\\
&\le 4M^2\rho^{-1}\|z^k-\mathcal{Q}_{P_k}z^k\|^2\to 0.
\end{align}
Moreover, since $\mathcal{T}_{P_k}$ coincides
with $T_1$ defined in \eqref{e:defT1} involving the operators
$\mathcal{A}_k:=U_k^{-1}(A+S_k)$, $\mathcal{B}_{1,k}=U_k^{-1}B_1$,
and
$\mathcal{B}_{2,k}=U_k^{-1}(B_2-S_k)$ which are
monotone, $\rho_k\beta$-cocoercive, and monotone and
$\rho_k^{-1}K_k$-lipschitzian in $(\ensuremath{{\mathcal H}},\|\cdot\|_{U_k})$,
respectively,
we deduce from \eqref{eq:fejer}
that, for every
$z^{\ast}\in\zer(A+B_1+B_2)=\cap_{k\in\ensuremath{\mathbb N}}\zer(\mathcal{A}_k+
\mathcal{B}_{1,k}+\mathcal{B}_{2,k})$ we have
\begin{align}
\label{e:aux331}
&\rho_k^{-2}K_k^{2}(\chi_k^2-
1)\|z^k - J_{P_k^{-1} A}(z^k - P_k^{-1}(B_1+
B_2)z^k)\|_{U_k}^2 \nonumber\\
&+ \frac{2\beta\rho_k}{\chi_k}\left(\chi_k
- 1\right)\|U_k^{-1}(B_1z^k - B_1 z^\ast)\|_{U_k}^2\nonumber \\
&+\frac{\chi_k}{2\beta\rho_k}\left\|z^k-J_{P_k^{-1}A} (z^k - P_k^{-1}
(B_1z^k+ B_2z^k))-
\frac{2\beta\rho_k}{\chi_k}U_k^{-1}(B_1z^k-B_1z^*)\right\|_{U_k}^2\nonumber\\
&\leq \|z^k - z^\ast\|^2_{U_k} - \|\mathcal{T}_{P_k}z^k -
z^\ast\|^2_{U_k}\nonumber\\
&= -\|\mathcal{T}_{P_k}z^k - z^k\|_{U_k}^2 -
2\dotp{\mathcal{T}_{P_k}z^k - z^k,
z^\ast -
z^k}_{U_k}\nonumber\\
&\leq-\|\mathcal{T}_{P_k}z^k - z^k\|_{U_k}^2 +
2M\|\mathcal{T}_{P_k}z^k - z^k\|_{U_k}
\|z^\ast - z^k\|,
\end{align}
where
\begin{equation}
\label{e:chikbound}
\chi_k:=\frac{4\beta
\rho_k}{1+\sqrt{1+16\beta^2K_k^2}}\leq\rho_k\min\{2\beta,K_k^{-1}\}.
\end{equation}
By straightforward computations along the lines
of \eqref{e:auxiliary} and \eqref{e:aux221} we deduce that
\eqref{e:metricconditions2}
implies, for all $k\in\ensuremath{\mathbb N}$, $\chi_k\geq1+\varepsilon$, $K_k\le \rho_k\le \|U_k\|\le M$ and,
hence,
we deduce from \eqref{e:aux331} and \eqref{e:metricconditions2} that
\begin{multline}
\label{e:tozero2}
\frac{\varepsilon\rho K_k^2}{M^2}\|z^k - J_{P_k^{-1} A}(z^k -
P_k^{-1}(B_1+
B_2)z^k)\|^2+{\varepsilon\rho}\|U_k^{-1}(B_1z^k - B_1
z^\ast)\|^2\\
+\frac{\rho}{2\beta M}\left\|z^k-J_{P_k^{-1}A} (z^k - P_k^{-1}
(B_1z^k+ B_2z^k))-
\frac{2\beta\rho_k}{\chi_k}U_k^{-1}(B_1z^k-B_1z^*)\right\|^2\\
\leq-\|\mathcal{T}_{P_k}z^k - z^k\|_{U_k}^2 +
2M\|\mathcal{T}_{P_k}z^k - z^k\|_{U_k}
\|z^\ast - z^k\|.
\end{multline}
Now, let $z$ be a weak limit of some subsequence of $(z^{k})_{k\in\ensuremath{\mathbb N}}$ called
similarly for simplicity.
We have that $(\|z^\ast -
z^k\|)_{k\in\ensuremath{\mathbb N}}$ is bounded and, since \eqref{e:tozero1} implies
$\|z^k-\mathcal{T}_{P_k}z^k\|_{U_k}^2\to0$ we deduce from
\eqref{e:tozero2}
that{, by denoting $x^k:=J_{P_k^{-1} A}(z^k - P_k^{-1}(B_1+
B_2)z^k)$, $z^k-x^k\to0$. Hence, since, for every $x\in\ensuremath{{\mathcal H}}$,
\begin{equation}
\|S_kx\|\leq \|(S_k-B_2)x-(S_k-B_2)0\|+\|B_2x-B_20\|\leq
(K_k+L)\|x\|\leq\left(M+L\right)\|x\|,
\end{equation}
we have
\begin{align}
\label{e:goingto0}
\|P_k(z^k-x^k)\|&=\|(U_k+S_k)(z^k-x^k)\|\nonumber\\
&\leq \|U_k(z^k-x^k)\|+\|S_k(z^k-x^k)\|\nonumber\\
&\leq
(2M+L)\|z^k-x^k\|\to 0.
\end{align}
Finally, denoting by $B:=B_1+B_2$
we have
\begin{equation}
u^k:=P_k(z^k-x^k)-(Bz^k-Bx^k)\in (A+B)x^k,
\end{equation}
and since $z^k-x^k\to0$ and $B$ is continuous, it follows from
\eqref{e:goingto0} that $u^k\to 0$ and the result follows from the
weak-strong closedness of the maximally monotone operator $A+B$
and \cite[Theorem
5.33]{bauschke2017convex}. }
\end{proof}
\begin{remark}
\begin{enumerate}
\item Note that, in the particular case when $S_k\equiv 0$ and
$P_k=U_k=\gamma_k^{-1}V_k^{-1}$, we have from
\cite[Lemma~2.1]{combettes2013variable} that
$\rho_k=\gamma_k^{-1}\|V_k^{-1}\|$, and the conditions on the constants
involved in Theorem~\ref{cor:asymmetricnoinversion} reduce to
\begin{equation}
\label{e:condit_sym_case}
\frac{\|V_k^{-1}\|}{M}\leq\gamma_k\leq
\frac{\|V_k^{-1}\|}{\rho},\quad L^2\leq
\frac{\gamma_k^{-1}\|V_k^{-1}\|}{1+\varepsilon}
\left(\frac{\gamma_k^{-1}\|V_k^{-1}\|}{1+\varepsilon}-\frac{1}{2\beta}\right),
\end{equation}
for some $0<\rho<M$, for every $k\in\ensuremath{\mathbb N}$, and \eqref{eq:FBF-asymmetric-no-U}
reduces to
\begin{equation}
\label{eq:FBF-symmetric-no-U}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
x^k = J_{\gamma_kV_k A}(z^k - \gamma_k V_k(B_1 + B_2)z^k) \\[2mm]
z^{k+1} = z^k + \frac{\lambda_k}{\gamma_k}\left( V_k^{-1}(x^k - z^k)
+ \gamma_kB_2z^k - \gamma_kB_2x^k\right).
\end{array}
\right.
\end{array}
\end{equation}
If in addition we assume that $B_2=0$ and, hence $L=0$,
\eqref{e:condit_sym_case} reduces to $\gamma_k\leq
\|V_k^{-1}\|2\beta/(1+\varepsilon)$ which is more general than the
condition in \cite{combettes2012variable} and, moreover,
we do not need any {compatibility assumption on} $(V_k)_{k\in\ensuremath{\mathbb N}}$ for
achieving
convergence. Similarly, if $B_1=0$, and hence, we can take
$\beta\to\infty$, \eqref{e:condit_sym_case} reduces to $\gamma_k\leq
\|V_k^{-1}\|/(L(1+\varepsilon))$ which is more general than the condition in
\cite{vu2013variableFBF} and no additional assumption on
$(V_k)_{k\in\ensuremath{\mathbb N}}$ is needed. However, \eqref{eq:FBF-symmetric-no-U}
involves an additional computation of $V_k^{-1}$ in the last step of
each iteration $k\in\ensuremath{\mathbb N}$.
\item In the particular case when, for every $k\in\ensuremath{\mathbb N}$,
$P_k=U_k=\ensuremath{\operatorname{Id}}\,/\gamma_k$, where $(\gamma_k)_{k\in\ensuremath{\mathbb N}}$ is a real
sequence, we have $S_k\equiv0$, $K_k\equiv L$,
$\|U_k\|=\rho_k=1/\gamma_k$, and conditions
$\sup_{k\in\ensuremath{\mathbb N}}\|U_k\|<\infty$ and \eqref{e:metricconditions2} reduce
to
\begin{equation}
0<\inf_{k\in\ensuremath{\mathbb N}}\gamma_k\leq\sup_{k\in\ensuremath{\mathbb N}}\gamma_k<\chi,
\end{equation}
where $\chi$ is defined in \eqref{e:chi} and
\eqref{eq:FBF-asymmetric-no-U} reduces to
\begin{equation*}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
x^k =J_{\gamma_k A}(z^k - \gamma_k(B_1 + B_2)z^k) \\[2mm]
z^{k+1} = z^k + \eta_k\left( x^k +
\gamma_kB_2z^k - \gamma_kB_2x^k- z^k\right),
\end{array}
\right.
\end{array}
\end{equation*}
where $\eta_k\in[\varepsilon,1-\varepsilon]$, which is a relaxed version of
Theorem~\ref{t:1}.
\item As in Remark~\ref{rem:4}, by setting $B_1=0$ or $B_2=0$, we can
derive from \eqref{eq:FBF-asymmetric-no-U} versions of Tseng's
splitting and forward-backward algorithm with non self-adjoint linear operators
but without needing the inversion of $U$. In particular, the proximal
point algorithm in
\eqref{e:ppavm} reduces to
\begin{equation}
\label{e:ppavmwinv}
z^0\in\ensuremath{{\mathcal H}},\quad (\forall k\in\ensuremath{\mathbb N})\quad z^{k+1}=z^k+\lambda
P(J_{P^{-1} A}z^k-z^k)
\end{equation}
for $\lambda<\|U\|^{-1}$ and, in the case of \eqref{e:gradvm},
avoiding the inversion amounts to
coming back to the gradient-type method with the standard metric.
\end{enumerate}
\end{remark}
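To illustrate how \eqref{eq:FBF-asymmetric-no-U} avoids the inversion of $U_k$, the following
sketch (again in Python, with purely illustrative data and a fixed lower triangular
$P_k\equiv P$) computes $x^k$ through a Gauss--Seidel sweep, using the characterization
$Pz^k-(B_1+B_2)z^k-Px^k\in Ax^k$ of $x^k=J_{P^{-1}A}(z^k-P^{-1}(B_1+B_2)z^k)$ for a separable
$A$.
\begin{verbatim}
import numpy as np

# Illustrative data: A = normal cone of the box [-1,1]^n (separable),
# B1 = x -> x - b (beta = 1), B2 small skew-symmetric linear map,
# P lower triangular with positive diagonal; lam plays the role of lambda_k.
rng = np.random.default_rng(1)
n = 5
b = rng.standard_normal(n)
M = rng.standard_normal((n, n))
B2 = 0.05 * (M - M.T)
B1 = lambda x: x - b
beta = 1.0
P = np.eye(n) - 0.05 * np.tril(np.ones((n, n)), -1)   # lower triangular
U, S = (P + P.T) / 2, (P - P.T) / 2
rho = np.linalg.eigvalsh(U).min()
K = np.linalg.norm(B2 - S, 2)
assert K**2 < rho * (rho - 1 / (2 * beta))            # metric condition (eps close to 0)
lam = 0.9 / np.linalg.norm(U, 2)                      # lam in ]0, ||U||^{-1}[

z = np.zeros(n)
for _ in range(500):
    w = P @ z - (B1(z) + B2 @ z)
    x = np.empty(n)
    for i in range(n):                                # Gauss-Seidel sweep for x^k
        x[i] = np.clip((w[i] - P[i, :i] @ x[:i]) / P[i, i], -1.0, 1.0)
    z = z + lam * (P @ (x - z) + B2 @ z - B2 @ x)     # update without U^{-1}
\end{verbatim}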
\section{Primal-dual composite
monotone inclusions with non self-adjoint linear operators}
\label{sec:5}
In this section, we apply our algorithm to composite primal-dual monotone
inclusions involving a cocoercive and a lipschitzian monotone
operator.
\begin{problem}
\label{prob:mi}
Let ${\mathrm H}$ be a real Hilbert space, let ${\mathrm
X}\subset {\mathrm H}$ be closed and convex, let $z\in{\mathrm
H}$, let ${\mathrm A}\colon {\mathrm H}
\to
2^{\mathrm H}$ be maximally monotone, let
${\mathrm C}_1 \colon {\mathrm H} \to{\mathrm H}$ be
$\mu$-cocoercive, for some $\mu\in\left[0,+\infty\right[$, and let ${\mathrm C}_2
\colon {\mathrm H} \to{\mathrm H}$ be a monotone and
$\delta$-lipschitzian operator, for some $\delta\in\left[0,+\infty\right[$. Let $m\geq 1$ be an
integer,
and, for
every
$i\in\{1,\ldots,m\}$, let ${\mathrm G}_i$ be a real Hilbert
space, let $r_i\in{\mathrm G}_i$, let ${\mathrm B}_i\colon
{\mathrm G}_i \to 2^{{\mathrm G}_i}$ be maximally monotone,
let ${\mathrm D}_i\colon {\mathrm G}_i \to 2^{{\mathrm G}_i}$ be
maximally monotone and $\nu_i$-strongly monotone, for some
$\nu_i\in\left]0,+\infty\right[$, and suppose that ${\mathrm L}_i\colon {\mathrm
H} \to {\mathrm G}_i$ is a nonzero bounded linear operator. The
problem is to solve the primal inclusion
\begin{equation}
\label{e:miprimal}
\text{find }\quad {\mathrm x}\in {\mathrm X}\quad \text{such that
}\quad {\mathrm z}\in
{\mathrm A}{\mathrm x}+\sum_{i=1}^m{\mathrm
L}_i^*({\mathrm B}_i\ensuremath{\mbox{\small$\,\square\,$}} {\mathrm D}_i)(\mathrm
{L}_i\mathrm
{x}-\mathrm
{r}_i)+\mathrm{C}_1\mathrm{x}+\mathrm{C}_2\mathrm{x}
\end{equation}
together with the dual inclusion
\begin{align*}
&\text{find }\quad \mathrm{v}_1\in
\mathrm{G}_1,\ldots,\mathrm{v}_m\in
\mathrm{G}_m\quad \\
&\text{such that
}\quad
(\ensuremath{\exists\,} \mathrm{x}\in
\mathrm{X})\:
\begin{cases}
\mathrm{z}-\sum_{i=1}^m\mathrm{L}_i^*\mathrm{v}_i\in
\mathrm{Ax+C}_1\mathrm{x}+\mathrm{C}_2\mathrm{x}\\
(\forall i\in\{1,\ldots,m\})\:\:\mathrm{v}_i\in({\mathrm
B}_i\ensuremath{\mbox{\small$\,\square\,$}} {\mathrm D}_i)(\mathrm
{L}_i\mathrm
{x}-\mathrm
{r}_i) \numberthis\label{e:midual}
\end{cases}
\end{align*}
under the assumption that a solution exists.
\end{problem}
In the case when ${\mathrm X}={\mathrm H}$ and ${\mathrm
C}_2=0$,
Problem~\ref{prob:mi} is studied in
\cite{vu2013splitting}\footnote{Note that in \cite{vu2013splitting},
weights
$(\omega_i)_{1\leq i\leq m}$ multiplying operators
$(\mathrm{B}_i\ensuremath{\mbox{\small$\,\square\,$}}\mathrm{D}_i)_{1\leq i\leq m}$ are considered.
They can be retrieved in \eqref{e:miprimal} by considering
$(\omega_i\mathrm{B}_i)_{1\leq i\leq m}$ and
$(\omega_i\mathrm{D}_i)_{1\leq i\leq m}$ instead of
$(\mathrm{B}_i)_{1\leq i\leq m}$ and
$(\mathrm{D}_i)_{1\leq i\leq m}$. Then both formulations are
equivalent.}
and
models a large class of problems including
optimization problems, variational inequalities, equilibrium
problems, among others (see
\cite{briceno2011monotone+,he2012convergence,vu2013splitting,condat2013primal}
and
the
references therein). In \cite{vu2013splitting}, the author
rewrites \eqref{e:miprimal} and \eqref{e:midual} in the case
${\mathrm X}={\mathrm H}$ as
\begin{equation}
\label{e:mipd}
\text{find}\quad {z}\in\ensuremath{{\mathcal H}}\quad\text{such that
}\quad
{0}\in
{M}{z}+{S}{z}+Q{z},
\end{equation}
where $\ensuremath{{\mathcal H}}=\mathrm{H\times G}_1\times\cdots\times\mathrm{G}_m$,
${M}\colon\ensuremath{{\mathcal H}}\to
2^{\ensuremath{{\mathcal H}}}\colon (\mathrm{x},\mathrm{v}_1,\ldots,
\mathrm{v}_m)\mapsto (\mathrm{Ax-z})\times
(\mathrm{B}_1^{-1}\mathrm{v}_1+\mathrm{r}_1)\times\cdots\times
(\mathrm{B}_m^{-1}\mathrm{v}_m+\mathrm{r}_m)$ is maximally monotone,
${S}\colon
\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}\colon (\mathrm{x},\mathrm{v}_1,\ldots,
\mathrm{v}_m)\mapsto
(\sum_{i=1}^m\mathrm{L}_i^*\mathrm{v}_i,
-\mathrm{L}_1\mathrm{x},\ldots,-\mathrm{L}_m\mathrm{x})$ is skew linear,
and
$Q\colon
\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}\colon
(\mathrm{x},\mathrm{v}_1,\ldots,\mathrm{v}_m)\mapsto
(\mathrm{C}_1\mathrm{x},\mathrm{D}_1^{-1}\mathrm{v}_1,\ldots,
\mathrm{D}_m^{-1}\mathrm{v}_m)$ is cocoercive.
If $(\mathrm{x},\mathrm{v}_1,\ldots,\mathrm{v}_m)$ is a
solution in the primal-dual space $\ensuremath{{\mathcal H}}$ to \eqref{e:mipd}, then
$\mathrm{x}$ is a solution to \eqref{e:miprimal} and
$(\mathrm{v}_1,\ldots,\mathrm{v}_m)$ is a solution to \eqref{e:midual}. The author
provides an algorithm for solving
\eqref{e:miprimal}--\eqref{e:midual} in this particular instance,
which is an application of the {\em
forward-backward} splitting (FBS) applied to the inclusion
\begin{equation}
\label{e:mipdvm}
\text{find}\quad {z}\in\ensuremath{{\mathcal H}}\quad\text{such that
}\quad
{0}\in
V^{-1}({M}+{S}){z}+V^{-1}Q{z},
\end{equation}
where $V$ is a specific symmetric strongly monotone operator.
Under the metric $\scal{V\cdot}{\cdot}$, $V^{-1}({M}+{S})$ is
maximally monotone and $V^{-1}Q$ is cocoercive and, therefore,
the FBS converges weakly to a primal-dual solution.
In order to tackle the case $\mathrm{C}_2\neq 0$, we propose to
use the method in Theorem~\ref{cor:asymmetricnoinversion} for solving
$0\in Ax+B_1x+B_2x$ where $A=M$, $B_1=Q$, $B_2=S+C_2$,
and $C_2\colon(\mathrm{x},\mathrm{v}_1,\ldots,\mathrm{v}_m)\mapsto
(\mathrm{C}_2\mathrm{x},0,\ldots,
0)$
allowing, in that way, non self-adjoint linear operators which may vary among
iterations.
The following result provides the method thus obtained, where the dependence
of the non self-adjoint linear operators with respect to iterations has been avoided for
simplicity.
\begin{theorem}
\label{p:primdumi}
In Problem~\ref{prob:mi}, set $\mathrm{X}=\mathrm{H}$, set
$\mathrm{G}_0=\mathrm{H}$, for every $i\in\{0,1,\ldots,m\}$ and
$j\in\{0,\ldots, i\}$, let
$\mathrm{P}_{ij}\colon\mathrm{G}_j\to\mathrm{G}_i$ be a linear
operator satisfying
\begin{equation}
\label{e:condiPii}
(\forall
\mathrm{x}_{i}\in\mathrm{G}_{i})\quad
\scal{\mathrm{P}_{ii}\mathrm{x}_{i}}{\mathrm{x}_{i}}
\geq\varrho_i\|\mathrm{x}_{i}\|^2
\end{equation}
for some $\varrho_i>0$. Define the $(m+1)\times(m+1)$
symmetric real
matrices $\Upsilon$, $\Sigma$, and $\Delta$ by
\begin{align*}
(\forall i\in\{0,\ldots,m\})(\forall j\in\{0,\ldots,i\})\quad \Upsilon_{ij}&=
\begin{cases}
0,\quad&\text{if }i=j;\\
\|\mathrm{P}_{ij}\|/2,&\text{if }i>j,
\end{cases}
\quad \\
\Sigma_{ij}&=
\begin{cases}
\|\mathrm{P}_{ii}-\mathrm{P}_{ii}^*\|/2,\quad&\text{if }i=j;\\
\|\mathrm{L}_i+\mathrm{P}_{i0}/2\|,&\text{if }i\geq 1,\ j=0;\\
\|\mathrm{P}_{ij}\|/2,&\text{if }i>j>0,
\end{cases}\numberthis \label{e:defUpsilon}
\end{align*}
the entries for $j>i$ being obtained by symmetry, and
$\Delta={\rm Diag}(\varrho_0,\ldots,\varrho_m)$.
Assume that $\Delta-\Upsilon$
is positive definite with smallest eigenvalue $\rho>0$ and
that
\begin{equation}
\label{e:metricconditionpd}
(\|\Sigma\|_2+\delta)^2<\rho\left(\rho-\frac{1}{2\beta}{\mathrm{i}}ght),
\end{equation}
where $\beta=\min\{\mu,\nu_1,\ldots,\nu_m\}$.
Let
$M=\max_{i=0,\ldots,m}\|\mathrm{P}_{ii}\|+\|\Upsilon\|_2$, let
$\lambda\in]0,M^{-1}[$,
let $(\mathrm{x}^0,\mathrm{u}_1^0,\ldots,\mathrm{u}_m^0)\in
\mathrm{H\times
G}_1\times\cdots\times
\mathrm{G}_m$, and let $\{\mathrm{x}^k\}_{k\in\ensuremath{\mathbb N}}$
and
$\{\mathrm{u}_i^k\}_{k\in\ensuremath{\mathbb N}, 1\leq i\leq m}$ be the sequences
generated by the
following
routine:
for every $k\in\ensuremath{\mathbb N}$
\begin{equation}
\label{e:algomicomp1}
\hspace{-.2cm}
\begin{array}{l}
\left\lfloor
\begin{array}{l}
\!\!\mathrm{y}^k
\!=\!J_{
\mathrm{P}_{00}^{-1}\mathrm{A}}
\left(\mathrm{x}^k-\mathrm{P}_{00}^{-1}\bigg(\mathrm{C}_1\mathrm{x}^k+
\mathrm{C}_2\mathrm{x}^k+\sum_{i=1}^m
\mathrm{L}_i^*\mathrm{u}_i^k\bigg){\mathrm{i}}ght)\\
\!\!\mathrm{v}_1^k \!=\! J_{\mathrm{P}_{11}^{-1}\mathrm{B}_1^{-1}}
\!\left(\mathrm{u}_1^k -
\mathrm{P}_{11}^{-1}\bigg(\mathrm{D}_1^{-1}\mathrm{u}_1^k-
\mathrm{L}_1\mathrm{x}^k-\mathrm{P}_{10}(\mathrm{x}^k-\mathrm{y}^k)\bigg){\mathrm{i}}ght)\\
\!\!\mathrm{v}_2^k\!=\! J_{ \mathrm{P}_{22}^{-1}\mathrm{B}_2^{-1}}
\!\left(\mathrm{u}_2^k -
\mathrm{P}_{22}^{-1}\bigg(\mathrm{D}_2^{-1}\mathrm{u}_2^k-
\mathrm{L}_2\mathrm{x}^k-\mathrm{P}_{20}(\mathrm{x}^k-\mathrm{y}^k)
-\mathrm{P}_{21}(\mathrm{u}_1^k-\mathrm{v}_1^k)\bigg){\mathrm{i}}ght)\\
\vdots\\
\!\!\mathrm{v}_m^k \!=\!J_{\mathrm{P}_{mm}^{-1}\mathrm{B}_m^{-1}}
\!\!\left(\!\mathrm{u}_m^k\! -\!
\mathrm{P}_{mm}^{-1}\bigg(\!\mathrm{D}_m^{-1}\mathrm{u}_m^k\!-\!
\mathrm{L}_m\mathrm{x}^k\!-\!\mathrm{P}_{m0}(\mathrm{x}^k\!-\!\mathrm{y}^k)
\!-\!\sum_{j=1}^{m-1}\!\mathrm{P}_{mj}(\mathrm{u}_j^k\!-\!\mathrm{v}_j^k)\bigg)\!{\mathrm{i}}ght)\\
\!\!\mathrm{x}^{k+1}=
\mathrm{x}^k+\lambda\left(\mathrm{P}_{00}(\mathrm{y}^k-\mathrm{x}^k)+
\left(\mathrm{C}_2\mathrm{x}^k-\mathrm{C}_2\mathrm{y}^k+\sum_{i=1}^m
\mathrm{L}_i^*(\mathrm{u}_i^k-\mathrm{v}_i^k){\mathrm{i}}ght){\mathrm{i}}ght)\\
\! \!\mathrm{u}_1^{k+1}=\mathrm{u}_1^k+\lambda\Big(\mathrm{P}_{10}
(\mathrm{y}^k-\mathrm{x}^k)+
\mathrm{P}_{11}(\mathrm{v}_1^k-\mathrm{u}_1^k)-
\mathrm{L}_1(\mathrm{x}^k-\mathrm{y}^k)\Big)\\
\vdots\\
\!\! \mathrm{u}_m^{k+1}
=\mathrm{u}_m^k+\lambda\left(\mathrm{P}_{m0}(\mathrm{y}^k-\mathrm{x}^k)+
\sum_{j=1}^m\mathrm{P}_{mj}(\mathrm{v}_j^k-\mathrm{u}_j^k)-
\mathrm{L}_m(\mathrm{x}^k-\mathrm{y}^k){\mathrm{i}}ght).
\end{array}
{\mathrm{i}}ght.\\[2mm]
\end{array}
\end{equation}
Then there exists a primal-dual solution
$(\mathrm{x}^*,\mathrm{u}_1^*,\ldots,\mathrm{u}_m^*)\in
\mathrm{H\times
G}_1\times\cdots\times
\mathrm{G}_m$ to
Problem~\ref{prob:mi} such that $\mathrm{x}^k{\mathrm{i}}ghtharpoonup
\mathrm{x}^*$ and, for every $i\in\{1,\ldots,m\}$,
$\mathrm{u}_i^k{\mathrm{i}}ghtharpoonup \mathrm{u}_i^*$.
\end{theorem}
\begin{proof}
Consider the real Hilbert space $\ensuremath{{\mathcal H}}=\mathrm{H}\oplus
\mathrm{G}_1\oplus\cdots\oplus
\mathrm{G}_m$, whose scalar product and norm are denoted by
$\pscal{\cdot}{\cdot}$ and $|||\cdot|||$, respectively, and
$x=(\mathrm{x}_0,\mathrm{x}_1,\ldots,\mathrm{x}_m)$ and
$y=(\mathrm{y}_0,\mathrm{y}_1,\ldots,\mathrm{y}_m)$ denote generic
elements of $\ensuremath{{\mathcal H}}$. Similarly as in
\cite{vu2013splitting}, note that the set of
primal-dual solutions
$x^*=(\mathrm{x}^*,\mathrm{u}_1^*,\ldots,\mathrm{u}_m^*)\in\ensuremath{{\mathcal H}}$ to
Problem~\ref{prob:mi} in the case $\mathrm{X}=\mathrm{H}$ coincides
with the set of
solutions to the monotone inclusion
\begin{equation}
\text{find}\quad x\in\ensuremath{{\mathcal H}}\quad\text{such that}\quad 0\in Ax+B_1x+B_2x,
\end{equation}
where the operators ${A}\colon\ensuremath{{\mathcal H}}\to
2^{\ensuremath{{\mathcal H}}}$, $B_1\colon \ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$, and
${B}_2\colon \ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ {(${\mathrm{dom}\,} B_2=\ensuremath{{\mathcal H}}$)} defined by
\begin{equation}
\begin{cases}
A&\colon (\mathrm{x},\mathrm{v}_1,\ldots,
\mathrm{v}_m)\mapsto (\mathrm{Ax-z})\times
(\mathrm{B}_1^{-1}\mathrm{v}_1+\mathrm{r}_1)\times\cdots\times
(\mathrm{B}_m^{-1}\mathrm{v}_m+\mathrm{r}_m)\\
B_1&\colon
(\mathrm{x},\mathrm{v}_1,\ldots,\mathrm{v}_m)\mapsto
(\mathrm{C}_1\mathrm{x},\mathrm{D}_1^{-1}\mathrm{v}_1,\ldots,
\mathrm{D}_m^{-1}\mathrm{v}_m)\\
B_2&\colon (\mathrm{x},\mathrm{v}_1,\ldots,
\mathrm{v}_m)\mapsto
(\mathrm{C}_2\mathrm{x}+\sum_{i=1}^m\mathrm{L}_i^*\mathrm{v}_i,
-\mathrm{L}_1\mathrm{x},\ldots,-\mathrm{L}_m\mathrm{x}),
\end{cases}
\end{equation}
are maximally monotone,
$\beta$-cocoercive, and monotone-Lipschitz, respectively
{ (see \cite[Proposition~20.22\,and\,20.23]{bauschke2017convex} and
\cite[Eq. (3.12)]{vu2013splitting}).}
Now let
$P\colon\ensuremath{{\mathcal H}}\to \ensuremath{{\mathcal H}}$ defined by
\begin{equation}
\label{e:def_P}
P\colon x\mapsto \left(\mathrm{P}_{00}\mathrm{x}_0,
\mathrm{P}_{10}\mathrm{x}_0+\mathrm{P}_{11}\mathrm{x}_1,\ldots,\sum_{j=0}^m
\mathrm{P}_{mj}\mathrm{x}_j{\mathrm{i}}ght)=\left(\sum_{j=0}^i
\mathrm{P}_{ij}\mathrm{x}_j{\mathrm{i}}ght)_{i=0}^m.
\end{equation}
Then $P^*\colon x\mapsto(\sum_{j=i}^m
\mathrm{P}_{ji}^*\mathrm{x}_j)_{i=0}^m$ and
$U\colon\ensuremath{{\mathcal H}}\to \ensuremath{{\mathcal H}}$ and $S\colon\ensuremath{{\mathcal H}}\to \ensuremath{{\mathcal H}}$ defined by
\begin{align}
\label{e:def_U}
U&\colon x\mapsto \left(\frac{1}{2}\sum_{j=0}^{i-1}
\mathrm{P}_{ij}\mathrm{x}_j+
\left(\frac{\mathrm{P}_{ii}+\mathrm{P}_{ii}^*}{2}{\mathrm{i}}ght)\mathrm{x}_i+
\frac{1}{2}\sum_{j=i+1}^m
\mathrm{P}_{ji}^*\mathrm{x}_j{\mathrm{i}}ght)_{i=0}^m\\
\label{e:def_S}
S&\colon x\mapsto \left(\frac{1}{2}\sum_{j=0}^{i-1}
\mathrm{P}_{ij}\mathrm{x}_j+
\left(\frac{\mathrm{P}_{ii}-\mathrm{P}_{ii}^*}{2}{\mathrm{i}}ght)\mathrm{x}_i-
\frac{1}{2}\sum_{j=i+1}^m
\mathrm{P}_{ji}^*\mathrm{x}_j{\mathrm{i}}ght)_{i=0}^m
\end{align}
are the self-adjoint and skew components of $P$, respectively, satisfying $P=U+S$.
Moreover, for every
$x=(\mathrm{x}_0,\mathrm{x}_1,\ldots,\mathrm{x}_m)$
in $\ensuremath{{\mathcal H}}$, we have
\begin{align}
\label{e:Ustronglymon}
\pscal{Ux}{x}&=\sum_{i=0}^m\frac{1}{2}\sum_{j=0}^{i-1}
\scal{\mathrm{P}_{ij}\mathrm{x}_j}{\mathrm{x}_i}+
\scal{\mathrm{P}_{ii}\mathrm{x}_i}{\mathrm{x}_i}+
\frac{1}{2}\sum_{j=i+1}^m
\scal{\mathrm{P}_{ji}^*\mathrm{x}_j}{\mathrm{x}_i}\nonumber\\
&=\sum_{i=0}^m\scal{\mathrm{P}_{ii}\mathrm{x}_i}{\mathrm{x}_i}
+\sum_{i=1}^m\sum_{j=0}^{i-1}\scal{\mathrm{P}_{ij}\mathrm{x}_j}{\mathrm{x}_i}
\nonumber\\
&\geq\sum_{i=0}^m\varrho_i\|\mathrm{x}_i\|^2-
\sum_{i=1}^m\sum_{j=0}^{i-1}\|\mathrm{P}_{ij}\|\,\|\mathrm{x}_i\|\,\|
\mathrm{x}_j\|
\nonumber\\
&=\xi\cdot(\Delta-\Upsilon)\xi\geq\rho|\xi|^2=\rho\,|||x|||^2,
\end{align}
where $\xi:=(\|\mathrm{x}_i\|)_{i=0}^m\in\mathbb{R}^{m+1}$, $\Upsilon$ is
defined in \eqref{e:defUpsilon}, and $\rho$ is the smallest (strictly
positive)
eigenvalue of $\Delta-\Upsilon$. In addition, we can write
$B_2-S=C_2+R$,
where $C_2\colon x\mapsto
(\mathrm{C}_2\mathrm{x},0,\ldots,0)$ is monotone and
$\delta$-lipschitzian, and $R$ is a skew
linear operator satisfying, for every
$x=(\mathrm{x}_0,\mathrm{x}_1,\ldots,\mathrm{x}_m)\in\ensuremath{{\mathcal H}}$,
$Rx=(\sum_{j=0}^mR_{i,j}\mathrm{x}_j)_{0\leq i\leq m}$, where the
operators
$R_{i,j}\colon\mathrm{G}_j\to \mathrm{G}_i$ are defined by
$R_{i,j}=-\mathrm{P}_{ij}/2$ if $i>j>0$,
$R_{i,j}=-(\mathrm{L}_{i}+\mathrm{P}_{i0}/2)$ if $i>j=0$,
$R_{i,i}=(\mathrm{P}_{ii}^*-\mathrm{P}_{ii})/2$, and the other
components follow from the skew property of $R$.
Therefore,
\begin{align}
\label{e:desigS}
|||Rx|||^2\!=\!\sum_{i=0}^m\left\|\sum_{j=0}^mR_{i,j}\mathrm{x}_j{\mathrm{i}}ght\|^2
\!\!\!\leq\sum_{i=0}^m\left(\sum_{j=0}^m\|R_{i,j}\|\,\|\mathrm{x}_j\|{\mathrm{i}}ght)^{\!\!\!2}
\!\!=|\Sigma\xi|^2\!\leq\|\Sigma\|^2_2|\xi|^2\!=\|\Sigma\|^2_2|||x|||^2,
\end{align}
from which we obtain that
$B_2-S$ is $(\delta+\|\Sigma\|_2)$-lipschitzian.
Altogether, by noting that, for every $x\in\ensuremath{{\mathcal H}}$,
$|||Ux|||\leq M\,|||x|||$, all the
hypotheses of Theorem~\ref{cor:asymmetricnoinversion} hold in this
instance and by developing \eqref{eq:FBF-asymmetric-no-U} for this
specific choices of $A$, $B_1$, $B_2$, $P$, $\gamma$, and setting,
for every $k\in\ensuremath{\mathbb N}$,
$z^k=(\mathrm{x}^k,\mathrm{u}_1^k,\ldots,\mathrm{u}_m^k)$ and
$x^k=(\mathrm{y}^k,\mathrm{v}_1^k,\ldots,\mathrm{v}_m^k)$, we obtain
\eqref{e:algomicomp1} after straightforward computations and using
\begin{equation}
{ x^k=J_{ P^{-1}A}(z^k-P^{-1}(B_1z^k+B_2z^k))\quad
\Leftrightarrow\quad P(z^k-x^k)-(B_1z^k+B_2z^k)\in Ax^k.}
\end{equation}
The result follows, hence, as a consequence of
Theorem~\ref{cor:asymmetricnoinversion}.
\end{proof}
\begin{remark}
\begin{enumerate}
\item As in Theorem~\ref{cor:asymmetricnoinversion}, the algorithm in
Theorem~\ref{p:primdumi} allows for linear operators
$(\mathrm{P}_{ij})_{0\leq i,j\leq m}$ depending on the iteration,
whenever \eqref{e:metricconditions2} holds for the corresponding
operators defined in
\eqref{e:def_P}--\eqref{e:def_S}. We omit this generalization in
Theorem~\ref{p:primdumi} for the sake of simplicity.
\item In the particular case when, for every $i\in\{1,\ldots,m\}$,
$\textrm{B}_i=\widetilde{\textrm{B}}_i\ensuremath{\mbox{\small$\,\square\,$}} \textrm{M}_i$, where
$\textrm{M}_i$ is such that $\textrm{M}_i^{-1}$ is monotone and
$\sigma_i$-Lipschitz, for some $\sigma_i>0$, Problem~\ref{prob:mi}
can be solved
in a similar way if, instead of $B_2$ and $\delta$, we consider
$\widetilde{B}_2\colon
(\mathrm{x},\mathrm{v}_1,\ldots,
\mathrm{v}_m)\mapsto
(\mathrm{C}_2\mathrm{x}+\sum_{i=1}^m\mathrm{L}_i^*\mathrm{v}_i,
\mathrm{M}_1^{-1}\mathrm{v}_1-\mathrm{L}_1\mathrm{x},\ldots,
\mathrm{M}_m^{-1}\mathrm{v}_m-\mathrm{L}_m\mathrm{x})$ and
$\widetilde{\delta}=\max\{\delta,\sigma_1,\ldots,\sigma_m\}$. Again,
for the sake of
simplicity, this extension has not been considered in
Problem~\ref{prob:mi}.
\item If the inversion of the matrix $U$ is not difficult or no
variable metric is used and the projection onto
$\mathrm{X}\subset\mathrm{H}$ is computable, we can also use
Theorem~\ref{thm:asymmetric_metric} for solving Problem~\ref{prob:mi}
in the general case $\mathrm{X}\subset\mathrm{H}$.
\end{enumerate}
\end{remark}
\begin{corollary}
\label{c:pd}
In Problem~\ref{prob:mi}, let $\theta\in[-1,1]$, let
$\sigma_0,\ldots,\sigma_m$ be strictly positive real numbers, and let
$\Omega$ be the
$(m+1)\times(m+1)$ symmetric real matrix given by
\begin{equation}
\label{e:Omega}
(\forall i,j\in\{0,\ldots,m\})\quad \Omega_{ij}=
\begin{cases}
\frac{1}{\sigma_i},\quad&\text{if }i=j;\\
-(\frac{1+\theta}{2})\|\mathrm{L}_i\|,&\text{if }0=j<i;\\
0,&\text{if }0<j<i.
\end{cases}
\end{equation}
Assume that $\Omega$ is positive definite with $\rho>0$ its
smallest eigenvalue and that
\begin{equation}
\label{e:conditioncor}
\left(\delta+\left(\frac{1-\theta}{2}{\mathrm{i}}ght)
\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}{\mathrm{i}}ght)^2
<\rho\left(\rho-\frac{1}{2\beta}{\mathrm{i}}ght),
\end{equation}
where $\beta=\min\{\mu,\nu_1,\ldots,\nu_m\}$.
Let
$M=(\min\{\sigma_0,\ldots,\sigma_m\})^{-1}+
(\frac{1+\theta}{2})\sqrt{\sum_{i=1}^m\|\mathrm{L}_{i}\|^2}$,
let
$\lambda\in]0,M^{-1}[$,
let $(\mathrm{x}^0,\mathrm{u}_1^0,\ldots,\mathrm{u}_m^0)\in
\mathrm{H\times
G}_1\times\cdots\times
\mathrm{G}_m$, and let $\{\mathrm{x}^k\}_{k\in\ensuremath{\mathbb N}}$
and
$\{\mathrm{u}_i^k\}_{k\in\ensuremath{\mathbb N}, 1\leq i\leq m}$ be the sequences
generated by the
following
routine:
\begin{equation}
\label{e:genVU}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
\mathrm{y}^k
=J_{\sigma_0\mathrm{A}}
\left(\mathrm{x}^k-\sigma_0\left(\mathrm{C}_1\mathrm{x}^k+
\mathrm{C}_2\mathrm{x}^k+\sum_{i=1}^m
\mathrm{L}_i^*\mathrm{u}_i^k{\mathrm{i}}ght){\mathrm{i}}ght)\\[2mm]
\text{For every } i=1,\ldots,m\\
\left\lfloor
\mathrm{v}_i^k = J_{
\sigma_i\mathrm{B}_i^{-1}}\left(\mathrm{u}_i^k -
\sigma_i\left(\mathrm{D}_i^{-1}\mathrm{u}_i^k-
\mathrm{L}_i(\mathrm{y}^k+\theta(\mathrm{y}^k
-\mathrm{x}^k)){\mathrm{i}}ght){\mathrm{i}}ght)
{\mathrm{i}}ght.\\[2mm]
\mathrm{x}^{k+1}=
\mathrm{x}^k+\frac{\lambda}{\sigma_0}\left(\mathrm{y}^k-\mathrm{x}^k+
\sigma_0\left(\mathrm{C}_2\mathrm{x}^k-\mathrm{C}_2\mathrm{y}^k+\sum_{i=1}^m
\mathrm{L}_i^*(\mathrm{u}_i^k-\mathrm{v}_i^k){\mathrm{i}}ght){\mathrm{i}}ght)\\
\text{For every } i=1,\ldots,m\\
\left\lfloor
\mathrm{u}_i^{k+1}=\mathrm{u}_i^k+\frac{\lambda}{\sigma_i}
\left(\mathrm{v}_i^k-\mathrm{u}_i^k-
\sigma_i\theta\mathrm{L}_i(\mathrm{y}^k-\mathrm{x}^k){\mathrm{i}}ght),
{\mathrm{i}}ght.\\[2mm]
\end{array}
{\mathrm{i}}ght.\\[2mm]
\end{array}
\end{equation}
Then there exists a primal-dual solution
$(\mathrm{x}^*,\mathrm{u}_1^*,\ldots,\mathrm{u}_m^*)\in
\mathrm{H\times
G}_1\times\cdots\times
\mathrm{G}_m$ to
Problem~\ref{prob:mi} such that $\mathrm{x}^k{\mathrm{i}}ghtharpoonup
\mathrm{x}^*$ and, for every $i\in\{1,\ldots,m\}$,
$\mathrm{u}_i^k{\mathrm{i}}ghtharpoonup \mathrm{u}_i^*$.
\end{corollary}
\begin{proof}
This result is a consequence of Theorem~\ref{p:primdumi} when, for
every
$i\in\{0,\ldots,m\}$, $\mathrm{P}_{ii}=\ensuremath{\operatorname{Id}}\,/\sigma_i$, for
every $i\in\{1,\ldots,m\}$, $\mathrm{P}_{i0}=-(1+\theta)\mathrm{L}_{i}$,
and, for every $0<j<i$, $\mathrm{P}_{ij}=0$. Indeed,
we have from \eqref{e:condiPii} that
$\varrho_i=1/\sigma_i$, and from \eqref{e:defUpsilon} we deduce that,
for every $x=(\xi_i)_{0\leq i\leq m}\in\mathbb{R}^{m+1}$,
\begin{equation}
\|\Sigma
x\|^2=\left(\frac{1-\theta}{2}\right)^2\left[\left(\sum_{i=1}^m\|\mathrm{L}_i\|\xi_i\right)^2+\xi_0^2
\sum_{i=1}^m\|\mathrm{L}_i\|^2\right]\leq
\left(\frac{1-\theta}{2}\right)^2\left(\sum_{i=1}^m\|\mathrm{L}_i\|^2\right)\|x\|^2,
\end{equation}
from which we obtain
$\|\Sigma\|_2\leq
(\frac{1-\theta}{2})\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}$. Actually,
we have the equality by choosing $\bar{x}=(\bar{\xi}_i)_{0\leq i\leq
m}$ defined
by
$\bar{\xi}_i=\|\mathrm{L}_i\|/\sqrt{\sum_{j=1}^m\|\mathrm{L}_j\|^2}$
for
every $i\in\{1,\ldots,m\}$ and $\bar{\xi}_0=0$, which satisfies
$\|\bar{x}\|=1$
and $\|\Sigma
\bar{x}\|=(\frac{1-\theta}{2})\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}$.
Therefore, condition
\eqref{e:metricconditionpd} reduces to \eqref{e:conditioncor}.
On the other hand, from \eqref{e:defUpsilon} we deduce that
$\Omega=\Delta-\Upsilon$ and
$\Upsilon=(\frac{1+\theta}{1-\theta})\Sigma$, which yields
$\|\Upsilon\|_2=(\frac{1+\theta}{2})\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}$
and $\max_{i=0,\ldots,m}\|\mathrm{P}_{ii}\|
=(\min\{\sigma_0,\ldots,\sigma_m\})^{-1}$. Altogether, since
\eqref{e:genVU} is exactly \eqref{e:algomicomp1} for this choice of
matrices $(\mathrm{P}_{i,j})_{0\leq i,j,\leq m}$, the result is a
consequence of Theorem~\ref{p:primdumi}.
\end{proof}
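To fix ideas, the following Python sketch implements iteration \eqref{e:genVU}
with $\theta=1$ and $m=1$ for the illustrative problem
$\min_{\mathrm{x}\geq 0}\frac12\|\mathrm{x}-\mathrm{b}\|^2+
\lambda_1\|\mathrm{L}\mathrm{x}\|_1$, i.e., $\mathrm{A}$ is the normal cone of
the nonnegative orthant, $\mathrm{C}_1=\nabla\frac12\|\cdot-\mathrm{b}\|^2$
($\mu=1$), $\mathrm{C}_2=0$, $\mathrm{B}_1=\partial(\lambda_1\|\cdot\|_1)$, and
$\mathrm{D}_1^{-1}=0$ (so $\beta=1$). The data and the stepsizes below are
assumptions made only for this example; the stepsize check follows
\eqref{e:Omega} and \eqref{e:conditioncor}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, p = 60, 40
L = rng.standard_normal((p, d)) / np.sqrt(d)
b = rng.standard_normal(d)
lam1 = 0.1
normL = np.linalg.norm(L, 2)
beta = 1.0

# sigma0 = sigma1 = sigma: smallest eigenvalue of Omega is 1/sigma - ||L||,
# and (e:conditioncor) with delta = 0 asks rho > 1/(2*beta).
sigma = 0.9 / (normL + 1.0 / (2.0 * beta))
rho = 1.0 / sigma - normL
assert rho > 1.0 / (2.0 * beta)
M = 1.0 / sigma + normL                  # theta = 1
lam = 0.9 / M                            # relaxation in ]0, M^{-1}[

x, u = np.zeros(d), np.zeros(p)
for k in range(5000):
    y = np.maximum(x - sigma * (x - b + L.T @ u), 0.0)
    # prox of sigma*(lam1 ||.||_1)^* is the projection onto [-lam1, lam1]^p
    v = np.clip(u + sigma * (L @ (2 * y - x)), -lam1, lam1)
    x_new = x + (lam / sigma) * (y - x + sigma * (L.T @ (u - v)))
    u_new = u + (lam / sigma) * (v - u - sigma * (L @ (y - x)))
    x, u = x_new, u_new

print(0.5 * np.sum((x - b) ** 2) + lam1 * np.sum(np.abs(L @ x)))
\end{verbatim}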
\begin{remark}
\label{r:pd2}
\begin{enumerate}
\item\label{r:pd21} Note that the condition $\rho>0$, where $\rho$ is the smallest
eigenvalue of the matrix $\Omega$ defined in \eqref{e:Omega}, is
guaranteed if
$\sigma_0(\frac{1+\theta}{2})^2\sum_{i=1}^m\sigma_i\|\mathrm{L}_i\|^2<1$.
Indeed, by repeating the procedure in \cite[(3.20)]{vu2013splitting}
in finite dimension we obtain, for every $x=(\xi_i)_{0\leq i\leq
m}\in\mathbb{R}^{m+1}$,
\begin{align}
\hspace{-.5cm}x\cdot\Omega x&=
\sum_{i=0}^m\frac{\xi_i^2}{\sigma_i}-\sum_{i=1}^m
2\left(\frac{1+\theta}{2}{\mathrm{i}}ght)\xi_0\|\mathrm{L}_i\|\xi_i\nonumber\\
&=\sum_{i=0}^m\frac{\xi_i^2}{\sigma_i}-\left(\frac{1+\theta}{2}\right)
\sum_{i=1}^m2
\frac{\sqrt{\sigma_i}\,\|\mathrm{L}_i\|\,\xi_0}{(\sigma_0\sum_{j=1}^m
\sigma_j\|\mathrm{L}_j\|^2)^{1/4}}
\frac{(\sigma_0\sum_{j=1}^m
\sigma_j\|\mathrm{L}_j\|^2)^{1/4}\xi_i}{\sqrt{\sigma_i}}\label{e:necess}\\
&\ge\sum_{i=0}^m\frac{\xi_i^2}{\sigma_i}-\left(\frac{1+\theta}{2}{\mathrm{i}}ght)
\left(\frac{\xi_0^2}{\sqrt{\sigma_0}}\sqrt{\sum_{j=1}^m\sigma_j\|\mathrm{L}_j\|^2}
+\sqrt{\sigma_0\sum_{j=1}^m\sigma_j\|\mathrm{L}_j\|^2}
\sum_{j=1}^m\frac{\xi_j^2}{\sigma_j}{\mathrm{i}}ght)\nonumber\\
&=\left(1-\left(\frac{1+\theta}{2}{\mathrm{i}}ght)\sqrt{\sigma_0\sum_{j=1}^m\sigma_j\|\mathrm{L}_j\|^2}{\mathrm{i}}ght)
\sum_{i=0}^m\frac{\xi_i^2}{\sigma_i}\nonumber\\
&\ge \rho_v\|x\|^2\label{e:ineqrho}
\end{align}
with
\begin{equation}
\label{e:rhov}
\rho_v=\max\{\sigma_0,\ldots,\sigma_m\}^{-1}\left(1-
\left(\frac{1+\theta}{2}{\mathrm{i}}ght)
\sqrt{\sigma_0\sum_{j=1}^m\sigma_j\|\mathrm{L}_j\|^2}{\mathrm{i}}ght).
\end{equation}
Note that $\rho_v$ coincides with the constant obtained in
\cite{vu2013splitting} in the case $\theta=1$ and we have $\rho\geq
\rho_v$. Moreover,
$\sigma_0(\frac{1+\theta}{2})^2\sum_{i=1}^m\sigma_i\|\mathrm{L}_i\|^2<1$
is also necessary for obtaining $\rho>0$, since in \eqref{e:necess}
we can choose a particular vector $x$ attaining the equality. Of course, this choice
does not guarantee equality in the last inequality in
\eqref{e:ineqrho} as well and, hence, $\rho\geq\rho_v$ in general.
\item\label{r:pd22} If we set $\theta=1$ and
$\mathrm{C}_2=0$ and, hence, $\delta=0$,
\eqref{e:conditioncor} reduces to $2\beta\rho>1$
and we obtain from \eqref{e:genVU} a variant of
\cite[Theorem~3.1]{vu2013splitting}
including an extra forward step involving only the operators
$(\mathrm{L}_i)_{1\leq i\leq m}$.
However, our condition is less restrictive, since $\rho\geq\rho_v$,
where $\rho_v$ is defined in \eqref{e:rhov} and it is obtained in
\cite{vu2013splitting} as we have seen in the last remark. Actually,
in the particular case when $m=1$,
$\mathrm{L}_1=\alpha\ensuremath{\operatorname{Id}}\,$, $\sigma_0=\eta^2\sigma_1=:\eta\sigma$ for
some $0<\eta<1$, the constants $\rho_v$ and $\rho$ reduce to
$$\rho_v(\eta)=\frac{1-\eta\sigma\alpha}{\sigma}\quad\text{and}\quad
\rho(\eta)=\frac{1}{2\sigma}\left(\frac{\eta^2+1}{\eta^2}-\sqrt{\left(\frac{\eta^2-1}
{\eta^2}\right)^2+4\alpha^2\sigma^2}\right),$$
respectively. By straightforward computations we deduce that
$\rho(\eta)>\rho_v(\eta)$ for every
$0<\eta<(\alpha\sigma)^{-1}$,
improve the condition $2\beta\rho>1$, needed in both approaches.
Moreover, since Theorem~\ref{p:primdumi} allows for non self-adjoint
linear operators varying among iterations, we can permit variable
stepsizes $\sigma_{0}^k,\ldots,\sigma_{m}^k$ in
Theorem~\ref{p:primdumi}, which could not
be used in \cite{vu2013splitting} because of the variable metric
framework.
\item\label{r:pd23} In the particular case when $\mathrm{C}_1=0$ and
$\mathrm{C}_2=0$ we can take $\beta\to+\infty$ and, hence, condition
\eqref{e:conditioncor} reduces to
\begin{equation}
\label{e:conditioncorem}
\left(\frac{1-\theta}{2}{\mathrm{i}}ght)
\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}
<\rho,
\end{equation}
which is stronger than the condition in \cite{he2012convergence}
for the case $m=1$, in which it is only needed that $\rho>0$ for
achieving convergence. Indeed, in the case $m=1$,
\eqref{e:conditioncorem} reduces to
$2-2\theta\sigma_0\sigma_1\|\mathrm{L}_1\|^2>
(1-\theta)(\sigma_0+\sigma_1)\|\mathrm{L}_1\|$,
which coincides with the condition in \cite{he2012convergence} in the
case $\theta=1$, but they differ if $\theta\neq 1$
because of the extra forward step coming from the Tseng's splitting
framework. Actually, in the case $\theta=0$ it reduces to
$\sigma_0+\sigma_1<2/\|\mathrm{L}_1\|$ and in the case $\theta=-1$
we obtain the stronger condition
$\max\{\sigma_0,\sigma_1\}<1/\|\mathrm{L}_1\|$.
Anyway, in our context we can use constants
$\sigma_0^k,\ldots,\sigma_m^k$ varying among iterations and we have a
variant of the method in \cite{he2012convergence} and, in the case
when $\theta=1$, of Chambolle-Pock's splitting
\cite{chambolle2011first}.
\item\label{r:pd24} Since $\rho_v$ defined in \eqref{e:rhov} satisfies $\rho_v\leq
\rho$ in the case when $\mathrm{C}_1=\mathrm{C}_2=0$, a sufficient condition for
guaranteeing \eqref{e:conditioncorem} is
$(1-\theta)\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}/2<\rho_v$, which is implied
by the condition
\begin{equation}
\max\{\sigma_0,\ldots,\sigma_m\}\sqrt{\sum_{i=1}^m\|\mathrm{L}_i\|^2}<1.
\end{equation}
\item\label{r:pd25} Consider the case of composite optimization problems, i.e.,
when $\mathrm{A}=\partial \mathrm{f}$, $\mathrm{C}_1=\nabla \mathrm{h}$
and, for every $i=1,\ldots,m$, $\mathrm{B}_i=\partial
\mathrm{g}_i$ and $\mathrm{D}_i=\partial\mathrm{\ell}_i$, where, for
every
$i=1,\ldots,m$,
$\mathrm{f}\colon\mathrm{H}\to\ensuremath{\left]-\infty,+\infty\right]}$ and
$\mathrm{g}_i\colon\mathrm{G}_i\to\ensuremath{\left]-\infty,+\infty\right]}$ are proper lower
semicontinuous and convex functions and
$\mathrm{h}\colon\mathrm{H}\to\mathbb{R}$ is differentiable, convex, with
$\beta^{-1}$-Lipschitz gradient. In this case, any solution to
Problem~\ref{prob:mi} when $\mathrm{C}_2=0$ is a solution to the
primal-dual optimization problems
\begin{equation}
\min_{\mathrm{x}\in\mathrm{H}}{\mathrm{f}(\mathrm{x})+\mathrm{h}(\mathrm{x})
+\sum_{i=1}^m(\mathrm{g}_i\ensuremath{\mbox{\small$\,\square\,$}}\mathrm{\ell}_i)(\mathrm{L}_i\mathrm{x})}
\end{equation}
and
\begin{equation}
\min_{\mathrm{u}_1\in\mathrm{G}_1,\ldots,\mathrm{u}_m\in\mathrm{G}_m}
{(\mathrm{f}^*\ensuremath{\mbox{\small$\,\square\,$}}\mathrm{h}^*)
\left(-\sum_{i=1}^m\mathrm{L}_i^*\mathrm{u}_i{\mathrm{i}}ght)+
\sum_{i=1}^m\mathrm{g}_i^*(\mathrm{u}_i)+\mathrm{\ell}_i^*(\mathrm{u}_i)},
\end{equation}
and the equivalence holds under some qualification condition. In this particular case,
\eqref{e:genVU} reduces to
\begin{equation}
\label{e:genVUopti}
\begin{array}{l}
\left\lfloor
\begin{array}{l}
\mathrm{y}^k
=\mathbf{prox}_{\sigma_0\mathrm{f}}
\left(\mathrm{x}^k-\sigma_0\left(\nabla\mathrm{h}(\mathrm{x}^k)+
\sum_{i=1}^m
\mathrm{L}_i^*\mathrm{u}_i^k{\mathrm{i}}ght){\mathrm{i}}ght)\\[2mm]
\text{For every } i=1,\ldots,m\\
\left\lfloor
\mathrm{v}_i^k = \mathbf{prox}_{
\sigma_i\mathrm{g}_i^*}\left(\mathrm{u}_i^k -
\sigma_i\left(\nabla\mathrm{\ell}_i^*(\mathrm{u}_i^k)-
\mathrm{L}_i(\mathrm{y}^k+\theta(\mathrm{y}^k
-\mathrm{x}^k)){\mathrm{i}}ght){\mathrm{i}}ght)
{\mathrm{i}}ght.\\[2mm]
\mathrm{x}^{k+1}=
\mathrm{x}^k+\frac{\lambda}{\sigma_0}\left(\mathrm{y}^k-\mathrm{x}^k+
\sigma_0\sum_{i=1}^m
\mathrm{L}_i^*(\mathrm{u}_i^k-\mathrm{v}_i^k){\mathrm{i}}ght)\\
\text{For every } i=1,\ldots,m\\
\left\lfloor
\mathrm{u}_i^{k+1}=\mathrm{u}_i^k+\frac{\lambda}{\sigma_i}
\left(\mathrm{v}_i^k-\mathrm{u}_i^k-
\sigma_i\theta\mathrm{L}_i(\mathrm{y}^k-\mathrm{x}^k){\mathrm{i}}ght),
{\mathrm{i}}ght.\\[2mm]
\end{array}
{\mathrm{i}}ght.\\[2mm]
\end{array}
\end{equation}
which, in the case $m=1$, is very similar to the method proposed in
\cite[Algorithm~3]{patrinos2016asym} (by taking $\mu=(1-\theta)^{-1}$ for
$\theta\in[-1,0]$), with a slightly different choice of the
parameters involved in the last two lines in \eqref{e:genVUopti}. {On the other
hand, in the case when $\ell=0$ and $\theta=1$, it differs from
\cite[Algorithm~5.1]{condat2013primal} in the last two steps, in which linear operators are
involved in our case. } An advantage of our
method, even in the case $m=1$, is that the stepsizes $\sigma_0$ and $\sigma_1$
may vary among iterations.
\end{enumerate}
\end{remark}
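The constants appearing in Corollary~\ref{c:pd} and in the previous remark are
easy to evaluate numerically. The following Python sketch builds the matrix
$\Omega$ in \eqref{e:Omega}, computes its smallest eigenvalue $\rho$, the lower
bound $\rho_v$ in \eqref{e:rhov}, the constant $M$, and checks condition
\eqref{e:conditioncor}; the sample values of $\theta$, $(\sigma_i)_{0\leq i\leq m}$,
and $(\|\mathrm{L}_i\|)_{1\leq i\leq m}$ are arbitrary and serve only as an example.
\begin{verbatim}
import numpy as np

def pd_constants(theta, sigma, normsL, beta=np.inf, delta=0.0):
    sigma = np.asarray(sigma, dtype=float)    # (sigma_0, ..., sigma_m)
    normsL = np.asarray(normsL, dtype=float)  # (||L_1||, ..., ||L_m||)
    Omega = np.diag(1.0 / sigma)
    Omega[1:, 0] = -0.5 * (1 + theta) * normsL
    Omega[0, 1:] = -0.5 * (1 + theta) * normsL
    rho = np.linalg.eigvalsh(Omega).min()
    rho_v = (1.0 - 0.5 * (1 + theta)
             * np.sqrt(sigma[0] * np.sum(sigma[1:] * normsL ** 2))) / sigma.max()
    M = 1.0 / sigma.min() + 0.5 * (1 + theta) * np.sqrt(np.sum(normsL ** 2))
    lhs = (delta + 0.5 * (1 - theta) * np.sqrt(np.sum(normsL ** 2))) ** 2
    ok = (rho > 0) and (lhs < rho * (rho - 1.0 / (2.0 * beta)))
    return rho, rho_v, M, ok

print(pd_constants(theta=1.0, sigma=[0.4, 0.2, 0.2], normsL=[1.0, 2.0], beta=1.0))
\end{verbatim}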
\section{Applications}
\label{sec:6}
In this section we explore {four} applications for illustrating the advantages and
flexibility
of the methods proposed in the previous sections. In the first application, we apply
Theorem~\ref{t:1} to the obstacle problem in PDE's in which dropping the extra forward
step decreases the computational cost per iteration because
the computation of an extra gradient step is numerically expensive.
In the second application, devoted to empirical risk minimization (ERM), we illustrate
the flexibility of using non self-adjoint linear operators. We derive different sequential
algorithms depending on the nature of the linear operator involved. { In the third
application, we develop a distributed operator-splitting scheme which allows for
time-varying communication graphs. Finally, the last application focuses in nonlinear
constrained optimization, in which monotone non-Lipschitz operators arise naturally. }
\subsection{Obstacle problem}
The obstacle problem is to find the equilibrium position of an elastic membrane on a
domain $\Omega$, whose boundary is fixed and which is restricted to remain above
some obstacle, given by the function $\varphi\colon\Omega\to\mathbb{R}$. This problem
arises in fluid filtration in porous media, elasto-plasticity, and optimal control, among
other disciplines (see, e.g., \cite{Caf88} and the references therein).
Let $u\colon\Omega\to\mathbb{R}$ be a function representing the vertical displacement of the
membrane and let $\psi\colon\Gamma\to\mathbb{R}$ be the function representing the fixed
boundary, where $\Gamma$ is the smooth boundary of $\Omega$. Assume that
$\psi\in H^{1/2}(\Gamma)$ and $\varphi\in C^{1,1}(\Omega)$ satisfy
$\mathrm{T}\varphi\leq\psi$, and consider the
problem
\begin{align}
\min_{\mathrm{u}\in H^1(\Omega)}&\frac{1}{2}\int_{\Omega}|\nabla
\mathrm{u}|^2\mathrm{dx}\nonumber\label{e:obstprob}\\
\text{s.t. } \mathrm{T}\mathrm{u}&=\psi,\quad\text{a.e. on
}\Gamma; \\
\mathrm{u}&\geq {\mathbf{a}}rphi,\quad\text{a.e. in }\Omega\nonumber,
\end{align}
where $\mathrm{T}\colon H^1(\Omega)\to H^{1/2}(\Gamma)$ is
the (linear) trace operator and $H^1(\Omega)$ is endowed with the
scalar product
$\scal{\cdot}{\cdot}\colon(\mathrm{u},\mathrm{v})\mapsto
\int_{\Omega}\mathrm{u}\mathrm{v}\,dx+\int_{\Omega}\nabla\mathrm{u}
\cdot\nabla\mathrm{v}\,dx$. There is a unique solution to this
obstacle problem \cite{Caff98}.
In order to set this problem in our context, let us define the
operator
\begin{equation}
\label{e:Q}
Q\colon H^{-1}(\Omega)\times H^{-1/2}(\Gamma)\to H^1(\Omega)
\end{equation}
which associates to each $(\mathrm{q},\mathrm{w})\in
H^{-1}(\Omega)\times H^{-1/2}(\Gamma)$ the unique weak solution
(in the sense of
distributions) to \cite[Section~25]{ZeidlerIIB}
\begin{equation}
\begin{cases}
-\Delta \mathrm{u}+\mathrm{u}=\mathrm{q},\quad&\text{in
}\Omega;\\
\frac{\partial \mathrm{u}}{\partial \nu}=\mathrm{w},&\text{on
}\Gamma,
\end{cases}
\end{equation}
where $\nu$ is the outer unit normal vector to $\Gamma$. Hence, $Q$
satisfies
\begin{equation}
\label{e:Qweak}
(\forall \mathrm{v}\in
\mathrm{H})\qquad\scal{Q(\mathrm{q},\mathrm{w})}{\mathrm{v}}=
\scal{\mathrm{w}}{\mathrm{T}\mathrm{v}}_{{-1/2},{1/2}}
+\scal{\mathrm{q}}{\mathrm{v}}_{{-1},1},
\end{equation}
where $\scal{\cdot}{\cdot}_{{-1/2},{1/2}}$ and
$\scal{\cdot}{\cdot}_{{-1},{1}}$ stand for the dual pairs
$H^{-1/2}(\Gamma)-H^{1/2}(\Gamma)$ and
$H^{-1}(\Omega)-H^{1}(\Omega)$, respectively. Then, by defining
$\mathrm{H}=H^1(\Omega)$, $\mathrm{G}=H^{1/2}(\Gamma)$,
$\mathrm{f}\colon \mathrm{u}\mapsto
\frac{1}{2}\int_{\Omega}|\nabla
\mathrm{u}|^2\mathrm{dx}$, $\mathrm{g}=\iota_{\mathrm{C}}$,
where
$\mathrm{C}=\menge{\mathrm{u}\in\mathrm{H}}{\mathrm{u}\geq\varphi\;\text{
a.e.
in }\Omega}$, let $\mathrm{D}={\{\psi\}}$, and
let $\mathrm{L}=\mathrm{T}$, \eqref{e:obstprob} can be
written equivalently as
\begin{equation}
\label{e:probobstheo}
\min_{\mathrm{L}\mathrm{u}\in\mathrm{D}}\mathrm{f}(\mathrm{u})+
\mathrm{g}(\mathrm{u}).
\end{equation}
Moreover, it is easy to verify that $\mathrm{f}$ is convex and, by
using integration by parts and
\eqref{e:Qweak}, for every
$\mathrm{h}\in\mathrm{H}$ we have
\begin{align}
\mathrm{f}(\mathrm{u}+\mathrm{h})-\mathrm{f}(\mathrm{u})-
\Scal{\!Q\left(\!-\Delta\mathrm{u},\frac{\partial\mathrm{u}}{\partial\nu}{\mathrm{i}}ght)\!}{\!\mathrm{h}}
&=\frac{1}{2}\int_{\Omega}|\nabla\mathrm{h}|^2dx+\int_{\Omega}\!\!
\nabla\mathrm{u}\cdot\nabla\mathrm{h}\,dx+\scal{\Delta\mathrm{u}}{\mathrm{h}}_{-1,1}\nonumber\\
&\hspace{20pt}-\scal{\frac{\partial\mathrm{u}}{\partial\nu}}{\mathrm{T}\mathrm{h}}_{-1/2,1/2}\nonumber\\
&=\frac{1}{2}\int_{\Omega}|\nabla\mathrm{h}|^2dx,
\end{align}
which yields
\begin{equation}
\lim_{\|\mathrm{h}\|\to0}\frac{\left|\mathrm{f}(\mathrm{u}+\mathrm{h})-\mathrm{f}(\mathrm{u})-
\Scal{Q\left(-\Delta\mathrm{u},\frac{\partial\mathrm{u}}{\partial\nu}{\mathrm{i}}ght)}
{\mathrm{h}}{\mathrm{i}}ght|}{\|\mathrm{h}\|}=\frac{1}{2}\lim_{\|\mathrm{h}\|\to0}\frac{\|\nabla
\mathrm{h}\|_{L^2}^2}{\|\mathrm{h}\|}=0.
\end{equation}
Hence, $\mathrm{f}$ is Fr\'echet differentiable with a linear gradient given by
$\nabla\mathrm{f}\colon
\mathrm{u}\mapsto Q\left(\!-\Delta\mathrm{u},\frac{\partial\mathrm{u}}
{\partial\nu}{\mathrm{i}}ght)$. Moreover, from integration by parts we have
\begin{equation}
\Scal{Q\left(-\Delta\mathrm{u},\frac{\partial\mathrm{u}}{\partial\nu}{\mathrm{i}}ght)}{\mathrm{h}}=
\Scal{\frac{\partial\mathrm{u}}{\partial\nu}}{\mathrm{T}\mathrm{h}}_{-1/2,1/2}
-\scal{\Delta\mathrm{u}}{\mathrm{h}}_{-1,1}=\int_{\Omega}
\nabla\mathrm{u}\cdot\nabla\mathrm{h}\,dx\leq\|\mathrm{u}\|\|\mathrm{h}\|,
\end{equation}
which yields $\|\nabla\mathrm{f}(\mathrm{u})\|\leq\|\mathrm{u}\|$
and, hence, $\nabla\mathrm{f}$ is $1$-cocoercive \cite{baillon1977quelques}. In addition, the trace
operator is
linear and bounded \cite{Grisv86} and we have from \eqref{e:Qweak}
that
\begin{equation}
\label{e:Qweak2}
(\forall \mathrm{v}\in
\mathrm{H})(\forall \mathrm{w}\in
H^{1/2}(\Gamma))\qquad\scal{Q(0,\mathrm{w})}{\mathrm{v}}=
\scal{\mathrm{w}}{\mathrm{T}\mathrm{v}}_{{-1/2},{1/2}},
\end{equation}
which yields
$\mathrm{L}^*\colon\mathrm{w}\mapsto\mathrm{Q}(0,\mathrm{w})$ and
since $\mathrm{C}$ is non-empty closed convex, $\mathrm{g}$ is
convex, proper, lower semicontinuous and $\mathbf{prox}_{\gamma
\mathrm{g}}=P_{\mathrm{C}}$, for any $\gamma >0$.
The first-order optimality conditions of \eqref{e:probobstheo} reduce to finding
$\mathrm{u}\in\mathrm{H}$ such that
$0\in N_{\mathrm{C}}(\mathrm{u})+\nabla\mathrm{f}(\mathrm{u})+
\mathrm{T}^*N_{\mathrm{D}}(\mathrm{T}\mathrm{u})$, which is a particular case
of Problem~\ref{prob:mi}. Hence, from Corollary~\ref{c:pd} with $\theta=1$, the method
\begin{equation}
\label{e:genVUobst}
\begin{array}{l}
\left\lfloor
\begin{array}{l}
\mathrm{v}^k
=P_{\mathrm{C}}
\left(\mathrm{u}^k-\sigma_0Q\left(-\Delta\mathrm{u}^k,
\frac{\partial\mathrm{u}^k}{\partial\nu}+\mathrm{w}^k{\mathrm{i}}ght){\mathrm{i}}ght)\\[2mm]
\mathrm{t}^k = \mathrm{w}^k +
\sigma_1
\left(\mathrm{T}(2\mathrm{v}^k-\mathrm{u}^k)-\psi\right)\\[2mm]
\mathrm{u}^{k+1}=
\mathrm{u}^k+\frac{\lambda}{\sigma_0}\left(\mathrm{v}^k-\mathrm{u}^k+
\sigma_0
Q(0,\mathrm{w}^k-\mathrm{t}^k){\mathrm{i}}ght)\\[2mm]
\mathrm{w}^{k+1}=\mathrm{w}^k+\frac{\lambda}{\sigma_1}
\left(\mathrm{t}^k-\mathrm{w}^k-
\sigma_1\mathrm{T}(\mathrm{v}^k-\mathrm{u}^k){\mathrm{i}}ght)\\[2mm]
\end{array}
{\mathrm{i}}ght.\\[2mm]
\end{array}
\end{equation}
generates a weakly convergent sequence $(\mathrm{u}^k)_{k\in\ensuremath{\mathbb N}}$ to the
unique
solution to the obstacle problem provided, for instance (see
Remark~\ref{r:pd2}.\ref{r:pd21}), that
$\max\{\sigma_0,\sigma_1\}+2\sqrt{\sigma_0\sigma_1}\|\mathrm{T}\|<2$.
Note that $\nabla\mathrm{f}$ must be computed only once at each iteration, improving
the performance with respect to primal-dual methods following Tseng's approach, in
which $\nabla\mathrm{f}$ must be computed twice per iteration (see, e.g.,
\cite{briceno2011monotone+,tseng2000modified}). The method proposed in
{\cite{condat2013primal,vu2013splitting}} can also solve this problem, but with
stronger
conditions on the
constants $\sigma_0$ and $\sigma_1$, as studied in Remark~\ref{r:pd2}. Moreover,
our approach may include variable stepsizes together with different asymmetric linear
operators, which may improve the performance of the method.
On the other hand, the general version of our method in
Theorem~\ref{thm:asymmetric_metric} allows for an
additional projection onto a closed convex set. In this case this can be useful to impose
some of the constraints of the problem in order to guarantee that iterates at each
iteration satisfy such constraints. An additional projection step may accelerate the
method as it has been studied in \cite{BAKS16}.
Numerical comparisons
among these methods are part of further research.
\subsection{An Incremental Algorithm for Nonsmooth Empirical Risk
Minimization}
In machine learning~\cite{Shalev-Shwartz:2014:UML:2621980}, the Empirical Risk
Minimization (ERM) problem seeks to minimize a finite sample approximation of an
expected loss, under conditions on the feasible set and the loss function. If the solution
to the sample approximation converges to a minimizer of the expected loss when the
size of the sample increases, we say that the problem is learnable.
Suppose that we have a sample of size $m$, and, for every $i\in\{1,\ldots,m\}$,
the loss function associated to the sample $\mathrm{z}_i$ is given by $l(\cdot;
\mathrm{z}_i) \colon
\mathrm{x}\mapsto \mathrm{f}_i(\mathrm{a}_i^{\top} \mathrm{x})$, where each
$\mathrm{a}_i \in \mathbb{R}^{d}\backslash \{0\}$
and each $\mathrm{f}_i : \mathbb{R} \to \left]-\infty, +\infty\right]$ is closed, proper, and
convex.
Then the ERM problem is to
\begin{align}\label{eq:ERM}
\Min_{\mathrm{x} \in \mathbb{R}^d} \frac{1}{m} \sum_{i=1}^{m}
\mathrm{f}_i(\mathrm{a}_i^{\top} \mathrm{x}).
\end{align}
This form features in support vector machines, logistic regression, linear regression, least-absolute deviations, and many other common models in machine learning.
The parameter $m$ indicates the size of the training set and is typically large.
Parallelizing a (sub)gradient computation of~\eqref{eq:ERM} is straightforward, but in
general, because training sets are large, we may not have enough processors to do so.
Thus, when only a few processors are available, incremental iterative algorithms, in
which one or a few training samples are used per iteration to update our solution
estimate, are a natural choice.
Several incremental algorithms are available for solving~\eqref{eq:ERM}, including
incremental (sub)gradient descent and incremental aggregated gradient
methods~\cite{schmidt2013minimizing,defazio2014finito,johnson2013accelerating,defazio2014saga,bertsekas2015incremental,wang2013incremental,bertsekasincrementalproximal,doi:10.1137/S1052623499362111,bianchi2015ergodic}.
The former class requires
diminishing stepsizes (e.g., of size
$O(k^{-1/2})$) and, hence, their convergence may be very slow, while the latter
class of algorithms is
usually restricted to the cases in which either $\mathrm{f}_i$ is smooth or the dual
problem
of~\eqref{eq:ERM} is smooth (in which case~\eqref{eq:ERM} is strongly convex). In
contrast, we now develop an
incremental proximal algorithm, which imposes no smoothness or
strong convexity assumptions. It has a Gauss-Seidel
structure and is obtained by an application of
Theorem~\ref{p:primdumi}. The involved
stepsizes may vary among iterations but they are set to be constants
for simplicity.
The method follows from the following first-order optimality
conditions obtained
assuming some qualification condition:
\begin{equation}
\mathrm{x}\quad\text{solves } \eqref{eq:ERM}\quad\Leftrightarrow\quad
0\in\sum_{i=1}^{m}\mathrm{a}_i\partial
\mathrm{f}_i(\mathrm{a}_i^{\top}\mathrm{x}),
\end{equation}
which is a particular case of Problem~\ref{prob:mi} when
$\mathrm{H}=\mathbb{R}^d$,
$\mathrm{A}\equiv\{0\}$, $\mathrm{C}_1=\mathrm{C}_2\equiv0$ and, for
every
$i\in\{1,\ldots,m\}$, $\mathrm{G}_i=\mathbb{R}$, $\mathrm{D}_i^{-1}=0$,
$\mathrm{L}_i=\mathrm{a}_i^{\top}$, and
$\mathrm{B}_i=\partial\mathrm{f}_i$. By using
Theorem~\ref{p:primdumi} in this case for matrices
$(\mathrm{P}_{ij})_{0\leq i<j\leq m}$ given by
\begin{equation}
(\forall\, 0\leq j\leq i\leq m)\quad \mathrm{P}_{ij}=
\begin{cases}
\frac{\ensuremath{\operatorname{Id}}\,}{\sigma_0},\quad&\text{if }i=j=0;\\
\frac{1}{\sigma_i},\quad&\text{if }i=j>0;\\
-\mathrm{a}_i^{\top},&\text{if }i>j=0;\\
\sigma_0 \mathrm{a}_i^{\top}\mathrm{a}_j,&\text{if }0<j<i,
\end{cases}
\end{equation}
we obtain
\begin{equation}
\label{e:algomicompERM3}
\begin{array}{l}
\left\lfloor
\begin{array}{l}
\mathrm{v}_1^k = \mathbf{prox}_{
\sigma_{1}\mathrm{f}_1^{*}}\left(\mathrm{u}_1^k +
\sigma_{1}\left(
\mathrm{a}_1^{\top}\mathrm{x}^k-\sigma_{0}\sum_{i=1}^m
\mathrm{a}_1^{\top}\mathrm{a}_i\mathrm{u}_i^k{\mathrm{i}}ght){\mathrm{i}}ght)\\
\mathrm{v}_2^k = \mathbf{prox}_{
\sigma_{2}\mathrm{f}_2^{*}}\left(\mathrm{u}_2^k+
\sigma_{2}\left(
\mathrm{a}_2^{\top}\mathrm{x}^k-\sigma_{0}\left(\mathrm{a}_2^{\top}
\mathrm{a}_1\mathrm{v}_1^k+\sum_{i=2}^m
\mathrm{a}_2^{\top}\mathrm{a}_i\mathrm{u}_i^k{\mathrm{i}}ght){\mathrm{i}}ght){\mathrm{i}}ght)\\[1mm]
\hspace{0.5cm}\vdots\\
\mathrm{v}_m^k = \mathbf{prox}_{
\sigma_{m}\mathrm{f}_m^{*}}\left(\mathrm{u}_m^k +
\sigma_{m}\left(
\mathrm{a}_m^{\top}\mathrm{x}^k-\sigma_{0}\left(\sum_{i=1}^{m-1}
\mathrm{a}_m^{\top}\mathrm{a}_i\mathrm{v}_i^k
+\|\mathrm{a}_m\|^2\mathrm{u}_m^k{\mathrm{i}}ght){\mathrm{i}}ght){\mathrm{i}}ght)\\[1mm]
\mathrm{x}^{k+1}=
\mathrm{x}^k-{\lambda}\sum_{i=1}^m
\mathrm{a}_i\mathrm{v}_i^k\\
\mathrm{u}_1^{k+1}=\mathrm{u}_1^k+\frac{\lambda}{\sigma_1}\left(
\mathrm{v}_1^k-\mathrm{u}_1^k{\mathrm{i}}ght)\\
\hspace{0.8cm} \vdots\\
\mathrm{u}_m^{k+1}=\mathrm{u}_m^k+\frac{\lambda}{\sigma_m}\left(
\mathrm{v}_m^k-\mathrm{u}_m^k\right)+
\lambda\sigma_0\sum_{j=1}^{m-1}\mathrm{a}_m^{\top}\mathrm{a}_j(\mathrm{v}_j^k
-\mathrm{u}_j^k).
\end{array}
{\mathrm{i}}ght.\\[2mm]
\end{array}
\end{equation}
Conditions \eqref{e:condiPii}--\eqref{e:metricconditionpd}
hold provided that
\begin{equation}
\label{e:condERM1}
\sqrt{\sum_{i=1}^m\|\mathrm{a}_i\|^2}+\sigma_0\sum_{i=1}^m
\|\mathrm{a}_i\|^2+\frac{\sigma_0}{2}
\left(\max_{i=1,\ldots,m}\|\mathrm{a}_i\|^2-
\min_{i=1,\ldots,m}\|\mathrm{a}_i\|^2{\mathrm{i}}ght)<
\frac{1}{\max\limits_{i=0,\ldots,m}\sigma_i},
\end{equation}
Hence, by choosing $(\sigma_i)_{0\leq i\leq m}$ satisfying
\eqref{e:condERM1},
the sequence $(\mathrm{x}^k)_{k\in\ensuremath{\mathbb N}}$ generated by
\eqref{e:algomicompERM3} converges to a solution to \eqref{eq:ERM} provided that
$\lambda<M^{-1}$, where
$$M=\left(\min_{i=0,\ldots,m}\sigma_i{\mathrm{i}}ght)^{-1}+\frac{1}{2}
\sqrt{\sum_{i=1}^m\|\mathrm{a}_i\|^2}+\frac{\sigma_0}{2}\left(\sum_{i=1}^m
\|\mathrm{a}_i\|^2+\max_{i=1,\ldots,m}\|\mathrm{a}_i\|^2{\mathrm{i}}ght).$$
Note that, without loss of generality, we can assume, for every
$i\in\{1,\ldots,m\}$, $\|\mathrm{a}_i\|=1$, since
$\mathrm{f}_i(\mathrm{a}_i^{\top}\mathrm{x})=
\mathrm{g}_i((\mathrm{a}_i/\|\mathrm{a}_i\|)^{\top}\mathrm{x})$
with $\mathrm{g}_i\colon \mathrm{x}\mapsto
\mathrm{f}_i(\|\mathrm{a}_i\|\mathrm{x})$ and
$\mathbf{prox}_{\mathrm{g}_i}\colon\mathrm{x}\mapsto
\mathbf{prox}_{\|\mathrm{a}_i\|^2\mathrm{f}_i}
(\|\mathrm{a}_i\|\mathrm{x})/\|\mathrm{a}_i\|$. Therefore, condition
\eqref{e:condERM1} can be reduced to
$\sqrt{m}+m\sigma_0<(\max_{i=0,\ldots,m}\sigma_i)^{-1}$, which, in
the
case $\sigma_0=\cdots=\sigma_m$ reduces to
$\sigma_0<(\sqrt{5}-1)/(2\sqrt{m})$.
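As an illustration, the following Python sketch implements
\eqref{e:algomicompERM3} for a least-absolute-deviations instance of
\eqref{eq:ERM}, i.e., $\mathrm{f}_i(t)=|t-\mathrm{b}_i|$, so that
$\mathbf{prox}_{\sigma\mathrm{f}_i^*}(w)$ is the projection of
$w-\sigma\mathrm{b}_i$ onto $[-1,1]$. The data and the common stepsize are
assumptions made only for this example; the vectors $\mathrm{a}_i$ are
normalized, as discussed above, and the relaxation $\lambda$ multiplies the
coupling term in the dual updates, consistently with \eqref{e:algomicomp1}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, d = 200, 20
A = rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # ||a_i|| = 1
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(m)
G = A @ A.T                                      # Gram matrix a_i^T a_j

sigma = 0.9 * (np.sqrt(5) - 1) / (2 * np.sqrt(m))        # (e:condERM1)
M = 1.0 / sigma + np.sqrt(m) / 2 + sigma * (m + 1) / 2
lam = 0.9 / M

x, u = np.zeros(d), np.zeros(m)
for k in range(1000):
    v = np.empty(m)
    for i in range(m):                           # Gauss-Seidel sweep
        s = A[i] @ x - sigma * (G[i, :i] @ v[:i] + G[i, i:] @ u[i:])
        v[i] = np.clip(u[i] + sigma * (s - b[i]), -1.0, 1.0)
    corr = np.tril(G, k=-1) @ (v - u)            # coupling term, j < i
    x = x - lam * (A.T @ v)
    u = u + (lam / sigma) * (v - u) + lam * sigma * corr

print(np.mean(np.abs(A @ x - b)))                # LAD objective value
\end{verbatim}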
{
\subsection{A Distributed Operator Splitting Scheme with Time-Varying Networks}
In this section we develop an extension of the popular distributed operator-splitting
scheme \emph{PG-Extra}~\cite{shi2015extra,shi2015proximal} to time-varying graphs. The
problem data are a collection of cost functions $f_1, \ldots, f_n$ on a Hilbert space ${\mathcal{H}}$
and a sequence of connected, undirected communication graphs $ G_t = (V_t, E_t)$ with
vertices $V_t = \{1, \ldots, n\}$ and edges $ E_t \subseteq \{1, \ldots, n \}^2$. Then the goal
of distributed optimization is to
\begin{align}\label{eq:logs}
\Min_{x \in {\mathcal{H}}} \; \sum_{i =1}^n f_i(x),
\end{align}
through an iterative algorithm that, at every time $t \in \ensuremath{\mathbb N}$, only allows communication between neighbors in $G_t$. For simplicity, we focus on the case wherein
$f_i : {\mathcal{H}} \to \ensuremath{\left]-\infty,+\infty\right]}$ is proper, lower semicontinuous and convex.
A well-known distributed operator-splitting scheme is \emph{PG-Extra}. This method applies to fixed communication graphs $G_t \equiv G$, and can be viewed as an instance of modern primal-dual algorithms, such as Condat--Vu~\cite{condat2013primal,vu2013splitting}. To the best of our knowledge, there is no known extension of \emph{PG-Extra} to time-varying graphs that may also be applied to monotone inclusions. We will now develop such an extension.
For the graph $G_t$, let $A_t$ denote its adjacency matrix and let $D_t$ denote its degree matrix.\footnote{Briefly, $(A_t)_{ij} = 1$ if $(i,j) \in E_t$ and is zero otherwise, while $D_t$ is a diagonal matrix with diagonal entries $(D_t)_{ii} = \text{deg}(i)$.} The Laplacian matrix of $G_t$ is defined as the difference
$$
L_t := D_t - A_t.
$$
It is well known that, for connected graphs, we have the identity $\ker(L_t) = \mathrm{span}(\mathbf{1}_n)$~\cite{chung1997spectral}. We may exploit this fact to develop an equivalent formulation of~\eqref{eq:logs}.
The Laplacian operator has a natural extension to the product space ${\mathcal{H}}^n$. It is then a straightforward exercise to show that the extension induces the following identity:
\begin{align*}
\left(\forall \mathbf{x} := (x_1, \ldots, x_n) \in {\mathcal{H}}^n{\mathrm{i}}ght) &&L_t\mathbf{x} = 0 \iff x_1 = x_2 = \cdots = x_n.
\end{align*}
Therefore, a family of equivalent formulations of~\eqref{eq:logs} is given by
\begin{align*}
\Min_{\mathbf{x} \in {\mathcal{H}}^n} &\; \sum_{i \in V} f_i(x_i) \\
\text{subject to:} & \; L_t \mathbf{x} = 0. \numberthis\label{eq:new_reform}
\end{align*}
The constraint $L_t\mathbf{x} = 0$ is equivalent to the constraint $\mathbf{x} \in {\mathcal{U}} := \{\mathbf{x} \in {\mathcal{H}}^n
\mid x_1 = \ldots = x_n\}$. Thus, one could apply a splitting method to derive a distributed
algorithm consisting of decoupled proximal steps on the $f_i$ followed by \emph{global
averaging} steps induced by the projection onto ${\mathcal{U}}$. However, in order to develop an
algorithm that respects the \emph{local communication structure} of the graphs $G_t$, we
must avoid computing such projections onto ${\mathcal{U}}$. For any fixed $t$, we may develop
such a method as a special case of modern primal-dual algorithms.
Indeed, a straightforward application of Condat-Vu~\cite{condat2013primal,vu2013splitting} yields the update rule
\begin{align*}
\text{For all $i \in V$} & \text{ in parallel} \\
x_i^{k+1} &= \mathbf{prox}_{\gamma f_i}(x_i^{k} -\gamma (L_t\mathbf{y}^k)_i)\\
\mathbf{y}^{k+1} &= \mathbf{y}^{k} + \tau L_t (2\mathbf{x}^{k+1} - \mathbf{x}^k), \numberthis\label{eq:condat_vu_decentralized}
\end{align*}
where $\gamma, \tau > 0$ are appropriately chosen stepsizes. This algorithm is fully decentralized because multiplications by $L_t$ only induce communication among neighbors in the graph $G_t$.
If we allow $t = k$, this Condat--Vu~\cite{condat2013primal,vu2013splitting} algorithm has, to the best of our knowledge, no supporting convergence theory, although each of the optimization problems~\eqref{eq:new_reform} has the same set of solutions. The lack of convergence theory arises because Condat--Vu measures convergence in the product space $({\mathcal{H}}^n \times {\mathcal{H}}^n, \|\cdot\|_{P_t})$, where $P_t$ is a metric-inducing linear transformation depending on $L_t$:
\begin{align*}
P_t &:= \begin{bmatrix}
\frac{1}{\gamma} \ensuremath{\operatorname{Id}}\, & -L_t \\
-L_t & \frac{1}{\tau} \ensuremath{\operatorname{Id}}\,
\end{bmatrix}.
\end{align*}
One may hope to apply standard variable metric operator-splitting
schemes~\cite{combettes2012variable,vu2013variableFBF}, but the required compatibility condition
on the metrics cannot be guaranteed here. Thus, instead of Condat--Vu, we apply the variable metric
technique developed in this manuscript.
Mathematically, we let
\begin{align*}
{\mathcal{S}}_t : {\mathcal{H}}^n \times {\mathcal{H}}^n & \to {\mathcal{H}}^{n} \times {\mathcal{H}}^n \\
(\mathbf{x}, \mathbf{y}) &\mapsto \left( \mathbf{x}^+ ,\; \mathbf{y} + \tau L_t (2\mathbf{x}^{+} - \mathbf{x})\right),
\quad\text{where}\quad \mathbf{x}^+:=\big(\mathbf{prox}_{\gamma f_i}(x_i - \gamma (L_t\mathbf{y})_i)\big)_{i=1}^n.
\end{align*}
Given a proper choice of $\gamma$ and $\tau$, the results of~\cite{condat2013primal,vu2013splitting} show that ${\mathcal{S}}_t$ is of $\mathfrak{T}$-class in the space $({\mathcal{H}}^n \times {\mathcal{H}}^n, \|\cdot \|_{P_t})$ (indeed, ${\mathcal{S}}_t$ is a resolvent). Thus, for any $0 < \mu \leq \|P_t\|^{-1}$, Proposition~\ref{prop:classT} implies that
$$
{\mathcal{Q}}_t = \ensuremath{\operatorname{Id}}\, - \mu P_t(\ensuremath{\operatorname{Id}}\, - {\mathcal{S}}_t),
$$
is of $\mathfrak{T}$-class in the space $({\mathcal{H}}^n \times {\mathcal{H}}^n, \|\cdot \|)$ and $\Fix({\mathcal{Q}}_t) = \Fix({\mathcal{S}}_t)$. Like ${\mathcal{S}}_t$, the operator ${\mathcal{Q}}_t$ may be computed in a decentralized fashion, as communication between agents is only induced through multiplications by $L_t$.
The algorithm resulting from applying $Q_t$ is a time-varying distributed operator-splitting scheme:
\begin{align*}
(\mathbf{x}^{k+1}, \mathbf{y}^{k+1}) = {\mathcal{Q}}_k (\mathbf{x}^{k}, \mathbf{y}^{k}).
\end{align*}
The convergence of this iteration may be proved using an argument similar to Theorem~\ref{cor:asymmetricnoinversion} (which does not capture the case in which the operator at hand is varying). To prove convergence of this iteration, one must observe that $\Fix({\mathcal{Q}}_k)$ is constant, that for all $(\mathbf{x}^\ast, \mathbf{y}^\ast) \in \Fix({\mathcal{Q}}_k)$ the sequence $\|(\mathbf{x}^{k}, \mathbf{y}^{k}) - (\mathbf{x}^\ast, \mathbf{y}^\ast)\|$ is nonincreasing, and that $\sum_{k=0}^\infty \| (\mathbf{x}^{k+1}, \mathbf{y}^{k+1}) - (\mathbf{x}^{k}, \mathbf{y}^{k})\|^2 < \infty $. A standard argument then shows that $(\mathbf{x}^{k}, \mathbf{y}^{k})$ converges to an element of $\Fix({\mathcal{Q}}_k) \equiv \Fix({\mathcal{Q}}_0)$.
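The following Python sketch implements the iteration
$(\mathbf{x}^{k+1},\mathbf{y}^{k+1})={\mathcal{Q}}_k(\mathbf{x}^k,\mathbf{y}^k)$
for the illustrative choice $f_i=\frac12\|\cdot-c_i\|^2$ (whose proximity
operator is explicit and whose aggregate minimizer is the average of the
$c_i$), with random connected communication graphs changing at every
iteration. The stepsizes, the graph model, and the crude upper bound used for
$\|P_t\|$ are assumptions made only for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, dim = 6, 3
C = rng.standard_normal((n, dim))                 # targets c_i

def random_connected_laplacian(rng, n):
    """Laplacian of a ring (always connected) plus a few random extra edges."""
    Adj = np.zeros((n, n))
    for i in range(n):
        Adj[i, (i + 1) % n] = Adj[(i + 1) % n, i] = 1.0
    for _ in range(n):
        i, j = rng.integers(0, n, size=2)
        if i != j:
            Adj[i, j] = Adj[j, i] = 1.0
    return np.diag(Adj.sum(axis=1)) - Adj

gamma, tau = 0.5, 0.04
X, Y = np.zeros((n, dim)), np.zeros((n, dim))
for k in range(20000):
    Lk = random_connected_laplacian(rng, n)
    normL = np.linalg.norm(Lk, 2)
    assert gamma * tau * normL ** 2 < 1.0         # P_t positive definite
    # S_t: prox steps on the f_i, then the dual (Laplacian) update
    Xp = (X - gamma * (Lk @ Y) + gamma * C) / (1.0 + gamma)
    Yp = Y + tau * (Lk @ (2 * Xp - X))
    # Q_t = Id - mu P_t (Id - S_t) with mu <= ||P_t||^{-1}
    mu = 1.0 / (max(1.0 / gamma, 1.0 / tau) + normL)
    dX, dY = X - Xp, Y - Yp
    X = X - mu * (dX / gamma - Lk @ dY)
    Y = Y - mu * (dY / tau - Lk @ dX)

print(np.abs(X - C.mean(axis=0)).max())           # all x_i near the average
\end{verbatim}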
\subsection{Nonlinear constrained optimization problems}
\label{sec:NLMP}
In this application we aim at solving the nonlinear constrained optimization problem
\begin{equation}
\label{e:nonlMP}
\Min_{x\in C}{f(x)+h(x)},
\end{equation}
where $C=\menge{x\in\ensuremath{{\mathcal H}}}{(\forall i\in\{1,\ldots,p\})\quad g_i(x)\le 0}$, $f\colon\ensuremath{{\mathcal H}}\to\ensuremath{\left]-\infty,+\infty\right]}$
is lower semicontinuous, convex and proper, for every $i\in\{1,\ldots,p\}$,
$g_i\colon{\mathrm{dom}\,}(g_i)\subset \ensuremath{{\mathcal H}}\to\mathbb{R}$ and $h\colon\ensuremath{{\mathcal H}}\to\mathbb{R}$ are $\mathcal{C}^1$
convex functions in $\ensuremath{\operatorname{int}}{\mathrm{dom}\,} g_i$ and $\ensuremath{{\mathcal H}}$, respectively, and $\nabla h$ is
$\beta^{-1}$-Lipschitz. A
solution of the optimization
problem \eqref{e:nonlMP}
can be found via the saddle points of the Lagrangian
\begin{equation}
L(x,u)=f(x)+h(x)+u^{\top}{g(x)}-\iota_{\mathbb{R}^p_+}(u),
\end{equation}
which, under standard qualification conditions can be found by solving the monotone
inclusion (see \cite{rockafellar1970saddle})
\begin{equation}
{\rm find}\quad x\in Y\quad \text{ such that }\quad (\ensuremath{\exists\,} u\in\mathbb{R}_+^p)\quad (0,0)\in
A(x,u)+B_1(x,u)+B_2(x,u),
\end{equation}
where $Y\subset\ensuremath{{\mathcal H}}$ is a nonempty closed convex set modeling a priori information on the
solution (possibly $Y=\ensuremath{{\mathcal H}}$), $A\colon (x,u)\mapsto \partial f(x)\times
N_{\mathbb{R}^p_+}u$ is maximally
monotone, $B_1\colon (x,u)\mapsto (\nabla h(x),0)$ is $\beta$-cocoercive, and
$$B_2\colon (x,u)\mapsto \left(\sum_{i=1}^pu_i\nabla g_i(x),-g_1(x),\ldots,-g_p(x){\mathrm{i}}ght)$$
is nonlinear,
monotone and continuous
\cite{rockafellar1970saddle}. If $Y\subset {\mathrm{dom}\,}\partial f\subset\cap_{i=1}^p\ensuremath{\operatorname{int}}{\mathrm{dom}\,} g_i$
we have that $X:=Y\times\mathbb{R}_+^p\subset {\mathrm{dom}\,} A={\mathrm{dom}\,}\partial f\times \mathbb{R}_+^p\subset
{\mathrm{dom}\,} B_2=\cap_{i=1}^p\ensuremath{\operatorname{int}}{\mathrm{dom}\,} g_i\times\mathbb{R}^p$ and, from
\cite[Corollary~25.5]{bauschke2017convex},
we have that $A+B_2$ is maximally monotone.
The method proposed in Theorem~\ref{t:1} reduces to
\begin{equation}
\label{e:algonlc}
(\forall k\in\ensuremath{\mathbb N})\quad
\begin{array}{l}
\left\lfloor
\begin{array}{l}
y^k=\mathbf{prox}_{\gamma_k f}\left(x^k-\gamma_k(\nabla h(x^k)+\sum_{i=1}^pu_i^k\nabla
g_i(x^k)){\mathrm{i}}ght)\\[0.5mm]
\text{For every } i=1,\ldots,p\\[0.5mm]
\left\lfloor
\begin{array}{l}
\eta_i^k=\max\left\{0,u_i^k+\gamma_kg_i(x^k)\right\}\\[0.5mm]
u_i^{k+1}=\max\left\{0,\eta_i^k-\gamma_k(g_i(x^k)-g_i(y^k))\right\}\\[0.5mm]
\end{array}
\right.\\[4mm]
x^{k+1}=P_Y\left(y^k+\gamma_k\sum_{i=1}^p(u_i^k\nabla
g_i(x^k)-\eta_i^k\nabla
g_i(y^k))\right),
\end{array}
{\mathrm{i}}ght.
\end{array}
\end{equation}
where, for every $k\in\ensuremath{\mathbb N}$, $\gamma_k$ is found by the backtracking procedure
defined in \eqref{e:armijocond}. Note that, since $B_2$ is nonlinear, the approaches
proposed in
\cite{condat2013primal,vu2013splitting} cannot be applied to this instance.
In the particular instance when $f=\iota_{\Omega}$ for
some
nonempty closed convex set $\Omega$, we can choose, among other options,
$Y=\Omega$ since we know
that any solution must belong to $\Omega$.
Moreover, when, for every $i\in\{1,\ldots,p\}$, $g_i\colon x\mapsto d_i^{\top}x$, where
$d_i\in\mathbb{R}^N$, we have $B_2\colon (x,u)\mapsto (D^{\top}u,-Dx)$, where
$D=[d_1,\ldots,d_p]^{\top}$. This is a particular instance of problem
\eqref{eq:primal_before_PD} and $B_2$ is $\|D\|$-Lipschitz in this case, which allows us to
use constant stepsizes $\gamma_k=\gamma\in]0,\chi[$, where $\chi$ is defined in
\eqref{e:chi} and $L=\|D\|$.
Theorem~\ref{t:1} guarantees the convergence of the
iterates $\{x^k\}_{k\in\ensuremath{\mathbb N}}$ thus generated to a solution to \eqref{e:nonlMP} in any case.
In the next section, we explore some numerical results showing the good performance
of this method and the method with constant step-size when $g_i$ are affine linear.
}
{
\section{Numerical simulations}
\label{sec:7}
In this section we provide two instances of Section~\ref{sec:NLMP} and we compare
our proposed method with available algorithms in the literature.
\subsection{Optimization with linear inequalities}
In the context of problem~\eqref{e:nonlMP}, suppose that $\ensuremath{{\mathcal H}}=\mathbb{R}^N$, $h\colon
x\mapsto \|Ax-b\|^2/2$, $A$ is an $m\times N$ real matrix with $N=2m$, $b\in\mathbb{R}^m$,
$f=\iota_{[0,1]^N}$, and
$$(\forall i\in\{1,\ldots,p\})\quad g_i(x)=d_i^{\top}x,$$
where $d_1,\ldots,d_p\in\mathbb{R}^N$. In this case, $B_1\colon (x,u)\mapsto (A^{\top}(Ax-b),0)$,
$B_2\colon (x,u)\mapsto (D^{\top}u,-Dx)$, where $D=[d_1,\ldots, d_p]^{\top}$,
$\beta=\|A\|^{-2}$ and $L=\|D\|$.
We compare the
method proposed in \eqref{e:algonlc} using the line search (FBHF-LS),
the version with constant stepsize (FBHF), the method proposed by
Condat and V\~u \cite{condat2013primal,vu2013splitting} (CV), the method proposed by
Tseng \cite{tseng2000modified} with line search (Tseng-LS) and with constant stepsize
(Tseng) for randomly generated matrices and vectors $A$, $D$ and $b$.
We choose the same starting point for each method,
and the line search parameters for Tseng-LS and FBHF-LS are $\theta=0.316$,
$\varepsilon=0.88$ and $\sigma=0.9$. For the constant stepsize versions of Tseng and
FBHF, we use $\gamma=\delta/(\beta^{-1}+L)$ and
$\gamma=\delta\beta/(1+\sqrt{1+16\beta^2L^2})$, respectively, and for $\bar{\sigma}>0$ we
select $\tau=1/\big(\tfrac{1}{2\beta}+\bar{\sigma} L^2\big)$
in order to satisfy
the convergence conditions on the parameters of each algorithm.
We choose several values of $\delta$ and $\bar{\sigma}$ to study the behavior, and
we use the stopping criterion
$\|(x_{k+1}-x_k,u_{k+1}-u_k)\|/\|(x_k,u_k)\|<10^{-7}$.
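For reference, the constant step sizes and the CV parameter above can be computed directly from $\|A\|$ and $\|D\|$; the short Python sketch below illustrates this (the default values of $\delta$ and $\bar{\sigma}$ are placeholders, not prescriptions).
\begin{verbatim}
import numpy as np

def stepsizes(A, D, delta=3.99, sigma_bar=0.0008):
    beta = 1.0 / np.linalg.norm(A, 2) ** 2   # cocoercivity constant of grad h
    L = np.linalg.norm(D, 2)                 # Lipschitz constant of B_2
    gamma_tseng = delta / (1.0 / beta + L)
    gamma_fbhf = delta * beta / (1.0 + np.sqrt(1.0 + 16.0 * beta**2 * L**2))
    tau_cv = 1.0 / (1.0 / (2.0 * beta) + sigma_bar * L**2)
    return gamma_tseng, gamma_fbhf, tau_cv
\end{verbatim}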
In Table~\ref{tab:0} we show the performance of the five algorithms
for random matrices $A$ and $D$ and a random vector $b$ with $N=2000$ and
$p=100$ and a selection of the parameters $\bar{\sigma},\delta$. We see that for Tseng and
FBHF
the performance improves for larger choices of $\delta$, while for CV it is not clear how to
choose $\bar{\sigma}$ in general. Although the theoretical bound for FBHF does not permit
$\delta$ to go beyond $4$, convergence is also obtained for $\delta=4.4$,
with better performance.
We suspect that the particular structure of this case can be exploited to obtain a
better bound. We also observe that, for each method, the best performance in time is
obtained for the lowest number of iterations. In addition, for this instance, algorithms
FBHF, CV and FBHF-LS are comparable in computational time, while the algorithms by
Tseng \cite{tseng2000modified} are considerably less efficient in time and in number of
iterations. In Table~\ref{tab:01} we compare the average time and iterations that the most
efficient methods in the first simulation take to achieve the stopping criterion
($\epsilon=10^{-7}$) for $20$ random realizations of matrices $A$ and $D$ and a random
vector $b$, with $N=2000$ and
$p=100$. We use the parameters yielding the best performance of each method in the first
simulation. For FBHF we also explore the case when
$\delta=4.7$, which gives the best performance. We also observe that FBHF for
$\delta=3.999$ is comparable with CV in average time, while FBHF-LS is slower for this
instance.
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l||l|l|l||l|l|l||l|l|l|l||l||l|}
\hline
$\epsilon=10^{-7}$ & \multicolumn{3}{c||}{Tseng} &
\multicolumn{3}{c||}{FBHF}
& \multicolumn{4}{c||}{CV} & Tseng-LS & FBHF-LS \\ \hline
$\delta$,$\bar{\sigma}$ & 0.8 & 0.9 & 0.99 & 3.2 & 3.99 & 4.4 &
0.0125 &
0.0031 & 0.0008 & 0.0002 & LS & LS \\ \hline
$h(x^*)$ & 158.685 & 158.684 & 158.684 & 158.681 & 158.680 & 158.679 &
158.674 &
158.674 & 158.676 & 158.687 & 158.683 & 158.680 \\ \hline
iter. & 20564 & 18482 & 16791 & 11006 & 8915 & 8243 & 9384 &
9158 & 8516 & 13375 & 14442 & 10068 \\ \hline
time (s) & 41.55 & 37.18 & 33.76 & 13.11 & 10.48 & 9.76 & 10.70 & 10.61
& 9.67 & 15.27 & 94.86 & 12.40 \\ \hline
\end{tabular}
}
\vspace{0.2cm}
\caption{Comparison of Tseng, FBHF and CV (with different values of $\delta$ and
$\bar{\sigma}$),
Tseng-LS and FBHF-LS for a stopping criterion of $\epsilon=10^{-7}$.}
\label{tab:0}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{|l|l|l|}
\hline
$\epsilon=10^{-7}$ & av. iter. & av. time (s) \\ \hline
FBHF-LS & 36225 & 43.96 \\ \hline
FBHF ($\delta=3.999$) & 32563 & 39.07 \\ \hline
FBHF ($\delta=4.7$) & 28364 & 34.14 \\ \hline
CV ($\bar{\sigma}=0.0008$) & 33308 & 38.60 \\ \hline
\end{tabular}
\caption{Average performance of the most efficient methods for $20$ random
realizations of $A$, $D$ and $b$ with $N=2000$ and $p=100$.}
\label{tab:01}
\end{table}
\subsection{Entropy constrained optimization}
\label{sec:71}
In the context of problem~\eqref{e:nonlMP}, suppose that $\ensuremath{{\mathcal H}}=\mathbb{R}^N$, $h\colon
x\mapsto \frac{1}{2}x^{\top}Qx-d^{\top}x+c$, $Q$ is an $N\times N$ positive semidefinite real
matrix, $d\in\mathbb{R}^N$, $c\in\mathbb{R}$, $f=\iota_{\Omega}$,
$\Omega$ is a closed convex subset of $\mathbb{R}^N$,
$p=1$, and $$g_1\colon\mathbb{R}^N_+\to\mathbb{R}\colon x\mapsto
\sum_{i=1}^Nx_i\left(\ln\left(\frac{x_i}{a_i}\right)-1\right)-r,
$$
where $-\sum_{i=1}^Na_i<r<0$, $a\in\mathbb{R}^N_{++}$ and we use the convention $0\ln(0)=0$.
This problem appears
in robust least squares estimation when a relative entropy constraint is included
\cite{levy04}.
This constraint can be seen as a distance constraint with respect to the vector $a$, where
the distance is measured by the Kullback-Leibler divergence \cite{basseville13}.
In our numerical experiments, we assume $Q=A^{\top}A$, $d=A^{\top}b$ and $c=\|b\|^2/2$,
where $A$ is an $m\times N$ real matrix with $N=2m$ and $b\in\mathbb{R}^m$, which yields
$h\colon x\mapsto \|Ax-b\|^2/2$, $\beta=\|A\|^{-2}$, $\Omega=[0.001,1]^N$, and
$a=(1,\ldots,1)^{\top}$.
In this context, the entropy term $\sum_{i=1}^Nx_i(\ln(x_i/a_i)-1)$ achieves its minimum at
$\bar{x}=(1,\ldots,1)^{\top}$, where it takes the value $-N$,
and we choose $r\in]-N,0[$ so that the constraint is feasible.
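For completeness, the constraint $g_1$ and its gradient, the only problem-specific quantities required by FBHF-LS and Tseng-LS here, can be evaluated as in the following Python sketch (the convention $0\ln 0=0$ is handled explicitly; the gradient is only used at strictly positive points, which is guaranteed by $\Omega=[0.001,1]^N$).
\begin{verbatim}
import numpy as np

def g1(x, a, r):
    # sum_i x_i (ln(x_i/a_i) - 1) - r, with the convention 0 ln 0 = 0
    t = np.where(x > 0.0,
                 x * (np.log(np.maximum(x, 1e-300) / a) - 1.0),
                 0.0)
    return t.sum() - r

def grad_g1(x, a):
    # gradient has entries ln(x_i / a_i); valid for x_i > 0
    return np.log(x / a)
\end{verbatim}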
Since the constraint is not linear, we cannot
use the methods proposed in \cite{vu2013splitting,condat2013primal}. We compare the
method proposed in \eqref{e:algonlc} with line search (FBHF-LS) with Tseng's method
with line search
\cite{tseng2000modified} (Tseng-LS) and two MATLAB routines: {\tt fmincon.interior-point}
(FIP) and
{\tt fmincon.sqp} (SQP). For $m=100, 200, 300$, we generate $20$ random matrices $A$
and random vectors $b$ and we compare the previous methods by changing
$r\in\{-0.2N,-0.4N,-0.6N,-0.8N\}$
in order to vary the feasible regions. We choose the same starting point for each method,
and the line search parameters for Tseng-LS and FBHF-LS are $\theta=0.707$,
$\varepsilon=0.88$ and $\sigma=0.9$.
The stopping criterion is
$\|(x_{k+1}-x_k,u_{k+1}-u_k)\|/\|(x_k,u_k)\|<\epsilon$ with $\epsilon=10^{-11}$. In
Table~\ref{tab:1} we show,
for $m=300$,
the value of the objective function $h$, the nonlinear constraint $g_1$ and time for
achieving the stopping criterion for a fixed random matrix $A$ and vector $b$
by moving $r\in\{-0.2N,-0.4N,-0.6N,-0.8N\}$. We observe that all methods achieve
almost the same value of the objective function and satisfy the constraints, but
in time FBHF-LS obtains the best performance, even if its number of iterations is larger
than
that of FIP and SQP. Tseng-LS also performs better in time than FIP and SQP, with
a much larger number of iterations. We also observe that the smaller the feasible
set is, the harder it is for all methods to approximate the solution, and the only case
in which the constraint is inactive is $r=-0.2N$. On the other hand, even if in the cases
$r=-0.6N$ and $r=-0.8N$ we have $g_1(x^*)>0$, the value is $\approx 10^{-6}$, which is
very close to feasibility.
This behavior is confirmed in Table~\ref{tab:2}, in which we show,
for each $m\in\{100,200,300\}$, the average time and iterations
obtained from the $20$ random realizations by moving
$r\in\{-0.2N,-0.4N,-0.6N,-0.8N\}$. We observe that FBHF-LS takes considerably less time
than the other algorithms to reach the stopping criterion, and the difference grows as the
dimension increases.
Since FIP and SQP are very slow for high dimensions, in Table~\ref{tab:3}
we compare the efficiency of Tseng-LS and FBHF-LS for $20$ random realizations
of $A$ and $b$ with $N\in\{1000,2000,3000\}$ for $r=-0.4N$ and $\epsilon=10^{-5}$.
The computational time of both methods is reasonable, but again FBHF-LS is faster.
FBHF-LS uses fewer iterations than Tseng-LS to achieve the same criterion and, even if
we reduce $\epsilon$ from $10^{-5}$ to $10^{-10}$ and the number of iterations becomes more
than 3 times that of Tseng-LS for the weaker criterion, the computational time is similar.
We also observe that the percentage of relative improvement of an algorithm $A$ with
respect to Tseng-LS,
measured via $\%\,\mathrm{imp.}(A)=100\cdot(f(x_A)-f(x_T))/f(x_T)$, where $x_T$ and $x_A$ are the
approximate solutions obtained by Tseng-LS and $A$, is bigger for smaller dimensions.
For instance, in the case $500\times 1000$, FBHF-LS obtains an approximate solution
for which the objective function has a $12\%$ relative improvement with respect to that
of
Tseng-LS for $\epsilon=10^{-5}$ and, if the criterion is strengthened to $10^{-10}$, the
improvement rises to $20\%$. For higher dimensions, these quantities are considerably
reduced.
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l||l|l|l|l||l|l|l|l||l|l|l|l||l|l|l|l|}
\hline
& \multicolumn{4}{c||}{$r=-0.2N$} &
\multicolumn{4}{c||}{$r=-0.4N$} & \multicolumn{4}{c||}{$r=-0.6N$} &
\multicolumn{4}{c|}{$r=-0.8N$} \\ \hline
$300\times600$ & $h(x^*)$ & $g_1(x^*)$ & time (s) &iter. & $h(x^*)$ & $g_1(x^*)$
&
time (s) & iter.& $h(x^*)$ & $g_1(x^*)$ & time (s) &iter.& $h(x^*)$ & $g_1(x^*)$ &
time (s)&iter. \\ \hline
FIP & 6.06E-10 & -105.038& 165.190 & 562 & 3.73E-09 & -7.78E-02 &
183.467 & 574 & 244.551 & -3.39E-08 & 218.413 & 794 & 4075.6824 &
-3.20E-09 & 378.269 & 1197 \\
SQP & 6.06E-10 & -105.038 & 372.210 & 357 & 3.73E-09 & -7.78E-02 &
598.258 & 341 & 244.551 & -3.39E-08 & 515.568 & 653 & 4075.6824 &
-3.20E-09 & 988.143 & 655 \\
Tseng-LS & 5.47E-15 & -119.365 & 13.682 & 13785 & 1.41E-14 & -9.81E-09 &
29.160 & 30110 & 244.551 & 1.16E-06 & 44.248 & 20717 & 4075.6822 &
2.67E-06 & 110.916 & 75254 \\
FBHF-LS & 2.15E-15 & -119.338 & 1.053 & 9680 & 5.56E-15 & -4.71E-09 &
2.220 & 21106 & 244.551 & 1.10E-06 & 10.381 & 19492 & 4075.6822 &
2.07E-06 & 17.464 & 60442\\
\hline
\end{tabular}
}
\vspace{.2cm}
\caption{Comparison of objective function and constraints values, time and number of
iterations of FIP, SQP, Tseng-LS and FBHF-LS algorithms for solving the entropy
constrained
optimization when $N=600$, $m=300$ and $r\in
\{-0.2N,-0.4N,-0.6N,-0.8N\}$.}
\label{tab:1}
\end{table}
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l||l|l|l|l||l|l|l|l||l|l|l|l|}
\hline
Time (s) & \multicolumn{4}{c||}{$100\times 200$} &
\multicolumn{4}{c||}{$200\times
400$}
&
\multicolumn{4}{c|}{$300\times 600$} \\ \hline
constraint & $r=-0.2N$ & $r=-0.4N$ & $r=-0.6N$ & $r=-0.8N$ & $r=-0.2N$ &
$r=-0.4N$ & $r=-0.6N$
& $r=-0.8N$ & $r=-0.2N$ & $r=-0.4N$ & $r=-0.6N$ & $r=-0.8N$ \\ \hline
FIP & 8.24 & 9.50 & 11.92 & 11.22 & 52.95 & 57.92 & 75.02 & 76.29 &
142.51 & 183.22 & 253.80 & 324.42 \\
SQP & 8.60 & 11.18 & 14.28 & 18.51 & 70.88 & 98.70 & 122.03 & 209.73
& 313.52 & 489.95 & 569.34 & 1075.42 \\% \hline
Tseng-LS & 22.92 & 10.71 & 7.92 & 9.91 & 81.16 & 13.46 & 39.17 & 83.01
& 139.47 & 26.50 & 84.07 & 111.44 \\
FBHF-LS & 2.21 & 0.99 & 2.72 & 2.51 & 7.06 & 0.95 & 10.23 & 18.09
& 12.48 & 1.88 & 20.59 & 18.30 \\ \hline
\end{tabular}
}
\vspace{.2cm}
\caption{Average time (s) to reach the stopping criterion of $20$ random realizations for
FIP,
SQP, Tseng-LS and FBHF-LS for a matrix $A$ with
dimension $100\times 200$, $200\times 400$ and $300\times 600$ and $r\in
\{-0.2N,-0.4N,-0.6N,-0.8N\}$.}
\label{tab:2}
\end{table}
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l||l|l|l||l|l|l||l|l|l|}
\hline
& & \multicolumn{3}{c||}{$500\times1000$} &
\multicolumn{3}{c||}{$1000\times2000$} & \multicolumn{3}{c|}{$1500\times3000$} \\
\hline
stop crit. & Algorithm & time (s) & iter. & \%
imp. & time (s) & iter. & \% imp. & time (s) & iter. & \% imp. \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$10^{-5}$}} & Tseng-LS & 17.22 & 2704 &
0 & 52.80 & 4027 & 0 & 91.23 & 3349 & 0 \\
\multicolumn{1}{|c|}{} & FBHF-LS & 2.33 & 1993 &
12.2 & 6.26 & 3239 & 3.2 & 9.56 & 2474 & 0.1 \\ \hline
\multicolumn{1}{|c|}{$10^{-10}$} & FBHF-LS & 10.67 & 10092
&
20.1 & 71.51
&
33637 & 6.5 & 53.47 & 12481 & 0.2 \\ \hline
\end{tabular}
}
\vspace{.2cm}
\caption{Comparison between Tseng-LS and FBHF-LS for higher dimensions. We compare
average time and average number of iterations for achieving the stopping criteria, together with
the percentage of relative improvement with respect to the Tseng-LS approximate solution.}
\label{tab:3}
\end{table}
}
\section{Conclusion}
In this paper, we systematically investigated a new extension of Tseng's
forward-backward-forward method and the forward-backward method. The three primary
contributions of this investigation are (1) a lower per-iteration complexity variant of
Tseng's method which activates the cocoercive operator only once; (2) the ability to
incorporate variable metrics in operator-splitting
schemes, which, unlike typical variable metric methods, do not enforce
compatibility conditions between metrics employed at successive time steps; and (3)
the ability to incorporate modified resolvents $J_{P^{-1} A}$ in iterative fixed-point
algorithms, which, unlike typical preconditioned fixed-point iterations, can be formed
from non-self-adjoint linear operators $P$, leading to new
Gauss--Seidel style operator-splitting schemes.
\vspace{.3cm}
{\footnotesize {\bf Acknowledgments:} This work is partially supported by NSF GRFP grant
DGE-0707424,
by CONICYT grant FONDECYT 11140360, and by ``Programa de financiamiento
basal'' from CMM, Universidad de Chile. We want to thank the two anonymous reviewers,
whose comments and concerns allowed us to improve the quality of this manuscript.}
\end{document}
\begin{document}
\title{Privacy-preserving Non-negative Matrix Factorization with Outliers}
\author{Swapnil~Saha,~and~Hafiz~Imtiaz~\IEEEmembership{}
\IEEEcompsocitemizethanks{
\IEEEcompsocthanksitem Swapnil Saha is a graduate student with the Department
of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh.\protect\\
E-mail: [email protected]
\IEEEcompsocthanksitem Hafiz Imtiaz is an Associate Professor with the Department
of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh.\protect\\
E-mail: [email protected].}
\thanks{Manuscript received XXXX XX, 2022; revised XXXX XX, XXXX.}}
\markboth{IEEE Transactions on Knowledge and Data Engineering,~Vol.~XX, No.~XX, XXXX~XXXX}
{Saha and Imtiaz: Privacy-preserving Non-negative Matrix Factorization with Outliers}
\IEEEtitleabstractindextext{
\begin{abstract}
Non-negative matrix factorization is a popular unsupervised machine learning algorithm for extracting meaningful features from data which are inherently non-negative. However, such data sets may often contain privacy-sensitive user data, and therefore, we may need to take necessary steps to ensure the privacy of the users while analyzing the data. In this work, we focus on developing a non-negative matrix factorization algorithm in the privacy-preserving framework. More specifically, we propose a novel privacy-preserving algorithm for non-negative matrix factorization capable of operating on private data, while achieving results comparable to those of the non-private algorithm. We design the framework such that one has the control to select the degree of privacy guarantee based on the utility gap. We show our proposed framework's performance on six real data sets. The experimental results show that our proposed method can achieve performance very close to that of the non-private algorithm under some parameter regimes, while ensuring strict privacy.
\end{abstract}
\begin{IEEEkeywords}
Differential Privacy, Non-negative Matrix Factorization, R\'enyi Differential Privacy, Topic Modelling, Facial Feature.
\end{IEEEkeywords}}
\maketitle
\IEEEdisplaynontitleabstractindextext
\IEEEpeerreviewmaketitle
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{N}{on-negative} matrix factorization (NMF) is an unsupervised machine learning technique for discovering the part-based representation of intrinsically non-negative data \cite{hoyer2004non}. For a $D \times N$ data matrix $\mathbf{V}$, where $N$ is the number of data samples, and $D$ is the data dimension, the entries satisfy $v_{ij} \geqslant 0\ \forall i\in \{1, 2, \ldots, D\}$ and $\forall j\in \{1, 2, \ldots, N\}$. The NMF objective is to decompose the data matrix as follows:
\begin{equation}\label{VWH}
\mathbf{V} \approx \mathbf{WH},
\end{equation}
where $\mathbf{W} \in \mathcal{R}^{D\times K}$ is the basis matrix, $\mathbf{H} \in \mathcal{R}^{K\times N}$ is the coefficient matrix, and $K$ is the reduced latent dimension. Here, each entry of $\mathbf{W}$ satisfies $w_{ik} \geqslant 0$, and each entry of $\mathbf{H}$ satisfies $h_{kj}\geqslant 0,\forall k\in\{1, 2, \ldots, K\}$. In short, NMF performs dimension reduction by mapping the ambient data dimension $D$ into reduced latent dimension $K$ for $N$ data samples. The $j$-th column of $\mathbf{V}$ can be written as:
\begin{equation}
\mathbf{v}_j \approx \mathbf{W} \mathbf{h}_j,
\end{equation}
where $\mathbf{h}_j$ is the $j$-th column of $\mathbf{H}$. Essentially, the $j$-th column of $\mathbf{V}$ is represented as a linear combination of all columns of $\mathbf{W}$ with the coefficients being the corresponding entries from $\mathbf{h}_j$. Therefore, the dictionary matrix $\mathbf{W}$ can be interpreted as storing the ``parts'' of the data matrix $\mathbf{V}$. This part-based decomposition, and considering only the non-negative values, make NMF popular in many practical applications including, but not limited to, dimension reduction, topic modeling in text mining, representation learning, extracting local facial features from human faces, unsupervised image segmentation, speech denoising, and community detection in social networks.
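As a quick, hedged illustration of the decomposition in \eqref{VWH} (using an off-the-shelf solver from scikit-learn rather than the robust and privacy-preserving algorithm developed in this paper), one may write:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

# V has shape (D, N): columns are non-negative data samples.
V = np.abs(np.random.randn(100, 500))
K = 8

# scikit-learn factorizes X (samples x features), so we pass V.T.
model = NMF(n_components=K, init='nndsvd', max_iter=500)
H = model.fit_transform(V.T).T   # K x N coefficient matrix
W = model.components_.T          # D x K dictionary (basis) matrix
print(np.linalg.norm(V - W @ H, 'fro'))
\end{verbatim}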
\noindent\textbf{NMF and Privacy. }\textcolor{blue}{In the modern era of big data, many services are customized for the users to provide better suggestions and experiences. Such customization is often done via some machine learning algorithm, which is trained or fine-tuned on users' sensitive data. As shown in both theoretical and applied works, the users may be rightfully concerned about their privacy being compromised by the machine learning algorithm's outputs. For example, the seminal work of Homer et al.~\cite{homer2008resolving} showed that the presence of an individual in a genome dataset can be identified from simple summary statistics about the dataset. In the machine learning setting, typically the model parameters (as a matrix $\mathbf{W}$ or vector $\mathbf{w}$) are learned by training on the users' data. To that end, ``Membership Inference Attacks'' \cite{fredrikson2015model} are discussed in detail by Hu et al.~\cite{hu2022membership} and Shokri et al.~\cite{shokri2017membership}. They showed that, given the learned parameters $\mathbf{W}$, an adversary can identify users in the training set. Basically, the model's trained weights can be used to extract sensitive information: the weights' tendency to memorize the training examples can be exploited to regenerate those examples. Several other works~\cite{narayanan2006break,le2013differentially,sweeney2015only} also showed how personal data can leak from modern machine learning (ML) and signal processing tasks. Additionally, it has been shown that simple anonymization of data does not provide any privacy in the presence of auxiliary information available from other/public sources~\cite{sweeney2015only}. One example is the Netflix prize, for which an anonymized data set was released in 2007. This data set contained anonymized movie ratings from Netflix subscribers. Nevertheless, researchers found a way to break the privacy of the dataset and successfully recovered $99\%$ of the removed personal data \cite{narayanan2006break} by using the publicly available IMDb dataset. In addition, privacy can leak through gradient sharing \cite{zhu2019deep} as well: in distributed machine learning systems, several nodes exchange gradient values, and the authors in~\cite{zhu2019deep} showed how one can extract private training information from the shared gradients. These discoveries led to widespread concern over using private data for public machine learning algorithms.} Evidently, personal information leakage is the main hindrance to collecting and analyzing sensitive user data for training machine learning and signal processing algorithms. It is, therefore, necessary to develop a framework in which one can share private data without disclosing one's participation or identity. Differential privacy (DP) is a rigorous mathematical framework that can protect against such information leakage \cite{dwork2014algorithmic}. The definition of differential privacy is motivated by cryptographic work, and it has gained significant attention in the machine learning and data mining communities \cite{dwork2006calibrating}. Although the goal may be the same, namely protecting the privacy of data, the way DP guarantees privacy differs substantially from cryptography~\cite{vaidya2006privacy} and information theory~\cite{zhou2013achieving}. In a differentially private mechanism, we can learn useful information about the population while ensuring the privacy of its members. This is done statistically by introducing randomness into the algorithm.
This randomness obscures the contribution of any individual participant's sensitive data. However, one may need to compromise the algorithm's utility to ensure privacy. That is, one needs to quantitatively choose the optimum privacy budget considering the required privacy-utility trade-off.
\noindent\textbf{Related Works. }Non-negative matrix factorization is obtained in the literature by minimizing the following objective function: $\min_{\mathbf{W} \in \mathcal{C}, h_{kj}\geqslant 0, \forall k,j} \norm{\mathbf{V} - \mathbf{WH}}_F^2$,
where $\mathcal{C} \subseteq \mathcal{R}^{D\times K}$ is the constraint set for $\mathbf{W}$. Several algorithms have been proposed to obtain the optimal point of this objective function, such as multiplicative updates~\cite{NIPS2000_f9d11525}, the alternating direction method of multipliers~\cite{xu2012alternating}, block principal pivoting~\cite{kim2008toward}, the active set method~\cite{kim2008nonnegative}, and projected gradient descent~\cite{lin2007projected}. Most of these algorithms are based on alternately updating $\mathbf{W}$ and $\mathbf{H}$. This divides the optimization problem into two sub-problems, each of which can be solved using standard optimization techniques, such as the projected gradient or the interior point method. A detailed survey of these optimization techniques can be found in~\cite{wang2012nonnegative, kim2014algorithms}. Our work is based on the robust NMF algorithm using projected gradient descent~\cite{zhao2016online}. This modified robust algorithm addresses two challenging scenarios: (i) when the data matrix $\mathbf{V}$ has a large number of data samples, and (ii) when outliers exist in the data samples. The second scenario is common in many practical cases, such as salt-and-pepper noise in image data and impulse noise in time series data. If these outliers and noises are not handled properly during matrix decomposition, the basis (or dictionary) matrix $\mathbf{W}$ may not be well optimized and may fail to learn the part-based representation. We discuss the implementation of the algorithm in detail in Section \ref{nmf_problem_formulation}.
Extensive works and surveys exist in the literature on differential privacy. In particular, Dwork and Smith's survey \cite{dwork2010differential} contains the earlier theoretical work. We refer the reader to \cite{abadi2016deep,chaudhuri2008privacy,chaudhuri2011differentially,song2013stochastic,ji2014differential,li2018differentially,bassily2014private,ligett2017accuracy,wang2017differentially} for the most relevant works in differentially private machine learning, deep learning, optimization problems, gradient descent, and empirical risk minimization. Adding randomness to the gradient calculation is one of the most common approaches for implementing differential privacy~\cite{song2013stochastic,bassily2014private}. Other common approaches are output~\cite{chaudhuri2011differentially} and objective perturbations~\cite{nozari2016differentially}, the exponential mechanism~\cite{mcsherry2007mechanism}, the Laplace mechanism~\cite{dwork2006calibrating} and smooth sensitivity~\cite{nissim2007smooth}. \textcolor{blue}{Last but not least, the work of Alsulaimawi~\cite{alsulaimawi2020non} introduced a privacy filter with federated learning for NMF. Fu et al.~\cite{fu2019cloud} implemented privacy-preserving NMF for dimension reduction using the Paillier cryptosystem, and Nikolaenko et al.~\cite{nikolaenko2013privacy} performed privacy-preserving matrix factorization for recommendation systems using partially homomorphic encryption with Yao's garbled circuits. Privacy Preserving Data Mining (PPDM) was implemented in \cite{afrin2019privacy} using combined NMF and Singular Value Decomposition (SVD) methods. The authors in~\cite{ran2022differentially} proposed differentially private NMF for recommender systems, and we show comparative analyses in Section \ref{Comparison}. However, to the best of our knowledge, no prior work has introduced differential privacy in the general NMF decomposition together with a privacy composition analysis of the multi-stage implementation to account for the overall privacy budget.}
\noindent\textbf{Our Contributions. }\textcolor{blue}{In this work, we intend to perform NMF decomposition on inherently non-negative and privacy-sensitive data. As the data matrix $\mathbf{V}$ contains user-specific information, an adversary can extract sensitive information regarding the users from the estimated dictionary matrix $\mathbf{W}$. However, the estimated $\mathbf{W}$ should encompass the fundamental basis of the population. Note that the data may have some outliers, which may cause one to capture an unrepresentative dictionary matrix if the outliers are not handled properly. To that end, we propose a privacy-preserving non-negative matrix factorization algorithm considering outliers. We propose to compute the dictionary matrix $\mathbf{W}$ satisfying a mathematically rigorous privacy guarantee, differential privacy, such that the computed $\mathbf{W}$ reflects very little about any particular user's data, is relatively unaffected by the presence of outliers, and closely approximates the true dictionary matrix. Our major contributions are summarized below:
\begin{itemize}
\item We develop a novel privacy-preserving algorithm for non-negative matrix factorization capable of operating on privacy-sensitive data, while closely approximating the results of the non-private algorithm.
\item We consider the effect of outliers by specifically modeling them, such that the presence of outliers have very little effect on the estimated dictionary matrix $\mathbf{W}$.
\item We analyze our algorithm with R\'enyi Differential Privacy \cite{mironov2017renyi} to obtain a much better accounting of the overall privacy loss, compared to the conventional strong composition rule~\cite{dwork2014algorithmic}.
\item We performed extensive experimentation on real datasets to show the effectiveness of our proposed algorithm. We compare the results with those of the non-private algorithm and observe that our proposed algorithm can offer close approximation to the non-private results for some parameter choices.
\item We present the result plots in a way that the user can choose between the overall privacy budget and the required ``closeness'' to non-private results (utility-gap).
\end{itemize}
}
\section{Problem formulation}
\subsection{Notations}
\begin{table}[t]
\centering
\begin{tabular}{c c}
\hline
Notation & Meaning \\
\hline
$\mathbf{V}$ & Data Matrix \\
$\mathbf{W}$ & Dictionary / Basis Matrix \\
$\mathbf{W}_{\textrm{private}}$ & Differentially private $\mathbf{W}$ \\
$\mathbf{H}$ & Coefficient Matrix\\
$\mathbf{R}$ & Outlier Matrix \\
$D$ & Ambient dimension of Data matrix \\
$K$ & Latent dimension \\
$N$ & Number of users in Data matrix \\
$\mathbf{v}_n$ & $n$-th user vector in Data matrix \\
$(\epsilon,\delta)$ & Privacy parameters \\
$\Updelta$ & Sensitivity \\
$\triangledown f(W)$ & Gradient function to update Dictionary matrix \\
$\overline{\triangledown f(W)}$ & Noisy gradient to learn $\mathbf{W}_{\textrm{private}}$ \\
$\mathbf{A},\mathbf{B}$ & Statistics matrices of $\triangledown f(W)$ \\
$\overline{\mathbf{A}},\overline{\mathbf{B}}$ & Noise perturbed statistics matrices of $\mathbf{A},\mathbf{B}$\\
\hline
\end{tabular}
\caption{Notations}
\label{tab:notation}
\end{table}
For clarity and readability, we denote vectors, matrices and scalars with different notation. A bold lower case letter $(\textbf{v})$, a bold capital letter $(\textbf{V})$ and an unbolded letter $(M)$ are used for a vector, a matrix and a scalar, respectively. To indicate the iteration instant, we use the subscript $t$. For example, $\mathbf{W}_t$ denotes the dictionary matrix after $t$ iterations. The superscript $'+'$ indicates a single update. The $n$-th column of the matrix $\textbf{V}$ is denoted as $\textbf{v}_n$. We denote indices with lowercase unbolded letters. For example, $v_{ij}$ indicates the entry in the $i$-th row and $j$-th column of the matrix $\mathbf{V}$. The inequalities $\mathbf{x} \geqslant 0$ and $\mathbf{X} \geqslant 0$ apply entry-wise. For element-wise matrix multiplication, we use the notation $\odot$. We denote the $\mathcal{L}_2$ norm (Euclidean norm) with $\norm{.}_2$, the $\mathcal{L}_{1,1}$ norm with $\norm{.}_{1,1}$, and the Frobenius norm with $\norm{.}_F$. $\mathds{R}$ and $\mathds{R}_{+}$ denote the set of real numbers and the set of positive real numbers, respectively. $\mathcal{P}_+$ denotes the Euclidean projector, which projects a value onto the non-negative orthant. Lastly, the probability density function of a zero-mean, unit-variance Gaussian random variable is given as $f(x)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)$. We summarize the frequently used notations in Table~\ref{tab:notation}.
\subsection{Definitions and Preliminaries} \label{definations}
In differential privacy, we define $\mathcal{D}$ as the domain of the databases consisting of $N$ records. We also define neighboring data sets $D,\ D^\prime$, which differ by only one record.
\begin{definition}
(($\epsilon,\delta$)-DP \cite{dwork2006calibrating}) An algorithm $f$ : $\mathcal{D} \mapsto \mathcal{T}$ provides ($\epsilon,\delta$)- differential privacy (($\epsilon,\delta$)-DP) if $P(f(D)\in \mathcal{S}) \leqslant \delta + e^\epsilon P(f(D^\prime)\in \mathcal{S} )$ for all measurable $\mathcal{S} \subseteq \mathcal{T} $ and for all neighbouring data sets $D,D^\prime \in \mathcal{D}$.
\end{definition}
Here, $\epsilon,\delta \geqslant 0$ are the privacy parameters and determine how the algorithm trades off utility against privacy. The parameter $\epsilon$ indicates how much the algorithm's output can deviate in probability when we replace one single person's data with another. The parameter $\delta$ indicates the probability that the privacy mechanism fails to give the guarantee of $\epsilon$. Intuitively, a higher privacy demand results in poorer utility: lower values of $\epsilon$ and $\delta$ guarantee more privacy but lower utility. There are several mechanisms to implement differential privacy: the Gaussian \cite{dwork2006calibrating}, Laplace \cite{dwork2014algorithmic}, random sampling, and exponential \cite{mcsherry2007mechanism} mechanisms are well-known. Among the additive noise mechanisms, the noise's standard deviation is scaled by the privacy budget and the sensitivity of the function.
\begin{definition}
($\mathcal{L}_2$ sensitivity \cite{dwork2006calibrating}) The $\mathcal{L}_2$ sensitivity of a vector-valued function $f(D)$ is $\Updelta := \max_{D,D^\prime} \norm{f(D)-f(D^\prime)}_{2}$, where $D$ and $D^\prime$ are neighboring data sets.
\end{definition}
The $\mathcal{L}_2$ sensitivity of a function gives an upper bound on how much noise we need to add to the function's output if we want to keep the guarantee of differential privacy. It captures the maximum change in the output caused by a single user in the worst-case scenario.
\begin{definition}
(Gaussian Mechanism \cite{dwork2014algorithmic}) Let $f : \mathcal{D} \mapsto \mathcal{R}^d $ be an arbitrary function with $\mathcal{L}_{2}$ sensitivity $\Updelta$. The Gaussian mechanism with parameter $\tau$ adds noise from $\mathcal{N}(0,\tau^2)$ to each of the $d$ entries of the output and satisfies $(\epsilon,\delta)$ differential privacy for $\epsilon \in (0,1)$ and $\delta \in (0,1)$, if $\tau \geq \frac{\Updelta}{\epsilon} \sqrt{2\log\frac{1.25}{\delta}}$.
\end{definition}
Here, $(\epsilon,\delta)$-differential privacy is guaranteed by adding noise drawn from the $\mathcal{N}(0,\tau^2)$ distribution. Note that there are infinitely many combinations of $(\epsilon,\delta)$ for a given $\tau^2$.
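As a minimal sketch (not specific to NMF), adding calibrated Gaussian noise to the output of a query with known $\mathcal{L}_2$ sensitivity can be written as follows; the function name is illustrative only.
\begin{verbatim}
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng=None):
    # Releases value + N(0, tau^2) noise with
    # tau = (sensitivity / eps) * sqrt(2 log(1.25 / delta)).
    rng = np.random.default_rng() if rng is None else rng
    tau = (sensitivity / eps) * np.sqrt(2.0 * np.log(1.25 / delta))
    return value + rng.normal(0.0, tau, size=np.shape(value))
\end{verbatim}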
\begin{definition}
(R\'enyi Differential Privacy \cite{mironov2017renyi}) A randomized algorithm $f$ : $\mathcal{D} \mapsto \mathcal{T}$ is $(\alpha,\epsilon_{r})$-R\'enyi differentially private if, for any adjacent $D,D^\prime \in \mathcal{D}$, the following holds: $D_{\alpha}(\mathcal{A}(D)\ || \mathcal{A}(D^\prime)) \leq \epsilon_{r}$. Here, $D_{\alpha}(P(x)||Q(x))=\frac{1}{\alpha-1}\log\mathds{E}_{x \sim Q} \Big(\frac{P(x)}{Q(x)}\Big)^{\alpha}$ and $P(x)$ and $Q(x)$ are probability density functions defined on $\mathcal{T}$.
\end{definition}
We use R\'enyi Differential Privacy for calculating the total privacy budget spent in multi-stage differentially private mechanisms. RDP provides a much simpler rule for calculating overall privacy risk $\epsilon$ that is shown to be tight \cite{mironov2017renyi}.
\newtheorem{prop}{Proposition}
\begin{prop}\label{rdp_dp}
(From RDP to DP \cite{mironov2017renyi}). If $f$ is an $(\alpha,\epsilon_r)$-RDP mechanism, it also satisfies $(\epsilon_r+\frac{\log 1/\delta}{\alpha-1},\delta)$-differential privacy for any $0<\delta<1$.
\end{prop}
\begin{prop}\label{comp_rdp}
(Composition of RDP \cite{mironov2017renyi}). Let $f_1:\mathcal{D}\rightarrow \mathcal{R}_1$ be $(\alpha,\epsilon_1)$-RDP and $f_2:{\mathcal{R}_1} \times \mathcal{D} \rightarrow \mathcal{R}_2$ be $(\alpha,\epsilon_2)$-RDP; then the mechanism defined as $(X_1,X_2)$, where $X_1 \sim f_1(\mathcal{D})$ and $X_2 \sim f_2(X_1,\mathcal{D})$, satisfies $(\alpha,\epsilon_1+\epsilon_2)$-RDP.
\end{prop}
\begin{prop}\label{rdp_gaussian}
(RDP and Gaussian Mechanism\cite{mironov2017renyi}). If $f$ has $\mathcal{L}_2$ sensitivity 1, then the Gaussian mechanism $\mathcal{G}_{\sigma}f(\mathcal{D})=f(\mathcal{D})+\mathcal{E}$ where $\mathcal{E} \sim \mathcal{N}(0,\sigma^2)$ satisfies $(\alpha,\frac{\alpha}{2\sigma^2})$-RDP. Additionally, a composition of $K$ such Gaussian mechanisms satisfies $(\alpha,\frac{\alpha K}{2 \sigma^2})$- RDP.
\end{prop}
\subsection{NMF Problem Formulation} \label{nmf_problem_formulation}
\begin{algorithm}[t]
\caption{General NMF Algorithms: Two-Block Coordinate Descent Method \cite{qian2020fast}}
\begin{algorithmic}[1]
\REQUIRE Data matrix $\mathbf{V}$, number of iteration $n$
\ENSURE Dictionary matrix $\mathbf{W}$
\\ \textit{Initialisation} : $\mathbf{W}_0$, $\mathbf{H}_0$
\FOR {$t = 0$ to $n-1$}
\STATE $\mathbf{W}_{t+1} \leftarrow $ update($\mathbf{V}$,$\mathbf{W}_{t}$,$\mathbf{H}_{t}$)
\STATE $\mathbf{H}_{t+1} \leftarrow $ update($\mathbf{V}$,$\mathbf{W}_{t+1}$,$\mathbf{H}_{t}$)
\ENDFOR
\RETURN Optimized dictionary matrix $\mathbf{W}$
\end{algorithmic}
\label{alg:alg_two_block}
\end{algorithm}
Most of the algorithms discussed in Section~\ref{sec:introduction} follow the two-block coordinate descent framework shown in Algorithm~\ref{alg:alg_two_block}. First, the dictionary matrix $\mathbf{W}$ is updated while keeping the coefficient matrix $\mathbf{H}$ constant. Then, the updated dictionary matrix $\mathbf{W}$ is used to update the coefficient matrix $\mathbf{H}$. The process continues until some convergence criterion is met. In our work, we use a robust non-negative matrix factorization solver considering the outliers~\cite{zhao2016online}, based on projected gradient descent (PGD). We intend to decompose the noisy data matrix $\mathbf{V}$ as $\mathbf{V} \approx \mathbf{WH} + \mathbf{R}$, where $\mathbf{W}$ and $\mathbf{H}$ are defined as before. The matrix $\mathbf{R} = [\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{N}] \in \mathcal{R}^{D\times N}$ contains the outliers of the data. Thus, the NMF optimization problem is reformulated as
\begin{equation} \label{update_optimization}
\min_{\mathbf{W} \in \mathcal{C}, \mathbf{H}\geqslant 0,\mathbf{R} \in \mathcal{Q}} \frac{1}{N} \big( \frac{1}{2} \norm{\mathbf{V}-\mathbf{W} \mathbf {H}-\mathbf{R}}_F^2 + \lambda\norm{\mathbf{R}}_{1,1} \big),
\end{equation}
where $\mathcal{C} \subseteq \mathcal{R}^{D\times K}$ is the constraint set for updating $\mathbf{W}$, $\mathcal{Q}$ is the feasible set for $\mathbf{R}$ and $\lambda \geqslant 0$ is the regularization parameter. Intuitively, the outlier matrix $\mathbf{R}$ is sparse in nature and contains smaller values compared to the noise-free original data entries. \textcolor{blue}{The sparsity of the outlier matrix $\mathbf{R}$ is enforced by the choice of $\mathcal{L}_{1,1}$-norm regularization, as discussed in \cite{zhao2016online}. The level of sparsity is controlled by the hyper-parameter $\lambda$. In line with the work of \cite{zhao2016online}, we used the same hyper-parameters in most cases for our experiments. Evidently, a grid search can be performed to select optimal hyper-parameters for our proposed differentially-private NMF. Additionally, one could compare the performance of NMF with outliers under $\mathcal{L}_{2}$ or other norms, but we defer that to future work.}
Note that the robust NMF algorithm cannot guarantee exact recovery of the original data matrix $\mathbf{V}$, as the loss function \eqref{update_optimization} is not convex \cite{zhao2016online}. However, it can be shown empirically that the estimated basis matrix $\mathbf{\hatv{W}}$ can be meaningful, and that the difference between the data matrix $\mathbf{V}$ and the reconstructed matrix $\mathbf{\hatv{W}\hatv{H}}$ is negligible and sparse \cite{zhao2016online,shen2014robust,fevotte2015nonlinear}. Now, following the two-block coordinate descent method in Algorithm~\ref{alg:alg_two_block}, we reformulate our optimization steps as follows:
\begin{enumerate}
\item Update the coefficient matrix $\mathbf{H}_t$ and outlier matrix $\mathbf{R}_t$ based on the fixed dictionary matrix $\mathbf{W}_{t-1}$. Here the optimization can be done as \begin{equation}\label{hr}
(\mathbf{H}_t,\mathbf{R}_t)= \argmin_{\mathbf{H}\geqslant 0,\mathbf{R} \in \mathcal{Q}} \hspace{5pt} L( \mathbf{V},\mathbf{W}_{t-1},\mathbf{H},\mathbf{R}),
\end{equation}
where the loss function $L$ is
\begin{align*}
L(\mathbf{V},\mathbf{W},\mathbf{H},\mathbf{R}) &\triangleq \frac{1}{N} ( \frac{1}{2} \norm{\mathbf{V}- \mathbf{WH}- \mathbf{R}}_F^2\\
& +\lambda \norm{\mathbf{R}}_{1,1}) \numberthis \label{loss_org}.
\end{align*}
Here, the constraint set $\mathcal{Q} \triangleq \set{\mathbf{r} \in \mathds{R}^D \text{ | } \norm{ \mathbf{r}}_{\infty} \leqslant M}$ keeps the entries of the outlier matrix $\mathbf{R}$ uniformly bounded. The value of $M$ depends on the data set and the noise distribution. For example, for gray scale image data with pixel values in $\{0,\ldots,2^b-1\}$, we can set $M=2^b-1$, where $b$ is the number of bits used to represent a pixel value.
\item After optimization with respect to $\mathbf{H}_t$ and $\mathbf{R_t}$, update $\mathbf{W}_t$ using the same loss function \eqref{loss_org}:
\begin{equation}\label{w}
\mathbf{W}_t= \argmin_{\mathbf{W} \in \mathcal{C}} \hspace{5pt} L(\mathbf{V},\mathbf{W},\mathbf{H}_{t},\mathbf{R}_{t}).
\end{equation}
Here, the set $\mathcal{C}$ constrains the columns of dictionary matrix $\mathbf{W}$ into a unit (non-negative) $\ell_2$ ball to keep the matrix entries bounded \cite{lin2007convergence,lin2007projected}. \\
\end{enumerate}
\noindent\textbf{PGD Solver for \eqref{hr}. }Equation \eqref{hr} can be solved by alternating between the following two steps for a fixed $\mathbf{W}$.
\begin{equation} \label{argminH}
\mathbf{H}^+ := \argmin_{\mathbf{H'} \geqslant 0}Q (\mathbf{H'} | \mathbf{H}),
\end{equation}
\begin{equation} \label{argminR}
\mathbf{R}^+= \argmin_{\mathbf{R'} \in \mathcal{Q}} \frac{1}{N}\Big ( \frac{1}{2} \norm{\mathbf{V}- \mathbf{WH_{t}}- \mathbf{R'}}_F^2 + \lambda \norm{\mathbf{R'}}_{1} \Big),
\end{equation}
where
\begin{equation}
Q(\mathbf{H'}|\mathbf{H}) \triangleq q(\mathbf{H})+<\bigtriangledown q (\mathbf{H}), \mathbf{H'}-\mathbf{H}> + \frac{1}{2\eta N} \norm{\mathbf{H'}-\mathbf{H}}_2^2,
\end{equation}
\begin{equation} \label{q(h)}
q(\mathbf{H}) \triangleq \frac{1}{2N} \norm{\mathbf{V}-\mathbf{WH}-\mathbf{R}}_F^2.
\end{equation}
Here, $\eta$ is the fixed step size. Both \eqref{argminH} and \eqref{argminR} have closed-form solutions. For \eqref{argminH}, the solution can be formulated as follows:
\begin{equation} \label{h_update}
\mathbf{H}^+ := \mathcal{P}_{+}(\mathbf{H}-\eta_{H} \bigtriangledown q(\mathbf{H}) ).
\end{equation}
Here, we replace the step size $\eta$ with $\eta_{H}$ to distinguish it from that of the dictionary matrix $\mathbf{W}$ update. We use a fixed step size to ease the hyper-parameter setting over the whole iteration process. $\bigtriangledown q(\mathbf{H})$ is the gradient obtained by taking the partial derivative of \eqref{q(h)} with respect to $\mathbf{H}$:
\begin{equation} \label{Hgradient}
\triangledown q(\mathbf{H})=\frac{1}{N} \big(\mathbf{W}^{\top}\mathbf{WH}-\mathbf{W}^{\top}( \mathbf{V}- \mathbf{R})\big).
\end{equation}
We use the following matrix properties \cite{petersen2008matrix} to find the expression of $\bigtriangledown q(\mathbf{H})$: $\norm{\mathbf{A}}_{F}^2=\tr(\mathbf{A}^\top\mathbf{A})$, and $\bigtriangledown \tr(\mathbf{X}^\top\mathbf{A}) = \mathbf{A}$, where the gradient is taken with respect to $\mathbf{X}$.
For \eqref{argminR}, the solution is straightforward.
\begin{equation}\label{r_update}
\mathbf{R}^+=S_{\lambda,M} \big(\mathbf{V}-\mathbf{WH^+}\big).
\end{equation}
Here, $S_{\lambda,M}(\mathbf{X})$ performs element-wise thresholding as:
\begin{equation}
S_{\lambda,M}(\mathbf{X})_{ij} :=
\begin{cases}
0, & |x_{ij}| < \lambda \\
x_{ij}-\textrm{sgn}(x_{ij})\lambda, & \lambda \leqslant |x_{ij}| \leqslant \lambda+M \\
\textrm{sgn}(x_{ij})M, & |x_{ij}| > \lambda+M.
\end{cases}
\end{equation}
In the $t$-th iteration, we update the matrices $\mathbf{H}$ and $\mathbf{R}$ according to \eqref{h_update} and \eqref{r_update}, until some stopping criterion is met~\cite{zhao2016online}.\\
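A compact Python sketch of these two updates, transcribing \eqref{h_update}, \eqref{Hgradient} and \eqref{r_update} with a fixed step size $\eta_H$, is given below; the function names are illustrative only.
\begin{verbatim}
import numpy as np

def update_H(V, W, H, R, eta_H):
    # projected gradient step of (h_update) with gradient (Hgradient)
    N = V.shape[1]
    grad = (W.T @ W @ H - W.T @ (V - R)) / N
    return np.maximum(H - eta_H * grad, 0.0)

def update_R(V, W, H, lam, M):
    # closed-form outlier update (r_update): element-wise S_{lambda,M}
    X = V - W @ H
    absX = np.abs(X)
    R = np.where(absX < lam, 0.0, X - np.sign(X) * lam)
    return np.where(absX > lam + M, np.sign(X) * M, R)
\end{verbatim}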
\noindent\textbf{PGD Solver for \eqref{w}. }We can rewrite (\ref{w}) as follows
\begin{equation} \label{WAB}
\mathbf{W}_t= \argmin_{\mathbf{W} \in \mathcal{C}} \frac{1}{2}\tr(\mathbf{W}^\top\mathbf{W}\mathbf{A}_t)-\tr(\mathbf{W}^\top\mathbf{B}_{t}),
\end{equation}
where $\mathbf{A}_t \triangleq \frac{1}{N}\mathbf{H}_{t}\mathbf{H}_{t}^\top$, and $\mathbf{B}_t \triangleq \frac{1}{N} (\mathbf{V-R_{t}})\mathbf{H}_{t}^\top$. To calculate the gradient, we define a new function $f_{W}(\mathbf{W})$.
\begin{equation} \label{fw}
f_{W}(\mathbf{W})= \frac{1}{2}\tr(\mathbf{W}^\top\mathbf{W}\mathbf{A}_t)-\tr(\mathbf{W}^\top\mathbf{B}_{t}).
\end{equation}
Taking the partial derivative of \eqref{fw} with respect to $\mathbf{W}$, we find the following expression:
\begin{equation}
\label{w_gradient}
\triangledown f_{W}(\mathbf{W})=\mathbf{WA}-\mathbf{B}.
\end{equation}
We use the aforementioned matrix property~\cite{petersen2008matrix} to derive the expression \eqref{w_gradient}. We write the update equation ensuring the constraints of $\mathbf{W}$.
\begin{equation} \label{w_update}
\mathbf{W}^+=\mathcal{P}_{\mathcal{C}}( \mathbf{W}-\eta_{W}\triangledown f_{W}(\mathbf{W})).
\end{equation}
Here, $\eta_{W}$ is the step size used to update $\mathbf{W}$. As in \eqref{h_update}, we use a fixed step size. The constraint projection keeps the columns of the dictionary matrix $\mathbf{W}$ in the unit $\ell_2$ ball. In \eqref{w_update}, each column is updated as follows:
\begin{equation}
\mathbf{w}_{n}^{+}:=\frac{\mathcal{P}_{+}\big(\mathbf{w}_{n}-\eta_{W}\triangledown f_{W}(\mathbf{w}_{n})\big)}{\max\Big(1,\norm{\mathcal{P}_{+}\big(\mathbf{w}_{n}-\eta_{W}\triangledown f_{W}(\mathbf{w}_{n})\big)}_{2}\Big)}, \quad \forall n \in [K].
\end{equation}
Similarly to the PGD solver for \eqref{hr}, we use \eqref{w_update} to update the dictionary matrix $\mathbf{W}$. The steps stated above are followed until we reach the optimum of the loss \eqref{update_optimization}. In Section~\ref{experimental_results}, we discuss hyper-parameter tuning and matrix initialization methods for better optimization.
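The corresponding $\mathbf{W}$ update, with each column projected onto the non-negative part of the unit $\ell_2$ ball, can be sketched as follows (again, a transcription of \eqref{w_gradient}--\eqref{w_update} rather than an optimized implementation).
\begin{verbatim}
import numpy as np

def update_W(W, A, B, eta_W):
    # gradient step with grad f_W(W) = W A - B, then column-wise projection
    W_new = np.maximum(W - eta_W * (W @ A - B), 0.0)  # non-negative part
    col_norms = np.linalg.norm(W_new, axis=0)
    return W_new / np.maximum(1.0, col_norms)         # shrink columns with norm > 1
\end{verbatim}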
Now that we have discussed estimating the dictionary matrix $\mathbf{W}$ from the data matrix $\mathbf{V}$ without any privacy constraint, in the next section we show how to control privacy leakage for NMF.
\section {Proposed method} \label{proposed_method}
\subsection{Separate Private and Non-private Training Nodes}
\begin{figure}
\caption{Schematic diagram of privacy-preserving NMF}
\label{fig:basic_archi}
\end{figure}
As mentioned earlier, we are interested in estimating the dictionary matrix $\mathbf{W}$, which captures the meaningful population features. The decomposition also produces two more matrices, $\mathbf{H}$ and $\mathbf{R}$.
All three matrices are estimated from the user data $\mathbf{V}$; however, we only share the dictionary matrix $\mathbf{W}$ publicly. Therefore, we need to estimate the matrix $\mathbf{W}$ satisfying differential privacy. Intuitively, we need to process the user-sensitive data and the population feature data separately. Fig.~\ref{fig:basic_archi} shows the basic system diagram that serves this purpose. There are two data processing centers: one is the ``Data Curator'', which holds the sensitive data: the data matrix $\mathbf{V}$, the coefficient matrix $\mathbf{H}$, as well as the outlier matrix $\mathbf{R}$. It updates the matrices $\mathbf{H}$ and $\mathbf{R}$ as described in Section~\ref{nmf_problem_formulation}. Next, it calculates the gradient $\triangledown f_{W}(\mathbf{W})$ and adds Gaussian noise whose variance depends on the privacy budget and the $\mathcal{L}_2$ sensitivity of the gradient function. The other data processing center is the ``Data Analyst'' center, to which the noisy gradient values are passed. This center then updates the dictionary matrix $\mathbf{W}$ with the received noisy gradient and passes the updated $\mathbf{W}$ back to the Data Curator. This cycle continues until the stopping criterion is met. At the end, we obtain a differentially-private dictionary matrix $\mathbf{W}_\textrm{private}$ at the Data Analyst center.
\subsection{Derivations Related to Estimating Differentially-Private $\mathbf{W}$}
\begin{table}[t]
\centering
\begin{tabular}{c c c c}
\hline
Dataset & $N$ & $D$ & $K$ \\
\hline
Guardian News Articles & $4551$ & $10285$ & $8$ \\
UCI-news-aggregator & $3000$ & $93$ & $7$ \\
RCV1 & $9625$ & $2999$ & $4$ \\
TDT2 & $9394$ & $3677$ & $30$ \\
YaleB & $2414$ & $1024$ & $38$ \\
CBCL & $2429$ & $361$ & $50$ \\
\hline
\end{tabular}
\caption{Summary of data sets}
\label{tab:dataset_size}
\end{table}
In this section, we will show the necessary proof and derivations related to the calculation of $\mathcal{L}_2$ sensitivity for estimating the dictionary matrix $\mathbf{W}$ satisfying differential privacy.
As mentioned earlier, we use the Gaussian mechanism to introduce differential privacy into our non-private algorithm. Prior to adding noise, we need to calculate the $\mathcal{L}_2$ sensitivity of the function whose output we will randomize. In our implementation, this is $\bigtriangledown f_{W}(\mathbf{W})$ -- the gradient function for updating the $\mathbf{W}$ matrix: $\triangledown f_{W}(\mathbf{W})=\mathbf{WA}-\mathbf{B}$.
As $\triangledown f_{W}(\mathbf{W})$ depends on the statistics matrix $\mathbf{A}$ and $\mathbf{B}$, we need to calculate their $\mathcal{L}_2$ sensitivity separately. Let us first calculate the $\mathcal{L}_2$ sensitivity of matrix $\mathbf{A}=\frac{1}{N} \mathbf{H}\mathbf{H}^\top$.
Consider two neighboring data sets whose corresponding coefficient matrices are $\mathbf{H}$ and $\mathbf{H}'$. By definition, they differ in only one user's data. In our calculation, we consider that the difference is in the last ($N$-th) column. We derive the $\mathcal{L}_2$ sensitivity $\Updelta_{A}$ following the definition.
\begin{align*}
\Updelta_{A} &= \max \frac{1}{N}\norm{\mathbf{H} \mathbf{H}^\top-\mathbf{H}' \mathbf{H}'^\top}_F \\
&=\max \frac{1}{N}\norm{\mathbf{h}_N \mathbf{h}^\top_N -\mathbf{h}'_N \mathbf{h}'^\top_N}_F \\
&\leqslant \max \frac{1}{N}\big(\norm{\mathbf{h}_N \mathbf{h}_N^\top}_F+\norm{\mathbf{h}'_N \mathbf{h}'^\top_N}_F\big) \\
&\leqslant \max \frac{1}{N} \big( \norm{\mathbf{h}_N}_2\norm{\mathbf{h}_N^\top}_2 + \norm{\mathbf{h}'_N}_2\norm{\mathbf{h}'^\top_N}_2 \big)\\
&=\max \frac{1}{N} \big (\norm{\mathbf{h}_N}_2^2+\norm{\mathbf{h}'_N}_2^2\big) \\
&=\frac{2}{N}\times \text{(max $\mathcal{L}_2$ norm of column of $\mathbf{H}$)}^2 \numberthis \label{A_sensitivity},
\end{align*}
where we have used the triangle inequality and $\norm{\mathbf{ab}}\leqslant \norm{\mathbf{a}}\norm{\mathbf{b}}$. To get a bounded value in \eqref{A_sensitivity}, we need to bound the maximum $\mathcal{L}_2$ norm of the columns of $\mathbf{H}$. One way to do that is by normalizing each column of $\mathbf{H}$, and therefore, $\Updelta_{A} = \frac{2}{N}$. Next, we calculate the $\mathcal{L}_2$ sensitivity of $\mathbf{B} = \frac{1}{N}(\mathbf{V-R})\mathbf{H}^\top$.
Following the definition, we consider two neighboring data matrices $\mathbf{V}$ and $\mathbf{V}'$ and their corresponding neighboring coefficient matrices $\mathbf{H}$ and $\mathbf{H}'$. The detailed calculation of the $\mathcal{L}_2$ sensitivity $\Updelta_{B}$ is given as follows:
\begin{align*}
\Updelta_{B} &= \max \frac{1}{N}\norm{(\mathbf{V}-\mathbf{R}) \mathbf{H}^\top-(\mathbf{V}'-\mathbf{R}) \mathbf{H}'^\top}_F \\
&=\max \frac{1}{N}\norm{(\mathbf{v}_N-\mathbf{r}_N) \mathbf{h}^\top_N-(\mathbf{v}_N'-\mathbf{r}_N) \mathbf{h}'^\top_N}_F \\
&\leqslant \max \frac{1}{N}\big(\norm{(\mathbf{v}_N-\mathbf{r}_N) \mathbf{h}^\top_N}_F+\norm{(\mathbf{v}_N'-\mathbf{r}_N)\mathbf{h}'^\top_N}_F\big) \\
&\leqslant \max \frac{1}{N} \big( \norm{\mathbf{v}_N-\mathbf{r}_N}_2\norm{\mathbf{h}^\top_N}_2\\ &+\norm{\mathbf{v}_N'-\mathbf{r}_N}_2\norm{\mathbf{h}'^\top_N}_2 \big) \\
&=\max \frac{1}{N} \big (\norm{\mathbf{v}_N-\mathbf{r}_N}_2+\norm{\mathbf{v}_N'-\mathbf{r}_N}_2\big)\\
&=\max \frac{2}{N} \norm{\mathbf{v}_N-\mathbf{r}_N}_2 \leqslant \max \frac{2}{N} (\norm{\mathbf{v}_N}_2+\norm{\mathbf{r}_N}_2)\numberthis \label {B_sensitivity},
\end{align*}
where the second-last equality follows from $\max_{\forall n}\norm{\mathbf{h}_n}_2=1$, and the last inequality follows from $\norm{\mathbf{a}-\mathbf{b}}\leqslant \norm{\mathbf{a}}+\norm{\mathbf{b}}$. To get a constant $\mathcal{L}_2$ sensitivity value in \eqref{B_sensitivity}, we can normalize the columns of $\mathbf{V}$ and $\mathbf{R}$ to have unit $\mathcal{L}_2$-norm. Thus, we have $\Updelta_{B} = \frac{4}{N}$. Note that, if we do not model the outliers explicitly, we would have $\Updelta_{B}=\frac{2}{N}$.
Now, as we have computed the $\mathcal{L}_2$ sensitivities $\Updelta_{A}$ and $\Updelta_{B}$, we can generate noise perturbed statistics $\overline{\mathbf{A}},\overline{\mathbf{B}}$ following the Gaussian mechanism~\cite{dwork2014algorithmic}. Using these values, we can compute the noisy gradient $\overline{\triangledown f_{W}(\mathbf{W})}$ and update our dictionary matrix $\mathbf{W}$. At the end of optimization, we will obtain the differentially private dictionary matrix $\mathbf{W}_\textrm{private}$. The detailed step-by-step description of our proposed method is summarized in Algorithm~\ref{alg:nmf_private_outlier}.
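A possible sketch of the per-iteration computation at the Data Curator, releasing the noise-perturbed statistics $\overline{\mathbf{A}}$ and $\overline{\mathbf{B}}$ (with the columns of $\mathbf{H}$, $\mathbf{V}$ and $\mathbf{R}$ normalized as assumed above), is shown below; the Data Analyst then forms the noisy gradient $\mathbf{W}\overline{\mathbf{A}}-\overline{\mathbf{B}}$, and the function name is illustrative only.
\begin{verbatim}
import numpy as np

def private_statistics(V, H, R, eps_t, delta, rng=None):
    # Gaussian-mechanism perturbation of A = H H^T / N and B = (V - R) H^T / N
    rng = np.random.default_rng() if rng is None else rng
    N = V.shape[1]
    dA, dB = 2.0 / N, 4.0 / N                    # sensitivities derived above
    c = np.sqrt(2.0 * np.log(1.25 / delta)) / eps_t
    A_bar = H @ H.T / N + rng.normal(0.0, dA * c,
                                     size=(H.shape[0], H.shape[0]))
    B_bar = (V - R) @ H.T / N + rng.normal(0.0, dB * c,
                                           size=(V.shape[0], H.shape[0]))
    return A_bar, B_bar
\end{verbatim}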
\begin{algorithm}[t]
\caption{Privacy Preserving NMF with Outliers}
\begin{algorithmic}[1]
\REQUIRE Data matrix $\mathbf{V}$, step size $\eta_H$ and $\eta_W$, maximum number of iterations $T$, sensitivities $\Updelta_{A}$ and $\Updelta_{B}$, privacy parameters $\epsilon_t,\ \delta$, regularization parameter $\lambda$
\ENSURE Differentially private dictionary matrix $\mathbf{W}_\textrm{private}$
\\ \textit{Initialisation} : $\mathbf{W}_0$, $\mathbf{H}_0$, $\mathbf{R}_0$.
\FOR {$t = 1$ to $T$}
\STATE \algorithmiccomment{/* At Data Curator */} Learn $\mathbf{H}_t$ and $\mathbf{R}_t$ based on $\mathbf{W}_{t-1}$ \\
$(\mathbf{H}_t,\mathbf{R}_t) := \argmin_{\mathbf{H}\geqslant 0,\mathbf{R} \in \mathcal{Q}} \hspace{5pt} L( \mathbf{V},\mathbf{W}_{t-1},\mathbf{H},\mathbf{R})$,\\
where matrices update as follow\\
$\mathbf{H}^+ := \mathcal{P}_{+}(\mathbf{H}-\eta_{H} \bigtriangledown q(\mathbf{H}) ).$\\
$\mathbf{R}^+ :=S_{\lambda,M} \big(\mathbf{V}-\mathbf{WH^+}\big).$
\STATE Calculate the noise perturbed statistics matrices\\
$\overline{\mathbf{A}_t} := \frac{1}{N} \mathbf{H}_{t}\mathbf{H}_{t}^\top + \mathcal{N}(0,\tau^2_A)^{K \times K}$,\\
$\overline{\mathbf{B}_t} := \frac{1}{N} (\mathbf{V-R_{t}})\mathbf{H}_{t}^\top + \mathcal{N}(0,\tau^2_B)^{D \times K}$,\\
where\\
$\tau_A = \frac{\Updelta_A}{\epsilon_t} \sqrt{2\log\frac{1.25}{\delta}}$.\\
$\tau_B = \frac{\Updelta_B}{\epsilon_t} \sqrt{2\log\frac{1.25}{\delta}}$.
\STATE Calculate the noisy gradient $\overline{\triangledown f_{W}(\mathbf{W})}$ with the noise perturbed statistics matrices: $\overline{\triangledown f_{W}(\mathbf{W})}=\mathbf{W}_{t-1}\overline{\mathbf{A}_{t}}-\overline{\mathbf{B}_{t}}$ \\
\STATE \algorithmiccomment{/* At Data Analyst */} Learn dictionary matrix with the noisy gradient $\overline{\triangledown f_{W}}$
$\mathbf{W}_t := \argmin_{\mathbf{W} \in \mathcal{C}} \frac{1}{2}\tr(\mathbf{W}^\top\mathbf{W}\overline{\mathbf{A}_t})-\tr(\mathbf{W}^\top\overline{\mathbf{B}_{t}})$.\\
where dictionary matrix updates as follow\\
$\mathbf{W}^+ :=\mathcal{P}_{\mathcal{C}}( \mathbf{W}-\eta_{W}\overline{\triangledown f_{W}(\mathbf{W})})$.
\ENDFOR
\RETURN $\mathbf{W}_\textrm{private}$
\end{algorithmic}
\label{alg:nmf_private_outlier}
\end{algorithm}
\textcolor{blue}{
\begin{theorem}[Privacy of Algorithm~\ref{alg:nmf_private_outlier}]
\label{theorem1}
Consider Algorithm~\ref{alg:nmf_private_outlier} in the setting of Section~\ref{nmf_problem_formulation}. Then Algorithm~\ref{alg:nmf_private_outlier} releases a $\big(\frac{T\alpha_\mathrm{opt}}{2}(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B})+\frac{\log \frac{1}{\delta}}{\alpha_\mathrm{opt}-1},\delta\big)$ differentially-private basis matrix $\mathbf{W}_\textrm{private}$ for any $0<\delta <1$ after $T$ iterations, where $\alpha_{\mathrm{opt}}=1+\sqrt{\frac{2}{T(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B}) } \log \frac{1}{\delta}}$.
\end{theorem}
\begin{proof}
We now analyze Algorithm~\ref{alg:nmf_private_outlier} with R\'enyi Differential Privacy (RDP)~\cite{mironov2017renyi}. Recall that, at each iteration $t$, we compute the noisy estimate of the gradient $\overline{\triangledown f_{W}(\mathbf{W})}$, using two differentially-private matrices $\overline{\mathbf{A}_t}$ and $\overline{\mathbf{B}_t}$. According to Proposition~\ref{rdp_gaussian}, computation of these matrices satisfy $\left(\alpha, \frac{\alpha}{2\left(\frac{\tau_A}{\Updelta_A}\right)^2}\right)$-RDP and $\left(\alpha, \frac{\alpha}{2\left(\frac{\tau_B}{\Updelta_B}\right)^2}\right)$-RDP, respectively. According to Proposition~\ref{comp_rdp}, each step of Algorithm~\ref{alg:nmf_private_outlier} is $(\alpha,\frac{\alpha}{2}(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B}))$-RDP. If the number of required iterations for reaching convergence is $T$, then under $T$-fold composition of RDP, the overall algorithm is $(\alpha,\frac{T\alpha}{2}(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B}))$-RDP. From Proposition \ref{rdp_dp}, we have that the algorithm satisfies $(\frac{T\alpha}{2}(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B}) + \frac{\log\frac{1}{\delta}}{\alpha-1},\delta)$-DP for any $0<\delta <1$. For a given $\delta$, we can compute the optimal $\alpha$ as $\alpha_{ \mathrm{opt}} = 1+\sqrt{\frac{2}{T(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B}) } \log \frac{1}{\delta}}$. This $\alpha_{ \mathrm{opt}}$ provides the smallest $\epsilon$, i.e., the smallest privacy risk. Therefore, Algorithm~\ref{alg:nmf_private_outlier} releases a $(\frac{T\alpha_ \mathrm{opt}}{2}(\frac{\Updelta^2_A}{\tau^2_A}+\frac{\Updelta^2_B}{\tau^2_B}) + \frac{\log\frac{1}{\delta}}{\alpha_ \mathrm{opt}-1},\delta)$ differentially-private basis matrix $\mathbf{W}_\textrm{private}$ for any $0<\delta <1$.
\end{proof}
}
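To make the guarantee concrete, the following sketch evaluates the $(\epsilon,\delta)$ pair of Theorem~\ref{theorem1} for given per-iteration noise levels; the numerical values are placeholders and do not correspond to our experiments.
\begin{verbatim}
import numpy as np

def total_epsilon(T, dA, dB, tau_A, tau_B, delta):
    # epsilon from Theorem 1: RDP composition over T iterations,
    # converted to (epsilon, delta)-DP and optimized over alpha.
    rho = 0.5 * T * (dA**2 / tau_A**2 + dB**2 / tau_B**2)
    alpha_opt = 1.0 + np.sqrt(np.log(1.0 / delta) / rho)
    return alpha_opt * rho + np.log(1.0 / delta) / (alpha_opt - 1.0)

N, T = 10000, 200                    # placeholder problem size
dA, dB = 2.0 / N, 4.0 / N
print(total_epsilon(T, dA, dB, tau_A=1e-3, tau_B=1e-3, delta=1e-5))
\end{verbatim}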
\noindent\textbf{Convergence of Algorithm~\ref{alg:nmf_private_outlier}. }We note that the objective function is
non-increasing under the two update steps (i.e., steps 2 and 5), and the objective function is bounded below. Additionally, the noisy gradient estimate $\overline{\triangledown f_{W}(\mathbf{W})}$ essentially contains zero-mean noise. Although this does not provide guarantees on the excess error, the estimate of the gradient converges in expectation to the true gradient~\cite{bottou1999}. However, if the batch size is too small, the noise can be too high for the algorithm to converge~\cite{song2013stochastic}. Since the total additive noise variance is quite small in our setting, the convergence rate remains fast. Note that a theoretical analysis of the intricate relation between the excess error and the privacy parameters is beyond the scope of the current paper. We refer the reader to Bassily et al.~\cite{bassily2014private} for further details.
\section {Experimental results}
\label{experimental_results}
In this section, we compare the utility of the private and non-private algorithms. For this comparison, we define an objective value that quantifies how well the algorithm can decompose the matrix. The objective value is calculated using the following formula:
\begin{equation} \label{objective_value}
\text{Objective Value} = \frac{1}{2N} \norm{\mathbf{V}^o-\mathbf{WH}}^2_F .
\end{equation}
Here, $\mathbf{V}^o$ is the noise-free clean data set. In our experiments, some of the data sets contain noise and some do not. For a fair comparison, we evaluate only how well the decomposition $\mathbf{W}\mathbf{H}$ can reconstruct the noise-free data $\mathbf{V}^o$.
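A minimal sketch of this evaluation metric is given below; it assumes, following our notation, that the columns of $\mathbf{V}^o$ are the $N$ data samples.
\begin{verbatim}
import numpy as np

def objective_value(V_clean, W, H):
    """Objective value defined above: reconstruction error on the noise-free data."""
    N = V_clean.shape[1]   # number of data samples (columns); an assumption of this sketch
    return 0.5 / N * np.linalg.norm(V_clean - W @ H, 'fro') ** 2
\end{verbatim}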
To evaluate our proposed method, we use six real data sets, of which four are text data sets and two are face image data sets. The sizes of the data sets and the corresponding latent dimensions $K$ are listed in Table~\ref{tab:dataset_size}. \textcolor{blue}{
For the selection of the latent dimension $K$, we describe one procedure in Appendix \ref{topic_modeling}, where we calculate the topic coherence score of the Guardian News Articles data set to select the optimum number of topics. Similar procedures can be followed for the rest of the data sets.} In the experiments, we use a fixed $\delta=10^{-5}$ and vary $\epsilon$ to show the effect of the privacy budget. Also, we normalize each data sample of $\mathbf{V}$ so that its maximum entry is one.
\subsection{Hyper-parameter Selection and Initialization}
\label{hyperparameter}
We followed the hyper-parameter settings of~\cite{zhao2016online} except for the learning rate. The authors in~\cite{zhao2016online} suggested using the same learning rate for updating both the dictionary matrix $\mathbf{W}$ and the coefficient matrix $\mathbf{H}$ for ease of parameter tuning. The optimization process requires a different configuration in our privacy-preserving implementation. As discussed in Section \ref{proposed_method}, we add Gaussian noise in the gradient calculation $\triangledown f_{W}(\mathbf{W})$ for updating $\mathbf{W}$, whereas the matrix $\mathbf{H}$ is updated with its original gradient calculation $\triangledown q(\mathbf{H})$. This dissimilar nature of the updates of $\mathbf{W}$ and $\mathbf{H}$ motivates us to use different learning rates. \textcolor{blue}{We note that to obtain fast convergence, we need to choose the learning rates for updating $\mathbf{W}$ and $\mathbf{H}$ carefully. To that end, we can employ a grid search to find the optimum learning rates for $\mathbf{H}$ and $\mathbf{W}$. The time required for such a grid search depends on the dataset and the search space. With sub-optimal learning rates, the convergence may be delayed, but the excess error of the proposed algorithm should be approximately the same, given that sufficient time is allowed and a small enough learning rate is used.} In our experiments, we performed a grid search, and found that the learning rate for updating $\mathbf{W}$ should be much lower (about 10,000 to 20,000 times, depending on the dataset) compared to that of updating $\mathbf{H}$. To initialize $\mathbf{W}_0$ and $\mathbf{H}_0$, we followed the Non-negative Double Singular Value Decomposition (NNDSVD) \cite{boutsidis2008svd} approach, as we found that it performs better than random initialization in minimizing the objective value in \eqref{objective_value}. For sparseness, we initialized $\mathbf{R}_0$ with all zeros.
\subsection{Text Data Set}
\begin{figure*}
\caption{Guardian News Articles}
\label{fig:git_utility}
\caption{UCI-news-aggregator}
\label{fig:uci_utility}
\caption{RCV1}
\label{fig:rcv1}
\caption{TDT2}
\label{fig:TDT2}
\caption{Utility Comparison on Text Data Set}
\label{fig:utility_text}
\end{figure*}
\begin{figure*}
\caption{Guardian News Articles}
\label{fig:rdp_gurdian}
\caption{UCI-news-aggregator}
\label{fig:uci_rdp}
\caption{RCV1}
\label{fig:rcv1_rdp}
\caption{TDT2}
\label{tdt2_rdp}
\caption{Overall $\epsilon$ and Objective Value on Text Data Set}
\label{fig:rdp_text}
\end{figure*}
\begin{figure*}
\caption{Non-Private Guardian News Articles}
\label{}
\caption{$(\epsilon=0.5,\delta=10^{-5})$-DP Guardian News Articles}
\label{}
\caption{Non-Private UCI-news-aggregator}
\label{}
\caption{$(\epsilon=0.5,\delta=10^{-5})$-DP UCI-news-aggregator}
\label{}
\caption{Topic Word Comparison}
\label{fig:topic_word_text}
\end{figure*}
\begin{table}[t]
\centering
\begin{tabular}{c c c}
\hline $\epsilon$ & Guardian News Articles & UCI-news-aggregator \\
\hline
0.5 & 0.4511 & 0.9996975 \\
0.6 & 0.4529 & 0.9996895 \\
0.7 & 0.4535 & 0.9996849 \\
0.8 & 0.4547 & 0.9996836\\
0.9 & 0.4543 & 0.9996832\\
0.999 & 0.4540 & 0.9996839 \\
Non-Private & 0.4658 & 0.9996816 \\
\hline
\end{tabular}
\caption{Comparison of Average Coherence Scores}
\label{tab:compare_coherence_text}
\end{table}
With the text data sets, we apply the topic modeling algorithm. Topic modeling is a statistical algorithm that identifies the abstract topics present in a set of documents. A document generally contains multiple topics with different proportions. Suppose a document is said to be ``80\% about religion and 20\% about politics''. In that case, about 80 percent of its words relate to religion and 20 percent to politics. The topic modeling algorithm tries to find the unique ``cluster of words'' that indicates a single topic in the document. We discuss topic modeling and its implementation further in Appendix \ref{topic_modeling}.
We evaluate our proposed method by showing the learning curve with respect to a variable privacy budget $\epsilon$ in each iteration, as well as the topic word distribution. We also calculate the overall $\epsilon$ using the RDP calculation and compare the objective value with that of the non-private mechanism in order to select the optimum $\epsilon_i$ for each iteration. For two data sets, we compare the topic word distribution and the average coherence score of the non-private and private algorithms.
A short description of each text data set is given below.
\noindent\textbf{Guardian News Articles.} This data set consists of 4551 news articles collected from the Guardian News API in 2006. The detailed mechanism of collecting the articles is described in \cite{o2015analysis}. Here we extract eight distinct topics ($K=8$) from the dataset and show the high-scoring word distributions corresponding to the topics.
\noindent\textbf{UCI News Aggregator Dataset.} This data set \cite{Dua:2019} is formed by collecting news from a web aggregator from 10-March-2014 to 10-August-2014. There is a total of 422937 news articles in the data set. The topics covered in the news articles are entertainment, science and technology, business, and health. We take 750 news articles from each category and apply the NMF algorithm.
\noindent\textbf{RCV1.} Reuters Corpus Volume I (RCV1) \cite{RCV1} archive consists of over 800,000 manually categorized news wires. For our experiment, we randomly select approximately $\frac{1}{10}$-th of the features that contain 9625 documents.
\noindent\textbf{TDT2.} The TDT2 \cite{TDT2} text database contains 9394 documents, forming a $9394 \times 36771$ document-term matrix. Here, we randomly select $\frac{1}{10}$-th of the features.
\noindent\textbf{Utility Comparison on Text Data Set} Fig. \ref{fig:utility_text} shows the utility gap between the private and non-private mechanisms' outputs for the text data sets. For all the data sets, there is only a small utility gap, and this gap decreases further for a higher privacy budget $\epsilon$. Comparing the convergence speed, private learning needs more epochs to reach the optimum loss point. This is because we have to keep the learning rate lower in private learning. Also, the noisy gradient can further delay reaching the optimum loss point.
\noindent\textbf{Average Topic Coherence Score} Table \ref{tab:compare_coherence_text} shows the average topic coherence score comparison for Guardian News Articles and UCI-news-aggregator data sets.
The topic coherence score quantitatively measures how well the topic modeling algorithm performs. In short, topic coherence attempts to capture human interpretability in a mathematical framework by measuring the semantic link between high-scoring words. The greater the coherence score, the more human-interpretable the ``cluster of words'' is. It is also used to tune the hyper-parameter $K$, the number of topics. We discuss topic coherence in more detail in Appendix \ref{topic_modeling}.
As Table \ref{tab:compare_coherence_text} shows, the topic coherence score increases with increasing privacy budget $\epsilon$ for the Guardian News Articles data set, as expected. For the UCI-news-aggregator data set, all the scores are very close to each other across privacy budgets. The almost identical optimum loss points for private and non-private learning in Fig. \ref{fig:uci_utility} also explain these highly similar coherence scores.
\noindent\textbf{Overall $\epsilon$ on Text Data Set} Fig. \ref{fig:rdp_text} shows the overall $\epsilon$ and also the utility gap between the private and non-private mechanisms after reaching the optimum solution. This result helps to decide how much privacy budget $\epsilon_i$ one should allocate at each iteration, considering the utility gap and the overall $\epsilon$.
\noindent\textbf{Topic Word Comparison} Fig. \ref{fig:topic_word_text} shows the topic word comparison for the private and non-private algorithms. We choose two data sets for this comparison: UCI-news-aggregator and Guardian News Articles. The topic word distributions look almost identical for the private and non-private mechanisms, which is consistent with the highly similar coherence scores (Table \ref{tab:compare_coherence_text}).
\subsection{Face Image Data Set}
\begin{figure*}
\caption{Yaleb Data Set}
\label{fig:Yaleb}
\caption{CBCL Data Set }
\label{fig:CBCL}
\caption{Utility Comparison on Face Image Data Set}
\label{fig:utility_face}
\end{figure*}
\begin{figure*}
\caption{Yaleb Dataset}
\label{fig:rdp_yaleb}
\caption{CBCL Dataset}
\label{fig:rdp_cbcl}
\caption{ Overall $\epsilon$ and Objective Value on Face Image Data Set}
\label{fig:rdp_face}
\end{figure*}
\begin{figure*}
\caption{Non-Private Yaleb Data Set}
\label{}
\caption{$(\epsilon=0.5,\delta=10^{-5})$-DP Yaleb Data Set}
\label{fig:yaleb_private}
\caption{Non-Private CBCL Data Set}
\label{}
\caption{$(\epsilon=0.5,\delta=10^{-5})$-DP CBCL Data Set}
\label{}
\caption{Basic Representation Comparison}
\label{fig:basic_represent_face}
\end{figure*}
With the face image data sets, we generate the fundamental facial features from which one can reconstruct all the face images of the data set. The details of the implementation are discussed in Appendix \ref{face_image_nmf}. As for the text data sets, we also show the overall $\epsilon$ and the utility comparison to select $\epsilon_i$ for each iteration. Additionally, as the effect of outliers is clearly visible and common in practice for image data, we also conduct experiments on a data set contaminated with outliers. A short description of each face image data set is given below:
\noindent\textbf{YaleB.} There are 2414 face images in YaleB \cite{YALEB} of size $32 \times 32$. The sample images are captured in different lighting conditions. There are 38 subjects (males and females) in the data set.
\noindent\textbf{CBCL.} The CBCL \cite{CBCL} database contains 2429 face images of size $19\times 19$. The facial photos consist of $19 \times 19$ hand-aligned frontal shots. Each face image is processed beforehand. The grey scale intensities of each image are first linearly adjusted so that the pixel mean and standard deviation are equal to 0.25, and then clipped to the range [0, 1].
\noindent\textbf{Utility Comparison on Face Image Data Sets} Fig. \ref{fig:utility_face} shows the learning curves of the private and non-private mechanisms for the face image data sets. All the characteristics of the simulation results are similar to those for the text data sets. The utility gap is very small and becomes even smaller for a higher privacy budget $\epsilon$.
\noindent\textbf{Overall $\epsilon$ on Face Image Data Sets} Fig. \ref{fig:rdp_face} shows the overall $\epsilon$ and the utility gap after reaching the optimum loss point. Based on the desired level of privacy as well as the tolerable utility gap, one can select how much privacy budget $\epsilon_i$ to introduce in each iteration.
\noindent\textbf{Basic Representation Comparison} Fig. \ref{fig:basic_represent_face} shows how the algorithm learns the fundamental representation of face images under the private and non-private mechanisms. In the $(\epsilon=0.5,\delta=10^{-5})$-DP private case, the facial features are noisy compared to the non-private mechanism. However, they can still generate interpretable human facial features.
\noindent\textbf{Data Set with Outliers} We also performed experiments to demonstrate the effect of outliers. We contaminated the YaleB dataset with outliers, as mentioned in \cite{zhao2016online}. In short, we randomly chose 10\% of the user data from the dataset, and then contaminated 70\% of the pixels with uniform noise $\mathcal{U}[-1,1]$. The simulation results are shown in Fig. \ref{fig:outlier_face_yaleb}. In Section \ref{proposed_method}, it has been shown mathematically that the $\mathcal{L}_2$ sensitivity of matrix $\mathbf{B}$ doubles when we allow updating the outlier matrix $\mathbf{R}$. A higher $\mathcal{L}_2$ sensitivity requires more noise to guarantee the demanded privacy budget $(\epsilon,\delta)$. Thus the basic representation in Fig. \ref{fig:yaleb_basic_outlier} is noisier compared to that in Fig. \ref{fig:yaleb_private}.
\begin{figure*}
\caption{Utility comparison}
\label{}
\caption{ Overall $\epsilon$ and Objective Value}
\label{}
\caption{Basic Representation on Non-Private}
\label{}
\caption{Basic Representation on $(\epsilon=0.5,\delta=10^{-5})$-DP Private}
\label{fig:yaleb_basic_outlier}
\caption{Yaleb Data Set with Outlier}
\label{fig:outlier_face_yaleb}
\end{figure*}
\subsection{Comparison}
\label{Comparison}
\begin{table}[t]
\centering
\begin{tabular}{c c c}
\hline
Privacy Budget & Ours & DPNMF \\
\hline
Non-private & $0.0624$ & $0.8568$ \\
$\epsilon=0.3$ & $0.0675$ & $1.1953$ \\
$\epsilon=0.5$ & $0.0648$ & $1.0785$ \\
$\epsilon=0.7$ & $0.0640$ & $ 1.0155$ \\
\hline
\end{tabular}
\caption{RMSE comparison on MovieLens 1M Dataset}
\label{tab:simultion_compare}
\end{table}
Among existing work, \cite{ran2022differentially} proposed a differentially-private NMF algorithm (DPNMF) only for recommender systems, using the Laplace mechanism. In contrast, our proposed method works for any part-based-learning NMF task. Moreover, the Laplace mechanism adds noise calibrated to the $\mathcal{L}_1$ sensitivity rather than the $\mathcal{L}_2$ sensitivity. In our context, and in many machine learning tasks where noise must be added to vectors with many elements, the $\mathcal{L}_2$ sensitivity is much smaller than the $\mathcal{L}_1$ sensitivity \cite{near_abuah_2021}. Furthermore, their work fails to calculate the $\mathcal{L}_1$ sensitivity of the desired objective function directly, whereas we can precisely calculate the $\mathcal{L}_2$ sensitivity of the gradient function. Moreover, they used the alternating non-negative least squares algorithm (ANLS) \cite{gillis2011nonnegative} as their base NMF approach, which has no mechanism to remove the effect of outliers.
We also performed simulations for comparison on the MovieLens 1M Dataset \cite{harper2006movielens}, and used a similar evaluation metric, the RMSE, as in~\cite{ran2022differentially}: $\text{RMSE} = \frac{1}{\sqrt{N}} \norm{\mathbf{V}-\hat{\mathbf{V}} \odot \mathbf{X}}_2$,
where $\mathbf{V} \in \mathcal{R}_{+}^{U \times I}$ is the user-item matrix, $\hat{\mathbf{V}}$ and $\mathbf{X}$, of the same shape as $\mathbf{V}$, are the predicted user-item matrix and the observation mask respectively, and $N$ is the number of user-item pairs. Each entry $v_{ui}$ of the user-item matrix denotes the rating that user $u \in U$ gives to item $i \in I$. Each entry $x_{ui}$ of the observation mask matrix is set to 1 if user $u$ has rated item $i$, and 0 otherwise. The findings of the simulation comparison are provided in Table~\ref{tab:simultion_compare}. The comparison shows that our method outperforms DPNMF.
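A minimal sketch of this metric is given below; we read the norm as the Frobenius norm, assume $\mathbf{V}$ is zero at unobserved entries, and count $N$ as the number of observed pairs, all of which are assumptions of this sketch rather than specifications from \cite{ran2022differentially}.
\begin{verbatim}
import numpy as np

def masked_rmse(V, V_hat, X):
    """RMSE between ratings and masked predictions, following the formula above."""
    N = X.sum()   # number of observed user-item pairs (assumption of this sketch)
    return np.linalg.norm(V - V_hat * X) / np.sqrt(N)
\end{verbatim}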
\section {Conclusion and Future Works}
We proposed a novel privacy-preserving NMF algorithm that can learn the dictionary matrix $\mathbf{W}_{private}$ from the data matrix $\mathbf{V}$ while preserving the privacy of the data in any NMF-related task. Our proposed algorithm enjoys an $(\epsilon,\delta)$-differential privacy guarantee with good performance results. Following the Gaussian mechanism, it adds white additive noise to the function's output based on the privacy budget and the $\mathcal{L}_2$ sensitivity. The proposed algorithm shows results comparable to the non-private algorithm. Moreover, we calculate the overall $\epsilon$ for multi-stage composition cases using R\'enyi Differential Privacy. The overall $\epsilon$, along with the utility gap plot, gives control over the selection of $\epsilon_i$ in each epoch. One can choose a privacy budget $\epsilon_i$ for each iteration stage depending on how much utility gap one can tolerate and how much privacy one wants to preserve.
We experimentally validated our proposed method and compared the results on six real data sets. All the results show a small utility gap compared to the non-private mechanism. When comparing the learning curves, the private mechanism needs more epochs to reach the optimum loss point because we have to use a smaller step size in private learning. Also, adding noise in the $\triangledown f_W$ calculation can disturb the training. For the text data sets, the topic word distributions of the private and non-private mechanisms are similar. We quantitatively measure this similarity using the topic coherence score. For the face image data sets, we compare the facial feature construction. In private learning, the facial features contain some noise because of the noisy gradient $\triangledown f_W$. However, the features can still show the fundamental facial parts of the human face. Also, our experimental results on facial decomposition show that the image data sets are more sensitive to privacy noise than the text data sets. Mitigating this noise effect on the face image data sets is an interesting direction for future work.
Here we keep the private data matrix $\mathbf{V}$ at a single node, the data curator. However, when it is not possible to accumulate all the private data at a single node, we need to extend our mechanism to federated and decentralized learning \cite{wei2020federated}. It will be interesting to see how our proposed mechanism works under a decentralized framework. Moreover, we use an offline method to implement the NMF algorithm in our implementation. Nevertheless, for big data implementations, we need to focus on online learning and batch learning. This points to another possible privacy framework: privacy amplification by sub-sampling \cite{balle2018privacy}.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The authors would like to express their sincere gratitude towards the authorities of the Department of Electrical and Electronic Engineering and Bangladesh University of Engineering and Technology (BUET) for providing constant support throughout this research work.
\ifCLASSOPTIONcaptionsoff
\fi
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{pdf/Swapnil.png}}]{Swapnil Saha}
received his B.Sc. degree from the Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh in 2022. His research interest focuses on math-driven problems requiring efficient solutions that include but are not limited to optimization, information theory, and privacy-preserving machine learning. Currently, he is doing research on distributed machine learning algorithm with a focus on data privacy.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{pdf/Hafiz-Imtiaz.jpg}}]{Hafiz Imtiaz}
completed his PhD from Rutgers University, New Jersey, USA in 2020. He earned his second M.Sc. degree from Rutgers University in 2017, his first M.Sc. degree and his B.Sc. degree from Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh in 2011 and 2009, respectively. He is currently an Associate Professor with the Department of Electrical and Electronic Engineering at BUET. Previously, he worked as an intern at Qualcomm and Intel Labs, focusing on activity/image analysis and adversarial attacks on neural networks, respectively. His primary area of research includes developing privacy-preserving machine learning algorithms for decentralized data settings. More specifically, he focuses on matrix and tensor factorization, and optimization problems, which are core components of modern machine learning algorithms.
\end{IEEEbiography}
\appendices
\section{Topic Modeling and its implementation}
\label{topic_modeling}
\subsection{Topic Modeling}
Topic modeling is a statistical model used in statistics and natural language processing to discover the abstract “topics” that occur in a collection of documents. Topic modeling is a common text-mining technique to uncover hidden semantic structures within a text body. Given that a document is about a specific topic, one would expect certain words to appear more or less frequently: “dog” and “bone” will appear more frequently in documents about dogs, “cat” and “meow” will appear more frequently in documents about cats, and “the” and “is” will appear roughly equally in both. A document typically addresses multiple topics in varying proportions; therefore, a document that is 10 \% about cats and 90 \% about dogs would likely contain nine times as many dog words as cat words. The “topics” generated by topic modeling techniques are word clusters. A topic model encapsulates this intuition in a mathematical framework, enabling the examination of a set of documents and the identification of their potential topics and balance of topics based on the statistics of their words.
Topic models are also known as probabilistic topic models, which refer to statistical algorithms for identifying the latent semantic structures of a large text body. In this information age, the amount of written material we encounter daily exceeds our capacity to process it. Extensive collections of unstructured text bodies can be organized and comprehended better with topic models. Originally developed as a tool for text mining, topic models have been used to detect instructive structures in data, including genetic information, images, and networks. They have applications in fields such as computer vision \cite{cao2007spatially} and bioinformatics \cite{blei2012probabilistic}.
A text document consists of one or more topics. In a mathematical context, we can say that each text document is formed by a linear combination of topics. Each topic reflects its semantic meaning through some representative `cluster of words'. In topic modeling, we find these representative clusters of words from the corpus, together with the coefficient weights that indicate how strongly each topic is present in a single document relative to the others. In the context of the NMF decomposition, the data matrix $\mathbf{V}$ contains the text documents, the dictionary matrix $\mathbf{W}$ contains the topic words, and the coefficient matrix $\mathbf{H}$ contains the coefficient weights.\\
\subsection{Text Pre-processing}
The first step before applying any topic modeling algorithm is text preprocessing. Raw documents contain textual words which need to be converted into numerical form. To do so, we split each document into words and give a unique token to each of them.
Let us say we have five documents in our corpus, and there is a total of 100 unique words present in all documents. Then, after tokenizing the corpus of documents, we form a matrix $\mathbf{A}$ of size $100 \times 5$. The column index indicates the document number, and the row index indicates the specific term word. If $a_{ij}=3$ in $\mathbf{A}$ with $i=50$, $j=4$, it means that term word number 50 is used 3 times in the \nth{4} document.
However, we need further preprocessing to do actual topic modeling. Intuitively, not all words in a document contribute equally to determining its topic category. Besides, some high-frequency words (like articles and auxiliary verbs) and low-frequency words do not indicate a specific topic. We remove these unnecessary words and add weight to the important topic words. The former is done easily by simple text preprocessing like maximum frequency filtering, minimum frequency filtering, and stop-word filtering (based on a predefined list of high-frequency English words). To give extra weight to important topic words, we need to introduce a new mathematical framework: term frequency-inverse document frequency (TF-IDF) \cite{salton1988term,berger2000bridging}.
TF-IDF quantifies how ``important'' a term word is for determining the topic category of a specific document. The calculation involves two steps. First, it computes the frequency of the term word in that specific document. Then it computes its frequency across all documents. The second factor penalizes term words that are common in all documents. The equation of TF-IDF is as follows \cite{ramos2003using}
\begin{equation}
w_d=f_{w,d} \times \log \big(\frac{|D|}{f_{w,D}}\big).
\end{equation}
In our implementation, we use the default scikit-learn function TfidfVectorizer() to produce the TF-IDF normalized document-term matrix. According to our notation, TfidfVectorizer() produces a matrix of size $D \times N$, where $D$ is the number of term words after processing the text data and $N$ is the total number of documents present in the corpus. Now the corpus of raw documents is ready for the NMF algorithm.
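A minimal sketch of this preprocessing step is given below; the toy corpus and the filtering thresholds are illustrative and not the exact settings of our experiments, and the sketch assumes a recent scikit-learn version providing \texttt{get\_feature\_names\_out}.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",          # toy corpus; replace with the real documents
        "dogs chase the ball",
        "cats and dogs play"]

# stop-word removal and frequency filtering, followed by TF-IDF weighting
vectorizer = TfidfVectorizer(stop_words="english", max_df=0.95, min_df=1)
V = vectorizer.fit_transform(docs).T       # transpose to get the D x N term-document matrix
terms = vectorizer.get_feature_names_out()
print(V.shape, len(terms))
\end{verbatim}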
\subsection{Implementation through NMF}
\begin{figure}
\caption{Topic Word Distribution of Guardian News Articles}
\label{fig:topic_git}
\end{figure}
If we apply the NMF algorithm to the TF-IDF normalized document-term matrix, we get two matrices: the dictionary matrix $\mathbf{W}$ ($D \times K$) and the coefficient matrix $\mathbf{H}$ ($K \times N$), where $K$ is the number of topics present in the corpus. The $\mathbf{W}$ matrix shows the word distribution of each topic. We can identify the topic category by observing the highest entry values.
Let us revisit the experimental implementation for the Guardian News Articles dataset discussed in Section \ref{experimental_results}. We get the topic word distributions shown in Fig. \ref{fig:topic_git}. Here, the eight distinct topic word distributions indicate eight distinct topics. Applying NMF in the topic modeling algorithm requires one important hyper-parameter selection: the number of topics $K$. Though here we assume $K=8$ before applying NMF, there is a systematic way to tune this hyper-parameter. This is done by measuring the topic coherence score.
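A minimal sketch of this step with the scikit-learn NMF solver is shown below; the random matrix stands in for a real TF-IDF matrix, and the solver, initialization, and vocabulary are illustrative assumptions (our privacy-preserving algorithm is not used here).
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((200, 50))                 # stand-in D x N TF-IDF matrix (200 terms, 50 docs)
terms = [f"word{i}" for i in range(200)]  # stand-in vocabulary

K = 8
model = NMF(n_components=K, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(V)                # D x K dictionary (topic-word) matrix
H = model.components_                     # K x N coefficient matrix

for k in range(K):                        # ten highest-scoring words of each topic
    top = np.argsort(W[:, k])[::-1][:10]
    print(f"topic {k}:", [terms[i] for i in top])
\end{verbatim}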
\subsection{Topic Coherence}
\begin{figure}
\caption{Mean Coherence vs Number of Topics}
\label{fig:mc_topics}
\end{figure}
Topic Coherence measures the semantic similarity between high-scoring words. These measurements help to differentiate between semantically interpretable topics and those that are statistical artifacts of inference. There are numerous methods for measuring coherence, such as NPMI, UMass, TC-W2V, etc \cite{o2015analysis}. Our study uses the TC-W2V method to measure the coherence score.
Fig. \ref{fig:mc_topics} shows the comparison of mean coherence scores with respect to the number of topics. This figure suggests selecting $K=8$ as the topic number to get the optimum human interpretability from the topic word distribution.
\section{Extracting Local Facial Feature by NMF}
\label{face_image_nmf}
\subsection{Interpret the Decomposition of Face Image }
\begin{figure}
\caption{Face Image Decomposition}
\label{fig:Face_image}
\end{figure}
The extraction of local facial features is one of the most elegant and practical applications of NMF. The basic idea behind this decomposition is to extract fundamental local facial features so that one can reconstruct any image of the data set using appropriate weights. To extract these fundamental features, one first needs to construct the data matrix $\mathbf{V}$, where each column of $\mathbf{V}$ represents the pixel information of an individual image. If we now apply the NMF algorithm to matrix $\mathbf{V}$, we generate two matrices: matrix $\mathbf{W}$ stores the facial features, and $\mathbf{H}$ stores the coefficients. Fig. \ref{fig:Face_image} shows a visual representation of the result.\\
Suppose we want to reconstruct an image of the data set, for example the $100^{\mathrm{th}}$ column image of matrix $\mathbf{V}$. Then we take all the facial features from matrix $\mathbf{W}$ and the $100^{\mathrm{th}}$ column vector of matrix $\mathbf{H}$ as coefficients. We multiply the features by the coefficients and add them linearly. This reconstructs the $100^{\mathrm{th}}$ column image of matrix $\mathbf{V}$ with little loss.
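A minimal sketch of this reconstruction is given below; the column index and the $32\times 32$ image shape (matching the YaleB images) are illustrative assumptions.
\begin{verbatim}
import numpy as np

def reconstruct_image(W, H, j, height=32, width=32):
    """Reconstruct the j-th column image of V as a linear combination of the features."""
    v_hat = W @ H[:, j]                   # weighted sum of the facial features in W
    return v_hat.reshape(height, width)

# e.g. reconstruct_image(W, H, 99) recovers the 100th image (with little loss)
\end{verbatim}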
\end{document} |
\begin{document}
\title{Local turnpike analysis using local dissipativity for discrete time discounted optimal control}
\author{Lars Gr\"une and Lisa Kr\"ugel}
\publishers{Chair of Applied Mathematics, Mathematical Institute\\
Universit\"at Bayreuth, Germany}
\date{\today}
\maketitle
\begin{abstract}
Recent results in the literature have provided connections between the so-called turnpike property, near optimality of closed-loop solutions, and strict dissipativity. Motivated by applications in economics, optimal control problems with discounted stage cost are of great interest. In contrast to non-discounted optimal control problems, it is more likely that several asymptotically stable optimal equilibria coexist. Due to the discounting and the transition cost from a local to the global equilibrium, it may be more favourable to stay in a local equilibrium than to move to the global (cheaper) equilibrium. In the literature, strict dissipativity was shown to provide criteria for global asymptotic stability of optimal equilibria and turnpike behavior. In this paper, we propose a local notion of discounted strict dissipativity and a local turnpike property, both depending on the discount factor. Using these concepts, we investigate the local behaviour of (near-)optimal trajectories and develop conditions on the discount factor to ensure convergence to a locally asymptotically stable optimal equilibrium.\\
\textbf{Keywords: Discounted Optimal Control, Dissipativity, Turnpike}
\end{abstract}
\section{Introduction}
{\let\thefootnote\relax\footnotetext{The authors are supported by DFG Grant Gr 1569/13-2.}}
In recent years, dissipativity as introduced into systems theory by Willems \cite{Will72a,Will72b} has turned out to be a highly useful concept in order to understand the qualitative behaviour of optimally controlled systems. While related ideas were already present in early works by Willems \cite{Will71} in a linear quadratic setting, the approach has been revived and extended to fully nonlinear problems, motivated by the observation of the importance of dissipativity concepts in model predictive control \cite{DiAR10,AnAR12,MuAA15,MuGA15} and for the characterization of the turnpike property \cite{GruM16}. The turnpike property expresses the fact that optimal (and possibly also near-optimal) trajectories stay within a vicinity of an optimal equilibrium for most of the time. It can be seen as a way to generalize asymptotic stability properties of optimal equilibria to finite- and infinite-horizon optimal control problems.
While the references just discussed addressed non-discounted optimal control problems, the results from \cite{GGHKW18,GaGT15,GMKW20,Gruene2015} show that central results from this theory can be carried over to discounted optimal control problems and complement detectability-based approaches such as \cite{PBND17,PBND14} for analysing global asymptotic stability of equilibria of discounted optimally controlled systems.
A crucial difference between discounted and non-discounted optimal control problems is that in discounted problems it is much more likely that several asymptotically stable optimal equilibria coexist. Indeed, assuming complete controllability, in non-discounted optimal control two optimal equilibria can only coexist for arbitrary long (or infinite) horizons if they yield exactly the same optimal cost. Otherwise, for sufficiently long time it will always be beneficial to steer the system from the more expensive equilibrium to the cheaper one. In contrast to this, in discounted optimal control, due to the discounting it may not be possible to compensate for the transition cost from one equilibrium to the other with the lower cost of staying in the cheaper equilibrium. Therefore, in the discounted case locally asymptotically stable equilibria with different costs may coexist even for infinite horizon problems. In mathematical economy, where discounted optimal control problems are an important modelling tool, this is a well known fact at least since the pioneering work of Skiba \cite{Skib78} and Dechert and Nishimura \cite{DecN83}, and since then it was observed in many other papers, see, e.g., \cite{HKHF03} and the references therein.
It is the goal of this paper to show that a local version of the strict dissipativity property for discounted optimal control problems can be used for obtaining local convergence results to optimal equilibria. More precisely, we show that in the presence of local strict dissipativity and appropriate growth conditions on the optimal value functions there exist two thresholds for the discount factor $\beta\in(0,1)$, denoted by $\beta_1$ and $\beta_2$, with the following properties: Whenever $\beta\ge \beta_1$, any optimal trajectory that stays near a locally optimal equilibrium converges to this equilibrium. Whenever $\beta\le \beta_2$, any optimal trajectory that starts near this equilibrium will stay near the equilibrium. Together, this yields an interval $[\beta_1,\beta_2]$, which --- provided that $\beta_1\le \beta_2$ holds --- contains the discount factors for which convergence of optimal trajectories to the locally optimal equilibrium holds locally.
We formalize this convergence behaviour using the formalism from turnpike theory (see, e.g., \cite{Gruene2017}), because this provides a convenient way to express these properties in a mathematically precise way also for near-optimal trajectories and to link our results to the recent literature on the relation between dissipativity and turnpike properties. We carry out our analysis in discrete time because this simplifies some of our arguments, yet we think that conceptually similar results can also be achieved for continuous time problems.
The remainder of this paper is organised as follows. In Section \ref{sec:setting} we introduce the precise problem formulation and notation. Section \ref{sec:global} summarises the known results for globally strictly dissipative discounted problems. In Section \ref{sec:local} we show how this result can be reformulated in case that only local strict dissipativity holds, provided the trajectories under consideration satisfy an invariance condition. In Section \ref{sec:stay} we then show that this invariance condition is ``automatically'' satisfied under suitable conditions. Section \ref{sec:main} then contains the main result by bringing together the two results from Sections \ref{sec:local} and \ref{sec:stay}. In Section \ref{sec:ex} we illustrate our results by several examples and the final Section \ref{sec:conclusion} provides a brief concluding discussion.
\section{Setting and preliminaries}\label{sec:setting}
\subsection{System class and notation}
We consider discrete time nonlinear systems of the form
\begin{equation}\label{eq: nsys}
x(k+1)=f(x(k),u(k)),\quad x(0)=x_0
\end{equation}
for a map $f: X\times U\to X$, where $X$ and $U$ are normed spaces. We impose the constraints $(x,u)\in \mathbb{Y}\subset X\times U$ on the state $x$ and the input $u$ and define $\mathbb{X}:=\{x\in X \mid \exists u\in U: (x,u)\in\mathbb{Y}\}$ and $\mathbb{U}:=\{u\in U\mid \exists x\in X: (x,u)\in \mathbb{Y}\}$. A control sequence $u\in \mathbb{U}^N$ is called admissible for $x_0\in\mathbb{X}$ if $(x(k),u(k))\in\mathbb{Y}$ for $k=0,\dots,N-1$ and $x(N)\in\mathbb{X}$. In this case, the corresponding trajectory $x(k)$ is also called admissible. The set of admissible control sequences is denoted by $\mathbb{U}^N(x_0)$. Likewise, we define $\mathbb{U}^\infty(x_0)$ as the set of all control sequences $u\in\mathbb{U}^\infty$ with $(x(k),u(k))\in\mathbb{Y}$ for all $k\in\mathbb{N}_0$. Furthermore, we assume that $\mathbb{X}$ is controlled invariant, i.e.\ that $\mathbb{U}^\infty(x_0)\neq \emptyset$ for all $x_0\in\mathbb{X}$. The trajectories of \eqref{eq: nsys} are denoted by $x_u(k,x_0)$ or simply by $x(k)$ if there is no ambiguity about $x_0$ and $u$.
We will make use of comparison-functions defined by
\begin{align*}
\mathcal{K} :=\{\alpha:\mathbb{R}_0^+\to\mathbb{R}^+_0&| \alpha \text{ is continuous and strictly increasing with }\alpha(0)=0\}\\
\mathcal{K}_\infty :=\{\alpha:\mathbb{R}_0^+\to\mathbb{R}^+_0&|\alpha\in\mathcal{K}, \alpha \text{ is unbounded}\}\\
\mathcal{L}:=\{\delta:\mathbb{R}_0^+\to\mathbb{R}^+_0&|\delta\text{ is continuous and strictly decreasing with} \lim_{t\to\infty}\delta(t)=0\}\\
\mathcal{KL}:=\{\beta:\mathbb{R}_0^+\times\mathbb{R}_0^+\to\mathbb{R}_0^+&|\beta \text{ is continuous, } \beta(\cdot,t)\in\mathcal{K}, \beta(r,\cdot)\in\mathcal{L}\}.
\end{align*}
Moreover, with $\mathcal{B}_\varepsilon(x_0)$ we denote the open ball with radius $\varepsilon>0$ around $x_0$.
In this paper we consider infinite horizon discounted optimal control problems, i.e. problems of the type
\begin{equation}\label{dis OCP}
\min_{u\in\mathbb{U}^\infty(x_0)} J_\infty(x_0,u) \quad \text{with } J_\infty(x_0,u)=\sum_{k=0}^\infty \beta^k\ell(x(k,x_0),u(k)).
\end{equation}
Herein, the number $\beta \in (0,1)$ is called the discount factor.
For such problems it was shown in \cite{GGHKW18} that if the optimal control problem is strictly dissipative at an optimal equilibrium $x^\beta$, then for sufficiently large $\beta\in(0,1)$ all optimal trajectories converge to a neighbourhood of $x^\beta$. This neighbourhood shrinks down to $x^\beta$ when $\beta\to 1$, cf.\ \cite[Theorem 4.4]{GGHKW18}. Under slightly stronger conditions on the problem data one can even show that the optimal trajectories converge to the optimal equilibrium $x^\beta$ itself and not only to a neighbourhood, cf.\ \cite[Section 6]{GGHKW18}. We will show in Theorem \ref{th: disturninf}, below, that this result can be rewritten in the language of turnpike theory, in which convergence is weakened to the property that the trajectories stay in a neighbourhood of the optimal equilibrium for a (quantifiable) amount of time, but not necessarily forever. While only the optimal trajectories satisfy convergence to the optimal equilibrium, we will show that also near-optimal trajectories satisfy the turnpike property.\footnote{We note that the turnpike property can also be defined for finite horizon optimal control problems. Still, we restrict ourselves to the infinite horizon case, since it was shown in \cite{Gruene2017} that under mild conditions on the problem data the finite horizon turnpike property holds if and only if the infinite horizon turnpike property holds.}
While this global turnpike result follows from a relatively straightforward modification of the arguments in \cite{GGHKW18}, the main question that we want to address in this paper is more difficult: assume that strict dissipativity does not hold globally but only in a neighbourhood of a locally optimal equilibrium $x_l^\beta$. Can we still expect to see a turnpike property of trajectories starting close to $x_l^\beta$?
For the derivation of our technical results, we make frequent use of the dynamic programming principle
\[V_\infty(x_0)= \inf_{u\in\mathbb{U}^1(x_0)}\{\ell(x,u)+\beta V_\infty(f(x_0,u))\},\]
where
\[V_\infty(x_0):=\min_{u\in\mathbb{U}^\infty(x_0)}J_\infty(x_0,u)\]
denotes the optimal value function of \eqref{dis OCP}.
If $u^*\in\mathbb{U}^\infty(x_0)$ is an optimal control sequence for an initial value $x_0\in\mathbb{X}$, i.e.\ if $J_\infty(x_0, u^*)=V_\infty(x_0)$ holds, then the identity
\[V_\infty(x_0)=\ell(x_0,u^*(0))+\beta V_\infty(f(x_0,u^*(0)))\]
holds. Proofs for these statements can be found, e.g., in \cite[Section 4.2]{Gruene2017a}.
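To illustrate the dynamic programming principle numerically, the following Python sketch performs value iteration for a simple one-dimensional discounted problem with dynamics $f(x,u)=x+u$ and stage cost $\ell(x,u)=x^2+u^2/10$; the dynamics, cost, grids, and discount factor are purely illustrative assumptions and not an example from the literature. The simulated optimal trajectory approaches the optimal equilibrium, which for this cost lies near the origin.
\begin{verbatim}
import numpy as np

beta = 0.9
xs = np.linspace(-2.0, 2.0, 201)          # state grid
us = np.linspace(-1.0, 1.0, 41)           # control grid

def f(x, u):                              # dynamics, clipped to the state grid
    return np.clip(x + u, xs[0], xs[-1])

def ell(x, u):                            # stage cost
    return x**2 + 0.1 * u**2

V = np.zeros_like(xs)
for _ in range(500):                      # value iteration: V(x) = min_u {ell + beta V(f(x,u))}
    nxt = f(xs[:, None], us[None, :])
    Q = ell(xs[:, None], us[None, :]) + beta * np.interp(nxt.ravel(), xs, V).reshape(nxt.shape)
    V = Q.min(axis=1)

x = 1.5                                   # simulate a (numerically) optimal trajectory
for k in range(30):
    q = ell(x, us) + beta * np.interp(f(x, us), xs, V)
    x = f(x, us[np.argmin(q)])
print(x)                                  # close to the optimal equilibrium near 0
\end{verbatim}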
We denote optimal trajectories by $x^*(k,x_0)$ and we say that a set $\mathbb{X}_{inv}\subset \mathbb{X}$ is forward invariant for the optimally controlled system, if for each $x_0\in\mathbb{X}_{inv}$ it follows that $x^*(k,x_0)\in\mathbb{X}_{inv}$ for all $k\geq 0$ and all optimal trajectories starting in $x_0$.
\section{The global discounted turnpike property}\label{sec:global}
In this section we first consider the optimal control problem \eqref{dis OCP} assuming {\em global} strict dissipativity. We show that under similar technical assumptions and with a similar proof technique as in \cite{GGHKW18} we can obtain a global turnpike result for near-optimal trajectories. To this end, we first introduce discounted strict dissipativity and afterwards we use it to conclude the turnpike property.
\subsection{Global discounted strict dissipativity}
We denote an equilibrium of system \eqref{eq: nsys} in the discounted case by $(x^\beta,u^\beta)$ since the equilibria depend on the discount factor $\beta\in (0,1)$.
\begin{defn}\label{def:discdiss}
Given a discount factor $\beta\in(0,1)$, we say that the system \eqref{eq: nsys} is discounted strictly dissipative at an equilibrium $(x^\beta,u^\beta)$ with supply rate $s:\mathbb{Y}\to\mathbb{R}$ if there exists a storage function $\lambda:\mathbb{X}\to\mathbb{R}$ bounded from below with $\lambda(x^\beta)=0$ and a class $\mathcal{K}_\infty$-function $\alpha$ such that the inequality
\begin{equation}
s(x,u)+\lambda(x)-\beta\lambda(f(x,u))\geq \alpha(\|x-x^\beta\|)
\label{eq:globdiss}\end{equation}
holds for all $(x,u)\in\mathbb{Y}$ with $f(x,u)\in\mathbb{X}$.
\end{defn}
The following lemma is Proposition 3.2 from \cite{GMKW20}. Since its proof is short and simple, we provide it here for convenience of the readers. It shows that we can replace the stage cost $\ell$ by a modified---usually called \emph{rotated}---stage cost $\tilde{\ell}$ that is positive definite without changing the optimal trajectories.
\begin{lem}\label{prop: traj}
Consider the discounted optimal control problem \eqref{dis OCP} with discount factor $\beta \in(0,1)$ and assume the system \eqref{eq: nsys} is discounted strictly dissipative at an equilibrium $(x^\beta,u^\beta)$ with supply rate $s(x,u)=\ell(x,u)-\ell(x^\beta,u^\beta)$ and bounded storage function $\lambda$.
Then the optimal trajectories of \eqref{dis OCP} coincide with those of the problem
\begin{equation}\label{mod OCP}
\min_{u\in\mathbb{U}^\infty(x_0)}\widetilde{J}_\infty(x_0,u) \quad\text{with } \widetilde{J}_\infty(x_0,u):=\sum_{k=0}^\infty\beta^k\tilde{\ell}(x(k,x_0),u(k))
\end{equation}
with rotated stage cost
\begin{equation}
\tilde{\ell}(x,u)=\ell(x,u)-\ell(x^\beta,u^\beta)+\lambda(x)-\beta\lambda(f(x,u))
\end{equation}
which is positive definite with respect to $x^\beta$ at $(x^\beta,u^\beta)$, i.e.\ it satisfies the inequality $\tilde{\ell}(x,u) \ge \alpha(\|x-x^\beta\|)$ with $\alpha\in\mathcal{K}_\infty$ from \eqref{eq:globdiss} for all $(x,u)\in\mathbb{Y}$.
\end{lem}
\begin{proof}
We rearrange
\begin{align*}
\widetilde{J}_\infty(x_0,u)&=\sum_{k=0}^\infty\beta^k\tilde{\ell}(x(k,x_0),u(k))\\
&=\sum_{k=0}^\infty \beta^k\left(\ell(x(k,x_0),u(k))-\ell(x^\beta,u^\beta)+\lambda(x(k,x_0))-\beta\lambda(x(k+1,x_0))\right)
\end{align*}
and a straightforward calculation shows that
\begin{equation}\label{eq: Jtilde}
\widetilde{J}_\infty(x_0,u)=J_\infty(x_0,u)-\dfrac{\ell(x^\beta,u^\beta)}{1-\beta}+\lambda(x_0)-\lim_{k\to\infty}\beta^k\lambda(x_u(k)).
\end{equation}
Since $\lambda$ is bounded and $\beta\in(0,1)$, the last limit exists and is equal to 0. Hence, the objectives differ only by expressions which are independent of $u$, from which the identity of the optimal trajectories immediately follows. The positive definiteness of $\tilde{\ell}$ follows from its definition, using strict dissipativity and the fact that $\lambda(x^\beta)=0$ implies $\tilde{\ell}(x^\beta,u^\beta)=0$.
\end{proof}
\begin{Bem}
The requirement that $\tilde{\ell}(x^\beta,u^\beta)=0$ is the reason for imposing $\lambda(x^\beta)=0$ as a condition in Definition \ref{def:discdiss}. Readers familiar with dissipativity for undiscounted problems will know that in the undiscounted case $\lambda(x^\beta)=0$ can be assumed without loss of generality, since if $\lambda$ is a storage function then $\lambda + c$ is a storage function for all $c\in\mathbb{R}$. In the discounted case, this invariance with respect to addition of constants no longer holds.
\end{Bem}
\subsection{The global turnpike property}
In the non-discounted setting it is known that strict dissipativity (together with suitable regularity assumptions on the problem data) implies that optimal as well as near-optimal trajectories exhibit the turnpike property. In the discounted setting, it was observed already in \cite{Gruene2017} that for merely near-optimal trajectories the turnpike property can only be guaranteed on a finite discrete interval $\{0,\ldots,M\}$. Here $M$ depends on the deviation from optimality (denoted by $\delta$ in the following theorem) and tends to infinity as this distance tends to 0. Exactly the same happens here. As the following theorem shows, under the assumption of global discounted dissipativity we obtain precisely the turnpike property from \cite[Definition 4.2]{Gruene2017}.
\begin{thm}\label{th: disturninf}
Consider the infinite horizon optimal control problem \eqref{dis OCP} with discount factor $\beta\in(0,1)$.
Assume that the optimal value function $\widetilde V_\infty$ of the modified problem satisfies $\widetilde V_\infty(x) \le \alpha_V(\|x-x^\beta\|)$ and
\begin{eqnarray} \widetilde V_\infty(x) \le C \inf_{u\in\mathbb{U}}\tilde \ell(x,u)\label{eq:Cdiss}\end{eqnarray}
for all $x\in\mathbb{X}$, a function $\alpha_V\in\mathcal{K}_\infty$, and a constant $C\ge 1$ satisfying
\begin{eqnarray} C < 1/(1-\beta) \label{eq:Cbetadiss}.\end{eqnarray}
Then the optimal control problem has the following turnpike property (cf.\ \cite[Definition 4.2]{Gruene2017}):
For each $\varepsilon>0$ and each bounded set $\mathbb{X}_b\subset \mathbb{X}$ there exists a constant $P>0$ such that for each $M\in\mathbb{N}$ there is a $\delta>0$, such that for all $x_0\in\mathbb{X}_b$ and $u\in\mathbb{U}^\infty(x_0)$ with $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, the set $\mathcal{Q}(x_0,u,\varepsilon,M,\beta):=\{k\in\{0,\dots, M\}\mid\|x_u(k,x_0)-x^\beta\|\geq \varepsilon\}$ has at most $P$ elements.
\end{thm}
\begin{proof}
It follows from the proof of Lemma \ref{prop: traj} that the inequality $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$ implies $\widetilde J_\infty(x_0,u)\leq \widetilde V_\infty(x_0)+\delta$. Together with the dynamic programming principle for $\widetilde V_\infty$ this yields
\begin{eqnarray*} \delta & \geq & \widetilde J_\infty(x_0,u) - \widetilde V_\infty(x_0) \\
& = & \tilde \ell(x_0,u(0)) + \beta \widetilde J_\infty(x_u(1,x_0),u(\cdot+1)) - \inf_{u\in \mathbb{U}}\left\{ \tilde \ell(x_0,u) + \beta \widetilde V_\infty(f(x_0,u))\right\} \\
& \geq & \tilde \ell(x_0,u(0)) + \beta \widetilde J_\infty(x_u(1,x_0),u(\cdot+1))
- \left( \tilde \ell(x_0,u(0)) + \beta \widetilde V_\infty(f(x_0,u(0)))\right)\\
& = & \beta(\widetilde J_\infty(x_u(1,x_0),u(\cdot+1)) - \widetilde V_\infty(f(x_0,u(0)))).
\end{eqnarray*}
This implies $\widetilde J_\infty(x_u(1,x_0),u(\cdot+1))\leq \widetilde V_\infty(x_u(1,x_0))+\delta/\beta$, and proceeding inductively we obtain
\[ \widetilde J_\infty(x_u(k,x_0),u(\cdot+k))\leq \widetilde V_\infty(x_u(k,x_0))+\frac{\delta}{\beta^k} \]
for all $k\in\mathbb{N}$.
This implies
\begin{eqnarray}
\lefteqn{\widetilde V_\infty(x_u(k+1,x_0)) - \widetilde V_\infty(x_u(k,x_0)) } & & \nonumber \\
& = & \frac{1}{\beta}\Big(\beta \widetilde V_\infty(x_u(k+1,x_0)) - \beta \widetilde V_\infty(x_u(k,x_0))\Big)\nonumber\\
& = & \frac{1}{\beta}\Big(\beta \widetilde V_\infty(x_u(k+1,x_0)) - \widetilde V_\infty(x_u(k,x_0)) + (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big)\nonumber\\
& \leq & \frac{1}{\beta}\Big(\beta \widetilde J_\infty(x_u(k+1,x_0),u(\cdot+k+1)) - \widetilde J_\infty(x_u(k,x_0),u(\cdot+k)) \nonumber\\
&&+ (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big) + \frac{\delta}{\beta^{k+1}}\nonumber\\
& = & \frac{1}{\beta}\Big(-\tilde\ell(x_u(k,x_0),u(k)) + (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big)+\frac{\delta}{\beta^{k+1}}\nonumber\\
& \leq & \frac{1}{\beta}\Big(-\frac{1}{C} \widetilde V_\infty(x_u(k,x_0))+ (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big) + \frac{\delta}{\beta^{k+1}}\nonumber\\
& = & \frac{\kappa}{\beta} \widetilde V_\infty(x_u(k,x_0)) +\frac{\delta}{\beta^{k+1}} \label{eq:case2}
\end{eqnarray}
where $\kappa = (1-\beta)-1/C < 0$ because of \eqref{eq:Cbetadiss}.
This implies that for fixed $M\in\mathbb{N}$ and $k\in\{0,\ldots,M\}$ the function $\widetilde V_\infty$ is a practical Lyapunov function. Using \cite{Gruene2014} Theorem 2.4 restricted to $\{0,\ldots,M\}$ and the fact that $\mathbb{X}_b$ is bounded we can conclude that there is a sequence $\eta_k\to 0$ (depending on $\mathbb{X}_b$) and a function $\gamma\in\mathcal{K}_\infty$ with
\[ \|x_u(k,x_0)-x^\beta\| \le \eta_k + \gamma(\delta/\beta^{k+1}) \leq \eta_k + \gamma(\delta/\beta^M)\]
for all $k\in\{0,\ldots,M\}$.
This implies the desired claim by choosing $P\in\mathbb{N}$ (depending on $\varepsilon$ and $\eta_k$, hence on $\mathbb{X}_b$) such that $\eta_k < \varepsilon/2$ for all $k\ge P$ and $\delta>0$ (depending on $\beta$, $\varepsilon$ and $M$) such that $\gamma(\delta/\beta^M)<\varepsilon/2$.
\end{proof}
For an illustration of the described turnpike property we refer to Fig. \ref{figure}. We note again that in the formulation of the discounted turnpike property the level $\delta$ which measures the deviation from optimality of the trajectory $x_u(\cdot,x_0)$ depends on $M$. For guaranteeing the turnpike property on $\{0,\ldots,M\}$, $\delta\to 0$ may be required if $M\to\infty$, cf.\ also Remark \ref{rem:suffcond} (iv).
\begin{figure}
\caption{Illustration of the set $\mathcal{Q}(x_0,u,\varepsilon,M,\beta)$}
\label{figure}
\end{figure}
The following remark discusses aspects of the assumptions of Theorem \ref{th: disturninf}. For the turnpike property to hold, it is obviously necessary that the state of the system can be steered to $x^\beta$, at least asymptotically. This is made precise in part (i) of the remark. Part (ii) shows that if the state can be steered to $x^\beta$ fast enough, then a constant $C$ satisfying \eqref{eq:Cdiss} for all $\beta\in(0,1)$ holds. Finally, part (iii) of the remark discusses how inequality \eqref{eq:Cdiss} can be relaxed if such a $C$ cannot be found.
\begin{Bem} \begin{itemize}
\item[(i)] A necessary condition for the turnpike property to hold is that for each $\varepsilon>0$, each bounded subset $\mathbb{X}_b\subseteq\mathbb{X}$ and each $x_0\in\mathbb{X}_b$ there exists a control sequence $u\in \mathbb{U}^{P+1}(x_0)$ with $x_u(k,x_0)\in \mathcal{B}_\varepsilon(x^\beta)$ for some $k\le P+1$, where $P$ is the constant from the turnpike property in Theorem \ref{th: disturninf}. This is immediately clear, because if such a control does not exist, then the number of time instants $k$ with $x_u(k,x_0)\not\in\mathcal{B}_\varepsilon(x^\beta)$, i.e., the number of elements of $\mathcal{Q}(x_0,u,\varepsilon,M,\beta)$, is larger than $P$ for all $u$.
\item[(ii)] If a constant $C$ satisfying \eqref{eq:Cdiss} for all $\beta\in(0,1)$ exists, then \eqref{eq:Cbetadiss} will hold for all sufficiently large $\beta\in(0,1)$. A sufficient condition for the existence of such a $C$ is the following exponential stabilizability assumption of the cost at the equilibrium $(x^\beta,u^\beta)$: there are constants $\sigma,\lambda>0$ such that for each $x_0\in\mathbb{X}$ there is $u\in\mathbb{U}^\infty(x_0)$ with
\begin{equation} \tilde{\ell}(x_u(k,x_0),u(k)) \le \sigma e^{-\lambda k}\inf_{\hat u \in \mathbb{U}} \tilde{\ell}(x_0,\hat u). \label{eq:expcost}\end{equation}
Then, since $\tilde{\ell}\ge 0$ we obtain
\begin{align*} \widetilde V_\infty(x_0) & \le \sum_{k=0}^\infty \beta^k\tilde{\ell}(x_u(k,x_0),u(k)) \le \sum_{k=0}^\infty \tilde{\ell}(x_u(k,x_0),u(k))\\
& \le \sum_{k=0}^\infty \sigma e^{-\lambda k}\inf_{\hat u \in \mathbb{U}} \tilde{\ell}(x_0,\hat u) = \frac{\sigma}{1-e^{-\lambda}} \inf_{\hat u \in \mathbb{U}} \tilde{\ell}(x_0,\hat u), \end{align*}
implying \eqref{eq:Cdiss} with $C=\sigma/(1-e^{-\lambda})$. We note that \eqref{eq:expcost} holds in particular if the system itself is exponentially stabilizable to $x^\beta$ with exponentially bounded controls and $\tilde{\ell}$ is a polynomial\footnote{We could further relax this assumption to $\tilde{\ell}$ being bounded by $C_1 P$ and $C_2 P$ from below and above, respectively, for constants $C_1>C_2>0$ and a polynomial $P$.}. Exponential stabilizability of the system, in turn, follows locally around $x^\beta$ from stabilizability of its linearization in $x^\beta$. If, in addition, the necessary condition from part (i) of this remark holds, then local exponential stabilizability implies exponential stabilizability for bounded $\mathbb{X}$. We refer to \cite[Section 6]{GGHKW18} for a more detailed discussion on these conditions.
\item[(iii)] If a $C$ meeting \eqref{eq:Cbetadiss} and \eqref{eq:Cdiss} for all $x\in\mathbb{X}$ does not exist, then we may still be able to find a $C$ satisfying \eqref{eq:Cbetadiss} and \eqref{eq:Cdiss} for all $x\in\mathbb{X}$ with $\vartheta \le \|x-x^\beta\| \le \Theta$, for parameters $0\le\vartheta<\Theta$. In this case we can follow the reasoning in the proof of Corollary 4.3 from \cite{GGHKW18} to conclude that we still obtain a turnpike property for $\varepsilon>\varepsilon_0$ and $\mathbb{X}_b=\mathcal{B}_\Delta(x^\beta)\cap \mathbb{X}$, with $\varepsilon_0\to 0$ as $\vartheta\to 0$ and $\Delta\to\infty$ as $\Theta\to\infty$.
\item[(iv)] Optimal trajectories, i.e., trajectories for which $J_\infty(x_0,u)=V_\infty(x_0)$ holds, satisfy the assumptions of Theorem \ref{th: disturninf} for each $\delta>0$. Hence, the assertion of the theorem holds for each $\varepsilon>0$ and each $M\in\mathbb{N}$, implying that $x_u(k,x_0)$ converges to $x^\beta$ as $k\to\infty$.
\end{itemize}
\label{rem:suffcond}
\end{Bem}
\section{The local discounted turnpike property assuming invariance}\label{sec:local}
In the previous section, we have shown that an equilibrium at which the system is globally strictly dissipative has the turnpike property. Now, we consider an equilibrium denoted by $(x_l^\beta,u_l^\beta)$ at which discounted strict dissipativity holds only locally, i.e., for all $x$ in a neighbourhood $\mathbb{X}_{\mathcal{N}}$ of $x_l^\beta$, in the following sense.
\begin{defn}
Given a discount factor $\beta\in(0,1)$, we say that the system \eqref{eq: nsys} is locally discounted strictly dissipative at an equilibrium $(x_l^\beta,u_l^\beta)$ with supply rate $s:\mathbb{Y}\to\mathbb{R}$ if there exists a storage function $\lambda:\mathbb{X}\to\mathbb{R}$ bounded from below with $\lambda(x_l^\beta)=0$ and a class $\mathcal{K}_\infty$-function $\alpha_\beta$ such that the inequality
\begin{equation}\label{ineq: dis.diss}
s(x,u)+\lambda(x)-\beta\lambda(f(x,u))\geq \alpha_\beta(\|x-x_l^\beta\|)
\end{equation}
holds for all $(x,u)\in\mathbb{X}_\mathcal{N}\times \mathbb{U}$.
Further, we say that system \eqref{eq: nsys} is locally discounted strictly $(x,u)$-dissipative at the equilibrium $(x_l^\beta,u_l^\beta)$ with supply rate $s:\mathbb{X}\times \mathbb{U}\to\mathbb{R}$ if the same holds with the inequality
\begin{equation}
s(x,u)+\lambda(x)-\beta\lambda(f(x,u))\geq \alpha_\beta(\|x-x_l^\beta\|+\| u-u_l^\beta\|).
\end{equation}
\label{def:ldiss}\end{defn}
As in the global case we define the rotated stage cost by
\begin{equation}\label{eq: rotstcost b}
\tilde{\ell}(x,u) := \ell(x,u)-\ell(x_l^\beta,u_l^\beta) +\lambda(x)-\beta\lambda(f(x,u)).
\end{equation}
Obviously, with this definition Lemma \ref{prop: traj} remains valid. Moreover, for $x\in\mathbb{X}_\mathcal{N}$ the function $\tilde{\ell}$ satisfies the same properties as in the globally dissipative case. This will enable us to derive a local turnpike property, provided the neighbourhood $\mathbb{X}_\mathcal{N}$ contains an invariant set $\mathbb{X}_{inv}\subset \mathbb{X}_\mathcal{N}$ for the optimally controlled system. The following lemma gives a consequence of this assumption for the modified optimal value function, which will be important for concluding the local turnpike property.
\begin{lem}\label{lem:lbound} Consider the optimal control problem \eqref{dis OCP} with given discount factor $\beta\in (0,1)$ and assume that the system is locally strictly dissipative at $(x_l^\beta,u_l^\beta)$ with $x_l^\beta\in\mathbb{X}_\mathcal{N}\subset \mathbb{X}$. Consider a subset $\mathbb{X}_{inv}\subset\mathbb{X}_\mathcal{N}$ such that all optimal solutions $x^*(k,x_0)$ with $x_0\in\mathbb{X}_{inv}$ satisfy $x^*(k,x_0) \in \mathbb{X}_{inv}$ for all $k\ge 0$.
Then the modified optimal value function $\widetilde V_\infty$ satisfies
\begin{equation} \widetilde V_\infty(x) \ge \alpha_\beta(\|x-x_l^\beta\|) \label{eq:lbound}
\end{equation}
for all $x\in\mathbb{X}_{inv}$.
\end{lem}
\begin{proof}
For all $x\in\mathbb{X}_\mathcal{N}$ and $u\in\mathbb{U}$ the modified cost satisfies $\tilde\ell(x,u) \ge \alpha_\beta(\|x-x_l^\beta\|)\ge 0$. Since for $x_0\in\mathbb{X}_{inv}$ the optimal trajectory satisfies $x^*(k,x_0)\in\mathbb{X}_{inv}\subset\mathbb{X}_\mathcal{N}$ for all $k\ge 0$, this implies
\[ \widetilde V_\infty(x_0) = \sum_{k=0}^\infty \beta^k\tilde\ell(x^*(k,x_0),u^*(k)) \ge \sum_{k=0}^\infty \beta^k\alpha_\beta(\|x^*(k,x_0)-x_l^\beta\|) \ge \alpha_\beta(\|x_0-x_l^\beta\|),\]
which shows the claim.
\end{proof}
The following now gives a local version of Theorem \ref{th: disturninf}.
\begin{thm}\label{thm: localturn}
Consider the infinite horizon optimal control problem \eqref{dis OCP} with discount factor $\beta\in(0,1)$ and assume that the system is locally strictly dissipative at $(x_l^\beta,u_l^\beta)$ with $x_l^\beta\in\mathbb{X}_\mathcal{N}\subset \mathbb{X}$. Consider a subset $\mathbb{X}_{inv}\subset\mathbb{X}_\mathcal{N}$ such that all optimal solutions $x^*(k,x_0)$ with $x_0\in\mathbb{X}_{inv}$ satisfy $x^*(k,x_0) \in \mathbb{X}_{inv}$ for all $k\ge 0$ and suppose that the assumptions of Theorem \ref{th: disturninf} hold for all $x \in \mathbb{X}_{inv}$.
Then the optimal control problem has the following turnpike property on $\mathbb{X}_{inv}$:
For each $\varepsilon>0$ and each bounded set $\mathbb{X}_b\subset \mathbb{X}_{inv}$ there exists a constant $P>0$ such that for each $M\in\mathbb{N}$ there is a $\delta>0$, such that for all $x_0\in\mathbb{X}_b$ and all $u\in\mathbb{U}^\infty(x_0)$ with $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$ and $x_u(k,x_0)\in\mathbb{X}_{inv}$ for all $k\in\{0,\ldots,M\}$, the set $\mathcal{Q}(x,u,\varepsilon,M,\beta):=\{k\in\{0,\dots, M\}\mid\|x_u(k,x_0)-x_l^\beta\|\geq \varepsilon\}$ has at most $P$ elements.
\end{thm}
\begin{proof}
The proof is completely analogous to the proof of Theorem \ref{th: disturninf}, using the fact that all inequalities used in that proof remain valid as long as the considered solutions stay in $\mathbb{X}_{inv}$, which is guaranteed by the assumptions. We note that Lemma \ref{lem:lbound} is needed for establishing the lower bound on $\widetilde V_\infty$ required for a practical Lyapunov function.
\end{proof}
\begin{rem} Instead of assuming the existence of the invariant set $\mathbb{X}_{inv}$ we could also assume \eqref{eq:lbound} for all $x\in\mathbb{X}_{\mathcal{N}}$. Then, by standard Lyapunov function arguments, the largest sublevel set of $\widetilde V_\infty$ contained in $\mathbb{X}_{\mathcal{N}}$ is forward invariant for the optimal solutions and can then be used as the set $\mathbb{X}_{inv}$. Using \eqref{eq:case2} we can even ensure that this sublevel set is also forward invariant for all solutions satisfying $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, provided $\delta>0$ is sufficiently small. Hence, for this choice of $\mathbb{X}_{inv}$ the assumption that $x_u(k,x_0)\in\mathbb{X}_{inv}$ for all $k\in\{0,\ldots,M\}$ in Theorem \ref{thm: localturn} would be automatically satisfied if $\delta$ is not too large.
\end{rem}
\section{Optimal trajectories stay near a locally dissipative equilibrium}\label{sec:stay}
Theorem \ref{thm: localturn} shows that the local turnpike property holds if the optimal solutions stay in the neighbourhood of $x_l^\beta$ on which the strict dissipativity property holds. In this section we show that this condition is ``automatically'' satisfied for appropriate discount factors. This will enable us to conclude a local turnpike property from local strict dissipativity. To this end, we aim to show that there exists a range of discount factors $\beta$ for which it is more favourable to stay near the locally dissipative equilibrium than to move to other parts of the state space. The first lemma we need for this purpose describes trajectories that move out of a neighbourhood of $x_l^\beta$. In contrast to the previous results, we now need the stronger $(x,u)$-dissipativity.
\begin{lem}\label{lem: ball}
Consider a discounted optimal control problem \eqref{dis OCP} subject to system \eqref{eq: nsys} with continuous $f$. Assume local strict $(x,u)$-dissipativity at an equilibrium $(x_l^\beta,u_l^\beta)$ according to Definition \ref{def:ldiss} and let $\rho>0$ be such that $\mathcal{B}_\rho(x_l^\beta)\subset\mathbb{X}_\mathcal{N}$ holds for the neighbourhood $\mathbb{X}_\mathcal{N}$ from Definition \ref{def:ldiss}. Then there exists $\eta >0$ such that for each $K\in\mathbb{N}$ and any trajectory $x(\cdot)$ with $x_0=x(0)\in\mathcal{B}_\eta(x_l^\beta)$ and $x(K)\notin \mathcal{B}_\rho(x_l^\beta)$ there is an $M\in\{0,\dots, K-1\}$ such that $x(0),\ldots,x(M)\in \mathcal{B}_\rho(x_l^\beta)$ and either
\[
\text{(i) } x(M)\in\mathcal{B}_\rho(x_l^\beta)\backslash\mathcal{B}_\eta(x_l^\beta)
\quad \text{or} \quad
\text{(ii) } \|u(M)-u_l^\beta\|\geq \eta\]
holds.
\end{lem}
\begin{proof}
The continuity of $f$ implies that there exists $\varepsilon>0$ such that $\|f(x,u)-x_l^\beta\|<\rho$ for all $(x,u)\in\mathbb{Y}$ with $\|x-x_l^\beta\|<\varepsilon$ and $\|u-u_l^\beta\|<\varepsilon$. We set $\eta:=\min\{\varepsilon, \rho\}$. Now, given a trajectory as in the assertion, let $K_{\min}$ be minimal with $x(K_{\min})\notin\mathcal{B}_\rho(x_l^\beta)$. We claim that the assertion holds for $M=K_{\min}-1$.
We prove this claim by contradiction. To this end, we assume that for $M=K_{\min}-1$ neither assertion (i) nor assertion (ii) holds. This implies on the one hand that $\|x(M)-x_l^\beta\|<\eta$, since $x(M)\in\mathcal{B}_\rho(x_l^\beta)$ by minimality of $K_{\min}$ and (i) is not fulfilled. On the other hand, it implies $\|u(M)-u_l^\beta\|<\eta$, because (ii) does not hold. Then, however, since $\eta \le \varepsilon$, the continuity of $f$ implies
\[\|x(K_{\min})-x_l^\beta\| = \|f(x(M),u(M))-x_l^\beta\|< \rho.\]
This means that $x(K_{\min})\in\mathcal{B}_\rho(x_l^\beta)$, which is a contradiction to the choice of $K_{\min}$. Hence, either assertion (i) or assertion (ii) must hold for $M=K_{\min}-1$.
\end{proof}
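For readers who prefer an algorithmic view, the following short Python sketch (an illustration added here, not part of the original argument) mimics the construction in the proof for a sampled one-dimensional trajectory: it determines $K_{\min}$, sets $M=K_{\min}-1$ and reports which of the two cases applies. The trajectory data, the equilibrium values \texttt{x\_l}, \texttt{u\_l} and the radii \texttt{eta}, \texttt{rho} are arbitrary placeholder values; for \texttt{eta} chosen as in the proof (via continuity of $f$), one of the two cases is guaranteed.
\begin{verbatim}
# Illustration of the case distinction in Lemma "lem: ball" for a sampled 1d trajectory.
import numpy as np

def classify_exit(xs, us, x_l, u_l, eta, rho):
    xs, us = np.asarray(xs, float), np.asarray(us, float)
    dist = np.abs(xs - x_l)                     # 1d example; use a norm in general
    assert dist[0] < eta and dist[-1] >= rho    # assumptions of the lemma
    K_min = int(np.argmax(dist >= rho))         # first exit time from the rho-ball
    M = K_min - 1
    if eta <= dist[M] < rho:
        return M, "case (i): x(M) in B_rho \\ B_eta"
    if abs(us[M] - u_l) >= eta:
        return M, "case (ii): ||u(M) - u_l|| >= eta"
    return M, "neither case: eta was not chosen as in the proof of the lemma"

# toy data: a trajectory drifting away from x_l = 0 under increasing controls
xs = [0.0, 0.05, 0.1, 0.3, 0.8, 1.5]
us = [0.05, 0.05, 0.2, 0.5, 0.7]
print(classify_exit(xs, us, x_l=0.0, u_l=0.0, eta=0.25, rho=1.0))
\end{verbatim}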
The next lemma shows that the behaviour characterized in Lemma \ref{lem: ball} induces a lower bound on the rotated discounted functional $\widetilde{J}_\infty$ from \eqref{mod OCP} along trajectories that start in a neighbourhood of $x_l^\beta$ and leave this neighbourhood. To this end, we note that even if merely local strict dissipativity holds, the modified stage cost $\tilde{\ell}$ from \eqref{eq: rotstcost b} is well defined, since $\lambda$ is defined for all $x\in\mathbb{X}$. However, the inequality $\tilde{\ell}(x,u) \ge \alpha_\beta(\|x-x_l^\beta\|+\|u-u_l^\beta\|)$ and, more generally, positivity of $\tilde{\ell}$ are only guaranteed for $x\in\mathbb{X}_\mathcal{N}$.
\begin{lem}\label{lem: optimal}
Let the assumptions of Lemma \ref{lem: ball} hold. In addition, assume that $\lambda$ from Definition \ref{def:ldiss} is bounded and the stage cost $\ell$ is bounded from below. Then, there exists $\beta^\star\in(0,1)$ with the following property: for any $\beta\in(0,\beta^\star)$ and any $K\in\mathbb{N}$ there is $\sigma(\beta,K)>0$ such that for any trajectory $x(\cdot)$ with $x_0=x(0)\in\mathcal{B}_\eta(x_l^\beta)$ and $x(P)\notin \mathcal{B}_\rho(x_l^\beta)$ for some $P\in\{1,\ldots,K\}$ the inequality
\begin{equation} \widetilde J_\infty(x_0,u) \ge \sigma(\beta,K) \label{ineq: goal}\end{equation}
holds.
\end{lem}
\begin{proof}
First observe that boundedness from below of $\ell$ and boundedness of $\lambda$ imply boundedness from below of $\tilde{\ell}$. Let $\tilde{\ell}_{\min} := \inf_{(x,u)\in\mathbb{Y}} \tilde{\ell}(x,u)$; without loss of generality we assume $\tilde{\ell}_{\min}<0$. Moreover, local dissipativity implies that $\tilde{\ell}(x,u)\ge 0$ for all $x\in\mathbb{X}_\mathcal{N}$ and all $u\in\mathbb{U}$.
Since the trajectory under consideration satisfies the assumptions of Lemma \ref{lem: ball} with $K=P$, there exists $M\in\{0,\dots,P-1\}$ such that either assertion (i) or assertion (ii) of this lemma holds. In case (i), we obtain that
\[ \tilde{\ell}(x(M),u(M)) \ge \alpha_\beta(\|x(M)-x_l^\beta\|) \ge \alpha_\beta(\eta) \]
and in case (ii) we obtain
\[ \tilde{\ell}(x(M),u(M)) \ge \alpha_\beta(\|u(M)-u_l^\beta\|) \ge \alpha_\beta(\eta). \]
Hence, we get the same inequality in both cases and we abbreviate $\delta:=\alpha_\beta(\eta)>0$.
In addition, Lemma \ref{lem: ball} yields $x(0),\ldots,x(M)\in \mathcal{B}_\rho(x_l^\beta)\subset\mathbb{X}_\mathcal{N}$, which implies $\tilde{\ell}(x(k),u(k))\ge 0$ for all $k=0,\ldots,M-1$, and the lower bound on $\tilde{\ell}$ implies $\tilde{\ell}(x(k),u(k))\ge\tilde{\ell}_{\min}$ for all $k\ge M+1$. Together this yields
\begin{align*} \widetilde J_\infty(x_0,u) = & \sum_{k=0}^\infty\beta^k\tilde{\ell}(x(k,x_0),u(k))\\
=&\sum_{k=0}^{M-1}\beta^k\underbrace{\tilde{\ell}(x(k,x_0),u(k))}_{\ge 0}+\beta^M\underbrace{\tilde{\ell}(x(M,x_0),u(M))}_{\ge \delta}+\sum_{k=M+1}^\infty\beta^k\underbrace{\tilde{\ell}(x(k,x_0),u(k))}_{\ge \tilde{\ell}_{\min}}\\
\geq& \beta^M\delta+\dfrac{\beta^{M+1}}{1-\beta}\tilde{\ell}_{\min} = \dfrac{\beta^M}{1-\beta}\left(\left(\tilde{\ell}_{\min}-\delta\right)\beta+\delta\right).
\end{align*}
We now claim that the assertion holds for $\sigma = \frac{\beta^K\delta}{2(1-\beta)} \le \frac{\beta^M\delta}{2(1-\beta)}$. To this end, it is sufficient to show the existence of $\beta^\star$ with
\[ \dfrac{\beta^M}{1-\beta}\left(\left(\tilde{\ell}_{\min}-\delta\right)\beta+\delta\right)\ge \frac{\beta^M\delta}{2(1-\beta)}\]
for all $\beta\in(0,\beta^\star)$.
This is equivalent to
\[ \dfrac{\beta^M}{1-\beta}\left(\left(\tilde{\ell}_{\min}-\delta\right)\beta+\frac{\delta}{2}\right)\ge 0\;\; \Leftrightarrow \;\; \left(\tilde{\ell}_{\min}-\delta\right)\beta+\frac{\delta}{2}\ge 0,\]
because $\beta^M/(1-\beta)>0$. Since $\tilde{\ell}_{\min}-\delta<0$, this inequality holds for all $\beta\in(0,\beta^\star)$ if $\beta^\star=\delta/(2(\delta-\tilde{\ell}_{\min}))$.
\end{proof}
\begin{rem}
The choice of the fraction $\frac 1 2$ for $\sigma$ in the proof of Lemma \ref{lem: optimal} is arbitrary. We can also use a more general fraction $\frac{1}{k+1}$ with $k\in\mathbb{N}$. Then, with the same calculation as above, we get $\beta^\star = \dfrac{k}{k+1}\dfrac{\delta}{\delta-\tilde{\ell}_{\min}}$.
\end{rem}
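The following small Python snippet (an illustration added here) evaluates the threshold $\beta^\star(k)=\frac{k}{k+1}\frac{\delta}{\delta-\tilde{\ell}_{\min}}$ and checks numerically that the final estimate in the proof of Lemma \ref{lem: optimal} indeed dominates $\frac{\beta^M\delta}{(k+1)(1-\beta)}$ for all sampled $\beta<\beta^\star(k)$. The values assigned to \texttt{delta} and \texttt{l\_min} are arbitrary test data.
\begin{verbatim}
# Numerical check of the threshold beta_star(k) from Lemma "lem: optimal" and the remark.
def beta_star(delta, l_min, k=1):
    assert delta > 0 and l_min < 0
    return (k / (k + 1)) * delta / (delta - l_min)

def check(delta, l_min, k=1, M=3, samples=1000):
    bs = beta_star(delta, l_min, k)
    for i in range(1, samples):
        beta = bs * i / samples                      # sample beta in (0, beta_star)
        lhs = beta**M / (1 - beta) * ((l_min - delta) * beta + delta)
        rhs = beta**M * delta / ((k + 1) * (1 - beta))
        assert lhs >= rhs - 1e-12, (beta, lhs, rhs)
    return bs

print(check(delta=1.0, l_min=-0.42))                 # ~0.352 for the fraction 1/2 (k = 1)
print(check(delta=1.0, l_min=-0.42, k=9))            # ~0.634 for the fraction 1/10
\end{verbatim}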
Based on the estimate from Lemma \ref{lem: optimal} we can now conclude that near-optimal solutions starting near $x_l^\beta$ stay in $\mathbb{X}_\mathcal{N}$ for a certain amount of time.
\begin{lem} Consider a discounted optimal control problem \eqref{dis OCP} subject to system \eqref{eq: nsys} with $f$ continuous and stage cost $\ell$ bounded from below. Assume local strict $(x,u)$-dissipativity at an equilibrium $(x_l^\beta,u_l^\beta)$ according to Definition \ref{def:ldiss} with bounded storage function $\lambda$. Assume furthermore that there is $\gamma\in\mathcal{K}_\infty$ and $\hat\beta\in(0,1]$ such that $|\widetilde V_\infty(x)| \le \gamma(\|x-x_l^\beta\|)$ for all $x\in\mathbb{X}_\mathcal{N}$ and all $\beta\in(0,\hat\beta]$. Then there exists $\beta_2\in(0,1)$ with the following property: for any $\beta\in(0,\beta_2)$ and any $K\in\mathbb{N}$ there exists a neighbourhood $\mathcal{B}_{\varepsilon(\beta,K)}(x_l^\beta)$ and a threshold value $\theta(\beta,K)>0$ such that all trajectories with $x_0\in \mathcal{B}_{\varepsilon(\beta,K)}(x_l^\beta)$ and $J_\infty(x_0,u) < V_\infty(x_0) + \theta(\beta,K)$ satisfy $x(k)\in \mathbb{X}_\mathcal{N}$ for all $k\in\{0,\ldots,K\}$.
\label{lem:stay}\end{lem}
\begin{proof} We choose $\beta_2$ as the minimum of $\beta^\star$ from Lemma \ref{lem: optimal} and $\hat\beta$. We further use $\sigma(\beta,K)>0$ from Lemma \ref{lem: optimal} to set $\varepsilon(\beta,K):= \gamma^{-1}(\sigma(\beta,K)/2)$ and $\theta(\beta,K):= \sigma(\beta,K)/2$. Now consider a trajectory meeting the assumptions and observe that since $J_\infty$ and $\widetilde J_\infty$ differ only by a term that is independent of $u(\cdot)$, the assumption $J_\infty(x_0,u) < V_\infty(x_0) + \theta(\beta,K)$ together with the assumption on $x_0$ implies
\[ \widetilde J_\infty(x_0,u) < \widetilde V_\infty(x_0) + \theta(\beta,K) < \gamma(\varepsilon(\beta,K)) + \theta(\beta,K).\]
The definition of $\theta$ and $\varepsilon$ then implies
\[\widetilde J_\infty(x_0,u) < \sigma(\beta,K)/2 + \sigma(\beta,K)/2 = \sigma(\beta,K). \]
Since by Lemma \ref{lem: optimal} any trajectory leaving $\mathbb{X}_\mathcal{N}$ (and thus also $\mathcal{B}_\rho(x_l^\beta)$) up to time $K$ has a rotated value satisfying
\[\widetilde J_\infty(x_0,u) \ge \sigma(\beta,K), \]
the trajectory under consideration cannot leave $\mathbb{X}_\mathcal{N}$ for $k\in\{0,\ldots,K\}$.
\end{proof}
\section{The local discounted turnpike property without assuming invariance}
\label{sec:main}
With the preparations from the previous sections, we are now able to formulate our main theorem on the existence of a local turnpike property.
\begin{thm}\label{th: locturnpike}
Consider a discounted optimal control problem \eqref{dis OCP} subject to system \eqref{eq: nsys} with $f$ continuous and stage cost $\ell$ bounded from below. Assume local strict $(x,u)$-dissipativity at an equilibrium $(x_l^\beta,u_l^\beta)$ according to Definition \ref{def:ldiss} with storage function $\lambda$ bounded on $\mathbb{X}_\mathcal{N}$. Assume furthermore that there is $\gamma\in\mathcal{K}_\infty$ and $\hat\beta\in(0,1]$ such that $|\widetilde V_\infty(x)| \le \gamma(\|x-x_l^\beta\|)$ for all $x\in\mathbb{X}_\mathcal{N}$ and all $\beta\in(0,\hat\beta)$, and that there is an interval $[\beta_1,\beta^*]$ of discount factors with $\beta_1<\hat\beta$, such that for each $\beta\in(\beta_1,\beta^*)$ the assumptions of Theorem \ref{th: disturninf} hold for all $x \in \mathbb{X}_\mathcal{N}$.
Then there is $\beta_2\in(0,1)$ such that for all $\beta\in (\beta_1,\beta_2)$ there exists a neighbourhood $\mathcal{N}$ of $x_l^\beta$ on which the system exhibits a local turnpike property in the following sense:
For each $\varepsilon>0$ there exists a constant $P>0$ such that for each $M\in\mathbb{N}$ there is a $\delta>0$, such that for all $x_0\in \mathcal{N}$ and all $u\in\mathbb{U}^\infty(x_0)$ with $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, the set $\mathcal{Q}(x,u,\varepsilon,M,\beta):=\{k\in\{0,\dots, M\}\mid\|x_u(k,x_0)-x_l^\beta\|\geq \varepsilon\}$ has at most $P$ elements.
Particularly, if $J_\infty(x_0,u) = V_\infty(x_0)$, i.e., if the trajectory is optimal, then for each $\varepsilon>0$ the set $\mathcal{Q}(x,u,\varepsilon,\infty,\beta):=\bigcup_{M\in\mathbb{N}}\mathcal{Q}(x,u,\varepsilon,M,\beta)$ has at most $P$ elements, implying the convergence $x_u(k,x_0)\to x_l^\beta$ as $k\to\infty$.
\end{thm}
\begin{proof} The idea of the proof is to use $\beta_2$ from Lemma \ref{lem:stay} and, for each $\beta\in(\beta_1,\beta_2)$, to construct a neighbourhood $\mathcal{N}$ of $x_l^\beta$ and a $\delta>0$ such that all trajectories starting in $x_0\in\mathcal{N}$ and satisfying $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$ stay in $\mathcal{N}$ for all future times. Then the turnpike property follows from Theorem \ref{thm: localturn} applied with $\mathbb{X}_{inv}=\mathcal{N}$.
To this end, we take $\beta_2$ from Lemma \ref{lem:stay}, fix $\beta\in(\beta_1,\beta_2)$, and consider the neighbourhood $\mathcal{B}_{\varepsilon(\beta,1)}(x_l^\beta)$ and the threshold value $\theta(\beta,1)$ from Lemma \ref{lem:stay} for $K=1$. We choose $\mathcal{N}$ as the largest sublevel set of $\widetilde V_\infty$ that is contained in $\mathcal{B}_{\varepsilon(\beta,1)}(x_l^\beta)$ and denote the level by $\lambda>0$, i.e., $\mathcal{N}=\{x\in\mathbb{X}_\mathcal{N}\,|\, \widetilde V_\infty(x) < \lambda\}$. We abbreviate $\kappa = (1-\beta)-1/C$, observing that $\kappa<0$ because of \eqref{eq:Cdiss} (cf.\ also the proof of Theorem \ref{th: disturninf}), and set
\[ \delta:= \beta^M\min\left\{ \theta(\beta,1), -\frac{\kappa\lambda}{2\beta},\frac \lambda 2\right\}. \]
Now let $x_0$ and $u$ be as in the assertion, i.e., satisfying $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, and denote the corresponding trajectory by $x(\cdot)$. Then, just as in the first part of the proof of Theorem \ref{th: disturninf}, we obtain the estimate
\[ \widetilde J_\infty(x(k),u(\cdot+k))\leq \widetilde V_\infty(x(k))+\frac{\delta}{\beta^k} \]
for all $k\in\mathbb{N}$. By definition of $\delta$ this in particular implies
\begin{equation} \widetilde J_\infty(x(k),u(\cdot+k))\leq \widetilde V_\infty(x(k))+ \theta(\beta,1)
\label{eq:Jbound} \end{equation}
for all $k=0,\ldots,M$.
Now we prove by induction that $x(k)\in \mathcal{N}$ for all $k=0,\ldots,M$. For $k=0$ this follows from the choice of $x_0$. For the step $k\to k+1$, we make the induction assumption that $x(k)\in \mathcal{N}$, i.e., $\widetilde V_\infty(x(k))<\lambda$. Then, because of \eqref{eq:Jbound} and $\mathcal{N}\subseteq \mathcal{B}_{\varepsilon(\beta,1)}(x_l^\beta)$, Lemma \ref{lem:stay} (applied with initial value $x_0=x(k)$ and control $u(\cdot+k)$) implies that $x(k+1)\in \mathbb{X}_\mathcal{N}$. Hence, all the (in)equalities leading to inequality \eqref{eq:case2} in the proof of Theorem \ref{th: disturninf} are valid and, together with the definition of $\delta$, yield
\[ \widetilde V_\infty(x(k+1)) - \widetilde V_\infty(x(k)) \le \frac{\kappa}{\beta} \widetilde V_\infty(x(k)) + \frac{\delta}{\beta^k} \le \frac{\kappa}{\beta} \widetilde V_\infty(x(k)) + \min\left\{\theta(\beta,1), -\frac{\kappa\lambda}{2\beta},\; \frac \lambda 2\right\}. \]
Now if $\widetilde V_\infty(x(k)) \ge \lambda/2$, then the second term in the minimum defining $\delta$ implies
\[ \widetilde V_\infty(x(k+1)) - \widetilde V_\infty(x(k)) \le \frac{\kappa}{\beta} \frac \lambda 2 -\frac{\kappa\lambda}{2\beta}=0,\]
implying $\widetilde V_\infty(x(k+1)) \le \widetilde V_\infty(x(k)) < \lambda$ and thus $x(k+1)\in \mathcal{N}$.
If $\widetilde V_\infty(x(k)) < \lambda/2$, then the third term in the minimum defining $\delta$ implies
\[ \widetilde V_\infty(x(k+1)) - \widetilde V_\infty(x(k)) \le \underbrace{\frac{\kappa}{\beta} \widetilde V_\infty(x(k))}_{\le 0} + \frac \lambda 2\le \frac \lambda 2,\]
implying $\widetilde V_\infty(x(k+1)) \le \widetilde V_\infty(x(k)) + \frac{\lambda}{2} < \lambda$, i.e., again $x(k+1)\in \mathcal{N}$. This proves the induction step and hence $x(k)\in \mathcal{N}$ for all $k=0,\ldots,M$.
Now the turnpike property follows from Theorem \ref{thm: localturn} applied with $\mathbb{X}_{inv}=\mathcal{N}$.\end{proof}
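As a plausibility check of the induction step (again an illustration added here, not part of the proof), the following Python snippet samples random parameters $\beta$, $\kappa<0$, $\lambda$, $\theta$ and $M$, forms $\delta$ as above, and verifies that the one-step bound $\widetilde V_\infty(x(k+1))\le \widetilde V_\infty(x(k))+\frac{\kappa}{\beta}\widetilde V_\infty(x(k))+\delta/\beta^k$ stays below the level $\lambda$ whenever $0\le \widetilde V_\infty(x(k))<\lambda$ and $k\le M-1$. All parameter values are random test data.
\begin{verbatim}
# Brute-force check of the induction step in the proof of Theorem "th: locturnpike".
import random

random.seed(1)
for _ in range(100000):
    beta  = random.uniform(0.05, 0.95)
    kappa = -random.uniform(0.01, 2.0)
    lam   = random.uniform(0.1, 10.0)
    theta = random.uniform(0.01, 10.0)
    M     = random.randint(1, 20)
    k     = random.randint(0, M - 1)
    delta = beta**M * min(theta, -kappa * lam / (2 * beta), lam / 2)
    V     = random.uniform(0.0, lam)              # V(k) inside the sublevel set
    bound = V + (kappa / beta) * V + delta / beta**k
    assert bound < lam, (beta, kappa, lam, theta, M, k)
print("induction step verified on random samples")
\end{verbatim}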
\begin{rem}
We note that the interval $(\beta_1,\beta_2)$ may be empty. This is because
\begin{enumerate}
\item[(i)] the condition \eqref{eq:Cdiss} needed for proving the turnpike property for trajectories staying near $x_l^\beta$ may require a sufficiently large $\beta$ to hold, while
\item[(ii)] a trajectory starting near $x_l^\beta$ will in general only stay near $x_l^\beta$ for sufficiently small $\beta$.
\end{enumerate}
More precisely, the upper bound in (ii), as identified at the end of the proof of Lemma \ref{lem: optimal}, depends on the cost $\tilde{\ell}$ outside a neighbourhood of $x_l^\beta$ and on the cost of leaving this neighbourhood. The lower bound in (i), in turn, depends on the cost of reaching the equilibrium $x_l^\beta$ from a neighbourhood. If this cost is high and, in addition, the cost of leaving the neighbourhood and the cost outside the neighbourhood are low, then the set of discount factors for which a local turnpike behaviour occurs may be empty.
\end{rem}
\begin{rem}
The attentive reader may have noted that we apply Lemma \ref{lem:stay} with $K=1$ in this proof, rather than with $K=M$, which might appear more natural given that we want to make a statement for $\{0,\ldots,M\}$. This is because the size of the neighbourhood $\mathcal{B}_{\varepsilon(\beta,K)}(x_l^\beta)$ delivered by Lemma \ref{lem:stay} depends on $K$. Hence, if we applied Lemma \ref{lem:stay} with $K=M$ in order to construct the neighbourhood $\mathcal{N}$, this neighbourhood might shrink down to $\{x_l^\beta\}$ as $M$ increases. In contrast to this, the fact that $\widetilde V_\infty$ is a (practical) Lyapunov function allows us to construct a neighbourhood $\mathcal{N}$ that does not depend on $M$.
\end{rem}
\section{Examples}\label{sec:ex}
We end our paper with a couple of examples illustrating our theoretical results. All numerical solutions were obtained using a dynamic programming algorithm as described in \cite{GruS04}. We start with two examples exhibiting a locally and a globally optimal equilibrium.
\begin{bsp}\label{ex:1}
Consider the dynamics $f(x,u) = x+ u$ and the stage cost $\ell(x,u)=x^4-\frac 1 4 x^3 - \frac 7 4 x^2$.
\begin{figure}
\caption{Stage cost $\ell(x)$}
\label{fig: ex sc}
\end{figure}
As visualized in Figure \ref{fig: ex sc}, the stage cost $\ell$ has a local minimum at $x = \frac{3-\sqrt{905}}{32}$, a local maximum at $x=0$ and a global minimum at $x = \frac{3+\sqrt{905}}{32}$. Following \cite[Section 4]{GMKW20} we can calculate the storage function $\lambda$ by using the optimality conditions for optimal equilibria. We remark that the procedure for computing global storage functions described in this reference also works for local dissipativity in case of local convexity, which is given in this example, cf.\ also the discussion after Example \ref{ex:2}, below. Thus, by a straightforward calculation, we get the local equilibrium $(x_l^\beta,u_l^\beta) = (\frac{3-\sqrt{905}}{32},0)$ and the storage function $\lambda\equiv 0$. Inserting this, we get the rotated stage cost $\tilde{\ell}(x,u) = x^4-\frac 1 4 x^3 - \frac 7 4 x^2 - \ell(x_l^\beta, 0)$ and local discounted strict $(x,u)$-dissipativity of the system $f(x,u) = x+ u$ at $x_l^\beta$ for any $\beta\in(0,1)$. Thus, the assumptions of Lemma \ref{lem: ball} and Lemma \ref{lem: optimal} are fulfilled. Hence, following the proof of Lemma \ref{lem: optimal}, we can estimate $\beta_2\approx 0.67$ with $\delta \approx 1$ and $\tilde{\ell}_{\min}\approx -0.42$. Further, since $|\tilde{\ell}(x,u)|$ is bounded for $x$ in a neighbourhood $\mathcal{B}_\varepsilon(x_l^\beta)$, $\varepsilon>0$, Theorem \ref{th: locturnpike} can be applied. For illustrating the theoretical results, we set $\mathbb{U}=[-0.75, 0.75]$.
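The quantities quoted above can be reproduced with a few lines of Python (an illustration added here for convenience; the value $\delta\approx 1$, which also enters the estimate of $\beta_2$, depends on the chosen neighbourhood and is not recomputed):
\begin{verbatim}
# Critical points of the stage cost and the minimum of the rotated cost in Example 1.
import numpy as np

ell = lambda x: x**4 - 0.25 * x**3 - 1.75 * x**2
crit = np.sort(np.roots([4.0, -0.75, -3.5, 0.0]).real)   # zeros of ell'(x)
x_loc, x_max, x_glob = crit                               # (3-sqrt(905))/32, 0, (3+sqrt(905))/32
ell_tilde_min = ell(x_glob) - ell(x_loc)                  # minimum of the rotated cost
print(x_loc, x_glob)                                      # approx -0.846 and 1.034
print(ell_tilde_min)                                      # approx -0.42, as used above
\end{verbatim}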
\begin{figure}
\caption{Example \ref{ex:1}}
\label{fig: ex1 beta}
\end{figure}
On the left-hand side of Figure \ref{fig: ex1 beta} we show the behaviour of the trajectory $x$ and the control $u$ for different discount factors $\beta$. On the right-hand side, we show the optimal feedback control values $u_x$ and thus the domains of attraction of the equilibria in dependence on $\beta$. For $\beta$ large enough, the trajectory reaches the global equilibrium after at most three time instants. In contrast, for $\beta\leq 0.67$ we can observe that it is more favourable to stay in a neighbourhood of the local equilibrium $x_l^\beta$. We remark that it is sufficient to depict $\beta = 0.8$ as a representative for all $\beta \in (0.67, 1)$, since the behaviour of the trajectory, the control and the stage cost does not change significantly.
\begin{figure}
\caption{Example \ref{ex:1}}
\label{fig: ex1 x0}
\end{figure}
In Figure \ref{fig: ex1 x0}, for fixed $\beta = 0.7$ we consider different initial values $x_0$. As we can see, the initial value determines to which equilibrium the trajectory converges. This underpins the theoretical results of Theorem \ref{th: locturnpike} and especially of Lemma \ref{lem: optimal}. We note that for a completely controllable system such a behaviour cannot occur in undiscounted problems.
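The $\beta$-dependent limit behaviour can also be reproduced qualitatively with the following self-contained value-iteration sketch in Python. This is our own illustration, \emph{not} the dynamic programming code of \cite{GruS04}; the state grid $[-2,2]$, its resolution and the number of iterations are ad-hoc choices, and the resulting switching value of $\beta$ need not coincide exactly with the value $0.67$ quoted above.
\begin{verbatim}
# Rough value-iteration sketch for Example 1: f(x,u) = x + u, U = [-0.75, 0.75].
import numpy as np

ell = lambda x: x**4 - 0.25 * x**3 - 1.75 * x**2
X = np.linspace(-2.0, 2.0, 801)                           # assumed state grid
U = np.linspace(-0.75, 0.75, 151)                         # control grid

def value_iteration(beta, iters=500):
    Xn = np.clip(X[:, None] + U[None, :], X[0], X[-1])    # successor states on the grid
    V = np.zeros_like(X)
    for _ in range(iters):
        Q = ell(X)[:, None] + beta * np.interp(Xn.ravel(), X, V).reshape(Xn.shape)
        V = Q.min(axis=1)
    return V, U[Q.argmin(axis=1)]                         # value function, greedy feedback

for beta in (0.6, 0.7, 0.8):
    V, u_fb = value_iteration(beta)
    x = -0.5                                              # start between the two equilibria
    for _ in range(30):
        x = x + float(np.interp(x, X, u_fb))              # closed-loop simulation
    print(beta, round(x, 3))                              # compare with Figure "fig: ex1 beta"
\end{verbatim}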
\end{bsp}
The following modified example illustrates the case that the interval $(\beta_1, \beta_2)$ is empty.
\begin{bsp}\label{ex:2}
Consider again the system $f(x,u)=x+u$, now with stage cost $\ell(x,u)=x^4-\frac 1 4 x^3 - \frac 7 4 x^2+\gamma |u|$ for a parameter $\gamma\neq 0$. As the added term has no influence on the conditions of Theorem \ref{th: locturnpike}, we can again estimate $\beta_2\approx 0.67$. Further, for $\gamma=0$ we get the same stage cost as in Example \ref{ex:1} above. In contrast to Example \ref{ex:1}, for $\gamma$ large enough we can now observe that $(\beta_1, \beta_2)$ is empty. This fact is illustrated in Figure \ref{fig: ex2 beta} for $\gamma = 10$. For the numerical results we use the same setting as in Example \ref{ex:1}.
\begin{figure}
\caption{Example \ref{ex:2}}
\label{fig: ex2 beta}
\end{figure}
In contrast, in the graph with $\gamma = 10$ we can clearly observe that, independently of the discount factor $\beta$, we do not get convergence to the local equilibrium any more. For $\beta$ large enough we even get convergence to the global equilibrium.
\begin{figure}
\caption{Example \ref{ex:2}}
\label{fig: ex2 gamma}
\end{figure}
In order to examine this property in more detail, we illustrate the behaviour for different values of $\gamma$ and fixed discount factors $\beta$ in Figure \ref{fig: ex2 gamma}. For $\gamma >1$ and $\beta \lesssim 0.95$ we can observe that the trajectories stay near the initial value and do not move away. In contrast, for $\beta \approx 1$ the trajectories converge to the global equilibrium. Thus, we do not get convergence to the local equilibrium any more.
\end{bsp}
The two examples above have the particular feature that the dynamics is affine and the stage cost $\ell$ is strictly convex in a neighbourhood of the optimal equilibria. In this case, similar arguments as used in the proof of Theorem 4.2 in \cite{GMKW20} show that local strict dissipativity always holds. More precisely, we can restrict the proof of Theorem 4.2 in \cite{GMKW20} to a bounded neighbourhood $\mathbb{X}_\mathcal{N}\subset \mathbb{X}$ of the local equilibrium $x_l^\beta$, e.g., $\mathcal{B}_\varepsilon(x_l^\beta)$, $\varepsilon>0$, instead of $\mathbb{X}$, and to a locally strictly convex stage cost $\ell$. Following the proof, $\mathrm{D}\tilde{\ell}(x_l^\beta,u_l^\beta)=0$ holds, which by the local strict convexity of $\tilde{\ell}$ on $\mathbb{X}_\mathcal{N}$ implies that $(x_l^\beta,u_l^\beta)$ is a strict local minimum. Together with the boundedness of $\mathbb{X}_\mathcal{N}$, this implies the existence of $\alpha_\beta\in\mathcal{K}_\infty$ and thus local discounted strict dissipativity. We remark that the calculation of $\lambda$ is the same as in the global case and yields a linear storage function. In the special case of Example \ref{ex:1} above, it yields the storage function $\lambda\equiv 0$. In conclusion, local strict dissipativity always holds if the dynamics is affine and the stage cost $\ell$ is strictly convex near the locally optimal equilibrium.
With this observation, our dissipativity based analysis provides a complementary approach to the stable manifold based analysis carried out, e.g., in \cite{HKHF03}. Particularly, we can conclude that the model from this reference exhibits two equilibria at which the local turnpike property holds, which explains why the optimal trajectories are correctly reproduced by nonlinear model predictive control as shown in \cite[Section 5.1]{Gruene2015}.
Our final example demonstrates that strict convexity of $\ell$ is not needed for obtaining strict dissipativity, thus showing that a dissipativity based analysis allows for strictly weaker assumptions than strict convexity of $\ell$.
\begin{bsp}\label{ex:nonconvex}
Consider the 1d control system
\[ x^+ = f(x,u) = 2x+u \]
with state constraints $\mathbb{X}=[-1,1]$, control constraints $\mathbb{U}=[-3,3]$, and stage cost
\[ \ell(x,u) = -x^2/2 + u^2.\]
Obviously, the stage cost is strictly concave in $x$ and strictly convex in $u$. Nevertheless, we can establish discounted strict $(x,u)$-dissipativity at $(x^*,u^*)=(0,0)$ (in this example even globally) for $\beta\ge 3/5$ with $\lambda(x) = -x^2$. This follows from the fact that with $a=2\beta/\sqrt{1+\beta}$ and $b=\sqrt{1+\beta}$ we have
\begin{eqnarray*}
\ell(x,u) + \lambda(x) - \beta\lambda(f(x,u)) & = &
-x^2/2 + u^2 - x^2 + \beta (2x+u)^2\\
& = & (4\beta-3/2) x^2 + 4\beta xu + (1+\beta)u^2\\
& = & (a x + b u)^2 + \left(4\beta-\frac{3}{2} - \frac{4\beta^2}{1+\beta}\right)x^2\\
& \ge & (a x + b u)^2,
\end{eqnarray*}
where the last inequality holds since the term in the large brackets is $\ge 0$ for $\beta \ge 3/5$.
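The completion of squares can also be verified symbolically, e.g.\ with the following short \texttt{sympy} sketch (added here for convenience; it is not part of the original computation):
\begin{verbatim}
# Symbolic verification of the completion of squares in Example "ex:nonconvex".
import sympy as sp

x, u = sp.symbols('x u', real=True)
beta = sp.symbols('beta', positive=True)
ell = -x**2 / 2 + u**2
lam = lambda z: -z**2
a, b = 2 * beta / sp.sqrt(1 + beta), sp.sqrt(1 + beta)

expr = sp.expand(ell + lam(x) - beta * lam(2 * x + u) - (a * x + b * u)**2)
coeff = 4 * beta - sp.Rational(3, 2) - 4 * beta**2 / (1 + beta)
print(sp.simplify(expr - coeff * x**2))   # 0: the remainder is exactly coeff * x**2
print(sp.simplify(coeff))                 # single fraction, nonnegative exactly for beta >= 3/5
print(sp.solve(sp.Eq(coeff, 0), beta))    # [3/5]
\end{verbatim}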
Since the system is completely controllable in finite time, hence exponentially stabilizable, Theorem \ref{th: disturninf} in conjunction with Remark \ref{rem:suffcond}(ii) implies that for sufficiently large $\beta$ turnpike behaviour occurs at $x^*=0$. This is confirmed for $\beta=0.7$ in the left graph in Figure \ref{fig: ex nonconvex}. In contrast to this, the right graph in Figure~\ref{fig: ex nonconvex} shows that for $\beta=0.6$ the turnpike behaviour for $x^*=0$ does not occur. Rather, the optimal solution converges to the upper bound $x=1$ of the state constraint set. In this example, the numerical computations indicate that $\beta = 3/5=0.6$ is a relatively precise estimate of the threshold for the occurrence of the turnpike property at $x^*=0$, although for $\beta$ decreasing from $0.7$ to $0.6$ the set of initial values around $x^*=0$ for which the turnpike behaviour can be seen shrinks down rapidly.
\begin{figure}
\caption{Optimal trajectories for Example \ref{ex:nonconvex}}
\label{fig: ex nonconvex}
\end{figure}
\end{bsp}
\section{Conclusion}\label{sec:conclusion}
In this paper we have shown that a local strict dissipativity assumption, in conjunction with an appropriate growth condition on the optimal value function, can be used to conclude a local turnpike property at an optimal equilibrium. The turnpike property holds for discount factors from an interval $[\beta_1,\beta_2]$, where $\beta_1$ is determined by local quantities, while $\beta_2$ is also determined by properties of the optimal control problem away from the local equilibrium. Hence, local and global properties together determine whether this interval is nonempty. This is in accordance with other approaches for analysing local stability of equilibria in discounted optimal control, such as those based on stable and unstable manifolds \cite{HKHF03}. In contrast to other approaches, however, the dissipativity-based approach is not limited to (locally) strictly convex problems, as our last example showed.
\end{document} |